# Lab Assignment 2

The spirit of data science includes exploration, traversing the unknown, and applying a deep understanding of the challenge you're facing. In an academic setting, it's hard to duplicate these tasks, but this lab will attempt to take a few steps away from the traditional, textbook, "plug the equation in" pattern, so you can get a taste of what analyzing data in the real world is all about.

After the September 11 attacks, a series of secret regulations, laws, and processes were enacted, perhaps to better protect the citizens of the United States. These processes continued through President Bush's term and were renewed and strengthened during the Obama administration. Then, on May 24, 2006, the United States Foreign Intelligence Surveillance Court (FISC) made a fundamental shift in its approach to Section 215 of the Patriot Act, permitting the FBI to compel production of "business records" relevant to terrorism investigations, which are shared with the NSA. The court now defined as business records the entirety of a telephone company's call database, also known as Call Detail Records (CDR or metadata).

News of this came to public light after an ex-NSA contractor leaked the information, and a few more questions were raised when it was further discovered that not just the call records of suspected terrorists were being collected in bulk... but perhaps those of all Americans. After all, if you know someone who knows someone who knows someone, your private records are relevant to a terrorism investigation. The White House quickly reassured the public in a press release that "Nobody is listening to your telephone calls," since "that's not what this program is about." The public was greatly relieved.

The questions you'll be exploring in this lab assignment using K-Means are: exactly how useful is telephone metadata?
It must have some use, otherwise the government wouldn't have invested the many millions it did into secretly collecting it from phone carriers. Also, what kind of intelligence can you extract from CDR metadata besides its face value?

You will be using a sample CDR dataset generated for 10 people living in the Dallas, Texas metroplex area. Your task will be to attempt to do what many researchers have already successfully done: partly de-anonymize the CDR data. People generally behave in predictable manners, moving from home to work with a few errands in between. With enough call data, given a few K locations of interest, K-Means should be able to isolate rather easily the geolocations where a person spends most of their time.

Note: to safeguard against doxing anyone, the CDR dataset you'll be using for this assignment was generated using the tools available in the Dive Deeper section. CDRs are at least supposed to be protected by privacy laws, and are the basis for proprietary revenue calculations. In reality, there are quite a few public CDRs out there. Much information can be discerned from them, such as social networks, criminal acts, and, believe it or not, even the spread of diseases, as was demonstrated by the Flowminder Foundation's paper on Ebola.

1. Open up the starter code in /Module5/assignment2.py and read through it all. It's long, so make sure you understand everything that is being asked of you before proceeding.
2. Load up the CDR dataset from /Module5/Datasets/CDR.csv. Do your due diligence to make sure it's been loaded correctly and all the features and rows match up.
3. Pick the first unique user in the list to examine. Follow the steps in the assignment file to approximate where the user lives.
4. Once you have a (Latitude, Longitude) coordinate pair, drop it into Google Maps. Just do a search for the "{Lat, Lon}".
So if your centroid is located at Longitude = -96.949246 and Latitude = 32.953856, then do a maps search for "32.953856, -96.949246".
5. Answer the questions below.

```
# Imports
import pandas as pd
import numpy as np
from sklearn.cluster import KMeans
import matplotlib
import matplotlib.pyplot as plt

matplotlib.style.use('ggplot')  # Look Pretty
%matplotlib notebook

def showandtell(title=None):
    if title is not None:
        plt.savefig(title + ".png", bbox_inches='tight', dpi=300)
    plt.show()

# INFO: This dataset has call records for 10 users tracked over the course of 3 years.
# Your job is to find out where the users likely live and work!

# TODO: Load up the dataset and take a peek at its head.
# Convert the date using pd.to_datetime, and the time using pd.to_timedelta.
dataFile = r'C:\Users\ng35019\Documents\Training\python_for_ds\Module5Clustering\Datasets\CDR.csv'
df = pd.read_csv(dataFile)
df.CallDate = pd.to_datetime(df.CallDate)
df.CallTime = pd.to_timedelta(df.CallTime)
df

# TODO: Get a distinct list of "In" phone numbers (users) and store the values in a
# regular python list.
# Hint: https://docs.scipy.org/doc/numpy/reference/generated/numpy.ndarray.tolist.html
In = df.In.unique().tolist()
In

# TODO: Create a slice called user1 that filters to only include dataset records where the
# "In" feature (user phone number) is equal to the first number on your unique list above.
user1 = df[df.In == In[0]]
user1

# INFO: Plot all the call locations
user1.plot.scatter(x='TowerLon', y='TowerLat', c='gray', alpha=0.1, title='Call Locations')
showandtell()  # Comment this line out when you're ready to proceed

# On weekend days:
#   1. People probably don't go into work.
#   2. They probably sleep in late on Saturday.
#   3. They probably run a bunch of random errands, since they couldn't during the week.
#   4. They should be home, at least during the very late hours, e.g. 1-4 AM.
we = user1[(user1.DOW == 'Sat') | (user1.DOW == 'Sun')]
# Keep only calls before 6am or after 10pm. Note: the original cutoff of '10:00:00'
# is a 10-hour timedelta (10am), which contradicted the plot title below.
we = we[(we.CallTime < '06:00:00') | (we.CallTime > '22:00:00')]
we

fig = plt.figure()
ax = fig.add_subplot(111)
ax.scatter(we.TowerLon, we.TowerLat, c='g', marker='o', alpha=0.2)
ax.set_title('Weekend Calls (<6am or >10p)')
showandtell()  # TODO: Comment this line out when you're ready to proceed

# On weekdays:
#   1. People are probably at work during normal working hours.
#   2. They are probably at home in the early morning and during the late night.
#   3. They probably spend time commuting between work and home every day.
wd = user1[(user1.DOW != 'Sat') & (user1.DOW != 'Sun')]
wd = wd[(wd.CallTime < '07:00:00') | (wd.CallTime > '20:00:00')]
wd
wd.plot.scatter(x='TowerLon', y='TowerLat', alpha=0.1, title='Call Locations')
showandtell()

# Join both frames (the weekday rows are currently left out; uncomment to include them):
# df = we[['TowerLat', 'TowerLon']].append(wd[['TowerLat', 'TowerLon']])
df = we[['TowerLat', 'TowerLon']]
df.plot.scatter(x='TowerLon', y='TowerLat', alpha=0.1, title='Call Locations')
showandtell()

# TODO: Use K-Means to find cluster centers in this dataframe.
# n_clusters=2 targets the two locations of interest (home and work); the original
# comments contradicted each other (K=1 vs. seven clusters vs. n_clusters=2).
kmeans_model = KMeans(n_clusters=2)
kmeans_model.fit(df)

# INFO: Print and plot the centroids...
centroids = kmeans_model.cluster_centers_
fig = plt.figure()
ax = fig.add_subplot(111)
ax.set_title('Weekend Calls (<6am or >10p) and Weekday Calls (<7am or >8pm)')
ax.scatter(df.TowerLon, df.TowerLat, c='g', marker='o', alpha=0.2)
ax.scatter(centroids[:, 1], centroids[:, 0], marker='x', c='red', alpha=0.5,
           linewidths=10, s=169)
showandtell()
print(centroids)
```
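As a sanity check for step 4, the centroid printed above can be turned into the exact "{Lat, Lon}" string that Google Maps expects (latitude first). A minimal sketch; the `to_maps_query` helper is hypothetical and not part of the starter code:

```python
import numpy as np

# Hypothetical example centroid in (Lat, Lon) column order, matching the
# layout of kmeans_model.cluster_centers_ above.
centroids = np.array([[32.953856, -96.949246]])

def to_maps_query(centroid):
    """Format a (lat, lon) pair as a Google Maps search string."""
    lat, lon = centroid
    return "%.6f, %.6f" % (lat, lon)

print(to_maps_query(centroids[0]))  # 32.953856, -96.949246
```

Pasting the resulting string into the Google Maps search box drops a pin on the approximated home location.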
```
# Load libraries
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn.metrics import accuracy_score, precision_score, recall_score, confusion_matrix
import warnings

warnings.filterwarnings("ignore")  # ignore any warnings

dataset = pd.read_csv('loan_data_set.csv')
print(dataset['Loan_Status'].value_counts())
dataset.describe()
dataset

sns.countplot(x='Loan_Status', data=dataset, palette='hls')
sns.set(rc={'axes.facecolor': '#f8f9fa', 'figure.facecolor': '#f8f9fa'})
plt.show()

dataset = pd.read_csv('loan_data_set.csv')
dataset.dtypes
print(dataset.columns[dataset.isnull().any()].tolist())
missing_values = dataset.isnull()
missing_values

# Heatmap of missing data values
sns.heatmap(data=missing_values, yticklabels=False, cbar=False, cmap='viridis')

# Compare loan approval vs. rejection by educational background
sns.countplot(x='Loan_Status', data=dataset, hue='Education')

from sklearn.pipeline import Pipeline
from sklearn.impute import SimpleImputer
from sklearn.preprocessing import LabelEncoder
from sklearn import model_selection
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier, AdaBoostClassifier, GradientBoostingClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC, LinearSVC, NuSVC
from sklearn.tree import DecisionTreeClassifier

# Encode the categorical features
var_mod = ['Gender', 'Married', 'Dependents', 'Education', 'Self_Employed', 'Property_Area', 'Loan_Status']
le = LabelEncoder()
for i in var_mod:
    dataset[i] = le.fit_transform(dataset[i].astype(str))

# Split the dataset into features and labels
X = pd.DataFrame(dataset.iloc[:, 1:-1])               # exclude Loan_ID
y = pd.DataFrame(dataset.iloc[:, -1]).values.ravel()  # just labels

# Impute missing values for the features
imputer = SimpleImputer(strategy="mean")
imputer = imputer.fit(X)
X = imputer.transform(X)

# Split the dataset into train and test sets
x_train, x_test, y_train, y_test = model_selection.train_test_split(X, y, test_size=0.2, random_state=7)

# Logistic Regression
logistic_reg_model = LogisticRegression(solver='liblinear')
logistic_reg_model.fit(x_train, y_train)
y_pred = logistic_reg_model.predict(x_test)
print("Accuracy:", accuracy_score(y_test, y_pred))
print("Precision:", precision_score(y_test, y_pred))
print("Recall:", recall_score(y_test, y_pred))
y_single = logistic_reg_model.predict(x_test[0].reshape(1, -1))

# Confusion matrix
cnf_matrix = confusion_matrix(y_test, y_pred)
class_names = [0, 1]  # names of the classes
fig, ax = plt.subplots()
tick_marks = np.arange(len(class_names))
plt.xticks(tick_marks, class_names)
plt.yticks(tick_marks, class_names)
# Create the heatmap
sns.heatmap(pd.DataFrame(cnf_matrix), annot=True, cmap="YlGnBu", fmt='g')
ax.xaxis.set_label_position("top")
plt.tight_layout()
plt.title('Confusion matrix', y=1.1)
plt.ylabel('Actual label')
plt.xlabel('Predicted label')

model_decision_tree = DecisionTreeClassifier()
model_decision_tree.fit(x_train, y_train)
predictions = model_decision_tree.predict(x_test)
print(accuracy_score(y_test, predictions))

model = RandomForestClassifier(n_estimators=100)
model.fit(x_train, y_train)
predictions = model.predict(x_test)
print(accuracy_score(y_test, predictions))

model = KNeighborsClassifier(n_neighbors=9)
model.fit(x_train, y_train)
predictions = model.predict(x_test)
print(accuracy_score(y_test, predictions))

model = SVC(gamma='scale', kernel='rbf')
model.fit(x_train, y_train)
predictions = model.predict(x_test)
print(accuracy_score(y_test, predictions))

dataset_test = dataset
features = ['LP001486', 'Male', 'Yes', '1', 'Not Graduate', 'No', 4583, 1508, 128, 360, 1, 'Rural', 'N']
new_customer = pd.DataFrame({
    'Loan_ID': [features[0]],
    'Gender': [features[1]],
    'Married': [features[2]],
    'Dependents': [features[3]],
    'Education': [features[4]],
    'Self_Employed': [features[5]],  # fixed: the original was missing this comma
    'ApplicantIncome': [features[6]],
    'CoapplicantIncome': [features[7]],
    'LoanAmount': [features[8]],
    'Loan_Amount_Term': [features[9]],
    'Credit_History': [features[10]],
    'Property_Area': [features[11]],
    'Loan_Status': [features[12]],
})

# Append the new single input to the end of the dataset
dataset_test = dataset_test.append(new_customer)

# Encode the categorical features
var_mod = ['Gender', 'Married', 'Dependents', 'Education', 'Self_Employed', 'Property_Area', 'Loan_Status']
le = LabelEncoder()
for i in var_mod:
    dataset_test[i] = le.fit_transform(dataset_test[i].astype(str))

# Extract the encoded user input from the encoded dataset
user = dataset_test[-1:]                 # last row contains the user data
user = pd.DataFrame(user.iloc[:, 1:-1])  # exclude ID
user.values

# Test the encoded input on the decision tree model
y_single = model_decision_tree.predict(user.values)
print(y_single[0])

import pandas as pd
from sklearn import model_selection
from sklearn.pipeline import Pipeline
from sklearn.impute import SimpleImputer
from sklearn.preprocessing import MinMaxScaler, OneHotEncoder, LabelEncoder
from sklearn.compose import ColumnTransformer
from sklearn.metrics import accuracy_score, precision_score, recall_score
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier, AdaBoostClassifier, GradientBoostingClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC, LinearSVC, NuSVC
from sklearn.tree import DecisionTreeClassifier

dataset_1 = pd.read_csv('loan_data_set.csv')
dataset_1 = dataset_1.drop('Loan_ID', axis=1)
X = dataset_1.drop('Loan_Status', axis=1)
y = dataset_1['Loan_Status']
x_train, x_test, y_train, y_test = model_selection.train_test_split(X, y, test_size=0.2, random_state=7)

# numeric_features = ['ApplicantIncome', 'CoapplicantIncome', 'LoanAmount', 'Loan_Amount_Term']
numeric_features = dataset_1.select_dtypes(include=['int64', 'float64']).columns
numeric_features_steps = [('imputer', SimpleImputer(strategy='median')),
                          ('scaler', MinMaxScaler())]
numeric_transformer = Pipeline(steps=numeric_features_steps)

# categorical_features = ['Gender', 'Married', 'Dependents', 'Education', 'Self_Employed', 'Property_Area']
categorical_features = dataset_1.select_dtypes(include=['object']).drop(['Loan_Status'], axis=1).columns
categorical_features_steps = [('imputer', SimpleImputer(strategy='constant', fill_value='missing')),
                              ('onehot', OneHotEncoder())]
categorical_transformer = Pipeline(steps=categorical_features_steps)

preprocessor = ColumnTransformer(
    remainder='passthrough',
    transformers=[
        ('numeric', numeric_transformer, numeric_features),
        ('categorical', categorical_transformer, categorical_features)
    ])

classifiers = {
    'K-Nearest Neighbour': KNeighborsClassifier(9),
    'Logistic Regression (solver=liblinear)': LogisticRegression(solver='liblinear'),
    'Support Vector Machine (gamma=auto, kernel=rbf)': SVC(gamma='auto', kernel='rbf'),
    'Support Vector Machine (kernel="rbf", C=0.025, probability=True)': SVC(gamma='auto', kernel="rbf", C=0.025, probability=True),
    'Nu Support Vector Machine (probability=True)': NuSVC(gamma='auto', probability=True),
    'DecisionTreeClassifier': DecisionTreeClassifier(),
    'Random Forest Classifier': RandomForestClassifier(n_estimators=100),
    'AdaBoost Classifier': AdaBoostClassifier(),
    'Gradient Boosting Classifier': GradientBoostingClassifier()
}

pred_models = []
for name, classifier in classifiers.items():
    pipe = Pipeline(steps=[('preprocessor', preprocessor), ('classifier', classifier)])
    pipe.fit(x_train, y_train)
    pred_models.append(pipe)

y_pred = pred_models[1].predict(x_test)
# print("Accuracy: %.4f" % pred_models[1].score(x_test, y_test))
x_test

for name, classifier in classifiers.items():
    pipe = Pipeline(steps=[('preprocessor', preprocessor), ('classifier', classifier)])
    pipe.fit(x_train, y_train)
    y_pred = pipe.predict(x_test)
    print("Classifier: ", name)
    print("Accuracy: %.4f" % pipe.score(x_test, y_test))

features = ['LP001486', 'Male', 'Yes', '1', 'Graduate', 'No', 5483, 1508, 128, 360, 0, 'Urban', 'N']
new_customer = pd.DataFrame({
    'Gender': [features[1]],
    'Married': [features[2]],
    'Dependents': [features[3]],
    'Education': [features[4]],
    'Self_Employed': [features[5]],
    'ApplicantIncome': [features[6]],
    'CoapplicantIncome': [features[7]],
    'LoanAmount': [features[8]],
    'Loan_Amount_Term': [features[9]],
    'Credit_History': [features[10]],
    'Property_Area': [features[11]],
})

y_pred_single = pred_models[1].predict(new_customer)
# print(y_pred_single[0])
if y_pred_single[0] == 'Y':
    print('Yes, you\'re eligible')
else:
    print('Sorry, you\'re not eligible')
```
```
import sys
sys.path.append('C:\\Users\\dell-pc\\Desktop\\大四上\\Computer_Vision\\CNN')
from data import *
from network import three_layer_cnn
import numpy as np

# data
train_data, test_data = loaddata()
print(train_data.keys())
print("Number of train items: %d" % len(train_data['images']))
print("Number of test items: %d" % len(test_data['labels']))
print("Edge length of picture: %f" % np.sqrt(len(train_data['images'][0])))
Class = set(train_data['labels'])
print("Total classes: ", Class)

# reshape a list of flat images into (N, 1, 28, 28)
def imageC(data_list):
    data = np.array(data_list).reshape(len(data_list), 1, 28, 28)
    return data

data = imageC(train_data['images'][0:3])
print(np.shape(data))

# test
def test(cnn, test_batchSize):
    test_pred = []
    for i in range(int(len(test_data['images']) / test_batchSize)):
        out = cnn.inference(imageC(test_data['images'][i*test_batchSize:(i+1)*test_batchSize]))
        y = np.array(test_data['labels'][i*test_batchSize:(i+1)*test_batchSize])
        loss, pred = cnn.svm_loss(out, y, mode='test')
        test_pred.extend(pred)
    # accuracy
    count = 0
    for i in range(len(test_pred)):
        if test_pred[i] == test_data['labels'][i]:
            count += 1
    acc = count / len(test_pred)
    return acc, loss

# train
print('Begin training ...')
cnn = three_layer_cnn()
cnn.initial()
epoch = 3
batchSize = 30
train_loss = []
train_acc = []
test_loss = []
test_acc = []
for i in range(epoch):
    for j in range(int(len(train_data['images']) / batchSize)):
        data = imageC(train_data['images'][j*batchSize:(j+1)*batchSize])
        label = np.array(train_data['labels'][j*batchSize:(j+1)*batchSize])
        output = cnn.forward(data)
        loss1, pred = cnn.svm_loss(output, label)
        train_loss.append(loss1)
        if j % 200 == 0:  # train accuracy
            count = 0
            for k in range(batchSize):
                if pred[k] == label[k]:
                    count += 1
            acc1 = count / batchSize
            train_acc.append(acc1)
        cnn.backward()
        if j % 200 == 0:  # test
            acc2, loss2 = test(cnn, 10)
            test_loss.append(loss2)
            test_acc.append(acc2)
            print('Epoch: %d; Item: %d; Train loss: %f; Test loss: %f; Train acc: %f; Test acc: %f'
                  % (i, (j + 1) * batchSize, loss1, loss2, acc1, acc2))
print('End training!')

# final test
acc, loss = test(cnn, 10)
print('Accuracy for 3-layer convolutional neural network: %f' % acc)

# plot
import matplotlib.pyplot as plt

plt.subplot(2, 1, 1)
plt.title('Training loss (Batch Size: 30)')
plt.xlabel('Iteration')
plt.plot(train_loss, 'o')
plt.subplot(2, 1, 2)
plt.title('Accuracy')
plt.xlabel('Iteration (x200)')
plt.plot(train_acc, '-o', label='train')
plt.plot(test_acc, '-o', label='test')
plt.legend(loc='upper right', ncol=1)
plt.gcf().set_size_inches(15, 15)
plt.show()
```
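The implementation of `cnn.svm_loss` lives in the local `network` module and is not shown above. For reference, a common multiclass hinge (SVM) loss over raw class scores can be sketched in plain NumPy as below. This is the generic textbook formulation, not necessarily the course network's exact implementation:

```python
import numpy as np

def svm_loss(scores, y, delta=1.0):
    """Multiclass hinge loss.

    scores: (N, C) array of raw class scores, y: (N,) integer labels.
    Returns the mean loss over the batch and the predicted class per sample.
    """
    n = scores.shape[0]
    correct = scores[np.arange(n), y][:, None]         # score of the true class
    margins = np.maximum(0, scores - correct + delta)  # hinge margins vs. each class
    margins[np.arange(n), y] = 0                       # true class contributes no loss
    loss = margins.sum() / n
    pred = scores.argmax(axis=1)
    return loss, pred

# Both samples score their true class at least `delta` above the rest,
# so the loss is zero and the predictions match the labels.
scores = np.array([[3.0, 1.0, 0.2],
                   [1.0, 4.0, 2.5]])
y = np.array([0, 1])
loss, pred = svm_loss(scores, y)  # loss 0.0, pred [0, 1]
```

The `(loss, pred)` return shape mirrors how the notebook consumes `cnn.svm_loss` in its training and test loops.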
**Tools - pandas**

*The `pandas` library provides high-performance, easy-to-use data structures and data analysis tools. The main data structure is the `DataFrame`, which you can think of as an in-memory 2D table (like a spreadsheet, with column names and row labels). Many features available in Excel are available programmatically, such as creating pivot tables, computing columns based on other columns, plotting graphs, etc. You can also group rows by column value, or join tables much like in SQL. Pandas is also great at handling time series.*

Prerequisites:
* NumPy – if you are not familiar with NumPy, we recommend that you go through the [NumPy tutorial](tools_numpy.ipynb) now.

# Setup

First, let's make sure this notebook works well in both python 2 and 3:

```
from __future__ import division, print_function, unicode_literals
```

Now let's import `pandas`. People usually import it as `pd`:

```
# import pandas as pd

# Practice
import pandas as pd
```

# `Series` objects

The `pandas` library contains these useful data structures:
* `Series` objects, that we will discuss now. A `Series` object is a 1D array, similar to a column in a spreadsheet (with a column name and row labels).
* `DataFrame` objects. This is a 2D table, similar to a spreadsheet (with column names and row labels).
* `Panel` objects. You can see a `Panel` as a dictionary of `DataFrame`s. These are less used, so we will not discuss them here.

## Creating a `Series`

Let's start by creating our first `Series` object!

```
# s = pd.Series([2,-1,3,5])
# s

# Practice
s = pd.Series([2, -1, 3, 5])
s
```

## Similar to a 1D `ndarray`

`Series` objects behave much like one-dimensional NumPy `ndarray`s, and you can often pass them as parameters to NumPy functions:

```
# import numpy as np
# np.exp(s)

# Practice
import numpy as np
np.exp(s)
```

Arithmetic operations on `Series` are also possible, and they apply *elementwise*, just like for `ndarray`s:

```
# s + [1000,2000,3000,4000]

s + [1000, 2000, 3000, 4000]
```

Similar to NumPy, if you add a single number to a `Series`, that number is added to all items in the `Series`. This is called *broadcasting*:

```
# s + 1000

# Practice
s + 1000
```

The same is true for all binary operations such as `*` or `/`, and even conditional operations:

```
# s < 0

# Practice
print(s * 2)
print(s < 0)
```

## Index labels

Each item in a `Series` object has a unique identifier called the *index label*. By default, it is simply the rank of the item in the `Series` (starting at `0`), but you can also set the index labels manually:

```
# s2 = pd.Series([68, 83, 112, 68], index=["alice", "bob", "charles", "darwin"])
# s2

# Practice
s2 = pd.Series([68, 83, 112, 68], index=["alice", "bob", "charles", "darwin"])
s2
```

You can then use the `Series` just like a `dict`:

```
# s2["bob"]

# Practice
s2["charles"]
```

You can still access the items by integer location, like in a regular array:

```
# s2[1]

# Practice
s2[2]
```

To make it clear when you are accessing by label or by integer location, it is recommended to always use the `loc` attribute when accessing by label, and the `iloc` attribute when accessing by integer location:

My Notes: Gets the location of the index label

```
# s2.loc["bob"]

# Practice
s2.loc["alice"]

# s2.iloc[1]

# Practice
s2.iloc[3]
```

Slicing a `Series` also slices the index labels:

```
# s2.iloc[1:3]

# Practice
s2.iloc[2:3]
```

This can lead to unexpected results when using the default numeric labels, so be careful:

```
# surprise = pd.Series([1000, 1001, 1002, 1003])
# surprise

# Practice
surprise = pd.Series([1000, 1001, 1002, 1003])
surprise

# surprise_slice = surprise[2:]
# surprise_slice

# Practice
surprise_slice = surprise[2:]
surprise_slice
```

Oh look! The first element has index label `2`. The element with index label `0` is absent from the slice:

```
# try:
#     surprise_slice[0]
# except KeyError as e:
#     print("Key error:", e)

# Practice
try:
    surprise_slice[0]
except KeyError as e:
    print("Key error:", e)
```

But remember that you can access elements by integer location using the `iloc` attribute. This illustrates another reason why it's always better to use `loc` and `iloc` to access `Series` objects:

```
# surprise_slice.iloc[0]

# Practice
surprise_slice.iloc[1]
```

## Init from `dict`

You can create a `Series` object from a `dict`. The keys will be used as index labels:

```
# weights = {"alice": 68, "bob": 83, "colin": 86, "darwin": 68}
# s3 = pd.Series(weights)
# s3

# Practice
weights = {"alice": 68, "bob": 83, "colin": 86, "darwin": 68}
s3 = pd.Series(weights)
s3
```

You can control which elements you want to include in the `Series` and in what order by explicitly specifying the desired `index`:

```
# s4 = pd.Series(weights, index = ["colin", "alice"])
# s4

# Practice
s4 = pd.Series(weights, index=["colin", "alice"])
s4
```

## Automatic alignment

When an operation involves multiple `Series` objects, `pandas` automatically aligns items by matching index labels.

My Notes:

```
s2
s3

# print(s2.keys())
# print(s3.keys())
# s2 + s3

# Practice
print(s2.keys())
print(s3.keys())
s2 + s3
```

The resulting `Series` contains the union of index labels from `s2` and `s3`. Since `"colin"` is missing from `s2` and `"charles"` is missing from `s3`, these items have a `NaN` result value (ie. Not-a-Number means *missing*). Automatic alignment is very handy when working with data that may come from various sources with varying structure and missing items. But if you forget to set the right index labels, you can have surprising results:

```
# s5 = pd.Series([1000,1000,1000,1000])
# print("s2 =", s2.values)
# print("s5 =", s5.values)
# s2 + s5

# Practice
s5 = pd.Series([1000, 1000, 1000, 1000])
print("s2 =", s2.values)
print("s5 =", s5.values)
s2 + s5  # My Notes: cannot align, as they do not share any index labels
```

Pandas could not align the `Series`, since their labels do not match at all, hence the full `NaN` result.

## Init with a scalar

You can also initialize a `Series` object using a scalar and a list of index labels: all items will be set to the scalar.

```
# meaning = pd.Series(42, ["life", "universe", "everything"])
# meaning

# Practice
meaning = pd.Series(42, ["life", "universe", "everything"])
meaning
```

## `Series` name

A `Series` can have a `name`:

```
# s6 = pd.Series([83, 68], index=["bob", "alice"], name="weights")
# s6

# Practice
s6 = pd.Series([83, 68], index=["bob", "alice"], name="weights")
s6
```

## Plotting a `Series`

Pandas makes it easy to plot `Series` data using matplotlib (for more details on matplotlib, check out the [matplotlib tutorial](tools_matplotlib.ipynb)). Just import matplotlib and call the `plot()` method:

```
# %matplotlib inline
# import matplotlib.pyplot as plt
# temperatures = [4.4,5.1,6.1,6.2,6.1,6.1,5.7,5.2,4.7,4.1,3.9,3.5]
# s7 = pd.Series(temperatures, name="Temperature")
# s7.plot()
# plt.show()

# Practice
%matplotlib inline
import matplotlib.pyplot as plt
temperatures = [4.4, 5.1, 6.1, 6.2, 6.1, 6.1, 5.7, 5.2, 4.7, 4.1, 3.9, 3.5]
s7 = pd.Series(temperatures, name="Temperature")
s7.plot()
plt.show()
```

There are *many* options for plotting your data. It is not necessary to list them all here: if you need a particular type of plot (histograms, pie charts, etc.), just look for it in the excellent [Visualization](http://pandas.pydata.org/pandas-docs/stable/visualization.html) section of pandas' documentation, and look at the example code.
# Handling time Many datasets have timestamps, and pandas is awesome at manipulating such data: * it can represent periods (such as 2016Q3) and frequencies (such as "monthly"), * it can convert periods to actual timestamps, and *vice versa*, * it can resample data and aggregate values any way you like, * it can handle timezones. ## Time range Let's start by creating a time series using `pd.date_range()`. This returns a `DatetimeIndex` containing one datetime per hour for 12 hours starting on October 29th 2016 at 5:30pm. ``` # dates = pd.date_range('2016/10/29 5:30pm', periods=12, freq='H') # dates # Practice dates = pd.date_range('2016/10/29 5:30pm', periods=12, freq='H') dates ``` This `DatetimeIndex` may be used as an index in a `Series`: ``` # temp_series = pd.Series(temperatures, dates) # temp_series # Practice temp_series = pd.Series(temperatures, index=dates) temp_series ``` Let's plot this series: ``` # temp_series.plot(kind="bar") # plt.grid(True) # plt.show() # Practice temp_series.plot(kind="bar") plt.grid(True) plt.show() ``` ## Resampling Pandas lets us resample a time series very simply. Just call the `resample()` method and specify a new frequency: ``` # temp_series_freq_2H = temp_series.resample("2H") # temp_series_freq_2H # Practice temp_series_freq_2H = temp_series.resample("2H") temp_series_freq_2H ``` The resampling operation is actually a deferred operation, which is why we did not get a `Series` object, but a `DatetimeIndexResampler` object instead. To actually perform the resampling operation, we can simply call the `mean()` method: Pandas will compute the mean of every pair of consecutive hours: ``` # temp_series_freq_2H = temp_series_freq_2H.mean() # Practice temp_series_freq_2H = temp_series_freq_2H.mean() ``` Let's plot the result: ``` # temp_series_freq_2H.plot(kind="bar") # plt.show() # Practice temp_series_freq_2H.plot(kind="bar") plt.grid(True) plt.show() ``` Note how the values have automatically been aggregated into 2-hour periods. 
If we look at the 6-8pm period, for example, we had a value of `5.1` at 6:30pm, and `6.1` at 7:30pm. After resampling, we just have one value of `5.6`, which is the mean of `5.1` and `6.1`. Rather than computing the mean, we could have used any other aggregation function, for example we can decide to keep the minimum value of each period: ``` # temp_series_freq_2H = temp_series.resample("2H").min() # temp_series_freq_2H # Practice temp_series_freq_2H = temp_series.resample("2H").min() temp_series_freq_2H ``` Or, equivalently, we could use the `apply()` method instead: ``` # temp_series_freq_2H = temp_series.resample("2H").apply(np.min) # temp_series_freq_2H # Practice temp_series_freq_2H = temp_series.resample("2H").apply(np.min) temp_series_freq_2H ``` ## Upsampling and interpolation This was an example of downsampling. We can also upsample (ie. increase the frequency), but this creates holes in our data: ``` temp_series_freq_15min = temp_series.resample("15Min").mean() temp_series_freq_15min.head(n=10) # `head` displays the top n values ``` One solution is to fill the gaps by interpolating. We just call the `interpolate()` method. The default is to use linear interpolation, but we can also select another method, such as cubic interpolation: ``` temp_series_freq_15min = temp_series.resample("15Min").interpolate(method="cubic") temp_series_freq_15min.head(n=10) temp_series.plot(label="Period: 1 hour") temp_series_freq_15min.plot(label="Period: 15 minutes") plt.legend() plt.show() ``` ## Timezones By default datetimes are *naive*: they are not aware of timezones, so 2016-10-30 02:30 might mean October 30th 2016 at 2:30am in Paris or in New York. We can make datetimes timezone *aware* by calling the `tz_localize()` method: ``` temp_series_ny = temp_series.tz_localize("America/New_York") temp_series_ny ``` Note that `-04:00` is now appended to all the datetimes. 
This means that these datetimes refer to [UTC](https://en.wikipedia.org/wiki/Coordinated_Universal_Time) - 4 hours. We can convert these datetimes to Paris time like this: ``` temp_series_paris = temp_series_ny.tz_convert("Europe/Paris") temp_series_paris ``` You may have noticed that the UTC offset changes from `+02:00` to `+01:00`: this is because France switches to winter time at 3am that particular night (time goes back to 2am). Notice that 2:30am occurs twice! Let's go back to a naive representation (if you log some data hourly using local time, without storing the timezone, you might get something like this): ``` temp_series_paris_naive = temp_series_paris.tz_localize(None) temp_series_paris_naive ``` Now `02:30` is really ambiguous. If we try to localize these naive datetimes to the Paris timezone, we get an error: ``` try: temp_series_paris_naive.tz_localize("Europe/Paris") except Exception as e: print(type(e)) print(e) ``` Fortunately using the `ambiguous` argument we can tell pandas to infer the right DST (Daylight Saving Time) based on the order of the ambiguous timestamps: ``` temp_series_paris_naive.tz_localize("Europe/Paris", ambiguous="infer") ``` ## Periods The `pd.period_range()` function returns a `PeriodIndex` instead of a `DatetimeIndex`. For example, let's get all quarters in 2016 and 2017: ``` # quarters = pd.period_range('2016Q1', periods=8, freq='Q') # quarters # Practice quarters = pd.period_range('2016Q1', periods=8, freq='Q') quarters ``` Adding a number `N` to a `PeriodIndex` shifts the periods by `N` times the `PeriodIndex`'s frequency: ``` # quarters + 3 # Practice quarters + 3 ``` The `asfreq()` method lets us change the frequency of the `PeriodIndex`. All periods are lengthened or shortened accordingly. For example, let's convert all the quarterly periods to monthly periods (zooming in): ``` # quarters.asfreq("M") # Practice quarters.asfreq("M") ``` By default, the `asfreq` zooms on the end of each period. 
We can tell it to zoom on the start of each period instead: ``` # quarters.asfreq("M", how="start") # Practice quarters.asfreq("M", how="start") ``` And we can zoom out: ``` # quarters.asfreq("A") # Practice quarters.asfreq("A") ``` Of course we can create a `Series` with a `PeriodIndex`: ``` # quarterly_revenue = pd.Series([300, 320, 290, 390, 320, 360, 310, 410], index = quarters) # quarterly_revenue # Practice quarterly_revenue = pd.Series([300, 320, 290, 390, 320, 360, 310, 410], index = quarters) quarterly_revenue # quarterly_revenue.plot(kind="line") # plt.show() # Practice quarterly_revenue.plot(kind="line") plt.show() ``` We can convert periods to timestamps by calling `to_timestamp`. By default this will give us the first day of each period, but by setting `how` and `freq`, we can get the last hour of each period: ``` # last_hours = quarterly_revenue.to_timestamp(how="end", freq="H") # last_hours # Practice last_hours = quarterly_revenue.to_timestamp(how="end", freq="H") last_hours ``` And back to periods by calling `to_period`: ``` # last_hours.to_period() # Practice last_hours.to_period() ``` Pandas also provides many other time-related functions that we recommend you check out in the [documentation](http://pandas.pydata.org/pandas-docs/stable/timeseries.html). 
To whet your appetite, here is one way to get the last business day of each month in 2016, at 9am: ``` # months_2016 = pd.period_range("2016", periods=12, freq="M") # one_day_after_last_days = months_2016.asfreq("D") + 1 # last_bdays = one_day_after_last_days.to_timestamp() - pd.tseries.offsets.BDay() # last_bdays.to_period("H") + 9 ``` Practice ``` months_2016 = pd.period_range("2016", periods=12, freq="M") months_2016 one_day_after_last_days = months_2016.asfreq("D") + 1 one_day_after_last_days last_bdays = one_day_after_last_days.to_timestamp() last_bdays pd.tseries.offsets.BDay() last_bdays = one_day_after_last_days.to_timestamp() - pd.tseries.offsets.BDay() # My Notes: Last Business days last_bdays ``` My Notes: Adding 9:00am to the period ``` last_bdays.to_period("H") + 9 ``` # `DataFrame` objects A DataFrame object represents a spreadsheet, with cell values, column names and row index labels. You can define expressions to compute columns based on other columns, create pivot-tables, group rows, draw graphs, etc. You can see `DataFrame`s as dictionaries of `Series`. 
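The "dictionary of `Series`" view can be checked directly. Here is a minimal sketch (the frame and column names are just illustrative):

```python
import pandas as pd

df = pd.DataFrame({
    "weight": pd.Series([68, 83]),
    "height": pd.Series([172, 181]),
})
# The column labels act as the dictionary "keys"...
assert list(df.columns) == ["weight", "height"]
# ...and each "value" is a Series sharing the DataFrame's row index:
assert isinstance(df["weight"], pd.Series)
assert (df["weight"].index == df.index).all()
# Membership tests check the columns, just like a dict's keys:
assert "height" in df and "hobby" not in df
```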
## Creating a `DataFrame` You can create a DataFrame by passing a dictionary of `Series` objects: ``` # people_dict = { # "weight": pd.Series([68, 83, 112], index=["alice", "bob", "charles"]), # "birthyear": pd.Series([1984, 1985, 1992], index=["bob", "alice", "charles"], name="year"), # "children": pd.Series([0, 3], index=["charles", "bob"]), # "hobby": pd.Series(["Biking", "Dancing"], index=["alice", "bob"]), # } # people = pd.DataFrame(people_dict) # people # Practice people_dict = { "weight": pd.Series([68, 83, 112], index=["alice", "bob", "charles"]), "birthyear": pd.Series([1984, 1985, 1992], index=["bob", "alice", "charles"], name="year"), "children": pd.Series([0, 3], index=["charles", "bob"]), "hobby": pd.Series(["Biking", "Dancing"], index=["alice", "bob"]), } people = pd.DataFrame(people_dict) people ``` A few things to note: * the `Series` were automatically aligned based on their index, * missing values are represented as `NaN`, * `Series` names are ignored (the name `"year"` was dropped), * `DataFrame`s are displayed nicely in Jupyter notebooks, woohoo! You can access columns pretty much as you would expect. They are returned as `Series` objects: ``` # people["birthyear"] # Practice people["birthyear"] # Practice people.loc["bob"] ``` You can also get multiple columns at once: ``` # people[["birthyear", "hobby"]] # Practice people[["birthyear", "hobby", "children"]] ``` If you pass a list of columns and/or index row labels to the `DataFrame` constructor, it will guarantee that these columns and/or rows will exist, in that order, and no other column/row will exist. 
For example: ``` # d2 = pd.DataFrame( # people_dict, # columns=["birthyear", "weight", "height"], # index=["bob", "alice", "eugene"] # ) # d2 # Practice d2 = pd.DataFrame( people_dict, columns=["birthyear", "weight", "height"], index = ["bob", "alice", "eugene", "martin"] ) d2 ``` Another convenient way to create a `DataFrame` is to pass all the values to the constructor as an `ndarray`, or a list of lists, and specify the column names and row index labels separately: ``` # values = [ # [1985, np.nan, "Biking", 68], # [1984, 3, "Dancing", 83], # [1992, 0, np.nan, 112] # ] # d3 = pd.DataFrame( # values, # columns=["birthyear", "children", "hobby", "weight"], # index=["alice", "bob", "charles"] # ) # d3 # Practice values = [ [1985, np.nan, "Biking", 68], [1984, 3, "Dancing", 83], [1992, 0, np.nan, 112] ] d3 = pd.DataFrame( values, columns = ["birthyear", "children", "hobby", "weight"], index = ["alice", "bob", "charles"] ) d3 ``` To specify missing values, you can either use `np.nan` or NumPy's masked arrays: ``` # masked_array = np.ma.asarray(values, dtype=object) # masked_array[(0, 2), (1, 2)] = np.ma.masked # d3 = pd.DataFrame( # masked_array, # columns=["birthyear", "children", "hobby", "weight"], # index=["alice", "bob", "charles"] # ) # d3 # Practice masked_array = np.ma.asarray(values, dtype=object) # note: np.object was deprecated and later removed from NumPy; use the builtin object masked_array[(0, 2), (1, 2)] = np.ma.masked d3 = pd.DataFrame( masked_array, columns = ["birthyear", "children", "hobby", "weight"], index=["alice", "bob", "charles"] ) d3 # Practice masked_array[(0, 2), (1, 2)] = np.ma.masked ``` Instead of an `ndarray`, you can also pass a `DataFrame` object: ``` # d4 = pd.DataFrame( # d3, # columns=["hobby", "children"], # index=["alice", "bob"] # ) # d4 # Practice d4 = pd.DataFrame( d3, columns = ["hobby", "children"], index = ["alice", "bob"] ) d4 ``` It is also possible to create a `DataFrame` with a dictionary (or list) of dictionaries (or lists): ``` # people = pd.DataFrame({ # "birthyear": {"alice":1985, "bob": 1984,
"charles": 1992}, # "hobby": {"alice":"Biking", "bob": "Dancing"}, # "weight": {"alice":68, "bob": 83, "charles": 112}, # "children": {"bob": 3, "charles": 0} # }) # people # Practice people = pd.DataFrame({ "birthyear": {"alice": 1985, "bob": 1984, "charles": 1992}, "hobby": {"alice": "Biking", "bob": "Dancing"}, "weight": {"alice": 68, "bob": 83, "charles": 112}, }) people ``` ## Multi-indexing If all columns are tuples of the same size, then they are understood as a multi-index. The same goes for row index labels. For example: ``` # d5 = pd.DataFrame( # { # ("public", "birthyear"): # {("Paris","alice"):1985, ("Paris","bob"): 1984, ("London","charles"): 1992}, # ("public", "hobby"): # {("Paris","alice"):"Biking", ("Paris","bob"): "Dancing"}, # ("private", "weight"): # {("Paris","alice"):68, ("Paris","bob"): 83, ("London","charles"): 112}, # ("private", "children"): # {("Paris", "alice"):np.nan, ("Paris","bob"): 3, ("London","charles"): 0} # } # ) # d5 d5 = pd.DataFrame( { ("public", "birthyear"): {("Paris", "alice"): 1985, ("Paris", "bob"): 1984, ("London", "charles"): 1992}, ("public", "hobby"): {("Paris", "alice"): "Biking", ("Paris", "bob"): "Dancing"}, ("private", "weight"): {("Paris", "alice"): 68, ("Paris", "bob"): 83, ("London", "charles"): 112}, ("private", "children"): {("Paris", "alice"):np.nan, ("Paris", "bob"):3, ("London", "charles"): 0}, } ) d5 # My Notes: Start from the top most columns ``` You can now get a `DataFrame` containing all the `"public"` columns very simply: ``` # d5["public"] # Practice d5["private"] d5["public", "hobby"] # Same result as d5["public"]["hobby"] # Practice # d5["public", "hobby"] # d5["public"]["hobby"] d5["private", "children"] ``` ## Dropping a level Let's look at `d5` again: ``` # d5 # Practice d5 ``` There are two levels of columns, and two levels of indices. 
We can drop a column level by calling `droplevel()` (the same goes for indices): ``` # d5.columns = d5.columns.droplevel(level = 0) # d5 # Practice d5.columns = d5.columns.droplevel(level=0) d5 # Practice # My Notes: Drop the topmost index d5.index = d5.index.droplevel(level=0) d5 ``` ## Transposing You can swap columns and indices using the `T` attribute: ``` # d6 = d5.T # d6 d6 = d5.T d6 ``` ## Stacking and unstacking levels Calling the `stack()` method will push the lowest column level after the lowest index: ``` # d7 = d6.stack() # d7 d7 = d6.stack() d7 ``` Note that many `NaN` values appeared. This makes sense because many new combinations did not exist before (e.g. there was no `bob` in `London`). Calling `unstack()` will do the reverse, once again creating many `NaN` values. ``` # d8 = d7.unstack() # d8 # Practice d8 = d7.unstack() d8 ``` If we call `unstack` again, we end up with a `Series` object: ``` # d9 = d8.unstack() # d9 # Practice d9 = d8.unstack() d9 ``` The `stack()` and `unstack()` methods let you select the `level` to stack/unstack. You can even stack/unstack multiple levels at once: ``` # d10 = d9.unstack(level = (0,1)) # My Notes: Alice, bob, charles (the names of the people go to the columns). (0, 1) means get the (0) London, Paris level # and the (1) names level and transpose them to the top. # d10 # Practice d10 = d9.unstack(level = (0, 1)) d10 d11 = d9.unstack(level = (0, 2)) # My Notes: (0, 2) means the (0) London and Paris level and # the (2) birthyear, children and hobby level transpose to the top d11 d12 = d9.unstack(level = (1, 0)) # My Notes: Moves the (1) names level and the (0) city level to the top. d12 d14 = d9.unstack(level = (1, 2)) d14 ``` ## Most methods return modified copies As you may have noticed, the `stack()` and `unstack()` methods do not modify the object they apply to. Instead, they work on a copy and return that copy. This is true of most methods in pandas.
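As a quick check of this copy-vs-in-place behavior, here is a minimal sketch with a throwaway frame (using `sort_index`, which behaves the same way):

```python
import pandas as pd

df = pd.DataFrame({"a": [3, 1, 2]}, index=["z", "x", "y"])
result = df.sort_index()                  # returns a sorted *copy*...
assert list(df.index) == ["z", "x", "y"]  # ...the original is untouched
assert list(result.index) == ["x", "y", "z"]
# Many of these methods accept inplace=True when you do want to modify the original:
df.sort_index(inplace=True)
assert list(df.index) == ["x", "y", "z"]
```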
## Accessing rows Let's go back to the `people` `DataFrame`: ``` # people # Practice people ``` The `loc` attribute lets you access rows instead of columns. The result is a `Series` object in which the `DataFrame`'s column names are mapped to row index labels: ``` # people.loc["charles"] # Practice people.loc["charles"] ``` You can also access rows by integer location using the `iloc` attribute: ``` # people.iloc[2] # Practice people.iloc[2] ``` You can also get a slice of rows, and this returns a `DataFrame` object: ``` # people.iloc[1:3] # Practice people.iloc[0:2] ``` Finally, you can pass a boolean array to get the matching rows: ``` people[np.array([True, False, True])] # My Notes: Only get the first and third indexes # Practice people[np.array([True, False, True])] ``` This is most useful when combined with boolean expressions: ``` # people[people["birthyear"] < 1990] # Practice # people[people["birthyear"] < 1990] # people[people["weight"] > 68] ``` ## Adding and removing columns You can generally treat `DataFrame` objects like dictionaries of `Series`, so the following work fine: ``` # people = pd.DataFrame({ # "birthyear": {"alice":1985, "bob": 1984, "charles": 1992}, # "hobby": {"alice":"Biking", "bob": "Dancing"}, # "weight": {"alice":68, "bob": 83, "charles": 112}, # "children": {"bob": 3, "charles": 0} # }) # people # Practice people = pd.DataFrame({ "birthyear": {"alice": 1985, "bob": 1984, "charles": 1992}, "hobby": {"alice": "Biking", "bob": "Dancing"}, "weight": {"alice": 68, "bob": 83, "charles": 112}, "children": {"alice": 0, "bob": 2, "charles": 4} }) people # people people # people["age"] = 2018 - people["birthyear"] # adds a new column "age" # people["over 30"] = people["age"] > 30 # adds another column "over 30" # birthyears = people.pop("birthyear") # del people["children"] # people # Practice people["age"] = 2020 - people["birthyear"] # adds a new column "age" people["over 30"] = people["age"] > 30 # adds another column "over 30" birthyears 
= people.pop("birthyear") # My Notes: Removes the "birthyear" column del people["children"] people # birthyears birthyears ``` When you add a new column, it must have the same number of rows. Missing rows are filled with NaN, and extra rows are ignored: ``` # people["pets"] = pd.Series({"bob": 0, "charles": 5, "eugene":1}) # alice is missing, eugene is ignored # people # Practice people["pets"] = pd.Series({"bob":0, "charles": 5, "eugene": 1}) # alice is missing, eugene is ignored people ``` When adding a new column, it is added at the end (on the right) by default. You can also insert a column anywhere else using the `insert()` method: ``` people.insert(1, "height", [172, 181, 185]) people # My Notes people.insert(0, "drinks", pd.Series({"alice": "pepsi", "bob": "coke", "charles": "water"})) people # My Notes del people["drinks"] people ``` ## Assigning new columns You can also create new columns by calling the `assign()` method. Note that this returns a new `DataFrame` object; the original is not modified: ``` # people.assign( # body_mass_index = people["weight"] / (people["height"] / 100) ** 2, # has_pets = people["pets"] > 0 # ) # Practice people.assign( body_mass_index = people["weight"] / (people["height"] / 100) ** 2, has_pets = people["pets"] > 0 ) ``` Note that you cannot access columns created within the same assignment: ``` # try: # people.assign( # body_mass_index = people["weight"] / (people["height"] / 100) ** 2, # overweight = people["body_mass_index"] > 25 # My Notes: Just created in the assignment, but cannot be used # ) # except KeyError as e: # print("Key error:", e) # Practice try: people.assign( body_mass_index = people["weight"] / (people["height"] / 100) ** 2, overweight = people["body_mass_index"] > 25 ) except KeyError as e: print("Key error:", e) ``` The solution is to split this assignment in two consecutive assignments: ``` # d6 = people.assign(body_mass_index = people["weight"] / (people["height"] / 100) ** 2) # d6.assign(overweight =
d6["body_mass_index"] > 25) d6 = people.assign(body_mass_index = people["weight"] / (people["height"] / 100) ** 2) d6.assign(overweight = d6["body_mass_index"] > 25) ``` Having to create a temporary variable `d6` is not very convenient. You may want to just chain the assignment calls, but it does not work because the `people` object is not actually modified by the first assignment: ``` # try: # (people # .assign(body_mass_index = people["weight"] / (people["height"] / 100) ** 2) # .assign(overweight = people["body_mass_index"] > 25) # ) # except KeyError as e: # print("Key error:", e) # Practice try: (people .assign(body_mass_index = people["weight"] / (people["height"] / 100) ** 2) .assign(overweight = people["body_mass_index"] > 25) ) except KeyError as e: print("Key error:", e) ``` But fear not, there is a simple solution. You can pass a function to the `assign()` method (typically a `lambda` function), and this function will be called with the `DataFrame` as a parameter: ``` # (people # .assign(body_mass_index = lambda df: df["weight"] / (df["height"] / 100) ** 2) # .assign(overweight = lambda df: df["body_mass_index"] > 25) # ) # Practice (people .assign(body_mass_index = lambda df: df["weight"] / (df["height"] / 100) ** 2) .assign(overweight = lambda df: df["body_mass_index"] > 25) ) ``` Problem solved! ## Evaluating an expression A great feature supported by pandas is expression evaluation. This relies on the `numexpr` library, which must be installed. ``` # people.eval("weight / (height/100) ** 2 > 25") # Practice people.eval("weight / (height / 100) ** 2 > 25") ``` Assignment expressions are also supported.
Let's set `inplace=True` to directly modify the `DataFrame` rather than getting a modified copy: ``` # people.eval("body_mass_index = weight / (height/100) ** 2", inplace=True) # people # Practice people.eval("body_mass_index = weight / (height / 100) ** 2", inplace=True) people ``` You can use a local or global variable in an expression by prefixing it with `'@'`: ``` # overweight_threshold = 30 # people.eval("overweight = body_mass_index > @overweight_threshold", inplace=True) # people # Practice overweight_threshold = 30 people.eval("overweight = body_mass_index > @overweight_threshold", inplace=True) people ``` ## Querying a `DataFrame` The `query()` method lets you filter a `DataFrame` based on a query expression: ``` people # people.query("age > 30 and pets == 0") # Practice people.query("age > 30 and pets == 0") # My Notes hobby_biking = "Biking" people.query("hobby == @hobby_biking") ``` ## Sorting a `DataFrame` You can sort a `DataFrame` by calling its `sort_index` method. By default it sorts the rows by their index label, in ascending order, but let's reverse the order: ``` # people.sort_index(ascending=False) # Practice people.sort_index(ascending=False) ``` Note that `sort_index` returned a sorted *copy* of the `DataFrame`. To modify `people` directly, we can set the `inplace` argument to `True`. Also, we can sort the columns instead of the rows by setting `axis=1`: ``` # people.sort_index(axis=1, inplace=True) # people # Practice people.sort_index(axis=1, inplace=True) # My Notes: Sorts the columns by their labels (axis=1), in alphabetical order people ``` To sort the `DataFrame` by the values instead of the labels, we can use `sort_values` and specify the column to sort by: ``` # people.sort_values(by="age", inplace=True) # people # Practice people.sort_values(by="age", inplace=True) people ``` ## Plotting a `DataFrame` Just like for `Series`, pandas makes it easy to draw nice graphs based on a `DataFrame`.
For example, it is trivial to create a line plot from a `DataFrame`'s data by calling its `plot` method: ``` # people.plot(kind = "line", x = "body_mass_index", y = ["height", "weight"]) # plt.show() # Practice people.plot(kind = "line", x = "body_mass_index", y = ["height", "weight"]) plt.show() ``` You can pass extra arguments supported by matplotlib's functions. For example, we can create a scatter plot and pass it a list of sizes using the `s` argument of matplotlib's `scatter()` function: ``` # people.plot(kind = "scatter", x = "height", y = "weight", s=[40, 120, 200]) # plt.show() # Practice people.plot(kind = "scatter", x = "height", y = "weight", s=[40, 120, 200]) plt.show() ``` Again, there are way too many options to list here: the best option is to scroll through the [Visualization](http://pandas.pydata.org/pandas-docs/stable/visualization.html) page in pandas' documentation, find the plot you are interested in and look at the example code. ## Operations on `DataFrame`s Although `DataFrame`s do not try to mimic NumPy arrays, there are a few similarities. Let's create a `DataFrame` to demonstrate this: ``` # grades_array = np.array([[8,8,9],[10,9,9],[4, 8, 2], [9, 10, 10]]) # grades = pd.DataFrame(grades_array, columns=["sep", "oct", "nov"], index=["alice","bob","charles","darwin"]) # grades # Practice grades_array = np.array([[8,8,9], [10,9,9], [4,8,2], [9,10,10]]) grades = pd.DataFrame(grades_array, columns=["sep", "oct", "nov"], index=["alice", "bob", "charles", "darwin"]) grades ``` You can apply NumPy mathematical functions on a `DataFrame`: the function is applied to all values: ``` # np.sqrt(grades) # Practice np.sqrt(grades) ``` Similarly, adding a single value to a `DataFrame` will add that value to all elements in the `DataFrame`. This is called *broadcasting*: ``` # grades + 1 # Practice grades + 1 ``` Of course, the same is true for all other binary operations, including arithmetic (`*`,`/`,`**`...) and conditional (`>`, `==`...)
operations: ``` # grades >= 5 # Practice grades >= 5 ``` Aggregation operations, such as computing the `max`, the `sum` or the `mean` of a `DataFrame`, apply to each column, and you get back a `Series` object: ``` # grades.mean() # Practice grades.mean() ``` The `all` method is also an aggregation operation: it checks whether all values are `True` or not. Let's see during which months all students got a grade greater than `5`: ``` # (grades > 5).all() # Practice (grades > 5).all() ``` Most of these functions take an optional `axis` parameter which lets you specify along which axis of the `DataFrame` you want the operation executed. The default is `axis=0`, meaning that the operation is executed vertically (on each column). You can set `axis=1` to execute the operation horizontally (on each row). For example, let's find out which students had all grades greater than `5`: ``` # (grades > 5).all(axis = 1) # Practice (grades > 5).all(axis = 1) ``` The `any` method returns `True` if any value is True. Let's see who got at least one grade 10: ``` # (grades == 10).any(axis = 1) # Practice (grades == 10).any(axis = 1) ``` If you add a `Series` object to a `DataFrame` (or execute any other binary operation), pandas attempts to broadcast the operation to all *rows* in the `DataFrame`. This only works if the `Series` has the same size as the `DataFrame`'s rows. For example, let's subtract the `mean` of the `DataFrame` (a `Series` object) from the `DataFrame`: ``` # My Notes grades pd.DataFrame(grades.mean(), columns=["mean_grades"]) # My Notes: Convert Series object to DataFrame for easier visualization # grades - grades.mean() # equivalent to: grades - [7.75, 8.75, 7.50] # Practice grades - grades.mean() # equivalent to: grades - [7.75, 8.75, 7.50] ``` We subtracted `7.75` from all September grades, `8.75` from October grades and `7.50` from November grades.
It is equivalent to subtracting this `DataFrame`: ``` # pd.DataFrame([[7.75, 8.75, 7.50]]*4, index=grades.index, columns=grades.columns) pd.DataFrame([[7.75, 8.75, 7.50]] * 4, index=grades.index, columns=grades.columns) ``` If you want to subtract the global mean from every grade, here is one way to do it: ``` grades.values.mean() # grades - grades.values.mean() # subtracts the global mean (8.00) from all grades # Practice grades - grades.values.mean() # subtracts the global mean (8.00) from all grades ``` ## Automatic alignment Similar to `Series`, when operating on multiple `DataFrame`s, pandas automatically aligns them by row index label, but also by column names. Let's create a `DataFrame` with bonus points for each person from October to December: ``` # bonus_array = np.array([[0,np.nan,2],[np.nan,1,0],[0, 1, 0], [3, 3, 0]]) # bonus_points = pd.DataFrame(bonus_array, columns=["oct", "nov", "dec"], index=["bob","colin", "darwin", "charles"]) # bonus_points # Practice bonus_array = np.array([[0, np.nan, 2], [np.nan, 1, 0], [0, 1, 0], [3, 3, 0]]) bonus_points = pd.DataFrame(bonus_array, columns=["oct", "nov", "dec"], index=["bob", "colin", "darwin", "charles"]) bonus_points.sort_index(axis=0, ascending=True) # My Notes grades.iloc[1:] # grades + bonus_points # Practice # example = (grades + bonus_points) # example.sort_index(axis=1, ascending=False) (grades + bonus_points).sort_index(axis=1, ascending=False) ``` Looks like the addition worked in some cases but way too many elements are now empty. That's because when aligning the `DataFrame`s, some columns and rows were only present on one side, and thus they were considered missing on the other side (`NaN`). Then adding `NaN` to a number results in `NaN`, hence the result. ## Handling missing data Dealing with missing data is a frequent task when working with real life data. Pandas offers a few tools to handle missing data. Let's try to fix the problem above.
For example, we can decide that missing data should result in a zero, instead of `NaN`. We can replace all `NaN` values with any value using the `fillna()` method: ``` # (grades + bonus_points).fillna(0) # Practice (grades + bonus_points).fillna(0) ``` It's a bit unfair that we're setting grades to zero in September, though. Perhaps we should decide that missing grades are missing grades, but missing bonus points should be replaced by zeros: ``` # fixed_bonus_points = bonus_points.fillna(0) # fixed_bonus_points.insert(0, "sep", 0) # fixed_bonus_points.loc["alice"] = 0 # grades + fixed_bonus_points # Practice fixed_bonus_points = bonus_points.fillna(0) # My Notes: Fills the NaN values with 0's fixed_bonus_points.insert(0, "sep", 0) # My Notes: insert 0 values in "sep" column in position 0 fixed_bonus_points.loc["alice"] = 0 # My Notes: Set Alice's score to be 0 for all the months grades + fixed_bonus_points ``` That's much better: although we made up some data, we have not been too unfair. Another way to handle missing data is to interpolate. Let's look at the `bonus_points` `DataFrame` again: ``` # bonus_points # Practice bonus_points ``` Now let's call the `interpolate` method. By default, it interpolates vertically (`axis=0`), so let's tell it to interpolate horizontally (`axis=1`). ``` # bonus_points.interpolate(axis=1) # Practice bonus_points.interpolate(axis=1) ``` Bob had 0 bonus points in October, and 2 in December. When we interpolate for November, we get the mean: 1 bonus point. Colin had 1 bonus point in November, but we do not know how many bonus points he had in September, so we cannot interpolate; this is why there is still a missing value in October after interpolation. To fix this, we can set the September bonus points to 0 before interpolation.
``` # better_bonus_points = bonus_points.copy() # better_bonus_points.insert(0, "sep", 0) # better_bonus_points.loc["alice"] = 0 # better_bonus_points = better_bonus_points.interpolate(axis=1) # better_bonus_points # Practice better_bonus_points = bonus_points.copy() better_bonus_points.insert(0, "sep", 0) better_bonus_points.loc["alice"] = 0 better_bonus_points = better_bonus_points.interpolate(axis=1) better_bonus_points ``` Great, now we have reasonable bonus points everywhere. Let's find out the final grades: ``` # grades + better_bonus_points # Practice grades + better_bonus_points ``` It is slightly annoying that the September column ends up on the right. This is because the `DataFrame`s we are adding do not have the exact same columns (the `grades` `DataFrame` is missing the `"dec"` column), so to make things predictable, pandas orders the final columns alphabetically. To fix this, we can simply add the missing column before adding: ``` # grades["dec"] = np.nan # final_grades = grades + better_bonus_points # final_grades # Practice grades["dec"] = np.nan final_grades = grades + better_bonus_points final_grades ``` There's not much we can do about December and Colin: it's bad enough that we are making up bonus points, but we can't reasonably make up grades (well I guess some teachers probably do). So let's call the `dropna()` method to get rid of rows that are full of `NaN`s: ``` # final_grades_clean = final_grades.dropna(how="all") # final_grades_clean # Practice final_grades_clean = final_grades.dropna(how="all") final_grades_clean ``` Now let's remove columns that are full of `NaN`s by setting the `axis` argument to `1`: ``` # final_grades_clean = final_grades_clean.dropna(axis=1, how="all") # final_grades_clean # Practice final_grades_clean = final_grades_clean.dropna(axis=1, how="all") final_grades_clean ``` ## Aggregating with `groupby` Similar to the SQL language, pandas allows grouping your data into groups to run calculations over each group. 
First, let's add some extra data about each person so we can group them, and let's go back to the `final_grades` `DataFrame` so we can see how `NaN` values are handled: ``` # final_grades["hobby"] = ["Biking", "Dancing", np.nan, "Dancing", "Biking"] # final_grades # Practice final_grades["hobby"] = ["Biking", "Dancing", np.nan, "Dancing", "Biking"] final_grades ``` Now let's group data in this `DataFrame` by hobby: ``` # grouped_grades = final_grades.groupby("hobby") # grouped_grades # Practice grouped_grades = final_grades.groupby("hobby") grouped_grades ``` We are ready to compute the average grade per hobby: ``` # grouped_grades.mean() # Practice grouped_grades.mean() ``` That was easy! Note that the `NaN` values have simply been skipped when computing the means. ## Pivot tables Pandas supports spreadsheet-like [pivot tables](https://en.wikipedia.org/wiki/Pivot_table) that allow quick data summarization. To illustrate this, let's create a simple `DataFrame`: ``` # bonus_points # Practice bonus_points # more_grades = final_grades_clean.stack().reset_index() # more_grades.columns = ["name", "month", "grade"] # more_grades["bonus"] = [np.nan, np.nan, np.nan, 0, np.nan, 2, 3, 3, 0, 0, 1, 0] # more_grades # Practice more_grades = final_grades_clean.stack().reset_index() more_grades.columns = ["name", "month", "grade"] more_grades["bonus"] = [np.nan, np.nan, np.nan, 0, np.nan, 2, 3, 3, 0, 0, 1, 0] more_grades ``` Now we can call the `pd.pivot_table()` function for this `DataFrame`, asking to group by the `name` column. 
By default, `pivot_table()` computes the mean of each numeric column: ``` # pd.pivot_table(more_grades, index="name") # Practice pd.pivot_table(more_grades, index="name") ``` We can change the aggregation function by setting the `aggfunc` argument, and we can also specify the list of columns whose values will be aggregated: ``` # pd.pivot_table(more_grades, index="name", values=["grade","bonus"], aggfunc=np.max) # Practice pd.pivot_table(more_grades, index="name", values=["grade", "bonus"], aggfunc = np.max) ``` We can also specify the `columns` to aggregate over horizontally, and request the grand totals for each row and column by setting `margins=True`: ``` # pd.pivot_table(more_grades, index="name", values="grade", columns="month", margins=True) # Practice pd.pivot_table(more_grades, index="name", values="grade", columns="month", margins=True) ``` Finally, we can specify multiple index or column names, and pandas will create multi-level indices: ``` # pd.pivot_table(more_grades, index=("name", "month"), margins=True) # Practice pd.pivot_table(more_grades, index=("name", "month"), margins=True) # My Notes # Note that NaN values are not included pd.pivot_table(more_grades, index=("name", "month", "bonus"), margins=True) ``` ## Overview functions When dealing with large `DataFrames`, it is useful to get a quick overview of its content. Pandas offers a few functions for this. First, let's create a large `DataFrame` with a mix of numeric values, missing values and text values. 
Notice how Jupyter displays only the corners of the `DataFrame`: ``` # much_data = np.fromfunction(lambda x,y: (x+y*y)%17*11, (10000, 26)) # large_df = pd.DataFrame(much_data, columns=list("ABCDEFGHIJKLMNOPQRSTUVWXYZ")) # large_df[large_df % 16 == 0] = np.nan # large_df.insert(3,"some_text", "Blabla") # large_df # Practice much_data = np.fromfunction(lambda x,y: (x + y * y) % 17 * 11, (10000, 26)) large_df = pd.DataFrame(much_data, columns=list("ABCDEFGHIJKLMNOPQRSTUVWXYZ")) large_df[large_df % 16 == 0] = np.nan large_df.insert(3, "some_text", "Blabla") # My Notes: Insert column "some_text" in column 3 with text "Blabla" large_df # My Notes a = np.arange(0, 101) a # My Notes a % 16 == 0 ``` The `head()` method returns the top 5 rows: ``` # large_df.head() # Practice large_df.head() ``` Of course there's also a `tail()` function to view the bottom 5 rows. You can pass the number of rows you want: ``` # large_df.tail(n=2) # Practice large_df.tail(n=2) ``` The `info()` method prints out a summary of each column's contents: ``` # large_df.info() # Practice large_df.info() ``` Finally, the `describe()` method gives a nice overview of the main aggregated values over each column: * `count`: number of non-null (not NaN) values * `mean`: mean of non-null values * `std`: [standard deviation](https://en.wikipedia.org/wiki/Standard_deviation) of non-null values * `min`: minimum of non-null values * `25%`, `50%`, `75%`: 25th, 50th and 75th [percentile](https://en.wikipedia.org/wiki/Percentile) of non-null values * `max`: maximum of non-null values ``` # large_df.describe() large_df.describe() ``` # Saving & loading Pandas can save `DataFrame`s to various backends, including file formats such as CSV, Excel, JSON, HTML and HDF5, or to a SQL database.
Let's create a `DataFrame` to demonstrate this: ``` # my_df = pd.DataFrame( # [["Biking", 68.5, 1985, np.nan], ["Dancing", 83.1, 1984, 3]], # columns=["hobby","weight","birthyear","children"], # index=["alice", "bob"] # ) # my_df # Practice my_df = pd.DataFrame( [["Biking", 68.5, 1985, np.nan], ["Dancing", 83.1, 1984, 3]], columns = ["hobby", "weight", "birthyear", "children"], index = ["alice", "bob"] ) my_df ``` ## Saving Let's save it to CSV, HTML and JSON: ``` # my_df.to_csv("my_df.csv") # my_df.to_html("my_df.html") # my_df.to_json("my_df.json") # Practice my_df.to_csv("my_df.csv") my_df.to_html("my_df.html") my_df.to_json("my_df.json") ``` Done! Let's take a peek at what was saved: ``` # for filename in ("my_df.csv", "my_df.html", "my_df.json"): # print("#", filename) # with open(filename, "rt") as f: # print(f.read()) # print() # Practice for filename in ("my_df.csv", "my_df.html", "my_df.json"): print("#", filename) with open(filename, "rt") as f: print(f.read()) print() ``` Note that the index is saved as the first column (with no name) in a CSV file, as `<th>` tags in HTML and as keys in JSON. Saving to other formats works very similarly, but some formats require extra libraries to be installed. For example, saving to Excel requires the openpyxl library: ``` # try: # my_df.to_excel("my_df.xlsx", sheet_name='People') # except ImportError as e: # print(e) # Practice try: my_df.to_excel("my_df.xlsx", sheet_name='People') except ImportError as e: print(e) ``` ## Loading Now let's load our CSV file back into a `DataFrame`: ``` # my_df_loaded = pd.read_csv("my_df.csv", index_col=0) # my_df_loaded # Practice my_df_loaded = pd.read_csv("my_df.csv", index_col=0) my_df_loaded ``` As you might guess, there are similar `read_json`, `read_html`, `read_excel` functions as well. We can also read data straight from the Internet. For example, let's load all U.S. cities from [simplemaps.com](http://simplemaps.com/): ``` # My Notes import os path = os.path.join("." 
, "datasets", "simplemaps", "worldcities.csv") path # us_cities = None # try: # csv_url = "http://simplemaps.com/files/cities.csv" # us_cities = pd.read_csv(csv_url, index_col=0) # us_cities = us_cities.head() # except IOError as e: # print(e) # us_cities # Practice us_cities = None try: csv_url = path us_cities = pd.read_csv(csv_url, index_col=0) us_cities = us_cities.head() except IOError as e: print(e) us_cities ``` There are more options available, in particular regarding datetime format. Check out the [documentation](http://pandas.pydata.org/pandas-docs/stable/io.html) for more details. # Combining `DataFrame`s ## SQL-like joins One powerful feature of pandas is its ability to perform SQL-like joins on `DataFrame`s. Various types of joins are supported: inner joins, left/right outer joins and full joins. To illustrate this, let's start by creating a couple of simple `DataFrame`s: ``` # city_loc = pd.DataFrame( # [ # ["CA", "San Francisco", 37.781334, -122.416728], # ["NY", "New York", 40.705649, -74.008344], # ["FL", "Miami", 25.791100, -80.320733], # ["OH", "Cleveland", 41.473508, -81.739791], # ["UT", "Salt Lake City", 40.755851, -111.896657] # ], columns=["state", "city", "lat", "lng"]) # city_loc # Practice city_loc = pd.DataFrame( [ ["CA", "San Francisco", 37.781334, -122.416728], ["NY", "New York", 40.705649, -74.008344], ["FL", "Miami", 25.791100, -80.320733], ["OH", "Cleveland", 41.473508, -81.739791], ["UT", "Salt Lake City", 40.755851, -111.896657] ], columns = ["state", "city", "lat", "lng"] ) city_loc # city_pop = pd.DataFrame( # [ # [808976, "San Francisco", "California"], # [8363710, "New York", "New-York"], # [413201, "Miami", "Florida"], # [2242193, "Houston", "Texas"] # ], index=[3,4,5,6], columns=["population", "city", "state"]) # city_pop # Practice city_pop = pd.DataFrame( [ [808976, "San Francisco", "California"], [8363710, "New York", "New-York"], [413201, "Miami", "Florida"], [2242193, "Houston", "Texas"] ], index = [3,4,5,6], 
columns=["population", "city", "state"]) city_pop ``` Now let's join these `DataFrame`s using the `merge()` function: ``` # pd.merge(left=city_loc, right=city_pop, on="city") # Practice pd.merge(left=city_loc, right=city_pop, on="city") ``` Note that both `DataFrame`s have a column named `state`, so in the result they got renamed to `state_x` and `state_y`. Also, note that Cleveland, Salt Lake City and Houston were dropped because they don't exist in *both* `DataFrame`s. This is the equivalent of a SQL `INNER JOIN`. If you want a `FULL OUTER JOIN`, where no city gets dropped and `NaN` values are added, you must specify `how="outer"`: ``` # all_cities = pd.merge(left=city_loc, right=city_pop, on="city", how="outer") # all_cities # Practice all_cities = pd.merge(left=city_loc, right=city_pop, on="city", how="outer") all_cities ``` Of course `LEFT OUTER JOIN` is also available by setting `how="left"`: only the cities present in the left `DataFrame` end up in the result. Similarly, with `how="right"` only cities in the right `DataFrame` appear in the result. For example: ``` # My Notes pd.merge(left=city_loc, right=city_pop, on="city", how="left") # pd.merge(left=city_loc, right=city_pop, on="city", how="right") # Practice pd.merge(left=city_loc, right=city_pop, on="city", how="right") ``` If the key to join on is actually in one (or both) `DataFrame`'s index, you must use `left_index=True` and/or `right_index=True`. If the key column names differ, you must use `left_on` and `right_on`. For example: ``` # city_pop2 = city_pop.copy() # city_pop2.columns = ["population", "name", "state"] # pd.merge(left=city_loc, right=city_pop2, left_on="city", right_on="name") # Practice city_pop2 = city_pop.copy() city_pop2.columns = ["population", "name", "state"] # city_pop2 pd.merge(left=city_loc, right=city_pop2, left_on="city", right_on="name") ``` # Stopped here 21/5/2020 4:37PM ## Concatenation Rather than joining `DataFrame`s, we may just want to concatenate them. 
That's what `concat()` is for: ``` city_loc city_pop # result_concat = pd.concat([city_loc, city_pop]) # result_concat # Practice result_concat = pd.concat([city_loc, city_pop]) result_concat # My Notes # Same info on the rows stack on top of one another ``` Note that this operation aligned the data horizontally (by columns) but not vertically (by rows). In this example, we end up with multiple rows having the same index (eg. 3). Pandas handles this rather gracefully: ``` # result_concat.loc[3] # My Notes: Notice in the index there are 2 3's and 4's. # Practice result_concat.loc[3] ``` Or you can tell pandas to just ignore the index: ``` # pd.concat([city_loc, city_pop], ignore_index=True) # My Notes # # Practice pd.concat([city_loc, city_pop], ignore_index=True) ``` Notice that when a column does not exist in a `DataFrame`, it acts as if it was filled with `NaN` values. If we set `join="inner"`, then only columns that exist in *both* `DataFrame`s are returned: ``` # pd.concat([city_loc, city_pop], join="inner") # My Notes # Similar columns of both data frames are only joined together. # Practice pd.concat([city_loc, city_pop], join="inner") ``` You can concatenate `DataFrame`s horizontally instead of vertically by setting `axis=1`: ``` # pd.concat([city_loc, city_pop], axis=1) # My Notes: Adding columns horizontally, notice the city and state columns # Practice pd.concat([city_loc, city_pop], axis=1) pd.concat([city_loc, city_pop], axis=0) # My Notes: Adding the dataframe vertically, aka appending the rows at the bottom ``` In this case it really does not make much sense because the indices do not align well (eg. Cleveland and San Francisco end up on the same row, because they shared the index label `3`). 
So let's reindex the `DataFrame`s by city name before concatenating: My Notes: Your index is your first column without a name ``` pd.concat([city_loc.set_index("city"), city_pop.set_index("city")], axis=1) # Practice pd.concat([city_loc.set_index("city"), city_pop.set_index("city")], axis=1) ``` This looks a lot like a `FULL OUTER JOIN`, except that the `state` columns were not renamed to `state_x` and `state_y`, and the `city` column is now the index. The `append()` method is a useful shorthand for concatenating `DataFrame`s vertically: ``` # city_loc.append(city_pop) city_loc.append(city_pop) ``` As always in pandas, the `append()` method does *not* actually modify `city_loc`: it works on a copy and returns the modified copy. # Categories It is quite frequent to have values that represent categories, for example `1` for female and `2` for male, or `"A"` for Good, `"B"` for Average, `"C"` for Bad. These categorical values can be hard to read and cumbersome to handle, but fortunately pandas makes it easy. To illustrate this, let's take the `city_pop` `DataFrame` we created earlier, and add a column that represents a category: ``` # city_eco = city_pop.copy() # city_eco["eco_code"] = [17, 17, 34, 20] # city_eco # Practice city_eco = city_pop.copy() city_eco["eco_code"] = [17, 17, 34, 20] city_eco ``` Right now the `eco_code` column is full of apparently meaningless codes. Let's fix that. 
First, we will create a new categorical column based on the `eco_code`s: ``` # city_eco["economy"] = city_eco["eco_code"].astype('category') # city_eco["economy"].cat.categories # Practice city_eco["economy"] = city_eco["eco_code"].astype('category') city_eco["economy"].cat.categories ``` Now we can give each category a meaningful name: ``` # city_eco["economy"].cat.categories = ["Finance", "Energy", "Tourism"] # city_eco # Practice city_eco["economy"].cat.categories = ["Finance", "Energy", "Tourism"] city_eco ``` Note that categorical values are sorted according to their categorical order, *not* their alphabetical order: ``` # city_eco.sort_values(by="economy", ascending=False) # Practice city_eco.sort_values(by="economy", ascending=False) ``` # What next? As you probably noticed by now, pandas is quite a large library with *many* features. Although we went through the most important features, there is still a lot to discover. Probably the best way to learn more is to get your hands dirty with some real-life data. It is also a good idea to go through pandas' excellent [documentation](http://pandas.pydata.org/pandas-docs/stable/index.html), in particular the [Cookbook](http://pandas.pydata.org/pandas-docs/stable/cookbook.html).
# Building your Deep Neural Network: Step by Step Welcome to your week 4 assignment (part 1 of 2)! You have previously trained a 2-layer Neural Network (with a single hidden layer). This week, you will build a deep neural network, with as many layers as you want! - In this notebook, you will implement all the functions required to build a deep neural network. - In the next assignment, you will use these functions to build a deep neural network for image classification. **After this assignment you will be able to:** - Use non-linear units like ReLU to improve your model - Build a deeper neural network (with more than 1 hidden layer) - Implement an easy-to-use neural network class **Notation**: - Superscript $[l]$ denotes a quantity associated with the $l^{th}$ layer. - Example: $a^{[L]}$ is the $L^{th}$ layer activation. $W^{[L]}$ and $b^{[L]}$ are the $L^{th}$ layer parameters. - Superscript $(i)$ denotes a quantity associated with the $i^{th}$ example. - Example: $x^{(i)}$ is the $i^{th}$ training example. - Lowerscript $i$ denotes the $i^{th}$ entry of a vector. - Example: $a^{[l]}_i$ denotes the $i^{th}$ entry of the $l^{th}$ layer's activations). Let's get started! ### <font color='darkblue'> Updates to Assignment <font> #### If you were working on a previous version * The current notebook filename is version "4a". * You can find your work in the file directory as version "4". * To see the file directory, click on the Coursera logo at the top left of the notebook. #### List of Updates * compute_cost unit test now includes tests for Y = 0 as well as Y = 1. This catches a possible bug before students get graded. * linear_backward unit test now has a more complete unit test that catches a possible bug before students get graded. ## 1 - Packages Let's first import all the packages that you will need during this assignment. - [numpy](www.numpy.org) is the main package for scientific computing with Python. 
- [matplotlib](http://matplotlib.org) is a library to plot graphs in Python. - dnn_utils provides some necessary functions for this notebook. - testCases provides some test cases to assess the correctness of your functions - np.random.seed(1) is used to keep all the random function calls consistent. It will help us grade your work. Please don't change the seed. ``` import numpy as np import h5py import matplotlib.pyplot as plt from testCases_v4a import * from dnn_utils_v2 import sigmoid, sigmoid_backward, relu, relu_backward %matplotlib inline plt.rcParams['figure.figsize'] = (5.0, 4.0) # set default size of plots plt.rcParams['image.interpolation'] = 'nearest' plt.rcParams['image.cmap'] = 'gray' %load_ext autoreload %autoreload 2 np.random.seed(1) ``` ## 2 - Outline of the Assignment To build your neural network, you will be implementing several "helper functions". These helper functions will be used in the next assignment to build a two-layer neural network and an L-layer neural network. Each small helper function you will implement will have detailed instructions that will walk you through the necessary steps. Here is an outline of this assignment, you will: - Initialize the parameters for a two-layer network and for an $L$-layer neural network. - Implement the forward propagation module (shown in purple in the figure below). - Complete the LINEAR part of a layer's forward propagation step (resulting in $Z^{[l]}$). - We give you the ACTIVATION function (relu/sigmoid). - Combine the previous two steps into a new [LINEAR->ACTIVATION] forward function. - Stack the [LINEAR->RELU] forward function L-1 time (for layers 1 through L-1) and add a [LINEAR->SIGMOID] at the end (for the final layer $L$). This gives you a new L_model_forward function. - Compute the loss. - Implement the backward propagation module (denoted in red in the figure below). - Complete the LINEAR part of a layer's backward propagation step. 
- We give you the gradient of the ACTIVATE function (relu_backward/sigmoid_backward) - Combine the previous two steps into a new [LINEAR->ACTIVATION] backward function. - Stack [LINEAR->RELU] backward L-1 times and add [LINEAR->SIGMOID] backward in a new L_model_backward function - Finally update the parameters. <img src="images/final outline.png" style="width:800px;height:500px;"> <caption><center> **Figure 1**</center></caption><br> **Note** that for every forward function, there is a corresponding backward function. That is why at every step of your forward module you will be storing some values in a cache. The cached values are useful for computing gradients. In the backpropagation module you will then use the cache to calculate the gradients. This assignment will show you exactly how to carry out each of these steps. ## 3 - Initialization You will write two helper functions that will initialize the parameters for your model. The first function will be used to initialize parameters for a two layer model. The second one will generalize this initialization process to $L$ layers. ### 3.1 - 2-layer Neural Network **Exercise**: Create and initialize the parameters of the 2-layer neural network. **Instructions**: - The model's structure is: *LINEAR -> RELU -> LINEAR -> SIGMOID*. - Use random initialization for the weight matrices. Use `np.random.randn(shape)*0.01` with the correct shape. - Use zero initialization for the biases. Use `np.zeros(shape)`. 
``` # GRADED FUNCTION: initialize_parameters def initialize_parameters(n_x, n_h, n_y): """ Argument: n_x -- size of the input layer n_h -- size of the hidden layer n_y -- size of the output layer Returns: parameters -- python dictionary containing your parameters: W1 -- weight matrix of shape (n_h, n_x) b1 -- bias vector of shape (n_h, 1) W2 -- weight matrix of shape (n_y, n_h) b2 -- bias vector of shape (n_y, 1) """ np.random.seed(1) ### START CODE HERE ### (≈ 4 lines of code) W1 = np.random.randn(n_h, n_x) * 0.01 # None b1 = np.zeros((n_h, 1)) # None W2 = np.random.randn(n_y, n_h) * 0.01 # None b2 = np.zeros((n_y, 1)) # None ### END CODE HERE ### assert(W1.shape == (n_h, n_x)) assert(b1.shape == (n_h, 1)) assert(W2.shape == (n_y, n_h)) assert(b2.shape == (n_y, 1)) parameters = {"W1": W1, "b1": b1, "W2": W2, "b2": b2} return parameters parameters = initialize_parameters(3,2,1) print("W1 = " + str(parameters["W1"])) print("b1 = " + str(parameters["b1"])) print("W2 = " + str(parameters["W2"])) print("b2 = " + str(parameters["b2"])) ``` **Expected output**: <table style="width:80%"> <tr> <td> **W1** </td> <td> [[ 0.01624345 -0.00611756 -0.00528172] [-0.01072969 0.00865408 -0.02301539]] </td> </tr> <tr> <td> **b1**</td> <td>[[ 0.] [ 0.]]</td> </tr> <tr> <td>**W2**</td> <td> [[ 0.01744812 -0.00761207]]</td> </tr> <tr> <td> **b2** </td> <td> [[ 0.]] </td> </tr> </table> ### 3.2 - L-layer Neural Network The initialization for a deeper L-layer neural network is more complicated because there are many more weight matrices and bias vectors. When completing the `initialize_parameters_deep`, you should make sure that your dimensions match between each layer. Recall that $n^{[l]}$ is the number of units in layer $l$. 
Thus for example if the size of our input $X$ is $(12288, 209)$ (with $m=209$ examples) then: <table style="width:100%"> <tr> <td> </td> <td> **Shape of W** </td> <td> **Shape of b** </td> <td> **Activation** </td> <td> **Shape of Activation** </td> <tr> <tr> <td> **Layer 1** </td> <td> $(n^{[1]},12288)$ </td> <td> $(n^{[1]},1)$ </td> <td> $Z^{[1]} = W^{[1]} X + b^{[1]} $ </td> <td> $(n^{[1]},209)$ </td> <tr> <tr> <td> **Layer 2** </td> <td> $(n^{[2]}, n^{[1]})$ </td> <td> $(n^{[2]},1)$ </td> <td>$Z^{[2]} = W^{[2]} A^{[1]} + b^{[2]}$ </td> <td> $(n^{[2]}, 209)$ </td> <tr> <tr> <td> $\vdots$ </td> <td> $\vdots$ </td> <td> $\vdots$ </td> <td> $\vdots$</td> <td> $\vdots$ </td> <tr> <tr> <td> **Layer L-1** </td> <td> $(n^{[L-1]}, n^{[L-2]})$ </td> <td> $(n^{[L-1]}, 1)$ </td> <td>$Z^{[L-1]} = W^{[L-1]} A^{[L-2]} + b^{[L-1]}$ </td> <td> $(n^{[L-1]}, 209)$ </td> <tr> <tr> <td> **Layer L** </td> <td> $(n^{[L]}, n^{[L-1]})$ </td> <td> $(n^{[L]}, 1)$ </td> <td> $Z^{[L]} = W^{[L]} A^{[L-1]} + b^{[L]}$</td> <td> $(n^{[L]}, 209)$ </td> <tr> </table> Remember that when we compute $W X + b$ in python, it carries out broadcasting. For example, if: $$ W = \begin{bmatrix} j & k & l\\ m & n & o \\ p & q & r \end{bmatrix}\;\;\; X = \begin{bmatrix} a & b & c\\ d & e & f \\ g & h & i \end{bmatrix} \;\;\; b =\begin{bmatrix} s \\ t \\ u \end{bmatrix}\tag{2}$$ Then $WX + b$ will be: $$ WX + b = \begin{bmatrix} (ja + kd + lg) + s & (jb + ke + lh) + s & (jc + kf + li)+ s\\ (ma + nd + og) + t & (mb + ne + oh) + t & (mc + nf + oi) + t\\ (pa + qd + rg) + u & (pb + qe + rh) + u & (pc + qf + ri)+ u \end{bmatrix}\tag{3} $$ **Exercise**: Implement initialization for an L-layer Neural Network. **Instructions**: - The model's structure is *[LINEAR -> RELU] $ \times$ (L-1) -> LINEAR -> SIGMOID*. I.e., it has $L-1$ layers using a ReLU activation function followed by an output layer with a sigmoid activation function. - Use random initialization for the weight matrices. 
Use `np.random.randn(shape) * 0.01`. - Use zeros initialization for the biases. Use `np.zeros(shape)`. - We will store $n^{[l]}$, the number of units in different layers, in a variable `layer_dims`. For example, the `layer_dims` for the "Planar Data classification model" from last week would have been [2,4,1]: There were two inputs, one hidden layer with 4 hidden units, and an output layer with 1 output unit. This means `W1`'s shape was (4,2), `b1` was (4,1), `W2` was (1,4) and `b2` was (1,1). Now you will generalize this to $L$ layers! - Here is the implementation for $L=1$ (one layer neural network). It should inspire you to implement the general case (L-layer neural network). ```python if L == 1: parameters["W" + str(L)] = np.random.randn(layer_dims[1], layer_dims[0]) * 0.01 parameters["b" + str(L)] = np.zeros((layer_dims[1], 1)) ``` ``` # GRADED FUNCTION: initialize_parameters_deep def initialize_parameters_deep(layer_dims): """ Arguments: layer_dims -- python array (list) containing the dimensions of each layer in our network Returns: parameters -- python dictionary containing your parameters "W1", "b1", ..., "WL", "bL": Wl -- weight matrix of shape (layer_dims[l], layer_dims[l-1]) bl -- bias vector of shape (layer_dims[l], 1) """ np.random.seed(3) parameters = {} L = len(layer_dims) # number of layers in the network for l in range(1, L): ### START CODE HERE ### (≈ 2 lines of code) parameters['W' + str(l)] = np.random.randn(layer_dims[l], layer_dims[l-1]) * 0.01 # None parameters['b' + str(l)] = np.zeros((layer_dims[l], 1)) # None ### END CODE HERE ### assert(parameters['W' + str(l)].shape == (layer_dims[l], layer_dims[l-1])) assert(parameters['b' + str(l)].shape == (layer_dims[l], 1)) return parameters parameters = initialize_parameters_deep([5,4,3]) print("W1 = " + str(parameters["W1"])) print("b1 = " + str(parameters["b1"])) print("W2 = " + str(parameters["W2"])) print("b2 = " + str(parameters["b2"])) ``` **Expected output**: <table style="width:80%"> <tr> 
<td> **W1** </td> <td>[[ 0.01788628 0.0043651 0.00096497 -0.01863493 -0.00277388] [-0.00354759 -0.00082741 -0.00627001 -0.00043818 -0.00477218] [-0.01313865 0.00884622 0.00881318 0.01709573 0.00050034] [-0.00404677 -0.0054536 -0.01546477 0.00982367 -0.01101068]]</td> </tr> <tr> <td>**b1** </td> <td>[[ 0.] [ 0.] [ 0.] [ 0.]]</td> </tr> <tr> <td>**W2** </td> <td>[[-0.01185047 -0.0020565 0.01486148 0.00236716] [-0.01023785 -0.00712993 0.00625245 -0.00160513] [-0.00768836 -0.00230031 0.00745056 0.01976111]]</td> </tr> <tr> <td>**b2** </td> <td>[[ 0.] [ 0.] [ 0.]]</td> </tr> </table> ## 4 - Forward propagation module ### 4.1 - Linear Forward Now that you have initialized your parameters, you will do the forward propagation module. You will start by implementing some basic functions that you will use later when implementing the model. You will complete three functions in this order: - LINEAR - LINEAR -> ACTIVATION where ACTIVATION will be either ReLU or Sigmoid. - [LINEAR -> RELU] $\times$ (L-1) -> LINEAR -> SIGMOID (whole model) The linear forward module (vectorized over all the examples) computes the following equations: $$Z^{[l]} = W^{[l]}A^{[l-1]} +b^{[l]}\tag{4}$$ where $A^{[0]} = X$. **Exercise**: Build the linear part of forward propagation. **Reminder**: The mathematical representation of this unit is $Z^{[l]} = W^{[l]}A^{[l-1]} +b^{[l]}$. You may also find `np.dot()` useful. If your dimensions don't match, printing `W.shape` may help. ``` # GRADED FUNCTION: linear_forward def linear_forward(A, W, b): """ Implement the linear part of a layer's forward propagation. 
Arguments: A -- activations from previous layer (or input data): (size of previous layer, number of examples) W -- weights matrix: numpy array of shape (size of current layer, size of previous layer) b -- bias vector, numpy array of shape (size of the current layer, 1) Returns: Z -- the input of the activation function, also called pre-activation parameter cache -- a python tuple containing "A", "W" and "b" ; stored for computing the backward pass efficiently """ ### START CODE HERE ### (≈ 1 line of code) Z = np.dot(W, A) + b # None ### END CODE HERE ### assert(Z.shape == (W.shape[0], A.shape[1])) cache = (A, W, b) return Z, cache A, W, b = linear_forward_test_case() Z, linear_cache = linear_forward(A, W, b) print("Z = " + str(Z)) ``` **Expected output**: <table style="width:35%"> <tr> <td> **Z** </td> <td> [[ 3.26295337 -1.23429987]] </td> </tr> </table> ### 4.2 - Linear-Activation Forward In this notebook, you will use two activation functions: - **Sigmoid**: $\sigma(Z) = \sigma(W A + b) = \frac{1}{ 1 + e^{-(W A + b)}}$. We have provided you with the `sigmoid` function. This function returns **two** items: the activation value "`a`" and a "`cache`" that contains "`Z`" (it's what we will feed in to the corresponding backward function). To use it you could just call: ``` python A, activation_cache = sigmoid(Z) ``` - **ReLU**: The mathematical formula for ReLu is $A = RELU(Z) = max(0, Z)$. We have provided you with the `relu` function. This function returns **two** items: the activation value "`A`" and a "`cache`" that contains "`Z`" (it's what we will feed in to the corresponding backward function). To use it you could just call: ``` python A, activation_cache = relu(Z) ``` For more convenience, you are going to group two functions (Linear and Activation) into one function (LINEAR->ACTIVATION). Hence, you will implement a function that does the LINEAR forward step followed by an ACTIVATION forward step. 
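The `sigmoid` and `relu` helpers are imported from `dnn_utils_v2` and their source isn't shown in this notebook; conceptually they look something like the following sketch (the course's actual implementations may differ in detail, but the `(activation, cache)` return shape is the same):

```python
import numpy as np

def sigmoid(Z):
    """Sigmoid activation: returns the activation A and a cache holding Z."""
    A = 1 / (1 + np.exp(-Z))
    return A, Z  # the cache is just Z, needed later by sigmoid_backward

def relu(Z):
    """ReLU activation: returns the activation A and a cache holding Z."""
    A = np.maximum(0, Z)
    return A, Z  # the cache is just Z, needed later by relu_backward

Z = np.array([[-1.0, 0.0, 2.0]])
A_sig, _ = sigmoid(Z)  # values squashed into (0, 1)
A_rel, _ = relu(Z)     # negative entries clipped to 0
```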
**Exercise**: Implement the forward propagation of the *LINEAR->ACTIVATION* layer. Mathematical relation is: $A^{[l]} = g(Z^{[l]}) = g(W^{[l]}A^{[l-1]} +b^{[l]})$ where the activation "g" can be sigmoid() or relu(). Use linear_forward() and the correct activation function. ``` # GRADED FUNCTION: linear_activation_forward def linear_activation_forward(A_prev, W, b, activation): """ Implement the forward propagation for the LINEAR->ACTIVATION layer Arguments: A_prev -- activations from previous layer (or input data): (size of previous layer, number of examples) W -- weights matrix: numpy array of shape (size of current layer, size of previous layer) b -- bias vector, numpy array of shape (size of the current layer, 1) activation -- the activation to be used in this layer, stored as a text string: "sigmoid" or "relu" Returns: A -- the output of the activation function, also called the post-activation value cache -- a python tuple containing "linear_cache" and "activation_cache"; stored for computing the backward pass efficiently """ if activation == "sigmoid": # Inputs: "A_prev, W, b". Outputs: "A, activation_cache". ### START CODE HERE ### (≈ 2 lines of code) Z, linear_cache = linear_forward(A_prev, W, b) # None A, activation_cache = sigmoid(Z) # None ### END CODE HERE ### elif activation == "relu": # Inputs: "A_prev, W, b". Outputs: "A, activation_cache". 
### START CODE HERE ### (≈ 2 lines of code) Z, linear_cache =linear_forward(A_prev, W, b) # None A, activation_cache = relu(Z) # None ### END CODE HERE ### assert (A.shape == (W.shape[0], A_prev.shape[1])) cache = (linear_cache, activation_cache) return A, cache A_prev, W, b = linear_activation_forward_test_case() A, linear_activation_cache = linear_activation_forward(A_prev, W, b, activation = "sigmoid") print("With sigmoid: A = " + str(A)) A, linear_activation_cache = linear_activation_forward(A_prev, W, b, activation = "relu") print("With ReLU: A = " + str(A)) ``` **Expected output**: <table style="width:35%"> <tr> <td> **With sigmoid: A ** </td> <td > [[ 0.96890023 0.11013289]]</td> </tr> <tr> <td> **With ReLU: A ** </td> <td > [[ 3.43896131 0. ]]</td> </tr> </table> **Note**: In deep learning, the "[LINEAR->ACTIVATION]" computation is counted as a single layer in the neural network, not two layers. ### d) L-Layer Model For even more convenience when implementing the $L$-layer Neural Net, you will need a function that replicates the previous one (`linear_activation_forward` with RELU) $L-1$ times, then follows that with one `linear_activation_forward` with SIGMOID. <img src="images/model_architecture_kiank.png" style="width:600px;height:300px;"> <caption><center> **Figure 2** : *[LINEAR -> RELU] $\times$ (L-1) -> LINEAR -> SIGMOID* model</center></caption><br> **Exercise**: Implement the forward propagation of the above model. **Instruction**: In the code below, the variable `AL` will denote $A^{[L]} = \sigma(Z^{[L]}) = \sigma(W^{[L]} A^{[L-1]} + b^{[L]})$. (This is sometimes also called `Yhat`, i.e., this is $\hat{Y}$.) **Tips**: - Use the functions you had previously written - Use a for loop to replicate [LINEAR->RELU] (L-1) times - Don't forget to keep track of the caches in the "caches" list. To add a new value `c` to a `list`, you can use `list.append(c)`. 
``` # GRADED FUNCTION: L_model_forward def L_model_forward(X, parameters): """ Implement forward propagation for the [LINEAR->RELU]*(L-1)->LINEAR->SIGMOID computation Arguments: X -- data, numpy array of shape (input size, number of examples) parameters -- output of initialize_parameters_deep() Returns: AL -- last post-activation value caches -- list of caches containing: every cache of linear_activation_forward() (there are L-1 of them, indexed from 0 to L-1) """ caches = [] A = X L = len(parameters) // 2 # number of layers in the neural network # Implement [LINEAR -> RELU]*(L-1). Add "cache" to the "caches" list. for l in range(1, L): A_prev = A ### START CODE HERE ### (≈ 2 lines of code) A, cache = linear_activation_forward(A_prev, parameters['W'+str(l)], parameters['b'+str(l)], "relu") # None caches.append(cache) # None ### END CODE HERE ### # Implement LINEAR -> SIGMOID. Add "cache" to the "caches" list. ### START CODE HERE ### (≈ 2 lines of code) AL, cache = linear_activation_forward(A, parameters['W'+str(L)], parameters['b'+str(L)], "sigmoid") # None caches.append(cache) # None ### END CODE HERE ### assert(AL.shape == (1,X.shape[1])) return AL, caches X, parameters = L_model_forward_test_case_2hidden() AL, caches = L_model_forward(X, parameters) print("AL = " + str(AL)) print("Length of caches list = " + str(len(caches))) ``` <table style="width:50%"> <tr> <td> **AL** </td> <td > [[ 0.03921668 0.70498921 0.19734387 0.04728177]]</td> </tr> <tr> <td> **Length of caches list ** </td> <td > 3 </td> </tr> </table> Great! Now you have a full forward propagation that takes the input X and outputs a row vector $A^{[L]}$ containing your predictions. It also records all intermediate values in "caches". Using $A^{[L]}$, you can compute the cost of your predictions. ## 5 - Cost function Now you will implement forward and backward propagation. You need to compute the cost, because you want to check if your model is actually learning. 
**Exercise**: Compute the cross-entropy cost $J$, using the following formula: $$-\frac{1}{m} \sum\limits_{i = 1}^{m} (y^{(i)}\log\left(a^{[L] (i)}\right) + (1-y^{(i)})\log\left(1- a^{[L](i)}\right)) \tag{7}$$ ``` # GRADED FUNCTION: compute_cost def compute_cost(AL, Y): """ Implement the cost function defined by equation (7). Arguments: AL -- probability vector corresponding to your label predictions, shape (1, number of examples) Y -- true "label" vector (for example: containing 0 if non-cat, 1 if cat), shape (1, number of examples) Returns: cost -- cross-entropy cost """ m = Y.shape[1] # Compute loss from aL and y. ### START CODE HERE ### (≈ 1 lines of code) cost = - 1 / m * np.sum(np.dot(Y, np.log(AL).T) + np.dot((1 - Y),np.log(1 - AL).T)) # None ### END CODE HERE ### cost = np.squeeze(cost) # To make sure your cost's shape is what we expect (e.g. this turns [[17]] into 17). assert(cost.shape == ()) return cost Y, AL = compute_cost_test_case() print("cost = " + str(compute_cost(AL, Y))) ``` **Expected Output**: <table> <tr> <td>**cost** </td> <td> 0.2797765635793422</td> </tr> </table> ## 6 - Backward propagation module Just like with forward propagation, you will implement helper functions for backpropagation. Remember that back propagation is used to calculate the gradient of the loss function with respect to the parameters. 
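Before diving into the backward functions, one way to build confidence that an analytic gradient is right is to compare it against a finite-difference estimate. A toy one-dimensional illustration (not part of the graded assignment; the loss and numbers here are made up):

```python
# Toy loss L(w) = (w * x - y)**2, with hand-derived gradient dL/dw = 2 * x * (w * x - y)
x, y, w = 3.0, 1.5, 0.8
analytic = 2 * x * (w * x - y)

eps = 1e-6  # small perturbation for the centered difference
numeric = (((w + eps) * x - y) ** 2 - ((w - eps) * x - y) ** 2) / (2 * eps)
# analytic and numeric should agree to many decimal places
```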
**Reminder**: <img src="images/backprop_kiank.png" style="width:650px;height:250px;"> <caption><center> **Figure 3** : Forward and Backward propagation for *LINEAR->RELU->LINEAR->SIGMOID* <br> *The purple blocks represent the forward propagation, and the red blocks represent the backward propagation.* </center></caption> <!-- For those of you who are expert in calculus (you don't need to be to do this assignment), the chain rule of calculus can be used to derive the derivative of the loss $\mathcal{L}$ with respect to $z^{[1]}$ in a 2-layer network as follows: $$\frac{d \mathcal{L}(a^{[2]},y)}{{dz^{[1]}}} = \frac{d\mathcal{L}(a^{[2]},y)}{{da^{[2]}}}\frac{{da^{[2]}}}{{dz^{[2]}}}\frac{{dz^{[2]}}}{{da^{[1]}}}\frac{{da^{[1]}}}{{dz^{[1]}}} \tag{8} $$ In order to calculate the gradient $dW^{[1]} = \frac{\partial L}{\partial W^{[1]}}$, you use the previous chain rule and you do $dW^{[1]} = dz^{[1]} \times \frac{\partial z^{[1]} }{\partial W^{[1]}}$. During the backpropagation, at each step you multiply your current gradient by the gradient corresponding to the specific layer to get the gradient you wanted. Equivalently, in order to calculate the gradient $db^{[1]} = \frac{\partial L}{\partial b^{[1]}}$, you use the previous chain rule and you do $db^{[1]} = dz^{[1]} \times \frac{\partial z^{[1]} }{\partial b^{[1]}}$. This is why we talk about **backpropagation**. !--> Now, similar to forward propagation, you are going to build the backward propagation in three steps: - LINEAR backward - LINEAR -> ACTIVATION backward where ACTIVATION computes the derivative of either the ReLU or sigmoid activation - [LINEAR -> RELU] $\times$ (L-1) -> LINEAR -> SIGMOID backward (whole model) ### 6.1 - Linear backward For layer $l$, the linear part is: $Z^{[l]} = W^{[l]} A^{[l-1]} + b^{[l]}$ (followed by an activation). Suppose you have already calculated the derivative $dZ^{[l]} = \frac{\partial \mathcal{L} }{\partial Z^{[l]}}$. You want to get $(dW^{[l]}, db^{[l]}, dA^{[l-1]})$. 
<img src="images/linearback_kiank.png" style="width:250px;height:300px;"> <caption><center> **Figure 4** </center></caption> The three outputs $(dW^{[l]}, db^{[l]}, dA^{[l-1]})$ are computed using the input $dZ^{[l]}$.Here are the formulas you need: $$ dW^{[l]} = \frac{\partial \mathcal{J} }{\partial W^{[l]}} = \frac{1}{m} dZ^{[l]} A^{[l-1] T} \tag{8}$$ $$ db^{[l]} = \frac{\partial \mathcal{J} }{\partial b^{[l]}} = \frac{1}{m} \sum_{i = 1}^{m} dZ^{[l](i)}\tag{9}$$ $$ dA^{[l-1]} = \frac{\partial \mathcal{L} }{\partial A^{[l-1]}} = W^{[l] T} dZ^{[l]} \tag{10}$$ **Exercise**: Use the 3 formulas above to implement linear_backward(). ``` # GRADED FUNCTION: linear_backward def linear_backward(dZ, cache): """ Implement the linear portion of backward propagation for a single layer (layer l) Arguments: dZ -- Gradient of the cost with respect to the linear output (of current layer l) cache -- tuple of values (A_prev, W, b) coming from the forward propagation in the current layer Returns: dA_prev -- Gradient of the cost with respect to the activation (of the previous layer l-1), same shape as A_prev dW -- Gradient of the cost with respect to W (current layer l), same shape as W db -- Gradient of the cost with respect to b (current layer l), same shape as b """ A_prev, W, b = cache m = A_prev.shape[1] ### START CODE HERE ### (≈ 3 lines of code) dW = 1 / m * np.dot(dZ, A_prev.T) # None db = 1 / m * np.sum(dZ, axis=1, keepdims=True) # None dA_prev = np.dot(W.T, dZ) # None ### END CODE HERE ### assert (dA_prev.shape == A_prev.shape) assert (dW.shape == W.shape) assert (db.shape == b.shape) return dA_prev, dW, db # Set up some test inputs dZ, linear_cache = linear_backward_test_case() dA_prev, dW, db = linear_backward(dZ, linear_cache) print ("dA_prev = "+ str(dA_prev)) print ("dW = " + str(dW)) print ("db = " + str(db)) ``` ** Expected Output**: ``` dA_prev = [[-1.15171336 0.06718465 -0.3204696 2.09812712] [ 0.60345879 -3.72508701 5.81700741 -3.84326836] [-0.4319552 -1.30987417 
1.72354705 0.05070578] [-0.38981415 0.60811244 -1.25938424 1.47191593] [-2.52214926 2.67882552 -0.67947465 1.48119548]] dW = [[ 0.07313866 -0.0976715 -0.87585828 0.73763362 0.00785716] [ 0.85508818 0.37530413 -0.59912655 0.71278189 -0.58931808] [ 0.97913304 -0.24376494 -0.08839671 0.55151192 -0.10290907]] db = [[-0.14713786] [-0.11313155] [-0.13209101]] ``` ### 6.2 - Linear-Activation backward Next, you will create **`linear_activation_backward`**, a function that merges the two steps of the backward pass for one layer: **`linear_backward`** and the backward step for the activation. To help you implement `linear_activation_backward`, we provided two backward functions: - **`sigmoid_backward`**: Implements the backward propagation for the SIGMOID unit. You can call it as follows: ```python dZ = sigmoid_backward(dA, activation_cache) ``` - **`relu_backward`**: Implements the backward propagation for the RELU unit. You can call it as follows: ```python dZ = relu_backward(dA, activation_cache) ``` If $g(.)$ is the activation function, `sigmoid_backward` and `relu_backward` compute $$dZ^{[l]} = dA^{[l]} * g'(Z^{[l]}) \tag{11}$$ **Exercise**: Implement the backpropagation for the *LINEAR->ACTIVATION* layer. ``` # GRADED FUNCTION: linear_activation_backward def linear_activation_backward(dA, cache, activation): """ Implement the backward propagation for the LINEAR->ACTIVATION layer.
Arguments: dA -- post-activation gradient for current layer l cache -- tuple of values (linear_cache, activation_cache) we store for computing backward propagation efficiently activation -- the activation to be used in this layer, stored as a text string: "sigmoid" or "relu" Returns: dA_prev -- Gradient of the cost with respect to the activation (of the previous layer l-1), same shape as A_prev dW -- Gradient of the cost with respect to W (current layer l), same shape as W db -- Gradient of the cost with respect to b (current layer l), same shape as b """ linear_cache, activation_cache = cache if activation == "relu": ### START CODE HERE ### (≈ 2 lines of code) dZ = relu_backward(dA, activation_cache) # None dA_prev, dW, db = linear_backward(dZ, linear_cache) # None ### END CODE HERE ### elif activation == "sigmoid": ### START CODE HERE ### (≈ 2 lines of code) dZ = sigmoid_backward(dA, activation_cache) # None dA_prev, dW, db = linear_backward(dZ, linear_cache) # None ### END CODE HERE ### return dA_prev, dW, db dAL, linear_activation_cache = linear_activation_backward_test_case() dA_prev, dW, db = linear_activation_backward(dAL, linear_activation_cache, activation = "sigmoid") print ("sigmoid:") print ("dA_prev = "+ str(dA_prev)) print ("dW = " + str(dW)) print ("db = " + str(db) + "\n") dA_prev, dW, db = linear_activation_backward(dAL, linear_activation_cache, activation = "relu") print ("relu:") print ("dA_prev = "+ str(dA_prev)) print ("dW = " + str(dW)) print ("db = " + str(db)) ``` **Expected output with sigmoid:** <table style="width:100%"> <tr> <td > dA_prev </td> <td >[[ 0.11017994 0.01105339] [ 0.09466817 0.00949723] [-0.05743092 -0.00576154]] </td> </tr> <tr> <td > dW </td> <td > [[ 0.10266786 0.09778551 -0.01968084]] </td> </tr> <tr> <td > db </td> <td > [[-0.05729622]] </td> </tr> </table> **Expected output with relu:** <table style="width:100%"> <tr> <td > dA_prev </td> <td > [[ 0.44090989 0. ] [ 0.37883606 0. ] [-0.2298228 0. 
]] </td> </tr> <tr> <td > dW </td> <td > [[ 0.44513824 0.37371418 -0.10478989]] </td> </tr> <tr> <td > db </td> <td > [[-0.20837892]] </td> </tr> </table> ### 6.3 - L-Model Backward Now you will implement the backward function for the whole network. Recall that when you implemented the `L_model_forward` function, at each iteration, you stored a cache which contains (X,W,b, and z). In the back propagation module, you will use those variables to compute the gradients. Therefore, in the `L_model_backward` function, you will iterate through all the hidden layers backward, starting from layer $L$. On each step, you will use the cached values for layer $l$ to backpropagate through layer $l$. Figure 5 below shows the backward pass. <img src="images/mn_backward.png" style="width:450px;height:300px;"> <caption><center> **Figure 5** : Backward pass </center></caption> **Initializing backpropagation**: To backpropagate through this network, we know that the output is $A^{[L]} = \sigma(Z^{[L]})$. Your code thus needs to compute `dAL` $= \frac{\partial \mathcal{L}}{\partial A^{[L]}}$. To do so, use this formula (derived using calculus, which you don't need in-depth knowledge of): ```python dAL = - (np.divide(Y, AL) - np.divide(1 - Y, 1 - AL)) # derivative of cost with respect to AL ``` You can then use this post-activation gradient `dAL` to keep going backward. As seen in Figure 5, you can now feed `dAL` into the LINEAR->SIGMOID backward function you implemented (which will use the cached values stored by the L_model_forward function). After that, you will have to use a `for` loop to iterate through all the other layers using the LINEAR->RELU backward function. You should store each dA, dW, and db in the grads dictionary. To do so, use this formula: $$grads["dW" + str(l)] = dW^{[l]}\tag{15} $$ For example, for $l=3$ this would store $dW^{[l]}$ in `grads["dW3"]`. **Exercise**: Implement backpropagation for the *[LINEAR->RELU] $\times$ (L-1) -> LINEAR -> SIGMOID* model.
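The initialization formula for `dAL` can be checked numerically; this is a standalone sketch with toy values (not part of the graded code), comparing it to a finite-difference derivative of the per-example cross-entropy loss:

```python
import numpy as np

# Toy check: dAL = -(Y/AL - (1-Y)/(1-AL)) should match the finite-difference
# derivative of L(AL) = -(Y log AL + (1-Y) log(1-AL)) element-wise.
AL = np.array([[0.8, 0.3]])   # illustrative predictions
Y = np.array([[1.0, 0.0]])    # illustrative labels

dAL = -(np.divide(Y, AL) - np.divide(1 - Y, 1 - AL))

def loss(a):
    # element-wise cross-entropy terms, before averaging over examples
    return -(Y * np.log(a) + (1 - Y) * np.log(1 - a))

eps = 1e-6
numeric = (loss(AL + eps) - loss(AL - eps)) / (2 * eps)  # central difference
print(np.allclose(dAL, numeric, atol=1e-5))  # True
```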
``` # GRADED FUNCTION: L_model_backward def L_model_backward(AL, Y, caches): """ Implement the backward propagation for the [LINEAR->RELU] * (L-1) -> LINEAR -> SIGMOID group Arguments: AL -- probability vector, output of the forward propagation (L_model_forward()) Y -- true "label" vector (containing 0 if non-cat, 1 if cat) caches -- list of caches containing: every cache of linear_activation_forward() with "relu" (it's caches[l], for l in range(L-1) i.e l = 0...L-2) the cache of linear_activation_forward() with "sigmoid" (it's caches[L-1]) Returns: grads -- A dictionary with the gradients grads["dA" + str(l)] = ... grads["dW" + str(l)] = ... grads["db" + str(l)] = ... """ grads = {} L = len(caches) # the number of layers m = AL.shape[1] Y = Y.reshape(AL.shape) # after this line, Y is the same shape as AL # Initializing the backpropagation ### START CODE HERE ### (1 line of code) dAL = - (np.divide(Y, AL) - np.divide(1 - Y, 1 - AL)) # derivative of cost with respect to AL # None ### END CODE HERE ### # Lth layer (SIGMOID -> LINEAR) gradients. Inputs: "dAL, current_cache". Outputs: "grads["dAL-1"], grads["dWL"], grads["dbL"] ### START CODE HERE ### (approx. 2 lines) current_cache = caches[L-1] # None grads["dA" + str(L-1)], grads["dW" + str(L)], grads["db" + str(L)] = linear_activation_backward(dAL, current_cache, "sigmoid") # None ### END CODE HERE ### # Loop from l=L-2 to l=0 for l in reversed(range(L-1)): # lth layer: (RELU -> LINEAR) gradients. # Inputs: "grads["dA" + str(l + 1)], current_cache". Outputs: "grads["dA" + str(l)] , grads["dW" + str(l + 1)] , grads["db" + str(l + 1)] ### START CODE HERE ### (approx. 
5 lines) current_cache = caches[l] # None dA_prev_temp, dW_temp, db_temp = linear_activation_backward(grads["dA" + str(l+1)], current_cache, "relu") # None grads["dA" + str(l)] = dA_prev_temp # None grads["dW" + str(l + 1)] = dW_temp # None grads["db" + str(l + 1)] = db_temp # None ### END CODE HERE ### return grads AL, Y_assess, caches = L_model_backward_test_case() grads = L_model_backward(AL, Y_assess, caches) print_grads(grads) ``` **Expected Output** <table style="width:60%"> <tr> <td > dW1 </td> <td > [[ 0.41010002 0.07807203 0.13798444 0.10502167] [ 0. 0. 0. 0. ] [ 0.05283652 0.01005865 0.01777766 0.0135308 ]] </td> </tr> <tr> <td > db1 </td> <td > [[-0.22007063] [ 0. ] [-0.02835349]] </td> </tr> <tr> <td > dA1 </td> <td > [[ 0.12913162 -0.44014127] [-0.14175655 0.48317296] [ 0.01663708 -0.05670698]] </td> </tr> </table> ### 6.4 - Update Parameters In this section you will update the parameters of the model, using gradient descent: $$ W^{[l]} = W^{[l]} - \alpha \text{ } dW^{[l]} \tag{16}$$ $$ b^{[l]} = b^{[l]} - \alpha \text{ } db^{[l]} \tag{17}$$ where $\alpha$ is the learning rate. After computing the updated parameters, store them in the parameters dictionary. **Exercise**: Implement `update_parameters()` to update your parameters using gradient descent. **Instructions**: Update parameters using gradient descent on every $W^{[l]}$ and $b^{[l]}$ for $l = 1, 2, ..., L$. ``` # GRADED FUNCTION: update_parameters def update_parameters(parameters, grads, learning_rate): """ Update parameters using gradient descent Arguments: parameters -- python dictionary containing your parameters grads -- python dictionary containing your gradients, output of L_model_backward learning_rate -- the learning rate, a scalar Returns: parameters -- python dictionary containing your updated parameters parameters["W" + str(l)] = ... parameters["b" + str(l)] = ... """ L = len(parameters) // 2 # number of layers in the neural network # Update rule for each parameter. Use a for loop.
### START CODE HERE ### (≈ 3 lines of code) for l in range(L): parameters["W" + str(l+1)] += - learning_rate * grads["dW" + str(l + 1)] # None parameters["b" + str(l+1)] += - learning_rate * grads["db" + str(l + 1)] # None ### END CODE HERE ### return parameters parameters, grads = update_parameters_test_case() parameters = update_parameters(parameters, grads, 0.1) print ("W1 = "+ str(parameters["W1"])) print ("b1 = "+ str(parameters["b1"])) print ("W2 = "+ str(parameters["W2"])) print ("b2 = "+ str(parameters["b2"])) ``` **Expected Output**: <table style="width:100%"> <tr> <td > W1 </td> <td > [[-0.59562069 -0.09991781 -2.14584584 1.82662008] [-1.76569676 -0.80627147 0.51115557 -1.18258802] [-1.0535704 -0.86128581 0.68284052 2.20374577]] </td> </tr> <tr> <td > b1 </td> <td > [[-0.04659241] [-1.28888275] [ 0.53405496]] </td> </tr> <tr> <td > W2 </td> <td > [[-0.55569196 0.0354055 1.32964895]]</td> </tr> <tr> <td > b2 </td> <td > [[-0.84610769]] </td> </tr> </table> ## 7 - Conclusion Congrats on implementing all the functions required for building a deep neural network! We know it was a long assignment but going forward it will only get better. The next part of the assignment is easier. In the next assignment you will put all these together to build two models: - A two-layer neural network - An L-layer neural network You will in fact use these models to classify cat vs non-cat images!
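To see how the helpers built in this notebook fit together, here is a hedged, self-contained toy: a 2-layer LINEAR->RELU->LINEAR->SIGMOID network trained with the update rule (16)-(17) on synthetic data. All sizes, the seed, and the learning rate are illustrative assumptions, not values from the assignment.

```python
import numpy as np

np.random.seed(1)
X = np.random.randn(2, 8)                       # 2 features, 8 examples
Y = (X[0:1, :] + X[1:2, :] > 0).astype(float)   # separable toy labels

W1, b1 = np.random.randn(3, 2) * 0.1, np.zeros((3, 1))
W2, b2 = np.random.randn(1, 3) * 0.1, np.zeros((1, 1))
alpha, m = 0.5, X.shape[1]

for i in range(500):
    # forward pass: LINEAR -> RELU -> LINEAR -> SIGMOID
    Z1 = W1 @ X + b1
    A1 = np.maximum(0, Z1)                      # ReLU
    Z2 = W2 @ A1 + b2
    A2 = 1 / (1 + np.exp(-Z2))                  # sigmoid
    # backward pass: dZ2 = A2 - Y for sigmoid + cross-entropy, then formulas (8)-(10)
    dZ2 = A2 - Y
    dW2 = dZ2 @ A1.T / m
    db2 = dZ2.sum(axis=1, keepdims=True) / m
    dA1 = W2.T @ dZ2
    dZ1 = dA1 * (Z1 > 0)                        # ReLU backward
    dW1 = dZ1 @ X.T / m
    db1 = dZ1.sum(axis=1, keepdims=True) / m
    # update rule (16)-(17)
    W1 -= alpha * dW1; b1 -= alpha * db1
    W2 -= alpha * dW2; b2 -= alpha * db2

A2c = np.clip(A2, 1e-10, 1 - 1e-10)             # guard the logs
cost = -(Y * np.log(A2c) + (1 - Y) * np.log(1 - A2c)).mean()
print(cost < 0.5)  # the toy network fits this small separable set easily
```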
# Preliminary instructions To follow the code in this chapter, the `yfinance` package must be installed in your environment. If you do not have this installed yet, review Chapter 4 for instructions on how to do so. # Chapter 9: Risk is a Number ``` # Chapter 9: Risk is a Number import pandas as pd import numpy as np import yfinance as yf %matplotlib inline import matplotlib.pyplot as plt ``` #### Mock Strategy: Turtle for dummies ``` # Chapter 9: Risk is a Number def regime_breakout(df,_h,_l,window): hl = np.where(df[_h] == df[_h].rolling(window).max(),1, np.where(df[_l] == df[_l].rolling(window).min(), -1,np.nan)) roll_hl = pd.Series(index= df.index, data= hl).fillna(method= 'ffill') return roll_hl def turtle_trader(df, _h, _l, slow, fast): ''' _slow: Long/Short direction _fast: trailing stop loss ''' _slow = regime_breakout(df,_h,_l,window = slow) _fast = regime_breakout(df,_h,_l,window = fast) turtle = pd.Series(index= df.index, data = np.where(_slow == 1,np.where(_fast == 1,1,0), np.where(_slow == -1, np.where(_fast ==-1,-1,0),0))) return turtle ``` #### Run the strategy with Softbank in absolute Plot: Softbank turtle for dummies, positions, and returns Plot: Softbank cumulative returns and Sharpe ratios: rolling and cumulative ``` # Chapter 9: Risk is a Number ticker = '9984.T' # Softbank start = '2017-12-31' end = None df = round(yf.download(tickers= ticker,start= start, end = end, interval = "1d",group_by = 'column',auto_adjust = True, prepost = True, threads = True, proxy = None),0) slow = 50 fast = 20 df['tt'] = turtle_trader(df, _h= 'High', _l= 'Low', slow= slow,fast= fast) df['stop_loss'] = np.where(df['tt'] == 1, df['Low'].rolling(fast).min(), np.where(df['tt'] == -1, df['High'].rolling(fast).max(),np.nan)) df['tt_chg1D'] = df['Close'].diff() * df['tt'].shift() df['tt_PL_cum'] = df['tt_chg1D'].cumsum() df['tt_returns'] = df['Close'].pct_change() * df['tt'].shift() tt_log_returns = np.log(df['Close']/df['Close'].shift()) * df['tt'].shift() df['tt_cumul']
= tt_log_returns.cumsum().apply(np.exp) - 1 df[['Close','stop_loss','tt','tt_cumul']].plot(secondary_y=['tt','tt_cumul'], figsize=(20,8),style= ['k','r--','b:','b'], title= str(ticker)+' Close Price, Turtle L/S entries, cumulative returns') df[['tt_PL_cum','tt_chg1D']].plot(secondary_y=['tt_chg1D'], figsize=(20,8),style= ['b','c:'], title= str(ticker) +' Daily P&L & Cumulative P&L') ``` #### Sharpe ratio: the right mathematical answer to the wrong question Plot: Softbank cumulative returns and Sharpe ratios: rolling and cumulative ``` # Chapter 9: Risk is a Number r_f = 0.00001 # risk free returns def rolling_sharpe(returns, r_f, window): avg_returns = returns.rolling(window).mean() std_returns = returns.rolling(window).std(ddof=0) return (avg_returns - r_f) / std_returns def expanding_sharpe(returns, r_f): avg_returns = returns.expanding().mean() std_returns = returns.expanding().std(ddof=0) return (avg_returns - r_f) / std_returns window= 252 df['sharpe_roll'] = rolling_sharpe(returns= tt_log_returns, r_f= r_f, window= window) * 252**0.5 df['sharpe']= expanding_sharpe(returns=tt_log_returns,r_f= r_f) * 252**0.5 df[window:][['tt_cumul','sharpe_roll','sharpe'] ].plot(figsize = (20,8),style = ['b','c-.','c'],grid=True, title = str(ticker)+' cumulative returns, Sharpe ratios: rolling & cumulative') ``` ### Grit Index This formula was originally invented by Peter G. Martin in 1987 and published as the Ulcer Index in his book The Investor's Guide to Fidelity Funds. Legendary trader Ed Seykota recycled it into the Seykota Lake ratio. Investors react to drawdowns in three ways: 1. Magnitude: never test the stomach of your investors 2. Frequency: never test the nerves of your investors 3. Duration: never test the patience of your investors The Grit calculation sequence is as follows: 1. Calculate the peak cumulative returns using rolling().max() or expanding().max() 2. Calculate the drawdowns from the peak and square them 3.
Sum the squared drawdowns and take the square root of the total: this is the surface of losses (the Ulcer Index) 4. Divide the cumulative returns by the surface of losses Plot: Softbank cumulative returns and Grit ratios: rolling and cumulative ``` # Chapter 9: Risk is a Number def rolling_grit(cumul_returns, window): tt_rolling_peak = cumul_returns.rolling(window).max() drawdown_squared = (cumul_returns - tt_rolling_peak) ** 2 ulcer = drawdown_squared.rolling(window).sum() ** 0.5 return cumul_returns / ulcer def expanding_grit(cumul_returns): tt_peak = cumul_returns.expanding().max() drawdown_squared = (cumul_returns - tt_peak) ** 2 ulcer = drawdown_squared.expanding().sum() ** 0.5 return cumul_returns / ulcer window = 252 df['grit_roll'] = rolling_grit(cumul_returns= df['tt_cumul'] , window = window) df['grit'] = expanding_grit(cumul_returns= df['tt_cumul']) df[window:][['tt_cumul','grit_roll', 'grit'] ].plot(figsize = (20,8), secondary_y = 'tt_cumul',style = ['b','g-.','g'],grid=True, title = str(ticker) + ' cumulative returns & Grit Ratios: rolling & cumulative '+ str(window) + ' days') ``` ### Common Sense Ratio 1. Risk metric for trend following strategies: profit ratio, gain-to-pain ratio 2. Risk metric for mean reversion strategies: tail ratio 3.
Combined risk metric: profit ratio * tail ratio Plot: Cumulative returns and common sense ratios: cumulative and rolling ``` # Chapter 9: Risk is a Number def rolling_profits(returns,window): profit_roll = returns.copy() profit_roll[profit_roll < 0] = 0 profit_roll_sum = profit_roll.rolling(window).sum().fillna(method='ffill') return profit_roll_sum def rolling_losses(returns,window): loss_roll = returns.copy() loss_roll[loss_roll > 0] = 0 loss_roll_sum = loss_roll.rolling(window).sum().fillna(method='ffill') return loss_roll_sum def expanding_profits(returns): profit_roll = returns.copy() profit_roll[profit_roll < 0] = 0 profit_roll_sum = profit_roll.expanding().sum().fillna(method='ffill') return profit_roll_sum def expanding_losses(returns): loss_roll = returns.copy() loss_roll[loss_roll > 0] = 0 loss_roll_sum = loss_roll.expanding().sum().fillna(method='ffill') return loss_roll_sum def profit_ratio(profits, losses): pr = profits.fillna(method='ffill') / abs(losses.fillna(method='ffill')) return pr def rolling_tail_ratio(cumul_returns, window, percentile,limit): left_tail = np.abs(cumul_returns.rolling(window).quantile(percentile)) right_tail = cumul_returns.rolling(window).quantile(1-percentile) np.seterr(all='ignore') tail = np.maximum(np.minimum(right_tail / left_tail,limit),-limit) return tail def expanding_tail_ratio(cumul_returns, percentile,limit): left_tail = np.abs(cumul_returns.expanding().quantile(percentile)) right_tail = cumul_returns.expanding().quantile(1 - percentile) np.seterr(all='ignore') tail = np.maximum(np.minimum(right_tail / left_tail,limit),-limit) return tail def common_sense_ratio(pr,tr): return pr * tr ``` #### Plot: Cumulative returns and profit ratios: cumulative and rolling ``` # Chapter 9: Risk is a Number window = 252 df['pr_roll'] = profit_ratio(profits= rolling_profits(returns = tt_log_returns,window = window), losses= rolling_losses(returns = tt_log_returns,window = window)) df['pr'] = profit_ratio(profits= 
expanding_profits(returns= tt_log_returns), losses= expanding_losses(returns = tt_log_returns)) df[window:] [['tt_cumul','pr_roll','pr'] ].plot(figsize = (20,8),secondary_y= ['tt_cumul'], style = ['r','y','y:'],grid=True) ``` #### Plot: Cumulative returns and common sense ratios: cumulative and rolling ``` # Chapter 9: Risk is a Number window = 252 df['tr_roll'] = rolling_tail_ratio(cumul_returns= df['tt_cumul'], window= window, percentile= 0.05,limit=5) df['tr'] = expanding_tail_ratio(cumul_returns= df['tt_cumul'], percentile= 0.05,limit=5) df['csr_roll'] = common_sense_ratio(pr= df['pr_roll'],tr= df['tr_roll']) df['csr'] = common_sense_ratio(pr= df['pr'],tr= df['tr']) df[window:] [['tt_cumul','csr_roll','csr'] ].plot(secondary_y= ['tt_cumul'],style = ['b','r-.','r'], figsize = (20,8), title= str(ticker)+' cumulative returns, Common Sense Ratios: cumulative & rolling '+str(window)+ ' days') ``` ### T-stat of gain expectancy, Van Tharp's System Quality Number (SQN) Plot: Softbank cumulative returns and t-stat (Van Tharp's SQN): cumulative and rolling ``` # Chapter 9: Risk is a Number def expectancy(win_rate,avg_win,avg_loss): # win% * avg_win% - loss% * abs(avg_loss%) return win_rate * avg_win + (1-win_rate) * avg_loss def t_stat(signal_count, trading_edge): sqn = (signal_count ** 0.5) * trading_edge / trading_edge.std(ddof=0) return sqn # Trade Count df['trades'] = df.loc[(df['tt'].diff() !=0) & (pd.notnull(df['tt'])),'tt'].abs().cumsum() signal_count = df['trades'].fillna(method='ffill') signal_roll = signal_count.diff(window) # Rolling t_stat window = 252 win_roll = tt_log_returns.copy() win_roll[win_roll < 0] = np.nan win_rate_roll = win_roll.rolling(window,min_periods=0).count() / window avg_win_roll = rolling_profits(returns = tt_log_returns,window = window) / window avg_loss_roll = rolling_losses(returns = tt_log_returns,window = window) / window edge_roll= expectancy(win_rate= win_rate_roll,avg_win=avg_win_roll,avg_loss=avg_loss_roll) df['sqn_roll'] = 
t_stat(signal_count= signal_roll, trading_edge=edge_roll) # Cumulative t-stat tt_win_count = tt_log_returns[tt_log_returns>0].expanding().count().fillna(method='ffill') tt_count = tt_log_returns[tt_log_returns!=0].expanding().count().fillna(method='ffill') win_rate = (tt_win_count / tt_count).fillna(method='ffill') avg_win = expanding_profits(returns= tt_log_returns) / tt_count avg_loss = expanding_losses(returns= tt_log_returns) / tt_count trading_edge = expectancy(win_rate,avg_win,avg_loss).fillna(method='ffill') df['sqn'] = t_stat(signal_count, trading_edge) df[window:][['tt_cumul','sqn','sqn_roll'] ].plot(figsize = (20,8), secondary_y= ['tt_cumul'], grid= True,style = ['b','y','y-.'], title= str(ticker)+' Cumulative Returns and SQN: cumulative & rolling'+ str(window)+' days') ``` ### Robustness score Combined risk metric: 1. The Grit Index integrates losses throughout the period 2. The CSR combines risks endemic to the two types of strategies in a single measure 3. The t-stat SQN incorporates trading frequency into the trading edge formula to show the most efficient use of capital. ``` # Chapter 9: Risk is a Number def robustness_score(grit,csr,sqn): start_date = max(grit[pd.notnull(grit)].index[0], csr[pd.notnull(csr)].index[0], sqn[pd.notnull(sqn)].index[0]) score = grit * csr * sqn / (grit[start_date] * csr[start_date] * sqn[start_date]) return score df['score_roll'] = robustness_score(grit = df['grit_roll'], csr = df['csr_roll'],sqn= df['sqn_roll']) df['score'] = robustness_score(grit = df['grit'],csr = df['csr'],sqn = df['sqn']) df[window:][['tt_cumul','score','score_roll']].plot( secondary_y= ['score'],figsize=(20,6),style = ['b','k','k-.'], title= str(ticker)+' Cumulative Returns and Robustness Score: cumulative & rolling '+ str(window)+' days') ```
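All of the ratios in this chapter only need a return series, so they can be exercised without the `yfinance` download. Below is a self-contained sketch on synthetic log returns (the drift and volatility figures are illustrative assumptions), reproducing the expanding Sharpe and Grit calculations:

```python
import numpy as np
import pandas as pd

# Synthetic daily log returns over ~2 business years
np.random.seed(42)
idx = pd.date_range('2020-01-01', periods=504, freq='B')
log_returns = pd.Series(np.random.normal(0.0005, 0.01, len(idx)), index=idx)
cumul = log_returns.cumsum().apply(np.exp) - 1

# expanding Sharpe, annualised with sqrt(252), as in the chapter
r_f = 0.00001
sharpe = (log_returns.expanding().mean() - r_f) / log_returns.expanding().std(ddof=0) * 252 ** 0.5

# expanding Grit: cumulative returns divided by the square root of summed squared drawdowns
peak = cumul.expanding().max()
ulcer = ((cumul - peak) ** 2).expanding().sum() ** 0.5
grit = cumul / ulcer

print(round(sharpe.iloc[-1], 2), round(grit.iloc[-1], 2))
```

This makes it easy to sanity-check that each metric produces finite values before pointing the pipeline at real price data.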
# Loops https://python.sdv.univ-paris-diderot.fr/05_boucles_comparaisons/ Repeating actions ## Iterating over the elements of a list ``` placard = ["farine", "oeufs", "lait", "sucre"] for ingredient in placard: print(ingredient) ``` Notes: - The variable *ingredient* is called the *iteration variable*; it takes a new value at each iteration of the loop. - The line starting with `for` always ends with `:` - The instruction block `print(ingredient)` is indented: the content of the block is shifted to the right. ``` placard = ["farine", "oeufs", "lait", "sucre"] for ingredient in placard: print("Adding an ingredient:") print(ingredient) print("The crepes are ready!") ``` Here, the instruction block of the `for` loop is made up of 2 instructions: ``` print("Adding an ingredient:") print(ingredient) ``` The instruction `print("The crepes are ready!")` is outside the instruction block. ## Iterating over the characters of a string ``` sequence = "ATCG" for base in sequence: print(base) sequence = "ATCG" for base in sequence: print("The base is: {}".format(base)) ``` # Tests https://python.sdv.univ-paris-diderot.fr/06_tests/ Making decisions ``` nombre = 2 if nombre == 2: print("Won!") ``` Notes: - `:` after `if` - An instruction block after `if` ## Tests with two cases ``` nombre = 2 if nombre == 2: print("Won!") else: print("Lost!") ``` Notes: - `:` after `if` and `else` - An instruction block after `if` - An instruction block after `else` ## Tests with several cases ``` base = "T" if base == "A": print("This is an adenine") elif base == "T": print("This is a thymine") elif base == "C": print("This is a cytosine") elif base == "G": print("This is a guanine") ``` A "default" case can also be defined with `else`: ``` base = "P" if base == "A": print("This is an adenine") elif base == "T": print("This is a thymine") elif base == "C": print("This is a cytosine") elif base == "G": print("This is a guanine") else: print("Go review your biology!") ``` ## Random draw ``` import random random.choice(["Sandra", "Julie", "Magali", "Benoist", "Hubert"]) base = random.choice(["A", "T", "C", "G"]) if base == "A": print("This is an adenine") elif base == "T": print("This is a thymine") elif base == "C": print("This is a cytosine") elif base == "G": print("This is a guanine") ``` Notes: - `:` after `if` and `elif` - An instruction block after `if` - An instruction block after `elif` ## Watch out for indentation! ``` nombres = [4, 5, 6] for nb in nombres: if nb == 5: print("The test is true") print("because the variable nb equals {}".format(nb)) nombres = [4, 5, 6] for nb in nombres: if nb == 5: print("The test is true") print("because the variable nb equals {}".format(nb)) ``` # Exercises ## A student's grades Here are a student's grades: ``` notes = [14, 9, 6, 8, 12] ``` Compute the average of these grades. Use formatted printing to display the value of the average with two decimal places. ## Complementary sequence The list below represents the sequence of one strand of DNA: ``` sequence = ["A","C","G","T","T","A","G","C","T","A","A","C","G"] ``` Write code that transforms this sequence into its complementary sequence. Reminder: the complementary sequence is obtained by replacing A with T, T with A, C with G, and G with C.
``` import numpy as np # linear algebra import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv) import os print(os.listdir("../input")) import time # import pytorch import torch import torch.nn as nn import torch.nn.functional as F from torch.optim import SGD,Adam,lr_scheduler from torch.utils.data import random_split import torchvision from torchvision import transforms, datasets from torch.utils.data import DataLoader # define transformations for train train_transform = transforms.Compose([ transforms.RandomHorizontalFlip(p=.40), transforms.RandomRotation(30), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])]) # define transformations for test test_transform = transforms.Compose([ transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])]) # define training dataloader def get_training_dataloader(train_transform, batch_size=128, num_workers=0, shuffle=True): """ return training dataloader Args: train_transform: transforms for train dataset path: path to cifar10 training python dataset batch_size: dataloader batchsize num_workers: dataloader num_workers shuffle: whether to shuffle Returns: train_data_loader:torch dataloader object """ transform_train = train_transform cifar10_training = torchvision.datasets.CIFAR10(root='.', train=True, download=True, transform=transform_train) cifar10_training_loader = DataLoader( cifar10_training, shuffle=shuffle, num_workers=num_workers, batch_size=batch_size) return cifar10_training_loader # define test dataloader def get_testing_dataloader(test_transform, batch_size=128, num_workers=0, shuffle=True): """ return testing dataloader Args: test_transform: transforms for test dataset path: path to cifar10 test python dataset batch_size: dataloader batchsize num_workers: dataloader num_workers shuffle: whether to shuffle Returns: cifar10_test_loader:torch dataloader object """ transform_test = test_transform cifar10_test = torchvision.datasets.CIFAR10(root='.',
train=False, download=True, transform=transform_test) cifar10_test_loader = DataLoader( cifar10_test, shuffle=shuffle, num_workers=num_workers, batch_size=batch_size) return cifar10_test_loader # implement mish activation function def f_mish(input): ''' Applies the mish function element-wise: mish(x) = x * tanh(softplus(x)) = x * tanh(ln(1 + exp(x))) ''' return input * torch.tanh(F.softplus(input)) # implement class wrapper for mish activation function class mish(nn.Module): ''' Applies the mish function element-wise: mish(x) = x * tanh(softplus(x)) = x * tanh(ln(1 + exp(x))) Shape: - Input: (N, *) where * means, any number of additional dimensions - Output: (N, *), same shape as the input Examples: >>> m = mish() >>> input = torch.randn(2) >>> output = m(input) ''' def __init__(self): ''' Init method. ''' super().__init__() def forward(self, input): ''' Forward pass of the function. ''' return f_mish(input) # implement swish activation function def f_swish(input): ''' Applies the swish function element-wise: swish(x) = x * sigmoid(x) ''' return input * torch.sigmoid(input) # implement class wrapper for swish activation function class swish(nn.Module): ''' Applies the swish function element-wise: swish(x) = x * sigmoid(x) Shape: - Input: (N, *) where * means, any number of additional dimensions - Output: (N, *), same shape as the input Examples: >>> m = swish() >>> input = torch.randn(2) >>> output = m(input) ''' def __init__(self): ''' Init method. ''' super().__init__() def forward(self, input): ''' Forward pass of the function. 
''' return f_swish(input) class BasicResidualSEBlock(nn.Module): expansion = 1 def __init__(self, in_channels, out_channels, stride, r=16, activation = 'relu'): super().__init__() if activation == 'relu': f_activation = nn.ReLU(inplace=True) self.activation = F.relu if activation == 'swish': f_activation = swish() self.activation = f_swish if activation == 'mish': f_activation = mish() self.activation = f_mish self.residual = nn.Sequential( nn.Conv2d(in_channels, out_channels, 3, stride=stride, padding=1), nn.BatchNorm2d(out_channels), f_activation, nn.Conv2d(out_channels, out_channels * self.expansion, 3, padding=1), nn.BatchNorm2d(out_channels * self.expansion), f_activation ) self.shortcut = nn.Sequential() if stride != 1 or in_channels != out_channels * self.expansion: self.shortcut = nn.Sequential( nn.Conv2d(in_channels, out_channels * self.expansion, 1, stride=stride), nn.BatchNorm2d(out_channels * self.expansion) ) self.squeeze = nn.AdaptiveAvgPool2d(1) self.excitation = nn.Sequential( nn.Linear(out_channels * self.expansion, out_channels * self.expansion // r), f_activation, nn.Linear(out_channels * self.expansion // r, out_channels * self.expansion), nn.Sigmoid() ) def forward(self, x): shortcut = self.shortcut(x) residual = self.residual(x) squeeze = self.squeeze(residual) squeeze = squeeze.view(squeeze.size(0), -1) excitation = self.excitation(squeeze) excitation = excitation.view(residual.size(0), residual.size(1), 1, 1) x = residual * excitation.expand_as(residual) + shortcut return self.activation(x) class BottleneckResidualSEBlock(nn.Module): expansion = 4 def __init__(self, in_channels, out_channels, stride, r=16, activation = 'relu'): super().__init__() if activation == 'relu': f_activation = nn.ReLU(inplace=True) self.activation = F.relu if activation == 'swish': f_activation = swish() self.activation = f_swish if activation == 'mish': f_activation = mish() self.activation = f_mish self.residual = nn.Sequential( nn.Conv2d(in_channels, 
out_channels, 1), nn.BatchNorm2d(out_channels), f_activation, nn.Conv2d(out_channels, out_channels, 3, stride=stride, padding=1), nn.BatchNorm2d(out_channels), f_activation, nn.Conv2d(out_channels, out_channels * self.expansion, 1), nn.BatchNorm2d(out_channels * self.expansion), f_activation ) self.squeeze = nn.AdaptiveAvgPool2d(1) self.excitation = nn.Sequential( nn.Linear(out_channels * self.expansion, out_channels * self.expansion // r), f_activation, nn.Linear(out_channels * self.expansion // r, out_channels * self.expansion), nn.Sigmoid() ) self.shortcut = nn.Sequential() if stride != 1 or in_channels != out_channels * self.expansion: self.shortcut = nn.Sequential( nn.Conv2d(in_channels, out_channels * self.expansion, 1, stride=stride), nn.BatchNorm2d(out_channels * self.expansion) ) def forward(self, x): shortcut = self.shortcut(x) residual = self.residual(x) squeeze = self.squeeze(residual) squeeze = squeeze.view(squeeze.size(0), -1) excitation = self.excitation(squeeze) excitation = excitation.view(residual.size(0), residual.size(1), 1, 1) x = residual * excitation.expand_as(residual) + shortcut return self.activation(x) class SEResNet(nn.Module): def __init__(self, block, block_num, class_num=10, activation = 'relu'): super().__init__() self.in_channels = 64 if activation == 'relu': f_activation = nn.ReLU(inplace=True) self.activation = F.relu if activation == 'swish': f_activation = swish() self.activation = f_swish if activation == 'mish': f_activation = mish() self.activation = f_mish self.pre = nn.Sequential( nn.Conv2d(3, 64, 3, padding=1), nn.BatchNorm2d(64), f_activation ) self.stage1 = self._make_stage(block, block_num[0], 64, 1, activation = activation) self.stage2 = self._make_stage(block, block_num[1], 128, 2, activation = activation) self.stage3 = self._make_stage(block, block_num[2], 256, 2, activation = activation) self.stage4 = self._make_stage(block, block_num[3], 512, 2, activation = activation) self.linear = nn.Linear(self.in_channels,
class_num) def forward(self, x): x = self.pre(x) x = self.stage1(x) x = self.stage2(x) x = self.stage3(x) x = self.stage4(x) x = F.adaptive_avg_pool2d(x, 1) x = x.view(x.size(0), -1) x = self.linear(x) return x def _make_stage(self, block, num, out_channels, stride, activation = 'relu'): layers = [] layers.append(block(self.in_channels, out_channels, stride, activation = activation)) self.in_channels = out_channels * block.expansion while num - 1: layers.append(block(self.in_channels, out_channels, 1, activation = activation)) num -= 1 return nn.Sequential(*layers) def seresnet18(activation = 'relu'): return SEResNet(BasicResidualSEBlock, [2, 2, 2, 2], activation = activation) def seresnet34(activation = 'relu'): return SEResNet(BasicResidualSEBlock, [3, 4, 6, 3], activation = activation) def seresnet50(activation = 'relu'): return SEResNet(BottleneckResidualSEBlock, [3, 4, 6, 3], activation = activation) def seresnet101(activation = 'relu'): return SEResNet(BottleneckResidualSEBlock, [3, 4, 23, 3], activation = activation) def seresnet152(activation = 'relu'): return SEResNet(BottleneckResidualSEBlock, [3, 8, 36, 3], activation = activation) trainloader = get_training_dataloader(train_transform) testloader = get_testing_dataloader(test_transform) epochs = 100 batch_size = 128 learning_rate = 0.001 device = torch.device('cuda:0' if torch.cuda.is_available() else "cpu") device model = seresnet18(activation = 'mish') # set loss function criterion = nn.CrossEntropyLoss() # set optimizer, only train the classifier parameters, feature parameters are frozen optimizer = Adam(model.parameters(), lr=learning_rate) train_stats = pd.DataFrame(columns = ['Epoch', 'Time per epoch', 'Avg time per step', 'Train loss', 'Train accuracy', 'Train top-3 accuracy','Test loss', 'Test accuracy', 'Test top-3 accuracy']) #train the model model.to(device) steps = 0 running_loss = 0 for epoch in range(epochs): since = time.time() train_accuracy = 0 top3_train_accuracy = 0 for inputs, labels 
in trainloader: steps += 1 # Move input and label tensors to the default device inputs, labels = inputs.to(device), labels.to(device) optimizer.zero_grad() logps = model.forward(inputs) loss = criterion(logps, labels) loss.backward() optimizer.step() running_loss += loss.item() # calculate train top-1 accuracy ps = torch.exp(logps) top_p, top_class = ps.topk(1, dim=1) equals = top_class == labels.view(*top_class.shape) train_accuracy += torch.mean(equals.type(torch.FloatTensor)).item() # Calculate train top-3 accuracy np_top3_class = ps.topk(3, dim=1)[1].cpu().numpy() target_numpy = labels.cpu().numpy() top3_train_accuracy += np.mean([1 if target_numpy[i] in np_top3_class[i] else 0 for i in range(0, len(target_numpy))]) time_elapsed = time.time() - since test_loss = 0 test_accuracy = 0 top3_test_accuracy = 0 model.eval() with torch.no_grad(): for inputs, labels in testloader: inputs, labels = inputs.to(device), labels.to(device) logps = model.forward(inputs) batch_loss = criterion(logps, labels) test_loss += batch_loss.item() # Calculate test top-1 accuracy ps = torch.exp(logps) top_p, top_class = ps.topk(1, dim=1) equals = top_class == labels.view(*top_class.shape) test_accuracy += torch.mean(equals.type(torch.FloatTensor)).item() # Calculate test top-3 accuracy np_top3_class = ps.topk(3, dim=1)[1].cpu().numpy() target_numpy = labels.cpu().numpy() top3_test_accuracy += np.mean([1 if target_numpy[i] in np_top3_class[i] else 0 for i in range(0, len(target_numpy))]) print(f"Epoch {epoch+1}/{epochs}.. " f"Time per epoch: {time_elapsed:.4f}.. " f"Average time per step: {time_elapsed/len(trainloader):.4f}.. " f"Train loss: {running_loss/len(trainloader):.4f}.. " f"Train accuracy: {train_accuracy/len(trainloader):.4f}.. " f"Top-3 train accuracy: {top3_train_accuracy/len(trainloader):.4f}.. " f"Test loss: {test_loss/len(testloader):.4f}.. " f"Test accuracy: {test_accuracy/len(testloader):.4f}.. 
" f"Top-3 test accuracy: {top3_test_accuracy/len(testloader):.4f}") train_stats = train_stats.append({'Epoch': epoch, 'Time per epoch':time_elapsed, 'Avg time per step': time_elapsed/len(trainloader), 'Train loss' : running_loss/len(trainloader), 'Train accuracy': train_accuracy/len(trainloader), 'Train top-3 accuracy':top3_train_accuracy/len(trainloader),'Test loss' : test_loss/len(testloader), 'Test accuracy': test_accuracy/len(testloader), 'Test top-3 accuracy':top3_test_accuracy/len(testloader)}, ignore_index=True) running_loss = 0 model.train() train_stats.to_csv('train_log_SENet18_Mish.csv') ```
``` %matplotlib inline ``` PyTorch: Defining New autograd Functions ---------------------------------------- A third order polynomial, trained to predict $y=\sin(x)$ from $-\pi$ to $\pi$ by minimizing squared Euclidean distance. Instead of writing the polynomial as $y=a+bx+cx^2+dx^3$, we write the polynomial as $y=a+b P_3(c+dx)$ where $P_3(x)=\frac{1}{2}\left(5x^3-3x\right)$ is the `Legendre polynomial`_ of degree three. https://en.wikipedia.org/wiki/Legendre_polynomials This implementation computes the forward pass using operations on PyTorch Tensors, and uses PyTorch autograd to compute gradients. In this implementation we implement our own custom autograd function to perform $P_3'(x)$. By mathematics, $P_3'(x)=\frac{3}{2}\left(5x^2-1\right)$ ``` import torch import math class LegendrePolynomial3(torch.autograd.Function): """ We can implement our own custom autograd Functions by subclassing torch.autograd.Function and implementing the forward and backward passes which operate on Tensors. """ @staticmethod def forward(ctx, input): """ In the forward pass we receive a Tensor containing the input and return a Tensor containing the output. ctx is a context object that can be used to stash information for backward computation. You can cache arbitrary objects for use in the backward pass using the ctx.save_for_backward method. """ ctx.save_for_backward(input) return 0.5 * (5 * input ** 3 - 3 * input) @staticmethod def backward(ctx, grad_output): """ In the backward pass we receive a Tensor containing the gradient of the loss with respect to the output, and we need to compute the gradient of the loss with respect to the input. """ input, = ctx.saved_tensors return grad_output * 1.5 * (5 * input ** 2 - 1) dtype = torch.float device = torch.device("cpu") # device = torch.device("cuda:0") # Uncomment this to run on GPU # Create Tensors to hold input and outputs.
# By default, requires_grad=False, which indicates that we do not need to # compute gradients with respect to these Tensors during the backward pass. x = torch.linspace(-math.pi, math.pi, 2000, device=device, dtype=dtype) y = torch.sin(x) # Create random Tensors for weights. For this example, we need # 4 weights: y = a + b * P3(c + d * x), these weights need to be initialized # not too far from the correct result to ensure convergence. # Setting requires_grad=True indicates that we want to compute gradients with # respect to these Tensors during the backward pass. a = torch.full((), 0.0, device=device, dtype=dtype, requires_grad=True) b = torch.full((), -1.0, device=device, dtype=dtype, requires_grad=True) c = torch.full((), 0.0, device=device, dtype=dtype, requires_grad=True) d = torch.full((), 0.3, device=device, dtype=dtype, requires_grad=True) learning_rate = 5e-6 for t in range(2000): # To apply our Function, we use Function.apply method. We alias this as 'P3'. P3 = LegendrePolynomial3.apply # Forward pass: compute predicted y using operations; we compute # P3 using our custom autograd operation. y_pred = a + b * P3(c + d * x) # Compute and print loss loss = (y_pred - y).pow(2).sum() if t % 100 == 99: print(t, loss.item()) # Use autograd to compute the backward pass. loss.backward() # Update weights using gradient descent with torch.no_grad(): a -= learning_rate * a.grad b -= learning_rate * b.grad c -= learning_rate * c.grad d -= learning_rate * d.grad # Manually zero the gradients after updating weights a.grad = None b.grad = None c.grad = None d.grad = None print(f'Result: y = {a.item()} + {b.item()} * P3({c.item()} + {d.item()} x)') ```
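A quick sanity check, independent of PyTorch, that the `backward` formula above really is the derivative of the `forward` formula — comparing the analytic $P_3'(x)=\frac{3}{2}(5x^2-1)$ against a central finite difference of $P_3(x)=\frac{1}{2}(5x^3-3x)$:

```python
# P3 and its analytic derivative, exactly as used in forward()/backward() above
def p3(x):
    return 0.5 * (5 * x ** 3 - 3 * x)

def dp3(x):
    return 1.5 * (5 * x ** 2 - 1)

h = 1e-6
for x in [-1.0, -0.3, 0.0, 0.5, 1.2]:
    # central difference approximates the true derivative to O(h^2)
    numeric = (p3(x + h) - p3(x - h)) / (2 * h)
    assert abs(numeric - dp3(x)) < 1e-4, (x, numeric, dp3(x))
print("backward matches a finite-difference derivative of forward")
```

Getting this pair consistent is the whole contract of a custom `autograd.Function`: autograd trusts `backward` blindly, so an error here silently corrupts every gradient downstream.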
``` from os import path # Third-party import astropy import astropy.coordinates as coord from astropy.table import Table, vstack from astropy.io import fits import astropy.units as u import matplotlib as mpl import matplotlib.pyplot as plt import numpy as np %matplotlib inline from pyvo.dal import TAPService from pyia import GaiaData import gala.coordinates as gc import scipy.stats plt.style.use('notebook') t = Table.read('../data/gd1-all-ps1-red.fits') # deredden bands = ['g', 'r', 'i', 'z', 'y'] for band in bands: t[band] = t[band] - t['A_{}'.format(band)] g = GaiaData(t) c = coord.SkyCoord(ra=g.ra, dec=g.dec, pm_ra_cosdec=g.pmra, pm_dec=g.pmdec) def gd1_dist(phi1): # 0, 10 # -60, 7 m = (10-7) / (60) return (m*phi1.wrap_at(180*u.deg).value + 10) * u.kpc gd1_c = c.transform_to(gc.GD1) gd1_c_dist = gc.GD1(phi1=gd1_c.phi1, phi2=gd1_c.phi2, distance=gd1_dist(gd1_c.phi1), pm_phi1_cosphi2=gd1_c.pm_phi1_cosphi2, pm_phi2=gd1_c.pm_phi2, radial_velocity=[0]*len(gd1_c)*u.km/u.s) # Correct for reflex motion v_sun = coord.Galactocentric.galcen_v_sun observed = gd1_c_dist.transform_to(coord.Galactic) rep = observed.cartesian.without_differentials() rep = rep.with_differentials(observed.cartesian.differentials['s'] + v_sun) gd1_c = coord.Galactic(rep).transform_to(gc.GD1) wangle = 180*u.deg pm_mask = ((gd1_c.pm_phi1_cosphi2 < -5*u.mas/u.yr) & (gd1_c.pm_phi1_cosphi2 > -10*u.mas/u.yr) & (gd1_c.pm_phi2 < 1*u.mas/u.yr) & (gd1_c.pm_phi2 > -2*u.mas/u.yr) & (g.bp_rp < 1.5*u.mag) & (g.bp_rp > 0*u.mag)) phi_mask_stream = ((np.abs(gd1_c.phi2)<1*u.deg) & (gd1_c.phi1.wrap_at(wangle)>-50*u.deg) & (gd1_c.phi1.wrap_at(wangle)<-10*u.deg)) phi_mask_off = ((gd1_c.phi2<-2*u.deg) & (gd1_c.phi2>-3*u.deg)) | ((gd1_c.phi2<3*u.deg) & (gd1_c.phi2>2*u.deg)) iso = Table.read('../data/mist_12.0_-1.35.cmd', format='ascii.commented_header', header_start=12) phasecut = (iso['phase']>=0) & (iso['phase']<3) iso = iso[phasecut] # distance modulus distance_app = 7.8*u.kpc dm = 
5*np.log10((distance_app.to(u.pc)).value)-5 # main sequence + rgb i_gi = iso['PS_g']-iso['PS_i'] i_g = iso['PS_g']+dm i_left = i_gi - 0.4*(i_g/28)**5 i_right = i_gi + 0.5*(i_g/28)**5 poly = np.hstack([np.array([i_left, i_g]), np.array([i_right[::-1], i_g[::-1]])]).T ind = (poly[:,1]<21.3) & (poly[:,1]>17.8) poly_main = poly[ind] points = np.array([g.g - g.i, g.g]).T path_main = mpl.path.Path(poly_main) cmd_mask = path_main.contains_points(points) pm1_min = -9*u.mas/u.yr pm1_max = -4.5*u.mas/u.yr pm2_min = -1.7*u.mas/u.yr pm2_max = 1.*u.mas/u.yr pm_mask = ((gd1_c.pm_phi1_cosphi2 < pm1_max) & (gd1_c.pm_phi1_cosphi2 > pm1_min) & (gd1_c.pm_phi2 < pm2_max) & (gd1_c.pm_phi2 > pm2_min)) ``` ## Define target fields ``` targets = {} targets['phi1'] = np.array([-36.35, -39.5, -32.4, -29.8, -29.8])*u.deg targets['phi2'] = np.array([0.2, 0.2, 1.1, 0, 1])*u.deg Nf = len(targets['phi1']) plt.figure(figsize=(10,8)) plt.plot(gd1_c.phi1[pm_mask & cmd_mask].wrap_at(wangle), gd1_c.phi2[pm_mask & cmd_mask], 'ko', ms=4) for i in range(Nf): c = mpl.patches.Circle((targets['phi1'][i].value, targets['phi2'][i].value), radius=0.5, fc='none', ec='r', lw=2, zorder=2) plt.gca().add_patch(c) plt.gca().set_aspect('equal') plt.xlim(-45,-25) plt.ylim(-5,5) plt.xlabel('$\phi_1$ [deg]') plt.ylabel('$\phi_2$ [deg]') plt.tight_layout() ``` ### Show overall stream ``` plt.figure(figsize=(13,10)) plt.plot(gd1_c.phi1[pm_mask & cmd_mask].wrap_at(wangle), gd1_c.phi2[pm_mask & cmd_mask], 'ko', ms=0.7, alpha=0.7, rasterized=True) for i in range(Nf): c = mpl.patches.Circle((targets['phi1'][i].value, targets['phi2'][i].value), radius=0.5, fc='none', ec='r', lw=1, zorder=2) plt.gca().add_patch(c) plt.gca().set_aspect('equal') plt.xlabel('$\phi_1$ [deg]') plt.ylabel('$\phi_2$ [deg]') plt.xlim(-90,10) plt.ylim(-12,12) plt.tight_layout() targets_c = coord.SkyCoord(phi1=targets['phi1'], phi2=targets['phi2'], frame=gc.GD1) ra_field = targets_c.icrs.ra.to_string(unit=u.hour, sep=':') dec_field = 
targets_c.icrs.dec.to_string(unit=u.degree, sep=':') tfield = Table(np.array([ra_field, dec_field]).T, names=('ra', 'dec')) tfield.write('../data/GD1_fields_2018B.txt', format='ascii.commented_header', overwrite=True) tfield ``` ## Target priorities ``` iso = Table.read('/home/ana/data/isochrones/panstarrs/mist_12.6_-1.50.cmd', format='ascii.commented_header', header_start=12) phasecut = (iso['phase']>=0) & (iso['phase']<3) iso = iso[phasecut] # distance modulus distance_app = 7.8*u.kpc dm = 5*np.log10((distance_app.to(u.pc)).value)-5 # main sequence + rgb i_gi = iso['PS_g']-iso['PS_i'] i_g = iso['PS_g']+dm i_left_narrow = i_gi - 0.4*(i_g/28)**5 i_right_narrow = i_gi + 0.5*(i_g/28)**5 poly_narrow = np.hstack([np.array([i_left_narrow, i_g]), np.array([i_right_narrow[::-1], i_g[::-1]])]).T i_left_wide = i_gi - 0.6*(i_g/28)**3 i_right_wide = i_gi + 0.7*(i_g/28)**3 poly_wide = np.hstack([np.array([i_left_wide, i_g]), np.array([i_right_wide[::-1], i_g[::-1]])]).T ind = (poly_wide[:,1]<18.3) & (poly_wide[:,1]>14) poly_low = poly_wide[ind] ind = (poly_narrow[:,1]<20.5) & (poly_narrow[:,1]>14) poly_med = poly_narrow[ind] ind = (poly_narrow[:,1]<20.5) & (poly_narrow[:,1]>17.5) poly_high = poly_narrow[ind] plt.figure(figsize=(5,10)) plt.plot(g.g[phi_mask_stream & pm_mask] - g.i[phi_mask_stream & pm_mask], g.g[phi_mask_stream & pm_mask], 'ko', ms=2, alpha=1, rasterized=True, label='') plt.plot(i_gi, i_g, 'r-') pml = mpl.patches.Polygon(poly_low, color='moccasin', alpha=0.4, zorder=2) plt.gca().add_artist(pml) pmm = mpl.patches.Polygon(poly_med, color='orange', alpha=0.3, zorder=2) plt.gca().add_artist(pmm) pmh = mpl.patches.Polygon(poly_high, color='green', alpha=0.3, zorder=2) plt.gca().add_artist(pmh) plt.xlim(-0.2, 1.8) plt.ylim(21, 13) plt.xlabel('g - i') plt.ylabel('g') plt.tight_layout() pm1_bmin = -12*u.mas/u.yr pm1_bmax = 2*u.mas/u.yr pm2_bmin = -5*u.mas/u.yr pm2_bmax = 5*u.mas/u.yr pm_broad_mask = ((gd1_c.pm_phi1_cosphi2 < pm1_bmax) & (gd1_c.pm_phi1_cosphi2 > 
pm1_bmin) & (gd1_c.pm_phi2 < pm2_bmax) & (gd1_c.pm_phi2 > pm2_bmin)) plt.plot(gd1_c.pm_phi1_cosphi2[phi_mask_stream].to(u.mas/u.yr), gd1_c.pm_phi2[phi_mask_stream].to(u.mas/u.yr), 'ko', ms=0.5, alpha=0.5, rasterized=True) rect_xy = [pm1_bmin.to(u.mas/u.yr).value, pm2_bmin.to(u.mas/u.yr).value] rect_w = pm1_bmax.to(u.mas/u.yr).value - pm1_bmin.to(u.mas/u.yr).value rect_h = pm2_bmax.to(u.mas/u.yr).value - pm2_bmin.to(u.mas/u.yr).value pr = mpl.patches.Rectangle(rect_xy, rect_w, rect_h, color='orange', alpha=0.3) plt.gca().add_artist(pr) rect_xy = [pm1_min.to(u.mas/u.yr).value, pm2_min.to(u.mas/u.yr).value] rect_w = pm1_max.to(u.mas/u.yr).value - pm1_min.to(u.mas/u.yr).value rect_h = pm2_max.to(u.mas/u.yr).value - pm2_min.to(u.mas/u.yr).value pr = mpl.patches.Rectangle(rect_xy, rect_w, rect_h, color='green', alpha=0.3) plt.gca().add_artist(pr) plt.xlim(-12,12) plt.ylim(-12,12) plt.xlabel('$\mu_{\phi_1}$ [mas yr$^{-1}$]') plt.ylabel('$\mu_{\phi_2}$ [mas yr$^{-1}$]') plt.tight_layout() ``` ## 2018C proposal ``` path_high = mpl.path.Path(poly_high) ms_mask = path_high.contains_points(points) plt.figure(figsize=(13,10)) plt.plot(gd1_c.phi1[pm_mask & cmd_mask].wrap_at(wangle), gd1_c.phi2[pm_mask & cmd_mask], 'ko', ms=0.7, alpha=0.7, rasterized=True) # plt.annotate('Progenitor?', xy=(-13, 0.5), xytext=(-10, 7), # arrowprops=dict(color='0.3', shrink=0.05, width=1.5, headwidth=6, headlength=8, alpha=0.4), # fontsize='small') # plt.annotate('Blob', xy=(-14, -2), xytext=(-14, -10), # arrowprops=dict(color='0.3', shrink=0.08, width=1.5, headwidth=6, headlength=8, alpha=0.4), # fontsize='small') plt.annotate('Spur', xy=(-33, 2), xytext=(-42, 7), arrowprops=dict(color='0.3', shrink=0.08, width=1.5, headwidth=6, headlength=8, alpha=0.4), fontsize='small') plt.annotate('Gaps', xy=(-40, -2), xytext=(-35, -10), arrowprops=dict(color='0.3', shrink=0.08, width=1.5, headwidth=6, headlength=8, alpha=0.4), fontsize='small') plt.annotate('Gaps', xy=(-21, -1), xytext=(-35, -10), 
arrowprops=dict(color='0.3', shrink=0.08, width=1.5, headwidth=6, headlength=8, alpha=0.4), fontsize='small') # plt.axvline(-55, ls='--', color='0.3', alpha=0.4, dashes=(6,4), lw=2) # plt.text(-60, 9.5, 'Previously\nundetected', fontsize='small', ha='right', va='top') pr = mpl.patches.Rectangle([-50, -5], 25, 10, color='none', ec='darkorange', lw=2) plt.gca().add_artist(pr) plt.gca().set_aspect('equal') plt.xlabel('$\phi_1$ [deg]') plt.ylabel('$\phi_2$ [deg]') plt.xlim(-90,10) plt.ylim(-12,12) plt.tight_layout() ax_inset = plt.axes([0.2,0.62,0.6,0.2]) plt.sca(ax_inset) plt.plot(gd1_c.phi1[pm_mask & cmd_mask].wrap_at(wangle), gd1_c.phi2[pm_mask & cmd_mask], 'ko', ms=4, alpha=0.2, rasterized=True, label='All likely GD-1 members') plt.plot(gd1_c.phi1[pm_mask & cmd_mask & ms_mask].wrap_at(wangle), gd1_c.phi2[pm_mask & cmd_mask & ms_mask], 'ko', ms=4, alpha=1, rasterized=True, label='High priority targets') plt.text(-0.07, 0.5, 'GD-1 region for\nHectochelle follow-up', transform=plt.gca().transAxes, ha='right') plt.legend(bbox_to_anchor=(1, 0.85), frameon=False, loc='upper left', handlelength=0.3, markerscale=1.5) for pos in ['top', 'bottom', 'right', 'left']: plt.gca().spines[pos].set_edgecolor('orange') plt.gca().set_aspect('equal') plt.xlim(-50,-25) plt.ylim(-5,5) plt.setp(plt.gca().get_xticklabels(), visible=False) plt.setp(plt.gca().get_yticklabels(), visible=False) plt.gca().tick_params(bottom='off', left='off', right='off', top='off'); plt.savefig('../plots/prop_fig1.pdf') ts = Table.read('../data/gd1_4_vels.tab', format='ascii.commented_header', delimiter='\t') # ts = Table.read('../data/gd1_both.tab', format='ascii.commented_header', delimiter='\t') vbins = np.arange(-200,200,10) fig, ax = plt.subplots(1,3,figsize=(15,5)) plt.sca(ax[0]) plt.plot(gd1_c.pm_phi1_cosphi2[phi_mask_stream].to(u.mas/u.yr), gd1_c.pm_phi2[phi_mask_stream].to(u.mas/u.yr), 'ko', ms=0.5, alpha=0.1, rasterized=True) rect_xy = [pm1_bmin.to(u.mas/u.yr).value, pm2_bmin.to(u.mas/u.yr).value] 
rect_w = pm1_bmax.to(u.mas/u.yr).value - pm1_bmin.to(u.mas/u.yr).value rect_h = pm2_bmax.to(u.mas/u.yr).value - pm2_bmin.to(u.mas/u.yr).value pr = mpl.patches.Rectangle(rect_xy, rect_w, rect_h, color='k', alpha=0.1) plt.gca().add_artist(pr) rect_xy = [pm1_min.to(u.mas/u.yr).value, pm2_min.to(u.mas/u.yr).value] rect_w = pm1_max.to(u.mas/u.yr).value - pm1_min.to(u.mas/u.yr).value rect_h = pm2_max.to(u.mas/u.yr).value - pm2_min.to(u.mas/u.yr).value pr = mpl.patches.Rectangle(rect_xy, rect_w, rect_h, color='w', alpha=1) plt.gca().add_artist(pr) pr = mpl.patches.Rectangle(rect_xy, rect_w, rect_h, color='tab:blue', alpha=0.5) plt.gca().add_artist(pr) plt.xlim(-12,12) plt.ylim(-12,12) plt.xlabel('$\mu_{\phi_1}$ [mas yr$^{-1}$]') plt.ylabel('$\mu_{\phi_2}$ [mas yr$^{-1}$]') plt.sca(ax[1]) plt.plot(g.g[phi_mask_stream & pm_mask] - g.i[phi_mask_stream & pm_mask], g.g[phi_mask_stream & pm_mask], 'ko', ms=2, alpha=0.5, rasterized=True, label='') # plt.plot(i_gi, i_g, 'r-') # pml = mpl.patches.Polygon(poly_low, color='moccasin', alpha=0.4, zorder=2) # plt.gca().add_artist(pml) # pmm = mpl.patches.Polygon(poly_med, color='orange', alpha=0.3, zorder=2) # plt.gca().add_artist(pmm) pmh = mpl.patches.Polygon(poly_high, color='tab:blue', alpha=0.5, zorder=2) plt.gca().add_artist(pmh) plt.gca().set_facecolor('0.95') plt.xlim(-0.2, 1.8) plt.ylim(21, 13) plt.xlabel('g - i [mag]') plt.ylabel('g [mag]') plt.sca(ax[2]) plt.hist(ts['VELOCITY'][ts['rank']==1], bins=vbins, alpha=0.5, color='tab:blue', label='Priority 1') plt.hist(ts['VELOCITY'][ts['rank']==5], bins=vbins, alpha=0.1, histtype='stepfilled', color='k', label='Priority 5') plt.legend(fontsize='small') plt.xlabel('Radial velocity [km s$^{-1}$]') plt.ylabel('Number') plt.tight_layout() plt.savefig('../plots/prop_fig3.pdf') ``` ## Target list ``` # check total number of stars per field r_fov = 0.5*u.deg mag_mask = g.g<20.5*u.mag guide = (g.g>13*u.mag) & (g.g<15*u.mag) for i in range(Nf): infield = (gd1_c.phi1.wrap_at(wangle) - 
targets['phi1'][i])**2 + (gd1_c.phi2 - targets['phi2'][i])**2 < r_fov**2 print(i, np.sum(infield & pm_broad_mask & mag_mask), np.sum(infield & pm_mask & mag_mask), np.sum(infield & guide)) # plt.plot(g.g[infield]-g.i[infield],g.g[infield], 'k.') plt.plot(g.pmra[infield],g.pmdec[infield], 'k.') # plt.xlim(-1,3) # plt.ylim(22,12) # find ra, dec corners for querying for guide stars cornersgd1 = astropy.coordinates.SkyCoord(phi1=np.array([-45,-45,-25,-25])*u.deg, phi2=np.array([-3,3,3,-3])*u.deg, frame=gc.GD1) corners = cornersgd1.icrs query ='''SELECT * FROM gaiadr2.gaia_source WHERE phot_g_mean_mag < 16 AND phot_g_mean_mag > 13 AND CONTAINS(POINT('ICRS', ra, dec), POLYGON('ICRS', {0.ra.degree}, {0.dec.degree}, {1.ra.degree}, {1.dec.degree}, {2.ra.degree}, {2.dec.degree}, {3.ra.degree}, {3.dec.degree})) = 1 '''.format(corners[0], corners[1], corners[2], corners[3]) print(query) spatial_mask = ((gd1_c.phi1.wrap_at(wangle)<-25*u.deg) & (gd1_c.phi1.wrap_at(wangle)>-45*u.deg) & (gd1_c.phi2<3*u.deg) & (gd1_c.phi2>-2*u.deg)) shape_mask = spatial_mask & mag_mask & pm_broad_mask Nout = np.sum(shape_mask) points = np.array([g.g[shape_mask] - g.i[shape_mask], g.g[shape_mask]]).T pm_mask = ((gd1_c.pm_phi1_cosphi2[shape_mask] < pm1_max) & (gd1_c.pm_phi1_cosphi2[shape_mask] > pm1_min) & (gd1_c.pm_phi2[shape_mask] < pm2_max) & (gd1_c.pm_phi2[shape_mask] > pm2_min)) path_med = mpl.path.Path(poly_med) path_low = mpl.path.Path(poly_low) path_high = mpl.path.Path(poly_high) # guide = (g.g[shape_mask]>13*u.mag) & (g.g[shape_mask]<15*u.mag) priority4 = pm_mask priority3 = path_low.contains_points(points) & pm_mask priority2 = path_main.contains_points(points) & pm_mask priority1 = path_high.contains_points(points) & pm_mask # set up output priorities priority = np.zeros(Nout, dtype=np.int64) + 5 # priority[guide] = -1 priority[priority4] = 4 priority[priority3] = 3 priority[priority2] = 2 priority[priority1] = 1 ttype = np.empty(Nout, dtype='S10') nontarget = priority>-1 
ttype[~nontarget] = 'guide' ttype[nontarget] = 'target' name = np.arange(Nout) ara = coord.Angle(t['ra'][shape_mask]*u.deg) adec = coord.Angle(t['dec'][shape_mask]*u.deg) ra = ara.to_string(unit=u.hour, sep=':', precision=2) dec = adec.to_string(unit=u.degree, sep=':', precision=2) tcatalog = Table(np.array([ra, dec, name, priority, ttype, g.g[shape_mask]]).T, names=('ra', 'dec', 'object', 'rank', 'type', 'mag'), masked=True) tcatalog['rank'].mask = ~nontarget tguide = Table.read('../data/guides.fits.gz') plt.plot(tguide['ra'], tguide['dec'],'k.') # add guides Nguide = len(tguide) name_guides = np.arange(Nout, Nout+Nguide) priority_guides = np.zeros(Nguide, dtype='int') - 1 nontarget_guides = priority_guides==-1 ttype_guides = np.empty(Nguide, dtype='S10') ttype_guides[nontarget_guides] = 'guide' ara_guides = coord.Angle(tguide['ra']) adec_guides = coord.Angle(tguide['dec']) ra_guides = ara_guides.to_string(unit=u.hour, sep=':', precision=2) dec_guides = adec_guides.to_string(unit=u.degree, sep=':', precision=2) tguides_out = Table(np.array([ra_guides, dec_guides, name_guides, priority_guides, ttype_guides, tguide['phot_g_mean_mag']]).T, names=('ra', 'dec', 'object', 'rank', 'type', 'mag'), masked=True) tguides_out['rank'].mask = ~nontarget_guides tguides_out tcatalog = astropy.table.vstack([tcatalog, tguides_out]) tcatalog tcatalog.write('../data/gd1_catalog.cat', format='ascii.fixed_width_two_line', fill_values=[(astropy.io.ascii.masked, '')], delimiter='\t', overwrite=True) # output cutout of the whole input catalog shape_mask_arr = np.array(shape_mask) tcat_input = t[shape_mask_arr] tcat_input['name'] = name tcat_input['priority'] = priority tcat_input['type'] = ttype tcat_input.write('../data/gd1_input_catalog.fits', overwrite=True) ```
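The isochrone shifting in this notebook relies on the distance modulus, $dm = 5\log_{10}(d/\mathrm{pc}) - 5$, evaluated at the assumed GD-1 distance of 7.8 kpc. That arithmetic can be checked in isolation:

```python
import math

def distance_modulus(d_pc):
    # dm = 5 log10(d / pc) - 5, i.e. apparent mag = absolute mag + dm
    return 5 * math.log10(d_pc) - 5

dm = distance_modulus(7.8e3)  # 7.8 kpc, as assumed in the notebook
print(round(dm, 2))  # ~14.46

# inverting recovers the distance: d = 10 ** ((dm + 5) / 5)
assert abs(10 ** ((dm + 5) / 5) - 7.8e3) < 1e-6
```

So adding `dm` to the isochrone's absolute PS1 magnitudes (as in `i_g = iso['PS_g'] + dm`) places the model track in the apparent-magnitude frame of the Pan-STARRS photometry before the CMD polygons are drawn.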
``` import csv import matplotlib import matplotlib.pyplot as plt auth_csv_path = "./auth_endpoint_values.csv" service_csv_path = "./service_endpoint_values.csv" def convert_cpu_to_dict(file_path): data = [] with open(file_path) as csv_file: csv_reader = csv.reader(csv_file, delimiter=',') csv_reader = list(csv_reader) for idx, row in enumerate(csv_reader): if idx == 0: pass #skip the first and last row else: data.append({'workers':row[0], 'cpu_utils': row[1]}) return data def convert_resp_to_dict(file_path): data = [] with open(file_path) as csv_file: csv_reader = csv.reader(csv_file, delimiter=',') csv_reader = list(csv_reader) for idx, row in enumerate(csv_reader): if idx == 0: pass #skip the first and last row else: data.append({'workers':row[0], 'response_time': row[3]}) return data def convert_resp_95_to_dict(file_path): data = [] with open(file_path) as csv_file: csv_reader = csv.reader(csv_file, delimiter=',') csv_reader = list(csv_reader) for idx, row in enumerate(csv_reader): if idx == 0: pass #skip the first and last row else: data.append({'workers':row[0], 'p95_response_time': row[4]}) return data auth_service_values = convert_cpu_to_dict(auth_csv_path) service_endpoint_values = convert_cpu_to_dict(service_csv_path) workers = [int(x['workers']) for x in auth_service_values] auth_cpu_utlis = [(float(x['cpu_utils']))/4 for x in auth_service_values] service_cpu_utlis = [(float(x['cpu_utils']))/4 for x in service_endpoint_values] total_cpu = [x + y for x, y in zip(auth_cpu_utlis, service_cpu_utlis)] plt.rc('font', size=14) fig, axs = plt.subplots() axs.set_ylim([0, 10]) axs.set_facecolor('#fcfcfc') axs.set_xlabel('Parallel virtual users') axs.set_ylabel('CPU utilization as % $\it{(4~cores)}$') axs.plot(workers, total_cpu, 'r', label='total CPU usage', marker='d', markersize=7) axs.plot(workers, service_cpu_utlis, linestyle='dotted',label='service-endpoint', marker='o', markersize=5) axs.plot(workers, auth_cpu_utlis, 'g--' ,label='auth-service', marker='x', 
mec='k', markersize=5) axs.legend() axs.grid(axis='both', color='#7D7D7D', linestyle='-', linewidth=0.5) plt.savefig("auth_token_cpu_util.pdf") plt.show() service_endpoint_resp_values = convert_resp_to_dict(service_csv_path) service_endpoint_resp_values = [float(x['response_time']) for x in service_endpoint_resp_values] service_endpoint_95_resp_values = convert_resp_95_to_dict(service_csv_path) service_endpoint_95_resp_values = [float(x['p95_response_time']) for x in service_endpoint_95_resp_values] #plt.rc('font', size=20) # controls default text sizes fig, axs = plt.subplots() axs.set_ylim([60, 180]) axs.set_facecolor('#fcfcfc') axs.grid(axis='both', color='#7D7D7D', linestyle='-', linewidth=0.5, zorder=0) axs.set_xlabel('Parallel virtual users') axs.set_ylabel('Response time (ms)') #p1 = axs.bar(workers, service_endpoint_resp_values, 3, zorder=3, alpha=0.9) axs.plot(workers, service_endpoint_resp_values, label='avg response time', marker='x', markersize=7) axs.plot(workers, service_endpoint_95_resp_values, 'r', linestyle='dotted',label='p(95) response time', marker='o', markersize=5) axs.legend(loc='upper left') plt.savefig("auth_token_response_time.pdf") plt.show() ```
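The three `convert_*_to_dict` helpers above share one pattern: read the CSV, skip the header row, and keep one named column per data row. A minimal sketch of that pattern on made-up inline data (the column layout and values here are illustrative, not the real benchmark files):

```python
import csv
import io

# hypothetical sample mimicking the benchmark CSVs' shape
sample = ("workers,cpu,mem,avg_ms,p95_ms\n"
          "10,12.5,300,80.1,120.4\n"
          "20,25.0,310,95.7,140.2\n")

def rows_to_dicts(text, column, field):
    data = []
    reader = list(csv.reader(io.StringIO(text), delimiter=','))
    for idx, row in enumerate(reader):
        if idx == 0:
            continue  # skip the header row
        data.append({'workers': row[0], field: row[column]})
    return data

print(rows_to_dicts(sample, 3, 'response_time'))
# [{'workers': '10', 'response_time': '80.1'}, {'workers': '20', 'response_time': '95.7'}]
```

Using `csv.DictReader` instead would make the hard-coded column indices (`row[1]`, `row[3]`, `row[4]`) self-documenting, since columns could then be addressed by header name.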
``` !pip install transformers datasets !wget https://github.com/crux82/squad-it/raw/master/SQuAD_it-train.json.gz !wget https://github.com/crux82/squad-it/raw/master/SQuAD_it-test.json.gz !gzip -dkv SQuAD_it-*.json.gz from datasets import load_dataset squad_it_dataset = load_dataset("json", data_files="SQuAD_it-train.json", field="data") squad_it_dataset squad_it_dataset["train"][0] data_files = {"train": "SQuAD_it-train.json", "test": "SQuAD_it-test.json"} squad_it_dataset = load_dataset("json", data_files=data_files, field="data") squad_it_dataset !wget "https://archive.ics.uci.edu/ml/machine-learning-databases/00462/drugsCom_raw.zip" !unzip drugsCom_raw.zip from datasets import load_dataset data_files = {"train": "drugsComTrain_raw.tsv", "test": "drugsComTest_raw.tsv"} # \t is the tab character in Python drug_dataset = load_dataset("csv", data_files=data_files, delimiter="\t") drug_sample = drug_dataset["train"].shuffle(seed=42).select(range(1000)) drug_sample[:3] for split in drug_dataset.keys(): assert len(drug_dataset[split]) == len(drug_dataset[split].unique("Unnamed: 0")) drug_dataset = drug_dataset.rename_column( original_column_name="Unnamed: 0", new_column_name="patient_id" ) drug_dataset def filter_nones(x): return x["condition"] is not None drug_dataset = drug_dataset.filter(lambda x: x["condition"] is not None) def lowercase_condition(example): return {"condition": example["condition"].lower()} drug_dataset = drug_dataset.map(lowercase_condition) drug_dataset["train"]["condition"][:3] def compute_review_length(example): return {"review_length": len(example["review"].split())} drug_dataset = drug_dataset.map(compute_review_length) # Inspect the first training example drug_dataset["train"][0] drug_dataset["train"].sort("review_length")[:3] drug_dataset = drug_dataset.filter(lambda x: x["review_length"] > 30) print(drug_dataset.num_rows) import html text = "I&#039;m a transformer called BERT" html.unescape(text) drug_dataset = drug_dataset.map(lambda x: 
{"review": html.unescape(x["review"])}) new_drug_dataset = drug_dataset.map( lambda x: {"review": [html.unescape(o) for o in x["review"]]}, batched=True ) from transformers import AutoTokenizer tokenizer = AutoTokenizer.from_pretrained("bert-base-cased") def tokenize_function(examples): return tokenizer(examples["review"], truncation=True) %time tokenized_dataset = drug_dataset.map(tokenize_function, batched=True) slow_tokenizer = AutoTokenizer.from_pretrained("bert-base-cased", use_fast=False) def slow_tokenize_function(examples): return slow_tokenizer(examples["review"], truncation=True) tokenized_dataset = drug_dataset.map(slow_tokenize_function, batched=True, num_proc=8) def tokenize_and_split(examples): return tokenizer( examples["review"], truncation=True, max_length=128, return_overflowing_tokens=True ) result = tokenize_and_split(drug_dataset["train"][0]) [len(inp) for inp in result["input_ids"]] tokenized_dataset = drug_dataset.map( tokenize_and_split, batched=True, remove_columns=drug_dataset["train"].column_names ) drug_dataset.set_format("pandas") train_df = drug_dataset["train"][:] frequencies = ( train_df["condition"] .value_counts() .to_frame() .reset_index() .rename(columns={"index": "condition", "condition": "frequency"}) ) frequencies.head() from datasets import Dataset freq_dataset = Dataset.from_pandas(frequencies) freq_dataset drug_dataset.reset_format() drug_dataset_clean = drug_dataset["train"].train_test_split(train_size=0.8, seed=42) # Rename the default "test" split to "validation" drug_dataset_clean["validation"] = drug_dataset_clean.pop("test") drug_dataset_clean["test"] = drug_dataset["test"] drug_dataset_clean drug_dataset_clean.save_to_disk("drug-reviews") from datasets import load_from_disk drug_dataset_reloaded = load_from_disk("drug-reviews") drug_dataset_reloaded for split, dataset in drug_dataset_clean.items(): dataset.to_json(f"drug-reviews-{split}.jsonl") data_files = { "train": "drug-reviews-train.jsonl", "validation": 
"drug-reviews-validation.jsonl", "test": "drug-reviews-test.jsonl", } drug_dataset_reloaded = load_dataset("json", data_files=data_files) ```
``` import os import numpy as np # matplotlib for displaying the output import matplotlib.pyplot as plt %matplotlib inline import seaborn as sns # sns.set() sns.set(style="ticks", context="talk") from scipy import signal from scipy.io import wavfile # and IPython.display for audio output import IPython.display # Librosa for audio import librosa # And the display module for visualization import librosa.display # Get data files two_up = os.path.abspath(os.path.join('.' ,"../..")) print("Project root path is: ", two_up) dataDirName = "data" rawDataDirName = "converted_wav" className = "violin" # className = "guitar" data_path = os.path.join(two_up, dataDirName, rawDataDirName, className) print(data_path) root_paths = [] # Get all files from data_path # r=root, d=directories, f = files (_, d, allFiles) = next(os.walk(data_path)) wavFiles = [f for f in allFiles if f.endswith(".wav")] print(wavFiles[0]) ``` ### Spectrogram ``` file = wavFiles[3] sample_rate, samples = wavfile.read(os.path.join(data_path, file)) frequencies, times, spectrogram = signal.spectrogram(samples, sample_rate) # all spectrogram plt.pcolormesh(times, frequencies, spectrogram) plt.ylabel('Frequency') plt.xlabel('Time') plt.show() print(times[0], times[-1]) print(frequencies[0], frequencies[-1]) # plot(times, frequencies) plt.specgram(samples,Fs=sample_rate) plt.xlabel('Time') plt.ylabel('Frequency') plt.colorbar() plt.show() ``` ### Time Domain ``` zoom_left = 10000 zoom_right = 30000 plt.plot(samples) plt.axvline(x=zoom_left) plt.axvline(x=zoom_right) plt.show() plt.plot(samples[zoom_left:zoom_right]) plt.show() ``` Librosa example ``` y, sr = librosa.load(os.path.join(data_path, file), sr=None) S = librosa.feature.melspectrogram(y, sr=sr, n_mels=128) # Convert to log scale (dB). We'll use the peak power (max) as reference. 
log_S = librosa.power_to_db(S, ref=np.max) # Make a new figure plt.figure(figsize=(12,4)) # Display the spectrogram on a mel scale # sample rate and hop length parameters are used to render the time axis librosa.display.specshow(log_S, sr=sr, x_axis='time', y_axis='mel') # Put a descriptive title on the plot plt.title('mel power spectrogram') # draw a color bar plt.colorbar(format='%+02.0f dB') # Make the figure layout compact plt.tight_layout() y_harmonic, y_percussive = librosa.effects.hpss(y) # What do the spectrograms look like? # Let's make and display a mel-scaled power (energy-squared) spectrogram S_harmonic = librosa.feature.melspectrogram(y_harmonic, sr=sr) S_percussive = librosa.feature.melspectrogram(y_percussive, sr=sr) # Convert to log scale (dB). We'll use the peak power as reference. log_Sh = librosa.power_to_db(S_harmonic, ref=np.max) log_Sp = librosa.power_to_db(S_percussive, ref=np.max) # Make a new figure plt.figure(figsize=(12,6)) plt.subplot(2,1,1) # Display the spectrogram on a mel scale librosa.display.specshow(log_Sh, sr=sr, y_axis='mel') # Put a descriptive title on the plot plt.title('mel power spectrogram (Harmonic)') # draw a color bar plt.colorbar(format='%+02.0f dB') plt.subplot(2,1,2) librosa.display.specshow(log_Sp, sr=sr, x_axis='time', y_axis='mel') # Put a descriptive title on the plot plt.title('mel power spectrogram (Percussive)') # draw a color bar plt.colorbar(format='%+02.0f dB') # Make the figure layout compact plt.tight_layout() # We'll use a CQT-based chromagram with 36 bins-per-octave in the CQT analysis. 
# An STFT-based implementation also exists in chroma_stft()

# We'll use the harmonic component to avoid pollution from transients
C = librosa.feature.chroma_cqt(y=y_harmonic, sr=sr, bins_per_octave=36)

# Make a new figure
plt.figure(figsize=(12,4))

# Display the chromagram: the energy in each chromatic pitch class as a function of time
# To make sure that the colors span the full range of chroma values, set vmin and vmax
librosa.display.specshow(C, sr=sr, x_axis='time', y_axis='chroma', vmin=0, vmax=1)
plt.title('Chromagram')
plt.colorbar()
plt.tight_layout()

# Next, we'll extract the top 13 Mel-frequency cepstral coefficients (MFCCs)
mfcc = librosa.feature.mfcc(S=log_S, n_mfcc=13)

# Let's pad on the first and second deltas while we're at it
delta_mfcc = librosa.feature.delta(mfcc)
delta2_mfcc = librosa.feature.delta(mfcc, order=2)

# How do they look? We'll show each in its own subplot
plt.figure(figsize=(12, 6))

plt.subplot(3,1,1)
librosa.display.specshow(mfcc)
plt.ylabel('MFCC')
plt.colorbar()

plt.subplot(3,1,2)
librosa.display.specshow(delta_mfcc)
plt.ylabel(r'MFCC-$\Delta$')
plt.colorbar()

plt.subplot(3,1,3)
librosa.display.specshow(delta2_mfcc, sr=sr, x_axis='time')
plt.ylabel(r'MFCC-$\Delta^2$')
plt.colorbar()

plt.tight_layout()

# For future use, we'll stack these together into one matrix
M = np.vstack([mfcc, delta_mfcc, delta2_mfcc])

# Now, let's run the beat tracker.
# We'll use the percussive component for this part plt.figure(figsize=(12, 6)) tempo, beats = librosa.beat.beat_track(y=y_percussive, sr=sr) # Let's re-draw the spectrogram, but this time, overlay the detected beats plt.figure(figsize=(12,4)) librosa.display.specshow(log_S, sr=sr, x_axis='time', y_axis='mel') # Let's draw transparent lines over the beat frames plt.vlines(librosa.frames_to_time(beats), 1, 0.5 * sr, colors='w', linestyles='-', linewidth=2, alpha=0.5) plt.axis('tight') plt.colorbar(format='%+02.0f dB') plt.tight_layout(); print('Estimated tempo: %.2f BPM' % tempo) print('First 5 beat frames: ', beats[:5]) # Frame numbers are great and all, but when do those beats occur? print('First 5 beat times: ', librosa.frames_to_time(beats[:5], sr=sr)) # We could also get frame numbers from times by librosa.time_to_frames() # feature.sync will summarize each beat event by the mean feature vector within that beat M_sync = librosa.util.sync(M, beats) plt.figure(figsize=(12,6)) # Let's plot the original and beat-synchronous features against each other plt.subplot(2,1,1) librosa.display.specshow(M) plt.title('MFCC-$\Delta$-$\Delta^2$') # We can also use pyplot *ticks directly # Let's mark off the raw MFCC and the delta features plt.yticks(np.arange(0, M.shape[0], 13), ['MFCC', '$\Delta$', '$\Delta^2$']) plt.colorbar() plt.subplot(2,1,2) # librosa can generate axis ticks from arbitrary timestamps and beat events also librosa.display.specshow(M_sync, x_axis='time', x_coords=librosa.frames_to_time(librosa.util.fix_frames(beats))) plt.yticks(np.arange(0, M_sync.shape[0], 13), ['MFCC', '$\Delta$', '$\Delta^2$']) plt.title('Beat-synchronous MFCC-$\Delta$-$\Delta^2$') plt.colorbar() plt.tight_layout() # Beat synchronization is flexible. # Instead of computing the mean delta-MFCC within each beat, let's do beat-synchronous chroma # We can replace the mean with any statistical aggregation function, such as min, max, or median. 
C_sync = librosa.util.sync(C, beats, aggregate=np.median) plt.figure(figsize=(12,6)) plt.subplot(2, 1, 1) librosa.display.specshow(C, sr=sr, y_axis='chroma', vmin=0.0, vmax=1.0, x_axis='time') plt.title('Chroma') plt.colorbar() plt.subplot(2, 1, 2) librosa.display.specshow(C_sync, y_axis='chroma', vmin=0.0, vmax=1.0, x_axis='time', x_coords=librosa.frames_to_time(librosa.util.fix_frames(beats))) plt.title('Beat-synchronous Chroma (median aggregation)') plt.colorbar() plt.tight_layout() ```
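The frame-level feature matrix `M` built above has a different number of frames for every clip, which is awkward for the violin/guitar classification this notebook is working toward. A common next step is to collapse it into one fixed-length vector per file with per-coefficient statistics. A minimal sketch, with a synthetic matrix standing in for the real 39-row MFCC/delta stack (the aggregation is the point, not the data):

```python
import numpy as np

def summarize_features(M):
    """Collapse a (n_features, n_frames) matrix into one fixed-length vector
    by concatenating the per-feature mean and standard deviation."""
    return np.concatenate([M.mean(axis=1), M.std(axis=1)])

# Stand-in for the 39 x n_frames MFCC/delta/delta2 stack built above
rng = np.random.default_rng(0)
M_fake = rng.normal(size=(39, 120))
vec = summarize_features(M_fake)
print(vec.shape)  # (78,)
```

Every clip, whatever its duration, then maps to a vector of the same length, ready for a standard classifier.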
# DLW Practical 1: MNIST

# From linear to non-linear models with MNIST

**Introduction**

In this practical we will experiment further with linear and non-linear models using the MNIST dataset. MNIST consists of images of handwritten digits that we want to classify correctly.

**Learning objectives**:
* Implement a linear classifier on the MNIST image data set in Tensorflow.
* Modify the code to make the classifier non-linear by introducing a hidden non-linear layer.

**What is expected of you:**
* Step through the code and make sure you understand each step. What test set accuracy do you get?
* Modify the code to make the classifier non-linear by adding a non-linear activation function layer in Tensorflow. What accuracy do you get now?

*Some parts of the code were adapted from the DL Indaba practicals.*

```
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
from tensorflow.examples.tutorials.mnist import input_data

def display_mnist_images(gens, num_images):
    plt.rcParams['image.interpolation'] = 'nearest'
    plt.rcParams['image.cmap'] = 'gray'
    fig, axs = plt.subplots(1, num_images, figsize=(25, 3))
    for i in range(num_images):
        reshaped_img = (gens[i].reshape(28, 28) * 255).astype(np.uint8)
        axs.flat[i].imshow(reshaped_img)
    plt.show()

# download MNIST dataset #
mnist = input_data.read_data_sets("MNIST_data/", one_hot=True)

# visualize random MNIST images #
batch_xs, batch_ys = mnist.train.next_batch(10)
list_of_images = np.split(batch_xs, 10)
display_mnist_images(list_of_images, 10)

x_dim, train_examples, n_classes = mnist.train.images.shape[1], mnist.train.num_examples, mnist.train.labels.shape[1]

######################################
# define the model (build the graph) #
######################################
x = tf.placeholder(tf.float32, [None, x_dim])
W = tf.Variable(tf.random_normal([x_dim, n_classes]))
b = tf.Variable(tf.ones([n_classes]))
y = tf.placeholder(tf.float32, [None, n_classes])

y_ = tf.add(tf.matmul(x, W), b)
prob = tf.nn.softmax(y_)

########################
# define loss function #
########################
cross_entropy_loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=y_, labels=y))
learning_rate = 0.01
train_step = tf.train.GradientDescentOptimizer(learning_rate).minimize(cross_entropy_loss)

###########################
# define model evaluation #
###########################
actual_class, predicted_class = tf.argmax(y, 1), tf.argmax(prob, 1)
correct_prediction = tf.cast(tf.equal(predicted_class, actual_class), tf.float32)
classification_accuracy = tf.reduce_mean(correct_prediction)

#########################
# define training cycle #
#########################
num_epochs = 50
batch_size = 20

# initializing the variables before starting the session #
init = tf.global_variables_initializer()

# launch the graph in a session (use the session as a context manager) #
with tf.Session() as sess:
    # run session #
    sess.run(init)
    # start main training cycle #
    for epoch in range(num_epochs):
        avg_cost = 0.
        avg_acc = 0.
        total_batch = int(mnist.train.num_examples / batch_size)
        # loop over all batches #
        for i in range(total_batch):
            batch_x, batch_y = mnist.train.next_batch(batch_size)
            # run optimization op (backprop), cost op and accuracy op (to get training losses) #
            _, c, a = sess.run([train_step, cross_entropy_loss, classification_accuracy],
                               feed_dict={x: batch_x, y: batch_y})
            # compute avg training loss and avg training accuracy #
            avg_cost += c / total_batch
            avg_acc += a / total_batch
        # display logs per epoch step #
        if epoch % 1 == 0:
            print("Epoch {}: cross-entropy-loss = {:.4f}, training-accuracy = {:.3f}%".format(epoch + 1, avg_cost, avg_acc * 100))
    print("Optimization Finished!")

    # calculate test set accuracy #
    test_accuracy = classification_accuracy.eval({x: mnist.test.images, y: mnist.test.labels})
    print("Accuracy on test set = {:.3f}%".format(test_accuracy * 100))
```
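The modification asked for above amounts to inserting one hidden layer with a non-linear activation between the input and the output projection — in the TensorFlow graph, a second weight matrix plus `tf.nn.relu`. The forward pass itself is easy to sketch in NumPy; shapes, names, and initialisation here are illustrative, not the assignment's reference solution:

```python
import numpy as np

def forward(x, W1, b1, W2, b2):
    """Two-layer classifier: affine -> ReLU -> affine -> softmax."""
    h = np.maximum(0.0, x @ W1 + b1)           # hidden non-linear layer
    logits = h @ W2 + b2
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)    # softmax probabilities

rng = np.random.default_rng(0)
x = rng.normal(size=(5, 784))                  # a batch of flattened MNIST images
W1, b1 = rng.normal(size=(784, 256)) * 0.01, np.zeros(256)
W2, b2 = rng.normal(size=(256, 10)) * 0.01, np.zeros(10)
probs = forward(x, W1, b1, W2, b2)
print(probs.shape)  # (5, 10); each row sums to 1
```

In the graph code this corresponds to replacing the single `tf.matmul(x, W)` with a hidden `tf.nn.relu(tf.matmul(x, W1) + b1)` followed by the output projection.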
# Data loading with ExternalSource operator

In this notebook, we will see how to use the `ExternalSource` operator, which allows us to use an external data source as input to the Pipeline. This notebook is derived from: https://docs.nvidia.com/deeplearning/dali/user-guide/docs/examples/general/data_loading/external_input.html

```
%matplotlib inline
import collections
from random import shuffle

import numpy as np
import matplotlib.pyplot as plt

from nvidia.dali.pipeline import Pipeline
from nvidia.dali import ops, types

batch_size = 16
```

## Defining the data source

We use an infinite iterator as a data source, on the sample dogs & cats images.

```
class ExternalInputIterator:
    def __init__(self, batch_size):
        self.images_dir = 'data/images/'
        self.batch_size = batch_size
        with open(self.images_dir + 'file_list.txt') as file:
            self.files = [line.rstrip() for line in file if line]
        shuffle(self.files)

    def __iter__(self):
        """Reset the iteration state; the pipeline calls this once before it starts reading batches."""
        self.i = 0
        self.n = len(self.files)
        return self

    def __next__(self):
        batch = []
        labels = []
        for _ in range(self.batch_size):
            jpg_fname, label = self.files[self.i].split(' ')
            with open(self.images_dir + jpg_fname, 'rb') as file:
                batch.append(np.frombuffer(file.read(), dtype=np.uint8))
            labels.append(np.array([label], dtype=np.uint8))
            self.i = (self.i + 1) % self.n
        return batch, labels
```

# Defining the pipeline

The next step is to define the Pipeline. The `ExternalSource` op accepts an iterable or a callable. If the source provides multiple outputs (e.g. images and labels), that number must also be specified as the `num_outputs` argument. Internally, the pipeline will call `source` (if callable) or run `next(source)` (if iterable) whenever more data is needed to keep the pipeline running.
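Stripped of DALI, the contract for an iterable source is just the standard Python iterator protocol: `iter(source)` is called once, then `next(source)` once per batch, and the iterator is expected never to raise `StopIteration` (it wraps around instead). A toy iterator showing exactly what the pipeline consumes — the samples and labels here are made up:

```python
import numpy as np

class ToyInputIterator:
    """Mimics ExternalInputIterator: each next() yields one batch of
    (sample, label) arrays, cycling over the dataset forever."""
    def __init__(self, data, batch_size):
        self.data = data
        self.batch_size = batch_size

    def __iter__(self):
        self.i = 0        # reset position: a fresh pass starts here
        return self

    def __next__(self):
        batch, labels = [], []
        for _ in range(self.batch_size):
            sample, label = self.data[self.i]
            batch.append(np.asarray(sample, dtype=np.uint8))
            labels.append(np.asarray([label], dtype=np.uint8))
            self.i = (self.i + 1) % len(self.data)   # wrap around, never StopIteration
        return batch, labels

source = iter(ToyInputIterator([([1, 2], 0), ([3, 4], 1), ([5, 6], 0)], batch_size=2))
batch, labels = next(source)
print(len(batch), labels[0])  # 2 [0]
```

The real iterator in the next cell follows this same shape, only reading encoded JPEG bytes instead of toy arrays.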
``` ext_input_iter = ExternalInputIterator(batch_size) class ExternalSourcePipeline(Pipeline): def __init__(self, batch_size, ext_inp_iter, num_threads, device_id): super().__init__(batch_size, num_threads, device_id, seed=12) self.source = ops.ExternalSource(source=ext_inp_iter, num_outputs=2) self.decode = ops.ImageDecoder(device='mixed', output_type=types.RGB) self.enhance = ops.BrightnessContrast(device='gpu', contrast=2) def define_graph(self): jpgs, labels = self.source() images = self.decode(jpgs) output = self.enhance(images) return output, labels ``` # Using the Pipeline ``` ext_pipe = ExternalSourcePipeline pipe = ext_pipe(batch_size, ext_input_iter, num_threads=2, device_id=0) pipe.build() pipe_out = pipe.run() ``` Notice that labels are still on CPU and no `as_cpu()` call is needed to show them. ``` batch_cpu = pipe_out[0].as_cpu() labels_cpu = pipe_out[1] # already on cpu! print(type(batch_cpu)) dir(batch_cpu) img = batch_cpu.at(2) print(img.shape) print(labels_cpu.at(2)) plt.axis('off') plt.imshow(img) ``` ## Interacting with the GPU input The external source operator can also accept GPU data from CuPy, or any other data source that supports the [CUDA array interface](https://numba.pydata.org/numba-doc/latest/cuda/cuda_array_interface.html) (including PyTorch). For the sake of this example, we will create an `ExternalInputGpuIterator` in such a way that it returns data on the GPU already. As `ImageDecoder` does not accept data on the GPU, *we need to decode it outside of DALI on the CPU and then move it to the GPU.* In normal cases, the image, or other data, would already be on the GPU as a result of the operation of another library. 
``` import imageio import cupy as cp class ExternalInputGpuIterator: def __init__(self, batch_size): self.images_dir = 'data/images/' self.batch_size = batch_size with open(self.images_dir + 'file_list.txt') as file: self.files = [line.rstrip() for line in file if line] shuffle(self.files) def __iter__(self): self.i = 0 self.n = len(self.files) return self def __next__(self): batch = [] labels = [] for _ in range(self.batch_size): jpg_fname, label = self.files[self.i].split(' ') img = imageio.imread(self.images_dir + jpg_fname) img = cp.asarray(img) img = img * 0.6 batch.append(img.astype(cp.uint8)) labels.append(cp.asarray([label], dtype=np.uint8)) self.i = (self.i + 1) % self.n return batch, labels ``` Now that we assume the image decoding was done outside of DALI for the GPU case (ie, the raw image is already on the GPU), let's modify our previous pipeline to use the GPU version of our external source iterator. ``` ext_iter_gpu = ExternalInputGpuIterator(batch_size) print(type(next(iter(ext_iter_gpu))[0][0])) class ExternalSourceGpuPipeline(Pipeline): def __init__(self, batch_size, ext_inp_iter, num_threads, device_id): super().__init__(batch_size, num_threads, device_id, seed=12) self.source = ops.ExternalSource(device='gpu', source=ext_inp_iter, num_outputs=2) self.enhance = ops.BrightnessContrast(device='gpu', contrast=2) def define_graph(self): images, labels = self.source() output = self.enhance(images) return output, labels pipe_gpu = ExternalSourceGpuPipeline(batch_size,ext_iter_gpu, num_threads=2, device_id=0) pipe_gpu.build() pipe_out_gpu = pipe_gpu.run() batch_gpu = pipe_out_gpu[0].as_cpu() # dali.backend_impl.TensorListCPU labels_gpu = pipe_out_gpu[1].as_cpu() # dali.backend_impl.TensorListCPU # show img img = batch_gpu.at(2) print(img.shape) print(labels_gpu.at(2)) plt.axis('off') plt.imshow(img) ```
<a href="https://bmi.readthedocs.io"><img src="https://raw.githubusercontent.com/csdms/espin/main/media/bmi-logo-header-text.png"></a> # Run the `Heat` model through its BMI `Heat` models the diffusion of temperature on a uniform rectangular plate with Dirichlet boundary conditions. This is the canonical example used in the [bmi-example-python](https://github.com/csdms/bmi-example-python) repository. View the source code for the [model](https://github.com/csdms/bmi-example-python/blob/master/heat/heat.py) and its [BMI](https://github.com/csdms/bmi-example-python/blob/master/heat/bmi_heat.py) on GitHub. Start by importing `os`, `numpy` and the `Heat` BMI: ``` import os import numpy as np from heat import BmiHeat ``` Create an instance of the model's BMI. ``` x = BmiHeat() ``` What's the name of this model? ``` print(x.get_component_name()) ``` Start the `Heat` model through its BMI using a configuration file: ``` cat heat.yaml x.initialize('heat.yaml') ``` Check the time information for the model. 
``` print('Start time:', x.get_start_time()) print('End time:', x.get_end_time()) print('Current time:', x.get_current_time()) print('Time step:', x.get_time_step()) print('Time units:', x.get_time_units()) ``` Show the input and output variables for the component (aside on [Standard Names](https://csdms.colorado.edu/wiki/CSDMS_Standard_Names)): ``` print(x.get_input_var_names()) print(x.get_output_var_names()) ``` Next, get the identifier for the grid on which the temperature variable is defined: ``` grid_id = x.get_var_grid('plate_surface__temperature') print('Grid id:', grid_id) ``` Then get the grid attributes: ``` print('Grid type:', x.get_grid_type(grid_id)) rank = x.get_grid_rank(grid_id) print('Grid rank:', rank) shape = np.ndarray(rank, dtype=int) x.get_grid_shape(grid_id, shape) print('Grid shape:', shape) spacing = np.ndarray(rank, dtype=float) x.get_grid_spacing(grid_id, spacing) print('Grid spacing:', spacing) ``` These commands are made somewhat un-Pythonic by the generic design of the BMI. Through the model's BMI, zero out the initial temperature field, except for an impulse near the middle. Note that *set_value* expects a one-dimensional array for input. ``` temperature = np.zeros(shape) temperature[3, 4] = 100.0 x.set_value('plate_surface__temperature', temperature) ``` Check that the temperature field has been updated. Note that *get_value* expects a one-dimensional array to receive output. ``` temperature_flat = np.empty_like(temperature).flatten() x.get_value('plate_surface__temperature', temperature_flat) print(temperature_flat.reshape(shape)) ``` Now advance the model by a single time step: ``` x.update() ``` View the new state of the temperature field: ``` x.get_value('plate_surface__temperature', temperature_flat) print(temperature_flat.reshape(shape)) ``` There's diffusion! 
Advance the model to some distant time: ``` distant_time = 2.0 while x.get_current_time() < distant_time: x.update() ``` View the final state of the temperature field: ``` np.set_printoptions(formatter={'float': '{: 5.1f}'.format}) x.get_value('plate_surface__temperature', temperature_flat) print(temperature_flat.reshape(shape)) ``` Note that temperature isn't conserved on the plate: ``` print(temperature_flat.sum()) ``` End the model: ``` x.finalize() ```
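The cells above walk through the canonical BMI life cycle: initialize, loop on update until a stop time, read values, finalize. Because every BMI component exposes the same methods, a driver does not need to know it is running `Heat`. Here that loop is sketched against a tiny mock component — the mock's numbers are invented, only the call pattern matters:

```python
class MockBmi:
    """Stands in for BmiHeat: fixed time step, scalar state that halves each step."""
    def initialize(self, config_file):
        self.time, self.dt, self.value = 0.0, 0.25, 100.0
    def get_current_time(self):
        return self.time
    def update(self):
        self.time += self.dt
        self.value *= 0.5      # toy "diffusion"
    def get_value(self, name, dest):
        dest[0] = self.value   # BMI fills a caller-provided buffer
        return dest
    def finalize(self):
        pass

def run(model, config_file, stop_time):
    """Generic BMI driver: works for any component with these methods."""
    model.initialize(config_file)
    while model.get_current_time() < stop_time:
        model.update()
    out = [0.0]
    model.get_value('plate_surface__temperature', out)
    model.finalize()
    return out[0]

print(run(MockBmi(), 'heat.yaml', stop_time=1.0))  # 6.25
```

Swapping `MockBmi()` for `BmiHeat()` would reproduce the distant-time loop above, which is the interoperability point of the BMI.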
<a href="https://colab.research.google.com/github/iamsoroush/DeepEEGAbstractor/blob/master/cv_rnr_8s_proposed_gap.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> ``` #@title # Clone the repository and upgrade Keras {display-mode: "form"} !git clone https://github.com/iamsoroush/DeepEEGAbstractor.git !pip install --upgrade keras #@title # Imports {display-mode: "form"} import os import pickle import sys sys.path.append('DeepEEGAbstractor') import numpy as np from src.helpers import CrossValidator from src.models import SpatioTemporalWFB, TemporalWFB, TemporalDFB, SpatioTemporalDFB from src.dataset import DataLoader, Splitter, FixedLenGenerator from google.colab import drive drive.mount('/content/gdrive') #@title # Set data path {display-mode: "form"} #@markdown --- #@markdown Type in the folder in your google drive that contains numpy _data_ folder: parent_dir = 'soroush'#@param {type:"string"} gdrive_path = os.path.abspath(os.path.join('gdrive/My Drive', parent_dir)) data_dir = os.path.join(gdrive_path, 'data') cv_results_dir = os.path.join(gdrive_path, 'cross_validation') if not os.path.exists(cv_results_dir): os.mkdir(cv_results_dir) print('Data directory: ', data_dir) print('Cross validation results dir: ', cv_results_dir) #@title ## Set Parameters batch_size = 80 epochs = 50 k = 10 t = 10 instance_duration = 8 #@param {type:"slider", min:3, max:10, step:0.5} instance_overlap = 2 #@param {type:"slider", min:0, max:3, step:0.5} sampling_rate = 256 #@param {type:"number"} n_channels = 20 #@param {type:"number"} task = 'rnr' data_mode = 'cross_subject' #@title ## Spatio-Temporal WFB model_name = 'ST-WFB-GAP' train_generator = FixedLenGenerator(batch_size=batch_size, duration=instance_duration, overlap=instance_overlap, sampling_rate=sampling_rate, is_train=True) test_generator = FixedLenGenerator(batch_size=8, duration=instance_duration, overlap=instance_overlap, sampling_rate=sampling_rate, 
is_train=False) params = {'task': task, 'data_mode': data_mode, 'main_res_dir': cv_results_dir, 'model_name': model_name, 'epochs': epochs, 'train_generator': train_generator, 'test_generator': test_generator, 't': t, 'k': k, 'channel_drop': True} validator = CrossValidator(**params) dataloader = DataLoader(data_dir, task, data_mode, sampling_rate, instance_duration, instance_overlap) data, labels = dataloader.load_data() input_shape = (sampling_rate * instance_duration, n_channels) model_obj = SpatioTemporalWFB(input_shape, model_name=model_name, spatial_dropout_rate=0.2, dropout_rate=0.4) scores = validator.do_cv(model_obj, data, labels) #@title ## Temporal WFB model_name = 'T-WFB-GAP' train_generator = FixedLenGenerator(batch_size=batch_size, duration=instance_duration, overlap=instance_overlap, sampling_rate=sampling_rate, is_train=True) test_generator = FixedLenGenerator(batch_size=8, duration=instance_duration, overlap=instance_overlap, sampling_rate=sampling_rate, is_train=False) params = {'task': task, 'data_mode': data_mode, 'main_res_dir': cv_results_dir, 'model_name': model_name, 'epochs': epochs, 'train_generator': train_generator, 'test_generator': test_generator, 't': t, 'k': k, 'channel_drop': True} validator = CrossValidator(**params) dataloader = DataLoader(data_dir, task, data_mode, sampling_rate, instance_duration, instance_overlap) data, labels = dataloader.load_data() input_shape = (sampling_rate * instance_duration, n_channels) model_obj = TemporalWFB(input_shape, model_name=model_name, spatial_dropout_rate=0.2, dropout_rate=0.4) scores = validator.do_cv(model_obj, data, labels) #@title ## Spatio-Temporal DFB model_name = 'ST-DFB-GAP' train_generator = FixedLenGenerator(batch_size=batch_size, duration=instance_duration, overlap=instance_overlap, sampling_rate=sampling_rate, is_train=True) test_generator = FixedLenGenerator(batch_size=8, duration=instance_duration, overlap=instance_overlap, sampling_rate=sampling_rate, is_train=False) params = 
{'task': task, 'data_mode': data_mode, 'main_res_dir': cv_results_dir, 'model_name': model_name, 'epochs': epochs, 'train_generator': train_generator, 'test_generator': test_generator, 't': t, 'k': k, 'channel_drop': True} validator = CrossValidator(**params) dataloader = DataLoader(data_dir, task, data_mode, sampling_rate, instance_duration, instance_overlap) data, labels = dataloader.load_data() input_shape = (sampling_rate * instance_duration, n_channels) model_obj = SpatioTemporalDFB(input_shape, model_name=model_name, spatial_dropout_rate=0.2, dropout_rate=0.4) scores = validator.do_cv(model_obj, data, labels) #@title ## Spatio-Temporal DFB (Normalized Kernels) model_name = 'ST-DFB-NK-GAP' train_generator = FixedLenGenerator(batch_size=batch_size, duration=instance_duration, overlap=instance_overlap, sampling_rate=sampling_rate, is_train=True) test_generator = FixedLenGenerator(batch_size=8, duration=instance_duration, overlap=instance_overlap, sampling_rate=sampling_rate, is_train=False) params = {'task': task, 'data_mode': data_mode, 'main_res_dir': cv_results_dir, 'model_name': model_name, 'epochs': epochs, 'train_generator': train_generator, 'test_generator': test_generator, 't': t, 'k': k, 'channel_drop': True} validator = CrossValidator(**params) dataloader = DataLoader(data_dir, task, data_mode, sampling_rate, instance_duration, instance_overlap) data, labels = dataloader.load_data() input_shape = (sampling_rate * instance_duration, n_channels) model_obj = SpatioTemporalDFB(input_shape, model_name=model_name, spatial_dropout_rate=0.2, dropout_rate=0.4, normalize_kernels=True) scores = validator.do_cv(model_obj, data, labels) #@title ## Temporal DFB model_name = 'T-DFB-GAP' train_generator = FixedLenGenerator(batch_size=batch_size, duration=instance_duration, overlap=instance_overlap, sampling_rate=sampling_rate, is_train=True) test_generator = FixedLenGenerator(batch_size=8, duration=instance_duration, overlap=instance_overlap, 
sampling_rate=sampling_rate, is_train=False) params = {'task': task, 'data_mode': data_mode, 'main_res_dir': cv_results_dir, 'model_name': model_name, 'epochs': epochs, 'train_generator': train_generator, 'test_generator': test_generator, 't': t, 'k': k, 'channel_drop': True} validator = CrossValidator(**params) dataloader = DataLoader(data_dir, task, data_mode, sampling_rate, instance_duration, instance_overlap) data, labels = dataloader.load_data() input_shape = (sampling_rate * instance_duration, n_channels) model_obj = TemporalDFB(input_shape, model_name=model_name, spatial_dropout_rate=0.2, dropout_rate=0.4) scores = validator.do_cv(model_obj, data, labels) #@title ## Temporal DFB (Normalized Kernels) model_name = 'T-DFB-NK-GAP' train_generator = FixedLenGenerator(batch_size=batch_size, duration=instance_duration, overlap=instance_overlap, sampling_rate=sampling_rate, is_train=True) test_generator = FixedLenGenerator(batch_size=8, duration=instance_duration, overlap=instance_overlap, sampling_rate=sampling_rate, is_train=False) params = {'task': task, 'data_mode': data_mode, 'main_res_dir': cv_results_dir, 'model_name': model_name, 'epochs': epochs, 'train_generator': train_generator, 'test_generator': test_generator, 't': t, 'k': k, 'channel_drop': True} validator = CrossValidator(**params) dataloader = DataLoader(data_dir, task, data_mode, sampling_rate, instance_duration, instance_overlap) data, labels = dataloader.load_data() input_shape = (sampling_rate * instance_duration, n_channels) model_obj = TemporalDFB(input_shape, model_name=model_name, spatial_dropout_rate=0.2, dropout_rate=0.4, normalize_kernels=True) scores = validator.do_cv(model_obj, data, labels) ```
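The six experiment cells above differ only in the model class, the model name, and whether kernels are normalized; everything else is copy-pasted. One way to keep them in sync is to build the shared `params` dictionary once and iterate over a small spec table. Sketched here with plain strings standing in for the model classes, so the structure is visible without the repository's imports:

```python
def build_params(model_name, shared):
    """Merge the per-model name into the parameters shared by every run."""
    params = dict(shared)          # copy, so `shared` itself is never mutated
    params['model_name'] = model_name
    return params

shared = {'task': 'rnr', 'data_mode': 'cross_subject', 'epochs': 50,
          't': 10, 'k': 10, 'channel_drop': True}

# (model_name, model class, normalize_kernels) -- classes shown as strings here
specs = [('ST-WFB-GAP', 'SpatioTemporalWFB', False),
         ('T-WFB-GAP', 'TemporalWFB', False),
         ('ST-DFB-NK-GAP', 'SpatioTemporalDFB', True)]

runs = [build_params(name, shared) for name, cls, nk in specs]
print([r['model_name'] for r in runs])  # ['ST-WFB-GAP', 'T-WFB-GAP', 'ST-DFB-NK-GAP']
```

In the notebook itself the loop body would instantiate the class, pass `normalize_kernels=nk`, and call `validator.do_cv` once per spec, loading the data a single time instead of six.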
<a href="https://colab.research.google.com/github/Yoshibansal/ML-practical/blob/main/Cat_vs_Dog_Part-1.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> ##Cat vs Dog (Binary class classification) ImageDataGenerator (Understanding overfitting) Download dataset ``` !wget --no-check-certificate \ https://storage.googleapis.com/mledu-datasets/cats_and_dogs_filtered.zip \ -O /tmp/cats_and_dogs_filtered.zip #importing libraries import os import zipfile import tensorflow as tf from tensorflow.keras.optimizers import RMSprop from tensorflow.keras.preprocessing.image import ImageDataGenerator #unzip local_zip = '/tmp/cats_and_dogs_filtered.zip' zip_ref = zipfile.ZipFile(local_zip, 'r') zip_ref.extractall('/tmp') zip_ref.close() base_dir = '/tmp/cats_and_dogs_filtered' train_dir = os.path.join(base_dir, 'train') validation_dir = os.path.join(base_dir, 'validation') # Directory with our training cat pictures train_cats_dir = os.path.join(train_dir, 'cats') # Directory with our training dog pictures train_dogs_dir = os.path.join(train_dir, 'dogs') # Directory with our validation cat pictures validation_cats_dir = os.path.join(validation_dir, 'cats') # Directory with our validation dog pictures validation_dogs_dir = os.path.join(validation_dir, 'dogs') INPUT_SHAPE = (150, 150) MODEL_INPUT_SHAPE = INPUT_SHAPE + (3,) #HYPERPARAMETERS LEARNING_RATE = 1e-4 BATCH_SIZE = 20 EPOCHS = 50 #model architecture model = tf.keras.models.Sequential([ tf.keras.layers.Conv2D(32, (3,3), activation='relu', input_shape = MODEL_INPUT_SHAPE), tf.keras.layers.MaxPooling2D(2, 2), tf.keras.layers.Conv2D(64, (3,3), activation='relu'), tf.keras.layers.MaxPooling2D(2,2), tf.keras.layers.Conv2D(128, (3,3), activation='relu'), tf.keras.layers.MaxPooling2D(2,2), tf.keras.layers.Conv2D(128, (3,3), activation='relu'), tf.keras.layers.MaxPooling2D(2,2), tf.keras.layers.Flatten(), tf.keras.layers.Dense(512, activation='relu'), 
tf.keras.layers.Dense(1, activation='sigmoid') ]) model.compile(loss='binary_crossentropy', optimizer=RMSprop(lr=LEARNING_RATE), metrics=['accuracy']) #summary of model (including type of layer, Ouput shape and number of parameters) model.summary() #plotting model and saving it architecture picture dot_img_file = '/tmp/model_1.png' tf.keras.utils.plot_model(model, to_file=dot_img_file, show_shapes=True) # All images will be rescaled by 1./255 train_datagen = ImageDataGenerator(rescale=1./255) test_datagen = ImageDataGenerator(rescale=1./255) # Flow training images in batches of 20 using train_datagen generator train_generator = train_datagen.flow_from_directory( train_dir, # This is the source directory for training images target_size=INPUT_SHAPE, # All images will be resized to 150x150 batch_size=BATCH_SIZE, # Since we use binary_crossentropy loss, we need binary labels class_mode='binary') # Flow validation images in batches of 20 using test_datagen generator validation_generator = test_datagen.flow_from_directory( validation_dir, target_size=INPUT_SHAPE, batch_size=BATCH_SIZE, class_mode='binary') #Fitting data into model -> training model history = model.fit( train_generator, steps_per_epoch=100, # steps = 2000 images / batch_size epochs=EPOCHS, validation_data=validation_generator, validation_steps=50, # steps = 1000 images / batch_size verbose=1) #PLOTTING model performance import matplotlib.pyplot as plt acc = history.history['accuracy'] val_acc = history.history['val_accuracy'] loss = history.history['loss'] val_loss = history.history['val_loss'] epochs = range(len(acc)) plt.plot(epochs, acc, 'bo', label='Training accuracy') plt.plot(epochs, val_acc, 'b', label='Validation accuracy') plt.title('Training and validation accuracy') plt.figure() plt.plot(epochs, loss, 'ro', label='Training Loss') plt.plot(epochs, val_loss, 'r', label='Validation Loss') plt.title('Training and validation loss') plt.legend() plt.show() ``` The Training Accuracy is close to 100%, 
and the validation accuracy is in the 70%-80% range. This is a great example of overfitting -- which, in short, means that the model does very well with images it has seen before, but not so well with images it hasn't. Next we'll see how to do better and avoid overfitting -- one simple method is to **augment** the images a bit.
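In Keras, augmentation is typically switched on by passing transform arguments such as `rotation_range`, `width_shift_range`, or `horizontal_flip` to `ImageDataGenerator`, which then applies random, label-preserving transforms to each batch on the fly. The effect of one such transform, a random horizontal flip, can be sketched in plain NumPy (the batch here is random noise, purely to show the mechanics):

```python
import numpy as np

def random_flip_batch(batch, rng):
    """Flip each image left-right with probability 0.5 -- one of the
    label-preserving transforms an augmenting generator applies on the fly."""
    out = batch.copy()
    for i in range(len(out)):
        if rng.random() < 0.5:
            out[i] = out[i][:, ::-1, :]   # reverse the width axis
    return out

rng = np.random.default_rng(42)
batch = rng.integers(0, 256, size=(4, 150, 150, 3), dtype=np.uint8)
augmented = random_flip_batch(batch, rng)
print(augmented.shape)  # (4, 150, 150, 3)
```

Because every epoch then sees slightly different versions of the same pictures, the model has a harder time memorising the training set, which is exactly what counters the overfitting seen above.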
# Part I. ETL Pipeline for Pre-Processing the Files

## PLEASE RUN THE FOLLOWING CODE FOR PRE-PROCESSING THE FILES

#### Import Python packages

```
# Import Python packages
import pandas as pd
import cassandra
import re
import os
import glob
import numpy as np
import json
import csv
```

#### Creating list of filepaths to process original event csv data files

```
# checking your current working directory
print(os.getcwd())

# Get current folder and subfolder event data
filepath = os.getcwd() + '/event_data'

# Create a list of files and collect each filepath
file_path_list = []
for root, dirs, files in os.walk(filepath):
    for f in files:
        # join the root and file name to get the full path of each file
        file_path_list.append(os.path.join(root, f))

# get total number of files found
num_files = len(file_path_list)
print('{} files found in {}\n'.format(num_files, filepath))
print(file_path_list)
```

#### Processing the files to create the data file csv that will be used for Apache Cassandra tables

```
# initiating an empty list of rows that will be generated from each file
full_data_rows_list = []

# for every filepath in the file path list
for f in file_path_list:
    # reading csv file
    with open(f, 'r', encoding = 'utf8', newline='') as csvfile:
        # creating a csv reader object
        csvreader = csv.reader(csvfile)
        next(csvreader)
        # extracting each data row one by one and append it
        for line in csvreader:
            #print(line)
            full_data_rows_list.append(line)

# uncomment the code below if you would like to get total number of rows
#print(len(full_data_rows_list))
# uncomment the code below if you would like to check to see what the list of event data rows will look like
#print(full_data_rows_list)

# creating a smaller event data csv file called event_datafile_full csv that will be used to insert data into the
# Apache Cassandra tables
csv.register_dialect('myDialect', quoting=csv.QUOTE_ALL, skipinitialspace=True)

with open('event_datafile_new.csv', 'w', encoding = 'utf8', newline='') as f:
    writer = csv.writer(f, dialect='myDialect')
    writer.writerow(['artist', 'firstName', 'gender', 'itemInSession', 'lastName', 'length',
                     'level', 'location', 'sessionId', 'song', 'userId'])
    for row in full_data_rows_list:
        if (row[0] == ''):
            continue
        writer.writerow((row[0], row[2], row[3], row[4], row[5], row[6], row[7], row[8], row[12], row[13], row[16]))

# check the number of rows in your csv file
with open('event_datafile_new.csv', 'r', encoding = 'utf8') as f:
    print(sum(1 for line in f))
```

# Part II. Complete the Apache Cassandra coding portion of your project.

## Now you are ready to work with the CSV file titled <font color=red>event_datafile_new.csv</font>, located within the Workspace directory.

The event_datafile_new.csv contains the following columns:
- artist
- firstName of user
- gender of user
- item number in session
- last name of user
- length of the song
- level (paid or free song)
- location of the user
- sessionId
- song title
- userId

The image below is a screenshot of what the denormalized data should appear like in the <font color=red>**event_datafile_new.csv**</font> after the code above is run:<br>

<img src="images/image_event_datafile_new.jpg">

## Begin writing your Apache Cassandra code in the cells below

#### Creating a Cluster

```
# This should make a connection to a Cassandra instance on your local machine
# (127.0.0.1)
from cassandra.cluster import Cluster

try:
    # Connect to local Apache Cassandra instance
    cluster = Cluster(['127.0.0.1'])
    # Set session to connect and execute queries.
session = cluster.connect() except Exception as e: print(e) ``` #### Create Keyspace ``` # Create a Keyspace try: session.execute(""" CREATE KEYSPACE IF NOT EXISTS sparkifydb WITH REPLICATION = { 'class' : 'SimpleStrategy', 'replication_factor' : 1 }""" ) except Exception as e: print(e) ``` #### Set Keyspace ``` # Set KEYSPACE try: session.set_keyspace('sparkifydb') except Exception as e: print(e) ``` ### Now we need to create tables to run the following queries. Remember, with Apache Cassandra you model the database tables on the queries you want to run. ## Create queries to ask the following three questions of the data ### 1. Give me the artist, song title and song's length in the music app history that was heard during sessionId = 338, and itemInSession = 4 ### 2. Give me only the following: name of artist, song (sorted by itemInSession) and user (first and last name) for userid = 10, sessionid = 182 ### 3. Give me every user name (first and last) in my music app history who listened to the song 'All Hands Against His Own' ### Query-1 ``` ## Query 1: Give me the artist, song title and song's length in the music app history that was heard during \ ## sessionId = 338, and itemInSession = 4 # CREATE TABLE: # This CQL query creates song_in_session table which contains the following columns (with data type): # * session_id INT, # * item_in_session INT, # * artist TEXT, # * song TEXT, # * length FLOAT # # To uniquely identify each row and allow efficient distribution in Cassandra cluster, # * session_id and item_in_session columns: are used as table's Primary Key (composite Partition Key). query = "CREATE TABLE IF NOT EXISTS song_in_session " query = query + "(session_id int, item_in_session int, artist text, song text, length float, \ PRIMARY KEY(session_id, item_in_session))" try: session.execute(query) except Exception as e: print(e) # INSERT data # Set new file name. 
file = 'event_datafile_new.csv' with open(file, encoding = 'utf8') as f: csvreader = csv.reader(f) next(csvreader) # skip header for line in csvreader: # Assign the INSERT statements into the `query` variable query = "INSERT INTO song_in_session (session_id, item_in_session, artist, song, length)" query = query + " VALUES (%s, %s, %s, %s, %s)" ## Assign column elements in the INSERT statement. session.execute(query, (int(line[8]), int(line[3]), line[0], line[9], float(line[5]))) ``` #### Do a SELECT to verify that the data have been inserted into each table ``` # SELECT statement: # To answer Query-1, this CQL query # * matches session_id (=338) and item_in_session (=4) to # * return artist, song and length from song_in_session table. query = "SELECT artist, song, length \ FROM song_in_session \ WHERE session_id = 338 AND \ item_in_session = 4" try: songs = session.execute(query) except Exception as e: print(e) for row in songs: print (row.artist, row.song, row.length) ``` ### COPY AND REPEAT THE ABOVE THREE CELLS FOR EACH OF THE THREE QUESTIONS ### Query-2 ``` ## Query 2: Give me only the following: name of artist, song (sorted by itemInSession) and # user (first and last name) for userid = 10, sessionid = 182 # CREATE TABLE # This CQL query creates artist_in_session table which contains the following columns (with data type): # * user_id INT, # * session_id INT, # * artist TEXT, # * song TEXT, # * item_in_session INT, # * first_name TEXT, # * last_name TEXT, # # To uniquely identify each row and allow efficient distribution in Cassandra cluster, # * user_id and session_id columns: are used as Composite Partition Key in table's Primary Key. # * item_in_session column: is used as Clustering Key in table's Primary Key and allows sorting order of the data. 
query = "CREATE TABLE IF NOT EXISTS artist_in_session " query = query + "( user_id int, \ session_id int, \ artist text, \ song text, \ item_in_session int, \ first_name text, \ last_name text, \ PRIMARY KEY((user_id, session_id), item_in_session))" try: session.execute(query) except Exception as e: print(e) # INSERT data file = 'event_datafile_new.csv' with open(file, encoding = 'utf8') as f: csvreader = csv.reader(f) next(csvreader) for line in csvreader: query = "INSERT INTO artist_in_session (user_id, \ session_id, \ artist, \ song, \ item_in_session, \ first_name, \ last_name)" query = query + " VALUES (%s, %s, %s, %s, %s, %s, %s)" session.execute(query, (int(line[10]), int(line[8]), line[0], line[9], int(line[3]), line[1], line[4])) # SELECT statement: # To answer Query-2, this CQL query # * matches user_id (=10) and session_id (=182) to # * return artist, song, first_name, and last_name (of user) from artist_in_session table. query = "SELECT artist, song, first_name, last_name \ FROM artist_in_session \ WHERE user_id = 10 AND \ session_id = 182" try: artists = session.execute(query) except Exception as e: print(e) for row in artists: print (row.artist, row.song, row.first_name, row.last_name) ``` ### Query-3 ``` ## Query 3: Give me every user name (first and last) in my music app history who listened # to the song 'All Hands Against His Own' # CREATE TABLE # This CQL query creates user_and_song table which contains the following columns (with data type): # * song TEXT, # * user_id INT, # * first_name TEXT, # * last_name TEXT, # # To uniquely identify each row and allow efficient distribution in the Cassandra cluster, # * song column: is used as the Partition Key, and user_id as a Clustering Key, in the table's Primary Key.
query = "CREATE TABLE IF NOT EXISTS user_and_song " query = query + "( song text, \ user_id int, \ first_name text, \ last_name text, \ PRIMARY KEY(song, user_id))" try: session.execute(query) except Exception as e: print(e) # INSERT data file = 'event_datafile_new.csv' with open(file, encoding = 'utf8') as f: csvreader = csv.reader(f) next(csvreader) for line in csvreader: query = "INSERT INTO user_and_song (song, \ user_id, \ first_name, \ last_name)" query = query + " VALUES (%s, %s, %s, %s)" session.execute(query, (line[9], int(line[10]), line[1], line[4])) # SELECT statement: # To answer Query-3, this CQL query # * matches song (=All Hands Against His Own) to # * return first_name and last_name (of users) from user_and_song table. query = "SELECT first_name, last_name \ FROM user_and_song \ WHERE song = 'All Hands Against His Own'" try: users = session.execute(query) except Exception as e: print(e) for row in users: print (row.first_name, row.last_name) ``` ### Drop the tables before closing out the sessions ``` ## Drop the tables before closing out the sessions query = "DROP TABLE song_in_session" try: rows = session.execute(query) except Exception as e: print(e) query2 = "DROP TABLE artist_in_session" try: rows = session.execute(query2) except Exception as e: print(e) query3 = "DROP TABLE user_and_song" try: rows = session.execute(query3) except Exception as e: print(e) ``` ### Close the session and cluster connection ``` session.shutdown() cluster.shutdown() ```
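The three INSERT loops above each repeat the same inline string-to-type conversions. They can be factored into a small helper; a minimal sketch (the helper name and the sample row are illustrative, not part of the project code):

```python
# Column order in event_datafile_new.csv, as written in Part I:
# 0 artist, 1 firstName, 2 gender, 3 itemInSession, 4 lastName,
# 5 length, 6 level, 7 location, 8 sessionId, 9 song, 10 userId

def song_in_session_row(line):
    """Convert one CSV row (all strings) into the typed tuple bound to the
    song_in_session INSERT: (session_id, item_in_session, artist, song, length)."""
    return (int(line[8]), int(line[3]), line[0], line[9], float(line[5]))

row = ['Harmonia', 'Ryan', 'M', '0', 'Smith', '655.77',
       'free', 'San Jose', '583', 'Sehr kosmisch', '26']
print(song_in_session_row(row))  # (583, 0, 'Harmonia', 'Sehr kosmisch', 655.77)
```

Calling `session.execute(query, song_in_session_row(line))` then keeps each insert loop to one line and makes the column-index mapping testable on its own.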
## ``dnn-inference`` on the MNIST dataset ``` import numpy as np from tensorflow import keras from tensorflow.keras.datasets import mnist from tensorflow.keras.models import Sequential from tensorflow.keras.layers import Dense, Dropout, Flatten, Conv2D, MaxPooling2D from tensorflow.python.keras import backend as K import time from sklearn.model_selection import train_test_split from tensorflow.keras.optimizers import Adam, SGD np.random.seed(0) num_classes = 2 # input image dimensions img_rows, img_cols = 28, 28 # the data, split between train and test sets (x_train, y_train), (x_test, y_test) = mnist.load_data() X = np.vstack((x_train, x_test)) y = np.hstack((y_train, y_test)) ind = (y == 9) + (y == 7) X, y = X[ind], y[ind] X = X.astype('float32') X += .01*abs(np.random.randn(*X.shape)) # use *X.shape rather than hard-coding the filtered sample count y[y==7], y[y==9] = 0, 1 if K.image_data_format() == 'channels_first': X = X.reshape(X.shape[0], 1, img_rows, img_cols) input_shape = (1, img_rows, img_cols) else: X = X.reshape(X.shape[0], img_rows, img_cols, 1) input_shape = (img_rows, img_cols, 1) X /= 255.
# convert class vectors to binary class matrices y = keras.utils.to_categorical(y, num_classes) ## define the learning models def cnn(): model = Sequential() model.add(Conv2D(32, kernel_size=(3, 3), activation='relu', input_shape=input_shape)) model.add(Conv2D(64, (3, 3), activation='relu')) model.add(MaxPooling2D(pool_size=(2, 2))) model.add(Dropout(0.25)) model.add(Flatten()) model.add(Dense(128, activation='relu')) model.add(Dropout(0.5)) model.add(Dense(num_classes, activation='softmax')) model.compile(loss=keras.losses.binary_crossentropy, optimizer=keras.optimizers.Adam(0.0005), metrics=['accuracy']) return model model, model_mask = cnn(), cnn() ## fitting param from tensorflow.keras.callbacks import EarlyStopping es = EarlyStopping(monitor='val_accuracy', mode='max', verbose=0, patience=15, restore_best_weights=True) fit_params = {'callbacks': [es], 'epochs': 5, 'batch_size': 32, 'validation_split': .2, 'verbose': 0} split_params = {'split': 'one-split', 'perturb': None, 'num_perm': 100, 'ratio_grid': [.2, .4, .6, .8], 'perturb_grid': [.001, .005, .01, .05, .1], 'min_inf': 100, 'min_est': 1000, 'ratio_method': 'fuse', 'cv_num': 1, 'cp': 'min', 'verbose': 1} ## Inference based on dnn_inference from dnn_inference.BBoxTest import split_test ## testing based on learning models inf_feats = [[np.arange(19,28), np.arange(13,20)], [np.arange(21,28), np.arange(4, 13)],[np.arange(7,16), np.arange(9,16)]] shiing = split_test(inf_feats=inf_feats, model=model, model_mask=model_mask, change='mask', eva_metric='zero-one') p_value_tmp = shiing.testing(X, y, cv_num=3, cp='hommel', fit_params=fit_params, split_params=split_params) ## visualize testing results shiing.visual(X,y) print('P-values: %s' %p_value_tmp) ```
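The `channels_first`/`channels_last` branch in the preprocessing above is easy to get wrong (a typo there was the source of a `NameError` in an earlier version of this cell). The same logic can be checked in isolation with NumPy alone; `add_channel_axis` is a hypothetical helper name, not part of `dnn-inference`:

```python
import numpy as np

def add_channel_axis(X, data_format, img_rows=28, img_cols=28):
    """Reshape a stack of (N, H, W) grayscale images into the 4-D tensor
    layout Keras expects for the given backend data format."""
    if data_format == 'channels_first':
        return X.reshape(X.shape[0], 1, img_rows, img_cols)  # (N, C, H, W)
    return X.reshape(X.shape[0], img_rows, img_cols, 1)      # (N, H, W, C)

X = np.zeros((10, 28, 28), dtype='float32')
print(add_channel_axis(X, 'channels_first').shape)  # (10, 1, 28, 28)
print(add_channel_axis(X, 'channels_last').shape)   # (10, 28, 28, 1)
```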
# MPLPPT `mplppt` is a simple library made from some hacky scripts I used to use to convert matplotlib figures to powerpoint figures. Which makes this a hacky library, I guess 😀. ## Goal `mplppt` seeks to implement an alternative `savefig` function for `matplotlib` figures. This `savefig` function saves a `matplotlib` figure with a single axis to a powerpoint presentation with a single slide containing the figure. ## Installation ```bash pip install mplppt ``` ## Imports ``` import mplppt %matplotlib inline import numpy as np import matplotlib as mpl import matplotlib.pyplot as plt ``` ## Supported Conversions `mplppt` partly supports conversion of the following matplotlib objects: * Lines [`matplotlib.lines.Line2D`] * Rectangles [`matplotlib.patches.Rectangle`] * Polygons [`matplotlib.patches.Polygon`] * pcolormesh [`matplotlib.collections.QuadMesh`] * text [`matplotlib.text.Text`] So far `mplppt` does not (yet) support (among many other things): * markers (including tick marks) * linestyle ## Simple Example Below is an example that combines all of the supported objects in a single plot, which can then be exported to powerpoint: ``` # plot [Line2D] x = np.linspace(-1,5) y = np.sin(x) plt.plot(x,y,color='C1') # rectangle plt.gca().add_patch(mpl.patches.Rectangle((0, 0), 3, 0.5)) # polygon plt.gca().add_patch(mpl.patches.Polygon(np.array([[5.0,1.0],[4.0,-0.2],[2.0,0.6]]), color="red")) # pcolormesh x = np.linspace(0,1, 100) y = np.linspace(0,1, 100) X, Y = np.meshgrid(x,y) Z = X**2 + Y**2 plt.pcolormesh(X,Y,Z) # text text = plt.text(0,0,'hello') # set limits plt.ylim(-0.5,1) # Save figure to pptx mplppt.savefig('first_example.pptx') # show figure plt.show() ``` Which results in a powerpoint slide which looks as follows: ![simple powerpoint export screenshot](img/slide.png) ## Cool! What else can I do with this? You are not bound to using matplotlib!
The `mplppt` repository contains some standard powerpoint shapes that you can use. Try something like: ``` ppt = mplppt.Group() # Create a new group of objects ppt += mplppt.Rectangle(name='rect', x=0, y=0, cx=100, cy=100, slidesize=(10,5)) # add an object to the group ppt.save('second_example.pptx') # export the group as a ppt slide ``` ## Is any of this documented? No. ## How does this work? The repository contains a template folder, which is nothing more than an empty powerpoint presentation which is unzipped. After making a copy of the template folder and adding some `xml` code for the shapes, the modified folder is zipped into a `.pptx` file. ## Copyright © Floris Laporte - MIT License
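The zip-the-template trick described above can be reproduced with the standard library alone, since a `.pptx` file is an ordinary zip archive with a specific internal layout. A minimal sketch (the dummy folder here stands in for `mplppt`'s real template directory, and `folder_to_pptx` is a hypothetical name):

```python
import os
import shutil
import tempfile
import zipfile

def folder_to_pptx(template_dir, out_base):
    """Zip an unzipped-presentation folder and rename the archive to .pptx."""
    archive = shutil.make_archive(out_base, 'zip', root_dir=template_dir)
    pptx_path = out_base + '.pptx'
    os.replace(archive, pptx_path)  # rename .zip -> .pptx
    return pptx_path

# demo: a dummy folder standing in for the real template
src = tempfile.mkdtemp()
with open(os.path.join(src, '[Content_Types].xml'), 'w') as f:
    f.write('<?xml version="1.0"?>')
out = folder_to_pptx(src, os.path.join(tempfile.mkdtemp(), 'demo'))
print(zipfile.is_zipfile(out))  # True
```

PowerPoint will only open the result if the folder contents follow the Office Open XML layout, which is exactly what the unzipped template guarantees.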
# Crime Data File - Year 2018 ``` #Import libraries import pandas as pd from pandas import ExcelWriter from pandas import ExcelFile import os #Raw data file path filepath = os.path.join("","raw_crime_data","20190815_crimetrend_2018.xlsx") ``` ### Get Raw Data for processing ``` #Get raw data into data frames - bring all excel tab data into a single dataframe raw_data = pd.concat(pd.read_excel(filepath,sheet_name=None),ignore_index=True) #Check column headings print("Column headings:",raw_data.columns) #Raw Data preview raw_data.head(5) ``` ### Raw Data Cleanup ``` #Drop all columns between column index 2 to 23 (these are page numbers that correspond to each excel sheet) raw_data = raw_data.drop(raw_data.iloc[:, 2:23], axis = 1) #Check raw data sample raw_data.head(5) #Drop all rows that have only NaN records - these are blank lines from excel that got converted to df rows raw_data=raw_data.dropna(how='all') #Check sample data raw_data.head(5) #Drop first row raw_data=raw_data.drop(raw_data.index[0]) raw_data.head(2) #Rename the data frame columns crime_data= raw_data.rename(columns={"CURRENT DATE: 08/13/2019": "ORINumber", "INDEX CRIMES BY COUNTY FOR JAN - 2018 TO DEC - 2018": "Agency", "Unnamed: 11":"Months","Unnamed: 2":"Population","Unnamed: 3":"Murder","Unnamed: 4":"Rape", "Unnamed: 5":"Robbery","Unnamed: 6":"Assault","Unnamed: 7":"Burglary","Unnamed: 8":"Larceny", "Unnamed: 9":"Auto Theft"}) #Display a first-hand crime data set sample crime_data.head(2) #Drop first row which is now column names and view sample data crime_data=crime_data.drop(crime_data.index[0]) crime_data.head(5) ``` ### Process crime data and capture required data points ``` #There are only two rows - City and Rate Per 100,000 - Let's ignore County and other and only consider PD rows orinumber=[] city=[] population=[] murder=[] rape=[] robbery=[] assault=[] burglary=[] larceny=[] autotheft=[] icount=1 # iterate over rows with iterrows() for index, row in crime_data.iterrows(): #First row is always city/County
if row['Agency'] !="Rate Per 100,000" and icount==1: icity=row['Agency'] #Check if it's a City and if yes process with data capture if "PD" in str(icity): #Capture OriNumber, City and population orinumber.append(row['ORINumber']) city.append(icity) population.append(row['Population']) #Increment the counter icount+=1 else: continue #Access data using column names elif row['Agency']=="Rate Per 100,000" and icount==2: #reset counter icount=1 #Capture crime data try: murder.append(row['Murder']) except: murder.append(0) try: rape.append(row['Rape']) except: rape.append(0) try: robbery.append(row['Robbery']) except: robbery.append(0) try: assault.append(row['Assault']) except: assault.append(0) try: burglary.append(row['Burglary']) except: burglary.append(0) try: larceny.append(row['Larceny']) except: larceny.append(0) try: autotheft.append(row['Auto Theft']) except: autotheft.append(0) #Just check count of all captured data elements to make sure we captured info correctly if (len(orinumber)==len(city)==len(population)==len(murder)==len(rape)==len(robbery)==len(assault)==len(burglary)==len(larceny)==len(autotheft)): print(f"Successfully processed file. All data point counts matched. Number of towns processed: {len(city)}") else: print("Please check the raw file. Data point counts do not match...") #Create a crime data frame from series of data points town_crime= {"ORINumber":orinumber,"City":city,"Population":population,"Murder":murder,"Rape":rape, "Robbery":robbery,"Assault":assault,"Burglary":burglary,"Larceny":larceny,"Auto Theft":autotheft} crime_data=pd.DataFrame(town_crime) #Verify crime data points crime_data.head(10) #Export data to csv file - not required for the project #Finished crime data file path filepath = os.path.join("..","Final Output Data","Crime_Data_2018.csv") crime_data.to_csv(filepath, index = False) ```
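The near-identical try/except blocks in the row-processing loop above can be collapsed into a single helper; a minimal sketch (`safe_get` is a hypothetical name, not from the notebook):

```python
def safe_get(row, column, default=0):
    """Return row[column], or default when the column is missing or unreadable."""
    try:
        return row[column]
    except (KeyError, IndexError, TypeError):
        return default

row = {'Murder': 2, 'Rape': 1}
print(safe_get(row, 'Murder'))   # 2
print(safe_get(row, 'Robbery'))  # 0
```

Each capture line then becomes e.g. `murder.append(safe_get(row, 'Murder'))`, which keeps the loop short and the fallback value in one place.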
``` import tweepy import pandas as pd import sys import json # API credentials redacted - never commit real keys; load them from environment variables instead consumer_key = 'YOUR_CONSUMER_KEY' consumer_secret = 'YOUR_CONSUMER_SECRET' access_token = 'YOUR_ACCESS_TOKEN' access_token_secret = 'YOUR_ACCESS_TOKEN_SECRET' auth = tweepy.OAuthHandler(consumer_key, consumer_secret) auth.set_access_token(access_token, access_token_secret) api = tweepy.API(auth,wait_on_rate_limit=True,wait_on_rate_limit_notify=True,compression=True) search_words = "#Kashmir" date_since = "2015-01-01" #.............................................................. tweets = tweepy.Cursor(api.search, q=search_words, lang="en", since=date_since).items(10) tweets #............................................................... date=[] us=[] o=0; text=[] import csv for tweet in tweets: us.append(tweet) text.append(tweet.text) print(o) o+=1 print(o) #.................................................................. #.................................................................. #....................................................................
import pandas as pd df=pd.DataFrame() id=[] for i in range(o): id.append(us[i]._json['id']) date=[] for i in range(o): date.append(us[i]._json['created_at']) user=[] for i in range(o): user.append(us[i]._json['user']['screen_name']) text=[] for i in range(o): text.append(us[i]._json['text']) df['id']=id df['date']=date df['user']=user df['text']=text retweet=[] for i in range(o): a=us[i]._json['text'] if(a[0]=="R" and a[1]=="T"): retweet.append("True") else: retweet.append("False") df['retweet']=retweet retweet_count=[] for i in range(o): retweet_count.append(us[i]._json['retweet_count']) df["retweet_count"]=retweet_count friends_count=[] followers_count=[] favourites_count=[] location=[] source=[] for i in range(o): friends_count.append(us[i]._json['user']['friends_count']) followers_count.append(us[i]._json['user']['followers_count']) favourites_count.append(us[i]._json['user']['favourites_count']) location.append(us[i]._json['user']['location']) source.append(us[i]._json['source'][-11:-4]) df['location']=location df['source']=source df['followers_count']=followers_count df['friends_count']=friends_count df['favourite_count']=favourites_count post_by=[] for i in range(o): a=us[i]._json['entities']['user_mentions'] #print(a) if(a): post_by.append(us[i]._json['entities']['user_mentions'][0]['screen_name']) else: post_by.append("null") df['post_by']=post_by #............................................................................ #............................................................ #......................................................... df #...................... df.head(n=2) df['date'] datetime? 
df['date'] a=df['date'].values.tolist() a type(a[0]) b=df['followers_count'].values.tolist() b type(b[0]) b[0]+b[1] c=[] c.append(b[0]) for i in range(1,len(b)): c.append(c[i-1]+b[i]) c a=[] for i in range(len(b)): a.append(i) a import pandas as pd df=pd.read_csv('Datawarehouse.csv') df b=df['followers_count'].values.tolist() c=[] c.append(b[0]) for i in range(1,len(b)): c.append(c[i-1]+b[i]) c[len(b)-1] a=df['date'].values.tolist() df['date'] date=df['date'].values.tolist() date date1=date[:1000] c1=c[:1000] d=[] d.append(c[4925]) d.append(c[5511]) d.append(c[9279]) d.append(c[12384]) d.append(c[15629]) d.append(c[18381]) d.append(c[22174]) d.append(c[23401]) d date=["Thu Feb 20 00:00","Thu Feb 20 12:00","Fri Feb 21 00:00","Fri Feb 21 12:00","Sat Feb 22 00:00","Sat Feb 22 12:00","Sun Feb 23 00:00","Sun Feb 23 12:00"] plt.plot(date,d,linewidth=3) plt.plot(date,d,'bo') plt.xlabel('Date',size=15) plt.ylabel("Potential Impact",size=15) plt.tight_layout(pad=0,rect=(1,2,3,3.5)) plt.show() plt.xlabel?
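# The running-total loop above that builds `c` from `b` is a cumulative sum;
# itertools.accumulate computes the same thing directly. A minimal sketch
# with illustrative data (`b_demo` is not the notebook's follower counts):
from itertools import accumulate
b_demo = [3, 1, 4, 1, 5]
c_demo = list(accumulate(b_demo))
print(c_demo)  # [3, 4, 8, 9, 14]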
from collections import Counter aa=df['retweet'].values.tolist() len(aa) aa1=aa[0:4925] aa1=Counter(aa1) aa1 tweets1=aa1[0] retweets1=aa1[1] print(tweets1,retweets1) aa2=aa[4926:5511] aa2=Counter(aa2) print(aa2) tweets2=aa2[0] retweets2=aa2[1] print(tweets2,retweets2) aa3=aa[5512:9279] aa3=Counter(aa3) print(aa3) tweets3=aa3[0] retweets3=aa3[1] print(tweets3,retweets3) aa4=aa[9280:12384] aa4=Counter(aa4) print(aa4) tweets4=aa4[0] retweets4=aa4[1] print(tweets4,retweets4) aa5=aa[12385:15629] aa5=Counter(aa5) print(aa5) tweets5=aa5[0] retweets5=aa5[1] print(tweets5,retweets5) aa6=aa[15630:18381] aa6=Counter(aa6) print(aa6) tweets6=aa6[0] retweets6=aa6[1] print(tweets6,retweets6) aa7=aa[18382:22174] aa7=Counter(aa7) print(aa7) tweets7=aa7[0] retweets7=aa7[1] print(tweets7,retweets7) aa8=aa[22175:23401] aa8=Counter(aa8) print(aa8) tweets8=aa8[0] retweets8=aa8[1] print(tweets8,retweets8) tweet_list=[tweets1,tweets2,tweets3,tweets4,tweets5,tweets6,tweets7,tweets8] retweet_list=[retweets1,retweets2,retweets3,retweets4,retweets5,retweets6,retweets7,retweets8] plt.plot(date,tweet_list,label='number of tweets') plt.plot(date,tweet_list,'bo') plt.plot(date,retweet_list,label='number of retweets') plt.plot(date,retweet_list,'ro') plt.xlabel('Date',size=15) plt.tight_layout(pad=0,rect=(1,2,3,3.5)) plt.legend() plt.show() print('Nice Work') c=df['followers_count'].values.tolist() cc1=c[0:4925] cc2=c[4926:5511] cc3=c[5512:9279] cc4=c[9280:12384] cc5=c[12385:15629] cc6=c[15630:18381] cc7=c[18382:22174] cc8=c[22175:23401] import numpy as np cc11=np.sum(cc1) cc22=np.sum(cc2) cc33=np.sum(cc3) cc44=np.sum(cc4) cc55=np.sum(cc5) cc66=np.sum(cc6) cc77=np.sum(cc7) cc88=np.sum(cc8) cc11+cc22+cc33+cc44+cc55+cc66+cc77+cc88 ``` # New Section ``` impact=[cc11,cc22,cc33,cc44,cc55,cc66,cc77,cc88] plt.plot(date,impact,label='potential impact') plt.plot(date,impact,'bo') plt.xlabel('Date',size=15) plt.tight_layout(pad=0,rect=(1,2,3,3.5)) plt.legend() plt.show() import
matplotlib.pyplot as plt df=pd.read_csv('Datawarehouse.csv') plt.fill_between(date,0,d,label='cumulative impact') #plt.plot(date,d,'bo') plt.xlabel('Date',size=15) plt.ylabel("Potential Impact",size=15) plt.tight_layout(pad=0,rect=(1,2,3,4)) plt.fill_between(date,impact,label='potential impact') #plt.plot(date,impact,'ro') plt.xlabel('Date',size=15) plt.tight_layout(pad=0,rect=(1,2,3,4)) plt.legend() plt.show() df.head(n=2) dff=df.to_dict() uni=[] uniqq=[] for i in range(23402): if dff['user'][i] not in uni: uni.append(dff['user'][i]) uniqq.append(dff['followers_count'][i]) len(uni) newdf=pd.DataFrame() newdf['user']=uni newdf['follower_count']=uniqq newdf.head(n=5) newnewdf=newdf.sort_values('follower_count',ascending=False) newnewdf a=newnewdf['follower_count'].values type(a) follower_sum=np.sum(a) follower_sum follower_avg=follower_sum/len(a) # divide by len(a); total_followers was never defined print(follower_avg) df.head(n=2) c=df['retweet'].values from collections import Counter c=Counter(c) c ## how many tweets vs how many retweets fdf=df.sort_values('retweet',ascending=True) fdf import pandas as pd df=pd.read_csv("Datawarehouse (1).csv") df.head(5) dff=df.to_dict() #unique user #reach uni=[] date=[] sum=0 for i in range(23402): if dff['user'][i] not in uni: uni.append(dff['user'][i]) sum+=dff['followers_count'][i] date.append(dff['date'][i]) sum date import tweepy import pandas as pd # API credentials redacted - use your own keys access_token = "YOUR_ACCESS_TOKEN" access_token_secret = "YOUR_ACCESS_TOKEN_SECRET" consumer_key = "YOUR_CONSUMER_KEY" consumer_secret = "YOUR_CONSUMER_SECRET" col1=[] col2=[] auth = tweepy.OAuthHandler(consumer_key, consumer_secret) auth.set_access_token(access_token, access_token_secret) api = tweepy.API(auth,wait_on_rate_limit=True,wait_on_rate_limit_notify=True,compression=True) for user in tweepy.Cursor(api.friends, screen_name="hasanminhaj").items(400): print(user.screen_name) col1.append("hasanminhaj")
col2.append(user.screen_name) col1 col2 df = pd.DataFrame(index=None) df["source"]=col1 df["target"]=col2 print(df) df.to_csv('fs.csv',index=False) df c=4 slp=0 friend = col2.copy() col3=[] for i in friend[c:]: print(c) c=c+1 col3.append(i) for j in friend[c:]: friendship=api.show_friendship(source_screen_name=i, target_screen_name=j) if(friendship[0].followed_by): print(j,i) # i followed by j df.loc[len(df)]=[j,i] df.to_csv('fs.csv',index=False) if(friendship[1].followed_by): print(i,j) # j followed by i df.loc[len(df)]=[i,j] df.to_csv('fs.csv',index=False) c=25 slp=0 friend = col2.copy() col3=[] for i in friend[c:]: print(c) c=c+1 col3.append(i) for j in friend[c:]: friendship=api.show_friendship(source_screen_name=i, target_screen_name=j) if(friendship[0].followed_by): print(j,i) # i followed by j df.loc[len(df)]=[j,i] df.to_csv('fs.csv',index=False) if(friendship[1].followed_by): print(i,j) # j followed by i df.loc[len(df)]=[i,j] df.to_csv('fs.csv',index=False) c=30 slp=0 friend = col2.copy() col3=[] for i in friend[c:]: print(c) c=c+1 col3.append(i) for j in friend[c:]: friendship=api.show_friendship(source_screen_name=i, target_screen_name=j) if(friendship[0].followed_by): print(j,i) # i followed by j df.loc[len(df)]=[j,i] df.to_csv('fs.csv',index=False) if(friendship[1].followed_by): print(i,j) # j followed by i df.loc[len(df)]=[i,j] df.to_csv('fs.csv',index=False) c=50 slp=0 friend = col2.copy() col3=[] for i in friend[c:]: print(c) c=c+1 col3.append(i) for j in friend[c:]: friendship=api.show_friendship(source_screen_name=i, target_screen_name=j) if(friendship[0].followed_by): print(j,i) # i followed by j df.loc[len(df)]=[j,i] df.to_csv('fs.csv',index=False) if(friendship[1].followed_by): print(i,j) # j followed by i df.loc[len(df)]=[i,j] df.to_csv('fs.csv',index=False) c=55 slp=0 friend = col2.copy() col3=[] for i in friend[c:]: print(c) c=c+1 col3.append(i) for j in friend[c:]: friendship=api.show_friendship(source_screen_name=i, target_screen_name=j) 
if(friendship[0].followed_by): print(j,i) # i followed by j df.loc[len(df)]=[j,i] df.to_csv('fs.csv',index=False) if(friendship[1].followed_by): print(i,j) # j followed by i df.loc[len(df)]=[i,j] df.to_csv('fs.csv',index=False) c=70 slp=0 friend = col2.copy() col3=[] for i in friend[c:]: print(c) c=c+1 col3.append(i) for j in friend[c:]: friendship=api.show_friendship(source_screen_name=i, target_screen_name=j) if(friendship[0].followed_by): print(j,i) # i followed by j df.loc[len(df)]=[j,i] df.to_csv('fs.csv',index=False) if(friendship[1].followed_by): print(i,j) # j followed by i df.loc[len(df)]=[i,j] df.to_csv('fs.csv',index=False) c=300 slp=0 friend = col2.copy() col3=[] for i in friend[c:]: print(c) c=c+1 col3.append(i) for j in friend[c:]: friendship=api.show_friendship(source_screen_name=i, target_screen_name=j) if(friendship[0].followed_by): print(j,i) # i followed by j df.loc[len(df)]=[j,i] df.to_csv('fs.csv',index=False) if(friendship[1].followed_by): print(i,j) # j followed by i df.loc[len(df)]=[i,j] df.to_csv('fs.csv',index=False) import tweepy import pandas as pd # API credentials redacted - use your own keys access_token = "YOUR_ACCESS_TOKEN" access_token_secret = "YOUR_ACCESS_TOKEN_SECRET" consumer_key = "YOUR_CONSUMER_KEY" consumer_secret = "YOUR_CONSUMER_SECRET" col1=[] col2=[] auth = tweepy.OAuthHandler(consumer_key, consumer_secret) auth.set_access_token(access_token, access_token_secret) api = tweepy.API(auth,wait_on_rate_limit=True,wait_on_rate_limit_notify=True,compression=True) col2=['chrissyteigen', 'mitchrichmond23', 'ninaemlemdi', 'Jon_Favreau', 'drsanjaygupta', 'MattGertz', 'JasonSCampbell', 'jkbjournalist', 'tomhanks', 'HillaryClinton', 'anitakumar01', 'BoysClubNY', 'dmaq1', 'ChrisEvans', 'SarahxAnwer', 'Casey', 'martinandanar', 'MarkRuffalo', 'TheEllenShow', 'johnaugust', 'KimKap', '55buckets', 'SopanDeb', 'BetsyHodges', 'leeunkrich', 'bessbell', 'ezraklein', 'AOC', 'petrodraiz', 'chamath',
'sophchang', 'priyankachopra', 'voxdotcom', 'TiffanyHaddish', 'adriangrenier', 'jessetyler', 'johnlegend', 'alyankovic', 'perlmutations', 'jonnysun', 'nikolajcw', 'GabbySidibe', 'adampally', 'JasonRitter', 'kenjeong', 'pronounced_ing', 'ColeEscola', 'shresnik', 'zach_r1ce', 'JayPharoah', 'SacramentoKings', 'TVietor08', 'jaboukie', 'TheSamhita', 'tomsegura', 'jimmykimmel', 'gabegundacker', 'THEKIDMERO', 'Genius', 'brandonjinx', 'SheilaVee', 'jackwhitehall', 'russellhoward', 'dstfelix', 'yogrishiramdev', 'umxrshk', 'kporzee', 'EnesKanter', 'jennyhan', 'jenflanz', 'hodakatebi', 'patriotact', 'davidiserson', 'TheDweck', 'thatchriskelly', 'Jokoy', 'N_C_B', 'RonnieFieg', 'attell', 'KobiLibii', 'MB3FIVE', 'ACLU', 'BarackObama', 'nytimes', 'maddow', 'Sethrogen', 'bananapeele', 'kathygriffin', 'knguyen', 'GQMagazine', 'franciaraisa', 'BlairImani', 'AMANI2020', 'FullFrontalSamB', 'JHarden13', 'colbertlateshow', 'StephenAtHome', 'vanitaguptaCR', 'tannercolby', 'zachdilanzo', 'elizacossio', 'DevDell', 'nbcsnl', 'ColinJost', 'seanogallagher', 'LastWeekTonight', 'AnikKhan_', 'Samanth_S', 'GuzKhanOfficial', 'dissectpodcast', 'mattingebretson', 'FaizaPatelBCJ', 'Lefsetz', 'rtsimp', 'mrmedina', 'paulshipper', 'bejohnce', 'MamoudouNDiaye', 'Lilfilm', 'Felonious_munk', 'JuleykaLantigua', 'vcunningham', 'Reddsaidit', 'eshagupta2811', 'billyeichner', 'JimGaffigan', 'PreetBharara', 'A24', 'jeremysliew', 'MMFlint', 'heymichellelee', 'mikehofman', 'eveewing', 'franklinleonard', 'elseed', 'airfrance', 'Kaepernick7', 'aliamjadrizvi', 'arturodraws', 'StephenKing', 'SamHeughan', 'edgarwright', 'SusannaFogel', 'BarryJenkins', 'MitchyD', 'yunamusic', 'Kinglimaa', 'BenSPLATT', 'thevirdas', 'rakeshsatyal', 'blamethelabel', 'TonyRevolori', 'iffykaysar', 'StephenRDaw', 'AvanJogia', 'michaelsmith', 'MalPal711', 'mandamanda___', 'SheaSerrano', 'mallika_rao', 'CariChampion', 'MikeDrucker', 'jk_rowling', 'ava', 'ditzkoff', 'MekkiLeeper', 'iamledgin', 'charltonbrooker', 'ludwiggoransson', 
'RachelFeinstein', 'realDonaldTrump', 'EmilyeOberg', 'levie', 'VaynerMedia', 'EdgeofSports', 'tySchmitt5', 'JohnLeguizamo', 'MarkDuplass', 'VanJones68', 'DanAmira', 'ajv', 'finkd', 'davidrocknyc', 'alexwagner', 'Complex', 'seanseaevans', 'marquezjesso', 'BRANDONWARDELL', 'NPR', 'RickFamuyiwa', 'LewisHowes', 'DreamsickJustin', 'Gladwell', 'MsEmmaBowman', 'iamsrk', 'declanwalsh', 'mholland85', 'BKBMG', 'goodreads', 'Krewella', 'amritsingh', 'BlazerRamon', 'ianbremmer', 'RogueTerritory', 'aspiegelnpr', 'janellejcomic', 'KingOfQueenz', 'iJesseWilliams', 'lildickytweets', 'Vasu', 'ingridnilsen', 'joshrogin', 'sethmeyers', 'ramy', 'solomongeorgio', 'MrGeorgeWallace', 'DaveedDiggs', 'garyvee', 'dpmeyer', 'stevejang', 'ringer', 'paulwdowns', 'robinthede', 'tedtremper', 'JustinTrudeau', 'wfcgreen', 'amirkingkhan', 'farantahir_', 'mehdirhasan', 'hopesolo', 'SenSanders', 'isaiahlester', 'BernieSanders', 'NateParker', 'katepurchase', 'telfordk', 'AnandWrites', 'MRPORTERLIVE', 'chancetherapper', 'bcamp810', 'HOUSE_of_WARIS', 'Lilly', 'brokemogul', 'ThaboSefolosha', 'bomani_jones', 'ATTACKATHLETICS', 'DLeary0us', 'UnitedBlackout', 'LenaWaithe', 'gkhamba', 'JeffreyGurian', 'atifateeq', 'kendricklamar', 'MatthewModine', 'NickKristof', 'Dreamville', 'KillerMike', 'ryanleslie', 'HEIRMJ', 'TheNarcicyst', 'joshluber', 'nathanfielder', 'djkhaled', 'ayeshacurry', 'MazMHussain', 'davidfolkenflik', 'JensenKarp', 'michaelb4jordan', 'JordanPeele', 'MattHalfhill', 'IanMcKellen', 'iraglass', 'El_Silvero', 'JLaPuma', 'anildash', 'rameswaram', 'AllOfItWNYC', 'thismyshow', 'EugeneMirman', 'ShahanR', 'NinaDavuluri', 'GrantNapearshow', 'CarmichaelDave', 'EliseCz', 'Andrea_Simmons', 'showtoones', 'electrolemon', 'iamcolinquinn', 'abrahamjoseph', 'bfishbfish', 'SpecialRepMC', 'fannynordmark', 'JRHavlan', 'ambarella', 'deray', 'hugoandmarie', 'IStandWithAhmed', 'showmetheravi', 'DesiLydic', 'roywoodjr', 'ronnychieng', 'fakedansavage', 'chrislhayes', 'twitney', 'humansofny', 'Lin_Manuel', 'mdotbrown', 
'PinnapplePower', 'ChrisGethard', 'EasyPri', 'oldmanebro', 'marcecko', 'eugcordero', 'Iam1Cent', 'ajjacobs', 'melissamccarthy', 'TaheraHAhmad', 'patthewanderer', 'saladinahmed', 'SamSpratt', 'jamesmiglehart', 'AkilahObviously', 'tompapa', 'phlaimeaux', 'GBerlanti', 'AngeloLozada66', 'rojoperezzz', 'prattprattpratt', 'dherzog77', 'talkhoops', 'VRam_21', 'heavenrants', 'rastphan', 'jsmooth995', 'ComedyGroupie', 'chrizmillr', 'YourAnonNews', 'LuciaAniello', 'Babyballs69', 'paulfeig', 'GlitterCheese', 'mojorojo', 'BinaShah', 'ismat', 'RonanFarrow', 'tejucole', 'Zeyba', 'joncbenson', 'iamsambee', 'lsarsour', 'joshgondelman', 'robcorddry', 'kristenschaaled', 'danielradosh', 'timcarvell', 'AymanM', 'BergerWorld', 'iamjohnoliver', 'rabiasquared', 'Apey', 'heyrubes_', 'graceofwrath', 'raminhedayati', 'jonesinforjason', 'Variety', 'almadrigal', 'philiplord', 'HallieHaglund', 'dopequeenpheebs', 'rianjohnson', 'zainabjohnson', 'billburr', 'mileskahn', 'MrChadCarter', 'hodgman', 'sammorril', 'AaronCouch', 'brennanshroff', 'ComedyCellarUSA', 'RoryAlbanese', 'mattkoff', 'tedalexandro', 'JenaFriedman', 'OnePerfectShot', 'GeorgeKiel3', 'morninggloria', 'JSim07', 'TessaThompson_x', 'alicewetterlund', 'SarahTreem', 'baluchx', 'BENBALLER', 'TheDailyShow', 'larrywilmore', 'JessicaPilot212', 'Mowjood', 'Trevornoah', 'jinajones', 'AdamLowitt', 'jordanklepper'] col1=['hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 
'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 
'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 
'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj', 'hasanminhaj'] df=pd.DataFrame() df['source']=col1 df['target']=col2 import pandas as pd c=310 slp=0 friend = col2.copy() col3=[] for i in friend[c:]: print(c) c=c+1 col3.append(i) for j in friend[c:]: friendship=api.show_friendship(source_screen_name=i, target_screen_name=j) if(friendship[0].followed_by): print(j,i) # i followed by j df.loc[len(df)]=[j,i] df.to_csv('fin.csv',index=False) if(friendship[1].followed_by): print(i,j) # j followed by i df.loc[len(df)]=[i,j] df.to_csv('fin.csv',index=False) ```
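The nested loops at the end of the block above query `api.show_friendship` for every remaining pair of handles and append one directed edge per follow relationship found. With the Tweepy call stubbed out, the pairwise structure can be sketched as follows (the `follows` set is made-up sample data, not real Twitter relationships):

```python
# Sketch of the pairwise edge-building loop, with the rate-limited
# api.show_friendship call replaced by a lookup in a stub `follows` set.
from itertools import combinations

# Hypothetical sample data: (follower, followed) pairs.
follows = {("alice", "bob"), ("bob", "alice"), ("carol", "alice")}
users = ["alice", "bob", "carol"]

edges = []
for a, b in combinations(users, 2):  # each unordered pair exactly once
    if (a, b) in follows:            # a follows b
        edges.append((a, b))
    if (b, a) in follows:            # b follows a
        edges.append((b, a))

print(edges)  # [('alice', 'bob'), ('bob', 'alice'), ('carol', 'alice')]
```

Checking each unordered pair once but recording both directions mirrors the original's `friendship[0].followed_by` / `friendship[1].followed_by` branches; the quadratic number of API calls is also why the original writes `fin.csv` after every hit, as a checkpoint.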
github_jupyter
``` import sys import torch import torch.nn as nn import torch.nn.functional as F # Releasing the GPU memory torch.cuda.empty_cache() def conv3x3(in_planes, out_planes, stride=1, groups=1, dilation=1): """3x3 convolution with padding""" return nn.Conv2d(in_planes, out_planes, kernel_size=3, stride=stride, padding=dilation, groups=groups, bias=False, dilation=dilation) def conv1x1(in_planes, out_planes, stride=1): """1x1 convolution""" return nn.Conv2d(in_planes, out_planes, kernel_size=1, stride=stride, bias=False) class Bottleneck(nn.Module): expansion = 4 def __init__(self, inplanes, planes, stride=1, downsample=None, groups=1, base_width=64, dilation=1, norm_layer=None): super(Bottleneck, self).__init__() if norm_layer is None: norm_layer = nn.BatchNorm2d width = int(planes * (base_width / 64.)) * groups # Both self.conv2 and self.downsample layers downsample the input when stride != 1 self.conv1 = conv1x1(inplanes, width) self.bn1 = norm_layer(width) self.conv2 = conv3x3(width, width, stride, groups, dilation) self.bn2 = norm_layer(width) self.conv3 = conv1x1(width, planes * self.expansion) self.bn3 = norm_layer(planes * self.expansion) self.relu = nn.ReLU(inplace=True) self.downsample = downsample self.stride = stride def forward(self, x): identity = x out = self.conv1(x) out = self.bn1(out) out = self.relu(out) out = self.conv2(out) out = self.bn2(out) out = self.relu(out) out = self.conv3(out) out = self.bn3(out) if self.downsample is not None: identity = self.downsample(x) out += identity out = self.relu(out) return out class ResNet(nn.Module): def __init__(self, block, layers, num_classes=1000, zero_init_residual=False, groups=1, width_per_group=64, replace_stride_with_dilation=None, norm_layer=None): super(ResNet, self).__init__() if norm_layer is None: norm_layer = nn.BatchNorm2d self._norm_layer = norm_layer self.inplanes = 64 self.dilation = 1 if replace_stride_with_dilation is None: # each element in the tuple indicates if we should replace # the 2x2 
stride with a dilated convolution instead replace_stride_with_dilation = [False, False, False] if len(replace_stride_with_dilation) != 3: raise ValueError("replace_stride_with_dilation should be None " "or a 3-element tuple, got {}".format(replace_stride_with_dilation)) self.groups = groups self.base_width = width_per_group self.conv1 = nn.Conv2d(3, self.inplanes, kernel_size=7, stride=2, padding=3, bias=False) self.bn1 = norm_layer(self.inplanes) self.relu = nn.ReLU(inplace=True) self.maxpool = nn.MaxPool2d(kernel_size=3, stride=2, padding=1) self.layer1 = self._make_layer(block, 64, layers[0]) self.layer2 = self._make_layer(block, 128, layers[1], stride=2, dilate=replace_stride_with_dilation[0]) self.layer3 = self._make_layer(block, 256, layers[2], stride=2, dilate=replace_stride_with_dilation[1]) self.layer4 = self._make_layer(block, 512, layers[3], stride=2, dilate=replace_stride_with_dilation[2]) self.avgpool = nn.AdaptiveAvgPool2d((1, 1)) self.fc = nn.Linear(512 * block.expansion, num_classes) for m in self.modules(): if isinstance(m, nn.Conv2d): nn.init.kaiming_normal_(m.weight, mode='fan_out', nonlinearity='relu') elif isinstance(m, (nn.BatchNorm2d, nn.GroupNorm)): nn.init.constant_(m.weight, 1) nn.init.constant_(m.bias, 0) def _make_layer(self, block, planes, blocks, stride=1, dilate=False): norm_layer = self._norm_layer downsample = None previous_dilation = self.dilation if dilate: self.dilation *= stride stride = 1 if stride != 1 or self.inplanes != planes * block.expansion: downsample = nn.Sequential( conv1x1(self.inplanes, planes * block.expansion, stride), norm_layer(planes * block.expansion), ) layers = [] layers.append(block(self.inplanes, planes, stride, downsample, self.groups, self.base_width, previous_dilation, norm_layer)) self.inplanes = planes * block.expansion for _ in range(1, blocks): layers.append(block(self.inplanes, planes, groups=self.groups, base_width=self.base_width, dilation=self.dilation, norm_layer=norm_layer)) return 
nn.Sequential(*layers) def forward(self, x): x = self.conv1(x) x = self.bn1(x) x = self.relu(x) x = self.maxpool(x) x = self.layer1(x) x = self.layer2(x) x = self.layer3(x) x = self.layer4(x) x = self.avgpool(x) x = x.reshape(x.size(0), -1) x = self.fc(x) return x class Net (nn.Module): def __init__(self, num_class, freeze_conv=False, n_extra_info=0, p_dropout=0.5, neurons_class=256, feat_reducer=None, classifier=None): super(Net, self).__init__() resnet = ResNet(Bottleneck, [3, 4, 6, 3]) self.features = nn.Sequential(*list(resnet.children())[:-1]) # freezing the convolution layers if freeze_conv: for param in self.features.parameters(): param.requires_grad = False # Feature reducer if feat_reducer is None: self.feat_reducer = nn.Sequential( nn.Linear(2048, neurons_class), nn.BatchNorm1d(neurons_class), nn.ReLU(), nn.Dropout(p=p_dropout) ) else: self.feat_reducer = feat_reducer # Here comes the extra information (if applicable) if classifier is None: self.classifier = nn.Linear(neurons_class + n_extra_info, num_class) else: self.classifier = classifier self.collecting = False def forward(self, img, extra_info=None): x = self.features(img) # Flatting x = x.view(x.size(0), -1) x = self.feat_reducer(x) res = self.classifier(x) return res torch_model = Net(8) ckpt = torch.load("checkpoints/resnet-50_checkpoint.pth") torch_model.load_state_dict(ckpt['model_state_dict']) torch_model.eval() torch_model.cuda() print("Done!") ```
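As a quick sanity check on the downsampling in the network above — a stride-2 7×7 stem conv, a stride-2 max pool, and one stride-2 stage in each of `layer2`–`layer4` — the standard convolution output-size formula can be applied by hand. A minimal sketch, assuming a 224×224 input (the usual ImageNet size; nothing in the code above enforces it):

```python
# out = floor((in + 2*padding - kernel) / stride) + 1, ignoring dilation.
def conv_out(size, kernel, stride, padding):
    return (size + 2 * padding - kernel) // stride + 1

size = 224
size = conv_out(size, kernel=7, stride=2, padding=3)  # stem conv1 -> 112
size = conv_out(size, kernel=3, stride=2, padding=1)  # max pool   -> 56
for _ in range(3):  # the stride-2 downsample in layer2, layer3, layer4
    size = conv_out(size, kernel=1, stride=2, padding=0)
print(size)  # 7, which AdaptiveAvgPool2d((1, 1)) then reduces to 1x1
```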
github_jupyter
<a href="https://colab.research.google.com/github/BachiLi/A-Tour-of-Computer-Animation/blob/main/A_Tour_of_Computer_Animation_Table_of_Contents.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> **A Tour of Computer Animation** -- [Tzu-Mao Li](https://cseweb.ucsd.edu/~tzli/) This is a note that records my journey into computer animation. The structure of this tour is inspired by the books ["Physically Based Rendering: From Theory To Implementation"](https://www.pbr-book.org/), ["Ray Tracing in One Weekend"](https://raytracing.github.io/books/RayTracingInOneWeekend.html), and ["Numerical Tours"](https://www.numerical-tours.com/). Most books and articles about computer animation and physics simulation are mathematics-centric and do not contain much code or many experiments. This note is an attempt to bridge that gap. This is the hub for the chapters of this tour. I do not assume a background in computer animation or graphics, but I do assume basic familiarity with calculus and linear algebra. There will be quite a bit of math -- sorry. All the code is written in numpy and visualized with matplotlib, and it is unoptimized. We will focus slightly more on the foundations than on immediately practical implementations, so it might take a while before we can render fancy animations. Don't be afraid to play with the code. **Table of Contents** 1. [Newtonian Mechanics and Forward Euler Method](https://colab.research.google.com/drive/1K-Ly9vqZbymrAYe6Krg1ZfSMPY6CnAcY) 2. [Lagrangian Mechanics and Pendulums](https://colab.research.google.com/drive/1L4QJyq8hSlgllSYytYW5UHTPvd6w7Vz9) 3. [Time Integration and Stability](https://colab.research.google.com/drive/1mXTlYt2nRnXLrXpnP26BgjHKghjGPTCL?usp=sharing) 4. [Elastic Simulation and Mass Spring Systems](https://colab.research.google.com/drive/1erjL0a_KCVx8p3lDcE747k8wqbEaxYPY?usp=sharing) 5. 
Physics as Constraints Solving and Position-based Dynamics Some useful textbooks and lectures (they are not prerequisites; rather, this note should be used as accompanying material to them): - [David Levin: CSC417 - Physics-based Animation](https://www.youtube.com/playlist?list=PLTkE7n2CwG_PH09_q0Q7ttjqE2F9yGeM3) (the structure of this tour draws heavy inspiration from this course) - [The Feynman Lectures on Physics](https://www.feynmanlectures.caltech.edu/) - [Doug James: CS348C - Computer Graphics: Animation and Simulation](https://graphics.stanford.edu/courses/cs348c/) - [Arnold: Mathematical Methods of Classical Mechanics](https://www.amazon.com/Mathematical-Classical-Mechanics-Graduate-Mathematics/dp/0387968903) - [Witkin and Baraff: Physics-based Modeling](https://graphics.pixar.com/pbm2001/) - [Bargteil and Shinar: An Introduction to Physics-based Animation](https://cal.cs.umbc.edu/Courses/PhysicsBasedAnimation/) - [Åström and Akenine-Möller: Immersive Linear Algebra](http://immersivemath.com/ila/index.html) The notes are still a work in progress and probably contain a lot of errors. Please send me an email (tzli@ucsd.edu) if you have any suggestions or comments.
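Chapter 1's forward Euler method is simple enough to preview here. The following is a minimal plain-Python sketch (the chapters themselves use numpy and matplotlib) for a particle falling under constant gravity, with x' = v and v' = -g:

```python
def forward_euler(x0, v0, g=9.8, dt=0.01, steps=100):
    """Integrate x' = v, v' = -g with the forward Euler method."""
    x, v = x0, v0
    for _ in range(steps):
        # Both states are updated from the values at the *old* time step.
        x, v = x + dt * v, v + dt * (-g)
    return x, v

x, v = forward_euler(x0=0.0, v0=0.0)
# At t = 1 s the exact solution is v = -9.8 and x = -4.9; forward Euler
# recovers v exactly (the force is constant) but x only up to O(dt) error.
```

Shrinking `dt` (with `steps` scaled up accordingly) drives `x` toward the exact -4.9, illustrating the first-order accuracy that the time-integration chapter examines in more depth.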
github_jupyter
``` import urllib2 from bs4 import BeautifulSoup url = 'https://www.baidu.com/' content = urllib2.urlopen(url).read() soup = BeautifulSoup(content, 'html.parser') soup print(soup.prettify()) for tag in soup.find_all(True): print(tag.name) soup('head')# or soup.head soup.body soup.body.name soup.meta.string soup.find_all('noscript',content_='0;url=http://www.baidu.com/') soup.find_all('noscript')[0] soup.find_all(["head","script"]) soup.get_text() print(soup.get_text()) from IPython.display import display_html, HTML HTML('<iframe src=http://bbs.tianya.cn/list.jsp?item=free&nextid=%d&order=8&k=PX width=1000 height=500></iframe>') # the webpage we would like to crawl page_num = 0 url = "http://bbs.tianya.cn/list.jsp?item=free&nextid=%d&order=8&k=PX" % page_num content = urllib2.urlopen(url).read() #获取网页的html文本 soup = BeautifulSoup(content, "lxml") articles = soup.find_all('tr') print articles[0] print articles[1] len(articles[1:]) for t in articles[1].find_all('td'): print t td = articles[1].find_all('td') print td[0] print(td[0].text) print td[0].a['href'] print td[1] print td[2] print td[3] print td[4] records = [] for i in articles[1:]: td = i.find_all('td') title = td[0].text.strip() title_url = td[0].a['href'] author = td[1].text author_url = td[1].a['href'] views = td[2].text replies = td[3].text date = td[4]['title'] record = title + '\t' + title_url+ '\t' + author + '\t'+ author_url + '\t' + views+ '\t' + replies+ '\t'+ date records.append(record) print records[2] def crawler(page_num, file_name): try: # open the browser url = "http://bbs.tianya.cn/list.jsp?item=free&nextid=%d&order=8&k=PX" % page_num content = urllib2.urlopen(url).read() #获取网页的html文本 soup = BeautifulSoup(content, "lxml") articles = soup.find_all('tr') # write down info for i in articles[1:]: td = i.find_all('td') title = td[0].text.strip() title_url = td[0].a['href'] author = td[1].text author_url = td[1].a['href'] views = td[2].text replies = td[3].text date = td[4]['title'] record = title + 
'\t' + title_url+ '\t' + author + '\t'+ \ author_url + '\t' + views+ '\t' + replies+ '\t'+ date with open(file_name,'a') as p: # '''Note''':Append mode, run only once! p.write(record.encode('utf-8')+"\n") ##!!encode here to utf-8 to avoid encoding except Exception, e: print e pass # crawl all pages for page_num in range(10): print (page_num) crawler(page_num, 'D:/GitHub/computational-communication-2016/shenliting/homework4/tianya_bbs_threads_list.txt') import pandas as pd df = pd.read_csv('D:/GitHub/computational-communication-2016/shenliting/homework4/tianya_bbs_threads_list.txt', sep = "\t", header=None) df[: 2] len(df) df=df.rename(columns = {0:'title', 1:'link', 2:'author',3:'author_page', 4:'click', 5:'reply', 6:'time'}) df[:2] len(df.link) df.author_page[:5] def author_crawler(url, file_name): try: content = urllib2.urlopen(url).read() #获取网页的html文本 soup = BeautifulSoup(content, "lxml") link_info = soup.find_all('div', {'class', 'link-box'}) followed_num, fans_num = [i.a.text for i in link_info] try: activity = soup.find_all('span', {'class', 'subtitle'}) post_num, reply_num = [j.text[2:] for i in activity[:1] for j in i('a')] except: post_num, reply_num = 1, 0 record = '\t'.join([url, followed_num, fans_num, post_num, reply_num]) with open(file_name,'a') as p: # '''Note''':Append mode, run only once! p.write(record.encode('utf-8')+"\n") ##!!encode here to utf-8 to avoid encoding except Exception, e: print e, url record = '\t'.join([url, 'na', 'na', 'na', 'na']) with open(file_name,'a') as p: # '''Note''':Append mode, run only once! 
p.write(record.encode('utf-8')+"\n") ##!!encode here to utf-8 to avoid encoding pass for k, url in enumerate(df.author_page): if k % 10==0: print k author_crawler(url, 'D:/GitHub/computational-communication-2016/shenliting/homework4/tianya_bbs_threads_author_info.txt') url = df.author_page[1] content = urllib2.urlopen(url).read() #获取网页的html文本 soup1 = BeautifulSoup(content, "lxml") activity = soup1.find_all('span', {'class', 'subtitle'}) post_num, reply_num = [j.text[2:] for i in activity[:1] for j in i('a')] print post_num, reply_num print activity[0] df.link[2] url = 'http://bbs.tianya.cn' + df.link[2] url from IPython.display import display_html, HTML HTML('<iframe src=http://bbs.tianya.cn/post-free-2848797-1.shtml width=1000 height=500></iframe>') # the webpage we would like to crawl post = urllib2.urlopen(url).read() #获取网页的html文本 post_soup = BeautifulSoup(post, "lxml") #articles = soup.find_all('tr') print (post_soup.prettify())[:1000] pa = post_soup.find_all('div', {'class', 'atl-item'}) len(pa) print pa[0] print pa[1] print pa[0].find('div', {'class', 'bbs-content'}).text.strip() print pa[87].find('div', {'class', 'bbs-content'}).text.strip() pa[1].a print pa[0].find('a', class_ = 'reportme a-link') print pa[0].find('a', class_ = 'reportme a-link')['replytime'] print pa[0].find('a', class_ = 'reportme a-link')['author'] for i in pa[:10]: p_info = i.find('a', class_ = 'reportme a-link') p_time = p_info['replytime'] p_author_id = p_info['authorid'] p_author_name = p_info['author'] p_content = i.find('div', {'class', 'bbs-content'}).text.strip() p_content = p_content.replace('\t', '') print p_time, '--->', p_author_id, '--->', p_author_name,'--->', p_content, '\n' post_soup.find('div', {'class', 'atl-pages'})#['onsubmit'] post_pages = post_soup.find('div', {'class', 'atl-pages'}) post_pages = post_pages.form['onsubmit'].split(',')[-1].split(')')[0] post_pages url = 'http://bbs.tianya.cn' + df.link[2] url_base = ''.join(url.split('-')[:-1]) + '-%d.shtml' url_base 
def parsePage(pa): records = [] for i in pa: p_info = i.find('a', class_ = 'reportme a-link') p_time = p_info['replytime'] p_author_id = p_info['authorid'] p_author_name = p_info['author'] p_content = i.find('div', {'class', 'bbs-content'}).text.strip() p_content = p_content.replace('\t', '').replace('\n', '')#.replace(' ', '') record = p_time + '\t' + p_author_id+ '\t' + p_author_name + '\t'+ p_content records.append(record) return records import sys def flushPrint(s): sys.stdout.write('\r') sys.stdout.write('%s' % s) sys.stdout.flush() url_1 = 'http://bbs.tianya.cn' + df.link[10] content = urllib2.urlopen(url_1).read() #获取网页的html文本 post_soup = BeautifulSoup(content, "lxml") pa = post_soup.find_all('div', {'class', 'atl-item'}) b = post_soup.find('div', class_= 'atl-pages') b url_1 = 'http://bbs.tianya.cn' + df.link[0] content = urllib2.urlopen(url_1).read() #获取网页的html文本 post_soup = BeautifulSoup(content, "lxml") pa = post_soup.find_all('div', {'class', 'atl-item'}) a = post_soup.find('div', {'class', 'atl-pages'}) a a.form if b.form: print 'true' else: print 'false' import random import time def crawler(url, file_name): try: # open the browser url_1 = 'http://bbs.tianya.cn' + url content = urllib2.urlopen(url_1).read() #获取网页的html文本 post_soup = BeautifulSoup(content, "lxml") # how many pages in a post post_form = post_soup.find('div', {'class', 'atl-pages'}) if post_form.form: post_pages = post_form.form['onsubmit'].split(',')[-1].split(')')[0] post_pages = int(post_pages) url_base = '-'.join(url_1.split('-')[:-1]) + '-%d.shtml' else: post_pages = 1 # for the first page pa = post_soup.find_all('div', {'class', 'atl-item'}) records = parsePage(pa) with open(file_name,'a') as p: # '''Note''':Append mode, run only once! 
for record in records: p.write('1'+ '\t' + url + '\t' + record.encode('utf-8')+"\n") # for the 2nd+ pages if post_pages > 1: for page_num in range(2, post_pages+1): time.sleep(random.random()) flushPrint(page_num) url2 =url_base % page_num content = urllib2.urlopen(url2).read() #获取网页的html文本 post_soup = BeautifulSoup(content, "lxml") pa = post_soup.find_all('div', {'class', 'atl-item'}) records = parsePage(pa) with open(file_name,'a') as p: # '''Note''':Append mode, run only once! for record in records: p.write(str(page_num) + '\t' +url + '\t' + record.encode('utf-8')+"\n") else: pass except Exception, e: print e pass url = 'http://bbs.tianya.cn' + df.link[2] file_name = 'D:/GitHub/computational-communication-2016/shenliting/homework4/tianya_bbs_threads_test.txt' crawler(url, file_name) for k, link in enumerate(df.link): flushPrint(link) if k % 10== 0: print 'This it the post of : ' + str(k) file_name = 'D:/GitHub/computational-communication-2016/shenliting/homework4/tianya_bbs_threads_network.txt' crawler(link, file_name) dtt = [] with open('D:/GitHub/computational-communication-2016/shenliting/homework4/tianya_bbs_threads_network.txt', 'r') as f: for line in f: pnum, link, time, author_id, author, content = line.replace('\n', '').split('\t') dtt.append([pnum, link, time, author_id, author, content]) len(dtt) dt = pd.DataFrame(dtt) dt[:5] dt=dt.rename(columns = {0:'page_num', 1:'link', 2:'time', 3:'author',4:'author_name', 5:'reply'}) dt[:5] dt.reply[:100] 18459/50 ```
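One fragile step in the `crawler` above is recovering a thread's page count from the pager form's `onsubmit` attribute. The parsing can be illustrated in isolation with a made-up attribute value (the exact JavaScript Tianya emits may differ):

```python
# Hypothetical onsubmit value; the last argument before ')' is the page count.
onsubmit = "return goPage(this,'free',2848797,8)"
post_pages = int(onsubmit.split(',')[-1].split(')')[0])
print(post_pages)  # 8
```

Because this depends on the exact shape of the site's JavaScript, the crawler falls back to `post_pages = 1` whenever the pager form is absent.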
github_jupyter
# Time Series Analysis of NAICS: North American employment data from 1997 to 2019 Import necessary libraries ``` import os import pandas as pd import numpy as np import datetime as dt from glob import glob import re import warnings import matplotlib.pyplot as plt import seaborn as sns from openpyxl import load_workbook from tqdm import tqdm warnings.filterwarnings('ignore') ``` ## I. Data loading and cleaning ``` def create_n_digits_df(data_path, n): """ Make a pandas dataframe from all n-digit code industries, ordered from the oldest (Jan 1997) to the newest (Dec 2019) Parameters: ---------- data_path : str or Pathlike object Path to the CSV data files n : int Number of digits in NAICS code Returns: ------- pandas.core.frame.DataFrame: DataFrame with n-digit industries sorted in ascending dates """ try: assert isinstance(n, int) assert n in [2,3,4] except: print(f'Wrong value of the parameter n!!!\nExpected an integer 2, 3 or 4 but got {n}.') return list_n = [x for x in os.listdir(data_path) if re.search(f'_{n}NAICS', x)] df = pd.read_csv(data_path + list_n[-1]) for i in range(len(list_n)-1): df2 = pd.read_csv(data_path+list_n[i]) df = df.append(df2) return df data_path = 'data/' df2 = create_n_digits_df(data_path, 2) df3 = create_n_digits_df(data_path, 3) df4 = create_n_digits_df(data_path, 4) df2.head() df3.head() df4.head() def clean_df(df): """ Extract NAICS code from NAICS column and add it as a new column, Remove code from NAICS column Create 'DATE' column from SYEAR and SMTH Drop 'SYEAR' and 'SMTH' columns Parameters: ---------- df : pandas.core.frame.DataFrame Dataframe to transform Returns: -------- pandas.core.frame.DataFrame Dataframe with 'NAICS_CODE' and 'DATE' columns, and NAICS column without code. 
""" def extract_code(x): if type(x) == int: return [x] if '[' not in x: y = None elif '-' in x: code_len = len(x.split('[')[1].replace(']', '').split('-')[0]) x = x.split('[')[1].replace(']', '').split('-') if code_len == 2: y = [*range(int(x[0]), int(x[1])+1)] else: y = [int(i) for i in x] else: x = x.split('[')[1].replace(']', '') y = [int(x)] return y df['NAICS_CODE'] = df['NAICS'].apply(extract_code) df['NAICS'] = df['NAICS'].astype('str').str.split('[').str.get(0).str.strip() df['DATE'] = pd.to_datetime(df['SYEAR'].astype('str') + df['SMTH'].astype('str'), format='%Y%m').dt.strftime('%Y-%m') df.drop(columns=['SYEAR', 'SMTH'], inplace=True) return df df2 = clean_df(df2) df3 = clean_df(df3) df4 = clean_df(df4) df2.head(20) def find_2(df): fi = False for code in df.NAICS_CODE: if code == None: pass else: if len(code) >= 2: fi = True break return fi print(f'Lines with two or more codes in df2?\t{find_2(df2)}\n\ Lines with two or more codes in df3?\t{find_2(df3)}\n\ Lines with two or more codes in df4?\t{find_2(df4)}') # df3.head() # df4.head(20) ``` Since only the 2-digit dataset contains lines with more than one code, it is conceivable to drop those lines, but further information from *LMO detailed industries by NAICS* is needed. <br> We now load the detailed industries by NAICS data, which serves as a bridge between the n-digit dataframes and the output file. 
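The bracket handling inside `clean_df` above turns labels such as `'Other [31-33]'` into lists of integer codes. A stripped-down sketch of that logic (unlike the notebook's `extract_code`, it always expands a dash as a range; the sample labels are illustrative, not taken from the dataset):

```python
def extract_codes(label):
    """Turn 'Name [a-b]' or 'Name [a, b]' into a list of int codes."""
    if '[' not in label:
        return None
    body = label.split('[')[1].rstrip(']')
    if '-' in body:
        lo, hi = body.split('-')
        return list(range(int(lo), int(hi) + 1))
    return [int(part) for part in body.split(',')]

print(extract_codes('Other [31-33]'))   # [31, 32, 33]
print(extract_codes('Trade [41, 45]'))  # [41, 45]
print(extract_codes('Construction'))    # None
```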
``` lmo = 'LMO_Detailed_Industries_by_NAICS.xlsx' df_lmo = pd.read_excel(data_path+lmo, usecols=[0,1]) df_lmo.head() def format_lmo_naics(x): if type(x) == int: y = [x] else: x = x.replace('&', ',').split(',') y = [int(i.strip()) for i in x] return y df_lmo.NAICS = df_lmo.NAICS.apply(format_lmo_naics) df_lmo.head() df_lmo.isna().any() df_lmo['code_len'] = df_lmo.NAICS.apply(lambda x : len(str(x[0]))) df_lmo.head() def check_lmo(df): fi = False i = 0 for naics in df.NAICS: if len(naics) >= 2 and len(str(naics[0])) == 2: fi = True i += 1 # break return i check_lmo(df_lmo) ``` There is a single line with 2 or more 2-digit NAICS codes. If there is no line in *df2* with the same NAICS codes, we can drop all lines from *df2* with two or more NAICS codes, thus making it possible and safe to use integer NAICS codes in the n-digit datasets. ``` for i in range(len(df_lmo)): if len(df_lmo.NAICS.iloc[i]) >= 2 and df_lmo.code_len.iloc[i] == 2: code_check_df2 = df_lmo.iloc[i].NAICS print(f'Code to check in df2 : {code_check_df2}') break safety = 'Safe to drop lines with multiple codes!!!' for naic in df2.NAICS_CODE: if naic == code_check_df2: safety = 'Unsafe to drop lines with multiple codes!!!' break print(safety) ``` We can safely drop lines with more than one NAICS code from the 2-digit dataset and convert the *NAICS_CODE* column to *int* ``` df2['to_drop'] = df2.NAICS_CODE.apply(lambda x : len(x)>=2) df2 = df2[df2.to_drop == False] df2.drop('to_drop', axis=1, inplace=True) df2.head() for df in [df2, df3, df4]: df.dropna(inplace = True) df.NAICS_CODE = df.NAICS_CODE.apply(lambda x: int(x[0])) # df = df.reindex(columns= ['DATE', 'NAICS', 'NAICS_CODE', '_EMPLOYMENT_']) df4.head() # For Github # dic = {'x':['Other [326, 327, 334, 335, 337 & 339]', # 'Food[311 & 312]']} # dfx = pd.DataFrame(dic) ``` ## II. 
Filling out the output file ``` out_file = 'Data_Output_Template.xlsx' df_out = pd.read_excel(data_path+out_file, usecols = [0,1,2,3]) df_out.head(2) df_out['DATE'] = pd.to_datetime(df_out['SYEAR'].astype('str') + df_out['SMTH'].astype('str'), format='%Y%m').dt.strftime('%Y-%m') df_out.drop(columns=['SYEAR', 'SMTH'], inplace=True) df_out.head() def employment_rate(i): global df_lmo, df2, df3, df4, df_out employment_out = 0 naics_name = df_out['LMO_Detailed_Industry'].iloc[i] sdate = df_out.DATE.iloc[i] naics_codes = df_lmo[df_lmo['LMO_Detailed_Industry']==naics_name].NAICS.item() # Choose which n-digit dataset to look in code_length = df_lmo[df_lmo['LMO_Detailed_Industry']==naics_name].code_len.item() if code_length == 2: df = df2 elif code_length == 3: df = df3 else: df = df4 dfg = df.groupby(['NAICS_CODE', 'DATE'], sort=False).agg({'_EMPLOYMENT_': sum}) for code in naics_codes: try: employment = dfg.loc[(code, sdate)].item() except: employment = 0 employment_out += employment return int(employment_out) for i in tqdm(range(len(df_out))): df_out.Employment.iloc[i] = employment_rate(i) df_out.Employment = df_out.Employment.apply(lambda x : int(x)) df_out.head(20) ``` We can now copy the employment values per NAICS per date into the Excel file. ``` wb = load_workbook(data_path+'Data_Output_Template.xlsx') ws = wb.active for i in tqdm(range(len(df_out))): cell = f'D{i+2}' ws[cell] = df_out.Employment.iloc[i] wb.save(data_path+'Data_Output.xlsx') wb.close() ``` ## III. Time Series Analysis: Answers to the questions ### III.1. How has employment in Construction evolved over time, and how does this compare to total employment across all industries? #### a. 
Evolution of employment in construction ``` construction = df_out[df_out.LMO_Detailed_Industry == 'Construction'] plt.figure(figsize=(10,5)) g = sns.lineplot(x='DATE', y='Employment', data = construction) g.set_xticks([*range(0,264,12)]) g.set_xticklabels([dat for dat in construction.DATE if '-01' in dat], rotation = 90) g.set_title('Employment in construction from Jan 1997 to Dec 2018') plt.scatter(x=['2004-02', '2008-08', '2016-01'], y=[120000, 232750, 197250], c='r', s=100) plt.axvline(x='2004-02', ymax=0.18, linestyle='--', color='r') plt.axvline(x='2008-08', ymax=0.88, linestyle='--', color='r') plt.axvline(x='2016-01', ymax=0.68, linestyle='--', color='r') plt.annotate('2004-02', xy=('2004-02', 120000), xytext=('2006-01', 140000), arrowprops={'arrowstyle':'->'}) plt.annotate('2016-01', xy=('2016-01', 197250), xytext=('2013-01', 160000), arrowprops={'arrowstyle':'->'}) plt.annotate('2008-08', xy=('2008-08', 232750), xytext=('2008-01', 240000)) plt.grid() plt.show() ``` There are four different sections in the evolution of the employment rate in construction from $1997$ to $2018$: two sections of global steadiness (**Jan 1997** $-$ **Feb 2004** and **Aug 2008** $-$ **Jan 2016**), during which the employment rate oscillates around a roughly constant level, and two sections of steep increase (**Feb 2004** $-$ **Aug 2008** and **Jan 2016** $-$ **Dec 2018**). #### b. 
Comparison of employment in construction with overall employment ``` df_total_per_date = df_out.groupby('DATE').agg({'Employment':np.sum}) df_total_per_date['Construction_emp(%)'] = (construction.Employment).values * 100 /df_total_per_date.Employment.values df_total_per_date['Construction_emp(%)'] = df_total_per_date['Construction_emp(%)'].apply(lambda x : round(x,2)) df_total_per_date.head() df_total_per_date[df_total_per_date['Construction_emp(%)'] == df_total_per_date['Construction_emp(%)'].max()] df_total_per_date[df_total_per_date['Construction_emp(%)'] == df_total_per_date['Construction_emp(%)'].min()] plt.figure(figsize = (10,5)) g = sns.lineplot(y='Construction_emp(%)', x='DATE', data=df_total_per_date) g.set_xticks([*range(0,264,12)]) g.set_xticklabels([dat for dat in construction.DATE if '-01' in dat], rotation = 90) g.set_title('Percentage of Employment in Construction from 1997 to Dec 2018') plt.grid() plt.show() ``` We notice that the share of employment in construction follows the same pattern as the evolution of employment in construction, with a maximum value of $10.23\%$ in **Aug 2008**, right at the end of the first steep-increase phase of employment in construction. In contrast, the lowest value was registered in **Jan 2001**, at only $5.18\%$. ### III.2. When (year) was employment the highest within the dedicated time frame? ``` df_out['DATE'] = pd.to_datetime(df_out['DATE']) emp_year = df_out.groupby(df_out.DATE.dt.year).agg({'Employment':sum}) # emp_year = df_out.groupby('DATE').agg({'Employment':sum}) emp_year.reset_index() emp_year.head() emp_year.query('Employment==Employment.max()') plt.figure(figsize=(10,5)) sns.lineplot(x=emp_year.index, y='Employment', data =emp_year) plt.title('Total employment per Year') plt.grid() plt.show() ``` As we would have expected, $2018$ is the year with the largest employment, with a total of $29922000$ employees. ### III.3. 
Which industry sector, subsector or industry group has had the highest number of employees? ``` total_counts = df_out.groupby('LMO_Detailed_Industry')['Employment'].sum().sort_values(ascending=False) total_df = pd.DataFrame({'Industry':total_counts.index, 'Employments':total_counts.values}) fig, ax = plt.subplots(figsize=(10,10)) ax = sns.barplot(x='Employments', y='Industry', data = total_df) ax.tick_params(axis='y', labelsize=8) plt.grid() plt.show() ``` Let's find the number of digits in the NAICS code of **Other retail trade (excluding cars and personal care)** and the number of industry subsectors involved. ``` df_lmo[df_lmo['LMO_Detailed_Industry']==total_df.head(1).Industry.item()].code_len.item() df_lmo[df_lmo['LMO_Detailed_Industry']==total_df.head(1).Industry.item()].NAICS.item() ``` As shown by the above figure, **Other retail trade (excluding cars and personal care)** is the industry subsector (three-digit NAICS) with the largest number of employees. However, this category includes $11$ different industry subsectors, so **construction** is definitely the industry sector that employs the most people. ### III.4. As a rapidly developing field, if the Data Science industry level (number of NAICS digits) is less than or equal to 4, how has Data Science employment evolved over time? Otherwise, what is the lowest industry level above Data Science, and how did it evolve from 1997 to 2019? The Data Science NAICS code is $518210$, and the lowest industry level containing it in our data (four-digit NAICS) is $5182$, named **Data processing, hosting, and related services**. 
[[1]](#naics1), [[2]](#naics2) ``` data_science = df4[df4.NAICS_CODE == 5182][['_EMPLOYMENT_', 'DATE']].reset_index() data_science.drop('index', axis=1, inplace=True) data_science.head() plt.figure(figsize = (10,5)) g = sns.lineplot(y='_EMPLOYMENT_', x='DATE', data=data_science) g.set_xticks([*range(0,264,12)]) g.set_xticklabels([dat for dat in construction.DATE if '-01' in dat], rotation = 90) g.set_title('Evolution of data related employment from 1997 to Dec 2018') plt.grid() plt.show() data_science.query('_EMPLOYMENT_ ==_EMPLOYMENT_.max()') ``` We observe that data-related employment remained roughly constant until $2013$, then increased until mid-$2017$, levelled off for a while, before reaching the peak of $6500$ employees in **Aug 2018**. ### III.5. Are there industry sectors, subsectors or industry groups for which the employment rate decreased over time? To answer this question, we will plot the time series evolution of all the $59$ industries included in our data. ``` data_out = pd.DataFrame(df_out.reset_index().groupby(['LMO_Detailed_Industry', 'DATE'], as_index=False)['Employment'].sum()) data_out = data_out.pivot(index='DATE', columns='LMO_Detailed_Industry', values='Employment') data_out.index.freq = 'MS' data_out.fillna(0, inplace=True) data_out.plot(subplots=True, figsize=(10, 120)) plt.show() ``` At first sight, the employment rate in the following industries decreased over time: - Wood product manufacturing - Telecommunications - Support activities for agriculture and forestry - Rail transportation - Primary metal manufacturing - Paper manufacturing - Fishing, hunting and trapping What would be the potential factors that caused employment to decrease in those industries from $1997$ to $2018$? <br> Additionally, these visualizations are very cumbersome; a dashboard for optimizing their look and presentation is under construction. 
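The list above was read off the subplot grid by eye; the same question can be checked quantitatively by fitting a linear trend to each industry's series and flagging negative slopes. A minimal sketch, using a synthetic two-column stand-in for the pivoted `data_out` frame (the real frame has all $59$ industries):

```python
import numpy as np
import pandas as pd

# Synthetic stand-in for the pivoted data_out frame: one column per industry.
idx = pd.date_range('1997-01-01', '2018-12-01', freq='MS')
t = np.arange(len(idx))
data_out = pd.DataFrame({
    'Construction': 100_000 + 500 * t,        # growing industry
    'Paper manufacturing': 20_000 - 30 * t,   # shrinking industry
}, index=idx)

def trend_slope(series):
    """Least-squares slope of employment vs. time (employees per month)."""
    months = np.arange(len(series))
    return np.polyfit(months, series.values, 1)[0]

slopes = data_out.apply(trend_slope)
declining = slopes[slopes < 0].index.tolist()
print(declining)  # industries whose fitted trend is negative
```

A single fitted slope obviously hides non-monotonic behaviour (such as the two-phase pattern seen for construction), but it is a quick cross-check of the visual scan.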
## References <a id="naics1"></a> [1] [North American Industry Classification System (NAICS) Canada](https://www.statcan.gc.ca), 2017 Version 1.0, p. $48$ <a id="naics2"></a> [2] [NAICS code description](https://www.naics.com/naics-code-description/?code=518210)
``` %load_ext autoreload %autoreload 2 %load_ext watermark %watermark -v -n -m -p numpy,scipy,sklearn,pandas %matplotlib inline import sys import pandas as pd import numpy as np import seaborn as sns import os import nolds import data import mne from random import randint from config import * from data.utils import prepare_dfs, prepare_resp_non, prepare_dep_non, get_metapkl from data.data_files import files_builder, DataKind from classification.prediction import predict from classification.scorers import scorer_factory metapkl = get_metapkl() meta_df = pd.read_excel(os.path.join(RAW_ROOT, META_FILE_NAME), index_col='ID', names=META_COLUMN_NAMES) data = np.transpose(files_builder(DataKind('processed')).single_file('1a.fif').df.values) def ff(row, col, t1, t2=0): if row[col] <= t1: return -1 elif row[col] <= t2: return 0 else: return 1 metapkl['resp'] = metapkl.apply(lambda row: ff(row, 'change', 1.8), axis=1) metapkl = metapkl.astype({'resp': 'category'}) print(metapkl.loc[(slice(None), 'a'), 'resp'].value_counts()) from itertools import combinations from sklearn import svm, datasets, metrics from sklearn.feature_selection import (SelectFromModel, RFE, RFECV, SelectKBest, mutual_info_classif, chi2) from sklearn.linear_model import LogisticRegression from sklearn.model_selection import train_test_split, cross_validate, GridSearchCV from sklearn.neighbors import KNeighborsClassifier from genetic_selection import GeneticSelectionCV from sklearn.neural_network import MLPClassifier from sklearn.naive_bayes import GaussianNB estimators = { svm.SVC(kernel='linear', class_weight='balanced'): { 'C': [0.02, 0.5, 1.0, 1.5, 2, 3, 4, 5, 10, 50], 'kernel': ('linear', 'poly', 'rbf'), 'gamma': ('auto', 'scale'), # 'decision_function_shape' : ('ovo', 'ovr'), }, LogisticRegression(class_weight='balanced'): { 'C': [0.02, 0.5, 1.0, 2, 3, 4, 5, 10, 20, 50], 'penalty': ['l2', 'l1'], }, } def get_selectors(estimator): return { 'RFECV': RFECV(estimator, 5), 'SelectFromModel': 
SelectFromModel(estimator), # 'SelectKBest': SelectKBest(chi2, 3), 'Genetic': GeneticSelectionCV(estimator, cv=5, verbose=0, scoring=scorer, n_population=80, crossover_proba=0.8, mutation_proba=0.2, n_generations=80, crossover_independent_proba=0.5, mutation_independent_proba=0.05, tournament_size=5, caching=True, n_jobs=1 ), } scorer = scorer_factory(metrics.roc_auc_score, average='weighted') features = ('corr', 'lyap', 'sampen', 'dfa', 'hurst', 'higu') comb_size = 1 grid_search_cv = 5 for estimator, params in estimators.items(): for cols in combinations(features, comb_size): for selector_name, selector in get_selectors(estimator).items(): print(cols) print(selector_name) gs = GridSearchCV(estimator, params, iid=False, scoring=scorer, cv=grid_search_cv) predict('resp', 'a', cols, estimator, metapkl, gs=gs, show_selected=True, selector=selector) print(gs.best_params_) print('\n\n') ``` # CROSS-VALIDATED ``` estimators = [ # LogisticRegression(C=1, penalty='l1', class_weight='balanced'), svm.SVC(C=1, class_weight='balanced', kernel='linear'), ] # 75 channels = [('FP2', 'lyap'), ('F3', 'lyap'), ('O1', 'lyap'), ('T4', 'lyap'), ('T6', 'lyap'), ('F3', 'sampen'), ('C3', 'sampen'), ('T6', 'sampen')] # channels = [('F3', 'lyap'), ('O2', 'lyap'), ('T5', 'lyap'), ('T6', 'lyap'), ('FP2', 'corr'), # ('F4', 'corr'), ('O2', 'corr')] for estimator in estimators: est = predict('resp', 'a', None, estimator, channels=channels) estimators = [ # LogisticRegression(class_weight='balanced'), svm.SVC(C=1.5, class_weight='balanced', kernel='linear'), ] # 71 # channels = [('F3', 'lyap'), ('C4', 'lyap'), ('O1', 'lyap'), ('F7', 'lyap'), ('T3', 'lyap'), ('T6', 'lyap'), ] channels = [('F3', 'lyap'), ('F4', 'lyap'), ('T5', 'lyap'), ('T6', 'lyap')] for estimator in estimators: est = predict('resp', 'a', None, estimator, channels=channels) estimators = [ LogisticRegression(C=1, penalty='l2', class_weight='balanced'), # svm.SVC(C=1, class_weight='balanced', kernel='linear'), ] # 71 # channels = 
[('F3', 'sampen'), ('C4', 'sampen'), ('C3', 'sampen'), ('Fz', 'sampen')] channels = [('FP1', 'sampen'), ('F3', 'sampen'), ('P3', 'sampen'), ('Cz', 'sampen')] for estimator in estimators: est = predict('resp', 'a', None, estimator, channels=channels) estimators = [ LogisticRegression(C=1, penalty='l2', class_weight='balanced'), # svm.SVC(C=1, class_weight='balanced', kernel='linear'), ] channels = [('F3', 'higu'), ('F8', 'higu')] for estimator in estimators: est = predict('resp', 'a', None, estimator, channels=channels) estimators = [ # LogisticRegression(C=1, penalty='l2', class_weight='balanced'), svm.SVC(C=2, class_weight='balanced', kernel='rbf'), ] channels = [('C3', 'hurst'), ('T6', 'hurst')] for estimator in estimators: est = predict('resp', 'a', None, estimator, channels=channels) estimators = [ LogisticRegression(C=1, penalty='l2', class_weight='balanced'), # svm.SVC(C=2, class_weight='balanced', kernel='rbf'), ] channels = [('F3', 'corr'), ('F4', 'corr'), ('O2', 'corr'), ('Pz', 'corr')] for estimator in estimators: est = predict('resp', 'a', None, estimator, channels=channels) estimators = [ # LogisticRegression(C=1, penalty='l2', class_weight='balanced'), svm.SVC(C=10, class_weight='balanced', kernel='linear'), ] channels = [('T3', 'dfa'), ('T4', 'dfa'), ('Cz', 'dfa')] for estimator in estimators: est = predict('resp', 'a', None, estimator, channels=channels) channels = [('F3', 'lyap'), ('F4', 'lyap'), ('C4', 'lyap'), ('P3', 'lyap'), ('P4', 'lyap'), ('F8', 'lyap'), ('T4', 'lyap'), ('T5', 'lyap'), ('T6', 'lyap'), ('Fz', 'lyap'),] channels = [('F3', 'sampen'), ('C4', 'sampen'), ('C3', 'sampen'), ('Fz', 'sampen')] channels = [('F3', 'higu'), ('P4', 'higu'), ('C3', 'higu'), ('Fz', 'higu'), ('F8', 'higu')] channels = [('C4', 'hurst'), ('T5', 'hurst')] estimators = { svm.SVC(kernel='linear', class_weight='balanced'): { 'C': [0.02, 0.5, 1.0, 1.5, 2, 3, 4, 5, 10, 50], 'kernel': ('linear', 'poly', 'rbf'), # 'decision_function_shape' : ('ovo', 'ovr'), }, 
LogisticRegression(class_weight='balanced'): { 'C': [0.02, 0.5, 1.0, 2, 3, 4, 5, 10, 20, 50], 'penalty': ['l2', 'l1'], }, } for estimator, params in estimators.items(): gs = GridSearchCV(estimator, params, iid=False, scoring=scorer, cv=5) selector = GeneticSelectionCV(estimator, cv=5, verbose=0, scoring=scorer, n_population=80, crossover_proba=0.8, mutation_proba=0.2, n_generations=80, crossover_independent_proba=0.5, mutation_independent_proba=0.05, tournament_size=5, caching=True, n_jobs=-1) selector = None est = predict('resp', 'a', None, estimator, show_selected=True, channels=channels, gs=gs, selector=selector) print(gs.best_params_) ``` # RESULTS FOR FIRST AND LAST 15% QUANTILE OF CHANGE (34 / 32) ## Lyapunov exponent ``` estimator = LogisticRegression(C=3.5, penalty='l1', class_weight='balanced') predict('resp', None, ('lyap',), estimator, channels=('FP2', 'F3', 'F4', 'C3', 'P3', 'P4', 'F7', 'F8', 'T6', 'Fz', 'Cz')) ``` ## DFA ``` estimator = LogisticRegression(C=50, penalty='l2', class_weight='balanced') predict('resp', None, ('dfa',), estimator, channels=('FP2', 'F3', 'O1', 'T5', 'T6', 'Cz')) ``` ## Hurst exponent ``` estimator = LogisticRegression(C=3, penalty='l2', class_weight='balanced') predict('resp', None, ('hurst',), estimator, channels=('FP2', 'F3', 'O1', 'T5', 'T6', 'Cz')) ``` ## Higuchi dimension Results around 55%. 
## Correlation dimension ``` estimator = svm.SVC(C=2, kernel='rbf', class_weight='balanced', decision_function_shape='ovo') predict('resp', None, ('corr',), estimator, channels=('F4', 'C4', 'O1', 'F7', 'F8', 'T5', 'T6')) ``` # RESULTS FOR FIRST AND LAST TERCILE OF CHANGE (74 / 66) ## Lyapunov exponent ``` estimator = svm.SVC(C=1, kernel='rbf', class_weight='balanced', decision_function_shape='ovo') predict('resp', None, ('lyap',), estimator, channels=('F3', 'F4', 'O1', 'O2', 'T6', 'Fz')) ``` ## DFA ``` estimator = LogisticRegression(class_weight='balanced') params = {'C': [0.5, 1.0, 1.5, 2, 2.5, 3, 3.5], 'penalty': ['l2', 'l1'],} estimator = GridSearchCV(estimator, params, iid=False, scoring=scorer) predict('resp', None, ('dfa',), estimator, channels=('C3', 'P4', 'F7', 'T6')) estimator.best_params_ ``` ### Alpha / Theta envelope All estimators fail (50/50). Very similar values between groups and small variance. There is still a chance for depression score, however. Same for sample entropy. 
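The entropy and fractal features grid-searched in this notebook (`sampen`, `dfa`, `hurst`, `corr`, `lyap`, `higu`) come from the `nolds` package imported at the top. As a reference point for what the sample-entropy feature measures, here is a simplified, numpy-only sketch of SampEn; the embedding dimension `m = 2` and tolerance `r = 0.2 * std` are conventional defaults, not values taken from this analysis:

```python
import numpy as np

def sample_entropy(x, m=2, r=None):
    """Simplified SampEn: -log(A / B), where B counts template pairs of
    length m and A pairs of length m + 1 that match within tolerance r
    (Chebyshev distance)."""
    x = np.asarray(x, dtype=float)
    if r is None:
        r = 0.2 * x.std()

    def pair_count(length):
        # All overlapping templates of the given length.
        templates = np.array([x[i:i + length] for i in range(len(x) - length + 1)])
        count = 0
        for i in range(len(templates) - 1):
            dist = np.max(np.abs(templates[i + 1:] - templates[i]), axis=1)
            count += int(np.sum(dist <= r))
        return count

    B = pair_count(m)
    A = pair_count(m + 1)
    return -np.log(A / B) if A > 0 and B > 0 else np.inf

rng = np.random.default_rng(0)
regular = np.sin(np.linspace(0, 20 * np.pi, 500))  # highly predictable signal
noisy = rng.standard_normal(500)                   # irregular signal
print(sample_entropy(regular), sample_entropy(noisy))
```

A regular signal repeats its templates often, so `A / B` stays close to one and the entropy is low; white noise rarely repeats longer templates, so its entropy is much higher.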
## Hurst exponent ``` estimator = LogisticRegression(C=3, penalty='l2', class_weight='balanced') predict('resp', None, ('hurst',), estimator, channels=('P3', 'F7', 'T4', 'T6', 'Cz')) ``` ## Higuchi dimension ``` estimator = LogisticRegression(C=3.5, penalty='l1', class_weight='balanced') predict('resp', None, ('higu',), estimator, channels=('FP2',)) ``` ## Correlation dimension ``` estimator = LogisticRegression(C=3.5, penalty='l1', class_weight='balanced') estimator = predict('resp', None, ('corr',), estimator, channels=('F4', 'C4', 'T5')) estimator = svm.SVC(C=2, kernel='rbf', class_weight='balanced', decision_function_shape='ovo') predict('resp', None, ('corr',), estimator, channels=('F4', 'C4', 'O1', 'F7', 'F8', 'T5')) ``` # Manual selection ## LLE ``` from random import randint estimators = { svm.SVC(kernel='linear', class_weight='balanced'): { 'C': [0.02, 0.5, 1.0, 1.5, 2, 3, 4, 5, 10, 50], 'kernel': ('linear', 'poly', 'rbf'), }, LogisticRegression(class_weight='balanced'): { 'C': [0.02, 0.5, 1.0, 2, 3, 4, 5, 10, 20, 50], 'penalty': ['l2', 'l1'], }, } seed = randint(0, 100000) print('Seed: %s' % seed) channels = ['Fz'] for estimator, params in estimators.items(): est = GridSearchCV(estimator, params, iid=False, scoring=scorer, cv=4) est = predict('resp', None, ('lyap',), est, show_selected=True, channels=channels, seed=seed) seed = randint(0, 100000) print('Seed: %s' % seed) channels = ['P3'] for estimator, params in estimators.items(): est = GridSearchCV(estimator, params, iid=False, scoring=scorer, cv=4) est = predict('resp', None, ('lyap',), est, show_selected=True, channels=channels, seed=seed) ``` ## Higuchi fractal dimension ``` estimators = { # svm.SVC(kernel='linear', class_weight='balanced'): { # 'C': [0.02, 0.5, 1.0, 1.5, 2, 3, 4, 5, 10, 50], # 'kernel': ('linear', 'poly', 'rbf'), # }, LogisticRegression(class_weight='balanced'): { 'C': [0.02, 0.5, 1.0, 2, 3, 4, 5, 10, 20, 50], 'penalty': ['l2', 'l1'], }, } # 71875 seed = 
randint(0, 100000) print('Seed: %s' % seed) channels = ['FP2', 'Cz'] for estimator, params in estimators.items(): est = GridSearchCV(estimator, params, iid=False, scoring=scorer, cv=4) est = predict('resp', None, ('higu',), est, show_selected=True, channels=channels, seed=seed) ```
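The recurring pattern in this section — a feature selector wrapped around an estimator, followed by a cross-validated hyperparameter search — can be reduced to a self-contained sketch using scikit-learn alone. Synthetic data stands in for the subjects-by-channel-features matrix, and `RFECV` stands in for the genetic selector:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import RFECV
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

# Synthetic stand-in for the (subjects x channel-features) matrix.
X, y = make_classification(n_samples=120, n_features=10, n_informative=3,
                           random_state=0)

# 1) Select features with cross-validated recursive feature elimination...
selector = RFECV(SVC(kernel='linear', class_weight='balanced'), cv=5)
X_sel = selector.fit_transform(X, y)

# 2) ...then tune the estimator on the reduced feature matrix.
gs = GridSearchCV(SVC(kernel='linear', class_weight='balanced'),
                  {'C': [0.5, 1.0, 2.0]}, cv=5)
gs.fit(X_sel, y)
print(selector.n_features_, gs.best_params_)
```

Note that selecting features on the full dataset and then cross-validating on the same data, as in this compact sketch, leaks information; nesting the selector inside the outer cross-validation loop (as the notebook's `predict` helper appears to do) is the safer design.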
# Approximate q-learning In this notebook you will teach a __tensorflow__ neural network to do Q-learning. __Frameworks__ - we'll accept this homework in any deep learning framework. This particular notebook was designed for tensorflow, but you will find it easy to adapt it to almost any python-based deep learning framework. ``` import sys, os if 'google.colab' in sys.modules: %tensorflow_version 1.x if not os.path.exists('.setup_complete'): !wget -q https://raw.githubusercontent.com/yandexdataschool/Practical_RL/master/setup_colab.sh -O- | bash !touch .setup_complete # This code creates a virtual display to draw game images on. # It will have no effect if your machine has a monitor. if type(os.environ.get("DISPLAY")) is not str or len(os.environ.get("DISPLAY")) == 0: !bash ../xvfb start os.environ['DISPLAY'] = ':1' import gym import numpy as np import pandas as pd import matplotlib.pyplot as plt %matplotlib inline env = gym.make("CartPole-v0").env env.reset() n_actions = env.action_space.n state_dim = env.observation_space.shape plt.imshow(env.render("rgb_array")) ``` # Approximate (deep) Q-learning: building the network To train a neural network policy one must have a neural network policy. Let's build it. Since we're working with pre-extracted features (cart positions, angles and velocities), we don't need a complicated network yet. In fact, let's build something like this for starters: ![img](https://raw.githubusercontent.com/yandexdataschool/Practical_RL/master/yet_another_week/_resource/qlearning_scheme.png) For your first run, please only use linear layers (`L.Dense`) and activations. Stuff like batch normalization or dropout may ruin everything if used haphazardly. Also please avoid using nonlinearities like sigmoid & tanh: since the agent's observations are not normalized, sigmoids might be saturated at initialization. Instead, use non-saturating nonlinearities like ReLU. 
Ideally you should start small with maybe 1-2 hidden layers with < 200 neurons and then increase network size if agent doesn't beat the target score. ``` import tensorflow as tf import keras import keras.layers as L tf.reset_default_graph() sess = tf.InteractiveSession() keras.backend.set_session(sess) assert not tf.test.is_gpu_available(), \ "Please complete this assignment without a GPU. If you use a GPU, the code " \ "will run a lot slower due to a lot of copying to and from GPU memory. " \ "To disable the GPU in Colab, go to Runtime → Change runtime type → None." network = keras.models.Sequential() network.add(L.InputLayer(state_dim)) <YOUR CODE: stack layers!!!1> def get_action(state, epsilon=0): """ sample actions with epsilon-greedy policy recap: with p = epsilon pick random action, else pick action with highest Q(s,a) """ q_values = network.predict(state[None])[0] <YOUR CODE> return <YOUR CODE: epsilon-greedily selected action> assert network.output_shape == (None, n_actions), "please make sure your model maps state s -> [Q(s,a0), ..., Q(s, a_last)]" assert network.layers[-1].activation == keras.activations.linear, "please make sure you predict q-values without nonlinearity" # test epsilon-greedy exploration s = env.reset() assert np.shape(get_action(s)) == (), "please return just one action (integer)" for eps in [0., 0.1, 0.5, 1.0]: state_frequencies = np.bincount([get_action(s, epsilon=eps) for i in range(10000)], minlength=n_actions) best_action = state_frequencies.argmax() assert abs(state_frequencies[best_action] - 10000 * (1 - eps + eps / n_actions)) < 200 for other_action in range(n_actions): if other_action != best_action: assert abs(state_frequencies[other_action] - 10000 * (eps / n_actions)) < 200 print('e=%.1f tests passed'%eps) ``` ### Q-learning via gradient descent We shall now train our agent's Q-function by minimizing the TD loss: $$ L = { 1 \over N} \sum_i (Q_{\theta}(s,a) - [r(s,a) + \gamma \cdot max_{a'} Q_{-}(s', a')]) ^2 $$ Where * $s, 
a, r, s'$ are current state, action, reward and next state respectively * $\gamma$ is a discount factor defined two cells above. The tricky part is with $Q_{-}(s',a')$. From an engineering standpoint, it's the same as $Q_{\theta}$ - the output of your neural network policy. However, when doing gradient descent, __we won't propagate gradients through it__ to make training more stable (see lectures). To do so, we shall use the `tf.stop_gradient` function, which basically says "consider this thing constant when doing backprop". ``` # Create placeholders for the <s, a, r, s'> tuple and a special indicator for game end (is_done = True) states_ph = keras.backend.placeholder(dtype='float32', shape=(None,) + state_dim) actions_ph = keras.backend.placeholder(dtype='int32', shape=[None]) rewards_ph = keras.backend.placeholder(dtype='float32', shape=[None]) next_states_ph = keras.backend.placeholder(dtype='float32', shape=(None,) + state_dim) is_done_ph = keras.backend.placeholder(dtype='bool', shape=[None]) #get q-values for all actions in current states predicted_qvalues = network(states_ph) #select q-values for chosen actions predicted_qvalues_for_actions = tf.reduce_sum(predicted_qvalues * tf.one_hot(actions_ph, n_actions), axis=1) gamma = 0.99 # compute q-values for all actions in next states predicted_next_qvalues = <YOUR CODE: apply network to get q-values for next_states_ph> # compute V*(next_states) using predicted next q-values next_state_values = <YOUR CODE> # compute "target q-values" for loss - it's what's inside square parentheses in the above formula. 
target_qvalues_for_actions = <YOUR CODE> # at the last state we shall use simplified formula: Q(s,a) = r(s,a) since s' doesn't exist target_qvalues_for_actions = tf.where(is_done_ph, rewards_ph, target_qvalues_for_actions) #mean squared error loss to minimize loss = (predicted_qvalues_for_actions - tf.stop_gradient(target_qvalues_for_actions)) ** 2 loss = tf.reduce_mean(loss) # training function that resembles agent.update(state, action, reward, next_state) from tabular agent train_step = tf.train.AdamOptimizer(1e-4).minimize(loss) assert tf.gradients(loss, [predicted_qvalues_for_actions])[0] is not None, "make sure you update q-values for chosen actions and not just all actions" assert tf.gradients(loss, [predicted_next_qvalues])[0] is None, "make sure you don't propagate gradient w.r.t. Q_(s',a')" assert predicted_next_qvalues.shape.ndims == 2, "make sure you predicted q-values for all actions in next state" assert next_state_values.shape.ndims == 1, "make sure you computed V(s') as maximum over just the actions axis and not all axes" assert target_qvalues_for_actions.shape.ndims == 1, "there's something wrong with target q-values, they must be a vector" ``` ### Playing the game ``` sess.run(tf.global_variables_initializer()) def generate_session(env, t_max=1000, epsilon=0, train=False): """play env with approximate q-learning agent and train it at the same time""" total_reward = 0 s = env.reset() for t in range(t_max): a = get_action(s, epsilon=epsilon) next_s, r, done, _ = env.step(a) if train: sess.run(train_step,{ states_ph: [s], actions_ph: [a], rewards_ph: [r], next_states_ph: [next_s], is_done_ph: [done] }) total_reward += r s = next_s if done: break return total_reward epsilon = 0.5 for i in range(1000): session_rewards = [generate_session(env, epsilon=epsilon, train=True) for _ in range(100)] print("epoch #{}\tmean reward = {:.3f}\tepsilon = {:.3f}".format(i, np.mean(session_rewards), epsilon)) epsilon *= 0.99 assert epsilon >= 1e-4, "Make sure epsilon 
is always nonzero during training" if np.mean(session_rewards) > 300: print("You Win!") break ``` ### How to interpret results Welcome to the f.. world of deep f...n reinforcement learning. Don't expect the agent's reward to smoothly go up. Hope for it to increase eventually. If it deems you worthy. Seriously though, * __mean reward__ is the average reward per game. For a correct implementation it may stay low for some 10 epochs, then start growing while oscillating insanely and converge by ~50-100 steps depending on the network architecture. * If it never reaches the target score by the end of the for loop, try increasing the number of hidden neurons or look at the epsilon. * __epsilon__ - the agent's willingness to explore. If you see that epsilon is already < 0.01 before the mean reward is at least 200, just reset it back to 0.1 - 0.5. ### Record videos As usual, we now use `gym.wrappers.Monitor` to record a video of our agent playing the game. Unlike our previous attempts with state binarization, this time we expect our agent to act ~~(or fail)~~ more smoothly since there's no more binarization error at play. As you already did with tabular q-learning, we set epsilon=0 for final evaluation to prevent the agent from exploring itself to death. ``` # Record sessions import gym.wrappers with gym.wrappers.Monitor(gym.make("CartPole-v0"), directory="videos", force=True) as env_monitor: sessions = [generate_session(env_monitor, epsilon=0, train=False) for _ in range(100)] # Show video. This may not work in some setups. If it doesn't # work for you, you can download the videos and view them locally. from pathlib import Path from IPython.display import HTML video_names = sorted([s for s in Path('videos').iterdir() if s.suffix == '.mp4']) HTML(""" <video width="640" height="480" controls> <source src="{}" type="video/mp4"> </video> """.format(video_names[-1])) # You can also try other indices ```
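Without giving away the exercise solutions, the two ingredients the placeholders above ask for — an epsilon-greedy policy and the TD target $r + \gamma \cdot max_{a'} Q(s', a')$ — can be sketched framework-free in NumPy; the Q-values below are made up for illustration:

```python
import numpy as np

def epsilon_greedy(q_values, epsilon, rng):
    """With probability epsilon take a random action, else the greedy one."""
    if rng.random() < epsilon:
        return int(rng.integers(len(q_values)))
    return int(np.argmax(q_values))

def td_target(reward, next_q_values, done, gamma=0.99):
    """r + gamma * max_a' Q(s', a'); at terminal states just r."""
    return reward if done else reward + gamma * np.max(next_q_values)

rng = np.random.default_rng(0)
q = np.array([0.1, 0.9])                             # hypothetical Q(s, .)
action = epsilon_greedy(q, epsilon=0.0, rng=rng)     # epsilon=0 -> greedy
target = td_target(reward=1.0, next_q_values=q, done=False)
print(action, target)  # 1, then 1.0 + 0.99 * 0.9 = 1.891
```

In the tensorflow version above, the same max-over-actions and terminal-state switch are expressed with `tf.reduce_max` and `tf.where`, wrapped in `tf.stop_gradient` so the target is treated as a constant.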
##### Copyright 2019 The TensorFlow Authors. ``` #@title Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. ``` # Eager Execution basics <table class="tfo-notebook-buttons" align="left"> <td> <a target="_blank" href="https://www.tensorflow.org/guide/eager"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a> </td> <td> <a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/ja/guide/eager.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a> </td> <td> <a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/ja/guide/eager.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a> </td> <td> <a href="https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/ja/guide/eager.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a> </td> </table> Note: This document was translated by the TensorFlow community. Because community translations are best-effort, there is no guarantee that they are accurate or reflect the latest state of the [official English documentation](https://www.tensorflow.org/?hl=en). If you have suggestions for improving the quality of this translation, please send a pull request to the [tensorflow/docs](https://github.com/tensorflow/docs) GitHub repository. To volunteer to write or review community translations, contact the [docs-ja@tensorflow.org mailing list](https://groups.google.com/a/tensorflow.org/forum/#!forum/docs-ja). TensorFlow's Eager Execution is an imperative programming environment that builds and evaluates computation graphs at the same time: operations return concrete values instead of a computational graph to run later. This makes it easy for beginners to get started with TensorFlow, makes models easier to debug, and reduces the amount of code as well. To follow along with this guide, start the interactive interpreter `python` and run the code samples below. Eager Execution is a flexible machine learning environment for research and experimentation, providing: * *An intuitive interface* — Structure your code naturally using Python data structures, and quickly iterate on small models and small data. * *Easier debugging* — Call ops directly to inspect running models and test changes. Use standard Python debugging tools for immediate error reporting. * *Natural control flow* — Use Python control flow instead of TensorFlow's graph control flow, which simplifies building dynamic models. Eager Execution supports most TensorFlow operations and GPU acceleration. Note: Some models may run with increased overhead when Eager Execution is enabled. Performance improvements are ongoing, but if you find a problem, please file a bug report and share your benchmarks. ## Setup and basic usage ``` import tensorflow as tf import cProfile ``` In TensorFlow 2.0, Eager Execution is enabled by default. ``` tf.executing_eagerly() ``` Now you can run TensorFlow operations, and the results are returned immediately: ``` x = [[2.]] m = tf.matmul(x, x) print("hello, {}".format(m)) ``` Enabling Eager Execution changes how TensorFlow behaves — it now evaluates expressions immediately and returns the results to Python. `tf.Tensor` objects reference concrete values instead of symbolic handles to nodes in a computational graph. Since there is no computational graph to build and run later in a session, it is easy to inspect results with `print()` or a debugger. Evaluating, printing, and checking tensor values does not interrupt gradient computation. Eager Execution works well with [NumPy](http://www.numpy.org/). NumPy operations accept `tf.Tensor` arguments. The TensorFlow [math operations](https://www.tensorflow.org/api_guides/python/math_ops) convert Python objects and NumPy arrays to `tf.Tensor` objects. The `tf.Tensor.numpy` method returns the object's value as a NumPy `ndarray`. ``` a = tf.constant([[1, 2], [3, 4]]) print(a) # Broadcasting is supported b = tf.add(a, 1) print(b) # Operator overloading is supported print(a * b) # Using NumPy values import numpy as np c = np.multiply(a, b) print(c) # Obtain the numpy value from a tensor print(a.numpy()) # => [[1 2] # [3 4]] ``` ## Dynamic control flow A major benefit of Eager Execution is that all the functionality of the host language is available while your model is executing. For example, it is easy to write [fizzbuzz](https://en.wikipedia.org/wiki/Fizz_buzz): ``` def fizzbuzz(max_num): counter = tf.constant(0) max_num = tf.convert_to_tensor(max_num) for num in range(1, max_num.numpy()+1): num = tf.constant(num) if int(num % 3) == 0 and int(num % 5) == 0: print('FizzBuzz') elif int(num % 3) == 0: print('Fizz') elif int(num % 5) == 0: print('Buzz') else: print(num.numpy()) counter += 1 fizzbuzz(15) ``` This function has conditionals that depend on tensor values, and it prints those values at runtime. ## Training with Eager Execution ### Computing gradients [Automatic differentiation](https://en.wikipedia.org/wiki/Automatic_differentiation) is useful for implementing machine learning algorithms such as [backpropagation](https://en.wikipedia.org/wiki/Backpropagation) for training neural networks. During Eager Execution, use `tf.GradientTape` to trace operations for computing gradients later. You can use `tf.GradientTape` for training and/or computing gradients in eager mode; it is especially useful for complicated training loops. Since different operations can occur during each call, all forward-pass operations are recorded to a single "tape". To compute the gradient, play the tape backwards and then discard it. A particular `tf.GradientTape` can only compute one gradient; subsequent calls throw a runtime error. ``` w = tf.Variable([[1.0]]) with tf.GradientTape() as tape: loss = w * w grad = tape.gradient(loss, w) print(grad) # => tf.Tensor([[ 2.]], shape=(1, 1), dtype=float32) ``` ### Training a model The following example creates a multi-layer model that classifies the MNIST handwritten digits. It demonstrates the optimizer and layer APIs for building trainable graphs in an Eager Execution environment. ``` # Fetch and format the mnist data (mnist_images, mnist_labels), _ = tf.keras.datasets.mnist.load_data() dataset = tf.data.Dataset.from_tensor_slices( (tf.cast(mnist_images[...,tf.newaxis]/255, tf.float32), tf.cast(mnist_labels,tf.int64))) dataset = dataset.shuffle(1000).batch(32) # Build the model mnist_model = tf.keras.Sequential([ tf.keras.layers.Conv2D(16,[3,3], activation='relu', input_shape=(None, None, 1)), tf.keras.layers.Conv2D(16,[3,3], activation='relu'), tf.keras.layers.GlobalAveragePooling2D(), tf.keras.layers.Dense(10) ]) ``` Even without training, you can call the model and, thanks to Eager Execution, inspect the output: ``` for images,labels in dataset.take(1): print("Logits: ", mnist_model(images[0:1]).numpy()) ``` Keras models have a built-in training loop (the `fit` method), but sometimes you need more customization. Here is an example of a training loop implemented with Eager Execution: ``` optimizer = tf.keras.optimizers.Adam() 
loss_object = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True) loss_history = [] ``` Note: To check the state of a model, use the assert utilities available in `tf.debugging`. They work in both Eager Execution and Graph Execution. ``` def train_step(images, labels): with tf.GradientTape() as tape: logits = mnist_model(images, training=True) # Add an assert to check the shape of the output. tf.debugging.assert_equal(logits.shape, (32, 10)) loss_value = loss_object(labels, logits) loss_history.append(loss_value.numpy().mean()) grads = tape.gradient(loss_value, mnist_model.trainable_variables) optimizer.apply_gradients(zip(grads, mnist_model.trainable_variables)) def train(): for epoch in range(3): for (batch, (images, labels)) in enumerate(dataset): train_step(images, labels) print ('Epoch {} finished'.format(epoch)) train() import matplotlib.pyplot as plt plt.plot(loss_history) plt.xlabel('Batch #') plt.ylabel('Loss [entropy]') ``` ### Variables and optimizers `tf.Variable` objects store mutable `tf.Tensor` values that are accessed during training, making automatic differentiation easier. The parameters of a model can be encapsulated in a class as variables. Model parameters are better encapsulated by using `tf.Variable` together with `tf.GradientTape`. For example, the automatic differentiation example above can be rewritten as follows: ``` class Model(tf.keras.Model): def __init__(self): super(Model, self).__init__() self.W = tf.Variable(5., name='weight') self.B = tf.Variable(10., name='bias') def call(self, inputs): return inputs * self.W + self.B # A toy dataset approximating 3 * x + 2 NUM_EXAMPLES = 2000 training_inputs = tf.random.normal([NUM_EXAMPLES]) noise = tf.random.normal([NUM_EXAMPLES]) training_outputs = training_inputs * 3 + 2 + noise # The loss function to optimize def loss(model, inputs, targets): error = model(inputs) - targets return tf.reduce_mean(tf.square(error)) def grad(model, inputs, targets): with tf.GradientTape() as tape: loss_value = loss(model, inputs, targets) return tape.gradient(loss_value, [model.W, model.B]) # Define: # 1. The model # 2. The derivative of the loss function with respect to the model parameters # 3. 
導関数に基づいて変数を更新するストラテジ。 model = Model() optimizer = tf.keras.optimizers.SGD(learning_rate=0.01) print("Initial loss: {:.3f}".format(loss(model, training_inputs, training_outputs))) # 学習ループ for i in range(300): grads = grad(model, training_inputs, training_outputs) optimizer.apply_gradients(zip(grads, [model.W, model.B])) if i % 20 == 0: print("Loss at step {:03d}: {:.3f}".format(i, loss(model, training_inputs, training_outputs))) print("Final loss: {:.3f}".format(loss(model, training_inputs, training_outputs))) print("W = {}, B = {}".format(model.W.numpy(), model.B.numpy())) ``` ## Eager Execution の途中でオブジェクトのステータスを使用する TF 1.x の Graph Execution では、プログラムの状態(Variableなど)は global collection に格納され、それらの存続期間は `tf.Session` オブジェクトによって管理されます。 対照的に、 Eager Execution の間、状態オブジェクトの存続期間は、対応する Python オブジェクトの存続期間によって決定されます。 ### 変数とオブジェクト Eager Execution の間、変数はオブジェクトへの最後の参照が削除され、その後削除されるまで存続します。 ``` if tf.test.is_gpu_available(): with tf.device("gpu:0"): v = tf.Variable(tf.random.normal([1000, 1000])) v = None # v は GPU メモリを利用しなくなる ``` ### オブジェクトベースの保存 このセクションは、[チェックポイントの学習の手引き](./checkpoint.ipynb) の省略版です。 `tf.train.Checkpoint` はチェックポイントを用いて `tf.Variable` を保存および復元することができます: ``` x = tf.Variable(10.) checkpoint = tf.train.Checkpoint(x=x) x.assign(2.) # 新しい値を変数に代入して保存する。 checkpoint_path = './ckpt/' checkpoint.save('./ckpt/') x.assign(11.) 
# 保存後に変数の値を変える。 # チェックポイントから変数を復元する checkpoint.restore(tf.train.latest_checkpoint(checkpoint_path)) print(x) # => 2.0 ``` モデルを保存して読み込むために、 `tf.train.Checkpoint` は隠れ変数なしにオブジェクトの内部状態を保存します。 `モデル`、 `オプティマイザ` 、そしてグローバルステップの状態を記録するには、それらを `tf.train.Checkpoint` に渡します。 ``` import os model = tf.keras.Sequential([ tf.keras.layers.Conv2D(16,[3,3], activation='relu'), tf.keras.layers.GlobalAveragePooling2D(), tf.keras.layers.Dense(10) ]) optimizer = tf.keras.optimizers.Adam(learning_rate=0.001) checkpoint_dir = 'path/to/model_dir' if not os.path.exists(checkpoint_dir): os.makedirs(checkpoint_dir) checkpoint_prefix = os.path.join(checkpoint_dir, "ckpt") root = tf.train.Checkpoint(optimizer=optimizer, model=model) root.save(checkpoint_prefix) root.restore(tf.train.latest_checkpoint(checkpoint_dir)) ``` 多くの学習ループでは、変数は `tf.train..Checkpoint.restore` が呼ばれたあとに作成されます。これらの変数は作成されてすぐに復元され、チェックポイントがすべてロードされたことを確認するためのアサーションが利用可能になります。詳しくは、 [guide to training checkpoints](./checkpoint.ipynb) を見てください。 ### オブジェクト指向メトリクス `tfe.keras.metrics`はオブジェクトとして保存されます。新しいデータを呼び出し可能オブジェクトに渡してメトリクスを更新し、 `tfe.keras.metrics.result`メソッドを使って結果を取得します。次に例を示します: ``` m = tf.keras.metrics.Mean("loss") m(0) m(5) m.result() # => 2.5 m([8, 9]) m.result() # => 5.5 ``` ## 高度な自動微分トピック ### 動的なモデル `tf.GradientTape` は動的モデルでも使うことができます。 以下の [バックトラックライン検索](https://wikipedia.org/wiki/Backtracking_line_search) アルゴリズムの例は、複雑な制御フローにもかかわらず 勾配があり、微分可能であることを除いて、通常の NumPy コードのように見えます: ``` def line_search_step(fn, init_x, rate=1.0): with tf.GradientTape() as tape: # 変数は自動的に記録されるが、Tensorは手動でウォッチする tape.watch(init_x) value = fn(init_x) grad = tape.gradient(value, init_x) grad_norm = tf.reduce_sum(grad * grad) init_value = value while value > init_value - rate * grad_norm: x = init_x - rate * grad value = fn(x) rate /= 2.0 return x, value ``` ### カスタム勾配 カスタム勾配は、勾配を上書きする簡単な方法です。 フォワード関数では、 入力、出力、または中間結果に関する勾配を定義します。たとえば、逆方向パスにおいて勾配のノルムを制限する簡単な方法は次のとおりです: ``` @tf.custom_gradient def clip_gradient_by_norm(x, norm): y = tf.identity(x) def 
grad_fn(dresult): return [tf.clip_by_norm(dresult, norm), None] return y, grad_fn ``` カスタム勾配は、一連の演算に対して数値的に安定した勾配を提供するために共通的に使用されます。: ``` def log1pexp(x): return tf.math.log(1 + tf.exp(x)) def grad_log1pexp(x): with tf.GradientTape() as tape: tape.watch(x) value = log1pexp(x) return tape.gradient(value, x) # 勾配計算は x = 0 のときはうまくいく。 grad_log1pexp(tf.constant(0.)).numpy() # しかし、x = 100のときは数値的不安定により失敗する。 grad_log1pexp(tf.constant(100.)).numpy() ``` ここで、 `log1pexp` 関数はカスタム勾配を用いて解析的に単純化することができます。 以下の実装は、フォワードパスの間に計算された `tf.exp(x)` の値を 再利用します—冗長な計算を排除することでより効率的になります: ``` @tf.custom_gradient def log1pexp(x): e = tf.exp(x) def grad(dy): return dy * (1 - 1 / (1 + e)) return tf.math.log(1 + e), grad def grad_log1pexp(x): with tf.GradientTape() as tape: tape.watch(x) value = log1pexp(x) return tape.gradient(value, x) # 上と同様に、勾配計算はx = 0のときにはうまくいきます。 grad_log1pexp(tf.constant(0.)).numpy() # また、勾配計算はx = 100でも機能します。 grad_log1pexp(tf.constant(100.)).numpy() ``` ## パフォーマンス Eager Executionの間、計算は自動的にGPUにオフロードされます。計算を実行するデバイスを指定したい場合は、 `tf.device( '/ gpu:0')` ブロック(もしくはCPUを指定するブロック)で囲むことで指定できます: ``` import time def measure(x, steps): # TensorFlowはGPUを初めて使用するときに初期化するため、時間計測対象からは除外する。 tf.matmul(x, x) start = time.time() for i in range(steps): x = tf.matmul(x, x) # tf.matmulは、行列乗算が完了する前に戻ることができる。 # (たとえば、CUDAストリームにオペレーションをエンキューした後に戻すことができる)。 # 以下のx.numpy()呼び出しは、すべてのキューに入れられたオペレーションが完了したことを確認する。 # (そして結果をホストメモリにコピーするため、計算時間は単純なmatmulオペレーションよりも多くのことを含む時間になる。) _ = x.numpy() end = time.time() return end - start shape = (1000, 1000) steps = 200 print("Time to multiply a {} matrix by itself {} times:".format(shape, steps)) # CPU上で実行するとき: with tf.device("/cpu:0"): print("CPU: {} secs".format(measure(tf.random.normal(shape), steps))) # GPU上で実行するとき(GPUが利用できれば): if tf.test.is_gpu_available(): with tf.device("/gpu:0"): print("GPU: {} secs".format(measure(tf.random.normal(shape), steps))) else: print("GPU: not found") ``` `tf.Tensor` オブジェクトはそのオブジェクトに対するオペレーションを実行するために別のデバイスにコピーすることができます: ``` if 
tf.test.is_gpu_available(): x = tf.random.normal([10, 10]) x_gpu0 = x.gpu() x_cpu = x.cpu() _ = tf.matmul(x_cpu, x_cpu) # CPU上で実行するとき _ = tf.matmul(x_gpu0, x_gpu0) # GPU:0上で実行するとき ``` ### ベンチマーク GPUでの [ResNet50](https://github.com/tensorflow/tensorflow/tree/master/tensorflow/contrib/eager/python/examples/resnet50) の学習のような、計算量の多いモデルの場合は、Eager Executionのパフォーマンスは `tf.function` のパフォーマンスに匹敵します。 しかし、この2つの環境下のパフォーマンスの違いは計算量の少ないモデルではより大きくなり、小さなたくさんのオペレーションからなるモデルでホットコードパスを最適化するためにやるべきことがあります。 ## functionsの利用 Eager Execution は開発とデバッグをより対話的にしますが、 TensorFlow 1.x スタイルの Graph Execution は分散学習、パフォーマンスの最適化、そしてプロダクション環境へのデプロイの観点で利点があります。 2つの手法のギャップを埋めるために、 TensorFlow 2.0 は `tf.function` という機能を導入しています。 詳しくは、 [Autograph](./function.ipynb) のガイドを見てください。
# Loading data from the Chile data cube

* **Prerequisites:** Users of this notebook should have a basic understanding of:
    * How to run a [Jupyter notebook](01_Jupyter_notebooks.ipynb)
    * Inspecting available [Products and measurements](02_Products_and_measurements.ipynb)

## Background

Loading data from the Chile instance of the [Open Data Cube](https://www.opendatacube.org/) requires the construction of a data query that specifies the what, where, and when of the data request. Each query returns a [multi-dimensional xarray object](http://xarray.pydata.org/en/stable/) containing the contents of your query. It is essential to understand the `xarray` data structures as they are fundamental to the structure of data loaded from the datacube. Manipulations, transformations and visualisation of `xarray` objects provide datacube users with the ability to explore and analyse datasets, as well as pose and answer scientific questions.

## Description

This notebook will introduce how to load data from the Chile datacube through the construction of a query and use of the `dc.load()` function. Topics covered include:

* Loading data using `dc.load()`
* Interpreting the resulting `xarray.Dataset` object
* Inspecting an individual `xarray.DataArray`
* Customising parameters passed to the `dc.load()` function
* Loading specific measurements
* Loading data for coordinates in a custom coordinate reference system (CRS)
* Projecting data to a new CRS and spatial resolution
* Specifying a specific spatial resampling method
* Loading data using a reusable dictionary query
* Loading matching data from multiple products using `like`
* Adding a progress bar to the data load

***

## Getting started

To run this introduction to loading data from the datacube, run all the cells in the notebook starting with the "Load packages" cell. For help with running notebook cells, refer back to the [Jupyter Notebooks notebook](01_Jupyter_notebooks.ipynb).
### Load packages

First we need to load the `datacube` package. This will allow us to query the datacube database and load some data. The `with_ui_cbk` function from `odc.ui` will allow us to show a progress bar when loading large amounts of data.

```
import datacube
from odc.ui import with_ui_cbk
```

### Connect to the datacube

We then need to connect to the datacube database. We will then be able to use the `dc` datacube object to load data. The `app` parameter is a unique name used to identify the notebook; it does not have any effect on the analysis.

```
dc = datacube.Datacube(app="03_Loading_data")
```

## Loading data using `dc.load()`

Loading data from the datacube uses the [dc.load()](https://datacube-core.readthedocs.io/en/latest/dev/api/generate/datacube.Datacube.load.html) function. The function requires the following minimum arguments:

* `product`: A specific product to load (to revise products, see the [Products and measurements](02_Products_and_measurements.ipynb) notebook).
* `x`: Defines the spatial region in the *x* dimension. By default, the *x* and *y* arguments accept queries in the WGS84 geographical coordinate system, identified by the EPSG code *4326*.
* `y`: Defines the spatial region in the *y* dimension. The dimensions ``longitude``/``latitude`` and ``x``/``y`` can be used interchangeably.
* `time`: Defines the temporal extent. The time dimension can be specified using a tuple of datetime objects or strings in the "YYYY", "YYYY-MM" or "YYYY-MM-DD" format.

Let's run a query to load Landsat 8 data for 2020 over the Coquimbo Region of Chile.
For this example, we can use the following parameters:

* `product`: `usgs_espa_ls8c1_sr`
* `x`: `(-71.1, -71.5)`
* `y`: `(-29.5, -30)`
* `time`: `("2020-01-01", "2020-12-31")`

Run the following cell to load all datasets from the `usgs_espa_ls8c1_sr` product that match this spatial and temporal extent:

```
ds = dc.load(product="usgs_espa_ls8c1_sr",
             x=(-71.1, -71.5),
             y=(-29.5, -30),
             output_crs="EPSG:32719",
             time=("2020-01-01", "2020-12-31"),
             resolution=(-25, 25),
             dask_chunks={"time": 1})
ds
```

### Interpreting the resulting `xarray.Dataset`

The variable `ds` has returned an `xarray.Dataset` containing all data that matched the spatial and temporal query parameters passed to `dc.load`.

*Dimensions*

* Identifies the number of timesteps returned in the search (e.g. `time: 1`) as well as the number of pixels in the `x` and `y` directions of the data query.

*Coordinates*

* `time` identifies the date attributed to each returned timestep.
* `x` and `y` are the coordinates for each pixel within the spatial bounds of your query.

*Data variables*

* These are the measurements available for the nominated product. For every date (`time`) returned by the query, the measured value at each pixel (`y`, `x`) is returned as an array for each measurement. Each data variable is itself an `xarray.DataArray` object ([see below](#Inspecting-an-individual-xarray.DataArray)).

*Attributes*

* `crs` identifies the coordinate reference system (CRS) of the loaded data.

### Inspecting an individual `xarray.DataArray`

The `xarray.Dataset` we loaded above is itself a collection of individual `xarray.DataArray` objects that hold the actual data for each data variable/measurement. For example, all measurements listed under *Data variables* above (e.g. `blue`, `green`, `red`, `nir`, `swir1`, `swir2`) are `xarray.DataArray` objects.
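To make that layout concrete, here is a small NumPy mock of the structure described above. No datacube connection is needed to run it, and the band names and array sizes are illustrative only, not the output of the actual query:

```python
import numpy as np

# Mock one load: each measurement is a 3-D array indexed (time, y, x)
n_time, n_y, n_x = 1, 801, 644
measurements = {band: np.zeros((n_time, n_y, n_x), dtype=np.int16)
                for band in ("red", "green", "blue", "nir", "swir1", "swir2")}

# Each measurement is its own array, just as each data variable in an
# xarray.Dataset is its own xarray.DataArray.
print(measurements["nir"].shape)  # (1, 801, 644)
```

`xarray` wraps exactly this kind of per-band array collection with labelled `time`, `y` and `x` coordinates, which is why each data variable can be inspected independently.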
We can inspect the data in these `xarray.DataArray` objects using either of the following syntaxes:

```
ds["measurement_name"]
```

or:

```
ds.measurement_name
```

Being able to access data from individual data variables/measurements allows us to manipulate and analyse data from individual satellite bands or specific layers in a dataset. For example, we can access data from the near infra-red satellite band (i.e. `nir`):

```
ds.nir
```

Note that the object header informs us that it is an `xarray.DataArray` containing data for the `nir` satellite band. Like an `xarray.Dataset`, the array also includes information about the data's **dimensions** (i.e. `(time: 1, y: 801, x: 644)`), **coordinates** and **attributes**. This particular data variable/measurement contains some additional information that is specific to the `nir` band, including details of the array's nodata value (i.e. `nodata: -999`).

> **Note**: For a more in-depth introduction to `xarray` data structures, refer to the [official xarray documentation](http://xarray.pydata.org/en/stable/data-structures.html)

## Customising the `dc.load()` function

The `dc.load()` function can be tailored to refine a query. Customisation options include:

* `measurements`: This argument is used to provide a list of measurement names to load, as listed in `dc.list_measurements()`. For satellite datasets, measurements contain data for each individual satellite band (e.g. near infrared). If not provided, all measurements for the product will be returned.
* `crs`: The coordinate reference system (CRS) of the query's `x` and `y` coordinates is assumed to be `WGS84`/`EPSG:4326` unless the `crs` field is supplied, even if the stored data is in another projection or the `output_crs` is specified. The `crs` parameter is required if your query's coordinates are in any other CRS.
* `group_by`: Satellite datasets based around scenes can have multiple observations per day with slightly different time stamps as the satellite collects data along its path. These observations can be combined by reducing the `time` dimension to the day level using `group_by="solar_day"`.
* `output_crs` and `resolution`: To reproject or change the resolution of the data, supply the `output_crs` and `resolution` fields.
* `resampling`: This argument allows you to specify a custom spatial resampling method to use when data is reprojected into a different CRS.

Example syntax on the use of these options follows in the cells below.

> For help or more customisation options, run `help(dc.load)` in an empty cell or visit the function's [documentation page](https://datacube-core.readthedocs.io/en/latest/dev/api/generate/datacube.Datacube.load.html)

### Specifying measurements

By default, `dc.load()` will load *all* measurements in a product. To load data from the `red`, `green` and `blue` satellite bands only, we can add `measurements=["red", "green", "blue"]` to our query:

```
# Note the optional inclusion of the measurements list
ds_rgb = dc.load(product="usgs_espa_ls8c1_sr",
                 measurements=["red", "green", "blue"],
                 x=(-71.1, -71.5),
                 y=(-29.5, -30),
                 output_crs="EPSG:32719",
                 time=("2020-01-01", "2020-12-31"),
                 resolution=(-25, 25),
                 dask_chunks={"time": 1})
ds_rgb
```

Note that the *Data variables* component of the `xarray.Dataset` now includes only the measurements specified in the query (i.e. the `red`, `green` and `blue` satellite bands).

### Loading data for coordinates in any CRS

By default, `dc.load()` assumes that your query `x` and `y` coordinates are provided in degrees in the `WGS84/EPSG:4326` CRS. If your coordinates are in a different coordinate system, you need to specify this using the `crs` parameter.
In the example below, we load data for a set of `x` and `y` coordinates defined in WGS 84 / UTM zone 19S (`EPSG:32719`), and ensure that the `dc.load()` function accounts for this by including `crs="EPSG:32719"`:

```
# Note the new `x` and `y` coordinates and `crs` parameter
ds_custom_crs = dc.load(product="usgs_espa_ls8c1_sr",
                        time=("2020-01-01", "2020-12-31"),
                        x=(335713, 355713),
                        y=(6287592, 6307592),
                        crs="EPSG:32719",
                        output_crs="EPSG:32719",
                        resolution=(-25, 25),
                        dask_chunks={"time": 1})
ds_custom_crs
```

### CRS reprojection

Certain applications may require that you output your data into a specific CRS. You can reproject your output data by specifying the new `output_crs` and identifying the `resolution` required.

In this example, we will reproject our data to a new CRS (UTM Zone 34S, `EPSG:32734`) and resolution (250 x 250 m). Note that for most CRSs, the first resolution value is negative (e.g. `(-250, 250)`):

```
ds_reprojected = dc.load(product="usgs_espa_ls8c1_sr",
                         measurements=["red", "green", "blue"],
                         x=(-71.1, -71.5),
                         y=(-29.5, -30),
                         output_crs="EPSG:32734",
                         time=("2020-01-01", "2020-12-31"),
                         resolution=(-250, 250),
                         dask_chunks={"time": 1})
ds_reprojected
```

Note that the `crs` attribute in the *Attributes* section has changed to `EPSG:32734`. Due to the larger 250 m resolution, there are also now fewer pixels on the `x` and `y` dimensions (e.g. `x: 467, y: 344` compared to `x: 801, y: 801` in earlier examples).

### Spatial resampling methods

When a product is re-projected to a different CRS and/or resolution, the new pixel grid may differ from the original input pixels by size, number and alignment. It is therefore necessary to apply a spatial "resampling" rule that allocates input pixel values into the new pixel grid.

By default, `dc.load()` resamples pixel values using "nearest neighbour" resampling, which allocates each new pixel with the value of the closest input pixel.
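The allocation idea can be sketched with plain NumPy: downsampling a 4 x 4 grid to 2 x 2 by taking block means ("average") versus picking one representative input pixel per output pixel ("nearest"). This is only the arithmetic intuition, not how `dc.load()` implements it; the real work is delegated to rasterio:

```python
import numpy as np

src = np.array([[1, 2, 10, 20],
                [3, 4, 30, 40],
                [5, 6,  0,  0],
                [7, 8,  0,  0]], dtype=float)

# "average": each output pixel is the mean of a 2x2 block of input pixels
blocks = src.reshape(2, 2, 2, 2)    # (block_row, row_in_block, block_col, col_in_block)
average = blocks.mean(axis=(1, 3))  # [[ 2.5, 25. ], [ 6.5,  0. ]]

# "nearest": each output pixel copies a single nearby input pixel
nearest = src[::2, ::2]             # [[ 1., 10.], [ 5.,  0.]]

print(average)
print(nearest)
```

Note how "average" blends values (useful for continuous data such as reflectance), while "nearest" preserves original pixel values exactly (important for categorical data).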
Depending on the type of data and the analysis being run, this may not be the most appropriate choice (e.g. for continuous data). The `resampling` parameter in `dc.load()` allows you to choose a custom resampling method from the following options:

```
"nearest", "cubic", "bilinear", "cubic_spline", "lanczos",
"average", "mode", "gauss", "max", "min", "med", "q1", "q3"
```

For example, we can request that all loaded data is resampled using "average" resampling:

```
# Note the additional `resampling` parameter
ds_averageresampling = dc.load(product="usgs_espa_ls8c1_sr",
                               measurements=["red", "green", "blue"],
                               x=(-71.1, -71.5),
                               y=(-29.5, -30),
                               output_crs="EPSG:32719",
                               time=("2020-01-01", "2020-12-31"),
                               resolution=(-250, 250),
                               dask_chunks={"time": 1},
                               resampling="average")
ds_averageresampling
```

You can also provide a Python dictionary to request a different resampling method for different measurements. This can be particularly useful when some measurements contain categorical data, which requires resampling methods such as "nearest" or "mode" that do not modify the input pixel values.

In the example below, we specify `resampling={"red": "nearest", "*": "average"}`, which will use "nearest" neighbour resampling for the `red` satellite band only. `"*": "average"` will apply "average" resampling to all other satellite bands:

```
ds_customresampling = dc.load(product="usgs_espa_ls8c1_sr",
                              measurements=["red", "green", "blue"],
                              x=(-71.1, -71.5),
                              y=(-29.5, -30),
                              output_crs="EPSG:32719",
                              time=("2020-01-01", "2020-12-31"),
                              resolution=(-250, 250),
                              dask_chunks={"time": 1},
                              resampling={"red": "nearest", "*": "average"})
ds_customresampling
```

> **Note**: For more information about spatial resampling methods, see the [following guide](https://rasterio.readthedocs.io/en/stable/topics/resampling.html)

## Loading data using the query dictionary syntax

It is often useful to re-use a set of query parameters to load data from multiple products.
To achieve this, we can load data using the "query dictionary" syntax. This involves placing the query parameters we used to load data above inside a Python dictionary object which we can re-use for multiple data loads:

```
query = {"x": (-71.1, -71.5),
         "y": (-29.5, -30),
         "time": ("2020-01-01", "2020-12-31"),
         "output_crs": "EPSG:32719",
         "resolution": (-250, 250),
         "dask_chunks": {"time": 1}}
```

We can then use this query dictionary object as an input to `dc.load()`.

> The `**` syntax below is Python's "keyword argument unpacking" operator. This operator takes the named query parameters listed in the dictionary we created (e.g. `"x": (-71.1, -71.5)`), and "unpacks" them into the `dc.load()` function as new arguments. For more information about unpacking operators, refer to the [Python documentation](https://docs.python.org/3/tutorial/controlflow.html#unpacking-argument-lists)

```
ds = dc.load(product="usgs_espa_ls8c1_sr",
             **query)
ds
```

Query dictionaries can contain any set of parameters that would usually be provided to `dc.load()`:

```
query = {"x": (-71.1, -71.5),
         "y": (-29.5, -30),
         "time": ("2020-01-01", "2020-12-31"),
         "output_crs": "EPSG:32719",
         "resolution": (-500, 500),
         "dask_chunks": {"time": 1},
         "resampling": {"red": "nearest", "*": "average"}}

ds_ls8 = dc.load(product="usgs_espa_ls8c1_sr",
                 **query)
ds_ls8
```

## Other helpful tricks

### Loading data "like" another dataset

Another option for loading matching data from multiple products is to use `dc.load()`'s `like` parameter. This will copy the spatial and temporal extent and the CRS/resolution from an existing dataset, and use these parameters to load new data from a new product.
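The `**` unpacking used with the query dictionary is plain Python, so it can be demonstrated without a datacube connection. The `fake_load` function below is a hypothetical stand-in for `dc.load()` that simply echoes the arguments it receives:

```python
def fake_load(product, x=None, y=None, time=None, **other):
    # Stand-in for dc.load(): records the arguments it was called with.
    return {"product": product, "x": x, "y": y, "time": time, "other": other}

query = {"x": (-71.1, -71.5),
         "y": (-29.5, -30),
         "time": ("2020-01-01", "2020-12-31"),
         "resolution": (-250, 250)}

# **query expands each dictionary entry into a keyword argument
result = fake_load(product="usgs_espa_ls8c1_sr", **query)
print(result["time"])   # ('2020-01-01', '2020-12-31')
print(result["other"])  # {'resolution': (-250, 250)}
```

Because the dictionary keys become keyword argument names, any parameter accepted by `dc.load()` can be carried in the query dictionary and re-used across multiple loads.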
In the example below, we would load a WOfS dataset that exactly matches the `ds_ls8` dataset we loaded earlier:

```
# THIS WON'T WORK UNTIL WE GET MORE DATA IN THE CHILE DATACUBE
# ds_wofs = dc.load(product="ga_ls8c_wofs_2_annual_summary",
#                   like=ds_ls8)
# print(ds_wofs)
```

### Adding a progress bar

When loading large amounts of data, it can be useful to view the progress of the data load. The `progress_cbk` parameter in `dc.load()` allows us to add a progress bar which will indicate how the load is progressing. In this example, we will load a year of data (2020) from the `usgs_espa_ls8c1_sr` product with a progress bar.

Note that this only works when dask chunking is **disabled**. To understand more about Dask, please see [Parallel processing with Dask](08_Parallel_processing_with_dask.ipynb)

```
query = {"x": (-71.1, -71.5),
         "y": (-29.5, -30),
         "time": ("2020-01-01", "2020-12-31"),
         "output_crs": "EPSG:32719",
         "resolution": (-500, 500),
         # "dask_chunks": {"time": 1},
         "resampling": {"red": "nearest", "*": "average"}}

ds_progress = dc.load(product="usgs_espa_ls8c1_sr",
                      progress_cbk=with_ui_cbk(),
                      **query)
ds_progress
```

## Recommended next steps

For more advanced information about working with Jupyter Notebooks or JupyterLab, you can explore the [JupyterLab documentation page](https://jupyterlab.readthedocs.io/en/stable/user/notebook.html).

To continue working through the notebooks in this beginner's guide, the notebooks are designed to be worked through in the following order:

1. [Jupyter Notebooks](01_Jupyter_notebooks.ipynb)
2. [Products and Measurements](02_Products_and_measurements.ipynb)
3. **Loading data (this notebook)**
4. [Plotting](04_Plotting.ipynb)
5. [Performing a basic analysis](05_Basic_analysis.ipynb)
6. [Introduction to numpy](06_Intro_to_numpy.ipynb)
7. [Introduction to xarray](07_Intro_to_xarray.ipynb)
8. [Parallel processing with Dask](08_Parallel_processing_with_dask.ipynb)

Once you have completed the above tutorials, join advanced users in exploring:

* The "Datasets" directory in the repository, where you can explore the available products in depth.
* The "Frequently used code" directory, which contains a recipe book of common techniques and methods for analysing data.
* The "Real-world examples" directory, which provides more complex workflows and analysis case studies.
# Developing Advanced User Interfaces

*Using Jupyter Widgets, Pandas Dataframes and Matplotlib*

While BPTK-Py offers a number of high-level functions to quickly plot equations (such as `bptk.plot_scenarios`) or create a dashboard (e.g. `bptk.dashboard`), you may sometimes be in a situation where you want to create more sophisticated plots (e.g. plots with two axes) or a more sophisticated dashboard for your simulation.

This is actually quite easy, because BPTK-Py's high-level functions already utilize some very powerful open source libraries for data management, plotting and dashboards: Pandas, Matplotlib and Jupyter Widgets. In order to harness the full power of these libraries, you only need to understand how to make the data generated by BPTK-Py available to them.

This *How To* illustrates this using a neat little simulation of customer acquisition strategies. You don't need to understand the simulation to follow this document, but if you are interested you can read more about it on our [blog](https://www.transentis.com/an-example-to-illustrate-the-business-prototyping-methodology/).

## Advanced Plotting

We'll start with some advanced plotting of simulation results.

```
## Load the BPTK Package
from BPTK_Py.bptk import bptk

bptk = bptk()
```

BPTK-Py's workhorse for creating plots is the `bptk.plot_scenarios` function. The function generates all the data you would like to plot using the simulation defined by the scenario manager and the settings defined by the scenarios. The data are stored in a Pandas dataframe. When it comes to plotting the results, the framework uses Matplotlib.

To illustrate this, we will recreate the plot below directly from the underlying data:

```
bptk.plot_scenarios(
    scenario_managers=["smCustomerAcquisition"],
    scenarios=["base"],
    equations=['customers'],
    title="Base",
    freq="M",
    x_label="Time",
    y_label="No. of Customers"
)
```

You can access the data generated by a scenario by saving it into a dataframe.
You can do this by adding the `return_df` flag to `bptk.plot_scenarios`:

```
df = bptk.plot_scenarios(
    scenario_managers=["smCustomerAcquisition"],
    scenarios=["base"],
    equations=['customers'],
    title="Base",
    freq="M",
    x_label="Time",
    y_label="No. of Customers",
    return_df=True
)
```

The dataframe is indexed by time and stores the equations (in SD models) or agent properties (in agent-based models) in the columns:

```
df[0:10]  # just show the first ten items
```

The framework's `bptk.plot_scenarios` method first runs the simulation using the settings defined in the scenario and stores the data in a dataframe. It then plots the dataframe using Pandas' `df.plot` method. We can do the same:

```
subplot = df.plot(None, "customers")
```

The plot above doesn't look quite as neat as the plots created by `bptk.plot_scenarios`; this is because the framework applies some styling information. The styling information is stored in `BPTK_Py.config`, and you can access (and modify) it there. Now let's apply the config to `df.plot`:

```
import BPTK_Py.config as config

subplot = df.plot(kind=config.configuration["kind"],
                  alpha=config.configuration["alpha"],
                  stacked=config.configuration["stacked"],
                  figsize=config.configuration["figsize"],
                  title="Base",
                  color=config.configuration["colors"],
                  lw=config.configuration["linewidth"])
```

Yes! We've recreated the plot from the high-level `bptk.plot_scenarios` method using basic plotting functions.

Now let's do something that currently isn't possible using the high-level BPTK-Py methods: let's create a graph that has two axes. This is useful when you want to show the results of two equations at the same time, but they have different orders of magnitude. For instance, in the plot below, the number of customers is much smaller than the profit made, so the customer graph looks like a straight line. But it would still be interesting to be able to compare the two graphs.
```
bptk.plot_scenarios(
    scenario_managers=["smCustomerAcquisition"],
    scenarios=["base"],
    equations=['customers', 'profit'],
    title="Base",
    freq="M",
    x_label="Time",
    y_label="No. of Customers"
)
```

As before, we collect the data in a dataframe:

```
df = bptk.plot_scenarios(
    scenario_managers=["smCustomerAcquisition"],
    scenarios=["base"],
    equations=['customers', 'profit'],
    title="Base",
    freq="M",
    x_label="Time",
    y_label="No. of Customers",
    return_df=True
)
df[0:10]
```

Plotting two axes is easy in Pandas (which itself uses the Matplotlib library):

```
ax = df.plot(None, 'customers',
             kind=config.configuration["kind"],
             alpha=config.configuration["alpha"],
             stacked=config.configuration["stacked"],
             figsize=config.configuration["figsize"],
             title="Profit vs. Customers",
             color=config.configuration["colors"],
             lw=config.configuration["linewidth"])  # ax is a Matplotlib Axes object

ax1 = ax.twinx()  # Matplotlib.axes.Axes.twinx creates a twin y-axis.
plot = df.plot(None, 'profit', ax=ax1)
```

Voila! This is actually quite easy once you understand how to access the data (and of course a little knowledge of Pandas and Matplotlib is also useful). If you were writing a document that needed a lot of plots of this kind, you could create your own high-level function to avoid having to copy and paste the code above multiple times.

## Advanced interactive user interfaces

Now let's try something a little more challenging: let's build a dashboard for our simulation that lets you manipulate some of the scenario settings interactively and plots results in tabs.

> Note: You need to have widgets enabled in Jupyter for the following to work. Please check the [BPTK-Py installation instructions](https://bptk.transentis-labs.com/en/latest/docs/usage/installation.html) or refer to the [Jupyter Widgets](https://ipywidgets.readthedocs.io/en/latest/user_install.html) documentation.

First, we need to understand how to create tabs.
For this we need to import the `ipywidgets` library, and we also need access to Matplotlib's `pyplot`:

```
%matplotlib inline

import matplotlib.pyplot as plt
from ipywidgets import interact
import ipywidgets as widgets
```

Then we can create some tabs that display scenario results as follows:

```
out1 = widgets.Output()
out2 = widgets.Output()

tab = widgets.Tab(children=[out1, out2])
tab.set_title(0, 'Customers')
tab.set_title(1, 'Profit')
display(tab)

with out1:
    # turn off pyplot's interactive mode to ensure the plot is not created directly
    plt.ioff()
    # create the plot, but don't show it yet
    bptk.plot_scenarios(
        scenario_managers=["smCustomerAcquisition"],
        scenarios=["hereWeGo"],
        equations=['customers'],
        title="Here We Go",
        freq="M",
        x_label="Time",
        y_label="No. of Customers"
    )
    # show the plot
    plt.show()
    # turn interactive mode on again
    plt.ion()

with out2:
    plt.ioff()
    bptk.plot_scenarios(
        scenario_managers=["smCustomerAcquisition"],
        scenarios=["hereWeGo"],
        equations=['profit'],
        title="Here We Go",
        freq="M",
        x_label="Time",
        y_label="Euro"
    )
    plt.show()
    plt.ion()
```

That was easy! The only thing you really need to understand is to turn interactive plotting in `pyplot` off before creating the tabs and then turn it on again after creating the plots. If you forget to do that, the plots appear above the tabs (try it and see!).

In the next step, we need to add some sliders to manipulate the following scenario settings:

* Referrals
* Referral Free Months
* Referral Program Adoption %
* Advertising Success %

Creating a slider for the referrals is easy using the integer slider from the `ipywidgets` widget library:

```
widgets.IntSlider(
    value=7,
    min=0,
    max=15,
    step=1,
    description='Referrals:',
    disabled=False,
    continuous_update=False,
    orientation='horizontal',
    readout=True,
    readout_format='d'
)
```

When manipulating a simulation model, we mostly want to start with a particular scenario and then manipulate some of the scenario settings using interactive widgets.
Let's set up a new scenario for this purpose and call it `interactiveScenario`:

```
bptk.register_scenarios(
    scenario_manager="smCustomerAcquisition",
    scenarios={
        "interactiveScenario": {
            "constants": {
                "referrals": 0,
                "advertisingSuccessPct": 0.1,
                "referralFreeMonths": 3,
                "referralProgamAdoptionPct": 10
            }
        }
    }
)
```

We can then access the scenario using `bptk.get_scenario`; the scenario constants can be accessed in the `constants` variable:

```
scenario = bptk.get_scenario("smCustomerAcquisition", "interactiveScenario")
scenario.constants

bptk.plot_scenarios(
    scenario_managers=["smCustomerAcquisition"],
    scenarios=["interactiveScenario"],
    equations=['profit'],
    title="Interactive Scenario",
    freq="M",
    x_label="Time",
    y_label="Euro"
)
```

Now that we have all the right pieces, we can put them together using the `interact` function:

```
@interact(advertising_success_pct=widgets.FloatSlider(
    value=0.1,
    min=0,
    max=1,
    step=0.01,
    continuous_update=False,
    description='Advertising Success Pct'
))
def dashboard(advertising_success_pct):
    scenario = bptk.get_scenario("smCustomerAcquisition", "interactiveScenario")
    scenario.constants["advertisingSuccessPct"] = advertising_success_pct
    bptk.reset_scenario_cache(scenario_manager="smCustomerAcquisition",
                              scenario="interactiveScenario")
    bptk.plot_scenarios(
        scenario_managers=["smCustomerAcquisition"],
        scenarios=["interactiveScenario"],
        equations=['profit'],
        title="Interactive Scenario",
        freq="M",
        x_label="Time",
        y_label="Euro"
    )
```

Now let's combine this with the tabs from above.
``` out1 = widgets.Output() out2 = widgets.Output() tab = widgets.Tab(children = [out1, out2]) tab.set_title(0, 'Customers') tab.set_title(1, 'Profit') display(tab) @interact(advertising_success_pct=widgets.FloatSlider( value=0.1, min=0, max=1, step=0.01, continuous_update=False, description='Advertising Success Pct' )) def dashboardWithTabs(advertising_success_pct): scenario= bptk.get_scenario("smCustomerAcquisition","interactiveScenario") scenario.constants["advertisingSuccessPct"]=advertising_success_pct bptk.reset_scenario_cache(scenario_manager="smCustomerAcquisition", scenario="interactiveScenario") with out1: # turn off pyplot's interactive mode to ensure the plot is not displayed directly plt.ioff() # clear the widgets output ... otherwise we will end up with a long list of plots, one for each change of settings # create the plot, but don't show it yet bptk.plot_scenarios( scenario_managers=["smCustomerAcquisition"], scenarios=["interactiveScenario"], equations=['customers'], title="Interactive Scenario", freq="M", x_label="Time", y_label="No. of Customers" ) # show the plot out1.clear_output() plt.show() # turn interactive mode on again plt.ion() with out2: plt.ioff() out2.clear_output() bptk.plot_scenarios( scenario_managers=["smCustomerAcquisition"], scenarios=["interactiveScenario"], equations=['profit'], title="Interactive Scenario", freq="M", x_label="Time", y_label="Euro" ) plt.show() plt.ion() ```
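The `plt.ioff()`/`plt.ion()` bracketing used throughout these dashboards can be tried in isolation with plain Matplotlib, without `bptk`, tabs, or sliders. Here is a minimal sketch; it uses the headless Agg backend so it also runs outside a notebook:

```python
import matplotlib
matplotlib.use("Agg")  # headless backend so the sketch runs outside a notebook too
import matplotlib.pyplot as plt

plt.ion()                 # interactive mode on: figures would be displayed implicitly
plt.ioff()                # switch it off before building the figure
assert not plt.isinteractive()

fig, ax = plt.subplots()  # figure is created, but not displayed implicitly
ax.plot([0, 1, 2, 3], [0, 1, 4, 9])

plt.ion()                 # switch interactive mode back on afterwards
assert plt.isinteractive()
```

The same off-build-show-on sequence is what keeps the plots inside the widget outputs instead of above the tabs.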
<a href="https://colab.research.google.com/github/kartikgill/The-GAN-Book/blob/main/Skill-08/Cycle-GAN-No-Outputs.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> # Import Useful Libraries ``` import pandas as pd import numpy as np import matplotlib.pyplot as plt from tqdm import tqdm_notebook %matplotlib inline import tensorflow print (tensorflow.__version__) ``` # Download and Unzip Data ``` !wget https://people.eecs.berkeley.edu/~taesung_park/CycleGAN/datasets/horse2zebra.zip !unzip horse2zebra.zip !ls horse2zebra import glob path = "" horses_train = glob.glob(path + 'horse2zebra/trainA/*.jpg') zebras_train = glob.glob(path + 'horse2zebra/trainB/*.jpg') horses_test = glob.glob(path + 'horse2zebra/testA/*.jpg') zebras_test = glob.glob(path + 'horse2zebra/testB/*.jpg') len(horses_train), len(zebras_train), len(horses_test), len(zebras_test) import cv2 for file in horses_train[:10]: img = cv2.imread(file) print (img.shape) ``` # Display a Few Samples ``` print ("Horses") for k in range(2): plt.figure(figsize=(15, 15)) for j in range(6): file = np.random.choice(horses_train) img = cv2.imread(file) img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB) plt.subplot(660 + 1 + j) plt.imshow(img) plt.axis('off') #plt.title(trainY[i]) plt.show() print ("-"*80) print ("Zebras") for k in range(2): plt.figure(figsize=(15, 15)) for j in range(6): file = np.random.choice(zebras_train) img = cv2.imread(file) img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB) plt.subplot(660 + 1 + j) plt.imshow(img) plt.axis('off') #plt.title(trainY[i]) plt.show() ``` # Define Generator Model (Res-Net Like) ``` # The following class is taken from: https://keras.io/examples/generative/cyclegan/ class ReflectionPadding2D(tensorflow.keras.layers.Layer): """Implements Reflection Padding as a layer. Args: padding(tuple): Amount of padding for the spatial dimensions. Returns: A padded tensor with the same type as the input tensor.
""" def __init__(self, padding=(1, 1), **kwargs): self.padding = tuple(padding) super(ReflectionPadding2D, self).__init__(**kwargs) def call(self, input_tensor, mask=None): padding_width, padding_height = self.padding padding_tensor = [ [0, 0], [padding_height, padding_height], [padding_width, padding_width], [0, 0], ] return tensorflow.pad(input_tensor, padding_tensor, mode="REFLECT") import tensorflow_addons as tfa # Weights initializer for the layers. kernel_init = tensorflow.keras.initializers.RandomNormal(mean=0.0, stddev=0.02) # Gamma initializer for instance normalization. gamma_init = tensorflow.keras.initializers.RandomNormal(mean=0.0, stddev=0.02) def custom_resnet_block(input_data, filters): x = ReflectionPadding2D()(input_data) x = tensorflow.keras.layers.Conv2D(filters, kernel_size=(3,3), padding='valid', kernel_initializer=kernel_init)(x) x = tfa.layers.InstanceNormalization()(x) x = tensorflow.keras.layers.Activation('relu')(x) x = ReflectionPadding2D()(x) x = tensorflow.keras.layers.Conv2D(filters, kernel_size=(3,3), padding='valid', kernel_initializer=kernel_init)(x) x = tfa.layers.InstanceNormalization()(x) x = tensorflow.keras.layers.Add()([x, input_data]) return x def make_generator(): source_image = tensorflow.keras.layers.Input(shape=(256, 256, 3)) x = ReflectionPadding2D(padding=(3, 3))(source_image) x = tensorflow.keras.layers.Conv2D(64, kernel_size=(7,7), kernel_initializer=kernel_init, use_bias=False)(x) x = tfa.layers.InstanceNormalization()(x) x = tensorflow.keras.layers.Activation('relu')(x) x = tensorflow.keras.layers.Conv2D(128, kernel_size=(3,3), strides=(2,2), padding='same', kernel_initializer=kernel_init)(x) x = tfa.layers.InstanceNormalization()(x) x = tensorflow.keras.layers.Activation('relu')(x) x = tensorflow.keras.layers.Conv2D(256, kernel_size=(3,3), strides=(2,2), padding='same', kernel_initializer=kernel_init)(x) x = tfa.layers.InstanceNormalization()(x) x = tensorflow.keras.layers.Activation('relu')(x) x = 
custom_resnet_block(x, 256) x = custom_resnet_block(x, 256) x = custom_resnet_block(x, 256) x = custom_resnet_block(x, 256) x = custom_resnet_block(x, 256) x = custom_resnet_block(x, 256) x = custom_resnet_block(x, 256) x = custom_resnet_block(x, 256) x = custom_resnet_block(x, 256) x = tensorflow.keras.layers.Conv2DTranspose(128, kernel_size=(3,3), strides=(2,2), padding='same', kernel_initializer=kernel_init)(x) x = tfa.layers.InstanceNormalization()(x) x = tensorflow.keras.layers.Activation('relu')(x) x = tensorflow.keras.layers.Conv2DTranspose(64, kernel_size=(3,3), strides=(2,2), padding='same', kernel_initializer=kernel_init)(x) x = tfa.layers.InstanceNormalization()(x) x = tensorflow.keras.layers.Activation('relu')(x) x = ReflectionPadding2D(padding=(3, 3))(x) x = tensorflow.keras.layers.Conv2D(3, kernel_size=(7,7), padding='valid')(x) x = tfa.layers.InstanceNormalization()(x) translated_image = tensorflow.keras.layers.Activation('tanh')(x) return source_image, translated_image source_image, translated_image = make_generator() generator_network_AB = tensorflow.keras.models.Model(inputs=source_image, outputs=translated_image) source_image, translated_image = make_generator() generator_network_BA = tensorflow.keras.models.Model(inputs=source_image, outputs=translated_image) print (generator_network_AB.summary()) ``` # Define Discriminator Network ``` def my_conv_layer(input_layer, filters, strides, bn=True): x = tensorflow.keras.layers.Conv2D(filters, kernel_size=(4,4), strides=strides, padding='same', kernel_initializer=kernel_init)(input_layer) x = tensorflow.keras.layers.LeakyReLU(alpha=0.2)(x) if bn: x = tfa.layers.InstanceNormalization()(x) return x def make_discriminator(): target_image_input = tensorflow.keras.layers.Input(shape=(256, 256, 3)) x = my_conv_layer(target_image_input, 64, (2,2), bn=False) x = my_conv_layer(x, 128, (2,2)) x = my_conv_layer(x, 256, (2,2)) x = my_conv_layer(x, 512, (1,1)) patch_features = tensorflow.keras.layers.Conv2D(1, 
kernel_size=(4,4), padding='same')(x) return target_image_input, patch_features target_image_input, patch_features = make_discriminator() discriminator_network_A = tensorflow.keras.models.Model(inputs=target_image_input, outputs=patch_features) target_image_input, patch_features = make_discriminator() discriminator_network_B = tensorflow.keras.models.Model(inputs=target_image_input, outputs=patch_features) print (discriminator_network_A.summary()) adam_optimizer = tensorflow.keras.optimizers.Adam(learning_rate=0.0002, beta_1=0.5) discriminator_network_A.compile(loss='mse', optimizer=adam_optimizer, metrics=['accuracy']) discriminator_network_B.compile(loss='mse', optimizer=adam_optimizer, metrics=['accuracy']) ``` # Define Cycle-GAN ``` source_image_A = tensorflow.keras.layers.Input(shape=(256, 256, 3)) source_image_B = tensorflow.keras.layers.Input(shape=(256, 256, 3)) # Domain Transfer fake_B = generator_network_AB(source_image_A) fake_A = generator_network_BA(source_image_B) # Restoring original Domain get_back_A = generator_network_BA(fake_B) get_back_B = generator_network_AB(fake_A) # Get back Identical/Same Image get_same_A = generator_network_BA(source_image_A) get_same_B = generator_network_AB(source_image_B) discriminator_network_A.trainable=False discriminator_network_B.trainable=False # Tell Real vs Fake, for a given domain verify_A = discriminator_network_A(fake_A) verify_B = discriminator_network_B(fake_B) cycle_gan = tensorflow.keras.models.Model(inputs = [source_image_A, source_image_B], \ outputs = [verify_A, verify_B, get_back_A, get_back_B, get_same_A, get_same_B]) cycle_gan.summary() ``` # Compiling Model ``` cycle_gan.compile(loss=['mse', 'mse', 'mae', 'mae', 'mae', 'mae'], loss_weights=[1, 1, 10, 10, 5, 5],\ optimizer=adam_optimizer) ``` # Define Data Generators ``` def horses_to_zebras(horses, generator_network): generated_samples = generator_network.predict_on_batch(horses) return generated_samples def zebras_to_horses(zebras, 
generator_network): generated_samples = generator_network.predict_on_batch(zebras) return generated_samples def get_horse_samples(batch_size): random_files = np.random.choice(horses_train, size=batch_size) images = [] for file in random_files: img = cv2.imread(file) img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB) images.append((img-127.5)/127.5) horse_images = np.array(images) return horse_images def get_zebra_samples(batch_size): random_files = np.random.choice(zebras_train, size=batch_size) images = [] for file in random_files: img = cv2.imread(file) img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB) images.append((img-127.5)/127.5) zebra_images = np.array(images) return zebra_images def show_generator_results_horses_to_zebras(generator_network_AB, generator_network_BA): images = [] for j in range(5): file = np.random.choice(horses_test) img = cv2.imread(file) img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB) images.append(img) print ('Input Horse Images') plt.figure(figsize=(13, 13)) for j, img in enumerate(images): plt.subplot(550 + 1 + j) plt.imshow(img) plt.axis('off') #plt.title(trainY[i]) plt.show() print ('Translated (Horse -> Zebra) Images') translated = [] plt.figure(figsize=(13, 13)) for j, img in enumerate(images): img = (img-127.5)/127.5 output = horses_to_zebras(np.array([img]), generator_network_AB)[0] translated.append(output) output = (output+1.0)/2.0 plt.subplot(550 + 1 + j) plt.imshow(output) plt.axis('off') #plt.title(trainY[i]) plt.show() print ('Translated reverse ( Fake Zebras -> Fake Horses)') plt.figure(figsize=(13, 13)) for j, img in enumerate(translated): output = zebras_to_horses(np.array([img]), generator_network_BA)[0] output = (output+1.0)/2.0 plt.subplot(550 + 1 + j) plt.imshow(output) plt.axis('off') #plt.title(trainY[i]) plt.show() def show_generator_results_zebras_to_horses(generator_network_AB, generator_network_BA): images = [] for j in range(5): file = np.random.choice(zebras_test) img = cv2.imread(file) img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB) 
images.append(img) print ('Input Zebra Images') plt.figure(figsize=(13, 13)) for j, img in enumerate(images): plt.subplot(550 + 1 + j) plt.imshow(img) plt.axis('off') #plt.title(trainY[i]) plt.show() print ('Translated (Zebra -> Horse) Images') translated = [] plt.figure(figsize=(13, 13)) for j, img in enumerate(images): img = (img-127.5)/127.5 output = zebras_to_horses(np.array([img]), generator_network_BA)[0] translated.append(output) output = (output+1.0)/2.0 plt.subplot(550 + 1 + j) plt.imshow(output) plt.axis('off') #plt.title(trainY[i]) plt.show() print ('Translated reverse (Fake Horse -> Fake Zebra)') plt.figure(figsize=(13, 13)) for j, img in enumerate(translated): output = horses_to_zebras(np.array([img]), generator_network_AB)[0] output = (output+1.0)/2.0 plt.subplot(550 + 1 + j) plt.imshow(output) plt.axis('off') #plt.title(trainY[i]) plt.show() ``` # Training Cycle-GAN ``` len(horses_train), len(zebras_train) epochs = 500 batch_size = 1 steps = 1067 for i in range(0, epochs): if i%5 == 0: show_generator_results_horses_to_zebras(generator_network_AB, generator_network_BA) print ("-"*100) show_generator_results_zebras_to_horses(generator_network_AB, generator_network_BA) for j in range(steps): # A == Horses # B == Zebras domain_A_images = get_horse_samples(batch_size) domain_B_images = get_zebra_samples(batch_size) fake_patch = np.zeros((batch_size, 32, 32, 1)) real_patch = np.ones((batch_size, 32, 32, 1)) fake_B_images = generator_network_AB(domain_A_images) fake_A_images = generator_network_BA(domain_B_images) # Updating Discriminator A weights discriminator_network_A.trainable=True discriminator_network_B.trainable=False loss_d_real_A = discriminator_network_A.train_on_batch(domain_A_images, real_patch) loss_d_fake_A = discriminator_network_A.train_on_batch(fake_A_images, fake_patch) loss_d_A = np.add(loss_d_real_A, loss_d_fake_A)/2.0 # Updating Discriminator B weights discriminator_network_B.trainable=True discriminator_network_A.trainable=False 
loss_d_real_B = discriminator_network_B.train_on_batch(domain_B_images, real_patch) loss_d_fake_B = discriminator_network_B.train_on_batch(fake_B_images, fake_patch) loss_d_B = np.add(loss_d_real_B, loss_d_fake_B)/2.0 # Make the Discriminator believe that these are real samples and calculate loss to train the generator discriminator_network_A.trainable=False discriminator_network_B.trainable=False # Updating Generator weights loss_g = cycle_gan.train_on_batch([domain_A_images, domain_B_images],\ [real_patch, real_patch, domain_A_images, domain_B_images, domain_A_images, domain_B_images]) if j%100 == 0: print ("Epoch:%.0f, Step:%.0f, DA-Loss:%.3f, DA-Acc:%.3f, DB-Loss:%.3f, DB-Acc:%.3f, G-Loss:%.3f"\ %(i,j,loss_d_A[0],loss_d_A[1]*100,loss_d_B[0],loss_d_B[1]*100,loss_g[0])) ```
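Two details of the loop above are easy to miss. First, the discriminator is a PatchGAN: its three stride-2 convolutions downsample the 256x256 input by a factor of 8, which is why the real/fake targets are 32x32x1 grids of per-patch labels rather than single scalars. Second, the generator update minimizes a weighted sum of the six composite-model outputs using the `loss_weights=[1, 1, 10, 10, 5, 5]` from the compile step. A small numpy sketch of both ideas (the per-output loss values below are made up purely for illustration):

```python
import numpy as np

batch_size = 1

# PatchGAN targets: one real/fake label per patch, not one per image.
# Three stride-2 convolutions shrink 256 -> 128 -> 64 -> 32.
patch_side = 256 // (2 ** 3)
real_patch = np.ones((batch_size, patch_side, patch_side, 1))
fake_patch = np.zeros((batch_size, patch_side, patch_side, 1))
assert real_patch.shape == (1, 32, 32, 1)

# Keras combines a multi-output model's losses as a weighted sum.
loss_weights = [1, 1, 10, 10, 5, 5]          # adversarial A/B, cycle A/B, identity A/B
losses = [0.3, 0.4, 0.05, 0.06, 0.02, 0.03]  # hypothetical per-output loss values
total = sum(w * l for w, l in zip(loss_weights, losses))
assert abs(total - 2.05) < 1e-9              # cycle terms dominate the total
```

The heavy weights on the cycle terms are what push the reconstructions `get_back_A`/`get_back_B` to match the original images.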
# Work with Data Data is the foundation on which machine learning models are built. Managing data centrally in the cloud, and making it accessible to teams of data scientists who are running experiments and training models on multiple workstations and compute targets is an important part of any professional data science solution. In this notebook, you'll explore two Azure Machine Learning objects for working with data: *datastores*, and *datasets*. ## Before you start If you haven't already done so, you must install the latest version of the **azureml-sdk** and **azureml-widgets** packages before running this notebook. To do this, run the cell below and then **restart the kernel** before running the subsequent cells. ``` !pip install --upgrade azureml-sdk azureml-widgets ``` ## Connect to your workspace With the latest version of the SDK installed, now you're ready to connect to your workspace. **Note:** If you haven't already established an authenticated session with your Azure subscription, you'll be prompted to authenticate by clicking a link, entering an authentication code, and signing into Azure. ``` import azureml.core from azureml.core import Workspace # Load the workspace from the saved config file ws = Workspace.from_config() print('Ready to use Azure ML {} to work with {}'.format(azureml.core.VERSION, ws.name)) ``` ## Work with datastores In Azure ML, *datastores* are references to storage locations, such as Azure Storage blob containers. Every workspace has a default datastore - usually the Azure storage blob container that was created with the workspace. If you need to work with data that is stored in different locations, you can add custom datastores to your workspace and set any of them to be the default. 
### View datastores Run the following code to determine the datastores in your workspace: ``` # Get the default datastore default_ds = ws.get_default_datastore() # Enumerate all datastores, indicating which is the default for ds_name in ws.datastores: print(ds_name, "- Default =", ds_name == default_ds.name) ``` You can also view and manage datastores in your workspace on the **Datastores** page for your workspace in [Azure Machine Learning studio](https://ml.azure.com/). ### Upload data to a datastore Now that you have determined the available datastores, you can upload files from your local file system to a datastore so that it will be accessible to experiments running in the workspace, regardless of where the experiment script is actually being run. ``` default_ds.upload_files(files=['./data/diabetes.csv', './data/diabetes2.csv'], # Upload the diabetes csv files in /data target_path='diabetes-data/', # Put it in a folder path in the datastore overwrite=True, # Replace existing files of the same name show_progress=True) ``` ## Work with datasets Azure Machine Learning provides an abstraction for data in the form of *datasets*. A dataset is a versioned reference to a specific set of data that you may want to use in an experiment. Datasets can be *tabular* or *file-based*. ### Create a tabular dataset Let's create a dataset from the diabetes data you uploaded to the datastore, and view the first 20 records. In this case, the data is in a structured format in a CSV file, so we'll use a tabular dataset.
``` from azureml.core import Dataset # Get the default datastore default_ds = ws.get_default_datastore() # Create a tabular dataset from the path on the datastore (this may take a short while) tab_data_set = Dataset.Tabular.from_delimited_files(path=(default_ds, 'diabetes-data/*.csv')) # Display the first 20 rows as a Pandas dataframe tab_data_set.take(20).to_pandas_dataframe() ``` As you can see in the code above, it's easy to convert a tabular dataset to a Pandas dataframe, enabling you to work with the data using common Python techniques. ### Create a file dataset The dataset you created is a *tabular* dataset that can be read as a dataframe containing all of the data in the structured files that are included in the dataset definition. This works well for tabular data, but in some machine learning scenarios you might need to work with data that is unstructured; or you may simply want to handle reading the data from files in your own code. To accomplish this, you can use a file dataset, which creates a list of file paths in a virtual mount point, which you can use to read the data in the files. ``` # Create a file dataset from the path on the datastore (this may take a short while) file_data_set = Dataset.File.from_files(path=(default_ds, 'diabetes-data/*.csv')) # Get the files in the dataset for file_path in file_data_set.to_path(): print(file_path) ``` ### Register datasets Now that you have created datasets that reference the diabetes data, you can register them to make them easily accessible to any experiment being run in the workspace. We'll register the tabular dataset as **diabetes dataset**, and the file dataset as **diabetes file dataset**.
``` # Register the tabular dataset try: tab_data_set = tab_data_set.register(workspace=ws, name='diabetes dataset', description='diabetes data', tags = {'format':'CSV'}, create_new_version=True) except Exception as ex: print(ex) # Register the file dataset try: file_data_set = file_data_set.register(workspace=ws, name='diabetes file dataset', description='diabetes files', tags = {'format':'CSV'}, create_new_version=True) except Exception as ex: print(ex) print('Datasets registered') ``` You can view and manage datasets on the **Datasets** page for your workspace in [Azure Machine Learning studio](https://ml.azure.com/). You can also get a list of datasets from the workspace object: ``` print("Datasets:") for dataset_name in list(ws.datasets.keys()): dataset = Dataset.get_by_name(ws, dataset_name) print("\t", dataset.name, 'version', dataset.version) ``` The ability to version datasets enables you to redefine datasets without breaking existing experiments or pipelines that rely on previous definitions. By default, the latest version of a named dataset is returned, but you can retrieve a specific version of a dataset by specifying the version number, like this: `dataset_v1 = Dataset.get_by_name(ws, 'diabetes dataset', version = 1)` ### Train a model from a tabular dataset Now that you have datasets, you're ready to start training models from them. You can pass datasets to scripts as inputs in the script run configuration used to run the script. Run the following two code cells to create: 1. A folder named **diabetes_training_from_tab_dataset** 2. A script that trains a classification model by using a tabular dataset that is passed to it as an argument.
``` import os # Create a folder for the experiment files experiment_folder = 'diabetes_training_from_tab_dataset' os.makedirs(experiment_folder, exist_ok=True) print(experiment_folder, 'folder created') %%writefile $experiment_folder/diabetes_training.py # Import libraries import os import argparse from azureml.core import Run, Dataset import pandas as pd import numpy as np import joblib from sklearn.model_selection import train_test_split from sklearn.linear_model import LogisticRegression from sklearn.metrics import roc_auc_score from sklearn.metrics import roc_curve # Get the script arguments (regularization rate and training dataset ID) parser = argparse.ArgumentParser() parser.add_argument('--regularization', type=float, dest='reg_rate', default=0.01, help='regularization rate') parser.add_argument("--input-data", type=str, dest='training_dataset_id', help='training dataset') args = parser.parse_args() # Set regularization hyperparameter (passed as an argument to the script) reg = args.reg_rate # Get the experiment run context run = Run.get_context() # Get the training dataset print("Loading Data...") diabetes = run.input_datasets['training_data'].to_pandas_dataframe() # Separate features and labels X, y = diabetes[['Pregnancies','PlasmaGlucose','DiastolicBloodPressure','TricepsThickness','SerumInsulin','BMI','DiabetesPedigree','Age']].values, diabetes['Diabetic'].values # Split data into training set and test set X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.30, random_state=0) # Train a logistic regression model print('Training a logistic regression model with regularization rate of', reg) run.log('Regularization Rate', np.float(reg)) model = LogisticRegression(C=1/reg, solver="liblinear").fit(X_train, y_train) # calculate accuracy y_hat = model.predict(X_test) acc = np.average(y_hat == y_test) print('Accuracy:', acc) run.log('Accuracy', np.float(acc)) # calculate AUC y_scores = model.predict_proba(X_test) auc = 
roc_auc_score(y_test,y_scores[:,1]) print('AUC: ' + str(auc)) run.log('AUC', np.float(auc)) os.makedirs('outputs', exist_ok=True) # note file saved in the outputs folder is automatically uploaded into experiment record joblib.dump(value=model, filename='outputs/diabetes_model.pkl') run.complete() ``` **Note:** In the script, the dataset is passed as a parameter (or argument). In the case of a tabular dataset, this argument will contain the ID of the registered dataset; so you could write code in the script to get the experiment's workspace from the run context, and then get the dataset using its ID; like this: `run = Run.get_context()` `ws = run.experiment.workspace` `dataset = Dataset.get_by_id(ws, id=args.training_dataset_id)` `diabetes = dataset.to_pandas_dataframe()` However, Azure Machine Learning runs automatically identify arguments that reference named datasets and add them to the run's **input_datasets** collection, so you can also retrieve the dataset from this collection by specifying its "friendly name" (which as you'll see shortly, is specified in the argument definition in the script run configuration for the experiment). This is the approach taken in the script above. Now you can run a script as an experiment, defining an argument for the training dataset, which is read by the script. **Note:** The **Dataset** class depends on some components in the **azureml-dataprep** package, which includes optional support for **pandas** that is used by the **to_pandas_dataframe()** method. So you need to include this package in the environment where the training experiment will be run. 
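A quick aside on the metrics the script logs: the accuracy it computes with `np.average(y_hat == y_test)` is simply the mean of the boolean agreement between predictions and labels, i.e. the fraction of correct predictions. A tiny self-contained check of that identity (the arrays below are made up for illustration):

```python
import numpy as np

y_test = np.array([1, 0, 1, 1, 0])   # hypothetical true labels
y_hat = np.array([1, 0, 0, 1, 0])    # hypothetical predictions
acc = np.average(y_hat == y_test)    # mean of a boolean array = fraction correct
assert acc == 0.8                    # 4 of the 5 predictions match
```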
``` from azureml.core import Experiment, ScriptRunConfig, Environment from azureml.core.conda_dependencies import CondaDependencies from azureml.widgets import RunDetails # Create a Python environment for the experiment sklearn_env = Environment("sklearn-env") # Ensure the required packages are installed (we need scikit-learn, Azure ML defaults, and Azure ML dataprep) packages = CondaDependencies.create(conda_packages=['scikit-learn','pip'], pip_packages=['azureml-defaults','azureml-dataprep[pandas]']) sklearn_env.python.conda_dependencies = packages # Get the training dataset diabetes_ds = ws.datasets.get("diabetes dataset") # Create a script config script_config = ScriptRunConfig(source_directory=experiment_folder, script='diabetes_training.py', arguments = ['--regularization', 0.1, # Regularization rate parameter '--input-data', diabetes_ds.as_named_input('training_data')], # Reference to dataset environment=sklearn_env) # submit the experiment experiment_name = 'mslearn-train-diabetes' experiment = Experiment(workspace=ws, name=experiment_name) run = experiment.submit(config=script_config) RunDetails(run).show() run.wait_for_completion() ``` **Note:** The **--input-data** argument passes the dataset as a *named input* that includes a *friendly name* for the dataset, which is used by the script to read it from the **input_datasets** collection in the experiment run. The string value in the **--input-data** argument is actually the registered dataset's ID. As an alternative approach, you could simply pass *diabetes_ds.id*, in which case the script can access the dataset ID from the script arguments and use it to get the dataset from the workspace, but not from the **input_datasets** collection. The first time the experiment is run, it may take some time to set up the Python environment - subsequent runs will be quicker. When the experiment has completed, in the widget, view the **azureml-logs/70_driver_log.txt** output log and the metrics generated by the run.
### Register the trained model As with any training experiment, you can retrieve the trained model and register it in your Azure Machine Learning workspace. ``` from azureml.core import Model run.register_model(model_path='outputs/diabetes_model.pkl', model_name='diabetes_model', tags={'Training context':'Tabular dataset'}, properties={'AUC': run.get_metrics()['AUC'], 'Accuracy': run.get_metrics()['Accuracy']}) for model in Model.list(ws): print(model.name, 'version:', model.version) for tag_name in model.tags: tag = model.tags[tag_name] print ('\t',tag_name, ':', tag) for prop_name in model.properties: prop = model.properties[prop_name] print ('\t',prop_name, ':', prop) print('\n') ``` ### Train a model from a file dataset You've seen how to train a model using training data in a tabular dataset; but what about a file dataset? When you're using a file dataset, the dataset argument passed to the script represents a mount point containing file paths. How you read the data from these files depends on the kind of data in the files and what you want to do with it. In the case of the diabetes CSV files, you can use the Python glob module to create a list of files in the virtual mount point defined by the dataset, and read them all into Pandas dataframes that are concatenated into a single dataframe. Run the following two code cells to create: 1. A folder named **diabetes_training_from_file_dataset** 2. A script that trains a classification model by using a file dataset that is passed to it as an *input*.
``` import os # Create a folder for the experiment files experiment_folder = 'diabetes_training_from_file_dataset' os.makedirs(experiment_folder, exist_ok=True) print(experiment_folder, 'folder created') %%writefile $experiment_folder/diabetes_training.py # Import libraries import os import argparse from azureml.core import Dataset, Run import pandas as pd import numpy as np import joblib from sklearn.model_selection import train_test_split from sklearn.linear_model import LogisticRegression from sklearn.metrics import roc_auc_score from sklearn.metrics import roc_curve import glob # Get script arguments (regularization rate and file dataset mount point) parser = argparse.ArgumentParser() parser.add_argument('--regularization', type=float, dest='reg_rate', default=0.01, help='regularization rate') parser.add_argument('--input-data', type=str, dest='dataset_folder', help='data mount point') args = parser.parse_args() # Set regularization hyperparameter (passed as an argument to the script) reg = args.reg_rate # Get the experiment run context run = Run.get_context() # load the diabetes dataset print("Loading Data...") data_path = run.input_datasets['training_files'] # Get the training data path from the input # (You could also just use args.dataset_folder if you don't want to rely on a hard-coded friendly name) # Read the files all_files = glob.glob(data_path + "/*.csv") diabetes = pd.concat((pd.read_csv(f) for f in all_files), sort=False) # Separate features and labels X, y = diabetes[['Pregnancies','PlasmaGlucose','DiastolicBloodPressure','TricepsThickness','SerumInsulin','BMI','DiabetesPedigree','Age']].values, diabetes['Diabetic'].values # Split data into training set and test set X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.30, random_state=0) # Train a logistic regression model print('Training a logistic regression model with regularization rate of', reg) run.log('Regularization Rate', np.float(reg)) model = LogisticRegression(C=1/reg,
solver="liblinear").fit(X_train, y_train) # calculate accuracy y_hat = model.predict(X_test) acc = np.average(y_hat == y_test) print('Accuracy:', acc) run.log('Accuracy', np.float(acc)) # calculate AUC y_scores = model.predict_proba(X_test) auc = roc_auc_score(y_test,y_scores[:,1]) print('AUC: ' + str(auc)) run.log('AUC', np.float(auc)) os.makedirs('outputs', exist_ok=True) # note file saved in the outputs folder is automatically uploaded into experiment record joblib.dump(value=model, filename='outputs/diabetes_model.pkl') run.complete() ``` Just as with tabular datasets, you can retrieve a file dataset from the **input_datasets** collection by using its friendly name. You can also retrieve it from the script argument, which in the case of a file dataset contains a mount path to the files (rather than the dataset ID passed for a tabular dataset). Next we need to change the way we pass the dataset to the script - it needs to define a path from which the script can read the files. You can use either the **as_download** or **as_mount** method to do this. Using **as_download** causes the files in the file dataset to be downloaded to a temporary location on the compute where the script is being run, while **as_mount** creates a mount point from which the files can be streamed directly from the datastore. You can combine the access method with the **as_named_input** method to include the dataset in the **input_datasets** collection in the experiment run (if you omit this, for example by setting the argument to *diabetes_ds.as_mount()*, the script will be able to access the dataset mount point from the script arguments, but not from the **input_datasets** collection).
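The glob-and-concatenate pattern the file-dataset script relies on can be tried on its own, away from Azure ML, with a temporary folder standing in for the dataset mount point. A minimal sketch (the file names and values are made up for illustration):

```python
import glob
import os
import tempfile

import pandas as pd

# A temporary folder stands in for the dataset's virtual mount point
data_path = tempfile.mkdtemp()
pd.DataFrame({"a": [1, 2]}).to_csv(os.path.join(data_path, "part1.csv"), index=False)
pd.DataFrame({"a": [3, 4, 5]}).to_csv(os.path.join(data_path, "part2.csv"), index=False)

# Same pattern as the training script: glob the files, read each, concatenate
all_files = glob.glob(data_path + "/*.csv")
diabetes = pd.concat((pd.read_csv(f) for f in sorted(all_files)), sort=False)
assert len(diabetes) == 5  # rows from both files end up in one dataframe
```

Sorting the globbed paths is optional but keeps the row order deterministic.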
``` from azureml.core import Experiment from azureml.widgets import RunDetails # Get the training dataset diabetes_ds = ws.datasets.get("diabetes file dataset") # Create a script config script_config = ScriptRunConfig(source_directory=experiment_folder, script='diabetes_training.py', arguments = ['--regularization', 0.1, # Regularization rate parameter '--input-data', diabetes_ds.as_named_input('training_files').as_download()], # Reference to dataset location environment=sklearn_env) # Use the environment created previously # submit the experiment experiment_name = 'mslearn-train-diabetes' experiment = Experiment(workspace=ws, name=experiment_name) run = experiment.submit(config=script_config) RunDetails(run).show() run.wait_for_completion() ``` When the experiment has completed, in the widget, view the **azureml-logs/70_driver_log.txt** output log to verify that the files in the file dataset were downloaded to a temporary folder to enable the script to read the files. ### Register the trained model Once again, you can register the model that was trained by the experiment. ``` from azureml.core import Model run.register_model(model_path='outputs/diabetes_model.pkl', model_name='diabetes_model', tags={'Training context':'File dataset'}, properties={'AUC': run.get_metrics()['AUC'], 'Accuracy': run.get_metrics()['Accuracy']}) for model in Model.list(ws): print(model.name, 'version:', model.version) for tag_name in model.tags: tag = model.tags[tag_name] print ('\t',tag_name, ':', tag) for prop_name in model.properties: prop = model.properties[prop_name] print ('\t',prop_name, ':', prop) print('\n') ``` **More Information:** For more information about training with datasets, see [Training with Datasets](https://docs.microsoft.com/azure/machine-learning/how-to-train-with-datasets) in the Azure ML documentation.
```
%pylab inline
import scipy.stats
```

# Introduction

During the first lecture we have seen that the goal of machine learning is to train (learn/fit) a **model** on a **dataset** such that we will be able to answer several questions about the data using the model. Some useful questions are:

1. Predict a target $y$ for a new input $x$: predict what is in an image, recognize some audio sample, tell if the stock price will go up or down...
2. Generate a new sample $x$ similar to those from the training dataset. Alternatively, given part of a sample, generate the other part (e.g. given half of an image generate the other half).

Historically, similar questions were considered by statisticians. In fact, machine learning is very similar to statistics. Some people claim that there is very little difference between the two, and a tongue-in-cheek definition of machine learning is "statistics without checking for assumptions", to which ML practitioners reply that they are at least able to solve problems that are too complex for a thorough and formal statistical analysis. For a more in-depth discussion I highly recommend the ["Two cultures"](https://projecteuclid.org/euclid.ss/1009213726) essay by Leo Breiman.

Due to the similarity of the two fields we will today explore a few examples of statistical inference. Some of the resulting concepts (maximum likelihood, interpreting the outputs of a model as probabilities) will be used throughout the semester.

# Statistical Inference

Consider the polling problem:

1. There exists **a population** of individuals (e.g. voters).
2. The individuals have a voting preference (party A or B).
3. We want to know the fraction $\phi$ of voters that prefer A.
4. But we don't want to ask everyone (which means holding an election)! Instead we want to conduct a poll (choose a **sample** of people and get their mean preference $\bar\phi$).

Questions:

1. How are $\phi$ and $\bar\phi$ related?
2. What is our error?
3. How many persons do we need to ask to achieve a desired error?

# Polling

Suppose there is a large population of individuals that support either candidate A or candidate B. We want to establish the fraction $\phi$ of supporters of A in the population. We will conduct an opinion poll asking about the support for each party: we will choose randomly a certain number of people, $n$, and ask them about their candidates. We want to use the results of the poll to establish:

1. an estimate of the true population parameter $\phi$
2. a confidence interval quantifying the uncertainty of that estimate

## Sampling Model

First, we define a formal model of sampling. We will assume that the population is much bigger than the small sample. Thus we will assume a *sampling with replacement* model: each person is selected independently at random from the full population. We call such a sample IID (Independent, Identically Distributed).

Having the sampling model, we establish that the number of supporters of A in the sample follows a *binomial distribution* with:

* poll size == $n$ == number of samples,
* fraction of A's supporters == $\phi$ == success rate.

For the binomial distribution with $n$ trials and probability of success $\phi$ the expected number of successes is $n\phi$ and the variance is $n\phi(1-\phi)$. Alternatively, the *fraction* of successes in the sample has the expected value $\phi$ and variance $\frac{\phi(1-\phi)}{n}$.

Let's plot the PMF (Probability Mass Function) of the number of successes.

```
# Poll variability check: draw samples from the binomial distribution
n = 50
phi = 0.55

# Simulate a few polls
for _ in range(10):
    sample = random.rand(n) < phi
    print("Drawn %d samples. Observed success rate: %.2f (true rate: %.2f)"
          % (n, 1.0*sample.sum()/n, phi))

# model parameters
n = 10
phi = 0.55

# the binomial distribution
model = scipy.stats.binom(n=n, p=phi)
x = arange(n+1)

# plot the PMF - probability mass function
stem(x, model.pmf(x), 'b', label='Binomial PMF')

# plot the normal approximation
mu = phi * n
stdev = sqrt(phi*(1-phi) * n)
model_norm = scipy.stats.norm(mu, stdev)
x_cont = linspace(x[0], x[-1], 1000)
plot(x_cont, model_norm.pdf(x_cont), 'r', label='Norm approx.')
axvline(mu, *xlim(), color='g', label='Mean')
legend(loc='upper left')
```

## Parameter Estimation

In Statistics and Machine Learning we only have access to the sample. The goal is to learn something useful about the unknown population. Here we are interested in the true success probability $\phi$. The MLE (Maximum Likelihood Estimator) for $\phi$ is just the sample mean $\bar\phi$. However, how precise is it?

We want the (sample-dependent) confidence interval around the sample mean, such that in 95% of experiments (samples taken), the true unknown population parameter $\phi$ is in the confidence interval. Formally, we want to find $\bar\phi$ and $\epsilon$ such that $P(\bar\phi-\epsilon \leq \phi \leq \bar\phi + \epsilon) > 0.95$ or, equivalently, such that $P(|\phi-\bar\phi| \leq \epsilon) > 0.95$.

Note: from the sampling model we know that for a large enough sample (>15 persons) the random variable denoting the sample mean $\bar\phi$ is approximately normally distributed with mean $\phi$ and standard deviation $\sigma = \sqrt{\phi(1-\phi)/n}$. However, we do not know $\phi$. When designing the experiment, we can take the worst-case value, which is 0.5. Alternatively, we can plug in for $\phi$ the estimated sample mean $\bar\phi$ (note: this is slightly too optimistic, but the error will be small).

For a standard normal random variable (mean 0 and standard deviation 1) 95% of samples fall within the range $\pm 1.96$.
Therefore the confidence interval is approximately $\bar\phi \pm 1.96\sqrt{\frac{\bar\phi(1-\bar\phi)}{n}}$.

```
phi = 0.55
n = 100
n_experiments = 1000

samples = rand(n_experiments, n) < phi
phi_bar = samples.mean(1)

hist(phi_bar, bins=20, label='observed $\\bar\\phi$')
axvline([phi], color='r', label='true $\\phi$')
title('Histogram of sample means $\\bar\\phi$ from %d experiments.\n'
      'Model: %d trials, %.2f prob of success' % (n_experiments, n, phi))
legend()
xlim(phi-0.15, phi+0.15)

confidence_intervals = zeros((n_experiments, 2))
confidence_intervals[:,0] = phi_bar - 1.96*np.sqrt(phi_bar*(1-phi_bar)/n)
confidence_intervals[:,1] = phi_bar + 1.96*np.sqrt(phi_bar*(1-phi_bar)/n)

# note: this also works, can you explain how the formula works in numpy?
confidence_intervals2 = phi_bar[:,None] + [-1.96, 1.96] * np.sqrt(phi_bar*(1-phi_bar)/n).reshape(-1,1)
assert np.abs(confidence_intervals-confidence_intervals2).max()==0

good_experiments = (confidence_intervals[:,0]<phi) & (confidence_intervals[:,1]>phi)
print("Average confidence interval is phi_bar +-%.3f"
      % ((confidence_intervals[:,1]-confidence_intervals[:,0]).mean()/2.0,))
print("Out of %d experiments, the true phi fell into the confidence interval %d times."
      % (n_experiments, good_experiments.sum()))
```

## Bootstrap estimation of confidence interval

```
# Here we make a bootstrap analysis of one experiment
n_bootstraps = 200
exp_id = 1
exp0 = samples[exp_id]

# sample answers with replacement
bootstrap_idx = np.random.randint(low=0, high=n, size=(n_bootstraps, n))
exp0_bootstraps = exp0[bootstrap_idx]

# compute the mean in each bootstrap sample
exp0_bootstrap_means = exp0_bootstraps.mean(1)

# Estimate the confidence interval by taking the 2.5 and 97.5 percentile
sorted_bootstrap_means = np.sort(exp0_bootstrap_means)
bootstrap_conf_low, bootstrap_conf_high = sorted_bootstrap_means[
    [int(0.025 * n_bootstraps), int(0.975 * n_bootstraps)]]

hist(exp0_bootstrap_means, bins=20, label='bootstrap estims of $\phi$')
axvline(phi, 0, 1, label='$\\phi$', color='red')
axvline(phi_bar[exp_id], 0, 1, label='$\\bar{\\phi}$', color='green')
axvspan(confidence_intervals[exp_id, 0], confidence_intervals[exp_id, 1],
        # ymin=0.5, ymax=1.0,
        alpha=0.2, label='theoretical 95% conf int', color='green')
axvspan(bootstrap_conf_low, bootstrap_conf_high,
        # ymin=0.0, ymax=0.5,
        alpha=0.2, label='bootstrap 95% conf int', color='blue')
legend()
_ = xlim(phi-0.15, phi+0.15)
title('Theoretical and bootstrap confidence intervals')
```

## Practical conclusions about polls

Practical outcome: in the worst case ($\phi=0.5$) the 95% confidence interval is $\pm 1.96\sqrt{\frac{0.5(1-0.5)}{n}} = \pm \frac{0.98}{\sqrt{n}}$. To get the usually acceptable polling error of 3 percentage points, one needs to sample about 1068 persons. Polling companies typically ask between 1000-3000 persons.

Questions:

1. How critical is the IID sampling assumption?
2. What do you think is a larger problem: approximating the binomial PMF with a Gaussian distribution, or people lying in the questionnaire?

# Bayesian reasoning

We will treat $\phi$ - the unknown fraction of A supporters in the population - as a random variable.
Its probability distribution will express *our subjective* uncertainty about its value. We will need to start with a *prior* assumption about our belief of $\phi$. For convenience we will choose a *conjugate prior*, the Beta distribution, because the formula for its PDF is similar to the formula for the likelihood.

```
support = linspace(0,1,512)
A = 1
B = 1
plot(support, scipy.stats.beta.pdf(support, A, B))
title("Prior: Beta(%.1f, %.1f) distribution" % (A, B))
```

Then we will collect samples, and after each sample update our belief about $\phi$.

```
n_successes = 0
n_failures = 0
phi = 0.6

for _ in range(10):
    if rand() < phi:
        n_successes += 1
    else:
        n_failures += 1

plot(support, scipy.stats.beta.pdf(support, A+n_successes, B+n_failures), label='posterior')
axvline(phi, color='r', label='True $\\phi$')
conf_int_low, conf_int_high = scipy.stats.beta.ppf((0.025, 0.975), A+n_successes, B+n_failures)
axvspan(conf_int_low, conf_int_high, alpha=0.2, label='95% conf int')
title("Posterior after seeing %d successes and %d failures\n"
      "Prior pseudo-counts: A=%.1f, B=%.1f\n"
      "MAP estimate: %f, MLE estimate: %f\n"
      "conf_int: (%f, %f)" % (n_successes, n_failures, A, B,
                              1.0*(A+n_successes-1)/(A+n_successes+B+n_failures-2),
                              1.0*n_successes/(n_successes+n_failures),
                              conf_int_low, conf_int_high))
legend()
```

Please note: in the Bayesian framework we treat the quantities we want to estimate as random variables, and we need to define our prior beliefs about them. In the example, the prior was a Beta distribution. After seeing the data we update our belief about the world. In the example this is very easy - we keep running counts of the number of failures and successes observed, and update them as the data comes in. The prior conveniently can be treated as *pseudo-counts*. To summarize the distribution over the parameter, we typically take its mode (the most likely value), calling the approach MAP (Maximum a Posteriori).
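The pseudo-count update can be sanity-checked without scipy. A small sketch (the observed counts are made up) also shows a well-known consequence of the flat Beta(1, 1) prior: the posterior mode coincides with the MLE:

```python
# Beta(A, B) prior treated as pseudo-counts; A = B = 1 is the flat prior.
A, B = 1.0, 1.0
n_successes, n_failures = 7, 3   # hypothetical observed counts

# Posterior is Beta(A + successes, B + failures); its mode is the MAP estimate.
map_estimate = (A + n_successes - 1) / (A + n_successes + B + n_failures - 2)
mle_estimate = n_successes / (n_successes + n_failures)

print(map_estimate, mle_estimate)  # 0.7 0.7 -- identical under the flat prior
```

With a stronger prior (say A = B = 10) the MAP estimate would be pulled toward 0.5, which is exactly the regularizing effect of pseudo-counts.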
# Assignment 2: Deep N-grams

Welcome to the second assignment of course 3. In this assignment you will explore Recurrent Neural Networks (`RNN`s). You will use the fundamentals of Google's [trax](https://github.com/google/trax) package to implement deep learning models.

By completing this assignment, you will learn how to implement models from scratch:
- How to convert a line of text into a tensor
- Create an iterator to feed data to the model
- Define a GRU model using `trax`
- Train the model using `trax`
- Evaluate your model using perplexity
- Predict using your own model

## Outline
- [Overview](#0)
- [Part 1: Importing the Data](#1)
    - [1.1 Loading in the data](#1.1)
    - [1.2 Convert a line to tensor](#1.2)
        - [Exercise 01](#ex01)
    - [1.3 Batch generator](#1.3)
        - [Exercise 02](#ex02)
    - [1.4 Repeating Batch generator](#1.4)
- [Part 2: Defining the GRU model](#2)
    - [Exercise 03](#ex03)
- [Part 3: Training](#3)
    - [3.1 Training the Model](#3.1)
        - [Exercise 04](#ex04)
- [Part 4: Evaluation](#4)
    - [4.1 Evaluating using the deep nets](#4.1)
        - [Exercise 05](#ex05)
- [Part 5: Generating the language with your own model](#5)
- [Summary](#6)

<a name='0'></a>
### Overview

Your task will be to predict the next set of characters using the previous characters. Although this task sounds simple, it is pretty useful.

- You will start by converting a line of text into a tensor.
- Then you will create a generator to feed data into the model.
- You will train a neural network in order to predict the new set of characters of defined length.
- You will use embeddings for each character and feed them as inputs to your model. Many natural language tasks rely on using embeddings for predictions.
- Your model will convert each character to its embedding, run the embeddings through a Gated Recurrent Unit (`GRU`), and run it through a linear layer to predict the next set of characters.
<img src = "model.png" style="width:600px;height:150px;"/>

The figure above gives you a summary of what you are about to implement.
- You will get the embeddings;
- Stack the embeddings on top of each other;
- Run them through two GRU layers;
- Finally, you will compute the softmax.

To predict the next character:
- Use the softmax output and identify the character with the highest probability.
- The character with the highest probability is the prediction for the next character.

```
import os
import trax
import trax.fastmath.numpy as np
import pickle
import numpy
import random as rnd
from trax import fastmath
from trax import layers as tl

# set random seed
trax.supervised.trainer_lib.init_random_number_generators(32)
rnd.seed(32)
```

<a name='1'></a>
# Part 1: Importing the Data

<a name='1.1'></a>
### 1.1 Loading in the data

<img src = "shakespeare.png" style="width:250px;height:250px;"/>

Now import the dataset and do some processing.
- The dataset has one sentence per line.
- You will be doing character generation, so you have to process each sentence by converting each **character** (and not word) to a number.
- You will use the `ord` function to convert a unique character to a unique integer ID.
- Store each line in a list.
- Create a data generator that takes in the `batch_size` and the `max_length`.
- The `max_length` corresponds to the maximum length of the sentence.

```
dirname = 'data/'
lines = [] # storing all the lines in a variable.
for filename in os.listdir(dirname):
    with open(os.path.join(dirname, filename)) as files:
        for line in files:
            # remove leading and trailing whitespace
            pure_line = line.strip()
            # if pure_line is not the empty string,
            if pure_line:
                # append it to the list
                lines.append(pure_line)

n_lines = len(lines)
print(f"Number of lines: {n_lines}")
print(f"Sample line at position 0 {lines[0]}")
print(f"Sample line at position 999 {lines[999]}")
```

Notice that the letters are both uppercase and lowercase.
In order to reduce the complexity of the task, we will convert all characters to lowercase. This way, the model only needs to predict the likelihood that a letter is 'a' and does not need to decide between uppercase 'A' and lowercase 'a'.

```
# go through each line
for i, line in enumerate(lines):
    # convert to all lowercase
    lines[i] = line.lower()

print(f"Number of lines: {n_lines}")
print(f"Sample line at position 0 {lines[0]}")
print(f"Sample line at position 999 {lines[999]}")

eval_lines = lines[-1000:] # Create a holdout validation set
lines = lines[:-1000] # Leave the rest for training

print(f"Number of lines for training: {len(lines)}")
print(f"Number of lines for validation: {len(eval_lines)}")
```

<a name='1.2'></a>
### 1.2 Convert a line to tensor

Now that you have your list of lines, you will convert each character in that list to a number. You can use Python's `ord` function to do it. Given a string representing one Unicode character, the `ord` function returns an integer representing the Unicode code point of that character.

```
# View the unique unicode integer associated with each character
print(f"ord('a'): {ord('a')}")
print(f"ord('b'): {ord('b')}")
print(f"ord('c'): {ord('c')}")
print(f"ord(' '): {ord(' ')}")
print(f"ord('x'): {ord('x')}")
print(f"ord('y'): {ord('y')}")
print(f"ord('z'): {ord('z')}")
print(f"ord('1'): {ord('1')}")
print(f"ord('2'): {ord('2')}")
print(f"ord('3'): {ord('3')}")
```

<a name='ex01'></a>
### Exercise 01

**Instructions:** Write a function that takes in a single line and transforms each character into its unicode integer. This returns a list of integers, which we'll refer to as a tensor.
- Use a special integer to represent the end of the sentence (the end of the line).
- This will be the EOS_int (end of sentence integer) parameter of the function.
- Include the EOS_int as the last integer of the tensor.
- For this exercise, you will use the number `1` to represent the end of a sentence.
```
# UNQ_C1 (UNIQUE CELL IDENTIFIER, DO NOT EDIT)
# GRADED FUNCTION: line_to_tensor
def line_to_tensor(line, EOS_int=1):
    """Turns a line of text into a tensor

    Args:
        line (str): A single line of text.
        EOS_int (int, optional): End-of-sentence integer. Defaults to 1.

    Returns:
        list: a list of integers (unicode values) for the characters in the `line`.
    """
    # Initialize the tensor as an empty list
    tensor = []
    ### START CODE HERE (Replace instances of 'None' with your code) ###
    # for each character:
    for c in line:
        # convert to unicode int
        c_int = ord(c)
        # append the unicode integer to the tensor list
        tensor.append(c_int)

    # include the end-of-sentence integer
    tensor.append(EOS_int)
    ### END CODE HERE ###

    return tensor

# Testing your output
line_to_tensor('abc xyz')
```

##### Expected Output
```CPP
[97, 98, 99, 32, 120, 121, 122, 1]
```

<a name='1.3'></a>
### 1.3 Batch generator

Most of the time in Natural Language Processing, and in AI in general, we use batches when training our models. Here, you will build a data generator that takes in a text and returns a batch of text lines (lines are sentences).
- The generator converts text lines (sentences) into numpy arrays of integers padded by zeros so that all arrays have the same length, which is the length of the longest sentence in the entire data set.

Once you create the generator, you can iterate on it like this:

```
next(data_generator)
```

This generator returns the data in a format that you could directly use in your model when computing the feed-forward pass of your algorithm. This iterator returns a batch of lines and a per-token mask. The batch is a tuple of three parts: inputs, targets, mask. The inputs and targets are identical. The second column will be used to evaluate your predictions. Mask is 1 for non-padding tokens.

<a name='ex02'></a>
### Exercise 02

**Instructions:** Implement the data generator below. Here are some things you will need.

- While True loop: this will yield one batch at a time.
- if index >= num_lines, set index to 0.
- The generator should return shuffled batches of data. To achieve this without modifying the actual lines, a list containing the indexes of `data_lines` is created. This list can be shuffled and used to get random batches every time the index is reset.
- if len(line) < max_length, append line to cur_batch.
    - Note that a line that has length equal to max_length should not be appended to the batch.
    - This is because when converting the characters into a tensor of integers, an additional end of sentence token id will be added.
    - So if max_length is 5, and a line has 4 characters, the tensor representing those 4 characters plus the end of sentence character will be of length 5, which is the max length.
- if len(cur_batch) == batch_size, go over every line, convert it to an int and store it.

**Remember that when calling np you are really calling trax.fastmath.numpy, which is trax's version of numpy that is compatible with JAX. As a result of this, where you used to encounter the type numpy.ndarray now you will find the type jax.interpreters.xla.DeviceArray.**

<details>
<summary>
    <font size="3" color="darkgreen"><b>Hints</b></font>
</summary>
<p>
<ul>
    <li>Use the line_to_tensor function above inside a list comprehension in order to pad lines with zeros.</li>
    <li>Keep in mind that the length of the tensor is always 1 + the length of the original line of characters. Keep this in mind when setting the padding of zeros.</li>
</ul>
</p>
</details>

```
# UNQ_C2 (UNIQUE CELL IDENTIFIER, DO NOT EDIT)
# GRADED FUNCTION: data_generator
def data_generator(batch_size, max_length, data_lines, line_to_tensor=line_to_tensor, shuffle=True):
    """Generator function that yields batches of data

    Args:
        batch_size (int): number of examples (in this case, sentences) per batch.
        max_length (int): maximum length of the output tensor.
            NOTE: max_length includes the end-of-sentence character that will be added
            to the tensor. Keep in mind that the length of the tensor is always
            1 + the length of the original line of characters.
        data_lines (list): list of the sentences to group into batches.
        line_to_tensor (function, optional): function that converts line to tensor. Defaults to line_to_tensor.
        shuffle (bool, optional): True if the generator should generate random batches of data. Defaults to True.

    Yields:
        tuple: two copies of the batch (jax.interpreters.xla.DeviceArray) and mask (jax.interpreters.xla.DeviceArray).
        NOTE: jax.interpreters.xla.DeviceArray is trax's version of numpy.ndarray
    """
    # initialize the index that points to the current position in the lines index array
    index = 0

    # initialize the list that will contain the current batch
    cur_batch = []

    # count the number of lines in data_lines
    num_lines = len(data_lines)

    # create an array with the indexes of data_lines that can be shuffled
    lines_index = [*range(num_lines)]

    # shuffle line indexes if shuffle is set to True
    if shuffle:
        rnd.shuffle(lines_index)

    ### START CODE HERE (Replace instances of 'None' with your code) ###
    while True:
        # if the index is greater than or equal to the number of lines in data_lines
        if index >= num_lines:
            # then reset the index to 0
            index = 0
            # shuffle line indexes if shuffle is set to True
            if shuffle:
                rnd.shuffle(lines_index)

        # get a line at the `lines_index[index]` position in data_lines
        line = data_lines[lines_index[index]]

        # if the length of the line is less than max_length
        if len(line) < max_length:
            # append the line to the current batch
            cur_batch.append(line)

        # increment the index by one
        index += 1

        # if the current batch is now equal to the desired batch size
        if len(cur_batch) == batch_size:
            batch = []
            mask = []

            # go through each line (li) in cur_batch
            for li in cur_batch:
                # convert the line (li) to a tensor of integers
                tensor = line_to_tensor(li)

                # Create a list of zeros to represent the padding
                # so that the tensor plus padding will have length `max_length`
                pad = [0] * (max_length - len(tensor))

                # combine the tensor plus pad
                tensor_pad = tensor + pad

                # append the padded tensor to the batch
                batch.append(tensor_pad)

                # A mask for tensor_pad is 1 wherever tensor_pad is not
                # 0 and 0 wherever tensor_pad is 0, i.e. if tensor_pad is
                # [1, 2, 3, 0, 0, 0] then example_mask should be
                # [1, 1, 1, 0, 0, 0]
                # Hint: Use a list comprehension for this
                example_mask = [0 if val == 0 else 1 for val in tensor_pad]
                mask.append(example_mask)

            # convert the batch (data type list) to a trax's numpy array
            batch_np_arr = np.array(batch)
            mask_np_arr = np.array(mask)
            ### END CODE HERE ###

            # Yield two copies of the batch and mask.
            yield batch_np_arr, batch_np_arr, mask_np_arr

            # reset the current batch to an empty list
            cur_batch = []

# Try out your data generator
tmp_lines = ['12345678901', # length 11
             '123456789',   # length 9
             '234567890',   # length 9
             '345678901']   # length 9

# Get a batch size of 2, max length 10
tmp_data_gen = data_generator(batch_size=2,
                              max_length=10,
                              data_lines=tmp_lines,
                              shuffle=False)

# get one batch
tmp_batch = next(tmp_data_gen)

# view the batch
tmp_batch
```

##### Expected output

```CPP
(DeviceArray([[49, 50, 51, 52, 53, 54, 55, 56, 57,  1],
              [50, 51, 52, 53, 54, 55, 56, 57, 48,  1]], dtype=int32),
 DeviceArray([[49, 50, 51, 52, 53, 54, 55, 56, 57,  1],
              [50, 51, 52, 53, 54, 55, 56, 57, 48,  1]], dtype=int32),
 DeviceArray([[1, 1, 1, 1, 1, 1, 1, 1, 1, 1],
              [1, 1, 1, 1, 1, 1, 1, 1, 1, 1]], dtype=int32))
```

Now that you have your generator, you can just call it and it will return tensors which correspond to your lines in Shakespeare. The first column and the second column are identical. Now you can go ahead and start building your neural network.

<a name='1.4'></a>
### 1.4 Repeating Batch generator

The way the iterator is currently defined, it will keep providing batches forever.
Although it is not needed here, we want to show you the `itertools.cycle` function, which is really useful when the generator eventually stops. Notice that you are expected to use this function within the training function further below.

Usually we want to cycle over the dataset multiple times during training (i.e. train for multiple *epochs*). For small datasets we can use [`itertools.cycle`](https://docs.python.org/3.8/library/itertools.html#itertools.cycle) to achieve this easily.

```
import itertools

infinite_data_generator = itertools.cycle(
    data_generator(batch_size=2, max_length=10, data_lines=tmp_lines))
```

You can see that we can get more than the 4 lines in tmp_lines using this.

```
ten_lines = [next(infinite_data_generator) for _ in range(10)]
print(len(ten_lines))
```

<a name='2'></a>
# Part 2: Defining the GRU model

Now that you have the input and output tensors, you will go ahead and initialize your model. You will be implementing the `GRULM`, gated recurrent unit language model.

To implement this model, you will be using Google's `trax` package. Instead of making you implement the `GRU` from scratch, we will give you the necessary methods from a built-in package. You can use the following packages when constructing the model:

- `tl.Serial`: Combinator that applies layers serially (by function composition). [docs](https://trax-ml.readthedocs.io/en/latest/trax.layers.html#trax.layers.combinators.Serial) / [source code](https://github.com/google/trax/blob/1372b903bb66b0daccee19fd0b1fdf44f659330b/trax/layers/combinators.py#L26)
    - You can pass in the layers as arguments to `Serial`, separated by commas.
    - For example: `tl.Serial(tl.Embedding(...), tl.Mean(...), tl.Dense(...), tl.LogSoftmax(...))`
___
- `tl.ShiftRight`: Allows the model to go right in the feed forward.
[docs](https://trax-ml.readthedocs.io/en/latest/trax.layers.html#trax.layers.attention.ShiftRight) / [source code](https://github.com/google/trax/blob/1372b903bb66b0daccee19fd0b1fdf44f659330b/trax/layers/attention.py#L297)
    - `ShiftRight(n_shifts=1, mode='train')` layer to shift the tensor to the right n_shifts times.
    - Here in the exercise you only need to specify the mode and not worry about n_shifts.
___
- `tl.Embedding`: Initializes the embedding. In this case it is the size of the vocabulary by the dimension of the model. [docs](https://trax-ml.readthedocs.io/en/latest/trax.layers.html#trax.layers.core.Embedding) / [source code](https://github.com/google/trax/blob/1372b903bb66b0daccee19fd0b1fdf44f659330b/trax/layers/core.py#L113)
    - `tl.Embedding(vocab_size, d_feature)`.
    - `vocab_size` is the number of unique words in the given vocabulary.
    - `d_feature` is the number of elements in the word embedding (some choices for a word embedding size range from 150 to 300, for example).
___
- `tl.GRU`: `Trax` GRU layer. [docs](https://trax-ml.readthedocs.io/en/latest/trax.layers.html#trax.layers.rnn.GRU) / [source code](https://github.com/google/trax/blob/1372b903bb66b0daccee19fd0b1fdf44f659330b/trax/layers/rnn.py#L143)
    - `GRU(n_units)` builds a traditional GRU of n_units with dense internal transformations.
    - `GRU` paper: https://arxiv.org/abs/1412.3555
___
- `tl.Dense`: A dense layer. [docs](https://trax-ml.readthedocs.io/en/latest/trax.layers.html#trax.layers.core.Dense) / [source code](https://github.com/google/trax/blob/1372b903bb66b0daccee19fd0b1fdf44f659330b/trax/layers/core.py#L28)
    - `tl.Dense(n_units)`: The parameter `n_units` is the number of units chosen for this dense layer.
___
- `tl.LogSoftmax`: Log of the output probabilities.
[docs](https://trax-ml.readthedocs.io/en/latest/trax.layers.html#trax.layers.core.LogSoftmax) / [source code](https://github.com/google/trax/blob/1372b903bb66b0daccee19fd0b1fdf44f659330b/trax/layers/core.py#L242)
    - Here, you don't need to set any parameters for `LogSoftmax()`.
___

<a name='ex03'></a>
### Exercise 03

**Instructions:** Implement the `GRULM` class below. You should be using all the methods explained above.

```
# UNQ_C3 (UNIQUE CELL IDENTIFIER, DO NOT EDIT)
# GRADED FUNCTION: GRULM
def GRULM(vocab_size=256, d_model=512, n_layers=2, mode='train'):
    """Returns a GRU language model.

    Args:
        vocab_size (int, optional): Size of the vocabulary. Defaults to 256.
        d_model (int, optional): Depth of embedding (n_units in the GRU cell). Defaults to 512.
        n_layers (int, optional): Number of GRU layers. Defaults to 2.
        mode (str, optional): 'train', 'eval' or 'predict', predict mode is for fast inference. Defaults to "train".

    Returns:
        trax.layers.combinators.Serial: A GRU language model as a layer that maps from a tensor of tokens to activations over a vocab set.
    """
    ### START CODE HERE (Replace instances of 'None' with your code) ###
    model = tl.Serial(
        tl.ShiftRight(mode=mode), # Stack the ShiftRight layer
        tl.Embedding(vocab_size=vocab_size, d_feature=d_model), # Stack the embedding layer
        [tl.GRU(n_units=d_model) for _ in range(n_layers)], # Stack GRU layers of d_model units keeping n_layers parameter in mind (use list comprehension syntax)
        tl.Dense(n_units=vocab_size), # Dense layer
        tl.LogSoftmax() # Log Softmax
    )
    ### END CODE HERE ###
    return model

# testing your model
model = GRULM()
print(model)
```

##### Expected output

```CPP
Serial[
  ShiftRight(1)
  Embedding_256_512
  GRU_512
  GRU_512
  Dense_256
  LogSoftmax
]
```

<a name='3'></a>
# Part 3: Training

Now you are going to train your model. As usual, you have to define the cost function and the optimizer, and decide whether you will be training it on a `gpu` or `cpu`. You also have to feed in a built model.
Before going into the training, we re-introduce the `TrainTask` and `EvalTask` abstractions from last week's assignment.

To train a model on a task, Trax defines an abstraction `trax.supervised.training.TrainTask` which packages the train data, loss and optimizer (among other things) together into an object. Similarly, to evaluate a model, Trax defines an abstraction `trax.supervised.training.EvalTask` which packages the eval data and metrics (among other things) into another object.

The final piece tying things together is the `trax.supervised.training.Loop` abstraction that is a very simple and flexible way to put everything together and train the model, all the while evaluating it and saving checkpoints. Using `training.Loop` will save you a lot of code compared to always writing the training loop by hand, like you did in courses 1 and 2. More importantly, you are less likely to have a bug in that code that would ruin your training.

```
batch_size = 32
max_length = 64
```

An `epoch` is traditionally defined as one pass through the dataset. Since the dataset was divided in `batches`, you need several `steps` (gradient evaluations) in order to complete an `epoch`. So the number of examples processed in one `epoch` equals the number of examples in a `batch` times the number of `steps`. In short, in each `epoch` you go over all of the dataset.

The `max_length` variable defines the maximum length of lines to be used in training our data; lines longer than that length are discarded. Below is a function and results that indicate how many lines conform to our criterion of maximum sentence length, and how many `steps` are required in order to cover the entire dataset, which in turn corresponds to an `epoch`.
```
def n_used_lines(lines, max_length):
    '''
    Args:
        lines: all lines of text, as an array of lines
        max_length: max length of a line in order to be considered, an int
    Returns:
        number of effective examples
    '''
    n_lines = 0
    for l in lines:
        if len(l) <= max_length:
            n_lines += 1
    return n_lines

num_used_lines = n_used_lines(lines, 32)
print('Number of used lines from the dataset:', num_used_lines)
print('Batch size (a power of 2):', int(batch_size))
steps_per_epoch = int(num_used_lines/batch_size)
print('Number of steps to cover one epoch:', steps_per_epoch)
```

**Expected output:**

Number of used lines from the dataset: 25881
Batch size (a power of 2): 32
Number of steps to cover one epoch: 808

<a name='3.1'></a>
### 3.1 Training the model

You will now write a function that takes in your model and trains it. To train your model you have to decide how many times you want to iterate over the entire data set.

<a name='ex04'></a>
### Exercise 04

**Instructions:** Implement the `train_model` function below to train the neural network above. Here is a list of things you should do:

- Create a `trax.supervised.training.TrainTask` object; this encapsulates the aspects of the dataset and the problem at hand:
    - labeled_data = the labeled data that we want to *train* on.
    - loss_fn = [tl.CrossEntropyLoss()](https://trax-ml.readthedocs.io/en/latest/trax.layers.html?highlight=CrossEntropyLoss#trax.layers.metrics.CrossEntropyLoss)
    - optimizer = [trax.optimizers.Adam()](https://trax-ml.readthedocs.io/en/latest/trax.optimizers.html?highlight=Adam#trax.optimizers.adam.Adam) with learning rate = 0.0005
- Create a `trax.supervised.training.EvalTask` object; this encapsulates aspects of evaluating the model:
    - labeled_data = the labeled data that we want to *evaluate* on.
    - metrics = [tl.CrossEntropyLoss()](https://trax-ml.readthedocs.io/en/latest/trax.layers.html#trax.layers.metrics.CrossEntropyLoss) and [tl.Accuracy()](https://trax-ml.readthedocs.io/en/latest/trax.layers.html#trax.layers.metrics.Accuracy)
    - how frequently we want to evaluate and checkpoint the model.
- Create a `trax.supervised.training.Loop` object; this encapsulates the following:
    - The previously created `TrainTask` and `EvalTask` objects.
    - the training model = [GRULM](#ex03)
    - optionally the evaluation model, if different from the training model. NOTE: in the presence of Dropout etc. we usually want the evaluation model to behave slightly differently from the training model.

You will be using a cross-entropy loss with the Adam optimizer. Please read the [trax](https://trax-ml.readthedocs.io/en/latest/index.html) documentation to get a full understanding. Make sure you use the number of steps provided as a parameter to train for the desired number of steps.

**NOTE:** Don't forget to wrap the data generator in `itertools.cycle` to iterate over it for multiple epochs.

```
from trax.supervised import training

# UNQ_C4 (UNIQUE CELL IDENTIFIER, DO NOT EDIT)
# GRADED FUNCTION: train_model
def train_model(model, data_generator, batch_size=32, max_length=64, lines=lines,
                eval_lines=eval_lines, n_steps=1, output_dir='model/'):
    """Function that trains the model

    Args:
        model (trax.layers.combinators.Serial): GRU model.
        data_generator (function): Data generator function.
        batch_size (int, optional): Number of lines per batch. Defaults to 32.
        max_length (int, optional): Maximum length allowed for a line to be processed. Defaults to 64.
        lines (list, optional): List of lines to use for training. Defaults to lines.
        eval_lines (list, optional): List of lines to use for evaluation. Defaults to eval_lines.
        n_steps (int, optional): Number of steps to train. Defaults to 1.
        output_dir (str, optional): Relative path of directory to save model. Defaults to "model/".

    Returns:
        trax.supervised.training.Loop: Training loop for the model.
    """
    ### START CODE HERE (Replace instances of 'None' with your code) ###
    bare_train_generator = data_generator(batch_size, max_length, data_lines=lines)
    infinite_train_generator = itertools.cycle(bare_train_generator)

    bare_eval_generator = data_generator(batch_size, max_length, data_lines=eval_lines)
    infinite_eval_generator = itertools.cycle(bare_eval_generator)

    train_task = training.TrainTask(
        labeled_data=infinite_train_generator,    # Use infinite train data generator
        loss_layer=tl.CrossEntropyLoss(),         # Don't forget to instantiate this object
        optimizer=trax.optimizers.Adam(0.0005)    # Don't forget to add the learning rate parameter
    )

    eval_task = training.EvalTask(
        labeled_data=infinite_eval_generator,     # Use infinite eval data generator
        metrics=[tl.CrossEntropyLoss(), tl.Accuracy()],  # Don't forget to instantiate these objects
        n_eval_batches=3                          # For better evaluation accuracy in reasonable time
    )

    training_loop = training.Loop(model,
                                  train_task,
                                  eval_task=eval_task,
                                  output_dir=output_dir)

    training_loop.run(n_steps=n_steps)
    ### END CODE HERE ###

    # We return this because it contains a handle to the model, which has the weights etc.
    return training_loop

# Train the model 1 step and keep the `trax.supervised.training.Loop` object.
training_loop = train_model(GRULM(), data_generator)
```

The model was only trained for 1 step due to the constraints of this environment. Even in a GPU-accelerated environment it would take many hours to achieve a good level of accuracy. For the rest of the assignment you will be using a pretrained model, but now you understand how the training can be done using Trax.

<a name='4'></a>
# Part 4: Evaluation

<a name='4.1'></a>
### 4.1 Evaluating using the deep nets

Now that you have learned how to train a model, you will learn how to evaluate it. To evaluate language models, we usually use perplexity, which is a measure of how well a probability model predicts a sample.
Note that perplexity is defined as:

$$P(W) = \sqrt[N]{\prod_{i=1}^{N} \frac{1}{P(w_i| w_1,...,w_{i-1})}}$$

As an implementation hack, you would usually take the log of that formula (to enable the use of the log-probabilities we get as output of our `RNN`, converting roots into exponents and products into sums, which makes computation simpler and more efficient). You should also take care of the padding: you do not want to include padded positions when calculating the perplexity, because that would make the perplexity measure artificially good.

$$\log P(W) = \log\Big(\sqrt[N]{\prod_{i=1}^{N} \frac{1}{P(w_i| w_1,...,w_{i-1})}}\Big)$$

$$= \log\Big(\prod_{i=1}^{N} \frac{1}{P(w_i| w_1,...,w_{i-1})}\Big)^{\frac{1}{N}}$$

$$= \log\Big(\prod_{i=1}^{N} P(w_i| w_1,...,w_{i-1})\Big)^{-\frac{1}{N}}$$

$$= -\frac{1}{N}\log\Big(\prod_{i=1}^{N} P(w_i| w_1,...,w_{i-1})\Big)$$

$$= -\frac{1}{N}\sum_{i=1}^{N}\log P(w_i| w_1,...,w_{i-1})$$

<a name='ex05'></a>
### Exercise 05

**Instructions:** Write a program that will help evaluate your model. Implementation hack: your program takes in `preds` and `target`. `preds` is a tensor of log probabilities. You can use [`tl.one_hot`](https://github.com/google/trax/blob/22765bb18608d376d8cd660f9865760e4ff489cd/trax/layers/metrics.py#L154) to transform the target into the same dimension. You then multiply them elementwise and sum over the vocabulary axis. You also have to create a mask to keep only the non-padded probabilities. Good luck!
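As a concrete illustration of the masking idea, here is a toy example in plain NumPy with made-up log-probabilities (this is an illustration of the formula, not the graded Trax solution):

```python
import numpy as np

# Toy batch: one line of 4 tokens over a vocabulary of 3 symbols; id 0 is padding.
log_probs = np.log(np.array([
    [0.7, 0.2, 0.1],    # model's distribution at position 0
    [0.1, 0.8, 0.1],    # position 1
    [0.2, 0.2, 0.6],    # position 2
    [0.9, 0.05, 0.05],  # position 3 (padding -- must be ignored)
]))
target = np.array([1, 1, 2, 0])  # token ids; the trailing 0 is padding

# Pick out log P(target_i) at each position (equivalent to one-hot multiply + sum).
token_log_probs = log_probs[np.arange(len(target)), target]
mask = (target != 0).astype(float)   # 1 for real tokens, 0 for padding
log_ppx = -(token_log_probs * mask).sum() / mask.sum()
print(log_ppx)  # mean negative log-likelihood over the 3 real tokens, ~0.781
```

Note how the padded position contributes neither to the sum nor to the denominator `N`.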
<details>
<summary>
    <font size="3" color="darkgreen"><b>Hints</b></font>
</summary>
<p>
<ul>
    <li>To convert the target into the same dimension as the predictions tensor, use <code>tl.one_hot</code> with target and preds.shape[-1].</li>
    <li>You will also need the np.equal function in order to unpad the data and properly compute perplexity.</li>
    <li>Keep in mind while implementing the formula above that <em>w<sub>i</sub></em> represents a letter from our 256-letter alphabet.</li>
</ul>
</p>
</details>

```
# UNQ_C5 (UNIQUE CELL IDENTIFIER, DO NOT EDIT)
# GRADED FUNCTION: test_model
def test_model(preds, target):
    """Function to test the model.

    Args:
        preds (jax.interpreters.xla.DeviceArray): Predictions of a list of batches of tensors corresponding to lines of text.
        target (jax.interpreters.xla.DeviceArray): Actual list of batches of tensors corresponding to lines of text.

    Returns:
        float: log_perplexity of the model.
    """
    ### START CODE HERE (Replace instances of 'None' with your code) ###
    total_log_ppx = np.sum(preds * tl.one_hot(target, preds.shape[-1]), axis=-1)  # HINT: tl.one_hot() should replace one of the Nones

    non_pad = 1.0 - np.equal(target, 0)  # You should check if the target equals 0
    ppx = total_log_ppx * non_pad        # Get rid of the padding

    log_ppx = np.sum(ppx) / np.sum(non_pad)
    ### END CODE HERE ###
    return -log_ppx

# UNQ_C6 (UNIQUE CELL IDENTIFIER, DO NOT EDIT)
# Testing
model = GRULM()
model.init_from_file('model.pkl.gz')
batch = next(data_generator(batch_size, max_length, lines, shuffle=False))
preds = model(batch[0])
log_ppx = test_model(preds, batch[1])
print('The log perplexity and perplexity of your model are respectively', log_ppx, np.exp(log_ppx))
```

**Expected Output:** The log perplexity and perplexity of your model are respectively around 1.9 and 7.2.

<a name='5'></a>
# Part 5: Generating language with your own model

We will now use your own language model to generate new sentences; for that we need to make draws from a Gumbel distribution.
The Gumbel Probability Density Function (PDF) is defined as:

$$f(z) = \frac{1}{\beta}e^{-(z+e^{-z})}$$

where:

$$z = \frac{x - \mu}{\beta}$$

The maximum of a growing sample of random variables with exponential-type tails converges in distribution to the Gumbel distribution. This is why the Gumbel distribution can be used to sample from a categorical distribution: adding independent Gumbel noise to the log-probabilities and taking the maximum, which is what we choose as the prediction at each step of the Recurrent Neural Network (`RNN`) we are using for text generation, is equivalent to drawing from the categorical distribution itself.

```
# Run this cell to generate some new sentences
def gumbel_sample(log_probs, temperature=1.0):
    """Gumbel sampling from a categorical distribution."""
    u = numpy.random.uniform(low=1e-6, high=1.0 - 1e-6, size=log_probs.shape)
    g = -np.log(-np.log(u))
    return np.argmax(log_probs + g * temperature, axis=-1)

def predict(num_chars, prefix):
    inp = [ord(c) for c in prefix]
    result = [c for c in prefix]
    max_len = len(prefix) + num_chars
    for _ in range(num_chars):
        cur_inp = np.array(inp + [0] * (max_len - len(inp)))
        outp = model(cur_inp[None, :])  # Add batch dim.
        next_char = gumbel_sample(outp[0, len(inp)])
        inp += [int(next_char)]
        if inp[-1] == 1:
            break  # EOS
        result.append(chr(int(next_char)))
    return "".join(result)

print(predict(32, ""))
print(predict(32, ""))
print(predict(32, ""))
print(predict(32, ""))
```

In the generated text above, you can see that the model produces text that makes sense, capturing dependencies between words, without any input. A simple n-gram model would not have been able to capture all of that in one sentence.

<a name='6'></a>
### <span style="color:blue"> On statistical methods </span>

Using a statistical method like the one you implemented in course 2 will not give you results that are as good. Your model will not be able to encode information seen previously in the data set and, as a result, the perplexity will increase. Remember from course 2 that the higher the perplexity, the worse your model is.
Furthermore, statistical n-gram models take up too much space and memory, which makes them inefficient and slow. Conversely, with deep nets you can obtain a better perplexity. Note that learning about n-gram language models is still important: it helps you better understand deep nets.
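As an aside, the Gumbel-max trick used in `gumbel_sample` above can be verified empirically: adding independent Gumbel(0, 1) noise to log-probabilities and taking the argmax draws from the underlying categorical distribution. A minimal sketch in plain NumPy, independent of Trax:

```python
import numpy as np

rng = np.random.default_rng(0)
probs = np.array([0.1, 0.6, 0.3])   # target categorical distribution
log_probs = np.log(probs)

# Draw Gumbel(0, 1) noise via the inverse-CDF trick, as in gumbel_sample.
n = 200_000
u = rng.uniform(1e-6, 1.0 - 1e-6, size=(n, 3))
g = -np.log(-np.log(u))

# Argmax of (log-probabilities + Gumbel noise) samples from the categorical.
samples = np.argmax(log_probs + g, axis=1)
freqs = np.bincount(samples, minlength=3) / n
print(freqs)  # empirical frequencies, close to [0.1, 0.6, 0.3]
```

With 200,000 draws, the empirical frequencies match the target probabilities to within about a tenth of a percent.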
# 5. Algorithmic Question

You consult for a personal trainer who has a back-to-back sequence of requests for appointments. A sequence of requests is of the form: 30, 40, 25, 50, 30, 20, where each number is the time that the person who makes the appointment wants to spend. You need to accept some requests; however, you need a break between them, so you cannot accept two consecutive requests. For example, [30, 50, 20] is an acceptable solution (of duration 100), but [30, 40, 50, 20] is not, because 30 and 40 are two consecutive appointments. Your goal is to provide the personal trainer with a schedule that maximizes the total length of the accepted appointments. For example, in the previous instance, the optimal solution is [40, 50, 20], of total duration 110.

-----------------------------------------

1. Write an algorithm that computes the acceptable solution with the longest possible duration.
2. Implement a program that, given in input an instance in the form given above, gives the optimal solution.

The following algorithm is actually the merge of two algorithms:

- Simple comparison of the two possible sub-lists (taking every other number)
- A greedy heuristic

The `app_setter` function checks the input array with these two very simple algorithms and then compares the results. We chose this approach in order to shield the function from the vulnerabilities of the single algorithms, since there are specific cases in which it is possible to demonstrate the ineffectiveness of each of the two.
Nevertheless, this cross-checked result avoids many problems from this point of view.

```
def app_setter(A):
    l1, l2, l3, B, t = [], [], [], A.copy(), 0
    try:
        # simple comparison of the two every-other lists
        for i in range(0, len(A), 2):
            l1.append(A[i])
        for i in range(1, len(A), 2):
            l2.append(A[i])
    except IndexError:
        pass
    while t < len(B)/2:  # greedy
        m = max(B)
        try:
            l3.append(m)
        except:
            pass
        try:
            B[B.index(m)+1] = 0
            B[B.index(m)-1] = 0
            B[B.index(m)] = 0
            B.remove(0)
        except IndexError:
            pass
        t += 1
    if sum(l1) >= sum(l2) and sum(l1) >= sum(l3):
        return l1
    if sum(l2) >= sum(l1) and sum(l2) >= sum(l3):
        return l2
    if sum(l3) >= sum(l1) and sum(l3) >= sum(l2):
        return l3

app_setter([10, 50, 10, 50, 10, 50, 150, 120])

# [10, 50, 10, 50, 10, 50, 150, 120]
# [150, 50, 50] = 250      ---> greedy algorithm
# [50, 50, 50, 120] = 270  ---> simple every other (i+1)
# [10, 10, 10, 150] = 180  ---> simple every other (i)
```
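For completeness, this maximization is the classic "maximum sum of non-adjacent elements" problem, which is solvable exactly in linear time with dynamic programming. The sketch below (not part of the original submission) always finds the optimum, unlike the merged heuristics above:

```python
def best_schedule(times):
    """Return (accepted appointments, total duration) with no two consecutive requests."""
    n = len(times)
    dp = [0] * (n + 1)  # dp[i] = best total duration using the first i requests
    if n:
        dp[1] = times[0]
    for i in range(2, n + 1):
        # Either skip request i, or accept it and skip request i-1.
        dp[i] = max(dp[i - 1], dp[i - 2] + times[i - 1])
    # Walk back through dp to recover which requests were accepted.
    chosen, i = [], n
    while i >= 1:
        if dp[i] == dp[i - 1]:           # request i was skipped
            i -= 1
        else:                            # request i was accepted, so i-1 was not
            chosen.append(times[i - 1])
            i -= 2
    chosen.reverse()
    return chosen, dp[n]

print(best_schedule([30, 40, 25, 50, 30, 20]))          # ([40, 50, 20], 110)
print(best_schedule([10, 50, 10, 50, 10, 50, 150, 120]))  # ([50, 50, 50, 120], 270)
```

On the instance shown above, the optimum of 270 coincides with the "simple every other (i+1)" sub-list, confirming that comparison; on the first instance it recovers the stated optimal schedule [40, 50, 20].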
## Import

### Common

```
import boot_utes as bu; bu.reload(bu); from boot_utes import (reload, add_path, path)
add_path('../src/', '~/repos/myutils/')

from collections import defaultdict, Counter, OrderedDict
from functools import partial
from itertools import count
from operator import itemgetter as itg
from pprint import pprint
import os, sys, itertools as it, simplejson, time
from os.path import join
from glob import glob
```

### Libs

```
# from gensim.models import Word2Vec
import feather
import numpy as np
import numpy.random as nr
import pandas as pd
from pandas import DataFrame, Series
from pandas.compat import lmap, lrange, lfilter, lzip
import toolz.curried as z
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline

pd.options.mode.use_inf_as_na = True

# %mkdir cache
import joblib; mem = joblib.Memory(location='cache')

dbx = '/Users/wbeard/miniconda3/envs/tmpdecomp2/bin/databricks'

def norm_rows(df):
    return df.div(df.sum(axis=1), axis=0)

from os.path import join
import re

def get_spk_parq_fn(dir, ext='parquet'):
    s3parq_fs = !aws s3 ls $dir/
    # print(dir)
    # print(s3parq_fs)
    spk_parq_re = re.compile(r'part.+?\.{}'.format(ext))
    fn_all = spk_parq_re.findall(s3parq_fs[-1])
    if not len(fn_all) == 1:
        print('Uh oh: fn_all=', fn_all)
    [fn] = fn_all
    return fn

def show_s3_opts(*dir_elems):
    dir = join(*dir_elems)
    if not dir.endswith('/'):
        dir += '/'
    res = !aws s3 ls $dir
    return res

def mv_s3_pq(from_dir='', to_dir='data/pings', fn='pings_0125.parq', s3base=None, ext='parquet'):
    s3base = s3base or 's3://mozilla-databricks-telemetry-test/wbeard/'
    s3dir = join(s3base, from_dir, fn)
    try:
        pqn_ = get_spk_parq_fn(s3dir, ext=ext)
        print('Found pq file', pqn_)
    except IndexError:
        print('ERROR: File {} not found'.format(s3dir))
        opts = show_s3_opts(s3base, from_dir)
        print('\nPossible options:\n\t', '\n\t'.join(opts))
        return
    s3loc = join(s3base, from_dir, fn, pqn_)
    to_loc = join(to_dir, fn)
    # print(s3loc)
    # print(to_loc)
    !aws s3 cp $s3loc $to_loc
    return to_loc
```

### Special

```
# import utils.seq_utils as su; reload(su)
# import utils.events_utils as eu; reload(eu)
# import utils.feather_counter as fc; reload(fc)
# import utils.mutes as mt; reload(mt)
import myutils as mu
import functoolz as fz; reload(fz)
import numba_utils as nu; reload(nu)

mt.set_mem(mem=mem, json=simplejson)
su.set_memo(memoizer=mt.json_memo())

def move_new(dbx_dir, outdir=None, outdir_base='../data/raw/'):
    full_dbx_dir = join("dbfs:/wbeard/", dbx_dir)
    outdir = join(outdir_base, dbx_dir) if outdir is None else outdir
    dbx_fs = !$dbx fs ls $full_dbx_dir
    written = os.listdir(outdir)
    tobewritten = [f for f in dbx_fs if f not in written]
    for f in tobewritten:
        dbx_fn = join(full_dbx_dir, f)
        print('{} => {}'.format(dbx_fn, outdir))
        !$dbx fs cp $dbx_fn $outdir
    return tobewritten, written

def keep_moving(dbx_dir, outdir=None, outdir_base='../data/raw/', maxn=5):
    while 1:
        time.sleep(3)
        tobewritten, written = move_new(dbx_dir, outdir=outdir, outdir_base=outdir_base)
        if maxn and (len(tobewritten) + len(written) >= maxn):
            return
        print('.', end='')

vc = lambda x: Series(x).value_counts(normalize=0).sort_index()

!aws s3 sync 's3://mozilla-databricks-telemetry-test/wbeard/crash_pings/p_201801' data/p29_0202
```

## Win_proc

```
ls ../../win_proc/data/raw/

local_dir = '../../win_proc/data/raw/'
base = 's3://net-mozaws-prod-us-west-2-pipeline-analysis/wbeard/wl/'
# base = 'dbfs:/wbeard/apb/'
pqdir = 'till_0207.pq'
# pqdir = 'abc'
# , ext='json'
mv_s3_pq(from_dir='', to_dir=local_dir, fn=pqdir, s3base=base)

json_file = join(base, pqdir)
print(json_file)
!aws s3 cp "$json_file" "$local_dir"
!aws s3 ls "$json_file/"
```
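The `part.+?\.parquet` pattern in `get_spk_parq_fn` above relies on Spark's convention of naming output part files `part-...`. Its behavior can be seen on a canned listing line (the file name and sizes below are fabricated, in the style of `aws s3 ls` output):

```python
import re

# A fabricated line in the style of `aws s3 ls` output for a Spark-written dataset.
listing = '2019-02-07 12:00:00   1048576 part-00000-tid-123abc-c000.snappy.parquet'

# Same pattern construction as in get_spk_parq_fn: lazy match from 'part'
# up to the first occurrence of the literal extension.
spk_parq_re = re.compile(r'part.+?\.{}'.format('parquet'))
matches = spk_parq_re.findall(listing)
print(matches)  # the full part-file name, up to and including '.parquet'
```

Because the match is lazy, it stops at the first place the literal `.parquet` can complete the pattern, which for Spark's naming scheme is the end of the file name.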
```
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer
import os
from nltk.tokenize import sent_tokenize
import pandas as pd
from wordcloud import WordCloud
from PIL import Image
import matplotlib.pyplot as plt
import numpy as np
import random
from nltk.corpus import stopwords
from data import reduce_genres
import config
import tfidf2 as tfidf

os.getcwd()
os.listdir(config.dataset_dir)

def make_lex_dict(lexicon_file):
    """ Convert lexicon file to a dictionary """
    lex_dict = {}
    for line in lexicon_file.split('\n'):
        (word, measure) = line.strip().split('\t')[0:2]
        lex_dict[word] = float(measure)
    return lex_dict

sent_dict = make_lex_dict(open('/Users/Tristan/books/src/' + 'vader_lexicon.txt', 'r').read())
```

Sentiment analysis. Analysis is performed per sentence, and the per-sentence scores are kept in lists; each book's sentiment scores are then calculated by averaging the scores over all of its sentences.

```
def return_sentiment_scores(sentence):
    # return just the sentiment scores
    snt = analyser.polarity_scores(sentence)
    return snt

def sentiment_analysis(directory):
    analyser = SentimentIntensityAnalyzer()
    # returns the sentiment of every book in the directory
    data = pd.read_csv(config.dataset_dir + 'output/final_data.csv', index_col=0)
    print(len(data.index))
    # max_amt = len(data.index) + 2
    # print(data.index, len(os.listdir(directory)))
    pos_list = []
    neg_list = []
    neu_list = []
    comp_list = []
    # for every book
    for filename in data['filename']:  # [:max_amt]
        sub_pos_list = []
        sub_neg_list = []
        sub_neu_list = []
        sub_comp_list = []
        # if file is a textfile
        if filename.endswith(".txt"):
            text = open(os.path.join(directory, filename), 'r', errors='replace')
            # for every line in the text
            for line in text.readlines():
                scores = return_sentiment_scores(line)
                # save sentiment scores
                sub_neg_list.append(scores['neg'])
                sub_neu_list.append(scores['neu'])
                sub_pos_list.append(scores['pos'])
                sub_comp_list.append(scores['compound'])
            # then save average sentiment scores for each book
            neg_list.append(sum(sub_neg_list) / float(len(sub_neg_list)))
            pos_list.append(sum(sub_pos_list) / float(len(sub_pos_list)))
            neu_list.append(sum(sub_neu_list) / float(len(sub_neu_list)))
            comp_list.append(sum(sub_comp_list) / float(len(sub_comp_list)))
    # convert scores to pandas-compatible Series
    neg = pd.Series(neg_list)
    pos = pd.Series(pos_list)
    neu = pd.Series(neu_list)
    com = pd.Series(comp_list)
    print(len(neg), len(pos), len(neu), len(com))
    # fill the right columns with the right data
    print(type(data), 'type')
    print(neg)
    data['neg score'] = neg.values
    data['pos score'] = pos.values
    data['neu score'] = neu.values
    data['comp score'] = com.values
    data.to_csv(config.dataset_dir + 'output/final_data.csv')
    return data

analyser = SentimentIntensityAnalyzer()
sentiment_analysis(config.dataset_dir + 'bookdatabase/books/')
```

We also want to count the number of positive and negative words as features, and we create a new file for each book containing only its sentiment words. As a result, we will be able to run tf-idf on these files later and create word clouds per genre.
```
def count_sentiment_words(directory):
    pos_list = []
    neg_list = []
    data = pd.read_csv(config.dataset_dir + 'output/final_data.csv', index_col=0)
    for filename in data['filename']:
        sent_words_list = []
        pos_count = 0
        neg_count = 0
        if filename.endswith(".txt"):
            text = open(os.path.join(directory, filename), 'r', errors='replace')
            sentiment_file = open(config.dataset_dir + 'output/sentiment_word_texts/' + filename, 'w')
            for line in text.readlines():
                for word in line.split(" "):
                    if word in sent_dict:
                        if sent_dict[word] >= 0:
                            pos_count += 1
                            sent_words_list.append(word)
                            sentiment_file.write("%s" % word)
                            sentiment_file.write(" ")
                        else:
                            neg_count += 1
                            sentiment_file.write("%s" % word)
                            sentiment_file.write(" ")
        pos_list.append(pos_count)
        neg_list.append(neg_count)
    data['amt pos'] = pos_list
    data['amt neg'] = neg_list
    data.to_csv(config.dataset_dir + 'output/final_data.csv')
    return data

count_sentiment_words(config.dataset_dir + 'bookdatabase/books/')

import pandas as pd

def read_unique_genres():
    genres_file = open(config.dataset_dir + 'unique_genres.txt', 'r')
    return [genre.strip('\n') for genre in genres_file.readlines()]

def create_wordcloud(scores, genre):
    font_path = config.dataset_dir + 'Open_Sans_Condensed/OpenSansCondensed-Light.ttf'
    stopWords = set(stopwords.words('english'))
    try:
        w = WordCloud(stopwords=stopWords, background_color='white', min_font_size=14,
                      font_path=font_path, width=1000, height=500,
                      relative_scaling=1, normalize_plurals=False)
        wordcloud = w.generate_from_frequencies(scores)
        wordcloud.recolor(color_func=grey_color_func)
    except ZeroDivisionError:
        print('shit')
        return
    plt.figure(figsize=(15, 8))
    plt.imshow(wordcloud, interpolation='bilinear')
    plt.axis("off")
    plt.savefig(config.dataset_dir + 'output/wordclouds/' + genre + '.png')
    plt.close()

def grey_color_func(word, font_size, position, orientation, random_state=None, **kwargs):
    return "hsl(0, 0%%, %d%%)" % random.randint(10, 50)

def tfidf_per_genre(plot_wc=False):
    data = pd.read_csv(config.dataset_dir + 'output/final_data.csv')
    genres_file = open(config.dataset_dir + 'unique_genres.txt', 'r')
    pre_genre_list = [genre.strip('\n') for genre in genres_file.readlines()]
    directory = config.dataset_dir + 'output/sentiment_word_texts/'
    doc_list = []
    genre_list = reduce_genres(pre_genre_list)
    print(pre_genre_list)
    print(genre_list)
    # create a list of lists containing all tokens contained in the text of a certain genre
    for genre in genre_list:
        book_list = []
        genre = genre.replace('/', ' ')
        books_of_genre = data.loc[data['genre'] == genre]
        for book in books_of_genre['filename']:
            book_list.append(book)
        genre_document = tfidf.genre_document(book_list, directory)
        doc_list.append(genre_document)
    # create index
    index = tfidf.create_index(genre_list, doc_list)
    # create tf_matrix
    tf_matrix = tfidf.create_tf_matrix(genre_list, doc_list)
    genre_score_dicts = {}  # collect per-genre scores so the cells below can use the return value
    # create scores for each genre
    for genre, document in zip(genre_list, doc_list):
        genre = genre.replace('/', ' ')
        score_dict = {}
        document = set(document)
        try:
            for term in document:
                score = tfidf.tfidf(term, genre, doc_list, index, tf_matrix)
                score_dict[term] = score
            scores_file = open(config.dataset_dir + 'output/top200_per_genre/' + genre + '.txt', 'w')
            for w in sorted(score_dict, key=score_dict.get, reverse=True):
                scores_file.write('%s\n' % w)  # fixed: was '/n'
            scores_file.close()
            print('success')
            genre_score_dicts[genre] = score_dict
            if plot_wc:
                font_path = config.dataset_dir + 'Open_Sans_Condensed/OpenSansCondensed-Light.ttf'
                create_wordcloud(score_dict, genre)
        except ZeroDivisionError:
            print('reaallly')
            continue
        except ValueError:
            continue
    return genre_score_dicts  # added: the original returned None, but the cells below index the result

tfidf_dict_per_genre = tfidf_per_genre(plot_wc=True)

list(tfidf_dict_per_genre.keys())[:4]
len(list(tfidf_dict_per_genre.keys()))
len(tfidf_dict_per_genre['Diary and Novel'])

# may differ per genre
n_words_per_genre = 100
sample = tfidf_dict_per_genre['War']
i = list(sample.keys())[-1]
sample[i]
max(list(sample.values()))
sample
```

## Generate labels file

```
import pandas, os
import data, config
from utils import io

info = pandas.read_csv(config.info_file)
book_list = os.listdir(config.sentiment_words_dir)[:]
labels = data.extract_genres(info, book_list)
labels
io.save_dict_to_csv(config.dataset_dir, 'labels', labels)
```

# (old) Choose the most important words to be kept in the feature vector

```
from wordcloud import WordCloud
import matplotlib.pyplot as plt
import data, config, tfidf

directory = config.dataset_dir + 'output/sentiment_word_texts'
book_list = os.listdir(directory)
book_list = book_list[:20]

index = tfidf.create_index(directory, book_list)
tf_matrix = tfidf.create_tf_matrix(directory, book_list)
tfidf_dict = tfidf.perform_tfidf(directory, book_list, index, tf_matrix)

# (optional) show the result
w = WordCloud(background_color='white', width=900, height=500, max_words=1628,
              relative_scaling=1, normalize_plurals=False)
wordcloud = w.generate_from_frequencies(tfidf_dict)
plt.imshow(wordcloud, interpolation='bilinear')
plt.axis("off")
# plt.savefig(config.dataset_dir + 'output/wordclouds/' + genre + '.png')
# tfidf_dict_per_genre = wordcloud_per_genre()
```
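The scoring above delegates to the project's `tfidf2` module, whose implementation is not shown here. As a rough idea of what a tf-idf score computes, here is a generic self-contained sketch (a standard formulation, not necessarily the module's exact formula):

```python
import math

def tf(term, doc):
    # term frequency: share of the document's tokens that are this term
    return doc.count(term) / len(doc)

def idf(term, docs):
    # inverse document frequency: rarer terms across the corpus score higher
    n_containing = sum(term in doc for doc in docs)
    return math.log(len(docs) / n_containing)

def tfidf_score(term, doc, docs):
    return tf(term, doc) * idf(term, docs)

# Each "document" here plays the role of one genre's token list.
docs = [['war', 'peace', 'war'], ['love', 'peace']]
score = tfidf_score('war', docs[0], docs)
print(score)  # 2/3 * ln(2), ~0.462: frequent in doc 0, absent from doc 1
```

A term that appears in every document (like 'peace' here) gets idf of 0 and so a tf-idf score of 0, which is why the per-genre top-word lists favor genre-specific vocabulary.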
```
import pickle
import pandas as pd
import re
import nltk
from nltk.corpus import stopwords
from nltk.stem import WordNetLemmatizer
import numpy as np
import bcolz
import unicodedata
import torch
import torch.nn as nn
import torch.nn.functional as F
import time
import torch.optim as optim
import matplotlib.pyplot as plt
plt.switch_backend('agg')
import matplotlib.ticker as ticker
import random
```

# Preprocessing the text data

```
def unicodeToAscii(s):
    return ''.join(
        c for c in unicodedata.normalize('NFD', s)
        if unicodedata.category(c) != 'Mn'
    )

def normalizeString(s):
    s = unicodeToAscii(s.lower().strip())
    s = s.replace("'", "")
    s = re.sub(r"([.!?])", r" \1", s)
    s = re.sub(r"[^a-zA-Z.!?]+", r" ", s)
    return s

def preprocess(df):
    nrows = len(df)
    df['Content_Parsed_1'] = df['transcription']
    for row in range(0, nrows):
        # Save the normalized text of each transcription
        text = df.loc[row]['transcription']
        text = normalizeString(text)
        df.loc[row]['Content_Parsed_1'] = text
    df['action'] = df['action'].str.lower()
    df['object'] = df['object'].str.lower()
    df['location'] = df['location'].str.lower()

nltk.download('wordnet')

def lemmatize(df):
    wordnet_lemmatizer = WordNetLemmatizer()
    # Lemmatizing the content
    nrows = len(df)
    lemmatized_text_list = []
    for row in range(0, nrows):
        # Create an empty list containing lemmatized words
        lemmatized_list = []
        # Save the text and its words into an object
        text = df.loc[row]['Content_Parsed_1']
        text_words = text.split(" ")
        # Iterate through every word to lemmatize
        for word in text_words:
            lemmatized_list.append(wordnet_lemmatizer.lemmatize(word, pos="v"))
        # Join the list
        lemmatized_text = " ".join(lemmatized_list)
        # Append to the list containing the texts
        lemmatized_text_list.append(lemmatized_text)
    df['Content_Parsed_2'] = lemmatized_text_list

path_df = "E:/saarthi/task_data/train_data.csv"
with open(path_df, 'rb') as data:
    df = pd.read_csv(data)

path_df_val = "E:/saarthi/task_data/valid_data.csv"
with open(path_df_val, 'rb') as data:  # fixed: was re-reading path_df (the training data)
    df_val = pd.read_csv(data)

preprocess(df_val)
lemmatize(df_val)
preprocess(df)
lemmatize(df)
```

# Getting Glove word embeddings

```
glove_path = "E:"
vectors = bcolz.open(f'{glove_path}/6B.50.dat')[:]
words = pickle.load(open(f'{glove_path}/6B.50_words.pkl', 'rb'))
word2idx = pickle.load(open(f'{glove_path}/6B.50_idx.pkl', 'rb'))
glove = {w: vectors[word2idx[w]] for w in words}

# Build the target vocabulary from every text column of both the train and
# validation frames. (The original cell reset target_vocab halfway through
# and always read from df, even when iterating over the length of df_val;
# both slips are fixed here.)
target_vocab = []
for frame in (df, df_val):
    for column in ('Content_Parsed_2', 'action', 'object', 'location'):
        for text in frame[column]:
            for word in text.split(" "):
                if word not in target_vocab:
                    target_vocab.append(word)
```

# Creating an embedding matrix

```
vocab_size = len(target_vocab)
input_size = 50
embedding_matrix = torch.zeros((vocab_size, input_size))
# Note: word_to_idx is defined in the next cell, so run that cell first.
for w in target_vocab:
    i = word_to_idx(w)
    embedding_matrix[i, :] = torch.from_numpy(glove[w]).float()
```

# Defining utility functions

```
def word_to_idx(word):
    for i, w in enumerate(target_vocab):
        if w == word:
            return i
    return -1

def sentence_to_matrix(sentence):
    words = sentence.split(" ")
    n = len(words)
    m = torch.zeros((n, input_size))
    for i, w in enumerate(words):
        m[i] = embedding_matrix[word_to_idx(w)]
    return m

def sentence_to_index(sentence):
    w = sentence.split(" ")
    l = []
    for word in w:
        l.append(word_to_idx(word))
    t = torch.tensor(l, dtype=torch.float32)
    return t

output_size = len(target_vocab)
input_size = 50
hidden_size = 50

def showPlot(points):
    plt.figure()
    fig, ax = plt.subplots()
    # this locator puts ticks at regular intervals
    loc = ticker.MultipleLocator(base=0.2)
    ax.yaxis.set_major_locator(loc)
    plt.plot(points)

import time
import math

def asMinutes(s):
    m = math.floor(s / 60)
    s -= m * 60
    return '%dm %ds' % (m, s)

def timeSince(since, percent):
    now = time.time()
    s = now - since
    es = s / (percent)
    rs = es - s
    return '%s (- %s)' % (asMinutes(s), asMinutes(rs))
```

# Creating the Networks

```
class EncoderRNN(nn.Module):
    def __init__(self, input_size, hidden_size):
        super(EncoderRNN, self).__init__()
        self.hidden_size = hidden_size
        self.gru = nn.GRU(input_size, hidden_size)

    def forward(self, x, hidden):
        x = x.unsqueeze(0)
        output, hidden = self.gru(x, hidden)
        return output, hidden

    def initHidden(self):
        return torch.zeros(1, 1, self.hidden_size, device=device)

s = "turn down the bathroom temperature"
device = 'cuda:0' if torch.cuda.is_available() else 'cpu'
matrix = sentence_to_matrix(s)
print(matrix[0].unsqueeze(0).shape)
encoder = EncoderRNN(input_size, hidden_size)
hidden = encoder.initHidden()
for i in range(matrix.shape[0]):
    out, hidden = encoder(matrix[i].unsqueeze(0), hidden)

print(out.shape)
print(hidden.shape)

class DecoderRNN(nn.Module):
    def __init__(self, hidden_size, output_size):
        super(DecoderRNN, self).__init__()
        self.hidden_size = hidden_size
        self.gru = nn.GRU(hidden_size, hidden_size)
        self.out = nn.Linear(hidden_size, output_size)
        self.softmax = nn.LogSoftmax(dim=1)

    def forward(self, x, hidden):
        output = F.relu(x)
        output, hidden = self.gru(output, hidden)
        output_softmax = self.softmax(self.out(output[0]))
        return output, hidden, output_softmax

    def initHidden(self):
        return torch.zeros(1, 1, self.hidden_size, device=device)

decoder_hidden = hidden
decoder_input = torch.ones((1, 1, 50))
decoder = DecoderRNN(hidden_size, output_size)
output_sentence = df.loc[3]["action"] + " " + df.loc[3]["object"] + " " + df.loc[3]["location"]
print(output_sentence)
target_tensor = sentence_to_index(output_sentence)
criterion = nn.NLLLoss()
loss = 0
for i in range(target_tensor.shape[0]):
    decoder_input, decoder_hidden, decoder_output_softmax = decoder(decoder_input, decoder_hidden)
    loss += criterion(decoder_output_softmax, target_tensor[i].unsqueeze(0).long())
    print(torch.argmax(decoder_output_softmax, dim=1))
```

# Training the networks

```
def train(input_tensor, target_tensor, encoder, decoder, encoder_optimizer, decoder_optimizer, criterion):
    encoder_hidden = encoder.initHidden()
    encoder_optimizer.zero_grad()
    decoder_optimizer.zero_grad()
    input_length = input_tensor.size(0)
    target_length = target_tensor.size(0)
    loss = 0
    for ei in range(input_length):
        encoder_output, encoder_hidden = encoder(input_tensor[ei].unsqueeze(0), encoder_hidden)
    decoder_input = torch.ones((1, 1, 50))
    decoder_hidden = encoder_hidden
    for i in range(target_tensor.shape[0]):
        decoder_input, decoder_hidden, decoder_output_softmax = decoder(decoder_input, decoder_hidden)
        loss += criterion(decoder_output_softmax, target_tensor[i].unsqueeze(0).long())
    loss.backward()
    encoder_optimizer.step()
    decoder_optimizer.step()
    return loss.item() / target_length

def trainIters(encoder, decoder,
n_iters, df, print_every=1000, plot_every=100, learning_rate=0.01): start = time.time() plot_losses = [] print_loss_total = 0 # Reset every print_every plot_loss_total = 0 # Reset every plot_every encoder_optimizer = optim.SGD(encoder.parameters(), lr=learning_rate) decoder_optimizer = optim.SGD(decoder.parameters(), lr=learning_rate) criterion = nn.NLLLoss() nrows = len(df) for iter in range(1, n_iters + 1): i = random.randint(0, n_iters) i = (i % nrows) s = df.loc[i]["Content_Parsed_2"] input_tensor = sentence_to_matrix(s) output_sentence = df.loc[i]["action"] + " "+ df.loc[i]["object"] + " " + df.loc[i]["location"] target_tensor = sentence_to_index(output_sentence) loss = train(input_tensor, target_tensor, encoder, decoder, encoder_optimizer, decoder_optimizer, criterion) print_loss_total += loss plot_loss_total += loss if iter % print_every == 0: print_loss_avg = print_loss_total / print_every print_loss_total = 0 print('%s (%d %d%%) %.4f' % (timeSince(start, iter / n_iters), iter, iter / n_iters * 100, print_loss_avg)) if iter % plot_every == 0: plot_loss_avg = plot_loss_total / plot_every plot_losses.append(plot_loss_avg) plot_loss_total = 0 showPlot(plot_losses) def predict(encoder, decoder, input_sentence): encoder_hidden = encoder.initHidden() input_tensor = sentence_to_matrix(input_sentence) decoder_input = torch.ones((1,1,50)) input_length = input_tensor.size(0) for ei in range(input_length): encoder_output, encoder_hidden = encoder(input_tensor[ei].unsqueeze(0), encoder_hidden) decoder_hidden = encoder_hidden for i in range(3): decoder_input, decoder_hidden, decoder_output_softmax = decoder(decoder_input, decoder_hidden) idx = torch.argmax(decoder_output_softmax) print(target_vocab[idx]) def evaluate(encoder, decoder, input_sentence, target_tensor): encoder_hidden = encoder.initHidden() input_tensor = sentence_to_matrix(input_sentence) decoder_input = torch.ones((1,1,50)) input_length = input_tensor.size(0) for ei in range(input_length): encoder_output, 
encoder_hidden = encoder(input_tensor[ei].unsqueeze(0), encoder_hidden)
    decoder_hidden = encoder_hidden
    correct = 0
    for i in range(3):
        decoder_input, decoder_hidden, decoder_output_softmax = decoder(decoder_input, decoder_hidden)
        idx = torch.argmax(decoder_output_softmax)
        if idx == target_tensor[i]:
            correct += 1
    return 1 if correct == 3 else 0

encoder = EncoderRNN(input_size, hidden_size).to(device)
# Bug fix: the decoder must live on the same device as the encoder, otherwise
# training fails with a device mismatch whenever CUDA is available. For GPU
# runs, the decoder_input tensors created with torch.ones((1, 1, 50)) above
# would also need device=device.
decoder = DecoderRNN(hidden_size, output_size).to(device)
trainIters(encoder, decoder, 150000, df)
```
# Evaluating the model
```
n = len(df_val)
total = 0
correct = 0
for i in range(n):
    output_sentence = df_val.loc[i]["action"] + " " + df_val.loc[i]["object"] + " " + df_val.loc[i]["location"]
    target_tensor = sentence_to_index(output_sentence)
    input_sentence = df_val.loc[i]["Content_Parsed_2"]
    correct += evaluate(encoder, decoder, input_sentence, target_tensor)
    total += 1
print(correct)
print(total)
print(f"Accuracy on the validation set: {(float(correct) / total) * 100}")
```
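A side note on the utility functions: `word_to_idx` scans `target_vocab` linearly for every token, which makes `sentence_to_matrix` O(V) per word. The sketch below is not part of the original notebook — the vocabulary here is a stand-in — but shows a dictionary-based lookup that returns the same indices in O(1):

```python
# Hypothetical speed-up: build the word-to-index dict once instead of
# scanning the vocabulary list for every lookup.
vocab = ["turn", "down", "the", "bathroom", "temperature"]  # stand-in vocabulary

word2index = {w: i for i, w in enumerate(vocab)}

def word_to_idx_fast(word):
    # -1 mirrors the sentinel the notebook's word_to_idx returns for unknown words
    return word2index.get(word, -1)

print(word_to_idx_fast("bathroom"))  # 3 with this stand-in vocabulary
print(word_to_idx_fast("kitchen"))   # -1: not in the vocabulary
```

For a vocabulary of a few thousand words this turns the embedding-matrix and tensorization cells from minutes into seconds.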
<a href="https://colab.research.google.com/github/jantic/DeOldify/blob/master/DeOldify_colab.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>

# DeOldify on Colab

This notebook shows how to get your own version of [DeOldify](https://github.com/jantic/DeOldify) working on Google Colab. A lot of the initial steps are just installs, but these are also the steps that can make running the model a tedious exercise: one must `pip install` a few dependencies, then use `wget` to download the appropriate picture data.

NECESSARY PRELIMINARY STEP: Please make sure you have gone to the "Runtime" menu above and used "Change Runtime Type" to select Python 3 and GPU.

I hope you have fun, and thanks to Jason Antic for this awesome tool! -Matt Robinson, <matthew67robinson@gmail.com>

NEW: You can now load files from your own Google Drive; check the last cell of the notebook for more information.

```
!git clone https://github.com/jantic/DeOldify.git DeOldify

from os import path
from wheel.pep425tags import get_abbr_impl, get_impl_ver, get_abi_tag

platform = '{}{}-{}'.format(get_abbr_impl(), get_impl_ver(), get_abi_tag())
accelerator = 'cu80' if path.exists('/opt/bin/nvidia-smi') else 'cpu'

!pip install -q http://download.pytorch.org/whl/{accelerator}/torch-0.4.1-{platform}-linux_x86_64.whl torchvision

import torch
print(torch.__version__)
print(torch.cuda.is_available())

cd DeOldify
!pip install -e .

%matplotlib inline
%reload_ext autoreload
%autoreload 2

# Doing work so I can access data from my google drive
!pip install PyDrive

# Work around Pillow being preinstalled on these Colab VMs, causing conflicts otherwise.
!pip install Pillow==4.1.1 import os from pydrive.auth import GoogleAuth from pydrive.drive import GoogleDrive from google.colab import auth from oauth2client.client import GoogleCredentials import multiprocessing from torch import autograd from fastai.transforms import TfmType from fasterai.transforms import * from fastai.conv_learner import * from fasterai.images import * from fasterai.dataset import * from fasterai.visualize import * from fasterai.callbacks import * from fasterai.loss import * from fasterai.modules import * from fasterai.training import * from fasterai.generators import * from fastai.torch_imports import * from fasterai.filters import * from pathlib import Path from itertools import repeat from google.colab import drive from IPython.display import Image import tensorboardX torch.cuda.set_device(0) plt.style.use('dark_background') torch.backends.cudnn.benchmark=True auth.authenticate_user() gauth = GoogleAuth() gauth.credentials = GoogleCredentials.get_application_default() drive = GoogleDrive(gauth) ``` Note that the above requires a verification step. It isn't too bad. ``` # Now download the pretrained weights, which I have saved to my google drive # note that the id is the ending part of the shareable link url (after open?id=) # The pretrained weights can be downloaded from https://www.dropbox.com/s/7r2wu0af6okv280/colorize_gen_192.h5 download = drive.CreateFile({'id': '1mRRvS3WIHPdp36G0yc1jC0XI6i-Narv6'}) download.GetContentFile('pretrained_weights.h5') ``` With access to your Google Drive, the "deOldifyImages" directory will be created. 
Drop your personal images there, and after the notebook has run in full you will find the results in its "results" subdirectory.

```
from google.colab import drive
drive.mount('/content/drive')

!mkdir "/content/drive/My Drive/deOldifyImages"
!mkdir "/content/drive/My Drive/deOldifyImages/results"

weights_path = 'pretrained_weights.h5'
results_dir = '/content/drive/My Drive/deOldifyImages/results'

# The higher the render_factor, the more GPU memory will be used and generally
# the better images will look. 11GB can take a factor of 42 max. Performance
# generally degrades gracefully with lower factors, though you may also find
# that certain images will actually render better at lower numbers. This tends
# to be the case with the oldest photos.
render_factor = 42

filters = [Colorizer(gpu=0, weights_path=weights_path)]
vis = ModelImageVisualizer(filters, render_factor=render_factor, results_dir=results_dir)

# Download an example picture to try.
# NOTE: All the jpg files cloned from the git repo are corrupted. Must download yourself.
!wget "https://media.githubusercontent.com/media/jantic/DeOldify/master/test_images/abe.jpg" -O "abe2.jpg"

# %matplotlib inline
vis.plot_transformed_image('abe2.jpg', render_factor=25)

!wget "https://media.githubusercontent.com/media/jantic/DeOldify/master/test_images/TV1930s.jpg" -O "family_TV.jpg"
vis.plot_transformed_image('family_TV.jpg', render_factor=41)
```

Let's see how well it does with Dorothy before her world turns to color in the Wizard of Oz:

```
!wget "https://magnoliaforever.files.wordpress.com/2011/09/wizard-of-oz.jpg" -O "Dorothy.jpg"
vis.plot_transformed_image('Dorothy.jpg', render_factor=30)
```

Let's now try Butch and Sundance. Famously, the last scene ends with a black and white still, so we know what the colors were beforehand.
```
!wget "https://i.ebayimg.com/images/g/HqkAAOSwRLZUAwyS/s-l300.jpg" -O "butch_and_sundance.jpg"
vis.plot_transformed_image('butch_and_sundance.jpg', render_factor=29)
```

Let's get a picture of what they were actually wearing:

```
!wget "https://bethanytompkins.files.wordpress.com/2015/09/freezeframe.jpg" -O "butch_and_sundance_color.jpg"
Image('butch_and_sundance_color.jpg')
```

If you want to colorise pictures from your drive, drop them in a directory named deOldifyImages (in the root of your drive) and the next cell will save the colorised pictures in deOldifyImages/results.

```
for img in os.listdir("/content/drive/My Drive/deOldifyImages/"):
    img_path = str("/content/drive/My Drive/deOldifyImages/") + img
    if os.path.isfile(img_path):
        vis.plot_transformed_image(img_path)
```
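The final cell above colorizes every file in deOldifyImages. A small sketch (not from the notebook; the extension list is an assumption) of restricting that loop to image files, so stray non-image files in the folder are skipped:

```python
# Return full paths of files in `folder` whose extension looks like an image.
import os

IMAGE_EXTS = {".jpg", ".jpeg", ".png"}

def list_images(folder):
    out = []
    for name in sorted(os.listdir(folder)):
        path = os.path.join(folder, name)
        if os.path.isfile(path) and os.path.splitext(name)[1].lower() in IMAGE_EXTS:
            out.append(path)
    return out
```

The loop in the notebook would then iterate over `list_images("/content/drive/My Drive/deOldifyImages/")` instead of the raw `os.listdir` output.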
<a href="https://colab.research.google.com/github/agemagician/Prot-Transformers/blob/master/Embedding/Advanced/Electra.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>

<h3>Extracting protein sequences' features using the ProtElectra pretrained model</h3>

<b>1. Load the necessary libraries, including Hugging Face transformers</b>

```
!pip install -q transformers

import torch
from transformers import ElectraTokenizer, ElectraForPreTraining, ElectraForMaskedLM, ElectraModel
import re
import os
import requests
from tqdm.auto import tqdm
```

<b>2. Set the URL locations of ProtElectra and the vocabulary file</b>

```
generatorModelUrl = 'https://www.dropbox.com/s/5x5et5q84y3r01m/pytorch_model.bin?dl=1'
discriminatorModelUrl = 'https://www.dropbox.com/s/9ptrgtc8ranf0pa/pytorch_model.bin?dl=1'
generatorConfigUrl = 'https://www.dropbox.com/s/9059fvix18i6why/config.json?dl=1'
discriminatorConfigUrl = 'https://www.dropbox.com/s/jq568evzexyla0p/config.json?dl=1'
vocabUrl = 'https://www.dropbox.com/s/wck3w1q15bc53s0/vocab.txt?dl=1'
```

<b>3.
Download ProtElectra models and vocabulary files<b> ``` downloadFolderPath = 'models/electra/' discriminatorFolderPath = os.path.join(downloadFolderPath, 'discriminator') generatorFolderPath = os.path.join(downloadFolderPath, 'generator') discriminatorModelFilePath = os.path.join(discriminatorFolderPath, 'pytorch_model.bin') generatorModelFilePath = os.path.join(generatorFolderPath, 'pytorch_model.bin') discriminatorConfigFilePath = os.path.join(discriminatorFolderPath, 'config.json') generatorConfigFilePath = os.path.join(generatorFolderPath, 'config.json') vocabFilePath = os.path.join(downloadFolderPath, 'vocab.txt') if not os.path.exists(discriminatorFolderPath): os.makedirs(discriminatorFolderPath) if not os.path.exists(generatorFolderPath): os.makedirs(generatorFolderPath) def download_file(url, filename): response = requests.get(url, stream=True) with tqdm.wrapattr(open(filename, "wb"), "write", miniters=1, total=int(response.headers.get('content-length', 0)), desc=filename) as fout: for chunk in response.iter_content(chunk_size=4096): fout.write(chunk) if not os.path.exists(generatorModelFilePath): download_file(generatorModelUrl, generatorModelFilePath) if not os.path.exists(discriminatorModelFilePath): download_file(discriminatorModelUrl, discriminatorModelFilePath) if not os.path.exists(generatorConfigFilePath): download_file(generatorConfigUrl, generatorConfigFilePath) if not os.path.exists(discriminatorConfigFilePath): download_file(discriminatorConfigUrl, discriminatorConfigFilePath) if not os.path.exists(vocabFilePath): download_file(vocabUrl, vocabFilePath) ``` <b>4. Load the vocabulary and ProtElectra discriminator and generator Models<b> ``` tokenizer = ElectraTokenizer(vocabFilePath, do_lower_case=False ) discriminator = ElectraForPreTraining.from_pretrained(discriminatorFolderPath) generator = ElectraForMaskedLM.from_pretrained(generatorFolderPath) electra = ElectraModel.from_pretrained(discriminatorFolderPath) ``` <b>5. 
Load the model into the GPU if available and switch to inference mode</b>

```
device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')

discriminator = discriminator.to(device)
discriminator = discriminator.eval()

generator = generator.to(device)
generator = generator.eval()

electra = electra.to(device)
electra = electra.eval()
```

<b>6. Create or load sequences and map rarely occurring amino acids (U, Z, O, B) to (X)</b>

```
sequences_Example = ["A E T C Z A O", "S K T Z P"]
sequences_Example = [re.sub(r"[UZOB]", "X", sequence) for sequence in sequences_Example]
```

<b>7. Tokenize and encode the sequences, and load them into the GPU if possible</b>

```
ids = tokenizer.batch_encode_plus(sequences_Example, add_special_tokens=True, pad_to_max_length=True)
input_ids = torch.tensor(ids['input_ids']).to(device)
attention_mask = torch.tensor(ids['attention_mask']).to(device)
```

<b>8. Extract the sequences' features and load them into the CPU if needed</b>

```
with torch.no_grad():
    discriminator_embedding = discriminator(input_ids=input_ids, attention_mask=attention_mask)[0]
discriminator_embedding = discriminator_embedding.cpu().numpy()

with torch.no_grad():
    generator_embedding = generator(input_ids=input_ids, attention_mask=attention_mask)[0]
generator_embedding = generator_embedding.cpu().numpy()

with torch.no_grad():
    electra_embedding = electra(input_ids=input_ids, attention_mask=attention_mask)[0]
electra_embedding = electra_embedding.cpu().numpy()
```

<b>9. Remove the padding ([PAD]) and special tokens ([CLS], [SEP]) that are added by the Electra model</b>

```
features = []
for seq_num in range(len(electra_embedding)):
    seq_len = (attention_mask[seq_num] == 1).sum()
    seq_emd = electra_embedding[seq_num][1:seq_len - 1]
    features.append(seq_emd)

print(features)
```
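Step 9's trimming logic can be checked in isolation. The sketch below is not part of the notebook — the arrays are dummy stand-ins for the Electra outputs — but it shows how the attention mask determines how many positions survive per sequence once [CLS], [SEP], and padding are stripped:

```python
# Dummy (batch, seq_len, dim) embeddings and a mask: sequence 0 has
# 3 residues + [CLS]/[SEP]; sequence 1 has 2 residues + [CLS]/[SEP] + 1 pad.
import numpy as np

embedding = np.arange(2 * 5 * 3).reshape(2, 5, 3)
attention_mask = np.array([[1, 1, 1, 1, 1],
                           [1, 1, 1, 1, 0]])

features = []
for seq_num in range(len(embedding)):
    seq_len = int((attention_mask[seq_num] == 1).sum())
    # [1:seq_len-1] drops the first and last real token ([CLS] and [SEP])
    # and everything after seq_len (the [PAD] tail).
    features.append(embedding[seq_num][1:seq_len - 1])

print([f.shape for f in features])  # [(3, 3), (2, 3)]
```

So each entry in `features` has exactly one row per amino acid of the original sequence, which is what a downstream per-residue model expects.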
# Decision Point Price Momentum Oscillator (PMO) https://stockcharts.com/school/doku.php?id=chart_school:technical_indicators:dppmo ``` import numpy as np import pandas as pd import matplotlib.pyplot as plt import warnings warnings.filterwarnings("ignore") # fix_yahoo_finance is used to fetch data import fix_yahoo_finance as yf yf.pdr_override() # input symbol = 'AAPL' start = '2017-01-01' end = '2019-01-01' # Read data df = yf.download(symbol,start,end) # View Columns df.head() df.tail() df['ROC'] = ((df['Adj Close'] - df['Adj Close'].shift(1))/df['Adj Close'].shift(1)) * 100 df = df.dropna() df.head() df['35_Custom_EMA_ROC'] = df['ROC'].ewm(ignore_na=False,span=35,min_periods=0,adjust=True).mean() df.head() df['35_Custom_EMA_ROC_10'] = df['35_Custom_EMA_ROC']*10 df.head() df = df.dropna() df.head(20) df['PMO_Line'] = df['35_Custom_EMA_ROC_10'].ewm(ignore_na=False,span=20,min_periods=0,adjust=True).mean() df.head() df['PMO_Signal_Line'] = df['PMO_Line'].ewm(ignore_na=False,span=10,min_periods=0,adjust=True).mean() df = df.dropna() df.head() fig = plt.figure(figsize=(14,10)) ax1 = plt.subplot(2, 1, 1) ax1.plot(df['Adj Close']) ax1.set_title('Stock '+ symbol +' Closing Price') ax1.set_ylabel('Price') ax1.legend(loc='best') ax2 = plt.subplot(2, 1, 2) ax2.plot(df['PMO_Line'], label='PMO Line') ax2.plot(df['PMO_Signal_Line'], label='PMO Signal Line') ax2.axhline(y=0, color='red') ax2.grid() ax2.legend(loc='best') ax2.set_ylabel('PMO') ax2.set_xlabel('Date') ``` ## Candlestick with PMO ``` from matplotlib import dates as mdates import datetime as dt dfc = df.copy() dfc['VolumePositive'] = dfc['Open'] < dfc['Adj Close'] #dfc = dfc.dropna() dfc = dfc.reset_index() dfc['Date'] = mdates.date2num(dfc['Date'].astype(dt.date)) dfc.head() from mpl_finance import candlestick_ohlc fig = plt.figure(figsize=(14,10)) ax1 = plt.subplot(2, 1, 1) candlestick_ohlc(ax1,dfc.values, width=0.5, colorup='g', colordown='r', alpha=1.0) ax1.xaxis_date() 
ax1.xaxis.set_major_formatter(mdates.DateFormatter('%d-%m-%Y')) ax1.grid(True, which='both') ax1.minorticks_on() ax1v = ax1.twinx() colors = dfc.VolumePositive.map({True: 'g', False: 'r'}) ax1v.bar(dfc.Date, dfc['Volume'], color=colors, alpha=0.4) ax1v.axes.yaxis.set_ticklabels([]) ax1v.set_ylim(0, 3*df.Volume.max()) ax1.set_title('Stock '+ symbol +' Closing Price') ax1.set_ylabel('Price') ax2 = plt.subplot(2, 1, 2) ax2.plot(df['PMO_Line'], label='PMO_Line') ax2.plot(df['PMO_Signal_Line'], label='PMO_Signal_Line') ax2.axhline(y=0, color='red') ax2.grid() ax2.set_ylabel('PMO') ax2.set_xlabel('Date') ax2.legend(loc='best') ```
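The PMO steps above can be collected into one reusable function. This is a sketch following the notebook's own parameter choices (EMA spans of 35, 20, and 10, with the ×10 scaling), not an official DecisionPoint implementation:

```python
# Compute the PMO line and its signal line from a closing-price series.
import pandas as pd

def pmo(close, roc_ema_span=35, pmo_span=20, signal_span=10):
    # 1-period rate of change in percent, as in the notebook's ROC column
    roc = close.pct_change() * 100
    # smoothed ROC, scaled by 10
    ema_roc = roc.ewm(span=roc_ema_span, adjust=True).mean() * 10
    # PMO line: second smoothing pass
    pmo_line = ema_roc.ewm(span=pmo_span, adjust=True).mean()
    # signal line: EMA of the PMO line
    signal = pmo_line.ewm(span=signal_span, adjust=True).mean()
    return pmo_line, signal
```

Crossings of `pmo_line` above or below `signal` are the events the chart above visualizes; a steadily rising price series keeps the PMO above zero.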
``` import re import requests import time from requests_html import HTML from selenium import webdriver from selenium.webdriver.chrome.options import Options options = Options() options.add_argument("--headless") driver = webdriver.Chrome(options=options) categories = [ "https://www.amazon.com/Best-Sellers-Toys-Games/zgbs/toys-and-games/", "https://www.amazon.com/Best-Sellers-Electronics/zgbs/electronics/", "https://www.amazon.com/Best-Sellers/zgbs/fashion/" ] # categories first_url = categories[0] driver.get(first_url) body_el = driver.find_element_by_css_selector("body") html_str = body_el.get_attribute("innerHTML") html_obj = HTML(html=html_str) page_links = [f"https://www.amazon.com{x}" for x in html_obj.links if x.startswith("/")] # new_links = [x for x in new_links if "product-reviews/" not in x] # page_links def scrape_product_page(url, title_lookup = "#productTitle", price_lookup = "#priceblock_ourprice"): driver.get(url) time.sleep(0.5) body_el = driver.find_element_by_css_selector("body") html_str = body_el.get_attribute("innerHTML") html_obj = HTML(html=html_str) product_title = html_obj.find(title_lookup, first=True).text product_price = html_obj.find(price_lookup, first=True).text return product_title, product_price # https://www.amazon.com/LEGO-Classic-Medium-Creative-Brick/dp/B00NHQFA1I/ # https://www.amazon.com/Crayola-Washable-Watercolors-8-ea/dp/B000HHKAE2/ # <base-url>/<slug>/dp/<product_id>/ # my_regex_pattern = r"https://www.amazon.com/(?P<slug>[\w-]+)/dp/(?P<product_id>[\w-]+)/" # my_url = 'https://www.amazon.com/Crayola-Washable-Watercolors-8-ea/dp/B000HHKAE2/' # regex = re.compile(my_regex_pattern) # my_match = regex.match(my_url) # print(my_match) # my_match['product_id'] # my_match['slug'] regex_options = [ r"https://www.amazon.com/gp/product/(?P<product_id>[\w-]+)/", r"https://www.amazon.com/dp/(?P<product_id>[\w-]+)/", r"https://www.amazon.com/(?P<slug>[\w-]+)/dp/(?P<product_id>[\w-]+)/", ] def extract_product_id_from_url(url): 
product_id = None for regex_str in regex_options: regex = re.compile(regex_str) match = regex.match(url) if match != None: try: product_id = match['product_id'] except: pass return product_id # page_links = [x for x in page_links if extract_product_id_from_url(x) != None] def clean_page_links(page_links=[]): final_page_links = [] for url in page_links: product_id = extract_product_id_from_url(url) if product_id != None: final_page_links.append({"url": url, "product_id": product_id}) return final_page_links cleaned_links = clean_page_links(page_links) len(page_links) # == len(cleaned_links) len(cleaned_links) def perform_scrape(cleaned_items=[]): data_extracted = [] for obj in cleaned_items: link = obj['url'] product_id = obj['product_id'] title, price = (None, None) try: title, price = scrape_product_page(link) except: pass if title != None and price != None: print(link, title, price) product_data = { "url": link, "product_id": product_id, "title": title, "price": price } data_extracted.append(product_data) return data_extracted extracted_data = perform_scrape(cleaned_items=cleaned_links) print(extracted_data) ```
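The URL-parsing idea above can be exercised without Selenium or a live page. A standalone sketch using the same three regex patterns as the notebook:

```python
# Try each Amazon URL pattern in turn until one yields a product_id.
import re

REGEX_OPTIONS = [
    r"https://www.amazon.com/gp/product/(?P<product_id>[\w-]+)/",
    r"https://www.amazon.com/dp/(?P<product_id>[\w-]+)/",
    r"https://www.amazon.com/(?P<slug>[\w-]+)/dp/(?P<product_id>[\w-]+)/",
]

def extract_product_id(url):
    for pattern in REGEX_OPTIONS:
        match = re.match(pattern, url)
        if match is not None and "product_id" in match.groupdict():
            return match["product_id"]
    return None

print(extract_product_id("https://www.amazon.com/Crayola-Washable-Watercolors-8-ea/dp/B000HHKAE2/"))
# B000HHKAE2
print(extract_product_id("https://www.amazon.com/s?k=lego"))  # None: a search URL has no /dp/ segment
```

Ordering matters: the `/gp/product/` and bare `/dp/` forms must be tried before the slug form, since `[\w-]+` in the slug pattern would otherwise never get a chance to match the more specific layouts.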
```
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
%matplotlib inline

from sklearn.pipeline import Pipeline
from sklearn.preprocessing import Imputer
# The class below expects a 2D array containing one or more categorical
# columns; 1D inputs need to be reshaped into that form first.
# from sklearn.preprocessing import CategoricalEncoder  # the class is defined below instead
from sklearn.base import BaseEstimator, TransformerMixin
from sklearn.utils import check_array
from sklearn.preprocessing import LabelEncoder
from scipy import sparse

# Backported CategoricalEncoder (to study in detail later)
class CategoricalEncoder(BaseEstimator, TransformerMixin):
    """Encode categorical features as a numeric array.

    The input to this transformer should be a matrix of integers or strings,
    denoting the values taken on by categorical (discrete) features.
    The features can be encoded using a one-hot aka one-of-K scheme
    (``encoding='onehot'``, the default) or converted to ordinal integers
    (``encoding='ordinal'``).

    This encoding is needed for feeding categorical data to many scikit-learn
    estimators, notably linear models and SVMs with the standard kernels.

    Read more in the :ref:`User Guide <preprocessing_categorical_features>`.

    Parameters
    ----------
    encoding : str, 'onehot', 'onehot-dense' or 'ordinal'
        The type of encoding to use (default is 'onehot'):

        - 'onehot': encode the features using a one-hot aka one-of-K scheme
          (or also called 'dummy' encoding). This creates a binary column for
          each category and returns a sparse matrix.
        - 'onehot-dense': the same as 'onehot' but returns a dense array
          instead of a sparse matrix.
        - 'ordinal': encode the features as ordinal integers. This results in
          a single column of integers (0 to n_categories - 1) per feature.

    categories : 'auto' or a list of lists/arrays of values.
        Categories (unique values) per feature:

        - 'auto' : Determine categories automatically from the training data.
        - list : ``categories[i]`` holds the categories expected in the ith
          column.
The passed categories are sorted before encoding the data (used categories can be found in the ``categories_`` attribute). dtype : number type, default np.float64 Desired dtype of output. handle_unknown : 'error' (default) or 'ignore' Whether to raise an error or ignore if a unknown categorical feature is present during transform (default is to raise). When this is parameter is set to 'ignore' and an unknown category is encountered during transform, the resulting one-hot encoded columns for this feature will be all zeros. Ignoring unknown categories is not supported for ``encoding='ordinal'``. Attributes ---------- categories_ : list of arrays The categories of each feature determined during fitting. When categories were specified manually, this holds the sorted categories (in order corresponding with output of `transform`). Examples -------- Given a dataset with three features and two samples, we let the encoder find the maximum value per feature and transform the data to a binary one-hot encoding. >>> from sklearn.preprocessing import CategoricalEncoder >>> enc = CategoricalEncoder(handle_unknown='ignore') >>> enc.fit([[0, 0, 3], [1, 1, 0], [0, 2, 1], [1, 0, 2]]) ... # doctest: +ELLIPSIS CategoricalEncoder(categories='auto', dtype=<... 'numpy.float64'>, encoding='onehot', handle_unknown='ignore') >>> enc.transform([[0, 1, 1], [1, 0, 4]]).toarray() array([[ 1., 0., 0., 1., 0., 0., 1., 0., 0.], [ 0., 1., 1., 0., 0., 0., 0., 0., 0.]]) See also -------- sklearn.preprocessing.OneHotEncoder : performs a one-hot encoding of integer ordinal features. The ``OneHotEncoder assumes`` that input features take on values in the range ``[0, max(feature)]`` instead of using the unique values. sklearn.feature_extraction.DictVectorizer : performs a one-hot encoding of dictionary items (also handles string-valued features). sklearn.feature_extraction.FeatureHasher : performs an approximate one-hot encoding of dictionary items or strings. 
""" def __init__(self, encoding='onehot', categories='auto', dtype=np.float64, handle_unknown='error'): self.encoding = encoding self.categories = categories self.dtype = dtype self.handle_unknown = handle_unknown def fit(self, X, y=None): """Fit the CategoricalEncoder to X. Parameters ---------- X : array-like, shape [n_samples, n_feature] The data to determine the categories of each feature. Returns ------- self """ if self.encoding not in ['onehot', 'onehot-dense', 'ordinal']: template = ("encoding should be either 'onehot', 'onehot-dense' " "or 'ordinal', got %s") raise ValueError(template % self.handle_unknown) if self.handle_unknown not in ['error', 'ignore']: template = ("handle_unknown should be either 'error' or " "'ignore', got %s") raise ValueError(template % self.handle_unknown) if self.encoding == 'ordinal' and self.handle_unknown == 'ignore': raise ValueError("handle_unknown='ignore' is not supported for" " encoding='ordinal'") X = check_array(X, dtype=np.object, accept_sparse='csc', copy=True) n_samples, n_features = X.shape self._label_encoders_ = [LabelEncoder() for _ in range(n_features)] for i in range(n_features): le = self._label_encoders_[i] Xi = X[:, i] if self.categories == 'auto': le.fit(Xi) else: valid_mask = np.in1d(Xi, self.categories[i]) if not np.all(valid_mask): if self.handle_unknown == 'error': diff = np.unique(Xi[~valid_mask]) msg = ("Found unknown categories {0} in column {1}" " during fit".format(diff, i)) raise ValueError(msg) le.classes_ = np.array(np.sort(self.categories[i])) self.categories_ = [le.classes_ for le in self._label_encoders_] return self def transform(self, X): """Transform X using one-hot encoding. Parameters ---------- X : array-like, shape [n_samples, n_features] The data to encode. Returns ------- X_out : sparse matrix or a 2-d array Transformed input. 
"""
        X = check_array(X, accept_sparse='csc', dtype=np.object, copy=True)
        n_samples, n_features = X.shape
        X_int = np.zeros_like(X, dtype=np.int)
        X_mask = np.ones_like(X, dtype=np.bool)

        for i in range(n_features):
            valid_mask = np.in1d(X[:, i], self.categories_[i])

            if not np.all(valid_mask):
                if self.handle_unknown == 'error':
                    diff = np.unique(X[~valid_mask, i])
                    msg = ("Found unknown categories {0} in column {1}"
                           " during transform".format(diff, i))
                    raise ValueError(msg)
                else:
                    # Set the problematic rows to an acceptable value and
                    # continue. The rows are marked in `X_mask` and will be
                    # removed later.
                    X_mask[:, i] = valid_mask
                    X[:, i][~valid_mask] = self.categories_[i][0]

            X_int[:, i] = self._label_encoders_[i].transform(X[:, i])

        if self.encoding == 'ordinal':
            return X_int.astype(self.dtype, copy=False)

        mask = X_mask.ravel()
        n_values = [cats.shape[0] for cats in self.categories_]
        n_values = np.array([0] + n_values)
        indices = np.cumsum(n_values)

        column_indices = (X_int + indices[:-1]).ravel()[mask]
        row_indices = np.repeat(np.arange(n_samples, dtype=np.int32),
                                n_features)[mask]
        data = np.ones(n_samples * n_features)[mask]

        out = sparse.csc_matrix((data, (row_indices, column_indices)),
                                shape=(n_samples, indices[-1]),
                                dtype=self.dtype).tocsr()
        if self.encoding == 'onehot-dense':
            return out.toarray()
        else:
            return out

# Another transformer: selects a subset of columns
from sklearn.base import BaseEstimator, TransformerMixin

class DataFrameSelector(BaseEstimator, TransformerMixin):
    def __init__(self, attribute_names):
        self.attribute_names = attribute_names
    def fit(self, X, y=None):
        return self
    def transform(self, X):
        return X[self.attribute_names]

class DataFrameFillCat(BaseEstimator, TransformerMixin):
    def __init__(self, attribute_names):  # typo fixed: was arrtibute_names
        self.attribute_names = attribute_names
    def fit(self, X):
        return self
    def transform(self, X):
        print(type(X))
        for attributename in self.attribute_names:
            # print(X[attributename])
            # Fill missing categorical values with the most frequent category.
            freq_cat = X[attributename].dropna().mode()[0]
            # print(freq_cat)
            X[attributename] =
X[attributename].fillna(freq_cat)
        return X.values

# Load the data
train_df = pd.read_csv("./datasets/train.csv")
test_df = pd.read_csv("./datasets/test.csv")
combine = [train_df, test_df]

train_df.head()
train_df.info()
train_df.describe()
train_df.describe(include=np.object)

num_attribute = ['MSSubClass', 'LotArea', 'OverallQual', 'OverallCond', 'YearBuilt',
                 'YearRemodAdd', 'MasVnrArea', 'BsmtFinSF1', 'BsmtFinSF2', 'BsmtUnfSF',
                 'TotalBsmtSF', '1stFlrSF', '2ndFlrSF', 'LowQualFinSF', 'GrLivArea',
                 'BsmtFullBath', 'BsmtHalfBath', 'FullBath', 'HalfBath', 'BedroomAbvGr',
                 'KitchenAbvGr', 'TotRmsAbvGrd', 'Fireplaces', 'GarageYrBlt', 'GarageCars',
                 'GarageArea', 'WoodDeckSF', 'OpenPorchSF', 'EnclosedPorch', '3SsnPorch',
                 'ScreenPorch', 'PoolArea', 'MiscVal', 'MoSold', 'YrSold']
cat_attribute = ['MSZoning', 'Street', 'LotShape', 'LandContour', 'Utilities', 'LotConfig',
                 'LandSlope', 'Neighborhood', 'Condition1', 'Condition2', 'BldgType',
                 'HouseStyle', 'RoofStyle', 'RoofMatl', 'Exterior1st', 'Exterior2nd',
                 'MasVnrType', 'ExterQual', 'ExterCond', 'Foundation', 'BsmtQual', 'BsmtCond',
                 'BsmtExposure', 'BsmtFinType1', 'BsmtFinType2', 'Heating', 'HeatingQC',
                 'CentralAir', 'Electrical', 'KitchenQual', 'Functional', 'GarageType',
                 'GarageFinish', 'GarageQual', 'GarageCond', 'PavedDrive', 'SaleType',
                 'SaleCondition']

from sklearn.preprocessing import StandardScaler

num_pipeline = Pipeline([
    ("selector", DataFrameSelector(num_attribute)),
    ("imputer", Imputer(strategy="median")),
    ("std_scaler", StandardScaler()),
])
cat_pipeline = Pipeline([
    ("selector", DataFrameSelector(cat_attribute)),
    ("fillna", DataFrameFillCat(cat_attribute)),
    ("cat_encoder", CategoricalEncoder(encoding="onehot-dense")),
])

X_train = train_df
X_train_cat_pipeline = num_pipeline.fit_transform(X_train)

from sklearn.pipeline import FeatureUnion

full_pipeline = FeatureUnion(transformer_list=[
    ("num_pipeline", num_pipeline),
    ("cat_pipeline", cat_pipeline),
])

from sklearn.model_selection import train_test_split

X_train =
train_df.drop(["Id", "SalePrice"], axis = 1) y_train = train_df["SalePrice"] # X_train.info() X_train_pipeline = full_pipeline.fit_transform(X_train) X_train, X_test, y_train, y_test = train_test_split(X_train_pipeline, y_train, test_size=0.1) X_train.shape, X_test.shape, y_train.shape # X_test_pipeline = full_pipeline.transform(X_test) from sklearn.ensemble import RandomForestRegressor rdf_reg = RandomForestRegressor() rdf_reg.fit(X_train, y_train) y_pred = rdf_reg.predict(X_test) # y_pred = rdf_reg.predict(X_test_pipeline) from sklearn.metrics import mean_squared_error scores_mse = mean_squared_error(y_pred, y_test) scores_mse from sklearn.ensemble import GradientBoostingRegressor gbr_reg = GradientBoostingRegressor(n_estimators=1000, max_depth=2) gbr_reg.fit(X_train, y_train) y_pred = gbr_reg.predict(X_test) scores_mse = mean_squared_error(y_pred, y_test) scores_mse test_df_data = test_df.drop(["Id"], axis=1) X_test_pipeline = full_pipeline.transform(test_df_data) # test_df_data.info() # test_df_data.info() y_pred = gbr_reg.predict(X_test_pipeline) result =pd.DataFrame({ "Id": test_df["Id"], "SalePrice": y_pred }) result.to_csv("result.csv", index=False) ```
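As a side note, the hand-rolled `CategoricalEncoder` above was later superseded in scikit-learn by `OneHotEncoder` (and `Imputer` by `SimpleImputer`). What its `'onehot-dense'` encoding produces can be illustrated on a toy frame with `pandas.get_dummies`; the toy data below is for illustration only and is not part of the notebook:

```python
# One binary column per category, as in the 'onehot-dense' encoding.
import pandas as pd

toy = pd.DataFrame({"Street": ["Pave", "Grvl", "Pave"]})
encoded = pd.get_dummies(toy, columns=["Street"])
print(sorted(encoded.columns))  # ['Street_Grvl', 'Street_Pave']
```

Each row of `encoded` has exactly one 1 among the `Street_*` columns, which is why the full pipeline's output width equals the numeric-column count plus the total number of distinct categories.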
github_jupyter
```
from google.colab import drive
drive.mount("/content/gdrive")

import json
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split

df = pd.read_csv("/content/gdrive/MyDrive/tidydata.csv")
df['label'] = df['recommended'].apply(lambda x: 0 if x == True else 1)
X = df[['review']]
y = df['label']
X_train, X_test, y_train, y_test = train_test_split(df.index.values, df.label.values,
                                                    test_size=0.2, random_state=40,
                                                    stratify=df.label.values)
df['type'] = ['tmp'] * df.shape[0]
df.loc[X_train, 'type'] = 'train'
df.loc[X_test, 'type'] = 'test'

X_train_list = list(df[df.type == 'train'].review.values)
Y_train_list = list(df[df.type == 'train'].label.values)
# x == x is False only for NaN, so this drops rows with a missing review
tmp1 = []
tmp2 = []
for i in range(len(X_train_list)):
    if X_train_list[i] == X_train_list[i]:
        tmp1.append(X_train_list[i])
        tmp2.append(Y_train_list[i])
X_train_list = tmp1
Y_train_list = tmp2

X_test_list = list(df[df.type == 'test'].review.values)
Y_test_list = list(df[df.type == 'test'].label.values)
tmp1 = []
tmp2 = []
for i in range(len(X_test_list)):
    if X_test_list[i] == X_test_list[i]:
        tmp1.append(X_test_list[i])
        tmp2.append(Y_test_list[i])
X_test_list = tmp1
Y_test_list = tmp2

with open("/content/gdrive/MyDrive/df_train_w2v.json", "r") as file:
    train_emb = json.load(file)
with open("/content/gdrive/MyDrive/df_test_w2v.json", "r") as file:
    test_emb = json.load(file)
train_emb[0][0]

from sklearn.utils import shuffle
X_train, Y_train = shuffle(train_emb, Y_train_list)

from sklearn.neural_network import MLPClassifier
from sklearn.linear_model import LogisticRegression
#NN = MLPClassifier(solver='lbfgs', alpha=1e-5, hidden_layer_sizes=(128, 2))
#NN.fit(X_train, Y_train)
LR = LogisticRegression(random_state=0)
LR.fit(X_train, Y_train)  # one fit is enough

from sklearn.metrics import classification_report
y_pred = LR.predict(test_emb)
acc = 0
for i in range(len(y_pred)):
    if y_pred[i] == Y_test_list[i]:
        acc += 1
acc = acc / len(y_pred)
print(acc)
# classification_report expects y_true first, then y_pred
print(classification_report(Y_test_list, y_pred, digits=3))
sum(Y_test_list)

from sklearn import metrics
fpr, tpr, thresholds = metrics.roc_curve(Y_test_list, y_pred, pos_label=1)
metrics.auc(fpr, tpr)

from sklearn.naive_bayes import GaussianNB
gnb = GaussianNB()
gnb.fit(X_train, Y_train)
y_pred = gnb.predict(test_emb)
acc = 0
for i in range(len(y_pred)):
    if y_pred[i] == Y_test_list[i]:
        acc += 1
acc = acc / len(y_pred)
print(classification_report(Y_test_list, y_pred, digits=3))

from sklearn.ensemble import RandomForestClassifier
X_train, Y_train = shuffle(train_emb, Y_train_list)
clf = RandomForestClassifier(max_depth=10, random_state=0)
clf.fit(X_train, Y_train)
print(classification_report(Y_test_list, clf.predict(test_emb), digits=3))
y_pred = clf.predict(test_emb)
```
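The `X_train_list[i] == X_train_list[i]` test above works because NaN is the only value that is not equal to itself. A minimal sketch of the same filtering idiom:

```python
# keep (review, label) pairs whose review is not NaN;
# x == x evaluates to False only when x is NaN
reviews = ["good game", float("nan"), "too many bugs"]
labels = [0, 1, 1]

pairs = [(x, y) for x, y in zip(reviews, labels) if x == x]
```

The same effect is usually achieved with `df.dropna(subset=['review'])` before splitting, which avoids the parallel-list bookkeeping.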
```
import pandas as pd

# Read in the SNR feature data
MCdata = pd.read_csv(r"C:\Users\soari\Documents\GitHub\MC_SNR_DNN/SNRdata.csv", header=None)
# Read in the SNR labels
MClabel = pd.read_csv(r"C:\Users\soari\Documents\GitHub\MC_SNR_DNN/SNRlabel.csv", header=None)
print(MCdata.info())
print(MClabel.info())

import matplotlib.pyplot as plt
pm = MCdata[100:101]
# pm.plot(kind='bar', legend=False)
ax = pm.transpose().plot(kind='line', title="CDF", figsize=(6, 4), legend=False, fontsize=12)
ax.set_xlabel("Bins", fontsize=12)
ax.set_ylabel("CDF", fontsize=12)
plt.show()
# plt.savefig('filename.png', dpi=600)

from sklearn.model_selection import train_test_split
import numpy as np

# Split the data up in train and test sets
X_train, X_test, y_train, y_test = train_test_split(MCdata, MClabel, test_size=0.33, random_state=42)

# Import `Sequential` from `keras.models`
from keras.models import Sequential
# Import `Dense` from `keras.layers`
from keras.layers import Dense
from tensorflow.keras import layers
from tensorflow.keras import regularizers

# Initialize the constructor
model = Sequential()

# # Add an input layer
# model.add(Dense(100, activation='relu', input_shape=(100,)))
# # Add one hidden layer
# model.add(Dense(20, activation='relu'))
# # Add an output layer
# model.add(Dense(3, activation='sigmoid'))

# # Strategy 1: add weight regularization to avoid overfitting
# # Add an input layer
# model.add(Dense(100, activation='relu', kernel_regularizer=regularizers.l2(0.001), input_shape=(100,)))
# # Add one hidden layer
# model.add(Dense(20, kernel_regularizer=regularizers.l2(0.001), activation='relu'))
# # l2(0.001) means that every coefficient in the weight matrix of the layer will add
# # 0.001 * weight_coefficient_value**2 to the total loss of the network.

# Strategy 2: Dropout
# Note: a bare `layers.Dropout(0.5),` expression creates a layer but never adds it
# to the model, so the dropout layers must go through model.add() as well
model.add(Dense(100, activation='relu', input_shape=(100,)))
model.add(layers.Dropout(0.5))
model.add(Dense(20, activation='relu'))
model.add(layers.Dropout(0.5))
model.add(Dense(1))  # the closing parenthesis was missing here

# Model output shape
model.output_shape
# Model summary
model.summary()
# Model config
model.get_config()
# List all weight tensors
model.get_weights()

X_train = X_train.transpose()
X_test = X_test.transpose()
y_train = y_train.transpose()
y_test = y_test.transpose()

from keras import optimizers
sgd = optimizers.SGD(lr=0.01, decay=1e-6, momentum=0.9, nesterov=True)  # defined but unused: compile below uses Adam
model.compile(loss='mse', optimizer='adam', metrics=['mae', 'mse'])
history = model.fit(X_train, y_train, epochs=5, batch_size=1, verbose=1, validation_split=0.2)
y_pred = model.predict(X_test)
score = model.evaluate(X_test, y_test, verbose=1)
# print("Test Score:", score[0])
# print("Test Accuracy:", score[1])

# Plot training & validation MAE: the model is compiled with 'mae'/'mse' metrics,
# so history has no 'acc' key in this regression setup
plt.plot(history.history['mae'])
plt.plot(history.history['val_mae'])
plt.title('model MAE')
plt.ylabel('mae')
plt.xlabel('epoch')
plt.legend(['train', 'test'], loc='upper left')
plt.show()

# Plot training & validation loss values
plt.plot(history.history['loss'])
plt.plot(history.history['val_loss'])
plt.title('model loss')
plt.ylabel('loss')
plt.xlabel('epoch')
plt.legend(['train', 'test'], loc='upper left')
plt.show()
```
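For intuition on the Dropout layers used above: inverted dropout zeroes a random fraction of activations during training and rescales the survivors so the expected activation is unchanged. A small numpy sketch (Keras handles this internally):

```python
import numpy as np

def inverted_dropout(activations, rate, rng):
    """Zero a `rate` fraction of units and rescale survivors by 1/(1-rate),
    so the expected activation is the same as without dropout."""
    keep = rng.random(activations.shape) >= rate
    return activations * keep / (1.0 - rate)

rng = np.random.default_rng(0)
a = np.ones((4, 100))
out = inverted_dropout(a, 0.5, rng)
```

At inference time no units are dropped and no rescaling is needed, which is why `model.predict` needs no special handling.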
``` library(keras) ``` **Loading MNIST dataset from the library datasets** ``` mnist <- dataset_mnist() x_train <- mnist$train$x y_train <- mnist$train$y x_test <- mnist$test$x y_test <- mnist$test$y ``` **Data Preprocessing** ``` # reshape x_train <- array_reshape(x_train, c(nrow(x_train), 784)) x_test <- array_reshape(x_test, c(nrow(x_test), 784)) # rescale x_train <- x_train / 255 x_test <- x_test / 255 ``` The y data is an integer vector with values ranging from 0 to 9. To prepare this data for training we one-hot encode the vectors into binary class matrices using the Keras to_categorical() function: ``` y_train <- to_categorical(y_train, 10) y_test <- to_categorical(y_test, 10) ``` **Building model** ``` model <- keras_model_sequential() model %>% layer_dense(units = 256, activation = 'relu', input_shape = c(784)) %>% layer_dropout(rate = 0.4) %>% layer_dense(units = 128, activation = 'relu') %>% layer_dropout(rate = 0.3) %>% layer_dense(units = 10, activation = 'softmax') # Use the summary() function to print the details of the model: summary(model) ``` **Compiling the model** ``` model %>% compile( loss = 'categorical_crossentropy', optimizer = optimizer_rmsprop(), metrics = c('accuracy') ) ``` **Training and Evaluation** ``` history <- model %>% fit( x_train, y_train, epochs = 30, batch_size = 128, validation_split = 0.2 ) plot(history) # Plot the accuracy of the training data plot(history$metrics$acc, main="Model Accuracy", xlab = "epoch", ylab="accuracy", col="blue", type="l") # Plot the accuracy of the validation data lines(history$metrics$val_acc, col="green") # Add Legend legend("bottomright", c("train","test"), col=c("blue", "green"), lty=c(1,1)) # Plot the model loss of the training data plot(history$metrics$loss, main="Model Loss", xlab = "epoch", ylab="loss", col="blue", type="l") # Plot the model loss of the test data lines(history$metrics$val_loss, col="green") # Add legend legend("topright", c("train","test"), col=c("blue", "green"), 
lty=c(1,1)) ``` **Predicting for the test data** ``` model %>% predict_classes(x_test) # Evaluate on test data and labels score <- model %>% evaluate(x_test, y_test, batch_size = 128) # Print the score print(score) ``` ## Hyperparameter tuning ``` # install.packages("tfruns") library(tfruns) runs <- tuning_run(file = "hyperparameter_tuning_model.r", flags = list( dense_units1 = c(8,16), dropout1 = c(0.2, 0.3, 0.4), dense_units2 = c(8,16), dropout2 = c(0.2, 0.3, 0.4) )) runs ```
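Keras's `to_categorical()` used above turns integer labels into one-hot rows. The same operation in numpy (shown in Python for consistency with the rest of the document; a sketch, not the Keras implementation):

```python
import numpy as np

def to_one_hot(labels, num_classes):
    """Return an (n, num_classes) matrix with a 1 in each label's column."""
    labels = np.asarray(labels)
    out = np.zeros((labels.size, num_classes))
    out[np.arange(labels.size), labels] = 1.0  # fancy indexing: one hit per row
    return out

Y = to_one_hot([0, 3, 9], 10)
```

`np.argmax(Y, axis=1)` inverts the encoding, which is also how predicted class labels are recovered from a softmax output.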
## This notebook contains sample code for the COMPAS data experiment in Section 5.2. Before running the code, please check README.md and install LEMON.

```
import numpy as np
import pandas as pd
from matplotlib import pyplot as plt
from sklearn import feature_extraction
from sklearn import preprocessing
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
import stealth_sampling
```

### Functions

```
# split data to bins (s, y) = (1, 1), (1, 0), (0, 1), (0, 0)
def split_to_four(X, S, Y):
    Z = np.c_[X, S, Y]
    Z_pos_pos = Z[np.logical_and(S, Y), :]
    Z_pos_neg = Z[np.logical_and(S, np.logical_not(Y)), :]
    Z_neg_pos = Z[np.logical_and(np.logical_not(S), Y), :]
    Z_neg_neg = Z[np.logical_and(np.logical_not(S), np.logical_not(Y)), :]
    Z = [Z_pos_pos, Z_pos_neg, Z_neg_pos, Z_neg_neg]
    return Z

# compute demographic parity
def demographic_parity(W):
    p_pos = np.mean(np.concatenate(W[:2]))
    p_neg = np.mean(np.concatenate(W[2:]))
    return np.abs(p_pos - p_neg)

# compute the sampling size from each bin
def computeK(Z, Nsample, sampled_spos, sampled_ypos):
    Kpp = Nsample*sampled_spos*sampled_ypos[0]
    Kpn = Nsample*sampled_spos*(1-sampled_ypos[0])
    Knp = Nsample*(1-sampled_spos)*sampled_ypos[1]
    Knn = Nsample*(1-sampled_spos)*(1-sampled_ypos[1])
    K = [Kpp, Kpn, Knp, Knn]
    kratio = min([min(1, z.shape[0]/k) for (z, k) in zip(Z, K)])
    Kpp = int(np.floor(Nsample*kratio*sampled_spos*sampled_ypos[0]))
    Kpn = int(np.floor(Nsample*kratio*sampled_spos*(1-sampled_ypos[0])))
    Knp = int(np.floor(Nsample*kratio*(1-sampled_spos)*sampled_ypos[1]))
    Knn = int(np.floor(Nsample*kratio*(1-sampled_spos)*(1-sampled_ypos[1])))
    K = [max([k, 1]) for k in [Kpp, Kpn, Knp, Knn]]
    return K

# case-control sampling
def case_control_sampling(X, K):
    q = [(K[i]/sum(K)) * np.ones(x.shape[0]) / x.shape[0] for i, x in enumerate(X)]
    return q

# compute wasserstein distance
def compute_wasserstein(X1, S1, X2, S2, timeout=10.0):
    dx = stealth_sampling.compute_wasserstein(X1, X2, path='./',
prefix='compas', timeout=timeout) dx_s1 = stealth_sampling.compute_wasserstein(X1[S1>0.5, :], X2[S2>0.5, :], path='./', prefix='compas', timeout=timeout) dx_s0 = stealth_sampling.compute_wasserstein(X1[S1<0.5, :], X2[S2<0.5, :], path='./', prefix='compas', timeout=timeout) return dx, dx_s1, dx_s0 ``` ### Fetch data and preprocess We modified [https://github.com/mbilalzafar/fair-classification/blob/master/disparate_mistreatment/propublica_compas_data_demo/load_compas_data.py] ``` url = 'https://raw.githubusercontent.com/propublica/compas-analysis/master/compas-scores-two-years.csv' feature_list = ['age_cat', 'race', 'sex', 'priors_count', 'c_charge_degree', 'two_year_recid'] sensitive = 'race' label = 'score_text' # fetch data df = pd.read_table(url, sep=',') df = df.dropna(subset=['days_b_screening_arrest']) # convert to np array data = df.to_dict('list') for k in data.keys(): data[k] = np.array(data[k]) # filtering records idx = np.logical_and(data['days_b_screening_arrest']<=30, data['days_b_screening_arrest']>=-30) idx = np.logical_and(idx, data['is_recid'] != -1) idx = np.logical_and(idx, data['c_charge_degree'] != 'O') idx = np.logical_and(idx, data['score_text'] != 'NA') idx = np.logical_and(idx, np.logical_or(data['race'] == 'African-American', data['race'] == 'Caucasian')) for k in data.keys(): data[k] = data[k][idx] # label Y Y = 1 - np.logical_not(data[label]=='Low').astype(np.int32) # feature X, sensitive feature S X = [] for feature in feature_list: vals = data[feature] if feature == 'priors_count': vals = [float(v) for v in vals] vals = preprocessing.scale(vals) vals = np.reshape(vals, (Y.size, -1)) else: lb = preprocessing.LabelBinarizer() lb.fit(vals) vals = lb.transform(vals) if feature == sensitive: S = vals[:, 0] X.append(vals) X = np.concatenate(X, axis=1) ``` ### Experiment ``` # parameter settings seed = 0 # random seed # parameter settings for sampling Nsample = 2000 # number of data to sample sampled_ypos = [0.5, 0.5] # the ratio of positive 
decisions '\alpha' in sampling # parameter settings for complainer Nref = 1278 # number of referential data def sample_and_evaluate(X, S, Y, Nref=1278, Nsample=2000, sampled_ypos=[0.5, 0.5], seed=0): # load data Xbase, Xref, Sbase, Sref, Ybase, Yref = train_test_split(X, S, Y, test_size=Nref, random_state=seed) N = Xbase.shape[0] scaler = StandardScaler() scaler.fit(Xbase) Xbase = scaler.transform(Xbase) Xref = scaler.transform(Xref) # wasserstein distance between base and ref np.random.seed(seed) idx = np.random.permutation(Xbase.shape[0])[:Nsample] dx, dx_s1, dx_s0 = compute_wasserstein(Xbase[idx, :], Sbase[idx], Xref, Sref, timeout=10.0) # demographic parity Z = split_to_four(Xbase, Sbase, Ybase) parity = demographic_parity([z[:, -1] for z in Z]) # sampling results = [[parity, dx, dx_s1, dx_s0]] sampled_spos = np.mean(Sbase) K = computeK(Z, Nsample, sampled_spos, sampled_ypos) for i, sampling in enumerate(['case-control', 'stealth']): #print('%s: sampling ...' % (sampling,), end='') np.random.seed(seed+i) if sampling == 'case-control': p = case_control_sampling([z[:, :-1] for z in Z], K) elif sampling == 'stealth': p = stealth_sampling.stealth_sampling([z[:, :-1] for z in Z], K, path='./', prefix='compas', timeout=30.0) idx = np.random.choice(N, sum(K), p=np.concatenate(p), replace=False) Xs = np.concatenate([z[:, :-2] for z in Z], axis=0)[idx, :] Ss = np.concatenate([z[:, -2] for z in Z], axis=0)[idx] Ts = np.concatenate([z[:, -1] for z in Z], axis=0)[idx] #print('done.') # demographic parity of the sampled data #print('%s: evaluating ...' 
% (sampling,), end='')
        Zs = split_to_four(Xs, Ss, Ts)
        parity = demographic_parity([z[:, -1] for z in Zs])

        # wasserstein distance
        dx, dx_s1, dx_s0 = compute_wasserstein(Xs, Ss, Xref, Sref, timeout=10.0)
        #print('done.')
        results.append([parity, dx, dx_s1, dx_s0])
    return results
```

#### Experiment (One Run)

```
result = sample_and_evaluate(X, S, Y, Nref=Nref, Nsample=Nsample, sampled_ypos=sampled_ypos, seed=seed)
df = pd.DataFrame(result)
df.index = ['Baseline', 'Case-control', 'Stealth']
df.columns = ['DP', 'WD on Pr[x]', 'WD on Pr[x|s=1]', 'WD on Pr[x|s=0]']
print('Result (alpha = %.2f, seed=%d)' % (sampled_ypos[0], seed))
df
```

#### Experiment (10 Runs)

```
num_itr = 10
result_all = []
for i in range(num_itr):
    result_i = sample_and_evaluate(X, S, Y, Nref=Nref, Nsample=Nsample, sampled_ypos=sampled_ypos, seed=i)
    result_all.append(result_i)
result_all = np.array(result_all)
df = pd.DataFrame(np.mean(result_all, axis=0))
df.index = ['Baseline', 'Case-control', 'Stealth']
df.columns = ['DP', 'WD on Pr[x]', 'WD on Pr[x|s=1]', 'WD on Pr[x|s=0]']
print('Average Result of %d runs (alpha = %.2f)' % (num_itr, sampled_ypos[0]))
df
```
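`demographic_parity` above computes the absolute gap in positive rates between the two sensitive groups; the same quantity on flat arrays, as a toy check:

```python
import numpy as np

def demographic_parity_flat(y_pred, s):
    """|P(yhat=1 | s=1) - P(yhat=1 | s=0)| for binary predictions and group labels."""
    y_pred, s = np.asarray(y_pred), np.asarray(s)
    return abs(float(y_pred[s == 1].mean()) - float(y_pred[s == 0].mean()))

# group s=1 has positive rate 2/3, group s=0 has 1/3, so the gap is 1/3
dp = demographic_parity_flat([1, 1, 0, 0, 1, 0], [1, 1, 1, 0, 0, 0])
```

The notebook's version takes the data pre-split into the four (s, y) bins, but the measured quantity is the same.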
# Plotting species distribution areas (IUCN spatial data)

## Imports

### Libraries

```
import numpy as np
import geopandas as gpd
from matplotlib import pyplot as plt
%matplotlib inline
```

### Data

First, let's load the shapefile containing the distribution data for marine mammals (IUCN data).

```
shp = gpd.read_file('./MARINE_MAMMALS/MARINE_MAMMALS.shp')
#shore = gpd.read_file('shapefile.shp') # In case you want a custom coastline
#shp.head()
```

## Custom Functions

```
def RGB(rgb_code):
    """Transforms 0-255 rgb colors to 0-1 scale"""
    col = tuple(x/255 for x in rgb_code) # I'm sure there is a more efficient way; however, it was faster to code a simple function
    return col

def get_ranges(species, shp, drop_columns=None):
    """Returns the simplified distribution data for a set of species from a shapefile downloaded from the IUCN.
    Arguments:
    species: a list of binomial species names to get the distributions from
    shp: the shapefile variable containing the distribution ranges
    drop_columns: columns with attributes to drop from the original shapefile; if None, no columns are dropped"""
    if drop_columns is not None:
        shp = shp.drop(columns=drop_columns)
    # DataFrame.append (removed in pandas 2.0) is not needed here:
    # a single isin mask selects every requested species at once
    ranges = shp[shp.binomial.isin(species)]
    return ranges

def plot_dist_ranges(ranges, cmap='winter', font={'family': 'sans-serif', 'weight': 500}, shore=None, extent=None):
    """Plots the distribution ranges of the species contained in the ranges variable.
    Arguments:
    ranges: A list in which each element corresponds to a distribution range (geometry) and attributes from distribution data downloaded from the IUCN
    cmap: A string defining the colormap to use. If empty, defaults to 'winter'
    font: Dict. containing the properties of the font to use in the plot.
    shore: Variable storing the coastline.
    If None, defaults to 'naturalearth_lowres' from the geopandas datasets
    extent: A list with the limits of the plot, in the form: [x_inf, x_sup, y_inf, y_sup]. If None, the whole globe is plotted."""

    # Setting the font of the plot.
    plt.rcParams['font.family'] = font['family']
    plt.rcParams['font.weight'] = font['weight']

    # Plot environment
    fig, ax = plt.subplots(1, 1)

    # Plotting the distributions
    sp = list(ranges.binomial.values)
    ranges.plot(column='binomial', ax=ax, alpha=0.35, cmap=cmap, legend=True,
                legend_kwds={'prop': {'style': 'italic', 'size': 8}})

    # Plotting the coastline ("is None", not "== None": comparing a GeoDataFrame
    # with == is elementwise and cannot be used in an if statement)
    if shore is None:
        world = gpd.read_file(gpd.datasets.get_path('naturalearth_lowres'))
        world.plot(color='lightslategray', alpha=0.5, ax=ax)
    else:
        shore.plot(color='lightslategray', alpha=0.5, ax=ax)

    # Cleaning the plot
    plt.tight_layout()
    for spine in ax.spines.values():
        spine.set_visible(False)
    ax.tick_params(axis='both', which='both', bottom=False, top=False, left=False,
                   labelleft=True, labelbottom=True)

    # Changing the tickmarks of the y axis
    if extent is None:
        extent = [-180, 180, -90, 90]
        plt.axis(extent)
        plt.yticks(np.arange(-80, 120, 40))
    else:
        plt.axis(extent)
        # y ticks come from the y limits (extent[2:4]), not the x limits
        plt.yticks(np.linspace(extent[2], extent[3], 4))

    # Changing the axes labels
    plt.xlabel('Longitude (º)')
    plt.ylabel('Latitude (º)')
    #plt.labelweight = font['weight']
```

## Extracting and plotting the data

As an example, the distributions of *Stenella attenuata* and *Stenella longirostris* are plotted, and most unused columns are dropped. Also, a custom color map is created to pass to the plotting function.
``` from matplotlib.colors import ListedColormap species = ['Stenella attenuata', 'Stenella longirostris'] drop_columns = ['id_no','presence', 'origin', 'source', 'seasonal', 'compiler', 'yrcompiled', 'citation', 'dist_comm', 'island', 'subspecies', 'subpop', 'tax_comm', 'kingdom', 'phylum', 'class', 'order_', 'family', 'genus', 'category', 'marine', 'terrestial', 'freshwater'] ranges = get_ranges(species, shp, drop_columns) cmap = ListedColormap([RGB((121,227,249)), RGB((27,109,183))]) plot_dist_ranges(ranges, cmap = cmap, font = {'family': 'Montserrat', 'weight': 500}) #plt.savefig('Dist.jpg', bbox_inches = 'tight', pad_inches = 0.1, transparent = False, dpi = 300) import types def imports(): for name, val in globals().items(): if isinstance(val, types.ModuleType): yield val.__name__ list(imports()) import pkg_resources import types def get_imports(): for name, val in globals().items(): if isinstance(val, types.ModuleType): # Split ensures you get root package, # not just imported function name = val.__name__.split(".")[0] elif isinstance(val, type): name = val.__module__.split(".")[0] # Some packages are weird and have different # imported names vs. system/pip names. Unfortunately, # there is no systematic way to get pip names from # a package's imported name. You'll have to add # exceptions to this list manually! poorly_named_packages = { "PIL": "Pillow", "sklearn": "scikit-learn" } if name in poorly_named_packages.keys(): name = poorly_named_packages[name] yield name imports = list(set(get_imports())) # The only way I found to get the version of the root package # from only the name of the package is to cross-check the names # of installed packages vs. imported packages requirements = [] for m in pkg_resources.working_set: if m.project_name in imports and m.project_name!="pip": requirements.append((m.project_name, m.version)) for r in requirements: print("{}=={}".format(*r)) ```
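A side note on the dependency-listing snippet: `pkg_resources` is deprecated in recent setuptools, and the same name-to-version lookup ships in the standard library as `importlib.metadata` (Python 3.8+). A minimal sketch:

```python
from importlib import metadata

def pip_version(dist_name):
    """Return the installed version of a distribution, or None if it is absent."""
    try:
        return metadata.version(dist_name)
    except metadata.PackageNotFoundError:
        return None

v = pip_version("pip")  # a distribution that is present in most environments
```

The name passed here is the *distribution* name (the pip name), so the manual `poorly_named_packages` mapping from the snippet above is still needed when starting from an import name like `sklearn`.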
<a href="https://colab.research.google.com/github/Ekram49/DS-Unit-2-Applied-Modeling/blob/master/Ekram_LS_DS_234_assignment.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> Lambda School Data Science *Unit 2, Sprint 3, Module 4* --- # Model Interpretation You will use your portfolio project dataset for all assignments this sprint. ## Assignment Complete these tasks for your project, and document your work. - [ ] Continue to iterate on your project: data cleaning, exploratory visualization, feature engineering, modeling. - [ ] Make at least 1 partial dependence plot to explain your model. - [ ] Make at least 1 Shapley force plot to explain an individual prediction. - [ ] **Share at least 1 visualization (of any type) on Slack!** If you aren't ready to make these plots with your own dataset, you can practice these objectives with any dataset you've worked with previously. Example solutions are available for Partial Dependence Plots with the Tanzania Waterpumps dataset, and Shapley force plots with the Titanic dataset. (These datasets are available in the data directory of this repository.) Please be aware that **multi-class classification** will result in multiple Partial Dependence Plots (one for each class), and multiple sets of Shapley Values (one for each class). ## Stretch Goals #### Partial Dependence Plots - [ ] Make multiple PDPs with 1 feature in isolation. - [ ] Make multiple PDPs with 2 features in interaction. - [ ] Use Plotly to make a 3D PDP. - [ ] Make PDPs with categorical feature(s). Use Ordinal Encoder, outside of a pipeline, to encode your data first. If there is a natural ordering, then take the time to encode it that way, instead of random integers. Then use the encoded data with pdpbox. Get readable category names on your plot, instead of integer category codes. #### Shap Values - [ ] Make Shapley force plots to explain at least 4 individual predictions. 
- If your project is Binary Classification, you can do a True Positive, True Negative, False Positive, False Negative. - If your project is Regression, you can do a high prediction with low error, a low prediction with low error, a high prediction with high error, and a low prediction with high error. - [ ] Use Shapley values to display verbal explanations of individual predictions. - [ ] Use the SHAP library for other visualization types. The [SHAP repo](https://github.com/slundberg/shap) has examples for many visualization types, including: - Force Plot, individual predictions - Force Plot, multiple predictions - Dependence Plot - Summary Plot - Summary Plot, Bar - Interaction Values - Decision Plots We just did the first type during the lesson. The [Kaggle microcourse](https://www.kaggle.com/dansbecker/advanced-uses-of-shap-values) shows two more. Experiment and see what you can learn! ### Links #### Partial Dependence Plots - [Kaggle / Dan Becker: Machine Learning Explainability — Partial Dependence Plots](https://www.kaggle.com/dansbecker/partial-plots) - [Christoph Molnar: Interpretable Machine Learning — Partial Dependence Plots](https://christophm.github.io/interpretable-ml-book/pdp.html) + [animated explanation](https://twitter.com/ChristophMolnar/status/1066398522608635904) - [pdpbox repo](https://github.com/SauceCat/PDPbox) & [docs](https://pdpbox.readthedocs.io/en/latest/) - [Plotly: 3D PDP example](https://plot.ly/scikit-learn/plot-partial-dependence/#partial-dependence-of-house-value-on-median-age-and-average-occupancy) #### Shapley Values - [Kaggle / Dan Becker: Machine Learning Explainability — SHAP Values](https://www.kaggle.com/learn/machine-learning-explainability) - [Christoph Molnar: Interpretable Machine Learning — Shapley Values](https://christophm.github.io/interpretable-ml-book/shapley.html) - [SHAP repo](https://github.com/slundberg/shap) & [docs](https://shap.readthedocs.io/en/latest/) ``` %%capture import sys # If you're on Colab: if 
'google.colab' in sys.modules: DATA_PATH = 'https://raw.githubusercontent.com/LambdaSchool/DS-Unit-2-Applied-Modeling/master/data/' !pip install category_encoders==2.* !pip install eli5 !pip install pdpbox !pip install shap # If you're working locally: else: DATA_PATH = '../data/' import pandas as pd import numpy as np import matplotlib.pyplot as plt df = pd.read_csv('https://raw.githubusercontent.com/Ekram49/DS-Unit-1-Build/master/ContinousDataset.csv') df.head() df = df.rename(columns={"Team 1": "Team_1", "Team 2": "Team_2", "Team 1": "Team_1","Match Date":"Match_Date"}) df.head() df = df[(((df['Team_1'] == 'India') | (df['Team_2'] == 'India'))) & (((df['Team_1'] == 'Pakistan') | (df['Team_2'] == 'Pakistan'))) ] df.head() ``` # Baseline ``` df['Winner'].value_counts(normalize = True) import seaborn as sns sns.countplot(df['Winner']) df.isna().sum().sort_values() df = df.fillna('Missing') df.isna().sum().sort_values() ``` # New Features ``` df['played_at_home'] = (df['Host_Country'] == 'India') df['played_at_Pakistan'] = (df['Host_Country'] == 'Pakistan') df['Played_in_neutral'] = (df['Host_Country'] != 'India') & (df['Host_Country'] != 'Pakistan') from sklearn.model_selection import train_test_split train, test = train_test_split(df, train_size = .8, test_size = .2, stratify = df['Winner'], random_state =42) train, val = train_test_split(train, train_size = .8, test_size = .2, stratify = train['Winner'], random_state =42) ``` # Feature selection ``` target = 'Winner' train.describe(exclude = 'number').T.sort_values(by = 'unique', ascending = False) # Removing columns with high cordinality high_cardinality = 'Scorecard', 'Match_Date' # Margin will cause data leakage features = train.columns.drop(['Unnamed: 0', 'Winner', 'Scorecard', 'Match_Date', 'Margin']) X_train = train[features] y_train = train[target] X_val = val[features] y_val = val[target] X_test = test[features] y_test = test[target] !pip install --upgrade category_encoders import category_encoders as ce 
from sklearn.pipeline import make_pipeline from xgboost import XGBClassifier pipeline = make_pipeline( ce.OneHotEncoder(use_cat_names= True), XGBClassifier(n_estimators=100, random_state=42, n_jobs=-1) ) pipeline.fit(X_train, y_train) ``` # Validation accuracy ``` from sklearn.metrics import accuracy_score y_pred_val = pipeline.predict(X_val) accuracy_score(y_val, y_pred_val) ``` # Test accuracy ``` y_pred_test = pipeline.predict(X_test) accuracy_score(y_test, y_pred_test) ``` # PDP ``` import category_encoders as ce import seaborn as sns from sklearn.ensemble import RandomForestClassifier encoder = ce.OrdinalEncoder() X_train_encoded = encoder.fit_transform(X_train) model = RandomForestClassifier(n_estimators=100, random_state=42, n_jobs=-1) model.fit(X_train_encoded, y_train) %matplotlib inline import matplotlib.pyplot as plt from pdpbox import pdp feature = 'Host_Country' pdp_dist = pdp.pdp_isolate(model=model, dataset=X_train_encoded, model_features=features, feature=feature) pdp.pdp_plot(pdp_dist, feature); ``` # With two features ``` from pdpbox.pdp import pdp_interact, pdp_interact_plot features = ['Host_Country', 'Ground'] interaction = pdp_interact( model=model, dataset=X_train_encoded, model_features=X_train_encoded.columns, features=features ) pdp_interact_plot(interaction, plot_type='grid', feature_names=features); ```
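A partial dependence curve is conceptually simple: sweep one feature over a grid, force every row in the dataset to that value, and average the model's predictions. A hand-rolled sketch with a hypothetical scoring function standing in for the fitted pipeline (pdpbox automates this, including the plotting):

```python
def toy_model(host_is_india, ground_quality):
    # hypothetical scoring function standing in for a fitted classifier
    return 0.5 + 0.3 * host_is_india - 0.1 * ground_quality

# observed rows: (host_is_india, ground_quality)
data = [(1, 0.2), (0, 0.8), (1, 0.5), (0, 0.1)]

def partial_dependence(feature_values):
    """Average prediction over the dataset with feature 0 forced to each grid value."""
    curve = []
    for v in feature_values:
        preds = [toy_model(v, g) for _, g in data]  # hold other features at observed values
        curve.append(sum(preds) / len(preds))
    return curve

pd_curve = partial_dependence([0, 1])
```

Because the other features stay at their observed values, the curve reflects the model's marginal response to the swept feature rather than a single hand-picked row.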
# Matrix Multiplication

Author: Yoseph K. Soenggoro

```
from random import random
from itertools import product
import numpy as np

# Check your NumPy version (I used 1.16.4).
# If the program is incompatible with your NumPy version, use pip or conda to set the appropriate version
np.__version__

# Choose the value of n, the dimension for Matrix X and Y
n = 3

# Choose d as the range of random values for Matrix X and Y.
# By choosing value d, each element of Matrix X and Y will be a real number between 0 and d, but never d.
d = 10
```

Before starting to multiply any two matrices, first define two different matrices $X$ and $Y$ using the `random` library.

```
# Define Matrix X and Matrix Y
X = []
Y = []
for i in range(0, n):
    x_row = []
    for j in range(0, n):
        x_val = random() * d
        x_row.append(x_val)
    X.append(x_row)
for i in range(0, n):
    y_row = []
    for j in range(0, n):
        y_val = random() * d
        y_row.append(y_val)
    Y.append(y_row)

# Function to print the matrices
def print_matrix(X):
    matrix_string = ''
    for i, j in product(range(0, n), range(0, n)):
        matrix_string += f'{X[i][j]}' + ('\t' if j != n - 1 else '\n')
    print(matrix_string)

# Print X to Check
print_matrix(X)

# Print Y to Check
print_matrix(Y)
```

### Matrix Multiplication Formula (Linear Algebra)

Given $n \times n$ matrices $X$ and $Y$, as follows:

\begin{align}
X =
\begin{bmatrix}
x_{1, 1} & x_{1, 2} & \dots & x_{1, n} \\
x_{2, 1} & x_{2, 2} & \dots & x_{2, n} \\
\vdots & \vdots & \ddots & \vdots \\
x_{n, 1} & x_{n, 2} & \dots & x_{n, n}
\end{bmatrix}
, \quad
Y =
\begin{bmatrix}
y_{1, 1} & y_{1, 2} & \dots & y_{1, n} \\
y_{2, 1} & y_{2, 2} & \dots & y_{2, n} \\
\vdots & \vdots & \ddots & \vdots \\
y_{n, 1} & y_{n, 2} & \dots & y_{n, n}
\end{bmatrix}
\end{align}

then the multiplication is defined by the following formula:

\begin{align}
X \cdot Y = \left[\sum_{k = 1}^n x_{i, k} \cdot y_{k, j}\right]_{i, j = 1}^n
\end{align}

### Implementation \#1: Functional Paradigm

The simplest way to implement Matrix Multiplication
is by using modular functions that can be reused multiple times within a program. Given the formula above, the Python implementation will be as follows.

```
# Function to implement Matrix Multiplication of Matrix X and Y
def matrix_mul(X, Y):
    Z = []
    for i in range(0, n):
        z_row = []
        for j in range(0, n):
            z_val = 0
            for k in range(0, n):
                z_val += X[i][k] * Y[k][j]
            z_row.append(z_val)
        Z.append(z_row)
    return Z
```

For the multiplication between $X$ and $Y$, the result will be kept in variable $Z$.

```
Z = matrix_mul(X, Y)
print_matrix(Z)
```

### Check Validity on Matrix Multiplication Function

Despite having a working matrix multiplication implementation in functional form, we still have no idea whether its result is right or wrong. One way to validate the result is to compare it against `NumPy`'s `matmul` API.

```
# Function to compare the Matrix Multiplication Function to NumPy's matmul
def check_matrix_mul(X, Y):
    print('Starting Validation Process...\n\n\n')
    x = np.array(X)
    y = np.array(Y)
    z = np.matmul(x, y)
    Z = matrix_mul(X, Y)
    for i, j in product(range(0, n), range(0, n)):
        print(f'Checking index {(i, j)}... \t\t\t {round(z[i][j], 2) == round(Z[i][j], 2)}')
    print('\n')
    print('Validation Process Completed')

a = check_matrix_mul(X, Y)
```

Since all the checks return True, it can be confirmed that the implementation works successfully.

### Implementation \#2: Object-Oriented Paradigm

Another paradigm that can be used is OOP (Object-Oriented Programming), which represents a program as a set of objects with fields and methods to interact with them. In this case, first define a generalized form of matrices, which is known as Tensors.
The implementation of `Tensor` will be as follows: ``` class Tensor: def __init__(self, X): validation = self.__checking_validity(X) self.__dim = 2 self.tensor = X if validation else [] self.__dimension = self.__get_dimension_private(X) if validation else -1 def __get_dimension_private(self, X): if not check_child(X): return 1 else: # Check whether the size of each child are the same for i in range(0, len(X)): if not check_child(X[i]): return self.__dim else: get_dimension(X[i]) self.__dim += 1 return self.__dim def __checking_validity(self, X): self.__dim = 2 valid = True if not check_child(X): return valid else: dim_0 = get_dimension(X[0]) # Check whether the size of each child are the same for i in range(1, len(X)): self.__dim = 2 if get_dimension(X[i]) != dim_0: valid &= False break return valid # Getting the Value of Tensor Rank/Dimension (Not to be confused with Matrix Dimension) def get_dimension(self): return self.__dimension ``` Since Tensors are generalized form of matrices, it implies that it is possible to define `Matrix` class as a child class of `Tensor` with additional methods (some overrides the `Tensor`'s original methods). For operators, I only managed to override the multiplication operator for the sake of implementing Matrix Multiplication. Thus, other operator such as `+`, `-`, `/`, and others will not be available for the current implementation. ``` class Matrix(Tensor): def __init__(self, X): super().__init__(X) self.__matrix_string = '' def __str__(self): return self.__matrix_string if self.__check_matrix_validation() else '' # Check whether the given input X is a valid Matrix def __check_matrix_validation(self): valid = True try: for i, j in product(range(0, n), range(0, n)): self.__matrix_string += f'{self.tensor[i][j]}' + ('\t' if j != n - 1 else '\n') except: valid = False print('Matrix is Invalid. 
Create New Instance with appropriate inputs.') return valid # Get Matrix Dimension: Number of Columns and Rows def get_dimension(self): print(f'Matrix Dimension: ({len(self.tensor)}, {len(self.tensor[0])})' if self.__check_matrix_validation() else -1) return [len(self.tensor), len(self.tensor[0])] # Overriding Multiplication Operator for Matrix Multiplication # and Integer-Matrix Multiplication def __mul__(self, other): if isinstance(other, Matrix): Z = [] for i in range(0, n): z_row = [] for j in range(0, n): z_val = 0 for k in range(0, n): z_val += self.tensor[i][k] * other.tensor[k][j] z_row.append(z_val) Z.append(z_row) return Matrix(Z) elif isinstance(other, int): Z = [] for i in range(0, n): z_row = [] for j in range(0, n): z_row.append(self.tensor[i][j] * other) Z.append(z_row) return Matrix(Z) else: return NotImplemented # Overriding Reverse Multiplication to support Matrix-Integer Multiplication def __rmul__(self, other): if isinstance(other, int): Z = [] for i in range(0, n): z_row = [] for j in range(0, n): z_row.append(self.tensor[i][j] * other) Z.append(z_row) return Matrix(Z) else: return NotImplemented # Transform X and Y to Matrix Object x_obj = Matrix(X) y_obj = Matrix(Y) # Implement Matrix Multiplication as follows z_obj = x_obj * y_obj print(z_obj) ``` ### Check Validity on Matrix Multiplication using OOP Similar to the previous section, we still have no idea whether the result from our implementation is right or wrong. Hence, validation is highly important. Therefore, one method to validate the result will be again doing a comparison with `NumPy`'s implementation of `matmul` API. ``` # Function to compare the Matrix Multiplication Function to Numpy's matmul def check_matrix_mul_oop(X, Y): print('Starting Validation Process...\n\n\n') x = np.array(X) y = np.array(Y) z = np.matmul(x, y) Z = Matrix(X) * Matrix(Y) for i, j in product(range(0, n), range(0, n)): print(f'Checking index {(i, j)}... 
\t\t\t {round(z[i][j], 2) == round(Z.tensor[i][j], 2)}') print('\n') print('Validation Process Completed') a = check_matrix_mul_oop(X, Y) ``` Since after checking all the results are True, then it can be confirmed that the implentation works sucessfully. # Python Libraries - [NumPy](https://numpy.org/)
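Both `matrix_mul` and `Matrix.__mul__` above read the size `n` from module scope and therefore assume square matrices of one fixed size. As a minimal sketch (the name `matrix_mul_general` is mine, not part of the notebook), the same triple loop can infer its dimensions from the inputs and so handle rectangular matrices too:

```python
# Sketch: a dimension-inferring variant of matrix_mul. It does not depend
# on a module-level `n`, so it also handles rectangular matrices, and it
# raises ValueError on incompatible shapes.
def matrix_mul_general(X, Y):
    rows_x, cols_x = len(X), len(X[0])
    rows_y, cols_y = len(Y), len(Y[0])
    if cols_x != rows_y:
        raise ValueError(f'Cannot multiply ({rows_x}x{cols_x}) by ({rows_y}x{cols_y})')
    return [[sum(X[i][k] * Y[k][j] for k in range(cols_x))
             for j in range(cols_y)]
            for i in range(rows_x)]

# A (2x3) times a (3x2) gives a (2x2) result
A = [[1, 2, 3],
     [4, 5, 6]]
B = [[7, 8],
     [9, 10],
     [11, 12]]
print(matrix_mul_general(A, B))  # [[58, 64], [139, 154]]
```

The shape check replaces the silent assumption that both operands are `n x n`; everything else is the notebook's inner-product loop, just written as a comprehension.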
```
import pandas as pd
import numpy as np
import re
from scipy.integrate import odeint

# Read the data in, then select the relevant columns, and adjust the week so it is easier to realize
# as a time series.
virii = ["A (H1)", "A (H3)", "A (2009 H1N1)", "A (Subtyping not Performed)", "B"]
virus = "B"
file = "data/2007-2008_Region-5_WHO-NREVSS.csv"
fluData = pd.read_csv(file)[["YEAR", "WEEK", "TOTAL SPECIMENS"] + virii]
firstWeek = fluData["WEEK"][0]
fluData["T"] = fluData["WEEK"] + 52 * (fluData["WEEK"] < firstWeek)
fluData = fluData.drop(["YEAR", "WEEK"], axis=1)

match = re.match("^data/(\d+-\d+)_Region-(\d+)_.*", file)
title = "Flu Season " + match.groups()[0] + " for HHS Region " + match.groups()[1]
region = "HHS " + match.groups()[1]
match = re.match("^(\d+)-\d+.*", match.groups()[0])
popYear = match.groups()[0]

import matplotlib.pyplot as plt
%matplotlib inline
#plt.xkcd()
plt.style.use('ggplot')

tableau20 = [(31, 119, 180), (174, 199, 232), (255, 127, 14), (255, 187, 120),
             (44, 160, 44), (152, 223, 138), (214, 39, 40), (255, 152, 150),
             (148, 103, 189), (197, 176, 213), (140, 86, 75), (196, 156, 148),
             (227, 119, 194), (247, 182, 210), (127, 127, 127), (199, 199, 199),
             (188, 189, 34), (219, 219, 141), (23, 190, 207), (158, 218, 229)]

# Scale the RGB values to the [0, 1] range, which is the format matplotlib accepts.
for i in range(len(tableau20)):
    r, g, b = tableau20[i]
    tableau20[i] = (r / 255., g / 255., b / 255.)

plt.figure(figsize=(12,6))
for idx in [0, 1, 2, 3]:
    plt.plot(fluData['T'], fluData[virii[idx]], ls="--", lw=2.5, color=tableau20[idx*2], alpha=1)
    plt.scatter(fluData['T'], fluData[virii[idx]], color=tableau20[idx*2])
    y_pos = 200 + idx*50
    plt.text(40, y_pos, "Virus Strain:" + virii[idx], fontsize=8, color=tableau20[idx*2])
plt.title(title, fontsize=12)
plt.xlabel("Week of Flu Season", fontsize=10)
plt.ylabel("Infected Individuals", fontsize=10)

# Initial values of our states
popData = pd.read_csv('data/population_data.csv', index_col=0)

# N  - total population of the region
# I0 - initial infected -- we assume 1.
# R0 - initial recovered -- we assume none.
# S0 - initial susceptible -- S0 = N - I0 - R0
N = 52000000 #int(popData[popData['Year'] == int(popYear)]['HHS 5'])
I0 = 1
R0 = 0
S0 = N - R0 - I0
print("S0, ", S0)

gamma = 1/3
rho = 1.24
beta = rho*gamma

def deriv(y, t, N, beta, gamma):
    S, I, R = y
    dSdt = -beta * S * I / N
    dIdt = beta * S * I / N - gamma * I
    dRdt = gamma * I
    return dSdt, dIdt, dRdt

y0 = S0, I0, R0
min = 40
max = fluData['T'].max()
t = list(range(min*7, max*7))
w = [x/7 for x in t]

ret = odeint(deriv, y0, t, args=(N, beta, gamma))
S, I, R = ret.T
incidence_predicted = -np.diff(S[0:len(S)-1:7])
incidence_observed = fluData['B']
fraction_confirmed = incidence_observed.sum()/incidence_predicted.sum()

# Correct for the week of missed incidence
plotT = fluData['T'] - 7
plt.figure(figsize=(6,3))
plt.plot(plotT[2:], incidence_predicted*fraction_confirmed, color=tableau20[2])
plt.text(40, 100, "CDC Data for Influenza B", fontsize=12, color=tableau20[0])
plt.text(40, 150, "SIR Model Result", fontsize=12, color=tableau20[2])
plt.title(title, fontsize=12)
plt.xlabel("Week of Flu Season", fontsize=10)
plt.ylabel("Infected Individuals", fontsize=10)
```
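The `deriv` function above encodes the standard SIR equations. A small self-contained run on toy numbers (N = 1000, not the Region 5 population) confirms the conservation property the model relies on: the three compartments always sum to the total population.

```python
import numpy as np
from scipy.integrate import odeint

# Same SIR right-hand side as in the notebook, run on toy numbers.
def deriv(y, t, N, beta, gamma):
    S, I, R = y
    dSdt = -beta * S * I / N
    dIdt = beta * S * I / N - gamma * I
    dRdt = gamma * I
    return dSdt, dIdt, dRdt

N, I0, R0 = 1000, 1, 0
gamma = 1 / 3        # mean infectious period of 3 days, as above
rho = 1.24           # reproduction number used above; > 1, so the epidemic grows
beta = rho * gamma
t = np.linspace(0, 100, 101)
S, I, R = odeint(deriv, (N - I0 - R0, I0, R0), t, args=(N, beta, gamma)).T

# The compartments always sum to the total population
assert np.allclose(S + I + R, N)
```

Because `dSdt + dIdt + dRdt = 0` identically, any correct integration must keep `S + I + R` constant; checking that is a cheap sanity test before trusting the incidence curve the notebook plots.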
# Scene Classification-Test

## 1. Preprocess-KerasFolderClasses

- Import pkg
- Extract zip file
- Preview "scene_classes.csv"
- Preview "scene_{0}_annotations_20170922.json"
- Test the image and pickle function
- Split data into several pickle files

This part needs Jupyter Notebook to be started with "jupyter notebook --NotebookApp.iopub_data_rate_limit=1000000000" (https://github.com/jupyter/notebook/issues/2287)

Reference:
- https://challenger.ai/competitions
- https://github.com/jupyter/notebook/issues/2287

### Import pkg

```
import numpy as np
import pandas as pd
# import matplotlib as mpl
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
import seaborn as sns
%matplotlib inline

from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix

from keras.utils.np_utils import to_categorical # convert to one-hot-encoding
from keras.models import Sequential
from keras.layers import Dense, Dropout, Flatten, Conv2D, MaxPool2D, BatchNormalization
from keras.optimizers import Adam
from keras.preprocessing.image import ImageDataGenerator
from keras.callbacks import LearningRateScheduler, TensorBoard

# import zipfile
import os
import zipfile
import math
from time import time
from IPython.display import display
import pdb
import json
from PIL import Image
import glob
import pickle
```

### Extract zip file

```
input_path = 'input'
datasetName = 'test_a'
date = '20170922'
datasetFolder = input_path + '\\data_{0}'.format(datasetName)
zip_path = input_path + '\\ai_challenger_scene_{0}_{1}.zip'.format(datasetName, date)
extract_path = input_path + '\\ai_challenger_scene_{0}_{1}'.format(datasetName, date)
image_path = extract_path + '\\scene_{0}_images_{1}'.format(datasetName, date)
scene_classes_path = extract_path + '\\scene_classes.csv'
scene_annotations_path = extract_path + '\\scene_{0}_annotations_{1}.json'.format(datasetName, date)

print(input_path)
print(datasetFolder)
print(zip_path)
print(extract_path)
print(image_path)
print(scene_classes_path)
print(scene_annotations_path)

if not os.path.isdir(extract_path):
    with zipfile.ZipFile(zip_path) as file:
        for name in file.namelist():
            file.extract(name, input_path)
```

### Preview "scene_classes.csv"

```
scene_classes = pd.read_csv(scene_classes_path, header=None)
display(scene_classes.head())

def get_scene_name(lable_number, scene_classes_path):
    scene_classes = pd.read_csv(scene_classes_path, header=None)
    return scene_classes.loc[lable_number, 2]

print(get_scene_name(0, scene_classes_path))
```

### Copy images to ./input/data_test_a/test

```
from shutil import copy2

cwd = os.getcwd()
test_folder = os.path.join(cwd, datasetFolder)
test_sub_folder = os.path.join(test_folder, 'test')
if not os.path.isdir(test_folder):
    os.mkdir(test_folder)
    os.mkdir(test_sub_folder)
print(test_folder)
print(test_sub_folder)

trainDir = test_sub_folder
for image_id in os.listdir(os.path.join(cwd, image_path)):
    fileName = image_path + '/' + image_id
    # print(fileName)
    # print(trainDir)
    copy2(fileName, trainDir)
print('Done!')
```
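The extraction cell above builds paths with hard-coded `'\\'` separators (Windows-only) and extracts archive members one by one. A portable sketch of the same step, using `os.path.join` and `ZipFile.extractall` (the helper name `extract_dataset` is mine), demonstrated on a throwaway archive rather than the real dataset zip:

```python
import os
import tempfile
import zipfile

# Sketch: portable version of the extraction step above. os.path.join
# picks the right separator for the OS, and extractall replaces the
# per-name extract loop.
def extract_dataset(zip_path, target_dir):
    if not os.path.isdir(target_dir):
        os.makedirs(target_dir)
        with zipfile.ZipFile(zip_path) as archive:
            archive.extractall(target_dir)

# Demonstrate on a tiny throwaway archive
with tempfile.TemporaryDirectory() as tmp:
    demo_zip = os.path.join(tmp, 'demo.zip')
    with zipfile.ZipFile(demo_zip, 'w') as archive:
        archive.writestr('scene_classes.csv', '0,airport\n')
    out_dir = os.path.join(tmp, 'extracted')
    extract_dataset(demo_zip, out_dir)
    assert os.path.isfile(os.path.join(out_dir, 'scene_classes.csv'))
```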
```
%config InlineBackend.figure_format = 'retina'
import matplotlib.pyplot as plt
import numpy as np
np.set_printoptions(precision=3)
np.set_printoptions(suppress=True)
```

### Simple linear classification algorithm

| vector [x,y]  | label |
|---------------|-------|
| [ 0.0,  0.7]  | +1    |
| [-0.3, -0.5]  | -1    |
| [ 3.0,  0.1]  | +1    |
| [-0.1, -1.0]  | -1    |
| [-1.0,  1.1]  | -1    |
| [ 2.1, -3.0]  | +1    |

We can represent the data as a 2-dimensional numpy array

```
data = np.array([[ 0.0, 0.7],
                 [-0.3,-0.5],
                 [ 3.0, 0.1],
                 [-0.1,-1.0],
                 [-1.0, 1.1],
                 [ 2.1,-3.0]])
```

We can represent the labels as a simple numpy array of numbers

```
labels = np.array([ 1, -1, 1, -1, -1, +1])
```

We can plot the data using the following function:

```
def plot_data(data, labels):
    fig = plt.figure(figsize=(5,5))
    ax = fig.add_subplot(111)
    ax.scatter(data[:,0], data[:,1], c=labels, s=50, cmap=plt.cm.bwr, zorder=50)
    nudge = 0.08
    for i in range(data.shape[0]):
        d = data[i]
        ax.annotate(f'{i}', (d[0]+nudge, d[1]+nudge))
    ax.set_aspect('equal', 'datalim')
    plt.show()

plot_data(data, labels)
```

This is a function to evaluate the accuracy of the training

```
def eval_accuracy(data, labels, A, B, C):
    num_correct = 0
    data_len = data.shape[0]
    for i in range(data_len):
        X, Y = data[i]
        current_label = labels[i]
        output = A*X + B*Y + C
        predicted_label = 1 if output >= 1 else -1 if output <= -1 else 0
        if (predicted_label == current_label):
            num_correct += 1
    return np.round(num_correct / data_len, 3)

def create_meshgrid(data):
    h = 0.02
    x_min, x_max = data[:, 0].min() - 1, data[:, 0].max() + 1
    y_min, y_max = data[:, 1].min() - 1, data[:, 1].max() + 1
    xx, yy = np.meshgrid(np.arange(x_min, x_max, h), np.arange(y_min, y_max, h))
    return (xx, yy, np.ones(xx.shape))

def plot_learning_simple(grid, data, labels, A, B, C, iteration, accuracy):
    xx, yy, Z = grid
    for i in range(xx.shape[0]):      # row
        for j in range(yy.shape[1]):  # column
            X, Y = xx[i][j], yy[i][j]
            output = A*X + B*Y + C
            predicted_label = 1 if output >= 1 else -1 if output <= -1 else 0
            Z[i][j] = predicted_label
    fig = plt.figure(figsize=(5,5))
    ax = fig.add_subplot(111)
    plt.title(f'accuracy at the iteration {iteration}: {accuracy}')
    ax.contourf(xx, yy, Z, cmap=plt.cm.binary, alpha=0.1, zorder=15)
    ax.scatter(data[:, 0], data[:, 1], c=labels, s=50, cmap=plt.cm.bwr, zorder=50)
    ax.set_aspect('equal')
    nudge = 0.08
    for i in range(data.shape[0]):
        d = data[i]
        ax.annotate(f'{i}', (d[0]+nudge, d[1]+nudge))
    plt.xlim(xx.min(), xx.max())
    plt.ylim(yy.min(), yy.max())
    plt.show()
```

Here is the main algorithm.

```
def train_neural_network(data, labels, step_size, no_loops, iter_info):
    # A, B, and C are parameters of the function F. Here, they are set to 1, -2, -1
    A, B, C = 1, -2, -1

    # this function is used for plotting, it can be ignored
    grid = create_meshgrid(data)

    # the main training loop
    for i in range(no_loops):
        # we randomly select the data point, and store its info into: x,y,label
        index = np.random.randint(data.shape[0])
        X, Y = data[index]
        label = labels[index]

        # we calculate the output of the function
        output = A*X + B*Y + C

        # We need to define how to affect the output of our function.
        # If the label is 1 but the output is smaller than 1, we want to maximise.
        # If the label is -1 but the output is larger than -1, we want to minimise.
        sign = 1 if (label == 1 and output < 1) else -1 if (label == -1 and output > -1) else 0

        # partial derivative dF/dA is X, dF/dB is Y, and dF/dC is 1.
        dA, dB, dC = X, Y, 1

        # here we update the parameter values using partial derivatives
        A += dA * sign * step_size
        B += dB * sign * step_size
        C += dC * sign * step_size

        # after a number of iterations, show training accuracy and plot it
        if (i % iter_info == 0):
            accuracy = eval_accuracy(data, labels, A, B, C)
            plot_learning_simple(grid, data, labels, A, B, C, i, accuracy)

    # the algorithm returns the learned parameters A, B, and C
    return (A, B, C)

train_1 = train_neural_network(data, labels, 0.01, 2501, 500)
```

We can inspect the result by comparing the real label of a data point and the predicted label:

```
def show_prediction(train, data, labels):
    A, B, C = train
    for i in range(data.shape[0]):
        X, Y = data[i]
        label = labels[i]
        output = A*X + B*Y + C
        predicted_label = 1 if output >= 1 else -1 if output <= -1 else 0
        print(f'data point {i}: real label : {label}, pred. label: {predicted_label}, {(label==predicted_label)}')

show_prediction(train_1, data, labels)
```

---

#### Let's try with a different data set

```
data2 = np.array([[ 1.2, 0.7],
                  [-0.3,-0.5],
                  [ 3.0, 0.1],
                  [-0.1,-1.0],
                  [-0.0, 1.1],
                  [ 2.1,-1.3],
                  [ 3.1,-1.8],
                  [ 1.1,-0.1],
                  [ 1.5,-2.2],
                  [ 4.0,-1.0]])
labels2 = np.array([ 1, -1, 1, -1, -1, 1, -1, 1, -1, -1])

plot_data(data2, labels2)
train_2 = train_neural_network(data2, labels2, 0.01, 2501, 500)
show_prediction(train_2, data2, labels2)
```
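The per-point prediction rule in `eval_accuracy` and `show_prediction` can also be written vectorized, which avoids the Python loop over data points. This is a sketch (the function name `predict` and the parameter values are mine, illustrative rather than learned):

```python
import numpy as np

# Vectorized form of the prediction rule used above. Outputs strictly
# between -1 and 1 map to 0: the point falls inside the margin and gets
# no confident label.
def predict(points, A, B, C):
    output = A * points[:, 0] + B * points[:, 1] + C
    return np.where(output >= 1, 1, np.where(output <= -1, -1, 0))

pts = np.array([[0.0, 0.7], [-0.3, -0.5], [3.0, 0.1]])
true_labels = np.array([1, -1, 1])
preds = predict(pts, 1.0, 2.0, 0.5)   # illustrative parameters
accuracy = np.mean(preds == true_labels)
```

With these parameters the second point sits inside the margin (output -0.8), so it is predicted 0 and counted as wrong, giving an accuracy of 2/3.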
```
from collections import OrderedDict

import arviz as az
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import pymc as pm
import scipy as sp
from theano import shared

%config InlineBackend.figure_format = 'retina'
az.style.use('arviz-darkgrid')
```

#### Code 11.1

```
trolley_df = pd.read_csv('Data/Trolley.csv', sep=';')
trolley_df.head()
```

#### Code 11.2

```
ax = (trolley_df.response
                .value_counts()
                .sort_index()
                .plot(kind='bar'))
ax.set_xlabel("response", fontsize=14);
ax.set_ylabel("Frequency", fontsize=14);
```

#### Code 11.3

```
ax = (trolley_df.response
                .value_counts()
                .sort_index()
                .cumsum()
                .div(trolley_df.shape[0])
                .plot(marker='o'))
ax.set_xlim(0.9, 7.1);
ax.set_xlabel("response", fontsize=14)
ax.set_ylabel("cumulative proportion", fontsize=14);
```

#### Code 11.4

```
resp_lco = (trolley_df.response
                      .value_counts()
                      .sort_index()
                      .cumsum()
                      .iloc[:-1]
                      .div(trolley_df.shape[0])
                      .apply(lambda p: np.log(p / (1. - p))))

ax = resp_lco.plot(marker='o')
ax.set_xlim(0.9, 7);
ax.set_xlabel("response", fontsize=14)
ax.set_ylabel("log-cumulative-odds", fontsize=14);
```

#### Code 11.5

```
with pm.Model() as m11_1:
    a = pm.Normal(
        'a', 0., 10.,
        transform=pm.distributions.transforms.ordered,
        shape=6, testval=np.arange(6) - 2.5)

    resp_obs = pm.OrderedLogistic(
        'resp_obs', 0., a,
        observed=trolley_df.response.values - 1
    )

with m11_1:
    map_11_1 = pm.find_MAP()
```

#### Code 11.6

```
map_11_1['a']
```

#### Code 11.7

```
sp.special.expit(map_11_1['a'])
```

#### Code 11.8

```
with m11_1:
    trace_11_1 = pm.sample(1000, tune=1000)

az.summary(trace_11_1, var_names=['a'], credible_interval=.89, round_to=2)
```

#### Code 11.9

```
def ordered_logistic_proba(a):
    pa = sp.special.expit(a)
    p_cum = np.concatenate(([0.], pa, [1.]))
    return p_cum[1:] - p_cum[:-1]

ordered_logistic_proba(trace_11_1['a'].mean(axis=0))
```

#### Code 11.10

```
(ordered_logistic_proba(trace_11_1['a'].mean(axis=0)) \
    * (1 + np.arange(7))).sum()
```

#### Code 11.11

```
ordered_logistic_proba(trace_11_1['a'].mean(axis=0) - 0.5)
```

#### Code 11.12

```
(ordered_logistic_proba(trace_11_1['a'].mean(axis=0) - 0.5) \
    * (1 + np.arange(7))).sum()
```

#### Code 11.13

```
action = shared(trolley_df.action.values)
intention = shared(trolley_df.intention.values)
contact = shared(trolley_df.contact.values)

with pm.Model() as m11_2:
    a = pm.Normal(
        'a', 0., 10.,
        transform=pm.distributions.transforms.ordered,
        shape=6,
        testval=trace_11_1['a'].mean(axis=0)
    )

    bA = pm.Normal('bA', 0., 10.)
    bI = pm.Normal('bI', 0., 10.)
    bC = pm.Normal('bC', 0., 10.)
    phi = bA * action + bI * intention + bC * contact

    resp_obs = pm.OrderedLogistic(
        'resp_obs', phi, a,
        observed=trolley_df.response.values - 1
    )

with m11_2:
    map_11_2 = pm.find_MAP()
```

#### Code 11.14

```
with pm.Model() as m11_3:
    a = pm.Normal(
        'a', 0., 10.,
        transform=pm.distributions.transforms.ordered,
        shape=6,
        testval=trace_11_1['a'].mean(axis=0)
    )

    bA = pm.Normal('bA', 0., 10.)
    bI = pm.Normal('bI', 0., 10.)
    bC = pm.Normal('bC', 0., 10.)
    bAI = pm.Normal('bAI', 0., 10.)
    bCI = pm.Normal('bCI', 0., 10.)
    phi = bA * action + bI * intention + bC * contact \
            + bAI * action * intention \
            + bCI * contact * intention

    resp_obs = pm.OrderedLogistic(
        'resp_obs', phi, a,
        observed=trolley_df.response - 1
    )

with m11_3:
    map_11_3 = pm.find_MAP()
```

#### Code 11.15

```
def get_coefs(map_est):
    coefs = OrderedDict()
    for i, ai in enumerate(map_est['a']):
        coefs[f'a_{i}'] = ai
    coefs['bA'] = map_est.get('bA', np.nan)
    coefs['bI'] = map_est.get('bI', np.nan)
    coefs['bC'] = map_est.get('bC', np.nan)
    coefs['bAI'] = map_est.get('bAI', np.nan)
    coefs['bCI'] = map_est.get('bCI', np.nan)
    return coefs

(pd.DataFrame.from_dict(
    OrderedDict([
        ('m11_1', get_coefs(map_11_1)),
        ('m11_2', get_coefs(map_11_2)),
        ('m11_3', get_coefs(map_11_3))
    ]))
    .astype(np.float64)
    .round(2))
```

#### Code 11.16

```
with m11_2:
    trace_11_2 = pm.sample(1000, tune=1000)

with m11_3:
    trace_11_3 = pm.sample(1000, tune=1000)

comp_df = pm.compare({m11_1:trace_11_1, m11_2:trace_11_2, m11_3:trace_11_3})
comp_df.loc[:,'model'] = pd.Series(['m11.1', 'm11.2', 'm11.3'])
comp_df = comp_df.set_index('model')
comp_df
```

#### Code 11.17-19

```
pp_df = pd.DataFrame(np.array([[0, 0, 0],
                               [0, 0, 1],
                               [1, 0, 0],
                               [1, 0, 1],
                               [0, 1, 0],
                               [0, 1, 1]]),
                     columns=['action', 'contact', 'intention'])
pp_df

action.set_value(pp_df.action.values)
contact.set_value(pp_df.contact.values)
intention.set_value(pp_df.intention.values)

with m11_3:
    pp_trace_11_3 = pm.sample_ppc(trace_11_3, samples=1500)

PP_COLS = [f'pp_{i}' for i, _ in enumerate(pp_trace_11_3['resp_obs'])]

pp_df = pd.concat((pp_df,
                   pd.DataFrame(pp_trace_11_3['resp_obs'].T, columns=PP_COLS)),
                  axis=1)

pp_cum_df = (pd.melt(
                pp_df,
                id_vars=['action', 'contact', 'intention'],
                value_vars=PP_COLS, value_name='resp'
             )
               .groupby(['action', 'contact', 'intention', 'resp'])
               .size()
               .div(1500)
               .rename('proba')
               .reset_index()
               .pivot_table(
                   index=['action', 'contact', 'intention'],
                   values='proba',
                   columns='resp'
               )
               .cumsum(axis=1)
               .iloc[:, :-1])
pp_cum_df

for (plot_action, plot_contact), plot_df in pp_cum_df.groupby(level=['action', 'contact']):
    fig, ax = plt.subplots(figsize=(8, 6))

    ax.plot([0, 1], plot_df, c='C0');
    ax.plot([0, 1], [0, 0], '--', c='C0');
    ax.plot([0, 1], [1, 1], '--', c='C0');

    ax.set_xlim(0, 1);
    ax.set_xlabel("intention");

    ax.set_ylim(-0.05, 1.05);
    ax.set_ylabel("probability");

    ax.set_title(
        "action = {action}, contact = {contact}".format(
            action=plot_action, contact=plot_contact
        )
    );
```

#### Code 11.20

```
# define parameters
PROB_DRINK = 0.2  # 20% of days
RATE_WORK = 1.    # average 1 manuscript per day

# sample one year of production
N = 365
drink = np.random.binomial(1, PROB_DRINK, size=N)
y = (1 - drink) * np.random.poisson(RATE_WORK, size=N)
```

#### Code 11.21

```
drink_zeros = drink.sum()
work_zeros = (y == 0).sum() - drink_zeros

bins = np.arange(y.max() + 1) - 0.5
plt.hist(y, bins=bins);
plt.bar(0., drink_zeros, width=1., bottom=work_zeros, color='C1', alpha=.5);
plt.xticks(bins + 0.5);
plt.xlabel("manuscripts completed");
plt.ylabel("Frequency");
```

#### Code 11.22

```
with pm.Model() as m11_4:
    ap = pm.Normal('ap', 0., 1.)
    p = pm.math.sigmoid(ap)

    al = pm.Normal('al', 0., 10.)
    lambda_ = pm.math.exp(al)

    y_obs = pm.ZeroInflatedPoisson('y_obs', 1. - p, lambda_, observed=y)

with m11_4:
    map_11_4 = pm.find_MAP()

map_11_4
```

#### Code 11.23

```
sp.special.expit(map_11_4['ap'])  # probability drink
np.exp(map_11_4['al'])            # rate finish manuscripts, when not drinking
```

#### Code 11.24

```
def dzip(x, p, lambda_, log=True):
    like = p**(x == 0) + (1 - p) * sp.stats.poisson.pmf(x, lambda_)
    return np.log(like) if log else like
```

#### Code 11.25

```
PBAR = 0.5
THETA = 5.

a = PBAR * THETA
b = (1 - PBAR) * THETA

p = np.linspace(0, 1, 100)
plt.plot(p, sp.stats.beta.pdf(p, a, b));
plt.xlim(0, 1);
plt.xlabel("probability");
plt.ylabel("Density");
```

#### Code 11.26

```
admit_df = pd.read_csv('Data/UCBadmit.csv', sep=';')
admit_df.head()

with pm.Model() as m11_5:
    a = pm.Normal('a', 0., 2.)
    pbar = pm.Deterministic('pbar', pm.math.sigmoid(a))
    theta = pm.Exponential('theta', 1.)

    admit_obs = pm.BetaBinomial(
        'admit_obs',
        pbar * theta, (1. - pbar) * theta,
        admit_df.applications.values,
        observed=admit_df.admit.values
    )

with m11_5:
    trace_11_5 = pm.sample(1000, tune=1000)
```

#### Code 11.27

```
pm.summary(trace_11_5, alpha=.11).round(2)
```

#### Code 11.28

```
np.percentile(trace_11_5['pbar'], [2.5, 50., 97.5])
```

#### Code 11.29

```
pbar_hat = trace_11_5['pbar'].mean()
theta_hat = trace_11_5['theta'].mean()

p_plot = np.linspace(0, 1, 100)

plt.plot(
    p_plot,
    sp.stats.beta.pdf(p_plot, pbar_hat * theta_hat, (1. - pbar_hat) * theta_hat)
);
plt.plot(
    p_plot,
    sp.stats.beta.pdf(
        p_plot[:, np.newaxis],
        trace_11_5['pbar'][:100] * trace_11_5['theta'][:100],
        (1. - trace_11_5['pbar'][:100]) * trace_11_5['theta'][:100]
    ),
    c='C0', alpha=0.1
);
plt.xlim(0., 1.);
plt.xlabel("probability admit");
plt.ylim(0., 3.);
plt.ylabel("Density");
```

#### Code 11.30

```
with m11_5:
    pp_trace_11_5 = pm.sample_ppc(trace_11_5)

x_case = np.arange(admit_df.shape[0])

plt.scatter(
    x_case,
    pp_trace_11_5['admit_obs'].mean(axis=0) \
        / admit_df.applications.values
);
plt.scatter(x_case, admit_df.admit / admit_df.applications);

high = np.percentile(pp_trace_11_5['admit_obs'], 95, axis=0) \
        / admit_df.applications.values
plt.scatter(x_case, high, marker='x', c='k');

low = np.percentile(pp_trace_11_5['admit_obs'], 5, axis=0) \
        / admit_df.applications.values
plt.scatter(x_case, low, marker='x', c='k');
```

#### Code 11.31

```
mu = 3.
theta = 1.

x = np.linspace(0, 10, 100)
plt.plot(x, sp.stats.gamma.pdf(x, mu / theta, scale=theta));

import platform
import sys

import IPython
import matplotlib
import scipy

print("This notebook was created on a computer {} running {} and using:\nPython {}\nIPython {}\nPyMC {}\nNumPy {}\nPandas {}\nSciPy {}\nMatplotlib {}\n".format(platform.machine(), ' '.join(platform.linux_distribution()[:2]), sys.version[:5], IPython.__version__, pm.__version__, np.__version__, pd.__version__, scipy.__version__, matplotlib.__version__))
```
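The `ordered_logistic_proba` helper (Code 11.9) is the heart of the ordered-logistic link used throughout this chapter: each cutpoint `a_k` is a log-cumulative-odds, `expit` maps it back to a cumulative probability, and successive differences recover per-category probabilities. A quick standalone check on toy, evenly spaced cutpoints (illustrative values, not the fitted ones):

```python
import numpy as np
from scipy.special import expit

# Mirrors the notebook's ordered_logistic_proba (Code 11.9).
def ordered_logistic_proba(a):
    p_cum = np.concatenate(([0.], expit(a), [1.]))
    return p_cum[1:] - p_cum[:-1]

# Five ordered cutpoints imply six response categories
a = np.array([-2.0, -1.0, 0.0, 1.0, 2.0])
p = ordered_logistic_proba(a)

# A valid probability vector: non-negative and summing to one
assert np.isclose(p.sum(), 1.0) and (p >= 0).all()
```

As long as the cutpoints are ordered, `expit` is monotone, so the successive differences are guaranteed non-negative; the padding with 0 and 1 guarantees the probabilities sum to one.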
<a href="https://colab.research.google.com/github/iesous-kurios/DS-Unit-2-Applied-Modeling/blob/master/module4/BuildWeekProject.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> ``` %%capture import sys # If you're on Colab: if 'google.colab' in sys.modules: DATA_PATH = 'https://raw.githubusercontent.com/LambdaSchool/DS-Unit-2-Applied-Modeling/master/data/' !pip install category_encoders==2.* !pip install eli5 # If you're working locally: else: DATA_PATH = '../data/' # all imports needed for this sheet import numpy as np import pandas as pd from sklearn.model_selection import train_test_split import category_encoders as ce from sklearn.ensemble import RandomForestClassifier from sklearn.impute import SimpleImputer from sklearn.pipeline import make_pipeline from sklearn.feature_selection import f_regression, SelectKBest from sklearn.linear_model import Ridge from sklearn.model_selection import cross_val_score from sklearn.preprocessing import StandardScaler import matplotlib.pyplot as plt from sklearn.model_selection import validation_curve from sklearn.tree import DecisionTreeRegressor import xgboost as xgb %matplotlib inline import seaborn as sns from sklearn.metrics import accuracy_score from sklearn.model_selection import GridSearchCV, RandomizedSearchCV df = pd.read_excel('/content/pipeline_pickle.xlsx') ``` I chose "exit to permanent" housing as my target due to my belief that accurately predicting this feature would have the largest impact on actual people experiencing homelessness in my county. 
Developing and fine tuning an accurate model with our data could also lead to major improvements in our county's efforts at addressing the homelessness problem among singles as well (as our shelter only serves families) ``` exit_reasons = ['Rental by client with RRH or equivalent subsidy', 'Rental by client, no ongoing housing subsidy', 'Staying or living with family, permanent tenure', 'Rental by client, other ongoing housing subsidy', 'Permanent housing (other than RRH) for formerly homeless persons', 'Staying or living with friends, permanent tenure', 'Owned by client, with ongoing housing subsidy', 'Rental by client, VASH housing Subsidy' ] # pull all exit destinations from main data file and sum up the totals of each destination, # placing them into new df for calculations exits = df['3.12 Exit Destination'].value_counts() # create target column (multiple types of exits to perm) df['perm_leaver'] = df['3.12 Exit Destination'].isin(exit_reasons) # replace spaces with underscore df.columns = df.columns.str.replace(' ', '_') df = df.rename(columns = {'Length_of_Time_Homeless_(3.917_Approximate_Start)':'length_homeless', '4.2_Income_Total_at_Entry':'entry_income' }) ``` If a person were to guess "did not exit to permanent" housing every single time, they would be correct approximately 63 percent of the time. I am hoping that through this project, we will be able to provide more focused case management services to guests that displayed features which my model predicted as contributing negatively toward their chances of having an exit to permanent housing. It is my hope that a year from now, the base case will be flipped, and you would need to guess "did exit to permanent housing" to be correct approximately 63 percent of the time. 
``` # base case df['perm_leaver'].value_counts(normalize=True) # see size of df prior to dropping empties df.shape # drop rows with no exit destination (current guests at time of report) df = df.dropna(subset=['3.12_Exit_Destination']) # shape of df after dropping current guests df.shape df.to_csv('/content/n_alltime.csv') # verify no NaN in exit destination feature df['3.12_Exit_Destination'].isna().value_counts() import numpy as np import pandas as pd from sklearn.model_selection import train_test_split train = df # Split train into train & val #train, val = train_test_split(train, train_size=0.80, test_size=0.20, # stratify=train['perm_leaver'], random_state=42) # Do train/test split # Use data from Jan -March 2019 to train # Use data from April 2019 to test df['enroll_date'] = pd.to_datetime(df['3.10_Enroll_Date'], infer_datetime_format=True) cutoff = pd.to_datetime('2019-01-01') train = df[df.enroll_date < cutoff] test = df[df.enroll_date >= cutoff] def wrangle(X): """Wrangle train, validate, and test sets in the same way""" # Prevent SettingWithCopyWarning X = X.copy() # drop any private information X = X.drop(columns=['3.1_FirstName', '3.1_LastName', '3.2_SocSecNo', '3.3_Birthdate', 'V5_Prior_Address']) # drop unusable columns X = X.drop(columns=['2.1_Organization_Name', '2.4_ProjectType', 'WorkSource_Referral_Most_Recent', 'YAHP_Referral_Most_Recent', 'SOAR_Enrollment_Determination_(Most_Recent)', 'R7_General_Health_Status', 'R8_Dental_Health_Status', 'R9_Mental_Health_Status', 'RRH_Date_Of_Move-In', 'RRH_In_Permanent_Housing', 'R10_Pregnancy_Due_Date', 'R10_Pregnancy_Status', 'R1_Referral_Source', 'R2_Date_Status_Determined', 'R2_Enroll_Status', 'R2_Reason_Why_No_Services_Funded', 'R2_Runaway_Youth', 'R3_Sexual_Orientation', '2.5_Utilization_Tracking_Method_(Invalid)', '2.2_Project_Name', '2.6_Federal_Grant_Programs', '3.16_Client_Location', '3.917_Stayed_Less_Than_90_Days', '3.917b_Stayed_in_Streets,_ES_or_SH_Night_Before', 
'3.917b_Stayed_Less_Than_7_Nights', '4.24_In_School_(Retired_Data_Element)', 'CaseChildren', 'ClientID', 'HEN-HP_Referral_Most_Recent', 'HEN-RRH_Referral_Most_Recent', 'Emergency_Shelter_|_Most_Recent_Enrollment', 'ProgramType', 'Days_Enrolled_Until_RRH_Date_of_Move-in', 'CurrentDate', 'Current_Age', 'Count_of_Bed_Nights_-_Entire_Episode', 'Bed_Nights_During_Report_Period']) # drop rows with no exit destination (current guests at time of report) X = X.dropna(subset=['3.12_Exit_Destination']) # remove columns to avoid data leakage X = X.drop(columns=['3.12_Exit_Destination', '5.9_Household_ID', '5.8_Personal_ID', '4.2_Income_Total_at_Exit', '4.3_Non-Cash_Benefit_Count_at_Exit']) # Drop needless feature unusable_variance = ['Enrollment_Created_By', '4.24_Current_Status_(Retired_Data_Element)'] X = X.drop(columns=unusable_variance) # Drop columns with timestamp timestamp_columns = ['3.10_Enroll_Date', '3.11_Exit_Date', 'Date_of_Last_ES_Stay_(Beta)', 'Date_of_First_ES_Stay_(Beta)', 'Prevention_|_Most_Recent_Enrollment', 'PSH_|_Most_Recent_Enrollment', 'Transitional_Housing_|_Most_Recent_Enrollment', 'Coordinated_Entry_|_Most_Recent_Enrollment', 'Street_Outreach_|_Most_Recent_Enrollment', 'RRH_|_Most_Recent_Enrollment', 'SOAR_Eligibility_Determination_(Most_Recent)', 'Date_of_First_Contact_(Beta)', 'Date_of_Last_Contact_(Beta)', '4.13_Engagement_Date', '4.11_Domestic_Violence_-_When_it_Occurred', '3.917_Homeless_Start_Date'] X = X.drop(columns=timestamp_columns) # return the wrangled dataframe return X train.shape test.shape train = wrangle(train) test = wrangle(test) # Hand pick features only known at entry to avoid data leakage features = ['CaseMembers', '3.2_Social_Security_Quality', '3.3_Birthdate_Quality', 'Age_at_Enrollment', '3.4_Race', '3.5_Ethnicity', '3.6_Gender', '3.7_Veteran_Status', '3.8_Disabling_Condition_at_Entry', '3.917_Living_Situation', 'length_homeless', '3.917_Times_Homeless_Last_3_Years', '3.917_Total_Months_Homeless_Last_3_Years', 
'V5_Last_Permanent_Address', 'V5_State', 'V5_Zip', 'Municipality_(City_or_County)', '4.1_Housing_Status', '4.4_Covered_by_Health_Insurance', '4.11_Domestic_Violence', '4.11_Domestic_Violence_-_Currently_Fleeing_DV?', 'Household_Type', 'R4_Last_Grade_Completed', 'R5_School_Status', 'R6_Employed_Status', 'R6_Why_Not_Employed', 'R6_Type_of_Employment', 'R6_Looking_for_Work', 'entry_income', '4.3_Non-Cash_Benefit_Count', 'Barrier_Count_at_Entry', 'Chronic_Homeless_Status', 'Under_25_Years_Old', '4.10_Alcohol_Abuse_(Substance_Abuse)', '4.07_Chronic_Health_Condition', '4.06_Developmental_Disability', '4.10_Drug_Abuse_(Substance_Abuse)', '4.08_HIV/AIDS', '4.09_Mental_Health_Problem', '4.05_Physical_Disability' ] target = 'perm_leaver' X_train = train[features] y_train = train[target] X_test = test[features] y_test = test[target] # base case df['perm_leaver'].value_counts(normalize=True) # fit linear model to get a 3 on Sprint from sklearn.linear_model import LogisticRegression encoder = ce.OneHotEncoder(use_cat_names=True) X_train_encoded = encoder.fit_transform(X_train) X_test_encoded = encoder.transform(X_test) imputer = SimpleImputer() X_train_imputed = imputer.fit_transform(X_train_encoded) X_test_imputed = imputer.transform(X_test_encoded) scaler = StandardScaler() X_train_scaled = scaler.fit_transform(X_train_imputed) X_test_scaled = scaler.transform(X_test_imputed) model = LogisticRegression(random_state=42, max_iter=5000) model.fit(X_train_scaled, y_train) print ('Validation Accuracy', model.score(X_test_scaled,y_test)) ``` Linear model above beat the baseline model, now let's see if we can get even more accurate with a tree-based model ``` import category_encoders as ce from sklearn.impute import SimpleImputer from sklearn.ensemble import RandomForestClassifier from sklearn.model_selection import train_test_split from sklearn.pipeline import make_pipeline from sklearn.ensemble import GradientBoostingClassifier # Make pipeline! 
pipeline = make_pipeline( ce.OrdinalEncoder(), SimpleImputer(strategy='most_frequent'), RandomForestClassifier(n_estimators=100, n_jobs=-1, random_state=42, ) ) # Fit on train, score on val pipeline.fit(X_train, y_train) y_pred = pipeline.predict(X_test) print('Validation Accuracy', accuracy_score(y_test, y_pred)) from joblib import dump dump(pipeline, 'pipeline.joblib', compress=True) # get and plot feature importances # Linear models have coefficients whereas decision trees have "Feature Importances" import matplotlib.pyplot as plt model = pipeline.named_steps['randomforestclassifier'] encoder = pipeline.named_steps['ordinalencoder'] encoded_columns = encoder.transform(X_test).columns importances = pd.Series(model.feature_importances_, encoded_columns) n = 20 plt.figure(figsize=(10,n/2)) plt.title(f'Top {n} features') importances.sort_values()[-n:].plot.barh(color='grey'); # cross validation k = 3 scores = cross_val_score(pipeline, X_train, y_train, cv=k, scoring='accuracy') print(f'MAE for {k} folds:', -scores) -scores.mean() ``` Now that we have beaten the linear model with a tree based model, let us see if xgboost does a better job at predicting exit destination ``` from xgboost import XGBClassifier pipeline = make_pipeline( ce.OrdinalEncoder(), XGBClassifier(n_estimators=100, random_state=42, n_jobs=-1) ) # Fit on train, score on val pipeline.fit(X_train, y_train) print('Validation Accuracy:', pipeline.score(X_test, y_test)) ``` xgboost failed to beat my tree-based model, so the tree-based model is what I will use for my prediction on my web-app ``` # get and plot feature importances # Linear models have coefficients whereas decision trees have "Feature Importances" import matplotlib.pyplot as plt model = pipeline.named_steps['xgbclassifier'] encoder = pipeline.named_steps['ordinalencoder'] encoded_columns = encoder.transform(X_test).columns importances = pd.Series(model.feature_importances_, encoded_columns) n = 20 plt.figure(figsize=(10,n/2)) 
plt.title(f'Top {n} features') importances.sort_values()[-n:].plot.barh(color='grey'); history = pd.read_csv('/content/n_alltime.csv') from plotly.tools import mpl_to_plotly import seaborn as sns from sklearn.metrics import accuracy_score from sklearn.model_selection import GridSearchCV, RandomizedSearchCV # Assign to X, y to avoid data leakage features = ['CaseMembers', '3.2_Social_Security_Quality', '3.3_Birthdate_Quality', 'Age_at_Enrollment', '3.4_Race', '3.5_Ethnicity', '3.6_Gender', '3.7_Veteran_Status', '3.8_Disabling_Condition_at_Entry', '3.917_Living_Situation', 'length_homeless', '3.917_Times_Homeless_Last_3_Years', '3.917_Total_Months_Homeless_Last_3_Years', 'V5_Last_Permanent_Address', 'V5_State', 'V5_Zip', 'Municipality_(City_or_County)', '4.1_Housing_Status', '4.4_Covered_by_Health_Insurance', '4.11_Domestic_Violence', '4.11_Domestic_Violence_-_Currently_Fleeing_DV?', 'Household_Type', 'R4_Last_Grade_Completed', 'R5_School_Status', 'R6_Employed_Status', 'R6_Why_Not_Employed', 'R6_Type_of_Employment', 'R6_Looking_for_Work', 'entry_income', '4.3_Non-Cash_Benefit_Count', 'Barrier_Count_at_Entry', 'Chronic_Homeless_Status', 'Under_25_Years_Old', '4.10_Alcohol_Abuse_(Substance_Abuse)', '4.07_Chronic_Health_Condition', '4.06_Developmental_Disability', '4.10_Drug_Abuse_(Substance_Abuse)', '4.08_HIV/AIDS', '4.09_Mental_Health_Problem', '4.05_Physical_Disability', 'perm_leaver' ] X = history[features] X = X.drop(columns='perm_leaver') y_pred = pipeline.predict(X) fig, ax = plt.subplots() sns.distplot(test['perm_leaver'], hist=False, kde=True, ax=ax, label='Actual') sns.distplot(y_pred, hist=False, kde=True, ax=ax, label='Predicted') ax.set_title('Distribution of Actual Exit compared to prediction') ax.legend().set_visible(True) pipeline = make_pipeline( ce.OrdinalEncoder(), SimpleImputer(), RandomForestClassifier(random_state=42) ) param_distributions = { 'simpleimputer__strategy': ['most_frequent', 'mean', 'median'], 'randomforestclassifier__bootstrap': 
[True, False], 'randomforestclassifier__max_depth': [10, 20, 30, 40, 50, 60, 70, 80, 90, 100, None], 'randomforestclassifier__max_features': ['auto', 'sqrt'], 'randomforestclassifier__min_samples_leaf': [1, 2, 4], 'randomforestclassifier__min_samples_split': [2, 5, 10], 'randomforestclassifier__n_estimators': [200, 400, 600, 800, 1000, 1200, 1400, 1600, 1800, 2000]} # If you're on Colab, decrease n_iter & cv parameters search = RandomizedSearchCV( pipeline, param_distributions=param_distributions, n_iter=1, cv=3, scoring='accuracy', verbose=10, return_train_score=True, n_jobs=-1 ) # Fit on train, score on val search.fit(X_train, y_train) print('Best hyperparameters', search.best_params_) print('Cross-validation accuracy score', search.best_score_) print('Validation Accuracy', search.score(X_test, y_test)) y_pred.shape history['perm_leaver'].value_counts() 1282+478 from joblib import dump dump(search.best_estimator_, 'pipeline2.joblib', compress=True) # save the fitted best pipeline ```
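Since the notebook ships the fitted pipeline to a web app via `joblib`, it is worth seeing the full save/load round trip. A minimal sketch, using a plain dictionary as a stand-in for the fitted pipeline and a temporary path rather than the notebook's `pipeline.joblib`:

```python
import os
import tempfile

from joblib import dump, load

# stand-in for the fitted sklearn pipeline from the notebook
pipeline_stub = {"model": "RandomForestClassifier", "n_estimators": 100}

# write the object to disk with the same call the notebook uses
path = os.path.join(tempfile.gettempdir(), "pipeline_demo.joblib")
dump(pipeline_stub, path, compress=True)

# what the web app would do at startup
restored = load(path)
print(restored == pipeline_stub)  # → True
```

A real web app would call `load('pipeline.joblib')` once at startup and reuse the object across requests.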
# Chapter 3 **See how to create a table of contents (TOC)** ## Strings ``` a = "My dog's name is" b = "Bingo" c = a + " " + b c #trying to add string and integer d = "927" e = 927 d + e ``` ## Lists ``` a = [0, 1, 1, 2, 3, 5, 8, 13] b = [5., "girl", 2+0j, "horse", 21] b[0] b[1] ``` <div class="alert alert-block alert-warning"> <big><center>Lists are <span style="color:red"> *zero-indexed*</span> </center></big> </div> <div class="alert alert-block alert-success"> $$ \begin{align} list = &[a, b, c, d, e]\\ &\color{red}\Downarrow \hspace{2.2pc}\color{red}\Downarrow\\ &\color{purple}{list[0]} \hspace{1.2pc} \color{purple}{list[4]}\\ &\color{brown}{list[-5]} \hspace{0.7pc} \color{brown}{list[-1]} \end{align} $$ </div> ``` b[-1] b[-5] b[4] b = [5., "girl", 2+0j, "horse", 21] b[0] = b[0]+2 import numpy as np b[3] = np.pi b a ``` <div class="alert alert-block alert-warning"> <big><center>Adding lists <span style="color:red"> *concatenates*</span> them, just as the **+** operator concatenates strings. </center></big> </div> ``` a+a ``` ### Slicing lists <div class="alert alert-block alert-warning"> <big>Note that the last element is <span style="color:red"> *not*</span> included.</big> </div> ``` b b[1:4] b[3:5] b[2:] b[:3] b[:] b[1:-1] len(b) #len --> length ? range ``` ### Creating and modifying lists <div class="alert alert-block alert-info"> range(stop) -> range object <br> range(start, stop[, step]) -> range object </div> Useful for creating *arithmetic progressions* ``` range(10) #starts from zero by default; stores only start, stop and step. 
Useful for saving memory print(range(10)) list(range(10)) #to list all the elements explicitly list(range(3,10)) list(range(0,10,2)) a = range(1,10,3) a list(a) a += [16, 31, 64, 127] a = a + [16, 31, 64,127] a = list(a) + [16, 31, 64,127] a a = [0, 0] + a a b = a[:5] + [101, 102] + a[5:] b ``` ### Tuples <div class="alert alert-block alert-warning"> <big><center>**Tuples** are lists that are <span style="color:red"> *immutable*</span></center></big> </div> Hence, they can be used to store constants, for example. ``` c = (1, 1, 2, 3, 5, 8, 13) c[4] c[4] = 7 ``` ### Multidimensional lists and tuples Useful in making tables and other structures. ``` a = [[3,9], [8,5], [11,1]] #list a a[0] a[1][0] a[1][0] = 10 a a = ([3,9], [8,5], [11,1]) #this is a tuple of lists: the tuple itself is immutable, but its inner lists can still be modified. Compare below a[1][0] a[1][0] = 10 a a = ((3,9), (8,5), (11,1)) #tuple a a[1][0] a[1][0] = 10 ``` ## NumPy arrays - all the elements are of the same type. ``` import numpy as np a = [0, 0, 1, 4, 7, 16, 31, 64,127] a b = np.array(a) #converts a list to an array b ``` - the `array` function promotes all of the numbers to the type of the most general entry in the list. ``` c = np.array([1, 4., -2,7]) #all will become floats c ? np.linspace ``` <div class="alert alert-block alert-info"> np.linspace(start, stop, num=50, endpoint=True, retstep=False, dtype=None) </div> Return evenly spaced numbers over a specified interval. Returns `num` evenly spaced samples, calculated over the interval [`start`, `stop`]. The endpoint of the interval can optionally be excluded. ``` np.linspace(0, 10, 5) np.linspace(0, 10, 5, endpoint=False) np.linspace(0, 10, 5, retstep=True) ? np.logspace ``` <div class="alert alert-block alert-info"> np.logspace(start, stop, num=50, endpoint=True, base=10.0, dtype=None) </div> Return numbers spaced evenly on a log scale. In linear space, the sequence starts at ``base**start`` (`base` to the power of `start`) and ends with ``base**stop``. 
``` np.logspace(1,3,5) %precision 1 np.logspace(1,3,5) ? np.arange ``` <div class="alert alert-block alert-info"> arange([start,] stop[, step,], dtype=None) </div> Return evenly spaced values within a given interval. Values are generated within the half-open interval ``[start, stop)`` (in other words, the interval including `start` but excluding `stop`). For integer arguments the function is equivalent to the Python built-in `range <http://docs.python.org/lib/built-in-funcs.html>`_ function, but returns an ndarray rather than a list. ``` np.arange(0, 10, 2) np.arange(0., 10, 2) #all will be floats np.arange(0, 10, 1.5) ``` ### Creating arrays of zeros and ones ``` np.zeros(6) np.ones(8) np.ones(8, dtype=int) ``` ### Mathematical operations with arrays ``` import numpy as np a = np.linspace(-1, 5, 7) a a*6 np.sin(a) x = np.linspace(-3.14, 3.14, 21) y = np.cos(x) x y #plot this later a np.log(a) a = np.array([34., -12, 5.]) b = np.array([68., 5., 20.]) a+b #vectorized operations ``` ### Slicing and addressing arrays Formula for the average velocity in time interval *i*: $$ v_i = \frac{y_i - y_{i-1}}{t_i - t_{i-1}} $$ ``` y = np.array([0., 1.3, 5., 10.9, 18.9, 28.7, 40.]) t = np.array([0., 0.49, 1., 1.5, 2.08, 2.55, 3.2]) y[:-1] y[1:] v = (y[1:]-y[:-1])/(t[1:]-t[:-1]) v ``` ### Multi-dimensional arrays and matrices ``` b = np.array([[1., 4, 5], [9, 7, 4]]) b #all elements of a NumPy array must be of the same data type: floats, integers, complex numbers, etc. a = np.ones((3,4), dtype=float) a np.eye(4) c = np.arange(6) c c = np.reshape(c, (2,3)) c b b[0][2] b[0,2] #0 indexed b[1,2] 2*b ``` <div class="alert alert-block alert-warning"> *Beware*: array multiplication, done on an element-by-element basis, <span, style="color:red">*is not the same as **matrix** multiplication*</span> as defined in linear algebra. Therefore, we distinguish between *array* multiplication and *matrix* multiplication in Python. 
</div> ``` b*c d = c.T #creates the transpose d np.dot(b,d) #performs matrix multiplication ``` ## Dictionaries \* Also called *hashmaps* or *associative arrays* in other programming languages. <div class="alert alert-block alert-success"> $$ \begin{align} room =&\text{{"Emma":309, "Jacob":582, "Olivia":764}}\\ &\hspace{1.0pc}\color{red}\Downarrow \hspace{1.5pc}\color{red}\Downarrow\\ &\hspace{0.7pc}\color{purple}{key} \hspace{1.5pc}\color{purple}{value} \end{align} $$ </div> ``` room = {"Emma":309, "Jacob":582, "Olivia":764} room["Olivia"] weird = {"tank":52, 846:"horse", "bones":[23, "fox", "grass"], "phrase":"I am here"} weird["tank"] weird[846] weird["bones"] weird["phrase"] d = {} d["last name"] = "Alberts" d["first name"] = "Marie" d["birthday"] = "January 27" d d.keys() d.values() ``` ## Random numbers `np.random.rand(num)` creates an array of `num` floats **uniformly** distributed on the interval from 0 to 1. `np.random.randn(num)` produces a **normal (Gaussian)** distribution of `num` random numbers with a mean of 0 and a standard deviation of 1. They are distributed according to $$ P(x)=\frac{1}{\sqrt{2\pi}}e^{-\frac{1}{2}x²} $$ `np.random.randint(low, high, num)` produces a **uniform** random distribution of `num` integers between `low` (inclusive) and `high` (exclusive). ``` np.random.rand() np.random.rand(5) a, b = 10, 20 (b-a)*np.random.rand(20) + a #setting interval x0, sigma = 15, 10 sigma*np.random.randn(20) + x0 #setting width and center of normal distribution np.random.randint(1, 7, 12) #simulating a dozen rolls of a single die ```
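One practical point the section above does not mention: the random numbers change on every run. Seeding the generator makes them reproducible; a short sketch using `np.random.seed`, which controls the same global generator behind `np.random.rand` and friends:

```python
import numpy as np

np.random.seed(42)         # fix the state of the global generator
first = np.random.rand(5)

np.random.seed(42)         # resetting the seed replays the same sequence
second = np.random.rand(5)

print(np.array_equal(first, second))  # → True
```

This is handy in teaching notebooks like this one, where you want the same "random" output every time the cell is re-run.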
# Comparison of arrival direction and joint models In order to verify the model is working, we fit simulations made under the assumptions of the model. We also compare the differences between a model for only the UHECR arrival directions and one for both the UHECR arrival directions and energies. <br> <br> *This code is used to produce the data shown in Figures 6, 7 and 8 (left panel) in Capel & Mortlock (2019).* *See the separate notebook in this directory for the actual plotting of figures.* ``` import numpy as np import os import h5py from matplotlib import pyplot as plt from pandas import DataFrame from fancy import Data, Model, Analysis from fancy.interfaces.stan import get_simulation_input '''Setting up''' # Define location of Stan files stan_path = '../../stan/' # Define file containing source catalogue information source_file = '../../data/sourcedata.h5' # make output directory if it doesn't exist if not os.path.isdir("output"): os.mkdir("output") # source_types = ["SBG_23", "2FHL_250Mpc", "swift_BAT_213"] source_types = ["SBG_23"] # detector_types = ["auger2010", "auger2014", "TA2015"] # detector_type = "auger2014" detector_type = "TA2015" # set random seed # random_seed = 19990308 random_seeds = [980, 546, 7984, 333, 2] # flag to control showing plots or not show_plot = True '''set detector and detector properties''' if detector_type == "TA2015": from fancy.detector.TA2015 import detector_properties, alpha_T, M, Eth elif detector_type == "auger2014": from fancy.detector.auger2014 import detector_properties, alpha_T, M, Eth elif detector_type == "auger2010": from fancy.detector.auger2010 import detector_properties, alpha_T, M, Eth else: raise Exception("Undefined detector type!") '''Create joint simulated dataset''' # Define a Stan simulation to run sim_name = stan_path + 'joint_model_sim.stan' # simulate all processes # Define simulation using Model object and compile Stan code if necessary simulation = Model(sim_filename = sim_name, include_paths = 
stan_path) simulation.compile(reset=False) for random_seed in random_seeds: for source_type in source_types: print("Current Source: {0}".format(source_type)) # define separate files table_file = '../tables/tables_{0}_{1}.h5'.format(source_type, detector_type) sim_output_file = 'output/joint_model_simulation_{0}_{1}_{2}.h5'.format(source_type, detector_type, random_seed) # Define a source catalogue and detector exposure # In the paper we use the SBG catalogue data = Data() data.add_source(source_file, source_type) data.add_detector(detector_properties) # Plot the sources in Galactic coordinates # if show_plot: # data.show(); # Define associated fraction f = 0.5 # Simulation input B = 20 # nG alpha = 3.0 Eth = Eth Eth_sim = 20 # EeV ptype = "p" # assume proton # number of simulated inputs # changes the background flux linearly # should choose Nsim such that FT is the same for # each observatory # this ensures that L, F0 are the same # # for PAO, we saw that FT = 0.3601 # FT_PAO = 0.3601 # total flux using PAO data with Nsim = 2500 # Nsim_expected = FT_PAO / (M / alpha_T) # Nsim = int(np.round(Nsim_expected)) Nsim = 200 # check value for Nsim print("Simulated events: {0}".format(Nsim)) # L in yr^-1, F in km^-2 yr^-1 L, F0 = get_simulation_input(Nsim, f, data.source.distance, M, alpha_T) # To scale between definition of flux in simulations and fits flux_scale = (Eth / Eth_sim)**(1 - alpha) simulation.input(B = B, L = L, F0 = F0, alpha = alpha, Eth = Eth, ptype=ptype) # check luminosity and isotropic flux values # L ~ O(10^39), F0 ~ 0.18 # same luminosity so only need to check one value print("Simulated Luminosity: {0:.3e}".format(L[0])) print("Simulated isotropic flux: {0:.3f}".format(F0)) # What is happening summary = b'Simulation using the joint model and SBG catalogue' # must be a byte str # Define an Analysis object to bring together Data and Model objects sim_analysis = Analysis(data, simulation, analysis_type = 
'joint', filename = sim_output_file, summary = summary) print("Building tables...") # Build pre-computed values for the simulation as you go # So that you can try out different parameters sim_analysis.build_tables(sim_only = True) print("Running simulation...") # Run simulation sim_analysis.simulate(seed = random_seed, Eth_sim = Eth_sim) # Save to file sim_analysis.save() # print resulting UHECR observed after propagation and Elosses print("Observed simulated UHECRs: {0}\n".format(len(sim_analysis.source_labels))) # print plots if flag is set to true # if show_plot: # sim_analysis.plot("arrival_direction"); # sim_analysis.plot("energy"); '''Fit using arrival direction model''' for random_seed in random_seeds: for source_type in source_types: print("Current Source: {0}".format(source_type)) # define separate files table_file = '../../tables/tables_{0}_{1}.h5'.format(source_type, detector_type) sim_output_file = 'output/joint_model_simulation_{0}_{1}_{2}.h5'.format(source_type, detector_type, random_seed) arrival_output_file = 'output/arrival_direction_fit_{0}_{1}_{2}.h5'.format(source_type, detector_type, random_seed) # joint_output_file = 'output/joint_fit_{0}_PAO.h5'.format(source_type) # Define data from simulation data = Data() data.from_file(sim_output_file) # if show_plot: # data.show() # Arrival direction model model_name = stan_path + 'arrival_direction_model.stan' # Compile model = Model(model_filename = model_name, include_paths = stan_path) model.compile(reset=False) # Define threshold energy in EeV model.input(Eth = Eth) # What is happening summary = b'Fit of the arrival direction model to the joint simulation' # Define an Analysis object to bring together Data and Model objects analysis = Analysis(data, model, analysis_type = 'joint', filename = arrival_output_file, summary = summary) # Define location of pre-computed values used in fits # (see relevant notebook for how to make these files) # Each catalogue has a file of pre-computed values 
analysis.use_tables(table_file) # Fit the Stan model fit = analysis.fit_model(chains = 16, iterations = 500, seed = random_seed) # Save to analysis file analysis.save() '''Fit using joint model''' for random_seed in random_seeds: for source_type in source_types: print("Current Source: {0}".format(source_type)) # define separate files table_file = '../../tables/tables_{0}_{1}.h5'.format(source_type, detector_type) sim_output_file = 'output/joint_model_simulation_{0}_{1}_{2}.h5'.format(source_type, detector_type, random_seed) # arrival_output_file = 'output/arrival_direction_fit_{0}_{1}.h5'.format(source_type, detector_type) joint_output_file = 'output/joint_fit_{0}_{1}_{2}.h5'.format(source_type, detector_type, random_seed) # Define data from simulation data = Data() data.from_file(sim_output_file) # create Model and compile model_name = stan_path + 'joint_model.stan' model = Model(model_filename = model_name, include_paths = stan_path) model.compile(reset=False) model.input(Eth = Eth) # create Analysis object summary = b'Fit of the joint model to the joint simulation' analysis = Analysis(data, model, analysis_type = 'joint', filename = joint_output_file, summary = summary) analysis.use_tables(table_file) # Fit the Stan model fit = analysis.fit_model(chains = 16, iterations = 500, seed = random_seed) # Save to analysis file analysis.save() ```
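All three loops above locate their inputs and outputs through the same file-name scheme, one file per (source type, random seed) pair. A small sketch of the simulation-output names this produces for the settings used here (the values are taken directly from the cells above):

```python
# settings from the notebook above
source_types = ["SBG_23"]
random_seeds = [980, 546, 7984, 333, 2]
detector_type = "TA2015"

# one simulation output file per (source, seed) pair
sim_files = [
    'output/joint_model_simulation_{0}_{1}_{2}.h5'.format(source, detector_type, seed)
    for source in source_types
    for seed in random_seeds
]

print(len(sim_files))  # 5 simulations for the single catalogue
print(sim_files[0])    # output/joint_model_simulation_SBG_23_TA2015_980.h5
```

The arrival-direction and joint fits then derive their own output names (`arrival_direction_fit_...`, `joint_fit_...`) from the same triplets, so each fit can always be traced back to the simulation it was run on.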
___ ___ # Logistic Regression Project - Solutions In this project we will be working with a fake advertising data set, indicating whether or not a particular internet user clicked on an advertisement on a company website. We will try to create a model that predicts whether or not a user will click on an ad based on the features of that user. This data set contains the following features: * 'Daily Time Spent on Site': consumer time on site in minutes * 'Age': customer age in years * 'Area Income': Avg. Income of geographical area of consumer * 'Daily Internet Usage': Avg. minutes a day consumer is on the internet * 'Ad Topic Line': Headline of the advertisement * 'City': City of consumer * 'Male': Whether or not consumer was male * 'Country': Country of consumer * 'Timestamp': Time at which consumer clicked on Ad or closed window * 'Clicked on Ad': 0 or 1 indicating clicking on Ad ## Import Libraries **Import a few libraries you think you'll need (Or just import them as you go along!)** ``` import pandas as pd import numpy as np import matplotlib.pyplot as plt import seaborn as sns %matplotlib inline ``` ## Get the Data **Read in the advertising.csv file and set it to a data frame called ad_data.** ``` ad_data = pd.read_csv('advertising.csv') ``` **Check the head of ad_data** ``` ad_data.head() ``` ** Use info and describe() on ad_data** ``` ad_data.info() ad_data.describe() ``` ## Exploratory Data Analysis Let's use seaborn to explore the data! Try recreating the plots shown below! ** Create a histogram of the Age** ``` sns.set_style('whitegrid') ad_data['Age'].hist(bins=30) plt.xlabel('Age') ``` **Create a jointplot showing Area Income versus Age.** ``` sns.jointplot(x='Age',y='Area Income',data=ad_data) ``` **Create a jointplot showing the kde distributions of Daily Time spent on site vs. Age.** ``` sns.jointplot(x='Age',y='Daily Time Spent on Site',data=ad_data,color='red',kind='kde'); ``` ** Create a jointplot of 'Daily Time Spent on Site' vs. 
'Daily Internet Usage'** ``` sns.jointplot(x='Daily Time Spent on Site',y='Daily Internet Usage',data=ad_data,color='green') ``` ** Finally, create a pairplot with the hue defined by the 'Clicked on Ad' column feature.** ``` sns.pairplot(ad_data,hue='Clicked on Ad',palette='bwr') ``` # Logistic Regression Now it's time to do a train test split, and train our model! You'll have the freedom here to choose columns that you want to train on! ** Split the data into training set and testing set using train_test_split** ``` from sklearn.model_selection import train_test_split X = ad_data[['Daily Time Spent on Site', 'Age', 'Area Income','Daily Internet Usage', 'Male']] y = ad_data['Clicked on Ad'] X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=42) ``` ** Train and fit a logistic regression model on the training set.** ``` from sklearn.linear_model import LogisticRegression logmodel = LogisticRegression() logmodel.fit(X_train,y_train) ``` ## Predictions and Evaluations ** Now predict values for the testing data.** ``` predictions = logmodel.predict(X_test) ``` ** Create a classification report for the model.** ``` from sklearn.metrics import classification_report print(classification_report(y_test,predictions)) ``` ## Great Job!
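The classification report above summarizes precision and recall; a confusion matrix shows the raw counts those rates are computed from. A minimal sketch on toy labels (in the notebook you would pass `y_test` and `predictions` instead of the made-up arrays below):

```python
from sklearn.metrics import confusion_matrix

# toy ground truth and predictions for a binary classifier
y_true = [0, 1, 1, 0]
y_pred = [0, 1, 0, 0]

# rows = actual class, columns = predicted class
cm = confusion_matrix(y_true, y_pred)
print(cm)
# [[2 0]
#  [1 1]]
```

Here the off-diagonal `1` is the single positive example predicted as negative, which is exactly what drags the recall for class 1 below 1.0 in the corresponding report.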
``` %matplotlib inline ``` Model interpretability using Captum =================================== **Translated by**: `정재민 <https://github.com/jjeamin>`_ Captum helps you understand how data features affect your model's predictions or neuron activations, and lets you apply state-of-the-art feature attribution algorithms such as \ ``Integrated Gradients``\ and \ ``Guided GradCam``\ . In this recipe, you will learn how to use Captum to: \* attribute an image classifier's predictions to the corresponding image features \* visualize the attribution results Before you begin ---------------- Make sure Captum is installed in your Python environment. Captum is available on GitHub as a ``pip`` package or a ``conda`` package. For detailed instructions, consult the installation guide at https://captum.ai/. For the model, we use an image classifier built into PyTorch. Captum shows which parts of a sample image support particular predictions made by the model. ``` import torchvision from torchvision import transforms from PIL import Image import requests from io import BytesIO model = torchvision.models.resnet18(pretrained=True).eval() response = requests.get("https://image.freepik.com/free-photo/two-beautiful-puppies-cat-dog_58409-6024.jpg") img = Image.open(BytesIO(response.content)) center_crop = transforms.Compose([ transforms.Resize(256), transforms.CenterCrop(224), ]) normalize = transforms.Compose([ transforms.ToTensor(), # convert the image to a Tensor with values between 0 and 1 transforms.Normalize( # normalize following the RGB distribution of ImageNet pixels, centered at 0 mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225] ) ]) input_img = normalize(center_crop(img)).unsqueeze(0) ``` Computing attributions --------------------- Classes 208 and 283, corresponding to dog and cat, are among the model's top-3 predictions. We use Captum's \ ``Occlusion``\ algorithm to attribute each prediction to the corresponding part of the input. 
``` from captum.attr import Occlusion occlusion = Occlusion(model) strides = (3, 9, 9) # smaller = more fine-grained attribution, but slower target=208, # index of Labrador in ImageNet sliding_window_shapes=(3,45, 45) # choose a size large enough to change the appearance of the object baselines = 0 # value used to occlude the image; 0 is gray attribution_dog = occlusion.attribute(input_img, strides = strides, target=target, sliding_window_shapes=sliding_window_shapes, baselines=baselines) target=283, # index of Persian cat in ImageNet attribution_cat = occlusion.attribute(input_img, strides = strides, target=target, sliding_window_shapes=sliding_window_shapes, baselines=0) ``` Besides ``Occlusion``, Captum offers many algorithms such as \ ``Integrated Gradients``\ , \ ``Deconvolution``\ , \ ``GuidedBackprop``\ , \ ``Guided GradCam``\ , \ ``DeepLift``\ , and \ ``GradientShap``\ . All of these algorithms are subclasses of ``Attribution``, which expects your model as a callable \ ``forward_func``\ upon initialization and has an ``attribute(...)`` method that returns the attribution result in a unified form. Since our input is an image, let's visualize the attribution results. Visualizing the results ----------------------- Captum's \ ``visualization``\ utility provides out-of-the-box methods to visualize attribution results for both image and text inputs. ``` import numpy as np from captum.attr import visualization as viz # convert the computed attribution tensor into an image-like numpy array attribution_dog = np.transpose(attribution_dog.squeeze().cpu().detach().numpy(), (1,2,0)) vis_types = ["heat_map", "original_image"] vis_signs = ["all", "all"] # "positive", "negative", or "all" to show both # positive attribution means the presence of that region increases the prediction score # negative attribution marks regions whose presence lowers the prediction score 
_ = viz.visualize_image_attr_multiple(attribution_dog, center_crop(img), vis_types, vis_signs, ["attribution for dog", "image"], show_colorbar = True ) attribution_cat = np.transpose(attribution_cat.squeeze().cpu().detach().numpy(), (1,2,0)) _ = viz.visualize_image_attr_multiple(attribution_cat, center_crop(img), ["heat_map", "original_image"], ["all", "all"], # positive/negative attributions, or all ["attribution for cat", "image"], show_colorbar = True ) ``` If your data is text, ``visualization.visualize_text()`` offers a dedicated view to explore attributions on top of the input text. Learn more at http://captum.ai/tutorials/IMDB_TorchText_Interpret Final notes ----------- Captum can handle most model types in PyTorch across modalities, including vision and text. With Captum you can: \* attribute a specific output to the model input, as illustrated above \* attribute a specific output to a hidden-layer neuron (see the Captum API reference) \* attribute a hidden-layer neuron's response to the model input (see the Captum API reference) For the full API of supported methods and a list of tutorials, see http://captum.ai. Another useful post by Gilbert Tanner: https://gilberttanner.com/blog/interpreting-pytorch-models-with-captum
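To make the idea behind ``Occlusion`` concrete without a trained network, here is a toy re-implementation in plain numpy: slide a window over the image, overwrite the patch with a baseline value, and credit the covered pixels with the resulting drop in the model's score. This only illustrates the principle, not Captum's implementation; the `occlusion_map` helper and the sum-of-pixels "model" are made up for the example:

```python
import numpy as np

def occlusion_map(image, model, window=2, baseline=0.0):
    """Toy occlusion attribution: for each window position, replace the
    patch with the baseline and add the score drop to the covered pixels."""
    base_score = model(image)
    attr = np.zeros_like(image, dtype=float)
    h, w = image.shape
    for i in range(h - window + 1):
        for j in range(w - window + 1):
            occluded = image.copy()
            occluded[i:i + window, j:j + window] = baseline
            attr[i:i + window, j:j + window] += base_score - model(occluded)
    return attr

# toy "model": the score is simply the sum of all pixels
img = np.ones((4, 4))
attr = occlusion_map(img, model=lambda x: x.sum())

print((attr > 0).all())         # every pixel supports the score here
print(attr[1, 1] > attr[0, 0])  # interior pixels fall inside more windows
```

Captum's `strides` and `sliding_window_shapes` parameters play the role of the window step and size here, and `baselines` is the fill value, which is why a window "large enough to change the appearance of the object" matters.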
``` import sys sys.path.append('../src') from numpy import * import matplotlib.pyplot as plt from Like import * from PlotFuncs import * import WIMPFuncs pek = line_background(6,'k') fig,ax = MakeLimitPlot_SDn() alph = 0.25 cols = cm.bone(linspace(0.3,0.7,4)) nucs = ['Xe','Ge','NaI'] zos = [0,-50,-100,-50] C_Si = WIMPFuncs.C_SDp(Si29)/WIMPFuncs.C_SDn(Si29) C_Ge = WIMPFuncs.C_SDp(Ge73)/WIMPFuncs.C_SDn(Ge73) Cs = [1.0,C_Ge,1.0] froots = ['SDn','SDp','SDn'] for nuc,zo,col,C,froot in zip(nucs,zos,cols,Cs,froots): data = loadtxt('../data/WIMPLimits/mylimits/DLNuFloor'+nuc+'_detailed_'+froot+'.txt') m,sig,NUFLOOR,DY = Floor_2D(data) plt.plot(m,NUFLOOR*C,'-',color=col,lw=3,path_effects=pek,zorder=zo) plt.fill_between(m,NUFLOOR*C,y2=1e-99,color=col,zorder=zo,alpha=alph) #plt.text(0.12,0.2e-35,r'{\bf Silicon}',rotation=45,color='k') plt.text(0.23,1.5e-38,r'{\bf Ge}',rotation=25,color='k') plt.text(0.18,5e-38,r'{\bf NaI}',rotation=26,color='k') plt.text(0.175,5e-40,r'{\bf Xenon}',rotation=31,color='k') MySaveFig(fig,'NuFloor_Targets_SDn') pek = line_background(6,'k') cmap = cm.terrain_r fig,ax = MakeLimitPlot_SDn(Collected=True,alph=1,edgecolor=col_alpha('gray',0.75),facecolor=col_alpha('gray',0.5)) data = loadtxt('../data/WIMPLimits/mylimits/DLNuFloorXe_detailed_SDn.txt') m,sig,NUFLOOR,DY = Floor_2D(data,filt=True,filt_width=2,Ex_crit=1e10) cnt = plt.contourf(m,sig,DY,levels=linspace(2,15,100),vmax=8,vmin=2.2,cmap=cmap) for c in cnt.collections: c.set_edgecolor("face") plt.plot(m,NUFLOOR,'-',color='brown',lw=3,path_effects=pek,zorder=100) im = plt.pcolormesh(-m,sig,DY,vmax=6,vmin=2.2,cmap=cmap,rasterized=True) cbar(im,extend='min') plt.gcf().text(0.82,0.9,r'$\left(\frac{{\rm d}\ln\sigma}{{\rm d}\ln N}\right)^{-1}$',fontsize=35) plt.gcf().text(0.15*(1-0.01),0.16*(1+0.01),r'{\bf Xenon}',color='k',fontsize=50,alpha=0.2) plt.gcf().text(0.15,0.16,r'{\bf Xenon}',color='brown',fontsize=50) MySaveFig(fig,'NuFloorDetailed_Xe_SDn') fig,ax = 
MakeLimitPlot_SDn(Collected=True,alph=1,edgecolor=col_alpha('gray',0.75),facecolor=col_alpha('gray',0.5)) data = loadtxt('../data/WIMPLimits/mylimits/DLNuFloorNaI_detailed_SDn.txt') m,sig,NUFLOOR,DY = Floor_2D(data,filt=True,filt_width=2,Ex_crit=1e11) cnt = plt.contourf(m,sig,DY,levels=linspace(2,15,100),vmax=6,vmin=2.2,cmap=cmap) for c in cnt.collections: c.set_edgecolor("face") plt.plot(m,NUFLOOR,'-',color='brown',lw=3,path_effects=pek,zorder=100) im = plt.pcolormesh(-m,sig,DY,vmax=6,vmin=2.2,cmap=cmap,rasterized=True) cbar(im,extend='min') plt.gcf().text(0.82,0.9,r'$\left(\frac{{\rm d}\ln\sigma}{{\rm d}\ln N}\right)^{-1}$',fontsize=35) plt.gcf().text(0.15*(1-0.01),0.16*(1+0.01),r'{\bf NaI}',color='k',fontsize=50,alpha=0.2) plt.gcf().text(0.15,0.16,r'{\bf NaI}',color='brown',fontsize=50) MySaveFig(fig,'NuFloorDetailed_NaI_SDn') dat1 = loadtxt("../data/WIMPLimits/SDn/XENON1T.txt") dat2 = loadtxt("../data/WIMPLimits/SDn/PandaX.txt") dat3 = loadtxt("../data/WIMPLimits/SDn/CDMSlite.txt") dat4 = loadtxt("../data/WIMPLimits/SDn/CRESST.txt") dats = [dat1,dat2,dat3,dat4] mmin = amin(dat4[:,0]) mmax = 1e4 mvals = logspace(log10(mmin),log10(mmax),1000) sig = zeros(shape=1000) for dat in dats: sig1 = 10**interp(log10(mvals),log10(dat[:,0]),log10(dat[:,1])) sig1[mvals<amin(dat[:,0])] = inf sig1[mvals>amax(dat[:,0])] = inf sig = column_stack((sig,sig1)) sig = sig[:,1:] sig = amin(sig,1) plt.loglog(mvals,sig,color='r',alpha=1,zorder=0.5,lw=2) savetxt('../data/WIMPLimits/SDn/AllLimits-2021.txt',column_stack((mvals,sig))) ```
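The final cell above builds a combined limit by interpolating each experiment's curve onto a common mass grid in log-log space (with `inf` outside each curve's mass range) and taking the pointwise minimum, so the strongest limit wins at every mass. A toy numpy sketch of that envelope construction, using two made-up two-point "limits":

```python
import numpy as np

# two made-up exclusion limits, each as (mass, cross-section) points
limit_a = (np.array([1.0, 100.0]), np.array([1e-40, 1e-44]))
limit_b = (np.array([1.0, 100.0]), np.array([1e-42, 1e-41]))

# common mass grid, log-spaced like mvals in the notebook
m_common = np.logspace(0, 2, 5)

# interpolate each curve in log-log space onto the common grid
columns = []
for m, s in (limit_a, limit_b):
    columns.append(10**np.interp(np.log10(m_common), np.log10(m), np.log10(s)))

# pointwise minimum: at each mass the strongest (lowest) limit wins
envelope = np.min(np.column_stack(columns), axis=1)

print(np.isclose(envelope[0], 1e-42))   # limit_b is stronger at low mass
print(np.isclose(envelope[-1], 1e-44))  # limit_a is stronger at high mass
```

The `inf` masking in the real code serves the same purpose as restricting each curve to its own mass range: an experiment contributes nothing to the envelope where it has no data.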
## 5. Stationary phase point method, SPPMethod This method differs somewhat in its foundations from the others. The global methods described earlier, such as domain conversion, slicing, etc., work here too, but they have to be handled differently. *Note that since this method contains an interactive element, it is currently only stable in Jupyter Notebook.* ``` import numpy as np import matplotlib.pyplot as plt import pysprint as ps ``` As an example, I will generate a series of interferograms with the previously introduced `ps.Generator` and then demonstrate the evaluation workflow on them. For real measurements the evaluation proceeds in exactly the same way. The simplest approach is to generate and save interferograms at different delays between the arms, as shown in the cell below. For easy identification, I name each file after the delay between the arms it belongs to. ``` for delay in range(-200, 201, 50): g = ps.Generator(1, 3, 2, delay, GDD=400, TOD=-500, normalize=True) g.generate_freq() np.savetxt(f'{delay}.txt', np.transpose([g.x, g.y]), delimiter=',') ``` Running this code creates 9 new txt files in the notebook's directory. For this evaluation method we first need to build a list of the file names of the interferograms to be used. We could also do this manually; here I avoid that with a shortcut. ``` ifg_files = [f"{delay}.txt" for delay in range(-200, 201, 50)] print(ifg_files) ``` If the file names do not follow a similar pattern, the trick above obviously does not work and we have to type them in one by one. 
After defining the file names, the next step is to call ```python ps.SPPMethod(ifg_names, sam_names=None, ref_names=None, **kwargs) ```: ``` myspp = ps.SPPMethod(ifg_files, decimal=".", sep=",", skiprows=0, meta_len=0) ``` The `**kwargs` keyword arguments here accept the arguments of the previously introduced `parse_raw` function (internally it is called on every interferogram), since the file layout must be specified here as well for correct loading. The spectra of the sample and reference beams are of course optional arguments; it is up to us whether to normalize the interferograms. The `SPPMethod` object first checks whether the file names in the list actually exist, and raises an error if they do not. `SPPMethod` has further methods, e.g. `len(..)` and `SPPMethod.info`. The first returns how many interferograms are currently in the object (9 in this case), the second shows during the evaluation how many interferograms we have recorded information from (currently 0/9). Later I may also add `append` (already available in version 0.12.5), `insert` and `delete` methods. ``` print(len(myspp)) print(myspp.info) ``` The SPPMethod object behaves like a list: it can also be indexed. Since it holds 9 interferograms, such indexing returns a `ps.Dataset` object. This is the basis of every evaluation method, so it knows all the previously introduced methods. Suppose we want to print the data of the 3rd interferogram and obtain its y values as an `np.ndarray`. Then, using index 2 (since numbering starts from 0 here as well): ``` # print the data of the third interferogram print(myspp[2]) # extract the y values of the third interferogram as an np.array y_ertekek = myspp[2].data.y.values print(y_ertekek) print(type(y_ertekek)) ``` I emphasize again: every method shown so far works on these quasi list elements as well, including `chdomain` and `slice`. 
We exploit this in the evaluation with a *for* loop. To perform the evaluation, we run a for loop over the defined `SPPMethod`. It iterates over all the interferograms it contains. What we want to do with a given interferogram is specified inside the loop. The basic scheme is the following: <pre> for ifg in myspp: - preprocess the given interferogram - open the interactive SPP panel and record the data - call the calculate method outside(!) the loop </pre> In code form this is shown in the cell below. I marked separately which part extends to where. ``` # it is important to put interactive computations inside the with block with ps.interactive(): for ifg in myspp: # -----------------------------------Preprocessing----------------------------------------- # For a real measurement it is worth printing the comment stored in the interferogram file # somehow, so we can tell at what delay it was recorded. # In the present case this is pointless, since I am working with simulated files. # The simplest way to do it would be the following line: # print(ifg.meta['comment']) # or printing the full metadata: # print(ifg.meta) # If we are in the wavelength domain, we first have to convert. # I simulated in the frequency domain, so I skip it here. If needed, use the # following line: # ifg.chdomain() # E.g. cutting off angular frequency values below 1.2 PHz. Since I gave no stop value, the upper # bound would be left untouched if this were run. This is of course optional too. # ifg.slice(start=1.2) # -----------------------------Opening the interactive panel------------------------------- ifg.open_SPP_panel() # ---------------------------------The part after the loop------------------------------------------ # Outside the loop, call the save_data method to save the entered data to a file as well. # This is of course optional, but it is worth doing so that we do not lose any data. 
myspp.save_data('spp.txt')

# outside the loop we call the calculate function
myspp.calculate(reference_point=2, order=3);
```

Without the explanations, in the simulated case the whole code reduces to the following 8 lines. With a real measurement a few preprocessing steps and printouts may of course still be added.

```python
import pysprint as ps

ifg_files = [f"{delay}.txt" for delay in range(-200, 201, 50)]
s = ps.SPPMethod(ifg_files, decimal=".", sep=",", skiprows=0, meta_len=0)
with ps.interactive():
    for ifg in s:
        ifg.open_SPP_panel()
s.save_data('spp.txt')
s.calculate(reference_point=2, order=2, show_graph=True)
```

Once the program has performed the calculation, the `GD` property becomes available on it. It represents the fitted curve; its type is `ps.core.phase.Phase`. More about this in the description of `Phase`.

```
myspp.GD
```

#### 5.1 Calculating from raw data

Since we saved the recorded data into the `spp.txt` file, the fit can easily be recomputed from it. Load it with `np.loadtxt`, then use the `ps.SPPMethod.calculate_from_raw` function.

```
delay, position = np.loadtxt('spp.txt', delimiter=',', unpack=True)
myspp.calculate_from_raw(delay, position, reference_point=2, order=3);
```

In the case above we can see that we obtained the same result as before. This can also be useful when we already have the SPP positions read off, together with the corresponding delays, and we only want to run the calculation. Then we do not even need to create a new object; we can simply call the function as follows:

```
# I typed in a completely arbitrary data series for this
delay_minta = [-100, 200, 500, 700, 900]
position_minta = [2, 2.1, 2.3, 2.45, 2.6]

ps.SPPMethod.calculate_from_raw(delay_minta, position_minta, reference_point=2, order=3);
```

**IMPORTANT NOTE:** Throughout the program, the `order` argument always specifies the order of the dispersion being sought.
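At its core, `calculate_from_raw` amounts to a polynomial fit of the recorded delays against the SPP positions measured from the reference point. The sketch below is a rough, NumPy-only illustration of that idea — it is *not* pysprint's actual implementation (sign and coefficient conventions differ) — reusing the arbitrary `delay_minta`/`position_minta` data above:

```python
import numpy as np

# Illustrative data only: arm delays at the recorded SPP positions,
# mirroring the delay_minta / position_minta example above.
delays = np.array([-100.0, 200.0, 500.0, 700.0, 900.0])
positions = np.array([2.0, 2.1, 2.3, 2.45, 2.6])

reference_point = 2.0
order = 3  # order of the dispersion being sought

# Fit the delay as a polynomial of (position - reference_point); the fitted
# coefficients are related, up to convention, to the dispersion coefficients.
coeffs = np.polyfit(positions - reference_point, delays, deg=order)
print(np.poly1d(coeffs))
```

The actual library additionally handles bookkeeping (reference point, units, error estimates), so treat this only as a picture of the underlying least-squares step.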
#### 5.2 Calculating in yet another way

Since I keep working with the same data series and the `myspp` object, I now delete all the recorded data from them. For this I use the `SPPMethod.flush` function. (The user probably needs this rarely, but it is available.)

```
myspp.flush()
```

We may have noticed earlier that during printing — whichever method was used — lines such as `Delay value: Not given` and `SPP position(s): Not given` also appeared. For example, this is currently the case for the first interferogram of `myspp`:

```
print(myspp[0])
```

As already described for `Dataset`, we can set the data needed for the SPP method on the loaded interferograms. Then we can evaluate the contained interferograms with the `ps.SPPMethod.calculate_from_ifg(ifgs, reference_point, order)` function in the following way:

```
# unpack five interferograms out of the 9 generated ones
elso_ifg = myspp[0]
masodik_ifg = myspp[1]
harmadik_ifg = myspp[2]
negyedik_ifg = myspp[3]
otodik_ifg = myspp[4]

# set arbitrary SPP data on them
elso_ifg.delay = 0
elso_ifg.positions = 2

masodik_ifg.delay = 100
masodik_ifg.positions = 2

harmadik_ifg.delay = 150
harmadik_ifg.positions = 1.6

negyedik_ifg.delay = 200
negyedik_ifg.positions = 1.2

otodik_ifg.delay = 250
otodik_ifg.positions = 1, 3, 1.2

# put them in a list
ifgs = [elso_ifg, masodik_ifg, harmadik_ifg, negyedik_ifg, otodik_ifg]

# call the calculate_from_ifg function
ps.SPPMethod.calculate_from_ifg(ifgs, reference_point=2, order=3);
```

This can be useful when, while evaluating several interferograms one after another with some other method, we also record the SPP data along the way; the program then collects the information needed for the evaluation from them one by one and computes from that.

#### 5.3 More on how SPPMethod works: cache, callbacks

The figure below shows the basic operation of `SPPMethod` while data is being recorded.
![How the SPP method works](spp_diagram.svg)

The loop starts from the `SPPMethod`, where we specify the names of the files to use, the loading parameters, and so on. At this point no computation or loading happens yet. Then, on accessing any element of the `SPPMethod`, a `Dataset` object is created. On it, the `SPPEditor` can be opened, where the position(s) of the stationary phase point and the delay between the arms can be entered. After validation, the SPP-related information is passed back from the interactive editor into the created `Dataset` object and recorded there. Every `Dataset` object created this way is linked to the `SPPMethod` it was built from, so whenever an SPP-related value changes, it immediately changes in the `SPPMethod` as well. The `Registry` makes sure that every object in memory is recorded and can be accessed when needed.

**Cache**

If we access a given element (whether with a `for` loop, by indexing, or in any other way), a `Dataset` object is created. Once accessed, this `Dataset` object stays in memory and keeps every change and setting applied to it. By default *128* interferograms stay in memory at a time, but this limit can be changed if necessary. The number of interferograms currently in memory (belonging to the given `SPPMethod`) is shown in the `Interferograms cached` field of the printout.

**Callbacks**

In the last step of the loop in the figure above (where the `Dataset` passes its SPP-related data to the `SPPMethod`), additional so-called *callback* functions can be invoked. One such built-in callback function is `pysprint.eager_executor`. It makes the program immediately recompute the dispersion from the data available so far after every recording or change of SPP-related data. We proceed exactly as before; we only need to supply the `callback` argument in addition.
Beyond the mandatory arguments, here I also set the values of `logfile` and `verbosity`: at every step this will save the results of the given fit and other information into the `"mylog.log"` file, and because of `verbosity=1` also the recorded data series. This makes it easy to follow the course of the evaluation.

```
# import the callback function needed for continuous evaluation
from pysprint import eager_executor

myspp2 = ps.SPPMethod(
    ifg_files,
    decimal=".",
    sep=",",
    skiprows=0,
    meta_len=0,
    callback=eager_executor(reference_point=2, order=3, logfile="mylog.log", verbosity=1)
)

myspp2
```

We can see that `Eagerly calculating` now changes to `True`. Of course, the program will only perform the calculation in meaningful cases (e.g. the number of data points must be greater than the order of the fit). For completeness, note that it is also easy to write a custom callback function.

Let's run the already familiar *for* loop on the newly created `myspp2`:

```
with ps.interactive():
    for ifg in myspp2:
        ifg.open_SPP_panel()
```

While running the cell above, once we had recorded enough data the results appeared, and they were updated with every new data point added. Here I recorded the data using the interactive interface, but it can also be done in code: set the `delay` and `positions` attributes on the elements of `myspp`, and the program will recompute with every new piece of data added.

The logfile produced during my previous calculation is the following:

```
!type mylog.log
```
# Data Sets for Word2vec :label:`chapter_word2vec_data` In this section, we will introduce how to preprocess a data set with negative sampling :numref:`chapter_approx_train` and load it into mini-batches for word2vec training. The data set we use is [Penn Tree Bank (PTB)]( https://catalog.ldc.upenn.edu/LDC99T42), which is a small but commonly-used corpus. It takes samples from Wall Street Journal articles and includes training sets, validation sets, and test sets. First, import the packages and modules required for the experiment. ``` import collections import d2l import math from mxnet import np, gluon import random import zipfile ``` ## Reading and Preprocessing This data set has already been preprocessed. Each line of the data set acts as a sentence. All the words in a sentence are separated by spaces. In the word embedding task, each word is a token. ``` # Save to the d2l package. def read_ptb(): with zipfile.ZipFile('../data/ptb.zip', 'r') as f: raw_text = f.read('ptb/ptb.train.txt').decode("utf-8") return [line.split() for line in raw_text.split('\n')] sentences = read_ptb() '# sentences: %d' % len(sentences) ``` Next we build a vocabulary in which words that appear fewer than 10 times are mapped into an "&lt;unk&gt;" token. Note that the preprocessed PTB data also contains "&lt;unk&gt;" tokens representing rare words. ``` vocab = d2l.Vocab(sentences, min_freq=10) 'vocab size: %d' % len(vocab) ``` ## Subsampling In text data, there are generally some words that appear at high frequencies, such as "the", "a", and "in" in English. Generally speaking, in a context window, it is more beneficial to train the word embedding model when a word (such as "chip") appears together with a lower-frequency word (such as "microprocessor") than when it appears together with a higher-frequency word (such as "the"). Therefore, when training the word embedding model, we can perform subsampling [2] on the words. 
Specifically, each indexed word $w_i$ in the data set will drop out at a certain probability. The dropout probability is given as: $$ \mathbb{P}(w_i) = \max\left(1 - \sqrt{\frac{t}{f(w_i)}}, 0\right),$$ Here, $f(w_i)$ is the ratio of the instances of word $w_i$ to the total number of words in the data set, and the constant $t$ is a hyper-parameter (set to $10^{-4}$ in this experiment). As we can see, it is only possible to drop out the word $w_i$ in subsampling when $f(w_i) > t$. The higher the word's frequency, the higher its dropout probability. ``` # Save to the d2l package. def subsampling(sentences, vocab): # Map low frequency words into <unk> sentences = [[vocab.idx_to_token[vocab[tk]] for tk in line] for line in sentences] # Count the frequency for each word counter = d2l.count_corpus(sentences) num_tokens = sum(counter.values()) # Return True if we keep this token during subsampling keep = lambda token: ( random.uniform(0, 1) < math.sqrt(1e-4 / counter[token] * num_tokens)) # Now do the subsampling. return [[tk for tk in line if keep(tk)] for line in sentences] subsampled = subsampling(sentences, vocab) ``` Comparing the sequence lengths before and after subsampling, we can see that subsampling significantly reduced the sequence lengths. ``` d2l.set_figsize((3.5, 2.5)) d2l.plt.hist([[len(line) for line in sentences], [len(line) for line in subsampled]] ) d2l.plt.xlabel('# tokens per sentence') d2l.plt.ylabel('count') d2l.plt.legend(['origin', 'subsampled']); ``` For individual tokens, the sampling rate of the high-frequency word "the" is less than 1/20. ``` def compare_counts(token): return '# of "%s": before=%d, after=%d' % (token, sum( [line.count(token) for line in sentences]), sum( [line.count(token) for line in subsampled])) compare_counts('the') ``` But the low-frequency word "join" is completely preserved. ``` compare_counts('join') ``` Lastly, we map each token into an index to construct the corpus. 
``` corpus = [vocab[line] for line in subsampled] corpus[0:3] ``` ## Load the Data Set Next we read the corpus with token indices into data batches for training. ### Extract Central Target Words and Context Words We use words with a distance from the central target word not exceeding the context window size as the context words of the given central target word. The following function extracts all the central target words and their context words. It uniformly and randomly samples an integer between 1 and `max_window_size` (the maximum context window) to use as the context window size. ``` # Save to the d2l package. def get_centers_and_contexts(corpus, max_window_size): centers, contexts = [], [] for line in corpus: # Each sentence needs at least 2 words to form a # "central target word - context word" pair if len(line) < 2: continue centers += line for i in range(len(line)): # Context window centered at i window_size = random.randint(1, max_window_size) indices = list(range(max(0, i - window_size), min(len(line), i + 1 + window_size))) # Exclude the central target word from the context words indices.remove(i) contexts.append([line[idx] for idx in indices]) return centers, contexts ``` Next, we create an artificial data set containing two sentences of 7 and 3 words, respectively. Assume the maximum context window is 2 and print all the central target words and their context words. ``` tiny_dataset = [list(range(7)), list(range(7, 10))] print('dataset', tiny_dataset) for center, context in zip(*get_centers_and_contexts(tiny_dataset, 2)): print('center', center, 'has contexts', context) ``` We set the maximum context window size to 5. The following extracts all the central target words and their context words in the data set. ``` all_centers, all_contexts = get_centers_and_contexts(corpus, 5) '# center-context pairs: %d' % len(all_centers) ``` ### Negative Sampling We use negative sampling for approximate training. 
For a central and context word pair, we randomly sample $K$ noise words ($K=5$ in the experiment). According to the suggestion in the Word2vec paper, the noise word sampling probability $\mathbb{P}(w)$ is set proportional to the relative frequency of $w$ in the data set raised to the power of 0.75 [2]. We first define a class to draw a candidate according to the sampling weights. It caches a bank of 10,000 random draws instead of calling `random.choices` on every call. ``` # Save to the d2l package. class RandomGenerator(object): """Draw a random int in [0, n) according to n sampling weights""" def __init__(self, sampling_weights): self.population = list(range(len(sampling_weights))) self.sampling_weights = sampling_weights self.candidates = [] self.i = 0 def draw(self): if self.i == len(self.candidates): self.candidates = random.choices( self.population, self.sampling_weights, k=10000) self.i = 0 self.i += 1 return self.candidates[self.i-1] generator = RandomGenerator([2,3,4]) [generator.draw() for _ in range(10)] # Save to the d2l package. def get_negatives(all_contexts, corpus, K): counter = d2l.count_corpus(corpus) sampling_weights = [counter[i]**0.75 for i in range(len(counter))] all_negatives, generator = [], RandomGenerator(sampling_weights) for contexts in all_contexts: negatives = [] while len(negatives) < len(contexts) * K: neg = generator.draw() # Noise words cannot be context words if neg not in contexts: negatives.append(neg) all_negatives.append(negatives) return all_negatives all_negatives = get_negatives(all_contexts, corpus, 5) ``` ### Read into Batches We extract all central target words `all_centers`, and the context words `all_contexts` and noise words `all_negatives` of each central target word from the data set. We will read them in random mini-batches. In a mini-batch of data, the $i$-th example includes a central word and its corresponding $n_i$ context words and $m_i$ noise words. 
Since the context window size of each example may be different, the sum of context words and noise words, $n_i+m_i$, will be different. When constructing a mini-batch, we concatenate the context words and noise words of each example, and add 0s for padding until the lengths of the concatenations are the same, that is, until every concatenation has length $\max_i (n_i+m_i)$ (`max_len`). In order to avoid the effect of padding on the loss function calculation, we construct the mask variable `masks`, each element of which corresponds to an element in the concatenation of context and noise words, `contexts_negatives`. When an element in the variable `contexts_negatives` is padding, the element in the mask variable `masks` at the same position will be 0. Otherwise, it takes the value 1. In order to distinguish between positive and negative examples, we also need to distinguish the context words from the noise words in the `contexts_negatives` variable. Based on the construction of the mask variable, we only need to create a label variable `labels` with the same shape as the `contexts_negatives` variable and set the elements corresponding to context words (positive examples) to 1, and the rest to 0. Next, we will implement the mini-batch reading function `batchify`. Its mini-batch input `data` is a list whose length is the batch size, each element of which contains the central target word `center`, context words `context`, and noise words `negative`. The mini-batch data returned by this function conforms to the format we need, for example, it includes the mask variable. ``` # Save to the d2l package. 
def batchify(data): max_len = max(len(c) + len(n) for _, c, n in data) centers, contexts_negatives, masks, labels = [], [], [], [] for center, context, negative in data: cur_len = len(context) + len(negative) centers += [center] contexts_negatives += [context + negative + [0] * (max_len - cur_len)] masks += [[1] * cur_len + [0] * (max_len - cur_len)] labels += [[1] * len(context) + [0] * (max_len - len(context))] return (np.array(centers).reshape(-1, 1), np.array(contexts_negatives), np.array(masks), np.array(labels)) ``` Construct two simple examples: ``` x_1 = (1, [2,2], [3,3,3,3]) x_2 = (1, [2,2,2], [3,3]) batch = batchify((x_1, x_2)) names = ['centers', 'contexts_negatives', 'masks', 'labels'] for name, data in zip(names, batch): print(name, '=', data) ``` We use the `batchify` function just defined to specify the mini-batch reading method in the `DataLoader` instance. ## Put All Things Together Lastly, we define the `load_data_ptb` function that reads the PTB data set and returns the data loader. ``` # Save to the d2l package. def load_data_ptb(batch_size, max_window_size, num_noise_words): sentences = read_ptb() vocab = d2l.Vocab(sentences, min_freq=10) subsampled = subsampling(sentences, vocab) corpus = [vocab[line] for line in subsampled] all_centers, all_contexts = get_centers_and_contexts( corpus, max_window_size) all_negatives = get_negatives(all_contexts, corpus, num_noise_words) dataset = gluon.data.ArrayDataset( all_centers, all_contexts, all_negatives) data_iter = gluon.data.DataLoader(dataset, batch_size, shuffle=True, batchify_fn=batchify) return data_iter, vocab ``` Let's print the first mini-batch of the data iterator. ``` data_iter, vocab = load_data_ptb(512, 5, 5) for batch in data_iter: for name, data in zip(names, batch): print(name, 'shape:', data.shape) break ``` ## Summary * Subsampling attempts to minimize the impact of high-frequency words on the training of a word embedding model. 
* We can pad examples of different lengths to create mini-batches in which all examples have the same length, and use mask variables to distinguish between padding and non-padding elements, so that only non-padding elements participate in the calculation of the loss function. ## Exercises * We use the `batchify` function to specify the mini-batch reading method in the `DataLoader` instance and print the shape of each variable in the first batch read. How should these shapes be calculated?
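For the exercise above, one way to reason about the shapes is to run `batchify` on a toy batch. The sketch below restates the function so the snippet is self-contained (toy numbers, not PTB data): `centers` comes out as `(batch_size, 1)` and the other three variables as `(batch_size, max_len)`.

```python
import numpy as np

# Self-contained restatement of the batchify logic above, applied to the
# same two toy examples, to check the shapes asked about in the exercise.
def batchify(data):
    max_len = max(len(c) + len(n) for _, c, n in data)
    centers, contexts_negatives, masks, labels = [], [], [], []
    for center, context, negative in data:
        cur_len = len(context) + len(negative)
        centers += [center]
        contexts_negatives += [context + negative + [0] * (max_len - cur_len)]
        masks += [[1] * cur_len + [0] * (max_len - cur_len)]
        labels += [[1] * len(context) + [0] * (max_len - len(context))]
    return (np.array(centers).reshape(-1, 1), np.array(contexts_negatives),
            np.array(masks), np.array(labels))

batch = batchify([(1, [2, 2], [3, 3, 3, 3]), (1, [2, 2, 2], [3, 3])])
for name, arr in zip(['centers', 'contexts_negatives', 'masks', 'labels'], batch):
    print(name, arr.shape)
# centers is (batch_size, 1) = (2, 1);
# the other three are (batch_size, max_len) = (2, 6)
```

Here `max_len = max(2 + 4, 3 + 2) = 6`, so every concatenated row is padded to length 6.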
```
import numpy as np
import matplotlib.pyplot as plt

try:
    from cycler import cycler
except ModuleNotFoundError:
    %pip install cycler
    from cycler import cycler
from scipy.spatial.distance import cdist

try:
    import probml_utils as pml
except ModuleNotFoundError:
    %pip install git+https://github.com/probml/probml-utils.git
    import probml_utils as pml

np.random.seed(0)

CB_color = ["#377eb8", "#ff7f00"]
cb_cycler = cycler(linestyle=["-", "--", "-."]) * cycler(color=CB_color)
plt.rc("axes", prop_cycle=cb_cycler)


def fun(x, w):
    return w[0] * x + w[1] * np.square(x)


# Data as mentioned in the matlab code
def polydatemake():
    n = 21
    sigma = 2
    xtrain = np.linspace(0, 20, n)
    xtest = np.arange(0, 20.1, 0.1)
    w = np.array([-1.5, 1 / 9])
    ytrain = fun(xtrain, w).reshape(-1, 1) + np.random.randn(xtrain.shape[0], 1)
    # reshape so the noise broadcasts per-point rather than into a matrix,
    # and scale the noise by sigma exactly once
    ytestNoisefree = fun(xtest, w).reshape(-1, 1)
    ytestNoisy = ytestNoisefree + sigma * np.random.randn(xtest.shape[0], 1)
    return xtrain, ytrain, xtest, ytestNoisefree, ytestNoisy


[xtrain, ytrain, xtest, ytestNoisefree, ytestNoisy] = polydatemake()

sigmas = [0.5, 10, 50]
K = 10
centers = np.linspace(np.min(xtrain), np.max(xtrain), K)


def addones(x):
    # x is of shape (s,); prepend a column of ones
    return np.insert(x[:, np.newaxis], 0, [[1]], axis=1)


def rbf_features(X, centers, sigma):
    dist_mat = cdist(X, centers, "minkowski", p=2.0)
    return np.exp((-0.5 / (sigma**2)) * (dist_mat**2))


# using matrix inversion for ridge regression
def ridgeReg(X, y, lambd):
    # returns the weight vector
    D = X.shape[1]
    w = np.linalg.inv(X.T @ X + lambd * np.eye(D, D)) @ X.T @ y
    return w


fig, ax = plt.subplots(3, 3, figsize=(10, 10))
plt.tight_layout()
for (i, s) in enumerate(sigmas):
    rbf_train = rbf_features(addones(xtrain), addones(centers), s)
    rbf_test = rbf_features(addones(xtest), addones(centers), s)
    reg_w = ridgeReg(rbf_train, ytrain, 0.3)
    ypred = rbf_test @ reg_w

    ax[i, 0].plot(xtrain, ytrain, ".", markersize=8)
    ax[i, 0].plot(xtest, ypred)
    ax[i, 0].set_ylim([-10, 20])
    ax[i, 0].set_xticks(np.arange(0, 21, 5))

    for j in range(K):
        ax[i, 1].plot(xtest, rbf_test[:, j], "b-")
    ax[i, 1].set_xticks(np.arange(0, 21, 5))
    ax[i, 1].ticklabel_format(style="sci", scilimits=(-2, 2))

    ax[i, 2].imshow(rbf_train, interpolation="nearest", aspect="auto",
                    cmap=plt.get_cmap("viridis"))
    ax[i, 2].set_yticks(np.arange(20, 4, -5))
    ax[i, 2].set_xticks(np.arange(2, 10, 2))
pml.savefig("rbfDemoALL.pdf", dpi=300)
plt.show()
```
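As a quick sanity check of the closed form used in `ridgeReg`, $w = (X^\top X + \lambda I)^{-1} X^\top y$, the snippet below fits noiseless toy data (my own, not from the notebook) and recovers the true weights. It uses `np.linalg.solve` rather than an explicit inverse, which evaluates the same expression in a numerically preferable way:

```python
import numpy as np

# Sanity check of the ridge closed form: w = (X^T X + lambda*I)^{-1} X^T y.
# Toy data and names are illustrative only.
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))
w_true = np.array([[1.0], [-2.0], [0.5]])
y = X @ w_true  # noiseless targets

def ridge(X, y, lambd):
    # same expression as ridgeReg above, but solving the linear system
    # avoids forming an explicit matrix inverse
    D = X.shape[1]
    return np.linalg.solve(X.T @ X + lambd * np.eye(D), X.T @ y)

# with negligible regularization and noiseless data, w_true is recovered
print(ridge(X, y, 1e-9).round(3))
```

With a nonzero `lambd`, the estimate shrinks toward zero, which is exactly the regularization effect explored across the three `sigmas` panels above.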
# Decision Analysis Think Bayes, Second Edition Copyright 2020 Allen B. Downey License: [Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0)](https://creativecommons.org/licenses/by-nc-sa/4.0/) ``` # If we're running on Colab, install empiricaldist # https://pypi.org/project/empiricaldist/ import sys IN_COLAB = 'google.colab' in sys.modules if IN_COLAB: !pip install empiricaldist # Get utils.py from os.path import basename, exists def download(url): filename = basename(url) if not exists(filename): from urllib.request import urlretrieve local, _ = urlretrieve(url, filename) print('Downloaded ' + local) download('https://github.com/AllenDowney/ThinkBayes2/raw/master/soln/utils.py') from utils import set_pyplot_params set_pyplot_params() ``` This chapter presents a problem inspired by the game show *The Price is Right*. It is a silly example, but it demonstrates a useful process called Bayesian [decision analysis](https://en.wikipedia.org/wiki/Decision_analysis). As in previous examples, we'll use data and a prior distribution to compute a posterior distribution; then we'll use the posterior distribution to choose an optimal strategy in a game that involves bidding. As part of the solution, we will use kernel density estimation (KDE) to estimate the prior distribution, and a normal distribution to compute the likelihood of the data. And at the end of the chapter, I pose a related problem you can solve as an exercise. ## The Price Is Right Problem On November 1, 2007, contestants named Letia and Nathaniel appeared on *The Price is Right*, an American television game show. They competed in a game called "The Showcase", where the objective is to guess the price of a collection of prizes. The contestant who comes closest to the actual price, without going over, wins the prizes. Nathaniel went first. His showcase included a dishwasher, a wine cabinet, a laptop computer, and a car. He bid \\$26,000. 
Letia's showcase included a pinball machine, a video arcade game, a pool table, and a cruise of the Bahamas. She bid \\$21,500. The actual price of Nathaniel's showcase was \\$25,347. His bid was too high, so he lost. The actual price of Letia's showcase was \\$21,578. She was only off by \\$78, so she won her showcase and, because her bid was off by less than \\$250, she also won Nathaniel's showcase. For a Bayesian thinker, this scenario suggests several questions: 1. Before seeing the prizes, what prior beliefs should the contestants have about the price of the showcase? 2. After seeing the prizes, how should the contestants update those beliefs? 3. Based on the posterior distribution, what should the contestants bid? The third question demonstrates a common use of Bayesian methods: decision analysis. This problem is inspired by [an example](https://nbviewer.jupyter.org/github/CamDavidsonPilon/Probabilistic-Programming-and-Bayesian-Methods-for-Hackers/blob/master/Chapter5_LossFunctions/Ch5_LossFunctions_PyMC3.ipynb) in Cameron Davidson-Pilon's book, [*Probabilistic Programming and Bayesian Methods for Hackers*](http://camdavidsonpilon.github.io/Probabilistic-Programming-and-Bayesian-Methods-for-Hackers). ## The Prior To choose a prior distribution of prices, we can take advantage of data from previous episodes. Fortunately, [fans of the show keep detailed records](https://web.archive.org/web/20121107204942/http://www.tpirsummaries.8m.com/). For this example, I downloaded files containing the price of each showcase from the 2011 and 2012 seasons and the bids offered by the contestants. The following cells load the data files. ``` # Load the data files download('https://raw.githubusercontent.com/AllenDowney/ThinkBayes2/master/data/showcases.2011.csv') download('https://raw.githubusercontent.com/AllenDowney/ThinkBayes2/master/data/showcases.2012.csv') ``` The following function reads the data and cleans it up a little. 
``` import pandas as pd def read_data(filename): """Read the showcase price data.""" df = pd.read_csv(filename, index_col=0, skiprows=[1]) return df.dropna().transpose() ``` I'll read both files and concatenate them. ``` df2011 = read_data('showcases.2011.csv') df2012 = read_data('showcases.2012.csv') df = pd.concat([df2011, df2012], ignore_index=True) print(df2011.shape, df2012.shape, df.shape) ``` Here's what the dataset looks like: ``` df.head(3) ``` The first two columns, `Showcase 1` and `Showcase 2`, are the values of the showcases in dollars. The next two columns are the bids the contestants made. The last two columns are the differences between the actual values and the bids. ## Kernel Density Estimation This dataset contains the prices for 313 previous showcases, which we can think of as a sample from the population of possible prices. We can use this sample to estimate the prior distribution of showcase prices. One way to do that is kernel density estimation (KDE), which uses the sample to estimate a smooth distribution. If you are not familiar with KDE, you can [read about it here](https://mathisonian.github.io/kde). SciPy provides `gaussian_kde`, which takes a sample and returns an object that represents the estimated distribution. The following function takes `sample`, makes a KDE, evaluates it at a given sequence of quantities, `qs`, and returns the result as a normalized PMF. 
``` from scipy.stats import gaussian_kde from empiricaldist import Pmf def kde_from_sample(sample, qs): """Make a kernel density estimate from a sample.""" kde = gaussian_kde(sample) ps = kde(qs) pmf = Pmf(ps, qs) pmf.normalize() return pmf ``` We can use it to estimate the distribution of values for Showcase 1: ``` import numpy as np qs = np.linspace(0, 80000, 81) prior1 = kde_from_sample(df['Showcase 1'], qs) ``` Here's what it looks like: ``` from utils import decorate def decorate_value(title=''): decorate(xlabel='Showcase value ($)', ylabel='PMF', title=title) prior1.plot(label='Prior 1') decorate_value('Prior distribution of showcase value') ``` **Exercise:** Use this function to make a `Pmf` that represents the prior distribution for Showcase 2, and plot it. ``` # Solution qs = np.linspace(0, 80000, 81) prior2 = kde_from_sample(df['Showcase 2'], qs) # Solution prior1.plot(label='Prior 1') prior2.plot(label='Prior 2') decorate_value('Prior distributions of showcase value') ``` ## Distribution of Error To update these priors, we have to answer these questions: * What data should we consider and how should we quantify it? * Can we compute a likelihood function; that is, for each hypothetical price, can we compute the conditional likelihood of the data? To answer these questions, I will model each contestant as a price-guessing instrument with known error characteristics. In this model, when the contestant sees the prizes, they guess the price of each prize and add up the prices. Let's call this total `guess`. Now the question we have to answer is, "If the actual price is `price`, what is the likelihood that the contestant's guess would be `guess`?" Equivalently, if we define `error = guess - price`, we can ask, "What is the likelihood that the contestant's guess is off by `error`?" To answer this question, I'll use the historical data again. 
For each showcase in the dataset, let's look at the difference between the contestant's bid and the actual price: ``` sample_diff1 = df['Bid 1'] - df['Showcase 1'] sample_diff2 = df['Bid 2'] - df['Showcase 2'] ``` To visualize the distribution of these differences, we can use KDE again. ``` qs = np.linspace(-40000, 20000, 61) kde_diff1 = kde_from_sample(sample_diff1, qs) kde_diff2 = kde_from_sample(sample_diff2, qs) ``` Here's what these distributions look like: ``` kde_diff1.plot(label='Diff 1', color='C8') kde_diff2.plot(label='Diff 2', color='C4') decorate(xlabel='Difference in value ($)', ylabel='PMF', title='Difference between bid and actual value') ``` It looks like the bids are too low more often than too high, which makes sense. Remember that under the rules of the game, you lose if you overbid, so contestants probably underbid to some degree deliberately. For example, if they guess that the value of the showcase is \\$40,000, they might bid \\$36,000 to avoid going over. It looks like these distributions are well modeled by a normal distribution, so we can summarize them with their mean and standard deviation. For example, here is the mean and standard deviation of `Diff` for Player 1. ``` mean_diff1 = sample_diff1.mean() std_diff1 = sample_diff1.std() print(mean_diff1, std_diff1) ``` Now we can use these differences to model the contestant's distribution of errors. This step is a little tricky because we don't actually know the contestant's guesses; we only know what they bid. So we have to make some assumptions: * I'll assume that contestants underbid because they are being strategic, and that on average their guesses are accurate. In other words, the mean of their errors is 0. * But I'll assume that the spread of the differences reflects the actual spread of their errors. So, I'll use the standard deviation of the differences as the standard deviation of their errors. 
Based on these assumptions, I'll make a normal distribution with parameters 0 and `std_diff1`. SciPy provides an object called `norm` that represents a normal distribution with the given mean and standard deviation. ``` from scipy.stats import norm error_dist1 = norm(0, std_diff1) ``` The result is an object that provides `pdf`, which evaluates the probability density function of the normal distribution. For example, here is the probability density of `error=-100`, based on the distribution of errors for Player 1. ``` error = -100 error_dist1.pdf(error) ``` By itself, this number doesn't mean very much, because probability densities are not probabilities. But they are proportional to probabilities, so we can use them as likelihoods in a Bayesian update, as we'll see in the next section. ## Update Suppose you are Player 1. You see the prizes in your showcase and your guess for the total price is \\$23,000. From your guess I will subtract away each hypothetical price in the prior distribution; the result is your error under each hypothesis. ``` guess1 = 23000 error1 = guess1 - prior1.qs ``` Now suppose we know, based on past performance, that your estimation error is well modeled by `error_dist1`. Under that assumption we can compute the likelihood of your error under each hypothesis. ``` likelihood1 = error_dist1.pdf(error1) ``` The result is an array of likelihoods, which we can use to update the prior. ``` posterior1 = prior1 * likelihood1 posterior1.normalize() ``` Here's what the posterior distribution looks like: ``` prior1.plot(color='C5', label='Prior 1') posterior1.plot(color='C4', label='Posterior 1') decorate_value('Prior and posterior distribution of showcase value') ``` Because your initial guess is in the lower end of the range, the posterior distribution has shifted to the left. We can compute the posterior mean to see by how much. 
``` prior1.mean(), posterior1.mean() ``` Before you saw the prizes, you expected to see a showcase with a value close to \\$30,000. After making a guess of \\$23,000, you updated the prior distribution. Based on the combination of the prior and your guess, you now expect the actual price to be about \\$26,000. **Exercise:** Now suppose you are Player 2. When you see your showcase, you guess that the total price is \\$38,000. Use `sample_diff2` to construct a normal distribution that represents the distribution of your estimation errors. Compute the likelihood of your guess for each actual price and use it to update `prior2`. Plot the posterior distribution and compute the posterior mean. Based on the prior and your guess, what do you expect the actual price of the showcase to be? ``` # Solution mean_diff2 = sample_diff2.mean() std_diff2 = sample_diff2.std() print(mean_diff2, std_diff2) # Solution error_dist2 = norm(0, std_diff2) # Solution guess2 = 38000 error2 = guess2 - prior2.qs likelihood2 = error_dist2.pdf(error2) # Solution posterior2 = prior2 * likelihood2 posterior2.normalize() # Solution prior2.plot(color='C5', label='Prior 2') posterior2.plot(color='C15', label='Posterior 2') decorate_value('Prior and posterior distribution of showcase value') # Solution print(prior2.mean(), posterior2.mean()) ``` ## Probability of Winning Now that we have a posterior distribution for each player, let's think about strategy. First, from the point of view of Player 1, let's compute the probability that Player 2 overbids. To keep it simple, I'll use only the performance of past players, ignoring the value of the showcase. The following function takes a sequence of past differences and returns the fraction that overbid. ``` def prob_overbid(sample_diff): """Compute the probability of an overbid.""" return np.mean(sample_diff > 0) ``` Here's an estimate for the probability that Player 2 overbids. ``` prob_overbid(sample_diff2) ``` Now suppose Player 1 underbids by \\$5000.
What is the probability that Player 2 underbids by more? The following function uses past performance to estimate the probability that a player underbids by more than a given amount, `diff`: ``` def prob_worse_than(diff, sample_diff): """Probability opponent diff is worse than given diff.""" return np.mean(sample_diff < diff) ``` Here's the probability that Player 2 underbids by more than \\$5000. ``` prob_worse_than(-5000, sample_diff2) ``` And here's the probability they underbid by more than \\$10,000. ``` prob_worse_than(-10000, sample_diff2) ``` We can combine these functions to compute the probability that Player 1 wins, given the difference between their bid and the actual price: ``` def compute_prob_win(diff, sample_diff): """Probability of winning for a given diff.""" # if you overbid you lose if diff > 0: return 0 # if the opponent overbids, you win p1 = prob_overbid(sample_diff) # or if their bid is worse than yours, you win p2 = prob_worse_than(diff, sample_diff) # p1 and p2 are mutually exclusive, so we can add them return p1 + p2 ``` Here's the probability that you win, given that you underbid by \\$5000. ``` compute_prob_win(-5000, sample_diff2) ``` Now let's look at the probability of winning for a range of possible differences. ``` xs = np.linspace(-30000, 5000, 121) ys = [compute_prob_win(x, sample_diff2) for x in xs] ``` Here's what it looks like: ``` import matplotlib.pyplot as plt plt.plot(xs, ys) decorate(xlabel='Difference between bid and actual price ($)', ylabel='Probability of winning', title='Player 1') ``` If you underbid by \\$30,000, the chance of winning is about 30%, which is mostly the chance your opponent overbids. As your bid gets closer to the actual price, your chance of winning approaches 1. And, of course, if you overbid, you lose (even if your opponent also overbids). **Exercise:** Run the same analysis from the point of view of Player 2. Using the sample of differences from Player 1, compute: 1.
The probability that Player 1 overbids. 2. The probability that Player 1 underbids by more than \\$5000. 3. The probability that Player 2 wins, given that they underbid by \\$5000. Then plot the probability that Player 2 wins for a range of possible differences between their bid and the actual price. ``` # Solution prob_overbid(sample_diff1) # Solution prob_worse_than(-5000, sample_diff1) # Solution compute_prob_win(-5000, sample_diff1) # Solution xs = np.linspace(-30000, 5000, 121) ys = [compute_prob_win(x, sample_diff1) for x in xs] # Solution plt.plot(xs, ys) decorate(xlabel='Difference between bid and actual price ($)', ylabel='Probability of winning', title='Player 2') ``` ## Decision Analysis In the previous section we computed the probability of winning, given that we have underbid by a particular amount. In reality the contestants don't know how much they have underbid by, because they don't know the actual price. But they do have a posterior distribution that represents their beliefs about the actual price, and they can use that to estimate their probability of winning with a given bid. The following function takes a possible bid, a posterior distribution of actual prices, and a sample of differences for the opponent. It loops through the hypothetical prices in the posterior distribution and, for each price, 1. Computes the difference between the bid and the hypothetical price, 2. Computes the probability that the player wins, given that difference, and 3. Adds up the weighted sum of the probabilities, where the weights are the probabilities in the posterior distribution. ``` def total_prob_win(bid, posterior, sample_diff): """Computes the total probability of winning with a given bid. 
bid: your bid posterior: Pmf of showcase value sample_diff: sequence of differences for the opponent returns: probability of winning """ total = 0 for price, prob in posterior.items(): diff = bid - price total += prob * compute_prob_win(diff, sample_diff) return total ``` This loop implements the law of total probability: $$P(win) = \sum_{price} P(price) ~ P(win ~|~ price)$$ Here's the probability that Player 1 wins, based on a bid of \\$25,000 and the posterior distribution `posterior1`. ``` total_prob_win(25000, posterior1, sample_diff2) ``` Now we can loop through a series of possible bids and compute the probability of winning for each one. ``` bids = posterior1.qs probs = [total_prob_win(bid, posterior1, sample_diff2) for bid in bids] prob_win_series = pd.Series(probs, index=bids) ``` Here are the results. ``` prob_win_series.plot(label='Player 1', color='C1') decorate(xlabel='Bid ($)', ylabel='Probability of winning', title='Optimal bid: probability of winning') ``` And here's the bid that maximizes Player 1's chance of winning. ``` prob_win_series.idxmax() prob_win_series.max() ``` Recall that your guess was \\$23,000. Using your guess to compute the posterior distribution, the posterior mean is about \\$26,000. But the bid that maximizes your chance of winning is \\$21,000. **Exercise:** Do the same analysis for Player 2. ``` # Solution bids = posterior2.qs probs = [total_prob_win(bid, posterior2, sample_diff1) for bid in bids] prob_win_series = pd.Series(probs, index=bids) # Solution prob_win_series.plot(label='Player 2', color='C1') decorate(xlabel='Bid ($)', ylabel='Probability of winning', title='Optimal bid: probability of winning') # Solution prob_win_series.idxmax() # Solution prob_win_series.max() ``` ## Maximizing Expected Gain In the previous section we computed the bid that maximizes your chance of winning. And if that's your goal, the bid we computed is optimal. But winning isn't everything. 
Remember that if your bid is off by \\$250 or less, you win both showcases. So it might be a good idea to increase your bid a little: it increases the chance you overbid and lose, but it also increases the chance of winning both showcases. Let's see how that works out. The following function computes how much you will win, on average, given your bid, the actual price, and a sample of errors for your opponent. ``` def compute_gain(bid, price, sample_diff): """Compute expected gain given a bid and actual price.""" diff = bid - price prob = compute_prob_win(diff, sample_diff) # if you are within 250 dollars, you win both showcases if -250 <= diff <= 0: return 2 * price * prob else: return price * prob ``` For example, if the actual price is \\$35000 and you bid \\$30000, you will win about \\$23,600 worth of prizes on average, taking into account your probability of losing, winning one showcase, or winning both. ``` compute_gain(30000, 35000, sample_diff2) ``` In reality we don't know the actual price, but we have a posterior distribution that represents what we know about it. By averaging over the prices and probabilities in the posterior distribution, we can compute the expected gain for a particular bid. In this context, "expected" means the average over the possible showcase values, weighted by their probabilities. ``` def expected_gain(bid, posterior, sample_diff): """Compute the expected gain of a given bid.""" total = 0 for price, prob in posterior.items(): total += prob * compute_gain(bid, price, sample_diff) return total ``` For the posterior we computed earlier, based on a guess of \\$23,000, the expected gain for a bid of \\$21,000 is about \\$16,900. ``` expected_gain(21000, posterior1, sample_diff2) ``` But can we do any better? To find out, we can loop through a range of bids and find the one that maximizes expected gain. 
``` bids = posterior1.qs gains = [expected_gain(bid, posterior1, sample_diff2) for bid in bids] expected_gain_series = pd.Series(gains, index=bids) ``` Here are the results. ``` expected_gain_series.plot(label='Player 1', color='C2') decorate(xlabel='Bid ($)', ylabel='Expected gain ($)', title='Optimal bid: expected gain') ``` Here is the optimal bid. ``` expected_gain_series.idxmax() ``` With that bid, the expected gain is about \\$17,400. ``` expected_gain_series.max() ``` Recall that your initial guess was \\$23,000. The bid that maximizes the chance of winning is \\$21,000. And the bid that maximizes your expected gain is \\$22,000. **Exercise:** Do the same analysis for Player 2. ``` # Solution bids = posterior2.qs gains = [expected_gain(bid, posterior2, sample_diff1) for bid in bids] expected_gain_series = pd.Series(gains, index=bids) # Solution expected_gain_series.plot(label='Player 2', color='C2') decorate(xlabel='Bid ($)', ylabel='Expected gain ($)', title='Optimal bid: expected gain') # Solution expected_gain_series.idxmax() # Solution expected_gain_series.max() ``` ## Summary There's a lot going on in this chapter, so let's review the steps: 1. First we used KDE and data from past shows to estimate prior distributions for the values of the showcases. 2. Then we used bids from past shows to model the distribution of errors as a normal distribution. 3. We did a Bayesian update using the distribution of errors to compute the likelihood of the data. 4. We used the posterior distribution for the value of the showcase to compute the probability of winning for each possible bid, and identified the bid that maximizes the chance of winning. 5. Finally, we used the probability of winning to compute the expected gain for each possible bid, and identified the bid that maximizes expected gain. Incidentally, this example demonstrates the hazard of using the word "optimal" without specifying what you are optimizing.
The bid that maximizes the chance of winning is not generally the same as the bid that maximizes expected gain. ## Discussion When people discuss the pros and cons of Bayesian estimation, as contrasted with classical methods sometimes called "frequentist", they often claim that in many cases Bayesian methods and frequentist methods produce the same results. In my opinion, this claim is mistaken because Bayesian and frequentist methods produce different *kinds* of results: * The result of frequentist methods is usually a single value that is considered to be the best estimate (by one of several criteria) or an interval that quantifies the precision of the estimate. * The result of Bayesian methods is a posterior distribution that represents all possible outcomes and their probabilities. Granted, you can use the posterior distribution to choose a "best" estimate or compute an interval. And in that case the result might be the same as the frequentist estimate. But doing so discards useful information and, in my opinion, eliminates the primary benefit of Bayesian methods: the posterior distribution is more useful than a single estimate, or even an interval. The example in this chapter demonstrates the point. Using the entire posterior distribution, we can compute the bid that maximizes the probability of winning, or the bid that maximizes expected gain, even if the rules for computing the gain are complicated (and nonlinear). With a single estimate or an interval, we can't do that, even if they are "optimal" in some sense. In general, frequentist estimation provides little guidance for decision-making. If you hear someone say that Bayesian and frequentist methods produce the same results, you can be confident that they don't understand Bayesian methods. ## Exercises **Exercise:** When I worked in Cambridge, Massachusetts, I usually took the subway to South Station and then a commuter train home to Needham.
Because the subway was unpredictable, I left the office early enough that I could wait up to 15 minutes and still catch the commuter train. When I got to the subway stop, there were usually about 10 people waiting on the platform. If there were fewer than that, I figured I just missed a train, so I expected to wait a little longer than usual. And if there were more than that, I expected another train soon. But if there were a *lot* more than 10 passengers waiting, I inferred that something was wrong, and I expected a long wait. In that case, I might leave and take a taxi. We can use Bayesian decision analysis to quantify the analysis I did intuitively. Given the number of passengers on the platform, how long should we expect to wait? And when should we give up and take a taxi? My analysis of this problem is in `redline.ipynb`, which is in the repository for this book. [Click here to run this notebook on Colab](https://colab.research.google.com/github/AllenDowney/ThinkBayes2/blob/master/notebooks/redline.ipynb). **Exercise:** This exercise is inspired by a true story. In 2001 I created [Green Tea Press](https://greenteapress.com) to publish my books, starting with *Think Python*. I ordered 100 copies from a short run printer and made the book available for sale through a distributor. After the first week, the distributor reported that 12 copies were sold. Based on that report, I thought I would run out of copies in about 8 weeks, so I got ready to order more. My printer offered me a discount if I ordered more than 1000 copies, so I went a little crazy and ordered 2000. A few days later, my mother called to tell me that her *copies* of the book had arrived. Surprised, I asked how many. She said ten. It turned out I had sold only two books to non-relatives. And it took a lot longer than I expected to sell 2000 copies. The details of this story are unique, but the general problem is something almost every retailer has to figure out.
Based on past sales, how do you predict future sales? And based on those predictions, how do you decide how much to order and when? Often the cost of a bad decision is complicated. If you place a lot of small orders rather than one big one, your costs are likely to be higher. If you run out of inventory, you might lose customers. And if you order too much, you have to pay the various costs of holding inventory. So, let's solve a version of the problem I faced. It will take some work to set up the problem; the details are in the notebook for this chapter. Suppose you start selling books online. During the first week you sell 10 copies (and let's assume that none of the customers are your mother). During the second week you sell 9 copies. Assuming that the arrival of orders is a Poisson process, we can think of the weekly orders as samples from a Poisson distribution with an unknown rate. We can use orders from past weeks to estimate the parameter of this distribution, generate a predictive distribution for future weeks, and compute the order size that maximizes expected profit. * Suppose the cost of printing the book is \\$5 per copy, * But if you order 100 or more, it's \\$4.50 per copy. * For every book you sell, you get \\$10. * But if you run out of books before the end of 8 weeks, you lose \\$50 in future sales for every week you are out of stock. * If you have books left over at the end of 8 weeks, you lose \\$2 in inventory costs per extra book. For example, suppose you get orders for 10 books per week, every week. If you order 60 books, * The total cost is \\$300. * You sell all 60 books, so you make \\$600. * But the book is out of stock for two weeks, so you lose \\$100 in future sales. In total, your profit is \\$200. If you order 100 books, * The total cost is \\$450. * You sell 80 books, so you make \\$800. * But you have 20 books left over at the end, so you lose \\$40. In total, your profit is \\$310.
Combining these costs with your predictive distribution, how many books should you order to maximize your expected profit? To get you started, the following functions compute profits and costs according to the specification of the problem: ``` def print_cost(printed): """Compute print costs. printed: integer number printed """ if printed < 100: return printed * 5 else: return printed * 4.5 def total_income(printed, orders): """Compute income. printed: integer number printed orders: sequence of integer number of books ordered """ sold = min(printed, np.sum(orders)) return sold * 10 def inventory_cost(printed, orders): """Compute inventory costs. printed: integer number printed orders: sequence of integer number of books ordered """ excess = printed - np.sum(orders) if excess > 0: return excess * 2 else: return 0 def out_of_stock_cost(printed, orders): """Compute out of stock costs. printed: integer number printed orders: sequence of integer number of books ordered """ weeks = len(orders) total_orders = np.cumsum(orders) for i, total in enumerate(total_orders): if total > printed: return (weeks-i) * 50 return 0 def compute_profit(printed, orders): """Compute profit. printed: integer number printed orders: sequence of integer number of books ordered """ return (total_income(printed, orders) - print_cost(printed)- out_of_stock_cost(printed, orders) - inventory_cost(printed, orders)) ``` To test these functions, suppose we get exactly 10 orders per week for eight weeks: ``` always_10 = [10] * 8 always_10 ``` If you print 60 books, your net profit is \\$200, as in the example. ``` compute_profit(60, always_10) ``` If you print 100 books, your net profit is \\$310. ``` compute_profit(100, always_10) ``` Of course, in the context of the problem you don't know how many books will be ordered in any given week. You don't even know the average rate of orders. However, given the data and some assumptions about the prior, you can compute the distribution of the rate of orders. 
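As a minimal sketch of that last step — estimating the distribution of the order rate — here is a grid-based Poisson update under an assumed uniform prior over $\lambda$; the grid bounds and resolution are arbitrary choices, not from the text:

```python
import numpy as np
from scipy.stats import poisson

# Grid of candidate weekly order rates (lambda); bounds are arbitrary.
lams = np.linspace(0.1, 30.0, 300)
prior = np.full(lams.size, 1.0 / lams.size)   # uniform prior over the grid

# Update on the two observed weeks of orders: 10 copies, then 9.
post = prior.copy()
for k in [10, 9]:
    post *= poisson.pmf(k, lams)
post /= post.sum()

posterior_mean = np.sum(lams * post)
```

With 19 orders observed over two weeks and a flat prior, the posterior mean lands near 10 orders per week.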
You'll have a chance to do that, but to demonstrate the decision analysis part of the problem, I'll start with the arbitrary assumption that order rates come from a gamma distribution with mean 9. Here's a `Pmf` that represents this distribution. ``` from scipy.stats import gamma alpha = 9 qs = np.linspace(0, 25, 101) ps = gamma.pdf(qs, alpha) pmf = Pmf(ps, qs) pmf.normalize() pmf.mean() ``` And here's what it looks like: ``` pmf.plot(color='C1') decorate(xlabel=r'Book ordering rate ($\lambda$)', ylabel='PMF') ``` Now, we *could* generate a predictive distribution for the number of books ordered in a given week, but in this example we have to deal with a complicated cost function. In particular, `out_of_stock_cost` depends on the sequence of orders. So, rather than generate a predictive distribution, I suggest we run simulations. I'll demonstrate the steps. First, from our hypothetical distribution of rates, we can draw a random sample of 1000 values. ``` rates = pmf.choice(1000) np.mean(rates) ``` For each possible rate, we can generate a sequence of 8 orders. ``` np.random.seed(17) order_array = np.random.poisson(rates, size=(8, 1000)).transpose() order_array[:5, :] ``` Each row of this array is a hypothetical sequence of orders based on a different hypothetical order rate. Now, if you tell me how many books you printed, I can compute your expected profits, averaged over these 1000 possible sequences. ``` def compute_expected_profits(printed, order_array): """Compute profits averaged over a sample of orders. printed: number printed order_array: one row per sample, one column per week """ profits = [compute_profit(printed, orders) for orders in order_array] return np.mean(profits) ``` For example, here are the expected profits if you order 70, 80, or 90 books. 
``` compute_expected_profits(70, order_array) compute_expected_profits(80, order_array) compute_expected_profits(90, order_array) ``` Now, let's sweep through a range of values and compute expected profits as a function of the number of books you print. ``` printed_array = np.arange(70, 110) t = [compute_expected_profits(printed, order_array) for printed in printed_array] expected_profits = pd.Series(t, printed_array) expected_profits.plot(label='') decorate(xlabel='Number of books printed', ylabel='Expected profit ($)') ``` Here is the optimal order and the expected profit. ``` expected_profits.idxmax(), expected_profits.max() ``` Now it's your turn. Choose a prior that you think is reasonable, update it with the data you are given, and then use the posterior distribution to do the analysis I just demonstrated. ``` # Solution # For a prior I chose a log-uniform distribution; # that is, a distribution that is uniform in log-space # from 1 to 100 books per week. qs = np.logspace(0, 2, 101) prior = Pmf(1, qs) prior.normalize() # Solution # Here's the CDF of the prior prior.make_cdf().plot(color='C1') decorate(xlabel=r'Book ordering rate ($\lambda$)', ylabel='CDF') # Solution # Here's a function that updates the distribution of lambda # based on one week of orders from scipy.stats import poisson def update_book(pmf, data): """Update book ordering rate. pmf: Pmf of book ordering rates data: observed number of orders in one week """ k = data lams = pmf.index likelihood = poisson.pmf(k, lams) pmf *= likelihood pmf.normalize() # Solution # Here's the update after week 1. posterior1 = prior.copy() update_book(posterior1, 10) # Solution # And the update after week 2. 
posterior2 = posterior1.copy() update_book(posterior2, 9) # Solution prior.mean(), posterior1.mean(), posterior2.mean() # Solution # Now we can generate a sample of 1000 values from the posterior rates = posterior2.choice(1000) np.mean(rates) # Solution # And we can generate a sequence of 8 weeks for each value order_array = np.random.poisson(rates, size=(8, 1000)).transpose() order_array[:5, :] # Solution # Here are the expected profits for each possible order printed_array = np.arange(70, 110) t = [compute_expected_profits(printed, order_array) for printed in printed_array] expected_profits = pd.Series(t, printed_array) # Solution # And here's what they look like. expected_profits.plot(label='') decorate(xlabel='Number of books printed', ylabel='Expected profit ($)') # Solution # Here's the optimal order. expected_profits.idxmax() ```
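To close out the chapter, here is a self-contained sanity check of the win-probability logic, restating the chapter's functions and running them on a tiny made-up sample of opponent differences, with a plain dict standing in for the posterior `Pmf`:

```python
import numpy as np

def prob_overbid(sample_diff):
    """Fraction of past differences that are overbids (positive)."""
    return np.mean(sample_diff > 0)

def prob_worse_than(diff, sample_diff):
    """Fraction of past differences below the given diff."""
    return np.mean(sample_diff < diff)

def compute_prob_win(diff, sample_diff):
    """Probability of winning given your own diff (bid - price)."""
    if diff > 0:
        return 0  # overbid: automatic loss
    # opponent overbids, or underbids by more than you do
    return prob_overbid(sample_diff) + prob_worse_than(diff, sample_diff)

def total_prob_win(bid, posterior, sample_diff):
    """Average win probability over a dict of price -> probability."""
    return sum(prob * compute_prob_win(bid - price, sample_diff)
               for price, prob in posterior.items())

# Toy opponent differences: one overbid, one deep underbid.
toy_diff = np.array([-10000.0, 5000.0])
toy_posterior = {20000.0: 0.5, 30000.0: 0.5}
```

The toy numbers are chosen so the arithmetic is easy to check by hand.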
# <center>HW 01: Geomviz: Visualizing Differential Geometry</center> ## <center>Special Euclidean Group SE(n)</center> <center>$\color{#003660}{\text{Swetha Pillai, Ryan Guajardo}}$</center> # <center> 1.) Mathematical Definition of Special Euclidean SE(n)</center> ### <center> This group is defined as the set of direct isometries - or rigid-body transformations - of $R^n$.</center> <center>i.e. the linear transformations of the affine space $R^n$ that preserve its canonical inner product, or Euclidean distance between points.</center> &nbsp; &nbsp; &nbsp; *** $$ \rho(x) = Rx + u $$ *** <center>$\rho$ is comprised of a rotational part $R$ and a translational part $u$.</center> $$ SE(n) = \{(R,u)\ \vert\ R \in SO(n),\ u \in R^n\} $$ <center>where $SO(n)$ is the special orthogonal group.</center> # <center> 2.) Uses of Special Euclidean SE(n) in real-world applications</center> ## <center>Rigid Body Kinematics</center> Can represent linear and angular displacements of rigid bodies, commonly in SE(3). <center><img src="rigid.png" width="500"/></center> ## <center> Autonomous Quadcopter Path Planning!</center> If we want to make a quadcopter autonomous, a predefined path must be computed by finding collision-free paths through a space whose topological structure is SE(3). <center><img src="quadcopterpic.jpeg" width="500"/></center> ## <center> Optimal Paths for Polygonal Robots SE(2) </center> Similar to the autonomous quadcopter, but we are now in a 2-dimensional plane, hence SE(2). <center><img src="polygonal.png" width="500"/></center> ## <center>Projective Model of Ideal Pinhole Camera </center> Camera coordinate system to world coordinate system transform. <center><img src="pinhole.png" width="500"/></center> ## <center>Pose Estimation </center> ![SegmentLocal](pose_estimation.gif "segment") # 3.)
Visualization of Elementary Operations on SE(3) Showcase how your visualization can be used by plotting the inputs and outputs of operations such as exp, log, and geodesics. ``` %pip install geomstats import warnings warnings.filterwarnings("ignore") from Special_Euclidean import * manifold = Special_Euclidean() point = manifold.random_point() # point = np.array([1,1,1,1,1,1]) manifold.plot(point) manifold.scatter(5) random_points = manifold.random_point(2) manifold.plot_exp(random_points[0], random_points[1]) manifold.plot_log(random_points[0], random_points[1]) # point = np.eye(6) point = np.array([0,0,0,0,0,0]) # all rotations and vectors equal vector = np.array([0.5, 0.5, 0.5, 0.5, 0.5, 0.5]) # rotation in one dimension, no translation # vector = np.array([0,0,.9,0,0,0]) # rotation in one dimension, translation in one direction # vector = np.array([0,0,.9,0,0,.5]) N_STEPS = 10 manifold.plot_geodesic(point, vector, N_STEPS) ``` # 4.) Conclusion ## SE(n) is very useful. Geomstats: https://github.com/geomstats/geomstats http://ingmec.ual.es/~jlblanco/papers/jlblanco2010geometry3D_techrep.pdf https://ieeexplore.ieee.org/document/7425231 https://arm.stanford.edu/publications/optimal-paths-polygonal-robots-se2 https://mappingignorance.org/2015/10/14/shortcuts-for-efficiently-moving-a-quadrotor-throughout-the-special-euclidean-group-se3-and-2/ https://arxiv.org/abs/2111.00190 https://www.seas.upenn.edu/~meam620/slides/kinematicsI.pdf
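As a minimal numeric companion to the definition in section 1 (separate from the geomstats-based assignment code above), the action $\rho(x) = Rx + u$ and its distance-preserving property can be checked with homogeneous matrices in SE(2); the function name `se2` is just an illustrative choice:

```python
import numpy as np

def se2(theta, u):
    """Homogeneous 3x3 matrix for an SE(2) element (R, u)."""
    c, s = np.cos(theta), np.sin(theta)
    T = np.eye(3)
    T[:2, :2] = [[c, -s], [s, c]]
    T[:2, 2] = u
    return T

# rho(x) = R x + u, applied via the homogeneous representation
T = se2(np.pi / 2, [1.0, 0.0])
x = np.array([1.0, 0.0, 1.0])   # the point (1, 0) in homogeneous coordinates
y = T @ x                        # rotate 90 degrees, then translate by (1, 0)

# Direct isometries preserve Euclidean distance between points.
p, q = np.array([0.0, 0.0, 1.0]), np.array([3.0, 4.0, 1.0])
d_before = np.linalg.norm((p - q)[:2])
d_after = np.linalg.norm((T @ p - T @ q)[:2])
```

Composing two such matrices with `@` composes the corresponding rigid-body transformations, which is the group operation of SE(2).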
``` import pandas as pd from IPython.display import display pd.options.display.max_columns = 20 pd.set_option('display.max_colwidth',500) # our data is wide, so limit what is displayed import numpy as np # For PySpark from pyspark.sql import SparkSession ``` ## Naming: `df` = pandas DataFrame, `ds` = Spark DataFrame ``` #df = pd.read_csv("../datas_not_to_upload/989.hg19_multianno.txt.intervar", sep="\t", low_memory=False) df = pd.read_csv("../datas_not_to_upload/1428/1428.hg19_multianno.txt.intervar", sep = "\t", low_memory=False) df.style.set_table_attributes('style="font-size:10px"') df.head() df.columns new_col = ['Chr', 'Start', 'End', 'Ref', 'Alt', 'Ref_Gene', 'Func_refGene', 'ExonicFunc_refGene', 'Gene_ensGene', 'avsnp147', 'AAChange_ensGene', 'AAChange_refGene', 'Clinvar','InterVar_Evidence ', 'Freq_gnomAD_genome_ALL', 'Freq_esp6500siv2_all', 'Freq_1000g2015aug_all', 'CADD_raw', 'CADD_phred', 'SIFT_score', 'GERP++_RS', 'phyloP46way_placental', 'dbscSNV_ADA_SCORE', 'dbscSNV_RF_SCORE', 'Interpro_domain', 'AAChange.knownGene', 'rmsk', 'MetaSVM_score', 'Freq_gnomAD_genome_POPs', 'OMIM', 'Phenotype_MIM', 'OrphaNumber', 'Orpha', 'Otherinfo'] df.dtypes # column names with data types ``` ## Loading with PySpark ``` spark = SparkSession.builder.appName("Inter Var data analysis")\ .config("spark.some.config.option", "some-value")\ .getOrCreate() #df = spark.read.format("com.databricks.spark.csv").\ # options(header = "true", inferschema = "true").\ # load("../datas_not_to_upload/989.hg19_multianno.txt.intervar") # load the DataFrame from CSV ds = spark.read.csv("../datas_not_to_upload/989.hg19_multianno.txt.intervar", header= True, inferSchema= True,sep="\t") ds.show() ds.printSchema() # column names and types ``` ## Checking Missing Values with PySpark ``` from pyspark.sql.functions import count def my_count(ds): "Count non-null values in each column of a Spark DataFrame" ds.agg(*[count(c).alias(c) for c in ds.columns]).show() # filling and dropping null values in a Spark DataFrame: # df.na.fill() # replace null values # df.na.drop() # drop any
rows with null values #df.where() # Filter rows using the given condition # df.filter() # Filters rows using the given condition # df.distinct() # Returns distinct rows in this DataFrame # df.sample() # Returns a sampled subset of this DataFrame # df.sampleBy() # Returns a stratified sample without replacement ``` #### Joining data with PySpark ``` # Data Join # left.join(right, key, how = "*") # * = left, right, inner, full ds.describe() # describe the Spark DataFrame; the full output is too wide to be readable right now len(ds.columns) ds.sample(fraction = 0.001).limit(10).toPandas() ``` ### Pandas ``` df.head() import pandas_profiling as pp import seaborn as sns #pp.ProfileReport(df) # skipped: with a dataset this large the profile takes a long time genes_gestational = ["EBF1", "EEFSEC", "AGTR2", "WNT4", "ADCY5", "RAP2C"] genes_premature = ["EB1", "EEFSEC", "AGTR2"] sns.countplot(df[df["Ref.Gene"].isin(genes_gestational)]["Ref.Gene"]) df.columns #df["clinvar: Clinvar "].unique() #df['ExonicFunc.refGene'].unique() #df['Func.refGene'].unique() #df[' InterVar: InterVar and Evidence '].nunique() #df['Freq_gnomAD_genome_ALL'].nunique() # More than 13470 #df['Freq_esp6500siv2_all'] #df['CADD_raw'].nunique() # 5805 #df[' InterVar: InterVar and Evidence '].unique() #df.head().T #df["clinvar: Clinvar "] ```
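The `isin` filter feeding the count plot above can be illustrated on a toy frame; the rows below are invented, and only the column name mirrors the notebook's `Ref.Gene`:

```python
import pandas as pd

# Toy stand-in for the annotated variant table (rows are made up).
toy = pd.DataFrame({
    "Ref.Gene": ["EBF1", "TP53", "WNT4", "EBF1", "BRCA1"],
})

genes_gestational = ["EBF1", "EEFSEC", "AGTR2", "WNT4", "ADCY5", "RAP2C"]

# Keep only rows whose gene is in the list, then count hits per gene.
hits = toy[toy["Ref.Gene"].isin(genes_gestational)]
counts = hits["Ref.Gene"].value_counts()
```

`value_counts` gives the same per-gene totals that the `countplot` call draws as bars.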
## A track example The file `times.dat` has made up data for 100-m races between Florence Griffith-Joyner and Shelly-Ann Fraser-Pryce. We want to understand how often Shelly-Ann beats Flo-Jo. ``` %pylab inline --no-import-all ``` <!-- Secret comment: How the data were generated w = np.random.normal(0,.07,10000) x = np.random.normal(10.65,.02,10000)+w y = np.random.normal(10.7,.02,10000)+w np.savetxt('times.dat', (x,y), delimiter=',') --> ``` florence, shelly = np.loadtxt('times.dat', delimiter=',') counts, bins, patches = plt.hist(florence,bins=50,alpha=0.2, label='Flo-Jo') counts, bins, patches = plt.hist(shelly,bins=bins,alpha=0.2, label='Shelly-Ann') plt.legend() plt.xlabel('times (s)') np.mean(florence), np.mean(shelly) np.std(florence),np.std(shelly) ``` ## let's make a prediction Based on the mean and std. of their times, let's make a little simulation to predict how often Shelly-Ann beats Flo-Jo. We can use propagation of errors to predict mean and standard deviation for $q=T_{shelly}-T_{Florence}$ ``` mean_q = np.mean(shelly)-np.mean(florence) sigma_q = np.sqrt(np.std(florence)**2+np.std(shelly)**2) f_guess = np.random.normal(np.mean(florence),np.std(florence),10000) s_guess = np.random.normal(np.mean(shelly),np.std(shelly),10000) toy_difference = s_guess-f_guess ``` Make Toy data ``` #toy_difference = np.random.normal(mean_q, sigma_q, 10000) counts, bins, patches = plt.hist(toy_difference,bins=50, alpha=0.2, label='toy data') counts, bins, patches = plt.hist(toy_difference[toy_difference<0],bins=bins, alpha=0.2) norm = (bins[1]-bins[0])*10000 plt.plot(bins,norm*mlab.normpdf(bins,mean_q,sigma_q), label='prediction') plt.legend() plt.xlabel('Shelly - Florence') # predict fraction of wins np.sum(toy_difference<0)/10000. #check toy data looks like real data counts, bins, patches = plt.hist(f_guess,bins=50,alpha=0.2) counts, bins, patches = plt.hist(s_guess,bins=bins,alpha=0.2) ``` ## How often does she actually win? 
``` counts, bins, patches = plt.hist(shelly-florence,bins=50,alpha=0.2) counts, bins, patches = plt.hist((shelly-florence)[florence-shelly>0],bins=bins,alpha=0.2) plt.xlabel('Shelly - Florence') 1.*np.sum(florence-shelly>0)/florence.size ``` ## What's going on? ``` plt.scatter(f_guess,s_guess, alpha=0.01) plt.scatter(florence,shelly, alpha=0.01) plt.hexbin(shelly,florence, alpha=1) ``` Previously we learned the propagation of errors formula neglecting correlation: $\sigma_q^2 = \left( \frac{\partial q}{ \partial x} \sigma_x \right)^2 + \left( \frac{\partial q}{ \partial y}\, \sigma_y \right)^2 = \frac{\partial q}{ \partial x} \frac{\partial q}{ \partial x} C_{xx} + \frac{\partial q}{ \partial y} \frac{\partial q}{ \partial y} C_{yy}$ Now we need to extend the formula to take correlation into account: $\sigma_q^2 = \frac{\partial q}{ \partial x} \frac{\partial q}{ \partial x} C_{xx} + \frac{\partial q}{ \partial y} \frac{\partial q}{ \partial y} C_{yy} + 2 \frac{\partial q}{ \partial x} \frac{\partial q}{ \partial y} C_{xy} $ ``` # covariance matrix cov_matrix = np.cov(shelly,florence) cov_matrix # normalized correlation matrix np.corrcoef(shelly,florence) # q = T_shelly - T_florence # x = T_shelly # y = T_florence # propagation of errors cov_matrix[0,0]+cov_matrix[1,1]-2*cov_matrix[0,1] mean_q = np.mean(shelly)-np.mean(florence) sigma_q_with_corr = np.sqrt(cov_matrix[0,0]+cov_matrix[1,1]-2*cov_matrix[0,1]) sigma_q_no_corr = np.sqrt(cov_matrix[0,0]+cov_matrix[1,1]) counts, bins, patches = plt.hist(shelly-florence,bins=50,alpha=0.2) counts, bins, patches = plt.hist((shelly-florence)[florence-shelly>0],bins=bins,alpha=0.2) norm = (bins[1]-bins[0])*10000 plt.plot(bins,norm*mlab.normpdf(bins,mean_q,sigma_q_with_corr), label='prediction with correlation') plt.plot(bins,norm*mlab.normpdf(bins,mean_q, sigma_q_no_corr), label='prediction without correlation') plt.legend() plt.xlabel('Shelly - Florence') 1.*np.sum(florence-shelly>0)/florence.size np.std(florence-shelly)
np.sqrt(2.)*0.073 ((np.sqrt(2.)*0.073)**2-0.028**2)/2. .073**2 np.std(florence+shelly) np.sqrt(2*(np.sqrt(2.)*0.073)**2 -0.028**2) ```
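The effect of the correlation term can be checked numerically on synthetic, positively correlated times, where a shared fluctuation `w` plays the role of common race conditions (the specific means and widths here are illustrative, chosen to resemble the data above):

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(0, 0.07, 100_000)            # shared fluctuation -> correlation
x = rng.normal(10.65, 0.02, 100_000) + w    # Florence's times
y = rng.normal(10.70, 0.02, 100_000) + w    # Shelly-Ann's times

C = np.cov(y, x)                            # rows/cols ordered (shelly, florence)
sigma_no_corr = np.sqrt(C[0, 0] + C[1, 1])
sigma_with_corr = np.sqrt(C[0, 0] + C[1, 1] - 2 * C[0, 1])

# the shared term cancels in the difference, so the correlation-aware
# estimate is much smaller and matches the empirical spread of y - x
print(sigma_no_corr, sigma_with_corr, np.std(y - x))
```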
# Distances ``` from scipy.spatial import distance_matrix import pandas as pd data = pd.read_csv("../datasets/movies/movies.csv", sep=";") data movies = data.columns.values.tolist()[1:] movies def dm_to_df(dd, col_name): import pandas as pd return pd.DataFrame(dd, index=col_name, columns=col_name) ``` ## Manhattan distance ``` dd1 = distance_matrix(data[movies], data[movies], p=1) dm_to_df(dd1, data['user_id']) ``` ## Euclidean distance ``` dd2 = distance_matrix(data[movies], data[movies], p=2) dm_to_df(dd2, data['user_id']) ``` ------ ``` import matplotlib.pyplot as plt from mpl_toolkits.mplot3d import Axes3D fig = plt.figure() ax = fig.add_subplot(111, projection="3d") ax.scatter(xs = data["star_wars"], ys = data["lord_of_the_rings"], zs=data["harry_potter"]) ``` ## Linkages ``` df = dm_to_df(dd1, data["user_id"]) df Z=[] df[11]=df[1]+df[10] df.loc[11]=df.loc[1]+df.loc[10] Z.append([1,10,0.7,2])#id1, id2, distance, n_elements_in_cluster -> 11. df for i in df.columns.values.tolist(): df.loc[11][i] = min(df.loc[1][i], df.loc[10][i]) df.loc[i][11] = min(df.loc[i][1], df.loc[i][10]) df df = df.drop([1,10]) df = df.drop([1,10], axis=1) df x = 2 y = 7 n = 12 df[n]=df[x]+df[y] df.loc[n]=df.loc[x]+df.loc[y] Z.append([x,y,df.loc[x][y],2])#id1, id2, distance, n_elements_in_cluster -> 12. for i in df.columns.values.tolist(): df.loc[n][i] = min(df.loc[x][i], df.loc[y][i]) df.loc[i][n] = min(df.loc[i][x], df.loc[i][y]) df = df.drop([x,y]) df = df.drop([x,y], axis=1) df x = 11 y = 13 n = 14 df[n]=df[x]+df[y] df.loc[n]=df.loc[x]+df.loc[y] Z.append([x,y,df.loc[x][y],2])#id1, id2, distance, n_elements_in_cluster -> 14. for i in df.columns.values.tolist(): df.loc[n][i] = min(df.loc[x][i], df.loc[y][i]) df.loc[i][n] = min(df.loc[i][x], df.loc[i][y]) df = df.drop([x,y]) df = df.drop([x,y], axis=1) df x = 9 y = 12 z = 14 n = 15 df[n]=df[x]+df[y] df.loc[n]=df.loc[x]+df.loc[y] Z.append([x,y,df.loc[x][y],3])#id1, id2, distance, n_elements_in_cluster -> 15.
for i in df.columns.values.tolist(): df.loc[n][i] = min(df.loc[x][i], df.loc[y][i], df.loc[z][i]) df.loc[i][n] = min(df.loc[i][x], df.loc[i][y], df.loc[i][z]) df = df.drop([x,y,z]) df = df.drop([x,y,z], axis=1) df x = 4 y = 6 z = 15 n = 16 df[n]=df[x]+df[y] df.loc[n]=df.loc[x]+df.loc[y] Z.append([x,y,df.loc[x][y],3])#id1, id2, distance, n_elements_in_cluster -> 16. for i in df.columns.values.tolist(): df.loc[n][i] = min(df.loc[x][i], df.loc[y][i], df.loc[z][i]) df.loc[i][n] = min(df.loc[i][x], df.loc[i][y], df.loc[i][z]) df = df.drop([x,y,z]) df = df.drop([x,y,z], axis=1) df x = 3 y = 16 n = 17 df[n]=df[x]+df[y] df.loc[n]=df.loc[x]+df.loc[y] Z.append([x,y,df.loc[x][y],2])#id1, id2, distance, n_elements_in_cluster -> 17. for i in df.columns.values.tolist(): df.loc[n][i] = min(df.loc[x][i], df.loc[y][i]) df.loc[i][n] = min(df.loc[i][x], df.loc[i][y]) df = df.drop([x,y]) df = df.drop([x,y], axis=1) df Z ```
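The manual merge-and-drop procedure above (merge the two closest clusters, keep the minimum distance to every other point) is exactly single-linkage agglomerative clustering, and scipy can build the same kind of `Z` matrix directly. A sketch on a small made-up rating matrix rather than the movies file:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage
from scipy.spatial.distance import pdist

# hypothetical user-by-movie rating matrix (4 users, 3 movies)
X = np.array([[5, 4, 1],
              [4, 5, 1],
              [1, 1, 5],
              [2, 1, 4]], dtype=float)

# Manhattan distances (the p=1 case above) in condensed form, then single linkage
Z = linkage(pdist(X, metric="cityblock"), method="single")

# each row of Z: [cluster_i, cluster_j, merge distance, size of merged cluster]
print(Z)
```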
``` import numpy as np import pandas as pd import os import datetime import pickle from PIL import Image import matplotlib.pyplot as plt import seaborn as sns import sklearn from sklearn.neighbors import KNeighborsClassifier from sklearn.metrics import confusion_matrix ``` # Baseline: K Nearest Neighbors Classification ### Load leaf count data ``` df = pd.read_csv('data.csv', index_col=0) # DataFrame.from_csv was deprecated and later removed from pandas ``` ### Add new column of best guess at leaf count by taking the average of minimum and maximum ``` df['avg'] = df.apply(lambda x: (x['minimum']+x['maximum'])/2, axis=1) ``` ### Examine sample of leaf data ``` df.sample(10) ``` ### Examine distribution of leaf counts per plant ``` plt.hist(df.minimum, bins=[i for i in range(1, 18)], color='r', alpha=0.4, rwidth=0.3, label='minimum') plt.hist(df.maximum, bins=[i for i in range(1, 18)], color='b', alpha=0.4, rwidth=0.3, label='maximum') plt.hist(df.avg, bins=[i for i in range(1, 18)], color='g', alpha=0.5, rwidth=0.85, label='avg') plt.legend(loc='upper right') plt.title('Distribution of Plant Leaf Counts') plt.show() ``` ### Examine sample image ``` Image.open('plants_augmented/plant_1.png') ``` ### Convert sample image to grayscale ``` Image.open('plants_augmented/plant_1.png').convert('L') ``` ### Load image data, convert to grayscale, and then convert to array ``` image_data = [np.array(Image.open('plants_augmented/plant_{}.png'.format(n)).convert('L')) for n in df.index] ``` ### Examine distribution of grayscale pixel values ``` grayscale_pixels = [] for i in range(len(image_data)): grayscale_pixels += list(image_data[i].flatten()) plt.hist(grayscale_pixels, bins=[i for i in range(0, 256, 10)], color='b', alpha=0.4, rwidth=0.85) plt.title('Distribution of Gray-Scale Pixel Counts') plt.show() ``` There is a clear bimodal distribution of grayscale pixel values, which is what we expect to see for images of dark foreground objects against a lighter background.
### Separate training set (85%) and test set (15%) ``` mask = np.random.rand(len(df)) <= 0.85 df_train = df[mask] df_test = df[~mask] ``` ### Make x_train, y_train, x_test, and y_test For the baseline only, we will use the minimum number of leaves per plant as a stand-in for the best guess of the true number of leaves. This is because KNN is an algorithm for classification tasks, and therefore we need our labels to be whole numbers so that they can be treated as robust classes. In contrast, the average can be a float, which would break our class requirement. ``` x_train = [image_data[i].flatten() for i, x in enumerate(list(mask)) if x == True] x_test = [image_data[i].flatten() for i, x in enumerate(list(mask)) if x == False] y_train = list(df_train['minimum']) y_test = list(df_test['minimum']) ``` ### Pool of K values to test: 1, 2, 3, 5, 10, 15 ``` model_1NN = KNeighborsClassifier(n_neighbors=1, n_jobs=-1) model_2NN = KNeighborsClassifier(n_neighbors=2, n_jobs=-1) model_3NN = KNeighborsClassifier(n_neighbors=3, n_jobs=-1) model_5NN = KNeighborsClassifier(n_neighbors=5, n_jobs=-1) model_10NN = KNeighborsClassifier(n_neighbors=10, n_jobs=-1) model_15NN = KNeighborsClassifier(n_neighbors=15, n_jobs=-1) model_pool = [(1, model_1NN), (2, model_2NN), (3, model_3NN), (5, model_5NN), (10, model_10NN), (15, model_15NN)] for k, model in model_pool: model.fit(x_train, y_train) ``` ### Class prediction accuracy: exact class predictions ``` for k, model in model_pool: print('K = {}: {}'.format(k, round(model.score(x_test, y_test), 3))) ``` ### Class prediction accuracy: within range of plus-or-minus 1 ``` TOLERANCE = 1 for k, model in model_pool: pred = model.predict(x_test) acc = sum([1 for i, y_hat in enumerate(pred) if abs(y_hat - y_test[i]) <= TOLERANCE]) / len(y_test) print('K = {}: {}'.format(k, round(acc, 3))) ``` ### Class prediction accuracy: confusion matrix ``` for k, model in model_pool: pred = model.predict(x_test) arr = 
confusion_matrix(y_test, pred, labels=np.unique(y_test)) df_cm = pd.DataFrame(arr, index=[i for i in np.unique(y_test)], columns=[i for i in np.unique(y_test)]) sns.heatmap(df_cm, annot=True) plt.show() ```
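The "plus-or-minus 1" score above generalizes into a small helper: count a prediction as correct when it falls within a tolerance of the true label. A sketch with made-up leaf counts (the values below are illustrative, not from the dataset):

```python
import numpy as np

def tolerance_accuracy(y_true, y_pred, tol=1):
    """Fraction of predictions within +/- tol of the true label."""
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    return float(np.mean(np.abs(y_pred - y_true) <= tol))

# hypothetical true vs. predicted leaf counts
y_true = [4, 6, 7, 9, 12]
y_pred = [4, 5, 9, 9, 10]
print(tolerance_accuracy(y_true, y_pred, tol=1))  # 3 of 5 within +/-1 -> 0.6
```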
## Angelica EDA #### Library Imports ``` import tensorflow as tf from keras.preprocessing.image import ImageDataGenerator, array_to_img, img_to_array, load_img from keras.preprocessing import image_dataset_from_directory from keras.models import Sequential, load_model from keras import layers, optimizers, models from keras import metrics from keras import optimizers from sklearn.model_selection import train_test_split from sklearn.metrics import classification_report,confusion_matrix import numpy as np import matplotlib.pyplot as plt %matplotlib inline # import opencv # FIX, NOT WORKING AT THE MOMENT. import os ``` #### Data Imports ``` # train_dir_norm= '../data/chest_xray/train/NORMAL' # train_dir_pn = '../data/chest_xray/train/PNEUMONIA' # test_dir_norm= '../data/chest_xray/test/NORMAL' # test_dir_pn= '../data/chest_xray/test/PNEUMONIA' # val_dir_norm= '../data/chest_xray/val/NORMAL' # val_dir_pn= '../data/chest_xray/val/PNEUMONIA' #write a function for importing os.getcwd() val2 = '/Users/angelicadegaetano/Documents/Flatiron/Lessons/pneumonia_detection/data' # train_imgs_norm = [file for file in os.listdir(train_dir_norm) if file.endswith('.jpeg')] # train_imgs_pn = [file for file in os.listdir(train_dir_pn) if file.endswith('.jpeg')] # test_imgs_norm = [file for file in os.listdir(test_dir_norm) if file.endswith('.jpeg')] # test_imgs_pn = [file for file in os.listdir(test_dir_pn) if file.endswith('.jpeg')] # val_imgs_norm = [file for file in os.listdir(val_dir_norm) if file.endswith('.jpeg')] # val_imgs_pn = [file for file in os.listdir(val_dir_pn) if file.endswith('.jpeg')] #write a function print(len(train_imgs_norm)) print(len(train_imgs_pn)) print(len(test_imgs_norm)) print(len(test_imgs_pn)) print(len(val_imgs_norm)) print(len(val_imgs_pn)) train_dir= '../data/chest_xray/train' test_dir= '../data/chest_xray/test' val_dir= '../data/chest_xray/val' ``` #### Data Preprocessing ``` train_gen = ImageDataGenerator().flow_from_directory(train_dir) #test_gen = 
ImageDataGenerator().flow_from_directory(test_dir, target_size= (64, 64), batch_size=80) test_gen = ImageDataGenerator().flow_from_directory(test_dir) val_gen = ImageDataGenerator().flow_from_directory(val_dir) tf.keras.preprocessing.image_dataset_from_directory(train_dir) tf.keras.preprocessing.image_dataset_from_directory(test_dir) tf.keras.preprocessing.image_dataset_from_directory(val_dir) train_dir= '../data/chest_xray/train' test_dir= '../data/chest_xray/test' val_dir= '../data/chest_xray/val' train_datagen = ImageDataGenerator(rescale=1./255) test_datagen = ImageDataGenerator(rescale=1./255) train_generator = train_datagen.flow_from_directory( # This is the target directory train_dir, # All images will be resized to 150x150 target_size=(150, 150), batch_size=20, # Since we use binary_crossentropy loss, we need binary labels class_mode='binary') validation_generator = test_datagen.flow_from_directory(val_dir, target_size=(150, 150), batch_size=20, class_mode='binary') from keras import layers from keras import models model = models.Sequential() model.add(layers.Conv2D(32, (3, 3), activation='relu', input_shape=(150, 150, 3))) model.add(layers.MaxPooling2D((2, 2))) model.add(layers.Conv2D(64, (3, 3), activation='relu')) model.add(layers.MaxPooling2D((2, 2))) model.add(layers.Conv2D(128, (3, 3), activation='relu')) model.add(layers.MaxPooling2D((2, 2))) model.add(layers.Conv2D(128, (3, 3), activation='relu')) model.add(layers.MaxPooling2D((2, 2))) model.add(layers.Flatten()) model.add(layers.Dense(512, activation='relu')) model.add(layers.Dense(1, activation='sigmoid')) from keras import optimizers model.compile(loss='binary_crossentropy', optimizer=optimizers.RMSprop(lr=1e-4), metrics=['acc']) history = model.fit_generator(train_generator, steps_per_epoch=100, epochs=30, validation_data=validation_generator, validation_steps=50) history = model.fit_generator(train_generator, steps_per_epoch=50, epochs=10, validation_data=validation_generator, 
validation_steps=25) model.compile(loss='binary_crossentropy', optimizer=optimizers.RMSprop(lr=1e-4), metrics=['mse']) history = model.fit_generator(train_generator, steps_per_epoch=15, epochs=10, validation_data=validation_generator, validation_steps=210) from keras import metrics model.compile(loss='binary_crossentropy', optimizer=optimizers.RMSprop(lr=1e-4), metrics= [metrics.Recall()]) history = model.fit_generator(train_generator, steps_per_epoch=15, epochs=10, validation_data=validation_generator, validation_steps=10) ``` #### Data ``` train_gen = ImageDataGenerator().flow_from_directory(train_dir, target_size = (200, 200), class_mode='binary') test_gen = ImageDataGenerator().flow_from_directory(test_dir, target_size = (200, 200),class_mode='binary') val_gen = ImageDataGenerator().flow_from_directory(val_dir, target_size = (200, 200), class_mode='binary') ``` ### Model Building ##### FSM ``` model = Sequential() model.add(layers.Dense(15, activation= 'relu', input_shape= (200,200,3))) # input layer model.add(layers.Dense(20 , activation= 'relu')) # 1 hidden layer model.add(layers.Dense(1, activation= 'sigmoid')) # output layer model.compile(optimizer='rmsprop', loss= 'binary_crossentropy', metrics =['Recall']) #thinking about RECALL? 
fsm_model_fit = model.fit() #FSM CNN model_1 = Sequential() model_1.add(layers.Conv2D(25, (3, 3), activation='relu', input_shape=(150, 150, 3))) # was model.add, which put the layer on the wrong model model_1.add(layers.MaxPooling2D((2, 2))) model_1.add(layers.Conv2D(50, (3, 3), activation='relu')) model_1.add(layers.MaxPooling2D((2, 2))) model_1.add(layers.Flatten()) # flatten conv features before the dense output layer model_1.add(layers.Dense(1, activation='sigmoid')) model_1.compile(optimizer='rmsprop', loss= 'binary_crossentropy', metrics =['Recall']) model_1_fit = model_1.fit_generator(train_gen, steps_per_epoch = 50, epochs=15, validation_data= val_gen, validation_steps= 25) ``` ## FSM- vanilla basic ``` train_dir= '../data/chest_xray/train' test_dir= '../data/chest_xray/test' val_dir= '../data/chest_xray/val' # train_datagen = ImageDataGenerator(rescale=1./255) # test_datagen = ImageDataGenerator(rescale=1./255) train_gen = ImageDataGenerator().flow_from_directory( train_dir, target_size=(200, 200), batch_size=15, class_mode='binary') val_gen = ImageDataGenerator().flow_from_directory(val_dir, target_size=(200, 200), batch_size=15, class_mode='binary') model = models.Sequential() model.add(layers.Conv2D(20, (3, 3), activation='relu', input_shape=(200, 200, 3))) model.add(layers.MaxPooling2D((2, 2))) model.add(layers.Flatten()) model.add(layers.Dense(1, activation='sigmoid')) model.compile(loss='binary_crossentropy', optimizer='rmsprop', metrics= [metrics.Recall(), 'acc']) model.summary() history = model.fit_generator(train_gen, steps_per_epoch=20, epochs=10, validation_data=val_gen, validation_steps=10) model2 = models.Sequential() model2.add(layers.Conv2D(20, (3, 3), activation='relu', input_shape=(200, 200, 3))) model2.add(layers.MaxPooling2D((2, 2))) model2.add(layers.Conv2D(50, (3, 3), activation='relu')) model2.add(layers.MaxPooling2D((2, 2))) model2.add(layers.Conv2D(100, (3, 3), activation='relu')) model2.add(layers.MaxPooling2D((2, 2))) model2.add(layers.Conv2D(146, (3, 3), activation='relu')) model2.add(layers.MaxPooling2D((2, 2))) model2.add(layers.Flatten()) model2.add(layers.Dense(1, 
activation='sigmoid')) model2.compile(loss='binary_crossentropy', optimizer='rmsprop', metrics= [metrics.Recall(), 'acc']) history2 = model2.fit_generator(train_gen, steps_per_epoch=20, epochs=20, validation_data=val_gen, validation_steps=20) from sklearn.metrics import classification_report Y_pred = model.predict(val_gen) y_pred = (Y_pred > 0.5).astype(int).ravel() # sigmoid head: threshold at 0.5 (np.argmax on a one-column array is always 0) # note: for val_gen.classes to line up with predictions, the generator should be built with shuffle=False print('Confusion Matrix') print(confusion_matrix(val_gen.classes, y_pred)) print('Classification Report') target_names = ['Normal', 'Pneumonia'] print(classification_report(val_gen.classes, y_pred, target_names=target_names)) from sklearn.metrics import classification_report Y_pred = model2.predict(val_gen) y_pred = (Y_pred > 0.5).astype(int).ravel() # threshold the sigmoid output here as well print('Confusion Matrix') print(confusion_matrix(val_gen.classes, y_pred)) print('Classification Report') target_names = ['Normal', 'Pneumonia'] print(classification_report(val_gen.classes, y_pred, target_names=target_names)) ```
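A subtlety when evaluating a sigmoid head: `np.argmax` on an `(n, 1)` prediction array always returns class 0, because each row has only one entry; the class decision for a sigmoid output is a 0.5 threshold. A sketch with made-up probabilities:

```python
import numpy as np

# hypothetical sigmoid outputs, shape (n_samples, 1)
Y_pred = np.array([[0.1], [0.8], [0.6], [0.3]])

wrong = np.argmax(Y_pred, axis=1)           # always 0: there is only one column
right = (Y_pred > 0.5).astype(int).ravel()  # proper threshold on the sigmoid output

print(wrong)   # [0 0 0 0]
print(right)   # [0 1 1 0]
```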
## Load Python Packages ``` # --- load packages import torch import torch.nn as nn import torch.optim as optim from torch.optim import lr_scheduler from torch.nn.modules.distance import PairwiseDistance from torch.utils.data import Dataset from torchvision import transforms from torchsummary import summary from torch.cuda.amp import GradScaler, autocast from torch.nn import functional as F import time from collections import OrderedDict import numpy as np import os from skimage import io from PIL import Image import cv2 import matplotlib.pyplot as plt ``` ## Set parameters ``` # --- Set all Parameters DatasetFolder = "./CASIA-WebFace" # path to Dataset folder ResNet_sel = "18" # select ResNet type NumberID = 10575 # Number of IDs in dataset batch_size = 256 # batch size Triplet_size = 10000 * batch_size # size of total Triplets device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu') loss_margin = 0.6 # Margin for Triplet loss learning_rate = 0.075 # choose Learning Rate (note that this value will be changed during training) epochs = 200 # number of iterations over total dataset ``` ## Download Datasets #### In this section we download CASIA-WebFace and LFW-Dataset #### we use CASIA-WebFace for Training and LFW for Evaluation ``` # --- Download CASIA-WebFace Dataset print(40*"=" + " Download CASIA WebFace " + 40*'=') ! gdown --id 1Of_EVz-yHV7QVWQGihYfvtny9Ne8qXVz ! unzip CASIA-WebFace.zip ! rm CASIA-WebFace.zip # --- Download LFW Dataset print(40*"=" + " Download LFW " + 40*'=') ! wget http://vis-www.cs.umass.edu/lfw/lfw-deepfunneled.tgz ! tar -xvzf lfw-deepfunneled.tgz ! rm lfw-deepfunneled.tgz ``` # Define ResNet Parts #### 1. Residual block #### 2. Make ResNet from the previous 
block ``` # --- Residual block class ResidualBlock(nn.Module): def __init__(self, in_channels, out_channels, downsample=1): super().__init__() # --- Variables self.in_channels = in_channels self.out_channels = out_channels self.downsample = downsample # --- Residual parts # --- Conv part self.blocks = nn.Sequential(OrderedDict( { # --- First Conv 'conv1' : nn.Conv2d(self.in_channels, self.out_channels, kernel_size=3, stride=self.downsample, padding=1, bias=False), 'bn1' : nn.BatchNorm2d(self.out_channels), 'Relu1' : nn.ReLU(), # --- Secound Conv 'conv2' : nn.Conv2d(self.out_channels, self.out_channels, kernel_size=3, stride=1, padding=1, bias=False), 'bn2' : nn.BatchNorm2d(self.out_channels) } )) # --- shortcut part self.shortcut = nn.Sequential(OrderedDict( { 'conv' : nn.Conv2d(self.in_channels, self.out_channels, kernel_size=1, stride=self.downsample, bias=False), 'bn' : nn.BatchNorm2d(self.out_channels) } )) def forward(self, x): residual = x if (self.in_channels != self.out_channels) : residual = self.shortcut(x) x = self.blocks(x) x += residual return x # # --- Test Residual block # dummy = torch.ones((1, 32, 140, 140)) # block = ResidualBlock(32, 64) # block(dummy).shape # print(block) # --- Make ResNet18 class ResNet18(nn.Module): def __init__(self): super().__init__() # --- Pre layers with 7*7 conv with stride2 and a max-pooling self.PreBlocks = nn.Sequential( nn.Conv2d(3, 64, kernel_size=7, padding=3, stride=2, bias=False), nn.BatchNorm2d(64), nn.ReLU(), nn.MaxPool2d(kernel_size=3, stride=2, padding=1) ) # --- Define all Residual Blocks here self.CoreBlocka = nn.Sequential( ResidualBlock(64,64 ,downsample=1), ResidualBlock(64,64 ,downsample=1), # ResidualBlock(64,64 ,downsample=1), ResidualBlock(64,128 ,downsample=2), ResidualBlock(128,128 ,downsample=1), # ResidualBlock(128,128 ,downsample=1), # ResidualBlock(128,128 ,downsample=1), ResidualBlock(128,256 ,downsample=2), ResidualBlock(256,256 ,downsample=1), # ResidualBlock(256,256 ,downsample=1), # 
ResidualBlock(256,256 ,downsample=1), # ResidualBlock(256,256 ,downsample=1), # ResidualBlock(256,256 ,downsample=1), ResidualBlock(256,512 ,downsample=2), ResidualBlock(512,512 ,downsample=1), # ResidualBlock(512,512 ,downsample=1) ) # --- Make Average pooling self.avg = nn.AdaptiveAvgPool2d((1,1)) # --- FC layer for output self.fc = nn.Linear(512, 512, bias=False) def forward(self, x): x = self.PreBlocks(x) x = self.CoreBlocka(x) x = self.avg(x) x = x.view(x.size(0), -1) x = self.fc(x) x = F.normalize(x, p=2, dim=1) return x # dummy = torch.ones((1, 3, 114, 144)) model = ResNet18() # model # res = model(dummy) model.to(device) summary(model, (3, 114, 114)) del model ``` # Make TripletLoss Class ``` # --- Triplet loss """ This code was imported from tbmoon's 'facenet' repository: https://github.com/tbmoon/facenet/blob/master/utils.py """ import torch from torch.autograd import Function from torch.nn.modules.distance import PairwiseDistance class TripletLoss(Function): def __init__(self, margin): super(TripletLoss, self).__init__() self.margin = margin self.pdist = PairwiseDistance(p=2) def forward(self, anchor, positive, negative): pos_dist = self.pdist.forward(anchor, positive) neg_dist = self.pdist.forward(anchor, negative) hinge_dist = torch.clamp(self.margin + pos_dist - neg_dist, min=0.0) loss = torch.mean(hinge_dist) # print(torch.mean(pos_dist).item(), torch.mean(neg_dist).item(), loss.item()) # print("pos_dist", pos_dist) # print("neg_dist", neg_dist) # print(self.margin + pos_dist - neg_dist) return loss ``` # Make Triplet Dataset from CASIA-WebFace ##### 1. Make Triplet pairs ##### 2. Make them zip ##### 3. Make Dataset Calss ##### 4. 
Define Transform ``` # --- Create Triplet Datasets --- # --- make a list of ids and folders selected_ids = np.uint32(np.round((np.random.rand(int(Triplet_size))) * (NumberID-1))) folders = os.listdir("./CASIA-WebFace/") # --- Itrate on each id and make Triplets list TripletList = [] for index,id in enumerate(selected_ids): # --- find name of id faces folder id_str = str(folders[id]) # --- find list of faces in this folder number_faces = os.listdir("./CASIA-WebFace/"+id_str) # --- Get two Random number for Anchor and Positive while(True): two_random = np.uint32(np.round(np.random.rand(2) * (len(number_faces)-1))) if (two_random[0] != two_random[1]): break # --- Make Anchor and Positive image Anchor = str(number_faces[two_random[0]]) Positive = str(number_faces[two_random[1]]) # --- Make Negative image while(True): neg_id = np.uint32(np.round(np.random.rand(1) * (NumberID-1))) if (neg_id != id): break # --- number of images in negative Folder neg_id_str = str(folders[neg_id[0]]) number_faces = os.listdir("./CASIA-WebFace/"+neg_id_str) one_random = np.uint32(np.round(np.random.rand(1) * (len(number_faces)-1))) Negative = str(number_faces[one_random[0]]) # --- insert Anchor, Positive and Negative image path to TripletList TempList = ["","",""] TempList[0] = id_str + "/" + Anchor TempList[1] = id_str + "/" + Positive TempList[2] = neg_id_str + "/" + Negative TripletList.append(TempList) # # --- Make dataset Triplets File # f = open("CASIA-WebFace-Triplets.txt", "w") # for index, triplet in enumerate(TripletList): # f.write(triplet[0] + " " + triplet[1] + " " + triplet[2]) # if (index != len(TripletList)-1): # f.write("\n") # f.close() # # --- Make zipFile if you need # !zip -r CASIA-WebFace-Triplets.zip CASIA-WebFace-Triplets.txt # # --- Read zip File and extract TripletList # TripletList = [] # # !unzip CASIA-WebFace-Triplets.zip # # --- Read text file # with open('CASIA-WebFace-Triplets.txt') as f: # lines = f.readlines() # for line in lines: # 
TripletList.append(line.split(' ')) # TripletList[-1][2] = TripletList[-1][2][0:-1] # # --- Print some data # print(TripletList[0:5]) # --- Make Pytorch Dataset Class for Triplets class TripletFaceDatset(Dataset): def __init__(self, list_of_triplets, transform=None): # --- initializing values print("Start Creating Triplets Dataset from CASIA-WebFace") self.list_of_triplets = list_of_triplets self.transform = transform # --- getitem function def __getitem__(self, index): # --- get images path and read faces anc_img_path, pos_img_path, neg_img_path = self.list_of_triplets[index] anc_img = cv2.imread('./CASIA-WebFace/'+anc_img_path) pos_img = cv2.imread('./CASIA-WebFace/'+pos_img_path) neg_img = cv2.imread('./CASIA-WebFace/'+neg_img_path) # anc_img = cv2.resize(anc_img, (114,114)) # pos_img = cv2.resize(pos_img, (114,114)) # neg_img = cv2.resize(neg_img, (114,114)) # --- set transform if self.transform: anc_img = self.transform(anc_img) pos_img = self.transform(pos_img) neg_img = self.transform(neg_img) return {'anc_img' : anc_img, 'pos_img' : pos_img, 'neg_img' : neg_img} # --- return len of triplets def __len__(self): return len(self.list_of_triplets) # --- Define Transforms transform_list =transforms.Compose([ transforms.ToPILImage(), transforms.Resize((140,140)), transforms.ToTensor(), transforms.Normalize(mean=[0.485, 0.456, 0.406], std =[0.229, 0.224, 0.225]) ]) # --- Test Dataset triplet_dataset = TripletFaceDatset(TripletList, transform_list) triplet_dataset[0]['anc_img'].shape ``` # LFW Evaluation ##### 1. Face detection function ##### 2. Load LFW Pairs .npy file ##### 3. 
Define Function for evaluation ``` # -------------------------- UTILS CELL ------------------------------- trained_face_data = cv2.CascadeClassifier(cv2.data.haarcascades+'haarcascade_frontalface_default.xml') # --- define Functions def face_detect(file_name): flag = True # Choose an image to detect faces in img = cv2.imread(file_name) # Must convert to greyscale # grayscaled_img = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY) # Detect Faces # face_coordinates = trained_face_data.detectMultiScale(grayscaled_img) # img_crop = [] # Draw rectangles around the faces # for (x, y, w, h) in face_coordinates: # img_crop.append(img[y-20:y+h+20, x-20:x+w+20]) # --- select only Biggest # big_id = 0 # if len(img_crop) > 1: # temp = 0 # for idx, img in enumerate(img_crop): # if img.shape[0] > temp: # temp = img.shape[0] # big_id = idx # elif len(img_crop) == 0: # flag = False # img_crop = [0] # return image crop # return [img_crop[big_id]], flag return [img], flag # --- LFW Dataset loading for test part l2_dist = PairwiseDistance(2) cos = nn.CosineSimilarity(dim=1, eps=1e-6) # --- 1. 
Load .npy pairs path lfw_pairs_path = np.load('lfw_pairs_path.npy', allow_pickle=True) pairs_dist_list_mat = [] pairs_dist_list_unmat = [] valid_thresh = 0.96 def lfw_validation(model): global valid_thresh tot_len = len(lfw_pairs_path) model.eval() # use model in evaluation mode with torch.no_grad(): true_match = 0 for path in lfw_pairs_path: # --- extracting pair_one_path = path['pair_one'] # print(pair_one_path) pair_two_path = path['pair_two'] # print(pair_two_path) matched = int(path['matched']) # --- detect face and resize it pair_one_img, flag_one = face_detect(pair_one_path) pair_two_img, flag_two = face_detect(pair_two_path) if (flag_one==False) or (flag_two==False): tot_len = tot_len-1 continue # --- Model Predict pair_one_img = transform_list(pair_one_img[0]) pair_two_img = transform_list(pair_two_img[0]) pair_one_embed = model(torch.unsqueeze(pair_one_img, 0).to(device)) pair_two_embed = model(torch.unsqueeze(pair_two_img, 0).to(device)) # print(pair_one_embed.shape) # break # print(pair_one_img) # break # --- find Distance pairs_dist = l2_dist.forward(pair_one_embed, pair_two_embed) if matched == 1: pairs_dist_list_mat.append(pairs_dist.item()) if matched == 0: pairs_dist_list_unmat.append(pairs_dist.item()) # --- thresholding if (matched==1 and pairs_dist.item() <= valid_thresh) or (matched==0 and pairs_dist.item() > valid_thresh): true_match += 1 valid_thresh = (np.percentile(pairs_dist_list_unmat,25) + np.percentile(pairs_dist_list_mat,75)) /2 print("Thresh :", valid_thresh) return (true_match/tot_len)*100 # img, _ = face_detect("./lfw-deepfunneled/Steve_Lavin/Steve_Lavin_0002.jpg") # plt.imshow(img[0]) # plt.show() temp = [0.4, 0.5, 0.6, 0.7, 0.8, 0.9] for i in temp: valid_thresh = i print(lfw_validation(model)) (np.mean(pairs_dist_list_mat) + np.mean(pairs_dist_list_unmat) )/2 pairs_dist_list_unmat # --- find best thresh round_unmat = pairs_dist_list_unmat round_mat = pairs_dist_list_mat print("----- Unmatched statistical information -----") 
print("len : ",len(round_unmat)) print("min : ", np.min(round_unmat)) print("Q1 : ", np.percentile(round_unmat, 25)) print("mean : ", np.mean(round_unmat)) print("Q3 : ", np.percentile(round_unmat, 75)) print("max : ", np.max(round_unmat)) print("\n") print("----- matched statistical information -----") print("len : ",len(round_mat)) print("min : ", np.min(round_mat)) print("Q1 : ", np.percentile(round_mat, 25)) print("mean : ", np.mean(round_mat)) print("Q3 : ", np.percentile(round_mat, 75)) print("max : ", np.max(round_mat)) ``` ## How to make Training Faster ``` # Make Training Faster in Pytorch(Cuda): # 1. use multiple workers (num_workers) # 2. set pin_memory # 3. Enable cuDNN for optimizing Conv # 4. using AMP # 5. set bias=False in conv layer if you set batch normalizing in model # source: https://betterprogramming.pub/how-to-make-your-pytorch-code-run-faster-93079f3c1f7b ``` # DataLoader ``` # --- DataLoader face_data = torch.utils.data.DataLoader(triplet_dataset, batch_size= batch_size, shuffle=True, num_workers=4, pin_memory= True) # --- Enable cuDNN torch.backends.cudnn.benchmark = True ``` # Save Model (best acc. and last acc.) ``` # --- saving model for best and last model # --- Connect to google Drive for saving models from google.colab import drive drive.mount('/content/gdrive') # --- some variable for saving models BEST_MODEL_PATH = "./gdrive/MyDrive/best_trained.pth" LAST_MODEL_PATH = "./gdrive/MyDrive/last_trained.pth" def save_model(model_sv, loss_sv, epoch_sv, optimizer_state_sv, accuracy, accu_sv_list, loss_sv_list): # --- Inputs: # 1. model_sv : original model being trained # 2. loss_sv : current loss # 3. epoch_sv : current epoch # 4. optimizer_state_sv : current value of optimizer # 5. 
accuracy : current accuracy # --- save last epoch if accuracy >= max(accu_sv_list): torch.save(model.state_dict(), BEST_MODEL_PATH) # --- save this model for checkpoint torch.save({ 'epoch': epoch_sv, 'model_state_dict': model.state_dict(), 'optimizer_state_dict': optimizer_state_sv.state_dict(), 'loss': loss_sv, 'accu_sv_list': accu_sv_list, 'loss_sv_list' : loss_sv_list }, LAST_MODEL_PATH) ``` # Load prev. model for continue training ``` torch.cuda.empty_cache() # --- training initialize and start model = ResNet18().to(device) # load model tiplet_loss = TripletLoss(loss_margin) # load Tripletloss optimizer = optim.Adam(filter(lambda p: p.requires_grad, model.parameters()), lr=learning_rate) # load optimizer l2_dist = PairwiseDistance(2) # L2 distance loading # save loss values epoch_check = 0 valid_arr = [] loss_arr = [] load_last_epoch = True if (load_last_epoch == True): # --- load last model # define model objects before this checkpoint = torch.load(LAST_MODEL_PATH, map_location=device) # load model path model.load_state_dict(checkpoint['model_state_dict']) # load state dict optimizer.load_state_dict(checkpoint['optimizer_state_dict']) # load optimizer epoch_check = checkpoint['epoch'] # load epoch loss = checkpoint['loss'] # load loss value valid_arr = checkpoint['accu_sv_list'] # load Acc. 
values loss_arr = checkpoint['loss_sv_list'] # load loss values model.train() epoch_check loss ``` # Training Loop ``` model.train() # --- Training loop based on number of epoch temp = 0.075 for epoch in range(epoch_check,200): print(80*'=') # --- For saving imformation triplet_loss_sum = 0.0 len_face_data = len(face_data) # -- set starting time time0 = time.time() # --- make learning rate update if 50 < len(loss_arr): for g in optimizer.param_groups: g['lr'] = 0.001 temp = 0.001 # --- loop on batches for batch_idx, batch_faces in enumerate(face_data): # --- Extract face triplets and send them to CPU or GPU anc_img = batch_faces['anc_img'].to(device) pos_img = batch_faces['pos_img'].to(device) neg_img = batch_faces['neg_img'].to(device) # --- Get embedded values for each triplet anc_embed = model(anc_img) pos_embed = model(pos_img) neg_embed = model(neg_img) # --- Find Distance pos_dist = l2_dist.forward(anc_embed, pos_embed) neg_dist = l2_dist.forward(anc_embed, neg_embed) # --- Select hard triplets all = (neg_dist - pos_dist < 0.8).cpu().numpy().flatten() hard_triplets = np.where(all == 1) if len(hard_triplets[0]) == 0: # --- Check number of hard triplets continue # --- select hard embeds anc_hard_embed = anc_embed[hard_triplets] pos_hard_embed = pos_embed[hard_triplets] neg_hard_embed = neg_embed[hard_triplets] # --- Loss loss_value = tiplet_loss.forward(anc_hard_embed, pos_hard_embed, neg_hard_embed) # --- backward path optimizer.zero_grad() loss_value.backward() optimizer.step() if (batch_idx % 200 == 0) : print("Epoch: [{}/{}] ,Batch index: [{}/{}], Loss Value:[{:.8f}]".format(epoch+1, epochs, batch_idx+1, len_face_data,loss_value)) # --- save information triplet_loss_sum += loss_value.item() print("Learning Rate: ", temp) # --- Find Avg. 
loss value avg_triplet_loss = triplet_loss_sum / len_face_data loss_arr.append(avg_triplet_loss) # --- Validation part based on LFW Dataset validation_acc = lfw_validation(model) valid_arr.append(validation_acc) model.train() # --- Save model with checkpoints save_model(model, avg_triplet_loss, epoch+1, optimizer, validation_acc, valid_arr, loss_arr) # --- Print information for each epoch print(" Train set - Triplet Loss = {:.8f}".format(avg_triplet_loss)) print(' Train set - Accuracy = {:.8f}'.format(validation_acc)) print(f' Execution time = {time.time() - time0}') ``` # plot and print some information ``` plt.plot(valid_arr, 'b-', label='Validation Accuracy') plt.show() plt.plot(loss_arr, 'b-', label='loss values') plt.show() for param_group in optimizer.param_groups: print(param_group['lr']) valid_arr print(40*"=" + " Download CASIA WebFace " + 40*'=') ! gdown --id 1Of_EVz-yHV7QVWQGihYfvtny9Ne8qXVz ! unzip CASIA-WebFace.zip ! rm CASIA-WebFace.zip # --- LFW Dataset loading for test part l2_dist = PairwiseDistance(2) cos = nn.CosineSimilarity(dim=1, eps=1e-6) valid_thresh = 0.96 model.eval() with torch.no_grad(): # --- extracting pair_one_path = "./3.jpg" # print(pair_one_path) pair_two_path = "./2.jpg" # --- detect face and resize it pair_one_img, flag_one = face_detect(pair_one_path) pair_two_img, flag_two = face_detect(pair_two_path) # --- Model Predict pair_one_img = transform_list(pair_one_img[0]) pair_two_img = transform_list(pair_two_img[0]) pair_one_embed = model(torch.unsqueeze(pair_one_img, 0).to(device)) pair_two_embed = model(torch.unsqueeze(pair_two_img, 0).to(device)) # --- find Distance pairs_dist = l2_dist.forward(pair_one_embed, pair_two_embed) print(pairs_dist) # --- Create Triplet Datasets --- # --- make a list of ids and folders selected_ids = np.uint32(np.round((np.random.rand(int(Triplet_size))) * (NumberID-1))) folders = os.listdir("./CASIA-WebFace/") # --- Iterate on each id and make Triplets list TripletList = [] for index,id in
enumerate(selected_ids): # --- print info # print(40*"=" + str(index) + 40*"=") # print(index) # --- find name of id faces folder id_str = str(folders[id]) # --- find list of faces in this folder number_faces = os.listdir("./CASIA-WebFace/"+id_str) # --- Get two Random number for Anchor and Positive while(True): two_random = np.uint32(np.round(np.random.rand(2) * (len(number_faces)-1))) if (two_random[0] != two_random[1]): break # --- Make Anchor and Positive image Anchor = str(number_faces[two_random[0]]) Positive = str(number_faces[two_random[1]]) # --- Make Negative image while(True): neg_id = np.uint32(np.round(np.random.rand(1) * (NumberID-1))) if (neg_id != id): break # --- number of images in negative Folder neg_id_str = str(folders[neg_id[0]]) number_faces = os.listdir("./CASIA-WebFace/"+neg_id_str) one_random = np.uint32(np.round(np.random.rand(1) * (len(number_faces)-1))) Negative = str(number_faces[one_random[0]]) # --- insert Anchor, Positive and Negative image path to TripletList TempList = ["","",""] TempList[0] = id_str + "/" + Anchor TempList[1] = id_str + "/" + Positive TempList[2] = neg_id_str + "/" + Negative TripletList.append(TempList) # print(TripletList[-1]) ```
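The `TripletLoss` class used by the training loop above is not defined in this excerpt. As a minimal, dependency-free sketch of the standard triplet margin loss it presumably implements (the function names and the margin value here are illustrative, not taken from the notebook):

```python
def euclidean(a, b):
    """L2 distance between two equal-length vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def triplet_margin_loss(anchor, positive, negative, margin=0.5):
    """max(d(a, p) - d(a, n) + margin, 0): pull the positive in, push the negative out."""
    return max(euclidean(anchor, positive) - euclidean(anchor, negative) + margin, 0.0)
```

The hard-triplet selection in the training loop keeps only triplets whose negative embedding is not yet far enough from the anchor, i.e. roughly those for which this loss is still non-trivial.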
``` import re import os import sys import json import random import gpt_2_simple as gpt2 import tensorflow as tf import numpy as np from random_word import RandomWords import requests import giphy_client from unsplash.api import Api from unsplash.auth import Auth from medium import Client tags = {#'montag': ['Fake News', 'Opinion', 'Artificial Intelligence', 'NLP', 'Future'], 'onezero': ['Artificial Intelligence', 'Technology', 'NLP', 'Future'], #'futura': ['Sci Fi Fantasy', 'Artificial Intelligence', 'NLP', 'Future', 'Storytelling'] } def clean(story): story = story.replace('<|url|>', 'https://github.com/sirmammingtonham/futureMAG') story = story.replace('OneZero', 'FutureMAG') story = story.replace('onezero', 'FutureMAG') return story[16:] def split_story(story, run_name): story = clean(story) split = re.split('(\.)', story)[0:-1] metadata = split[0] title = metadata[metadata.find('# ')+2:metadata.find('## ')].strip('\n') subtitle = metadata[metadata.find('## ')+3:metadata.find('\n', metadata.find('## '))].strip('\n') split[0] = split[0].replace(subtitle, f"{subtitle} | AI generated article*") # if len(title.split(' ')) <= 2: # split = story.split('\n', 3) # title = split[1] # subtitle = split[2] # return [title, subtitle, split[3]] return [title, f"{subtitle} | AI generated article*", ''.join(split), None, run_name] def retrieve_images(story): #story[1] = subtitle #story[2] = story matches = [(m.group(), m.start(0)) for m in re.finditer(r"(<\|image\|>)", story[2])] image_creds = [] try: client_id = "b9a6edaadf1b5ec49cf05f10aab79d5d2ea1fe66431605d12ec0f7ec22bc7289" client_secret = "f00e14688a25656c07f07d85e17b4ebd94e93fcf9bf0fd1859f7713ea1d94c16" redirect_uri = "urn:ietf:wg:oauth:2.0:oob" auth = Auth(client_id, client_secret, redirect_uri) api = Api(auth) # q = max(re.sub(r'[^\w\s]', '', story[0]).split(), key=len) #take longest word from subtitle as search term q = story[0].split(' ')[:5] for match, idx in matches: pic = api.photo.random(query=q)[0] img = pic.urls.raw
image_creds.append((f'https://unsplash.com/@{pic.user.username}', pic.user.name)) cap_idx = story[2].find('*', idx+11) story[2] = story[2][:cap_idx] + '&ast;&ast;' + story[2][cap_idx:] story[2] = story[2][:idx] + img + story[2][idx+9:] except: return story story[3] = image_creds return story def publish(title, sub, article, creds, run_name): if title == sub: return #holy shit this is excessive tag = tags['onezero'] + [max(re.sub(r'[^\w\s]','',title).split(), key=len).capitalize()] access_token = '2aea40d684c5c501066c6f624d05c952256f0664585d9a36b394c0821ee646499' headers = { 'Authorization': "Bearer " + access_token, 'User-Agent': 'Mozilla/5.0 (Windows NT 6.3; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/63.0.3239.84 Safari/537.36' } base_url = "https://api.medium.com/v1/" me_response = requests.request("GET", base_url + 'me', headers=headers).text json_me_response = json.loads(me_response) user_id = json_me_response['data']['id'] user_url = base_url + 'users/' + user_id + '/' posts_url = user_url + 'posts/' pub_url = base_url + 'publications/424c42caa624/posts/' if not creds: return else: img_creds = "" for auth_url, author in creds: img_creds += f"[{author}]({auth_url})" img_creds += ', ' if len(creds) > 1 else ' ' img_creds += "on [Unsplash](https://unsplash.com/)" article += "\n\n*This article was written by a [GPT-2 neural network](https://openai.com/blog/better-language-models). All information in this story is most likely false, and all opinions expressed are fake. Weird to think about…\n\n" article += f"**This caption was artificially generated. Image downloaded automatically from {img_creds}.\n\n" article += "All links in this article are placeholders generated by the neural network, signifying that an actual link should have been generated there. These placeholders were later replaced by a link to the github project page.\n\n" article += "**futureMAG** is an experiment in automated storytelling/journalism. 
This story was created and published without human intervention.\n\n" article += "Code for this project available on github: " article += "**[sirmammingtonham/futureMAG](https://github.com/sirmammingtonham/futureMAG)**" payload = { 'title': title, 'contentFormat': 'markdown', 'tags': tag if run_name == 'onezero' else tags[run_name], 'publishStatus': 'draft', 'content': article } response = requests.request('POST', pub_url, data=payload, headers=headers) print(response.text) return payload ``` ``` run_name = 'onezero_m' sess = gpt2.start_tf_sess() gpt2.load_gpt2(sess, run_name) stories = gpt2.generate( sess, run_name, return_as_list=True, truncate="<|endoftext|>", prefix="<|startoftext|>", nsamples=1, batch_size=1, length=8000, temperature=1, top_p=0.9, split_context=0.5, ) articles = [] for story in stories: articles.append(split_story(story, run_name)) for article in articles: article = retrieve_images(article) articles[0][-1] = 'onezero' articles publish(*articles[0]) ```
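The `split_story` helper above locates the title and subtitle with raw `str.find` offsets, which is fragile when a heading is missing. A simplified, self-contained re-implementation of just that extraction step (illustrative, not the notebook's exact code) can use anchored regexes instead:

```python
import re

def extract_headings(markdown):
    """Pull the '# ' title and '## ' subtitle out of a markdown article, if present."""
    title = re.search(r"^# (.+)$", markdown, re.MULTILINE)
    subtitle = re.search(r"^## (.+)$", markdown, re.MULTILINE)
    return (title.group(1).strip() if title else None,
            subtitle.group(1).strip() if subtitle else None)
```

Returning `None` for a missing heading lets the caller skip publishing instead of slicing from index `-1`, which is what the `find`-based version silently does.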
``` import json import math import bigfloat import numpy import pandas as pd import seaborn as sns import matplotlib.mlab as mlab import matplotlib.pyplot as plt from collections import OrderedDict sns.set_style("whitegrid", {"font.family": "DejaVu Sans"}) sns.set_context("poster") from matplotlib import rc rc('font',**{'family':'sans-serif','sans-serif':['Helvetica']}) ## for Palatino and other serif fonts use: #rc('font',**{'family':'serif','serif':['Palatino']}) rc('text', usetex=True) def read_data(filename, short=False): json_data = open(filename, 'r').read() raw_data = json.loads(json_data) seq = [] seq_id = [] energy = [] for k1, v1 in raw_data.iteritems(): if k1 == "name": continue stp_i = int(k1) staple = v1 #print str(stp_i) + ' ' + v1['staple_sequence'] for k2, v2 in staple.iteritems(): if not k2.isdigit(): continue arm_i = k2 arm = v2 #if short and len(arm['sequence']) > 8: # continue dG = arm['dG'] min_dG = float(arm['min_dG']) seq.append(arm['sequence']) seq_id.append('stp_' + str(stp_i) + '_' + str(arm_i)) local_min = [] for i in range(1, len(dG)-1): # start at 1 so dG[i-1] does not wrap around to the last element if dG[i] < dG[i-1] and dG[i] <= dG[i+1]: local_min.append(float(dG[i])) sorted_by_energy = sorted(local_min) energy.append(numpy.array(sorted_by_energy)) return seq_id, seq, energy def get_boltzmann_distribution(energy_by_arm): R = 8.3144621 # gas constant T = 293.15 # room temperature factor = 4184.0 # joules_per_kcal boltzmann_distribution = [] for dG in energy_by_arm: ps = [] total = bigfloat.BigFloat(0) for energy in dG: p = bigfloat.exp((-energy*factor)/(R*T), bigfloat.precision(1000)) ps.append(p) total = bigfloat.add(total, p) normal_ps = [] for p in ps: normal_ps.append(float(bigfloat.div(p,total))) boltzmann_distribution.append(numpy.array(normal_ps)) return boltzmann_distribution print get_boltzmann_distribution([[-7.2, -4.6]]) # the function expects a list of energy lists, one per arm path = 'data/' filename_DB = 'DeBruijn_alpha.json' filename_pUC19 = 'pUC19_alpha.json' filename_M13 = 'M13_square.json' filename_DB7k =
'DB_7k_square.json' #ids, sequences, energies #_, _, energies_DB = read_data(path + filename_DB) #_, _, energies_pUC19 = read_data(path + filename_pUC19) #_, _, energies_M13 = read_data(path + filename_M13) _, _, energies_DB_short = read_data(path + filename_DB, short=True) _, _, energies_pUC19_short = read_data(path + filename_pUC19, short=True) _, _, energies_M13_short = read_data(path + filename_M13, short=True) _, _, energies_DB7k_short = read_data(path + filename_DB7k, short=True) #DB_dist_2 = get_boltzmann_distribution(d[:2] for d in energies_DB_short) #pUC19_dist_2 = get_boltzmann_distribution(d[:2] for d in energies_pUC19_short) #M13_dist_2 = get_boltzmann_distribution(d[:2] for d in energies_M13_short) #DB_dist_10 = get_boltzmann_distribution(d[:10] for d in energies_DB_short) #pUC19_dist_10 = get_boltzmann_distribution(d[:10] for d in energies_pUC19_short) #M13_dist_10 = get_boltzmann_distribution(d[:10] for d in energies_M13_short) #DB_dist_100 = get_boltzmann_distribution(d[:100] for d in energies_DB_short) #pUC19_dist_100 = get_boltzmann_distribution(d[:100] for d in energies_pUC19_short) #M13_dist_100 = get_boltzmann_distribution(d[:100] for d in energies_M13_short) DB_dist_all = get_boltzmann_distribution(d for d in energies_DB_short) pUC19_dist_all = get_boltzmann_distribution(d for d in energies_pUC19_short) M13_dist_all = get_boltzmann_distribution(d for d in energies_M13_short) DB7k_dist_all = get_boltzmann_distribution(d for d in energies_DB7k_short) #DB_dist = get_boltzmann_distribution(d[:100] for d in energies_DB_short) #pUC19_dist = get_boltzmann_distribution(d[:100] for d in energies_pUC19_short) #M13_dist = get_boltzmann_distribution(d[:100] for d in energies_M13_short) #DB_dist = get_boltzmann_distribution(energies_DB_short) #pUC19_dist = get_boltzmann_distribution(energies_pUC19_short) #dist = [d[0] for d in DB_dist] def example_plot(ax, fontsize=12): ax.plot([1, 2]) ax.locator_params(nbins=3) ax.set_xlabel('x-label', fontsize=fontsize) 
ax.set_ylabel('y-label', fontsize=fontsize) ax.set_title('Title', fontsize=fontsize) def distribution_plot(ax, data_label, data, xlabel, ylabel, fontsize=15): bins = 20 x = numpy.zeros(bins) for dist in data: i = int(dist[0]*bins) i = 0 if i < 0 else i i = bins-1 if i > bins-1 else i x[i] += 1 for i in range(len(x)): x[i] = 1.0 * x[i] / len(data) index = numpy.arange(0, bins) ax.bar(index, x, bar_width, linewidth=0) ax.set_xticks(numpy.arange(0, bins+1)) ax.set_xticklabels([('$' + str(i*0.05) + '$') if i % 2 == 0 else "" for i in range(0, bins+1)]) #ax.tick_params(axis='both', which='major') ylimit = 0.2 if ('6.9' in data_label or 'M13' in data_label) else 0.3 ax.set_xlim(0, bins) ax.set_ylim(0, ylimit) ax.set_xlabel(xlabel, fontsize=20) ax.set_ylabel(ylabel, fontsize=20) ax.set_title(data_label, fontsize=20) ax.legend() plt.close('all') fig, ((ax1, ax2), (ax3, ax4)) = plt.subplots(nrows=2, ncols=2) data_set = OrderedDict() data_set['pUC19 (all)'] = pUC19_dist_all data_set['DB (all)'] = DB_dist_all data_set['M13 (all)'] = M13_dist_all data_set['DB7k (all)'] = DB7k_dist_all xlabel = r'Specific binding probability' ylabel = r'Fraction of staples' distribution_plot(ax1, r'pUC19', pUC19_dist_all, '', ylabel) distribution_plot(ax2, r'DB (2.4 knt)', DB_dist_all, '', '') distribution_plot(ax3, r'M13', M13_dist_all, xlabel, ylabel) distribution_plot(ax4, r'DBS (6.9 knt)', DB7k_dist_all, xlabel, '') #%matplotlib inline fig.set_size_inches(10, 10) plt.tight_layout() plt.savefig("/home/j3ny/repos/analysis/Analysis/thermodynamic_addressability/output/addressability_comparison.pdf",format='pdf',dpi=600) #plt.savefig("/home/j3ny/repos/analysis/Analysis/thermodynamic_addressability/output/addressability_comparison_long.pdf",format='pdf',dpi=600) fig, axes = plt.subplots(nrows=2, ncols=2) bar_width = 1.0 data_set = OrderedDict() data_set['pUC19'] = pUC19_dist_all data_set['DBS (2.4 knt)'] = DB_dist_all data_set['M13'] = M13_dist_all data_set['DBS (6.9 knt)'] = DB7k_dist_all 
plt.close('all') fig = plt.figure() from mpl_toolkits.axes_grid1 import Grid grid = Grid(fig, rect=111, nrows_ncols=(2,2), axes_pad=0.4, label_mode='O', add_all = True, ) for ax, (data_label, data) in zip(grid, data_set.items()): xlabel = 'Specific binding probability' if ('6.9' in data_label or 'M13' in data_label) else '' ylabel = 'Fraction of staples'if ('pUC' in data_label or 'M13' in data_label) else '' distribution_plot(ax, data_label, data, xlabel, ylabel) #axes[0,0].set_title('pUC19') #grid[0].set_title('pUC19') #grid[0].set_ylabel('Fraction of staples', fontsize=15) #grid[1].set_title('DBS (2.4 knt)') #grid[2].set_title('M13') #grid[2].set_xlabel('Specific binding probability', fontsize=15) #grid[2].set_ylabel('Fraction of staples', fontsize=15) #grid[3].set_title('DBS (6.9 knt)') #axes[1].set_title('M13') #axes[2].set_title(r'$\lambda$-phage') #fig.text(0.16, 0.92, 'pUC19', fontsize=15) #fig.text(0.6, 0.92, 'DBS (2.4 knt)', fontsize=15) #fig.text(0.16, 0.46, 'M13mp18', fontsize=15) #fig.text(0.6, 0.46, 'DBS (6.9 knt)', fontsize=15) fig.set_size_inches(6, 6) plt.tight_layout() plt.savefig("/home/j3ny/repos/analysis/Analysis/thermodynamic_addressability/output/addressability_comparison.pdf",format='pdf',dpi=600) ####################### ## OBSOLETE ## ####################### #%matplotlib inline fig, axes = plt.subplots(nrows=2, ncols=2) bar_width = 1.0 data_set = OrderedDict() data_set['pUC19 (all)'] = pUC19_dist_all data_set['DB (all)'] = DB_dist_all data_set['M13 (all)'] = M13_dist_all data_set['DB7k (all)'] = DB7k_dist_all for ax0, (data_label, data) in zip(axes.flat, data_set.items()): distribution_plot(ax0) #fig.text(0.19, 0.96, 'De Bruijn', ha='center') fig.text(0.3, 1, 'pUC19 (2.6 knt)', ha='center') fig.text(0.7, 1, 'DBS (2.4 knt)', ha='center') fig.text(0.5, 0.008, 'Specific binding probability', ha='center') fig.text(0.001, 0.5, 'Fraction of staples', va='center', rotation='vertical') fig.set_size_inches(7, 7) plt.tight_layout() 
plt.savefig("/home/j3ny/repos/analysis/Analysis/thermodynamic_addressability/output/addressability_comparison.pdf",format='pdf',dpi=600) ## CONVERT DATA path = 'data/' filename_DB = 'DeBruijn_alpha.json' filename_pUC19 = 'pUC19_alpha.json' filename_M13 = 'M13_square.json' filename_DB7k = 'DB_7k_square.json' ids, sequences, energies = read_data(path + filename_DB7k, short=True) dist_all = get_boltzmann_distribution(d for d in energies) with open('data/DB_medium.csv', 'w') as out: for i in range(len(ids)): out.write(ids[i] + ',' + sequences[i] + ',') out.write('%.3f' % dist_all[i][0]) out.write('\n') #print idsi], sequences[i], energies_DB_short[i], DB_dist_all[i] ```
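The `get_boltzmann_distribution` function above is Python 2 code and relies on the `bigfloat` package to survive the huge intermediate exponentials. A Python 3 sketch of the same normalization (assuming only the constants shown above; not a drop-in replacement for the notebook) can avoid arbitrary precision entirely by shifting the exponents before exponentiating:

```python
import math

R = 8.3144621        # gas constant, J/(mol*K)
T = 293.15           # room temperature, K
J_PER_KCAL = 4184.0  # joules per kcal

def boltzmann_weights(dG_kcal):
    """Normalized Boltzmann probabilities for a list of free energies in kcal/mol."""
    exponents = [-(dG * J_PER_KCAL) / (R * T) for dG in dG_kcal]
    shift = max(exponents)                      # subtract the max: exp() never overflows
    weights = [math.exp(e - shift) for e in exponents]
    total = sum(weights)
    return [w / total for w in weights]
```

Because the normalization divides out the common factor `exp(shift)`, the result is identical to the unshifted computation, only numerically stable.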
``` from pyspark.sql import SparkSession, functions as f spark = ( SparkSession .builder .appName("Hands-on-3") .master("local[*]") .getOrCreate() ) ``` ### Setting up what we had done in the previous hands-on ``` df_ratings = ( spark .read .csv( path="../../data-sets/ml-latest/ratings.csv", sep=",", encoding="UTF-8", header=True, quote='"', schema="userId INT, movieId INT, rating DOUBLE, timestamp INT", ) .withColumnRenamed("timestamp", "timestamp_unix") .withColumn("timestamp", f.to_timestamp(f.from_unixtime("timestamp_unix"))) .drop("timestamp_unix") ) df_ratings.show(n=5) df_ratings.printSchema() # As we are dropping timestamp_unix anyway, we can apply the transformations directly on the timestamp column df_ratings = ( spark .read .csv( path="../../data-sets/ml-latest/ratings.csv", sep=",", encoding="UTF-8", header=True, quote='"', schema="userId INT, movieId INT, rating DOUBLE, timestamp INT", ) .withColumn("timestamp", f.to_timestamp(f.from_unixtime("timestamp"))) ) df_ratings.show(n=5) df_ratings.printSchema() # We are now going to load movies.csv for further analysis df_movies = ( spark .read .csv( path="../../data-sets/ml-latest/movies.csv", sep=",", encoding="UTF-8", header=True, quote='"', schema="movieId INT, title STRING, genres STRING", ) ) df_movies.show(n=5) df_movies.printSchema() ``` ### As we can see, we have ellipses (...) at some places in the title and genres columns. By default the df.show method truncates long strings with an ellipsis (...). We can disable this with the truncate=False argument, as follows- ``` df_movies.show(n=15, truncate=False) df_movies.printSchema() ``` ### Now we are going to work on filtering the data, aka slicing and dicing ``` # Let's filter on genre with value "Action" in the movies dataframe df_movies.where("genres" == "Action").show() # This is going to fail ``` ### As you can see, we are unable to perform this action, because the condition needs to be a Column object instead of a string.
We can fix this by using an SQL expression instead of the pythonic way of representing the condition ``` df_movies.where('genres = "Action"').show() # Notice the quotes and the single "=" ``` ### As you can see, we can use a proper SQL expression for filtering, and it works ### Another way to do the same thing is to convert the string "genres" to a Spark Column, as follows- ``` df_movies.where(f.col("genres") == "Action").show(n=5, truncate=False) ``` ### I prefer the latter, as it's more pythonic ### As we can see, the genres column has values separated by the pipe "|" symbol. I would like to have it as an array instead of a single string. To do that I will create a new column called genres_array, and use f.split to apply a transformation on the genres column that splits on the symbol "|" ``` df_movies_with_genre = ( df_movies .withColumn("genres_array", f.split("genres", "|")) ) df_movies_with_genre.show(n=5) df_movies_with_genre.printSchema() ``` ### As we can see, it didn't work as expected. This happened because the second argument of f.split is actually a regex. And in the world of regex, the "|" character is a special symbol, hence it needs to be escaped. ``` df_movies_with_genre = ( df_movies .withColumn("genres_array", f.split("genres", "\|")) # notice the escape symbol "\" ) df_movies_with_genre.show(n=5, truncate=False) df_movies_with_genre.printSchema() ``` ### Now we want to create a single row for every (movie, genre) pair. To do that, we will need to "explode" the genres_array column using f.explode ``` df_movies_with_genre = ( df_movies .withColumn("genres_array", f.split("genres", "\|")) # notice the escape symbol "\" .withColumn("genre", f.explode("genres_array")) ) df_movies_with_genre.show(n=15, truncate=False) df_movies_with_genre.printSchema() ``` ### As we can see, each movie is "exploded" across every genre type.
But it's hard to see, so we will select the columns we are interested in before showing ``` df_movies_with_genre = ( df_movies .withColumn("genres_array", f.split("genres", "\|")) # notice the escape symbol "\" .withColumn("genre", f.explode("genres_array")) .select("movieId", "title", "genre") ) df_movies_with_genre.show(n=15, truncate=False) df_movies_with_genre.printSchema() ``` ### Now it's much easier to read ### Now let's list all the distinct genres using .distinct ``` available_genres = df_movies_with_genre.select("genre").distinct() available_genres.show(truncate=False) ``` ### We can see all the distinct genres; one of them is "(no genres listed)". Let's find all the movies with (no genres listed) ``` df_movies_with_no_genres = ( df_movies .filter(f.col("genres") == "(no genres listed)") ) print(f"Total: {df_movies_with_no_genres.count()} movie(s) have no genres listed") df_movies_with_no_genres.show(truncate=False) spark.stop() ```
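Outside Spark, the split-then-explode pattern above simply multiplies each row by the number of genres it lists. A plain-Python sketch of that behaviour (the names here are illustrative, not from the notebook):

```python
def explode_genres(rows):
    """rows: iterable of (movieId, title, genres) tuples with pipe-separated genres.
    Returns one (movieId, title, genre) row per genre, like f.split + f.explode."""
    out = []
    for movie_id, title, genres in rows:
        for genre in genres.split("|"):  # plain str.split: no regex, so no escaping needed
            out.append((movie_id, title, genre))
    return out
```

Note that Python's `str.split` takes a literal separator, which is why it needs no `\|` escaping, unlike Spark's regex-based `f.split`.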
# Data

Data and operations on data

## Computer science

Learning a language $\sim$ **syntax** (necessary, but not the point) ... studying computer science $\sim$ **semantics** (learning how machines think!)

Learning a programming language like Python is all about the syntax in which you write the operations a machine should carry out. But first you need other knowledge, knowledge that has everything to do with what the machine (for example, your laptop) actually does.

![Behind the scenes](images/3/rob-laughter-WW1jsInXgwM-unsplash.jpg)

Step by step we will discover what goes on inside the machine, looking at data and the operations on (or processing of) data.

## Operations and data

```
x = 41
y = x + 1
```

To begin, let's assign each of the two variables `x` and `y` a value. These values (`41` and `42` in our case) are stored in memory.

## Behind the curtain

![box metaphor](images/3/box.png)

Picture a variable as a box: the contents of the box are the value (for example `41` or `42` in our case), with extra information about the *type* of the value (an `int`, which stands for *integer*, a whole number) and a memory location (LOC).

### Memory

![box location](images/3/box_2.png)

Memory is a very long list of boxes like this, each with a name, value, type, and memory location.

![RAM 8Gb](images/3/1136px-Samsung-1GB-DDR2-Laptop-RAM.jpg)

Random Access Memory ([RAM](https://nl.wikipedia.org/wiki/Dynamic_random-access_memory)) is where variables are stored; a card like the one you see here is inside your computer too! If you were to carefully remove the black material, a (microscopically small) grid would become visible.

![RAM Grid](images/3/ramgrid.png)

Horizontally you see the *bit lines*, or address lines (the memory location), and vertically the *word lines* (or data lines). Each intersection is a [capacitor](https://nl.wikipedia.org/wiki/Condensator) that can be electrically charged or uncharged.
### Bits

![RAM Grid Bit](images/3/ramgrid_bit.png)

Such a point (a capacitor) that can be charged (1 or `True`) or uncharged (0 or `False`) is called a *bit*. This is the smallest possible unit of information!

### Bytes

![RAM Grid Byte](images/3/ramgrid_byte.png)

You will also often hear about *bytes*: a byte is a group of 8 consecutive *bits* on an address line. Why 8 and not 5, 10, 12, or more (or fewer), you might ask? This is historically determined and has everything to do with the minimum number of bits once needed to represent a particular set of characters (letters and other symbols) ([ASCII](https://nl.wikipedia.org/wiki/ASCII_(tekenset)) to be precise). Don't worry about what this means exactly; we will come back to it!

### Word?

![Windows 64bit](images/3/windows_64bit.png)

*Word* in "word line" is not a word as in a sentence (language) but a term for the [natural unit](https://en.wikipedia.org/wiki/Word_(computer_architecture)) of information in a given computer architecture. Nowadays this is 64 bits for most systems; it is also called the *address space* of an architecture. This unit matters because it determines, for example, the largest whole number that can be stored. But how do we get from bits to bytes, and then to numbers and other data, you may wonder? You will see this later; first we will look at the different types of data we can distinguish.

## Data types

*All* languages have data types!

| Type | Example | What is it? |
|---------|-------------------|----------------------------------------------------------------------------------|
| `float` | `3.14` or `3.0` | decimal numbers |
| `int` | `42` or `10**100` | whole numbers |
| `bool` | `True` or `False` | the result of a test or comparison with: `==`, `!=`, `<`, `>`, `<=`, `>=` |

```
type(42.0)
```

These are the first data types we will get to know, and they correspond fairly well to what we (humans!) can distinguish, for example whole or decimal numbers. A `bool`(ean) is ultimately also a number: if we type `False`, Python reads it as 0. `True` and `False` are *syntax* (!) to make things easier for us, but *semantically* they stand for 1 and 0 (at least to Python!). With the *function* `type(x)` you can ask which type Python thinks a value has.

## Operators

Special symbols that have everything to do with operations on data.

### Python operators

| Meaning | |
|-----------------------------------|---------------------------------|
| grouping | `(` `)` |
| exponentiation | `**` |
| multiplication, modulo, division | `*` `%` `/` `//` |
| addition, subtraction | `+` `-` |
| comparison | `==` `!=`, `<`, `>`, `<=`, `>=` |
| assignment | `=` |

Just as in arithmetic, you have to take the order of operations into account; here they are listed from highest to lowest precedence. It is *not* necessary to memorize this order; our tip is to group values rather than worry about precedence.

Two operators deserve a moment's attention because it is not immediately obvious what they do: the modulo operator `%` and *integer* division `//` (as opposed to ordinary division `/`).

### Modulo operator `%`

- `7 % 3`
- `9 % 3`

`x % y` is the **remainder** when `x` is divided by `y`

```
11 % 3
```

Syntax check!
It doesn't matter whether you write `x%2` or `x % 2` (with spaces); Python knows what you mean :)

#### Examples

| | Test | Possible values of `x` | |
|---|---------------|---------------------------|----------------------------------------------|
| A | `x % 2 == 0` | | |
| B | `x % 2 == 1` | | |
| C | `x % 4 == 0` | | What happens here if `x` is a year? |
| D | `x % 24 == 0` | | What happens here if `x` is a number of hours? |

```
3 % 2 == 0
```

A and B have everything to do with even and odd numbers, example C with leap years, and example D perhaps with the digital display of your alarm clock?

### Integer division

- `7 // 3`
- `9 // 3`
- `30 // 7`

`x // y` is like `x / y` but **rounded down** to a whole number

```
30 // 7
```

The `//` operator rounds down, and really all the way down! In English the `//` operator is known not only as "integer division" but also as "floor division": floor (the lowest) as opposed to ceiling (the highest). But there is more going on, because you will see that `//` has a lot in common with the `%` operator!

The division of 30 into multiples of 7:

```python
30 == (4) * 7 + (2)
```

Could we generalize this into a general rule using the operators `//` and `%` we have just met? The division of `x` into multiples of `y`:

```python
x == (x // y) * y + (x % y)
```

and filled in for our example:

```python
30 == (30 // 7) * 7 + (30 % 7)
```

And there is the `%` operator again :) You will see later that `%` and `//` come in remarkably handy when we start computing with ... bits! In short: the `//` operator rounds all the way down (by dropping everything after the decimal point).

### What is equal?

| ASSIGNING a value | IS NOT the same as | TESTING a value |
|----------------------|--------------------|-------------------|
| `=` | `!=` | `==` |

You know the single `=` from mathematics, where you would read $a = 1$ as "a is equal to 1".
In programming languages this is different, and it means "assign the value 1 to a". To test whether a value equals another value, `==` is used (and `!=` for is *not* equal to).

### Identity

Is `==` a test on *value* or on *identity* (the memory location where the value *lives*)? Some languages have `===`!

There is a difference between testing on *value* and testing on *identity* (whether it is the same "box", i.e. the same memory location). Python has no `===` (unlike JavaScript, a programming language used in browsers) but has `is` for exactly this case, for example `a is b` to compare on the basis of identity.

Whether `==` compares by value or by identity differs considerably between languages. For Java (a widely used programming language) `==` is a test on *identity*. Python chose to make `==` a test on equality of *value*. This is perhaps closest to how people think, certainly when comparing, say, numbers or text. An example to make the difference clear.

```
a = 3141592
b = 3141592
```

Given two variables `a` and `b` with the same value,

```
a == b
```

it will indeed turn out that `a` and `b` have an equal *value*,

```
a is b
```

but a comparison based on *identity* will not succeed...

```
print(id(a))
print(id(b))
```

`id(x)` returns the address location of a value. You can see that `a` and `b` differ, even though they have the same value! (Note: these memory locations may be different on your computer!)

## Quiz

Run the following lines:

```python
x = 41
y = x + 1
z = x + y
```

What are the values of `x`, `y`, and `z`?

```
x = 41
y = x + 1
z = x + y
print(x, y, z)
```

Then run the following line:

```python
x = x + y
```

What are the values of `x`, `y`, and `z` now?
```
x = x + y
print(x, y, z)
```

### Behind the scenes

```python
x = 41
y = x + 1
z = x + y
```

In memory:

![box_voor_x](images/3/box_3a.png)

The three variables `x`, `y` and `z` are now stored in memory at three different locations.

The last step:

```python
x = x + y
```

In memory:

![box_na_x](images/3/box_3b.png)

With this last step we change the value of `x`, and this means that the original location is cleared and the new value is placed in memory, at a new location!

You can ask Python for the identity (the memory location) with `id(x)`. Try this with `x` before and after the last operation and you will see that they differ.

You can erase or delete a value with `del x` (so without the parentheses `()`).

### Extra

```python
a = 11 // 2
b = a % 3
c = b ** a+b * a
```

What are the values of `a`, `b` and `c`?

```
a = 11 // 2
b = a % 3
c = b ** a+b * a
print(a, b, c)
```

## Culture

![42](images/3/wikipedia_42.png)

The book [The Hitchhiker's Guide to the Galaxy](https://en.wikipedia.org/wiki/The_Hitchhiker%27s_Guide_to_the_Galaxy) by [Douglas Adams](https://en.wikipedia.org/wiki/Douglas_Adams) has left its traces in computer science, among other fields: chances are you will run into the number 42 in examples or solutions. And in everyday life too, if you see people walking around with a towel on [May 25](https://en.wikipedia.org/wiki/Towel_Day) ...
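Working out the Extra quiz by hand is a good check of operator precedence: `**` binds more tightly than `*`, which binds more tightly than `+`. A sketch of the intermediate steps:

```python
a = 11 // 2          # floor division: 5
b = a % 3            # remainder: 2
c = b ** a + b * a   # (2 ** 5) + (2 * 5) == 32 + 10
print(a, b, c)       # 5 2 42
```

The whole expression is evaluated as `(b ** a) + (b * a)`, not left to right.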
# Streaks analysis

Streaks analysis is based on an implementation of the algorithm from [Koch (2004)](https://www.climate-service-center.de/imperia/md/content/gkss/institut_fuer_kuestenforschung/ksd/paper/kochw_ieee_2004.pdf).

```
# import needed modules
import xsar
import xsarsea
import xsarsea.gradients
import xarray as xr
import numpy as np
import scipy
import os
import time

import logging
logging.basicConfig()
logging.getLogger('xsar.utils').setLevel(logging.DEBUG)
logging.getLogger('xsarsea.streaks').setLevel(logging.DEBUG)

import holoviews as hv
hv.extension('bokeh')
import geoviews as gv
from holoviews.operation.datashader import rasterize

# open a file at 100m resolution
filename = xsar.get_test_file('S1A_IW_GRDH_1SDV_20170907T103020_20170907T103045_018268_01EB76_Z010.SAFE')  # irma
#filename = xsar.get_test_file('S1B_IW_GRDH_1SDV_20181013T062322_20181013T062347_013130_018428_Z000.SAFE')  # bz
filename = xsar.get_test_file('S1B_IW_GRDH_1SDV_20211024T051203_20211024T051228_029273_037E47_Z010.SAFE')
#filename = xsar.get_test_file('S1A_IW_GRDH_1SDV_20170720T112706_20170720T112735_017554_01D5C2_Z010.SAFE')  # subswath

sar_ds = xsar.open_dataset(filename, resolution='100m').isel(atrack=slice(20, None, None), xtrack=slice(20, None, None))  # isel to skip bad image edge

# add detrended sigma0
sar_ds['sigma0_detrend'] = xsarsea.sigma0_detrend(sar_ds.sigma0, sar_ds.incidence)

# apply land mask
land_mask = sar_ds['land_mask'].compute()
sar_ds['sigma0_detrend'] = xr.where(land_mask, np.nan, sar_ds['sigma0_detrend']).transpose(*sar_ds['sigma0_detrend'].dims).compute()
```

## General overview

Gradient direction analysis is done by moving a window over the image. [xsarsea.gradients.Gradients](../basic_api.rst#xsarsea.gradients.Gradients) allows multiple window sizes and resolutions.

`sar_ds` is an IW_GRDH SAFE with a pixel size of 10m at full resolution.
So to compute gradients with window sizes of 16km and 32km, we need to use `windows_sizes=[1600,3200]`.

`sar_ds` resolution is 100m, so if we want to compute gradients at 100m and 200m, we need to use `downscales_factors=[1,2]`.

```
gradients = xsarsea.gradients.Gradients(sar_ds['sigma0_detrend'].sel(pol='VV'), windows_sizes=[1600, 3200], downscales_factors=[1, 2])

# get gradients histograms as an xarray dataset
hist = gradients.histogram

# get orthogonal gradients
hist['angles'] = hist['angles'] + np.pi/2

# mean
hist_mean = hist.mean(['downscale_factor', 'window_size'])

# mean, and smooth
hist_mean_smooth = hist_mean.copy()
hist_mean_smooth['weight'] = xsarsea.gradients.circ_smooth(hist_mean_smooth['weight'])

# smooth only
hist_smooth = hist.copy()
hist_smooth['weight'] = xsarsea.gradients.circ_smooth(hist_smooth['weight'])

# select histogram peak
iangle = hist_mean_smooth['weight'].fillna(0).argmax(dim='angles')
streaks_dir = hist_mean_smooth.angles.isel(angles=iangle)
streaks_weight = hist_mean_smooth['weight'].isel(angles=iangle)
streaks = xr.merge([dict(angle=streaks_dir, weight=streaks_weight)]).drop('angles')

# convert from image convention (rad=0=atrack) to geographic convention (deg=0=north)
# select needed variables in original dataset, and map them to streaks dataset
streaks_geo = sar_ds[['longitude', 'latitude', 'ground_heading']].interp(atrack=streaks.atrack, xtrack=streaks.xtrack, method='nearest')
streaks_geo['weight'] = streaks['weight']

# convert directions from image convention to geographic convention
# note that there is no clockwise swapping, because image axes are transposed
streaks_geo['streaks_dir'] = np.rad2deg(streaks['angle']) + streaks_geo['ground_heading']
streaks_geo = streaks_geo.compute()

# plot. Note that hv.VectorField only accepts radians, with 0 = East, so we need to reconvert degrees to radians when calling ...
gv.tile_sources.Wikipedia * gv.VectorField(
    (
        streaks_geo['longitude'],
        streaks_geo['latitude'],
        np.pi/2 - np.deg2rad(streaks_geo['streaks_dir']),
        streaks_geo['weight']
    )
).opts(pivot='mid', arrow_heads=False, tools=['hover'], magnitude='Magnitude')
```

> **_WARNING:_** `hv.VectorField` and `gv.VectorField` don't use the degrees north convention, but a radian convention, with 0 = East (right).
> So, to use them with degrees north, you have to convert the angles with
> ```python
> np.pi/2 - np.deg2rad(deg_north)
> ```

## Digging into intermediate computations

### streaks_geo

`streaks_geo` is an `xarray.Dataset`, with `latitude`, `longitude` and `streaks_dir` (0 = deg north) variables. It has dims `('atrack', 'xtrack')`, with a spacing corresponding to the first window size, according to the window step.

```
streaks_geo
```

### streaks

`streaks_geo` was computed from `streaks` (also an `xarray.Dataset`). The main difference is that the `angle` variable from `streaks` is in radians, in *image convention* (i.e. rad=0 is the atrack direction).

```
streaks
```

#### Conversion from image convention to geographic convention

```python
angle_geo = np.rad2deg(angle_img) + ground_heading
```

#### Conversion from geographic convention to image convention

```python
angle_img = np.deg2rad(angle_geo - ground_heading)
```

### hist_mean_smooth

The `streaks` variable was computed from `hist_mean_smooth`. The main difference with `streaks` is that we don't have a single angle, but a histogram of probability over binned angles.

```
hist_mean_smooth
```

Let's extract one histogram at an arbitrary position, and plot it. We can do this with the regular `hv.Histogram` function, or use [xsarsea.gradients.circ_hist](../basic_api.rst#xsarsea.gradients.circ_hist), which can be used with `hv.Path` to plot the histogram as a circular one.
```
hist_at = hist_mean_smooth['weight'].sel(atrack=5000, xtrack=12000, method='nearest')
hv.Histogram((hist_at.angles, hist_at)) + hv.Path(xsarsea.gradients.circ_hist(hist_at))
```

`xsarsea` also provides an interactive drawing class [xsarsea.gradients.PlotGradients](../basic_api.rst#xsarsea.gradients.PlotGradients) that can be used to draw the circular histogram at a mouse tap (needs a live notebook).

```
# background image for vectorfield
s0 = sar_ds['sigma0_detrend'].sel(pol='VV')
hv_img = rasterize(hv.Image(s0).opts(cmap='gray', clim=(0, np.nanpercentile(s0, 95))))

plot_mean_smooth = xsarsea.gradients.PlotGradients(hist_mean_smooth)

# get vectorfield, with mouse tap activated
hv_vf = plot_mean_smooth.vectorfield(tap=True)

# connect mouse to histogram
hv_hist = plot_mean_smooth.mouse_histogram()

# notebook dynamic output
hv_hist + hv_img * hv_vf
```

`hist_mean_smooth` was smoothed. Let's try `hist_smooth`

```
plot_smooth = xsarsea.gradients.PlotGradients(hist_smooth)
hv_vf = plot_smooth.vectorfield()
hv_hist = plot_smooth.mouse_histogram()
hv_hist + (hv_img * hv_vf).opts(legend_position='right', frame_width=300)
```

Using the `source` keyword of `mouse_histogram`, we can link several histograms

```
plot_raw = xsarsea.gradients.PlotGradients(hist)
plot_mean = xsarsea.gradients.PlotGradients(hist_mean)
hv_vf = plot_smooth.vectorfield()
gridspace = hv.GridSpace(kdims=['smooth', 'mean'])
gridspace[(False, False)] = plot_smooth.mouse_histogram(source=plot_raw)
gridspace[(True, False)] = plot_smooth.mouse_histogram()
gridspace[(True, True)] = plot_smooth.mouse_histogram(source=plot_mean_smooth)
gridspace[(False, True)] = plot_smooth.mouse_histogram(source=plot_mean)
gridspace.opts(plot_size=(200, 200)) + (hv_img * hv_vf).opts(legend_position='right', frame_height=500)
```
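The two convention conversions above are inverses of each other, which can be sanity-checked without any SAR data. `math.degrees`/`math.radians` are the scalar equivalents of `np.rad2deg`/`np.deg2rad`, and the heading value here is an arbitrary made-up example:

```python
import math

def img_to_geo(angle_img, ground_heading):
    # image convention (radians, 0 = atrack) -> geographic convention (degrees, 0 = north)
    return math.degrees(angle_img) + ground_heading

def geo_to_img(angle_geo, ground_heading):
    # geographic convention (degrees) -> image convention (radians)
    return math.radians(angle_geo - ground_heading)

heading = -12.5            # arbitrary example ground heading, in degrees
angle_img = math.pi / 3    # some gradient direction, in radians
round_trip = geo_to_img(img_to_geo(angle_img, heading), heading)
assert math.isclose(round_trip, angle_img)
```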
# SLU09 - Classification With Logistic Regression: Exercise notebook

```
import pandas as pd
import numpy as np
import hashlib
```

In this notebook you will practice the following:

- What classification is for
- Logistic regression
- Cost function
- Binary classification

You thought that you would get away without implementing your own little Logistic Regression? Hah!

# Exercise 1. Implement the Exponential Part of the Sigmoid Function

In the first exercise, you will implement **only the piece** of the sigmoid function where you have to use an exponential. Here's a quick reminder of the formula:

$$\hat{p} = \frac{1}{1 + e^{-z}}$$

In this exercise we only want you to complete the exponential part, given the values of b0, b1, x1, b2 and x2:

$$e^{-z}$$

Recall that z has the following formula:

$$z = \beta_0 + \beta_1 x_1 + \beta_2 x_2$$

**Hint: Divide your z into pieces by betas, I've left the placeholders in there!**

```
def exponential_z_function(beta0, beta1, beta2, x1, x2):
    """
    Implementation of the exponential part of the sigmoid function manually.
    In this exercise you have to compute e raised to the power -z.
    Z is calculated according to the following formula: b0 + b1x1 + b2x2.
    You can use the inputs given to generate the z.

    Args:
        beta0 (np.float64): value of the intercept
        beta1 (np.float64): value of first coefficient
        beta2 (np.float64): value of second coefficient
        x1 (np.float64): value of first variable
        x2 (np.float64): value of second variable

    Returns:
        exp_z (np.float64): the exponential part of the sigmoid function
    """
    # hint: obtain the exponential part using np.exp()
    # Complete the following
    #beta0 = ...
    #b1_x1 = ...
    #b2_x2 = ...
    #exp_z = ...
    # YOUR CODE HERE
    raise NotImplementedError()

    return exp_z

value_arr = [1, 2, 1, 1, 0.5]
exponential = exponential_z_function(value_arr[0], value_arr[1], value_arr[2], value_arr[3], value_arr[4])
np.testing.assert_almost_equal(np.round(exponential, 3), 0.030)
```

Expected output:

    Exponential part: 0.03

# Exercise 2: Make a Prediction

The next step is to implement a function that receives an observation and returns the predicted probability with the sigmoid function. For instance, we can make a prediction given a model with data and coefficients by using the sigmoid:

$$\hat{p} = \frac{1}{1 + e^{-z}}$$

where z is the linear equation. You can't use the same function that you used above for the z part, as the inputs are now two arrays: one with the train data (x1, x2, ..., xn) and another with the coefficients (b0, b1, ..., bn).

**Complete here:**

```
def predict_proba(data, coefs):
    """
    Implementation of a function that returns predicted probabilities for an observation.

    In the train array you will have the data values (corresponding to x1, x2, ..., xn).
    In the coefficients array you will have the coefficient values (corresponding to b0, b1, ..., bn).

    In this exercise you should be able to return a float with the calculated probabilities
    given an array of size (1, n). The resulting value should be a float (the predicted
    probability) with a value between 0 and 1.

    Note: be mindful that the input is completely different from the function above - here
    you receive two arrays, while in the function above you received 5 floats, each
    corresponding to the x's and b's.

    Args:
        data (np.array): a numpy array of shape (n) - n: number of variables
        coefs (np.array): a numpy array of shape (n + 1, 1)
            - coefs[0]: intercept
            - coefs[1:]: remaining coefficients

    Returns:
        proba (float): the predicted probability for a data example.
""" # hint: you have to implement your z in a vectorized # way aka using vector multiplications - it's different from what you have done above # hint: don't forget about adding an intercept to the train data! # YOUR CODE HERE raise NotImplementedError() return proba x = np.array([-1.2, -1.5]) coefficients = np.array([0 ,4, -1]) np.testing.assert_almost_equal(round(predict_proba(x, coefficients),3),0.036) x_1 = np.array([-1.5, -1, 3, 0]) coefficients_1 = np.array([0 ,2.1, -1, 0.5, 0]) np.testing.assert_almost_equal(round(predict_proba(x_1, coefficients_1),3),0.343) ``` Expected output: Predicted probabilities for example with 2 variables: 0.036 Predicted probabilities for example with 3 variables: 0.343 # Exercise 3: Compute the Maximum Log-Likelihood Cost Function As you will implement stochastic gradient descent, you need to calculate the cost function (the Maximum Log-Likelihood) for each prediction, checking how much you will penalize each example according to the difference between the calculated probability and its true value: $$H_{\hat{p}}(y) = - (y \log(\hat{p}) + (1-y) \log (1-\hat{p}))$$ In the next exercise, you will loop through some examples stored in an array and calculate the cost function for the full dataset. Recall that the formula to generalize the cost function across several examples is: $$H_{\hat{p}}(y) = - \frac{1}{N}\sum_{i=1}^{N} \left [{ y_i \ \log(\hat{p}_i) + (1-y_i) \ \log (1-\hat{p}_i)} \right ]$$ You will basically simulate what stochastic gradient descent does without updating the coefficients - computing the log for each example, sum each log-loss and then averaging the result across the number of observations in the x dataset/array. 
```
import math

def max_log_likelihood_cost_function(var_x, coefs, var_y):
    """
    Implementation of a function that returns the maximum log-likelihood loss.

    Args:
        var_x (np.array): array with x training data of shape (m, n), where m is the
            number of observations and n the number of columns
        coefs (np.float64): an array with the coefficients to apply, of shape (1, n+1),
            where n is the number of columns plus the intercept.
        var_y (np.float64): an array of integers with the real outcome per example.

    Returns:
        loss (np.float): a float with the resulting log loss for the entire data.
    """
    # A list of hints that you can follow:
    # - you already computed a probability for an example, so you might be able to reuse the function
    # - store the number of examples that you have to loop through

    # Steps to follow:
    # 1. Initialize loss
    # 2. Loop through every example
    #    Hint: if you don't use the function from above to predict probas,
    #    don't forget to add the intercept to the X_array!
    # 2.1 Calculate probability for each example
    # 2.2 Compute log loss
    #     Hint: separating the log loss into parts may help you avoid getting
    #     confused inside all the parentheses
    # 2.3 Sum the computed loss for the example to the total log loss
    # 3. Divide the log loss by the number of examples (don't forget that the log loss
    #    has to return a positive number!)
    # YOUR CODE HERE
    raise NotImplementedError()

    return total_loss

x = np.array([[-2, -2], [3.5, 0], [6, 4]])
coefficients = np.array([[0, 2, -1]])
y = np.array([[1], [1], [0]])
np.testing.assert_almost_equal(round(max_log_likelihood_cost_function(x, coefficients, y), 3), 3.376)

coefficients_1 = np.array([[3, 4, -0.6]])
x_1 = np.array([[-4, -4], [6, 0], [3, 2], [4, 0]])
y_1 = np.array([[4], [4], [2], [1.5]])
np.testing.assert_almost_equal(round(max_log_likelihood_cost_function(x_1, coefficients_1, y_1), 3), -15.475)
```

Expected output:

    Computed log loss for first training set: 3.376
    Computed log loss for second training set: -15.475

# Exercise 4: Compute a First Pass of Stochastic Gradient Descent

Now that we know how to calculate probabilities and the cost function, let's do an interesting exercise - computing the derivatives and updating our coefficients. Here you will do a full pass over a bunch of examples, computing the gradient descent update each time you see one of them.

In this exercise, you should compute a single iteration of gradient descent! You will use stochastic gradient descent, so you have to update the coefficients after seeing each new example - this way, each time the algorithm sees something way off (for example, a low predicted probability for an example with outcome = 1), it has a way (the gradient) to change the coefficients so that it minimizes the cost function.

## Quick reminders:

Remember our formulas for the gradient:

$$\beta_{0(t+1)} = \beta_{0(t)} - learning\_rate \frac{\partial H_{\hat{p}}(y)}{\partial \beta_{0(t)}}$$

$$\beta_{t+1} = \beta_t - learning\_rate \frac{\partial H_{\hat{p}}(y)}{\partial \beta_t}$$

which can be simplified to

$$\beta_{0(t+1)} = \beta_{0(t)} + learning\_rate \left [(y - \hat{p}) \ \hat{p} \ (1 - \hat{p})\right]$$

$$\beta_{t+1} = \beta_t + learning\_rate \left [(y - \hat{p}) \ \hat{p} \ (1 - \hat{p}) \ x \right]$$

You will have to initialize the coefficients in some way.
If you have a training set $X$, you can initialize them to zero this way:

```python
coefficients = np.zeros(X.shape[1]+1)
```

where the $+1$ is adding the intercept.

Note: we are doing stochastic gradient descent, so don't forget to go observation by observation, updating the coefficients every time!

**Complete here:**

```
def compute_coefs_sgd(x_train, y_train, learning_rate=0.1, verbose=False):
    """
    Implementation of a function that returns the first iteration of stochastic gradient descent.

    Args:
        x_train (np.array): a numpy array of shape (m, n)
            m: number of training observations
            n: number of variables
        y_train (np.array): a numpy array of shape (m,) with the real value of the target.
        learning_rate (np.float64): a float

    Returns:
        coefficients (np.array): a numpy array of shape (n+1,)
    """
    # A list of hints that might help you:
    # 1. Calculate the number of observations
    # 2. Initialize the coefficients array with zeros
    #    hint: use np.zeros()
    # 3. Run the stochastic gradient descent and update the coefficients after each observation
    # 3.1 Compute the predicted probability - you can use a function we have done previously
    # 3.2 Update the intercept
    # 3.3 Update the rest of the coefficients by looping through each variable

    # YOUR CODE HERE
    raise NotImplementedError()

    return coefficients

#Test 1
x_train = np.array([[1, 2, 4], [2, 4, 9], [2, 1, 4], [9, 2, 10]])
y_train = np.array([0, 2.2, 0, 2.3])
learning_rate = 0.1
np.testing.assert_almost_equal(round(compute_coefs_sgd(x_train, y_train, learning_rate)[0], 3), 0.022)
np.testing.assert_almost_equal(round(compute_coefs_sgd(x_train, y_train, learning_rate)[1], 3), 0.081)
np.testing.assert_almost_equal(round(compute_coefs_sgd(x_train, y_train, learning_rate)[2], 3), 0.140)
np.testing.assert_almost_equal(round(compute_coefs_sgd(x_train, y_train, learning_rate)[3], 3), 0.320)

#Test 2
x_train_1 = np.array([[4, 4, 2, 6], [1, 5, 7, 2], [3, 1, 2, 1], [8, 2, 9, 5], [2, 2, 9, 4]])
y_train_1 = np.array([0, 1.3, 0, 1.3, 1.2])
np.testing.assert_almost_equal(round(compute_coefs_sgd(x_train_1, y_train_1, learning_rate).max(), 3), 0.277)
np.testing.assert_almost_equal(round(compute_coefs_sgd(x_train_1, y_train_1, learning_rate).min(), 3), 0.015)
np.testing.assert_almost_equal(round(compute_coefs_sgd(x_train_1, y_train_1, learning_rate).mean(), 3), 0.102)
np.testing.assert_almost_equal(round(compute_coefs_sgd(x_train_1, y_train_1, learning_rate).var(), 3), 0.008)
```

# Exercise 5: Normalize Data

To get this concept into your head, let's write a quick and easy function to normalize the data using a max-min approach. It is crucial that your variables are adjusted between $[0;1]$ (normalized) or standardized, so that you can correctly analyze logistic regression coefficients for your possible future employer.

You only have to implement this formula:

$$ x_{normalized} = \frac{x - x_{min}}{x_{max} - x_{min}}$$

Don't forget that the `axis` argument is critical when obtaining the maximum, minimum and mean values! As you want to obtain the maximum and minimum values of each individual feature, you have to specify `axis=0`. Thus, if you wanted to obtain the maximum values of each feature of data $X$, you would do the following:

```python
X_max = np.max(X, axis=0)
```

Not an assertable question, but can you remember why it is important to normalize data for logistic regression?

**Complete here:**

```
def normalize_data_function(data):
    """
    Implementation of a function that normalizes your data variables.

    Args:
        data (np.array): a numpy array of shape (m, n)
            m: number of observations
            n: number of variables

    Returns:
        normalized_data (np.array): a numpy array of shape (m, n)
    """
    # Compute the numerator first
    # you can use np.min()
    # numerator = ...

    # Compute the denominator
    # you can use np.max() and np.min()
    # denominator = ...
    # YOUR CODE HERE
    raise NotImplementedError()

    return normalized_data

data = np.array([[7, 7, 3], [2, 2, 11], [9, 5, 2], [0, 9, 5], [10, 1, 3], [1, 5, 2]])
normalized_data = normalize_data_function(data)
print('Before normalization:')
print(data)
print('\n-------------------\n')
print('After normalization:')
print(normalized_data)
```

Expected output:

    Before normalization:
    [[ 7  7  3]
     [ 2  2 11]
     [ 9  5  2]
     [ 0  9  5]
     [10  1  3]
     [ 1  5  2]]

    -------------------

    After normalization:
    [[0.7        0.75       0.11111111]
     [0.2        0.125      1.        ]
     [0.9        0.5        0.        ]
     [0.         1.         0.33333333]
     [1.         0.         0.11111111]
     [0.1        0.5        0.        ]]

```
data = np.array([[2, 2, 11, 1], [7, 5, 1, 3], [9, 5, 2, 6]])
normalized_data = normalize_data_function(data)
np.testing.assert_almost_equal(round(normalized_data.max(), 3), 1.0)
np.testing.assert_almost_equal(round(normalized_data.mean(), 3), 0.518)
np.testing.assert_almost_equal(round(normalized_data.var(), 3), 0.205)

data = np.array([[1, 3, 1, 3], [9, 5, 3, 1], [2, 2, 4, 6]])
normalized_data = normalize_data_function(data)
np.testing.assert_almost_equal(round(normalized_data.mean(), 3), 0.460)
np.testing.assert_almost_equal(round(normalized_data.std(), 3), 0.427)
```

# Exercise 6: Training a Logistic Regression with Sklearn

In this exercise, we will load a dataset related to direct marketing campaigns (phone calls) of a Portuguese banking institution. The goal is to predict whether the client will subscribe (1/0) to a term deposit (variable y) ([link to dataset](http://archive.ics.uci.edu/ml/datasets/Bank+Marketing)).

Prepare to use your sklearn skills!

```
# We will load the dataset for you
bank = pd.read_csv('data/bank.csv', delimiter=";")
bank.head()
```

In this exercise, you need to do the following:

- Select an array/Series with the target variable (y)
- Select an array/dataframe with the X numeric variables (age, balance, day, month, duration, campaign and pdays)
- Scale all the X variables - normalize using the max/min method
- Fit a logistic regression for a maximum of 100 epochs and random state = 100
- Return an array of the predicted probas and return the coefficients

After this, feel free to explore your predictions! As a bonus, why don't you construct a decision boundary using two variables, eh? :-)

```
from sklearn.linear_model import LogisticRegression

def train_model_sklearn(dataset):
    '''
    Returns the predicted probas and coefficients of a logistic regression
    trained on the bank marketing dataset.

    Args:
        dataset (pd.DataFrame): dataset to train on.

    Returns:
        probas (np.array): array of floats with the probability of subscribing for each client
        coefficients (np.array): returned coefficients of the trained logistic regression.
    '''
    # leave this np.random seed here
    np.random.seed(100)

    # List of hints:
    # 1. Use the y variable (term deposit subscription) as the target
    # 2. Select the numerical variables for X
    #    hint: use pandas .loc or indexing!
    # 3. Scale the X dataset - you can use a function we have already
    #    constructed or resort to the sklearn implementation
    # 4. Define a logistic regression from sklearn with max_iter=100 and random_state=100
    #    Hint: for epochs look at the max_iter hyperparameter!
    # 5. Fit the logistic regression
    # 6. Obtain the probability of subscribing
    # 7. Obtain the coefficients from the logistic regression
    #    Hint: see the sklearn logistic regression documentation if you do not know how to do this
    #    No need to return the intercept, just the variable coefficients!
    # YOUR CODE HERE
    raise NotImplementedError()

    return probas, coef

probas, coef = train_model_sklearn(bank)

# Testing Probas
max_probas = probas.max()
np.testing.assert_almost_equal(max_probas, 0.997, 2)
min_probas = probas.min()
np.testing.assert_almost_equal(min_probas, 0.008, 2)
mean_probas = probas.mean()
np.testing.assert_almost_equal(mean_probas, 0.115, 2)
std_probas = probas.std()
np.testing.assert_almost_equal(std_probas, 0.115, 2)
sum_probas = probas.sum()
np.testing.assert_almost_equal(sum_probas*0.001, 0.521, 2)

# Testing Coefs
max_coef = coef[0].max()
np.testing.assert_almost_equal(max_coef*0.1, 0.87, 1)
min_coef = coef[0].min()
np.testing.assert_almost_equal(min_coef*0.1, -0.18, 1)
mean_coef = coef[0].mean()
np.testing.assert_almost_equal(mean_coef*0.1, 0.21, 1)
std_coef = coef[0].std()
np.testing.assert_almost_equal(std_coef*0.1, 0.35, 1)
sum_probas = coef[0].sum()
np.testing.assert_almost_equal(sum_probas*0.1, 1.06, 1)
```
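For self-checking after you have attempted the exercises: here is one possible reference implementation of the Exercise 2 sigmoid prediction, written in plain Python directly from the formulas above (your own vectorized numpy answer may look different). It reproduces the expected test values:

```python
import math

def predict_proba_ref(data, coefs):
    # z = b0 + b1*x1 + ... + bn*xn  (intercept plus a dot product)
    z = coefs[0] + sum(c * x for c, x in zip(coefs[1:], data))
    # the sigmoid squashes z into a probability between 0 and 1
    return 1.0 / (1.0 + math.exp(-z))

print(round(predict_proba_ref([-1.2, -1.5], [0, 4, -1]), 3))                # 0.036
print(round(predict_proba_ref([-1.5, -1, 3, 0], [0, 2.1, -1, 0.5, 0]), 3))  # 0.343
```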
# Baby boy/girl classifier model preparation

*based on: Francisco Ingham and Jeremy Howard. Inspired by [Adrian Rosebrock](https://www.pyimagesearch.com/2017/12/04/how-to-create-a-deep-learning-dataset-using-google-images/)*

*by: Artyom Vorobyov*

Notebook execution and model training are done in Google Colab.

```
from fastai.vision import *
from pathlib import Path

# Check if running in Google Colab and save the result to a bool variable
try:
    import google.colab
    IS_COLAB = True
except:
    IS_COLAB = False
print("Is Colab:", IS_COLAB)
```

## Get a list of URLs

### How to get a dataset from Google Images

Go to [Google Images](http://images.google.com) and search for the images you are interested in. The more specific you are in your Google search, the better the results and the less manual pruning you will have to do.

Scroll down until you've seen all the images you want to download, or until you see a button that says 'Show more results'. All the images you scrolled past are now available to download. To get more, click on the button, and continue scrolling. The maximum number of images Google Images shows is 700.

It is a good idea to put things you want to exclude into the search query; for instance, if you are searching for the Eurasian wolf, "canis lupus lupus", it might be a good idea to exclude other variants:

    "canis lupus lupus" -dog -arctos -familiaris -baileyi -occidentalis

You can also limit your results to show only photos by clicking on Tools and selecting Photos from the Type dropdown.

### How to download image URLs

Now you must run some JavaScript code in your browser which will save the URLs of all the images you want for your dataset.

Press <kbd>Ctrl</kbd><kbd>Shift</kbd><kbd>J</kbd> in Windows/Linux or <kbd>Cmd</kbd><kbd>Opt</kbd><kbd>J</kbd> in Mac, and a small window, the JavaScript 'Console', will appear. That is where you will paste the JavaScript commands.

You will need to get the urls of each of the images.
Before running the following commands, you may want to disable ad-blocking extensions (uBlock, AdBlockPlus etc.) in Chrome. Otherwise the `window.open()` command doesn't work. Then you can run the following commands:

```javascript
urls = Array.from(document.querySelectorAll('.rg_di .rg_meta')).map(el=>JSON.parse(el.textContent).ou);
window.open('data:text/csv;charset=utf-8,' + escape(urls.join('\n')));
```

### What to do with babies

For this particular application (baby boy/girl classifier) you can just search for "baby boys" and "baby girls". Then run the script mentioned above and save the URLs in "boys_urls.csv" and "girls_urls.csv".

## Download images

fast.ai has a function that allows you to do just that. You just have to specify the urls filename as well as the destination folder, and this function will download and save all images that can be opened. If they have some problem being opened, they will not be saved.

Let's download our images! Notice you can choose a maximum number of images to be downloaded. In this case we will not download all the urls.

```
class_boy = 'boys'
class_girl = 'girls'
classes = [class_boy, class_girl]

path = Path('./data')
path.mkdir(parents=True, exist_ok=True)

def download_dataset(is_colab):
    if is_colab:
        from google.colab import drive
        import shutil
        import zipfile

        # You'll be asked to sign in to a Google Account and copy-paste a code here. Do it.
        drive.mount('/content/gdrive')

        # The train set archive is copied from Google Drive.
        # If there's an error during downloading - share the file with some other Google account and download
        # from this 2nd account - it should work fine.
        zip_remote_path = Path('/content/gdrive/My Drive/Colab/boyorgirl/train.zip')
        shutil.copy(str(zip_remote_path), str(path))
        zip_local_path = path/'train.zip'
        with zipfile.ZipFile(zip_local_path, 'r') as zip_ref:
            zip_ref.extractall(path)
        print("train folder contents:", (path/'train').ls())
    else:
        data_sources = [
            ('./boys_urls.csv', path/'train'/class_boy),
            ('./girls_urls.csv', path/'train'/class_girl)
        ]
        # Download the images listed in the URL files
        for urls_path, dest_path in data_sources:
            dest = Path(dest_path)
            dest.mkdir(parents=True, exist_ok=True)
            download_images(urls_path, dest, max_pics=800)
            # If you have problems downloading, try the code below with `max_workers=0` to see exceptions:
            # download_images(urls_path, dest, max_pics=20, max_workers=0)
        # Then we can remove any images that can't be opened:
        for _, dest_path in data_sources:
            verify_images(dest_path, delete=True, max_size=800)

# If running from Colab - zip your train set (train folder) and put it at "Colab/boyorgirl/train.zip" in your Google Drive
download_dataset(IS_COLAB)
```

## Cleaning the data

Now is a good moment to review the downloaded images and clean them. There will be some non-relevant images - photos of adults, photos of baby clothes without the babies etc. Just review the images and remove the non-relevant ones. For 2x400 images it'll take just 10-20 minutes in total.

There's also another way to clean the data - use `fastai.widgets.ImageCleaner`. It's used after you've trained your model. Even if you plan to use `ImageCleaner` later - it still makes sense to review the dataset briefly by yourself at the beginning.

## Load the data

```
np.random.seed(42)
data = ImageDataBunch.from_folder(path, train='train', valid_pct=0.2, ds_tfms=get_transforms(), size=224, num_workers=4).normalize(imagenet_stats)
```

Good! Let's take a look at some of our pictures then.
```
# Check if all the classes were correctly read
print(data.classes)
print(data.classes == classes)

data.show_batch(rows=3, figsize=(7,8), ds_type=DatasetType.Train)
data.show_batch(rows=3, figsize=(7,8), ds_type=DatasetType.Valid)
print('Train set size: {}. Validation set size: {}'.format(len(data.train_ds), len(data.valid_ds)))
```

## Train model

```
learn = cnn_learner(data, models.resnet50, metrics=error_rate)
learn.fit_one_cycle(4)
learn.save('stage-1')

learn.unfreeze()
learn.lr_find()
# If the plot is not showing, try giving a start and end learning rate
# learn.lr_find(start_lr=1e-5, end_lr=1e-1)
learn.recorder.plot()

learn.fit_one_cycle(2, max_lr=slice(1e-5,1e-3))
learn.save('stage-2')
```

## Interpretation

```
learn.load('stage-2');
interp = ClassificationInterpretation.from_learner(learn)
interp.plot_confusion_matrix()
```

## Putting your model in production

First things first, let's export the content of our `Learner` object for production. Below are 2 variants of export - for the local environment and for the Colab environment:

```
# Use this cell to export the model from the local environment within the repository
def get_export_path(is_colab):
    if is_colab:
        from google.colab import drive
        # You'll be asked to sign in to a Google Account and copy-paste a code here. Do it.
        # force_remount=True is needed to write the model if it was deleted from Google Drive, but remains in the Colab local file system
        drive.mount('/content/gdrive', force_remount=True)
        # Copy this model from Google Drive after export and manually put it in the "ai_models" folder in the repository.
        # If there's an error during downloading the model - share it with some other Google account and download
        # from this 2nd account - it should work fine.
        return Path('/content/gdrive/My Drive/Colab/boyorgirl/ai_models/export.pkl')
    else:
        # Used in case the notebook is run from the repository, but not in Colab
        return Path('../backend/ai_models/export.pkl')

# In case of Colab - the model will be exported to 'Colab/boyorgirl/ai_models/export.pkl'. Download it and save it in your repository manually
# in the 'ai_models' folder
export_path = get_export_path(IS_COLAB)

# ensure the folder exists
export_path.parents[0].mkdir(parents=True, exist_ok=True)

# an absolute path is passed, as the learn object attaches a relative path to its data folder rather than to the notebook folder
learn.export(export_path.absolute())
print("Export folder contents:", export_path.parents[0].ls())
```

This will create a file named 'export.pkl' in the given directory. This exported model contains everything we need to deploy our model (the model, the weights, but also some metadata like the classes or the transforms/normalization used).
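The path handling in the export cell (creating the parent folder with `parents[0].mkdir`, passing an absolute path to `learn.export`) can be tried in isolation with only `pathlib` - a minimal sketch, no fastai required; the temp directory and file name here are illustrative:

```python
import tempfile
from pathlib import Path

# Hypothetical export location inside a scratch directory
tmp = Path(tempfile.mkdtemp())
export_path = tmp / 'ai_models' / 'export.pkl'

# parents[0] is the immediate parent folder; create it if missing
export_path.parents[0].mkdir(parents=True, exist_ok=True)
print(export_path.parent.exists())           # True - the folder now exists

# .absolute() yields a path independent of the current working directory,
# which is why an absolute path is handed to learn.export() above
print(export_path.absolute().is_absolute())  # True
```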
github_jupyter
```
import torch
import datasets as nlp
from transformers import LongformerTokenizerFast

tokenizer = LongformerTokenizerFast.from_pretrained('allenai/longformer-base-4096')

def get_correct_alignement(context, answer):
    """ Some original examples in SQuAD have indices wrong by 1 or 2 characters. We test and fix this here. """
    gold_text = answer['text'][0]
    start_idx = answer['answer_start'][0]
    end_idx = start_idx + len(gold_text)
    if context[start_idx:end_idx] == gold_text:
        return start_idx, end_idx       # When the gold label position is good
    elif context[start_idx-1:end_idx-1] == gold_text:
        return start_idx-1, end_idx-1   # When the gold label is off by one character
    elif context[start_idx-2:end_idx-2] == gold_text:
        return start_idx-2, end_idx-2   # When the gold label is off by two characters
    else:
        raise ValueError()

# Tokenize our training dataset
def convert_to_features(example):
    # Tokenize contexts and questions (as pairs of inputs)
    encodings = tokenizer.encode_plus(example['question'], example['context'], pad_to_max_length=True, max_length=512, truncation=True)
    context_encodings = tokenizer.encode_plus(example['context'])

    # Compute start and end tokens for labels using Transformers' fast tokenizer alignment methods.
    # this will give us the position of the answer span in the context text
    start_idx, end_idx = get_correct_alignement(example['context'], example['answers'])
    start_positions_context = context_encodings.char_to_token(start_idx)
    end_positions_context = context_encodings.char_to_token(end_idx-1)

    # here we will compute the start and end position of the answer in the whole example
    # as the example is encoded like this <s> question</s></s> context</s>
    # and we know the position of the answer in the context
    # we can just find out the index of the sep token and then add that to position + 1 (+1 because there are two sep tokens)
    # this will give us the position of the answer span in the whole example
    sep_idx = encodings['input_ids'].index(tokenizer.sep_token_id)
    start_positions = start_positions_context + sep_idx + 1
    end_positions = end_positions_context + sep_idx + 1
    if end_positions > 512:
        start_positions, end_positions = 0, 0

    encodings.update({'start_positions': start_positions,
                      'end_positions': end_positions,
                      'attention_mask': encodings['attention_mask']})
    return encodings

# load train and validation split of squad
train_dataset = nlp.load_dataset('squad', split='train')
valid_dataset = nlp.load_dataset('squad', split='validation')

# Temp.
# Only for testing quickly
train_dataset = nlp.Dataset.from_dict(train_dataset[:3])
valid_dataset = nlp.Dataset.from_dict(valid_dataset[:3])

train_dataset = train_dataset.map(convert_to_features)
valid_dataset = valid_dataset.map(convert_to_features, load_from_cache_file=False)

# set the tensor type and the columns which the dataset should return
columns = ['input_ids', 'attention_mask', 'start_positions', 'end_positions']
train_dataset.set_format(type='torch', columns=columns)
valid_dataset.set_format(type='torch', columns=columns)
len(train_dataset), len(valid_dataset)

t = torch.load('train_data.pt')

# Write training script
import json

args_dict = {
    "n_gpu": 1,
    "model_name_or_path": 'allenai/longformer-base-4096',
    "max_len": 512,
    "output_dir": './models',
    "overwrite_output_dir": True,
    "per_gpu_train_batch_size": 8,
    "per_gpu_eval_batch_size": 8,
    "gradient_accumulation_steps": 16,
    "learning_rate": 1e-4,
    "num_train_epochs": 3,
    "do_train": True
}

## SQuAD evaluation script. Modified slightly for this notebook
from __future__ import print_function
from collections import Counter
import string
import re
import argparse
import json
import sys

def normalize_answer(s):
    """Lower text and remove punctuation, articles and extra whitespace."""
    def remove_articles(text):
        return re.sub(r'\b(a|an|the)\b', ' ', text)
    def white_space_fix(text):
        return ' '.join(text.split())
    def remove_punc(text):
        exclude = set(string.punctuation)
        return ''.join(ch for ch in text if ch not in exclude)
    def lower(text):
        return text.lower()
    return white_space_fix(remove_articles(remove_punc(lower(s))))

def f1_score(prediction, ground_truth):
    prediction_tokens = normalize_answer(prediction).split()
    ground_truth_tokens = normalize_answer(ground_truth).split()
    common = Counter(prediction_tokens) & Counter(ground_truth_tokens)
    num_same = sum(common.values())
    if num_same == 0:
        return 0
    precision = 1.0 * num_same / len(prediction_tokens)
    recall = 1.0 * num_same / len(ground_truth_tokens)
    f1 = (2 * precision *
          recall) / (precision + recall)
    return f1

def exact_match_score(prediction, ground_truth):
    return (normalize_answer(prediction) == normalize_answer(ground_truth))

def metric_max_over_ground_truths(metric_fn, prediction, ground_truths):
    scores_for_ground_truths = []
    for ground_truth in ground_truths:
        score = metric_fn(prediction, ground_truth)
        scores_for_ground_truths.append(score)
    return max(scores_for_ground_truths)

def evaluate(gold_answers, predictions):
    f1 = exact_match = total = 0
    for ground_truths, prediction in zip(gold_answers, predictions):
        total += 1
        exact_match += metric_max_over_ground_truths(exact_match_score, prediction, ground_truths)
        f1 += metric_max_over_ground_truths(f1_score, prediction, ground_truths)
    exact_match = 100.0 * exact_match / total
    f1 = 100.0 * f1 / total
    return {'exact_match': exact_match, 'f1': f1}

import torch
from transformers import LongformerTokenizerFast, LongformerForQuestionAnswering
from tqdm.auto import tqdm

tokenizer = LongformerTokenizerFast.from_pretrained('models')
model = LongformerForQuestionAnswering.from_pretrained('models')
model = model.cuda()
model.eval()
```
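Since the SQuAD evaluation helpers above are plain Python, they can be sanity-checked without any model - a small sketch that reproduces `normalize_answer` and `f1_score` (in a compact form, same transformation order) and runs them on a toy prediction:

```python
import re, string
from collections import Counter

def normalize_answer(s):
    # lower -> strip punctuation -> drop articles -> collapse whitespace
    s = s.lower()
    s = ''.join(ch for ch in s if ch not in set(string.punctuation))
    s = re.sub(r'\b(a|an|the)\b', ' ', s)
    return ' '.join(s.split())

def f1_score(prediction, ground_truth):
    pred_tokens = normalize_answer(prediction).split()
    gt_tokens = normalize_answer(ground_truth).split()
    common = Counter(pred_tokens) & Counter(gt_tokens)
    num_same = sum(common.values())
    if num_same == 0:
        return 0
    precision = num_same / len(pred_tokens)
    recall = num_same / len(gt_tokens)
    return 2 * precision * recall / (precision + recall)

# Articles and punctuation are ignored by the normalization
print(normalize_answer('The Eiffel Tower!'))        # eiffel tower
# 1 of 2 predicted tokens overlaps 1 of 1 gold tokens: P=0.5, R=1.0, F1=2/3
print(round(f1_score('eiffel tower', 'tower'), 3))  # 0.667
```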
github_jupyter
```
from IPython.display import Markdown as md

### change to reflect your notebook
_nb_loc = "09_deploying/09c_changesig.ipynb"
_nb_title = "Changing signatures of exported model"

### no need to change any of this
_nb_safeloc = _nb_loc.replace('/', '%2F')
md("""
<table class="tfo-notebook-buttons" align="left">
  <td>
    <a target="_blank" href="https://console.cloud.google.com/ai-platform/notebooks/deploy-notebook?name={1}&url=https%3A%2F%2Fgithub.com%2FGoogleCloudPlatform%2Fpractical-ml-vision-book%2Fblob%2Fmaster%2F{2}&download_url=https%3A%2F%2Fgithub.com%2FGoogleCloudPlatform%2Fpractical-ml-vision-book%2Fraw%2Fmaster%2F{2}">
      <img src="https://raw.githubusercontent.com/GoogleCloudPlatform/practical-ml-vision-book/master/logo-cloud.png"/> Run in AI Platform Notebook</a>
  </td>
  <td>
    <a target="_blank" href="https://colab.research.google.com/github/GoogleCloudPlatform/practical-ml-vision-book/blob/master/{0}">
      <img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
  </td>
  <td>
    <a target="_blank" href="https://github.com/GoogleCloudPlatform/practical-ml-vision-book/blob/master/{0}">
      <img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
  </td>
  <td>
    <a href="https://raw.githubusercontent.com/GoogleCloudPlatform/practical-ml-vision-book/master/{0}">
      <img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
  </td>
</table>
""".format(_nb_loc, _nb_title, _nb_safeloc))
```

# Changing signatures of exported model

In this notebook, we start from an already trained and saved model (as in Chapter 7). For convenience, we have put this model in a public bucket in gs://practical-ml-vision-book/flowers_5_trained

## Enable GPU and set up helper functions

This notebook and pretty much every other notebook in this repository will run faster if you are using a GPU.
On Colab:
- Navigate to Edit→Notebook Settings
- Select GPU from the Hardware Accelerator drop-down

On Cloud AI Platform Notebooks:
- Navigate to https://console.cloud.google.com/ai-platform/notebooks
- Create an instance with a GPU or select your instance and add a GPU

Next, we'll confirm that we can connect to the GPU with tensorflow:

```
import tensorflow as tf
print('TensorFlow version: ' + tf.version.VERSION)
print('Built with GPU support? ' + ('Yes!' if tf.test.is_built_with_cuda() else 'Noooo!'))
print('There are {} GPUs'.format(len(tf.config.experimental.list_physical_devices("GPU"))))
device_name = tf.test.gpu_device_name()
if device_name != '/device:GPU:0':
    raise SystemError('GPU device not found')
print('Found GPU at: {}'.format(device_name))
```

## Exported model

We start from a trained and saved model from Chapter 7.

<pre>
model.save(...)
</pre>

```
MODEL_LOCATION='gs://practical-ml-vision-book/flowers_5_trained'
!gsutil ls {MODEL_LOCATION}
!saved_model_cli show --tag_set serve --signature_def serving_default --dir {MODEL_LOCATION}
```

## Passing through an input

Note that the signature doesn't tell us the input filename. Let's add that.

```
import tensorflow as tf
import os, shutil

model = tf.keras.models.load_model(MODEL_LOCATION)

@tf.function(input_signature=[tf.TensorSpec([None,], dtype=tf.string)])
def predict_flower_type(filenames):
    old_fn = model.signatures['serving_default']
    result = old_fn(filenames)  # has flower_type_int etc.
    result['filename'] = filenames
    return result

shutil.rmtree('export', ignore_errors=True)
os.mkdir('export')
model.save('export/flowers_model',
           signatures={'serving_default': predict_flower_type})

!saved_model_cli show --tag_set serve --signature_def serving_default --dir export/flowers_model

import tensorflow as tf
serving_fn = tf.keras.models.load_model('export/flowers_model').signatures['serving_default']
filenames = [
    'gs://cloud-ml-data/img/flower_photos/dandelion/9818247_e2eac18894.jpg',
    'gs://cloud-ml-data/img/flower_photos/dandelion/9853885425_4a82356f1d_m.jpg',
    'gs://cloud-ml-data/img/flower_photos/daisy/9299302012_958c70564c_n.jpg',
    'gs://cloud-ml-data/img/flower_photos/tulips/8733586143_3139db6e9e_n.jpg',
    'gs://cloud-ml-data/img/flower_photos/tulips/8713397358_0505cc0176_n.jpg'
]
pred = serving_fn(tf.convert_to_tensor(filenames))
print(pred)
```

## Multiple signatures

```
import tensorflow as tf
import os, shutil

model = tf.keras.models.load_model(MODEL_LOCATION)
old_fn = model.signatures['serving_default']

@tf.function(input_signature=[tf.TensorSpec([None,], dtype=tf.string)])
def pass_through_input(filenames):
    result = old_fn(filenames)  # has flower_type_int etc.
result['filename'] = filenames return result shutil.rmtree('export', ignore_errors=True) os.mkdir('export') model.save('export/flowers_model2', signatures={ 'serving_default': old_fn, 'input_pass_through': pass_through_input }) !saved_model_cli show --tag_set serve --dir export/flowers_model2 !saved_model_cli show --tag_set serve --dir export/flowers_model2 --signature_def serving_default !saved_model_cli show --tag_set serve --dir export/flowers_model2 --signature_def input_pass_through ``` ## Deploying multi-signature model as REST API ``` !./caip_deploy.sh --version multi --model_location ./export/flowers_model2 %%writefile request.json { "instances": [ { "filenames": "gs://cloud-ml-data/img/flower_photos/dandelion/9818247_e2eac18894.jpg" }, { "filenames": "gs://cloud-ml-data/img/flower_photos/dandelion/9853885425_4a82356f1d_m.jpg" }, { "filenames": "gs://cloud-ml-data/img/flower_photos/daisy/9299302012_958c70564c_n.jpg" }, { "filenames": "gs://cloud-ml-data/img/flower_photos/tulips/8733586143_3139db6e9e_n.jpg" }, { "filenames": "gs://cloud-ml-data/img/flower_photos/tulips/8713397358_0505cc0176_n.jpg" } ] } !gcloud ai-platform predict --model=flowers --version=multi --json-request=request.json %%writefile request.json { "signature_name": "input_pass_through", "instances": [ { "filenames": "gs://cloud-ml-data/img/flower_photos/dandelion/9818247_e2eac18894.jpg" }, { "filenames": "gs://cloud-ml-data/img/flower_photos/dandelion/9853885425_4a82356f1d_m.jpg" }, { "filenames": "gs://cloud-ml-data/img/flower_photos/daisy/9299302012_958c70564c_n.jpg" }, { "filenames": "gs://cloud-ml-data/img/flower_photos/tulips/8733586143_3139db6e9e_n.jpg" }, { "filenames": "gs://cloud-ml-data/img/flower_photos/tulips/8713397358_0505cc0176_n.jpg" } ] } !gcloud ai-platform predict --model=flowers --version=multi --json-request=request.json ``` that's a bug ... filed a bug report; hope it's fixed by the time you are reading the book. 
``` from oauth2client.client import GoogleCredentials import requests import json PROJECT = 'ai-analytics-solutions' # CHANGE MODEL_NAME = 'flowers' MODEL_VERSION = 'multi' token = GoogleCredentials.get_application_default().get_access_token().access_token api = 'https://ml.googleapis.com/v1/projects/{}/models/{}/versions/{}:predict' \ .format(PROJECT, MODEL_NAME, MODEL_VERSION) headers = {'Authorization': 'Bearer ' + token } data = { "signature_name": "input_pass_through", "instances": [ { "filenames": "gs://cloud-ml-data/img/flower_photos/dandelion/9818247_e2eac18894.jpg" }, { "filenames": "gs://cloud-ml-data/img/flower_photos/dandelion/9853885425_4a82356f1d_m.jpg" }, { "filenames": "gs://cloud-ml-data/img/flower_photos/daisy/9299302012_958c70564c_n.jpg" }, { "filenames": "gs://cloud-ml-data/img/flower_photos/tulips/8733586143_3139db6e9e_n.jpg" }, { "filenames": "gs://cloud-ml-data/img/flower_photos/tulips/8713397358_0505cc0176_n.jpg" } ] } response = requests.post(api, json=data, headers=headers) print(response.content.decode('utf-8')) ``` ## License Copyright 2020 Google Inc. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.
github_jupyter
<a href="https://colab.research.google.com/github/Miseq/naive_imdb_reviews_model/blob/master/naive_imdb_model.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>

```
from keras.datasets import imdb
from keras import optimizers
from keras import losses
from keras import metrics
from keras import models
from keras import layers
import matplotlib.pyplot as plt
import numpy as np

def vectorize_data(data, dimension=10000):
    result = np.zeros((len(data), dimension))
    for i, seq in enumerate(data):
        result[i, seq] = 1.
    return result

(train_data, train_labels), (test_data, test_labels) = imdb.load_data(num_words=10000)
x_train = vectorize_data(train_data)
x_test = vectorize_data(test_data)
y_train = np.asarray(train_labels).astype('float32')
y_test = np.asarray(test_labels).astype('float32')

model = models.Sequential()
model.add(layers.Dense(16, activation='relu', input_shape=(10000,)))
model.add(layers.Dense(1, activation='sigmoid'))
model.compile(optimizer=optimizers.RMSprop(lr=0.001),
              loss=losses.binary_crossentropy,
              metrics=['accuracy'])

x_val = x_train[:20000]
partial_x_train = x_train[20000:]
y_val = y_train[:20000]
partial_y_train = y_train[20000:]

history = model.fit(partial_x_train, partial_y_train, epochs=20, batch_size=512, validation_data=(x_val, y_val))

history_dict = history.history
history_dict.keys()

acc = history.history['acc']
val_acc = history.history['val_acc']
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs = range(1, len(acc)+1)

# Plot the training and validation loss
plt.plot(epochs, loss, 'bo', label='Training loss')
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and validation loss')
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.legend()
plt.show()

# Plot the training and validation accuracy
plt.clf()  # Clear the figure (important)
acc_values = history_dict['acc']
val_acc_values = history_dict['val_acc']
plt.plot(epochs, acc, 'bo', label='Training accuracy')
plt.plot(epochs, val_acc, 'b', label='Validation accuracy')
plt.title('Training and validation accuracy')
plt.xlabel('Epochs')
plt.ylabel('Accuracy')
plt.legend()
plt.show()

min_loss_val = min(val_loss)
max_acc_val = max(val_acc)
min_loss_ix = val_loss.index(min_loss_val)
max_acc_ix = val_acc.index(max_acc_val)
print(f'{min_loss_ix} --- {max_acc_ix}')
```

After epoch 7 the model starts to overfit

```
model.fit(x_train, y_train, epochs=7, batch_size=512, validation_data=(x_val, y_val))
results = model.evaluate(x_test, y_test)
results
```

## More hidden layers

```
model = models.Sequential()
model.add(layers.Dense(16, activation='relu', input_shape=(10000,)))
model.add(layers.Dense(8, activation='relu'))
model.add(layers.Dense(4, activation='relu'))
model.add(layers.Dense(1, activation='sigmoid'))
model.compile(optimizer=optimizers.RMSprop(lr=0.001),
              loss=losses.binary_crossentropy,
              metrics=['accuracy'])
model.fit(x_train, y_train, epochs=10, batch_size=512, validation_data=(x_val, y_val))
results = model.evaluate(x_test, y_test)
results
```

## More hidden units

```
model = models.Sequential()
model.add(layers.Dense(64, activation='relu', input_shape=(10000,)))
model.add(layers.Dense(32, activation='relu'))
model.add(layers.Dense(16, activation='relu'))
model.add(layers.Dense(1, activation='sigmoid'))
model.compile(optimizer=optimizers.RMSprop(lr=0.001),
              loss=losses.binary_crossentropy,
              metrics=['accuracy'])
model.fit(x_train, y_train, epochs=10, batch_size=512, validation_data=(x_val, y_val))
results = model.evaluate(x_test, y_test)
results
```

## MSE loss function

```
model = models.Sequential()
model.add(layers.Dense(64, activation='relu', input_shape=(10000,)))
model.add(layers.Dense(32, activation='relu'))
model.add(layers.Dense(16, activation='relu'))
model.add(layers.Dense(1, activation='sigmoid'))
model.compile(optimizer=optimizers.RMSprop(lr=0.001),
              loss=losses.mse,
              metrics=['accuracy'])
model.fit(x_train, y_train, epochs=10, batch_size=512, validation_data=(x_val, y_val))
results = model.evaluate(x_test, y_test)
results
```
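The `vectorize_data` multi-hot encoding used at the top of this notebook is easy to verify on a tiny input - a sketch with a reduced `dimension` (the function body is unchanged; only numpy is needed):

```python
import numpy as np

def vectorize_data(data, dimension=10):
    result = np.zeros((len(data), dimension))
    for i, seq in enumerate(data):
        result[i, seq] = 1.   # set every index that appears in the sequence
    return result

x = vectorize_data([[0, 3], [9]])
print(x.shape)                    # (2, 10)
print(x[0, 0], x[0, 3], x[1, 9])  # 1.0 at the listed word indices
print(x[0, 1])                    # 0.0 everywhere else
```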
github_jupyter
```
import numpy as np
import heapq
import matplotlib.pyplot as plt
from math import inf
from itertools import product
%matplotlib inline

class PQ(object):
    '''Wrapper object for heapq module'''
    def __init__(self, data, key):
        self.key = key
        self._data = [(key(elt), elt) for elt in data]
        heapq.heapify(self._data)

    def __len__(self):
        # Needed so that `while frontier:` actually reflects whether the queue is empty
        return len(self._data)

    def push(self, elt):
        heapq.heappush(self._data, (self.key(elt), elt))

    def pop(self):
        return heapq.heappop(self._data)[1]

class Dijkstra:
    '''0s in grid mark free space, 1s mark obstacles'''
    def __init__(self, grid, start, dest):
        self.grid = grid
        self.start = start
        self.dest = dest
        self.route = []
        self.came_from = {start: start}
        self.dist_to = np.array([[inf] * grid.shape[1]] * grid.shape[0])
        self.dist_to[start] = 0

    def dijsktra(self):
        seen = set()
        y0, y1 = self.start[0], self.start[1]
        manhattan = lambda x: abs(x[0] - y0) + abs(x[1] - y1)
        frontier = PQ(data=[self.start], key=manhattan)
        while frontier:
            curr = frontier.pop()
            seen.add(curr)
            for nbr in self.get_neighbors(*curr):
                if nbr not in seen:
                    self.visit_neighbor(curr, nbr)
                    frontier.push(nbr)
                    if nbr == self.dest:
                        frontier = []
                        break
        self.build_route()
        return self.route

    def build_route(self):
        node = self.dest
        route = [self.dest]
        while node != self.came_from[node]:
            node = self.came_from[node]
            route += [node]
        self.route = list(reversed(route))

    def visit_neighbor(self, curr, nbr):
        tentative_dist = self.dist_to[curr] + 1
        if tentative_dist < self.dist_to[nbr]:
            self.dist_to[nbr] = tentative_dist
            self.came_from[nbr] = curr

    def get_neighbors(self, i, j):
        m, n = self.grid.shape
        in_bounds = lambda x: 0 <= x[0] < m and 0 <= x[1] < n
        available = lambda x: (x[0]==i) != (x[1]==j) and self.grid[x] != 1
        candidates = product(range(i-1, i+2), range(j-1, j+2))
        return filter(available, filter(in_bounds, candidates))

    def visualize(self):
        route = list(map(list, zip(*self.route)))
        self.grid[route] = 2
        self.grid[self.start] = 3
        self.grid[self.dest] = 4
        f, ax = plt.subplots()
        ax.imshow(self.grid)
        ax.set_axis_off()
        ax.set_title('Dijkstra')
        plt.show()

def add_obstacles(grid, obstacles):
    for obs in obstacles:
        grid[obs] = 1

def setup_grid():
    grid = np.zeros((10, 10), dtype=int)
    o1 = ([2] * 6, range(3, 9))
    o2 = ([6] * 2, range(6, 8))
    o3 = (range(5, 9), [5] * 4)
    add_obstacles(grid, [o1, o2, o3])
    return grid

def main():
    grid = setup_grid()
    start = (0, 1)
    dest = (8, 7)
    d = Dijkstra(grid, start, dest)
    route = d.dijsktra()
    d.visualize()

if __name__ == '__main__':
    main()
```
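On an unweighted grid every step costs 1, so the route length found above can be cross-checked against a plain breadth-first search - a minimal stdlib-only sketch using the same obstacle layout, start, and destination as `setup_grid()`/`main()`:

```python
from collections import deque

def bfs_shortest(grid_shape, obstacles, start, dest):
    """Length of the shortest 4-neighbour path on an unweighted grid, or None."""
    m, n = grid_shape
    blocked = set(obstacles)
    dist = {start: 0}
    q = deque([start])
    while q:
        cell = q.popleft()
        if cell == dest:
            return dist[cell]
        i, j = cell
        for nbr in ((i-1, j), (i+1, j), (i, j-1), (i, j+1)):
            if (0 <= nbr[0] < m and 0 <= nbr[1] < n
                    and nbr not in blocked and nbr not in dist):
                dist[nbr] = dist[cell] + 1
                q.append(nbr)
    return None  # destination unreachable

# Same obstacle layout as setup_grid(), expanded to individual cells
obstacles = ([(2, c) for c in range(3, 9)]
             + [(6, c) for c in range(6, 8)]
             + [(r, 5) for r in range(5, 9)])
print(bfs_shortest((10, 10), obstacles, (0, 1), (8, 7)))  # → 16
```

The walls force two extra moves over the Manhattan distance of 14, so any correct shortest-path routine on this grid should report 16 steps.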
github_jupyter
```
import pandas as pd
import urllib.request

def datacovidChile():
    url = "https://onedrive.live.com/download?cid=9f999e057ad8c646&page=view&resid=9F999E057AD8C646!62342&parId=9F999E057AD8C646!62390&authkey=!AgJICaWKd7tHakw&app=Excel"
    urllib.request.urlretrieve(url, "datacovidChile/BASE CALCULO COMUNA.xlsx")
    url = "https://onedrive.live.com/download?cid=9f999e057ad8c646&page=view&resid=9F999E057AD8C646!62359&parId=9F999E057AD8C646!62390&authkey=!AgJICaWKd7tHakw&app=Excel"
    urllib.request.urlretrieve(url, "datacovidChile/casos por comuna listos.xlsx")
    url = "https://onedrive.live.com/download?cid=9f999e057ad8c646&page=view&resid=9F999E057AD8C646!62361&parId=9F999E057AD8C646!62390&authkey=!AgJICaWKd7tHakw&app=Excel"
    urllib.request.urlretrieve(url, "datacovidChile/Covid Chile V2.xlsx")
    url = "https://onedrive.live.com/download?cid=9f999e057ad8c646&page=view&resid=9F999E057AD8C646!62377&parId=9F999E057AD8C646!62371&authkey=!Au8PrBa4C6_6k_M&app=Excel"
    urllib.request.urlretrieve(url, "datacovidChile/00 DATACOVID Trabajo_HN.xlsx")
    url = "https://onedrive.live.com/download?cid=9f999e057ad8c646&page=view&resid=9F999E057AD8C646!62380&parId=9F999E057AD8C646!62371&authkey=!Au8PrBa4C6_6k_M&app=Excel"
    urllib.request.urlretrieve(url, "datacovidChile/00 DATACOVID_HN_CUARENTENA.xlsx")
    url = "https://onedrive.live.com/download?cid=9f999e057ad8c646&page=view&resid=9F999E057AD8C646!62388&parId=9F999E057AD8C646!62371&authkey=!Au8PrBa4C6_6k_M&app=Excel"
    urllib.request.urlretrieve(url, "datacovidChile/ALIMENTACION_HN.xlsx")
    url = "https://onedrive.live.com/download?cid=9f999e057ad8c646&page=view&resid=9F999E057AD8C646!62372&parId=9F999E057AD8C646!62371&authkey=!Au8PrBa4C6_6k_M&app=Excel"
    urllib.request.urlretrieve(url, "datacovidChile/Covid HN.xlsx")
    url = "https://onedrive.live.com/download?cid=9f999e057ad8c646&page=view&resid=9F999E057AD8C646!62386&parId=9F999E057AD8C646!62371&authkey=!Au8PrBa4C6_6k_M&app=Excel"
    urllib.request.urlretrieve(url, "datacovidChile/FARMACIAS_HN.xlsx")
    url = \
"https://onedrive.live.com/download?cid=9f999e057ad8c646&page=view&resid=9F999E057AD8C646!62378&parId=9F999E057AD8C646!62371&authkey=!Au8PrBa4C6_6k_M&app=Excel"
    urllib.request.urlretrieve(url, "datacovidChile/LOCALIZA HN.xlsx")
    url = "https://onedrive.live.com/download?cid=9f999e057ad8c646&page=view&resid=9F999E057AD8C646!62384&parId=9F999E057AD8C646!62371&authkey=!Au8PrBa4C6_6k_M&app=Excel"
    urllib.request.urlretrieve(url, "datacovidChile/SALUD_HN.xlsx")
    url = "https://onedrive.live.com/download?cid=9f999e057ad8c646&page=view&resid=9F999E057AD8C646!62381&parId=9F999E057AD8C646!62371&authkey=!Au8PrBa4C6_6k_M&app=Excel"
    urllib.request.urlretrieve(url, "datacovidChile/Tabla_INSTALACIONES_Honduras_v1.xlsx")
    return

def csvHoja_datacovidChile():
    dato = pd.read_excel("datacovidChile/00 DATACOVID Trabajo_HN.xlsx", sheet_name="trabajo")
    dato.to_csv("datacovidChile/csv/00 DATACOVID Trabajo_HN-trabajo.csv", index = False)
    dato = pd.read_excel("datacovidChile/00 DATACOVID_HN_CUARENTENA.xlsx", sheet_name="Cuarentena_HN")
    dato.to_csv("datacovidChile/csv/00 DATACOVID_HN_CUARENTENA-Cuarentena_HN.csv", index = False)
    dato = pd.read_excel("datacovidChile/ALIMENTACION_HN.xlsx", sheet_name="ALIMENTACION")
    dato.to_csv("datacovidChile/csv/ALIMENTACION_HN-ALIMENTACION.csv", index = False)

    dato = pd.read_excel("datacovidChile/BASE CALCULO COMUNA.xlsx", sheet_name=None)
    sheets = dato.keys()
    for sheet_name in sheets:
        sheet = pd.read_excel("datacovidChile/BASE CALCULO COMUNA.xlsx", sheet_name=sheet_name)
        # drop the row if all of its values are NaN
        sheet.dropna(axis = 0, how = 'all', inplace = True)
        # drop the 'Unnamed' header row and promote the next row to be the header
        if(sheet.columns[0] == 'Unnamed: 0'):
            sheet.columns = sheet.iloc[0]
            sheet = sheet.iloc[1:,].reindex()
        sheet.to_csv("datacovidChile/csv/BASE CALCULO COMUNA-%s.csv" % sheet_name, index=False)

    dato = pd.read_excel("datacovidChile/casos por comuna listos.xlsx", sheet_name=None)
    sheets = dato.keys()
    for sheet_name in sheets:
        sheet = \
            pd.read_excel("datacovidChile/casos por comuna listos.xlsx", sheet_name=sheet_name)
        sheet.dropna(axis = 0, how = 'all', inplace = True)
        if(sheet.columns[0] == 'Unnamed: 0'):
            sheet.columns = sheet.iloc[0]
            sheet = sheet.iloc[1:,].reindex()
        sheet.to_csv("datacovidChile/csv/casos por comuna listos-%s.csv" % sheet_name, index=False)

    dato = pd.read_excel("datacovidChile/Covid Chile V2.xlsx", sheet_name=None)
    sheets = dato.keys()
    for sheet_name in sheets:
        sheet = pd.read_excel("datacovidChile/Covid Chile V2.xlsx", sheet_name=sheet_name)
        sheet.dropna(axis = 0, how = 'all', inplace = True)
        if(sheet.columns[0] == 'Unnamed: 0'):
            sheet.columns = sheet.iloc[0]
            sheet = sheet.iloc[1:,].reindex()
        sheet.to_csv("datacovidChile/csv/Covid Chile V2-%s.csv" % sheet_name, index=False)

    dato = pd.read_excel("datacovidChile/Covid HN.xlsx", sheet_name=None)
    sheets = dato.keys()
    for sheet_name in sheets:
        sheet = pd.read_excel("datacovidChile/Covid HN.xlsx", sheet_name=sheet_name)
        sheet.dropna(axis = 0, how = 'all', inplace = True)
        if(sheet.columns[0] == 'Unnamed: 0'):
            sheet.columns = sheet.iloc[0]
            sheet = sheet.iloc[1:,].reindex()
        sheet.to_csv("datacovidChile/csv/Covid HN-%s.csv" % sheet_name, index=False)

    dato = pd.read_excel("datacovidChile/FARMACIAS_HN.xlsx", sheet_name="FARMACIAS")
    dato.to_csv("datacovidChile/csv/FARMACIAS_HN-FARMACIAS.csv", index = False)
    dato = pd.read_excel("datacovidChile/LOCALIZA HN.xlsx", sheet_name="Hoja1")
    dato.to_csv("datacovidChile/csv/LOCALIZA HN-Hoja1.csv", index = False)
    dato = pd.read_excel("datacovidChile/SALUD_HN.xlsx", sheet_name="HOSPITALES")
    dato.to_csv("datacovidChile/csv/SALUD_HN-HOSPITALES.csv", index = False)
    dato = pd.read_excel("datacovidChile/Tabla_INSTALACIONES_Honduras_v1.xlsx", sheet_name="Instalaciones_Honduras")
    dato.to_csv("datacovidChile/csv/Tabla_INSTALACIONES_Honduras_v1-Instalaciones_Honduras.csv", index = False)
    return

csvHoja_datacovidChile()
```
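The per-sheet cleanup above - drop all-NaN rows, then promote the first data row to the header whenever pandas reports `Unnamed: 0` columns - can be illustrated on an in-memory frame. A sketch independent of the OneDrive files, with made-up values; `reset_index` is used here in place of the notebook's no-op `reindex()`:

```python
import numpy as np
import pandas as pd

# Simulate a sheet whose real header landed in the first data row,
# plus a fully-empty row (illustrative values)
sheet = pd.DataFrame(
    [['comuna', 'casos'], [np.nan, np.nan], ['Santiago', 100], ['Valparaiso', 50]],
    columns=['Unnamed: 0', 'Unnamed: 1'])

sheet.dropna(axis=0, how='all', inplace=True)      # remove rows that are all NaN
if sheet.columns[0] == 'Unnamed: 0':
    sheet.columns = sheet.iloc[0]                  # first remaining row -> header
    sheet = sheet.iloc[1:].reset_index(drop=True)  # drop that row from the data

print(list(sheet.columns))   # ['comuna', 'casos']
print(len(sheet))            # 2
```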
github_jupyter
### Control Flow

#### Find the largest of three numbers

```
num1 = 100
num2 = 200
num3 = 1

if (num1 >= num2) and (num1 >= num3):
    largest = num1
elif (num2 >= num1) and (num2 >= num3):
    largest = num2
elif (num3 >= num1) and (num3 >= num2):
    largest = num3

print("Largest number is ", largest)
```

### while loop

#### Syntax:

    while test_expression:
        Body of while

#### Example

```
# find the sum of all numbers present in the list
lst = [2,13,21,19,10]
sum_list = 0
indx = 0
while indx < len(lst):
    sum_list = sum_list + lst[indx]
    indx += 1
print("Total sum is {}".format(sum_list))
```

#### While loop with else

```
lst = [2,13,21,19,10]
indx = 0
while indx < len(lst):
    print(lst[indx])
    indx += 1
else:
    print("Items finished in the list. Index is {}".format(indx))

# what happens if we forget to increase the index? (this loops forever)
lst = [2,13,21,19,10]
indx = 0
while indx < len(lst):
    print(lst[indx])
    # indx += 1
else:
    print("Items finished in the list. Index is {}".format(indx))
```

#### Program to check if a number is prime

```
num = int(input("Enter a number"))
div = 2
isDivisible = False
while div < num:
    if num % div == 0:
        isDivisible = True
    div += 1
if isDivisible:
    print("{} is not a prime number".format(num))
else:
    print("{} is a prime number".format(num))
```

### Python For Loop

Used to iterate over sequences/lists:

    for element in sequence:
        body of For

```
# iterate over the numbers in a list
lst = [21,12,13,14,90]
for element in lst:
    print(element)  # whatever you want
    # with element
    if element % 2 == 1:
        print("{} is odd".format(element))
```

#### range() function

We can generate a sequence of numbers using the range() function. range(10) will generate numbers from 0 to 9.
Therefore, 10 numbers.

```
# print a range of 4 numbers
for i in range(4):
    print(i)

# print numbers from 0 to 8, with an increment of 2
for i in range(0,10,2):
    print(i)

# Apply the range function to iterate through a list
fruit_list = ["mango","apple","banana","grapes","strawberry"]
print(len(fruit_list))
# what are the two ways we can print this list?
for i in range(len(fruit_list)):
    print(fruit_list[i])
```

#### for loop with else

```
vals = [12,21,13]
for val in vals:
    print(val)
else:
    print('no items are left')
```

#### Code to display all prime numbers between an interval

```
# should contain a nested for loop
```

#### Example of continue

```
# print odd numbers present in a list
# use continue
list_num = [21,13,14,15,20,19,16]
for my_num in list_num:
    if my_num % 2 == 0:
        continue  # skip even numbers
    print(my_num)

# print if a number is prime
# use break
num = 23
for i in range(2, num):
    if num % i == 0:
        print("{} is not a prime number".format(num))
        break
else:
    print("{} is a prime number".format(num))
```
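The "display all prime numbers between an interval" cell above is left as a placeholder; one possible nested-for-loop solution, using the for-else idiom shown earlier (the interval bounds here are chosen arbitrarily):

```python
lower, upper = 10, 30

primes = []
for num in range(lower, upper + 1):
    if num < 2:
        continue                  # 0 and 1 are not prime
    for div in range(2, num):     # inner loop: try every candidate divisor
        if num % div == 0:
            break                 # found a divisor -> not prime
    else:
        primes.append(num)        # inner loop ended without break -> prime

print(primes)  # [11, 13, 17, 19, 23, 29]
```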
github_jupyter
# Convolutional Neural Networks: Application

Welcome to Course 4's second assignment! In this notebook, you will:

- Implement helper functions that you will use when implementing a TensorFlow model
- Implement a fully functioning ConvNet using TensorFlow

**After this assignment you will be able to:**

- Build and train a ConvNet in TensorFlow for a classification problem

We assume here that you are already familiar with TensorFlow. If you are not, please refer to the *TensorFlow Tutorial* of the third week of Course 2 ("*Improving deep neural networks*").

### <font color='darkblue'> Updates to Assignment </font>

#### If you were working on a previous version

* The current notebook filename is version "1a".
* You can find your work in the file directory as version "1".
* To view the file directory, go to the menu "File->Open", and this will open a new tab that shows the file directory.

#### List of Updates

* `initialize_parameters`: added details about tf.get_variable, `eval`. Clarified test case.
* Added explanations for the kernel (filter) stride values, max pooling, and flatten functions.
* Added details about softmax cross entropy with logits.
* Added instructions for creating the Adam Optimizer.
* Added explanation of how to evaluate tensors (optimizer and cost).
* `forward_propagation`: clarified instructions, use "F" to store "flatten" layer.
* Updated print statements and 'expected output' for easier visual comparisons.
* Many thanks to Kevin P. Brown (mentor for the deep learning specialization) for his suggestions on the assignments in this course!

## 1.0 - TensorFlow model

In the previous assignment, you built helper functions using numpy to understand the mechanics behind convolutional neural networks. Most practical applications of deep learning today are built using programming frameworks, which have many built-in functions you can simply call.

As usual, we will start by loading in the packages.
``` import math import numpy as np import h5py import matplotlib.pyplot as plt import scipy from PIL import Image from scipy import ndimage import tensorflow as tf from tensorflow.python.framework import ops from cnn_utils import * %matplotlib inline np.random.seed(1) ``` Run the next cell to load the "SIGNS" dataset you are going to use. ``` # Loading the data (signs) X_train_orig, Y_train_orig, X_test_orig, Y_test_orig, classes = load_dataset() ``` As a reminder, the SIGNS dataset is a collection of 6 signs representing numbers from 0 to 5. <img src="images/SIGNS.png" style="width:800px;height:300px;"> The next cell will show you an example of a labelled image in the dataset. Feel free to change the value of `index` below and re-run to see different examples. ``` # Example of a picture index = 6 plt.imshow(X_train_orig[index]) print ("y = " + str(np.squeeze(Y_train_orig[:, index]))) ``` In Course 2, you had built a fully-connected network for this dataset. But since this is an image dataset, it is more natural to apply a ConvNet to it. To get started, let's examine the shapes of your data. ``` X_train = X_train_orig/255. X_test = X_test_orig/255. Y_train = convert_to_one_hot(Y_train_orig, 6).T Y_test = convert_to_one_hot(Y_test_orig, 6).T print ("number of training examples = " + str(X_train.shape[0])) print ("number of test examples = " + str(X_test.shape[0])) print ("X_train shape: " + str(X_train.shape)) print ("Y_train shape: " + str(Y_train.shape)) print ("X_test shape: " + str(X_test.shape)) print ("Y_test shape: " + str(Y_test.shape)) conv_layers = {} ``` ### 1.1 - Create placeholders TensorFlow requires that you create placeholders for the input data that will be fed into the model when running the session. **Exercise**: Implement the function below to create placeholders for the input image X and the output Y. You should not define the number of training examples for the moment. 
To do so, you could use "None" as the batch size; it will give you the flexibility to choose it later. Hence X should be of dimension **[None, n_H0, n_W0, n_C0]** and Y should be of dimension **[None, n_y]**. [Hint: search for the tf.placeholder documentation](https://www.tensorflow.org/api_docs/python/tf/placeholder). ``` # GRADED FUNCTION: create_placeholders def create_placeholders(n_H0, n_W0, n_C0, n_y): """ Creates the placeholders for the tensorflow session. Arguments: n_H0 -- scalar, height of an input image n_W0 -- scalar, width of an input image n_C0 -- scalar, number of channels of the input n_y -- scalar, number of classes Returns: X -- placeholder for the data input, of shape [None, n_H0, n_W0, n_C0] and dtype "float" Y -- placeholder for the input labels, of shape [None, n_y] and dtype "float" """ ### START CODE HERE ### (≈2 lines) X = tf.placeholder(tf.float32, [None, n_H0, n_W0, n_C0]) Y = tf.placeholder(tf.float32, [None, n_y]) ### END CODE HERE ### return X, Y X, Y = create_placeholders(64, 64, 3, 6) print ("X = " + str(X)) print ("Y = " + str(Y)) ``` **Expected Output** <table> <tr> <td> X = Tensor("Placeholder:0", shape=(?, 64, 64, 3), dtype=float32) </td> </tr> <tr> <td> Y = Tensor("Placeholder_1:0", shape=(?, 6), dtype=float32) </td> </tr> </table> ### 1.2 - Initialize parameters You will initialize weights/filters $W1$ and $W2$ using `tf.contrib.layers.xavier_initializer(seed = 0)`. You don't need to worry about bias variables as you will soon see that TensorFlow functions take care of the bias. Note also that you will only initialize the weights/filters for the conv2d functions. TensorFlow initializes the layers for the fully connected part automatically. We will talk more about that later in this assignment. **Exercise:** Implement initialize_parameters(). The dimensions for each group of filters are provided below. 
Reminder - to initialize a parameter $W$ of shape [1,2,3,4] in Tensorflow, use: ```python W = tf.get_variable("W", [1,2,3,4], initializer = ...) ``` #### tf.get_variable() [Search for the tf.get_variable documentation](https://www.tensorflow.org/api_docs/python/tf/get_variable). Notice that the documentation says: ``` Gets an existing variable with these parameters or create a new one. ``` So we can use this function to create a TensorFlow variable with the specified name; if a variable with that name already exists, it will return the existing variable instead. ``` # GRADED FUNCTION: initialize_parameters def initialize_parameters(): """ Initializes weight parameters to build a neural network with tensorflow. The shapes are: W1 : [4, 4, 3, 8] W2 : [2, 2, 8, 16] Note that we will hard code the shape values in the function to make the grading simpler. Normally, functions should take values as inputs rather than hard coding. Returns: parameters -- a dictionary of tensors containing W1, W2 """ tf.set_random_seed(1) # so that your "random" numbers match ours ### START CODE HERE ### (approx. 
2 lines of code) W1 = tf.get_variable("W1", [4, 4, 3, 8], initializer=tf.contrib.layers.xavier_initializer(seed=0)) W2 = tf.get_variable("W2", [2, 2, 8, 16], initializer=tf.contrib.layers.xavier_initializer(seed=0)) ### END CODE HERE ### parameters = {"W1": W1, "W2": W2} return parameters tf.reset_default_graph() with tf.Session() as sess_test: parameters = initialize_parameters() init = tf.global_variables_initializer() sess_test.run(init) print("W1[1,1,1] = \n" + str(parameters["W1"].eval()[1,1,1])) print("W1.shape: " + str(parameters["W1"].shape)) print("\n") print("W2[1,1,1] = \n" + str(parameters["W2"].eval()[1,1,1])) print("W2.shape: " + str(parameters["W2"].shape)) ``` ** Expected Output:** ``` W1[1,1,1] = [ 0.00131723 0.14176141 -0.04434952 0.09197326 0.14984085 -0.03514394 -0.06847463 0.05245192] W1.shape: (4, 4, 3, 8) W2[1,1,1] = [-0.08566415 0.17750949 0.11974221 0.16773748 -0.0830943 -0.08058 -0.00577033 -0.14643836 0.24162132 -0.05857408 -0.19055021 0.1345228 -0.22779644 -0.1601823 -0.16117483 -0.10286498] W2.shape: (2, 2, 8, 16) ``` ### 1.3 - Forward propagation In TensorFlow, there are built-in functions that implement the convolution steps for you. - **tf.nn.conv2d(X,W, strides = [1,s,s,1], padding = 'SAME'):** given an input $X$ and a group of filters $W$, this function convolves $W$'s filters on X. The third parameter ([1,s,s,1]) represents the strides for each dimension of the input (m, n_H_prev, n_W_prev, n_C_prev). Normally, you'll choose a stride of 1 for the number of examples (the first value) and for the channels (the fourth value), which is why we wrote the value as `[1,s,s,1]`. You can read the full documentation on [conv2d](https://www.tensorflow.org/api_docs/python/tf/nn/conv2d). - **tf.nn.max_pool(A, ksize = [1,f,f,1], strides = [1,s,s,1], padding = 'SAME'):** given an input A, this function uses a window of size (f, f) and strides of size (s, s) to carry out max pooling over each window. 
For max pooling, we usually operate on a single example at a time and a single channel at a time. So the first and fourth value in `[1,f,f,1]` are both 1. You can read the full documentation on [max_pool](https://www.tensorflow.org/api_docs/python/tf/nn/max_pool). - **tf.nn.relu(Z):** computes the elementwise ReLU of Z (which can be any shape). You can read the full documentation on [relu](https://www.tensorflow.org/api_docs/python/tf/nn/relu). - **tf.contrib.layers.flatten(P)**: given a tensor "P", this function takes each training (or test) example in the batch and flattens it into a 1D vector. * If a tensor P has the shape (m,h,w,c), where m is the number of examples (the batch size), it returns a flattened tensor with shape (batch_size, k), where $k=h \times w \times c$. "k" equals the product of all the dimension sizes other than the first dimension. * For example, given a tensor with dimensions [100,2,3,4], it flattens the tensor to be of shape [100, 24], where 24 = 2 * 3 * 4. You can read the full documentation on [flatten](https://www.tensorflow.org/api_docs/python/tf/contrib/layers/flatten). - **tf.contrib.layers.fully_connected(F, num_outputs):** given the flattened input F, it returns the output computed using a fully connected layer. You can read the full documentation on [fully_connected](https://www.tensorflow.org/api_docs/python/tf/contrib/layers/fully_connected). In the last function above (`tf.contrib.layers.fully_connected`), the fully connected layer automatically initializes weights in the graph and keeps on training them as you train the model. Hence, you did not need to initialize those weights when initializing the parameters. #### Window, kernel, filter The words "window", "kernel", and "filter" are used to refer to the same thing. This is why the parameter `ksize` refers to "kernel size", and we use `(f,f)` to refer to the filter size. Both "kernel" and "filter" refer to the "window." 
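The shape bookkeeping implied by these functions can be checked with plain Python arithmetic (no TensorFlow needed): with `padding='SAME'`, the output spatial size of a conv or pool layer is `ceil(n / stride)`, and flatten multiplies the remaining dimensions together. A quick sketch tracing this assignment's model:

```python
import math

def same_out(n, stride):
    """Spatial output size of a conv/pool layer with padding='SAME': ceil(n / stride)."""
    return math.ceil(n / stride)

# Trace the shapes through this assignment's model for one 64x64x3 image.
h = w = 64
h, w, c = same_out(h, 1), same_out(w, 1), 8    # CONV2D with W1 (stride 1) -> 64x64x8
h, w = same_out(h, 8), same_out(w, 8)          # MAXPOOL 8x8, stride 8     -> 8x8x8
h, w, c = same_out(h, 1), same_out(w, 1), 16   # CONV2D with W2 (stride 1) -> 8x8x16
h, w = same_out(h, 4), same_out(w, 4)          # MAXPOOL 4x4, stride 4     -> 2x2x16
flattened = h * w * c                          # flatten -> 2 * 2 * 16 = 64 per example
print(h, w, c, flattened)                      # 2 2 16 64
```

So each example reaches the fully connected layer as a 64-dimensional vector, which is exactly what `tf.contrib.layers.flatten` produces here.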
**Exercise**: Implement the `forward_propagation` function below to build the following model: `CONV2D -> RELU -> MAXPOOL -> CONV2D -> RELU -> MAXPOOL -> FLATTEN -> FULLYCONNECTED`. You should use the functions above. In detail, we will use the following parameters for all the steps: - Conv2D: stride 1, padding is "SAME" - ReLU - Max pool: Use an 8 by 8 filter size and an 8 by 8 stride, padding is "SAME" - Conv2D: stride 1, padding is "SAME" - ReLU - Max pool: Use a 4 by 4 filter size and a 4 by 4 stride, padding is "SAME" - Flatten the previous output. - FULLYCONNECTED (FC) layer: Apply a fully connected layer without a non-linear activation function. Do not call the softmax here. This will result in 6 neurons in the output layer, which then get passed later to a softmax. In TensorFlow, the softmax and cost function are lumped together into a single function, which you'll call in a different function when computing the cost. ``` # GRADED FUNCTION: forward_propagation def forward_propagation(X, parameters): """ Implements the forward propagation for the model: CONV2D -> RELU -> MAXPOOL -> CONV2D -> RELU -> MAXPOOL -> FLATTEN -> FULLYCONNECTED Note that for simplicity and grading purposes, we'll hard-code some values such as the stride and kernel (filter) sizes. Normally, functions should take these values as function parameters. 
Arguments: X -- input dataset placeholder, of shape (None, n_H0, n_W0, n_C0) parameters -- python dictionary containing your parameters "W1", "W2" the shapes are given in initialize_parameters Returns: Z3 -- the output of the last LINEAR unit """ # Retrieve the parameters from the dictionary "parameters" W1 = parameters['W1'] W2 = parameters['W2'] ### START CODE HERE ### # CONV2D: stride of 1, padding 'SAME' Z1 = tf.nn.conv2d(X, W1, strides=[1, 1, 1, 1], padding='SAME') # RELU A1 = tf.nn.relu(Z1) # MAXPOOL: window 8x8, stride 8, padding 'SAME' P1 = tf.nn.max_pool(A1, ksize=[1, 8, 8, 1], strides=[1, 8, 8, 1], padding='SAME') # CONV2D: filters W2, stride 1, padding 'SAME' Z2 = tf.nn.conv2d(P1, W2, strides=[1, 1, 1, 1], padding='SAME') # RELU A2 = tf.nn.relu(Z2) # MAXPOOL: window 4x4, stride 4, padding 'SAME' P2 = tf.nn.max_pool(A2, ksize=[1, 4, 4, 1], strides=[1, 4, 4, 1], padding='SAME') # FLATTEN F = tf.contrib.layers.flatten(P2) # FULLY-CONNECTED without non-linear activation function (do not call softmax). # 6 neurons in output layer. Hint: one of the arguments should be "activation_fn=None" Z3 = tf.contrib.layers.fully_connected(F, 6, activation_fn=None) ### END CODE HERE ### return Z3 tf.reset_default_graph() with tf.Session() as sess: np.random.seed(1) X, Y = create_placeholders(64, 64, 3, 6) parameters = initialize_parameters() Z3 = forward_propagation(X, parameters) init = tf.global_variables_initializer() sess.run(init) a = sess.run(Z3, {X: np.random.randn(2,64,64,3), Y: np.random.randn(2,6)}) print("Z3 = \n" + str(a)) ``` **Expected Output**: ``` Z3 = [[-0.44670227 -1.57208765 -1.53049231 -2.31013036 -1.29104376 0.46852064] [-0.17601591 -1.57972014 -1.4737016 -2.61672091 -1.00810647 0.5747785 ]] ``` ### 1.4 - Compute cost Implement the compute cost function below. Remember that the cost function helps the neural network see how much the model's predictions differ from the correct labels. 
By adjusting the weights of the network to reduce the cost, the neural network can improve its predictions. You might find these two functions helpful: - **tf.nn.softmax_cross_entropy_with_logits(logits = Z, labels = Y):** computes the softmax cross entropy loss. This function both computes the softmax activation function as well as the resulting loss. You can check the full documentation [softmax_cross_entropy_with_logits](https://www.tensorflow.org/api_docs/python/tf/nn/softmax_cross_entropy_with_logits). - **tf.reduce_mean:** computes the mean of elements across dimensions of a tensor. Use this to average the losses over all the examples to get the overall cost. You can check the full documentation [reduce_mean](https://www.tensorflow.org/api_docs/python/tf/reduce_mean). #### Details on softmax_cross_entropy_with_logits (optional reading) * Softmax is used to format outputs so that they can be used for classification. It assigns a value between 0 and 1 for each category, where the sum of all prediction values (across all possible categories) equals 1. * Cross Entropy compares the model's predicted classifications with the actual labels and results in a numerical value representing the "loss" of the model's predictions. * "Logits" are the result of multiplying the weights and adding the biases. Logits are passed through an activation function (such as a relu), and the result is called the "activation." * The function `softmax_cross_entropy_with_logits` takes logits as input (and not activations); it applies softmax to get predictions, and then compares the predictions with the true labels using cross entropy. These are done with a single function to optimize the calculations. **Exercise**: Compute the cost below using the function above. 
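To build intuition for what `tf.nn.softmax_cross_entropy_with_logits` computes, here is a plain-Python sketch for a single example (the TF op is vectorized and numerically stabler; the logits below are made up for illustration):

```python
import math

def softmax_cross_entropy_1example(logits, labels):
    """Softmax the logits into probabilities, then take the cross entropy
    against the one-hot labels: -sum(y * log(p))."""
    exps = [math.exp(z) for z in logits]
    total = sum(exps)
    probs = [e / total for e in exps]   # each in (0,1), and they sum to 1
    return -sum(y * math.log(p) for y, p in zip(labels, probs))

# A 3-class example whose true class is the first one.
loss = softmax_cross_entropy_1example([2.0, 1.0, 0.1], [1, 0, 0])
print(round(loss, 3))   # 0.417
```

`tf.reduce_mean` then averages this per-example loss over the whole mini-batch to produce the scalar cost.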
``` # GRADED FUNCTION: compute_cost def compute_cost(Z3, Y): """ Computes the cost Arguments: Z3 -- output of forward propagation (output of the last LINEAR unit), of shape (number of examples, 6) Y -- "true" labels vector placeholder, same shape as Z3 Returns: cost - Tensor of the cost function """ ### START CODE HERE ### (1 line of code) cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=Z3, labels=Y)) ### END CODE HERE ### return cost tf.reset_default_graph() with tf.Session() as sess: np.random.seed(1) X, Y = create_placeholders(64, 64, 3, 6) parameters = initialize_parameters() Z3 = forward_propagation(X, parameters) cost = compute_cost(Z3, Y) init = tf.global_variables_initializer() sess.run(init) a = sess.run(cost, {X: np.random.randn(4,64,64,3), Y: np.random.randn(4,6)}) print("cost = " + str(a)) ``` **Expected Output**: ``` cost = 2.91034 ``` ## 1.5 Model Finally you will merge the helper functions you implemented above to build a model. You will train it on the SIGNS dataset. **Exercise**: Complete the function below. The model below should: - create placeholders - initialize parameters - forward propagate - compute the cost - create an optimizer Finally you will create a session and run a for loop for num_epochs, get the mini-batches, and then for each mini-batch you will optimize the function. [Hint for initializing the variables](https://www.tensorflow.org/api_docs/python/tf/global_variables_initializer) #### Adam Optimizer You can use `tf.train.AdamOptimizer(learning_rate = ...)` to create the optimizer. The optimizer has a `minimize(loss=...)` function that you'll call to set the cost function that the optimizer will minimize. For details, check out the documentation for [Adam Optimizer](https://www.tensorflow.org/api_docs/python/tf/train/AdamOptimizer) #### Random mini batches If you took course 2 of the deep learning specialization, you implemented `random_mini_batches()` in the "Optimization" programming assignment. 
This function returns a list of mini-batches. It is already implemented in the `cnn_utils.py` file and imported here, so you can call it like this: ```Python minibatches = random_mini_batches(X, Y, mini_batch_size = 64, seed = 0) ``` (You will want to choose the correct variable names when you use it in your code). #### Evaluating the optimizer and cost Within a loop, for each mini-batch, you'll use the `tf.Session` object (named `sess`) to feed a mini-batch of inputs and labels into the neural network and evaluate the tensors for the optimizer as well as the cost. Remember that we built a graph data structure and need to feed it inputs and labels and use `sess.run()` in order to get values for the optimizer and cost. You'll use this kind of syntax: ``` output_for_var1, output_for_var2 = sess.run( fetches=[var1, var2], feed_dict={var_inputs: the_batch_of_inputs, var_labels: the_batch_of_labels} ) ``` * Notice that `sess.run` takes its first argument `fetches` as a list of objects that you want it to evaluate (in this case, we want to evaluate the optimizer and the cost). * It also takes a dictionary for the `feed_dict` parameter. * The keys are the `tf.placeholder` variables that we created in the `create_placeholders` function above. * The values are the variables holding the actual numpy arrays for each mini-batch. * The sess.run outputs a tuple of the evaluated tensors, in the same order as the list given to `fetches`. For more information on how to use sess.run, see the [tf.Session#run](https://www.tensorflow.org/api_docs/python/tf/Session#run) documentation. 
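`random_mini_batches()` works on numpy arrays, but the idea behind it can be sketched with plain Python lists: shuffle the examples and labels with the same permutation, then slice them into fixed-size chunks (the last chunk may be smaller). This is a rough sketch of the idea, not the helper's actual implementation:

```python
import random

def make_mini_batches(examples, labels, mini_batch_size=64, seed=0):
    """Shuffle examples and labels with the same permutation, then slice
    them into chunks of mini_batch_size (the last chunk may be smaller)."""
    order = list(range(len(examples)))
    random.Random(seed).shuffle(order)
    xs = [examples[i] for i in order]
    ys = [labels[i] for i in order]
    return [(xs[k:k + mini_batch_size], ys[k:k + mini_batch_size])
            for k in range(0, len(xs), mini_batch_size)]

batches = make_mini_batches(list(range(10)), list("abcdefghij"), mini_batch_size=4)
print([len(bx) for bx, _ in batches])   # [4, 4, 2]
```

Note that shuffling through a single `order` list keeps each example paired with its own label, which is why the real helper shuffles X and Y with one permutation as well.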
``` # GRADED FUNCTION: model def model(X_train, Y_train, X_test, Y_test, learning_rate = 0.009, num_epochs = 100, minibatch_size = 64, print_cost = True): """ Implements a three-layer ConvNet in Tensorflow: CONV2D -> RELU -> MAXPOOL -> CONV2D -> RELU -> MAXPOOL -> FLATTEN -> FULLYCONNECTED Arguments: X_train -- training set, of shape (None, 64, 64, 3) Y_train -- test set, of shape (None, n_y = 6) X_test -- training set, of shape (None, 64, 64, 3) Y_test -- test set, of shape (None, n_y = 6) learning_rate -- learning rate of the optimization num_epochs -- number of epochs of the optimization loop minibatch_size -- size of a minibatch print_cost -- True to print the cost every 100 epochs Returns: train_accuracy -- real number, accuracy on the train set (X_train) test_accuracy -- real number, testing accuracy on the test set (X_test) parameters -- parameters learnt by the model. They can then be used to predict. """ ops.reset_default_graph() # to be able to rerun the model without overwriting tf variables tf.set_random_seed(1) # to keep results consistent (tensorflow seed) seed = 3 # to keep results consistent (numpy seed) (m, n_H0, n_W0, n_C0) = X_train.shape n_y = Y_train.shape[1] costs = [] # To keep track of the cost # Create Placeholders of the correct shape ### START CODE HERE ### (1 line) X, Y = create_placeholders(n_H0, n_W0, n_C0, n_y) ### END CODE HERE ### # Initialize parameters ### START CODE HERE ### (1 line) parameters = initialize_parameters() ### END CODE HERE ### # Forward propagation: Build the forward propagation in the tensorflow graph ### START CODE HERE ### (1 line) Z3 = forward_propagation(X, parameters) ### END CODE HERE ### # Cost function: Add cost function to tensorflow graph ### START CODE HERE ### (1 line) cost = compute_cost(Z3, Y) ### END CODE HERE ### # Backpropagation: Define the tensorflow optimizer. Use an AdamOptimizer that minimizes the cost. 
### START CODE HERE ### (1 line) optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate).minimize(cost) ### END CODE HERE ### # Initialize all the variables globally init = tf.global_variables_initializer() # Start the session to compute the tensorflow graph with tf.Session() as sess: # Run the initialization sess.run(init) # Do the training loop for epoch in range(num_epochs): minibatch_cost = 0. num_minibatches = int(m / minibatch_size) # number of minibatches of size minibatch_size in the train set seed = seed + 1 minibatches = random_mini_batches(X_train, Y_train, minibatch_size, seed) for minibatch in minibatches: # Select a minibatch (minibatch_X, minibatch_Y) = minibatch """ # IMPORTANT: The line that runs the graph on a minibatch. # Run the session to execute the optimizer and the cost. # The feedict should contain a minibatch for (X,Y). """ ### START CODE HERE ### (1 line) _ , temp_cost = sess.run([optimizer, cost], feed_dict={X:minibatch_X, Y:minibatch_Y}) ### END CODE HERE ### minibatch_cost += temp_cost / num_minibatches # Print the cost every epoch if print_cost == True and epoch % 5 == 0: print ("Cost after epoch %i: %f" % (epoch, minibatch_cost)) if print_cost == True and epoch % 1 == 0: costs.append(minibatch_cost) # plot the cost plt.plot(np.squeeze(costs)) plt.ylabel('cost') plt.xlabel('iterations (per tens)') plt.title("Learning rate =" + str(learning_rate)) plt.show() # Calculate the correct predictions predict_op = tf.argmax(Z3, 1) correct_prediction = tf.equal(predict_op, tf.argmax(Y, 1)) # Calculate accuracy on the test set accuracy = tf.reduce_mean(tf.cast(correct_prediction, "float")) print(accuracy) train_accuracy = accuracy.eval({X: X_train, Y: Y_train}) test_accuracy = accuracy.eval({X: X_test, Y: Y_test}) print("Train Accuracy:", train_accuracy) print("Test Accuracy:", test_accuracy) return train_accuracy, test_accuracy, parameters ``` Run the following cell to train your model for 100 epochs. 
Check if your cost after epoch 0 and 5 matches our output. If not, stop the cell and go back to your code! ``` _, _, parameters = model(X_train, Y_train, X_test, Y_test) ``` **Expected output**: although it may not match perfectly, your expected output should be close to ours and your cost value should decrease. <table> <tr> <td> **Cost after epoch 0 =** </td> <td> 1.917929 </td> </tr> <tr> <td> **Cost after epoch 5 =** </td> <td> 1.506757 </td> </tr> <tr> <td> **Train Accuracy =** </td> <td> 0.940741 </td> </tr> <tr> <td> **Test Accuracy =** </td> <td> 0.783333 </td> </tr> </table> Congratulations! You have finished the assignment and built a model that recognizes SIGN language with almost 80% accuracy on the test set. If you wish, feel free to play around with this dataset further. You can actually improve its accuracy by spending more time tuning the hyperparameters, or using regularization (as this model clearly has a high variance). Once again, here's a thumbs up for your work! ``` fname = "images/thumbs_up.jpg" image = np.array(ndimage.imread(fname, flatten=False)) my_image = scipy.misc.imresize(image, size=(64,64)) plt.imshow(my_image) ```
# WeatherPy ---- #### Note * Instructions have been included for each segment. You do not have to follow them exactly, but they are included to help you think through the steps. ``` # Dependencies and Setup import matplotlib.pyplot as plt import pandas as pd import numpy as np import requests import time # Import API key from api_keys import api_key # Incorporated citipy to determine city based on latitude and longitude from citipy import citipy # Output File (CSV) output_data_file = "output_data/cities.csv" # Range of latitudes and longitudes lat_range = (-90, 90) lng_range = (-180, 180) ``` ## Generate Cities List ``` # List for holding lat_lngs and cities lat_lngs = [] cities = [] # Create a set of random lat and lng combinations lats = np.random.uniform(low=-90.000, high=90.000, size=1500) lngs = np.random.uniform(low=-180.000, high=180.000, size=1500) lat_lngs = zip(lats, lngs) # Identify nearest city for each lat, lng combination for lat_lng in lat_lngs: city = citipy.nearest_city(lat_lng[0], lat_lng[1]).city_name # If the city is unique, then add it to our cities list if city not in cities: cities.append(city) # Print the city count to confirm sufficient count len(cities) ``` ### Perform API Calls * Perform a weather check on each city using a series of successive API calls. * Include a print log of each city as it's being processed (with the city number and city name). 
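The "record X of set Y" print log asked for above can be driven by two counters: count up to 50 records per set, then roll over into the next set. A plain-Python sketch of that pattern with no API calls (the helper name and the 120-iteration loop are just for illustration):

```python
def next_log_position(record_counter, set_counter, per_set=50):
    """Advance the (record, set) counters: per_set records per set, then roll over."""
    if record_counter < per_set:
        return record_counter + 1, set_counter
    return 1, set_counter + 1

record_counter, set_counter = 0, 0
lines = []
for fake_city in range(120):   # stand-ins for real city names
    record_counter, set_counter = next_log_position(record_counter, set_counter)
    lines.append("Processing record {} of set {}".format(record_counter, set_counter))
print(lines[0])    # Processing record 1 of set 0
print(lines[50])   # Processing record 1 of set 1
```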
``` new_cities = [] cloudiness = [] country = [] date = [] humidity = [] temp = [] lat = [] lng = [] wind = [] record_counter = 0 set_counter = 0 # Starting URL for Weather Map API Call url = "http://api.openweathermap.org/data/2.5/weather?units=Imperial&APPID=" + api_key print('------------------------') print('Beginning Data Retrieval') print('------------------------') for city in cities: query_url = url + "&q=" + city # Get weather data response = requests.get(query_url).json() if record_counter < 50: record_counter += 1 else: set_counter += 1 record_counter = 1 print('Processing record {} of set {} | {}'.format(record_counter, set_counter, city)) try: cloudiness.append(response['clouds']['all']) country.append(response['sys']['country']) date.append(response['dt']) humidity.append(response['main']['humidity']) temp.append(response['main']['temp_max']) lat.append(response['coord']['lat']) lng.append(response['coord']['lon']) wind.append(response['wind']['speed']) new_cities.append(city) except: print("City not found!") pass print('-------------------------') print('Data Retrieval Complete') print('-------------------------') ``` ### Convert Raw Data to DataFrame * Export the city data into a .csv. * Display the DataFrame ``` # create a data frame from cities, temp, humidity, cloudiness and wind speed weather_dict = { "City": new_cities, "Cloudiness" : cloudiness, "Country" : country, "Date" : date, "Humidity" : humidity, "Temp": temp, "Lat" : lat, "Lng" : lng, "Wind Speed" : wind } weather_data = pd.DataFrame(weather_dict) weather_data.count() weather_data.head() ``` ### Plotting the Data * Use proper labeling of the plots using plot titles (including date of analysis) and axes labels. * Save the plotted figures as .pngs. #### Latitude vs. Temperature Plot ``` # Latitude vs. 
Temperature Plot analysis_date = time.strftime("%m/%d/%y") # one date string for all plot titles (the date list holds per-city timestamps) weather_data.plot(kind='scatter', x='Lat', y='Temp', c='DarkBlue') plt.title('City Latitude Vs Max Temperature ({})'.format(analysis_date) ) plt.xlabel('Latitude') plt.ylabel('Max temperature (F)') plt.grid() plt.savefig("../Images/LatitudeVsTemperature.png") ``` #### Latitude vs. Humidity Plot ``` # Latitude vs. Humidity Plot weather_data.plot(kind='scatter',x='Lat',y='Humidity', c='DarkBlue') plt.title('City Latitude Vs Humidity ({})'.format(analysis_date) ) plt.xlabel('Latitude') plt.ylabel('Humidity (%)') plt.grid() plt.savefig("../Images/LatitudeVsHumidity.png") ``` #### Latitude vs. Cloudiness Plot ``` # Latitude Vs Cloudiness Plot weather_data.plot(kind='scatter',x='Lat',y='Cloudiness', c='DarkBlue') plt.title('City Latitude Vs Cloudiness ({})'.format(analysis_date) ) plt.xlabel('Latitude') plt.ylabel('Cloudiness (%)') plt.grid() plt.savefig("../Images/LatitudeVsCloudiness.png") ``` #### Latitude vs. Wind Speed Plot ``` # Latitude Vs Wind Speed Plot weather_data.plot(kind='scatter',x='Lat',y='Wind Speed', c='DarkBlue') plt.title('City Latitude Vs Wind Speed ({})'.format(analysis_date) ) plt.xlabel('Latitude') plt.ylabel('Wind Speed (mph)') plt.grid() plt.savefig("../Images/LatitudeVsWindSpeed.png") ```
# 1-7.1 Intro Python ## `while()` loops & increments - **while `True` or forever loops** - **incrementing in loops** - Boolean operators in while loops ----- ><font size="5" color="#00A0B2" face="verdana"> <B>Student will be able to</B></font> - **create forever loops using `while` and `break`** - **use incrementing variables in a while loop** - control while loops using Boolean operators # &nbsp; <font size="6" color="#00A0B2" face="verdana"> <B>Concepts</B></font> ## `while True:` ### Using the 'while True:' loop [![view video](https://iajupyterprodblobs.blob.core.windows.net/imagecontainer/common/play_video.png)]( http://edxinteractivepage.blob.core.windows.net/edxpages/f7cff1a7-5601-48a1-95a6-fd1fdfabd20e.html?details=[{"src":"http://jupyternootbookwams.streaming.mediaservices.windows.net/f43862cd-7cdc-45a3-adb1-a07dcbd9ae16/Unit1_Section7.1-while-forever.ism/manifest","type":"application/vnd.ms-sstr+xml"}],[{"src":"http://jupyternootbookwams.streaming.mediaservices.windows.net/f43862cd-7cdc-45a3-adb1-a07dcbd9ae16/Unit1_Section7.1-while-forever.vtt","srclang":"en","kind":"subtitles","label":"english"}]) **`while True:`** is known as the **forever loop** because it ...loops forever Using the **`while True:`** statement results in a loop that continues to run forever ...or, until the loop is interrupted, such as with a **`break`** statement ## `break` ### in a `while` loop, causes code flow to exit the loop a **conditional** can implement **`break`** to exit a **`while True`** loop # &nbsp; <font size="6" color="#00A0B2" face="verdana"> <B>Examples</B></font> ## `while True` loops forever unless a `break` statement is used ``` # Review and run code # this example never loops because the break has no conditions while True: print('write forever, unless there is a "break"') break # [ ] review the NUMBER GUESS code then run - Q. what causes the break statement to run? 
number_guess = "0" secret_number = "5" while True: number_guess = input("guess the number 1 to 5: ") if number_guess == secret_number: print("Yes", number_guess,"is correct!\n") break else: print(number_guess,"is incorrect\n") # [ ] review WHAT TO WEAR code then run testing different inputs while True: weather = input("Enter weather (sunny, rainy, snowy, or quit): ") print() if weather.lower() == "sunny": print("Wear a t-shirt and sunscreen") break elif weather.lower() == "rainy": print("Bring an umbrella and boots") break elif weather.lower() == "snowy": print("Wear a warm coat and hat") break elif weather.lower().startswith("q"): print('"quit" detected, exiting') break else: print("Sorry, not sure what to suggest for", weather +"\n") ``` # &nbsp; <font size="6" color="#B24C00" face="verdana"> <B>Task 1</B></font> ## `while True` ### [ ] Program: Get a name forever ...or until done - create variable, familar_name, and assign it an empty string (**`""`**) - use **`while True:`** - ask for user input for familar_name (common name friends/family use) - keep asking until given a non-blank/non-space alphabetical name is received (Hint: Boolean string test) - break loop and print a greeting using familar_name ``` # [ ] create Get Name program familar_name="" while True: familar_name=input("Enter a familar, common, friend's, or family's name: ") if familar_name.isalpha(): break else: print("That is not a name without spaces") print(familar_name,"sounds like cool name to have") ``` # &nbsp; <font size="6" color="#00A0B2" face="verdana"> <B>Concept</B></font> ## Incrementing a variable [![view video](https://iajupyterprodblobs.blob.core.windows.net/imagecontainer/common/play_video.png)]( 
http://edxinteractivepage.blob.core.windows.net/edxpages/f7cff1a7-5601-48a1-95a6-fd1fdfabd20e.html?details=[{%22src%22:%22http://jupyternootbookwams.streaming.mediaservices.windows.net/cc7925d2-0652-4659-93fb-f4cc8d09ac51/Unit1_Section7.1-increment.ism/manifest%22,%22type%22:%22application/vnd.ms-sstr+xml%22}],[{%22src%22:%22http://jupyternootbookwams.streaming.mediaservices.windows.net/cc7925d2-0652-4659-93fb-f4cc8d09ac51/Unit1_Section7.1-increment.vtt%22,%22srclang%22:%22en%22,%22kind%22:%22subtitles%22,%22label%22:%22english%22}]) ## Incrementing ### `votes = votes + 1` &nbsp; &nbsp; or &nbsp; `votes += 1` ## Decrementing ### `votes = votes - 1` &nbsp; &nbsp; or &nbsp; `votes -= 1` # &nbsp; <font size="6" color="#00A0B2" face="verdana"> <B>Examples</B></font> ``` # [ ] review and run example votes = 3 print(votes) votes = votes + 1 print(votes) votes += 2 print(votes) print(votes) votes -= 1 print(votes) # [ ] review the SEAT COUNT code then run seat_count = 0 while True: print("seat count:",seat_count) seat_count = seat_count + 1 if seat_count > 4: break # [ ] review the SEAT TYPE COUNT code then run entering: hard, soft, medium and exit # initialize variables seat_count = 0 soft_seats = 0 hard_seats = 0 num_seats = 4 # loops tallying seats using soft pads vs hard, until seats full or user "exits" while True: seat_type = input('enter seat type of "hard","soft" or "exit" (to finish): ') if seat_type.lower().startswith("e"): print() break elif seat_type.lower() == "hard": hard_seats += 1 elif seat_type.lower() == "soft": soft_seats += 1 else: print("invalid entry: counted as hard") hard_seats += 1 seat_count += 1 if seat_count >= num_seats: print("\nseats are full") break print(seat_count,"Seats Total: ",hard_seats,"hard and",soft_seats,"soft" ) ``` # &nbsp; <font size="6" color="#B24C00" face="verdana"> <B>Task 2</B></font> ## incrementing in a `while()` loop ### Program: Shirt Count - enter a sizes (S, M, L) - tally the count of each size - input "exit" when 
finished - report out the purchase of each shirt size ``` # [ ] Create the Shirt Count program, run tests small_shirts=0 medium_shirts=0 large_shirts=0 while True: order=input("Enter the shirt size you wish to order(S,M,L,done): ").upper() if order=="S": small_shirts+=1 elif order=="M": medium_shirts+=1 elif order=="L": large_shirts+=1 elif order=="DONE": print("Thank you for your order :)") break else: print("Enter a valid shirt size") print("You ordered:\n",small_shirts,"Small Shirts\n",medium_shirts,"Medium Shirts\n",large_shirts,"Large shirts") ``` ### CHALLENGE: Shirt Register (optional) Update the **Shirt Count** program to calculate cost - use shirt cost (S = 6, M = 7, L = 8) - to calculate and report the subtotal cost for each size - to calculate and report the total cost of all shirts ``` # [ ] Create the Shirt Register program, run tests small_shirts=0 medium_shirts=0 large_shirts=0 small_cost=6 medium_cost=7 large_cost=8 while True: order=input("Enter the shirt size you wish to order(S,M,L,done): ").upper() if order=="S": small_shirts+=1 elif order=="M": medium_shirts+=1 elif order=="L": large_shirts+=1 elif order=="DONE": print("Thank you for your order :)") total_small=small_shirts*small_cost total_medium= medium_shirts*medium_cost total_large=large_shirts*large_cost total_cost=total_small+total_medium+total_large break else: print("Enter a valid shirt size") print("You ordered:\n",small_shirts,"Small Shirts which costs",total_small,"\n",medium_shirts,"Medium Shirts which cost",total_medium,"\n",large_shirts,"Large shirts which cost",total_large,"\n") print("Your total order price is",total_cost) ``` [Terms of use](http://go.microsoft.com/fwlink/?LinkID=206977) &nbsp; [Privacy & cookies](https://go.microsoft.com/fwlink/?LinkId=521839) &nbsp; © 2017 Microsoft
# Class Coding Lab: Functions

The goals of this lab are to help you to understand:

- How to use Python's built-in functions in the standard library.
- How to write user-defined functions.
- How to use other people's code.
- The benefits of user-defined functions for code reuse and simplicity.
- How to create a program that uses functions to solve a complex problem.

We will demonstrate these through the following example:

## The Credit Card Problem

If you're going to do commerce on the web, you're going to support credit cards. But how do you know if a given number is valid? And how do you know which network issued the card?

**Example:** Is `5300023581452982` a valid credit card number? Is it Visa? MasterCard? Discover? American Express?

While the card number is eventually validated when you attempt to post a transaction, there are many reasons why you might want to know it's valid before the transaction takes place. The most common is simply catching an honest key-entry mistake made by your site visitor.

So there are two things we'd like to figure out, for any "potential" card number:

- Who is the issuing network? Visa, MasterCard, Discover or American Express.
- Is the number potentially valid (as opposed to a made-up series of digits)?

### What does this have to do with functions?

If we get this code to work, it seems like it might be useful to re-use it in several other programs we may write in the future. We can do this by writing the code as a **function**. Think of a function as an independent program with its own inputs and output. The program is defined under a name so that we can use it simply by calling that name.

**Example:** in `n = int("50")`, the function `int()` takes the string `"50"` as input and converts it to an `int` value `50`, which is then stored in the variable `n`.

When you create these credit card functions, we might want to re-use them by placing them in a **Module**, which is a file containing a collection of functions.
Furthermore, we can take a group of related modules and place them together in a Python **Package**. You install packages on your computer with the `pip` command.

## Built-In Functions

Let's start by checking out the built-in functions in Python's math library. We use the `dir()` function to list the names in the math library:

```
import math
dir(math)
```

If you look through the output, you'll see a `factorial` name. Let's see if it's a function we can use:

```
help(math.factorial)
```

It says it's a built-in function, and requires an integer value (which it refers to as `x`, though that name is arbitrary) as an argument. Let's call the function and see if it works:

```
math.factorial(5) # this is an example of "calling" the function with input 5. The output should be 120
math.factorial(0) # here we call the same function with input 0. The output should be 1.
```

### 1.1 You Code

Call the factorial function with an input argument of 4. What is the output?

```
# TODO write code here.
math.factorial(4) # The output should be 24
```

## Using functions to print awesome things in Jupyter

Until this point we've used the boring `print()` function for our output. Let's do better. In the `IPython.display` module there are two functions, `display()` and `HTML()`. The `display()` function outputs a Python object to the Jupyter notebook. The `HTML()` function creates a Python object from [HTML Markup](https://www.w3schools.com/html/html_intro.asp) as a string.

For example this prints Hello in Heading 1.

```
from IPython.display import display, HTML
from ipywidgets import interact_manual

print("Exciting:")
display(HTML("<h1>He<font color='red'>ll</font>o</h1>"))
print("Boring:")
print("Hello")
```

Let's keep the example going by writing two of our own functions to print a title and print text as normal, respectively.
Execute this code: ``` def print_title(text): ''' This prints text to IPython.display as H1 ''' return display(HTML("<H1>" + text + "</H1>")) def print_normal(text): ''' this prints text to IPython.display as normal text ''' return display(HTML(text)) print_normal("Mike is cool!") ``` Now let's use these two functions in a familiar program, along with `interact_manual()` to make the inputs as awesome as the outputs! ``` print_title("Area of a Rectangle") @interact_manual(length=(0,25),width=(0,25)) def area(length, width): area = length * width print_normal(f"The area is {area}.") from ipywidgets import interact_manual @interact_manual(name="",age=(1,100,10),gpa=(0.0,4.0,0.05)) def doit(name, age, gpa): print(f"{name} is {age} years old and has a {gpa} GPA.") ``` ## Get Down with OPC (Other People's Code) Now that we know a bit about **Packages**, **Modules**, and **Functions** let me expand your horizons a bit. There's a whole world of Python code out there that you can use, and it's what makes Python the powerful and popular programming language that it is today. All you need to do to use it is *read*! For example. Let's say I want to print some emojis in Python. I might search the Python Package Index [https://pypi.org/](https://pypi.org/) for some modules to try. For example this one: https://pypi.org/project/emoji/ Let's take it for a spin! ### Installing with pip First we need to install the package with the `pip` utility. This runs from the command line, so to execute pip within our notebook we use the bang `!` operator. This downloads the package and installs it into your Python environment, so that you can `import` it. ``` !pip install emoji ``` Once the package is installed we can use it. Learning how to use it is just a matter of reading the documentation and trying things out. There are no short-cuts here! 
For example:

```
# TODO: Run this
import emoji
print(emoji.emojize('Python is :thumbs_up:'))
print(emoji.emojize('But I thought this :lab_coat: was supposed to be about :credit_card: ??'))
```

### 1.2 You Code

Write a python program to print the bacon, ice cream, and thumbs-down emojis on a single line.

```
## TODO: Write your code here
```

## Let's get back to credit cards....

Now that we know a bit about **Packages**, **Modules**, and **Functions** let's attempt to write our first function. Let's tackle the easier of our two credit card related problems:

- Who is the issuing network? Visa, MasterCard, Discover or American Express.

This problem can be solved by looking at the first digit of the card number:

- "4" ==> "Visa"
- "5" ==> "MasterCard"
- "6" ==> "Discover"
- "3" ==> "American Express"

So for card number `5300023581452982` the issuer is "MasterCard". It should be easy to write a program to solve this problem. Here's the algorithm:

```
input credit card number into variable card
get the first digit of the card number (eg. digit = card[0])
if digit equals "4" the card issuer is "Visa"
elif digit equals "5" the card issuer is "MasterCard"
elif digit equals "6" the card issuer is "Discover"
elif digit equals "3" the card issuer is "American Express"
else the issuer is "Invalid"
print issuer
```

### 1.3 You Code

Debug this code so that it prints the correct issuer based on the first card digit:

```
## TODO: Debug this code
card = input("Enter a credit card: ")
digit = card[0]
if digit == '4':
    issuer = "Visa"
elif digit == '5':
    issuer = "MasterCard"
elif digit == '6':
    issuer = "Discover"
elif digit == '3':
    issuer = "American Express"
else:
    issuer = "Invalid"
print(issuer)
```

**IMPORTANT** Make sure to test your code by running it 5 times. You should test every issuer and also the "Invalid Card" case.

## Introducing the Write - Refactor - Test - Rewrite approach

It would be nice to re-write this code to use a function.
This can seem daunting / confusing for beginner programmers, which is why we teach the **Write - Refactor - Test - Rewrite** approach. In this approach you write the ENTIRE PROGRAM and then REWRITE IT to use functions. Yes, it's inefficient, but until you get comfotable thinking "functions first" its the best way to modularize your code with functions. Here's the approach: 1. Write the code 2. Refactor (change the code around) to use a function 3. Test the function by calling it 4. Rewrite the original code to use the new function. We already did step 1: Write so let's move on to: ### 1.4 You Code: refactor Let's strip the logic out of the above code to accomplish the task of the function: - Send into the function as input a credit card number as a `str` - Return back from the function as output the issuer of the card as a `str` To help you out we've written the function stub for you all you need to do is write the function body code. ``` def CardIssuer(card): ## TODO write code here they should be the same as lines 3-13 from the code above #card = input("Enter a credit card: ") digit = card[0] if digit == '4': issuer = "Visa" elif digit == '5': issuer = "Mastercard" elif digit == '6': issuer = "Discover" elif digit == '3': issuer = "American Express" else: issuer = "Invalid" #print(issuer) # the last line in the function should return the output return issuer x = CardIssuer("4873495768923465") print(x) ``` ### Step 3: Test You wrote the function, but how do you know it works? The short answer is unless you write code to *test your function* you're simply guessing! Testing our function is as simple as calling the function with input values where WE KNOW WHAT TO EXPECT from the output. We then compare that to the ACTUAL value from the called function. If they are the same, then we know the function is working as expected! 
Here are some examples:

```
WHEN card='40123456789' We EXPECT CardIssuer(card) to return Visa
WHEN card='50123456789' We EXPECT CardIssuer(card) to return MasterCard
WHEN card='60123456789' We EXPECT CardIssuer(card) to return Discover
WHEN card='30123456789' We EXPECT CardIssuer(card) to return American Express
WHEN card='90123456789' We EXPECT CardIssuer(card) to return Invalid Card
```

### 1.5 You Code: Tests

Write the tests based on the examples:

```
# Testing the CardIssuer() function
print("WHEN card='40123456789' We EXPECT CardIssuer(card) = Visa ACTUAL", CardIssuer("40123456789"))
print("WHEN card='50123456789' We EXPECT CardIssuer(card) = MasterCard ACTUAL", CardIssuer("50123456789"))
## TODO: You write the remaining 3 tests, you can copy the lines and edit the values accordingly
```

### Step 4: Rewrite

The final step is to re-write the original program, but use the function instead. The algorithm becomes:

```
input credit card number into variable card
call the CardIssuer function with card as input, issuer as output
print issuer
```

### 1.6 You Code

Re-write the program. It should be 3 lines of code:

- input card
- call issuer function
- print issuer

```
# TODO Re-write the program here, calling our function.
```

## Functions are abstractions. Abstractions are good.

Step on the accelerator and the car goes. How does it work? Who cares, it's an abstraction! Functions are the same way. Don't believe me? Consider the Luhn Check Algorithm: https://en.wikipedia.org/wiki/Luhn_algorithm

This nifty little algorithm is used to verify that a sequence of digits is possibly a credit card number (as opposed to just a sequence of numbers). It uses a verification approach called a **checksum**: a formula over the digits determines the validity.
Here's the function which, given a card, will let you know if it passes the Luhn check:

```
# TODO: execute this code
def checkLuhn(card):
    '''This Luhn algorithm was adapted from the pseudocode here: https://en.wikipedia.org/wiki/Luhn_algorithm'''
    total = 0
    length = len(card)
    parity = length % 2
    for i in range(length):
        digit = int(card[i])
        if i % 2 == parity:
            digit = digit * 2
            if digit > 9:
                digit = digit - 9
        total = total + digit
    return total % 10 == 0

checkLuhn('4916945264045429')
```

### Is that a credit card number or the ramblings of a madman?

In order to test the `checkLuhn()` function you need some credit card numbers. (Don't look at me... you ain't gettin' mine!!!!) Not to worry, the internet has you covered. The website http://www.getcreditcardnumbers.com/ is not some mysterious site on the dark web. It's a site for generating "test" credit card numbers. You can't buy anything with these numbers, but they will pass the Luhn test.

Grab a couple of numbers and test the Luhn function as we did with the `CardIssuer()` function. Write at least two tests like these:

```
WHEN card='5443713204330437' We EXPECT checkLuhn(card) to return True
WHEN card='5111111111111111' We EXPECT checkLuhn(card) to return False
```

```
# TODO Write your two tests here
print("when card = 5443713204330437, Expect checkLuhn(card) = True, Actual =", checkLuhn('5443713204330437'))
print("when card = 5111111111111111, Expect checkLuhn(card) = False, Actual =", checkLuhn('5111111111111111'))
```

## Putting it all together

Finally let's use all the functions we wrote/used in this lab to make a really cool program to validate credit card numbers. Tools we will use:

- `interact_manual()` to transform the credit card input into a textbox
- `CardIssuer()` to see if the card is a Visa, MC, Discover, Amex.
- `checkLuhn()` to see if the card number passes the Luhn check
- `print_title()` to display the title
- `print_normal()` to display the output
- `emoji.emojize()` to draw a thumbs up (passed Luhn check) or thumbs down (did not pass Luhn check).

Here's the Algorithm:

```
print the title "credit card validator"
write an interact function with card as input
get the card issuer
if the card passes the Luhn check
    use thumbs up emoji
else
    use thumbs down emoji
print in normal text the emoji icon and the card issuer
```

### 1.7 You Code

```
## TODO Write code here
from IPython.display import display, HTML
from ipywidgets import interact_manual
```

##### Metacognition

### Rate your comfort level with this week's material so far.

**1** ==> I don't understand this at all yet and need extra help. If you choose this please try to articulate that which you do not understand to the best of your ability in the questions and comments section below.

**2** ==> I can do this with help or guidance from other people or resources. If you choose this level, please indicate HOW this person helped you in the questions and comments section below.

**3** ==> I can do this on my own without any help.

**4** ==> I can do this on my own and can explain/teach how to do it to others.

`--== Double-Click Here then Enter a Number 1 through 4 Below This Line ==--`

### Questions And Comments

Record any questions or comments you have about this lab that you would like to discuss in your recitation. It is expected you will have questions if you did not complete the code sections correctly. Learning how to articulate what you do not understand is an important skill of critical thinking. Write them down here so that you remember to ask them in your recitation. We expect you will take responsibility for your learning and ask questions in class.

`--== Double-click Here then Enter Your Questions Below this Line ==--`

```
# run this code to turn in your work!
from coursetools.submission import Submission
Submission().submit()
```
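As a recap of the lab's logic, here is a widget-free sketch of the validation flow, with the `CardIssuer()` and `checkLuhn()` functions re-stated so it is self-contained. The `validate()` helper and its message format are illustrative, not the official 1.7 solution, which should also use `interact_manual()`, `print_title()`, `print_normal()` and `emoji.emojize()` as described above.

```python
# Sketch of the card-validation logic from this lab (no widget layer).
def CardIssuer(card):
    """Map the first digit of the card number to its issuing network."""
    issuers = {"4": "Visa", "5": "MasterCard", "6": "Discover", "3": "American Express"}
    return issuers.get(card[0], "Invalid")

def checkLuhn(card):
    """Same Luhn checksum as the lab's checkLuhn(): double every second
    digit, subtract 9 when a doubled digit exceeds 9, sum, check mod 10."""
    total, parity = 0, len(card) % 2
    for i, ch in enumerate(card):
        digit = int(ch)
        if i % 2 == parity:
            digit *= 2
            if digit > 9:
                digit -= 9
        total += digit
    return total % 10 == 0

def validate(card):
    """Combine both checks into one human-readable verdict (hypothetical helper)."""
    verdict = "passes" if checkLuhn(card) else "fails"
    return f"{CardIssuer(card)} card, {verdict} the Luhn check"

print(validate("5443713204330437"))  # MasterCard card, passes the Luhn check
```

Keeping the logic in plain functions like this, separate from the widget code, also makes it easy to test with the expected/actual style used in sections 1.5 and above.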
```
#export
from local.imports import *
from local.test import *
from local.core import *
from local.layers import *
from local.data.pipeline import *
from local.data.source import *
from local.data.core import *
from local.data.external import *
from local.notebook.showdoc import show_doc
from local.optimizer import *
from local.learner import *
from local.callback.progress import *

#default_exp metrics
# default_cls_lvl 3
```

# Metrics

> Definition of the metrics that can be used in training models

## Core metric

This is where the function that converts scikit-learn metrics to fastai metrics is defined. You should skip this section unless you want to know all about the internals of fastai.

```
import sklearn.metrics as skm

#export core
def flatten_check(inp, targ, detach=True):
    "Check that `inp` and `targ` have the same number of elements and flatten them."
    inp,targ = to_detach(inp.contiguous().view(-1)),to_detach(targ.contiguous().view(-1))
    test_eq(len(inp), len(targ))
    return inp,targ

x1,x2 = torch.randn(5,4),torch.randn(20)
x1,x2 = flatten_check(x1,x2)
test_eq(x1.shape, [20])
test_eq(x2.shape, [20])
x1,x2 = torch.randn(5,4),torch.randn(21)
test_fail(lambda: flatten_check(x1,x2))

#export
class AccumMetric(Metric):
    "Stores predictions and targets on CPU in accumulate to perform final calculations with `func`."
    def __init__(self, func, dim_argmax=None, sigmoid=False, thresh=None, to_np=False, invert_arg=False, **kwargs):
        self.func,self.dim_argmax,self.sigmoid,self.thresh = func,dim_argmax,sigmoid,thresh
        self.to_np,self.invert_args,self.kwargs = to_np,invert_arg,kwargs

    def reset(self): self.targs,self.preds = [],[]

    def accumulate(self, learn):
        pred = learn.pred.argmax(dim=self.dim_argmax) if self.dim_argmax else learn.pred
        if self.sigmoid: pred = torch.sigmoid(pred)
        if self.thresh:  pred = (pred >= self.thresh)
        pred,targ = flatten_check(pred, learn.yb)
        self.preds.append(pred)
        self.targs.append(targ)

    @property
    def value(self):
        preds,targs = torch.cat(self.preds),torch.cat(self.targs)
        if self.to_np: preds,targs = preds.numpy(),targs.numpy()
        return self.func(targs, preds, **self.kwargs) if self.invert_args else self.func(preds, targs, **self.kwargs)
```

`func` is only applied to the accumulated predictions/targets when the `value` attribute is asked for (so at the end of a validation/training phase, when used with `Learner` and its `Recorder`). The signature of `func` should be `inp,targ` (where `inp` are the predictions of the model and `targ` the corresponding labels).

For classification problems with a single label, predictions need to be transformed with a softmax then an argmax before being compared to the targets. Since a softmax doesn't change the order of the numbers, we can just apply the argmax. Pass along `dim_argmax` to have this done by `AccumMetric` (usually -1 will work pretty well).

For classification problems with multiple labels, or if your targets are one-hot encoded, predictions may need to pass through a sigmoid (if it wasn't included in your model) then be compared to a given threshold (to decide between 0 and 1); this is done by `AccumMetric` if you pass `sigmoid=True` and/or a value for `thresh`.

If you want to use a metric function from `sklearn.metrics`, you will need to convert predictions and labels to numpy arrays with `to_np=True`.
Also, scikit-learn metrics adopt the convention `y_true`, `y_preds` which is the opposite from us, so you will need to pass `invert_arg=True` to make `AccumMetric` do the inversion for you. ``` #For testing: a fake learner and a metric that isn't an average class TstLearner(): def __init__(self): self.pred,self.yb = None,None def _l2_mean(x,y): return torch.sqrt((x-y).float().pow(2).mean()) #Go through a fake cycle with various batch sizes and computes the value of met def compute_val(met, x1, x2): met.reset() vals = [0,6,15,20] learn = TstLearner() for i in range(3): learn.pred,learn.yb = x1[vals[i]:vals[i+1]],x2[vals[i]:vals[i+1]] met.accumulate(learn) return met.value x1,x2 = torch.randn(20,5),torch.randn(20,5) tst = AccumMetric(_l2_mean) test_close(compute_val(tst, x1, x2), _l2_mean(x1, x2)) test_eq(torch.cat(tst.preds), x1.view(-1)) test_eq(torch.cat(tst.targs), x2.view(-1)) #test argmax x1,x2 = torch.randn(20,5),torch.randint(0, 5, (20,)) tst = AccumMetric(_l2_mean, dim_argmax=-1) test_close(compute_val(tst, x1, x2), _l2_mean(x1.argmax(dim=-1), x2)) #test thresh x1,x2 = torch.randn(20,5),torch.randint(0, 2, (20,5)).byte() tst = AccumMetric(_l2_mean, thresh=0.5) test_close(compute_val(tst, x1, x2), _l2_mean((x1 >= 0.5), x2)) #test sigmoid x1,x2 = torch.randn(20,5),torch.randn(20,5) tst = AccumMetric(_l2_mean, sigmoid=True) test_close(compute_val(tst, x1, x2), _l2_mean(torch.sigmoid(x1), x2)) #test to_np x1,x2 = torch.randn(20,5),torch.randn(20,5) tst = AccumMetric(lambda x,y: isinstance(x, np.ndarray) and isinstance(y, np.ndarray), to_np=True) assert compute_val(tst, x1, x2) #test invert_arg x1,x2 = torch.randn(20,5),torch.randn(20,5) tst = AccumMetric(lambda x,y: torch.sqrt(x.pow(2).mean())) test_close(compute_val(tst, x1, x2), torch.sqrt(x1.pow(2).mean())) tst = AccumMetric(lambda x,y: torch.sqrt(x.pow(2).mean()), invert_arg=True) test_close(compute_val(tst, x1, x2), torch.sqrt(x2.pow(2).mean())) #export def skm_to_fastai(func, is_class=True, thresh=None, 
                  axis=-1, sigmoid=None, **kwargs):
    "Convert `func` from sklearn.metrics to a fastai metric"
    dim_argmax = axis if is_class and thresh is None else None
    sigmoid = sigmoid if sigmoid is not None else (is_class and thresh is not None)
    return AccumMetric(func, dim_argmax=dim_argmax, sigmoid=sigmoid, thresh=thresh,
                       to_np=True, invert_arg=True, **kwargs)
```

This is the quickest way to use a scikit-learn metric in a fastai training loop. `is_class` indicates if you are in a classification problem or not. In this case:

- leaving `thresh` to `None` indicates it's a single-label classification problem and predictions will pass through an argmax over `axis` before being compared to the targets
- setting a value for `thresh` indicates it's a multi-label classification problem and predictions will pass through a sigmoid (can be deactivated with `sigmoid=False`) and be compared to `thresh` before being compared to the targets

If `is_class=False`, it indicates you are in a regression problem, and predictions are compared to the targets without being modified. In all cases, `kwargs` are extra keyword arguments passed to `func`.
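The argument inversion requested by `invert_arg=True` can be pictured with a plain-Python sketch (the `swap_args` wrapper and `diff` function here are illustrative, not part of the fastai API):

```python
# sklearn metrics expect (y_true, y_pred); the training loop naturally
# produces (predictions, targets). The inversion amounts to a wrapper
# that swaps its two positional arguments before calling the metric.
def swap_args(func):
    def wrapped(preds, targs, **kwargs):
        return func(targs, preds, **kwargs)  # call in (y_true, y_pred) order
    return wrapped

def diff(y_true, y_pred):
    return y_true - y_pred

metric = swap_args(diff)
print(metric(2, 5))  # diff(5, 2) -> 3
```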
``` tst_single = skm_to_fastai(skm.precision_score) x1,x2 = torch.randn(20,2),torch.randint(0, 2, (20,)) test_close(compute_val(tst_single, x1, x2), skm.precision_score(x2, x1.argmax(dim=-1))) tst_multi = skm_to_fastai(skm.precision_score, thresh=0.2) x1,x2 = torch.randn(20),torch.randint(0, 2, (20,)) test_close(compute_val(tst_multi, x1, x2), skm.precision_score(x2, torch.sigmoid(x1) >= 0.2)) tst_multi = skm_to_fastai(skm.precision_score, thresh=0.2, sigmoid=False) x1,x2 = torch.randn(20),torch.randint(0, 2, (20,)) test_close(compute_val(tst_multi, x1, x2), skm.precision_score(x2, x1 >= 0.2)) tst_reg = skm_to_fastai(skm.r2_score, is_class=False) x1,x2 = torch.randn(20,5),torch.randn(20,5) test_close(compute_val(tst_reg, x1, x2), skm.r2_score(x2.view(-1), x1.view(-1))) ``` ## Single-label classification > Warning: All functions defined in this section are intended for single-label classification and targets that aren't one-hot encoded. For multi-label problems or one-hot encoded targets, use the `_multi` version of them. 
``` #export def accuracy(inp, targ, axis=-1): "Compute accuracy with `targ` when `pred` is bs * n_classes" pred,targ = flatten_check(inp.argmax(dim=axis), targ) return (pred == targ).float().mean() #For testing def change_targ(targ, n, c): idx = torch.randperm(len(targ))[:n] res = targ.clone() for i in idx: res[i] = (res[i]+random.randint(1,c-1))%c return res x = torch.randn(4,5) y = x.argmax(dim=1) test_eq(accuracy(x,y), 1) y1 = change_targ(y, 2, 5) test_eq(accuracy(x,y1), 0.5) test_eq(accuracy(x.unsqueeze(1).expand(4,2,5), torch.stack([y,y1], dim=1)), 0.75) #export def error_rate(inp, targ, axis=-1): "1 - `accuracy`" return 1 - accuracy(inp, targ, axis=axis) x = torch.randn(4,5) y = x.argmax(dim=1) test_eq(error_rate(x,y), 0) y1 = change_targ(y, 2, 5) test_eq(error_rate(x,y1), 0.5) test_eq(error_rate(x.unsqueeze(1).expand(4,2,5), torch.stack([y,y1], dim=1)), 0.25) #export def top_k_accuracy(inp, targ, k=5, axis=-1): "Computes the Top-k accuracy (`targ` is in the top `k` predictions of `inp`)" inp = inp.topk(k=k, dim=axis)[1] targ = targ.unsqueeze(dim=axis).expand_as(inp) return (inp == targ).sum(dim=-1).float().mean() x = torch.randn(6,5) y = torch.arange(0,6) test_eq(top_k_accuracy(x[:5],y[:5]), 1) test_eq(top_k_accuracy(x, y), 5/6) #export def APScore(axis=-1, average='macro', pos_label=1, sample_weight=None): "Average Precision for single-label classification problems" return skm_to_fastai(skm.average_precision_score, axis=axis, average=average, pos_label=pos_label, sample_weight=sample_weight) ``` See the [scikit-learn documentation](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.average_precision_score.html#sklearn.metrics.average_precision_score) for more details. 
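To make the top-k logic above concrete without tensor operations, here is a pure-Python sketch (the function and variable names are illustrative, not part of the library):

```python
def top_k_accuracy_plain(scores, targets, k=5):
    """Fraction of rows whose target index appears among the k highest
    scores. Mirrors the idea of `top_k_accuracy` above, without torch."""
    hits = 0
    for row, target in zip(scores, targets):
        # indices of the k largest scores in this row
        topk = sorted(range(len(row)), key=lambda i: row[i], reverse=True)[:k]
        hits += target in topk
    return hits / len(targets)

scores = [[0.1, 0.7, 0.2],
          [0.5, 0.3, 0.2],
          [0.2, 0.2, 0.6]]
# targets 1 and 0 fall in their rows' top-2; target 2 does not -> 2/3
print(top_k_accuracy_plain(scores, [1, 2, 0], k=2))
```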
```
#export
def BalancedAccuracy(axis=-1, sample_weight=None, adjusted=False):
    "Balanced Accuracy for single-label binary classification problems"
    return skm_to_fastai(skm.balanced_accuracy_score, axis=axis,
                         sample_weight=sample_weight, adjusted=adjusted)
```

See the [scikit-learn documentation](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.balanced_accuracy_score.html#sklearn.metrics.balanced_accuracy_score) for more details.

```
#export
def BrierScore(axis=-1, sample_weight=None, pos_label=None):
    "Brier score for single-label classification problems"
    return skm_to_fastai(skm.brier_score_loss, axis=axis,
                         sample_weight=sample_weight, pos_label=pos_label)
```

See the [scikit-learn documentation](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.brier_score_loss.html#sklearn.metrics.brier_score_loss) for more details.

```
#export
def CohenKappa(axis=-1, labels=None, weights=None, sample_weight=None):
    "Cohen kappa for single-label classification problems"
    return skm_to_fastai(skm.cohen_kappa_score, axis=axis, labels=labels,
                         weights=weights, sample_weight=sample_weight)
```

See the [scikit-learn documentation](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.cohen_kappa_score.html#sklearn.metrics.cohen_kappa_score) for more details.

```
#export
def F1Score(axis=-1, labels=None, pos_label=1, average='binary', sample_weight=None):
    "F1 score for single-label classification problems"
    return skm_to_fastai(skm.f1_score, axis=axis, labels=labels, pos_label=pos_label,
                         average=average, sample_weight=sample_weight)
```

See the [scikit-learn documentation](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.f1_score.html#sklearn.metrics.f1_score) for more details.
```
#export
def FBeta(beta, axis=-1, labels=None, pos_label=1, average='binary', sample_weight=None):
    "FBeta score with `beta` for single-label classification problems"
    return skm_to_fastai(skm.fbeta_score, axis=axis, beta=beta, labels=labels,
                         pos_label=pos_label, average=average, sample_weight=sample_weight)
```

See the [scikit-learn documentation](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.fbeta_score.html#sklearn.metrics.fbeta_score) for more details.

```
#export
def HammingLoss(axis=-1, labels=None, sample_weight=None):
    "Hamming loss for single-label classification problems"
    return skm_to_fastai(skm.hamming_loss, axis=axis, labels=labels,
                         sample_weight=sample_weight)
```

See the [scikit-learn documentation](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.hamming_loss.html#sklearn.metrics.hamming_loss) for more details.

```
#export
def Jaccard(axis=-1, labels=None, pos_label=1, average='binary', sample_weight=None):
    "Jaccard score for single-label classification problems"
    return skm_to_fastai(skm.jaccard_similarity_score, axis=axis, labels=labels,
                         pos_label=pos_label, average=average, sample_weight=sample_weight)
```

See the [scikit-learn documentation](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.jaccard_score.html#sklearn.metrics.jaccard_score) for more details.

```
#export
def MatthewsCorrCoef(axis=-1, sample_weight=None):
    "Matthews correlation coefficient for single-label binary classification problems"
    return skm_to_fastai(skm.matthews_corrcoef, axis=axis, sample_weight=sample_weight)
```

See the [scikit-learn documentation](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.matthews_corrcoef.html#sklearn.metrics.matthews_corrcoef) for more details.
```
#export
def Precision(axis=-1, labels=None, pos_label=1, average='binary', sample_weight=None):
    "Precision for single-label classification problems"
    return skm_to_fastai(skm.precision_score, axis=axis, labels=labels, pos_label=pos_label,
                         average=average, sample_weight=sample_weight)
```

See the [scikit-learn documentation](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.precision_score.html#sklearn.metrics.precision_score) for more details.

```
#export
def Recall(axis=-1, labels=None, pos_label=1, average='binary', sample_weight=None):
    "Recall for single-label classification problems"
    return skm_to_fastai(skm.recall_score, axis=axis, labels=labels, pos_label=pos_label,
                         average=average, sample_weight=sample_weight)
```

See the [scikit-learn documentation](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.recall_score.html#sklearn.metrics.recall_score) for more details.

```
#export
def RocAuc(axis=-1, average='macro', sample_weight=None, max_fpr=None):
    "Area Under the Receiver Operating Characteristic Curve for single-label binary classification problems"
    return skm_to_fastai(skm.roc_auc_score, axis=axis, average=average,
                         sample_weight=sample_weight, max_fpr=max_fpr)
```

See the [scikit-learn documentation](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.roc_auc_score.html#sklearn.metrics.roc_auc_score) for more details.
``` #export class Perplexity(AvgLoss): "Perplexity (exponential of cross-entropy loss) for Language Models" @property def value(self): return torch.exp(self.total/self.count) if self.count != 0 else None @property def name(self): return "perplexity" perplexity = Perplexity() x1,x2 = torch.randn(20,5),torch.randint(0, 5, (20,)) tst = perplexity tst.reset() vals = [0,6,15,20] learn = TstLearner() for i in range(3): learn.yb = x2[vals[i]:vals[i+1]] learn.loss = F.cross_entropy(x1[vals[i]:vals[i+1]],x2[vals[i]:vals[i+1]]) tst.accumulate(learn) test_close(tst.value, torch.exp(F.cross_entropy(x1,x2))) ``` ## Multi-label classification ``` #export def accuracy_multi(inp, targ, thresh=0.5, sigmoid=True): "Compute accuracy when `inp` and `targ` are the same size." inp,targ = flatten_check(inp,targ) if sigmoid: inp = inp.sigmoid() return ((inp>thresh)==targ.byte()).float().mean() #For testing def change_1h_targ(targ, n): idx = torch.randperm(targ.numel())[:n] res = targ.clone().view(-1) for i in idx: res[i] = 1-res[i] return res.view(targ.shape) x = torch.randn(4,5) y = torch.sigmoid(x) >= 0.5 test_eq(accuracy_multi(x,y), 1) test_eq(accuracy_multi(x,1-y), 0) y1 = change_1h_targ(y, 5) test_eq(accuracy_multi(x,y1), 0.75) #Different thresh y = torch.sigmoid(x) >= 0.2 test_eq(accuracy_multi(x,y, thresh=0.2), 1) test_eq(accuracy_multi(x,1-y, thresh=0.2), 0) y1 = change_1h_targ(y, 5) test_eq(accuracy_multi(x,y1, thresh=0.2), 0.75) #No sigmoid y = x >= 0.5 test_eq(accuracy_multi(x,y, sigmoid=False), 1) test_eq(accuracy_multi(x,1-y, sigmoid=False), 0) y1 = change_1h_targ(y, 5) test_eq(accuracy_multi(x,y1, sigmoid=False), 0.75) #export def APScoreMulti(thresh=0.5, sigmoid=True, average='macro', pos_label=1, sample_weight=None): "Average Precision for multi-label classification problems" return skm_to_fastai(skm.average_precision_score, thresh=thresh, sigmoid=sigmoid, average=average, pos_label=pos_label, sample_weight=sample_weight) ``` See the [scikit-learn 
documentation](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.average_precision_score.html#sklearn.metrics.average_precision_score) for more details. ``` #export def BrierScoreMulti(thresh=0.5, sigmoid=True, sample_weight=None, pos_label=None): "Brier score for multi-label classification problems" return skm_to_fastai(skm.brier_score_loss, thresh=thresh, sigmoid=sigmoid, sample_weight=sample_weight, pos_label=pos_label) ``` See the [scikit-learn documentation](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.brier_score_loss.html#sklearn.metrics.brier_score_loss) for more details. ``` #export def F1ScoreMulti(thresh=0.5, sigmoid=True, labels=None, pos_label=1, average='binary', sample_weight=None): "F1 score for multi-label classification problems" return skm_to_fastai(skm.f1_score, thresh=thresh, sigmoid=sigmoid, labels=labels, pos_label=pos_label, average=average, sample_weight=sample_weight) ``` See the [scikit-learn documentation](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.f1_score.html#sklearn.metrics.f1_score) for more details. ``` #export def FBetaMulti(beta, thresh=0.5, sigmoid=True, labels=None, pos_label=1, average='binary', sample_weight=None): "FBeta score with `beta` for multi-label classification problems" return skm_to_fastai(skm.fbeta_score, thresh=thresh, sigmoid=sigmoid, beta=beta, labels=labels, pos_label=pos_label, average=average, sample_weight=sample_weight) ``` See the [scikit-learn documentation](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.fbeta_score.html#sklearn.metrics.fbeta_score) for more details. 
``` #export def HammingLossMulti(thresh=0.5, sigmoid=True, labels=None, sample_weight=None): "Hamming loss for multi-label classification problems" return skm_to_fastai(skm.hamming_loss, thresh=thresh, sigmoid=sigmoid, labels=labels, sample_weight=sample_weight) ``` See the [scikit-learn documentation](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.hamming_loss.html#sklearn.metrics.hamming_loss) for more details. ``` #export def JaccardMulti(thresh=0.5, sigmoid=True, labels=None, pos_label=1, average='binary', sample_weight=None): "Jaccard score for multi-label classification problems" return skm_to_fastai(skm.jaccard_score, thresh=thresh, sigmoid=sigmoid, labels=labels, pos_label=pos_label, average=average, sample_weight=sample_weight) ``` See the [scikit-learn documentation](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.jaccard_score.html#sklearn.metrics.jaccard_score) for more details. ``` #export def MatthewsCorrCoefMulti(thresh=0.5, sigmoid=True, sample_weight=None): "Matthews correlation coefficient for multi-label classification problems" return skm_to_fastai(skm.matthews_corrcoef, thresh=thresh, sigmoid=sigmoid, sample_weight=sample_weight) ``` See the [scikit-learn documentation](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.matthews_corrcoef.html#sklearn.metrics.matthews_corrcoef) for more details. ``` #export def PrecisionMulti(thresh=0.5, sigmoid=True, labels=None, pos_label=1, average='binary', sample_weight=None): "Precision for multi-label classification problems" return skm_to_fastai(skm.precision_score, thresh=thresh, sigmoid=sigmoid, labels=labels, pos_label=pos_label, average=average, sample_weight=sample_weight) ``` See the [scikit-learn documentation](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.precision_score.html#sklearn.metrics.precision_score) for more details. 
``` #export def RecallMulti(thresh=0.5, sigmoid=True, labels=None, pos_label=1, average='binary', sample_weight=None): "Recall for multi-label classification problems" return skm_to_fastai(skm.recall_score, thresh=thresh, sigmoid=sigmoid, labels=labels, pos_label=pos_label, average=average, sample_weight=sample_weight) ``` See the [scikit-learn documentation](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.recall_score.html#sklearn.metrics.recall_score) for more details. ``` #export def RocAucMulti(thresh=0.5, sigmoid=True, average='macro', sample_weight=None, max_fpr=None): "Area Under the Receiver Operating Characteristic Curve for multi-label binary classification problems" return skm_to_fastai(skm.roc_auc_score, thresh=thresh, sigmoid=sigmoid, average=average, sample_weight=sample_weight, max_fpr=max_fpr) ``` See the [scikit-learn documentation](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.roc_auc_score.html#sklearn.metrics.roc_auc_score) for more details. ## Regression ``` #export def mse(inp,targ): "Mean squared error between `inp` and `targ`." return F.mse_loss(*flatten_check(inp,targ)) x1,x2 = torch.randn(4,5),torch.randn(4,5) test_close(mse(x1,x2), (x1-x2).pow(2).mean()) #export def _rmse(inp, targ): return torch.sqrt(F.mse_loss(inp, targ)) rmse = AccumMetric(_rmse) rmse.__doc__ = "Root mean squared error" show_doc(rmse, name="rmse") x1,x2 = torch.randn(20,5),torch.randn(20,5) test_eq(compute_val(rmse, x1, x2), torch.sqrt(F.mse_loss(x1,x2))) #export def mae(inp,targ): "Mean absolute error between `inp` and `targ`." inp,targ = flatten_check(inp,targ) return torch.abs(inp - targ).mean() x1,x2 = torch.randn(4,5),torch.randn(4,5) test_eq(mae(x1,x2), torch.abs(x1-x2).mean()) #export def msle(inp, targ): "Mean squared logarithmic error between `inp` and `targ`." 
inp,targ = flatten_check(inp,targ) return F.mse_loss(torch.log(1 + inp), torch.log(1 + targ)) x1,x2 = torch.randn(4,5),torch.randn(4,5) x1,x2 = torch.relu(x1),torch.relu(x2) test_close(msle(x1,x2), (torch.log(x1+1)-torch.log(x2+1)).pow(2).mean()) #export def _exp_rmspe(inp,targ): inp,targ = torch.exp(inp),torch.exp(targ) return torch.sqrt(((targ - inp)/targ).pow(2).mean()) exp_rmspe = AccumMetric(_exp_rmspe) exp_rmspe.__doc__ = "Root mean square percentage error of the exponential of predictions and targets" show_doc(exp_rmspe, name="exp_rmspe") x1,x2 = torch.randn(20,5),torch.randn(20,5) test_eq(compute_val(exp_rmspe, x1, x2), torch.sqrt((((torch.exp(x2) - torch.exp(x1))/torch.exp(x2))**2).mean())) #export def ExplainedVariance(sample_weight=None): "Explained variance between predictions and targets" return skm_to_fastai(skm.explained_variance_score, is_class=False, sample_weight=sample_weight) ``` See the [scikit-learn documentation](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.explained_variance_score.html#sklearn.metrics.explained_variance_score) for more details. ``` #export def R2Score(sample_weight=None): "R2 score between predictions and targets" return skm_to_fastai(skm.r2_score, is_class=False, sample_weight=sample_weight) ``` See the [scikit-learn documentation](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.r2_score.html#sklearn.metrics.r2_score) for more details. 
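The regression metrics above are related in simple ways: `rmse` is the square root of `mse`, and `msle` is just `mse` applied to `log(1 + x)` of predictions and targets. A tiny worked example in plain Python with toy numbers:

```python
import math

preds = [1.0, 2.0, 4.0]
targs = [1.0, 3.0, 3.0]
n = len(preds)

mse_val  = sum((p - t) ** 2 for p, t in zip(preds, targs)) / n   # (0 + 1 + 1)/3 = 2/3
rmse_val = math.sqrt(mse_val)                                    # sqrt(2/3)
mae_val  = sum(abs(p - t) for p, t in zip(preds, targs)) / n     # (0 + 1 + 1)/3 = 2/3
msle_val = sum((math.log1p(p) - math.log1p(t)) ** 2
               for p, t in zip(preds, targs)) / n                # mse on log(1 + x)

print(mse_val, rmse_val, mae_val, msle_val)
```

Note that `msle` penalizes relative errors: a miss of 1 on a target of 3 costs less under `msle` than the same absolute miss on a smaller target would.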
## Segmentation ``` #export def foreground_acc(inp, targ, bkg_idx=0, axis=1): "Computes non-background accuracy for multiclass segmentation" targ = targ.squeeze(1) mask = targ != bkg_idx return (inp.argmax(dim=axis)[mask]==targ[mask]).float().mean() x = torch.randn(4,5,3,3) y = x.argmax(dim=1)[:,None] test_eq(foreground_acc(x,y), 1) y[0] = 0 #the 0s are ignored so we get the same value test_eq(foreground_acc(x,y), 1) #export class Dice(Metric): "Dice coefficient metric for binary target in segmentation" def __init__(self, axis=1): self.axis = axis def reset(self): self.inter,self.union = 0,0 def accumulate(self, learn): pred,targ = flatten_check(learn.pred.argmax(dim=self.axis), learn.yb) self.inter += (pred*targ).float().sum().item() self.union += (pred+targ).float().sum().item() @property def value(self): return 2. * self.inter/self.union if self.union > 0 else None x1 = torch.randn(20,2,3,3) x2 = torch.randint(0, 2, (20, 3, 3)) pred = x1.argmax(1) inter = (pred*x2).float().sum().item() union = (pred+x2).float().sum().item() test_eq(compute_val(Dice(), x1, x2), 2*inter/union) #export class JaccardCoeff(Dice): "Implementation of the Jaccard coefficient that is lighter in RAM" @property def value(self): return self.inter/(self.union-self.inter) if self.union > 0 else None x1 = torch.randn(20,2,3,3) x2 = torch.randint(0, 2, (20, 3, 3)) pred = x1.argmax(1) inter = (pred*x2).float().sum().item() union = (pred+x2).float().sum().item() test_eq(compute_val(JaccardCoeff(), x1, x2), inter/(union-inter)) ``` ## Export - ``` #hide from local.notebook.export import notebook2script notebook2script(all_fs=True) ```
github_jupyter
### Deep Kung-Fu with advantage actor-critic In this notebook you'll build a deep reinforcement learning agent for Atari [KungFuMaster](https://gym.openai.com/envs/KungFuMaster-v0/) and train it with advantage actor-critic. ![http://www.retroland.com/wp-content/uploads/2011/07/King-Fu-Master.jpg](http://www.retroland.com/wp-content/uploads/2011/07/King-Fu-Master.jpg) ``` from __future__ import print_function, division from IPython.core import display import matplotlib.pyplot as plt %matplotlib inline import numpy as np #If you are running on a server (no DISPLAY set), launch xvfb to record game videos #Please make sure you have xvfb installed import os if type(os.environ.get("DISPLAY")) is not str or len(os.environ.get("DISPLAY")) == 0: !bash ../xvfb start %env DISPLAY=:1 ``` For starters, let's take a look at the game itself: * Image resized to 42x42 and grayscale to run faster * Rewards divided by 100 'cuz they are all divisible by 100 * Agent sees last 4 frames of game to account for object velocity ``` import gym from atari_util import PreprocessAtari # We scale rewards to avoid exploding gradients during optimization. 
reward_scale = 0.01 def make_env(): env = gym.make("KungFuMasterDeterministic-v0") env = PreprocessAtari( env, height=42, width=42, crop=lambda img: img[60:-30, 5:], dim_order='tensorflow', color=False, n_frames=4, reward_scale=reward_scale) return env env = make_env() obs_shape = env.observation_space.shape n_actions = env.action_space.n print("Observation shape:", obs_shape) print("Num actions:", n_actions) print("Action names:", env.env.env.get_action_meanings()) s = env.reset() for _ in range(100): s, _, _, _ = env.step(env.action_space.sample()) plt.title('Game image') plt.imshow(env.render('rgb_array')) plt.show() plt.title('Agent observation (4-frame buffer)') plt.imshow(s.transpose([0,2,1]).reshape([42,-1])) plt.show() ``` ### Build an agent We now have to build an agent for actor-critic training - a convolutional neural network that converts states into action probabilities $\pi$ and state values $V$. Your assignment here is to build and apply a neural network - with any framework you want. For starters, we want you to implement this architecture: ![https://s17.postimg.org/orswlfzcv/nnet_arch.png](https://s17.postimg.org/orswlfzcv/nnet_arch.png) After your agent gets mean reward above 50, we encourage you to experiment with model architecture to score even better. ``` import tensorflow as tf tf.reset_default_graph() sess = tf.InteractiveSession() from keras.layers import Conv2D, Dense, Flatten class Agent: def __init__(self, name, state_shape, n_actions, reuse=False): """A simple actor-critic agent""" with tf.variable_scope(name, reuse=reuse): # Prepare neural network architecture ### Your code here: prepare any necessary layers, variables, etc. 
# prepare a graph for agent step self.state_t = tf.placeholder('float32', [None,] + list(state_shape)) self.agent_outputs = self.symbolic_step(self.state_t) def symbolic_step(self, state_t): """Takes agent's previous step and observation, returns next state and whatever it needs to learn (tf tensors)""" # Apply neural network ### Your code here: apply agent's neural network to get policy logits and state values. logits = <logits go here> state_value = <state values go here> assert tf.is_numeric_tensor(state_value) and state_value.shape.ndims == 1, \ "please return 1D tf tensor of state values [you got %s]" % repr(state_value) assert tf.is_numeric_tensor(logits) and logits.shape.ndims == 2, \ "please return 2d tf tensor of logits [you got %s]" % repr(logits) # hint: if you triggered state_values assert with your shape being [None, 1], # just select [:, 0]-th element of state values as new state values return (logits, state_value) def step(self, state_t): """Same as symbolic step except it operates on numpy arrays""" sess = tf.get_default_session() return sess.run(self.agent_outputs, {self.state_t: state_t}) def sample_actions(self, agent_outputs): """pick actions given numeric agent outputs (np arrays)""" logits, state_values = agent_outputs policy = np.exp(logits) / np.sum(np.exp(logits), axis=-1, keepdims=True) return np.array([np.random.choice(len(p), p=p) for p in policy]) agent = Agent("agent", obs_shape, n_actions) sess.run(tf.global_variables_initializer()) state = [env.reset()] logits, value = agent.step(state) print("action logits:\n", logits) print("state values:\n", value) ``` ### Let's play! Let's build a function that measures agent's average reward. 
``` def evaluate(agent, env, n_games=1): """Plays a game from start till done, returns per-game rewards """ game_rewards = [] for _ in range(n_games): state = env.reset() total_reward = 0 while True: action = agent.sample_actions(agent.step([state]))[0] state, reward, done, info = env.step(action) total_reward += reward if done: break # We rescale the reward back to ensure compatibility # with other evaluations. game_rewards.append(total_reward / reward_scale) return game_rewards env_monitor = gym.wrappers.Monitor(env, directory="kungfu_videos", force=True) rw = evaluate(agent, env_monitor, n_games=3,) env_monitor.close() print (rw) #show video import os from IPython.display import HTML video_names = [s for s in os.listdir("./kungfu_videos/") if s.endswith(".mp4")] HTML(""" <video width="640" height="480" controls> <source src="{}" type="video/mp4"> </video> """.format("./kungfu_videos/" + video_names[-1])) #this may or may not be _last_ video. Try other indices ``` ### Training on parallel games ![img](https://s7.postimg.org/4y36s2b2z/env_pool.png) To make actor-critic training more stable, we shall play several games in parallel. This means y'all have to initialize several parallel gym envs, send agent's actions there and .reset() each env if it becomes terminated. To minimize learner brain damage, we've taken care of them for ya - just make sure you read it before you use it. 
``` class EnvBatch: def __init__(self, n_envs = 10): """ Creates n_envs environments and babysits them for ya' """ self.envs = [make_env() for _ in range(n_envs)] def reset(self): """ Reset all games and return [n_envs, *obs_shape] observations """ return np.array([env.reset() for env in self.envs]) def step(self, actions): """ Send a vector[batch_size] of actions into respective environments :returns: observations[n_envs, *obs_shape], rewards[n_envs], done[n_envs,], info[n_envs] """ results = [env.step(a) for env, a in zip(self.envs, actions)] new_obs, rewards, done, infos = map(np.array, zip(*results)) # reset environments automatically for i in range(len(self.envs)): if done[i]: new_obs[i] = self.envs[i].reset() return new_obs, rewards, done, infos ``` __Let's try it out:__ ``` env_batch = EnvBatch(10) batch_states = env_batch.reset() batch_actions = agent.sample_actions(agent.step(batch_states)) batch_next_states, batch_rewards, batch_done, _ = env_batch.step(batch_actions) print("State shape:", batch_states.shape) print("Actions:", batch_actions[:3]) print("Rewards:", batch_rewards[:3]) print("Done:", batch_done[:3]) ``` # Actor-critic Here we define the loss functions and learning algorithm as usual. 
``` # These placeholders mean exactly the same as in "Let's try it out" section above states_ph = tf.placeholder('float32', [None,] + list(obs_shape)) next_states_ph = tf.placeholder('float32', [None,] + list(obs_shape)) actions_ph = tf.placeholder('int32', (None,)) rewards_ph = tf.placeholder('float32', (None,)) is_done_ph = tf.placeholder('float32', (None,)) # logits[n_envs, n_actions] and state_values[n_envs] logits, state_values = agent.symbolic_step(states_ph) next_logits, next_state_values = agent.symbolic_step(next_states_ph) next_state_values = next_state_values * (1 - is_done_ph) # probabilities and log-probabilities for all actions probs = tf.nn.softmax(logits) # [n_envs, n_actions] logprobs = tf.nn.log_softmax(logits) # [n_envs, n_actions] # log-probabilities only for agent's chosen actions logp_actions = tf.reduce_sum(logprobs * tf.one_hot(actions_ph, n_actions), axis=-1) # [n_envs,] # compute advantage using rewards_ph, state_values and next_state_values gamma = 0.99 advantage = <YOUR CODE> assert advantage.shape.ndims == 1, "please compute advantage for each sample, vector of shape [n_envs,]" # compute policy entropy given logits_seq. Mind the "-" sign! entropy = <YOUR CODE> assert entropy.shape.ndims == 1, "please compute pointwise entropy vector of shape [n_envs,] " actor_loss = - tf.reduce_mean(logp_actions * tf.stop_gradient(advantage)) - 0.001 * tf.reduce_mean(entropy) # compute target state values using temporal difference formula. Use rewards_ph and next_state_values target_state_values = <YOUR CODE> critic_loss = tf.reduce_mean((state_values - tf.stop_gradient(target_state_values))**2 ) train_step = tf.train.AdamOptimizer(1e-4).minimize(actor_loss + critic_loss) sess.run(tf.global_variables_initializer()) # Sanity checks to catch some errors. Specific to KungFuMaster in assignment's default setup. 
l_act, l_crit, adv, ent = sess.run([actor_loss, critic_loss, advantage, entropy], feed_dict = { states_ph: batch_states, actions_ph: batch_actions, next_states_ph: batch_states, rewards_ph: batch_rewards, is_done_ph: batch_done, }) assert abs(l_act) < 100 and abs(l_crit) < 100, "losses seem abnormally large" assert 0 <= ent.mean() <= np.log(n_actions), "impossible entropy value, double-check the formula pls" if ent.mean() < np.log(n_actions) / 2: print("Entropy is too low for untrained agent") print("You just might be fine!") ``` # Train Just the usual - play a bit, compute loss, follow the gradients, repeat a few million times. ![img](http://images6.fanpop.com/image/photos/38900000/Daniel-san-training-the-karate-kid-38947361-499-288.gif) ``` from IPython.display import clear_output from tqdm import trange from pandas import DataFrame ewma = lambda x, span=100: DataFrame({'x':np.asarray(x)}).x.ewm(span=span).mean().values env_batch = EnvBatch(10) batch_states = env_batch.reset() rewards_history = [] entropy_history = [] for i in trange(100000): batch_actions = agent.sample_actions(agent.step(batch_states)) batch_next_states, batch_rewards, batch_done, _ = env_batch.step(batch_actions) feed_dict = { states_ph: batch_states, actions_ph: batch_actions, next_states_ph: batch_next_states, rewards_ph: batch_rewards, is_done_ph: batch_done, } batch_states = batch_next_states _, ent_t = sess.run([train_step, entropy], feed_dict) entropy_history.append(np.mean(ent_t)) if i % 500 == 0: if i % 2500 == 0: rewards_history.append(np.mean(evaluate(agent, env, n_games=3))) if rewards_history[-1] >= 50: print("Your agent has earned the yellow belt") clear_output(True) plt.figure(figsize=[8, 4]) plt.subplot(1, 2, 1) plt.plot(rewards_history, label='rewards') plt.plot(ewma(np.array(rewards_history), span=10), marker='.', label='rewards ewma@10') plt.title("Session rewards") plt.grid() plt.legend() plt.subplot(1, 2, 2) plt.plot(entropy_history, label='entropy') 
plt.plot(ewma(np.array(entropy_history), span=1000), label='entropy ewma@1000') plt.title("Policy entropy") plt.grid() plt.legend() plt.show() ``` Relax and grab some refreshments while your agent is locked in an infinite loop of violence and death. __How to interpret plots:__ The session reward is the easy thing: it should in general go up over time, but it's okay if it fluctuates ~~like crazy~~. It's also OK if the reward doesn't increase substantially before some 10k initial steps. However, if the reward reaches zero and doesn't seem to get up over 2-3 evaluations, something is going wrong. Since we use a policy-based method, we also keep track of __policy entropy__ - the same one you used as a regularizer. The only important thing about it is that your entropy shouldn't drop too low (`< 0.1`) before your agent gets the yellow belt. Or at least it can drop there, but _it shouldn't stay there for long_. If it does, the culprit is likely: * Some bug in entropy computation. Remember that it is $-\sum_i p(a_i) \cdot \log p(a_i)$ * Your agent architecture converges too fast. Increase the entropy coefficient in the actor loss. * Gradient explosion - just [clip gradients](https://stackoverflow.com/a/43486487) and maybe use a smaller network * Us. Or TF developers. Or aliens. Or lizardfolk. Contact us on forums before it's too late! If you're debugging, just run `logits, values = agent.step(batch_states)` and manually look into logits and values. This will reveal the problem 9 times out of 10: you'll likely see some NaNs or insanely large numbers or zeros. Try to catch the moment when this happens for the first time and investigate from there. 
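Since the debugging advice above leans on these formulas, here is one plain reading of the three `<YOUR CODE>` blanks in the actor-critic cell — a sketch of the textbook definitions on scalar toy numbers in pure Python, not the exact tensor expressions the grader expects (in the TF graph, `next_state_values` is already zeroed for terminal states, which this sketch assumes too):

```python
import math

gamma = 0.99

def softmax(logits):
    # numerically stable softmax over a list of logits
    m = max(logits)
    exps = [math.exp(l - m) for l in logits]
    z = sum(exps)
    return [e / z for e in exps]

def policy_entropy(logits):
    # H = -sum_i p(a_i) * log p(a_i); maximal (= log n) for a uniform policy
    p = softmax(logits)
    return -sum(pi * math.log(pi) for pi in p)

# One transition with made-up numbers:
reward, v_s, v_next = 1.0, 0.5, 0.7
target_state_value = reward + gamma * v_next   # TD target: r + gamma * V(s')
advantage = target_state_value - v_s           # A(s,a) = TD target - V(s)

print(target_state_value)  # ~1.693
print(advantage)           # ~1.193
```

This also explains the sanity check in the notebook: an untrained agent has near-uniform logits, so its entropy should sit near `log(n_actions)`.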
### "Final" evaluation ``` env_monitor = gym.wrappers.Monitor(env, directory="kungfu_videos", force=True) final_rewards = evaluate(agent, env_monitor, n_games=20) env_monitor.close() print("Final mean reward:", np.mean(final_rewards)) video_names = list(filter(lambda s: s.endswith(".mp4"), os.listdir("./kungfu_videos/"))) HTML(""" <video width="640" height="480" controls> <source src="{}" type="video/mp4"> </video> """.format("./kungfu_videos/"+video_names[-1])) HTML(""" <video width="640" height="480" controls> <source src="{}" type="video/mp4"> </video> """.format("./kungfu_videos/" + video_names[-2])) # try other indices # if you don't see videos, just navigate to ./kungfu_videos and download .mp4 files from there. from submit import submit_kungfu env = make_env() submit_kungfu(agent, env, evaluate, <EMAIL>, <TOKEN>) ``` ### Now what? Well, 5k reward is [just the beginning](https://www.buzzfeed.com/mattjayyoung/what-the-color-of-your-karate-belt-actually-means-lg3g). Can you get past 200? With recurrent neural network memory, chances are you can even beat 400! * Try n-step advantage and "lambda"-advantage (aka GAE) - see [this article](https://arxiv.org/abs/1506.02438) * This change should improve early convergence a lot * Try a recurrent neural network * RNN memory will slow things down initially, but it will reach a better final reward at this game * Implement an asynchronous version * Remember [A3C](https://arxiv.org/abs/1602.01783)? The first "A" stands for asynchronous. It means there are several parallel actor-learners out there. 
* You can write custom code for synchronization, but we recommend using [redis](https://redis.io/) * You can store the full parameter set in redis, along with any other metadata * Here's a _quick_ way to (de)serialize parameters for redis ``` import joblib from six import BytesIO ``` ``` def dumps(data): "converts whatever to string" s = BytesIO() joblib.dump(data,s) return s.getvalue() ``` ``` def loads(string): "converts string to whatever was dumps'ed in it" return joblib.load(BytesIO(string)) ```
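The `dumps`/`loads` pair above round-trips arbitrary Python objects through bytes, which is exactly what you'd push into and pull out of redis. The same pattern with only the standard library's `pickle` (shown so the sketch runs without joblib installed; joblib's versions are generally preferable for large numpy arrays):

```python
import pickle
from io import BytesIO

def dumps(data):
    "serialize whatever to bytes"
    buf = BytesIO()
    pickle.dump(data, buf)
    return buf.getvalue()

def loads(blob):
    "inverse of dumps: bytes back to the original object"
    return pickle.load(BytesIO(blob))

# lossless round trip, e.g. for a (toy) parameter dictionary
params = {"w1": [0.1, -0.2], "step": 1000}
assert loads(dumps(params)) == params
```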
<div align="center"> <h1><img width="30" src="https://madewithml.com/static/images/rounded_logo.png">&nbsp;<a href="https://madewithml.com/">Made With ML</a></h1> Applied ML · MLOps · Production <br> Join 30K+ developers in learning how to responsibly <a href="https://madewithml.com/about/">deliver value</a> with ML. <br> </div> <br> <div align="center"> <a target="_blank" href="https://newsletter.madewithml.com"><img src="https://img.shields.io/badge/Subscribe-30K-brightgreen"></a>&nbsp; <a target="_blank" href="https://github.com/GokuMohandas/MadeWithML"><img src="https://img.shields.io/github/stars/GokuMohandas/MadeWithML.svg?style=social&label=Star"></a>&nbsp; <a target="_blank" href="https://www.linkedin.com/in/goku"><img src="https://img.shields.io/badge/style--5eba00.svg?label=LinkedIn&logo=linkedin&style=social"></a>&nbsp; <a target="_blank" href="https://twitter.com/GokuMohandas"><img src="https://img.shields.io/twitter/follow/GokuMohandas.svg?label=Follow&style=social"></a> <br> 🔥&nbsp; Among the <a href="https://github.com/topics/deep-learning" target="_blank">top ML</a> repositories on GitHub </div> <br> <hr> # Notebooks In this lesson, we'll learn about how to work with notebooks. <div align="left"> <a target="_blank" href="https://madewithml.com/courses/foundations/notebooks/"><img src="https://img.shields.io/badge/📖 Read-blog post-9cf"></a>&nbsp; <a href="https://github.com/GokuMohandas/MadeWithML/blob/main/notebooks/01_Notebooks.ipynb" role="button"><img src="https://img.shields.io/static/v1?label=&amp;message=View%20On%20GitHub&amp;color=586069&amp;logo=github&amp;labelColor=2f363d"></a>&nbsp; <a href="https://colab.research.google.com/github/GokuMohandas/MadeWithML/blob/main/notebooks/01_Notebooks.ipynb"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"></a> </div> # Set Up 1. 
Click on this link to open the accompanying [notebook]() for this lesson or create a blank one on [Google Colab](https://colab.research.google.com/). 2. Sign into your [Google account](https://accounts.google.com/signin) to start using the notebook. If you don't want to save your work, you can skip the steps below. If you do not have access to Google, you can follow along using [Jupyter Lab](https://jupyter.org/). 3. If you do want to save your work, click the **COPY TO DRIVE** button on the toolbar. This will open a new notebook in a new tab. Rename this new notebook by removing the words Copy of from the title (change `Copy of 01_Notebooks` to `01_Notebooks`). <div align="center"> <img src="https://raw.githubusercontent.com/GokuMohandas/MadeWithML/main/images/foundations/notebooks/copy_to_drive.png" width="400">&emsp;&emsp;<img src="https://raw.githubusercontent.com/GokuMohandas/MadeWithML/main/images/foundations/notebooks/rename.png" width="320"> </div> # Types of cells Notebooks are made up of cells. Each cell can either be a `code cell` or a `text cell`. * `code cell`: used for writing and executing code. * `text cell`: used for writing text, HTML, Markdown, etc. # Creating cells First, let's create a text cell. Click on a desired location in the notebook and create the cell by clicking on the `➕ TEXT` (located in the top left corner). <div align="left"> <img src="https://raw.githubusercontent.com/GokuMohandas/MadeWithML/main/images/foundations/notebooks/text_cell.png" width="320"> </div> Once you create the cell, click on it and type the following text inside it: ``` ### This is a header Hello world! ``` ### This is a header Hello world! # Running cells Once you type inside the cell, press the `SHIFT` and `RETURN` (enter key) together to run the cell. # Editing cells To edit a cell, double click on it and make any changes. 
# Moving cells Once you create the cell, you can move it up and down by clicking on the cell and then pressing the ⬆ and ⬇ button on the top right of the cell. <div align="left"> <img src="https://raw.githubusercontent.com/GokuMohandas/MadeWithML/main/images/foundations/notebooks/move_cell.png" width="500"> </div> # Deleting cells You can delete the cell by clicking on it and pressing the trash can button 🗑️ on the top right corner of the cell. Alternatively, you can also press ⌘/Ctrl + M + D. <div align="left"> <img src="https://raw.githubusercontent.com/GokuMohandas/MadeWithML/main/images/foundations/notebooks/delete_cell.png" width="500"> </div> # Creating a code cell You can repeat the steps above to create and edit a *code* cell. You can create a code cell by clicking on the `➕ CODE` (located in the top left corner). <div align="left"> <img src="https://raw.githubusercontent.com/GokuMohandas/MadeWithML/main/images/foundations/notebooks/code_cell.png" width="320"> </div> Once you've created the code cell, double click on it, type the following inside it and then press `Shift + Enter` to execute the code. ``` print ("Hello world!") ``` ``` print ("Hello world!") ``` These are the basic concepts you'll need to use these notebooks but we'll learn a few more tricks in subsequent lessons.
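One small trick worth knowing right away: besides `print`, a code cell also displays the value of its last bare expression automatically. So a cell like the following (a toy example) shows the string without any `print` call:

```python
# Jupyter/Colab display the last expression of a cell on its own,
# so ending a cell with a bare name shows its value.
greeting = "Hello " + "world!"
greeting  # displayed automatically in a notebook
```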
<a href="https://colab.research.google.com/github/bhagath-ac07/machine-learning/blob/main/colab/kagle_multi_linear.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> ``` from google.colab import drive drive.mount('/content/drive') # from google.colab import auth # auth.authenticate_user() # import gspread # from oauth2client.client import GoogleCredentials # gc = gspread.authorize(GoogleCredentials.get_application_default()) # worksheet = gc.open_by_url('data').sheet1 # # get_all_values gives a list of rows. # rows = worksheet.get_all_values() # # Convert to a DataFrame and render. import pandas as pd dataset = pd.read_csv('/content/drive/MyDrive/kc_house_data1.csv') import sys #access to system parameters https://docs.python.org/3/library/sys.html print("Python version: {}". format(sys.version)) import numpy as np # linear algebra print("NumPy version: {}". format(np.__version__)) import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv) print("pandas version: {}". format(pd.__version__)) import matplotlib # collection of functions for scientific and publication-ready visualization print("matplotlib version: {}". 
format(matplotlib.__version__)) import matplotlib.pyplot as plt %matplotlib inline import seaborn as sns import warnings # ignore warnings warnings.filterwarnings('ignore') dataset.head() print(dataset.dtypes) dataset = dataset.drop(['id','date'], axis = 1) with sns.plotting_context("notebook",font_scale=2.5): g = sns.pairplot(dataset[['sqft_lot','sqft_above','price','sqft_living','bedrooms']], hue='bedrooms', palette='tab20',size=6) g.set(xticklabels=[]); X = dataset.iloc[:,1:].values y = dataset.iloc[:,0].values print(X) print(y) #splitting dataset into training and testing dataset from sklearn.model_selection import train_test_split X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.2, random_state = 1) # from sklearn.cross_validation import train_test_split #X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 1/3, random_state = 0 from sklearn.linear_model import LinearRegression regressor = LinearRegression() regressor.fit(X_train, y_train) # Predicting the Test set results y_pred = regressor.predict(X_test) np.set_printoptions(precision=2) print("Predicted results") print(np.concatenate((y_pred.reshape(len(y_pred),1), y_test.reshape(len(y_test),1)), 1)) ```
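The notebook ends by printing predicted and actual prices side by side; a single number summarizing fit quality is the R² score, which scikit-learn's `regressor.score(X_test, y_test)` returns directly. Its definition, computed by hand on toy numbers (pure Python; the values are illustrative, not from the housing data):

```python
y_true = [3.0, -0.5, 2.0, 7.0]
y_pred = [2.5,  0.0, 2.0, 8.0]

mean_y = sum(y_true) / len(y_true)
ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))  # residual sum of squares
ss_tot = sum((t - mean_y) ** 2 for t in y_true)             # total sum of squares
r2 = 1 - ss_res / ss_tot
print(r2)  # ~0.9486 — closer to 1.0 means a better fit
```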
<a href="https://colab.research.google.com/github/Lennard94/irsa/blob/master/IRSA_COLAB.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> **Install Miniconda, numpy and RDKit** ``` %%capture !pip install numpy !wget -c https://repo.continuum.io/miniconda/Miniconda3-py37_4.8.3-Linux-x86_64.sh !chmod +x Miniconda3-py37_4.8.3-Linux-x86_64.sh !time bash ./Miniconda3-py37_4.8.3-Linux-x86_64.sh -b -f -p /usr/local !time conda install -q -y -c conda-forge rdkit==2020.09.2 !pip install scipy !pip install lmfit ``` **Import Statements** ``` import sys sys.path.append('/usr/local/lib/python3.7/site-packages/') import numpy as np import matplotlib.pyplot as py import rdkit from rdkit import * from rdkit.Chem import * from rdkit.Chem.rdDistGeom import EmbedMultipleConfs from rdkit.Chem.rdmolfiles import * from rdkit.Chem import Draw from rdkit.Chem.Draw import IPythonConsole import os import scipy from scipy import signal from lmfit.models import LorentzianModel, QuadraticModel, LinearModel, PolynomialModel import lmfit from lmfit import Model ``` # **Algorithm** ``` class Algorithm: def __init__(self, theo_peaks, exp_peaks, cutoff = 0.01, u = 1100, h = 1800, sc = 1, algo = 1): """SPECTRUM INFORMATION""" print("Initialization ND") self.cutoff, self.theo_peaks, self.exp_peaks = cutoff, theo_peaks, exp_peaks self.algo, self.u, self.h, self.sc = algo, u, h, sc print("Initialization SUCCESSFUL") def Diagonal_IR(self, freq_i, inten_i, exp_freq_j, exp_inten_j, bond_l, bond_h, n, m, exp_vcd = 0, inten_vcd = 0, width_j = 0, width_i = 0, eta_exp = 0, eta = 0): """COMPUTE THE SCORES FOR EACH PEAK COMBINATION DYNAMICALLY""" value = 2 def sign(a): return bool(a > 0) - bool(a < 0) if self.algo == 0: if inten_vcd == exp_vcd: if min(abs(1-exp_freq_j/freq_i), abs(1-freq_i/exp_freq_j)) < self.cutoff and exp_freq_j > self.u and exp_freq_j < self.h: x_dummy = min(abs(1-exp_freq_j/freq_i), abs(1-freq_i/exp_freq_j)) width_dummy = 
min(abs(1-width_j/width_i), abs(1-width_i/width_j)) freq_contrib = np.exp(-1/(1-abs(x_dummy/self.cutoff)**2)) y_dummy = min(abs(1-inten_i/exp_inten_j), abs(1-exp_inten_j/inten_i)) inten_contrib = np.exp(-1/(1-abs(y_dummy/1)**2)) sigma_contrib = np.exp(-1/(1-abs(width_dummy/8)**2)) if min(abs(1-width_i/width_j), abs(1-width_j/width_i)) < 8: if abs(1-inten_i/exp_inten_j) < 1 or abs(1-exp_inten_j/inten_i) < 1: value = -inten_contrib*freq_contrib*sigma_contrib#*eta_contrib #scoring function 1 if self.algo == 1: if inten_vcd == exp_vcd: if min(abs(1-exp_freq_j/freq_i), abs(1-freq_i/exp_freq_j)) < self.cutoff and exp_freq_j > self.u and exp_freq_j < self.h: inten_contrib = inten_i*exp_inten_j if min(abs(1-width_i/width_j), abs(1-width_j/width_i)) < 8: #if abs(inten_i-exp_inten_j) < 0.2: value = -inten_contrib#*eta_contrib #scoring function 1 return value def Backtrace_IR(self, p_mat, al_mat, n, m, freq_i, inten_i, exp_freq_j, exp_inten_i, sigma, bond_l, bond_h, exp_sigma, vcd, eta, eta_exp): #n theoretical, m experimental """BACKTRACE THE NEEDLEMAN ALGORITHM""" new_freq, new_freq_VCD, old_freq, new_inten, new_sigma = [], [], [], [], [] new_eta, non_matched_sigma, new_inten_vcd, non_matched_freq = [], [], [], [] matched_freq, vcd_ir_array, non_matched_inten, non_matched_inten_vcd = [], [], [], [] n = n-1 m = m-1 current_scaling_factor = 1 factors = [] while True : if p_mat[n, m] == "D": new_freq.append(exp_freq_j[m-1]) old_freq.append(freq_i[n-1]) new_inten.append(inten_i[n-1]) new_sigma.append(sigma[n-1]) new_eta.append(eta[n-1]) vcd_ir_array.append(vcd[n-1]) current_scaling_factor = exp_freq_j[m-1]/freq_i[n-1] matched_freq.append(n-1) factors.append(current_scaling_factor) n = n-1 m = m-1 elif p_mat[n, m] == "V": non_matched_inten.append(n-1) non_matched_sigma.append(n-1) non_matched_inten_vcd.append(n-1) non_matched_freq.append(n-1) n = n-1 elif p_mat[n, m] == "H": m = m-1 else: break for i in range(len(non_matched_freq)): closest_distance = 9999 matched_to = 0 sf = 1 
for j in range(len(matched_freq)): dis = abs(freq_i[non_matched_freq[i]]-freq_i[matched_freq[j]]) if(dis < closest_distance): closest_distance = dis sf = factors[j] new_freq.append(freq_i[non_matched_freq[i]]*sf) new_sigma.append(sigma[non_matched_sigma[i]]) new_eta.append(eta[non_matched_sigma[i]]) vcd_ir_array.append(vcd[non_matched_freq[i]]) old_freq.append(freq_i[non_matched_freq[i]]) new_inten.append(inten_i[non_matched_freq[i]]) new_inten_vcd.append(0) return np.asarray(new_freq), np.asarray(new_inten), np.asarray(old_freq), np.asarray(new_sigma), np.asarray(new_eta), np.asarray(vcd_ir_array) def Pointer(self, di, ho, ve): """POINTER TO CELL IN THE TABLE""" pointer = min(di, min(ho, ve)) if di == pointer: return "D" elif ho == pointer: return "H" else: return "V" def Needleman_IR(self): """NEEDLEMAN ALGORITHM FOR IR""" freq = self.theo_peaks[:, 1]*self.sc inten = self.theo_peaks[:, 0] sigma = self.theo_peaks[:, 2] vcd = self.theo_peaks[:, 3] try: eta = self.theo_peaks[:, 4] except: eta = np.ones((len(sigma))) exp_freq = self.exp_peaks[:, 1] exp_inten = self.exp_peaks[:, 0] exp_sigma = self.exp_peaks[:, 2] exp_inten_vcd = self.exp_peaks[:, 3] try: eta_exp = self.exp_peaks[:, 4] except: eta_exp = np.ones((len(exp_sigma))) bond_l = self.u bond_h = self.h n = len(freq)+1 m = len(exp_freq)+1 norm = 1 al_mat = np.zeros((n, m)) p_mat = np.zeros((n, m), dtype='U25') #string for i in range(1, n): al_mat[i, 0] = al_mat[i-1, 0]#+0.01#self.dummy_0 # BOUND SOLUTION, VALUE MIGHT BE CHANGED p_mat[i, 0] = 'V' for i in range(1, m): al_mat[0, i] = al_mat[0, i-1]#+0.01##+self.dummy_1 p_mat[0, i] = 'H' p_mat[0, 0] = "S" normalize = 0 for i in range(1, n): #theoretical for j in range(1, m): #experimental di = self.Diagonal_IR(freq[i-1], inten[i-1], exp_freq[j-1], exp_inten[j-1], bond_l, bond_h, n, m, exp_vcd = exp_inten_vcd[j-1], inten_vcd = vcd[i-1], width_j = exp_sigma[j-1], width_i = sigma[i-1], eta_exp = eta_exp[j-1], eta = eta[i-1]) di = al_mat[i-1, j-1]+di ho = al_mat[i, 
j-1]#+abs(exp_inten[j-1])#-np.sqrt((exp_inten[j-1]*self.cutoff*2)**2+exp_freq[j-1]**2) ve = al_mat[i-1, j]#+abs(inten[i-1])#-np.sqrt((exp_inten[j-1]*self.cutoff*2)**2+exp_freq[j-1]**2) al_mat[i, j] = min(di, min(ho, ve)) p_mat[i, j] = self.Pointer(di, ho, ve) freq, inten, old_freq, new_sigma, eta_new, vcd = self.Backtrace_IR(p_mat, al_mat, n, m, freq, inten, exp_freq, exp_inten, sigma, bond_l, bond_h, exp_sigma, vcd=vcd, eta=eta, eta_exp = eta_exp) returnvalue = al_mat[n-1, m-1]#/(n+m) ##ORIGINALLY WE DIVIDED BY THE NUMBER OF THEORETICAL PEAKS ##HOWEVER, WE FOUND THIS TOO INCONVENIENT, SINCE IT MAKES THE DEPENDENCE ON THE ##PURE NUMBERS TOO LARGE return returnvalue, old_freq, freq, inten, new_sigma, np.asarray(eta_new), np.asarray(vcd) ``` # **Function Declaration** ``` def L_(x, amp, cen, wid): t = ((x-cen)/(wid/2))**2 L = amp/(1+t) return L def V_(x, amp, cen, wid, eta): t = ((x-cen)/(wid/2))**2 G = 1*np.exp(-np.log(2)*t) L = 1/(1+t) V = amp*(eta*L+(1-eta)*G) return V def add_peak(prefix, center, amplitude=0.5, sigma=12,eta=0.5): peak = Model(V_, prefix=prefix) pars = peak.make_params() pars[prefix+'cen'].set(center, min=center-2, max=center+2, vary=True) pars[prefix+'amp'].set(amplitude, vary=True, min=0.03, max=1.5) pars[prefix+'wid'].set(sigma, vary=True, min=1, max=64) pars[prefix+'eta'].set(eta, vary=True, min=0, max=1) return peak, pars def Lorentzian_broadening(peaks, w = 6): p = np.arange(500, 2000) x = (p[:, np.newaxis] - (peaks[:, 0])[np.newaxis, :])/(w/2) L = (peaks[:, 1])[np.newaxis, :]/(1+x*x) y = np.sum(L, axis=-1)[:, np.newaxis] p = p[:, np.newaxis] spectrum = np.concatenate([p, y], axis=-1) return spectrum def Voigt(freq, inten, new_sigma, new_eta, u=1000, h=1500): x = np.arange(u, h) list_append = [] for i in range(len(freq)): t = ((x-freq[i])/(new_sigma[i]/2))**2 L = inten[i]/(1+t) G = inten[i]*np.exp(-np.log(2)*t) list_append.append(L*new_eta[i]+(1-new_eta[i])*G) list_append = np.asarray(list_append) y = np.sum(list_append,axis=0) return x, y def 
deconvolute(spectrum, peaks, working_dir = '/content/', save_data = 'ir_exp_peaks.txt', u = 1000, h = 2000): params, model, write_state, name_state = None, None, [], [] model = None for i in range(0, len(peaks)): peak, pars = add_peak('lz%d_' % (i+1), center = peaks[i, 0], amplitude = peaks[i, 1]) if(i == 0): model = peak params = model.make_params() else: model = model + peak params.update(pars) init = model.eval(params, x = spectrum[:, 0]) result = model.fit(spectrum[:, 1], params, x = spectrum[:, 0]) comps = result.eval_components() for name, par in result.params.items(): write_state.append(par.value) write_state=np.asarray(write_state) write_state=write_state.reshape(-1,4) dic = lmfit.fit_report(result.params) py.plot(spectrum[:, 0], spectrum[:, 1], label = 'data', color = "black") py.plot(spectrum[:, 0], result.best_fit, label = 'best fit', color = "orange") py.xlim(h, u) py.ylim(0,1.02) py.show() f = open(working_dir+save_data, 'w') for i in write_state: f.write(str(i[0])+" "+str(i[1])+" "+str(i[2])+" 0 " +str(i[3])+"\n") f.close() ``` # **Global Settings** ``` working_dir = '/content/' ##settings about the experimental spectrum absorbance_ir = True ##Whether the absorbance of the exp spectrum is recorded transmission_ir = False ##Whether the transmission of the exp spectrum is recorded absorbance_raman = True ##Whether the absorbance of the exp spectrum is recorded transmission_raman = False ##Whether the transmission of the exp spectrum is recorded absorbance_vcd = True ##Whether the absorbance of the exp spectrum is recorded transmission_vcd = False ##Whether the transmission of the exp spectrum is recorded ##Sampling settings rmsd_cutoff = 0.5 ## choose either 0.5, 0.3 or 0.1 max_attempts = 10000 ## choose a large number exp_torsion_preference = False ## set torsion preference to false basic_knowledge = True ## set basic knowledge to True ir = True ##Whether to compute IR spectra raman = False ##Whether to compute Raman u = 1000 ##lower limit h = 1500 
##higher limit vcd = False ##Whether to compute VCD (only possible with g09) ## Software to be used orca_backend = True ## orca backend, use for IR and Raman g09_backend = False ## gaussian backend, use for VCD W = ' 4:00 ' ## Walltime ## Calculation setup orca/5.0.1 if(orca_backend): n_procs = ' 12 ' mem = ' 1000 ' basis_set = ' def2-TZVP def2/J ' functional = ' RI BP86 D4 ' ## or use 'RIJCOSX B3LYP/G D4 ' convergence = ' TightSCF TightOpt ' if(raman): freq = " NumFreq " elif(ir): freq = " freq " else: freq = " " elif(g09_backend): ## Calculation setup gaussian 09 n_procs = ' 12 ' mem = ' 12000 ' basis_set = ' 6-31G(d,p) ' functional = ' B3LYP Dispersion=GD3 ' convergence = ' TightSCF TightOpt ' freq = ' freq(' if(raman): freq+='Raman, ' if(vcd): freq+='VCD, ' freq+=')' scaling_factor = 1.0 # change to 0.98 for B3LYP/G ``` # **WorkFlow, Step 1: Load experimental files** **Load experimental files** ``` from google.colab import files uploaded = files.upload() print(uploaded) for fn in uploaded.keys(): print('User uploaded file "{name}" with length {length} bytes'.format( name=fn, length=len(uploaded[fn]))) if(raman): uploaded = files.upload() print(uploaded) for fn in uploaded.keys(): print('User uploaded file "{name}" with length {length} bytes'.format( name=fn, length=len(uploaded[fn]))) if(vcd): uploaded = files.upload() print(uploaded) for fn in uploaded.keys(): print('User uploaded file "{name}" with length {length} bytes'.format( name=fn, length=len(uploaded[fn]))) ``` **Set path to experimental Spectra** ``` path_to_exp_IR = '/content/IR_30.txt' ##We expect that the file has two columns: ## First column: x-coordinates ## Second column: y-coordinates path_to_exp_raman = '/content/raman.txt' path_to_exp_vcd = '/content/vcd.txt' ir_exp = np.loadtxt(path_to_exp_IR, usecols=(0, 1)) if(not absorbance_ir): ir_exp[:, 1] = 2-np.log10(ir_exp[:, 1]) idx_ir_exp = (ir_exp[:, 0]>u) & (ir_exp[:, 0]<h) ir_exp = ir_exp[idx_ir_exp] ir_exp[:, 1] = ir_exp[:, 1]/np.max(ir_exp[:, 
1]) py.plot(ir_exp[:, 0], ir_exp[:, 1], label='exp spectrum') ind, _ = scipy.signal.find_peaks(ir_exp[:, 1]) ir_exp_peaks = ir_exp[ind] py.plot(ir_exp_peaks[:, 0], ir_exp_peaks[:, 1], 'o' ,label='Peaks') py.legend() py.xlim(h, u) py.ylim(0, 1.01) py.show() if(raman): raman_exp = np.loadtxt(path_to_exp_raman, usecols=(0, 1)) if(not absorbance_raman): raman_exp[:, 1] = 2-np.log10(raman_exp[:, 1]) idx_raman_exp = (raman_exp[:, 0] > u) & (raman_exp[:, 0] < h) py.plot(raman_exp[:, 0], raman_exp[:, 1]/np.max(raman_exp[idx_raman_exp, 1])) py.xlim(h, u) py.ylim(0, 1.01) py.show() if(vcd): vcd_exp = np.loadtxt(path_to_exp_vcd, usecols=(0, 1)) if(not absorbance_vcd): vcd_exp[:, 1] = 2-np.log10(vcd_exp[:, 1]) idx_vcd_exp = (vcd_exp[:, 0] > u) & (vcd_exp[:, 0] < h) py.plot(vcd_exp[:, 0], vcd_exp[:, 1]/np.max(np.abs(vcd_exp[idx_vcd_exp, 1]))) py.xlim(h, u) py.ylim(-1.01, 1.01) py.show() ``` **Deconvolute Experimental Spectra** ``` deconvolute(ir_exp, ir_exp_peaks, working_dir=working_dir, save_data = 'ir_exp_peaks.txt', u = u, h = h) ``` # **WorkFlow, Step 2: Set Up Calculation Files** **Set SMILE String** ``` #smile_string = 'C[C@H]([C@H]([C@H]([C@H](CO)Cl)O)Cl)Cl' ## Smile string of compound smile_string = 'CC(C)(C)C(=O)[C@H](C)C[C@H](C)/C=C(\C)/[C@H](OC)[C@H](C)[C@H]1OC(=O)c2c([C@@H]1O)nccc2O' mol = rdkit.Chem.MolFromSmiles(smile_string) ## Draw compound mol ``` **Create Conformational Ensemble, Write to xyz files** ``` solute_mol = AddHs(mol) EmbedMultipleConfs(solute_mol, numConfs = max_attempts, clearConfs = True, pruneRmsThresh = rmsd_cutoff, numThreads = 8, useExpTorsionAnglePrefs = exp_torsion_preference, useBasicKnowledge = basic_knowledge) ## Create calculation file path = '/content/calculation_files/' try: os.mkdir('/content/calculation_files') except: print('folder already exists') pass counter = 0 for i in range(max_attempts): try: rdkit.Chem.rdmolfiles.MolToXYZFile(solute_mol, path+str(counter)+".xyz", confId = i) counter+=1 except: pass print("Number of 
conformations found", str(counter)) f = open(path+'out', 'w') f.write(str(counter)) f.close() ``` **Write Calculation files** ``` if(orca_backend): f_submit = open(path+'job.sh', 'w') for i in range(counter): f = open(path+str(i)+".inp","w+") f.write("""! """ + functional + basis_set + convergence + freq + """ %maxcore """+ mem + """ %pal nprocs """ + n_procs + """ end * xyzfile 0 1 """ +str(i)+""".xyz \n""") f.close() f_sh = open(path+str(i)+".sh", 'w') f_sh.write("""a=$PWD cd $TMPDIR cp ${a}/"""+str(i)+""".inp . cp ${a}/"""+str(i)+""".xyz . module load orca openmpi/4.0.2 sleep 20 /cluster/apps/nss/orca/5.0.1/x86_64/bin/orca """+str(i)+""".inp > """+str(i)+""".out cp """+str(i)+""".out ${a} cp """+str(i)+""".engrad ${a} cp """+str(i)+""".hess ${a} cp """+str(i)+""".xyz ${a} cd ${a}""") f_sh.close() f_submit.write('chmod +wrx '+str(i)+'.sh\n') f_submit.write('bsub -n ' +n_procs + '-W '+W+' ./'+str(i)+'.sh\n') f_submit.close() #elif(g09_backend): # continue ``` **ZIP Files** ``` %%capture import zipfile from google.colab import drive from google.colab import files !zip -rm /content/input.zip /content/calculation_files ``` **Download** ``` files.download("/content/input.zip") ``` Next, you need to perform the calculations. The job.sh script automatically submits the jobs to the local cluster (i.e., bash job.sh), however, depending on your cluster architecture, you might need to update this file. 
You can then zip the calculation, i.e., zip -rm output.zip calculation_files, and upload it to the collab, and continue with the workflow # **Workflow, Step 3: Align the Spectra** **Upload finished computation** ``` uploaded = files.upload() print(uploaded) for fn in uploaded.keys(): print('User uploaded file "{name}" with length {length} bytes'.format( name=fn, length=len(uploaded[fn]))) ``` **Unzip file** ``` %%capture !unzip /content/30_calc_out_250.zip ``` **Set Path to Calculation Setup** ``` path_output = '/content/30_calc_out_250/' ``` **Read in free energies, IR spectra, and potentially other spectra** ``` free_energies, energies, ir_spectra, structure_files = [], [], [], [] if(orca_backend): files = os.listdir(path_output) for fi in files: if(fi.endswith('.out')): name = fi.split('.')[0] freq = np.zeros((400, 2)) f = open(path_output+name+".out", 'r') imaginary = False free_energies_tmp = 0 energies_tmp = 0 for line in f: if('Final Gibbs free energy' in line): free_energies_tmp = float(line.split()[-2]) elif('FINAL SINGLE POINT ENERGY' in line): energies_tmp = float(line.split()[-1]) elif('IR SPECTRUM' in line): f.readline() f.readline() f.readline() f.readline() f.readline() counter_tmp = 0 while(True): try: tmp = f.readline().split() freq[counter_tmp, 0] = float(tmp[1]) freq[counter_tmp, 1] = float(tmp[3]) if(float(tmp[1]) < 0): imaginary = True except: break counter_tmp+=1 if(imaginary==False and free_energies_tmp!=0 and energies_tmp!=0): structure_files.append(int(name)) ir_spectra.append(freq) energies.append(energies_tmp) free_energies.append(free_energies_tmp) free_energies = np.asarray(free_energies) energies = np.asarray(energies) ir_spectra = np.asarray(ir_spectra) structure_files = np.asarray(structure_files) print(len(energies)) print(len(free_energies)) print(len(structure_files)) ``` **Filter structures which are equivalent** ``` threshold = 1e5 ## corresponds to 0.026 kJmol hartree_to_kJmol = 2625.4996394799 _, index = 
np.unique(np.asarray(energies*threshold, dtype=int), return_index=True) free_energies = free_energies[index]*hartree_to_kJmol energies = energies[index]*hartree_to_kJmol ir_spectra = ir_spectra[index] structure_files = structure_files[index] free_energies-=np.min(free_energies) energies-=np.min(energies) py.plot(structure_files, free_energies) py.show() ``` **Generate Superimposed IR spectrum** ``` lorentzian_bandwidth = np.arange(6, 7, 1) Z = 1 print(scaling_factor) RT = 0.008314*298.15 # in kJmol ir_theo_data = ir_spectra[0] if(len(free_energies) > 0): Z = np.sum(np.exp(-free_energies/RT)) ir_theo_y = (ir_spectra[:, :, 1]*np.exp(-free_energies[:, np.newaxis]/RT)/Z).flatten() ir_theo_x = ir_spectra[:, :, 0].flatten() ir_theo_data = np.concatenate([ir_theo_x[:, np.newaxis], ir_theo_y[:, np.newaxis]], axis=-1) for w in lorentzian_bandwidth: ir_theo = Lorentzian_broadening(ir_theo_data, w = w) idx_ir_theo = (ir_theo[:, 0] > u) & (ir_theo[:, 0] < h) ir_theo = ir_theo[idx_ir_theo] ir_theo[:, 1] /= np.max(ir_theo[:, 1]) ind, _ = scipy.signal.find_peaks(ir_theo[:, 1]) py.plot(ir_theo[:, 0]*scaling_factor, ir_theo[:, 1], label = "theo", color = 'orange') py.plot(ir_exp[:, 0], ir_exp[:, 1], label = "exp", color = 'black') py.legend() py.xlim(h, u) py.show() deconvolute(ir_theo, ir_theo[ind], working_dir = '/content/', save_data = str(w)+'_ir_theo_peaks.txt', u = u, h = h) ``` **Print out Parameters** ``` #for w in lorentzian_bandwidth: # print(np.loadtxt(working_dir+str(w)+"_ir_theo_peaks.txt")) scaling_factor = np.arange(1.000, 1.02, 0.005) for sc in scaling_factor: algorithm = Algorithm(theo_peaks = np.loadtxt(working_dir+str(w)+"_ir_theo_peaks.txt"), exp_peaks = np.loadtxt(working_dir+"ir_exp_peaks.txt"), cutoff = 0.04, u = u, h = h, sc = sc) s, _, freq_aligned, inten_aligned, sigma, eta, vcd_ir_array = algorithm.Needleman_IR() vcd_ir_array = np.asarray(vcd_ir_array, dtype=int) x, y = Voigt(freq_aligned[vcd_ir_array == 0], inten_aligned[vcd_ir_array == 0], 
sigma[vcd_ir_array == 0], eta[vcd_ir_array == 0], u = u, h = h) y /= np.max(y) py.plot(ir_theo[:, 0], ir_theo[:, 1], label = "unaligned", color = 'orange') py.plot(x, y, label = "aligned", color = 'red') py.plot(ir_exp[:, 0], ir_exp[:, 1], label = "experimental", color = 'black') print(sc, s) py.legend() py.show() ```
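The `L_`, `V_`, and `Voigt` helpers above are all built on the same pseudo-Voigt profile: a Lorentzian and a Gaussian that share one FWHM, mixed by the parameter eta. A minimal standalone sketch of that profile (the function and variable names here are illustrative, not part of the workflow above):

```python
import numpy as np

def pseudo_voigt(x, amp, cen, fwhm, eta):
    # Lorentzian and Gaussian share the same full width at half maximum,
    # so the mix eta*L + (1 - eta)*G also has that FWHM.
    t = ((x - cen) / (fwhm / 2)) ** 2
    lorentz = 1.0 / (1.0 + t)
    gauss = np.exp(-np.log(2) * t)
    return amp * (eta * lorentz + (1.0 - eta) * gauss)

x = np.linspace(1000.0, 1400.0, 4001)
y = pseudo_voigt(x, amp=1.0, cen=1200.0, fwhm=20.0, eta=0.5)

print(y.max())  # peak height equals amp at the center, for any eta
print(pseudo_voigt(np.array([1210.0]), 1.0, 1200.0, 20.0, 0.5)[0])  # value at cen + fwhm/2
```

At `cen ± fwhm/2` both components evaluate to exactly half of `amp` for any `eta`, which is why a single width parameter per peak is enough in the deconvolution fit.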
``` %matplotlib inline import numpy as np import pandas as pd import matplotlib.pyplot as plt ``` ### Reading in & Formatting Price Data The .csv read from contains the price of [bitcoin](https://bitcoin.org/en/) from 2015-02-01 to 2018-04-21. ``` prices_raw = pd.read_csv('data/price_data_final.csv', infer_datetime_format=True) p = prices_raw['price'] print(p.head(2)) print(p.tail(2)) ``` ### Reading in & Formatting Quantified Sentiment The .csv we read from here contains daily sentiment scores from 2015-02-01 to 2018-04-21 for the title of every post in the [Bitcoin subreddit](https://www.reddit.com/r/Bitcoin/), an online Bitcoin community. There seems to be *a lot* of noise in the form of 0's (neutral sentiment). **How can this effectively be handled?** For now, I will take the daily average. ``` sentiment_raw = pd.read_csv('data/reddit_sentiment_data_final.csv', infer_datetime_format=True) s = sentiment_raw.mean(axis=1) print(s.head(2)) print(s.tail(2)) count = 0 for i in sentiment_raw.drop('Unnamed: 0', axis=1).values: for v in i: if v == 0: count += 1 print(count) ``` ### Final Data Formatting ``` # create the "today's price" feature prices = prices_raw.set_index('Unnamed: 0') prices = prices.reset_index(drop=True) print(prices.head(3)) print(prices.tail(3)) # create "today's price - yesterday's price" feature diff = [] for n in range(0,len(prices)): diff.append(prices.iloc[n] - prices.iloc[n-1]) diff = pd.DataFrame(diff) print(diff.head(3)) print(diff.tail(3)) # create "tomorrow's price" label (for regression) label = pd.DataFrame(prices.drop(0)).reset_index(drop=True) print(label.head(3)) print(label.tail(3)) # create binary "today minus yesterday up or down" feature upOrDown = pd.DataFrame([1 if d > 0 else 0 for d in diff.values]) print(upOrDown.head(3)) print(upOrDown.tail(3)) # create "tomorrow minus today up or down" label for classification updownLabel = upOrDown.drop(0).reset_index(drop=True) print(updownLabel.head(3)) print(updownLabel.tail(3)) # 
create + or - sentiment feature posnegSent = pd.DataFrame([1 if d > 0 else 0 for d in s.values]) print(posnegSent.head(3)) print(posnegSent.tail(3)) # resizing prices = prices.drop(1175) diff = diff.drop(1175) upOrDown = upOrDown.drop(1175) s = s.drop(1175) posnegSent = posnegSent.drop(1175) # Bringing it all together datas = pd.concat([prices, diff, s, posnegSent, label, upOrDown, updownLabel], axis=1) datas.columns = ['price(t)', 'p(t) - p(t-1)','s(t)', 'sentPosNeg', 'p(t+1)', 'p(t) - p(t-1) > 0', 'p(t+1) - p(t) > 0'] datas = datas.drop(0) datas.head(10) ``` ### ...Let's try KNN #### This first section uses KNeighborsRegressor to predict price. ``` from sklearn.neighbors import KNeighborsRegressor knnReg = KNeighborsRegressor(n_neighbors=2) X = datas[['price(t)', 's(t)']] Y = datas['p(t+1)'] knnReg.fit(X, Y) x = list(range(1, 1175)) y = [] for n in x: y.append(knnReg.predict(X[n-1:n])) plt.plot(x,y) plt.xlabel('Days After 2-2-2015') plt.ylabel('Predicted Price (USD)') plt.title('KNN Regressor Predicting BTC Price') ``` So that's knnRegressor... not tweaked yet, but its predictions follow the actual price path, so we know the model serves its purpose. knnClassifier comin' up next. ``` plt.plot(x,p.drop(0).drop(1174), color='orange') plt.xlabel('Days After 2-2-2015') plt.ylabel('Actual Price (USD)') plt.title('Price of BTC, Feb 2015 - April 2018') # Some evaluation? from sklearn.model_selection import cross_val_score np.mean(cross_val_score(knnReg, X[1000:1175], Y[1000:1175])) # restricting the training to recent data ^^^ ``` #### This second section uses KNeighborsClassifier to predict whether the price will go up or down. 
``` from sklearn.neighbors import KNeighborsClassifier X = datas[['sentPosNeg', 'p(t) - p(t-1) > 0']] Y = datas[['p(t+1) - p(t) > 0']] knnClass = KNeighborsClassifier(n_neighbors=3) knnClass.fit(X[1000:1175],Y[1000:1175]) np.mean(cross_val_score(knnClass, X[1000:1175], Y[1000:1175])) plt.plot(x,y,x,p.drop(0).drop(1174)) plt.xlabel('Days After 2-2-2015') plt.ylabel('Price (USD)') plt.title('KNN Regressor Predicting BTC Price') plt.legend(('Predicted', 'Actual')) ```
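One caveat about the `cross_val_score` evaluation above: the default folds let the model train on later days and score earlier ones, which leaks future information when the rows are ordered in time. `TimeSeriesSplit` keeps each test fold strictly after its training window. A minimal sketch on synthetic stand-in data (the features below are random placeholders, not the notebook's actual columns):

```python
import numpy as np
from sklearn.model_selection import TimeSeriesSplit, cross_val_score
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 2))  # placeholders for features like sentPosNeg, price diff
y = (X[:, 0] + 0.5 * rng.normal(size=300) > 0).astype(int)  # toy up/down label

# Each of the 5 test folds lies entirely after its training window
tscv = TimeSeriesSplit(n_splits=5)
scores = cross_val_score(KNeighborsClassifier(n_neighbors=3), X, y, cv=tscv)
print(scores.mean())
```

The same `cv=tscv` argument works for `KNeighborsRegressor` as well, so both sections of the notebook could be scored without look-ahead.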
# Project 3: Smart Beta Portfolio and Portfolio Optimization ## Overview Smart beta has a broad meaning, but we can say in practice that when we use the universe of stocks from an index, and then apply some weighting scheme other than market cap weighting, it can be considered a type of smart beta fund. By contrast, a purely alpha fund may create a portfolio of specific stocks, not related to an index, or may choose from the global universe of stocks. The other characteristic that makes a smart beta portfolio "beta" is that it gives its investors a diversified broad exposure to a particular market. Imagine you're a portfolio manager, and wish to try out some different portfolio weighting methods. One way to design portfolio is to look at certain accounting measures (fundamentals) that, based on past trends, indicate stocks that produce better results. For instance, you may start with a hypothesis that dividend-issuing stocks tend to perform better than stocks that do not. This may not always be true of all companies; for instance, Apple does not issue dividends, but has had good historical performance. The hypothesis about dividend-paying stocks may go something like this: Companies that regularly issue dividends may also be more prudent in allocating their available cash, and may indicate that they are more conscious of prioritizing shareholder interests. For example, a CEO may decide to reinvest cash into pet projects that produce low returns. Or, the CEO may do some analysis, identify that reinvesting within the company produces lower returns compared to a diversified portfolio, and so decide that shareholders would be better served if they were given the cash (in the form of dividends). So according to this hypothesis, dividends may be both a proxy for how the company is doing (in terms of earnings and cash flow), but also a signal that the company acts in the best interest of its shareholders. Of course, it's important to test whether this works in practice. 
You may also have another hypothesis, with which you wish to design a portfolio that can then be made into an ETF. You may find that investors may wish to invest in passive beta funds, but wish to have less risk exposure (less volatility) in their investments. The goal of having a low volatility fund that still produces returns similar to an index may be appealing to investors who have a shorter investment time horizon, and so are more risk averse. So the objective of your proposed portfolio is to design a portfolio that closely tracks an index, while also minimizing the portfolio variance. Also, if this portfolio can match the returns of the index with less volatility, then it has a higher risk-adjusted return (same return, lower volatility). Smart Beta ETFs can be designed with both of these two general methods (among others): alternative weighting and minimum volatility ETF. ## Instructions Each problem consists of a function to implement and instructions on how to implement the function. The parts of the function that need to be implemented are marked with a `# TODO` comment. After implementing the function, run the cell to test it against the unit tests we've provided. For each problem, we provide one or more unit tests from our `project_tests` package. These unit tests won't tell you if your answer is correct, but will warn you of any major errors. Your code will be checked for the correct solution when you submit it to Udacity. ## Packages When you implement the functions, you'll only need to you use the packages you've used in the classroom, like [Pandas](https://pandas.pydata.org/) and [Numpy](http://www.numpy.org/). These packages will be imported for you. We recommend you don't add any import statements, otherwise the grader might not be able to run your code. The other packages that we're importing are `helper`, `project_helper`, and `project_tests`. These are custom packages built to help you solve the problems. 
The `helper` and `project_helper` module contains utility functions and graph functions. The `project_tests` contains the unit tests for all the problems. ### Install Packages ``` import sys !{sys.executable} -m pip install -r requirements.txt ``` ### Load Packages ``` import pandas as pd import numpy as np import helper import project_helper import project_tests ``` ## Market Data ### Load Data For this universe of stocks, we'll be selecting large dollar volume stocks. We're using this universe, since it is highly liquid. ``` df = pd.read_csv('data/eod-quotemedia.csv') percent_top_dollar = 0.2 high_volume_symbols = project_helper.large_dollar_volume_stocks(df, 'adj_close', 'adj_volume', percent_top_dollar) df = df[df['ticker'].isin(high_volume_symbols)] close = df.reset_index().pivot(index='date', columns='ticker', values='adj_close') volume = df.reset_index().pivot(index='date', columns='ticker', values='adj_volume') dividends = df.reset_index().pivot(index='date', columns='ticker', values='dividends') ``` ### View Data To see what one of these 2-d matrices looks like, let's take a look at the closing prices matrix. ``` project_helper.print_dataframe(close) ``` # Part 1: Smart Beta Portfolio In Part 1 of this project, you'll build a portfolio using dividend yield to choose the portfolio weights. A portfolio such as this could be incorporated into a smart beta ETF. You'll compare this portfolio to a market cap weighted index to see how well it performs. Note that in practice, you'll probably get the index weights from a data vendor (such as companies that create indices, like MSCI, FTSE, Standard and Poor's), but for this exercise we will simulate a market cap weighted index. ## Index Weights The index we'll be using is based on large dollar volume stocks. Implement `generate_dollar_volume_weights` to generate the weights for this index. For each date, generate the weights based on dollar volume traded for that date. 
For example, assume the following is close prices and volume data: ``` Prices A B ... 2013-07-08 2 2 ... 2013-07-09 5 6 ... 2013-07-10 1 2 ... 2013-07-11 6 5 ... ... ... ... ... Volume A B ... 2013-07-08 100 340 ... 2013-07-09 240 220 ... 2013-07-10 120 500 ... 2013-07-11 10 100 ... ... ... ... ... ``` The weights created from the function `generate_dollar_volume_weights` should be the following: ``` A B ... 2013-07-08 0.126.. 0.194.. ... 2013-07-09 0.759.. 0.377.. ... 2013-07-10 0.075.. 0.285.. ... 2013-07-11 0.037.. 0.142.. ... ... ... ... ... ``` ``` def generate_dollar_volume_weights(close, volume): """ Generate dollar volume weights. Parameters ---------- close : DataFrame Close price for each ticker and date volume : str Volume for each ticker and date Returns ------- dollar_volume_weights : DataFrame The dollar volume weights for each ticker and date """ assert close.index.equals(volume.index) assert close.columns.equals(volume.columns) #TODO: Implement function dollar_volume = close * volume return (dollar_volume.T / dollar_volume.T.sum()).T project_tests.test_generate_dollar_volume_weights(generate_dollar_volume_weights) ``` ### View Data Let's generate the index weights using `generate_dollar_volume_weights` and view them using a heatmap. ``` index_weights = generate_dollar_volume_weights(close, volume) project_helper.plot_weights(index_weights, 'Index Weights') ``` ## Portfolio Weights Now that we have the index weights, let's choose the portfolio weights based on dividends. Implement `calculate_dividend_weights` to returns the weights for each stock based on its total dividend yield over time. This is similar to generating the weight for the index, but it's using dividend data instead. For example, assume the following is `dividends` data: ``` Prices A B 2013-07-08 0 0 2013-07-09 0 1 2013-07-10 0.5 0 2013-07-11 0 0 2013-07-12 2 0 ... ... ... 
``` The weights created from the function `calculate_dividend_weights` should be the following: ``` A B 2013-07-08 NaN NaN 2013-07-09 0 1 2013-07-10 0.333.. 0.666.. 2013-07-11 0.333.. 0.666.. 2013-07-12 0.714.. 0.285.. ... ... ... ``` ``` def calculate_dividend_weights(dividends): """ Calculate dividend weights. Parameters ---------- ex_dividend : DataFrame Ex-dividend for each stock and date Returns ------- dividend_weights : DataFrame Weights for each stock and date """ #TODO: Implement function dividend_cumsum_per_ticker = dividends.cumsum().T return (dividend_cumsum_per_ticker/dividend_cumsum_per_ticker.sum()).T project_tests.test_calculate_dividend_weights(calculate_dividend_weights) ``` ### View Data Just like the index weights, let's generate the ETF weights and view them using a heatmap. ``` etf_weights = calculate_dividend_weights(dividends) project_helper.plot_weights(etf_weights, 'ETF Weights') ``` ## Returns Implement `generate_returns` to generate returns data for all the stocks and dates from price data. You might notice we're implementing returns and not log returns. Since we're not dealing with volatility, we don't have to use log returns. ``` def generate_returns(prices): """ Generate returns for ticker and date. Parameters ---------- prices : DataFrame Price for each ticker and date Returns ------- returns : Dataframe The returns for each ticker and date """ #TODO: Implement function return prices / prices.shift(1) - 1 project_tests.test_generate_returns(generate_returns) ``` ### View Data Let's generate the closing returns using `generate_returns` and view them using a heatmap. ``` returns = generate_returns(close) project_helper.plot_returns(returns, 'Close Returns') ``` ## Weighted Returns With the returns of each stock computed, we can use it to compute the returns for an index or ETF. Implement `generate_weighted_returns` to create weighted returns using the returns and weights. 
``` def generate_weighted_returns(returns, weights): """ Generate weighted returns. Parameters ---------- returns : DataFrame Returns for each ticker and date weights : DataFrame Weights for each ticker and date Returns ------- weighted_returns : DataFrame Weighted returns for each ticker and date """ assert returns.index.equals(weights.index) assert returns.columns.equals(weights.columns) #TODO: Implement function return returns * weights project_tests.test_generate_weighted_returns(generate_weighted_returns) ``` ### View Data Let's generate the ETF and index returns using `generate_weighted_returns` and view them using a heatmap. ``` index_weighted_returns = generate_weighted_returns(returns, index_weights) etf_weighted_returns = generate_weighted_returns(returns, etf_weights) project_helper.plot_returns(index_weighted_returns, 'Index Returns') project_helper.plot_returns(etf_weighted_returns, 'ETF Returns') ``` ## Cumulative Returns To compare performance between the ETF and Index, we're going to calculate the tracking error. Before we do that, we first need to calculate the index and ETF cumulative returns. Implement `calculate_cumulative_returns` to calculate the cumulative returns over time given the returns. ``` def calculate_cumulative_returns(returns): """ Calculate cumulative returns. Parameters ---------- returns : DataFrame Returns for each ticker and date Returns ------- cumulative_returns : Pandas Series Cumulative returns for each date """ #TODO: Implement function return (returns.T.sum() + 1).cumprod() project_tests.test_calculate_cumulative_returns(calculate_cumulative_returns) ``` ### View Data Let's generate the ETF and index cumulative returns using `calculate_cumulative_returns` and compare the two. 
``` index_weighted_cumulative_returns = calculate_cumulative_returns(index_weighted_returns) etf_weighted_cumulative_returns = calculate_cumulative_returns(etf_weighted_returns) project_helper.plot_benchmark_returns(index_weighted_cumulative_returns, etf_weighted_cumulative_returns, 'Smart Beta ETF vs Index') ``` ## Tracking Error In order to check the performance of the smart beta portfolio, we can calculate the annualized tracking error against the index. Implement `tracking_error` to return the tracking error between the ETF and benchmark. For reference, we'll be using the following annualized tracking error function: $$ TE = \sqrt{252} * SampleStdev(r_p - r_b) $$ Where $ r_p $ is the portfolio/ETF returns and $ r_b $ is the benchmark returns. ``` def tracking_error(benchmark_returns_by_date, etf_returns_by_date): """ Calculate the tracking error. Parameters ---------- benchmark_returns_by_date : Pandas Series The benchmark returns for each date etf_returns_by_date : Pandas Series The ETF returns for each date Returns ------- tracking_error : float The tracking error """ assert benchmark_returns_by_date.index.equals(etf_returns_by_date.index) #TODO: Implement function return np.sqrt(252) * (etf_returns_by_date - benchmark_returns_by_date).std() project_tests.test_tracking_error(tracking_error) ``` ### View Data Let's generate the tracking error using `tracking_error`. ``` smart_beta_tracking_error = tracking_error(np.sum(index_weighted_returns, 1), np.sum(etf_weighted_returns, 1)) print('Smart Beta Tracking Error: {}'.format(smart_beta_tracking_error)) ``` # Part 2: Portfolio Optimization Now, let's create a second portfolio. We'll still reuse the market cap weighted index, but this will be independent of the dividend-weighted portfolio that we created in part 1. We want to both minimize the portfolio variance and also want to closely track a market cap weighted index. 
In other words, we're trying to minimize the distance between the weights of our portfolio and the weights of the index. $Minimize \left [ \sigma^2_p + \lambda \sqrt{\sum_{1}^{m}(weight_i - indexWeight_i)^2} \right ]$ where $m$ is the number of stocks in the portfolio, and $\lambda$ is a scaling factor that you can choose. Why are we doing this? One way that investors evaluate a fund is by how well it tracks its index. The fund is still expected to deviate from the index within a certain range in order to improve fund performance. A way for a fund to track the performance of its benchmark is by keeping its asset weights similar to the weights of the index. We’d expect that if the fund has the same stocks as the benchmark, and also the same weights for each stock as the benchmark, the fund would yield about the same returns as the benchmark. By minimizing a linear combination of both the portfolio risk and distance between portfolio and benchmark weights, we attempt to balance the desire to minimize portfolio variance with the goal of tracking the index. ## Covariance Implement `get_covariance_returns` to calculate the covariance of the `returns`. We'll use this to calculate the portfolio variance. If we have $m$ stock series, the covariance matrix is an $m \times m$ matrix containing the covariance between each pair of stocks. We can use [numpy.cov](https://docs.scipy.org/doc/numpy/reference/generated/numpy.cov.html) to get the covariance. We give it a 2D array in which each row is a stock series, and each column is an observation at the same period of time. The covariance matrix $\mathbf{P} = \begin{bmatrix} \sigma^2_{1,1} & ... & \sigma_{1,m} \\ ... & ... & ...\\ \sigma_{m,1} & ... & \sigma^2_{m,m} \\ \end{bmatrix}$ ``` def get_covariance_returns(returns): """ Calculate covariance matrices. 
Parameters ---------- returns : DataFrame Returns for each ticker and date Returns ------- returns_covariance : 2 dimensional Ndarray The covariance of the returns """ #TODO: Implement function return np.cov(returns.T.fillna(0)) project_tests.test_get_covariance_returns(get_covariance_returns) ``` ### View Data Let's look at the covariance generated from `get_covariance_returns`. ``` covariance_returns = get_covariance_returns(returns) covariance_returns = pd.DataFrame(covariance_returns, returns.columns, returns.columns) covariance_returns_correlation = np.linalg.inv(np.diag(np.sqrt(np.diag(covariance_returns)))) covariance_returns_correlation = pd.DataFrame( covariance_returns_correlation.dot(covariance_returns).dot(covariance_returns_correlation), covariance_returns.index, covariance_returns.columns) project_helper.plot_covariance_returns_correlation( covariance_returns_correlation, 'Covariance Returns Correlation Matrix') ``` ### portfolio variance We can write the portfolio variance as $\sigma^2_p = \mathbf{x^T} \mathbf{P} \mathbf{x}$. Recall that $\mathbf{x^T} \mathbf{P} \mathbf{x}$ is called a quadratic form. We can use the cvxpy function `quad_form(x,P)` to get the quadratic form. ### Distance from index weights We want portfolio weights that track the index closely. So we want to minimize the distance between them. Recall from the Pythagorean theorem that you can get the distance between two points in an x,y plane by adding the square of the x and y distances and taking the square root. The generalization of this distance to any number of dimensions is called the L2 norm. So $\sqrt{\sum_{1}^{n}(weight_i - indexWeight_i)^2}$ can also be written as $\left \| \mathbf{x} - \mathbf{index} \right \|_2$. There's a cvxpy function called [norm()](https://www.cvxpy.org/api_reference/cvxpy.atoms.other_atoms.html#norm) `norm(x, p=2, axis=None)`. 
The default is already set to find an L2 norm, so you would pass in one argument, which is the difference between your portfolio weights and the index weights. ### objective function We want to minimize both the portfolio variance and the distance of the portfolio weights from the index weights. We also want to choose a `scale` constant, which is $\lambda$ in the expression. $\mathbf{x^T} \mathbf{P} \mathbf{x} + \lambda \left \| \mathbf{x} - \mathbf{index} \right \|_2$ This lets us choose how much priority we give to minimizing the difference from the index, relative to minimizing the variance of the portfolio: the higher the value of `scale` ($\lambda$), the more closely the portfolio weights will track the index. We can find the objective function using cvxpy `objective = cvx.Minimize()`. Can you guess what to pass into this function? ### constraints We can also define our constraints in a list. For example, you'd want the weights to sum to one. So $\sum_{i=1}^{n}x_i = 1$. You may also need to go long only, which means no shorting, so no negative weights. So $x_i \geq 0$ for all $i$. You could save a variable as `[x >= 0, sum(x) == 1]`, where x was created using `cvx.Variable()`. ### optimization So now that we have our objective function and constraints, we can solve for the values of $\mathbf{x}$. cvxpy has the constructor `Problem(objective, constraints)`, which returns a `Problem` object. The `Problem` object has a method `solve()`, which returns the minimum of the solution. In this case, this is the minimum variance of the portfolio. It also updates the vector $\mathbf{x}$. We can check out the weights that gave the minimum portfolio variance by using `x.value` ``` import cvxpy as cvx def get_optimal_weights(covariance_returns, index_weights, scale=2.0): """ Find the optimal weights. 
Parameters ---------- covariance_returns : 2 dimensional Ndarray The covariance of the returns index_weights : Pandas Series Index weights for all tickers at a period in time scale : float The penalty factor for weights that deviate from the index Returns ------- x : 1 dimensional Ndarray The solution for x """ assert len(covariance_returns.shape) == 2 assert len(index_weights.shape) == 1 assert covariance_returns.shape[0] == covariance_returns.shape[1] == index_weights.shape[0] #TODO: Implement function x = cvx.Variable(len(index_weights)) objective = cvx.Minimize(cvx.quad_form(x, covariance_returns) + scale*cvx.norm(x - index_weights, 2)) constraints = [ x >= 0, sum(x) == 1] cvx.Problem(objective, constraints).solve() return x.value project_tests.test_get_optimal_weights(get_optimal_weights) ``` ## Optimized Portfolio Using the `get_optimal_weights` function, let's generate the optimal ETF weights without rebalancing. We can do this by feeding in the covariance of the entire history of data. We also need to feed in a set of index weights; we'll use the most recent set of index weights. ``` raw_optimal_single_rebalance_etf_weights = get_optimal_weights(covariance_returns.values, index_weights.iloc[-1]) optimal_single_rebalance_etf_weights = pd.DataFrame( np.tile(raw_optimal_single_rebalance_etf_weights, (len(returns.index), 1)), returns.index, returns.columns) ``` With our ETF weights built, let's compare it to the index. Run the next cell to calculate the ETF returns and compare it to the index returns. 
``` optim_etf_returns = generate_weighted_returns(returns, optimal_single_rebalance_etf_weights) optim_etf_cumulative_returns = calculate_cumulative_returns(optim_etf_returns) project_helper.plot_benchmark_returns(index_weighted_cumulative_returns, optim_etf_cumulative_returns, 'Optimized ETF vs Index') optim_etf_tracking_error = tracking_error(np.sum(index_weighted_returns, 1), np.sum(optim_etf_returns, 1)) print('Optimized ETF Tracking Error: {}'.format(optim_etf_tracking_error)) ``` ## Rebalance Portfolio Over Time The single optimized ETF portfolio used the same weights for the entire history. These might not be the optimal weights for the entire period. Let's rebalance the portfolio over the same period instead of using the same weights. Implement `rebalance_portfolio` to rebalance a portfolio. Rebalance the portfolio every n days, which is given as `shift_size`. When rebalancing, you should look back a certain number of days of data in the past, denoted as `chunk_size`. Using this data, compute the optimal weights using `get_optimal_weights` and `get_covariance_returns`. ``` def rebalance_portfolio(returns, index_weights, shift_size, chunk_size): """ Get weights for each rebalancing of the portfolio. 
Parameters ---------- returns : DataFrame Returns for each ticker and date index_weights : DataFrame Index weight for each ticker and date shift_size : int The number of days between each rebalance chunk_size : int The number of days to look in the past for rebalancing Returns ------- all_rebalance_weights : list of Ndarrays The ETF weights for each point they are rebalanced """ assert returns.index.equals(index_weights.index) assert returns.columns.equals(index_weights.columns) assert shift_size > 0 assert chunk_size >= 0 #TODO: Implement function all_rebalance_weights = [] for shift in range(chunk_size, len(returns), shift_size): start_idx = shift - chunk_size covariance_returns = get_covariance_returns(returns.iloc[start_idx:shift]) all_rebalance_weights.append(get_optimal_weights(covariance_returns, index_weights.iloc[shift-1])) return all_rebalance_weights project_tests.test_rebalance_portfolio(rebalance_portfolio) ``` Run the following cell to rebalance the portfolio using `rebalance_portfolio`. ``` chunk_size = 250 shift_size = 5 all_rebalance_weights = rebalance_portfolio(returns, index_weights, shift_size, chunk_size) ``` ## Portfolio Turnover With the portfolio rebalanced, we need to use a metric to measure the cost of rebalancing the portfolio. Implement `get_portfolio_turnover` to calculate the annual portfolio turnover. We'll be using the formulas used in the classroom: $ AnnualizedTurnover =\frac{SumTotalTurnover}{NumberOfRebalanceEvents} * NumberofRebalanceEventsPerYear $ $ SumTotalTurnover =\sum_{t,n}{\left | x_{t,n} - x_{t+1,n} \right |} $ Where $ x_{t,n} $ are the weights at time $ t $ for equity $ n $. $ SumTotalTurnover $ is just a different way of writing $ \sum \left | x_{t_1,n} - x_{t_2,n} \right | $ ``` def get_portfolio_turnover(all_rebalance_weights, shift_size, rebalance_count, n_trading_days_in_year=252): """ Calculate portfolio turnover. 
Parameters ---------- all_rebalance_weights : list of Ndarrays The ETF weights for each point they are rebalanced shift_size : int The number of days between each rebalance rebalance_count : int Number of times the portfolio was rebalanced n_trading_days_in_year: int Number of trading days in a year Returns ------- portfolio_turnover : float The portfolio turnover """ assert shift_size > 0 assert rebalance_count > 0 #TODO: Implement function all_rebalance_weights_df = pd.DataFrame(np.array(all_rebalance_weights)) rebalance_total = (all_rebalance_weights_df - all_rebalance_weights_df.shift(-1)).abs().sum().sum() rebalance_avg = rebalance_total / rebalance_count rebalances_per_year = n_trading_days_in_year / shift_size return rebalance_avg * rebalances_per_year project_tests.test_get_portfolio_turnover(get_portfolio_turnover) ``` Run the following cell to get the portfolio turnover from `get_portfolio_turnover`. ``` print(get_portfolio_turnover(all_rebalance_weights, shift_size, len(all_rebalance_weights) - 1)) ``` That's it! You've built a smart beta portfolio in part 1 and did portfolio optimization in part 2. You can now submit your project. ## Submission Now that you're done with the project, it's time to submit it. Click the submit button in the bottom right. One of our reviewers will give you feedback on your project with a pass or not-passed grade. You can continue to the next section while you wait for feedback.
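As a quick numeric sanity check of the turnover formulas from the Portfolio Turnover section, here is a tiny sketch with made-up weight vectors (three equities, two rebalance events):

```python
import numpy as np

# Hypothetical weights at two consecutive rebalances of a 3-equity portfolio
weights = [np.array([0.5, 0.3, 0.2]),
           np.array([0.4, 0.4, 0.2])]

shift_size = 5                       # trading days between rebalances
n_trading_days_in_year = 252
rebalance_count = len(weights) - 1   # number of rebalance transitions

# SumTotalTurnover: total absolute weight change across consecutive rebalances
sum_total_turnover = sum(np.abs(w2 - w1).sum()
                         for w1, w2 in zip(weights, weights[1:]))

rebalances_per_year = n_trading_days_in_year / shift_size       # 50.4
annualized_turnover = sum_total_turnover / rebalance_count * rebalances_per_year
print(annualized_turnover)  # 0.2 / 1 * 50.4 ≈ 10.08
```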
# Assignment: (Kaggle) Titanic Survival Prediction https://www.kaggle.com/c/titanic # [Objective] - Try adjusting the feature-selection thresholds and observe their effect # [Key Points] - Adjust the threshold of the correlation-coefficient filter and observe how it changes the selected features (In[5]~In[8], Out[5]~Out[8]) - Adjust the threshold of L1-embedding selection and observe how it changes the selected features (In[9]~In[11], Out[9]~Out[11]) ``` # All preparation before feature engineering (same as the previous example) import pandas as pd import numpy as np import copy from sklearn.preprocessing import LabelEncoder, MinMaxScaler from sklearn.model_selection import cross_val_score from sklearn.linear_model import LogisticRegression data_path = '../../data/' df = pd.read_csv(data_path + 'titanic_train.csv') train_Y = df['Survived'] df = df.drop(['PassengerId'] , axis=1) df.head() %matplotlib inline # Compute the correlation matrix of df and plot it as a heatmap import seaborn as sns import matplotlib.pyplot as plt corr = df.corr() sns.heatmap(corr) plt.show() # Remember to drop Survived df = df.drop(['Survived'] , axis=1) # Keep only the int64 and float64 numeric columns, stored in num_features num_features = [] for dtype, feature in zip(df.dtypes, df.columns): if dtype == 'float64' or dtype == 'int64': num_features.append(feature) print(f'{len(num_features)} Numeric Features : {num_features}\n') # Drop the text columns, keeping only the numeric ones df = df[num_features] df = df.fillna(-1) MMEncoder = MinMaxScaler() df.head() ``` # Assignment 1 * In the Titanic survival prediction task, try at least two different correlation-coefficient thresholds and observe whether predictive performance improves. 
``` # Original features + logistic regression train_X = MMEncoder.fit_transform(df.astype(np.float64)) estimator = LogisticRegression(solver='lbfgs') cross_val_score(estimator, train_X, train_Y, cv=5).mean() # Correlation filter, threshold 1 high_list = list(corr[(corr['Survived']>0.1) | (corr['Survived']<-0.1)].index) high_list.remove('Survived') print(high_list) # Feature set 1 + logistic regression train_X = MMEncoder.fit_transform(df[high_list].astype(np.float64)) cross_val_score(estimator, train_X, train_Y, cv=5).mean() # Correlation filter, threshold 2: a lower threshold of 0.05 # (the original condition `<0.1 | >-0.1` was always true and selected every column) high_list = list(corr[(corr['Survived']>0.05) | (corr['Survived']<-0.05)].index) high_list.remove('Survived') print(high_list) # Feature set 2 + logistic regression train_X = MMEncoder.fit_transform(df[high_list].astype(np.float64)) cross_val_score(estimator, train_X, train_Y, cv=5).mean() ``` # Assignment 2 * Continuing from the previous question, use L1 embedding for feature selection (with a custom threshold) and observe whether predictive performance improves. ``` from sklearn.linear_model import Lasso # Select the regularization strength alpha L1_Reg = Lasso(alpha=0.003) train_X = MMEncoder.fit_transform(df.astype(np.float64)) L1_Reg.fit(train_X, train_Y) L1_Reg.coef_ from itertools import compress L1_mask = list(L1_Reg.coef_ != 0) L1_list = list(compress(list(df), L1_mask)) L1_list # L1-embedding features + logistic regression train_X = MMEncoder.fit_transform(df[L1_list].astype(np.float64)) cross_val_score(estimator, train_X, train_Y, cv=5).mean() ```
# Vectorization ## Overview Vectorization converts an item into a vector. Its prerequisite steps are syntax parsing, component decomposition, and tokenization. This section covers how to obtain the dataset, how to use a locally stored pretrained model, and how to call a remotely provided pretrained model directly. ## Obtaining the Dataset ### Overview The dataset is obtained from [OpenLUNA.json](http://base.ustc.edu.cn/data/OpenLUNA/OpenLUNA.json). ## I2V ### Overview Convert the given item text into vectors using any pretrained model you supply (just provide the path where the model is stored). - Advantage: you can use your own model and adjust its training parameters, which offers great flexibility. ### D2V #### Import the class ``` from EduNLP.I2V import D2V ``` #### Input Type: str Content: item text ``` items = [ r"1如图几何图形.此图由三个半圆构成,三个半圆的直径分别为直角三角形$ABC$的斜边$BC$, 直角边$AB$, $AC$.$\bigtriangleup ABC$的三边所围成的区域记为$I$,黑色部分记为$II$, 其余部分记为$III$.在整个图形中随机取一点,此点取自$I,II,III$的概率分别记为$p_1,p_2,p_3$,则$\SIFChoice$$\FigureID{1}$", r"2如图来自古希腊数学家希波克拉底所研究的几何图形.此图由三个半圆构成,三个半圆的直径分别为直角三角形$ABC$的斜边$BC$, 直角边$AB$, $AC$.$\bigtriangleup ABC$的三边所围成的区域记为$I$,黑色部分记为$II$, 其余部分记为$III$.在整个图形中随机取一点,此点取自$I,II,III$的概率分别记为$p_1,p_2,p_3$,则$\SIFChoice$$\FigureID{1}$" ] ``` #### Output ``` model_path = "./d2v/test_d2v_256.bin" i2v = D2V("pure_text","d2v",filepath=model_path, pretrained_t2v = False) item_vectors, token_vectors = i2v(items) print(item_vectors[0]) print(token_vectors) # For d2v, token_vector is None print("shape of item_vector: ",len(item_vectors), item_vectors[0].shape) ``` ### W2V ``` from EduNLP.I2V import W2V model_path = "./w2v/general_literal_300/general_literal_300.kv" i2v = W2V("pure_text","w2v",filepath=model_path, pretrained_t2v = False) item_vectors, token_vectors = i2v(items) print(item_vectors[0]) print(token_vectors[0][0]) print("shape of item_vectors: ", len(item_vectors), item_vectors[0].shape) print("shape of token_vectors: ", len(token_vectors), len(token_vectors[0]), len(token_vectors[0][0])) ``` ## get_pretrained_i2v ### Overview Convert the given item text into vectors using a pretrained model provided by the EduNLP project. - Advantage: simple and convenient. - Disadvantage: only the models provided by the project can be used, which is fairly restrictive. ### Import the function ``` from EduNLP import get_pretrained_i2v ``` ### Input Type: str Content: item text ``` items = [ "如图来自古希腊数学家希波克拉底所研究的几何图形.此图由三个半圆构成,三个半圆的直径分别为直角三角形$ABC$的斜边$BC$, 直角边$AB$, $AC$.$\bigtriangleup ABC$的三边所围成的区域记为$I$,黑色部分记为$II$, 其余部分记为$III$.在整个图形中随机取一点,此点取自$I,II,III$的概率分别记为$p_1,p_2,p_3$,则$\SIFChoice$$\FigureID{1}$" ] ``` ### Model Selection and Usage 
Choose a pretrained model according to the subject of the items: Pretrained model | Subject of the model's training data -------------- | ---------------------- d2v_all_256 | all subjects d2v_sci_256 | science d2v_eng_256 | English d2v_lit_256 | liberal arts w2v_eng_300 | English w2v_lit_300 | liberal arts ``` i2v = get_pretrained_i2v("d2v_sci_256", model_dir="./d2v") ``` - Note: by default, the EduNLP project directory is under the home directory (`~/.EduNLP`), and models are stored in the `model` folder inside it. You can change the storage locations by modifying the environment variables below: - EduNLP project directory: `EDUNLPPATH = xx/xx/xx` - Model directory: `EDUNLPMODELPATH = xx/xx/xx` ``` item_vectors, token_vectors = i2v(items) print(item_vectors) print(token_vectors) ```
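The environment-variable note above can be sketched in Python. The paths here are hypothetical placeholders, and it is an assumption (worth verifying against the EduNLP documentation) that the variables are read when the package is imported, so they must be set beforehand:

```python
import os

# Hypothetical paths -- substitute your own locations.
# Assumption: EduNLP reads these variables at import time, so set them
# before importing the package.
os.environ["EDUNLPPATH"] = "/data/EduNLP"              # project directory
os.environ["EDUNLPMODELPATH"] = "/data/EduNLP/model"   # model directory

# import EduNLP  # imported afterwards, picking up the paths above
```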
# Convert LaTeX Sentence to SymPy Expression ## Author: Ken Sible ## The following module will demonstrate a recursive descent parser for LaTeX. ### NRPy+ Source Code for this module: 1. [latex_parser.py](../edit/latex_parser.py); [\[**tutorial**\]](Tutorial-LaTeX_SymPy_Conversion.ipynb) The latex_parser.py script will convert a LaTeX sentence to a SymPy expression using the following function: parse(sentence). <a id='toc'></a> # Table of Contents $$\label{toc}$$ 1. [Step 1](#intro): Introduction: Lexical Analysis and Syntax Analysis 1. [Step 2](#sandbox): Demonstration and Sandbox (LaTeX Parser) 1. [Step 3](#tensor): Tensor Support with Einstein Notation (WIP) 1. [Step 4](#latex_pdf_output): $\LaTeX$ PDF Output <a id='intro'></a> # Step 1: Lexical Analysis and Syntax Analysis \[Back to [top](#toc)\] $$\label{intro}$$ In the following section, we discuss [lexical analysis](https://en.wikipedia.org/wiki/Lexical_analysis) (lexing) and [syntax analysis](https://en.wikipedia.org/wiki/Parsing) (parsing). In the process of lexical analysis, a lexer will tokenize a character string, called a sentence, using substring pattern matching. We implemented a regex-based lexer for NRPy+, which does pattern matching using a [regular expression](https://en.wikipedia.org/wiki/Regular_expression) for each token pattern. In the process of syntax analysis, a parser will receive a token iterator from the lexer and build a parse tree containing all syntactic information of the language, as specified by a [formal grammar](https://en.wikipedia.org/wiki/Formal_grammar). 
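To make the lexical-analysis step concrete, here is a minimal, self-contained regex-based tokenizer in the spirit of the description above. The token names and patterns are illustrative only, not the actual grammar dictionary used in latex_parser.py:

```python
import re

# Illustrative token patterns: each regex is paired with a token name.
# A name of None marks a pattern to skip (whitespace). Order matters:
# RATIONAL is listed before INTEGER so '2/3' is not split apart.
token_spec = [
    (r'\\sqrt',        'SQRT_CMD'),
    (r'[0-9]+/[0-9]+', 'RATIONAL'),
    (r'[0-9]+',        'INTEGER'),
    (r'[a-zA-Z]',      'SYMBOL'),
    (r'\^',            'CARET'),
    (r'\(',            'LEFT_PAREN'),
    (r'\)',            'RIGHT_PAREN'),
    (r'\{',            'LEFT_BRACE'),
    (r'\}',            'RIGHT_BRACE'),
    (r'\+',            'PLUS'),
    (r'\s+',           None),
]
pattern = re.compile('|'.join('(%s)' % regex for regex, _ in token_spec))

def tokenize(sentence):
    """Yield token names by repeated prefix matching against the patterns."""
    position = 0
    while position < len(sentence):
        match = pattern.match(sentence, position)
        if match is None:
            raise ValueError('unexpected character at position %d' % position)
        # Exactly one alternation group matched; map it back to its token name.
        name = token_spec[match.lastindex - 1][1]
        if name is not None:
            yield name
        position = match.end()
```

Running `list(tokenize(r'\sqrt{5}(x + 2/3)^2'))` yields the token stream `SQRT_CMD, LEFT_BRACE, INTEGER, RIGHT_BRACE, LEFT_PAREN, SYMBOL, PLUS, RATIONAL, RIGHT_PAREN, CARET, INTEGER`, analogous to the NRPy+ lexer demonstration below.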
We implemented a [recursive descent parser](https://en.wikipedia.org/wiki/Recursive_descent_parser) for NRPy+, which will build a parse tree in [preorder](https://en.wikipedia.org/wiki/Tree_traversal#Pre-order_(NLR)), starting from the root [nonterminal](https://en.wikipedia.org/wiki/Terminal_and_nonterminal_symbols), using a [right recursive](https://en.wikipedia.org/wiki/Left_recursion) grammar. The following right recursive, [context-free grammar](https://en.wikipedia.org/wiki/Context-free_grammar) was written for parsing [LaTeX](https://en.wikipedia.org/wiki/LaTeX), adhering to the canonical (extended) [BNF](https://en.wikipedia.org/wiki/Backus%E2%80%93Naur_form) notation used for describing a context-free grammar: ``` <ROOT> -> <EXPRESSION> | <STRUCTURE> { <LINE_BREAK> <STRUCTURE> }* <STRUCTURE> -> <CONFIG> | <ENVIROMENT> | <ASSIGNMENT> <ENVIROMENT> -> <BEGIN_ALIGN> <ASSIGNMENT> { <LINE_BREAK> <ASSIGNMENT> }* <END_ALIGN> <ASSIGNMENT> -> <VARIABLE> = <EXPRESSION> <EXPRESSION> -> <TERM> { ( '+' | '-' ) <TERM> }* <TERM> -> <FACTOR> { [ '/' ] <FACTOR> }* <FACTOR> -> <BASE> { '^' <EXPONENT> }* <BASE> -> [ '-' ] ( <ATOM> | '(' <EXPRESSION> ')' | '[' <EXPRESSION> ']' ) <EXPONENT> -> <BASE> | '{' <BASE> '}' <ATOM> -> <VARIABLE> | <NUMBER> | <COMMAND> <VARIABLE> -> <ARRAY> | <SYMBOL> [ '_' ( <SYMBOL> | <INTEGER> ) ] <NUMBER> -> <RATIONAL> | <DECIMAL> | <INTEGER> <COMMAND> -> <SQRT> | <FRAC> <SQRT> -> '\\sqrt' [ '[' <INTEGER> ']' ] '{' <EXPRESSION> '}' <FRAC> -> '\\frac' '{' <EXPRESSION> '}' '{' <EXPRESSION> '}' <CONFIG> -> '%' <ARRAY> '[' <INTEGER> ']' [ ':' <SYMMETRY> ] { ',' <ARRAY> '[' <INTEGER> ']' [ ':' <SYMMETRY> ] }* <ARRAY> -> ( <SYMBOL> | <TENSOR> ) [ '_' ( <SYMBOL> | '{' { <SYMBOL> }+ '}' ) [ '^' ( <SYMBOL> | '{' { <SYMBOL> }+ '}' ) ] | '^' ( <SYMBOL> | '{' { <SYMBOL> }+ '}' ) [ '_' ( <SYMBOL> | '{' { <SYMBOL> }+ '}' ) ] ] ``` <small>**Source**: Robert W. Sebesta. Concepts of Programming Languages. 
Pearson Education Limited, 2016.</small> ``` from latex_parser import * # Import NRPy+ module for lexing and parsing LaTeX from sympy import srepr # Import SymPy function for expression tree representation lexer = Lexer(); lexer.initialize(r'\sqrt{5}(x + 2/3)^2') print(', '.join(token for token in lexer.tokenize())) expr = parse(r'\sqrt{5}(x + 2/3)^2', expression=True) print(expr, ':', srepr(expr)) ``` #### `Grammar Derivation: (x + 2/3)^2` ``` <EXPRESSION> -> <TERM> -> <FACTOR> -> <BASE>^<EXPONENT> -> (<EXPRESSION>)^<EXPONENT> -> (<TERM> + <TERM>)^<EXPONENT> -> (<FACTOR> + <TERM>)^<EXPONENT> -> (<BASE> + <TERM>)^<EXPONENT> -> (<ATOM> + <TERM>)^<EXPONENT> -> (<VARIABLE> + <TERM>)^<EXPONENT> -> (<SYMBOL> + <TERM>)^<EXPONENT> -> (x + <TERM>)^<EXPONENT> -> (x + <FACTOR>)^<EXPONENT> -> (x + <BASE>)^<EXPONENT> -> (x + <ATOM>)^<EXPONENT> -> (x + <NUMBER>)^<EXPONENT> -> (x + <RATIONAL>)^<EXPONENT> -> (x + 2/3)^<EXPONENT> -> (x + 2/3)^<BASE> -> (x + 2/3)^<ATOM> -> (x + 2/3)^<NUMBER> -> (x + 2/3)^<INTEGER> -> (x + 2/3)^2 ``` <a id='sandbox'></a> # Step 2: Demonstration and Sandbox (LaTeX Parser) \[Back to [top](#toc)\] $$\label{sandbox}$$ We implemented a wrapper function for the `parse()` method that will accept a LaTeX sentence and return a SymPy expression. Furthermore, the entire parsing module was designed for extensibility. We apply the following procedure for extending parser functionality to include an unsupported LaTeX command: append that command to the grammar dictionary in the Lexer class with the mapping regex:token, write a grammar abstraction (similar to a regular expression) for that command, add the associated nonterminal (the command name) to the command abstraction in the Parser class, and finally implement the straightforward (private) method for parsing the grammar abstraction. We shall demonstrate the extension procedure using the `\sqrt` LaTeX command. 
```<SQRT> -> '\\sqrt' [ '[' <INTEGER> ']' ] '{' <EXPRESSION> '}'``` ``` def _sqrt(self): if self.accept('LEFT_BRACKET'): integer = self.lexer.lexeme self.expect('INTEGER') root = Rational(1, integer) self.expect('RIGHT_BRACKET') else: root = Rational(1, 2) self.expect('LEFT_BRACE') expr = self.__expr() self.expect('RIGHT_BRACE') return Pow(expr, root) ``` ``` print(parse(r'\sqrt[3]{\alpha_0}', expression=True)) ``` In addition to expression parsing, we included support for equation parsing, which will produce a dictionary mapping LHS $\mapsto$ RHS, where LHS must be a symbol, and insert that mapping into the global namespace of the previous stack frame, as demonstrated below. ``` parse(r'x = n\sqrt{2}^n'); print(x) ``` We implemented robust error messaging using the custom `ParseError` exception, which should handle every conceivable case to identify, as detailed as possible, invalid syntax inside of a LaTeX sentence. The following are runnable examples of possible error messages (simply uncomment and run the cell): ``` # parse(r'\sqrt[*]{2}') # ParseError: \sqrt[*]{2} # ^ # unexpected '*' at position 6 # parse(r'\sqrt[0.5]{2}') # ParseError: \sqrt[0.5]{2} # ^ # expected token INTEGER at position 6 # parse(r'\command{}') # ParseError: \command{} # ^ # unsupported command '\command' at position 0 from warnings import filterwarnings # Import Python function for warning suppression filterwarnings('ignore', category=OverrideWarning); del Parser.namespace['x'] ``` In the sandbox code cell below, you can experiment with the LaTeX parser using the wrapper function `parse(sentence)`, where sentence must be a [raw string](https://docs.python.org/3/reference/lexical_analysis.html) to interpret a backslash as a literal character rather than an [escape sequence](https://en.wikipedia.org/wiki/Escape_sequence). 
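A quick illustration of why the raw-string prefix matters here: without it, Python translates recognized escape sequences into control characters, silently corrupting a LaTeX command before the lexer ever sees it.

```python
# '\n' in a plain string becomes a newline character; the r-prefix keeps
# the backslash literal, which is what a LaTeX command like '\nabla' needs.
plain = '\negative'   # '\n' + 'egative'
raw = r'\negative'    # '\' + 'negative'

print(len(plain))  # 8
print(len(raw))    # 9
```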
``` # Write Sandbox Code Here ``` <a id='tensor'></a> # Step 3: Tensor Support with Einstein Notation (WIP) \[Back to [top](#toc)\] $$\label{tensor}$$ In the following section, we demonstrate the current parser support for tensor notation using the Einstein summation convention. The first example will parse an equation for a tensor contraction, the second will parse an equation for raising an index using the metric tensor, and the third will parse an align environment with an equation dependency. In each example, every tensor should appear either on the LHS of an equation or inside of a configuration before appearing on the RHS of an equation. Moreover, the parser will raise an exception upon violation of the Einstein summation convention, i.e. an invalid free or bound index. **Configuration Syntax** `% <TENSOR> [<DIMENSION>]: <SYMMETRY>, <TENSOR> [<DIMENSION>]: <SYMMETRY>, ... ;` #### Example 1 LaTeX Source | Rendered LaTeX :----------- | :------------- <pre lang="latex"> h = h^\\mu{}_\\mu </pre> | $$ h = h^\mu{}_\mu $$ ``` parse(r""" % h^\mu_\mu [4]: nosym; h = h^\mu{}_\mu """) print('h =', h) ``` #### Example 2 LaTeX Source | Rendered LaTeX :----------- | :------------- <pre lang="latex"> v^\\mu = g^{\\mu\\nu}v_\\nu </pre> | $$ v^\mu = g^{\mu\nu}v_\nu $$ ``` parse(r""" % g^{\mu\nu} [3]: metric, v_\nu [3]; v^\mu = g^{\mu\nu}v_\nu """) print('vU =', vU) ``` #### Example 3 LaTeX Source | Rendered LaTeX :----------- | :------------- <pre lang="latex"> \\begin{align\*}<br>&emsp;&emsp;&emsp; R &= g_{ab}R^{ab} \\\\ <br>&emsp;&emsp;&emsp; G^{ab} &= R^{ab} - \\frac{1}{2}g^{ab}R <br> \\end{align\*} </pre> | $$ \begin{align*} R &= g_{ab}R^{ab} \\ G^{ab} &= R^{ab} - \frac{1}{2}g^{ab}R \end{align*} $$ ``` parse(r""" % g_{ab} [2]: metric, R^{ab} [2]: sym01; \begin{align*} R &= g_{ab}R^{ab} \\ G^{ab} &= R^{ab} - \frac{1}{2}g^{ab}R \end{align*} """) print('R =', R) display(GUU) ``` The static variable `namespace` for the `Parser` class will provide access to the global namespace 
of the parser across each instance of the class. ``` Parser.namespace ``` We extended our robust error messaging using the custom `TensorError` exception, which should handle any inconsistent tensor dimension and any violation of the Einstein summation convention, specifically that a bound index must appear exactly once as a superscript and exactly once as a subscript in any single term and that a free index must appear in every term with the same position and cannot be summed over in any term. The following are runnable examples of possible error messages (simply uncomment and run the cell): ``` # parse(r""" # % h^{\mu\mu}_{\mu\mu} [4]: nosym; # h = h^{\mu\mu}_{\mu\mu} # """) # TensorError: illegal bound index # parse(r""" # % g^\mu_\nu [3]: sym01, v_\nu [3]; # v^\mu = g^\mu_\nu v_\nu # """) # TensorError: illegal bound index # parse(r""" # % g^{\mu\nu} [3]: sym01, v_\mu [3], w_\nu [3]; # u^\mu = g^{\mu\nu}(v_\mu + w_\nu) # """) # TensorError: unbalanced free index ``` <a id='latex_pdf_output'></a> # Step 4: Output this notebook to $\LaTeX$-formatted PDF file \[Back to [top](#toc)\] $$\label{latex_pdf_output}$$ The following code cell converts this Jupyter notebook into a proper, clickable $\LaTeX$-formatted PDF file. After the cell is successfully run, the generated PDF may be found in the root NRPy+ tutorial directory, with filename [Tutorial-LaTeX_SymPy_Conversion.pdf](Tutorial-LaTeX_SymPy_Conversion.pdf) (Note that clicking on this link may not work; you may need to open the PDF file through another means.) ``` import cmdline_helper as cmd # NRPy+: Multi-platform Python command-line interface cmd.output_Jupyter_notebook_to_LaTeXed_PDF("Tutorial-LaTeX_SymPy_Conversion") ```
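The index rules described in Step 3 — a bound index appears exactly once as a superscript and once as a subscript in a term, while a free index appears once — can be sketched as a tiny standalone checker. This is illustrative only, not the parser's actual implementation:

```python
from collections import Counter

def classify_indices(superscripts, subscripts):
    """Split a single term's indices into free and bound sets, raising on
    any repetition that violates the Einstein summation convention."""
    up, down = Counter(superscripts), Counter(subscripts)
    free, bound = set(), set()
    for idx in set(up) | set(down):
        if up[idx] == 1 and down[idx] == 1:
            bound.add(idx)          # summed over: once up, once down
        elif up[idx] + down[idx] == 1:
            free.add(idx)           # appears exactly once in the term
        else:
            raise ValueError('illegal bound index %r' % idx)
    return free, bound

# For the term g^{mu nu} v_nu: nu is bound, mu is free
free, bound = classify_indices(['mu', 'nu'], ['nu'])
print(free, bound)  # {'mu'} {'nu'}
```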