# BetterReads: Optimizing GoodReads review data This notebook explores how to achieve the best results with the BetterReads algorithm when using review data scraped from GoodReads. It is a short follow-up to the exploration performed in the `03_optimizing_reviews.ipynb` notebook. We have two options when scraping review data from GoodReads: For any given book, we can either scrape 1,500 reviews, with 300 reviews for each star rating (1 to 5), or we can scrape just the top 300 reviews, of any rating. (This is due to some quirks in the way that reviews are displayed on the GoodReads website; for more information, see my [GoodReadsReviewsScraper script](https://github.com/williecostello/GoodReadsReviewsScraper).) There are advantages and disadvantages to both options. If we scrape 1,500 reviews, we obviously have more review data to work with; however, the data is artificially class-balanced, such that, for example, we'll still see a good number of negative reviews even if the vast majority of the book's reviews are positive. If we scrape just the top 300 reviews, we will have a more representative dataset, but much less data to work with. We saw in the `03_optimizing_reviews.ipynb` notebook that the BetterReads algorithm can achieve meaningful and representative results from a dataset with fewer than 100 reviews. So we should not dismiss the 300-review option simply because it involves less data. We should only dismiss it if its smaller dataset leads to worse results. So let's try these two options out on a particular book and see how the algorithm performs.
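To make the class-balancing concern concrete, here is a hedged toy illustration (synthetic ratings, not real GoodReads data) of how a forced 300-per-star sample drags the average rating toward 3.0 even when the underlying population of reviews skews positive:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "true" population of ratings, skewed positive like many popular books
true_ratings = rng.choice([1, 2, 3, 4, 5], size=10_000, p=[0.05, 0.05, 0.2, 0.3, 0.4])

# A class-balanced scrape: exactly 300 reviews per star rating
balanced_sample = np.repeat([1, 2, 3, 4, 5], 300)

print(f'Population mean rating: {true_ratings.mean():.2f}')   # close to 3.95
print(f'Balanced sample mean:   {balanced_sample.mean():.2f}')  # exactly 3.00
```

Whatever the real distribution of ratings, the balanced sample's mean is pinned to 3.0 by construction, which is why the 1,500-review dataset cannot tell us how well-liked the book actually is.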
``` import numpy as np import pandas as pd import random from sklearn.cluster import KMeans import tensorflow_hub as hub # Loads Universal Sentence Encoder locally, from downloaded module embed = hub.load('../../Universal Sentence Encoder/module/') # Loads Universal Sentence Encoder remotely, from Tensorflow Hub # embed = hub.load("https://tfhub.dev/google/universal-sentence-encoder/4") ``` ## Which set of reviews should we use? For this notebook we'll work with a new example: Sally Rooney's *Conversations with Friends*. <img src='https://i.gr-assets.com/images/S/compressed.photo.goodreads.com/books/1500031338l/32187419._SY475_.jpg' width=250 align=center> We have prepared two datasets, one of 1,500 reviews and another of 300 reviews, as described above. Both datasets were scraped from GoodReads at the same time, so there is some overlap between them. (Note that the total number of reviews in both datasets is less than advertised, since non-English and very short reviews are dropped during data cleaning.) ``` # Set path for processed file file_path_1500 = 'data/32187419_conversations_with_friends.csv' file_path_300 = 'data/32187419_conversations_with_friends_top_300.csv' # Read in processed file as dataframe df_1500 = pd.read_csv(file_path_1500) df_300 = pd.read_csv(file_path_300) print(f'The first dataset consists of {df_1500.shape[0]} sentences from {df_1500["review_index"].nunique()} reviews') print(f'The second dataset consists of {df_300.shape[0]} sentences from {df_300["review_index"].nunique()} reviews') ``` As we can see above, in comparison to the smaller dataset, the bigger dataset contains approximately three times the number of sentences from four times the number of reviews. And as we can see below, the bigger dataset contains approximately the same number of reviews for each star rating, while the smaller dataset is much more heavily skewed toward 5 star and 4 star reviews. 
``` df_1500.groupby('review_index')['rating'].mean().value_counts().sort_index() df_300.groupby('review_index')['rating'].mean().value_counts().sort_index() ``` On [the book's actual GoodReads page](https://www.goodreads.com/book/show/32187419-conversations-with-friends), its average review rating is listed as 3.82 stars. This is nearly the same as the average review rating of our smaller dataset. The bigger dataset's average review rating, in contrast, is just less than 3. This confirms our earlier suspicion that the smaller dataset presents a more representative sample of the book's full set of reviews. ``` df_300.groupby('review_index')['rating'].mean().mean() df_1500.groupby('review_index')['rating'].mean().mean() ``` Let's see how these high-level differences affect the output of our algorithm. ``` def load_sentences(file_path): ''' Function to load and embed a book's sentences ''' # Read in processed file as dataframe df = pd.read_csv(file_path) # Copy sentence column to new variable sentences = df['sentence'].copy() # Vectorize sentences sentence_vectors = embed(sentences) return sentences, sentence_vectors def get_clusters(sentences, sentence_vectors, k, n): ''' Function to extract the n most representative sentences from k clusters, with density scores ''' # Instantiate the model kmeans_model = KMeans(n_clusters=k, random_state=24) # Fit the model kmeans_model.fit(sentence_vectors); # Set the number of cluster centre points to look at when calculating density score centre_points = int(len(sentences) * 0.02) # Initialize list to store mean inner product value for each cluster cluster_density_scores = [] # Initialize dataframe to store cluster centre sentences df = pd.DataFrame() # Loop through number of clusters for i in range(k): # Define cluster centre centre = kmeans_model.cluster_centers_[i] # Calculate inner product of cluster centre and sentence vectors ips = np.inner(centre, sentence_vectors) # Find the sentences with the highest inner products 
top_indices = pd.Series(ips).nlargest(n).index top_sentences = list(sentences[top_indices]) centre_ips = pd.Series(ips).nlargest(centre_points) density_score = round(np.mean(centre_ips), 5) # Append the cluster density score to master list cluster_density_scores.append(density_score) # Create new row with cluster's top n sentences and density score new_row = pd.DataFrame([[top_sentences, density_score]]) # Append new row to master dataframe (df.append was removed in pandas 2.0) df = pd.concat([df, new_row], ignore_index=True) # Rename dataframe columns df.columns = ['sentences', 'density'] # Sort dataframe by density score, from highest to lowest df = df.sort_values(by='density', ascending=False).reset_index(drop=True) # Loop through number of clusters selected for i in range(k): # Save density / similarity score & sentence list to variables sim_score = round(df.loc[i]['density'], 3) sents = df.loc[i]['sentences'].copy() print(f'Cluster #{i+1} sentences (density score: {sim_score}):\n') print(*sents, sep='\n') print('\n') model_density_score = round(np.mean(cluster_density_scores), 5) print(f'Model density score: {model_density_score}') # Load and embed sentences sentences_1500, sentence_vectors_1500 = load_sentences(file_path_1500) sentences_300, sentence_vectors_300 = load_sentences(file_path_300) # Get cluster sentences for bigger dataset get_clusters(sentences_1500, sentence_vectors_1500, k=6, n=8) # Get cluster sentences for smaller dataset get_clusters(sentences_300, sentence_vectors_300, k=6, n=8) ``` Let's summarize our results. The bigger dataset's sentence clusters can be summed up as follows: 1. Fantastic writing 1. Reading experience (?) 1. Unlikeable characters 1. Plot synopsis 1. Not enjoyable 1. Thematic elements: relationships & emotions The smaller dataset's clusters can be summed up like this: 1. Fantastic writing 1. Plot synopsis 1. Loved it 1. Unlikeable characters 1. Reading experience 1. 
Thematic elements: Relationships & emotions As we can see, the two sets of results are broadly similar; there are no radical differences between the two sets of clusters. The only major difference is that the bigger dataset includes a cluster of sentences expressing dislike of the book, whereas the smaller dataset includes a cluster of sentences expressing love of the book. But this was to be expected, given the relative proportions of positive and negative reviews between the two datasets. Given these results, we feel that the smaller dataset is preferable. Its clusters seem slightly more internally coherent and to better capture the general sentiment toward the book.
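For readers who want the gist of the pipeline without downloading the Universal Sentence Encoder, here is a hedged, self-contained sketch of the core technique on random stand-in vectors: cluster the embeddings with KMeans, then rank sentences within each cluster by their inner product with the cluster centre (the same quantity the density score averages):

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(24)

# Stand-ins for sentence embeddings: 30 unit-length vectors in 8 dimensions
vectors = rng.normal(size=(30, 8))
vectors /= np.linalg.norm(vectors, axis=1, keepdims=True)

kmeans = KMeans(n_clusters=3, n_init=10, random_state=24).fit(vectors)

for centre in kmeans.cluster_centers_:
    ips = vectors @ centre              # inner products with the cluster centre
    top = np.argsort(ips)[::-1][:2]     # indices of the 2 most central "sentences"
    density = ips[top].mean()           # a density-style score for the cluster
    print(top, round(density, 3))
```

With real embeddings, the `top` indices would map back to the review sentences printed for each cluster above.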
## Load data ``` from sklearn import datasets import pandas as pd # Note: load_boston was removed in scikit-learn 1.2, so this cell requires scikit-learn < 1.2 boston = datasets.load_boston() dat = pd.DataFrame(boston.data, columns=boston.feature_names) dat.head() target = pd.DataFrame(boston.target, columns=["MEDV"]) target.head() ``` ## Analyse data ``` df = dat.copy() df = pd.concat([df, target], axis=1) df.head() df.info() df.describe() from matplotlib import pyplot as plt import seaborn as sns # seaborn.apionly was removed in seaborn 0.9 sns.set() df.hist(bins=10, figsize=(15, 10)); plt.show() corr_matrix = df.corr() corr_matrix['MEDV'] sns.heatmap(corr_matrix); plt.show() print(boston['DESCR']) ``` Remove features that are less correlated with our target variable. ``` dat1 = df.loc[:, ['CRIM', 'ZN', 'INDUS', 'NOX', 'RM', 'AGE', 'RAD', 'TAX', 'PTRATIO', 'B', 'LSTAT']] dat1.head() ``` ## Split into train and test sets ``` from sklearn.model_selection import train_test_split X_train, X_test, y_train, y_test = train_test_split(dat1, target, test_size=0.2, random_state=42) y_train = y_train.values.ravel() ``` ## Cross validation to find best algorithm ``` from sklearn import model_selection from sklearn.metrics import mean_squared_error from sklearn.svm import SVR from sklearn.neighbors import KNeighborsRegressor from sklearn.tree import DecisionTreeRegressor from sklearn.ensemble import RandomForestRegressor from sklearn.linear_model import Lasso from sklearn.linear_model import ElasticNet from sklearn.linear_model import Ridge from sklearn.linear_model import BayesianRidge from sklearn.ensemble import GradientBoostingRegressor from sklearn.ensemble import AdaBoostRegressor from sklearn.ensemble import ExtraTreesRegressor from sklearn.ensemble import BaggingRegressor models = [] models.append(('SVR', SVR())) models.append(('KNN', KNeighborsRegressor())) models.append(('DT', DecisionTreeRegressor())) models.append(('RF', RandomForestRegressor())) models.append(('L', Lasso())) models.append(('EN', ElasticNet())) models.append(('R', Ridge())) 
models.append(('BR', BayesianRidge())) models.append(('GBR', GradientBoostingRegressor())) models.append(('AdaB', AdaBoostRegressor())) # was mislabelled as a duplicate 'RF' models.append(('ET', ExtraTreesRegressor())) models.append(('BgR', BaggingRegressor())) scoring = 'neg_mean_squared_error' results = [] names = [] for name, model in models: kfold = model_selection.KFold(n_splits=10, shuffle=True, random_state=42) # random_state requires shuffle=True cv_results = model_selection.cross_val_score(model, X_train, y_train, cv=kfold, scoring=scoring) results.append(cv_results) names.append(name) msg = "%s: %f (%f)" % (name, cv_results.mean(), cv_results.std()) print(msg) ``` ## Create pipeline ``` from sklearn import preprocessing from sklearn.pipeline import make_pipeline from sklearn.model_selection import GridSearchCV from sklearn.model_selection import RandomizedSearchCV pipeline = make_pipeline(preprocessing.StandardScaler(), GradientBoostingRegressor(random_state=42)) ``` ## Cross validation to fine tune ``` hyperparameters = { 'gradientboostingregressor__max_features': [None, 'sqrt', 'log2'], # 'auto' is no longer accepted by recent scikit-learn 'gradientboostingregressor__max_depth': [None, 5, 3, 1], 'gradientboostingregressor__n_estimators': [100, 150, 200, 250]} clf = GridSearchCV(pipeline, hyperparameters, cv=10, scoring=scoring) clf.fit(X_train, y_train); clf1 = RandomizedSearchCV(pipeline, hyperparameters, cv=10, random_state=42) clf1.fit(X_train, y_train); ``` ## Evaluate ``` pred = clf.predict(X_test) print("MSE for GridSearchCV: {}".format(mean_squared_error(y_test, pred))) pred1 = clf1.predict(X_test) print("MSE for RandomizedSearchCV: {}".format(mean_squared_error(y_test, pred1))) ``` ## Save ``` import joblib # sklearn.externals.joblib was removed in scikit-learn 0.23 joblib.dump(clf1, 'boston_regressor.pkl') clf2 = joblib.load('boston_regressor.pkl') ```
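One presentation note on the scores printed above: `scoring='neg_mean_squared_error'` yields negated MSEs, so that larger is always better from scikit-learn's point of view. A hedged sketch (with illustrative fold scores, not real outputs) of converting such scores into the more readable RMSE, in the target's own units:

```python
import numpy as np

# Example negated-MSE scores, shaped like cross_val_score output (illustrative values)
neg_mse_scores = np.array([-12.5, -9.8, -15.1])

# Flip the sign, then take the square root, to get an RMSE per fold
rmse_scores = np.sqrt(-neg_mse_scores)
print(f'RMSE: {rmse_scores.mean():.2f} (+/- {rmse_scores.std():.2f})')
```

Reporting RMSE rather than raw negated MSE makes the cross-validation table directly comparable to the test-set errors in the Evaluate section.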
# 2.18 Programming for Geoscientists class test 2016 # Test instructions * This test contains **4** questions each of which should be answered. * Write your program in a Python cell just under each question. * You can write an explanation of your solution as comments in your code. * In each case your solution program must fulfil all of the instructions - please check the instructions carefully and double check that your program fulfils all of the given instructions. * Save your work regularly. * At the end of the test you should email your IPython notebook document (i.e. this document) to [Gerard J. Gorman](http://www.imperial.ac.uk/people/g.gorman) at g.gorman@imperial.ac.uk **1.** The following cells contain at least one programming bug each. For each cell add a comment to identify and explain the bug, and correct the program. ``` # Function to calculate wave velocity. def wave_velocity(k, mu, rho): vp = sqrt((k+4*mu/3)/rho) return vp # Use the function to calculate the velocity of an # acoustic wave in water. vp = wave_velocity(k=0, mu=2.29e9, rho=1000) print "Velocity of acoustic wave in water: %d", vp data = (3.14, 2.29, 10, 12) data.append(4) line = "2015-12-14T06:29:15.740Z,19.4333324,-155.2906647,1.66,2.14,ml,17,248,0.0123,0.36,hv,hv61126056,2015-12-14T06:34:58.500Z,5km W of Volcano, Hawaii,earthquake" latitude = line.split(',')[1] longitude = line.split(',')[2] print "longitude, latitude = (%g, %g)"%(longitude, latitude) ``` **2.** The Ricker wavelet is frequently employed to model seismic data. The amplitude of the Ricker wavelet with peak frequency $f$ at time $t$ is computed as: $$A = (1-2 \pi^2 f^2 t^2) e^{-\pi^2 f^2 t^2}$$ * Implement a function which calculates the amplitude of the Ricker wavelet for a given peak frequency $f$ and time $t$. * Use a *for loop* to create a python *list* for time ranging from $-0.5$ to $0.5$, using a peak frequency, $f$, of $10$. 
* Using the function created above, calculate a numpy array of the Ricker wavelet amplitudes for these times. * Plot a graph of the Ricker wavelet amplitude against time. **3.** The data file [vp.dat](data/vp.dat) (all of the data files are stored in the sub-folder *data/* of this notebook library) contains a profile of the acoustic velocity with respect to depth. Depth is measured with respect to a reference point; therefore the first few entries contain NaN's indicating that they are actually above ground. * Write a function to read in the depth and acoustic velocity. * Ensure you skip the entries that contain NaN's. * Store depth and velocities in two separate numpy arrays. * Plot depth against velocity ensuring you label your axes. **4.** The file [BrachiopodBiometrics.csv](data/BrachiopodBiometrics.csv) contains the biometrics of Brachiopods found in 3 different locations. * Read the data file into a Python *dictionary*. * You should use the sample's location as the *key*. * For each key you should form a Python *list* containing tuples of *length* and *width* of each sample. * For each location, calculate the mean length and width of the samples. * Print the result for each location using a formatted print statement. The mean values should only be printed to one decimal place.
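As a reference point for question 2, here is one way the Ricker amplitude formula can be implemented (a hedged sketch only; it uses a vectorised time array rather than the for-loop list the question asks for):

```python
import numpy as np

def ricker(t, f):
    """Amplitude of the Ricker wavelet with peak frequency f at time t."""
    x = (np.pi * f * t) ** 2
    return (1.0 - 2.0 * x) * np.exp(-x)

# Times from -0.5 to 0.5, peak frequency of 10, as in the question
times = np.linspace(-0.5, 0.5, 201)
amplitudes = ricker(times, f=10)
print(ricker(0.0, 10))  # the wavelet peaks at amplitude 1 when t = 0
```

Plugging t = 0 into the formula gives (1 - 0)e^0 = 1, which is a handy sanity check for any implementation.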
# OGGM flowlines: where are they? In this notebook we show how to access the OGGM flowline locations before, during, and after a run. Some of the code shown here will make it to the OGGM codebase [eventually](https://github.com/OGGM/oggm/issues/1111). ``` from oggm import cfg, utils, workflow, tasks, graphics from oggm.core import flowline import salem import xarray as xr import pandas as pd import numpy as np import geopandas as gpd import matplotlib.pyplot as plt cfg.initialize(logging_level='WARNING') ``` ## Get ready ``` # Where to store the data cfg.PATHS['working_dir'] = utils.gettempdir(dirname='OGGM-flowlines', reset=True) # Which glaciers? rgi_ids = ['RGI60-11.00897'] # We start from prepro level 3 with all data ready gdirs = workflow.init_glacier_directories(rgi_ids, from_prepro_level=3, prepro_border=40) gdir = gdirs[0] gdir ``` ## Where is the terminus of the RGI glacier? There are several ways to get the terminus, depending on what you want. They do not necessarily give the exact same answer: ### Terminus as the lowest point on the glacier ``` # Get the topo data and the glacier mask with xr.open_dataset(gdir.get_filepath('gridded_data')) as ds: topo = ds.topo # Glacier outline raster mask = ds.glacier_ext topo.plot(); topo_ext = topo.where(mask==1) topo_ext.plot(); # Get the terminus terminus = topo_ext.where(topo_ext==topo_ext.min(), drop=True) # Project its coordinates from the local UTM to WGS-84 t_lon, t_lat = salem.transform_proj(gdir.grid.proj, 'EPSG:4326', terminus.x[0], terminus.y[0]) print('lon, lat:', t_lon, t_lat) print('google link:', f'https://www.google.com/maps/place/{t_lat},{t_lon}') ``` ### Terminus as the lowest point on the main centerline ``` # Get the centerlines cls = gdir.read_pickle('centerlines') # Get the coord of the last point of the main centerline cl = cls[-1] i, j = cl.line.coords[-1] # These coords are in glacier grid coordinates. 
Let's convert them to lon, lat: t_lon, t_lat = gdir.grid.ij_to_crs(i, j, crs='EPSG:4326') print('lon, lat:', t_lon, t_lat) print('google link:', f'https://www.google.com/maps/place/{t_lat},{t_lon}') ``` ### Terminus as the lowest point on the main flowline A "centerline" in the OGGM jargon is not the same as a "flowline". Flowlines have a fixed dx and their terminus is not necessarily exactly on the glacier outline. Code-wise it's very similar though: ``` # Get the flowlines cls = gdir.read_pickle('inversion_flowlines') # Get the coord of the last point of the main flowline cl = cls[-1] i, j = cl.line.coords[-1] # These coords are in glacier grid coordinates. Let's convert them to lon, lat: t_lon, t_lat = gdir.grid.ij_to_crs(i, j, crs='EPSG:4326') print('lon, lat:', t_lon, t_lat) print('google link:', f'https://www.google.com/maps/place/{t_lat},{t_lon}') ``` ### Bonus: convert the centerlines to a shapefile ``` output_dir = utils.mkdir('outputs') utils.write_centerlines_to_shape(gdirs, path=f'{output_dir}/centerlines.shp') sh = gpd.read_file(f'{output_dir}/centerlines.shp') sh.plot(); ``` Remember: the "centerlines" are not the same thing as "flowlines" in OGGM. The latter objects undergo further quality checks, such as the impossibility for ice to "climb", i.e. have negative slopes. The flowlines are therefore sometimes shorter than the centerlines: ``` utils.write_centerlines_to_shape(gdirs, path=f'{output_dir}/flowlines.shp', flowlines_output=True) sh = gpd.read_file(f'{output_dir}/flowlines.shp') sh.plot(); ``` ## Flowline geometry after a run: with the new flowline diagnostics (new in v1.6.0!!) ``` # TODO!!! 
Based on https://github.com/OGGM/oggm/pull/1308 ``` ## Flowline geometry after a run: with `FileModel` Let's do a run first: ``` cfg.PARAMS['store_model_geometry'] = True # We want to get back to it later tasks.init_present_time_glacier(gdir) tasks.run_constant_climate(gdir, nyears=100, y0=2000); ``` We use a `FileModel` to read the model output: ``` fmod = flowline.FileModel(gdir.get_filepath('model_geometry')) ``` A `FileModel` behaves like OGGM's `FlowlineModel`: ``` fmod.run_until(0) # Point the file model to year 0 in the output graphics.plot_modeloutput_map(gdir, model=fmod) # plot it fmod.run_until(100) # Point the file model to year 100 in the output graphics.plot_modeloutput_map(gdir, model=fmod) # plot it # Bonus - get back to e.g. the volume timeseries fmod.volume_km3_ts().plot(); ``` OK, now create a table of the main flowline's grid point locations and bed altitude (this does not change with time): ``` fl = fmod.fls[-1] # Main flowline i, j = fl.line.xy # xy flowline on grid lons, lats = gdir.grid.ij_to_crs(i, j, crs='EPSG:4326') # to WGS84 df_coords = pd.DataFrame(index=fl.dis_on_line*gdir.grid.dx) df_coords.index.name = 'Distance along flowline' df_coords['lon'] = lons df_coords['lat'] = lats df_coords['bed_elevation'] = fl.bed_h df_coords.plot(x='lon', y='lat'); df_coords['bed_elevation'].plot(); ``` Now store a time-varying array of ice thickness and surface elevation along this line: ``` years = np.arange(0, 101) df_thick = pd.DataFrame(index=df_coords.index, columns=years, dtype=np.float64) df_surf_h = pd.DataFrame(index=df_coords.index, columns=years, dtype=np.float64) for year in years: fmod.run_until(year) fl = fmod.fls[-1] df_thick[year] = fl.thick df_surf_h[year] = fl.surface_h df_thick[[0, 50, 100]].plot(); plt.title('Ice thickness at three points in time'); f, ax = plt.subplots() df_surf_h[[0, 50, 100]].plot(ax=ax); df_coords['bed_elevation'].plot(ax=ax, color='k'); plt.title('Glacier elevation at three points in 
time'); ``` ### Location of the terminus over time Let's find the indices where the terminus is (i.e. the last point where ice is thicker than 1m), and link these to the lon, lat positions along the flowlines. The first method uses fancy pandas functions but may be more cryptic for less experienced pandas users: ``` # Nice trick from https://stackoverflow.com/questions/34384349/find-index-of-last-true-value-in-pandas-series-or-dataframe dis_term = (df_thick > 1)[::-1].idxmax() # Select the terminus coordinates at these locations loc_over_time = df_coords.loc[dis_term].set_index(dis_term.index) # Plot them over time loc_over_time.plot.scatter(x='lon', y='lat', c=loc_over_time.index, colormap='viridis'); plt.title('Location of the terminus over time'); # Plot them on a google image - you need an API key for this # api_key = '' # from motionless import DecoratedMap, LatLonMarker # dmap = DecoratedMap(maptype='satellite', key=api_key) # for y in [0, 20, 40, 60, 80, 100]: # tmp = loc_over_time.loc[y] # dmap.add_marker(LatLonMarker(tmp.lat, tmp.lon, )) # print(dmap.generate_url()) ``` <img src='https://maps.googleapis.com/maps/api/staticmap?key=AIzaSyDWG_aTgfU7CeErtIzWfdGxpStTlvDXV_o&maptype=satellite&format=png&scale=1&size=400x400&sensor=false&language=en&markers=%7C46.818796056851475%2C10.802746777546085%7C46.81537664036365%2C10.793672904092187%7C46.80792268953582%2C10.777563608554978%7C46.7953190811109%2C10.766412086223571%7C46.79236232808986%2C10.75236937607986%7C46.79236232808986%2C10.75236937607986'> And now, method 2: less fancy but maybe easier to read? 
``` for yr in [0, 20, 40, 60, 80, 100]: # Find the last index of the terminus p_term = np.nonzero(df_thick[yr].values > 1)[0][-1] # Print the location of the terminus print(f'Terminus pos at year {yr}', df_coords.iloc[p_term][['lon', 'lat']].values) ``` ## Comments on "elevation band flowlines" If you use elevation band flowlines, the location of the flowlines is not known: indeed, the glacier is an even more simplified representation of the real world one. In this case, if you are interested in tracking the terminus position, you may need to use tricks, such as using the retreat from the terminus with time, or similar. ## What's next? - return to the [OGGM documentation](https://docs.oggm.org) - back to the [table of contents](welcome.ipynb)
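The reversed-`idxmax` trick used above is worth seeing in isolation. A hedged toy sketch (synthetic thickness values, no OGGM required) of finding the last point along a flowline where the ice is thicker than 1 m:

```python
import pandas as pd

# Toy thickness table: index = distance along flowline (m), columns = years
df_thick = pd.DataFrame(
    {0: [10.0, 5.0, 2.0, 0.0], 100: [10.0, 4.0, 0.5, 0.0]},
    index=[0, 100, 200, 300],
)

# Reversing the rows makes idxmax return the *last* index where the condition holds,
# because idxmax reports the first True it encounters
dis_term = (df_thick > 1)[::-1].idxmax()
print(dis_term)  # year 0 -> 200 m, year 100 -> 100 m
```

The same one-liner scales directly to the 101-year, full-resolution `df_thick` built in the run above.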
# Data ingestion & inspection ## 1. NumPy and pandas working together Pandas depends upon and interoperates with NumPy, the Python library for fast numeric array computations. For example, you can use the DataFrame attribute .values to represent a DataFrame df as a NumPy array. You can also pass pandas data structures to NumPy functions. In this exercise, we have imported pandas as pd and loaded world population data every 10 years since 1960 into the DataFrame df. This dataset was derived from the one used in the previous exercise. Your job is to extract the values and store them in an array using the attribute .values. You'll then use those values as input into the NumPy np.log10() function to compute the base 10 logarithm of the population values. Finally, you will pass the entire pandas DataFrame into the same NumPy np.log10() function and compare the results. ``` import pandas as pd df = pd.read_csv("datasets/world_population.csv") df.head() df.info() # Import numpy import numpy as np # Create array of DataFrame values: np_vals np_vals = df.values np_vals # Create new array of base 10 logarithm values: np_vals_log10 np_vals_log10 = np.log10(np_vals) np_vals_log10 # Create array of new DataFrame by passing df to np.log10(): df_log10 df_log10 = np.log10(df) df_log10 # Print original and new data containers [print(x, 'has type', type(eval(x))) for x in ['np_vals', 'np_vals_log10', 'df', 'df_log10']] ``` Wonderful work! As a data scientist, you'll frequently interact with NumPy arrays, pandas Series, and pandas DataFrames, and you'll leverage a variety of NumPy and pandas methods to perform your desired computations. Understanding how NumPy and pandas work together will prove to be very useful. ## 2. Zip lists to build a DataFrame In this exercise, you're going to make a pandas DataFrame of the top three countries to win gold medals since 1896 by first building a dictionary. list_keys contains the column names 'Country' and 'Total'. 
list_values contains the full names of each country and the number of gold medals awarded. The values have been taken from Wikipedia. Your job is to use these lists to construct a list of tuples, use the list of tuples to construct a dictionary, and then use that dictionary to construct a DataFrame. In doing so, you'll make use of the list(), zip(), dict() and pd.DataFrame() functions. Pandas has already been imported as pd. Note: The zip() function in Python 3 and above returns a special zip object, which is essentially a generator. To convert this zip object into a list, you'll need to use list(). ``` list_values = [['United States', 'Soviet Union', 'United Kingdom'], [1118, 473, 273]] list_keys = ['Country', 'Total'] # Zip the 2 lists together into one list of (key,value) tuples: zipped zipped = list(zip(list_keys, list_values)) # Inspect the list using zipped # Build a dictionary with the zipped list: data data = dict(zipped) data # Build and inspect a DataFrame from the dictionary: df df = pd.DataFrame(data) df ``` Fantastic! Being able to build DataFrames from scratch is an important skill. ## 3. Labeling your data You can use the DataFrame attribute df.columns to view and assign new string labels to columns in a pandas DataFrame. Import a DataFrame df containing top Billboard hits from the 1980s (from Wikipedia). Each row has the year, artist, song name and the number of weeks at the top. However, this DataFrame has the column labels a, b, c, d. Your job is to use the df.columns attribute to re-assign descriptive column labels. 
``` list_values = [[1980, 1981, 1982], ["Blondie", "Christopher Cross", "Joan Jett"], ["Call Me", "Arthurs Theme", "I Love Rock and Roll"], [6, 3, 7]] list_keys = ["a", "b", "c", "d"] df = pd.DataFrame(dict(zip(list_keys, list_values))) df # Build a list of labels: list_labels list_labels = ["year", "artist", "song", "chart weeks"] # Assign the list of labels to the columns attribute: df.columns df.columns = list_labels df ``` Great work! You'll often need to rename column names like this to be more informative. ## 4. Building DataFrames with broadcasting You can implicitly use 'broadcasting', a feature of NumPy, when creating pandas DataFrames. In this exercise, you're going to create a DataFrame of cities in Pennsylvania that contains the city name in one column and the state name in the second. We have imported the names of 15 cities as the list cities. Your job is to construct a DataFrame from the list of cities and the string 'PA'. ``` cities = ['Manheim', 'Preston park', 'Biglerville', 'Indiana', 'Curwensville', 'Crown', 'Harveys lake','Mineral springs', 'Cassville','Hannastown','Saltsburg','Tunkhannock','Pittsburgh','Lemasters','Great bend'] # Make a string with the value 'PA': state state = "PA" # Construct a dictionary: data data = {'state':state, 'city':cities} data # Construct a DataFrame from dictionary data: df df = pd.DataFrame(data) # Print the DataFrame df ``` Excellent job! Broadcasting is a powerful technique. ## 5. Reading a flat file Your job is to read the World Bank population data into a DataFrame using read_csv(). The next step is to reread the same file, but simultaneously rename the columns using the names keyword input parameter, set equal to a list of new column labels. You will also need to set header=0 to rename the column labels. Finish up by inspecting the result with df.head() and df.info() (changing df to the name of your DataFrame variable). 
``` data_file = "datasets/world_population.csv" # Read in the file: df1 df1 = pd.read_csv(data_file) df1 # Create a list of the new column labels: new_labels new_labels = ["year", "population"] # Read in the file, specifying the header and names parameters: df2 df2 = pd.read_csv(data_file, header=0, names=new_labels) df2 ``` Well done! Knowing how to read in flat files using pandas is a vital skill. ## 6. Delimiters, headers, and extensions Not all data files are clean and tidy. Pandas provides methods for reading those not-so-perfect data files that you encounter far too often. In this exercise, you have monthly stock data for four companies downloaded from Yahoo Finance. The data is stored as one row for each company and each column is the end-of-month closing price. In addition, this file has three aspects that may cause trouble for lesser tools: multiple header lines, comment records (rows) interleaved throughout the data rows, and space delimiters instead of commas. Your job is to use pandas to read the data from this problematic file_messy using non-default input options with read_csv() so as to tidy up the mess at read time. Then, write the cleaned up data to a CSV file with the variable file_clean that has been prepared for you, as you might do in a real data workflow. You can learn about the option input parameters needed by using help() on the pandas function pd.read_csv(). ``` file_messy = "datasets/messy_stock_data.tsv" # Inside messy_stock_data.tsv The following stock data was collect on 2016-AUG-25 from an unknown source These kind of ocmments are not very useful, are they? 
probably should just throw this line away too, but not the next since those are column labels name Jan Feb Mar Apr May Jun Jul Aug Sep Oct Nov Dec # So that line you just read has all the column headers labels IBM 156.08 160.01 159.81 165.22 172.25 167.15 164.75 152.77 145.36 146.11 137.21 137.96 MSFT 45.51 43.08 42.13 43.47 47.53 45.96 45.61 45.51 43.56 48.70 53.88 55.40 # That MSFT is MicroSoft GOOGLE 512.42 537.99 559.72 540.50 535.24 532.92 590.09 636.84 617.93 663.59 735.39 755.35 APPLE 110.64 125.43 125.97 127.29 128.76 127.81 125.34 113.39 112.80 113.36 118.16 111.73 # Maybe we should have bought some Apple stock in 2008? # Read the raw file as-is: df1 df1 = pd.read_csv(file_messy) df1 # Read in the file with the correct parameters: df2 # Comment attribute Indicates remainder of line should not be parsed. If found at the beginning # of a line, the line will be ignored altogether. df2 = pd.read_csv(file_messy, delimiter=" ", header=3, comment="#", index_col="name") df2 ``` Superb! It's important to be able to save your cleaned DataFrames in the desired file format! ``` import matplotlib.pyplot as plt df2.plot(kind="box"); ``` ## 7. Plotting series using pandas Data visualization is often a very effective first step in gaining a rough understanding of a data set to be analyzed. Pandas provides data visualization by both depending upon and interoperating with the matplotlib library. You will now explore some of the basic plotting mechanics with pandas as well as related matplotlib options. We have pre-loaded a pandas DataFrame df which contains the data you need. Your job is to use the DataFrame method df.plot() to visualize the data, and then explore the optional matplotlib input parameters that this .plot() method accepts. The pandas .plot() method makes calls to matplotlib to construct the plots. This means that you can use the skills you've learned in previous visualization courses to customize the plot. 
In this exercise, you'll add a custom title and axis labels to the figure. Before plotting, inspect the DataFrame in the IPython Shell using df.head(). Also, use type(df) and note that it is a single column DataFrame. ``` temp = pd.read_csv("datasets/weather_data_austin_2010.csv", index_col = "Date", parse_dates = True) temp.info() temp[temp.index < "2010-04-01"].tail() temp_to_april = temp[temp.index < "2010-04-01"] # Create a plot with color='red' temp_to_april.plot(y = "Temperature", color = "red", figsize=(18, 5)) # Add a title plt.title("Temperature in Austin") # Specify the x-axis label plt.xlabel("Hours since midnight August 1, 2010") # Specify the y-axis label plt.ylabel("Temperature (degrees F)") # Display the plot plt.show() ``` ## 8. Plotting DataFrames Comparing data from several columns can be very illuminating. Pandas makes doing so easy with multi-column DataFrames. By default, calling df.plot() will cause pandas to over-plot all column data, with each column as a single line. In this exercise, load the three columns of data from a weather data set - temperature, dew point, and pressure - but the problem is that pressure has different units of measure. The pressure data, measured in Atmospheres, has a different vertical scaling than that of the other two data columns, which are both measured in degrees Fahrenheit. Your job is to plot all columns as a multi-line plot, to see the nature of vertical scaling problem. Then, use a list of column names passed into the DataFrame df[column_list] to limit plotting to just one column, and then just 2 columns of data. When you are finished, you will have created 4 plots. You can cycle through them by clicking on the 'Previous Plot' and 'Next Plot' buttons. As in the previous exercise, inspect the DataFrame df in the IPython Shell using the .head() and .info() methods. 
``` # Plot all columns (default) temp.plot(figsize=(18, 10)) plt.show() # Plot all columns as subplots temp.plot(subplots=True,figsize=(18, 20)) plt.show() # Plot just the Dew Point data column_list1 = ['DewPoint'] temp[column_list1].plot(figsize=(18, 10)) plt.show() # Plot the Dew Point and Temperature data, but not the Pressure data column_list2 = ['Temperature','DewPoint'] temp[column_list2].plot(figsize=(18, 10)) plt.show() ``` Great work!
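Beyond dropping the Pressure column, pandas also offers a third way to deal with the vertical-scaling problem: give the differently-scaled column its own axis via the `secondary_y` plotting option. This is a sketch on a small synthetic DataFrame (the column names mirror the weather data, but the values are made up, since the original CSV isn't loaded here):

```python
import matplotlib
matplotlib.use("Agg")  # headless-safe backend; unnecessary inside a notebook
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd

# Synthetic stand-in for the weather DataFrame (values are invented)
idx = pd.date_range("2010-01-01", periods=100, freq="h")
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "Temperature": 50 + 10 * rng.random(100),
    "DewPoint": 40 + 10 * rng.random(100),
    "Pressure": 1.0 + 0.01 * rng.standard_normal(100),
}, index=idx)

# The two degrees-Fahrenheit columns share the left-hand axis...
ax = df[["Temperature", "DewPoint"]].plot(figsize=(18, 10))
ax.set_ylabel("Degrees F")

# ...while Pressure (Atmospheres) is drawn on its own right-hand axis
ax2 = df["Pressure"].plot(ax=ax, secondary_y=True)
ax2.set_ylabel("Atmospheres")
plt.show()
```

With `secondary_y=True`, pandas returns the twin axes the series was drawn on, so all three columns can share one figure without Pressure flattening the temperature curves.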
``` %load_ext autoreload %autoreload 2 import os import sys sys.path.append("..") import datetime import pathlib from collections import OrderedDict import numpy as np import pandas as pd # Pytorch import torch from torch.optim import lr_scheduler import torch.optim as optim from torch.autograd import Variable # Custom from dutils import Experiment from trainer import fit import visualization as vis from tcga_datasets import SiameseDataset # Models from tcga_networks import EmbeddingNet, SiameseNet from losses import ContrastiveLoss # Metrics from sklearn.cluster import KMeans from sklearn.metrics import adjusted_mutual_info_score as ANMI def getTCGA(disease): path = "/srv/nas/mk2/projects/pan-cancer/TCGA_CCLE_GCP/TCGA/TCGA_{}_counts.tsv.gz" files = [path.format(d) for d in disease] return files def readGCP(files, biotype='protein_coding', mean=True): """ Paths to count matrices. """ data_dict = {} for f in files: key = os.path.basename(f).split("_")[1] data = pd.read_csv(f, sep='\t', index_col=0) # transcript metadata meta = pd.DataFrame([row[:-1] for row in data.index.str.split("|")], columns=['ENST', 'ENSG', 'OTTHUMG', 'OTTHUMT', 'GENE-NUM', 'GENE', 'BP', 'BIOTYPE']) meta = pd.MultiIndex.from_frame(meta) data.index = meta # subset transcripts data = data.xs(key=biotype, level='BIOTYPE') data = data.droplevel(['ENST', 'ENSG', 'OTTHUMG', 'OTTHUMT', 'GENE-NUM', 'BP']) # average gene expression of splice variants data = data.T if mean: data = data.groupby(by=data.columns, axis=1).mean() data_dict[key] = data return data_dict def uq_norm(df, q=0.75): """ Upper quartile normalization of GEX for samples. 
""" quantiles = df.quantile(q=q, axis=1) norm = df.divide(quantiles, axis=0) return norm def process_TCGA(disease=['BRCA', 'LUAD', 'KIRC', 'THCA', 'PRAD', 'SKCM']): base="/srv/nas/mk2/projects/pan-cancer/TCGA_CCLE_GCP" # get files tcga_files = getTCGA(disease) # read meta/data tcga_meta = pd.read_csv(os.path.join(base, "TCGA/TCGA_GDC_ID_MAP.tsv"), sep="\t") tcga_raw = readGCP(tcga_files, mean=True) # combine samples tcga_raw = pd.concat(tcga_raw.values()) # Upper quartile normalization tcga_raw = uq_norm(tcga_raw) # log norm tcga = tcga_raw.transform(np.log1p) return tcga, tcga_meta def generate_fsets(data, n_features, steps=5): r = np.linspace(start=1, stop=n_features, num=steps, dtype='int') idx = [np.random.choice(data.shape[1], size=i, replace=False) for i in r] return idx def feature_training(train_data, train_labels, test_data, test_labels, feature_idx, embedding, exp_dir, cuda=True): # Meta data meta_data = {"n_features":[], "model":[], "ANMI":[]} # Params batch_size = 8 kwargs = {'num_workers': 10, 'pin_memory': True} if cuda else {'num_workers': 10} # Feature Index for batch, feat in enumerate(feature_idx): print("Number features: {}\n".format(len(feat))) exp_data = {'feature_idx':feat} # Define data siamese_train_dataset = SiameseDataset(data=train_data.iloc[:,feat], labels=train_labels, train=True) siamese_test_dataset = SiameseDataset(data=test_data.iloc[:,feat], labels=test_labels, train=False) # Loaders siamese_train_loader = torch.utils.data.DataLoader(siamese_train_dataset, batch_size=batch_size, shuffle=True, **kwargs) siamese_test_loader = torch.utils.data.DataLoader(siamese_test_dataset, batch_size=batch_size, shuffle=False, **kwargs) # Instantiate model n_samples, n_features = siamese_train_dataset.train_data.shape for i in range(3): nmodel = 'model_{}'.format(i) print("\t{}".format(nmodel)) embedding_net = EmbeddingNet(n_features, embedding) model = SiameseNet(embedding_net) if cuda: model.cuda() # Parameters margin = 1. 
loss_fn = ContrastiveLoss(margin) lr = 1e-3 optimizer = optim.Adam(model.parameters(), lr=lr) scheduler = lr_scheduler.StepLR(optimizer, 8, gamma=0.1, last_epoch=-1) n_epochs = 10 log_interval = round(len(siamese_train_dataset)/1/batch_size) # Train train_loss, val_loss = fit(siamese_train_loader, siamese_test_loader, model, loss_fn, optimizer, scheduler, n_epochs, cuda, log_interval) # Test Embeddings val_embeddings_baseline, val_labels_baseline = vis.extract_embeddings(siamese_test_dataset.test_data, siamese_test_dataset.labels, model) # Evaluation n_clusters = len(np.unique(test_labels)) kmeans = KMeans(n_clusters=n_clusters) siamese_clusters = kmeans.fit_predict(val_embeddings_baseline) anmi = ANMI(siamese_clusters, val_labels_baseline) # Store meta_data['n_features'].append(len(feat)) meta_data['model'].append(nmodel) meta_data['ANMI'].append(anmi) exp_data[nmodel] = {'data': (val_embeddings_baseline, val_labels_baseline), 'loss': (train_loss, val_loss), 'ANMI': anmi} pd.to_pickle(exp_data, os.path.join(exp_dir, "model_{}.pkl".format(len(feat)))) pd.to_pickle(meta_data, os.path.join(exp_dir, "model_meta_data.pkl")) def main(disease, sample_type, **kwargs): # GPUs os.environ["CUDA_VISIBLE_DEVICES"] = kwargs['device'] cuda = torch.cuda.is_available() print("Cuda is available: {}".format(cuda)) # Read / write / process tcga, tcga_meta = process_TCGA(disease) # Feature design feature_idx = generate_fsets(tcga, n_features=kwargs['n_features'], steps=kwargs['steps']) # Experiment design hierarchy = OrderedDict({'Disease':disease, 'Sample Type':sample_type}) # Define experiment exp = Experiment(meta_data=tcga_meta, hierarchy=hierarchy, index='CGHubAnalysisID', cases='Case ID', min_samples=20) # Train / Test split exp.train_test_split(cases='Case ID') # Return data train_data, train_labels = exp.get_data(tcga, subset="train", dtype=np.float32) test_data, test_labels = exp.get_data(tcga, subset="test", dtype=np.float32) # randomize labels 
np.random.shuffle(train_labels) # Path *fix* dtime = datetime.datetime.today().strftime("%Y.%m.%d_%H:%M") exp_dir = "/srv/nas/mk2/projects/pan-cancer/experiments/feature_sel/{}_{}_{}_{}_{}-{}".format(dtime, kwargs['note'], len(exp.labels_dict), kwargs['embedding'], kwargs['n_features'], kwargs['steps']) pathlib.Path(exp_dir).mkdir(parents=True, exist_ok=False) print('Saving to: \n{}'.format(exp_dir)) # Meta data experiments = {'experiment': exp, 'train':(train_data, train_labels), 'test': (test_data, test_labels)} pd.to_pickle(experiments, os.path.join(exp_dir, "experiment_meta_data.pkl")) # Training feature_training(train_data, train_labels, test_data, test_labels, feature_idx, kwargs['embedding'], exp_dir) ``` ### Setup ``` base="/srv/nas/mk2/projects/pan-cancer/TCGA_CCLE_GCP" # read meta/data tcga_meta = pd.read_csv(os.path.join(base, "TCGA/TCGA_GDC_ID_MAP.tsv"), sep="\t") # select disease disease = tcga_meta[tcga_meta['Sample Type']=='Solid Tissue Normal']['Disease'].value_counts() disease = list(disease[disease>=20].index) disease disease = ['BRCA', 'LUAD', 'KIRC', 'THCA', 'PRAD', 'SKCM'] sample_type = ['Primary Tumor', 'Solid Tissue Normal'] params = {"device":"4", "note":"shuffle", "n_features":50, "steps":50, "embedding":2} main(disease=disease, sample_type=sample_type, **params) ```
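To make the `uq_norm` step above concrete, here is a small self-contained sketch on a made-up count matrix (rows are samples, columns are genes; the numbers are invented, not TCGA data). Each sample is divided by its own 75th-percentile expression value, so after normalization every sample's upper quartile equals 1:

```python
import numpy as np
import pandas as pd

# Toy count matrix: 2 samples x 4 genes (invented values)
toy = pd.DataFrame(
    [[1.0, 2.0, 3.0, 4.0],
     [10.0, 20.0, 30.0, 40.0]],
    index=["sample_a", "sample_b"],
    columns=["g1", "g2", "g3", "g4"],
)

# Per-sample 75th percentile, computed across genes (axis=1)
q75 = toy.quantile(q=0.75, axis=1)

# Divide each row (sample) by its own upper quartile, as uq_norm does
norm = toy.divide(q75, axis=0)

# Every sample's upper quartile is now 1.0, putting samples with very
# different sequencing depths on a comparable scale
print(norm.quantile(q=0.75, axis=1))
```

Note the `axis` pairing: `quantile(axis=1)` collapses the gene axis to one value per sample, and `divide(..., axis=0)` aligns that per-sample value along the row index.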
# MNIST - Syft Duet - Data Scientist 🥁 ## PART 0: Optional - Google Colab Setup ``` %%capture # This only runs in colab and clones the code sets it up and fixes a few issues, # you can skip this if you are running Jupyter Notebooks import sys if "google.colab" in sys.modules: branch = "master" # change to the branch you want ! git clone --single-branch --branch $branch https://github.com/OpenMined/PySyft.git ! cd PySyft && ./scripts/colab.sh # fixes some colab python issues sys.path.append("/content/PySyft/src") # prevents needing restart ``` ## PART 1: Connect to a Remote Duet Server As the Data Scientist, you want to perform data science on data that is sitting in the Data Owner's Duet server in their Notebook. In order to do this, we must run the code that the Data Owner sends us, which importantly includes their Duet Session ID. The code will look like this, importantly with their real Server ID. ``` import syft as sy duet = sy.duet('xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx') ``` This will create a direct connection from my notebook to the remote Duet server. Once the connection is established all traffic is sent directly between the two nodes. Paste the code or Server ID that the Data Owner gives you and run it in the cell below. It will return your Client ID which you must send to the Data Owner to enter into Duet so it can pair your notebooks. ``` import syft as sy duet = sy.join_duet("xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx") sy.logger.add(sink="./syft_ds.log") ``` ## PART 2: Setting up a Model and our Data The majority of the code below has been adapted closely from the original PyTorch MNIST example which is available in the `original` directory with these notebooks. The `duet` variable is now your reference to a whole world of remote operations including supported libraries like torch. Lets take a look at the duet.torch attribute. ``` duet.torch ``` Lets create a model just like the one in the MNIST example.
We do this in almost the exact same way as in PyTorch. The main difference is we inherit from sy.Module instead of nn.Module and we need to pass in a variable called torch_ref which we will use internally for any calls that would normally be to torch. ``` class SyNet(sy.Module): def __init__(self, torch_ref): super(SyNet, self).__init__(torch_ref=torch_ref) self.conv1 = self.torch_ref.nn.Conv2d(1, 32, 3, 1) self.conv2 = self.torch_ref.nn.Conv2d(32, 64, 3, 1) self.dropout1 = self.torch_ref.nn.Dropout2d(0.25) self.dropout2 = self.torch_ref.nn.Dropout2d(0.5) self.fc1 = self.torch_ref.nn.Linear(9216, 128) self.fc2 = self.torch_ref.nn.Linear(128, 10) def forward(self, x): x = self.conv1(x) x = self.torch_ref.nn.functional.relu(x) x = self.conv2(x) x = self.torch_ref.nn.functional.relu(x) x = self.torch_ref.nn.functional.max_pool2d(x, 2) x = self.dropout1(x) x = self.torch_ref.flatten(x, 1) x = self.fc1(x) x = self.torch_ref.nn.functional.relu(x) x = self.dropout2(x) x = self.fc2(x) output = self.torch_ref.nn.functional.log_softmax(x, dim=1) return output # lets import torch and torchvision just as we normally would import torch import torchvision # now we can create the model and pass in our local copy of torch local_model = SyNet(torch) ``` Next we can get our MNIST Test Set ready using our local copy of torch. 
``` # we need some transforms for the MNIST data set local_transform_1 = torchvision.transforms.ToTensor() # this converts PIL images to Tensors local_transform_2 = torchvision.transforms.Normalize(0.1307, 0.3081) # this normalizes the dataset # compose our transforms local_transforms = torchvision.transforms.Compose([local_transform_1, local_transform_2]) # Lets define a few settings which are from the original MNIST example command-line args args = { "batch_size": 64, "test_batch_size": 1000, "epochs": 14, "lr": 1.0, "gamma": 0.7, "no_cuda": False, "dry_run": False, "seed": 42, # the meaning of life "log_interval": 10, "save_model": True, } # we will configure the test set here locally since we want to know if our Data Owner's # private training dataset will help us reach new SOTA results for our benchmark test set test_kwargs = { "batch_size": args["test_batch_size"], } test_data = torchvision.datasets.MNIST('../data', train=False, download=True, transform=local_transforms) test_loader = torch.utils.data.DataLoader(test_data,**test_kwargs) test_data_length = len(test_loader.dataset) print(test_data_length) ``` Now its time to send the model to our partner’s Duet Server. Note: You can load normal torch model weights before sending your model. Try training the model and saving it at the end of the notebook and then coming back and reloading the weights here, or you can train the same model once using the original script in `original` dir and load it here as well. ``` # local_model.load("./duet_mnist.pt") model = local_model.send(duet) ``` Lets create an alias for our partner’s torch called `remote_torch` so we can refer to the local torch as `torch` and any operation we want to do remotely as `remote_torch`. Remember, the return values from `remote_torch` are `Pointers`, not the real objects. They mostly act the same when using them with other `Pointers` but you can't mix them with local torch objects. 
``` remote_torch = duet.torch # lets ask to see if our Data Owner has CUDA has_cuda = False has_cuda_ptr = remote_torch.cuda.is_available() has_cuda = bool(has_cuda_ptr.get( request_block=True, name="cuda_is_available", reason="To run test and inference locally", timeout_secs=5, # change to something slower )) print(has_cuda) use_cuda = not args["no_cuda"] and has_cuda # now we can set the seed remote_torch.manual_seed(args["seed"]) device = remote_torch.device("cuda" if use_cuda else "cpu") print(f"Data Owner device is {device.type.get()}") # if we have CUDA lets send our model to the GPU if has_cuda: model.cuda(device) else: model.cpu() ``` Lets get our params, setup an optimizer and a scheduler just the same as the PyTorch MNIST example ``` params = model.parameters() optimizer = remote_torch.optim.Adadelta(params, lr=args["lr"]) scheduler = remote_torch.optim.lr_scheduler.StepLR(optimizer, step_size=1, gamma=args["gamma"]) ``` Next we need a training loop so we can improve our remote model. Since we want to train on remote data we should first check if the model is remote since we will be using remote_torch in this function. To check if a model is local or remote simply use the `.is_local` attribute. 
```
def train(model, torch_ref, train_loader, optimizer, epoch, args, train_data_length):
    # + 0.5 lets us math.ceil without the import
    train_batches = round((train_data_length / args["batch_size"]) + 0.5)
    print(f"> Running train in {train_batches} batches")
    if model.is_local:
        print("Training requires remote model")
        return
    model.train()
    # create the remote Float once, before the loop, so losses accumulate
    # across batches instead of being reset every iteration
    train_loss = duet.python.Float(0)
    for batch_idx, data in enumerate(train_loader):
        data_ptr, target_ptr = data[0], data[1]
        optimizer.zero_grad()
        output = model(data_ptr)
        loss = torch_ref.nn.functional.nll_loss(output, target_ptr)
        loss.backward()
        optimizer.step()
        loss_item = loss.item()
        train_loss += loss_item
        if batch_idx % args["log_interval"] == 0:
            local_loss = loss_item.get(
                name="loss",
                reason="To evaluate training progress",
                request_block=True,
                timeout_secs=5
            )
            if local_loss is not None:
                print("Train Epoch: {} {} {:.4}".format(epoch, batch_idx, local_loss))
            else:
                print("Train Epoch: {} {} ?".format(epoch, batch_idx))
        if args["dry_run"]:
            break
        if batch_idx >= train_batches - 1:
            print("batch_idx >= train_batches, breaking")
            break
```
Now we can define a simple test loop very similar to the original PyTorch MNIST example. This function should expect a remote model from our outer epoch loop, so internally we can call `get` to download the weights to do an evaluation on our machine with our local test set. Remember, if we have trained on private data, our model will require permission to download, so we should use request_block=True and make sure the Data Owner approves our requests. For the rest of this function, we will use local `torch` as we normally would.
``` def test_local(model, torch_ref, test_loader, test_data_length): # download remote model if not model.is_local: local_model = model.get( request_block=True, name="model_download", reason="test evaluation", timeout_secs=5 ) else: local_model = model # + 0.5 lets us math.ceil without the import test_batches = round((test_data_length / args["test_batch_size"]) + 0.5) print(f"> Running test_local in {test_batches} batches") local_model.eval() test_loss = 0.0 correct = 0.0 with torch_ref.no_grad(): for batch_idx, (data, target) in enumerate(test_loader): output = local_model(data) iter_loss = torch_ref.nn.functional.nll_loss(output, target, reduction="sum").item() test_loss = test_loss + iter_loss pred = output.argmax(dim=1) total = pred.eq(target).sum().item() correct += total if args["dry_run"]: break if batch_idx >= test_batches - 1: print("batch_idx >= test_batches, breaking") break accuracy = correct / test_data_length print(f"Test Set Accuracy: {100 * accuracy}%") ``` Finally just for demonstration purposes, we will get the built-in MNIST dataset but on the Data Owners side from `remote_torchvision`. 
``` # we need some transforms for the MNIST data set remote_torchvision = duet.torchvision transform_1 = remote_torchvision.transforms.ToTensor() # this converts PIL images to Tensors transform_2 = remote_torchvision.transforms.Normalize(0.1307, 0.3081) # this normalizes the dataset remote_list = duet.python.List() # create a remote list to add the transforms to remote_list.append(transform_1) remote_list.append(transform_2) # compose our transforms transforms = remote_torchvision.transforms.Compose(remote_list) # The DO has kindly let us initialise a DataLoader for their training set train_kwargs = { "batch_size": args["batch_size"], } train_data_ptr = remote_torchvision.datasets.MNIST('../data', train=True, download=True, transform=transforms) train_loader_ptr = remote_torch.utils.data.DataLoader(train_data_ptr,**train_kwargs) # normally we would not necessarily know the length of a remote dataset so lets ask for it # so we can pass that to our training loop and know when to stop def get_train_length(train_data_ptr): train_length_ptr = train_data_ptr.__len__() train_data_length = train_length_ptr.get( request_block=True, name="train_size", reason="To write the training loop", timeout_secs=5, ) return train_data_length try: if train_data_length is None: train_data_length = get_train_length(train_data_ptr) except NameError: train_data_length = get_train_length(train_data_ptr) print(f"Training Dataset size is: {train_data_length}") ``` ## PART 3: Training ``` %%time import time args["dry_run"] = True # comment to do a full train print("Starting Training") for epoch in range(1, args["epochs"] + 1): epoch_start = time.time() print(f"Epoch: {epoch}") # remote training on model with remote_torch train(model, remote_torch, train_loader_ptr, optimizer, epoch, args, train_data_length) # local testing on model with local torch test_local(model, torch, test_loader, test_data_length) scheduler.step() epoch_end = time.time() print(f"Epoch time: {int(epoch_end - epoch_start)} 
seconds") break print("Finished Training") if args["save_model"]: model.get( request_block=True, name="model_download", reason="test evaluation", timeout_secs=5 ).save("./duet_mnist.pt") ``` ## PART 4: Inference A model would be no fun without the ability to do inference. The following code shows some examples on how we can do this either remotely or locally. ``` import matplotlib.pyplot as plt def draw_image_and_label(image, label): fig = plt.figure() plt.tight_layout() plt.imshow(image, cmap="gray", interpolation="none") plt.title("Ground Truth: {}".format(label)) def prep_for_inference(image): image_batch = image.unsqueeze(0).unsqueeze(0) image_batch = image_batch * 1.0 return image_batch def classify_local(image, model): if not model.is_local: print("model is remote try .get()") return -1, torch.Tensor([-1]) image_tensor = torch.Tensor(prep_for_inference(image)) output = model(image_tensor) preds = torch.exp(output) local_y = preds local_y = local_y.squeeze() pos = local_y == max(local_y) index = torch.nonzero(pos, as_tuple=False) class_num = index.squeeze() return class_num, local_y def classify_remote(image, model): if model.is_local: print("model is local try .send()") return -1, remote_torch.Tensor([-1]) image_tensor_ptr = remote_torch.Tensor(prep_for_inference(image)) output = model(image_tensor_ptr) preds = remote_torch.exp(output) preds_result = preds.get( request_block=True, name="inference", reason="To see a real world example of inference", timeout_secs=10 ) if preds_result is None: print("No permission to do inference, request again") return -1, torch.Tensor([-1]) else: # now we have the local tensor we can use local torch local_y = torch.Tensor(preds_result) local_y = local_y.squeeze() pos = local_y == max(local_y) index = torch.nonzero(pos, as_tuple=False) class_num = index.squeeze() return class_num, local_y # lets grab something from the test set import random total_images = test_data_length # 10000 index = random.randint(0, total_images) 
print("Random Test Image:", index) count = 0 batch = index // test_kwargs["batch_size"] batch_index = index % int(total_images / len(test_loader)) for tensor_ptr in test_loader: data, target = tensor_ptr[0], tensor_ptr[1] if batch == count: break count += 1 print(f"Displaying {index} == {batch_index} in Batch: {batch}/{len(test_loader)}") image_1 = data[batch_index].reshape((28, 28)) label_1 = target[batch_index] draw_image_and_label(image_1, label_1) # classify remote class_num, preds = classify_remote(image_1, model) print(f"Prediction: {class_num} Ground Truth: {label_1}") print(preds) local_model = model.get( request_block=True, name="model_download", reason="To run test and inference locally", timeout_secs=5, ) # classify local class_num, preds = classify_local(image_1, local_model) print(f"Prediction: {class_num} Ground Truth: {label_1}") print(preds) # We can also download an image from the web and run inference on that from PIL import Image, ImageEnhance import PIL.ImageOps import os def classify_url_image(image_url): filename = os.path.basename(image_url) !curl -O $image_url im = Image.open(filename) im = PIL.ImageOps.invert(im) # im = im.resize((28,28), Image.ANTIALIAS) im = im.convert('LA') enhancer = ImageEnhance.Brightness(im) im = enhancer.enhance(3) print(im.size) fig = plt.figure() plt.tight_layout() plt.imshow(im, cmap="gray", interpolation="none") # classify local class_num, preds = classify_local(image_1, local_model) print(f"Prediction: {class_num}") print(preds) image_url = "https://raw.githubusercontent.com/kensanata/numbers/master/0018_CHXX/0/number-100.png" classify_url_image(image_url) ```
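Stripped of the pointer plumbing, the prediction step in `classify_local` and `classify_remote` just exponentiates the model's log-softmax output and takes the argmax. A framework-free sketch of that arithmetic (NumPy instead of torch, with a made-up output vector, so it runs without a Duet session):

```python
import numpy as np

# Hypothetical log-softmax output for one image over the 10 digit classes
probs_true = np.array([0.01, 0.02, 0.01, 0.05, 0.01,
                       0.02, 0.01, 0.80, 0.05, 0.02])
log_probs = np.log(probs_true)  # what log_softmax would hand back

# classify_local: preds = torch.exp(output), then locate the maximum
preds = np.exp(log_probs)
class_num = int(np.argmax(preds))  # argmax of the recovered probabilities

print(class_num)  # index of the most probable digit
```

`np.argmax` (or `torch.argmax`) is the idiomatic replacement for the `pos == max(...)` / `nonzero` pattern used in the inference functions above; both pick the index of the largest probability.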
Implementation of double deep-Q learning initially taken from https://github.com/fg91/Deep-Q-Learning/blob/master/DQN.ipynb ``` import os import random import gym import tensorflow as tf import numpy as np class FrameProcessor: """Resizes and converts RGB Atari frames to grayscale""" def __init__(self, frame_height=84, frame_width=84): """ Args: frame_height: Integer, Height of a frame of an Atari game frame_width: Integer, Width of a frame of an Atari game """ self.frame_height = frame_height self.frame_width = frame_width self.frame = tf.placeholder(shape=[210, 160, 3], dtype=tf.uint8) self.processed = tf.image.rgb_to_grayscale(self.frame) self.processed = tf.image.crop_to_bounding_box(self.processed, 34, 0, 160, 160) self.processed = tf.image.resize_images(self.processed, [self.frame_height, self.frame_width], method=tf.image.ResizeMethod.NEAREST_NEIGHBOR) def process(self, session, frame): """ Args: session: A Tensorflow session object frame: A (210, 160, 3) frame of an Atari game in RGB Returns: A processed (84, 84, 1) frame in grayscale """ return session.run(self.processed, feed_dict={self.frame:frame}) class DQN: """Implements a Deep Q Network""" # pylint: disable=too-many-instance-attributes def __init__(self, n_actions, hidden=1024, learning_rate=0.00001, frame_height=84, frame_width=84, agent_history_length=4): """ Args: n_actions: Integer, number of possible actions hidden: Integer, Number of filters in the final convolutional layer. 
This is different from the DeepMind implementation learning_rate: Float, Learning rate for the Adam optimizer frame_height: Integer, Height of a frame of an Atari game frame_width: Integer, Width of a frame of an Atari game agent_history_length: Integer, Number of frames stacked together to create a state """ self.n_actions = n_actions self.hidden = hidden self.learning_rate = learning_rate self.frame_height = frame_height self.frame_width = frame_width self.agent_history_length = agent_history_length self.input = tf.placeholder(shape=[None, self.frame_height, self.frame_width, self.agent_history_length], dtype=tf.float32) # Normalizing the input self.inputscaled = self.input/255 # Convolutional layers self.conv1 = tf.layers.conv2d( inputs=self.inputscaled, filters=32, kernel_size=[8, 8], strides=4, kernel_initializer=tf.variance_scaling_initializer(scale=2), padding="valid", activation=tf.nn.relu, use_bias=False, name='conv1') self.conv2 = tf.layers.conv2d( inputs=self.conv1, filters=64, kernel_size=[4, 4], strides=2, kernel_initializer=tf.variance_scaling_initializer(scale=2), padding="valid", activation=tf.nn.relu, use_bias=False, name='conv2') self.conv3 = tf.layers.conv2d( inputs=self.conv2, filters=64, kernel_size=[3, 3], strides=1, kernel_initializer=tf.variance_scaling_initializer(scale=2), padding="valid", activation=tf.nn.relu, use_bias=False, name='conv3') self.conv4 = tf.layers.conv2d( inputs=self.conv3, filters=hidden, kernel_size=[7, 7], strides=1, kernel_initializer=tf.variance_scaling_initializer(scale=2), padding="valid", activation=tf.nn.relu, use_bias=False, name='conv4') # Splitting into value and advantage stream self.valuestream, self.advantagestream = tf.split(self.conv4, 2, 3) self.valuestream = tf.layers.flatten(self.valuestream) self.advantagestream = tf.layers.flatten(self.advantagestream) self.advantage = tf.layers.dense( inputs=self.advantagestream, units=self.n_actions, kernel_initializer=tf.variance_scaling_initializer(scale=2), 
name="advantage") self.value = tf.layers.dense( inputs=self.valuestream, units=1, kernel_initializer=tf.variance_scaling_initializer(scale=2), name='value') # Combining value and advantage into Q-values as described above self.q_values = self.value + tf.subtract(self.advantage, tf.reduce_mean(self.advantage, axis=1, keepdims=True)) self.best_action = tf.argmax(self.q_values, 1) # The next lines perform the parameter update. This will be explained in detail later. # targetQ according to Bellman equation: # Q = r + gamma*max Q', calculated in the function learn() self.target_q = tf.placeholder(shape=[None], dtype=tf.float32) # Action that was performed self.action = tf.placeholder(shape=[None], dtype=tf.int32) # Q value of the action that was performed self.Q = tf.reduce_sum(tf.multiply(self.q_values, tf.one_hot(self.action, self.n_actions, dtype=tf.float32)), axis=1) # Parameter updates self.loss = tf.reduce_mean(tf.losses.huber_loss(labels=self.target_q, predictions=self.Q)) self.optimizer = tf.train.AdamOptimizer(learning_rate=self.learning_rate) self.update = self.optimizer.minimize(self.loss) class ActionGetter: """Determines an action according to an epsilon greedy strategy with annealing epsilon""" def __init__(self, n_actions, eps_initial=1, eps_final=0.1, eps_final_frame=0.01, eps_evaluation=0.0, eps_annealing_frames=1000000, replay_memory_start_size=50000, max_frames=25000000): """ Args: n_actions: Integer, number of possible actions eps_initial: Float, Exploration probability for the first replay_memory_start_size frames eps_final: Float, Exploration probability after replay_memory_start_size + eps_annealing_frames frames eps_final_frame: Float, Exploration probability after max_frames frames eps_evaluation: Float, Exploration probability during evaluation eps_annealing_frames: Int, Number of frames over which the exploration probabilty is annealed from eps_initial to eps_final replay_memory_start_size: Integer, Number of frames during which the agent only 
explores max_frames: Integer, Total number of frames shown to the agent """ self.n_actions = n_actions self.eps_initial = eps_initial self.eps_final = eps_final self.eps_final_frame = eps_final_frame self.eps_evaluation = eps_evaluation self.eps_annealing_frames = eps_annealing_frames self.replay_memory_start_size = replay_memory_start_size self.max_frames = max_frames # Slopes and intercepts for exploration decrease self.slope = -(self.eps_initial - self.eps_final)/self.eps_annealing_frames self.intercept = self.eps_initial - self.slope*self.replay_memory_start_size self.slope_2 = -(self.eps_final - self.eps_final_frame)/(self.max_frames - self.eps_annealing_frames - self.replay_memory_start_size) self.intercept_2 = self.eps_final_frame - self.slope_2*self.max_frames def get_action(self, session, frame_number, state, main_dqn, evaluation=False): """ Args: session: A tensorflow session object frame_number: Integer, number of the current frame state: A (84, 84, 4) sequence of frames of an Atari game in grayscale main_dqn: A DQN object evaluation: A boolean saying whether the agent is being evaluated Returns: An integer between 0 and n_actions - 1 determining the action the agent perfoms next """ if evaluation: eps = self.eps_evaluation elif frame_number < self.replay_memory_start_size: eps = self.eps_initial elif frame_number >= self.replay_memory_start_size and frame_number < self.replay_memory_start_size + self.eps_annealing_frames: eps = self.slope*frame_number + self.intercept elif frame_number >= self.replay_memory_start_size + self.eps_annealing_frames: eps = self.slope_2*frame_number + self.intercept_2 if np.random.rand(1) < eps: return np.random.randint(0, self.n_actions) return session.run(main_dqn.best_action, feed_dict={main_dqn.input:[state]})[0] class ReplayMemory: """Replay Memory that stores the last size=1,000,000 transitions""" def __init__(self, size=1000000, frame_height=84, frame_width=84, agent_history_length=4, batch_size=32): """ Args: size: 
Integer, Number of stored transitions frame_height: Integer, Height of a frame of an Atari game frame_width: Integer, Width of a frame of an Atari game agent_history_length: Integer, Number of frames stacked together to create a state batch_size: Integer, Number if transitions returned in a minibatch """ self.size = size self.frame_height = frame_height self.frame_width = frame_width self.agent_history_length = agent_history_length self.batch_size = batch_size self.count = 0 self.current = 0 # Pre-allocate memory self.actions = np.empty(self.size, dtype=np.int32) self.rewards = np.empty(self.size, dtype=np.float32) self.frames = np.empty((self.size, self.frame_height, self.frame_width), dtype=np.uint8) self.terminal_flags = np.empty(self.size, dtype=np.bool) # Pre-allocate memory for the states and new_states in a minibatch self.states = np.empty((self.batch_size, self.agent_history_length, self.frame_height, self.frame_width), dtype=np.uint8) self.new_states = np.empty((self.batch_size, self.agent_history_length, self.frame_height, self.frame_width), dtype=np.uint8) self.indices = np.empty(self.batch_size, dtype=np.int32) def add_experience(self, action, frame, reward, terminal): """ Args: action: An integer between 0 and env.action_space.n - 1 determining the action the agent perfomed frame: A (84, 84, 1) frame of an Atari game in grayscale reward: A float determining the reward the agend received for performing an action terminal: A bool stating whether the episode terminated """ if frame.shape != (self.frame_height, self.frame_width): raise ValueError('Dimension of frame is wrong!') self.actions[self.current] = action self.frames[self.current, ...] 
= frame self.rewards[self.current] = reward self.terminal_flags[self.current] = terminal self.count = max(self.count, self.current+1) self.current = (self.current + 1) % self.size def _get_state(self, index): if self.count == 0: raise ValueError("The replay memory is empty!") if index < self.agent_history_length - 1: raise ValueError("Index must be at least 3") return self.frames[index-self.agent_history_length+1:index+1, ...] def _get_valid_indices(self): for i in range(self.batch_size): while True: index = random.randint(self.agent_history_length, self.count - 1) if index < self.agent_history_length: continue if index >= self.current and index - self.agent_history_length <= self.current: continue if self.terminal_flags[index - self.agent_history_length:index].any(): continue break self.indices[i] = index def get_minibatch(self): """ Returns a minibatch of self.batch_size = 32 transitions """ if self.count < self.agent_history_length: raise ValueError('Not enough memories to get a minibatch') self._get_valid_indices() for i, idx in enumerate(self.indices): self.states[i] = self._get_state(idx - 1) self.new_states[i] = self._get_state(idx) return np.transpose(self.states, axes=(0, 2, 3, 1)), self.actions[self.indices], self.rewards[self.indices], np.transpose(self.new_states, axes=(0, 2, 3, 1)), self.terminal_flags[self.indices] def learn(session, replay_memory, main_dqn, target_dqn, batch_size, gamma): """ Args: session: A tensorflow session object replay_memory: A ReplayMemory object main_dqn: A DQN object target_dqn: A DQN object batch_size: Integer, Batch size gamma: Float, discount factor for the Bellman equation Returns: loss: The loss of the minibatch, for tensorboard Draws a minibatch from the replay memory, calculates the target Q-value that the prediction Q-value is regressed to. Then a parameter update is performed on the main DQN.
""" # Draw a minibatch from the replay memory states, actions, rewards, new_states, terminal_flags = replay_memory.get_minibatch() # The main network estimates which action is best (in the next # state s', new_states is passed!) # for every transition in the minibatch arg_q_max = session.run(main_dqn.best_action, feed_dict={main_dqn.input:new_states}) # The target network estimates the Q-values (in the next state s', new_states is passed!) # for every transition in the minibatch q_vals = session.run(target_dqn.q_values, feed_dict={target_dqn.input:new_states}) double_q = q_vals[range(batch_size), arg_q_max] # Bellman equation. Multiplication with (1-terminal_flags) makes sure that # if the game is over, targetQ=rewards target_q = rewards + (gamma*double_q * (1-terminal_flags)) # Gradient descend step to update the parameters of the main network loss, _ = session.run([main_dqn.loss, main_dqn.update], feed_dict={main_dqn.input:states, main_dqn.target_q:target_q, main_dqn.action:actions}) return loss class TargetNetworkUpdater: """Copies the parameters of the main DQN to the target DQN""" def __init__(self, main_dqn_vars, target_dqn_vars): """ Args: main_dqn_vars: A list of tensorflow variables belonging to the main DQN network target_dqn_vars: A list of tensorflow variables belonging to the target DQN network """ self.main_dqn_vars = main_dqn_vars self.target_dqn_vars = target_dqn_vars def _update_target_vars(self): update_ops = [] for i, var in enumerate(self.main_dqn_vars): copy_op = self.target_dqn_vars[i].assign(var.value()) update_ops.append(copy_op) return update_ops def update_networks(self, sess): """ Args: sess: A Tensorflow session object Assigns the values of the parameters of the main network to the parameters of the target network """ update_ops = self._update_target_vars() for copy_op in update_ops: sess.run(copy_op) class Atari: """Wrapper for the environment provided by gym""" def __init__(self, envName, no_op_steps=10, agent_history_length=4): 
self.env = gym.make(envName) self.frame_processor = FrameProcessor() self.state = None self.last_lives = 0 self.no_op_steps = no_op_steps self.agent_history_length = agent_history_length def reset(self, sess, evaluation=False): """ Args: sess: A Tensorflow session object evaluation: A boolean saying whether the agent is evaluating or training Resets the environment and stacks four frames on top of each other to create the first state """ frame = self.env.reset() self.last_lives = 0 terminal_life_lost = True # Set to true so that the agent starts # with a 'FIRE' action when evaluating if evaluation: for _ in range(random.randint(1, self.no_op_steps)): frame, _, _, _ = self.env.step(1) # Action 'Fire' processed_frame = self.frame_processor.process(sess, frame) # (★★★) self.state = np.repeat(processed_frame, self.agent_history_length, axis=2) return terminal_life_lost def step(self, sess, action): """ Args: sess: A Tensorflow session object action: Integer, action the agent performs Performs an action and observes the reward and terminal state from the environment """ new_frame, reward, terminal, info = self.env.step(action) # (5★) if info['ale.lives'] < self.last_lives: terminal_life_lost = True else: terminal_life_lost = terminal self.last_lives = info['ale.lives'] processed_new_frame = self.frame_processor.process(sess, new_frame) # (6★) new_state = np.append(self.state[:, :, 1:], processed_new_frame, axis=2) # (6★) self.state = new_state return processed_new_frame, reward, terminal, terminal_life_lost, new_frame ENV_NAME = 'BreakoutDeterministic-v4' tf.reset_default_graph() # Control parameters MAX_EPISODE_LENGTH = 18000 # Equivalent of 5 minutes of gameplay at 60 frames per second EVAL_FREQUENCY = 200000 # Number of frames the agent sees between evaluations EVAL_STEPS = 10000 # Number of frames for one evaluation NETW_UPDATE_FREQ = 10000 # Number of chosen actions between updating the target network. # According to Mnih et al.
2015 this is measured in the number of # parameter updates (every four actions), however, in the # DeepMind code, it is clearly measured in the number # of actions the agent chooses DISCOUNT_FACTOR = 0.99 # gamma in the Bellman equation REPLAY_MEMORY_START_SIZE = 50000 # Number of completely random actions, # before the agent starts learning MAX_FRAMES = 30000000 # Total number of frames the agent sees MEMORY_SIZE = 1000000 # Number of transitions stored in the replay memory NO_OP_STEPS = 10 # Number of 'NOOP' or 'FIRE' actions at the beginning of an # evaluation episode UPDATE_FREQ = 4 # Every four actions a gradient descent step is performed HIDDEN = 1024 # Number of filters in the final convolutional layer. The output # has the shape (1,1,1024) which is split into two streams. Both # the advantage stream and value stream have the shape # (1,1,512). This is slightly different from the original # implementation but tests I did with the environment Pong # have shown that this way the score increases more quickly LEARNING_RATE = 0.00001 # Set to 0.00025 in Pong for quicker results BS = 32 # Batch size PATH = "output/" # Gifs and checkpoints will be saved here SUMMARIES = "summaries" # logdir for tensorboard RUNID = 'run_1' #os.makedirs(PATH, exist_ok=True) #os.makedirs(os.path.join(SUMMARIES, RUNID)) SUMM_WRITER = tf.summary.FileWriter(os.path.join(SUMMARIES, RUNID)) atari = Atari(ENV_NAME, NO_OP_STEPS) print("The environment has the following {} actions: {}".format(atari.env.action_space.n, atari.env.unwrapped.get_action_meanings())) # main DQN and target DQN networks: with tf.device("/gpu:0"): with tf.variable_scope('mainDQN'): MAIN_DQN = DQN(atari.env.action_space.n, HIDDEN, LEARNING_RATE) # (★★) with tf.variable_scope('targetDQN'): TARGET_DQN = DQN(atari.env.action_space.n, HIDDEN) # (★★) init = tf.global_variables_initializer() MAIN_DQN_VARS = tf.trainable_variables(scope='mainDQN') TARGET_DQN_VARS = tf.trainable_variables(scope='targetDQN') saver =
tf.train.Saver() LAYER_IDS = ["conv1", "conv2", "conv3", "conv4", "denseAdvantage", "denseAdvantageBias", "denseValue", "denseValueBias"] # Scalar summaries for tensorboard: loss, average reward and evaluation score with tf.name_scope('Performance'): LOSS_PH = tf.placeholder(tf.float32, shape=None, name='loss_summary') LOSS_SUMMARY = tf.summary.scalar('loss', LOSS_PH) REWARD_PH = tf.placeholder(tf.float32, shape=None, name='reward_summary') REWARD_SUMMARY = tf.summary.scalar('reward', REWARD_PH) EVAL_SCORE_PH = tf.placeholder(tf.float32, shape=None, name='evaluation_summary') EVAL_SCORE_SUMMARY = tf.summary.scalar('evaluation_score', EVAL_SCORE_PH) PERFORMANCE_SUMMARIES = tf.summary.merge([LOSS_SUMMARY, REWARD_SUMMARY]) with tf.device("/gpu:0"): # Histogram summaries for tensorboard: parameters with tf.name_scope('Parameters'): ALL_PARAM_SUMMARIES = [] for i, Id in enumerate(LAYER_IDS): with tf.name_scope('mainDQN/'): MAIN_DQN_KERNEL = tf.summary.histogram(Id, tf.reshape(MAIN_DQN_VARS[i], shape=[-1])) ALL_PARAM_SUMMARIES.extend([MAIN_DQN_KERNEL]) PARAM_SUMMARIES = tf.summary.merge(ALL_PARAM_SUMMARIES) import datetime def train(): """Contains the training and evaluation loops""" with tf.device("/gpu:0"): my_replay_memory = ReplayMemory(size=MEMORY_SIZE, batch_size=BS) # (★) network_updater = TargetNetworkUpdater(MAIN_DQN_VARS, TARGET_DQN_VARS) action_getter = ActionGetter(atari.env.action_space.n, replay_memory_start_size=REPLAY_MEMORY_START_SIZE, max_frames=MAX_FRAMES) config = tf.ConfigProto(allow_soft_placement = True, log_device_placement = True) with tf.Session(config = config) as sess: sess.run(init) frame_number = 0 rewards = [] loss_list = [] while frame_number < MAX_FRAMES: ######################## ####### Training ####### ######################## epoch_frame = 0 while epoch_frame < EVAL_FREQUENCY: terminal_life_lost = atari.reset(sess) episode_reward_sum = 0 for _ in range(MAX_EPISODE_LENGTH): # (4★) action = action_getter.get_action(sess, frame_number,
atari.state, MAIN_DQN) # (5★) processed_new_frame, reward, terminal, terminal_life_lost, _ = atari.step(sess, action) frame_number += 1 epoch_frame += 1 episode_reward_sum += reward # (7★) Store transition in the replay memory my_replay_memory.add_experience(action=action, frame=processed_new_frame[:, :, 0], reward=reward, terminal=terminal_life_lost) if frame_number % UPDATE_FREQ == 0 and frame_number > REPLAY_MEMORY_START_SIZE: loss = learn(sess, my_replay_memory, MAIN_DQN, TARGET_DQN, BS, gamma = DISCOUNT_FACTOR) # (8★) loss_list.append(loss) if frame_number % NETW_UPDATE_FREQ == 0 and frame_number > REPLAY_MEMORY_START_SIZE: network_updater.update_networks(sess) # (9★) if terminal: terminal = False break rewards.append(episode_reward_sum) # Output the progress: if len(rewards) % 10 == 0: # Scalar summaries for tensorboard if frame_number > REPLAY_MEMORY_START_SIZE: summ = sess.run(PERFORMANCE_SUMMARIES, feed_dict={LOSS_PH:np.mean(loss_list), REWARD_PH:np.mean(rewards[-100:])}) SUMM_WRITER.add_summary(summ, frame_number) loss_list = [] # Histogram summaries for tensorboard summ_param = sess.run(PARAM_SUMMARIES) SUMM_WRITER.add_summary(summ_param, frame_number) dt_now = datetime.datetime.now().strftime('%Y-%m-%dT%H:%M:%S') print(dt_now, len(rewards), frame_number, np.mean(rewards[-100:])) with open('rewards.dat', 'a') as reward_file: reward_file.write('%s: %d %s %s\n' % (dt_now, len(rewards), frame_number, np.mean(rewards[-100:]))) ######################## ###### Evaluation ###### ######################## terminal = True gif = False frames_for_gif = [] eval_rewards = [] evaluate_frame_number = 0 for _ in range(EVAL_STEPS): if terminal: terminal_life_lost = atari.reset(sess, evaluation=True) episode_reward_sum = 0 terminal = False # Fire (action 1), when a life was lost or the game just started, # so that the agent does not stand around doing nothing. When playing # with other environments, you might want to change this...
action = 1 if terminal_life_lost else action_getter.get_action(sess, frame_number, atari.state, MAIN_DQN, evaluation=True) processed_new_frame, reward, terminal, terminal_life_lost, new_frame = atari.step(sess, action) evaluate_frame_number += 1 episode_reward_sum += reward if gif: frames_for_gif.append(new_frame) if terminal: eval_rewards.append(episode_reward_sum) gif = False # Save only the first game of the evaluation as a gif print("Evaluation score:\n", np.mean(eval_rewards)) # try: # generate_gif(frame_number, frames_for_gif, eval_rewards[0], PATH) # except IndexError: # print("No evaluation game finished") #Save the network parameters saver.save(sess, PATH+'/my_model', global_step=frame_number) frames_for_gif = [] # Show the evaluation score in tensorboard summ = sess.run(EVAL_SCORE_SUMMARY, feed_dict={EVAL_SCORE_PH:np.mean(eval_rewards)}) SUMM_WRITER.add_summary(summ, frame_number) with open('rewardsEval.dat', 'a') as eval_reward_file: eval_reward_file.write('%s %s\n' % (frame_number, np.mean(eval_rewards))) train() ```
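The `ActionGetter` above anneals the exploration rate ε in two linear segments, before which ε stays at its initial value. A framework-free sketch of that schedule (parameter defaults mirror the script's constants; the standalone function itself is an illustration, not part of the original code) makes the breakpoints easy to sanity-check:

```python
def epsilon_schedule(frame_number, eps_initial=1.0, eps_final=0.1,
                     eps_final_frame=0.01, eps_annealing_frames=1_000_000,
                     replay_memory_start_size=50_000, max_frames=30_000_000):
    """Piecewise-linear epsilon decay, mirroring ActionGetter's slopes and intercepts."""
    slope = -(eps_initial - eps_final) / eps_annealing_frames
    intercept = eps_initial - slope * replay_memory_start_size
    slope_2 = -(eps_final - eps_final_frame) / (
        max_frames - eps_annealing_frames - replay_memory_start_size)
    intercept_2 = eps_final_frame - slope_2 * max_frames
    if frame_number < replay_memory_start_size:
        return eps_initial                       # pure random exploration
    if frame_number < replay_memory_start_size + eps_annealing_frames:
        return slope * frame_number + intercept  # first segment: 1.0 -> 0.1
    return slope_2 * frame_number + intercept_2  # second segment: 0.1 -> 0.01

# Values close to 1.0, 0.1 and 0.01 at the three breakpoints
print(epsilon_schedule(0), epsilon_schedule(1_050_000), epsilon_schedule(30_000_000))
```

Note that the two segments meet continuously at `replay_memory_start_size + eps_annealing_frames`; that continuity is exactly what the slope/intercept algebra in the class guarantees.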
# Overview This is a Python package that helps process, filter, and analyze the Global Historical Climatology Network Daily dataset. # I. Import The first step is to import all the modules. Do this by using the following import statement. This will give you access to all available modules and classes that the package has to offer. There are currently four modules included. 1. preprocessor 2. stats 3. plotter 4. conversion The preprocessor has three classes which serve as the core building blocks for managing and manipulating the Global Historical Climatology Network Daily dataset. 1. StationPreprocessor 2. Station 3. ClimateVar ``` from GHCND import * ``` # II. The StationPreprocessor After you import, make a StationPreprocessor object that points to two files and one directory. The two files reference the necessary station metadata to create the data objects, while the directory points to the location of the .dly files. The .dly files are fixed-width text files that contain the station data. ``` sp = preprocessor.StationPreprocessor("D:/GHCND_data/ghcnd-stations.txt", "D:/GHCND_data/ghcnd-inventory.txt", "D:/GHCND_data/ghcnd_all.tar/ghcnd_all/ghcnd_all") ``` # III. Define which stations to fetch The next step is to define which countries you would like to process and summarize data for. In the case of the United States and Canada, you can also specify which state, province, or territory you would like to fetch data for. To set the countries and states for processing, use the addCountries and addStates methods. ```python StationPreprocessor.addCountries(country_names) StationPreprocessor.addStates(state_names) ``` Both country_names and state_names should be lists of strings. In the following code snippet, the United States is added as a country and the states of Wisconsin and Minnesota are added.
``` sp.addCountries(["united states"]) sp.addStates(["wisconsin","minnesota"]) ``` You can print or return the states and countries defined in the StationPreprocessor as follows. Notice that the internal representations of the states and countries are abbreviations, not the full names. ``` print(sp.states) print(sp.countries) print(sp.stations) ``` # IV. Build the Station objects After you have defined which stations you want to process data for, you can build the Station objects. Station objects have metadata about the station such as station ID, elevation, latitude, longitude, etc. ``` sp.addStations() ``` You can see that there are now station objects in the StationPreprocessor. ``` len(sp.stations) sp.stations[0] # returns the station object print(sp.stations[0]) # a human readable string ``` A station object has many attributes. At this point, each station object doesn't have any variables such as precipitation or temperature attached to it because it has not yet read the data files. ``` print("station ID: ", sp.stations[0].stationId) print("country: ", sp.stations[0].country) print("state: ", sp.stations[0].state) print("latitude: ", sp.stations[0].lat) print("longitude: ", sp.stations[0].lon) print("elevation: ", sp.stations[0].elev) print("is the station part of the u.s. historical climatology network: ", sp.stations[0].hcn) print("is the station part of the u.s. climate reference network: ", sp.stations[0].crn) print("world meteorological station number: ", sp.stations[0].wmoId) print("station variables: ", sp.stations[0].variables) ``` # V. Parse the daily data files It is finally time to parse the daily data files and add data to the station objects. After this step, the Station objects will have a dictionary where the keys are variable names such as "TMIN", "TMAX", "PRCP", etc. and the values associated with them are the ClimateVar objects. Use the ```python stationPreprocessor.processDlyFiles(variable_names) ``` method to begin reading the data files.
variable_names should be a list of strings representing the names of the variables you would like to include. ``` sp.processDlyFiles(["TMAX","TMIN","PRCP"]) ``` The daily data is now loaded into the StationPreprocessor. The following example tells us that there are two variables ("TMAX" and "TMIN") in the Station object at index 0 of the StationPreprocessor's stations attribute. The keys in the dictionary are associated with the ClimateVar objects. By printing the ClimateVar object we see that it holds maximum temperature data, at a daily temporal resolution, and its record begins on Jan 1, 1893 and ends on Oct 31, 2017. ``` sp.stations[0].variables print(sp.stations[0].variables["TMAX"]) ``` The temperature data from the GHCND is in tenths of degrees Celsius. You can convert this data by calling conversion.TenthsCelsiusToCelsius() and passing in the StationPreprocessor as an argument. ``` print(sp.stations[0].variables["TMAX"].data[:31]) print(sp.stations[0].variables["TMAX"].timelist[:31]) ``` # VI. Aggregate to monthly mean ``` stats.calculateMean(sp,"month") # this does additional data filtering. See reference materials for details. print(sp.stations[0].variables["TMAX"]) print(sp.stations[0].variables["TMAX"].data[:12]) print(sp.stations[0].variables["TMAX"].timelist[:12]) print(len(sp.stations)) ``` # VII. Utilities Up to this point, the data has gone through a filtering process. You can use the plotter module to plot timeseries of the ClimateVars. ``` plotter.plotStationSeries(sp.stations[0], "TMAX") plotter.plotStationSeries(sp.stations[0], "PRCP") ``` There are also export options: exportToShapefile - export the stations to a shapefile. This does not include the data values with it. exportToJSON - export the station's climate variable data to JSON. exportToDat - export the data to single-column text files. One file for each station and variable in the station preprocessor.
``` sp.exportToShapefile("D:/GHCND_data/minnesota_wisconsin_ghcnd.shp") sp.exportToJSON("D:/GHCND_data/minnesota_wisconsin_monthlydata.json") ```
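Two of the numeric steps the walkthrough relies on — converting GHCND's tenths of degrees Celsius and averaging daily values into monthly means — are simple enough to sketch without the package. The function names below are illustrative, not the package's API, and the aggregation omits the extra filtering that stats.calculateMean performs:

```python
def tenths_celsius_to_celsius(values):
    """GHCND temperature fields are stored as tenths of a degree Celsius."""
    return [v / 10.0 for v in values]

def monthly_mean(daily):
    """Average ((year, month, day), value) pairs by calendar month."""
    sums, counts = {}, {}
    for (year, month, _day), value in daily:
        key = (year, month)
        sums[key] = sums.get(key, 0.0) + value
        counts[key] = counts.get(key, 0) + 1
    return {key: sums[key] / counts[key] for key in sums}

# Three hypothetical daily TMAX readings in tenths of a degree Celsius
daily_tenths = [((1893, 1, 1), -55), ((1893, 1, 2), -65), ((1893, 2, 1), 10)]
daily_c = list(zip([d for d, _ in daily_tenths],
                   tenths_celsius_to_celsius([t for _, t in daily_tenths])))
print(monthly_mean(daily_c))  # → {(1893, 1): -6.0, (1893, 2): 1.0}
```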
``` import torch import numpy as np data = [[1, 2], [3, 4]] x_data = torch.tensor(data) np_array = np.array(data) x_np = torch.from_numpy(np_array) x_ones = torch.ones_like(x_data) # retains the properties of x_data print(f"Ones Tensor: \n {x_ones} \n") x_rand = torch.rand_like(x_data, dtype=torch.float) # overrides the datatype of x_data print(f"Random Tensor: \n {x_rand} \n") shape = (2, 3,) rand_tensor = torch.rand(shape) ones_tensor = torch.ones(shape) zeros_tensor = torch.zeros(shape) print(f"Random Tensor: \n {rand_tensor} \n") print(f"Ones Tensor: \n {ones_tensor} \n") print(f"Zeros Tensor: \n {zeros_tensor}") tensor = torch.rand(3, 4) print(f"Shape of tensor: {tensor.shape}") print(f"Datatype of tensor: {tensor.dtype}") print(f"Device tensor is stored on: {tensor.device}") # We move our tensor to the GPU if available if torch.cuda.is_available(): tensor = tensor.to('cuda') print(f"Device tensor is stored on: {tensor.device}") tensor = torch.ones(4, 4) tensor[:,1] = 0 print(tensor) t1 = torch.cat([tensor, tensor, tensor], dim=1) print(t1) # This computes the element-wise product print(f"tensor.mul(tensor) \n {tensor.mul(tensor)} \n") # Alternative syntax: print(f"tensor * tensor \n {tensor * tensor}") print(f"tensor.matmul(tensor.T) \n {tensor.matmul(tensor.T)} \n") # Alternative syntax: print(f"tensor @ tensor.T \n {tensor @ tensor.T}") print(tensor, "\n") tensor.add_(5) print(tensor) t = torch.ones(5) print(f"t: {t}") n = t.numpy() print(f"n: {n}") t.add_(1) print(f"t: {t}") print(f"n: {n}") n = np.ones(5) t = torch.from_numpy(n) np.add(n, 1, out=n) print(f"t: {t}") print(f"n: {n}") import torch, torchvision model = torchvision.models.resnet18(pretrained=True) data = torch.rand(1, 3, 64, 64) labels = torch.rand(1, 1000) prediction = model(data) # forward pass loss = (prediction - labels).sum() loss.backward() # backward pass optim = torch.optim.SGD(model.parameters(), lr=1e-2, momentum=0.9) optim.step() #gradient descent import torch a = torch.tensor([2., 
3.], requires_grad=True) b = torch.tensor([6., 4.], requires_grad=True) Q = 3*a**3 - b**2 external_grad = torch.tensor([1., 1.]) Q.backward(gradient=external_grad) # check if collected gradients are correct print(9*a**2 == a.grad) print(-2*b == b.grad) x = torch.rand(5, 5) y = torch.rand(5, 5) z = torch.rand((5, 5), requires_grad=True) a = x + y print(f"Does `a` require gradients? : {a.requires_grad}") b = x + z print(f"Does `b` require gradients?: {b.requires_grad}") from torch import nn, optim model = torchvision.models.resnet18(pretrained=True) # Freeze all the parameters in the network for param in model.parameters(): param.requires_grad = False model.fc = nn.Linear(512, 10) # Optimize only the classifier optimizer = optim.SGD(model.parameters(), lr=1e-2, momentum=0.9) import torch import torch.nn as nn import torch.nn.functional as F class Net(nn.Module): def __init__(self): super(Net, self).__init__() # 1 input image channel, 6 output channels, 5x5 square convolution # kernel self.conv1 = nn.Conv2d(1, 6, 5) self.conv2 = nn.Conv2d(6, 16, 5) # an affine operation: y = Wx + b self.fc1 = nn.Linear(16 * 5 * 5, 120) # 5*5 from image dimension self.fc2 = nn.Linear(120, 84) self.fc3 = nn.Linear(84, 10) def forward(self, x): # Max pooling over a (2, 2) window x = F.max_pool2d(F.relu(self.conv1(x)), (2, 2)) # If the size is a square, you can specify with a single number x = F.max_pool2d(F.relu(self.conv2(x)), 2) x = torch.flatten(x, 1) # flatten all dimensions except the batch dimension x = F.relu(self.fc1(x)) x = F.relu(self.fc2(x)) x = self.fc3(x) return x net = Net() print(net) params = list(net.parameters()) print(len(params)) print(params[0].size()) # conv1's .weight input = torch.randn(1, 1, 32, 32) out = net(input) print(out) net.zero_grad() out.backward(torch.randn(1, 10)) output = net(input) target = torch.randn(10) # a dummy target, for example target = target.view(1, -1) # make it the same shape as output criterion = nn.MSELoss() loss = criterion(output, 
target) print(loss) print(loss.grad_fn) # MSELoss print(loss.grad_fn.next_functions[0][0]) # Linear print(loss.grad_fn.next_functions[0][0].next_functions[0][0]) # ReLU net.zero_grad() # zeroes the gradient buffers of all parameters print('conv1.bias.grad before backward') print(net.conv1.bias.grad) loss.backward() print('conv1.bias.grad after backward') print(net.conv1.bias.grad) learning_rate = 0.01 for f in net.parameters(): f.data.sub_(f.grad.data * learning_rate) import torch.optim as optim # create your optimizer optimizer = optim.SGD(net.parameters(), lr=0.01) # in your training loop: optimizer.zero_grad() # zero the gradient buffers output = net(input) loss = criterion(output, target) loss.backward() optimizer.step() # Does the update ```
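The last cells update parameters first by hand (`f.data.sub_(f.grad.data * learning_rate)`) and then via `optim.SGD`; both perform the same vanilla gradient-descent step. As a framework-free sketch, the same step can minimize f(w) = (w - 3)^2:

```python
def sgd_minimize(w, grad_fn, lr=0.1, steps=100):
    """Plain gradient descent: w <- w - lr * grad(w), as in the manual loop above."""
    for _ in range(steps):
        w -= lr * grad_fn(w)
    return w

# f(w) = (w - 3)^2 has gradient 2 * (w - 3) and its minimum at w = 3
w_star = sgd_minimize(0.0, lambda w: 2.0 * (w - 3.0))
print(round(w_star, 6))  # → 3.0
```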
<a href="https://colab.research.google.com/github/VikriAulia/Tensorflow-Deep-Learning-Speech-Recognition/blob/master/Speech_emotion_classification.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> ## Importing the required libraries ``` import librosa import librosa.display import numpy as np import matplotlib.pyplot as plt import tensorflow as tf from matplotlib.pyplot import specgram import keras from keras.preprocessing import sequence from keras.models import Sequential from keras.layers import Dense, Embedding from keras.layers import LSTM from keras.preprocessing.text import Tokenizer from keras.preprocessing.sequence import pad_sequences from keras.utils import to_categorical, get_file from keras.layers import Input, Flatten, Dropout, Activation from keras.layers import Conv1D, MaxPooling1D, AveragePooling1D from keras.models import Model from keras.callbacks import * from sklearn.metrics import confusion_matrix from fnmatch import fnmatch import pandas as pd import os %load_ext tensorboard RUN = 1 from google.colab import drive drive.mount('/content/drive', force_remount=True) from keras import regularizers import os RawData = '/content/RawData/SpeechEmotion/' !ls /content/drive/My\ Drive/Dataset/Speech/SpeechEmotion/ #!unzip -q /content/drive/My\ Drive/Dataset/Speech/SpeechEmotion/SpeechEmotion.zip -d RawData def getlistOfFiles(dirName): allFiles = list() for path, subdirs, files in os.walk(dirName): for name in files: if fnmatch(name, '*.wav'): allFiles.append(name) return allFiles mylist= getlistOfFiles(RawData) print(type(mylist)) print(len(mylist)) print(mylist[1400]) print(mylist[400][6:-16]) ``` ## Plotting the audio file's waveform and its spectrogram ``` data, sampling_rate = librosa.load(RawData+'a01.wav') % pylab inline import os import pandas as pd import librosa import glob plt.figure(figsize=(15, 5)) librosa.display.waveplot(data, sr=sampling_rate) import matplotlib.pyplot as 
plt import scipy.io.wavfile import numpy as np import sys sr,x = scipy.io.wavfile.read(RawData+'a01.wav') ## Parameters: 10ms step, 30ms window nstep = int(sr * 0.01) nwin = int(sr * 0.03) nfft = nwin window = np.hamming(nwin) ## will take windows x[n1:n2]. generate ## and loop over n2 such that all frames ## fit within the waveform nn = range(nwin, len(x), nstep) X = np.zeros( (len(nn), nfft//2) ) for i,n in enumerate(nn): xseg = x[n-nwin:n] z = np.fft.fft(window * xseg, nfft) X[i,:] = np.log(np.abs(z[:nfft//2])) plt.imshow(X.T, interpolation='nearest', origin='lower', aspect='auto') plt.show() ``` ## Setting the labels ``` feeling_list=[] for item in mylist: if item[6:-16]=='02' and int(item[18:-4])%2==0: feeling_list.append('female_calm') elif item[6:-16]=='01' and int(item[18:-4])%2==1: feeling_list.append('male_neutral') elif item[6:-16]=='02' and int(item[18:-4])%2==1: feeling_list.append('male_calm') elif item[6:-16]=='01' and int(item[18:-4])%2==0: feeling_list.append('female_neutral') elif item[6:-16]=='03' and int(item[18:-4])%2==0: feeling_list.append('female_happy') elif item[6:-16]=='03' and int(item[18:-4])%2==1: feeling_list.append('male_happy') elif item[6:-16]=='04' and int(item[18:-4])%2==0: feeling_list.append('female_sad') elif item[6:-16]=='04' and int(item[18:-4])%2==1: feeling_list.append('male_sad') elif item[6:-16]=='05' and int(item[18:-4])%2==0: feeling_list.append('female_angry') elif item[6:-16]=='05' and int(item[18:-4])%2==1: feeling_list.append('male_angry') elif item[6:-16]=='06' and int(item[18:-4])%2==0: feeling_list.append('female_fearful') elif item[6:-16]=='06' and int(item[18:-4])%2==1: feeling_list.append('male_fearful') elif item[6:-16]=='07' and int(item[18:-4])%2==0: feeling_list.append('female_disgust') elif item[6:-16]=='07' and int(item[18:-4])%2==1: feeling_list.append('male_disgust') elif item[6:-16]=='08' and int(item[18:-4])%2==0: feeling_list.append('female_surprise') elif item[6:-16]=='08' and 
int(item[18:-4])%2==1: feeling_list.append('male_surprise') elif item[:1]=='a': feeling_list.append('male_angry') elif item[:1]=='d': feeling_list.append('male_disgust') elif item[:1]=='f': feeling_list.append('male_fearful') elif item[:1]=='h': feeling_list.append('male_happy') elif item[:1]=='n': feeling_list.append('male_neutral') elif item[:2]=='sa': feeling_list.append('male_sad') elif item[:2]=='su': feeling_list.append('male_surprise') else: print(item) labels = pd.DataFrame(feeling_list) len(labels) len(mylist) ``` ## Getting the features of audio files using librosa ``` df = pd.DataFrame(columns=['feature']) for index,y in enumerate(mylist): X, sample_rate = librosa.load(RawData+y, res_type='kaiser_fast',duration=3, sr=16000, offset=0.5) sample_rate = np.array(sample_rate) mfccs = np.mean(librosa.feature.mfcc(y=X, sr=sample_rate, n_mfcc=80), axis=0) feature = mfccs #[float(i) for i in feature] #feature1=feature[:135] df.loc[index] = [-(feature/100)] df[:5] df3 = pd.DataFrame(df['feature'].values.tolist()) ``` df3[:5] ``` newdf = pd.concat([df3,labels], axis=1) rnewdf = newdf.rename(index=str, columns={"0": "label"}) rnewdf[:5] from sklearn.utils import shuffle rnewdf = shuffle(newdf) rnewdf[:10] rnewdf=rnewdf.fillna(0) #rnewdf=rnewdf.dropna() len(rnewdf) ``` ## Dividing the data into test and train ``` newdf1 = np.random.rand(len(rnewdf)) < 0.8 train = rnewdf[newdf1] test = rnewdf[~newdf1] train = shuffle(train) train[260:270] trainfeatures = train.iloc[:, :-1] trainlabel = train.iloc[:, -1:] testfeatures = test.iloc[:, :-1] testlabel = test.iloc[:, -1:] from keras.utils import np_utils from sklearn.preprocessing import LabelEncoder X_train = np.array(trainfeatures) y_train = np.array(trainlabel) X_test = np.array(testfeatures) y_test = np.array(testlabel) lb = LabelEncoder() y_train = np_utils.to_categorical(lb.fit_transform(y_train)) y_test = np_utils.to_categorical(lb.fit_transform(y_test)) X_train.shape ``` ## Changing dimension for CNN model ``` 
x_traincnn =np.expand_dims(X_train, axis=2) x_testcnn= np.expand_dims(X_test, axis=2) x_traincnn.shape model = Sequential() model.add(Conv1D(128, 5,padding='same', input_shape=(94,1))) model.add(Activation('relu')) model.add(Conv1D(128, 5,padding='same')) model.add(Activation('relu')) model.add(Dropout(0.1)) model.add(MaxPooling1D(pool_size=(8))) model.add(Conv1D(128, 5,padding='same',)) model.add(Activation('relu')) model.add(Conv1D(128, 5,padding='same',)) model.add(Activation('relu')) model.add(Conv1D(128, 5,padding='same',)) model.add(Activation('relu')) model.add(Dropout(0.2)) model.add(Conv1D(128, 5,padding='same',)) model.add(Activation('relu')) model.add(Flatten()) #model.add(Dense(128)) #model.add(Activation('relu')) #model.add(Dropout(0.2)) #model.add(Dense(64)) #model.add(Activation('relu')) #model.add(Dropout(0.2)) model.add(Dense(16)) model.add(Activation('softmax')) model.summary() model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy']) ``` ### Removed the whole training part for avoiding unnecessary long epochs list ``` import math def step_decay(epoch): initial_lrate = 0.001 drop = 0.2 epochs_drop = 10.0 lrate = initial_lrate * math.pow(drop, math.floor((1+epoch)/epochs_drop)) if (lrate < 4e-5): lrate = 4e-5 print('Changing learning rate to {}'.format(lrate)) return lrate def dotrain(): global RUN RUN+=1 print("RUN {}".format(RUN)) LOG_DIR = '/content/drive/My Drive/Model/speechEmotion/output/training_logs/run-{}'.format(RUN) LOG_FILE_PATH = LOG_DIR + '/checkpoint-{epoch:02d}-{val_loss:.4f}.hdf5' tensorboard = TensorBoard(log_dir=LOG_DIR, histogram_freq=1, write_grads=False, write_graph=False) #tensorboard = TensorBoard(log_dir=LOG_DIR, write_graph=True) checkpoint = ModelCheckpoint(filepath=LOG_FILE_PATH, monitor='val_loss', verbose=1, save_best_only=True) early_stopping = EarlyStopping(monitor='val_loss', patience=50, verbose=1) lrate = LearningRateScheduler(step_decay) history=model.fit(x_traincnn, y_train, 
batch_size=32, epochs=1000, validation_data=(x_testcnn, y_test), callbacks=[tensorboard, checkpoint, early_stopping, lrate]) return history print(len(x_traincnn)) cnnhistory = dotrain() plt.plot(cnnhistory.history['loss']) plt.plot(cnnhistory.history['val_loss']) plt.title('model loss') plt.ylabel('loss') plt.xlabel('epoch') plt.legend(['train', 'test'], loc='upper left') plt.show() ``` ## Saving the model ``` model_name = 'Emotion_Voice_Detection_Model.h5' save_dir = os.path.join(os.getcwd(), 'saved_models') # Save model and weights if not os.path.isdir(save_dir): os.makedirs(save_dir) model_path = os.path.join(save_dir, model_name) model.save(model_path) print('Saved trained model at %s ' % model_path) import json model_json = model.to_json() with open("model.json", "w") as json_file: json_file.write(model_json) ``` ## Loading the model ``` # loading json and creating model from keras.models import model_from_json json_file = open('model.json', 'r') loaded_model_json = json_file.read() json_file.close() loaded_model = model_from_json(loaded_model_json) # load weights into new model loaded_model.load_weights("saved_models/Emotion_Voice_Detection_Model.h5") print("Loaded model from disk") # evaluate loaded model on test data loaded_model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy']) score = loaded_model.evaluate(x_testcnn, y_test, verbose=0) print("%s: %.2f%%" % (loaded_model.metrics_names[1], score[1]*100)) ``` ## Predicting emotions on the test data ``` preds = loaded_model.predict(x_testcnn, batch_size=32, verbose=1) preds preds1=preds.argmax(axis=1) preds1 abc = preds1.astype(int).flatten() predictions = (lb.inverse_transform((abc))) preddf = pd.DataFrame({'predictedvalues': predictions}) preddf[:10] actual=y_test.argmax(axis=1) abc123 = actual.astype(int).flatten() actualvalues = (lb.inverse_transform((abc123))) actualdf = pd.DataFrame({'actualvalues': actualvalues}) actualdf[:10] finaldf = actualdf.join(preddf) ``` ## Actual
vs. Predicted emotions

```
finaldf[170:180]
finaldf.groupby('actualvalues').count()
finaldf.groupby('predictedvalues').count()
finaldf.to_csv('Predictions.csv', index=False)
```

## Live Demo

#### The file 'output10.wav' in the next cell was recorded live using the code in the AudioRecoreder notebook found in the repository

```
data, sampling_rate = librosa.load('output10.wav')

%pylab inline
import os
import pandas as pd
import librosa
import librosa.display
import glob

plt.figure(figsize=(15, 5))
librosa.display.waveplot(data, sr=sampling_rate)

#livedf= pd.DataFrame(columns=['feature'])
X, sample_rate = librosa.load('output10.wav', res_type='kaiser_fast', duration=2.5, sr=22050*2, offset=0.5)
sample_rate = np.array(sample_rate)
mfccs = np.mean(librosa.feature.mfcc(y=X, sr=sample_rate, n_mfcc=13), axis=0)
featurelive = mfccs
livedf2 = featurelive
livedf2 = pd.DataFrame(data=livedf2)
livedf2 = livedf2.stack().to_frame().T
livedf2

twodim = np.expand_dims(livedf2, axis=2)
livepreds = loaded_model.predict(twodim, batch_size=32, verbose=1)
livepreds
livepreds1 = livepreds.argmax(axis=1)
liveabc = livepreds1.astype(int).flatten()
livepredictions = lb.inverse_transform(liveabc)
livepredictions
```
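As a quick sanity check, the `step_decay` schedule used in the training callbacks earlier can be exercised in isolation. This sketch refactors the same logic, with the original constants pulled out as keyword arguments so the schedule can be inspected without a model:

```python
import math

def step_decay(epoch, initial_lrate=0.001, drop=0.2, epochs_drop=10.0, floor=4e-5):
    """Drop the learning rate by a factor of `drop` every `epochs_drop` epochs, floored at `floor`."""
    lrate = initial_lrate * math.pow(drop, math.floor((1 + epoch) / epochs_drop))
    return max(lrate, floor)

# The rate steps down by a factor of 5 at epochs 9, 19, ... and then saturates at the floor
for epoch in (0, 9, 19, 29):
    print(epoch, step_decay(epoch))
```

With these constants the rate starts at 1e-3, drops to 2e-4 at epoch 9, and sits at the 4e-5 floor from epoch 19 onward, which matches the clamping branch in the training cell.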
# Introduction to Machine Learning Nanodegree

## Project: Finding Donors for *CharityML*

In this project, we employ several supervised algorithms to accurately model individuals' income using data collected from the 1994 U.S. Census. The best candidate algorithm is then chosen from preliminary results and is further optimized to best model the data. The goal with this implementation is to construct a model that accurately predicts whether an individual makes more than \$50,000. This sort of task can arise in a non-profit setting, where organizations survive on donations. Understanding an individual's income can help a non-profit better understand how large of a donation to request, or whether or not they should reach out to begin with. While it can be difficult to determine an individual's general income bracket directly from public sources, we can (as we will see) infer this value from other publicly available features.

The dataset for this project originates from the [UCI Machine Learning Repository](https://archive.ics.uci.edu/ml/datasets/Census+Income). The dataset was donated by Ron Kohavi and Barry Becker, after being published in the article _"Scaling Up the Accuracy of Naive-Bayes Classifiers: A Decision-Tree Hybrid"_. You can find the article by Ron Kohavi [online](https://www.aaai.org/Papers/KDD/1996/KDD96-033.pdf). The data we investigate here consists of small changes to the original dataset, such as removing the `'fnlwgt'` feature and records with missing or ill-formatted entries.

----

## Exploring the Data

Run the code cell below to load necessary Python libraries and load the census data. Note that the last column from this dataset, `'income'`, will be our target label (whether an individual makes more than, or at most, \$50,000 annually). All other columns are features about each individual in the census database.
```
# Import libraries necessary for this project
import numpy as np
import pandas as pd
from time import time
from IPython.display import display # Allows the use of display() for DataFrames

# Import supplementary visualization code visuals.py
import visuals as vs

# Pretty display for notebooks
%matplotlib inline

# Load the Census dataset
data = pd.read_csv("census.csv")

# Display the first record
display(data.head(5))
```

### Implementation: Data Exploration

A cursory investigation of the dataset will determine how many individuals fit into either group, and will tell us about the percentage of these individuals making more than \$50,000. In the code cell below, the following information is computed:

- The total number of records, `'n_records'`
- The number of individuals making more than \$50,000 annually, `'n_greater_50k'`.
- The number of individuals making at most \$50,000 annually, `'n_at_most_50k'`.
- The percentage of individuals making more than \$50,000 annually, `'greater_percent'`.

```
# Total number of records
n_records = data.shape[0]

# Number of records where individual's income is more than $50,000
n_greater_50k = data['income'].value_counts()[1]

# Number of records where individual's income is at most $50,000
n_at_most_50k = data['income'].value_counts()[0]

# Percentage of individuals whose income is more than $50,000
greater_percent = 100 * (n_greater_50k / (n_greater_50k + n_at_most_50k))

# Print the results
print("Total number of records: {}".format(n_records))
print("Individuals making more than $50,000: {}".format(n_greater_50k))
print("Individuals making at most $50,000: {}".format(n_at_most_50k))
print("Percentage of individuals making more than $50,000: {}%".format(greater_percent))

# Check whether records are consistent
if n_records == (n_greater_50k + n_at_most_50k):
    print('Records are consistent!')
```

**Featureset Exploration**

* **age**: continuous.
* **workclass**: Private, Self-emp-not-inc, Self-emp-inc, Federal-gov, Local-gov, State-gov, Without-pay, Never-worked.
* **education**: Bachelors, Some-college, 11th, HS-grad, Prof-school, Assoc-acdm, Assoc-voc, 9th, 7th-8th, 12th, Masters, 1st-4th, 10th, Doctorate, 5th-6th, Preschool.
* **education-num**: continuous.
* **marital-status**: Married-civ-spouse, Divorced, Never-married, Separated, Widowed, Married-spouse-absent, Married-AF-spouse.
* **occupation**: Tech-support, Craft-repair, Other-service, Sales, Exec-managerial, Prof-specialty, Handlers-cleaners, Machine-op-inspct, Adm-clerical, Farming-fishing, Transport-moving, Priv-house-serv, Protective-serv, Armed-Forces.
* **relationship**: Wife, Own-child, Husband, Not-in-family, Other-relative, Unmarried.
* **race**: Black, White, Asian-Pac-Islander, Amer-Indian-Eskimo, Other.
* **sex**: Female, Male.
* **capital-gain**: continuous.
* **capital-loss**: continuous.
* **hours-per-week**: continuous.
* **native-country**: United-States, Cambodia, England, Puerto-Rico, Canada, Germany, Outlying-US(Guam-USVI-etc), India, Japan, Greece, South, China, Cuba, Iran, Honduras, Philippines, Italy, Poland, Jamaica, Vietnam, Mexico, Portugal, Ireland, France, Dominican-Republic, Laos, Ecuador, Taiwan, Haiti, Columbia, Hungary, Guatemala, Nicaragua, Scotland, Thailand, Yugoslavia, El-Salvador, Trinadad&Tobago, Peru, Hong, Holand-Netherlands.

----

## Preparing the Data

Before data can be used as input for machine learning algorithms, it often must be cleaned, formatted, and restructured — this is typically known as **preprocessing**. Fortunately, for this dataset, there are no invalid or missing entries we must deal with; however, there are some qualities about certain features that must be adjusted. This preprocessing can help tremendously with the outcome and predictive power of nearly all learning algorithms.
### Transforming Skewed Continuous Features

A dataset may sometimes contain at least one feature whose values tend to lie near a single number, but will also have a non-trivial number of vastly larger or smaller values than that single number. Algorithms can be sensitive to such distributions of values and can underperform if the range is not properly normalized. With the census dataset, two features fit this description: `'capital-gain'` and `'capital-loss'`. Run the code cell below to plot a histogram of these two features. Note the range of the values present and how they are distributed.

```
# Split the data into features and target label
income_raw = data['income']
features_raw = data.drop('income', axis = 1)

# Visualize skewed continuous features of original data
vs.distribution(data)
```

For highly-skewed feature distributions such as `'capital-gain'` and `'capital-loss'`, it is common practice to apply a <a href="https://en.wikipedia.org/wiki/Data_transformation_(statistics)">logarithmic transformation</a> on the data so that the very large and very small values do not negatively affect the performance of a learning algorithm. Using a logarithmic transformation significantly reduces the range of values caused by outliers. Care must be taken when applying this transformation, however: the logarithm of `0` is undefined, so we must translate the values by a small amount above `0` to apply the logarithm successfully. Run the code cell below to perform a transformation on the data and visualize the results. Again, note the range of values and how they are distributed.
```
# Log-transform the skewed features
skewed = ['capital-gain', 'capital-loss']
features_log_transformed = pd.DataFrame(data = features_raw)
features_log_transformed[skewed] = features_raw[skewed].apply(lambda x: np.log(x + 1))

# Visualize the new log distributions
vs.distribution(features_log_transformed, transformed = True)
```

### Normalizing Numerical Features

In addition to performing transformations on features that are highly skewed, it is often good practice to perform some type of scaling on numerical features. Applying a scaling to the data does not change the shape of each feature's distribution (such as `'capital-gain'` or `'capital-loss'` above); however, normalization ensures that each feature is treated equally when applying supervised learners. Note that once scaling is applied, observing the data in its raw form will no longer have the same original meaning, as exemplified below. Run the code cell below to normalize each numerical feature. We will use [`sklearn.preprocessing.MinMaxScaler`](http://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.MinMaxScaler.html) for this.

```
# Import sklearn.preprocessing.MinMaxScaler
from sklearn.preprocessing import MinMaxScaler

# Initialize a scaler, then apply it to the features
scaler = MinMaxScaler() # default=(0, 1)
numerical = ['age', 'education-num', 'capital-gain', 'capital-loss', 'hours-per-week']

features_log_minmax_transform = pd.DataFrame(data = features_log_transformed)
features_log_minmax_transform[numerical] = scaler.fit_transform(features_log_transformed[numerical])

# Show an example of a record with scaling applied
display(features_log_minmax_transform.head(n = 5))
```

### Data Preprocessing

From the table in **Exploring the Data** above, we can see there are several features for each record that are non-numeric. Typically, learning algorithms expect input to be numeric, which requires that non-numeric features (called *categorical variables*) be converted.
One popular way to convert categorical variables is by using the **one-hot encoding** scheme. One-hot encoding creates a _"dummy"_ variable for each possible category of each non-numeric feature. For example, assume `someFeature` has three possible entries: `A`, `B`, or `C`. We then encode this feature into `someFeature_A`, `someFeature_B` and `someFeature_C`.

| | someFeature | | someFeature_A | someFeature_B | someFeature_C |
| :-: | :-: | :-: | :-: | :-: | :-: |
| 0 | B | | 0 | 1 | 0 |
| 1 | C | ----> one-hot encode ----> | 0 | 0 | 1 |
| 2 | A | | 1 | 0 | 0 |

Additionally, as with the non-numeric features, we need to convert the non-numeric target label, `'income'`, to numerical values for the learning algorithm to work. Since there are only two possible categories for this label ("<=50K" and ">50K"), we can avoid using one-hot encoding and simply encode these two categories as `0` and `1`, respectively. In the code cell below, you will need to implement the following:

- Use [`pandas.get_dummies()`](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.get_dummies.html?highlight=get_dummies#pandas.get_dummies) to perform one-hot encoding on the `'features_log_minmax_transform'` data.
- Convert the target label `'income_raw'` to numerical entries.
  - Set records with "<=50K" to `0` and records with ">50K" to `1`.
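As a small illustration with made-up data, the `someFeature` example above can be reproduced directly with `pandas.get_dummies`:

```python
import pandas as pd

# Hypothetical three-row frame matching the someFeature example above
df = pd.DataFrame({'someFeature': ['B', 'C', 'A']})

# One dummy column per category, prefixed with the original feature name
encoded = pd.get_dummies(df['someFeature'], prefix='someFeature')
print(encoded)
```

Applied to an entire DataFrame, `get_dummies` encodes every non-numeric column at once and passes numeric columns through unchanged, which is why a single call suffices for the full feature set in the next cell.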
```
# One-hot encode the 'features_log_minmax_transform' data using pandas.get_dummies()
features_final = pd.get_dummies(features_log_minmax_transform)

# Encode the 'income_raw' data to numerical values
income = income_raw.replace(to_replace = {'<=50K': 0, '>50K': 1})

# Print the number of features after one-hot encoding
encoded = list(features_final.columns)
print("{} total features after one-hot encoding.".format(len(encoded)))

# Uncomment the following line to see the encoded feature names
#print(encoded)
```

### Shuffle and Split Data

Now all _categorical variables_ have been converted into numerical features, and all numerical features have been normalized. As always, we will now split the data (both features and their labels) into training and test sets. 80% of the data will be used for training and 20% for testing. Run the code cell below to perform this split.

```
# Import train_test_split
from sklearn.model_selection import train_test_split

# Split the 'features' and 'income' data into training and testing sets
X_train, X_test, y_train, y_test = train_test_split(features_final, income, test_size = 0.2, random_state = 0)

# Show the results of the split
print("Training set has {} samples.".format(X_train.shape[0]))
print("Testing set has {} samples.".format(X_test.shape[0]))
```

----

## Evaluating Model Performance

In this section, we will investigate four different algorithms, and determine which is best at modeling the data. Three of these algorithms will be supervised learners of our choice, and the fourth algorithm is known as a *naive predictor*.

### Metrics and the Naive Predictor

*CharityML*, equipped with their research, knows individuals that make more than \$50,000 are most likely to donate to their charity. Because of this, *CharityML* is particularly interested in accurately predicting who makes more than \$50,000. It would seem that using **accuracy** as a metric for evaluating a particular model's performance would be appropriate.
Additionally, identifying someone that *does not* make more than \$50,000 as someone who does would be detrimental to *CharityML*, since they are looking to find individuals willing to donate. Therefore, a model's ability to precisely predict those that make more than \$50,000 is *more important* than the model's ability to **recall** those individuals. We can use the **F-beta score** as a metric that considers both precision and recall:

$$ F_{\beta} = (1 + \beta^2) \cdot \frac{precision \cdot recall}{\left( \beta^2 \cdot precision \right) + recall} $$

In particular, when $\beta = 0.5$, more emphasis is placed on precision. This is called the **F$_{0.5}$ score** (or F-score for simplicity).

Looking at the distribution of classes (those who make at most \$50,000, and those who make more), it's clear most individuals do not make more than \$50,000. This can greatly affect accuracy, since we could simply say "this person does not make more than \$50,000" and generally be right, without ever looking at the data! Making such a statement would be called naive, since we have not considered any information to substantiate the claim. It is always important to consider the naive prediction for your data, to help establish a benchmark for whether a model is performing well. That being said, using that prediction would be pointless: if we predicted all people made less than \$50,000, CharityML would identify no one as donors.

#### Note: Recap of accuracy, precision, recall

**Accuracy** measures how often the classifier makes the correct prediction. It's the ratio of the number of correct predictions to the total number of predictions (the number of test data points).

**Precision** tells us what proportion of messages we classified as spam actually were spam.
It is a ratio of true positives (messages classified as spam that are actually spam) to all positives (all messages classified as spam, irrespective of whether that was the correct classification); in other words, it is the ratio of `[True Positives/(True Positives + False Positives)]`.

**Recall (sensitivity)** tells us what proportion of messages that actually were spam were classified by us as spam. It is a ratio of true positives (messages classified as spam that are actually spam) to all the messages that were actually spam; in other words, it is the ratio of `[True Positives/(True Positives + False Negatives)]`.

For classification problems that are skewed in their classification distributions, like in our case, accuracy by itself is not a very good metric. For example, if we had 100 text messages and only 2 were spam and the other 98 weren't, we could classify 90 messages as not spam (including the 2 that were spam, which would then be false negatives) and 10 as spam (all 10 false positives) and still get a reasonably good accuracy score. For such cases, precision and recall come in very handy. These two metrics can be combined to get the F1 score, which is the weighted average (harmonic mean) of the precision and recall scores. This score can range from 0 to 1, with 1 being the best possible F1 score (we take the harmonic mean as we are dealing with ratios).

### Naive Predictor Performance

If we chose a model that always predicted an individual made more than \$50,000, what would that model's accuracy and F-score be on this dataset? You must use the code cell below and assign your results to `'accuracy'` and `'fscore'` to be used later.

**Please note** that the purpose of generating a naive predictor is simply to show what a base model without any intelligence would look like. In the real world, ideally your base model would be either the results of a previous model or could be based on a research paper upon which you are looking to improve.
When there is no benchmark model set, getting a result better than random choice is a place you could start from.

**Notes:**

* When we have a model that always predicts '1' (i.e. the individual makes more than 50k), then our model will have no True Negatives (TN) or False Negatives (FN), as we are not making any negative ('0' value) predictions. Therefore, our Accuracy in this case becomes the same as our Precision (True Positives/(True Positives + False Positives)), as every prediction with value '1' that should have been '0' becomes a False Positive; the denominator in this case is the total number of records.
* Our Recall score (True Positives/(True Positives + False Negatives)) in this setting becomes 1, as we have no False Negatives.

```
'''
TP = np.sum(income) # Counting the ones as this is the naive case. Note that 'income' is the
                    # 'income_raw' data encoded to numerical values during the data preprocessing step.
FP = income.count() - TP # Specific to the naive case
TN = 0 # No predicted negatives in the naive case
FN = 0 # No predicted negatives in the naive case
'''

# Calculate accuracy, precision and recall
TP = np.sum(income)
FP = income.count() - TP
TN, FN = 0, 0

accuracy = (TP + TN) / (TP + TN + FP + FN)
recall = TP / (TP + FN)
precision = TP / (TP + FP)

# Calculate F-score using the formula above for beta = 0.5 and correct values for precision and recall.
beta = 0.5 # Define beta
fscore = (1 + beta**2) * (precision * recall) / (beta**2 * precision + recall)

# Print the results
print("Naive Predictor: [Accuracy score: {:.4f}, F-score: {:.4f}]".format(accuracy, fscore))
```

### Supervised Learning Models

**The following are some of the supervised learning models that are currently available in** [`scikit-learn`](http://scikit-learn.org/stable/supervised_learning.html) **that you may choose from:**

- Gaussian Naive Bayes (GaussianNB)
- Decision Trees
- Ensemble Methods (Bagging, AdaBoost, Random Forest, Gradient Boosting)
- K-Nearest Neighbors (KNeighbors)
- Stochastic Gradient Descent Classifier (SGDC)
- Support Vector Machines (SVM)
- Logistic Regression

### Model Application

List three of the supervised learning models above that are appropriate for this problem that you will test on the census data. For each model chosen:

- Describe one real-world application in industry where the model can be applied.
- What are the strengths of the model; when does it perform well?
- What are the weaknesses of the model; when does it perform poorly?
- What makes this model a good candidate for the problem, given what you know about the data?

### Decision Trees

**Describe one real-world application in industry where the model can be applied.**

Decision trees can be used for "Identifying Defective Products in the Manufacturing Process". [1] In this regard, decision trees are used as a classification algorithm that is trained on data with features of products that the company manufactures, as well as the labels "Defective" and "Non-defective". After the training process, the model should be able to group products into "Defective" and "Non-defective" categories and predict whether a manufactured product is defective or not.

**What are the strengths of the model; when does it perform well?**

1. The data pre-processing step for decision trees requires less effort compared to other algorithms (e.g.
no need to normalize/scale data or impute missing values). [2]
2. The way the algorithm works is very intuitive, and thus easier to understand and explain. In addition, decision trees can be used as a white-box model. [3]

**What are the weaknesses of the model; when does it perform poorly?**

1. Because decision trees are so simple, there is often a need for more complex algorithms (e.g. Random Forest) to achieve a higher accuracy. [3]
2. Decision trees have the tendency to overfit the training set. [3]
3. Decision trees are unstable. The reproducibility of a decision tree model is unreliable, since the structure is sensitive to even small changes in the data. [3]
4. Decision trees can get complex and computationally expensive. [3]

**What makes this model a good candidate for the problem, given what you know about the data?**

I think this model is a good candidate in this situation because, as a white-box model trained on well-defined features, it might provide further insights which CharityML can rely on. For example, CharityML identified that the most relevant parameter when it comes to determining donation likelihood is individual income. A decision tree model may find highly accurate predictors of income that can simplify the current process and help draw more valuable conclusions such as this one. Moreover, due to the algorithm's simplicity, the charity members will have the capacity to intuitively understand its basic internal processes.

**References**

[[1]](http://www.kpubs.org/article/articleDownload.kpubs?downType=pdf&articleANo=E1CTBR_2017_v13n2_57) [[2]](https://medium.com/@dhiraj8899/top-5-advantages-and-disadvantages-of-decision-tree-algorithm-428ebd199d9a) [[3]](https://botbark.com/2019/12/19/top-6-advantages-and-disadvantages-of-decision-tree-algorithm/)

### Ensemble Methods (AdaBoost)

**Describe one real-world application in industry where the model can be applied.**

The AdaBoost algorithm can be applied for "Telecommunication Fraud Detection".
[1] The model is trained using features of past telecommunication messages along with whether they ended up being fraudulent or not (the labels). Then, the AdaBoost model should be able to predict whether future telecommunication material is fraudulent or not.

**What are the strengths of the model; when does it perform well?**

1. High flexibility. Different classification algorithms (decision trees, SVMs, etc.) can be used as weak learners to finally constitute a strong learner (the final model). [2]
2. High precision. Experiments have shown AdaBoost models to achieve relatively high precision when making predictions. [3]
3. Simple preprocessing. AdaBoost algorithms are not too demanding when it comes to preprocessed data, so more time is saved during the pre-processing step. [4]

**What are the weaknesses of the model; when does it perform poorly?**

1. Sensitive to noisy data and outliers. [4]
2. Requires quality data, because the boosting technique learns progressively and is prone to error. [4]
3. Low accuracy when data is imbalanced. [3]
4. Training is mildly computationally expensive, and thus it can be time-consuming. [3]

**What makes this model a good candidate for the problem, given what you know about the data?**

AdaBoost will be tried as an alternative to decision trees with stronger predictive capacity. An AdaBoost model is a good candidate because it can provide improvements over decision trees on valuable metrics such as accuracy and precision. Since it has been shown that this algorithm can achieve relatively high precision (which is what we are looking for in this problem), this aspect of it will also benefit us.

**References**

[[1]](https://download.atlantis-press.com/article/25896505.pdf) [[2]](https://www.educba.com/adaboost-algorithm/) [[3]](https://easyai.tech/en/ai-definition/adaboost/#:~:text=AdaBoost%20is%20adaptive%20in%20a,problems%20than%20other%20learning%20algorithms.)
[[4]](https://blog.paperspace.com/adaboost-optimizer/)

### Support Vector Machines

**Describe one real-world application in industry where the model can be applied.**

SVMs can be applied in bioinformatics. [1] For example, an SVM model can be trained on data involving features of cancer tumours and then be able to identify whether a tumour is benign or malignant (the labels).

**What are the strengths of the model; when does it perform well?**

1. Effective in high-dimensional spaces (i.e. when there are numerous features). [2]
2. Generally a good algorithm. SVMs are good when we have almost no information about the data. [3]
3. Relatively low risk of overfitting, due to L2 regularisation. [4]
4. High flexibility. Can handle linear and non-linear data thanks to the variety added by different kernel functions. [3]
5. Stability, since a small change to the data does not greatly affect the hyperplane. [4]
6. SVM is defined by a convex optimisation problem (i.e. no local minima). [4]

**What are the weaknesses of the model; when does it perform poorly?**

1. Training is very computationally expensive (high memory requirement), and thus it can be time-consuming, especially for large datasets. [3]
2. Sensitive to noisy data, i.e. when the target classes are overlapping. [2]
3. Hyperparameters can be difficult to tune (kernel, C parameter, gamma); e.g. when choosing a kernel, if you always go with high-dimensional ones you might generate too many support vectors and reduce training speed drastically. [4]
4. Difficult to understand and interpret, particularly with high-dimensional data. Also, the final model is not easy to see, so we cannot do small calibrations based on business intuition. [3]
5. Requires feature scaling. [4]

**What makes this model a good candidate for the problem, given what you know about the data?**

Given what we know about the data, SVM would be a good choice since it can handle its multiple dimensions.
It will also add variety when compared to decision trees and AdaBoost, potentially yielding better results due to its vastly different mechanism.

**References**

[[1]](https://data-flair.training/blogs/applications-of-svm/) [[2]](https://medium.com/@dhiraj8899/top-4-advantages-and-disadvantages-of-support-vector-machine-or-svm-a3c06a2b107) [[3]](https://statinfer.com/204-6-8-svm-advantages-disadvantages-applications/) [[4]](http://theprofessionalspoint.blogspot.com/2019/03/advantages-and-disadvantages-of-svm.html)

### Creating a Training and Predicting Pipeline

To properly evaluate the performance of each model you've chosen, it's important that you create a training and predicting pipeline that allows you to quickly and effectively train models using various sizes of training data and perform predictions on the testing data. Your implementation here will be used in the following section. In the code block below, you will need to implement the following:

- Import `fbeta_score` and `accuracy_score` from [`sklearn.metrics`](http://scikit-learn.org/stable/modules/classes.html#sklearn-metrics-metrics).
- Fit the learner to the sampled training data and record the training time.
- Perform predictions on the test data `X_test`, and also on the first 300 training points `X_train[:300]`.
  - Record the total prediction time.
- Calculate the accuracy score for both the training subset and testing set.
- Calculate the F-score for both the training subset and testing set.
  - Make sure that you set the `beta` parameter!
```
# Import two metrics from sklearn - fbeta_score and accuracy_score
from sklearn.metrics import fbeta_score, accuracy_score

def train_predict(learner, sample_size, X_train, y_train, X_test, y_test):
    '''
    inputs:
       - learner: the learning algorithm to be trained and predicted on
       - sample_size: the size of samples (number) to be drawn from training set
       - X_train: features training set
       - y_train: income training set
       - X_test: features testing set
       - y_test: income testing set
    '''
    results = {}

    # Fit the learner to the training data using slicing with 'sample_size'
    start = time() # Get start time
    learner = learner.fit(X_train[:sample_size], y_train[:sample_size])
    end = time() # Get end time

    # Calculate the training time
    results['train_time'] = end - start

    # Get the predictions on the test set (X_test),
    # then get predictions on the first 300 training samples (X_train) using .predict()
    start = time() # Get start time
    predictions_test = learner.predict(X_test)
    predictions_train = learner.predict(X_train[:300])
    end = time() # Get end time

    # Calculate the total prediction time
    results['pred_time'] = end - start

    # Compute accuracy on the first 300 training samples
    results['acc_train'] = accuracy_score(y_train[:300], predictions_train)

    # Compute accuracy on the test set using accuracy_score()
    results['acc_test'] = accuracy_score(y_test, predictions_test)

    # Compute F-score on the first 300 training samples using fbeta_score()
    results['f_train'] = fbeta_score(y_train[:300], predictions_train, beta=beta)

    # Compute F-score on the test set
    results['f_test'] = fbeta_score(y_test, predictions_test, beta=beta)

    # Success
    print("{} trained on {} samples.".format(learner.__class__.__name__, sample_size))

    # Return the results
    return results
```

### Initial Model Evaluation

In the code cell below, you will need to implement the following:

- Import the three supervised learning models you've discussed in the previous section.
- Initialize the three models and store them in `'clf_A'`, `'clf_B'`, and `'clf_C'`.
  - Use a `'random_state'` for each model you use, if provided.
  - **Note:** Use the default settings for each model — you will tune one specific model in a later section.
- Calculate the number of records equal to 1%, 10%, and 100% of the training data.
  - Store those values in `'samples_1'`, `'samples_10'`, and `'samples_100'` respectively.

**Note:** Depending on which algorithms you chose, the following implementation may take some time to run!

```
# Import the three supervised learning models from sklearn
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import AdaBoostClassifier
from sklearn.svm import SVC

# Initialize the three models
clf_A = DecisionTreeClassifier(random_state=42)
clf_B = AdaBoostClassifier(random_state=42)
clf_C = SVC(random_state=42)

# Calculate the number of samples for 1%, 10%, and 100% of the training data
samples_100 = len(y_train)
samples_10 = int(0.1 * len(y_train))
samples_1 = int(0.01 * len(y_train))

# Collect results on the learners
results = {}
for clf in [clf_A, clf_B, clf_C]:
    clf_name = clf.__class__.__name__
    results[clf_name] = {}
    for i, samples in enumerate([samples_1, samples_10, samples_100]):
        results[clf_name][i] = train_predict(clf, samples, X_train, y_train, X_test, y_test)

# Run metrics visualization for the three supervised learning models chosen
vs.evaluate(results, accuracy, fscore)
```

----

## Improving Results

In this final section, you will choose from the three supervised learning models the *best* model to use on the census data. You will then perform a grid search optimization for the model over the entire training set (`X_train` and `y_train`) by tuning at least one parameter to improve upon the untuned model's F-score.
### Choosing the Best Model Based on the evaluation you performed earlier, in one to two paragraphs, explain to *CharityML* which of the three models you believe to be most appropriate for the task of identifying individuals that make more than \$50,000. ##### AdaBoost According to the analysis, the most appropriate model for identifying individuals who make more than \$50,000 is the AdaBoost model, for the following reasons: - AdaBoost yields the best accuracy and F-score on the testing data, meaning that it is the ideal model to choose for maximising the number of true potential donors. - The second-best competitor (namely, SVM) has a slightly higher tendency to overfit, and is significantly more time-consuming to train. - AdaBoost is suitable for the given dataset because it yields high precision (i.e. few false positives, which is what we want), and its results are easier to interpret for potential calibration than those of an SVM model. ### Describing the Model in Layman's Terms In one to two paragraphs, explain to *CharityML*, in layman's terms, how the final model chosen is supposed to work. Be sure that you are describing the major qualities of the model, such as how the model is trained and how the model makes a prediction. Avoid using advanced mathematical jargon, such as describing equations. ##### Introduction AdaBoost is a model that belongs to a group of models called "Ensemble Methods". As the name suggests, the model trains weaker models on the data (also known as "weak learners"), and then combines them into a single, more powerful model (which we call a "strong learner"). ##### Training the AdaBoost Model In our case, we feed the model the training data from our dataset, and it fits a simple "weak learner" to the data. Then, it increases the weight of the examples that the first learner misclassified, and fits a second learner to correct those mistakes.
A third weak learner then does the same for the second, and this process repeats until enough learners have been trained. The algorithm then assigns a weight to each weak learner based on its performance, and combines all the weak learners into a single **strong learner**. When combining the weak learners, the ones with the larger weights (i.e. the more successful ones) get more of a say in how the final model is structured. ##### AdaBoost Predictions After training the model, we will be able to feed it unseen examples (i.e. new individuals), and the model will use its knowledge of the previous individuals to predict whether or not they make more than \$50,000 per year. ### Model Tuning Fine-tune the chosen model. Use grid search (`GridSearchCV`) with at least one important parameter tuned with at least 3 different values. You will need to use the entire training set for this. In the code cell below, you will need to implement the following: - Import [`sklearn.grid_search.GridSearchCV`](http://scikit-learn.org/0.17/modules/generated/sklearn.grid_search.GridSearchCV.html) and [`sklearn.metrics.make_scorer`](http://scikit-learn.org/stable/modules/generated/sklearn.metrics.make_scorer.html). - Initialize the classifier you've chosen and store it in `clf`. - Set a `random_state` if one is available to the same state you set before. - Create a dictionary of parameters you wish to tune for the chosen model. - Example: `parameters = {'parameter' : [list of values]}`. - **Note:** Avoid tuning the `max_features` parameter of your learner if that parameter is available! - Use `make_scorer` to create an `fbeta_score` scoring object (with $\beta = 0.5$). - Perform grid search on the classifier `clf` using the `'scorer'`, and store it in `grid_obj`. - Fit the grid search object to the training data (`X_train`, `y_train`), and store it in `grid_fit`.
**Note:** Depending on the algorithm chosen and the parameter list, the following implementation may take some time to run! ``` # Import 'GridSearchCV', 'make_scorer', and any other necessary libraries from sklearn.model_selection import GridSearchCV from sklearn.metrics import make_scorer from sklearn.tree import DecisionTreeClassifier from sklearn.ensemble import AdaBoostClassifier # Initialize the classifier clf = AdaBoostClassifier(random_state=42) # Create the parameters list you wish to tune, using a dictionary if needed. parameters = {'n_estimators': [500, 1000, 1500, 2000], 'learning_rate': np.linspace(0.001, 1, 10)} # Make an fbeta_score scoring object using make_scorer() scorer = make_scorer(fbeta_score, beta=beta) # Perform grid search on the classifier using 'scorer' as the scoring method using GridSearchCV() grid_obj = GridSearchCV(clf, parameters, scoring=scorer, n_jobs = -1) # Fit the grid search object to the training data and find the optimal parameters using fit() start = time() grid_fit = grid_obj.fit(X_train, y_train) end = time() print('Time to tune: ', end - start) # Get the estimator best_clf = grid_fit.best_estimator_ # Make predictions using the unoptimized and optimized models predictions = (clf.fit(X_train, y_train)).predict(X_test) best_predictions = best_clf.predict(X_test) # Check hyperparameters print(clf) print(best_clf) # Report the before-and-after scores print("Unoptimized model\n------") print("Accuracy score on testing data: {:.4f}".format(accuracy_score(y_test, predictions))) print("F-score on testing data: {:.4f}".format(fbeta_score(y_test, predictions, beta = 0.5))) print("\nOptimized Model\n------") print("Final accuracy score on the testing data: {:.4f}".format(accuracy_score(y_test, best_predictions))) print("Final F-score on the testing data: {:.4f}".format(fbeta_score(y_test, best_predictions, beta = 0.5))) ``` ### Final Model Evaluation * What is your optimized model's accuracy and F-score on the testing data?
* Are these scores better or worse than the unoptimized model? * How do the results from your optimized model compare to the naive predictor benchmarks you found earlier in **Question 1**? #### Results: | Metric | Unoptimized Model | Optimized Model | | :------------: | :---------------: | :-------------: | | Accuracy Score | 0.8576 | 0.8676 | | F-score | 0.7246 | 0.7456 | **Discussion** My optimised model's accuracy is 86.76% while the F-score (beta = 0.5) is 0.7456. These scores are slightly better than the unoptimised model's: accuracy improved by ~1.2% and F-score by ~2.9%. The scores are significantly better than the naive predictor's: accuracy improved by ~350% (3.5+ times higher) and F-score by ~256% (2.5+ times higher). ---- ## Feature Importance An important task when performing supervised learning on a dataset like the census data we study here is determining which features provide the most predictive power. By focusing on the relationship between only a few crucial features and the target label we simplify our understanding of the phenomenon, which is almost always a useful thing to do. In the case of this project, that means we wish to identify a small number of features that most strongly predict whether an individual makes at most or more than \$50,000. Here, we choose a scikit-learn classifier (e.g., AdaBoost, random forests) that has a `feature_importances_` attribute, which ranks the importance of features according to the chosen classifier. In the next Python cell, we fit this classifier to the training set and use this attribute to determine the top 5 most important features for the census dataset. ### Feature Relevance Observation When **Exploring the Data**, it was shown there are thirteen available features for each individual on record in the census data. Of these thirteen features, which five do you believe to be most important for prediction, and in what order would you rank them and why? **Answer:** 1.
**Occupation**. I would expect the job that a person has to be a good predictor of income. 2. **Hours per week**. The more hours you work, the more you earn. 3. **Education Number**. Because of the positive correlation between education level and income. 4. **Age**. Usually older people who've had longer careers have a higher income. 5. **Native Country**. Because a US worker earns significantly more than, say, an Argentinian one. ### Feature Importance Choose a `scikit-learn` supervised learning algorithm that has a `feature_importances_` attribute available for it. This attribute ranks the importance of each feature when making predictions based on the chosen algorithm. In the code cell below, you will need to implement the following: - Import a supervised learning model from sklearn if it is different from the three used earlier. - Train the supervised model on the entire training set. - Extract the feature importances using `'.feature_importances_'`. ``` # Import a supervised learning model that has 'feature_importances_' from sklearn.ensemble import AdaBoostClassifier # Train the supervised model on the training set using .fit(X_train, y_train) model = AdaBoostClassifier().fit(X_train, y_train) # Extract the feature importances using .feature_importances_ importances = model.feature_importances_ # Plot vs.feature_plot(importances, X_train, y_train) ``` ### Extracting Feature Importance Observe the visualization created above which displays the five most relevant features for predicting if an individual makes at most or above \$50,000. * How do these five features compare to the five features you discussed in **Question 6**? * If you were close to the same answer, how does this visualization confirm your thoughts? * If you were not close, why do you think these features are more relevant?
**Answer:** * *How do these five features compare to the five features you discussed in **Question 6**?* These five features are significantly different from what I predicted in question 6. While I did mention age, hours-per-week and education-num, I failed to mention two of the most significant features: capital-loss and capital-gain, which together amount to about 37% cumulative feature weight. * *If you were close to the same answer, how does this visualization confirm your thoughts?* This visualisation confirms that age plays a large role and that hours-per-week and education-num are among the most relevant features. This is because of the direct and strong correlation between these variables and individual income. * *If you were not close, why do you think these features are more relevant?* I was genuinely surprised that occupation did not make it into the top 5. I suppose it is because the mentioned occupations just do not have a large discrepancy in income, whereas capital-loss and capital-gain vary more among individuals and more directly affect their income. Similarly, regarding native-country, I suppose most people were from the US or a similarly developed country and hence the feature didn't have great predictive power. ### Feature Selection How does a model perform if we only use a subset of all the available features in the data? With fewer features required to train, the expectation is that training and prediction time is much lower — at the cost of performance metrics. From the visualization above, we see that the top five most important features contribute more than half of the importance of **all** features present in the data. This hints that we can attempt to *reduce the feature space* and simplify the information required for the model to learn. The code cell below will use the same optimized model you found earlier, and train it on the same training set *with only the top five important features*.
``` # Import functionality for cloning a model from sklearn.base import clone # Reduce the feature space X_train_reduced = X_train[X_train.columns.values[(np.argsort(importances)[::-1])[:5]]] X_test_reduced = X_test[X_test.columns.values[(np.argsort(importances)[::-1])[:5]]] # Train on the "best" model found from grid search earlier clf = (clone(best_clf)).fit(X_train_reduced, y_train) # Make new predictions reduced_predictions = clf.predict(X_test_reduced) # Report scores from the final model using both versions of data print("Final Model trained on full data\n------") print("Accuracy on testing data: {:.4f}".format(accuracy_score(y_test, best_predictions))) print("F-score on testing data: {:.4f}".format(fbeta_score(y_test, best_predictions, beta = 0.5))) print("\nFinal Model trained on reduced data\n------") print("Accuracy on testing data: {:.4f}".format(accuracy_score(y_test, reduced_predictions))) print("F-score on testing data: {:.4f}".format(fbeta_score(y_test, reduced_predictions, beta = 0.5))) ``` ### Effects of Feature Selection * How does the final model's F-score and accuracy score on the reduced data using only five features compare to those same scores when all features are used? * If training time was a factor, would you consider using the reduced data as your training set? **Answer:** The model trained on reduced data gets an extra ~2% of the testing examples wrong, and its F-score is ~0.04 lower. If training time were a factor, I would probably still not use the reduced data as my training set. However, if more training examples yielded a significant improvement, I would recommend using the lower-dimensional data so that we could accommodate more training examples.
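To make this training-time trade-off concrete, here is a minimal, hypothetical sketch of the comparison on synthetic stand-in data (the census dataframes and `best_clf` are not reused here; `make_classification` simply mimics a 13-feature dataset with a handful of informative columns):

```python
from time import time

import numpy as np
from sklearn.base import clone
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the census data: 13 features, 5 of them informative
X, y = make_classification(n_samples=2000, n_features=13, n_informative=5,
                           random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

clf = AdaBoostClassifier(random_state=42)

# Time a fit on all 13 features
start = time()
clone(clf).fit(X_train, y_train)
full_time = time() - start

# Rank features with one quick fit, keep only the top five columns, refit
importances = clf.fit(X_train, y_train).feature_importances_
top5 = np.argsort(importances)[::-1][:5]
start = time()
clone(clf).fit(X_train[:, top5], y_train)
reduced_time = time() - start

print('full: {:.3f}s, reduced: {:.3f}s'.format(full_time, reduced_time))
```

On most runs the reduced fit is noticeably faster, which is the saving the answer above weighs against the small loss in accuracy and F-score.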
``` '''Try Euler time-stepping for a simple array Print pe,u's as said by Shubham G. Initial conditions also same ''' def Async_sim(num_grid): import delay_file import probability_initial import error_file import analytical_file import ic_file import numpy as np import matplotlib.pyplot as plt import timestep_file import FLAG_file import grid_file import input_file import step ## This cell remains same for Asynchronous compute ## Length=2*np.pi nx_=num_grid dx=Length/(nx_-1) x_=grid_file.grid_(dx,nx_) C=input_file.c init_c=ic_file.ic_(x_,amp=input_file.amp_ls,kappa=input_file.k_ls,phi=input_file.phi_ls, num_k=input_file.numk,num_phi=input_file.numphi,Nx=nx_) dt=timestep_file.timestep_(dx,cfl=input_file.cfl,EqFLAG=FLAG_file.EqnFLAG,cx=input_file.c) Nt=input_file.N_t_ arr_2d=[] # Nt=input_file.N_t_ L=3 u=init_c arr_2d.append(u) for k in range(L-1): rhs=step.cd2u1(u,C,dx,nx_,Eqflag=FLAG_file.EqnFLAG,Syncflag='DSync') u=step.euler(u,rhs,dt,nx_) arr_2d.append(u) arr_2d=np.stack(arr_2d) num_PEs=input_file.numPE per_PEs=int((nx_)/(num_PEs)) ps_i,pe_i=probability_initial.prob_2D_from_arr_2D(arr_2d[:-1],num_PEs,per_PEs,L-1) u=arr_2d[L-1] ls=[] ls.append(u) ps=ps_i pe=pe_i for j in range(Nt): ys,ye=probability_initial.prob_1D_from_u_1D(u,num_PEs,per_PEs) ps=np.vstack((ps,ys)) pe=np.vstack((pe,ye)) # print(f'\n{j}th iteration pe is {pe} \n u is {u} ') rhs,ps,pe=step.cd2u1(u,C,dx,nx_,Eqflag=FLAG_file.EqnFLAG,Syncflag=FLAG_file.SyncFLAG,L=3,PE=num_PEs,perPE=per_PEs, pstart=ps,pend=pe,ATolFLAG=FLAG_file.ATFLAG) u=step.euler(u,rhs,dt,nx_) ls.append(u) Nt_total=Nt+L-1 # for i in range(40): # plt.plot(x_,ls[i*7]) ana_soln=analytical_file.analytical_(x_,input_file.amp_ls,input_file.k_ls,input_file.phi_ls,input_file.numk, input_file.numphi,nx_,dt*Nt,C,0) error_Nx=error_file.error_MSE_(ana_soln,ls[-1]) return error_Nx,np.vstack((arr_2d,ls[1:])) def at2u0(pe,l,L, p_arr): a = l+1 b = -l temp = a*p_arr[L-1-l][pe]+b*p_arr[L-2-l][pe] return temp er.shape for l in range(4): at2u0(0,l,4,er) 
for pe in range(4): for l in range(4): at2u0(pe,l,4,er) import matplotlib.pyplot as plt import numpy as np err=[] n_list=64*np.arange(7,15) print(n_list) for n in n_list: err.append(Async_sim(n)[0]) # plt.plot(n_list,err) import numpy as np def plot_error(n_list,err,comptype='Synchronous',order=2): # plt.plot(n_list,err) plt.rcParams.update({'font.size': 22}) plt.figure(figsize=(10,8)) plt.plot(np.log(n_list),np.log(err),label=f'{comptype}Order') plt.plot(np.log(n_list),-order*np.log(n_list)+6,label=f'{order}-Order') plt.title(f"{comptype} Computations Accuracy order") plt.xlabel("log-N") plt.ylabel("log-Avg.Error") plt.legend() plt.grid() plot_error(n_list,err,'Asynchronous',2) import input_file num_X=input_file.Nx Async_FD_data=Async_sim(num_X)[1] x=np.arange(12).reshape(3,4) y=np.arange(4,8) y x=np.vstack((x,y)) x plt.plot(np.log(n_list),np.log(err)) plt.plot(np.log(n_list),-1*np.log(n_list)) import delay_file x=0 for i in range(10000): x+=delay_file.delay_() x/10000 ```
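As a side note, the `at2u0` helper defined above amounts to two-point linear extrapolation in time: with weights `a = l + 1` and `b = -l`, it reconstructs the newest stored level `L - 1` from the two delayed levels `L - 1 - l` and `L - 2 - l`. The following self-contained check (plain Python, no solver modules required) shows the weights are exact whenever the stored history varies linearly across levels:

```python
# Copied from the notebook above: predict the current value of processing
# element `pe` from two levels delayed by l and l+1 steps
def at2u0(pe, l, L, p_arr):
    a = l + 1
    b = -l
    return a * p_arr[L - 1 - l][pe] + b * p_arr[L - 2 - l][pe]

# History for a single PE, linear in the level index: p[k] = 2 + 3k
L = 5
p_arr = [[2 + 3 * k] for k in range(L)]

# For every admissible delay l, the extrapolation recovers level L-1 exactly
for l in range(L - 1):
    assert at2u0(0, l, L, p_arr) == p_arr[L - 1][0]
print('exact for all delays:', p_arr[L - 1][0])  # prints: exact for all delays: 14
```

For non-linear histories the extrapolation is only first-order accurate, which is consistent with the delayed scheme's order study in the plots above.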
# Image features exercise *Complete and hand in this completed worksheet (including its outputs and any supporting code outside of the worksheet) with your assignment submission. For more details see the [assignments page](http://vision.stanford.edu/teaching/cs231n/assignments.html) on the course website.* We have seen that we can achieve reasonable performance on an image classification task by training a linear classifier on the pixels of the input image. In this exercise we will show that we can improve our classification performance by training linear classifiers not on raw pixels but on features that are computed from the raw pixels. All of your work for this exercise will be done in this notebook. ``` import random import numpy as np from cs231n.data_utils import load_CIFAR10 import matplotlib.pyplot as plt %matplotlib inline plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots plt.rcParams['image.interpolation'] = 'nearest' plt.rcParams['image.cmap'] = 'gray' # for auto-reloading external modules # see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython %load_ext autoreload %autoreload 2 ``` ## Load data Similar to previous exercises, we will load CIFAR-10 data from disk.
``` from cs231n.features import color_histogram_hsv, hog_feature def get_CIFAR10_data(num_training=49000, num_validation=1000, num_test=1000): # Load the raw CIFAR-10 data cifar10_dir = 'cs231n/datasets/cifar-10-batches-py' # Cleaning up variables to prevent loading data multiple times (which may cause memory issue) try: del X_train, y_train del X_test, y_test print('Clear previously loaded data.') except: pass X_train, y_train, X_test, y_test = load_CIFAR10(cifar10_dir) # Subsample the data mask = list(range(num_training, num_training + num_validation)) X_val = X_train[mask] y_val = y_train[mask] mask = list(range(num_training)) X_train = X_train[mask] y_train = y_train[mask] mask = list(range(num_test)) X_test = X_test[mask] y_test = y_test[mask] return X_train, y_train, X_val, y_val, X_test, y_test X_train, y_train, X_val, y_val, X_test, y_test = get_CIFAR10_data() print(f'{X_train.shape}') ``` ## Extract Features For each image we will compute a Histogram of Oriented Gradients (HOG) as well as a color histogram using the hue channel in HSV color space. We form our final feature vector for each image by concatenating the HOG and color histogram feature vectors. Roughly speaking, HOG should capture the texture of the image while ignoring color information, and the color histogram represents the color of the input image while ignoring texture. As a result, we expect that using both together ought to work better than using either alone. Verifying this assumption would be a good thing to try for your own interest. The `hog_feature` and `color_histogram_hsv` functions both operate on a single image and return a feature vector for that image. The extract_features function takes a set of images and a list of feature functions and evaluates each feature function on each image, storing the results in a matrix where each column is the concatenation of all feature vectors for a single image. 
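The actual `extract_features` implementation lives in `cs231n.features`, but the behaviour just described can be sketched roughly as follows. This is a simplified, hypothetical version that stacks one feature row per image (the transpose of the per-column layout described above) and uses toy feature functions in place of `hog_feature` and `color_histogram_hsv`:

```python
import numpy as np

def extract_features_sketch(imgs, feature_fns):
    """Evaluate each feature function on each image and concatenate the
    resulting vectors into one row per image."""
    rows = []
    for img in imgs:
        feats = [np.asarray(fn(img)).ravel() for fn in feature_fns]
        rows.append(np.concatenate(feats))
    return np.vstack(rows)

# Toy feature functions standing in for hog_feature / color_histogram_hsv
def mean_feat(img):
    return np.array([img.mean()])

def minmax_feat(img):
    return np.array([img.min(), img.max()])

imgs = np.random.rand(4, 8, 8, 3)  # four tiny fake "images"
feats = extract_features_sketch(imgs, [mean_feat, minmax_feat])
print(feats.shape)  # prints: (4, 3)
```

The real version additionally preallocates the output array and supports a `verbose` progress flag, but the core loop is the same: one feature vector per function per image, concatenated.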
``` from cs231n.features import * num_color_bins = 10 # Number of bins in the color histogram feature_fns = [hog_feature, lambda img: color_histogram_hsv(img, nbin=num_color_bins)] X_train_feats = extract_features(X_train, feature_fns, verbose=True) X_val_feats = extract_features(X_val, feature_fns) X_test_feats = extract_features(X_test, feature_fns) # Preprocessing: Subtract the mean feature mean_feat = np.mean(X_train_feats, axis=0, keepdims=True) X_train_feats -= mean_feat X_val_feats -= mean_feat X_test_feats -= mean_feat # Preprocessing: Divide by standard deviation. This ensures that each feature # has roughly the same scale. std_feat = np.std(X_train_feats, axis=0, keepdims=True) X_train_feats /= std_feat X_val_feats /= std_feat X_test_feats /= std_feat # Preprocessing: Add a bias dimension X_train_feats = np.hstack([X_train_feats, np.ones((X_train_feats.shape[0], 1))]) X_val_feats = np.hstack([X_val_feats, np.ones((X_val_feats.shape[0], 1))]) X_test_feats = np.hstack([X_test_feats, np.ones((X_test_feats.shape[0], 1))]) print(X_train_feats.shape) ``` ## Train SVM on features Using the multiclass SVM code developed earlier in the assignment, train SVMs on top of the features extracted above; this should achieve better results than training SVMs directly on top of raw pixels. ``` # Use the validation set to tune the learning rate and regularization strength from cs231n.classifiers.linear_classifier import LinearSVM learning_rates = [1e-9, 1e-8, 1e-7] regularization_strengths = [5e4, 5e5, 5e6] results = {} best_val = -1 best_svm = None ################################################################################ # TODO: # # Use the validation set to set the learning rate and regularization strength. # # This should be identical to the validation that you did for the SVM; save # # the best trained classifer in best_svm. You might also want to play # # with different numbers of bins in the color histogram. 
If you are careful # # you should be able to get accuracy of near 0.44 on the validation set. # ################################################################################ # *****START OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)***** for lr in learning_rates: for reg in regularization_strengths: it = max(X_train_feats.shape[0] // 100, 1) num_epoch = 5 it *= num_epoch svm = LinearSVM() svm.train(X_train_feats, y_train, learning_rate=lr, reg=reg, num_iters=it, batch_size=500, verbose=False) train_acc = np.mean(svm.predict(X_train_feats) == y_train) val_acc = np.mean(svm.predict(X_val_feats) == y_val) results[(lr, reg)] = (train_acc, val_acc) if val_acc > best_val: best_val = val_acc best_svm = svm # *****END OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)***** # Print out results. for lr, reg in sorted(results): train_accuracy, val_accuracy = results[(lr, reg)] print('lr %e reg %e train accuracy: %f val accuracy: %f' % ( lr, reg, train_accuracy, val_accuracy)) print('best validation accuracy achieved during cross-validation: %f' % best_val) # Evaluate your trained SVM on the test set: you should be able to get at least 0.40 y_test_pred = best_svm.predict(X_test_feats) test_accuracy = np.mean(y_test == y_test_pred) print(test_accuracy) # An important way to gain intuition about how an algorithm works is to # visualize the mistakes that it makes. In this visualization, we show examples # of images that are misclassified by our current system. The first column # shows images that our system labeled as "plane" but whose true label is # something other than "plane". 
examples_per_class = 8 print('some pictures that were classified incorrectly.') classes = ['plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck'] for cls, cls_name in enumerate(classes): idxs = np.where((y_test != cls) & (y_test_pred == cls))[0] idxs = np.random.choice(idxs, examples_per_class, replace=False) for i, idx in enumerate(idxs): plt.subplot(examples_per_class, len(classes), i * len(classes) + cls + 1) plt.imshow(X_test[idx].astype('uint8')) plt.axis('off') if i == 0: plt.title(cls_name) plt.show() # show some pictures that were classified correctly print('some that were classified correctly.') for cls, cls_name in enumerate(classes): idxs = np.where((y_test == cls) & (y_test_pred == cls))[0] idxs = np.random.choice(idxs, examples_per_class, replace=False) for i, idx in enumerate(idxs): plt.subplot(examples_per_class, len(classes), i * len(classes) + cls + 1) plt.imshow(X_test[idx].astype('uint8')) plt.axis('off') if i == 0: plt.title(cls_name) plt.show() ``` ### Inline question 1: Describe the misclassification results that you see. Do they make sense? $\color{blue}{\textit Your Answer:}$ For example, the picture in row 5, column 1 is a bird, but a plane also has two wings, so they might share similar HOG features because both have some diagonal gradients. ## Neural Network on image features Earlier in this assignment we saw that training a two-layer neural network on raw pixels achieved better classification performance than linear classifiers on raw pixels. In this notebook we have seen that linear classifiers on image features outperform linear classifiers on raw pixels. For completeness, we should also try training a neural network on image features. This approach should outperform all previous approaches: you should easily be able to achieve over 55% classification accuracy on the test set; our best model achieves about 60% classification accuracy.
``` # Preprocessing: Remove the bias dimension # Make sure to run this cell only ONCE print(X_train_feats.shape) X_train_feats = X_train_feats[:, :-1] X_val_feats = X_val_feats[:, :-1] X_test_feats = X_test_feats[:, :-1] print(X_train_feats.shape) from cs231n.classifiers.neural_net import TwoLayerNet input_dim = X_train_feats.shape[1] hidden_dim = 500 num_classes = 10 best_val_acc = 0.0 best_net = None best_lr = 1e-5 best_hs = 200 best_stats = None ################################################################################ # TODO: Train a two-layer neural network on image features. You may want to # # cross-validate various parameters as in previous sections. Store your best # # model in the best_net variable. # ################################################################################ # *****START OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)***** def plot_history(stats): plt.subplot(2, 1, 1) plt.plot(stats['loss_history']) plt.title('Loss history') plt.xlabel('Iteration') plt.ylabel('Loss') plt.subplot(2, 1, 2) plt.plot(stats['train_acc_history'], label='train') plt.plot(stats['val_acc_history'], label='val') plt.title('Classification accuracy history') plt.xlabel('Epoch') plt.ylabel('Classification accuracy') plt.legend() plt.show() def tune(lr, hs, reg, verbose=False): net = TwoLayerNet(input_dim, hs, num_classes) batch_size = 200 num_epoch = 10 num_iters = max(X_train_feats.shape[0] // batch_size, 1) * num_epoch stats = net.train(X_train_feats, y_train, X_val_feats, y_val, learning_rate=lr, reg=reg, learning_rate_decay=0.95, batch_size=batch_size, num_iters=num_iters, verbose=verbose) val_acc = np.mean(net.predict(X_val_feats) == y_val) train_acc = np.mean(net.predict(X_train_feats) == y_train) print(f'lr:{lr} hidden_size:{hs} lr_reg: {reg} train_acc:{train_acc} val_acc:{val_acc}') return val_acc, net, stats # tuning process is in the code cell below # *****END OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)***** best_val_acc = 0.0 best_net = None 
best_lr = 5e-2 best_hs = 500 best_stats = None best_reg = 5e-6 learning_rates = [1e-2, 5e-2, 1e-1, 5e-1] hidden_dim = [100, 200, 300, 400, 500, 600, 700, 800, 900, 1000] lr_reg = [1e-4, 2e-4, 3e-4, 7e-4, 2e-3] # for lr in learning_rates: # val_acc, net, stats = tune(lr, best_hs, best_reg, False) # if val_acc > best_val_acc: # best_lr = lr # best_net = net # best_stats = stats # best_val_acc = val_acc # for hs in hidden_dim: # val_acc, net, stats = tune(best_lr, hs, best_reg, False) # if val_acc > best_val_acc: # best_hs = hs # best_net = net # best_stats = stats # best_val_acc = val_acc for reg in lr_reg: val_acc, net, stats = tune(best_lr, best_hs, reg, False) if val_acc > best_val_acc: best_reg = reg best_net = net best_stats = stats best_val_acc = val_acc print(f'best val: {best_val_acc}, best_lr:{best_lr} best_hidden_size:{best_hs} best_lr_reg:{best_reg}') plot_history(best_stats) # Run your best neural net classifier on the test set. You should be able # to get more than 55% accuracy. test_acc = (best_net.predict(X_test_feats) == y_test).mean() print(test_acc) ```
``` import pandas as pd import numpy as np import matplotlib.pyplot as plt from sklearn.metrics import r2_score from sklearn.linear_model import SGDRegressor # # load the data # df = pd.read_csv('../Datasets/synth_temp.csv') # # slice 1902 and forward # df = df.loc[df.Year > 1901] # # roll up by year # df_group_year = df.groupby(['Year']).agg({'RgnAvTemp' : 'mean'}) # # add the Year column so we can use that in a model # df_group_year['Year'] = df_group_year.index df_group_year = df_group_year.rename(columns = {'RgnAvTemp' : 'AvTemp'}) # # scale the data # X_min = df_group_year.Year.min() X_range = df_group_year.Year.max() - df_group_year.Year.min() Y_min = df_group_year.AvTemp.min() Y_range = df_group_year.AvTemp.max() - df_group_year.AvTemp.min() scale_X = (df_group_year.Year - X_min) / X_range # train_X = scale_X.ravel() train_Y = ((df_group_year.AvTemp - Y_min) / Y_range).ravel() # # create the model object # np.random.seed(42) model = SGDRegressor( loss = 'squared_loss', max_iter = 100, learning_rate = 'constant', eta0 = 0.0005, tol = 0.00009, penalty = 'none') # # fit the model # model.fit(train_X.reshape((-1, 1)), train_Y) Beta0 = (Y_min + Y_range * model.intercept_[0] - Y_range * model.coef_[0] * X_min / X_range) Beta1 = Y_range * model.coef_[0] / X_range print(Beta0) print(Beta1) # # generate predictions # pred_X = df_group_year['Year'] pred_Y = model.predict(train_X.reshape((-1, 1))) # # calculate the r squared value # r2 = r2_score(train_Y, pred_Y) print('r squared = ', r2) # # scale predictions back to real values # pred_Y = (pred_Y * Y_range) + Y_min fig = plt.figure(figsize=(10, 7)) ax = fig.add_axes([1, 1, 1, 1]) # # Raw data # raw_plot_data = df ax.scatter(raw_plot_data.Year, raw_plot_data.RgnAvTemp, label = 'Raw Data', c = 'red', s = 1.5) # # Annual averages # ax.scatter(df_group_year.Year, df_group_year.AvTemp, label = 'Annual average', c = 'k', s = 10) # # linear fit # ax.plot(pred_X, pred_Y, c = "blue", linestyle = '-.', linewidth = 4, label =
'linear fit') # # put the model on the plot # ax.text(1902, 20, 'Temp = ' + str(round(Beta0, 2)) + ' + ' + str(round(Beta1, 4)) + ' * Year', fontsize = 16) # ax.set_title('Mean Air Temperature Measurements', fontsize = 16) # # make the ticks include the first and last years # tick_years = [1902] + list(range(1910, 2011, 10)) ax.set_xlabel('Year', fontsize = 14) ax.set_ylabel('Temperature ($^\circ$C)', fontsize = 14) ax.set_ylim(15, 21) ax.set_xticks(tick_years) ax.tick_params(labelsize = 12) ax.legend(fontsize = 12) plt.show() ```
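The `Beta0`/`Beta1` expressions above come from substituting the min-max transforms back into the fitted line: if the model learns `y_s = b + w * x_s` in scaled space, with `x_s = (x - X_min) / X_range` and `y = Y_min + Y_range * y_s`, then expanding gives `y = (Y_min + Y_range*b - Y_range*w*X_min/X_range) + (Y_range*w/X_range) * x`. A quick numerical check with arbitrary example values (not the fitted coefficients) confirms the two forms agree:

```python
# Pretend scaled-space fit: y_s = b + w * x_s (example values, not the fit above)
b, w = 0.3, 0.9
X_min, X_range = 1902.0, 108.0
Y_min, Y_range = 15.2, 4.6

# Un-scaled coefficients, exactly as computed for Beta0 and Beta1 above
Beta0 = Y_min + Y_range * b - Y_range * w * X_min / X_range
Beta1 = Y_range * w / X_range

# The un-scaled line must reproduce the scaled model's predictions
for x in (1902.0, 1950.0, 2010.0):
    x_s = (x - X_min) / X_range
    y_from_scaled = Y_min + Y_range * (b + w * x_s)
    y_from_unscaled = Beta0 + Beta1 * x
    assert abs(y_from_scaled - y_from_unscaled) < 1e-9
print('un-scaled coefficients match the scaled model')
```

This is why the plot annotation can quote the model directly in degrees and years even though the fit was done on [0, 1] data.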
[View in Colaboratory](https://colab.research.google.com/github/ArunkumarRamanan/Exercises-Machine-Learning-Crash-Course-Google-Developers/blob/master/validation.ipynb) #### Copyright 2017 Google LLC. ``` # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. ``` # Validation **Learning Objectives:** * Use multiple features, instead of a single feature, to further improve the effectiveness of a model * Debug issues in model input data * Use a test data set to check if a model is overfitting the validation data As in the prior exercises, we're working with the [California housing data set](https://developers.google.com/machine-learning/crash-course/california-housing-data-description), to try and predict `median_house_value` at the city block level from 1990 census data. ## Setup First off, let's load up and prepare our data. 
This time, we're going to work with multiple features, so we'll modularize the logic for preprocessing the features a bit: ``` from __future__ import print_function import math from IPython import display from matplotlib import cm from matplotlib import gridspec from matplotlib import pyplot as plt import numpy as np import pandas as pd from sklearn import metrics import tensorflow as tf from tensorflow.python.data import Dataset tf.logging.set_verbosity(tf.logging.ERROR) pd.options.display.max_rows = 10 pd.options.display.float_format = '{:.1f}'.format california_housing_dataframe = pd.read_csv("https://dl.google.com/mlcc/mledu-datasets/california_housing_train.csv", sep=",") # california_housing_dataframe = california_housing_dataframe.reindex( # np.random.permutation(california_housing_dataframe.index)) def preprocess_features(california_housing_dataframe): """Prepares input features from California housing data set. Args: california_housing_dataframe: A Pandas DataFrame expected to contain data from the California housing data set. Returns: A DataFrame that contains the features to be used for the model, including synthetic features. """ selected_features = california_housing_dataframe[ ["latitude", "longitude", "housing_median_age", "total_rooms", "total_bedrooms", "population", "households", "median_income"]] processed_features = selected_features.copy() # Create a synthetic feature. processed_features["rooms_per_person"] = ( california_housing_dataframe["total_rooms"] / california_housing_dataframe["population"]) return processed_features def preprocess_targets(california_housing_dataframe): """Prepares target features (i.e., labels) from California housing data set. Args: california_housing_dataframe: A Pandas DataFrame expected to contain data from the California housing data set. Returns: A DataFrame that contains the target feature. """ output_targets = pd.DataFrame() # Scale the target to be in units of thousands of dollars. 
output_targets["median_house_value"] = ( california_housing_dataframe["median_house_value"] / 1000.0) return output_targets ``` For the **training set**, we'll choose the first 12000 examples, out of the total of 17000. ``` training_examples = preprocess_features(california_housing_dataframe.head(12000)) training_examples.describe() training_targets = preprocess_targets(california_housing_dataframe.head(12000)) training_targets.describe() ``` For the **validation set**, we'll choose the last 5000 examples, out of the total of 17000. ``` validation_examples = preprocess_features(california_housing_dataframe.tail(5000)) validation_examples.describe() validation_targets = preprocess_targets(california_housing_dataframe.tail(5000)) validation_targets.describe() ``` ## Task 1: Examine the Data Okay, let's look at the data above. We have `9` input features that we can use. Take a quick skim over the table of values. Everything look okay? See how many issues you can spot. Don't worry if you don't have a background in statistics; common sense will get you far. After you've had a chance to look over the data yourself, check the solution for some additional thoughts on how to verify data. ### Solution Click below for the solution. Let's check our data against some baseline expectations: * For some values, like `median_house_value`, we can check to see if these values fall within reasonable ranges (keeping in mind this was 1990 data — not today!). * For other values, like `latitude` and `longitude`, we can do a quick check to see if these line up with expected values from a quick Google search. If you look closely, you may see some oddities: * `median_income` is on a scale from about 3 to 15. It's not at all clear what this scale refers to—looks like maybe some log scale? It's not documented anywhere; all we can assume is that higher values correspond to higher income. * The maximum `median_house_value` is 500,001. This looks like an artificial cap of some kind. 
* Our `rooms_per_person` feature is generally on a sane scale, with a 75th percentile value of about 2. But there are some very large values, like 18 or 55, which may show some amount of corruption in the data. We'll use these features as given for now. But hopefully these kinds of examples can help to build a little intuition about how to check data that comes to you from an unknown source. ## Task 2: Plot Latitude/Longitude vs. Median House Value Let's take a close look at two features in particular: **`latitude`** and **`longitude`**. These are geographical coordinates of the city block in question. This might make a nice visualization — let's plot `latitude` and `longitude`, and use color to show the `median_house_value`. ``` plt.figure(figsize=(13, 8)) ax = plt.subplot(1, 2, 1) ax.set_title("Validation Data") ax.set_autoscaley_on(False) ax.set_ylim([32, 43]) ax.set_autoscalex_on(False) ax.set_xlim([-126, -112]) plt.scatter(validation_examples["longitude"], validation_examples["latitude"], cmap="coolwarm", c=validation_targets["median_house_value"] / validation_targets["median_house_value"].max()) ax = plt.subplot(1,2,2) ax.set_title("Training Data") ax.set_autoscaley_on(False) ax.set_ylim([32, 43]) ax.set_autoscalex_on(False) ax.set_xlim([-126, -112]) plt.scatter(training_examples["longitude"], training_examples["latitude"], cmap="coolwarm", c=training_targets["median_house_value"] / training_targets["median_house_value"].max()) _ = plt.plot() ``` Wait a second... this should have given us a nice map of the state of California, with red showing up in expensive areas like San Francisco and Los Angeles. The training set sort of does, compared to a [real map](https://www.google.com/maps/place/California/@37.1870174,-123.7642688,6z/data=!3m1!4b1!4m2!3m1!1s0x808fb9fe5f285e3d:0x8b5109a227086f55), but the validation set clearly doesn't.
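The visual mismatch between the two panels can also be checked numerically: for a healthy random split, per-column summary statistics of the two halves should roughly agree. A minimal self-contained sketch of that check, using synthetic data as a stand-in for `training_examples` and `validation_examples` (the column name here is illustrative):

```python
import numpy as np
import pandas as pd

# Toy stand-ins for the two splits; in the notebook these would be
# training_examples and validation_examples.
rng = np.random.RandomState(0)
train = pd.DataFrame({"latitude": rng.uniform(32, 42, 1000)})
valid = pd.DataFrame({"latitude": rng.uniform(32, 34, 1000)})  # badly skewed split

# For a healthy random split, per-column means (and quartiles) should be close.
gap = (train.describe() - valid.describe()).abs().loc["mean"]
print(gap)  # a large gap flags a bad (e.g. sorted, unshuffled) split
```

The same one-liner applied to the real splits would flag the problem explored in the next task.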
**Go back up and look at the data from Task 1 again.** Do you see any other differences in the distributions of features or targets between the training and validation data? ### Solution Click below for the solution. Looking at the tables of summary stats above, it's easy to wonder how anyone would do a useful data check. What's the right 75<sup>th</sup> percentile value for total_rooms per city block? The key thing to notice is that for any given feature or column, the distribution of values between the train and validation splits should be roughly equal. The fact that this is not the case is a real worry, and shows that we likely have a fault in the way that our train and validation split was created. ## Task 3: Return to the Data Importing and Pre-Processing Code, and See if You Spot Any Bugs If you do, go ahead and fix the bug. Don't spend more than a minute or two looking. If you can't find the bug, check the solution. When you've found and fixed the issue, re-run the `latitude` / `longitude` plotting cell above and confirm that our sanity checks look better. By the way, there's an important lesson here. **Debugging in ML is often *data debugging* rather than code debugging.** If the data is wrong, even the most advanced ML code can't save things. ### Solution Click below for the solution. Take a look at how the data is randomized when it's read in. If we don't randomize the data properly before creating training and validation splits, then we may be in trouble if the data is given to us in some sorted order, which appears to be the case here. ## Task 4: Train and Evaluate a Model **Spend 5 minutes or so trying different hyperparameter settings. Try to get the best validation performance you can.** Next, we'll train a linear regressor using all the features in the data set, and see how well we do. Let's define the same input function we've used previously for loading the data into a TensorFlow model.
``` def my_input_fn(features, targets, batch_size=1, shuffle=True, num_epochs=None): """Feeds features and targets into the model in batches. Args: features: pandas DataFrame of features targets: pandas DataFrame of targets batch_size: Size of batches to be passed to the model shuffle: True or False. Whether to shuffle the data. num_epochs: Number of epochs for which data should be repeated. None = repeat indefinitely Returns: Tuple of (features, labels) for next data batch """ # Convert pandas data into a dict of np arrays. features = {key:np.array(value) for key,value in dict(features).items()} # Construct a dataset, and configure batching/repeating. ds = Dataset.from_tensor_slices((features,targets)) # warning: 2GB limit ds = ds.batch(batch_size).repeat(num_epochs) # Shuffle the data, if specified. if shuffle: ds = ds.shuffle(10000) # Return the next batch of data. features, labels = ds.make_one_shot_iterator().get_next() return features, labels ``` Because we're now working with multiple input features, let's modularize our code for configuring feature columns into a separate function. (For now, this code is fairly simple, as all our features are numeric, but we'll build on this code as we use other types of features in future exercises.) ``` def construct_feature_columns(input_features): """Construct the TensorFlow Feature Columns. Args: input_features: The names of the numerical input features to use. Returns: A set of feature columns """ return set([tf.feature_column.numeric_column(my_feature) for my_feature in input_features]) ``` Next, go ahead and complete the `train_model()` code below to set up the input functions and calculate predictions. **NOTE:** It's okay to reference the code from the previous exercises, but make sure to call `predict()` on the appropriate data sets. Compare the losses on training data and validation data. With a single raw feature, our best root mean squared error (RMSE) was about 180.
See how much better you can do now that we can use multiple features. Check the data using some of the methods we've looked at before. These might include: * Comparing distributions of predictions and actual target values * Creating a scatter plot of predictions vs. target values * Creating two scatter plots of validation data using `latitude` and `longitude`: * One plot mapping color to actual target `median_house_value` * A second plot mapping color to predicted `median_house_value` for side-by-side comparison. ``` def train_model( learning_rate, steps, batch_size, training_examples, training_targets, validation_examples, validation_targets): """Trains a linear regression model of multiple features. In addition to training, this function also prints training progress information, as well as a plot of the training and validation loss over time. Args: learning_rate: A `float`, the learning rate. steps: A non-zero `int`, the total number of training steps. A training step consists of a forward and backward pass using a single batch. batch_size: A non-zero `int`, the batch size. training_examples: A `DataFrame` containing one or more columns from `california_housing_dataframe` to use as input features for training. training_targets: A `DataFrame` containing exactly one column from `california_housing_dataframe` to use as target for training. validation_examples: A `DataFrame` containing one or more columns from `california_housing_dataframe` to use as input features for validation. validation_targets: A `DataFrame` containing exactly one column from `california_housing_dataframe` to use as target for validation. Returns: A `LinearRegressor` object trained on the training data. """ periods = 10 steps_per_period = steps / periods # Create a linear regressor object. 
my_optimizer = tf.train.GradientDescentOptimizer(learning_rate=learning_rate) my_optimizer = tf.contrib.estimator.clip_gradients_by_norm(my_optimizer, 5.0) linear_regressor = tf.estimator.LinearRegressor( feature_columns=construct_feature_columns(training_examples), optimizer=my_optimizer ) # 1. Create input functions. training_input_fn = # YOUR CODE HERE predict_training_input_fn = # YOUR CODE HERE predict_validation_input_fn = # YOUR CODE HERE # Train the model, but do so inside a loop so that we can periodically assess # loss metrics. print("Training model...") print("RMSE (on training data):") training_rmse = [] validation_rmse = [] for period in range (0, periods): # Train the model, starting from the prior state. linear_regressor.train( input_fn=training_input_fn, steps=steps_per_period, ) # 2. Take a break and compute predictions. training_predictions = # YOUR CODE HERE validation_predictions = # YOUR CODE HERE # Compute training and validation loss. training_root_mean_squared_error = math.sqrt( metrics.mean_squared_error(training_predictions, training_targets)) validation_root_mean_squared_error = math.sqrt( metrics.mean_squared_error(validation_predictions, validation_targets)) # Occasionally print the current loss. print(" period %02d : %0.2f" % (period, training_root_mean_squared_error)) # Add the loss metrics from this period to our list. training_rmse.append(training_root_mean_squared_error) validation_rmse.append(validation_root_mean_squared_error) print("Model training finished.") # Output a graph of loss metrics over periods. plt.ylabel("RMSE") plt.xlabel("Periods") plt.title("Root Mean Squared Error vs. 
Periods") plt.tight_layout() plt.plot(training_rmse, label="training") plt.plot(validation_rmse, label="validation") plt.legend() return linear_regressor linear_regressor = train_model( # TWEAK THESE VALUES TO SEE HOW MUCH YOU CAN IMPROVE THE RMSE learning_rate=0.00001, steps=100, batch_size=1, training_examples=training_examples, training_targets=training_targets, validation_examples=validation_examples, validation_targets=validation_targets) ``` ### Solution Click below for a solution. ``` def train_model( learning_rate, steps, batch_size, training_examples, training_targets, validation_examples, validation_targets): """Trains a linear regression model of multiple features. In addition to training, this function also prints training progress information, as well as a plot of the training and validation loss over time. Args: learning_rate: A `float`, the learning rate. steps: A non-zero `int`, the total number of training steps. A training step consists of a forward and backward pass using a single batch. batch_size: A non-zero `int`, the batch size. training_examples: A `DataFrame` containing one or more columns from `california_housing_dataframe` to use as input features for training. training_targets: A `DataFrame` containing exactly one column from `california_housing_dataframe` to use as target for training. validation_examples: A `DataFrame` containing one or more columns from `california_housing_dataframe` to use as input features for validation. validation_targets: A `DataFrame` containing exactly one column from `california_housing_dataframe` to use as target for validation. Returns: A `LinearRegressor` object trained on the training data. """ periods = 10 steps_per_period = steps / periods # Create a linear regressor object. 
my_optimizer = tf.train.GradientDescentOptimizer(learning_rate=learning_rate) my_optimizer = tf.contrib.estimator.clip_gradients_by_norm(my_optimizer, 5.0) linear_regressor = tf.estimator.LinearRegressor( feature_columns=construct_feature_columns(training_examples), optimizer=my_optimizer ) # Create input functions. training_input_fn = lambda: my_input_fn( training_examples, training_targets["median_house_value"], batch_size=batch_size) predict_training_input_fn = lambda: my_input_fn( training_examples, training_targets["median_house_value"], num_epochs=1, shuffle=False) predict_validation_input_fn = lambda: my_input_fn( validation_examples, validation_targets["median_house_value"], num_epochs=1, shuffle=False) # Train the model, but do so inside a loop so that we can periodically assess # loss metrics. print("Training model...") print("RMSE (on training data):") training_rmse = [] validation_rmse = [] for period in range (0, periods): # Train the model, starting from the prior state. linear_regressor.train( input_fn=training_input_fn, steps=steps_per_period, ) # Take a break and compute predictions. training_predictions = linear_regressor.predict(input_fn=predict_training_input_fn) training_predictions = np.array([item['predictions'][0] for item in training_predictions]) validation_predictions = linear_regressor.predict(input_fn=predict_validation_input_fn) validation_predictions = np.array([item['predictions'][0] for item in validation_predictions]) # Compute training and validation loss. training_root_mean_squared_error = math.sqrt( metrics.mean_squared_error(training_predictions, training_targets)) validation_root_mean_squared_error = math.sqrt( metrics.mean_squared_error(validation_predictions, validation_targets)) # Occasionally print the current loss. print(" period %02d : %0.2f" % (period, training_root_mean_squared_error)) # Add the loss metrics from this period to our list. 
training_rmse.append(training_root_mean_squared_error) validation_rmse.append(validation_root_mean_squared_error) print("Model training finished.") # Output a graph of loss metrics over periods. plt.ylabel("RMSE") plt.xlabel("Periods") plt.title("Root Mean Squared Error vs. Periods") plt.tight_layout() plt.plot(training_rmse, label="training") plt.plot(validation_rmse, label="validation") plt.legend() return linear_regressor linear_regressor = train_model( learning_rate=0.00003, steps=500, batch_size=5, training_examples=training_examples, training_targets=training_targets, validation_examples=validation_examples, validation_targets=validation_targets) ``` ## Task 5: Evaluate on Test Data **In the cell below, load in the test data set and evaluate your model on it.** We've done a lot of iteration on our validation data. Let's make sure we haven't overfit to the peculiarities of that particular sample. The test data set is located [here](https://dl.google.com/mlcc/mledu-datasets/california_housing_test.csv). How does your test performance compare to the validation performance? What does this say about the generalization performance of your model? ``` california_housing_test_data = pd.read_csv("https://dl.google.com/mlcc/mledu-datasets/california_housing_test.csv", sep=",") # # YOUR CODE HERE # ``` ### Solution Click below for the solution.
``` california_housing_test_data = pd.read_csv("https://dl.google.com/mlcc/mledu-datasets/california_housing_test.csv", sep=",") test_examples = preprocess_features(california_housing_test_data) test_targets = preprocess_targets(california_housing_test_data) predict_test_input_fn = lambda: my_input_fn( test_examples, test_targets["median_house_value"], num_epochs=1, shuffle=False) test_predictions = linear_regressor.predict(input_fn=predict_test_input_fn) test_predictions = np.array([item['predictions'][0] for item in test_predictions]) root_mean_squared_error = math.sqrt( metrics.mean_squared_error(test_predictions, test_targets)) print("Final RMSE (on test data): %0.2f" % root_mean_squared_error) ```
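To make the validation-vs-test comparison asked about above concrete, the RMSE gap can be computed directly. A minimal self-contained sketch with placeholder numbers (these are not results from the runs above):

```python
import math

def rmse(predictions, targets):
    """Root mean squared error, computed from scratch."""
    n = len(predictions)
    return math.sqrt(sum((p - t) ** 2 for p, t in zip(predictions, targets)) / n)

# Placeholder values standing in for model predictions and true targets.
targets = [100.0, 200.0, 160.0]
val_rmse = rmse([110.0, 205.0, 150.0], targets)
test_rmse = rmse([120.0, 190.0, 140.0], targets)

# A test RMSE close to the validation RMSE suggests the model has not
# overfit the validation sample; a much larger one suggests it has.
gap = test_rmse - val_rmse
print(val_rmse, test_rmse, gap)
```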
``` import numpy as np import matplotlib.pyplot as plt import os import sys from hawkes import hawkes, sampleHawkes, plotHawkes, iterative_sampling, extract_samples, sample_counterfactual_superposition, check_monotonicity_hawkes sys.path.append(os.path.abspath('../')) from sampling_utils import thinning_T ``` This notebook contains an example of running Algorithm 3 (in the paper) in two cases: (1) when we have both observed and unobserved events, and (2) when we have only the observed events. # 1. Sampling From Lambda_max ``` # required parameters mu0 = 1 alpha = 1 w = 1 lambda_max = 3 T = 5 def constant1(x): return mu0 # sampling from the Hawkes process using the superposition property initial_sample, indicators = thinning_T(0, constant1, lambda_max, T) events = {initial_sample[i]: indicators[i] for i in range(len(initial_sample))} all_events = {} all_events[mu0] = events iterative_sampling(all_events, events, mu0, alpha, w, lambda_max, T) # plotting the Hawkes process sampled_events = list(all_events.keys())[1:] sampled_events.sort() sampled_events = np.array(sampled_events) sampled_lambdas = hawkes(sampled_events, mu0, alpha, w) plt.figure(figsize=(10, 8)) tvec, l_t = plotHawkes(sampled_events, constant1, alpha, w, T, 10000.0, label= 'intensity', color = 'r+', legend= 'accepted') plt.plot(sampled_events, sampled_lambdas, 'r^') plt.legend() plt.show() # extract all sampled events from the all_events dictionary. all_samples, all_lambdas = extract_samples(all_events, sampled_events, mu0, alpha, w) # plot all events, both accepted and rejected, with their intensities. plt.figure(figsize=(10, 8)) plt.plot(tvec, l_t, label = 'Original Intensity') plt.plot(all_samples, all_lambdas, 'oy', label = 'events') plt.plot(sampled_events,sampled_lambdas, 'r+', label = 'accepted') plt.xlabel('time') plt.ylabel('intensity') plt.legend() # sampling from the counterfactual intensity.
new_mu0 = 3 new_alpha = 0.1 real_counterfactuals = sample_counterfactual_superposition(mu0, alpha, new_mu0, new_alpha, all_events, lambda_max, w, T) ``` **The red +s are the counterfactuals.** ``` plt.figure(figsize=(15, 6)) plotHawkes(np.array(real_counterfactuals), lambda t: new_mu0, new_alpha, w, T, 10000.0, label= 'counterfactual intensity', color = 'g+', legend= 'accepted in counterfactual') plt.plot(tvec, l_t, label = 'Original Intensity') plt.plot(all_samples, all_lambdas, 'oy', label = 'events') plt.plot(sampled_events,sampled_lambdas, 'r^') plt.plot(sampled_events,np.full(len(sampled_events), -0.1), 'r+', label = 'originally accepted') for xc in real_counterfactuals: plt.axvline(x=xc, color = 'k', ls = '--', alpha = 0.2) plt.xlabel('time') plt.ylabel('intensity') plt.legend() ``` In the following cell, we will check the monotonicity property. Note that this property should hold in **each exponential created by superposition** (please have a look at `check_monotonicity_hawkes` in `hawkes.py` for more details). ``` check_monotonicity_hawkes(mu0, alpha, new_mu0, new_alpha, all_events, sampled_events, real_counterfactuals, w) ``` # 2. Real-World Scenario ``` # First, we sample from the Hawkes process using Ogata's algorithm (or any other sampling method), but only store the accepted events. plt.figure(figsize=(10, 8)) mu0 = 1 alpha = 1 w = 1 lambda_max = 3 T = 5 tev, tend, lambdas_original = sampleHawkes(mu0, alpha, w, T, Nev= 100) tvec, l_t = plotHawkes(tev, lambda t: mu0, alpha, w, T, 10000.0, label = 'Original Intensity', color= 'r+', legend= 'samples') plt.plot(tev, lambdas_original, 'r^') plt.legend() # this list stores functions corresponding to each exponential; the i=i default argument binds each lambda to its own event index exponentials = [] all_events = {} exponentials.append(lambda t: mu0) all_events[mu0] = {} for i in range(len(tev)): exponentials.append(lambda t, i=i: alpha * np.exp(-w * (t - tev[i]))) all_events[tev[i]] = {} # we should assign each accepted event to some exponential.
(IMPORTANT) for i in range(len(tev)): if i == 0: all_events[mu0][tev[i]] = True else: probabilities = [exponentials[j](tev[i]) for j in range(0, i + 1)] total = sum(probabilities) probabilities = [p / total for p in probabilities] a = np.random.choice(i + 1, 1, p = probabilities) if a[0] == 0: all_events[mu0][tev[i]] = True else: all_events[tev[a[0] - 1]][tev[i]] = True # using the superposition to calculate the difference between lambda_max and the exponentials, and sample from it. differences = [] differences.append(lambda t: lambda_max - mu0) for k in range(len(tev)): f = lambda t, k=k: lambda_max - alpha * np.exp(-w * (t - tev[k])) differences.append(f) for i in range(len(differences)): if i == 0: rejected, indicators = thinning_T(0, differences[i], lambda_max, T) else: rejected, indicators = thinning_T(tev[i - 1], differences[i], lambda_max, T) rejected = {rejected[j]: False for j in range(len(rejected)) if indicators[j]} if i == 0: all_events[mu0].update(rejected) all_events[mu0] = {k:v for k,v in sorted(all_events[mu0].items())} else: all_events[tev[i - 1]].update(rejected) all_events[tev[i - 1]] = {k:v for k,v in sorted(all_events[tev[i - 1]].items())} all_samples, all_lambdas = extract_samples(all_events, tev, mu0, alpha, w) plt.figure(figsize=(10, 8)) plt.plot(tvec, l_t, label = 'Original Intensity') plt.plot(all_samples, all_lambdas, 'oy', label = 'events') plt.plot(tev,lambdas_original, 'r+', label = 'accepted') plt.xlabel('time') plt.ylabel('intensity') plt.legend() new_mu0 = 0.1 new_alpha = 1.7 real_counterfactuals = sample_counterfactual_superposition(mu0, alpha, new_mu0, new_alpha, all_events, lambda_max, w, T) ``` **The red +s are the counterfactuals.** ``` plt.figure(figsize=(15, 8)) plotHawkes(np.array(real_counterfactuals), lambda t: new_mu0, new_alpha, w, T, 10000.0, label= 'counterfactual intensity', color= 'g+', legend= 'accepted in counterfactual') plt.plot(tvec, l_t, label = 'Original Intensity') plt.plot(all_samples, all_lambdas, 'oy', label =
'events') plt.plot(tev,lambdas_original, 'r^') plt.plot(tev,np.full(len(tev), -0.1), 'r+', label = 'originally accepted') for xc in real_counterfactuals: plt.axvline(x=xc, color = 'k', ls = '--', alpha = 0.2) plt.xlabel('time') plt.ylabel('intensity') plt.legend() check_monotonicity_hawkes(mu0, alpha, new_mu0, new_alpha, all_events, tev, real_counterfactuals, w) ```
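The `thinning_T` helper used throughout comes from `sampling_utils`; the underlying idea is standard Lewis/Ogata thinning for an intensity bounded above by `lambda_max`. A minimal self-contained sketch of that idea (an illustration of the technique, not the project's implementation):

```python
import numpy as np

def thinning(intensity, lambda_max, T, seed=0):
    """Sample an inhomogeneous Poisson process on [0, T] by thinning.

    Candidates arrive as a homogeneous Poisson process of rate lambda_max;
    each candidate t is accepted with probability intensity(t) / lambda_max.
    """
    rng = np.random.default_rng(seed)
    t, accepted = 0.0, []
    while True:
        t += rng.exponential(1.0 / lambda_max)  # next candidate arrival
        if t > T:
            break
        if rng.uniform() < intensity(t) / lambda_max:
            accepted.append(t)
    return accepted

# Example: an exponentially decaying intensity bounded above by lambda_max = 3.
events = thinning(lambda t: 3.0 * np.exp(-t), lambda_max=3.0, T=5.0)
```

The rejected candidates are exactly what the superposition construction above records with a `False` indicator, since they are samples from the difference `lambda_max - intensity(t)`.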
# Two Layer QG Model Example # Here is a quick overview of how to use the two-layer model. See the :py:class:`pyqg.QGModel` API documentation for further details. First import numpy, matplotlib, and pyqg: ``` import numpy as np from matplotlib import pyplot as plt %matplotlib inline import pyqg ``` ## Initialize and Run the Model ## Here we set up a model which will run for 10 years and start averaging after 5 years. There are lots of parameters that can be specified as keyword arguments but we are just using the defaults. ``` year = 24*60*60*360. m = pyqg.QGModel(tmax=10*year, twrite=10000, tavestart=5*year) m.run() ``` ## Convert Model Output to an xarray Dataset ## Model variables, coordinates, attributes, and metadata can be stored conveniently as an xarray Dataset. (Notice that this feature requires xarray to be installed on your machine. See here for installation instructions: http://xarray.pydata.org/en/stable/getting-started-guide/installing.html#instructions) ``` m_ds = m.to_dataset() m_ds ``` ## Visualize Output ## Let's assign a new data variable, ``q_upper``, as the **upper layer PV anomaly**. We access the PV values in the Dataset as ``m_ds.q``, which has two levels and a corresponding background PV gradient, ``m_ds.Qy``. ``` m_ds['q_upper'] = m_ds.q.isel(lev=0, time=0) + m_ds.Qy.isel(lev=0)*m_ds.y m_ds['q_upper'].attrs = {'long_name': 'upper layer PV anomaly'} m_ds.q_upper.plot.contourf(levels=18, cmap='RdBu_r'); ``` ## Plot Diagnostics ## The model automatically accumulates averages of certain diagnostics. We can find out what diagnostics are available by calling ``` m.describe_diagnostics() ``` To look at the wavenumber energy spectrum, we plot the `KEspec` diagnostic. (Note that summing along the l-axis, as in this example, does not give us a true *isotropic* wavenumber spectrum.)
``` kespec_upper = m_ds.KEspec.isel(lev=0).sum('l') kespec_lower = m_ds.KEspec.isel(lev=1).sum('l') kespec_upper.plot.line( 'b.-', x='k', xscale='log', yscale='log', label='upper layer') kespec_lower.plot.line( 'g.-', x='k', xscale='log', yscale='log', label='lower layer') plt.legend(loc='lower left') plt.ylim([1e-9, 1e-3]); plt.xlabel(r'k (m$^{-1}$)'); plt.grid() plt.title('Kinetic Energy Spectrum'); ``` We can also plot the spectral fluxes of energy. ``` ebud = [ m_ds.APEgenspec.sum('l'), m_ds.APEflux.sum('l'), m_ds.KEflux.sum('l'), -m_ds.attrs['pyqg:rek']*m.del2*m_ds.KEspec.isel(lev=1).sum('l')*m.M**2 ] ebud.append(-np.vstack(ebud).sum(axis=0)) ebud_labels = ['APE gen','APE flux','KE flux','Diss.','Resid.'] [plt.semilogx(m_ds.k, term) for term in ebud] plt.legend(ebud_labels, loc='upper right') plt.xlim([m_ds.k.min(), m_ds.k.max()]) plt.xlabel(r'k (m$^{-1}$)'); plt.grid() plt.title('Spectral Energy Transfers'); ```
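As noted earlier, summing `KEspec` along the l-axis is not a true isotropic spectrum. A true isotropic estimate bins the 2-D spectral density by total wavenumber modulus; a generic numpy sketch of that binning on a toy spectrum (plain numpy, not pyqg API):

```python
import numpy as np

# Toy 2-D spectral density on an (l, k) wavenumber grid.
k = np.arange(1, 33)            # zonal wavenumbers
l = np.arange(-16, 16)          # meridional wavenumbers
kk, ll = np.meshgrid(k, l)
spec2d = 1.0 / (kk**2 + ll**2)  # placeholder spectrum, not model output

# Bin the spectral density by total wavenumber modulus |K| = sqrt(k^2 + l^2).
modulus = np.sqrt(kk**2 + ll**2)
bins = np.arange(1, modulus.max() + 1)
which = np.digitize(modulus.ravel(), bins)
iso = np.array([spec2d.ravel()[which == i].sum() for i in range(1, len(bins))])
```

`iso[i]` then holds the total spectral density with modulus in `[bins[i], bins[i+1])`, which is what one would plot against `bins[:-1]` for an isotropic spectrum.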
### Importing libraries ``` # Import default libraries import pandas as pd import numpy as np import math from matplotlib import pyplot as plt import seaborn as sns import time import random import warnings import os import requests import json import zipfile import logging # Import Biopython utils from Bio.PDB import PDBList, calc_angle, calc_dihedral, PPBuilder, is_aa, PDBIO, NeighborSearch, DSSP, HSExposureCB from Bio.PDB.PDBParser import PDBParser from Bio.SeqUtils import IUPACData from Bio.PDB.PDBIO import Select # Import custom libraries from modules.feature_extraction import * # Set debug info logging.basicConfig(level=logging.DEBUG) ``` ### Helper functions ### Importing original dataset (LIP tagged sequences) ``` def down_sampling(df, number_of_samples, seed = 42): random.seed(seed) noLIP_index = set(df[df['LIP'] == 0].index) indexes = set(np.arange(0, np.shape(df)[0])) sample = random.sample(sorted(noLIP_index), len(noLIP_index) - number_of_samples) new_index = indexes.difference(sample) df1 = df.iloc[list(new_index), :] return df1 # Turns an angle from radians to degrees def rad_to_deg(rad_angle): # If the input is None, then it returns None.
# For numerical input, the output is mapped to [-180,180] if rad_angle is None : return None # Computes angle in degrees angle = rad_angle * 180 / math.pi # Handles radians conversion while angle > 180 : angle = angle - 360 while angle < -180 : angle = angle + 360 return angle # Read original dataset (lips_dataset) ds_original = pd.read_csv('./datasets/lips_dataset_02.txt', sep='\t') # Show first rows ds_original.head() ``` ### Downloading proteins (automatically skips a protein if it has already been downloaded) ``` # Select all proteins (pdb column) pdb_ids = ds_original.pdb.unique() # Define pdb files dir pdb_dir = './pdb_files' # Define pdb file fetching class pdbl = PDBList() # Fetch every protein for pdb_id in pdb_ids: # Execute fetching of the protein (pdb file) pdbl.retrieve_pdb_file(pdb_id, pdir=pdb_dir, file_format='pdb') ``` ### Creating residues dataset ``` # Select all proteins (pdb column) pdb_ids = ds_original.pdb.unique() # Define pdb files dir pdb_dir = './pdb_files' # Define pdb file fetching class pdbl = PDBList() # Define a set containing (pdb_id, chain_id) valid_chains = set([(row['pdb'], row['chain']) for idx, row in ds_original.iterrows()]) # New list for residues ds_residues = list() # Loop through every protein for pdb_id in ds_original.pdb.unique(): # Get structure of the protein structure = PDBParser(QUIET=True).get_structure(pdb_id, pdb_dir + '/pdb{}.ent'.format(pdb_id)) # We select only the 0-th model model = structure[0] # Loop through every model's chain for chain in model: # Skip if the chain is not valid if (pdb_id, chain.id) not in valid_chains: continue for residue in chain: # Do not take into account non-aminoacidic residues (e.g.
water molecules) if(not is_aa(residue)): continue # Add an entry to the residues list ds_residues.append((pdb_id, model.id, chain.id, residue.id[1], residue.get_resname(), 0, 0)) # Turn list into dataframe ds_residues = pd.DataFrame(ds_residues) # Define dataset column names ds_residues.columns = ['PDB_ID', 'MODEL_ID', 'CHAIN_ID', 'RES_ID', 'RES_NAME', 'LIP_SCORE', 'LIP'] # Show some info about the dataset print("Number of proteins: {}".format(np.shape(ds_original)[0])) print("Number of residues: {}".format(np.shape(ds_residues)[0])) # Show first rows ds_residues.head() ``` ### Tagging LIP residues ``` # Launch tagging algorithm: we have 0 positively tagged residues LIP_tag(ds_original, ds_residues) # Check that the number of residues positively LIP-tagged is higher than 0 assert any(ds_residues['LIP'] == 1) # Show first positively tagged LIP residues ds_residues[ds_residues.LIP == 1].head() ``` ### Check dataset balance We check whether we have the same number of LIP and non-LIP tagged residues. ``` # Compute number of LIP tagged residues print('Number of LIP tagged residues: {}'.format(ds_residues[ds_residues.LIP == 1].shape[0])) # Compute number of non-LIP tagged residues print('Number of non-LIP tagged residues: {}'.format(ds_residues[ds_residues.LIP == 0].shape[0])) # Add plot fig, ax = plt.subplots(1, 1) # Add frequency plot ax = plt.hist(ds_residues['LIP'], bins=2) ``` ## Feature extraction ### DSSP features (angles, etc.)
``` # Get DSSP dataframe ds_dssp = get_DSSP(ds_original.pdb.unique(), pdb_dir) # Show dataframe ds_dssp.head() # Check NULL values in PHI and PSI columns assert not ds_dssp.PHI.isnull().any() # Drop useless features ds_dssp.drop(['DSSP_ID', 'AA'], axis=1, inplace=True) ds_dssp.head() # Drop useless columns from residues dataset if 'PHI' in ds_residues.columns: ds_residues.drop(['PHI', 'PSI'], axis=1, inplace=True) # Merge DSSP features in ds_residues dataset ds_residues = ds_residues.merge(ds_dssp, on=['PDB_ID', 'CHAIN_ID', 'RES_ID'], how='left') # Check new dataset ds_residues.head() fig, ax = plt.subplots(1, 2) sns.boxplot(x='LIP', y='PHI',data=ds_residues, ax=ax[0]) sns.boxplot(x='LIP', y='PSI',data=ds_residues, ax=ax[1]) ``` ### RING features ``` # Define folder for ring files ring_dir = './ring_files' # Define PDB files for which RING feature extraction is required pdb_ids = ds_original.pdb.unique() # Define contact threshold to consider contact_threshold = 3.5 # Flag to actually extract RING files enable_ring = False if enable_ring: # Download chunks of 5 files at a time for i in range(0, len(pdb_ids), 5): # Download required RING files download_RING(pdb_ids[i:i+5], ring_dir) # Get edges info from RING ds_ring = get_RING(pdb_ids, pdb_dir, ring_dir, contact_threshold) ds_ring.head() # Get the number of intra-chain contacts for every residue intra_contacts = (ds_ring[ds_ring.CHAIN_ID_A == ds_ring.CHAIN_ID_B] .groupby(['PDB_ID', 'CHAIN_ID_A', 'RES_ID_A'], as_index=False) .size() .reset_index(name='COUNTS')) intra_contacts.columns = ['PDB_ID', 'CHAIN_ID', 'RES_ID', 'INTRA_CONTACTS'] intra_contacts.RES_ID = intra_contacts.RES_ID.astype(int) intra_contacts.head() # Get the number of inter-chain contacts for every residue inter_contacts = (ds_ring[ds_ring.CHAIN_ID_A != ds_ring.CHAIN_ID_B] .groupby(['PDB_ID', 'CHAIN_ID_A', 'RES_ID_A'], as_index=False) .size() .reset_index(name='COUNTS')) inter_contacts.columns = ['PDB_ID', 'CHAIN_ID', 'RES_ID',
'INTER_CONTACTS'] inter_contacts.RES_ID = inter_contacts.RES_ID.astype(int) inter_contacts.head() # Merge intra-chain contacts into the main dataset ds_residues = pd.merge(ds_residues, intra_contacts, how="left", on=['PDB_ID', 'CHAIN_ID', 'RES_ID']) ds_residues.head() # Merge inter-chain contacts into the main dataset ds_residues = pd.merge(ds_residues, inter_contacts, how="left", on=['PDB_ID', 'CHAIN_ID', 'RES_ID']) ds_residues.head() # Fill NaN with zeroes ds_residues.fillna(0, inplace=True) ds_residues.head() # Group every contact by residue groupby = ds_ring.groupby(['PDB_ID', 'CHAIN_ID_A', 'RES_ID_A'], as_index=False) # Get edge locations edges_loc = groupby['EDGE_LOC'].apply(lambda x: ' '.join(x)).reset_index(name='EDGE_LOC') # Get edge types edges_type = groupby['EDGE_TYPE'].apply(lambda x: ' '.join(x)).reset_index(name='EDGE_TYPE') # Merge loc and type edges = pd.merge(edges_loc, edges_type, on=['PDB_ID', 'CHAIN_ID_A', 'RES_ID_A']) edges.columns = ['PDB_ID', 'CHAIN_ID', 'RES_ID', 'EDGE_LOC', 'EDGE_TYPE'] edges.RES_ID = edges.RES_ID.astype(int) edges.head() # Merge edge locations and types into the main dataframe ds_residues = ds_residues.merge(edges, how='left', on=['PDB_ID', 'CHAIN_ID', 'RES_ID']) # Handle NaNs ds_residues.EDGE_LOC = ds_residues.EDGE_LOC.fillna('') ds_residues.EDGE_TYPE = ds_residues.EDGE_TYPE.fillna('') # Show new dataset ds_residues.head() # Save residues dataset to disk ds_residues.to_csv('./datasets/residues.csv') ```
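The `EDGE_LOC` and `EDGE_TYPE` columns built above hold space-joined token strings per residue. If numeric features are wanted from them later, one option (a sketch, not part of the original pipeline; `HBOND` and `VDW` are placeholder tokens, not necessarily RING's exact edge-type names) is to expand each string into per-token counts:

```python
import pandas as pd

# Hypothetical mini-frame mimicking the space-joined EDGE_TYPE column
df = pd.DataFrame({'EDGE_TYPE': ['HBOND VDW HBOND', 'VDW', '']})

# Explode each row into one token per line, then cross-tabulate counts
tokens = df['EDGE_TYPE'].str.split().explode()
counts = pd.crosstab(tokens.index, tokens).reindex(df.index, fill_value=0)
print(counts)
```

Each row then carries one count column per edge type, with all-zero rows for residues that have no contacts, so the result could be concatenated onto `ds_residues` directly.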
## Starting Off Look at the confusion matrix below and answer the following questions. 1. How many oranges are there in the dataset? 12 2. How many fruits were predicted by the model to be an orange? 6 3. Of the fruits that were predicted to be an orange, how many were actually mangoes? 3 4. Of the fruits that are actually mangoes, how many were predicted to be apples? 9 ![alt text](images/confusion_matrix.png) # Classification Practicum with Class Imbalance Agenda: - Review class imbalance - Review code for different ways to handle class imbalance ``` !pip install imblearn import pandas as pd import numpy as np from sklearn import metrics # Read in data and split data to be used in the models titanic = pd.read_csv('https://raw.githubusercontent.com/learn-co-students/nyc-mhtn-ds-042219-lectures/master/Module_4/cleaned_titanic.csv', index_col='PassengerId') titanic.head() # Create matrix of features X = titanic.drop('Survived', axis = 1) # grabs everything else but 'Survived' # Create target variable y = titanic['Survived'] # y is the column we're trying to predict # Create a list of the features being used in the model feature_cols = X.columns ``` # Handling Class Imbalance ## Visualizing Class Imbalance ``` import matplotlib.pyplot as plt import seaborn as sns %matplotlib inline sns.set_style('darkgrid') plt.figure(figsize = (10,5)) sns.countplot(y, alpha =.80, palette= ['grey','lightgreen']) plt.title('Survivors vs Non-Survivors') plt.ylabel('# Passengers') plt.show() ``` ## Run a Dummy Classifier for Baseline Assessment ``` 1-y.mean() from sklearn.model_selection import train_test_split from sklearn.dummy import DummyClassifier from sklearn.metrics import accuracy_score, f1_score, recall_score # setting up testing and training sets X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=23) # DummyClassifier to predict only target 0 dummy = DummyClassifier(strategy='most_frequent').fit(X_train, y_train) dummy_pred = 
dummy.predict(X_test) dummy_pred ``` **Questions:** - What do you think the accuracy score will be for this model? - What do you think the recall score will be for this model? ``` # checking accuracy print('Test Accuracy score: ', accuracy_score(y_test, dummy_pred)) # checking recall print('Test Recall score: ', recall_score(y_test, dummy_pred)) ``` # Handling Class Imbalance In this guide, we will cover 5 tactics for handling imbalanced classes in machine learning: 1. Up-sample the minority class 2. Down-sample the majority class 3. Change your performance metric 4. Penalize algorithms (cost-sensitive training) 5. Use tree-based algorithms ## Run a Classification Model with Class Imbalance Before we start to implement different ways to handle class imbalance, let's fit a basic model to have a better point of comparison. ``` from sklearn.linear_model import LogisticRegression lr_clf = LogisticRegression(solver='liblinear') lr_clf.fit(X_train, y_train) y_pred_test = lr_clf.predict(X_test) # checking accuracy print('Test Accuracy score: ', accuracy_score(y_test, y_pred_test)) # checking F1 print('Test F1 score: ', f1_score(y_test, y_pred_test)) results = {} results['imbalanced'] = (accuracy_score(y_test, y_pred_test), f1_score(y_test, y_pred_test)) ``` ## Prepping data for handling class imbalances We are going to change the training dataset to which we fit our model, so we want to bring our training data back together before we make those changes. ``` # concatenate our training data back together training = pd.concat([X_train, y_train], axis=1) training # separate minority and majority classes deceased = training[training.Survived==0] survived = training[training.Survived==1] # Get a class count to understand the class imbalance. print('deceased count: '+ str(len(deceased))) print('survived count: '+ str(len(survived))) ``` ## Resampling You can change the dataset that you use to build your predictive model to have more balanced data. 
This change is called sampling your dataset, and there are two main methods you can use to even up the classes: you can add copies of instances from the under-represented class, called over-sampling (or, more formally, sampling with replacement), or you can delete instances from the over-represented class, called under-sampling. These approaches are often very easy to implement and fast to run. They are an excellent starting point. **Some Rules of Thumb:** - Consider testing under-sampling when you have a lot of data (tens or hundreds of thousands of instances or more) - Consider testing over-sampling when you don’t have a lot of data (tens of thousands of records or less) - Consider testing random and non-random (e.g. stratified) sampling schemes. - Consider testing different resampled ratios (e.g. you don’t have to target a 1:1 ratio in a binary classification problem; try other ratios) ![alt text](images/resampling.png) ``` from sklearn.utils import resample ``` ### Upsampling ``` # upsample minority survived_upsampled = resample(survived, replace=True, # sample with replacement n_samples=len(deceased), # match number in majority class random_state=23) # reproducible results survived_upsampled.shape # combine majority and upsampled minority upsampled = pd.concat([deceased, survived_upsampled]) # check new class counts upsampled.Survived.value_counts() len(upsampled) ``` Now that we have balanced classes, let's see how this affects the performance of the model. 
``` # trying logistic regression again with the balanced dataset y_train = upsampled.Survived X_train = upsampled.drop('Survived', axis=1) # upsampled_dt = DecisionTreeClassifier(max_depth=5) upsampled_lr = LogisticRegression(solver='liblinear') # upsampled_dt.fit(X_train, y_train) upsampled_lr.fit(X_train, y_train) # upsampled_pred = upsampled_dt.predict(X_test) upsampled_pred = upsampled_lr.predict(X_test) # checking accuracy print('Test Accuracy score: ', accuracy_score(y_test, upsampled_pred)) # checking F1 print('Test F1 score: ', f1_score(y_test, upsampled_pred)) results['upsampled'] = (accuracy_score(y_test, upsampled_pred), f1_score(y_test, upsampled_pred)) results ``` ## Downsampling ``` # downsample majority deceased_downsampled = resample(deceased, replace = False, # sample without replacement n_samples = len(survived), # match minority n random_state = 23) # reproducible results # combine minority and downsampled majority downsampled = pd.concat([deceased_downsampled, survived]) # checking counts downsampled.Survived.value_counts() # trying logistic regression again with the balanced dataset y_train = downsampled.Survived X_train = downsampled.drop('Survived', axis=1) downsampled_lr = LogisticRegression(solver='liblinear') downsampled_lr.fit(X_train, y_train) downsampled_pred = downsampled_lr.predict(X_test) # checking accuracy print('Test Accuracy score: ', accuracy_score(y_test, downsampled_pred)) # checking F1 print('Test F1 score: ', f1_score(y_test, downsampled_pred)) results['downsampled'] = (accuracy_score(y_test, downsampled_pred), f1_score(y_test, downsampled_pred)) results ``` ## Over-sampling: SMOTE SMOTE (Synthetic Minority Oversampling Technique) consists of synthesizing elements for the minority class, based on those that already exist. It works by randomly picking a point from the minority class and computing the k-nearest neighbors for this point. The synthetic points are added between the chosen point and its neighbors. 
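The interpolation step can be sketched in plain NumPy (a toy illustration of the idea, not the imblearn implementation): a synthetic sample is `x + lam * (x_nn - x)` for a random `lam` in [0, 1], where `x_nn` is one of the point's k nearest minority-class neighbors.

```python
import numpy as np

rng = np.random.default_rng(23)

# Toy minority-class points (2 features each)
minority = np.array([[1.0, 1.0],
                     [1.5, 1.2],
                     [1.2, 0.8]])

def smote_sample(X, k=2, rng=rng):
    """Generate one synthetic point by interpolating between a random
    minority point and one of its k nearest minority neighbors."""
    i = rng.integers(len(X))
    x = X[i]
    # Distances from x to every minority point
    dists = np.linalg.norm(X - x, axis=1)
    neighbours = np.argsort(dists)[1:k + 1]  # skip the point itself
    x_nn = X[rng.choice(neighbours)]
    lam = rng.random()  # lambda in [0, 1]
    return x + lam * (x_nn - x)

new_point = smote_sample(minority)
print(new_point)  # lies on the segment between two existing minority points
```

Because each synthetic point is a convex combination of two real minority points, SMOTE never generates samples outside the minority class's local neighborhood.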
![alt text](images/smote.png) ``` from imblearn.over_sampling import SMOTE # setting up testing and training sets X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=23) sm = SMOTE(random_state=23) X_train, y_train = sm.fit_resample(X_train, y_train) y_train.value_counts() smote_lr = LogisticRegression(solver='liblinear') # smote_dt.fit(X_train, y_train) smote_lr.fit(X_train, y_train) smote_pred = smote_lr.predict(X_test) # checking accuracy print('Test Accuracy score: ', accuracy_score(y_test, smote_pred)) # checking F1 print('Test F1 score: ', f1_score(y_test, smote_pred)) results['smote'] = (accuracy_score(y_test, smote_pred), f1_score(y_test, smote_pred)) results ``` ## Under-sampling: Tomek links Tomek links are pairs of very close instances of opposite classes. Removing the majority-class instance of each pair increases the space between the two classes, facilitating the classification process. ![alt text](images/tomek.png) ``` from collections import Counter from imblearn.under_sampling import TomekLinks # setting up testing and training sets X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=23) tl = TomekLinks() X_res, y_res = tl.fit_resample(X_train, y_train) print('Resampled dataset shape %s' % Counter(y_res)) feature_cols tl.sample_indices_ # remove Tomek links tl = TomekLinks() X_resampled, y_resampled = tl.fit_resample(X_train, y_train) ``` ## Show removed observations ``` fig = plt.figure() ax = fig.add_subplot(1, 1, 1) idx_samples_removed = np.setdiff1d(np.arange(X_train.shape[0]), tl.sample_indices_) idx_class_0 = y_resampled == 0 plt.scatter(X_resampled[idx_class_0]['Age'], X_resampled[idx_class_0]['Fare'], alpha=.8, label='Perished') plt.scatter(X_resampled[~idx_class_0]['Age'], X_resampled[~idx_class_0]['Fare'], alpha=.8, label='Survived') plt.scatter(X_train.iloc[idx_samples_removed]['Age'], X_train.iloc[idx_samples_removed]['Fare'], alpha=.8, 
label='Removed samples') plt.legend() len(idx_samples_removed) len(X_train) len(X_resampled) tomek_lr = LogisticRegression(solver='liblinear') tomek_lr.fit(X_resampled, y_resampled) tomek_pred = tomek_lr.predict(X_test) # checking accuracy print('Test Accuracy score: ', accuracy_score(y_test, tomek_pred)) # checking F1 print('Test F1 score: ', f1_score(y_test, tomek_pred)) results['tomek'] = (accuracy_score(y_test, tomek_pred), f1_score(y_test, tomek_pred)) results ``` ### Penalize Algorithms (Cost-Sensitive Training) The next tactic is to use penalized learning algorithms that increase the cost of classification mistakes on the minority class. During training, we can use the argument `class_weight='balanced'` to penalize mistakes on the minority class by an amount proportional to how under-represented it is. ``` # setting up testing and training sets X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=23) lr_clf_weighted = LogisticRegression(solver='liblinear', class_weight='balanced') lr_clf_weighted.fit(X_train, y_train) y_weighted_test = lr_clf_weighted.predict(X_test) # checking accuracy print('Test Accuracy score: ', accuracy_score(y_test, y_weighted_test)) # checking F1 print('Test F1 score: ', f1_score(y_test, y_weighted_test)) results['weighted'] = (accuracy_score(y_test, y_weighted_test), f1_score(y_test, y_weighted_test)) results ``` ## Tree-Based Algorithms Decision trees often perform well on imbalanced datasets because their hierarchical structure allows them to learn signals from both classes. 
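Both the weighted logistic regression above and the random forest below rely on `class_weight='balanced'`. scikit-learn computes these weights as `n_samples / (n_classes * np.bincount(y))`, so the rarer the class, the larger its weight; a quick check on toy labels (hypothetical counts, not the Titanic data):

```python
import numpy as np
from sklearn.utils.class_weight import compute_class_weight

# Toy imbalanced labels: 8 negatives, 2 positives
y_toy = np.array([0] * 8 + [1] * 2)

weights = compute_class_weight(class_weight='balanced',
                               classes=np.array([0, 1]), y=y_toy)
print(weights)  # [0.625 2.5] -> the minority class is weighted 4x heavier

# Same formula by hand: n_samples / (n_classes * bincount)
manual = len(y_toy) / (2 * np.bincount(y_toy))
print(manual)
```

These per-class weights multiply each sample's contribution to the loss, so a misclassified minority sample costs the model four times as much here.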
``` # Instantiate the classifier using 200 trees from sklearn.ensemble import RandomForestClassifier rfc = RandomForestClassifier(random_state = 23, n_estimators=200, class_weight='balanced') # fit the model to the training data rfc.fit(X_train, y_train) # use the fitted model to predict on the test data rfc_pred = rfc.predict(X_test) # checking accuracy on the test data print('Test Accuracy score: ', accuracy_score(y_test, rfc_pred)) # checking F1 on the test data print('Test F1 score: ', f1_score(y_test, rfc_pred)) results['rfc'] = (accuracy_score(y_test, rfc_pred), f1_score(y_test, rfc_pred)) results ``` ## Change Your Performance Metric Accuracy is not the metric to use when working with an imbalanced dataset; we have seen that it is misleading. There are metrics that have been designed to tell you a more truthful story when working with imbalanced classes. - Precision: A measure of a classifier's exactness. - Recall: A measure of a classifier's completeness. - F1 Score (or F-score): A weighted average of precision and recall. - Kappa (or Cohen's kappa): Classification accuracy normalized by the imbalance of the classes in the data. - ROC Curves: Like precision and recall, accuracy is divided into sensitivity and specificity, and models can be chosen based on the balance thresholds of these values. When using a cross-validation method, you can utilize one of these as the scoring metric when comparing across multiple methods. This will not change the way a model is fitted; it will just choose a different model as the **best_estimator** based on the scoring metric. ## Reframe as Anomaly Detection If your class imbalance is very extreme (less than 0.1%), it might be better to treat this as an anomaly detection problem rather than a classification problem. **Anomaly detection**, a.k.a. outlier detection, is for detecting outliers and rare events. Instead of building a classification model, you'd have a "profile" of a normal observation. 
If a new observation strays too far from that "normal profile," it would be flagged as an anomaly. https://towardsdatascience.com/anomaly-detection-for-dummies-15f148e559c1
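As a quick illustration, every metric in the list above is available directly from `sklearn.metrics` (toy labels, for illustration only):

```python
from sklearn.metrics import (precision_score, recall_score, f1_score,
                             cohen_kappa_score, roc_auc_score)

# Toy imbalanced ground truth (2 positives out of 10) and hard predictions
y_true = [0, 0, 0, 0, 0, 0, 0, 0, 1, 1]
y_pred = [0, 0, 0, 0, 0, 0, 0, 1, 1, 0]

print('Precision:', precision_score(y_true, y_pred))  # 1 TP / (1 TP + 1 FP) = 0.5
print('Recall:   ', recall_score(y_true, y_pred))     # 1 TP / (1 TP + 1 FN) = 0.5
print('F1:       ', f1_score(y_true, y_pred))
print('Kappa:    ', cohen_kappa_score(y_true, y_pred))

# ROC AUC needs scores/probabilities rather than hard labels
y_scores = [0.1, 0.2, 0.1, 0.3, 0.2, 0.1, 0.4, 0.6, 0.9, 0.4]
print('ROC AUC:  ', roc_auc_score(y_true, y_scores))
```

Note that a most-frequent dummy classifier would score 0.8 accuracy on these labels while every metric above exposes its uselessness (recall and F1 would be 0).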
``` import numpy as np import matplotlib.pyplot as plt ``` This notebook provides a basic example of using the `blg_strain` package to calculate the magnetoelectric susceptibility for strained bilayer graphene. # Strained Lattice ``` from blg_strain.lattice import StrainedLattice sl = StrainedLattice(eps=0.01, theta=0) sl.calculate() ``` Below is a plot of the Brillouin zone (black hexagon) and location of the K/K' points (red markers), which do not coincide with the high-symmetry points of the Brillouin zone. ``` fig = plt.figure() axes = [fig.add_subplot(x) for x in (121, 222, 224)] for ax in axes: sl.plot_bz(ax) ax.set_aspect(1) w = 0.02 axes[1].set_xlim(sl.K[0] - w, sl.K[0] + w) axes[1].set_ylim(sl.K[1] - w, sl.K[1] + w) axes[2].set_xlim(sl.Kp[0] - w, sl.Kp[0] + w) axes[2].set_ylim(sl.Kp[1] - w, sl.Kp[1] + w) ``` # Band Structure ``` from blg_strain.bands import BandStructure bs = BandStructure(sl=sl, window=0.1, Delta=0.01) bs.calculate(Nkx=200, Nky=200) ``` Below are plots of the energy, one component of the wavefunction, Berry curvature, and orbital magnetic moment in regions of momentum space surrounding the K and K' valleys. 
``` fig, axes = plt.subplots(2, 4, figsize=(14, 7)) pcolormesh_kwargs = dict(cmap='cividis', shading='gouraud') contour_kwargs = dict(colors='k', linewidths=0.5, linestyles='solid') n = 2 # Band index m = 1 # component of wavefunction for i, (axK, axKp, A) in enumerate(zip(axes[0,:], axes[1,:], [bs.E[n], bs.Psi[n,m,:,:].real, bs.Omega[n], bs.Mu[n]])): # K axK.pcolormesh(bs.Kxa, bs.Kya, A, **pcolormesh_kwargs) axK.contour(bs.Kxa, bs.Kya, A, **contour_kwargs) # K' if i >= 2: # Omega and Mu A = -A axKp.pcolormesh(-bs.Kxa, -bs.Kya, A, **pcolormesh_kwargs) axKp.contour(-bs.Kxa, -bs.Kya, A, **contour_kwargs) for ax in axes.flatten(): ax.set_xticks([]) ax.set_yticks([]) ax.set_aspect(1) axes[0,0].set_title('Conduction band energy') axes[0,1].set_title(f'Component {m} of wavefunction') axes[0,2].set_title('Berry curvature') axes[0,3].set_title('Orbital magnetic moment') axes[0,0].set_ylabel('$K$', rotation=0, labelpad=30, fontsize=16, va='center') axes[1,0].set_ylabel('$K\'$', rotation=0, labelpad=30, fontsize=16, va='center') ``` # Filled bands ``` from blg_strain.bands import FilledBands fb = FilledBands(bs=bs, EF=0.01) fb.calculate(Nkx=500, Nky=500) ``` Below is a plot of the $x$ component of magnetoelectric susceptibility as a function of doping (carrier density) for the band structure illustrated above. 
``` EFs = np.linspace(0, 0.015, 100) ns = np.empty_like(EFs) alphas = np.empty_like(EFs) for i, EF in enumerate(EFs): fb = FilledBands(bs=bs, EF=EF) fb.calculate(500, 500) ns[i] = fb.n alphas[i] = fb.alpha[0] fig, ax = plt.subplots() ax.plot(ns/1e16, alphas) ax.set_xlabel('Carrier density ($10^{12}$ cm$^{-2}$)') ax.set_ylabel('Magnetoelectric coefficient (a.u.)') ``` # Saving and Loading ``` base_path = 'example' sl.save(base_path) bs.save() fb.save() sl_path = '/'.join((base_path, 'StrainedLattice_eps0.010_theta0.000_Run0')) sl = StrainedLattice.load(sl_path + '.h5') bs_path = '/'.join((sl_path, 'BandStructure_Nkx200_Nky200_Delta10.000')) bs = BandStructure.load(bs_path + '.h5') fb_path = '/'.join((bs_path, 'FilledBands_Nkx500_Nky500_EF15.000')) fb = FilledBands.load(fb_path + '.h5') ``` ## Create and load "summary" file ``` from blg_strain.utils.saver import load Deltas, EFs, ns, Ds, alphas = load(sl_path) Deltas, EFs, ns, Ds, alphas ```
# Transfer Learning Most of the time you won't want to train a whole convolutional network yourself. Modern ConvNets trained on huge datasets like ImageNet take weeks on multiple GPUs. Instead, most people use a pretrained network either as a fixed feature extractor, or as an initial network to fine-tune. In this notebook, you'll be using [VGGNet](https://arxiv.org/pdf/1409.1556.pdf) trained on the [ImageNet dataset](http://www.image-net.org/) as a feature extractor. Below is a diagram of the VGGNet architecture. <img src="assets/cnnarchitecture.jpg" width=700px> VGGNet is great because it's simple and has great performance, coming in second in the ImageNet competition. The idea here is that we keep all the convolutional layers, but replace the final fully connected layers with our own classifier. This way we can use VGGNet as a feature extractor for our images, then easily train a simple classifier on top of that. What we'll do is take the first fully connected layer with 4096 units, including thresholding with ReLUs. We can use those values as a code for each image, then build a classifier on top of those codes. You can read more about transfer learning from [the CS231n course notes](http://cs231n.github.io/transfer-learning/#tf). ## Pretrained VGGNet We'll be using a pretrained network from https://github.com/machrisaa/tensorflow-vgg. This code is already included in the 'tensorflow_vgg' directory, so you don't have to clone it. This is a really nice implementation of VGGNet, quite easy to work with. The network has already been trained and the parameters are available from this link. Download the parameter file using the next cell. 
``` from urllib.request import urlretrieve from os.path import isfile, isdir from tqdm import tqdm vgg_dir = 'tensorflow_vgg/' # Make sure vgg exists if not isdir(vgg_dir): raise Exception("VGG directory doesn't exist!") class DLProgress(tqdm): last_block = 0 def hook(self, block_num=1, block_size=1, total_size=None): self.total = total_size self.update((block_num - self.last_block) * block_size) self.last_block = block_num if not isfile(vgg_dir + "vgg16.npy"): with DLProgress(unit='B', unit_scale=True, miniters=1, desc='VGG16 Parameters') as pbar: urlretrieve( 'https://s3.amazonaws.com/content.udacity-data.com/nd101/vgg16.npy', vgg_dir + 'vgg16.npy', pbar.hook) else: print("Parameter file already exists!") ``` ## Flower power Here we'll be using VGGNet to classify images of flowers. To get the flower dataset, run the cell below. This dataset comes from the [TensorFlow inception tutorial](https://www.tensorflow.org/tutorials/image_retraining). ``` import tarfile dataset_folder_path = 'flower_photos' class DLProgress(tqdm): last_block = 0 def hook(self, block_num=1, block_size=1, total_size=None): self.total = total_size self.update((block_num - self.last_block) * block_size) self.last_block = block_num if not isfile('flower_photos.tar.gz'): with DLProgress(unit='B', unit_scale=True, miniters=1, desc='Flowers Dataset') as pbar: urlretrieve( 'http://download.tensorflow.org/example_images/flower_photos.tgz', 'flower_photos.tar.gz', pbar.hook) if not isdir(dataset_folder_path): with tarfile.open('flower_photos.tar.gz') as tar: tar.extractall() tar.close() ``` ## ConvNet Codes Below, we'll run through all the images in our dataset and get codes for each of them. That is, we'll run the images through the VGGNet convolutional layers and record the values of the first fully connected layer. We can then write these to a file for later when we build our own classifier. Here we're using the `vgg16` module from `tensorflow_vgg`. 
The network takes images of size $224 \times 224 \times 3$ as input. Then it has 5 sets of convolutional layers. The network implemented here has this structure (copied from [the source code](https://github.com/machrisaa/tensorflow-vgg/blob/master/vgg16.py)): ``` self.conv1_1 = self.conv_layer(bgr, "conv1_1") self.conv1_2 = self.conv_layer(self.conv1_1, "conv1_2") self.pool1 = self.max_pool(self.conv1_2, 'pool1') self.conv2_1 = self.conv_layer(self.pool1, "conv2_1") self.conv2_2 = self.conv_layer(self.conv2_1, "conv2_2") self.pool2 = self.max_pool(self.conv2_2, 'pool2') self.conv3_1 = self.conv_layer(self.pool2, "conv3_1") self.conv3_2 = self.conv_layer(self.conv3_1, "conv3_2") self.conv3_3 = self.conv_layer(self.conv3_2, "conv3_3") self.pool3 = self.max_pool(self.conv3_3, 'pool3') self.conv4_1 = self.conv_layer(self.pool3, "conv4_1") self.conv4_2 = self.conv_layer(self.conv4_1, "conv4_2") self.conv4_3 = self.conv_layer(self.conv4_2, "conv4_3") self.pool4 = self.max_pool(self.conv4_3, 'pool4') self.conv5_1 = self.conv_layer(self.pool4, "conv5_1") self.conv5_2 = self.conv_layer(self.conv5_1, "conv5_2") self.conv5_3 = self.conv_layer(self.conv5_2, "conv5_3") self.pool5 = self.max_pool(self.conv5_3, 'pool5') self.fc6 = self.fc_layer(self.pool5, "fc6") self.relu6 = tf.nn.relu(self.fc6) ``` So what we want are the values of the first fully connected layer, after being ReLUd (`self.relu6`). To build the network, we use ``` with tf.Session() as sess: vgg = vgg16.Vgg16() input_ = tf.placeholder(tf.float32, [None, 224, 224, 3]) with tf.name_scope("content_vgg"): vgg.build(input_) ``` This creates the `vgg` object, then builds the graph with `vgg.build(input_)`. 
Then to get the values from the layer, ``` feed_dict = {input_: images} codes = sess.run(vgg.relu6, feed_dict=feed_dict) ``` ``` import os import numpy as np import tensorflow as tf from tensorflow_vgg import vgg16 from tensorflow_vgg import utils data_dir = 'flower_photos/' contents = os.listdir(data_dir) classes = [each for each in contents if os.path.isdir(data_dir + each)] ``` Below I'm running images through the VGG network in batches. > **Exercise:** Below, build the VGG network. Also get the codes from the first fully connected layer (make sure you get the ReLUd values). ``` # Set the batch size higher if you can fit it in your GPU memory batch_size = 10 codes_list = [] labels = [] batch = [] codes = None with tf.Session() as sess: # TODO: Build the vgg network here vgg = vgg16.Vgg16() input_ = tf.placeholder(tf.float32, [None, 224, 224, 3]) with tf.name_scope("content_vgg"): vgg.build(input_) for each in classes: print("Starting {} images".format(each)) class_path = data_dir + each files = os.listdir(class_path) for ii, file in enumerate(files, 1): # Add images to the current batch # utils.load_image crops the input images for us, from the center img = utils.load_image(os.path.join(class_path, file)) batch.append(img.reshape((1, 224, 224, 3))) labels.append(each) # Running the batch through the network to get the codes if ii % batch_size == 0 or ii == len(files): # Image batch to pass to VGG network images = np.concatenate(batch) # TODO: Get the values from the relu6 layer of the VGG network feed_dict = {input_: images} codes_batch = sess.run(vgg.relu6, feed_dict=feed_dict) # Here I'm building an array of the codes if codes is None: codes = codes_batch else: codes = np.concatenate((codes, codes_batch)) # Reset to start building the next batch batch = [] print('{} images processed'.format(ii)) # write codes to file with open('codes', 'w') as f: codes.tofile(f) # write labels to file import csv with open('labels', 'w') as f: writer = csv.writer(f, 
delimiter='\n') writer.writerow(labels) ``` ## Building the Classifier Now that we have codes for all the images, we can build a simple classifier on top of them. The codes behave just like normal input into a simple neural network. Below I'm going to have you do most of the work. ``` # read codes and labels from file import csv with open('labels') as f: reader = csv.reader(f, delimiter='\n') labels = np.array([each for each in reader if len(each) > 0]).squeeze() with open('codes') as f: codes = np.fromfile(f, dtype=np.float32) codes = codes.reshape((len(labels), -1)) ``` ### Data prep As usual, now we need to one-hot encode our labels and create validation/test sets. First up, creating our labels! > **Exercise:** From scikit-learn, use [LabelBinarizer](http://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.LabelBinarizer.html) to create one-hot encoded vectors from the labels. ``` labels[0] codes.shape from sklearn import preprocessing unique_labels = list(set(labels)) lb = preprocessing.LabelBinarizer() labels_vecs = lb.fit(unique_labels).transform(labels) ``` Now you'll want to create your training, validation, and test sets. An important thing to note here is that our labels and data aren't randomized yet. We'll want to shuffle our data so the validation and test sets contain data from all classes. Otherwise, you could end up with testing sets that are all one class. Typically, you'll also want to make sure that each smaller set has the same distribution of classes as the whole data set. The easiest way to accomplish both these goals is to use [`StratifiedShuffleSplit`](http://scikit-learn.org/stable/modules/generated/sklearn.model_selection.StratifiedShuffleSplit.html) from scikit-learn. You can create the splitter like so: ``` ss = StratifiedShuffleSplit(n_splits=1, test_size=0.2) ``` Then split the data with ``` splitter = ss.split(x, y) ``` `ss.split` returns a generator of indices. 
You can pass the indices into the arrays to get the split sets. The fact that it's a generator means you either need to iterate over it, or use `next(splitter)` to get the indices. Be sure to read the [documentation](http://scikit-learn.org/stable/modules/generated/sklearn.model_selection.StratifiedShuffleSplit.html) and the [user guide](http://scikit-learn.org/stable/modules/cross_validation.html#random-permutations-cross-validation-a-k-a-shuffle-split). > **Exercise:** Use StratifiedShuffleSplit to split the codes and labels into training, validation, and test sets. ``` from sklearn.model_selection import StratifiedShuffleSplit ss = StratifiedShuffleSplit(n_splits=1, test_size=0.2) train_idx, val_idx = next(ss.split(codes, labels_vecs)) half_val_len = int(len(val_idx)/2) val_idx, test_idx = val_idx[:half_val_len], val_idx[half_val_len:] train_x, train_y = codes[train_idx], labels_vecs[train_idx] val_x, val_y = codes[val_idx], labels_vecs[val_idx] test_x, test_y = codes[test_idx], labels_vecs[test_idx] print("Train shapes (x, y):", train_x.shape, train_y.shape) print("Validation shapes (x, y):", val_x.shape, val_y.shape) print("Test shapes (x, y):", test_x.shape, test_y.shape) ``` If you did it right, you should see these sizes for the training sets: ``` Train shapes (x, y): (2936, 4096) (2936, 5) Validation shapes (x, y): (367, 4096) (367, 5) Test shapes (x, y): (367, 4096) (367, 5) ``` ### Classifier layers Once you have the convolutional codes, you just need to build a classifier from some fully connected layers. You use the codes as the inputs and the image labels as targets. Otherwise the classifier is a typical neural network. > **Exercise:** With the codes and labels loaded, build the classifier. Consider the codes as your inputs; each of them is a 4096-dimensional vector. You'll want to use a hidden layer and an output layer as your classifier. Remember that the output layer needs to have one unit for each class and a softmax activation function. 
Use the cross entropy to calculate the cost. ``` inputs_ = tf.placeholder(tf.float32, shape=[None, codes.shape[1]]) labels_ = tf.placeholder(tf.float32, shape=[None, labels_vecs.shape[1]]) # TODO: Classifier layers and operations dense1 = tf.layers.dense(inputs_, 256, activation=tf.nn.relu) dropout1 = tf.layers.dropout(dense1, 0.2) dense2 = tf.layers.dense(dropout1, 64, activation=tf.nn.relu) dropout2 = tf.layers.dropout(dense2, 0.2) logits = tf.layers.dense(dropout2, len(unique_labels), activation=None) cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(labels=labels_, logits=logits)) optimizer = tf.train.AdamOptimizer(0.005).minimize(cost) # Operations for validation/test accuracy predicted = tf.nn.softmax(logits) correct_pred = tf.equal(tf.argmax(predicted, 1), tf.argmax(labels_, 1)) accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32)) ``` ### Batches! Here is just a simple way to do batches. I've written it so that it includes all the data. Sometimes you'll throw out some data at the end to make sure you have full batches. Here I just extend the last batch to include the remaining data. ``` def get_batches(x, y, n_batches=10): """ Return a generator that yields batches from arrays x and y. """ batch_size = len(x)//n_batches for ii in range(0, n_batches*batch_size, batch_size): # If we're not on the last batch, grab data with size batch_size if ii != (n_batches-1)*batch_size: X, Y = x[ii: ii+batch_size], y[ii: ii+batch_size] # On the last batch, grab the rest of the data else: X, Y = x[ii:], y[ii:] # I love generators yield X, Y ``` ### Training Here, we'll train the network. > **Exercise:** So far we've been providing the training code for you. Here, I'm going to give you a bit more of a challenge and have you write the code to train the network. Of course, you'll be able to see my solution if you need help. Use the `get_batches` function I wrote before to get your batches like `for x, y in get_batches(train_x, train_y)`. Or write your own! 
``` epochs = 10 iteration = 0 saver = tf.train.Saver() with tf.Session() as sess: sess.run(tf.global_variables_initializer()) for e in range(epochs): for x, y in get_batches(train_x, train_y): feed = {inputs_: x, labels_: y} loss, _ = sess.run([cost, optimizer], feed_dict=feed) print("Epoch: {}/{}".format(e+1, epochs), "Iteration: {}".format(iteration), "Training loss: {:.5f}".format(loss)) iteration += 1 if iteration % 5 == 0: feed = {inputs_: val_x, labels_: val_y} val_acc = sess.run(accuracy, feed_dict=feed) print("Epoch: {}/{}".format(e+1, epochs), "Iteration: {}".format(iteration), "Validation Acc: {:.4f}".format(val_acc)) saver.save(sess, "checkpoints/flowers.ckpt") ``` ### Testing Below you see the test accuracy. You can also see the predictions returned for images. ``` with tf.Session() as sess: saver.restore(sess, tf.train.latest_checkpoint('checkpoints')) feed = {inputs_: test_x, labels_: test_y} test_acc = sess.run(accuracy, feed_dict=feed) print("Test accuracy: {:.4f}".format(test_acc)) %matplotlib inline import matplotlib.pyplot as plt from scipy.ndimage import imread ``` Below, feel free to choose images and see how the trained classifier predicts the flowers in them. ``` test_img_path = 'flower_photos/roses/10894627425_ec76bbc757_n.jpg' test_img = imread(test_img_path) plt.imshow(test_img) # Run this cell if you don't have a vgg graph built if 'vgg' in globals(): print('"vgg" object already exists. 
Will not create again.') else: #create vgg with tf.Session() as sess: input_ = tf.placeholder(tf.float32, [None, 224, 224, 3]) vgg = vgg16.Vgg16() vgg.build(input_) with tf.Session() as sess: img = utils.load_image(test_img_path) img = img.reshape((1, 224, 224, 3)) feed_dict = {input_: img} code = sess.run(vgg.relu6, feed_dict=feed_dict) saver = tf.train.Saver() with tf.Session() as sess: saver.restore(sess, tf.train.latest_checkpoint('checkpoints')) feed = {inputs_: code} prediction = sess.run(predicted, feed_dict=feed).squeeze() plt.imshow(test_img) plt.barh(np.arange(5), prediction) _ = plt.yticks(np.arange(5), lb.classes_) ```
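The accuracy computation used throughout this lab (softmax over logits, argmax over classes, mean of matches) can be mimicked without TensorFlow. A small sketch with hand-picked logits (the numbers are mine, purely illustrative):

```python
import math

def softmax(logits):
    """Convert one row of logits into probabilities."""
    exps = [math.exp(v) for v in logits]
    total = sum(exps)
    return [v / total for v in exps]

def accuracy(logit_rows, one_hot_labels):
    """Fraction of rows where the argmax prediction matches the label."""
    correct = 0
    for logits, label in zip(logit_rows, one_hot_labels):
        probs = softmax(logits)
        pred = probs.index(max(probs))    # tf.argmax(predicted, 1)
        truth = label.index(max(label))   # tf.argmax(labels_, 1)
        correct += int(pred == truth)     # tf.equal + tf.cast
    return correct / len(logit_rows)      # tf.reduce_mean

logits = [[2.0, 0.1, 0.3], [0.2, 1.5, 0.1], [0.3, 0.2, 2.2]]
labels = [[1, 0, 0], [0, 0, 1], [0, 0, 1]]
print(accuracy(logits, labels))  # 2 of 3 rows correct
```

Since softmax is monotonic, taking the argmax of the raw logits would give the same predictions; the softmax is only needed when you want calibrated probabilities, as in the bar chart above.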
``` %matplotlib inline import sys import os import json from glob import glob from collections import defaultdict, OrderedDict import dinopy import yaml import matplotlib.pyplot as plt from matplotlib.ticker import MultipleLocator import seaborn import numpy import pandas as pd import networkx from scipy.special import binom from scipy import stats from IPython.display import Image, display from phasm.io import gfa from phasm.alignments import AlignmentType from phasm.assembly_graph import AssemblyGraph from phasm.bubbles import find_superbubbles BASE_DIR = os.path.realpath(os.path.join(os.getcwd(), '..')) with open(os.path.join(BASE_DIR, "config.yml")) as f: config = yaml.load(f) seaborn.set_style('whitegrid') spanning_read_stats = [] candidate_prob_stats = [] bubble_map = defaultdict(dict) for assembly, asm_config in config['assemblies'].items(): parts = assembly.split('-') ploidy = int(parts[0].replace("ploidy", "")) coverage = int(parts[1].replace("x", "")) asm_folder = os.path.join(BASE_DIR, "assemblies", assembly) for debugdata in glob("{}/04_phase/component[0-9].bubblechain[0-9]-debugdata.json".format(asm_folder)): print(debugdata) graphml = debugdata.replace("04_phase", "03_chain").replace("-debugdata.json", ".graphml") g = AssemblyGraph(networkx.read_graphml(graphml)) curr_bubble = None bubble_num = 0 num_candidates = -1 with open(debugdata) as f: for line in f: data = json.loads(line) if data['type'] == "new_bubble": curr_bubble = data bubble_map[ploidy, coverage][(data['entrance'], data['exit'])] = data if data['start_of_block'] == True: bubble_num = 1 else: dist_between_bubbles = ( min(e[2] for e in g.out_edges_iter(data['entrance'], data=g.edge_len)) ) spanning_read_stats.append({ 'dist': dist_between_bubbles, 'spanning_reads': len(data['rel_read_info']), 'ploidy': ploidy }) bubble_num += 1 if data['type'] == "candidate_set": p_sr = data['p_sr'] prior = data['prior'] prob = 10**(p_sr + prior) entrance = curr_bubble['entrance'] exit = 
curr_bubble['exit'] candidate_prob_stats.append({ 'bubble': (entrance, exit), 'bubble_num': bubble_num, 'candidate_prob': prob, 'ploidy': ploidy, 'coverage': coverage }) srdf = pd.DataFrame(spanning_read_stats) srdf['spanning_reads_norm'] = srdf['spanning_reads'] / srdf['ploidy'] g = seaborn.JointGrid(x="dist", y="spanning_reads_norm", data=srdf, size=7) x_bin_size = 2500 g.ax_marg_x.hist(srdf['dist'], alpha=0.6, bins=numpy.arange(0, srdf['dist'].max()+x_bin_size, x_bin_size)) y_bin_size = 10 g.ax_marg_y.hist(srdf['spanning_reads_norm'], alpha=0.6, orientation="horizontal", bins=numpy.arange(0, srdf['spanning_reads_norm'].max()+y_bin_size, y_bin_size)) g.plot_joint(seaborn.regplot) g.annotate(stats.pearsonr) seaborn.plt.suptitle("Number of spanning reads against the distance between two bubbles,\n normalised for ploidy") plt.ylim(ymin=0) plt.xlabel("Distance between two bubbles [bases]") plt.ylabel("Number of spanning reads") plt.subplots_adjust(top=0.9) plt.savefig(os.path.join(BASE_DIR, 'figures', 'spanning-reads.png'), transparent=True, dpi=256) candidate_df = pd.DataFrame(candidate_prob_stats) candidate_df.set_index('bubble') plt.figure() seaborn.distplot(candidate_df['candidate_prob'], kde=False, hist_kws={"alpha": 0.8}) plt.title("Distribution of candidate extension relative likelihoods") plt.xlabel("Relative likelihood of an extension") plt.ylabel("Count") # plt.xlim(xmax=1.0) plt.axvline(1e-3, linestyle='--', color='black') plt.savefig(os.path.join(BASE_DIR, 'figures', 'rel-likelihood-abs.png'), transparent=True, dpi=256) grouped = candidate_df.groupby(['bubble', 'ploidy'])['candidate_prob'] max_probs = grouped.max() for bubble, ploidy in grouped.groups.keys(): candidate_df.loc[grouped.groups[bubble, ploidy], 'max_prob'] = max_probs[bubble, ploidy] candidate_df['relative_prob'] = candidate_df['candidate_prob'] / candidate_df['max_prob'] candidate_df plt.figure() seaborn.distplot(candidate_df[candidate_df['relative_prob'] < 1.0]['relative_prob'], kde=False, 
hist_kws={"alpha": 0.8}) plt.title("Distribution of relative probabilities for each candidate extension\n" "at each superbubble") plt.xlabel(r"$RL[E|H]\ /\ \omega$") plt.ylabel("Count") plt.savefig(os.path.join(BASE_DIR, "figures", "rl-relative-dist.png"), transparent=True, dpi=256) c1, c2, c3, c4, c5 = seaborn.color_palette(n_colors=5) pruning_stats = [] for assembly, asm_config in config['assemblies'].items(): parts = assembly.split('-') ploidy = int(parts[0].replace("ploidy", "")) coverage = int(parts[1].replace("x", "")) if coverage != 60: continue asm_folder = os.path.join(BASE_DIR, "assemblies", assembly) for chain_num, graphml in enumerate(glob("{}/03_chain/component[0-9].bubblechain[0-9].graphml".format(asm_folder))): print(graphml) # Calculate effect of pruning g = AssemblyGraph(networkx.read_graphml(graphml)) bubbles = OrderedDict(find_superbubbles(g, report_nested=False)) bubble_num = 0 for i, bubble in enumerate(reversed(bubbles.items())): entrance, exit = bubble num_paths = len(list(networkx.all_simple_paths(g, entrance, exit))) if not bubble in bubble_map[ploidy, coverage]: continue bubble_data = bubble_map[ploidy, coverage][bubble] if bubble_data['start_of_block']: bubble_num = 1 else: bubble_num += 1 kappa = 0.0 pruned = 0 num_candidates_left = sys.maxsize while num_candidates_left > 500 and kappa < 1.0: kappa += 0.1 num_candidates_left = len( candidate_df.query('(bubble == @bubble) and (ploidy == @ploidy) and (relative_prob >= @kappa)') ) pruned = len( candidate_df.query('(bubble == @bubble) and (ploidy == @ploidy) and (relative_prob < @kappa)') ) pruning_stats.append({ 'ploidy': ploidy, 'coverage': coverage, 'bubble_num': bubble_num, 'pruned': pruned, 'kappa': kappa }) pruning_df = pd.DataFrame(pruning_stats) agg_df = pd.DataFrame(pruning_df.groupby(['bubble_num', 'kappa']).size().rename('counts')) agg_df.reset_index(level=agg_df.index.names, inplace=True) agg_df = agg_df.query('kappa <= 1.0') sum_df = 
pd.DataFrame(agg_df.groupby('bubble_num')['counts'].sum()).reset_index() sum_df for i in sum_df['bubble_num'].unique(): agg_df.loc[agg_df['bubble_num'] == i, 'total'] = int(sum_df['counts'].loc[sum_df['bubble_num'] == i].values[0]) agg_df['fraction'] = agg_df['counts'] / agg_df['total'] agg_df plt.figure() g = seaborn.factorplot(x="kappa", y="fraction", col="bubble_num", kind="bar", col_wrap=3, sharex=False, color=c1, data=agg_df.query('(bubble_num < 7) and (kappa <= 1.0)')) seaborn.plt.suptitle('The maximum pruning factor $\kappa$ at different stages of the phasing process') plt.subplots_adjust(top=0.9, hspace=0.3) for i, ax in enumerate(g.axes): ax.set_xlabel("$\kappa$") if i % 3 == 0: ax.set_ylabel("Fraction") ax.set_title("Superbubble {}".format(i+1)) plt.savefig(os.path.join(BASE_DIR, 'figures', 'pruning.png'), transparent=True, dpi=256) ```
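The pruning loop buried in the cell above — raise κ in steps of 0.1 until at most a fixed number of candidates has `relative_prob >= kappa`, or κ reaches 1.0 — can be isolated as a small sketch (toy probabilities and a cap of 3 instead of 500, both mine for illustration):

```python
def prune(relative_probs, cap=3):
    """Raise kappa by 0.1 until at most `cap` candidates have
    relative_prob >= kappa, or kappa reaches 1.0."""
    kappa = 0.0
    left = len(relative_probs)
    while left > cap and kappa < 1.0:
        kappa += 0.1
        left = sum(1 for p in relative_probs if p >= kappa)
    pruned = sum(1 for p in relative_probs if p < kappa)
    return kappa, left, pruned

probs = [1.0, 0.9, 0.45, 0.25, 0.05]
kappa, left, pruned = prune(probs)
print(kappa, left, pruned)  # stops near kappa = 0.3 with 3 kept, 2 pruned
```

This mirrors the logic of the notebook's `while num_candidates_left > 500 and kappa < 1.0` loop, just with the two `candidate_df.query(...)` counts replaced by list comprehensions.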
<a href="https://colab.research.google.com/github/wisrovi/03MAIR---Algoritmos-de-Optimizacion---2019/blob/master/Seminario/WilliamSteveRodriguezVillamizar_Seminario-2.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> # Optimization Algorithms - Seminar<br> Name and surname: <br> Github: https://github.com/wisrovi/03MAIR---Algoritmos-de-Optimizacion---2019/blob/master/Seminario/WilliamSteveRodriguezVillamizar_Seminario-2.ipynb<br> Problem: Colab: https://colab.research.google.com/github/wisrovi/03MAIR---Algoritmos-de-Optimizacion---2019/blob/master/Seminario/WilliamSteveRodriguezVillamizar_Seminario-2.ipynb > 1. Combining digits and operations <br> Problem description: Selection of homogeneous population groups • The problem consists of analyzing the following statement and designing an algorithm that solves it. • We have the 9 digits from 1 to 9 (zero is excluded) and the 4 signs of the basic arithmetic operations: addition (+), subtraction (-), multiplication (*) and division (/). • We must combine them alternately, without repeating any of them, to obtain a given quantity. .... (*) The answer is mandatory # Solving the problem using recursion and a divide-and-conquer algorithm (*) How many possibilities are there without taking the restrictions into account?<br> How many possibilities are there taking all the restrictions into account? Answer Because the algorithm often generates repeated solutions, not every combination of numbers and signs is feasible, and each input value has its own number of solutions (it is easier to generate solutions for integer targets than for floating-point ones), it was not possible to determine an equation that yields the absolute number of solutions for every input value.
For this reason, an algorithm was built to search for all possible answers for a given number, in order to determine how many solutions can be generated for the requested target (shown later in the code). While evaluating the algorithm, up to 700 possible solutions were generated for the integer target 10 (the value for which the most solutions were found). Model for the solution space<br> (*) Which data structure fits the problem best? Justify it. (You may have chosen one at first and later seen the need to change; justify that too.) Answer Since this problem has no initial dataset, only an input value from which a solution is generated, we can only say that there are better solutions for integer inputs below 100 than for float inputs of any value. This is evidenced by the fact that, when trying to find the largest number of solutions for a value, at least 250 different solutions were found for int inputs, while only 35 were found for float inputs. However, on a machine with more GPU power than mine, the number of solutions should rise from 35 to 100 and from 250 to 1000 for float and int respectively. According to the model for the solution space<br> (*) What is the objective function? (*) Is it a maximization or a minimization problem?
Answer: Minimization ## Basic code implementations for brute force and the final algorithm ``` import random from sympy.parsing.sympy_parser import parse_expr import numpy as np Diccionario_signos = { "suma" : "+", "resta" : "-", "multiplicacion" : "*", "division" : "/" } Lista_Numeros_Posibles = (1, 2, 3, 4, 5, 6, 7, 8, 9 ) def BuscarExiste(lista, item): buscandoEncontrando = False for i in lista: if i == item: buscandoEncontrando = True break return buscandoEncontrando def getNumeroAleatorio(listaNumeros = []): """ we use recursion to build this data set (note: the mutable default argument is shared between calls, so callers pass a fresh []) """ if len(listaNumeros) < 5: numer = random.choice(Lista_Numeros_Posibles) if not BuscarExiste(listaNumeros, numer): listaNumeros.append(numer) getNumeroAleatorio(listaNumeros) return tuple(listaNumeros) def getSignosAleatorios(listaSignos = []): """ we use recursion to build this data set """ if len(listaSignos) < 4: sign = random.choice( list(Diccionario_signos.keys()) ) if not BuscarExiste(listaSignos, sign): listaSignos.append(sign) getSignosAleatorios(listaSignos) return tuple(listaSignos) def HallarResultado(tuplaNumeros, tuplaSignos): expresion = """%s %s %s %s %s %s %s %s %s""" %( str(tuplaNumeros[0]), Diccionario_signos[tuplaSignos[0]], str(tuplaNumeros[1]), Diccionario_signos[tuplaSignos[1]], str(tuplaNumeros[2]), Diccionario_signos[tuplaSignos[2]], str(tuplaNumeros[3]), Diccionario_signos[tuplaSignos[3]], str(tuplaNumeros[4]) ) solucion = float(parse_expr(expresion)) solucion = "{0:.2f}".format(solucion) solucion = float(solucion) return (expresion, solucion) def HallarValorDeseado(valorDeseado, stepsMax=10000): for i in range(stepsMax): (expresion, solucion) = HallarResultado(getNumeroAleatorio([]), getSignosAleatorios([])) if valorDeseado == solucion: return (expresion, "=", valorDeseado) def BuscarOperacionesPorListado(valoresDeseados): respuestas = [] numerosNoEncontrados = [] for valorDeseado in valoresDeseados: rta = HallarValorDeseado(valorDeseado) if rta is None: numerosNoEncontrados.append(valorDeseado) #print("None = ", valorDeseado ) else: respuestas.append(rta) #print(rta) return respuestas, numerosNoEncontrados ``` ## Brute-force algorithm Design an algorithm that solves the problem by brute force Answer ``` # Searching for the maximum number of possible solutions valor = 5.25 cantidadSolucionesBuscar = 500 # Tested number of distinct solutions found: 500 for int and 150 for float valoresDeseados = [] for i in range(cantidadSolucionesBuscar): valoresDeseados.append(valor) print("Total numeros buscar respuesta: ", len(valoresDeseados)) respuestas, numerosNoEncontrados = BuscarOperacionesPorListado(valoresDeseados) print("Numeros no encontrados: ") for rta in numerosNoEncontrados: print(rta) pass respuestas = sorted(set(respuestas)) print("Numeros con respuesta encontrada: ", len(respuestas) ) for rta in respuestas: #print(rta) pass print("solución a hallar: ", valor) print() print("solución mejor (usando los numero más pequeños posibles): " ) print(respuestas[0][0], respuestas[0][1], respuestas[0][2]) ``` Compute the complexity of the brute-force algorithm Answer O(n log n) ## Final algorithm (improvement on brute force) ok - (*) Design an algorithm that improves on the complexity of the brute-force algorithm.
Argue why you believe it improves on the brute-force algorithm Answer ``` valoresDeseados = [-3.5] print("Total numeros buscar respuesta: ", len(valoresDeseados)) respuestas, numerosNoEncontrados = BuscarOperacionesPorListado(valoresDeseados) print("solución a hallar: ", valoresDeseados[0]) print() print("solución mejor (usando los numero más pequeños posibles): " ) print(respuestas[0][0], respuestas[0][1], respuestas[0][2]) valoresDeseados = [-3.5, 2, 9, -9.5, 4, 1.3, 1,2,3,4,5,6,7,8,9] print("Total numeros buscar respuesta: ", len(valoresDeseados)) respuestas, numerosNoEncontrados = BuscarOperacionesPorListado(valoresDeseados) print("Numeros no encontrados: ") for rta in numerosNoEncontrados: print(rta) pass print("Numeros con respuesta encontrada: ") for rta in respuestas: print(rta) pass valoresDeseados = range (-10, 11, 1) print("Total numeros buscar respuesta: ", len(valoresDeseados)) respuestas, numerosNoEncontrados = BuscarOperacionesPorListado(valoresDeseados) print("Numeros no encontrados: ") for rta in numerosNoEncontrados: print(rta) pass respuestas = sorted(set(respuestas)) print("Numeros con respuesta encontrada: ") for rta in respuestas: print(rta) pass valoresDeseados = [] valores = np.arange(-10, 11, 0.05) for valorDeseado in valores: valorDeseado = float("{0:.2f}".format(valorDeseado)) valoresDeseados.append(valorDeseado) print("Total numeros buscar respuesta: ", len(valoresDeseados)) respuestas, numerosNoEncontrados = BuscarOperacionesPorListado(valoresDeseados) print("Con rta: ",len(respuestas)) print("Sin rta: ",len(numerosNoEncontrados)) print("Numeros no encontrados: ") for rta in numerosNoEncontrados: print(rta) pass print("Numeros con respuesta encontrada: ") for rta in respuestas: print(rta) pass ``` This algorithm improves on the brute-force one because, instead of generating many solutions for the same input and then choosing one, it picks the first solution that satisfies the request, which yields faster answers. (*) Compute the complexity of the algorithm Answer O(n^2) According to the problem (as long as it makes sense), design a set of random input data ``` sizeDataseet = 20 valoresDeseados = [] for valorDeseado in range(sizeDataseet): valorAzar = random.random()*10 valorAzar = float("{0:.2f}".format(valorAzar)) valoresDeseados.append(valorAzar) print(valoresDeseados) ``` Apply the algorithm to the generated data set ``` print("Total numeros buscar respuesta: ", len(valoresDeseados)) respuestas, numerosNoEncontrados = BuscarOperacionesPorListado(valoresDeseados) print("Con rta: ",len(respuestas)) print("Sin rta: ",len(numerosNoEncontrados)) print("Numeros no encontrados: ") for rta in numerosNoEncontrados: print(rta) pass print("Numeros con respuesta encontrada: ") for rta in respuestas: print(rta) pass ``` ## Conclusions Recursion and divide and conquer were used to solve this problem, but heuristic algorithms could surely reach the same results in less time. The problem's restrictions make it hard for the algorithm to generate solutions for float values that are not multiples of 0.25. The algorithm built here uses exact methods; a future exercise would be to solve the problem with heuristic algorithms, or even to generate the sign set and number set (together they form the data set) with genetic algorithms and compare the solutions.
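The unconstrained count asked about at the start of this notebook does have a closed form: there are 9·8·7·6·5 ordered choices of five distinct digits, times 4! orderings of the four operators, for 362,880 candidate expressions. The sketch below (function names are mine, not from the notebook) confirms the count by enumeration and checks one exact solution for the target 10 using rational arithmetic instead of sympy:

```python
from fractions import Fraction
from itertools import permutations

OPS = "+-*/"

def evaluate(nums, ops):
    """Evaluate nums[0] ops[0] nums[1] ... nums[4] with the usual
    precedence (* and / before + and -), in exact rational arithmetic."""
    vals, pending = [Fraction(nums[0])], []
    for n, op in zip(nums[1:], ops):
        if op == "*":
            vals[-1] *= n
        elif op == "/":
            vals[-1] /= n  # digits are 1..9, so no division by zero
        else:
            pending.append(op)
            vals.append(Fraction(n))
    result = vals[0]
    for op, v in zip(pending, vals[1:]):
        result = result + v if op == "+" else result - v
    return result

# Search space: ordered choices of 5 distinct digits times orderings
# of the 4 operators (each operator used exactly once).
digit_orders = len(list(permutations(range(1, 10), 5)))  # 9*8*7*6*5
op_orders = len(list(permutations(OPS)))                 # 4!
total = digit_orders * op_orders
print(total)  # 15120 * 24 = 362880

# One concrete solution for the target 10: 6*3/2+9-8
print(evaluate((6, 3, 2, 9, 8), ("*", "/", "+", "-")))  # 10
```

A full scan of all 362,880 expressions with `evaluate` is perfectly feasible, which is why exhaustive enumeration would also answer the "how many solutions for a given target" question exactly, without the duplicates the random search produces.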
# LAB 4c: Create Keras Wide and Deep model. **Learning Objectives** 1. Set CSV Columns, label column, and column defaults 1. Make dataset of features and label from CSV files 1. Create input layers for raw features 1. Create feature columns for inputs 1. Create wide layer, deep dense hidden layers, and output layer 1. Create custom evaluation metric 1. Build wide and deep model tying all of the pieces together 1. Train and evaluate ## Introduction In this notebook, we'll be using Keras to create a wide and deep model to predict the weight of a baby before it is born. We'll start by defining the CSV column names, label column, and column defaults for our data inputs. Then, we'll construct a tf.data Dataset of features and the label from the CSV files and create input layers for the raw features. Next, we'll set up feature columns for the model inputs and build a wide and deep neural network in Keras. We'll create a custom evaluation metric and build our wide and deep model. Finally, we'll train and evaluate our model. Each learning objective will correspond to a __#TODO__ in this student lab notebook -- try to complete this notebook first and then review the [solution notebook](../solutions/4c_keras_wide_and_deep_babyweight.ipynb). ## Load necessary libraries ``` import datetime import os import shutil import matplotlib.pyplot as plt import numpy as np import tensorflow as tf print(tf.__version__) ``` ## Verify CSV files exist In the seventh lab of this series [4a_sample_babyweight](../solutions/4a_sample_babyweight.ipynb), we sampled from BigQuery our train, eval, and test CSV files. Verify that they exist; otherwise, go back to that lab and create them. ``` %%bash ls *.csv %%bash head -5 *.csv ``` ## Create Keras model ### Lab Task #1: Set CSV Columns, label column, and column defaults. Now that we have verified that our CSV files exist, we need to set a few things that we will be using in our input function.
* `CSV_COLUMNS` are the header names of our columns. Make sure that they are in the same order as in the CSV files * `LABEL_COLUMN` is the header name of the column that is our label. We will need to know this to pop it from our features dictionary. * `DEFAULTS` is a list with the same length as `CSV_COLUMNS`, i.e. there is a default for each column in our CSVs. Each element is a list itself with the default value for that CSV column. ``` # Determine CSV, label, and key columns # TODO: Create list of string column headers, make sure order matches. CSV_COLUMNS = [""] # TODO: Add string name for label column LABEL_COLUMN = "" # Set default values for each CSV column as a list of lists. # Treat is_male and plurality as strings. DEFAULTS = [] ``` ### Lab Task #2: Make dataset of features and label from CSV files. Next, we will write an input_fn to read the data. Since we are reading from CSV files, we can save ourselves from reinventing the wheel and use `tf.data.experimental.make_csv_dataset`. This will create a CSV dataset object. However, we will need to divide the columns up into features and a label. We can do this by applying the map method to our dataset and popping our label column off of our dictionary of feature tensors. ``` def features_and_labels(row_data): """Splits features and labels from feature dictionary. Args: row_data: Dictionary of CSV column names and tensor values. Returns: Dictionary of feature tensors and label tensor. """ label = row_data.pop(LABEL_COLUMN) return row_data, label # features, label def load_dataset(pattern, batch_size=1, mode=tf.estimator.ModeKeys.EVAL): """Loads dataset using the tf.data API from CSV files. Args: pattern: str, file pattern to glob into list of files. batch_size: int, the number of examples per batch. mode: tf.estimator.ModeKeys to determine if training or evaluating. Returns: `Dataset` object.
""" # TODO: Make a CSV dataset dataset = tf.data.experimental.make_csv_dataset() # TODO: Map dataset to features and label dataset = dataset.map() # features, label # Shuffle and repeat for training if mode == tf.estimator.ModeKeys.TRAIN: dataset = dataset.shuffle(buffer_size=1000).repeat() # Take advantage of multi-threading; 1=AUTOTUNE dataset = dataset.prefetch(buffer_size=1) return dataset ``` ### Lab Task #3: Create input layers for raw features. We'll need to get the data read in by our input function to our model function, but just how do we go about connecting the dots? We can use Keras input layers [(tf.Keras.layers.Input)](https://www.tensorflow.org/api_docs/python/tf/keras/Input) by defining: * shape: A shape tuple (integers), not including the batch size. For instance, shape=(32,) indicates that the expected input will be batches of 32-dimensional vectors. Elements of this tuple can be None; 'None' elements represent dimensions where the shape is not known. * name: An optional name string for the layer. Should be unique in a model (do not reuse the same name twice). It will be autogenerated if it isn't provided. * dtype: The data type expected by the input, as a string (float32, float64, int32...) ``` def create_input_layers(): """Creates dictionary of input layers for each feature. Returns: Dictionary of `tf.Keras.layers.Input` layers for each feature. """ # TODO: Create dictionary of tf.keras.layers.Input for each dense feature deep_inputs = {} # TODO: Create dictionary of tf.keras.layers.Input for each sparse feature wide_inputs = {} inputs = {**wide_inputs, **deep_inputs} return inputs ``` ### Lab Task #4: Create feature columns for inputs. Next, define the feature columns. `mother_age` and `gestation_weeks` should be numeric. The others, `is_male` and `plurality`, should be categorical. Remember, only dense feature columns can be inputs to a DNN. 
``` def create_feature_columns(nembeds): """Creates wide and deep dictionaries of feature columns from inputs. Args: nembeds: int, number of dimensions to embed categorical column down to. Returns: Wide and deep dictionaries of feature columns. """ # TODO: Create deep feature columns for numeric features deep_fc = {} # TODO: Create wide feature columns for categorical features wide_fc = {} # TODO: Bucketize the float fields. This makes them wide # TODO: Cross all the wide cols, have to do the crossing before we one-hot # TODO: Embed cross and add to deep feature columns return wide_fc, deep_fc ``` ### Lab Task #5: Create wide and deep model and output layer. So we've figured out how to get our inputs ready for machine learning but now we need to connect them to our desired output. Our model architecture is what links the two together. We need to create a wide and deep model now. The wide side will just be a linear regression or dense layer. For the deep side, let's create some hidden dense layers. All of this will end with a single dense output layer. This is regression so make sure the output layer activation is correct and that the shape is right. ``` def get_model_outputs(wide_inputs, deep_inputs, dnn_hidden_units): """Creates model architecture and returns outputs. Args: wide_inputs: Dense tensor used as inputs to wide side of model. deep_inputs: Dense tensor used as inputs to deep side of model. dnn_hidden_units: List of integers where length is number of hidden layers and ith element is the number of neurons at ith layer. Returns: Dense tensor output from the model. 
""" # Hidden layers for the deep side layers = [int(x) for x in dnn_hidden_units] deep = deep_inputs # TODO: Create DNN model for the deep side deep_out = # TODO: Create linear model for the wide side wide_out = # Concatenate the two sides both = tf.keras.layers.concatenate( inputs=[deep_out, wide_out], name="both") # TODO: Create final output layer return output ``` ### Lab Task #6: Create custom evaluation metric. We want to make sure that we have some useful way to measure model performance for us. Since this is regression, we would like to know the RMSE of the model on our evaluation dataset, however, this does not exist as a standard evaluation metric, so we'll have to create our own by using the true and predicted labels. ``` def rmse(y_true, y_pred): """Calculates RMSE evaluation metric. Args: y_true: tensor, true labels. y_pred: tensor, predicted labels. Returns: Tensor with value of RMSE between true and predicted labels. """ # TODO: Calculate RMSE from true and predicted labels pass ``` ### Lab Task #7: Build wide and deep model tying all of the pieces together. Excellent! We've assembled all of the pieces, now we just need to tie them all together into a Keras Model. This is NOT a simple feedforward model with no branching, side inputs, etc. so we can't use Keras' Sequential Model API. We're instead going to use Keras' Functional Model API. Here we will build the model using [tf.keras.models.Model](https://www.tensorflow.org/api_docs/python/tf/keras/Model) giving our inputs and outputs and then compile our model with an optimizer, a loss function, and evaluation metrics. ``` def build_wide_deep_model(dnn_hidden_units=[64, 32], nembeds=3): """Builds wide and deep model using Keras Functional API. Returns: `tf.keras.models.Model` object. 
""" # Create input layers inputs = create_input_layers() # Create feature columns wide_fc, deep_fc = create_feature_columns(nembeds) # The constructor for DenseFeatures takes a list of numeric columns # The Functional API in Keras requires: LayerConstructor()(inputs) # TODO: Add wide and deep feature colummns wide_inputs = tf.keras.layers.DenseFeatures( feature_columns=#TODO, name="wide_inputs")(inputs) deep_inputs = tf.keras.layers.DenseFeatures( feature_columns=#TODO, name="deep_inputs")(inputs) # Get output of model given inputs output = get_model_outputs(wide_inputs, deep_inputs, dnn_hidden_units) # Build model and compile it all together model = tf.keras.models.Model(inputs=inputs, outputs=output) # TODO: Add custom eval metrics to list model.compile(optimizer="adam", loss="mse", metrics=["mse"]) return model print("Here is our wide and deep architecture so far:\n") model = build_wide_deep_model() print(model.summary()) ``` We can visualize the wide and deep network using the Keras plot_model utility. ``` tf.keras.utils.plot_model( model=model, to_file="wd_model.png", show_shapes=False, rankdir="LR") ``` ## Run and evaluate model ### Lab Task #8: Train and evaluate. We've built our Keras model using our inputs from our CSV files and the architecture we designed. Let's now run our model by training our model parameters and periodically running an evaluation to track how well we are doing on outside data as training goes on. We'll need to load both our train and eval datasets and send those to our model through the fit method. Make sure you have the right pattern, batch size, and mode when loading the data. Also, don't forget to add the callback to TensorBoard. 
``` TRAIN_BATCH_SIZE = 32 NUM_TRAIN_EXAMPLES = 10000 * 5 # training dataset repeats, it'll wrap around NUM_EVALS = 5 # how many times to evaluate # Enough to get a reasonable sample, but not so much that it slows down NUM_EVAL_EXAMPLES = 10000 # TODO: Load training dataset trainds = load_dataset() # TODO: Load evaluation dataset evalds = load_dataset().take(count=NUM_EVAL_EXAMPLES // 1000) steps_per_epoch = NUM_TRAIN_EXAMPLES // (TRAIN_BATCH_SIZE * NUM_EVALS) logdir = os.path.join( "logs", datetime.datetime.now().strftime("%Y%m%d-%H%M%S")) tensorboard_callback = tf.keras.callbacks.TensorBoard( log_dir=logdir, histogram_freq=1) # TODO: Fit model on training dataset and evaluate every so often history = model.fit() ``` ### Visualize loss curve ``` # Plot nrows = 1 ncols = 2 fig = plt.figure(figsize=(10, 5)) for idx, key in enumerate(["loss", "rmse"]): ax = fig.add_subplot(nrows, ncols, idx+1) plt.plot(history.history[key]) plt.plot(history.history["val_{}".format(key)]) plt.title("model {}".format(key)) plt.ylabel(key) plt.xlabel("epoch") plt.legend(["train", "validation"], loc="upper left"); ``` ### Save the model ``` OUTPUT_DIR = "babyweight_trained_wd" shutil.rmtree(OUTPUT_DIR, ignore_errors=True) EXPORT_PATH = os.path.join( OUTPUT_DIR, datetime.datetime.now().strftime("%Y%m%d%H%M%S")) tf.saved_model.save( obj=model, export_dir=EXPORT_PATH) # with default serving function print("Exported trained model to {}".format(EXPORT_PATH)) !ls $EXPORT_PATH ``` ## Lab Summary: In this lab, we started by defining the CSV column names, label column, and column defaults for our data inputs. Then, we constructed a tf.data Dataset of features and the label from the CSV files and created inputs layers for the raw features. Next, we set up feature columns for the model inputs and built a wide and deep neural network in Keras. We created a custom evaluation metric and built our wide and deep model. Finally, we trained and evaluated our model. Copyright 2019 Google Inc. 
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License
# Hive Command Note **Outline** * [Introduction](#intro) * [Syntax](#syntax) * [Reference](#refer) --- Hive is a data warehouse infrastructure tool to process structured data in Hadoop. It resides on top of Hadoop to summarize Big Data, and makes querying and analyzing easy. * **Access Hive**: in cmd, type *`hive`* * **Run hive script**: hive -f xxx.hql > **Database in HIVE** Each database is a collection of tables. [link](http://www.tutorialspoint.com/hive/hive_create_database.htm) ``` # create database CREATE DATABASE [IF NOT EXISTS] userdb; # show all the databases show databases; # use a certain database, every table we create afterwards will be within the database use databaseName; # drop database DROP DATABASE IF EXISTS userdb; ``` > **Create Table** 1. employees.csv -> HDFS 2. create table & load employees.csv 3. drop employees table (Be careful that by dropping the table, HIVE will actually delete the original csv not just the table itself). Instead, we can create an external table. * External tables: if you drop them, data in hdfs will NOT be deleted. **Data Types** * **Integers** * *TINYINT*—1 byte integer * *SMALLINT*—2 byte integer * *INT*—4 byte integer * *BIGINT*—8 byte integer * **Boolean type** * *BOOLEAN*—TRUE/FALSE * **Floating point numbers** * *FLOAT*—single precision * *DOUBLE*—Double precision * **Fixed point numbers** * *DECIMAL*—a fixed point value of user defined scale and precision * **String types** * *STRING*—sequence of characters in a specified character set * *VARCHAR*—sequence of characters in a specified character set with a maximum length * *CHAR*—sequence of characters in a specified character set with a defined length * **Date and time types** * *TIMESTAMP*— a specific point in time, up to nanosecond precision * *DATE*—a date * **Binary types** * *BINARY*—a sequence of bytes **Complex Types** * **Structs**: the elements within the type can be accessed using the DOT (.) notation. 
For example, for a column c of type STRUCT {a INT; b INT}, the a field is accessed by the expression c.a * format: `<first, second>` * access: mystruct.first * **Maps (key-value tuples)**: The elements are accessed using ['element name'] notation. For example, in a map M comprising a mapping from 'group' -> gid, the gid value can be accessed using M['group'] * format: key based * access: myMap['KEY'] * **Arrays (indexable lists)**: The elements in the array have to be of the same type. Elements can be accessed using the [n] notation, where n is a zero-based index into the array. For example, for an array A having the elements ['a', 'b', 'c'], A[1] returns 'b'. * format: index based * access: myarray[0] * **ROW FORMAT DELIMITED**: one row per line * **FIELDS TERMINATED BY ','**: split columns by comma ``` -- use an external table in this example CREATE EXTERNAL TABLE movies( userid INT, movieid INT, rating INT, timestamp TIMESTAMP) ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t'; CREATE TABLE myemployees( name STRING, salary FLOAT, subordinates ARRAY<STRING>, deductions MAP<STRING, FLOAT>, address STRUCT<street:STRING, city:STRING, state:STRING, zip:INT>) ROW FORMAT DELIMITED -- one row per line: a newline character marks a new record FIELDS TERMINATED BY ',' -- split columns by comma COLLECTION ITEMS TERMINATED BY '#' -- split array/struct items by `#` MAP KEYS TERMINATED BY '-' -- split map keys from values by `-` LINES TERMINATED BY '\n'; -- separate lines by newline ``` > **load file from hdfs into hive** [StackOverFlow: Which is the difference between LOAD DATA INPATH and LOAD DATA LOCAL INPATH in HIVE](https://stackoverflow.com/questions/43204716/which-is-the-difference-between-load-data-inpath-and-load-data-local-inpath-in-h/43205970) ``` -- load data into table movie.
Note that the path is an HDFS path -- note that the original file in hdfs://hw5/ will be moved to ''hdfs://wolf.xxx.ooo.edu:8000/user/hive/warehouse/jchiu.db/movie/u.data'' after this command LOAD DATA INPATH 'hw5/u.data' into table movie; -- load data into table movie. Note that the path is a local path -- LOCAL is an identifier to specify the local path. It is optional. -- when using LOCAL, the file is copied to the hive directory LOAD DATA LOCAL INPATH 'localpath' into table movie; LOAD DATA LOCAL INPATH '/home/public/course/recommendationEngine/u.data' into table movies; -- create an external table, then load data into it CREATE EXTERNAL TABLE myemployees ...; LOAD DATA INPATH '...' INTO TABLE myemployees; ``` > **see column name; describe table** ``` -- method 1 describe database.tablename; -- method 2 use database; describe tablename; ``` > **Query** ``` SELECT [ALL | DISTINCT] select_expr, select_expr, ... FROM table_reference [WHERE where_condition] [GROUP BY col_list] [HAVING having_condition] [ORDER BY col_list] [LIMIT number]; select address.city from employees; ``` > **show tables** ``` -- if a database is already in use, this shows its tables; otherwise it shows all tables show tables; ``` > **drop tables** Square brackets [] mark optional keywords; omit the brackets themselves when writing the statement.
``` DROP TABLE [IF EXISTS] table_name; ``` > **create view in hive** ``` CREATE VIEW [IF NOT EXISTS] emp_30000 AS SELECT * FROM employee WHERE salary>30000; ``` > **drop a view** ``` DROP VIEW view_name; ``` > **join** [tutorialspoint: hiveql join](https://www.tutorialspoint.com/hive/hiveql_joins.htm) Syntax-wise, joins are essentially the same as in SQL > **hive built in aggregation functions** [treasuredata: hive-aggregate-functions](https://docs.treasuredata.com/articles/hive-aggregate-functions) > **hive built in operators** [tutorialspoint: built-in operators](https://www.tutorialspoint.com/hive/hive_built_in_operators.htm) for dealing with NULL/NA, equality checks, etc. > **writing data into the filesystem from queries** [hive doc](https://cwiki.apache.org/confluence/display/Hive/LanguageManual+DML#LanguageManualDML-Writingdataintothefilesystemfromqueries) [Hive INSERT OVERWRITE DIRECTORY command output is not separated by a delimiter. Why?](https://stackoverflow.com/questions/16459790/hive-insert-overwrite-directory-command-output-is-not-separated-by-a-delimiter) The discussion dates from 2013, so it may no longer be accurate. * If the LOCAL keyword is used, Hive will write data to the directory on the local file system. * Data written to the filesystem is serialized as text with columns separated by ^A and rows separated by newlines. If any of the columns are not of primitive type, then those columns are serialized to JSON format. ``` INSERT OVERWRITE [LOCAL] DIRECTORY directory1 SELECT ... FROM ... ``` * **STORED AS TEXTFILE**: Stored as plain text files. TEXTFILE is the default file format, unless the configuration parameter hive.default.fileformat has a different setting. ``` -- in newer Hive versions, this should work just fine INSERT OVERWRITE [LOCAL] DIRECTORY directory1 ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t' SELECT ... FROM ...
-- another way to work around this -- concat_ws: concatenate columns together as a string INSERT OVERWRITE DIRECTORY '/user/hadoop/output' SELECT concat_ws(',', col1, col2) FROM graph_edges; ``` > **Create User Defined Functions (UDF)** **Steps** * write the function in Java * package it as a jar file * import the jar file into Hive * use the UDF in a query # Lab Material ``` -- sample code from lab CREATE EXTERNAL TABLE employees( name STRING, salary FLOAT) ROW FORMAT DELIMITED FIELDS TERMINATED BY ','; LOAD DATA INPATH 'employees.csv' into table employees; CREATE DATABASE msia; SHOW DATABASES; DROP DATABASE msia; USE msia; SHOW TABLES; CREATE TABLE employees( name STRING, salary FLOAT, subordinates ARRAY<STRING>, deductions MAP<STRING, FLOAT>, address STRUCT<street:STRING, city: STRING, state: STRING, zip: INT>); CREATE TABLE t ( s STRING, f FLOAT, a ARRAY<MAP<STRING, STRUCT<p1: INT, p2: INT>>>) ROW FORMAT DELIMITED FIELDS TERMINATED BY ',' COLLECTION ITEMS TERMINATED BY '#' MAP KEYS TERMINATED BY '-' LINES TERMINATED BY '\n'; LOAD DATA INPATH 'employees.csv' into table employees; ``` --- # <a id='refer'>Reference</a> * [Tutorialspoint Hive Tutorial](https://www.tutorialspoint.com/hive/index.htm) * [Hive tutorial doc](https://cwiki.apache.org/confluence/display/Hive/Tutorial)
# Loading and working with data in sktime Python provides a variety of useful ways to represent data, but NumPy arrays and pandas DataFrames are commonly used for data analysis. When using NumPy 2d-arrays or pandas DataFrames to analyze tabular data, the rows are commonly used to represent each instance (e.g. case or observation) of the data, while the columns are used to represent a given feature (e.g. variable or dimension) for an observation. Since timeseries data also has a time dimension for a given instance and feature, several alternative data formats could be used to represent this data, including nested pandas DataFrame structures, NumPy 3d-arrays, or multi-indexed pandas DataFrames. Sktime is designed to work with timeseries data stored as nested pandas DataFrame objects. Similar to working with pandas DataFrames with tabular data, this allows instances to be represented by rows and the feature data for each dimension of a problem (e.g. variables or features) to be stored in the DataFrame columns. To accomplish this, the timepoints for each instance-feature combination are stored in a single cell in the input pandas DataFrame ([see Sktime pandas DataFrame format](#sktime_df_format) for more details). Users can load or convert data into sktime's format in a variety of ways. Data can be loaded directly from a bespoke sktime file format (.ts) ([see Representing data with .ts files](#ts_files)) or supported file formats provided by [other existing data sources](#other_file_types) (such as Weka ARFF and .tsv). Sktime also provides functions to convert data to and from sktime's nested pandas DataFrame format and several other common ways for representing timeseries data using NumPy arrays or pandas DataFrames ([see Converting between sktime and alternative timeseries formats](#convert)).
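To make the single-cell-per-series idea concrete before going further, here is a minimal hand-built sketch of a nested DataFrame with toy values, constructed with plain pandas rather than any sktime loader:

```python
import pandas as pd

# Three instances, two dimensions; each cell nests a whole pandas Series
# of timepoints rather than a single scalar value.
X = pd.DataFrame({
    "dim_0": [
        pd.Series([2, 3, 2, 4]),
        pd.Series([13, 12, 32, 12]),
        pd.Series([4, 4, 5, 4]),
    ],
    "dim_1": [
        pd.Series([4, 3, 2, 2]),
        pd.Series([22, 23, 12, 32]),
        pd.Series([3, 2, 3, 2]),
    ],
})

print(X.shape)             # one row per instance, one column per dimension
print(type(X.iloc[0, 0]))  # each cell holds a pandas Series
```

Note that `X.shape` is (3, 2) regardless of how many timepoints each nested Series contains — the time axis lives entirely inside the cells.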
The rest of this sktime tutorial will provide a more detailed description of the sktime pandas DataFrame format, a brief description of the .ts file format, how to load data from other supported formats, and how to convert between other common ways of representing timeseries data in NumPy arrays or pandas DataFrames. <a id="sktime_df_format"></a> ## Sktime pandas DataFrame format The core data structure for storing datasets in sktime is a _nested_ pandas DataFrame, where rows of the dataframe correspond to instances (cases or observations), and columns correspond to dimensions of the problem (features or variables). The multiple timepoints and their corresponding values for each instance-feature pair are stored as pandas Series objects _nested_ within the applicable DataFrame cell. For example, for a problem with n cases that each have data across c timeseries dimensions: DataFrame: index | dim_0 | dim_1 | ... | dim_c-1 0 | pd.Series | pd.Series | pd.Series | pd.Series 1 | pd.Series | pd.Series | pd.Series | pd.Series ... | ... | ... | ... | ... n | pd.Series | pd.Series | pd.Series | pd.Series Representing timeseries data in this way makes it easy to align the timeseries features for a given instance with non-timeseries information. For example, in a classification problem, it is easy to align the timeseries features for an observation with its (index-aligned) target class label: index | class_val 0 | int 1 | int ... | ... n | int While sktime's format uses pandas Series objects in its nested DataFrame structure, other data structures like NumPy arrays could be used to hold the timeseries values in each cell. However, the use of pandas Series objects helps to facilitate simple storage of sparse data and makes it easy to accommodate series with non-integer timestamps (such as dates). <a id="ts_files"></a> ## The .ts file format One common use case is to load locally stored data.
To make this easy, the .ts file format has been created for representing problems in a standard format for use with sktime. ### Representing data with .ts files A .ts file includes two main parts: * header information * data The header information is used to facilitate simple representation of the data by including metadata about the structure of the problem. The header contains the following: @problemName <problem name> @timeStamps <true/false> @univariate <true/false> @classLabel <true/false> <space delimited list of possible class values> @data The data for the problem should begin after the @data tag. In the simplest case where @timestamps is false, values for a series are expressed in a comma-separated list and the index of each value is relative to its position in the list (0, 1, ..., m). An _instance_ may contain 1 to many dimensions, where instances are line-delimited and dimensions within an instance are colon (:) delimited. For example: 2,3,2,4:4,3,2,2 13,12,32,12:22,23,12,32 4,4,5,4:3,2,3,2 This example data has 3 _instances_, corresponding to the three lines shown above. Each instance has 2 _dimensions_ with 4 observations per dimension. For example, the initial instance's first dimension has the timepoint values of 2, 3, 2, 4 and the second dimension has the values 4, 3, 2, 2. Missing readings can be specified using ?. For example, 2,?,2,4:4,3,2,2 13,12,32,12:22,23,12,32 4,4,5,4:3,2,3,2 would indicate the second timepoint value of the initial instance's first dimension is missing. Alternatively, for sparse datasets, readings can be specified by setting @timestamps to true in the header and representing the data with tuples in the form of (timestamp, value) just for the observations that are present.
For example, the first instance in the example above could be specified in this representation as: (0,2),(1,3),(2,2),(3,4):(0,4),(1,3),(2,2),(3,2) Equivalently, the sparser example 2,5,?,?,?,?,?,5,?,?,?,?,4 could be represented with just the non-missing timestamps as: (0,2),(1,5),(7,5),(12,4) When using the .ts file format to store data for timeseries classification problems, the class label for an instance should be specified in the last dimension and @classLabel should be set to true in the header information and be followed by the set of possible class values. For example, if a case consists of a single dimension and has a class value of 1, it would be specified as: 1,4,23,34:1 ### Loading from .ts file to pandas DataFrame A dataset can be loaded from a .ts file using the following method in sktime.utils.data_io.py: load_from_tsfile_to_dataframe(full_file_path_and_name, replace_missing_vals_with='NaN') This can be demonstrated using the Arrow Head problem that is included in sktime under sktime/datasets/data. ``` import os import sktime from sktime.utils.data_io import load_from_tsfile_to_dataframe DATA_PATH = os.path.join(os.path.dirname(sktime.__file__), "datasets/data") train_x, train_y = load_from_tsfile_to_dataframe( os.path.join(DATA_PATH, "ArrowHead/ArrowHead_TRAIN.ts") ) test_x, test_y = load_from_tsfile_to_dataframe( os.path.join(DATA_PATH, "ArrowHead/ArrowHead_TEST.ts") ) ``` Train and test partitions of the ArrowHead problem have been loaded into nested dataframes with an associated array of class values. As an example, below are the first 5 rows from the train_x and train_y: ``` train_x.head() train_y[0:5] ``` <a id="other_file_types"></a> ## Loading other file formats Researchers who have made timeseries data available have used two other common formats, including: + Weka ARFF files + UCR .tsv files ### Loading from Weka ARFF files It is also possible to load data from Weka's attribute-relation file format (ARFF) files.
Data for timeseries problems are made available in this format by researchers at the University of East Anglia (among others) at www.timeseriesclassification.com. The `load_from_arff_to_dataframe` method in `sktime.utils.data_io` supports reading data for both univariate and multivariate timeseries problems. The univariate functionality is demonstrated below using data on the ArrowHead problem again (this time loading from an ARFF file). ``` from sktime.utils.data_io import load_from_arff_to_dataframe X, y = load_from_arff_to_dataframe( os.path.join(DATA_PATH, "ArrowHead/ArrowHead_TRAIN.arff") ) X.head() ``` The multivariate BasicMotions problem is used below to illustrate the ability to read multivariate timeseries data from ARFF files into the sktime format. ``` X, y = load_from_arff_to_dataframe( os.path.join(DATA_PATH, "BasicMotions/BasicMotions_TRAIN.arff") ) X.head() ``` ### Loading from UCR .tsv Format Files A further option is to load data into sktime from tab separated value (.tsv) files. Researchers at the University of California, Riverside make a variety of timeseries data available in this format at https://www.cs.ucr.edu/~eamonn/time_series_data_2018. The `load_from_ucr_tsv_to_dataframe` method in `sktime.utils.data_io` supports reading univariate problems. An example with ArrowHead is given below to demonstrate equivalence with loading from the .ts and ARFF file formats.
Functions to convert from and to these types to sktime's nested DataFrame format are provided in `sktime.utils.data_processing`. ### Using tabular data with sktime One approach to representing timeseries data is a tabular DataFrame. As usual, each row represents an instance. In the tabular setting each timepoint of the univariate timeseries being measured for each instance is treated as a feature and stored as a primitive data type in the DataFrame's cells. In a univariate setting, where there are `n` instances of the series and each univariate timeseries has `t` timepoints, this would yield a pandas DataFrame with shape (n, t). In practice, this could be used to represent sensors measuring the same signal over time (features) on different machines (instances) or the same economic variable over time (features) for different countries (instances). The function `from_2d_array_to_nested` converts a (n, t) tabular DataFrame to a nested DataFrame with shape (n, 1). To convert from a nested DataFrame to a tabular array the function `from_nested_to_2d_array` can be used. The example below uses 50 instances with 20 timepoints each. ``` from numpy.random import default_rng from sktime.utils.data_processing import ( from_2d_array_to_nested, from_nested_to_2d_array, is_nested_dataframe, ) rng = default_rng() X_2d = rng.standard_normal((50, 20)) print(f"The tabular data has the shape {X_2d.shape}") ``` The `from_2d_array_to_nested` function makes it easy to convert this to a nested DataFrame. ``` X_nested = from_2d_array_to_nested(X_2d) print(f"X_nested is a nested DataFrame: {is_nested_dataframe(X_nested)}") print(f"The cell contains a {type(X_nested.iloc[0,0])}.") print(f"The nested DataFrame has shape {X_nested.shape}") X_nested.head() ``` This nested DataFrame can also easily be converted back to a tabular DataFrame.
``` X_2d = from_nested_to_2d_array(X_nested) print(f"The tabular data has the shape {X_2d.shape}") ``` ### Using long-format data with sktime Timeseries data can also be represented in _long_ format where each row identifies the value for a single timepoint for a given dimension for a given instance. This format may be encountered in a database where each row stores a single value measurement identified by several identification columns. For example, where `case_id` is an id to identify a specific instance in the data, `dim_id` is an integer between 0 and d-1 for d dimensions in the data, `reading_id` is the index of timepoints for the associated `case_id` and `dim_id`, and `value` is the actual value of the observation. E.g.: index | case_id | dim_id | reading_id | value ------------------------------------------------ 0 | int | int | int | double 1 | int | int | int | double 2 | int | int | int | double 3 | int | int | int | double Sktime provides functions to convert to and from the long data format in `sktime.utils.data_processing`. The `from_long_to_nested` function converts from a long format DataFrame to sktime's nested format (with assumptions made on how the data is initially formatted). Conversely, `from_nested_to_long` converts from a sktime nested DataFrame into a long format DataFrame. To demonstrate this functionality the method below creates a dataset with 50 instances (cases), 5 dimensions and 20 timepoints per dimension. ``` from sktime.utils.data_io import generate_example_long_table X = generate_example_long_table(num_cases=50, series_len=20, num_dims=5) X.head() X.tail() ``` As shown below, applying the `from_long_to_nested` method returns a sktime-formatted dataset with individual dimensions represented by columns of the output dataframe.
``` from sktime.utils.data_processing import from_long_to_nested, from_nested_to_long X_nested = from_long_to_nested(X) X_nested.head() ``` As expected the result is a nested DataFrame and the cells include nested pandas Series objects. ``` print(f"X_nested is a nested DataFrame: {is_nested_dataframe(X_nested)}") print(f"The cell contains a {type(X_nested.iloc[0,0])}.") print(f"The nested DataFrame has shape {X_nested.shape}") X_nested.iloc[0, 0].head() ``` As shown below, the `from_nested_to_long` function can be used to convert the resulting nested DataFrame (or any nested DataFrame) to a long format DataFrame. ``` X_long = from_nested_to_long( X_nested, instance_column_name="case_id", time_column_name="reading_id", dimension_column_name="dim_id", ) X_long.head() X_long.tail() ``` ### Using multi-indexed pandas DataFrames Pandas deprecated its Panel object in version 0.20.1. Since that time pandas has recommended representing 3-dimensional data using a multi-indexed DataFrame. Storing timeseries data in a Pandas multi-indexed DataFrame is a natural option since many timeseries problems include data over the instance, feature and time dimensions. Sktime provides the functions `from_multi_index_to_nested` and `from_nested_to_multi_index` in `sktime.utils.data_processing` to easily convert between pandas multi-indexed DataFrames and sktime's nested DataFrame structure. The example below illustrates how these functions can be used to convert to and from the nested structure given data with 50 instances, 5 features (columns) and 20 timepoints per feature. In the multi-indexed DataFrame a row represents a unique combination of the instance and timepoint indices. Therefore, the resulting multi-indexed DataFrame should have the shape (1000, 5). 
``` from sktime.utils.data_io import make_multi_index_dataframe from sktime.utils.data_processing import ( from_multi_index_to_nested, from_nested_to_multi_index, ) X_mi = make_multi_index_dataframe(n_instances=50, n_columns=5, n_timepoints=20) print(f"The multi-indexed DataFrame has shape {X_mi.shape}") print(f"The multi-index names are {X_mi.index.names}") X_mi.head() ``` The multi-indexed DataFrame can be easily converted to a nested DataFrame with shape (50, 5). Note that the conversion to the nested DataFrame has preserved the column names (it has also preserved the values of the instance index, and the pandas Series objects nested in each cell have preserved the time index). ``` X_nested = from_multi_index_to_nested(X_mi, instance_index="case_id") print(f"X_nested is a nested DataFrame: {is_nested_dataframe(X_nested)}") print(f"The cell contains a {type(X_nested.iloc[0,0])}.") print(f"The nested DataFrame has shape {X_nested.shape}") X_nested.head() ``` Nested DataFrames can also be converted to a multi-indexed pandas DataFrame. ``` X_mi = from_nested_to_multi_index( X_nested, instance_index="case_id", time_index="reading_id" ) X_mi.head() ``` ### Using NumPy 3d-arrays with sktime Another common approach for representing timeseries data is to use a 3-dimensional NumPy array with shape (n_instances, n_columns, n_timepoints). Sktime provides the functions `from_3d_numpy_to_nested` and `from_nested_to_3d_numpy` in `sktime.utils.data_processing` to let users easily convert between NumPy 3d-arrays and nested pandas DataFrames. This is demonstrated using a 3d-array with 50 instances, 5 features (columns) and 20 timepoints, resulting in a 3d-array with shape (50, 5, 20).
``` from sktime.utils.data_processing import ( from_3d_numpy_to_nested, from_multi_index_to_3d_numpy, from_nested_to_3d_numpy, ) X_mi = make_multi_index_dataframe(n_instances=50, n_columns=5, n_timepoints=20) X_3d = from_multi_index_to_3d_numpy( X_mi, instance_index="case_id", time_index="reading_id" ) print(f"The 3d-array has shape {X_3d.shape}") ``` The 3d-array can be easily converted to a nested DataFrame with shape (50, 5). Note that since NumPy arrays don't have indices, the instance index is the numerical range over the number of instances and the columns are automatically assigned. Users can optionally supply their own column names via the columns_names parameter. ``` X_nested = from_3d_numpy_to_nested(X_3d) print(f"X_nested is a nested DataFrame: {is_nested_dataframe(X_nested)}") print(f"The cell contains a {type(X_nested.iloc[0,0])}.") print(f"The nested DataFrame has shape {X_nested.shape}") X_nested.head() ``` Nested DataFrames can also be converted to NumPy 3d-arrays. ``` X_3d = from_nested_to_3d_numpy(X_nested) print(f"The resulting object is a {type(X_3d)}") print(f"The shape of the 3d-array is {X_3d.shape}") ``` ### Converting between NumPy 3d-arrays and pandas multi-indexed DataFrame Although an example is not provided here, sktime lets users convert data between the NumPy 3d-array and multi-indexed pandas DataFrame formats using the functions `from_3d_numpy_to_multi_index` and `from_multi_index_to_3d_numpy` in `sktime.utils.data_processing`.
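Although the sktime helpers do this work for you, the underlying reshape is easy to see with plain NumPy and pandas. The sketch below uses toy sizes and is not sktime code; it flattens a (n_instances, n_columns, n_timepoints) array into the equivalent multi-indexed frame of shape (n_instances * n_timepoints, n_columns):

```python
import numpy as np
import pandas as pd

n_instances, n_columns, n_timepoints = 4, 2, 3

# A small 3d-array in the (instance, column, timepoint) layout.
X_3d = np.arange(n_instances * n_columns * n_timepoints).reshape(
    n_instances, n_columns, n_timepoints
)

# One multi-index row per (instance, timepoint) pair.
index = pd.MultiIndex.from_product(
    [range(n_instances), range(n_timepoints)], names=["case_id", "reading_id"]
)

# Move the time axis next to the instance axis, then collapse both.
X_mi = pd.DataFrame(
    X_3d.transpose(0, 2, 1).reshape(-1, n_columns),
    index=index,
    columns=[f"var_{i}" for i in range(n_columns)],
)

print(X_mi.shape)  # (n_instances * n_timepoints, n_columns)
```

Reversing the reshape — `X_mi.to_numpy().reshape(n_instances, n_timepoints, n_columns).transpose(0, 2, 1)` — recovers the original array, which mirrors the round trip the converters perform.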
This notebook will show an example of text preprocessing applied to the RTL-Wiki dataset. This dataset was introduced in [1] and later recreated in [2]. You can download it from http://139.18.2.164/mroeder/palmetto/datasets/rtl-wiki.tar.gz -------- [1] "Reading Tea Leaves: How Humans Interpret Topic Models" (NIPS 2009) [2] "Exploring the Space of Topic Coherence Measures" (WSDM 2015) ``` # download corpus and unpack it: ! wget http://139.18.2.164/mroeder/palmetto/datasets/rtl-wiki.tar.gz -O rtl-wiki.tar.gz ! tar xzf rtl-wiki.tar.gz ``` The corpus is a sample of 10000 articles from English Wikipedia in a MediaWiki markup format. Hence, we need to strip specific wiki formatting. We advise using a `mwparserfromhell` fork optimized to deal with the English Wikipedia. ``` ! git clone --branch images_and_interwiki https://github.com/bt2901/mwparserfromhell.git ``` The Wikipedia dataset is too heterogeneous. Building a good topic model here requires a lot of topics or a lot of documents. To make the collection more focused, we will filter out everything which isn't about people.
We will use the following criteria to distinguish people and not-people: ``` import re # all infoboxes related to persons, according to https://en.wikipedia.org/wiki/Wikipedia:List_of_infoboxes person_infoboxes = {'infobox magic: the gathering player', 'infobox architect', 'infobox mountaineer', 'infobox scientist', 'infobox chess biography', 'infobox racing driver', 'infobox saint', 'infobox snooker player', 'infobox figure skater', 'infobox theological work', 'infobox gaelic athletic association player', 'infobox professional wrestler', 'infobox noble', 'infobox pelotari', 'infobox native american leader', 'infobox pretender', 'infobox amateur wrestler', 'infobox college football player', 'infobox buddha', 'infobox cfl biography', 'infobox playboy playmate', 'infobox cyclist', 'infobox martial artist', 'infobox motorcycle rider', 'infobox motocross rider', 'infobox bandy biography', 'infobox video game player', 'infobox dancer', 'infobox nahua officeholder', 'infobox criminal', 'infobox squash player', 'infobox go player', 'infobox bullfighting career', 'infobox engineering career', 'infobox pirate', 'infobox latter day saint biography', 'infobox sumo wrestler', 'infobox youtube personality', 'infobox national hockey league coach', 'infobox rebbe', 'infobox football official', 'infobox aviator', 'infobox pharaoh', 'infobox classical composer', 'infobox fbi ten most wanted', 'infobox chef', 'infobox engineer', 'infobox nascar driver', 'infobox medical person', 'infobox jewish leader', 'infobox horseracing personality', 'infobox poker player', 'infobox economist', 'infobox peer', 'infobox war on terror detainee', 'infobox philosopher', 'infobox professional bowler', 'infobox champ car driver', 'infobox golfer', 'infobox le mans driver', 'infobox alpine ski racer', 'infobox boxer (amateur)', 'infobox bodybuilder', 'infobox college coach', 'infobox speedway rider', 'infobox skier', 'infobox medical details', 'infobox field hockey player', 'infobox badminton player', 
'infobox sports announcer details', 'infobox academic', 'infobox f1 driver', 'infobox ncaa athlete', 'infobox biathlete', 'infobox comics creator', 'infobox rugby league biography', 'infobox fencer', 'infobox theologian', 'infobox religious biography', 'infobox egyptian dignitary', 'infobox curler', 'infobox racing driver series section', 'infobox afl biography', 'infobox speed skater', 'infobox climber', 'infobox rugby biography', 'infobox clergy', 'infobox equestrian', 'infobox member of the knesset', 'infobox pageant titleholder', 'infobox lacrosse player', 'infobox tennis biography', 'infobox gymnast', 'infobox sport wrestler', 'infobox sports announcer', 'infobox surfer', 'infobox darts player', 'infobox christian leader', 'infobox presenter', 'infobox gunpowder plotter', 'infobox table tennis player', 'infobox sailor', 'infobox astronaut', 'infobox handball biography', 'infobox volleyball biography', 'infobox spy', 'infobox wrc driver', 'infobox police officer', 'infobox swimmer', 'infobox netball biography', 'infobox model', 'infobox comedian', 'infobox boxer'} # is page included in a category with demography information? demography_re = re.compile("([0-9]+ (deaths|births))|(living people)") dir_name = "persons" ! 
mkdir $dir_name import glob from bs4 import BeautifulSoup from mwparserfromhell import mwparserfromhell from tqdm import tqdm_notebook as tqdm for filename in tqdm(glob.glob("documents/*.html")): doc_id = filename.partition("/")[-1] doc_id = doc_id.rpartition(".")[0] + ".txt" is_about_person = False with open(filename, "r") as f: soup = BeautifulSoup("".join(f.readlines())) text = soup.findAll('textarea', id="wpTextbox1")[0].contents[0] text = text.replace("&amp;", "&").replace('&lt;', '<').replace('&gt;', '>') wikicode = mwparserfromhell.parse(text) if dir_name == "persons": for node in wikicode.nodes: entry_type = str(type(node)) if "Wikilink" in entry_type: special_link_name, _, cat_name = node.title.lower().strip().partition(":") if special_link_name == "category": if demography_re.match(cat_name): is_about_person = True if "Template" in entry_type: name = str(node.name).lower().strip() if name in person_infoboxes: is_about_person = True should_be_saved = is_about_person else: should_be_saved = True if should_be_saved: with open(f"{dir_name}/{doc_id}", "w") as f2: stripped_text = wikicode.strip_code() f2.write(stripped_text) ``` Now we have a folder `persons` which contains 1201 documents. Let's take a look at one of them: ``` ! head $dir_name/Eusebius.txt ``` We need to lemmatize texts, remove stopwords and extract informative ngrams. There's no one "correct" way to do it, but a reasonable baseline is to use the well-known `nltk` library.
``` import nltk import string import pandas as pd from glob import glob nltk.data.path.append('/home/evgenyegorov/nltk_data/') files = glob(dir_name + '/*.txt') data = [] for path in files: entry = {} entry['id'] = path.split('/')[-1].rpartition(".")[0] with open(path, 'r') as f: entry['raw_text'] = " ".join(line.strip() for line in f.readlines()) data.append(entry) wiki_texts = pd.DataFrame(data) from tqdm import tqdm tokenized_text = [] for text in tqdm(wiki_texts['raw_text'].values): tokens = nltk.wordpunct_tokenize(text.lower()) tokenized_text.append(nltk.pos_tag(tokens)) wiki_texts['tokenized'] = tokenized_text from nltk.corpus import wordnet def nltk2wn_tag(nltk_tag): if nltk_tag.startswith('J'): return wordnet.ADJ elif nltk_tag.startswith('V'): return wordnet.VERB elif nltk_tag.startswith('N'): return wordnet.NOUN elif nltk_tag.startswith('R'): return wordnet.ADV else: return '' from nltk.stem import WordNetLemmatizer from nltk.corpus import stopwords stop = set(stopwords.words('english')) lemmatized_text = [] wnl = WordNetLemmatizer() for text in wiki_texts['tokenized'].values: lemmatized = [wnl.lemmatize(word,nltk2wn_tag(pos)) if nltk2wn_tag(pos) != '' else wnl.lemmatize(word) for word, pos in text ] lemmatized = [word for word in lemmatized if word not in stop and word.isalpha()] lemmatized_text.append(lemmatized) wiki_texts['lemmatized'] = lemmatized_text ``` Ngrams are a powerful feature, and BigARTM is able to take advantage of it (the technical term is 'multimodal topic modeling': our topic model could model a lot of different features linked to a specific document, not just words). 
``` from nltk.collocations import BigramAssocMeasures, BigramCollocationFinder bigram_measures = BigramAssocMeasures() finder = BigramCollocationFinder.from_documents(wiki_texts['lemmatized']) finder.apply_freq_filter(5) set_dict = set(finder.nbest(bigram_measures.pmi,32100)[100:]) documents = wiki_texts['lemmatized'] bigrams = [] for doc in documents: entry = ['_'.join([word_first, word_second]) for word_first, word_second in zip(doc[:-1],doc[1:]) if (word_first, word_second) in set_dict] bigrams.append(entry) wiki_texts['bigram'] = bigrams from collections import Counter def vowpalize_sequence(sequence): word_2_frequency = Counter(sequence) del word_2_frequency[''] vw_string = '' for word in word_2_frequency: vw_string += word + ":" + str(word_2_frequency[word]) + ' ' return vw_string vw_text = [] for index, data in wiki_texts.iterrows(): vw_string = '' doc_id = data.id lemmatized = '@lemmatized ' + vowpalize_sequence(data.lemmatized) bigram = '@bigram ' + vowpalize_sequence(data.bigram) vw_string = ' |'.join([doc_id, lemmatized, bigram]) vw_text.append(vw_string) wiki_texts['vw_text'] = vw_text ``` Vowpal Wabbit ("vw") is a text format which is a good fit for multimodal topic modeling. Here, we elected to store the dataset in a Bag-of-Words format (for performance reasons), but VW could store everything as a sequence of words as well. It looks like this: ``` wiki_texts['vw_text'].head().values[0] wiki_texts[['id','raw_text', 'vw_text']].to_csv('./wiki_data.csv') ```
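To see exactly how one such line is assembled, here is a small self-contained sketch with toy tokens. It uses a simplified re-implementation of the notebook's `vowpalize_sequence` (sorting tokens for a deterministic result), not the real Wikipedia data:

```python
from collections import Counter

def vowpalize(tokens):
    """Render a token sequence as sorted, space-separated token:count pairs."""
    counts = Counter(t for t in tokens if t)
    return ' '.join(f'{tok}:{n}' for tok, n in sorted(counts.items()))

doc_id = 'Eusebius'
lemmas = ['bishop', 'church', 'church', 'history']
bigrams = ['church_history']

# One document -> one VW line: doc id, then one "|@modality ..." block per modality.
vw_line = ' |'.join([
    doc_id,
    '@lemmatized ' + vowpalize(lemmas),
    '@bigram ' + vowpalize(bigrams),
])
print(vw_line)
# Eusebius |@lemmatized bishop:1 church:2 history:1 |@bigram church_history:1
```

Each `|@name` block is an independent modality, which is what lets BigARTM weight lemmas and bigrams separately later on.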
```
import logging

import numpy as np
import pandas as pd
from sklearn import metrics
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler

from bedrock_client.bedrock.analyzer.model_analyzer import ModelAnalyzer
from bedrock_client.bedrock.analyzer import ModelTypes
from bedrock_client.bedrock.api import BedrockApi
from bedrock_client.bedrock.metrics.service import ModelMonitoringService


def load_dataset(filepath, target):
    df = pd.read_csv(filepath)
    df['large_rings'] = (df['Rings'] > 10).astype(int)

    # Ensure nothing is missing
    original_len = len(df)
    df.dropna(how="any", axis=0, inplace=True)
    num_rows_dropped = original_len - len(df)
    if num_rows_dropped > 0:
        print(f"Warning - dropped {num_rows_dropped} rows with NA data.")

    y = df[target].values
    df.drop(target, axis=1, inplace=True)
    return df, y


def train_log_reg_model(X, y, seed=0, C=1, verbose=False):
    verbose and print('\nTraining\nScaling...')
    scaling = StandardScaler()
    X = scaling.fit_transform(X)
    verbose and print('Fitting...')
    verbose and print('C:', C)
    model = LogisticRegression(random_state=seed, C=C, max_iter=4000)
    model.fit(X, y)
    verbose and print('Chaining pipeline...')
    pipe = Pipeline([('scaling', scaling), ('model', model)])
    verbose and print('Training Done.')
    return pipe


def compute_log_metrics(pipe, x_test, y_test, y_test_onehot):
    test_prob = pipe.predict_proba(x_test)
    test_pred = pipe.predict(x_test)

    acc = metrics.accuracy_score(y_test, test_pred)
    precision = metrics.precision_score(y_test, test_pred, average='macro')
    recall = metrics.recall_score(y_test, test_pred, average='macro')
    f1_score = metrics.f1_score(y_test, test_pred, average='macro')
    roc_auc = metrics.roc_auc_score(y_test_onehot, test_prob,
                                    average='macro', multi_class='ovr')
    avg_prc = metrics.average_precision_score(y_test_onehot, test_prob,
                                              average='macro')
    print("\nEvaluation\n"
          f"\tAccuracy                  = {acc:.4f}\n"
          f"\tPrecision (macro)         = {precision:.4f}\n"
          f"\tRecall (macro)            = {recall:.4f}\n"
          f"\tF1 score (macro)          = {f1_score:.4f}\n"
          f"\tROC AUC (macro)           = {roc_auc:.4f}\n"
          f"\tAverage precision (macro) = {avg_prc:.4f}")

    # Bedrock Logger: captures model metrics
    bedrock = BedrockApi(logging.getLogger(__name__))

    # `log_chart_data` assumes binary classification.
    # For multiclass labels, we can use a "micro-average" by
    # quantifying score on all classes jointly.
    # See https://scikit-learn.org/stable/auto_examples/model_selection/plot_precision_recall.html
    # This will allow us to use the same `log_chart_data` method.
    bedrock.log_chart_data(
        y_test_onehot.ravel().astype(int).tolist(),  # list of int
        test_prob.ravel().astype(float).tolist()     # list of float
    )
    bedrock.log_metric("Accuracy", acc)
    bedrock.log_metric("Precision (macro)", precision)
    bedrock.log_metric("Recall (macro)", recall)
    bedrock.log_metric("F1 Score (macro)", f1_score)
    bedrock.log_metric("ROC AUC (macro)", roc_auc)
    bedrock.log_metric("Avg precision (macro)", avg_prc)

    return test_prob, test_pred


x_train, y_train = load_dataset(
    filepath="data/abalone_train.csv",
    target="Type"
)
x_test, y_test = load_dataset(
    filepath="data/abalone_test.csv",
    target="Type"
)

enc = OneHotEncoder(handle_unknown='ignore', sparse=False)
# sklearn `roc_auc_score` and `average_precision_score` expect
# binary label indicators with shape (n_samples, n_classes)
y_train_onehot = enc.fit_transform(y_train.reshape(-1, 1))
# Use `transform` (not `fit_transform`) on the test set so both sets
# share the category encoding learned from the training data
y_test_onehot = enc.transform(y_test.reshape(-1, 1))

# Convert target variable to numeric values;
# ModelMonitoringService.export_text expects both features
# and inference to be numeric values
y_train = np.argmax(y_train_onehot, axis=1)
y_test = np.argmax(y_test_onehot, axis=1)
for value, category in enumerate(enc.categories_[0]):
    print(f'{category} : {value}')

pipe = train_log_reg_model(x_train, y_train, seed=0, C=1e-1, verbose=True)
test_prob, test_pred = compute_log_metrics(pipe, x_test, y_test, y_test_onehot)

# Ignore ERROR, this is for testing purposes
CONFIG_FAI = {
    'large_rings': {
        'privileged_attribute_values': [1],
        # privileged group name corresponding to values=[1]
        'privileged_group_name': 'Large',
        'unprivileged_attribute_values': [0],
        # unprivileged group name corresponding to values=[0]
        'unprivileged_group_name': 'Small',
    }
}

# Train SHAP model and calculate explainability and fairness metrics
analyzer = (
    ModelAnalyzer(pipe[1], model_name='logistic', model_type=ModelTypes.LINEAR)
    .train_features(x_train)
    .test_features(x_test)
    .fairness_config(CONFIG_FAI)
    .test_labels(y_test)
    .test_inference(test_pred)
)
analyzer.analyze()

ModelMonitoringService.export_text(
    features=x_train.iteritems(),   # assumes numeric values
    inference=test_pred.tolist(),   # assumes numeric values
)

for item in x_train.iteritems():
    print(item)
```
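The macro-averaging and the ravel-based "micro-average" trick used in `compute_log_metrics` above can be illustrated on a toy multiclass problem using only scikit-learn; the labels and probabilities below are made up purely for illustration:

```
import numpy as np
from sklearn import metrics
from sklearn.preprocessing import OneHotEncoder

# Toy 3-class labels and predicted probabilities (illustrative only)
y_true = np.array(['A', 'B', 'C', 'A', 'B', 'C'])
y_prob = np.array([
    [0.8, 0.1, 0.1],
    [0.2, 0.6, 0.2],
    [0.1, 0.2, 0.7],
    [0.5, 0.3, 0.2],
    [0.3, 0.4, 0.3],
    [0.2, 0.2, 0.6],
])
y_pred = y_prob.argmax(axis=1)

enc = OneHotEncoder()  # returns a sparse matrix by default
y_onehot = enc.fit_transform(y_true.reshape(-1, 1)).toarray()
y_numeric = y_onehot.argmax(axis=1)

# Macro-averaging: compute the metric per class, then take the unweighted mean
macro_f1 = metrics.f1_score(y_numeric, y_pred, average='macro')
macro_auc = metrics.roc_auc_score(y_onehot, y_prob, average='macro',
                                  multi_class='ovr')

# "Micro-average" for chart data: flatten the (n_samples, n_classes) one-hot
# labels and probabilities into parallel 1-D arrays, scoring all classes jointly
flat_labels = y_onehot.ravel().astype(int)
flat_probs = y_prob.ravel()
assert flat_labels.shape == flat_probs.shape == (18,)

print(f'macro F1 = {macro_f1:.3f}, macro ROC AUC = {macro_auc:.3f}')
```

The flattened label/probability pairs are exactly the shape a binary-classification chart logger expects, which is why the same `log_chart_data` call can be reused for multiclass models.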
# Ex2 - Getting and Knowing your Data

Check out [Chipotle Exercises Video Tutorial](https://www.youtube.com/watch?v=lpuYZ5EUyS8&list=PLgJhDSE2ZLxaY_DigHeiIDC1cD09rXgJv&index=2) to watch a data scientist go through the exercises

This time we are going to pull data directly from the internet. Special thanks to: https://github.com/justmarkham for sharing the dataset and materials.

### Step 1. Import the necessary libraries

```
import pandas as pd
import numpy as np
```

### Step 2. Import the dataset from this [address](https://raw.githubusercontent.com/justmarkham/DAT8/master/data/chipotle.tsv).

### Step 3. Assign it to a variable called chipo.

```
url = 'https://raw.githubusercontent.com/justmarkham/DAT8/master/data/chipotle.tsv'
chipo = pd.read_csv(url, sep='\t')
```

### Step 4. See the first 10 entries

```
chipo.head(10)
```

### Step 5. What is the number of observations in the dataset?

```
# Solution 1
chipo.shape[0]  # 4622 observations

# Solution 2
chipo.info()  # 4622 entries
```

### Step 6. What is the number of columns in the dataset?

```
chipo.shape[1]
```

### Step 7. Print the name of all the columns.

```
chipo.columns
```

### Step 8. How is the dataset indexed?

```
chipo.index
```

### Step 9. Which was the most-ordered item?

```
c = chipo.groupby('item_name')
c = c.sum()
c = c.sort_values(['quantity'], ascending=False)
c.head(1)
```

### Step 10. For the most-ordered item, how many items were ordered?

```
c = chipo.groupby('item_name')
c = c.sum()
c = c.sort_values(['quantity'], ascending=False)
c.head(1)
```

### Step 11. What was the most ordered item in the choice_description column?

```
c = chipo.groupby('choice_description').sum()
c = c.sort_values(['quantity'], ascending=False)
c.head(1)
# Diet Coke: 159
```

### Step 12. How many items were ordered in total?

```
total_items_orders = chipo.quantity.sum()
total_items_orders
```

### Step 13. Turn the item price into a float

#### Step 13.a. Check the item price type

```
chipo.item_price.dtype
```

#### Step 13.b. Create a lambda function and change the type of item price

```
dollarizer = lambda x: float(x[1:-1])
chipo.item_price = chipo.item_price.apply(dollarizer)
```

#### Step 13.c. Check the item price type

```
chipo.item_price.dtype
```

### Step 14. How much was the revenue for the period in the dataset?

```
revenue = (chipo['quantity'] * chipo['item_price']).sum()
print('Revenue was: $' + str(np.round(revenue, 2)))
```

### Step 15. How many orders were made in the period?

```
orders = chipo.order_id.value_counts().count()
orders
```

### Step 16. What is the average revenue amount per order?

```
# Solution 1
chipo['revenue'] = chipo['quantity'] * chipo['item_price']
order_grouped = chipo.groupby(by=['order_id']).sum()
order_grouped.mean()['revenue']

# Solution 2
chipo.groupby(by=['order_id']).sum().mean()['revenue']
```

### Step 17. How many different items are sold?

```
chipo.item_name.value_counts().count()
```
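As an aside, the lambda in Step 13.b assumes every price looks exactly like `$x.xx ` (with a leading dollar sign and a trailing character to trim). A vectorized alternative using pandas string methods is sketched below on a toy frame standing in for the Chipotle data; it is not part of the original exercise:

```
import pandas as pd

# Toy frame with the same '$x.xx ' price formatting (illustrative values)
chipo = pd.DataFrame({'item_price': ['$2.39 ', '$10.98 ', '$1.09 ']})

# Strip surrounding whitespace, drop the leading '$', then convert to float
chipo['item_price'] = (
    chipo['item_price']
    .str.strip()
    .str.lstrip('$')
    .astype(float)
)
print(chipo['item_price'].dtype)  # float64
```

This version is robust to prices with more or fewer digits than the lambda's fixed slice assumes, and avoids a Python-level loop over rows.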
```
# Copyright 2020 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
```

# Custom training and online prediction

<table align="left">
<td>
<a href="https://colab.research.google.com/github/GoogleCloudPlatform/ai-platform-samples/blob/master/ai-platform-unified/notebooks/official/custom-tabular-bq-managed-dataset.ipynb">
<img src="https://cloud.google.com/ml-engine/images/colab-logo-32px.png" alt="Colab logo"> Run in Colab
</a>
</td>
<td>
<a href="https://github.com/GoogleCloudPlatform/ai-platform-samples/blob/master/ai-platform-unified/notebooks/official/custom-tabular-bq-managed-dataset.ipynb">
<img src="https://cloud.google.com/ml-engine/images/github-logo-32px.png" alt="GitHub logo"> View on GitHub
</a>
</td>
</table>
<br/><br/><br/>

## Overview

This tutorial demonstrates how to use the Vertex SDK for Python to train and deploy a custom tabular classification model for online prediction.

### Dataset

The dataset used for this tutorial is the penguins dataset from [BigQuery public datasets](https://cloud.google.com/bigquery/public-data). In this version of the dataset, you will use only the fields `culmen_length_mm`, `culmen_depth_mm`, `flipper_length_mm`, and `body_mass_g` to predict the penguin species (`species`).

### Objective

In this notebook, you create a custom-trained model from a Python script in a Docker container using the Vertex SDK for Python, and then make a prediction on the deployed model by sending data.
Alternatively, you can create custom-trained models using the `gcloud` command-line tool, or online using the Cloud Console.

The steps performed include:

- Create a Vertex AI custom job for training a model.
- Train a TensorFlow model.
- Deploy the `Model` resource to a serving `Endpoint` resource.
- Make a prediction.
- Undeploy the `Model` resource.

### Costs

This tutorial uses billable components of Google Cloud (GCP):

* Vertex AI
* Cloud Storage

Learn about [Cloud Vertex AI pricing](https://cloud.google.com/vertex-ai/pricing) and [Cloud Storage pricing](https://cloud.google.com/storage/pricing), and use the [Pricing Calculator](https://cloud.google.com/products/calculator/) to generate a cost estimate based on your projected usage.

## Installation

Install the latest (preview) version of Vertex SDK for Python.

```
import os

# The Google Cloud Notebook product has specific requirements
IS_GOOGLE_CLOUD_NOTEBOOK = os.path.exists("/opt/deeplearning/metadata/env_version")

# Google Cloud Notebook requires dependencies to be installed with '--user'
USER_FLAG = ""
if IS_GOOGLE_CLOUD_NOTEBOOK:
    USER_FLAG = "--user"

! pip3 install {USER_FLAG} --upgrade google-cloud-aiplatform
```

Install the latest GA version of the *google-cloud-storage* library as well.

```
! pip3 install {USER_FLAG} -U google-cloud-storage
```

Install the latest GA version of the *google-cloud-bigquery* library as well.

```
! pip3 install {USER_FLAG} -U "google-cloud-bigquery[all]"
```

### Restart the kernel

Once you've installed everything, you need to restart the notebook kernel so it can find the packages.

```
import os

if not os.getenv("IS_TESTING"):
    # Automatically restart kernel after installs
    import IPython

    app = IPython.Application.instance()
    app.kernel.do_shutdown(True)
```

## Before you begin

### Select a GPU runtime

**Make sure you're running this notebook in a GPU runtime if you have that option. In Colab, select "Runtime --> Change runtime type > GPU"**

### Set up your Google Cloud project

**The following steps are required, regardless of your notebook environment.**

1. [Select or create a Google Cloud project](https://console.cloud.google.com/cloud-resource-manager). When you first create an account, you get a $300 free credit towards your compute/storage costs.
2. [Make sure that billing is enabled for your project](https://cloud.google.com/billing/docs/how-to/modify-project).
3. [Enable the Vertex AI API and Compute Engine API](https://console.cloud.google.com/flows/enableapi?apiid=aiplatform.googleapis.com,compute_component).
4. If you are running this notebook locally, you will need to install the [Cloud SDK](https://cloud.google.com/sdk).
5. Enter your project ID in the cell below. Then run the cell to make sure the Cloud SDK uses the right project for all the commands in this notebook.

**Note**: Jupyter runs lines prefixed with `!` as shell commands, and it interpolates Python variables prefixed with `$` into these commands.

#### Set your project ID

**If you don't know your project ID**, you may be able to get your project ID using `gcloud`.

```
PROJECT_ID = ""

if not os.getenv("IS_TESTING"):
    # Get your Google Cloud project ID from gcloud
    shell_output = !gcloud config list --format 'value(core.project)' 2>/dev/null
    PROJECT_ID = shell_output[0]
    print("Project ID: ", PROJECT_ID)
```

Otherwise, set your project ID here.

```
if PROJECT_ID == "" or PROJECT_ID is None:
    PROJECT_ID = "[your-project-id]"  # @param {type:"string"}
```

#### Timestamp

If you are in a live tutorial session, you might be using a shared test account or project. To avoid name collisions between users on resources created, you create a timestamp for each instance session, and append it onto the name of resources you create in this tutorial.
```
from datetime import datetime

TIMESTAMP = datetime.now().strftime("%Y%m%d%H%M%S")
```

### Authenticate your Google Cloud account

**If you are using Google Cloud Notebooks**, your environment is already authenticated. Skip this step.

**If you are using Colab**, run the cell below and follow the instructions when prompted to authenticate your account via oAuth.

**Otherwise**, follow these steps:

1. In the Cloud Console, go to the [**Create service account key** page](https://console.cloud.google.com/apis/credentials/serviceaccountkey).
2. Click **Create service account**.
3. In the **Service account name** field, enter a name, and click **Create**.
4. In the **Grant this service account access to project** section, click the **Role** drop-down list. Type "Vertex AI" into the filter box, and select **Vertex AI Administrator**. Type "Storage Object Admin" into the filter box, and select **Storage Object Admin**.
5. Click **Create**. A JSON file that contains your key downloads to your local environment.
6. Enter the path to your service account key as the `GOOGLE_APPLICATION_CREDENTIALS` variable in the cell below and run the cell.

```
import os
import sys

# If you are running this notebook in Colab, run this cell and follow the
# instructions to authenticate your GCP account. This provides access to your
# Cloud Storage bucket and lets you submit training jobs and prediction
# requests.

# The Google Cloud Notebook product has specific requirements
IS_GOOGLE_CLOUD_NOTEBOOK = os.path.exists("/opt/deeplearning/metadata/env_version")

# If on Google Cloud Notebooks, then don't execute this code
if not IS_GOOGLE_CLOUD_NOTEBOOK:
    if "google.colab" in sys.modules:
        from google.colab import auth as google_auth

        google_auth.authenticate_user()

    # If you are running this notebook locally, replace the string below with the
    # path to your service account key and run this cell to authenticate your GCP
    # account.
    elif not os.getenv("IS_TESTING"):
        %env GOOGLE_APPLICATION_CREDENTIALS ''
```

### Create a Cloud Storage bucket

**The following steps are required, regardless of your notebook environment.**

When you submit a training job using the Cloud SDK, you upload a Python package containing your training code to a Cloud Storage bucket. Vertex AI runs the code from this package. In this tutorial, Vertex AI also saves the trained model that results from your job in the same bucket. Using this model artifact, you can then create Vertex AI model and endpoint resources in order to serve online predictions.

Set the name of your Cloud Storage bucket below. It must be unique across all Cloud Storage buckets.

You may also change the `REGION` variable, which is used for operations throughout the rest of this notebook. Make sure to [choose a region where Vertex AI services are available](https://cloud.google.com/vertex-ai/docs/general/locations#available_regions). You may not use a Multi-Regional Storage bucket for training with Vertex AI.

```
BUCKET_NAME = ""  # @param {type:"string"}
REGION = "us-central1"  # @param {type:"string"}

if BUCKET_NAME == "" or BUCKET_NAME is None or BUCKET_NAME == "gs://[your-bucket-name]":
    BUCKET_NAME = "gs://" + PROJECT_ID + "-aip-" + TIMESTAMP
```

**Only if your bucket doesn't already exist**: Run the following cell to create your Cloud Storage bucket.

```
! gsutil mb -l $REGION $BUCKET_NAME
```

Finally, validate access to your Cloud Storage bucket by examining its contents:

```
! gsutil ls -al $BUCKET_NAME
```

### Set up variables

Next, set up some variables used throughout the tutorial.

#### Import Vertex SDK for Python

Import the Vertex SDK for Python into your Python environment and initialize it.
```
import os
import sys

from google.cloud import aiplatform
from google.cloud.aiplatform import gapic as aip

aiplatform.init(project=PROJECT_ID, location=REGION, staging_bucket=BUCKET_NAME)
```

#### Set hardware accelerators

You can set hardware accelerators for both training and prediction.

Set the variables `TRAIN_GPU/TRAIN_NGPU` and `DEPLOY_GPU/DEPLOY_NGPU` to use a container image supporting a GPU and the number of GPUs allocated to the virtual machine (VM) instance. For example, to use a GPU container image with 4 Nvidia Tesla K80 GPUs allocated to each VM, you would specify:

    (aip.AcceleratorType.NVIDIA_TESLA_K80, 4)

Otherwise specify `(None, None)` to use a container image that runs on a CPU.

Learn [which accelerators are available in your region](https://cloud.google.com/vertex-ai/docs/general/locations#accelerators).

```
TRAIN_GPU, TRAIN_NGPU = (aip.AcceleratorType.NVIDIA_TESLA_K80, 1)
DEPLOY_GPU, DEPLOY_NGPU = (aip.AcceleratorType.NVIDIA_TESLA_K80, 1)
```

#### Set pre-built containers

Vertex AI provides pre-built containers to run training and prediction. For the latest list, see [Pre-built containers for training](https://cloud.google.com/vertex-ai/docs/training/pre-built-containers) and [Pre-built containers for prediction](https://cloud.google.com/vertex-ai/docs/predictions/pre-built-containers).

```
TRAIN_VERSION = "tf-gpu.2-4"
DEPLOY_VERSION = "tf2-gpu.2-4"

TRAIN_IMAGE = "gcr.io/cloud-aiplatform/training/{}:latest".format(TRAIN_VERSION)
DEPLOY_IMAGE = "gcr.io/cloud-aiplatform/prediction/{}:latest".format(DEPLOY_VERSION)

print("Training:", TRAIN_IMAGE, TRAIN_GPU, TRAIN_NGPU)
print("Deployment:", DEPLOY_IMAGE, DEPLOY_GPU, DEPLOY_NGPU)
```

#### Set machine types

Next, set the machine types to use for training and prediction.
- Set the variables `TRAIN_COMPUTE` and `DEPLOY_COMPUTE` to configure your compute resources for training and prediction.
    - `machine type`
        - `n1-standard`: 3.75GB of memory per vCPU
        - `n1-highmem`: 6.5GB of memory per vCPU
        - `n1-highcpu`: 0.9GB of memory per vCPU
    - `vCPUs`: number of vCPUs: \[2, 4, 8, 16, 32, 64, 96\]

*Note: The following is not supported for training:*

- `standard`: 2 vCPUs
- `highcpu`: 2, 4 and 8 vCPUs

*Note: You may also use n2 and e2 machine types for training and deployment, but they do not support GPUs.*

Learn [which machine types are available](https://cloud.google.com/vertex-ai/docs/training/configure-compute) for training and [for prediction](https://cloud.google.com/vertex-ai/docs/predictions/configure-compute).

```
MACHINE_TYPE = "n1-standard"
VCPU = "4"
TRAIN_COMPUTE = MACHINE_TYPE + "-" + VCPU
print("Train machine type", TRAIN_COMPUTE)

MACHINE_TYPE = "n1-standard"
VCPU = "4"
DEPLOY_COMPUTE = MACHINE_TYPE + "-" + VCPU
print("Deploy machine type", DEPLOY_COMPUTE)
```

# Tutorial

Now you are ready to start creating your own custom-trained model with the penguins dataset.

## Prepare the data

To improve the convergence of our custom deep learning model, we need to normalize the data. To prepare for this, calculate the mean and standard deviation for each numeric column. These statistics will be passed to the training script to normalize the data before training, and will also be used to normalize the testing data later in this notebook.
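The z-score normalization described above (and applied by the training script later in this notebook) can be sketched on a toy column; the numbers here are illustrative, not actual penguin measurements:

```
import numpy as np

# Toy numeric column standing in for e.g. culmen_length_mm (made-up values)
column = np.array([39.1, 39.5, 40.3, 36.7], dtype=np.float32)

# Pre-computed statistics; in the notebook these are saved to a JSON file
# and passed to the training script. pandas' .std() uses ddof=1 by default.
mean, std = column.mean(), column.std(ddof=1)

# z-score: subtract the mean, divide by the standard deviation,
# so the column ends up with mean ~0 and standard deviation ~1
z = (column - mean) / std
print(z.mean(), z.std(ddof=1))  # ~0.0 and ~1.0
```

Computing the statistics once and shipping them alongside the model ensures that training, validation, and serving all normalize with exactly the same constants.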
```
BQ_SOURCE = "bq://bigquery-public-data.ml_datasets.penguins"

import json

import numpy as np
from google.cloud import bigquery

# Calculate mean and std across all rows
NA_VALUES = ["NA", "."]

# Set up BigQuery client
bqclient = bigquery.Client()

# Download a table
def download_table(bq_table_uri: str):
    # Remove bq:// prefix if present
    prefix = "bq://"
    if bq_table_uri.startswith(prefix):
        bq_table_uri = bq_table_uri[len(prefix):]
    table = bigquery.TableReference.from_string(bq_table_uri)
    rows = bqclient.list_rows(table)
    return rows.to_dataframe()

# Remove NA values
def clean_dataframe(df):
    return df.replace(to_replace=NA_VALUES, value=np.NaN).dropna()

def calculate_mean_and_std(df):
    # Calculate mean and std for each applicable column
    mean_and_std = {}
    dtypes = list(zip(df.dtypes.index, map(str, df.dtypes)))
    # Normalize numeric columns.
    for column, dtype in dtypes:
        if dtype == "float32" or dtype == "float64":
            mean_and_std[column] = {
                "mean": df[column].mean(),
                "std": df[column].std(),
            }
    return mean_and_std

dataframe = download_table(BQ_SOURCE)
dataframe = clean_dataframe(dataframe)
mean_and_std = calculate_mean_and_std(dataframe)
print("The means and stds for each column are: " + str(mean_and_std))

# Write to a file
MEAN_AND_STD_JSON_FILE = "mean_and_std.json"
with open(MEAN_AND_STD_JSON_FILE, "w") as outfile:
    json.dump(mean_and_std, outfile)

# Save to the staging bucket
! gsutil cp {MEAN_AND_STD_JSON_FILE} {BUCKET_NAME}
```

## Create a managed tabular dataset from a BigQuery dataset

Your first step in training a model is to create a managed dataset instance.

```
dataset = aiplatform.TabularDataset.create(
    display_name="sample-penguins", bq_source=BQ_SOURCE
)
```

## Train a model

There are two ways you can train a custom model using a container image:

- **Use a Google Cloud prebuilt container**. If you use a prebuilt container, you will additionally specify a Python package to install into the container image.
This Python package contains your code for training a custom model.
- **Use your own custom container image**. If you use your own container, the container needs to contain your code for training a custom model.

### Define the command args for the training script

Prepare the command-line arguments to pass to your training script.

- `args`: The command-line arguments to pass to the corresponding Python module. In this example, they will be:
    - `"--epochs=" + EPOCHS`: The number of epochs for training.
    - `"--batch_size=" + BATCH_SIZE`: The batch size for training.
    - `"--distribute=" + TRAIN_STRATEGY`: The training distribution strategy to use for single or distributed training.
        - `"single"`: single device.
        - `"mirror"`: all GPU devices on a single compute instance.
        - `"multi"`: all GPU devices on all compute instances.
    - `"--mean_and_std_json_file=" + FILE_PATH`: The file on Google Cloud Storage with pre-calculated means and standard deviations.

```
JOB_NAME = "custom_job_" + TIMESTAMP
MODEL_DIR = "{}/{}".format(BUCKET_NAME, JOB_NAME)

if not TRAIN_NGPU or TRAIN_NGPU < 2:
    TRAIN_STRATEGY = "single"
else:
    TRAIN_STRATEGY = "mirror"

EPOCHS = 20
BATCH_SIZE = 10

CMDARGS = [
    "--epochs=" + str(EPOCHS),
    "--batch_size=" + str(BATCH_SIZE),
    "--distribute=" + TRAIN_STRATEGY,
    "--mean_and_std_json_file=" + f"{BUCKET_NAME}/{MEAN_AND_STD_JSON_FILE}",
]
```

#### Training script

In the next cell, you will write the contents of the training script, `task.py`. In summary, the script:

- Loads the data from the BigQuery table using the BigQuery Python client library.
- Loads the pre-calculated mean and standard deviation from the Google Cloud Storage bucket.
- Builds a model using the TF.Keras model API.
- Compiles the model (`compile()`).
- Sets a training distribution strategy according to the argument `args.distribute`.
- Trains the model (`fit()`) with epochs and batch size according to the arguments `args.epochs` and `args.batch_size`.
- Gets the directory in which to save the model artifacts from the environment variable `AIP_MODEL_DIR`. This variable is set by the training service.
- Saves the trained model (`save(MODEL_DIR)`) to the specified model directory.

```
%%writefile task.py

import argparse
import json
import os

import numpy as np
import pandas as pd
import tensorflow as tf
from google.cloud import bigquery
from google.cloud import storage

# Read environment variables
training_data_uri = os.environ["AIP_TRAINING_DATA_URI"]
validation_data_uri = os.environ["AIP_VALIDATION_DATA_URI"]
test_data_uri = os.environ["AIP_TEST_DATA_URI"]

# Read args
parser = argparse.ArgumentParser()
parser.add_argument('--epochs', dest='epochs', default=10, type=int,
                    help='Number of epochs.')
parser.add_argument('--batch_size', dest='batch_size', default=10, type=int,
                    help='Batch size.')
parser.add_argument('--distribute', dest='distribute', type=str, default='single',
                    help='Distributed training strategy.')
parser.add_argument('--mean_and_std_json_file', dest='mean_and_std_json_file',
                    type=str,
                    help='GCS URI to the JSON file with pre-calculated column '
                         'means and standard deviations.')
args = parser.parse_args()


def download_blob(bucket_name, source_blob_name, destination_file_name):
    """Downloads a blob from the bucket."""
    # bucket_name = "your-bucket-name"
    # source_blob_name = "storage-object-name"
    # destination_file_name = "local/path/to/file"
    storage_client = storage.Client()
    bucket = storage_client.bucket(bucket_name)

    # Construct a client-side representation of a blob.
    # Note `Bucket.blob` differs from `Bucket.get_blob` as it doesn't retrieve
    # any content from Google Cloud Storage. As we don't need additional data,
    # using `Bucket.blob` is preferred here.
    blob = bucket.blob(source_blob_name)
    blob.download_to_filename(destination_file_name)

    print("Blob {} downloaded to {}.".format(source_blob_name, destination_file_name))


def extract_bucket_and_prefix_from_gcs_path(gcs_path: str):
    """Given a complete GCS path, return the bucket name and prefix as a tuple.

    Example Usage:

        bucket, prefix = extract_bucket_and_prefix_from_gcs_path(
            "gs://example-bucket/path/to/folder"
        )
        # bucket = "example-bucket"
        # prefix = "path/to/folder"

    Args:
        gcs_path (str):
            Required. A full path to a Google Cloud Storage folder or resource.
            Can optionally include "gs://" prefix or end in a trailing slash "/".

    Returns:
        Tuple[str, Optional[str]]
            A (bucket, prefix) pair from provided GCS path. If a prefix is not
            present, a None will be returned in its place.
    """
    if gcs_path.startswith("gs://"):
        gcs_path = gcs_path[5:]
    if gcs_path.endswith("/"):
        gcs_path = gcs_path[:-1]

    gcs_parts = gcs_path.split("/", 1)
    gcs_bucket = gcs_parts[0]
    gcs_blob_prefix = None if len(gcs_parts) == 1 else gcs_parts[1]

    return (gcs_bucket, gcs_blob_prefix)


# Download means and stds
def download_mean_and_std(mean_and_std_json_file):
    """Download mean and std for each column"""
    bucket, file_path = extract_bucket_and_prefix_from_gcs_path(mean_and_std_json_file)
    download_blob(bucket_name=bucket, source_blob_name=file_path,
                  destination_file_name=file_path)
    with open(file_path, 'r') as file:
        return json.loads(file.read())

mean_and_std = download_mean_and_std(args.mean_and_std_json_file)

# Single machine, single compute device
if args.distribute == 'single':
    if tf.test.is_gpu_available():
        strategy = tf.distribute.OneDeviceStrategy(device="/gpu:0")
    else:
        strategy = tf.distribute.OneDeviceStrategy(device="/cpu:0")
# Single machine, multiple compute devices
elif args.distribute == 'mirror':
    strategy = tf.distribute.MirroredStrategy()
# Multiple machines, multiple compute devices
elif args.distribute == 'multi':
    strategy = tf.distribute.experimental.MultiWorkerMirroredStrategy()

# Set up training variables
LABEL_COLUMN = "species"
UNUSED_COLUMNS = []
NA_VALUES = ["NA", "."]

# Possible categorical values
SPECIES = ['Adelie Penguin (Pygoscelis adeliae)',
           'Chinstrap penguin (Pygoscelis antarctica)',
           'Gentoo penguin (Pygoscelis papua)']
ISLANDS = ['Dream', 'Biscoe', 'Torgersen']
SEXES = ['FEMALE', 'MALE']

# Set up BigQuery client
bqclient = bigquery.Client()

# Download a table
def download_table(bq_table_uri: str):
    # Remove bq:// prefix if present
    prefix = "bq://"
    if bq_table_uri.startswith(prefix):
        bq_table_uri = bq_table_uri[len(prefix):]
    table = bigquery.TableReference.from_string(bq_table_uri)
    rows = bqclient.list_rows(table)
    return rows.to_dataframe()

df_train = download_table(training_data_uri)
df_validation = download_table(validation_data_uri)
df_test = download_table(test_data_uri)

# Remove NA values
def clean_dataframe(df):
    return df.replace(to_replace=NA_VALUES, value=np.NaN).dropna()

df_train = clean_dataframe(df_train)
df_validation = clean_dataframe(df_validation)

_CATEGORICAL_TYPES = {
    "island": pd.api.types.CategoricalDtype(categories=ISLANDS),
    "species": pd.api.types.CategoricalDtype(categories=SPECIES),
    "sex": pd.api.types.CategoricalDtype(categories=SEXES),
}


def standardize(df, mean_and_std):
    """Scales numerical columns using their means and standard deviations to
    get z-scores: the mean of each numerical column becomes 0, and the
    standard deviation becomes 1. This can help the model converge during
    training.

    Args:
        df: Pandas df

    Returns:
        Input df with the numerical columns scaled to z-scores
    """
    dtypes = list(zip(df.dtypes.index, map(str, df.dtypes)))
    # Normalize numeric columns.
    for column, dtype in dtypes:
        if dtype == "float32":
            df[column] -= mean_and_std[column]["mean"]
            df[column] /= mean_and_std[column]["std"]
    return df


def preprocess(df):
    """Converts categorical features to numeric and removes unused columns.

    Args:
        df: Pandas df with raw data

    Returns:
        df with preprocessed data
    """
    df = df.drop(columns=UNUSED_COLUMNS)

    # Drop rows with NaN's
    df = df.dropna()

    # Convert integer-valued (numeric) columns to floating point
    numeric_columns = df.select_dtypes(["int32", "float32", "float64"]).columns
    df[numeric_columns] = df[numeric_columns].astype("float32")

    # Convert categorical columns to numeric codes
    cat_columns = df.select_dtypes(["object"]).columns
    df[cat_columns] = df[cat_columns].apply(
        lambda x: x.astype(_CATEGORICAL_TYPES[x.name])
    )
    df[cat_columns] = df[cat_columns].apply(lambda x: x.cat.codes)
    return df


def convert_dataframe_to_dataset(df_train, df_validation, mean_and_std):
    df_train = preprocess(df_train)
    df_validation = preprocess(df_validation)

    df_train_x, df_train_y = df_train, df_train.pop(LABEL_COLUMN)
    df_validation_x, df_validation_y = df_validation, df_validation.pop(LABEL_COLUMN)

    # Join train_x and eval_x to normalize on overall means and standard
    # deviations, then separate them again.
    all_x = pd.concat([df_train_x, df_validation_x], keys=["train", "eval"])
    all_x = standardize(all_x, mean_and_std)
    df_train_x, df_validation_x = all_x.xs("train"), all_x.xs("eval")

    y_train = np.asarray(df_train_y).astype("float32")
    y_validation = np.asarray(df_validation_y).astype("float32")

    # Convert to numpy representation
    x_train = np.asarray(df_train_x)
    x_validation = np.asarray(df_validation_x)

    # Convert to one-hot representation
    y_train = tf.keras.utils.to_categorical(y_train, num_classes=len(SPECIES))
    y_validation = tf.keras.utils.to_categorical(y_validation, num_classes=len(SPECIES))

    dataset_train = tf.data.Dataset.from_tensor_slices((x_train, y_train))
    dataset_validation = tf.data.Dataset.from_tensor_slices((x_validation, y_validation))
    return (dataset_train, dataset_validation)


# Create datasets
dataset_train, dataset_validation = convert_dataframe_to_dataset(
    df_train, df_validation, mean_and_std)

# Shuffle the train set
dataset_train = dataset_train.shuffle(len(df_train))


def create_model(num_features):
    # Create model
    Dense = tf.keras.layers.Dense
    model = tf.keras.Sequential(
        [
            Dense(
                100,
                activation=tf.nn.relu,
                kernel_initializer="uniform",
                input_dim=num_features,
            ),
            Dense(75, activation=tf.nn.relu),
            Dense(50, activation=tf.nn.relu),
            Dense(25, activation=tf.nn.relu),
            Dense(3, activation=tf.nn.softmax),
        ]
    )

    # Compile Keras model
    optimizer = tf.keras.optimizers.RMSprop(learning_rate=0.001)
    model.compile(
        loss="categorical_crossentropy", metrics=["accuracy"], optimizer=optimizer
    )
    return model


# Create the model under the distribution strategy's scope
with strategy.scope():
    model = create_model(num_features=dataset_train._flat_shapes[0].dims[0].value)

# Set up datasets
NUM_WORKERS = strategy.num_replicas_in_sync
# Here the batch size scales up by number of workers since
# `tf.data.Dataset.batch` expects the global batch size.
GLOBAL_BATCH_SIZE = args.batch_size * NUM_WORKERS
dataset_train = dataset_train.batch(GLOBAL_BATCH_SIZE)
dataset_validation = dataset_validation.batch(GLOBAL_BATCH_SIZE)

# Train the model and save it to the directory provided by the training service
model.fit(dataset_train, epochs=args.epochs, validation_data=dataset_validation)
tf.saved_model.save(model, os.environ["AIP_MODEL_DIR"])
```

### Train the model

Define your custom training job on Vertex AI. Use the `CustomTrainingJob` class to define the job, which takes the following parameters:

- `display_name`: The user-defined name of this training pipeline.
- `script_path`: The local path to the training script.
- `container_uri`: The URI of the training container image.
- `requirements`: The list of Python package dependencies of the script.
- `model_serving_container_image_uri`: The URI of a container that can serve predictions for your model — either a prebuilt container or a custom container.

Use the `run` function to start training, which takes the following parameters:

- `args`: The command-line arguments to be passed to the Python script.
- `replica_count`: The number of worker replicas.
- `model_display_name`: The display name of the `Model` if the script produces a managed `Model`.
- `machine_type`: The type of machine to use for training.
- `accelerator_type`: The hardware accelerator type.
- `accelerator_count`: The number of accelerators to attach to a worker replica.

The `run` function creates a training pipeline that trains and creates a `Model` object. After the training pipeline completes, the `run` function returns the `Model` object.
``` job = aiplatform.CustomTrainingJob( display_name=JOB_NAME, script_path="task.py", container_uri=TRAIN_IMAGE, requirements=["google-cloud-bigquery>=2.20.0"], model_serving_container_image_uri=DEPLOY_IMAGE, ) MODEL_DISPLAY_NAME = "penguins-" + TIMESTAMP # Start the training if TRAIN_GPU: model = job.run( dataset=dataset, model_display_name=MODEL_DISPLAY_NAME, bigquery_destination=f"bq://{PROJECT_ID}", args=CMDARGS, replica_count=1, machine_type=TRAIN_COMPUTE, accelerator_type=TRAIN_GPU.name, accelerator_count=TRAIN_NGPU, ) else: model = job.run( dataset=dataset, model_display_name=MODEL_DISPLAY_NAME, bigquery_destination=f"bq://{PROJECT_ID}", args=CMDARGS, replica_count=1, machine_type=TRAIN_COMPUTE, accelerator_count=0, ) ``` ### Deploy the model Before you use your model to make predictions, you need to deploy it to an `Endpoint`. You can do this by calling the `deploy` function on the `Model` resource. This will do two things: 1. Create an `Endpoint` resource for deploying the `Model` resource to. 2. Deploy the `Model` resource to the `Endpoint` resource. The function takes the following parameters: - `deployed_model_display_name`: A human readable name for the deployed model. - `traffic_split`: Percent of traffic at the endpoint that goes to this model, which is specified as a dictionary of one or more key/value pairs. - If only one model, then specify as **{ "0": 100 }**, where "0" refers to this model being uploaded and 100 means 100% of the traffic. - If there are existing models on the endpoint, for which the traffic will be split, then use `model_id` to specify as **{ "0": percent, model_id: percent, ... }**, where `model_id` is the model id of an existing model to the deployed endpoint. The percents must add up to 100. - `machine_type`: The type of machine to use for training. - `accelerator_type`: The hardware accelerator type. - `accelerator_count`: The number of accelerators to attach to a worker replica. 
- `starting_replica_count`: The number of compute instances to initially provision.
- `max_replica_count`: The maximum number of compute instances to scale to. In this tutorial, only one instance is provisioned.

### Traffic split

The `traffic_split` parameter is specified as a Python dictionary. You can deploy more than one instance of your model to an endpoint, and then set the percentage of traffic that goes to each instance.

You can use a traffic split to introduce a new model gradually into production. For example, if you had one existing model in production with 100% of the traffic, you could deploy a new model to the same endpoint, direct 10% of traffic to it, and reduce the original model's traffic to 90%. This allows you to monitor the new model's performance while minimizing the disruption to the majority of users.

### Compute instance scaling

You can specify a single instance (or node) to serve your online prediction requests. This tutorial uses a single node, so the variables `MIN_NODES` and `MAX_NODES` are both set to `1`.

If you want to use multiple nodes to serve your online prediction requests, set `MAX_NODES` to the maximum number of nodes you want to use. Vertex AI autoscales the number of nodes used to serve your predictions, up to the maximum number you set. Refer to the [pricing page](https://cloud.google.com/vertex-ai/pricing#prediction-prices) to understand the costs of autoscaling with multiple nodes.

### Endpoint

The method will block until the model is deployed and eventually return an `Endpoint` object. If this is the first time a model is deployed to the endpoint, it may take a few additional minutes to complete provisioning of resources.
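The percentage arithmetic described above is easy to get wrong once several models share an endpoint, so it can be worth sanity-checking the `traffic_split` dictionary before calling `deploy`. The helper below is purely illustrative (a `validate_traffic_split` function is not part of the Vertex AI SDK); it is a minimal sketch of the invariant that the percentages must sum to 100:

```python
# Hypothetical helper (not part of the Vertex AI SDK): check that a
# traffic_split dictionary routes exactly 100% of traffic.
def validate_traffic_split(traffic_split):
    """Return True if the percentages in the split sum to 100."""
    return sum(traffic_split.values()) == 100

# Single-model endpoint: all traffic goes to the model being deployed ("0").
assert validate_traffic_split({"0": 100})

# Gradual rollout: 10% to the new model, 90% stays on an existing model
# (here "1234567890" stands in for an existing deployed model id).
assert validate_traffic_split({"0": 10, "1234567890": 90})

# Invalid: the percentages only add up to 90.
assert not validate_traffic_split({"0": 60, "1234567890": 30})
```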
```
DEPLOYED_NAME = "penguins_deployed-" + TIMESTAMP

TRAFFIC_SPLIT = {"0": 100}

MIN_NODES = 1
MAX_NODES = 1

if DEPLOY_GPU:
    endpoint = model.deploy(
        deployed_model_display_name=DEPLOYED_NAME,
        traffic_split=TRAFFIC_SPLIT,
        machine_type=DEPLOY_COMPUTE,
        accelerator_type=DEPLOY_GPU.name,
        accelerator_count=DEPLOY_NGPU,
        min_replica_count=MIN_NODES,
        max_replica_count=MAX_NODES,
    )
else:
    endpoint = model.deploy(
        deployed_model_display_name=DEPLOYED_NAME,
        traffic_split=TRAFFIC_SPLIT,
        machine_type=DEPLOY_COMPUTE,
        accelerator_count=0,
        min_replica_count=MIN_NODES,
        max_replica_count=MAX_NODES,
    )
```

## Make an online prediction request

Send an online prediction request to your deployed model.

### Prepare test data

Prepare test data by normalizing it and converting categorical values to numeric values. These values have to match what was used at training. In this example, testing is done with the same dataset that was used for training. In practice, you will want to use a separate dataset to correctly verify your results.

```
import pandas as pd
from google.cloud import bigquery

UNUSED_COLUMNS = []
LABEL_COLUMN = "species"

# Possible categorical values
SPECIES = [
    "Adelie Penguin (Pygoscelis adeliae)",
    "Chinstrap penguin (Pygoscelis antarctica)",
    "Gentoo penguin (Pygoscelis papua)",
]
ISLANDS = ["Dream", "Biscoe", "Torgersen"]
SEXES = ["FEMALE", "MALE"]

_CATEGORICAL_TYPES = {
    "island": pd.api.types.CategoricalDtype(categories=ISLANDS),
    "species": pd.api.types.CategoricalDtype(categories=SPECIES),
    "sex": pd.api.types.CategoricalDtype(categories=SEXES),
}


def standardize(df, mean_and_std):
    """Scales numerical columns using their means and standard deviation to get
    z-scores: the mean of each numerical column becomes 0, and the standard
    deviation becomes 1. This can help the model converge during training.
    Args:
      df: Pandas df

    Returns:
      Input df with the numerical columns scaled to z-scores
    """
    dtypes = list(zip(df.dtypes.index, map(str, df.dtypes)))
    # Normalize numeric columns.
    for column, dtype in dtypes:
        if dtype == "float32":
            df[column] -= mean_and_std[column]["mean"]
            df[column] /= mean_and_std[column]["std"]
    return df


def preprocess(df, mean_and_std):
    """Converts categorical features to numeric. Removes unused columns.

    Args:
      df: Pandas df with raw data

    Returns:
      df with preprocessed data
    """
    df = df.drop(columns=UNUSED_COLUMNS)

    # Drop rows with NaN's
    df = df.dropna()

    # Convert integer valued (numeric) columns to floating point
    numeric_columns = df.select_dtypes(["int32", "float32", "float64"]).columns
    df[numeric_columns] = df[numeric_columns].astype("float32")

    # Convert categorical columns to numeric
    cat_columns = df.select_dtypes(["object"]).columns
    df[cat_columns] = df[cat_columns].apply(
        lambda x: x.astype(_CATEGORICAL_TYPES[x.name])
    )
    df[cat_columns] = df[cat_columns].apply(lambda x: x.cat.codes)
    return df


def convert_dataframe_to_list(df, mean_and_std):
    df = preprocess(df, mean_and_std)
    df_x, df_y = df, df.pop(LABEL_COLUMN)

    # Normalize on overall means and standard deviations.
    df = standardize(df, mean_and_std)

    y = np.asarray(df_y).astype("float32")

    # Convert to numpy representation
    x = np.asarray(df_x)

    # Return as plain Python lists, as expected by the prediction request
    return x.tolist(), y.tolist()


x_test, y_test = convert_dataframe_to_list(dataframe, mean_and_std)
```

### Send the prediction request

Now that you have test data, you can use it to send a prediction request. Use the `Endpoint` object's `predict` function, which takes the following parameters:

- `instances`: A list of penguin measurement instances. According to your custom model, each instance should be an array of numbers. This was prepared in the previous step.

The `predict` function returns a list, where each element corresponds to the instance at the same position in the request.
You will see in the output for each prediction:

- Confidence level for the prediction (`predictions`), between 0 and 1, for each of the three classes.

You can then run a quick evaluation on the prediction results:

1. `np.argmax`: Convert each list of confidence levels to a label
2. Compare the predicted labels to the actual labels
3. Calculate `accuracy` as `correct/total`

```
predictions = endpoint.predict(instances=x_test)
y_predicted = np.argmax(predictions.predictions, axis=1)

correct = sum(y_predicted == np.array(y_test))
total = len(y_predicted)
print(
    f"Correct predictions = {correct}, Total predictions = {total}, Accuracy = {correct/total}"
)
```

## Undeploy the model

To undeploy your `Model` resource from the serving `Endpoint` resource, use the endpoint's `undeploy` method with the following parameter:

- `deployed_model_id`: The model deployment identifier returned by the endpoint service when the `Model` resource was deployed. You can retrieve the deployed models using the endpoint's `deployed_models` property.

Since this is the only deployed model on the `Endpoint` resource, you can omit `traffic_split`.

```
deployed_model_id = endpoint.list_models()[0].id
endpoint.undeploy(deployed_model_id=deployed_model_id)
```

# Cleaning up

To clean up all Google Cloud resources used in this project, you can [delete the Google Cloud project](https://cloud.google.com/resource-manager/docs/creating-managing-projects#shutting_down_projects) you used for the tutorial.

Otherwise, you can delete the individual resources you created in this tutorial:

- Training Job
- Model
- Endpoint
- Cloud Storage Bucket

```
delete_training_job = True
delete_model = True
delete_endpoint = True

# Warning: Setting this to true will delete everything in your bucket
delete_bucket = False

# Delete the training job
job.delete()

# Delete the model
model.delete()

# Delete the endpoint
endpoint.delete()

if delete_bucket and "BUCKET_NAME" in globals():
    ! gsutil -m rm -r $BUCKET_NAME
```
## Explore one-hit vs. two-hit samples in expression space ``` from pathlib import Path import pickle as pkl import numpy as np import pandas as pd import seaborn as sns import matplotlib.pyplot as plt from sklearn.preprocessing import StandardScaler import sys; sys.path.append('..') import config as cfg from data_utilities import load_cnv_data %load_ext autoreload %autoreload 2 # park et al. geneset info park_loss_data = cfg.data_dir / 'park_loss_df.tsv' park_gain_data = cfg.data_dir / 'park_gain_df.tsv' # park et al. significant gene info park_loss_sig_data = cfg.data_dir / 'park_loss_df_sig_only.tsv' park_gain_sig_data = cfg.data_dir / 'park_gain_df_sig_only.tsv' # park et al. gene/cancer type predictions park_preds_dir = cfg.data_dir / 'park_genes_all_preds' # mutation and copy number data pancancer_pickle = Path('/home/jake/research/mpmp/data/pancancer_data.pkl') # gene expression/rppa data files data_type = 'gene expression' subset_feats = 10000 gene_expression_data_file = Path( '/home/jake/research/mpmp/data/tcga_expression_matrix_processed.tsv.gz' ) rppa_data_file = Path( '/home/jake/research/mpmp/data/tcga_rppa_matrix_processed.tsv' ) ``` ### Load mutation info For now, just use binary mutation status from the pancancer repo. In the future we could pull more granular info from MC3, but it would take some engineering of `1_get_mutation_counts` to do this for lots of genes. ``` park_loss_df = pd.read_csv(park_loss_data, sep='\t', index_col=0) park_loss_df.head() park_gain_df = pd.read_csv(park_gain_data, sep='\t', index_col=0) park_gain_df.head() with open(pancancer_pickle, 'rb') as f: pancancer_data = pkl.load(f) # get (binary) mutation data # 1 = observed non-silent mutation in this gene for this sample, 0 otherwise mutation_df = pancancer_data[1] print(mutation_df.shape) mutation_df.iloc[:5, :5] ``` ### Load copy number info Get copy loss/gain info directly from GISTIC "thresholded" output. 
This should be the same as (or very similar to) what the Park et al. study uses. ``` sample_freeze_df = pancancer_data[0] copy_samples = set(sample_freeze_df.SAMPLE_BARCODE) print(len(copy_samples)) copy_loss_df, copy_gain_df = load_cnv_data( cfg.data_dir / 'pancan_GISTIC_threshold.tsv', copy_samples ) print(copy_loss_df.shape) copy_loss_df.iloc[:5, :5] print(copy_gain_df.shape) copy_gain_df.iloc[:5, :5] sample_freeze_df.head() ``` ### Load expression data We'll also standardize each feature, and subset to the top features by mean absolute deviation if `subset_feats` is set. ``` if data_type == 'gene expression': exp_df = pd.read_csv(gene_expression_data_file, sep='\t', index_col=0) elif data_type == 'rppa': exp_df = pd.read_csv(rppa_data_file, sep='\t', index_col=0) print(exp_df.shape) exp_df.iloc[:5, :5] # standardize features first exp_df = pd.DataFrame( StandardScaler().fit_transform(exp_df), index=exp_df.index.copy(), columns=exp_df.columns.copy() ) print(exp_df.shape) exp_df.iloc[:5, :5] # subset to subset_feats features by mean absolute deviation if subset_feats is not None: mad_ranking = ( exp_df.mad(axis=0) .sort_values(ascending=False) ) top_feats = mad_ranking[:subset_feats].index.astype(str).values exp_mad_df = exp_df.reindex(top_feats, axis='columns') else: exp_mad_df = exp_df print(exp_mad_df.shape) exp_mad_df.iloc[:5, :5] ``` ### Get sample info and hit groups for gene/cancer type ``` def get_hits_for_gene_and_tissue(identifier, cancer_classification): """Given a gene and tissue, load the relevant mutation/CNV information, and divide the samples into groups to compare survival. 
""" # get patient ids in given cancer type gene, tissue = identifier.split('_') tissue_ids = (sample_freeze_df .query('DISEASE == @tissue') .SAMPLE_BARCODE ) # get mutation and copy status mutation_status = mutation_df.loc[tissue_ids, gene] if cancer_classification == 'TSG': copy_status = copy_loss_df.loc[tissue_ids, gene] elif cancer_classification == 'Oncogene': copy_status = copy_gain_df.loc[tissue_ids, gene] # get hit groups from mutation/CNV data two_hit_samples = (mutation_status & copy_status).astype(int) one_hit_samples = (mutation_status | copy_status).astype(int) return pd.DataFrame( {'group': one_hit_samples + two_hit_samples} ) identifier = 'ATRX_LGG' cancer_classification = 'Oncogene' sample_mut_df = get_hits_for_gene_and_tissue(identifier, cancer_classification) # make sure sample data overlaps exactly with expression data overlap_ixs = sample_mut_df.index.intersection(exp_mad_df.index) sample_mut_df = sample_mut_df.loc[overlap_ixs, :].copy() exp_mad_df = exp_mad_df.loc[overlap_ixs, :].copy() # add group info for legends sample_mut_df['group'] = sample_mut_df.group.map({ 0: 'wild-type', 1: 'one-hit', 2: 'two-hit' }) print(sample_mut_df.shape) print(sample_mut_df.group.unique()) sample_mut_df.iloc[:5, :5] ``` ### Plot samples by hit group ``` from sklearn.decomposition import PCA pca = PCA(n_components=2) X_proj_pca = pca.fit_transform(exp_mad_df) print(X_proj_pca.shape) X_proj_pca[:5, :5] sns.set({'figure.figsize': (8, 6)}) sns.scatterplot(x=X_proj_pca[:, 0], y=X_proj_pca[:, 1], hue=sample_mut_df.group, hue_order=['wild-type', 'one-hit', 'two-hit']) plt.title('PCA of {} {} features, colored by {} status'.format( subset_feats, data_type, identifier)) plt.xlabel('PC1') plt.ylabel('PC2') from umap import UMAP reducer = UMAP(n_components=2, random_state=42) X_proj_umap = reducer.fit_transform(exp_mad_df) print(X_proj_umap.shape) X_proj_umap[:5, :5] sns.set({'figure.figsize': (8, 6)}) sns.scatterplot(x=X_proj_umap[:, 0], y=X_proj_umap[:, 1], 
hue=sample_mut_df.group, hue_order=['wild-type', 'one-hit', 'two-hit']) plt.title('UMAP of {} {} features, colored by {} status'.format( subset_feats, data_type, identifier)) plt.xlabel('UMAP1') plt.ylabel('UMAP2') ``` ### Plot samples by hit group, using features selected by pan-cancer classifiers ``` coefs_file = Path( '/home/jake/research/mpmp/data/final_models/final_expression_all_merged_coefs.tsv' ) coefs_df = pd.read_csv(coefs_file, sep='\t', index_col=0) coefs_df.iloc[:5, :5] gene, tissue = identifier.split('_') coefs_gene = coefs_df.loc[:, gene] coefs_gene = coefs_gene[(~coefs_gene.isna()) & (~(coefs_gene == 0.0)) & # get rid of log10_mut and cancer type covariates (coefs_gene.index.astype(str).str.isdigit())] coefs_gene.index = coefs_gene.index.astype(str) print(coefs_gene.shape) coefs_gene.head() print(coefs_gene.index) print(coefs_gene.index.isna().sum()) exp_coefs_df = exp_df.loc[overlap_ixs, coefs_gene.index].copy() print(exp_coefs_df.shape) exp_coefs_df.iloc[:5, :5] sns.set({'figure.figsize': (8, 6)}) pca = PCA(n_components=2) X_proj_pca = pca.fit_transform(exp_coefs_df) sns.scatterplot(x=X_proj_pca[:, 0], y=X_proj_pca[:, 1], hue=sample_mut_df.group, hue_order=['wild-type', 'one-hit', 'two-hit']) plt.title('PCA of non-zero {} features, colored by {} status'.format( data_type, identifier)) plt.xlabel('PC1') plt.ylabel('PC2') sns.set({'figure.figsize': (8, 6)}) reducer = UMAP(n_components=2, random_state=42) X_proj_umap = reducer.fit_transform(exp_coefs_df) sns.scatterplot(x=X_proj_umap[:, 0], y=X_proj_umap[:, 1], hue=sample_mut_df.group, hue_order=['wild-type', 'one-hit', 'two-hit']) plt.title('UMAP of nonzero {} features, colored by {} status'.format( data_type, identifier)) plt.xlabel('UMAP1') plt.ylabel('UMAP2') ```
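The 0/1/2 hit-group encoding computed in `get_hits_for_gene_and_tissue` above relies on a small piece of Boolean arithmetic: summing the OR (any hit) and AND (both hits) of mutation and copy-number status yields the group label directly. A minimal sanity check on toy values (standalone Python, not tied to the dataframes used above):

```python
def hit_group(mutated, copy_altered):
    """Encode a sample's hit group from its mutation and copy-number status:
    0 = wild-type, 1 = one-hit (either event), 2 = two-hit (both events)."""
    one_hit = int(mutated or copy_altered)   # any hit at all
    two_hit = int(mutated and copy_altered)  # both hits together
    return one_hit + two_hit

assert hit_group(False, False) == 0  # wild-type
assert hit_group(True, False) == 1   # point mutation only
assert hit_group(False, True) == 1   # copy-number event only
assert hit_group(True, True) == 2    # two-hit sample
```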
<table style="float:left; border:none">
   <tr style="border:none">
       <td style="border:none">
           <a href="http://bokeh.pydata.org/">
           <img src="http://bokeh.pydata.org/en/latest/_static/bokeh-transparent.png" style="width:70px" >
           </a>
       </td>
       <td style="border:none">
           <h1>Bokeh Tutorial &mdash; <tt style="display:inline">bokeh.models</tt> interface</h1>
       </td>
   </tr>
</table>

## Models

NYTimes interactive chart [Usain Bolt vs. 116 years of Olympic sprinters](http://www.nytimes.com/interactive/2012/08/05/sports/olympics/the-100-meter-dash-one-race-every-medalist-ever.html)

The first thing we need is to get the data. The data for this chart is located in the ``bokeh.sampledata`` module as a Pandas DataFrame. You can see the first ten rows below:

```
from bokeh.sampledata.sprint import sprint
sprint[:10]
```

Next we import some of the Bokeh models that need to be assembled to make a plot. At a minimum, we need to start with ``Plot``, the glyphs (``Circle`` and ``Text``) we want to display, as well as ``ColumnDataSource`` to hold the data and range objects to set the plot bounds.
``` from bokeh.io import output_notebook, show from bokeh.models.glyphs import Circle, Text from bokeh.models import ColumnDataSource, Range1d, DataRange1d, Plot output_notebook() ``` ## Setting up Data ``` abbrev_to_country = { "USA": "United States", "GBR": "Britain", "JAM": "Jamaica", "CAN": "Canada", "TRI": "Trinidad and Tobago", "AUS": "Australia", "GER": "Germany", "CUB": "Cuba", "NAM": "Namibia", "URS": "Soviet Union", "BAR": "Barbados", "BUL": "Bulgaria", "HUN": "Hungary", "NED": "Netherlands", "NZL": "New Zealand", "PAN": "Panama", "POR": "Portugal", "RSA": "South Africa", "EUA": "United Team of Germany", } gold_fill = "#efcf6d" gold_line = "#c8a850" silver_fill = "#cccccc" silver_line = "#b0b0b1" bronze_fill = "#c59e8a" bronze_line = "#98715d" fill_color = { "gold": gold_fill, "silver": silver_fill, "bronze": bronze_fill } line_color = { "gold": gold_line, "silver": silver_line, "bronze": bronze_line } def selected_name(name, medal, year): return name if medal == "gold" and year in [1988, 1968, 1936, 1896] else None t0 = sprint.Time[0] sprint["Abbrev"] = sprint.Country sprint["Country"] = sprint.Abbrev.map(lambda abbr: abbrev_to_country[abbr]) sprint["Medal"] = sprint.Medal.map(lambda medal: medal.lower()) sprint["Speed"] = 100.0/sprint.Time sprint["MetersBack"] = 100.0*(1.0 - t0/sprint.Time) sprint["MedalFill"] = sprint.Medal.map(lambda medal: fill_color[medal]) sprint["MedalLine"] = sprint.Medal.map(lambda medal: line_color[medal]) sprint["SelectedName"] = sprint[["Name", "Medal", "Year"]].apply(tuple, axis=1).map(lambda args: selected_name(*args)) source = ColumnDataSource(sprint) ``` ## Basic Plot with Glyphs ``` plot_options = dict(plot_width=800, plot_height=480, toolbar_location=None, outline_line_color=None, title = "Usain Bolt vs. 
116 years of Olympic sprinters") radius = dict(value=5, units="screen") medal_glyph = Circle(x="MetersBack", y="Year", radius=radius, fill_color="MedalFill", line_color="MedalLine", fill_alpha=0.5) athlete_glyph = Text(x="MetersBack", y="Year", x_offset=10, text="SelectedName", text_align="left", text_baseline="middle", text_font_size="9pt") no_olympics_glyph = Text(x=7.5, y=1942, text=["No Olympics in 1940 or 1944"], text_align="center", text_baseline="middle", text_font_size="9pt", text_font_style="italic", text_color="silver") xdr = Range1d(start=sprint.MetersBack.max()+2, end=0) # +2 is for padding ydr = DataRange1d(range_padding=0.05) plot = Plot(x_range=xdr, y_range=ydr, **plot_options) plot.add_glyph(source, medal_glyph) plot.add_glyph(source, athlete_glyph) plot.add_glyph(no_olympics_glyph) show(plot) ``` ## Adding Axes and Grids ``` from bokeh.models import Grid, LinearAxis, SingleIntervalTicker xdr = Range1d(start=sprint.MetersBack.max()+2, end=0) # +2 is for padding ydr = DataRange1d(range_padding=0.05) plot = Plot(x_range=xdr, y_range=ydr, **plot_options) plot.add_glyph(source, medal_glyph) plot.add_glyph(source, athlete_glyph) plot.add_glyph(no_olympics_glyph) xticker = SingleIntervalTicker(interval=5, num_minor_ticks=0) xaxis = LinearAxis(ticker=xticker, axis_line_color=None, major_tick_line_color=None, axis_label="Meters behind 2012 Bolt", axis_label_text_font_size="10pt", axis_label_text_font_style="bold") plot.add_layout(xaxis, "below") xgrid = Grid(dimension=0, ticker=xaxis.ticker, grid_line_dash="dashed") plot.add_layout(xgrid) yticker = SingleIntervalTicker(interval=12, num_minor_ticks=0) yaxis = LinearAxis(ticker=yticker, major_tick_in=-5, major_tick_out=10) plot.add_layout(yaxis, "right") show(plot) ``` ## Adding a Hover Tool ``` from bokeh.models import HoverTool tooltips = """ <div> <span style="font-size: 15px;">@Name</span>&nbsp; <span style="font-size: 10px; color: #666;">(@Abbrev)</span> </div> <div> <span style="font-size: 17px; 
font-weight: bold;">@Time{0.00}</span>&nbsp;
    <span style="font-size: 10px; color: #666;">@Year</span>
</div>
<div style="font-size: 11px; color: #666;">@{MetersBack}{0.00} meters behind</div>
"""

xdr = Range1d(start=sprint.MetersBack.max()+2, end=0)  # +2 is for padding
ydr = DataRange1d(range_padding=0.05)

plot = Plot(x_range=xdr, y_range=ydr, **plot_options)

medal = plot.add_glyph(source, medal_glyph)  # we need this renderer to configure the hover tool
plot.add_glyph(source, athlete_glyph)
plot.add_glyph(no_olympics_glyph)

xticker = SingleIntervalTicker(interval=5, num_minor_ticks=0)
xaxis = LinearAxis(ticker=xticker, axis_line_color=None, major_tick_line_color=None,
                   axis_label="Meters behind 2012 Bolt", axis_label_text_font_size="10pt",
                   axis_label_text_font_style="bold")
plot.add_layout(xaxis, "below")

xgrid = Grid(dimension=0, ticker=xaxis.ticker, grid_line_dash="dashed")
plot.add_layout(xgrid)

yticker = SingleIntervalTicker(interval=12, num_minor_ticks=0)
yaxis = LinearAxis(ticker=yticker, major_tick_in=-5, major_tick_out=10)
plot.add_layout(yaxis, "right")

hover = HoverTool(tooltips=tooltips, renderers=[medal])
plot.add_tools(hover)

show(plot)

from bubble_plot import get_1964_data

def get_plot():
    return Plot(
        x_range=Range1d(1, 9), y_range=Range1d(20, 100),
        title="", plot_width=800, plot_height=400,
        outline_line_color=None, toolbar_location=None,
    )

df = get_1964_data()
df.head()

# EXERCISE: Add Circles to the plot from the data in `df`.
# With `fertility` for the x coordinates, `life` for the y coordinates.
plot = get_plot()

# EXERCISE: Color the circles by region_color & change the size of the circles by population

# EXERCISE: Add axes and grid lines

# EXERCISE: Manually add a legend using Circle & Text. The color key is as follows

region_name_and_color = [
    ('America', '#3288bd'),
    ('East Asia & Pacific', '#99d594'),
    ('Europe & Central Asia', '#e6f598'),
    ('Middle East & North Africa', '#fee08b'),
    ('South Asia', '#fc8d59'),
    ('Sub-Saharan Africa', '#d53e4f')
]
```
```
import matplotlib.pyplot as plt
import numpy as np
import pymc3 as pm
import theano
from scipy.integrate import odeint
from theano import *

THEANO_FLAGS = "optimizer=fast_compile"
```

# Lotka-Volterra with manual gradients

by [Sanmitra Ghosh](https://www.mrc-bsu.cam.ac.uk/people/in-alphabetical-order/a-to-g/sanmitra-ghosh/)

Mathematical models are used ubiquitously in a variety of science and engineering domains to model the time evolution of physical variables. These mathematical models are often described as ODEs that are characterised by model structure - the functions of the dynamical variables - and model parameters. However, for the vast majority of systems of practical interest it is necessary to infer both the model parameters and an appropriate model structure from experimental observations. This experimental data often appears to be scarce and incomplete. Furthermore, a large variety of models described as dynamical systems show traits of sloppiness (see [Gutenkunst et al., 2007](https://journals.plos.org/ploscompbiol/article?id=10.1371/journal.pcbi.0030189)) and have unidentifiable parameter combinations. The task of inferring model parameters and structure from experimental data is of paramount importance to reliably analyse the behaviour of dynamical systems and draw faithful predictions in light of the difficulties posed by their complexities. Moreover, any future model prediction should encompass and propagate variability and uncertainty in model parameters and/or structure. Thus, it is also important that the inference methods are equipped to quantify and propagate the aforementioned uncertainties from the model descriptions to model predictions. As a natural choice to handle uncertainty, at least in the parameters, Bayesian inference is increasingly used to fit ODE models to experimental data ([Mark Girolami, 2008](https://www.sciencedirect.com/science/article/pii/S030439750800501X)).
However, due to some of the difficulties that I pointed above, fitting an ODE model using Bayesian inference is a challenging task. In this tutorial I am going to take up that challenge and will show how PyMC3 could be potentially used for this purpose. I must point out that model fitting (inference of the unknown parameters) is just one of many crucial tasks that a modeller has to complete in order to gain a deeper understanding of a physical process. However, success in this task is crucial and this is where PyMC3, and probabilistic programming (ppl) in general, is extremely useful. The modeller can take full advantage of the variety of samplers and distributions provided by PyMC3 to automate inference. In this tutorial I will focus on the fitting exercise, that is estimating the posterior distribution of the parameters given some noisy experimental time series. ## Bayesian inference of the parameters of an ODE I begin by first introducing the Bayesian framework for inference in a coupled non-linear ODE defined as $$ \frac{d X(t)}{dt}=\boldsymbol{f}\big(X(t),\boldsymbol{\theta}\big), $$ where $X(t)\in\mathbb{R}^K$ is the solution, at each time point, of the system composed of $K$ coupled ODEs - the state vector - and $\boldsymbol{\theta}\in\mathbb{R}^D$ is the parameter vector that we wish to infer. $\boldsymbol{f}(\cdot)$ is a non-linear function that describes the governing dynamics. Also, in case of an initial value problem, let the matrix $\boldsymbol{X}(\boldsymbol{\theta}, \mathbf{x_0})$ denote the solution of the above system of equations at some specified time points for the parameters $\boldsymbol{\theta}$ and initial conditions $\mathbf{x_0}$. Consider a set of noisy experimental observations $\boldsymbol{Y} \in \mathbb{R}^{T\times K}$ observed at $T$ experimental time points for the $K$ states. 
We can obtain the likelihood $p(\boldsymbol{Y}|\boldsymbol{X})$, where I use the symbol $\boldsymbol{X}:=\boldsymbol{X}(\boldsymbol{\theta}, \mathbf{x_0})$, and combine that with a prior distribution $p(\boldsymbol{\theta})$ on the parameters, using the Bayes theorem, to obtain the posterior distribution as $$ p(\boldsymbol{\theta}|\boldsymbol{Y})=\frac{1}{Z}p(\boldsymbol{Y}|\boldsymbol{X})p(\boldsymbol{\theta}), $$ where $Z=\int p(\boldsymbol{Y}|\boldsymbol{X})p(\boldsymbol{\theta}) d\boldsymbol{\theta} $ is the intractable marginal likelihood. Due to this intractability we resort to approximate inference and apply MCMC. For this tutorial I have chosen two ODEs: 1. The [__Lotka-Volterra predator prey model__ ](http://www.scholarpedia.org/article/Predator-prey_model) 2. The [__Fitzhugh-Nagumo action potential model__](http://www.scholarpedia.org/article/FitzHugh-Nagumo_model) I will showcase two distinctive approaches (__NUTS__ and __SMC__ step methods), supported by PyMC3, for the estimation of unknown parameters in these models. ## Lotka-Volterra predator prey model The Lotka Volterra model depicts an ecological system that is used to describe the interaction between a predator and prey species. This ODE given by $$ \begin{aligned} \frac{d x}{dt} &=\alpha x -\beta xy \\ \frac{d y}{dt} &=-\gamma y + \delta xy, \end{aligned} $$ shows limit cycle behaviour and has often been used for benchmarking Bayesian inference methods. $\boldsymbol{\theta}=(\alpha,\beta,\gamma,\delta, x(0),y(0))$ is the set of unknown parameters that we wish to infer from experimental observations of the state vector $X(t)=(x(t),y(t))$ comprising the concentrations of the prey and the predator species respectively. $x(0), y(0)$ are the initial values of the states needed to solve the ODE, which are also treated as unknown quantities. The predator prey model was recently used to demonstrate the applicability of the NUTS sampler, and the Stan ppl in general, for inference in ODE models. 
I will closely follow [this](https://mc-stan.org/users/documentation/case-studies/lotka-volterra-predator-prey.html) Stan tutorial and thus I will set up this model and the associated inference problem (including the data) exactly as was done for the Stan tutorial. Let me first write down the code to solve this ODE using SciPy's `odeint`. Note that the methods in this tutorial are not limited or tied to `odeint`. Here I have chosen `odeint` simply to stay within PyMC3's dependencies (SciPy in this case).

```
class LotkaVolterraModel:
    def __init__(self, y0=None):
        self._y0 = y0

    def simulate(self, parameters, times):
        alpha, beta, gamma, delta, Xt0, Yt0 = [x for x in parameters]

        def rhs(y, t, p):
            X, Y = y
            dX_dt = alpha * X - beta * X * Y
            dY_dt = -gamma * Y + delta * X * Y
            return dX_dt, dY_dt

        values = odeint(rhs, [Xt0, Yt0], times, (parameters,))
        return values


ode_model = LotkaVolterraModel()
```

## Handling ODE gradients

NUTS requires the gradient of the log of the target density w.r.t. the unknown parameters, $\nabla_{\boldsymbol{\theta}}p(\boldsymbol{\theta}|\boldsymbol{Y})$, which can be evaluated using the chain rule of differentiation as

$$ \nabla_{\boldsymbol{\theta}}p(\boldsymbol{\theta}|\boldsymbol{Y}) = \frac{\partial p(\boldsymbol{\theta}|\boldsymbol{Y})}{\partial \boldsymbol{X}}^T \frac{\partial \boldsymbol{X}}{\partial \boldsymbol{\theta}}.$$

The gradient of an ODE w.r.t. its parameters, the term $\frac{\partial \boldsymbol{X}}{\partial \boldsymbol{\theta}}$, can be obtained using local sensitivity analysis, although this is not the only method to obtain gradients. However, just like solving an ODE (a non-linear one to be precise), evaluation of the gradients can only be carried out using some sort of numerical method, say for example the famous Runge-Kutta method for non-stiff ODEs. PyMC3 uses Theano as the automatic differentiation engine and thus all models are implemented by stitching together available primitive operations (Ops) supported by Theano.
Even to extend PyMC3 we need to compose models that can be expressed as symbolic combinations of Theano's Ops. However, if we take a step back and think about Theano, then it is apparent that neither the ODE solution nor its gradient w.r.t. the parameters can be expressed symbolically as combinations of Theano’s primitive Ops. Hence, from Theano’s perspective an ODE (and for that matter any other form of non-linear differential equation) is a non-differentiable black-box function. However, one might argue that if a numerical method is coded up in Theano (using, say, the `scan` Op), then it is possible to symbolically express the numerical method that evaluates the ODE states, and then we can easily use Theano’s automatic differentiation engine to obtain the gradients as well, by differentiating through the numerical solver itself. I would like to point out that the former, obtaining the solution, is indeed possible this way, but the gradient so obtained would be error-prone. Additionally, this amounts to completely ‘re-inventing the wheel’, as one would have to implement decades-old sophisticated numerical algorithms again from scratch in Theano.

Thus, in this tutorial I am going to present the alternative approach, which consists of defining new [custom Theano Ops](http://deeplearning.net/software/theano_versions/dev/extending/extending_theano.html), extending Theano, that will wrap both the numerical solution and the vector-matrix product, $ \frac{\partial p(\boldsymbol{\theta}|\boldsymbol{Y})}{\partial \boldsymbol{X}}^T \frac{\partial \boldsymbol{X}}{\partial \boldsymbol{\theta}}$, often known as the _**vector-Jacobian product**_ (VJP) in the automatic differentiation literature. I would like to point out here that in the context of non-linear ODEs the term Jacobian is used to denote the gradient of the ODE dynamics $\boldsymbol{f}$ w.r.t. the ODE states $X(t)$.
Thus, to avoid confusion, from now on I will use the term _**vector-sensitivity product**_ (VSP) to denote the same quantity that the term VJP denotes.

I will start by introducing the forward sensitivity analysis.

## ODE sensitivity analysis

For a coupled ODE system $\frac{d X(t)}{dt} = \boldsymbol{f}(X(t),\boldsymbol{\theta})$, the local sensitivity of the solution to a parameter is defined by how much the solution would change with changes in the parameter; i.e. the sensitivity of the $k$-th state w.r.t. the $d$-th parameter is simply the time evolution of its gradient w.r.t. that parameter. This quantity, denoted as $Z_{kd}(t)$, is given by

$$Z_{kd}(t)=\frac{d }{d t} \left\{\frac{\partial X_k (t)}{\partial \theta_d}\right\} = \sum_{i=1}^K \frac{\partial f_k}{\partial X_i (t)}\frac{\partial X_i (t)}{\partial \theta_d} + \frac{\partial f_k}{\partial \theta_d}.$$

Using forward sensitivity analysis we can obtain both the state $X(t)$ and its derivative w.r.t. the parameters, at each time point, as the solution to an initial value problem, by augmenting the original ODE system with the sensitivity equations $Z_{kd}$. The augmented ODE system $\big(X(t), Z(t)\big)$ can then be solved together using a chosen numerical method. The augmented ODE system needs initial values for the sensitivity equations. All of these should be set to zero, except the ones where the sensitivity of a state w.r.t. its own initial value is sought, that is $ \frac{\partial X_k(t)}{\partial X_k (0)} =1 $ at $t=0$. Note that in order to solve this augmented system we have to embark on the tedious process of deriving the $ \frac{\partial f_k}{\partial X_i (t)}$ terms, also known as the Jacobian of an ODE, and the $\frac{\partial f_k}{\partial \theta_d}$ terms. Thankfully, many ODE solvers calculate these terms and solve the augmented system when asked to by the user. An example would be the [SUNDIALS CVODES solver suite](https://computation.llnl.gov/projects/sundials/cvodes).
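Before tackling the Lotka-Volterra sensitivities, the augmented-system idea above can be checked on a toy problem where everything is known in closed form. This is my own illustrative example, not part of the tutorial: for the scalar ODE $dx/dt = \theta x$, the sensitivity $Z = \partial x/\partial\theta$ obeys $dZ/dt = (\partial f/\partial x)Z + \partial f/\partial\theta = \theta Z + x$, and the analytic answer is $Z(t) = t\,x_0 e^{\theta t}$:

```python
import numpy as np
from scipy.integrate import odeint

theta, x0 = 0.5, 2.0

def aug_rhs(xz, t):
    # augmented system: the state x and its sensitivity Z = dx/dtheta
    x, z = xz
    return [theta * x, theta * z + x]  # dZ/dt = (df/dx) Z + df/dtheta

times = np.linspace(0, 3, 31)
sol = odeint(aug_rhs, [x0, 0.0], times)  # Z(0) = 0, as discussed above
x_num, z_num = sol[:, 0], sol[:, 1]

# closed-form solution: x = x0 e^{theta t},  Z = t x0 e^{theta t}
x_true = x0 * np.exp(theta * times)
z_true = times * x0 * np.exp(theta * times)
print(np.max(np.abs(z_num - z_true)))  # tiny: the augmented solve matches
```

The same recipe, with $K$ states and $D$ parameters, is exactly what the Lotka-Volterra code below implements in matrix form.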
A Python wrapper for CVODES can be found [here](https://jmodelica.org/assimulo/). However, for this tutorial I will go ahead and derive the terms mentioned above manually, and solve the Lotka-Volterra ODEs along with the sensitivities in the following code block. The functions `jac` and `dfdp` below calculate $ \frac{\partial f_k}{\partial X_i (t)}$ and $\frac{\partial f_k}{\partial \theta_d}$ respectively for the Lotka-Volterra model. For convenience I have transformed the sensitivity equations into matrix form. Here I have extended the solver code snippet above to include the sensitivities when asked for.

```
n_states = 2
n_odeparams = 4
n_ivs = 2


class LotkaVolterraModel:
    def __init__(self, n_states, n_odeparams, n_ivs, y0=None):
        self._n_states = n_states
        self._n_odeparams = n_odeparams
        self._n_ivs = n_ivs
        self._y0 = y0

    def simulate(self, parameters, times):
        return self._simulate(parameters, times, False)

    def simulate_with_sensitivities(self, parameters, times):
        return self._simulate(parameters, times, True)

    def _simulate(self, parameters, times, sensitivities):
        alpha, beta, gamma, delta, Xt0, Yt0 = [x for x in parameters]

        def r(y, t, p):
            X, Y = y
            dX_dt = alpha * X - beta * X * Y
            dY_dt = -gamma * Y + delta * X * Y
            return dX_dt, dY_dt

        if sensitivities:

            def jac(y):
                X, Y = y
                ret = np.zeros((self._n_states, self._n_states))
                ret[0, 0] = alpha - beta * Y
                ret[0, 1] = -beta * X
                ret[1, 0] = delta * Y
                ret[1, 1] = -gamma + delta * X
                return ret

            def dfdp(y):
                X, Y = y
                ret = np.zeros(
                    (self._n_states, self._n_odeparams + self._n_ivs)
                )  # all zeros except the following entries
                ret[0, 0] = X  # \frac{\partial [\alpha X - \beta XY]}{\partial \alpha}, and so on...
                ret[0, 1] = -X * Y
                ret[1, 2] = -Y
                ret[1, 3] = X * Y
                return ret

            def rhs(y_and_dydp, t, p):
                y = y_and_dydp[0 : self._n_states]
                dydp = y_and_dydp[self._n_states :].reshape(
                    (self._n_states, self._n_odeparams + self._n_ivs)
                )
                dydt = r(y, t, p)
                d_dydp_dt = np.matmul(jac(y), dydp) + dfdp(y)
                return np.concatenate((dydt, d_dydp_dt.reshape(-1)))

            y0 = np.zeros((2 * (n_odeparams + n_ivs)) + n_states)
            y0[6] = 1.0  # \frac{\partial [X]}{\partial Xt0} at t==0, and same below for Y
            y0[13] = 1.0
            y0[0:n_states] = [Xt0, Yt0]
            result = odeint(rhs, y0, times, (parameters,), rtol=1e-6, atol=1e-5)
            values = result[:, 0 : self._n_states]
            dvalues_dp = result[:, self._n_states :].reshape(
                (len(times), self._n_states, self._n_odeparams + self._n_ivs)
            )
            return values, dvalues_dp
        else:
            values = odeint(r, [Xt0, Yt0], times, (parameters,), rtol=1e-6, atol=1e-5)
            return values


ode_model = LotkaVolterraModel(n_states, n_odeparams, n_ivs)
```

For this model I have set the relative and absolute tolerances to $10^{-6}$ and $10^{-5}$ respectively, as was suggested in the Stan tutorial. This will produce sufficiently accurate solutions. Further reducing the tolerances will increase accuracy, but at the cost of increased computational time. A thorough discussion of the choice and use of a numerical method for solving the ODE is out of the scope of this tutorial. However, I must point out that the inaccuracies of the ODE solver do affect the likelihood and, as a result, the inference. This is all the more the case for stiff systems. I would refer interested readers to this nice blog article where this effect is discussed thoroughly for a [cardiac ODE model](https://mirams.wordpress.com/2018/10/17/ode-errors-and-optimisation/). There is also an emerging area of uncertainty quantification that attacks the problem of noise arising from the impreciseness of numerical algorithms, [probabilistic numerics](http://probabilistic-numerics.org/).
This is indeed an elegant framework to carry out inference while taking into account the errors coming from the numeric ODE solvers.

## Custom ODE Op

In order to define the custom `Op` I have written down two `theano.Op` classes, `ODEGradop` and `ODEop`. `ODEop` essentially wraps the ODE solution and will be called by PyMC3. `ODEGradop` wraps the numerical VSP, and this Op is then in turn used inside the `grad` method of `ODEop` to return the VSP. Note that we pass in two functions, `state` and `numpy_vsp`, as arguments to the respective Ops. I will define these functions later. These functions act as shims through which we connect the Python code for the numerical solution of the states and the VSP to Theano, and thus to PyMC3.

```
class ODEGradop(theano.Op):
    def __init__(self, numpy_vsp):
        self._numpy_vsp = numpy_vsp

    def make_node(self, x, g):
        x = theano.tensor.as_tensor_variable(x)
        g = theano.tensor.as_tensor_variable(g)
        node = theano.Apply(self, [x, g], [g.type()])
        return node

    def perform(self, node, inputs_storage, output_storage):
        x = inputs_storage[0]
        g = inputs_storage[1]
        out = output_storage[0]
        out[0] = self._numpy_vsp(x, g)  # get the numerical VSP


class ODEop(theano.Op):
    def __init__(self, state, numpy_vsp):
        self._state = state
        self._numpy_vsp = numpy_vsp

    def make_node(self, x):
        x = theano.tensor.as_tensor_variable(x)
        return theano.Apply(self, [x], [x.type()])

    def perform(self, node, inputs_storage, output_storage):
        x = inputs_storage[0]
        out = output_storage[0]
        out[0] = self._state(x)  # get the numerical solution of ODE states

    def grad(self, inputs, output_grads):
        x = inputs[0]
        g = output_grads[0]
        grad_op = ODEGradop(self._numpy_vsp)  # pass the VSP when asked for gradient
        grad_op_apply = grad_op(x, g)
        return [grad_op_apply]
```

I must point out that, the way I have defined the custom ODE Ops above, there is the possibility that the ODE is solved twice for the same parameter values, once for the states and another time for the VSP.
To avoid this behaviour I have written a helper class which stops this double evaluation.

```
class solveCached:
    def __init__(self, times, n_params, n_outputs):
        self._times = times
        self._n_params = n_params
        self._n_outputs = n_outputs
        self._cachedParam = np.zeros(n_params)
        self._cachedSens = np.zeros((len(times), n_outputs, n_params))
        self._cachedState = np.zeros((len(times), n_outputs))

    def __call__(self, x):
        if np.all(x == self._cachedParam):
            state, sens = self._cachedState, self._cachedSens
        else:
            state, sens = ode_model.simulate_with_sensitivities(x, times)
        return state, sens


times = np.arange(0, 21)  # number of measurement points (see below)
cached_solver = solveCached(times, n_odeparams + n_ivs, n_states)
```

### The ODE state & VSP evaluation

Most ODE systems of practical interest will have multiple states, and thus the output of the solver, which I have denoted so far as $\boldsymbol{X}$, for a system with $K$ states solved on $T$ time points, would be a $T \times K$-dimensional matrix. For the Lotka-Volterra model the columns of this matrix represent the time evolution of the individual species concentrations. I flatten this matrix to a $TK$-dimensional vector $vec(\boldsymbol{X})$, and also rearrange the sensitivities accordingly to obtain the desired vector-matrix product. It is beneficial at this point to test the custom Op as described [here](http://deeplearning.net/software/theano_versions/dev/extending/extending_theano.html#how-to-test-it).
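The flattening just described can be checked in isolation with plain NumPy (my own standalone check, using random arrays as stand-ins for the solver output, with the same shapes as in the Lotka-Volterra setup). It mirrors what the `numpy_vsp` shim defined below does: reshape the $(T, K, D)$ sensitivity array into a $(TK, D)$ Jacobian of $vec(\boldsymbol{X})$ and left-multiply by the incoming gradient:

```python
import numpy as np

rng = np.random.default_rng(0)
T, K, D = 21, 2, 6                   # time points, states, parameters

sens = rng.normal(size=(T, K, D))    # stand-in for dX/dtheta at every time point
g = rng.normal(size=T * K)           # stand-in for the gradient w.r.t. vec(X)

# the numpy_vsp recipe: flatten (T, K, D) -> (T*K, D), then compute J^T g
vsp = sens.reshape((T * K, D)).T.dot(g)

# the same contraction written out explicitly as a sanity check
vsp_check = np.einsum("tkd,tk->d", sens, g.reshape(T, K))

print(np.allclose(vsp, vsp_check))  # True
```

The row-major reshape pairs each entry of $vec(\boldsymbol{X})$ with the matching row of sensitivities, which is why the two contractions agree.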
```
def state(x):
    State, Sens = cached_solver(np.array(x, dtype=np.float64))
    cached_solver._cachedState, cached_solver._cachedSens, cached_solver._cachedParam = (
        State,
        Sens,
        x,
    )
    return State.reshape((2 * len(State),))


def numpy_vsp(x, g):
    numpy_sens = cached_solver(np.array(x, dtype=np.float64))[1].reshape(
        (n_states * len(times), len(x))
    )
    return numpy_sens.T.dot(g)
```

## The Hudson's Bay Company data

The Lotka-Volterra predator prey model has been used previously to successfully explain the dynamics of natural populations of predators and prey, such as the lynx and snowshoe hare data of the Hudson's Bay Company. This is the same data (shared [here](https://github.com/stan-dev/example-models/tree/master/knitr/lotka-volterra)) used in the Stan example, and thus I will use this dataset as the experimental observations $\boldsymbol{Y}(t)$ to infer the parameters.

```
Year = np.arange(1900, 1921, 1)
# fmt: off
Lynx = np.array([4.0, 6.1, 9.8, 35.2, 59.4, 41.7, 19.0, 13.0, 8.3, 9.1, 7.4,
                 8.0, 12.3, 19.5, 45.7, 51.1, 29.7, 15.8, 9.7, 10.1, 8.6])
Hare = np.array([30.0, 47.2, 70.2, 77.4, 36.3, 20.6, 18.1, 21.4, 22.0, 25.4,
                 27.1, 40.3, 57.0, 76.6, 52.3, 19.5, 11.2, 7.6, 14.6, 16.2, 24.7])
# fmt: on
plt.figure(figsize=(15, 7.5))
plt.plot(Year, Lynx, color="b", lw=4, label="Lynx")
plt.plot(Year, Hare, color="g", lw=4, label="Hare")
plt.legend(fontsize=15)
plt.xlim([1900, 1920])
plt.xlabel("Year", fontsize=15)
plt.ylabel("Concentrations", fontsize=15)
plt.xticks(Year, rotation=45)
plt.title("Lynx (predator) - Hare (prey): oscillatory dynamics", fontsize=25);
```

## The probabilistic model

I have now got all the ingredients needed in order to define the probabilistic model in PyMC3. As I have mentioned previously, I will set up the probabilistic model with the exact same likelihood and priors used in the Stan example.
The observed data is defined as follows:

$$\log (\boldsymbol{Y(t)}) = \log (\boldsymbol{X(t)}) + \eta(t),$$

where $\eta(t)$ is assumed to be zero-mean i.i.d. Gaussian noise with an unknown standard deviation $\sigma$ that needs to be estimated. The above multiplicative (on the natural scale) noise model encodes a lognormal distribution as the likelihood:

$$\boldsymbol{Y(t)} \sim \mathcal{L}\mathcal{N}(\log (\boldsymbol{X(t)}), \sigma^2).$$

The following priors are then placed on the parameters:

$$ \begin{aligned} x(0), y(0) &\sim \mathcal{L}\mathcal{N}(\log(10),1),\\ \alpha, \gamma &\sim \mathcal{N}(1,0.5),\\ \beta, \delta &\sim \mathcal{N}(0.05,0.05),\\ \sigma &\sim \mathcal{L}\mathcal{N}(-1,1). \end{aligned} $$

For an intuitive explanation of the choice of priors as well as of the likelihood model, which I am omitting here for brevity, I would recommend the Stan example mentioned above. The above probabilistic model is defined in PyMC3 below. Note that the flattened state vector is reshaped to match the data dimensionality. Finally, I use the `pm.sample` method to run NUTS by default and obtain $1500$ post warm-up samples from the posterior.
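The equivalence claimed above — additive Gaussian noise on the log scale is the same thing as a lognormal likelihood on the natural scale — is easy to verify numerically with SciPy (a standalone check of the noise model, separate from the PyMC3 code; the states and noise level here are arbitrary illustrative values):

```python
import numpy as np
from scipy.stats import lognorm, norm

rng = np.random.default_rng(1)
X = rng.uniform(5, 50, size=10)   # stand-in for the "true" states X(t)
sigma = 0.25

# additive Gaussian noise on the log scale ...
Y = np.exp(np.log(X) + rng.normal(0, sigma, size=10))

# ... gives a lognormal likelihood on the natural scale
# (scipy parameterisation: s = sigma, scale = exp(mu) = X)
ll_lognorm = lognorm.logpdf(Y, s=sigma, scale=X).sum()

# equivalently: Gaussian log-density of log Y, minus the change-of-variables
# Jacobian term log Y
ll_gauss = (norm.logpdf(np.log(Y), loc=np.log(X), scale=sigma) - np.log(Y)).sum()

print(np.allclose(ll_lognorm, ll_gauss))  # True
```

The extra $-\log Y$ term is the Jacobian of the $\log$ transform, which is why the two ways of writing the likelihood agree exactly.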
```
theano.config.exception_verbosity = "high"
theano.config.floatX = "float64"

# Define the data matrix
Y = np.vstack((Hare, Lynx)).T

# Now instantiate the theano custom ODE op
my_ODEop = ODEop(state, numpy_vsp)

# The probabilistic model
with pm.Model() as LV_model:

    # Priors for unknown model parameters
    alpha = pm.Normal("alpha", mu=1, sd=0.5)
    beta = pm.Normal("beta", mu=0.05, sd=0.05)
    gamma = pm.Normal("gamma", mu=1, sd=0.5)
    delta = pm.Normal("delta", mu=0.05, sd=0.05)

    xt0 = pm.Lognormal("xt0", mu=np.log(10), sd=1)
    yt0 = pm.Lognormal("yt0", mu=np.log(10), sd=1)
    sigma = pm.Lognormal("sigma", mu=-1, sd=1, shape=2)

    # Forward model
    all_params = pm.math.stack([alpha, beta, gamma, delta, xt0, yt0], axis=0)
    ode_sol = my_ODEop(all_params)
    forward = ode_sol.reshape(Y.shape)

    # Likelihood
    Y_obs = pm.Lognormal("Y_obs", mu=pm.math.log(forward), sd=sigma, observed=Y)

    trace = pm.sample(1500, tune=1000, init="adapt_diag")

trace["diverging"].sum()

with LV_model:
    pm.traceplot(trace);

import pandas as pd

summary = pm.summary(trace)
STAN_mus = [0.549, 0.028, 0.797, 0.024, 33.960, 5.949, 0.248, 0.252]
STAN_sds = [0.065, 0.004, 0.091, 0.004, 2.909, 0.533, 0.045, 0.044]
summary["STAN_mus"] = pd.Series(np.array(STAN_mus), index=summary.index)
summary["STAN_sds"] = pd.Series(np.array(STAN_sds), index=summary.index)
summary
```

These estimates are almost identical to those obtained in the Stan tutorial (see the last two columns above), which is to be expected. Posterior predictives can be drawn as below.
```
ppc_samples = pm.sample_posterior_predictive(trace, samples=1000, model=LV_model)["Y_obs"]
mean_ppc = ppc_samples.mean(axis=0)
CriL_ppc = np.percentile(ppc_samples, q=2.5, axis=0)
CriU_ppc = np.percentile(ppc_samples, q=97.5, axis=0)

plt.figure(figsize=(15, 2 * (5)))
plt.subplot(2, 1, 1)
plt.plot(Year, Lynx, "o", color="b", lw=4, ms=10.5)
plt.plot(Year, mean_ppc[:, 1], color="b", lw=4)
plt.plot(Year, CriL_ppc[:, 1], "--", color="b", lw=2)
plt.plot(Year, CriU_ppc[:, 1], "--", color="b", lw=2)
plt.xlim([1900, 1920])
plt.ylabel("Lynx conc", fontsize=15)
plt.xticks(Year, rotation=45)
plt.subplot(2, 1, 2)
plt.plot(Year, Hare, "o", color="g", lw=4, ms=10.5, label="Observed")
plt.plot(Year, mean_ppc[:, 0], color="g", lw=4, label="mean of ppc")
plt.plot(Year, CriL_ppc[:, 0], "--", color="g", lw=2, label="credible intervals")
plt.plot(Year, CriU_ppc[:, 0], "--", color="g", lw=2)
plt.legend(fontsize=15)
plt.xlim([1900, 1920])
plt.xlabel("Year", fontsize=15)
plt.ylabel("Hare conc", fontsize=15)
plt.xticks(Year, rotation=45);
```

# Efficient exploration of the posterior landscape with SMC

It has been pointed out in several papers that the complex non-linear dynamics of an ODE results in a posterior landscape that is extremely difficult to navigate efficiently by many MCMC samplers. Thus, recently the curvature information of the posterior surface has been used to construct powerful geometrically aware samplers ([Mark Girolami and Ben Calderhead, 2011](https://rss.onlinelibrary.wiley.com/doi/epdf/10.1111/j.1467-9868.2010.00765.x)) that perform extremely well in ODE inference problems. Another set of ideas suggest breaking down a complex inference task into a sequence of simpler tasks.
In essence the idea is to use sequential importance sampling to sample from an artificial sequence of increasingly complex distributions, where the first in the sequence is a distribution that is easy to sample from, the prior, and the last in the sequence is the actual complex target distribution. The associated importance distribution is constructed by moving the set of particles sampled at the previous step using a Markov kernel, say for example the MH kernel.

A simple way of building the sequence of distributions is to use a temperature $\beta$ that is raised slowly from $0$ to $1$. Using this temperature variable $\beta$ we can write down the annealed intermediate distribution as

$$p_{\beta}(\boldsymbol{\theta}|\boldsymbol{y})\propto p(\boldsymbol{y}|\boldsymbol{\theta})^{\beta} p(\boldsymbol{\theta}).$$

Samplers that carry out sequential importance sampling from this artificial sequence of distributions, to avoid the difficult task of sampling directly from $p(\boldsymbol{\theta}|\boldsymbol{y})$, are known as Sequential Monte Carlo (SMC) samplers ([P Del Moral et al., 2006](https://rss.onlinelibrary.wiley.com/doi/full/10.1111/j.1467-9868.2006.00553.x)). The performance of these samplers is sensitive to the choice of the temperature schedule, that is, the set of user-defined increasing values of $\beta$ between $0$ and $1$. Fortunately, PyMC3 provides a version of the SMC sampler ([Jianye Ching and Yi-Chu Chen, 2007](https://ascelibrary.org/doi/10.1061/%28ASCE%290733-9399%282007%29133%3A7%28816%29)) that automatically figures out this temperature schedule. Moreover, PyMC3's SMC sampler does not require the gradient of the log target density. As a result it is extremely easy to use this sampler for inference in ODE models. In the next example I will apply this SMC sampler to estimate the parameters of the Fitzhugh-Nagumo model.
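The tempering idea can be made concrete with a one-dimensional conjugate toy problem (my own illustration, not part of the tutorial's models): for a $\mathcal{N}(0,1)$ prior on $\theta$ and a Gaussian likelihood with a single observation $y$, the normalising constant of $p(y|\theta)^{\beta} p(\theta)$ can be computed by simple quadrature. It interpolates between $1$ at $\beta=0$ (the tempered target is just the prior) and the marginal likelihood at $\beta=1$:

```python
import numpy as np
from scipy.stats import norm

y = 1.3                                    # a single observed data point
theta = np.linspace(-10, 10, 20001)
dtheta = theta[1] - theta[0]
prior = norm.pdf(theta, 0, 1)              # p(theta)
lik = norm.pdf(y, theta, 1)                # p(y | theta)

# normalising constant of the tempered target for a few beta values
betas = [0.0, 0.25, 0.5, 1.0]
Z = [np.sum(lik**b * prior) * dtheta for b in betas]

# beta = 0 recovers the prior (Z = 1); beta = 1 gives the marginal
# likelihood, which here is available in closed form as N(y; 0, sqrt(2))
print(Z[0], Z[-1], norm.pdf(y, 0, np.sqrt(2)))
```

For this conjugate example every tempered distribution is itself a Gaussian, so the sequence really does morph smoothly from the prior into the posterior — exactly the path the SMC particles follow.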
## The Fitzhugh-Nagumo model

The Fitzhugh-Nagumo model, given by

$$ \begin{aligned} \frac{dV}{dt}&=(V - \frac{V^3}{3} + R)c\\ \frac{dR}{dt}&=\frac{-(V-a+bR)}{c}, \end{aligned} $$

consisting of a membrane voltage variable $V(t)$ and a recovery variable $R(t)$, is a two-dimensional simplification of the [Hodgkin-Huxley](http://www.scholarpedia.org/article/Conductance-based_models) model of spike (action potential) generation in squid giant axons, where $a$, $b$, $c$ are the model parameters. This model produces rich dynamics and, as a result, a complex geometry of the posterior surface that often leads to poor performance of many MCMC samplers. As a result this model was used to test the efficacy of the geometric MCMC scheme discussed above, and has since been used to benchmark other novel MCMC methods. Following [Mark Girolami and Ben Calderhead, 2011](https://rss.onlinelibrary.wiley.com/doi/epdf/10.1111/j.1467-9868.2010.00765.x) I will also use artificially generated data from this model to set up the inference task for estimating $\boldsymbol{\theta}=(a,b,c)$.

```
class FitzhughNagumoModel:
    def __init__(self, times, y0=None):
        self._y0 = np.array([-1, 1], dtype=np.float64)
        self._times = times

    def _simulate(self, parameters, times):
        a, b, c = [float(x) for x in parameters]

        def rhs(y, t, p):
            V, R = y
            dV_dt = (V - V ** 3 / 3 + R) * c
            dR_dt = (V - a + b * R) / -c
            return dV_dt, dR_dt

        values = odeint(rhs, self._y0, times, (parameters,), rtol=1e-6, atol=1e-6)
        return values

    def simulate(self, x):
        return self._simulate(x, self._times)
```

## Simulated Data

For this example I am going to use simulated data; that is, I will generate noisy traces from the forward model defined above, with the parameters $\theta$ set to $(0.2,0.2,3)$ and corrupted by i.i.d. Gaussian noise with a standard deviation $\sigma=0.5$. The initial values are set to $V(0)=-1$ and $R(0)=1$ respectively.
Again following [Mark Girolami and Ben Calderhead, 2011](https://rss.onlinelibrary.wiley.com/doi/epdf/10.1111/j.1467-9868.2010.00765.x) I will assume that the initial values are known. These parameter values push the model into the oscillatory regime.

```
n_states = 2
n_times = 200
true_params = [0.2, 0.2, 3.0]
noise_sigma = 0.5
FN_solver_times = np.linspace(0, 20, n_times)

ode_model = FitzhughNagumoModel(FN_solver_times)
sim_data = ode_model.simulate(true_params)
np.random.seed(42)
Y_sim = sim_data + np.random.randn(n_times, n_states) * noise_sigma

plt.figure(figsize=(15, 7.5))
plt.plot(FN_solver_times, sim_data[:, 0], color="darkblue", lw=4, label=r"$V(t)$")
plt.plot(FN_solver_times, sim_data[:, 1], color="darkgreen", lw=4, label=r"$R(t)$")
plt.plot(FN_solver_times, Y_sim[:, 0], "o", color="darkblue", ms=4.5, label="Noisy traces")
plt.plot(FN_solver_times, Y_sim[:, 1], "o", color="darkgreen", ms=4.5)
plt.legend(fontsize=15)
plt.xlabel("Time", fontsize=15)
plt.ylabel("Values", fontsize=15)
plt.title("Fitzhugh-Nagumo Action Potential Model", fontsize=25);
```

## Define a non-differentiable black-box op using Theano @as_op

Recall that, as mentioned above, the SMC sampler does not require gradients; this is also the case for other samplers supported in PyMC3, such as Metropolis-Hastings and the Slice sampler. For all these gradient-free samplers I will show a simple and quick way of wrapping the forward model, i.e. the ODE solution, in Theano. All we have to do is use the decorator `as_op`, which converts a Python function into a basic Theano Op. We also tell Theano, through the `as_op` decorator, that we have three parameters, each being a Theano scalar. The output is then a Theano matrix whose columns are the state vectors.
```
import theano.tensor as tt
from theano.compile.ops import as_op


@as_op(itypes=[tt.dscalar, tt.dscalar, tt.dscalar], otypes=[tt.dmatrix])
def th_forward_model(param1, param2, param3):
    param = [param1, param2, param3]
    th_states = ode_model.simulate(param)
    return th_states
```

## Generative model

Since I have corrupted the original traces with i.i.d. Gaussian noise, the likelihood is given by

$$p(\boldsymbol{Y}|\boldsymbol{\theta},\sigma) = \prod_{i=1}^T \mathcal{N}\big(\boldsymbol{Y}(t_i)\,\big|\,\boldsymbol{X}(t_i), \sigma^2\mathbb{I}\big),$$

where $\mathbb{I}\in \mathbb{R}^{K \times K}$ is the identity matrix. We place a Gamma, Normal and Uniform prior on $(a,b,c)$ respectively, and a HalfNormal prior on $\sigma$, as follows:

$$ \begin{aligned} a & \sim \mathcal{Gamma}(2,1),\\ b & \sim \mathcal{N}(0,1),\\ c & \sim \mathcal{U}(0.1,10),\\ \sigma & \sim \mathcal{H}(1). \end{aligned} $$

Notice how I have used the `start` argument for this example. Just like `pm.sample`, `pm.sample_smc` has a number of settings, but I found the default ones good enough for simple models such as this one.

```
draws = 1000

with pm.Model() as FN_model:

    a = pm.Gamma("a", alpha=2, beta=1)
    b = pm.Normal("b", mu=0, sd=1)
    c = pm.Uniform("c", lower=0.1, upper=10)

    sigma = pm.HalfNormal("sigma", sd=1)

    forward = th_forward_model(a, b, c)

    cov = np.eye(2) * sigma ** 2

    Y_obs = pm.MvNormal("Y_obs", mu=forward, cov=cov, observed=Y_sim)

    startsmc = {v.name: np.random.uniform(1e-3, 2, size=draws) for v in FN_model.free_RVs}

    trace_FN = pm.sample_smc(draws, start=startsmc)

pm.plot_posterior(trace_FN, kind="hist", bins=30, color="seagreen");
```

## Inference summary

With `pm.SMC`, do I get performance similar to that of geometric MCMC samplers (see [Mark Girolami and Ben Calderhead, 2011](https://rss.onlinelibrary.wiley.com/doi/epdf/10.1111/j.1467-9868.2010.00765.x))? I think so!
```
results = [
    pm.summary(trace_FN, ["a"]),
    pm.summary(trace_FN, ["b"]),
    pm.summary(trace_FN, ["c"]),
    pm.summary(trace_FN, ["sigma"]),
]
results = pd.concat(results)
true_params.append(noise_sigma)
results["True values"] = pd.Series(np.array(true_params), index=results.index)
true_params.pop()
results
```

## Reconstruction of the phase portrait

It's good to check that we can reconstruct the (famous) phase portrait for this model based on the obtained samples.

```
params = np.array(
    [trace_FN.get_values("a"), trace_FN.get_values("b"), trace_FN.get_values("c")]
).T
params.shape
new_values = []
for ind in range(len(params)):
    ppc_sol = ode_model.simulate(params[ind])
    new_values.append(ppc_sol)
new_values = np.array(new_values)
mean_values = np.mean(new_values, axis=0)
plt.figure(figsize=(15, 7.5))
plt.plot(
    mean_values[:, 0],
    mean_values[:, 1],
    color="black",
    lw=4,
    label="Inferred (mean of sampled) phase portrait",
)
plt.plot(
    sim_data[:, 0], sim_data[:, 1], "--", color="#ff7f0e", lw=4, ms=6, label="True phase portrait"
)
plt.legend(fontsize=15)
plt.xlabel(r"$V(t)$", fontsize=15)
plt.ylabel(r"$R(t)$", fontsize=15);
```

# Perspectives

### Using some other ODE models

I have tried to keep everything as general as possible. So my custom ODE Op, the state and VSP evaluator, as well as the cached solver, are not tied to a specific ODE model. Thus, to use any other ODE model, one only needs to implement a `simulate_with_sensitivities` method according to their own specific ODE model.

### Other forms of differential equation (DDE, DAE, PDE)

I hope the two examples have elucidated the applicability of PyMC3 in regards to fitting ODE models. Although ODEs are the most fundamental constituent of a mathematical model, there are indeed other forms of dynamical systems, such as delay differential equations (DDE), differential algebraic equations (DAE) and partial differential equations (PDE), whose parameter estimation is equally important.
The SMC, and for that matter any other gradient-free sampler supported by PyMC3, can be used to fit all these forms of differential equation, of course using `as_op`. However, just like an ODE, we can solve augmented systems of DDE/DAE along with their sensitivity equations. The sensitivity equations for a DDE and a DAE can be found in this recent paper, [C Rackauckas et al., 2018](https://arxiv.org/abs/1812.01892) (Equations 9 and 10). Thus we can easily apply the NUTS sampler to these models.

### Stan already supports ODEs

Well, there are many problems where I believe the SMC sampler would be more suitable than NUTS, and thus it is good to have that option.

### Model selection

Most of the ODE inference literature since [Vladislav Vyshemirsky and Mark Girolami, 2008](https://academic.oup.com/bioinformatics/article/24/6/833/192524) recommends the use of the Bayes factor for model selection/comparison. This involves the calculation of the marginal likelihood, which is a much more nuanced topic, and I will refrain from discussing it here. Fortunately, the SMC sampler calculates the marginal likelihood as a by-product, so it can be used for obtaining Bayes factors. Follow PyMC3's other tutorials for further information on how to obtain the marginal likelihood after running the SMC sampler.

Since we generally frame ODE inference as a regression problem (along with the i.i.d. measurement noise assumption in most cases), we can straight away use any of the supported information criteria, such as the widely applicable information criterion (WAIC), irrespective of which sampler is used for inference. See PyMC3's API for further information regarding WAIC.

### Other AD packages

Although this is a slight digression, I would still like to point out my observations on this issue.
The approach that I have presented here for embedding an ODE (it also extends to DDE/DAE) as a custom Op can be carried forward trivially to other AD packages, such as TensorFlow and PyTorch. I have been able to use TensorFlow's [py_func](https://www.tensorflow.org/api_docs/python/tf/py_func) to build a custom TensorFlow ODE Op and then use that in the [Edward](http://edwardlib.org/) ppl. I would recommend [this](https://pytorch.org/tutorials/advanced/numpy_extensions_tutorial.html) tutorial, on writing PyTorch extensions, to those who are interested in using the [Pyro](http://pyro.ai/) ppl.

```
%load_ext watermark
%watermark -n -u -v -iv -w
```
# Quantum Teleportation This notebook demonstrates quantum teleportation. We first use Qiskit's built-in simulators to test our quantum circuit, and then try it out on a real quantum computer. ## 1. Overview <a id='overview'></a> Alice wants to send quantum information to Bob. Specifically, suppose she wants to send the qubit state $\vert\psi\rangle = \alpha\vert0\rangle + \beta\vert1\rangle$. This entails passing on information about $\alpha$ and $\beta$ to Bob. There exists a theorem in quantum mechanics which states that you cannot simply make an exact copy of an unknown quantum state. This is known as the no-cloning theorem. As a result of this we can see that Alice can't simply generate a copy of $\vert\psi\rangle$ and give the copy to Bob. We can only copy classical states (not superpositions). However, by taking advantage of two classical bits and an entangled qubit pair, Alice can transfer her state $\vert\psi\rangle$ to Bob. We call this teleportation because, at the end, Bob will have $\vert\psi\rangle$ and Alice won't anymore. ## 2. The Quantum Teleportation Protocol <a id='how'></a> To transfer a quantum bit, Alice and Bob must use a third party (Telamon) to send them an entangled qubit pair. Alice then performs some operations on her qubit, sends the results to Bob over a classical communication channel, and Bob then performs some operations on his end to receive Alice’s qubit. ![teleportation_doodle](images/tele1.jpg) We will describe the steps on a quantum circuit below. Here, no qubits are actually ‘sent’, you’ll just have to imagine that part! 
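Before wiring up the circuit, it helps to remember what the object being teleported actually is: just two complex amplitudes with unit norm. A quick NumPy sketch (my own illustration; the draw-complex-Gaussians-then-normalise recipe mimics what `random_statevector(2)` does later, up to details of the distribution):

```python
import numpy as np

rng = np.random.default_rng(7)

# a random single-qubit state |psi> = alpha|0> + beta|1>
z = rng.normal(size=2) + 1j * rng.normal(size=2)
psi = z / np.linalg.norm(z)   # normalise so the Born-rule probabilities sum to 1
alpha, beta = psi

print(abs(alpha) ** 2 + abs(beta) ** 2)  # 1.0
```

Teleportation must deliver both continuous amplitudes to Bob even though Alice only ever sends two classical bits — that is what the entangled pair makes possible.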
First we set up our session:

```
# Do the necessary imports
import numpy as np
from qiskit import QuantumCircuit, QuantumRegister, ClassicalRegister
from qiskit import IBMQ, Aer, transpile
from qiskit.visualization import plot_histogram, plot_bloch_multivector, array_to_latex
from qiskit.extensions import Initialize
from qiskit.result import marginal_counts
from qiskit.quantum_info import random_statevector
```

and create our quantum circuit:

```
## SETUP
qr = QuantumRegister(3, name="q")  # Protocol uses 3 qubits
crz = ClassicalRegister(1, name="crz")  # and 2 classical bits
crx = ClassicalRegister(1, name="crx")  # in 2 different registers
teleportation_circuit = QuantumCircuit(qr, crz, crx)
```

#### Step 1

A third party, Telamon, creates an entangled pair of qubits and gives one to Bob and one to Alice. The pair Telamon creates is a special pair called a Bell pair. In quantum circuit language, the way to create a Bell pair between two qubits is to first transfer one of them to the X-basis ($|+\rangle$ and $|-\rangle$) using a Hadamard gate, and then to apply a CNOT gate onto the other qubit controlled by the one in the X-basis.

```
def create_bell_pair(qc, a, b):
    """Creates a bell pair in qc using qubits a & b"""
    qc.h(a)  # Put qubit a into state |+>
    qc.cx(a, b)  # CNOT with a as control and b as target


## SETUP
# Protocol uses 3 qubits and 2 classical bits in 2 different registers
qr = QuantumRegister(3, name="q")
crz, crx = ClassicalRegister(1, name="crz"), ClassicalRegister(1, name="crx")
teleportation_circuit = QuantumCircuit(qr, crz, crx)

## STEP 1
# In our case, Telamon entangles qubits q1 and q2
# Let's apply this to our circuit:
create_bell_pair(teleportation_circuit, 1, 2)
# And view the circuit so far:
teleportation_circuit.draw()
```

Let's say Alice owns $q_1$ and Bob owns $q_2$ after they part ways.
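If you want to convince yourself of the Bell-pair construction with plain linear algebra rather than a quantum simulator, the same two gates can be applied as matrices (a NumPy check of my own, independent of Qiskit):

```python
import numpy as np

# single-qubit gates and the |0> basis state
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
X = np.array([[0, 1], [1, 0]])
I = np.eye(2)
ket0 = np.array([1.0, 0.0])

# CNOT with the first qubit as control: |0><0| (x) I + |1><1| (x) X
P0 = np.diag([1.0, 0.0])
P1 = np.diag([0.0, 1.0])
CNOT = np.kron(P0, I) + np.kron(P1, X)

# start in |00>, apply H to the first qubit, then the CNOT
state = CNOT @ np.kron(H, I) @ np.kron(ket0, ket0)

bell = np.array([1, 0, 0, 1]) / np.sqrt(2)  # (|00> + |11>)/sqrt(2)
print(np.allclose(state, bell))  # True
```

The Hadamard produces $(|00\rangle + |10\rangle)/\sqrt{2}$, and the CNOT flips the target only in the $|10\rangle$ branch, giving the entangled Bell state.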
#### Step 2 Alice applies a CNOT gate to $q_1$, controlled by $\vert\psi\rangle$ (the qubit she is trying to send Bob). Then Alice applies a Hadamard gate to $|\psi\rangle$. In our quantum circuit, the qubit ($|\psi\rangle$) Alice is trying to send is $q_0$: ``` def alice_gates(qc, psi, a): qc.cx(psi, a) qc.h(psi) ## SETUP # Protocol uses 3 qubits and 2 classical bits in 2 different registers qr = QuantumRegister(3, name="q") crz, crx = ClassicalRegister(1, name="crz"), ClassicalRegister(1, name="crx") teleportation_circuit = QuantumCircuit(qr, crz, crx) ## STEP 1 create_bell_pair(teleportation_circuit, 1, 2) ## STEP 2 teleportation_circuit.barrier() # Use barrier to separate steps alice_gates(teleportation_circuit, 0, 1) teleportation_circuit.draw() ``` #### Step 3 Next, Alice applies a measurement to both qubits that she owns, $q_1$ and $\vert\psi\rangle$, and stores this result in two classical bits. She then sends these two bits to Bob. ``` def measure_and_send(qc, a, b): """Measures qubits a & b and 'sends' the results to Bob""" qc.barrier() qc.measure(a,0) qc.measure(b,1) ## SETUP # Protocol uses 3 qubits and 2 classical bits in 2 different registers qr = QuantumRegister(3, name="q") crz, crx = ClassicalRegister(1, name="crz"), ClassicalRegister(1, name="crx") teleportation_circuit = QuantumCircuit(qr, crz, crx) ## STEP 1 create_bell_pair(teleportation_circuit, 1, 2) ## STEP 2 teleportation_circuit.barrier() # Use barrier to separate steps alice_gates(teleportation_circuit, 0, 1) ## STEP 3 measure_and_send(teleportation_circuit, 0 ,1) teleportation_circuit.draw() ``` #### Step 4 Bob, who already has the qubit $q_2$, then applies the following gates depending on the state of the classical bits: 00 $\rightarrow$ Do nothing 01 $\rightarrow$ Apply $X$ gate 10 $\rightarrow$ Apply $Z$ gate 11 $\rightarrow$ Apply $ZX$ gate (*Note that this transfer of information is purely classical*.) 
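The lookup table above can be sanity-checked with a little linear algebra. Here $\alpha = 0.6$, $\beta = 0.8$ are arbitrary example amplitudes, and each of Bob's four possible collapsed states (derived in section 4 below) is mapped back to $|\psi\rangle$ by the listed gate:

```python
import numpy as np

X = np.array([[0, 1], [1, 0]])
Z = np.array([[1, 0], [0, -1]])
I2 = np.eye(2)

alpha, beta = 0.6, 0.8          # example amplitudes with |a|^2 + |b|^2 = 1
psi = np.array([alpha, beta])   # |psi> = 0.6|0> + 0.8|1>

# Bob's collapsed state for each pair of classical bits, and the gate he applies.
# "ZX" means apply X first, then Z, hence the matrix product Z @ X.
cases = {
    "00": (np.array([alpha,  beta]),  I2),
    "01": (np.array([beta,   alpha]), X),
    "10": (np.array([alpha, -beta]),  Z),
    "11": (np.array([-beta,  alpha]), Z @ X),
}
for bits, (state, gate) in cases.items():
    assert np.allclose(gate @ state, psi), f"correction failed for bits {bits}"
print("All four corrections recover |psi>")
```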
``` # This function takes a QuantumCircuit (qc), integer (qubit) # and ClassicalRegisters (crz & crx) to decide which gates to apply def bob_gates(qc, qubit, crz, crx): # Here we use c_if to control our gates with a classical # bit instead of a qubit qc.x(qubit).c_if(crx, 1) # Apply gates if the registers qc.z(qubit).c_if(crz, 1) # are in the state '1' ## SETUP # Protocol uses 3 qubits and 2 classical bits in 2 different registers qr = QuantumRegister(3, name="q") crz, crx = ClassicalRegister(1, name="crz"), ClassicalRegister(1, name="crx") teleportation_circuit = QuantumCircuit(qr, crz, crx) ## STEP 1 create_bell_pair(teleportation_circuit, 1, 2) ## STEP 2 teleportation_circuit.barrier() # Use barrier to separate steps alice_gates(teleportation_circuit, 0, 1) ## STEP 3 measure_and_send(teleportation_circuit, 0, 1) ## STEP 4 teleportation_circuit.barrier() # Use barrier to separate steps bob_gates(teleportation_circuit, 2, crz, crx) teleportation_circuit.draw() ``` And voila! At the end of this protocol, Alice's qubit has now teleported to Bob. ## 3. Simulating the Teleportation Protocol <a id='simulating'></a> ### 3.1 How Will We Test the Protocol on a Quantum Computer? <a id='testing'></a> In this notebook, we will initialize Alice's qubit in a random state $\vert\psi\rangle$ (`psi`). This state will be created using an `Initialize` gate on $|q_0\rangle$. In this chapter we use the function `random_statevector` to choose `psi` for us, but feel free to set `psi` to any qubit state you want. ``` # Create random 1-qubit state psi = random_statevector(2) # Display it nicely display(array_to_latex(psi, prefix="|\\psi\\rangle =")) # Show it on a Bloch sphere plot_bloch_multivector(psi) ``` Let's create our initialization instruction to create $|\psi\rangle$ from the state $|0\rangle$: ``` init_gate = Initialize(psi) init_gate.label = "init" ``` (`Initialize` is technically not a gate since it contains a reset operation, and so is not reversible. 
We call it an 'instruction' instead). If the quantum teleportation circuit works, then at the end of the circuit the qubit $|q_2\rangle$ will be in this state. We will check this using the statevector simulator. ### 3.2 Using the Simulated Statevector <a id='simulating-sv'></a> We can use the Aer simulator to verify our qubit has been teleported. ``` ## SETUP qr = QuantumRegister(3, name="q") # Protocol uses 3 qubits crz = ClassicalRegister(1, name="crz") # and 2 classical registers crx = ClassicalRegister(1, name="crx") qc = QuantumCircuit(qr, crz, crx) ## STEP 0 # First, let's initialize Alice's q0 qc.append(init_gate, [0]) qc.barrier() ## STEP 1 # Now begins the teleportation protocol create_bell_pair(qc, 1, 2) qc.barrier() ## STEP 2 # Send q1 to Alice and q2 to Bob alice_gates(qc, 0, 1) ## STEP 3 # Alice then sends her classical bits to Bob measure_and_send(qc, 0, 1) ## STEP 4 # Bob decodes qubits bob_gates(qc, 2, crz, crx) # Display the circuit qc.draw() ``` We can see below, using the statevector obtained from the aer simulator, that the state of $|q_2\rangle$ is the same as the state $|\psi\rangle$ we created above, while the states of $|q_0\rangle$ and $|q_1\rangle$ have been collapsed to either $|0\rangle$ or $|1\rangle$. The state $|\psi\rangle$ has been teleported from qubit 0 to qubit 2. ``` sim = Aer.get_backend('aer_simulator') qc.save_statevector() out_vector = sim.run(qc).result().get_statevector() plot_bloch_multivector(out_vector) ``` You can run this cell a few times to make sure. You may notice that the qubits 0 & 1 change states, but qubit 2 is always in the state $|\psi\rangle$. ### 3.3 Using the Simulated Counts <a id='simulating-fc'></a> Quantum teleportation is designed to send qubits between two parties. We do not have the hardware to demonstrate this, but we can demonstrate that the gates perform the correct transformations on a single quantum chip. Here we again use the aer simulator to simulate how we might test our protocol. 
On a real quantum computer, we would not be able to sample the statevector, so if we want to check that our teleportation circuit is working, we need to do things slightly differently. The `Initialize` instruction first performs a reset, setting our qubit to the state $|0\rangle$. It then applies gates to turn our $|0\rangle$ qubit into the state $|\psi\rangle$: $$ |0\rangle \xrightarrow{\text{Initialize gates}} |\psi\rangle $$ Since all quantum gates are reversible, we can find the inverse of these gates using: ``` inverse_init_gate = init_gate.gates_to_uncompute() ``` This operation has the property: $$ |\psi\rangle \xrightarrow{\text{Inverse Initialize gates}} |0\rangle $$ To prove that the qubit $|q_0\rangle$ has been teleported to $|q_2\rangle$, we can apply this inverse initialization to $|q_2\rangle$: if teleportation worked, we expect to measure $|0\rangle$ with certainty. We do this in the circuit below: ``` ## SETUP qr = QuantumRegister(3, name="q") # Protocol uses 3 qubits crz = ClassicalRegister(1, name="crz") # and 2 classical registers crx = ClassicalRegister(1, name="crx") qc = QuantumCircuit(qr, crz, crx) ## STEP 0 # First, let's initialize Alice's q0 qc.append(init_gate, [0]) qc.barrier() ## STEP 1 # Now begins the teleportation protocol create_bell_pair(qc, 1, 2) qc.barrier() ## STEP 2 # Send q1 to Alice and q2 to Bob alice_gates(qc, 0, 1) ## STEP 3 # Alice then sends her classical bits to Bob measure_and_send(qc, 0, 1) ## STEP 4 # Bob decodes qubits bob_gates(qc, 2, crz, crx) ## STEP 5 # reverse the initialization process qc.append(inverse_init_gate, [2]) # Display the circuit qc.draw() ``` We can see the `inverse_init_gate` appearing, labelled 'disentangler' on the circuit diagram.
Finally, we measure the third qubit and store the result in the third classical bit: ``` # Need to add a new ClassicalRegister # to see the result cr_result = ClassicalRegister(1) qc.add_register(cr_result) qc.measure(2,2) qc.draw() ``` and we run our experiment: ``` t_qc = transpile(qc, sim) t_qc.save_statevector() counts = sim.run(t_qc).result().get_counts() qubit_counts = [marginal_counts(counts, [qubit]) for qubit in range(3)] plot_histogram(qubit_counts) ``` We can see we have a 100% chance of measuring $q_2$ (the purple bar in the histogram) in the state $|0\rangle$. This is the expected result, and indicates the teleportation protocol has worked properly. ## 4. Understanding Quantum Teleportation <a id="understanding-qt"></a> Now that you have worked through an implementation of quantum teleportation, it is time to understand the mathematics behind the protocol. #### Step 1 Quantum teleportation begins with the fact that Alice needs to transmit $|\psi\rangle = \alpha|0\rangle + \beta|1\rangle$ (a random qubit) to Bob. She doesn't know the state of the qubit. For this, Alice and Bob enlist the help of a third party (Telamon). Telamon prepares a pair of entangled qubits for Alice and Bob. The entangled qubits could be written in Dirac Notation as: $$ |e \rangle = \frac{1}{\sqrt{2}} (|00\rangle + |11\rangle) $$ Alice and Bob each possess one qubit of the entangled pair (denoted as A and B respectively), $$|e\rangle = \frac{1}{\sqrt{2}} (|0\rangle_A |0\rangle_B + |1\rangle_A |1\rangle_B) $$ This creates a three qubit quantum system where Alice has the first two qubits and Bob the last one.
$$ \begin{aligned} |\psi\rangle \otimes |e\rangle &= \frac{1}{\sqrt{2}} (\alpha |0\rangle \otimes (|00\rangle + |11\rangle) + \beta |1\rangle \otimes (|00\rangle + |11\rangle))\\ &= \frac{1}{\sqrt{2}} (\alpha|000\rangle + \alpha|011\rangle + \beta|100\rangle + \beta|111\rangle) \end{aligned}$$ #### Step 2 Now, according to the protocol, Alice applies a CNOT gate on her two qubits, followed by a Hadamard gate on the first qubit. This results in the state: $$ \begin{aligned} &(H \otimes I \otimes I) (CNOT \otimes I) (|\psi\rangle \otimes |e\rangle)\\ &=(H \otimes I \otimes I) (CNOT \otimes I) \frac{1}{\sqrt{2}} (\alpha|000\rangle + \alpha|011\rangle + \beta|100\rangle + \beta|111\rangle) \\ &= (H \otimes I \otimes I) \frac{1}{\sqrt{2}} (\alpha|000\rangle + \alpha|011\rangle + \beta|110\rangle + \beta|101\rangle) \\ &= \frac{1}{2} (\alpha(|000\rangle + |011\rangle + |100\rangle + |111\rangle) + \beta(|010\rangle + |001\rangle - |110\rangle - |101\rangle)) \\ \end{aligned} $$ Which can then be separated and written as: $$ \begin{aligned} = \frac{1}{2} \big( &\phantom{+} |00\rangle (\alpha|0\rangle + \beta|1\rangle) \\ &+ |01\rangle (\alpha|1\rangle + \beta|0\rangle) \\ &+ |10\rangle (\alpha|0\rangle - \beta|1\rangle) \\ &+ |11\rangle (\alpha|1\rangle - \beta|0\rangle) \big) \end{aligned} $$ #### Step 3 Alice measures the first two qubits (which she owns) and sends the result to Bob as two classical bits. The outcome she obtains is always one of the four standard basis states $|00\rangle, |01\rangle, |10\rangle,$ and $|11\rangle$, each with equal probability.
On the basis of her measurement, Bob's state will be projected to, $$ |00\rangle \rightarrow (\alpha|0\rangle + \beta|1\rangle)\\ |01\rangle \rightarrow (\alpha|1\rangle + \beta|0\rangle)\\ |10\rangle \rightarrow (\alpha|0\rangle - \beta|1\rangle)\\ |11\rangle \rightarrow (\alpha|1\rangle - \beta|0\rangle) $$ #### Step 4 Bob, on receiving the bits from Alice, knows he can obtain the original state $|\psi\rangle$ by applying appropriate transformations on his qubit that was once part of the entangled pair. The transformations he needs to apply are: $$ \begin{array}{c c c} \mbox{Bob's State} & \mbox{Bits Received} & \mbox{Gate Applied} \\ (\alpha|0\rangle + \beta|1\rangle) & 00 & I \\ (\alpha|1\rangle + \beta|0\rangle) & 01 & X \\ (\alpha|0\rangle - \beta|1\rangle) & 10 & Z \\ (\alpha|1\rangle - \beta|0\rangle) & 11 & ZX \end{array} $$ After this step Bob will have successfully reconstructed Alice's state. ## 5. Teleportation on a Real Quantum Computer <a id='real_qc'></a> ### 5.1 IBM hardware and Deferred Measurement <a id='deferred-measurement'></a> The IBM quantum computers currently do not support instructions after measurements, meaning we cannot run the quantum teleportation in its current form on real hardware. Fortunately, this does not limit our ability to perform any computations due to the _deferred measurement principle_ discussed in chapter 4.4 of [1]. The principle states that any measurement can be postponed until the end of the circuit, i.e. we can move all the measurements to the end, and we should see the same results. ![deferred_measurement_gates](images/defer_measurement.svg) Any benefits of measuring early are hardware related: If we can measure early, we may be able to reuse qubits, or reduce the amount of time our qubits are in their fragile superposition. In this example, the early measurement in quantum teleportation would have allowed us to transmit a qubit state without a direct quantum communication channel. 
While moving the gates allows us to demonstrate the "teleportation" circuit on real hardware, it should be noted that the benefit of the teleportation process (transferring quantum states via classical channels) is lost. Let us re-write the `bob_gates` function to `new_bob_gates`: ``` def new_bob_gates(qc, a, b, c): qc.cx(b, c) qc.cz(a, c) ``` And create our new circuit: ``` qc = QuantumCircuit(3,1) # First, let's initialize Alice's q0 qc.append(init_gate, [0]) qc.barrier() # Now begins the teleportation protocol create_bell_pair(qc, 1, 2) qc.barrier() # Send q1 to Alice and q2 to Bob alice_gates(qc, 0, 1) qc.barrier() # Alice sends classical bits to Bob new_bob_gates(qc, 0, 1, 2) # We undo the initialization process qc.append(inverse_init_gate, [2]) # See the results, we only care about the state of qubit 2 qc.measure(2,0) # View the results: qc.draw() ``` ### 5.2 Executing <a id='executing'></a> ``` # First, see what devices we are allowed to use by loading our saved accounts IBMQ.load_account() provider = IBMQ.get_provider(hub='ibm-q') # get the least-busy backend at IBM and run the quantum circuit there from qiskit.providers.ibmq import least_busy from qiskit.tools.monitor import job_monitor backend = least_busy(provider.backends(filters=lambda b: b.configuration().n_qubits >= 3 and not b.configuration().simulator and b.status().operational==True)) t_qc = transpile(qc, backend, optimization_level=3) job = backend.run(t_qc) job_monitor(job) # displays job status under cell # Get the results and display them exp_result = job.result() exp_counts = exp_result.get_counts(qc) print(exp_counts) plot_histogram(exp_counts) ``` As we see here, there are a few results in which we measured $|1\rangle$. These arise due to errors in the gates and the qubits. In contrast, our simulator in the earlier part of the notebook had zero errors in its gates, and allowed error-free teleportation. 
``` print(f"The experimental error rate : {exp_counts.get('1', 0)*100/sum(exp_counts.values()):.3f}%") # .get avoids a KeyError if no '1' outcomes were recorded ``` ## 6. References <a id='references'></a> [1] M. Nielsen and I. Chuang, Quantum Computation and Quantum Information, Cambridge Series on Information and the Natural Sciences (Cambridge University Press, Cambridge, 2000). [2] Eleanor Rieffel and Wolfgang Polak, Quantum Computing: A Gentle Introduction (The MIT Press, Cambridge, Massachusetts, 2011). ``` import qiskit.tools.jupyter %qiskit_version_table ```
<img src="https://pm1.narvii.com/5887/02b61b74eaec1060b56a3fcfed42ecc24a457a2e_hq.jpg"> In this hands-on, we will use the Marvel dataset to practice using different plots to visualize distributions of values between groups. You are free to come up with your own questions and use one of the categorical plots to help answer each question. You are also free to build your own dataframe that contains a specific subset of the data to help you answer your questions. The dataset is in https://raw.githubusercontent.com/csbfx/advpy122-data/master/marvel-wikia-data.csv Data source: https://github.com/fivethirtyeight/data/tree/master/comic-characters

| Variable | Definition |
| :------- | :- |
| page_id | The unique identifier for that character's page within the wikia |
| name | The name of the character |
| urlslug | The unique url within the wikia that takes you to the character |
| ID | The identity status of the character (Secret Identity, Public Identity, [on Marvel only: No Dual Identity]) |
| ALIGN | If the character is Good, Bad or Neutral |
| EYE | Eye color of the character |
| HAIR | Hair color of the character |
| SEX | Sex of the character (e.g. Male, Female, etc.) |
| GSM | If the character is a gender or sexual minority (e.g. homosexual characters, bisexual characters) |
| ALIVE | If the character is alive or deceased |
| APPEARANCES | The number of appearances of the character in comic books (as of Sep. 2, 2014. Number will become increasingly out of date as time goes on.) |
| FIRST APPEARANCE | The month and year of the character's first appearance in a comic book, if available |
| YEAR | The year of the character's first appearance in a comic book, if available |

## Q1. How big is this dataset? Use pandas to find out the number of rows and columns.
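As a hint for Q1: a DataFrame's `shape` attribute returns a `(rows, columns)` tuple directly. A tiny self-contained sketch on a toy frame (the real dataframe is loaded from the CSV in the next cell):

```python
import pandas as pd

# Toy stand-in frame; the real one comes from read_csv on the Marvel CSV
toy = pd.DataFrame({"name": ["Spider-Man", "Loki"],
                    "ALIGN": ["Good Characters", "Bad Characters"]})
n_rows, n_cols = toy.shape   # .shape is (number of rows, number of columns)
print(n_rows, n_cols)        # 2 2
```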
``` import pandas as pd import seaborn as sns from matplotlib import pyplot as plt %matplotlib inline marvel = pd.read_csv("https://raw.githubusercontent.com/csbfx/advpy122-data/master/marvel-wikia-data.csv") print("rows " + str(len(marvel))) print("columns " + str(len(marvel.columns))) ``` ## Q2. Strip plots Come up with a question using this dataset and use a `strip` plot to help answer the question. State your question in a markdown cell. Recall that a `strip` plot is a categorical plot where one axis is category and the other is a continuous variable. Set the appropriate arguments to make the plot more readable. Be sure to include a meaningful title for the plot. **Show the distribution of appearances of characters based on their alignment. What can you infer from the data?** ``` stripplt = sns.catplot(data= marvel, x="ALIGN", y="APPEARANCES", aspect = 1.5) stripplt.fig.suptitle("Appearances of characters based on their alignments", y = 1) ``` Among all the characters, good characters have made the most appearances, reflecting the recurring Marvel theme in which heroes stick around across many issues while the villains they defeat are eventually killed off or written out. ## Q3. Multiples of Strip plots Come up with a question using this dataset and use a strip plot that contains multiples (splitting the plot into multiples by a category that has two or more unique values) by using the `row` or `col` argument. State your question in a markdown cell. Recall that a strip plot is a categorical plot where one axis is category and the other is a continuous variable. Set the appropriate arguments to make the plot more readable. Be sure to include a meaningful title for the plot.
**Create a strip plot showing the relationship between Marvel characters' physical features, such as eye and hair color, and the number of appearances they've made in comics.** ``` marvel['EYE'] = marvel.EYE.str.replace(" Eyes","") mstripplt = sns.catplot(data= marvel, x="EYE", y="APPEARANCES", row="HAIR", hue="EYE", aspect = 3.5) mstripplt.fig.suptitle("Physical characteristics of characters and their relationship with the appearances they've made", y = 1) ``` ## Q4. Swarm plot Come up with a question using this dataset and use a `swarm` plot to help answer the question. State your question in a markdown cell. Recall that a `swarm` plot is also a categorical plot where one axis is category and the other is a continuous variable. Set the appropriate arguments to make the plot more readable. Be sure to include a meaningful title for the plot. **What does the identity of a good character from Marvel say about their chances of living or dying? Create a swarm plot to illustrate your point.** ``` marvel_good = marvel[marvel["ALIGN"] == "Good Characters"] swarm = sns.catplot(data=marvel_good, x="ALIVE", y= "Year", hue="ID", kind= "swarm") swarm.fig.suptitle("Living status of good characters based on their identities") ``` The swarm plot reveals some interesting patterns. Surprisingly, it is characters with secret identities that have died rather than those with public identities. We'd expect good characters who hide their identities to have a higher chance of survival, but that's not the case. ## Q5. Box plots Box plot is one of the most commonly used plots for visualizing data distribution. We can convert the `swarm` plot into a `box` plot by simply changing the kind argument to `kind="box"`. Convert the swarm plot that you created in Q4 into a box plot here. Set the appropriate arguments to make the plot more readable. Be sure to include a meaningful title for the plot.
``` box = sns.catplot(data=marvel_good, x="ALIVE", y= "Year", hue="ID", kind= "box") box.fig.suptitle("Box plot for Living status of good characters based on their identities") ``` ## Q6. Violin plots Come up with a question using this dataset and use a `violin` plot to help answer the question. State your question in a markdown cell. Recall that a `violin` plot is also a categorical plot where one axis is category and the other is a continuous variable. Set the appropriate arguments to make the plot more readable. You might want to set `cut` to zero if the distribution spreads beyond the values of the data in the dataset. Be sure to include a meaningful title for the plot. **Create a violin plot showing the relationship between characters of a gender or sexual minority and the year when they made their first appearance. What do you conclude from the violin plot?** ``` stripplt = sns.catplot(data = marvel, x ="GSM", y = "Year", aspect = 2.5, kind= "violin", hue="ALIVE") stripplt.fig.suptitle("Inclusion of GSM characters over the years", y = 1) ``` From the violin plot, we can conclude that Marvel only started including GSM characters relatively late, from the 1990s onward. ## Bonus: Because violin plots are symmetrical, when we have only two categories we can put one on each side with `split = True`. Try to create a violin plot using the `split` parameter. You will need to come up with a dataframe using this dataset with data that has two categories. ``` marvel stripplt1 = sns.catplot(data = marvel, x ="GSM", y = "Year", aspect = 2.5, kind= "violin",hue="ALIVE", split= True) stripplt1.fig.suptitle("Inclusion of GSM characters over the years", y = 1) ```
# Train convolutional network for sentiment analysis. Based on "Convolutional Neural Networks for Sentence Classification" by Yoon Kim http://arxiv.org/pdf/1408.5882v2.pdf

`CNN-non-static` gets to 82.1% after 61 epochs with the following settings:
- embedding_dim = 20
- filter_sizes = (3, 4)
- num_filters = 3
- dropout_prob = (0.7, 0.8)
- hidden_dims = 100

`CNN-rand` gets to 78-79% after 7-8 epochs with the following settings:
- embedding_dim = 20
- filter_sizes = (3, 4)
- num_filters = 150
- dropout_prob = (0.25, 0.5)
- hidden_dims = 150

`CNN-static` gets to 75.4% after 7 epochs with the following settings:
- embedding_dim = 100
- filter_sizes = (3, 4)
- num_filters = 150
- dropout_prob = (0.25, 0.5)
- hidden_dims = 150

* It turns out that such a small data set as "Movie reviews with one sentence per review" (Pang and Lee, 2005) requires a much smaller network than the one introduced in the original article:
- embedding dimension is only 20 (instead of 300; 'CNN-static' still requires ~100)
- 2 filter sizes (instead of 3)
- higher dropout probabilities and
- 3 filters per filter size is enough for 'CNN-non-static' (instead of 100)
- embedding initialization does not require prebuilt Google Word2Vec data.
Training Word2Vec on the same "Movie reviews" data set is enough to achieve the performance reported in the article (81.6%). Another distinct difference is a sliding MaxPooling window of length=2 instead of MaxPooling over the whole feature map as in the article. ``` import numpy as np import data_helpers from w2v import train_word2vec from keras.models import Sequential, Model from keras.layers import Activation, Dense, Dropout, Embedding, Flatten, Input, Merge, Convolution1D, MaxPooling1D from sklearn.cross_validation import train_test_split np.random.seed(2) model_variation = 'CNN-rand' # CNN-rand | CNN-non-static | CNN-static print('Model variation is %s' % model_variation) # Model Hyperparameters sequence_length = 56 embedding_dim = 20 filter_sizes = (3, 4) num_filters = 150 dropout_prob = (0.25, 0.5) hidden_dims = 150 # Training parameters batch_size = 32 num_epochs = 2 # Word2Vec parameters, see train_word2vec min_word_count = 1 # Minimum word count context = 10 # Context window size print("Loading data...") x, y, vocabulary, vocabulary_inv = data_helpers.load_data() if model_variation=='CNN-non-static' or model_variation=='CNN-static': embedding_weights = train_word2vec(x, vocabulary_inv, embedding_dim, min_word_count, context) if model_variation=='CNN-static': x = embedding_weights[0][x] elif model_variation=='CNN-rand': embedding_weights = None else: raise ValueError('Unknown model variation') data = np.append(x,y,axis = 1) train, test = train_test_split(data, test_size = 0.15,random_state = 0) X_test = test[:,:56] Y_test = test[:,56:58] X_train = train[:,:56] Y_train = train[:,56:58] train_rows = np.random.randint(0,X_train.shape[0],2500) X_train = X_train[train_rows] Y_train = Y_train[train_rows] print("Vocabulary Size: {:d}".format(len(vocabulary))) def initialize(): global graph_in global convs graph_in = Input(shape=(sequence_length, embedding_dim)) convs = [] #Building the first layer (Convolution Layer) of the network def build_layer_1(filter_length): conv = 
Convolution1D(nb_filter=num_filters, filter_length=filter_length, border_mode='valid', activation='relu', subsample_length=1)(graph_in) return conv #Adding a max pooling layer to the model(network) def add_max_pooling(conv): pool = MaxPooling1D(pool_length=2)(conv) return pool #Adding a flattening layer to the model(network), before adding a dense layer def add_flatten(conv_or_pool): flatten = Flatten()(conv_or_pool) return flatten def add_sequential(graph): #main sequential model model = Sequential() if not model_variation=='CNN-static': model.add(Embedding(len(vocabulary), embedding_dim, input_length=sequence_length, weights=embedding_weights)) model.add(Dropout(dropout_prob[0], input_shape=(sequence_length, embedding_dim))) model.add(graph) model.add(Dense(2)) model.add(Activation('sigmoid')) return model #1.Convolution 2.Flatten def one_layer_convolution(): initialize() conv = build_layer_1(3) flatten = add_flatten(conv) convs.append(flatten) out = convs[0] graph = Model(input=graph_in, output=out) model = add_sequential(graph) model.compile(loss='binary_crossentropy', optimizer='rmsprop', metrics=['accuracy']) model.fit(X_train, Y_train, batch_size=batch_size, nb_epoch=1, validation_data=(X_test, Y_test)) score = model.evaluate(X_test, Y_test, verbose=0) print('Test score:', score[0]) print('Test accuracy:', score[1]) #1.Convolution 2.Max Pooling 3.Flatten def two_layer_convolution(): initialize() conv = build_layer_1(3) pool = add_max_pooling(conv) flatten = add_flatten(pool) convs.append(flatten) out = convs[0] graph = Model(input=graph_in, output=out) model = add_sequential(graph) model.compile(loss='binary_crossentropy', optimizer='rmsprop', metrics=['accuracy']) model.fit(X_train, Y_train, batch_size=batch_size, nb_epoch=num_epochs, validation_data=(X_test, Y_test)) score = model.evaluate(X_test, Y_test, verbose=0) print('Test score:', score[0]) print('Test accuracy:', score[1]) #1.Convolution 2.Max Pooling 3.Flatten 4.Convolution 5.Flatten def 
three_layer_convolution(): initialize() conv = build_layer_1(3) pool = add_max_pooling(conv) flatten = add_flatten(pool) convs.append(flatten) conv = build_layer_1(4) flatten = add_flatten(conv) convs.append(flatten) if len(filter_sizes)>1: out = Merge(mode='concat')(convs) else: out = convs[0] graph = Model(input=graph_in, output=out) model = add_sequential(graph) model.compile(loss='binary_crossentropy', optimizer='rmsprop', metrics=['accuracy']) model.fit(X_train, Y_train, batch_size=batch_size, nb_epoch=1, validation_data=(X_test, Y_test)) score = model.evaluate(X_test, Y_test, verbose=0) print('Test score:', score[0]) print('Test accuracy:', score[1]) #1.Convolution 2.Max Pooling 3.Flatten 4.Convolution 5.Max Pooling 6.Flatten def four_layer_convolution(): initialize() conv = build_layer_1(3) pool = add_max_pooling(conv) flatten = add_flatten(pool) convs.append(flatten) conv = build_layer_1(4) pool = add_max_pooling(conv) flatten = add_flatten(pool) convs.append(flatten) if len(filter_sizes)>1: out = Merge(mode='concat')(convs) else: out = convs[0] graph = Model(input=graph_in, output=out) model = add_sequential(graph) model.compile(loss='binary_crossentropy', optimizer='rmsprop', metrics=['accuracy']) model.fit(X_train, Y_train, batch_size=batch_size, nb_epoch=num_epochs, validation_data=(X_test, Y_test)) score = model.evaluate(X_test, Y_test, verbose=0) print('Test score:', score[0]) print('Test accuracy:', score[1]) %%time #1.Convolution 2.Flatten one_layer_convolution() %%time #1.Convolution 2.Max Pooling 3.Flatten two_layer_convolution() %%time #1.Convolution 2.Max Pooling 3.Flatten 4.Convolution 5.Flatten three_layer_convolution() %%time #1.Convolution 2.Max Pooling 3.Flatten 4.Convolution 5.Max Pooling 6.Flatten four_layer_convolution() ```
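The sliding max-pooling difference mentioned at the top of this notebook (a window of length 2 with stride 2, rather than one max over the whole feature map) can be illustrated with plain NumPy on a toy feature map:

```python
import numpy as np

feature_map = np.array([0.1, 0.9, 0.3, 0.7, 0.5, 0.2])

# Max pooling over the whole feature map (as in the original article):
global_pool = feature_map.max()                      # one scalar per feature map

# Sliding max pooling with window length 2 and stride 2 (as used here):
sliding_pool = feature_map.reshape(-1, 2).max(axis=1)

print(global_pool)   # 0.9
print(sliding_pool)  # [0.9 0.7 0.5]
```

The sliding variant keeps some positional information (one maximum per window) instead of collapsing each feature map to a single value.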
``` #==========Imports========== import numpy as np import matplotlib.pyplot as plt import astropy.constants as const import time from scipy import interpolate import Zach_OPTIMIZER.EBMFunctions as opt import Bell_EBM as ebm #==========Set Up System========== planet = ebm.Planet(rad=1.500*const.R_jup.value, mass=1.170*const.M_jup.value, Porb=1.09142030, a=0.02340*2*const.au.value, inc=83.37, vWind=5e3, nlat = 8, e=0.2) star = ebm.Star(teff=6300., rad=1.59, mass=1.20) system = ebm.System(star, planet) def CreateBaseline(star, planet, temporal=5000, spacial=32,orbit=2): _star = star _planet = planet _system = ebm.System(_star, _planet) Teq = _system.get_teq() T0 = np.ones_like(_system.planet.map.values)*Teq t0 = 0. t1 = t0+_system.planet.Porb*orbit dt = _system.planet.Porb/temporal baselineTimes, baselineMaps = _system.run_model(T0, t0, t1, dt, verbose=False, intermediates=False) if (planet.orbit.e != 0.): T0 = baselineMaps[-1] t0 = baselineTimes[-1] t1 = t0+system.planet.Porb dt = (system.planet.Porb)/1000. 
baselineTimes, baselineMaps = system.run_model(T0, t0, t1, dt, verbose=False, intermediates=True) baselineLightcurve = system.lightcurve(baselineTimes, baselineMaps, bolo=False, wav=4.5e-6) # phaseBaseline = system.get_phase(baselineTimes).flatten() # order = np.argsort(phaseBaseline) # baselineLightcurve = baselineLightcurve[order] # phaseBaseline = phaseBaseline[order] else: baselineLightcurve = system.lightcurve(bolo=False, wav=4.5e-6) return baselineTimes, baselineMaps, baselineLightcurve blt, blm, blc = opt.CreateBaseline(star,planet) plt.plot(blc) def RunTests(star, planet, points, base): data = np.zeros(shape=(points.shape[0],4)) _star = star _planet = planet _system = ebm.System(_star,_planet) for i in range(0, points.shape[0]): _star = star _planet = planet _planet.map = ebm.Map.Map(nlat=points[i,1]) _system = ebm.System(_star, _planet) data[i,0] = points[i,0] data[i,1] = points[i,1] tInt = time.time() Teq = _system.get_teq() T0 = np.ones_like(_system.planet.map.values)*Teq t0 = 0. 
t1 = t0+_system.planet.Porb dt = _system.planet.Porb/points[i,0] testTimes, testMaps = _system.run_model(T0, t0, t1, dt, verbose=False) if (_planet.orbit.e != 0): T0 = testMaps[-1] t0 = testTimes[-1] t1 = t0+_system.planet.Porb dt = _system.planet.Porb/points[i,0] testTimes, testMaps = _system.run_model(T0, t0, t1, dt, verbose=False, intermediates=True) testLightcurve = _system.lightcurve(testTimes, testMaps, bolo=False, wav=4.5e-6) phaseTest = _system.get_phase(testTimes).flatten() order = np.argsort(phaseTest) testLightcurve = testLightcurve[order] phaseTest = phaseTest[order] testLightcurve = np.interp(base, phaseTest, testLightcurve) else: testLightcurve = _system.lightcurve(bolo=False, wav=4.5e-6) tFin = time.time() data[i,3] = (1e6)*(np.amax(np.absolute(base - testLightcurve))) data[i,2] = (tFin - tInt)*(1e3) return testLightcurve, data p = np.zeros(shape=((10),2)) p[:,0]=500 p[:,1]=8 p[9,0] = 500 p[9,1] = 8 p lc, data = opt.RunTests(star,planet,p,blc) plt.plot(lc) plt.plot(lc, c='g') plt.plot(blc, c='b') plt.plot((blc-lc)*(1e6)) def Optimize(star, planet, error, verbose=False): _planet = planet _star = star aError = error #==========High Res Baseline Creation========== if (verbose == True): print("Starting baseline generation...") tInt = time.time() blt, blm, blc = CreateBaseline(_star, _planet) tFin = time.time() if (verbose == True): print("Baseline generation complete; Time to Compute: " + str(round(tFin-tInt,2)) + "s") #===========Initial data creation================ space_points = 5 temp_points = 5 data = np.zeros(shape=((space_points*temp_points),4)) for i in range (0, temp_points): for j in range (0, space_points): data[(i*space_points)+j,0]= ((i+1)*250)+0 data[(i*space_points)+j,1] = ((j+1)*4)+0 if (verbose == True): print("First pass data points assigned") #==================First pass testing Area====================== if (verbose == True): print("Starting first pass...") tInt = time.time() lc, data = RunTests(_star, _planet, data, blc) tFin = 
time.time() if (verbose == True): print("First pass finished : Time to compute: " + str(round(tFin-tInt,2)) + "s") #=================First pass best point=================== #print(data) #For debugging purposes if (verbose == True): print("Processing first pass data...") iBest = None for i in range(0,space_points*temp_points): if (data[i,3]<=(aError*1.05)): if (iBest is None): iBest = i if(data[i,2] < data[iBest,2]): iBest = i #===========Second pass data creation================ space_points = 5 temp_points = 5 dataDouble = np.zeros(shape=((space_points*temp_points),2)) for i in range (0, temp_points): for j in range (0, space_points): dataDouble[(i*space_points)+j,0] = ((i)*50)+(data[iBest,0]-100) if (dataDouble[(i*space_points)+j,0]<100): dataDouble[(i*space_points)+j,0] = 100 dataDouble[(i*space_points)+j,1] = ((j)*2)+(data[iBest,1]-4) if (dataDouble[(i*space_points)+j,1]<2): dataDouble[(i*space_points)+j,1] = 2 if (verbose == True): print("Second pass data points assigned") #==================Second pass testing Area====================== if (verbose == True): print("Starting second pass...") tInt = time.time() lc, dataDouble = RunTests(_star, _planet, dataDouble, blc) tFin = time.time() if (verbose == True): print("Second pass finished : Time to compute: " + str(round(tFin-tInt,2)) + "s") #=================Finding best second pass point=================== #print(data) #For debugging purposes if (verbose == True): print("Processing second pass data...") iBest = None for i in range(0,space_points*temp_points): if (dataDouble[i,3]<=aError): if (iBest is None): iBest = i if(dataDouble[i,2] < dataDouble[iBest,2]): iBest = i if (iBest is None): print("No points match requested error") else: print("Temporal: " + str(dataDouble[iBest,0]) + " Spatial: " + str(dataDouble[iBest,1])) print("Time for compute: " + str(round(dataDouble[iBest, 2],2)) +"ms : Error: " + str(round(dataDouble[iBest, 3],2)) + "ppm") print("Expected compute time @ 1,000,000 cycles: " +
str((round((dataDouble[iBest, 2]*1e3/60)/60,2))) + " Hrs") return dataDouble[iBest,0], dataDouble[iBest,1] # #print(data) #For debugging # #print(dataDouble) #For debugging # #=========Create Maps================== # if (verbose == True): # planet.map = ebm.Map.Map(nlat=dataDouble[iBest,1]) # system = ebm.System(star, planet) # TotalTimeToCompute = 0. # Teq = system.get_teq() # T0 = np.ones_like(system.planet.map.values)*Teq # t0 = 0. # t1 = t0+system.planet.Porb*1 # dt = system.planet.Porb/dataDouble[iBest,0] # times, maps, ttc = system.run_model_tester(T0, t0, t1, dt, verbose=False) # TotalTimeToCompute += ttc # if (planet.orbit.e != 0): # T0 = maps[-1] # t0 = times[-1] # t1 = t0+system.planet.Porb # dt = system.planet.Porb/dataDouble[iBest,0] # times, maps, ttc = system.run_model_tester(T0, t0, t1, dt, verbose=False, intermediates=True) # TotalTimeToCompute += ttc # testLightcurve = system.lightcurve(times, maps, bolo=False, wav=4.5e-6) # phaseTest = system.get_phase(times).flatten() # order = np.argsort(phaseTest) # testLightcurve = testLightcurve[order] # phaseTest = phaseTest[order] # testLightcurve = np.interp(phaseBaseline, phaseTest, testLightcurve) # plt.plot((baselineLightcurve)*1e6, lw=2, c='g') # plt.plot((testLightcurve)*1e6, lw=1, c='r') # plt.title("Lightcurves of baseline (green) compared to recommended values (red)") # plt.show() temp, space = opt.Optimize(star, planet, 100, verbose=True) phaseBaseline = system.get_phase(blt).flatten() order = np.argsort(phaseBaseline) baselineLightcurve = blc[order] phaseBaseline = phaseBaseline[order] _star = star _planet = planet _planet.map = ebm.Map.Map(nlat=8) _system = ebm.System(_star, _planet) tInt = time.time() Teq = _system.get_teq() T0 = np.ones_like(_system.planet.map.values)*Teq t0 = 0. 
t1 = t0+_system.planet.Porb dt = _system.planet.Porb/500 testTimes, testMaps = _system.run_model(T0, t0, t1, dt, verbose=False) if (_planet.orbit.e != 0): T0 = testMaps[-1] t0 = testTimes[-1] t1 = t0+_system.planet.Porb dt = _system.planet.Porb/500 testTimes, testMaps = _system.run_model(T0, t0, t1, dt, verbose=False, intermediates=True) testLightcurve = _system.lightcurve(testTimes, testMaps, bolo=False, wav=4.5e-6) testbeta = testLightcurve phaseTest = _system.get_phase(testTimes).flatten() order = np.argsort(phaseTest) testLightcurve = testLightcurve[order] testalpha = testLightcurve phaseTest = phaseTest[order] testLightcurve = np.interp(phaseBaseline, phaseTest, testLightcurve) else: testLightcurve = _system.lightcurve(bolo=False, wav=4.5e-6) tFin = time.time() plt.plot(blc) plt.plot(testbeta) plt.plot(testalpha) plt.plot(testLightcurve) phaseTest testTimes blt blt.shape testTimes.shape def RunTests(star, planet, points, base, basetimes, basemap): """ Runs several tests of a system and returns time to compute and error as compared to baseline for each test.
Args: star (ebm.Star): The star to run the tests on planet (ebm.Planet): The planet to run the tests on points (2darray (n by 2)): The array of points to be tested by the model, each point must contain [temporal, spatial], n points are provided base (ndarray): Baseline lightcurve as generated by the CreateBaseline function basetimes (ndarray): Baseline times as generated by the CreateBaseline function basemap (ndarray): Baseline maps as generated by the CreateBaseline function Return: ndarray: Latest tested lightcurve, mainly used for debugging purposes ndarray: (n by 4), n points of format [temporal, spatial, time_to_compute, error_in_ppm] """ data = np.zeros(shape=(points.shape[0],4)) _star = star _planet = planet _system = ebm.System(_star,_planet) if (_planet.orbit.e != 0): phaseBaseline = _system.get_phase(basetimes).flatten() order = np.argsort(phaseBaseline) baselineLightcurve = base[order] phaseBaseline = phaseBaseline[order] for i in range(0, points.shape[0]): _star = star _planet = planet _planet.map = ebm.Map.Map(nlat=points[i,1]) _system = ebm.System(_star, _planet) data[i,0] = points[i,0] data[i,1] = points[i,1] tInt = time.time() Teq = _system.get_teq() T0 = np.ones_like(_system.planet.map.values)*Teq t0 = 0.
t1 = t0+_system.planet.Porb dt = _system.planet.Porb/points[i,0] testTimes, testMaps = _system.run_model(T0, t0, t1, dt, verbose=False) # if (_planet.orbit.e != 0): # T0 = testMaps[-1] # t0 = testTimes[-1] # t1 = t0+_system.planet.Porb # dt = _system.planet.Porb/points[i,0] # testTimes, testMaps = _system.run_model(T0, t0, t1, dt, verbose=False, intermediates=True) # testLightcurve = _system.lightcurve(testTimes, testMaps, bolo=False, wav=4.5e-6) # phaseTest = _system.get_phase(testTimes).flatten() # order = np.argsort(phaseTest) # testLightcurve = testLightcurve[order] # phaseTest = phaseTest[order] # testLightcurve = np.interp(phaseBaseline, phaseTest, testLightcurve) # else: # testLightcurve = _system.lightcurve(bolo=False, wav=4.5e-6) if (_planet.orbit.e != 0): T0 = testMaps[-1] t0 = testTimes[-1] t1 = t0+_system.planet.Porb dt = _system.planet.Porb/points[i,0] testTimes, testMaps = _system.run_model(T0, t0, t1, dt, verbose=False, intermediates=True) testLightcurve = _system.lightcurve(testTimes, testMaps, bolo=False, wav=4.5e-6) #testbeta = testLightcurve phaseTest = _system.get_phase(testTimes).flatten() order = np.argsort(phaseTest) testLightcurve = testLightcurve[order] #testalpha = testLightcurve phaseTest = phaseTest[order] testLightcurve = np.interp(phaseBaseline, phaseTest, testLightcurve) else: testLightcurve = _system.lightcurve(bolo=False, wav=4.5e-6) tFin = time.time() data[i,3] = (1e6)*(np.amax(np.absolute(base - testLightcurve))) data[i,2] = (tFin - tInt)*(1e3) return testLightcurve, data light, ded = RunTests(star,planet,p,blc,blt,blm) plt.plot(light) phaseBaseline = system.get_phase(blt).flatten() order = np.argsort(phaseBaseline) baselineLightcurve = blc[order] phaseBaseline = phaseBaseline[order] #==========Imports========== import numpy as np import matplotlib.pyplot as plt import astropy.constants as const import time from scipy import interpolate import Zach_OPTIMIZER.EBMFunctions as opt import Bell_EBM as ebm def RunTests(star, planet,
points, base, basetimes): """ Runs several tests of a system and returns time to compute and error as compared to baseline for each test. Args: star (ebm.Star): The star to run the tests on planet (ebm.Planet): The planet to run the tests on points (2darray (n by 2)): The array of points to be tested by the model, each point must contain [temporal, spatial], n points are provided base (ndarray): Baseline lightcurve as generated by the CreateBaseline function basetimes (ndarray): Baseline times as generated by the CreateBaseline function Return: ndarray: Latest tested lightcurve, mainly used for debugging purposes ndarray: (n by 4), n points of format [temporal, spatial, time_to_compute, error_in_ppm] """ data = np.zeros(shape=(points.shape[0],4)) _star = star _planet = planet _system = ebm.System(_star,_planet) if (_planet.orbit.e != 0): phaseBaseline = _system.get_phase(basetimes).flatten() order = np.argsort(phaseBaseline) baselineLightcurve = base[order] phaseBaseline = phaseBaseline[order] for i in range(0, points.shape[0]): _star = star _planet = planet _planet.map = ebm.Map.Map(nlat=points[i,1]) _system = ebm.System(_star, _planet) data[i,0] = points[i,0] data[i,1] = points[i,1] tInt = time.time() Teq = _system.get_teq() T0 = np.ones_like(_system.planet.map.values)*Teq t0 = 0.
t1 = t0+_system.planet.Porb dt = _system.planet.Porb/points[i,0] testTimes, testMaps = _system.run_model(T0, t0, t1, dt, verbose=False) if (_planet.orbit.e != 0): T0 = testMaps[-1] t0 = testTimes[-1] t1 = t0+_system.planet.Porb dt = _system.planet.Porb/points[i,0] testTimes, testMaps = _system.run_model(T0, t0, t1, dt, verbose=False, intermediates=True) testLightcurve = _system.lightcurve(testTimes, testMaps, bolo=False, wav=4.5e-6) phaseTest = _system.get_phase(testTimes).flatten() order = np.argsort(phaseTest) testLightcurve = testLightcurve[order] phaseTest = phaseTest[order] testLightcurve = np.interp(phaseBaseline, phaseTest, testLightcurve) else: testLightcurve = _system.lightcurve(bolo=False, wav=4.5e-6) tFin = time.time() data[i,3] = (1e6)*(np.amax(np.absolute(base - testLightcurve))) data[i,2] = (tFin - tInt)*(1e3) return testLightcurve, data ```
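The `data[:,3]` column filled in by `RunTests` above stores each run's error as the maximum absolute deviation from the baseline lightcurve, scaled to parts per million. A minimal self-contained sketch of that metric (the array values below are made up for illustration, not real model output):

```python
import numpy as np

def max_error_ppm(baseline, test):
    # Largest pointwise deviation between two lightcurves, in parts per million,
    # mirroring the data[i,3] computation in RunTests
    return 1e6 * np.amax(np.absolute(baseline - test))

# Illustrative values only: two nearly identical normalized lightcurves
baseline = np.array([1.000, 1.001, 1.002, 1.001])
test = np.array([1.000, 1.001, 1.0021, 1.001])
print(round(max_error_ppm(baseline, test)))  # 100
```

A tolerance expressed this way is convenient because it is directly comparable to the photometric precision of the instrument being simulated.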
### STEPS: #### Pipeline - 1 1. Tokenization 1. Remove StopWords and Punctuation 1. Stemming #### Pipeline - 2 1. Tokenization 1. POS Tagger 1. Lemmatization ***Remember to Deal With Everything in Lower Case*** ``` import nltk nltk.download('punkt') # For Tokenizing nltk.download('stopwords') # For Stopwords nltk.download('wordnet') # For Lemmatization nltk.download('averaged_perceptron_tagger') # For POS Tagging from nltk.stem import WordNetLemmatizer from nltk.stem.snowball import EnglishStemmer from nltk import pos_tag # POS Tagger from nltk.corpus import wordnet as wn import string titles = [ 'Industrial Disease', 'Private Investigations', 'So Far Away', 'Twisting by the Pool', 'Skateaway', 'Walk of Life', 'Romeo and Juliet', 'Tunnel of Love', 'Money for Nothing', 'Sultans of Swing', 'Stairway To Heaven', 'Kashmir', 'Achilles Last Stand', 'Whole Lotta Love', 'Immigrant Song', 'Black Dog', 'When The Levee Breaks', 'Since I\'ve Been Lovin\' You', 'Since I\'ve Been Loving You', 'Over the Hills and Far Away', 'Dazed and Confused' ] len(titles) ``` #### TOKENIZER ``` def tokenize(sentence): return nltk.tokenize.word_tokenize(sentence) print(titles[17]) print(tokenize(titles[17])) # note the "(')" being tokenized; we will remove it later ``` #### POS TAGGER CONVERTER ``` def penn_to_wn(tag): """ Convert between the PennTreebank (pos_tag) tags to simple Wordnet tags """ if tag.startswith('J'): return wn.ADJ elif tag.startswith('N'): return wn.NOUN elif tag.startswith('R'): return wn.ADV elif tag.startswith('V'): return wn.VERB return None ``` #### LEMMATIZER ``` wnl = WordNetLemmatizer() def lemmatize(word, pos=wn.NOUN): return wnl.lemmatize(word, pos=pos) print(titles[3]) print(tokenize(titles[3])) for w in tokenize(titles[3]): print(lemmatize(w.lower())) ``` #### STEMMER ``` def stem(word): stemmer = EnglishStemmer() return stemmer.stem(word) print(titles[3]) print(tokenize(titles[3])) for w in tokenize(titles[3]): print(stem(w.lower())) ``` #### STOPWORDS ``` stopwords = nltk.corpus.stopwords.words('english') print(stopwords) ``` #### INDEXED DATABASES #### Pipeline 1 ``` db = {} for sentence in titles: words = tokenize(sentence) for word in words: word = word.lower() if word not in stopwords and word not in string.punctuation: root = stem(word) if root in db: db[root].append(sentence) else: db[root] = [sentence] db ``` #### Pipeline 2 ``` db2 = {} for sentence in titles: words = tokenize(sentence) tagged_sentence = pos_tag(words) for word, tag in tagged_sentence: word = word.lower() tag = penn_to_wn(tag) if tag in (wn.NOUN, wn.ADJ, wn.VERB, wn.ADV): root = lemmatize(word, tag) if root in db2: db2[root].append(sentence) else: db2[root] = [sentence] db2 ``` #### WE CAN OBSERVE THAT BY USING LEMMATIZATION WE PRESERVED THE MORPHOLOGY
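The observation above can be illustrated without NLTK: a crude suffix-stripping stemmer happily produces truncated non-words as index keys, while a lemma lookup keeps dictionary forms. The mini-stemmer and the `LEMMAS` table below are hypothetical stand-ins for illustration only, not the Snowball or WordNet implementations:

```python
def naive_stem(word):
    # Crude suffix stripping in the spirit of a Porter/Snowball stemmer
    for suffix in ('ations', 'ation', 'ing', 'ed', 's'):
        if word.endswith(suffix) and len(word) > len(suffix) + 2:
            return word[:-len(suffix)]
    return word

# Hypothetical lookup table standing in for WordNet-based lemmatization
LEMMAS = {'investigations': 'investigation', 'twisting': 'twist', 'loving': 'love'}

def naive_lemmatize(word):
    return LEMMAS.get(word, word)

print(naive_stem('investigations'))       # 'investig' -- a truncated non-word
print(naive_lemmatize('investigations'))  # 'investigation' -- morphology preserved
```

This is why a lemma-keyed index (Pipeline 2) stays human-readable, at the cost of needing POS tags and a dictionary.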
[![img/pythonista.png](img/pythonista.png)](https://www.pythonista.io) # The *OpenAPI* Schema. https://swagger.io/docs/specification/basic-structure/ ## Structure. * *OpenAPI* version. * Information (```info```). * Tags (```tags```). * Servers (```servers```). * Components (```components```). * Schemas (```schemas```). * Request bodies (```requestBodies```). * Paths (```paths```). ## *OpenAPI* version. ```yaml openapi: <version> ``` https://github.com/OAI/OpenAPI-Specification/blob/main/versions/3.0.3.md#versions ## Information. ``` yaml info: description: <API description> version: <API version> title: <Title of the API documentation> termsOfService: <URL of the terms of service> contact: name: <Contact name> email: <Contact email address> url: <Reference URL> license: name: <License name> url: <License URL> externalDocs: description: <Description of the external documentation> url: <URL of the external documentation> ``` https://swagger.io/docs/specification/api-general-info/ ## Tags: ```yaml tags: - name: <name of tag 1> description: <description of tag 1> - name: <name of tag 2> description: <description of tag 2> ``` https://swagger.io/docs/specification/grouping-operations-with-tags/ ## Servers: ``` yaml servers: - url: <URL of server 1> description: <description of server 1> - url: <URL of server 2> description: <description of server 2> ``` ## Components. https://swagger.io/docs/specification/components/ * Schemas (*schemas*) * Request bodies (*requestBodies*) ``` yaml components: requestBodies: - <request schema 1> - <request schema 2> schemas: - <schema 1> - <schema 2> parameters: - <parameter 1> - <parameter 2> responses: - <response 1> - <response 2> headers: - <header 1> - <header 2> examples: - <example 1> - <example 2> callbacks: - <URL 1> - <URL 2> ``` ## Paths.
https://swagger.io/docs/specification/paths-and-operations/ ``` "/<segment 1>{<parameter 1>}<segment 2>{<parameter 2>}" ``` **Examples:** * ```/api/{clave}``` * ```/api/{clave}-{id}/mensajes``` * ```/auth/logout``` ``` yaml paths: <path 1>: <method 1> <method 2> <parameters>: <parameter 1> <parameter 2> ``` ### Parameters. Parameters are data obtained from the path or from the query string sent with the request. ``` yaml parameters: - name: <Parameter name> in: <Source> description: <Parameter description> required: <boolean> example: <Parameter example> schema: <schema> ``` ### Methods. ``` yaml <method>: tags: - <tag 1> - <tag 2> summary: <Summary of the functionality> description: <Description of the functionality> parameters: - <parameter 1> - <parameter 2> responses: <status code 1> <status code 2> requestBody: <request schema> ``` ### Status codes. ``` yaml <status code number 1>: description: <Description of the functionality> content: <application type>: <schema of the response content> ``` ### Response contents. https://swagger.io/docs/specification/describing-responses/ https://swagger.io/docs/specification/data-models/representing-xml/ ## Schemas. https://swagger.io/docs/specification/data-models/ ## Data types. https://swagger.io/docs/specification/data-models/data-types/ ### The ```string``` type. https://swagger.io/docs/specification/data-models/data-types/#string ### The ```number``` and ```integer``` types. https://swagger.io/docs/specification/data-models/data-types/#numbers ### The ```boolean``` type. https://swagger.io/docs/specification/data-models/data-types/#boolean ### The ```array``` type. https://swagger.io/docs/specification/data-models/data-types/#array ### The ```object``` type. https://swagger.io/docs/specification/data-models/data-types/#object ## Enums.
``` yaml type: <type> enum: - <element 1> - <element 2> ``` https://swagger.io/docs/specification/data-models/enums/ ## References. ``` $ref: "path" ``` https://swagger.io/docs/specification/using-ref/ ### References within the document. ``` #/<level 1>/<level 2>/... /<level n>/<element> ``` ## Examples. https://swagger.io/docs/specification/adding-examples/ # <p style="text-align: center"><a rel="license" href="http://creativecommons.org/licenses/by/4.0/"><img alt="Creative Commons License" style="border-width:0" src="https://i.creativecommons.org/l/by/4.0/80x15.png" /></a><br />This work is licensed under a <a rel="license" href="http://creativecommons.org/licenses/by/4.0/">Creative Commons Attribution 4.0 International License</a>.</p> <p style="text-align: center">&copy; José Luis Chiquete Valdivieso. 2022.</p>
``` # Copyright 2019 Google LLC # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. ``` # Getting started: Training and prediction with Keras in AI Platform <img src="https://storage.googleapis.com/cloud-samples-data/ml-engine/census/keras-tensorflow-cmle.png" alt="Keras, TensorFlow, and AI Platform logos" width="300px"> <table align="left"> <td> <a href="https://cloud.google.com/ml-engine/docs/tensorflow/getting-started-keras"> <img src="https://cloud.google.com/_static/images/cloud/icons/favicons/onecloud/super_cloud.png" alt="Google Cloud logo" width="32px"> Read on cloud.google.com </a> </td> <td> <a href="https://colab.research.google.com/github/GoogleCloudPlatform/cloudml-samples/blob/master/notebooks/tensorflow/getting-started-keras.ipynb"> <img src="https://cloud.google.com/ml-engine/images/colab-logo-32px.png" alt="Colab logo"> Run in Colab </a> </td> <td> <a href="https://github.com/GoogleCloudPlatform/cloudml-samples/blob/master/notebooks/tensorflow/getting-started-keras.ipynb"> <img src="https://cloud.google.com/ml-engine/images/github-logo-32px.png" alt="GitHub logo"> View on GitHub </a> </td> </table> ## Overview This tutorial shows how to train a neural network on Cloud Machine Learning Engine using the Keras sequential API and how to serve predictions from that model. Keras is a high-level API for building and training deep learning models. [tf.keras](https://www.tensorflow.org/guide/keras) is TensorFlow’s implementation of this API. 
The first two parts of the tutorial walk through training a model on Cloud AI Platform using prewritten Keras code, deploying the trained model to Cloud ML Engine, and serving online predictions from the deployed model. The last part of the tutorial digs into the training code used for this model and ensures it's compatible with AI Platform. To learn more about building machine learning models in Keras more generally, read [TensorFlow's Keras tutorials](https://www.tensorflow.org/tutorials/keras). ### Dataset This tutorial uses the [United States Census Income Dataset](https://archive.ics.uci.edu/ml/datasets/census+income) provided by the [UC Irvine Machine Learning Repository](https://archive.ics.uci.edu/ml/index.php). This dataset contains information about people from a 1994 Census database, including age, education, marital status, occupation, and whether they make more than $50,000 a year. ### Objective The goal is to train a deep neural network (DNN) using Keras that predicts whether a person makes more than $50,000 a year (target label) based on other Census information about the person (features). This tutorial focuses more on using this model with AI Platform than on the design of the model itself. However, it's always important to think about potential problems and unintended consequences when building machine learning systems. See the [Machine Learning Crash Course exercise about fairness](https://developers.google.com/machine-learning/crash-course/fairness/programming-exercise) to learn about sources of bias in the Census dataset, as well as machine learning fairness more generally.
### Costs This tutorial uses billable components of Google Cloud Platform (GCP): * AI Platform * Cloud Storage Learn about [AI Platform pricing](https://cloud.google.com/ml-engine/docs/pricing) and [Cloud Storage pricing](https://cloud.google.com/storage/pricing), and use the [Pricing Calculator](https://cloud.google.com/products/calculator/) to generate a cost estimate based on your projected usage. ## Before you begin You must do several things before you can train and deploy a model in Cloud ML Engine: * Set up your local development environment. * Set up a GCP project with billing and the necessary APIs enabled. * Authenticate your GCP account in this notebook. * Create a Cloud Storage bucket to store your training package and your trained model. ### Set up your local development environment **If you are using Colab or Cloud ML Notebooks**, your environment already meets all the requirements to run this notebook. You can skip this step. **Otherwise**, make sure your environment meets this notebook's requirements. You need the following: * The Google Cloud SDK * Git * Python 3 * virtualenv * Jupyter notebook running in a virtual environment with Python 3 The Google Cloud guide to [Setting up a Python development environment](https://cloud.google.com/python/setup) and the [Jupyter installation guide](https://jupyter.org/install) provide detailed instructions for meeting these requirements. The following steps provide a condensed set of instructions: 1. [Install and initialize the Cloud SDK.](https://cloud.google.com/sdk/docs/) 2. [Install Python 3.](https://cloud.google.com/python/setup#installing_python) 3. [Install virtualenv](https://cloud.google.com/python/setup#installing_and_using_virtualenv) and create a virtual environment that uses Python 3. 4. Activate that environment and run `pip install jupyter` in a shell to install Jupyter. 5. Run `jupyter notebook` in a shell to launch Jupyter. 6. Open this notebook in the Jupyter Notebook Dashboard. 
### Set up your GCP project **The following steps are required, regardless of your notebook environment.** 1. [Select or create a GCP project.](https://console.cloud.google.com/cloud-resource-manager) 2. [Make sure that billing is enabled for your project.](https://cloud.google.com/billing/docs/how-to/modify-project) 3. [Enable the AI Platform and Compute Engine APIs.](https://console.cloud.google.com/flows/enableapi?apiid=ml.googleapis.com,compute_component) 4. Enter your project ID in the cell below. Then run the cell to make sure the Cloud SDK uses the right project for all the commands in this notebook. **Note**: Jupyter runs lines prefixed with `!` as shell commands, and it interpolates Python variables prefixed with `$` into these commands. ``` PROJECT_ID = "<your-project-id>" #@param {type:"string"} ! gcloud config set project $PROJECT_ID ``` ### Authenticate your GCP account **If you are using Cloud ML Notebooks**, your environment is already authenticated. Skip this step. **If you are using Colab**, run the cell below and follow the instructions when prompted to authenticate your account via oAuth. **Otherwise**, follow these steps: 1. In the GCP Console, go to the [**Create service account key** page](https://console.cloud.google.com/apis/credentials/serviceaccountkey). 2. From the **Service account** drop-down list, select **New service account**. 3. In the **Service account name** field, enter a name. 4. From the **Role** drop-down list, select **Machine Learning Engine > AI Platform Admin** and **Storage > Storage Object Admin**. 5. Click *Create*. A JSON file that contains your key downloads to your local environment. 6. Enter the path to your service account key as the `GOOGLE_APPLICATION_CREDENTIALS` variable in the cell below and run the cell. ``` import sys # If you are running this notebook in Colab, run this cell and follow the # instructions to authenticate your GCP account. 
This provides access to your # Cloud Storage bucket and lets you submit training jobs and prediction # requests. if 'google.colab' in sys.modules: from google.colab import auth as google_auth google_auth.authenticate_user() # If you are running this notebook locally, replace the string below with the # path to your service account key and run this cell to authenticate your GCP # account. else: %env GOOGLE_APPLICATION_CREDENTIALS '' ``` ### Create a Cloud Storage bucket **The following steps are required, regardless of your notebook environment.** When you submit a training job using the Cloud SDK, you upload a Python package containing your training code to a Cloud Storage bucket. AI Platform runs the code from this package. In this tutorial, AI Platform also saves the trained model that results from your job in the same bucket. You can then create an AI Platform model version based on this output in order to serve online predictions. Set the name of your Cloud Storage bucket below. It must be unique across all Cloud Storage buckets. You may also change the `REGION` variable, which is used for operations throughout the rest of this notebook. Make sure to [choose a region where Cloud AI Platform services are available](https://cloud.google.com/ml-engine/docs/tensorflow/regions). You may not use a Multi-Regional Storage bucket for training with AI Platform. ``` BUCKET_NAME = "<your-bucket-name>" #@param {type:"string"} REGION = "us-central1" #@param {type:"string"} ``` **Only if your bucket doesn't already exist**: Run the following cell to create your Cloud Storage bucket. ``` ! gsutil mb -l $REGION gs://$BUCKET_NAME ``` Finally, validate access to your Cloud Storage bucket by examining its contents: ``` ! gsutil ls -al gs://$BUCKET_NAME ``` ## Part 1. Quickstart for training in AI Platform This section of the tutorial walks you through submitting a training job to Cloud AI Platform.
This job runs sample code that uses Keras to train a deep neural network on the United States Census data. It outputs the trained model as a [TensorFlow SavedModel directory](https://www.tensorflow.org/guide/saved_model#save_and_restore_models) in your Cloud Storage bucket. ### Get training code and dependencies First, download the training code and change the notebook's working directory: ``` # Clone the repository of AI Platform samples ! git clone --depth 1 https://github.com/GoogleCloudPlatform/cloudml-samples # Set the working directory to the sample code directory %cd cloudml-samples/census/tf-keras ``` Notice that the training code is structured as a Python package in the `trainer/` subdirectory: ``` # `ls` shows the working directory's contents. The `p` flag adds trailing # slashes to subdirectory names. The `R` flag lists subdirectories recursively. ! ls -pR ``` Run the following cell to install Python dependencies needed to train the model locally. When you run the training job in AI Platform, dependencies are preinstalled based on the [runtime version](https://cloud.google.com/ml-engine/docs/tensorflow/runtime-version-list) you choose. ``` ! pip install -r requirements.txt ``` ### Train your model locally Before training on AI Platform, train the job locally to verify the file structure and packaging is correct. For a complex or resource-intensive job, you may want to train locally on a small sample of your dataset to verify your code. Then you can run the job on AI Platform to train on the whole dataset. This sample runs a relatively quick job on a small dataset, so the local training and the AI Platform job run the same code on the same data. Run the following cell to train a model locally: ``` # Explicitly tell `gcloud ml-engine local train` to use Python 3 !
gcloud config set ml_engine/local_python $(which python3) # This is similar to `python -m trainer.task --job-dir local-training-output` # but it better replicates the AI Platform environment, especially for # distributed training (not applicable here). ! gcloud ml-engine local train \ --package-path trainer \ --module-name trainer.task \ --job-dir local-training-output ``` ### Train your model using AI Platform Next, submit a training job to AI Platform. This runs the training module in the cloud and exports the trained model to Cloud Storage. First, give your training job a name and choose a directory within your Cloud Storage bucket for saving intermediate and output files: ``` JOB_NAME = 'my_first_keras_job' JOB_DIR = 'gs://' + BUCKET_NAME + '/keras-job-dir' ``` Run the following command to package the `trainer/` directory, upload it to the specified `--job-dir`, and instruct AI Platform to run the `trainer.task` module from that package. The `--stream-logs` flag lets you view training logs in the cell below. You can also see logs and other job details in the GCP Console. ``` ! gcloud ml-engine jobs submit training $JOB_NAME \ --package-path trainer/ \ --module-name trainer.task \ --region $REGION \ --python-version 3.5 \ --runtime-version 1.13 \ --job-dir $JOB_DIR \ --stream-logs ``` ## Part 2. Quickstart for online predictions in AI Platform This section shows how to use AI Platform and your trained model from Part 1 to predict a person's income bracket from other Census information about them. ### Create model and version resources in AI Platform To serve online predictions using the model you trained and exported in Part 1, create a *model* resource in AI Platform and a *version* resource within it. The version resource is what actually uses your trained model to serve predictions. This structure lets you adjust and retrain your model many times and organize all the versions together in AI Platform. 
Learn more about [models and versions](https://cloud.google.com/ml-engine/docs/tensorflow/projects-models-versions-jobs). First, name and create the model resource: ``` MODEL_NAME = "my_first_keras_model" ! gcloud ml-engine models create $MODEL_NAME \ --regions $REGION ``` Next, create the model version. The training job from Part 1 exported a timestamped [TensorFlow SavedModel directory](https://www.tensorflow.org/guide/saved_model#structure_of_a_savedmodel_directory) to your Cloud Storage bucket. AI Platform uses this directory to create a model version. Learn more about [SavedModel and Cloud ML Engine](https://cloud.google.com/ml-engine/docs/tensorflow/deploying-models). You may be able to find the path to this directory in your training job's logs. Look for a line like: ``` Model exported to: gs://<your-bucket-name>/keras-job-dir/keras_export/1545439782 ``` Execute the following command to identify your SavedModel directory and use it to create a model version resource: ``` MODEL_VERSION = "v1" # Get a list of directories in the `keras_export` parent directory KERAS_EXPORT_DIRS = ! gsutil ls $JOB_DIR/keras_export/ # Pick the directory with the latest timestamp, in case you've trained # multiple times SAVED_MODEL_PATH = KERAS_EXPORT_DIRS[-1] # Create model version based on that SavedModel directory ! gcloud ml-engine versions create $MODEL_VERSION \ --model $MODEL_NAME \ --runtime-version 1.13 \ --python-version 3.5 \ --framework tensorflow \ --origin $SAVED_MODEL_PATH ``` ### Prepare input for prediction To receive valid and useful predictions, you must preprocess input for prediction in the same way that training data was preprocessed. In a production system, you may want to create a preprocessing pipeline that can be used identically at training time and prediction time. For this exercise, use the training package's data-loading code to select a random sample from the evaluation data. 
This data is in the form that was used to evaluate accuracy after each epoch of training, so it can be used to send test predictions without further preprocessing: ``` from trainer import util _, _, eval_x, eval_y = util.load_data() prediction_input = eval_x.sample(20) prediction_targets = eval_y[prediction_input.index] prediction_input ``` Notice that categorical fields, like `occupation`, have already been converted to integers (with the same mapping that was used for training). Numerical fields, like `age`, have been scaled to a [z-score](https://developers.google.com/machine-learning/crash-course/representation/cleaning-data). Some fields have been dropped from the original data. Compare the prediction input with the raw data for the same examples: ``` import pandas as pd _, eval_file_path = util.download(util.DATA_DIR) raw_eval_data = pd.read_csv(eval_file_path, names=util._CSV_COLUMNS, na_values='?') raw_eval_data.iloc[prediction_input.index] ``` Export the prediction input to a newline-delimited JSON file: ``` import json with open('prediction_input.json', 'w') as json_file: for row in prediction_input.values.tolist(): json.dump(row, json_file) json_file.write('\n') ! cat prediction_input.json ``` The `gcloud` command-line tool accepts newline-delimited JSON for online prediction, and this particular Keras model expects a flat list of numbers for each input example. AI Platform requires a different format when you make online prediction requests to the REST API without using the `gcloud` tool. The way you structure your model may also change how you must format data for prediction. Learn more about [formatting data for online prediction](https://cloud.google.com/ml-engine/docs/tensorflow/prediction-overview#prediction_input_data). ### Submit the online prediction request Use `gcloud` to submit your online prediction request. ``` ! 
gcloud ml-engine predict \ --model $MODEL_NAME \ --version $MODEL_VERSION \ --json-instances prediction_input.json ``` Since the model's last layer uses a [sigmoid function](https://developers.google.com/machine-learning/glossary/#sigmoid_function) for its activation, outputs between 0 and 0.5 represent negative predictions ("<=50K") and outputs between 0.5 and 1 represent positive ones (">50K"). Do the predicted income brackets match the actual ones? Run the following cell to see the true labels. ``` prediction_targets ``` ## Part 3. Developing the Keras model from scratch At this point, you have trained a machine learning model on AI Platform, deployed the trained model as a version resource on AI Platform, and received online predictions from the deployment. The next section walks through recreating the Keras code used to train your model. It covers the following parts of developing a machine learning model for use with AI Platform: * Downloading and preprocessing data * Designing and training the model * Visualizing training and exporting the trained model While this section provides more detailed insight into the tasks completed in previous parts, to learn more about using `tf.keras`, read [TensorFlow's guide to Keras](https://www.tensorflow.org/tutorials/keras). To learn more about structuring code as a training package for AI Platform, read [Packaging a training application](https://cloud.google.com/ml-engine/docs/tensorflow/packaging-trainer) and reference the [complete training code](https://github.com/GoogleCloudPlatform/cloudml-samples/tree/master/census/tf-keras), which is structured as a Python package.
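The thresholding described above can be written out explicitly. A minimal sketch (the helper name and example outputs are ours, not part of the sample):

```python
def to_income_bracket(sigmoid_output, threshold=0.5):
    """Map a sigmoid output in [0, 1] to the predicted label string."""
    return '>50K' if sigmoid_output >= threshold else '<=50K'

# Hypothetical model outputs for four examples
outputs = [0.02, 0.47, 0.51, 0.93]
print([to_income_bracket(p) for p in outputs])
# ['<=50K', '<=50K', '>50K', '>50K']
```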
### Import libraries and define constants First, import Python libraries required for training: ``` import os from six.moves import urllib import tempfile import numpy as np import pandas as pd import tensorflow as tf # Examine software versions print(__import__('sys').version) print(tf.__version__) print(tf.keras.__version__) ``` Then, define some useful constants: * Information for downloading training and evaluation data * Information required for Pandas to interpret the data and convert categorical fields into numeric features * Hyperparameters for training, such as learning rate and batch size ``` ### For downloading data ### # Storage directory DATA_DIR = os.path.join(tempfile.gettempdir(), 'census_data') # Download options. DATA_URL = 'https://storage.googleapis.com/cloud-samples-data/ml-engine' \ '/census/data' TRAINING_FILE = 'adult.data.csv' EVAL_FILE = 'adult.test.csv' TRAINING_URL = '%s/%s' % (DATA_URL, TRAINING_FILE) EVAL_URL = '%s/%s' % (DATA_URL, EVAL_FILE) ### For interpreting data ### # These are the features in the dataset. 
# Dataset information: https://archive.ics.uci.edu/ml/datasets/census+income _CSV_COLUMNS = [ 'age', 'workclass', 'fnlwgt', 'education', 'education_num', 'marital_status', 'occupation', 'relationship', 'race', 'gender', 'capital_gain', 'capital_loss', 'hours_per_week', 'native_country', 'income_bracket' ] _CATEGORICAL_TYPES = { 'workclass': pd.api.types.CategoricalDtype(categories=[ 'Federal-gov', 'Local-gov', 'Never-worked', 'Private', 'Self-emp-inc', 'Self-emp-not-inc', 'State-gov', 'Without-pay' ]), 'marital_status': pd.api.types.CategoricalDtype(categories=[ 'Divorced', 'Married-AF-spouse', 'Married-civ-spouse', 'Married-spouse-absent', 'Never-married', 'Separated', 'Widowed' ]), 'occupation': pd.api.types.CategoricalDtype([ 'Adm-clerical', 'Armed-Forces', 'Craft-repair', 'Exec-managerial', 'Farming-fishing', 'Handlers-cleaners', 'Machine-op-inspct', 'Other-service', 'Priv-house-serv', 'Prof-specialty', 'Protective-serv', 'Sales', 'Tech-support', 'Transport-moving' ]), 'relationship': pd.api.types.CategoricalDtype(categories=[ 'Husband', 'Not-in-family', 'Other-relative', 'Own-child', 'Unmarried', 'Wife' ]), 'race': pd.api.types.CategoricalDtype(categories=[ 'Amer-Indian-Eskimo', 'Asian-Pac-Islander', 'Black', 'Other', 'White' ]), 'native_country': pd.api.types.CategoricalDtype(categories=[ 'Cambodia', 'Canada', 'China', 'Columbia', 'Cuba', 'Dominican-Republic', 'Ecuador', 'El-Salvador', 'England', 'France', 'Germany', 'Greece', 'Guatemala', 'Haiti', 'Holand-Netherlands', 'Honduras', 'Hong', 'Hungary', 'India', 'Iran', 'Ireland', 'Italy', 'Jamaica', 'Japan', 'Laos', 'Mexico', 'Nicaragua', 'Outlying-US(Guam-USVI-etc)', 'Peru', 'Philippines', 'Poland', 'Portugal', 'Puerto-Rico', 'Scotland', 'South', 'Taiwan', 'Thailand', 'Trinadad&Tobago', 'United-States', 'Vietnam', 'Yugoslavia' ]), 'income_bracket': pd.api.types.CategoricalDtype(categories=[ '<=50K', '>50K' ]) } # This is the label (target) we want to predict. 
_LABEL_COLUMN = 'income_bracket' ### Hyperparameters for training ### # This is the training batch size BATCH_SIZE = 128 # This is the number of epochs (passes over the full training data) NUM_EPOCHS = 20 # Define learning rate. LEARNING_RATE = .01 ``` ### Download and preprocess data #### Download the data Next, define functions to download training and evaluation data. These functions also fix minor irregularities in the data's formatting. ``` def _download_and_clean_file(filename, url): """Downloads data from url, and makes changes to match the CSV format. The CSVs may use spaces after the comma delimiters (non-standard) or include rows which do not represent well-formed examples. This function strips out some of these problems. Args: filename: filename to save url to url: URL of resource to download """ temp_file, _ = urllib.request.urlretrieve(url) with tf.gfile.Open(temp_file, 'r') as temp_file_object: with tf.gfile.Open(filename, 'w') as file_object: for line in temp_file_object: line = line.strip() line = line.replace(', ', ',') if not line or ',' not in line: continue if line[-1] == '.': line = line[:-1] line += '\n' file_object.write(line) tf.gfile.Remove(temp_file) def download(data_dir): """Downloads census data if it is not already present.
Args: data_dir: directory where we will access/save the census data """ tf.gfile.MakeDirs(data_dir) training_file_path = os.path.join(data_dir, TRAINING_FILE) if not tf.gfile.Exists(training_file_path): _download_and_clean_file(training_file_path, TRAINING_URL) eval_file_path = os.path.join(data_dir, EVAL_FILE) if not tf.gfile.Exists(eval_file_path): _download_and_clean_file(eval_file_path, EVAL_URL) return training_file_path, eval_file_path ``` Use those functions to download the data for training and verify that you have CSV files for training and evaluation: ``` training_file_path, eval_file_path = download(DATA_DIR) # You should see 2 files: adult.data.csv and adult.test.csv !ls -l $DATA_DIR ``` Next, load these files using Pandas and examine the data: ``` # This census data uses the value '?' for fields (columns) with missing data. # We pass na_values='?' so that those entries are read in as NaN. # https://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_csv.html train_df = pd.read_csv(training_file_path, names=_CSV_COLUMNS, na_values='?') eval_df = pd.read_csv(eval_file_path, names=_CSV_COLUMNS, na_values='?') # Here's what the data looks like before preprocessing. train_df.head() ``` #### Preprocess the data The first preprocessing step removes certain features from the data and converts categorical features to numerical values for use with Keras. Learn more about [feature engineering](https://developers.google.com/machine-learning/crash-course/representation/feature-engineering) and [bias in data](https://developers.google.com/machine-learning/crash-course/fairness/types-of-bias). ``` UNUSED_COLUMNS = ['fnlwgt', 'education', 'gender'] def preprocess(dataframe): """Converts categorical features to numeric. Removes unused columns.
Args: dataframe: Pandas dataframe with raw data Returns: Dataframe with preprocessed data """ dataframe = dataframe.drop(columns=UNUSED_COLUMNS) # Convert integer valued (numeric) columns to floating point numeric_columns = dataframe.select_dtypes(['int64']).columns dataframe[numeric_columns] = dataframe[numeric_columns].astype('float32') # Convert categorical columns to numeric cat_columns = dataframe.select_dtypes(['object']).columns dataframe[cat_columns] = dataframe[cat_columns].apply(lambda x: x.astype( _CATEGORICAL_TYPES[x.name])) dataframe[cat_columns] = dataframe[cat_columns].apply(lambda x: x.cat.codes) return dataframe prepped_train_df = preprocess(train_df) prepped_eval_df = preprocess(eval_df) ``` Run the following cell to see how preprocessing changed the data. Notice in particular that `income_bracket`, the label that you're training the model to predict, has changed from `<=50K` and `>50K` to `0` and `1`: ``` prepped_train_df.head() ``` Next, separate the data into features ("x") and labels ("y"), and reshape the label arrays into a format for use with `tf.data.Dataset` later: ``` # Split train and test data with labels. # The pop() method will extract (copy) and remove the label column from the dataframe train_x, train_y = prepped_train_df, prepped_train_df.pop(_LABEL_COLUMN) eval_x, eval_y = prepped_eval_df, prepped_eval_df.pop(_LABEL_COLUMN) # Reshape label columns for use with tf.data.Dataset train_y = np.asarray(train_y).astype('float32').reshape((-1, 1)) eval_y = np.asarray(eval_y).astype('float32').reshape((-1, 1)) ``` Scaling training data so each numerical feature column has a mean of 0 and a standard deviation of 1 [can improve your model](https://developers.google.com/machine-learning/crash-course/representation/cleaning-data). In a production system, you may want to save the means and standard deviations from your training set and use them to perform an identical transformation on test data at prediction time. 
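The production pattern mentioned above (fit the scaler on training data only, then reuse the saved statistics at prediction time) can be sketched in plain Python. The numbers below are toy values; note that `statistics.stdev` computes the same sample standard deviation (ddof=1) as pandas' `.std()`:

```python
import statistics

# Toy training values for a single numeric column such as `age`
train_ages = [39.0, 50.0, 38.0, 53.0, 28.0]

# Fit the scaler on the training data only, then persist these two numbers
train_mean = statistics.mean(train_ages)
train_std = statistics.stdev(train_ages)

def standardize_value(x):
    """Apply the saved training statistics to any value, old or new."""
    return (x - train_mean) / train_std

# The scaled training column has mean 0 and standard deviation 1 by construction
scaled_train = [standardize_value(x) for x in train_ages]
print(statistics.mean(scaled_train))

# At prediction time, transform new data with the *training* statistics
print(standardize_value(45.0))
```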
For convenience in this exercise, temporarily combine the training and evaluation data to scale all of them: ``` def standardize(dataframe): """Scales numerical columns using their means and standard deviation to get z-scores: the mean of each numerical column becomes 0, and the standard deviation becomes 1. This can help the model converge during training. Args: dataframe: Pandas dataframe Returns: Input dataframe with the numerical columns scaled to z-scores """ dtypes = list(zip(dataframe.dtypes.index, map(str, dataframe.dtypes))) # Normalize numeric columns. for column, dtype in dtypes: if dtype == 'float32': dataframe[column] -= dataframe[column].mean() dataframe[column] /= dataframe[column].std() return dataframe # Join train_x and eval_x to normalize on overall means and standard # deviations. Then separate them again. all_x = pd.concat([train_x, eval_x], keys=['train', 'eval']) all_x = standardize(all_x) train_x, eval_x = all_x.xs('train'), all_x.xs('eval') ``` Finally, examine some of your fully preprocessed training data: ``` # Verify dataset features # Note how only the numeric fields (not categorical) have been standardized train_x.head() ``` ### Design and train the model #### Create training and validation datasets Create an input function to convert features and labels into a [`tf.data.Dataset`](https://www.tensorflow.org/guide/datasets) for training or evaluation: ``` def input_fn(features, labels, shuffle, num_epochs, batch_size): """Generates an input function to be used for model training. 
Args: features: numpy array of features used for training or inference labels: numpy array of labels for each example shuffle: boolean for whether to shuffle the data or not (set True for training, False for evaluation) num_epochs: number of epochs to provide the data for batch_size: batch size for training Returns: A tf.data.Dataset that can provide data to the Keras model for training or evaluation """ if labels is None: inputs = features else: inputs = (features, labels) dataset = tf.data.Dataset.from_tensor_slices(inputs) if shuffle: dataset = dataset.shuffle(buffer_size=len(features)) # We call repeat after shuffling, rather than before, to prevent separate # epochs from blending together. dataset = dataset.repeat(num_epochs) dataset = dataset.batch(batch_size) return dataset ``` Next, create these training and evaluation datasets. Use the `NUM_EPOCHS` and `BATCH_SIZE` hyperparameters defined previously to define how the training dataset provides examples to the model during training. Set up the validation dataset to provide all its examples in one batch, for a single validation step at the end of each training epoch. ``` # Pass a numpy array by using DataFrame.values training_dataset = input_fn(features=train_x.values, labels=train_y, shuffle=True, num_epochs=NUM_EPOCHS, batch_size=BATCH_SIZE) num_eval_examples = eval_x.shape[0] # Pass a numpy array by using DataFrame.values validation_dataset = input_fn(features=eval_x.values, labels=eval_y, shuffle=False, num_epochs=NUM_EPOCHS, batch_size=num_eval_examples) ``` #### Design a Keras Model Design your neural network using the [Keras Sequential API](https://www.tensorflow.org/guide/keras#sequential_model). This deep neural network (DNN) has several hidden layers, and the last layer uses a sigmoid activation function to output a value between 0 and 1: * The input layer has 100 units using the ReLU activation function. * The hidden layer has 75 units using the ReLU activation function.
* The hidden layer has 50 units using the ReLU activation function. * The hidden layer has 25 units using the ReLU activation function. * The output layer has 1 unit using a sigmoid activation function. * The model is compiled with the binary cross-entropy loss function, which is appropriate for a binary classification problem like this one. Feel free to change these layers to try to improve the model: ``` def create_keras_model(input_dim, learning_rate): """Creates Keras Model for Binary Classification. Args: input_dim: How many features the input has learning_rate: Learning rate for training Returns: The compiled Keras model (still needs to be trained) """ model = tf.keras.Sequential() model.add( tf.keras.layers.Dense( 100, activation=tf.nn.relu, kernel_initializer='uniform', input_shape=(input_dim,))) model.add(tf.keras.layers.Dense(75, activation=tf.nn.relu)) model.add(tf.keras.layers.Dense(50, activation=tf.nn.relu)) model.add(tf.keras.layers.Dense(25, activation=tf.nn.relu)) # The single output node and sigmoid activation make this equivalent to # logistic regression. model.add(tf.keras.layers.Dense(1, activation=tf.nn.sigmoid)) # Custom Optimizer: # https://www.tensorflow.org/api_docs/python/tf/train/RMSPropOptimizer optimizer = tf.keras.optimizers.RMSprop( lr=learning_rate, rho=0.9, epsilon=1e-08, decay=0.0) # Compile Keras model model.compile( loss='binary_crossentropy', optimizer=optimizer, metrics=['accuracy']) return model ``` Next, create the Keras model object and examine its structure: ``` num_train_examples, input_dim = train_x.shape print('Number of features: {}'.format(input_dim)) print('Number of examples: {}'.format(num_train_examples)) keras_model = create_keras_model( input_dim=input_dim, learning_rate=LEARNING_RATE) # Take a detailed look inside the model keras_model.summary() ``` #### Train and evaluate the model Define a learning rate decay to encourage model parameters to make smaller changes as training goes on: ``` # Set up learning rate decay.
lr_decay = tf.keras.callbacks.LearningRateScheduler( lambda epoch: LEARNING_RATE + 0.02 * (0.5 ** (1 + epoch)), verbose=True) ``` Finally, train the model. Provide the appropriate `steps_per_epoch` for the model to train on the entire training dataset (with `BATCH_SIZE` examples per step) during each epoch. And instruct the model to calculate validation accuracy with one big validation batch at the end of each epoch. ``` history = keras_model.fit(training_dataset, epochs=NUM_EPOCHS, steps_per_epoch=int(num_train_examples/BATCH_SIZE), validation_data=validation_dataset, validation_steps=1, callbacks=[lr_decay], verbose=1) ``` ### Visualize training and export the trained model #### Visualize training Import `matplotlib` to visualize how the model learned over the training period. ``` ! pip install matplotlib from matplotlib import pyplot as plt %matplotlib inline ``` Plot the model's loss (binary cross-entropy) and accuracy, as measured at the end of each training epoch: ``` # Visualize History for Loss. plt.title('Keras model loss') plt.plot(history.history['loss']) plt.plot(history.history['val_loss']) plt.ylabel('loss') plt.xlabel('epoch') plt.legend(['training', 'validation'], loc='upper right') plt.show() # Visualize History for Accuracy. plt.title('Keras model accuracy') plt.ylabel('accuracy') plt.xlabel('epoch') plt.plot(history.history['acc']) plt.plot(history.history['val_acc']) plt.legend(['training', 'validation'], loc='lower right') plt.show() ``` Over time, loss decreases and accuracy increases. But do they converge to a stable level? Are there big differences between the training and validation metrics (a sign of overfitting)? Learn about [how to improve your machine learning model](https://developers.google.com/machine-learning/crash-course/). Then, feel free to adjust hyperparameters or the model architecture and train again. 
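To see what the scheduler above actually does, you can evaluate its formula for the first few epochs: the rate starts at 0.02 and halves its decaying component each epoch, approaching the base `LEARNING_RATE` of 0.01. A quick check in plain Python:

```python
LEARNING_RATE = 0.01

def scheduled_lr(epoch):
    # Same formula as the LearningRateScheduler lambda above;
    # Keras passes epoch indices starting at 0.
    return LEARNING_RATE + 0.02 * (0.5 ** (1 + epoch))

for epoch in range(4):
    print(epoch, round(scheduled_lr(epoch), 6))
# 0 0.02
# 1 0.015
# 2 0.0125
# 3 0.01125
```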
#### Export the model for serving Use [tf.contrib.saved_model.save_keras_model](https://www.tensorflow.org/api_docs/python/tf/contrib/saved_model/save_keras_model) to export a TensorFlow SavedModel directory. This is the format that Cloud AI Platform requires when you [create a model version resource](https://cloud.google.com/ml-engine/docs/tensorflow/deploying-models#creating_a_model_version). Since not all optimizers can be exported to the SavedModel format, you may see warnings during the export process. As long as you successfully export a serving graph, AI Platform can use the SavedModel to serve predictions. ``` # Export the model to a local SavedModel directory export_path = tf.contrib.saved_model.save_keras_model(keras_model, 'keras_export') print("Model exported to: ", export_path) ``` You may export a SavedModel directory to your local filesystem or to Cloud Storage, as long as you have the necessary permissions. In your current environment, you granted access to Cloud Storage by authenticating your GCP account and setting the `GOOGLE_APPLICATION_CREDENTIALS` environment variable. Cloud ML Engine training jobs can also export directly to Cloud Storage, because Cloud ML Engine service accounts [have access to Cloud Storage buckets in their own project](https://cloud.google.com/ml-engine/docs/tensorflow/working-with-cloud-storage). Try exporting directly to Cloud Storage: ``` # Export the model to a SavedModel directory in Cloud Storage export_path = tf.contrib.saved_model.save_keras_model(keras_model, JOB_DIR + '/keras_export') print("Model exported to: ", export_path) ``` You can now deploy this model to AI Platform and serve predictions by following the steps from Part 2. ## Cleaning up To clean up all GCP resources used in this project, you can [delete the GCP project](https://cloud.google.com/resource-manager/docs/creating-managing-projects#shutting_down_projects) you used for the tutorial.
Alternatively, you can clean up individual resources by running the following commands: ``` # Delete model version resource ! gcloud ml-engine versions delete $MODEL_VERSION --quiet --model $MODEL_NAME # Delete model resource ! gcloud ml-engine models delete $MODEL_NAME --quiet # Delete Cloud Storage objects that were created ! gsutil -m rm -r $JOB_DIR # If the training job is still running, cancel it ! gcloud ml-engine jobs cancel $JOB_NAME --quiet --verbosity critical ``` If your Cloud Storage bucket doesn't contain any other objects and you would like to delete it, run `gsutil rm -r gs://$BUCKET_NAME`. ## What's next? * View the [complete training code](https://github.com/GoogleCloudPlatform/cloudml-samples/tree/master/census/tf-keras) used in this guide, which structures the code to accept custom hyperparameters as command-line flags. * Read about [packaging code](https://cloud.google.com/ml-engine/docs/tensorflow/packaging-trainer) for an AI Platform training job. * Read about [deploying a model](https://cloud.google.com/ml-engine/docs/tensorflow/deploying-models) to serve predictions.
##### Copyright 2019 The TensorFlow Authors. ``` #@title Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import os from tensorflow.keras import layers from tensorflow.keras import Model !wget --no-check-certificate \ https://storage.googleapis.com/mledu-datasets/inception_v3_weights_tf_dim_ordering_tf_kernels_notop.h5 \ -O /tmp/inception_v3_weights_tf_dim_ordering_tf_kernels_notop.h5 from tensorflow.keras.applications.inception_v3 import InceptionV3 local_weights_file = '/tmp/inception_v3_weights_tf_dim_ordering_tf_kernels_notop.h5' pre_trained_model = InceptionV3(input_shape = (150, 150, 3), include_top = False, weights = None) pre_trained_model.load_weights(local_weights_file) for layer in pre_trained_model.layers: layer.trainable = False # pre_trained_model.summary() last_layer = pre_trained_model.get_layer('mixed7') print('last layer output shape: ', last_layer.output_shape) last_output = last_layer.output from tensorflow.keras.optimizers import RMSprop # Flatten the output layer to 1 dimension x = layers.Flatten()(last_output) # Add a fully connected layer with 1,024 hidden units and ReLU activation x = layers.Dense(1024, activation='relu')(x) # Add a dropout rate of 0.2 x = layers.Dropout(0.2)(x) # Add a final sigmoid layer for classification x = layers.Dense (1, activation='sigmoid')(x) model = Model( pre_trained_model.input, x) model.compile(optimizer = RMSprop(lr=0.0001), loss = 'binary_crossentropy', metrics = ['accuracy']) !wget --no-check-certificate \ 
https://storage.googleapis.com/mledu-datasets/cats_and_dogs_filtered.zip \ -O /tmp/cats_and_dogs_filtered.zip from tensorflow.keras.preprocessing.image import ImageDataGenerator import os import zipfile local_zip = '//tmp/cats_and_dogs_filtered.zip' zip_ref = zipfile.ZipFile(local_zip, 'r') zip_ref.extractall('/tmp') zip_ref.close() # Define our example directories and files base_dir = '/tmp/cats_and_dogs_filtered' train_dir = os.path.join( base_dir, 'train') validation_dir = os.path.join( base_dir, 'validation') train_cats_dir = os.path.join(train_dir, 'cats') # Directory with our training cat pictures train_dogs_dir = os.path.join(train_dir, 'dogs') # Directory with our training dog pictures validation_cats_dir = os.path.join(validation_dir, 'cats') # Directory with our validation cat pictures validation_dogs_dir = os.path.join(validation_dir, 'dogs')# Directory with our validation dog pictures train_cat_fnames = os.listdir(train_cats_dir) train_dog_fnames = os.listdir(train_dogs_dir) # Add our data-augmentation parameters to ImageDataGenerator train_datagen = ImageDataGenerator(rescale = 1./255., rotation_range = 40, width_shift_range = 0.2, height_shift_range = 0.2, shear_range = 0.2, zoom_range = 0.2, horizontal_flip = True) # Note that the validation data should not be augmented! test_datagen = ImageDataGenerator( rescale = 1.0/255. 
) # Flow training images in batches of 20 using train_datagen generator train_generator = train_datagen.flow_from_directory(train_dir, batch_size = 20, class_mode = 'binary', target_size = (150, 150)) # Flow validation images in batches of 20 using test_datagen generator validation_generator = test_datagen.flow_from_directory( validation_dir, batch_size = 20, class_mode = 'binary', target_size = (150, 150)) history = model.fit( train_generator, validation_data = validation_generator, steps_per_epoch = 100, epochs = 20, validation_steps = 50, verbose = 2) import matplotlib.pyplot as plt acc = history.history['accuracy'] val_acc = history.history['val_accuracy'] loss = history.history['loss'] val_loss = history.history['val_loss'] epochs = range(len(acc)) plt.plot(epochs, acc, 'r', label='Training accuracy') plt.plot(epochs, val_acc, 'b', label='Validation accuracy') plt.title('Training and validation accuracy') plt.legend(loc=0) plt.figure() plt.show() ```
# COVIDvu - US regions visualizer <img src='resources/American-flag.png' align = 'right'> --- ## Runtime prerequisites ``` %%capture --no-stderr requirementsOutput displayRequirementsOutput = False %pip install -r requirements.txt from covidvu.utils import autoReloadCode; autoReloadCode() if displayRequirementsOutput: requirementsOutput.show() ``` --- ## Pull latest datasets ``` %sx ./refreshdata local patch ``` --- ## Confirmed, deaths, recovered datasets ``` import os import numpy as np import pandas as pd from covidvu.cryostation import Cryostation pd.options.mode.chained_assignment = None databasePath = './database/virustrack.db' storage = Cryostation(databasePath=databasePath) confirmedCases = storage.timeSeriesFor(regionType = 'province', countryName = 'US', casesType = 'confirmed', disableProgressBar=False) confirmedDeaths = storage.timeSeriesFor(regionType = 'province', countryName = 'US', casesType = 'deaths', disableProgressBar=False) ``` --- ## Cases by US state ``` from ipywidgets import fixed from ipywidgets import interact from ipywidgets import widgets from covidvu import visualize statesUS = list(confirmedCases.columns) multiState = widgets.SelectMultiple( options=statesUS, value=['New York'], description='State', disabled=False ) log = widgets.Checkbox(value=False, description='Log scale') ``` ### Confirmed cases ``` interact(visualize.plotTimeSeriesInteractive, df=fixed(confirmedCases), selectedColumns=multiState, log=log, yLabel=fixed('Total confirmed cases'), title=fixed('COVID-19 total confirmed cases in US states') ); def viewTopStates(n): return pd.DataFrame(confirmedCases.iloc[-1,:].sort_values(ascending=False).iloc[1:n]).style.background_gradient(cmap="Reds") interact(viewTopStates, n=widgets.IntSlider(min=1, max=len(statesUS), step=1, value=5)); ``` --- ## Cases by US region ``` regionsUS = list(confirmedCases.columns) multiRegion = widgets.SelectMultiple( options=regionsUS, value=['New York'], description='State', disabled=False ) 
interact(visualize.plotTimeSeriesInteractive, df=fixed(confirmedCases), selectedColumns=multiRegion, log=log, yLabel=fixed('Total confirmed cases'), title=fixed('COVID-19 total confirmed cases in US regions') ); ``` --- &#169; the COVIDvu Contributors. All rights reserved.
``` import urllib.request import os from PIL import Image,ImageStat import numpy as np import matplotlib.pyplot as plt import torch.optim as optim import torchvision import torch import torch.nn as nn from torch.utils.data import DataLoader import torchvision.transforms as transforms import torch.functional as F import torchsummary from sklearn.model_selection import StratifiedKFold ``` # Loading the data ``` main_dir = "/content/drive/My Drive/COVID-19/X-Ray Image DataSet" os.chdir(main_dir) from google.colab import drive drive.mount('/content/drive') !ls image_loader = lambda x: Image.open(x) ``` ## Image preprocessing ``` class ToRGB(object): def __call__(self,img): if img.mode == 'RGBA': r, g, b, a = img.split() return Image.merge('RGB', (r, g, b)) if img.mode == 'L': rgb = img.convert('RGB') return rgb return img class ToNorm(object): def __call__(self,img): mean = torch.mean(img) std = torch.std(img) return (img - mean)/std transform = transforms.Compose([ToRGB(), transforms.Resize((256,256)), transforms.ToTensor(), ToNorm(), ]) ``` ## Loading the data into a DataLoader ``` dataset = torchvision.datasets.DatasetFolder(main_dir,loader = image_loader,extensions = ('png','jpg','jpeg',) ,transform=transform ) dataset.classes len(dataset) dl = DataLoader(dataset,batch_size=32) data,class_att = next(iter(dl)) grid_img = torchvision.utils.make_grid(data,nrow=5) grid_img.shape plt.imshow(grid_img.permute(1,2,0)) ``` # Network ``` def conv_block(ni, nf, size=3, stride=1): for_pad = lambda s: s if s > 2 else 3 return nn.Sequential( nn.Conv2d(ni, nf, kernel_size=size, stride=stride, padding=(for_pad(size) - 1)//2, bias=False), nn.BatchNorm2d(nf), nn.LeakyReLU(negative_slope=0.1, inplace=True) ) def triple_conv(ni, nf): return nn.Sequential( conv_block(ni, nf), conv_block(ni, nf, size=1), conv_block(ni, nf) ) def maxpooling(): return nn.MaxPool2d(2, stride=2) nn.Module??
``` # Test ``` def conv_block(in_c, out_c, size=3, stride=1): for_pad = lambda size: size if size > 2 else 3 return nn.Sequential( nn.Conv2d(in_c, out_c, kernel_size=size, stride=stride, padding=(for_pad(size) - 1) // 2, bias=False), nn.BatchNorm2d(out_c), nn.LeakyReLU(negative_slope=0.1, inplace=True), ) def last_conv_block(in_c, out_c, size=3, stride=1): for_pad = lambda size: size if size > 2 else 3 return nn.Sequential( nn.Conv2d(in_c, out_c, kernel_size=size, stride=stride, padding=(for_pad(size) - 1) // 2, bias=False), nn.ReLU(inplace=True), nn.BatchNorm2d(out_c), ) def triple_conv(in_c, out_c): return nn.Sequential( conv_block(in_c, out_c), conv_block(out_c, in_c, size=1), conv_block(in_c, out_c) ) class DarkCovidNet(nn.Module): def __init__(self): super().__init__() #maxpooling self.maxp = nn.MaxPool2d(2, stride=2) #backbone self.block1 = conv_block(3,8) self.block2 = conv_block(8,16) self.block3 = triple_conv(16,32) self.block4 = triple_conv(32,64) self.block5 = triple_conv(64,128) self.block6 = triple_conv(128,256) self.block7 = conv_block(256,128, size=1) self.block8 = conv_block(128,256) self.block9 = last_conv_block(256,3) #classifier self.flatten = nn.Flatten() self.linear = nn.Linear(507,3) def forward(self, x): #backbone x = self.block1(x) x = self.maxp(x) x = self.block2(x) x = self.maxp(x) x = self.block3(x) x = self.maxp(x) x = self.block4(x) x = self.maxp(x) x = self.block5(x) x = self.maxp(x) x = self.block6(x) x = self.block7(x) x = self.block8(x) x = self.block9(x) #classifier x = self.flatten(x) x = self.linear(x) return x model = DarkCovidNet() torchsummary.summary(model,(3,256,256),device='cpu') del model ``` # Training ``` n = len(dataset) n_train = int(0.7*n) n_test = n - n_train n_train ,n_test ds_train,ds_test = torch.utils.data.random_split(dataset,(n_train,n_test)) len(ds_train) x,y = next(iter(ds_train)) x skf = StratifiedKFold(n_splits=5) for train, test in skf.split(x, y): print(train,test) def data_split(dataset,lists):
    return [torch.utils.data.Subset(dataset, llist) for llist in lists]


dl_train = torch.utils.data.DataLoader(ds_train, batch_size=32)
dl_test = torch.utils.data.DataLoader(ds_test, batch_size=32)

from sklearn.metrics import accuracy_score


def train_model(dl, model, opt, criterion, epochs, device):
    model.to(device)
    model.train()
    lloss = []
    for epoch in range(epochs):
        for x, y in dl:
            x = x.to(device)
            y = y.to(device)
            pred = model(x)
            loss = criterion(pred, y)
            loss.backward()
            opt.step()
            opt.zero_grad()
        lloss.append(loss.item())
    return lloss


def evaluate(dl, model, criterion):
    model.to(device)
    model.eval()
    lacc = []
    lloss = []
    with torch.no_grad():
        for x, y in dl:
            x = x.to(device)
            pred = model(x)
            loss = criterion(pred, y.to(device))
            y_pred = pred.argmax(dim=1).cpu()
            acc = accuracy_score(y, y_pred)
            lacc.append(acc)
            lloss.append(loss.item())
    return np.mean(lacc), np.mean(lloss)


lacc = []
device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')

# To cross-validate instead of a single split, wrap the block below in:
# for train, test in skf.split(range(len(dataset)), dataset.targets):
#     ds_train, ds_test = data_split(dataset, (train, test))

model = DarkCovidNet()
criterion = nn.CrossEntropyLoss()
opt = optim.Adam(model.parameters(), lr=1e-2)

dl_train = torch.utils.data.DataLoader(ds_train, batch_size=32)
dl_test = torch.utils.data.DataLoader(ds_test, batch_size=32)

train_model(dl_train, model, opt, criterion, 150, device)
(acc, loss) = evaluate(dl_test, model, criterion)
print("accuracy:%4.3f loss:%4.3f" % (acc, loss))

torch.save(model.state_dict(), 'model.pth')
lacc.append(acc)

np.mean(lacc), np.std(lacc)
```
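The commented-out `skf.split` loop above sketches the intended stratified cross-validation. Here is a minimal, self-contained illustration of that pattern (the `targets` array below is a stand-in for `dataset.targets`; it is not data from the notebook):

```
import numpy as np
from sklearn.model_selection import StratifiedKFold

# Stand-in labels: 30 samples, 3 balanced classes (like the 3 X-ray classes)
targets = np.array([0, 1, 2] * 10)

skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)

# split() works on per-sample labels, not on a single (x, y) pair;
# the placeholder first argument only supplies the number of samples.
fold_sizes = []
for train_idx, test_idx in skf.split(np.zeros(len(targets)), targets):
    # Every test fold keeps the 1:1:1 class ratio of `targets`
    fold_sizes.append(np.bincount(targets[test_idx]).tolist())

print(fold_sizes)  # [[2, 2, 2], [2, 2, 2], [2, 2, 2], [2, 2, 2], [2, 2, 2]]
```

In the notebook itself, `dataset.targets` plays the role of `targets`, and each `train_idx`/`test_idx` pair can feed `data_split(dataset, (train_idx, test_idx))` to build per-fold `DataLoader`s.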
<a href="https://colab.research.google.com/github/psuto/TensorFlow2ForDL/blob/main/Welcome_to_Colaboratory.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> # New section <p><img alt="Colaboratory logo" height="45px" src="/img/colab_favicon.ico" align="left" hspace="10px" vspace="0px"></p> <h1>What is Colaboratory?</h1> Colaboratory, or 'Colab' for short, allows you to write and execute Python in your browser, with - Zero configuration required - Free access to GPUs - Easy sharing Whether you're a <strong>student</strong>, a <strong>data scientist</strong> or an <strong>AI researcher</strong>, Colab can make your work easier. Watch <a href="https://www.youtube.com/watch?v=inN8seMm7UI">Introduction to Colab</a> to find out more, or just get started below! ## <strong>Getting started</strong> The document that you are reading is not a static web page, but an interactive environment called a <strong>Colab notebook</strong> that lets you write and execute code. For example, here is a <strong>code cell</strong> with a short Python script that computes a value, stores it in a variable and prints the result: ``` seconds_in_a_day = 24 * 60 * 60 seconds_in_a_day ``` To execute the code in the above cell, select it with a click and then either press the play button to the left of the code, or use the keyboard shortcut 'Command/Ctrl+Enter'. To edit the code, just click the cell and start editing. Variables that you define in one cell can later be used in other cells: ``` seconds_in_a_week = 7 * seconds_in_a_day seconds_in_a_week ``` Colab notebooks allow you to combine <strong>executable code</strong> and <strong>rich text</strong> in a single document, along with <strong>images</strong>, <strong>HTML</strong>, <strong>LaTeX</strong> and more. When you create your own Colab notebooks, they are stored in your Google Drive account. 
You can easily share your Colab notebooks with co-workers or friends, allowing them to comment on your notebooks or even edit them. To find out more, see <a href="/notebooks/basic_features_overview.ipynb">Overview of Colab</a>. To create a new Colab notebook you can use the File menu above, or use the following link: <a href="http://colab.research.google.com#create=true">Create a new Colab notebook</a>. Colab notebooks are Jupyter notebooks that are hosted by Colab. To find out more about the Jupyter project, see <a href="https://www.jupyter.org">jupyter.org</a>. ## Data science With Colab you can harness the full power of popular Python libraries to analyse and visualise data. The code cell below uses <strong>numpy</strong> to generate some random data, and uses <strong>matplotlib</strong> to visualise it. To edit the code, just click the cell and start editing. ``` import numpy as np from matplotlib import pyplot as plt ys = 200 + np.random.randn(100) x = [x for x in range(len(ys))] plt.plot(x, ys, '-') plt.fill_between(x, ys, 195, where=(ys > 195), facecolor='g', alpha=0.6) plt.title("Sample Visualization") plt.show() ``` You can import your own data into Colab notebooks from your Google Drive account, including from spreadsheets, as well as from GitHub and many other sources. To find out more about importing data, and how Colab can be used for data science, see the links below under <a href="#working-with-data">Working with data</a>. ## Machine learning With Colab you can import an image dataset, train an image classifier on it, and evaluate the model, all in just <a href="https://colab.research.google.com/github/tensorflow/docs/blob/master/site/en/tutorials/quickstart/beginner.ipynb">a few lines of code</a>. Colab notebooks execute code on Google's cloud servers, meaning you can leverage the power of Google hardware, including <a href="#using-accelerated-hardware">GPUs and TPUs</a>, regardless of the power of your machine. All you need is a browser. 
Colab is used extensively in the machine learning community with applications including: - Getting started with TensorFlow - Developing and training neural networks - Experimenting with TPUs - Disseminating AI research - Creating tutorials To see sample Colab notebooks that demonstrate machine learning applications, see the <a href="#machine-learning-examples">machine learning examples</a> below. ## More resources ### Working with notebooks in Colab - [Overview of Colaboratory](/notebooks/basic_features_overview.ipynb) - [Guide to markdown](/notebooks/markdown_guide.ipynb) - [Importing libraries and installing dependencies](/notebooks/snippets/importing_libraries.ipynb) - [Saving and loading notebooks in GitHub](https://colab.research.google.com/github/googlecolab/colabtools/blob/master/notebooks/colab-github-demo.ipynb) - [Interactive forms](/notebooks/forms.ipynb) - [Interactive widgets](/notebooks/widgets.ipynb) - <img src="/img/new.png" height="20px" align="left" hspace="4px" alt="New"></img> [TensorFlow 2 in Colab](/notebooks/tensorflow_version.ipynb) <a name="working-with-data"></a> ### Working with data - [Loading data: Drive, Sheets and Google Cloud Storage](/notebooks/io.ipynb) - [Charts: visualising data](/notebooks/charts.ipynb) - [Getting started with BigQuery](/notebooks/bigquery.ipynb) ### Machine learning crash course These are a few of the notebooks from Google's online machine learning course. See the <a href="https://developers.google.com/machine-learning/crash-course/">full course website</a> for more. 
- [Intro to Pandas](/notebooks/mlcc/intro_to_pandas.ipynb) - [TensorFlow concepts](/notebooks/mlcc/tensorflow_programming_concepts.ipynb) - [First steps with TensorFlow](/notebooks/mlcc/first_steps_with_tensor_flow.ipynb) - [Intro to neural nets](/notebooks/mlcc/intro_to_neural_nets.ipynb) - [Intro to sparse data and embeddings](/notebooks/mlcc/intro_to_sparse_data_and_embeddings.ipynb) <a name="using-accelerated-hardware"></a> ### Using accelerated hardware - [TensorFlow with GPUs](/notebooks/gpu.ipynb) - [TensorFlow with TPUs](/notebooks/tpu.ipynb) <a name="machine-learning-examples"></a> ## Machine learning examples To see end-to-end examples of the interactive machine-learning analyses that Colaboratory makes possible, take a look at these tutorials using models from <a href="https://tfhub.dev">TensorFlow Hub</a>. A few featured examples: - <a href="https://tensorflow.org/hub/tutorials/tf2_image_retraining">Retraining an Image Classifier</a>: Build a Keras model on top of a pre-trained image classifier to distinguish flowers. - <a href="https://tensorflow.org/hub/tutorials/tf2_text_classification">Text Classification</a>: Classify IMDB film reviews as either <em>positive</em> or <em>negative</em>. - <a href="https://tensorflow.org/hub/tutorials/tf2_arbitrary_image_stylization">Style Transfer</a>: Use deep learning to transfer style between images. - <a href="https://tensorflow.org/hub/tutorials/retrieval_with_tf_hub_universal_encoder_qa">Multilingual Universal Sentence Encoder Q&amp;A</a>: Use a machine-learning model to answer questions from the SQuAD dataset. - <a href="https://tensorflow.org/hub/tutorials/tweening_conv3d">Video Interpolation</a>: Predict what happened in a video between the first and the last frame.
<!--COURSE_INFORMATION-->
<img align="left" style="padding-right:10px;" src="https://sitejerk.com/images/google-earth-logo-png-5.png" width=5% >
<img align="right" style="padding-left:10px;" src="https://colab.research.google.com/img/colab_favicon_256px.png" width=6% >

>> *This notebook is part of the free course [EEwPython](https://colab.research.google.com/github/csaybar/EEwPython/blob/master/index.ipynb); the content is available [on GitHub](https://github.com/csaybar/EEwPython)* and released under the [Apache 2.0 License](https://www.gnu.org/licenses/gpl-3.0.en.html). 99% of this material has been adapted from [Google Earth Engine Guides](https://developers.google.com/earth-engine/).

<!--NAVIGATION-->
< [Image](2_eeImage.ipynb) | [Contents](index.ipynb) | [Geometry, Feature and FeatureCollection](4_features.ipynb)>

<a href="https://colab.research.google.com/github/csaybar/EEwPython/blob/master/3_eeImageCollection.ipynb"><img align="left" src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open in Colab" title="Open and Execute in Google Colaboratory"></a>

<center>
<h1>Google Earth Engine with Python </h1>
<h2> ee.ImageCollection</h2>
</center>

<h2> Topics:</h2>

1. ImageCollection Overview
2. ImageCollection Information and Metadata
3. Filtering an ImageCollection
4. Mapping over an ImageCollection
5. Reducing an ImageCollection
6. Compositing and Mosaicking
7. Iterating over an ImageCollection

An **ImageCollection** is a stack or time series of images. In addition to loading an **ImageCollection** using an Earth Engine collection ID, Earth Engine has methods to create image collections. The constructor **ee.ImageCollection()** or the convenience method **ee.ImageCollection.fromImages()** create image collections from lists of images. You can also create new image collections by merging existing collections.
## Connecting GEE with Google Services

- **Authenticate to Earth Engine**

```
!pip install earthengine-api #earth-engine Python API
!earthengine authenticate
```

- **Authenticate to Google Drive (OPTIONAL)**

```
from google.colab import drive
drive.mount('/content/drive')
```

- **Authenticate to Google Cloud (OPTIONAL)**

```
from google.colab import auth
auth.authenticate_user()
```

## Testing the software setup

```
# Earth Engine Python API
import ee
ee.Initialize()

import folium

# Define the URL format used for Earth Engine generated map tiles.
EE_TILES = 'https://earthengine.googleapis.com/map/{mapid}/{{z}}/{{x}}/{{y}}?token={token}'

print('Folium version: ' + folium.__version__)

# @title Mapdisplay: Display GEE objects using folium.
def Mapdisplay(center, dicc, Tiles="OpenStreetMap", zoom_start=10):
    '''
    :param center: Center of the map (Latitude and Longitude).
    :param dicc: Earth Engine Geometries or Tiles dictionary
    :param Tiles: Mapbox Bright, Mapbox Control Room, Stamen Terrain, Stamen Toner, stamenwatercolor, cartodbpositron.
    :param zoom_start: Initial zoom level for the map.
    :return: A folium.Map object.
    '''
    mapViz = folium.Map(location=center, tiles=Tiles, zoom_start=zoom_start)
    for k, v in dicc.items():
        if ee.image.Image in [type(x) for x in v.values()]:
            folium.TileLayer(
                tiles=EE_TILES.format(**v),
                attr='Google Earth Engine',
                overlay=True,
                name=k
            ).add_to(mapViz)
        else:
            folium.GeoJson(
                data=v,
                name=k
            ).add_to(mapViz)
    mapViz.add_child(folium.LayerControl())
    return mapViz
```

# 1. Image Collection Overview

An `ImageCollection` is a **stack or time series of images**. In addition to loading an `ImageCollection` using an Earth Engine collection ID, Earth Engine has methods to create image collections. The constructor `ee.ImageCollection()` or the convenience method `ee.ImageCollection.fromImages()` create image collections from lists of images. You can also create new image collections by merging existing collections.
For example: ``` # Create arbitrary constant images. constant1 = ee.Image(1) constant2 = ee.Image(2) # Create a collection by giving a list to the constructor. collectionFromConstructor = ee.ImageCollection([constant1, constant2]) print('collectionFromConstructor: ') collectionFromConstructor.getInfo() # Create a collection with fromImages(). collectionFromImages = ee.ImageCollection.fromImages([ee.Image(3), ee.Image(4)]) print('collectionFromImages: ') collectionFromImages.getInfo() # Merge two collections. mergedCollection = collectionFromConstructor.merge(collectionFromImages) print('mergedCollection: ') mergedCollection.getInfo() # Create a toy FeatureCollection features = ee.FeatureCollection( [ee.Feature(None, {'foo': 1}), ee.Feature(None, {'foo': 2})]) # Create an ImageCollection from the FeatureCollection # by mapping a function over the FeatureCollection. images = features.map(lambda feature:ee.Image(ee.Number(feature.get('foo')))) # Print the resultant collection. print('Image collection: ') images.getInfo() ``` Note that in this example an `ImageCollection` is created by mapping a function that returns an `Image` over a `FeatureCollection`. Learn more about mapping in the [Mapping over an ImageCollection section](https://developers.google.com/earth-engine/ic_mapping). Learn more about feature collections from the [FeatureCollection section](https://developers.google.com/earth-engine/feature_collections). # 2. ImageCollection Information and Metadata As with Images, there are a variety of ways to get information about an ImageCollection. The collection can be printed directly to the console, but the console printout is **limited to 5000 elements**. Collections larger than 5000 images will need to be filtered before printing. Printing a large collection will be correspondingly slower. The following example shows various ways of getting information about image collections programmatically. ``` # Load a Landsat 8 ImageCollection for a single path-row. 
collection = ee.ImageCollection('LANDSAT/LC08/C01/T1_TOA')\ .filter(ee.Filter.eq('WRS_PATH', 44))\ .filter(ee.Filter.eq('WRS_ROW', 34))\ .filterDate('2014-03-01', '2014-08-01') print('Collection: ') collection.getInfo() # Get the number of images. count = collection.size() print('Count: ', count.getInfo()) from datetime import datetime as dt # Get the date range of images in the collection. rango = collection.reduceColumns(ee.Reducer.minMax(), ["system:time_start"]) # Passing numeric date to standard init_date = ee.Date(rango.get('min')).getInfo()['value']/1000. init_date_f = dt.utcfromtimestamp(init_date).strftime('%Y-%m-%d %H:%M:%S') last_date = ee.Date(rango.get('max')).getInfo()['value']/1000. last_date_f = dt.utcfromtimestamp(last_date).strftime('%Y-%m-%d %H:%M:%S') print('Date range: ',init_date_f,' - ',last_date_f) # Get statistics for a property of the images in the collection. sunStats = collection.aggregate_stats('SUN_ELEVATION') print('Sun elevation statistics: ') sunStats.getInfo() # Sort by a cloud cover property, get the least cloudy image. image = ee.Image(collection.sort('CLOUD_COVER').first()) print('Least cloudy image: ', ) image.getInfo() # Limit the collection to the 10 most recent images. recent = collection.sort('system:time_start', False).limit(10) print('Recent images: ') # recent.getInfo() ``` # 3. Filtering an ImageCollection As illustrated in the Get Started section and the ImageCollection Information section, Earth Engine provides a variety of convenience methods for filtering image collections. Specifically, many common use cases are handled by **imageCollection.filterDate()**, and **imageCollection.filterBounds()**. For general purpose filtering, use **imageCollection.filter()** with an **ee.Filter** as an argument. The following example demonstrates both convenience methods and **filter()** to identify and remove images with bad registration from an **ImageCollection**. ``` # Load Landsat 5 data, filter by date and bounds. 
collection = ee.ImageCollection('LANDSAT/LT05/C01/T2')\
    .filterDate('1987-01-01', '1990-05-01')\
    .filterBounds(ee.Geometry.Point(25.8544, -18.08874))

# Also filter the collection by the IMAGE_QUALITY property.
filtered = collection.filterMetadata(name='IMAGE_QUALITY', operator='equals', value=9)

# Create two composites to check the effect of filtering by IMAGE_QUALITY.
badComposite = ee.Algorithms.Landsat.simpleComposite(collection=collection, percentile=75, cloudScoreRange=3)
goodComposite = ee.Algorithms.Landsat.simpleComposite(collection=filtered, percentile=75, cloudScoreRange=3)

dicc = {
    'Bad composite': badComposite.getMapId({'bands': ['B3', 'B2', 'B1'], 'gain': 3.5}),
    'Good composite': goodComposite.getMapId({'bands': ['B3', 'B2', 'B1'], 'gain': 3.5})
}

# Display the results
center = [-18.08874, 25.8544]
Mapdisplay(center, dicc, zoom_start=13)
```

# 4. Mapping over an ImageCollection

To apply a function to every Image in an ImageCollection use imageCollection.map(). The only argument to map() is a function which takes one parameter: an ee.Image. For example, the following code adds a timestamp band to every image in the collection.

```
from pprint import pprint

# Load a Landsat 8 collection for a single path-row.
collection = ee.ImageCollection('LANDSAT/LC08/C01/T1_TOA')\
    .filter(ee.Filter.eq('WRS_PATH', 44))\
    .filter(ee.Filter.eq('WRS_ROW', 34))

# This function adds a band representing the image timestamp.
def addTime(image):
    return image.addBands(image.metadata('system:time_start'))

# Map the function over the collection and display the result.
pprint(collection.map(addTime).limit(3).getInfo())
```

Note that in the predefined function, the **metadata()** method is used to create a new Image from the value of a property. As discussed in the *Reducing* and *Compositing* sections, having that time band is useful for the linear modeling of change and for making composites.

The mapped function is limited in the operations it can perform. Specifically, it can't modify variables outside the function; it can't print anything; it can't use client-side 'if' or 'for' statements. However, you can use **ee.Algorithms.If()** to perform conditional operations in a mapped function.

```
# Load a Landsat 8 collection for a single path-row.
collection = ee.ImageCollection('LANDSAT/LC08/C01/T1_TOA')\
    .filter(ee.Filter.eq('WRS_PATH', 44))\
    .filter(ee.Filter.eq('WRS_ROW', 34))

# This function uses a conditional statement to return the image if
# the solar elevation > 40 degrees. Otherwise it returns a zero image.
def conditional(image):
    return ee.Algorithms.If(ee.Number(image.get('SUN_ELEVATION')).gt(40), image, ee.Image(0))

# Map the function over the collection, convert to a List and print the result.
print('Expand this to see the result: ')
pprint(collection.map(conditional).limit(3).getInfo())
```

Inspect the list of images in the output ImageCollection and note that when the condition evaluated by the **If()** algorithm is true, the output contains a constant image. Although this demonstrates a server-side conditional function (learn more about client vs. server in Earth Engine on this page), avoid **If()** in general and use filters instead.

# 5. Reducing an ImageCollection

To composite images in an **ImageCollection**, use **imageCollection.reduce()**. This will composite all the images in the collection to a single image representing, for example, the min, max, mean or standard deviation of the images. (See the Reducers section for more information about reducers.)
For example, to create a median value image from a collection: ``` # Load a Landsat 8 collection for a single path-row. collection = ee.ImageCollection('LANDSAT/LC08/C01/T1_TOA').filter(ee.Filter.eq('WRS_PATH', 44)).filter(ee.Filter.eq('WRS_ROW', 34)).filterDate('2014-01-01', '2015-01-01') # Compute a median image and display. median = collection.median() dicc = { 'median' : median.getMapId({'bands': ['B4', 'B3', 'B2'], 'max': 0.3}) } # Display the results center = [37.7726, -122.3578] Mapdisplay(center, dicc, zoom_start= 12) ``` At each location in the output image, in each band, the pixel value is the median of all unmasked pixels in the input imagery (the images in the collection). In the previous example, median() is a convenience method for the following call: ``` # Reduce the collection with a median reducer. median = collection.reduce(ee.Reducer.median()) # Display the median image. dicc = {'also median' : median.getMapId({'bands': ['B4_median', 'B3_median', 'B2_median'], 'max': 0.3})} # Display the results center = [37.7726, -122.3578] Mapdisplay(center, dicc, zoom_start= 12) ``` Note that the band names differ as a result of using reduce() instead of the convenience method. Specifically, the names of the reducer have been appended to the band names. More complex reductions are also possible using reduce(). For example, to compute the long term linear trend over a collection, use one of the linear regression reducers. The following code computes the linear trend of MODIS Enhanced Vegetation Index (EVI). ``` # This function adds a band representing the image timestamp. def addTime(image): return image.addBands(image.metadata('system:time_start').divide(1000 * 60 * 60 * 24 * 365)) # Load a MODIS collection, filter to several years of 16 day mosaics, and map the time band function over it. collection = ee.ImageCollection('MODIS/006/MYD13A1').filterDate('2004-01-01', '2010-10-31').map(addTime) # Select the bands to model with the independent variable first. 
# Compute the linear trend over time. trend = collection.select(['system:time_start', 'EVI']).reduce(ee.Reducer.linearFit()) # Display the trend with increasing slopes in green, decreasing in red. dicc = { 'EVI trend' : trend.getMapId({'min': 0, 'max': [-100, 100, 10000], 'bands': ['scale', 'scale', 'offset']}) } # Display the results center = [39.436, -96.943] Mapdisplay(center, dicc, zoom_start= 5) ``` Note that the output of the reduction in this example is a two banded image with one band for the slope of a linear regression (**scale**) and one band for the intercept (**offset**). Explore the API documentation to see a list of the reducers that are available to reduce an **ImageCollection** to a single Image. See the **ImageCollection.reduce()** section for more information about reducing image collections. # 6. Compositing and Mosaicking In general, compositing refers to the process of combining spatially overlapping images into a single image based on an aggregation function. Mosaicking refers to the process of spatially assembling image datasets to produce a spatially continuous image. In Earth Engine, these terms are used interchangeably, though both compositing and mosaicking are supported. For example, consider the task of compositing multiple images in the same location. For example, using one National Agriculture Imagery Program (NAIP) Digital Orthophoto Quarter Quadrangle (DOQQ) at different times, the following example demonstrates making a maximum value composite: ``` # Load three NAIP quarter quads in the same location, different times. naip2004_2012 = ee.ImageCollection('USDA/NAIP/DOQQ')\ .filterBounds(ee.Geometry.Point(-71.08841, 42.39823))\ .filterDate('2004-07-01', '2012-12-31')\ .select(['R', 'G', 'B']) # Temporally composite the images with a maximum value function. 
composite = naip2004_2012.max() center = [42.3712, -71.12532] Mapdisplay(center, {'max value composite':composite.getMapId()},zoom_start=12) ``` Consider the need to mosaic four different DOQQs at the same time, but different locations. The following example demonstrates that using **imageCollection.mosaic()**: ``` # Load four 2012 NAIP quarter quads, different locations. naip2012 = ee.ImageCollection('USDA/NAIP/DOQQ')\ .filterBounds(ee.Geometry.Rectangle(-71.17965, 42.35125, -71.08824, 42.40584))\ .filterDate('2012-01-01', '2012-12-31') # Spatially mosaic the images in the collection and display. mosaic = naip2012.mosaic() center = [42.3712,-71.12532] Mapdisplay(center,{'spatial mosaic':mosaic.getMapId()},zoom_start=12) ``` Note that there is some overlap in the DOQQs in the previous example. The **mosaic()** method composites overlapping images according to their order in the collection (last on top). To control the source of pixels in a mosaic (or a composite), use image masks. For example, the following uses thresholds on spectral indices to mask the image data in a mosaic: ``` # Load a NAIP quarter quad, display. naip = ee.Image('USDA/NAIP/DOQQ/m_4207148_nw_19_1_20120710') # Create the NDVI and NDWI spectral indices. ndvi = naip.normalizedDifference(['N', 'R']) ndwi = naip.normalizedDifference(['G', 'N']) # Create some binary images from thresholds on the indices. # This threshold is designed to detect bare land. bare1 = ndvi.lt(0.2).And(ndwi.lt(0.3)) # This detects bare land with lower sensitivity. It also detects shadows. bare2 = ndvi.lt(0.2).And(ndwi.lt(0.8)); # Define visualization parameters for the spectral indices. ndviViz = {'min': -1, 'max': 1, 'palette': ['FF0000', '00FF00']} ndwiViz = {'min': 0.5, 'max': 1, 'palette': ['00FFFF', '0000FF']} # Mask and mosaic visualization images. The last layer is on top. mosaic = ee.ImageCollection([ # NDWI > 0.5 is water. Visualize it with a blue palette. 
ndwi.updateMask(ndwi.gte(0.5)).visualize(**ndwiViz), # NDVI > 0.2 is vegetation. Visualize it with a green palette. ndvi.updateMask(ndvi.gte(0.2)).visualize(**ndviViz), # Visualize bare areas with shadow (bare2 but not bare1) as gray. bare2.updateMask(bare2.And(bare1.Not())).visualize(**{'palette': ['AAAAAA']}), # Visualize the other bare areas as white. bare1.updateMask(bare1).visualize(**{'palette': ['FFFFFF']}), ]).mosaic() center = [42.3443, -71.0915] dicc = {'NAIP DOQQ':naip.getMapId(), 'Visualization mosaic':mosaic.getMapId()} Mapdisplay(center,dicc,zoom_start=14) ``` To make a composite which maximizes an arbitrary band in the input, use **imageCollection.qualityMosaic()**. The **qualityMosaic()** method sets each pixel in the composite based on which image in the collection has a maximum value for the specified band. For example, the following code demonstrates making a greenest pixel composite and a recent value composite: ``` # This function masks clouds in Landsat 8 imagery. def maskClouds(img): scored = ee.Algorithms.Landsat.simpleCloudScore(img) return img.updateMask(scored.select(['cloud']).lt(20)) # This function masks clouds and adds quality bands to Landsat 8 images. def addQualityBands(img): return maskClouds(img).addBands(img.normalizedDifference(['B5', 'B4']))\ .addBands(img.metadata('system:time_start')) # time in days # Load a 2014 Landsat 8 ImageCollection. # Map the cloud masking and quality band function over the collection. collection = ee.ImageCollection('LANDSAT/LC08/C01/T1_TOA')\ .filterDate('2014-06-01', '2014-12-31')\ .map(addQualityBands) # Create a cloud-free, most recent value composite. recentValueComposite = collection.qualityMosaic('system:time_start') # Create a greenest pixel composite. greenestPixelComposite = collection.qualityMosaic('nd') # Create a cloudy image in the collection. cloudy = ee.Image('LANDSAT/LC08/C01/T1_TOA/LC08_044034_20140825'); # Display the results. 
center = [37.8239, -122.374]  # San Francisco Bay
vizParams = {'bands': ['B5', 'B4', 'B3'], 'min': 0, 'max': 0.4}
dicc = {'recent value composite': recentValueComposite.getMapId(vizParams),
        'greenest pixel composite': greenestPixelComposite.getMapId(vizParams),
        'cloudy': cloudy.getMapId(vizParams)}
Mapdisplay(center, dicc, zoom_start=12)
```

# 7. Iterating over an ImageCollection

Although `map()` applies a function to every image in a collection, the function visits every image in the collection independently. For example, suppose you want to compute a cumulative anomaly ($A_t$) at time $t$ from a time series. To obtain a recursively defined series of the form $A_t = f(Image_t, A_{t-1})$, mapping won't work because the function ($f$) depends on the previous result ($A_{t-1}$).

For example, suppose you want to compute a series of cumulative Normalized Difference Vegetation Index (NDVI) anomaly images relative to a baseline. Let $A_0 = 0$ and $f(Image_t, A_{t-1}) = Image_t + A_{t-1}$, where $A_{t-1}$ is the cumulative anomaly up to time $t-1$ and $Image_t$ is the anomaly at time $t$.

Use **imageCollection.iterate()** to make this recursively defined ImageCollection. In the following example, the function **accumulate()** takes two parameters: an image in the collection, and a list of all the previous outputs. With each call to **iterate()**, the anomaly is added to the running sum and the result is added to the list. The final result is passed to the **ImageCollection** constructor to get a new sequence of images:

```
# Load MODIS EVI imagery.
collection = ee.ImageCollection('MODIS/006/MYD13A1').select('EVI')

# Define reference conditions from the first 10 years of data.
reference = collection.filterDate('2001-01-01', '2010-12-31')\
    .sort('system:time_start', False)  # Sort chronologically in descending order.

# Compute the mean of the first 10 years.
mean = reference.mean()

# Compute anomalies by subtracting the 2001-2010 mean from each image in a
# collection of 2011-2014 images.
# Copy the date metadata over to the computed anomaly images in the new collection.
series = collection.filterDate('2011-01-01', '2014-12-31')\
    .map(lambda img: img.subtract(mean).set('system:time_start', img.get('system:time_start')))

# Display cumulative anomalies.
center = [40.2, -100.811]
vizParams = {'min': -60000, 'max': 60000, 'palette': ['FF0000', '000000', '00FF00']}
dicc = {'EVI anomaly': series.sum().getMapId(vizParams)}
Mapdisplay(center, dicc, zoom_start=5)

# Get the timestamp from the most recent image in the reference collection.
time0 = reference.first().get('system:time_start')

# Use imageCollection.iterate() to make a collection of cumulative anomaly over time.
# The initial value for iterate() is a list of anomaly images already processed.
# The first anomaly image in the list is just 0, with the time0 timestamp.
first = ee.List([
    ee.Image(0).set('system:time_start', time0).select([0], ['EVI'])  # Rename the first band 'EVI'.
])

# This is a function to pass to iterate().
# As anomaly images are computed, add them to the list.
def accumulate(img, lista):
    # Get the latest cumulative anomaly image from the end of the list with
    # get(-1). Since the type of the list argument to the function is unknown,
    # it needs to be cast to a List. Since the return type of get() is unknown,
    # cast it to Image.
    img = ee.Image(img)
    previous = ee.Image(ee.List(lista).get(-1))
    # Add the current anomaly to make a new cumulative anomaly image.
    added = img.add(previous)\
        .set('system:time_start', img.get('system:time_start'))  # Propagate metadata to the new image.
    # Return the list with the cumulative anomaly inserted.
    return ee.List(lista).add(added)

# Create an ImageCollection of cumulative anomaly images by iterating.
# Since the return type of iterate is unknown, it needs to be cast to a List.
cumulative = ee.ImageCollection(ee.List(series.iterate(accumulate, first)))


def stackCollection(collection):
    # Create an initial image.
first = ee.Image(collection.first()).select([]) # Write a function that appends a band to an image. def appendBands(image, previous): return ee.Image(previous).addBands(image) return ee.Image(collection.iterate(appendBands, first)) import numpy as np import matplotlib.pyplot as plt # Chart some interesting locations. pt1 = ee.Geometry.Point(116.4647, 40.1054) # ee.ImageCollection to ee.Image img_cumulative = stackCollection(cumulative) series = img_cumulative.reduceRegions(collection=pt1, reducer=ee.Reducer.mean(), scale=500) dic_series = series.getInfo() EVI_anom = np.array(list(dic_series['features'][0]['properties'].values())) plt.plot(EVI_anom) plt.show() ```
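Outside Earth Engine, `iterate()` is essentially a fold (reduce) over the collection. A minimal pure-Python sketch of the same accumulation pattern, with hypothetical scalar anomalies standing in for the anomaly images:

```python
from functools import reduce

def accumulate(previous, anomaly):
    """Append the running total plus the new anomaly, mirroring the
    ee.List-based accumulate() function above."""
    return previous + [previous[-1] + anomaly]

# Hypothetical per-period anomaly values standing in for anomaly images.
anomalies = [2.0, -1.5, 0.5, 3.0]

# Seed the fold with 0, like the ee.Image(0) used as the first list element.
cumulative = reduce(accumulate, anomalies, [0.0])
print(cumulative)  # [0.0, 2.0, 0.5, 1.0, 4.0]
```

Each output element is the cumulative sum up to that step, which is exactly what the recursively defined ImageCollection holds per pixel.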
## Libraries ``` ### Uncomment the next two lines to, ### install tensorflow_hub and tensorflow datasets #!pip install tensorflow_hub #!pip install tensorflow_datasets import numpy as np import tensorflow as tf import matplotlib.pyplot as plt import tensorflow_hub as hub import tensorflow_datasets as tfds from tensorflow.keras import layers ``` ### Download and Split data into Train and Validation ``` def get_data(): (train_set, validation_set), info = tfds.load( 'tf_flowers', with_info=True, as_supervised=True, split=['train[:70%]', 'train[70%:]'], ) return train_set, validation_set, info train_set, validation_set, info = get_data() num_examples = info.splits['train'].num_examples num_classes = info.features['label'].num_classes print('Total Number of Classes: {}'.format(num_classes)) print('Total Number of Training Images: {}'.format(len(train_set))) print('Total Number of Validation Images: {} \n'.format(len(validation_set))) img_shape = 224 batch_size = 32 def format_image(image, label): image = tf.image.resize(image, (img_shape, img_shape))/255.0 return image, label train_batches = train_set.shuffle(num_examples//4).map(format_image).batch(batch_size).prefetch(1) validation_batches = validation_set.map(format_image).batch(batch_size).prefetch(1) ``` ### Getting MobileNet model's learned features ``` def get_mobilenet_features(): URL = "https://tfhub.dev/google/tf2-preview/mobilenet_v2/feature_vector/4" global img_shape feature_extractor = hub.KerasLayer(URL, input_shape=(img_shape, img_shape,3)) return feature_extractor ### Freezing the layers of transferred model (MobileNet) feature_extractor = get_mobilenet_features() feature_extractor.trainable = False ``` ## Deep Learning Model - Transfer Learning using MobileNet ``` def create_transfer_learned_model(feature_extractor): global num_classes model = tf.keras.Sequential([ feature_extractor, layers.Dense(num_classes, activation='softmax') ]) model.compile( optimizer='adam', 
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=False), metrics=['accuracy']) model.summary() return model ``` ### Training the last classification layer of the model Achieved Validation Accuracy: 90.10% (significant improvement over simple architecture) ``` epochs = 6 model = create_transfer_learned_model(feature_extractor) history = model.fit(train_batches, epochs=epochs, validation_data=validation_batches) ``` ### Plotting Accuracy and Loss Curves ``` def create_plots(history): acc = history.history['accuracy'] val_acc = history.history['val_accuracy'] loss = history.history['loss'] val_loss = history.history['val_loss'] global epochs epochs_range = range(epochs) plt.figure(figsize=(8, 8)) plt.subplot(1, 2, 1) plt.plot(epochs_range, acc, label='Training Accuracy') plt.plot(epochs_range, val_acc, label='Validation Accuracy') plt.legend(loc='lower right') plt.title('Training and Validation Accuracy') plt.subplot(1, 2, 2) plt.plot(epochs_range, loss, label='Training Loss') plt.plot(epochs_range, val_loss, label='Validation Loss') plt.legend(loc='upper right') plt.title('Training and Validation Loss') plt.show() create_plots(history) ``` ### Prediction ``` def predict(): global train_batches, info image_batch, label_batch = next(iter(train_batches.take(1))) image_batch = image_batch.numpy() label_batch = label_batch.numpy() predicted_batch = model.predict(image_batch) predicted_batch = tf.squeeze(predicted_batch).numpy() class_names = np.array(info.features['label'].names) predicted_ids = np.argmax(predicted_batch, axis=-1) predicted_class_names = class_names[predicted_ids] return image_batch, label_batch, predicted_ids, predicted_class_names image_batch, label_batch, predicted_ids, predicted_class_names = predict() print("Labels: ", label_batch) print("Predicted labels: ", predicted_ids) def plot_figures(): global image_batch, predicted_ids, label_batch plt.figure(figsize=(10,9)) for n in range(30): plt.subplot(6,5,n+1) plt.subplots_adjust(hspace =
0.3) plt.imshow(image_batch[n]) color = "blue" if predicted_ids[n] == label_batch[n] else "red" plt.title(predicted_class_names[n].title(), color=color) plt.axis('off') _ = plt.suptitle("Model predictions (blue: correct, red: incorrect)") plot_figures() ```
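The `predict()` function above turns softmax probabilities into class names with `np.argmax` and NumPy fancy indexing. A minimal sketch of that step in isolation, using a hypothetical 3-class probability batch and hypothetical class names:

```python
import numpy as np

# Hypothetical softmax outputs for a batch of 4 images over 3 classes.
probs = np.array([
    [0.70, 0.20, 0.10],
    [0.10, 0.80, 0.10],
    [0.30, 0.30, 0.40],
    [0.05, 0.05, 0.90],
])

class_names = np.array(['daisy', 'rose', 'tulip'])  # hypothetical label names

predicted_ids = np.argmax(probs, axis=-1)           # index of the largest probability per row
predicted_class_names = class_names[predicted_ids]  # fancy indexing maps ids to names

# Batch accuracy against hypothetical ground-truth labels.
label_batch = np.array([0, 1, 0, 2])
accuracy = float(np.mean(predicted_ids == label_batch))
print(predicted_ids, accuracy)  # [0 1 2 2] 0.75
```

The same comparison (`predicted_ids[n] == label_batch[n]`) is what drives the blue/red title coloring in `plot_figures()`.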
## 1. The World Bank's international debt data <p>It is not just us humans who take on debt to manage our necessities. A country may also take on debt to manage its economy. For example, infrastructure spending is one costly ingredient required for a country's citizens to lead comfortable lives. <a href="https://www.worldbank.org">The World Bank</a> is one organization that provides such loans to countries.</p> <p>In this notebook, we are going to analyze international debt data collected by The World Bank. The dataset contains information about the amount of debt (in USD) owed by developing countries across several categories. We are going to find the answers to questions like: </p> <ul> <li>What is the total amount of debt that is owed by the countries listed in the dataset?</li> <li>Which country owes the maximum amount of debt and what does that amount look like?</li> <li>What is the average amount of debt owed by countries across different debt indicators?</li> </ul> <p><img src="https://assets.datacamp.com/production/project_754/img/image.jpg" alt></p> <p>The first line of code connects us to the <code>international_debt</code> database where the table <code>international_debt</code> resides. Let's first <code>SELECT</code> <em>all</em> of the columns from the <code>international_debt</code> table. Also, we'll limit the output to the first ten rows to keep the output clean.</p> ``` %%sql postgresql:///international_debt SELECT * FROM international_debt LIMIT 10; ``` ## 2. Finding the number of distinct countries <p>From the first ten rows, we can see the amount of debt owed by <em>Afghanistan</em> in the different debt indicators. But we do not yet know the number of different countries in the table. There are repetitions in the country names because a country is likely to have debt in more than one debt indicator. </p> <p>Without a count of unique countries, we will not be able to perform our statistical analyses holistically.
In this section, we are going to extract the number of unique countries present in the table. </p> ``` %%sql SELECT COUNT(DISTINCT country_name) AS total_distinct_countries FROM international_debt; ``` ## 3. Finding out the distinct debt indicators <p>We can see there are a total of 124 countries present in the table. As we saw in the first section, there is a column called <code>indicator_name</code> that briefly specifies the purpose of taking the debt. Just beside that column, there is another column called <code>indicator_code</code> which symbolizes the category of these debts. Knowing about these various debt indicators will help us to understand the areas in which a country can possibly be indebted. </p> ``` %%sql SELECT DISTINCT indicator_code AS distinct_debt_indicators FROM international_debt ORDER BY distinct_debt_indicators; ``` ## 4. Totaling the amount of debt owed by the countries <p>As mentioned earlier, the financial debt of a particular country represents its economic state. But if we were to project this on an overall global scale, how would we approach it?</p> <p>Let's switch gears from the debt indicators now and find out the total amount of debt (in USD) that is owed by the different countries. This will give us a sense of how the overall economy of the entire world is holding up.</p> ``` %%sql SELECT ROUND(SUM(debt)/1000000, 2) AS total_debt FROM international_debt; ``` ## 5. Country with the highest debt <p>"Human beings cannot comprehend very large or very small numbers. It would be useful for us to acknowledge that fact." - <a href="https://en.wikipedia.org/wiki/Daniel_Kahneman">Daniel Kahneman</a>. That is more than <em>3 million <strong>million</strong></em> USD, an amount which is really hard for us to fathom. </p> <p>Now that we have the exact total of the amounts of debt owed by several countries, let's find out the country that owes the highest amount of debt along with the amount.
<strong>Note</strong> that this debt is the sum of different debts owed by a country across several categories. This will help us understand more about the country in terms of its socio-economic scenario. We can also find out the category in which the country owes its highest debt. But we will leave that for now. </p> ``` %%sql SELECT country_name, SUM(debt) AS total_debt FROM international_debt GROUP BY country_name ORDER BY total_debt DESC LIMIT 1; ``` ## 6. Average amount of debt across indicators <p>So, it was <em>China</em>. A more in-depth breakdown of China's debts can be found <a href="https://datatopics.worldbank.org/debt/ids/country/CHN">here</a>. </p> <p>We now have a brief overview of the dataset and a few of its summary statistics. We already have an idea of the different debt indicators in which the countries owe their debts. We can dig even further to find out, on average, how much debt a country owes. This will give us a better sense of the distribution of the amount of debt across different indicators.</p> ``` %%sql SELECT indicator_code AS debt_indicator, indicator_name, AVG(debt) AS average_debt FROM international_debt GROUP BY debt_indicator,indicator_name ORDER BY average_debt DESC LIMIT 10; ``` ## 7. The highest amount of principal repayments <p>We can see that the indicator <code>DT.AMT.DLXF.CD</code> tops the chart of average debt. This category covers repayment of long-term debts. Countries take on long-term debt to acquire immediate capital. More information about this category can be found <a href="https://datacatalog.worldbank.org/principal-repayments-external-debt-long-term-amt-current-us-0">here</a>. </p> <p>An interesting observation in the above finding is that there is a huge difference in the amounts of the indicators after the second one.
This indicates that the first two indicators might be the most severe categories in which the countries owe their debts.</p> <p>We can investigate this a bit more to find out which country owes the highest amount of debt in the category of long-term debts (<code>DT.AMT.DLXF.CD</code>). Since not all the countries suffer from the same kind of economic disturbances, this finding will allow us to understand that particular country's economic condition a bit more specifically. </p> ``` %%sql SELECT country_name, indicator_name FROM international_debt WHERE debt = (SELECT MAX(debt) FROM international_debt); ``` ## 8. The most common debt indicator <p>China has the highest amount of debt in the long-term debt (<code>DT.AMT.DLXF.CD</code>) category. This is verified by <a href="https://data.worldbank.org/indicator/DT.AMT.DLXF.CD?end=2018&most_recent_value_desc=true">The World Bank</a>. It is often a good idea to verify our analyses like this since it validates that our investigations are correct. </p> <p>We saw that long-term debt is the topmost category when it comes to the average amount of debt. But is it the most common indicator in which the countries owe their debt? Let's find that out. </p> ``` %%sql SELECT indicator_code, COUNT(indicator_code) AS indicator_count FROM international_debt GROUP BY indicator_code ORDER BY indicator_count DESC,indicator_code DESC LIMIT 20; ``` ## 9. Other viable debt issues and conclusion <p>There are a total of six debt indicators in which all the countries listed in our dataset have taken debt. The indicator <code>DT.AMT.DLXF.CD</code> is also in that list. So this gives us a clue that all these countries are suffering from a common economic issue. But that is only a part of the story, not its end. </p> <p>Let's change tracks from <code>debt_indicator</code>s now and focus on the amount of debt again.
Let's find out the maximum amount of debt across the indicators along with the respective country names. With this, we will be in a position to identify the other plausible economic issues a country might be going through. By the end of this section, we will have found out the debt indicators in which a country owes its highest debt. </p> <p>In this notebook, we took a look at debt owed by countries across the globe. We extracted a few summary statistics from the data and unraveled some interesting facts and figures. We also validated our findings to make sure the investigations are correct.</p> ``` %%sql SELECT country_name, indicator_code, MAX(debt) AS maximum_debt FROM international_debt GROUP BY country_name,indicator_code ORDER BY maximum_debt DESC LIMIT 10; ```
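For readers who prefer pandas over SQL, the final GROUP BY / ORDER BY / LIMIT query maps onto a groupby-max followed by a sort. A minimal sketch over a hypothetical four-row slice of the table (the notebook itself runs the real query through `%%sql`):

```python
import pandas as pd

# Hypothetical slice of the international_debt table.
df = pd.DataFrame({
    'country_name':   ['China', 'China', 'Brazil', 'Brazil'],
    'indicator_code': ['DT.AMT.DLXF.CD', 'DT.INT.DLXF.CD',
                       'DT.AMT.DLXF.CD', 'DT.INT.DLXF.CD'],
    'debt':           [9.6e10, 1.7e10, 9.0e10, 1.6e10],
})

# Equivalent of SELECT COUNT(DISTINCT country_name) ...
total_distinct_countries = df['country_name'].nunique()

# Equivalent of GROUP BY country_name, indicator_code -> MAX(debt),
# ORDER BY maximum_debt DESC, LIMIT 10.
maximum_debt = (df.groupby(['country_name', 'indicator_code'], as_index=False)['debt']
                  .max()
                  .sort_values('debt', ascending=False)
                  .head(10))

print(total_distinct_countries)              # 2
print(maximum_debt.iloc[0]['country_name'])  # China
```

The correspondence is direct: `groupby` plays the role of GROUP BY, `sort_values(ascending=False)` of ORDER BY ... DESC, and `head(10)` of LIMIT 10.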
# Self-Driving Car Engineer Nanodegree ## Project: **Finding Lane Lines on the Road** *** In this project, you will use the tools you learned about in the lesson to identify lane lines on the road. You can develop your pipeline on a series of individual images, and later apply the result to a video stream (really just a series of images). Check out the video clip "raw-lines-example.mp4" (also contained in this repository) to see what the output should look like after using the helper functions below. Once you have a result that looks roughly like "raw-lines-example.mp4", you'll need to get creative and try to average and/or extrapolate the line segments you've detected to map out the full extent of the lane lines. You can see an example of the result you're going for in the video "P1_example.mp4". Ultimately, you would like to draw just one line for the left side of the lane, and one for the right. In addition to implementing code, there is a brief writeup to complete. The writeup should be completed in a separate file, which can be either a markdown file or a pdf document. There is a [write up template](https://github.com/udacity/CarND-LaneLines-P1/blob/master/writeup_template.md) that can be used to guide the writing process. Completing both the code in the Ipython notebook and the writeup template will cover all of the [rubric points](https://review.udacity.com/#!/rubrics/322/view) for this project. --- Let's have a look at our first image called 'test_images/solidWhiteRight.jpg'. Run the 2 cells below (hit Shift-Enter or the "play" button above) to display the image. **Note: If, at any point, you encounter frozen display windows or other confounding issues, you can always start again with a clean slate by going to the "Kernel" menu above and selecting "Restart & Clear Output".** --- **The tools you have are color selection, region of interest selection, grayscaling, Gaussian smoothing, Canny Edge Detection and Hough Tranform line detection. 
You are also free to explore and try other techniques that were not presented in the lesson. Your goal is to piece together a pipeline to detect the line segments in the image, then average/extrapolate them and draw them onto the image for display (as below). Once you have a working pipeline, try it out on the video stream below.** --- <figure> <img src="examples/line-segments-example.jpg" width="380" alt="Combined Image" /> <figcaption> <p></p> <p style="text-align: center;"> Your output should look something like this (above) after detecting line segments using the helper functions below </p> </figcaption> </figure> <p></p> <figure> <img src="examples/laneLines_thirdPass.jpg" width="380" alt="Combined Image" /> <figcaption> <p></p> <p style="text-align: center;"> Your goal is to connect/average/extrapolate line segments to get output like this</p> </figcaption> </figure> **Run the cell below to import some packages. If you get an `import error` for a package you've already installed, try changing your kernel (select the Kernel menu above --> Change Kernel). Still have problems? Try relaunching Jupyter Notebook from the terminal prompt.
Also, consult the forums for more troubleshooting tips.** ## Import Packages ``` #importing some useful packages import matplotlib.pyplot as plt import matplotlib.image as mpimg import numpy as np import cv2 %matplotlib inline ``` ## Read in an Image ``` #reading in an image image = mpimg.imread('test_images/solidWhiteRight.jpg') #printing out some stats and plotting print('This image is:', type(image), 'with dimensions:', image.shape) plt.imshow(image) # if you wanted to show a single color channel image called 'gray', for example, call as plt.imshow(gray, cmap='gray') ``` ## Ideas for Lane Detection Pipeline **Some OpenCV functions (beyond those introduced in the lesson) that might be useful for this project are:** `cv2.inRange()` for color selection `cv2.fillPoly()` for regions selection `cv2.line()` to draw lines on an image given endpoints `cv2.addWeighted()` to coadd / overlay two images `cv2.cvtColor()` to grayscale or change color `cv2.imwrite()` to output images to file `cv2.bitwise_and()` to apply a mask to an image **Check out the OpenCV documentation to learn about these and discover even more awesome functionality!** ## Helper Functions Below are some helper functions to help get you started. They should look familiar from the lesson! 
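One quick aside on the `cv2.inRange()` color-selection idea listed above: the mask it produces is easy to understand in plain NumPy terms, which helps when debugging a pipeline. A small sketch (the sample pixels here are hypothetical, not from the project images):

```python
import numpy as np

def in_range(img, lower, upper):
    """NumPy sketch of what cv2.inRange computes: 255 where every channel
    lies within [lower, upper], 0 elsewhere."""
    mask = np.all((img >= lower) & (img <= upper), axis=-1)
    return (mask * 255).astype(np.uint8)

# A 1x3 RGB image: pure white, mid grey, pure red.
img = np.array([[[255, 255, 255], [128, 128, 128], [255, 0, 0]]], dtype=np.uint8)

# Select near-white pixels, as one might to isolate white lane markings.
white_mask = in_range(img, np.array([200, 200, 200]), np.array([255, 255, 255]))
print(white_mask)  # [[255   0   0]]
```

Only the white pixel survives the mask; combining such a mask with `cv2.bitwise_and()` keeps just the selected pixels of the original image.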
``` import math def grayscale(img): """Applies the Grayscale transform This will return an image with only one color channel but NOTE: to see the returned image as grayscale (assuming your grayscaled image is called 'gray') you should call plt.imshow(gray, cmap='gray')""" return cv2.cvtColor(img, cv2.COLOR_RGB2GRAY) # Or use BGR2GRAY if you read an image with cv2.imread() # return cv2.cvtColor(img, cv2.COLOR_BGR2GRAY) def canny(img, low_threshold, high_threshold): """Applies the Canny transform""" return cv2.Canny(img, low_threshold, high_threshold) def gaussian_blur(img, kernel_size): """Applies a Gaussian Noise kernel""" return cv2.GaussianBlur(img, (kernel_size, kernel_size), 0) def region_of_interest(img, vertices): """ Applies an image mask. Only keeps the region of the image defined by the polygon formed from `vertices`. The rest of the image is set to black. """ #defining a blank mask to start with mask = np.zeros_like(img) #defining a 3 channel or 1 channel color to fill the mask with depending on the input image if len(img.shape) > 2: channel_count = img.shape[2] # i.e. 
3 or 4 depending on your image ignore_mask_color = (255,) * channel_count else: ignore_mask_color = 255 #filling pixels inside the polygon defined by "vertices" with the fill color cv2.fillPoly(mask, vertices, ignore_mask_color) #returning the image only where mask pixels are nonzero masked_image = cv2.bitwise_and(img, mask) return masked_image def get_dist(a, b): x = (a[0]-b[0])*(a[0]-b[0]) + (a[1]-b[1])*(a[1]-b[1]); return x; def get_full_line(a, b): y = 540; x = ( (a[0]-b[0])*(y-a[1]) )/(a[1]-b[1]) + a[0]; point1 = ( int(round(x)), int(round(y)) ); y = 320; x = ( (a[0]-b[0])*(y-a[1]) )/(a[1]-b[1]) + a[0]; point2 = ( int(round(x)), int(round(y)) ); return point1, point2; def draw_lines(img, lines, color=[255, 0, 0], thickness=10): """ NOTE: this is the function you might want to use as a starting point once you want to average/extrapolate the line segments you detect to map out the full extent of the lane (going from the result shown in raw-lines-example.mp4 to that shown in P1_example.mp4). Think about things like separating line segments by their slope ((y2-y1)/(x2-x1)) to decide which segments are part of the left line vs. the right line. Then, you can average the position of each of the lines and extrapolate to the top and bottom of the lane. This function draws `lines` with `color` and `thickness`. Lines are drawn on the image inplace (mutates the image). 
If you want to make the lines semi-transparent, think about combining this function with the weighted_img() function below """ imshape = img.shape; height = imshape[0]; width = imshape[1]; current_dist = 0; for line in lines: for x1, y1, x2, y2 in line: if( 2*x1<width and 2*x2<width ): if( get_dist((x1, y1), (x2, y2))>current_dist ): current_dist = get_dist((x1, y1), (x2, y2)); point1 = (x1, y1); point2 = (x2, y2); point1, point2 = get_full_line(point1, point2); cv2.line(img, point1, point2, color, thickness); current_dist = 0; for line in lines: for x1, y1, x2, y2 in line: if( 2*x1>width and 2*x2>width ): if( get_dist((x1, y1), (x2, y2))>current_dist ): current_dist = get_dist((x1, y1), (x2, y2)); point1 = (x1, y1); point2 = (x2, y2); point1, point2 = get_full_line(point1, point2); cv2.line(img, point1, point2, color, thickness); def hough_lines(img, rho, theta, threshold, min_line_len, max_line_gap): """ `img` should be the output of a Canny transform. Returns an image with hough lines drawn. """ lines = cv2.HoughLinesP(img, rho, theta, threshold, np.array([]), minLineLength=min_line_len, maxLineGap=max_line_gap) line_img = np.zeros((img.shape[0], img.shape[1], 3), dtype=np.uint8) draw_lines(line_img, lines) return line_img # Python 3 has support for cool math symbols. def weighted_img(img, initial_img, α=0.8, β=1., γ=0.): """ `img` is the output of the hough_lines(), An image with lines drawn on it. Should be a blank image (all black) with lines drawn on it. `initial_img` should be the image before any processing. The result image is computed as follows: initial_img * α + img * β + γ NOTE: initial_img and img must be the same shape! 
""" return cv2.addWeighted(initial_img, α, img, β, γ) ``` ## Test Images Build your pipeline to work on the images in the directory "test_images" **You should make sure your pipeline works well on these images before you try the videos.** ``` import os os.listdir("test_images/") ``` ## Build a Lane Finding Pipeline Build the pipeline and run your solution on all test_images. Make copies into the `test_images_output` directory, and you can use the images in your writeup report. Try tuning the various parameters, especially the low and high Canny thresholds as well as the Hough lines parameters. ``` def make_line(a, b): return (a[0], a[1], b[0], b[1]); ''' line = {}; line.x1 = a[0]; line.y1 = a[1]; line.x2 = b[0]; line.y2 = b[1]; return line; ''' def get_lines(vertices): lines = []; n = len(vertices); for i in range(0, n): lines.append( make_line(vertices[i], vertices[(i+1)%n]) ); return lines; def find_lane_line(image): gray_image = grayscale(image); blur_image = gaussian_blur(gray_image, 5); canny_edge = canny(blur_image, 50, 100); imshape = image.shape; #print("--->", imshape[0], imshape[1]); vertices = [(0, imshape[0]), (imshape[1], imshape[0]), (500, 325), (470, 325)]; polygon = np.array([vertices], dtype=np.int32); masked_image = region_of_interest(canny_edge, polygon); #lines = get_lines(vertices); #draw_lines(masked_image, lines); rho = 2; # distance resolution in pixels of the Hough grid theta = np.pi/180; # angular resolution in radians of the Hough grid threshold = 10; # minimum number of votes (intersections in Hough grid cell) min_line_len = 10; #minimum number of pixels making up a line max_line_gap = 20; # maximum gap in pixels between connectable line segments hough_image = hough_lines(masked_image, rho, theta, threshold, min_line_len, max_line_gap); processed = weighted_img(hough_image, image); return processed; # TODO: Build your pipeline that will draw lane lines on the test_images # then save them to the test_images_output directory.
inames = os.listdir("test_images/"); for iname in inames: image = mpimg.imread('test_images/'+iname); new_image = find_lane_line(image); #plt.imshow(new_image, cmap='Greys_r'); mpimg.imsave('test_images_output/'+iname, new_image); #break; ``` ## Test on Videos You know what's cooler than drawing lanes over images? Drawing lanes over video! We can test our solution on two provided videos: `solidWhiteRight.mp4` `solidYellowLeft.mp4` **Note: if you get an import error when you run the next cell, try changing your kernel (select the Kernel menu above --> Change Kernel). Still have problems? Try relaunching Jupyter Notebook from the terminal prompt. Also, consult the forums for more troubleshooting tips.** **If you get an error that looks like this:** ``` NeedDownloadError: Need ffmpeg exe. You can download it by calling: imageio.plugins.ffmpeg.download() ``` **Follow the instructions in the error message and check out [this forum post](https://discussions.udacity.com/t/project-error-of-test-on-videos/274082) for more troubleshooting tips across operating systems.** ``` # Import everything needed to edit/save/watch video clips import imageio from moviepy.editor import VideoFileClip from IPython.display import HTML def process_image(image): # NOTE: The output you return should be a color image (3 channel) for processing video below # TODO: put your pipeline here, # you should return the final output (image where lines are drawn on lanes) result = find_lane_line(image); return result ``` Let's try the one with the solid white lane on the right first ... 
``` white_output = 'test_videos_output/solidWhiteRight.mp4' ## To speed up the testing process you may want to try your pipeline on a shorter subclip of the video ## To do so add .subclip(start_second,end_second) to the end of the line below ## Where start_second and end_second are integer values representing the start and end of the subclip ## You may also uncomment the following line for a subclip of the first 5 seconds ##clip1 = VideoFileClip("test_videos/solidWhiteRight.mp4").subclip(0,5) clip1 = VideoFileClip("test_videos/solidWhiteRight.mp4") white_clip = clip1.fl_image(process_image) #NOTE: this function expects color images!! %time white_clip.write_videofile(white_output, audio=False) ``` Play the video inline, or if you prefer find the video in your filesystem (should be in the same directory) and play it in your video player of choice. ``` HTML(""" <video width="960" height="540" controls> <source src="{0}"> </video> """.format(white_output)) ``` ## Improve the draw_lines() function **At this point, if you were successful with making the pipeline and tuning parameters, you probably have the Hough line segments drawn onto the road, but what about identifying the full extent of the lane and marking it clearly as in the example video (P1_example.mp4)? Think about defining a line to run the full length of the visible lane based on the line segments you identified with the Hough Transform. As mentioned previously, try to average and/or extrapolate the line segments you've detected to map out the full extent of the lane lines. You can see an example of the result you're going for in the video "P1_example.mp4".** **Go back and modify your draw_lines function accordingly and try re-running your pipeline. The new output should draw a single, solid line over the left lane line and a single, solid line over the right lane line. 
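As a concrete illustration of that extrapolation step: the `get_full_line()` helper defined earlier extends a detected segment to two fixed image rows (y = 540 at the bottom of a 960x540 frame, y = 320 near the top of the region of interest). The point-slope arithmetic it uses can be checked in isolation; this standalone sketch mirrors that helper:

```python
def extend_to_rows(p1, p2, y_bottom=540, y_top=320):
    """Extrapolate the line through p1 and p2 to two fixed image rows.

    Uses x = x1 + (x2 - x1) * (y - y1) / (y2 - y1); assumes the segment
    is not horizontal (y1 != y2), as is true of lane-line segments.
    """
    (x1, y1), (x2, y2) = p1, p2

    def x_at(y):
        return round(x1 + (x2 - x1) * (y - y1) / (y2 - y1))

    return (x_at(y_bottom), y_bottom), (x_at(y_top), y_top)

# A 45-degree segment: the extrapolated x should track y exactly.
bottom, top = extend_to_rows((400, 400), (450, 450))
print(bottom, top)  # (540, 540) (320, 320)
```

Picking the longest left-half and right-half segments (as `draw_lines()` does with `get_dist()`) and extending each this way yields the two solid lane lines the project asks for.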
The lines should start from the bottom of the image and extend out to the top of the region of interest.** Now for the one with the solid yellow lane on the left. This one's more tricky! ``` yellow_output = 'test_videos_output/solidYellowLeft.mp4' ## To speed up the testing process you may want to try your pipeline on a shorter subclip of the video ## To do so add .subclip(start_second,end_second) to the end of the line below ## Where start_second and end_second are integer values representing the start and end of the subclip ## You may also uncomment the following line for a subclip of the first 5 seconds ##clip2 = VideoFileClip('test_videos/solidYellowLeft.mp4').subclip(0,5) clip2 = VideoFileClip('test_videos/solidYellowLeft.mp4') yellow_clip = clip2.fl_image(process_image) %time yellow_clip.write_videofile(yellow_output, audio=False) HTML(""" <video width="960" height="540" controls> <source src="{0}"> </video> """.format(yellow_output)) ``` ## Writeup and Submission If you're satisfied with your video outputs, it's time to make the report writeup in a pdf or markdown file. Once you have this Ipython notebook ready along with the writeup, it's time to submit for review! Here is a [link](https://github.com/udacity/CarND-LaneLines-P1/blob/master/writeup_template.md) to the writeup template file. ## Optional Challenge Try your lane finding pipeline on the video below. Does it still work? Can you figure out a way to make it more robust? If you're up for the challenge, modify your pipeline so it works with this video and submit it along with the rest of your project! 
``` challenge_output = 'test_videos_output/challenge.mp4' ## To speed up the testing process you may want to try your pipeline on a shorter subclip of the video ## To do so add .subclip(start_second,end_second) to the end of the line below ## Where start_second and end_second are integer values representing the start and end of the subclip ## You may also uncomment the following line for a subclip of the first 5 seconds ##clip3 = VideoFileClip('test_videos/challenge.mp4').subclip(0,5) clip3 = VideoFileClip('test_videos/challenge.mp4') challenge_clip = clip3.fl_image(process_image) %time challenge_clip.write_videofile(challenge_output, audio=False) HTML(""" <video width="960" height="540" controls> <source src="{0}"> </video> """.format(challenge_output)) ```
```
import math
import numpy as np
import pandas as pd
from matplotlib import pyplot as plt
from scipy.stats import bayes_mvs as bayesest
import os
import time

from szsimulator import Szsimulator

%matplotlib inline

mean_size = 3      # micron
doubling_time = 18 # min
tmax = 180         # min
sample_time = 2    # min
div_steps = 10
ncells = 1000

gr = np.log(2)/doubling_time
kd = div_steps*gr/(mean_size)

ncells = 2000
sampling_time = sample_time
rprom = 10             # RNA mean concentration
pprom = 1000           # protein mean concentration
gammar = 5*gr          # RNA active degradation rate
kr = rprom*(gr+gammar) # RNA transcription rate
kp = pprom*gr/rprom    # protein translation rate

pop = np.zeros([ncells,6])
indexes = int(tmax/sampling_time)
rarray = np.zeros([ncells,indexes])
parray = np.zeros([ncells,indexes])
tarray = np.zeros([indexes])
szarray = np.zeros([ncells,indexes])
cellindex = 0
indexref = 0
start = time.time()
for cell in pop:
    if ncells > 100:
        if cellindex/ncells > indexref:
            print(str(int(100*cellindex/ncells))+"%")
            indexref += 0.1

    # Initialize the simulator
    sim = Szsimulator(tmax=tmax, sample_time=sample_time, ncells=1, gr=gr, k=kd, steps=div_steps)

    # Example of a direct SSA simulation
    cell[0] = mean_size       # initial size
    cell[1] = mean_size*rprom # initial RNA number
    cell[2] = mean_size*pprom # initial protein number
    cell[3] = (1/gr)*np.log(1-(gr/(kr*cell[0]))*np.log(np.random.rand())) # time to the next RNA creation
    cell[4] = -np.log(np.random.rand())/(gammar*cell[1]) # time to the next RNA degradation
    cell[5] = -np.log(np.random.rand())/(kp*cell[1])     # time to the next protein creation
    t = 0
    # Reactions: RNA creation, RNA active degradation, protein creation
    reactions = [[0,1,0,0,0,0],[0,-1,0,0,0,0],[0,0,1,0,0,0]]
    nextt = 0
    index = 0
    ndiv = 0
    while t < tmax: # iterating over time
        nr = cell[1]
        nprot = cell[2]
        sz = cell[0]
        tnextarr = [cell[3],cell[4],cell[5]]
        tau = np.min(tnextarr)
        cell += reactions[np.argmin(tnextarr)]
        sim.simulate(tmax=tau, export=False) # simulate size dynamics over that interval
        cell[0] = sim.get_sz(0) # taking the cell size after that simulation
        if sim.get_ndiv(0) > ndiv: # check whether the cell divided
            cell[1] = np.random.binomial(nr,0.5)    # RNA segregated binomially
            cell[2] = np.random.binomial(nprot,0.5) # protein segregated binomially
            ndiv += 1 # new number of divisions
        nr = cell[1]    # refreshing RNA number
        nprot = cell[2] # refreshing protein number
        sz = cell[0]    # refreshing size
        cell[3] = (1/gr)*np.log(1-(gr/(kr*cell[0]))*np.log(np.random.rand())) # time to the next RNA creation
        cell[4] = -np.log(np.random.rand())/(gammar*cell[1]) # time to the next RNA degradation
        cell[5] = -np.log(np.random.rand())/(kp*cell[1])     # time to the next protein creation
        t += tau
        if t > nextt and index < len(tarray): # storing data
            rarray[cellindex,index] = nr/sz    # RNA concentration
            parray[cellindex,index] = nprot/sz # protein concentration
            szarray[cellindex,index] = sz      # cell size
            tarray[index] = t                  # time
            index += 1
            nextt += sampling_time
    cellindex += 1
print('It took', int(time.time()-start), 'seconds.')

data = pd.DataFrame(np.transpose(np.array(szarray)))
ind = 0
newcol = []
for name in data.columns:
    newcol.append("mom"+str(ind))
    ind += 1
data.columns = newcol
mnszarray = []
cvszarray = []
errcv2sz = []
errmnsz = []
for m in range(len(data)):
    szs = data.loc[m, :].values.tolist()
    mean_cntr, var_cntr, std_cntr = bayesest(szs, alpha=0.95)
    mnszarray.append(mean_cntr[0])
    errmnsz.append(mean_cntr[1][1]-mean_cntr[0])
    cvszarray.append(var_cntr[0]/mean_cntr[0]**2)
    errv = (var_cntr[1][1]-var_cntr[0])/mean_cntr[0]**2+2*(mean_cntr[1][1]-mean_cntr[0])*var_cntr[0]/mean_cntr[0]**3
    errcv2sz.append(errv)
data['time'] = tarray
data['Mean_sz'] = mnszarray
data['Error_mean'] = errmnsz
data['sz_CV2'] = cvszarray
data['Error_CV2'] = errcv2sz
if not os.path.exists('./data/SSA'):
    os.makedirs('./data/SSA')
data.to_csv("./data/SSA/szsim.csv")

tmax = 9*doubling_time
dt = 0.0001*doubling_time
lamb = 1
a = gr
nsteps = div_steps
k = kd
v0 = mean_size
#psz1=[]
ndivs = 10
t = 0
bigdeltat = 0.1
steps = int(np.floor(tmax/dt))
u = np.zeros([ndivs,nsteps]) # (divs, steps)
u[0] = np.zeros(nsteps)
u[0][0] = 1 # P_00
allmeandivs4 = [] # mean number of divisions over time
allvardiv4 = []   # variance of pn over time
allmeansz4 = []
allvarsz4 = []
time4 = [] # time array
yenvol = []
xenvol = []
start = 0
count = int(np.floor(tmax/(dt*1000)))-1
count2 = 0
start = time.time()
for l in range(steps):
    utemp = u
    for n in range(len(utemp)):        # n = divisions
        for m in range(len(utemp[n])): # m = steps
            if (m==0):
                if(n==0):
                    dun = -k*v0**lamb*np.exp(lamb*a*t)*(utemp[0][0])
                    u[n][m] += dun*dt
                else:
                    arg = lamb*(a*t-n*np.log(2))
                    dun = k*v0**lamb*np.exp(arg)*((2**lamb)*utemp[n-1][len(utemp[n])-1]-utemp[n][0])
                    u[n][m] += dun*dt
            elif(m==len(utemp[n])-1):
                if(n==len(utemp)-1):
                    arg = lamb*(a*t-n*np.log(2))
                    dun = k*v0**lamb*np.exp(arg)*(utemp[n][len(utemp[n])-2])
                    u[n][m] += dun*dt
                else:
                    arg = lamb*(a*t-n*np.log(2))
                    dun = k*v0**lamb*np.exp(arg)*(utemp[n][m-1]-utemp[n][m])
                    u[n][m] += dun*dt
            else:
                arg = lamb*(a*t-n*np.log(2))
                dun = k*v0**lamb*np.exp(arg)*(utemp[n][m-1]-utemp[n][m])
                u[n][m] += dun*dt
    t += dt
    count = count+1
    if count == int(np.floor(tmax/(dt*1000))): # storing statistics
        time4.append(t/doubling_time)
        mean = 0
        for n in range(len(utemp)):
            pnval = np.sum(u[n])
            mean += n*pnval
        allmeandivs4.append(mean/mean_size)
        var = 0
        for n in range(len(utemp)): # divisions
            pnval = np.sum(u[n])
            var += (n-mean)**2*pnval
        allvardiv4.append(np.sqrt(var))
        pn = np.zeros(ndivs)
        sizen = np.zeros(ndivs)
        meansz = 0
        for ll in range(len(utemp)):
            pnltemp = np.sum(u[ll]) # probability of n divisions
            pn[ll] = pnltemp
            sizen[ll] = np.exp(a*t)/2**ll
            meansz += pnltemp*v0*np.exp(a*t)/2**ll
        allmeansz4.append(meansz)
        varsz = 0
        for ll in range(len(utemp)):
            pnltemp = np.sum(u[ll])
            varsz += (v0*np.exp(a*t)/2**ll-meansz)**2*pnltemp
        allvarsz4.append(varsz)
        count = 0
    count2 += 1
    if(count2==100):
        print(str(int(100*t/tmax))+"%")
        count2 = 0
print('It took', int(time.time()-start), 'seconds.')

fig, ax = plt.subplots(1,2, figsize=(12,4))
#ax[0].plot(tarray,mnszarray)
ax[0].fill_between(np.array(tarray)/doubling_time,np.array(mnszarray)-np.array(errmnsz),np.array(mnszarray)+np.array(errmnsz), alpha=1, edgecolor='#4db8ff', facecolor='#4db8ff',linewidth=0,label='SSA') #ax[1].plot(tarray,cvszarray) ax[1].fill_between(np.array(tarray)/doubling_time,np.array(cvszarray)-np.array(errcv2sz),np.array(cvszarray)+np.array(errcv2sz), alpha=1, edgecolor='#4db8ff', facecolor='#4db8ff',linewidth=0) ax[0].plot(np.array(time4),np.array(allmeansz4),lw=2,c='#006599',label="Numerical") ax[1].plot(np.array(time4),np.array(allvarsz4)/np.array(allmeansz4)**2,lw=2,c='#006599') ax[0].set_ylabel("$s$ ($\mu$m)",size=20) ax[1].set_ylabel("$C_V^2(s)$",size=20) ax[0].set_xlabel(r"$t/\tau$",size=20) ax[1].set_xlabel(r"$t/\tau$",size=20) ax[0].set_ylim([1,1.2*np.max(mnszarray)]) ax[1].set_ylim([0,1.2*np.max(cvszarray)]) for l in [0,1]: ax[l].set_xlim([0,tmax/doubling_time]) taqui=np.arange(0,(tmax+1)/doubling_time,step=1) ax[l].set_xticks(np.array(taqui)) ax[l].grid() ax[l].tick_params(axis='x', labelsize=15) ax[l].tick_params(axis='y', labelsize=15) for axis in ['bottom','left']: ax[l].spines[axis].set_linewidth(2) ax[l].tick_params(axis='both', width=2,length=6) for axis in ['top','right']: ax[l].spines[axis].set_linewidth(0) ax[l].tick_params(axis='both', width=0,length=6) plt.subplots_adjust(hspace=0.3,wspace=0.3) taqui=np.arange(0,0.15,step=0.02) ax[1].set_yticks(np.array(taqui)) ax[0].legend(fontsize=15) if not os.path.exists('./figures/SSA'): os.makedirs('./figures/SSA') plt.savefig('./figures/SSA/size_statistics.svg',bbox_inches='tight') plt.savefig('./figures/SSA/size_statistics.png',bbox_inches='tight') data=pd.DataFrame(np.transpose(np.array(rarray))) ind=0 newcol=[] for name in data.columns: newcol.append("mom"+str(ind)) ind+=1 data.columns=newcol mnrnaarray=[] cvrnaarray=[] errcv2rna=[] errmnrna=[] for m in range(len(data)): rnas=data.loc[m, :].values.tolist() mean_cntr, var_cntr, std_cntr = bayesest(rnas,alpha=0.95) 
mnrnaarray.append(mean_cntr[0]) errmnrna.append(mean_cntr[1][1]-mean_cntr[0]) cvrnaarray.append(var_cntr[0]/mean_cntr[0]**2) errv=(var_cntr[1][1]-var_cntr[0])/mean_cntr[0]**2+2*(mean_cntr[1][1]-mean_cntr[0])*var_cntr[0]/mean_cntr[0]**3 errcv2rna.append(errv) data['time'] = tarray data['Mean_RNA'] = mnrnaarray data['Error_mean'] = errmnrna data['RNA_CV2'] = cvrnaarray data['Error_CV2'] = errcv2rna if not os.path.exists('./data/SSA'): os.makedirs('./data/SSA') data.to_csv("./data/SSA/RNAsim.csv") fig, ax = plt.subplots(1,2, figsize=(12,4)) ax[0].plot(np.array(tarray)/doubling_time,mnrnaarray,c="#BD0025") ax[0].fill_between(np.array(tarray)/doubling_time,np.array(mnrnaarray)-np.array(errmnrna),np.array(mnrnaarray)+np.array(errmnrna), alpha=1, edgecolor='#FF3333', facecolor='#FF3333',linewidth=0) ax[1].plot(np.array(tarray)/doubling_time,cvrnaarray,c="#BD0025") ax[1].fill_between(np.array(tarray)/doubling_time,np.array(cvrnaarray)-np.array(errcv2rna),np.array(cvrnaarray)+np.array(errcv2rna), alpha=1, edgecolor='#FF3333', facecolor='#FF3333',linewidth=0) ax[0].set_ylabel("RNA",size=20) ax[1].set_ylabel("$C_V^2(r)$",size=20) ax[0].set_xlabel(r"$t/\tau$",size=20) ax[1].set_xlabel(r"$t/\tau$",size=20) ax[0].set_ylim([0,1.2*np.max(mnrnaarray)]) ax[1].set_ylim([0,1.2*np.max(cvrnaarray)]) for l in [0,1]: ax[l].set_xlim([0,tmax/doubling_time]) taqui=np.arange(0,(tmax+1)/doubling_time,step=1) ax[l].set_xticks(np.array(taqui)) ax[l].grid() ax[l].tick_params(axis='x', labelsize=15) ax[l].tick_params(axis='y', labelsize=15) for axis in ['bottom','left']: ax[l].spines[axis].set_linewidth(2) ax[l].tick_params(axis='both', width=2,length=6) for axis in ['top','right']: ax[l].spines[axis].set_linewidth(0) ax[l].tick_params(axis='both', width=0,length=6) plt.subplots_adjust(hspace=0.3,wspace=0.3) taqui=np.arange(0,1.2*np.max(cvrnaarray),step=np.round(.2*np.max(cvrnaarray),2)) ax[1].set_yticks(np.array(taqui)) if not os.path.exists('./figures/SSA'): os.makedirs('./figures/SSA') 
plt.savefig('./figures/SSA/rna_statistics.svg',bbox_inches='tight')
plt.savefig('./figures/SSA/rna_statistics.png',bbox_inches='tight')

data = pd.DataFrame(np.transpose(np.array(parray)))
ind = 0
newcol = []
for name in data.columns:
    newcol.append("mom"+str(ind))
    ind += 1
data.columns = newcol
mnprotarray = []
cvprotarray = []
errcv2prot = []
errmnprot = []
for m in range(len(data)):
    prots = data.loc[m, :].values.tolist()
    mean_cntr, var_cntr, std_cntr = bayesest(prots, alpha=0.95)
    mnprotarray.append(mean_cntr[0])
    errmnprot.append(mean_cntr[1][1]-mean_cntr[0])
    cvprotarray.append(var_cntr[0]/mean_cntr[0]**2)
    errv = (var_cntr[1][1]-var_cntr[0])/mean_cntr[0]**2+2*(mean_cntr[1][1]-mean_cntr[0])*var_cntr[0]/mean_cntr[0]**3
    errcv2prot.append(errv)
data['time'] = tarray
data['Mean_prot'] = mnprotarray
data['Error_mean'] = errmnprot
data['prot_CV2'] = cvprotarray
data['Error_CV2'] = errcv2prot
if not os.path.exists('./data/SSA'):
    os.makedirs('./data/SSA')
data.to_csv("./data/SSA/protsim.csv")

fig, ax = plt.subplots(1,2, figsize=(12,4))
ax[0].plot(np.array(tarray)/doubling_time,mnprotarray,c="#3BB000")
ax[0].fill_between(np.array(tarray)/doubling_time,np.array(mnprotarray)-np.array(errmnprot),np.array(mnprotarray)+np.array(errmnprot),
                   alpha=1, edgecolor='#4BE000', facecolor='#4BE000', linewidth=0)
ax[1].plot(np.array(tarray)/doubling_time,cvprotarray,c="#3BB000")
ax[1].fill_between(np.array(tarray)/doubling_time,np.array(cvprotarray)-np.array(errcv2prot),np.array(cvprotarray)+np.array(errcv2prot),
                   alpha=1, edgecolor='#4BE000', facecolor='#4BE000', linewidth=0)
ax[0].set_ylabel("Protein",size=20)
ax[1].set_ylabel("$C_V^2(p)$",size=20)
ax[0].set_xlabel(r"$t/\tau$",size=20)
ax[1].set_xlabel(r"$t/\tau$",size=20)
ax[0].set_ylim([0,1.2*np.max(mnprotarray)])
ax[1].set_ylim([0,1.2*np.max(cvprotarray)])
for l in [0,1]:
    ax[l].set_xlim([0,tmax/doubling_time])
    taqui = np.arange(0,(tmax+1)/doubling_time,step=1)
    ax[l].set_xticks(np.array(taqui))
    ax[l].grid()
    ax[l].tick_params(axis='x', labelsize=15)
    ax[l].tick_params(axis='y', labelsize=15)
    for axis in ['bottom','left']:
        ax[l].spines[axis].set_linewidth(2)
        ax[l].tick_params(axis='both', width=2, length=6)
    for axis in ['top','right']:
        ax[l].spines[axis].set_linewidth(0)
        ax[l].tick_params(axis='both', width=0, length=6)
plt.subplots_adjust(hspace=0.3,wspace=0.5)
taqui = np.arange(0,1.2*np.max(cvprotarray),step=np.round(.2*np.max(cvprotarray),4))
ax[1].set_yticks(np.array(taqui))
if not os.path.exists('./figures'):
    os.makedirs('./figures')
if not os.path.exists('./figures/SSA'):
    os.makedirs('./figures/SSA')
plt.savefig('./figures/SSA/prot_statistics.svg',bbox_inches='tight')
plt.savefig('./figures/SSA/prot_statistics.png',bbox_inches='tight')
```
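The simulation loop above interleaves chemical reactions with the size simulator: draw exponential waiting times, fire the earliest reaction, and resample the clocks. Stripped of the size dynamics and division, the same next-reaction logic reduces to a plain Gillespie birth-death process. Here is a minimal sketch of that core (the function name and rates are illustrative, not part of the `Szsimulator` package):

```python
import numpy as np

def gillespie_rna(kr=10.0, gammar=5.0, n0=0, tmax=50.0, seed=1):
    """Minimal Gillespie SSA: RNA created at rate kr, degraded at rate gammar*n."""
    rng = np.random.default_rng(seed)
    t, n = 0.0, n0
    times, counts = [t], [n]
    while t < tmax:
        rates = np.array([kr, gammar * n])  # propensities of the two reactions
        total = rates.sum()
        t += rng.exponential(1.0 / total)   # exponential waiting time to the next event
        n += 1 if rng.random() < rates[0] / total else -1  # pick which reaction fired
        times.append(t)
        counts.append(n)
    return np.array(times), np.array(counts)

times, counts = gillespie_rna()
# after an initial transient, counts fluctuate around the stationary mean kr/gammar = 2
print(counts[len(counts) // 2:].mean())
```

The full notebook adds two complications on top of this skeleton: reaction propensities that depend on the (growing) cell size, and binomial partitioning of molecules at division.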
github_jupyter
# Text Data in scikit-learn

```
import matplotlib.pyplot as plt
import sklearn
sklearn.set_config(display='diagram')

from pathlib import Path
import tarfile
from urllib import request

data_path = Path("data")
extracted_path = Path("data") / "train"
imdb_path = data_path / "aclImdbmini.tar.gz"

def untar_imdb():
    if extracted_path.exists():
        print("imdb dataset already extracted")
        return
    with tarfile.open(imdb_path, "r") as tar_f:
        tar_f.extractall(data_path)

# This may take some time to run since it extracts the dataset
untar_imdb()
```

## CountVectorizer

```
sample_text = ["Can we go to the hill? I finished my homework.",
               "The hill is very tall. Please be careful"]

from sklearn.feature_extraction.text import CountVectorizer
vect = CountVectorizer()
vect.fit(sample_text)
vect.get_feature_names()

X = vect.transform(sample_text)
X
X.toarray()
```

### Bag of words

```
sample_text
X_inverse = vect.inverse_transform(X)
X_inverse[0]
X_inverse[1]
```

## Loading text data with scikit-learn

```
from sklearn.datasets import load_files

reviews_train = load_files(extracted_path, categories=["neg", "pos"])
raw_text_train, raw_y_train = reviews_train.data, reviews_train.target
raw_text_train = [doc.replace(b"<br />", b" ") for doc in raw_text_train]

import numpy as np
np.unique(raw_y_train)
np.bincount(raw_y_train)
len(raw_text_train)
raw_text_train[5]
```

## Split dataset

```
from sklearn.model_selection import train_test_split

text_train, text_test, y_train, y_test = train_test_split(
    raw_text_train, raw_y_train, stratify=raw_y_train, random_state=0)
```

### Transform training data

```
vect = CountVectorizer()
X_train = vect.fit_transform(text_train)
len(text_train)
X_train
```

### Transform testing set

```
len(text_test)
X_test = vect.transform(text_test)
X_test
```

### Extract feature names

```
feature_names = vect.get_feature_names()
feature_names[10000:10020]
feature_names[::3000]
```

### Linear model for classification

```
from sklearn.linear_model import LogisticRegression

lr = LogisticRegression(solver='liblinear', random_state=42).fit(X_train, y_train)
lr.score(X_test, y_test)

def plot_important_features(coef, feature_names, top_n=20, ax=None, rotation=40):
    if ax is None:
        ax = plt.gca()
    feature_names = np.asarray(feature_names)
    coef = coef.reshape(-1)
    inds = np.argsort(coef)
    low = inds[:top_n]
    high = inds[-top_n:]
    important = np.hstack([low, high])
    myrange = range(len(important))
    colors = ['red'] * top_n + ['blue'] * top_n
    ax.bar(myrange, coef[important], color=colors)
    ax.set_xticks(myrange)
    ax.set_xticklabels(feature_names[important], rotation=rotation, ha="right")
    ax.set_xlim(-.7, 2 * top_n)
    ax.set_frame_on(False)

feature_names = vect.get_feature_names()
fig, ax = plt.subplots(figsize=(15, 6))
plot_important_features(lr.coef_, feature_names, top_n=20, ax=ax)
```

## Exercise 1

1. Train a `sklearn.ensemble.RandomForestClassifier` on the training set, `X_train` and `y_train`.
2. Evaluate the accuracy on the test set.
3. What are the top 20 important features according to the `feature_importances_` of the random forest?

```
# %load solutions/01-ex01-solutions.py
```

## CountVectorizer Options

```
sample_text = ["Can we go to the hill? I finished my homework.",
               "The hill is very tall. Please be careful"]

vect = CountVectorizer()
vect.fit(sample_text)
vect.get_feature_names()
```

### Stop words

```
vect = CountVectorizer(stop_words='english')
vect.fit(sample_text)
vect.get_feature_names()

from sklearn.feature_extraction.text import ENGLISH_STOP_WORDS
print(list(ENGLISH_STOP_WORDS))
```

### Max features

```
vect = CountVectorizer(max_features=4, stop_words='english')
vect.fit(sample_text)
vect.get_feature_names()
```

### Min frequency on the imdb dataset

With `min_df=1` (default)

```
X_train.shape
```

With `min_df=4`

```
vect = CountVectorizer(min_df=4)
X_train_min_df_4 = vect.fit_transform(text_train)
X_train_min_df_4.shape

lr_df_4 = LogisticRegression(solver='liblinear', random_state=42).fit(X_train_min_df_4, y_train)
X_test_min_df_4 = vect.transform(text_test)
```

#### Scores with different min frequencies

```
lr_df_4.score(X_test_min_df_4, y_test)
lr.score(X_test, y_test)
```

## Pipelines and Vectorizers

```
from sklearn.pipeline import Pipeline

log_reg = Pipeline([
    ('vectorizer', CountVectorizer()),
    ('classifier', LogisticRegression(random_state=42, solver='liblinear'))
])
log_reg

text_train[:2]
log_reg.fit(text_train, y_train)
log_reg.score(text_train, y_train)
log_reg.score(text_test, y_test)
```

## Exercise 2

1. Create a pipeline with a `CountVectorizer` with `min_df=5` and `stop_words='english'` and a `RandomForestClassifier`.
2. What is the score of the random forest on the test dataset?
``` # %load solutions/01-ex02-solutions.py ``` ## Bigrams `CountVectorizer` takes a `ngram_range` parameter ``` sample_text cv = CountVectorizer(ngram_range=(1, 1)).fit(sample_text) print("Vocabulary size:", len(cv.vocabulary_)) print("Vocabulary:", cv.get_feature_names()) cv = CountVectorizer(ngram_range=(2, 2)).fit(sample_text) print("Vocabulary size:", len(cv.vocabulary_)) print("Vocabulary:") print(cv.get_feature_names()) cv = CountVectorizer(ngram_range=(1, 2)).fit(sample_text) print("Vocabulary size:", len(cv.vocabulary_)) print("Vocabulary:") print(cv.get_feature_names()) ``` ## n-grams with stop words ``` cv_n_gram = CountVectorizer(ngram_range=(1, 2), min_df=4, stop_words="english") cv_n_gram.fit(text_train) len(cv_n_gram.vocabulary_) print(cv_n_gram.get_feature_names()[::2000]) pipe_cv_n_gram = Pipeline([ ('vectorizer', cv_n_gram), ('classifier', LogisticRegression(random_state=42, solver='liblinear')) ]) pipe_cv_n_gram.fit(text_train, y_train) pipe_cv_n_gram.score(text_test, y_test) feature_names = pipe_cv_n_gram['vectorizer'].get_feature_names() fig, ax = plt.subplots(figsize=(15, 6)) plot_important_features(pipe_cv_n_gram['classifier'].coef_.ravel(), feature_names, top_n=20, ax=ax) ``` ## Tf-idf rescaling ``` sample_text from sklearn.feature_extraction.text import TfidfVectorizer tfidvect = TfidfVectorizer().fit(sample_text) tfid_trans = tfidvect.transform(sample_text) tfid_trans.toarray() ``` ## Train on the imdb dataset ``` log_reg_tfid = Pipeline([ ('vectorizer', TfidfVectorizer(ngram_range=(1, 2), min_df=4, stop_words="english")), ('classifier', LogisticRegression(random_state=42, solver='liblinear')) ]) log_reg_tfid.fit(text_train, y_train) log_reg_tfid.score(text_test, y_test) ``` ## Exercise 3 0. 
Load data from `fetch_20newsgroups`:

```python
from sklearn.datasets import fetch_20newsgroups

categories = [
    'alt.atheism',
    'sci.space',
]
remove = ('headers', 'footers', 'quotes')

data_train = fetch_20newsgroups(subset='train', categories=categories, remove=remove)
data_test = fetch_20newsgroups(subset='test', categories=categories, remove=remove)

X_train, y_train = data_train.data, data_train.target
X_test, y_test = data_test.data, data_test.target
```

1. How many samples are there in the training and test datasets?
1. Construct a pipeline with a `TfidfVectorizer` and `LogisticRegression`.
1. Evaluate the pipeline on the test set.
1. Plot the feature importances using `plot_important_features`.

```
# %load solutions/01-ex03-solutions.py
```
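The rescaling that `TfidfVectorizer` performs can be reproduced by hand. The following is a minimal sketch assuming scikit-learn's defaults (smooth idf and l2 row normalization); the toy count matrix and vocabulary are made up for illustration:

```python
import numpy as np

# toy term counts for two documents over the vocabulary ["hill", "homework", "tall"]
counts = np.array([[1., 1., 0.],
                   [1., 0., 1.]])

n_docs = counts.shape[0]
df = (counts > 0).sum(axis=0)              # document frequency of each term
idf = np.log((1 + n_docs) / (1 + df)) + 1  # smooth idf, matching sklearn's default
tfidf = counts * idf
tfidf /= np.linalg.norm(tfidf, axis=1, keepdims=True)  # l2-normalize each row

print(tfidf.round(3))
```

A term that occurs in every document ("hill") gets the minimum idf of 1, while terms confined to few documents are up-weighted: that is the intuition behind the rescaling.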
```
import tensorflow as tf
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import LabelEncoder
import pandas as pd
import math

gpus = tf.config.experimental.list_physical_devices('GPU')
if gpus:
    try:
        # Currently, memory growth needs to be the same across GPUs
        for gpu in gpus:
            tf.config.experimental.set_memory_growth(gpu, True)
        logical_gpus = tf.config.experimental.list_logical_devices('GPU')
        print(len(gpus), "Physical GPUs,", len(logical_gpus), "Logical GPUs")
    except RuntimeError as e:
        # Memory growth must be set before GPUs have been initialized
        print(e)

def step_decay(epoch):
    initial_lrate = 0.001
    drop = 0.98
    epochs_drop = 50.0
    lrate = initial_lrate * math.pow(drop, math.floor((1+epoch)/epochs_drop))
    return lrate

# Load the data
df = pd.read_csv('../dataset/iris.csv',
                 names = ["sepal_length", "sepal_width", "petal_length", "petal_width", "species"])

# Separate the features from the label
dataset = df.copy()
Y_obj = dataset.pop("species")
X = dataset.copy()

# Convert the class strings to one-hot vectors
Y_encoded = pd.get_dummies(Y_obj)

# Split the full data into training and test sets (test fraction 0.3)
# shuffle=True shuffles the data before splitting
X_train1, X_test, Y_train1, Y_test = train_test_split(X, Y_encoded, test_size=0.3, shuffle=True, stratify=Y_encoded)

## Split the training set into training and validation sets (validation fraction 0.2)
X_train, X_valid, Y_train, Y_valid = train_test_split(X_train1, Y_train1, test_size=0.2, shuffle=True, stratify=Y_train1)

# Define the model
activation = tf.keras.activations.sigmoid
input_Layer = tf.keras.layers.Input(shape=(4,))
x = tf.keras.layers.Dense(16, activation=activation)(input_Layer)
x = tf.keras.layers.Dense(12, activation=activation)(x)
Out_Layer = tf.keras.layers.Dense(3, activation='softmax')(x)
model = tf.keras.models.Model(inputs=[input_Layer], outputs=[Out_Layer])
model.summary()

# Compile the model
model.compile(loss=tf.keras.losses.categorical_crossentropy,
              optimizer=tf.keras.optimizers.SGD(learning_rate=0.001),
              metrics=[tf.keras.metrics.categorical_accuracy])

modelpath = "./best_model/{epoch:02d}-{val_loss:.4f}.h5"
callbacks_list = [tf.keras.callbacks.EarlyStopping(monitor='val_loss', patience=100),
                  tf.keras.callbacks.ModelCheckpoint(filepath=modelpath, monitor='val_loss', verbose=1, save_best_only=True),
                  tf.keras.callbacks.LearningRateScheduler(step_decay, verbose=1)]

## model.fit returns a History object whose `history` attribute is a dictionary
## holding everything recorded during training.
# validation_data=(X_valid, Y_valid) makes the model run validation during training.
result = model.fit(X_train, Y_train, epochs=50, batch_size=50,
                   validation_data=(X_valid, Y_valid), callbacks=callbacks_list)

## history is a dictionary, so inspect keys() to see which quantities were recorded.
print(result.history.keys())

### Extract only the values stored under the 'loss' and 'val_loss' keys
loss = result.history['loss']
val_loss = result.history['val_loss']

### Plot loss and val_loss
epochs = range(1, len(loss) + 1)
plt.subplot(211) ## first of the 2x1 subplots
plt.plot(epochs, loss, 'bo', label='Training loss')
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and validation loss')
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.legend()

### Extract the 'categorical_accuracy' and 'val_categorical_accuracy' values from history
acc = result.history['categorical_accuracy']
val_acc = result.history['val_categorical_accuracy']

### Plot the accuracies
plt.subplot(212) ## second of the 2x1 subplots
plt.plot(epochs, acc, 'ro', label='Training acc')
plt.plot(epochs, val_acc, 'r', label='Validation acc')
plt.title('Training and validation accuracy')
plt.xlabel('Epochs')
plt.ylabel('Accuracy')
plt.legend()

# Check the accuracy on the test data with model.evaluate.
## model.evaluate(X_test, Y_test) returns [loss, categorical_accuracy], because
## metrics=[tf.keras.metrics.categorical_accuracy] was set in model.compile above.
print("\n Test Accuracy: %.4f" % (model.evaluate(X_test, Y_test)[1]))

## Show the plots
plt.show()
```
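The `step_decay` schedule above multiplies the learning rate by `drop` once every `epochs_drop` epochs; because of the `1+epoch` term, the first drop lands at epoch 49 rather than 50. A quick standalone check of the schedule (re-declared here with its constants as defaults so the cell is self-contained):

```python
import math

def step_decay(epoch, initial_lrate=0.001, drop=0.98, epochs_drop=50.0):
    # learning rate used for the given (0-indexed) epoch
    return initial_lrate * math.pow(drop, math.floor((1 + epoch) / epochs_drop))

for epoch in (0, 48, 49, 99, 149):
    print(epoch, step_decay(epoch))
```

With only 50 training epochs configured above, the scheduler therefore applies a single 2% drop, at the final epoch.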
##### Copyright 2019 The TensorFlow Authors. ``` #@title Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. ``` # Transformer model for language Translation <table class="tfo-notebook-buttons" align="left"> <td> <a target="_blank" href="https://www.tensorflow.org/tutorials/text/transformer"> <img src="https://www.tensorflow.org/images/tf_logo_32px.png" /> View on TensorFlow.org</a> </td> <td> <a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs/blob/master/site/en/tutorials/text/transformer.ipynb"> <img src="https://www.tensorflow.org/images/colab_logo_32px.png" /> Run in Google Colab</a> </td> <td> <a target="_blank" href="https://github.com/tensorflow/docs/blob/master/site/en/tutorials/text/transformer.ipynb"> <img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" /> View source on GitHub</a> </td> <td> <a href="https://storage.googleapis.com/tensorflow_docs/docs/site/en/tutorials/text/transformer.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a> </td> </table> This tutorial trains a <a href="https://arxiv.org/abs/1706.03762" class="external">Transformer model</a> to translate Portuguese to English. This is an advanced example that assumes knowledge of [text generation](text_generation.ipynb) and [attention](nmt_with_attention.ipynb). 
The core idea behind the Transformer model is *self-attention*—the ability to attend to different positions of the input sequence to compute a representation of that sequence. Transformer creates stacks of self-attention layers and is explained below in the sections *Scaled dot product attention* and *Multi-head attention*.

A transformer model handles variable-sized input using stacks of self-attention layers instead of [RNNs](text_classification_rnn.ipynb) or [CNNs](../images/intro_to_cnns.ipynb). This general architecture has a number of advantages:

* It makes no assumptions about the temporal/spatial relationships across the data. This is ideal for processing a set of objects (for example, [StarCraft units](https://deepmind.com/blog/alphastar-mastering-real-time-strategy-game-starcraft-ii/#block-8)).
* Layer outputs can be calculated in parallel, instead of in series as in an RNN.
* Distant items can affect each other's output without passing through many RNN steps or convolution layers (see [Scene Memory Transformer](https://arxiv.org/pdf/1903.03878.pdf) for example).
* It can learn long-range dependencies. This is a challenge in many sequence tasks.

The downsides of this architecture are:

* For a time-series, the output for a time-step is calculated from the *entire history* instead of only the inputs and current hidden-state. This _may_ be less efficient.
* If the input *does* have a temporal/spatial relationship, like text, some positional encoding must be added or the model will effectively see a bag of words.

After training the model in this notebook, you will be able to input a Portuguese sentence and return the English translation.
<img src="https://www.tensorflow.org/images/tutorials/transformer/attention_map_portuguese.png" width="800" alt="Attention heatmap">

```
!pip install -q tfds-nightly
import tensorflow_datasets as tfds
import tensorflow as tf

import time
import numpy as np
import matplotlib.pyplot as plt
```

## Setup input pipeline

Use [TFDS](https://www.tensorflow.org/datasets) to load the [Portuguese-English translation dataset](https://github.com/neulab/word-embeddings-for-nmt) from the [TED Talks Open Translation Project](https://www.ted.com/participate/translate).

This dataset contains approximately 50000 training examples, 1100 validation examples, and 2000 test examples.

```
examples, metadata = tfds.load('ted_hrlr_translate/pt_to_en', with_info=True, as_supervised=True)
train_examples, val_examples = examples['train'], examples['validation']
```

Create a custom subwords tokenizer from the training dataset.

```
tokenizer_en = tfds.deprecated.text.SubwordTextEncoder.build_from_corpus(
    (en.numpy() for pt, en in train_examples), target_vocab_size=2**13)

tokenizer_pt = tfds.deprecated.text.SubwordTextEncoder.build_from_corpus(
    (pt.numpy() for pt, en in train_examples), target_vocab_size=2**13)

sample_string = 'Transformer is awesome.'

tokenized_string = tokenizer_en.encode(sample_string)
print ('Tokenized string is {}'.format(tokenized_string))

original_string = tokenizer_en.decode(tokenized_string)
print ('The original string: {}'.format(original_string))

assert original_string == sample_string
```

The tokenizer encodes the string by breaking it into subwords if the word is not in its dictionary.

```
for ts in tokenized_string:
    print ('{} ----> {}'.format(ts, tokenizer_en.decode([ts])))

BUFFER_SIZE = 20000
BATCH_SIZE = 64
```

Add a start and end token to the input and target.
``` def encode(lang1, lang2): lang1 = [tokenizer_pt.vocab_size] + tokenizer_pt.encode( lang1.numpy()) + [tokenizer_pt.vocab_size+1] lang2 = [tokenizer_en.vocab_size] + tokenizer_en.encode( lang2.numpy()) + [tokenizer_en.vocab_size+1] return lang1, lang2 ``` You want to use `Dataset.map` to apply this function to each element of the dataset. `Dataset.map` runs in graph mode. * Graph tensors do not have a value. * In graph mode you can only use TensorFlow Ops and functions. So you can't `.map` this function directly: You need to wrap it in a `tf.py_function`. The `tf.py_function` will pass regular tensors (with a value and a `.numpy()` method to access it), to the wrapped python function. ``` def tf_encode(pt, en): result_pt, result_en = tf.py_function(encode, [pt, en], [tf.int64, tf.int64]) result_pt.set_shape([None]) result_en.set_shape([None]) return result_pt, result_en ``` Note: To keep this example small and relatively fast, drop examples with a length of over 40 tokens. ``` MAX_LENGTH = 40 def filter_max_length(x, y, max_length=MAX_LENGTH): return tf.logical_and(tf.size(x) <= max_length, tf.size(y) <= max_length) train_dataset = train_examples.map(tf_encode) train_dataset = train_dataset.filter(filter_max_length) # cache the dataset to memory to get a speedup while reading from it. train_dataset = train_dataset.cache() train_dataset = train_dataset.shuffle(BUFFER_SIZE).padded_batch(BATCH_SIZE) train_dataset = train_dataset.prefetch(tf.data.experimental.AUTOTUNE) val_dataset = val_examples.map(tf_encode) val_dataset = val_dataset.filter(filter_max_length).padded_batch(BATCH_SIZE) pt_batch, en_batch = next(iter(val_dataset)) pt_batch, en_batch ``` ## Positional encoding Since this model doesn't contain any recurrence or convolution, positional encoding is added to give the model some information about the relative position of the words in the sentence. The positional encoding vector is added to the embedding vector. 
Embeddings represent a token in a d-dimensional space where tokens with similar meaning will be closer to each other. But the embeddings do not encode the relative position of words in a sentence. So after adding the positional encoding, words will be closer to each other based on the *similarity of their meaning and their position in the sentence*, in the d-dimensional space. See the notebook on [positional encoding](https://github.com/tensorflow/examples/blob/master/community/en/position_encoding.ipynb) to learn more about it. The formula for calculating the positional encoding is as follows: $$\Large{PE_{(pos, 2i)} = sin(pos / 10000^{2i / d_{model}})} $$ $$\Large{PE_{(pos, 2i+1)} = cos(pos / 10000^{2i / d_{model}})} $$ ``` def get_angles(pos, i, d_model): angle_rates = 1 / np.power(10000, (2 * (i//2)) / np.float32(d_model)) return pos * angle_rates def positional_encoding(position, d_model): angle_rads = get_angles(np.arange(position)[:, np.newaxis], np.arange(d_model)[np.newaxis, :], d_model) # apply sin to even indices in the array; 2i angle_rads[:, 0::2] = np.sin(angle_rads[:, 0::2]) # apply cos to odd indices in the array; 2i+1 angle_rads[:, 1::2] = np.cos(angle_rads[:, 1::2]) pos_encoding = angle_rads[np.newaxis, ...] return tf.cast(pos_encoding, dtype=tf.float32) pos_encoding = positional_encoding(50, 512) print (pos_encoding.shape) plt.pcolormesh(pos_encoding[0], cmap='RdBu') plt.xlabel('Depth') plt.xlim((0, 512)) plt.ylabel('Position') plt.colorbar() plt.show() ``` ## Masking Mask all the pad tokens in the batch of sequence. It ensures that the model does not treat padding as the input. The mask indicates where pad value `0` is present: it outputs a `1` at those locations, and a `0` otherwise. ``` def create_padding_mask(seq): seq = tf.cast(tf.math.equal(seq, 0), tf.float32) # add extra dimensions to add the padding # to the attention logits. 
return seq[:, tf.newaxis, tf.newaxis, :] # (batch_size, 1, 1, seq_len) x = tf.constant([[7, 6, 0, 0, 1], [1, 2, 3, 0, 0], [0, 0, 0, 4, 5]]) create_padding_mask(x) ``` The look-ahead mask is used to mask the future tokens in a sequence. In other words, the mask indicates which entries should not be used. This means that to predict the third word, only the first and second word will be used. Similarly to predict the fourth word, only the first, second and the third word will be used and so on. ``` def create_look_ahead_mask(size): mask = 1 - tf.linalg.band_part(tf.ones((size, size)), -1, 0) return mask # (seq_len, seq_len) x = tf.random.uniform((1, 3)) temp = create_look_ahead_mask(x.shape[1]) temp ``` ## Scaled dot product attention <img src="https://www.tensorflow.org/images/tutorials/transformer/scaled_attention.png" width="500" alt="scaled_dot_product_attention"> The attention function used by the transformer takes three inputs: Q (query), K (key), V (value). The equation used to calculate the attention weights is: $$\Large{Attention(Q, K, V) = softmax_k(\frac{QK^T}{\sqrt{d_k}}) V} $$ The dot-product attention is scaled by a factor of square root of the depth. This is done because for large values of depth, the dot product grows large in magnitude pushing the softmax function where it has small gradients resulting in a very hard softmax. For example, consider that `Q` and `K` have a mean of 0 and variance of 1. Their matrix multiplication will have a mean of 0 and variance of `dk`. Hence, *square root of `dk`* is used for scaling (and not any other number) because the matmul of `Q` and `K` should have a mean of 0 and variance of 1, and you get a gentler softmax. The mask is multiplied with -1e9 (close to negative infinity). This is done because the mask is summed with the scaled matrix multiplication of Q and K and is applied immediately before a softmax. The goal is to zero out these cells, and large negative inputs to softmax are near zero in the output. 
``` def scaled_dot_product_attention(q, k, v, mask): """Calculate the attention weights. q, k, v must have matching leading dimensions. k, v must have matching penultimate dimension, i.e.: seq_len_k = seq_len_v. The mask has different shapes depending on its type(padding or look ahead) but it must be broadcastable for addition. Args: q: query shape == (..., seq_len_q, depth) k: key shape == (..., seq_len_k, depth) v: value shape == (..., seq_len_v, depth_v) mask: Float tensor with shape broadcastable to (..., seq_len_q, seq_len_k). Defaults to None. Returns: output, attention_weights """ matmul_qk = tf.matmul(q, k, transpose_b=True) # (..., seq_len_q, seq_len_k) # scale matmul_qk dk = tf.cast(tf.shape(k)[-1], tf.float32) scaled_attention_logits = matmul_qk / tf.math.sqrt(dk) # add the mask to the scaled tensor. if mask is not None: scaled_attention_logits += (mask * -1e9) # softmax is normalized on the last axis (seq_len_k) so that the scores # add up to 1. attention_weights = tf.nn.softmax(scaled_attention_logits, axis=-1) # (..., seq_len_q, seq_len_k) output = tf.matmul(attention_weights, v) # (..., seq_len_q, depth_v) return output, attention_weights ``` As the softmax normalization is done on K, its values decide the amount of importance given to Q. The output represents the multiplication of the attention weights and the V (value) vector. This ensures that the words you want to focus on are kept as-is and the irrelevant words are flushed out. ``` def print_out(q, k, v): temp_out, temp_attn = scaled_dot_product_attention( q, k, v, None) print ('Attention weights are:') print (temp_attn) print ('Output is:') print (temp_out) np.set_printoptions(suppress=True) temp_k = tf.constant([[10,0,0], [0,10,0], [0,0,10], [0,0,10]], dtype=tf.float32) # (4, 3) temp_v = tf.constant([[ 1,0], [ 10,0], [ 100,5], [1000,6]], dtype=tf.float32) # (4, 2) # This `query` aligns with the second `key`, # so the second `value` is returned. 
temp_q = tf.constant([[0, 10, 0]], dtype=tf.float32) # (1, 3) print_out(temp_q, temp_k, temp_v) # This query aligns with a repeated key (third and fourth), # so all associated values get averaged. temp_q = tf.constant([[0, 0, 10]], dtype=tf.float32) # (1, 3) print_out(temp_q, temp_k, temp_v) # This query aligns equally with the first and second key, # so their values get averaged. temp_q = tf.constant([[10, 10, 0]], dtype=tf.float32) # (1, 3) print_out(temp_q, temp_k, temp_v) ``` Pass all the queries together. ``` temp_q = tf.constant([[0, 0, 10], [0, 10, 0], [10, 10, 0]], dtype=tf.float32) # (3, 3) print_out(temp_q, temp_k, temp_v) ``` ## Multi-head attention <img src="https://www.tensorflow.org/images/tutorials/transformer/multi_head_attention.png" width="500" alt="multi-head attention"> Multi-head attention consists of four parts: * Linear layers and split into heads. * Scaled dot-product attention. * Concatenation of heads. * Final linear layer. Each multi-head attention block gets three inputs; Q (query), K (key), V (value). These are put through linear (Dense) layers and split up into multiple heads. The `scaled_dot_product_attention` defined above is applied to each head (broadcasted for efficiency). An appropriate mask must be used in the attention step. The attention output for each head is then concatenated (using `tf.transpose`, and `tf.reshape`) and put through a final `Dense` layer. Instead of one single attention head, Q, K, and V are split into multiple heads because it allows the model to jointly attend to information at different positions from different representational spaces. After the split each head has a reduced dimensionality, so the total computation cost is the same as a single head attention with full dimensionality. 
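The split into heads is just a reshape and transpose. A small NumPy sketch (toy shapes chosen for illustration, not from the original notebook) of the bookkeeping:

```python
import numpy as np

batch_size, seq_len, d_model, num_heads = 2, 5, 512, 8
depth = d_model // num_heads  # 64 dimensions per head

x = np.zeros((batch_size, seq_len, d_model))

# (batch_size, seq_len, d_model) -> (batch_size, num_heads, seq_len, depth)
heads = x.reshape(batch_size, seq_len, num_heads, depth).transpose(0, 2, 1, 3)

# The total dimensionality is unchanged: 8 heads x 64 = 512
```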
``` class MultiHeadAttention(tf.keras.layers.Layer): def __init__(self, d_model, num_heads): super(MultiHeadAttention, self).__init__() self.num_heads = num_heads self.d_model = d_model assert d_model % self.num_heads == 0 self.depth = d_model // self.num_heads self.wq = tf.keras.layers.Dense(d_model) self.wk = tf.keras.layers.Dense(d_model) self.wv = tf.keras.layers.Dense(d_model) self.dense = tf.keras.layers.Dense(d_model) def split_heads(self, x, batch_size): """Split the last dimension into (num_heads, depth). Transpose the result such that the shape is (batch_size, num_heads, seq_len, depth) """ x = tf.reshape(x, (batch_size, -1, self.num_heads, self.depth)) return tf.transpose(x, perm=[0, 2, 1, 3]) def call(self, v, k, q, mask): batch_size = tf.shape(q)[0] q = self.wq(q) # (batch_size, seq_len, d_model) k = self.wk(k) # (batch_size, seq_len, d_model) v = self.wv(v) # (batch_size, seq_len, d_model) q = self.split_heads(q, batch_size) # (batch_size, num_heads, seq_len_q, depth) k = self.split_heads(k, batch_size) # (batch_size, num_heads, seq_len_k, depth) v = self.split_heads(v, batch_size) # (batch_size, num_heads, seq_len_v, depth) # scaled_attention.shape == (batch_size, num_heads, seq_len_q, depth) # attention_weights.shape == (batch_size, num_heads, seq_len_q, seq_len_k) scaled_attention, attention_weights = scaled_dot_product_attention( q, k, v, mask) scaled_attention = tf.transpose(scaled_attention, perm=[0, 2, 1, 3]) # (batch_size, seq_len_q, num_heads, depth) concat_attention = tf.reshape(scaled_attention, (batch_size, -1, self.d_model)) # (batch_size, seq_len_q, d_model) output = self.dense(concat_attention) # (batch_size, seq_len_q, d_model) return output, attention_weights ``` Create a `MultiHeadAttention` layer to try out. At each location in the sequence, `y`, the `MultiHeadAttention` runs all 8 attention heads across all other locations in the sequence, returning a new vector of the same length at each location. 
``` temp_mha = MultiHeadAttention(d_model=512, num_heads=8) y = tf.random.uniform((1, 60, 512)) # (batch_size, encoder_sequence, d_model) out, attn = temp_mha(y, k=y, q=y, mask=None) out.shape, attn.shape ``` ## Point wise feed forward network Point wise feed forward network consists of two fully-connected layers with a ReLU activation in between. ``` def point_wise_feed_forward_network(d_model, dff): return tf.keras.Sequential([ tf.keras.layers.Dense(dff, activation='relu'), # (batch_size, seq_len, dff) tf.keras.layers.Dense(d_model) # (batch_size, seq_len, d_model) ]) sample_ffn = point_wise_feed_forward_network(512, 2048) sample_ffn(tf.random.uniform((64, 50, 512))).shape ``` ## Encoder and decoder <img src="https://www.tensorflow.org/images/tutorials/transformer/transformer.png" width="600" alt="transformer"> The transformer model follows the same general pattern as a standard [sequence to sequence with attention model](nmt_with_attention.ipynb). * The input sentence is passed through `N` encoder layers that generates an output for each word/token in the sequence. * The decoder attends on the encoder's output and its own input (self-attention) to predict the next word. ### Encoder layer Each encoder layer consists of sublayers: 1. Multi-head attention (with padding mask) 2. Point wise feed forward networks. Each of these sublayers has a residual connection around it followed by a layer normalization. Residual connections help in avoiding the vanishing gradient problem in deep networks. The output of each sublayer is `LayerNorm(x + Sublayer(x))`. The normalization is done on the `d_model` (last) axis. There are N encoder layers in the transformer. 
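What `LayerNorm(x + Sublayer(x))` computes over the last (`d_model`) axis can be sketched in plain NumPy (this simplified version omits the learned gain and bias that `tf.keras.layers.LayerNormalization` adds):

```python
import numpy as np

def layer_norm(x, eps=1e-6):
    # Normalize over the last (d_model) axis
    mean = x.mean(axis=-1, keepdims=True)
    std = x.std(axis=-1, keepdims=True)
    return (x - mean) / (std + eps)

x = np.array([[1.0, 2.0, 3.0, 4.0]])             # pretend sublayer input
sublayer_out = np.array([[0.5, 0.5, 0.5, 0.5]])  # pretend Sublayer(x)
out = layer_norm(x + sublayer_out)               # residual connection + layer norm
```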
``` class EncoderLayer(tf.keras.layers.Layer): def __init__(self, d_model, num_heads, dff, rate=0.1): super(EncoderLayer, self).__init__() self.mha = MultiHeadAttention(d_model, num_heads) self.ffn = point_wise_feed_forward_network(d_model, dff) self.layernorm1 = tf.keras.layers.LayerNormalization(epsilon=1e-6) self.layernorm2 = tf.keras.layers.LayerNormalization(epsilon=1e-6) self.dropout1 = tf.keras.layers.Dropout(rate) self.dropout2 = tf.keras.layers.Dropout(rate) def call(self, x, training, mask): attn_output, _ = self.mha(x, x, x, mask) # (batch_size, input_seq_len, d_model) attn_output = self.dropout1(attn_output, training=training) out1 = self.layernorm1(x + attn_output) # (batch_size, input_seq_len, d_model) ffn_output = self.ffn(out1) # (batch_size, input_seq_len, d_model) ffn_output = self.dropout2(ffn_output, training=training) out2 = self.layernorm2(out1 + ffn_output) # (batch_size, input_seq_len, d_model) return out2 sample_encoder_layer = EncoderLayer(512, 8, 2048) sample_encoder_layer_output = sample_encoder_layer( tf.random.uniform((64, 43, 512)), False, None) sample_encoder_layer_output.shape # (batch_size, input_seq_len, d_model) ``` ### Decoder layer Each decoder layer consists of sublayers: 1. Masked multi-head attention (with look ahead mask and padding mask) 2. Multi-head attention (with padding mask). V (value) and K (key) receive the *encoder output* as inputs. Q (query) receives the *output from the masked multi-head attention sublayer.* 3. Point wise feed forward networks Each of these sublayers has a residual connection around it followed by a layer normalization. The output of each sublayer is `LayerNorm(x + Sublayer(x))`. The normalization is done on the `d_model` (last) axis. There are N decoder layers in the transformer. As Q receives the output from decoder's first attention block, and K receives the encoder output, the attention weights represent the importance given to the decoder's input based on the encoder's output. 
In other words, the decoder predicts the next word by looking at the encoder output and self-attending to its own output. See the demonstration above in the scaled dot product attention section. ``` class DecoderLayer(tf.keras.layers.Layer): def __init__(self, d_model, num_heads, dff, rate=0.1): super(DecoderLayer, self).__init__() self.mha1 = MultiHeadAttention(d_model, num_heads) self.mha2 = MultiHeadAttention(d_model, num_heads) self.ffn = point_wise_feed_forward_network(d_model, dff) self.layernorm1 = tf.keras.layers.LayerNormalization(epsilon=1e-6) self.layernorm2 = tf.keras.layers.LayerNormalization(epsilon=1e-6) self.layernorm3 = tf.keras.layers.LayerNormalization(epsilon=1e-6) self.dropout1 = tf.keras.layers.Dropout(rate) self.dropout2 = tf.keras.layers.Dropout(rate) self.dropout3 = tf.keras.layers.Dropout(rate) def call(self, x, enc_output, training, look_ahead_mask, padding_mask): # enc_output.shape == (batch_size, input_seq_len, d_model) attn1, attn_weights_block1 = self.mha1(x, x, x, look_ahead_mask) # (batch_size, target_seq_len, d_model) attn1 = self.dropout1(attn1, training=training) out1 = self.layernorm1(attn1 + x) attn2, attn_weights_block2 = self.mha2( enc_output, enc_output, out1, padding_mask) # (batch_size, target_seq_len, d_model) attn2 = self.dropout2(attn2, training=training) out2 = self.layernorm2(attn2 + out1) # (batch_size, target_seq_len, d_model) ffn_output = self.ffn(out2) # (batch_size, target_seq_len, d_model) ffn_output = self.dropout3(ffn_output, training=training) out3 = self.layernorm3(ffn_output + out2) # (batch_size, target_seq_len, d_model) return out3, attn_weights_block1, attn_weights_block2 sample_decoder_layer = DecoderLayer(512, 8, 2048) sample_decoder_layer_output, _, _ = sample_decoder_layer( tf.random.uniform((64, 50, 512)), sample_encoder_layer_output, False, None, None) sample_decoder_layer_output.shape # (batch_size, target_seq_len, d_model) ``` ### Encoder The `Encoder` consists of: 1. Input Embedding 2. 
Positional Encoding 3. N encoder layers The input is put through an embedding which is summed with the positional encoding. The output of this summation is the input to the encoder layers. The output of the encoder is the input to the decoder. ``` class Encoder(tf.keras.layers.Layer): def __init__(self, num_layers, d_model, num_heads, dff, input_vocab_size, maximum_position_encoding, rate=0.1): super(Encoder, self).__init__() self.d_model = d_model self.num_layers = num_layers self.embedding = tf.keras.layers.Embedding(input_vocab_size, d_model) self.pos_encoding = positional_encoding(maximum_position_encoding, self.d_model) self.enc_layers = [EncoderLayer(d_model, num_heads, dff, rate) for _ in range(num_layers)] self.dropout = tf.keras.layers.Dropout(rate) def call(self, x, training, mask): seq_len = tf.shape(x)[1] # adding embedding and position encoding. x = self.embedding(x) # (batch_size, input_seq_len, d_model) x *= tf.math.sqrt(tf.cast(self.d_model, tf.float32)) x += self.pos_encoding[:, :seq_len, :] x = self.dropout(x, training=training) for i in range(self.num_layers): x = self.enc_layers[i](x, training, mask) return x # (batch_size, input_seq_len, d_model) sample_encoder = Encoder(num_layers=2, d_model=512, num_heads=8, dff=2048, input_vocab_size=8500, maximum_position_encoding=10000) temp_input = tf.random.uniform((64, 62), dtype=tf.int64, minval=0, maxval=200) sample_encoder_output = sample_encoder(temp_input, training=False, mask=None) print (sample_encoder_output.shape) # (batch_size, input_seq_len, d_model) ``` ### Decoder The `Decoder` consists of: 1. Output Embedding 2. Positional Encoding 3. N decoder layers The target is put through an embedding which is summed with the positional encoding. The output of this summation is the input to the decoder layers. The output of the decoder is the input to the final linear layer. 
``` class Decoder(tf.keras.layers.Layer): def __init__(self, num_layers, d_model, num_heads, dff, target_vocab_size, maximum_position_encoding, rate=0.1): super(Decoder, self).__init__() self.d_model = d_model self.num_layers = num_layers self.embedding = tf.keras.layers.Embedding(target_vocab_size, d_model) self.pos_encoding = positional_encoding(maximum_position_encoding, d_model) self.dec_layers = [DecoderLayer(d_model, num_heads, dff, rate) for _ in range(num_layers)] self.dropout = tf.keras.layers.Dropout(rate) def call(self, x, enc_output, training, look_ahead_mask, padding_mask): seq_len = tf.shape(x)[1] attention_weights = {} x = self.embedding(x) # (batch_size, target_seq_len, d_model) x *= tf.math.sqrt(tf.cast(self.d_model, tf.float32)) x += self.pos_encoding[:, :seq_len, :] x = self.dropout(x, training=training) for i in range(self.num_layers): x, block1, block2 = self.dec_layers[i](x, enc_output, training, look_ahead_mask, padding_mask) attention_weights['decoder_layer{}_block1'.format(i+1)] = block1 attention_weights['decoder_layer{}_block2'.format(i+1)] = block2 # x.shape == (batch_size, target_seq_len, d_model) return x, attention_weights sample_decoder = Decoder(num_layers=2, d_model=512, num_heads=8, dff=2048, target_vocab_size=8000, maximum_position_encoding=5000) temp_input = tf.random.uniform((64, 26), dtype=tf.int64, minval=0, maxval=200) output, attn = sample_decoder(temp_input, enc_output=sample_encoder_output, training=False, look_ahead_mask=None, padding_mask=None) output.shape, attn['decoder_layer2_block2'].shape ``` ## Create the Transformer Transformer consists of the encoder, decoder and a final linear layer. The output of the decoder is the input to the linear layer and its output is returned. 
```
class Transformer(tf.keras.Model):
  def __init__(self, num_layers, d_model, num_heads, dff, input_vocab_size,
               target_vocab_size, pe_input, pe_target, rate=0.1):
    super(Transformer, self).__init__()

    self.encoder = Encoder(num_layers, d_model, num_heads, dff,
                           input_vocab_size, pe_input, rate)

    self.decoder = Decoder(num_layers, d_model, num_heads, dff,
                           target_vocab_size, pe_target, rate)

    self.final_layer = tf.keras.layers.Dense(target_vocab_size)

  def call(self, inp, tar, training, enc_padding_mask,
           look_ahead_mask, dec_padding_mask):

    enc_output = self.encoder(inp, training, enc_padding_mask)  # (batch_size, inp_seq_len, d_model)

    # dec_output.shape == (batch_size, tar_seq_len, d_model)
    dec_output, attention_weights = self.decoder(
        tar, enc_output, training, look_ahead_mask, dec_padding_mask)

    final_output = self.final_layer(dec_output)  # (batch_size, tar_seq_len, target_vocab_size)

    return final_output, attention_weights

sample_transformer = Transformer(
    num_layers=2, d_model=512, num_heads=8, dff=2048,
    input_vocab_size=8500, target_vocab_size=8000,
    pe_input=10000, pe_target=6000)

temp_input = tf.random.uniform((64, 38), dtype=tf.int64, minval=0, maxval=200)
temp_target = tf.random.uniform((64, 36), dtype=tf.int64, minval=0, maxval=200)

fn_out, _ = sample_transformer(temp_input, temp_target, training=False,
                               enc_padding_mask=None,
                               look_ahead_mask=None,
                               dec_padding_mask=None)

fn_out.shape  # (batch_size, tar_seq_len, target_vocab_size)
```

## Set hyperparameters

To keep this example small and relatively fast, the values for *num_layers, d_model, and dff* have been reduced.

The values used in the base model of the transformer were: *num_layers=6*, *d_model=512*, *dff=2048*. See the [paper](https://arxiv.org/abs/1706.03762) for all the other versions of the transformer.

Note: By changing the values below, you can get the model that achieved state-of-the-art results on many tasks.
``` num_layers = 4 d_model = 128 dff = 512 num_heads = 8 input_vocab_size = tokenizer_pt.vocab_size + 2 target_vocab_size = tokenizer_en.vocab_size + 2 dropout_rate = 0.1 ``` ## Optimizer Use the Adam optimizer with a custom learning rate scheduler according to the formula in the [paper](https://arxiv.org/abs/1706.03762). $$\Large{lrate = d_{model}^{-0.5} * min(step{\_}num^{-0.5}, step{\_}num * warmup{\_}steps^{-1.5})}$$ ``` class CustomSchedule(tf.keras.optimizers.schedules.LearningRateSchedule): def __init__(self, d_model, warmup_steps=4000): super(CustomSchedule, self).__init__() self.d_model = d_model self.d_model = tf.cast(self.d_model, tf.float32) self.warmup_steps = warmup_steps def __call__(self, step): arg1 = tf.math.rsqrt(step) arg2 = step * (self.warmup_steps ** -1.5) return tf.math.rsqrt(self.d_model) * tf.math.minimum(arg1, arg2) learning_rate = CustomSchedule(d_model) optimizer = tf.keras.optimizers.Adam(learning_rate, beta_1=0.9, beta_2=0.98, epsilon=1e-9) temp_learning_rate_schedule = CustomSchedule(d_model) plt.plot(temp_learning_rate_schedule(tf.range(40000, dtype=tf.float32))) plt.ylabel("Learning Rate") plt.xlabel("Train Step") ``` ## Loss and metrics Since the target sequences are padded, it is important to apply a padding mask when calculating the loss. 
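A toy NumPy illustration of why the mask matters (the per-token losses here are made up for the example): padded positions contribute nothing to the averaged loss.

```python
import numpy as np

real = np.array([5, 7, 0, 0])                    # token ids; 0 is the pad id
per_token_loss = np.array([2.0, 1.0, 3.0, 3.0])  # pretend cross-entropy values

mask = (real != 0).astype(np.float32)
masked_loss = (per_token_loss * mask).sum() / mask.sum()
# Only the two real tokens are averaged: (2.0 + 1.0) / 2
```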
``` loss_object = tf.keras.losses.SparseCategoricalCrossentropy( from_logits=True, reduction='none') def loss_function(real, pred): mask = tf.math.logical_not(tf.math.equal(real, 0)) loss_ = loss_object(real, pred) mask = tf.cast(mask, dtype=loss_.dtype) loss_ *= mask return tf.reduce_sum(loss_)/tf.reduce_sum(mask) train_loss = tf.keras.metrics.Mean(name='train_loss') train_accuracy = tf.keras.metrics.SparseCategoricalAccuracy( name='train_accuracy') ``` ## Training and checkpointing ``` transformer = Transformer(num_layers, d_model, num_heads, dff, input_vocab_size, target_vocab_size, pe_input=input_vocab_size, pe_target=target_vocab_size, rate=dropout_rate) def create_masks(inp, tar): # Encoder padding mask enc_padding_mask = create_padding_mask(inp) # Used in the 2nd attention block in the decoder. # This padding mask is used to mask the encoder outputs. dec_padding_mask = create_padding_mask(inp) # Used in the 1st attention block in the decoder. # It is used to pad and mask future tokens in the input received by # the decoder. look_ahead_mask = create_look_ahead_mask(tf.shape(tar)[1]) dec_target_padding_mask = create_padding_mask(tar) combined_mask = tf.maximum(dec_target_padding_mask, look_ahead_mask) return enc_padding_mask, combined_mask, dec_padding_mask ``` Create the checkpoint path and the checkpoint manager. This will be used to save checkpoints every `n` epochs. ``` checkpoint_path = "./checkpoints/train" ckpt = tf.train.Checkpoint(transformer=transformer, optimizer=optimizer) ckpt_manager = tf.train.CheckpointManager(ckpt, checkpoint_path, max_to_keep=5) # if a checkpoint exists, restore the latest checkpoint. if ckpt_manager.latest_checkpoint: ckpt.restore(ckpt_manager.latest_checkpoint) print ('Latest checkpoint restored!!') ``` The target is divided into tar_inp and tar_real. tar_inp is passed as an input to the decoder. 
`tar_real` is that same input shifted by 1: at each location in `tar_inp`, `tar_real` contains the next token that should be predicted.

For example, `sentence` = "SOS A lion in the jungle is sleeping EOS"

`tar_inp` = "SOS A lion in the jungle is sleeping"

`tar_real` = "A lion in the jungle is sleeping EOS"

The transformer is an auto-regressive model: it makes predictions one part at a time, and uses its output so far to decide what to do next.

During training this example uses teacher forcing (like in the [text generation tutorial](./text_generation.ipynb)). Teacher forcing is passing the true output to the next time step regardless of what the model predicts at the current time step.

As the transformer predicts each word, *self-attention* allows it to look at the previous words in the input sequence to better predict the next word.

To prevent the model from peeking at the expected output, the model uses a look-ahead mask.

```
EPOCHS = 20

# The @tf.function trace-compiles train_step into a TF graph for faster
# execution. The function specializes to the precise shape of the argument
# tensors. To avoid re-tracing due to the variable sequence lengths or variable
# batch sizes (the last batch is smaller), use input_signature to specify
# more generic shapes.
train_step_signature = [ tf.TensorSpec(shape=(None, None), dtype=tf.int64), tf.TensorSpec(shape=(None, None), dtype=tf.int64), ] @tf.function(input_signature=train_step_signature) def train_step(inp, tar): tar_inp = tar[:, :-1] tar_real = tar[:, 1:] enc_padding_mask, combined_mask, dec_padding_mask = create_masks(inp, tar_inp) with tf.GradientTape() as tape: predictions, _ = transformer(inp, tar_inp, True, enc_padding_mask, combined_mask, dec_padding_mask) loss = loss_function(tar_real, predictions) gradients = tape.gradient(loss, transformer.trainable_variables) optimizer.apply_gradients(zip(gradients, transformer.trainable_variables)) train_loss(loss) train_accuracy(tar_real, predictions) ``` Portuguese is used as the input language and English is the target language. ``` for epoch in range(EPOCHS): start = time.time() train_loss.reset_states() train_accuracy.reset_states() # inp -> portuguese, tar -> english for (batch, (inp, tar)) in enumerate(train_dataset): train_step(inp, tar) if batch % 50 == 0: print ('Epoch {} Batch {} Loss {:.4f} Accuracy {:.4f}'.format( epoch + 1, batch, train_loss.result(), train_accuracy.result())) if (epoch + 1) % 5 == 0: ckpt_save_path = ckpt_manager.save() print ('Saving checkpoint for epoch {} at {}'.format(epoch+1, ckpt_save_path)) print ('Epoch {} Loss {:.4f} Accuracy {:.4f}'.format(epoch + 1, train_loss.result(), train_accuracy.result())) print ('Time taken for 1 epoch: {} secs\n'.format(time.time() - start)) ``` ## Evaluate The following steps are used for evaluation: * Encode the input sentence using the Portuguese tokenizer (`tokenizer_pt`). Moreover, add the start and end token so the input is equivalent to what the model is trained with. This is the encoder input. * The decoder input is the `start token == tokenizer_en.vocab_size`. * Calculate the padding masks and the look ahead masks. * The `decoder` then outputs the predictions by looking at the `encoder output` and its own output (self-attention). 
* Select the last word and calculate the argmax of that.
* Concatenate the predicted word to the decoder input and pass it to the decoder.
* In this approach, the decoder predicts the next word based on the previous words it predicted.

Note: The model used here has less capacity (to keep the example relatively fast), so the predictions may be less accurate. To reproduce the results in the paper, use the entire dataset and the base transformer model or transformer XL, by changing the hyperparameters above.

```
def evaluate(inp_sentence):
  start_token = [tokenizer_pt.vocab_size]
  end_token = [tokenizer_pt.vocab_size + 1]

  # inp sentence is portuguese, hence adding the start and end token
  inp_sentence = start_token + tokenizer_pt.encode(inp_sentence) + end_token
  encoder_input = tf.expand_dims(inp_sentence, 0)

  # as the target is english, the first word to the transformer should be the
  # english start token.
  decoder_input = [tokenizer_en.vocab_size]
  output = tf.expand_dims(decoder_input, 0)

  for i in range(MAX_LENGTH):
    enc_padding_mask, combined_mask, dec_padding_mask = create_masks(
        encoder_input, output)

    # predictions.shape == (batch_size, seq_len, vocab_size)
    predictions, attention_weights = transformer(encoder_input,
                                                 output,
                                                 False,
                                                 enc_padding_mask,
                                                 combined_mask,
                                                 dec_padding_mask)

    # select the last word from the seq_len dimension
    predictions = predictions[:, -1:, :]  # (batch_size, 1, vocab_size)

    predicted_id = tf.cast(tf.argmax(predictions, axis=-1), tf.int32)

    # return the result if the predicted_id is equal to the end token
    if predicted_id == tokenizer_en.vocab_size + 1:
      return tf.squeeze(output, axis=0), attention_weights

    # concatenate the predicted_id to the output, which is given to the decoder
    # as its input.
output = tf.concat([output, predicted_id], axis=-1) return tf.squeeze(output, axis=0), attention_weights def plot_attention_weights(attention, sentence, result, layer): fig = plt.figure(figsize=(16, 8)) sentence = tokenizer_pt.encode(sentence) attention = tf.squeeze(attention[layer], axis=0) for head in range(attention.shape[0]): ax = fig.add_subplot(2, 4, head+1) # plot the attention weights ax.matshow(attention[head][:-1, :], cmap='viridis') fontdict = {'fontsize': 10} ax.set_xticks(range(len(sentence)+2)) ax.set_yticks(range(len(result))) ax.set_ylim(len(result)-1.5, -0.5) ax.set_xticklabels( ['<start>']+[tokenizer_pt.decode([i]) for i in sentence]+['<end>'], fontdict=fontdict, rotation=90) ax.set_yticklabels([tokenizer_en.decode([i]) for i in result if i < tokenizer_en.vocab_size], fontdict=fontdict) ax.set_xlabel('Head {}'.format(head+1)) plt.tight_layout() plt.show() def translate(sentence, plot=''): result, attention_weights = evaluate(sentence) predicted_sentence = tokenizer_en.decode([i for i in result if i < tokenizer_en.vocab_size]) print('Input: {}'.format(sentence)) print('Predicted translation: {}'.format(predicted_sentence)) if plot: plot_attention_weights(attention_weights, sentence, result, plot) translate("este é um problema que temos que resolver.") print ("Real translation: this is a problem we have to solve .") translate("os meus vizinhos ouviram sobre esta ideia.") print ("Real translation: and my neighboring homes heard about this idea .") translate("vou então muito rapidamente partilhar convosco algumas histórias de algumas coisas mágicas que aconteceram.") print ("Real translation: so i 'll just share with you some stories very quickly of some magical things that have happened .") ``` You can pass different layers and attention blocks of the decoder to the `plot` parameter. 
```
translate("este é o primeiro livro que eu fiz.", plot='decoder_layer4_block2')
print ("Real translation: this is the first book i've ever done.")
```

## Summary

In this tutorial, you learned about positional encoding, multi-head attention, the importance of masking, and how to create a transformer.

Try using a different dataset to train the transformer. You can also create the base transformer or transformer XL by changing the hyperparameters above. You can also use the layers defined here to create [BERT](https://arxiv.org/abs/1810.04805) and train state-of-the-art models. Furthermore, you can implement beam search to get better predictions.
# DeepLab Demo

This demo demonstrates the steps to run the DeepLab semantic segmentation model on sample input images.

```
#@title Imports

import os
from io import BytesIO
import tarfile
import tempfile
from six.moves import urllib

from matplotlib import gridspec
from matplotlib import pyplot as plt
import numpy as np
from PIL import Image

import tensorflow as tf

#@title Helper methods

class DeepLabModel(object):
  """Class to load deeplab model and run inference."""

  INPUT_TENSOR_NAME = 'ImageTensor:0'
  OUTPUT_TENSOR_NAME = 'SemanticPredictions:0'
  INPUT_SIZE = 513
  FROZEN_GRAPH_NAME = 'frozen_inference_graph'

  def __init__(self, tarball_path):
    """Creates and loads pretrained deeplab model."""
    self.graph = tf.Graph()

    graph_def = None
    # Extract frozen graph from tar archive.
    tar_file = tarfile.open(tarball_path)
    for tar_info in tar_file.getmembers():
      if self.FROZEN_GRAPH_NAME in os.path.basename(tar_info.name):
        file_handle = tar_file.extractfile(tar_info)
        graph_def = tf.GraphDef.FromString(file_handle.read())
        break

    tar_file.close()

    if graph_def is None:
      raise RuntimeError('Cannot find inference graph in tar archive.')

    with self.graph.as_default():
      tf.import_graph_def(graph_def, name='')

    self.sess = tf.Session(graph=self.graph)

  def run(self, image):
    """Runs inference on a single image.

    Args:
      image: A PIL.Image object, raw input image.

    Returns:
      resized_image: RGB image resized from original input image.
      seg_map: Segmentation map of `resized_image`.
    """
    width, height = image.size
    resize_ratio = 1.0 * self.INPUT_SIZE / max(width, height)
    target_size = (int(resize_ratio * width), int(resize_ratio * height))
    resized_image = image.convert('RGB').resize(target_size, Image.ANTIALIAS)
    batch_seg_map = self.sess.run(
        self.OUTPUT_TENSOR_NAME,
        feed_dict={self.INPUT_TENSOR_NAME: [np.asarray(resized_image)]})
    seg_map = batch_seg_map[0]
    return resized_image, seg_map


def create_pascal_label_colormap():
  """Creates a label colormap used in PASCAL VOC segmentation benchmark.
Returns: A Colormap for visualizing segmentation results. """ colormap = np.zeros((256, 3), dtype=int) ind = np.arange(256, dtype=int) for shift in reversed(range(8)): for channel in range(3): colormap[:, channel] |= ((ind >> channel) & 1) << shift ind >>= 3 return colormap def label_to_color_image(label): """Adds color defined by the dataset colormap to the label. Args: label: A 2D array with integer type, storing the segmentation label. Returns: result: A 2D array with floating type. The element of the array is the color indexed by the corresponding element in the input label to the PASCAL color map. Raises: ValueError: If label is not of rank 2 or its value is larger than color map maximum entry. """ if label.ndim != 2: raise ValueError('Expect 2-D input label') colormap = create_pascal_label_colormap() if np.max(label) >= len(colormap): raise ValueError('label value too large.') return colormap[label] def vis_segmentation(image, seg_map): """Visualizes input image, segmentation map and overlay view.""" plt.figure(figsize=(15, 5)) grid_spec = gridspec.GridSpec(1, 4, width_ratios=[6, 6, 6, 1]) plt.subplot(grid_spec[0]) plt.imshow(image) plt.axis('off') plt.title('input image') plt.subplot(grid_spec[1]) seg_image = label_to_color_image(seg_map).astype(np.uint8) plt.imshow(seg_image) plt.axis('off') plt.title('segmentation map') plt.subplot(grid_spec[2]) plt.imshow(image) plt.imshow(seg_image, alpha=0.7) plt.axis('off') plt.title('segmentation overlay') unique_labels = np.unique(seg_map) ax = plt.subplot(grid_spec[3]) plt.imshow( FULL_COLOR_MAP[unique_labels].astype(np.uint8), interpolation='nearest') ax.yaxis.tick_right() plt.yticks(range(len(unique_labels)), LABEL_NAMES[unique_labels]) plt.xticks([], []) ax.tick_params(width=0.0) plt.grid('off') plt.show() LABEL_NAMES = np.asarray([ 'background', 'aeroplane', 'bicycle', 'bird', 'boat', 'bottle', 'bus', 'car', 'cat', 'chair', 'cow', 'diningtable', 'dog', 'horse', 'motorbike', 'person', 'pottedplant', 'sheep', 
    'sofa', 'train', 'tv'
])

FULL_LABEL_MAP = np.arange(len(LABEL_NAMES)).reshape(len(LABEL_NAMES), 1)
FULL_COLOR_MAP = label_to_color_image(FULL_LABEL_MAP)

#@title Select and download models {display-mode: "form"}

MODEL_NAME = 'mobilenetv2_coco_voctrainaug'  # @param ['mobilenetv2_coco_voctrainaug', 'mobilenetv2_coco_voctrainval', 'xception_coco_voctrainaug', 'xception_coco_voctrainval']

_DOWNLOAD_URL_PREFIX = 'http://download.tensorflow.org/models/'
_MODEL_URLS = {
    'mobilenetv2_coco_voctrainaug': 'deeplabv3_mnv2_pascal_train_aug_2018_01_29.tar.gz',
    'mobilenetv2_coco_voctrainval': 'deeplabv3_mnv2_pascal_trainval_2018_01_29.tar.gz',
    'xception_coco_voctrainaug': 'deeplabv3_pascal_train_aug_2018_01_04.tar.gz',
    'xception_coco_voctrainval': 'deeplabv3_pascal_trainval_2018_01_04.tar.gz',
}
_TARBALL_NAME = 'deeplab_model.tar.gz'

model_dir = tempfile.mkdtemp()
tf.gfile.MakeDirs(model_dir)

download_path = os.path.join(model_dir, _TARBALL_NAME)
print('downloading model, this might take a while...')
urllib.request.urlretrieve(_DOWNLOAD_URL_PREFIX + _MODEL_URLS[MODEL_NAME],
                           download_path)
print('download completed! loading DeepLab model...')

MODEL = DeepLabModel(download_path)
print('model loaded successfully!')
```

## Run on sample images

Select one of the sample images (leave `IMAGE_URL` empty) or feed any internet image URL for inference.

Note that we are using single-scale inference in the demo for fast computation, so the results may slightly differ from the visualizations in the [README](https://github.com/tensorflow/models/blob/master/research/deeplab/README.md), which uses multi-scale and left-right flipped inputs.
```
#@title Run on sample images {display-mode: "form"}

SAMPLE_IMAGE = 'image1'  # @param ['image1', 'image2', 'image3']
IMAGE_URL = ''  #@param {type:"string"}

_SAMPLE_URL = ('https://github.com/tensorflow/models/blob/master/research/'
               'deeplab/g3doc/img/%s.jpg?raw=true')


def run_visualization(url):
  """Runs DeepLab on the image fetched from the URL and visualizes the result."""
  try:
    f = urllib.request.urlopen(url)
    jpeg_str = f.read()
    original_im = Image.open(BytesIO(jpeg_str))
  except IOError:
    print('Cannot retrieve image. Please check url: ' + url)
    return

  print('running deeplab on image %s...' % url)
  resized_im, seg_map = MODEL.run(original_im)

  vis_segmentation(resized_im, seg_map)


image_url = IMAGE_URL or _SAMPLE_URL % SAMPLE_IMAGE
run_visualization(image_url)
```
## Import libraries
```
import pandas as pd
import numpy as np
from sklearn import preprocessing
from sklearn.preprocessing import MinMaxScaler, StandardScaler
from sklearn.model_selection import ShuffleSplit
from sklearn.metrics import mean_absolute_error, r2_score
from sklearn.ensemble import RandomForestRegressor
import scipy.stats
```
## Read data
```
train_df = pd.read_csv('data.csv')  # read original train data
```
## Functions
```
def drop_columns(df, column_names):
    """
    df: input dataframe
    column_names: list of column names
    return: dataframe with dropped columns
    """
    new_df = df.copy(deep=True)
    new_df.drop(column_names, axis=1, inplace=True)
    return new_df
```
# Preprocessing

### 1- First, explore the data generally
```
train_df.head(-5)
```
#### We have 20 features and 616656 samples
```
train_df.shape
```
### Check the number of unique values per feature
```
for col in train_df:
    print("column:{}".format(str(col)) + " ---------> " + str(len(train_df[col].unique())))
```
## Features with NaN values
```
train_df.columns[train_df.isna().any()].tolist()
```
## 2- Primary feature selection

### Episode and Name of show are the same feature!
We can therefore keep just one of them.
```
# check equality of the two columns
assert sum(train_df['Episode'] == train_df['Name of show'][:]) == train_df.shape[0], "Columns are not the same"
```
### The difference between End_time and Start_time is highly correlated with Length (Pearson correlation coefficient of about 99.5%). We also have NaN values in Start_time and End_time, so we can remove both features from the data
```
df = train_df.copy()
df = df[df['Start_time'].notna()]
df['Start_time'] = pd.to_datetime(df['Start_time'])
df['End_time'] = pd.to_datetime(df['End_time'])
df['Time_diff'] = (df['End_time'] - df['Start_time'])
df['Time_diff'] = df['Time_diff'].dt.seconds / 3600
print(scipy.stats.pearsonr(df["Time_diff"], df["Length"]))
del df
```
### Convert the Date feature to Month and Day
```
train_df['Date'] = pd.to_datetime(train_df['Date'])
train_df['Month'] = train_df.Date.dt.month
train_df['Day'] = train_df.Date.dt.day
```
### 36% of the "Name of episode" feature is NaN and we cannot sensibly interpolate or impute it, so we drop it!
```
100 * train_df[(train_df['Name of episode'].isnull())].shape[0] / len(train_df)
```
### Conclusion: these features should be dropped
```
column_names = ['Unnamed: 0', 'Date', 'Start_time', 'End_time', 'Name of show', 'Name of episode']
```
### We have NaN values in the "Temperature in Montreal during episode" feature. We use linear interpolation to fill them
```
train_df['Temperature in Montreal during episode'].interpolate(inplace=True)
```
## Label encoding with a simple label encoder
```
temp_train_df = drop_columns(train_df, column_names)
# temp_test_df = drop_columns(test_df, column_names)

train_target_df = temp_train_df['Market Share_total']
train_df = temp_train_df.copy(deep=True)
train_df.drop(['Market Share_total'], axis=1, inplace=True)

# test_target_df = new_test_df['Market Share_total']
# test_df = new_test_df.copy(deep=True)
# test_df.drop(['Market Share_total'], axis=1, inplace=True)

le = preprocessing.LabelEncoder()
for item in train_df.loc[:, ~train_df.columns.isin(['Temperature in Montreal during episode', 'Year', 'Length', 'Month', 'Day'])]:
    train_df[item] = le.fit_transform(train_df[item]) + 1
```
### Normalize our data
```
scaler = StandardScaler()
Normalized_train_arr = scaler.fit_transform(train_df)
# Note: the same scaler object is refit here on the target, so later calls to
# scaler.inverse_transform recover the target scale (not the feature scale)
Normalized_train_target_arr = scaler.fit_transform(train_target_df.values.reshape(-1, 1))
```
## Use ShuffleSplit to split the data: 70% for training and 30% for validation

## Choose a RandomForest regressor with 12 estimators for our data

## Metrics are R square and MAE
```
ss = ShuffleSplit(n_splits=5, test_size=0.3, random_state=0)
preds = []
reals = []
r2_scores_list = []
mae_list = []
pcc = []
spc = []
for train_index, val_index in ss.split(Normalized_train_arr):
    train_X = Normalized_train_arr[train_index]
    train_Y = Normalized_train_target_arr[train_index]
    validation_X = Normalized_train_arr[val_index]
    validation_y = Normalized_train_target_arr[val_index]

    regr = RandomForestRegressor(n_estimators=12, random_state=0, n_jobs=-1)
    regr.fit(train_X, train_Y.ravel())  # ravel to pass a 1-D target
    pred_y = regr.predict(validation_X)

    # Model metrics calculation
    r2_scores_list.append(regr.score(validation_X, validation_y))
    mae_list.append(mean_absolute_error(scaler.inverse_transform(validation_y),
                                        scaler.inverse_transform(pred_y.reshape(-1, 1))))

    # Pearson and Spearman correlation coefficient calculations
    pcc.append(scipy.stats.pearsonr(pred_y, validation_y.ravel())[0])
    spc.append(scipy.stats.spearmanr(pred_y, validation_y.ravel())[0])
```
### Calculate mean values of the model and statistical metrics
```
print("R Square mean value: ", str(np.mean(r2_scores_list)))
print("MAE mean value: ", str(np.mean(mae_list)))
print("Pearson Correlation Coefficient: ", str(np.mean(pcc)))
print("Spearman Correlation Coefficient: ", str(np.mean(spc)))
```
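Because a single `StandardScaler` instance is refit whenever `fit_transform` is called, the notebook's `inverse_transform` calls only work because the scaler was last fit on the target. A less fragile pattern keeps one scaler per array; a minimal sketch with made-up data (not the notebook's variables):

```python
import numpy as np
from sklearn.preprocessing import StandardScaler

# Toy data standing in for the notebook's features and target
X = np.array([[1.0, 10.0], [2.0, 20.0], [3.0, 30.0]])
y = np.array([[100.0], [200.0], [300.0]])

# One scaler per array: each remembers its own mean and scale
x_scaler = StandardScaler()
y_scaler = StandardScaler()
X_norm = x_scaler.fit_transform(X)
y_norm = y_scaler.fit_transform(y)

# inverse_transform now unambiguously recovers the original target scale
y_back = y_scaler.inverse_transform(y_norm)
print(np.allclose(y_back, y))  # True
```

With two scalers, predictions can be mapped back to market-share units via `y_scaler.inverse_transform` regardless of what was fit last.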
# Many files

**Content:** Bulk processing of scraped time series

**Required skills:** Exploring data, time and date basics

**Learning goals:**

- Pandas in combination with scraping
- Opening and combining many files (glob)
- Restructuring dataframes (pivot)
- Plotting level 4 (small multiples)

## The example

In this notebook we are interested in crypto coins. The website https://coinmarketcap.com/ lists market data on the hundred most important coins. With a simple scraper we will obtain this data and analyze it in a rudimentary way.

The path to the project folder is `dataprojects/Krypto/`

## Preparation
```
import requests
from bs4 import BeautifulSoup
import pandas as pd
import numpy as np
import re
import glob
%matplotlib inline
```
## Scraper
```
path = 'dataprojects/Krypto/'
```
### List of all cryptocurrencies

First we look on the page to see which are the 100 largest cryptocurrencies, and download their names and links.
```
base_url = 'https://coinmarketcap.com/'
response = requests.get(base_url)
doc = BeautifulSoup(response.text, "html.parser")

currencies = doc.find_all('a', class_='currency-name-container link-secondary')
currencies[0]
len(currencies)

currency_list = []
for currency in currencies:
    this_currency = {}
    this_currency['name'] = currency.text
    this_currency['link'] = currency['href']
    currency_list.append(this_currency)

df_currencies = pd.DataFrame(currency_list)
df_currencies.head(2)
df_currencies['link'] = df_currencies['link'].str.extract('/currencies/(.+)/')
df_currencies.head(2)
df_currencies.to_csv(path + 'currencies.csv', index=False)
```
### Data for the individual currencies

First we test with one sample currency how to get at the information.
```
base_url = 'https://coinmarketcap.com/currencies/bitcoin/historical-data/?start=20171015&end=20181015'
response = requests.get(base_url)
doc = BeautifulSoup(response.text, "html.parser")

days = doc.find_all('tr', class_='text-right')
days_list = []

cells = days[0].find_all('td')
cells

this_day = {}
this_day['date'] = cells[0].text
this_day['open'] = cells[1].text
this_day['high'] = cells[2].text
this_day['low'] = cells[3].text
this_day['close'] = cells[4].text
this_day['volume'] = cells[5].text
this_day['marketcap'] = cells[6].text
this_day

for day in days:
    this_day = {}
    cells = day.find_all('td')
    this_day['date'] = cells[0].text
    this_day['open'] = cells[1].text
    this_day['high'] = cells[2].text
    this_day['low'] = cells[3].text
    this_day['close'] = cells[4].text
    this_day['volume'] = cells[5].text
    this_day['marketcap'] = cells[6].text
    days_list.append(this_day)

df = pd.DataFrame(days_list)
df.head(2)
```
Now we apply the scraper to all currencies.
```
df_currencies = pd.read_csv(path + 'currencies.csv')
df_currencies.head(2)
len(df_currencies)

currencies = df_currencies.to_dict(orient='records')
url_start = 'https://coinmarketcap.com/currencies/'
url_end = '/historical-data/?start=20171015&end=20181015'

for currency in currencies:
    print('working on: ' + currency['name'])
    url = url_start + currency['link'] + url_end
    response = requests.get(url)
    doc = BeautifulSoup(response.text, "html.parser")
    days = doc.find_all('tr', class_='text-right')
    days_list = []
    for day in days:
        this_day = {}
        cells = day.find_all('td')
        this_day['date'] = cells[0].text
        this_day['open'] = cells[1].text
        this_day['high'] = cells[2].text
        this_day['low'] = cells[3].text
        this_day['close'] = cells[4].text
        this_day['volume'] = cells[5].text
        this_day['marketcap'] = cells[6].text
        days_list.append(this_day)
    df = pd.DataFrame(days_list)
    filename = currency['name'] + '.csv'
    df.to_csv(path + 'data/' + filename, index=False)

print('Done')
```
At the end we have a list of files: for each cryptocurrency there is a table with the market data for the defined period. The data is stored in the subfolder `data/`.

## Analyzing the data

### Reading in

We start by searching the directory in which all the cryptocurrency data is stored. For this we use `glob`, a handy tool from the standard library: https://docs.python.org/3/library/glob.html
```
filenames = glob.glob(path + 'data/*.csv') # we build a list of the filenames
len(filenames)
filenames[0:2]
```
With glob we have now created a list of the filenames. Next we read in every single file from the list, so that we end up with a list of dataframes.
```
dfs = [pd.read_csv(filename) for filename in filenames] # list comprehension
dfs[0].head(2)
```
The individual dataframes in the list contain the market data. But they themselves contain no information about which cryptocurrency the data belongs to. For that purpose we add an extra column to each dataframe with the name of the currency.
```
for df, filename in zip(dfs, filenames): # zip ==> iterates over both lists in parallel
    df['currency'] = filename
    df['currency'] = df['currency'].str.extract('/data/(.+).csv')

dfs[0].head(2)
```
Now we combine all dataframes into one single, very long dataframe. For this we use the function `pd.concat()`: https://pandas.pydata.org/pandas-docs/stable/generated/pandas.concat.html
```
df_all = pd.concat(dfs, ignore_index=True) # ignore_index=True ==> the old indexing is overwritten
# concat stacks the listed dataframes on top of each other
df_all.shape
df_all.head(2)
df_all.tail(2)
df_all.dtypes
```
We now have an extremely long dataframe. What next?

### Arranging

That depends on what exactly we want to do with the data. One option would be: comparing the different currencies with one another, based on their closing prices.
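The restructuring step that comes next can be previewed on a toy frame (made-up values, not the scraped data): `pivot` turns the long table into one row per date and one column per currency.

```python
import pandas as pd

# A miniature version of df_all: long format, one row per (date, currency)
df_long = pd.DataFrame({
    'date': ['Oct 14, 2018', 'Oct 14, 2018', 'Oct 15, 2018', 'Oct 15, 2018'],
    'currency': ['Bitcoin', 'Ethereum', 'Bitcoin', 'Ethereum'],
    'close': [6300.0, 210.0, 6500.0, 215.0],
})

# Wide format: rows are dates, columns are currencies, values are closing prices
df_wide = df_long.pivot(index='date', columns='currency', values='close')
print(df_wide.loc['Oct 15, 2018', 'Bitcoin'])  # 6500.0
```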
For that we need to restructure the dataframe slightly, with `pivot()`: https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.pivot.html
```
df_pivoted = df_all.pivot(index='date', columns='currency', values='close')
df_pivoted.shape
df_pivoted.head(2)
df_pivoted.tail(2)
df_pivoted.rename_axis(None, inplace=True)
```
We now have an index in which each row corresponds to a unique point in time. To work with it, we convert the text in the index column into a real date of type datetime.
```
df_pivoted.index = pd.to_datetime(df_pivoted.index, format="%b %d, %Y")
df_pivoted.sort_index(inplace=True)
df_pivoted.head(2)
```
We now have a cleanly formatted dataframe, with a hundred columns that contain a trading price for each cryptocurrency, provided it existed at the point in time in question. The next question is: how do we compare these prices? What does it tell us if one currency was traded at 0.1976 USD on a given day and another at 18.66 USD?

### Making things comparable

Several approaches would work here:
- e.g. `pct_change()` to analyze the changes in the prices
- or an indexed time series that starts at 100 on a given day

We choose the second option, and for that we store the first row separately.
```
row_0 = df_pivoted.iloc[0]
row_0
```
Then we divide every single row in the dataframe by the first row, and save the result as a new dataframe.
```
df_pivoted_100 = df_pivoted.apply(lambda row: row / row_0 * 100, axis=1)
```
The new dataframe is now indexed to 100. All currencies start at the same point...
```
df_pivoted_100.head(5)
df_pivoted_100.tail(1)
```
... and end at a certain point. From this end point we can read off the relative development.
```
s_last = df_pivoted_100.iloc[-1]
```
Which ten cryptocurrencies gained the most value...
```
s_last.sort_values(ascending=False).head(10)
```
... and which lost the most value.
```
s_last.sort_values(ascending=False, na_position='first').tail(10)
```
And this is what the performance of all currencies looks like:
```
df_pivoted_100.plot(figsize=(10,6), legend=False)
```
Wow, that is quite a lot of lines!

# Plotting Level 4

Here we learn how to take this chart apart a bit. An opportunity to see how to use the matplotlib functions directly.
```
import matplotlib.pyplot as plt
import matplotlib.dates as mdates
import matplotlib.ticker as ticker
```
And to get to know a new way of creating a plot.

### One plot

Let's start with a single plot: Bitcoin. For this we first need to build two things:
1. A "figure", i.e. the overall image
1. A "subplot", i.e. the plot itself
```
# We create both things in one go
fig, ax = plt.subplots(figsize=(10,6))

# And now fill the plot with content:
df_pivoted_100['Bitcoin'].plot(title="Bitcoin", ax=ax)
```
### Two plots

Next we plot two currencies on the same figure: Bitcoin and Ethereum. Again we need to build two things:
1. A "figure", i.e. the overall image
1. Several "subplots" for the respective currencies

This time we also give the x-axis some special formatting.
```
# First we create only the figure
fig = plt.figure(figsize=(12,3))

# Then the individual subplots
ax1 = fig.add_subplot(1, 2, 1) # 1 row total, 2 columns total, subplot no. 1
ax2 = fig.add_subplot(1, 2, 2) # 1 row total, 2 columns total, subplot no. 2

# And finally we fill the subplots with content
df_pivoted_100['Bitcoin'].plot(title="Bitcoin", ax=ax1)
df_pivoted_100['Ethereum'].plot(title="Ethereum", ax=ax2)

# Here we format the x-axis for plot 1
ax1.xaxis.set_major_locator(mdates.MonthLocator())
ax1.xaxis.set_major_formatter(mdates.DateFormatter('%m'))
ax1.xaxis.set_minor_locator(ticker.NullLocator())

# Here we format the x-axis for plot 2
ax2.xaxis.set_major_locator(mdates.MonthLocator())
ax2.xaxis.set_major_formatter(mdates.DateFormatter('%m'))
ax2.xaxis.set_minor_locator(ticker.NullLocator())
```
Some information on how to format time axes can be found here:
- TickLocators: https://matplotlib.org/examples/ticks_and_spines/tick-locators.html
- TickFormatters: https://matplotlib.org/gallery/ticks_and_spines/tick-formatters.html

### Very many plots

Now we plot all currencies at once. How many are there?
```
anzahl_charts = s_last.notnull().sum()
anzahl_charts
```
We sort our list of currencies a bit:
```
sortierte_waehrungen = s_last[s_last.notnull()].sort_values(ascending=False)
sortierte_waehrungen.head(2)
```
And then repeat the same approach as before.
```
sortierte_waehrungen.index

# A figure that is big enough
fig = plt.figure(figsize=(15,22))

# And now, for each individual currency:
for i, waehrung in enumerate(sortierte_waehrungen.index): # enumerate indexes the plots

    # ... create a subplot ...
    ax = fig.add_subplot(11, 6, i + 1)

    # ... and fill it with content
    df_pivoted_100[waehrung].plot(title=waehrung, ax=ax)

    # We skip the ticks entirely here
    ax.xaxis.set_major_locator(ticker.NullLocator())
    ax.xaxis.set_minor_locator(ticker.NullLocator())
```
If we additionally want every plot to have the same y-axis:
```
# A figure that is big enough
fig = plt.figure(figsize=(15,22))

# And now, for each individual currency:
for i, waehrung in enumerate(sortierte_waehrungen.index):

    # ... create a subplot ...
    ax = fig.add_subplot(11, 6, i + 1)

    # ... and fill it with content
    df_pivoted_100[waehrung].plot(title=waehrung, ax=ax)

    # We skip the ticks entirely here
    ax.xaxis.set_major_locator(ticker.NullLocator())
    ax.xaxis.set_minor_locator(ticker.NullLocator())

    # Here we set a uniform y-axis (and switch it off)
    ax.set_ylim([0, 25000])
    ax.yaxis.set_major_locator(ticker.NullLocator())
```
### But there is an easier way...

Ha! Now that we have assembled everything manually with matplotlib, here is the good news: *We can get the same result with just a few lines of code directly from the Pandas plot() function :-)*
```
axes = df_pivoted_100[sortierte_waehrungen.index].plot(subplots=True, layout=(22, 3), sharey=True, figsize=(15,22))
axes[0,0].xaxis.set_major_locator(ticker.NullLocator())
axes[0,0].xaxis.set_minor_locator(ticker.NullLocator())
```
# Exercise

Here we no longer look at the trading prices but at the trading volumes! That is: how much of each cryptocurrency was bought and sold on a given day (measured in USD).

Take another look at the dataframe `df_all` that we created in the course of this notebook - it contains all the information we need, but is still relatively unstructured. Which column are we interested in? Do we still need to do something to it?

### Arranging the data

Take the necessary steps to be able to work with the column. You should end up with a column that is no longer formatted as object but as float. Tip: store all modifications in a new column so that the original remains unchanged.

Now we want to restructure the data:
- For each date we want one row
- For each cryptocurrency one column
- We are interested in the trading volumes

Format the values in the index column as datetime objects and sort the dataframe by date.
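As a hint for the cleaning step: a column usually ends up with dtype `object` because of thousands separators. A sketch on made-up strings, assuming the scraped volumes contain commas and possibly placeholder dashes (the actual column contents may differ):

```python
import pandas as pd

# Toy volume strings as they might come from the scraper
raw = pd.Series(['1,234,567', '89,012', '-'])

# Strip commas, turn unparseable entries into NaN, and cast to float
clean = pd.to_numeric(raw.str.replace(',', '', regex=False), errors='coerce')
print(clean.dtype)  # float64
```

`errors='coerce'` keeps the pipeline running on malformed entries instead of raising.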
### Analysis

In this section we do some simple evaluations and repeat a few commands, among others from the Time Series sheet.

**Top 10**: Which were, on average, the ten most-traded currencies? List and chart.

Which were the ten currencies whose volume fluctuated the most in absolute terms? (standard deviation)

It looks like it is the same ten currencies. Can we determine which of them had the largest fluctuations in relative terms, i.e. compared to their trading volume?

**Bitcoin vs Ethereum**

Create a chart with the weekly total turnover of Bitcoin and Ethereum!

In which of the last 12 months was the most Bitcoin traded overall? The most Ethereum?

How much Bitcoin and Ethereum is traded on average on each of the seven weekdays? Bar chart.

**Small multiples**: Here we create a plot similar to the one above.

First create a list of currencies:
- All currencies that have an entry on the last trading day
- Sorted in descending order by trading volume
- We select only the ten largest

And now: plot small multiples! Think about:
- How many subplots are needed, and how should they be arranged?
- How big does the figure need to be overall?
- What is a sensible setting for the y-axis? (You can use the matplotlib functionality for this, or Pandas plot()
# Create a general MODFLOW model from the NHDPlus dataset

Project-specific variables are imported in the model_spec.py and gen_mod_dict.py files that must be included in the notebook directory. The first includes pathnames to data sources that will be different for each user. The second file includes a dictionary of model-specific information such as cell size, default hydraulic parameter values, and scenario definition (e.g. include bedrock, number of layers, etc.). There are examples in the repository.

Run the following cells up to the "Run to here" cell to get a pull-down menu of models in the model_dict. Then, without re-running that cell, run all the remaining cells. Re-running the following cell would re-set the model to the first one in the list, which you probably don't want. If you use the notebook option to run all cells below, it runs the cell you're in, so if you use that option, move to the next cell (below the pull-down menu of models) first.
```
__author__ = 'Jeff Starn'
%matplotlib notebook
from model_specs import *
from gen_mod_dict import *

import os
import sys
import numpy as np
import matplotlib.pyplot as plt
import flopy as fp
import pandas as pd
import gdal
gdal.UseExceptions()
import shutil

# from model_specs import *
# from gen_mod_dict import *

from ipywidgets import interact, Dropdown
from IPython.display import display

for key, value in model_dict.items():
    md = key
    ms = model_dict[md]
    print('trying {}'.format(md))
    try:
        pass
    except:
        pass

models = list(model_dict.keys())
models.sort()

model_area = Dropdown(
    options=models,
    description='Model:',
    background_color='cyan',
    border_color='black',
    border_width=2)
display(model_area)
```
### Run to here to initiate notebook

First time using this notebook in this session (before restarting the notebook), run the cells up to this point. Then select your model from the dropdown list above. Move your cursor to this cell and use the toolbar menu Cell --> Run All Below.
After the first time, if you want to run another model, select your model and start running from this cell--you don't need to re-run the cells from the beginning. ## Preliminary stuff ``` md = model_area.value ms = model_dict[md] print('The model being processed is {}\n'.format(md)) ``` Set pathnames and create workspace directories for geographic data (from Notebook 1) and this model. ``` geo_ws = os.path.join(proj_dir, ms['ws']) model_ws = os.path.join(geo_ws, scenario_dir) array_pth = os.path.join(model_ws, 'arrays') try: shutil.rmtree(array_pth) except: pass try: shutil.rmtree(model_ws) except: pass os.makedirs(model_ws) head_file_name = '{}.hds'.format(md) head_file_pth = os.path.join(model_ws, head_file_name) print (model_ws) ``` Replace entries from the default K_dict with the model specific K values from model_dict if they exist. ``` for key, value in K_dict.items(): if key in ms.keys(): K_dict[key] = ms[key] ``` Replace entries from the default rock_riv_dict with the model specific values from model_dict if they exist. rock_riv_dict has various attributes of bedrock and stream geometry. ``` for key, value in rock_riv_dict.items(): if key in ms.keys(): rock_riv_dict[key] = ms[key] ``` Assign values to variables used in this notebook using rock_riv_dict ``` min_thk = rock_riv_dict['min_thk'] stream_width = rock_riv_dict['stream_width'] stream_bed_thk = rock_riv_dict['stream_bed_thk'] river_depth = rock_riv_dict['river_depth'] bedrock_thk = rock_riv_dict['bedrock_thk'] stream_bed_kadjust = rock_riv_dict['stream_bed_kadjust'] ``` ## Read the information for a model domain processed using Notebook 1 Read the model_grid data frame from a csv file. Extract grid dimensions and ibound array. 
``` model_file = os.path.join(geo_ws, 'model_grid.csv') model_grid = pd.read_csv(model_file, index_col='node_num', na_values=['nan', hnoflo]) NROW = model_grid.row.max() + 1 NCOL = model_grid.col.max() + 1 num_cells = NROW * NCOL ibound = model_grid.ibound.reshape(NROW, NCOL) inactive = (ibound == 0) ``` ## Translate geologic information into hydrologic properties ``` # # old geology used in general models prior to 4/5/2016 # coarse_deposits = (model_grid.coarse_flag == 2) # coarse_is_1 = coarse_deposits.reshape(NROW, NCOL) ``` This version replaces Soller's Surfmat with the Quaternary Atlas. Look-up table for coarse deposits (zone = 1) from Dick Yager's new_unit. All other categories are lumped with fine deposits (zone = 0). * alluvium = 1 * ice contact = 9 * lacustrine coarse = 11 * outwash = 17 Create a dictionary that maps the K_dict from gen_mod_dict to zone numbers (key=zone number, value=entry in K_dict). Make sure these correspond with the correct units. If you're using the defaults, it is correct. ``` zone_dict = {0 : 'K_fine', 1 : 'K_coarse', 2 : 'K_lakes', 3 : 'K_bedrock'} ``` Perform the mapping from zone number to K to create the Kh1d array. ``` zones1d = np.zeros(( NROW, NCOL ), dtype=np.int32) qa = model_grid.qu_atlas.reshape( NROW, NCOL ) zones1d[qa == 1] = 1 zones1d[qa == 9] = 1 zones1d[qa == 11] = 1 zones1d[qa == 17] = 1 la = model_grid.lake.reshape( NROW, NCOL ) zones1d[la == 1] = 2 Kh1d = np.zeros(( NROW, NCOL ), dtype=np.float32) for key, val in zone_dict.items(): Kh1d[zones1d == key] = K_dict[val] model_grid['K0'] = Kh1d.ravel() ``` ## Process boundary condition information Create a dictionary of stream information for the drain or river package. River package input also needs the elevation of the river bed. Don't use both packages. The choice is made by commenting/uncommenting sections of the modflow function. Replace segment_len (segment length) with the conductance. The river package has not been tested. 
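Written out, the conductance that replaces segment_len in the next cell is the standard MODFLOW streambed-conductance form, with symbols matched to the notebook's variables ($K_0$ is the cell's horizontal hydraulic conductivity, $f$ is stream_bed_kadjust, $L_{seg}$ is segment_len, $w$ is stream_width, and $b$ is stream_bed_thk):

$$C = \frac{f \, K_0 \, L_{seg} \, w}{b}$$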
```
# stage != np.nan is always True, so use notna() to actually filter missing stages
drn_flag = (model_grid.stage.notna()) & (model_grid.ibound == 1)

drn_data = model_grid.loc[drn_flag, ['lay', 'row', 'col', 'stage', 'segment_len', 'K0']]
drn_data.columns = ['k', 'i', 'j', 'stage', 'segment_len', 'K0']
dcond = drn_data.K0 * stream_bed_kadjust * drn_data.segment_len * stream_width / stream_bed_thk
drn_data['segment_len'] = dcond
drn_data.rename(columns={'segment_len' : 'cond'}, inplace=True)
drn_data.drop('K0', axis=1, inplace=True)
drn_data.dropna(axis='index', inplace=True)
drn_data.insert(drn_data.shape[1], 'iface', 6)
drn_recarray = drn_data.to_records(index=False)
drn_dict = {0 : drn_recarray}

riv_flag = (model_grid.stage.notna()) & (model_grid.ibound == 1)

riv_data = model_grid.loc[riv_flag, ['lay', 'row', 'col', 'stage', 'segment_len', 'reach_intermit', 'K0']]
riv_data.columns = ['k', 'i', 'j', 'stage', 'segment_len', 'rbot', 'K0']
riv_data['rbot'] = riv_data.stage - river_depth
rcond = riv_data.K0 * stream_bed_kadjust * riv_data.segment_len * stream_width / stream_bed_thk
riv_data['segment_len'] = rcond
riv_data.rename(columns={'segment_len' : 'rcond'}, inplace=True)
riv_data.drop('K0', axis=1, inplace=True)
riv_data.dropna(axis='index', inplace=True)
riv_data.insert(riv_data.shape[1], 'iface', 6)
riv_recarray = riv_data.to_records(index=False)
riv_dict = {0 : riv_recarray}
```
Create a dictionary of information for the general-head boundary package. Similar to the above cell. Not tested.
``` if model_grid.ghb.sum() > 0: ghb_flag = model_grid.ghb == 1 ghb_data = model_grid.loc[ghb_flag, ['lay', 'row', 'col', 'top', 'segment_len', 'K0']] ghb_data.columns = ['k', 'i', 'j', 'stage', 'segment_len', 'K0'] gcond = ghb_data.K0 * L * L / stream_bed_thk ghb_data['segment_len'] = gcond ghb_data.rename(columns={'segment_len' : 'cond'}, inplace=True) ghb_data.drop('K0', axis=1, inplace=True) ghb_data.dropna(axis='index', inplace=True) ghb_data.insert(ghb_data.shape[1], 'iface', 6) ghb_recarray = ghb_data.to_records(index=False) ghb_dict = {0 : ghb_recarray} ``` ### Create 1-layer model to get initial top-of-aquifer on which to drape subsequent layering Get starting heads from top elevations. The top is defined as the model-cell-mean NED elevation except in streams, where it is interpolated between MaxElevSmo and MinElevSmo in the NHD (called 'stage' in model_grid). Make them a little higher than land so that drains don't accidentally go dry too soon. ``` top = model_grid.top.reshape(NROW, NCOL) strt = top * 1.05 ``` Modify the bedrock surface, ensuring that it is always at least min_thk below the top elevation. This calculation will be revisited for the multi-layer case. ``` bedrock = model_grid.bedrock_el.reshape(NROW, NCOL) thk = top - bedrock thk[thk < min_thk] = min_thk bot = top - thk ``` ## Create recharge array This version replaces the Wolock/Yager recharge grid with the GWRP SWB grid. 
``` ## used in general models prior to 4/5/2016 # rech = model_grid.recharge.reshape(NROW, NCOL) ``` Replace rech array with * calculate total recharge for the model domain * calculate areas of fine and coarse deposits * apportion recharge according to the ratio specified in gen_mod_dict.py * write the values to an array ``` r_swb = model_grid.swb.reshape(NROW, NCOL) / 365.25 rech_ma = np.ma.MaskedArray(r_swb, mask=inactive) coarse_ma = np.ma.MaskedArray(zones1d != 0, mask=inactive) fine_ma = np.ma.MaskedArray(zones1d == 0, mask=inactive) total_rech = rech_ma.sum() Af = fine_ma.sum() Ac = coarse_ma.sum() Rf = total_rech / (rech_fact * Ac + Af) Rc = rech_fact * Rf rech = np.zeros_like(r_swb) rech[zones1d != 0] = Rc rech[zones1d == 0] = Rf ``` ## Define a function to create and run MODFLOW ``` def modflow(md, mfpth, model_ws, nlay=1, top=top, strt=strt, nrow=NROW, ncol=NCOL, botm=bedrock, ibound=ibound, hk=Kh1d, rech=rech, stream_dict=drn_dict, delr=L, delc=L, hnoflo=hnoflo, hdry=hdry, iphdry=1): strt_dir = os.getcwd() os.chdir(model_ws) ml = fp.modflow.Modflow(modelname=md, exe_name=mfpth, version='mfnwt', external_path='arrays') # add packages (DIS has to come before either BAS or the flow package) dis = fp.modflow.ModflowDis(ml, nlay=nlay, nrow=NROW, ncol=NCOL, nper=1, delr=L, delc=L, laycbd=0, top=top, botm=botm, perlen=1.E+05, nstp=1, tsmult=1, steady=True, itmuni=4, lenuni=2, extension='dis', unitnumber=11) bas = fp.modflow.ModflowBas(ml, ibound=ibound, strt=strt, ifrefm=True, ixsec=False, ichflg=False, stoper=None, hnoflo=hnoflo, extension='bas', unitnumber=13) upw = fp.modflow.ModflowUpw(ml, laytyp=1, layavg=0, chani=1.0, layvka=1, laywet=0, ipakcb=53, hdry=hdry, iphdry=iphdry, hk=hk, hani=1.0, vka=1.0, ss=1e-05, sy=0.15, vkcb=0.0, noparcheck=False, extension='upw', unitnumber=31) rch = fp.modflow.ModflowRch(ml, nrchop=3, ipakcb=53, rech=rech, irch=1, extension='rch', unitnumber=19) drn = fp.modflow.ModflowDrn(ml, ipakcb=53, stress_period_data=drn_dict, 
dtype=drn_dict[0].dtype, extension='drn',
                                unitnumber=21, options=['NOPRINT', 'AUX IFACE'])

    riv = fp.modflow.ModflowRiv(ml, ipakcb=53, stress_period_data=riv_dict,
                                dtype=riv_dict[0].dtype, extension='riv',
                                unitnumber=18, options=['NOPRINT', 'AUX IFACE'])

    if GHB:
        ghb = fp.modflow.ModflowGhb(ml, ipakcb=53, stress_period_data=ghb_dict,
                                    dtype=ghb_dict[0].dtype, extension='ghb',
                                    unitnumber=23, options=['NOPRINT', 'AUX IFACE'])

    oc = fp.modflow.ModflowOc(ml, ihedfm=0, iddnfm=0, chedfm=None, cddnfm=None, cboufm=None,
                              compact=True, stress_period_data={(0, 0): ['save head', 'save budget']},
                              extension=['oc', 'hds', 'ddn', 'cbc'], unitnumber=[14, 51, 52, 53])

#     nwt = fp.modflow.ModflowNwt(ml, headtol=0.0001, fluxtol=500, maxiterout=1000,
#                                 thickfact=1e-05, linmeth=2, iprnwt=1, ibotav=0, options='COMPLEX')

    nwt = fp.modflow.ModflowNwt(ml, headtol=0.0001, fluxtol=500, maxiterout=100,
                                thickfact=1e-05, linmeth=2, iprnwt=1, ibotav=1, options='SPECIFIED',
                                dbdtheta=0.80, dbdkappa=0.00001, dbdgamma=0.0, momfact=0.10,
                                backflag=1, maxbackiter=30, backtol=1.05, backreduce=0.4,
                                iacl=2, norder=1, level=3, north=7, iredsys=1, rrctols=0.0,
                                idroptol=1, epsrn=1.0E-3, hclosexmd=1.0e-4, mxiterxmd=200)

    ml.write_input()
    ml.remove_package('RIV')
    ml.write_input()
    success, output = ml.run_model(silent=True)
    os.chdir(strt_dir)
    if success:
        print("    Your {:0d} layer model ran successfully".format(nlay))
    else:
        print("    Your {:0d} layer model didn't work".format(nlay))
```
## Run 1-layer MODFLOW

Use the function to run MODFLOW for 1 layer to get an approximate top-of-aquifer elevation
```
modflow(md, mfpth, model_ws, nlay=1, top=top, strt=strt, nrow=NROW, ncol=NCOL, botm=bot,
        ibound=ibound, hk=Kh1d, rech=rech, stream_dict=drn_dict, delr=L, delc=L,
        hnoflo=hnoflo, hdry=hdry, iphdry=0)
```
Read the head file and calculate new layer top (wt) and bottom (bot) elevations based on the estimated water table (wt) being the top of the top layer.
Divide the surficial layer into NLAY equally thick layers between wt and the bedrock surface elevation (as computed using minimum surficial thickness). ``` hdobj = fp.utils.HeadFile(head_file_pth) heads1 = hdobj.get_data(kstpkper=(0, 0)) heads1[heads1 == hnoflo] = np.nan heads1[heads1 <= hdry] = np.nan heads1 = heads1[0, :, :] hdobj = None ``` ## Create layering using the scenario in gen_mod_dict Make a new model with (possibly) multiple layers. If there are dry cells in the 1-layer model, they are converted to NaN (not a number). The minimum function in the first line returns NaN if the corresponding element of either input array is NaN. In that case, replace NaN in modeltop with the top elevation. The process is similar to the 1-layer case. Thickness is estimated based on modeltop and bedrock and is constrained to be at least min_thk (set in gen_mod_dict.py). This thickness is divided into num_surf_layers number of layers. The cumulative thickness of these layers is the distance from the top of the model to the bottom of the layers. This 3D array of distances (the same for each layer) is subtracted from modeltop. ``` modeltop = np.minimum(heads1, top) nan = np.isnan(heads1) modeltop[nan] = top[nan] thk = modeltop - bedrock thk[thk < min_thk] = min_thk NLAY = num_surf_layers lay_extrude = np.ones((NLAY, NROW, NCOL)) lay_thk = lay_extrude * thk / NLAY bot = modeltop - np.cumsum(lay_thk, axis=0) ``` Using the estimated water table as the new top-of-aquifer elevation sometimes leads to a situation, usually in a very small number of cells, where the drain elevation is below the bottom of the cell. The following procedure resets the bottom elevation to one meter below the drain elevation when that is the case. 
``` stg = model_grid.stage.fillna(1.E+30, inplace=False) tmpdrn = (lay_extrude * stg.reshape(NROW, NCOL)).ravel() tmpbot = bot.ravel() index = np.less(tmpdrn, tmpbot) tmpbot[index] = tmpdrn[index] - 1.0 bot = tmpbot.reshape(NLAY, NROW, NCOL) ``` * If add_bedrock = True in gen_mod_dict.py, add a layer to the bottom and increment NLAY by 1. * Assign the new bottom-most layer an elevation equal to the elevation of the bottom of the lowest surficial layer minus bedrock_thk, which is specified in rock_riv_dict (in gen_mod_dict.py). * Concatenate the new bottom-of-bedrock-layer to the bottom of the surficial bottom array. * Compute the vertical midpoint of each cell. Make an array (bedrock_index) that is True if the bedrock surface is higher than the midpoint and False if it is not. * lay_extrude replaces the old lay_extrude to account for the new bedrock layer. It is not used in this cell, but is used later to extrude other arrays. ``` sol_thk = model_grid.soller_thk.reshape(NROW, NCOL) tmp = top - sol_thk bedrock_4_K = bedrock.copy() bedrock_4_K[bedrock > top] = tmp[bedrock > top] if add_bedrock: NLAY = num_surf_layers + 1 lay_extrude = np.ones((NLAY, NROW, NCOL)) bed_bot = bot[-1:,:,:] - bedrock_thk bot = np.concatenate((bot, bed_bot), axis=0) mids = bot + thk / NLAY / 2 bedrock_index = mids < bedrock_4_K bedrock_index[-1:,:,:] = True elif not add_bedrock: print(' no bedrock') pass else: print(' add_bedrock variable needs to be True or False') ``` Extrude all arrays to NLAY number of layers. Create a top-of-aquifer elevation (fake_top) that is higher (20% in this case) than the simulated 1-layer water table because in doing this approximation, some stream elevations end up higher than top_of_aquifer and thus do not operate as drains. The fake_top shouldn't affect model computations if it is set high enough because the model uses convertible (confined or unconfined) layers. 
``` fake_top = (modeltop * 1.2).astype(np.float32) strt = (lay_extrude * modeltop * 1.05).astype(np.float32) ibound = (lay_extrude * ibound).astype(np.int16) ``` Perform the mapping from zone number to K to create the Kh3d array. ``` zones3d = np.zeros(( NLAY, NROW, NCOL ), dtype=np.int32) qa = model_grid.qu_atlas.reshape(NROW, NCOL) qa3d = (lay_extrude * qa).astype(np.int32) zones3d[qa3d == 1] = 1 zones3d[qa3d == 9] = 1 zones3d[qa3d == 11] = 1 zones3d[qa3d == 17] = 1 if add_bedrock: zones3d[bedrock_index] = 3 la = model_grid.lake.reshape(NROW, NCOL) zones3d[0, la == 1] = 2 Kh3d = np.zeros(( NLAY, NROW, NCOL ), dtype=np.float32) for key, val in zone_dict.items(): Kh3d[zones3d == key] = K_dict[val] ``` Run MODFLOW again using the new layer definitions. The difference from the first run is that the top-of-aquifer elevation is the 1-layer water table rather than land surface, and of course, the number of surficial layers and/or the presence of a bedrock layer is different. ``` modflow(md, mfpth, model_ws, nlay=NLAY, top=fake_top, strt=strt, nrow=NROW, ncol=NCOL, botm=bot, ibound=ibound, hk=Kh3d, rech=rech, stream_dict=drn_dict, delr=L, delc=L, hnoflo=hnoflo, hdry=hdry, iphdry=1) ``` Read the new head array ``` hdobj = fp.utils.HeadFile(head_file_pth) heads = hdobj.get_data() hdobj = None ``` Make a 2D array of the heads in the highest active cells and call it the water_table ``` heads[heads == hnoflo] = np.nan heads[heads <= hdry] = np.nan hin = np.argmax(np.isfinite(heads), axis=0) row, col = np.indices((hin.shape)) water_table = heads[hin, row, col] water_table_ma = np.ma.MaskedArray(water_table, inactive) ``` Save the head array to a geotiff file. 
``` data = water_table_ma src_pth = os.path.join(geo_ws, 'ibound.tif') src = gdal.Open(src_pth) dst_pth = os.path.join(model_ws, 'pre-heads.tif') driver = gdal.GetDriverByName('GTiff') dst = driver.CreateCopy(dst_pth, src, 0) band = dst.GetRasterBand(1) band.WriteArray(data) band.SetNoDataValue(np.nan) dst = None src = None ``` Save the heads and K from the upper-most layer to model_grid.csv ``` model_grid['pre_cal_heads'] = water_table_ma.ravel() model_grid['pre_cal_K'] = Kh3d[0,:,:].ravel() if add_bedrock: model_grid['thk'] = model_grid.top - bot[-1,:,:].ravel() + bedrock_thk else: model_grid['thk'] = model_grid.top - bot[-1,:,:].ravel() model_grid['thkR'] = model_grid.thk / model_grid.recharge model_grid.to_csv(os.path.join(model_ws, 'model_grid.csv')) ``` Save zone array for use in calibration. ``` zone_file = os.path.join(model_ws, 'zone_array.npz') np.savez(zone_file, zone=zones3d) ``` Plot a cross-section to see what the layers look like. Change row_to_plot to see other rows. Columns could be easily added. ``` def calc_error(top, head, obs_type): # an offset of 1 is used to eliminate counting heads that # are within 1 m of their target as errors. 
# count topo and hydro errors t = top < (head - err_tol) h = top > (head + err_tol) tmp_df = pd.DataFrame({'head':head, 'ot':obs_type, 't':t, 'h':h}) tmp = tmp_df.groupby('ot').sum() h_e_ = tmp.loc['hydro', 'h'] t_e_ = tmp.loc['topo', 't'] result = np.array([h_e_, t_e_]) return result hydro, topo = calc_error(model_grid.top, water_table.ravel(), model_grid.obs_type) num_hydro = model_grid.obs_type.value_counts()['hydro'] num_topo = model_grid.obs_type.value_counts()['topo'] num_cells = num_hydro + num_topo hydro = hydro / num_hydro topo = topo / num_topo def ma2(data2D): return np.ma.MaskedArray(data2D, mask=inactive) def ma3(data3D): return np.ma.MaskedArray(data3D, mask=(ibound == 0)) row_to_plot = NROW // 2 # integer division so the row index is an int in Python 3 xplot = np.linspace( L / 2, NCOL * L - L / 2, NCOL) mKh = ma3(Kh3d) mtop = ma2(top) mbed = ma2(bedrock) mbot = ma3(bot) colors = ['green', 'red', 'gray'] fig = plt.figure(figsize=(8,8)) ax1 = plt.subplot2grid((3, 1), (0, 0), rowspan=2) ax1.plot(xplot, mtop[row_to_plot, ], label='land surface', color='black', lw=0.5) ax1.plot(xplot, water_table_ma[row_to_plot, ], label='water table', color='blue', lw=1.) 
ax1.fill_between(xplot, mtop[row_to_plot, ], mbot[0, row_to_plot, :], alpha=0.25, color='blue', label='layer 1', lw=0.75) for lay in range(NLAY-1): label = 'layer {}'.format(lay+2) ax1.fill_between(xplot, mbot[lay, row_to_plot, :], mbot[lay+1, row_to_plot, :], label=label, color=colors[lay], alpha=0.250, lw=0.75) ax1.plot(xplot, mbed[row_to_plot, :], label='bedrock (Soller)', color='red', linestyle='dotted', lw=1.5) ax1.plot(xplot, mbot[-1, row_to_plot, :], color='black', linestyle='solid', lw=0.5) ax1.legend(loc=0, frameon=False, fontsize=10, ncol=3)#, bbox_to_anchor=(1.0, 0.5)) ax1.set_ylabel('Altitude, in meters') ax1.set_xticklabels('') ax1.set_title('Default section along row {}, {} model, weight {:0.1f}\nK fine = {:0.1f} K coarse = {:0.1f}\ K bedrock = {:0.1f}\nFraction dry drains {:0.2f} Fraction flooded cells {:0.2f}'.format(row_to_plot, \ md, 1, K_dict['K_fine'], K_dict['K_coarse'], K_dict['K_bedrock'], hydro, topo)) ax2 = plt.subplot2grid((3, 1), (2, 0)) ax2.fill_between(xplot, 0, mKh[0, row_to_plot, :], alpha=0.25, color='blue', label='layer 1', lw=0.75, step='mid') ax2.set_xlabel('Distance in meters') ax2.set_yscale('log') ax2.set_ylabel('Hydraulic conductivity\n in layer 1, in meters / day') line = '{}_{}_xs.png'.format(md, scenario_dir) fig_name = os.path.join(model_ws, line) plt.savefig(fig_name) t = top < (water_table - err_tol) h = top > (water_table + err_tol) mt = np.ma.MaskedArray(t.reshape(NROW, NCOL), model_grid.obs_type != 'topo') mh = np.ma.MaskedArray(h.reshape(NROW, NCOL), model_grid.obs_type != 'hydro') from matplotlib import colors cmap = colors.ListedColormap(['0.50', 'red']) cmap2 = colors.ListedColormap(['blue']) back = np.ma.MaskedArray(ibound[0,:,:], ibound[0,:,:] == 0) fig, ax = plt.subplots(1,2) ax[0].imshow(back, cmap=cmap2, alpha=0.2) im0 = ax[0].imshow(mh, cmap=cmap, interpolation='None') ax[0].axhline(row_to_plot) # fig.colorbar(im0, ax=ax[0]) ax[1].imshow(back, cmap=cmap2, alpha=0.2) im1 = ax[1].imshow(mt, cmap=cmap, 
interpolation='None') ax[1].axhline(row_to_plot) # fig.colorbar(im1, ax=ax[1]) fig.suptitle('Default model errors (in red) along row {}, {} model, weight {:0.1f}\nK fine = {:0.1f} K coarse = {:0.1f}\ K bedrock = {:0.1f}\nFraction dry drains {:0.2f} Fraction flooded cells {:0.2f}'.format(row_to_plot, \ md, 1.0, K_dict['K_fine'], K_dict['K_coarse'], K_dict['K_bedrock'], hydro, topo)) # fig.subplots_adjust(left=None, bottom=None, right=None, top=None, # wspace=None, hspace=None) fig.set_size_inches(6, 6) # line = '{}_{}_error_map_cal.png'.format(md, scenario_dir) line = '{}_{}_error_map.png'.format(md, scenario_dir) #csc fig_name = os.path.join(model_ws, line) plt.savefig(fig_name) ```
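The recharge-apportionment step earlier in this notebook can be checked with a few lines of arithmetic. The sketch below is independent of the notebook's arrays; `rech_fact`, the areas, and the total recharge are made-up values used only to verify that the formula `Rf = total_rech / (rech_fact * Ac + Af)` conserves the domain total when coarse cells receive `Rc = rech_fact * Rf`:

```python
# Hedged sketch (not from the notebook): verify the recharge-apportionment algebra.
# With fine-cell rate Rf, coarse-cell rate Rc = rech_fact * Rf, fine area Af and
# coarse area Ac, total recharge is conserved when Rf = total_rech / (rech_fact * Ac + Af).
rech_fact = 2.5          # assumed ratio of coarse to fine recharge
Af, Ac = 600.0, 400.0    # assumed areas (cell counts) of fine and coarse deposits
total_rech = 1250.0      # assumed total recharge over the model domain

Rf = total_rech / (rech_fact * Ac + Af)
Rc = rech_fact * Rf

# Re-aggregating the apportioned rates recovers the domain total
recovered = Rf * Af + Rc * Ac
print(Rf, Rc, recovered)  # recovered == total_rech
```

Whatever values are chosen, `recovered` equals `total_rech`, which is the property the notebook relies on when it writes `Rf` and `Rc` into the `rech` array.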
# Lesson 2 Exercise 2: Creating Denormalized Tables <img src="images/postgresSQLlogo.png" width="250" height="250"> ## Walk through the basics of modeling data from normalized form to denormalized form. We will create tables in PostgreSQL, insert rows of data, and do simple JOIN SQL queries to show how these multiple tables can work together. #### Where you see ##### you will need to fill in code. This exercise will be more challenging than the last. Use the information provided to create the tables and write the insert statements. #### Remember the examples shown are simple, but imagine these situations at scale with large datasets, many users, and the need for quick response time. Note: __Do not__ click the blue Preview button in the lower task bar ### Import the library Note: An error might pop up after this command has executed. If it does, read it carefully before ignoring. ``` import psycopg2 ``` ### Create a connection to the database, get a cursor, and set autocommit to true ``` try: conn = psycopg2.connect("host=127.0.0.1 dbname=studentdb user=student password=student") except psycopg2.Error as e: print("Error: Could not make connection to the Postgres database") print(e) try: cur = conn.cursor() except psycopg2.Error as e: print("Error: Could not get cursor to the Database") print(e) conn.set_session(autocommit=True) ``` #### Let's start with our normalized (3NF) database set of tables we had in the last exercise, but we have added a new table `sales`. 
`Table Name: transactions2 column 0: transaction Id column 1: Customer Name column 2: Cashier Id column 3: Year ` `Table Name: albums_sold column 0: Album Id column 1: Transaction Id column 3: Album Name` `Table Name: employees column 0: Employee Id column 1: Employee Name ` `Table Name: sales column 0: Transaction Id column 1: Amount Spent ` <img src="images/table16.png" width="450" height="450"> <img src="images/table15.png" width="450" height="450"> <img src="images/table17.png" width="350" height="350"> <img src="images/table18.png" width="350" height="350"> ### TO-DO: Add all Create statements for all Tables and Insert data into the tables ``` # TO-DO: Add all Create statements for all tables try: cur.execute("CREATE TABLE IF NOT EXISTS transactions2 (transaction_id int, \ customer_name varchar, cashier_id int, \ year int);") except psycopg2.Error as e: print("Error: Issue creating table") print (e) try: cur.execute("CREATE TABLE IF NOT EXISTS employees (employee_id int, \ employee_name varchar);") except psycopg2.Error as e: print("Error: Issue creating table") print (e) try: cur.execute("CREATE TABLE IF NOT EXISTS albums_sold (album_id int, transaction_id int, \ album_name varchar);") except psycopg2.Error as e: print("Error: Issue creating table") print (e) try: cur.execute("CREATE TABLE IF NOT EXISTS sales (transaction_id int, amount_spent int);") except psycopg2.Error as e: print("Error: Issue creating table") print (e) # TO-DO: Insert data into the tables try: cur.execute("INSERT INTO transactions2 (transaction_id, customer_name, cashier_id, year) \ VALUES (%s, %s, %s, %s)", \ (1, "Amanda", 1, 2000)) except psycopg2.Error as e: print("Error: Inserting Rows") print (e) try: cur.execute("INSERT INTO transactions2 (transaction_id, customer_name, cashier_id, year) \ VALUES (%s, %s, %s, %s)", \ (2, "Toby", 1, 2000)) except psycopg2.Error as e: print("Error: Inserting Rows") print (e) try: cur.execute("INSERT INTO transactions2 (transaction_id, customer_name, 
cashier_id, year) \ VALUES (%s, %s, %s, %s)", \ (3, "Max", 2, 2018)) except psycopg2.Error as e: print("Error: Inserting Rows") print (e) try: cur.execute("INSERT INTO albums_sold (album_id, transaction_id, album_name) \ VALUES (%s, %s, %s)", \ (1, 1, "Rubber Soul")) except psycopg2.Error as e: print("Error: Inserting Rows") print (e) try: cur.execute("INSERT INTO albums_sold (album_id, transaction_id, album_name) \ VALUES (%s, %s, %s)", \ (2, 1, "Let It Be")) except psycopg2.Error as e: print("Error: Inserting Rows") print (e) try: cur.execute("INSERT INTO albums_sold (album_id, transaction_id, album_name) \ VALUES (%s, %s, %s)", \ (3, 2, "My Generation")) except psycopg2.Error as e: print("Error: Inserting Rows") print (e) try: cur.execute("INSERT INTO albums_sold (album_id, transaction_id, album_name) \ VALUES (%s, %s, %s)", \ (4, 3, "Meet the Beatles")) except psycopg2.Error as e: print("Error: Inserting Rows") print (e) try: cur.execute("INSERT INTO albums_sold (album_id, transaction_id, album_name) \ VALUES (%s, %s, %s)", \ (5, 3, "Help!")) except psycopg2.Error as e: print("Error: Inserting Rows") print (e) try: cur.execute("INSERT INTO employees (employee_id, employee_name) \ VALUES (%s, %s)", \ (1, "Sam")) except psycopg2.Error as e: print("Error: Inserting Rows") print (e) try: cur.execute("INSERT INTO employees (employee_id, employee_name) \ VALUES (%s, %s)", \ (2, "Bob")) except psycopg2.Error as e: print("Error: Inserting Rows") print (e) try: cur.execute("INSERT INTO sales (transaction_id, amount_spent) \ VALUES (%s, %s)", \ (1, 40)) except psycopg2.Error as e: print("Error: Inserting Rows") print (e) try: cur.execute("INSERT INTO sales (transaction_id, amount_spent) \ VALUES (%s, %s)", \ (2, 19)) except psycopg2.Error as e: print("Error: Inserting Rows") print (e) try: cur.execute("INSERT INTO sales (transaction_id, amount_spent) \ VALUES (%s, %s)", \ (3, 45)) except psycopg2.Error as e: print("Error: Inserting Rows") print (e) ``` #### TO-DO: 
Confirm, using SELECT statements, that the data were added correctly ``` print("Table: transactions2\n") try: cur.execute("SELECT * FROM transactions2;") except psycopg2.Error as e: print("Error: select *") print (e) row = cur.fetchone() while row: print(row) row = cur.fetchone() print("\nTable: albums_sold\n") try: cur.execute("SELECT * FROM albums_sold;") except psycopg2.Error as e: print("Error: select *") print (e) row = cur.fetchone() while row: print(row) row = cur.fetchone() print("\nTable: employees\n") try: cur.execute("SELECT * FROM employees;") except psycopg2.Error as e: print("Error: select *") print (e) row = cur.fetchone() while row: print(row) row = cur.fetchone() print("\nTable: sales\n") try: cur.execute("SELECT * FROM sales;") except psycopg2.Error as e: print("Error: select *") print (e) row = cur.fetchone() while row: print(row) row = cur.fetchone() ``` ### Let's say you need to do a query that gives: `transaction_id customer_name cashier name year albums sold amount sold` ### TO-DO: Complete the statement below to perform a 3 way `JOIN` on the 4 tables you have created. ``` try: cur.execute("SELECT transactions2.transaction_id, customer_name, employees.employee_name, \ year, albums_sold.album_name, sales.amount_spent\ FROM ((transactions2 JOIN employees ON \ transactions2.cashier_id = employees.employee_id) JOIN \ albums_sold ON albums_sold.transaction_id=transactions2.transaction_id) JOIN\ sales ON transactions2.transaction_id=sales.transaction_id;") except psycopg2.Error as e: print("Error: select *") print (e) row = cur.fetchone() while row: print(row) row = cur.fetchone() ``` #### Great, we were able to get the data we wanted. ### But, we had to perform a 3 way `JOIN` to get there. While it's great we had that flexibility, we need to remember that `JOINS` are slow, and if we have a read-heavy workload that requires low-latency queries we want to reduce the number of `JOINS`. Let's think about denormalizing our normalized tables. 
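To see the trade-off end to end, here is a small sketch using Python's built-in sqlite3 module (an assumption for illustration — the exercise itself uses psycopg2 against PostgreSQL). It shows that a single pre-joined table answers the same question as a JOIN across the normalized tables, with no JOIN at read time:

```python
import sqlite3

# In-memory database so the sketch is self-contained
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE transactions2 (transaction_id int, customer_name text, cashier_id int, year int)")
cur.execute("CREATE TABLE sales (transaction_id int, amount_spent int)")
cur.executemany("INSERT INTO transactions2 VALUES (?, ?, ?, ?)",
                [(1, "Amanda", 1, 2000), (2, "Toby", 1, 2000), (3, "Max", 2, 2018)])
cur.executemany("INSERT INTO sales VALUES (?, ?)", [(1, 40), (2, 19), (3, 45)])

# Normalized: a JOIN is required to pair customers with amounts
joined = cur.execute(
    "SELECT t.transaction_id, t.customer_name, s.amount_spent "
    "FROM transactions2 t JOIN sales s ON t.transaction_id = s.transaction_id "
    "ORDER BY t.transaction_id").fetchall()

# Denormalized: amount_spent duplicated into one wide table, no JOIN needed
cur.execute("CREATE TABLE transactions AS "
            "SELECT t.transaction_id, t.customer_name, t.cashier_id, t.year, s.amount_spent "
            "FROM transactions2 t JOIN sales s ON t.transaction_id = s.transaction_id")
flat = cur.execute("SELECT transaction_id, customer_name, amount_spent "
                   "FROM transactions ORDER BY transaction_id").fetchall()

print(joined == flat)  # same rows either way; the wide table skips the JOIN
```

The cost, of course, is duplicated data that must be kept in sync on writes — the classic denormalization bargain.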
### With denormalization you want to think about the queries you are running and how to reduce the number of JOINS even if that means duplicating data. The following are the queries you need to run. #### Query 1 : `select transaction_id, customer_name, amount_spent FROM <min number of tables>` It should generate the amount spent on each transaction #### Query 2: `select cashier_name, SUM(amount_spent) FROM <min number of tables> GROUP BY cashier_name` It should generate the total sales by cashier ### Query 1: `select transaction_id, customer_name, amount_spent FROM <min number of tables>` One way to do this would be to do a JOIN on the `sales` and `transactions2` table but we want to minimize the use of `JOINS`. To reduce the number of tables, first add `amount_spent` to the `transactions` table so that you will not need to do a JOIN at all. `Table Name: transactions column 0: transaction Id column 1: Customer Name column 2: Cashier Id column 3: Year column 4: amount_spent` <img src="images/table19.png" width="450" height="450"> ### TO-DO: Add the tables as part of the denormalization process ``` # TO-DO: Create all tables try: cur.execute("CREATE TABLE IF NOT EXISTS transactions (transaction_id int, \ customer_name varchar, cashier_id int, \ year int, amount_spent int);") except psycopg2.Error as e: print("Error: Issue creating table") print (e) #Insert data into all tables try: cur.execute("INSERT INTO transactions (transaction_id, customer_name, cashier_id, year, amount_spent) \ VALUES (%s, %s, %s, %s, %s)", \ (1, "Amanda", 1, 2000, 40)) except psycopg2.Error as e: print("Error: Inserting Rows") print (e) try: cur.execute("INSERT INTO transactions (transaction_id, customer_name, cashier_id, year, amount_spent) \ VALUES (%s, %s, %s, %s, %s)", \ (2, "Toby", 1, 2000, 19)) except psycopg2.Error as e: print("Error: Inserting Rows") print (e) try: cur.execute("INSERT INTO transactions (transaction_id, customer_name, cashier_id, year, amount_spent) \ VALUES (%s, %s, 
%s, %s, %s)", \ (3, "Max", 2, 2018, 45)) except psycopg2.Error as e: print("Error: Inserting Rows") print (e) ``` ### Now you should be able to do a simplified query to get the information you need. No `JOIN` is needed. ``` try: cur.execute("SELECT transaction_id, customer_name, amount_spent FROM transactions;") except psycopg2.Error as e: print("Error: select *") print (e) row = cur.fetchone() while row: print(row) row = cur.fetchone() ``` #### Your output for the above cell should be the following: (1, 'Amanda', 40)<br> (2, 'Toby', 19)<br> (3, 'Max', 45) ### Query 2: `select cashier_name, SUM(amount_spent) FROM <min number of tables> GROUP BY cashier_name` To avoid using any `JOINS`, first create a new table with just the information we need. `Table Name: cashier_sales col: Transaction Id col: Cashier Name col: Cashier Id col: Amount_Spent ` <img src="images/table20.png" width="350" height="350"> ### TO-DO: Create a new table with just the information you need. ``` try: cur.execute("CREATE TABLE IF NOT EXISTS cashier_sales (transaction_id int, cashier_name varchar, \ cashier_id int, amount_spent int);") except psycopg2.Error as e: print("Error: Issue creating table") print (e) #Insert into all tables try: cur.execute("INSERT INTO cashier_sales (transaction_id, cashier_name, cashier_id, amount_spent) \ VALUES (%s, %s, %s, %s)", \ (1, "Sam", 1, 40 )) except psycopg2.Error as e: print("Error: Inserting Rows") print (e) try: cur.execute("INSERT INTO cashier_sales (transaction_id, cashier_name, cashier_id, amount_spent) \ VALUES (%s, %s, %s, %s)", \ (2, "Sam", 1, 19 )) except psycopg2.Error as e: print("Error: Inserting Rows") print (e) try: cur.execute("INSERT INTO cashier_sales (transaction_id, cashier_name, cashier_id, amount_spent) \ VALUES (%s, %s, %s, %s)", \ (3, "Max", 2, 45)) except psycopg2.Error as e: print("Error: Inserting Rows") print (e) ``` ### Run the query ``` try: cur.execute("select cashier_name, SUM(amount_spent) FROM cashier_sales GROUP BY 
cashier_name;") except psycopg2.Error as e: print("Error: select *") print (e) row = cur.fetchone() while row: print(row) row = cur.fetchone() ``` #### Your output for the above cell should be the following: ('Sam', 59)<br> ('Max', 45) #### We have successfully taken normalized tables and denormalized them in order to speed up our performance and allow for simpler queries to be executed. ### Drop the tables ``` try: cur.execute("DROP table albums_sold") except psycopg2.Error as e: print("Error: Dropping table") print (e) try: cur.execute("DROP table employees") except psycopg2.Error as e: print("Error: Dropping table") print (e) try: cur.execute("DROP table transactions") except psycopg2.Error as e: print("Error: Dropping table") print (e) try: cur.execute("DROP table transactions2") except psycopg2.Error as e: print("Error: Dropping table") print (e) try: cur.execute("DROP table sales") except psycopg2.Error as e: print("Error: Dropping table") print (e) try: cur.execute("DROP table cashier_sales") except psycopg2.Error as e: print("Error: Dropping table") print (e) ``` ### And finally close your cursor and connection. ``` cur.close() conn.close() ```
``` %matplotlib inline ``` Advanced: Making Dynamic Decisions and the Bi-LSTM CRF ====================================================== Dynamic versus Static Deep Learning Toolkits -------------------------------------------- Pytorch is a *dynamic* neural network kit. Another example of a dynamic kit is `Dynet <https://github.com/clab/dynet>`__ (I mention this because working with Pytorch and Dynet is similar. If you see an example in Dynet, it will probably help you implement it in Pytorch). The opposite is the *static* tool kit, which includes Theano, Keras, TensorFlow, etc. The core difference is the following: * In a static toolkit, you define a computation graph once, compile it, and then stream instances to it. * In a dynamic toolkit, you define a computation graph *for each instance*. It is never compiled and is executed on-the-fly Without a lot of experience, it is difficult to appreciate the difference. One example is to suppose we want to build a deep constituent parser. Suppose our model involves roughly the following steps: * We build the tree bottom up * Tag the root nodes (the words of the sentence) * From there, use a neural network and the embeddings of the words to find combinations that form constituents. Whenever you form a new constituent, use some sort of technique to get an embedding of the constituent. In this case, our network architecture will depend completely on the input sentence. In the sentence "The green cat scratched the wall", at some point in the model, we will want to combine the span $(i,j,r) = (1, 3, \text{NP})$ (that is, an NP constituent spans word 1 to word 3, in this case "The green cat"). However, another sentence might be "Somewhere, the big fat cat scratched the wall". In this sentence, we will want to form the constituent $(2, 4, NP)$ at some point. The constituents we will want to form will depend on the instance. 
If we just compile the computation graph once, as in a static toolkit, it will be exceptionally difficult or impossible to program this logic. In a dynamic toolkit though, there isn't just 1 pre-defined computation graph. There can be a new computation graph for each instance, so this problem goes away. Dynamic toolkits also have the advantage of being easier to debug and the code more closely resembling the host language (by that I mean that Pytorch and Dynet look more like actual Python code than Keras or Theano). Bi-LSTM Conditional Random Field Discussion ------------------------------------------- For this section, we will see a full, complicated example of a Bi-LSTM Conditional Random Field for named-entity recognition. The LSTM tagger above is typically sufficient for part-of-speech tagging, but a sequence model like the CRF is really essential for strong performance on NER. Familiarity with CRF's is assumed. Although this name sounds scary, all the model is is a CRF but where an LSTM provides the features. This is an advanced model though, far more complicated than any earlier model in this tutorial. If you want to skip it, that is fine. To see if you're ready, see if you can: - Write the recurrence for the viterbi variable at step i for tag k. - Modify the above recurrence to compute the forward variables instead. - Modify again the above recurrence to compute the forward variables in log-space (hint: log-sum-exp) If you can do those three things, you should be able to understand the code below. Recall that the CRF computes a conditional probability. Let $y$ be a tag sequence and $x$ an input sequence of words. 
Then we compute \begin{align}P(y|x) = \frac{\exp{(\text{Score}(x, y)})}{\sum_{y'} \exp{(\text{Score}(x, y')})}\end{align} Where the score is determined by defining some log potentials $\log \psi_i(x,y)$ such that \begin{align}\text{Score}(x,y) = \sum_i \log \psi_i(x,y)\end{align} To make the partition function tractable, the potentials must look only at local features. In the Bi-LSTM CRF, we define two kinds of potentials: emission and transition. The emission potential for the word at index $i$ comes from the hidden state of the Bi-LSTM at timestep $i$. The transition scores are stored in a $|T| \times |T|$ matrix $\textbf{P}$, where $T$ is the tag set. In my implementation, $\textbf{P}_{j,k}$ is the score of transitioning to tag $j$ from tag $k$. So: \begin{align}\text{Score}(x,y) = \sum_i \log \psi_\text{EMIT}(y_i \rightarrow x_i) + \log \psi_\text{TRANS}(y_{i-1} \rightarrow y_i)\end{align} \begin{align}= \sum_i h_i[y_i] + \textbf{P}_{y_i, y_{i-1}}\end{align} where in this second expression, we think of the tags as being assigned unique non-negative indices. If the above discussion was too brief, you can check out `this <http://www.cs.columbia.edu/%7Emcollins/crf.pdf>`__ write-up from Michael Collins on CRFs. Implementation Notes -------------------- The example below implements the forward algorithm in log space to compute the partition function, and the viterbi algorithm to decode. Backpropagation will compute the gradients automatically for us. We don't have to do anything by hand. The implementation is not optimized. If you understand what is going on, you'll probably quickly see that iterating over the next tag in the forward algorithm could probably be done in one big operation. I wanted the code to be more readable. If you want to make the relevant change, you could probably use this tagger for real tasks. 
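Before the model code, it is worth seeing why the log-sum-exp hint above matters numerically: summing raw exponentials of large scores overflows, while subtracting the maximum first does not. A minimal plain-Python sketch, independent of the model code below:

```python
import math

def log_sum_exp(scores):
    # Numerically stable log(sum(exp(s))): subtract the max before exponentiating
    m = max(scores)
    return m + math.log(sum(math.exp(s - m) for s in scores))

scores = [1000.0, 999.0, 998.0]
# math.exp(1000.0) on its own raises OverflowError, so the naive
# log(sum(exp(s))) is unusable here; the shifted version is fine:
print(log_sum_exp(scores))  # slightly above 1000
```

The same trick appears in the `log_sum_exp` helper and in the forward algorithm of the model code that follows.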
``` # Author: Robert Guthrie import torch import torch.autograd as autograd import torch.nn as nn import torch.optim as optim torch.manual_seed(1) ``` Helper functions to make the code more readable. ``` def to_scalar(var): # returns a python float return var.view(-1).data.tolist()[0] def argmax(vec): # return the argmax as a python int _, idx = torch.max(vec, 1) return to_scalar(idx) def prepare_sequence(seq, to_ix): idxs = [to_ix[w] for w in seq] tensor = torch.LongTensor(idxs) return autograd.Variable(tensor) # Compute log sum exp in a numerically stable way for the forward algorithm def log_sum_exp(vec): max_score = vec[0, argmax(vec)] max_score_broadcast = max_score.view(1, -1).expand(1, vec.size()[1]) return max_score + \ torch.log(torch.sum(torch.exp(vec - max_score_broadcast))) ``` Create model ``` class BiLSTM_CRF(nn.Module): def __init__(self, vocab_size, tag_to_ix, embedding_dim, hidden_dim): super(BiLSTM_CRF, self).__init__() self.embedding_dim = embedding_dim self.hidden_dim = hidden_dim self.vocab_size = vocab_size self.tag_to_ix = tag_to_ix self.tagset_size = len(tag_to_ix) self.word_embeds = nn.Embedding(vocab_size, embedding_dim) self.lstm = nn.LSTM(embedding_dim, hidden_dim // 2, num_layers=1, bidirectional=True) # Maps the output of the LSTM into tag space. self.hidden2tag = nn.Linear(hidden_dim, self.tagset_size) # Matrix of transition parameters. Entry i,j is the score of # transitioning *to* i *from* j. 
self.transitions = nn.Parameter( torch.randn(self.tagset_size, self.tagset_size)) # These two statements enforce the constraint that we never transfer # to the start tag and we never transfer from the stop tag self.transitions.data[tag_to_ix[START_TAG], :] = -10000 self.transitions.data[:, tag_to_ix[STOP_TAG]] = -10000 self.hidden = self.init_hidden() def init_hidden(self): return (autograd.Variable(torch.randn(2, 1, self.hidden_dim // 2)), autograd.Variable(torch.randn(2, 1, self.hidden_dim // 2))) def _forward_alg(self, feats): # Do the forward algorithm to compute the partition function init_alphas = torch.Tensor(1, self.tagset_size).fill_(-10000.) # START_TAG has all of the score. init_alphas[0][self.tag_to_ix[START_TAG]] = 0. # Wrap in a variable so that we will get automatic backprop forward_var = autograd.Variable(init_alphas) # Iterate through the sentence for feat in feats: alphas_t = [] # The forward variables at this timestep for next_tag in range(self.tagset_size): # broadcast the emission score: it is the same regardless of # the previous tag emit_score = feat[next_tag].view( 1, -1).expand(1, self.tagset_size) # the ith entry of trans_score is the score of transitioning to # next_tag from i trans_score = self.transitions[next_tag].view(1, -1) # The ith entry of next_tag_var is the value for the # edge (i -> next_tag) before we do log-sum-exp next_tag_var = forward_var + trans_score + emit_score # The forward variable for this tag is log-sum-exp of all the # scores. 
alphas_t.append(log_sum_exp(next_tag_var)) forward_var = torch.cat(alphas_t).view(1, -1) terminal_var = forward_var + self.transitions[self.tag_to_ix[STOP_TAG]] alpha = log_sum_exp(terminal_var) return alpha def _get_lstm_features(self, sentence): self.hidden = self.init_hidden() embeds = self.word_embeds(sentence).view(len(sentence), 1, -1) lstm_out, self.hidden = self.lstm(embeds, self.hidden) lstm_out = lstm_out.view(len(sentence), self.hidden_dim) lstm_feats = self.hidden2tag(lstm_out) return lstm_feats def _score_sentence(self, feats, tags): # Gives the score of a provided tag sequence score = autograd.Variable(torch.Tensor([0])) tags = torch.cat([torch.LongTensor([self.tag_to_ix[START_TAG]]), tags]) for i, feat in enumerate(feats): score = score + \ self.transitions[tags[i + 1], tags[i]] + feat[tags[i + 1]] score = score + self.transitions[self.tag_to_ix[STOP_TAG], tags[-1]] return score def _viterbi_decode(self, feats): backpointers = [] # Initialize the viterbi variables in log space init_vvars = torch.Tensor(1, self.tagset_size).fill_(-10000.) init_vvars[0][self.tag_to_ix[START_TAG]] = 0 # forward_var at step i holds the viterbi variables for step i-1 forward_var = autograd.Variable(init_vvars) for feat in feats: bptrs_t = [] # holds the backpointers for this step viterbivars_t = [] # holds the viterbi variables for this step for next_tag in range(self.tagset_size): # next_tag_var[i] holds the viterbi variable for tag i at the # previous step, plus the score of transitioning # from tag i to next_tag. 
# We don't include the emission scores here because the max # does not depend on them (we add them in below) next_tag_var = forward_var + self.transitions[next_tag] best_tag_id = argmax(next_tag_var) bptrs_t.append(best_tag_id) viterbivars_t.append(next_tag_var[0][best_tag_id]) # Now add in the emission scores, and assign forward_var to the set # of viterbi variables we just computed forward_var = (torch.cat(viterbivars_t) + feat).view(1, -1) backpointers.append(bptrs_t) # Transition to STOP_TAG terminal_var = forward_var + self.transitions[self.tag_to_ix[STOP_TAG]] best_tag_id = argmax(terminal_var) path_score = terminal_var[0][best_tag_id] # Follow the back pointers to decode the best path. best_path = [best_tag_id] for bptrs_t in reversed(backpointers): best_tag_id = bptrs_t[best_tag_id] best_path.append(best_tag_id) # Pop off the start tag (we dont want to return that to the caller) start = best_path.pop() assert start == self.tag_to_ix[START_TAG] # Sanity check best_path.reverse() return path_score, best_path def neg_log_likelihood(self, sentence, tags): feats = self._get_lstm_features(sentence) forward_score = self._forward_alg(feats) gold_score = self._score_sentence(feats, tags) return forward_score - gold_score def forward(self, sentence): # dont confuse this with _forward_alg above. # Get the emission scores from the BiLSTM lstm_feats = self._get_lstm_features(sentence) # Find the best path, given the features. 
score, tag_seq = self._viterbi_decode(lstm_feats) return score, tag_seq ``` Run training ``` START_TAG = "<START>" STOP_TAG = "<STOP>" EMBEDDING_DIM = 5 HIDDEN_DIM = 4 # Make up some training data training_data = [( "the wall street journal reported today that apple corporation made money".split(), "B I I I O O O B I O O".split() ), ( "georgia tech is a university in georgia".split(), "B I O O O O B".split() )] word_to_ix = {} for sentence, tags in training_data: for word in sentence: if word not in word_to_ix: word_to_ix[word] = len(word_to_ix) tag_to_ix = {"B": 0, "I": 1, "O": 2, START_TAG: 3, STOP_TAG: 4} model = BiLSTM_CRF(len(word_to_ix), tag_to_ix, EMBEDDING_DIM, HIDDEN_DIM) optimizer = optim.SGD(model.parameters(), lr=0.01, weight_decay=1e-4) # Check predictions before training precheck_sent = prepare_sequence(training_data[0][0], word_to_ix) precheck_tags = torch.LongTensor([tag_to_ix[t] for t in training_data[0][1]]) print(model(precheck_sent)) # Make sure prepare_sequence from earlier in the LSTM section is loaded for epoch in range( 300): # again, normally you would NOT do 300 epochs, it is toy data for sentence, tags in training_data: # Step 1. Remember that Pytorch accumulates gradients. # We need to clear them out before each instance model.zero_grad() # Step 2. Get our inputs ready for the network, that is, # turn them into Variables of word indices. sentence_in = prepare_sequence(sentence, word_to_ix) targets = torch.LongTensor([tag_to_ix[t] for t in tags]) # Step 3. Run our forward pass. neg_log_likelihood = model.neg_log_likelihood(sentence_in, targets) # Step 4. Compute the loss, gradients, and update the parameters by # calling optimizer.step() neg_log_likelihood.backward() optimizer.step() # Check predictions after training precheck_sent = prepare_sequence(training_data[0][0], word_to_ix) print(model(precheck_sent)) # We got it! 
``` Exercise: A new loss function for discriminative tagging -------------------------------------------------------- It wasn't really necessary for us to create a computation graph when doing decoding, since we do not backpropagate from the viterbi path score. Since we have it anyway, try training the tagger where the loss function is the difference between the Viterbi path score and the score of the gold-standard path. It should be clear that this function is non-negative and 0 when the predicted tag sequence is the correct tag sequence. This is essentially *structured perceptron*. This modification should be short, since Viterbi and score\_sentence are already implemented. This is an example of the shape of the computation graph *depending on the training instance*. Although I haven't tried implementing this in a static toolkit, I imagine that it is possible but much less straightforward. Pick up some real data and do a comparison!
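On a toy scale, the proposed loss can be sketched with plain numpy. Here brute-force search over all tag sequences stands in for Viterbi decoding (the two agree, and brute force is fine at toy sizes), and `emissions`/`transitions` are hypothetical score matrices following the same convention as the model above (`transitions[i, j]` is the score of moving *to* tag `i` *from* tag `j`):

```python
import numpy as np
from itertools import product

def path_score(emissions, transitions, tags):
    # emissions: (T, K) per-step tag scores; transitions[i, j]: score of j -> i
    s = emissions[0, tags[0]]
    for t in range(1, len(tags)):
        s += transitions[tags[t], tags[t - 1]] + emissions[t, tags[t]]
    return s

def best_path(emissions, transitions):
    # Brute-force decoding: enumerate every tag sequence (toy sizes only)
    T, K = emissions.shape
    return max(product(range(K), repeat=T),
               key=lambda p: path_score(emissions, transitions, p))

def structured_perceptron_loss(emissions, transitions, gold_tags):
    # Loss = score(best path) - score(gold path); non-negative by construction,
    # and exactly zero when the decoded path is the gold path
    best = best_path(emissions, transitions)
    return (path_score(emissions, transitions, best)
            - path_score(emissions, transitions, gold_tags))
```

In the model above, the same idea amounts to replacing `_forward_alg` with `_viterbi_decode` inside `neg_log_likelihood`; since the max in Viterbi is just a selection, the resulting score difference is still differentiable with respect to the emission and transition parameters.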
# Network Visualization (PyTorch) In this notebook we will explore the use of *image gradients* for generating new images. When training a model, we define a loss function which measures our current unhappiness with the model's performance; we then use backpropagation to compute the gradient of the loss with respect to the model parameters, and perform gradient descent on the model parameters to minimize the loss. Here we will do something slightly different. We will start from a convolutional neural network model which has been pretrained to perform image classification on the ImageNet dataset. We will use this model to define a loss function which quantifies our current unhappiness with our image, then use backpropagation to compute the gradient of this loss with respect to the pixels of the image. We will then keep the model fixed, and perform gradient descent *on the image* to synthesize a new image which minimizes the loss. In this notebook we will explore three techniques for image generation: 1. **Saliency Maps**: Saliency maps are a quick way to tell which part of the image influenced the classification decision made by the network. 2. **Fooling Images**: We can perturb an input image so that it appears the same to humans, but will be misclassified by the pretrained network. 3. **Class Visualization**: We can synthesize an image to maximize the classification score of a particular class; this can give us some sense of what the network is looking for when it classifies images of that class. 
This notebook uses **PyTorch**. ``` import torch import torchvision import torchvision.transforms as T import random import numpy as np from scipy.ndimage import gaussian_filter1d import matplotlib.pyplot as plt from deeplearning.image_utils import SQUEEZENET_MEAN, SQUEEZENET_STD from PIL import Image from deeplearning.network_visualization import compute_saliency_maps, make_fooling_image, update_class_visulization %matplotlib inline plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots plt.rcParams['image.interpolation'] = 'nearest' plt.rcParams['image.cmap'] = 'gray' %load_ext autoreload %autoreload 2 ``` ### Helper Functions Our pretrained model was trained on images that had been preprocessed by subtracting the per-color mean and dividing by the per-color standard deviation. We define a few helper functions for performing and undoing this preprocessing. You don't need to do anything in this cell. ``` def preprocess(img, size=224): transform = T.Compose([ T.Resize(size), T.ToTensor(), T.Normalize(mean=SQUEEZENET_MEAN.tolist(), std=SQUEEZENET_STD.tolist()), T.Lambda(lambda x: x[None]), ]) return transform(img) def deprocess(img, should_rescale=True): transform = T.Compose([ T.Lambda(lambda x: x[0]), T.Normalize(mean=[0, 0, 0], std=(1.0 / SQUEEZENET_STD).tolist()), T.Normalize(mean=(-SQUEEZENET_MEAN).tolist(), std=[1, 1, 1]), T.Lambda(rescale) if should_rescale else T.Lambda(lambda x: x), T.ToPILImage(), ]) return transform(img) def rescale(x): low, high = x.min(), x.max() x_rescaled = (x - low) / (high - low) return x_rescaled def blur_image(X, sigma=1): X_np = X.cpu().clone().numpy() X_np = gaussian_filter1d(X_np, sigma, axis=2) X_np = gaussian_filter1d(X_np, sigma, axis=3) X.copy_(torch.Tensor(X_np).type_as(X)) return X def rel_error(x, y): return torch.max(torch.abs(x - y) / (torch.maximum(torch.tensor(1e-8), torch.abs(x) + torch.abs(y)))) ``` # Pretrained Model For all of our image generation experiments, we will start with a
convolutional neural network which was pretrained to perform image classification on ImageNet. We can use any model here, but for the purposes of this assignment we will use SqueezeNet [1], which achieves accuracies comparable to AlexNet but with a significantly reduced parameter count and computational complexity. Using SqueezeNet rather than AlexNet or VGG or ResNet means that we can easily perform all image generation experiments on CPU. [1] Iandola et al, "SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and < 0.5MB model size", arXiv 2016 ``` # Download and load the pretrained SqueezeNet model. # model = torchvision.models.squeezenet1_1(pretrained=True) model = torch.load('squeezenet1_1.pt') # We don't want to train the model, so tell PyTorch not to compute gradients # with respect to model parameters. for param in model.parameters(): param.requires_grad = False reference_data = torch.load('network_visualization_check.pt') ``` ## Load some ImageNet images We have provided a few example images from the validation set of the ImageNet ILSVRC 2012 Classification dataset. To download these images, change to `deeplearning/datasets/` and run `get_imagenet_val.sh`. Since they come from the validation set, our pretrained model did not see these images during training. Run the following cell to visualize some of these images, along with their ground-truth labels. ``` from deeplearning.data_utils import load_imagenet_val X, y, class_names = load_imagenet_val(num=5) plt.figure(figsize=(12, 6)) for i in range(5): plt.subplot(1, 5, i + 1) plt.imshow(X[i]) plt.title(class_names[y[i]]) plt.axis('off') plt.gcf().tight_layout() ``` # Saliency Maps Using this pretrained model, we will compute class saliency maps as described in Section 3.1 of [2]. A **saliency map** tells us the degree to which each pixel in the image affects the classification score for that image. 
To compute it, we compute the gradient of the unnormalized score corresponding to the correct class (which is a scalar) with respect to the pixels of the image. If the image has shape `(3, H, W)` then this gradient will also have shape `(3, H, W)`; for each pixel in the image, this gradient tells us the amount by which the classification score will change if the pixel changes by a small amount. To compute the saliency map, we take the absolute value of this gradient, then take the maximum value over the 3 input channels; the final saliency map thus has shape `(H, W)` and all entries are nonnegative. [2] Karen Simonyan, Andrea Vedaldi, and Andrew Zisserman. "Deep Inside Convolutional Networks: Visualising Image Classification Models and Saliency Maps", ICLR Workshop 2014. ### Hint: PyTorch `gather` method Recall in Assignment 1 you needed to select one element from each row of a matrix; if `s` is a numpy array of shape `(N, C)` and `y` is a numpy array of shape `(N,)` containing integers `0 <= y[i] < C`, then `s[np.arange(N), y]` is a numpy array of shape `(N,)` which selects one element from each row of `s` using the indices in `y`. In PyTorch you can perform the same operation using the `gather()` method. If `s` is a PyTorch Tensor or Variable of shape `(N, C)` and `y` is a PyTorch Tensor or Variable of shape `(N,)` containing longs in the range `0 <= y[i] < C`, then `s.gather(1, y.view(-1, 1)).squeeze()` will be a PyTorch Tensor (or Variable) of shape `(N,)` containing one entry from each row of `s`, selected according to the indices in `y`. Run the following cell to see an example. You can also read the documentation for [the gather method](http://pytorch.org/docs/torch.html#torch.gather) and [the squeeze method](http://pytorch.org/docs/torch.html#torch.squeeze).
``` # Example of using gather to select one entry from each row in PyTorch def gather_example(): N, C = 4, 5 s = torch.randn(N, C) y = torch.LongTensor([1, 2, 1, 3]) print(s) print(y) print(s.gather(1, y.view(-1, 1)).squeeze()) gather_example() ``` **Now implement the saliency map in the `compute_saliency_maps` function in the file `deeplearning/network_visualization.py`.** Once you have tested the implementation, run the following to visualize some class saliency maps on our example images from the ImageNet validation set: ``` def show_saliency_maps(X, y): # Convert X and y from numpy arrays to Torch Tensors X_tensor = torch.cat([preprocess(Image.fromarray(x)) for x in X], dim=0) y_tensor = torch.LongTensor(y) # Compute saliency maps for images in X saliency = compute_saliency_maps(X_tensor, y_tensor, model) # Convert the saliency map from Torch Tensor to numpy array and show images # and saliency maps together. saliency = saliency.numpy() N = X.shape[0] for i in range(N): plt.subplot(2, N, i + 1) plt.imshow(X[i]) plt.axis('off') plt.title(class_names[y[i]]) plt.subplot(2, N, N + i + 1) plt.imshow(saliency[i], cmap=plt.cm.hot) plt.axis('off') plt.gcf().set_size_inches(12, 5) plt.show() show_saliency_maps(X, y) ``` # Fooling Images We can also use image gradients to generate "fooling images" as discussed in [3]. Given an image and a target class, we can perform gradient **ascent** over the image to maximize the target class, stopping when the network classifies the image as the target class. 
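The gradient-ascent loop just described can be sketched directly. This is a hypothetical helper, not the `make_fooling_image` reference solution; it assumes `model` maps a batch of inputs to raw class scores:

```python
import torch

def fooling_sketch(X, target_y, model, lr=1.0, max_iter=100):
    # Repeatedly push the target class score up, stopping as soon as
    # the model's prediction flips to target_y
    X_fool = X.clone().requires_grad_(True)
    for _ in range(max_iter):
        scores = model(X_fool)
        if int(scores.argmax(dim=1)[0]) == target_y:
            break
        scores[0, target_y].backward()
        with torch.no_grad():
            g = X_fool.grad
            X_fool += lr * g / g.norm()  # normalized ascent step
            X_fool.grad.zero_()
    return X_fool.detach()
```

Normalizing the gradient keeps the step size under control regardless of the score scale; the early `break` is what makes the perturbation small enough to be imperceptible.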
**Implement the fooling image generation in the `make_fooling_image` function in the file `deeplearning/network_visualization.py`.** [3] Szegedy et al, "Intriguing properties of neural networks", ICLR 2014 Run the following cell to generate a fooling image: ``` target_y = reference_data['fooling_input_y'] X_tensor = reference_data['fooling_input_X'] X_fooling = make_fooling_image(X_tensor, target_y, model) scores = model(X_fooling) assert target_y == scores.data.max(dim=1)[1][0], 'The model is not fooled!' ``` After generating a fooling image, run the following cell to visualize the original image, the fooling image, as well as the difference between them. ``` idx = 0 target_y = 6 X_fooling_np = deprocess(X_fooling.clone()) X_fooling_np = np.asarray(X_fooling_np).astype(np.uint8) plt.subplot(1, 4, 1) plt.imshow(X[idx]) plt.title(class_names[y[idx]]) plt.axis('off') plt.subplot(1, 4, 2) plt.imshow(X_fooling_np) plt.title(class_names[target_y]) plt.axis('off') plt.subplot(1, 4, 3) X_pre = preprocess(Image.fromarray(X[idx])) diff = np.asarray(deprocess(X_fooling - X_pre, should_rescale=False)) plt.imshow(diff) plt.title('Difference') plt.axis('off') plt.subplot(1, 4, 4) diff = np.asarray(deprocess(10 * (X_fooling - X_pre), should_rescale=False)) plt.imshow(diff) plt.title('Magnified difference (10x)') plt.axis('off') plt.gcf().set_size_inches(12, 5) plt.show() ``` # Class visualization By starting with a random noise image and performing gradient ascent on a target class, we can generate an image that the network will recognize as the target class. This idea was first presented in [2]; [3] extended this idea by suggesting several regularization techniques that can improve the quality of the generated image. Concretely, let $I$ be an image and let $y$ be a target class. Let $s_y(I)$ be the score that a convolutional network assigns to the image $I$ for class $y$; note that these are raw unnormalized scores, not class probabilities. 
We wish to generate an image $I^*$ that achieves a high score for the class $y$ by solving the problem $$ I^* = \arg\max_I s_y(I) - R(I) $$ where $R$ is a (possibly implicit) regularizer (note the sign of $R(I)$ in the argmax: we want to minimize this regularization term). We can solve this optimization problem using gradient ascent, computing gradients with respect to the generated image. We will use (explicit) L2 regularization of the form $$ R(I) = \lambda \|I\|_2^2 $$ **and** implicit regularization as suggested by [3] by periodically blurring the generated image. **Complete the implementation of the `update_class_visulization` function in the file `deeplearning/network_visualization.py`.** [2] Karen Simonyan, Andrea Vedaldi, and Andrew Zisserman. "Deep Inside Convolutional Networks: Visualising Image Classification Models and Saliency Maps", ICLR Workshop 2014. [3] Yosinski et al, "Understanding Neural Networks Through Deep Visualization", ICML 2015 Deep Learning Workshop Once you have tested the implementation, run the following cell to generate an image of a Tarantula: ``` def jitter(X, ox, oy): """ Helper function to randomly jitter an image. Inputs - X: PyTorch Tensor of shape (N, C, H, W) - ox, oy: Integers giving number of pixels to jitter along W and H axes Returns: A new PyTorch Tensor of shape (N, C, H, W) """ if ox != 0: left = X[:, :, :, :-ox] right = X[:, :, :, -ox:] X = torch.cat([right, left], dim=3) if oy != 0: top = X[:, :, :-oy] bottom = X[:, :, -oy:] X = torch.cat([bottom, top], dim=2) return X def create_class_visualization(target_y, model, **kwargs): """ Generate an image to maximize the score of target_y under a pretrained model.
Inputs: - target_y: Integer in the range [0, 1000) giving the index of the class - model: A pretrained CNN that will be used to generate the image Keyword arguments: - l2_reg: Strength of L2 regularization on the image - learning_rate: How big of a step to take - num_iterations: How many iterations to use - blur_every: How often to blur the image as an implicit regularizer - max_jitter: How much to jitter the image as an implicit regularizer - show_every: How often to show the intermediate result """ l2_reg = kwargs.pop('l2_reg', 1e-3) learning_rate = kwargs.pop('learning_rate', 25) num_iterations = kwargs.pop('num_iterations', 100) blur_every = kwargs.pop('blur_every', 10) max_jitter = kwargs.pop('max_jitter', 16) show_every = kwargs.pop('show_every', 25) # Randomly initialize the image as a PyTorch Tensor img = torch.randn(1, 3, 224, 224).mul_(1.0) for t in range(num_iterations): # Randomly jitter the image a bit; this gives slightly nicer results ox, oy = random.randint(0, max_jitter), random.randint(0, max_jitter) img.copy_(jitter(img, ox, oy)) img.copy_(update_class_visulization(model, target_y, l2_reg, learning_rate, img).data) # Undo the random jitter img.copy_(jitter(img, -ox, -oy)) # As regularizer, clamp and periodically blur the image for c in range(3): lo = float(-SQUEEZENET_MEAN[c] / SQUEEZENET_STD[c]) hi = float((1.0 - SQUEEZENET_MEAN[c]) / SQUEEZENET_STD[c]) img[:, c].clamp_(min=lo, max=hi) if t % blur_every == 0: blur_image(img, sigma=0.5) # Periodically show the image if t == 0 or (t + 1) % show_every == 0 or t == num_iterations - 1: plt.imshow(deprocess(img.clone().cpu())) class_name = class_names[target_y] plt.title('%s\nIteration %d / %d' % (class_name, t + 1, num_iterations)) plt.gcf().set_size_inches(4, 4) plt.axis('off') plt.show() return deprocess(img.cpu()) target_y = 76 # Tarantula # target_y = 78 # Tick # target_y = 187 # Yorkshire Terrier # target_y = 683 # Oboe # target_y = 366 # Gorilla # target_y = 604 # Hourglass out =
create_class_visualization(target_y, model) ``` Try out your class visualization on other classes! You should also feel free to play with various hyperparameters to try and improve the quality of the generated image, but this is not required. ``` # target_y = 78 # Tick # target_y = 187 # Yorkshire Terrier # target_y = 683 # Oboe # target_y = 366 # Gorilla # target_y = 604 # Hourglass target_y = np.random.randint(1000) print(class_names[target_y]) X = create_class_visualization(target_y, model) ```
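The per-iteration update that `update_class_visulization` is responsible for — one gradient-ascent step on $s_y(I) - \lambda \|I\|_2^2$ — can be sketched as follows. This is a hypothetical stand-in, not the assignment's reference solution:

```python
import torch

def class_vis_step_sketch(model, target_y, l2_reg, lr, img):
    # One gradient-ascent step on score(target_y) - l2_reg * ||img||_2^2
    img = img.detach().clone().requires_grad_(True)
    objective = model(img)[0, target_y] - l2_reg * (img ** 2).sum()
    objective.backward()
    with torch.no_grad():
        img += lr * img.grad  # ascent: move *up* the gradient
    return img.detach()
```

Note the sign: because we maximize, the image moves in the direction of the gradient, whereas ordinary training moves parameters against it.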
## Sparse logistic regression $\newcommand{\n}[1]{\left\|#1 \right\|}$ $\newcommand{\R}{\mathbb R} $ $\newcommand{\N}{\mathbb N} $ $\newcommand{\Z}{\mathbb Z} $ $\newcommand{\lr}[1]{\left\langle #1\right\rangle}$ We want to minimize $$\min_x J(x) := \sum_{i=1}^m \log\bigl(1+\exp (-b_i\lr{a_i, x})\bigr) + \gamma \n{x}_1$$ where $(a_i, b_i)\in \R^n\times \{-1,1\}$ is the training set and $\gamma >0$. We can rewrite the objective as $J(x) = \tilde f(Kx)+g(x)$, where $$\tilde f(y)=\sum_{i=1}^{m} \log (1+\exp(y_i)), \quad K = -b*A \in \R^{m\times n}, \quad g(x) = \gamma \n{x}_1$$ ``` import numpy as np import scipy.linalg as LA import scipy.sparse as spr import scipy.sparse.linalg as spr_LA import matplotlib.pyplot as plt from time import perf_counter from sklearn import datasets filename = "data/a9a" #filename = "data/real-sim.bz2" #filename = "data/rcv1_train.binary.bz2" #filename = "data/kdda.t.bz2" A, b = datasets.load_svmlight_file(filename) m, n = A.shape print("The dataset {}. The dimensions: m={}, n={}".format(filename[5:], m, n)) # define all ingredients for sparse logistic regression gamma = 0.005 * LA.norm(A.T.dot(b), np.inf) K = (A.T.multiply(-b)).T.tocsr() # find the norm of K^T K L = spr_LA.svds(K, k=1, return_singular_vectors=False)**2 # starting point x0 = np.zeros(n) # stepsize ss = 4/L g = lambda x: gamma*LA.norm(x,1) prox_g = lambda x, rho: x + np.clip(-x, -rho*gamma, rho*gamma) f = lambda x: np.log(1. + np.exp(x)).sum() def df(x): exp_x = np.exp(x) return exp_x/(1.+exp_x) dh = lambda x, Kx: K.T.dot(df(Kx)) # residual res = lambda x: LA.norm(x-prox_g(x-dh(x,K.dot(x)), 1)) # energy J = lambda x, Kx: f(Kx)+g(x) ### Algorithms def prox_grad(x1, s=1, numb_iter=100): """ Implementation of the proximal gradient method.
x1: array, a starting point s: positive number, a stepsize numb_iter: positive integer, number of iterations Returns an array of energy values, computed in each iteration, and the argument x_k after numb_iter iterations """ begin = perf_counter() x = x1.copy() Kx = K.dot(x) values = [J(x, Kx)] dhx = dh(x,Kx) for i in range(numb_iter): #x = prox_g(x - s * dh(x, Kx), s) x = prox_g(x - s * dhx, s) Kx = K.dot(x) dhx = dh(x,Kx) values.append(J(x, Kx)) end = perf_counter() print("Time execution of prox-grad:", end - begin) return np.array(values), x def fista(x1, s=1, numb_iter=100): """ Implementation of the FISTA. x1: array, a starting point s: positive number, a stepsize numb_iter: positive integer, number of iterations Returns an array of energy values, computed in each iteration, and the argument x_k after numb_iter iterations """ begin = perf_counter() x, y = x1.copy(), x1.copy() t = 1. Ky = K.dot(y) values = [J(y,Ky)] for i in range(numb_iter): x1 = prox_g(y - s * dh(y, Ky), s) t1 = 0.5 * (1 + np.sqrt(1 + 4 * t**2)) y = x1 + (t - 1) / t1 * (x1 - x) x, t = x1, t1 Ky = K.dot(y) values.append(J(y, Ky)) end = perf_counter() print("Time execution of FISTA:", end - begin) return np.array(values), x def adaptive_graal(x1, numb_iter=100): """ Implementation of the adaptive GRAAL. x1: array, a starting point numb_iter: positive integer, number of iterations Returns an array of energy values, computed in each iteration, and the argument x_k after numb_iter iterations """ begin = perf_counter() phi = 1.5 x, x_ = x1.copy(), x1.copy() x0 = x + np.random.randn(x.shape[0]) * 1e-9 Kx = K.dot(x) dhx = dh(x, Kx) la = phi / 2 * LA.norm(x - x0) / LA.norm(dhx - dh(x0, K.dot(x0))) rho = 1. / phi + 1. 
/ phi**2 values = [J(x, Kx)] th = 1 for i in range(numb_iter): x1 = prox_g(x_ - la * dhx, la) Kx1 = K.dot(x1) dhx1 = dh(x1, Kx1) n1 = LA.norm(x1 - x)**2 n2 = LA.norm(dhx1 - dhx)**2 n1_div_n2 = n1/n2 if n2 != 0 else la*10 la1 = min(rho * la, 0.25 * phi * th / la * (n1_div_n2)) x_ = ((phi - 1) * x1 + x_) / phi th = phi * la1 / la x, la, dhx = x1, la1, dhx1 values.append(J(x1, Kx1)) end = perf_counter() print("Time execution of aGRAAL:", end - begin) return values, x, x_ ``` Run the algorithms. It might take some time if the dataset and/or the number of iterations are large ``` N = 10000 ans1 = prox_grad(x0, ss, numb_iter=N) ans2 = fista(x0, ss, numb_iter=N) ans3 = adaptive_graal(x0, numb_iter=N) x1, x2, x3 = ans1[1], ans2[1], ans3[1] print("Residuals:", [res(x) for x in [x1, x2, x3]]) ``` Plot the results ``` values = [ans1[0], ans2[0], ans3[0]] labels = ["PGM", "FISTA", "aGRAAL"] linestyles = [':', "--", "-"] colors = ['b', 'g', '#FFD700'] v_min = min([min(v) for v in values]) plt.figure(figsize=(6,4)) for i,v in enumerate(values): plt.plot(v - v_min, color=colors[i], label=labels[i], linestyle=linestyles[i]) plt.yscale('log') plt.xlabel(u'iterations, k') plt.ylabel('$J(x^k)-J_{_*}$') plt.legend() #plt.savefig('figures/a9a.pdf', bbox_inches='tight') plt.show() plt.clf() np.max(spr_LA.eigsh(K.T.dot(K))[0]) L ```
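The `prox_g` lambda above encodes the proximal operator of $\gamma\n{\cdot}_1$ via a clipping trick; it agrees with the textbook soft-thresholding formula, which can be checked in isolation:

```python
import numpy as np

def soft_threshold(x, t):
    # Closed-form prox of t*||.||_1: shrink each entry toward zero by t
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

# The clip formulation used in the notebook: x + clip(-x, -t, t)
x = np.array([3.0, -0.5, 0.2, -4.0])
t = 1.0
```

Entries with magnitude below the threshold are set exactly to zero — this is what produces the sparsity in the solution.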
# Files ## Reading a text file and printing its contents ``` # We open the file lesen.txt for reading ("r") and store its contents in the variable file file = open("lesen.txt", "r") # We go through all lines one after the other # The txt file contains invisible newline characters that mark the end of each line for line in file: # Print a line without the trailing newline print(line.strip()) file.close() file = open("gemischtertext.txt", "r") ``` ## Writing to a text file ``` # We open a file for writing ("w": write) file = open("schreiben.txt", "w") students = ["Max", "Monika", "hgcvhgv", "Erik", "Franziska"] # We loop through the list students with a for loop for student in students: # With the write method we write the current string student plus a newline into the file object file.write(student + "\n") # Finally we have to close the file again file.close() ``` ## Opening files with with If we open files with a with statement, we no longer need to close them explicitly with the close() method. ``` with open("lesen.txt", "r") as file: for line in file: print(line) ``` ## Reading a CSV file csv stands for comma separated values. We can also read such csv files with Python. ``` with open("datei.csv") as file: for line in file: data = line.strip().split(";") print(data[0] + ": " + data[1]) ``` ## Reading a CSV file (and skipping records) In this lesson you will learn: - How to read a CSV file and skip rows.
``` with open("datei.csv") as file: for line in file: data = line.strip().split(";") if int(data[1]) < 2000000: continue if data[2] == "BUD": continue print(data) #if data[2] == "BER" or data[2] == "BUD": # print(data[2]) # print(data) ``` ## Exercise - Get the file https://data.stadt-zuerich.ch/dataset/pd-stapo-hundenamen/resource/8bf2127d-c354-4834-8590-9666cbd6e160 - You can also find it in this folder as 20151001_hundenamen.csv - Find out how often the dog name "Aaron" was used between 2000 and 2012. ``` n = "1975" print(int(n) < 1990) jahre = ["Year", "1990", "1992"] for jahr in jahre: if jahr == "Year": continue print(int(jahr)) ### Your code here anzahl = 0 with open("20151001_hundenamen.csv", "r") as file: for line in file: data = line.strip().split(",") if data[1] == "GEBURTSJAHR_HUND": continue if data[0] == '"Aaron"' and int(data[1]) >= 2000 and int(data[1]) < 2013: anzahl = anzahl + 1 print(anzahl) ```
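As an aside, the manual `split(";")` parsing used above can also be done with Python's built-in `csv` module, which additionally handles quoted fields. A small self-contained sketch with hypothetical inline data standing in for `datei.csv`:

```python
import csv
import io

# A hypothetical semicolon-separated file, inlined so the example is self-contained
data = io.StringIO("Berlin;3600000;BER\nBudapest;1750000;BUD\n")
reader = csv.reader(data, delimiter=";")
rows = list(reader)
for city, population, code in rows:
    print(city + ": " + population)
```

With a real file you would pass the opened file object to `csv.reader` instead of the `StringIO` buffer.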
``` from functools import partial import torch import numpy as np import matplotlib.pyplot as plt from scipy.optimize import minimize from src.utils import generate_y from src.MLP import MLP ``` ## Tutorial for `Type2` problem Problems of `type 1` are common. However, without constraints on the optimization variable $x$, the optimal solution can be trivial. As a concrete example, let's consider the following optimization problem: $$ \max_{x} x^2 $$ We can visually inspect the solution of the above optimization problem in the following figure. ``` xs = np.linspace(-10.0, 10.0, 1000) ys = xs ** 2 fig, ax = plt.subplots(1,1) ax.grid() _ = ax.plot(xs, ys) ``` As you can see, as we push the solution $x$ toward $\infty$ or $-\infty$, the objective $x^2$ increases without limit. We say that such a problem is *unbounded*, and we easily run into such cases in practice. To make the optimization problem practically useful, we can add constraints to it. We first consider the following type of constrained optimization problem. ### Solving an optimization problem with box constraints $$ \begin{aligned} \min_{x} &\, f(x) \\ \text{s.t.} &\, x_{min} \leq x \leq x_{max} \\ \end{aligned} $$ This type of optimization problem handles a box constraint on the optimization variable $x$. Here $\text{s.t.}$ is an abbreviation of '**s**uch **t**hat'. #### Box constraint $$x_{min} \leq x \leq x_{max}$$ The box constraint requires that the solution $x^*$ of the optimization problem be greater than or equal to $x_{min}$ and less than or equal to $x_{max}$. A box constraint is a special case of a general constraint. Mathematically, both box constraints and general linear/nonlinear constraints can be handled in the same manner.
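One reason box constraints are so simple is that feasibility can always be restored by coordinate-wise clipping — projecting a candidate solution onto the box. A minimal sketch:

```python
import numpy as np

def project_box(x, x_min, x_max):
    # Euclidean projection onto [x_min, x_max]: clip each coordinate independently
    return np.clip(x, x_min, x_max)
```

General linear or nonlinear constraints have no such one-line projection, which is why solvers handle them with different machinery.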
However, most off-the-shelf solvers explicitly treat the box constraints as a separate argument from the other constraints. > For instance, `scipy` solvers accept the box constraints via `bounds`. For now, let's investigate the box constraint first and reserve some room for the general constraint. ## The `scipy.optimize` package `scipy.optimize` is a package that implements various types of optimization solvers. Most importantly, it offers a nice Python interface to the solvers, so that you can set up your own optimization problem in a few lines of code. We will also solve the optimization problems with a variant of the QP solvers from this package. In particular, the `scipy.optimize.minimize` function is powerful in practice: you can 'somehow magically' optimize your own function, even when your function is not analytically differentiable. > In that case, `scipy.optimize.minimize` employs numerical methods to estimate the Jacobian and Hessian. As a cost of employing numerical methods, the optimization procedure will be slower. ### Interface of `scipy.optimize.minimize` `scipy.optimize.minimize` mainly takes the following arguments: 1. `fun`: the objective function that you want to minimize 2. `x0`: the initial solution; you can set it arbitrarily as long as it doesn't violate the constraints 3. `jac`: (optional) the method for computing the Jacobian of the objective function 4. `hess`: (optional) the method for computing the Hessian of the objective function 5. `bounds`: (optional) box constraints 6. `constraints`: (optional) linear/non-linear constraints When `jac` and `hess` are not specified, `scipy.optimize.minimize` estimates the Jacobian and Hessian of the objective function numerically. ## Implementing `fun` and `jac` Our primary interest is to bind the `torch` module to `scipy.optimize.minimize`. Since PyTorch is an automatic differentiation tool, we can compute `jac` (and, if needed, `hess`) efficiently.
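Before wiring in the network, the interface can be sanity-checked on the toy problem from earlier — maximizing $x^2$ on a box — by minimizing its negation. A minimal pure-`scipy` sketch (the bound values here are hypothetical):

```python
import numpy as np
from scipy.optimize import minimize

# Maximize x**2 on [-3, 2]: with bounds supplied, the problem is no longer
# unbounded, and the maximizer sits on the boundary at x = -3
res = minimize(lambda x: -(x[0] ** 2),
               x0=np.array([-0.5]),
               bounds=[(-3.0, 2.0)],
               method="L-BFGS-B")
```

Note that `minimize` always *minimizes*, so maximization problems are expressed by negating the objective; no `jac` is passed here, so the gradient is estimated numerically.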
``` def objective(x, model): # Note that we do not use PyTorch's automatic differentiation # while computing the objective with torch.no_grad(): torch_x = torch.from_numpy(x).view(-1,1).float() y = model(torch_x) y = y.sum().numpy() return y def jac(x, model): torch_x = torch.from_numpy(x).view(-1,1).float() jac = torch.autograd.functional.jacobian(model, torch_x).numpy() return jac m = MLP(1, 1, num_neurons=[128, 128]) m.load_state_dict(torch.load('./model.pt')) lb, ub = -3.0, 2.0 # declare lower and upper bound of the optimization variable x_init = np.random.uniform(lb, ub) x0 = np.ones(1) * x_init x0_tensor = torch.ones(1,1) * x_init y0 = m(x0_tensor).detach() b = (lb, ub) bnds = (b,) soln_nn = minimize(partial(objective, model=m), x0, method='SLSQP', bounds=bnds, jac=partial(jac, model=m)) ``` ## Visualize the optimization result ``` x_min, x_max = -4.0, 4.0 xs_linspace = torch.linspace(-4, 4, 2000).view(-1, 1) ys_linspace = generate_y(xs_linspace) fig, axes = plt.subplots(1, 1, figsize=(10, 5)) axes.grid() axes.plot(xs_linspace, ys_linspace, label='Ground truth') ys_pred = m(xs_linspace).detach() axes.plot(xs_linspace, ys_pred, label='Model prediction') axes.fill_between(np.linspace(lb, ub, 100), ys_linspace.min(), ys_linspace.max(), color='grey', alpha=0.3, label='constraint region') axes.scatter(x_init, y0, label='Opt start', c='green', marker='*', s=100.0) axes.scatter(soln_nn.x, soln_nn.fun, label='NN opt', c='green') axes.legend() axes.set_xlabel("input") axes.set_ylabel("y value") plt.show() ```
# Ibis Integration (Experimental) The [Ibis project](https://ibis-project.org/docs/) tries to bridge the gap between local Python and [various backends](https://ibis-project.org/docs/backends/index.html) including distributed systems such as Spark and Dask. The main idea is to create a pythonic interface to express SQL semantics, so that the expression is agnostic to the backends. This design idea is very much aligned with Fugue, but note that there are a few key differences: * **Fugue supports both pythonic APIs and SQL**, and the choice should be determined by the particular use case or the user's preference. Ibis, on the other hand, focuses on the pythonic expression of SQL and perfects it. * **Fugue supports SQL and non-SQL semantics for data transformation.** Besides SQL, another important option is [Fugue Transform](introduction.html#fugue-transform). Fugue transformers can wrap complicated Python/Pandas logic and apply it distributedly on dataframes. A typical example is distributed model inference: the inference part has to be done in Python and can easily be achieved by a transformer, while the data preparation may be done nicely in SQL or Ibis. * **Fugue and Ibis are on different abstraction layers.** Ibis is well suited to constructing a single SQL statement to accomplish a single task. Even if it involves multiple tables and multiple steps, its final step is either outputting one table or inserting one table into a database. A Fugue workflow, on the other hand, orchestrates these tasks. For example, it can read a table, do a first transformation and save to a file, then do a second transformation and print. Each transformation may be done using Ibis, but the loading, saving, printing, and orchestration can be done by Fugue. This is also why Ibis can be a very nice option for Fugue users to build their pipelines. People who prefer pythonic APIs can keep all the logic in Python with the help of Ibis.
Although Fugue has its own functional API similar to Ibis, the programming interface of Ibis is really elegant. It usually helps users write less but more expressive code to achieve the same thing. ## Hello World In this example, we try to achieve the semantics of this SQL: ```sql SELECT a, a+1 AS b FROM (SELECT a FROM tb1 UNION SELECT a FROM tb2) ``` ``` from ibis import BaseBackend, literal import ibis.expr.types as ir def ibis_func(backend:BaseBackend) -> ir.TableExpr: tb1 = backend.table("tb1") tb2 = backend.table("tb2") tb3 = tb1.union(tb2) return tb3.mutate(b=tb3.a+literal(1)) ``` Now let's test with the pandas backend ``` import ibis import pandas as pd con = ibis.pandas.connect({ "tb1": pd.DataFrame([[0]], columns=["a"]), "tb2": pd.DataFrame([[1]], columns=["a"]) }) ibis_func(con).execute() ``` Now let's make this a part of Fugue ``` from fugue import FugueWorkflow from fugue_ibis import run_ibis dag = FugueWorkflow() df1 = dag.df([[0]], "a:long") df2 = dag.df([[1]], "a:long") df3 = run_ibis(ibis_func, tb1=df1, tb2=df2) df3.show() ``` Now let's run on Pandas ``` dag.run() ``` Now let's run on Dask ``` import fugue_dask dag.run("dask") ``` Now let's run on DuckDB ``` import fugue_duckdb dag.run("duck") ``` For each execution engine, Ibis will also run on the corresponding backend. ## A deeper integration The above approach needs a function taking in an Ibis backend and returning a `TableExpr`. The following is another approach that is simpler and more elegant. ``` from fugue_ibis import as_ibis, as_fugue dag = FugueWorkflow() tb1 = as_ibis(dag.df([[0]], "a:long")) tb2 = as_ibis(dag.df([[1]], "a:long")) tb3 = tb1.union(tb2) df3 = as_fugue(tb3.mutate(b=tb3.a+literal(1))) df3.show() dag.run() ``` Alternatively, you can treat `as_ibis` and `as_fugue` as class methods. This is more convenient to use, but it's a bit magical. This is achieved by adding these two methods to the corresponding classes using `setattr`.
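For reference, a rough plain-pandas rendering of the Hello World SQL above, bypassing Ibis and Fugue entirely (note that SQL `UNION` de-duplicates rows, which `drop_duplicates` mimics here; this is an illustrative sketch, not part of the integration):

```python
import pandas as pd

tb1 = pd.DataFrame({"a": [0]})
tb2 = pd.DataFrame({"a": [1]})

# UNION of the two tables (SQL UNION de-duplicates), then b = a + 1
tb3 = pd.concat([tb1, tb2], ignore_index=True).drop_duplicates()
result = tb3.assign(b=tb3["a"] + 1)
print(result)  # rows (0, 1) and (1, 2)
```

Keeping such logic in Ibis instead of raw pandas is what makes it portable across backends.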
This patching-like design pattern is widely used by Ibis itself. ``` import fugue_ibis # must import dag = FugueWorkflow() tb1 = dag.df([[0]], "a:long").as_ibis() tb2 = dag.df([[1]], "a:long").as_ibis() tb3 = tb1.union(tb2) df3 = tb3.mutate(b=tb3.a+literal(1)).as_fugue() df3.show() dag.run() ``` By importing `fugue_ibis`, the two methods are added automatically. It's up to users which way to go. The first approach (`run_ibis`) is the best for separating out Ibis logic; as you can see, it is also great for unit testing. The second approach is elegant, but you will have to unit test the code together with the logic before and after the conversions. The third approach is the most intuitive, but it's a bit magical. ## Z-Score Now, let's consider a practical example. We want to use Fugue to compute the z-score of a dataframe, with partitioning as an option. The reason to implement it at the Fugue level is that the computation becomes scale agnostic and framework agnostic. ``` from fugue import WorkflowDataFrame from fugue_ibis import as_ibis, as_fugue def z_score(df:WorkflowDataFrame, input_col:str, output_col:str) -> WorkflowDataFrame: by = df.partition_spec.partition_by idf = as_ibis(df) col = idf[input_col] if len(by)==0: return as_fugue(idf.mutate(**{output_col:(col - col.mean())/col.std()})) agg = idf.group_by(by).aggregate(mean_=col.mean(), std_=col.std()) j = idf.inner_join(agg, by)[idf, ((idf[input_col]-agg.mean_)/agg.std_).name(output_col)] return as_fugue(j) ``` Now, generate testing data ``` import numpy as np np.random.seed(0) pdf = pd.DataFrame(dict( a=np.random.choice(["a","b"], 100), b=np.random.choice(["c","d"], 100), c=np.random.rand(100), )) pdf["expected1"] = (pdf.c - pdf.c.mean())/pdf.c.std() pdf = pdf.groupby(["a", "b"]).apply(lambda tdf: tdf.assign(expected2=(tdf.c - tdf.c.mean())/tdf.c.std())).reset_index(drop=True) ``` And here is the final code.
``` dag = FugueWorkflow() df = z_score(dag.df(pdf), "c", "z1") df = z_score(df.partition_by("a", "b"), "c", "z2") df.show() dag.run() ``` ## Consistency issues As of 2.0.0, Ibis can behave differently on different backends. Here are some examples of common discrepancies between pandas and SQL. ``` # pandas drops null keys on group (by default), SQL doesn't dag = FugueWorkflow() df = dag.df([["a",1],[None,2]], "a:str,b:int").as_ibis() df.groupby(["a"]).aggregate(s=df.b.sum()).as_fugue().show() dag.run() dag.run("duckdb") # pandas joins on NULLs, SQL doesn't dag = FugueWorkflow() df1 = dag.df([["a",1],[None,2]], "a:str,b:int").as_ibis() df2 = dag.df([["a",1],[None,2]], "a:str,c:int").as_ibis() df1.inner_join(df2, ["a"])[df1, df2.c].as_fugue().show() dag.run() dag.run("duckdb") ``` Since the Ibis integration is experimental, we rely on Ibis to achieve consistent behaviors. If you have any Ibis-specific questions, please also consider asking in [Ibis issues](https://github.com/ibis-project/ibis/issues).
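Both discrepancies above can be reproduced in plain pandas; a minimal sketch (assuming pandas >= 1.1 for the `dropna` argument of `groupby`):

```python
import pandas as pd

df = pd.DataFrame({"a": ["a", None], "b": [1, 2]})

# pandas default: the NULL group key is dropped from the aggregation
print(df.groupby("a")["b"].sum())                # only the "a" group
# dropna=False keeps NULL keys, closer to SQL GROUP BY behavior
print(df.groupby("a", dropna=False)["b"].sum())  # both groups

left = pd.DataFrame({"a": ["a", None], "b": [1, 2]})
right = pd.DataFrame({"a": ["a", None], "c": [1, 2]})
# pandas matches NULL join keys; a SQL inner join would not
print(left.merge(right, on="a"))
```

This is why the same Ibis expression can return different rows on the pandas backend versus DuckDB.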
##### Copyright 2018 The TensorFlow Authors. ``` #@title Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. ``` # Image classification <table class="tfo-notebook-buttons" align="left"> <td> <a target="_blank" href="https://www.tensorflow.org/tutorials/images/classification"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a> </td> <td> <a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/ja/tutorials/images/classification.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a> </td> <td> <a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/ja/tutorials/images/classification.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a> </td> <td> <a href="https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/ja/tutorials/images/classification.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a> </td> </table> Note: These documents are translated by the TensorFlow community. Since community translations are **best-effort**, there is no guarantee that this translation is accurate or that it reflects the latest state of the [official English documentation](https://www.tensorflow.org/?hl=en). If you have suggestions to improve this translation, please send a pull request to the [tensorflow/docs](https://github.com/tensorflow/docs) GitHub repository. To volunteer to write or review community translations, please contact the [docs-ja@tensorflow.org mailing list](https://groups.google.com/a/tensorflow.org/forum/#!forum/docs-ja).
This tutorial demonstrates how to classify images of cats and dogs. It builds an image classifier using a `tf.keras.Sequential` model and loads the data using `tf.keras.preprocessing.image.ImageDataGenerator`. This tutorial builds practical experience and intuition around the following concepts: * Building a _data input pipeline_ using the `tf.keras.preprocessing.image.ImageDataGenerator` class to efficiently work with data on disk to use with the model. * _Overfitting_: how to identify and prevent it. * _Data augmentation_ and _dropout_: key techniques for fighting overfitting in computer vision tasks, which we incorporate into the data pipeline and the image classifier model. This tutorial follows a basic machine learning workflow: 1. Examine and understand the data 2. Build an input pipeline 3. Build the model 4. Train the model 5. Test the model 6. Improve the model and repeat the process ## Import packages Let's start by importing the required packages. The `os` package is used to read files and directory structure, NumPy is used to convert python lists into numpy arrays and to perform required matrix operations, and `matplotlib.pyplot` is used to plot graphs and to display images from the training and validation data. Import TensorFlow and the Keras classes needed to build the model. ``` import tensorflow as tf from tensorflow.keras.models import Sequential from tensorflow.keras.layers import Dense, Conv2D, Flatten, Dropout, MaxPooling2D from tensorflow.keras.preprocessing.image import ImageDataGenerator import os import numpy as np import matplotlib.pyplot as plt ``` ## Load data Begin by downloading the dataset. This tutorial uses a filtered version of Kaggle's <a href="https://www.kaggle.com/c/dogs-vs-cats/data" target="_blank">Dogs vs Cats</a> dataset. Download the archived version of the dataset and store it in the "/tmp/" directory. ``` _URL = 'https://storage.googleapis.com/mledu-datasets/cats_and_dogs_filtered.zip' path_to_zip = tf.keras.utils.get_file('cats_and_dogs.zip', origin=_URL, extract=True) PATH = os.path.join(os.path.dirname(path_to_zip), 'cats_and_dogs_filtered') ``` The dataset has the following directory structure: <pre> <b>cats_and_dogs_filtered</b> |__ <b>train</b> |______ <b>cats</b>: [cat.0.jpg, cat.1.jpg, cat.2.jpg ....] |______ <b>dogs</b>: [dog.0.jpg, dog.1.jpg, dog.2.jpg ...] |__ <b>validation</b> |______ <b>cats</b>: [cat.2000.jpg, cat.2001.jpg, cat.2002.jpg ....] |______ <b>dogs</b>: [dog.2000.jpg, dog.2001.jpg, dog.2002.jpg ...]
</pre> After extracting the dataset's contents, set up variables with the proper file paths for the training and validation sets. ``` train_dir = os.path.join(PATH, 'train') validation_dir = os.path.join(PATH, 'validation') train_cats_dir = os.path.join(train_dir, 'cats') # directory with training cat pictures train_dogs_dir = os.path.join(train_dir, 'dogs') # directory with training dog pictures validation_cats_dir = os.path.join(validation_dir, 'cats') # directory with validation cat pictures validation_dogs_dir = os.path.join(validation_dir, 'dogs') # directory with validation dog pictures ``` ### Understand the data Let's look at how many cat and dog images are in the training and validation directories: ``` num_cats_tr = len(os.listdir(train_cats_dir)) num_dogs_tr = len(os.listdir(train_dogs_dir)) num_cats_val = len(os.listdir(validation_cats_dir)) num_dogs_val = len(os.listdir(validation_dogs_dir)) total_train = num_cats_tr + num_dogs_tr total_val = num_cats_val + num_dogs_val print('total training cat images:', num_cats_tr) print('total training dog images:', num_dogs_tr) print('total validation cat images:', num_cats_val) print('total validation dog images:', num_dogs_val) print("--") print("Total training images:", total_train) print("Total validation images:", total_val) ``` For convenience, set up variables to use while preprocessing the dataset and training the network. ``` batch_size = 128 epochs = 15 IMG_HEIGHT = 150 IMG_WIDTH = 150 ``` ## Prepare the data Before feeding them to the model, format the images into appropriately preprocessed floating-point tensors: 1. Read the images from disk. 2. Decode the contents of these images and convert them into the proper grid format according to their RGB values. 3. Convert them into floating-point tensors. 4. Rescale the tensors from values between 0 and 255 to values between 0 and 1, as neural networks prefer to deal with small input values. Fortunately, all these tasks can be done with the `ImageDataGenerator` class provided by `tf.keras`. This `ImageDataGenerator` reads images from disk and preprocesses them into proper tensors. It also sets up generators that convert these images into batches of tensors, which is helpful when training the network. ``` train_image_generator = ImageDataGenerator(rescale=1./255) # generator for training data validation_image_generator = ImageDataGenerator(rescale=1./255) # generator for validation data ``` After defining generators for the training and validation images, the `flow_from_directory` method loads images from disk, applies rescaling, and resizes the images to the required size. ``` train_data_gen = train_image_generator.flow_from_directory(batch_size=batch_size, directory=train_dir, shuffle=True, target_size=(IMG_HEIGHT, IMG_WIDTH),
class_mode='binary') val_data_gen = validation_image_generator.flow_from_directory(batch_size=batch_size, directory=validation_dir, target_size=(IMG_HEIGHT, IMG_WIDTH), class_mode='binary') ``` ### Visualize training images Visualize the training images by extracting a batch from the training generator. (In this example, we extract a batch of images and plot five of them with `matplotlib`.) ``` sample_training_images, _ = next(train_data_gen) ``` The `next` function returns a batch from the dataset, in the form `(x_train, y_train)`, where `x_train` contains the training features and `y_train` their labels. Discard the labels to visualize only the training images. ``` # This function plots images in a grid of 1 row and 5 columns, with one image in each column. def plotImages(images_arr): fig, axes = plt.subplots(1, 5, figsize=(20,20)) axes = axes.flatten() for img, ax in zip( images_arr, axes): ax.imshow(img) ax.axis('off') plt.tight_layout() plt.show() plotImages(sample_training_images[:5]) ``` ## Build the model The model consists of three convolution blocks with a max pooling layer in each of them, followed by a fully connected layer with 512 units and a `relu` activation function. The model outputs class probabilities for binary classification using a sigmoid activation function. ``` model = Sequential([ Conv2D(16, 3, padding='same', activation='relu', input_shape=(IMG_HEIGHT, IMG_WIDTH ,3)), MaxPooling2D(), Conv2D(32, 3, padding='same', activation='relu'), MaxPooling2D(), Conv2D(64, 3, padding='same', activation='relu'), MaxPooling2D(), Flatten(), Dense(512, activation='relu'), Dense(1, activation='sigmoid') ]) ``` ### Compile the model For this tutorial, choose the *ADAM* optimizer and the *binary cross entropy* loss function. Pass the `metrics` argument to view the training and validation accuracy for each training epoch. ``` model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy']) ``` ### Model summary View all the layers of the network using the model's `summary` method: ``` model.summary() ``` ### Train the model Use the `fit_generator` method of the `ImageDataGenerator` class to train the network. ``` history = model.fit_generator( train_data_gen, steps_per_epoch=total_train // batch_size, epochs=epochs, validation_data=val_data_gen, validation_steps=total_val // batch_size ) ``` ### Visualize training results Now visualize the results after training the network. ``` acc = history.history['accuracy'] val_acc = history.history['val_accuracy'] loss = history.history['loss'] val_loss = history.history['val_loss']
epochs_range = range(epochs) plt.figure(figsize=(8, 8)) plt.subplot(1, 2, 1) plt.plot(epochs_range, acc, label='Training Accuracy') plt.plot(epochs_range, val_acc, label='Validation Accuracy') plt.legend(loc='lower right') plt.title('Training and Validation Accuracy') plt.subplot(1, 2, 2) plt.plot(epochs_range, loss, label='Training Loss') plt.plot(epochs_range, val_loss, label='Validation Loss') plt.legend(loc='upper right') plt.title('Training and Validation Loss') plt.show() ``` As the plots show, training accuracy and validation accuracy are far apart, and the model achieves only around 70% accuracy on the validation set. Let's look at what went wrong and try to improve the overall performance of the model. ## Overfitting In the plots above, the training accuracy increases linearly over time, whereas the validation accuracy stalls around 70% during the training process, and the difference between training and validation accuracy is noticeable. This is a sign of *overfitting*. When there are only a small number of training examples, the model can learn from noise or unwanted details in them, to an extent that negatively impacts its performance on new examples. This phenomenon is known as overfitting; it means the model will have a hard time generalizing to a new dataset. There are multiple ways to fight overfitting during training. In this tutorial, we use *data augmentation* and additionally add *dropout* to the model. ## Data augmentation Overfitting generally occurs when there are a small number of training examples. One way to solve this problem is to augment the dataset so that it has a sufficient number of training examples. Data augmentation takes the approach of applying random transformations to the existing training samples to generate images that could plausibly belong to the dataset. The goal is that the model never sees the exact same image twice during training, which exposes it to more aspects of the data and helps it generalize better. In `tf.keras`, data augmentation is implemented with the `ImageDataGenerator` class: specify various transformations for the dataset, and they are applied during the training process. ### Augment and visualize data First, apply random horizontal-flip augmentation to the dataset and see how individual images look after the transformation. ### Apply horizontal flip To apply this augmentation, pass `horizontal_flip` as an argument to the `ImageDataGenerator` class and set it to `True`. ``` image_gen = ImageDataGenerator(rescale=1./255, horizontal_flip=True) train_data_gen = image_gen.flow_from_directory(batch_size=batch_size, directory=train_dir, shuffle=True, target_size=(IMG_HEIGHT, IMG_WIDTH)) ``` Take one sample image from the training examples and repeat it five times, so that the augmentation is applied to the same image five times. ``` augmented_images = [train_data_gen[0][0][0] for i in range(5)] # Re-use the same custom plotting function defined and used above to visualize training images plotImages(augmented_images) ``` ### Randomly rotate the image Let's use the rotation augmentation to randomly rotate the training samples by up to 45 degrees to the left or right.
``` image_gen = ImageDataGenerator(rescale=1./255, rotation_range=45) train_data_gen = image_gen.flow_from_directory(batch_size=batch_size, directory=train_dir, shuffle=True, target_size=(IMG_HEIGHT, IMG_WIDTH)) augmented_images = [train_data_gen[0][0][0] for i in range(5)] plotImages(augmented_images) ``` ### Apply zoom augmentation Apply zoom augmentation to the dataset to zoom images randomly by up to 50%. ``` image_gen = ImageDataGenerator(rescale=1./255, zoom_range=0.5) train_data_gen = image_gen.flow_from_directory(batch_size=batch_size, directory=train_dir, shuffle=True, target_size=(IMG_HEIGHT, IMG_WIDTH)) augmented_images = [train_data_gen[0][0][0] for i in range(5)] plotImages(augmented_images) ``` ### Put all the augmentations together Apply all of the augmentations introduced so far. Here, rescaling, 45-degree rotation, width shift, height shift, horizontal flip, and zoom are applied to the training images. ``` image_gen_train = ImageDataGenerator( rescale=1./255, rotation_range=45, width_shift_range=.15, height_shift_range=.15, horizontal_flip=True, zoom_range=0.5 ) train_data_gen = image_gen_train.flow_from_directory(batch_size=batch_size, directory=train_dir, shuffle=True, target_size=(IMG_HEIGHT, IMG_WIDTH), class_mode='binary') ``` Visualize how a single image looks in five separate passes when these augmentations are applied randomly to the dataset. ``` augmented_images = [train_data_gen[0][0][0] for i in range(5)] plotImages(augmented_images) ``` ### Create the validation data generator Generally, data augmentation is applied only to the training samples. Here, use `ImageDataGenerator` to only rescale the validation images and convert them into batches. ``` image_gen_val = ImageDataGenerator(rescale=1./255) val_data_gen = image_gen_val.flow_from_directory(batch_size=batch_size, directory=validation_dir, target_size=(IMG_HEIGHT, IMG_WIDTH), class_mode='binary') ``` ## Dropout Another way to avoid overfitting is to introduce *dropout* to the network. It is a form of regularization that pushes the weights in the network toward small values, which makes the distribution of weight values more regular and reduces overfitting on small training datasets; dropout is one of the regularization techniques used in this tutorial. When you apply dropout to a layer, it randomly drops out (sets to zero) a fraction of the layer's output units during the training process. Dropout takes a fractional number as its input value, such as 0.1, 0.2, or 0.4. This means randomly dropping 10%, 20%, or 40% of the output units of the applied layer.
If you apply 0.1 dropout to a given layer, 10% of its output units are randomly set to zero in each training epoch. Let's create a network architecture with this new dropout feature and apply it to different convolutional and fully-connected layers. ## Create a new network with dropout Here, apply dropout to the first and last max pool layers. Applying dropout randomly sets 20% of the neurons to zero during each training epoch. This helps avoid overfitting on the training dataset. ``` model_new = Sequential([ Conv2D(16, 3, padding='same', activation='relu', input_shape=(IMG_HEIGHT, IMG_WIDTH ,3)), MaxPooling2D(), Dropout(0.2), Conv2D(32, 3, padding='same', activation='relu'), MaxPooling2D(), Conv2D(64, 3, padding='same', activation='relu'), MaxPooling2D(), Dropout(0.2), Flatten(), Dense(512, activation='relu'), Dense(1, activation='sigmoid') ]) ``` ### Compile the model After introducing dropout to the network, compile the model and view the layer summary. ``` model_new.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy']) model_new.summary() ``` ### Train the model After introducing data augmentation to the training examples and adding dropout to the network, train this new network: ``` history = model_new.fit_generator( train_data_gen, steps_per_epoch=total_train // batch_size, epochs=epochs, validation_data=val_data_gen, validation_steps=total_val // batch_size ) ``` ### Visualize the model Visualizing the new model after training, you can see that there is significantly less overfitting than before. Training the model for more epochs should increase the accuracy further. ``` acc = history.history['accuracy'] val_acc = history.history['val_accuracy'] loss = history.history['loss'] val_loss = history.history['val_loss'] epochs_range = range(epochs) plt.figure(figsize=(8, 8)) plt.subplot(1, 2, 1) plt.plot(epochs_range, acc, label='Training Accuracy') plt.plot(epochs_range, val_acc, label='Validation Accuracy') plt.legend(loc='lower right') plt.title('Training and Validation Accuracy') plt.subplot(1, 2, 2) plt.plot(epochs_range, loss, label='Training Loss') plt.plot(epochs_range, val_loss, label='Validation Loss') plt.legend(loc='upper right') plt.title('Training and Validation Loss') plt.show() ```
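The dropout behaviour described above can be illustrated outside Keras. A minimal numpy sketch of "inverted dropout" (the scheme `tf.keras` layers use: zero roughly `rate` of the units during training and scale the survivors by `1/(1-rate)` so the expected activation is unchanged):

```python
import numpy as np

rng = np.random.default_rng(0)
rate = 0.2                        # fraction of units to drop, as in Dropout(0.2)
x = np.ones(100_000)

keep_mask = rng.random(x.shape) >= rate          # keep ~80% of units
y = np.where(keep_mask, x / (1.0 - rate), 0.0)   # inverted dropout scaling

print(round(1.0 - keep_mask.mean(), 3))  # close to 0.2
print(round(y.mean(), 3))                # close to 1.0, expectation preserved
```

Because of the rescaling, no correction is needed at inference time: the layer simply passes activations through unchanged.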
### Homework: going neural (6 pts) We've checked out statistical approaches to language models in the last notebook. Now let's go find out what deep learning has to offer. <img src='https://raw.githubusercontent.com/yandexdataschool/nlp_course/master/resources/expanding_mind_lm_kn_3.png' width=300px> We're gonna use the same dataset as before, except this time we build a language model that's character-level, not word-level. ``` import numpy as np import pandas as pd import matplotlib.pyplot as plt %matplotlib inline ``` Working on the character level means that we don't need to deal with a large vocabulary or missing words. Heck, we can even keep uppercase words in text! The downside, however, is that all our sequences just got a lot longer. However, we still need special tokens: * Begin Of Sequence (__BOS__) - this token is at the start of each sequence. We use it so that we always have non-empty input to our neural network. $P(x_1) = P(x_1 \mid BOS)$ * End Of Sequence (__EOS__) - you guessed it... this token is at the end of each sequence. The catch is that it should __not__ occur anywhere else except at the very end. If our model produces this token, the sequence is over. ``` BOS, EOS = ' ', '\n' data = pd.read_json("./arxivData.json") lines = data.apply(lambda row: (row['title'] + ' ; ' + row['summary'])[:512], axis=1) \ .apply(lambda line: BOS + line.replace(EOS, ' ') + EOS) \ .tolist() # if you missed the seminar, download data here - https://yadi.sk/d/_nGyU2IajjR9-w ``` Our next step is __building a char-level vocabulary__. Put simply, you need to assemble a list of all unique tokens in the dataset. ``` # get all unique characters from lines (including capital letters and symbols) tokens = set(''.join(lines)) tokens = sorted(tokens) n_tokens = len(tokens) print ('n_tokens = ',n_tokens) assert 100 < n_tokens < 150 assert BOS in tokens and EOS in tokens ``` We can now assign each character its index in the tokens list.
This way we can encode a string into a TF-friendly integer vector. ``` # dictionary of character -> its identifier (index in tokens list) token_to_id = {token: id for id, token in enumerate(tokens)} assert len(tokens) == len(token_to_id), "dictionaries must have same size" for i in range(n_tokens): assert token_to_id[tokens[i]] == i, "token identifier must be its position in tokens list" print("Seems alright!") ``` Our final step is to assemble several strings into an integer matrix `[batch_size, text_length]`. The only problem is that each sequence has a different length. We can work around that by padding short sequences with extra _EOS_ tokens or cropping long sequences. Here's how it works: ``` def to_matrix(lines, max_len=None, pad=token_to_id[EOS], dtype='int32'): """Casts a list of lines into a tf-digestible matrix""" max_len = max_len or max(map(len, lines)) lines_ix = np.zeros([len(lines), max_len], dtype) + pad for i in range(len(lines)): line_ix = list(map(token_to_id.get, lines[i][:max_len])) lines_ix[i, :len(line_ix)] = line_ix return lines_ix # Example: cast 3 dummy lines to a matrix, padded with EOS dummy_lines = [ ' abc\n', ' abacaba\n', ' abc1234567890\n', ] print(to_matrix(dummy_lines)) ``` ### Neural Language Model Just like for N-gram LMs, we want to estimate the probability of text as a joint probability of tokens (symbols this time). $$P(X) = \prod_t P(x_t \mid x_0, \dots, x_{t-1}).$$ Instead of counting all possible statistics, we want to train a neural network with parameters $\theta$ that estimates the conditional probabilities: $$ P(x_t \mid x_0, \dots, x_{t-1}) \approx p(x_t \mid x_0, \dots, x_{t-1}, \theta) $$ But before we optimize, we need to define our neural network.
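The factorization above means the log-probability of a sequence is just the sum of per-token conditional log-probabilities; a toy numeric sketch with made-up probabilities:

```python
import math

# Hypothetical conditional probabilities P(x_t | x_<t) for a 3-token sequence.
cond_probs = [0.5, 0.25, 0.8]

log_prob = sum(math.log(p) for p in cond_probs)   # log P(X)
mean_nll = -log_prob / len(cond_probs)            # per-token crossentropy
perplexity = math.exp(mean_nll)

print(log_prob, mean_nll, perplexity)
```

The per-token crossentropy computed this way is exactly what the training loss below estimates over the dataset.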
Let's start with a fixed-window (aka convolutional) architecture: <img src='https://raw.githubusercontent.com/yandexdataschool/nlp_course/master/resources/fixed_window_lm.jpg' width=400px> ``` import tensorflow as tf import keras, keras.layers as L sess = tf.InteractiveSession() class FixedWindowLanguageModel: def __init__(self, n_tokens=n_tokens, emb_size=16, hid_size=64): """ A fixed window model that looks at at least 5 previous symbols. Note: a fixed window LM is effectively performing a convolution over a sequence of words. This convolution only looks at the current and previous words. Such a convolution can be represented as a sequence of 2 operations: - pad input vectors by {strides * (filter_size - 1)} zero vectors on the "left", do not pad right - perform a regular convolution with {filter_size} and {strides} You can stack several convolutions at once """ #YOUR CODE - create layers/variables and any metadata you want, e.g. self.emb = L.Embedding(...) self.emb = L.Embedding(input_dim=n_tokens, output_dim=emb_size) self.conv1 = L.Convolution1D(filters=hid_size, kernel_size=5, padding='causal', name='conv1') self.conv2 = L.Convolution1D(filters=n_tokens, kernel_size=5, padding='causal', name='conv2') self.activation = L.Activation('relu') #END OF YOUR CODE self.prefix_ix = tf.placeholder('int32', [None, None]) self.next_token_probs = tf.nn.softmax(self(self.prefix_ix)[:, -1]) def __call__(self, input_ix): """ compute language model logits given input tokens :param input_ix: batch of sequences with token indices, tf tensor: int32[batch_size, sequence_length] :returns: pre-softmax linear outputs of language model [batch_size, sequence_length, n_tokens] these outputs will be used as logits to compute P(x_t | x_0, ..., x_{t - 1}) """ embedding = self.emb(input_ix) conv1 = self.conv1(embedding) conv1 = self.activation(conv1) conv2 = self.conv2(conv1) return conv2 def get_possible_next_tokens(self, prefix=BOS, temperature=1.0, max_len=100, sess=sess): """ :returns:
probabilities of next token, dict {token : prob} for all tokens """ probs = sess.run(self.next_token_probs, {self.prefix_ix: to_matrix([prefix])})[0] return dict(zip(tokens, probs)) window_lm = FixedWindowLanguageModel() dummy_input_ix = tf.constant(to_matrix(dummy_lines)) dummy_lm_out = window_lm(dummy_input_ix) # note: tensorflow and keras layers only create variables after they're first applied (called) sess.run(tf.global_variables_initializer()) dummy_logits = sess.run(dummy_lm_out) assert dummy_logits.shape == (len(dummy_lines), max(map(len, dummy_lines)), n_tokens), "please check output shape" assert np.all(np.isfinite(dummy_logits)), "inf/nan encountered" assert not np.allclose(dummy_logits.sum(-1), 1), "please predict linear outputs, don't use softmax (maybe you've just got unlucky)" # test for lookahead dummy_input_ix_2 = tf.constant(to_matrix([line[:3] + 'e' * (len(line) - 3) for line in dummy_lines])) dummy_lm_out_2 = window_lm(dummy_input_ix_2) dummy_logits_2 = sess.run(dummy_lm_out_2) assert np.allclose(dummy_logits[:, :3] - dummy_logits_2[:, :3], 0), "your model's predictions depend on FUTURE tokens. " \ " Make sure you don't allow any layers to look ahead of current token." \ " You can also get this error if your model is not deterministic (e.g. dropout). Disable it for this test." ``` We can now tune our network's parameters to minimize categorical crossentropy over the training dataset $D$: $$ L = {\frac1{|D|}} \sum_{X \in D} \sum_{x_t \in X} - \log p(x_t \mid x_1, \dots, x_{t-1}, \theta) $$ As usual with neural nets, this optimization is performed via stochastic gradient descent with backprop. One can also note that minimizing crossentropy is equivalent to minimizing model __perplexity__, KL-divergence or maximizing log-likelihood. ``` def compute_lengths(input_ix, eos_ix=token_to_id[EOS]): """ compute length of each line in input ix (incl.
first EOS), int32 vector of shape [batch_size] """ count_eos = tf.cumsum(tf.to_int32(tf.equal(input_ix, eos_ix)), axis=1, exclusive=True) lengths = tf.reduce_sum(tf.to_int32(tf.equal(count_eos, 0)), axis=1) return lengths print('matrix:\n', dummy_input_ix.eval()) print('lengths:', compute_lengths(dummy_input_ix).eval()) input_ix = tf.placeholder('int32', [None, None]) logits = window_lm(input_ix[:, :-1]) reference_answers = input_ix[:, 1:] # Your task: implement loss function as per formula above # your loss should only be computed on actual tokens, excluding padding # predicting actual tokens and first EOS do count. Subsequent EOS-es don't # you will likely need to use compute_lengths and/or tf.sequence_mask to get it right. lengths = compute_lengths(input_ix) mask = tf.to_float(tf.sequence_mask(lengths, tf.shape(input_ix)[1])[:, 1:]) loss = tf.nn.sparse_softmax_cross_entropy_with_logits(labels=reference_answers, logits=logits) loss = tf.reduce_sum(loss * mask) / tf.reduce_sum(mask) # operation to update network weights train_step = tf.train.AdamOptimizer().minimize(loss) loss_1 = sess.run(loss, {input_ix: to_matrix(dummy_lines, max_len=50)}) loss_2 = sess.run(loss, {input_ix: to_matrix(dummy_lines, max_len=100)}) assert (np.ndim(loss_1) == 0) and (0 < loss_1 < 100), "loss must be a positive scalar" assert np.allclose(loss_1, loss_2), 'do not include AFTER first EOS into loss. '\ 'Hint: use tf.sequence_mask. Beware +/-1 errors. And be careful when averaging!' ``` ### Training loop Now let's train our model on minibatches of data ``` from sklearn.model_selection import train_test_split train_lines, dev_lines = train_test_split(lines, test_size=0.25, random_state=42) sess.run(tf.global_variables_initializer()) batch_size = 256 score_dev_every = 250 train_history, dev_history = [], [] def score_lines(dev_lines, batch_size): """ computes average loss over the entire dataset """ dev_loss_num, dev_loss_len = 0., 0. 
for i in range(0, len(dev_lines), batch_size): batch_ix = to_matrix(dev_lines[i: i + batch_size]) dev_loss_num += sess.run(loss, {input_ix: batch_ix}) * len(batch_ix) dev_loss_len += len(batch_ix) return dev_loss_num / dev_loss_len def generate(lm, prefix=BOS, temperature=1.0, max_len=100): """ Samples output sequence from probability distribution obtained by lm :param temperature: samples proportionally to lm probabilities ^ (1 / temperature); if temperature == 0, always takes the most likely token. Break ties arbitrarily. """ while True: token_probs = lm.get_possible_next_tokens(prefix) tokens, probs = zip(*token_probs.items()) if temperature == 0: next_token = tokens[np.argmax(probs)] else: probs = np.array([p ** (1. / temperature) for p in probs]) probs /= sum(probs) next_token = np.random.choice(tokens, p=probs) prefix += next_token if next_token == EOS or len(prefix) > max_len: break return prefix if len(dev_history) == 0: dev_history.append((0, score_lines(dev_lines, batch_size))) print("Before training:", generate(window_lm, 'Bridging')) from IPython.display import clear_output from random import sample from tqdm import trange for i in trange(len(train_history), 5000): batch = to_matrix(sample(train_lines, batch_size)) loss_i, _ = sess.run([loss, train_step], {input_ix: batch}) train_history.append((i, loss_i)) if (i + 1) % 50 == 0: clear_output(True) plt.scatter(*zip(*train_history), alpha=0.1, label='train_loss') if len(dev_history): plt.plot(*zip(*dev_history), color='red', label='dev_loss') plt.legend(); plt.grid(); plt.show() print("Generated examples (tau=0.5):") for j in range(3): print(generate(window_lm, temperature=0.5)) if (i + 1) % score_dev_every == 0: print("Scoring dev...") dev_history.append((i, score_lines(dev_lines, batch_size))) print('#%i Dev loss: %.3f' % dev_history[-1]) assert np.mean(train_history[:10], axis=0)[1] > np.mean(train_history[-10:], axis=0)[1], "The model didn't converge."
print("Final dev loss:", dev_history[-1][-1]) for i in range(10): print(generate(window_lm, temperature=0.5)) ``` ### RNN Language Models Fixed-size architectures are reasonably good at capturing short-term dependencies, but their design prevents them from capturing any signal outside their window. We can mitigate this problem by using a __recurrent neural network__: $$ h_0 = \vec 0 ; \quad h_{t+1} = RNN(x_t, h_t) $$ $$ p(x_t \mid x_0, \dots, x_{t-1}, \theta) = dense_{softmax}(h_t) $$ Such a model processes one token at a time, left to right, and maintains a hidden state vector between steps. Theoretically, it can learn arbitrarily long temporal dependencies given a large enough hidden size. <img src='https://raw.githubusercontent.com/yandexdataschool/nlp_course/master/resources/rnn_lm.jpg' width=480px> ``` class RNNLanguageModel: def __init__(self, n_tokens=n_tokens, emb_size=16, hid_size=256): """ Build a recurrent language model. You are free to choose anything you want, but the recommended architecture is - token embeddings - one or more LSTM/GRU layers with hid_size units - a linear layer to predict logits """ # YOUR CODE - create layers/variables/etc self.emb = L.Embedding(n_tokens, emb_size) self.lstm = L.LSTM(hid_size, return_sequences=True) self.linear = L.Dense(n_tokens) #END OF YOUR CODE self.prefix_ix = tf.placeholder('int32', [None, None]) self.next_token_probs = tf.nn.softmax(self(self.prefix_ix)[:, -1]) def __call__(self, input_ix): """ compute language model logits given input tokens :param input_ix: batch of sequences with token indices, tf tensor: int32[batch_size, sequence_length] :returns: pre-softmax linear outputs of language model [batch_size, sequence_length, n_tokens] these outputs will be used as logits to compute P(x_t | x_0, ..., x_{t - 1}) """ embedding = self.emb(input_ix) lstm = self.lstm(embedding) linear = self.linear(lstm) return linear def get_possible_next_tokens(self, prefix=BOS, temperature=1.0, max_len=100, sess=sess): """ :returns:
probabilities of next token, dict {token : prob} for all tokens """ probs = sess.run(self.next_token_probs, {self.prefix_ix: to_matrix([prefix])})[0] return dict(zip(tokens, probs)) rnn_lm = RNNLanguageModel() dummy_input_ix = tf.constant(to_matrix(dummy_lines)) dummy_lm_out = rnn_lm(dummy_input_ix) # note: tensorflow and keras layers only create variables after they're first applied (called) sess.run(tf.global_variables_initializer()) dummy_logits = sess.run(dummy_lm_out) assert dummy_logits.shape == (len(dummy_lines), max(map(len, dummy_lines)), n_tokens), "please check output shape" assert np.all(np.isfinite(dummy_logits)), "inf/nan encountered" assert not np.allclose(dummy_logits.sum(-1), 1), "please predict linear outputs, don't use softmax (maybe you've just got unlucky)" # test for lookahead dummy_input_ix_2 = tf.constant(to_matrix([line[:3] + 'e' * (len(line) - 3) for line in dummy_lines])) dummy_lm_out_2 = rnn_lm(dummy_input_ix_2) dummy_logits_2 = sess.run(dummy_lm_out_2) assert np.allclose(dummy_logits[:, :3] - dummy_logits_2[:, :3], 0), "your model's predictions depend on FUTURE tokens. " \ " Make sure you don't allow any layers to look ahead of current token." \ " You can also get this error if your model is not deterministic (e.g. dropout). Disable it for this test." ``` ### RNN training Our RNN language model should optimize the same loss function as the fixed-window model. But there's a catch. Since an RNN recurrently multiplies gradients through many time-steps, gradient values may explode, [breaking](https://raw.githubusercontent.com/yandexdataschool/nlp_course/master/resources/nan.jpg) your model. The common solution to that problem is to clip gradients either [individually](https://www.tensorflow.org/versions/r1.1/api_docs/python/tf/clip_by_value) or [globally](https://www.tensorflow.org/versions/r1.1/api_docs/python/tf/clip_by_global_norm). Your task here is to prepare a tensorflow graph that minimizes the same loss function.
If you encounter large loss fluctuations during training, please add gradient clipping using the links above.

_Note: gradient clipping is not exclusive to RNNs. Convolutional networks with enough depth often suffer from the same issue._

```
input_ix = tf.placeholder('int32', [None, None])

logits = rnn_lm(input_ix[:, :-1])
reference_answers = input_ix[:, 1:]

# Copy the loss function and train step from the fixed-window model training
lengths = compute_lengths(input_ix)
mask = tf.to_float(tf.sequence_mask(lengths, tf.shape(input_ix)[1])[:, 1:])

loss = tf.nn.sparse_softmax_cross_entropy_with_logits(labels=reference_answers, logits=logits)
loss = tf.reduce_sum(loss * mask) / tf.reduce_sum(mask)

# and the train step
train_step = tf.train.AdamOptimizer().minimize(loss)

loss_1 = sess.run(loss, {input_ix: to_matrix(dummy_lines, max_len=50)})
loss_2 = sess.run(loss, {input_ix: to_matrix(dummy_lines, max_len=100)})
assert (np.ndim(loss_1) == 0) and (0 < loss_1 < 100), "loss must be a positive scalar"
assert np.allclose(loss_1, loss_2), 'do not include AFTER first EOS into loss. Hint: use tf.sequence_mask. Be careful when averaging!'
```

### RNN: Training loop

```
sess.run(tf.global_variables_initializer())
batch_size = 128
score_dev_every = 250
train_history, dev_history = [], []
dev_history.append((0, score_lines(dev_lines, batch_size)))

for i in trange(len(train_history), 5000):
    batch = to_matrix(sample(train_lines, batch_size))
    loss_i, _ = sess.run([loss, train_step], {input_ix: batch})
    train_history.append((i, loss_i))

    if (i + 1) % 50 == 0:
        clear_output(True)
        plt.scatter(*zip(*train_history), alpha=0.1, label='train_loss')
        if len(dev_history):
            plt.plot(*zip(*dev_history), color='red', label='dev_loss')
        plt.legend(); plt.grid(); plt.show()
        print("Generated examples (tau=0.5):")
        for j in range(3):
            print(generate(rnn_lm, temperature=0.5))

    if (i + 1) % score_dev_every == 0:
        print("Scoring dev...")
        dev_history.append((i, score_lines(dev_lines, batch_size)))
        print('#%i Dev loss: %.3f' % dev_history[-1])

# average only the loss values; each history entry is an (iteration, loss) tuple
assert np.mean([l for _, l in train_history[:10]]) > np.mean([l for _, l in train_history[-10:]]), "The model didn't converge."
print("Final dev loss:", dev_history[-1][-1])

for i in range(10):
    print(generate(rnn_lm, temperature=0.5))
```

### Bonus quest: Ultimate Language Model

Now that you've learned the building blocks of neural language models, you can build the ultimate monster:
* Make it char-level, word-level, or maybe use sub-word units like [bpe](https://github.com/rsennrich/subword-nmt);
* Combine convolutions, recurrent cells, pre-trained embeddings and all the black magic deep learning has to offer;
* Use strides to get a larger window size quickly. Here's a [scheme](https://storage.googleapis.com/deepmind-live-cms/documents/BlogPost-Fig2-Anim-160908-r01.gif) from google wavenet.
* Train on large data. Like... really large. Try the [1 Billion Words](http://www.statmt.org/lm-benchmark/1-billion-word-language-modeling-benchmark-r13output.tar.gz) benchmark;
* Use training schedules to speed up training.
Start with small length and increase over time; Take a look at [one cycle](https://medium.com/@nachiket.tanksale/finding-good-learning-rate-and-the-one-cycle-policy-7159fe1db5d6) for learning rate; _You are NOT required to submit this assignment. Please make sure you don't miss your deadline because of it :)_
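To make the global-norm clipping recommended in the RNN training section concrete, here is a minimal NumPy sketch of the operation that `tf.clip_by_global_norm` performs; the function name and threshold below are our own illustration, not part of the assignment code:

```python
import numpy as np

def clip_by_global_norm(grads, clip_norm):
    """Scale a list of gradient arrays so their joint L2 norm is at most clip_norm."""
    global_norm = np.sqrt(sum(np.sum(g ** 2) for g in grads))
    if global_norm <= clip_norm:
        return grads, global_norm
    scale = clip_norm / global_norm
    return [g * scale for g in grads], global_norm

grads = [np.array([3.0, 4.0]), np.array([0.0, 0.0])]  # joint L2 norm = 5
clipped, norm = clip_by_global_norm(grads, clip_norm=1.0)
print(norm)         # 5.0
print(clipped[0])   # [0.6 0.8] -- direction preserved, magnitude rescaled
```

Note that, unlike per-value clipping, this rescales all gradients by a common factor, so the update direction is preserved.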
# Identification of modal parameters using the extended Morlet-Wave method from an MDOF system ver. 0.1

```
import numpy as np
import matplotlib.pyplot as plt
from morlet_wave import *
```

**Steps required by the user:**

1. Load the impulse response functions as a numpy array of the shape: \
`x[(number_of_samples, measure_points)]`
2. Define the sampling frequency: \
e.g. `fs = 1024` S/s
3. Estimate the natural frequencies: \
e.g. `nat_freq_est = np.array([315, 860, 1667]) * 2*np.pi` \
The unit of the natural frequencies is [rad/s]. In the case of noisy signals it is important to estimate the natural frequencies as accurately as possible.

```
# x =
# fs =
# nat_freq_est =
```

**Set parameters of the method:**

1. Set the time-spread parameters. Any pair of values can be set, but according to the author one of these three sets should be used:
  * `tsprd = (5, 10)`
  * `tsprd = (7, 14)` <- default
  * `tsprd = (10, 20)`
2. Set the range of Morlet-wave function cycles, default value:
  * `ncycl = (30, 300)` <- default

```
tsprd = (5, 10)
ncycl = (30, 300)
```

Define a container to store the identified modal parameters:

```
data = {
    "zeta" : [],
    "omega": [],
    "X"    : []
}
```

The following cell does the identification.
It iterates along all measurement spots and natural frequencies and stores the results in the container `data`.\
*Note:* in case of very noisy data, if the identified natural frequencies vary significantly from the estimated ones, the estimated natural frequencies can be used for the identification instead, by calling the frequency-identification method as: ```detect_frequency(use_estimated=True)```

```
measure_points = x.shape[1]
nat_freq = nat_freq_est.size
for i in range(measure_points):
    zeta = []
    omega = []
    X = []
    for j in range(nat_freq):
        print("i, j: ", i, j)
        if j == 0:
            freq = (nat_freq_est[0], nat_freq_est[1])
        elif j == nat_freq-1:
            freq = (nat_freq_est[-1], nat_freq_est[-2])
        elif np.abs(nat_freq_est[j]-nat_freq_est[j+1]) < np.abs(nat_freq_est[j]-nat_freq_est[j-1]):
            freq = (nat_freq_est[j], nat_freq_est[j+1])
        else:
            freq = (nat_freq_est[j], nat_freq_est[j-1])
        sys = ExtendedMW(fs, x[i,], freq, tsprd, ncycl)
        sys.detect_frequency()
        sys.detect_damp()
        sys.estimate(True)
        sys.detect_amplitude(True)
        zeta.append(sys.zeta)
        omega.append(sys.omega)
        X.append(sys.X * np.exp(1j * sys.phi))
        del sys
    data["zeta"].append(zeta)
    data["omega"].append(omega)
    data["X"].append(X)
```

Calculate the mode shapes from the identified amplitudes and phases.

```
X_id = np.array(data["X"])  # identified complex amplitudes, shape (measure_points, nat_freq)
psi = np.sign(np.sin(np.angle(X_id))) * np.abs(X_id)
print(psi)
```

Plot the mode shapes:

```
m = np.linspace(1, measure_points, measure_points)
for i in range(nat_freq):
    y = np.zeros(measure_points)
    m = np.linspace(1, measure_points, measure_points)
    m[np.isnan(psi[:, i])] = np.nan
    y[np.invert(np.isnan(psi[:, i]))] = psi[np.invert(np.isnan(psi[:, i])), i]
    print(m, y)
    plt.plot(m, y/np.max(np.abs(y)))
plt.grid(True)
plt.xticks(np.linspace(1, measure_points, measure_points));
```
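The sign convention used for the mode shapes — a component counts as positive when its phase lies in $(0, \pi)$ and negative otherwise — can be checked on a toy pair of complex amplitudes. The values below are made up purely for illustration:

```python
import numpy as np

# hypothetical complex amplitudes: magnitudes 2.0 and 1.5, phases +45 deg and -120 deg
X = np.array([2.0 * np.exp(1j * np.pi / 4), 1.5 * np.exp(-1j * 2 * np.pi / 3)])

# sign(sin(phase)) maps a phase in (0, pi) to +1 and a phase in (-pi, 0) to -1
psi_demo = np.sign(np.sin(np.angle(X))) * np.abs(X)
print(psi_demo)  # [ 2.  -1.5]
```

So two components with opposite phase end up with opposite signs in the mode shape, which is exactly what the plotting cell relies on.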
# Introduction

### The Python programming language

Python is an open-source (OS), interpreted, general-purpose programming language (or scripting language).

**Its properties:**
- Object-oriented
- Interpreted
    - It does not need to be compiled (unlike, e.g., *C++*); it is enough to type a command and the code can be run immediately
    - This makes it suitable for rapid prototyping of computations
    - In exchange, it is slow
- Open-source:
    - Free
    - Continuously maintained
    - Widely used both in industry and in academia
    - Large community, with many tutorials and forums (e.g. [stackoverflow](https://stackoverflow.com/questions/tagged/python))
- Modular:
    - There is a "*package*" for a huge number of tasks (e.g. *numpy* for numerical computations, *sympy* for symbolic computations, *CSV* for handling spreadsheet files)
    - Only what is actually needed has to be imported
    - One has to get to know the *package* ecosystem: what exists, what is good for what, etc...
- Many IDEs (*Integrated Development Environments*) exist:
    - Fundamentally shell (terminal) based
    - Notebook: **_jupyter notebook_**, *jupyter lab*
    - Text editors: *Spyder*, *VS Code* (free/open source - these also include a *Debugger*)
    - Paid editors (non-exhaustive list): *Visual Studio*, *PyCharm*, etc...

### How the Jupyter notebook works (+ Python kernel):

The most important things to know:
- It is only a *front-end* that communicates with a *kernel* (selectable in the kernel menu).
- Two modes exist:
    - Command mode (for cell-level operations)
    - Edit mode (for entering text into a cell)
- Command mode (reached by pressing `ESC`; blue bar when a cell is selected):
    - Save the notebook: `s`
    - Add cells: `b` below the current cell, `a` above the current cell
    - Delete a cell: press the `d` key twice in a row
    - Undo cell deletion: `z`
    - Copy a cell: `c`, cut: `x`, paste below the current cell: `v`
    - Turn on line numbering for the cell: `l` (lowercase L), or `Shift + l` for all cells
    - Cell modes: runnable code: `y`, raw code (not runnable): `r`, markdown (formatted text): `m`
- Edit mode (reached by pressing `Enter`; green color):
    - Comment/uncomment a line: `Ctrl + /`
    - Place multiple cursors: `Ctrl + Left mouse button`
    - Rectangular selection: dragging with `Alt + Left mouse button`
- Common to both:
    - Run the cell, then step to the next one: `Shift + Enter` (this creates a new cell if there is nowhere to step)
    - Run the cell without stepping: `Ctrl + Enter`

**Bringing up the Jupyter notebook help**: press `h` in *Command mode*

**Python help**: with the cursor on a function name press `Shift + Tab`, or type and run `?"fn_name"` in a cell

# Python introduction

## Basic operations (run them with Shift/Ctrl + Enter)

```
17 + 7 # Addition

333 - 7 # Subtraction

11 * 22 # Multiplication

7/9 # Division (this will not be an integer (int): it is a separate type, float)

0.3-0.1-0.2 # float: beware of floating-point representation error!!

2**3 # Exponentiation (** and NOT ^!)
2**(0.5) # Square root via exponentiation

5e-3 # scientific notation with e (or 5E-3)
```

Some basic operations also work on strings

```
'str1_' + 'str2_' # Addition (concatenation)

2 * 'str2_' # Multiplication (repetition)
```

## More complex functions

```
sin(2) # sine
```

More complex functions are no longer part of the core Python language - in such cases external packages have to be imported, for example the **math** package

```
import math

sin(2) # this still does not exist on its own

math.sin(2)

# When several commands are entered together, only the output of the last line is shown: use the print function!
print(math.sqrt(2))
print(math.tan(2))
print(math.atan(2))

# The output can also be hidden with ; ("suppress output")
1+1;
```

If needed, we can also define our own variables with the `=` sign. Note: the `=` assignment has no output

```
a=2
b=3
c=4.0 # automatic typing

(a+b*c)**a # the output will be the most general type (int < float)

# It is important to avoid protected variable names! DO NOT DO THIS!
math.sqrt = 1
math.sqrt(2)
```

If we accidentally do this, it is worth restarting the *kernel* with the circular arrow shown above, or via *Kernel* $\rightarrow$ *Restart*

## Functions

Structure:
```python
def function(*arguments):
    instruction1
    instruction2
    ...
    return result
```

The instructions belonging to the function must be written with tab indentation (there are no `{}` braces and no `end`). After the function name come the arguments, then a colon `:` marks where the function body begins.

```
def foo(x):
    return 3*x

def bar(x,y):
    a = x+y**2
    return 2*a + 4

print(foo(3))
print(foo(3.))
print(foo('szöveg_'))
print(bar(3,4.))
```

It is also possible to create so-called anonymous functions (*anonymous functions* or *lambda functions*), which are a quick way to create simple one-line functions:
```python
lambda arguments: instruction
```
Such a function can even be assigned to a variable, just like a number or a string.
```
double = lambda x : x*2
multiply = lambda x,y : x*y

print(double(3))
print(multiply(10,3))
```

## Lists

In Python, lists can be created easily, and they can store any data type. List indexing starts from 0

```
lista = [1,2,3,4,"valami",[1.0,4]]

print(lista[0]) # 1st element of the list
print(lista[3]) # 4th element of the list
print(lista[-1]) # negative numbers index the list from the back, starting from (-1)
print(lista[-2]) # second-to-last element of the list
print(lista[1:-1]) # several elements at once, in [inclusive:exclusive] form
print(lista[1:2]) # several elements at once, in [inclusive:exclusive] form
print(lista[2:]) # the last element of the list is included too
```

## Control flow - only the most important constructs

### if-then-else

```python
if condition:
    instruction1
elif condition2:
    instruction2
else:
    instruction3
```

```
a=4
if a<=3:
    print('"a" is not greater than 3')
elif a>=10:
    print('"a" is not less than 10')
else:
    print('"a" is greater than 3 but less than 10')
```

### for loop

```python
for i in array:
    instruction
```

```
for i in range(3):
    print(i)
print()

for (i,elem) in enumerate(lista):
    print('element ', i, ' of lista: ', elem, sep='') # printing several items at once, separator = ''
```

## Quick creation of lists (list comprehension)

```
lista2 = [3*i**2 for i in range(2,5)] # range: 2,3,4
lista2

lista3 = list(range(10))
lista3

myfun = lambda x: 3*x**2
lista4 = [myfun(i) for i in range(2,10) if i%3 != 0] # if i is not divisible by 3
lista4
```
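As a small combined exercise, the pieces above — a function, a condition and a list comprehension — can be put together; the function name below is just an example:

```python
# keep numbers that are divisible by 2 but not by 4, and square them
def keep(n):
    return n % 2 == 0 and n % 4 != 0

squares = [n**2 for n in range(10) if keep(n)]
print(squares)  # [4, 36]  (only 2 and 6 pass the filter)
```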
# One Shot Learning with Siamese Networks

This jupyter notebook walks through one-shot learning with a Siamese network in PyTorch, trained with a contrastive loss on pairs of face images.

## Imports
All the imports are defined here

```
%matplotlib inline
import torchvision
import torchvision.datasets as dset
import torchvision.transforms as transforms
from torch.utils.data import DataLoader,Dataset
import matplotlib.pyplot as plt
import torchvision.utils
import numpy as np
import random
from PIL import Image
import torch
from torch.autograd import Variable
import PIL.ImageOps
import torch.nn as nn
from torch import optim
import torch.nn.functional as F
```

## Helper functions
Set of helper functions

```
def imshow(img,text=None,should_save=False):
    npimg = img.numpy()
    plt.axis("off")
    if text:
        plt.text(75, 8, text, style='italic',fontweight='bold',
            bbox={'facecolor':'white', 'alpha':0.8, 'pad':10})
    plt.imshow(np.transpose(npimg, (1, 2, 0)))
    plt.show()

def show_plot(iteration,loss):
    plt.plot(iteration,loss)
    plt.show()
```

## Configuration Class
A simple class to manage configuration

```
class Config():
    training_dir = "./data/faces/training/"
    testing_dir = "./data/faces/testing/"
    train_batch_size = 64
    train_number_epochs = 100
```

## Custom Dataset Class
This dataset generates a pair of images.
The label is 0 for a genuine pair and 1 for an imposter pair.

```
class SiameseNetworkDataset(Dataset):

    def __init__(self,imageFolderDataset,transform=None,should_invert=True):
        self.imageFolderDataset = imageFolderDataset
        self.transform = transform
        self.should_invert = should_invert

    def __getitem__(self,index):
        img0_tuple = random.choice(self.imageFolderDataset.imgs)
        # we need to make sure approx 50% of images are in the same class
        should_get_same_class = random.randint(0,1)
        if should_get_same_class:
            while True:
                # keep looping till the same class image is found
                img1_tuple = random.choice(self.imageFolderDataset.imgs)
                if img0_tuple[1]==img1_tuple[1]:
                    break
        else:
            while True:
                # keep looping till a different class image is found
                img1_tuple = random.choice(self.imageFolderDataset.imgs)
                if img0_tuple[1] !=img1_tuple[1]:
                    break

        img0 = Image.open(img0_tuple[0])
        img1 = Image.open(img1_tuple[0])
        img0 = img0.convert("L")
        img1 = img1.convert("L")

        if self.should_invert:
            img0 = PIL.ImageOps.invert(img0)
            img1 = PIL.ImageOps.invert(img1)

        if self.transform is not None:
            img0 = self.transform(img0)
            img1 = self.transform(img1)

        return img0, img1 , torch.from_numpy(np.array([int(img1_tuple[1]!=img0_tuple[1])],dtype=np.float32))

    def __len__(self):
        return len(self.imageFolderDataset.imgs)
```

## Using Image Folder Dataset

```
folder_dataset = dset.ImageFolder(root=Config.training_dir)
siamese_dataset = SiameseNetworkDataset(imageFolderDataset=folder_dataset,
                                        transform=transforms.Compose([transforms.Resize((100,100)),
                                                                      transforms.ToTensor()
                                                                      ])
                                       ,should_invert=False)
```

## Visualising some of the data
The top row and the bottom row of any column is one pair. The 0s and 1s correspond to the columns of images: 1 indicates a dissimilar pair, and 0 indicates a similar pair.
``` vis_dataloader = DataLoader(siamese_dataset, shuffle=True, num_workers=8, batch_size=8) dataiter = iter(vis_dataloader) example_batch = next(dataiter) concatenated = torch.cat((example_batch[0],example_batch[1]),0) imshow(torchvision.utils.make_grid(concatenated)) print(example_batch[2].numpy()) ``` ## Neural Net Definition We will use a standard convolutional neural network ``` class SiameseNetwork(nn.Module): def __init__(self): super(SiameseNetwork, self).__init__() self.cnn1 = nn.Sequential( nn.ReflectionPad2d(1), nn.Conv2d(1, 4, kernel_size=3), nn.ReLU(inplace=True), nn.BatchNorm2d(4), nn.ReflectionPad2d(1), nn.Conv2d(4, 8, kernel_size=3), nn.ReLU(inplace=True), nn.BatchNorm2d(8), nn.ReflectionPad2d(1), nn.Conv2d(8, 8, kernel_size=3), nn.ReLU(inplace=True), nn.BatchNorm2d(8), ) self.fc1 = nn.Sequential( nn.Linear(8*100*100, 500), nn.ReLU(inplace=True), nn.Linear(500, 500), nn.ReLU(inplace=True), nn.Linear(500, 5)) def forward_once(self, x): output = self.cnn1(x) output = output.view(output.size()[0], -1) output = self.fc1(output) return output def forward(self, input1, input2): output1 = self.forward_once(input1) output2 = self.forward_once(input2) return output1, output2 ``` ## Contrastive Loss ``` class ContrastiveLoss(torch.nn.Module): """ Contrastive loss function. Based on: http://yann.lecun.com/exdb/publis/pdf/hadsell-chopra-lecun-06.pdf """ def __init__(self, margin=2.0): super(ContrastiveLoss, self).__init__() self.margin = margin def forward(self, output1, output2, label): euclidean_distance = F.pairwise_distance(output1, output2, keepdim = True) loss_contrastive = torch.mean((1-label) * torch.pow(euclidean_distance, 2) + (label) * torch.pow(torch.clamp(self.margin - euclidean_distance, min=0.0), 2)) return loss_contrastive ``` ## Training Time! 
``` train_dataloader = DataLoader(siamese_dataset, shuffle=True, num_workers=8, batch_size=Config.train_batch_size) net = SiameseNetwork().cuda() criterion = ContrastiveLoss() optimizer = optim.Adam(net.parameters(),lr = 0.0005 ) counter = [] loss_history = [] iteration_number= 0 for epoch in range(0,Config.train_number_epochs): for i, data in enumerate(train_dataloader,0): img0, img1 , label = data img0, img1 , label = img0.cuda(), img1.cuda() , label.cuda() optimizer.zero_grad() output1,output2 = net(img0,img1) loss_contrastive = criterion(output1,output2,label) loss_contrastive.backward() optimizer.step() if i %10 == 0 : print("Epoch number {}\n Current loss {}\n".format(epoch,loss_contrastive.item())) iteration_number +=10 counter.append(iteration_number) loss_history.append(loss_contrastive.item()) show_plot(counter,loss_history) ``` ## Some simple testing The last 3 subjects were held out from the training, and will be used to test. The Distance between each image pair denotes the degree of similarity the model found between the two images. Less means it found more similar, while higher values indicate it found them to be dissimilar. ``` folder_dataset_test = dset.ImageFolder(root=Config.testing_dir) siamese_dataset = SiameseNetworkDataset(imageFolderDataset=folder_dataset_test, transform=transforms.Compose([transforms.Resize((100,100)), transforms.ToTensor() ]) ,should_invert=False) test_dataloader = DataLoader(siamese_dataset,num_workers=6,batch_size=1,shuffle=True) dataiter = iter(test_dataloader) x0,_,_ = next(dataiter) for i in range(10): _,x1,label2 = next(dataiter) concatenated = torch.cat((x0,x1),0) output1,output2 = net(Variable(x0).cuda(),Variable(x1).cuda()) euclidean_distance = F.pairwise_distance(output1, output2) imshow(torchvision.utils.make_grid(concatenated),'Dissimilarity: {:.2f}'.format(euclidean_distance.item())) ```
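To see what the contrastive loss above actually computes, here is the same formula for a single pair in plain Python; the margin and the distances below are illustrative values, not outputs of the trained network:

```python
def contrastive_loss(distance, label, margin=2.0):
    """label 0: similar pair, pulled together; label 1: dissimilar pair, pushed beyond the margin."""
    similar_term = (1 - label) * distance ** 2
    dissimilar_term = label * max(margin - distance, 0.0) ** 2
    return similar_term + dissimilar_term

print(contrastive_loss(0.5, 0))  # 0.25 - similar pair at small distance: small loss
print(contrastive_loss(0.5, 1))  # 2.25 - dissimilar pair too close: large loss
print(contrastive_loss(3.0, 1))  # 0.0  - dissimilar pair already beyond the margin: no loss
```

This is why, at test time, a small displayed dissimilarity means the model considers the two faces similar: the loss only penalizes dissimilar pairs that fall inside the margin.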
# Naive Bayes

$$
\begin{split}
\mathop{argmax}_{c_k}p(y=c_k|x) &= \mathop{argmax}_{c_k}p(y=c_k)p(x|y=c_k) \\
& \left( \text{due to: } p(y=c_k|x) = \frac{p(y=c_k)p(x|y=c_k)}{p(x)} \right) \\
&= \mathop{argmax}_{c_k}p(y=c_k)\prod_jp(x^{(j)}|y=c_k)
\end{split}
$$

Use Maximum Likelihood Estimation (MLE) to estimate $p(y=c_k)$ and $p(x^{(j)}|y=c_k)$ from the dataset:

$$
\hat{p}(y=c_k) = \frac{\sum_i I(y_i=c_k)}{N} \\
\hat{p}(x^{(j)}=a_j|y=c_k) = \frac{\sum_i I(x_i^{(j)}=a_j,\, y_i=c_k)}{\sum_i I(y_i=c_k)}
$$

Bayesian estimation adds $\lambda$ to the numerator and denominator of the MLE (Laplace smoothing when $\lambda=1$).

# Naive Bayes in Scikit-learn

Classifiers: GaussianNB, MultinomialNB, BernoulliNB

## Documents Classification

Use the TF-IDF (Term Frequency and Inverse Document Frequency) of each term in the documents as the feature:

$$
\text{TF-IDF} = TF \cdot IDF \\
TF(t) = \frac {\text{Number of times term t appears in a document}}{\text{Total number of terms in the document}}\\
IDF(t) = \log_e\frac {\text{Total number of documents}}{\text{Number of documents with term t in it} + 1}
$$

Bag of Words

### TfidfVectorizer
sklearn.feature_extraction.text.TfidfVectorizer(stop_words, token_pattern, max_df)

```
from sklearn.feature_extraction.text import TfidfVectorizer
vect = TfidfVectorizer()

documents=[
    'my dog has flea problems help please',
    'maybe not take him to dog park stupid',
    'my dalmation is so cute I love him',
    'stop posting stupid worthless garbage',
    'mr licks ate my steak how to stop him',
    'quit buying worthlsess dog food stupid',
]
targets=[0,1,0,1,0,1] # 0 normal, 1 insult

tf_matrix = vect.fit_transform(documents)

# all unique words
words = vect.get_feature_names()
print(len(words), words)
# words id
print(len(vect.vocabulary_), vect.vocabulary_)

tfidf = tf_matrix.toarray()
print(tfidf.shape, tfidf[0])
```

### CountVectorizer

```
from sklearn.feature_extraction.text import CountVectorizer
c_vect = CountVectorizer()
c_matrix = c_vect.fit_transform(documents)
print(c_vect.get_feature_names())
c_matrix.toarray()

# default ngram_range is (1, 1),
# token_pattern='(?u)\b\w\w+\b'
c_vect_ngram = CountVectorizer(ngram_range=(1, 2))
c_matrix_ngram = c_vect_ngram.fit_transform(documents)
print(c_vect_ngram.get_feature_names())
```

### MultinomialNB

```
from sklearn.naive_bayes import MultinomialNB
clf = MultinomialNB(alpha=0.001).fit(tf_matrix, targets)

test_vect = TfidfVectorizer(vocabulary=vect.vocabulary_)
test_features = test_vect.fit_transform([documents[3]])
predicted_labels = clf.predict(test_features)

from sklearn import metrics
print(metrics.accuracy_score([targets[3]], predicted_labels))
```
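The MLE formulas from the start of this notebook can be verified by hand on the toy labels used above. The labels are recomputed here so the snippet is self-contained, and the word-occurrence vector is a hypothetical example, not derived from the actual documents:

```python
# six documents, labels 0 (normal) and 1 (insult), as in the example above
targets = [0, 1, 0, 1, 0, 1]

# prior: p(y=c_k) = sum_i I(y_i=c_k) / N
N = len(targets)
prior_insult = sum(1 for y in targets if y == 1) / N
print(prior_insult)  # 0.5

# conditional: p(word present | y=1) for a hypothetical word that
# appears exactly in documents 1, 3 and 5 (the insult documents)
appears = [False, True, False, True, False, True]
num = sum(1 for w, y in zip(appears, targets) if w and y == 1)
den = sum(1 for y in targets if y == 1)
print(num / den)  # 1.0
```

With Laplace smoothing ($\lambda=1$) the conditional would instead be $(3+1)/(3+2) = 0.8$ for a binary feature, which is why smoothed estimates never reach 0 or 1 exactly.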
# How To: Crop type classification for Austria

This example notebook shows the steps towards constructing an automated machine learning pipeline for crop type identification in an area of interest in Austria. Along the pipeline, two different approaches are applied and compared. The first one, LightGBM, represents a state-of-the-art machine learning algorithm. The second one is a Temporal Convolutional Neural Network architecture from the field of deep learning. The prediction is performed on a time series of Sentinel-2 scenes from 2018. The example notebook will lead you through the whole process of creating the pipeline, with details provided at each step (see **Overview**).

## Before you start

Enjoying the functionality of eo-learn and the simplicity of this example workflow is preceded by the unavoidable setup of an adequate working environment. But trust us, it's worth it! And we'll take you by the hand.

### Requirements

#### Sentinel Hub account

To run the example you'll need a Sentinel Hub account. If you do not have one yet, you can create a free trial account at the [Sentinel Hub webpage](https://services.sentinel-hub.com/oauth/subscription). If you are a researcher you can even apply for a free non-commercial account at the [ESA OSEO page](https://earth.esa.int/aos/OSEO).

Once you have the account set up, log in to the [Sentinel Hub Configurator](https://apps.sentinel-hub.com/configurator/). By default you will already have the default configuration with an **instance ID** (an alphanumeric code of length 36). For this tutorial we recommend that you create a new configuration (`Add new configuration`) and set the configuration to be based on the **Python scripts template**. Such a configuration will already contain all the layers used in a more general Land Use/Land Cover (LULC) example, which are adapted for this example. Otherwise you will have to define the layers for your configuration yourself.

One layer you have to define yourself is your "MY-S2-L2A-BANDS" layer.
Therefore you:
* log in to your Sentinel Hub account
* go to the `Configuration Utility` and access your newly created `LULC` configuration
* there you choose `+ Add new layer`
* your layer name is `MY-S2-L2A-BANDS`
* in red letters you are requested to _! Please select predefined product or enter your processing script_ - so you better do...
* to set your custom script you copy/paste `return [B02,B03,B04,B05,B06,B07,B08,B8A,B11,B12]` into the `Custom script editor` and click `</> Set Custom Script`
* You just told Sentinel Hub which bands you want to download in the following. Now, go on and `Save` your own layer

After you have prepared the configuration, please put the configuration's **instance ID** into the `sentinelhub` package's configuration file following the [configuration instructions](http://sentinelhub-py.readthedocs.io/en/latest/configure.html). For Processing API requests you also need to obtain and set your `oauth` client ID and secret. You can do this either manually or by using the respective variables in the configuration section of the following workflow.

#### Sentinel Hub Python package

The [Sentinel Hub Python package](https://sentinelhub-py.readthedocs.io/en/latest/) allows users to make OGC (WMS and WCS) web requests to download and process satellite images within their Python scripts. It supports Sentinel-2 L1C and L2A, Sentinel-1, Landsat 8, MODIS and DEM data sources.

#### eo-learn library

Between the acquisition of a satellite image and actionable information, there is a large processing effort. [eo-learn](https://eo-learn.readthedocs.io/en/latest/index.html), a collection of modular Python sub-packages, allows easy and quick processing of spatio-temporal data to prototype, build and automate the required large-scale EO workflows for AOIs of any size. It also directly enables the application of state-of-the-art Python tools for computer vision, machine learning and deep learning to the data.
Especially for non-experts in the fields of remote sensing and machine learning, it makes the extraction of valuable information from satellite imagery easier and more comfortable.

#### Additional packages

In addition to the previous packages, the installation of [keras](https://keras.io/), [TensorFlow](https://www.tensorflow.org/) and [ipywidgets](https://ipywidgets.readthedocs.io/en/latest/) is required.

## Overview

With the help of the eo-learn library, the entire classification process can be executed in 4 processing blocks, i.e. `EOWorkflows`:

1. Ground truth data
2. EO data
3. Feature engineering
    - Crop type grouping
    - Sampling
4. Prediction

**In more detail the notebook is structured as follows:**

I. Imports
II. Configurations

### Part I
1. BBox-Splitter
    - Plot AOI and give the extent
    - Create BBoxes
    - Visualize the selection
2. Add ground truth data
    - Create EOPatches and add LPIS + area ratio
3. Add EO data
    - Choose EO features
    - Clean EOPatch list
4. Feature/label engineering and Sampling
    - Data visualization
    - Resampling, Interpolation, LPIS data preparation
    - Sampling

### Part II
6. Prediction
    - Set up and train LightGBM model
    - Set up and train TempCNN model
    - Model validation and evaluation
    - Prediction
    - Visualization of the results
7. Next steps

Now, after the setup, you are curious what comes next and can't wait to get your hands dirty? Well, let's get started!

# Imports

Let's start with some necessary imports.
```
# set module directory to system path
import sys, os
MAIN_FOLDER = os.getcwd()
import_path = os.path.join(MAIN_FOLDER, 'Tasks')
if import_path not in sys.path:
    sys.path.append(import_path)

# Built-in modules
import math
import shutil
import itertools
from datetime import timedelta

# Basics of Python data handling and visualization
import pandas as pd
import geopandas as gpd
import numpy as np
import matplotlib as mpl
import matplotlib.pyplot as plt
from matplotlib.colors import LinearSegmentedColormap
from shapely.geometry import Polygon
from tqdm.notebook import tqdm

# Imports from eo-learn, sentinelhub-py, and perceptive-sentinel
# (SHConfig is needed by the configuration cell below)
from sentinelhub import CRS, BBoxSplitter, MimeType, SHConfig
from eolearn.core import LinearWorkflow, FeatureType, SaveTask, OverwritePermission, LoadTask
from eolearn.core import EOPatch, EOTask, CreateEOPatchTask, ZipFeatureTask, MapFeatureTask
from eolearn.geometry import VectorToRaster, ErosionTask
from eolearn.io import SentinelHubInputTask, ExportToTiff
from eolearn.mask import get_s2_pixel_cloud_detector, AddCloudMaskTask, AddValidDataMaskTask
from eolearn.features import SimpleFilterTask, LinearInterpolation
from eolearn.features import NormalizedDifferenceIndexTask, EuclideanNormTask

# Machine learning
import lightgbm as lgb
import joblib
from sklearn.preprocessing import StandardScaler, OneHotEncoder
from sklearn.model_selection import train_test_split
from sklearn import metrics

# Deep Learning
from keras.models import Sequential, load_model
from keras.layers import Dense, Flatten, Dropout
from keras.layers.convolutional import Conv1D, MaxPooling1D
from keras.utils import to_categorical

# Notebook specific classes and functions
from CropTypeClassTasks import CleanLPIS, GroupLPIS, LPISCLASS, ConcatenateData, SamplingTaskTask
from CropTypeClassTasks import train_test_split_eopatches, train_test_split_eopatch
from CropTypeClassTasks import plot_confusion_matrix, PredictPatch, AddAreaRatio, FixLPIS, masking
from CropTypeClassTasks
import AddGeopediaVectorFeature, Sen2CorValidData, ValidDataFractionPredicate ``` # Configurations In this part you can define your configurations. The basic configurations are set for an example running smoothly. ## Configuration file customization ``` # In case you put the credentials into the configuration file by hand you can leave this unchanged INSTANCE_ID = '' CLIENT_ID = '' CLIENT_SECRET = '' config = SHConfig() if CLIENT_ID and CLIENT_SECRET and INSTANCE_ID: config.instance_id = INSTANCE_ID config.sh_client_id = CLIENT_ID config.sh_client_secret = CLIENT_SECRET if config.sh_client_id == '' or config.sh_client_secret == '' or config.instance_id == '': print("Warning! To use Sentinel Hub services, please provide the credentials (client ID and client secret).") ``` ## Workflow configuration ``` # define in- and output folders output_path = os.path.join(MAIN_FOLDER, 'Output') general_data_path = os.path.join(MAIN_FOLDER, 'GeneralData') patch_path = os.path.join(MAIN_FOLDER, 'Output', 'EOPatches') thresLPIS_path = os.path.join(MAIN_FOLDER, 'Output', 'EOPatches_Low_LPIS_Thres') samples_path = os.path.join(MAIN_FOLDER, 'Output', 'Samples') models_path = os.path.join(MAIN_FOLDER, 'Output', 'Models') predictions_path = os.path.join(MAIN_FOLDER, 'Output', 'Predictions') # For reference colormap lpisclass_cmap = mpl.colors.ListedColormap([entry.color for entry in LPISCLASS]) lpisclass_norm = mpl.colors.BoundaryNorm(np.arange(-0.5, 26, 1), lpisclass_cmap.N) class_names = [entry.class_name for entry in LPISCLASS] class_ids = [entry.id for entry in LPISCLASS] ### 1. BBox-Splitter ## Plot AOI and give extent INPUT_FILE = os.path.join(general_data_path, 'Area_AOI.geojson') # Geojson or Shapefile of the area of interest austria = os.path.join(general_data_path, 'Area_Austria.geojson') # Geojson of austrian national borders crs = CRS.UTM_33N # wanted coordinate System of the AOI ### 2. 
Add ground truth data
## Create EOPatches and add LPIS + area ratio
year = '2018' # year of interest
layerID_dict = {'2018': 2647, '2017': 2034, '2016': 2033} # Layer IDs of the Geopedia layers
layerID = layerID_dict[year] # Layer ID for Austria for the year set by "year"
patch_list = os.listdir(patch_path) # list of created EOPatch names

### 3. Add EO data
## Choose EO features
maxcloud = 0.8 # maximum cloud coverage of Sentinel tiles used for download
datafrac = 0.7 # keep only frames with a valid data fraction over x%

## Clean EOPatch list
lpis_thres = 0.13 # patches with less than x% of LPIS coverage are excluded in the following

## Add EO data
time_interval = [f'{year}-01-01', f'{year}-09-30'] # the start and end date for downloading satellite data

### 4. Feature and label engineering
## Feature concatenation, interpolation and LPIS data preparation
day_range = 8 # the real-time range of valid satellite images is resampled to an x-day equidistant range

## Prepare LPIS data
grouping_id = 'basic' # define grouping id (has to be identical to the endings of the two grouping files)
# File that contains the LPIS crop ID and wanted groups - columns of shape: CROP_ID, english, slovenian, latin, GROUP_1
lpis_to_group_file = os.path.join(general_data_path, 'at_lpis_{}_crop_to_group_mapping_{}.csv'.format(year, grouping_id))
# File that contains the wanted groups and their ID - columns of shape: GROUP_1, GROUP_1_ID
crop_group_file = os.path.join(general_data_path, 'crop_group_1_definition_{}.csv'.format(grouping_id))

### 5.
Sampling ## Sampling per EOPatch pixel_thres = 1000 # Pixel threshold necessary for a class to be considered in sampling samp_class = 500 # take x samples per class per EOPatch ## Combine samples and split into train and test data test_ratio = 4 # take every xth patch for testing features_dict = 'FEATURES_SAMPLED' # name of the dictionary where the sample features are saved labels_dict = 'LPIS_class_{}_ERODED_SAMPLED'.format(grouping_id) # name of the dictionary where the sample labels are saved ``` # 1. From AOI to BBox ## Plot AOI and give extent Spotlight on your "INPUT_FILE" configuration! This is where you can easily adapt the workflow to your needs. **Take your pick** and replace the AOI file in the _General data_ folder. Your AOI, provided as either a shapefile or a GeoJSON file, is split into smaller patches by `eo-learn`. The total number of patches depends on the AOI's size. Automated splitting is supposed to create patches of size 10 x 10 km. ``` # define Area of interest aoi = gpd.read_file(INPUT_FILE) # read AOI file aoi_shape = aoi.geometry.values[-1] # get aoi shape # define BBox-Splitter split values ShapeVal_a = round(aoi_shape.bounds[2] - aoi_shape.bounds[0]) ShapeVal_b = round(aoi_shape.bounds[3] - aoi_shape.bounds[1]) SplitVal_a = max(1, int(ShapeVal_a/1e4)) SplitVal_b = max(1, int(ShapeVal_b/1e4)) # Give extent of AOI + grid count and plot AOI print('The extent of the AOI is {}m x {}m, so it is split into a grid of {} x {}.'.format(ShapeVal_a, ShapeVal_b, SplitVal_a, SplitVal_b)) aoi.plot() plt.axis('off'); ``` ## Create BBoxes The simple patch polygons are transformed into bounding boxes suitable for serving as geometrical EOPatch frames.
``` # split area of interest into an appropriate number of BBoxes bbox_splitter = BBoxSplitter([aoi_shape], crs, (SplitVal_a, SplitVal_b)) bbox_list = np.array(bbox_splitter.get_bbox_list()) # get list of BBox geometries info_list = np.array(bbox_splitter.get_info_list()) # get list of x (column) and y (row) indices print('Each bounding box also has some info how it was created.\nExample:\n' 'bbox: {}\ninfo: {}\n'.format(bbox_list[0].__repr__(), info_list[0])) ``` ## Visualize the selection First, visualize the GeoDataFrame of the bounding boxes ``` # create GeoDataFrame of BBoxes geometry = [Polygon(bbox.get_polygon()) for bbox in bbox_list] # get geometry from bbox_list for creating GeoSeries idxs_x = [info['index_x'] for info in info_list] # get column index for naming EOPatch idxs_y = [info['index_y'] for info in info_list] # get row index for naming EOPatch gdf = gpd.GeoDataFrame({'index_x': idxs_x, 'index_y': idxs_y}, crs={'init': CRS.ogc_string(crs)}, geometry=geometry) shapefile_name = os.path.join(output_path, 'BBoxes.shp') gdf.to_file(shapefile_name) gdf.head() ``` Second, visualize the split AOI with reference to the Austrian national borders ``` # Plot AOI overview austria_gdf = gpd.read_file(austria) fontdict = {'family': 'monospace', 'weight': 'normal', 'size': 11} # if bboxes have all same size, estimate offset xl, yl, xu, yu = gdf.geometry[0].bounds xoff, yoff = (xu - xl) / 3, (yu - yl) / 5 # main figure fig, ax = plt.subplots(figsize=(20, 20)) gdf.plot(ax=ax, facecolor='w', edgecolor='r', alpha=0.5, linewidth=5) aoi.plot(ax=ax, facecolor='w', edgecolor='k', alpha=0.5) austria_gdf.plot(ax=ax, facecolor='w', edgecolor='b', alpha=0.5) ax.set_title('Test Area Split'); plt.axis('off') # sub figure a = plt.axes([0.2, 0.6, .2, .2]) gdf.plot(ax=a, facecolor='w', edgecolor='r', alpha=0.5, linewidth=3) aoi.plot(ax=a, facecolor='w', edgecolor='k', alpha=0.5, linewidth=3) plt.xticks([]) plt.yticks([]) ``` # 2.
Add ground truth data ## Create EOPatches and add LPIS data + area ratio Now it's time to create `EOPatches` and start filling them with data. #### Add data * First, you transform your basic geometric frames into proper `EOPatches`. You can then fill these handy data containers endlessly. * As a start you add your ground truth data that is later used as a reference to validate your prediction results. Here, you use Austrian LPIS data containing agricultural information on the field level. In the case of this example you download your 2018 data in vector format automatically from [Geopedia](http://portal.geopedia.world/) using Sentinel-Hub tasks. For further observation you can also download the complete and free dataset for Austria [here](https://www.data.gv.at/katalog/dataset?q=INVEKOS+Schl%C3%A4ge&sort=score+desc%2C+metadata_modified+desc). * Additionally, a ratio value is added showing the percentage of agricultural area in the respective `EOPatch`. The importance of this ratio will become apparent in the following steps. An `EOPatch` is created and manipulated using `EOTasks`. Due to the potentially large number of `EOPatches`, automation of the processing pipeline is absolutely crucial. Therefore `EOTasks` are chained in an `EOWorkflow`. In this example the final workflow is executed on all patches, which are saved to the specified directory. ### Set up your first EOWorkflow - Ground truth data The `EOTasks` need to be put in some order and executed one by one. This can be achieved by manually executing the tasks, or more conveniently, by defining an `EOWorkflow` which does this for you. An `EOWorkflow` can be linear or more complex, but it should be acyclic.
Here we will use the linear case of the EOWorkflow, available as `LinearWorkflow` ``` # TASK FOR CREATING EOPATCH create = CreateEOPatchTask() # TASK FOR ADDING LPIS DATA FROM GEOPEDIA # here you can choose the year of interest # also you have to set the corresponding Geopedia layer ID add_lpis = AddGeopediaVectorFeature((FeatureType.VECTOR_TIMELESS, 'LPIS_{}'.format(year)), layer=layerID, year_filter=None, drop_duplicates=True) # TASK FOR ADDING AN AREA RATIO # the area ratio indicates the EOPatch's proportion of LPIS coverage area_ratio = AddAreaRatio((FeatureType.VECTOR_TIMELESS, 'LPIS_{}'.format(year)), (FeatureType.SCALAR_TIMELESS, 'FIELD_AREA_RATIO')) # TASK FOR SAVING TO OUTPUT save = SaveTask(patch_path, overwrite_permission=OverwritePermission.OVERWRITE_PATCH) # define the workflow workflow = LinearWorkflow(create, add_lpis, area_ratio, save) ``` ### Run your first EOWorkflow ``` # execute workflow pbar = tqdm(total=len(bbox_list)) for idx, bbox in enumerate(bbox_list): bbox = bbox_splitter.bbox_list[idx] info = bbox_splitter.info_list[idx] patch_name = f'eopatch_{idx}_col-{info["index_x"]}_row-{info["index_y"]}' workflow.execute({create:{'bbox':bbox}, save:{'eopatch_folder':patch_name}}) pbar.update(1) ``` Visualize the added vector data for one example EOPatch ``` eopatch_name = 'eopatch_0_col-0_row-0' # get the name of the first newly created EOPatch eopatch = EOPatch.load(os.path.join(patch_path, eopatch_name)) # plot vector data print('Plotting LPIS vector data of eopatch: {}'.format(eopatch_name)) fig, ax = plt.subplots(figsize=(20, 20)) LPIS = eopatch.vector_timeless['LPIS_{}'.format(year)] LPIS.plot(column='SNAR_BEZEI', ax=ax, categorical=True) ax.set_aspect('auto') ax.set_xticks(ticks=[]) ax.set_yticks(ticks=[]) del eopatch ``` As you can see, the crop types in your AOI are very diverse. Each colour stands for one of the over 200 LPIS classes. # 3. Add EO data ## Choose EO features Now, it's time to add Sentinel-2 data to the EOPatches.
You are lucky to be using `eo-learn`, as this is done by setting up another EOWorkflow in which a single EOTask adds your satellite images. The remaining tasks allow you to create extensive valid data masks and useful indices using a ridiculously small amount of code. In detail you add: * L2A bands [B02,B03,B04,B05,B06,B07,B08,B8A,B11,B12] * Sen2Cor's scene classification map and snow probability map * Sentinel Hub's cloud probability map and cloud mask * A mask of validity, based on acquired data from Sentinel and cloud coverage. 1. IS_DATA == True 2. CLOUD_MASK == 0 (1 indicates that pixel was identified to be covered with cloud) * Filter out time frames with < 70 % valid coverage (no clouds) * Calculate and add NDVI, NDWI, NORM to help the algorithm detect relationships between the spectral bands. * Prepare the following workflow by adding two features ### Set up your second EOWorkflow - EO data ``` # TASK TO LOAD EXISTING EOPATCH load = LoadTask(patch_path) # TASK TO ADD SENTINEL 2 LEVEL 2A DATA # Here also a simple filter of cloudy scenes is done.
A detailed cloud cover # detection is performed within the next steps # Using SH we will download the following data: # * L2A bands: B02, B03, B04, B05, B06, B07, B08, B8A, B11, and B12 # * Sen2Cor's scene classification: SCL # * s2cloudless' cloud mask: CLM band_names = ['B02','B03','B04','B05','B06','B07','B08','B8A','B11','B12'] add_l2a = SentinelHubInputTask( bands_feature=(FeatureType.DATA, 'MY-S2-L2A-BANDS'), bands=band_names, resolution=10, maxcc=maxcloud, time_difference=timedelta(minutes=120), data_source=DataSource.SENTINEL2_L2A, additional_data=[(FeatureType.MASK, 'dataMask', 'IS_DATA'), (FeatureType.MASK, 'SCL'), (FeatureType.MASK, 'CLM')], config=config, max_threads=5 ) # create valid data masks scl_valid_classes = [2, 4, 5, 6, 7] # TASKs FOR ADDING L2A and L1C VALID DATA MASKS # convert cloud mask to valid mask add_clm_valid = MapFeatureTask((FeatureType.MASK, 'CLM'), (FeatureType.MASK, 'CLM_VALID'), np.logical_not) # combine IS_DATA and CLM_VALID add_l1c_valmask = ZipFeatureTask({FeatureType.MASK: ['IS_DATA', 'CLM_VALID']}, (FeatureType.MASK, 'L1C_VALID'), np.logical_and) # combine IS_DATA and SCL (using an erosion radius of 6 and a dilation radius of 22 pixels for SCL classes) add_l2a_valmask = AddValidDataMaskTask(Sen2CorValidData(scl_valid_classes, 6, 22), 'L2A_VALID') # combine all valid masks add_valmask = ZipFeatureTask({FeatureType.MASK: ['L1C_VALID', 'L2A_VALID']}, (FeatureType.MASK, 'VALID_DATA'), np.logical_and) # TASK TO FILTER OUT SCENES INCLUDING TOO MANY INVALID PIXELS # keep frames with > x % valid coverage valid_data_predicate = ValidDataFractionPredicate(datafrac) filter_task = SimpleFilterTask((FeatureType.MASK, 'VALID_DATA'), valid_data_predicate) # TASK FOR CALCULATING INDICES # NDVI = Normalized Difference Vegetation Index # NDWI = Normalized Difference Water Index # NORM = Euclidean Norm ndvi = NormalizedDifferenceIndexTask((FeatureType.DATA, 'MY-S2-L2A-BANDS'), (FeatureType.DATA, 'NDVI'), [6, 2]) ndwi =
NormalizedDifferenceIndexTask((FeatureType.DATA, 'MY-S2-L2A-BANDS'), (FeatureType.DATA, 'NDWI'), [1, 6]) norm = EuclideanNormTask((FeatureType.DATA, 'MY-S2-L2A-BANDS'), (FeatureType.DATA, 'NORM')) # TASK FOR SAVING TO OUTPUT save = SaveTask(patch_path, compress_level=1, overwrite_permission=OverwritePermission.OVERWRITE_PATCH) workflow = LinearWorkflow(load, add_l2a, add_clm_valid, add_l1c_valmask, add_l2a_valmask, add_valmask, filter_task, ndvi, ndwi, norm, save) ``` ## Clean EOPatch list Most likely, along with this innovative workflow, you are pushing humankind forward with other processes on your machine. Therefore you do not want to waste your resources on EOPatches containing very little agricultural area. Before running your already set up EOWorkflow, clean your EOPatch list. Remember the earlier calculated LPIS ratio? From here on you only keep EOPatches containing more than 13% agricultural area. The irrelevant ones are moved to the sidetrack. If you want to use EOPatches more extensively covered with agricultural area, simply increase your "lpis_thres" configuration.
``` # in GeoDataFrame label patches with certain threshold either as to do (1) or not to do (0) gdf[f'far{year}'] = -2.0 for idx, row in gdf.iterrows(): patch_name = os.path.join(patch_path, f'eopatch_{idx}_col-{row.index_x}_row-{row.index_y}') eop = EOPatch.load(str(patch_name), lazy_loading=True) gdf.loc[idx, f'far{year}'] = eop.scalar_timeless['FIELD_AREA_RATIO'][0] gdf[f'todo{year}'] = (gdf[f'far{year}'] > lpis_thres) * 1 gdf.to_file(shapefile_name) # move EOPatch folders with LPIS coverage beneath threshold into separate folder move = [] patch_list_delete = gpd.read_file(shapefile_name) patch_list_delete = patch_list_delete[patch_list_delete[f'todo{year}'] == 0] # identify EOPatches with insufficient LPIS coverage # create list including names of the identified EOPatches for idx in patch_list_delete.index: info = bbox_splitter.info_list[idx] patch_name = f'eopatch_{idx}_col-{info["index_x"]}_row-{info["index_y"]}' move.append(patch_name) print('EOPatches moved to sidetrack: ' + str([patch_name for patch_name in move])) # move identified EOPatches to alternative folder for patch_name in move: shutil.move(os.path.join(patch_path, patch_name), os.path.join(thresLPIS_path, patch_name)) patch_list = os.listdir(patch_path) # update patch_list ``` ### Run second EOWorkflow * Set up EOWorkflow? **Check!** * Ignored irrelevant EOPatches? **Check!** Then go ahead and run your EOWorkflow on the basis of your "time_interval" configuration! ``` # execute workflow and save the names of those that failed failed = [] pbar = tqdm(total=len(patch_list)) for patch_name in patch_list: # add EO data if possible try: workflow.execute({load: {'eopatch_folder': patch_name}, add_l2a: {'time_interval': time_interval}, save: {'eopatch_folder': patch_name}}) # append EOPatch name to list for further investigation except Exception as ex: print(f'Failed {patch_name} with {ex}') failed.append(patch_name) pbar.update() ``` # 4.
Feature/label engineering and Sampling The classifier you are using for the following prediction is very picky when it comes to the format of the input data. To feed your thoughtfully compiled data to the algorithm, it needs some preparation. ## Data visualization Now, after all necessary data is added, let's load a single EOPatch and look at the structure. By executing ``` EOPatch.load(os.path.join(patch_path, 'eopatch_0_col-0_row-0')) ``` You obtain the following structure: ``` EOPatch( data: { MY-S2-L2A-BANDS: numpy.ndarray(shape=(39, 1028, 1033, 10), dtype=float32) NDVI: numpy.ndarray(shape=(39, 1028, 1033, 1), dtype=float32) NDWI: numpy.ndarray(shape=(39, 1028, 1033, 1), dtype=float32) NORM: numpy.ndarray(shape=(39, 1028, 1033, 1), dtype=float32) } mask: { CLM: numpy.ndarray(shape=(39, 1028, 1033, 1), dtype=bool) CLM_VALID: numpy.ndarray(shape=(39, 1028, 1033, 1), dtype=bool) IS_DATA: numpy.ndarray(shape=(39, 1028, 1033, 1), dtype=bool) L1C_VALID: numpy.ndarray(shape=(39, 1028, 1033, 1), dtype=bool) L2A_VALID: numpy.ndarray(shape=(39, 1028, 1033, 1), dtype=bool) SCL: numpy.ndarray(shape=(39, 1028, 1033, 1), dtype=int32) VALID_DATA: numpy.ndarray(shape=(39, 1028, 1033, 1), dtype=bool) } scalar: {} label: {} vector: {} data_timeless: {} mask_timeless: {} scalar_timeless: { FIELD_AREA_RATIO: numpy.ndarray(shape=(1,), dtype=float64) } label_timeless: {} vector_timeless: { LPIS_2018: geopandas.GeoDataFrame(columns=['geometry', 'FS_KENNUNG', 'SL_FLAECHE', 'ID', 'SNAR_BEZEI', 'DateImported'], length=4091, crs=EPSG:32633) } meta_info: { maxcc: 0.8 service_type: 'wcs' size_x: '10m' size_y: '10m' time_difference: datetime.timedelta(seconds=7200) time_interval: ['2018-01-01', '2018-09-30'] } bbox: BBox(((420862.3179607267, 5329537.336315366), (431194.28800678457, 5339817.792378783)), crs=CRS('32633')) timestamp: [datetime.datetime(2018, 1, 6, 10, 4, 51), ..., datetime.datetime(2018, 9, 28, 10, 0, 24)], length=39 ) ``` As you can see, your EO data and indices are stored in
`data.FeatureType`, your valid data masks in `mask.FeatureType`, and your ground truth data in `vector_timeless.FeatureType`. It is possible to access various EOPatch content via calls like: ``` eopatch.timestamp eopatch.vector_timeless['LPIS_2018'] eopatch.data['NDVI'][0] eopatch.data['MY-S2-L2A-BANDS'][5][..., [3, 2, 1]] ``` ### Plot RGB image In order to get a quick and realistic overview of your AOI, you plot the true color image of one EOPatch ``` eopatch_name = 'eopatch_0_col-0_row-0' # get the name of the first newly created EOPatch eopatch = EOPatch.load(os.path.join(patch_path, eopatch_name), lazy_loading=True) fig, ax = plt.subplots(figsize=(20, 20)) plt.imshow(np.clip(eopatch.data['MY-S2-L2A-BANDS'][0][..., [2, 1, 0]] * 3.5, 0, 1)) plt.xticks([]) plt.yticks([]) ax.set_aspect('auto') ``` ### Plot mean NDVI Plot the time-wise mean of NDVI for the whole region. Filter out clouds in the mean calculation. ``` fig, ax = plt.subplots(figsize=(20, 20)) ndvi = eopatch.data['NDVI'] mask = eopatch.mask['VALID_DATA'] ndvi[~mask] = np.nan ndvi_mean = np.nanmean(ndvi, axis=0).squeeze() im = ax.imshow(ndvi_mean, vmin=0, vmax=0.8, cmap=plt.get_cmap('YlGn')) ax.set_xticks([]) ax.set_yticks([]) ax.set_aspect('auto') cb = fig.colorbar(im, ax=ax, orientation='horizontal', pad=0.01, aspect=100) cb.ax.tick_params(labelsize=20) plt.show() ``` ### Plot all masks To see how the valid data masks look and work together, you can compare them to a regular RGB image. For demonstration reasons a timeframe is selected which contains cloud-covered area.
``` tidx = 1 plt.figure(figsize=(20,20)) plt.subplot(331) plt.imshow(np.clip(eopatch.data['MY-S2-L2A-BANDS'][tidx][..., [2,1,0]] * 3.5,0,1)) plt.xticks([]) plt.yticks([]) plt.title('MY-S2-L2A-BANDS - RGB') plt.subplot(332) plt.imshow(eopatch.mask['IS_DATA'][tidx].squeeze(), vmin=0, vmax=1, cmap='gray') plt.xticks([]) plt.yticks([]) plt.title('IS_DATA - Data availability') plt.subplot(333) plt.imshow(eopatch.mask['CLM'][tidx].squeeze(), vmin=0, vmax=1, cmap='gray') plt.xticks([]) plt.yticks([]) plt.title('CLM - Cloud mask') plt.subplot(334) plt.imshow(eopatch.mask['L1C_VALID'][tidx].squeeze(), vmin=0, vmax=1, cmap='gray') plt.xticks([]) plt.yticks([]) plt.title('L1C_VALID - L1C valid data mask') plt.subplot(335) plt.imshow(eopatch.mask['L2A_VALID'][tidx].squeeze(), vmin=0, vmax=1, cmap='gray') plt.xticks([]) plt.yticks([]) plt.title('L2A_VALID - L2A valid data mask') plt.subplot(336) plt.imshow(eopatch.mask['SCL'][tidx].squeeze(), cmap='jet') plt.xticks([]) plt.yticks([]) plt.title('SCL - Sen2Cor scene classification map') plt.subplot(338) plt.imshow(eopatch.mask['VALID_DATA'][tidx].squeeze(), vmin=0, vmax=1, cmap='gray') plt.xticks([]) plt.yticks([]) plt.title('VALID_DATA - Combined valid data mask') ``` As you can see, invalid pixels from the different cloud masks and the Sen2Cor scene classification map are combined. For SCL the classes: * 1 SC_SATURATED_DEFECTIVE * 3 SC_CLOUD_SHADOW * 8 SC_CLOUD_MEDIUM_PROBABILITY * 9 SC_CLOUD_HIGH_PROBABILITY * 10 SC_THIN_CIRRUS * 11 SC_SNOW are considered as invalid. ### Plot spatial mean NDVI timeseries Plot the mean of NDVI over all pixels in a single patch throughout the year. Filter out clouds in the mean calculation.
``` ndvi_series = eopatch.data['NDVI'] time = np.array(eopatch.timestamp) mask = eopatch.mask['VALID_DATA'] t, w, h, _ = ndvi_series.shape ndvi_clean = ndvi_series.copy() ndvi_clean[~mask] = np.nan # set values of invalid pixels to NaN's # Calculate means, remove NaN's from means ndvi_mean = np.nanmean(ndvi_series.reshape(t, w * h).squeeze(), axis=1) ndvi_mean_clean = np.nanmean(ndvi_clean.reshape(t, w * h).squeeze(), axis=1) time_clean = time[~np.isnan(ndvi_mean_clean)] ndvi_mean_clean = ndvi_mean_clean[~np.isnan(ndvi_mean_clean)] fig, ax = plt.subplots(figsize=(20, 5)) plt.plot(time_clean, ndvi_mean_clean, 's-', label='Mean NDVI with cloud cleaning') plt.plot(time, ndvi_mean, 'o-', label='Mean NDVI without cloud cleaning') plt.xlabel('Time', fontsize=15) plt.ylabel('Mean NDVI over patch', fontsize=15) plt.xticks(fontsize=15) plt.yticks(fontsize=15) plt.legend(loc=2, prop={'size': 15}); ax.set_aspect('auto') del eopatch # delete eopatch variable to enable further processing ``` For all the hype around the temporal resolution of Sentinel-2 data, the displayed time series looks very fragmented, right? This is what you get if you choose to keep only timeframes with a valid data fraction over 70%. You set the value in your "datafrac" configuration. If you expect a nice overview of vegetation growing stages, reality kicks in and gives you mostly cloudy conditions in the first months of the year. The good thing about being picky about the validity of your timeframes is reduced data volume. Invalid frames contain no additional value for your later analysis anyway.
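`ValidDataFractionPredicate`, which is imported at the top of the notebook and handed to `SimpleFilterTask` above, is a small custom helper. A minimal sketch of its likely logic (a hypothetical re-implementation for illustration, not the notebook's actual code): it simply compares the fraction of valid pixels in a frame's `VALID_DATA` mask against the configured threshold.

```python
import numpy as np

class ValidDataFractionPredicate:
    """Illustrative re-implementation; the notebook imports the real class.

    A time frame is kept if the fraction of True pixels in its boolean
    validity mask exceeds the given threshold.
    """
    def __init__(self, threshold):
        self.threshold = threshold

    def __call__(self, array):
        # array is the boolean VALID_DATA mask of one frame, e.g. (height, width, 1)
        coverage = np.count_nonzero(array) / array.size
        return coverage > self.threshold

# With datafrac = 0.7: a frame with 75% valid pixels passes, one with 50% does not
predicate = ValidDataFractionPredicate(0.7)
frame_good = np.array([True, True, True, False]).reshape(2, 2, 1)
frame_bad = np.array([True, True, False, False]).reshape(2, 2, 1)
print(predicate(frame_good), predicate(frame_bad))  # True False
```

With `datafrac = 0.7`, `SimpleFilterTask` therefore drops every time frame whose combined validity mask covers 70% of the patch or less.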
## Resampling - Interpolation - LPIS data preparation - Sampling ### Feature concatenation and interpolation * For easier handling of the data you concatenate MY-S2-L2A-BANDS, NDVI, NDWI, NORM into a single feature called FEATURES * Perform temporal interpolation (filling gaps and resampling to the same dates) by: * creating a linear interpolation task in the temporal dimension * providing the cloud mask to tell the interpolating function which values to update * using only timeframes from a timerange all EOPatches have in common (from earliest date to latest date) ### LPIS data preparation * Out of the box, the LPIS data is divided into over 200 different crop type classes. As the classification is based on spectral signatures, these have to be distinctive; 200 classes are obviously too fine-grained to achieve accurate prediction results. Therefore you group these classes into reasonable groups, also based on similar spectral characteristics, using the two CSV files from the "General data" folder. The basic grouping defines 14 groups, namely: Grass, Maize, Orchards, Peas, Potatoes, Pumpkins, Soybean, Summer cereals, Sunflower, Vegetables, Vineyards, Winter cereals, Winter rape, Other. This grouping turned out to perform best in classification. * After the grouping, the data set stored in vector format is converted into a raster format. Thus, each EO pixel can be assigned to a crop type value. All polygons belonging to one of the classes are separately burned to the raster mask. * In order to get rid of artifacts with a width of 1 pixel, and mixed pixels at the edges between polygons of different classes, you perform an erosion. That means a buffer of 1 pixel (10 m) is removed from the border area of each individual field. ### Sampling By spatially sampling the EOPatches you randomly take a subset of pixels from a patch to use in the machine learning training and testing. Here you only want to consider classes that are represented by a certain number of pixels.
* Remember your "pixel_thres" configuration - a threshold of 1000 pixels is necessary for a class to be considered in sampling * Remember your "samp_class" configuration - 500 pixels per class per EOPatch are sampled ``` # for linear interpolation find earliest and latest overlapping dates # list EOPatches eopatches = [] patch_list = os.listdir(patch_path) for i in patch_list: eopatches.append(EOPatch.load(os.path.join(patch_path, i), lazy_loading=True)) eopatches = np.array(eopatches) # identify earliest common date (latest first timestamp) timelist = [] for eopatch in eopatches: timelist.append(eopatch.timestamp[0]) mindate = str(max(timelist).date()) print('Earliest date: ' + str(max(timelist))) # identify latest common date (earliest last timestamp) timelist = [] for eopatch in eopatches: timelist.append(eopatch.timestamp[-1]) maxdate = str(min(timelist).date()) print('Latest date: ' + str(min(timelist))) ``` ### Set up your third EOWorkflow - Feature engineering / Crop type grouping / Sampling ``` # TASK FOR LOADING EXISTING EOPATCHES load = LoadTask(patch_path) # TASK FOR CONCATENATION # bands and indices are concatenated into one features dictionary concatenate = ConcatenateData('FEATURES', ['MY-S2-L2A-BANDS','NDVI','NDWI','NORM']) # TASK FOR LINEAR INTERPOLATION # linear interpolation of full time-series and date resampling resample_range = (mindate, maxdate, day_range) linear_interp = LinearInterpolation( 'FEATURES', # name of field to interpolate mask_feature=(FeatureType.MASK, 'VALID_DATA'), # mask to be used in interpolation copy_features=[(FeatureType.VECTOR_TIMELESS, 'LPIS_{}'.format(year))], # features to keep resample_range=resample_range, # set the resampling range bounds_error=False # extrapolate with NaN's ) # TASK TO FIX AUSTRIAN LPIS DATA # on the basis of the wrongly defined column "SNAR_BEZEI" # a column "SNAR_BEZEI_NAME" is added which defines the LPIS class fixlpis = FixLPIS(feature='LPIS_{}'.format(year), country='Austria') # TASK FOR GROUPING LPIS INTO WANTED CLASSES # on the basis of the two grouping files an
individual crop type grouping can be applied # for changes these files have to be adapted grouplpis = GroupLPIS(year=year, lpis_to_group_file=lpis_to_group_file, crop_group_file=crop_group_file) # TASK FOR CONVERTING LPIS DATA FROM VECTOR TO RASTER FORMAT # multiple rasterized layers applying different crop type groupings can be stored in an EOPatch vtr = VectorToRaster( vector_input=(FeatureType.VECTOR_TIMELESS, 'LPIS_{}'.format(year)), raster_feature=(FeatureType.MASK_TIMELESS, 'LPIS_class_{}'.format(grouping_id)), values_column='GROUP_1_ID', raster_shape=(FeatureType.DATA, 'FEATURES'), no_data_value=0) # TASK FOR EROSION # erode each class of the reference map erosion = ErosionTask(mask_feature=(FeatureType.MASK_TIMELESS, 'LPIS_class_{}'.format(grouping_id), 'LPIS_class_{}_ERODED'.format(grouping_id)), disk_radius=1) # TASK FOR SPATIAL SAMPLING # evenly sample a fixed number of pixels per class from each patch spatial_sampling = SamplingTaskTask(grouping_id, pixel_thres, samp_class) # TASK FOR SAVING TO OUTPUT save = SaveTask(patch_path, overwrite_permission=OverwritePermission.OVERWRITE_PATCH) # define the workflow workflow = LinearWorkflow(load, concatenate, linear_interp, fixlpis, grouplpis, vtr, erosion, spatial_sampling, save) ``` ### Run third EOWorkflow ``` pbar = tqdm(total=len(patch_list)) for patch_name in patch_list: extra_param = {load: {'eopatch_folder': patch_name}, grouplpis: {'col_cropN_lpis': 'SNAR_BEZEI_NAME', 'col_cropN_lpistogroup': 'CROP_ID'}, save: {'eopatch_folder': patch_name}} workflow.execute(extra_param) pbar.update(1) ``` ### EOPatch data visualization Now, after all the data is transformed and sampled, let's load the single EOPatch again and look at the structure.
By executing ``` EOPatch.load(os.path.join(patch_path, 'eopatch_0_col-0_row-0')) ``` You obtain the following structure: ``` EOPatch( data: { FEATURES: numpy.ndarray(shape=(31, 1033, 1040, 13), dtype=float64) FEATURES_SAMPLED: numpy.ndarray(shape=(31, 6000, 1, 13), dtype=float64) } mask: {} scalar: {} label: {} vector: {} data_timeless: {} mask_timeless: { LPIS_class_basic: numpy.ndarray(shape=(1033, 1040, 1), dtype=uint8) LPIS_class_basic_ERODED: numpy.ndarray(shape=(1033, 1040, 1), dtype=uint8) LPIS_class_basic_ERODED_SAMPLED: numpy.ndarray(shape=(6000, 1, 1), dtype=uint8) } scalar_timeless: {} label_timeless: {} vector_timeless: { LPIS_2018: geopandas.GeoDataFrame(columns=['geometry', 'FS_KENNUNG', 'SL_FLAECHE', 'ID', 'SNAR_BEZEI', 'DateImported', 'SNAR_BEZEI_NAME', 'CROP_ID', 'english', 'slovenian', 'latin', 'GROUP_1', 'GROUP_1_original', 'GROUP_1_ID'], length=4140, crs=epsg:32633) } meta_info: {} bbox: BBox(((420717.14926283853, 5329441.919254168), (431121.7036578405, 5339770.083848184)), crs=EPSG:32633) timestamp: [datetime.datetime(2018, 1, 29, 0, 0), ..., datetime.datetime(2018, 9, 26, 0, 0)], length=31 ) ``` Things have changed, haven't they? Your 10 spectral bands and 3 indices are combined in `FEATURES` and the randomly sampled pixels are stored in `FEATURES_SAMPLED`. After filtering, your valid data masks have been deleted and your eroded and sampled reference data is available in practical raster format as `mask_timeless.FeatureType`. ## Combine samples and split into train and test data As you performed the spatial sampling for each patch separately, you have to combine the samples. But first you have to assign your EOPatches to either the training or the test dataset. In this case you take one in four EOPatches for testing. Only classes present in both train and test dataset are considered in the classification.
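The patch-wise split and the flattening of the sampled features are handled inside the notebook's custom `train_test_split_eopatches` helper; a toy NumPy sketch of the idea, assuming the `FEATURES_SAMPLED` shape `(t, n, 1, f)` shown in the structure dump above:

```python
import numpy as np

# Hypothetical illustration of the patch-wise train/test split
test_ratio = 4                     # take every 4th EOPatch for testing
patch_indices = np.arange(10)      # e.g. ten EOPatches after cleaning
test_idx = patch_indices[::test_ratio]             # patches 0, 4, 8
train_idx = np.setdiff1d(patch_indices, test_idx)  # the remaining seven

# Flatten one patch's sampled features from (t, n, 1, f) to (n, t * f)
t, n, f = 31, 6000, 13
features_sampled = np.zeros((t, n, 1, f))
X = np.moveaxis(features_sampled.squeeze(axis=2), 0, 1).reshape(n, t * f)
print(X.shape)  # (6000, 403)
```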
The sampled features and labels are loaded and reshaped into $n \times m$, where $n$ represents the number of training pixels, and $m = f \times t$ the number of all features, with $f$ the number of bands and band combinations (in this example 13) and $t$ the length of the resampled time-series (in this example 31) Terminology: In data science features are commonly referred to as "X" and labels as "y" ``` patch_list = os.listdir(patch_path) # update patch list # combine EOPatches to one dataset eopatches = [] for i in patch_list: eopatches.append(EOPatch.load(os.path.join(patch_path, i), lazy_loading=True)) eopatches = np.array(eopatches) # depending on the number of EOPatches adjust test_ratio if necessary and split into test and train data accordingly if len(patch_list) == 1: # split combined dataset into train and test data X_train, X_test, y_train, y_test, n_timesteps, n_features = train_test_split_eopatch(eopatches, features_dict, labels_dict) elif len(patch_list) < 4: test_ratio = 3 # split combined dataset into train and test data X_train, X_test, y_train, y_test, n_timesteps, n_features = train_test_split_eopatches(eopatches, test_ratio, features_dict, labels_dict) else: # split combined dataset into train and test data X_train, X_test, y_train, y_test, n_timesteps, n_features = train_test_split_eopatches(eopatches, test_ratio, features_dict, labels_dict) # mask out labels that are not in both train and test data and also mask out samples where features include NaN values X_train, X_test, y_train, y_test = masking(X_train, X_test, y_train, y_test) total_samp_count = X_train.shape[0] + X_test.shape[0] print('From your {} EOPatch(es) a total of {} samples were taken. 
' 'This sampling dataset includes {} training and {} test samples.'.format(len(patch_list), total_samp_count, X_train.shape[0], X_test.shape[0])) ``` ### Plot sample distribution ``` fig = plt.figure(figsize=(20, 15)) y_ids_train, y_counts_train = np.unique(y_train, return_counts=True) plt.subplot(2, 1, 1) plt.bar(range(len(y_ids_train)), y_counts_train) plt.xticks(range(len(y_ids_train)), [class_names[i] for i in y_ids_train], rotation=90, fontsize=20); plt.yticks(fontsize=20) plt.grid(True) plt.title('Training samples', size=20) y_ids_test, y_counts_test = np.unique(y_test, return_counts=True) plt.subplot(2, 1, 2) plt.bar(range(len(y_ids_test)), y_counts_test) plt.xticks(range(len(y_ids_test)), [class_names[i] for i in y_ids_test], rotation=90, fontsize=20); plt.yticks(fontsize=20) plt.grid(True) plt.title('Test samples', size=20) fig.subplots_adjust(wspace=0, hspace=1) ``` As you can see, you have managed to generate a well-balanced dataset. In both your 3/4 training and 1/4 test dataset no group is under- or over-represented, which provides a reasonable basis for the following classification. ### Scaling and one-hot-encoding In the following you want to feed your samples into two different algorithms. To guarantee equivalent conditions for both models, you need scaled features and one-hot-encoded labels.
``` # scale features scaler = StandardScaler() X_train = scaler.fit_transform(X_train) X_test = scaler.transform(X_test) X_train = np.reshape(X_train, (-1, n_timesteps, n_features)) X_test = np.reshape(X_test, (-1, n_timesteps, n_features)) # save feature related scaling properties joblib.dump(scaler, os.path.join(samples_path, 'Scaler_{}.bin'.format(grouping_id)), compress=True) # one-hot-encode labels y_train = y_train.reshape(-1, 1) y_test = y_test.reshape(-1, 1) enc = OneHotEncoder(sparse=False) enc.fit(np.array(class_ids).reshape(-1, 1)) y_train = enc.transform(y_train) y_test = enc.transform(y_test) label_count = y_train.shape[1] ``` ## Save or load samples (optional) You can choose to save your samples for later applications. For entering the upcoming part of prediction, this is not necessary. ``` # save samples (optional) np.save(os.path.join(samples_path, 'X_train_{}'.format(grouping_id)), X_train) np.save(os.path.join(samples_path, 'X_test_{}'.format(grouping_id)), X_test) np.save(os.path.join(samples_path, 'y_train_{}'.format(grouping_id)), y_train) np.save(os.path.join(samples_path, 'y_test_{}'.format(grouping_id)), y_test) # load samples (optional) X_train = np.load(os.path.join(samples_path, 'X_train_{}.npy'.format(grouping_id))) X_test = np.load(os.path.join(samples_path, 'X_test_{}.npy'.format(grouping_id))) y_train = np.load(os.path.join(samples_path, 'y_train_{}.npy'.format(grouping_id))) y_test = np.load(os.path.join(samples_path, 'y_test_{}.npy'.format(grouping_id))) ``` # 6. Prediction Congrats, you've mastered the heavy preprocessing steps! Now, this is where the magic of Machine and Deep Learning happens. State-of-the-art [LightGBM](https://github.com/Microsoft/LightGBM) is used as a ML model. It is a fast, distributed, high-performance gradient boosting framework based on decision tree algorithms, used for many ML tasks.
As a novel competitor, the [TempCNN](https://www.mdpi.com/2072-4292/11/5/523/htm#sec4-remotesensing-11-00523) DL architecture is entering the game. So far, Convolutional Neural Networks have mainly, and successfully, been applied to image and language recognition tasks. By applying its convolutional filters along the temporal dimension, the Temporal CNN is able to exploit the temporal information of satellite image time series. ## Set up and train LightGBM model The [default hyper-parameters](https://lightgbm.readthedocs.io/en/latest/Parameters.html) are used in this example. For more info on [parameter tuning](https://lightgbm.readthedocs.io/en/latest/Parameters-Tuning.html), check the documentation of the package. ``` %%time # Set up training classes rev_y_train = [np.argmax(y, axis=None, out=None) for y in y_train] rev_y_train_unique = np.unique(rev_y_train) # flatten features from (count, timeframes, features) to (count, timeframes * features) a, b, c = X_train.shape X_train_lgbm = X_train.reshape(a, b * c) # Set up the LightGBM model model_lgbm = lgb.LGBMClassifier( objective='multiclass', num_class=len(rev_y_train_unique), metric='multi_logloss' ) # Train the model model_lgbm.fit(X_train_lgbm, rev_y_train) # Save the model joblib.dump(model_lgbm, os.path.join(models_path, 'model_lgbm_CropTypeClass_{}.pkl'.format(grouping_id))) ``` ## Set up and train TempCNN model In this example, a proven architecture from the scientific paper linked above is adopted.
``` %%time # Set up the TempCNN architecture model_tcnn = Sequential() model_tcnn.add(Conv1D(filters=5, kernel_size=3, activation='relu', input_shape=(n_timesteps, n_features))) model_tcnn.add(Dropout(0.5)) model_tcnn.add(Conv1D(filters=5, kernel_size=3, activation='relu')) model_tcnn.add(Dropout(0.5)) model_tcnn.add(Conv1D(filters=5, kernel_size=3, activation='relu')) model_tcnn.add(Dropout(0.5)) model_tcnn.add(Flatten()) model_tcnn.add(Dense(256, activation='relu')) model_tcnn.add(Dense(label_count, activation='softmax')) model_tcnn.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy']) # Train the model model_tcnn.fit(X_train, y_train, validation_data=(X_test, y_test), epochs=10, batch_size=32, verbose=1, shuffle=True) # Save the model model_tcnn.save(os.path.join(models_path, 'model_tcnn_CropTypeClass_{}.h5'.format(grouping_id))) ``` ## Validation and evaluation Validation of the models is a crucial step in data science. All models are wrong, but some are less wrong than others, so model evaluation is important. In order to validate the models, we use the test set to predict the classes, and then compare the predicted set of labels to the "ground truth".
The validation is performed by evaluating various metrics, such as accuracy, precision, recall, and the $F_1$ score, some of which are nicely described [in this blog post](https://medium.com/greyatom/performance-metrics-for-classification-problems-in-machine-learning-part-i-b085d432082b). Get the overall accuracy (OA) and the weighted $F_1$ score ``` # flatten features from (count, timeframes, features) to (count, timeframes * features) # and set up test classes d, e, f = X_test.shape X_test_lgbm = X_test.reshape(d, e * f) rev_y_test = [np.argmax(y, axis=None, out=None) for y in y_test] # Load the models model_lgbm = joblib.load(os.path.join(models_path, 'model_lgbm_CropTypeClass_{}.pkl'.format(grouping_id))) model_tcnn = load_model(os.path.join(models_path, 'model_tcnn_CropTypeClass_{}.h5'.format(grouping_id))) # get overall accuracy and weighted F1-score for LightGBM py_test_lgbm = model_lgbm.predict(X_test_lgbm) print('Classification accuracy LightGBM {:.1f}%'.format(100 * metrics.accuracy_score(rev_y_test, py_test_lgbm))) print('Classification F1-score LightGBM {:.1f}%'.format(100 * metrics.f1_score(rev_y_test, py_test_lgbm, average='weighted'))) py_test_tcnn = model_tcnn.predict_classes(X_test) print('Classification accuracy TempCNN {:.1f}%'.format(100 * metrics.accuracy_score(rev_y_test, py_test_tcnn))) print('Classification F1-score TempCNN {:.1f}%'.format(100 * metrics.f1_score(rev_y_test, py_test_tcnn, average='weighted'))) ``` $F_1$ score, precision, and recall for each class separately ``` # LightGBM: F1-score, precision, and recall for each class separately class_labels = np.unique(rev_y_test) class_names = [entry.class_name for entry in LPISCLASS] f1_scores = metrics.f1_score(rev_y_test, py_test_lgbm, labels=class_labels, average=None) recall = metrics.recall_score(rev_y_test, py_test_lgbm, labels=class_labels, average=None) precision = metrics.precision_score(rev_y_test, py_test_lgbm, labels=class_labels, average=None) print('LightGBM:') print(' Class = F1 | Recall | Precision') print(' --------------------------------------------------') for idx, croptype in enumerate([class_names[idx] for idx in class_labels]): print(' * {0:20s} = {1:2.1f} | {2:2.1f} | {3:2.1f}'.format(croptype, f1_scores[idx] * 100, recall[idx] * 100, precision[idx] * 100)) # TempCNN: F1-score, precision, and recall for each class separately class_names = [entry.class_name for entry in LPISCLASS] f1_scores = metrics.f1_score(rev_y_test, py_test_tcnn, labels=class_labels, average=None) recall = metrics.recall_score(rev_y_test, py_test_tcnn, labels=class_labels, average=None) precision = metrics.precision_score(rev_y_test, py_test_tcnn, labels=class_labels, average=None) print('TempCNN:') print(' Class = F1 | Recall | Precision') print(' --------------------------------------------------') for idx, croptype in enumerate([class_names[idx] for idx in class_labels]): print(' * {0:20s} = {1:2.1f} | {2:2.1f} | {3:2.1f}'.format(croptype, f1_scores[idx] * 100, recall[idx] * 100, precision[idx] * 100)) ``` ### Plot the standard Confusion Matrix for LightGBM ``` fig = plt.figure(figsize=(20, 20)) conf_matrix_gbm = metrics.confusion_matrix(rev_y_test, py_test_lgbm) plot_confusion_matrix(conf_matrix_gbm, classes=[name for idx, name in enumerate(class_names) if idx in class_labels], normalize=True, ylabel='Truth (CROPS)', xlabel='Predicted (LightGBM)', title='Confusion matrix'); ``` ### Plot the standard Confusion Matrix for TempCNN ``` fig = plt.figure(figsize=(20, 20)) conf_matrix_tcnn = metrics.confusion_matrix(rev_y_test, py_test_tcnn) plot_confusion_matrix(conf_matrix_tcnn, classes=[name for idx, name in enumerate(class_names) if idx in class_labels], normalize=True, ylabel='Truth (CROPS)', xlabel='Predicted (TempCNN)', title='Confusion matrix'); ``` The validation of the models shows that for most of the groups both perform very well.
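Before looking at individual classes, it helps to recall how per-class recall and precision are read off a confusion matrix: recall along a row (how much of the true class was found), precision along a column (how much of what was predicted as that class really is it). A toy sketch with made-up numbers, not results from these models:

```python
import numpy as np

# toy confusion matrix: rows = truth, columns = prediction
# classes: 0 = grass, 1 = orchards (illustrative numbers only)
cm = np.array([[90, 10],
               [40, 60]])

recall = np.diag(cm) / cm.sum(axis=1)     # per true class (row sums)
precision = np.diag(cm) / cm.sum(axis=0)  # per predicted class (column sums)
f1 = 2 * precision * recall / (precision + recall)
```

In this toy matrix, 40 of 100 true orchard pixels end up as grass, so orchard recall is only 0.6, yet only 10 grass pixels are predicted as orchards, so orchard precision stays high — exactly the kind of asymmetry discussed for the two models.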
However, there seem to be differences in how they confuse certain classes: * In this specific case, orchards are most likely to catch your attention. LightGBM performs worse than TempCNN. But more interesting than the overall accuracy is that LightGBM frequently classifies actual orchards as grass (low recall), while no other class is mistaken for orchards (high precision). In contrast, TempCNN recognizes actual orchards well (high recall) but frequently identifies actual grass as orchards (lower precision). In general, confusion with the grass class is not surprising, as there is a lot of grass between the individual trees. * Both models also perform poorly on potatoes, as their cultivation practices are quite similar to those of peas. * Poor performance for the group Other is to be expected given its diverse class composition. ### Most important features The LightGBM model contains information about feature importances. Let's check which features are most important for classification.
``` eopatch_name = 'eopatch_0_col-0_row-0' # get the name of the first newly created EOPatch eopatch = EOPatch.load(os.path.join(patch_path, eopatch_name), lazy_loading=True) timeframe_count = eopatch.data['FEATURES'].shape[0] features_count = eopatch.data['FEATURES'].shape[3] del eopatch z = model_lgbm.feature_importances_.reshape((timeframe_count, features_count)) fnames = ['B02','B03','B04','B05','B06','B07','B08','B8A','B11','B12','NDVI','NDWI','NORM'] fig = plt.figure(figsize=(15, 15)) ax = plt.gca() # plot the importances im = ax.imshow(z, aspect=0.25) plt.xticks(range(len(fnames)), fnames, rotation=45, fontsize=20) plt.yticks(range(timeframe_count), ['T{}'.format(i) for i in range(timeframe_count)], fontsize=20) plt.xlabel('Bands and band related features', fontsize=20) plt.ylabel('Time frames', fontsize=15) plt.ylim(top=-0.5, bottom=timeframe_count - 0.5) ax.xaxis.tick_top() ax.xaxis.set_label_position('top') fig.subplots_adjust(wspace=0, hspace=0) cb = fig.colorbar(im, ax=[ax], orientation='horizontal', pad=0.01, aspect=100) cb.ax.tick_params(labelsize=20) cb.set_label('Feature importance', fontsize=15) ``` As you can see, the most important features for LightGBM are recorded within the main growth period. Here, different growing stages that characterize certain crop types can be detected. ## Prediction Now that both models have been validated, the remaining task is to predict over the whole AOI. As LightGBM achieves a higher overall accuracy, it is used for the further predictions. If you are interested in a specific crop group for which TempCNN outperforms LightGBM, simply change the following configuration.
``` # swap the comments to use a different model # model = load_model(os.path.join(models_path, 'model_tcnn_CropTypeClass_{}.h5'.format(grouping_id))) # load TempCNN model model = joblib.load(os.path.join(models_path, 'model_lgbm_CropTypeClass_{}.pkl'.format(grouping_id))) # load LightGBM model # load respective feature scaler scaler = joblib.load(os.path.join(samples_path, 'Scaler_{}.bin'.format(grouping_id))) ``` In the following you define a workflow to make a prediction on the existing EOPatches. The EOTask accepts the features and the names for the labels. In addition, you export GeoTIFF images of the prediction to easily access your visual results. ### Set up your fourth EOWorkflow - Prediction ``` # TASK TO LOAD EXISTING EOPATCHES load = LoadTask(patch_path) # TASK FOR PREDICTION predict = PredictPatch(model, (FeatureType.DATA, 'FEATURES'), 'LBL_GBM', scaler) # TASK TO EXPORT TIFF export_tiff = ExportToTiff((FeatureType.MASK_TIMELESS, 'LBL_GBM')) tiff_location = predictions_path if not os.path.isdir(tiff_location): os.makedirs(tiff_location) # TASK FOR SAVING save = SaveTask(patch_path, overwrite_permission=OverwritePermission.OVERWRITE_PATCH) workflow = LinearWorkflow(load, predict, export_tiff, save) ``` ### Run fourth EOWorkflow ``` patch_list = os.listdir(patch_path) # update patch list # execute workflow pbar = tqdm(total=len(patch_list)) for patch_name in patch_list: extra_param = {load: {'eopatch_folder': patch_name}, export_tiff: {'filename': '{}/prediction_{}.tiff'.format(predictions_path, patch_name)}, save: {'eopatch_folder': patch_name}} workflow.execute(extra_param) pbar.update() ``` ### EOPatch data visualization Having finished the last processing step, let's have a look at the final EOPatch by executing ``` EOPatch.load(os.path.join(patch_path, 'eopatch_0_col-0_row-0')) ``` You obtain the following structure, which is extended by your predicted data stored as `LBL_GBM` under the `mask_timeless` feature type: ``` EOPatch( data: { FEATURES:
numpy.ndarray(shape=(34, 1028, 1033, 13), dtype=float64) FEATURES_SAMPLED: numpy.ndarray(shape=(34, 6000, 1, 13), dtype=float64) } mask: {} scalar: {} label: {} vector: {} data_timeless: {} mask_timeless: { LBL_GBM: numpy.ndarray(shape=(1028, 1033, 1), dtype=int64) LPIS_class_basic: numpy.ndarray(shape=(1028, 1033, 1), dtype=uint8) LPIS_class_basic_ERODED: numpy.ndarray(shape=(1028, 1033, 1), dtype=uint8) LPIS_class_basic_ERODED_SAMPLED: numpy.ndarray(shape=(6000, 1, 1), dtype=uint8) } scalar_timeless: {} label_timeless: {} vector_timeless: { LPIS_2018: geopandas.GeoDataFrame(columns=['geometry', 'FS_KENNUNG', 'SL_FLAECHE', 'ID', 'SNAR_BEZEI', 'DateImported', 'SNAR_BEZEI_NAME', 'CROP_ID', 'english', 'slovenian', 'latin', 'GROUP_1', 'GROUP_1_original', 'GROUP_1_ID'], length=4091, crs=EPSG:32633) } meta_info: {} bbox: BBox(((420862.3179607267, 5329537.336315366), (431194.28800678457, 5339817.792378783)), crs=CRS('32633')) timestamp: [datetime.datetime(2018, 1, 6, 0, 0), ..., datetime.datetime(2018, 9, 27, 0, 0)], length=34 ) ``` ## Visualization of the results ### Visualize predicted EOPatch data ``` eopatch_name = 'eopatch_0_col-0_row-0' # get the name of the first newly created EOPatch eopatch = EOPatch.load(os.path.join(patch_path, eopatch_name), lazy_loading=True) # update colormap cb_classes = np.unique(np.unique(eopatch.mask_timeless['LBL_GBM'])) custom_cmap = mpl.colors.ListedColormap([lpisclass_cmap.colors[i] for i in cb_classes]) custom_norm = mpl.colors.BoundaryNorm(np.arange(-0.5, len(cb_classes), 1), custom_cmap.N) # mask prediction - exclude pixel with no LPIS reference labels = np.array(eopatch.mask_timeless['LPIS_class_{}'.format(grouping_id)]) mask = labels == 0 labelspred = np.array(eopatch.mask_timeless['LBL_GBM']) LBL = np.ma.masked_array(labelspred, mask) # plot figure fig, ax = plt.subplots(figsize=(20, 20)) im = ax.imshow(LBL.squeeze(), cmap=lpisclass_cmap, norm=lpisclass_norm) ax.set_xticks([]) ax.set_yticks([]) ax.set_aspect('auto') 
fig.subplots_adjust(wspace=0, hspace=0) # plot colorbar cb = fig.colorbar(mpl.cm.ScalarMappable(norm=custom_norm, cmap=custom_cmap), orientation="horizontal", pad=0.01, aspect=100) cb.ax.tick_params(labelsize=20) cb.set_ticks(range(len(cb_classes))) cb.ax.set_xticklabels([class_names[i] for i in cb_classes], rotation=90, fontsize=15) plt.show() ``` ### Compare ground truth and prediction ``` # mask prediction - exclude pixels with no LPIS reference labels = np.array(eopatch.mask_timeless['LPIS_class_{}'.format(grouping_id)]) mask = labels == 0 labelspred = np.array(eopatch.mask_timeless['LBL_GBM']) LBL = np.ma.masked_array(labelspred, mask) fig = plt.figure(figsize=(20, 10)) # plot prediction ax1 = plt.subplot(121) im = ax1.imshow(LBL.squeeze(), cmap=lpisclass_cmap, norm=lpisclass_norm) plt.title('Prediction') ax1.set_xticks([]) ax1.set_yticks([]) ax1.set_aspect('auto') # plot ground truth ax2 = plt.subplot(122) im = ax2.imshow(labels.squeeze(), cmap=lpisclass_cmap, norm=lpisclass_norm) plt.title('Ground truth') ax2.set_xticks([]) ax2.set_yticks([]) ax2.set_aspect('auto') axlist = [ax1, ax2] fig.subplots_adjust(wspace=0, hspace=0) # plot colorbar cb = fig.colorbar(mpl.cm.ScalarMappable(norm=custom_norm, cmap=custom_cmap), ax=axlist, orientation="horizontal", pad=0.01, aspect=100) cb.ax.tick_params(labelsize=20) cb.set_ticks(range(len(cb_classes))) cb.ax.set_xticklabels([class_names[i] for i in cb_classes], rotation=90, fontsize=15) plt.show() ``` ### Close-up comparison ``` # create green-to-red colormap colors = [(0, 1, 0), (1, 0, 0)] # G -> R cmap_name = 'my_list' cm = LinearSegmentedColormap.from_list(cmap_name, colors) fig = plt.figure(figsize=(20, 20)) inspect_size = 100 w, h = labels.squeeze().shape w_min = np.random.choice(range(w - inspect_size)) h_min = np.random.choice(range(h - inspect_size)) ax = plt.subplot(2, 2, 1) plt.imshow(labels.squeeze()[w_min: w_min + inspect_size, h_min: h_min + inspect_size], cmap=lpisclass_cmap, norm=lpisclass_norm)
plt.xticks([]) plt.yticks([]) ax.set_aspect('auto') plt.title('Ground truth', fontsize=20) ax = plt.subplot(2, 2, 2) plt.imshow(LBL.squeeze()[w_min: w_min + inspect_size, h_min: h_min + inspect_size], cmap=lpisclass_cmap, norm=lpisclass_norm) plt.xticks([]) plt.yticks([]) ax.set_aspect('auto') plt.title('Prediction', fontsize=20) ax = plt.subplot(2, 2, 3) mask = LBL.squeeze() != labels.squeeze() plt.imshow(mask[w_min: w_min + inspect_size, h_min: h_min + inspect_size], cmap=cm) plt.xticks([]) plt.yticks([]); ax.set_aspect('auto') plt.title('Difference', fontsize=20) ax = plt.subplot(2, 2, 4) image = np.clip(eopatch.data['FEATURES'][8][..., [2, 1, 0]] * 3.5, 0, 1) plt.imshow(image[w_min: w_min + inspect_size, h_min: h_min + inspect_size]) plt.xticks([]) plt.yticks([]); ax.set_aspect('auto') plt.title('True Color', fontsize=20) fig.subplots_adjust(wspace=0.1, hspace=0.1) ``` As you can probably see in the randomly chosen section of the AOI, there are certain patterns of misclassified pixels: * Some complete fields are mistaken for another crop group. In these cases the algorithm got confused by similar spectral characteristics. You already got an overview of the frequency and combination of those incidents in the evaluation part above. * Misclassified single pixels are usually located at the borders of the respective fields. Here, the "mixed-pixel problem" impacts the prediction results. These pixels were excluded from the modeling, as they may include spectral reflectance values of different vegetation types and thereby confuse the algorithm. # Next steps Now, after your first successful classification, are you hooked? But perhaps the region around Wels in Austria was not your actual AOI, or you want to try other vegetation groupings? Then here are some suggestions on how you could proceed: * **Customize configurations** The notebook offers various possibilities to change parameters and evaluate their effects.
Simply enter the configuration section in the beginning and modify e.g. cloud cover thresholds or your sampling strategy. * **Change the AOI within Austria** This would be the simplest case to apply. You just have to place a Shapefile or GeoJSON of your own AOI in the location of the "Area_AOI.geojson" from the example. The size and shape of the included polygon are irrelevant. * **Try alternative crop groupings** In order to regroup the LPIS classes, you need to have a closer look at the two CSV files in the `GeneralData` folder. * `at_lpis_2018_crop_to_group_mapping_basic.csv`: Here you can assign LPIS classes to different crop groups.\ *_CROP_ID_* represents the respective LPIS class\ *_GROUP_1_* represents the respective group you want a class in * `crop_group_1_definition_basic.csv`: Here you can combine or separate individual crop groups by assigning the respective ID.\ *_GROUP_1_* again represents the groups\ *_GROUP_1_ID_* represents the respective numeric ID * **Apply the notebook to another country** Another country means a different AOI plus different LPIS classes. * The first requires no additional effort. Change your AOI file and run the processes. EO data is downloaded and processed exactly as in the example. * But when it comes to the ground truth data, this is where things get tricky, as you additionally need to customize the CSV grouping files for your specific country.
# Classwork 6 ### Critique another group's classwork 5 ### Group Eric-Lance #### Is it clear how the code is organized? We, the people of Datacats, believe this module is very well organized and easy to follow. We can clearly see what each function does and where each function lies, i.e. nothing looks confusing or out of place. There are comments in between different steps, which makes the different parts of the code easy to follow. Overall, the organization is good. #### Is the code properly documented with both docstrings and supplementary comments according to industry standards? Yes, the module has correctly documented docstrings, both for the module docstring and the function docstrings. As stated before, there are comments throughout the code which make the algorithms easy to follow. One critique we have is that the docstrings themselves could contain more information. The module docstring, we feel, is fine. The function docstring for the init could use the wording "the user needs to input" or the words "parameter 1, parameter 2", so that if someone reading the docstring did not know how to use the function, they would have better clarification of how to use the function and which parameter input goes to which variable. We're not sure if it's a necessity, but it could be something to think about when writing programs where users not as familiar with coding need to figure out how to use a module. The other function docstrings were taken from the original abscplane class, which is straightforward, so we think they're stated well, but we suppose they could use some more elaboration on the function. Once again, not sure if it's needed or necessary, but it could be something to think about. #### Can you follow the algorithm of the code, i.e., what it is doing, and how? It is very easy to follow. We loved how you guys made the xstep and ystep variables part of the class's initial variables, so they can be used again in different functions.
We liked how you built the plane; the coding of it is very concise, short, and saved a lot of steps compared to what we did. Very straightforward and simple, which is good. #### Do you see any suggestions for how to improve the code? Discuss your critique with the members of the other group. Probably the biggest suggestion for you guys is to put the test functions in a different module/file. This way, the code does not look unnecessarily long or cluttered. Although the test functions themselves don't interfere with the code itself, we think that, aesthetically, it would make your module look better. Although, we will say we liked how you programmed your test functions. They are very good.
# Working with Streaming Data Learning Objectives 1. Learn how to process real-time data for ML models using Cloud Dataflow 2. Learn how to serve online predictions using real-time data ## Introduction It can be useful to leverage real-time data in a machine learning model when making a prediction. However, doing so requires setting up a streaming data pipeline, which can be non-trivial. Typically you will have the following: - A series of IoT devices generating and sending data from the field in real-time (in our case these are the taxis) - A messaging bus that receives and temporarily stores the IoT data (in our case this is Cloud Pub/Sub) - A streaming processing service that subscribes to the messaging bus, windows the messages and performs data transformations on each window (in our case this is Cloud Dataflow) - A persistent store to keep the processed data (in our case this is BigQuery) These steps happen continuously and in real-time, and are illustrated by the blue arrows in the diagram below. Once this streaming data pipeline is established, we need to modify our model serving to leverage it. This simply means adding a call to the persistent store (BigQuery) to fetch the latest real-time data when a prediction request comes in. This flow is illustrated by the red arrows in the diagram below. <img src='../assets/taxi_streaming_data.png' width='80%'> In this lab we will address how to process real-time data for machine learning models. We will use the same data as our previous 'taxifare' labs, but with the addition of `trips_last_5min` data as an additional feature. This is our proxy for real-time traffic. ``` !pip install --upgrade apache-beam[gcp] ``` Restart the kernel before proceeding further (On the Notebook menu - Kernel - Restart Kernel).
``` import os import shutil import numpy as np import tensorflow as tf from google import api_core from google.cloud import aiplatform, bigquery from google.protobuf import json_format from google.protobuf.struct_pb2 import Value from matplotlib import pyplot as plt from tensorflow import keras from tensorflow.keras.callbacks import TensorBoard from tensorflow.keras.layers import Dense, DenseFeatures from tensorflow.keras.models import Sequential print(tf.__version__) # Change below if necessary PROJECT = !gcloud config get-value project # noqa: E999 PROJECT = PROJECT[0] BUCKET = PROJECT REGION = "us-central1" %env PROJECT=$PROJECT %env BUCKET=$BUCKET %env REGION=$REGION %%bash gcloud config set project $PROJECT gcloud config set ai/region $REGION ``` ## Re-train our model with `trips_last_5min` feature In this lab, we want to show how to process real-time data for training and prediction. So, we need to retrain our previous model with this additional feature. Go through the notebook `4a_streaming_data_training.ipynb`. Open and run the notebook to train and save a model. This notebook is very similar to what we did in the Introduction to Tensorflow module but note the added feature for `trips_last_5min` in the model and the dataset. ## Simulate Real Time Taxi Data Since we don’t actually have real-time taxi data we will synthesize it using a simple python script. The script publishes events to Google Cloud Pub/Sub. Inspect the `iot_devices.py` script in the `taxicab_traffic` folder. It is configured to send about 2,000 trip messages every five minutes with some randomness in the frequency to mimic traffic fluctuations. These numbers come from looking at the historical average of taxi ride frequency in BigQuery. In production this script would be replaced with actual taxis with IoT devices sending trip data to Cloud Pub/Sub. 
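The publishing behavior can be approximated with a short simulation. The sketch below only generates message timestamps as a Poisson process at roughly 2,000 messages per 5 minutes; it is an illustration of the idea, not the actual `iot_devices.py` script, which publishes real messages to Cloud Pub/Sub:

```python
import random

random.seed(42)

RATE = 2000 / 300  # about 2,000 trip messages per 5-minute window

def simulate_publish_times(duration_s, rate):
    """Timestamps of simulated trip messages (Poisson arrivals)."""
    t, times = 0.0, []
    while True:
        t += random.expovariate(rate)  # random gap mimics traffic fluctuation
        if t >= duration_s:
            return times
        times.append(t)

times = simulate_publish_times(300, RATE)
print(len(times))  # roughly 2,000 messages in 5 minutes
```

Exponential inter-arrival gaps give the irregular, bursty timing you would expect from real taxi traffic rather than a perfectly even message stream.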
To execute the `iot_devices.py` script, launch a terminal and navigate to the `asl-ml-immersion/notebooks/building_production_ml_systems/solutions` directory. Then run the following two commands. ```bash PROJECT_ID=$(gcloud config get-value project) python3 ./taxicab_traffic/iot_devices.py --project=$PROJECT_ID ``` You will see new messages being published every 5 seconds. **Keep this terminal open** so it continues to publish events to the Pub/Sub topic. If you open [Pub/Sub in your Google Cloud Console](https://console.cloud.google.com/cloudpubsub/topic/list), you should be able to see a topic called `taxi_rides`. ## Create a BigQuery table to collect the processed data In the next section, we will create a dataflow pipeline to write processed taxifare data to a BigQuery Table, however that table does not yet exist. Execute the following commands to create a BigQuery dataset called `taxifare` and a table within that dataset called `traffic_realtime`. ``` bq = bigquery.Client() dataset = bigquery.Dataset(bq.dataset("taxifare")) try: bq.create_dataset(dataset) # will fail if dataset already exists print("Dataset created.") except api_core.exceptions.Conflict: print("Dataset already exists.") ``` Next, we create a table called `traffic_realtime` and set up the schema. ``` dataset = bigquery.Dataset(bq.dataset("taxifare")) table_ref = dataset.table("traffic_realtime") SCHEMA = [ bigquery.SchemaField("trips_last_5min", "INTEGER", mode="REQUIRED"), bigquery.SchemaField("time", "TIMESTAMP", mode="REQUIRED"), ] table = bigquery.Table(table_ref, schema=SCHEMA) try: bq.create_table(table) print("Table created.") except api_core.exceptions.Conflict: print("Table already exists.") ``` ## Launch Streaming Dataflow Pipeline Now that we have our taxi data being pushed to Pub/Sub, and our BigQuery table set up, let’s consume the Pub/Sub data using a streaming DataFlow pipeline. The pipeline is defined in `./taxicab_traffic/streaming_count.py`. Open that file and inspect it. 
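The sliding-window behavior you will configure in `streaming_count.py` can be understood independently of Beam: with a 5-minute window recomputed every 15 seconds, each event belongs to many overlapping windows. A plain-Python sketch of the window assignment (illustrative only; the pipeline itself uses Apache Beam's windowing):

```python
WINDOW_SIZE = 300  # seconds (5 minutes)
PERIOD = 15        # seconds between window start times

def window_starts(event_ts):
    """Start times of every sliding window containing an event timestamp."""
    # the latest window containing the event starts at the most recent
    # period boundary at or before the event
    last_start = (event_ts // PERIOD) * PERIOD
    first_start = last_start - WINDOW_SIZE + PERIOD
    return [t for t in range(first_start, last_start + PERIOD, PERIOD) if t >= 0]

# an event at t=600s falls into 300/15 = 20 overlapping windows
print(len(window_starts(600)))  # → 20
```

This is why the count in BigQuery refreshes every 15 seconds while always reflecting the previous 5 minutes of messages.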
There are 5 transformations being applied: - Read from PubSub - Window the messages - Count number of messages in the window - Format the count for BigQuery - Write results to BigQuery **TODO:** Open the file ./taxicab_traffic/streaming_count.py and find the TODO there. Specify a sliding window that is 5 minutes long, and gets recalculated every 15 seconds. Hint: Reference the [beam programming guide](https://beam.apache.org/documentation/programming-guide/#windowing) for guidance. To check your answer reference the solution. For the second transform, we specify a sliding window that is 5 minutes long, and recalculate values every 15 seconds. In a new terminal, launch the dataflow pipeline using the command below. You can change the `BUCKET` variable, if necessary. Here it is assumed to be your `PROJECT_ID`. ```bash PROJECT_ID=$(gcloud config get-value project) REGION=$(gcloud config get-value ai/region) BUCKET=$PROJECT_ID # change as necessary python3 ./taxicab_traffic/streaming_count.py \ --input_topic taxi_rides \ --runner=DataflowRunner \ --project=$PROJECT_ID \ --region=$REGION \ --temp_location=gs://$BUCKET/dataflow_streaming ``` Once you've submitted the command above you can examine the progress of that job in the [Dataflow section of Cloud console](https://console.cloud.google.com/dataflow). ## Explore the data in the table After a few moments, you should see new data written to your BigQuery table as well. Re-run the query periodically to observe new data streaming in! You should see a new row every 15 seconds. ``` %%bigquery SELECT * FROM `taxifare.traffic_realtime` ORDER BY time DESC LIMIT 10 ``` ## Make predictions from the new data In the rest of the lab, we'll reference the model we trained and deployed from the previous labs, so make sure you have run the code in the `4a_streaming_data_training.ipynb` notebook.
The `add_traffic_last_5min` function below will query the `traffic_realtime` table to find the most recent traffic information and add that feature to our instance for prediction. **Exercise.** Complete the code in the function below. Write a SQL query that will return the most recent entry in `traffic_realtime` and add it to the instance. ``` # TODO 2a. Write a function to take most recent entry in `traffic_realtime` # table and add it to instance. def add_traffic_last_5min(instance): bq = bigquery.Client() query_string = """ TODO: Your code goes here """ trips = bq.query(query_string).to_dataframe()["trips_last_5min"][0] instance['traffic_last_5min'] = # TODO: Your code goes here. return instance ``` The `traffic_realtime` table is updated in real time using Cloud Pub/Sub and Dataflow, so if you run the cell below periodically, you should see the `traffic_last_5min` feature added to the instance and change over time. ``` add_traffic_last_5min( instance={ "dayofweek": 4, "hourofday": 13, "pickup_longitude": -73.99, "pickup_latitude": 40.758, "dropoff_latitude": 41.742, "dropoff_longitude": -73.07, } ) ``` Finally, we'll use the Python API to call predictions on an instance, using the real-time traffic information in our prediction. Just as above, you should notice that our resulting predictions change with time as our real-time traffic information changes as well. **Exercise.** Complete the code below to call prediction on an instance incorporating realtime traffic info. You should - use the function `add_traffic_last_5min` to add the most recent realtime traffic data to the prediction instance - call prediction on your model for this realtime instance and save the result as a variable called `response` - parse the JSON of `response` to print the predicted taxifare cost Copy the `ENDPOINT_RESOURCENAME` from the deployment in the previous lab to the beginning of the block below. ``` # TODO 2b. Write code to call prediction on instance using realtime traffic # info.
Hint: Look at this sample # https://github.com/googleapis/python-aiplatform/blob/master/samples/snippets/predict_custom_trained_model_sample.py # TODO: Copy the `ENDPOINT_RESOURCENAME` from the deployment in the previous # lab. ENDPOINT_RESOURCENAME = "" api_endpoint = f"{REGION}-aiplatform.googleapis.com" # The AI Platform services require regional API endpoints. client_options = {"api_endpoint": api_endpoint} # Initialize client that will be used to create and send requests. # This client only needs to be created once, and can be reused for multiple # requests. client = aiplatform.gapic.PredictionServiceClient(client_options=client_options) instance = { "dayofweek": 4, "hourofday": 13, "pickup_longitude": -73.99, "pickup_latitude": 40.758, "dropoff_latitude": 41.742, "dropoff_longitude": -73.07, } # The format of each instance should conform to the deployed model's # prediction input schema. instance_dict = # TODO: Your code goes here. instance = json_format.ParseDict(instance, Value()) instances = [instance] response = # TODO: Your code goes here. # The predictions are a google.protobuf.Value representation of the model's # predictions. print(" prediction:", # TODO: Your code goes here. ) ``` ## Cleanup In order to avoid ongoing charges, when you are finished with this lab, you can delete your Dataflow job from the [Dataflow section of Cloud console](https://console.cloud.google.com/dataflow). An endpoint with a model deployed to it incurs ongoing charges, as there must be at least one replica defined (the `min-replica-count` parameter is at least 1). In order to stop incurring charges, you can click on the endpoint on the [Endpoints page of the Cloud Console](https://console.cloud.google.com/vertex-ai/endpoints) and un-deploy your model. Copyright 2021 Google Inc. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License.
You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License
# Build GAN (Generative Adversarial Networks) with PyTorch and SageMaker

### About GAN

A Generative Adversarial Network (GAN) is a generative machine learning model, which is widely used in advertising, games, entertainment, media, pharmaceuticals and other industries. It can be used to create fictional characters and scenes, simulate facial aging, change image styles, produce chemical formulas, and so on.

GAN was proposed by Ian Goodfellow in 2014. It is a deep neural network architecture consisting of a generator network and a discriminator network. The generator network generates "fake" data and tries to deceive the discriminator network; the discriminator network examines the generated data and tries to correctly identify all "fake" data. In the process of training iterations, the two networks continue to evolve and compete until they reach an equilibrium state (reference: Nash equilibrium), where the discriminator network can no longer recognize "fake" data, and the training ends.

This example will lead you through building a GAN model with the PyTorch framework, introducing GAN from the perspective of engineering practice, and opening a new and interesting AI/ML experience in generative models.

### Environment setup

Upgrade packages

```
!pip install --upgrade pip sagemaker awscli boto3 numpy ipywidgets
!pip install Pillow==7.1.2
```

Create folders

```
!mkdir -p data src tmp
```

### Download data

There are many public datasets on the Internet, which are very helpful for machine learning engineering and scientific research, such as algorithm study and evaluation. We will use the MNIST dataset, a handwritten digits dataset, to train a GAN model and eventually generate some fake "handwritten" digits.
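The adversarial game described above is commonly written as a minimax objective (this is the standard formulation from Goodfellow et al., stated here for reference rather than extracted from this notebook's code):

$$\min_G \max_D \; \mathbb{E}_{x \sim p_\text{data}}\big[\log D(x)\big] + \mathbb{E}_{z \sim p_z}\big[\log\big(1 - D(G(z))\big)\big]$$

In practice, as in the training script later in this notebook, $D$ is updated on real and generated batches separately, and $G$ is updated to maximize $\log D(G(z))$ rather than minimize $\log(1 - D(G(z)))$, which gives stronger gradients early in training.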
```
!aws s3 cp --recursive s3://sagemaker-sample-files/datasets/image/MNIST/pytorch/ ./data
```

### Data preparation

The PyTorch framework has a `torchvision.datasets` package, which provides access to a number of datasets. You may use the following commands to read the pre-downloaded MNIST dataset from local storage, for later use.

```
from torchvision import datasets

dataroot = './data'

trainset = datasets.MNIST(root=dataroot, train=True, download=False)
testset = datasets.MNIST(root=dataroot, train=False, download=False)

print(trainset)
print(testset)
```

The SageMaker SDK will create a default Amazon S3 bucket for you to access various files and data that you may need in the machine learning engineering lifecycle. We can get the name of this bucket through the `default_bucket` method of the `sagemaker.session.Session` class in the SageMaker SDK.

```
from sagemaker.session import Session

sess = Session()

# S3 bucket for saving code and model artifacts.
# Feel free to specify a different bucket here if you wish.
bucket = sess.default_bucket()

prefix = 'byos-pytorch-gan'

# Location to save your custom code in tar.gz format.
s3_custom_code_upload_location = f's3://{bucket}/{prefix}/customcode'

# Location where results of model training are saved.
s3_model_artifacts_location = f's3://{bucket}/{prefix}/artifacts/'
```

The SageMaker SDK provides tools for operating AWS services. For example, the `S3Downloader` class is used to download objects in S3, and `S3Uploader` is used to upload local files to S3. You will upload the dataset files to Amazon S3 for model training. During model training, we do not download data from the Internet, both to avoid the network latency caused by fetching data remotely and to avoid possible security risks due to direct Internet access.
``` import os from sagemaker.s3 import S3Uploader as s3up s3_data_location = s3up.upload(os.path.join(dataroot, "MNIST"), f"s3://{bucket}/{prefix}/data/mnist") ``` ### Training DCGAN (Deep Convolutional Generative Adversarial Networks) is a variant of the GAN families. This architecture essentially leverages Deep Convolutional Neural Networks to generate images belonging to a given distribution from noisy data using the Generator-Discriminator framework. ``` %%writefile src/train.py from __future__ import print_function import argparse import json import logging import os import sys import random import torch import torch.nn as nn import torch.nn.parallel import torch.nn.functional as F import torch.optim as optim import torch.backends.cudnn as cudnn import torch.utils.data import torchvision.datasets as dset import torchvision.transforms as transforms import torchvision.utils as vutils cudnn.benchmark = True logger = logging.getLogger(__name__) logger.setLevel(logging.DEBUG) logger.addHandler(logging.StreamHandler(sys.stdout)) class Generator(nn.Module): def __init__(self, *, nz, nc, ngf, ngpu=1): super(Generator, self).__init__() self.ngpu = ngpu self.main = nn.Sequential( # input is Z, going into a convolution nn.ConvTranspose2d( nz, ngf * 8, 4, 1, 0, bias=False), nn.BatchNorm2d(ngf * 8), nn.ReLU(True), # state size. (ngf*8) x 4 x 4 nn.ConvTranspose2d(ngf * 8, ngf * 4, 4, 2, 1, bias=False), nn.BatchNorm2d(ngf * 4), nn.ReLU(True), # state size. (ngf*4) x 8 x 8 nn.ConvTranspose2d(ngf * 4, ngf * 2, 4, 2, 1, bias=False), nn.BatchNorm2d(ngf * 2), nn.ReLU(True), # state size. (ngf*2) x 16 x 16 nn.ConvTranspose2d(ngf * 2, ngf, 4, 2, 1, bias=False), nn.BatchNorm2d(ngf), nn.ReLU(True), # state size. (ngf) x 32 x 32 nn.ConvTranspose2d( ngf, nc, 4, 2, 1, bias=False), nn.Tanh() # state size. 
(nc) x 64 x 64 ) def forward(self, input): if input.is_cuda and self.ngpu > 1: output = nn.parallel.data_parallel(self.main, input, range(self.ngpu)) else: output = self.main(input) return output def save(self, path, *, filename=None, device='cpu'): # recommended way from http://pytorch.org/docs/master/notes/serialization.html self.to(device) if not filename is None: path = os.path.join(path, filename) torch.save(self.state_dict(), path) def load(self, path, *, filename=None): if not filename is None: path = os.path.join(path, filename) with open(path, 'rb') as f: self.load_state_dict(torch.load(f)) class Discriminator(nn.Module): def __init__(self, *, nc, ndf, ngpu=1): super(Discriminator, self).__init__() self.ngpu = ngpu self.main = nn.Sequential( # input is (nc) x 64 x 64 nn.Conv2d(nc, ndf, 4, 2, 1, bias=False), nn.LeakyReLU(0.2, inplace=True), # state size. (ndf) x 32 x 32 nn.Conv2d(ndf, ndf * 2, 4, 2, 1, bias=False), nn.BatchNorm2d(ndf * 2), nn.LeakyReLU(0.2, inplace=True), # state size. (ndf*2) x 16 x 16 nn.Conv2d(ndf * 2, ndf * 4, 4, 2, 1, bias=False), nn.BatchNorm2d(ndf * 4), nn.LeakyReLU(0.2, inplace=True), # state size. (ndf*4) x 8 x 8 nn.Conv2d(ndf * 4, ndf * 8, 4, 2, 1, bias=False), nn.BatchNorm2d(ndf * 8), nn.LeakyReLU(0.2, inplace=True), # state size. 
(ndf*8) x 4 x 4 nn.Conv2d(ndf * 8, 1, 4, 1, 0, bias=False), nn.Sigmoid() ) def forward(self, input): if input.is_cuda and self.ngpu > 1: output = nn.parallel.data_parallel(self.main, input, range(self.ngpu)) else: output = self.main(input) return output.view(-1, 1).squeeze(1) def save(self, path, *, filename=None, device='cpu'): # recommended way from http://pytorch.org/docs/master/notes/serialization.html self.to(device) if not filename is None: path = os.path.join(path, filename) torch.save(self.state_dict(), path) def load(self, path, *, filename=None): if not filename is None: path = os.path.join(path, filename) with open(path, 'rb') as f: self.load_state_dict(torch.load(f)) class DCGAN(object): """ A wrapper class for Generator and Discriminator, 'train_step' method is for single batch training. """ fixed_noise = None criterion = None device = None netG = None netD = None optimizerG = None optimizerD = None nz = None nc = None ngf = None ndf = None real_cpu = None def __init__(self, *, batch_size, nz, nc, ngf, ndf, device, weights_init, learning_rate, betas, real_label, fake_label): super(DCGAN, self).__init__() import torch self.nz = nz self.nc = nc self.ngf = ngf self.ndf = ndf self.real_label = real_label self.fake_label = fake_label self.fixed_noise = torch.randn(batch_size, nz, 1, 1, device=device) self.criterion = nn.BCELoss() self.device = device self.netG = Generator(nz=nz, nc=nc, ngf=ngf).to(device) # print(netG) self.netD = Discriminator(nc=nc, ndf=ndf).to(device) # print(netD) self.netG.apply(weights_init) self.netD.apply(weights_init) # setup optimizer self.optimizerG = optim.Adam(self.netG.parameters(), lr=learning_rate, betas=betas) self.optimizerD = optim.Adam(self.netD.parameters(), lr=learning_rate, betas=betas) def train_step(self, data, *, epoch, epochs): import torch ############################ # (1) Update D network: maximize log(D(x)) + log(1 - D(G(z))) ########################### # train with real self.netD.zero_grad() self.real_cpu = 
data[0] real = data[0].to(self.device) batch_size = real.size(0) label = torch.full((batch_size,), self.real_label, device=self.device) output = self.netD(real).view(-1) errD_real = self.criterion(output, label) errD_real.backward() D_x = output.mean().item() # train with fake noise = torch.randn(batch_size, self.nz, 1, 1, device=self.device) fake = self.netG(noise) label.fill_(self.fake_label) output = self.netD(fake.detach()).view(-1) errD_fake = self.criterion(output, label) errD_fake.backward() D_G_z1 = output.mean().item() errD = errD_real + errD_fake self.optimizerD.step() ############################ # (2) Update G network: maximize log(D(G(z))) ########################### self.netG.zero_grad() label.fill_(self.real_label) # fake labels are real for generator cost output = self.netD(fake).view(-1) errG = self.criterion(output, label) errG.backward() D_G_z2 = output.mean().item() self.optimizerG.step() return errG.item(), errD.item(), D_x, D_G_z1, D_G_z2 # custom weights initialization called on netG and netD def weights_init(m): classname = m.__class__.__name__ if classname.find('Conv') != -1: torch.nn.init.normal_(m.weight, 0.0, 0.02) elif classname.find('BatchNorm') != -1: torch.nn.init.normal_(m.weight, 1.0, 0.02) torch.nn.init.zeros_(m.bias) def log_batch(epoch, epochs, batch, batches, errD, errG, D_x, D_G_z1, D_G_z2, *, log_interval=10, output_dir): if batch % log_interval == 0: logger.info(f"Epoch[{epoch}/{epochs}], Batch[{batch}/{batches}], " + f"Loss_D: {errD:.4}, Loss_G: {errG:.4}, D(x): {D_x:.4}, D(G(z)): {D_G_z1:.4}/{D_G_z2:.4}") def get_device(use_cuda): import torch device = "cpu" num_gpus = 0 if torch.cuda.is_available(): if use_cuda: device = "cuda" torch.cuda.set_device(0) num_gpus = torch.cuda.device_count() else: logger.debug("WARNING: You have a CUDA device, so you should probably run with --cuda 1") logger.debug(f"Number of gpus available: {num_gpus}") return device, num_gpus def train(dataloader, hps, test_batch_size, device, model_dir, 
output_dir, seed, log_interval): epochs = hps['epochs'] batch_size = hps['batch-size'] nz = hps['nz'] ngf = hps['ngf'] ndf = hps['ndf'] learning_rate = hps['learning-rate'] beta1 = hps['beta1'] dcgan = DCGAN(batch_size=batch_size, nz=nz, nc=1, ngf=ngf, ndf=ndf, device=device, weights_init=weights_init, learning_rate=learning_rate, betas=(beta1, 0.999), real_label=1, fake_label=0) for epoch in range(epochs): batches = len(dataloader) for batch, data in enumerate(dataloader, 0): errG, errD, D_x, D_G_z1, D_G_z2 = dcgan.train_step(data, epoch=epoch, epochs=epochs) log_batch(epoch, epochs, batch, batches, errD, errG, D_x, D_G_z1, D_G_z2, log_interval=log_interval, output_dir=output_dir) save_model(model_dir, dcgan.netG) return def save_model(model_dir, model): logger.info("Saving the model.") model.save(model_dir, filename="model.pth") def load_model(model_dir, device=None): logger.info("Loading the model.") if device is None: device = get_training_device_name(1) netG.load(model_dir, filename="model.pth", device=device) return netG def parse_args(): # Training settings parser = argparse.ArgumentParser(description='PyTorch Example') parser.add_argument('--batch-size', type=int, default=1000, metavar='N', help='input batch size (default: 1000)') parser.add_argument('--test-batch-size', type=int, default=1000, metavar='N', help='input batch size for testing (default: 1000)') parser.add_argument('--seed', type=int, default=None, metavar='S', help='random seed') parser.add_argument('--log-interval', type=int, default=10, metavar='N', help='how many batches to wait before logging training status') parser.add_argument('--save-model', action='store_true', default=False, help='For Saving the current Model') parser.add_argument('--model-dir', type=str, default=os.environ.get('SM_MODEL_DIR', None)) parser.add_argument('--cuda', type=int, default=1) parser.add_argument('--num-gpus', type=int, default=os.environ.get('SM_NUM_GPUS', None)) parser.add_argument('--pin-memory', 
type=bool, default=os.environ.get('SM_PIN_MEMORY', False)) parser.add_argument('--data-dir', required=False, default=None, help='path to data dir') parser.add_argument('--workers', type=int, help='number of data loading workers', default=2) parser.add_argument('--output-dir', default=os.environ.get('SM_OUTPUT_DATA_DIR', None), help='folder to output images and model checkpoints') parser.add_argument('--hps', default=os.environ.get('SM_HPS', None), help='Hyperparameters') return parser.parse_known_args() def get_datasets(*, dataroot='/opt/ml/input/data', classes=None): dataset = dset.MNIST(root=dataroot, transform=transforms.Compose([ transforms.Resize(64), transforms.ToTensor(), transforms.Normalize((0.5,), (0.5,)), ])) return dataset if __name__ == '__main__': args, unknown = parse_args() # get training options hps = json.loads(args.hps) try: os.makedirs(args.output_dir) except OSError: pass if args.seed is None: random_seed = random.randint(1, 10000) logger.debug(f"Generated Random Seed: {random_seed}") cudnn.benchmark = True else: logger.debug(f"Provided Random Seed: {args.seed}") random_seed = args.seed cudnn.deterministic = True cudnn.benchmark = False random.seed(random_seed) torch.manual_seed(random_seed) pin_memory=args.pin_memory num_workers = int(args.workers) device, num_gpus = get_device(args.cuda) if device == 'cuda': num_workers = 1 pin_memory = True if args.data_dir is None: input_dir = os.environ.get('SM_INPUT_DIR', None) if input_dir is None and str(args.dataset).lower() != 'fake': raise ValueError(f"`--data-dir` parameter is required for dataset \"{args.dataset}\"") dataroot = input_dir + "/data" else: dataroot = args.data_dir dataset = get_datasets(dataroot=dataroot) assert dataset dataloader = torch.utils.data.DataLoader(dataset, batch_size=args.batch_size, shuffle=True, num_workers=num_workers, pin_memory=pin_memory) train(dataloader, hps, args.test_batch_size, device, args.model_dir, args.output_dir, args.seed, args.log_interval) ``` Per 
the `sagemaker.get_execution_role()` method, the notebook can get the role pre-assigned to the notebook instance. This role will be used to obtain training resources, such as downloading training framework images, allocating Amazon EC2 instances, and so on.

```
from sagemaker import get_execution_role

# IAM execution role that gives SageMaker access to resources in your AWS account.
# We can use the SageMaker Python SDK to get the role from our notebook environment.
role = get_execution_role()
```

The hyperparameters used in the model training tasks can be defined in the notebook, so that they are separated from the algorithm and training code. The hyperparameters are passed in when the training task is created and dynamically combined with the training task.

```
import json

hps = {
    'seed': 0,
    'learning-rate': 0.0002,
    'epochs': 18,
    'pin-memory': 1,
    'beta1': 0.5,
    'nz': 100,
    'ngf': 28,
    'ndf': 28,
    'batch-size': 128,
    'log-interval': 20,
}

str_hps = json.dumps(hps, indent=4)
print(str_hps)
```

The ```PyTorch``` class from the `sagemaker.pytorch` package is an estimator for the PyTorch framework; it can be used to create and execute training tasks, as well as to deploy trained models. In the parameter list, ``instance_type`` is used to specify the instance type, such as CPU or GPU instances. The directory containing the training script and the model code is specified by ``source_dir``, and the training script file name must be clearly defined by ``entry_point``. These parameters will be passed to the training task along with other parameters, and they determine the environment settings of the training task.
```
from sagemaker.pytorch import PyTorch

estimator = PyTorch(role=role,
                    entry_point='train.py',
                    source_dir='./src',
                    output_path=s3_model_artifacts_location,
                    code_location=s3_custom_code_upload_location,
                    instance_count=1,
                    instance_type='ml.g4dn.2xlarge',
                    framework_version='1.5.0',
                    py_version='py3',
                    hyperparameters=hps,
                    )
```

Please pay special attention to the ``train_use_spot_instances`` parameter. A value of ``True`` means that you want to use Spot Instances first. Since machine learning training usually requires a large amount of computing resources running for a long time, leveraging Spot Instances can help you control your cost. Spot Instances may save up to 90% of the cost compared with On-Demand Instances; the actual saving depends on the instance type, region, and time.

You have created a PyTorch object, and you can use it to fit pre-uploaded data on Amazon S3. The following command will initiate the training task, and the training data will be imported into the training environment in the form of an input channel named **MNIST**. When the training task starts, the training data is first downloaded from S3 to the local file system of the training instance, and the training script ```train.py``` then loads the data from the local disk.

```
# Start training
estimator.fit({"MNIST": s3_data_location}, wait=False)
```

Depending on the training instance you choose, the training process may last from tens of minutes to several hours. It is recommended to set the ``wait`` parameter to ``False``; this option detaches the notebook from the training task. In scenarios with long training times and many training logs, it can prevent the notebook context from being lost due to network interruption or session timeout. After the notebook is detached from the training task, the output will be temporarily invisible. You can execute the following code, and the notebook will obtain and resume the previous training session.
```
%%time
from sagemaker.estimator import Estimator

# Attaching previous training session
training_job_name = estimator.latest_training_job.name
attached_estimator = Estimator.attach(training_job_name)
```

Since the model was designed to leverage GPU power to accelerate training, it will be much faster than training tasks on CPU instances. For example, a p3.2xlarge instance will take about 15 minutes, while a c5.xlarge instance may take more than 6 hours. The current model does not support distributed and parallel training, so multi-instance and multi-CPU/GPU setups will not bring extra benefits in training speed.

When the training completes, the trained model will be uploaded to S3. The upload location is specified by the `output_path` parameter provided when creating the `PyTorch` object.

### Model verification

You will download the trained model from Amazon S3 to the local file system of the instance where the notebook is located. The following code will load the model, generate a picture with random noise as input, and then display the picture.

```
from sagemaker.s3 import S3Downloader as s3down

!mkdir -p ./tmp
model_url = attached_estimator.model_data
s3down.download(model_url, './tmp')
!tar -zxf tmp/model.tar.gz -C ./tmp
```

Execute the following instructions to load the trained model and generate a set of "handwritten" digits.
``` def generate_fake_handwriting(model, *, num_images, nz, device=None): import torch import torchvision.utils as vutils from io import BytesIO from PIL import Image z = torch.randn(num_images, nz, 1, 1, device=device) fake = model(z) imgio = BytesIO() vutils.save_image(fake.detach(), imgio, normalize=True, format="PNG") img = Image.open(imgio) return img def load_model(path, *, model_cls=None, params=None, filename=None, device=None, strict=True): import os import torch model_pt_path = path if not filename is None: model_pt_path = os.path.join(path, filename) if device is None: device = 'cpu' if not model_cls is None: model = model_cls(**params) model.load_state_dict(torch.load(model_pt_path, map_location=torch.device(device)), strict=strict) else: model = torch.jit.load(model_pt_path, map_location=torch.device(device)) model.to(device) return model import matplotlib.pyplot as plt import numpy as np import torch from src.train import Generator device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu") params = {'nz': hps['nz'], 'nc': 1, 'ngf': hps['ngf']} model = load_model("./tmp/model.pth", model_cls=Generator, params=params, device=device, strict=False) img = generate_fake_handwriting(model, num_images=64, nz=hps['nz'], device=device) plt.imshow(np.asarray(img)) ``` ### Clean up Run the following commandline in a terminal, to remove files generated by this notebook from S3 and local storage ``` import os print(f"aws s3 rm --recursive s3://{bucket}/{prefix}") print(f"rm -rf {os.path.abspath(dataroot)}") ``` ### Conclusion The PyTorch framework, as one of the most popular deep learning framework, is being widely recognised and applied, has become one of the de facto mainstream frameworks. Amazon SageMaker is tightly integrated with a variety of AWS services, such as Amazon EC2 instances of various types and sizes, Amazon S3, Amazon ECR, etc., providing an end-to-end, consistent machine learning experience for all framework practitioners. 
Amazon SageMaker continues to support mainstream machine learning frameworks, including PyTorch. Machine learning algorithms and models developed with PyTorch can be easily ported to the Amazon SageMaker environment. By using Amazon SageMaker's fully managed Jupyter Notebooks, Spot training instances, Amazon Elastic Container Registry, the SageMaker SDK, and so on, the complexity of machine learning engineering and infrastructure operation is reduced, productivity and efficiency are improved, and operation and maintenance costs are lowered.

DCGAN is a landmark in the field of generative adversarial networks, and it is the cornerstone of many complex generative adversarial networks today. We will explore some of the most recent and interesting variants of GAN in later examples. I believe that the introduction and engineering practice in this example will help you understand the principles and engineering methods of GANs in general.
We attempt to build an encoder-decoder system for abstractive text summarization with a Bahdanau attention layer. 100-D GloVe Embeddings are used to initialize the encoder-decoder design, with LSTM and CNN architectures tested for the intermediary layers. The overarching idea of the design is: ![Image of encoder-decoder architecture ](https://cdn-images-1.medium.com/max/2560/1*nYptRUTtVd9xUjwL-cVL3Q.png) ``` from attention import AttentionLayer import pandas as pd import numpy as np import scipy as sp import re from bs4 import BeautifulSoup import tensorflow as tf from tensorflow.keras.layers import LSTM, TimeDistributed, Dense, Bidirectional, Input, Embedding, Dropout from tensorflow.keras.models import Model import os import collections import matplotlib.pyplot as plt import seaborn as sns import nltk.stem from nltk.corpus import stopwords lstmdim = 500 tf.keras.backend.clear_session() #Encoding Segment textinput = Input(shape=(MAX_TEXT_LEN,)) textembed = Embedding(len(texttoken.word_index)+1, lstmdim, trainable=True, mask_zero=True)(textinput) encout1, _, _, _, _ = Bidirectional(LSTM(lstmdim, return_sequences=True, return_state=True))(textembed) encout1 = Dropout(0.1)(encout1) _, enc_h, enc_c = LSTM(lstmdim, return_sequences=True, return_state=True)(encout1) #Decoding Segment summinput = Input(shape=(None,)) decembed_layer = Embedding(len(summtoken.word_index)+1, lstmdim, trainable=True) summembed = decembed_layer(summinput) declstm_layer = LSTM(lstmdim, return_sequences=True, return_state=True) decout, _, _ = declstm_layer(summembed, initial_state=[enc_h, enc_c]) decdense_layer = Dense(len(summtoken.word_index)+1, activation="softmax") preds = decdense_layer(decout) mdl = Model(inputs=[textinput, summinput], outputs=preds) mdl.compile(optimizer="adam", loss="sparse_categorical_crossentropy") mdl.summary() check = tf.keras.callbacks.ModelCheckpoint("newsbbcseq2seq.h5", save_best_only=True, monitor="val_loss", verbose=True) hist = mdl.fit([trainX, trainY[:,:-1]], 
trainY.reshape(trainY.shape[0], trainY.shape[1], 1)[:,1:], epochs=10, callbacks=[check], batch_size=16, verbose=True, validation_data=([testX, testY[:,:-1]], testY.reshape(testY.shape[0], testY.shape[1], 1)[:,1:])) #Run-Time Model Graphs encode_model = Model(inputs=textinput, outputs=[enc_h, enc_c]) dec_h = Input(shape=(lstmdim,)) dec_c = Input(shape=(lstmdim,)) decinput = Input(shape=(None,)) decembed = decembed_layer(decinput) output, new_h, new_c = declstm_layer(decembed, initial_state=[dec_h, dec_c]) output = decdense_layer(output) decode_model = Model(inputs=[decinput, dec_h, dec_c], outputs=[output, new_h, new_c]) ```
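The Bahdanau attention mentioned in the introduction is additive attention; stated here in its standard form (not extracted from the imported `AttentionLayer`'s code), it scores each encoder hidden state $h_i$ against the decoder state $s_t$:

\begin{align}
e_{t,i} &= v_a^\top \tanh(W_a s_t + U_a h_i) \\
\alpha_{t,i} &= \frac{\exp(e_{t,i})}{\sum_j \exp(e_{t,j})} \\
c_t &= \sum_i \alpha_{t,i} h_i
\end{align}

The context vector $c_t$ is then typically concatenated with the decoder LSTM output before the final softmax over the summary vocabulary.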
# Quantization of Signals *This jupyter notebook is part of a [collection of notebooks](../index.ipynb) on various topics of Digital Signal Processing. Please direct questions and suggestions to [Sascha.Spors@uni-rostock.de](mailto:Sascha.Spors@uni-rostock.de).* ## Characteristic of a Linear Uniform Quantizer The characteristics of a quantizer depend on the mapping functions $f(\cdot)$, $g(\cdot)$ and the rounding operation $\lfloor \cdot \rfloor$ introduced in the [previous section](introduction.ipynb). A linear quantizer bases on linear mapping functions $f(\cdot)$ and $g(\cdot)$. A uniform quantizer splits the mapped input signal into quantization steps of equal size. Quantizers can be described by their nonlinear in-/output characteristic $x_Q[k] = \mathcal{Q} \{ x[k] \}$, where $\mathcal{Q} \{ \cdot \}$ denotes the quantization process. For linear uniform quantization it is common to differentiate between two characteristic curves, the so called mid-tread and mid-rise. Both are introduced in the following. ### Mid-Tread Characteristic Curve The in-/output relation of the mid-tread quantizer is given as \begin{equation} x_Q[k] = Q \cdot \underbrace{\left\lfloor \frac{x[k]}{Q} + \frac{1}{2} \right\rfloor}_{index} \end{equation} where $Q$ denotes the constant quantization step size and $\lfloor \cdot \rfloor$ the [floor function](https://en.wikipedia.org/wiki/Floor_and_ceiling_functions) which maps a real number to the largest integer not greater than its argument. Without restricting $x[k]$ in amplitude, the resulting quantization indexes are [countable infinite](https://en.wikipedia.org/wiki/Countable_set). For a finite number of quantization indexes, the input signal has to be restricted to a minimal/maximal amplitude $x_\text{min} < x[k] < x_\text{max}$ before quantization. 
The resulting quantization characteristic of a linear uniform mid-tread quantizer is shown below ![Characteristic of a linear uniform mid-tread quantizer](mid_tread_characteristic.png) The term mid-tread is due to the fact that small values $|x[k]| < \frac{Q}{2}$ are mapped to zero. #### Example - Mid-tread quantization of a sine signal The quantization of one period of a sine signal $x[k] = A \cdot \sin[\Omega_0\,k]$ by a mid-tread quantizer is simulated. $A$ denotes the amplitude of the signal, $x_\text{min} = -1$ and $x_\text{max} = 1$ are the smallest and largest output values of the quantizer, respectively. ``` %matplotlib inline import numpy as np import matplotlib.pyplot as plt A = 1.2 # amplitude of signal Q = 1/10 # quantization stepsize N = 2000 # number of samples def uniform_midtread_quantizer(x, Q): # limiter x = np.copy(x) idx = np.where(np.abs(x) >= 1) x[idx] = np.sign(x[idx]) # linear uniform quantization xQ = Q * np.floor(x/Q + 1/2) return xQ def plot_signals(x, xQ): e = xQ - x plt.figure(figsize=(10,6)) plt.plot(x, label=r'signal $x[k]$') plt.plot(xQ, label=r'quantized signal $x_Q[k]$') plt.plot(e, label=r'quantization error $e[k]$') plt.xlabel(r'$k$') plt.axis([0, N, -1.1*A, 1.1*A]) plt.legend() plt.grid() # generate signal x = A * np.sin(2*np.pi/N * np.arange(N)) # quantize signal xQ = uniform_midtread_quantizer(x, Q) # plot signals plot_signals(x, xQ) ``` **Exercise** * Change the quantization stepsize `Q` and the amplitude `A` of the signal. Which effect does this have on the quantization error? Solution: The smaller the quantization step size, the smaller the quantization error is for $|x[k]| < 1$. Note, the quantization error is not bounded for $|x[k]| > 1$ due to the clipping of the signal $x[k]$. 
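The mid-tread rule can also be checked numerically on a few scalar values. This is a minimal sketch using `math.floor` in place of the vectorized NumPy version above, with the same step size $Q = 1/10$:

```python
import math

Q = 1 / 10  # quantization step size, as in the example above

def midtread(x, Q):
    # scalar version of the mid-tread rule x_Q = Q * floor(x/Q + 1/2)
    return Q * math.floor(x / Q + 1 / 2)

# all values inside the same quantization interval collapse to one output;
# small values |x| < Q/2 are mapped to zero, hence the "tread" at the origin
print(midtread(0.00, Q))  # 0.0
print(midtread(0.04, Q))  # 0.0
print(midtread(0.06, Q))  # 0.1 (up to floating-point rounding)
```

Comparing the printed values with the characteristic curve above shows the flat step around zero directly.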
### Mid-Rise Characteristic Curve

The in-/output relation of the mid-rise quantizer is given as

\begin{equation}
x_Q[k] = Q \cdot \Big( \underbrace{\left\lfloor\frac{ x[k] }{Q}\right\rfloor}_{index} + \frac{1}{2} \Big)
\end{equation}

where $\lfloor \cdot \rfloor$ denotes the floor function. The quantization characteristic of a linear uniform mid-rise quantizer is illustrated below

![Characteristic of a linear uniform mid-rise quantizer](mid_rise_characteristic.png)

The term mid-rise reflects the fact that $x[k] = 0$ is not mapped to zero. Small positive/negative values around zero are mapped to $\pm \frac{Q}{2}$.

#### Example - Mid-rise quantization of a sine signal

The previous example is now reevaluated using the mid-rise characteristic

```
A = 1.2  # amplitude of signal
Q = 1/10  # quantization stepsize
N = 2000  # number of samples

def uniform_midrise_quantizer(x, Q):
    # limiter
    x = np.copy(x)
    idx = np.where(np.abs(x) >= 1)
    x[idx] = np.sign(x[idx])
    # linear uniform quantization
    xQ = Q * (np.floor(x/Q) + .5)
    return xQ

# generate signal
x = A * np.sin(2*np.pi/N * np.arange(N))
# quantize signal
xQ = uniform_midrise_quantizer(x, Q)
# plot signals
plot_signals(x, xQ)
```

**Exercise**

* What are the differences between the mid-tread and the mid-rise characteristic curves for the given example?

Solution: The mid-tread and the mid-rise quantization of the sine signal differ for signal values smaller than half of the quantization interval. Mid-tread has an exact representation of $x[k] = 0$, while this is not the case for the mid-rise quantization.

**Copyright**

This notebook is provided as [Open Educational Resource](https://en.wikipedia.org/wiki/Open_educational_resources). Feel free to use the notebook for your own purposes. The text is licensed under [Creative Commons Attribution 4.0](https://creativecommons.org/licenses/by/4.0/), the code of the IPython examples under the [MIT license](https://opensource.org/licenses/MIT).
Please attribute the work as follows: *Sascha Spors, Digital Signal Processing - Lecture notes featuring computational examples, 2016-2018*.
```
import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data
from tensorflow.contrib.tensorboard.plugins import projector

# Load the dataset
mnist = input_data.read_data_sets(r"C:\Users\zdwxx\Downloads\Compressed\MNIST_data", one_hot=True)
# Number of training steps
max_steps = 550 * 21
# Number of images
image_num = 3000
# Define the session
sess = tf.Session()
# File path
DIR = "C:/Tensorflow/"

# Load the images
embedding = tf.Variable(tf.stack(mnist.test.images[:image_num]), trainable=False, name="embedding")

# Define a parameter summary
def varible_summaries(var):
    with tf.name_scope("summary"):
        mean = tf.reduce_mean(var)
        tf.summary.scalar("mean", mean)  # mean
        with tf.name_scope("stddev"):
            stddev = tf.sqrt(tf.reduce_mean(tf.square(var - mean)))
        tf.summary.scalar("stddev", stddev)  # standard deviation
        tf.summary.scalar("max", tf.reduce_max(var))  # maximum
        tf.summary.scalar("min", tf.reduce_min(var))  # minimum
        tf.summary.histogram("histogram", var)  # histogram

# Name scope
with tf.name_scope("input"):
    # Define two placeholders
    x = tf.placeholder(tf.float32, [None, 784], name="x-input")
    y = tf.placeholder(tf.float32, [None, 10], name="y-input")

# Display the images
with tf.name_scope("input_reshape"):
    image_shaped_input = tf.reshape(x, [-1, 28, 28, 1])
    tf.summary.image("input", image_shaped_input, 10)

with tf.name_scope("layer"):
    # Create a simple neural network
    with tf.name_scope("wights1"):
        W1 = tf.Variable(tf.truncated_normal([784, 500], stddev=0.1), name="W1")
        varible_summaries(W1)
    with tf.name_scope("biases1"):
        b1 = tf.Variable(tf.zeros([500]) + 0.1, name="b1")
        varible_summaries(b1)
    # with tf.name_scope("wx_plus_b1"):
    #     wx_plus_b1 = tf.matmul(x, W1) + b1
    with tf.name_scope("L1"):
        L1 = tf.nn.tanh(tf.matmul(x, W1) + b1)

    with tf.name_scope("wights2"):
        W2 = tf.Variable(tf.truncated_normal([500, 10], stddev=0.1), name="W2")
        varible_summaries(W2)
    with tf.name_scope("biases2"):
        b2 = tf.Variable(tf.zeros([10]) + 0.1, name="b2")
        varible_summaries(b2)
    with tf.name_scope("wx_plus_b2"):
        wx_plus_b2 = tf.matmul(L1, W2) + b2
    with tf.name_scope("softmax"):
        prediction = tf.nn.softmax(wx_plus_b2)  # predicted values

# Cross-entropy cost function
with tf.name_scope("loss"):
    loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(labels=y, logits=prediction))
    tf.summary.scalar("loss", loss)

# Gradient descent
with tf.name_scope("train"):
    train_step = tf.train.GradientDescentOptimizer(0.2).minimize(loss)

# Initialize the variables
init = tf.global_variables_initializer()
sess.run(init)

with tf.name_scope("accuracy"):
    # The results are stored in a list of booleans
    with tf.name_scope("correct_prediction"):
        # argmax returns the position of the largest value in a 1-D tensor
        correct_prediction = tf.equal(tf.argmax(y, 1), tf.argmax(prediction, 1))
    # Compute the accuracy
    with tf.name_scope("accuracy"):
        # cast converts the type: True -> 1.0, False -> 0.0
        accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
        tf.summary.scalar("accuracy", accuracy)

# Generate the metadata file
if tf.gfile.Exists(DIR + "projector/projector/metadata.tsv"):
    tf.gfile.DeleteRecursively(DIR + "projector/projector/metadata.tsv")
with open(DIR + "projector/projector/metadata.tsv", "w") as f:
    lables = sess.run(tf.argmax(mnist.test.labels[:], 1))
    for i in range(image_num):
        f.write(str(lables[i]) + "\n")

# Merge all the summaries
merged = tf.summary.merge_all()

projector_writer = tf.summary.FileWriter(DIR + "projector/projector", sess.graph)
saver = tf.train.Saver()
config = projector.ProjectorConfig()
embed = config.embeddings.add()
embed.tensor_name = embedding.name
embed.metadata_path = DIR + "projector/projector/metadata.tsv"
embed.sprite.image_path = DIR + "projector/data/mnist_10k_sprite.png"
embed.sprite.single_image_dim.extend([28, 28])
projector.visualize_embeddings(projector_writer, config)

for i in range(max_steps):
    # similar to a read; fetches 100 images at a time
    batch_xs, batch_ys = mnist.train.next_batch(100)
    run_options = tf.RunOptions(trace_level=tf.RunOptions.FULL_TRACE)
    run_metadata = tf.RunMetadata()
    summary = sess.run([train_step, merged], feed_dict={x: batch_xs, y: batch_ys},
                       options=run_options, run_metadata=run_metadata)[1]
    projector_writer.add_run_metadata(run_metadata, "step%03d" % i)
    projector_writer.add_summary(summary, i)

    if i % 550 == 0:
        acc
= sess.run(accuracy, feed_dict={x:mnist.test.images, y:mnist.test.labels}) print("第", i, "个周期", "准确率是", acc) saver.save(sess, DIR + "projector/projector/a_model.ckpt") projector_writer.close() sess.close() ```
# Software Design for Scientific Computing

----

## Unit 5: Integrating high-level languages with low-level languages.

## Unit 5 Agenda

- JIT (Numba)
- Cython.
- Integrating Python with FORTRAN.
- **Integrating Python with C.**

## Recap

- We wrote the Python code.
- We moved everything to NumPy.
- We profiled.
- We parallelized (joblib/dask).
- We profiled.
- We used Numba.
- We profiled.
- If we can choose the language: Cython.
- If we cannot choose the language and are going to do numerical computing: FORTRAN.
- If we cannot choose, we go with C/C++/Rust/whatever.

## Ctypes

- Lets you use existing libraries written in other languages by writing **simple** wrappers in Python.
- Comes with Python.
- Can be a bit **hard** to use.
- It is an ideal tool for breaking Python.

### A Ctypes example 1/2

The C code we will use in this tutorial is designed to be as simple as possible while demonstrating the concepts we are covering. It is more of a "toy example" and is not meant to be useful on its own. These are the functions we will use:

```c
int simple_function(void) {
    static int counter = 0;
    counter++;
    return counter;
}
```

- `simple_function` simply returns counting numbers.
- Each call increments `counter` and returns its value.

### A Ctypes example 2/2

```c
void add_one_to_string(char *input) {
    int ii = 0;
    for (; ii < strlen(input); ii++) {
        input[ii]++;
    }
}
```

- Adds one to each character in the character array that is passed in.
- We will use this to talk about Python's immutable strings and how to work around them when necessary.

These examples are saved in `clib1.c`, and are compiled with:

```bash
gcc -c -Wall -Werror -fpic clib1.c  # create the object code
gcc -shared -o libclib1.so clib1.o  # create the .so
```

## Calling a simple function

```
import ctypes

# Load the shared library into ctypes.
libc = ctypes.CDLL("ctypes/libclib1.so")

counter = libc.simple_function()
counter
```

## Immutable Python strings with Ctypes

```
print("Calling C function which tries to modify Python string")
original_string = "starting string"
print("Before:", original_string)

# This call does not change value, even though it tries!
libc.add_one_to_string(original_string)
print("After: ", original_string)
```

- As you will notice, this **does not work**.
- `original_string` is not available inside the C function at all when called this way.
- The C function modified some other memory, not the string.
- Not only does the C function fail to do what you want, it also modifies memory that it should not, leading to potential memory-corruption problems.
- If we want the C function to have access to the string, we need to do a bit of marshalling work.

## Immutable Python strings with Ctypes

- We need to convert the original string to bytes using `str.encode`, and then pass the result to the constructor `ctypes.create_string_buffer`.
- String buffers are mutable, and they are passed to C as `char *`.

```
# The ctypes string buffer IS mutable, however.
print("Calling C function with mutable buffer this time")

# Need to encode the original to get bytes for string_buffer
mutable_string = ctypes.create_string_buffer(str.encode(original_string))

print("Before:", mutable_string.value)
libc.add_one_to_string(mutable_string)  # Works!
print("After: ", mutable_string.value)
```

## Specifying function signatures in ctypes

- As we saw earlier, we can specify the return type of a function if needed.
- We can make a similar specification for the function's parameters.
- Additionally, providing a function signature lets Python verify that you are passing the correct parameters when you call a C function; otherwise, **bad** things can happen.
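As a runnable illustration (separate from the course's `clib1.c` example), the same `restype`/`argtypes` machinery can be exercised against the platform's own C library — here `strlen`, assuming a Unix-like system where `ctypes.util.find_library("c")` can locate it:

```python
import ctypes
import ctypes.util

# Load the platform C library (assumes a Unix-like system).
libc_std = ctypes.CDLL(ctypes.util.find_library("c"))

# Declare the signature of: size_t strlen(const char *)
libc_std.strlen.argtypes = [ctypes.c_char_p]
libc_std.strlen.restype = ctypes.c_size_t

# With the signature declared, ctypes converts and checks arguments for us.
print(libc_std.strlen(b"hello"))  # → 5
```

Note the variable is named `libc_std` so it does not clobber the `libc` handle to `libclib1.so` used above.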
To specify the return type of a function, get the function object and set its `restype` property:

```python
libc.func.restype = ctypes.POINTER(ctypes.c_char)
```

and to specify the signature:

```python
libc.func.argtypes = [ctypes.POINTER(ctypes.c_char), ]
```

## Writing a Python interface in C

We are going to "wrap" the C library function `fputs()`:

```C
int fputs (const char *, FILE *)
```

- This function takes two arguments:
  1. `const char *` is a character array.
  2. `FILE *` is a pointer to a file stream.
- `fputs()` writes the character array to the specified file and returns a non-negative value; if the operation is successful, this value is the number of bytes written to the file.
- If there is an error, it returns `EOF`.

## Writing the C function for `fputs()`

This is a basic C program that uses `fputs()` to write a string to a file stream:

```C
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main() {
    FILE *fp = fopen("write.txt", "w");
    fputs("Real Python!", fp);
    fclose(fp);
    return 0;
}
```

## Wrapping `fputs()`

The following code block shows the final wrapped version of the C code:

```C
static PyObject *method_fputs(PyObject *self, PyObject *args) {
    char *str, *filename = NULL;
    int bytes_copied = -1;

    /* Parse arguments */
    if (!PyArg_ParseTuple(args, "ss", &str, &filename)) {
        return NULL;
    }

    FILE *fp = fopen(filename, "w");
    bytes_copied = fputs(str, fp);
    fclose(fp);

    return PyLong_FromLong(bytes_copied);
}
```

This code snippet references three object structures that are defined in `Python.h`: `PyObject`, `PyArg_ParseTuple()`, and `PyLong_FromLong()`.

## `PyObject`

- `PyObject` is an object structure used to define object types for Python.
- All other Python object types are extensions of this type.
- Setting the return type of the function above to `PyObject` defines the common fields that Python requires to recognize this as a valid type.

Take another look at the first lines of the C code:

```C
static PyObject *method_fputs(PyObject *self, PyObject *args) {
    char *str, *filename = NULL;
    int bytes_copied = -1;
    ...
```

On line 2, you declare the argument types you want to receive from your Python code:

- `char *str` is the string you want to write to the file stream.
- `char *filename` is the name of the file to write to.

## `PyArg_ParseTuple()`

`PyArg_ParseTuple()` transforms the arguments you receive from your Python program into local variables:

```C
static PyObject *method_fputs(PyObject *self, PyObject *args) {
    char *str, *filename = NULL;
    int bytes_copied = -1;

    if (!PyArg_ParseTuple(args, "ss", &str, &filename)) {
        return NULL;
    }
    ...
```

`PyArg_ParseTuple()` takes the following arguments:

- `args`, of type `PyObject`.
- `"ss"`, which specifies the data types of the arguments to parse.
- `&str` and `&filename`, pointers to the local variables to which the parsed values will be assigned.

`PyArg_ParseTuple()` returns false on error.

## `fputs()` and `PyLong_FromLong()`

```C
static PyObject *method_fputs(PyObject *self, PyObject *args) {
    char *str, *filename = NULL;
    int bytes_copied = -1;

    if (!PyArg_ParseTuple(args, "ss", &str, &filename)) {
        return NULL;
    }

    FILE *fp = fopen(filename, "w");
    bytes_copied = fputs(str, fp);
    fclose(fp);

    return PyLong_FromLong(bytes_copied);
}
```

- The calls to `fputs()` were explained earlier; the only difference is that the variables used here come from `*args` and are stored locally.
- Finally, `PyLong_FromLong()` returns a `PyLongObject`, which represents an integer object in Python.

## Extension module

You have now written the code that makes up the core functionality of your Python C extension module.
- However, it remains to write the definitions of your module and of the methods it contains, like so:

```C
static PyMethodDef FputsMethods[] = {
    {"fputs", method_fputs, METH_VARARGS, "Python interface for fputs C library function"},
    {NULL, NULL, 0, NULL}
};

static struct PyModuleDef fputsmodule = {
    PyModuleDef_HEAD_INIT,
    "fputs",
    "Python interface for the fputs C library function",
    -1,
    FputsMethods
};
```

## `PyMethodDef`

- `PyMethodDef` tells the Python interpreter about the methods defined in the module.
- Ideally there will be more than one method in the module, which is why you need to define an array of structures:

```C
static PyMethodDef FputsMethods[] = {
    {"fputs", method_fputs, METH_VARARGS, "Python interface for fputs C library function"},
    {NULL, NULL, 0, NULL}
};
```

Each individual member of the structure holds the following information:

- `fputs` is the name the user would type to invoke this particular function from Python.
- `method_fputs` is the name of the C function to invoke.
- `METH_VARARGS` indicates that the function will accept two arguments of type `PyObject *`:
  - `self` is the module object.
  - `args` is a tuple containing the function's arguments (unpackable with `PyArg_ParseTuple()`).
- The final string is the value that represents the method's docstring.

### `PyModuleDef`

Defines a Python module (a `.py` file) in C.

```C
static struct PyModuleDef fputsmodule = {
    PyModuleDef_HEAD_INIT,
    "fputs",
    "Interface for the fputs C function",
    -1,
    FputsMethods};
```

There are a total of 9 members in this structure, but the code block above initializes the following five:

- `PyModuleDef_HEAD_INIT` is the "base" class of the module (normally this is always the same).
- `"fputs"` is the name of the module.
- The string is the module's documentation.
- `-1` is the amount of memory needed to store the program's state.
This member is useful when your module is used in multiple sub-interpreters, and it can have the following values:
  - A negative value indicates that this module does not support sub-interpreters.
  - A non-negative value enables re-initialization of the module. It also specifies the memory requirement to be allocated in each sub-interpreter session.
- `FputsMethods` is the method table.

## Initializing the module

- Now that you have defined the Python C extension and method structures, it is time to put them to use.
- When a Python program imports your module for the first time, it will call `PyInit_fputs()`:

```C
PyMODINIT_FUNC PyInit_fputs(void) {
    return PyModule_Create(&fputsmodule);
}
```

`PyMODINIT_FUNC` implicitly does 3 things:

- It implicitly sets the return type of the function to `PyObject *`.
- It declares any special linkage.
- It declares the function as `extern "C"`. In case you are using C++, it tells the C++ compiler not to do name mangling on the symbols.

`PyModule_Create()` will return a new module object of type `PyObject *`.

## Putting it all together

- What happens when we import the module?

![image.png](attachment:image.png)

## Putting it all together

- What is returned when the module is imported?

![image.png](attachment:image.png)

## Putting it all together

- What happens when we call `fputs.fputs()`?

![image.png](attachment:image.png)

## Packaging with `distutils`

```python
from distutils.core import setup, Extension

def main():
    setup(name="fputs",
          ext_modules=[Extension("fputs", ["fputsmodule.c"])],
          ...)

if __name__ == "__main__":
    main()
```

To install:

```bash
$ python3 setup.py install
```

To build:

```bash
$ python setup.py build_ext --inplace
```

If you want to specify the compiler:

```bash
$ CC=gcc python3 setup.py install
```

## Using the extension

```
import sys; sys.path.insert(0, "./c_extensions")
import fputs

fputs?
fputs.fputs?
fputs.fputs("Hola mundo!", "salida.txt")

with open("salida.txt") as fp:
    print(fp.read())
```

## Raising Exceptions

- If you want to raise Python exceptions from C, you can use the Python API to do so.
- Some of the functions the Python API provides for raising exceptions are the following:
  - `PyErr_SetString(PyObject *type, const char *message)`
  - `PyErr_Format(PyObject *type, const char *format)`
  - `PyErr_SetObject(PyObject *type, PyObject *value)`

All of Python's built-in exceptions are defined in the API.

## Raising Exceptions

```C
static PyObject *method_fputs(PyObject *self, PyObject *args) {
    char *str, *filename = NULL;
    int bytes_copied = -1;

    /* Parse arguments */
    if (!PyArg_ParseTuple(args, "ss", &str, &filename)) {
        return NULL;
    }

    if (strlen(str) <= 0) {
        PyErr_SetString(PyExc_ValueError, "String length must be greater than 0");
        return NULL;
    }

    FILE *fp = fopen(filename, "w");
    bytes_copied = fputs(str, fp);
    fclose(fp);

    return PyLong_FromLong(bytes_copied);
}
```

## Raising Custom Exceptions

To create and use a custom exception, you have to add it to the module instance:

```C
static PyObject *StringTooShortError = NULL;

PyMODINIT_FUNC PyInit_fputs(void) {
    /* Assign module value */
    PyObject *module = PyModule_Create(&fputsmodule);

    /* Initialize new exception object */
    StringTooShortError = PyErr_NewException("fputs.StringTooShortError", NULL, NULL);

    /* Add exception object to your module */
    PyModule_AddObject(module, "StringTooShortError", StringTooShortError);

    return module;
}

static PyObject *method_fputs(PyObject *self, PyObject *args) {
    ...
    if (strlen(str) < 10) {
        /* Raise custom exception */
        PyErr_SetString(StringTooShortError, "String length must be greater than 10");
        return NULL;
    }
    ...
}
```

## References

- https://docs.python.org/3.8/library/ctypes.html
- https://dbader.org/blog/python-ctypes-tutorial
- https://realpython.com/build-python-c-extension-module/
# An Introduction to Python using Jupyter Notebooks

<a id='toc'></a>
## Table of Contents:

### Introduction
* [Python programs are plain text files](#python-programs)
* [Use the Jupyter Notebook for editing and running Python](#jn-editing-python)
* [How are Jupyter Notebooks stored](#how-its-stored)
* [What you need to know](#need-to-know)
* [The Notebook has Control and Edit modes](#notebook-modes)
* [Use the keyboard and mouse to select and edit cells](#keyboard-mouse)
* [Practice: Run your first Jupyter Notebook cells](#prac-jupyter)

### Using Markdown
* [The Notebook will turn Markdown into pretty-printed documentation](#markdown)
* [How to use Markdown](#how-to-markdown)
* [Markdown Exercises](#md-exercises)
* [Markdown Exercise Solutions](#md-solutions)

### Introduction to Python 1: Data
* [Intro to Python 1: Prerequisites](#python-1)
* [Programming with Python](#python-introduction)
* [What is Python and why would I use it?](#python-introduction)
* [Special Characters](#python-sp-char)
* [Variables](#variables)
    * [Practice](#prac-variable)
    * [Variables can be used in calculations](#variable-calc)
* [Data Types](#data-types)
    * [Practice with Strings](#prac-strings)
    * [Practice with Numerics](#numbers)
    * [Practice with Booleans](#booleans)
* [Python "Type" function](#py-type)
* [Lists](#py-lists)
* [Tuples](#py-tuples)
* [Differences between lists and tuples](#lists-vs-tuples)
* [Sets](#py-sets)
* [Dictionaries](#py-dictionaries)
* [Python Statements](#py-statements)
* [Conditionals](#py-conditionals)
* [Loops](#py-loops)
    * [For Loops](#for-loops)
    * [While Loops](#while-loops)
* [Pandas: Working with Existing Data](#pandas)
* [Pandas: Importing Data](#read-data)
* [Pandas: Manipulating Data](#manipulate-data)
* [Pandas: Writing Data](#write-data)
* [Pandas: Working with more than one file](#all-countries)
* [Pandas: Slicing and selecting values](#slicing)
* Python I Exercises
    * [Problem 5: Assigning variables and printing values](#prob-variable)
    * [Problem 6: Print your first and last name](#py-concatenate)
    * [Problem 7: What variable type do I have?](#py-data-type)
    * [Problem 8: Creating and Working with Lists](#prob-lists)
    * [Problem 9: Creating and Accessing Dictionaries](#prob-dictionaries)
    * [Problem 10: Writing Conditional If/Else Statements](#prob-if-else)
    * [Problem 11: Reverse the string using a for loop](#prob-str-reverse-loop)
    * [Problem 12: Looping through Dictionaries](#prob-dict-loop)
    * [Problem 13: Checking assumptions about your data](#prob-unique)
    * [Problem 14: Slice and save summary statistics](#summary-stats)
* [Python I Exercise Solutions](#py1-solutions)

### Introduction to Python 2: A Tool for Programming
* [Intro to Python 2: Prerequisites](#python-2)
* [Setup if you are joining in for Python II](#python-2-setup)
* [Functions:](#functions)
    * [Why Use Functions?](#why-functions)
    * [Let's revisit the reverse string and turn it into a function](#str-reverse-func)
    * [Let's look at a real world example of where constants could be used in functions](#temp-func)
* [Scripting](#scripting)
* Python II Exercises
* [Python II Exercise Solutions](#py2-solutions)

### Common Errors
* [Common Errors](#errors)

<a id='python-programs'></a>
### Python programs are plain text files
[Table of Contents](#toc)

* They have the `.py` extension to let everyone (including the operating system) know it is a Python program.
    * This is convention, not a requirement.
* It's common to write them using a text editor, but we are going to use a [Jupyter Notebook](http://jupyter.org/).
* There is a bit of extra setup, but it is well worth it because Jupyter Notebooks provide code completion and other helpful features such as Markdown integration. This means you can take notes in this notebook while we are working throughout the session.
* There are some pitfalls that can also cause confusion if we are unaware of them. While code generally runs from top to bottom, a Jupyter Notebook allows you to run cells out of sequence.
The order in which cells were run appears as a number to the left of each code cell.
* Notebook files have the extension `.ipynb` to distinguish them from plain-text Python programs.

<a id='jn-editing-python'></a>
### Use the Jupyter Notebook for editing and running Python
[Table of Contents](#toc)

* The [Anaconda package manager](http://www.anaconda.com) is an automated way to install the Jupyter Notebook.
    * See [the setup instructions]({{ site.github.url }}/setup/) for Anaconda installation instructions.
    * It also installs all the extra libraries it needs to run.
* Once you have installed Python and the Jupyter Notebook requirements, open a shell and type:

> `jupyter notebook`

* This will start a Jupyter Notebook server and open your default web browser.
    * The server runs locally on your machine only and does not use an internet connection.
    * The server sends messages to your browser.
    * The server does the work and the web browser renders the notebook.
    * You can type code into the browser and see the result when the web page talks to the server.
* This has several advantages:
    - You can easily type, edit, and copy and paste blocks of code.
    - Tab completion allows you to easily access the names of things you are using and learn more about them.
    - It allows you to annotate your code with links, different sized text, bullets, etc. to make it more accessible to you and your collaborators.
    - It allows you to display figures next to the code that produces them to tell a complete story of the analysis.
    - **Note: This will modify and delete files on your local machine.**
* The notebook is stored as JSON but can be saved as a `.py` file if you would like to run it from the bash shell or a Python interpreter.
* Just like a webpage, the saved notebook looks different to what you see when it gets rendered by your browser.

<a id='how-its-stored'></a>
### How are Jupyter Notebooks Stored
[Table of Contents](#toc)

* The notebook file is stored in a format called JSON.
* Just like a webpage, what's saved looks different from what you see in your browser.
* But this format allows Jupyter to mix software (in several languages) with documentation and graphics, all in one file.

<a id='need-to-know'></a>
### What you need to know for today's lesson
[Table of Contents](#toc)

**Jupyter Notebook options when running locally:**

![jn_options.png](jn_options.png)

**Jupyter Notebook options when running in Binder:**

![jn_binder_options.png](jn_binder_options.png)

* Commands are only run when you tell them to run. Some lessons require you to run their code in order.
* The File menu has an option called "Revert to Checkpoint". Use that to reset your file in case you delete something by accident.
* The Kernel menu has options to restart the interpreter and clear the output.
* The Run button will send the code in the selected cell to the interpreter.
* The command palette will show you, and let you set, hotkeys.
* Saving to browser storage is the button with a cloud and downward-facing arrow. Click on this button frequently to save progress as we go.
* Restoring from browser storage is the button with a cloud and upward-facing arrow. Click on this button if you are disconnected or Binder quits working after you have refreshed the page. This will load your previously saved work.

<a id='notebook-modes'></a>
### The Notebook has Control and Edit modes.
[Table of Contents](#toc)

* Open a new notebook from the dropdown menu in the top right corner of the file browser page.
* Each notebook contains one or more cells of various types.

> ## Code vs. Markdown
>
> We often use the term "code" to mean "the source code of software written in a language such as Python".
> A "code cell" in a Jupyter Notebook contains software code, or that which is for the computer to read.
> A "markdown cell" is one that contains ordinary prose written for human beings to read.
* If you press the `esc` and `return` keys alternately, the outer border of your code cell will change from blue to green.
* The difference in color can be subtle, but the colors indicate the different modes of your notebook.
* <span style='color:blue'>Blue</span> is the command mode while <span style='color:green'>Green</span> is the edit mode.
* If you use the `esc` key to make the surrounding box blue (enter into command mode) and then press the `H` key, a list of all the shortcut keys will appear.
* When in command mode (esc/blue):
    * The `B` key will make a new cell below the currently selected cell.
    * The `A` key will make one above.
    * The `X` key will delete the current cell.
* There are lots of shortcuts you can try out, and most actions can be done with the menus at the top of the page if you forget the shortcuts.
* If you remember the `esc` and `H` shortcuts, you will be able to find all the tools you need to work in a notebook.

<a id='keyboard-mouse'></a>
### Use the keyboard and mouse to select and edit cells.
[Table of Contents](#toc)

* Pressing the `return` key turns the surrounding box green to signal edit mode and allows you to type in the cell.
* Because we want to be able to write many lines of code in a single cell, pressing the `return` key when the border is green moves the cursor to the next line in the cell, just like in a text editor.
* We need some other way to tell the Notebook we want to run what's in the cell.
* Pressing the `shift` and `return` keys together will execute the contents of the cell.
* Notice that the `return` and `shift` keys on the right of the keyboard are right next to each other.

<a id='prac-jupyter'></a>
### Practice: Running Jupyter Notebook Cells
[Table of Contents](#toc)

```
# Find the shortcut in the command palette and run this cell.
message = "run me first"
```

If you ran the above cell correctly, there should be a number **1** inside the square brackets to the left of the cell.
**Note:** the number will increase every time you run the cell.

```
# Run this cell and see what the output is.
print(message)
```

**If the output beneath the cell looks like this:**

```python
run me first
```

Then you have run the cells in the correct order and received the expected output. Why did we get this output?

**If the output beneath the cell looks like this:**

```python
---------------------------------------------------------------------------
NameError                                 Traceback (most recent call last)
<ipython-input-1-a4525a899574> in <module>
      1 # Run this cell and see what the output is.
----> 2 print(message)

NameError: name 'message' is not defined
```

Then you have received an error. Read the error message to see what went wrong. Here we have a `NameError` because the computer does not know what the variable `message` is. We need to go back to the first code cell and run it correctly to define the variable `message`. Then we should be able to run the second code cell and receive the expected output (which prints the string we assigned to the variable `message`).

**Getting Error Messages**: Error messages are commonplace for anyone writing code. You should expect to get them frequently and learn how to interpret them as best as possible. Some languages give more descriptive error messages than others, but in either case you are likely to find the answer with a quick Google search.

## Using Markdown

<a id='markdown'></a>
### The Notebook will turn Markdown into pretty-printed documentation.
[Table of Contents](#toc)

* Notebooks can also render [Markdown][markdown].
    * A simple plain-text format for writing lists, links, and other things that might go into a web page.
    * Equivalently, a subset of HTML that looks like what you would send in an old-fashioned email.
* Turn the current cell into a Markdown cell by entering command mode (esc/blue) and pressing the `M` key.
* `In [ ]:` will disappear to show it is no longer a code cell and you will be able to write in Markdown.
* Turn the current cell into a Code cell by entering command mode (esc/blue) and pressing the `Y` key.

<a id='how-to-markdown'></a>
### How to use Markdown
[Table of Contents](#toc)

<div class="row"> <div class="col-md-6" markdown="1">

**The asterisk is a special character in Markdown. It will create a bulleted list.**

```markdown
Markdown syntax to produce the output below.

* Use asterisks
* to create
* bullet lists.
```

* Use asterisks
* to create
* bullet lists.

**But what happens when I want to use an asterisk in my text? We can use another special character, the backslash `\`, also known as an escape character. Place the backslash before any Markdown special character, without a space, to use the special character in your text.**

```markdown
Markdown syntax to produce the output below.

\* Use asterisks
\* to create
\* bullet lists.
```

\* Use asterisks
\* to create
\* bullet lists.

Note: Escape characters can change depending on the language you are writing in.

**You can use numbers to create a numbered list:**

```markdown
Markdown syntax to produce numbered lists.

1. Use numbers
1. to create
1. numbered lists.
```

1. Use numbers
1. to create
1. numbered lists.

Note that we did not have to type the numbers in order, but Markdown still converted the list correctly in the output. This is nice because it saves us time when we modify or edit lists later: we do not have to renumber the entire list.

**Using different headings to keep consistency throughout the document:**

```markdown
Markdown syntax to produce headings.

# A Level-1 Heading
## A Level-2 Heading
### A Level-3 Heading
```

Printed version of the three lines of Markdown code from above.

# A Level-1 Heading
## A Level-2 Heading
### A Level-3 Heading

**Line breaks don't matter. But blank lines create new paragraphs.**

```markdown
**Markdown syntax:**

Line breaks do not matter. _(accomplished by pressing the return key once)_

Sometimes though we want to include a line break without starting a new paragraph.
We can accomplish this by including two spaces at the end of the line.

Here is the first line.  
The second line is on the second line but in the same paragraph (no blank line).
```

**Printed version of the Markdown code from above:**

Line breaks don't matter. _(accomplished by pressing the return key once)_

Sometimes though we want to include a line break without starting a new paragraph. We can accomplish this by including two spaces at the end of the line.

Here is the first line.  
The second line is on the second line but in the same paragraph (no blank line).

**Creating links in Markdown:**

The information inside the `[...]` is what the user will see, and the information inside the `(...)` is the pointer or URL that the link will take the user to.

```markdown
**Markdown Syntax:**

[Create links](http://software-carpentry.org) with the following syntax `[...](...)`.

Or use [named links][data_carpentry].

_Notice the line below only defines the link and is not in the printed output. Double click on the cell below this one if you don't believe me._

[data_carpentry]: http://datacarpentry.org
```

**Output of the Markdown syntax:**

[Create links](http://software-carpentry.org) with `[...](...)`.

Or use [named links][data_carpentry].

[data_carpentry]: http://datacarpentry.org

<a id='md-exercises'></a>
## Markdown Exercises
[Table of Contents](#toc)

### Creating Lists in Markdown

<a id='md-exercises-p01'></a>
**Problem 1: Creating Lists**

Create a nested list in a Markdown cell in a notebook that looks like this:

1. Get funding.
1. Do work.
    * Design experiment.
    * Collect data.
    * Analyze.
1. Write up.
1. Publish.

**Hint:** _Double click this cell to see the answer._

[Solution](#md-solutions-p01)

<a id='md-exercises-p02'></a>
### Math anyone?

**Problem 2: Math in Python**

What is displayed when a Python cell in a notebook that contains several calculations is executed? For example, what happens when this cell is executed?
```
7 * 3
```

What is displayed when a Python cell in a notebook that contains several calculations is executed? For example, what happens when this cell is executed?

```
7 * 3
2 + 1
6 * 7 + 12
```

[Solution](#md-solutions-p02)

<a id='md-exercises-p03'></a>
**Problem 3: Math in Markdown**

Change an existing cell from Code to Markdown. What happens if you write some Python in a code cell and then switch it to a Markdown cell? For example, put the following in a code cell.

1. Run the cell below with `shift + return` to be sure that it works as a code cell. _Hint: it should give you the same result as **Problem 2**._
1. Select the cell below and use `escape + M` to switch the cell to Markdown, and run it again with `shift + return`. What happened, and how might this be useful?

```
7 * 3
2 + 1
x = 6 * 7 + 12
print(x)
```

Print statements can help us find errors or unexpected results from our code. They allow us to check our assumptions. Does the computer have stored what we think it does? This could also be useful if you wanted to show what the code generating your document looks like. Think code reviews, colleagues, advisors, etc.

[Solution](#md-solutions-p03)

<a id='md-exercises-p04'></a>
**Problem 4: Equations**

Standard Markdown (such as we're using for these notes) won't render equations, but the Notebook will.

`$\Sigma_{i=1}^{N} 2^{-i} \approx 1$`

Think about the following questions:

1. What will it display?
1. What do you think the underscore `_` does?
1. What do you think the circumflex `^` does?
1. What do you think the dollar sign `$` does?

Change the Code cell below containing the equation to a Markdown cell and run it.

```
$\Sigma_{i=1}^{N} 2^{-i} \approx 1$
```

**Note:** If you received a <span style='color:red'>SyntaxError</span>, then you need to change the cell to a Markdown cell and rerun.
[Solution](#md-solutions-p04) <a id='md-solutions'></a> ## Markdown Exercise Solutions [Table of Contents](#toc) <a id='md-solutions-p01'></a> ### Problem 1: Creating Lists This challenge integrates both the numbered list and bullet list. Note that the bullet list is tabbed over to create the nesting necessary for the list. ```markdown **Type the following in your Markdown cell:** 1. Get funding. 1. Do work. * Design experiment. * Collect data. * Analyze. 1. Write up. 1. Publish. ``` [Back to Problem](#md-exercises-p01) <a id='md-solutions-p02'></a> ### Problem 2: Math in python The output of running the code cell is 54 because 6 multiplied by 7 is 42 and 42 plus 12 equals 54. Although the cell executes every line, Jupyter only displays the value of the last expression, `6 * 7 + 12`. The other calculations, `7*3` and `2+1`, were still carried out, but their results were not displayed because we did not ask the computer to show them. [Back to Problem](#md-exercises-p02) <a id='md-solutions-p03'></a> ### Problem 3: Math in markdown In step 1, the output of running the code cell is 54 because 6 multiplied by 7 is 42 and 42 plus 12 equals 54. This result was stored in a variable called `x` and the last line executed was `print(x)`, which simply prints out the value of variable `x` at the current time. However, it still did all the other mathematical calculations `7*3` and `2+1`, but it did not print them out because we did not store the values and ask the computer to print them. The Python code gets treated like markdown text. The lines appear as if they are part of one contiguous paragraph. This could be useful to temporarily turn on and off cells in notebooks that get used for multiple purposes. It is also useful when you want to show the code you have written rather than the output of the code execution.
```markdown 7*3 2+1 x = 6 * 7 + 12 print(x) ``` [Back to Problem](#md-exercises-p03) <a id='md-solutions-p04'></a> ### Problem 4: Equations `$\Sigma_{i=1}^{N} 2^{-i} \approx 1$` $\Sigma_{i=1}^{N} 2^{-i} \approx 1$ The notebook shows the equation as it would be rendered from LaTeX equation syntax. The dollar sign, `$`, is used to tell markdown that the text in between is a LaTeX equation. If you are not familiar with LaTeX, the underscore, `_`, is used for subscripts and the circumflex, `^`, is used for superscripts. A pair of curly braces, `{` and `}`, is used to group text together so that the statement `i=1` becomes the subscript and `N` becomes the superscript. Similarly, `-i` is in curly braces to make the whole statement the superscript for `2`. `\Sigma` and `\approx` are LaTeX commands for the capital sigma (“sum over”) and “approximately equal” symbols. **A common error is to forget to run the cell as markdown.** The Python interpreter does not know what to do with the \$. Syntax errors generally mean that the user has entered something incorrectly (check for typos before assuming the line of code is wrong altogether). ```markdown File "<ipython-input-1-a80a20b3c603>", line 1 $\Sigma_{i=1}^{N} 2^{-i} \approx 1$ ^ SyntaxError: invalid syntax ``` [Back to Problem](#md-exercises-p04) <a id='python-1'></a> # Intro to Python I: Data [Table of Contents](#toc) **Prerequisites:** None This workshop will help researchers with no prior programming experience learn how to utilize Python to analyze research data. You will learn how to open data files in Python, complete basic data manipulation tasks and save your work without compromising original data. Oftentimes, researchers find themselves needing to do the same task with different data and you will gain basic experience on how Python can help you make more efficient use of your time. **Learning Objectives:** 1. Clean/manipulate data 1.
Automate repetitive tasks **Learning Outcomes:** you will be able to… 1. read data into Pandas dataframe 1. use Pandas to manipulate data 1. save work to a datafile useable in other programs needed by researcher 1. write if/else statements 1. build for and while loops <a id='python-introduction'></a> ## Programming with Python [Table of Contents](#toc) ### What is Python and why would I use it? A programming language is a way of writing commands so that an interpreter or compiler can turn them into machine instructions. Python is just one of many different programming languages. Even if you are not using Python in your work, you can use Python to learn the fundamentals of programming that will apply across languages. **We like using Python in workshops for lots of reasons:** * It is widely used in science * It's easy to read and write * There is a huge supporting community - lots of ways to learn and get help * This Jupyter Notebook. Not a lot of languages have this kind of thing (name comes from Julia, Python, and R). <a id='python-sp-char'></a> ### Special Characters [Table of Contents](#toc) We have already worked with special characters in markdown. Similarly, python uses certain special characters as part of its syntax. **Note:** special characters are not consistent across languages so make sure you familiarize yourself with the special characters in the languages in which you write code. **Python Special Characters:** * `[` : left `square bracket` * `]` : right `square bracket` * `(` : left `paren` (parentheses) * `)` : right `paren` (parentheses) * `{` : left `curly brace` * `}` : right `curly brace` * `<` : left `angle bracket` * `>` : right `angle bracket` * `-` `dash` (not hyphen. 
Minus only when used in an equation or formula) * `"` : `double quote` * `'` : `single quote` (apostrophe) <a id='variables'></a> ### Variables [Table of Contents](#toc) Variables are used to store information in the computer that can later be referenced, manipulated and/or used by our programs. Important things to remember about variables include: * We store values inside variables. * We can refer to variables in other parts of our programs. * In Python, the variable is created when a value is assigned to it. * Values are assigned to variable names using the equals sign `=`. * A variable can hold two types of things. * Basic data types. For descriptions and details [(See Data Types)](#data-types) * Objects - ways to structure data and code. In Python, all variables are objects. * Variable naming convention: * Cannot start with a digit * Cannot contain spaces, quotation marks, or other punctuation * Using a descriptive name can make the code easier to read **(You will thank yourself later)** <a id='prac-variable'></a> ### Practice [Table of Contents](#toc) ``` # What is happening in this code python cell age = 34 first_name = 'Drake' ``` In the cell above, Python assigns an age (in this example 34) to a variable `age` and a name (Drake) in quotation marks to a variable `first_name`. If you want to see the stored value of a variable in python, you can display the value by using the print command `print()` with the variable name placed inside the parenthesis. ``` # what is the current value stored in the variable `age` print(age) ``` **Write a print statement to show the value of variable `first_name` in the code cell below.** ``` # Print out the current value stored in the variable `first_name`` ``` <a id='prob-variable'></a> ### Problem 5: Assigning variables and printing values [Table of Contents](#toc) 1. Create two new variables called `age` and `first_name` with your own age and name 2. 
Print each variable out to display its value **Extra Credit:** Combine values in a single print command by separating them with commas ``` # Insert your variable values into the print statement below print(<insert variable here>, 'is', <insert variable here>, 'years old.') ``` The `print` command automatically puts a single space between items to separate them and wraps around to a new line at the end. [Solution](#prob-variable-sol) <a id='variable-calc'></a> ## Variables can be used in calculations. [Table of Contents](#toc) * We can use variables in calculations just as if they were values. * Remember, we assigned **our own age** to `age` a few lines ago. ``` age = age + 3 print('My age in three years:', age) ``` * This now sets our age value to **our current age + 3 years**. * We can also add strings together, but it works a bit differently. When you add strings together it is called **concatenating**. ``` name = "Sonoran" full_name = name + " Desert" print(full_name) ``` * Notice how I included a space in the quotes before "Desert". If we hadn't, we would have had "SonoranDesert" * Can we subtract, multiply, or divide strings? <a id='py-concatenate'></a> ## Problem 6: Printing your first and last name [Table of Contents](#toc) In the code cell below, create a new variable called last_name with your own last name. Create a second new variable called full_name that is a combination of your first and last name.
``` # Print full name ``` [Solution](#py-concatenate-sol) <a id='data-types'></a> ### Data Types [Table of Contents](#toc) **Some data types you will find in almost every language include:** | Data Type| Abbreviation | Type of Information | Examples | | :-| :-| :-| :-| | Strings | str | characters, words, sentences or paragraphs| 'a' 'b' 'c' 'abc' '0' '3' ';' '?'| | Integers | int | whole numbers | 1 2 3 100 10000 -100 | | Floating point or Float | float | decimals | 10.0 56.9 -3.765 | | Booleans | bool | logical test | True, False | <a id='strings'></a> ### Strings [Table of Contents](#toc) One or more characters strung together and enclosed in quotes (single or double): "Hello World!" ``` greeting = "Hello World!" print("The greeting is:", greeting) greeting = 'Hello World!' print('The greeting is:', greeting) ``` #### Need to use single quotes in your string? Use double quotes to make your string. ``` greeting = "Hello 'World'!" print("The greeting is:", greeting) ``` #### Need to use both? ``` greeting1 = "'Hello'" greeting2 = '"World"!' print("The greeting is:", greeting1, greeting2) ``` #### Concatenation ``` bear = "wild" down = "cats" print(bear + down) ``` Why aren't `greeting`, `greeting1`, `greeting2`, `bear`, or `down` enclosed in quotes in the statements above? <a id='prac-strings'></a> ### Practice: Strings [Table of Contents](#toc) #### Use an index to get a single character from a string. * The characters (individual letters, numbers, and so on) in a string are ordered. * For example, the string ‘AB’ is not the same as ‘BA’. Because of this ordering, we can treat the string as a list of characters. * Each position in the string (first, second, etc.) is given a number. This number is called an index or sometimes a subscript. * Indices are numbered from 0. * Use the position’s index in square brackets to get the character at that position. 
``` # String : H e l i u m # Index Location: 0 1 2 3 4 5 atom_name = 'helium' print(atom_name[0], atom_name[3]) ``` <a id='numbers'></a> ### Numbers [Table of Contents](#toc) * Numbers are stored as numbers (no quotes) and are either integers (whole) or real numbers (decimal). * In programming, numbers with decimal precision are called floating-point, or float. * Floats use more processing than integers so use them wisely! * Floats and integers come in various sizes but Python switches between them transparently. ``` my_integer = 10 my_float = 10.99998 my_value = my_integer print("My numeric value:", my_value) ``` <a id='py-type'></a> ### Using Python built-in type() function [Table of Contents](#toc) If you are not sure of what your variables' types are, you can call a python function called `type()` in the same manner as you used the `print()` function. Python is an object-oriented language, so any defined variable has a type. Default common types are **str, int, float, list and tuple.** We will cover [list](#py-lists) and [tuple](#py-tuples) later. ``` print("Type:", type(age)) print("Type:", type(first_name)) # Print out datatype of variables print("my_value Type:", type(my_value)) print("my_float Type:", type(my_float)) ``` <a id='booleans'></a> ### Boolean [Table of Contents](#toc) * Boolean values are binary, meaning they can only be either true or false. * In Python, True and False (no quotes) are boolean values ``` is_true = True is_false = False print("My true boolean variable:", is_true) print("Type:", type(is_false)) ``` <a id='py-data-type'></a> ### Problem 7: What variable type do I have? [Table of Contents](#toc) `size = '1024'` What data type is `size`? Use some of the python you have learned to provide proof of your answer. <ol style="list-style-type:lower-alpha"> <li>float</li> <li>string</li> <li>integer</li> <li>boolean</li> </ol> ``` # Write your explanation as a comment and write the python code that outputs support for your answer.
``` [Solution](#py-data-type) <a id='py-data-structures'></a> ## Data Structures [Table of Contents](#toc) Python has many objects that can be used to structure data including: | Object | Data Structure | Mutable | | :- | :- | :- | | List | collections of values held together in brackets | Mutable | | Tuple | collection of grouped values held together in parentheses | Immutable | | Set | collections of unique values held together in curly braces | Mutable | | Dictionary | collections of keys & values held together in curly braces | Mutable | <a id='py-lists'></a> ### Lists [Table of Contents](#toc) Lists are collections of values held together in brackets: ``` list_of_characters = ['a', 'b', 'c'] print(list_of_characters) ``` <a id='prob-lists'></a> ### Problem 8: Creating and Working with Lists [Table of Contents](#toc) 1. Create a new list called list_of_numbers with four numbers in it. ``` # Print out the list of numbers you created ``` * Just like strings, we can access any value in the list by it's position in the list. * **IMPORTANT:** Indexes start at 0 ~~~ list: ['a', 'b', 'c', 'd'] index location: 0 1 2 3 ~~~ ``` # Print out the second value in the list list_of_numbers ``` 2. Once you have created a list you can add more items to it with the append method ``` # Append a number to your list_of_numbers ``` [Solution](#prob-lists-sol) #### Aside: Sizes of data structures To determine how large (how many values/entries/elements/etc.) any Python data structure has, use the `len()` function ``` len(list_of_numbers) ``` Note that you cannot compute the length of a numeric variable: ``` len(age) ``` This will give an error: `TypeError: object of type 'int' has no len()` However, `len()` can compute the lengths of strings ``` # Get the length of the string print(len('this is a sentence')) # You can also get the lengths of strings in a list list_of_strings = ["Python is Awesome!", "Look! I'm programming.", "E = mc^2"] # This will get the length of "Look! 
I'm programming." print(len(list_of_strings[1])) ``` <a id='py-tuples'></a> ### Tuples [Table of Contents](#toc) Tuples are like a List, but **cannot be changed (immutable).** Tuples can be used to represent any collection of data. They work well for things like coordinates. Notice below that tuples are surrounded by parentheses `()` rather than square brackets `[]` that were used for lists. ``` tuple_of_x_y_coordinates = (3, 4) print (tuple_of_x_y_coordinates) ``` Tuples can have any number of values ``` coordinates = (1, 7, 38, 9, 0) print (coordinates) icecream_flavors = ("strawberry", "vanilla", "chocolate") print (icecream_flavors) ``` ... and any types of values. Once created, you `cannot add more items to a tuple` (but you can add items to a list). If we try to append, like we did with lists, we get an error ``` icecream_flavors.append('bubblegum') ``` <a id='lists-vs-tuples'></a> ### The Difference Between Lists and Tuples [Table of Contents](#toc) Lists are good for manipulating data sets. It's easy for the computer to add, remove and sort items. Sorted tuples are easier to search and index. This happens because tuples reserve entire blocks of memory to make finding specific locations easier while lists use addressing and force the computer to step through the whole list. ![array_vs_list.png](array_vs_list.png) Let's say you want to get to the last item. The tuple can calculate the location because: (address) = (size of data) × (index of the item) + (original address) This is how zero indexing works. The computer can do the calculation and jump directly to the address. The list would need to go through every item in the list to get there. Now let's say you wanted to remove the third item. Removing it from the tuple requires it to be resized and copied. Python would even make you do this manually. Removing the third item in the list is as simple as making the second item point to the fourth. Python makes this as easy as calling a method on the list object.
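You can verify the mutability difference yourself with a short sketch (the flavor values are just examples):

```python
# Lists are mutable: we can add and remove items in place
flavors_list = ["strawberry", "vanilla", "chocolate"]
flavors_list.append("bubblegum")   # works fine
flavors_list.remove("vanilla")     # also fine
print(flavors_list)

# Tuples are immutable: trying the same raises an AttributeError
flavors_tuple = ("strawberry", "vanilla", "chocolate")
try:
    flavors_tuple.append("bubblegum")
except AttributeError as error:
    print("Tuples cannot be changed:", error)
```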
<a id='py-sets'></a> ### Sets [Table of Contents](#toc) Sets are similar to lists and tuples, but can only contain unique values and are held inside curly braces. For example, a list can contain multiple identical values ``` # In the gapminder data that we will use, we will have data entries for the continents # of each country in the dataset my_list = ['Africa', 'Europe', 'North America', 'Africa', 'Europe', 'North America'] print("my_list is", my_list) # A set would only allow for unique values to be held my_set = {'Africa', 'Europe', 'North America', 'Africa', 'Europe', 'North America'} print("my_set is", my_set) ``` Just like lists, you can add to a set using the add() method. ``` my_set.add('Asia') # Now let's try to add one that is already in the set: my_set.add('Europe') ``` What will the print statements show now in the code cell below? ``` print("my_list is", my_list) print("my_set is", my_set) ``` <a id='py-dictionaries'></a> ### Dictionaries [Table of Contents](#toc) * Dictionaries are collections of things that you can look up, like in a real dictionary: * Dictionaries are organized into key and value pairs separated by commas (like lists) and surrounded by curly braces. * E.g. {key1: value1, key2: value2} * We call each association a "key-value pair". ``` dictionary_of_definitions = {"aardvark" : "The aardvark is a medium-sized, burrowing, nocturnal mammal native to " "Africa.", "boat" : "A boat is a thing that floats on water"} ``` We can find the definition of aardvark by giving the dictionary the "key" to the definition we want in brackets. In this case the key is the word we want to look up ``` print("The definition of aardvark is:", dictionary_of_definitions["aardvark"]) # Print out the definition of a boat ``` Just like lists and sets, you can add to dictionaries by doing the following: ``` dictionary_of_definitions['ocean'] = "An ocean is a very large expanse of sea, in particular each of the main areas into which the sea is divided geographically."
print(dictionary_of_definitions) ``` <a id='prob-dictionaries'></a> ### Problem 9: Creating and Accessing Dictionaries [Table of Contents](#toc) 1. Create a dictionary called `zoo` with at least three animal types with a different count for each animal (How many animals of that type are found at the zoo). 1. `print` out the count of the second animal in your dictionary ``` # Zoo Dictionary ``` [Solution](#prob-dictionaries-sol) <a id='py-statements'></a> ## Statements [Table of Contents](#toc) OK great. Now what can we do with all of this? We can plug everything together with a bit of logic and python language to make a program that can do things like: * process data (data wrangling or manipulation) * parse files * data analysis What kind of logic are we talking about? We are talking about something called a "logical structure", which starts at the top (first line) and reads down the page in order. In Python, a logical structure is often composed of statements. Statements are powerful operators that control the flow of your script. There are two main types of statements: * conditionals (if, while) * loops (for, while) <a id='py-conditionals'></a> ### Conditionals [Table of Contents](#toc) Conditionals are how we make a decision in the program. In python, conditional statements are called if/else statements. * If statements use boolean values to define flow. * If something is True, do this. Else, do this * While something is True, do some process. **Building if/else statements in Python:** 1. Start the first line with `if` 1. Then `some-condition` must be a logical test that can be evaluated as True or False 1. End the first line with `:` 1. Indent the next line(s) with `tab` or `4 spaces` (Jupyter does the indent automatically!) 1. `do-things`: give python commands to execute 1. End the statement with `else` and `:` (notice that if and else are in the same indent) 1. Indent the next line(s) with `tab` or `4 spaces` (Jupyter does the indent automatically!) 1.
`do-different-things`: give python commands to execute ### Comparison operators: `==` equality `!=` not equal `>` greater than `>=` greater than or equal to `<` less than `<=` less than or equal to ``` weight = 3.56 if weight >= 2: print(weight,'is greater than or equal to 2') else: print(weight,'is less than 2') ``` ### Membership operators: `in` check to see if data is **present** in some collection `not in` check to see if data is **absent** from some collection ``` groceries=['bread', 'tomato', 'hot sauce', 'cheese'] if 'basil' in groceries: print('Will buy basil') else: print("Don't need basil") # this is the variable that holds the current condition of it_is_daytime # which is True or False it_is_daytime = True # if/else statement that evaluates current value of it_is_daytime variable if it_is_daytime: print ("Have a nice day.") else: print ("Have a nice night.") # before running this cell # what will happen if we change it_is_daytime to True? # what will happen if we change it_is_daytime to False? ``` * Often if/else statement use a comparison between two values to determine True or False * These comparisons use "comparison operators" such as ==, >, and <. * \>= and <= can be used if you need the comparison to be inclusive. * **NOTE**: Two equal signs `==` is used to compare values, while one equals sign `=` is used to assign a value * E.g. 1 > 2 is False<br/> 2 > 2 is False<br/> 2 >= 2 is True<br/> 'abc' == 'abc' is True ``` user_name = "Ben" if user_name == "Marnee": print ("Marnee likes to program in Python.") else: print ("We do not know who you are.") ``` * What if a condition has more than two choices? Does it have to use a boolean? 
* Python if-statements will let you do that with elif * `elif` stands for "else if" ``` if user_name == "Marnee": print ("Marnee likes to program in Python.") elif user_name == "Ben": print ("Ben likes maps.") elif user_name == "Brian": print ("Brian likes plant genomes") else: print ("We do not know who you are") # for each possibility of user_name we have an if or else-if statement to check the # value of the name and print a message accordingly. ``` What does the following statement print? ``` my_num = 42 my_num = 8 + my_num new_num = my_num / 2 if new_num >= 30: print("Greater than thirty") elif my_num == 25: print("Equals 25") elif new_num <= 30: print("Less than thirty") else: print("Unknown") ``` <a id='prob-if-else'></a> ### Problem 10: Writing Conditional If/Else Statements [Table of Contents](#toc) Check to see if you have more than three entries in the `zoo` dictionary you created earlier. If you do, print "more than three animals". If you don't, print "three or less animals" ``` # write an if/else statement ``` Can you modify your code above to tell the user that they have exactly three animals in the dictionary? ``` # Modify conditional to include exactly three as potential output ``` [Solution](#prob-if-else-sol) <a id='py-loops'></a> ### Loops [Table of Contents](#toc) Loops tell a program to do the same thing over and over again until a certain condition is met. * In Python there are two main loop types: * For loops * While loops <a id='for-loops'></a> ### For Loops [Table of Contents](#toc) A for loop executes the same command for each value in a collection. Building blocks of a for loop: > `for` each-item `in` variable `:` >> `do-something` **Building for loops in python:** 1. Start the first line with `for` 1. `each-item` is an arbitrary name for each item in the variable/list. 1. Use `in` to indicate the variable that holds the collection of information 1. End the first line with `:` 1.
indent the following line(s) with `tab` or `4 spaces` (Jupyter does the indent automatically!) 1. `do-something`: give python commands to execute In the example below, `number` is our `each-item` and the `print()` command is our `do-something`. ``` # Running this cell will give a NameError because `number` has not been defined yet. print(number) # Run this cell and see if you can figure out what this for loop does for number in range(10): # does not include 10! print(number) ``` #### LOOPING a set number of times We can do this with the function `range()`. Range automatically creates a list of numbers in a specified range. In the example above, we have a list of 10 numbers, starting with 0 and increasing by one until we have 10 numbers. In the first example below, we get the same end result, although we have given two numbers to `range()`: the start and end points of the range. **Note: Do not forget about python's zero-indexing** ``` # What will be printed for number in range(0,10): print(number) # What will be printed for number in range(1,11): print(number) # What will be printed for number in range(10,0, -1): print(number) # Change the code from the cell above so that python prints 9 to 0 in descending order # This loop prints in each iteration of the loop, which shows us a value for each of the 10 runs. total = 0 # global variable for i in range(10): total=total+i print(total) # This loop prints only the final value after the 10 runs have occurred. total=0 for i in range(10): total=total+i print(total) ``` #### Saving Time Looping can save you lots of time. We will look at a simple example to see how it works with lists, but imagine if your list was 100 items long. You do not want to write 100 individual print commands, do you?
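As a quick sketch of the payoff (the sample names here are made up), one short loop does the work of 100 separate print commands:

```python
# Build a list of 100 made-up sample names (hypothetical data)
samples = []
for number in range(100):
    samples.append("sample_" + str(number))

# One loop replaces 100 individual print commands
for sample in samples:
    print(sample)
```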
``` # LOOPING over a collection # LIST # If I want to print a list of fruits, I could write out each print statement like this: print("apple") print("banana") print("mango") # or I could create a list of fruit # loop over the list # and print each item in the list list_of_fruit = ["apple", "banana", "mango"] # this is how we write the loop # "fruit" here is a variable that will hold each item in the list, the fruit, as we loop # over the items in the list print (">>looping>>") for fruit in list_of_fruit: print (fruit) ``` #### Creating New Data You can also use loops to create new datasets. In the cell below, we use a mathematical operator to create a new list `data_2` where each value is double that of the value in the original list `data`. ``` data = [35,45,60,1.5,40,50] data_2 = [] for i in data: data_2.append(i*2) print(data_2) ``` <a id='prob-str-reverse-loop'></a> ### Problem 11: Reverse the string using a for loop [Table of Contents](#toc) There are many ways to reverse a string. I want to challenge you to use a for loop. The goal is to practice how to build a for loop (use multiple print statements) to help you understand what is happening in each step. ``` string = "waterfall" reversed_string = "" # For loop reverses the string given as input # Print out both the original and reversed strings ``` **Extra Credit: Accomplish the same task (reverse a string) without using a for loop.** _Hint: the reversing range example above gives you a clue AND Google always has an answer!_ ``` # Reversing the string can be done by writing only one more line string = "waterfall" ``` We can loop over collections of things like lists or dictionaries or we can create a looping structure.
``` # LOOPING over a collection # DICTIONARY # We can do the same thing with a dictionary and each association in the dictionary fruit_price = {"apple" : 0.10, "banana" : 0.50, "mango" : 0.75} for key, value in fruit_price.items(): print ("%s price is %s" % (key, value)) ``` [Solution](#prob-str-reverse-loop-sol) <a id='prob-dict-loop'></a> ### Problem 12: Looping through Dictionaries [Table of Contents](#toc) 1. For each entry in your `zoo` dictionary, print that key ``` # print only dictionary keys using a for loop ``` 2. For each entry in your zoo dictionary, print that value ``` # print only dictionary values using a for loop ``` 3. Can you print both the key and its associated value using a for loop? ``` # print dictionary keys and values using a single for loop ``` [Solution](#prob-dict-loop-sol) <a id='while-loops'></a> ### While Loops [Table of Contents](#toc) Similar to if statements, while loops use a boolean test to either continue looping or break out of the loop. ``` # While Loops my_num = 10 while my_num > 0: print("My number", my_num) my_num = my_num - 1 print('My value is no longer greater than zero and I have exited the "while" loop as a result.') ``` NOTE: While loops can be dangerous, because if you forget to include an operation that modifies the variable being tested (above, we're subtracting 1 at the end of each loop), it will continue to run forever and your script will never finish. That is it. With just these data types, structures and logic, you can build a program. We will write a program with functions in [Python II: A tool for programming](#python-2) <a id='pandas'></a> ## Pandas: Working with Existing Data [Table of Contents](#toc) Thus far, we have been creating our own data as we go along and you are probably thinking "How in the world can this save me time?" This next section is going to help you learn how to import data that you already have.
[Pandas](https://pandas.pydata.org/docs/) is a python package that is great for doing data manipulation. <a id='read-data'></a> ### Pandas: Importing Data [Table of Contents](#toc) **Importing packages:** Pandas is a package that is written for python but is not part of the base python install. In order to use these add-on packages, we must first import them. This is conventionally the first thing you do in a script. When I build a script using Jupyter Notebooks, I generally do all the importing of packages I need for the entire notebook in the first code cell. ``` # Import packages import pandas ``` **Note:** pandas is a long name and you will generally find a shortened version of the name in online help resources. As such, we will use the same convention in this workshop. It only requires a small modification to the import statement. ``` # Import packages import pandas as pd ``` Now that we have pandas at our disposal, we are ready to import some data. We will be working with a freely available dataset called [gapminder](https://www.gapminder.org/). The first data set we are going to look at is called `Afghanistan_Raw`. ``` # import from excel spreadsheet afghanistan_xlsx = pd.read_excel('gapminder_data/Afghanistan_Raw.xlsx') # import from csv file afghanistan_csv = pd.read_csv('gapminder_data/Afghanistan_Raw.csv') ``` The cell above assigns a pandas dataframe to a `variable`. To create a pandas dataframe: 1. We use `pd` to tell python that we want to use the pandas package that we imported. 1. We use `.read_excel()` or `.read_csv()` to tell pandas what type of file format we are giving it. 1. We have given the `relative path` to the file in parentheses. **Relative paths** are your best friend when you want your code to be easily moved or shared with collaborators. They use your current position in the computer's file structure as the starting point.
* If you work on a script with relative paths on your work computer, email it to yourself, and continue working on your personal home computer, it should still work: the usernames may be different, but they are bypassed because the file structure is the same relative to the directory in which we are working. * The `current working directory` is where the Jupyter Notebook is stored unless you manually change it. #### Project Directory intro_python ├── array_vs_list.png ├── gapminder_data │   ├── Afghanistan_Raw.csv │   ├── Afghanistan_Raw.xlsx │   ├── Afghanistan_Fixed.csv │   └── gapminder_by_country ├── jn_binder_options.png ├── jn_options.png ├── Intro_Python_Resbaz_2021.ipynb └── scripting_practice.ipynb **Absolute paths** can be useful if the thing you are trying to access is never going to move. They start at the root of the computer's file structure and work out to the file's location. **Note: this includes the computer's username.** * If you work on a script with absolute paths on your work computer, email it to yourself and try to continue working on your personal home computer, it will fail because the usernames and computer's file structure are different. * My absolute path (work): /Users/**drakeasberry**/Desktop/2021_Resbaz_Python/intro_python * My absolute path (home): /Users/**drake**/Desktop/2021_Resbaz_Python/intro_python ``` print('This is the excel file:\n\n', afghanistan_xlsx) print('\nThis is the csv file:\n\n', afghanistan_csv) ``` This prints out each file separately and I have added a few line breaks `\n` just to make it a little easier to read when it is printed. However, these may still feel unfamiliar and hard to read for you or your colleagues. If we do not include the data variable inside the `print()`, then pandas will render a formatted table that is more visually pleasing. Let's look at the difference.
```
# Use print to label the output, but let pandas render the table
print('This is the excel file:')
afghanistan_xlsx

# Use print to label the output, but let pandas render the table
# (note: Jupyter only renders the last bare expression in a cell,
# so run these as two separate cells to see both tables)
print('This is the csv file:')
afghanistan_csv
```

<a id='manipulate-data'></a>
### Pandas: Manipulating Data
[Table of Contents](#toc)

As you can see above, both ways of importing data have produced the same results. The type of data file you use is a personal choice, but not one that should be made lightly. Microsoft Excel is a licensed product, and not everyone may have access to open `.xlsx` files, whereas a `.csv` file is a comma-separated values document that can be read by many free text editors. `.csv` files are also generally smaller than the same information stored in a `.xlsx` file. My preferred choice is using `.csv` files due to their smaller size and easier accessibility.

```
afghanistan_csv.country.unique()

# Drop all rows with no values
afghanistan_csv.dropna(how='all')

# What prints now and why?
afghanistan_csv

# If we want to save the operations we need to store it in a variable (we will overwrite the existing one here)
afghanistan_csv = afghanistan_csv.dropna(how='all')
afghanistan_csv

# we will store a new dataframe called df to save some typing
# we will subset the data to only rows that have a country name
df = afghanistan_csv.dropna(subset=['country'])
df

df = df.rename(columns={'pop':'population'})

# We are expecting Afghanistan to be the only country in this file
# Let's check our assumptions
df.country.unique()
```

<a id='prob-unique'></a>
### Problem 13: Checking assumptions about your data
[Table of Contents](#toc)

You can use `df.info()` to get a general idea about the data, and then you can investigate the remaining columns to see if the data is as you expect.
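One point worth emphasizing from the cells above: `dropna` and `rename` return new dataframes rather than modifying the original in place, which is why the result must be stored back into a variable. A small self-contained sketch (toy values, not the real Afghanistan data):

```python
import numpy as np
import pandas as pd

# Toy frame with one fully-empty row, mimicking the raw Afghanistan file
raw = pd.DataFrame({
    'country': ['Afghanistan', np.nan, 'Afghanistan'],
    'pop': [8425333, np.nan, 9240934],
})

# dropna returns a NEW dataframe; the original is unchanged
cleaned = raw.dropna(how='all')
assert len(raw) == 3        # raw still has the empty row
assert len(cleaned) == 2    # cleaned does not

# rename also returns a new dataframe unless you reassign it
cleaned = cleaned.rename(columns={'pop': 'population'})
print(list(cleaned.columns))  # ['country', 'population']
```

Forgetting the reassignment is one of the most common pandas mistakes; the operation runs without error, but the dataframe you keep using is the unmodified one.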
```
# this will give a quick overview of the data frame to give you an idea of where to start looking
# Hint: Check your assumptions about values in the dataframe
```

[Solution](#prob-unique-sol)

Our investigation has shown us that some of the data has errors, but it is probably still useful if we correct them.

* The year column is being read as a float instead of an object (we will not be doing mathematics on years)
* The year column still has a missing value
* The population column is being read as an object instead of an integer (we may want to do mathematics on population)
* The continent column has the typos `Asiaa` and `tbd`

Let's see if we can fix these issues together.

```
# Let's fix the typos in the continent column
df = df.replace(to_replace=["Asiaa", "tbd"], value="Asia")
df

# Let's take a closer look at the year column by sorting
df.sort_values(by='year')
```

By sorting the dataframe based on year, we can see that the years are incrementing by 5 years. We can also deduce that the year 1982 is missing. Depending on the data, you will have to make a decision as the researcher:

* Are you confident that you can say that you have replaced the value correctly and the rest of the data is good?
* Do you delete the data based on the fact that it had missing data?

In this case, we are going to replace the missing value with 1982 because we believe it is the right thing to do in this particular case.

**Note:** In general, you should be very selective about replacing missing values.

```
df['year'] = df['year'].fillna(1982)
df

# Finally, let's fix the datatypes of the columns
df = df.astype({"year": int, "population": int})
df

# Let's check to see if it is working the way we think it is
df.info()
```

<a id='write-data'></a>
### Pandas: Writing Data
[Table of Contents](#toc)

Now that we have made all the changes necessary, we should save our corrected dataframe as a new file.
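The three fixes above — `replace`, `fillna`, and `astype` — can be seen end-to-end on a toy frame with the same kinds of errors (the numbers below are made up, not the real Afghanistan values):

```python
import numpy as np
import pandas as pd

# Toy frame with a typo'd continent, a missing year, and a
# population column stored as strings
df = pd.DataFrame({
    'year': [1977.0, np.nan, 1987.0],
    'population': ['13079460', '12881816', '13867957'],
    'continent': ['Asia', 'Asiaa', 'tbd'],
})

# Fix the typos, fill the missing year, then fix the dtypes
df = df.replace(to_replace=['Asiaa', 'tbd'], value='Asia')
df['year'] = df['year'].fillna(1982)
df = df.astype({'year': int, 'population': int})

print(sorted(df['continent'].unique()))  # ['Asia']
print(df['year'].tolist())               # [1977, 1982, 1987]
```

Note the order matters: `astype(int)` would fail on the year column while it still contains a missing value, so `fillna` has to come first.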
```
# Save file with changes we made
df.to_csv('gapminder_data/Afghanistan_Fixed.csv')
```

<a id='all-countries'></a>
### Pandas: Working with more than one file
[Table of Contents](#toc)

```
# Import pandas library using an alias
import pandas as pd

# Import glob library which allows us to use regular expressions to select multiple files
import glob

# Let's see where we are within the computer's directory structure
# The exclamation point allows us to utilize a bash command in the notebook
!pwd

# Let's see what files and folders are in our current location
!ls

# Let's see what files and folders are in the gapminder_data directory
!ls gapminder_data/

# Let's see what files and folders are in the gapminder_data/gapminder_by_country directory
!ls gapminder_data/gapminder_by_country/
```

We worked with one file, `Afghanistan`, in the previous section; now we will combine everything we have seen to work with all the countries' data that we have.

1. Find files in `gapminder_data/gapminder_by_country/`
1. Get all filenames into a list
1. Remove `country.cc.txt`
1. Loop over the files, appending each one into a pandas dataframe
1.
Add column names from `country.cc.txt`

```
# glob.glob will match files in the current directory based on a pattern
countries = sorted(glob.glob('gapminder_data/gapminder_by_country/*.cc.txt'))
len(countries)

# Remove the header file from the list of files
# If you try to run this cell more than once, you will get an error
# because the item no longer exists once it has been removed by the first execution of this cell
countries.remove('gapminder_data/gapminder_by_country/country.cc.txt')

# Check the length of the list to ensure the item was correctly removed
len(countries)

# creating a dataframe from a for loop:
df = pd.DataFrame()

# Go through each of the 142 files and append until all countries are in one dataframe
# (note: DataFrame.append was removed in pandas 2.0; use pd.concat on newer versions)
for country in countries:
    c = pd.read_csv(country, sep='\t', header=None)
    df = df.append(c, ignore_index=True)

# Import header and store as list
header = pd.read_csv('gapminder_data/gapminder_by_country/country.cc.txt', sep='\t')
column_names = list(header.columns)

# Add header to dataframe created with the loop
df.columns = column_names

# Gives us number of rows and columns
df.shape

# Get summary statistics
df.describe()

# Do you remember how to change column types?

# Solution
# Do you remember how to change column types?
df = df.astype({"year": int, "pop": int})
df.describe()
```

Save the summary of the dataframe with `to_csv`. Create a NEW file name; otherwise you will overwrite the files we downloaded!

```
df.describe().to_csv('gapminder_summ_stats.csv')
ls
```

<a id='slicing'></a>
### Pandas: Slicing and selecting values
[Table of Contents](#toc)

<div class="alert alert-block alert-success">
<b>Pandas Dataframe:</b>

- 2-dimensional representation of a table
- Series is the data-structure Pandas use to represent a column.
</div>

Because it's 2-dimensional, we have to specify which rows and which columns we want to select.
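The loop above uses `DataFrame.append`, which was removed in pandas 2.0. A sketch of the modern equivalent with `pd.concat`, using in-memory buffers in place of the real per-country files (which may not be present here):

```python
import io
import pandas as pd

# Stand-ins for two of the per-country files: tab-separated, no header
files = [
    io.StringIO("Afghanistan\t1952\t8425333\n"),
    io.StringIO("Albania\t1952\t1282697\n"),
]

# Instead of appending inside the loop, collect the pieces in a list
# and concatenate once at the end -- this is also much faster
pieces = [pd.read_csv(f, sep='\t', header=None) for f in files]
df = pd.concat(pieces, ignore_index=True)

# Column names would normally come from country.cc.txt
df.columns = ['country', 'year', 'pop']
print(df.shape)  # (2, 3)
```

Concatenating once is preferred even on older pandas versions, because appending in a loop copies the whole dataframe on every iteration.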
```
# see the first 5 rows of the dataframe by default; you can use any number in the parentheses to see more or fewer
df.head()
```

**`.loc[]` to select values by name**

**`.loc[a:b,i:j]`**, where a and b are the rows, and i and j are the columns

Need to set the index first:

```
df = df.set_index('country')
df

# this returns all the rows and columns where the index is Brazil
df.loc['Brazil']

# this returns all the rows and columns where the index is Brazil through Ecuador (alphabetically)
df.loc['Brazil':'Ecuador']

# this returns all the rows where the index is Brazil through Ecuador (alphabetically), but only includes the columns
# between year and lifeExp (moving from left to right across the dataframe)
df.loc['Brazil':'Ecuador','year':'lifeExp']

# this returns all the rows where the index is Brazil or Ecuador, but only includes the columns
# between year and lifeExp (moving from left to right across the dataframe)
df.loc[['Brazil','Ecuador'],'year':'lifeExp']
```

**`.iloc[]` to select values by index**

**`.iloc[a:b,i:j]`**, where a and b are the indexes of rows, and i and j are the indexes of columns

```
# this returns rows 10 through 16 and all but the last column (gdpPercap)
df.iloc[9:16,:-1]
```

**Observation:** an `.iloc` slice like `-3:-1` omits the final index (the column gdpPercap) in the range provided, while a named `.loc` slice includes the final element.

```
# this returns rows 10 and 17 and only the continent and lifeExp columns
df.iloc[[9,16],-3:-1]

# this also returns rows 10 and 17 and only the continent and lifeExp columns
df.iloc[[9,16],2:4]
```

<a id='summary-stats'></a>
### Problem 14: Slice and save summary statistics
[Table of Contents](#toc)

Select two countries of your interest. Slice the `df` to select only these countries. Then obtain summary statistics by country and save them to a file.
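The key difference between the two accessors is worth pinning down with a tiny example: `.loc` label slices *include* the final label, while `.iloc` position slices *exclude* the final position (the country names and values below are just illustrative):

```python
import pandas as pd

# Tiny frame indexed by country to contrast .loc and .iloc slicing
df = pd.DataFrame(
    {'year': [1952, 1952, 1952], 'pop': [8425333, 1282697, 9279525]},
    index=['Afghanistan', 'Albania', 'Algeria'],
)

# .loc label slices INCLUDE the final label...
by_label = df.loc['Afghanistan':'Albania']
print(list(by_label.index))     # ['Afghanistan', 'Albania']

# ...while .iloc position slices EXCLUDE the final position
by_position = df.iloc[0:1]
print(list(by_position.index))  # ['Afghanistan']
```

So `df.loc['Brazil':'Ecuador']` keeps Ecuador, but the equivalent positional slice would need to run one position past it.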
```
# pick two countries to subset and save the file with a descriptive name
```

[Solution](#summary-stats-sol)

<a id='py1-solutions'></a>
## Python I: Problem Solutions
[Table of Contents](#toc)

<a id='prob-variable-sol'></a>
### Problem 5: Assigning variables and printing values

1. Create two new variables called `age` and `first_name` with your own age and name
2. Print each variable out to display its value

[Back to Problem](#prob-variable)

```
age = '<your age>'
first_name = '<your first name>'
print(age)
print(first_name)
```

**Extra Credit:** You can also combine values in a single print command by separating them with commas

```
# Insert your variable values into the print statement below
print(first_name, 'is', age, 'years old')
```

Correct Output: If you received this output, then you correctly assigned new variables and combined them correctly in the print statement. The information represented between `<>` should reflect your personal information at this point.

```markdown
<your age>
<your first name>
<your first name> is <your age> years old
```

If you received this output, then you forgot to assign new variables.

```markdown
34
Drake
Drake is 34 years old
```

If you received this output, then you correctly assigned new variables but mixed up the order in the combined print statement.

```markdown
<your age>
<your first name>
<your age> is <your first name> years old
```

<a id='py-concatenate-sol'></a>
### Problem 6: Printing your first and last name

In the code cell below, create a new variable called last_name with your own last name. Create a second new variable called full_name that is a combination of your first and last name.

```
# Print full name
first_name = 'Drake'
last_name = 'Asberry'
print(first_name, last_name)
```

[Back to Problem](#py-concatenate)

<a id='py-data-type-sol'></a>
### Problem 7: What variable type do I have?

size = '1024'

What data type is `size`? Use some of the python you have learned to provide proof of your answer.
<ol style="list-style-type:lower-alpha">
<li>float</li>
<li>string</li>
<li>integer</li>
<li>boolean</li>
</ol>

```
# Write your explanation as a comment and write the python code that outputs support for your answer.
size = '1024'
print(type(size), "is a string because when we stored the variable, we wrapped it in single quotes ''. Python "
      "understood this to be a string instead of an integer as a result.")
```

[Back to Problem](#py-data-type)

<a id='prob-lists-sol'></a>
### Problem 8: Creating and Working with Lists

1. Create a new list called list_of_numbers with four numbers in it.

```
# Print out the list of numbers you created
list_of_numbers = [0, 1, 2, 3]
print(list_of_numbers)

# Print out the second value in the list
print(list_of_numbers[1])
```

2. Once you have created a list you can add more items to it with the append method

```
# Append a number to your list
list_of_numbers.append(5)
print(list_of_numbers)
```

[Back to Problem](#prob-lists)

### Problem 9: Creating and Accessing Dictionaries

1. Create a dictionary called `zoo` with at least three animal types with a different count for each animal.
1. `print` out the count of the second animal in your dictionary

```
# Zoo Dictionary
zoo = {'bears':25, 'lions':19, 'monkeys':67}
print(zoo['lions'])
```

[Back to Problem](#prob-dictionaries)

<a id='prob-if-else-sol'></a>
### Problem 10: Writing Conditional If/Else Statements

Check to see if you have more than three entries in the `zoo` dictionary you created earlier. If you do, print "more than three animals". If you don't, print "three or less animals"

```
# write an if/else statement
if len(zoo) > 3:
    print("more than three animals")
else:
    print("three or less animals")
```

Can you modify your code above to tell the user that they have exactly three animals in the dictionary?
```
# Modify conditional to include exactly three as potential output
if len(zoo) > 3:
    print("more than three animals")
elif len(zoo) < 3:
    print("less than three animals")
else:
    print("exactly three animals")
```

[Back to Problem](#prob-if-else)

<a id='prob-str-reverse-loop-sol'></a>
### Problem 11: Reversing Strings

There are many ways to reverse a string. I want to challenge you to use a for loop. The goal is to practice how to build a for loop (use multiple print statements) to help you understand what is happening in each step.

```
string = "waterfall"
reversed_string = ""

for char in string:
    #print(reversed_string)
    reversed_string = char + reversed_string
    #print(char)
    #print(reversed_string)

print('The original string was:', string)
print('The reversed string is:', reversed_string)
```

**Extra Credit: Accomplish the same task (reverse a string) without using a for loop.**

_Hint: the reversing range example above gives you a clue AND Google always has an answer!_

```
string = "waterfall"
print(string[::-1])
```

[Back to Problem](#prob-str-reverse-loop)

<a id='prob-dict-loop'></a>
### Problem 12: Looping through Dictionaries
[Table of Contents](#toc)

1. For each entry in your `zoo` dictionary, print that key

```
# print only dictionary keys using a for loop
for key in zoo.keys():
    print(key)
```

2. For each entry in your zoo dictionary, print that value

```
# print only dictionary values using a for loop
for value in zoo.values():
    print(value)
```

3. Can you print both the key and its associated value using a for loop?

```
# print dictionary keys and values using a single for loop
for key, value in zoo.items():
    print(key, value)
```

[Back to Problem](#prob-dict-loop)

<a id='prob-unique-sol'></a>
### Problem 13: Checking assumptions about your data
[Table of Contents](#toc)

You can use `df.info()` to get a general idea about the data, and then you can investigate the remaining columns to see if the data is as you expect.
```
# this will give a quick overview of the data frame to give you an idea of where to start looking
print('total rows in dataframe:', len(df))
df.info()

# Hint: Check your assumptions about values in the dataframe
df.year.unique()

columns = list(df.columns)

for column in columns:
    unique_val = df[column].unique()
    print(column, ':\nunique values:\n', unique_val, '\n\n')
```

[Back to Problem](#prob-unique)

<a id='summary-stats-sol'></a>
### Problem 14: Slice and save summary statistics

Select two countries of your interest. Slice the `df` to select only these countries. Then obtain summary statistics by country and save them to a file.

```
# My Solution
my_countries = df.loc[['China','Germany'],'pop':]
my_countries.describe().to_csv('china_germany_summ_stats.csv')
my_countries
```

[Back to Problem](#summary-stats)

<a id='python-2'></a>
## Intro to Python II: A Tool for Programming
[Table of Contents](#toc)

**Prerequisites:** Intro to Python 1: Data OR knowledge of another programming language

This workshop will help attendees build on previous knowledge of Python or another programming language in order to harness the powers of Python to make your computer work for you. You will learn how to write your own Python functions, save your code as scripts that can be called from future projects, and build a workflow to chain multiple scripts together.

**Learning Objectives:**

1. Understand the syntax of python functions
1. Understand the basics of scripting in python
1. Understand data analysis cycles

**Learning Outcomes:** you will be able to…

1. Write your own functions
1. Save code as a script
1.
Build a workflow

<a id='python-2-setup'></a>
## Setup if you are joining in for Python II
[Table of Contents](#toc)

**Run the next three code cells to have the data you need to work with in this section.**

```
# import libraries
import pandas as pd

# Create a dictionary with rainfall, temperature and pressure
data = {'rainfall_inches':[1.34,1.56,4.33],
        'temperature_F':[75,80,96],
        'pressure_psi':[10,2,35]}
data

string = "waterfall"
print(string[::-1])
```

<a id='functions'></a>
## Functions:
[Table of Contents](#toc)

Create your own functions, especially if you need to perform the same operation many times. This will make your code cleaner.

* Functions are known by many names in other languages, most commonly methods and subroutines.
* A function has a contract that guarantees certain output based on certain input(s)
  * Variables get passed into the function
  * The function then performs actions based on the variables that are passed
  * A new value is returned from the function

In python we are able to define a function with `def`. First you define the function, and later you call the defined function. Here we define a function that we will call `add_two_numbers`:

```
# this defines our function
def add_two_numbers():
    answer = 50 + 15
    return answer

# this calls the function and stores the result in the variable `x`
x = add_two_numbers()
x
```

That function seems a little silly because we could just add 50 and 15 more easily than defining a function to do it for us. However, imagine 50 was some constant that we need to add observations to. Now we could rewrite the function to accept an observation to add to our constant of 50.
```
# this defines our function
# the "num1" inside the parentheses means it is expecting us to pass a value to the function when we call it
def add_to_constant(num1):
    answer = 50 + num1
    return answer

# this calls the function and stores the result in the variable `y`
# the value we want to pass goes inside the parentheses in the call
y = add_to_constant(10)
y
```

Change the value that you pass to the function to see how it works.

<a id='why-functions'></a>
### Why Use Functions?
[Table of Contents](#toc)

Functions let us break down our programs into smaller bits that can be reused and tested. Human beings can only keep a few items in working memory at a time. We can only understand larger/more complicated ideas by understanding smaller pieces and combining them. Functions serve the same purpose in programs. We encapsulate complexity so that we can treat it as a single "thing", and this enables reusability: write code one time, but use it many times in our program or programs.

1. Testability
   * Imagine a really big program with lots of lines of code. There is a problem somewhere in the code because you are not getting the results you expect.
   * How do you find the problem in your code?
   * If your program is composed of lots of small functions that only do one thing, then you can test each function individually.
2. Reusability
   * Imagine a really big program with lots of lines of code. There is a section of code you want to use in a different part of the program.
   * How do you reuse that part of the code?
   * If you just have one big program then you have to copy and paste that bit of code where you want it to go, but if that bit was a function, you could just use that function again.
3. Writing cleaner code
   * Always keep both of these concepts in mind when writing programs.
   * Write small functions that do one thing.
   * Never have one giant function that does a million things.
   * A well written script is composed of lots of functions that do one thing.
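To make the testability point concrete, here is a sketch of two small single-purpose functions checked individually with simple assertions, using temperature conversion as the example:

```python
# Two small functions that each do one thing, so each can be
# tested on its own with plain assert statements

def fahr_to_celsius(t):
    """Convert Fahrenheit to Celsius."""
    return (t - 32) * 5 / 9

def celsius_to_fahr(t):
    """Convert Celsius to Fahrenheit."""
    return 9 / 5 * t + 32

# Check each function individually against known values
assert fahr_to_celsius(32) == 0
assert celsius_to_fahr(100) == 212

# Converting there and back should return the starting value
assert fahr_to_celsius(celsius_to_fahr(25)) == 25
print('all checks passed')
```

If any assertion fails, Python tells you exactly which function misbehaved — which is much easier to debug than one big block of conversion code.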
<a id='str-reverse-func'></a>
### Let's revisit the reverse string and turn it into a function
[Table of Contents](#toc)

```
# Create the function
def reverse_text(string):
    """Function to reverse text in strings.
    """
    result = string[::-1]
    return result

# Call the function and pass a string as input
reverse_text("waterfall")

# you can also pass a variable to the function
original = 'pool'
reverse_text(original)
```

This may seem trivial, but we could use a function like this to ask a user for a word that they would like to see written in reverse. Each time the input is given, we would run the same code to return the reversed spelling of the word they gave us.

<a id='temp-func'></a>
### Let's look at a real world example of where constants could be used in functions
[Table of Contents](#toc)

```
# Create the function
def convert_temp(temperature, unit):
    """Function to convert temperature from F to C, and vice-versa.
    Need temperature (integer or float) and unit (string, uppercase F or C)
    """
    t = int(temperature)
    u = str(unit)
    if u == 'C':
        fahr = (9/5*t) + 32
        print('{}C is {}F'.format(t, int(fahr)))
    elif u == 'F':  # or else:
        celsius = (t-32) * 5/9
        print('{}F is {}C'.format(t, int(celsius)))

convert_temp(85, 'C')

# Using the question mark following the function name, we see information about the function and how we might use it
convert_temp?

# will demonstrate this depending on time
def convert_temp2():
    """Function to convert temperature from F to C, and vice-versa.
    User input.
    """
    t = int(input('Enter temperature:'))
    u = str(input('Enter unit (F or C):'))
    if u == 'C':
        fahr = 9/5*t + 32
        return '{}C is {}F'.format(t, int(fahr))
    elif u == 'F':
        celsius = (t-32) * 5/9
        return '{}F is {}C'.format(t, int(celsius))
    else:
        return "Don't know how to convert..."

convert_temp2()
convert_temp2()
convert_temp2()
```

<a id='scripting'></a>
## Scripting
[Table of Contents](#toc)

For this section we are going to open the other Jupyter Notebook found in our repository to ensure we are starting with a clean slate.

1.
Save your progress in the current notebook; you may also want to download a copy for your records, which can be done using the `File` menu.
1. Go to `File > Open > scripting_practice.ipynb` to open the notebook.

<a id='errors'></a>
## Common Errors
[Table of Contents](#toc)

### Help yourself

```
help(print)
help(len)
?len
?data
dir(data)
```

```
help(your_data_object)
dir(your_data_object)
```

### Variable errors

```
# need to create/define a variable before using it
chocolate_cake

# this also includes misspellings...
first_name='Nathalia'
firt_name
```

### Syntax errors

```
# Syntax errors: when you forget to close a )
## EOF - end of file
## means that the end of your source code was reached before all code blocks were completed
print(len(first_name)

print(len(first_name))

# Syntax errors: when you forgot a ,
tires=4
print('My car has'tires,' tires')

# Syntax errors: forgot to close a quote ' in a string
## EOL = end of line
print('My car has',tires,' tires)

tires=4
print('My car has',tires,' tires')

# Syntax errors: when you forget the colon at the end of a line
data=[1,2,3,4]
for i in data
    print(i**2)

# Indentation errors: forgot to indent
for i in data:
print(i**2)

for i in data:
    print(i**2)
```

### Index errors

```
groceries=['banana','cheese','bread']
groceries[3]
```

### Characters in strings are IMMUTABLE

```
fruit='mango'
fruit[3]
fruit[3]='G'
```

### Items in lists are MUTABLE

```
fruits=['mango','cherry']
fruits[1]
fruits[1]='apple'
fruits
```

### Characters in items of a list are IMMUTABLE

```
fruits[1]
fruits[1][2]
fruits[1][2]='P'
```
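A sketch tying the mutability examples together: assigning into a string raises a `TypeError`, which we can catch with `try`/`except` instead of letting the cell fail, while assigning into a list succeeds:

```python
# Strings are immutable: item assignment raises TypeError
fruit = 'mango'
try:
    fruit[3] = 'G'
except TypeError as err:
    print('strings are immutable:', err)

# Lists are mutable: item assignment is fine
fruits = ['mango', 'cherry']
fruits[1] = 'apple'
print(fruits)  # ['mango', 'apple']
```

Catching the exception like this is also a handy pattern for demonstrating errors in a notebook without interrupting "Run All".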
```
# HIDDEN
from datascience import *
from prob140 import *
import numpy as np
import matplotlib.pyplot as plt
plt.style.use('fivethirtyeight')
%matplotlib inline
import math
from scipy import stats
from scipy import misc
from itertools import permutations

# HIDDEN
# The alphabet
alph = make_array('a', 'd', 't')

# HIDDEN
# Decode atdt using all possible decoders
x1 = [['a', 't', 'd', 't'], ['a','d','t','d'], ['d','t','a','t']]
x2 = [['d','a','t','a'], ['t','d','a','d'], ['t','a','d','a']]
decoded = x1 + x2

# HIDDEN
decoding = Table().with_columns(
    'Decoder', list(permutations(alph)),
    'atdt Decoded', decoded
)

# HIDDEN
# Make bigram transition matrix
# Data from Peter Norvig's bigram table
aa = 1913489177
dd = 6513992572
tt = 19222971337
ad = 23202347740
da = 23279747379
at = 80609883139
ta = 42344542093
dt = 10976756096
td = 3231292348

row1 = make_array(aa, ad, at)/sum([aa, ad, at])
row2 = make_array(da, dd, dt)/sum([da, dd, dt])
row3 = make_array(ta, td, tt)/sum([ta, td, tt])
rows = np.append(np.append(row1, row2), row3)

# HIDDEN
bigrams = MarkovChain.from_table(Table().states(alph).transition_probability(rows))
```

## Code Breaking ##

While it is interesting that many Markov Chains are reversible, the examples that we have seen so far haven't explained what we get by reversing a chain. After all, if it looks the same running forwards as it does backwards, why not just run it forwards? Why bother with reversibility?

It turns out that reversing Markov Chains can help solve a class of problems that are intractable by other methods. In this section we present an example of how such problems arise. In the next section we discuss a solution.

### Assumptions ###

People have long been fascinated by encryption and decryption, well before cybersecurity became part of our lives. Decoding encrypted information can be complex and computation intensive. Reversed Markov Chains can help us in this task.
To get a sense of one approach to solving such problems, and of the extent of the task, let's try to decode a short piece of text that has been encoded using a simple code called a *substitution code*. Text is written in an *alphabet*, which you can think of as a set of letters and punctuation. In a substitution code, each letter of the alphabet is simply replaced by another in such a way that the code is just a permutation of the alphabet.

To decode a message encrypted by a substitution code, you have to *invert* the permutation that was used. In other words, you have to apply a permutation to the *coded* message in order to recover the original text. We will call this permutation the *decoder*.

To decode a textual message, we have to make some assumptions. For example, it helps to know the language in which the message was written, and what combinations of letters are common in that language. For example, suppose we try to decode a message that was written in English and then encrypted. If our decoding process ends up with "words" like zzxtf and tbgdgaa, we might want to try a different way.

So we need data about which sequences of letters are common. Such data are now increasingly easy to gather; see for example this [web page](http://norvig.com/ngrams/) by [Peter Norvig](http://norvig.com), a Director of Research at Google.

### Decoding a Message ###

Let's see how we can use such an approach to decode a message. For simplicity, suppose our alphabet consists of only three letters: a, d, and t. Now suppose we get the coded message atdt. We believe it's an English word. How can we go about decoding it in a manner that can be replicated by a computer for other words too?

As a first step, we will write down all 3! = 6 possible permutations of the letters in the alphabet and use each one to decode the message. The table `decoding` contains all the results. Each entry in the `Decoder` column is a permutation that we will apply to our coded text atdt.
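Computationally, applying a decoder is just a character-by-character substitution. A plain-Python sketch (not the `prob140` table machinery) for one of the six decoders:

```python
# Apply the decoder ['a', 't', 'd'] to the coded message 'atdt'.
# The decoder is read against the alphabet in order:
#   a -> a,  d -> t,  t -> d
alphabet = ['a', 'd', 't']
decoder = ['a', 't', 'd']

mapping = dict(zip(alphabet, decoder))
decoded = ''.join(mapping[ch] for ch in 'atdt')
print(decoded)  # 'adtd'
```

Running this for each of the 3! permutations reproduces the `atdt Decoded` column of the `decoding` table.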
The permutation determines which letters we will use as substitutes in our decoding process. To see how to do this, start by keeping the alphabet in "alphabetical" order in your head: 'a', 'd', 't'. Now look at the rows of the table. - The decoder in the first row is ['a', 'd', 't']. This decoder simply leaves the letters unchanged; atdt gets decoded as atdt. $$ \text{Decoder ['a', 'd', 't']: } ~~~ a \to a, ~~~ d \to d, ~~~ t \to t $$ - The decoder in the second row is ['a', 't', 'd']. This keeps the first letter of the alphabet 'a' unchanged, but replaces the second letter 'd' by 't' and the third letter 't' by 'd'. $$ \text{Decoder ['a', 't', 'd']: } ~~~ a \to a, ~~~ d \to t, ~~~ t \to d $$ So atdt gets decoded as adtd. You can read the rest of the table in the same way. Notice that in each decoded message, a letter appears twice, at indices 1 and 3. That's the letter being used to decode t in atdt. A feature of substitution codes is that each letter *original* is coded by a letter *code*, with the same letter *code* being used every time the letter *original* appears in the text. So the decoder must have the same feature. ``` decoding ``` Which one of these decoders should we use? To make this decision, we have to know something about the frequency of letter transitions in English. Our goal will be to pick the decoder according to the frequency of the decoded word. We have put together some data on the frequency of the different *bigrams*, or two-letter combinations, in English. Here is a transition matrix called `bigrams` that is a gross simplification of available information about bigrams in English; we used Peter Norvig's bigrams table and restricted it to our three-letter alphabet. The row corresponding to the letter 'a' assumes that about 2% of the bigrams that start with 'a' are 'aa', about 22% are 'ad', and the remaining 76% are 'at'. It makes sense that the 'aa' transitions are rare; we don't use words like aardvark very often. 
Even 2% seems large until you remember that it is the proportion of 'aa' transitions only among the transitions 'aa', 'ad', and 'at', because we have restricted the alphabet. If you look at its proportion among all $26\times26$ bigrams, that will be much lower.

```
bigrams
```

Now think of the true text as a path of a Markov Chain that has this transition matrix. An interesting historical note is that this is what Markov did when he first came up with the process that now bears his name – he analyzed the transitions between vowels and consonants in *Eugene Onegin*, Alexander Pushkin's novel written in verse.

If the true text is tada, then we can think of the sequence tada as the path of a Markov chain. Its probability can be calculated as $P(t)P(t, a)P(a, d)P(d, a)$. We will give each decoder a score based on this probability. Higher scores correspond to better decoders.

To assign the score, we assume that all three letters are equally likely to start the path. For three common letters in the alphabet, this won't be far from the truth. That means the probability of each path will start with a factor of 1/3, which we can ignore because all we are trying to do is rank all the probabilities. We will just calculate $P(t, a)P(a, d)P(d, a)$, which is about 8%.

According to our `decoding` table above, tada is the result we get by applying the decoder ['t', 'd', 'a'] to our data atdt. For now, we will say that *the score of this decoder, given the data*, is 8%. Later we will introduce more formal calculations and terminology.

```
# score of decoder ['t', 'd', 'a']
0.653477 * 0.219458 * 0.570995
```

To automate such calculations we can use the `prob_of_path` method. Remember that its first argument is the initial state, and the second argument is a list or array consisting of the remaining states in sequence.

```
bigrams.prob_of_path('t', ['a', 'd', 'a'])
```

Should we decide that our message atdt should be decoded as tada? Perhaps, if we think that 8% is a high likelihood.
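Under the hood, `prob_of_path` simply multiplies transition probabilities along the path. A plain-Python sketch using the rounded bigram probabilities quoted above (only the three transitions needed for this particular path):

```python
# Rounded transition probabilities from the bigrams matrix above
P = {
    ('t', 'a'): 0.653477,
    ('a', 'd'): 0.219458,
    ('d', 'a'): 0.570995,
}

def prob_of_path(start, rest, transitions):
    """Multiply the transition probabilities along a path."""
    prob = 1.0
    state = start
    for nxt in rest:
        prob *= transitions[(state, nxt)]
        state = nxt
    return prob

score = prob_of_path('t', ['a', 'd', 'a'], P)
print(round(score, 4))  # roughly 0.0819 -- the ~8% quoted above
```

This is exactly $P(t, a)P(a, d)P(d, a)$, with the initial factor of 1/3 dropped as discussed.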
But what if some other possible decoder has a higher likelihood? In that case it would be natural to prefer that one. So we are going to need the probabilities of each of the six "decoded" paths. Let's define a function `score` that will take a list or array of characters and return the probability of the corresponding path using the `bigrams` transition matrix. In our example, this is the same as returning the score of the corresponding decoder. ``` def score(x): return bigrams.prob_of_path(x[0], x[1:]) ``` Here are the results in decreasing order of score. There is a clear winner: the decoder ['d', 't', 'a'] corresponding to the message 'data' has more than twice the score of any other decoder. ``` decoding = decoding.with_column('Score of Decoder', decoding.apply(score, 1)) decoding.sort('Score of Decoder', descending=True) ``` ### The Size of the Problem ### What we have been able to do with an alphabet of three characters becomes daunting when the alphabet is larger. The 52 lower case and upper case letters, along with a space character and all the punctuations, form an alphabet of around 70 characters. That gives us 70! different decoders to consider. In theory, we have to find the likelihood of each of these 70! candidates and sort them. Here is the number 70!. That's a lot of decoders. Our computing system can't handle that many, and other systems will have the same problem. ``` math.factorial(70) ``` One potential solution is to sample at random from these 70! possible decoders and just pick from among the sampled permutations. But how should we draw from 70! items? It's not a good idea to choose uniform random permutations of the alphabet, as those are unlikely to get us quickly to the desired solution. What we would really like our sampling procedure to do is to choose good decoders with high probability. A good decoder is one that generates text that has higher probability than text produced by almost all other decoders. 
In other words, a good decoder has higher likelihood than other decoders, given the data. You can write down this likelihood using Bayes' Rule. Let $S$ represent the space of all possible permutations; if the alphabet has $N$ characters, then $S$ has $N!$ elements. For any randomly picked permutation $j$, the likelihood of that decoder given the data is: \begin{align*} \text{Likelihood of } j \text{ given the encoded text} &= \frac{\frac{1}{N!} P(\text{encoded text} \mid \text{decoder = }j)} { {\sum_{i \in S} } \frac{1}{N!} P(\text{encoded text} \mid \text{decoder = }i)} \\ \\ &=\frac{P(\text{encoded text} \mid \text{decoder = }j)} { {\sum_{i \in S} } P(\text{encoded text} \mid \text{decoder = }i)} \end{align*} For the given encoded text, the denominator is the normalizing constant that makes all the likelihoods sum to 1. It appears in the likelihood of every decoder. In our example with the three-letter alphabet, we ignored it because we could figure out the numerators for all six decoders and just compare them. The numerator was what we called the *score* of the decoder. Even when the alphabet is large, for any particular decoder $j$ we can find the numerator by multiplying transition probabilities sequentially, as we did in our example. But with a large alphabet we can't do this for all possible decoders, so we can't list all possible scores and we can't add them all up. Therefore we don't know the denominator of the likelihoods, not even up to a decent approximation. What we need now is a method that helps us draw from a probability distribution even when we don't know the normalizing constant. That is what Markov Chain Monte Carlo helps us to do.
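The reason MCMC can sidestep the unknown denominator is that Metropolis-style methods only ever use *ratios* of probabilities, and the normalizing constant cancels in a ratio. Here is a minimal sketch of the acceptance step, assuming a `score` function like the one defined above; the numbers are made up for illustration, and this is only a preview of the full algorithm.

```python
import random

def acceptance_probability(current_score, proposed_score):
    # Both scores are unnormalized numerators; the shared
    # normalizing constant cancels in the ratio.
    return min(1.0, proposed_score / current_score)

# A proposed decoder with a higher score is always accepted
print(acceptance_probability(0.03, 0.08))   # 1.0

# A proposed decoder with a lower score is accepted only part of the time
p = acceptance_probability(0.08, 0.03)
print(round(p, 3))                          # 0.375
if random.random() < p:
    print('accepted the worse decoder anyway')
```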
<a href="https://colab.research.google.com/github/vndee/pytorch-vi/blob/master/chatbot_tutorial.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> ## CHATBOT **Author**: [Matthew Inkawhich](https://github.com/MatthewInkawhich) In this tutorial we explore a fun application of sequence-to-sequence (seq2seq) models: we will train a simple chatbot on movie dialogue from the [Cornell Movie-Dialogs Corpus](https://www.cs.cornell.edu/~cristian/Cornell_Movie-Dialogs_Corpus.html). Conversational models are a very active research area in artificial intelligence. Chatbots appear in many useful products, such as customer-service desks and online advisory services, but these bots are usually retrieval-based: the response to each type of question is predefined. Teaching a machine to converse naturally with a human is still a hard problem, and a general solution remains far off. Recently, riding the deep learning boom, powerful generative models such as Google's Neural Conversational Model have made an impressive leap forward. In this tutorial, we will implement this kind of generative model with PyTorch. ![](https://pytorch.org/tutorials/_images/bot.png) ``` > hello? Bot: hello . > where am I? Bot: you re in a hospital . > who are you? Bot: i m a lawyer . > how are you doing? Bot: i m fine . > are you my friend? Bot: no . > you're under arrest Bot: i m trying to help you ! > i'm just kidding Bot: i m sorry . > where are you from? Bot: san francisco . > it's time for me to leave Bot: i know . > goodbye Bot: goodbye . ``` ### Main sections: - Load and preprocess the [Cornell Movie-Dialogs Corpus](https://www.cs.cornell.edu/~cristian/Cornell_Movie-Dialogs_Corpus.html) dataset. - Implement a seq2seq model with Luong's attention. - Jointly train the encoder-decoder model with mini-batches. 
- Implement greedy-search decoding. - Interact with the trained model. ### Acknowledgements: The code in this tutorial borrows from the following open-source projects: - Yuan-Kuei Wu’s pytorch-chatbot implementation: https://github.com/ywk991112/pytorch-chatbot - Sean Robertson’s practical-pytorch seq2seq-translation example: https://github.com/spro/practical-pytorch/tree/master/seq2seq-translation - FloydHub’s Cornell Movie Corpus preprocessing code: https://github.com/floydhub/textutil-preprocess-cornell-movie-corpus ## Preparation First, download the data [here](https://www.cs.cornell.edu/~cristian/Cornell_Movie-Dialogs_Corpus.html) and unzip it. ``` !wget --header 'Host: www.cs.cornell.edu' --user-agent 'Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:66.0) Gecko/20100101 Firefox/66.0' --header 'Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8' --header 'Accept-Language: en-US,en;q=0.5' --header 'Upgrade-Insecure-Requests: 1' 'http://www.cs.cornell.edu/~cristian/data/cornell_movie_dialogs_corpus.zip' --output-document 'cornell_movie_dialogs_corpus.zip' !unzip cornell_movie_dialogs_corpus.zip !ls cornell\ movie-dialogs\ corpus ``` Import some helper libraries: ``` from __future__ import absolute_import from __future__ import division from __future__ import print_function from __future__ import unicode_literals import torch from torch.jit import script, trace import torch.nn as nn from torch import optim import torch.nn.functional as F import csv import random import re import os import unicodedata import codecs from io import open import itertools import math USE_CUDA = torch.cuda.is_available() device = torch.device("cuda" if USE_CUDA else "cpu") ``` ## Loading and preprocessing the data The next step is to reorganize the data. The Cornell Movie-Dialogs Corpus is a large dataset of conversations between movie characters: - 220,579 conversational exchanges between 10,292 pairs of characters. - 9,035 characters from 617 movies. 
- 304,713 total utterances. This dataset is large and diverse in language style, time period, setting, and sentiment. We hope our model will be robust enough to handle many different ways of speaking and querying. First, let's look at a few lines of the raw data to see what we are working with. ``` corpus_name = 'cornell movie-dialogs corpus' def printLines(file, n=10): with open(file, 'rb') as datafile: lines = datafile.readlines() for line in lines[:n]: print(line) printLines(os.path.join(corpus_name, 'movie_lines.txt')) ``` For convenience, we will reorganize the data into a format in which each line of the file contains a tab-separated query sentence and response sentence. Below we will need several methods to parse the data from the file movie_lines.txt: - `loadLines`: Splits each line of the file into a Python dictionary of fields (lineID, characterID, movieID, character, text). - `loadConversations`: Groups the fields of lines from `loadLines` into conversations based on movie_conversations.txt. - `extractSentencePairs`: Extracts pairs of sentences from the conversations. 
``` # Splits each line of the file into a dictionary of fields def loadLines(fileName, fields): lines = {} with open(fileName, 'r', encoding='iso-8859-1') as f: for line in f: values = line.split(" +++$+++ ") # Extract fields lineObj = {} for i, field in enumerate(fields): lineObj[field] = values[i] lines[lineObj['lineID']] = lineObj return lines # Groups fields of lines from `loadLines` into conversations based on *movie_conversations.txt* def loadConversations(fileName, lines, fields): conversations = [] with open(fileName, 'r', encoding='iso-8859-1') as f: for line in f: values = line.split(" +++$+++ ") # Extract fields convObj = {} for i, field in enumerate(fields): convObj[field] = values[i] # Convert string to list (convObj["utteranceIDs"] == "['L598485', 'L598486', ...]") lineIds = eval(convObj["utteranceIDs"]) # Reassemble lines convObj["lines"] = [] for lineId in lineIds: convObj["lines"].append(lines[lineId]) conversations.append(convObj) return conversations # Extracts pairs of sentences from conversations def extractSentencePairs(conversations): qa_pairs = [] for conversation in conversations: # Iterate over all the lines of the conversation for i in range(len(conversation["lines"]) - 1): # We ignore the last line (no answer for it) inputLine = conversation["lines"][i]["text"].strip() targetLine = conversation["lines"][i+1]["text"].strip() # Filter wrong samples (if one of the lists is empty) if inputLine and targetLine: qa_pairs.append([inputLine, targetLine]) return qa_pairs ``` Now we will call the methods defined above to create a new data file named formatted_movie_lines.txt. 
``` # Define path to new file datafile = os.path.join(corpus_name, 'formatted_movie_lines.txt') delimiter = '\t' # Unescape the delimiter delimiter = str(codecs.decode(delimiter, 'unicode_escape')) # Initialize lines dict, conversations list, and field ids lines = {} conversations = [] MOVIE_LINES_FIELDS = ["lineID", "characterID", "movieID", "character", "text"] MOVIE_CONVERSATIONS_FIELDS = ["character1ID", "character2ID", "movieID", "utteranceIDs"] # Load lines and process conversations print("\nProcessing corpus...") lines = loadLines(os.path.join(corpus_name, "movie_lines.txt"), MOVIE_LINES_FIELDS) print("\nLoading conversations...") conversations = loadConversations(os.path.join(corpus_name, "movie_conversations.txt"), lines, MOVIE_CONVERSATIONS_FIELDS) # Write new csv file print("\nWriting newly formatted file...") with open(datafile, 'w', encoding='utf-8') as outputfile: writer = csv.writer(outputfile, delimiter=delimiter, lineterminator='\n') for pair in extractSentencePairs(conversations): writer.writerow(pair) # Print a sample of lines print("\nSample lines from file:") printLines(datafile) ``` ### Reading and trimming the data Now that we have reorganized the data, we need to build a vocabulary of the words used in the dataset and load the query/response sentence pairs into memory. Note that we treat a sentence as a sequence of discrete **words**, with no implicit mapping to a numerical space. We therefore need to create a mapping such that each distinct word corresponds to exactly one index value, namely its position in the vocabulary. To do this we define the `Voc` class, which keeps a dictionary mapping **words** to **indexes**, a reverse dictionary mapping **indexes** to **words**, a count of each word, and a total word count. The `Voc` class also provides methods for adding a word to the vocabulary (`addWord`), adding all the words in a sentence (`addSentence`), and trimming infrequently seen words. 
We will talk about trimming later: ``` # Default word tokens PAD_token = 0 # Used for padding short sentences SOS_token = 1 # Start-of-sentence token EOS_token = 2 # End-of-sentence token class Voc: def __init__(self, name): self.name = name self.trimmed = False self.word2index = {} self.word2count = {} self.index2word = {PAD_token: "PAD", SOS_token: "SOS", EOS_token: "EOS"} self.num_words = 3 # Count SOS, EOS, PAD def addSentence(self, sentence): for word in sentence.split(' '): self.addWord(word) def addWord(self, word): if word not in self.word2index: self.word2index[word] = self.num_words self.word2count[word] = 1 self.index2word[self.num_words] = word self.num_words += 1 else: self.word2count[word] += 1 # Remove words below a certain count threshold def trim(self, min_count): if self.trimmed: return self.trimmed = True keep_words = [] for k, v in self.word2count.items(): if v >= min_count: keep_words.append(k) print('keep_words {} / {} = {:.4f}'.format( len(keep_words), len(self.word2index), len(keep_words) / len(self.word2index) )) # Reinitialize dictionaries self.word2index = {} self.word2count = {} self.index2word = {PAD_token: "PAD", SOS_token: "SOS", EOS_token: "EOS"} self.num_words = 3 # Count default tokens for word in keep_words: self.addWord(word) ``` Before training we need to preprocess the data. First, we convert Unicode strings to ASCII using `unicodeToAscii`. Next, we lowercase all characters and strip out all non-letter characters except some basic punctuation (`normalizeString`). Finally, to help training converge faster, we filter out sentences longer than the `MAX_LENGTH` threshold (`filterPairs`). 
``` MAX_LENGTH = 10 # Maximum sentence length to consider # Turn a Unicode string to plain ASCII, thanks to # https://stackoverflow.com/a/518232/2809427 def unicodeToAscii(s): return ''.join( c for c in unicodedata.normalize('NFD', s) if unicodedata.category(c) != 'Mn' ) # Lowercase, trim, and remove non-letter characters def normalizeString(s): s = unicodeToAscii(s.lower().strip()) s = re.sub(r"([.!?])", r" \1", s) s = re.sub(r"[^a-zA-Z.!?]+", r" ", s) s = re.sub(r"\s+", r" ", s).strip() return s # Read query/response pairs and return a voc object def readVocs(datafile, corpus_name): print("Reading lines...") # Read the file and split into lines lines = open(datafile, encoding='utf-8').\ read().strip().split('\n') # Split every line into pairs and normalize pairs = [[normalizeString(s) for s in l.split('\t')] for l in lines] voc = Voc(corpus_name) return voc, pairs # Returns True iff both sentences in a pair 'p' are under the MAX_LENGTH threshold def filterPair(p): # Input sequences need to preserve the last word for EOS token return len(p[0].split(' ')) < MAX_LENGTH and len(p[1].split(' ')) < MAX_LENGTH # Filter pairs using filterPair condition def filterPairs(pairs): return [pair for pair in pairs if filterPair(pair)] # Using the functions defined above, return a populated voc object and pairs list def loadPrepareData(corpus_name, datafile, save_dir): print("Start preparing training data ...") voc, pairs = readVocs(datafile, corpus_name) print("Read {!s} sentence pairs".format(len(pairs))) pairs = filterPairs(pairs) print("Trimmed to {!s} sentence pairs".format(len(pairs))) print("Counting words...") for pair in pairs: voc.addSentence(pair[0]) voc.addSentence(pair[1]) print("Counted words:", voc.num_words) return voc, pairs # Load/Assemble voc and pairs save_dir = os.path.join("save") voc, pairs = loadPrepareData(corpus_name, datafile, save_dir) # Print some pairs to validate print("\npairs:") for pair in pairs[:10]: print(pair) ``` Another tactic that helps the model learn faster is to trim rarely used words out of the data. This reduces the difficulty of the problem, so the model converges sooner. We do this in two steps: - Trim words that appear fewer than `MIN_COUNT` times, using the `voc.trim` method. - Filter out sentence pairs that contain any of the trimmed words. ``` MIN_COUNT = 3 # Minimum word count threshold for trimming def trimRareWords(voc, pairs, MIN_COUNT): # Trim words used under the MIN_COUNT from the voc voc.trim(MIN_COUNT) # Filter out pairs with trimmed words keep_pairs = [] for pair in pairs: input_sentence = pair[0] output_sentence = pair[1] keep_input = True keep_output = True # Check input sentence for word in input_sentence.split(' '): if word not in voc.word2index: keep_input = False break # Check output sentence for word in output_sentence.split(' '): if word not in voc.word2index: keep_output = False break # Only keep pairs that do not contain trimmed word(s) in their input or output sentence if keep_input and keep_output: keep_pairs.append(pair) print("Trimmed from {} pairs to {}, {:.4f} of total".format(len(pairs), len(keep_pairs), len(keep_pairs) / len(pairs))) return keep_pairs # Trim voc and pairs pairs = trimRareWords(voc, pairs, MIN_COUNT) ``` ## Preparing the data for the model Although we have done a lot of work above to arrive at a clean dataset of conversation sentence pairs and a vocabulary, our model ultimately expects its input to be numerical torch tensors. One way to do this conversion can be found in the [seq2seq translation tutorial](https://pytorch.org/tutorials/intermediate/seq2seq_translation_tutorial.html). In that tutorial the batch size is 1, so all that has to be done is convert every word in a sentence pair to its corresponding vocabulary index and feed the result to the model. 
However, if we want training to go faster and to take advantage of the GPU's parallelism, we should train with mini-batches. With mini-batches we have to keep in mind that the sentences in a batch will generally have different lengths, so we fix the dimensions of our batch tensors at (max_length, batch_size); sentences shorter than max_length are zero-padded after the EOS_token (end-of-sentence token). A related issue is that if we naively convert the words of our sentence pairs into a batch tensor, the tensor has shape (batch_size, max_length), whereas what we need is a tensor of shape (max_length, batch_size) so that the first dimension indexes time steps. Rather than writing a separate transpose step, we handle the transposition implicitly inside the `zeroPadding` function. ![](https://pytorch.org/tutorials/_images/seq2seq_batches.png) ``` def indexesFromSentence(voc, sentence): return [voc.word2index[word] for word in sentence.split(' ')] + [EOS_token] def zeroPadding(l, fillvalue=PAD_token): return list(itertools.zip_longest(*l, fillvalue=fillvalue)) def binaryMatrix(l, value=PAD_token): m = [] for i, seq in enumerate(l): m.append([]) for token in seq: if token == PAD_token: m[i].append(0) else: m[i].append(1) return m # Returns padded input sequence tensor and lengths def inputVar(l, voc): indexes_batch = [indexesFromSentence(voc, sentence) for sentence in l] lengths = torch.tensor([len(indexes) for indexes in indexes_batch]) padList = zeroPadding(indexes_batch) padVar = torch.LongTensor(padList) return padVar, lengths # Returns padded target sequence tensor, padding mask, and max target length def outputVar(l, voc): indexes_batch = [indexesFromSentence(voc, sentence) for sentence in l] max_target_len = max([len(indexes) for indexes in indexes_batch]) padList = zeroPadding(indexes_batch) mask = binaryMatrix(padList) mask = torch.ByteTensor(mask) padVar = 
torch.LongTensor(padList) return padVar, mask, max_target_len def batch2TrainData(voc, pair_batch): pair_batch.sort(key=lambda x: len(x[0].split(' ')), reverse=True) input_batch, output_batch = [], [] for pair in pair_batch: input_batch.append(pair[0]) output_batch.append(pair[1]) inp, lengths = inputVar(input_batch, voc) output, mask, max_target_len = outputVar(output_batch, voc) return inp, lengths, output, mask, max_target_len # Example for validation small_batch_size = 5 batches = batch2TrainData(voc, [random.choice(pairs) for _ in range(small_batch_size)]) input_variable, lengths, target_variable, mask, max_target_len = batches print('input_variable:', input_variable) print('lengths:', lengths) print('target_variable:', target_variable) print('mask:', mask) print('max_target_len:', max_target_len) ``` ## Defining the model ### The seq2seq model The brain of our chatbot is a sequence-to-sequence (seq2seq) model. The goal of a seq2seq model is to take a sequence as input and predict the corresponding output sequence using a fixed-size model. [Sutskever et al.](https://arxiv.org/abs/1409.3215) proposed a method based on two recurrent neural networks (RNNs) that can solve this problem. One RNN acts as an encoder, whose job is to encode the input sequence into a context vector. In theory, the context vector (the final layer of the RNN) contains semantic information about the input sequence. The second RNN is the decoder, which uses the encoder's context vector to predict the corresponding output sequence. ![](https://pytorch.org/tutorials/_images/seq2seq_ts.png) *Image source: https://jeddy92.github.io/JEddy92.github.io/ts_seq2seq_intro/* ### Encoder The encoder RNN iterates through the input sequence one token at a time, at each time step producing an "output" vector and a "hidden state" vector. The hidden state vector is then used to compute the hidden state at the next time step, following the basic idea of an RNN. 
The encoder tries to transform what it sees in the input sequence, context and meaning included, into a set of points in a high-dimensional space, which the decoder reads in order to generate a meaningful output sequence. The heart of the encoder is a multi-layered Gated Recurrent Unit (GRU), proposed by [Cho et al.](https://arxiv.org/pdf/1406.1078v3.pdf) in 2014. We will use a bidirectional variant of the GRU, which amounts to two independent RNNs: one reads the input sequence from left to right, the other from right to left. ![](https://pytorch.org/tutorials/_images/RNN-bidirectional.png) *Image source: https://colah.github.io/posts/2015-09-NN-Types-FP/* Note that the `embedding` layer is used to encode each word of the input sentence as a vector in a semantic space. Finally, when feeding a padded batch into the RNN, we need to pack the padded sequences and later "unpack" the zero padding around each sequence. #### Computation steps 1. Convert word indexes to embedding vectors. 2. Pack the padded batch of sequences. 3. Forward the batch through the GRU. 4. Unpack the padding. 5. Sum the outputs of the two GRU directions. 6. Return the output and the final hidden state. 
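Step 5 above (summing the two GRU directions) is just a matter of slicing the feature dimension in half and adding elementwise, which the encoder does with `outputs[:, :, :self.hidden_size] + outputs[:, :, self.hidden_size:]`. A toy pure-Python illustration of that slice-and-add, for a single time step and `hidden_size = 2`:

```python
hidden_size = 2

# One time step, one batch element: 2 * hidden_size features
# from a bidirectional RNN, laid out as [forward | backward]
out = [1.0, 2.0, 10.0, 20.0]

summed = [f + b for f, b in zip(out[:hidden_size], out[hidden_size:])]
print(summed)  # [11.0, 22.0]
```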
#### Inputs: - `input_seq`: batch of input sentences; shape (max_length, batch_size) - `input_lengths`: list of sentence lengths corresponding to each sentence in the batch; shape (batch_size) - `hidden`: hidden state; shape (n_layers * num_directions, batch_size, hidden_size) #### Outputs: - `outputs`: output features from the last layer of the GRU (sum of both directions); shape (max_length, batch_size, hidden_size) - `hidden`: updated hidden state from the GRU; shape (n_layers * num_directions, batch_size, hidden_size) ``` class EncoderRNN(nn.Module): def __init__(self, hidden_size, embedding, n_layers=1, dropout=0): super(EncoderRNN, self).__init__() self.n_layers = n_layers self.hidden_size = hidden_size self.embedding = embedding # Initialize GRU; the input_size and hidden_size params are both set to # 'hidden_size' because our input size is a word embedding with number # of features == hidden_size self.gru = nn.GRU(hidden_size, hidden_size, n_layers, dropout=(0 if n_layers == 1 else dropout), bidirectional=True) def forward(self, input_seq, input_lengths, hidden=None): # Convert word indexes to embedding vectors embedded = self.embedding(input_seq) # Pack padded batch of sequences for RNN module packed = nn.utils.rnn.pack_padded_sequence(embedded, input_lengths) # Forward pass through GRU outputs, hidden = self.gru(packed, hidden) # Unpack padding outputs, _ = nn.utils.rnn.pad_packed_sequence(outputs) # Sum bidirectional GRU outputs outputs = outputs[:, :, :self.hidden_size] + outputs[:, :, self.hidden_size:] # Return output and final hidden state return outputs, hidden ``` ### Decoder The decoder RNN generates the response sequence token by token. It uses the encoder's context vector and its own internal hidden state to generate the next word of the sequence, until it emits an EOS_token (end-of-sentence marker). A problem with the traditional seq2seq approach is that if we rely only on the context vector and hidden state, information is lost, especially for long sentences. 
To cope with this, [Bahdanau](https://arxiv.org/abs/1409.0473) proposed what is called an attention mechanism. Attention lets the decoder focus on particular parts of the input sentence rather than treating every word as equally important. Attention is computed from the decoder's current hidden state and the encoder's outputs, and the attention weights have the same shape as the input sequence. ![](https://pytorch.org/tutorials/_images/attn2.png) [Luong](https://arxiv.org/abs/1508.04025) attention is an improved version built on the idea of "global attention". The difference is that global attention considers all of the encoder's hidden states, rather than only a local subset. Another difference is that global attention computes the attention weights from the decoder's hidden state at the current time step only, whereas Bahdanau's version also involves the hidden states from previous steps. ![](https://pytorch.org/tutorials/_images/scores.png) Here $h_{t}$ is the decoder's current hidden state and $h_{s}$ ranges over all of the encoder's hidden states. Overall, global attention can be summarized as in the figure below. 
![](https://pytorch.org/tutorials/_images/global_attn.png) ``` # Luong attention layer class Attn(nn.Module): def __init__(self, method, hidden_size): super(Attn, self).__init__() self.method = method if self.method not in ['dot', 'general', 'concat']: raise ValueError(self.method, 'is not an appropriate attention method.') self.hidden_size = hidden_size if self.method == 'general': self.attn = nn.Linear(self.hidden_size, hidden_size) elif self.method == 'concat': self.attn = nn.Linear(self.hidden_size * 2, hidden_size) self.v = nn.Parameter(torch.FloatTensor(hidden_size)) def dot_score(self, hidden, encoder_output): return torch.sum(hidden * encoder_output, dim=2) def general_score(self, hidden, encoder_output): energy = self.attn(encoder_output) return torch.sum(hidden * energy, dim=2) def concat_score(self, hidden, encoder_outputs): energy = self.attn(torch.cat((hidden.expand(encoder_outputs.size(0), -1, -1), encoder_outputs), 2)).tanh() return torch.sum(self.v * energy, dim=2) def forward(self, hidden, encoder_outputs): # Calculate the attention weights (energies) based on the given method if self.method == 'general': attn_energies = self.general_score(hidden, encoder_outputs) elif self.method == 'concat': attn_energies = self.concat_score(hidden, encoder_outputs) elif self.method == 'dot': attn_energies = self.dot_score(hidden, encoder_outputs) # Transpose max_length and batch_size dimensions attn_energies = attn_energies.t() # Return the softmax normalized probability scores (with added dimension) return F.softmax(attn_energies, dim=1).unsqueeze(1) ``` #### Computation steps 1. Get the embedding vector of the current input word. 2. Forward through the unidirectional GRU. 3. Calculate the attention weights from the current GRU output. 4. Multiply the attention weights by the encoder outputs to get a new "weighted sum" context vector. 5. Concatenate the context vector and the GRU hidden state, as in Luong's equation. 6. Predict the next word, as in Luong's equation. 7. Return the output and the final hidden state #### Inputs: - `input_step`: one time step (one word) of the input sequence batch; shape (1, batch_size) - `last_hidden`: final hidden layer of the GRU; shape (n_layers * num_directions, batch_size, hidden_size) - `encoder_outputs`: the encoder model's output; shape (max_length, batch_size, hidden_size) #### Outputs: - `output`: softmax-normalized tensor; shape (batch_size, voc.num_words) - `hidden`: final hidden state of the GRU; shape (n_layers * num_directions, batch_size, hidden_size) ``` class LuongAttnDecoderRNN(nn.Module): def __init__(self, attn_model, embedding, hidden_size, output_size, n_layers=1, dropout=0.1): super(LuongAttnDecoderRNN, self).__init__() # Keep for reference self.attn_model = attn_model self.hidden_size = hidden_size self.output_size = output_size self.n_layers = n_layers self.dropout = dropout # Define layers self.embedding = embedding self.embedding_dropout = nn.Dropout(dropout) self.gru = nn.GRU(hidden_size, hidden_size, n_layers, dropout=(0 if n_layers == 1 else dropout)) self.concat = nn.Linear(hidden_size * 2, hidden_size) self.out = nn.Linear(hidden_size, output_size) self.attn = Attn(attn_model, hidden_size) def forward(self, input_step, last_hidden, encoder_outputs): # Note: we run this one step (word) at a time # Get embedding of current input word embedded = self.embedding(input_step) embedded = self.embedding_dropout(embedded) # Forward through unidirectional GRU rnn_output, hidden = self.gru(embedded, last_hidden) # Calculate attention weights from the current GRU output attn_weights = self.attn(rnn_output, encoder_outputs) # Multiply attention weights to encoder outputs to get new "weighted sum" context vector context = attn_weights.bmm(encoder_outputs.transpose(0, 1)) # Concatenate weighted context vector and GRU output using Luong eq. 
5 rnn_output = rnn_output.squeeze(0) context = context.squeeze(1) concat_input = torch.cat((rnn_output, context), 1) concat_output = torch.tanh(self.concat(concat_input)) # Predict next word using Luong eq. 6 output = self.out(concat_output) output = F.softmax(output, dim=1) # Return output and final hidden state return output, hidden ``` ## Training ### Masked loss Since we are working with batches of padded sentences, we cannot simply compute the loss over every element of the output tensor. We define `maskNLLLoss` to compute the loss based on the decoder's output, the target tensor, and a binary mask describing the padding of the target tensor. The return value is the average negative log likelihood of the elements that correspond to a 1 in the mask. ``` def maskNLLLoss(inp, target, mask): nTotal = mask.sum() crossEntropy = -torch.log(torch.gather(inp, 1, target.view(-1, 1)).squeeze(1)) loss = crossEntropy.masked_select(mask).mean() loss = loss.to(device) return loss, nTotal.item() ``` ### Training The `train` function implements the training algorithm for a single iteration. We use a couple of techniques to make training go more smoothly: - **Teacher forcing**: With a preset probability `teacher_forcing_ratio`, the decoder's next input is set to the current target word rather than the word the decoder just predicted. - **Gradient clipping**: A common technique for countering "exploding gradients". It simply caps the gradient at an upper threshold so it never becomes too large. ![](https://pytorch.org/tutorials/_images/grad_clip.png) *Image source: Goodfellow et al. Deep Learning. 2016. https://www.deeplearningbook.org/* #### Computation steps 1. Forward the entire batch through the encoder. 2. Initialize the decoder input with SOS_token and the decoder hidden state with the encoder's final hidden state. 3. Forward the input sequence through the decoder one step at a time. 4. If teacher forcing: set the decoder's next input to the current target word; otherwise set it to the decoder's current prediction. 5. Calculate the loss. 6. Backpropagate. 7. Clip the gradients. 8. Update the encoder and decoder weights. ``` def train(input_variable, lengths, target_variable, mask, max_target_len, encoder, decoder, embedding, encoder_optimizer, decoder_optimizer, batch_size, clip, max_length=MAX_LENGTH): # Zero gradients encoder_optimizer.zero_grad() decoder_optimizer.zero_grad() # Set device options input_variable = input_variable.to(device) lengths = lengths.to(device) target_variable = target_variable.to(device) mask = mask.to(device) # Initialize variables loss = 0 print_losses = [] n_totals = 0 # Forward pass through encoder encoder_outputs, encoder_hidden = encoder(input_variable, lengths) # Create initial decoder input (start with SOS tokens for each sentence) decoder_input = torch.LongTensor([[SOS_token for _ in range(batch_size)]]) decoder_input = decoder_input.to(device) # Set initial decoder hidden state to the encoder's final hidden state decoder_hidden = encoder_hidden[:decoder.n_layers] # Determine if we are using teacher forcing this iteration use_teacher_forcing = True if random.random() < teacher_forcing_ratio else False # Forward batch of sequences through decoder one time step at a time if use_teacher_forcing: for t in range(max_target_len): decoder_output, decoder_hidden = decoder( decoder_input, decoder_hidden, encoder_outputs ) # Teacher forcing: next input is current target decoder_input = target_variable[t].view(1, -1) # Calculate and accumulate loss mask_loss, nTotal = maskNLLLoss(decoder_output, target_variable[t], mask[t]) loss += mask_loss print_losses.append(mask_loss.item() * nTotal) n_totals += nTotal else: for t in range(max_target_len): decoder_output, decoder_hidden = decoder( decoder_input, decoder_hidden, encoder_outputs ) # No teacher forcing: next input is 
decoder's own current output _, topi = decoder_output.topk(1) decoder_input = torch.LongTensor([[topi[i][0] for i in range(batch_size)]]) decoder_input = decoder_input.to(device) # Calculate and accumulate loss mask_loss, nTotal = maskNLLLoss(decoder_output, target_variable[t], mask[t]) loss += mask_loss print_losses.append(mask_loss.item() * nTotal) n_totals += nTotal # Perform backpropagation loss.backward() # Clip gradients: gradients are modified in place _ = nn.utils.clip_grad_norm_(encoder.parameters(), clip) _ = nn.utils.clip_grad_norm_(decoder.parameters(), clip) # Adjust model weights encoder_optimizer.step() decoder_optimizer.step() return sum(print_losses) / n_totals ```
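The masking logic in `maskNLLLoss` can be checked with a tiny pure-Python example: only the positions marked 1 in the mask contribute to the average negative log likelihood. The probabilities below are made up for illustration.

```python
import math

# Predicted probability of the correct word at each of 4 time steps
probs = [0.5, 0.25, 0.1, 0.9]
# Mask: the last position is padding, so it is excluded from the loss
mask = [1, 1, 1, 0]

# Average -log(p) over the unmasked positions only,
# mirroring torch.gather + masked_select + mean above
selected = [-math.log(p) for p, m in zip(probs, mask) if m]
loss = sum(selected) / len(selected)
print(round(loss, 4))  # 1.4607
```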
# In-Class Coding Lab: Dictionaries

The goals of this lab are to help you understand:

- How to use Python dictionaries
- Basic dictionary methods
- Dealing with `KeyError`
- How to use lists of dictionaries
- How to encode / decode Python dictionaries to JSON

## Dictionaries are Key-Value Pairs

Each **key** is unique within a dictionary and is most often of type `str` (though any immutable type can serve as a key). The **value** stored under the key can be any Python type. This example creates a `stock` variable with two keys, `symbol` and `name`. We access a dictionary value with `['keyname']`.

```
stock = {}  # empty dictionary
stock['symbol'] = 'AAPL'
stock['name'] = 'Apple Computer'
print(stock)
print(stock['symbol'])
print(stock['name'])
```

While Python lists are best suited for storing multiple values of the same type (like grades), Python dictionaries are best suited for storing hybrid values, or values with multiple attributes. In the example above we created an empty dictionary `{}`, then assigned the keys `symbol` and `name` in individual assignment statements. We can also build the dictionary in a single statement, like this:

```
stock = {'name': 'Apple Computer', 'symbol': 'AAPL', 'value': 125.6}
print(stock)
print("%s (%s) has a value of $%.2f" % (stock['name'], stock['symbol'], stock['value']))
```

## Dictionaries are mutable

This means we can change their values. We can add and remove keys and update the value of existing keys. This makes dictionaries quite useful for storing data.

```
# let's add 2 new keys
print("Before changes", stock)
stock['low'] = 119.85
stock['high'] = 127.0
# and update the value key
stock['value'] = 126.25
print("After change", stock)
```

## Now You Try It!

Create a Python dictionary called `car` with the keys `make`, `model` and `price`. Set appropriate values and print out the dictionary.
```
# TODO: Write code here
car = {}
car['make'] = 'Acura'
car['model'] = 'MDX'
car['price'] = '$50,000'
print(car)
print(car['make'])
print(car['model'])
print(car['price'])
```

## What happens when the key is not there?

Let's go back to our stock example. What happens when we try to read a key that is not present in the dictionary? The answer is that Python reports a `KeyError`.

```
print(stock['change'])
```

No worries. We know how to handle run-time errors in Python... use `try` / `except`!

```
try:
    print(stock['change'])
except KeyError:
    print("The key 'change' does not exist!")
```

## Avoiding KeyError

You can avoid a `KeyError` by using the `get()` dictionary method, which returns a default value when the key does not exist. The first argument to `get()` is the key to look up; the second argument is the value to return when the key does not exist.

```
print(stock.get('name', 'no key'))
print(stock.get('change', 'no key'))
```

## Now You Try It!

Write a program that asks the user to input a key for the `stock` variable. If the key exists, print the value; otherwise print 'Key does not exist'.

```
# TODO: write code here
key = input('Enter key: ')
print(stock.get(key, 'Key does not exist'))
```

## Enumerating keys and values

You can enumerate keys and values easily, using the `keys()` and `values()` methods:

```
print("KEYS")
for k in stock.keys():
    print(k)
print("VALUES")
for v in stock.values():
    print(v)
```

## List of Dictionary

A list of dictionaries in Python allows us to create useful in-memory data structures; it's one of the features that sets Python apart from other programming languages. Let's use it to build a portfolio (a list of 4 stocks).
```
portfolio = [
    {'symbol': 'AAPL', 'name': 'Apple Computer Corp.', 'value': 136.66},
    {'symbol': 'AMZN', 'name': 'Amazon.com, Inc.', 'value': 845.24},
    {'symbol': 'MSFT', 'name': 'Microsoft Corporation', 'value': 64.62},
    {'symbol': 'TSLA', 'name': 'Tesla, Inc.', 'value': 257.00}
]
print("first stock", portfolio[0])
print("name of first stock", portfolio[0]['name'])
print("last stock", portfolio[-1])
print("value of 2nd stock", portfolio[1]['value'])
```

## Putting It All Together

Write a program to build out your personal stock portfolio.

```
1. Start with an empty list, called portfolio
2. loop
3.     create a new stock dictionary
4.     input a stock symbol, or type 'QUIT' to print the portfolio
5.     if symbol equals 'QUIT' exit loop
6.     add symbol value to stock dictionary under 'symbol' key
7.     input stock value as float
8.     add stock value to stock dictionary under 'value' key
9.     append stock variable to portfolio list variable
10. time to print the portfolio: for each stock in the portfolio
11.     print stock symbol and stock value, like this "AAPL $136.66"
```

```
portfolio = []
while True:
    stock = {}
    symbol = input("Enter a stock symbol, or type 'QUIT' to print portfolio: ")
    if symbol.upper() == 'QUIT':
        break
    stock['symbol'] = symbol
    stock['value'] = float(input('Enter the value of the stock: '))
    portfolio.append(stock)

for stock in portfolio:
    print("%s $%.2f" % (stock['symbol'], stock['value']))
```
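The lab's goals also mention encoding / decoding Python dictionaries to JSON, which the sections above don't demonstrate. A minimal sketch using the standard-library `json` module:

```python
import json

stock = {'name': 'Apple Computer', 'symbol': 'AAPL', 'value': 125.6}

# Encode (serialize) the dictionary to a JSON string
encoded = json.dumps(stock)
print(encoded)  # → {"name": "Apple Computer", "symbol": "AAPL", "value": 125.6}

# Decode (deserialize) the JSON string back into a dictionary
decoded = json.loads(encoded)
print(decoded['symbol'])  # → AAPL
```

`json.dumps()` / `json.loads()` work on strings; the related `json.dump()` / `json.load()` read and write file objects directly.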
# Topic 4.1 <a class="tocSkip"> # Imports ``` import math import numpy as np import pandas as pd import matplotlib.pyplot as plt import graphviz import sklearn.tree import sklearn.neighbors import sklearn.naive_bayes import sklearn.svm import sklearn.metrics import sklearn.preprocessing import sklearn.model_selection ``` # Data https://www.drivendata.org/competitions/54/machine-learning-with-a-heart/page/109/ - Numeric - slope\_of\_peak\_exercise\_st\_segment (int, semi-categorical, 1-3) - resting\_blood\_pressure (int) - chest\_pain\_type (int, semi-categorical, 1-4) - num\_major\_vessels (int, semi-categorical, 0-3) - resting\_ekg\_results (int, semi-categorical, 0-2) - serum\_cholesterol\_mg\_per\_dl (int) - oldpeak\_eq\_st\_depression (float) - age (int) - max\_heart\_rate\_achieved (int) - Categorical - thal - normal - fixed\_defect - reversible\_defect - fasting\_blood\_sugar\_gt\_120\_mg\_per\_dl (blood sugar > 120) - 0 - 1 - sex - 0 (f) - 1 (m) - exercise\_induced\_angina - 0 - 1 ``` features = pd.read_csv('train_values.csv') labels = pd.read_csv('train_labels.csv') features.head() labels.head() FEATURES = ['slope_of_peak_exercise_st_segment', 'thal', 'resting_blood_pressure', 'chest_pain_type', 'num_major_vessels', 'fasting_blood_sugar_gt_120_mg_per_dl', 'resting_ekg_results', 'serum_cholesterol_mg_per_dl', 'oldpeak_eq_st_depression', 'sex', 'age', 'max_heart_rate_achieved', 'exercise_induced_angina'] LABEL = 'heart_disease_present' EXPLANATIONS = {'slope_of_peak_exercise_st_segment' : 'Quality of Blood Flow to the Heart', 'thal' : 'Thallium Stress Test Measuring Blood Flow to the Heart', 'resting_blood_pressure' : 'Resting Blood Pressure', 'chest_pain_type' : 'Chest Pain Type (1-4)', 'num_major_vessels' : 'Major Vessels (0-3) Colored by Fluoroscopy', 'fasting_blood_sugar_gt_120_mg_per_dl' : 'Fasting Blood Sugar > 120 mg/dl', 'resting_ekg_results' : 'Resting Electrocardiographic Results (0-2)', 'serum_cholesterol_mg_per_dl' : 'Serum Cholesterol in mg/dl',
'oldpeak_eq_st_depression' : 'Exercise vs. Rest\nA Measure of Abnormality in Electrocardiograms', 'age' : 'Age (years)', 'sex' : 'Sex (m/f)', 'max_heart_rate_achieved' : 'Maximum Heart Rate Achieved (bpm)', 'exercise_induced_angina' : 'Exercise-Induced Chest Pain (yes/no)'} NUMERICAL_FEATURES = ['slope_of_peak_exercise_st_segment', 'resting_blood_pressure', 'chest_pain_type', 'num_major_vessels', 'resting_ekg_results', 'serum_cholesterol_mg_per_dl', 'oldpeak_eq_st_depression', 'age', 'max_heart_rate_achieved'] CATEGORICAL_FEATURES = ['thal', 'fasting_blood_sugar_gt_120_mg_per_dl', 'sex', 'exercise_induced_angina'] CATEGORICAL_FEATURE_VALUES = {'thal' : [[0, 1, 2], ['Normal', 'Fixed Defect', 'Reversible Defect']], 'fasting_blood_sugar_gt_120_mg_per_dl' : [[0, 1], ['No', 'Yes']], 'sex' : [[0, 1], ['F', 'M']], 'exercise_induced_angina' : [[0, 1], ['No', 'Yes']]} SEMI_CATEGORICAL_FEATURES = ['slope_of_peak_exercise_st_segment', 'chest_pain_type', 'num_major_vessels', 'resting_ekg_results'] SEMI_CATEGORICAL_FEATURE_LIMITS = {'slope_of_peak_exercise_st_segment' : [1, 3], 'chest_pain_type' : [1, 4], 'num_major_vessels' : [0, 3], 'resting_ekg_results' : [0, 2]} LABEL_VALUES = [[0, 1], ['No', 'Yes']] for feature in CATEGORICAL_FEATURES: if len(CATEGORICAL_FEATURE_VALUES[feature][0]) > 2: onehot_feature = pd.get_dummies(features[feature]) feature_index = features.columns.get_loc(feature) features.drop(feature, axis=1, inplace=True) onehot_feature.columns = [f'{feature}={feature_value}' for feature_value in onehot_feature.columns] for colname in onehot_feature.columns[::-1]: features.insert(feature_index, colname, onehot_feature[colname]) features.head() x = features.values[:,1:].astype(int) y = labels.values[:,-1].astype(int) print('x =\n', x) print('y =\n', y) stratified_kflod_validator = sklearn.model_selection.StratifiedKFold(n_splits=5, shuffle=True) stratified_kflod_validator ``` # Decision Trees ``` tree_mean_acc = 0 tree_score_df = pd.DataFrame(columns = ['Fold', 
'Accuracy', 'Precision', 'Recall']) for fold_ind, (train_indices, test_indices) in enumerate(stratified_kflod_validator.split(x, y), 1): x_train, x_test = x[train_indices], x[test_indices] y_train, y_test = y[train_indices], y[test_indices] dec_tree = sklearn.tree.DecisionTreeClassifier(min_samples_split = 5) dec_tree.fit(x_train, y_train) acc = dec_tree.score(x_test, y_test) tree_mean_acc += acc y_pred = dec_tree.predict(x_test) precision = sklearn.metrics.precision_score(y_test, y_pred) recall = sklearn.metrics.recall_score(y_test, y_pred) tree_score_df.loc[fold_ind] = [f'{fold_ind}', f'{acc*100:.2f} %', f'{precision*100:.2f} %', f'{recall*100:.2f} %'] tree_plot_data = sklearn.tree.export_graphviz(dec_tree, out_file = None, feature_names = features.columns[1:], class_names = [f'{labels.columns[1]}={label_value}' for label_value in LABEL_VALUES[1]], filled = True, rounded = True, special_characters = True) graph = graphviz.Source(tree_plot_data) graph.render(f'Fold {fold_ind}') next_ind = len(tree_score_df) + 1 mean_acc = tree_score_df['Accuracy'].apply(lambda n: float(n[:-2])).mean() mean_prec = tree_score_df['Precision'].apply(lambda n: float(n[:-2])).mean() mean_rec = tree_score_df['Recall'].apply(lambda n: float(n[:-2])).mean() tree_score_df.loc[next_ind] = ['Avg', f'{mean_acc:.2f} %', f'{mean_prec:.2f} %', f'{mean_rec:.2f} %'] tree_score_df ``` # KNN ``` # TODO Normalize knn_mean_score_df = pd.DataFrame(columns = ['k', 'Avg. Accuracy', 'Avg. Precision', 'Avg. Recall']) normalized_x = sklearn.preprocessing.normalize(x) # No improvement over un-normalized data. 
mean_accs = [] for k in list(range(1, 10)) + [math.ceil(len(features) * step) for step in [0.1, 0.2, 0.3, 0.4, 0.5]]: knn_score_df = pd.DataFrame(columns = ['Fold', 'Accuracy', 'Precision', 'Recall']) mean_acc = 0 for fold_ind, (train_indices, test_indices) in enumerate(stratified_kflod_validator.split(x, y), 1): x_train, x_test = normalized_x[train_indices], normalized_x[test_indices] y_train, y_test = y[train_indices], y[test_indices] knn = sklearn.neighbors.KNeighborsClassifier(n_neighbors = k) knn.fit(x_train, y_train) acc = knn.score(x_test, y_test) mean_acc += acc y_pred = knn.predict(x_test) precision = sklearn.metrics.precision_score(y_test, y_pred) recall = sklearn.metrics.recall_score(y_test, y_pred) knn_score_df.loc[fold_ind] = [f'{fold_ind}', f'{acc*100:.2f} %', f'{precision*100:.2f} %', f'{recall*100:.2f} %'] next_ind = len(knn_score_df) + 1 mean_acc = knn_score_df['Accuracy'].apply(lambda n: float(n[:-2])).mean() mean_prec = knn_score_df['Precision'].apply(lambda n: float(n[:-2])).mean() mean_rec = knn_score_df['Recall'].apply(lambda n: float(n[:-2])).mean() # Use the fold means here, not the last fold's scores knn_score_df.loc[next_ind] = ['Avg', f'{mean_acc:.2f} %', f'{mean_prec:.2f} %', f'{mean_rec:.2f} %'] knn_mean_score_df.loc[k] = [k, f'{mean_acc:.2f} %', f'{mean_prec:.2f} %', f'{mean_rec:.2f} %'] # print(f'k = {k}') # print(knn_score_df) # print() best_accuracy = knn_mean_score_df.sort_values(by = ['Avg. Accuracy']).iloc[-1] print('Best avg. accuracy is', best_accuracy['Avg. Accuracy'], 'for k =', best_accuracy['k'], '.') knn_mean_score_df.sort_values(by = ['Avg. Accuracy']) ``` # Naive Bayes ``` nb_classifier_types = [sklearn.naive_bayes.GaussianNB, sklearn.naive_bayes.MultinomialNB, sklearn.naive_bayes.ComplementNB, sklearn.naive_bayes.BernoulliNB] nb_mean_score_df = pd.DataFrame(columns = ['Type', 'Avg. Accuracy', 'Avg. Precision', 'Avg.
Recall']) for nb_classifier_type in nb_classifier_types: nb_score_df = pd.DataFrame(columns = ['Fold', 'Accuracy', 'Precision', 'Recall']) mean_acc = 0 for fold_ind, (train_indices, test_indices) in enumerate(stratified_kflod_validator.split(x, y), 1): x_train, x_test = x[train_indices], x[test_indices] y_train, y_test = y[train_indices], y[test_indices] nb = nb_classifier_type() nb.fit(x_train, y_train) acc = nb.score(x_test, y_test) mean_acc += acc y_pred = nb.predict(x_test) precision = sklearn.metrics.precision_score(y_test, y_pred) recall = sklearn.metrics.recall_score(y_test, y_pred) nb_score_df.loc[fold_ind] = [f'{fold_ind}', f'{acc*100:.2f} %', f'{precision*100:.2f} %', f'{recall*100:.2f} %'] next_ind = len(nb_score_df) + 1 mean_acc = nb_score_df['Accuracy'].apply(lambda n: float(n[:-2])).mean() mean_prec = nb_score_df['Precision'].apply(lambda n: float(n[:-2])).mean() mean_rec = nb_score_df['Recall'].apply(lambda n: float(n[:-2])).mean() nb_score_df.loc[next_ind] = ['Avg', f'{mean_acc:.2f} %', f'{mean_prec:.2f} %', f'{mean_rec:.2f} %'] nb_mean_score_df.loc[len(nb_mean_score_df) + 1] = [nb_classifier_type.__name__, f'{mean_acc:.2f} %', f'{mean_prec:.2f} %', f'{mean_rec:.2f} %'] print(nb_classifier_type.__name__) print() print(nb_score_df) print() nb_mean_score_df.sort_values(by = ['Avg. Accuracy']) ``` # SVM ``` svm_classifier_type = sklearn.svm.SVC # Avg. # Args -> acc / prec / rec # # kernel: linear -> 78.89 % 78.31 % 73.75 % # kernel: linear, C: 0.1 -> 84.44 % 88.54 % 75.00 % # # * No improvement for larger C. # # kernel: poly, max_iter: 1 -> 46.67 % 34.67 % 21.25 % # kernel: poly, max_iter: 10 -> 57.22 % 51.27 % 66.25 % # kernel: poly, max_iter: 100 -> 61.67 % 60.18 % 40.00 % # kernel: poly, max_iter: 100, coef0: 1 -> 62.22 % 62.19 % 41.25 % # # * No improvement for more iters. # * No improvement for larger C. # * No improvement for higher degree. # * No improvement for different coef0. 
# # kernel: rbf, max_iter: 10 -> 48.89 % 46.07 % 72.50 % # kernel: rbf, max_iter: 100 -> 60.00 % 74.00 % 17.50 % # kernel: rbf, max_iter: 1000 -> 60.56 % 78.33 % 15.00 % args = {'kernel': 'linear', 'C': 0.1} svm_score_df = pd.DataFrame(columns = ['Type', 'Accuracy', 'Precision', 'Recall']) # normalized_x = sklearn.preprocessing.normalize(x) mean_acc = 0 for fold_ind, (train_indices, test_indices) in enumerate(stratified_kflod_validator.split(x, y), 1): x_train, x_test = x[train_indices], x[test_indices] y_train, y_test = y[train_indices], y[test_indices] svm = svm_classifier_type(**args, gamma = 'scale', cache_size = 256) svm.fit(x_train, y_train) acc = svm.score(x_test, y_test) mean_acc += acc y_pred = svm.predict(x_test) precision = sklearn.metrics.precision_score(y_test, y_pred) recall = sklearn.metrics.recall_score(y_test, y_pred) svm_score_df.loc[fold_ind] = [f'{fold_ind}', f'{acc*100:.2f} %', f'{precision*100:.2f} %', f'{recall*100:.2f} %'] next_ind = len(svm_score_df) + 1 mean_acc = svm_score_df['Accuracy'].apply(lambda n: float(n[:-2])).mean() mean_prec = svm_score_df['Precision'].apply(lambda n: float(n[:-2])).mean() mean_rec = svm_score_df['Recall'].apply(lambda n: float(n[:-2])).mean() svm_score_df.loc[next_ind] = ['Avg', f'{mean_acc:.2f} %', f'{mean_prec:.2f} %', f'{mean_rec:.2f} %'] print(svm_score_df) ``` # Shallow Neural Nets ## Import deps ``` import pandas as pd from sklearn.model_selection import train_test_split import keras from keras.models import Sequential from keras.layers import Dense, Dropout, Activation, Flatten from keras.layers import Conv2D, MaxPooling2D from keras.layers import Input, Dense, Conv2D, MaxPooling2D, Dropout, Flatten, BatchNormalization, LeakyReLU ``` ## Import data ``` features = pd.read_csv('train_values.csv') labels = pd.read_csv('train_labels.csv') print(labels.head()) features.head() FEATURES = ['slope_of_peak_exercise_st_segment', 'thal', 'resting_blood_pressure', 'chest_pain_type', 'num_major_vessels', 
'fasting_blood_sugar_gt_120_mg_per_dl', 'resting_ekg_results', 'serum_cholesterol_mg_per_dl', 'oldpeak_eq_st_depression', 'sex', 'age', 'max_heart_rate_achieved', 'exercise_induced_angina'] LABEL = 'heart_disease_present' EXPLANATIONS = {'slope_of_peak_exercise_st_segment' : 'Quality of Blood Flow to the Heart', 'thal' : 'Thallium Stress Test Measuring Blood Flow to the Heart', 'resting_blood_pressure' : 'Resting Blood Pressure', 'chest_pain_type' : 'Chest Pain Type (1-4)', 'num_major_vessels' : 'Major Vessels (0-3) Colored by Flourosopy', 'fasting_blood_sugar_gt_120_mg_per_dl' : 'Fasting Blood Sugar > 120 mg/dl', 'resting_ekg_results' : 'Resting Electrocardiographic Results (0-2)', 'serum_cholesterol_mg_per_dl' : 'Serum Cholesterol in mg/dl', 'oldpeak_eq_st_depression' : 'Exercise vs. Rest\nA Measure of Abnormality in Electrocardiograms', 'age' : 'Age (years)', 'sex' : 'Sex (m/f)', 'max_heart_rate_achieved' : 'Maximum Heart Rate Achieved (bpm)', 'exercise_induced_angina' : 'Exercise-Induced Chest Pain (yes/no)'} NUMERICAL_FEATURES = ['slope_of_peak_exercise_st_segment', 'resting_blood_pressure', 'chest_pain_type', 'num_major_vessels', 'resting_ekg_results', 'serum_cholesterol_mg_per_dl', 'oldpeak_eq_st_depression', 'age', 'max_heart_rate_achieved'] CATEGORICAL_FEATURES = ['thal', 'fasting_blood_sugar_gt_120_mg_per_dl', 'sex', 'exercise_induced_angina'] CATEGORICAL_FEATURE_VALUES = {'thal' : [[0, 1, 2], ['Normal', 'Fixed Defect', 'Reversible Defect']], 'fasting_blood_sugar_gt_120_mg_per_dl' : [[0, 1], ['No', 'Yes']], 'sex' : [[0, 1], ['F', 'M']], 'exercise_induced_angina' : [[0, 1], ['No', 'Yes']]} SEMI_CATEGORICAL_FEATURES = ['slope_of_peak_exercise_st_segment', 'chest_pain_type', 'num_major_vessels', 'resting_ekg_results'] SEMI_CATEGORICAL_FEATURE_LIMITS = {'slope_of_peak_exercise_st_segment' : [1, 3], 'chest_pain_type' : [1, 4], 'num_major_vessels' : [0, 3], 'resting_ekg_results' : [0, 2]} LABEL_VALUES = [[0, 1], ['No', 'Yes']] for feature in 
CATEGORICAL_FEATURES: if len(CATEGORICAL_FEATURE_VALUES[feature][0]) > 2: onehot_feature = pd.get_dummies(features[feature]) feature_index = features.columns.get_loc(feature) features.drop(feature, axis=1, inplace=True) onehot_feature.columns = ['%s=%s' % (feature, feature_value) for feature_value in onehot_feature.columns] for colname in onehot_feature.columns[::-1]: features.insert(feature_index, colname, onehot_feature[colname]) x = features.values[:,1:].astype(int) y = labels.values[:,-1].astype(int) print('x =\n', x) print('y =\n', y) # for fold_ind, (train_indices, test_indices) in enumerate(stratified_kflod_validator.split(x, y), 1): # x_train, x_test = x[train_indices], x[test_indices] # y_train, y_test = y[train_indices], y[test_indices] x_train, x_test, y_train, y_test = \ train_test_split(x, y, test_size=0.2, random_state=42) print(x_train.shape, x_test.shape) print(y_train.shape, y_test.shape) ``` ## Define model ``` input_shape = (1,15) num_classes = 2 print(x.shape) print(y.shape) print(x[:1]) print(y[:1]) ``` ### Architecture 0 - Inflating Dense 120-225, 0.5 Dropout, Batch Norm, Sigmoid Classification ``` arch_cnt = 'arch-0-3' model = Sequential() model.add( Dense(120, input_dim=15, kernel_initializer='normal', # kernel_regularizer=keras.regularizers.l2(0.001), # pierd 0.2 acc activation='relu')) model.add(Dropout(0.5)) model.add(Dense(225, input_dim=15, kernel_initializer='normal', activation='relu')) # model.add(LeakyReLU(alpha=0.1)) model.add(BatchNormalization(axis = 1)) model.add(Dense(1, kernel_initializer='normal', activation='sigmoid')) model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy']) model.summary() %%time # earlystop_cb = keras.callbacks.EarlyStopping( # monitor='val_loss', # patience=5, restore_best_weights=True, # verbose=1) reduce_lr_cb = keras.callbacks.ReduceLROnPlateau( monitor='val_loss', factor=0.05, patience=5, min_lr=0.001, verbose=1) # es_cb = keras.callbacks.EarlyStopping( # monitor='val_loss', # 
min_delta=0.1, # patience=7, # verbose=1, # mode='auto' # ) # 'restore_best_weights' in dir(keras.callbacks.EarlyStopping()) # FALSE = library is not up-to-date tb_cb = keras.callbacks.TensorBoard(log_dir='./tensorboard/%s' % arch_cnt, histogram_freq=0, write_graph=True, write_images=True) epochs = 50 batch_size = 32 model.fit( x_train, y_train, batch_size=batch_size, epochs=epochs, verbose=1, shuffle=False, validation_data=(x_test, y_test), callbacks=[reduce_lr_cb, tb_cb] # es_cb is commented out above, so it cannot be passed here ) score = model.evaluate(x_test, y_test, verbose=0) print('Test loss:', score[0]) print('Test accuracy:', score[1]) ``` ### Architecture 1 - **`Deflating Dense 225-112`**, 0.5 Dropout, Batch Norm, Sigmoid Classification ``` arch_cnt = 'arch-1' model = Sequential() model.add( Dense(225, input_dim=15, kernel_initializer='normal', # kernel_regularizer=keras.regularizers.l2(0.001), # loses 0.2 acc activation='relu')) model.add(Dropout(0.5)) model.add(Dense(112, input_dim=15, kernel_initializer='normal', activation='relu')) # model.add(LeakyReLU(alpha=0.1)) model.add(BatchNormalization(axis = 1)) model.add(Dense(1, kernel_initializer='normal', activation='sigmoid')) model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy']) model.summary() %%time # earlystop_cb = keras.callbacks.EarlyStopping( # monitor='val_loss', # patience=5, restore_best_weights=True, # verbose=1) reduce_lr_cb = keras.callbacks.ReduceLROnPlateau( monitor='val_loss', factor=0.05, patience=7, min_lr=0.001, verbose=1) tb_cb = keras.callbacks.TensorBoard(log_dir='./tensorboard/%s' % arch_cnt, histogram_freq=0, write_graph=True, write_images=True) epochs = 50 batch_size = 32 model.fit( x_train, y_train, batch_size=batch_size, epochs=epochs, verbose=1, shuffle=False, validation_data=(x_test, y_test), callbacks=[reduce_lr_cb, tb_cb] # callbacks=[earlystop_cb, reduce_lr_cb] ) score = model.evaluate(x_test, y_test, verbose=0) print('Test loss:', score[0]) print('Test accuracy:', score[1]) ``` ### Architecture
2 - Deflating Dense 225-112, 0.5 Dropout, Batch Norm, Sigmoid Classification, **`HE Initialization`** ``` arch_cnt = 'arch-2' model = Sequential() model.add( Dense(225, input_dim=15, kernel_initializer='he_uniform', kernel_regularizer=keras.regularizers.l2(0.001), # pierd 0.2 acc activation='relu')) model.add(Dropout(0.5)) model.add(Dense(112, input_dim=15, kernel_initializer='he_uniform', activation='relu')) # model.add(LeakyReLU(alpha=0.1)) model.add(BatchNormalization(axis = 1)) model.add(Dense(1, kernel_initializer='normal', activation='sigmoid')) model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy']) model.summary() %%time # earlystop_cb = keras.callbacks.EarlyStopping( # monitor='val_loss', # patience=5, restore_best_weights=True, # verbose=1) reduce_lr_cb = keras.callbacks.ReduceLROnPlateau( monitor='val_loss', factor=0.05, patience=7, min_lr=0.001, verbose=1) tb_cb = keras.callbacks.TensorBoard(log_dir='./tensorboard/%s' % arch_cnt, histogram_freq=0, write_graph=True, write_images=True) epochs = 50 batch_size = 32 model.fit( x_train, y_train, batch_size=batch_size, epochs=epochs, verbose=1, shuffle=False, validation_data=(x_test, y_test), callbacks=[reduce_lr_cb, tb_cb] # callbacks=[earlystop_cb, reduce_lr_cb] ) score = model.evaluate(x_test, y_test, verbose=0) print('Test loss:', score[0]) print('Test accuracy:', score[1]) ``` ### Architecture 3 - Deflating Dense 225-112, 0.5 Dropout, Batch Norm, Sigmoid Classification, **`L2 = 1e^-4`** ``` arch_cnt = 'arch-3-4' model = Sequential() model.add( Dense(225, input_dim=15, kernel_initializer='normal', kernel_regularizer=keras.regularizers.l2(0.0001), # pierd 0.2 acc activation='relu')) model.add(Dropout(0.5)) model.add( Dense(112, input_dim=15, kernel_initializer='normal', kernel_regularizer=keras.regularizers.l2(0.0001), # pierd 0.2 acc activation='relu')) # model.add(LeakyReLU(alpha=0.1)) model.add(BatchNormalization(axis = 1)) model.add(Dense(1, kernel_initializer='normal', 
activation='sigmoid')) model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy']) model.summary() %%time # earlystop_cb = keras.callbacks.EarlyStopping( # monitor='val_loss', # patience=5, restore_best_weights=True, # verbose=1) reduce_lr_cb = keras.callbacks.ReduceLROnPlateau( monitor='val_loss', factor=0.05, patience=7, min_lr=0.001, verbose=1) tb_cb = keras.callbacks.TensorBoard(log_dir='./tensorboard/%s' % arch_cnt, histogram_freq=0, write_graph=True, write_images=True) epochs = 50 batch_size = 32 model.fit( x_train, y_train, batch_size=batch_size, epochs=epochs, verbose=1, shuffle=False, validation_data=(x_test, y_test), callbacks=[reduce_lr_cb, tb_cb] # callbacks=[earlystop_cb, reduce_lr_cb] ) score = model.evaluate(x_test, y_test, verbose=0) print('Test loss:', score[0]) print('Test accuracy:', score[1]) ``` ### Architecture 3 - Deflating Dense 225-112, 0.5 Dropout, Batch Norm, Sigmoid Classification, **`L2 = 1e^-3`** ``` arch_cnt = 'arch-3-3' model = Sequential() model.add( Dense(225, input_dim=15, kernel_initializer='normal', kernel_regularizer=keras.regularizers.l2(0.001), # pierd 0.2 acc activation='relu')) model.add(Dropout(0.5)) model.add( Dense(112, input_dim=15, kernel_initializer='normal', kernel_regularizer=keras.regularizers.l2(0.001), # pierd 0.2 acc activation='relu')) # model.add(LeakyReLU(alpha=0.1)) model.add(BatchNormalization(axis = 1)) model.add(Dense(1, kernel_initializer='normal', activation='sigmoid')) model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy']) model.summary() %%time # earlystop_cb = keras.callbacks.EarlyStopping( # monitor='val_loss', # patience=5, restore_best_weights=True, # verbose=1) reduce_lr_cb = keras.callbacks.ReduceLROnPlateau( monitor='val_loss', factor=0.05, patience=7, min_lr=0.001, verbose=1) tb_cb = keras.callbacks.TensorBoard(log_dir='./tensorboard/%s' % arch_cnt, histogram_freq=0, write_graph=True, write_images=True) epochs = 50 batch_size = 32 model.fit( 
x_train, y_train, batch_size=batch_size, epochs=epochs, verbose=1, shuffle=False, validation_data=(x_test, y_test), callbacks=[reduce_lr_cb, tb_cb] # callbacks=[earlystop_cb, reduce_lr_cb] ) score = model.evaluate(x_test, y_test, verbose=0) print('Test loss:', score[0]) print('Test accuracy:', score[1]) ``` ### Architecture 3 - Deflating Dense 225-112, 0.5 Dropout, Batch Norm, Sigmoid Classification, **`L2 = 1e^-2`** ``` arch_cnt = 'arch-3-2' model = Sequential() model.add( Dense(225, input_dim=15, kernel_initializer='normal', kernel_regularizer=keras.regularizers.l2(0.01), # pierd 0.2 acc activation='relu')) model.add(Dropout(0.5)) model.add( Dense(112, input_dim=15, kernel_initializer='normal', kernel_regularizer=keras.regularizers.l2(0.01), # pierd 0.2 acc activation='relu')) # model.add(LeakyReLU(alpha=0.1)) model.add(BatchNormalization(axis = 1)) model.add(Dense(1, kernel_initializer='normal', activation='sigmoid')) model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy']) model.summary() %%time # earlystop_cb = keras.callbacks.EarlyStopping( # monitor='val_loss', # patience=5, restore_best_weights=True, # verbose=1) reduce_lr_cb = keras.callbacks.ReduceLROnPlateau( monitor='val_loss', factor=0.05, patience=7, min_lr=0.001, verbose=1) tb_cb = keras.callbacks.TensorBoard(log_dir='./tensorboard/%s' % arch_cnt, histogram_freq=0, write_graph=True, write_images=True) epochs = 50 batch_size = 32 model.fit( x_train, y_train, batch_size=batch_size, epochs=epochs, verbose=1, shuffle=False, validation_data=(x_test, y_test), callbacks=[reduce_lr_cb, tb_cb] # callbacks=[earlystop_cb, reduce_lr_cb] ) score = model.evaluate(x_test, y_test, verbose=0) print('Test loss:', score[0]) print('Test accuracy:', score[1]) ``` ### Architecture 3 - Deflating Dense 225-112, 0.5 Dropout, Batch Norm, Sigmoid Classification, **`L2 = 1e^-1`** ``` arch_cnt = 'arch-3-1' model = Sequential() model.add( Dense(225, input_dim=15, kernel_initializer='normal', 
kernel_regularizer=keras.regularizers.l2(0.1), # loses 0.2 acc activation='relu')) model.add(Dropout(0.5)) model.add( Dense(112, input_dim=15, kernel_initializer='normal', kernel_regularizer=keras.regularizers.l2(0.1), # loses 0.2 acc activation='relu')) # model.add(LeakyReLU(alpha=0.1)) model.add(BatchNormalization(axis = 1)) model.add(Dense(1, kernel_initializer='normal', activation='sigmoid')) model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy']) model.summary() %%time # earlystop_cb = keras.callbacks.EarlyStopping( # monitor='val_loss', # patience=5, restore_best_weights=True, # verbose=1) reduce_lr_cb = keras.callbacks.ReduceLROnPlateau( monitor='val_loss', factor=0.05, patience=7, min_lr=0.001, verbose=1) tb_cb = keras.callbacks.TensorBoard(log_dir='./tensorboard/%s' % arch_cnt, histogram_freq=0, write_graph=True, write_images=True) epochs = 50 batch_size = 32 model.fit( x_train, y_train, batch_size=batch_size, epochs=epochs, verbose=1, shuffle=False, validation_data=(x_test, y_test), callbacks=[reduce_lr_cb, tb_cb] # callbacks=[earlystop_cb, reduce_lr_cb] ) score = model.evaluate(x_test, y_test, verbose=0) print('Test loss:', score[0]) print('Test accuracy:', score[1]) ``` --- # Ensemble Methods ``` import matplotlib.pyplot as plt %matplotlib inline ``` ## Bagging Strategies ### Random Forests ``` from sklearn.ensemble import RandomForestClassifier from sklearn.metrics import accuracy_score # x_train, x_test, y_train, y_test clf = RandomForestClassifier(n_estimators=100, max_depth=2, random_state=0) clf.fit(x_train, y_train) print(clf.feature_importances_) print(clf.predict(x_test)) # make predictions for test data y_pred = clf.predict(x_test) predictions = [round(value) for value in y_pred] # evaluate predictions accuracy = accuracy_score(y_test, predictions) print("Accuracy: %.2f%%" % (accuracy * 100.0)) ``` ### ExtraTrees ``` from sklearn.ensemble import ExtraTreesClassifier # x_train, x_test, y_train, y_test clf = ExtraTreesClassifier(n_estimators=100, max_depth=2,
random_state=0) clf.fit(x_train, y_train) print(clf.feature_importances_) print(clf.predict(x_test)) # make predictions for test data y_pred = clf.predict(x_test) predictions = [round(value) for value in y_pred] # evaluate predictions from sklearn.metrics import accuracy_score accuracy = accuracy_score(y_test, predictions) print("Accuracy: %.2f%%" % (accuracy * 100.0)) from mlxtend.plotting import plot_learning_curves fig = plt.figure(figsize=(10,5)) plot_learning_curves(x_train, y_train, x_test, y_test, clf) plt.show() ``` ## Stacking Strategies ### SuperLearner ## Boosting Strategies ### xgboost ``` # import xgboost as xgb from xgboost import XGBClassifier from sklearn.metrics import accuracy_score # x_train, x_test, y_train, y_test model = XGBClassifier() model.fit(x_train, y_train) print(model) # make predictions for test data y_pred = model.predict(x_test) predictions = [round(value) for value in y_pred] # evaluate predictions accuracy = accuracy_score(y_test, predictions) print("Accuracy: %.2f%%" % (accuracy * 100.0)) ``` --- # Bibliography + https://medium.com/@datalesdatales/why-you-should-be-plotting-learning-curves-in-your-next-machine-learning-project-221bae60c53 + https://slideplayer.com/slide/4684120/15/images/6/Outline+Bias%2FVariance+Tradeoff+Ensemble+methods+that+minimize+variance.jpg + https://slideplayer.com/slide/4684120/ + plot confusion matrix + http://rasbt.github.io/mlxtend/user_guide/plotting/plot_learning_curves/ + https://machinelearningmastery.com/roc-curves-and-precision-recall-curves-for-classification-in-python/ + http://docs.h2o.ai/h2o-tutorials/latest-stable/tutorials/ensembles-stacking/index.html ---
<a href="https://colab.research.google.com/github/mikvikpik/DS-Unit-1-Sprint-2-Data-Wrangling-and-Storytelling/blob/master/module1-join-and-reshape-data/LS_DS_121_Join_and_Reshape_Data.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> ``` # used for machine learning when doing suggestions for purchasing ``` _Lambda School Data Science_ # Join and Reshape datasets Objectives - concatenate data with pandas - merge data with pandas - understand tidy data formatting - melt and pivot data with pandas Links - [Pandas Cheat Sheet](https://github.com/pandas-dev/pandas/blob/master/doc/cheatsheet/Pandas_Cheat_Sheet.pdf) - [Tidy Data](https://en.wikipedia.org/wiki/Tidy_data) - Combine Data Sets: Standard Joins - Tidy Data - Reshaping Data - Python Data Science Handbook - [Chapter 3.6](https://jakevdp.github.io/PythonDataScienceHandbook/03.06-concat-and-append.html), Combining Datasets: Concat and Append - [Chapter 3.7](https://jakevdp.github.io/PythonDataScienceHandbook/03.07-merge-and-join.html), Combining Datasets: Merge and Join - [Chapter 3.8](https://jakevdp.github.io/PythonDataScienceHandbook/03.08-aggregation-and-grouping.html), Aggregation and Grouping - [Chapter 3.9](https://jakevdp.github.io/PythonDataScienceHandbook/03.09-pivot-tables.html), Pivot Tables Reference - Pandas Documentation: [Reshaping and Pivot Tables](https://pandas.pydata.org/pandas-docs/stable/reshaping.html) - Modern Pandas, Part 5: [Tidy Data](https://tomaugspurger.github.io/modern-5-tidy.html) ## Download data We’ll work with a dataset of [3 Million Instacart Orders, Open Sourced](https://tech.instacart.com/3-million-instacart-orders-open-sourced-d40d29ead6f2)! 
``` !wget https://s3.amazonaws.com/instacart-datasets/instacart_online_grocery_shopping_2017_05_01.tar.gz !tar --gunzip --extract --verbose --file=instacart_online_grocery_shopping_2017_05_01.tar.gz %cd instacart_2017_05_01 !ls -lh *.csv ``` # Join Datasets ## Goal: Reproduce this example The first two orders for user id 1: ``` from IPython.display import display, Image url = 'https://cdn-images-1.medium.com/max/1600/1*vYGFQCafJtGBBX5mbl0xyw.png' example = Image(url=url, width=600) display(example) #important to refer to what we are creating ``` ## Load data Here's a list of all six CSV filenames: ``` !ls -lh *.csv ``` For each CSV - Load it with pandas - Look at the dataframe's shape - Look at its head (first rows) - `display(example)` - Which columns does it have in common with the example we want to reproduce? ``` import pandas as pd ``` ### aisles ``` aisles = pd.read_csv('aisles.csv') print(aisles.shape) aisles.head() display(example) # aisles.csv not found or needed ``` ### departments ``` !head departments.csv departments = pd.read_csv('departments.csv') print(departments.shape) departments.head() display(example) # department.csv not needed either because not found ``` ### order_products__prior ``` # order_products__prior has a double underscore before 'prior' order_products__prior = pd.read_csv('order_products__prior.csv') print(order_products__prior.shape) order_products__prior.head() display(example) # need order_id, product_id, add_to_cart_order, from order_products__prior !free -m #shows memory used in bang notation order_products__prior.groupby('order_id') #creates a new dataframe, groupby object #order_products__prior.groupby('order_id')['product_id'].count() #shows total count of each item #.mean() shows average count of ordered item ``` ### order_products__train ``` order_products__train = pd.read_csv('order_products__train.csv') print(order_products__train.shape) order_products__train.head() display(example) # need order_id, product_id, 
add_to_cart_order, from order_products__train # IMPORTANT - same data from order_products__prior # beware of overwrite ``` ### orders ``` orders = pd.read_csv('orders.csv') print(orders.shape) orders.head() # best dataframe to start with display(example) # has order_id, order_number, order_dow, order_hour_of_day, from orders # useful to match up for merge as index to match rows on ``` ### products ``` products = pd.read_csv('products.csv') print(products.shape) products.head() display(example) # need product_id, product_name, from products ``` ## Concatenate order_products__prior and order_products__train ``` order_products__prior.shape order_products__train.shape # pd.concat is used to stack one dataframe onto another order_products = pd.concat([order_products__prior, order_products__train]) order_products.shape # assert is used as a simple sanity check # refactoring is to make the dataframe more readable and precise # no error means True statement assert len(order_products) == len(order_products__prior) + len(order_products__train) # Filter order products to get as close as we can to example table # condition used to filter # this shows True/False for condition order_products['order_id'] == 2539329 # shows data of what condition is True order_products[order_products['order_id'] == 2539329] import numpy as np order_products[np.logical_or((order_products['order_id'] == 2539329), (order_products['order_id'] == 2398795))] #condition = (orders['user_id'] == 1) & (orders['order_number'] <= 2) # #columns = [ # 'user_id', # 'order_id', # 'order_number', # 'order_dow', # 'order_hour_of_day' #] # #subset = orders.loc[condition, columns] #subset ``` ## Get a subset of orders — the first two orders for user id 1 From `orders` dataframe: - user_id - order_id - order_number - order_dow - order_hour_of_day ``` ``` ## Merge dataframes Merge the subset from `orders` with columns from `order_products` ``` # merging dataframes is the most important topic of today help(pd.merge) ``` Merge with columns
from `products` ``` ``` # Reshape Datasets ## Why reshape data? #### Some libraries prefer data in different formats For example, the Seaborn data visualization library prefers data in "Tidy" format often (but not always). > "[Seaborn will be most powerful when your datasets have a particular organization.](https://seaborn.pydata.org/introduction.html#organizing-datasets) This format is alternately called “long-form” or “tidy” data and is described in detail by Hadley Wickham. The rules can be simply stated: > - Each variable is a column - Each observation is a row > A helpful mindset for determining whether your data are tidy is to think backwards from the plot you want to draw. From this perspective, a “variable” is something that will be assigned a role in the plot." #### Data science is often about putting square pegs in round holes Here's an inspiring [video clip from _Apollo 13_](https://www.youtube.com/watch?v=ry55--J4_VQ): “Invent a way to put a square peg in a round hole.” It's a good metaphor for data wrangling! ## Hadley Wickham's Examples From his paper, [Tidy Data](http://vita.had.co.nz/papers/tidy-data.html) ``` %matplotlib inline import pandas as pd import numpy as np import seaborn as sns table1 = pd.DataFrame( [[np.nan, 2], [16, 11], [3, 1]], index=['John Smith', 'Jane Doe', 'Mary Johnson'], columns=['treatmenta', 'treatmentb']) table2 = table1.T ``` "Table 1 provides some data about an imaginary experiment in a format commonly seen in the wild. The table has two columns and three rows, and both rows and columns are labelled." ``` table1 ``` "There are many ways to structure the same underlying data. Table 2 shows the same data as Table 1, but the rows and columns have been transposed. The data is the same, but the layout is different." ``` table2 ``` "Table 3 reorganises Table 1 to make the values, variables and observations more clear. Table 3 is the tidy version of Table 1. 
Each row represents an observation, the result of one treatment on one person, and each column is a variable." | name | trt | result | |--------------|-----|--------| | John Smith | a | - | | Jane Doe | a | 16 | | Mary Johnson | a | 3 | | John Smith | b | 2 | | Jane Doe | b | 11 | | Mary Johnson | b | 1 | ## Table 1 --> Tidy We can use the pandas `melt` function to reshape Table 1 into Tidy format. ``` table1 = table1.reset_index() table1 # id_vars is the unit of observation # when calling melt pick id_vars very carefully tidy = table1.melt(id_vars='index') tidy tidy = tidy.rename(columns={ 'index': 'name', 'variable': 'trt', 'value': 'result' }) tidy tidy.trt = tidy.trt.str.replace('treatment', '') tidy ``` ## Table 2 --> Tidy ``` tidy = table2.reset_index().melt(id_vars = 'index') tidy tidy = tidy[['variable', 'index', 'value']] tidy tidy = tidy.rename(columns = { 'variable': 'name', 'index': 'trt', 'value': 'result' }) tidy tidy.trt = tidy.trt.str.replace('treatment', '') tidy ``` ## Tidy --> Table 1 The `pivot_table` function is the inverse of `melt`. ``` tidy = tidy.pivot_table(index='name', columns='trt', values='result') tidy ``` ## Tidy --> Table 2 ``` tidy = tidy.T tidy ``` # Seaborn example The rules can be simply stated: - Each variable is a column - Each observation is a row A helpful mindset for determining whether your data are tidy is to think backwards from the plot you want to draw. From this perspective, a “variable” is something that will be assigned a role in the plot." 
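The claim earlier that `pivot_table` is the inverse of `melt` can be checked on a small self-contained copy of `table1` (rebuilt here so the cell runs on its own):

```python
import numpy as np
import pandas as pd

# Same shape as Table 1 from the lesson
table1 = pd.DataFrame(
    [[np.nan, 2], [16, 11], [3, 1]],
    index=["John Smith", "Jane Doe", "Mary Johnson"],
    columns=["treatmenta", "treatmentb"],
)

# melt: wide -> tidy (one row per name/treatment pair)
tidy = table1.reset_index().melt(id_vars="index")

# pivot_table: tidy -> wide again, recovering the original layout
wide = tidy.pivot_table(index="index", columns="variable", values="value")
print(wide)
```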
``` sns.catplot(x='trt', y='result', col='name', kind='bar', data=tidy, height=2); ``` ## Now with Instacart data ``` products = pd.read_csv('products.csv') order_products = pd.concat([pd.read_csv('order_products__prior.csv'), pd.read_csv('order_products__train.csv')]) orders = pd.read_csv('orders.csv') ``` ## Goal: Reproduce part of this example Instead of a plot with 50 products, we'll just do two — the first products from each list - Half And Half Ultra Pasteurized - Half Baked Frozen Yogurt ``` from IPython.display import display, Image url = 'https://cdn-images-1.medium.com/max/1600/1*wKfV6OV-_1Ipwrl7AjjSuw.png' example = Image(url=url, width=600) display(example) ``` So, given a `product_name` we need to calculate its `order_hour_of_day` pattern. ## Subset and Merge One challenge of performing a merge on this data is that the `products` and `orders` datasets do not have any common columns that we can merge on. Due to this we will have to use the `order_products` dataset to provide the columns that we will use to perform the merge. ``` ``` ## 4 ways to reshape and plot ### 1. value_counts ``` ``` ### 2. crosstab ``` ``` ### 3. Pivot Table ``` ``` ### 4. melt ``` ``` # Assignment ## Join Data Section These are the top 10 most frequently ordered products. How many times was each ordered? 1. Banana 2. Bag of Organic Bananas 3. Organic Strawberries 4. Organic Baby Spinach 5. Organic Hass Avocado 6. Organic Avocado 7. Large Lemon 8. Strawberries 9. Limes 10. Organic Whole Milk First, write down which columns you need and which dataframes have them. Next, merge these into a single dataframe. Then, use pandas functions from the previous lesson to get the counts of the top 10 most frequently ordered products. ## Reshape Data Section - Replicate the lesson code - Complete the code cells we skipped near the beginning of the notebook - Table 2 --> Tidy - Tidy --> Table 2 - Load seaborn's `flights` dataset by running the cell below. 
Then create a pivot table showing the number of passengers by month and year. Use year for the index and month for the columns. You've done it right if you get 112 passengers for January 1949 and 432 passengers for December 1960. ``` # Join Data Section Assignment # Need 'product_name', 'product_id' from 'products' # Need products_list dataframe merged to orders # Then need counts of products_list frequency, i.e. how many times each was ordered. # creating products_list from products products_list = products[['product_name', 'product_id']] print(products_list.shape) products_list.head() # merge products list to orders by product_id products_list = pd.merge(products_list, order_products, how='left', on='product_id') print(products_list.shape) products_list.head(20) # create list of only shopping_list shopping_list = ['Banana', 'Bag of Organic Bananas', 'Organic Strawberries', 'Organic Baby Spinach', 'Organic Hass Avocado', 'Organic Avocado', 'Large Lemon', 'Strawberries', 'Limes', 'Organic Whole Milk'] # filter products_list by shopping_list products_list = products_list.drop(['add_to_cart_order','reordered'], axis=1) products_list.head() # created dataframe of only shopping_list items included products_list = products_list[products_list['product_name'].isin(shopping_list)] print(products_list.shape) products_list.head() # counts of each item ordered products_list['product_name'].value_counts() # Reshape Data Section # Load flights # Pivot with pivot_table # Use year for the index and month for the columns. # You've done it right if you get 112 passengers for January 1949 and 432 passengers for December 1960. flights = sns.load_dataset('flights') flights = flights.pivot_table(index='year', columns='month', values='passengers') flights ``` ## Join Data Stretch Challenge The [Instacart blog post](https://tech.instacart.com/3-million-instacart-orders-open-sourced-d40d29ead6f2) has a visualization of "**Popular products** purchased earliest in the day (green) and latest in the day (red)." 
The post says, > "We can also see the time of day that users purchase specific products. > Healthier snacks and staples tend to be purchased earlier in the day, whereas ice cream (especially Half Baked and The Tonight Dough) are far more popular when customers are ordering in the evening. > **In fact, of the top 25 latest ordered products, the first 24 are ice cream! The last one, of course, is a frozen pizza.**" Your challenge is to reproduce the list of the top 25 latest ordered popular products. We'll define "popular products" as products with more than 2,900 orders. ## Reshape Data Stretch Challenge _Try whatever sounds most interesting to you!_ - Replicate more of Instacart's visualization showing "Hour of Day Ordered" vs "Percent of Orders by Product" - Replicate parts of the other visualization from [Instacart's blog post](https://tech.instacart.com/3-million-instacart-orders-open-sourced-d40d29ead6f2), showing "Number of Purchases" vs "Percent Reorder Purchases" - Get the most recent order for each user in Instacart's dataset. This is a useful baseline when [predicting a user's next order](https://www.kaggle.com/c/instacart-market-basket-analysis) - Replicate parts of the blog post linked at the top of this notebook: [Modern Pandas, Part 5: Tidy Data](https://tomaugspurger.github.io/modern-5-tidy.html)
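The join stretch challenge reduces to a count-filter-groupby-sort pattern. A sketch on tiny invented data (the real version would run on the merged Instacart dataframe, with 2,900 orders as the popularity cutoff instead of 1):

```python
import pandas as pd

# Tiny made-up stand-in for the merged orders/products frame
df = pd.DataFrame({
    "product_name": ["ice cream", "ice cream", "banana",
                     "banana", "banana", "pizza"],
    "order_hour_of_day": [22, 23, 8, 9, 10, 21],
})

# "popular" = ordered more than N times (1 here; 2,900 in the challenge)
counts = df["product_name"].value_counts()
popular = counts[counts > 1].index

# mean order hour per popular product, latest first
latest = (
    df[df["product_name"].isin(popular)]
    .groupby("product_name")["order_hour_of_day"]
    .mean()
    .sort_values(ascending=False)
)
print(latest.head(25))
```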
# Super Resolution with PaddleGAN and OpenVINO This notebook demonstrates converting the RealSR (real-world super-resolution) model from [PaddlePaddle/PaddleGAN](https://github.com/PaddlePaddle/PaddleGAN) to OpenVINO's Intermediate Representation (IR) format, and shows inference results on both the PaddleGAN and IR models. For more information about the various PaddleGAN superresolution models, see [PaddleGAN's documentation](https://github.com/PaddlePaddle/PaddleGAN/blob/develop/docs/en_US/tutorials/single_image_super_resolution.md). For more information about RealSR, see the [research paper](https://openaccess.thecvf.com/content_CVPRW_2020/papers/w31/Ji_Real-World_Super-Resolution_via_Kernel_Estimation_and_Noise_Injection_CVPRW_2020_paper.pdf) from CVPR 2020. This notebook works best with small images (up to 800x600). ## Imports ``` import sys import time import warnings from pathlib import Path import cv2 import matplotlib.pyplot as plt import numpy as np import paddle from IPython.display import HTML, FileLink, ProgressBar, clear_output, display from IPython.display import Image as DisplayImage from PIL import Image from openvino.runtime import Core, PartialShape from paddle.static import InputSpec from ppgan.apps import RealSRPredictor sys.path.append("../utils") from notebook_utils import NotebookAlert ``` ## Settings ``` # The filenames of the downloaded and converted models MODEL_NAME = "paddlegan_sr" MODEL_DIR = Path("model") OUTPUT_DIR = Path("output") OUTPUT_DIR.mkdir(exist_ok=True) model_path = MODEL_DIR / MODEL_NAME ir_path = model_path.with_suffix(".xml") onnx_path = model_path.with_suffix(".onnx") ``` ## Inference on PaddlePaddle Model ### Investigate PaddleGAN Model The [PaddleGAN documentation](https://github.com/PaddlePaddle/PaddleGAN) explains to run the model with `sr.run()`. Let's see what that function does, and check other relevant functions that are called from that function. Adding `??` to the methods shows the docstring and source code. 
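Outside IPython, the standard library's `inspect` module gives the same information as `??` programmatically. A small sketch using `json.dumps` as a stand-in, since the RealSR methods require the PaddleGAN install:

```python
import inspect
import json

# inspect.getdoc ~ the docstring part of `??`;
# inspect.getsource ~ the source part of `??`
print(inspect.getdoc(json.dumps).splitlines()[0])
src = inspect.getsource(json.dumps)
print(src.splitlines()[0])
```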
``` # Running this cell will download the model weights if they have not been downloaded before # This may take a while sr = RealSRPredictor() sr.run?? sr.run_image?? sr.norm?? sr.denorm?? ``` The `run` method checks whether the input is an image or a video. For an image, it loads the image as an RGB image, normalizes it, and converts it to a Paddle tensor. It is propagated to the network by calling `self.model()` and then "denormalized". The normalization function simply divides all image values by 255. This converts an image with integer values in the range of 0 to 255 to an image with floating point values in the range of 0 to 1. The denormalization function transforms the output from network shape (C,H,W) to image shape (H,W,C). It then clips the image values between 0 and 255, and converts the image to a standard RGB image with integer values in the range of 0 to 255. To get more information about the model, we can check what it looks like with `sr.model??`. ``` # sr.model?? ``` ### Do Inference To show inference on the PaddlePaddle model, set PADDLEGAN_INFERENCE to True in the cell below. Performing inference may take some time. ``` # Set PADDLEGAN_INFERENCE to True to show inference on the PaddlePaddle model. # This may take a long time, especially for larger images. PADDLEGAN_INFERENCE = False if PADDLEGAN_INFERENCE: # load the input image and convert to tensor with input shape IMAGE_PATH = Path("data/coco_tulips.jpg") image = cv2.cvtColor(cv2.imread(str(IMAGE_PATH)), cv2.COLOR_BGR2RGB) input_image = image.transpose(2, 0, 1)[None, :, :, :] / 255 input_tensor = paddle.to_tensor(input_image.astype(np.float32)) if max(image.shape) > 400: NotebookAlert( f"This image has shape {image.shape}. Doing inference will be slow " "and the notebook may stop responding. 
Set PADDLEGAN_INFERENCE to False " "to skip doing inference on the PaddlePaddle model.", "warning", ) if PADDLEGAN_INFERENCE: # Do inference, and measure how long it takes print(f"Start superresolution inference for {IMAGE_PATH.name} with shape {image.shape}...") start_time = time.perf_counter() sr.model.eval() with paddle.no_grad(): result = sr.model(input_tensor) end_time = time.perf_counter() duration = end_time - start_time result_image = ( (result.numpy().squeeze() * 255).clip(0, 255).astype("uint8").transpose((1, 2, 0)) ) print(f"Superresolution image shape: {result_image.shape}") print(f"Inference duration: {duration:.2f} seconds") plt.imshow(result_image); ``` ## Convert PaddleGAN Model to ONNX and OpenVINO IR To convert the PaddlePaddle model to OpenVINO IR, we first convert the model to ONNX, and then convert the ONNX model to the IR format. ### Convert PaddlePaddle Model to ONNX ``` # Ignore PaddlePaddle warnings: # The behavior of expression A + B has been unified with elementwise_add(X, Y, axis=-1) warnings.filterwarnings("ignore") sr.model.eval() # ONNX export requires an input shape in this format as parameter x_spec = InputSpec([None, 3, 299, 299], "float32", "x") paddle.onnx.export(sr.model, str(model_path), input_spec=[x_spec], opset_version=13) ``` ### Convert ONNX Model to OpenVINO IR ``` ## Uncomment the command below to show Model Optimizer help, which shows the possible arguments for Model Optimizer # ! mo --help if not ir_path.exists(): print("Exporting ONNX model to IR... This may take a few minutes.") ! 
mo --input_model $onnx_path --input_shape "[1,3,299,299]" --model_name $MODEL_NAME --output_dir "$MODEL_DIR" --data_type "FP16" --log_level "CRITICAL" ``` ## Do Inference on IR Model ``` # Read network and get input and output names ie = Core() model = ie.read_model(model=ir_path) input_layer = next(iter(model.inputs)) # Load and show image IMAGE_PATH = Path("data/coco_tulips.jpg") image = cv2.cvtColor(cv2.imread(str(IMAGE_PATH)), cv2.COLOR_BGR2RGB) if max(image.shape) > 800: NotebookAlert( f"This image has shape {image.shape}. The notebook works best with images with " "a maximum side of 800x600. Larger images may work well, but inference may " "be slow", "warning", ) plt.imshow(image) # Reshape network to image size model.reshape({input_layer.any_name: PartialShape([1, 3, image.shape[0], image.shape[1]])}) # Load network to the CPU device (this may take a few seconds) compiled_model = ie.compile_model(model=model, device_name="CPU") output_layer = next(iter(compiled_model.outputs)) # Convert image to network input shape and divide pixel values by 255 # See "Investigate PaddleGAN model" section input_image = image.transpose(2, 0, 1)[None, :, :, :] / 255 start_time = time.perf_counter() # Do inference ir_result = compiled_model([input_image])[output_layer] end_time = time.perf_counter() duration = end_time - start_time print(f"Inference duration: {duration:.2f} seconds") # Get result array in CHW format result_array = ir_result.squeeze() # Convert array to image with same method as PaddleGAN: # Multiply by 255, clip values between 0 and 255, convert to HWC INT8 image # See "Investigate PaddleGAN model" section image_super = (result_array * 255).clip(0, 255).astype("uint8").transpose((1, 2, 0)) # Resize image with bicubic upsampling for comparison image_bicubic = cv2.resize(image, tuple(image_super.shape[:2][::-1]), interpolation=cv2.INTER_CUBIC) plt.imshow(image_super) ``` ### Show Animated GIF To visualize the difference between the bicubic image and the 
superresolution image, we create an animated GIF that switches between both versions. ``` result_pil = Image.fromarray(image_super) bicubic_pil = Image.fromarray(image_bicubic) gif_image_path = OUTPUT_DIR / Path(IMAGE_PATH.stem + "_comparison.gif") final_image_path = OUTPUT_DIR / Path(IMAGE_PATH.stem + "_super.png") result_pil.save( fp=str(gif_image_path), format="GIF", append_images=[bicubic_pil], save_all=True, duration=1000, loop=0, ) result_pil.save(fp=str(final_image_path), format="png") DisplayImage(open(gif_image_path, "rb").read(), width=1920 // 2) ``` ### Create Comparison Video Create a video with a "slider", showing the bicubic image to the right and the superresolution image on the left. For the video, the superresolution and bicubic image are resized to half the original width and height, to improve processing speed. This gives an indication of the superresolution effect. The video is saved as an .avi video. You can click on the link to download the video, or open it directly from the images directory, and play it locally. 
``` FOURCC = cv2.VideoWriter_fourcc(*"MJPG") IMAGE_PATH = Path(IMAGE_PATH) result_video_path = OUTPUT_DIR / Path(f"{IMAGE_PATH.stem}_comparison_paddlegan.avi") video_target_height, video_target_width = ( image_super.shape[0] // 2, image_super.shape[1] // 2, ) out_video = cv2.VideoWriter( str(result_video_path), FOURCC, 90, (video_target_width, video_target_height), ) resized_result_image = cv2.resize(image_super, (video_target_width, video_target_height))[ :, :, (2, 1, 0) ] resized_bicubic_image = cv2.resize(image_bicubic, (video_target_width, video_target_height))[ :, :, (2, 1, 0) ] progress_bar = ProgressBar(total=video_target_width) progress_bar.display() for i in range(2, video_target_width): # Create a frame where the left part (until i pixels width) contains the # superresolution image, and the right part (from i pixels width) contains # the bicubic image comparison_frame = np.hstack( ( resized_result_image[:, :i, :], resized_bicubic_image[:, i:, :], ) ) # create a small black border line between the superresolution # and bicubic part of the image comparison_frame[:, i - 1 : i + 1, :] = 0 out_video.write(comparison_frame) progress_bar.progress = i progress_bar.update() out_video.release() clear_output() video_link = FileLink(result_video_path) video_link.html_link_str = "<a href='%s' download>%s</a>" display(HTML(f"The video has been saved to {video_link._repr_html_()}")) ```
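As a recap, the normalize/denormalize steps described in the "Investigate PaddleGAN Model" section can be paraphrased in plain NumPy. This is a sketch of the behaviour described there, not the PaddleGAN source itself:

```python
import numpy as np

def norm(image_hwc):
    # HWC uint8 in [0, 255] -> CHW float32 in [0, 1]
    return image_hwc.transpose(2, 0, 1).astype(np.float32) / 255

def denorm(tensor_chw):
    # CHW float in [0, 1] -> HWC uint8 in [0, 255], clipped
    return (tensor_chw * 255).clip(0, 255).astype("uint8").transpose(1, 2, 0)

# Round trip on a random image recovers shape, dtype, and (up to
# float rounding) the pixel values
img = np.random.randint(0, 256, size=(4, 6, 3), dtype=np.uint8)
restored = denorm(norm(img))
print(img.shape, "->", norm(img).shape, "->", restored.shape)
```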
# Imports ``` from datetime import datetime from b2 import B2 ``` # B2 kick-off and data loading `fire_earlier.csv` and `fire_later.csv` are samples of the "[1.88 Million US Wildfires](https://www.kaggle.com/rtatman/188-million-us-wildfires)" dataset made available on Kaggle by Rachael Tatman. ``` b2 = B2() data = b2.from_file("./data/fire_earlier.csv") # data = b2.from_file("./data/fire_later.csv") data["DT"] = data.apply( lambda ts: datetime.fromtimestamp(ts).replace(microsecond=0), "DISCOVERY_DATE" ) data["YEAR"] = data.apply(lambda d: d.year, "DT") data["MINUTE"] = data.apply(lambda d: d.minute, "DT") b2.show_profile(data) # Update the dashboard columns data.head() data.num_rows ``` # B2 in action ``` states = data.group("STATE") states.vis() # 🟡 01:47 🟡 CAUSE_DESCR_data_dist = data.group('CAUSE_DESCR') CAUSE_DESCR_data_dist.vis() large_fires = data.where("FIRE_SIZE", lambda x: x > 1000) time_vs_size = large_fires.select(["DISCOVERY_TIME", "FIRE_SIZE"]) time_vs_size.vis() # 🟡 01:48 🟡 YEAR_data_dist = data.group('YEAR') YEAR_data_dist.vis() %%reactive data.get_filtered_data().head(10) %%reactive filtered_locs = data.get_filtered_data().select(["LATITUDE", "LONGITUDE"]) filtered_locs.plot_heatmap(zoom_start=3, radius=6) # 🔵 02:02 🔵 # b2.sel([{"CAUSE_DESCR_data_dist": {"CAUSE_DESCR": ["Lightning"]}}]) b2.sel([{"CAUSE_DESCR_data_dist": {"CAUSE_DESCR": ["Debris Burning"]}}]) # 🟠 02:15 🟠 ## Current snapshot queries: # data.where('CAUSE_DESCR', b2.are.contained_in(['Debris Burning'])).group('STATE') # data.group('CAUSE_DESCR') # data.where('CAUSE_DESCR', b2.are.contained_in(['Debris Burning'])).where('FIRE_SIZE', lambda x: x > 1000).select(['DISCOVERY_TIME', 'FIRE_SIZE']) # data.where('CAUSE_DESCR', b2.are.contained_in(['Debris Burning'])).group('YEAR') from IPython.display import HTML, display display(HTML("""<div><svg class="marks" width="677" height="173" viewBox="0 0 677 173" version="1.1" xmlns="http://www.w3.org/2000/svg" 
xmlns:xlink="http://www.w3.org/1999/xlink"><rect width="677" height="173" style="fill: white;"></rect><g transform="translate(52,10)"><g class="mark-group role-frame root"><g transform="translate(0,0)"><path class="background" d="M0.5,0.5h620v120h-620Z" style="fill: none; stroke: #ddd;"></path><g><g class="mark-group role-axis"><g transform="translate(0.5,0.5)"><path class="background" d="M0,0h0v0h0Z" style="pointer-events: none; fill: none;"></path><g><g class="mark-rule role-axis-grid" style="pointer-events: none;"><line transform="translate(0,120)" x2="620" y2="0" style="fill: none; stroke: #ddd; stroke-width: 1; opacity: 1;"></line><line transform="translate(0,90)" x2="620" y2="0" style="fill: none; stroke: #ddd; stroke-width: 1; opacity: 1;"></line><line transform="translate(0,60)" x2="620" y2="0" style="fill: none; stroke: #ddd; stroke-width: 1; opacity: 1;"></line><line transform="translate(0,30)" x2="620" y2="0" style="fill: none; stroke: #ddd; stroke-width: 1; opacity: 1;"></line><line transform="translate(0,0)" x2="620" y2="0" style="fill: none; stroke: #ddd; stroke-width: 1; opacity: 1;"></line></g></g><path class="foreground" d="" style="pointer-events: none; display: none; fill: none;"></path></g></g><g class="mark-group role-axis"><g transform="translate(0.5,120.5)"><path class="background" d="M0,0h0v0h0Z" style="pointer-events: none; fill: none;"></path><g><g class="mark-rule role-axis-tick" style="pointer-events: none;"><line transform="translate(10,0)" x2="0" y2="5" style="fill: none; stroke: #888; stroke-width: 1; opacity: 1;"></line><line transform="translate(30,0)" x2="0" y2="5" style="fill: none; stroke: #888; stroke-width: 1; opacity: 1;"></line><line transform="translate(50,0)" x2="0" y2="5" style="fill: none; stroke: #888; stroke-width: 1; opacity: 1;"></line><line transform="translate(70,0)" x2="0" y2="5" style="fill: none; stroke: #888; stroke-width: 1; opacity: 1;"></line><line transform="translate(90,0)" x2="0" y2="5" style="fill: none; 
stroke: #888; stroke-width: 1; opacity: 1;"></line><line transform="translate(110,0)" x2="0" y2="5" style="fill: none; stroke: #888; stroke-width: 1; opacity: 1;"></line><line transform="translate(130,0)" x2="0" y2="5" style="fill: none; stroke: #888; stroke-width: 1; opacity: 1;"></line><line transform="translate(150,0)" x2="0" y2="5" style="fill: none; stroke: #888; stroke-width: 1; opacity: 1;"></line><line transform="translate(170,0)" x2="0" y2="5" style="fill: none; stroke: #888; stroke-width: 1; opacity: 1;"></line><line transform="translate(190,0)" x2="0" y2="5" style="fill: none; stroke: #888; stroke-width: 1; opacity: 1;"></line><line transform="translate(210,0)" x2="0" y2="5" style="fill: none; stroke: #888; stroke-width: 1; opacity: 1;"></line><line transform="translate(230,0)" x2="0" y2="5" style="fill: none; stroke: #888; stroke-width: 1; opacity: 1;"></line><line transform="translate(250,0)" x2="0" y2="5" style="fill: none; stroke: #888; stroke-width: 1; opacity: 1;"></line><line transform="translate(270,0)" x2="0" y2="5" style="fill: none; stroke: #888; stroke-width: 1; opacity: 1;"></line><line transform="translate(290,0)" x2="0" y2="5" style="fill: none; stroke: #888; stroke-width: 1; opacity: 1;"></line><line transform="translate(310,0)" x2="0" y2="5" style="fill: none; stroke: #888; stroke-width: 1; opacity: 1;"></line><line transform="translate(330,0)" x2="0" y2="5" style="fill: none; stroke: #888; stroke-width: 1; opacity: 1;"></line><line transform="translate(350,0)" x2="0" y2="5" style="fill: none; stroke: #888; stroke-width: 1; opacity: 1;"></line><line transform="translate(370,0)" x2="0" y2="5" style="fill: none; stroke: #888; stroke-width: 1; opacity: 1;"></line><line transform="translate(390,0)" x2="0" y2="5" style="fill: none; stroke: #888; stroke-width: 1; opacity: 1;"></line><line transform="translate(410,0)" x2="0" y2="5" style="fill: none; stroke: #888; stroke-width: 1; opacity: 1;"></line><line transform="translate(430,0)" x2="0" 
style="fill: none; stroke: #ddd; stroke-width: 1; opacity: 1;"></line><line transform="translate(0,72)" x2="20" y2="0" style="fill: none; stroke: #ddd; stroke-width: 1; opacity: 1;"></line><line transform="translate(0,24)" x2="20" y2="0" style="fill: none; stroke: #ddd; stroke-width: 1; opacity: 1;"></line></g></g><path class="foreground" d="" style="pointer-events: none; display: none; fill: none;"></path></g></g><g class="mark-group role-axis"><g transform="translate(0.5,120.5)"><path class="background" d="M0,0h0v0h0Z" style="pointer-events: none; fill: none;"></path><g><g class="mark-rule role-axis-tick" style="pointer-events: none;"><line transform="translate(10,0)" x2="0" y2="5" style="fill: none; stroke: #888; stroke-width: 1; opacity: 1;"></line></g><g class="mark-text role-axis-label" style="pointer-events: none;"><text text-anchor="end" transform="translate(9.5,7) rotate(270) translate(0,3)" style="font-family: sans-serif; font-size: 10px; fill: #000; opacity: 1;">1970</text></g><g class="mark-rule role-axis-domain" style="pointer-events: none;"><line transform="translate(0,0)" x2="20" y2="0" style="fill: none; stroke: #888; stroke-width: 1; opacity: 1;"></line></g><g class="mark-text role-axis-title" style="pointer-events: none;"><text text-anchor="middle" transform="translate(10,42.4609375)" style="font-family: sans-serif; font-size: 11px; font-weight: bold; fill: #000; opacity: 1;">YEAR</text></g></g><path class="foreground" d="" style="pointer-events: none; display: none; fill: none;"></path></g></g><g class="mark-group role-axis"><g transform="translate(0.5,0.5)"><path class="background" d="M0,0h0v0h0Z" style="pointer-events: none; fill: none;"></path><g><g class="mark-rule role-axis-tick" style="pointer-events: none;"><line transform="translate(0,120)" x2="-5" y2="0" style="fill: none; stroke: #888; stroke-width: 1; opacity: 1;"></line><line transform="translate(0,72)" x2="-5" y2="0" style="fill: none; stroke: #888; stroke-width: 1; opacity: 
1;"></line><line transform="translate(0,24)" x2="-5" y2="0" style="fill: none; stroke: #888; stroke-width: 1; opacity: 1;"></line></g><g class="mark-text role-axis-label" style="pointer-events: none;"><text text-anchor="end" transform="translate(-7,123)" style="font-family: sans-serif; font-size: 10px; fill: #000; opacity: 1;">0</text><text text-anchor="end" transform="translate(-7,75)" style="font-family: sans-serif; font-size: 10px; fill: #000; opacity: 1;">2,000</text><text text-anchor="end" transform="translate(-7,26.999999999999993)" style="font-family: sans-serif; font-size: 10px; fill: #000; opacity: 1;">4,000</text></g><g class="mark-rule role-axis-domain" style="pointer-events: none;"><line transform="translate(0,120)" x2="0" y2="-120" style="fill: none; stroke: #888; stroke-width: 1; opacity: 1;"></line></g><g class="mark-text role-axis-title" style="pointer-events: none;"><text text-anchor="middle" transform="translate(-35.423828125,60) rotate(-90) translate(0,-2)" style="font-family: sans-serif; font-size: 11px; font-weight: bold; fill: #000; opacity: 1;">count</text></g></g><path class="foreground" d="" style="pointer-events: none; display: none; fill: none;"></path></g></g><g class="mark-rect role-mark brush_brush_bg" clip-path="url(#clip85)"><path d="M0,0h0v0h0Z" style="fill: #333; fill-opacity: 0.125;"></path></g><g class="mark-rect role-mark layer_0_marks"><path d="M1,0h18v120h-18Z" style="fill: #9FB3C8; opacity: 0.5;"></path><path d="M1,85.19999999999999h18v34.80000000000001h-18Z" style="fill: #003E6B; opacity: 0.5;"></path></g><g class="mark-rect role-mark layer_1_marks" style="pointer-events: none;"></g><g class="mark-rect role-mark brush_brush" clip-path="url(#clip86)"><path d="M0,0h0v0h0Z" style="fill: none;"></path></g></g><path class="foreground" d="" style="display: none; fill: none;"></path></g></g></g></svg><div>""")) # 🔵 02:21 🔵 # b2.sel([{"time_vs_size": {"DISCOVERY_TIME": [1007, 2400]}}, {"time_vs_size": {"FIRE_SIZE": [0, 9800]}}, 
#         {"CAUSE_DESCR_data_dist": {"CAUSE_DESCR": ["Smoking"]}}])

# 🟡 02:23 🟡
data['DISCOVERY_DATE_bin'] = data.apply(lambda x: 'null' if b2.np.isnan(x) else int(x/300.0) * 300.0, 'DISCOVERY_DATE')
DISCOVERY_DATE_data_dist = data.group('DISCOVERY_DATE_bin')
# DISCOVERY_DATE_data_dist.vis()
DISCOVERY_DATE_data_dist.vis(mark="line", x_type="temporal")

minute = data.group("MINUTE")
minute.vis(mark="line", x_type="ordinal")

help(DISCOVERY_DATE_data_dist.vis)
```
---
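The last cell above maps each raw `DISCOVERY_DATE` value onto the lower edge of a 300-unit bucket before grouping. Outside the `b2` API (whose `apply` signature is specific to that library), the same bucketing rule can be sketched in plain Python; `bin_value` is a name invented here for illustration:

```python
import math

def bin_value(x, width=300.0):
    """Map a numeric value to the lower edge of its bucket;
    NaNs are kept out-of-band as the string 'null', as in the cell above."""
    if x is None or (isinstance(x, float) and math.isnan(x)):
        return 'null'
    return int(x / width) * width

# A few Julian-date-like values plus a missing one
values = [2453021.5, 2453321.0, float('nan')]
print([bin_value(v) for v in values])  # [2452800.0, 2453100.0, 'null']
```

Grouping on the binned column then counts rows per bucket, which is what makes the temporal line chart readable.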
github_jupyter
# Training LSTMs on the IMDB dataset

```
import tensorflow as tf
import keras
from keras.models import Sequential
from keras.layers import Dense, LSTM, Conv1D, MaxPool1D, Dropout, Embedding
from keras.preprocessing import sequence
from keras.callbacks import EarlyStopping
import matplotlib.pyplot as plt
import sys
import warnings

# Disable warnings
if not sys.warnoptions:
    warnings.simplefilter("ignore")

# Load the dataset, keeping only its 5,000 most frequent words
(x_train, y_train), (x_test, y_test) = keras.datasets.imdb.load_data(num_words = 5000)

# Splitting the held-out data into test and validation sets
x_aux, y_aux = x_test, y_test
x_test = x_aux[:15000]
y_test = y_aux[:15000]
x_validation = x_aux[15000:]
y_validation = y_aux[15000:]
del x_aux, y_aux

# Showing how many lists each split contains
print("X train: ", len(x_train))
print("Y train: ", len(y_train))
print("X test: ", len(x_test))
print("Y test: ", len(y_test))
print("X validation: ", len(x_validation))
print("Y validation: ", len(y_validation))

# Showing the length of the first 10 lists in the dataset
for l, t in zip(x_train[:10], y_train[:10]):
    print(f"List length: {len(l)} - Target: {t}")
```

For training, every list needs to have the same length.

```
# Padding/truncating the training and test input sequences so they all have the same length
# The model will learn that 0 values carry no information
x_train = sequence.pad_sequences(x_train, maxlen=500)
x_test = sequence.pad_sequences(x_test, maxlen=500)
x_validation = sequence.pad_sequences(x_validation, maxlen=500)

# Checking the first 10 lists of the training set again
for x, y in zip(x_train[:10], y_train[:10]):
    print(f"List length: {len(x)} - Target: {y}")
```

Now the data is in the right shape for training.
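The effect of `sequence.pad_sequences` can be seen without running Keras: by default it left-pads short sequences with zeros and truncates long ones from the front (both `padding` and `truncating` default to `'pre'`). A pure-Python sketch of that default behaviour (`pad_pre` is a stand-in name, not the Keras implementation):

```python
def pad_pre(seqs, maxlen):
    """Left-pad with zeros / truncate from the front, mimicking the
    defaults of keras.preprocessing.sequence.pad_sequences."""
    out = []
    for s in seqs:
        s = list(s)[-maxlen:]                    # keep the last maxlen tokens
        out.append([0] * (maxlen - len(s)) + s)  # zero-fill on the left
    return out

padded = pad_pre([[1, 2], [3, 4, 5, 6, 7]], maxlen=4)
print(padded)  # [[0, 0, 1, 2], [4, 5, 6, 7]]
```

This is why the notebook can say the model "will learn that 0 values carry no information": the zeros are pure filler on the left edge of short reviews.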
## Defining a few helper functions

```
def exibir_performance_dados_teste(loss, accuracy):
    print("Loss: %.2f" % (loss * 100))
    print("Accuracy: %.2f" % (accuracy * 100))

def exibir_evolucao_treino_validacao(train_hist, train, validation):
    plt.plot(train_hist.history[train])
    plt.plot(train_hist.history[validation])
    plt.title("Training history")
    plt.ylabel(train)
    plt.xlabel("epoch")
    plt.legend(['train', 'validation'], loc = 'upper left')
    plt.show()
```

# Building several models

## LSTM

```
# Building the first model
model1 = Sequential()

# Parameters:
# input_dim = size of the dataset vocabulary; here, 5000.
# output_dim = size of the vector representing each word.
# input_length = length of each input sequence.
model1.add(Embedding(input_dim = 5000, output_dim = 64, input_length = 500))

# LSTM layer with 100 units
model1.add(LSTM(100))

# Output layer with one neuron and a sigmoid activation
model1.add(Dense(1, activation = 'sigmoid'))

# Compiling the model
model1.compile(loss = keras.losses.binary_crossentropy, optimizer = 'adam', metrics = ['accuracy'])

# Checking the network architecture
model1.summary()

%time
# Training the model
hist = model1.fit(x_train, y_train, validation_data=(x_validation, y_validation), batch_size = 128, epochs = 10)

# Evaluating the model
loss, accuracy = model1.evaluate(x_test, y_test)
exibir_performance_dados_teste(loss, accuracy)

# Plotting the training and validation curves
exibir_evolucao_treino_validacao(hist, 'accuracy', 'val_accuracy')
```

## LSTM with Dropout regularization

The goal of the second model is to improve generalization by applying the Dropout regularization technique.

```
# Building the model
model2 = Sequential()

# Parameters:
# input_dim = size of the dataset vocabulary; here, 5000.
# output_dim = size of the vector representing each word.
# input_length = length of each input sequence.
model2.add(Embedding(input_dim = 5000, output_dim = 64, input_length = 500))

# Dropout layer with a 25% drop rate
model2.add(Dropout(0.25))

# LSTM layer with 100 units
model2.add(LSTM(100))

# Dropout layer with a 25% drop rate
model2.add(Dropout(0.25))

# Final layer with one output neuron and a sigmoid activation
model2.add(Dense(1, activation = 'sigmoid'))

# Compiling the model
model2.compile(loss = 'binary_crossentropy', optimizer = 'adam', metrics = ['accuracy'])

# Viewing the network architecture
model2.summary()

%time
# Training the model
hist = model2.fit(x_train, y_train, validation_data=(x_validation, y_validation), batch_size = 128, epochs = 10)

# Evaluating the model
loss, accuracy = model2.evaluate(x_test, y_test)
exibir_performance_dados_teste(loss, accuracy)

# Plotting the training and validation curves
exibir_evolucao_treino_validacao(hist, 'accuracy', 'val_accuracy')
```

## LSTM with a CNN and regularization techniques

The goal of the third model is to add a CNN layer plus one more regularization technique.

```
# Building the model
model3 = Sequential()

# Parameters:
# input_dim = size of the dataset vocabulary; here, 5000.
# output_dim = size of the vector representing each word.
# input_length = length of each input sequence.
model3.add(Embedding(input_dim = 5000, output_dim = 64, input_length = 500))

# Convolution layer
model3.add(Conv1D(filters = 64, kernel_size = 3, padding = 'same', activation = 'relu'))

# Pooling layer
model3.add(MaxPool1D(pool_size = 2))

# Dropout layer with a 25% drop rate
model3.add(Dropout(0.25))

# LSTM layer with 100 units; dropout could instead be applied inside the layer, both at 25%:
# model3.add(LSTM(100, dropout = 0.25, recurrent_dropout = 0.25))
model3.add(LSTM(100))

# Dropout layer with a 25% drop rate
model3.add(Dropout(0.25))

# Final layer with one output neuron and a sigmoid activation
model3.add(Dense(units = 1, activation = 'sigmoid'))

# Compiling the model
model3.compile(loss = 'binary_crossentropy', optimizer = 'adam', metrics = ['accuracy'])

# Showing the network architecture
model3.summary()

# Defining EarlyStopping and training the model
monitor = EarlyStopping(monitor = 'val_loss', min_delta=1e-1, patience = 5, verbose = 3, mode = 'auto')
hist = model3.fit(x_train, y_train, validation_data = (x_validation, y_validation), callbacks = [monitor], batch_size = 64, epochs = 100)

# Evaluating the model
loss, accuracy = model3.evaluate(x_test, y_test)
exibir_performance_dados_teste(loss, accuracy)

# Plotting the model's learning curves
exibir_evolucao_treino_validacao(hist, 'accuracy', 'val_accuracy')
```

With 86.51% accuracy on the test data, the last model (model3) delivered the best result. Using EarlyStopping, the best model was obtained after only 6 epochs.
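The EarlyStopping callback used above halts training once `val_loss` has gone `patience` consecutive epochs without improving by at least `min_delta` on the best value seen so far. Stripped of Keras, the patience bookkeeping amounts to roughly the following (a sketch of that logic with an invented helper name, not the callback's actual source):

```python
def early_stop_epoch(val_losses, min_delta=0.1, patience=5):
    """Return the 1-based epoch at which training would stop,
    or None if the patience budget is never exhausted."""
    best = float('inf')
    wait = 0
    for epoch, loss in enumerate(val_losses, start=1):
        if loss < best - min_delta:   # improved enough: reset the counter
            best = loss
            wait = 0
        else:                         # stagnant epoch: spend patience
            wait += 1
            if wait >= patience:
                return epoch
    return None

# val_loss improves early, then plateaus -> stops after 5 stagnant epochs
print(early_stop_epoch([0.9, 0.6, 0.55, 0.54, 0.53, 0.52, 0.51, 0.50, 0.49]))  # 7
```

With `min_delta=1e-1` as in the cell above, small wobbles in `val_loss` do not reset the counter, which is why training can end after only a handful of epochs.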
github_jupyter
```
%load_ext autoreload
%autoreload 2
import importlib
import vsms
import torch
import torch.nn as nn
import clip
from vsms import *
from vsms import BoxFeedbackQuery

class StringEncoder(object):
    def __init__(self):
        variant = "ViT-B/32"
        device = 'cpu'
        jit = False
        self.device = device
        model, preproc = clip.load(variant, device=device, jit=jit)
        self.model = model
        self.preproc = preproc
        self.celoss = nn.CrossEntropyLoss(reduction='none')

    def encode_string(self, string):
        model = self.model.eval()
        with torch.no_grad():
            ttext = clip.tokenize([string])
            text_features = model.encode_text(ttext.to(self.device))
            text_features = text_features / text_features.norm(dim=-1, keepdim=True)
            return text_features.detach().cpu().numpy()

# Note: the helpers below are module-level functions whose first argument is a
# StringEncoder instance; they are called as forward(se, ...) further down.
def get_text_features(self, actual_strings, target_string):
    ## uniquify strings
    s2id = {}
    sids = []
    s2id[target_string] = 0
    for s in actual_strings:
        if s not in s2id:
            s2id[s] = len(s2id)
        sids.append(s2id[s])
    strings = [target_string] + actual_strings
    ustrings = list(s2id)
    stringids = torch.tensor([s2id[s] for s in actual_strings], dtype=torch.long).to(self.device)
    tstrings = clip.tokenize(ustrings)
    text_features = self.model.encode_text(tstrings.to(self.device))
    text_features = text_features / text_features.norm(dim=-1, keepdim=True)
    return text_features, stringids, ustrings

def forward(self, imagevecs, actual_strings, target_string):
    text_features, stringids, ustrings = get_text_features(self, actual_strings, target_string)
    image_features = torch.from_numpy(imagevecs).type(text_features.dtype)
    image_features = image_features / image_features.norm(dim=-1, keepdim=True)
    image_features = image_features.to(self.device)
    scores = image_features @ text_features.t()
    assert scores.shape[0] == stringids.shape[0]
    return scores, stringids.to(self.device), ustrings

def forward2(self, imagevecs, actual_strings, target_string):
    text_features, stringids, ustrings = get_text_features(self, actual_strings, target_string)
    actual_vecs = text_features[stringids]
    sought_vec = text_features[0].reshape(1,-1)
    image_features = torch.from_numpy(imagevecs).type(text_features.dtype)
    image_features = image_features / image_features.norm(dim=-1, keepdim=True)
    image_features = image_features.to(self.device)
    search_score = image_features @ sought_vec.reshape(-1)
    confounder_score = (image_features * actual_vecs).sum(dim=1)
    return search_score, confounder_score

import torch.optim
import torch.nn.functional as F

nn.HingeEmbeddingLoss()

import ray
ray.init('auto')
xclip = ModelService(ray.get_actor('clip'))

from vsms import *

benchparams = dict(
    objectnet=dict(loader=objectnet_cropped, idxs=np.load('./data/object_random_idx.npy')[:10000])
)

def load_ds(evs, dsnames):
    for k, v in tqdm(benchparams.items(), total=len(benchparams)):
        if k in dsnames:
            def closure():
                ev0 = v['loader'](xclip)
                idxs = v['idxs']
                idxs = np.sort(idxs) if idxs is not None else None
                ev = extract_subset(ev0, idxsample=idxs)
                evs[k] = ev
            closure()

evs = {}
load_ds(evs, 'objectnet')

ev = evs['objectnet']
vecs = ev.embedded_dataset
hdb = AugmentedDB(raw_dataset=ev.image_dataset, embedding=ev.embedding,
                  embedded_dataset=vecs, vector_meta=ev.fine_grained_meta)

def show_scores(se, vecs, actual_strings, target_string):
    with torch.no_grad():
        se.model.eval()
        scs, stids, rawstrs = forward(se, vecs, actual_strings, target_string=target_string)
    scdf = pd.DataFrame({st: col for st, col in zip(rawstrs, scs.cpu().numpy().transpose())})
    display(scdf.style.highlight_max(axis=1))

def get_feedback(idxbatch):
    strids = np.where(ev.query_ground_truth.iloc[idxbatch])[1]
    strs = ev.query_ground_truth.columns[strids]
    strs = [search_terms['objectnet'][fbstr] for fbstr in strs.values]
    return strs

curr_firsts = pd.read_parquet('./data/cats_objectnet_ordered.parquet')

class Updater(object):
    def __init__(self, se, lr, rounds=1, losstype='hinge'):
        self.se = se
        self.losstype = losstype
        self.opt = torch.optim.AdamW([{'params': se.model.ln_final.parameters()},
                                      {'params': se.model.text_projection},
                                      # {'params': se.model.transformer.parameters(), 'lr': lr*.01}
                                      ], lr=lr, weight_decay=0.)
        # self.opt = torch.optim.Adam([{'params': se.model.parameters()}], lr=lr)
        self.rounds = rounds

    def update(self, imagevecs, actual_strings, target_string):
        se = self.se
        se.model.train()
        losstype = self.losstype
        opt = self.opt
        margin = .3

        def opt_closure():
            opt.zero_grad()
            if losstype == 'ce':
                scores, stringids, rawstrs = forward(se, imagevecs, actual_strings, target_string)
                # breakpoint()
                iidx = torch.arange(scores.shape[0]).long()
                actuals = scores[iidx, stringids]
                midx = scores.argmax(dim=1)
                maxes = scores[iidx, midx]
            elif losstype == 'hinge':
                # a, b = forward2(se, imagevecs, actual_strings, target_string)
                scores, stringids, rawstrs = forward(se, imagevecs, actual_strings, target_string)
                # breakpoint()
                iidx = torch.arange(scores.shape[0]).long()
                maxidx = scores.argmax(dim=1)
                actual_score = scores[iidx, stringids].reshape(-1,1)
                # max_score = scores[iidx, maxidx]
                # target_score = scores[:,0]
                losses1 = F.relu(- (actual_score - scores - margin))
                # losses2 = F.relu(- (actual_score - target_score - margin))
                # losses = torch.cat([losses1, losses2])
                losses = losses1
            else:
                assert False
            loss = losses.mean()
            # print(loss.detach().cpu())
            loss.backward()

        for _ in range(self.rounds):
            opt.step(opt_closure)

def closure(search_query, max_n, firsts, show_display=False, batch_size=10):
    sq = search_terms['objectnet'][search_query]
    se = StringEncoder()
    up = Updater(se, lr=.0001, rounds=1)
    bs = batch_size
    bfq = BoxFeedbackQuery(hdb, batch_size=bs, auto_fill_df=None)
    tvecs = []
    dbidxs = []
    accstrs = []
    gts = []
    while True:
        tvec = se.encode_string(sq)
        tvecs.append(tvec)
        idxbatch, _ = bfq.query_stateful(mode='dot', vector=tvec, batch_size=bs)
        dbidxs.append(idxbatch)
        gtvals = ev.query_ground_truth[search_query][idxbatch].values
        gts.append(gtvals)
        if show_display:
            display(hdb.raw.show_images(idxbatch))
            display(gtvals)
        # vecs = ev.embedded_dataset[idxbatch]
        actual_strings = get_feedback(idxbatch)
        accstrs.extend(actual_strings)
        if show_display:
            display(actual_strings)
        if gtvals.sum() > 0 or len(accstrs) > max_n:
            break
        # vcs = ev.embedded_dataset[idxbatch]
        # astrs = actual_strings
        vcs = ev.embedded_dataset[np.concatenate(dbidxs)]
        astrs = accstrs
        if show_display:
            show_scores(se, vcs, astrs, target_string=sq)
        up.update(vcs, actual_strings=astrs, target_string=sq)
        if show_display:
            show_scores(se, vcs, astrs, target_string=sq)
    frsts = np.where(np.concatenate(gts).reshape(-1))[0]
    if frsts.shape[0] == 0:
        firsts[search_query] = np.inf
    else:
        firsts[search_query] = frsts[0] + 1

cf = curr_firsts[curr_firsts.nfirst_x > batch_size]

x.category

firsts = {}
batch_size = 10
for x in tqdm(curr_firsts.itertuples()):
    closure(x.category, max_n=30, firsts=firsts, show_display=True, batch_size=batch_size)
    print(firsts[x.category], x.nfirst_x)
    if x.nfirst_x <= batch_size:
        break

firsts = {}
batch_size = 10
for x in tqdm(curr_firsts.itertuples()):
    closure(x.category, max_n=3*x.nfirst_x, firsts=firsts, show_display=True, batch_size=batch_size)
    print(firsts[x.category], x.nfirst_x)
    if x.nfirst_x <= batch_size:
        break

rdf = pd.concat([pd.Series(firsts).rename('feedback'),
                 cf[['category', 'nfirst_x']].set_index('category')['nfirst_x'].rename('no_feedback')], axis=1)

((rdf.feedback < rdf.no_feedback).mean(),
 (rdf.feedback == rdf.no_feedback).mean(),
 (rdf.feedback > rdf.no_feedback).mean())

rdf

rdf.to_parquet('./data/objectnet_nfirst_verbal.parquet')
```
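The `'hinge'` branch in `Updater.update` penalises every label whose score comes within `margin` of the true label's score: `F.relu(-(actual_score - scores - margin))` simplifies to `relu(scores - actual + margin)`. That algebra can be checked with a small NumPy example (toy scores invented here, not the notebook's data):

```python
import numpy as np

def margin_ranking_loss(scores, actual_ids, margin=0.3):
    """relu(scores - actual_score + margin), as in the 'hinge' branch.
    Note the true-label column itself contributes a constant `margin`,
    matching the original code, which does not mask it out."""
    idx = np.arange(scores.shape[0])
    actual = scores[idx, actual_ids].reshape(-1, 1)
    return np.maximum(0.0, scores - actual + margin)

scores = np.array([[0.9, 0.2],    # image 0, true label 0: comfortable margin
                   [0.5, 0.45]])  # image 1, true label 0: margin violated
losses = margin_ranking_loss(scores, np.array([0, 0]))
print(losses)
```

Only the second image's competing label produces a nonzero off-diagonal loss (0.25), which is the signal that drives the text-encoder update.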
github_jupyter
<a name="top"></a>

# Examples of Tables

## Table of Contents

1. [Table 1 - No Alignment](#table1) <br>
2. [Table 2 - Center Alignment](#table2) <br>
3. [Table 3 - Left Alignment](#table3) <br>
4. [Table 4 - Right Alignment](#table4) <br>
5. [Table 5 - HTML](#table5) <br>
5a. [Html Table](#table5)<br>
5b. [Adding borders](#table5b) <br>
5c. [Align Text](#table5c) <br>
5d. [Adding colour](#table5d) <br>

<a id='table1'></a>
### Table 1 - No Alignment

|Sections |No of Functions |Brief Descriptions |
|-|-|-|
Random Sample Data | 10 functions |Random data computations on specified arrays
Permutations | 2 functions | Alters the sequence of generated outputs within arrays
Distributions |35 functions |Determines the occurrence of variables across defined parameters within arrays
Random Generator | 4 functions | Determining the probability of occurrence within arrays

<a id='table2'></a>
### Table 2 - Center Alignment

|Sections |No of Functions |Brief Descriptions |
|:-:|:-:|:-:|
|Random Sample Data | 10 functions |Random data computations on specified arrays
|Permutations | 2 functions | Alters the sequence of generated outputs within arrays
|Distributions |35 functions |Determines the occurrence of variables across defined parameters within arrays
|Random Generator | 4 functions | Determining the probability of occurrence within arrays

<a id='table3'></a>
### Table 3 - Left Alignment

|Sections |No of Functions |Brief Descriptions |
|:-|:-|:-
|Random Sample Data | 10 functions |Random data computations on specified arrays
|Permutations | 2 functions | Alters the sequence of generated outputs within arrays
|Distributions |35 functions |Determines the occurrence of variables across defined parameters within arrays
|Random Generator | 4 functions | Determining the probability of occurrence within arrays

<a id='table4'></a>
### Table 4 - Right Alignment

|Sections |No of Functions |Brief Descriptions |
|-:|-:|-:
|Random Sample Data | 10 functions |Random data computations on specified arrays
|Permutations | 2 functions | Alters the sequence of generated outputs within arrays
|Distributions |35 functions |Determines the occurrence of variables across defined parameters within arrays
|Random Generator | 4 functions | Determining the probability of occurrence within arrays

<a id='table5'></a>
### Table 5 - HTML

#### HTML Table

```
%%html
<table>
<tr> <th>Sections </th> <th>No of Functions </th> <th>Brief Descriptions </th> </tr>
<tr> <td>Random Sample Data </td> <td>10 functions</td> <td>Random data computations on specified arrays</td> </tr>
<tr> <td>Permutations</td> <td>2 functions</td> <td>Alters the sequence of generated outputs within arrays</td> </tr>
<tr> <td>Distributions</td> <td>35 functions</td> <td>Determines the occurrence of variables across defined parameters within arrays</td> </tr>
<tr> <td>Random Generator</td> <td>4 functions </td> <td>Determining the probability of occurrence within arrays</td> </tr>
</table>
```

<a id='table5b'></a>
#### Adding borders

```
%%html
<table>
<tr> <th style="border:1px solid; text-align:center">Sections </th> <th style="border:1px solid; text-align:center">No of Functions </th> <th style="border:1px solid; text-align:center">Brief Descriptions </th> </tr>
<tr> <td style="border:1px solid; text-align:center">Random Sample Data </td> <td style="border:1px solid; text-align:center">10 functions</td> <td style="border:1px solid; text-align:center">Random data computations on specified arrays</td> </tr>
<tr> <td style="border:1px solid; text-align:center">Permutations</td> <td style="border:1px solid; text-align:center">2 functions</td> <td style="border:1px solid; text-align:center">Alters the sequence of generated outputs within arrays</td> </tr>
<tr> <td style="border:1px solid; text-align:center">Distributions</td> <td style="border:1px solid; text-align:center">35 functions</td> <td style="border:1px solid; text-align:center">Determines the occurrence of variables across defined parameters within arrays</td> </tr>
<tr> <td style="border:1px solid; text-align:center">Random Generator</td> <td style="border:1px solid; text-align:center">4 functions </td> <td style="border:1px solid; text-align:center">Determining the probability of occurrence within arrays</td> </tr>
</table>
```

<a id='table5c'></a>
#### Align Text - Center

<table>
<tr> <th style="border:1px solid; text-align:center">Sections </th> <th style="border:1px solid; text-align:center">No of Functions </th> <th style="border:1px solid; text-align:center">Brief Descriptions </th> </tr>
<tr> <td style="border:1px solid; text-align:center">Random Sample Data </td> <td style="border:1px solid; text-align:center">10 functions</td> <td style="border:1px solid; text-align:center">Random data computations on specified arrays</td> </tr>
<tr> <td style="border:1px solid; text-align:center">Permutations</td> <td style="border:1px solid; text-align:center">2 functions</td> <td style="border:1px solid; text-align:center">Alters the sequence of generated outputs within arrays</td> </tr>
<tr> <td style="border:1px solid; text-align:center">Distributions</td> <td style="border:1px solid; text-align:center">35 functions</td> <td style="border:1px solid; text-align:center">Determines the occurrence of variables across defined parameters within arrays</td> </tr>
<tr> <td style="border:1px solid; text-align:center">Random Generator</td> <td style="border:1px solid; text-align:center">4 functions </td> <td style="border:1px solid; text-align:center">Determining the probability of occurrence within arrays</td> </tr>
</table>

<a id='table5d'></a>
#### Adding colour

<a href="#top"><img style="float: right; width:50px;height:50px;" src="https://cdn.pixabay.com/photo/2016/09/05/10/50/app-1646212_1280.png" alt="Back To Top"></a>
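The "Adding colour" section names the technique but stops before showing it. A minimal sketch of one way to colour a table, using inline `background-color` styles on the header and data cells (the specific colour values here are arbitrary choices, not from the original notebook):

```
%%html
<table>
<tr> <th style="border:1px solid; text-align:center; background-color:#003E6B; color:white">Sections </th> <th style="border:1px solid; text-align:center; background-color:#003E6B; color:white">No of Functions </th> </tr>
<tr> <td style="border:1px solid; text-align:center; background-color:#9FB3C8">Random Sample Data </td> <td style="border:1px solid; text-align:center; background-color:#9FB3C8">10 functions</td> </tr>
<tr> <td style="border:1px solid; text-align:center">Permutations</td> <td style="border:1px solid; text-align:center">2 functions</td> </tr>
</table>
```

The same `style` attribute already used for borders and alignment carries the colour, so no new mechanism is needed.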
github_jupyter
# Car Price Prediction

Download the dataset from this link: https://www.kaggle.com/hellbuoy/car-price-prediction

# Problem Statement

```
# mount google drive in to your Colab environment
from google.colab import drive
drive.mount('/content/drive')

cd /content/drive/MyDrive/AI_assignment/
```

A Chinese automobile company, Geely Auto, aspires to enter the US market by setting up their manufacturing unit there and producing cars locally to give competition to their US and European counterparts. They have contracted an automobile consulting company to understand the factors on which the pricing of cars depends. Specifically, they want to understand the factors affecting the pricing of cars in the American market, since those may be very different from the Chinese market. The company wants to know:

- Which variables are significant in predicting the price of a car
- How well those variables describe the price of a car

Based on various market surveys, the consulting firm has gathered a large data set of different types of cars across the American market.

# Task

We are required to model the price of cars with the available independent variables. It will be used by the management to understand how exactly the prices vary with the independent variables. They can accordingly manipulate the design of the cars, the business strategy, etc. to meet certain price levels. Further, the model will be a good way for management to understand the pricing dynamics of a new market.

# Workflow

1. Load Data
2. Check Missing Values (if any exist, fill each record with the mean of its feature)
3. Split into 50% Training (Samples, Labels), 30% Test (Samples, Labels) and 20% Validation Data (Samples, Labels).
4. Model: input layer (no. of features), 3 hidden layers with 10, 8 and 6 units, and an output layer, with relu/tanh activation (check by experiment).
5.Compilation Step (Note : Its a Regression problem , select loss , metrics according to it) 6.Train the Model with Epochs (100) and validate it 7.If the model gets overfit tune your model by changing the units , No. of layers , activation function , epochs , add dropout layer or add Regularizer according to the need . 8.Evaluation Step 9.Prediction ``` import pandas as pd import numpy as np car_data = pd.read_csv('/content/drive/MyDrive/AI_assignment/CarPrice_Assignment.csv') import tensorflow as tf car_data.head() car_data['CarName'].unique() #check if there are empty cells, if there are then row and column indexes will be returned where values are empty or missing np.where(car_data.applymap(lambda x: x =='')) car_data.isnull().any() # correct the name error in audi 100 ls car_data.iloc[3,2] = 'audi 100ls' car_data.dtypes car_data.drop(columns=['car_ID'], inplace = True) # get columns so that we can use the column names for onehot encoding of catagorical featrues in next cell car_data.columns # onehot encode all catagorical columns final_car = pd.get_dummies(car_data, columns=['CarName','symboling','fueltype', 'aspiration', 'doornumber', 'carbody', 'drivewheel', 'enginelocation', 'enginetype', 'cylindernumber', 'fuelsystem'], drop_first = True) final_car.head() #check statistical data to see abnormal values and outliers final_car.describe() #initialize a seed value so that each time we can get the same random number sequence, it will help us as a team # working on a common project to work on the same random data. Each new seed will generate a particular sequnce #of random number. 
You can choose any seed value here of your choice # 0.72 means we have taken 72% values for training set as we will make 72/4 = 18 rows of k fold validation data, where # value of k will be 4 when we compile and fit our model for validation np.random.seed(11111) msk = np.random.rand(len(final_car)) < 0.72 train_total = final_car[msk] test_total = final_car[~msk] #check the length of our test and train datasets print(len(train_total)) print(len(test_total)) train_total.head(10) # check statistical overview if there are some outliers and abnormal values train_total.describe() print(train_total.dtypes) # get our price labels and store in another dataframe train_label = train_total.loc[:,'price'] test_label = test_total.loc[:,'price'] train_label # drop price from oroginal training and test dataset , as price is not needed there test_data= test_total.drop(columns = ['price']) train_data= train_total.drop(columns = ['price']) train_data.shape train_data #get indices of the columns so that we can know how many columns we have to normalize, as catagorical columns which we # have added with onehot encoding, do not need to be normalized.. 
normalization is done in the next cell {train_data.columns.get_loc(c): c for idx, c in enumerate(train_data.columns)} ## we normalize the data because it has large values; leaving them as-is would hurt the performance of our model, may cause overfitting, ## or may lead to high hardware resource usage # we apply the formula normalized_train_data = (train_data - mean) / standard_deviation ## first take the mean of the training data, then subtract the mean from each value of the array slice train_data.iloc[:,0:13] mean = train_data.iloc[:,0:13].mean(axis=0) # taking the mean of train_data.iloc[:,0:13] -= mean std = train_data.iloc[:,0:13].std(axis=0) train_data.iloc[:,0:13] /= std test_data.iloc[:,0:13] -= mean test_data.iloc[:,0:13] /= std mean std mean_label = train_label.mean() train_label -= mean_label std_label = train_label.std() train_label /= std_label test_label -= mean_label test_label /= std_label mean_label std_label print(mean_label) test_label train_data.shape # store in numpy arrays test = np.array(test_data.iloc[:]).astype('float32') train = np.array(train_data.iloc[:]).astype('float32') test_l= np.array(test_label.astype('float32')) train_l= np.array(train_label.astype('float32')) train.shape[1] (141,192)[1] train.dtype ``` # Models section ``` # We configure different models here with relu, tanh, regularization, dropout, etc.
``` ``` # we pass the activation function as a parameter here so that we can call this function with tanh or relu while # fitting and training the model from keras import models from keras import layers def build_model(act): model = models.Sequential() model.add(layers.Dense(128, activation= act,input_shape=(train.shape[1],))) model.add(layers.Dense(64, activation= act)) model.add(layers.Dense(32, activation= act)) model.add(layers.Dense(1)) model.compile(optimizer='rmsprop', loss='mse', metrics=['mae']) return model build_model('relu').summary() build_model('tanh').summary() # Regularized model from keras import regularizers def build_model_regular(act): model = models.Sequential() model.add(layers.Dense(10, activation= act,kernel_regularizer= regularizers.l1_l2(l1=0.001, l2=0.001),input_shape=(train.shape[1],))) model.add(layers.Dense(8, activation= act,kernel_regularizer= regularizers.l1_l2(l1=0.001, l2=0.001))) model.add(layers.Dense(6, activation= act,kernel_regularizer= regularizers.l1_l2(l1=0.001, l2=0.001))) model.add(layers.Dense(1)) model.compile(optimizer='rmsprop', loss='mse', metrics=['mae']) return model build_model_regular('tanh').summary() # dropout model from keras import regularizers def build_model_drop(act): model = models.Sequential() model.add(layers.Dense(10, activation= act,input_shape=(train.shape[1],))) model.add(layers.Dropout(0.2)) model.add(layers.Dense(8, activation= act)) model.add(layers.Dropout(0.2)) model.add(layers.Dense(6, activation= act)) model.add(layers.Dropout(0.2)) model.add(layers.Dense(1)) model.compile(optimizer='rmsprop', loss='mse', metrics=['mae']) return model build_model_drop('relu').summary() ``` # K Fold validation section ## here we use len(train)//k to make 141//4 = 35 rows for validation in each fold, and collect the validation scores for relu, tanh, regularization, and dropout ``` # k-fold validation with relu # 141/4 import numpy as np k = 4 num_val_samples = len(train) // k num_epochs =
100 all_scores_relu = [] for i in range(k): print('processing fold #', i) val_data = train[i * num_val_samples: (i + 1) * num_val_samples] val_targets = train_l[i * num_val_samples: (i + 1) * num_val_samples] partial_train_data = np.concatenate([train[:i * num_val_samples],train[(i + 1) * num_val_samples:]], axis=0) # print(partial_train_data) partial_train_targets = np.concatenate([train_l[:i * num_val_samples],train_l[(i + 1) * num_val_samples:]],axis=0) model = build_model('relu') model.fit(partial_train_data, partial_train_targets,epochs=num_epochs, batch_size=1, verbose=0) val_mse, val_mae = model.evaluate(val_data, val_targets, verbose=0) all_scores_relu.append(val_mae) # 141/4 # k-fold validation with tanh import numpy as np k = 4 num_val_samples = len(train) // k num_epochs = 100 all_scores_tanh = [] for i in range(k): print('processing fold #', i) val_data = train[i * num_val_samples: (i + 1) * num_val_samples] val_targets = train_l[i * num_val_samples: (i + 1) * num_val_samples] partial_train_data = np.concatenate([train[:i * num_val_samples],train[(i + 1) * num_val_samples:]], axis=0) # print(partial_train_data) partial_train_targets = np.concatenate([train_l[:i * num_val_samples],train_l[(i + 1) * num_val_samples:]],axis=0) model = build_model('tanh') model.fit(partial_train_data, partial_train_targets,epochs=num_epochs, batch_size=1, verbose=0) val_mse, val_mae = model.evaluate(val_data, val_targets, verbose=0) all_scores_tanh.append(val_mae) # k-fold validation with regularization import numpy as np k = 4 num_val_samples = len(train) // k num_epochs = 100 all_scores_regular = [] for i in range(k): print('processing fold #', i) val_data = train[i * num_val_samples: (i + 1) * num_val_samples] val_targets = train_l[i * num_val_samples: (i + 1) * num_val_samples] partial_train_data = np.concatenate([train[:i * num_val_samples],train[(i + 1) * num_val_samples:]], axis=0) # print(partial_train_data) partial_train_targets = np.concatenate([train_l[:i *
num_val_samples],train_l[(i + 1) * num_val_samples:]],axis=0) model = build_model_regular('relu') model.fit(partial_train_data, partial_train_targets,epochs=num_epochs, batch_size=1, verbose=0) val_mse, val_mae = model.evaluate(val_data, val_targets, verbose=0) all_scores_regular.append(val_mae) # k-fold validation with dropout import numpy as np k = 4 num_val_samples = len(train) // k num_epochs = 100 all_scores_drop = [] for i in range(k): print('processing fold #', i) val_data = train[i * num_val_samples: (i + 1) * num_val_samples] val_targets = train_l[i * num_val_samples: (i + 1) * num_val_samples] partial_train_data = np.concatenate([train[:i * num_val_samples],train[(i + 1) * num_val_samples:]], axis=0) # print(partial_train_data) partial_train_targets = np.concatenate([train_l[:i * num_val_samples],train_l[(i + 1) * num_val_samples:]],axis=0) model = build_model_drop('relu') model.fit(partial_train_data, partial_train_targets,epochs=num_epochs, batch_size=1, verbose=1) val_mse, val_mae = model.evaluate(val_data, val_targets, verbose=1) all_scores_drop.append(val_mae) ``` # Scores ## here we see the MAE (mean absolute error) scores of all the models, which we saved in lists during each training run in the section above ``` all_scores_relu all_scores_tanh all_scores_regular all_scores_drop ``` # Training on the training data ## here we call each model separately from the Models section, train it on the training data, and evaluate it on the test data ``` model_tanh = build_model('tanh') model_tanh.fit(train, train_l,epochs= 100, batch_size=1, verbose=0) test_mse_score, test_mae_score = model_tanh.evaluate(test, test_l) model_relu = build_model('relu') model_relu.fit(train, train_l,epochs= 100, batch_size=1, verbose=0) test_mse_score, test_mae_score = model_relu.evaluate(test, test_l) model_regular = build_model_regular('relu') model_regular.fit(train, train_l,epochs= 100, batch_size=1, verbose=0) test_mse_score, test_mae_score = model_regular.evaluate(test, test_l)
model_drop = build_model_drop('relu') model_drop.fit(train, train_l,epochs= 100, batch_size=1, verbose=0) test_mse_score, test_mae_score = model_drop.evaluate(test, test_l) ``` # Prediction section ## here we predict the prices of our test dataset with each model trained in the training section ## Note that we use the reverse of the normalization to recover the price values in dollars: since x = (y - mean) / std, we compute y = x * std + mean and then compare it with our target values ``` test_l def predict(model, m): print(f" the actual price was : {test_l[m]* std_label + mean_label} " ) return(f" the predicted price was : {(model.predict(test[m:m+1].reshape(1,test.shape[1]))) * std_label + mean_label} ") x_tanh = predict(model_tanh,2) x_tanh x_relu = predict(model_relu,2) x_relu x_regular = predict(model_regular,2) x_regular x_drop = predict(model_drop,2) x_drop def plot_fn(mod): y_true = test_l* std_label + mean_label y_pred = mod.predict(test) * std_label + mean_label return y_true , y_pred.flatten() import numpy as np import matplotlib.pyplot as plt %matplotlib inline def plotting(mod, label): y_true, y_pred = plot_fn(mod) coef = np.polyfit(y_true,y_pred,1) poly1d_fn = np.poly1d(coef) # poly1d_fn is now a function which takes in x and returns an estimate for y plt.figure() plt.plot(y_true,y_pred, 'yo', y_true, poly1d_fn(y_true), '--k') plt.title(label) plt.xlabel('Thousand Dollar True' ) plt.ylabel('Thousand Dollar Predictions' ) plt.xlim(0, 50000) plt.ylim(0, 50000) plot_list = [] for i,j in enumerate([model_relu, model_tanh, model_regular, model_drop]): list_name = ['model_relu', 'model_tanh', 'model_regular', 'model_drop'] plot_list.append(plotting(j,list_name[i])) ```
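The denormalization step used in the prediction section can be checked in isolation. A minimal sketch (the toy prices below are hypothetical and independent of the dataframes above) of the z-score round trip:

```python
import numpy as np

# Toy price labels (hypothetical values, just to illustrate the round trip)
prices = np.array([13495.0, 16500.0, 7500.0, 21000.0])

# z-score normalization, as done for train_label above
mean_label = prices.mean()
std_label = prices.std()
normalized = (prices - mean_label) / std_label

# Reverse process used in the prediction section: y = x * std + mean
recovered = normalized * std_label + mean_label

# The round trip recovers the original prices (up to float error)
assert np.allclose(recovered, prices)
```

Because the same `mean_label` and `std_label` are reused for the test labels, predictions in normalized space can always be mapped back to dollars this way.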
# [AI Creator Camp, Season 2] Understanding user-based collaborative filtering through a movie recommender system ## 1. Background ### 1.1 Collaborative filtering Collaborative filtering is a foundational and very important family of recommendation algorithms. Research on it dates back to 1992, and it has performed very well in the recommender systems of large e-commerce sites such as Amazon. Collaborative filtering can be divided into user-based and item-based variants. This project introduces user-based collaborative filtering by building a simple movie recommender system. ### 1.2 User-based collaborative filtering Similarity statistics are used to find neighboring users with similar tastes or interests, hence the names user-based collaborative filtering and neighbor-based collaborative filtering. ![](https://ai-studio-static-online.cdn.bcebos.com/58ff5bf39564492ca5b5c989d8078bddf0e7b7594fb645a497cb386be3d837c7) Basic steps: 1. Collect user information: gather data that represents user interests. Websites usually collect ratings or reviews ("active rating"); alternatively, the system infers evaluations from the user's behavior patterns ("passive rating"), without asking the user to score or review anything. E-commerce sites have an advantage in passive rating, since purchase records are very useful data. 2. Nearest neighbor search (NNS): user-based collaborative filtering starts from a group of users whose interests resemble the target user's, i.e. it computes similarities between users. For example: find n users with interests similar to A's, and use their ratings of item M as the prediction of A's rating of M. The similarity measure is chosen according to the data; commonly used ones are the Pearson correlation coefficient, cosine-based similarity, and adjusted cosine similarity. 3. Produce recommendations: with the nearest-neighbor set, we can predict the target user's interests and generate recommendations. Depending on the goal, the common forms are Top-N recommendation and relation recommendation. Top-N recommendation is generated per user: for example, count the items that appear frequently among A's nearest neighbors but are absent from A's own ratings, and recommend those. Relation recommendation mines association rules from the nearest neighbors' records. ## 2. Data We reproduce the classic collaborative filtering (CF) approach; this project uses the MovieLens ml-latest dataset to build a simple movie recommender. The MovieLens dataset contains ratings of many movies by many users, together with movie metadata and user attribute information. It is a common test dataset for recommender systems and machine-learning algorithms; in the recommender-systems field in particular, many well-known papers are based on it. This notebook uses the ml-latest dataset. ### 2.1 Unzip the dataset and import dependencies The ml-latest dataset has been imported into the project ``` !unzip -oq data/data101354/ml-latest.zip -d data/ print('Unzipped successfully!') import pandas as pd import paddle import numpy as np ``` ### 2.2 Reading the dataset When reading the dataset, the rating timestamp is not used in the statistics, so it is not read. To see training results quickly, only the first 100k rows are taken as a small dataset ``` dtype = {'userId': np.int64, 'movieId': np.int64, 'rating': np.float32} # the rating timestamp is not used in the statistics, so it is not read.
To shorten training time, only the first 100k rows are taken as a small dataset ratings_data = pd.read_csv(r'data/ml-latest/ratings.csv', dtype=dtype, usecols=[0,1,2], nrows=100000) print('success!') ``` ### 2.3 Data visualization and processing After reading the data, we take a quick look at the basic information of the dataset and change its storage format ``` # basic structure of the data print(ratings_data.head()) # dataset information print(ratings_data.describe()) ``` Having reviewed the basic dataset information, to see the relationship between users and movies more intuitively, and to make it easier to access all ratings by one user and all users' ratings of one movie, we convert the dataset into a pivot table, turning the rating records into a user-by-movie rating matrix ``` # build the pivot table ratings_matrix = ratings_data.pivot_table(index='userId', columns='movieId', values='rating') # pivot-table overview ratings_matrix ``` ## 3. Model construction ### 3.1 Computing similarity Three similarity measures are commonly used in collaborative filtering: cosine similarity, Pearson similarity, and Jaccard similarity. - Cosine similarity, Pearson similarity - Cosine similarity: $$sim(a,b) = \frac{\vec{a} \cdot \vec{b}}{|\vec{a}| \times |\vec{b}|}$$ ![](https://ai-studio-static-online.cdn.bcebos.com/67184bf7b0d84c6799d0b42dff459437c9f04dc2b93c4c979119de47e5422e07) - Pearson similarity: $$corr(a,b) = \frac{\sum _{i} (r_{ai}-\overline{r_a})(r_{bi}-\overline{r_b})}{\sqrt{\sum _{i}(r_{ai}-\overline{r_a})^2 \sum _{i}(r_{bi}-\overline{r_b})^2}}$$ - Both are the cosine of the angle between two vectors - Pearson similarity centers each component of the vectors first - Compared with cosine similarity, Pearson similarity also accounts for the length of each vector, and is therefore more commonly used - When the rating data is continuously distributed, cosine similarity and Pearson similarity are the usual choices - Jaccard similarity - Computation: $sim(a,b) = \frac{|intersection|}{|union|}$ - Jaccard similarity is used when the rating data consists of boolean values We use Pearson similarity as the user-to-user similarity for predicting movie ratings. ``` # compute the Pearson similarity similarity = ratings_matrix.T.corr() similarity ``` ### 3.2 Building the prediction model With the user-to-user similarity matrix, we can predict how much a given user will like a given movie. The approach: 1. Process the similarity matrix first, removing unrelated users and users negatively correlated with the target user, to obtain the preliminary similar users similar_users 2. Among the similar users, keep only those who have rated the target movie, giving the final set of users used for prediction, final_similar_users 3.
Using the rating-prediction formula $$pred(user, movie) = \frac{\sum_{v \in U} sim(user, v) \cdot r_{v,movie}}{\sum_{v \in U} |sim(user, v)|}$$ we can compute the predicted score ``` # predict a given user's rating of a given movie def predict(user, movie, ratings_matrix, similarity): # find the users correlated with the target user similar_users = similarity[user].drop([user]).dropna() # remove negatively correlated users, which would only add noise similar_users = similar_users.where(similar_users>0).dropna() # find those among them who have rated the target movie idx = ratings_matrix[movie].dropna().index & similar_users.index # similarities of the final set of similar users final_similar_users = similar_users.loc[list(idx)] # initialize the numerator and denominator of the prediction formula sum_up = 0 sum_down = 0 for sim_user, sim_value in final_similar_users.items(): # the similar user's rating data sim_user_rated_movies = ratings_matrix.loc[sim_user].dropna() # the similar user's rating of the target movie sim_user_rating_for_movie = sim_user_rated_movies[movie] sum_up += sim_value*sim_user_rating_for_movie sum_down += sim_value # compute the predicted rating predict_rating = sum_up/sum_down return predict_rating ``` ### 3.3 Building the ranking model With the rating-prediction model above, we can rank the predicted ratings of movies for a given user and thereby produce recommendations tailored to that user. First we build a function that predicts ratings for all candidate movies, handling two edge cases: - do not re-predict movies the target user has already watched - do not recommend "cold" movies with too few ratings After filtering out these movies, we obtain a predicted-rating list for the given user.
From this prediction list we can easily obtain the movieIds of the recommended movies ``` # predict the ratings of all candidate movies for a given user def predict_all(user, ratings_matrix, similarity, filter_rule=None): # add filter rules to cut redundant data from the dataset if not filter_rule: # take the full movie-id index movie_idx = ratings_matrix.columns elif filter_rule == ['unhot','rated']: # drop movies the user has already watched user_ratings = ratings_matrix.iloc[user] # keep movies that do not yet have a rating from this user _ = user_ratings<=5 idx1 = _.where(_ == False).dropna().index # drop cold movies; a movie is tentatively considered hot if it has been watched (rated) more than ten times count = ratings_matrix.count() idx2 = count.where(count>10).dropna().index movie_idx = set(idx1) & set(idx2) else: raise Exception('invalid filter rule') # loop over the candidates for movie in movie_idx: try: rating = predict(user, movie, ratings_matrix, similarity) except Exception as e: pass else: yield user, movie, rating # return the movie IDs of the top n recommendations def rank(user, n): results = predict_all(user, ratings_matrix, similarity, filter_rule=['unhot','rated']) # sort by score in descending order and output the first n items return sorted(results, key=lambda x: x[2], reverse = True)[:n] ``` ### 3.4 Output movie titles using the movies.csv dataset Having obtained the recommended movieIds, we read the movie titles from movies.csv and output the recommended movies. ``` # read the movies dataset movie_names = pd.read_csv(r'data/ml-latest/movies.csv') print('success!') # overview of the id-to-title mapping # print(movie_names.head()) movie_id = 1 print(movie_names.where(movie_names['movieId']==movie_id).dropna().values[0][1]) # output the titles of the recommended movies def get_movie_name(rank, movie_names): count = 0 print(f'Recommended movies, ordered by recommendation strength:') # rank returns a 2-D array; the second element of each item is the recommended movieId for item in rank: movie_id = item[1] count += 1 print(f'{count}.', movie_names.where(movie_names['movieId']==movie_id).dropna().values[0][1]) # recommendation test user_id = 100 # output the top-10 recommended movies get_movie_name(rank(user_id,10), movie_names) ``` ## 4. Summary Collaborative filtering is an important part of the recommendation field. It has many advantages: it can filter information that machines find hard to analyze automatically by content, avoiding incomplete or imprecise content analysis, and it can filter based on complex, hard-to-articulate notions such as information quality or personal taste. It also has notable shortcomings: 1. Recommendations for new users are poor: with little rating data from a new user, it is hard to determine an accurate set of similar users and therefore hard to make accurate predictions 2. Since recommender systems are mostly applied to very large item catalogs, the sparsity of the rating data is another issue worth attention For every detail of the collaborative filtering algorithm, we can also consider directions for optimization: the similarity computation, how the data is read, how the similarity matrix is stored and accessed, and so on. This project uses only a 100k-row excerpt; at the large data scales of real-world applications, these optimizations become crucial.
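The similarity-weighted prediction used above can be exercised end-to-end on a tiny hand-made ratings matrix. This is a minimal sketch (the three users and three movieIds are toy data, not the MovieLens dataset) of the same Pearson-weighted average:

```python
import pandas as pd
import numpy as np

# Toy user x movie ratings matrix (NaN = not rated); purely illustrative
ratings = pd.DataFrame(
    {101: [5.0, 4.5, 1.0], 102: [4.0, 3.5, 2.0], 103: [np.nan, 4.0, np.nan]},
    index=['u1', 'u2', 'u3'])  # users as rows, movieIds as columns

# Pearson similarity between users, as in `ratings_matrix.T.corr()`
similarity = ratings.T.corr()

# Predict u1's rating of movie 103 from positively correlated users who rated it
user, movie = 'u1', 103
sims = similarity[user].drop(user).dropna()
sims = sims[sims > 0]                     # drop negatively correlated users
raters = ratings[movie].dropna().index.intersection(sims.index)
weights = sims.loc[raters]
pred = (weights * ratings.loc[raters, movie]).sum() / weights.abs().sum()
print(round(pred, 2))  # → 4.0 (only u2 qualifies, so u2's rating carries over)
```

With more overlapping raters the prediction becomes a genuine weighted average rather than a single neighbor's rating.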
``` import sys, os import pandas as pd import numpy as np import seaborn as sns import matplotlib.pyplot as plt import pandas_profiling as pp sys.path.insert(0, os.path.abspath('..')) from script.functions import * ``` #### First, we import the data and display it after passing it through the function. ``` df = load_and_process('../../data/raw/adult.data') df #MANAGED THE IMPORT THANK GOD ``` #### Next, we describe the data to show the mean surveyed age (38) and the mean work hours of 40/week ``` df.describe() ``` #### Create a profile report for the dataset. ``` df.to_csv('../../data/processed/processed1.csv') report = pp.ProfileReport(df).to_file('../../data/processed/report.html') ``` #### Let's check the relationship between Age and Education ``` ageEdu = df.loc[:,['Age', 'Education']] ageEdu ``` ### Create countplot for different education levels: This is to see what education the majority of the people surveyed in this dataset had. We can clearly see that most of the people joined the workforce with only a HS degree, while others mainly did some college courses or completed a full Bachelors ``` ageEdu.replace({'Some-college': 'Some\ncollege','Prof-school':'Prof-\nschool', 'Assoc-voc':'Assoc\n-voc','Assoc-acdm':'Assoc\n-acdm', 'Preschool':'Pre-\nschool'}, inplace = True) #names didn't fit on graph so I had to change them. sns.despine() plt.figure(figsize = (20,10)) sns.set(style = 'white', font_scale = 1.5) eduCountGraph = sns.countplot(x = 'Education', data = ageEdu, palette = 'viridis', order = ageEdu['Education'].value_counts().index) eduCountGraph.get_figure().savefig('../../images/eduCount.jpg',dpi = 500) #Replace all row values in Education with new format.
df.replace({'Some-college': 'Some\ncollege','Prof-school':'Prof-\nschool', 'Assoc-voc':'Assoc\n-voc','Assoc-acdm':'Assoc\n-acdm', 'Preschool':'Pre-\nschool'}, inplace = True) df ``` ## Let's check the relationship of Age vs Earnings: #### First, we check the count of ages that have >50K earning/year This shows that the most people with above-50K yearly income are aged 37 - 47, which suggests that this is the age when adults become most settled with a good job ``` ageEarn = df.loc[:,['Age', 'Yearly Income']] ageEarnAbove50k = ageEarn.loc[lambda x: x['Yearly Income'] == '>50K'] #We'll check both Ages that have above and below 50K income sns.set(style = 'white', font_scale = 1.5,rc={'figure.figsize':(30,10)}) ageEarnGraph = sns.histplot(x = 'Age',data = ageEarnAbove50k,shrink = 0.9, bins = 6, kde = True) ageEarnGraph.set(ylabel = 'Yearly Income\nAbove 50K Count') ageEarnGraph.get_figure().savefig('../../images/ageEarnAbove50k.jpg', dpi = 500) ``` #### Next, we check the count of ages with <=50K earning/year This shows that the most people with below-50K yearly income are aged 19 - 36, which is understandable for young people ``` ageEarnBelow50k = ageEarn.loc[lambda x: x['Yearly Income'] == '<=50K'] ageEarnGraph = sns.histplot(x = 'Age', data = ageEarnBelow50k,bins = 6, shrink = 0.9, kde = True) ageEarnGraph.set(ylabel = 'Yearly Income\nBelow 50K Count') ageEarnGraph.get_figure().savefig('../../images/ageEarnBelow50k.jpg', dpi = 500) ``` ## Let's make a density plot to see the trends in each graph and where they overlap #### TL;DR Mo money mo money mo money - as we age..to the peak of 47. This shows the immense density of those aged around 40 in the above-50K group. It also shows that around age 25 is where most people make under 50K; they then start to climb the above-50K ladder, peaking at age 40. From this data it is evident that we make more money as we get older.
``` ageEarnDenisty = sns.kdeplot(x = 'Age',hue = 'Yearly Income', data = ageEarn, alpha = 0.3, fill = True, common_norm = False) ``` # Time to look at research questions. ## RESEARCH QUESTION 1: ### How much of a role does education play into someone's yearly income? I will conduct this analysis through a count plot of each education category to see which of them has the highest and lowest counts of >50k/year earners, and likewise for <=50k/year earners. #### TL;DR Bachelor is all you need for a good paying job **Start with >50K wages** This data shows that most of the people in the Above50k dataset have only their Bachelors degree, with HS-grad and some-college education trailing behind. Of course the data becomes skewed, as we can't directly compare against other educational paths since they are not in equal numbers. ``` eduWageAbove50k = df.loc[lambda x: x['Yearly Income'] == '>50K'] #Let's make a countplot for that. eduWageGraph = sns.countplot(x = 'Education', data = eduWageAbove50k, palette = 'Spectral', order = eduWageAbove50k['Education'].value_counts().index) eduWageGraph.set(title = 'Education VS Salary (Over 50K) Count', ylabel = 'Count') ``` **Now with Below50k dataset** #### TL;DR HS grads who don't go to post secondary and finish a degree have lower paying jobs This data shows that most of the people with jobs paying below 50k/year are the ones with only a HS-grad education, with people who have only done some college courses in second place. Unless you complete a program at post-secondary or go into trades after finishing school, you may make less than 50k/year ``` eduWageBelow50k = df.loc[lambda x: x['Yearly Income'] == '<=50K'] #Let's make a countplot for that.
sns.countplot(x = 'Education', data = eduWageBelow50k, order = eduWageBelow50k['Education'].value_counts().index, palette = 'Spectral') ``` **Since my data is all categorical, and violin and dist plots don't count occurrences of categorical data, I am limited to a certain set of graphs.** ## RESEARCH QUESTION 2: ### Which industries of work pay the most amount of money on average? To analyze this I will create a count plot of every job category to observe the number of people earning above or below 50k/year #### TL;DR Own a suit: managerial and executive roles have the most top earners, while trades/clerical industries have the most low earners We can see from this data that no one in the armed forces makes above 50K/year, with the Exec-managerial and Prof-specialty occupations making up the majority of the people with wages above 50K/year ``` #change row values to fit graph df.replace({'Adm-clerical':'Adm-\nclerical','Exec-managerial':'Exec-\nmanagerial', 'Handlers-cleaners':'Handlers\n-cleaners','Tech-support':'Tech-\nsupport', 'Craft-repair':'Craft-\nrepair','Other-service':'Other-\nservice', 'Prof-specialty':'Prof-\nspecialty','Machine-op-inspct':'Machine\n-op-\ninspct','Farming-fishing':'Farming\n-fishing'}, inplace = True) wageOc = df.loc[:,['Occupation', 'Yearly Income']] wageOcAbove50k = wageOc.loc[lambda x:x['Yearly Income'] == '>50K'] wageOcGraph = sns.countplot(data = wageOcAbove50k, x = 'Occupation', palette = 'Spectral', order = wageOcAbove50k['Occupation'].value_counts().index) wageOcGraph.set(title = 'Occupation VS Yearly Income', ylabel = 'Count of People with >50K\nEarnings per Occupation') ``` **Check jobs with below 50k/year earnings** Looking at the second half of the data, we can observe that no one surveyed worked in the armed forces. It also shows that the majority of people making below 50K a year strike a 3-way tie between Adm-clerical, Other-services, and Craft-repair jobs.
Although exec/managerial jobs make up most of the people who make >50K/year, they also make up a decent chunk of the people who make less than 50K/year. ``` wageOcBelow50k = wageOc.loc[lambda x:x['Yearly Income'] == '<=50K'] wageOcGraph = sns.countplot(data = wageOcBelow50k, x = 'Occupation',palette = 'Spectral', order = wageOcBelow50k['Occupation'].value_counts().index) wageOcGraph.set(title = 'Occupation VS Yearly Income', ylabel = 'Count of People with <=50K\nEarnings per Occupation') ``` ## RESEARCH QUESTION 3: ### What is the most common occupation surveyed in this dataset? I will conduct this analysis through a count plot of each occupation. ### Results: In an interesting 3-way tie, we have prof-specialty, craft-repair, and exec-managerial occupations with the highest counts, although not far behind are adm-clerical, sales, and other-services. It is super interesting to see that executive/managerial roles are so prevalent in this dataset, as it can be thought of by some as a difficult role to obtain. The occupations in this category are also the first-place holder for most wages above 50k/year. ``` occ = df.loc[:,['Occupation']] sns.countplot(x='Occupation', data = occ, order = occ['Occupation'].value_counts().index) ``` ## RESEARCH QUESTION 4: ### What is the ratio of people earning >50k/year and <=50k/year by sex? I will conduct this in two separate graphs, first focusing on people who make above 50k/year and then on those earning <=50k/year. ### Results: The graphs show that a majority of the people surveyed in this dataset were men. In the first half of the data, men just about sextuple the women in earning above 50k/year. The ratio of high earners to low earners within each sex is about 6100/14000 = 44% for men, whereas 1000/8000 = 12.5% of women are high earners.
``` earnSex = df.loc[:,['Sex', 'Yearly Income']] earnSexAbove50k = earnSex.loc[lambda x: x['Yearly Income'] == '>50K'] plt.figure(figsize=(5,5)) graph = sns.countplot(data = earnSexAbove50k, x = 'Sex') graph.set(ylabel = 'Number of People who Make\n >50K/year') earnSexBelow50k = earnSex.loc[lambda x: x['Yearly Income'] == '<=50K'] plt.figure(figsize=(5,5)) graph = sns.countplot(data = earnSexBelow50k, x = 'Sex') graph.set(ylabel = 'Number of People who Make\n <=50K/year') #replace some values so they fit on graph df.replace({'Married-civ-spouse':'Married\n-civ-\nspouse', 'Never-married':'Never-\nMarried', 'Married-spouse-absent':'Married\n-spouse\n-absent','Married-AF-spouse':'Married\n-AF-\nspouse'}, inplace = True) df ``` ## RESEARCH QUESTION 5: ### What is the relationship of yearly earnings and marital status? I will conduct this through splitting the data into the top earners and low earners (>50K/year,<=50K/year) and comparing them to their marital status. ### Results People who are married are most likely to make over 50k/year while people who have never married top the charts for below 50k/year. ``` earnMar = df.loc[:,['Yearly Income', 'Marital Status']] earnMarAbove50k = earnMar.loc[lambda x: x['Yearly Income'] == '>50K'] plt.figure(figsize= (10,5)) graph = sns.countplot(data = earnMarAbove50k, x = 'Marital Status') graph.set_ylabel('Number of Top Earners\nby Marital Status') earnMarBelow50k = earnMar.loc[lambda x: x['Yearly Income'] == '<=50K'] plt.figure(figsize= (10,5)) graph = sns.countplot(data = earnMarBelow50k, x = 'Marital Status') graph.set_ylabel('Number of Low Earners\nby Marital Status') ```
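The per-sex shares quoted in question 4 were worked out by hand; `pd.crosstab` with `normalize='index'` computes the same shares directly. A minimal sketch on toy rows (the counts below are hypothetical, not the actual survey counts):

```python
import pandas as pd

# Toy version of the Sex / Yearly Income columns (illustrative only)
df = pd.DataFrame({
    'Sex': ['Male'] * 8 + ['Female'] * 8,
    'Yearly Income': ['>50K'] * 4 + ['<=50K'] * 4 + ['>50K'] * 1 + ['<=50K'] * 7,
})

# Row-normalized crosstab: share of each income bracket within each sex
shares = pd.crosstab(df['Sex'], df['Yearly Income'], normalize='index')
print(shares.loc['Male', '>50K'])    # → 0.5
print(shares.loc['Female', '>50K'])  # → 0.125
```

The same one-liner on the real dataframe would reproduce the ~44% and 12.5% figures without manual arithmetic.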
# Probabilistic Robotics Assignment ## References + [Shokai Kakuritsu Robotics (Probabilistic Robotics in Detail)](https://www.amazon.co.jp/%E8%A9%B3%E8%A7%A3-%E7%A2%BA%E7%8E%87%E3%83%AD%E3%83%9C%E3%83%86%E3%82%A3%E3%82%AF%E3%82%B9-Python%E3%81%AB%E3%82%88%E3%82%8B%E5%9F%BA%E7%A4%8E%E3%82%A2%E3%83%AB%E3%82%B4%E3%83%AA%E3%82%BA%E3%83%A0%E3%81%AE%E5%AE%9F%E8%A3%85-KS%E7%90%86%E5%B7%A5%E5%AD%A6%E5%B0%82%E9%96%80%E6%9B%B8-%E4%B8%8A%E7%94%B0/dp/4065170060/ref=sr_1_1?__mk_ja_JP=%E3%82%AB%E3%82%BF%E3%82%AB%E3%83%8A&dchild=1&keywords=%E8%A9%B3%E8%A7%A3+%E7%A2%BA%E7%8E%87%E3%83%AD%E3%83%9C%E3%83%86%E3%82%A3%E3%82%AF%E3%82%B9&qid=1610537879&sr=8-1) + [Probabilistic Robotics](https://www.amazon.co.jp/%E7%A2%BA%E7%8E%87%E3%83%AD%E3%83%9C%E3%83%86%E3%82%A3%E3%82%AF%E3%82%B9-%E3%83%97%E3%83%AC%E3%83%9F%E3%82%A2%E3%83%A0%E3%83%96%E3%83%83%E3%82%AF%E3%82%B9%E7%89%88-Sebastian-Thrun/dp/4839952981/ref=sr_1_2?__mk_ja_JP=%E3%82%AB%E3%82%BF%E3%82%AB%E3%83%8A&dchild=1&keywords=%E8%A9%B3%E8%A7%A3+%E7%A2%BA%E7%8E%87%E3%83%AD%E3%83%9C%E3%83%86%E3%82%A3%E3%82%AF%E3%82%B9&qid=1610537879&sr=8-2) + [Sample code for "Probabilistic Robotics in Detail"](https://github.com/ryuichiueda/LNPR_BOOK_CODES) + [Self-localization](https://github.com/KobayashiRui/ProbabilisticRobotics_Task) ## Problem setting + Implement an omnidirectional-movement version of the Markov decision process covered in class + The robot can move in the x and y directions + It does not rotate + The self-localization part reuses code from last year's students Variable and class names in the code basically follow the reference book "Probabilistic Robotics in Detail" ``` %matplotlib nbagg import matplotlib.pyplot as plt import matplotlib.patches as patches import matplotlib.animation as anm import math import numpy as np from scipy.stats import expon, norm, multivariate_normal from matplotlib.patches import Ellipse import itertools import collections from copy import copy ``` ## Creating the world goal puddle Landmark ``` class Goal: def __init__(self, x, y, radius=0.3, value=0.0): self.pos = np.array([x, y]).T self.radius = radius self.value = value def inside(self, pose): # check whether the pose is inside the goal region return self.radius > math.sqrt( (self.pos[0]-pose[0])**2 + (self.pos[1]-pose[1])**2 ) def draw(self, ax, elems): x, y = self.pos c = ax.scatter(x + 0.16, y + 0.5, s=50, marker=">", label="landmarks", color="red")
elems.append(c) elems += ax.plot([x, x], [y, y + 0.6], color="black") class Puddle: # puddle def __init__(self, lowerleft, upperright, depth): self.lowerleft = lowerleft self.upperright = upperright self.depth = depth def draw(self, ax, elems): w = self.upperright[0] - self.lowerleft[0] h = self.upperright[1] - self.lowerleft[1] r = patches.Rectangle(self.lowerleft, w, h, color="blue", alpha=self.depth) elems.append(ax.add_patch(r)) def inside(self, pose): return all([ self.lowerleft[i] < pose[i] < self.upperright[i] for i in [0, 1] ]) class PuddleWorld(): def __init__(self, time_span, time_interval, debug=False): self.objects = [] self.debug = debug self.time_span = time_span self.time_interval = time_interval self.puddles = [] self.robots = [] self.goals = [] def draw(self): fig = plt.figure(figsize=(4,4)) ax = fig.add_subplot(111) ax.set_aspect('equal') ax.set_xlim(-5,5) ax.set_ylim(-5,5) ax.set_xlabel("X",fontsize=10) ax.set_ylabel("Y",fontsize=10) elems = [] if self.debug: for i in range(int(self.time_span/self.time_interval)): self.one_step(i, elems, ax) else: self.ani = anm.FuncAnimation(fig, self.one_step, fargs=(elems, ax), frames=int(self.time_span/self.time_interval)+1, interval=int(self.time_interval*1000), repeat=False) plt.show() def append(self,obj): self.objects.append(obj) if isinstance(obj, Puddle): self.puddles.append(obj) if isinstance(obj, Robot): self.robots.append(obj) if isinstance(obj, Goal): self.goals.append(obj) def puddle_depth(self, pose): return sum([p.depth * p.inside(pose) for p in self.puddles]) def one_step(self, i, elems, ax): while elems: elems.pop().remove() time_str = "t=%.2f[s]" % (self.time_interval*i) elems.append(ax.text(-4.4, 4.5, time_str, fontsize=10)) for obj in self.objects: obj.draw(ax, elems) if hasattr(obj, "one_step"): obj.one_step(self.time_interval) for r in self.robots: r.agent.puddle_depth = self.puddle_depth(r.pose) for g in self.goals: ## goal check if g.inside(r.pose): r.agent.in_goal = True r.agent.final_value =
g.value class Landmark: def __init__(self, x, y): self.pos = np.array([x, y]).T self.id = None def draw(self, ax, elems): c = ax.scatter(self.pos[0], self.pos[1], s=100, marker="*", label="landmarks", color="orange") elems.append(c) elems.append(ax.text(self.pos[0], self.pos[1], "id:" + str(self.id), fontsize=10)) class Map: def __init__(self): # prepare an empty list of landmarks self.landmarks = [] def append_landmark(self, landmark): # add a landmark landmark.id = len(self.landmarks) # assign an ID to the added landmark self.landmarks.append(landmark) def draw(self, ax, elems): # drawing (call each Landmark's draw in turn) for lm in self.landmarks: lm.draw(ax, elems) ``` ## Creating the robot Noise only (stuck and kidnap events are not considered) ``` class Robot: ''' Handles a single robot; includes an agent and a sensor ''' def __init__(self, pose, agent=None, sensor=None, color="blue", noise_per_meter=5, noise_std=0.05): self.pose = pose self.r = 0.2 self.color = color self.agent = agent self.poses = [pose] # store the movement trajectory self.sensor = sensor self.noise_pdf = expon(scale=1.0/(1e-100 + noise_per_meter)) self.distance_until_noise = self.noise_pdf.rvs() self.pose_noise = norm(scale=noise_std) def noise(self, pose, v_x, v_y, time_interval): distance = np.hypot(v_x*time_interval, v_y*time_interval) self.distance_until_noise -= distance if self.distance_until_noise <= 0.0: self.distance_until_noise += self.noise_pdf.rvs() noise_value = self.pose_noise.rvs() pose[0] += self.pose_noise.rvs() #noise_value pose[1] += self.pose_noise.rvs() #noise_value return pose def draw(self, ax, elems): # get the initial pose or the pose after the state transition x,y = self.pose robot = patches.Circle(xy=(x,y), radius=self.r, color=self.color) elems.append(ax.add_patch(robot)) self.poses.append(np.array([x,y]).T) poses_x = [e[0] for e in self.poses] poses_y = [e[1] for e in self.poses] elems += ax.plot(poses_x, poses_y, linewidth=0.5, color="black") if self.sensor and len(self.poses) > 1: # observation is made at the pose before the state transition, hence poses[-2] (line segments are computed from the previous pose) self.sensor.draw(ax, elems, self.poses[-2]) if self.agent and hasattr(self.agent, "draw"): self.agent.draw(ax, elems)
@classmethod def state_transition(cls, v_x, v_y, time, pose): # x-axis velocity, y-axis velocity, travel time return pose + np.array([v_x*time, v_y*time]) def one_step(self, time_interval): if self.agent: obs = self.sensor.data(self.pose) if self.sensor else None v_x, v_y = self.agent.decision(obs) self.pose = self.state_transition(v_x, v_y, time_interval, self.pose) self.pose = self.noise(self.pose, v_x, v_y, time_interval) ``` ## Building the Camera Only sensor noise is considered (occlusion, phantom detections, etc. are not). ``` class Camera: ''' Manages observations; registered with a Robot as its sensor ''' def __init__(self, env_map, distance_range = (0.5, 4),pos_noise=0.1): self.map = env_map self.lastdata = [] self.distance_range = distance_range self.pos_noise = pos_noise def noise(self, relpos): noise_x = norm.rvs(loc=relpos[0], scale=self.pos_noise) noise_y = norm.rvs(loc=relpos[1], scale=self.pos_noise) return np.array([noise_x, noise_y]).T def visible(self, pos): if pos is None: return False distance = np.hypot(*pos) return self.distance_range[0] <= distance <= self.distance_range[1] def data(self, cam_pose): observed = [] for lm in self.map.landmarks: z = self.observation_function(cam_pose, lm.pos) if self.visible(z): z = self.noise(z) observed.append((z, lm.id)) self.lastdata = observed return observed @classmethod def observation_function(cls, cam_pose, obj_pos): diff = obj_pos - cam_pose return np.array(diff).T def draw(self, ax, elems, cam_pose): for lm in self.lastdata: x, y = cam_pose lx = lm[0][0] + x ly = lm[0][1] + y elems += ax.plot([x,lx],[y,ly], color="pink") ``` ## Agent + In the lecture's PuddleIgnorePolicy, the robot turns toward the goal and moves forward + With omnidirectional motion, the action is instead decided by the ratio of the x- and y-axis distances between the current position and the goal + The control values are rounded so that the number of action choices does not grow too large ``` class Agent: ''' Decides the robot's motion; registered with a Robot as its agent (operator) ''' def __init__(self, v_x, v_y): self.v_x = v_x self.v_y = v_y self.counter = 0 def decision(self, observation=None): self.counter += 1 return self.v_x, self.v_y class EstimationAgent(Agent): def __init__(self, time_interval, v_x, v_y, estimator): super().__init__(v_x, v_y) self.estimator = estimator self.time_interval =
time_interval self.prev_v_x = 0.0 self.prev_v_y = 0.0 def draw(self, ax, elems): self.estimator.draw(ax, elems) class PuddleIgnoreAgent(EstimationAgent): def __init__(self, time_interval, estimator, goal, puddle_coef=100): super().__init__(time_interval, 0.0, 0.0, estimator) self.puddle_coef = puddle_coef self.puddle_depth = 0.0 self.total_reward = 0.0 self.in_goal = False self.final_value = 0.0 self.goal = goal def reward_per_sec(self): return -1.0 - self.puddle_depth*self.puddle_coef @classmethod ### puddleignoreagent (everything below) def policy(cls, pose, goal): x, y = pose dx, dy = goal.pos[0] - x, goal.pos[1] - y if dx==0: ## avoid division by zero dx=0.01 if dy==0: dy=0.01 v_x, v_y = dx/(abs(dx)+abs(dy)), dy/(abs(dx)+abs(dy)) ## control values from the ratio of the x and y distances to the goal v_x, v_y = round(v_x,1), round(v_y,1) return v_x, v_y def decision(self, observation=None): # decide the action if self.in_goal: return 0.0, 0.0 self.estimator.motion_update(self.prev_v_x, self.prev_v_y, self.time_interval) ## update the estimate v_x, v_y= self.policy(self.estimator.pose, self.goal) ## choose the action from the estimate self.prev_v_x, self.prev_v_y = v_x, v_y self.estimator.observation_update(observation) ## observation update self.total_reward += self.time_interval*self.reward_per_sec() ## accumulate the reward return v_x, v_y def draw(self, ax, elems): super().draw(ax, elems) x, y= self.estimator.pose elems.append(ax.text(x+1.0, y-0.5, "reward/sec:" + str(self.reward_per_sec()), fontsize=8)) elems.append(ax.text(x+1.0, y-1.0, "eval: {:.1f}".format(self.total_reward+self.final_value), fontsize=8)) ``` ## Agent for the Policy Obtained by Value Iteration ``` class DpPolicyAgent(PuddleIgnoreAgent): ### dppolicyagent def __init__(self, time_interval, estimator, goal, puddle_coef=100, widths=np.array([0.2, 0.2]).T, \ lowerleft=np.array([-4, -4]).T, upperright=np.array([4, 4]).T): # widths and the later arguments come from DynamicProgramming super().__init__(time_interval, estimator, goal, puddle_coef) ### take the coordinate-related variables from DynamicProgramming ### self.pose_min = np.r_[lowerleft] self.pose_max = np.r_[upperright] self.widths = widths self.index_nums = ((self.pose_max -
self.pose_min)/self.widths).astype(int) self.policy_data = self.init_policy(self.index_nums) def init_policy(self, index_nums): tmp = np.zeros(np.r_[index_nums,2]) for line in open("policy.txt", "r"): d = line.split() tmp[int(d[0]), int(d[1])] = [float(d[2]), float(d[3])] return tmp def to_index(self, pose, pose_min, index_nums , widths): # convert the pose to a grid index index = np.floor((pose - pose_min)/widths).astype(int) # pose to index for i in [0,1]: # handle the edges (reuse the policy of the nearest inner cell) if index[i] < 0: index[i] = 0 elif index[i] >= index_nums[i]: index[i] = index_nums[i] - 1 return tuple(index) # a vector cannot be used as an array index, so convert to a tuple def policy(self, pose, goal=None): # just build the discrete-state index from the pose and look up the policy return self.policy_data[self.to_index(pose, self.pose_min, self.index_nums, self.widths)] ``` ## Kalman Filter ``` def sigma_ellipse(p, cov, n): eig_vals, eig_vec = np.linalg.eig(cov) ang = math.atan2(eig_vec[:,0][1], eig_vec[:,0][0])/math.pi*180 return Ellipse(p, width=2*n*math.sqrt(eig_vals[0]), height=2*n*math.sqrt(eig_vals[1]), angle=ang, fill=False, color="blue", alpha=0.5) class KalmanFilter: def __init__(self, envmap, init_pose, motion_noise_stds= {"nn":0.05, "oo":0.05}, pos_noise=0.1): self.belief = multivariate_normal(mean=np.array([0.0, 0.0]), cov=np.diag([1e-10, 1e-10])) self.pose = self.belief.mean self.motion_noise_stds = motion_noise_stds self.map = envmap self.pos_noise = pos_noise def matR(self, v_x, v_y,time): return np.diag([self.motion_noise_stds["nn"]**2*abs(v_x)/time, self.motion_noise_stds["oo"]**2*abs(v_y)/time]) def observation_update(self, observation): for d in observation: z = d[0] obs_id = d[1] estimated_z = Camera.observation_function(self.belief.mean, self.map.landmarks[obs_id].pos) H = np.array([[-1,0],[0,-1]]) K = self.belief.cov.dot(H.T).dot(np.linalg.inv(H.dot(self.belief.cov).dot(H.T) + np.diag([self.pos_noise, self.pos_noise]))) self.belief.mean += K.dot(z - estimated_z) self.belief.cov = (np.eye(2) - K.dot(H)).dot(self.belief.cov) self.pose =
self.belief.mean def motion_update(self, v_x, v_y, time): self.belief.cov = self.belief.cov + self.matR(v_x,v_y,time) self.belief.mean = Robot.state_transition(v_x, v_y, time, self.belief.mean) self.pose = self.belief.mean def draw(self, ax, elems): e = sigma_ellipse(self.belief.mean[0:2], self.belief.cov[0:2, 0:2], 3) elems.append(ax.add_patch(e)) ``` ## dynamic_programming ``` class DynamicProgramming: def __init__(self, widths, goal, puddles, time_interval, sampling_num, \ puddle_coef=100.0, lowerleft=np.array([-4, -4]).T, upperright=np.array([4, 4]).T): self.pose_min = np.r_[lowerleft] self.pose_max = np.r_[upperright] self.widths = widths self.goal = goal self.index_nums = ((self.pose_max - self.pose_min)/self.widths).astype(int) nx, ny = self.index_nums self.indexes = list(itertools.product(range(nx), range(ny))) self.value_function, self.final_state_flags = self.init_value_function() self.policy = self.init_policy() self.actions = list(set([tuple(self.policy[i]) for i in self.indexes])) self.state_transition_probs = self.init_state_transition_probs(time_interval, sampling_num) self.depths = self.depth_means(puddles, sampling_num) self.time_interval = time_interval self.puddle_coef = puddle_coef def value_iteration_sweep(self): max_delta = 0.0 for index in self.indexes: if not self.final_state_flags[index]: max_q = -1e100 max_a = None qs = [self.action_value(a, index) for a in self.actions] # action value of every action max_q = max(qs) # largest action value max_a = self.actions[np.argmax(qs)] # the action that gives the largest action value delta = abs(self.value_function[index] - max_q) # change in value max_delta = delta if delta > max_delta else max_delta # track the largest change in this sweep self.value_function[index] = max_q # update the value self.policy[index] = np.array(max_a).T # update the policy return max_delta def policy_evaluation_sweep(self): max_delta = 0.0 for index in self.indexes: if not self.final_state_flags[index]: q = self.action_value(tuple(self.policy[index]), index) delta = abs(self.value_function[index] - q) max_delta = delta if delta > max_delta else
max_delta self.value_function[index] = q return max_delta def action_value(self, action, index, out_penalty=True): value = 0.0 for delta, prob in self.state_transition_probs[(action)]: after, out_reward = self.out_correction(np.array(index).T + delta) after = tuple(after) reward = - self.time_interval * self.depths[(after[0], after[1])] * self.puddle_coef - self.time_interval + out_reward*out_penalty value += (self.value_function[after] + reward) * prob return value def out_correction(self, index): out_reward = 0.0 for i in range(2): if index[i] < 0: index[i] = 0 out_reward = -1e100 elif index[i] >= self.index_nums[i]: index[i] = self.index_nums[i]-1 out_reward = -1e100 return index, out_reward def depth_means(self, puddles, sampling_num): ### sample sampling_num**2 points evenly inside each cell ### dx = np.linspace(0, self.widths[0], sampling_num) dy = np.linspace(0, self.widths[1], sampling_num) samples = list(itertools.product(dx, dy)) tmp = np.zeros(self.index_nums[0:2]) # accumulates the total depth for xy in itertools.product(range(self.index_nums[0]), range(self.index_nums[1])): for s in samples: pose = self.pose_min + self.widths*np.array([xy[0], xy[1]]).T + np.array([s[0], s[1]]).T # a sampled coordinate inside the cell for p in puddles: tmp[xy] += p.depth*p.inside(pose) # add depth times whether the point is inside the puddle (1 or 0) tmp[xy] /= sampling_num**2 # convert the total depth to a mean return tmp def init_state_transition_probs(self, time_interval, sampling_num): ### sample sampling_num**2 points evenly inside each cell ### dx = np.linspace(0.001, self.widths[0]*0.999, sampling_num) # avoid the edges so samples do not spill into neighboring cells dy = np.linspace(0.001, self.widths[1]*0.999, sampling_num) samples = list(itertools.product(dx, dy)) ### move each sampled point under each action and record the index increment ### tmp = {} for a in self.actions: transitions = [] for s in samples: before = np.array([s[0], s[1]]).T + self.pose_min # pose before the transition before_index = np.array([0, 0]).T # index before the transition after =Robot.state_transition(a[0], a[1], time_interval, before) # pose after the transition after_index = np.floor((after - self.pose_min)/self.widths).astype(int) # index after the transition
transitions.append(after_index - before_index) # append the index difference unique, count = np.unique(transitions, axis=0, return_counts=True) # tally how many transitions go to each cell probs = [c/sampling_num**2 for c in count] # divide by the sample count to get probabilities tmp[a] = list(zip(unique, probs)) return tmp def init_policy(self): tmp = np.zeros(np.r_[self.index_nums,2]) # the control output is 2-D, so make the array 4-D for index in self.indexes: center = self.pose_min + self.widths*(np.array(index).T + 0.5) # coordinates of the cell center tmp[index] = PuddleIgnoreAgent.policy(center, self.goal) return tmp def init_value_function(self): v = np.empty(self.index_nums) # array with one element per discrete state f = np.zeros(self.index_nums) for index in self.indexes: f[index] = self.final_state(np.array(index).T) v[index] = self.goal.value if f[index] else -100.0 return v, f def final_state(self, index): x_min, y_min= self.pose_min + self.widths*index # lower-left corner in the xy plane x_max, y_max= self.pose_min + self.widths*(index + 1) # upper-right corner (the lower-left of the diagonally adjacent state) corners = [[x_min, y_min], [x_min, y_max], [x_max, y_min], [x_max, y_max] ] # the four corner coordinates return all([self.goal.inside(np.array(c).T) for c in corners ]) import seaborn as sns ###dp2exec puddles = [Puddle((-2, 0), (0, 2), 0.1), Puddle((-0.5, -2), (2.5, 1), 0.1)] dp = DynamicProgramming(np.array([0.2, 0.2]).T, Goal(-3,-3), puddles, 0.1, 10) counter = 0 # number of sweeps ``` ## Running Value Iteration ``` delta = 1e100 while delta > 0.01: delta = dp.value_iteration_sweep() counter += 1 print(counter, delta) with open("policy.txt", "w") as f: for index in dp.indexes: p = dp.policy[index] f.write("{} {} {} {}\n".format(index[0], index[1], p[0], p[1])) with open("value.txt", "w") as f: for index in dp.indexes: p = dp.value_function[index] f.write("{} {} {} {}\n".format(index[0], index[1], 0, p)) ``` ## Visualizing the State Values and the Policy ``` v = dp.value_function[:, :] fig = plt.figure(figsize=(4,4)) ax = fig.add_subplot(111) ax.set_aspect('equal') sns.heatmap(np.rot90(v), square=False) plt.show() p = np.zeros(dp.index_nums) for i in dp.indexes: p[i] = sum(dp.policy[i]) fig = plt.figure(figsize=(4,4)) ax =
fig.add_subplot(111) ax.set_aspect('equal') sns.heatmap(np.rot90(p[:, :]), square=False) plt.show() ``` ## Demo ``` def trial(): ###puddle_world4_trial time_interval = 0.1 world = PuddleWorld(30, time_interval, debug=False) ## generate the map and add the landmarks ## m = Map() for ln in [(-4,2), (2,-3), (4,4), (-4,-4)]: m.append_landmark(Landmark(*ln)) world.append(m) ## add the goal ## goal = Goal(-3,-3) # keep the goal in a variable world.append(goal) ## add the puddles ## world.append(Puddle((-2, 0), (0, 2), 0.1)) world.append(Puddle((-0.5, -2), (2.5, 1), 0.1)) # ## create the robots ## init_poses = [] for p in [[-3, 3], [0.5, 1.5], [3, 3], [2, -1]]: init_pose = np.array(p).T kf = KalmanFilter(m, init_pose) a = DpPolicyAgent(time_interval, kf, goal) r = Robot(init_pose, sensor=Camera(m), agent=a, color="red") world.append(r) world.draw() trial() ```
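The pose-to-grid discretization that `DpPolicyAgent.to_index` performs can be sketched on its own. This is a standalone re-implementation (not the class method itself) using the same default grid parameters as above; poses outside the map are clamped to the border cells so they reuse the nearest cell's policy.

```python
import numpy as np

# Grid parameters matching the DynamicProgramming defaults above
pose_min = np.array([-4.0, -4.0])
pose_max = np.array([4.0, 4.0])
widths = np.array([0.2, 0.2])
index_nums = ((pose_max - pose_min) / widths).astype(int)  # a 40x40 grid

def to_index(pose):
    """Map a continuous pose onto a grid cell, clamping at the borders."""
    index = np.floor((pose - pose_min) / widths).astype(int)
    # Out-of-range poses reuse the policy of the nearest border cell
    return tuple(int(v) for v in np.clip(index, 0, index_nums - 1))

print(to_index(np.array([0.0, 0.0])))   # (20, 20): the center of the map
print(to_index(np.array([-9.0, 9.0])))  # (0, 39): clamped to a corner cell
```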
<a href="https://colab.research.google.com/github/wesleybeckner/technology_fundamentals/blob/main/C4%20Machine%20Learning%20II/SOLUTIONS/SOLUTION_Tech_Fun_C4_S2_Computer_Vision_Part_2_(Defect_Detection_Case_Study).ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> # Technology Fundamentals Course 4, Session 2: Computer Vision Part 2 (Defect Detection Case Study) **Instructor**: Wesley Beckner **Contact**: wesleybeckner@gmail.com **Teaching Assitants**: Varsha Bang, Harsha Vardhan **Contact**: vbang@uw.edu, harshav@uw.edu <br> --- <br> In this session we will continue with our exploration of CNNs. In the previous session we discussed three flagship layers for the CNN: convolution ReLU and maximum pooling. Here we'll discuss the sliding window, how to build your custom CNN, and data augmentation for images. <br> _images in this notebook borrowed from [Ryan Holbrook](https://mathformachines.com/)_ --- <br> <a name='top'></a> # Contents * 4.0 [Preparing Environment and Importing Data](#x.0) * 4.0.1 [Enabling and Testing the GPU](#x.0.1) * 4.0.2 [Observe TensorFlow on GPU vs CPU](#x.0.2) * 4.0.3 [Import Packages](#x.0.3) * 4.0.4 [Load Dataset](#x.0.4) * 4.0.4.1 [Loading Data with ImageDataGenerator](#x.0.4.1) * 4.0.4.2 [Loading Data with image_dataset_from_directory](#x.0.4.2) * 4.1 [Sliding Window](#x.1) * 4.1.1 [Stride](#x.1.1) * 4.1.2 [Padding](#x.1.2) * 4.1.3 [Exercise: Exploring Sliding Windows](#x.1.3) * 4.2 [Custom CNN](#x.2) * 4.2.1 [Evaluate Model](#x.2.1) * 4.3 [Data Augmentation](#x.3) * 4.3.1 [Evaluate Model](#x.3.1) * 4.3.2 [Exercise: Image Preprocessing Layers](#x.3.2) * 4.4 [Transfer Learning](#x.4) <br> --- <a name='x.0'></a> ## 4.0 Preparing Environment and Importing Data [back to top](#top) <a name='x.0.1'></a> ### 4.0.1 Enabling and testing the GPU [back to top](#top) First, you'll need to enable GPUs for the notebook: - Navigate to Edit→Notebook Settings - select GPU from the Hardware 
Accelerator drop-down Next, we'll confirm that we can connect to the GPU with tensorflow: ``` %tensorflow_version 2.x import tensorflow as tf device_name = tf.test.gpu_device_name() if device_name != '/device:GPU:0': raise SystemError('GPU device not found') print('Found GPU at: {}'.format(device_name)) ``` <a name='x.0.2'></a> ### 4.0.2 Observe TensorFlow speedup on GPU relative to CPU [back to top](#top) This example constructs a typical convolutional neural network layer over a random image and manually places the resulting ops on either the CPU or the GPU to compare execution speed. ``` %tensorflow_version 2.x import tensorflow as tf import timeit device_name = tf.test.gpu_device_name() if device_name != '/device:GPU:0': print( '\n\nThis error most likely means that this notebook is not ' 'configured to use a GPU. Change this in Notebook Settings via the ' 'command palette (cmd/ctrl-shift-P) or the Edit menu.\n\n') raise SystemError('GPU device not found') def cpu(): with tf.device('/cpu:0'): random_image_cpu = tf.random.normal((100, 100, 100, 3)) net_cpu = tf.keras.layers.Conv2D(32, 7)(random_image_cpu) return tf.math.reduce_sum(net_cpu) def gpu(): with tf.device('/device:GPU:0'): random_image_gpu = tf.random.normal((100, 100, 100, 3)) net_gpu = tf.keras.layers.Conv2D(32, 7)(random_image_gpu) return tf.math.reduce_sum(net_gpu) # We run each op once to warm up; see: https://stackoverflow.com/a/45067900 cpu() gpu() # Run the op several times. print('Time (s) to convolve 32x7x7x3 filter over random 100x100x100x3 images ' '(batch x height x width x channel). 
Sum of ten runs.') print('CPU (s):') cpu_time = timeit.timeit('cpu()', number=10, setup="from __main__ import cpu") print(cpu_time) print('GPU (s):') gpu_time = timeit.timeit('gpu()', number=10, setup="from __main__ import gpu") print(gpu_time) print('GPU speedup over CPU: {}x'.format(int(cpu_time/gpu_time))) ``` <a name='x.0.3'></a> ### 4.0.3 Import Packages [back to top](#top) ``` import os import pandas as pd import numpy as np import seaborn as sns import matplotlib.pyplot as plt import tensorflow as tf from tensorflow.keras.preprocessing import image_dataset_from_directory #importing required libraries from tensorflow.keras.preprocessing.image import ImageDataGenerator, load_img, img_to_array from tensorflow.keras.models import Sequential from tensorflow.keras.layers import Activation, Dropout, Flatten, Dense, Conv2D, MaxPooling2D, InputLayer from tensorflow.keras.callbacks import EarlyStopping from sklearn.metrics import classification_report,confusion_matrix ``` <a name='x.0.4'></a> ### 4.0.4 Load Dataset [back to top](#top) We will actually take a beat here today. When we started building our ML frameworks, we simply wanted our data in a numpy array to feed it into our pipeline. At some point, especially when working with images, the data becomes too large to fit into memory. For this reason we need an alternative way to import our data. With the merger of keras/tf two popular frameworks became available, `ImageDataGenerator` and `image_dataset_from_directory` both under `tf.keras.preprocessing.image`. `image_dataset_from_directory` can sometimes be faster (tf origin) but `ImageDataGenerator` is a lot simpler to use and has on-the-fly data augmentation capability (keras). 
For a full comparison of methods visit [this link](https://towardsdatascience.com/what-is-the-best-input-pipeline-to-train-image-classification-models-with-tf-keras-eb3fe26d3cc5) ``` # Sync your google drive folder from google.colab import drive drive.mount("/content/drive") ``` <a name='x.0.4.1'></a> #### 4.0.4.1 Loading Data with `ImageDataGenerator` [back to top](#top) ``` # full dataset can be attained from kaggle if you are interested # https://www.kaggle.com/ravirajsinh45/real-life-industrial-dataset-of-casting-product?select=casting_data path_to_casting_data = '/content/drive/MyDrive/courses/tech_fundamentals/TECH_FUNDAMENTALS/data/casting_data_class_practice' image_shape = (300,300,1) batch_size = 32 technocast_train_path = path_to_casting_data + '/train/' technocast_test_path = path_to_casting_data + '/test/' image_gen = ImageDataGenerator(rescale=1/255) # normalize pixels to 0-1 #we're using keras inbuilt function to ImageDataGenerator so we # dont need to label all images into 0 and 1 print("loading training set...") train_set = image_gen.flow_from_directory(technocast_train_path, target_size=image_shape[:2], color_mode="grayscale", batch_size=batch_size, class_mode='binary', shuffle=True) print("loading testing set...") test_set = image_gen.flow_from_directory(technocast_test_path, target_size=image_shape[:2], color_mode="grayscale", batch_size=batch_size, class_mode='binary', shuffle=False) ``` <a name='x.0.4.2'></a> #### 4.0.4.2 loading data with `image_dataset_from_directory` [back to top](#top) This method should be approx 2x faster than `ImageDataGenerator` ``` from tensorflow.keras.preprocessing import image_dataset_from_directory from tensorflow.data.experimental import AUTOTUNE path_to_casting_data = '/content/drive/MyDrive/courses/tech_fundamentals/TECH_FUNDAMENTALS/data/casting_data_class_practice' technocast_train_path = path_to_casting_data + '/train/' technocast_test_path = path_to_casting_data + '/test/' # Load training and validation sets 
image_shape = (300,300,1) batch_size = 32 ds_train_ = image_dataset_from_directory( technocast_train_path, labels='inferred', label_mode='binary', color_mode="grayscale", image_size=image_shape[:2], batch_size=batch_size, shuffle=True, ) ds_valid_ = image_dataset_from_directory( technocast_test_path, labels='inferred', label_mode='binary', color_mode="grayscale", image_size=image_shape[:2], batch_size=batch_size, shuffle=False, ) train_set = ds_train_.prefetch(buffer_size=AUTOTUNE) test_set = ds_valid_.prefetch(buffer_size=AUTOTUNE) # view some images def_path = '/def_front/cast_def_0_1001.jpeg' ok_path = '/ok_front/cast_ok_0_1.jpeg' image_path = technocast_train_path + ok_path image = tf.io.read_file(image_path) image = tf.io.decode_jpeg(image) plt.figure(figsize=(6, 6)) plt.imshow(tf.squeeze(image), cmap='gray') plt.axis('off') plt.show(); ``` <a name='x.1'></a> ## 4.1 Sliding Window [back to top](#top) The kernels we just reviewed need to be swept or _slid_ along the preceding layer. We call this a **_sliding window_**, the window being the kernel. <p align=center> <img src="https://i.imgur.com/LueNK6b.gif" width=400></img> What do you notice about the gif? One perhaps obvious observation is that you can't scoot all the way up to the border of the input layer; this is because the kernel defines operations _around_ the centered pixel, and so you bang up against the margin of the input array. We can change the behavior at the boundary with a **_padding_** hyperparameter. A second observation is that the distance we move the kernel along in each step could be variable; we call this the **_stride_**. We will explore the effects of each of these.
``` from tensorflow import keras from tensorflow.keras import layers model = keras.Sequential([ layers.Conv2D(filters=64, kernel_size=3, strides=1, padding='same', activation='relu'), layers.MaxPool2D(pool_size=2, strides=1, padding='same') # More layers follow ]) ``` <a name='x.1.1'></a> ### 4.1.1 Stride [back to top](#top) Stride defines the step size we take with each kernel as it passes along the input array. The stride needs to be defined in both the horizontal and vertical dimensions. This animation shows a 2x2 stride. <p align=center> <img src="https://i.imgur.com/Tlptsvt.gif" width=400></img> The stride will often be 1 for CNNs, where we don't want to lose any important information. Maximum pooling layers will often have strides greater than 1, to better summarize/accentuate the relevant features/activations. If the stride is the same in both the horizontal and vertical directions, it can be set with a single number like `strides=2` within keras. <a name='x.1.2'></a> ### 4.1.2 Padding [back to top](#top) Padding attempts to resolve our issue at the border: our kernel requires information surrounding the centered pixel, and at the border of the input array we don't have that information. What to do? We have a couple popular options within the keras framework. We can set `padding='valid'` and only slide the kernel to the edge of the input array. This has the drawback of feature maps shrinking in size as we pass through the NN. Another option is to set `padding='same'`, which pads the input array with 0's, just enough of them to keep the feature map the same size as the input array. This is shown in the gif below: <p align=center> <img src="https://i.imgur.com/RvGM2xb.gif" width=400></img> The downside of setting the padding to same will be that features at the edges of the image will be diluted.
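The size arithmetic behind these two padding modes can be sketched without TensorFlow. This is a hedged sketch of the standard one-dimensional output-size formulas (not keras internals): `'valid'` slides the kernel only where it fully fits, while `'same'` pads so the output is `ceil(n / stride)`.

```python
def conv_out_size(n, kernel_size, stride=1, padding='valid'):
    """Feature-map size along one dimension for a convolution layer.

    'valid': slide only where the kernel fully fits in the input.
    'same':  zero-pad so the output size is ceil(n / stride).
    """
    if padding == 'valid':
        return (n - kernel_size) // stride + 1
    return -(-n // stride)  # ceil division

print(conv_out_size(300, 3, padding='valid'))  # 298: shrinks by kernel_size - 1
print(conv_out_size(300, 3, padding='same'))   # 300: input size preserved
```

With our 300x300 casting images, each `'valid'` 3x3 convolution would trim a one-pixel border off every side, which is why deep stacks of such layers shrink their feature maps noticeably.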
<a name='x.1.3'></a> ### 4.1.3 Exercise: Exploring Sliding Windows [back to top](#top) ``` from skimage import draw, transform from itertools import product # helper functions borrowed from Ryan Holbrook # https://mathformachines.com/ def circle(size, val=None, r_shrink=0): circle = np.zeros([size[0]+1, size[1]+1]) rr, cc = draw.circle_perimeter( size[0]//2, size[1]//2, radius=size[0]//2 - r_shrink, shape=[size[0]+1, size[1]+1], ) if val is None: circle[rr, cc] = np.random.uniform(size=circle.shape)[rr, cc] else: circle[rr, cc] = val circle = transform.resize(circle, size, order=0) return circle def show_kernel(kernel, label=True, digits=None, text_size=28): # Format kernel kernel = np.array(kernel) if digits is not None: kernel = kernel.round(digits) # Plot kernel cmap = plt.get_cmap('Blues_r') plt.imshow(kernel, cmap=cmap) rows, cols = kernel.shape thresh = (kernel.max()+kernel.min())/2 # Optionally, add value labels if label: for i, j in product(range(rows), range(cols)): val = kernel[i, j] color = cmap(0) if val > thresh else cmap(255) plt.text(j, i, val, color=color, size=text_size, horizontalalignment='center', verticalalignment='center') plt.xticks([]) plt.yticks([]) def show_extraction(image, kernel, conv_stride=1, conv_padding='valid', activation='relu', pool_size=2, pool_stride=2, pool_padding='same', figsize=(10, 10), subplot_shape=(2, 2), ops=['Input', 'Filter', 'Detect', 'Condense'], gamma=1.0): # Create Layers model = tf.keras.Sequential([ tf.keras.layers.Conv2D( filters=1, kernel_size=kernel.shape, strides=conv_stride, padding=conv_padding, use_bias=False, input_shape=image.shape, ), tf.keras.layers.Activation(activation), tf.keras.layers.MaxPool2D( pool_size=pool_size, strides=pool_stride, padding=pool_padding, ), ]) layer_filter, layer_detect, layer_condense = model.layers kernel = tf.reshape(kernel, [*kernel.shape, 1, 1]) layer_filter.set_weights([kernel]) # Format for TF image = tf.expand_dims(image, axis=0) image = 
tf.image.convert_image_dtype(image, dtype=tf.float32) # Extract Feature image_filter = layer_filter(image) image_detect = layer_detect(image_filter) image_condense = layer_condense(image_detect) images = {} if 'Input' in ops: images.update({'Input': (image, 1.0)}) if 'Filter' in ops: images.update({'Filter': (image_filter, 1.0)}) if 'Detect' in ops: images.update({'Detect': (image_detect, gamma)}) if 'Condense' in ops: images.update({'Condense': (image_condense, gamma)}) # Plot plt.figure(figsize=figsize) for i, title in enumerate(ops): image, gamma = images[title] plt.subplot(*subplot_shape, i+1) plt.imshow(tf.image.adjust_gamma(tf.squeeze(image), gamma)) plt.axis('off') plt.title(title) ``` Create an image and kernel: ``` import tensorflow as tf import matplotlib.pyplot as plt plt.rc('figure', autolayout=True) plt.rc('axes', labelweight='bold', labelsize='large', titleweight='bold', titlesize=18, titlepad=10) plt.rc('image', cmap='magma') image = circle([64, 64], val=1.0, r_shrink=3) image = tf.reshape(image, [*image.shape, 1]) # Bottom Sobel kernel kernel = tf.constant( [[-1, -2, -1], [0, 0, 0], [1, 2, 1]], ) show_kernel(kernel) ``` What do we think this kernel is meant to detect? We will apply our kernel with a 1x1 stride and our max pooling with a 2x2 stride and pool size of 2. ``` show_extraction( image, kernel, # Window parameters conv_stride=1, pool_size=2, pool_stride=2, subplot_shape=(1, 4), figsize=(14, 6), ) ``` Works OK! What about a higher conv stride? ``` show_extraction( image, kernel, # Window parameters conv_stride=2, pool_size=3, pool_stride=4, subplot_shape=(1, 4), figsize=(14, 6), ) ``` Looks like we lost a bit of information! Sometimes published models will use a larger kernel and stride in the initial layer to produce large-scale features early on in the network without losing too much information (ResNet50 uses 7x7 kernels with a stride of 2). For now, without having much experience, it's safe to set conv strides to 1.
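How much the two strides jointly condense the image can be sketched with the same `'same'`-padding size rule, `ceil(size / stride)`, applied stage by stage (a hedged sketch of the size arithmetic, not keras internals):

```python
def staged_sizes(n, conv_stride, pool_stride):
    """Spatial size after a 'same'-padded convolution and then a
    'same'-padded pooling layer; each stage computes ceil(size / stride)."""
    after_conv = -(-n // conv_stride)           # ceil division
    after_pool = -(-after_conv // pool_stride)  # ceil division
    return after_conv, after_pool

# conv_stride * pool_stride acts as the total condensation factor
print(staged_sizes(64, 1, 2))  # (64, 32): condensation 2, as in the first call
print(staged_sizes(64, 2, 4))  # (32, 8): condensation 8, as in the second call
```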
Take a moment here with the given kernel and explore different settings for applying both the kernel and the max_pool ``` conv_stride=YOUR_VALUE, # condenses pixels pool_size=YOUR_VALUE, pool_stride=YOUR_VALUE, # condenses pixels ``` Given a total condensation of 8 (I'm taking condensation to mean `conv_stride` x `pool_stride`), what do you think is the best combination of values for `conv_stride, pool_size, and pool_stride`? <a name='x.2'></a> ## 4.2 Custom CNN [back to top](#top) As we move through the network, small-scale features (lines, edges, etc.) turn into large-scale features (shapes, eyes, ears, etc). We call these blocks of convolution, ReLU, and max pool **_convolutional blocks_**, and they are the low-level modular framework we work with. By this means, the CNN is able to design its own features, ones suited for the classification or regression task at hand. We will design a custom CNN for the Casting Defect Detection Dataset. In the following I'm going to double the number of filters after the first block. This is a common pattern, as the max pooling layers force us in the opposite direction.
``` #Creating model model = Sequential() model.add(InputLayer(input_shape=(image_shape))) model.add(Conv2D(filters=8, kernel_size=(3,3), activation='relu',)) model.add(MaxPooling2D(pool_size=(2, 2))) model.add(Conv2D(filters=16, kernel_size=(3,3), activation='relu',)) model.add(MaxPooling2D(pool_size=(2, 2))) model.add(Conv2D(filters=16, kernel_size=(3,3), activation='relu',)) model.add(MaxPooling2D(pool_size=(2, 2))) model.add(Flatten()) model.add(Dense(224)) model.add(Activation('relu')) # Last layer model.add(Dense(1)) model.add(Activation('sigmoid')) model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['binary_accuracy']) early_stop = EarlyStopping(monitor='val_loss', patience=5, restore_best_weights=True,) # with CPU + ImageDataGenerator runs for about 40 minutes (5 epochs) # with GPU + image_dataset_from_directory runs for about 4 minutes (16 epochs) with tf.device('/device:GPU:0'): results = model.fit(train_set, epochs=20, validation_data=test_set, callbacks=[early_stop]) # model.save('inspection_of_casting_products.h5') ``` <a name='x.2.1'></a> ### 4.2.1 Evaluate Model [back to top](#top) ``` # model.load_weights('inspection_of_casting_products.h5') losses = pd.DataFrame(results.history) # losses.to_csv('history_simple_model.csv', index=False) fig, ax = plt.subplots(1, 2, figsize=(10,5)) losses[['loss','val_loss']].plot(ax=ax[0]) losses[['binary_accuracy','val_binary_accuracy']].plot(ax=ax[1]) # predict test set pred_probability = model.predict(test_set) # convert to bool predictions = pred_probability > 0.5 # precision / recall / f1-score # test_set.classes to get images from ImageDataGenerator # for image_dataset_from_directory we have to do a little gymnastics # to get the labels labels = np.array([]) for x, y in ds_valid_: labels = np.concatenate([labels, tf.squeeze(y.numpy()).numpy()]) print(classification_report(labels,predictions)) plt.figure(figsize=(10,6)) sns.heatmap(confusion_matrix(labels,predictions),annot=True) ``` <a 
name='x.3'></a> ## 4.3 Data Augmentation [back to top](#top) Alright, alright, alright. We've done pretty well making our CNN model. But let's see if we can make it even better. There's a last trick we'll cover here in regard to image classifiers. We're going to perturb the input images in such a way as to create a pseudo-larger dataset. With any machine learning model, the more relevant training data we give the model, the better. The key here is _relevant_ training data. We can easily do this with images so long as we do not change the class of the image. For example, in the small plot below, we are changing contrast, hue, rotation, and doing other things to the image of a car; and this is okay because it does not change the classification from a car to, say, a truck. <p align=center> <img src="https://i.imgur.com/UaOm0ms.png" width=400></img> Typically when we do data augmentation for images, we do it _online_, i.e. during training. Recall that we train in batches (or minibatches) with CNNs. An example of a minibatch, then, might be the small multiples plot below. <p align=center> <img src="https://i.imgur.com/MFviYoE.png" width=400></img> By varying the images in this way, the model always sees slightly new data and becomes more robust. Remember that the caveat is that we can't muddle the relevant classification of the image. Sometimes the best way to see if data augmentation will be helpful is to just try it and see!
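The core idea of online, label-preserving augmentation can be sketched in plain numpy, independent of the keras preprocessing layers. This is a toy sketch (not the actual keras implementation): each image in a minibatch is randomly flipped left-to-right, changing the pixels the model sees while leaving every label untouched.

```python
import numpy as np

# Toy sketch of online augmentation: flip each image in a minibatch
# left-to-right with probability 0.5. The class labels never change,
# so this effectively enlarges the training set for free.
rng = np.random.default_rng(42)

def augment_batch(images):
    flip = rng.random(len(images)) < 0.5  # one coin toss per image
    return np.where(flip[:, None, None], images[:, :, ::-1], images)

batch = np.arange(2 * 3 * 3).reshape(2, 3, 3)  # two fake 3x3 "images"
augmented = augment_batch(batch)               # same shape, possibly flipped
```

Because the coin tosses are redrawn every call, repeated epochs over the same underlying images present slightly different pixels each time, which is the "online" part of the trick.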
``` from tensorflow.keras.layers.experimental import preprocessing #Creating model model = Sequential() model.add(preprocessing.RandomFlip('horizontal')), # flip left-to-right model.add(preprocessing.RandomFlip('vertical')), # flip upside-down model.add(preprocessing.RandomContrast(0.5)), # contrast change by up to 50% model.add(Conv2D(filters=8, kernel_size=(3,3),input_shape=image_shape, activation='relu',)) model.add(MaxPooling2D(pool_size=(2, 2))) model.add(Conv2D(filters=16, kernel_size=(3,3), activation='relu',)) model.add(MaxPooling2D(pool_size=(2, 2))) model.add(Conv2D(filters=16, kernel_size=(3,3), activation='relu',)) model.add(MaxPooling2D(pool_size=(2, 2))) model.add(Flatten()) model.add(Dense(224)) model.add(Activation('relu')) # Last layer model.add(Dense(1)) model.add(Activation('sigmoid')) model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['binary_accuracy']) early_stop = EarlyStopping(monitor='val_loss', patience=5, restore_best_weights=True,) results = model.fit(train_set, epochs=30, validation_data=test_set, callbacks=[early_stop]) ``` <a name='x.3.1'></a> ### 4.3.1 Evaluate Model [back to top](#top) ``` losses = pd.DataFrame(results.history) # losses.to_csv('history_augment_model.csv', index=False) fig, ax = plt.subplots(1, 2, figsize=(10,5)) losses[['loss','val_loss']].plot(ax=ax[0]) losses[['binary_accuracy','val_binary_accuracy']].plot(ax=ax[1]) # predict test set pred_probability = model.predict(test_set) # convert to bool predictions = pred_probability > 0.5 # precision / recall / f1-score # test_set.classes to get images from ImageDataGenerator # for image_dataset_from_directory we have to do a little gymnastics # to get the labels labels = np.array([]) for x, y in ds_valid_: labels = np.concatenate([labels, tf.squeeze(y.numpy()).numpy()]) print(classification_report(labels,predictions)) plt.figure(figsize=(10,6)) sns.heatmap(confusion_matrix(labels,predictions),annot=True) ``` <a name='x.3.2'></a> ### 4.3.2 Exercise: 
Image Preprocessing Layers [back to top](#top) These layers apply random augmentation transforms to a batch of images. They are only active during training. You can visit the documentation [here](https://keras.io/api/layers/preprocessing_layers/image_preprocessing/) * `RandomCrop` layer * `RandomFlip` layer * `RandomTranslation` layer * `RandomRotation` layer * `RandomZoom` layer * `RandomHeight` layer * `RandomWidth` layer Use any combination of random augmentation transforms and retrain your model. Can you get higher validation performance? You may need to increase your epochs. ``` # code cell for exercise 4.3.2 from tensorflow.keras.layers.experimental import preprocessing #Creating model model = Sequential() model.add(preprocessing.RandomFlip('horizontal')), # flip left-to-right model.add(preprocessing.RandomFlip('vertical')), # flip upside-down model.add(preprocessing.RandomContrast(0.5)), # contrast change by up to 50% model.add(preprocessing.RandomRotation((-1,1))), # random rotation by up to a full turn in either direction model.add(Conv2D(filters=8, kernel_size=(3,3),input_shape=image_shape, activation='relu',)) model.add(MaxPooling2D(pool_size=(2, 2))) model.add(Conv2D(filters=16, kernel_size=(3,3), activation='relu',)) model.add(MaxPooling2D(pool_size=(2, 2))) model.add(Conv2D(filters=16, kernel_size=(3,3), activation='relu',)) model.add(MaxPooling2D(pool_size=(2, 2))) model.add(Flatten()) model.add(Dense(224)) model.add(Activation('relu')) # Last layer model.add(Dense(1)) model.add(Activation('sigmoid')) model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['binary_accuracy']) early_stop = EarlyStopping(monitor='val_loss', patience=5, restore_best_weights=True,) results = model.fit(train_set, epochs=200, validation_data=test_set, callbacks=[early_stop]) # predict test set pred_probability = model.predict(test_set) # convert to bool predictions = pred_probability > 0.5 # precision / recall / f1-score # test_set.classes to get images from ImageDataGenerator # for
image_dataset_from_directory we have to do a little gymnastics # to get the labels labels = np.array([]) for x, y in ds_valid_: labels = np.concatenate([labels, tf.squeeze(y.numpy()).numpy()]) print(classification_report(labels,predictions)) plt.figure(figsize=(10,6)) sns.heatmap(confusion_matrix(labels,predictions),annot=True) ``` <a name='x.4'></a> ## 4.4 Transfer Learning [back to top](#top) Transfer learning with [EfficientNet](https://keras.io/examples/vision/image_classification_efficientnet_fine_tuning/) ``` from tensorflow.keras.preprocessing import image_dataset_from_directory from tensorflow.data.experimental import AUTOTUNE path_to_casting_data = '/content/drive/MyDrive/courses/TECH_FUNDAMENTALS/data/casting_data_class_practice' technocast_train_path = path_to_casting_data + '/train/' technocast_test_path = path_to_casting_data + '/test/' # Load training and validation sets # note: EfficientNet's ImageNet weights expect 3-channel input, so we # load as RGB rather than grayscale image_shape = (300,300,3) batch_size = 32 ds_train_ = image_dataset_from_directory( technocast_train_path, labels='inferred', label_mode='binary', color_mode="rgb", image_size=image_shape[:2], batch_size=batch_size, shuffle=True, ) ds_valid_ = image_dataset_from_directory( technocast_test_path, labels='inferred', label_mode='binary', color_mode="rgb", image_size=image_shape[:2], batch_size=batch_size, shuffle=False, ) train_set = ds_train_.prefetch(buffer_size=AUTOTUNE) test_set = ds_valid_.prefetch(buffer_size=AUTOTUNE) def build_model(image_shape): input = tf.keras.layers.Input(shape=(image_shape)) # include_top = False will take off the last dense layer used for classification model = tf.keras.applications.EfficientNetB3(include_top=False, input_tensor=input, weights="imagenet") # Freeze the pretrained weights model.trainable = False # now we have to rebuild the top x = tf.keras.layers.GlobalAveragePooling2D(name="avg_pool")(model.output) x = tf.keras.layers.BatchNormalization()(x) top_dropout_rate = 0.2 x = tf.keras.layers.Dropout(top_dropout_rate,
name="top_dropout")(x) # use 1 node with sigmoid for binary classification; # use num_classes nodes with softmax for multiclass output = tf.keras.layers.Dense(1, activation="sigmoid", name="pred")(x) # Compile model = tf.keras.Model(input, output, name="EfficientNet") model.compile(optimizer='adam', loss="binary_crossentropy", metrics=["binary_accuracy"]) return model model = build_model(image_shape) with tf.device('/device:GPU:0'): results = model.fit(train_set, epochs=20, validation_data=test_set, callbacks=[early_stop]) ```
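A side note on output activations that is easy to trip over with single-node heads: softmax normalizes across the nodes of a layer, so applied to a single logit it always returns 1.0, while sigmoid maps the logit to a usable probability. A quick standalone numpy check (not tied to the model above):

```python
import numpy as np

def softmax(z):
    # numerically stable softmax over the last axis
    e = np.exp(z - np.max(z, axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

logits = np.array([[-3.0], [0.0], [2.5]])  # one logit per example
print(softmax(logits).ravel())  # constant 1.0 -- useless for binary classification
print(sigmoid(logits).ravel())  # varies with the logit, as a probability should
```

This is why a one-unit binary head is paired with `sigmoid` and `binary_crossentropy`, whereas softmax belongs on a multi-node multiclass head.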
github_jupyter
- Use the Isolation Forest (IF) algorithm to identify outlier behavior in the debt dataset - Hypothesis: outlier behavior may indicate a high-risk/fraudulent user ``` import pandas as pd import numpy as np from datetime import datetime from sklearn.ensemble import IsolationForest from sklearn.decomposition import PCA import matplotlib.pyplot as plt import warnings warnings.filterwarnings('ignore') df = pd.read_excel("../tabelas/dataset_modelo_201904.xlsx") df.head() df.shape df.index = df.cnpj df.drop(columns=['cnpj'], axis=1, inplace=True) df.head() ``` #### Distributions of the variables ``` from plotly.offline import init_notebook_mode, iplot import plotly.graph_objs as go init_notebook_mode(connected=True) ``` - prop_divida ``` trace = go.Histogram( x = df['prop_divida'], marker = dict(color='rgb(88,190,148)') ) layout = go.Layout(title = "prop_divida") fig = go.Figure(data = [trace], layout=layout) iplot(fig) ``` - quantidade_cheques ``` trace = go.Histogram( x = df['quantidade_cheques'], marker = dict(color='rgb(88,190,148)') ) layout = go.Layout(title = "quantidade_cheques") fig = go.Figure(data = [trace], layout=layout) iplot(fig) ``` - tempo_medio ``` trace = go.Histogram( x = df['tempo_medio'], marker = dict(color='rgb(88,190,148)') ) layout = go.Layout(title = "tempo_medio") fig = go.Figure(data = [trace], layout=layout) iplot(fig) ``` - idade_maxima ``` trace = go.Histogram( x = df['idade_maxima'], marker = dict(color='rgb(88,190,148)'), xbins=dict( start=df['idade_maxima'].min(), end=df['idade_maxima'].max(), size=10 ) ) layout = go.Layout(title = "idade_maxima") fig = go.Figure(data = [trace], layout=layout) iplot(fig) ``` - credito ``` trace = go.Histogram( x = df['credito'], marker = dict(color='rgb(88,190,148)'), xbins = dict( start = df['credito'].min(), end = df['credito'].max(), size = 0.05 ) ) layout = go.Layout(title = "credito") fig = go.Figure(data = [trace], layout=layout) iplot(fig) ``` - infra ``` trace =
go.Histogram( x = df['infra'], marker = dict(color='rgb(88,190,148)') ) layout = go.Layout(title = "infra") fig = go.Figure(data = [trace], layout=layout) iplot(fig) ``` - outros ``` trace = go.Histogram( x = df['outros'], marker = dict(color='rgb(88,190,148)'), xbins = dict( start = df['outros'].min(), end = df['outros'].max(), size=0.02 ) ) layout = go.Layout(title = "outros") fig = go.Figure(data = [trace], layout=layout) iplot(fig) ``` - processos ``` trace = go.Histogram( x = df['processos'], marker = dict(color='rgb(88,190,148)'), xbins = dict( start = df['processos'].min(), end = df['processos'].max(), size = 0.05 ) ) layout = go.Layout(title = "processos") fig = go.Figure(data = [trace], layout=layout) iplot(fig) ``` - dispersao ``` trace = go.Histogram( x = df['dispersao'], marker = dict(color='rgb(88,190,148)'), ) layout = go.Layout(title = "dispersao") fig = go.Figure(data = [trace], layout=layout) iplot(fig) ``` #### Outlier detection with the entire set of variables ``` X = df.copy() outlier_detect = IsolationForest(n_estimators=100, max_samples=1000, contamination=.05, max_features=X.shape[1], random_state=1) outlier_detect.fit(X) outliers_predicted = outlier_detect.predict(X) df["outlier"] = outliers_predicted df[df['outlier']==-1] X.head() ``` - var idade_maxima has a nearly uniform distribution, so it will not help define outliers ``` pca = PCA(n_components=2) principalComponents = pca.fit_transform(X) df_pca = pd.DataFrame(data = principalComponents , columns = ['pc1', 'pc2']) df_pca["outlier"] = outliers_predicted fig = plt.figure(figsize = (8,8)) ax = fig.add_subplot(1,1,1) ax.set_xlabel('Principal Component 1', fontsize = 15) ax.set_ylabel('Principal Component 2', fontsize = 15) ax.set_title('2 component PCA', fontsize = 20) targets = ["outlier", "inlier"] colors = ['r', 'b'] for target, color in zip(targets,colors): indicesToKeep = df_pca['outlier'] == -1 if target == "outlier" else df_pca['outlier'] == 1 ax.scatter(df_pca.loc[indicesToKeep, 'pc1']
, df_pca.loc[indicesToKeep, 'pc2'] , c = color , s = 50) ax.legend(targets) ax.grid() ``` - dropping the var idade_maxima ``` X = df.drop(columns=["idade_maxima", "outlier"]) # X.drop(columns=["outlier"], axis=1, inplace=True) # X.drop(columns=['outros'], axis=1, inplace=True) X.head() X.shape X = X[X['prop_divida']<1.5] X.head() # X.drop(columns=['outlier'], axis=1, inplace=True) X.head() outlier_detect = IsolationForest(n_estimators=100, max_samples=1000, contamination=.04, max_features=X.shape[1], random_state=1) outlier_detect.fit(X) outliers_predicted = outlier_detect.predict(X) X["outlier"] = outliers_predicted X[X['outlier']==-1] pca = PCA(n_components=2) principalComponents = pca.fit_transform(X) df_pca = pd.DataFrame(data = principalComponents , columns = ['pc1', 'pc2']) df_pca["outlier"] = outliers_predicted fig = plt.figure(figsize = (8,8)) ax = fig.add_subplot(1,1,1) ax.set_xlabel('Principal Component 1', fontsize = 15) ax.set_ylabel('Principal Component 2', fontsize = 15) ax.set_title('2 component PCA', fontsize = 20) targets = ["outlier", "inlier"] colors = ['r', 'b'] for target, color in zip(targets,colors): indicesToKeep = df_pca['outlier'] == -1 if target == "outlier" else df_pca['outlier'] == 1 ax.scatter(df_pca.loc[indicesToKeep, 'pc1'] , df_pca.loc[indicesToKeep, 'pc2'] , c = color , s = 50) ax.legend(targets) ax.grid() X = df[(df['prop_divida']<1.5) & (df['quantidade_cheques']==0)] X.head() X.drop(columns=["idade_maxima", "outlier"], axis=1, inplace=True) # X = X.drop(columns=['prop_divida', 'quantidade_cheques', 'tempo_medio', 'idade_maxima', 'outros', 'outlier']) outlier_detect = IsolationForest(n_estimators=100, max_samples=1000, contamination=.1, max_features=X.shape[1], random_state=1) outlier_detect.fit(X) outliers_predicted = outlier_detect.predict(X) X['outlier'] = outliers_predicted pca = PCA(n_components=2) principalComponents = pca.fit_transform(X) df_pca = pd.DataFrame(data = principalComponents , columns = ['pc1', 'pc2'])
df_pca["outlier"] = outliers_predicted fig = plt.figure(figsize = (8,8)) ax = fig.add_subplot(1,1,1) ax.set_xlabel('Principal Component 1', fontsize = 15) ax.set_ylabel('Principal Component 2', fontsize = 15) ax.set_title('2 component PCA', fontsize = 20) targets = ["outlier", "inlier"] colors = ['r', 'b'] for target, color in zip(targets,colors): indicesToKeep = df_pca['outlier'] == -1 if target == "outlier" else df_pca['outlier'] == 1 ax.scatter(df_pca.loc[indicesToKeep, 'pc1'] , df_pca.loc[indicesToKeep, 'pc2'] , c = color , s = 50) ax.legend(targets) ax.grid() X.shape df2 = df.drop(columns=['outros']) # df2.head() df3 = df2[df2['prop_divida']<1.5] df3.head() # applying Isolation Forest X = df3.iloc[:, 1:] X.head() outlier_detect = IsolationForest(n_estimators=1000, max_samples=1000, contamination=.04, max_features=X.shape[1], random_state=1) outlier_detect.fit(X) outliers_predicted = outlier_detect.predict(X) df3['outlier'] = outliers_predicted df3.groupby('outlier').count() df3[df3['outlier']==-1] ## trying PCA to visualize outliers from sklearn.preprocessing import StandardScaler features = list(X.columns) features x = X.loc[:, features].values x = StandardScaler().fit_transform(x) X = pd.DataFrame(x) X.columns = features # features dataframe mean normalized X.head() # reducing dimensionality to plot the data and, hopefully, detect outliers by visualization from sklearn.decomposition import PCA pca = PCA(n_components=2) principalComponents = pca.fit_transform(X) df_pca = pd.DataFrame(data = principalComponents , columns = ['pc1', 'pc2']) df_pca.head() import matplotlib.pyplot as plt fig = plt.figure(figsize = (8,8)) ax = fig.add_subplot(1,1,1) ax.set_xlabel('Principal Component 1', fontsize = 15) ax.set_ylabel('Principal Component 2', fontsize = 15) ax.set_title('2 component PCA', fontsize = 20) ax.scatter(df_pca.iloc[:, 0] , df_pca.iloc[:, 1] , s = 50) ax.grid() df_pca
```
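Under the hood, the two-component PCA used for the visualizations above amounts to centering the data and projecting it onto the top two right singular vectors. A minimal numpy sketch of that idea (synthetic data, not the debt dataset; sklearn's `PCA` does this plus extras like sign conventions and explained-variance bookkeeping):

```python
import numpy as np

def pca_2d(X):
    """Project rows of X onto the first two principal components."""
    Xc = X - X.mean(axis=0)                       # center each feature
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:2].T                          # scores on PC1 and PC2

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 5))                     # 100 samples, 5 features
scores = pca_2d(X)
print(scores.shape)
```

Because singular values come out in descending order, the first score column always captures at least as much variance as the second, which is why PC1 is the natural x-axis of the scatter plots.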
github_jupyter
``` from selenium import webdriver from bs4 import BeautifulSoup import time import csv import os import shutil import codecs import pandas as pd import numpy as np import random data = r'C:\Users\preya\Downloads\Telegram Desktop\ChatExport_2021-06-27 (1)' file_paths = [] # loop to append file_paths for root, directories, files in os.walk(data): for filename in files: if ".html" in filename: # Join the two strings in order to form the full filepath. filepath = os.path.join(root, filename) #to append file_paths.append(filepath) messages = [] for file in file_paths: soup = BeautifulSoup(codecs.open(file, 'r', "utf-8").read()) messages.extend(soup.findAll('div', attrs={'class':'message default clearfix'})) messages.extend(soup.findAll('div', attrs={'class':'message default clearfix joined'})) print(len(messages)) current_ids, previous_ids, message_texts,message_statuses = [],[],[],[] for meg in messages: current_id = meg.get("id")[7:] current_id = int(current_id) try: previous_id = meg.find("div", {"class":"reply_to details"}).a.get("href")[14:] previous_id = int(previous_id) except: previous_id = 0 try: message_text = meg.find("div", {"class":"description"}) message_text = message_text.text.strip() except: try: message_text = meg.find("div", {"class":"text"}) message_text = message_text.text.strip() except: continue try: message_status = meg.find("div", {"class":"status details"}) message_status = message_status.text.strip().split(",")[0] except: message_status = None if "Not included, change data exporting settings to download."
not in message_text and "/" not in message_text and "Try command /" not in message_text: current_ids.append(current_id) previous_ids.append(previous_id) message_texts.append(message_text) message_statuses.append(message_status) dataset = pd.DataFrame({ "current_id":current_ids, "previous_id":previous_ids, "message_text":message_texts, "message_status":message_statuses }) np.sum(dataset.isnull()) def readPreviousMain(currentID): pm = dataset[dataset["previous_id"] == currentID].iloc[:,2].values pm = str(pm) cm = dataset[dataset["current_id"] == currentID].iloc[:,2].values cm = str(cm) print("Previous Message: ", pm) print("Current Message: ", cm) readPreviousMain(5500021) readPreviousMain(5500036) readPreviousMain(5500023) dataset.to_csv(r'C:\Users\preya\Downloads\Telegram Desktop\ChatExport_2021-06-27 (1)\dataset.csv') dataset = pd.read_csv(r"C:\Users\preya\Downloads\Telegram Desktop\ChatExport_2021-06-27 (1)\dataset.csv") dataset = dataset.iloc[:,1:] messages_set = list(set(dataset.message_text)) exclude_text = ["bet", "Level", "YOUR PROFILE Messages sent here","LEADERBOARD","Local","City | Informations","Статистика пользователя" , "Report", "coinsOh no!","Status | Informations of" ,"coins", "\\u"] final_messages_set = [] for i in messages_set: valid = True for j in exclude_text: if j in i: valid = False if valid: final_messages_set.append(i) # removing bare usernames and one-word messages final_messages_set = [i for i in final_messages_set if i[0] != "@" and len(i.split(" ")) != 1] len(final_messages_set) def readPreviousMain(currentID): question = dataset[dataset["previous_id"] == currentID] question_str = str(question.message_text.values[0]) if question.message_status.isna().bool() == False: question_str += " " + str(question.message_status.values[0]) ans = dataset[dataset["current_id"] == currentID] ans_str = str(ans.message_text.values[0]) if ans.message_status.isna().bool() == False:
ans_str += " " + str(ans.message_status.values[0]) #print("Previous Message: ", question_str) #print("Current Message: ", ans_str) return question_str, ans_str question_message_text = [] ans_message_text = [] for message in final_messages_set: temp_dataset = dataset[dataset["message_text"] == message] try: q, a = readPreviousMain(int(temp_dataset.current_id.values[0])) question_message_text.append(q) ans_message_text.append(a) except: pass output = pd.DataFrame({ "question_message_text": question_message_text, "ans_message_text":ans_message_text }) output.to_csv(r"C:\Users\preya\Downloads\Telegram Desktop\ChatExport_2021-06-27 (1)\question_and_answer.csv") len(question_message_text) ``` DialogFlow ``` def create_intent(project_id, display_name, training_phrases_parts, message_texts): """Create an intent of the given intent type.""" from google.cloud import dialogflow intents_client = dialogflow.IntentsClient() parent = dialogflow.AgentsClient.agent_path(project_id) training_phrases = [] for training_phrases_part in training_phrases_parts: part = dialogflow.Intent.TrainingPhrase.Part(text=training_phrases_part) # Here we create a new training phrase for each provided part.
training_phrase = dialogflow.Intent.TrainingPhrase(parts=[part]) training_phrases.append(training_phrase) text = dialogflow.Intent.Message.Text(text=message_texts) message = dialogflow.Intent.Message(text=text) intent = dialogflow.Intent( display_name=display_name, training_phrases=training_phrases, messages=[message] ) response = intents_client.create_intent( request={"parent": parent, "intent": intent} ) print("Intent created: {}".format(response)) os.environ["GOOGLE_APPLICATION_CREDENTIALS"] = "hindienglishchatbot-3d57939ede44.json" question_ans = pd.read_csv(r"C:\Users\preya\Downloads\Telegram Desktop\ChatExport_2021-06-27 (1)\question_and_answer.csv") question_ans = question_ans.iloc[:,1:] question_ans training_phrases_parts = ["who is developer of this?", "who has developed it", "who is admin name", "who is admin of this"] message_texts = ["Preyash Patel is our developer, here you can talk with him https://www.linkedin.com/in/preyash2047/"] create_intent( project_id="hindienglishchatbot", display_name="about", training_phrases_parts = training_phrases_parts, message_texts = message_texts ) for i in range(len(question_ans)): training_phrases_parts = [question_ans.iloc[i,0]] message_texts = [question_ans.iloc[i,1]] create_intent( project_id="hindienglishchatbot", display_name="intent_" + str(i), training_phrases_parts = training_phrases_parts, message_texts = message_texts ) time.sleep(random.randint(5,10)) ```
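The scraping above leans on BeautifulSoup and Telegram's export markup. The same extraction idea can be sketched with only the standard library's `html.parser`, run here against a hypothetical snippet (real exports nest far more markup, and this toy version would miscount void tags like `<br>`, so treat it as an illustration only):

```python
from html.parser import HTMLParser

class MessageParser(HTMLParser):
    """Collects the text of <div class="text"> blocks -- the same idea as
    soup.findAll('div', attrs={'class': 'text'}) in the notebook."""

    def __init__(self):
        super().__init__()
        self.depth = 0          # > 0 while inside a message <div class="text">
        self.messages = []

    def handle_starttag(self, tag, attrs):
        if self.depth:
            self.depth += 1     # track nesting so we know when the div closes
        elif tag == 'div' and dict(attrs).get('class') == 'text':
            self.depth = 1
            self.messages.append('')

    def handle_endtag(self, tag):
        if self.depth:
            self.depth -= 1

    def handle_data(self, data):
        if self.depth:
            self.messages[-1] += data

snippet = ('<div class="text">kya haal hai</div>'
           '<div class="meta">12:30</div>'
           '<div class="text">sab badhiya</div>')
parser = MessageParser()
parser.feed(snippet)
print(parser.messages)
```

BeautifulSoup is still the better tool for the real export files; the sketch just shows that the heavy lifting is a class-filtered walk over `div` elements.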
github_jupyter
# Data Manipulation and Plotting with `pandas` ``` import pandas as pd import numpy as np import matplotlib.pyplot as plt import seaborn as sns ``` ![pandas](https://upload.wikimedia.org/wikipedia/commons/thumb/e/ed/Pandas_logo.svg/2880px-Pandas_logo.svg.png) ## Learning Goals - Load .csv files into `pandas` DataFrames - Describe and manipulate data in Series and DataFrames - Visualize data using DataFrame methods and `matplotlib` ## What is Pandas? Pandas, as [the Anaconda docs](https://docs.anaconda.com/anaconda/packages/py3.7_osx-64/) tell us, offers us "High-performance, easy-to-use data structures and data analysis tools." It's something like "Excel for Python", but it's quite a bit more powerful. Let's read in the heart dataset. Pandas has many methods for reading different types of files. Note that here we have a .csv file. Read about this dataset [here](https://www.kaggle.com/ronitf/heart-disease-uci). ``` heart_df = pd.read_csv('heart.csv') ``` The output of the `.read_csv()` function is a pandas *DataFrame*, which has a familiar tabular structure of rows and columns. ``` type(heart_df) heart_df ``` ## DataFrames and Series Two main types of pandas objects are the DataFrame and the Series, the latter being in effect a single column of the former: ``` age_series = heart_df['age'] type(age_series) ``` Notice how we can isolate a column of our DataFrame simply by using square brackets together with the name of the column. Both Series and DataFrames have an *index* as well: ``` heart_df.index age_series.index ``` Pandas is built on top of NumPy, and we can always access the NumPy array underlying a DataFrame using `.values`.
``` heart_df.values ``` ## Basic DataFrame Attributes and Methods ### `.head()` ``` heart_df.head() ``` ### `.tail()` ``` heart_df.tail() ``` ### `.info()` ``` heart_df.info() ``` ### `.describe()` ``` heart_df.describe() #Provides statistical data on the file data ``` ### `.dtypes` ``` heart_df.dtypes ``` ### `.shape` ``` heart_df.shape ``` ### Exploratory Plots Let's make ourselves a histogram of ages: ``` sns.set_style('darkgrid') sns.distplot(a=heart_df['age']); ``` And while we're at it let's do a scatter plot of maximum heart rate vs. age: ``` sns.scatterplot(x=heart_df['age'], y=heart_df['thalach']); ``` ## Adding to a DataFrame ### Adding Rows Here are two rows that our engineer accidentally left out of the .csv file, expressed as a Python dictionary: ``` extra_rows = {'age': [40, 30], 'sex': [1, 0], 'cp': [0, 0], 'trestbps': [120, 130], 'chol': [240, 200], 'fbs': [0, 0], 'restecg': [1, 0], 'thalach': [120, 122], 'exang': [0, 1], 'oldpeak': [0.1, 1.0], 'slope': [1, 1], 'ca': [0, 1], 'thal': [2, 3], 'target': [0, 0]} extra_rows ``` How can we add this to the bottom of our dataset? ``` # Let's first turn this into a DataFrame. # We can use the .from_dict() method. missing = pd.DataFrame(extra_rows) missing # Now we just need to concatenate the two DataFrames together. # Note the `ignore_index` parameter! We'll set that to True. heart_augmented = pd.concat([heart_df, missing], ignore_index=True) # Let's check the end to make sure we were successful! heart_augmented.tail() ``` ### Adding Columns Adding a column is very easy in `pandas`. Let's add a new column to our dataset called "test", and set all of its values to 0. ``` heart_augmented['test'] = 0 heart_augmented.head() ``` I can also add columns whose values are functions of existing columns. 
Suppose I want to add the cholesterol column ("chol") to the resting systolic blood pressure column ("trestbps"): ``` heart_augmented['chol+trestbps'] = heart_augmented['chol'] + heart_augmented['trestbps'] heart_augmented.head() ``` ## Filtering We can use filtering techniques to see only certain rows of our data. If we wanted to see only the rows for patients 70 years of age or older, we can simply type: ``` heart_augmented[heart_augmented['age'] >= 70] ``` Use '&' for "and" and '|' for "or". ### Exercise Display the patients who are 70 or over as well as the patients whose trestbps score is greater than 170. ``` heart_augmented[(heart_augmented['age'] >= 70) | (heart_augmented['trestbps'] > 170)] ``` <details> <summary>Answer</summary> <code>heart_augmented[(heart_augmented['age'] >= 70) | (heart_augmented['trestbps'] > 170)]</code> </details> ### Exploratory Plot Using the subframe we just made, let's make a scatter plot of their cholesterol levels vs. age and color by sex: ``` at_risk = heart_augmented[(heart_augmented['age'] >= 70) \ | (heart_augmented['trestbps'] > 170)] # the backslash character allows you to break the line, but still continue it on the next sns.scatterplot(data=at_risk, x='age', y='chol', hue='sex'); ``` ### `.loc` and `.iloc` We can use `.loc` to get, say, the first ten values of the age and resting blood pressure ("trestbps") columns: ``` heart_augmented.loc heart_augmented.loc[:9, ['age', 'trestbps']] # Note that .loc includes the endpoint, rather than stopping before it, # potentially because it grabs rows by name and the index has a label of '9' ``` `.iloc` is used for selecting locations in the DataFrame **by number**: ``` heart_augmented.iloc heart_augmented.iloc[3, 0] ``` ### Exercise How would we get the same slice as just above by using .iloc() instead of .loc()?
``` heart_augmented.iloc[:10, [0,3]] ``` <details> <summary>Answer</summary> <code>heart_augmented.iloc[:10, [0, 3]]</code> </details> ## Statistics ### `.mean()` ``` heart_augmented.mean() ``` Be careful! Some of these are not straightforwardly interpretable. What does an average "sex" of 0.682 mean? ### `.min()` ``` heart_augmented.min() ``` ### `.max()` ``` heart_augmented.max() ``` ## Series Methods ### `.value_counts()` How many different values does slope have? What about sex? And target? ``` heart_augmented['slope'].value_counts() ``` ### `.sort_values()` ``` heart_augmented['age'].sort_values() ``` ## `pandas`-Native Plotting The `.plot()` and `.hist()` methods available for DataFrames use a wrapper around `matplotlib`: ``` heart_augmented.plot(x='age', y='trestbps', kind='scatter'); heart_augmented.hist(column='chol'); ``` ## Exercises 1. Make a bar plot of "age" vs. "slope" for the `heart_augmented` DataFrame. <details> <summary>Answer</summary> <code>sns.barplot(data=heart_augmented, x='slope', y='age');</code> </details> 2. Make a histogram of ages for **just the men** in `heart_augmented` (heart_augmented['sex'] == 1). <details> <summary>Answer</summary> <code>men = heart_augmented[heart_augmented['sex'] == 1] sns.distplot(a=men['age']);</code> </details> 3. Make separate scatter plots of cholesterol vs. resting systolic blood pressure for the target=0 and the target=1 groups. Put both plots on the same figure and give each an appropriate title. <details> <summary>Answer</summary> <code>target0 = heart_augmented[heart_augmented['target'] == 0] target1 = heart_augmented[heart_augmented['target'] == 1] fig, ax = plt.subplots(1, 2, figsize=(10, 5)) sns.scatterplot(data=target0, x='trestbps', y='chol', ax=ax[0]) sns.scatterplot(data=target1, x='trestbps', y='chol', ax=ax[1]) ax[0].set_title('Cholesterol vs. Resting Blood Pressure (Target = 0)') ax[1].set_title('Cholesterol vs. Resting Blood Pressure (Target = 1)');</code> </details> ## Let's find a .csv file online and experiment with it. I'm going to head to [dataportals.org](https://dataportals.org) to find a .csv file.
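The boolean-mask filtering idiom above works on any DataFrame, not just the heart data. A tiny self-contained example with toy data (hypothetical values, chosen only to exercise both operators):

```python
import pandas as pd

df = pd.DataFrame({'age': [72, 45, 68, 80],
                   'trestbps': [130, 180, 150, 175]})

# Parentheses around each condition are required: & and |
# bind more tightly than the comparison operators.
either = df[(df['age'] >= 70) | (df['trestbps'] > 170)]  # "or": 3 rows
both = df[(df['age'] >= 70) & (df['trestbps'] > 170)]    # "and": 1 row
print(len(either), len(both))
```

Forgetting the parentheses (e.g. `df['age'] >= 70 | df['trestbps'] > 170`) raises a confusing error precisely because of that operator precedence, which is the most common stumble with this idiom.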
github_jupyter
# KCWI_calcs.ipynb functions from Busola Alabi, Apr 2018 ``` from __future__ import division import glob import re import os, sys from astropy.io.fits import getheader, getdata from astropy.wcs import WCS import astropy.units as u import numpy as np from scipy import interpolate import logging from time import time import matplotlib.pyplot as plt from pylab import * import matplotlib as mpl import matplotlib.ticker as mtick from scipy.special import gamma def make_obj(flux, grat_wave, f_lam_index): ''' ''' w = np.arange(3000)+3000. p_A = flux/(2.e-8/w)*(w/grat_wave)**f_lam_index return w, p_A def inst_throughput(wave, grat): ''' ''' eff_bl = np.asarray([0.1825,0.38,0.40,0.46,0.47,0.44]) eff_bm = np.asarray([0.1575, 0.33, 0.36, 0.42, 0.48, 0.45]) eff_bh1 = np.asarray([0., 0.0, 0.0, 0.0, 0.0, 0.]) eff_bh2 = np.asarray([0., 0.18, 0.3, 0.4, 0.28, 0.]) eff_bh3 = np.asarray([0., 0., 0., 0.2, 0.29, 0.31]) wave_0 = np.asarray([355.,380.,405.,450.,486.,530.])*10. wave_bl = np.asarray([355., 530.])*10. wave_bm = np.asarray([355., 530.])*10. wave_bh1 = np.asarray([350., 450.])*10. wave_bh2 = np.asarray([405., 486.])*10. wave_bh3 = np.asarray([405., 530.])*10. trans_atmtel = np.asarray([0.54, 0.55, 0.56, 0.56, 0.56, 0.55]) if grat=='BL': eff = eff_bl*trans_atmtel wave_range = wave_bl if grat=='BM': eff = eff_bm*trans_atmtel wave_range = wave_bm if grat=='BH1': eff = eff_bh1*trans_atmtel wave_range = wave_bh1 if grat=='BH2': eff = eff_bh2*trans_atmtel wave_range = wave_bh2 if grat=='BH3': eff = eff_bh3*trans_atmtel wave_range = wave_bh3 wave1 = wave interpfunc = interpolate.interp1d(wave_0, eff, fill_value="extrapolate") #this is the only way I've gotten this interpolation to work eff_int = interpfunc(wave1) idx = np.where((wave1 <= wave_range[0]) | (wave1 > wave_range[1])) eff_int[idx] = 0. 
return eff_int def obj_cts(w, f0, grat, exposure_time): ''' ''' A_geo = np.pi/4.*(10.e2)**2 eff = inst_throughput(w, grat) cts = eff*A_geo*exposure_time*f0 return cts def sky(wave): ''' ''' with open('mk_sky.dat') as f: lines = (line for line in f if not line.startswith('#')) skydata = np.loadtxt(lines, skiprows=2) ws = skydata[:,0] fs = skydata[:,1] f_nu_data = getdata('lris_esi_skyspec_fnu_uJy.fits') f_nu_hdr = getheader('lris_esi_skyspec_fnu_uJy.fits') dw = f_nu_hdr["CDELT1"] w0 = f_nu_hdr["CRVAL1"] ns = len(fs) ws = np.arange(ns)*dw + w0 f_lam = f_nu_data[:len(ws)]*1e-29*3.*1e18/ws/ws interpfunc = interpolate.interp1d(ws,f_lam, fill_value="extrapolate") fs_int = interpfunc(wave) return fs_int def sky_mk(wave): ''' ''' with open('mk_sky.dat') as f: lines = (line for line in f if not line.startswith('#')) skydata = np.loadtxt(lines, skiprows=2) ws = skydata[:,0] fs = skydata[:,1] f_nu_data = getdata('lris_esi_skyspec_fnu_uJy.fits') f_nu_hdr = getheader('lris_esi_skyspec_fnu_uJy.fits') dw = f_nu_hdr["CDELT1"] w0 = f_nu_hdr["CRVAL1"] ns = len(fs) ws = np.arange(ns)*dw + w0 f_lam = f_nu_data[:len(ws)]*1e-29*3.*1e18/ws/ws p_lam = f_lam/(2.e-8/ws) interpfunc = interpolate.interp1d(ws,p_lam, fill_value="extrapolate") #using linear since argument not set in idl ps_int = interpfunc(wave) return ps_int def sky_cts(w, grat, exposure_time, airmass=1.2, area=1.0): ''' ''' A_geo = np.pi/4.*(10.e2)**2 eff = inst_throughput(w, grat) cts = eff*A_geo*exposure_time*sky_mk(w)*airmass*area return cts def ETC(slicer, grating, grat_wave, f_lam_index, seeing, exposure_time, ccd_bin, spatial_bin=[], spectral_bin=None, nas=True, sb=True, mag_AB=None, flux=None, Nframes=1, emline_width=None): """ Parameters ========== slicer: str L/M/S (Large, Medium or Small) grating: str BH1, BH2, BH3, BM, BL grating wavelength: float or int 3400. < ref_wave < 6000. 
f_lam_index: float source f_lam ~ lam^f_lam_index, default = 0 seeing: float arcsec exposure_time: float seconds for source image (total) for all frames ccd_bin: str '1x1' or '2x2' spatial_bin: list [dx,dy] bin in arcsec x arcsec for binning extended emission flux. if sb=True then default is 1 x 1 arcsec^2 spectral_bin: float or int Ang to bin for S/N calculation, default=None nas: boolean nod and shuffle sb: boolean surface brightness m_AB in mag arcsec^-2; flux = cgs arcsec^-2 mag_AB: float or int continuum AB magnitude at wavelength (ref_wave) flux: float erg cm^-2 s^-1 Ang^-1 (continuum source [total]); erg cm^-2 s^-1 (point line source [total]) [emline = width in Ang] EXTENDED: erg cm^-2 s^-1 Ang^-1 arcsec^-2 (continuum source [total]); erg cm^-2 s^-1 arcsec^-2 (point line source [total]) [emline = width in Ang] Nframes: int number of frames (default is 1) emline_width: float flux is for an emission line, not continuum flux (only works for flux), and emission line width is emline_width Ang """ logger = logging.getLogger(__name__) logger.info('Running KECK/ETC') t0 = time() slicer_OPTIONS = ('L', 'M', 'S') grating_OPTIONS = ('BH1', 'BH2', 'BH3', 'BM', 'BL') if slicer not in slicer_OPTIONS: raise ValueError("slicer must be L, M, or S, wrongly entered {}".format(slicer)) logger.info('Using SLICER=%s', slicer) if grating not in grating_OPTIONS: raise ValueError("grating must be BH1, BH2, BH3, BM, or BL, wrongly entered {}".format(grating)) logger.info('Using GRATING=%s', grating) if grat_wave < 3400. or grat_wave > 6000.: raise ValueError('wrong value for grating wavelength') logger.info('Using reference wavelength=%.2f', grat_wave) if len(spatial_bin) != 2 and len(spatial_bin) != 0: raise ValueError('wrong spatial binning!!') if len(spatial_bin) == 2: logger.info('Using spatial binning, spatial_bin=%s', str(spatial_bin[0])+'x'+str(spatial_bin[1])) bin_factor = 1.
    if ccd_bin == '2x2':
        bin_factor = 0.25
    if ccd_bin == '2x2' and slicer == 'S':
        print('******** WARNING: DO NOT USE 2x2 BINNING WITH SMALL SLICER')

    read_noise = 2.7  # electrons
    Nf = Nframes
    chsz = 3  # what is this????
    nas_overhead = 10.  # seconds per half cycle
    seeing1 = seeing
    seeing2 = seeing
    pixels_per_arcsec = 1. / 0.147

    if slicer == 'L':
        seeing2 = 1.38
        snr_spatial_bin = seeing1 * seeing2
        pixels_spectral = 8
        arcsec_per_slice = 1.35
    if slicer == 'M':
        seeing2 = max(0.69, seeing)
        snr_spatial_bin = seeing1 * seeing2
        pixels_spectral = 4
        arcsec_per_slice = 0.69
    if slicer == 'S':
        seeing2 = seeing
        snr_spatial_bin = seeing1 * seeing2
        pixels_spectral = 2
        arcsec_per_slice = 0.35

    N_slices = seeing / arcsec_per_slice
    if len(spatial_bin) == 2:
        N_slices = spatial_bin[1] / arcsec_per_slice
        snr_spatial_bin = spatial_bin[0] * spatial_bin[1]
    pixels_spatial_bin = pixels_per_arcsec * N_slices

    print("GRATING :", grating)
    if grating == 'BL':
        A_per_pixel = 0.625
    if grating == 'BM':
        A_per_pixel = 0.28
    if grating == 'BH2' or grating == 'BH3':
        A_per_pixel = 0.125
    print('A_per_pixel', A_per_pixel)

    logger.info('f_lam ~ lam^%.2f', f_lam_index)
    logger.info('SEEING: %.2f arcsec', seeing)
    logger.info('Ang/pixel: %.2f', A_per_pixel)
    logger.info('spectral pixels in 1 spectral resolution element: %.2f', pixels_spectral)
    A_per_spectral_bin = pixels_spectral * A_per_pixel
    logger.info('Ang/resolution element: %.2f', A_per_spectral_bin)
    if spectral_bin is not None:
        snr_spectral_bin = spectral_bin
    else:
        snr_spectral_bin = A_per_spectral_bin
    logger.info('Ang/SNR bin: %.2f', snr_spectral_bin)
    pixels_per_snr_spec_bin = snr_spectral_bin / A_per_pixel
    logger.info('Pixels/Spectral SNR bin: %.2f', pixels_per_snr_spec_bin)
    logger.info('SNR Spatial Bin [arcsec^2]: %.2f', snr_spatial_bin)
    logger.info('SNR Spatial Bin [pixels^2]: %.2f', pixels_spatial_bin)

    flux1 = 0
    if flux is not None:
        flux1 = flux
    if flux is not None and emline_width is not None:
        flux1 = flux / emline_width
    if flux1 == 0 and emline_width is not None:
        raise ValueError('Dont use mag_AB for emission line')
    if mag_AB is not None:
        flux1 = (10**(-0.4*(mag_AB+48.6))) * (3.e18/grat_wave) / grat_wave

    w, p_A = make_obj(flux1, grat_wave, f_lam_index)

    if sb == False and mag_AB is not None:
        flux_input = ' mag_AB'
        logger.info('OBJECT mag: %.2f, %s', mag_AB, flux_input)
    if sb == True and mag_AB is not None:
        flux_input = ' mag_AB / arcsec^2'
        logger.info('OBJECT mag: %.2f, %s', mag_AB, flux_input)
    if flux is not None and sb == False and emline_width is None:
        flux_input = 'erg cm^-2 s^-1 Ang^-1'
    if flux is not None and sb == False and emline_width is not None:
        flux_input = 'erg cm^-2 s^-1 in ' + str(emline_width) + ' Ang'
    if flux is not None and sb and emline_width is None:
        flux_input = 'erg cm^-2 s^-1 Ang^-1 arcsec^-2'
    if flux is not None and sb and emline_width is not None:
        flux_input = 'erg cm^-2 s^-1 arcsec^-2 in ' + str(emline_width) + ' Ang'
    if flux is not None:
        logger.info('OBJECT Flux %.2g, %s', flux, flux_input)
    if emline_width is not None:
        logger.info('EMISSION LINE OBJECT --> flux is not per unit Ang')

    t_exp = exposure_time
    if nas == False:
        c_o = obj_cts(w, p_A, grating, t_exp) * snr_spatial_bin * snr_spectral_bin
        c_s = sky_cts(w, grating, exposure_time, airmass=1.2, area=1.0) * snr_spatial_bin * snr_spectral_bin
        c_r = Nf * read_noise**2 * pixels_per_snr_spec_bin * pixels_spatial_bin * bin_factor
        snr = c_o / np.sqrt(c_s + c_o + c_r)
    if nas == True:
        # NOTE: the docstring calls `nas` a boolean, but here it is used as the
        # nod-and-shuffle exposure time per half cycle (seconds), so pass a number
        n_cyc = np.floor((exposure_time - nas_overhead) / 2. / (nas + nas_overhead) + 0.5)
        total_exposure = (2 * n_cyc * (nas + nas_overhead)) + nas_overhead
        logger.info('NAS: Rounding up to %d cycles of NAS for total exposure of %.1f s', n_cyc, total_exposure)
        t_exp = n_cyc * nas
        c_o = obj_cts(w, p_A, grating, t_exp) * snr_spatial_bin * snr_spectral_bin
        c_s = sky_cts(w, grating, t_exp, airmass=1.2, area=1.0) * snr_spatial_bin * snr_spectral_bin
        c_r = 2. * Nf * read_noise**2 * pixels_per_snr_spec_bin * pixels_spatial_bin * bin_factor
        snr = c_o / np.sqrt(2. * c_s + c_o + c_r)

    fig = figure(num=1, figsize=(12, 16), dpi=80, facecolor='w', edgecolor='k')
    subplots_adjust(hspace=0.001)

    ax0 = fig.add_subplot(611)
    ax0.plot(w, snr, 'k-')
    ax0.minorticks_on()
    ax0.tick_params(axis='both', which='minor', direction='in', length=5, width=2)
    ax0.tick_params(axis='both', which='major', direction='in', length=8, width=2, labelsize=8)
    ylabel('SNR / %.1f' % snr_spectral_bin + r'$\rm \ \AA$', fontsize=12)

    ax1 = fig.add_subplot(612)
    ax1.plot(w, c_o, 'k--')
    ax1.minorticks_on()
    ax1.tick_params(axis='both', which='minor', direction='in', length=5, width=2)
    ax1.tick_params(axis='both', which='major', direction='in', length=8, width=2, labelsize=12)
    ylabel('Obj cts / %.1f' % snr_spectral_bin + r'$\rm \ \AA$', fontsize=12)

    ax2 = fig.add_subplot(613)
    ax2.plot(w, c_s, 'k--')
    ax2.minorticks_on()
    ax2.tick_params(axis='both', which='minor', direction='in', length=5, width=2)
    ax2.tick_params(axis='both', which='major', direction='in', length=8, width=2, labelsize=12)
    ylabel('Sky cts / %.1f' % snr_spectral_bin + r'$\rm \ \AA$', fontsize=12)

    ax3 = fig.add_subplot(614)
    ax3.plot(w, c_r * np.ones(len(w)), 'k--')
    ax3.minorticks_on()
    ax3.tick_params(axis='both', which='minor', direction='in', length=5, width=2)
    ax3.tick_params(axis='both', which='major', direction='in', length=8, width=2, labelsize=12)
    ylabel('Rd. Noise cts / %.1f' % snr_spectral_bin + r'$\rm \ \AA$', fontsize=12)

    ax4 = fig.add_subplot(615)
    yval = w[c_s > 0]
    num = c_o[c_s > 0]
    den = c_s[c_s > 0]
    ax4.plot(yval, num / den, 'k--')  # some c_s are zeros
    ax4.minorticks_on()
    xlim(min(w), max(w))  # only show valid data!
ax4.tick_params(axis='both',which='minor', direction='in', length=5,width=2) ax4.tick_params(axis='both',which='major', direction='in', length=8,width=2,labelsize=12) ylabel('Obj/Sky cts /%.1f'%snr_spectral_bin+r'$\rm \ \AA$', fontsize=12) ax5 = fig.add_subplot(616) ax5.plot(w,p_A, 'k--') ax5.yaxis.set_major_formatter(mtick.FormatStrFormatter('%.1e')) ax5.minorticks_on() ax5.tick_params(axis='both',which='minor',direction='in', length=5,width=2) ax5.tick_params(axis='both',which='major',direction='in', length=8,width=2,labelsize=12) ylabel('Flux ['r'$\rm ph\ cm^{-2}\ s^{-1}\ \AA^{-1}$]', fontsize=12) xlabel('Wavelength ['r'$\rm \AA$]', fontsize=12) show() fig.savefig('{}.pdf'.format('KCWI_ETC_calc'), format='pdf', transparent=True, bbox_inches='tight') logger.info('KCWI/ETC run successful!') logging.basicConfig(level=logging.INFO, format='[%(levelname)s] %(message)s', stream=sys.stdout) logger = logging.getLogger(__name__) if __name__ == '__main__': print("KCWI/ETC...python version") ``` Simulate DF44 observation, begin by figuring out Sérsic model conversions. See toy_jeans4.ipynb for more detailed Sérsic calculations. ``` # n, R_e, M_g = 0.85, 7.1, 19.05 # van Dokkum+16 (van Dokkum+17 is slightly different) n, mu_0, a_e, R_e = 0.94, 24.2, 9.7, 7.9 # van Dokkum+17; some guesses b_n = 1.9992*n - 0.3271 mu_m_e = mu_0 - 2.5*log10(n*exp(b_n)/b_n**(2.0*n)*gamma(2.0*n)) + 2.5*b_n / log(10.0) # mean SB, using Graham & Driver eqns 6 and ? 
print('<mu>_e =', mu_m_e)

ETC('M', 'BM', 5110., 0., 0.75, 3600., '2x2', spatial_bin=[14.0, 14.0],
    spectral_bin=None, nas=False, sb=True, mag_AB=25.2, flux=None,
    Nframes=1, emline_width=None)
# S/N ~ 20/Ang, binned over ~1 R_e aperture
```

Simulate VCC 1287 observation:

```
n, a_e, q, m_i = 0.6231, 46.34, 0.809, 15.1081  # Viraj Pandya sci_gf_i.fits header GALFIT results, sent 19 Mar 2018
R_e = a_e * sqrt(q)
g_i = 0.72  # note this is a CFHT g-i not SDSS g-i
mu_m_e = m_i + 2.5*log10(2.0) + 2.5*log10(pi*R_e**2)
print('<mu>_e (i-band) =', mu_m_e)
mu_m_e += g_i
print('<mu>_e (g-band) =', mu_m_e)
b_n = 1.9992*n - 0.3271
mu_0 = mu_m_e + 2.5*log10(n*exp(b_n)/b_n**(2.0*n)*gamma(2.0*n)) - 2.5*b_n / log(10.0)  # central SB, inverting Graham & Driver eqn 6
print('mu_0 (g-band) =', mu_0)

ETC('M', 'BM', 5092., 0., 0.75, 3600., '2x2', spatial_bin=[16.5, 20.4],
    spectral_bin=None, nas=False, sb=True, mag_AB=25.5, flux=None,
    Nframes=1, emline_width=None)
# S/N ~ 20/Ang, binned over full FOV
```

Simulate Hubble VII observation:

```
R_e = 0.9
m_V = 15.8
mue = m_V + 2.5*log10(2) + 2.5*log10(pi*R_e**2)
print('<mu_V>_e = ', mue)
side = sqrt(pi * R_e**2)
print('box size = %f arcsec' % side)

ETC('S', 'BM', 4500., 0., 0.75, 900., '1x1', spatial_bin=[side, side],
    spectral_bin=None, nas=False, sb=True, mag_AB=mue, flux=None,
    Nframes=3, emline_width=None)
```
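The Sérsic arithmetic repeated in the cells above can be collected into a small helper. The following is a sketch of the same conversion (the function names are my own): it combines the $b_n \approx 1.9992n - 0.3271$ approximation with the Graham & Driver mean-surface-brightness relation quoted in the code comments.

```python
import numpy as np
from scipy.special import gamma

def b_n_approx(n):
    """Approximation for the Sersic b_n parameter (valid for moderate n)."""
    return 1.9992 * n - 0.3271

def mu_mean_e_from_mu0(n, mu_0):
    """Mean surface brightness within R_e given the central SB mu_0
    (Graham & Driver eqn 6, as used in the DF44 cell above)."""
    b_n = b_n_approx(n)
    f_n = n * np.exp(b_n) / b_n**(2.0 * n) * gamma(2.0 * n)
    return mu_0 - 2.5 * np.log10(f_n) + 2.5 * b_n / np.log(10.0)

# DF44 numbers from the cell above: n = 0.94, mu_0 = 24.2
print(mu_mean_e_from_mu0(0.94, 24.2))  # ~25.2, matching the mag_AB passed to ETC
```

The VCC 1287 cell simply inverts this relation (solving for `mu_0` given `mu_m_e`), which is why the same `b_n` expression appears with opposite signs there.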
## MIC Demo 1 - Basic steps for measurement

This simple demonstration of the MIC toolbox uses two simulated bivariate VAR(2) models from the ["Macroeconomic simulation comparison with a multivariate extension of the Markov Information Criterion"](https://www.kent.ac.uk/economics/documents/research/papers/2019/1908.pdf) paper. These are the first two settings from the VAR validation exercises. The two simulated datasets are located in `data/model_1.txt` and `data/model_2.txt`. In addition to this, one of these two models has been used to generate an 'empirical' dataset, `data/emp_data.txt`. The purpose of the demonstration is to show how to run the MIC and see if we can figure out which of models 1 or 2 is the true model.

The purpose of this first part is to outline the individual steps required to obtain a MIC measurement on a single variable in a multivariate system. As a full multivariate measurement requires several runs of the algorithm, this is best done in parallel, which will be covered in the second demonstration notebook. We start with the setup, including the toolbox import:

```
import time
import numpy as np
from scipy.stats import pearsonr

import mic.toolbox as mt
```

### Stage 0 - Discretising the data

The first task that needs to be done is to discretise the two variables in the system (denoted $x^1$ and $x^2$). In order to do so, we need to provide the following information:

- `lb` and `ub`: bounds to the range of variation.
- `r_vec`: binary discretisation resolution of the variables.
- `hp_bit_vec`: high-priority bits - the number of bits to prioritise in the permutation.

We can then call the binary quantisation function in the toolbox, `mt.bin_quant()`, and look at the result of the discretisation diagnostics to ensure that the settings above are chosen so that the discretisation error is i.i.d. uniformly distributed.
```
lb = [-10, -10]
ub = [ 10,  10]
r_vec = [7, 7]
hp_bit_vec = [3, 3]

# Load 'empirical' data
path = 'data/emp_data.txt'
emp_data = np.loadtxt(path, delimiter="\t")

# Pick first replication (columns 1 and 2) - just as an example
dat = emp_data[:, 0:2]

# Run the discretisation tests (displays are active by passing any string other than 'notests' or 'nodisplay')
data_struct_emp = mt.bin_quant(dat, lb, ub, r_vec, '')

# Check the correlation of the high-priority bits (example of 'notests' here)
data_struct_hp = mt.bin_quant(dat, lb, ub, hp_bit_vec, 'notests')
dat_bin = data_struct_hp['binary_data']
hp_dat = dat - np.asarray(dat_bin.errors)

print('\n Correlation of high priority bits with raw data:')
for n in range(2):
    corr = pearsonr(dat[:, n], hp_dat[:, n])
    print(' Variable {:1d}: {:7.4f}'.format(n+1, corr[0]))
```

We can see here that under the settings picked above, the quantisation errors for both series are indeed uniformly distributed (KS test not rejected), independent (LB test not rejected), and the errors are not correlated with the discretisation levels. Furthermore, a discretisation using only the first three bits of each variable (the 'high-priority bits') already has a 98% correlation with the raw data. This suggests that the settings are appropriate for the analysis.

### Stage 1 - Learning model probabilities from the data

The important parameters to choose at this stage relate to the size of the tree that we want to build with the simulated data. The parameters of interest are:

- `mem`: the maximum number of nodes we can initialise. As trees tend to be sparse, this does not have to match the theoretical number of nodes in the tree, $2^D$.
- `d`: maximum depth of the context trees (in bits). Combined with `mem`, this implements a cap on the amount of memory that can be used.
- `lags`: the number of lags in the Markov process being used to learn the probabilities.
```
mem = 200000
d = 24
lags = 2
```

Choosing 2 Markov lags and 14 bits per observation means that the full context will be 28 bits long (not counting contemporaneous correlations). Given a maximum tree depth $D=24$, it is clear that some of the context will be truncated. We therefore need to permute the bits in the context to prioritise the most important ones and ensure only the least informative ones get truncated. In order to do so, the next step is to generate a permutation of the context bits.

As stated above, we are only demonstrating a single run of the MIC algorithm, and we choose to predict the first variable conditional on the context and the current value of the second variable. This is declared via the `var_vec` list below. To clarify the syntax, the first entry in the `var_vec` list identifies the variable to predict (1 in this case), and any subsequent entries in the list identify contemporaneous conditioning variables (2 in our case). This will allow us to determine the value of $\lambda^1 (x_t^1 \mid x_t^2, \Omega_t)$. It is important to note that to get the full MIC measurement, we will need to run the algorithm again. The steps are essentially the same, and this will be covered in the second part of the demonstration.

```
num_runs = 2
var_vec = [1, 2]  # This is the critical input, it governs the conditioning order.

perm = mt.corr_perm(dat, r_vec, hp_bit_vec, var_vec, lags, d)
```

We now have all the elements required to train the tree. For the purpose of the demonstration, the two simulated data files contain two training sets of 125,000 observations for each variable $x^1$ and $x^2$. The first set is located in the first two columns of the training data, while the second set is located in columns 3 and 4.
This division into two training sets is done in order to illustrate:

- How to initialise a new tree on the first series
- How to update an existing tree with further data

As stated above, we are only carrying out a single run here, so we choose to learn the probabilities for the 1st model only. Once again, getting a measurement for the second model will require further runs, which we do in the second part of the demonstration.

```
# Load model data
path = 'data/model_1.txt'
sim_data = np.loadtxt(path, delimiter="\t")

# Pick a tag for the tree (useful for identifying the tree later on)
tag = 'Model 1'

# Discretise the training data.
sim_dat1 = sim_data[:, 0:2]
data_struct = mt.bin_quant(sim_dat1, lb, ub, r_vec, 'notests')  # Note the 'notests' option
data_bin = data_struct['binary_data']

# Initialise a tree and train it, trying to predict the 1st variable
var = var_vec[0]
output = mt.train(None, data_bin, mem, lags, d, var, tag, perm)
```

Let's now update the tree with the second run of training data to see the difference in syntax and output.

```
# Discretise the second run of training data
sim_dat1 = sim_data[:, 2:4]
data_struct = mt.bin_quant(sim_dat1, lb, ub, r_vec, 'notests')  # Note, we are not running discretisation tests
data_bin = data_struct['binary_data']

# Extract the tree from the previous output and train it again. Only the 1st argument changes
T = output['T']
output = mt.train(T, data_bin, mem, lags, d, var, tag, perm)
```

Notice how the header of the output has changed, using the tag to flag that we are updating an existing tree. We are done training the tree, so let's get some descriptive statistics.

```
# Use the built-in descriptive statistic method to get some diagnostics
T = output['T']
T.desc()
```

We can see that the tree has used about 1/3 of the initial node allocation, so we have plenty of margin on memory. This will typically change if more variables are included.
There is an element of trial and error to figuring out how much memory to allocate, which is why this diagnostic is useful. It is important to note that the algorithm can cope with failed node allocations (situations where the algorithm attempts to allocate a node to memory but fails), as it has a heuristic that allows it to 'skip' failed nodes, at the cost of introducing an error in the probabilities. Furthermore, because the tree implements a pruning and rollout mechanism, nodes are only allocated when they are not on a single-branch path. This means that node allocation failures due to lack of memory will typically occur only for very rare events.

Both of these mechanisms mean that memory allocation failure is graceful and the odd failure will not impair the measurement. It is nevertheless a good idea to check `T.desc()` to ensure that failed allocations are not too numerous.

### Stage 2 - Scoring the empirical data with the model probabilities

We have already loaded the empirical data when running the discretisation tests. For the purposes of this demonstration, we have 10 replications of 1000 observations each, in order to show the consistency of the measurement. We will therefore loop the steps required to score these series over the 10 replications. In a normal application with only one empirical dataset, this loop is of course not needed!

- Discretise the empirical data into `data_struct_emp`
- Extract the binary data from the dictionary
- Pass the binary data alongside the tree `T` to the score function. The function knows which variable to score, as this is given in the tree.
- Correct the score by removing the estimate of the bias (measured using the Rissanen bound correction).
```
scores = np.zeros([998, 10])

for j in range(10):
    loop_t = time.time()

    # Discretise the data
    k = 2*j
    dat = emp_data[:, k:k+2]
    data_struct_emp = mt.bin_quant(dat, lb, ub, T.r_vec, 'notests')
    data_bin_emp = data_struct_emp['binary_data']

    # Score the data using the tree
    score_struct = mt.score(T, data_bin_emp)

    # Correct the measurement
    scores[:, j] = score_struct['score'] - score_struct['bound_corr']
    print('Replication {:2d}: {:10.4f} secs.'.format(j, time.time() - loop_t))

flt_str = ' {:7.2f}'*10
print('\n Scores obtained ')
print(flt_str.format(*np.sum(scores, 0)))
```

This provides the measurement for $\lambda^1 (x_t^1 \mid x_t^2, \Omega_t)$. To complete the measurement, one also requires $\lambda^1 (x_t^2 \mid \Omega_t)$. This will enable us to calculate:

$$ \lambda^1 (X) = \sum _{t=L}^T \left[ \lambda^1 (x_t^1 \mid x_t^2, \Omega_t) + \lambda^1 (x_t^2 \mid \Omega_t) \right]$$

To do this, the analysis can be re-run from `In [4]` onwards, setting `var_vec = [2]`. The resulting score for variable 2 can be added to the score above, which measures the score for variable 1 conditional on 2, thus providing the MIC for the entire system.

Finally, the accuracy of the measurement can be improved by re-doing the analysis using a different conditioning order in the cross-entropy measurement. In this case this can be done by carrying out the same analysis with `var_vec = [2,1]` and `var_vec = [1]` and adding the result. This provides the following measurement:

$$ \lambda^1 (X) = \sum _{t=L}^T \left[ \lambda^1 (x_t^2 \mid x_t^1, \Omega_t) + \lambda^1 (x_t^1 \mid \Omega_t) \right]$$

In theory, the two $\lambda^1 (X)$ measurements should be identical; in practice they will differ by a measurement error. Averaging the two will therefore improve precision.
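As a concrete illustration of this combination-and-averaging step, suppose the four runs described above have each produced a vector of per-observation scores. The arrays below are hypothetical placeholders (not toolbox outputs); only the arithmetic is the point:

```python
import numpy as np

# Hypothetical per-observation scores from the four runs (placeholder values)
s_x1_given_x2 = np.array([1.0, 2.0])   # lambda^1(x_t^1 | x_t^2, Omega_t)
s_x2          = np.array([0.5, 0.5])   # lambda^1(x_t^2 | Omega_t)
s_x2_given_x1 = np.array([1.5, 1.5])   # lambda^1(x_t^2 | x_t^1, Omega_t)
s_x1          = np.array([1.0, 1.0])   # lambda^1(x_t^1 | Omega_t)

# Each conditioning order yields a full-system MIC on its own...
mic_order_A = np.sum(s_x1_given_x2 + s_x2)
mic_order_B = np.sum(s_x2_given_x1 + s_x1)

# ...and averaging the two orders reduces the measurement error
mic = 0.5 * (mic_order_A + mic_order_B)
print(mic_order_A, mic_order_B, mic)  # 4.0 5.0 4.5
```

In theory `mic_order_A` and `mic_order_B` estimate the same quantity, so their average is a lower-variance estimate of the system MIC.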
```
import pandas as pd
import numpy as np
import re
import nltk, string
import matplotlib
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn.utils import shuffle
from nltk.corpus import stopwords
from nltk import word_tokenize
from collections import Counter
from wordcloud import WordCloud
from sklearn.feature_extraction.text import TfidfVectorizer, CountVectorizer
from sklearn.svm import SVC
from sklearn.model_selection import *
from sklearn import metrics
from sklearn import model_selection
from sklearn import feature_extraction
from sklearn.metrics import classification_report

matplotlib.style.use('ggplot')
pd.set_option('display.max_colwidth', None)
from pylab import rcParams
rcParams['figure.figsize'] = 20, 10

true = pd.read_csv("./data/true.csv")
fake = pd.read_csv("./data/fake.csv")
true.head()
fake.head()

true['category'] = 1
fake['category'] = 0
df = pd.concat([true, fake])
data = shuffle(df)
data = data.reset_index(drop=True)
data = data.drop(["date"], axis=1)  # drop the column (axis=1) and assign the result back

sns.set_style("darkgrid")
sns.countplot(data.category)
data.isna().sum()
data.subject.value_counts()

plt.figure(figsize=(8, 5))
sns.countplot("subject", data=fake)
plt.show()

data["full_text"] = data["title"] + " " + data["text"]

stop_words = set(stopwords.words('english'))
punct = re.compile(r'(\w+)')

def normalize_sentence(txt):
    # string to lowercase
    txt = txt.lower()
    # punctuation removal: keep word characters only, re-join with spaces
    tokenized = [m.group() for m in punct.finditer(txt)]
    s = ' '.join(tokenized)
    # remove digits
    no_digits = ''.join([i for i in s if not i.isdigit()])
    cleaner = " ".join(no_digits.split())
    # tokenize words and remove stop words
    word_tokens = word_tokenize(cleaner)
    filtered_sentence = [w for w in word_tokens if not w in stop_words]
    filtered_sentence = " ".join(filtered_sentence)
    return filtered_sentence

# creating column with cleaned and normalized text
data['ntext'] = data.full_text.apply(normalize_sentence)
# data['normalized_text'] = data.text.apply(stemming_words)
data.head()
data.ntext.values[4]

freq =
pd.Series(' '.join(data['ntext']).split()).value_counts()[:3] data['ntext'] = data['ntext'].apply(lambda x: " ".join(x for x in x.split() if x not in freq)) all_docs = list(data.ntext.values) word_frequencies = pd.DataFrame(data.ntext.str.split(expand=True).stack().value_counts(),columns = ["frequency"]) word_frequencies.reset_index(inplace=True) word_frequencies.rename(columns={'index':'word'},inplace=True) word_frequencies.head(5) count_true = Counter(" ".join(data[data['category']==1]["ntext"]).split()).most_common(30) df1 = pd.DataFrame.from_dict(count_true) df1 = df1.rename(columns={0: "True news", 1 : "count"}) count_fake = Counter(" ".join(data[data['category']==0]["ntext"]).split()).most_common(30) df2 = pd.DataFrame.from_dict(count_fake) df2 = df2.rename(columns={0: "Fake news", 1 : "count"}) df1.plot.bar(legend = False,color = 'Green') y_pos = np.arange(len(df1["True news"])) plt.xticks(y_pos, df1["True news"]) plt.title('More frequent positive words') plt.xlabel('Words') plt.ylabel('Number of occurences') plt.show() df2.plot.bar(legend = False) y_pos = np.arange(len(df2["Fake news"])) plt.xticks(y_pos, df2["Fake news"]) plt.title('More frequent negative words') plt.xlabel('Words') plt.ylabel('Number of occurences') plt.show() def wordcloud_function(doc): corpus =(" ").join(doc) wordcloud = WordCloud(width = 1000, height = 500, max_words=100, background_color="white").generate(corpus) plt.figure(figsize=(15,8)) plt.imshow(wordcloud) plt.axis("off") plt.savefig("wordcloud"+".png", bbox_inches='tight') plt.show() plt.close() wordcloud_function(all_docs) real_docs = list(data['ntext'][data['category']==1]) #wordcloud of real news wordcloud_function(real_docs) fake_docs = list(data['ntext'][data['category']==0]) #wordcloud of real news wordcloud_function(fake_docs) def featureExtraction(data): vect = TfidfVectorizer() tfidf_data = vect.fit_transform(data) return tfidf_data def learning(clf, X, Y): X_train, X_test, Y_train, Y_test = train_test_split(X, Y, 
test_size=0.2, random_state=60) #60 classifier = clf()#4800-br #5900 #6200 #6700 #6800 #7800-ba #7800 #8800 classifier.fit(X_train, Y_train) predict = classifier.predict(X_test) return predict, X_train, X_test, classifier, Y_test, Y_train def model_evaluation(classifier, predict, X_train, Y_train, X_test, Y_test): print("Train Accuracy : {}%".format(round(classifier.score(X_train, Y_train)*100,1))) #training accuracy print("Test Accuracy : {}%".format(round(classifier.score(X_test, Y_test)*100,1))) #Test accuracy print("Test precision : {}%".format(round( metrics.precision_score(Y_test, predict)*100,1))) #Test precision print("Recall : {}%".format(round(metrics.recall_score(Y_test,predict)*100,1))) #Recall print(classification_report(predict, Y_test)) #Classification report m_confusion_test = metrics.confusion_matrix(Y_test, predict) #Confusion Matrix pd.DataFrame(data = m_confusion_test, columns = ['Predicted 0', 'Predicted 1'], index = ['Actual 0', 'Actual 1']) fig, ax = plt.subplots(figsize=(4, 4)) ax.matshow(m_confusion_test, cmap=plt.cm.RdPu_r, alpha=0.1) for i in range(m_confusion_test.shape[0]): for j in range(m_confusion_test.shape[1]): ax.text(x=j, y=i, s=m_confusion_test[i, j], va='center', ha='center') plt.title('SVC Linear Kernel \nRecall: {0:.1f}%'.format(metrics.recall_score(Y_test,predict)*100)) plt.xlabel('PREDICTED LABELS') plt.ylabel('ACTUAL LABELS') plt.tight_layout() def main(clf): data_train, polarity = data.ntext, data.category tfidf_data = featureExtraction(data_train) pred, X_train, X_test, classifier, Y_test, Y_train = learning(clf, tfidf_data, polarity) model_evaluation(classifier, pred, X_train, Y_train, X_test, Y_test) return classifier classifier = main(SVC) ```
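One design wrinkle worth noting: `featureExtraction` fits the `TfidfVectorizer` locally and returns only the transformed matrix, so the fitted vocabulary is discarded and the returned `classifier` cannot score new raw text on its own. A hedged sketch of how this could be restructured with scikit-learn's `Pipeline` (toy stand-in data and names of my own, not the notebook's dataset):

```python
from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import SVC

# Toy stand-ins for the cleaned 'ntext' column and 'category' labels
texts = ["economy grows strongly this quarter",
         "aliens endorse candidate in secret",
         "senate passes budget bill",
         "miracle cure hidden by doctors"]
labels = [1, 0, 1, 0]

# Vectoriser and classifier travel together, so predict() accepts raw text
model = Pipeline([("tfidf", TfidfVectorizer()), ("svc", SVC())])
model.fit(texts, labels)

preds = model.predict(["president signs new budget bill"])
print(preds)
```

Bundling the vectoriser into the pipeline also prevents a subtle leakage risk: with `fit_transform` applied to the full corpus before `train_test_split`, the IDF weights are computed using the test documents.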