# This Jupyter notebook illustrates how to read data in from an external file

## [This notebook provides a simple illustration; users can easily modify and customize these examples for their own data storage scheme and/or preferred workflows]

### Motion Blur Filtering: A Statistical Approach for Extracting Confinement Forces & Diffusivity from a Single Blurred Trajectory

##### Author: Chris Calderon

Copyright 2015 Ursa Analytics, Inc. Licensed under the Apache License, Version 2.0 (the "License"); you may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0

### The cell below loads the required modules and packages

```
%matplotlib inline
# the command above avoids using the "dreaded" pylab flag when launching ipython
# (always put the magic command above as the first statement in the ipynb file)
import matplotlib.font_manager as font_manager
import matplotlib.pyplot as plt
import numpy as np
import scipy.optimize as spo
import findBerglundVersionOfMA1  # this module builds off of Berglund's 2010 PRE parameterization (atypical MA1 formulation)
import MotionBlurFilter
import Ursa_IPyNBpltWrapper
```

## Now that the required modules and packages are loaded, set parameters for simulating "blurred" OU trajectories.
Specific mixed continuous/discrete model:

\begin{align} dr_t = & ({v}-{\kappa} r_t)dt + \sqrt{2 D}dB_t \\ \psi_{t_i} = & \frac{1}{t_E}\int_{t_{i}-t_E}^{t_i} r_s ds + \epsilon^{\mathrm{loc}}_{t_i} \end{align}

### In the above equations, the parameter vector specifying the model is $\theta = (\kappa,D,\sigma_{\mathrm{loc}},v)$

### Statistically exact discretization of the above for uniform time spacing $\delta$ (non-uniform $\delta$ requires time-dependent vectors and matrices below):

\begin{align} r_{t_{i+1}} = & A + F r_{t_{i}} + \eta_{t_i} \\ \psi_{t_i} = & H_A + H_Fr_{t_{i-1}} + \epsilon^{\mathrm{loc}}_{t_i} + \epsilon^{\mathrm{mblur}}_{t_i} \\ \epsilon^{\mathrm{loc}}_{t_i} + & \epsilon^{\mathrm{mblur}}_{t_i} \sim \mathcal{N}(0,R_i) \\ \eta_i \sim & \mathcal{N}(0,Q) \\ t_{i-1} = & t_{i}-t_E \\ C = & cov(\epsilon^{\mathrm{mblur}}_{t_i},\eta_{t_{i-1}}) \ne 0 \end{align}

#### Note: the Kalman Filter (KF) and Motion Blur Filter (MBF) codes estimate $\sqrt{2D}$ directly as the "thermal noise" parameter

### For situations where users would like to read data in from an external source, many options exist.

#### In the cell below, we show how to read in a text file and process the data, assuming the text file contains two columns: one with the 1D measurements and one with localization standard deviation vs. time estimates. The code chunk below sets up some default variables (tunable values are indicated by the comments). Note that for multivariate signals, the chunks below can readily be modified to process x/y or x/y/z measurements separately. Future work will address estimating 2D/3D models with the MBF (computational [not theoretical] issues exist in this case); however, the code currently provides diagnostic information to determine whether unmodeled multivariate interaction effects are important (see the main paper and Calderon, Weiss, Moerner, PRE 2014).

### Plot examples from other notebooks can be used to explore output within this notebook or another. Next, a simple example of "batch" processing is illustrated.
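Before batch-processing real files, it can help to see where data of this form comes from. Below is a minimal sketch (ours, not part of the original codebase; parameter values and the function name are illustrative) that simulates a blurred OU trajectory. The OU state is advanced with its exact discretization on a fine grid, and the motion-blur measurement is approximated by averaging the fine-grid states within each exposure window, rather than using the exact $H_A$, $H_F$, $R$ expressions employed by the MBF.

```python
import numpy as np

def simulate_blurred_ou(kappa=1.0, D=0.01, v=0.0, sigma_loc=0.02,
                        delta=0.025, T=100, nsub=50, seed=0):
    """Simulate a blurred OU trajectory by fine-grid averaging.

    The state obeys dr = (v - kappa*r) dt + sqrt(2D) dB and is advanced
    exactly on a grid of spacing dt = delta / nsub; each measurement is
    the mean of the nsub states in one exposure window (continuous
    illumination) plus localization noise of std sigma_loc.
    """
    rng = np.random.default_rng(seed)
    dt = delta / nsub
    F = np.exp(-kappa * dt)           # exact propagator over dt
    A = (v / kappa) * (1.0 - F)       # exact drift contribution over dt
    Q = (D / kappa) * (1.0 - F**2)    # exact process-noise variance over dt
    r = v / kappa                     # start at the stationary mean
    psi = np.empty(T)
    for i in range(T):
        window = np.empty(nsub)
        for j in range(nsub):
            r = A + F * r + np.sqrt(Q) * rng.standard_normal()
            window[j] = r
        psi[i] = window.mean() + sigma_loc * rng.standard_normal()
    return psi

ymeas = simulate_blurred_ou()
print(ymeas[:5])
```

A synthetic trajectory produced this way can be written out with `np.savetxt` alongside a column of localization standard deviations to exercise the file-reading loop below.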
```
filenameBase = './ExampleData/MyTraj_'  # assume all trajectory files have this prefix (adjust file location accordingly)
N = 20  # set the number of trajectories to read
delta = 25./1000.  # user must specify the time (in seconds) between observations; code provided assumes uniform continuous illumination
# NOTE: in this simple example, all trajectories are assumed to be collected with the exposure time delta input above

# now loop over trajectories and store MLE results
resBatch = []  # variable for storing MLE output

# loop below just copies info from the cell below (the only difference is that the file to read is modified on each iteration)
for i in range(N):
    filei = filenameBase + str(i+1) + '.txt'
    print('')
    print('^'*100)
    print('Reading in file: ', filei)

    # first load the sample data stored in the text file; here we assume two columns of numeric data (col 1 are measurements)
    data = np.loadtxt(filei)
    (T, ncol) = data.shape
    # above we just used a simple default text file reader; however, any means of extracting the data and
    # casting it to a Tx2 array (or Tx1 if no localization accuracy info is available) will work
    ymeas = data[:, 0]
    locStdGuess = data[:, 1]  # if no localization info available, set this to zero or a reasonable estimate of localization error [in nm]
    Dguess = 0.1  # input a guess of the local diffusion coefficient of the trajectory to seed the MLE searches (need not be accurate)
    velguess = np.mean(np.diff(ymeas))/delta  # input a guess of the velocity of the trajectory to seed the MLE searches (need not be accurate)

    MA = findBerglundVersionOfMA1.CostFuncMA1Diff(ymeas, delta)  # construct an instance of the Berglund estimator
    res = spo.minimize(MA.evalCostFuncVel, (np.sqrt(Dguess), np.median(locStdGuess), velguess), method='nelder-mead')

    # output Berglund estimation result
    print('Berglund MLE', res.x[0]*np.sqrt(2), res.x[1], res.x[-1])
    print('-'*100)

    # obtain a crude estimate of the mean reversion parameter; see Calderon, PRE (2013)
    kappa1 = np.log(np.sum(ymeas[1:]*ymeas[0:-1])/(np.sum(ymeas[0:-1]**2)-T*res.x[1]**2))/-delta

    # construct an instance of the MBF estimator
    BlurF = MotionBlurFilter.ModifiedKalmanFilter1DwithCrossCorr(ymeas, delta, StaticErrorEstSeq=locStdGuess)
    # use the call below if no localization info is available
    # BlurF = MotionBlurFilter.ModifiedKalmanFilter1DwithCrossCorr(ymeas, delta)

    parsIG = np.array([np.abs(kappa1), res.x[0]*np.sqrt(2), res.x[1], res.x[-1]])  # kick off MLE search with a "warm start" based on the simpler model

    # kick off nonlinear cost function optimization given the data and initial guess
    resBlur = spo.minimize(BlurF.evalCostFunc, parsIG, method='nelder-mead')
    print('parsIG for Motion Blur filter', parsIG)
    print('Motion Blur MLE result:', resBlur)

    # finally evaluate diagnostic statistics at the MLE just obtained
    loglike, xfilt, pit, Shist = BlurF.KFfilterOU1d(resBlur.x)
    print(np.mean(pit), np.std(pit))
    print('crude assessment of model: check above mean is near 0.5 and std is approximately', np.sqrt(1/12.))
    print('statements above based on generalized residual U[0,1] shape')
    print('other hypothesis tests which can use the PIT sequence above are outlined/referenced in the paper.')

    # finally just store the MLE of the MBF in a list
    resBatch.append(resBlur.x)

# Summarize the results of the above N simulations
resSUM = np.array(resBatch)
print('Blur medians', np.median(resSUM[:, 0]), np.median(resSUM[:, 1]), np.median(resSUM[:, 2]), np.median(resSUM[:, 3]))
print('means', np.mean(resSUM[:, 0]), np.mean(resSUM[:, 1]), np.mean(resSUM[:, 2]), np.mean(resSUM[:, 3]))
print('std', np.std(resSUM[:, 0]), np.std(resSUM[:, 1]), np.std(resSUM[:, 2]), np.std(resSUM[:, 3]))
print('^'*100, '\n\n')
```
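The crude PIT checks printed in the loop above (mean near 0.5, std near $\sqrt{1/12}$) can be formalized. A sketch (assuming scipy is available; the helper name is ours) that adds a Kolmogorov-Smirnov test of the PIT sequence against U[0,1]:

```python
import numpy as np
from scipy import stats

def pit_diagnostics(pit):
    """Crude checks that a PIT sequence is consistent with U[0,1].

    Under a correctly specified model, the probability integral
    transform (PIT) values should look i.i.d. uniform on [0,1]:
    mean near 0.5, std near sqrt(1/12), and a Kolmogorov-Smirnov
    test against the uniform CDF should not reject.
    """
    ks_stat, ks_pval = stats.kstest(pit, 'uniform')
    return {'mean': np.mean(pit),
            'std': np.std(pit),
            'ks_stat': ks_stat,
            'ks_pval': ks_pval}

# quick sanity check on synthetic uniform draws
rng = np.random.default_rng(0)
out = pit_diagnostics(rng.uniform(size=1000))
print(out)
```

In the loop, `pit_diagnostics(pit)` would be called on the `pit` sequence returned by `BlurF.KFfilterOU1d`.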
github_jupyter
```
import geopandas as gpd
import pandas as pd
import numpy as np

from covidcaremap.constants import state_name_to_abbreviation
from covidcaremap.geo import sum_per_county, sum_per_state, sum_per_hrr
from covidcaremap.data import (external_data_path,
                               processed_data_path,
                               read_census_data_df)
```

# Merge Region and Census Data

This notebook uses US Census data at the county and state level to merge population data into the county, state, and HRR region data. Most logic is taken from [usa_beds_capacity_analysis_20200313_v2](https://github.com/daveluo/covid19-healthsystemcapacity/blob/9a45c424a23e7a15559527893ebeb28703f26422/nbs/usa_beds_capacity_analysis_20200313_v2.ipynb)

```
county_census_df = pd.read_csv(external_data_path('us-census-cc-est2018-alldata.csv'),
                               encoding='unicode_escape')
puerto_rico_census_df = pd.read_csv(external_data_path('PEP_2018_PEPAGESEX_with_ann.csv'),
                                    encoding='unicode_escape')

# Filter dataset to Puerto Rico and format it to join
puerto_rico_census_df = puerto_rico_census_df[
    puerto_rico_census_df['GEO.display-label'] == 'Puerto Rico']
puerto_rico_census_df = puerto_rico_census_df.rename(columns={'GEO.display-label': 'STNAME'})
```

#### Format the FIPS code so it can be joined with county geo data

```
county_census_df['fips_code'] = county_census_df['STATE'].apply(lambda x: str(x).zfill(2)) + \
    county_census_df['COUNTY'].apply(lambda x: str(x).zfill(3))
```

#### Filter to the 7/1/2018 population estimate

```
county_census2018_df = county_census_df[county_census_df['YEAR'] == 11]
```

#### Filter by age groups

We will be looking at total population, adult population (20+ years old), and elderly population (65+ years old). These age groups match up with the CDC groupings here: https://www.cdc.gov/mmwr/volumes/69/wr/mm6912e2.htm?s_cid=mm6912e2_w

From https://www2.census.gov/programs-surveys/popest/technical-documentation/file-layouts/2010-2018/cc-est2018-alldata.pdf, the key for AGEGRP is as follows:

- 0 = Total
- 1 = Age 0 to 4 years
- 2 = Age 5 to 9 years
- 3 = Age 10 to 14 years
- 4 = Age 15 to 19 years
- 5 = Age 20 to 24 years
- 6 = Age 25 to 29 years
- 7 = Age 30 to 34 years
- 8 = Age 35 to 39 years
- 9 = Age 40 to 44 years
- 10 = Age 45 to 49 years
- 11 = Age 50 to 54 years
- 12 = Age 55 to 59 years
- 13 = Age 60 to 64 years
- 14 = Age 65 to 69 years
- 15 = Age 70 to 74 years
- 16 = Age 75 to 79 years
- 17 = Age 80 to 84 years
- 18 = Age 85 years or older

```
county_pop_all = county_census2018_df[county_census2018_df['AGEGRP'] == 0].groupby(
    ['fips_code'])['TOT_POP'].sum()
county_pop_adult = county_census2018_df[county_census2018_df['AGEGRP'] >= 5].groupby(
    ['fips_code'])['TOT_POP'].sum()
county_pop_elderly = county_census2018_df[county_census2018_df['AGEGRP'] >= 14].groupby(
    ['fips_code'])['TOT_POP'].sum()

county_pop_all.sum(), county_pop_adult.sum(), county_pop_elderly.sum()

state_pop_all = county_census2018_df[county_census2018_df['AGEGRP'] == 0].groupby(
    ['STNAME'])['TOT_POP'].sum()
state_pop_adult = county_census2018_df[county_census2018_df['AGEGRP'] >= 5].groupby(
    ['STNAME'])['TOT_POP'].sum()
state_pop_elderly = county_census2018_df[county_census2018_df['AGEGRP'] >= 14].groupby(
    ['STNAME'])['TOT_POP'].sum()

# Calculate populations for Puerto Rico
pr_pop_all_columns = ['est72018sex0_age999']
pr_pop_adult_columns = [
    'est72018sex0_age{}to{}'.format(x, x+4) for x in range(20, 60, 5)
] + ['est72018sex0_age65plus']
pr_pop_elderly_columns = ['est72018sex0_age65plus']

puerto_rico_census_df = puerto_rico_census_df.astype(dtype=dict(
    (n, int) for n in pr_pop_all_columns + pr_pop_adult_columns))

def get_pr_pop(columns):
    result = puerto_rico_census_df.transpose().reset_index()
    result = result[result['index'].isin(columns)].sum()
    result = pd.DataFrame(data={'STNAME': ['Puerto Rico'],
                                'TOT_POP': [result.iloc[1]]}) \
        .set_index('STNAME').groupby(['STNAME'])['TOT_POP'].sum()
    return result

state_pop_all_with_pr = pd.concat([state_pop_all, get_pr_pop(pr_pop_all_columns)])
state_pop_adult_with_pr = pd.concat([state_pop_adult, get_pr_pop(pr_pop_adult_columns)])
state_pop_elderly_with_pr = pd.concat([state_pop_elderly, get_pr_pop(pr_pop_elderly_columns)])

get_pr_pop(pr_pop_all_columns)
state_pop_all_with_pr
state_pop_all_with_pr.sum(), state_pop_adult_with_pr.sum(), state_pop_elderly_with_pr.sum()

county_pops = {
    'Population': county_pop_all,
    'Population (20+)': county_pop_adult,
    'Population (65+)': county_pop_elderly
}

state_pops = {
    'Population': state_pop_all_with_pr,
    'Population (20+)': state_pop_adult_with_pr,
    'Population (65+)': state_pop_elderly_with_pr
}

def set_population_field(target_df, pop_df, column_name, join_on):
    result = target_df.join(pop_df, how='left', on=join_on)
    result = result.rename({'TOT_POP': column_name}, axis=1)
    result = result.fillna(value={column_name: 0})
    return result
```

### Merge census data into states

```
state_gdf = gpd.read_file(external_data_path('us_states.geojson'), encoding='utf-8')
enriched_state_df = state_gdf.set_index('NAME')
for column_name, pop_df in state_pops.items():
    pop_df = pop_df.rename({'STNAME': 'State Name'}, axis=1)
    enriched_state_df = set_population_field(enriched_state_df,
                                             pop_df,
                                             column_name,
                                             join_on='NAME')
enriched_state_df = enriched_state_df.reset_index()
enriched_state_df = enriched_state_df.rename(columns={'STATE': 'STATE_FIPS',
                                                      'NAME': 'State Name'})
enriched_state_df['State'] = enriched_state_df['State Name'].apply(
    lambda x: state_name_to_abbreviation[x])
enriched_state_df.to_file(processed_data_path('us_states_with_pop.geojson'),
                          driver='GeoJSON')
```

### Merge census data into counties

```
county_gdf = gpd.read_file(external_data_path('us_counties.geojson'), encoding='utf-8')
county_gdf = county_gdf.rename(columns={'STATE': 'STATE_FIPS', 'NAME': 'County Name'})
county_gdf = county_gdf.merge(enriched_state_df[['STATE_FIPS', 'State']], on='STATE_FIPS')

# FIPS code is the last 5 digits of GEO_ID
county_gdf['COUNTY_FIPS'] = county_gdf['GEO_ID'].apply(lambda x: x[-5:])
county_gdf = county_gdf.drop(columns=['COUNTY'])

enriched_county_df = county_gdf
for column_name, pop_df in county_pops.items():
    enriched_county_df = set_population_field(enriched_county_df,
                                              pop_df,
                                              column_name,
                                              join_on='COUNTY_FIPS')
enriched_county_df.to_file(processed_data_path('us_counties_with_pop.geojson'),
                           driver='GeoJSON')
```

## Generate population data for HRRs

Spatially join HRRs with counties. For each intersecting county, use the ratio of the area of its intersection with the HRR to the total area of the county as the fraction of that county's population assigned to the HRR.

```
hrr_gdf = gpd.read_file(external_data_path('us_hrr.geojson'), encoding='utf-8')
hrr_gdf = hrr_gdf.to_crs('EPSG:5070')
hrr_gdf['hrr_geom'] = hrr_gdf['geometry']

county_pop_gdf = enriched_county_df
county_pop_gdf = county_pop_gdf.to_crs('EPSG:5070')
county_pop_gdf['county_geom'] = county_pop_gdf['geometry']

hrr_counties_joined_gpd = gpd.sjoin(county_pop_gdf, hrr_gdf, how='left', op='intersects')

def calculate_ratio(row):
    if row['hrr_geom'] is None:
        return 0.0
    i = row['hrr_geom'].buffer(0).intersection(row['geometry'].buffer(0))
    return i.area / row['geometry'].area

hrr_counties_joined_gpd['ratio'] = hrr_counties_joined_gpd.apply(calculate_ratio, axis=1)

for column in county_pops.keys():
    hrr_counties_joined_gpd[column] = \
        (hrr_counties_joined_gpd[column] * hrr_counties_joined_gpd['ratio']).round()

hrr_pops = hrr_counties_joined_gpd.groupby('HRR_BDRY_I')[list(county_pops.keys())].sum()
hrr_pops

enriched_hrr_gdf = hrr_gdf.join(hrr_pops, on='HRR_BDRY_I').fillna(value=0)
enriched_hrr_gdf = enriched_hrr_gdf.drop('hrr_geom', axis=1).to_crs('EPSG:4326')
enriched_hrr_gdf
enriched_hrr_gdf.to_file(processed_data_path('us_hrr_with_pop.geojson'), driver='GeoJSON')
```
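Stripped of the geopandas machinery, the area-ratio apportionment above reduces to: share = county population × area(county ∩ HRR) / area(county). A toy sketch with axis-aligned rectangles standing in for the real geometries (all numbers illustrative):

```python
def rect_intersection_area(a, b):
    """Area of intersection of two axis-aligned rectangles (xmin, ymin, xmax, ymax)."""
    w = min(a[2], b[2]) - max(a[0], b[0])
    h = min(a[3], b[3]) - max(a[1], b[1])
    return max(w, 0.0) * max(h, 0.0)

county = (0.0, 0.0, 1.0, 1.0)   # a unit-square "county"
hrr = (-1.0, 0.0, 0.5, 1.0)     # an "HRR" covering the county's left half

county_area = (county[2] - county[0]) * (county[3] - county[1])
ratio = rect_intersection_area(county, hrr) / county_area
county_pop = 10000
hrr_pop_share = round(county_pop * ratio)
print(ratio, hrr_pop_share)
```

In the notebook this same ratio comes from `calculate_ratio` on real polygon geometries, and the shares are then summed per HRR with the groupby.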
github_jupyter
```
import os
print(os.environ['CONDA_DEFAULT_ENV'])
os.system('cls')

# file = open('test_set_1E/test_set_2/ts2_input.txt', 'r')
file = open('test_set_1E/sample_test_set_1/sample_ts1_input.txt', 'r')

# overwrite input() to mimic Google's judge input using local files
def input():
    line = file.readline()
    return line

import math
import random
import numpy as np

stringimp = 'IMPOSSIBLE'

def closestindex(a_list, given_value):
    absolute_difference_function = lambda list_value: abs(list_value - given_value)
    closestval = min(a_list, key=absolute_difference_function)
    closest_index = a_list.index(closestval)
    return closest_index

t = int(input())  # read number of cases

def method0(n):
    # Retries random choices until an assignment works; too slow for the second test set (time limit)
    a = [0]*26
    res = ''
    for index in n:
        a[ord(index)-ord('a')] = a[ord(index)-ord('a')]+1
    if (max(a) > math.floor(len(n)/2)):
        res = stringimp
    else:
        test = True
        while test:
            possibilite = set(n)
            test = False
            for index in n:
                possibiliten = set(possibilite)
                if index in possibiliten:
                    possibiliten.remove(index)
                if len(possibiliten) == 0:
                    # dead end: reset the counts and try again from scratch
                    test = True
                    a = [0]*26
                    res = ''
                    for index in n:
                        a[ord(index)-ord('a')] = a[ord(index)-ord('a')]+1
                    break
                else:
                    remove = random.choice(list(possibiliten))
                    res += remove
                    a[ord(remove)-ord('a')] = a[ord(remove)-ord('a')]-1
                    if a[ord(remove)-ord('a')] == 0:
                        possibilite.remove(remove)
    return res

# this solution does not work
def method1(n):
    nimp = list(n)
    unic = set(n)
    index = [0]*26
    res = ''
    for i in list(unic):
        index[ord(i)-ord('a')] = nimp.count(i)
    if (max(index) > math.floor(len(n)/2)):
        res = stringimp
    else:
        indexreverse = list(index)
        indexreverse.reverse()
        # each letter starts at the starting index of the other
        indexlettres = [0]
        for indexes in index:
            indexlettres += [indexlettres[-1] + indexes]
        indexreverselettres = [0]
        for indexes in indexreverse:
            indexreverselettres += [indexreverselettres[-1] + indexes]
        for letter in n:
            print(indexlettres, indexreverselettres, res)
            numericalletter = ord(letter)-ord('a')
            codepartie1 = indexlettres[numericalletter]
            indexlettres[numericalletter] = indexlettres[numericalletter] + 1
            plusprocheindex = closestindex(indexreverselettres, codepartie1)
            plusprocheindexval = indexreverselettres[plusprocheindex]
            if plusprocheindexval == codepartie1:
                while plusprocheindex < 26 and \
                        indexreverselettres[plusprocheindex] == indexreverselettres[plusprocheindex+1]:
                    plusprocheindex = plusprocheindex + 1
                res += chr(25-plusprocheindex+ord('a'))
            else:
                if plusprocheindexval < codepartie1:
                    res += chr(25-plusprocheindex+ord('a'))
                else:
                    res += chr(25-plusprocheindex-1+ord('a'))
        return(res)
    return res

# method given in the contest analysis
def methodsol(n):
    a = [0]*26
    res = ''
    for index in n:
        a[ord(index)-ord('a')] = a[ord(index)-ord('a')]+1
    if (max(a) > math.floor(len(n)/2)):
        res = stringimp
    else:
        nsort = np.array(n)
        argsort = np.argsort(nsort)
        valsort = np.sort(nsort)
        midle = math.floor(len(valsort)/2)
        p1 = valsort[0:midle]
        p2 = valsort[midle:]
        nswap = p2
        nswap = np.append(nswap, p1)
        for i in range(len(n)):
            res += nswap[argsort[i]]
    return res
```

```
n = [str(s) for s in input()]
# remove the line return read from the file
if '\n' in n:
    n.remove('\n')
res = method1(n)
for i in range(len(n)):
    if n[i] == res[i]:
        print('error', i, n, res)
print("Case #{}: {}".format(0, res))
```

```
for i in range(1, t + 1):  # for each case
    n = [str(s) for s in input()]
    # remove the line return read from the file
    if '\n' in n:
        n.remove('\n')
    res = methodsol(n)
    print("Case #{}: {}".format(i, res))  # check out .format's specification for more formatting options

chr(97+25)
```
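The sort-and-swap-halves idea in `methodsol` above can be written as a standalone sketch without numpy (the function name is ours). Sorting the indices by letter puts equal letters in one contiguous run of length at most floor(len/2), so shifting the sorted order by half its length guarantees no position receives its original letter:

```python
import math

def derange_letters(n):
    """Return a rearrangement where no position keeps its original letter,
    or 'IMPOSSIBLE' if one letter fills more than half the positions.
    """
    counts = {}
    for c in n:
        counts[c] = counts.get(c, 0) + 1
    if max(counts.values()) > math.floor(len(n) / 2):
        return 'IMPOSSIBLE'
    # indices sorted by letter; equal letters form one contiguous run
    order = sorted(range(len(n)), key=lambda i: n[i])
    half = len(n) // 2
    res = [''] * len(n)
    for i in range(len(n)):
        # position order[i] receives the letter half a rotation away
        res[order[i]] = n[order[(i + half) % len(n)]]
    return ''.join(res)

out = derange_letters('aabbc')
print(out)
```

Because every run of equal letters is at most `half` long, the sorted positions `i` and `(i + half) % len(n)` can never hold the same letter, which is exactly the argument the analysis method relies on.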
github_jupyter
# Introduction to obspy

The obspy package is very useful for downloading seismic data and doing some signal processing on it. Most of its signal processing methods are based on the Python package scipy.

First we import useful packages.

```
import obspy
import obspy.clients.earthworm.client as earthworm
import obspy.clients.fdsn.client as fdsn
from obspy import read
from obspy import read_inventory
from obspy import UTCDateTime
from obspy.core.stream import Stream
from obspy.signal.cross_correlation import correlate
import matplotlib.pyplot as plt
import numpy as np
import os
import urllib.request
%matplotlib inline
```

We are going to download data from an array of seismic stations.

```
network = 'XU'
arrayName = 'BS'
staNames = ['BS01', 'BS02', 'BS03', 'BS04', 'BS05', 'BS06', 'BS11', 'BS20', 'BS21', 'BS22', 'BS23', 'BS24', 'BS25', \
            'BS26', 'BS27']
chaNames = ['SHE', 'SHN', 'SHZ']
staCodes = 'BS01,BS02,BS03,BS04,BS05,BS06,BS11,BS20,BS21,BS22,BS23,BS24,BS25,BS26,BS27'
chans = 'SHE,SHN,SHZ'
```

We also need to define the time period for which we want to download data.

```
myYear = 2010
myMonth = 8
myDay = 17
myHour = 6
TDUR = 2 * 3600.0
Tstart = UTCDateTime(year=myYear, month=myMonth, day=myDay, hour=myHour)
Tend = Tstart + TDUR
```

We start by defining the client for downloading the data.

```
fdsn_client = fdsn.Client('IRIS')
```

Download the seismic data for all the stations in the array.

```
Dtmp = fdsn_client.get_waveforms(network=network, station=staCodes, location='--', channel=chans, starttime=Tstart, \
                                 endtime=Tend, attach_response=True)
```

Some stations did not record the entire two hours. We delete these and keep only stations with a complete two-hour recording.

```
ntmp = []
for ksta in range(0, len(Dtmp)):
    ntmp.append(len(Dtmp[ksta]))
ntmp = max(set(ntmp), key=ntmp.count)
D = Dtmp.select(npts=ntmp)
```

This is a function for plotting after each operation on the data.

```
def plot_2hour(D, channel, offset, title):
    """ Plot seismograms

    D = Stream
    channel = 'E', 'N', or 'Z'
    offset = Offset between two stations
    title = Title of the figure
    """
    fig, ax = plt.subplots(figsize=(15, 10))
    Dplot = D.select(component=channel)
    t = (1.0 / Dplot[0].stats.sampling_rate) * np.arange(0, Dplot[0].stats.npts)
    for ksta in range(0, len(Dplot)):
        plt.plot(t, ksta * offset + Dplot[ksta].data, 'k')
    plt.xlim(np.min(t), np.max(t))
    plt.ylim(- offset, len(Dplot) * offset)
    plt.title(title, fontsize=24)
    plt.xlabel('Time (s)', fontsize=24)
    ax.set_yticklabels([])
    ax.tick_params(labelsize=20)

plot_2hour(D, 'E', 1200.0, 'Downloaded data')
```

We start by detrending the data.

```
D
D.detrend(type='linear')
plot_2hour(D, 'E', 1200.0, 'Detrended data')
```

We then taper the data.

```
D.taper(type='hann', max_percentage=None, max_length=5.0)
plot_2hour(D, 'E', 1200.0, 'Tapered data')
```

And we remove the instrument response.

```
D.remove_response(output='VEL', pre_filt=(0.2, 0.5, 10.0, 15.0), water_level=80.0)
plot_2hour(D, 'E', 1.0e-6, 'Deconvolving the instrument response')
```

Then we filter the data.

```
D.filter('bandpass', freqmin=2.0, freqmax=8.0, zerophase=True)
plot_2hour(D, 'E', 1.0e-6, 'Filtered data')
```

And we resample the data.

```
D.interpolate(100.0, method='lanczos', a=10)
D.decimate(5, no_filter=True)
plot_2hour(D, 'E', 1.0e-6, 'Resampled data')
```

We can also compute the envelope of the signal.

```
for index in range(0, len(D)):
    D[index].data = obspy.signal.filter.envelope(D[index].data)
plot_2hour(D, 'E', 1.0e-6, 'Envelope')
```

You can also download the instrument response separately:

```
network = 'XQ'
station = 'ME12'
channels = 'BHE,BHN,BHZ'
location = '01'
```

This is to download the instrument response.

```
fdsn_client = fdsn.Client('IRIS')
inventory = fdsn_client.get_stations(network=network, station=station, level='response')
inventory.write('response/' + network + '_' + station + '.xml', format='STATIONXML')
```

We then read the data and start processing the signal as we did above.

```
fdsn_client = fdsn.Client('IRIS')
Tstart = UTCDateTime(year=2008, month=4, day=1, hour=4, minute=49)
Tend = UTCDateTime(year=2008, month=4, day=1, hour=4, minute=50)
D = fdsn_client.get_waveforms(network=network, station=station, location=location, channel=channels,
                              starttime=Tstart, endtime=Tend, attach_response=False)
D.detrend(type='linear')
D.taper(type='hann', max_percentage=None, max_length=5.0)
```

But we now use the XML file that contains the instrument response to remove it from the signal.

```
filename = 'response/' + network + '_' + station + '.xml'
inventory = read_inventory(filename, format='STATIONXML')
D.attach_response(inventory)
D.remove_response(output='VEL', pre_filt=(0.2, 0.5, 10.0, 15.0), water_level=80.0)
```

We resume signal processing.

```
D.filter('bandpass', freqmin=2.0, freqmax=8.0, zerophase=True)
D.interpolate(100.0, method='lanczos', a=10)
D.decimate(5, no_filter=True)
```

And we plot.

```
t = (1.0 / D[0].stats.sampling_rate) * np.arange(0, D[0].stats.npts)
plt.plot(t, D[0].data, 'k')
plt.xlim(np.min(t), np.max(t))
plt.title('Single waveform', fontsize=18)
plt.xlabel('Time (s)', fontsize=18)
```

Not all seismic data are stored at IRIS. This is an example of how to download data from the Northern California Earthquake Data Center (NCEDC).

```
network = 'BK'
station = 'WDC'
channels = 'BHE,BHN,BHZ'
location = '--'
```

This is to download the instrument response.

```
url = 'http://service.ncedc.org/fdsnws/station/1/query?net=' + network + '&sta=' + station + \
      '&level=response&format=xml&includeavailability=true'
s = urllib.request.urlopen(url)
contents = s.read()
file = open('response/' + network + '_' + station + '.xml', 'wb')
file.write(contents)
file.close()
```

And this is to download the data.

```
Tstart = UTCDateTime(year=2007, month=2, day=12, hour=1, minute=11, second=54)
Tend = UTCDateTime(year=2007, month=2, day=12, hour=1, minute=12, second=54)

request = 'waveform_' + station + '.request'
file = open(request, 'w')
message = '{} {} {} {} '.format(network, station, location, channels) + \
          '{:04d}-{:02d}-{:02d}T{:02d}:{:02d}:{:02d} '.format( \
          Tstart.year, Tstart.month, Tstart.day, Tstart.hour, Tstart.minute, Tstart.second) + \
          '{:04d}-{:02d}-{:02d}T{:02d}:{:02d}:{:02d}\n'.format( \
          Tend.year, Tend.month, Tend.day, Tend.hour, Tend.minute, Tend.second)
file.write(message)
file.close()

miniseed = 'station_' + station + '.miniseed'
request = 'curl -s --data-binary @waveform_' + station + '.request -o ' + miniseed + \
          ' http://service.ncedc.org/fdsnws/dataselect/1/query'
os.system(request)

D = read(miniseed)
D.detrend(type='linear')
D.taper(type='hann', max_percentage=None, max_length=5.0)
filename = 'response/' + network + '_' + station + '.xml'
inventory = read_inventory(filename, format='STATIONXML')
D.attach_response(inventory)
D.remove_response(output='VEL', pre_filt=(0.2, 0.5, 10.0, 15.0), water_level=80.0)
D.filter('bandpass', freqmin=2.0, freqmax=8.0, zerophase=True)
D.interpolate(100.0, method='lanczos', a=10)
D.decimate(5, no_filter=True)

t = (1.0 / D[0].stats.sampling_rate) * np.arange(0, D[0].stats.npts)
plt.plot(t, D[0].data, 'k')
plt.xlim(np.min(t), np.max(t))
plt.title('Single waveform', fontsize=18)
plt.xlabel('Time (s)', fontsize=18)
```
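The imports at the top of this notebook include obspy's `correlate`, which we never used. The underlying idea, estimating the delay between two stations from the peak of their cross-correlation, can be sketched with plain numpy on synthetic signals (all values illustrative; real array data would use the preprocessed traces above):

```python
import numpy as np

# two synthetic "station" recordings: the same wavelet, offset by a known lag
fs = 20.0                     # sampling rate in Hz
t = np.arange(0, 10, 1 / fs)
wavelet = np.exp(-((t - 3.0) ** 2) / 0.1) * np.sin(2 * np.pi * 5.0 * t)
lag_samples = 15
sig_a = wavelet
sig_b = np.roll(wavelet, lag_samples)  # station B sees the arrival 15 samples later

# full cross-correlation; the index of its maximum gives the sample shift
cc = np.correlate(sig_b, sig_a, mode='full')
best = np.argmax(cc) - (len(sig_a) - 1)
print(best / fs)  # estimated delay in seconds
```

With `mode='full'`, index `len(sig_a) - 1` corresponds to zero lag, which is why it is subtracted from the argmax to recover the signed shift.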
github_jupyter
## Quantitative trading in the China A-share stock market with FinRL

### Import modules

```
import warnings
warnings.filterwarnings("ignore")

import pandas as pd
from IPython import display
display.set_matplotlib_formats("svg")

from finrl_meta import config
from finrl_meta.data_processors.processor_tusharepro import TushareProProcessor, ReturnPlotter
from finrl_meta.env_stock_trading.env_stocktrading_A import StockTradingEnv
from drl_agents.stablebaselines3_models import DRLAgent

pd.options.display.max_columns = None

print("ALL Modules have been imported!")
```

### Create folders

```
import os
if not os.path.exists("./datasets"):
    os.makedirs("./datasets")
if not os.path.exists("./trained_models"):
    os.makedirs("./trained_models")
if not os.path.exists("./tensorboard_log"):
    os.makedirs("./tensorboard_log")
if not os.path.exists("./results"):
    os.makedirs("./results")
```

### Download data, cleaning and feature engineering

```
ticket_list = ['600000.SH', '600009.SH', '600016.SH', '600028.SH', '600030.SH',
               '600031.SH', '600036.SH', '600050.SH', '600104.SH', '600196.SH',
               '600276.SH', '600309.SH', '600519.SH', '600547.SH', '600570.SH']

train_start_date = '2015-01-01'
train_stop_date = '2019-08-01'
val_start_date = '2019-08-01'
val_stop_date = '2021-01-03'

token = '27080ec403c0218f96f388bca1b1d85329d563c91a43672239619ef5'

# download and clean
ts_processor = TushareProProcessor("tusharepro", token=token)
df = ts_processor.download_data(ticket_list, train_start_date, val_stop_date, "1D")
df = ts_processor.clean_data(df)
df

# add technical indicators
df = ts_processor.add_technical_indicator(df, config.TECHNICAL_INDICATORS_LIST)
df = ts_processor.clean_data(df)
df
```

### Split training dataset

```
train = ts_processor.data_split(df, train_start_date, train_stop_date)
len(train.tic.unique())
train.tic.unique()
train.head()
train.shape

stock_dimension = len(train.tic.unique())
state_space = stock_dimension * (len(config.TECHNICAL_INDICATORS_LIST) + 2) + 1
print(f"Stock Dimension: {stock_dimension}, State Space: {state_space}")
```

### Train

```
env_kwargs = {
    "stock_dim": stock_dimension,
    "hmax": 1000,
    "initial_amount": 1000000,
    "buy_cost_pct": 6.87e-5,
    "sell_cost_pct": 1.0687e-3,
    "reward_scaling": 1e-4,
    "state_space": state_space,
    "action_space": stock_dimension,
    "tech_indicator_list": config.TECHNICAL_INDICATORS_LIST,
    "print_verbosity": 1,
    "initial_buy": True,
    "hundred_each_trade": True
}
e_train_gym = StockTradingEnv(df=train, **env_kwargs)
env_train, _ = e_train_gym.get_sb_env()
print(type(env_train))

agent = DRLAgent(env=env_train)
DDPG_PARAMS = {
    "batch_size": 256,
    "buffer_size": 50000,
    "learning_rate": 0.0005,
    "action_noise": "normal",
}
POLICY_KWARGS = dict(net_arch=dict(pi=[64, 64], qf=[400, 300]))
model_ddpg = agent.get_model("ddpg", model_kwargs=DDPG_PARAMS, policy_kwargs=POLICY_KWARGS)
trained_ddpg = agent.train_model(model=model_ddpg, tb_log_name='ddpg', total_timesteps=1000)
```

### Trade

```
trade = ts_processor.data_split(df, val_start_date, val_stop_date)
env_kwargs = {
    "stock_dim": stock_dimension,
    "hmax": 1000,
    "initial_amount": 1000000,
    "buy_cost_pct": 6.87e-5,
    "sell_cost_pct": 1.0687e-3,
    "reward_scaling": 1e-4,
    "state_space": state_space,
    "action_space": stock_dimension,
    "tech_indicator_list": config.TECHNICAL_INDICATORS_LIST,
    "print_verbosity": 1,
    "initial_buy": False,
    "hundred_each_trade": True
}
e_trade_gym = StockTradingEnv(df=trade, **env_kwargs)

df_account_value, df_actions = DRLAgent.DRL_prediction(model=trained_ddpg, environment=e_trade_gym)
df_actions.to_csv("action.csv", index=False)
df_actions
```

### Backtest

```
%matplotlib inline
plotter = ReturnPlotter(df_account_value, trade, val_start_date, val_stop_date)
plotter.plot_all()

%matplotlib inline
plotter.plot()

%matplotlib inline
# ticker: SSE 50: 000016
plotter.plot("000016")
```

#### Use pyfolio

```
# CSI 300
baseline_df = plotter.get_baseline("399300")

import pyfolio
from pyfolio import timeseries

daily_return = plotter.get_return(df_account_value)
daily_return_base = plotter.get_return(baseline_df, value_col_name="close")

perf_func = timeseries.perf_stats
perf_stats_all = perf_func(returns=daily_return,
                           factor_returns=daily_return_base,
                           positions=None,
                           transactions=None,
                           turnover_denom="AGB")
print("==============DRL Strategy Stats===========")
perf_stats_all

with pyfolio.plotting.plotting_context(font_scale=1.1):
    pyfolio.create_full_tear_sheet(returns=daily_return,
                                   benchmark_rets=daily_return_base,
                                   set_context=False)
```

### Authors

GitHub usernames: oliverwang15, eitin-infant
github_jupyter
# train.py: What it does step by step

This tutorial will break down what train.py does when it is run, and illustrate the functionality of some of the custom 'utils' functions that are called during a training run, in a way that is easy to understand and follow. Note that parts of the functionality of train.py depend on the config.json file you are using. This tutorial is self-contained, and doesn't use a config file, but for more information on working with this file when using ProLoaF, see [this explainer](https://acs.pages.rwth-aachen.de/public/automation/plf/proloaf/docs/files-and-scripts/config/).

Before proceeding to any of the sections below, please run the following code block:

```
import os
import sys
sys.path.append("../")

import pandas as pd
import utils.datahandler as dh
import matplotlib.pyplot as plt
import numpy as np
```

## Table of contents:

[1. Dealing with missing values in the data](#1.-Dealing-with-missing-values-in-the-data)

[2. Selecting and scaling features](#2.-Selecting-and-scaling-features)

[3. Creating a dataframe to log training results](#3.-Creating-a-dataframe-to-log-training-results)

[4. Exploration](#4.-Exploration)

[5. Main run - creating the training model](#5.-Main-run---creating-the-training-model)

[6. Main run - training the model](#6.-Main-run---training-the-model)

[7. Updating the config, Saving the model & logs](#7.-Updating-the-config,-saving-the-model-&-logs)

## 1. Dealing with missing values in the data

The first thing train.py does after loading the dataset that was specified in your config file is to check for any missing values, and fill them in as necessary. It does this using the function 'utils.datahandler.fill_if_missing'. In the following example, we will load some data that has missing values and examine what the 'fill_if_missing' function does. Please run the code block below to get started.
``` #Load the data sample and prep for use with datahandler functions df = pd.read_csv("../data/fill_missing.csv", sep=";") df['Time'] = pd.to_datetime(df['Time']) df = df.set_index('Time') df = df.astype(float) df_missing_range = df.copy() #Plot the data df.iloc[0:194].plot(kind='line',y='DE_load_actual_entsoe_transparency', figsize = (12, 6), xlabel='Hours', use_index = False) ``` As should be clearly visible in the plot above, the data has some missing values. There is a missing range (a range refers to multiple adjacent values), from around 96-121, as well as two individual values that are missing, at 160 and 192. Please run the code block below to see how 'fill_if_missing' deals with these problems. ``` #Use fill_if_missing and plot the results df=dh.fill_if_missing(df, periodicity=24) df.iloc[0:192].plot(kind='line',y='DE_load_actual_entsoe_transparency', figsize = (12, 6), use_index = False) #TODO: Test this again once interpolation is working ``` As we can see by the printed console messages, fill_if_missing first checks whether there are any missing values. If there are, it checks whether they are individual values or ranges, and handles these cases differently: ### Single missing values: These are simply replaced by the average of the values on either side. ### Missing range: If a range of values is missing, fill_if_missing will use the specified periodicity of the data to provide an estimate of the missing values, by averaging the ranges on either side of the missing range and then adapting the new values to fit the trend. If not specified, the periodicity has a default value of 1, but since we are using hourly data, we will use a periodicity of p = 24. For each missing value at a given position t in the range, fill_if_missing first searches backwards through the data at intervals equal to the periodicity of the data (i.e. t1 = t - 24\*n, n = 1, 2,...) until it finds an existing value. It then does the same thing searching forwards through the data (i.e. 
t2 = t + 24\*n, n = 1, 2, ...), and then it sets the value at t equal to the average of the values at t1 and t2. Run the code block below to see the result for the missing range at 95-121:

```
start = 95
end = 121
p = 24
seas = np.zeros(len(df_missing_range))

#fill the missing values
for t in range(start, end + 1):
    p1 = p
    p2 = p
    while np.isnan(df_missing_range.iloc[t - p1, 0]):
        p1 += p
    while np.isnan(df_missing_range.iloc[t + p2, 0]):
        p2 += p
    seas[t] = (df_missing_range.iloc[t - p1, 0] + df_missing_range.iloc[t + p2, 0]) / 2

#plot the result
ax = plt.gca()
df_missing_range["Interpolated"] = np.nan  # new empty column for the interpolated values
for t in range(start, end + 1):
    df_missing_range.iloc[t, 1] = seas[t]
df_missing_range.iloc[0:192].plot(kind='line', y='DE_load_actual_entsoe_transparency', figsize=(12, 6), use_index=False, ax=ax)
df_missing_range.iloc[0:192].plot(kind='line', y='Interpolated', figsize=(12, 6), use_index=False, ax=ax)
```

The missing values in the range between 95 and 121 have now been filled in, but the end points aren't continuous with the original data, and the new values don't take into account the trend in the data.
To deal with this, the function uses the difference in slope between the start and end points of the missing data range, and the start and end points of the newly interpolated values, to offset the new values so that they line up with the original data:

```
print("Create two straight lines that connect the interpolated start and end points, and the original start and end points.\nThese capture the 'trend' in each case over the missing section")
trend1 = np.poly1d(np.polyfit([start, end], [seas[start], seas[end]], 1))
trend2 = np.poly1d(
    np.polyfit(
        [start - 1, end + 1],
        [df_missing_range.iloc[start - 1, 0], df_missing_range.iloc[end + 1, 0]],
        1,
    )
)

#by subtracting the trend of the interpolated data, then adding the trend of the original data, we match the filled in
#values to what we had before
for t in range(start, end + 1):
    df_missing_range.iloc[t, 1] = seas[t] - trend1(t) + trend2(t)

#plot the result
ax = plt.gca()
df_missing_range.iloc[0:192].plot(kind='line', y='DE_load_actual_entsoe_transparency', figsize=(12, 6), use_index=False, ax=ax)
df_missing_range.iloc[0:192].plot(kind='line', y='Interpolated', figsize=(12, 6), use_index=False, ax=ax)
```

**Please note:**

- Missing data ranges at the beginning or end of the data are handled differently (TODO: Explain how)
- Though the examples shown here use a single column for simplicity's sake, fill_if_missing automatically works on every column (feature) of your original dataframe.

## 2. Selecting and scaling features

The next thing train.py does is to select and scale features in the data as specified in the relevant config file, using the function 'utils.datahandler.scale_all'.
Consider the following dataset:

```
#Load and then plot the new dataset
df_to_scale = pd.read_csv("../data/opsd.csv", sep=";", index_col=0)
df_to_scale.plot(kind='line', y='AT_load_actual_entsoe_transparency', figsize=(8, 4), use_index=False)
df_to_scale.plot(kind='line', y='AT_temperature', figsize=(8, 4), use_index=False)
df_to_scale.head()
```

The above dataset has 55 features (columns), some of which are at totally different scales, as is clearly visible from the y-axes of the graphs above for load and temperature data from Austria.

Depending on our dataset, we may not want to use all of the available features for training. If we wanted to select only the two features highlighted above for training, we could do so by editing the value at the "feature_groups" key in the config.json, which takes the form of a list of dicts like the one below:

```
two_features = [
    {
        "name": "main",
        "scaler": ["minmax", -1.0, 1.0],
        "features": ["AT_load_actual_entsoe_transparency", "AT_temperature"]
    }
]
```

Each dict in the list represents a feature group, and should have the following keys:

- "name" - the name of the feature group
- "scaler" - the scaler used by this feature group (value: a list with entries for the scaler name and scaler-specific attributes). Valid scaler names include 'standard', 'robust' or 'minmax'. For more information on these scalers and their use, please see the [scikit-learn documentation](https://scikit-learn.org/stable/modules/preprocessing.html#preprocessing) or [the documentation for scale_all](https://acs.pages.rwth-aachen.de/public/automation/plf/proloaf/reference/proloaf/proloaf/utils/datahandler.html#scale_all)
- "features" - which features are to be included in the group (value: a list containing the feature names)

The 'scale_all' function will only return the selected features, scaled using the scaler assigned to their feature group.
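For a single feature group, a 'minmax' entry essentially corresponds to scikit-learn's `MinMaxScaler`. The sketch below shows what scaling one group with range (-1, 1) amounts to; the small dataframe is made-up stand-in data (not the OPSD file), and the real `scale_all` additionally keeps the fitted scalers so they can be reused later:

```python
import pandas as pd
from sklearn.preprocessing import MinMaxScaler

# toy stand-in for the dataset; the values are illustrative only
df = pd.DataFrame({
    "AT_load_actual_entsoe_transparency": [5000.0, 7000.0, 9000.0],
    "AT_temperature": [-5.0, 10.0, 25.0],
})

# one feature group, written the same way as in the config
group = {
    "name": "main",
    "scaler": ["minmax", -1.0, 1.0],
    "features": ["AT_load_actual_entsoe_transparency", "AT_temperature"],
}

# build the scaler from the config entry and apply it to the group's features
scaler = MinMaxScaler(feature_range=(group["scaler"][1], group["scaler"][2]))
scaled = pd.DataFrame(scaler.fit_transform(df[group["features"]]),
                      columns=group["features"])
print(scaled)  # every column now spans exactly [-1, 1]
```

The same scaler object can later invert the transformation via `scaler.inverse_transform`, which is why keeping the fitted scalers around is useful.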
Here we only have one group, 'main', which uses the 'minmax' scaler:

```
#Select, scale and plot the features as specified by the two_features list (see above)
selected_features, scalers = dh.scale_all(df_to_scale, two_features)
selected_features.plot(figsize=(12, 6), use_index=False)
print("Currently used scalers:")
print(scalers)
```

As you can see, both of our features (load and temperature for Austria) have now been scaled to fit within the same range (between -1 and 1).

Let's say we also wanted to include the weekday data from the data set in our training. Let us first take a look at what the weekday features look like. Here are the first 500 hours (approx. 3 weeks) of weekday_0:

```
df_to_scale[:500].plot(kind='line', y='weekday_0', figsize=(12, 4), use_index=False)
```

As we can see, these features are already within the range [0,1] and thus don't need to be scaled. So we can include them in a second feature group called 'aux'. Note, features which we deliberately aren't scaling should go in a group with this name. The value of the "feature_groups" key in the config.json could then look like this:

```
feature_groups = [
    {
        "name": "main",
        "scaler": ["minmax", 0.0, 1.0],
        "features": ["AT_load_actual_entsoe_transparency", "AT_temperature"]
    },
    {
        "name": "aux",
        "scaler": None,
        "features": ["weekday_0", "weekday_1", "weekday_2", "weekday_3",
                     "weekday_4", "weekday_5", "weekday_6"]
    }
]
```

We now have two feature groups, 'main' (which uses the 'minmax' scaler, this time with a range between 0 and 1) and 'aux' (which uses no scaler):

```
#Select, scale and plot the features as specified by feature_groups (see above)
selected_features, scalers = dh.scale_all(df_to_scale, feature_groups)
selected_features[23000:28000].plot(figsize=(12, 6), use_index=False)
print("Currently used scalers:")
print(scalers)
```

We can see that all of our selected features now fit between 0 and 1.
From this point onward, train.py will only work with our selected, scaled features.

```
print("Currently selected and scaled features: ")
print(selected_features.columns)
```

### Selecting scalers

When selecting which scaler to use, it is important that whichever one we choose does not adversely affect the shape of the distribution of our data, as this would distort our results. For example, this is the distribution of the feature "AT_load_actual_entsoe_transparency" before scaling:

```
df_unscaled = pd.read_csv("../data/opsd.csv", sep=";", index_col=0)
df_unscaled["AT_load_actual_entsoe_transparency"].plot.kde()
```

And this is the distribution after scaling using the minmax scaler, as we did above:

```
selected_features["AT_load_actual_entsoe_transparency"].plot.kde()
```

It is clear that in both cases, the distribution functions have a similar shape. The axes are scaled differently, but both graphs have their maxima to the right of zero. On the other hand, this is what the distribution looks like if we use the robust scaler on this data:

```
feature_robust = [
    {
        "name": "main",
        "scaler": ["robust", 0.25, 0.75],
        "features": ["AT_load_actual_entsoe_transparency"]
    }
]

selected_feat_robust, scalers_robust = dh.scale_all(df_to_scale, feature_robust)
selected_feat_robust["AT_load_actual_entsoe_transparency"].plot.kde()
```

Not only have the axes been scaled, but the data has also been shifted so that the maxima are centered around zero. The same problem can be observed with the "standard" scaler:

```
feature_std = [
    {
        "name": "main",
        "scaler": ["standard"],
        "features": ["AT_load_actual_entsoe_transparency"]
    }
]

selected_feat_std, scalers_std = dh.scale_all(df_to_scale, feature_std)
selected_feat_std["AT_load_actual_entsoe_transparency"].plot.kde()
```

As a result, the minmax scaler is the best option for this feature.
## 3. Creating a dataframe to log training results

Having already filled any missing values in our data, and scaled and selected the features we want to use for training, at this point we use the function 'utils.loghandler.create_log' to create a dataframe which will log the results of our training. This dataframe is saved as a .csv file at the end of the main training run. This allows different training runs, e.g. using new data or different parameters, to be compared with one another, so that we can monitor any changes in performance and compare the most recent run to the best performance achieved so far.

'create_log' creates the dataframe by reading which features we'll be logging from the [log.json file](https://acs.pages.rwth-aachen.de/public/automation/plf/proloaf/docs/files-and-scripts/log/), and then:

- loading an existing log file (from MAIN_PATH/\<log_path\>/\<model_name\>/\<model_name\>+"_training.csv" - see the [config.json explainer](https://acs.pages.rwth-aachen.de/public/automation/plf/proloaf/docs/files-and-scripts/config/) for more information)
- or creating a new dataframe from scratch, with the required features.

The newly created dataframe, log_df, is used at various later points in train.py (see sections [4](#4.-Exploration), [6](#6.-Main-run---training-the-model) and [7](#7.-Updating-the-config,-saving-the-model-&-logs) for more info).

## 4. Exploration

From this point on, it is assumed that we are working with prepared and scaled data (see the earlier sections for more details). The exploration phase is optional, and is only carried out if the 'exploration' key in the config.json file is set to 'true'. The purpose of exploration is to tune our hyperparameters before the main training run. This is done by using [Optuna](https://optuna.org/) to optimize our [objective function](#Objective-function).
Optuna iterates through a number of trials - either a fixed number, or until timeout (as specified in the tuning.json file, [see below](#tuning.json) for more info) - with the purpose of finding the hyperparameter settings that result in the smallest validation loss. The validation loss is an indicator of the quality of the prediction: it is the discrepancy between the predicted values and the actual values (targets) from the validation dataset, as determined by one of the metrics from [utils.metrics](https://acs.pages.rwth-aachen.de/public/automation/plf/proloaf/reference/proloaf/proloaf/utils/eval_metrics.html). The metric used is specified when train.py is called.

Once Optuna is done iterating, a summary of the trials is printed (number of trials, details of the best trial), and if the new best trial represents an improvement over the previously logged best, you will be asked whether you would like to overwrite the config.json with the newly found hyperparameters, so that they can be used for future training.

Optuna also has built-in parallelization, which we can opt to use by setting the 'parallel_jobs' key in the config.json to 'true'.

### Objective function

The previously mentioned objective function is a callable which Optuna uses for its optimization. In our case, it is the function 'mh.tuning_objective', which does the following per trial:

- Suggests values for each hyperparameter as per the [tuning.json](#tuning.json)
- Creates (see [section 5](#5.-Main-run---creating-the-training-model)) and trains (see [section 6](#6.-Main-run---training-the-model)) a model using these hyperparameters and our selected features and scalers
- Returns a score for the model in the form of the validation loss after training.
### tuning.json

The tuning.json file ([see explainer](https://acs.pages.rwth-aachen.de/public/automation/plf/proloaf/docs/files-and-scripts/config/#tuning-config)) is located in the same folder as the other configs, under proloaf/targets/\<model name\>, and contains information about the hyperparameters to be tuned, as well as settings that limit how long Optuna should run. It can look like this, for example:

```json
{
    "number_of_tests": 100,
    "settings": {
        "learning_rate": {
            "function": "suggest_loguniform",
            "kwargs": {
                "name": "learning_rate",
                "low": 0.000001,
                "high": 0.0001
            }
        },
        "batch_size": {
            "function": "suggest_int",
            "kwargs": {
                "name": "batch_size",
                "low": 32,
                "high": 120
            }
        }
    }
}
```

- "number_of_tests": 100 here means that Optuna will stop after 100 trials. Alternatively, "timeout": \<number in seconds\> would limit the maximum duration for which Optuna would run. If both are specified, Optuna stops as soon as the first criterion is met. If neither is specified, Optuna will wait for a termination signal (Ctrl+C or SIGTERM).
- The "settings" keyword has a dictionary as its value. This dictionary contains a keyword for each of the hyperparameters that Optuna is to optimize, e.g. "learning_rate", "batch_size".
- Each hyperparameter keyword takes yet another dictionary as its value, with the keywords "function" and "kwargs".
- "function" takes as its value a function name beginning with "suggest_", as described in the [Optuna docs](https://optuna.readthedocs.io/en/v1.4.0/reference/trial.html#optuna.trial.Trial.suggest_loguniform). These "suggest" functions are used to suggest hyperparameter values by sampling from a range with the relevant distribution.
- "kwargs" has keywords for the arguments required by the "suggest" functions - typically "low" and "high" for the start and end points of the desired range - as well as a "name" keyword which stores the hyperparameter name.
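Each "function" name resolves to a method on the Optuna trial object, so an objective function can dispatch on the settings dictionary with `getattr`. The sketch below illustrates that pattern only; it uses a dummy trial class so it runs stand-alone (a real objective receives an `optuna.trial.Trial`), and note that recent Optuna versions spell `suggest_loguniform` as `suggest_float(..., log=True)`:

```python
import random

class DummyTrial:
    """Minimal stand-in for optuna.trial.Trial, just enough for the dispatch pattern."""
    def suggest_float(self, name, low, high, log=False):
        return random.uniform(low, high)

    def suggest_int(self, name, low, high):
        return random.randint(low, high)

# settings as they would be parsed from tuning.json (spelled for newer Optuna)
settings = {
    "learning_rate": {"function": "suggest_float",
                      "kwargs": {"name": "learning_rate", "low": 1e-6, "high": 1e-4, "log": True}},
    "batch_size": {"function": "suggest_int",
                   "kwargs": {"name": "batch_size", "low": 32, "high": 120}},
}

def suggest_hyperparameters(trial, settings):
    # resolve each entry to a trial.suggest_* call with its kwargs
    return {name: getattr(trial, spec["function"])(**spec["kwargs"])
            for name, spec in settings.items()}

hparams = suggest_hyperparameters(DummyTrial(), settings)
print(hparams)  # e.g. {'learning_rate': 5.3e-05, 'batch_size': 87}
```

A real trial would use these suggested values to build and train a model, returning the validation loss for Optuna to minimize.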
### Notes

- When running train.py using Gitlab CI, the prompt about whether to overwrite the config.json with the newly found values is disabled (as are all other similar prompts).
- Though the objective function takes log_df (from the previous section) as a parameter, as it is required by the 'train' function, none of the training runs from this exploration phase are actually logged. Only the main run is logged. (See the next section for more.)

## 5. Main run - creating the training model

Now that the data is free of missing values, we've selected which features we'll use, and the (optional) hyperparameter exploration is done, we are almost ready to create the model that will be used for the training. The last step we need to take before we can do that is to split our data into training, validation and testing sets. During this process, we also transition from the familiar dataframe structure we've been using up until now to a new custom structure called CustomTensorData.

### Splitting the data

We do this using the function 'utils.datahandler.transform', which splits our 'selected_features' dataframe into three new dataframes for the three sets outlined above. It does this according to the 'train_split' and 'validation_split' parameters in the config file, as follows: each of the 'split' parameters is multiplied by the length of the 'selected_features' dataframe, with the result converted to an integer. These integers are the indices for the split. For example, with 'train_split' = 0.6 and 'validation_split' = 0.8, the first 60% of the data entries would be stored in the new training dataframe, the next 20% would go to the validation dataframe, and the final 20% would be used for the testing dataframe.

### Transformation into Tensors

Each of the three new dataframes is then transformed into the new CustomTensorData structure.
This is done because the new structure is better suited for use with our RNN, and it is accomplished using 'utils.tensorloader.make_dataloader'. A single CustomTensorData structure has three components, each of which comprises a different set of features as defined in the config file:

- inputs1 - Encoder features: features used as input for the encoder RNN. They provide information from a certain number of timesteps leading up to the period we wish to forecast, i.e. from historical data.
- inputs2 - Decoder features: features used as input for the decoder RNN. They provide information from the same time steps as the period we wish to forecast, i.e. from data about the future that we know in advance, e.g. weather forecasts, day of the week, etc.
- targets - the features we are trying to forecast.

These three components contain the features described above, but reorganized from the familiar tabular dataframe format into a series of samples of a given length ([horizon](#Horizons)). To understand this change, please consider the image below, which focuses on a single feature that is only 7 time steps long (Feature 1 - with data that is merely illustrative).

![Tensor Structure](figures/tensor-structure.jpg)

The final format, as seen in the two examples on the right, consists of a number of samples (rows) of a given length (horizon), such that the first sample begins at the first time step of the range given to 'make_dataloader', and the last sample ends at the last time step in that range. Each consecutive sample also begins one time step later than the previous sample (the one above it).
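The reorganization shown in the image can be reproduced for the 7-step, single-feature case with a few lines of NumPy. This is an illustration of the sample layout only, not the actual `make_dataloader` code:

```python
import numpy as np

def make_samples(feature: np.ndarray, horizon: int) -> np.ndarray:
    """Stack overlapping windows of length `horizon`, each shifted by one time step."""
    n_samples = len(feature) - horizon + 1
    return np.stack([feature[i : i + horizon] for i in range(n_samples)])

feature = np.arange(1.0, 8.0)            # 7 time steps of illustrative data
samples = make_samples(feature, horizon=3)
print(samples)
# the first sample starts at the first time step, the last sample ends at the
# last time step, and each row begins one step later than the row above it
```

With 7 time steps and a horizon of 3 this yields 5 samples, matching the sliding-window layout described above.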
When we consider a set of multiple features (with many more than 7 timesteps), each feature is transformed individually and then all features are combined into a 3D tensor, as depicted in the following image:

![3D Tensor](figures/3d-tensor.jpg)

### Horizons

So far, we have been using the term 'horizon' to refer to how many time steps are contained in one sample of our custom tensor structure. However, the three components (inputs1, inputs2 and targets) do not all share the same horizon length. In fact, there are two different parameters for horizon length, namely 'history_horizon' and 'forecast_horizon', and the different components use them as illustrated in the following diagram:

![Horizons](figures/horizon.jpg)

As we can see, sample 1 of inputs1 contains the 'history_horizon' timesteps up to a certain point, while sample 1 of inputs2 and targets contains the 'forecast_horizon' timesteps that follow from that point onwards. This is because, as previously mentioned, inputs1 provides historical data, while inputs2 and targets both provide "future data" - data from the future that we know ahead of making our forecast. **NB:** our "future data" can include uncertainty, for example if we use existing forecasts as input.

With increasing sample number, we have a kind of moving window which shifts towards the right in the image above. So that all three components contain the same number of samples, the inputs1 tensor does not include the final 'forecast_horizon' timesteps, while inputs2 and targets do not contain the first 'history_horizon' timesteps.

**Note:** for illustration purposes, the above image shows which timesteps are in a given sample of the three components, in relation to the original dataframe. The components are nevertheless at this point already stored in the custom tensor structure described earlier in this subsection.

### Model Creation

The final step in this stage is to create the model which we will be training.
We do this using 'utils.modelhandler.make_model'. This function takes the number of features in component 1 and component 2 (inputs1 and inputs2) of the training tensor, as well as our scalers from [section 2](#2.-Selecting-and-scaling-features) of this guide and other parameters from the config file, and returns an EncoderDecoder model as defined in 'utils.models'.

## 6. Main run - training the model

At this point, we have everything we need to train our model: our training, validation and testing datasets, stored as CustomTensorData structures, and the instantiated EncoderDecoder model we are going to be training. We pass these, along with the hyperparameters in our config file (e.g. learning rate, batch size etc.), to the function 'utils.modelhandler.train'. This function returns:

- a trained model
- our logging dataframe (see [section 3](#3.-Creating-a-dataframe-to-log-training-results)) updated with the results of the training
- the minimum validation loss after training
- the model's score as calculated by the function 'performance_test', in our case using the metric 'mis' (Mean Interval Score) as defined in 'utils.metrics'

What follows is a short breakdown of what 'utils.modelhandler.train' does. **Reminder:** 'loss' generally refers to a measure of how far off our predictions are from the actual values (targets) we are trying to predict.

### How training works

Training lasts for a number of epochs, given by the parameter 'max_epochs' in the config file. During each epoch, we perform the training step, followed by the validation step.
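The per-epoch loop just described follows the standard PyTorch training pattern. Here is a condensed sketch with a toy linear model standing in for the EncoderDecoder; the metric, data and shapes are placeholders, not ProLoaF's actual `train` function:

```python
import torch
from torch import nn

torch.manual_seed(0)

# toy stand-ins: a linear model instead of the EncoderDecoder, random data
model = nn.Linear(4, 1)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-2)
criterion = nn.MSELoss()                     # stand-in for the configured metric
inputs, targets = torch.randn(32, 4), torch.randn(32, 1)

# training step
model.train()
epoch_loss = 0.0
for x, y in zip(inputs, targets):
    prediction = model(x)                    # predict from the current sample
    optimizer.zero_grad()                    # zero the gradients
    loss = criterion(prediction, y)          # loss of this prediction
    loss.backward()                          # compute new gradients
    optimizer.step()                         # update the parameters
    epoch_loss += loss.item() / len(inputs)  # accumulate the epoch loss

# validation step
model.eval()
with torch.no_grad():
    validation_loss = criterion(model(inputs), targets).item()

print(epoch_loss, validation_loss)
```

The two subsections that follow walk through exactly these training and validation steps in more detail.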
#### Training step:

Use .train() on the model to set it to training mode, then loop through every sample in our training data tensor, and in each iteration of the loop:

- get the model's prediction using the current sample of inputs1 and inputs2
- zero the gradients of our optimizer
- calculate the loss of the prediction we just made for this sample (using whichever metric is specified in the config)
- compute new gradients using loss.backward()
- update the optimizer's parameters (by taking an optimization step) using the new gradients
- update the epoch loss (the loss for the current sample is divided by the total number of samples in the training data and added to the current epoch loss)

This step iteratively teaches the model to produce better predictions.

#### Validation step:

Use .eval() on the model to set it to evaluation mode, then loop through every sample in our validation data tensor, and in each iteration of the loop:

- get the model's prediction using the current sample of inputs1 and inputs2
- update the validation loss (the loss for the current sample is calculated using whichever metric is specified in the config, then divided by the total number of samples in the validation data and added to the current validation loss)

This step gives us a way to track whether our model is improving, by validating the training on new data. It ensures that our model doesn't only work on our specific training data, but is actually being trained to predict the behaviour of our target feature.

The training uses early stopping, meaning that it stops before 'max_epochs' iterations have been reached if a certain number of epochs go by without any improvement to the model.

**Note:** If you are using Tensorboard's SummaryWriter to log training performance, this is when it gets logged.

#### Testing step:

Here, the function 'performance_test' is used to calculate the model's score using the testing data set.
As mentioned at the top of this section, this function calculates the Mean Interval Score along the horizon. This score is saved along with the model, see [section 7](#7.-Updating-the-config,-saving-the-model-&-logs).

## 7. Updating the config, saving the model & logs

The last thing we need to do is save our current model and the relevant scores and logs, so that we can use it in the future and monitor changes in performance.

First, the model is saved using 'modelhandler.save'. By default, this function only saves the model if the most recently achieved score is better than the previous best. In this case, and only if running in exploration mode, the config is updated with the most recent parameters before saving. <br> If no improvement was achieved, the model can still be saved by opting to do so when prompted. In this case, the config parameters will not be updated before saving.

Saving the model entails calling torch.save() to store the model at the location given by the config parameter "output_path", and then saving the config to "config_path" using the function 'utils.confighandler.write_config'.

Lastly, the logs are saved using the function 'utils.loghandler.end_logging', which writes the logs to a .csv stored at:<br> MAIN_PATH/\<log_path\>/\<model_name\>/\<model_name\>+"_training.csv" (the same location the log file was read in from).

**Note:** Things work a little differently if using Tensorboard logging (TODO: expand). <br> In this case, when the 'modelhandler.train' function is called during the exploration phase (see [section 4](#4.-Exploration)), the config is automatically updated with the hyperparameter values from the latest trial, right after that trial ends.
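The save logic described above boils down to `torch.save(model, output_path)` plus writing the config back to disk. The sketch below illustrates only the config round trip, using plain `json` with illustrative keys and a temporary path; the actual `write_config` implementation may differ:

```python
import json
import os
import tempfile

# illustrative config values, not ProLoaF's real defaults
config = {"model_name": "example", "max_epochs": 50, "train_split": 0.6}

# write the (possibly updated) config back to its path ...
config_path = os.path.join(tempfile.mkdtemp(), "config.json")
with open(config_path, "w") as f:
    json.dump(config, f, indent=4)

# ... and reload it to confirm the round trip is lossless
with open(config_path) as f:
    reloaded = json.load(f)
print(reloaded == config)  # True
```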
```
import pandas as pd
from sklearn.preprocessing import LabelEncoder
import numpy as np
```

## NOTES

* For KDD99 feature description, check http://kdd.ics.uci.edu/databases/kddcup99/task.html

```
kdd99_file = "kddcup.data.corrected"
kdd99_df = pd.read_csv(kdd99_file, header=None)
print(kdd99_df.shape)
kdd99_df.head()

kdd99_df.drop_duplicates(inplace=True)
kdd99_df.shape

kdd99_df.columns = ['duration', 'protocol_type', 'service', 'flag', 'src_bytes', 'dst_bytes',
                    'land', 'wrong_fragment', 'urgent', 'hot', 'num_failed_logins', 'logged_in',
                    'num_compromised', 'root_shell', 'su_attempted', 'num_root',
                    'num_file_creations', 'num_shells', 'num_access_files', 'num_outbound_cmds',
                    'is_hot_login', 'is_guest_login', 'count', 'srv_count', 'serror_rate',
                    'srv_serror_rate', 'rerror_rate', 'srv_rerror_rate', 'same_srv_rate',
                    'diff_srv_rate', 'srv_diff_host_rate', 'dst_host_count', 'dst_host_srv_count',
                    'dst_host_same_srv_rate', 'dst_host_diff_srv_rate',
                    'dst_host_same_src_port_rate', 'dst_host_srv_diff_host_rate',
                    'dst_host_serror_rate', 'dst_host_srv_serror_rate', 'dst_host_rerror_rate',
                    'dst_host_srv_rerror_rate', 'attack_type']
kdd99_df.head()

# add a feature to calculate the bytes difference between source and destination
kdd99_df['src_dst_bytes_diff'] = kdd99_df['dst_bytes'] - kdd99_df['src_bytes']
```

## Data Exploration

```
def get_percentile(col):
    result = {'Feature': col.name,
              'min': np.percentile(col, 0),
              '1%': np.percentile(col, 1),
              '5%': np.percentile(col, 5),
              '15%': np.percentile(col, 15),
              '25%': np.percentile(col, 25),
              '50%': np.percentile(col, 50),
              '75%': np.percentile(col, 75),
              '85%': np.percentile(col, 85),
              '95%': np.percentile(col, 95),
              '99%': np.percentile(col, 99),
              '99.9%': np.percentile(col, 99.9),
              'max': np.percentile(col, 100)}
    return result

# find columns with null
isnull_df = kdd99_df.isnull().sum()
isnull_df.loc[isnull_df > 0]  # no null in any column

kdd99_df['attack_type'].value_counts() / kdd99_df['attack_type'].shape[0] * 100

int_types = [col for col in
kdd99_df.columns if kdd99_df[col].dtype == 'int64']
print(int_types)
print()
float_types = [col for col in kdd99_df.columns if kdd99_df[col].dtype == 'float64']
print(float_types)
print()
o_types = [col for col in kdd99_df.columns if kdd99_df[col].dtype == 'O']
print(o_types)
print()

# lime needs categorical feature names (values will still be numerical)
cat_features = o_types
cat_features.extend(['land', 'logged_in', 'root_shell', 'su_attempted', 'is_hot_login', 'is_guest_login'])
cat_features

# check the values of each categorical feature
print(kdd99_df['protocol_type'].unique())
print()
print(kdd99_df['service'].unique())
print()
print(kdd99_df['flag'].unique())
print()
print(kdd99_df['attack_type'].unique())
print()
print(kdd99_df['land'].value_counts())
print()
print(kdd99_df['logged_in'].value_counts())
print()
print(kdd99_df['root_shell'].value_counts())
print()
print(kdd99_df['su_attempted'].value_counts())
print()
print(kdd99_df['is_hot_login'].value_counts())
print()
print(kdd99_df['is_guest_login'].value_counts())
print()

# is_hot_login is the same for all attack_type values, drop it
kdd99_df.loc[kdd99_df['is_hot_login'] == 1]['attack_type']
kdd99_df.drop(['is_hot_login'], inplace=True, axis=1)
cat_features.remove('is_hot_login')

y = kdd99_df['attack_type']
y.value_counts()

# label encoding
number = LabelEncoder()
for cat_col in cat_features:
    kdd99_df[cat_col] = number.fit_transform(kdd99_df[cat_col].astype('str'))
    kdd99_df[cat_col] = kdd99_df[cat_col].astype('object')
kdd99_df.head()

kdd99_df['attack_type'].value_counts()
kdd99_df.dtypes
kdd99_df.var()

num_dist_dct = {}
idx = 0
for col in kdd99_df.columns:
    if kdd99_df[col].dtype == 'O':
        continue
    num_dist_dct[idx] = get_percentile(kdd99_df[col])
    idx += 1
num_dist_df = pd.DataFrame(num_dist_dct).T
num_dist_df = num_dist_df[['Feature', 'min', '1%', '5%', '15%', '25%', '50%',
                           '75%', '85%', '95%', '99%', '99.9%', 'max']]
num_dist_df

# check outliers
print(kdd99_df.loc[kdd99_df['src_bytes'] >
61298]['attack_type'].value_counts())
print()
print(kdd99_df.loc[kdd99_df['dst_bytes'] > 125015]['attack_type'].value_counts())
print()
print(kdd99_df.loc[kdd99_df['src_dst_bytes_diff'] < -9178]['attack_type'].value_counts())
print()
print(kdd99_df.loc[kdd99_df['src_dst_bytes_diff'] > 124758]['attack_type'].value_counts())
print()
```

It seems that some attack types (such as 0 and 22) make up the majority of the outlier values, so we will not replace the outliers with any other value here.

```
# check constant values
print(kdd99_df.loc[kdd99_df['wrong_fragment'] > 1]['attack_type'].value_counts())  # contains the majority of 20 (teardrop)
print()
print(kdd99_df.loc[kdd99_df['urgent'] > 0]['attack_type'].value_counts())
print()
print(kdd99_df.loc[kdd99_df['hot'] > 20]['attack_type'].value_counts())
print()
print(kdd99_df.loc[kdd99_df['num_failed_logins'] > 0]['attack_type'].value_counts())  # contains the majority of 3 (guess_passwd)
print()
print(kdd99_df.loc[kdd99_df['num_compromised'] > 1]['attack_type'].value_counts())
print()
print(kdd99_df.loc[kdd99_df['num_root'] > 9]['attack_type'].value_counts())
print()
print(kdd99_df.loc[kdd99_df['num_file_creations'] > 1]['attack_type'].value_counts())
print()
print(kdd99_df.loc[kdd99_df['num_shells'] > 0]['attack_type'].value_counts())
print()
print(kdd99_df.loc[kdd99_df['num_access_files'] > 1]['attack_type'].value_counts())
print()

kdd99_df.drop('num_outbound_cmds', inplace=True, axis=1)
print(kdd99_df.shape)
print(y.shape)

object_cols = [col for col in kdd99_df.columns if kdd99_df[col].dtype == 'O']
print(object_cols)

# Just use the raw 40 features, and see how they run in a tree model
kdd99_df['attack_type_cat'] = y  # use the original strings as labels for multi-class prediction
print(kdd99_df['attack_type_cat'].value_counts())
print(kdd99_df['attack_type'].value_counts())
kdd99_df.to_csv('kdd99_raw40.csv', index=False)
```
```
! nvidia-smi
```

# Install

Install the Transformers library from HuggingFace:

```
! pip install transformers -q
! pip install fastai2 -q
```

# Import

We will import:

```
from transformers import GPT2LMHeadModel, GPT2TokenizerFast
```

# Download Pre-trained Model

Download the weights of the model that has already been trained, named GPT2:

```
pretrained_weights = 'gpt2'
tokenizer = GPT2TokenizerFast.from_pretrained(pretrained_weights)
model = GPT2LMHeadModel.from_pretrained(pretrained_weights)
```

Use the tokenizer to segment the text. With this HuggingFace tokenizer, `encode` tokenizes and numericalizes (converts to numbers) in a single step:

```
ids = tokenizer.encode("A lab at Florida Atlantic University is simulating a human cough")
ids
```

Alternatively, we can split this into 2 steps:

```
# toks = tokenizer.tokenize("A lab at Florida Atlantic University is simulating a human cough")
# toks, tokenizer.convert_tokens_to_ids(toks)
```

`decode` converts back into the original text:

```
tokenizer.decode(ids)
```

# Generate text

```
import torch

t = torch.LongTensor(ids)[None]
preds = model.generate(t)
preds.shape
preds[0]
tokenizer.decode(preds[0].numpy())
```

# Fastai

```
from fastai2.text.all import *

path = untar_data(URLs.WIKITEXT_TINY)
path.ls()
df_train = pd.read_csv(path/"train.csv", header=None)
df_valid = pd.read_csv(path/"test.csv", header=None)
df_train.head()
all_texts = np.concatenate([df_train[0].values, df_valid[0].values])
len(all_texts)
```

# Creating TransformersTokenizer Transform

We take the Transformers tokenizer and build a fastai Transform from it by defining `encodes`, `decodes` and `setups`:

```
class TransformersTokenizer(Transform):
    def __init__(self, tokenizer): self.tokenizer = tokenizer
    def encodes(self, x):
        toks = self.tokenizer.tokenize(x)
        return tensor(self.tokenizer.convert_tokens_to_ids(toks))
    def decodes(self, x): return TitledStr(self.tokenizer.decode(x.cpu().numpy()))
```

In `encodes` we do not use `tokenizer.encode`, because internally it performs preprocessing beyond tokenizing and numericalizing that we do not want at this point, and `decodes` will
return TitledStr แทนที่ string เฉย ๆ จะได้รองรับ show method ``` # list(range_of(df_train)) # list(range(len(df_train), len(all_texts))) ``` เราจะเอา Transform ที่สร้างด้านบน ไปใส่ TfmdLists โดย split ตามลำดับที่ concat ไว้ และ กำหนด dl_type DataLoader Type เป็น LMDataLoader สำหรับใช้ในงาน Lanugage Model ``` splits = [list(range_of(df_train)), list(range(len(df_train), len(all_texts)))] tls = TfmdLists(all_texts, TransformersTokenizer(tokenizer), splits=splits, dl_type=LMDataLoader) # tls ``` ดูข้อมูล Record แรก ของ Training Set ``` tls.train[0].shape, tls.train[0] ``` ดูเป็นข้อมูล ที่ decode แล้ว ``` # show_at(tls.train, 0) ``` ดูข้อมูล Record แรก ของ Validation Set ``` tls.valid[0].shape, tls.valid[0] ``` ดูเป็นข้อมูล ที่ decode แล้ว ``` # show_at(tls.valid, 0) ``` # DataLoaders สร้าง DataLoaders เพื่อส่งให้กับ Model ด้วย Batch Size ขนาด 64 และ Sequence Length 1024 ตามที่ GPT2 ใช้ ``` bs, sl = 4, 1024 dls = tls.dataloaders(bs=bs, seq_len=sl) dls dls.show_batch(max_n=5) ``` จะได้ DataLoader สำหรับ Lanugage Model ที่มี input และ label เหลื่อมกันอยู่ 1 Token สำหรับให้โมเดล Predict คำต่อไปของประโยค # Preprocessing ไว้ก่อนให้หมด อีกวิธีนึงคือ เราสามารถ Preprocessing ข้อมูลทั้งหมดไว้ก่อนได้เลย ``` # def tokenize(text): # toks = tokenizer.tokenize(text) # return tensor(tokenizer.convert_tokens_to_ids(toks)) # tokenized = [tokenize(t) for t in progress_bar(all_texts)] # len(tokenized), tokenized[0] ``` เราจะประกาศ TransformersTokenizer ใหม่ ให้ใน encodes ไม่ต้องทำอะไร (แต่ถ้าไม่เป็น Tensor มาก็ให้ tokenize ใหม่) ``` # class TransformersTokenizer(Transform): # def __init__(self, tokenizer): self.tokenizer = tokenizer # def encodes(self, x): # return x if isinstance(x, Tensor) else tokenize(x) # def decodes(self, x): # return TitledStr(self.tokenizer.decode(x.cpu().numpy())) ``` แล้วจึงสร้าง TfmdLists โดยส่ง tokenized (ข้อมูลทั้งหมดที่ถูก tokenize เรียบร้อยแล้ว) เข้าไป ``` # tls = TfmdLists(tokenized, TransformersTokenizer(tokenizer), splits=splits, dl_type=LMDataLoader) # 
dls = tls.dataloaders(bs=bs, seq_len=sl) # dls.show_batch(max_n=5) ``` # Fine-tune Model Because HuggingFace models return their output as a tuple consisting of the prediction plus additional activations for use in other tasks, which we do not need in this case, we create an after_pred Callback to intercept the output and return only the prediction, so that the Loss Function keeps working as usual ``` class DropOutput(Callback): def after_pred(self): self.learn.pred = self.pred[0] ``` Inside a callback we can refer to the model's prediction with self.pred, but that reference is read-only; to write we must use the full reference self.learn.pred. Now we can create a learner to train the model ``` learn = None torch.cuda.empty_cache() Perplexity?? learn = Learner(dls, model, loss_func=CrossEntropyLossFlat(), cbs=[DropOutput], metrics=Perplexity()).to_fp16() learn ``` Check the model's performance before fine-tuning: the first number is the Validation Loss, the second is the metric, here Perplexity ``` learn.validate() ``` We get a Perplexity of 25.6, which is not bad at all # Training Before we start training, we call lr_find to look for a Learning Rate ``` learn.lr_find() ``` Then train for just 1 Epoch ``` learn.fit_one_cycle(1, 3e-5) ``` We trained for only 1 Epoch without tuning anything, and the model barely improved, because it was already very good. Next we try using the model to generate text, following the example format in the Validation Set ``` df_valid.head(1) prompt = "\n = Modern economy = \n \n The modern economy is driven by data, and that trend is being accelerated by" prompt_ids = tokenizer.encode(prompt) # prompt_ids inp = torch.LongTensor(prompt_ids)[None].cuda() inp.shape preds = learn.model.generate(inp, max_length=50, num_beams=5, temperature=1.6) preds.shape preds[0] tokenizer.decode(preds[0].cpu().numpy()) ``` # Credit * https://dev.fast.ai/tutorial.transformers * https://github.com/huggingface/transformers ``` ```
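As an aside, the one-token offset between input and label that a language-model DataLoader produces can be sketched in plain Python. This is a toy illustration only, not the fastai implementation; `lm_input_target` is a made-up helper name:

```python
# Toy sketch of how a token stream becomes (input, target) pairs for a
# language model: the target is the input shifted left by one token.
def lm_input_target(token_ids, seq_len):
    pairs = []
    for start in range(0, len(token_ids) - seq_len, seq_len):
        x = token_ids[start:start + seq_len]
        y = token_ids[start + 1:start + seq_len + 1]
        pairs.append((x, y))
    return pairs

pairs = lm_input_target(list(range(10)), seq_len=4)
print(pairs[0])  # ([0, 1, 2, 3], [1, 2, 3, 4])
```

Each target token is the element that follows the corresponding input token, which is exactly the "predict the next word" setup described above.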
# An Introduction to SageMaker Random Cut Forests ***Unsupervised anomaly detection on time-series data using the Random Cut Forest algorithm.*** --- 1. [Introduction](#Introduction) 1. [Setup](#Setup) 1. [Training](#Training) 1. [Inference](#Inference) 1. [Epilogue](#Epilogue) # Introduction *** Amazon SageMaker Random Cut Forest (RCF) is an algorithm designed to detect anomalous data points within a dataset. Examples of when anomalies are important to detect include when website activity uncharacteristically spikes, when temperature data diverges from a periodic behavior, or when changes to public transit ridership reflect the occurrence of a special event. In this notebook, we will use the SageMaker RCF algorithm to train an RCF model on the Numenta Anomaly Benchmark (NAB) NYC Taxi dataset, which records the amount of New York City taxi ridership over the course of six months. We will then use this model to predict anomalous events by emitting an "anomaly score" for each data point. The main goals of this notebook are: * to learn how to obtain, transform, and store data for use in Amazon SageMaker; * to create an AWS SageMaker training job on a data set to produce an RCF model; * to use the RCF model to perform inference with an Amazon SageMaker endpoint. The following are ***not*** goals of this notebook: * to deeply understand the RCF model, * to understand how the Amazon SageMaker RCF algorithm works. If you would like to know more, please check out the [SageMaker RCF Documentation](https://docs.aws.amazon.com/sagemaker/latest/dg/randomcutforest.html). # Setup *** *This notebook was created and tested on an ml.m4.xlarge notebook instance.* Our first step is to set up our AWS credentials so that AWS SageMaker can store and access training data and model artifacts. We also need some data to inspect and to train upon. ## Select Amazon S3 Bucket We first need to specify the locations where we will store our training data and trained model artifacts.
***This is the only cell of this notebook that you will need to edit.*** In particular, we need the following data: * `bucket` - An S3 bucket accessible by this account. * `prefix` - The location in the bucket where this notebook's input and output data will be stored. (The default value is sufficient.) ``` import boto3 import botocore import sagemaker import sys bucket = '' # <--- specify a bucket you have access to prefix = 'sagemaker/rcf-benchmarks' execution_role = sagemaker.get_execution_role() # check if the bucket exists try: boto3.Session().client('s3').head_bucket(Bucket=bucket) except botocore.exceptions.ParamValidationError as e: print('Hey! You either forgot to specify your S3 bucket' ' or you gave your bucket an invalid name!') except botocore.exceptions.ClientError as e: if e.response['Error']['Code'] == '403': print("Hey! You don't have permission to access the bucket, {}.".format(bucket)) elif e.response['Error']['Code'] == '404': print("Hey! Your bucket, {}, doesn't exist!".format(bucket)) else: raise else: print('Training input/output will be stored in: s3://{}/{}'.format(bucket, prefix)) ``` ## Obtain and Inspect Example Data Our data comes from the Numenta Anomaly Benchmark (NAB) NYC Taxi dataset [[1](https://github.com/numenta/NAB/blob/master/data/realKnownCause/nyc_taxi.csv)]. These data consist of the number of New York City taxi passengers over the course of six months aggregated into 30-minute buckets. We know, a priori, that there are anomalous events occurring during the NYC marathon, Thanksgiving, Christmas, New Year's Day, and on the day of a snow storm.
> [1] https://github.com/numenta/NAB/blob/master/data/realKnownCause/nyc_taxi.csv ``` %%time import pandas as pd import urllib.request data_filename = 'nyc_taxi.csv' data_source = 'https://raw.githubusercontent.com/numenta/NAB/master/data/realKnownCause/nyc_taxi.csv' urllib.request.urlretrieve(data_source, data_filename) taxi_data = pd.read_csv(data_filename, delimiter=',') ``` Before training any models it is important to inspect our data, first. Perhaps there are some underlying patterns or structures that we could provide as "hints" to the model or maybe there is some noise that we could pre-process away. The raw data looks like this: ``` taxi_data.head() ``` Human beings are visual creatures so let's take a look at a plot of the data. ``` %matplotlib inline import matplotlib import matplotlib.pyplot as plt matplotlib.rcParams['figure.dpi'] = 100 taxi_data.plot() ``` Human beings are also extraordinarily good at perceiving patterns. Note, for example, that something uncharacteristic occurs at around datapoint number 6000. Additionally, as we might expect with taxi ridership, the passenger count appears more or less periodic. Let's zoom in to not only examine this anomaly but also to get a better picture of what the "normal" data looks like. ``` taxi_data[5500:6500].plot() ``` Here we see that the number of taxi trips taken is mostly periodic with one mode of length approximately 50 data points. In fact, the mode is length 48 since each datapoint represents a 30-minute bin of ridership count. Therefore, we expect another mode of length $336 = 48 \times 7$, the length of a week. Smaller frequencies over the course of the day occur, as well. For example, here is the data across the day containing the above anomaly: ``` taxi_data[5952:6000] ``` # Training *** Next, we configure a SageMaker training job to train the Random Cut Forest (RCF) algorithm on the taxi cab data. 
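Before configuring training, the mode lengths noted in the data exploration above can be double-checked with quick arithmetic (a sketch; the variable names are illustrative only):

```python
# Each datapoint is a 30-minute bin, so the daily and weekly
# periodicities translate into these mode lengths.
minutes_per_bin = 30
bins_per_day = 24 * 60 // minutes_per_bin
bins_per_week = bins_per_day * 7
print(bins_per_day, bins_per_week)  # 48 336
```

This confirms the daily mode of 48 datapoints and the expected weekly mode of $336 = 48 \times 7$.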
## Hyperparameters Particular to a SageMaker RCF training job are the following hyperparameters: * **`num_samples_per_tree`** - the number of randomly sampled data points sent to each tree. As a general rule, `1/num_samples_per_tree` should approximate the estimated ratio of anomalies to normal points in the dataset. * **`num_trees`** - the number of trees to create in the forest. Each tree learns a separate model from different samples of data. The full forest model uses the mean predicted anomaly score from each constituent tree. * **`feature_dim`** - the dimension of each data point. In addition to these RCF model hyperparameters, we provide additional parameters defining things like the EC2 instance type on which training will run, the S3 bucket containing the data, and the AWS access role. Note that, * Recommended instance type: `ml.m4`, `ml.c4`, or `ml.c5` * Current limitations: * The RCF algorithm does not take advantage of GPU hardware. ``` from sagemaker import RandomCutForest session = sagemaker.Session() # specify general training job information rcf = RandomCutForest(role=execution_role, train_instance_count=1, train_instance_type='ml.m4.xlarge', data_location='s3://{}/{}/'.format(bucket, prefix), output_path='s3://{}/{}/output'.format(bucket, prefix), num_samples_per_tree=512, num_trees=50) # automatically upload the training data to S3 and run the training job rcf.fit(rcf.record_set(taxi_data.value.as_matrix().reshape(-1,1))) ``` If you see the message > `===== Job Complete =====` at the bottom of the output logs then that means training successfully completed and the output RCF model was stored in the specified output path. You can also view information about and the status of a training job using the AWS SageMaker console.
Just click on the "Jobs" tab and select training job matching the training job name, below: ``` print('Training job name: {}'.format(rcf.latest_training_job.job_name)) ``` # Inference *** A trained Random Cut Forest model does nothing on its own. We now want to use the model we computed to perform inference on data. In this case, it means computing anomaly scores from input time series data points. We create an inference endpoint using the SageMaker Python SDK `deploy()` function from the job we defined above. We specify the instance type where inference is computed as well as an initial number of instances to spin up. We recommend using the `ml.c5` instance type as it provides the fastest inference time at the lowest cost. ``` rcf_inference = rcf.deploy( initial_instance_count=1, instance_type='ml.m4.xlarge', ) ``` Congratulations! You now have a functioning SageMaker RCF inference endpoint. You can confirm the endpoint configuration and status by navigating to the "Endpoints" tab in the AWS SageMaker console and selecting the endpoint matching the endpoint name, below: ``` print('Endpoint name: {}'.format(rcf_inference.endpoint)) ``` ## Data Serialization/Deserialization We can pass data in a variety of formats to our inference endpoint. In this example we will demonstrate passing CSV-formatted data. Other available formats are JSON-formatted and RecordIO Protobuf. We make use of the SageMaker Python SDK utilities `csv_serializer` and `json_deserializer` when configuring the inference endpoint. ``` from sagemaker.predictor import csv_serializer, json_deserializer rcf_inference.content_type = 'text/csv' rcf_inference.serializer = csv_serializer rcf_inference.accept = 'application/json' rcf_inference.deserializer = json_deserializer ``` Let's pass the training dataset, in CSV format, to the inference endpoint so we can automatically detect the anomalies we saw with our eyes in the plots, above. 
Note that the serializer and deserializer will automatically take care of the datatype conversion from Numpy NDArrays. For starters, let's only pass in the first six datapoints so we can see what the output looks like. ``` taxi_data_numpy = taxi_data.value.as_matrix().reshape(-1,1) print(taxi_data_numpy[:6]) results = rcf_inference.predict(taxi_data_numpy[:6]) ``` ## Computing Anomaly Scores Now, let's compute and plot the anomaly scores from the entire taxi dataset. ``` results = rcf_inference.predict(taxi_data_numpy) scores = [datum['score'] for datum in results['scores']] # add scores to taxi data frame and print first few values taxi_data['score'] = pd.Series(scores, index=taxi_data.index) taxi_data.head() fig, ax1 = plt.subplots() ax2 = ax1.twinx() # # *Try this out* - change `start` and `end` to zoom in on the # anomaly found earlier in this notebook # start, end = 0, len(taxi_data) #start, end = 5500, 6500 taxi_data_subset = taxi_data[start:end] ax1.plot(taxi_data_subset['value'], color='C0', alpha=0.8) ax2.plot(taxi_data_subset['score'], color='C1') ax1.grid(which='major', axis='both') ax1.set_ylabel('Taxi Ridership', color='C0') ax2.set_ylabel('Anomaly Score', color='C1') ax1.tick_params('y', colors='C0') ax2.tick_params('y', colors='C1') ax1.set_ylim(0, 40000) ax2.set_ylim(min(scores), 1.4*max(scores)) fig.set_figwidth(10) ``` Note that the anomaly score spikes where our eyeball-norm method suggests there is an anomalous data point as well as in some places where our eyeballs are not as accurate. Below we print and plot any data points with scores greater than 3 standard deviations (approx 99.9th percentile) from the mean score. 
``` score_mean = taxi_data['score'].mean() score_std = taxi_data['score'].std() score_cutoff = score_mean + 3*score_std anomalies = taxi_data_subset[taxi_data_subset['score'] > score_cutoff] anomalies ``` The following is a list of known anomalous events which occurred in New York City within this timeframe: * `2014-11-02` - NYC Marathon * `2015-01-01` - New Year's Day * `2015-01-27` - Snowstorm Note that our algorithm managed to capture these events along with quite a few others. Below we add these anomalies to the score plot. ``` ax2.plot(anomalies.index, anomalies.score, 'ko') fig ``` With the current hyperparameter choices we see that the three-standard-deviation threshold, while able to capture the known anomalies as well as the ones apparent in the ridership plot, is rather sensitive to fine-grained perturbations and anomalous behavior. Adding trees to the SageMaker RCF model, or using a larger data set, could smooth out the results. ## Stop and Delete the Endpoint Finally, we should delete the endpoint before we close the notebook. To do so execute the cell below. Alternately, you can navigate to the "Endpoints" tab in the SageMaker console, select the endpoint with the name stored in the variable `endpoint_name`, and select "Delete" from the "Actions" dropdown menu. ``` sagemaker.Session().delete_endpoint(rcf_inference.endpoint) ``` # Epilogue --- We used Amazon SageMaker Random Cut Forest to detect anomalous datapoints in a taxi ridership dataset. In these data the anomalies occurred when ridership was uncharacteristically high or low. However, the RCF algorithm is also capable of detecting when, for example, data breaks periodicity or uncharacteristically changes global behavior. Depending on the kind of data you have there are several ways to improve algorithm performance. One method, for example, is to use an appropriate training set.
If you know that a particular set of data is characteristic of "normal" behavior then training on said set of data will more accurately characterize "abnormal" data. Another improvement is to make use of a windowing technique called "shingling". This is especially useful when working with periodic data with known period, such as the NYC taxi dataset used above. The idea is to treat a period of $P$ datapoints as a single datapoint of feature length $P$ and then run the RCF algorithm on these feature vectors. That is, if our original data consists of points $x_1, x_2, \ldots, x_N \in \mathbb{R}$ then we perform the transformation, ``` data = [[x_1], shingled_data = [[x_1, x_2, ..., x_{P}], [x_2], ---> [x_2, x_3, ..., x_{P+1}], ... ... [x_N]] [x_{N-P}, ..., x_{N}]] ``` ``` import numpy as np def shingle(data, shingle_size): num_data = len(data) shingled_data = np.zeros((num_data-shingle_size, shingle_size)) for n in range(num_data - shingle_size): shingled_data[n] = data[n:(n+shingle_size)] return shingled_data # shingle data with shingle_size=48 (one day) shingle_size = 48 prefix_shingled = 'sagemaker/randomcutforest_shingled' taxi_data_shingled = shingle(taxi_data.values[:,1], shingle_size) print(taxi_data_shingled) ``` We create a new training job and an inference endpoint. (Note that we cannot re-use the endpoint created above because it was trained with one-dimensional data.)
``` session = sagemaker.Session() # specify general training job information rcf = RandomCutForest(role=execution_role, train_instance_count=1, train_instance_type='ml.m4.xlarge', data_location='s3://{}/{}/'.format(bucket, prefix_shingled), output_path='s3://{}/{}/output'.format(bucket, prefix_shingled), num_samples_per_tree=512, num_trees=50) # automatically upload the training data to S3 and run the training job rcf.fit(rcf.record_set(taxi_data_shingled)) from sagemaker.predictor import csv_serializer, json_deserializer rcf_inference = rcf.deploy( initial_instance_count=1, instance_type='ml.m4.xlarge', ) rcf_inference.content_type = 'text/csv' rcf_inference.serializer = csv_serializer rcf_inference.accept = 'application/json' rcf_inference.deserializer = json_deserializer ``` Using the above inference endpoint we compute the anomaly scores associated with the shingled data. ``` # Score the shingled datapoints results = rcf_inference.predict(taxi_data_shingled) scores = np.array([datum['score'] for datum in results['scores']]) # compute the shingled score distribution and cutoff and determine anomalous scores score_mean = scores.mean() score_std = scores.std() score_cutoff = score_mean + 3*score_std anomalies = scores[scores > score_cutoff] anomaly_indices = np.arange(len(scores))[scores > score_cutoff] print(anomalies) ``` Finally, we plot the scores from the shingled data on top of the original dataset and mark the scores lying above the anomaly score threshold.
``` fig, ax1 = plt.subplots() ax2 = ax1.twinx() # # *Try this out* - change `start` and `end` to zoom in on the # anomaly found earlier in this notebook # start, end = 0, len(taxi_data) taxi_data_subset = taxi_data[start:end] ax1.plot(taxi_data['value'], color='C0', alpha=0.8) ax2.plot(scores, color='C1') ax2.scatter(anomaly_indices, anomalies, color='k') ax1.grid(which='major', axis='both') ax1.set_ylabel('Taxi Ridership', color='C0') ax2.set_ylabel('Anomaly Score', color='C1') ax1.tick_params('y', colors='C0') ax2.tick_params('y', colors='C1') ax1.set_ylim(0, 40000) ax2.set_ylim(min(scores), 1.4*max(scores)) fig.set_figwidth(10) ``` We see that with this particular shingle size, hyperparameter selection, and anomaly cutoff threshold, the shingled approach more clearly captures the major anomalous events: the spike at around t=6000 and the dips at around t=9000 and t=10000. In general, the number of trees, sample size, and anomaly score cutoff are all parameters that a data scientist may need to experiment with in order to achieve desired results. The use of a labeled test dataset allows the user to obtain common accuracy metrics for anomaly detection algorithms. For more information about Amazon SageMaker Random Cut Forest see the [AWS Documentation](https://docs.aws.amazon.com/sagemaker/latest/dg/randomcutforest.html). ``` sagemaker.Session().delete_endpoint(rcf_inference.endpoint) ```
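As a final sanity check on the shingling transform used above, the same sliding window can be written in pure Python on a toy series (a sketch mirroring the numpy `shingle` helper; `shingle_list` is a made-up name):

```python
# Pure-Python version of the shingle() helper above, on a toy series.
def shingle_list(data, shingle_size):
    return [data[n:n + shingle_size]
            for n in range(len(data) - shingle_size)]

print(shingle_list([1, 2, 3, 4, 5], 3))  # [[1, 2, 3], [2, 3, 4]]
```

Each output row is a window of `shingle_size` consecutive points, advanced one point at a time, just as in the diagram above.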
# ⚠️ This notebook is the first and NOT our final notebook It shows part of our journey of training a CNN model to classify ASL gestures. <br> ### Used dataset: [ASL MNIST](https://www.kaggle.com/datamunge/sign-language-mnist) First we used the ASL MNIST dataset to train a CNN model and achieved "amazing" results of almost 100% accuracy. <br> Unfortunately, this dataset is highly synthetic and is not close enough to real-world conditions. Therefore, we've made the next attempt. <br> <br><br> This code is not final! Read carefully. It shows the code-refinement progress. ## Imports ``` import os import cv2 as cv import numpy as np import pandas as pd import seaborn as sns import matplotlib.pyplot as plt from keras.preprocessing.image import ImageDataGenerator from keras.callbacks import ReduceLROnPlateau from keras.models import Sequential from keras.layers import Dense, Conv2D , MaxPool2D , Flatten , Dropout , BatchNormalization from keras.models import load_model from keras.callbacks import CSVLogger from sklearn.preprocessing import LabelBinarizer from sklearn.model_selection import train_test_split from sklearn.metrics import classification_report, confusion_matrix ``` ## Constants ``` TRAINING_SET_SRC_PATH = "../resources/sign_mnist_train.csv" TEST_SET_SRC_PATH = "../resources/sign_mnist_test.csv" TRAINED_MODEL_PATH = "../resources/trained_model.h5" TRAINING_LOG_PATH = "../resources/training.log" IMAGE_A_PATH = "../resources/samples/A.jpeg" ``` ## Loading the dataset Our dataset is based on the Kaggle ["Sign-Language-MNIST" dataset](https://www.kaggle.com/datamunge/sign-language-mnist). <br> Training and test sets are stored as CSV files: one column for the label and one column for every pixel in the image. <br> **Row = image.** <br> We'll now load the data and show a taste of it.
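Since each CSV row is one flattened image ("Row = image"), the 28x28 grid can be rebuilt from a flat row like this (a pure-Python sketch; `row_to_image` is a hypothetical helper, not part of this notebook):

```python
# Rebuild a 28x28 grid from a flat row of 784 pixel values.
def row_to_image(row, side=28):
    assert len(row) == side * side
    return [row[r * side:(r + 1) * side] for r in range(side)]

img = row_to_image(list(range(784)))
print(len(img), len(img[0]))  # 28 28
print(img[1][0])              # 28 (first pixel of the second row)
```

This is the same flat-to-grid correspondence that the `reshape(-1,28,28,1)` call performs later on the whole dataset.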
<img src="https://upload.wikimedia.org/wikipedia/commons/thumb/7/7d/American_Sign_Language_ASL.svg/640px-American_Sign_Language_ASL.svg.png?1630275435741" alt="ilustration" width="200"/> <small>Source: https://www.wikiwand.com/en/American_Sign_Language</small> ``` train_df_src = pd.read_csv(TRAINING_SET_SRC_PATH) test_df_src = pd.read_csv(TEST_SET_SRC_PATH) train_df_src.head() ``` ## Visualization of the dataset's variance Here we can see that the dataset's classes are distributed in relatively equal form and we have enough samples of each class. ``` plt.figure(figsize = (5, 5)) sns.set_style("dark") sns.countplot(x=train_df_src['label']) ``` ### Create a clone of the dataset, and separate the labels from the data itself ``` train_df = train_df_src.copy() test_df = test_df_src.copy() train_labels = train_df["label"] test_labels = test_df["label"] train_df.drop("label", axis=1, inplace=True) test_df.drop("label", axis=1, inplace=True) train_values = train_df.values test_values = test_df.values ``` ## Label Binarization Here we binarize lables in a one-vs-all fasion. #### What does it mean? We convert our labels to classes in a binary manner. <br> That means that every label is switched from a single-value-label to a set of binary labels. <br> **For example:** "3" is turned into: (0,0,0,**1**,0,0,0,0,0...) ``` label_binarizer = LabelBinarizer() train_labels = label_binarizer.fit_transform(train_labels) test_labels = label_binarizer.fit_transform(test_labels) ``` ## Reshape The dataset's images are now rows of size 784 columns (pixels). <br> We'll now reshape them to their original size of 28x28 images, by 1 grayscale dimension. ``` train_values = train_values.reshape(-1,28,28,1) test_values = test_values.reshape(-1,28,28,1) ``` ## Data normalization CNN are known to converges faster and better on normalized data <br> at scale of [0..1] (than on [0..255]). 
``` train_values = train_values / 255 test_values = test_values / 255 ``` ## Creating a validation set We split the training set and create a new validation set (30% of the training set size) ``` train_values, validation_values, train_labels, validation_labels = train_test_split(train_values, train_labels, test_size = 0.3, random_state = 101) ``` ## Visualize a taste of the digested dataset ``` f, ax = plt.subplots(2,5) f.set_size_inches(6, 6) k = 0 for i in range(2): for j in range(5): ax[i,j].imshow(train_values[k].reshape(28, 28) , cmap = "gray") k += 1 plt.tight_layout() ``` ## Data augmentation using keras. 1. Accepting a batch of images used for training. 2. Taking this batch and applying a series of random transformations to each image in the batch (including random rotation, resizing, shearing, etc.). 3. **Replacing the original batch** with the new, randomly transformed batch. 4. Training the CNN on this randomly transformed batch (i.e., the original data itself is not used for training). 
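Step 2 above (applying random transformations to each image) can be sketched in plain Python. `hshift` below is a made-up illustration of a single horizontal-shift augmentation, not the Keras implementation:

```python
import random

# Shift each row of a toy "image" horizontally, filling vacated pixels.
def hshift(img, shift, fill=0):
    out = []
    for row in img:
        if shift >= 0:
            out.append([fill] * shift + row[:len(row) - shift])
        else:
            out.append(row[-shift:] + [fill] * (-shift))
    return out

img = [[1, 2, 3, 4], [5, 6, 7, 8]]
print(hshift(img, 1))          # [[0, 1, 2, 3], [0, 5, 6, 7]]
shift = random.randint(-2, 2)  # the augmentation draws the shift at random
augmented = hshift(img, shift)
```

`ImageDataGenerator` does the same kind of thing, drawing a fresh random shift, rotation, and zoom for every image in every batch.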
*data augmentation illustration*<br> <img src="https://www.pyimagesearch.com/wp-content/uploads/2019/07/keras_data_augmentation_header.png" alt="data-augmentation" width="250"/> ``` datagen = ImageDataGenerator( featurewise_center=False, # set input mean to 0 over the dataset samplewise_center=False, # set each sample mean to 0 featurewise_std_normalization=False, # divide inputs by std of the dataset samplewise_std_normalization=False, # divide each input by its std zca_whitening=False, # apply ZCA whitening rotation_range=10, # randomly rotate images in the range (degrees, 0 to 180) zoom_range = 0.1, # Randomly zoom image width_shift_range=0.1, # randomly shift images horizontally (fraction of total width) height_shift_range=0.1, # randomly shift images vertically (fraction of total height) horizontal_flip=False, # randomly flip images vertical_flip=False) # randomly flip images datagen.fit(train_values) ``` ## Creating the CONVOLUTIONAL NEURAL NETWORKS ``` if os.path.exists(TRAINED_MODEL_PATH): print("Found a backup trained model file, will load now...") model = load_model(TRAINED_MODEL_PATH) print("Loaded model file:") else: model = Sequential() model.add(Conv2D(75 , (3,3) , strides = 1 , padding = 'same' , activation = 'relu' , input_shape = (28,28,1))) model.add(BatchNormalization()) model.add(MaxPool2D((2,2) , strides = 2 , padding = 'same')) model.add(Conv2D(50 , (3,3) , strides = 1 , padding = 'same' , activation = 'relu')) model.add(Dropout(0.2)) model.add(BatchNormalization()) model.add(MaxPool2D((2,2) , strides = 2 , padding = 'same')) model.add(Conv2D(25 , (3,3) , strides = 1 , padding = 'same' , activation = 'relu')) model.add(BatchNormalization()) model.add(MaxPool2D((2,2) , strides = 2 , padding = 'same')) model.add(Flatten()) model.add(Dense(units = 512 , activation = 'relu')) model.add(Dropout(0.3)) model.add(Dense(units = 24 , activation = 'softmax')) model.compile(optimizer = 'adam' , loss = 'categorical_crossentropy' , metrics = ['accuracy']) 
print("Couldn't find an existing model file. Created a new one:") model.summary() ``` ## Model Training ``` # this will be passed as a callback to the model.fit action, in order to Reduce learning rate when a metric has stopped improving. learning_rate_reduction = ReduceLROnPlateau(monitor='val_accuracy', patience = 2, verbose=1,factor=0.5, min_lr=0.00001) if not os.path.exists(TRAINING_LOG_PATH): print("Didn't find a training log history, will re-train the model") csv_logger = CSVLogger(TRAINING_LOG_PATH, separator=',', append=False) history = model.fit(datagen.flow(train_values, train_labels, batch_size = 128) ,epochs = 20 , validation_data = (validation_values, validation_labels) , callbacks = [learning_rate_reduction ,csv_logger]) model.save(TRAINED_MODEL_PATH) history = history.history else: print("Found a trained model file, will load the training log history") history = pd.read_csv(TRAINING_LOG_PATH, sep=',', engine='python') ``` ## Evaluation ``` print("Accuracy of the model is - " , model.evaluate(test_values, test_labels)[1]*100 , "%") epochs = [i for i in range(20)] fig , ax = plt.subplots(1,2) train_acc = history['accuracy'] train_loss = history['loss'] val_acc = history['val_accuracy'] val_loss = history['val_loss'] fig.set_size_inches(15,8) ax[0].plot(epochs , train_acc , 'bo-' , label = 'Training Accuracy') ax[0].plot(epochs , val_acc , 'yo-' , label = 'Testing Accuracy') ax[0].set_title('Training & Validation Accuracy') ax[0].legend() ax[0].set_xlabel("Epochs") ax[0].set_ylabel("Accuracy") ax[1].plot(epochs , train_loss , 'b-o' , label = 'Training Loss') ax[1].plot(epochs , val_loss , 'y-o' , label = 'Testing Loss') ax[1].set_title('Testing Accuracy & Loss') ax[1].legend() ax[1].set_xlabel("Epochs") ax[1].set_ylabel("Loss") plt.show() ``` #### Classification report ``` predictions = np.argmax(model.predict(test_values), axis=-1) for i in range(len(predictions)): if(predictions[i] >= 9): predictions[i] += 1 classes = ["Class " + str(i) for i in 
range(25) if i != 9] print(classification_report(test_df_src["label"], predictions, target_names = classes)) cm = confusion_matrix(test_df_src["label"], predictions) cm = pd.DataFrame(cm , index = [i for i in range(25) if i != 9] , columns = [i for i in range(25) if i != 9]) plt.figure(figsize = (10,10)) sns.heatmap(cm,cmap= "Blues", linecolor = 'black' , linewidth = 1 , annot = True, fmt='') ``` ### Test with a clear image from outside the dataset The image is of the letter "A" sign ``` image_a = cv.imread(IMAGE_A_PATH, cv.IMREAD_GRAYSCALE) plt.figure(figsize = (2,2)) plt.imshow(image_a, cmap="gray") ``` The prediction is indeed of the letter A ``` a_reshaped = image_a.reshape(-1,28,28,1) np.argmax(model.predict(a_reshaped), axis=-1) ```
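The `predictions[i] += 1` adjustment in the classification-report cell above maps the model's class indices back to the original letter labels while skipping label 9 (J, which requires motion and is absent from the dataset); a sketch of that mapping (`model_index_to_label` is a made-up name):

```python
# Map a model class index back to the original letter label,
# mirroring the predictions[i] += 1 adjustment above (label 9 = J
# is skipped because that gesture requires motion).
def model_index_to_label(idx):
    return idx + 1 if idx >= 9 else idx

print([model_index_to_label(i) for i in range(11)])
# [0, 1, 2, 3, 4, 5, 6, 7, 8, 10, 11]
```

Indices 0 through 8 pass through unchanged, while index 9 and above are bumped by one so that label 9 never appears.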
```
# project 2
def uses_all():
    letters = ["a", "e", "i", "o", "u"]
    required = list(input("Enter the required letters: "))
    word = input("Enter the word: ")
    missing = []
    for x in required:
        if x not in word:
            missing.append(x)
    if len(missing) > 0:
        print(False)
        print("The required letters", missing, "cannot be found in", word)
    else:
        print(True)
    has_all_vowels = True
    for y in letters:
        if y not in word:
            has_all_vowels = False
    if has_all_vowels:
        print("it contains all letters")

uses_all()

# project 3
a = input("Player 1, pick between ODD and EVEN: ")
b = input("Player 2, pick between ODD and EVEN: ")
c = int(input("Player 1, pick a number: "))
d = int(input("Player 2, pick a number: "))
winner = "EVEN" if (c + d) % 2 == 0 else "ODD"
if a == winner and b == winner:
    print("Both players are correct")
elif a == winner:
    print("Player 1 is correct")
elif b == winner:
    print("Player 2 is correct")
else:
    print("Both players are wrong")

# project 1
def avoids():
    total = 0
    for _ in range(5):
        forbidden = input("Enter a forbidden letter: ")
        word = input("Enter a word: ")
        if forbidden not in word:
            total += 1
            print("There is no forbidden letter in this word")
        else:
            print("There is a forbidden letter in this word")
    print("The total number of words that do not have forbidden letters is:", total)

avoids()

# project 4
d = 50
t = 2
s = d / t
print("The speed of the car is", s)
```
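The membership checks in `uses_all` can also be written with sets. A compact alternative sketch (the argument-taking form and return value here are illustrative, not part of the original exercise):

```python
def uses_all(word, required):
    """Return (ok, missing): ok is True when every required letter occurs in word."""
    missing = sorted(set(required) - set(word))  # required letters absent from the word
    return len(missing) == 0, missing

# example: "education" contains all five vowels, "python" does not
ok, missing = uses_all("education", "aeiou")
```

The set difference replaces the explicit loop, and returning the missing letters lets the caller decide how to report them.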
github_jupyter
# Saildrone and GOES collocation

This is the Saildrone and GOES collocation code, working toward getting `open_mfdataset` to run on OPeNDAP data.

```
import os
import numpy as np
import matplotlib.pyplot as plt
import datetime as dt
import xarray as xr
import requests

def get_sat_filename(date):
    dir_sat = 'https://opendap.jpl.nasa.gov/opendap/OceanTemperature/ghrsst/data/GDS2/L3C/AMERICAS/GOES16/OSISAF/v1/'
    syr, smon, sdym = str(date.dt.year.data), str(date.dt.month.data).zfill(2), str(date.dt.day.data).zfill(2)
    sjdy, shr = str(date.dt.dayofyear.data).zfill(2), str(date.dt.hour.data).zfill(2)
    if date.dt.hour.data == 0:
        # at 00Z the file is filed under the previous day's ordinal date
        datetem = date - np.timedelta64(1,'D')
        sjdy = str(datetem.dt.dayofyear.data).zfill(2)
        # syr, smon, sdym = str(datetem.dt.year.data), str(datetem.dt.month.data).zfill(2), str(datetem.dt.day.data).zfill(2)
    fgoes = '0000-OSISAF-L3C_GHRSST-SSTsubskin-GOES16-ssteqc_goes16_'
    dstr = syr + smon + sdym + shr
    dstr2 = syr + smon + sdym + '_' + shr
    sat_filename = dir_sat + syr + '/' + sjdy + '/' + dstr + fgoes + dstr2 + '0000-v02.0-fv01.0.nc'
    r = requests.get(sat_filename)
    exists = (r.status_code == requests.codes.ok)
    print(exists, sat_filename)
    return sat_filename, exists
```

# Read in USV data

Read in the Saildrone USV file either from a local disc or using OpenDAP.
There are 6 NaN values in the lat/lon data arrays; interpolate across these. We want to collocate with wind vectors for this example, but the wind vectors are only every 10 minutes rather than every minute, so use `.dropna` to remove values from all data arrays at times when wind vectors aren't available.

```
filename_collocation_data = 'F:/data/cruise_data/saildrone/baja-2018/ccmp_collocation_data.nc'
#filename_usv = 'https://podaac-opendap.jpl.nasa.gov/opendap/hyrax/allData/insitu/L2/saildrone/Baja/saildrone-gen_4-baja_2018-sd1002-20180411T180000-20180611T055959-1_minutes-v1.nc'
filename_usv = 'f:/data/cruise_data/saildrone/baja-2018/saildrone-gen_4-baja_2018-sd1002-20180411T180000-20180611T055959-1_minutes-v1.nc'
ds_usv = xr.open_dataset(filename_usv)
ds_usv.close()
ds_usv = ds_usv.isel(trajectory=0).swap_dims({'obs':'time'}).rename({'longitude':'lon','latitude':'lat'})
ds_usv = ds_usv.sel(time=slice('2018-04-11T18:30',ds_usv.time[-1].data))  #first part of data is when USV is being towed, eliminate it
ds_usv['lon'] = ds_usv.lon.interpolate_na(dim='time',method='linear')  #there are 6 nan values
ds_usv['lat'] = ds_usv.lat.interpolate_na(dim='time',method='linear')
ds_usv['wind_speed'] = np.sqrt(ds_usv.UWND_MEAN**2 + ds_usv.VWND_MEAN**2)
ds_usv['wind_dir'] = np.arctan2(ds_usv.VWND_MEAN, ds_usv.UWND_MEAN)*180/np.pi
ds_usv_subset = ds_usv.copy(deep=True)
#ds_usv_subset = ds_usv.dropna(dim='time',subset={'UWND_MEAN'})  #get rid of all the nan
#print(ds_usv_subset.UWND_MEAN[2000:2010].values)
```

In order to use `open_mfdataset` you need to provide either a path or a list of filenames as input. Here we use the USV cruise start and end dates to read in all data for that period.

```
read_date, end_date = ds_usv_subset.time.min(), ds_usv_subset.time.max()
filelist = []
while read_date <= (end_date + np.timedelta64(1,'h')):
#while read_date <= (ds_usv_subset.time.min() + np.timedelta64(10,'h')):
    tem_filename, exists = get_sat_filename(read_date)
    if exists:
        filelist.append(tem_filename)
    read_date = read_date + np.timedelta64(1,'h')
print(filelist[0])
```

# Read in the satellite data

Read in data using `open_mfdataset` with the option `coords='minimal'`. The dataset is printed out, and you can see that rather than a plain xarray data array for each of the data variables, `open_mfdataset` uses dask arrays.

```
ds_sat = xr.open_mfdataset(filelist, coords='minimal')
ds_sat
```

# Xarray interpolation won't run on chunked dimensions

1. First let's subset the data to make it smaller to deal with by using the cruise lat/lons
1. Now load the data into memory (de-Dask-ify it)

```
#Step 1 from above
subset = ds_sat.sel(lon=slice(ds_usv_subset.lon.min().data,ds_usv_subset.lon.max().data),
                    lat=slice(ds_usv_subset.lat.min().data,ds_usv_subset.lat.max().data))
#Step 2 from above
subset.load()

#now collocate with usv lat and lons
ds_collocated = subset.interp(lat=ds_usv_subset.lat,lon=ds_usv_subset.lon,time=ds_usv_subset.time,method='linear')
ds_collocated_nearest = subset.interp(lat=ds_usv_subset.lat,lon=ds_usv_subset.lon,time=ds_usv_subset.time,method='nearest')
```

# A larger STD that isn't reflective of uncertainty in the observation

The collocation above will result in multiple USV data points matched with a single satellite observation. The USV is sampling every 1 min over a scale of approximately a few meters, while the satellite is an average over a footprint that is interpolated onto a daily mean map. While calculating the mean would result in a valid mean, the STD would be higher: it would consist of a component that reflects the uncertainty of the USV and the satellite, plus a component that reflects the natural variability in the region sampled by the USV.

Below we use the 'nearest' collocation results to identify when multiple USV data are collocated to a single satellite observation. This code goes through the data and creates averages of the USV data that match each single collocated satellite value.
```
index = 302
ds_tem = ds_collocated_nearest.copy(deep=True)
ds_tem.dims['time']
ds_tem_subset = ds_tem.analysed_sst[index:index+1000]
cond = (ds_tem_subset == ds_collocated_nearest.analysed_sst[index])
notcond = np.logical_not(cond)
#cond = np.append(np.full(index,True),cond)
#cond = np.append(cond,np.full(ilen-index-1000,True))
#cond.shape
print(cond[0:5].data)
print(ds_tem.analysed_sst[index:index+5].data)
ds_tem.analysed_sst[index:index+1000] = ds_tem.analysed_sst.where(notcond)
print(ds_tem.analysed_sst[index:index+5].data)

print(ds_collocated_nearest.analysed_sst[300:310].data)
print(ds_collocated_nearest.time.dt.day[300:310].data)

index = 302
ilen = ds_tem.dims['time']
#cond = ((ds_tem.analysed_sst[index:index+1000]==ds_collocated_nearest.analysed_sst[index])
#        & (ds_tem.time.dt.day[index:index+1000]==ds_collocated_nearest.time.dt.day[index])
#        & (ds_tem.time.dt.hour[index:index+1000]==ds_collocated_nearest.time.dt.hour[index]))
cond = (ds_tem.analysed_sst[index:index+1000] == ds_collocated_nearest.analysed_sst[index])
#cond = np.append(np.full(index,True),cond)
#cond = np.append(cond,np.full(ilen-index-1000,True))
print(cond[index:index+10].data)
print(np.logical_not(cond[index+10]).data)
masked_usv = ds_usv_subset.where(cond,drop=True)
#ds_collocated_nearest
#print(ds_collocated_nearest.uwnd[244:315].data)
#print(masked_usv.UWND_MEAN[244:315].data)
#print(masked_usv.UWND_MEAN[244:315].mean().data)
#print(masked_usv.time.min().data)
#print(masked_usv.time.max().data)
#print(masked_usv.lon.min().data)
#print(masked_usv.lon.max().data)
#print(masked_usv.time[0].data,masked_usv.time[-1].data)

ilen, index = ds_collocated_nearest.dims['time'], 0
ds_tem = ds_collocated_nearest.copy(deep=True)
duu, duv1, duv2, dlat, dlon, dut = [], [], [], [], [], np.empty((),dtype='datetime64')
while index <= ilen-2:
    index += 1
    if np.isnan(ds_collocated_nearest.analysed_sst[index]):
        continue
    if np.isnan(ds_tem.analysed_sst[index]):
        continue
#    print(index, ilen)
    iend = index + 1000
    if iend > ilen-1:
        iend = ilen-1
    ds_tem_subset = ds_tem.analysed_sst[index:iend]
    ds_usv_subset2sst = ds_usv_subset.TEMP_CTD_MEAN[index:iend]
    ds_usv_subset2uwnd = ds_usv_subset.UWND_MEAN[index:iend]
    ds_usv_subset2vwnd = ds_usv_subset.VWND_MEAN[index:iend]
    ds_usv_subset2lat = ds_usv_subset.lat[index:iend]
    ds_usv_subset2lon = ds_usv_subset.lon[index:iend]
    ds_usv_subset2time = ds_usv_subset.time[index:iend]
    cond = (ds_tem_subset == ds_collocated_nearest.analysed_sst[index])
    notcond = np.logical_not(cond)
    #cond = (ds_tem.analysed_sst==ds_collocated_nearest.analysed_sst[index])
    #notcond = np.logical_not(cond)
    masked = ds_tem_subset.where(cond)
    if masked.sum().data == 0:  #don't do if data not found
        continue
    masked_usvsst = ds_usv_subset2sst.where(cond,drop=True)
    masked_usvuwnd = ds_usv_subset2uwnd.where(cond,drop=True)
    masked_usvvwnd = ds_usv_subset2vwnd.where(cond,drop=True)
    masked_usvlat = ds_usv_subset2lat.where(cond,drop=True)
    masked_usvlon = ds_usv_subset2lon.where(cond,drop=True)
    masked_usvtime = ds_usv_subset2time.where(cond,drop=True)
    duu = np.append(duu,masked_usvsst.mean().data)
    duv1 = np.append(duv1,masked_usvuwnd.mean().data)
    duv2 = np.append(duv2,masked_usvvwnd.mean().data)
    dlat = np.append(dlat,masked_usvlat.mean().data)
    dlon = np.append(dlon,masked_usvlon.mean().data)
    tdif = masked_usvtime[-1].data - masked_usvtime[0].data
    mtime = masked_usvtime[0].data + np.timedelta64(tdif/2,'ns')
    dut = np.append(dut,mtime)
    ds_tem.analysed_sst[index:iend] = ds_tem.analysed_sst.where(notcond)
#    ds_tem = ds_tem.where(notcond,np.nan)  #mask used values by setting to nan

dut2 = dut[1:]  #remove first data point, which is a repeat from how the array was defined
ds_new = xr.Dataset(data_vars={'sst_usv': ('time',duu),'uwnd_usv': ('time',duv1),'vwnd_usv': ('time',duv2),
                               'lon': ('time',dlon), 'lat': ('time',dlat)},
                    coords={'time':dut2})
ds_new.to_netcdf('F:/data/cruise_data/saildrone/baja-2018/goes_downsampled_usv_data2.nc')
```

# Redo the collocation

Now redo the collocation, using 'linear' interpolation on the averaged data. This interpolates the satellite data temporally onto the USV sampling, which has been averaged to the satellite data grid points.

```
ds_collocated_averaged = subset.interp(lat=ds_new.lat,lon=ds_new.lon,time=ds_new.time,method='linear')
ds_collocated_averaged
ds_collocated_averaged.to_netcdf('F:/data/cruise_data/saildrone/baja-2018/mur_downsampled_collocated_usv_data2.nc')

sat_sst = ds_collocated_averaged.analysed_sst[:-19] - 273.15
usv_sst = ds_new.sst_usv[:-19]
ds_new['spd'] = np.sqrt(ds_new.uwnd_usv**2 + ds_new.vwnd_usv**2)
usv_spd = ds_new.spd[:-19]
dif_sst = sat_sst - usv_sst
print('mean,std dif ',[dif_sst.mean().data,dif_sst.std().data,dif_sst.shape[0]])
plt.plot(usv_spd,dif_sst,'.')

sat_sst = ds_collocated_averaged.analysed_sst[:-19] - 273.15
usv_sst = ds_new.sst_usv[:-19]
dif_sst = sat_sst - usv_sst
cond = usv_spd > 2
dif_sst = dif_sst.where(cond)
print('no low wind mean,std dif ',[dif_sst.mean().data,dif_sst.std().data,sum(cond).data])
plt.plot(usv_spd,dif_sst,'.')

fig, ax = plt.subplots(figsize=(5,4))
ax.plot(sat_sst,sat_sst-usv_sst,'.')
ax.set_xlabel('Satellite SST ($^\circ$C)')
ax.set_ylabel('Sat - USV SST ($^\circ$C)')
fig_fname = 'F:/data/cruise_data/saildrone/baja-2018/figs/sat_sst_both_bias.png'
fig.savefig(fig_fname, transparent=False, format='png')

plt.plot(dif_sst[:-19],'.')

#faster, not sure why
ilen, index = ds_collocated_nearest.dims['time'], 0
ds_tem = ds_collocated_nearest.copy(deep=True)
duu, dvu, dlat, dlon, dut = [], [], [], [], np.empty((),dtype='datetime64')
while index <= ilen-2:
    index += 1
    if np.isnan(ds_collocated_nearest.uwnd[index]):
        continue
    test = ds_collocated_nearest.where((ds_tem.uwnd==ds_collocated_nearest.uwnd[index])&(ds_tem.vwnd==ds_collocated_nearest.vwnd[index]))
    test = test/test
    if test.uwnd.sum() > 0:
        duu = np.append(duu,(ds_usv_subset.UWND_MEAN*test.uwnd).mean().data)
        dvu = np.append(dvu,(ds_usv_subset.VWND_MEAN*test.vwnd).mean().data)
        dlat = np.append(dlat,(ds_usv_subset.lat*test.lat).mean().data)
        dlon = np.append(dlon,(ds_usv_subset.lon*test.lon).mean().data)
        tdif = ds_usv_subset.time.where(test.vwnd==1).max().data - ds_usv_subset.time.where(test.vwnd==1).min().data
        mtime = ds_usv_subset.time.where(test.vwnd==1).min().data + np.timedelta64(tdif/2,'ns')
        dut = np.append(dut,mtime)
        ds_tem = ds_tem.where(np.isnan(test),np.nan)  #you have used these values, so set them to nan
dut2 = dut[1:]  #remove first data point, which is a repeat from how the array was defined
ds_new2 = xr.Dataset(data_vars={'u_usv': ('time',duu), 'v_usv': ('time',dvu),
                                'lon': ('time',dlon), 'lat': ('time',dlat)},
                     coords={'time':dut2})

#testing code above
ds_tem = ds_collocated_nearest.copy(deep=True)
print(ds_collocated_nearest.uwnd[1055].data)
print(ds_collocated_nearest.uwnd[1050:1150].data)
test = ds_collocated_nearest.where((ds_collocated_nearest.uwnd==ds_collocated_nearest.uwnd[1055])&(ds_collocated_nearest.vwnd==ds_collocated_nearest.vwnd[1055]))
test = test/test
print(test.uwnd[1050:1150].data)
ds_tem = ds_tem.where(np.isnan(test),np.nan)
print(ds_tem.uwnd[1050:1150].data)
print((ds_usv_subset.UWND_MEAN*test.uwnd).mean())
print((ds_usv_subset.VWND_MEAN*test.vwnd).mean())

from scipy.interpolate import griddata
# interpolate
points = (ds_usv_subset.lon.data, ds_usv_subset.lat.data)
grid_in_lon, grid_in_lat = np.meshgrid(subset.lon.data, subset.lat.data)
grid_in = (grid_in_lon, grid_in_lat)
values = ds_usv_subset.UWND_MEAN.data
#print(points.size)
zi = griddata(points,values,grid_in,method='linear',fill_value=np.nan)
zi2 = griddata(points,values/values,grid_in,method='linear',fill_value=np.nan)
print(np.isfinite(zi).sum())
plt.pcolormesh(subset.lon,subset.lat,zi,vmin=-5,vmax=5)
plt.plot(ds_usv_subset.lon,ds_usv_subset.lat,'.')
#plt.contourf(subset.uwnd[0,:,:])

len(points[0])

from scipy.interpolate.interpnd import _ndim_coords_from_arrays
from scipy.spatial import cKDTree
THRESHOLD = 1
# Construct kd-tree; functionality copied from scipy.interpolate
tree = cKDTree(points)
xi = _ndim_coords_from_arrays(grid_in, ndim=len(points[0]))
dists, indexes = tree.query(xi)
# Copy original result but mask missing values with NaNs
result3 = result2[:]  # result2 comes from an earlier cell not shown here
result3[dists > THRESHOLD] = np.nan
# Show
plt.figimage(result3)
plt.show()

#testing
index = 300
ds_tem = ds_collocated_nearest.copy(deep=True)
cond = ((ds_tem.uwnd==ds_collocated_nearest.uwnd[index]) & (ds_tem.vwnd==ds_collocated_nearest.vwnd[index]))
notcond = ((ds_tem.uwnd!=ds_collocated_nearest.uwnd[index]) & (ds_tem.vwnd!=ds_collocated_nearest.vwnd[index]))
masked = ds_tem.where(cond)
masked_usv = ds_usv_subset.where(cond,drop=True)
print(masked.uwnd.sum().data)
#print(masked.nobs[290:310].data)
print((masked_usv.UWND_MEAN).mean().data)
print(ds_tem.uwnd[243:316])
ds_tem = ds_tem.where(notcond,np.nan)  #you have used these values, so set them to nan
print(ds_tem.uwnd[243:316])

ilen, index = ds_collocated_nearest.dims['time'], 0
ds_tem = ds_collocated_nearest.copy(deep=True)
duu, duv1, duv2, dlat, dlon, dut = [], [], [], [], [], np.empty((),dtype='datetime64')
while index <= ilen-2:
    index += 1
    if np.isnan(ds_collocated_nearest.analysed_sst[index]):
        continue
    if np.isnan(ds_tem.analysed_sst[index]):
        continue
    print(index, ilen)
    cond = ((ds_tem.analysed_sst==ds_collocated_nearest.analysed_sst[index])
            & (ds_tem.time.dt.day==ds_collocated_nearest.time.dt.day[index])
            & (ds_tem.time.dt.hour==ds_collocated_nearest.time.dt.hour[index]))
    notcond = np.logical_not(cond)
    masked = ds_tem.where(cond)
    masked_usv = ds_usv_subset.where(cond,drop=True)
    if masked.analysed_sst.sum().data == 0:  #don't do if data not found
        continue
    duu = np.append(duu,masked_usv.TEMP_CTD_MEAN.mean().data)
    duv1 = np.append(duv1,masked_usv.UWND_MEAN.mean().data)
    duv2 = np.append(duv2,masked_usv.VWND_MEAN.mean().data)
    dlat = np.append(dlat,masked_usv.lat.mean().data)
    dlon = np.append(dlon,masked_usv.lon.mean().data)
    tdif = masked_usv.time[-1].data - masked_usv.time[0].data
    mtime = masked_usv.time[0].data + np.timedelta64(tdif/2,'ns')
    dut = np.append(dut,mtime)
    ds_tem = ds_tem.where(notcond,np.nan)  #mask used values by setting to nan
dut2 = dut[1:]  #remove first data point, which is a repeat from how the array was defined
ds_new = xr.Dataset(data_vars={'sst_usv': ('time',duu),'uwnd_usv': ('time',duv1),'vwnd_usv': ('time',duv2),
                               'lon': ('time',dlon), 'lat': ('time',dlat)},
                    coords={'time':dut2})
ds_new.to_netcdf('F:/data/cruise_data/saildrone/baja-2018/mur_downsampled_usv_data.nc')
```
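The matching-and-averaging loop above can also be expressed as a pandas `groupby` over the nearest-collocated satellite value. A minimal sketch on synthetic data (all names and numbers here are illustrative, not the notebook's variables):

```python
import pandas as pd

# synthetic 1-minute USV samples; 'sat_sst' is the nearest-collocated satellite
# value, repeated wherever consecutive USV samples fall on the same satellite pixel
usv = pd.DataFrame({
    'sat_sst': [290.1, 290.1, 290.1, 291.4, 291.4],
    'usv_sst': [289.9, 290.0, 290.3, 291.2, 291.6],
    'lat':     [30.00, 30.01, 30.02, 30.10, 30.11],
})

# one row per unique satellite observation: mean of all USV samples matched to it
downsampled = usv.groupby('sat_sst', as_index=False).mean()
```

This avoids the explicit masking/while loop, though the loop version retains control over time-window limits (the `index:index+1000` slice) that a plain groupby does not.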
**Course Announcements**

Due Friday (11:59 PM):
- D8
- Q8
- A4
- weekly project survey (*optional*)

# Geospatial Analysis

- Analysis:
    - Exploratory Spatial Data Analysis
    - K-Nearest Neighbors
- Tools:
    - `shapely` - create and manipulate shape objects
    - `geopandas` - shapely + dataframe + visualization

Today's notes are adapted from the [Scipy 2018 Tutorial - Introduction to Geospatial Data Analysis with Python](https://github.com/geopandas/scipy2018-geospatial-data). To get all notes and examples from this workshop, do the following:

```
git clone https://github.com/geopandas/scipy2018-geospatial-data  # get materials
conda env create -f environment.yml  # download packages
python check_environment.py  # check environment
```

Additional resource for mapping data with `geopandas`: http://darribas.org/gds15/content/labs/lab_03.html

```
# uncomment below if not yet installed
# !pip install --user geopandas
# !pip install --user descartes

%matplotlib inline
import pandas as pd
import geopandas as gpd
import numpy as np
import matplotlib.pyplot as plt
plt.rcParams['figure.figsize'] = (17, 5)
plt.rcParams.update({'font.size': 16})
from mpl_toolkits.axes_grid1 import make_axes_locatable
import seaborn as sns
import shapely.geometry as shp
import sklearn.neighbors as skn
import sklearn.metrics as skm

import warnings
warnings.filterwarnings('ignore')

pd.options.display.max_rows = 10

#improve resolution
#comment this line if erroring on your machine/screen
%config InlineBackend.figure_format ='retina'
```

# `geopandas` basics

Examples here are from `geopandas` documentation: http://geopandas.org/mapping.html

## The Data

```
world = gpd.read_file(gpd.datasets.get_path('naturalearth_lowres'))
cities = gpd.read_file(gpd.datasets.get_path('naturalearth_cities'))

world

cities
```

## Population Estimates

```
# Plot population estimates with an accurate legend
fig, ax = plt.subplots(1, 1, figsize=(17, 7))
divider = make_axes_locatable(ax)
world.plot(column='pop_est', ax=ax, legend=True);

# Plot population estimates with a different color scale
fig, ax = plt.subplots(1, 1, figsize=(17, 7))
divider = make_axes_locatable(ax)
world.plot(column='pop_est', ax=ax, cmap='GnBu', legend=True);
```

## GDP per capita

```
# Plot by GDP per capita
# specify data
world = world[(world.pop_est>0) & (world.name!="Antarctica")]
world['gdp_per_cap'] = world.gdp_md_est / world.pop_est

# plot choropleth
fig, ax = plt.subplots(1, 1, figsize=(17, 7))
divider = make_axes_locatable(ax)
world.plot(column='gdp_per_cap', ax=ax, figsize=(17, 6), cmap='GnBu', legend=True);

world[world['gdp_per_cap'] > 0.08]

# combining maps
base = world.plot(column='pop_est', cmap='GnBu')
cities.plot(ax=base, marker='o', color='red', markersize=5);
```

## Geospatial Analysis

- Data
- EDA (Visualization)
- Analysis

### District data: Berlin

```
# berlin districts
df = gpd.read_file('https://raw.githubusercontent.com/geopandas/scipy2018-geospatial-data/master/data/berlin-districts.geojson')

df.shape

df.head()
```

### Exploratory Spatial Data Analysis

```
sns.distplot(df['median_price']);
```

We get an idea of what the median price for listings in this area of Berlin is, but we don't know how this information is spatially related.

```
df.plot(column='median_price', figsize=(18, 12), cmap='GnBu', legend=True);
```

Unless you happen to know something about this area of Germany, interpreting what's going on in this choropleth is likely a little tricky, but we can see there is some variation in median prices across this region.

### Spatial Autocorrelation

Note that if prices were distributed randomly, there would be no clustering of similar values. To visualize the existence of global spatial autocorrelation, let's take it to the extreme. Let's look at the 68 districts with the highest Airbnb prices and those with the lowest prices.
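To make the idea of global spatial autocorrelation concrete, Moran's I can be computed with plain numpy. This is a hedged sketch on a toy chain of neighboring sites, not part of the original workshop materials:

```python
import numpy as np

def morans_i(x, w):
    """Moran's I: (n / sum(w)) * sum_ij w_ij (x_i - xbar)(x_j - xbar) / sum_i (x_i - xbar)^2."""
    x = np.asarray(x, dtype=float)
    z = x - x.mean()                       # deviations from the mean
    num = (w * np.outer(z, z)).sum()       # weighted cross-products of neighbors
    den = (z ** 2).sum()
    return len(x) / w.sum() * num / den

# 6 sites on a line, each site neighboring the adjacent sites
n = 6
w = np.zeros((n, n))
for i in range(n - 1):
    w[i, i + 1] = w[i + 1, i] = 1.0

smooth = [1, 2, 3, 4, 5, 6]        # similar values cluster -> positive I
alternating = [1, 6, 1, 6, 1, 6]   # neighbors dissimilar -> negative I
```

Positive I indicates clustering of similar values (like the high/low price map below), negative I indicates a checkerboard-like pattern, and values near zero indicate spatial randomness.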
```
# get data to dichotomize
y = df['median_price']
yb = y > y.median()
labels = ["0 Low", "1 High"]
yb = [labels[i] for i in 1*yb]
df['yb'] = yb

# take a look
fig = plt.figure(figsize=(12,10))
ax = plt.gca()
df.plot(column='yb', cmap='binary', edgecolor='grey', legend=True, ax=ax);
```

### Airbnb Listings: Berlin

- kernel regressions
- "borrow strength" from nearby observations

A reminder that in geospatial data, there are *two simultaneous senses of what is near*:
- things that are similar in attribute (classical kernel regression)
- things that are similar in spatial position (spatial kernel regression)

### Question

What features would you consider including in a model to predict an Airbnb's nightly price?

First, though, let's try to predict the log of an **Airbnb's nightly price** based on a few factors:
- `accommodates`: the number of people the airbnb can accommodate
- `review_scores_rating`: the aggregate rating of the listing
- `bedrooms`: the number of bedrooms the airbnb has
- `bathrooms`: the number of bathrooms the airbnb has
- `beds`: the number of beds the airbnb offers

### Airbnb Listings: The Data

```
listings = pd.read_csv('https://raw.githubusercontent.com/geopandas/scipy2018-geospatial-data/master/data/berlin-listings.csv.gz')
listings['geometry'] = listings[['longitude', 'latitude']].apply(shp.Point, axis=1)
listings = gpd.GeoDataFrame(listings)
listings.crs = {'init':'epsg:4269'}  # coordinate reference system
listings = listings.to_crs(epsg=3857)

listings.shape

listings.head()
```

### Airbnb Listings: Outcome Variable

```
fig, ax = plt.subplots(1, 1, figsize=(11, 7))
divider = make_axes_locatable(ax)
listings.sort_values('price').plot('price', cmap='plasma', figsize=(10, 18), ax=ax, legend=True);

# distribution of price
sns.distplot(listings['price']);

listings['price_log'] = np.log(listings['price'])

fig, ax = plt.subplots(1, 1, figsize=(11, 7))
divider = make_axes_locatable(ax)
listings.sort_values('price_log').plot('price_log', cmap='plasma', figsize=(10, 18), ax=ax, legend=True);

# distribution of log price
sns.distplot(listings['price_log'], bins=10);
```

### The Models

```
# get data for attributes model
model_data = listings[['accommodates', 'review_scores_rating', 'bedrooms',
                       'bathrooms', 'beds', 'price', 'geometry']].dropna()

# specify predictors (X) and outcome (y)
Xnames = ['accommodates', 'review_scores_rating', 'bedrooms', 'bathrooms', 'beds']
X = model_data[Xnames].values
X = X.astype(float)
y = np.log(model_data[['price']].values)
```

We'll need the spatial coordinates for each listing...

```
# get spatial coordinates
coordinates = np.vstack(model_data.geometry.apply(lambda p: np.hstack(p.xy)).values)
```

`scikit-learn`'s neighbor regressions are contained in the `sklearn.neighbors` module, and there are two main types:

- **KNeighborsRegressor** - uses a k-nearest neighborhood of observations around each focal site
- **RadiusNeighborsRegressor** - considers all observations within a fixed radius around each focal site.

Further, these methods can use inverse distance weighting to rank the relative importance of sites around each focal site; in this way, near things are given more weight than far things, even when there are a lot of near things.

#### Training & Test

```
# specify training and test set
shuffle = np.random.permutation(len(y))
num = int(0.8*len(shuffle))
train, test = shuffle[:num], shuffle[num:]
```

#### Three Models

So, let's fit three models:
- `spatial`: using inverse distance weighting on the nearest 100 neighbors in geographical space
- `attribute`: using inverse distance weighting on the nearest 100 neighbors in attribute space
- `both`: using inverse distance weighting in both geographical and attribute space.
```
# spatial
KNNR = skn.KNeighborsRegressor(weights='distance', n_neighbors=100)
spatial = KNNR.fit(coordinates[train,:], y[train,:])

# attribute
KNNR = skn.KNeighborsRegressor(weights='distance', n_neighbors=100)
attribute = KNNR.fit(X[train,:], y[train,])

# both
KNNR = skn.KNeighborsRegressor(weights='distance', n_neighbors=100)
both = KNNR.fit(np.hstack((coordinates,X))[train,:], y[train,:])
```

### Performance

To score them, I'm going to look at the scatterplot and get their % explained variance:

#### Training Data

```
# generate predictions in the training set
sp_ypred_train = spatial.predict(coordinates[train,:])                  # spatial
att_ypred_train = attribute.predict(X[train,:])                         # attribute
both_ypred_train = both.predict(np.hstack((coordinates,X))[train,:])    # combo (same column order as fit)

# variance explained in training data
(skm.explained_variance_score(y[train,], sp_ypred_train),
 skm.explained_variance_score(y[train,], att_ypred_train),
 skm.explained_variance_score(y[train,], both_ypred_train))

# take a look at predictions
plt.plot(y[train,], sp_ypred_train, '.')
plt.xlabel('reported')
plt.ylabel('predicted');
```

#### Test Data

```
# generate predictions in the test set
sp_ypred = spatial.predict(coordinates[test,:])
att_ypred = attribute.predict(X[test,:])
both_ypred = both.predict(np.hstack((coordinates,X))[test,:])

(skm.explained_variance_score(y[test,], sp_ypred),
 skm.explained_variance_score(y[test,], att_ypred),
 skm.explained_variance_score(y[test,], both_ypred))

# take a look at predictions
plt.plot(y[test,], both_ypred, '.')
plt.xlabel('reported')
plt.ylabel('predicted');
```

### Model Improvement

None of these models is performing particularly well... Considerations for improvement:
- features included in the attribute model
- model tuning (i.e., number of nearest neighbors)
- model selected
- etc...
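Only the k-nearest variant is fit above. The radius-based alternative mentioned earlier has the same fit/predict interface; a minimal sketch on synthetic coordinates (data and numbers are illustrative, not the Berlin listings):

```python
import numpy as np
from sklearn.neighbors import RadiusNeighborsRegressor

# fake site coordinates in a 10x10 box and an outcome that varies smoothly in space
rng = np.random.default_rng(0)
coords = rng.uniform(0, 10, size=(200, 2))
y_syn = coords[:, 0] + 0.1 * rng.normal(size=200)

# every training site within radius 2.0 contributes to each prediction,
# with closer sites weighted more via inverse distance weighting
RNR = RadiusNeighborsRegressor(radius=2.0, weights='distance')
model = RNR.fit(coords[:150], y_syn[:150])
pred = model.predict(coords[150:])
```

Note that with a fixed radius, sparse regions may have few (or no) neighbors inside the ball, so the radius becomes a tuning parameter in the same way `n_neighbors` is above.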
One method that can exploit the fact that local data may be more informative in predicting $y$ at site $i$ than distant data is **Geographically Weighted Regression** (GWR), a type of generalized additive spatial model. Like a kernel regression, GWR conducts a separate regression at each training site, considering only data near that site. This means it works like the kernel regressions above, but uses *both* the coordinates *and* the data in $X$ to predict $y$ at each site. It optimizes its sense of "local" according to an information criterion or fit score. You can find this in the `gwr` package, and significant development is ongoing at https://github.com/pysal/gwr.
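The `gwr` package's API isn't shown in these notes, but the core idea, fitting a separate distance-weighted regression per site using nearby training data, can be sketched with numpy alone. The Gaussian kernel and fixed bandwidth below are simplifying assumptions, not GWR's optimized bandwidth selection:

```python
import numpy as np

def local_regression_predict(coords_train, X_train, y_train, coord_site, x_site, bandwidth=1.0):
    """Predict y at one site from a Gaussian-distance-weighted least-squares fit."""
    d = np.linalg.norm(coords_train - coord_site, axis=1)
    w = np.exp(-(d / bandwidth) ** 2)                     # nearby training sites dominate
    A = np.column_stack([np.ones(len(X_train)), X_train])  # design matrix with intercept
    WA = A * w[:, None]                                    # weighted design matrix
    beta = np.linalg.solve(A.T @ WA, A.T @ (w * y_train))  # local coefficients
    return np.concatenate(([1.0], np.atleast_1d(x_site))) @ beta

# two regions with different local slopes: y = 2x on the left, y = 5x on the right
coords_train = np.array([[0.0], [0.5], [1.0], [9.0], [9.5], [10.0]])
X_feat = np.array([1.0, 2.0, 3.0, 1.0, 2.0, 3.0])
y_obs = np.array([2.0, 4.0, 6.0, 5.0, 10.0, 15.0])

left = local_regression_predict(coords_train, X_feat, y_obs, np.array([0.2]), 4.0)
right = local_regression_predict(coords_train, X_feat, y_obs, np.array([9.5]), 4.0)
```

A global regression would average the two slopes; the local fits recover each region's own relationship, which is exactly the behavior GWR formalizes.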
```
import pandas as pd
from sklearn import preprocessing
import numpy as np
import cv2
import skimage.io
import matplotlib.pyplot as plt
import PIL.Image
import time
import os
from skimage.transform import rescale, resize, downscale_local_mean
from random import uniform

data_car = pd.read_csv("driving_log.csv", na_values=['no info', '.'])
#data_car = pd.read_csv("interpolated.csv", na_values=['no info', '.'])

# Data presentation
data_car.head()

# Check the homogeneity of the output data
angle_dt = data_car['angle']
print(angle_dt.dtypes)
angle_dt.head()

# Normalize the steering-angle values relative to the largest positive value
#Y_train = (angle_dt-angle_dt.min())/(angle_dt.max()-angle_dt.min())
Y_train = angle_dt  #/abs(angle_dt.min())
print(Y_train.max())
print(Y_train.min())
Y_train.head(5)

def adjust_gamma(image, gamma=1.0):
    # build a lookup table mapping the pixel values [0, 255] to
    # their adjusted gamma values
    invGamma = 1.0 / gamma
    table = np.array([((i / 255.0) ** invGamma) * 255
                      for i in np.arange(0, 256)]).astype("uint8")
    # apply gamma correction using the lookup table
    return cv2.LUT(image, table)

def path2array(X_train_path, mini, maxi):
    MAX_ = 200
    MIN_ = 66
    X_train = list()
    #X_0 = skimage.io.imread(X_train_path[0][0])
    #X_1 = skimage.io.imread(X_train_path[0][1])
    #X_train = np.append([X_0],[X_1],axis=0)
    # pass 1: random brightness adjustment
    for img_path in X_train_path[0][mini:maxi]:
        #print(img_path)
        temp = skimage.io.imread(img_path)[60:-25,:,:]
        temp = cv2.resize(temp, (int(MAX_), int(MIN_)))
        temp = adjust_gamma(temp, uniform(1, 1.2))
        temp = adjust_gamma(temp, uniform(0.5, 0.9))
        temp = cv2.cvtColor(temp, cv2.COLOR_RGB2YUV)
        X_train.append(temp)
    # pass 2: darken and flip horizontally
    for img_path in X_train_path[0][mini:maxi]:
        temp = skimage.io.imread(img_path)[60:-25,:,:]
        temp = cv2.resize(temp, (int(MAX_), int(MIN_)))
        temp = adjust_gamma(temp, uniform(0.3, 0.6))
        temp = cv2.flip(temp, 1)
        temp = cv2.cvtColor(temp, cv2.COLOR_RGB2YUV)
        X_train.append(temp)
    # pass 3: gamma adjustment plus Gaussian blur
    for img_path in X_train_path[0][mini:maxi]:
        temp = skimage.io.imread(img_path)[60:-25,:,:]
        temp = cv2.resize(temp, (int(MAX_), int(MIN_)))
        temp = adjust_gamma(temp, uniform(0.5, 1.1))
        temp = cv2.GaussianBlur(temp, (3,3), 0)
        temp = cv2.cvtColor(temp, cv2.COLOR_RGB2YUV)
        X_train.append(temp)
    #X_train = np.append(X_train,[skimage.io.imread(img_path)],axis=0)
    X_train = np.array(X_train)
    return X_train

# Take only the data belonging to the center camera >> images
#bool_id = np.array(data_car['frame_id']=='center_camera')
#img_center_index = list(np.where(bool_id == True)[0])
#X_train_path = data_car['filename'].iloc[img_center_index]
#X_train_path = np.array(X_train_path)
#X_train_path = pd.DataFrame(X_train_path)
#X_train_path[0].head()

X_train_path = data_car['dir_center']
X_train_path = np.array(X_train_path)
X_train_path = pd.DataFrame(X_train_path)
X_train_path[0].head()
```

## X_train

```
# Convert the images to arrays
minimo = 0
maximo = 15000
#X_train = skimage.io.imread(X_train_path[0][0])
X_train = path2array(X_train_path, minimo, maximo)
print(X_train.shape)

#im_test = skimage.io.imread(X_train_path[0][0])
#im_test = cv2.imread(X_train_path[0][500], cv2.COLOR_BGR2RGB)
PIL.Image.fromarray(X_train[1000])

gaussiana = cv2.GaussianBlur(X_train[14000], (3,3), 0)
canny = cv2.Canny(gaussiana, 10, 300)
gamma = adjust_gamma(gaussiana, 0.5)
canny = cv2.Canny(gamma, 90, 150)
PIL.Image.fromarray(gamma)

temp = skimage.io.imread('IMG/center_2018_11_20_17_12_32_854.jpg')[60:-25,:,:]
PIL.Image.fromarray(X_train[36000])

# Save file
filename = os.path.join("data/", "{}.npy".format("X_train2"))
np.save(filename, X_train)
```

## Y_train

```
# Take only the angles belonging to the center camera
#Y_train = Y_train.iloc[img_center_index]
Y_train = angle_dt
Y_train_ = np.array(Y_train)
Y_train = np.append(Y_train_[minimo:maximo], -Y_train_[minimo:maximo], axis=0)
Y_train = np.append(Y_train, Y_train_[minimo:maximo], axis=0)

# Save file
filename = os.path.join("data/", "{}.npy".format("Y_train2"))
np.save(filename, Y_train)

Y_train.shape

!sudo chown -R $USER:$USER IMG
```
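The gamma lookup table built in `adjust_gamma` can be sanity-checked without OpenCV; a small sketch of the same formula in plain numpy (the function name here is illustrative):

```python
import numpy as np

def gamma_table(gamma):
    """Same LUT as adjust_gamma: out = 255 * (in / 255) ** (1 / gamma)."""
    inv_gamma = 1.0 / gamma
    return np.array([((i / 255.0) ** inv_gamma) * 255
                     for i in np.arange(0, 256)]).astype("uint8")

t = gamma_table(2.0)  # gamma > 1 brightens mid-tones while keeping 0 and 255 fixed
```

Because the table is monotonic and fixes the endpoints, chaining two `adjust_gamma` calls (as `path2array` does) only reshapes the tone curve, never pushing pixels out of the [0, 255] range.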
```
'''
First, let's import the necessary libraries. We import re, which stands for
regular expressions, because we want to use it to remove the currency symbol
from the price.
'''
from bs4 import BeautifulSoup as bs4
import requests
import pandas as pd
import re

'''
Next, we initialize the lists of columns we want to fetch.
'''
pages = []
prices = []
stars = []
titles = []
stock_availibility = []
urlss = []

'''
This variable holds the number of pages we want to fetch information from.
It could be made dynamic, e.g. by prompting the user to enter the number of
pages they want.
'''
no_pages = 5

'''
This loop iterates up to the number of pages we want to fetch data from and
appends each URL to the list of pages.

http://books.toscrape.com/catalogue/page-2.html

Looking at the URL above, we can see the figure 2 just before .html, so it is
important to identify the pattern of the URL.
'''
for i in range(1, no_pages + 1):
    url = 'http://books.toscrape.com/catalogue/page-{}.html'.format(i)
    pages.append(url)

'''
Now that we have the pages we want, we iterate through each page and use the
BeautifulSoup library to easily traverse all the HTML tags.
'''
for item in pages:
    page = requests.get(item)
    soup = bs4(page.text, 'html.parser')

    '''
    Each of these loops appends the collected information to the lists we
    declared above. The first two loops are straightforward, so there is not
    much to explain.
    '''
    for i in soup.findAll('h3'):
        ttl = i.getText()
        titles.append(ttl)

    for n in soup.findAll('p', class_='instock availability'):
        stk = n.getText().strip()
        stock_availibility.append(stk)

    '''
    The first loop below has another loop within it. Looking at the inspected
    elements of the website, the rating is embedded in the class attribute,
    e.g. <p class="star-rating One">, so we need a small trick to extract only
    the value we want:

    class ['star-rating', 'Three']
    class ['star-rating', 'One']
    class ['star-rating', 'One']
    '''
    for s in soup.findAll('p', class_='star-rating'):
        for k, v in s.attrs.items():
            star = v[1]
            stars.append(star)

    '''
    This loop shows why we imported the regular-expression library: it is used
    to strip the currency symbol from the price tag.
    '''
    for j in soup.findAll('p', class_='price_color'):
        price = j.getText()
        trim = re.compile(r'[^\d.,]+')
        price = trim.sub('', price)
        prices.append(price)

    '''
    Similarly, a bit of Pythonic string handling is needed to build the image
    thumbnail URL.
    '''
    divs = soup.findAll('div', class_='image_container')
    for thumbs in divs:
        tags = thumbs.find('img', class_='thumbnail')
        urls = 'http://books.toscrape.com/' + str(tags['src'])
        newurls = urls.replace("../", "")
        urlss.append(newurls)

'''
Finally, we store all the lists in a dictionary, because it is easier to
convert to a DataFrame when the data is in dictionary format. From there the
data can be saved as CSV or JSON, or inserted into a database.
'''
dic = {'TITLE': titles,
       'PRICE': prices,
       'RATING': stars,
       'STOCK': stock_availibility,
       'URLs': urlss}

df = pd.DataFrame(data = dic)
df.head()
```
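As the final comment notes, the scraped DataFrame can be persisted in several formats. A minimal sketch of the CSV/JSON options (the sample row and the filenames `books.csv`/`books.json` are illustrative assumptions, not from the notebook):

```python
import pandas as pd

# A tiny stand-in for the scraped data, shaped like the dictionary above.
dic = {'TITLE': ['A Light in the Attic'],
       'PRICE': ['51.77'],
       'RATING': ['Three'],
       'STOCK': ['In stock'],
       'URLs': ['http://books.toscrape.com/media/cache/example.jpg']}
df = pd.DataFrame(data=dic)

# Persist in two common formats (filenames are illustrative).
df.to_csv('books.csv', index=False)
df.to_json('books.json', orient='records')

# Round-trip check: reading the CSV back yields the same titles.
df_back = pd.read_csv('books.csv')
print(df_back['TITLE'].tolist())
```

Inserting into a database would follow the same shape, e.g. via `df.to_sql` with an SQLAlchemy connection.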
github_jupyter
<a href="https://colab.research.google.com/github/Kawarjeet/MOT-Capstone-Course-Assistant/blob/main/Capstone_CA_Journal.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>

# MOT Capstone Project Journal

A total of 25 students under me.

### **01/26/2022**

#### Microsoft Teams meeting with Professor Policastro and Lakshmi Purrushottanam from Freddie Mac

#### Key takeaway points:

* Riipen (a learning management site) platform to be used to handle logistics for the project. The platform also allows industry professionals to track the students' project progress.
* Have to reach out to different NYU departments to arrange logistics and resources for the project. This includes datasets, software, Google Cloud access, etc., as per the timely needs.
* The main project will start from week 3.
* The existing ACE (Automated Collateral Evaluation) model at Freddie Mac predicts property valuations based on a text dataset. This helps customers get property appraisals faster, and the company also gets to close loan deals faster. The company is now looking into the possibility of leveraging images and videos of properties and their surroundings for valuation. One of the reasons for this approach is to diminish the racial and ethnic valuation gaps in home purchase appraisals, as published in their article: http://www.freddiemac.com/research/insight/20210920_home_appraisals.page

#### Follow-up tasks:

* Accumulating datasets for the model from across NYU resources, including leveraging work from other semesters, such as: https://public.tableau.com/app/profile/camila.saavedra3201/viz/LAMP_Prioritize_V1/LAMP_Prioritize?publish=yes
* Reach out to other departments working on housing and find out which groups are relevant to the capstone. Establish a point of contact at these departments to help us acquire the data.
* Find out what has already been done in the NYU space related to the capstone.
* Throughout the semester, we will work with 3 companies: Freddie Mac, NBCU and Citizen Watch
* https://docs.google.com/document/d/1pkFY1q7C4cA6gduGw3P3ivr4DfnxXkbq/edit

### 01/27/2022

* The dataset being provided by Freddie Mac is imbalanced and does not have enough images of properties that can be classified as bad. The latest developments on possibly getting the bad images are:
>>>> Sourcing the data from across the NYU space, like Wagner and the Furman Center.
>>>> Applying image augmentation to existing images. Will try ImageDataGenerator and GAN from Keras on an existing cats-and-dogs classification project to understand the working implementation of these two techniques for generating novel images from an original image.

### **01/28/2022**

Class on Friday, January 28, 6-7PM in Rogers Hall, Rm 705

Key points delivered by the professor:

### **02/03/2022**

Meeting with the Citizen Watch group with Thomas, Ranajay, Christopher:
- We have a template for the statement of work; will be giving them out this week. Go through them and have them updated accordingly before we meet the Citizen group next time.
- Meeting students on Friday.

Ranajay Nandy:
- Wants the product lifecycle at the product level; does not want the product lifecycle at the retail level.
- Wants product insights to be from the retailer's perspective.
- Will let us know on the data.
- Will be available once a week to help out.
- Bringing the data into Snowflake (this is the data source).
- They use Power BI (a business intelligence tool).
- Okay with Zoom meetings but prefers Microsoft Teams.
- The timeline is from February to May, but there is a global sales meeting in May and he wants to present there, so the deadline is April, with the last month spent on the project presentation.

### **02/04/2022**

### **02/09/2022**

Meeting with Maxwell Austensen from the Furman Center

Key points from the meeting:
- He asked if there are particular datasets that might fit our needs.
- On the data side there are a lot of connections between faculty at Wagner and CUSP, but there is no formalized system for sharing data sources as such. Most of the collaboration happens through faculty interactions.
- Furman has worked with Stern too, on a project based on market research about demographics.
- Most of the datasets were procured historically from agencies like city planning, zoning, and housing.
- Property sales data comes from financing departments.
- TRAP is the latest dataset on proprietary mortgages.

Most of the meeting revolved around finding out which datasets NYU has that might be of interest for the capstone project.

Dataset from Stern?

ACRIS: can help find information like when a property changed hands

TREPP website

Infutor Data Solutions

https://data.cityofnewyork.us/browse?q=acris

https://www1.nyc.gov/site/finance/taxes/property-rolling-sales-data.page

Thank you all for taking time out of your busy schedules to be here today. If we have any questions about anything we discussed today, I will reach out to you by email.
github_jupyter
# Object Detection Demo Welcome to the object detection inference walkthrough! This notebook will walk you step by step through the process of using a pre-trained model to detect objects in an image. Make sure to follow the [installation instructions](https://github.com/tensorflow/models/blob/master/object_detection/g3doc/installation.md) before you start. # Imports ``` import numpy as np import os import six.moves.urllib as urllib import sys import tarfile import tensorflow as tf import zipfile from collections import defaultdict from io import StringIO from matplotlib import pyplot as plt from PIL import Image from utils import label_map_util from utils import visualization_utils as vis_util ``` ## Env setup ``` # This is needed to display the images. %matplotlib inline # This is needed since the notebook is stored in the object_detection folder. sys.path.append("..") ``` ## Object detection imports Here are the imports from the object detection module. ``` from utils import label_map_util from utils import visualization_utils as vis_util ``` # Model preparation ## Variables Any model exported using the `export_inference_graph.py` tool can be loaded here simply by changing `PATH_TO_CKPT` to point to a new .pb file. By default we use an "SSD with Mobilenet" model here. See the [detection model zoo](https://github.com/tensorflow/models/blob/master/object_detection/g3doc/detection_model_zoo.md) for a list of other models that can be run out-of-the-box with varying speeds and accuracies. ``` # What model to download. MODEL_NAME = 'ssd_mobilenet_v1_coco_11_06_2017' MODEL_FILE = MODEL_NAME + '.tar.gz' DOWNLOAD_BASE = 'http://download.tensorflow.org/models/object_detection/' # Path to frozen detection graph. This is the actual model that is used for the object detection. PATH_TO_CKPT = MODEL_NAME + '/frozen_inference_graph.pb' # List of the strings that is used to add correct label for each box. 
PATH_TO_LABELS = os.path.join('data', 'mscoco_label_map.pbtxt') NUM_CLASSES = 90 ``` ## Download Model ``` if not os.path.exists(MODEL_NAME + '/frozen_inference_graph.pb'): print ('Downloading the model') opener = urllib.request.URLopener() opener.retrieve(DOWNLOAD_BASE + MODEL_FILE, MODEL_FILE) tar_file = tarfile.open(MODEL_FILE) for file in tar_file.getmembers(): file_name = os.path.basename(file.name) if 'frozen_inference_graph.pb' in file_name: tar_file.extract(file, os.getcwd()) print ('Download complete') else: print ('Model already exists') ``` ## Load a (frozen) Tensorflow model into memory. ``` detection_graph = tf.Graph() with detection_graph.as_default(): od_graph_def = tf.GraphDef() with tf.gfile.GFile(PATH_TO_CKPT, 'rb') as fid: serialized_graph = fid.read() od_graph_def.ParseFromString(serialized_graph) tf.import_graph_def(od_graph_def, name='') ``` ## Loading label map Label maps map indices to category names, so that when our convolution network predicts `5`, we know that this corresponds to `airplane`. Here we use internal utility functions, but anything that returns a dictionary mapping integers to appropriate string labels would be fine ``` label_map = label_map_util.load_labelmap(PATH_TO_LABELS) categories = label_map_util.convert_label_map_to_categories(label_map, max_num_classes=NUM_CLASSES, use_display_name=True) category_index = label_map_util.create_category_index(categories) ``` ## Helper code ``` def load_image_into_numpy_array(image): (im_width, im_height) = image.size return np.array(image.getdata()).reshape( (im_height, im_width, 3)).astype(np.uint8) ``` # Detection ``` # For the sake of simplicity we will use only 2 images: # image1.jpg # image2.jpg # If you want to test the code with your images, just add path to the images to the TEST_IMAGE_PATHS. PATH_TO_TEST_IMAGES_DIR = 'test_images' TEST_IMAGE_PATHS = [ os.path.join(PATH_TO_TEST_IMAGES_DIR, 'image{}.jpg'.format(i)) for i in range(1, 3) ] # Size, in inches, of the output images. 
IMAGE_SIZE = (12, 8)

# Initializing the web camera device
import cv2
cap = cv2.VideoCapture(0)

# Running the tensorflow session
with detection_graph.as_default():
    with tf.Session(graph=detection_graph) as sess:
        ret = True
        while (ret):
            ret, image_np = cap.read()
            # Expand dimensions since the model expects images to have shape: [1, None, None, 3]
            image_np_expanded = np.expand_dims(image_np, axis=0)
            image_tensor = detection_graph.get_tensor_by_name('image_tensor:0')
            # Each box represents a part of the image where a particular object was detected.
            boxes = detection_graph.get_tensor_by_name('detection_boxes:0')
            # Each score represents the level of confidence for each of the objects.
            # The score is shown on the result image, together with the class label.
            scores = detection_graph.get_tensor_by_name('detection_scores:0')
            classes = detection_graph.get_tensor_by_name('detection_classes:0')
            num_detections = detection_graph.get_tensor_by_name('num_detections:0')
            # Actual detection.
            (boxes, scores, classes, num_detections) = sess.run(
                [boxes, scores, classes, num_detections],
                feed_dict={image_tensor: image_np_expanded})
            # Visualization of the results of a detection.
            vis_util.visualize_boxes_and_labels_on_image_array(
                image_np,
                np.squeeze(boxes),
                np.squeeze(classes).astype(np.int32),
                np.squeeze(scores),
                category_index,
                use_normalized_coordinates=True,
                line_thickness=8)
            # plt.figure(figsize=IMAGE_SIZE)
            # plt.imshow(image_np)
            cv2.imshow('image', cv2.resize(image_np, (1280, 960)))
            if cv2.waitKey(25) & 0xFF == ord('q'):
                cv2.destroyAllWindows()
                cap.release()
                break
```
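After `sess.run` returns, the boxes/scores/classes arrays are usually filtered by a confidence threshold before further use. A framework-free sketch of that filtering step with made-up arrays (the 0.5 threshold and the sample values are assumptions, not part of the notebook):

```python
import numpy as np

# Made-up outputs shaped like the session results: one image, three candidate boxes.
boxes = np.array([[[0.1, 0.1, 0.4, 0.4],
                   [0.2, 0.5, 0.6, 0.9],
                   [0.0, 0.0, 1.0, 1.0]]])   # normalized [ymin, xmin, ymax, xmax]
scores = np.array([[0.92, 0.35, 0.80]])
classes = np.array([[1.0, 18.0, 3.0]])

# Keep only detections above an (assumed) confidence threshold.
threshold = 0.5
keep = np.squeeze(scores) > threshold
kept_boxes = np.squeeze(boxes)[keep]
kept_classes = np.squeeze(classes).astype(np.int32)[keep]

print(kept_classes.tolist())  # → [1, 3]
```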
github_jupyter
```
# Data Exploration and Cleaning

# First import dependencies
import pandas as pd
import matplotlib.pyplot as plt
import scipy.stats as sts
import numpy as np

# Read tables off the numbeo.com site
# The original pages contain two tables; position 1 holds the cost-of-living
# data table, position 0 does not.
df_2019 = pd.read_html("https://www.numbeo.com/cost-of-living/rankings_by_country.jsp?title=2019-mid")[1]
df_2021 = pd.read_html("https://www.numbeo.com/cost-of-living/rankings_by_country.jsp?title=2021-mid")[1]

# Add a 'Year' column to each frame
df_2021['Year'] = '2021'
df_2021
df_2019['Year'] = '2019'
df_2019

# Drop the 'Rank' column
df_2021 = df_2021.drop('Rank', axis=1)
df_2019 = df_2019.drop('Rank', axis=1)

# Pop the 'Year' column and insert it at position 1; .insert(position, name, column)
# Pandas pop documentation
# LINK: https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.pop.html?highlight=pop
first_column = df_2021.pop('Year')
df_2021.insert(1, 'Year', first_column)
df_2021.head(10)

first_column = df_2019.pop('Year')
df_2019.insert(1, 'Year', first_column)
df_2019.head(10)

df_2021.Country.value_counts()
df_2019.Country.value_counts()

# Scatterplots Mid-2021
plot1 = df_2021.plot.scatter(x="Cost of Living Index", y="Rent Index", s=None, c=None)
plot11 = df_2021.plot.scatter(x="Rent Index", y="Cost of Living Plus Rent Index", s=None, c=None)

# Scatterplots Mid-2019
plot2 = df_2019.plot.scatter(x="Cost of Living Index", y="Rent Index", s=None, c=None)
plot12 = df_2019.plot.scatter(x="Rent Index", y="Cost of Living Plus Rent Index", s=None, c=None)

# Scatterplot Mid-2021
plot3 = df_2021.plot.scatter(x="Cost of Living Index", y="Cost of Living Plus Rent Index", s=None, c=None)

# Scatterplot Mid-2019
plot4 = df_2019.plot.scatter(x="Cost of Living Index", y="Cost of Living Plus Rent Index", s=None, c=None)

# Scatterplot Mid-2021: as the cost of living index goes up, the groceries index goes up
x = df_2021["Cost of Living Index"]
y = df_2021["Groceries Index"]
plt.scatter(x, y)
z = np.polyfit(x, y, 1)
p = np.poly1d(z)
plt.plot(x, p(x), "r--")
plt.grid()
plt.show()

df_2019["Cost of Living Index"]
df_2021["Cost of Living Index"]

# Scatterplot Mid-2019
x = df_2019["Cost of Living Index"]
y = df_2019["Groceries Index"]
plt.scatter(x, y)
z = np.polyfit(x, y, 1)
p = np.poly1d(z)
plt.plot(x, p(x), "r--")
plt.grid()
plt.show()

# Scatterplot Mid-2021
x = df_2021["Cost of Living Index"]
y = df_2021["Groceries Index"]
plt.scatter(x, y)
z = np.polyfit(x, y, 1)
p = np.poly1d(z)
plt.plot(x, p(x), "r--")
plt.grid()
plt.show()

# Pie chart of the Mid-2021 Cost of Living Index
# LINK: https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.plot.pie.html
df_2021.plot.pie(y="Cost of Living Index", figsize=(5, 5))

# Scatterplot Mid-2021
plot7 = df_2021.plot.scatter(x="Cost of Living Index", y="Restaurant Price Index", s=None, c=None)

# Scatterplot Mid-2019
plot8 = df_2019.plot.scatter(x="Cost of Living Index", y="Restaurant Price Index", s=None, c=None)

# Scatterplot Mid-2021
plot9 = df_2021.plot.scatter(x="Cost of Living Index", y="Local Purchasing Power Index", s=None, c=None)

# Scatterplot Mid-2019
plot10 = df_2019.plot.scatter(x="Cost of Living Index", y="Local Purchasing Power Index", s=None, c=None)

# Save data as csv files (forward slashes avoid backslash-escape issues in paths)
df_2021.to_csv('Data/datafile2021.csv')
df_2019.to_csv('Data/datafile2019.csv')
```
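The scatter-plus-trendline pattern above repeats several times; it can be factored into a small helper. A sketch (`fit_line` is a name introduced here for illustration, not from the notebook):

```python
import numpy as np

def fit_line(x, y):
    """Least-squares line fit; returns (slope, intercept), mirroring the
    repeated np.polyfit(x, y, 1) blocks above."""
    slope, intercept = np.polyfit(x, y, 1)
    return slope, intercept

# Example on synthetic data where the true line is y = 2x + 1.
x = np.arange(10, dtype=float)
y = 2.0 * x + 1.0
slope, intercept = fit_line(x, y)
print(round(slope, 3), round(intercept, 3))  # → 2.0 1.0
```

In the notebook, the red dashed trendline can then be drawn with `plt.plot(x, np.poly1d((slope, intercept))(x), "r--")`.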
github_jupyter
# Convolutional Neural Networks: Application

Welcome to Course 4's second assignment! In this notebook, you will:

- Implement helper functions that you will use when implementing a TensorFlow model
- Implement a fully functioning ConvNet using TensorFlow

**After this assignment you will be able to:**

- Build and train a ConvNet in TensorFlow for a classification problem

We assume here that you are already familiar with TensorFlow. If you are not, please refer to the *TensorFlow Tutorial* of the third week of Course 2 ("*Improving deep neural networks*").

### <font color='darkblue'> Updates to Assignment </font>

#### If you were working on a previous version

* The current notebook filename is version "1a".
* You can find your work in the file directory as version "1".
* To view the file directory, go to the menu "File->Open", and this will open a new tab that shows the file directory.

#### List of Updates

* `initialize_parameters`: added details about tf.get_variable, `eval`. Clarified test case.
* Added explanations for the kernel (filter) stride values, max pooling, and flatten functions.
* Added details about softmax cross entropy with logits.
* Added instructions for creating the Adam Optimizer.
* Added explanation of how to evaluate tensors (optimizer and cost).
* `forward_propagation`: clarified instructions, use "F" to store "flatten" layer.
* Updated print statements and 'expected output' for easier visual comparisons.
* Many thanks to Kevin P. Brown (mentor for the deep learning specialization) for his suggestions on the assignments in this course!

## 1.0 - TensorFlow model

In the previous assignment, you built helper functions using numpy to understand the mechanics behind convolutional neural networks. Most practical applications of deep learning today are built using programming frameworks, which have many built-in functions you can simply call.

As usual, we will start by loading in the packages.
``` import math import numpy as np import h5py import matplotlib.pyplot as plt import scipy from PIL import Image from scipy import ndimage import tensorflow as tf from tensorflow.python.framework import ops from cnn_utils import * %matplotlib inline np.random.seed(1) ``` Run the next cell to load the "SIGNS" dataset you are going to use. ``` # Loading the data (signs) X_train_orig, Y_train_orig, X_test_orig, Y_test_orig, classes = load_dataset() ``` As a reminder, the SIGNS dataset is a collection of 6 signs representing numbers from 0 to 5. <img src="images/SIGNS.png" style="width:800px;height:300px;"> The next cell will show you an example of a labelled image in the dataset. Feel free to change the value of `index` below and re-run to see different examples. ``` # Example of a picture index = 6 plt.imshow(X_train_orig[index]) print ("y = " + str(np.squeeze(Y_train_orig[:, index]))) ``` In Course 2, you had built a fully-connected network for this dataset. But since this is an image dataset, it is more natural to apply a ConvNet to it. To get started, let's examine the shapes of your data. ``` X_train = X_train_orig/255. X_test = X_test_orig/255. Y_train = convert_to_one_hot(Y_train_orig, 6).T Y_test = convert_to_one_hot(Y_test_orig, 6).T print ("number of training examples = " + str(X_train.shape[0])) print ("number of test examples = " + str(X_test.shape[0])) print ("X_train shape: " + str(X_train.shape)) print ("Y_train shape: " + str(Y_train.shape)) print ("X_test shape: " + str(X_test.shape)) print ("Y_test shape: " + str(Y_test.shape)) conv_layers = {} ``` ### 1.1 - Create placeholders TensorFlow requires that you create placeholders for the input data that will be fed into the model when running the session. **Exercise**: Implement the function below to create placeholders for the input image X and the output Y. You should not define the number of training examples for the moment. 
To do so, you could use "None" as the batch size; it will give you the flexibility to choose it later. Hence X should be of dimension **[None, n_H0, n_W0, n_C0]** and Y should be of dimension **[None, n_y]**. [Hint: search for the tf.placeholder documentation](https://www.tensorflow.org/api_docs/python/tf/placeholder).

```
# GRADED FUNCTION: create_placeholders

def create_placeholders(n_H0, n_W0, n_C0, n_y):
    """
    Creates the placeholders for the tensorflow session.

    Arguments:
    n_H0 -- scalar, height of an input image
    n_W0 -- scalar, width of an input image
    n_C0 -- scalar, number of channels of the input
    n_y -- scalar, number of classes

    Returns:
    X -- placeholder for the data input, of shape [None, n_H0, n_W0, n_C0] and dtype "float"
    Y -- placeholder for the input labels, of shape [None, n_y] and dtype "float"
    """

    ### START CODE HERE ### (≈2 lines)
    X = tf.placeholder(tf.float32, [None, n_H0, n_W0, n_C0])
    Y = tf.placeholder(tf.float32, [None, n_y])
    ### END CODE HERE ###

    return X, Y

X, Y = create_placeholders(64, 64, 3, 6)
print ("X = " + str(X))
print ("Y = " + str(Y))
```

**Expected Output**

<table> 
<tr>
<td>
    X = Tensor("Placeholder:0", shape=(?, 64, 64, 3), dtype=float32)
</td>
</tr>
<tr>
<td>
    Y = Tensor("Placeholder_1:0", shape=(?, 6), dtype=float32)
</td>
</tr>
</table>

### 1.2 - Initialize parameters

You will initialize weights/filters $W1$ and $W2$ using `tf.contrib.layers.xavier_initializer(seed = 0)`. You don't need to worry about bias variables, as you will soon see that TensorFlow functions take care of the bias. Note also that you will only initialize the weights/filters for the conv2d functions. TensorFlow initializes the layers for the fully connected part automatically. We will talk more about that later in this assignment.

**Exercise:** Implement initialize_parameters(). The dimensions for each group of filters are provided below.
Reminder - to initialize a parameter $W$ of shape [1,2,3,4] in Tensorflow, use: ```python W = tf.get_variable("W", [1,2,3,4], initializer = ...) ``` #### tf.get_variable() [Search for the tf.get_variable documentation](https://www.tensorflow.org/api_docs/python/tf/get_variable). Notice that the documentation says: ``` Gets an existing variable with these parameters or create a new one. ``` So we can use this function to create a tensorflow variable with the specified name, but if the variables already exist, it will get the existing variable with that same name. ``` # GRADED FUNCTION: initialize_parameters def initialize_parameters(): """ Initializes weight parameters to build a neural network with tensorflow. The shapes are: W1 : [4, 4, 3, 8] W2 : [2, 2, 8, 16] Note that we will hard code the shape values in the function to make the grading simpler. Normally, functions should take values as inputs rather than hard coding. Returns: parameters -- a dictionary of tensors containing W1, W2 """ tf.set_random_seed(1) # so that your "random" numbers match ours ### START CODE HERE ### (approx. 
2 lines of code) W1 = tf.get_variable("W1", [4, 4, 3, 8], initializer = tf.contrib.layers.xavier_initializer(seed = 0)) W2 = tf.get_variable("W2", [2, 2, 8, 16], initializer = tf.contrib.layers.xavier_initializer(seed = 0)) ### END CODE HERE ### parameters = {"W1": W1, "W2": W2} return parameters tf.reset_default_graph() with tf.Session() as sess_test: parameters = initialize_parameters() init = tf.global_variables_initializer() sess_test.run(init) print("W1[1,1,1] = \n" + str(parameters["W1"].eval()[1,1,1])) print("W1.shape: " + str(parameters["W1"].shape)) print("\n") print("W2[1,1,1] = \n" + str(parameters["W2"].eval()[1,1,1])) print("W2.shape: " + str(parameters["W2"].shape)) ``` ** Expected Output:** ``` W1[1,1,1] = [ 0.00131723 0.14176141 -0.04434952 0.09197326 0.14984085 -0.03514394 -0.06847463 0.05245192] W1.shape: (4, 4, 3, 8) W2[1,1,1] = [-0.08566415 0.17750949 0.11974221 0.16773748 -0.0830943 -0.08058 -0.00577033 -0.14643836 0.24162132 -0.05857408 -0.19055021 0.1345228 -0.22779644 -0.1601823 -0.16117483 -0.10286498] W2.shape: (2, 2, 8, 16) ``` ### 1.3 - Forward propagation In TensorFlow, there are built-in functions that implement the convolution steps for you. - **tf.nn.conv2d(X,W, strides = [1,s,s,1], padding = 'SAME'):** given an input $X$ and a group of filters $W$, this function convolves $W$'s filters on X. The third parameter ([1,s,s,1]) represents the strides for each dimension of the input (m, n_H_prev, n_W_prev, n_C_prev). Normally, you'll choose a stride of 1 for the number of examples (the first value) and for the channels (the fourth value), which is why we wrote the value as `[1,s,s,1]`. You can read the full documentation on [conv2d](https://www.tensorflow.org/api_docs/python/tf/nn/conv2d). - **tf.nn.max_pool(A, ksize = [1,f,f,1], strides = [1,s,s,1], padding = 'SAME'):** given an input A, this function uses a window of size (f, f) and strides of size (s, s) to carry out max pooling over each window. 
For max pooling, we usually operate on a single example at a time and a single channel at a time. So the first and fourth value in `[1,f,f,1]` are both 1. You can read the full documentation on [max_pool](https://www.tensorflow.org/api_docs/python/tf/nn/max_pool). - **tf.nn.relu(Z):** computes the elementwise ReLU of Z (which can be any shape). You can read the full documentation on [relu](https://www.tensorflow.org/api_docs/python/tf/nn/relu). - **tf.contrib.layers.flatten(P)**: given a tensor "P", this function takes each training (or test) example in the batch and flattens it into a 1D vector. * If a tensor P has the shape (m,h,w,c), where m is the number of examples (the batch size), it returns a flattened tensor with shape (batch_size, k), where $k=h \times w \times c$. "k" equals the product of all the dimension sizes other than the first dimension. * For example, given a tensor with dimensions [100,2,3,4], it flattens the tensor to be of shape [100, 24], where 24 = 2 * 3 * 4. You can read the full documentation on [flatten](https://www.tensorflow.org/api_docs/python/tf/contrib/layers/flatten). - **tf.contrib.layers.fully_connected(F, num_outputs):** given the flattened input F, it returns the output computed using a fully connected layer. You can read the full documentation on [full_connected](https://www.tensorflow.org/api_docs/python/tf/contrib/layers/fully_connected). In the last function above (`tf.contrib.layers.fully_connected`), the fully connected layer automatically initializes weights in the graph and keeps on training them as you train the model. Hence, you did not need to initialize those weights when initializing the parameters. #### Window, kernel, filter The words "window", "kernel", and "filter" are used to refer to the same thing. This is why the parameter `ksize` refers to "kernel size", and we use `(f,f)` to refer to the filter size. Both "kernel" and "filter" refer to the "window." 
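The flatten behaviour described above is easy to check without TensorFlow; an equivalent NumPy sketch of reducing an (m, h, w, c) batch to (batch_size, k):

```python
import numpy as np

# A batch shaped (m, h, w, c) = (100, 2, 3, 4), as in the example above.
P = np.zeros((100, 2, 3, 4))

# Flatten every dimension except the first (the batch dimension):
# k = 2 * 3 * 4 = 24, matching what tf.contrib.layers.flatten produces.
F = P.reshape(P.shape[0], -1)
print(F.shape)  # → (100, 24)
```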
**Exercise**: Implement the `forward_propagation` function below to build the following model: `CONV2D -> RELU -> MAXPOOL -> CONV2D -> RELU -> MAXPOOL -> FLATTEN -> FULLYCONNECTED`. You should use the functions above.

In detail, we will use the following parameters for all the steps:
- Conv2D: stride 1, padding is "SAME"
- ReLU
- Max pool: Use an 8 by 8 filter size and an 8 by 8 stride, padding is "SAME"
- Conv2D: stride 1, padding is "SAME"
- ReLU
- Max pool: Use a 4 by 4 filter size and a 4 by 4 stride, padding is "SAME"
- Flatten the previous output.
- FULLYCONNECTED (FC) layer: Apply a fully connected layer without a non-linear activation function. Do not call the softmax here. This will result in 6 neurons in the output layer, which then get passed later to a softmax. In TensorFlow, the softmax and cost function are lumped together into a single function, which you'll call in a different function when computing the cost.

```
# GRADED FUNCTION: forward_propagation

def forward_propagation(X, parameters):
    """
    Implements the forward propagation for the model:
    CONV2D -> RELU -> MAXPOOL -> CONV2D -> RELU -> MAXPOOL -> FLATTEN -> FULLYCONNECTED

    Note that for simplicity and grading purposes, we'll hard-code some values
    such as the stride and kernel (filter) sizes.
    Normally, functions should take these values as function parameters.

    Arguments:
    X -- input dataset placeholder, of shape (input size, number of examples)
    parameters -- python dictionary containing your parameters "W1", "W2"
                  the shapes are given in initialize_parameters

    Returns:
    Z3 -- the output of the last LINEAR unit
    """

    # Retrieve the parameters from the dictionary "parameters"
    W1 = parameters['W1']
    W2 = parameters['W2']

    ### START CODE HERE ###
    # CONV2D: stride of 1, padding 'SAME'
    Z1 = tf.nn.conv2d(X, W1, strides = [1,1,1,1], padding = 'SAME')
    # RELU
    A1 = tf.nn.relu(Z1)
    # MAXPOOL: window 8x8, stride 8, padding 'SAME'
    P1 = tf.nn.max_pool(A1, ksize = [1,8,8,1], strides = [1,8,8,1], padding = 'SAME')
    # CONV2D: filters W2, stride 1, padding 'SAME'
    Z2 = tf.nn.conv2d(P1, W2, strides = [1,1,1,1], padding = 'SAME')
    # RELU
    A2 = tf.nn.relu(Z2)
    # MAXPOOL: window 4x4, stride 4, padding 'SAME'
    P2 = tf.nn.max_pool(A2, ksize = [1,4,4,1], strides = [1,4,4,1], padding = 'SAME')
    # FLATTEN
    F = tf.contrib.layers.flatten(P2)
    # FULLY-CONNECTED without a non-linear activation function (do not call softmax).
    # 6 neurons in output layer. Hint: one of the arguments should be "activation_fn=None"
    Z3 = tf.contrib.layers.fully_connected(F, 6, activation_fn = None)
    ### END CODE HERE ###

    return Z3

tf.reset_default_graph()

with tf.Session() as sess:
    np.random.seed(1)
    X, Y = create_placeholders(64, 64, 3, 6)
    parameters = initialize_parameters()
    Z3 = forward_propagation(X, parameters)
    init = tf.global_variables_initializer()
    sess.run(init)
    a = sess.run(Z3, {X: np.random.randn(2,64,64,3), Y: np.random.randn(2,6)})
    print("Z3 = \n" + str(a))
```

**Expected Output**:

```
Z3 =
[[-0.44670227 -1.57208765 -1.53049231 -2.31013036 -1.29104376  0.46852064]
 [-0.17601591 -1.57972014 -1.4737016  -2.61672091 -1.00810647  0.5747785 ]]
```

### 1.4 - Compute cost

Implement the compute cost function below. Remember that the cost function helps the neural network see how much the model's predictions differ from the correct labels.
By adjusting the weights of the network to reduce the cost, the neural network can improve its predictions.

You might find these two functions helpful:

- **tf.nn.softmax_cross_entropy_with_logits(logits = Z, labels = Y):** computes the softmax cross-entropy loss. This function computes both the softmax activation function and the resulting loss. You can check the full documentation for [softmax_cross_entropy_with_logits](https://www.tensorflow.org/api_docs/python/tf/nn/softmax_cross_entropy_with_logits).
- **tf.reduce_mean:** computes the mean of elements across dimensions of a tensor. Use this to average the losses over all the examples to get the overall cost. You can check the full documentation for [reduce_mean](https://www.tensorflow.org/api_docs/python/tf/reduce_mean).

#### Details on softmax_cross_entropy_with_logits (optional reading)

* Softmax is used to format outputs so that they can be used for classification. It assigns a value between 0 and 1 for each category, where the sum of all prediction values (across all possible categories) equals 1.
* Cross entropy compares the model's predicted classifications with the actual labels and produces a numerical value representing the "loss" of the model's predictions.
* "Logits" are the result of multiplying the weights and adding the biases. Logits are passed through an activation function (such as a relu), and the result is called the "activation."
* The function `softmax_cross_entropy_with_logits` takes logits as input (and not activations); it applies softmax to get predictions and then compares the predictions with the true labels using cross entropy. These steps are done in a single function to optimize the calculations.

**Exercise**: Compute the cost below using the function above.
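As a framework-free aside (separate from the graded exercise below), the quantity that `tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(...))` produces can be sketched in NumPy; the sample logits and labels here are made up for illustration:

```python
import numpy as np

def softmax_cross_entropy(logits, labels):
    """Mean cross-entropy between softmax(logits) and one-hot labels."""
    # Numerically stable log-softmax: subtract the row-wise max first.
    shifted = logits - logits.max(axis=1, keepdims=True)
    log_softmax = shifted - np.log(np.exp(shifted).sum(axis=1, keepdims=True))
    # Per-example loss, then the mean over the batch.
    return -(labels * log_softmax).sum(axis=1).mean()

logits = np.array([[2.0, 1.0, 0.1]])
labels = np.array([[1.0, 0.0, 0.0]])   # true class is index 0
print(round(softmax_cross_entropy(logits, labels), 3))  # → 0.417
```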
``` # GRADED FUNCTION: compute_cost def compute_cost(Z3, Y): """ Computes the cost Arguments: Z3 -- output of forward propagation (output of the last LINEAR unit), of shape (number of examples, 6) Y -- "true" labels vector placeholder, same shape as Z3 Returns: cost - Tensor of the cost function """ ### START CODE HERE ### (1 line of code) cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits = Z3, labels = Y)) ### END CODE HERE ### return cost tf.reset_default_graph() with tf.Session() as sess: np.random.seed(1) X, Y = create_placeholders(64, 64, 3, 6) parameters = initialize_parameters() Z3 = forward_propagation(X, parameters) cost = compute_cost(Z3, Y) init = tf.global_variables_initializer() sess.run(init) a = sess.run(cost, {X: np.random.randn(4,64,64,3), Y: np.random.randn(4,6)}) print("cost = " + str(a)) ``` **Expected Output**: ``` cost = 2.91034 ``` ## 1.5 Model Finally you will merge the helper functions you implemented above to build a model. You will train it on the SIGNS dataset. **Exercise**: Complete the function below. The model below should: - create placeholders - initialize parameters - forward propagate - compute the cost - create an optimizer Finally you will create a session and run a for loop for num_epochs, get the mini-batches, and then for each mini-batch you will optimize the function. [Hint for initializing the variables](https://www.tensorflow.org/api_docs/python/tf/global_variables_initializer) #### Adam Optimizer You can use `tf.train.AdamOptimizer(learning_rate = ...)` to create the optimizer. The optimizer has a `minimize(loss=...)` function that you'll call to set the cost function that the optimizer will minimize. For details, check out the documentation for [Adam Optimizer](https://www.tensorflow.org/api_docs/python/tf/train/AdamOptimizer) #### Random mini batches If you took course 2 of the deep learning specialization, you implemented `random_mini_batches()` in the "Optimization" programming assignment. 
This function returns a list of mini-batches. It is already implemented in the `cnn_utils.py` file and imported here, so you can call it like this:

```Python
minibatches = random_mini_batches(X, Y, mini_batch_size = 64, seed = 0)
```

(You will want to choose the correct variable names when you use it in your code.)

#### Evaluating the optimizer and cost

Within a loop, for each mini-batch, you'll use the `tf.Session` object (named `sess`) to feed a mini-batch of inputs and labels into the neural network and evaluate the tensors for the optimizer as well as the cost. Remember that we built a graph data structure and need to feed it inputs and labels and use `sess.run()` in order to get values for the optimizer and cost.

You'll use this kind of syntax:

```
output_for_var1, output_for_var2 = sess.run(
    fetches=[var1, var2],
    feed_dict={var_inputs: the_batch_of_inputs,
               var_labels: the_batch_of_labels}
)
```

* Notice that `sess.run` takes its first argument `fetches` as a list of objects that you want it to evaluate (in this case, we want to evaluate the optimizer and the cost).
* It also takes a dictionary for the `feed_dict` parameter.
  * The keys are the `tf.placeholder` variables that we created in the `create_placeholders` function above.
  * The values are the variables holding the actual numpy arrays for each mini-batch.
* `sess.run` outputs a tuple of the evaluated tensors, in the same order as the list given to `fetches`.

For more information on how to use `sess.run`, see the [tf.Session#run](https://www.tensorflow.org/api_docs/python/tf/Session#run) documentation.
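For intuition, the shuffle-and-partition behaviour of `random_mini_batches` can be sketched in NumPy like this (a simplified stand-in, hence the `_sketch` suffix; the real helper in `cnn_utils.py` may differ in details such as how it shuffles):

```python
import numpy as np

def random_mini_batches_sketch(X, Y, mini_batch_size=64, seed=0):
    """Shuffle (X, Y) in unison, then split into consecutive mini-batches.

    An illustrative stand-in for the random_mini_batches helper from
    cnn_utils.py, not the graded code.
    """
    rng = np.random.RandomState(seed)
    m = X.shape[0]
    permutation = rng.permutation(m)
    X_shuffled, Y_shuffled = X[permutation], Y[permutation]
    # Full-size batches first; the last batch holds the remainder (if any).
    return [
        (X_shuffled[k:k + mini_batch_size], Y_shuffled[k:k + mini_batch_size])
        for k in range(0, m, mini_batch_size)
    ]

# Tiny demo: 10 examples, batch size 4 -> batches of sizes 4, 4, 2
X_demo = np.arange(10).reshape(10, 1)
Y_demo = np.arange(10).reshape(10, 1)
batches = random_mini_batches_sketch(X_demo, Y_demo, mini_batch_size=4, seed=0)
```

Each element of the returned list is a `(minibatch_X, minibatch_Y)` pair, which is exactly what the training loop below consumes.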
```
# GRADED FUNCTION: model

def model(X_train, Y_train, X_test, Y_test, learning_rate=0.009,
          num_epochs=100, minibatch_size=64, print_cost=True):
    """
    Implements a three-layer ConvNet in Tensorflow:
    CONV2D -> RELU -> MAXPOOL -> CONV2D -> RELU -> MAXPOOL -> FLATTEN -> FULLYCONNECTED

    Arguments:
    X_train -- training set, of shape (None, 64, 64, 3)
    Y_train -- training set labels, of shape (None, n_y = 6)
    X_test -- test set, of shape (None, 64, 64, 3)
    Y_test -- test set labels, of shape (None, n_y = 6)
    learning_rate -- learning rate of the optimization
    num_epochs -- number of epochs of the optimization loop
    minibatch_size -- size of a minibatch
    print_cost -- True to print the cost every 5 epochs

    Returns:
    train_accuracy -- real number, accuracy on the train set (X_train)
    test_accuracy -- real number, testing accuracy on the test set (X_test)
    parameters -- parameters learnt by the model. They can then be used to predict.
    """

    ops.reset_default_graph()  # to be able to rerun the model without overwriting tf variables
    tf.set_random_seed(1)      # to keep results consistent (tensorflow seed)
    seed = 3                   # to keep results consistent (numpy seed)
    (m, n_H0, n_W0, n_C0) = X_train.shape
    n_y = Y_train.shape[1]
    costs = []                 # To keep track of the cost

    # Create Placeholders of the correct shape
    ### START CODE HERE ### (1 line)
    X, Y = create_placeholders(n_H0, n_W0, n_C0, n_y)
    ### END CODE HERE ###

    # Initialize parameters
    ### START CODE HERE ### (1 line)
    parameters = initialize_parameters()
    ### END CODE HERE ###

    # Forward propagation: Build the forward propagation in the tensorflow graph
    ### START CODE HERE ### (1 line)
    Z3 = forward_propagation(X, parameters)
    ### END CODE HERE ###

    # Cost function: Add cost function to tensorflow graph
    ### START CODE HERE ### (1 line)
    cost = compute_cost(Z3, Y)
    ### END CODE HERE ###

    # Backpropagation: Define the tensorflow optimizer. Use an AdamOptimizer that minimizes the cost.
    ### START CODE HERE ### (1 line)
    optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate).minimize(cost)
    ### END CODE HERE ###

    # Initialize all the variables globally
    init = tf.global_variables_initializer()

    # Start the session to compute the tensorflow graph
    with tf.Session() as sess:

        # Run the initialization
        sess.run(init)

        # Do the training loop
        for epoch in range(num_epochs):

            minibatch_cost = 0.
            num_minibatches = int(m / minibatch_size)  # number of minibatches of size minibatch_size in the train set
            seed = seed + 1
            minibatches = random_mini_batches(X_train, Y_train, minibatch_size, seed)

            for minibatch in minibatches:

                # Select a minibatch
                (minibatch_X, minibatch_Y) = minibatch
                # IMPORTANT: The line that runs the graph on a minibatch.
                # Run the session to execute the optimizer and the cost.
                # The feed_dict should contain a minibatch for (X, Y).
                ### START CODE HERE ### (1 line)
                _, temp_cost = sess.run(fetches=[optimizer, cost],
                                        feed_dict={X: minibatch_X, Y: minibatch_Y})
                ### END CODE HERE ###

                minibatch_cost += temp_cost / num_minibatches

            # Print the cost every 5 epochs
            if print_cost == True and epoch % 5 == 0:
                print("Cost after epoch %i: %f" % (epoch, minibatch_cost))
            if print_cost == True and epoch % 1 == 0:
                costs.append(minibatch_cost)

        # plot the cost
        plt.plot(np.squeeze(costs))
        plt.ylabel('cost')
        plt.xlabel('iterations (per tens)')
        plt.title("Learning rate =" + str(learning_rate))
        plt.show()

        # Calculate the correct predictions
        predict_op = tf.argmax(Z3, 1)
        correct_prediction = tf.equal(predict_op, tf.argmax(Y, 1))

        # Calculate accuracy on the test set
        accuracy = tf.reduce_mean(tf.cast(correct_prediction, "float"))
        print(accuracy)
        train_accuracy = accuracy.eval({X: X_train, Y: Y_train})
        test_accuracy = accuracy.eval({X: X_test, Y: Y_test})
        print("Train Accuracy:", train_accuracy)
        print("Test Accuracy:", test_accuracy)

        return train_accuracy, test_accuracy, parameters
```

Run the following cell to train your model for 100 epochs.
Check if your cost after epoch 0 and 5 matches our output. If not, stop the cell and go back to your code!

```
_, _, parameters = model(X_train, Y_train, X_test, Y_test)
```

**Expected output**: although it may not match perfectly, your expected output should be close to ours and your cost value should decrease.

<table>
<tr>
<td>**Cost after epoch 0 =**</td>
<td>1.917929</td>
</tr>
<tr>
<td>**Cost after epoch 5 =**</td>
<td>1.506757</td>
</tr>
<tr>
<td>**Train Accuracy =**</td>
<td>0.940741</td>
</tr>
<tr>
<td>**Test Accuracy =**</td>
<td>0.783333</td>
</tr>
</table>

Congratulations! You have finished the assignment and built a model that recognizes SIGN language with almost 80% accuracy on the test set. If you wish, feel free to play around with this dataset further. You can actually improve its accuracy by spending more time tuning the hyperparameters, or by using regularization (as this model clearly has high variance).

Once again, here's a thumbs up for your work!

```
fname = "images/thumbs_up.jpg"
image = np.array(ndimage.imread(fname, flatten=False))
my_image = scipy.misc.imresize(image, size=(64,64))
plt.imshow(my_image)
```
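The accuracy computation at the end of `model` (`tf.argmax` then `tf.equal` then `tf.reduce_mean`) has a direct NumPy equivalent; a small sketch for intuition (the function name and arrays here are illustrative, not from the assignment):

```python
import numpy as np

def accuracy_from_logits(Z3, Y):
    """NumPy mirror of the graph ops tf.argmax / tf.equal / tf.reduce_mean.

    Z3 holds logits of shape (m, n_classes); Y holds one-hot labels of the
    same shape.
    """
    predictions = np.argmax(Z3, axis=1)  # predicted class index per example
    labels = np.argmax(Y, axis=1)        # true class index from one-hot labels
    return float(np.mean(predictions == labels))

Z3 = np.array([[0.1, 0.9], [0.8, 0.2], [0.3, 0.7]])
Y = np.array([[0, 1], [1, 0], [1, 0]])
acc = accuracy_from_logits(Z3, Y)  # two of the three predictions match
```

Because only the argmax matters, this works the same whether `Z3` contains raw logits or softmax probabilities.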
<img src="https://s3-ap-southeast-1.amazonaws.com/he-public-data/wordmark_black65ee464.png" width="700">

# Weekend Exercise (Ungraded)

In all the questions so far you've been asked to use simulators with no (or negligible) noise for the computational tasks. However, today's quantum computers are far from ideal and noiseless. As coined by John Preskill, we are in the [Noisy Intermediate Scale Quantum (NISQ)](https://arxiv.org/abs/1801.00862) era, where our quantum computers are of intermediate scale (50-100 qubits) and are very noisy; for example, a qubit initialized in the $|0\rangle$ state might not give $0$ on measurement in the computational basis all the time.

To understand the statement above, let's do an experiment on a noiseless simulator and on an actual quantum computer and compare the results.

### Simulator

```
%matplotlib inline
from qiskit import QuantumCircuit, execute, Aer
from qiskit.tools.jupyter import *
from qiskit.visualization import *
import numpy as np

qc = QuantumCircuit(1, 1)
qc.x(0)
qc.measure(0, 0)
qc.draw(output="mpl")
```

Once we build our circuit, let's use the `qasm_simulator` from Aer to get the measurement counts.

```
backend = Aer.get_backend("qasm_simulator")
job = execute(qc, backend=backend, shots=1000)
counts = job.result().get_counts()
plot_histogram(counts)
```

As you'd expect, all 1000 counts correspond to `1`. Now let's do the same experiment on a real device.

### Real device

To run your experiments on a real device you'll first need to create (or log into) an [IBM Quantum Experience](https://www.ibm.com/quantum-computing/technology/experience/) account and follow the instructions given [here](https://qiskit.org/documentation/install.html#install-access-ibm-q-devices-label) to be able to access IBM quantum services from Qiskit.
```
# only run this cell once; running it a second time might raise an error/warning as enable_account() already has your token stored
from qiskit import IBMQ

# to enable your account you'll need to enter your token from IBM Quantum Experience in place of 'YOUR_IBM_TOKEN', as a string
IBMQ.save_account('YOUR_IBM_TOKEN')
IBMQ.enable_account('YOUR_IBM_TOKEN')

# loading your account
IBMQ.load_account()

# Getting a backend to run the circuit on. In this case 'ibmq_armonk'. For more devices you have access to, you
# can look into your IBM Q Experience account's dashboard.
provider = IBMQ.get_provider(hub='ibm-q')
backend = provider.get_backend("ibmq_armonk")

job = execute(qc, backend=backend, shots=1000)
counts = job.result().get_counts()
plot_histogram(counts)
```

From the above histogram we see that not all the counts resulted in a `1`. This is an example of the noise that gates induce. It's important to note that different gates induce different levels of noise into the system. For example, 2-qubit gates induce a lot more noise than single-qubit gates, so while creating circuits to be run on real devices we should be wary of the number of 2-qubit gates being used and try to reduce them as much as possible. This idea will be the essence of today's exercise. [Chapter 5 of the Qiskit Textbook](https://qiskit.org/textbook/ch-quantum-hardware/index-circuits.html) delves deeper into noise and some methods to tackle it if you're interested in learning more.

While in today's exercise we won't be working with noisy simulators or real devices, let's take a step forward in understanding how to build circuits that give better results in the presence of noise.
To do that, let's dive into the exercise:

## Exercise: Construct a 3-digit binary adder circuit with minimum cost.

You have learnt how to construct a binary half adder in [Chapter 1.2](https://qiskit.org/textbook/ch-states/atoms-computation.html) of the Qiskit Textbook. Using that knowledge, we want you to create a 3-digit binary adder circuit which can do computations such as $101 + 110 = 1011$, where each input is a 3-digit binary number and the output is a 4-digit binary number. Your task is to find such a circuit with the least cost possible. The exercise is intentionally defined without many constraints to give you the freedom to test different data encoding schemes, basis gate sets, etc.

Let us now define our cost function:

**Cost of the circuit = Number of single-qubit gates + 10 $\times$ Number of CX gates**

### The cost function

Any given quantum circuit can be decomposed into single-qubit and `CX` gates, as they form a set of [universal quantum gates](https://en.wikipedia.org/wiki/Quantum_logic_gate#Universal_quantum_gates). With the current Noisy Intermediate-Scale Quantum (NISQ) devices, the noise introduced is higher when implementing a `CX` gate. Therefore, we weigh `CX` gates 10 times more than a single-qubit gate while evaluating the cost of our circuit.

To evaluate the cost of your circuit you can use the `cost_function()` method given below.

The `cost_function()` takes as **input**:

* `circuit`: (`QuantumCircuit`) -- The quantum circuit for which you'd like to find the cost.
And gives as **output**:

* `circuit_cost`: (`int`) -- Cost of the circuit
* `gates`: (`Dict`) -- Dictionary with the number of gates used in the circuit
* `unrolled_circuit`: (`QuantumCircuit`) -- The resultant circuit after the change of basis

```
from qiskit.transpiler import PassManager
from qiskit.transpiler.passes import Unroller

def cost_function(circuit):
    if not isinstance(circuit, (QuantumCircuit)):
        print("the inserted circuit must be a QuantumCircuit object, not {}".format(type(circuit)))
    else:
        basis_gate_set = ['u3', 'cx']  # basis that we are unrolling the circuit into

        # changing our basis using an Unroller
        pass_ = Unroller(basis_gate_set)
        pm = PassManager(pass_)
        unrolled_circuit = pm.run(circuit)

        # calculating the cost function using the equation given above
        gates = unrolled_circuit.count_ops()
        circuit_cost = gates['u3'] + 10*gates['cx']

        return circuit_cost, gates, unrolled_circuit
```

Internally, the `cost_function()` method uses an `Unroller` pass to convert our circuit to the {'u3', 'cx'} basis gate set and then applies the cost function equation as defined above. Here are a few resources to understand how the transpiler works:

* Qiskit Terra Documentation - [Transpiler](https://qiskit.org/documentation/apidoc/transpiler.html)
* Advanced Circuit Tutorial - [Transpiler Passes and Pass Manager](https://qiskit.org/documentation/tutorials/circuits_advanced/4_transpiler_passes_and_passmanager.html)

Let's understand how to use the `cost_function()` method by applying it to a half adder circuit as given in [Chapter 1.2](https://qiskit.org/textbook/ch-states/atoms-computation.html) of the Qiskit Textbook.

```
# Constructing the Half Adder circuit
qc_ha = QuantumCircuit(4, 2)
# encode inputs in qubits 0 and 1
qc_ha.x(0)  # For a=0, remove this line. For a=1, leave it.
qc_ha.x(1)  # For b=0, remove this line. For b=1, leave it.
qc_ha.barrier()
# use cnots to write the XOR of the inputs on qubit 2
qc_ha.cx(0, 2)
qc_ha.cx(1, 2)
# use ccx to write the AND of the inputs on qubit 3
qc_ha.ccx(0, 1, 3)
qc_ha.barrier()
# extract outputs
qc_ha.measure(2, 0)  # extract XOR value
qc_ha.measure(3, 1)  # extract AND value
qc_ha.draw()

circuit_cost, gates, unrolled_circuit = cost_function(qc_ha)
print('Cost of the circuit : {}'.format(circuit_cost))
print('Gates counts after unrolling : {}'.format(gates))
print('Circuit after unrolling :')
unrolled_circuit.draw()
```
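The cost formula itself doesn't need Qiskit; a minimal sketch that applies it directly to a gate-count dictionary of the kind `count_ops()` returns (the counts used below are made up for illustration, not the half adder's actual unrolled counts):

```python
def circuit_cost_from_counts(gate_counts):
    """Cost = (# single-qubit gates) + 10 * (# CX gates), assuming the circuit
    has already been unrolled into the {'u3', 'cx'} basis."""
    return gate_counts.get('u3', 0) + 10 * gate_counts.get('cx', 0)

# Hypothetical gate counts, shaped like unrolled_circuit.count_ops() output
example_counts = {'u3': 9, 'cx': 8}
cost = circuit_cost_from_counts(example_counts)  # 9 + 10 * 8 = 89
```

Because each `CX` contributes 10 to the cost, reducing the two-qubit gate count dominates any optimization of this metric.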
# Assignment 1.1 - K-nearest neighbor classifier

In this first assignment you will implement one of the simplest machine learning algorithms - a classifier based on the method of K nearest neighbors. We will apply it to two tasks:
- binary classification (that is, only two classes)
- multi-class classification (that is, several classes)

Since the method needs a hyperparameter - the number of neighbors - we will choose it via cross-validation.

Our main goal is to learn to use numpy and to express computations in vectorized form, and also to get familiar with the basic metrics that matter for classification tasks.

Before starting the assignment:
- run the `download_data.sh` file to download the data we will use for training
- install all required libraries by running `pip install -r requirements.txt` (if you haven't worked with `pip` before, see https://pip.pypa.io/en/stable/quickstart/)

If you haven't worked with numpy before, a tutorial may help, for example: http://cs231n.github.io/python-numpy-tutorial/

```
import numpy as np
import matplotlib.pyplot as plt

%matplotlib inline

%load_ext autoreload
%autoreload 2

from dataset import load_svhn
from knn import KNN
from metrics import binary_classification_metrics, multiclass_accuracy
```

# Load and visualize the data

The `load_svhn` function that loads the data from disk is already provided. It returns the training and test data as numpy arrays. We will use digits from the Street View House Numbers dataset (SVHN, http://ufldl.stanford.edu/housenumbers/) to solve a task at least somewhat harder than MNIST.
```
train_X, train_y, test_X, test_y = load_svhn("data", max_train=1000, max_test=100)

samples_per_class = 5  # Number of samples per class to visualize
plot_index = 1
for example_index in range(samples_per_class):
    for class_index in range(10):
        plt.subplot(5, 10, plot_index)
        image = train_X[train_y == class_index][example_index]
        plt.imshow(image.astype(np.uint8))
        plt.axis('off')
        plot_index += 1
```

# First, implement KNN for binary classification

As the binary classification task, we will train a model that distinguishes the digit 0 from the digit 9.

```
# First, let's prepare the labels and the source data
# Only select 0s and 9s
binary_train_mask = (train_y == 0) | (train_y == 9)
binary_train_X = train_X[binary_train_mask]
binary_train_y = train_y[binary_train_mask] == 0

binary_test_mask = (test_y == 0) | (test_y == 9)
binary_test_X = test_X[binary_test_mask]
binary_test_y = test_y[binary_test_mask] == 0

# Reshape to 1-dimensional array [num_samples, 32*32*3]
binary_train_X = binary_train_X.reshape(binary_train_X.shape[0], -1)
binary_test_X = binary_test_X.reshape(binary_test_X.shape[0], -1)

# Create the classifier and call fit to train the model
# KNN just remembers all the data
knn_classifier = KNN(k=1)
knn_classifier.fit(binary_train_X, binary_train_y)
```

## Time to write some code!

Implement, one after another, the functions `compute_distances_two_loops`, `compute_distances_one_loop` and `compute_distances_no_loops` in the file `knn.py`. These functions build the array of distances between all vectors in the test set and all vectors in the training set. The result should be an array of shape `(num_test, num_train)`, where entry `[i][j]` is the distance between the i-th vector in test (`test[i]`) and the j-th vector in train (`train[j]`).
**Note:** For simplicity of implementation we will use the L1 distance (also known as [Manhattan distance](https://ru.wikipedia.org/wiki/%D0%A0%D0%B0%D1%81%D1%81%D1%82%D0%BE%D1%8F%D0%BD%D0%B8%D0%B5_%D0%B3%D0%BE%D1%80%D0%BE%D0%B4%D1%81%D0%BA%D0%B8%D1%85_%D0%BA%D0%B2%D0%B0%D1%80%D1%82%D0%B0%D0%BB%D0%BE%D0%B2)).

![image.png](attachment:image.png)

```
# TODO: implement compute_distances_two_loops in knn.py
dists = knn_classifier.compute_distances_two_loops(binary_test_X)
assert np.isclose(dists[0, 10], np.sum(np.abs(binary_test_X[0] - binary_train_X[10])))

# TODO: implement compute_distances_one_loop in knn.py
dists = knn_classifier.compute_distances_one_loop(binary_test_X)
assert np.isclose(dists[0, 10], np.sum(np.abs(binary_test_X[0] - binary_train_X[10])))

# TODO: implement compute_distances_no_loops in knn.py
dists = knn_classifier.compute_distances_no_loops(binary_test_X)
assert np.isclose(dists[0, 10], np.sum(np.abs(binary_test_X[0] - binary_train_X[10])))

# Let's look at the performance difference
%timeit knn_classifier.compute_distances_two_loops(binary_test_X)
%timeit knn_classifier.compute_distances_one_loop(binary_test_X)
%timeit knn_classifier.compute_distances_no_loops(binary_test_X)

# TODO: implement predict_labels_binary in knn.py
prediction = knn_classifier.predict(binary_test_X)

# TODO: implement binary_classification_metrics in metrics.py
precision, recall, f1, accuracy = binary_classification_metrics(prediction, binary_test_y)
print("KNN with k = %s" % knn_classifier.k)
print("Accuracy: %4.2f, Precision: %4.2f, Recall: %4.2f, F1: %4.2f" % (accuracy, precision, recall, f1))

# Let's put everything together and run KNN with k=3 and see how we do
knn_classifier_3 = KNN(k=3)
knn_classifier_3.fit(binary_train_X, binary_train_y)
prediction = knn_classifier_3.predict(binary_test_X)

precision, recall, f1, accuracy = binary_classification_metrics(prediction, binary_test_y)
print("KNN with k = %s" % knn_classifier_3.k)
print("Accuracy: %4.2f, Precision: %4.2f, Recall: %4.2f, F1: %4.2f" % (accuracy, precision, recall, f1))
```

# Cross-validation

Let's try to find the best value of the parameter k for the KNN algorithm! For that we will use k-fold cross-validation (https://en.wikipedia.org/wiki/Cross-validation_(statistics)#k-fold_cross-validation).

We will split the training data into 5 folds and, in turn, use each of them as validation data while using the rest as training data. As the final measure of effectiveness of k we will average the F1 scores over all folds. Then we simply pick the value of k with the best metric value.

*Bonus*: are there other ways to aggregate the F1 score over all folds? Write down the pros and cons in the cell below.

```
# Find the best k using cross-validation based on F1 score
num_folds = 5
train_folds_X = []
train_folds_y = []
test_folds_X = []
test_folds_y = []

# TODO: split the training data in 5 folds and store them in train_folds_X/train_folds_y
from sklearn.model_selection import StratifiedKFold

skf = StratifiedKFold(n_splits=5)
for train_ind, test_ind in skf.split(binary_train_X, binary_train_y):
    X_train, X_test = binary_train_X[train_ind], binary_train_X[test_ind]
    y_train, y_test = binary_train_y[train_ind], binary_train_y[test_ind]
    train_folds_X.append(X_train)
    train_folds_y.append(y_train)
    test_folds_X.append(X_test)
    test_folds_y.append(y_test)

k_choices = [1, 2, 3, 5, 8, 10, 15, 20, 25, 50]
k_to_f1 = {}  # dict mapping k values to mean F1 scores (int -> float)

for k in k_choices:
    # TODO: perform cross-validation
    # Go through every fold and use it for testing and all other folds for training
    # Perform training and produce F1 score metric on the validation dataset
    # Average F1 from all the folds and write it into k_to_f1
    avg_f1 = 0
    for i in range(num_folds):
        knn_classifier = KNN(k=k)
        knn_classifier.fit(train_folds_X[i], train_folds_y[i])
        prediction = knn_classifier.predict(test_folds_X[i])
        precision, recall, f1, accuracy = binary_classification_metrics(prediction, test_folds_y[i])
        avg_f1 += f1
    avg_f1 /= num_folds
    k_to_f1[k] = avg_f1

for k in sorted(k_to_f1):
    print('k = %d, f1 = %f' % (k, k_to_f1[k]))
```

### Check how well the best value of k works on the test data

```
# TODO: Set the best k to the best value found by cross-validation
best_k = 1

best_knn_classifier = KNN(k=best_k)
best_knn_classifier.fit(binary_train_X, binary_train_y)
prediction = best_knn_classifier.predict(binary_test_X)

precision, recall, f1, accuracy = binary_classification_metrics(prediction, binary_test_y)
print("Best KNN with k = %s" % best_k)
print("Accuracy: %4.2f, Precision: %4.2f, Recall: %4.2f, F1: %4.2f" % (accuracy, precision, recall, f1))
```

# Multi-class classification

Moving on to the next stage - classification over all ten digits.

```
# Now let's use all 10 classes
train_X = train_X.reshape(train_X.shape[0], -1)
test_X = test_X.reshape(test_X.shape[0], -1)

knn_classifier = KNN(k=1)
knn_classifier.fit(train_X, train_y)

# TODO: Implement predict_labels_multiclass
predict = knn_classifier.predict(test_X)

# TODO: Implement multiclass_accuracy
accuracy = multiclass_accuracy(predict, test_y)
print("Accuracy: %4.2f" % accuracy)
```

Cross-validation again. This time our main metric is accuracy, and we will likewise average it over all folds.
```
# Find the best k using cross-validation based on accuracy
num_folds = 5
train_folds_X = []
train_folds_y = []
test_folds_X = []
test_folds_y = []

# TODO: split the training data in 5 folds and store them in train_folds_X/train_folds_y
skf = StratifiedKFold(n_splits=5)
for train_ind, test_ind in skf.split(train_X, train_y):
    X_train, X_test = train_X[train_ind], train_X[test_ind]
    y_train, y_test = train_y[train_ind], train_y[test_ind]
    train_folds_X.append(X_train)
    train_folds_y.append(y_train)
    test_folds_X.append(X_test)
    test_folds_y.append(y_test)

k_choices = [1, 2, 3, 5, 8, 10, 15, 20, 25, 50]
k_to_accuracy = {}

for k in k_choices:
    # TODO: perform cross-validation
    # Go through every fold and use it for testing and all other folds for training
    # Perform training and produce accuracy metric on the validation dataset
    # Average accuracy from all the folds and write it into k_to_accuracy
    avg_acc = 0
    for i in range(num_folds):
        knn_classifier = KNN(k=k)
        knn_classifier.fit(train_folds_X[i], train_folds_y[i])
        prediction = knn_classifier.predict(test_folds_X[i])
        accuracy = multiclass_accuracy(prediction, test_folds_y[i])
        avg_acc += accuracy
    avg_acc /= num_folds
    k_to_accuracy[k] = avg_acc

for k in sorted(k_to_accuracy):
    print('k = %d, accuracy = %f' % (k, k_to_accuracy[k]))
```

### Final test - 10-class classification on the test data

If everything is implemented correctly, you should see an accuracy of at least **0.2**.

```
# TODO: Set the best k to the best value found by cross-validation
best_k = 1

best_knn_classifier = KNN(k=best_k)
best_knn_classifier.fit(train_X, train_y)
prediction = best_knn_classifier.predict(test_X)

# Accuracy should be around 20%!
accuracy = multiclass_accuracy(prediction, test_y)
print("Accuracy: %4.2f" % accuracy)

## My machine didn't have enough compute power to get the result. Great.
```
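As a reference for the `compute_distances_no_loops` exercise above, the fully vectorized L1 distance matrix can be obtained with NumPy broadcasting; a sketch on tiny hypothetical arrays (not the SVHN data, and the function name is ours, not the one graded in `knn.py`):

```python
import numpy as np

def l1_distances_no_loops(test_X, train_X):
    """All pairwise L1 (Manhattan) distances, shape (num_test, num_train).

    Broadcasting (num_test, 1, D) against (1, num_train, D) yields a
    (num_test, num_train, D) difference tensor; summing over D gives the
    distance matrix without any Python loops.
    """
    return np.abs(test_X[:, np.newaxis, :] - train_X[np.newaxis, :, :]).sum(axis=2)

test_X = np.array([[0.0, 0.0], [1.0, 1.0]])
train_X = np.array([[1.0, 0.0], [0.0, 2.0]])
dists = l1_distances_no_loops(test_X, train_X)
```

Note that the intermediate tensor uses `num_test * num_train * D` memory; for large D (here 32*32*3) the one-loop version can be a reasonable compromise between speed and memory.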
<i>Copyright (c) Microsoft Corporation. All rights reserved.</i>

<i>Licensed under the MIT License.</i>

# Evaluation

Evaluation with offline metrics is pivotal to assess the quality of a recommender before it goes into production. Evaluation metrics are usually chosen carefully based on the actual application scenario of a recommendation system. It is hence important for data scientists and AI developers who build recommendation systems to understand how each evaluation metric is calculated and what it is for.

This notebook deep dives into several commonly used evaluation metrics and illustrates how these metrics are used in practice. The metrics covered in this notebook are for offline evaluation only.

## 0 Global settings

Most of the functions used in the notebook can be found in the `reco_utils` directory.

```
# set the environment path to find Recommenders
import sys
sys.path.append("../../")

import pandas as pd
import pyspark
from sklearn.preprocessing import minmax_scale

from reco_utils.common.spark_utils import start_or_get_spark
from reco_utils.evaluation.spark_evaluation import SparkRankingEvaluation, SparkRatingEvaluation
from reco_utils.evaluation.python_evaluation import auc, logloss

print("System version: {}".format(sys.version))
print("Pandas version: {}".format(pd.__version__))
print("PySpark version: {}".format(pyspark.__version__))
```

Note that to successfully run Spark code with the Jupyter kernel, one needs to correctly set the environment variables `PYSPARK_PYTHON` and `PYSPARK_DRIVER_PYTHON`, which point to Python executables of the desired version. Detailed information can be found in the setup instruction document [SETUP.md](../../SETUP.md).
```
COL_USER = "UserId"
COL_ITEM = "MovieId"
COL_RATING = "Rating"
COL_PREDICTION = "Rating"

HEADER = {
    "col_user": COL_USER,
    "col_item": COL_ITEM,
    "col_rating": COL_RATING,
    "col_prediction": COL_PREDICTION,
}
```

## 1 Prepare data

### 1.1 Prepare dummy data

For illustration purposes, a dummy data set is created to demonstrate how different evaluation metrics work. The data has the schema frequently found in recommendation problems: each row in the dataset is a (user, item, rating) tuple, where "rating" can be an ordinal rating score (e.g., discrete integers 1, 2, 3, etc.) or a numerical float that quantitatively indicates the preference of the user towards that item. For simplicity, the rating column in the dummy dataset used in this example represents ordinal ratings.

```
df_true = pd.DataFrame(
    {
        COL_USER: [1, 1, 1, 2, 2, 2, 2, 2, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3],
        COL_ITEM: [1, 2, 3, 1, 4, 5, 6, 7, 2, 5, 6, 8, 9, 10, 11, 12, 13, 14],
        COL_RATING: [5, 4, 3, 5, 5, 3, 3, 1, 5, 5, 5, 4, 4, 3, 3, 3, 2, 1],
    }
)

df_pred = pd.DataFrame(
    {
        COL_USER: [1, 1, 1, 2, 2, 2, 2, 2, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3],
        COL_ITEM: [3, 10, 12, 10, 3, 5, 11, 13, 4, 10, 7, 13, 1, 3, 5, 2, 11, 14],
        COL_PREDICTION: [14, 13, 12, 14, 13, 12, 11, 10, 14, 13, 12, 11, 10, 9, 8, 7, 6, 5]
    }
)
```

Take a look at the ratings of the user with ID "1" in the dummy dataset.

```
df_true[df_true[COL_USER] == 1]

df_pred[df_pred[COL_USER] == 1]
```

### 1.2 Prepare Spark data

The Spark framework is sometimes used to evaluate metrics on datasets that are hard to fit into memory. In our example, Spark DataFrames can be created from the Python dummy dataset.
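Before moving to Spark, the idea behind the rating metrics of section 2.1 can be sanity-checked directly in pandas; a sketch on a small made-up frame (same schema as `df_true`/`df_pred` above, but not the same data) that joins only the (user, item) pairs present in both:

```python
import numpy as np
import pandas as pd

# Tiny made-up ground truth and predictions (same schema as df_true / df_pred)
truth = pd.DataFrame({"UserId": [1, 1, 2], "MovieId": [1, 2, 1], "Rating": [5.0, 3.0, 4.0]})
preds = pd.DataFrame({"UserId": [1, 1, 2], "MovieId": [1, 2, 1], "Rating": [4.0, 3.0, 2.0]})

# Rating metrics are computed only on (user, item) pairs present in both frames
merged = truth.merge(preds, on=["UserId", "MovieId"], suffixes=("_true", "_pred"))
err = merged["Rating_true"] - merged["Rating_pred"]
rmse = float(np.sqrt((err ** 2).mean()))  # root mean square error
mae = float(err.abs().mean())             # mean absolute error
```

The inner join mirrors the behavior described in section 2.1: pairs missing from either frame simply do not contribute to the metric.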
```
spark = start_or_get_spark("EvaluationTesting", "local")

dfs_true = spark.createDataFrame(df_true)
dfs_pred = spark.createDataFrame(df_pred)

dfs_true.filter(dfs_true[COL_USER] == 1).show()

dfs_pred.filter(dfs_pred[COL_USER] == 1).show()
```

## 2 Evaluation metrics

### 2.1 Rating metrics

Rating metrics are similar to the regression metrics used for evaluating a regression model that predicts numerical values given input observations. In the context of recommendation systems, rating metrics evaluate how accurately a recommender predicts the ratings that users may give to items. Therefore, the metrics are **calculated on exactly the same group of (user, item) pairs that exist in both the ground-truth dataset and the prediction dataset** and **averaged over the total number of users**.

#### 2.1.1 Use cases

Rating metrics are effective in measuring model accuracy. However, in some cases the rating metrics are of limited use:

* **The recommender is to predict a ranking instead of explicit ratings.** For example, if the consumer of the recommender cares about the ranked list of recommended items, rating metrics do not apply directly. Usually a relevancy function such as top-k is applied to generate the ranked list from the predicted ratings, so that the recommender can be evaluated with other metrics.
* **The recommender is to generate recommendation scores that have a different scale from the original ratings (e.g., the SAR algorithm).** In this case, the difference between the generated scores and the original ratings is not valid for measuring the accuracy of the model.

#### 2.1.2 How-to with the evaluation utilities

A few notes about the interface of the rating evaluator class:

1. The columns of user, item, and rating (prediction) should be present in the ground-truth DataFrame (prediction DataFrame).
2. There should be no duplicates of (user, item) pairs in the ground-truth and prediction DataFrames, otherwise there may be unexpected behavior in calculating certain metrics.
3. Default column names for user, item, rating, and prediction are "UserId", "ItemId", "Rating", and "Prediction", respectively.

In our examples below, to calculate rating metrics for input data frames in Spark, a Spark object, `SparkRatingEvaluation`, is initialized. The input data schemas for the ground-truth dataset and the prediction dataset are:

* Ground-truth dataset.

|Column|Data type|Description|
|-------------|------------|-------------|
|`COL_USER`|<int\>|User ID|
|`COL_ITEM`|<int\>|Item ID|
|`COL_RATING`|<float\>|Rating or numerical value of user preference.|

* Prediction dataset.

|Column|Data type|Description|
|-------------|------------|-------------|
|`COL_USER`|<int\>|User ID|
|`COL_ITEM`|<int\>|Item ID|
|`COL_RATING`|<float\>|Predicted rating or numerical value of user preference.|

```
spark_rate_eval = SparkRatingEvaluation(dfs_true, dfs_pred, **HEADER)
```

#### 2.1.3 Root Mean Square Error (RMSE)

RMSE measures the accuracy of rating predictions. It is the most widely used metric for evaluating a recommendation algorithm that predicts missing ratings. Its benefit is that it is easy to explain and calculate.

```
print("The RMSE is {}".format(spark_rate_eval.rmse()))
```

#### 2.1.4 R Squared (R2)

R2 is also called the "coefficient of determination" in some contexts. It is a metric that evaluates how well a regression model performs, based on the proportion of total variation of the observed results that the model explains.

```
print("The R2 is {}".format(spark_rate_eval.rsquared()))
```

#### 2.1.5 Mean Absolute Error (MAE)

MAE measures the accuracy of prediction. It computes the metric value from ground truth and predictions on the same scale. Compared to RMSE, MAE is more explainable.

```
print("The MAE is {}".format(spark_rate_eval.mae()))
```

#### 2.1.6 Explained Variance

Explained variance is usually used to measure how well a model performs with regard to the variation in the dataset.
```
print("The explained variance is {}".format(spark_rate_eval.exp_var()))
```

#### 2.1.7 Summary

|Metric|Range|Selection criteria|Limitation|Reference|
|------|-------------------------------|---------|----------|---------|
|RMSE|$> 0$|The smaller the better.|May be biased, and less interpretable than MAE.|[link](https://en.wikipedia.org/wiki/Root-mean-square_deviation)|
|R2|$\leq 1$|The closer to $1$ the better.|Depends on variable distributions.|[link](https://en.wikipedia.org/wiki/Coefficient_of_determination)|
|MAE|$\geq 0$|The smaller the better.|Dependent on variable scale.|[link](https://en.wikipedia.org/wiki/Mean_absolute_error)|
|Explained variance|$\leq 1$|The closer to $1$ the better.|Depends on variable distributions.|[link](https://en.wikipedia.org/wiki/Explained_variation)|

### 2.2 Ranking metrics

"Beyond-accuracy evaluation" was proposed to evaluate how relevant recommendations are for users. In this case, a recommendation system is treated as a ranking system. Given a relevancy definition, the recommendation system outputs a list of recommended items for each user, ordered by relevance. The evaluation takes as inputs the ground-truth data, i.e., the items that users actually interacted with (e.g., liked, purchased, etc.), and the recommendation data, and calculates ranking evaluation metrics.

#### 2.2.1 Use cases

Ranking metrics are often used when hits and/or the ranking of the items are considered:

* **Hit** - defined by relevancy, a hit usually means that one of the recommended "k" items is among the items "relevant" to the user. For example, a user may have clicked, viewed, or purchased an item many times, and a hit in the recommended items indicates that the recommender performs well. Metrics like "precision", "recall", etc. measure this hitting accuracy.
* **Ranking** - ranking metrics go further and ask whether the hit items are ranked in the way the users they are recommended to would prefer.
Metrics like "mean average precision" and "NDCG" evaluate whether the relevant items are ranked higher than the less-relevant or irrelevant items.

#### 2.2.2 How-to with evaluation utilities

A few notes about the interface of the ranking evaluator class:

1. The columns for user, item, and rating (prediction) should be present in the ground-truth DataFrame (prediction DataFrame). The timestamp column is optional, but it is required if certain relevancy functions are used; for example, timestamps are needed if the most recent items are defined as the relevant ones.
2. There should be no duplicate (user, item) pairs in the ground-truth and prediction DataFrames; otherwise there may be unexpected behavior in calculating certain metrics.
3. Default column names for user, item, rating, and prediction are "UserId", "ItemId", "Rating", and "Prediction", respectively.

#### 2.2.3 Relevancy of recommendation

Relevancy of recommendation can be measured in different ways:

* **By ranking** - relevant items in the recommendations are defined as the top ranked items, i.e., the top k items taken from the list of recommended items ordered by the predicted ratings (or other numerical scores that indicate the preference of a user for an item).
* **By timestamp** - relevant items are defined as the most recently viewed k items, obtained from the recommended items ranked by timestamp.
* **By rating** - relevant items are defined as items with ratings (or other numerical preference scores) above a given threshold.

Similarly, a ranking metric object can be initialized as below. The input data schema is

* Ground-truth dataset.

|Column|Data type|Description|
|-------------|------------|-------------|
|`COL_USER`|<int\>|User ID|
|`COL_ITEM`|<int\>|Item ID|
|`COL_RATING`|<float\>|Rating or numerical value of user preference.|
|`COL_TIMESTAMP`|<string\>|Timestamps.|

* Prediction dataset.
|Column|Data type|Description|
|-------------|------------|-------------|
|`COL_USER`|<int\>|User ID|
|`COL_ITEM`|<int\>|Item ID|
|`COL_RATING`|<float\>|Predicted rating or numerical value of user preference.|
|`COL_TIMESTAMP`|<string\>|Timestamps.|

In this case, in addition to the input datasets, there are other arguments used for calculating the ranking metrics:

|Argument|Data type|Description|
|------------|------------|--------------|
|`k`|<int\>|Number of items recommended to the user.|
|`relevancy_method`|<string\>|Method that extracts the relevant items from the recommendation list.|

For example, the following code initializes a ranking metric object that calculates the metrics.

```
spark_rank_eval = SparkRankingEvaluation(dfs_true, dfs_pred, k=3, relevancy_method="top_k", **HEADER)
```

A few ranking metrics can then be calculated.

#### 2.2.4 Precision

Precision@k evaluates how many items in the recommendation list are relevant (hit) in the ground-truth data. For each user the precision score is normalized by `k`, and then the overall precision scores are averaged over the total number of users. Note that the value of this metric depends directly on the choice of `k`.

```
print("The precision at k is {}".format(spark_rank_eval.precision_at_k()))
```

#### 2.2.5 Recall

Recall@k evaluates how many of the relevant items in the ground-truth data appear in the recommendation list. For each user the recall score is normalized by the total number of ground-truth items, and then the overall recall scores are averaged over the total number of users.

```
print("The recall at k is {}".format(spark_rank_eval.recall_at_k()))
```

#### 2.2.6 Normalized Discounted Cumulative Gain (NDCG)

NDCG evaluates how well the recommender ranks the items it recommends to users: both hitting the relevant items and ranking them correctly matter to the NDCG evaluation.
The overall NDCG score is normalized by the total number of users.

```
print("The ndcg at k is {}".format(spark_rank_eval.ndcg_at_k()))
```

#### 2.2.7 Mean Average Precision (MAP)

MAP evaluates the average precision for each user in the dataset, and also penalizes incorrect ranking of the recommended items. The overall MAP score is normalized by the total number of users.

```
print("The map at k is {}".format(spark_rank_eval.map_at_k()))
```

#### 2.2.8 ROC and AUC

ROC, together with AUC, is a well-known metric for evaluating binary classification problems. It applies similarly to recommendation algorithms with binary ratings, where the "hit" accuracy on the relevant items measures the recommender's performance. To demonstrate the evaluation method, the original testing data is manipulated so that the ratings in the testing data become binary scores, while the predictions are scaled into the $[0, 1]$ interval.

```
# Convert the original ratings to 0 and 1.
df_true_bin = df_true.copy()
df_true_bin[COL_RATING] = df_true_bin[COL_RATING].apply(lambda x: 1 if x > 3 else 0)
df_true_bin

# Convert the predicted ratings into a [0, 1] scale.
df_pred_bin = df_pred.copy()
df_pred_bin[COL_PREDICTION] = minmax_scale(df_pred_bin[COL_PREDICTION].astype(float))
df_pred_bin

# Calculate the AUC metric; the prediction column is the scaled COL_PREDICTION
# column populated above.
auc_score = auc(
    df_true_bin,
    df_pred_bin,
    col_user=COL_USER,
    col_item=COL_ITEM,
    col_rating=COL_RATING,
    col_prediction=COL_PREDICTION
)

print("The auc score is {}".format(auc_score))
```

It is worth mentioning that some literature describes variants of the original AUC metric that account for **the number of recommended items (k)** or for **grouping effects among users (compute AUC for each user group and average across groups)**. These variants suit different scenarios, and choosing an appropriate one depends on the context of the use case.
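The AUC above also has a simple pairwise interpretation: it is the probability that a randomly chosen relevant (positive) item receives a higher score than a randomly chosen irrelevant (negative) one. Below is a minimal pure-Python sketch of that interpretation; the helper name and the toy labels/scores are made up for illustration and are not part of the evaluation utilities used in this notebook.

```python
def auc_pairwise(y_true, y_score):
    """AUC as the probability that a randomly chosen positive example
    is scored above a randomly chosen negative one (ties count 1/2)."""
    pos = [s for y, s in zip(y_true, y_score) if y == 1]
    neg = [s for y, s in zip(y_true, y_score) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Binarized ratings (e.g., rating > 3) vs. min-max scaled predictions:
print(auc_pairwise([1, 0, 1, 1, 0, 0], [0.9, 0.2, 0.8, 0.4, 0.3, 0.6]))  # 8/9 ≈ 0.889
```

The scale invariance visible here (only the ordering of scores matters) is also why AUC remains usable even when prediction scores are on a different scale from the original ratings.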
#### 2.2.9 Logistic loss

Logistic loss (sometimes simply called logloss, or cross-entropy loss) is another useful metric for evaluating hit accuracy. It is defined as the negative log-likelihood of the true labels given a classifier's predictions.

```
# Calculate the logloss metric; as above, the prediction column is COL_PREDICTION.
logloss_score = logloss(
    df_true_bin,
    df_pred_bin,
    col_user=COL_USER,
    col_item=COL_ITEM,
    col_rating=COL_RATING,
    col_prediction=COL_PREDICTION
)

print("The logloss score is {}".format(logloss_score))
```

It is worth noting that logloss can be sensitive to the class balance of a dataset, as it heavily penalizes classifiers that are confident about incorrect classifications. To demonstrate, the ground-truth testing data is deliberately manipulated to unbalance the binary labels. For example, the following binarizes the original rating data with a lower threshold, i.e., 2, to create more positive feedback from the user.

```
df_true_bin_pos = df_true.copy()
df_true_bin_pos[COL_RATING] = df_true_bin_pos[COL_RATING].apply(lambda x: 1 if x > 2 else 0)
df_true_bin_pos
```

With a threshold of 2, the labels in the ground-truth data are not balanced; the ratio of label 1 to label 0 is

```
one_zero_ratio = df_true_bin_pos[COL_RATING].sum() / (df_true_bin_pos.shape[0] - df_true_bin_pos[COL_RATING].sum())

print('The ratio between label 1 and label 0 is {}'.format(one_zero_ratio))
```

Another prediction dataset is also created, in which the probabilities for label 1 and label 0 are fixed. Without loss of generality, the probability of predicting 1 is 0.6. The dataset is deliberately constructed so that the precision is 100% given a cut-off of 0.5.

```
prob_true = 0.6

df_pred_bin_pos = df_true_bin_pos.copy()
df_pred_bin_pos[COL_PREDICTION] = df_pred_bin_pos[COL_RATING].apply(lambda x: prob_true if x == 1 else 1 - prob_true)
df_pred_bin_pos
```

Then the logloss is calculated as follows.
```
# Calculate the logloss metric on the unbalanced data.
logloss_score_pos = logloss(
    df_true_bin_pos,
    df_pred_bin_pos,
    col_user=COL_USER,
    col_item=COL_ITEM,
    col_rating=COL_RATING,
    col_prediction=COL_PREDICTION
)

print("The logloss score is {}".format(logloss_score_pos))
```

For comparison, a similar process is used with a threshold value of 3 to create a more balanced dataset, and another prediction dataset is built from it. Again, the probabilities of predicting label 1 and label 0 are fixed at 0.6 and 0.4, respectively. **NOTE**: as above, this prediction also gives 100% precision; the only difference is the proportion of binary labels.

```
prob_true = 0.6

df_pred_bin_balanced = df_true_bin.copy()
df_pred_bin_balanced[COL_PREDICTION] = df_pred_bin_balanced[COL_RATING].apply(lambda x: prob_true if x == 1 else 1 - prob_true)
df_pred_bin_balanced
```

The ratio of label 1 to label 0 is

```
one_zero_ratio = df_true_bin[COL_RATING].sum() / (df_true_bin.shape[0] - df_true_bin[COL_RATING].sum())

print('The ratio between label 1 and label 0 is {}'.format(one_zero_ratio))
```

It is perfectly balanced. Applying the logloss function to the balanced data gives a more promising result, as shown below.

```
# Calculate the logloss metric on the balanced data.
logloss_score = logloss(
    df_true_bin,
    df_pred_bin_balanced,
    col_user=COL_USER,
    col_item=COL_ITEM,
    col_rating=COL_RATING,
    col_prediction=COL_PREDICTION
)

print("The logloss score is {}".format(logloss_score))
```

The score is closer to 0, which by definition means these predictions are better than the previous ones, where the binary labels were more biased.
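To make the definition above concrete, here is a minimal pure-Python version of logloss (an illustrative helper, not the `logloss` utility used in this notebook), showing the heavy penalty for confident but wrong predictions mentioned earlier:

```python
import math

def logloss_toy(y_true, p_pred, eps=1e-15):
    """Mean negative log-likelihood of binary labels under predicted probabilities.
    Toy implementation for illustration only."""
    total = 0.0
    for y, p in zip(y_true, p_pred):
        p = min(max(p, eps), 1 - eps)  # clip so log() never sees 0 or 1
        total += -(y * math.log(p) + (1 - y) * math.log(1 - p))
    return total / len(y_true)

print(logloss_toy([1], [0.9]))  # confident and right: ≈ 0.105
print(logloss_toy([1], [0.1]))  # confident and wrong: ≈ 2.303
```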
#### 2.2.10 Summary

|Metric|Range|Selection criteria|Limitation|Reference|
|------|-------------------------------|---------|----------|---------|
|Precision|$\geq 0$ and $\leq 1$|The closer to $1$ the better.|Only accounts for hits in the recommendations.|[link](https://spark.apache.org/docs/2.3.0/mllib-evaluation-metrics.html#ranking-systems)|
|Recall|$\geq 0$ and $\leq 1$|The closer to $1$ the better.|Only accounts for hits in the ground truth.|[link](https://en.wikipedia.org/wiki/Precision_and_recall)|
|NDCG|$\geq 0$ and $\leq 1$|The closer to $1$ the better.|Does not penalize bad/missing items, and does not differentiate between several equally good items.|[link](https://spark.apache.org/docs/2.3.0/mllib-evaluation-metrics.html#ranking-systems)|
|MAP|$\geq 0$ and $\leq 1$|The closer to $1$ the better.|Depends on variable distributions.|[link](https://spark.apache.org/docs/2.3.0/mllib-evaluation-metrics.html#ranking-systems)|
|AUC|$\geq 0$ and $\leq 1$|The closer to $1$ the better; $0.5$ indicates an uninformative classifier.|Depends on the number of recommended items (k).|[link](https://en.wikipedia.org/wiki/Receiver_operating_characteristic#Area_under_the_curve)|
|Logloss|$0$ to $\infty$|The closer to $0$ the better.|Can be sensitive to imbalanced datasets.|[link](https://en.wikipedia.org/wiki/Cross_entropy#Relation_to_log-likelihood)|

```
# Clean up the Spark instance.
spark.stop()
```

## References

1. Guy Shani and Asela Gunawardana, "Evaluating Recommendation Systems", Recommender Systems Handbook, Springer, 2015.
2. PySpark MLlib evaluation metrics: https://spark.apache.org/docs/2.3.0/mllib-evaluation-metrics.html.
3. Dimitris Paraschakis et al., "Comparative Evaluation of Top-N Recommenders in e-Commerce: An Industrial Perspective", IEEE ICMLA, 2015, Miami, FL, USA.
4. Yehuda Koren and Robert Bell, "Advances in Collaborative Filtering", Recommender Systems Handbook, Springer, 2015.
5. Chris Bishop, "Pattern Recognition and Machine Learning", Springer, 2006.
# Advanced Seq2Seq Modeling

# Problem

Build a model to help pronounce English words. We'll convert English words into [Arpabet](https://en.wikipedia.org/wiki/Arpabet) phonemes.

@sunilmallya: refer to https://www.twitch.tv/videos/171226133 for the live instructions

## Dataset

http://svn.code.sf.net/p/cmusphinx/code/trunk/cmudict/

```
# Load data
data = open('cmudict-0.7b', 'r').readlines()
phone_types = open('cmudict-0.7b.phones', 'r').readlines()  # phone -> articulation class
symbols = open('cmudict-0.7b.symbols', 'r').readlines()     # list of all phoneme symbols

words = []
phones = []

def f_char(word):
    # Skip dictionary entries containing special or non-ASCII characters
    for c in ["(", ".", "'", ")", "-", "_", "\xc0", "\xc9"]:
        if c in word:
            return True
    return False

for d in data:
    # cmudict separates the word from its phones with two spaces
    parts = d.strip('\n').split('  ')
    if not f_char(parts[0]):
        words.append(parts[0])
        phones.append(parts[1])

words[:5], phones[:5]
len(words), len(phones)

all_chars = set()
for word, phone in zip(words, phones):
    for c in word:
        all_chars.add(c)
    for p in phone.split(" "):
        all_chars.add(p)

print(all_chars)

# Create a map of symbols to numbers
symbol_set = list(all_chars)
symbol_set.append("+")  # add the padding symbol

# word to symbol indices
def word_to_symbol_index(word):
    return [symbol_set.index(char) for char in word]

# list of symbol indices to word
def symbol_index_to_word(indices):
    return [symbol_set[idx] for idx in indices]

# phone string to symbol indices
def phone_to_symbol_index(phone):
    return [symbol_set.index(p) for p in phone.split(" ")]

# list of symbol indices to phones
def psymbol_index_to_word(indices):
    return [symbol_set[idx] for idx in indices]

print(symbol_set)

# sample
indices = word_to_symbol_index("ARDBERG")
print(indices, symbol_index_to_word(indices))

indices = phone_to_symbol_index("AA1 B ER0 G")
print(indices, symbol_index_to_word(indices))

# Pad input and output data
input_sequence_length = max([len(w) for w in words])
output_sequence_length = max([len(p.split(' ')) for p in phones])
input_sequence_length, output_sequence_length

# input data
trainX = []
labels = []

def pad_string(word, max_len,
               pad_char="+"):
    # Left-pad `word` with `pad_char` up to `max_len`
    out = ''
    for _ in range(max_len - len(word)):
        out += pad_char
    return out + word

#for word in words:
#    padded_strng = "%*s" % (input_sequence_length, word)
#    trainX.append(word_to_symbol_index(padded_strng))

# output data
#for p in phones:
#    padded_strng = "%*s" % (output_sequence_length, p)
#    print(phone_to_symbol_index(padded_strng))

pad_string('EY2 EY1', output_sequence_length)

for word in words:
    padded_strng = pad_string(word, input_sequence_length)
    trainX.append(word_to_symbol_index(padded_strng))

# output labels
# TODO: Fix padding logic
labels = []
for p in phones:
    label = []
    for _ in range(output_sequence_length - len(p.split(' '))):
        label.append(phone_to_symbol_index('+')[0])
    label.extend(phone_to_symbol_index(p))
    labels.append(label)

len(labels), len(trainX)
trainX[0], labels[0]

print("INP: ", symbol_index_to_word(trainX[2]))
print("LBL: ", symbol_index_to_word(labels[2]))

import mxnet as mx
import numpy as np

def shuffle_together(a, b):
    assert len(a) == len(b)
    p = np.random.permutation(len(a))
    return a[p], b[p]

batch_size = 128
trainX, labels = np.array(trainX), np.array(labels)
trainX, labels = shuffle_together(trainX, labels)

N = int(len(trainX) * 0.9)  # 90% train split
dataX = np.array(trainX)[:N]
dataY = np.array(labels)[:N]
testX = np.array(trainX)[N:]
testY = np.array(labels)[N:]

print(dataX.shape, dataY.shape)
print(testX.shape, testY.shape)

## Let's define the iterators
train_iter = mx.io.NDArrayIter(data=dataX, label=dataY, data_name="data", label_name="target",
                               batch_size=batch_size, shuffle=True)
test_iter = mx.io.NDArrayIter(data=testX, label=testY, data_name="data", label_name="target",
                              batch_size=batch_size, shuffle=True)

print(train_iter.provide_data, train_iter.provide_label)

data_dim = len(symbol_set)
data = mx.sym.var('data')      # Shape: (N, T)
target = mx.sym.var('target')  # Shape: (N, T)

# 2-layer LSTM
# get_next_state=True returns the states so they can be used as starting states next time
lstm1 = mx.rnn.FusedRNNCell(num_hidden=128,
                            prefix="lstm1_", get_next_state=True)
lstm2 = mx.rnn.FusedRNNCell(num_hidden=128, prefix="lstm2_", get_next_state=False)

# In the layout, 'N' represents batch size, 'T' represents sequence length,
# and 'C' represents the number of dimensions in hidden states.

# One-hot encode the input
data_one_hot = mx.sym.one_hot(data, depth=data_dim)            # Shape: (N, T, C)
data_one_hot = mx.sym.transpose(data_one_hot, axes=(1, 0, 2))  # Shape: (T, N, C)

# Note that when unrolling, if merge_outputs=True, the outputs are merged into a single symbol

# Encoder (with repeat vector)
_, encode_state = lstm1.unroll(length=input_sequence_length, inputs=data_one_hot, layout="TNC")
encode_state_h = mx.sym.broadcast_to(encode_state[0],
                                     shape=(output_sequence_length, 0, 0))  # Shape: (T, N, C); use output seq length

# Decoder
decode_out, _ = lstm2.unroll(length=output_sequence_length, inputs=encode_state_h, layout="TNC")
decode_out = mx.sym.reshape(decode_out, shape=(-1, batch_size))

# Logits out
logits = mx.sym.FullyConnected(decode_out, num_hidden=data_dim, name="logits")
logits = mx.sym.reshape(logits, shape=(output_sequence_length, -1, data_dim))
logits = mx.sym.transpose(logits, axes=(1, 0, 2))

# Loss function: convert logits to softmax probabilities and take the mean negative log-likelihood
loss = mx.sym.mean(-mx.sym.pick(mx.sym.log_softmax(logits), target, axis=-1))
loss = mx.sym.make_loss(loss)

# visualize
#shape = {"data": (batch_size, dataX[0].shape[0])}
#mx.viz.plot_network(loss, shape=shape)

net = mx.mod.Module(symbol=loss,
                    data_names=['data'],
                    label_names=['target'],
                    context=mx.gpu())
net.bind(data_shapes=train_iter.provide_data, label_shapes=train_iter.provide_label)
net.init_params(initializer=mx.init.Xavier())
net.init_optimizer(optimizer="adam",
                   optimizer_params={'learning_rate': 1E-3, 'rescale_grad': 1.0},
                   kvstore=None)

# Keep a prediction network to see how we do on the test set
predict_net = mx.mod.Module(symbol=logits,
                            data_names=['data'],
                            label_names=None,
                            context=mx.gpu())
data_desc = train_iter.provide_data[0]
# shared_module
# = True: shares the same parameters and memory as the training network
predict_net.bind(data_shapes=[data_desc],
                 label_shapes=None,
                 for_training=False,
                 grad_req='null',
                 shared_module=net)

def predict(data_iter):
    data_iter.reset()
    corr = 0
    for i, data_batch in enumerate(data_iter):
        predict_net.forward(data_batch=data_batch)
        predictions = predict_net.get_outputs()[0].asnumpy()
        indices = np.argmax(predictions, axis=2)
        lbls = data_batch.label[0].asnumpy()
        results = (indices == lbls)
        for r in results:
            # Exact match only
            if np.sum(r) == output_sequence_length:
                corr += 1.0
            # total % match per sample
            #corr += (1.0 * np.sum(r) / output_sequence_length)
    return corr / data_iter.num_data

#test_iter.__dict__
predict(test_iter)

epochs = 200
total_batches = len(dataX) // batch_size
for epoch in range(epochs):
    avg_loss = 0
    train_iter.reset()
    for i, data_batch in enumerate(train_iter):
        net.forward_backward(data_batch=data_batch)
        loss = net.get_outputs()[0].asscalar()
        avg_loss += loss / total_batches
        net.update()
    test_acc = predict(test_iter)
    print('Epoch:', '%04d' % (epoch + 1), 'cost =', '{:.9f}'.format(avg_loss))
    print('Epoch:', '%04d' % (epoch + 1), 'test acc =', '{:.9f}'.format(test_acc))

# Save the model
prefix = 'pronounce128'
net.save_checkpoint(prefix, epochs)
#pred_model = mx.mod.Module.load(prefix, num_epoch)

# Test module
test_net = mx.mod.Module(symbol=logits,
                         data_names=['data'],
                         label_names=None,
                         context=mx.gpu())
data_desc = train_iter.provide_data[0]
# shared_module=True: shares the same parameters and memory as the training network
test_net.bind(data_shapes=[data_desc],
              label_shapes=None,
              for_training=False,
              grad_req='null',
              shared_module=net)

def print_word(arr):
    word_indices = symbol_index_to_word(arr)
    out = filter(lambda x: x != symbol_set[-1], word_indices)
    return "".join(out)

def print_phone(arr):
    word_indices = psymbol_index_to_word(arr)
    out = filter(lambda x: x != symbol_set[-1], word_indices)
    return " ".join(out)

testX, \
testY = trainX[0:10], labels[0:10]
#print(testX)

testX = [word_to_symbol_index(pad_string("SUNIL", input_sequence_length))]
testX += [word_to_symbol_index(pad_string("JOSEPH", input_sequence_length))]
testX += [word_to_symbol_index(pad_string("RANDALL", input_sequence_length))]
testX += [word_to_symbol_index(pad_string("SAUSALITO", input_sequence_length))]
testX += [word_to_symbol_index(pad_string("EMBARCADERO", input_sequence_length))]
testX += [word_to_symbol_index(pad_string("AMULYA", input_sequence_length))]
testX += [word_to_symbol_index(pad_string("TWITCH", input_sequence_length))]
testX += [word_to_symbol_index(pad_string("ALUMINUM", input_sequence_length))]
testX = np.array(testX, dtype=int)

test_net.reshape(data_shapes=[mx.io.DataDesc('data', (1, input_sequence_length))])
predictions = test_net.predict(mx.io.NDArrayIter(testX, batch_size=1)).asnumpy()

print("expression", "predicted", "actual")
for i, prediction in enumerate(predictions):
    #x_str = symbol_index_to_word(testX[i])
    word = print_word(testX[i])
    index = np.argmax(prediction, axis=1)
    result = print_phone(index)
    #result = [symbol_set[j] for j in index]
    print("%10s" % word, result)
    #label = [alphabet[j] for j in testY[i]]
    #print("".join(x_str), "".join(result), " ", "".join(label))
```
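As a standalone recap of the encode/decode plumbing above, here is the pad, index, decode round trip with a tiny made-up symbol set (the notebook's real `symbol_set` is built from the full CMU dictionary; the values below are purely illustrative):

```python
symbol_set = list("ABGR") + ["AA1", "ER0", "+"]  # "+" is the padding symbol

def pad_string(word, max_len, pad_char="+"):
    # Left-pad to a fixed length so every sequence has the same shape
    return pad_char * (max_len - len(word)) + word

def word_to_symbol_index(word):
    return [symbol_set.index(c) for c in word]

def symbol_index_to_word(indices):
    return [symbol_set[i] for i in indices]

indices = word_to_symbol_index(pad_string("GRAB", 6))
print(indices)  # [6, 6, 2, 3, 0, 1]
print("".join(c for c in symbol_index_to_word(indices) if c != "+"))  # GRAB
```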
# Structured and time series data

This notebook contains an implementation of the third place result in the Rossmann Kaggle competition, as detailed in Guo/Berkhahn's [Entity Embeddings of Categorical Variables](https://arxiv.org/abs/1604.06737).

The motivation behind exploring this architecture is its relevance to real-world applications. Most data used for day-to-day decision making in industry is structured and/or time-series data. Here we explore the end-to-end process of using neural networks with practical structured data problems.

```
%matplotlib inline
%reload_ext autoreload
%autoreload 2

from fastai.structured import *
from fastai.column_data import *
np.set_printoptions(threshold=50, edgeitems=20)

PATH='/home/paperspace/data/rossmann/'
```

## Create datasets

In addition to the provided data, we will be using external datasets put together by participants in the Kaggle competition. You can download all of them [here](http://files.fast.ai/part2/lesson14/rossmann.tgz).

For completeness, the implementation used to put them together is included below.
```
def concat_csvs(dirname):
    path = f'{PATH}{dirname}'
    filenames = glob.glob(f"{path}/*.csv")

    wrote_header = False
    with open(f"{path}.csv", "w") as outputfile:
        for filename in filenames:
            name = filename.split(".")[0]
            with open(filename) as f:
                line = f.readline()
                if not wrote_header:
                    wrote_header = True
                    outputfile.write("file," + line)
                for line in f:
                    outputfile.write(name + "," + line)
                outputfile.write("\n")

# concat_csvs('googletrend')
# concat_csvs('weather')
```

Feature Space:

* train: training set provided by the competition
* store: list of stores
* store_states: mapping of store to the German state it is in
* state_names: list of German state names
* googletrend: trend of certain Google keywords over time, found by users to correlate well with the given data
* weather: weather
* test: testing set

```
table_names = ['train', 'store', 'store_states', 'state_names',
               'googletrend', 'weather', 'test']
```

We'll be using the popular data manipulation framework `pandas`. Among other things, pandas allows you to manipulate tables/data frames in Python as one would in a database.

We're going to go ahead and load all of our CSVs as dataframes into the list `tables`.

```
tables = [pd.read_csv(f'{PATH}{fname}.csv', low_memory=False) for fname in table_names]
tables
np.shape(tables)

from IPython.display import HTML
```

We can use `head()` to get a quick look at the contents of each table:

* train: contains store information on a daily basis, tracking things like sales, customers, whether that day was a holiday, etc.
* store: general info about the store, including competition, etc.
* store_states: maps store to the state it is in
* state_names: maps state abbreviations to names
* googletrend: trend data for a particular week/state
* weather: weather conditions for each state
* test: same as the training table, without sales and customers

```
for t in tables:
    display(t.head())
```

This is very representative of a typical industry dataset.
The following returns summarized aggregate information for each table across each field.

```
for t in tables:
    display(DataFrameSummary(t).summary())
```

## Data Cleaning / Feature Engineering

As a structured data problem, we necessarily have to go through all the cleaning and feature engineering steps, even though we're using a neural network.

```
nutables = tables.copy()
train, store, store_states, state_names, googletrend, weather, test = nutables

len(train), len(test)
train.StateHoliday.head()
```

We turn state holidays into booleans, to make them more convenient for modeling. We can do calculations on pandas fields using notation very similar (often identical) to numpy.

```
train.StateHoliday = train.StateHoliday != '0'
test.StateHoliday = test.StateHoliday != '0'
```

`join_df` is a function for joining tables on specific fields. By default, we'll be doing a left outer join of `right` on the `left` argument using the given fields for each table.

Pandas does joins using the `merge` method. The `suffixes` argument describes the naming convention for duplicate fields. We've elected to leave the duplicate field names on the left untouched, and append a "\_y" to those on the right.

```
def join_df(left, right, left_on, right_on=None, suffix='_y'):
    if right_on is None:
        right_on = left_on
    return left.merge(right, how='left', left_on=left_on, right_on=right_on,
                      suffixes=("", suffix))
```

Join weather/state names.

```
weather = join_df(weather, state_names, "file", "StateName")
weather.head()
```

In pandas you can add new columns to a dataframe by simply defining them. We'll do this for googletrend by extracting dates and state names from the given data and adding those columns.

We're also going to replace all instances of state name 'NI' to match the usage in the rest of the data: 'HB,NI'. This is a good opportunity to highlight pandas indexing. We can use `.loc[rows, cols]` to select a list of rows and a list of columns from the dataframe.
In this case, we're selecting rows with state name 'NI' using the boolean list `googletrend.State=='NI'`, and selecting the "State" column.

```
googletrend.head()

googletrend['Date'] = googletrend.week.str.split(' - ', expand=True)[0]
googletrend.head()

googletrend['State'] = googletrend.file.str.split('_', expand=True)[2]
googletrend.head()

googletrend.loc[googletrend.State=='NI', "State"] = 'HB,NI'
googletrend.head()
```

The following extracts particular date fields from a complete datetime for the purpose of constructing categoricals. You should *always* consider this feature extraction step when working with date-time data. Without expanding your date-time into these additional fields, you can't capture any trend/cyclical behavior as a function of time at any of these granularities. We'll add these to every table with a date field.

```
add_datepart(weather, "Date", drop=False)
add_datepart(googletrend, "Date", drop=False)
add_datepart(train, "Date", drop=False)
add_datepart(test, "Date", drop=False)

googletrend.head()
```

The Google trends data has a special category for the whole of Germany - we'll pull that out so we can use it explicitly.

```
trend_de = googletrend[googletrend.file == 'Rossmann_DE']
trend_de.head()
```

Now we can outer join all of our data into a single dataframe. Recall that in outer joins, every time a value in the joining field on the left table does not have a corresponding value in the right table, the corresponding row in the new table has null values for all right table fields.

One way to check that all records are consistent and complete is to check for null values post-join, as we do here.

*Aside*: Why not just do an inner join? If you are assuming that all records are complete and match on the field you desire, an inner join will do the same thing as an outer join. However, in the event you are wrong or a mistake is made, an outer join followed by a null-check will catch it.
(Comparing before/after row counts for an inner join is equivalent, but requires keeping track of the before/after counts. An outer join is easier.)

```
store = join_df(store, store_states, "Store")
len(store[store.State.isnull()])

joined = join_df(train, store, "Store")
joined_test = join_df(test, store, "Store")
len(joined[joined.StoreType.isnull()]), len(joined_test[joined_test.StoreType.isnull()])

joined = join_df(joined, googletrend, ["State", "Year", "Week"])
joined_test = join_df(joined_test, googletrend, ["State", "Year", "Week"])
len(joined[joined.trend.isnull()]), len(joined_test[joined_test.trend.isnull()])

trend_de.head()
joined.head()

joined = joined.merge(trend_de, 'left', ["Year", "Week"], suffixes=('', '_DE'))
joined_test = joined_test.merge(trend_de, 'left', ["Year", "Week"], suffixes=('', '_DE'))
len(joined[joined.trend_DE.isnull()]), len(joined_test[joined_test.trend_DE.isnull()])

joined = join_df(joined, weather, ["State", "Date"])
joined_test = join_df(joined_test, weather, ["State", "Date"])
len(joined[joined.Mean_TemperatureC.isnull()]), len(joined_test[joined_test.Mean_TemperatureC.isnull()])

for df in (joined, joined_test):
    for c in df.columns:
        if c.endswith('_y'):
            if c in df.columns:
                df.drop(c, inplace=True, axis=1)
```

Next we'll fill in missing values to avoid complications with `NA`'s. `NA` (not available) is how pandas indicates missing values; many models have problems when missing values are present, so it's always important to think about how to deal with them. In these cases, we are picking an arbitrary *signal value* that doesn't otherwise appear in the data.
```
for df in (joined, joined_test):
    df['CompetitionOpenSinceYear'] = df.CompetitionOpenSinceYear.fillna(1900).astype(np.int32)
    df['CompetitionOpenSinceMonth'] = df.CompetitionOpenSinceMonth.fillna(1).astype(np.int32)
    df['Promo2SinceYear'] = df.Promo2SinceYear.fillna(1900).astype(np.int32)
    df['Promo2SinceWeek'] = df.Promo2SinceWeek.fillna(1).astype(np.int32)
```

Next we'll extract the features "CompetitionOpenSince" and "CompetitionDaysOpen". Note the use of `apply()` in mapping a function across dataframe values.

```
for df in (joined, joined_test):
    df["CompetitionOpenSince"] = pd.to_datetime(dict(year=df.CompetitionOpenSinceYear,
                                                     month=df.CompetitionOpenSinceMonth, day=15))
    df["CompetitionDaysOpen"] = df.Date.subtract(df.CompetitionOpenSince).dt.days
```

We'll replace some erroneous / outlying data.

```
for df in (joined, joined_test):
    df.loc[df.CompetitionDaysOpen < 0, "CompetitionDaysOpen"] = 0
    df.loc[df.CompetitionOpenSinceYear < 1990, "CompetitionDaysOpen"] = 0
```

We add a "CompetitionMonthsOpen" field, limiting the maximum to 2 years to limit the number of unique categories.

```
for df in (joined, joined_test):
    df["CompetitionMonthsOpen"] = df["CompetitionDaysOpen"] // 30
    df.loc[df.CompetitionMonthsOpen > 24, "CompetitionMonthsOpen"] = 24

joined.CompetitionMonthsOpen.unique()
```

Same process for Promo dates.
```
for df in (joined,joined_test):
    df["Promo2Since"] = pd.to_datetime(df.apply(lambda x: Week(
        x.Promo2SinceYear, x.Promo2SinceWeek).monday(), axis=1).astype(pd.datetime))
    df["Promo2Days"] = df.Date.subtract(df["Promo2Since"]).dt.days

for df in (joined,joined_test):
    df.loc[df.Promo2Days<0, "Promo2Days"] = 0
    df.loc[df.Promo2SinceYear<1990, "Promo2Days"] = 0
    df["Promo2Weeks"] = df["Promo2Days"]//7
    df.loc[df.Promo2Weeks<0, "Promo2Weeks"] = 0
    df.loc[df.Promo2Weeks>25, "Promo2Weeks"] = 25
    df.Promo2Weeks.unique()

joined.to_feather(f'{PATH}joined')
joined_test.to_feather(f'{PATH}joined_test')
```

## Durations

It is common when working with time series data to extract features that capture relationships across rows as opposed to columns, e.g.:
* Running averages
* Time until next event
* Time since last event

This is often difficult to do with most table manipulation frameworks, since they are designed to work with relationships across columns. As such, we've written helper functions to handle this type of data.

We'll define a function `get_elapsed` for cumulative counting across a sorted dataframe. Given a particular field `fld` to monitor, it tracks the time since the last occurrence of that field; whenever the field is seen again, the counter resets to zero. Until the field is first encountered, the uninitialized date (`NaT`) produces datetime NA's, which we'll zero out later. The counter is also reset every time a new store is seen. We'll see how to use this shortly.

```
def get_elapsed(fld, pre):
    day1 = np.timedelta64(1, 'D')
    last_date = np.datetime64()
    last_store = 0
    res = []
    for s,v,d in zip(df.Store.values,df[fld].values, df.Date.values):
        if s != last_store:
            last_date = np.datetime64()
            last_store = s
        if v: last_date = d
        res.append(((d-last_date).astype('timedelta64[D]') / day1).astype(int))
    df[pre+fld] = res
```

We'll be applying this to a subset of columns:

```
columns = ["Date", "Store", "Promo", "StateHoliday", "SchoolHoliday"]
df = train[columns]
df = test[columns]  # note: this overwrites the train subset; the pipeline below is run once per dataframe
```

Let's walk through an example.
Say we're looking at School Holiday. We'll first sort by Store, then Date, and then call `get_elapsed('SchoolHoliday', 'After')`. This will:
* run over every row of the dataframe in order of store and date
* add a column with the number of days since a School Holiday was last seen
* count the days *until* the next holiday instead, if we sort in the other direction.

```
fld = 'SchoolHoliday'
df = df.sort_values(['Store', 'Date'])
get_elapsed(fld, 'After')
df = df.sort_values(['Store', 'Date'], ascending=[True, False])
get_elapsed(fld, 'Before')
```

We'll do this for two more fields.

```
fld = 'StateHoliday'
df = df.sort_values(['Store', 'Date'])
get_elapsed(fld, 'After')
df = df.sort_values(['Store', 'Date'], ascending=[True, False])
get_elapsed(fld, 'Before')

fld = 'Promo'
df = df.sort_values(['Store', 'Date'])
get_elapsed(fld, 'After')
df = df.sort_values(['Store', 'Date'], ascending=[True, False])
get_elapsed(fld, 'Before')
```

We're going to set the active index to Date.

```
df = df.set_index("Date")
```

Then set null values from elapsed field calculations to 0.

```
columns = ['SchoolHoliday', 'StateHoliday', 'Promo']

for o in ['Before', 'After']:
    for p in columns:
        a = o+p
        df[a] = df[a].fillna(0)
```

Next we'll demonstrate window functions in pandas to calculate rolling quantities. Here we're sorting by date (`sort_index()`) and counting the number of events of interest (`sum()`) defined in `columns` in the following week (`rolling()`), grouped by Store (`groupby()`). We do the same in the opposite direction.

```
bwd = df[['Store']+columns].sort_index().groupby("Store").rolling(7, min_periods=1).sum()
fwd = df[['Store']+columns].sort_index(ascending=False).groupby("Store").rolling(7, min_periods=1).sum()
```

Next we want to drop the Store indices grouped together in the window function. Often in pandas, there is an option to do this in place, which is time and memory efficient when working with large datasets.
```
bwd.drop('Store',1,inplace=True)
bwd.reset_index(inplace=True)

fwd.drop('Store',1,inplace=True)
fwd.reset_index(inplace=True)

df.reset_index(inplace=True)
```

Now we'll merge these values onto the df.

```
df = df.merge(bwd, 'left', ['Date', 'Store'], suffixes=['', '_bw'])
df = df.merge(fwd, 'left', ['Date', 'Store'], suffixes=['', '_fw'])
df.drop(columns,1,inplace=True)
df.head()
```

It's usually a good idea to back up large tables of extracted / wrangled features before you join them onto another one; that way you can easily go back to them if you need to make changes.

```
df.to_feather(f'{PATH}df')
df = pd.read_feather(f'{PATH}df')
df["Date"] = pd.to_datetime(df.Date)
df.columns

joined = join_df(joined, df, ['Store', 'Date'])
joined_test = join_df(joined_test, df, ['Store', 'Date'])
```

The authors also removed all instances where the store had zero sales / was closed. We speculate that this may have cost them a higher standing in the competition. One reason this may be the case is that a little exploratory data analysis reveals that there are often periods where stores are closed, typically for refurbishment. Before and after these periods, one naturally expects spikes in sales. By omitting this data from their training, the authors gave up the ability to leverage information about these periods to predict this otherwise volatile behavior.

```
joined = joined[joined.Sales!=0]
```

We'll back this up as well.

```
joined.reset_index(inplace=True)
joined_test.reset_index(inplace=True)

joined.to_feather(f'{PATH}joined')
joined_test.to_feather(f'{PATH}joined_test')
```

We now have our final set of engineered features. While these steps were explicitly outlined in the paper, they are all fairly typical feature engineering steps for dealing with time series data and are practical in any similar setting.
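As a sanity check on the elapsed-time logic from the Durations section, here is a self-contained toy run of the same idea (the function below is a parameterized variant of `get_elapsed` that takes `df` as an argument instead of using a global; the tiny dataframe is invented for illustration):

```
import numpy as np
import pandas as pd

def get_elapsed_toy(df, fld, pre):
    # days since the last row where `fld` was truthy, tracked per store
    day1 = np.timedelta64(1, 'D')
    last_date = np.datetime64()          # NaT until the field is first seen
    last_store = 0
    res = []
    for s, v, d in zip(df.Store.values, df[fld].values, df.Date.values):
        if s != last_store:
            last_date = np.datetime64()  # reset when a new store starts
            last_store = s
        if v: last_date = d
        res.append(((d - last_date).astype('timedelta64[D]') / day1).astype(int))
    df[pre + fld] = res

toy = pd.DataFrame({
    'Store': [1, 1, 1, 1],
    'Date': pd.to_datetime(['2015-01-01', '2015-01-02', '2015-01-03', '2015-01-04']),
    'Promo': [1, 0, 0, 1],
})
get_elapsed_toy(toy, 'Promo', 'After')
print(toy.AfterPromo.tolist())  # [0, 1, 2, 0] -- days since the last promo
```

Sorting the frame in descending date order before the call would instead give days *until* the next promo, which is how the 'Before' columns above are produced.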
## Create features ``` joined = pd.read_feather(f'{PATH}joined') joined_test = pd.read_feather(f'{PATH}joined_test') joined.head().T.head(40) ``` Now that we've engineered all our features, we need to convert to input compatible with a neural network. This includes converting categorical variables into contiguous integers or one-hot encodings, normalizing continuous features to standard normal, etc... ``` cat_vars = ['Store', 'DayOfWeek', 'Year', 'Month', 'Day', 'StateHoliday', 'CompetitionMonthsOpen', 'Promo2Weeks', 'StoreType', 'Assortment', 'PromoInterval', 'CompetitionOpenSinceYear', 'Promo2SinceYear', 'State', 'Week', 'Events', 'Promo_fw', 'Promo_bw', 'StateHoliday_fw', 'StateHoliday_bw', 'SchoolHoliday_fw', 'SchoolHoliday_bw'] contin_vars = ['CompetitionDistance', 'Max_TemperatureC', 'Mean_TemperatureC', 'Min_TemperatureC', 'Max_Humidity', 'Mean_Humidity', 'Min_Humidity', 'Max_Wind_SpeedKm_h', 'Mean_Wind_SpeedKm_h', 'CloudCover', 'trend', 'trend_DE', 'AfterStateHoliday', 'BeforeStateHoliday', 'Promo', 'SchoolHoliday'] n = len(joined); n dep = 'Sales' joined = joined[cat_vars+contin_vars+[dep, 'Date']].copy() joined_test[dep] = 0 joined_test = joined_test[cat_vars+contin_vars+[dep, 'Date', 'Id']].copy() for v in cat_vars: joined[v] = joined[v].astype('category').cat.as_ordered() apply_cats(joined_test, joined) for v in contin_vars: joined[v] = joined[v].astype('float32') joined_test[v] = joined_test[v].astype('float32') ``` We're going to run on a sample. ``` idxs = get_cv_idxs(n, val_pct=150000/n) joined_samp = joined.iloc[idxs].set_index("Date") samp_size = len(joined_samp); samp_size ``` To run on the full dataset, use this instead: ``` samp_size = n joined_samp = joined.set_index("Date") ``` We can now process our data... 
```
joined_samp.head(2)

df, y, nas, mapper = proc_df(joined_samp, 'Sales', do_scale=True)
yl = np.log(y)

joined_test = joined_test.set_index("Date")

df_test, _, nas, mapper = proc_df(joined_test, 'Sales', do_scale=True, skip_flds=['Id'],
                                  mapper=mapper, na_dict=nas)
df.head(2)
```

In time series data, cross-validation is not random. Instead, our holdout data is generally the most recent data, as it would be in a real application. This issue is discussed in detail in [this post](http://www.fast.ai/2017/11/13/validation-sets/) on our web site. One approach is to take the last 25% of rows (sorted by date) as our validation set.

```
train_ratio = 0.75
# train_ratio = 0.9
train_size = int(samp_size * train_ratio); train_size
val_idx = list(range(train_size, len(df)))
```

An even better option for picking a validation set is to use a time period of exactly the same length as the test set; this is implemented here:

```
val_idx = np.flatnonzero(
    (df.index<=datetime.datetime(2014,9,17)) & (df.index>=datetime.datetime(2014,8,1)))

val_idx=[0]
```

## DL

We're ready to put together our models. Root-mean-squared percent error is the metric Kaggle used for this competition.

```
def inv_y(a): return np.exp(a)

def exp_rmspe(y_pred, targ):
    targ = inv_y(targ)
    pct_var = (targ - inv_y(y_pred))/targ
    return math.sqrt((pct_var**2).mean())

max_log_y = np.max(yl)
y_range = (0, max_log_y*1.2)
```

We can create a ModelData object directly from our data frame.

```
md = ColumnarModelData.from_data_frame(PATH, val_idx, df, yl.astype(np.float32),
                                       cat_flds=cat_vars, bs=128, test_df=df_test)
```

Some categorical variables have a lot more levels than others. Store, in particular, has over a thousand!

```
cat_sz = [(c, len(joined_samp[c].cat.categories)+1) for c in cat_vars]
cat_sz
```

We use the *cardinality* of each variable (that is, its number of unique values) to decide how large to make its *embeddings*. Each level will be associated with a vector with length defined as below.
``` emb_szs = [(c, min(50, (c+1)//2)) for _,c in cat_sz] emb_szs m = md.get_learner(emb_szs, len(df.columns)-len(cat_vars), 0.04, 1, [1000,500], [0.001,0.01], y_range=y_range) lr = 1e-3 m.lr_find() m.sched.plot(100) ``` ### Sample ``` m = md.get_learner(emb_szs, len(df.columns)-len(cat_vars), 0.04, 1, [1000,500], [0.001,0.01], y_range=y_range) lr = 1e-3 m.fit(lr, 3, metrics=[exp_rmspe]) m.fit(lr, 5, metrics=[exp_rmspe], cycle_len=1) m.fit(lr, 2, metrics=[exp_rmspe], cycle_len=4) ``` ### All ``` m = md.get_learner(emb_szs, len(df.columns)-len(cat_vars), 0.04, 1, [1000,500], [0.001,0.01], y_range=y_range) lr = 1e-3 m.fit(lr, 1, metrics=[exp_rmspe]) m.fit(lr, 3, metrics=[exp_rmspe]) m.fit(lr, 3, metrics=[exp_rmspe], cycle_len=1) ``` ### Test ``` m = md.get_learner(emb_szs, len(df.columns)-len(cat_vars), 0.04, 1, [1000,500], [0.001,0.01], y_range=y_range) lr = 1e-3 m.fit(lr, 3, metrics=[exp_rmspe]) m.fit(lr, 3, metrics=[exp_rmspe], cycle_len=1) m.save('val0') m.load('val0') x,y=m.predict_with_targs() exp_rmspe(x,y) pred_test=m.predict(True) pred_test = np.exp(pred_test) joined_test['Sales']=pred_test csv_fn=f'{PATH}tmp/sub.csv' joined_test[['Id','Sales']].to_csv(csv_fn, index=False) FileLink(csv_fn) ``` ## RF ``` from sklearn.ensemble import RandomForestRegressor ((val,trn), (y_val,y_trn)) = split_by_idx(val_idx, df.values, yl) m = RandomForestRegressor(n_estimators=40, max_features=0.99, min_samples_leaf=2, n_jobs=-1, oob_score=True) m.fit(trn, y_trn); preds = m.predict(val) m.score(trn, y_trn), m.score(val, y_val), m.oob_score_, exp_rmspe(preds, y_val) ```
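As a quick sanity check of the `exp_rmspe` metric defined in the DL section (the definitions are repeated here so the snippet is self-contained): perfect predictions score 0, and a uniform 10% overshoot scores 0.1, since every percentage error is then exactly -0.1:

```
import math
import numpy as np

def inv_y(a): return np.exp(a)

def exp_rmspe(y_pred, targ):
    targ = inv_y(targ)
    pct_var = (targ - inv_y(y_pred))/targ
    return math.sqrt((pct_var**2).mean())

y = np.log(np.array([100.0, 200.0, 400.0]))  # log-sales targets
print(exp_rmspe(y, y))                        # 0.0 for perfect predictions
print(round(exp_rmspe(y + np.log(1.1), y), 4))  # uniform +10% overshoot -> 0.1
```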
<a href="https://colab.research.google.com/github/Abhishek-Gargha-Maheshwarappa/Reinforcement-Learning-Ad-Campaing-Optimization/blob/master/AD_Campaing_modeling_with_Reinforcemnet_Learning.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>

# **Multi-Armed Bandit - Thompson Sampling for Ads**

## **1. Abstract**

A real-world Bernoulli trial has exactly two possible outcomes, "success" and "failure." Since we are working with ads, the outcomes here are impression or no impression: 'click' or 'no click' on a banner ad. We simulate the ads using randomly drawn conversion rates, model them with Thompson sampling on a beta distribution, and then try different methods and hyperparameters to observe the effect of other techniques such as epsilon-greedy, UCB, and random sampling.

First we set up the necessary imports and the standard Ads. The get_reward_regret method samples the reward for the given action, and returns the regret based on the true best action.

## **2. Setting up the environment**

```
import numpy as np
import matplotlib.pyplot as plt
from pdb import set_trace

stationary=True

class ADS():
    def __init__(self, ad_camp):
        """
        Ads with rewards 1 or 0.
        At initialization, multiple ads are created. The probability of each ad
        returning reward 1 if clicked is sampled from Bernoulli(p), where p is
        randomly chosen from Uniform(0,1) at initialization
        """
        self.ad_camp = ad_camp
        self.generate_thetas()
        self.timestep = 0
        global stationary
        self.stationary=stationary

    def generate_thetas(self):
        self.thetas = np.random.uniform(0,1,self.ad_camp)

    def get_reward_regret(self, arm):
        """ Returns random reward for ad.
        Assumes actions are 0-indexed
        Args: arm is an int
        """
        self.timestep += 1
        if (self.stationary==False) and (self.timestep%100 == 0):
            self.generate_thetas()
        # Simulate Bernoulli sampling
        sim = np.random.uniform(0,1,self.ad_camp)
        rewards = (sim<self.thetas).astype(int)
        reward = rewards[arm]
        regret = self.thetas.max() - self.thetas[arm]
        return reward, regret
```

## **3. Thompson Sampling**

```
class BetaAlgo():
    """
    The algos try to learn which bandit arm is the best to maximize reward.
    They do this by modelling the distribution of the ads with a Beta,
    assuming the true probability of success of an arm is Bernoulli distributed.
    """
    def __init__(self, ads):
        """
        Args: ads: the ads class the algo is trying to model
        """
        self.ads = ads
        self.ad_camp = ads.ad_camp
        self.alpha = np.ones(self.ad_camp)
        self.beta = np.ones(self.ad_camp)

    def get_reward_regret(self, arm):
        reward, regret = self.ads.get_reward_regret(arm)
        self._update_params(arm, reward)
        return reward, regret

    def _update_params(self, arm, reward):
        self.alpha[arm] += reward
        self.beta[arm] += 1 - reward

class BernGreedy(BetaAlgo):
    def __init__(self, ads):
        super().__init__(ads)

    @staticmethod
    def name():
        return 'beta-greedy'

    def get_action(self):
        """ Bernoulli parameters are the expected values of the beta"""
        theta = self.alpha / (self.alpha + self.beta)
        return theta.argmax()

class BernThompson(BetaAlgo):
    def __init__(self, ads):
        super().__init__(ads)

    @staticmethod
    def name():
        return 'thompson'

    def get_action(self):
        """ Bernoulli parameters are sampled from the beta"""
        theta = np.random.beta(self.alpha, self.beta)
        return theta.argmax()
```

## **4. Epsilon Greedy**

```
epsilon = 0.1

class EpsilonGreedy():
    """
    Epsilon Greedy with incremental update.
    Based on Sutton and Barto pseudo-code, page.
    24
    """
    def __init__(self, ads):
        global epsilon
        self.epsilon = epsilon
        self.ads = ads
        self.ad_camp = ads.ad_camp
        self.Q = np.zeros(self.ad_camp)  # q-value of actions
        self.N = np.zeros(self.ad_camp)  # action count

    @staticmethod
    def name():
        return 'epsilon-greedy'

    def get_action(self):
        if np.random.uniform(0,1) > self.epsilon:
            action = self.Q.argmax()
        else:
            action = np.random.randint(0, self.ad_camp)
        return action

    def get_reward_regret(self, arm):
        reward, regret = self.ads.get_reward_regret(arm)
        self._update_params(arm, reward)
        return reward, regret

    def _update_params(self, arm, reward):
        self.N[arm] += 1  # increment action count
        self.Q[arm] += 1/self.N[arm] * (reward - self.Q[arm])  # inc. update rule
```

## **5. Upper-Confidence-Bound**

```
ucb_c = 2

class UCB():
    """
    Upper-Confidence-Bound action selection with incremental update.
    Based on Sutton and Barto pseudo-code, page 24
    """
    def __init__(self, ads):
        global ucb_c
        self.ucb_c = ucb_c
        self.ads = ads
        self.ad_camp = ads.ad_camp
        self.Q = np.zeros(self.ad_camp)  # q-value of actions
        self.N = np.zeros(self.ad_camp) + 0.0001  # action count
        self.timestep = 1

    @staticmethod
    def name():
        return 'ucb'

    def get_action(self):
        ln_timestep = np.log(np.full(self.ad_camp, self.timestep))
        confidence = self.ucb_c * np.sqrt(ln_timestep/self.N)
        action = np.argmax(self.Q + confidence)
        self.timestep += 1
        return action

    def get_reward_regret(self, arm):
        reward, regret = self.ads.get_reward_regret(arm)
        self._update_params(arm, reward)
        return reward, regret

    def _update_params(self, arm, reward):
        self.N[arm] += 1  # increment action count
        self.Q[arm] += 1/self.N[arm] * (reward - self.Q[arm])  # inc. update rule
```

## **6.
Plotting different sampling**

```
def plot_data(y):
    """ y is a 1D vector """
    x = np.arange(y.size)
    _ = plt.plot(x, y, 'o')

def multi_plot_data(data, names):
    """ data, names are lists of vectors """
    x = np.arange(data[0].size)
    for i, y in enumerate(data):
        plt.plot(x, y, 'o', markersize=2, label=names[i])
    plt.legend(loc='upper right', prop={'size': 16}, numpoints=10)
    plt.xlabel("Episodes")
    plt.ylabel("Regret")
    plt.show()

def simulate(simulations, timesteps, ad_camp, Algorithm):
    """ Simulates the algorithm over 'simulations' epochs """
    sum_regrets = np.zeros(timesteps)
    for e in range(simulations):
        ads = ADS(ad_camp)
        algo = Algorithm(ads)
        regrets = np.zeros(timesteps)
        for i in range(timesteps):
            action = algo.get_action()
            reward, regret = algo.get_reward_regret(action)
            regrets[i] = regret
        sum_regrets += regrets
    mean_regrets = sum_regrets / simulations
    return mean_regrets

def experiment(ad_camp, timesteps=1000, simulations=1000):
    """
    Standard setup across all experiments
    Args:
        timesteps: (int) how many steps for the algo to learn the ads
        simulations: (int) number of epochs
    """
    algos = [EpsilonGreedy, UCB, BernThompson]
    regrets = []
    names = []
    for algo in algos:
        regrets.append(simulate(simulations, timesteps, ad_camp, algo))
        names.append(algo.name())
    multi_plot_data(regrets, names)
```

## **7. An example Experiment**

```
ad_camp = 10  # number of ads
epsilon = 0.1
ucb_c = 2
stationary=True
experiment(ad_camp)
```

## **8. Hyperparameters**

Which hyperparameters are important for Thompson Sampling, e-greedy, UCB, and random sampling? Show that they are important (15 Points)

# **1. Thompson Sampling Hyperparameter**

1. Distribution: beta, uniform, or other (illustrated at the end)

# **2. e-greedy**

1. Epsilon
2. Decay rate

# **3. UCB**

1.
C - the uncertainty measure is weighted by this hyperparameter

## **9. E-greedy**

We vary **epsilon**, the exploration rate.

### **9.1 Experiment 1**

```
ad_camp = 10  # number of Ads
epsilon = 0.1
ucb_c = 2
stationary=True
experiment(ad_camp)
```

### **9.2 Experiment 2**

```
# Experiment 2
ad_camp = 10  # number of Ads
epsilon = 0.3
ucb_c = 2
stationary=True
experiment(ad_camp)
```

### **9.3 Experiment 3**

```
# Experiment 3
ad_camp = 10  # number of Ads
epsilon = 0.5
ucb_c = 2
stationary=True
experiment(ad_camp)
```

### **9.4 Experiment 4**

```
# Experiment 4
ad_camp = 10  # number of Ads
epsilon = 0.01
ucb_c = 2
stationary=True
experiment(ad_camp)
```

From the above it is clear that epsilon-greedy fails when we use a large value for epsilon, which means more exploration. The best value for epsilon here is **0.1**.

## **10. UCB**

C - the uncertainty measure is weighted by this hyperparameter

### **10.1 Experiment 1**

```
# Experiment 1
ad_camp = 10  # number of Ads
epsilon = 0.01
ucb_c = 3
stationary=True
experiment(ad_camp)
```

### **10.2 Experiment 2**

```
# Experiment 2
ad_camp = 10  # number of Ads
epsilon = 0.01
ucb_c = 10
stationary=True
experiment(ad_camp)
```

### **10.3 Experiment 3**

```
# Experiment 3
ad_camp = 10  # number of Ads
epsilon = 0.01
ucb_c = 5
stationary=True
experiment(ad_camp)
```

### **10.4 Experiment 4**

```
# Experiment 4
ad_camp = 10  # number of Ads
epsilon = 0.01
ucb_c = 1
stationary=True
experiment(ad_camp)
```

### **10.5 Experiment 5**

```
# Experiment 5
ad_camp = 10  # number of ads
epsilon = 0.01
ucb_c = 0.1
stationary=True
experiment(ad_camp)
```

From the above experiments of changing c, we see that UCB performs better as c decreases; in the final experiment, where **c = 0.1**, its performance is very close to Thompson sampling.

How does the action space affect Thompson Sampling, e-greedy, UCB, and random sampling?

## **11.
Action Space**

The action space can be changed by changing the number of ads.

### **11.1 Ad_campaign = 10**

```
# Experiment 1
ad_camp = 10
epsilon = 0.01
ucb_c = 2
stationary=True
experiment(ad_camp)
```

### **11.2 Ad_campaign = 15**

```
# Experiment 2
ad_camp = 15
epsilon = 0.01
ucb_c = 2
stationary=True
experiment(ad_camp)
```

### **11.3 Ad_campaign = 25**

```
# Experiment 3
ad_camp = 25
epsilon = 0.01
ucb_c = 2
stationary=True
experiment(ad_camp)
```

### **11.4 Ad_campaign = 5**

```
# Experiment 4
ad_camp = 5
epsilon = 0.01
ucb_c = 2
stationary=True
experiment(ad_camp)
```

### **11.5 Ad_campaign = 2**

```
# Experiment 5
ad_camp = 2
epsilon = 0.01
ucb_c = 2
stationary=True
experiment(ad_camp)
```

As seen in the experiments above, the action space barely affects Thompson sampling; it affects UCB strongly (with a larger action space UCB performs very badly), while e-greedy remains almost the same.

### **12. Stationarity effect**

How does stationarity affect Thompson Sampling, e-greedy, UCB, and random sampling?

```
# Experiment 1
ad_camp = 5
epsilon = 0.01
ucb_c = 2
stationary=True
experiment(ad_camp)

# Experiment 2
ad_camp = 10
epsilon = 0.1
ucb_c = 2
stationary=False
experiment(ad_camp)
```

Introducing a non-stationary effect every 100 steps makes the regret jump all over the place.

When do Thompson Sampling, e-greedy, UCB, and random sampling stop exploring? Explain why. Explain the exploration-exploitation tradeoff (15 Points)

## **13. Exploration stoppage**

1. Thompson sampling will not stop exploring; it keeps exploring throughout. When we use a beta distribution, part of it will overlap with actions that are less likely to get good rewards, which is nothing but exploration. Though the exploration rate is very small compared to exploitation, after many episodes of play exploration reduces and exploitation increases.
2. In e-greedy, exploring stops only when epsilon is zero.
3.
UCB keeps exploring until the confidence interval shrinks to a point value.
4. Random sampling will always keep exploring throughout its lifetime if there is no stopping condition.

## **Exploitation - Exploration Trade-off**

![Alt Text](https://encrypted-tbn0.gstatic.com/images?q=tbn%3AANd9GcSofjoQq1pD-CUMS7Q3QPVIt8MUECqaL2vpSA&usqp=CAU)

The exploration-exploitation trade-off is a well-known dilemma of Reinforcement Learning, where the agent has to choose between exploring the environment and exploiting its knowledge of the environment. As the agent explores the environment it learns from it, and its exploration decreases. This trade-off can be controlled by the exploration rate.

### **13.1. Epsilon Greedy Strategy**

To get this balance between exploitation and exploration, the epsilon greedy strategy can be used. With this strategy, we define an exploration rate that we initially set to 1. This exploration rate is the probability that our agent will explore the environment rather than exploit it. With epsilon at 1, it is 100% certain that the agent will start out by exploring the environment. As the agent learns more about the environment, at the start of each new episode epsilon decays by some rate that we set, so that exploration becomes less and less probable as the agent learns more and more about the environment. The agent becomes "greedy" in terms of exploiting the environment once it has had the opportunity to explore and learn more about it.

## **14. Thompson sampling with different distributions**

Thompson Sampling with non-Beta distribution (5 Points)

Modify the Thompson Sampling to run with a different distribution (e.g. Pareto, Normal, etc.)

We take the example of a slot machine and try out different distributions.

## **14.1 Defining the slot machine and generating data**

```
import numpy as np

#Define the total number of turns (i.e., the number of times we will play a slot machine).
#Remember, we have $1,000 available, and each turn costs $1. We thus have 1,000 turns. number_of_turns = 1000 #define the total number of slot machines number_of_slot_machines = 6 #Define arrays where we can keep track of our wins (positive rewards) #and losses (negative rewards) for each slot machine. #number_of_positive_rewards = np.zeros(number_of_slot_machines) #number_of_negative_rewards = np.zeros(number_of_slot_machines) #define a seed for the random number generator (to ensure that results are reproducible) np.random.seed(5) #create a random conversion rate between 1% and 15% for each slot machine conversion_rates = np.random.uniform(0.01, 0.15, number_of_slot_machines) #Show conversion rates for each slot machine. Remember that in a real-world scenario #the decision-maker would not know this information! for i in range(6): print('Conversion rate for slot machine {0}: {1:.2%}'.format(i, conversion_rates[i])) #The data set is a matrix with one row for each turn, and one column for each slot machine. #Each item in the matrix represents the outcome of what would happen if we were to play a #particular slot machine on that particular turn. A value of "1" indicates that we would win, #while a value of "0" indicates that we would lose. The number of "wins" for each slot machine #is determined by its conversion rate. outcomes = np.zeros((number_of_turns, number_of_slot_machines)) #create a two-dimensional numpy array, and fill it with zeros for turn_index in range(number_of_turns): #for each turn for slot_machine_index in range(number_of_slot_machines): #for each slot machine #Get a random number between 0.0 and 1.0. #If the random number is less than or equal to this slot machine's conversion rate, then set the outcome to "1". #Otherwise, the outcome will be "0" because the entire matrix was initially filled with zeros. 
            if np.random.rand() <= conversion_rates[slot_machine_index]:
                outcomes[turn_index][slot_machine_index] = 1

#display the first 15 rows of data
print(outcomes[0:15, 0:6])  #this sort of indexing means "rows 0 to 14" (i.e., the first 15 rows) and "columns 0 through 5" (i.e., the first six columns)

#show means (i.e., conversion rates) for each column (i.e., for each slot machine)
for i in range(6):
    print('Mean for column {0}: {1:.2%}'.format(i, np.mean(outcomes[:, i])))

#show true conversion rate
for i in range(6):
    print('True conversion rate for column {0}: {1:.2%}'.format(i, conversion_rates[i]))
```

## **14.2 Normal Distribution**

```
#for each turn
rewards = [[0] for i in range(number_of_slot_machines)]
for turn_index in range(number_of_turns):
    index_of_machine_to_play = -1
    max_beta = -1  # note that max beta
    #determine which slot machine to play for this turn
    for slot_machine_index in range(number_of_slot_machines):  #for each slot machine
        #Estimate the mean and spread of the rewards that have thus far
        #been observed for this particular slot machine.
        #a = number_of_positive_rewards[slot_machine_index] + 1
        #b = number_of_negative_rewards[slot_machine_index] + 1
        mean = np.mean(rewards[slot_machine_index])
        std = np.std(rewards[slot_machine_index])
        #Get a random value from the normal distribution estimated from the
        #rewards that have thus far been observed for this slot machine
        random_beta = np.random.normal(mean, std)
        #print(random_beta)
        #if this is the largest value thus far observed for this iteration
        if random_beta > max_beta:
            max_beta = random_beta  #update the maximum value thus far observed
            index_of_machine_to_play = slot_machine_index  #set the machine to play to the current machine

    #play the selected slot machine, and record whether we win or lose
    if outcomes[turn_index][index_of_machine_to_play] == 1:
        rewards[index_of_machine_to_play].append(1)
    else:
        rewards[index_of_machine_to_play].append(0)

print('Number of turns {0}:'.format(number_of_turns))

#compute and display the total number of times each slot machine was played
number_of_times_played = [0 for i in range(number_of_slot_machines)]
for n in range(number_of_slot_machines):
    number_of_times_played[n] = len(rewards[n])
#number_of_times_played = number_of_positive_rewards + number_of_negative_rewards
for slot_machine_index in range(number_of_slot_machines):  #for each slot machine
    print('Slot machine {0} was played {1} times, that is, {2:.2%}'.format(slot_machine_index,
          number_of_times_played[slot_machine_index],
          (number_of_times_played[slot_machine_index]/number_of_turns)))

#identify and display the best slot machine to play
print('\nOverall Conclusion: The best slot machine to play is machine {}!'.format(np.argmax(number_of_times_played)))

#show true conversion rate
for i in range(6):
    print('True conversion rate for column {0}: {1:.2%}'.format(i, conversion_rates[i]))
```

## **14.3 Gamma Distribution**

```
#for each turn
number_of_turns = 1000
number_of_positive_rewards = np.zeros(number_of_slot_machines)
number_of_negative_rewards = np.zeros(number_of_slot_machines) for turn_index in range(number_of_turns): index_of_machine_to_play = -1 max_beta = -1 # note that max beta random_beta = 0 #determine which slot machine to play for this turn for slot_machine_index in range(number_of_slot_machines): #for each slot machine #Define the shape parameters for the beta distribution. The shape will depend on the number #of wins and losses that have thus far been observed for this particular slot machine. a = number_of_positive_rewards[slot_machine_index] + 1 b = number_of_negative_rewards[slot_machine_index] + 1 k=a/(a+b) random_beta = np.random.gamma(k,turn_index) #if this is the largest beta value thus far observed for this iteration if random_beta > max_beta: max_beta = random_beta #update the maximum beta value thus far observed index_of_machine_to_play = slot_machine_index #set the machine to play to the current machine #play the selected slot machine, and record whether we win or lose if outcomes[turn_index][index_of_machine_to_play] == 1: number_of_positive_rewards[index_of_machine_to_play] += 1 else: number_of_negative_rewards[index_of_machine_to_play] += 1 print('Number of turns {0}:'.format(number_of_turns)) #compute and display the total number of times each slot machine was played number_of_times_played = number_of_positive_rewards + number_of_negative_rewards #number_of_times_played = number_of_positive_rewards + number_of_negative_rewards for slot_machine_index in range(number_of_slot_machines): #for each slot machine print('Slot machine {0} was played {1} times that is, {2:.2%}'.format(slot_machine_index, number_of_times_played[slot_machine_index], (number_of_times_played[slot_machine_index]/number_of_turns))) #identify and display the best slot machine to play print('\nOverall Conclusion: The best slot machine to play is machine {}!'.format(np.argmax(number_of_times_played))) #show true conversion rate for i in range(6): print('True conversion rate for column 
{0}: {1:.2%}'.format(i, conversion_rates[i]))
```

## **14.4 Binomial Distribution**

```
#for each turn
number_of_turns = 1000
number_of_positive_rewards = np.zeros(number_of_slot_machines)
number_of_negative_rewards = np.zeros(number_of_slot_machines)
for turn_index in range(number_of_turns):
    index_of_machine_to_play = -1
    max_beta = -1  # note that max beta
    random_beta = 0
    #determine which slot machine to play for this turn
    for slot_machine_index in range(number_of_slot_machines):  #for each slot machine
        #Define the shape parameters from the number of wins and losses that
        #have thus far been observed for this particular slot machine.
        a = number_of_positive_rewards[slot_machine_index] + 1
        b = number_of_negative_rewards[slot_machine_index] + 1
        k = a/(a+b)
        #Get a random value from the binomial distribution whose success
        #probability k is defined by the wins and losses observed so far
        random_beta = np.random.binomial(turn_index, k)
        #if this is the largest value thus far observed for this iteration
        if random_beta > max_beta:
            max_beta = random_beta  #update the maximum value thus far observed
            index_of_machine_to_play = slot_machine_index  #set the machine to play to the current machine

    #play the selected slot machine, and record whether we win or lose
    if outcomes[turn_index][index_of_machine_to_play] == 1:
        number_of_positive_rewards[index_of_machine_to_play] += 1
    else:
        number_of_negative_rewards[index_of_machine_to_play] += 1

print('Number of turns {0}:'.format(number_of_turns))

#compute and display the total number of times each slot machine was played
number_of_times_played = number_of_positive_rewards + number_of_negative_rewards
for slot_machine_index in range(number_of_slot_machines):  #for each slot machine
    print('Slot machine {0} was played {1} times, that is, {2:.2%}'.format(slot_machine_index,
number_of_times_played[slot_machine_index], (number_of_times_played[slot_machine_index]/number_of_turns))) #identify and display the best slot machine to play print('\nOverall Conclusion: The best slot machine to play is machine {}!'.format(np.argmax(number_of_times_played))) #show true conversion rate for i in range(6): print('True conversion rate for column {0}: {1:.2%}'.format(i, conversion_rates[i])) ``` ## **15. Past actions with different sampling** How long do Thompson Sampling, e-greedy, UCB, and random sampling remember past actions? 1. Thompson Sampling remembers past actions in the form of the posterior distributions it maintains over each arm, and it exploits those distributions when choosing the next action. 2. E-greedy also remembers previous actions, in the form of per-action reward estimates, which it exploits on the greedy steps. 3. UCB remembers past actions through the play counts it keeps, which identify less-explored, more uncertain actions with wide confidence intervals. 4. Random sampling remembers nothing from the past; its actions are completely random. ## **Conclusion** We can conclude that the hyperparameters affect the performance of each sampling technique, and that non-stationarity has a large effect on sampling performance. Substituting other distributions such as the binomial and gamma also affects the output, sometimes producing results that do not match the generated data. ## **Reference** * Nicholas Brown - Reinforcement Learning - Thompson Sampling & the Multi-Armed Bandit Problem [link](https://colab.research.google.com/drive/1gdR7k7jtSRqYnPNHcbAKdIjGRjQXpfnA) * Andre Cianflone - Thompson sampling * Russo, Daniel, Benjamin Van Roy, Abbas Kazerouni, and Ian Osband. "A Tutorial on Thompson Sampling." arXiv preprint arXiv:1707.02038 (2017). [link](https://arxiv.org/abs/1707.02038) * Sutton, Richard S., and Andrew G. Barto. Reinforcement Learning: An Introduction. Vol. 1, no. 1. Cambridge: MIT Press, 1998. I referred to the above links and used their code for this notebook, making changes wherever required.
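For contrast with the gamma and binomial variants tried above, the standard Thompson Sampling step draws directly from each machine's Beta posterior; a minimal sketch (the win/loss counts here are made up):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-machine win/loss counts after some number of turns.
wins = np.array([5, 20, 2])
losses = np.array([15, 10, 8])

# Standard Thompson Sampling step: draw one sample from each machine's
# Beta(wins + 1, losses + 1) posterior and play the machine with the
# largest draw. The posterior counts are exactly how the method
# "remembers" every past action and reward.
samples = rng.beta(wins + 1, losses + 1)
machine_to_play = int(np.argmax(samples))
print(machine_to_play)
```

Because the Beta draws lie in (0, 1), they are directly comparable across machines, which the gamma and binomial substitutions above do not guarantee.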
Copyright 2020 Abhishek Maheshwarappa Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
``` # mahalanobis_discriminative model from collections import OrderedDict import numpy as np import torch as th from torch import nn import seaborn as sns from pathlib import Path import matplotlib.pyplot as plt import cv2 import pandas as pd import math from scipy.spatial import distance as mahal_distance from skimage.util import random_noise classes = np.array(['uCry', 'sCry', 'cCry', 'hCast', 'nhCast', 'sEC', 'nsEC', 'WBC', 'RBC']) outlier_classes1 = np.array(['Artifact', 'Dirt', 'LD']) outlier_classes2 = np.array(['blankurine', 'bubbles', 'cathair', 'condensation', 'dust', 'feces', 'fingerprint', 'humanhair', 'Lipids', 'Lotion', 'pollen', 'semifilled', 'void', 'wetslide', 'yeast']) # Loading the pre-trained classifier def conv_bn_relu( in_channels, out_channels, kernel_size=3, padding=None, stride=1, depthwise=False, normalization=True, activation=True, init_bn_zero=False): """ Make a depthwise or normal convolution layer, followed by batch normalization and an activation. """ layers = [] padding = kernel_size // 2 if padding is None else padding if depthwise and in_channels > 1: layers += [ nn.Conv2d(in_channels, in_channels, bias=False, kernel_size=kernel_size, stride=stride, padding=padding, groups=in_channels), nn.Conv2d(in_channels, out_channels, bias=not normalization, kernel_size=1) ] else: layers.append( nn.Conv2d(in_channels, out_channels, bias=not normalization, kernel_size=kernel_size, stride=stride, padding=padding) ) if normalization: bn = nn.BatchNorm2d(out_channels) if init_bn_zero: nn.init.zeros_(bn.weight) layers.append(bn) if activation: # TODO: parametrize activation layers.append(nn.ReLU()) return nn.Sequential(*layers) def depthwise_cnn_classifier( channels=[], strides=None, img_width=32, img_height=32, c_in=None, c_out=None, ): channels = channels[:] if c_in is not None: channels.insert(0, c_in) if c_out is not None: channels.append(c_out) if len(channels) < 2: raise ValueError("Not enough channels") layers = OrderedDict() 
number_convolutions = len(channels) - 2 if strides is None: strides = [2] * number_convolutions out_width = img_width out_height = img_height for layer_index in range(number_convolutions): in_channels = channels[layer_index] out_channels = channels[layer_index + 1] layers["conv1" + str(layer_index)] = conv_bn_relu( in_channels, out_channels, kernel_size=3, stride=strides[layer_index], depthwise=layer_index > 0, normalization=True, activation=True, ) layers["conv2" + str(layer_index)] = conv_bn_relu( out_channels, out_channels, kernel_size=3, stride=1, depthwise=True, normalization=True, activation=True, ) out_width = out_width // strides[layer_index] out_height = out_height // strides[layer_index] layers["drop"] = nn.Dropout(p=0.2) layers["flatten"] = nn.Flatten() layers["final"] = nn.Linear(out_width * out_height * channels[-2], channels[-1]) #layers["softmax"] = nn.Softmax(-1) return nn.Sequential(layers) # load model cnn = depthwise_cnn_classifier([32, 64, 128], c_in=1, c_out=9, img_width=32, img_height=32) cnn.load_state_dict(th.load("/home/erdem/pickle/thomas_classifier/urine_classifier_uniform_32x32.pt")) cnn.eval() # IMPORTANT cnn from ood_metrics import calc_metrics, plot_roc, plot_pr, plot_barcode # Mahalanobis # get empirical class means and covariances def get_mean_covariance(f): means= [] observations = [] for cl in classes: print("in class", cl) cl_path = "/home/thomas/tmp/patches_urine_32_scaled/"+cl+"/" counter = 0 temp_array = None for img_path in Path(cl_path).glob("*.png"): counter += 1 image = th.from_numpy(plt.imread(img_path)).float() if counter == 1: temp_array = f(image[None, None, :, :] - 1).detach().view(-1).numpy() observations.append(f(image[None, None, :, :] - 1).detach().view(-1).numpy()) else: temp_array += f(image[None, None, :, :] - 1).detach().view(-1).numpy() observations.append(f(image[None, None, :, :] - 1).detach().view(-1).numpy()) means.append(temp_array/counter) V = np.cov(observations, rowvar=False) VM = np.matrix(V) return 
means, VM # Returns the -mahal distance per class, max is better def Mahal_distance(f, x, means, cov): np_output = f(x[None, None, :, :] - 1).detach().view(-1).numpy() mahal_distance_per_C = [] for i in range(len(classes)): maha = mahal_distance.mahalanobis(np_output, means[i], cov) mahal_distance_per_C.append(-maha) # negate so that larger (closer to a class) is better return mahal_distance_per_C def test_mahala(f, means, covs, outlier_class, outlier_temp, perturb): covi = covs.I inlier_scores = [] inlier_labels = [] for cl in classes: print(cl) cl_path = "/home/thomas/tmp/patches_urine_32_scaled/"+cl+"/" for img_path in Path(cl_path).glob("*.png"): inlier_labels.append(1) image = th.from_numpy(plt.imread(img_path)).float() if perturb == 'gaussian': image = th.tensor(random_noise(image, mode='gaussian', mean=0, var=0.01, clip=True)).float() elif perturb == 's&p': image = th.tensor(random_noise(image, mode='s&p', salt_vs_pepper=0.5, amount=0.03, clip=True)).float() mahal_dist_per_c = Mahal_distance(f, image, means, covi) temp_score = np.amax(mahal_dist_per_c) inlier_scores.append(temp_score) sns.scatterplot(data=inlier_scores) outlier_scores = [] outlier_labels = [] for cl in outlier_class: print(cl) cl_path = outlier_temp+cl+"/" for img_path in Path(cl_path).glob("*.png"): outlier_labels.append(0) image = th.from_numpy(plt.imread(img_path)).float() if perturb == 'gaussian': image = th.tensor(random_noise(image, mode='gaussian', mean=0, var=0.01, clip=True)).float() elif perturb == 's&p': image = th.tensor(random_noise(image, mode='s&p', salt_vs_pepper=0.5, amount=0.03, clip=True)).float() mahal_dist_per_c = Mahal_distance(f, image, means, covi) temp_score = np.amax(mahal_dist_per_c) outlier_scores.append(temp_score) sns.scatterplot(data=outlier_scores) score_array = inlier_scores+outlier_scores label_array = inlier_labels+outlier_labels print(calc_metrics(score_array, label_array)) plot_roc(score_array, label_array) # plot_pr(score_array, label_array) # plot_barcode(score_array, label_array) def test_mahala_final(f, means, covs, perturb): covi =
covs.I inlier_scores = [] inlier_labels = [] outlier_scores = [] outlier_labels = [] inlier_path = "/home/erdem/dataset/urine_test_32/inliers" outlier_path = "/home/erdem/dataset/urine_test_32/outliers" # Inliers for img_path in Path(inlier_path).glob("*.png"): inlier_labels.append(0) image = th.from_numpy(plt.imread(img_path)).float() if perturb == 'gaussian': image = th.tensor(random_noise(image, mode='gaussian', mean=0, var=0.01, clip=True)).float() elif perturb == 's&p': image = th.tensor(random_noise(image, mode='s&p', salt_vs_pepper=0.5, amount=0.03, clip=True)) mahal_dist_per_c = Mahal_distance(f, image, means, covi) temp_score = np.amax(mahal_dist_per_c) inlier_scores.append(temp_score) # Outliers for img_path in Path(outlier_path).glob("*.png"): outlier_labels.append(1) image = th.from_numpy(plt.imread(img_path)).float() if perturb == 'gaussian': image = th.tensor(random_noise(image, mode='gaussian', mean=0, var=0.01, clip=True)).float() elif perturb == 's&p': image = th.tensor(random_noise(image, mode='s&p', salt_vs_pepper=0.5, amount=0.03, clip=True)) mahal_dist_per_c = Mahal_distance(f, image, means, covi) temp_score = np.amax(mahal_dist_per_c) outlier_scores.append(temp_score) d_outliers = {"Mahalanobis Distance": outlier_scores, "outlier_labels": outlier_labels, "Index of Image Patches": np.linspace(1, 636, num=636)} d_inliers = {"Mahalanobis Distance": inlier_scores, "inlier_labels": inlier_labels, "Index of Image Patches": np.linspace(1, 636, num=636)} df1 = pd.DataFrame(data=d_inliers) df2 = pd.DataFrame(data=d_outliers) sns.scatterplot(data=df1, x="Index of Image Patches", y="Mahalanobis Distance") sns.scatterplot(data=df2, x="Index of Image Patches", y="Mahalanobis Distance") score_array = inlier_scores+outlier_scores label_array = inlier_labels+outlier_labels print(calc_metrics(score_array, label_array)) plot_roc(score_array, label_array) plot_pr(score_array, label_array) # plot_barcode(score_array, label_array) from copy import deepcopy 
cnn_flattened = deepcopy(cnn) del cnn_flattened[-1] # remove the final linear layer image = th.from_numpy(plt.imread("/home/thomas/tmp/patches_contaminants_32_scaled/bubbles/Anvajo_bubbles1_100um_161_385_201_426.png")).float() cnn_dropout = deepcopy(cnn_flattened) del cnn_dropout[-1] # remove flatten seq6 = deepcopy(cnn_dropout) del seq6[-1] # remove dropout del seq6[-1][-1] # remove the trailing relu of the last conv block seq5 = deepcopy(seq6) del seq5[-1] # remove the last remaining conv block del seq5[-1][-1] # remove the trailing relu of the new last block seq4 = deepcopy(seq5) del seq4[-1] # remove the last remaining conv block del seq4[-1][-1] # remove the trailing relu of the new last block seq3 = deepcopy(seq4) del seq3[-1] # remove the last remaining conv block del seq3[-1][-1] # remove the trailing relu of the new last block seq2 = deepcopy(seq3) del seq2[-1] # remove the last remaining conv block del seq2[-1][-1] # remove the trailing relu of the new last block seq1 = deepcopy(seq2) del seq1[-1] # remove the last remaining conv block del seq1[-1][-1] # remove the trailing relu of the new last block means, COV = get_mean_covariance(cnn_flattened) COVI = COV.I print(COVI) # cnn without the linear last layer test_mahala_final(cnn_flattened, means, COV, perturb = None) # cnn without the linear last layer test_mahala_final(cnn_flattened, means, COV, perturb = 'gaussian') # cnn without the linear last layer test_mahala_final(cnn_flattened, means, COV, perturb = 's&p') means, COV = get_mean_covariance(cnn_flattened) # cnn without the linear last layer test_mahala(cnn_flattened, means, COV, outlier_classes1, "/home/thomas/tmp/patches_urine_32_scaled/", perturb = None) COVI = COV.I outlier_temp = "/home/thomas/tmp/patches_urine_32_scaled/" perturb = None outlier_labels = [] outlier_scores = [] outlier_path = [] for cl in outlier_classes1: print(cl) cl_path = outlier_temp+cl+"/" for img_path in Path(cl_path).glob("*.png"): outlier_path.append(img_path) outlier_labels.append(cl) image = th.from_numpy(plt.imread(img_path)).float() if perturb == 'gaussian': image = th.tensor(random_noise(image, mode='gaussian', mean=0, var=0.01, clip=True)).float() elif perturb == 's&p': image = th.tensor(random_noise(image, mode='s&p', salt_vs_pepper=0.5, amount=0.03,
clip=True)) mahal_dist_per_c = Mahal_distance(cnn_flattened, image, means, COVI) temp_score = np.amax(mahal_dist_per_c) outlier_scores.append(temp_score) d = {"outlier_scores": outlier_scores, "outlier_labels": outlier_labels, "outlier_path": outlier_path} df = pd.DataFrame(data=d) sns.scatterplot(data=df, x = "outlier_labels", y="outlier_scores") outlier_temp = "/home/thomas/tmp/patches_urine_32_scaled/" perturb = None outlier_labels = [] outlier_scores = [] outlier_path = [] for cl in classes: print(cl) cl_path = outlier_temp+cl+"/" for img_path in Path(cl_path).glob("*.png"): outlier_path.append(img_path) outlier_labels.append(cl) image = th.from_numpy(plt.imread(img_path)).float() if perturb == 'gaussian': image = th.tensor(random_noise(image, mode='gaussian', mean=0, var=0.01, clip=True)).float() elif perturb == 's&p': image = th.tensor(random_noise(image, mode='s&p', salt_vs_pepper=0.5, clip=True)) mahal_dist_per_c = Mahal_distance(cnn_flattened, image, means, COVI) temp_score = np.amax(mahal_dist_per_c) outlier_scores.append(temp_score) d = {"outlier_scores": outlier_scores, "outlier_labels": outlier_labels, "outlier_path": outlier_path} df2 = pd.DataFrame(data=d) sns.scatterplot(data=df2, x = "outlier_labels", y="outlier_scores") outlier_temp = "/home/thomas/tmp/patches_contaminants_32_scaled/" perturb = None outlier_labels = [] outlier_scores = [] outlier_path = [] for cl in outlier_classes2: print(cl) cl_path = outlier_temp+cl+"/" for img_path in Path(cl_path).glob("*.png"): outlier_path.append(img_path) outlier_labels.append(cl) image = th.from_numpy(plt.imread(img_path)).float() if perturb == 'gaussian': image = th.tensor(random_noise(image, mode='gaussian', mean=0, var=0.01, clip=True)).float() elif perturb == 's&p': image = th.tensor(random_noise(image, mode='s&p', salt_vs_pepper=0.5, clip=True)) mahal_dist_per_c = Mahal_distance(cnn_flattened, image, means, COVI) temp_score = np.amax(mahal_dist_per_c) outlier_scores.append(temp_score) d = 
{"outlier_scores": outlier_scores, "outlier_labels": outlier_labels, "outlier_path": outlier_path} df4 = pd.DataFrame(data=d) sns.scatterplot(data=df4, x = "outlier_labels", y="outlier_scores") cl_path = "/home/thomas/tmp/patches_urine_32_scaled/Unclassified" perturb = None outlier_labels = [] outlier_scores = [] outlier_path = [] cl = "Unclassified" COVI = COV.I for img_path in Path(cl_path).glob("*.png"): outlier_path.append(img_path) outlier_labels.append(cl) image = th.from_numpy(plt.imread(img_path)).float() if perturb == 'gaussian': image = th.tensor(random_noise(image, mode='gaussian', mean=0, var=0.01, clip=True)).float() elif perturb == 's&p': image = th.tensor(random_noise(image, mode='s&p', salt_vs_pepper=0.5, clip=True)) mahal_dist_per_c = Mahal_distance(cnn_flattened, image, means, COVI) temp_score = np.amax(mahal_dist_per_c) outlier_scores.append(temp_score) d = {"outlier_scores": outlier_scores, "outlier_labels": outlier_labels, "outlier_path": outlier_path} df3 = pd.DataFrame(data=d) sns.scatterplot(data=df3, x = "outlier_labels", y="outlier_scores") # sorted_outliers1 = df.sort_values(by=['outlier_scores']) # sorted_outliers2 = df4.sort_values(by=['outlier_scores']) # sorted_inliers = df2.sort_values(by=['outlier_scores']) sorted_unclassified = df3.sort_values(by=['outlier_scores']) index = 0 # index: 717 and after is inlier for a in sorted_unclassified['outlier_scores']: print(index, a) index += 1 from torchvision.utils import make_grid from torchvision.io import read_image import torchvision.transforms.functional as F %matplotlib inline def show(imgs): if not isinstance(imgs, list): imgs = [imgs] fix, axs = plt.subplots(ncols=len(imgs), squeeze=False) for i, img in enumerate(imgs): img = img.detach() img = F.to_pil_image(img) axs[0, i].imshow(np.asarray(img)) axs[0, i].set(xticklabels=[], yticklabels=[], xticks=[], yticks=[]) unclassified_imgs = [] for path in sorted_unclassified["outlier_path"]: unclassified_imgs.append(read_image(str(path))) ```
``` from bs4 import BeautifulSoup from selenium import webdriver from splinter import Browser import requests import time import pandas as pd from pandas.io.html import read_html from pprint import pprint executable_path = {'executable_path': 'chromedriver'} browser = Browser('chrome', **executable_path, headless=False) mars_info = {} nasa_url ='https://mars.nasa.gov/news/' browser.visit(nasa_url) time.sleep(10) soup = BeautifulSoup(browser.html, 'html.parser') news_title = soup.find('div', class_='content_title').text news_p = soup.find('div', class_='article_teaser_body').text mars_info['title'] = news_title mars_info['paragraph'] = news_p print(f'{news_title} \n\n{news_p}') image_url = 'https://www.jpl.nasa.gov/spaceimages/?search=&category=Mars' browser.visit(image_url) html = browser.html time.sleep(10) soup = BeautifulSoup(html, "html.parser") browser.click_link_by_id('full_image') html = browser.html soup = BeautifulSoup(html, "html.parser") browser.click_link_by_partial_href('/spaceimages/details.php?id') html = browser.html soup = BeautifulSoup(html, "html.parser") browser_url = 'https://www.jpl.nasa.gov' image_url = soup.find('article').find('figure').find('a')['href'] featured_image = browser_url + image_url mars_info['featured_image'] = featured_image print(featured_image) twitter_url = 'https://twitter.com/marswxreport?lang=en' browser.visit(twitter_url) time.sleep(10) html = browser.html soup = BeautifulSoup(html, "html.parser") mars_weather = soup.find_all('span', class_='css-901oao css-16my406 r-1qd0xha r-ad9z0x r-bcqeeo r-qvutc0')[31].text mars_info['mars_weather'] = mars_weather print(mars_weather) facts_url ='https://space-facts.com/mars/' tables = pd.read_html(facts_url) mars_facts_df = tables[0] mars_facts_df.columns = ['Attribute', 'Value'] mars_facts_df.set_index('Attribute', inplace=True) mars_facts_df mars_facts_dic = mars_facts_df.to_dict() mars_info['mars_facts'] = mars_facts_dic mars_facts_dic hemisphere_url = 
'https://astrogeology.usgs.gov/search/results?q=hemisphere+enhanced&k1=target&v1=Mars' browser.visit(hemisphere_url) html = browser.html soup = BeautifulSoup(html, "html.parser") mars_hemisphere = [] hemisphere_list = ['Cerberus','Schiaparelli','Syrtis','Valles'] for item in hemisphere_list: try: hemispheres = {} url = 'https://astrogeology.usgs.gov/search/results?q=hemisphere+enhanced&k1=target&v1=Mars' browser.visit(url) html = browser.html soup = BeautifulSoup(html,'html.parser') browser.click_link_by_partial_text(item) html = browser.html soup = BeautifulSoup(html,'html.parser') hemispheres['title'] = soup.find('h2',class_="title").text hemispheres['image'] = soup.find('ul').find('a')['href'] mars_hemisphere.append(hemispheres) except Exception as e: print(e) mars_info['mars_hemisphere'] = mars_hemisphere mars_hemisphere pprint(mars_info) ```
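The facts-table step above (read the table, name the columns, index by attribute, convert to a dict) can be exercised without a live page; a minimal sketch using a made-up stand-in table:

```python
import pandas as pd

# Made-up stand-in for the scraped Mars facts table.
mars_facts_df = pd.DataFrame({
    'Attribute': ['Equatorial Diameter:', 'Moons:'],
    'Value': ['6,792 km', '2 (Phobos & Deimos)'],
})
mars_facts_df.set_index('Attribute', inplace=True)

# Same conversion used before storing the facts in mars_info.
mars_facts_dic = mars_facts_df.to_dict()
print(mars_facts_dic['Value']['Moons:'])  # → 2 (Phobos & Deimos)
```

With the default orient, `to_dict()` nests values as `{column: {index: value}}`, which is why the lookup goes through `'Value'` first.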
``` #export from local.test import * from local.basics import * from local.callback.all import * from local.vision.core import * from local.vision.data import * from local.vision.augment import * from local.vision import models #default_exp vision.learner from local.notebook.showdoc import * ``` # Learner for the vision applications > All the functions necessary to build `Learner` suitable for transfer learning in computer vision ## Cut a pretrained model ``` # export def _is_pool_type(l): return re.search(r'Pool[123]d$', l.__class__.__name__) m = nn.Sequential(nn.AdaptiveAvgPool2d(5), nn.Linear(2,3), nn.Conv2d(2,3,1), nn.MaxPool3d(5)) test_eq([bool(_is_pool_type(m_)) for m_ in m.children()], [True,False,False,True]) # export def has_pool_type(m): "Return `True` if `m` is a pooling layer or has one in its children" if _is_pool_type(m): return True for l in m.children(): if has_pool_type(l): return True return False m = nn.Sequential(nn.AdaptiveAvgPool2d(5), nn.Linear(2,3), nn.Conv2d(2,3,1), nn.MaxPool3d(5)) assert has_pool_type(m) test_eq([has_pool_type(m_) for m_ in m.children()], [True,False,False,True]) #export def create_body(arch, pretrained=True, cut=None): "Cut off the body of a typically pretrained `arch` as determined by `cut`" model = arch(pretrained) #cut = ifnone(cut, cnn_config(arch)['cut']) if cut is None: ll = list(enumerate(model.children())) cut = next(i for i,o in reversed(ll) if has_pool_type(o)) if isinstance(cut, int): return nn.Sequential(*list(model.children())[:cut]) elif callable(cut): return cut(model) else: raise NameError("cut must be either an integer or a function") ``` `cut` can either be an integer, in which case we cut the model at the corresponding layer, or a function, in which case this function returns `cut(model)`. It defaults to `cnn_config(arch)['cut']` if `arch` is in `cnn_config`, otherwise to the index of the last child module that contains some pooling (found by scanning the children in reverse).
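The pooling-layer detection above hinges on the class-name regex in `_is_pool_type`; the same check can be exercised on bare class names, with no deep-learning framework required (a stdlib-only sketch, layer names listed by hand):

```python
import re

def is_pool_name(name):
    # Same pattern as _is_pool_type: matches MaxPool1d/2d/3d, AvgPool2d,
    # AdaptiveAvgPool2d, etc., but not Linear or Conv2d.
    return bool(re.search(r'Pool[123]d$', name))

layer_names = ['AdaptiveAvgPool2d', 'Linear', 'Conv2d', 'MaxPool3d']
flags = [is_pool_name(n) for n in layer_names]
print(flags)  # → [True, False, False, True]

# create_body's default cut: the index of the last pooling layer,
# found by scanning from the end (mirrors the reversed() search above).
cut = next(i for i, n in reversed(list(enumerate(layer_names))) if is_pool_name(n))
print(cut)  # → 3
```

The flag list matches the `test_eq` expectation in the notebook cell above for the same four layer types.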
``` tst = lambda p : nn.Sequential(nn.Conv2d(4,5,3), nn.BatchNorm2d(5), nn.AvgPool2d(1), nn.Linear(3,4)) m = create_body(tst) test_eq(len(m), 2) m = create_body(tst, cut=3) test_eq(len(m), 3) m = create_body(tst, cut=noop) test_eq(len(m), 4) ``` ## Head and model ``` #export def create_head(nf, nc, lin_ftrs=None, ps=0.5, concat_pool=True, bn_final=False): "Model head that takes `nf` features, runs through `lin_ftrs`, and outputs `nc` classes." lin_ftrs = [nf, 512, nc] if lin_ftrs is None else [nf] + lin_ftrs + [nc] ps = L(ps) if len(ps) == 1: ps = [ps[0]/2] * (len(lin_ftrs)-2) + ps actns = [nn.ReLU(inplace=True)] * (len(lin_ftrs)-2) + [None] pool = AdaptiveConcatPool2d() if concat_pool else nn.AdaptiveAvgPool2d(1) layers = [pool, Flatten()] for ni,no,p,actn in zip(lin_ftrs[:-1], lin_ftrs[1:], ps, actns): layers += BnDropLin(ni, no, True, p, actn) if bn_final: layers.append(nn.BatchNorm1d(lin_ftrs[-1], momentum=0.01)) return nn.Sequential(*layers) a = create_head(5,5) a tst = create_head(5, 10) tst #export def create_cnn_model(arch, nc, cut, pretrained, lin_ftrs=None, ps=0.5, custom_head=None, bn_final=False, concat_pool=True, init=nn.init.kaiming_normal_): "Create custom convnet architecture using `arch`" body = create_body(arch, pretrained, cut) if custom_head is None: nf = num_features_model(nn.Sequential(*body.children())) * (2 if concat_pool else 1) head = create_head(nf, nc, lin_ftrs, ps=ps, concat_pool=concat_pool, bn_final=bn_final) else: head = custom_head model = nn.Sequential(body, head) if init is not None: apply_init(model[1], init) return model tst = create_cnn_model(models.resnet18, 10, None, True) #export @delegates(create_cnn_model) def cnn_config(**kwargs): "Convenience function to easily create a config for `create_cnn_model`" return kwargs pets = DataBlock(types=(PILImage, Category), get_items=get_image_files, splitter=RandomSplitter(), get_y=RegexLabeller(pat = r'/([^/]+)_\d+.jpg$')) dbunch = pets.databunch(untar_data(URLs.PETS)/"images",
item_tfms=RandomResizedCrop(300, min_scale=0.5), bs=64, batch_tfms=[*aug_transforms(size=224), Normalize(*imagenet_stats)]) get_c(dbunch) dbunch.show_batch(max_n=9) #export def _default_split(m:nn.Module): return L(m[0], m[1:]).map(params) def _resnet_split(m): return L(m[0][:6], m[0][6:], m[1:]).map(params) def _squeezenet_split(m:nn.Module): return L(m[0][0][:5], m[0][0][5:], m[1:]).map(params) def _densenet_split(m:nn.Module): return L(m[0][0][:7],m[0][0][7:], m[1:]).map(params) def _vgg_split(m:nn.Module): return L(m[0][0][:22], m[0][0][22:], m[1:]).map(params) def _alexnet_split(m:nn.Module): return L(m[0][0][:6], m[0][0][6:], m[1:]).map(params) _default_meta = {'cut':None, 'split':_default_split} _resnet_meta = {'cut':-2, 'split':_resnet_split } _squeezenet_meta = {'cut':-1, 'split': _squeezenet_split} _densenet_meta = {'cut':-1, 'split':_densenet_split} _vgg_meta = {'cut':-2, 'split':_vgg_split} _alexnet_meta = {'cut':-2, 'split':_alexnet_split} #export model_meta = { models.xresnet.xresnet18 :{**_resnet_meta}, models.xresnet.xresnet34: {**_resnet_meta}, models.xresnet.xresnet50 :{**_resnet_meta}, models.xresnet.xresnet101:{**_resnet_meta}, models.xresnet.xresnet152:{**_resnet_meta}, models.resnet18 :{**_resnet_meta}, models.resnet34: {**_resnet_meta}, models.resnet50 :{**_resnet_meta}, models.resnet101:{**_resnet_meta}, models.resnet152:{**_resnet_meta}, models.squeezenet1_0:{**_squeezenet_meta}, models.squeezenet1_1:{**_squeezenet_meta}, models.densenet121:{**_densenet_meta}, models.densenet169:{**_densenet_meta}, models.densenet201:{**_densenet_meta}, models.densenet161:{**_densenet_meta}, models.vgg11_bn:{**_vgg_meta}, models.vgg13_bn:{**_vgg_meta}, models.vgg16_bn:{**_vgg_meta}, models.vgg19_bn:{**_vgg_meta}, models.alexnet:{**_alexnet_meta}} ``` ## `Learner` convenience functions ``` #export @delegates(Learner.__init__) def cnn_learner(dbunch, arch, loss_func=None, pretrained=True, cut=None, splitter=None, config=None, **kwargs): "Build a convnet style 
learner" if config is None: config = {} meta = model_meta.get(arch, _default_meta) model = create_cnn_model(arch, get_c(dbunch), ifnone(cut, meta['cut']), pretrained, **config) learn = Learner(dbunch, model, loss_func=loss_func, splitter=ifnone(splitter, meta['split']), **kwargs) if pretrained: learn.freeze() return learn ``` The model is built from `arch` using the number of final activation inferred from `dbunch` by `get_c`. It might be `pretrained` and the architecture is cut and split using the default metadata of the model architecture (this can be customized by passing a `cut` or a `splitter`). To customize the model creation, use `cnn_config` and pass the result to the `config` argument. ``` learn = cnn_learner(dbunch, models.resnet34, loss_func=CrossEntropyLossFlat(), config=cnn_config(ps=0.25)) #export @delegates(models.unet.DynamicUnet.__init__) def unet_config(**kwargs): "Convenience function to easily create a config for `DynamicUnet`" return kwargs #export @delegates(Learner.__init__) def unet_learner(dbunch, arch, loss_func=None, pretrained=True, cut=None, splitter=None, config=None, **kwargs): "Build a unet learner from `dbunch` and `arch`" if config is None: config = {} meta = model_meta.get(arch, _default_meta) body = create_body(arch, pretrained, ifnone(cut, meta['cut'])) try: size = dbunch.train_ds[0][0].size except: size = dbunch.one_batch()[0].shape[-2:] model = models.unet.DynamicUnet(body, get_c(dbunch), size, **config) learn = Learner(dbunch, model, loss_func=loss_func, splitter=ifnone(splitter, meta['split']), **kwargs) if pretrained: learn.freeze() return learn camvid = DataBlock(types=(PILImage, PILMask), get_items=get_image_files, splitter=RandomSplitter(), get_y=lambda o: untar_data(URLs.CAMVID_TINY)/'labels'/f'{o.stem}_P{o.suffix}') dbunch = camvid.databunch(untar_data(URLs.CAMVID_TINY)/"images", batch_tfms=aug_transforms()) dbunch.show_batch(max_n=9, vmin=1, vmax=30) #TODO: Find a way to pass the classes properly dbunch.vocab = 
np.loadtxt(untar_data(URLs.CAMVID_TINY)/'codes.txt', dtype=str) learn = unet_learner(dbunch, models.resnet34, loss_func=CrossEntropyLossFlat(axis=1), config=unet_config()) ``` ## Show functions ``` #export @typedispatch def show_results(x:TensorImage, y, samples, outs, ctxs=None, max_n=10, rows=None, cols=None, figsize=None, **kwargs): if ctxs is None: ctxs = get_grid(min(len(samples), max_n), rows=rows, cols=cols, add_vert=1, figsize=figsize) ctxs = show_results[object](x, y, samples, outs, ctxs=ctxs, max_n=max_n, **kwargs) return ctxs #export @typedispatch def show_results(x:TensorImage, y:TensorCategory, samples, outs, ctxs=None, max_n=10, rows=None, cols=None, figsize=None, **kwargs): if ctxs is None: ctxs = get_grid(min(len(samples), max_n), rows=rows, cols=cols, add_vert=1, figsize=figsize) for i in range(2): ctxs = [b.show(ctx=c, **kwargs) for b,c,_ in zip(samples.itemgot(i),ctxs,range(max_n))] ctxs = [r.show(ctx=c, color='green' if b==r else 'red', **kwargs) for b,r,c,_ in zip(samples.itemgot(1),outs.itemgot(0),ctxs,range(max_n))] return ctxs #export @typedispatch def show_results(x:TensorImage, y:(TensorImageBase, TensorPoint, TensorBBox), samples, outs, ctxs=None, max_n=10, rows=None, cols=None, figsize=None, **kwargs): if ctxs is None: ctxs = get_grid(min(len(samples), max_n), rows=rows, cols=cols, add_vert=1, figsize=figsize, double=True) for i in range(2): ctxs[::2] = [b.show(ctx=c, **kwargs) for b,c,_ in zip(samples.itemgot(i),ctxs[::2],range(max_n))] for x in [samples,outs]: ctxs[1::2] = [b.show(ctx=c, **kwargs) for b,c,_ in zip(x.itemgot(0),ctxs[1::2],range(max_n))] return ctxs #export @typedispatch def plot_top_losses(x: TensorImage, y:TensorCategory, samples, outs, raws, losses, rows=None, cols=None, figsize=None, **kwargs): axs = get_grid(len(samples), rows=rows, cols=cols, add_vert=1, figsize=figsize, title='Prediction/Actual/Loss/Probability') for ax,s,o,r,l in zip(axs, samples, outs, raws, losses): s[0].show(ctx=ax, **kwargs) 
ax.set_title(f'{o[0]}/{s[1]} / {l.item():.2f} / {r.max().item():.2f}') #export @typedispatch def plot_top_losses(x: TensorImage, y:TensorMultiCategory, samples, outs, raws, losses, rows=None, cols=None, figsize=None, **kwargs): axs = get_grid(len(samples), rows=rows, cols=cols, add_vert=1, figsize=figsize) for i,(ax,s) in enumerate(zip(axs, samples)): s[0].show(ctx=ax, title=f'Image {i}', **kwargs) rows = get_empty_df(len(samples)) outs = L(s[1:] + o + (Str(r), Float(l.item())) for s,o,r,l in zip(samples, outs, raws, losses)) for i,l in enumerate(["target", "predicted", "probabilities", "loss"]): rows = [b.show(ctx=r, label=l, **kwargs) for b,r in zip(outs.itemgot(i),rows)] display_df(pd.DataFrame(rows)) ``` ## Export - ``` #hide from local.notebook.export import notebook2script notebook2script(all_fs=True) ```
Transfer Learning Using https://towardsdatascience.com/building-your-own-object-detector-pytorch-vs-tensorflow-and-how-to-even-get-started-1d314691d4ae tutorial ``` !git clone https://github.com/pytorch/vision.git %cd vision !git checkout v0.3.0 import pycocotools import numpy as np import pandas as pd import matplotlib.pyplot as plt import torch import torch.utils.data from PIL import Image import utils as until from torchvision.models.detection.faster_rcnn import FastRCNNPredictor import math import cv2 import torch.nn as nn import torch.nn.functional as F import torch.optim as optim import torchvision from torch.utils.data import Dataset, DataLoader from torchvision import transforms, utils import pickle # from torch.vision import enginetrain from engine import train_one_epoch, evaluate ``` ### Data preprocessing ``` import re cv2.startWindowThread() class ExpressionImageDataset(Dataset): """ An expression-level dataset. """ def __init__(self, pickle_file, transform=None, colab=True): """ Args: pickle_file (string): Path to dataset pickle file. transform (callable, optional): Optional transform to be applied on a sample. """ with open(pickle_file, 'rb') as f: self.df_data = pd.DataFrame(pickle.load(f)) if colab: self.df_data["img_path"] = self.df_data["img_path"].apply(lambda x:"".join(x.split("all_years")[1:])) self.df_data["img_path"] = self.df_data["img_path"].apply(lambda x: "/content/drive/My Drive" + re.sub(r'\\', "/", x))#/10617 Data # print(self.df_data['img_path'].iloc[0]) self.transform = transform def __len__(self): return len(self.df_data) def __getitem__(self, idx): if torch.is_tensor(idx): idx = idx.tolist() row = self.df_data.iloc[idx] traces_data = row['traces_data'] img_path = row['img_path'] tokens = row['tokens'] latex = row['latex'] # CV2 will read the image with white being 255 and black being 0, but since # our token-level training set uses binary arrays to represent images, we # need to binarize our image here as well.
image_raw = cv2.imread(img_path, cv2.IMREAD_GRAYSCALE) image_binarized = cv2.threshold(image_raw, 127, 255, cv2.THRESH_BINARY)[1] image_bitmap = image_binarized / 255.0 #change to be 1's and 0's sample = { 'image': image_binarized, 'image_bitmap': image_bitmap, 'traces_data': traces_data, 'tokens': tokens, 'latex': latex } if self.transform: sample = self.transform(sample) return sample train_exp_path = "/content/drive/My Drive/train/Copy of train.pickle" #10617 Data test_exp_path = '/content/drive/My Drive/test/Copy of test.pickle' #10617 Data/ # print('train') train_exp_set = ExpressionImageDataset(train_exp_path) # print('test') test_exp_set = ExpressionImageDataset(test_exp_path) sample = train_exp_set.__getitem__(0) test_traces_data = train_exp_set[2]['traces_data'] def get_traces_data_stats(traces_data): all_coords = [] for pattern in traces_data: for trace in pattern['trace_group']: all_coords.extend(trace) all_coords = np.array(all_coords) x_min, y_min = np.min(all_coords, axis=0) width, height = np.max(all_coords, axis=0) - [x_min, y_min] + 1 return x_min, y_min, width, height def get_trace_group_bounding_box(trace_group): all_coords = [] for t in trace_group: all_coords.extend(t) all_coords = np.array(all_coords) x_min, y_min = np.min(all_coords, axis=0) width, height = np.max(all_coords, axis=0) - [x_min, y_min] + 1 return x_min, y_min, width, height def draw_traces_data(traces_data): im_x_min, im_y_min, width, height = get_traces_data_stats(traces_data) # Scale the image down. max_dim = 1000 # Maximum dimension pre-pad. sf = 1000 / max(height, width) scaled_height = int(height * sf) scaled_width = int(width * sf) image = np.ones((scaled_height, scaled_width)) # Draw the traces on the unscaled image. 
for pattern in traces_data: for trace in pattern['trace_group']: trace = np.array(trace) trace -= np.array([im_x_min, im_y_min]) trace = (trace.astype(np.float64) * sf).astype(int) for coord_idx in range(1, len(trace)): cv2.line(image, tuple(trace[coord_idx - 1]), tuple(trace[coord_idx]), color=(0), thickness=5) # Pad the scaled image. pad_factor = 0.05 pad_width = ((int(pad_factor * scaled_height), int(pad_factor * scaled_height)), (int(pad_factor * scaled_width), int(pad_factor * scaled_width))) image = np.pad(image, pad_width=pad_width, mode='constant', constant_values=1) # Binarize. image = (image > 0).astype(int) # Open CV wants images to be between 0 and 255. image *= 255 image = image.astype(np.uint8) boxes = [] # Get bounding boxes. for pattern in traces_data: trace_group = pattern['trace_group'] rect_x_min, rect_y_min, rect_width, rect_height = get_trace_group_bounding_box(trace_group) rect_x_min = (rect_x_min - im_x_min) * sf + pad_width[1][0] rect_y_min = (rect_y_min - im_y_min) * sf + pad_width[0][0] rect_width *= sf rect_height *= sf # Convert bounding box coords to integers. rect_x_min = int(rect_x_min) rect_y_min = int(rect_y_min) rect_width = int(rect_width) rect_height = int(rect_height) boxes.append((rect_x_min, rect_y_min, rect_x_min + rect_width, rect_y_min + rect_height)) return image, boxes image, boxes = draw_traces_data(test_traces_data) print(image.shape) print(boxes) true_image = np.array(image, copy=True) for box in boxes: xmin, ymin, xmax, ymax = box # boxes hold corner coordinates (xmin, ymin, xmax, ymax) true_image = cv2.rectangle(true_image, (int(xmin), int(ymin)), (int(xmax), int(ymax)), (0), 5) print('Image with true boxes:') plt.imshow(true_image, cmap='gray') plt.show() ``` ### Making a mini Object Recognition Dataset: Make a pickle for object detection which contains the numpy images (normalized) as well as the ground-truth boxes.
In this case, a smaller dataframe was made just to test the model and see if the flow works (short_len = 700 examples). The other notebook is run over the entire train/test set to get the full dataset pickle. ``` %%time box_list = [] numpy_list = [] short_len = 700 #not entire dataset for i in range(len(train_exp_set.df_data[:short_len])): #get the specific row: curr_row = train_exp_set[i] test_traces_data = curr_row['traces_data'] #get trace data for row: image, boxes = draw_traces_data(test_traces_data) #double check right row: if str(test_traces_data[0].values()) == str(train_exp_set.df_data.iloc[i]["traces_data"][0].values()): #check to make sure traces are the same: #append to lists in order to later append to train df box_list.append(boxes) numpy_list.append(image) else: #any errors? print("error at line {}".format(i)) print(len(train_exp_set.df_data), len(box_list)) #shapes ? short_df = train_exp_set.df_data[:short_len].copy() #create a df (short since not entire dataset) ### OHE: IS THIS EVEN NEEDED? ### from sklearn.preprocessing import OneHotEncoder as OHE #want OHE labels tokens = train_exp_set.df_data["tokens"].sum() ohe_categories = pd.Series(tokens).unique() ohe_categories, len(ohe_categories) handle = "ignore" #or error or ignore... maybe ignore is safer ohe = OHE(categories = [np.array(sorted(ohe_categories))], handle_unknown=handle) ohe_input = train_exp_set.df_data["tokens"].apply(lambda x: ohe.fit_transform(np.array(x).reshape(-1,1))) #add images, boxes and OHE labels to df short_df["true_location"] = box_list short_df["numpy_image"] = numpy_list short_df["labels"] = ohe_input[:short_len] test_split = round(short_len * .8) #split at around 80% #since only using training data, split into two dataframes/pickles for model work.
short_df[["true_location", "numpy_image", "img_path","tokens", "labels"]][:test_split].to_pickle("train_short_2000_test_df.pkl") short_df[["true_location", "numpy_image", "img_path","tokens", "labels"]][test_split:].to_pickle("test_short_2000_test_df.pkl") ``` ### Pytorch object detection tutorial ``` class ImageDataset(torch.utils.data.Dataset): def __init__(self, path, data): self.path = path self.data = data pickle_file = os.path.join(self.path, self.data) with open(pickle_file, 'rb') as f: self.df_data = pd.DataFrame(pickle.load(f)) def __getitem__(self, index): row = self.df_data.iloc[index] images = row["numpy_image"]/255 #normalize images height = images.shape[0] width = images.shape[1] images = torch.tensor(images).view(1, height, width).float() #reshape to (C, H, W) with a single channel, and convert to float box = np.array(row["true_location"]) # area = (box[:,2] - box[:,0]) * (box[:,3] - box[:,1]) #add buffer of boxes (since some boxes are width or height 0) box[:,0] -=1 box[:,1] -=1 box[:,2] +=1 box[:,3] +=1 #how many objects?
num_classes = len(box) img_info = {} img_info["num_classes"] = torch.tensor(num_classes) #more like how many boxes are present img_info["boxes"] = torch.tensor(box, dtype=torch.int32) #make boxes integers img_info["image_index"] = torch.tensor(index) #image number # img_info["box_area"] = #do we want this # img_info["tokens"] = torch.tensor(row["tokens"]) img_info["labels"] = torch.tensor(row["labels"].toarray().sum(axis=0), dtype=torch.int64) #make labels ints return images, img_info def __len__(self): return len(self.df_data) import os obj_train_pickle = "train_short_2000_test_df.pkl" obj_test_pickle = "test_short_2000_test_df.pkl" train_img_data = ImageDataset("/content", "train_short_2000_test_df.pkl") test_img_data = ImageDataset("/content", "test_short_2000_test_df.pkl") ``` ### Modeling using pretrained resnet 50 ``` #pretrained model mod = torchvision.models.detection.fasterrcnn_resnet50_fpn(pretrained=True) #want pretrained model = mod.float() #keep float or double consistent #changing the model: num_classes = 101 #is this the right thing to divide by? #number of input features: input_features = model.roi_heads.box_predictor.cls_score.in_features #change last layer model.roi_heads.box_predictor = FastRCNNPredictor(input_features, num_classes) #output number of classes based on number of tokens #put data into dataloaders: batch_size = 4 shuffle = False num_workers = 1 #preparing data train_loader = torch.utils.data.DataLoader( train_img_data, batch_size = batch_size, shuffle = shuffle, num_workers = num_workers, collate_fn=until.collate_fn #need this to have images of different size ) test_loader = torch.utils.data.DataLoader( test_img_data, batch_size = batch_size, shuffle = shuffle, num_workers = num_workers, collate_fn=until.collate_fn ) print(len(train_loader), len(test_loader)) #also collect garbage and throw out! 
import gc gc.collect() ``` ### Training the model ``` #see if available device (GPU) torch.cuda.is_available() device = torch.device("cuda") #move to device model.to(device) #optimizer: optimizer = torch.optim.Adam(model.parameters(), lr = 0.001) #they added in a lr scheduler, should we do that? #train for a certain number of epochs (verbose) epochs = 5 model = model.float() for epoch in range(epochs): train_one_epoch(model, optimizer, train_loader, device=device, epoch=epoch, print_freq=100) ``` ### Predictions ``` #set model to eval mode: model.eval() #sample to get prediction of index = 3 test_img = train_img_data[index][0].to("cuda") test_img #get model predictions with torch.no_grad(): prediction = model([test_img]) #plot prediction over image: test_img = train_img_data[index][0].to("cpu") true_image = test_img.numpy().squeeze(0) for testbox in prediction[0]["boxes"]: #maybe image off so multiply by scaling factor?? sf = 1 #normal xmin = testbox[0] * sf ymin = testbox[1] * sf xmax = testbox[2] * sf ymax = testbox[3] * sf true_image = cv2.rectangle(true_image, (int(xmin), int(ymin)), (int(xmax), int(ymax)), (0), 2) print('Image with predicted boxes:') plt.figure() plt.imshow(true_image, cmap='gray') plt.show() ``` ### TODO: * I think the model converts to pil image which may be why bounding boxes are so off... Because boxes are based on numpy not pil image... * drop OHE for labels... How do we also denote there's >1 of a certain symbol?? * train for longer on a validation set (didn't use test data at all).
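One of the TODO items above asks how to represent multiple occurrences of the same symbol once one-hot encoding is dropped. Below is a minimal sketch of the count-vector alternative; the helper name and the explicit `vocabulary` argument are hypothetical, not part of this notebook's pipeline.

```python
from collections import Counter

def tokens_to_count_vector(tokens, vocabulary):
    """Map a token list to a count vector over a fixed vocabulary.

    Unlike a summed one-hot encoding clipped to {0, 1}, counts keep
    track of how many times each symbol appears in the expression.
    """
    counts = Counter(tokens)
    return [counts.get(tok, 0) for tok in vocabulary]
```

For example, `tokens_to_count_vector(["x", "+", "x"], ["+", "x", "y"])` returns `[1, 2, 0]`, so the duplicate `x` is preserved.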
### Scratch: ``` #testing specific inputs with the model targets = [ train_img_data[2][1], train_img_data[1][1] ] images = [ train_img_data[1][0].view(1,70, 1097).float().to(device), new_img.view(1,715, 1100).float().to(device)] images = list(image.to(device) for image in images) targets = [{k: v.to(device) for k, v in t.items()} for t in targets] m = model.float() m(images, targets) #check to see if pil image will keep the bounding boxes the same... I don't think so. from PIL import ImageDraw # [597.8450317382812, 47.35714340209961, 609.9963989257812, 49.35714340209961] xmin = 596.8450317382812 ymin = 46.35714340209961 xmax = 610.9963989257812 ymax = 48.35714340209961 tester = Image.fromarray(image) draw = ImageDraw.Draw(tester) draw.rectangle([(xmin, ymin), (xmax, ymax)], outline ='red') tester import sys # needed by the loss check inside the training loop def train_one_epoch(model, optimizer, data_loader, device, epoch, print_freq): """Training loop copied from https://github.com/pytorch/vision/blob/master/references/detection/engine.py """ model.train() metric_logger = until.MetricLogger(delimiter=" ") metric_logger.add_meter('lr', until.SmoothedValue(window_size=1, fmt='{value:.6f}')) header = 'Epoch: [{}]'.format(epoch) lr_scheduler = None if epoch == 0: warmup_factor = 1.
/ 1000 warmup_iters = min(1000, len(data_loader) - 1) lr_scheduler = until.warmup_lr_scheduler(optimizer, warmup_iters, warmup_factor) for images, targets in metric_logger.log_every(data_loader, print_freq, header): images = list(image.to(device) for image in images) targets = [{k: v.to(device) for k, v in t.items()} for t in targets] loss_dict = model(images, targets) losses = sum(loss for loss in loss_dict.values()) # reduce losses over all GPUs for logging purposes loss_dict_reduced = until.reduce_dict(loss_dict) losses_reduced = sum(loss for loss in loss_dict_reduced.values()) loss_value = losses_reduced.item() if not math.isfinite(loss_value): print("Loss is {}, stopping training".format(loss_value)) print(loss_dict_reduced) sys.exit(1) optimizer.zero_grad() losses.backward() optimizer.step() if lr_scheduler is not None: lr_scheduler.step() metric_logger.update(loss=losses_reduced, **loss_dict_reduced) metric_logger.update(lr=optimizer.param_groups[0]["lr"]) return metric_logger ```
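The bounding boxes in this notebook switch between corner format `(xmin, ymin, xmax, ymax)` — what `draw_traces_data` returns and what Faster R-CNN expects — and width/height unpacking. A small pair of helpers (the names are illustrative, not part of the notebook's code) makes the conversion explicit:

```python
def corners_to_xywh(box):
    """(xmin, ymin, xmax, ymax) -> (xmin, ymin, width, height)."""
    xmin, ymin, xmax, ymax = box
    return (xmin, ymin, xmax - xmin, ymax - ymin)

def xywh_to_corners(box):
    """(xmin, ymin, width, height) -> (xmin, ymin, xmax, ymax)."""
    xmin, ymin, w, h = box
    return (xmin, ymin, xmin + w, ymin + h)
```

Converting once at the boundary and sticking to corner format everywhere else avoids the mixed-unpacking bugs flagged in the TODO list.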
## Coding Exercise #0703 ### 1. Softmax regression (multi-class logistic regression): ``` # import tensorflow as tf import tensorflow.compat.v1 as tf import numpy as np import pandas as pd from sklearn.preprocessing import scale from sklearn.model_selection import train_test_split from sklearn.datasets import load_iris tf.disable_v2_behavior() ``` #### 1.1. Read in the data: ``` # We will use Iris data. # 4 explanatory variables. # 3 classes for the response variable. data_raw = load_iris() data_raw.keys() # Print out the description. # print(data_raw['DESCR']) X = data_raw['data'] y = data_raw['target'] # Check the shape. print(X.shape) print(y.shape) ``` #### 1.2. Data pre-processing: ``` # One-Hot-Encoding. y = np.array(pd.get_dummies(y, drop_first=False)) # drop_first = False for one-hot-encoding. y.shape # Scaling X = scale(X) X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=3) n_train_size = y_train.shape[0] ``` #### 1.3. Do the necessary definitions: ``` batch_size = 100 # Size of each (mini) batch. n_epochs = 30000 # Number of epochs. learn_rate = 0.05 W = tf.Variable(tf.ones([4,3])) # Initial value of the weights = 1. b = tf.Variable(tf.ones([3])) # Initial value of the bias = 1. X_ph = tf.placeholder(tf.float32, shape=(None, 4)) # Number of rows not specified. Number of columns = number of X variables = 4. y_ph = tf.placeholder(tf.float32, shape=(None,3)) # Number of rows not specified. Number of columns = number of classes of the y variable = 3. # Model. # Not strictly necessary to apply the softmax activation. => in the end we will apply argmax() function to predict the label! # y_model = tf.nn.softmax(tf.matmul(X_ph, W) + b) # The following will work just fine. y_model = tf.matmul(X_ph, W) + b loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits_v2(labels=y_ph, logits=y_model)) # Loss = cross entropy.
optimizer = tf.train.GradientDescentOptimizer(learning_rate = learn_rate) train = optimizer.minimize(loss) # Define training. init = tf.global_variables_initializer() # Define Variable initialization. ``` #### 1.4. Training and Testing: ``` with tf.Session() as sess: # Variables initialization. sess.run(init) # Training. for i in range(n_epochs): idx_rnd = np.random.choice(range(n_train_size),batch_size,replace=False) # Random sampling w/o replacement for the batch indices. batch_X, batch_y = [X_train[idx_rnd,:], y_train[idx_rnd,:]] # Get a batch. my_feed = {X_ph:batch_X, y_ph:batch_y} # Prepare the feed data as a dictionary. sess.run(train, feed_dict = my_feed) if (i + 1) % 2000 == 0: print("Step : {}".format(i + 1)) # Print the step number at every multiple of 2000. # Testing. correct_predictions = tf.equal(tf.argmax(y_ph, axis=1), tf.argmax(y_model, axis=1)) # In argmax(), axis=1 means horizontal direction. accuracy = tf.reduce_mean(tf.cast(correct_predictions, tf.float32)) # Recast the Boolean as float32 first. Then calculate the mean. accuracy_value = sess.run(accuracy, feed_dict={X_ph:X_test, y_ph:y_test}) # Actually run the test with the test data. ``` Print the testing result. ``` print("Accuracy = {:5.3f}".format(accuracy_value)) ```
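The TF1 graph above combines `softmax_cross_entropy_with_logits_v2` with an argmax-based accuracy check. The same quantities can be verified in plain numpy — a sketch for intuition, not part of the exercise code:

```python
import numpy as np

def softmax_cross_entropy(logits, onehot_labels):
    """Mean cross entropy over rows, mirroring
    tf.nn.softmax_cross_entropy_with_logits_v2 + tf.reduce_mean."""
    shifted = logits - logits.max(axis=1, keepdims=True)  # shift for numerical stability
    log_probs = shifted - np.log(np.exp(shifted).sum(axis=1, keepdims=True))
    return float(-(onehot_labels * log_probs).sum(axis=1).mean())

def accuracy(logits, onehot_labels):
    """Fraction of rows where the predicted class matches the true class."""
    return float((logits.argmax(axis=1) == onehot_labels.argmax(axis=1)).mean())
```

With all-zero logits the predicted distribution is uniform, so the loss is exactly `log(3)` for three classes; confident logits on the correct class drive the loss toward zero and the accuracy to 1.0.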
``` from fastai2.basics import * from fastai2.callback.all import * from fastai2.vision.all import * #all_slow #hide from nbdev.showdoc import * ``` # Tutorial - Migrating from pure PyTorch > Incrementally adding fastai goodness to your PyTorch models ### Original PyTorch code Here's the MNIST training code from the official PyTorch examples (slightly reformatted for space, updated from AdaDelta to AdamW, and converted from a script to a notebook). There's a lot of code! ``` from torchvision import datasets, transforms from torch.optim.lr_scheduler import StepLR class Net(nn.Sequential): def __init__(self): super().__init__( nn.Conv2d(1, 32, 3, 1), nn.ReLU(), nn.Conv2d(32, 64, 3, 1), nn.MaxPool2d(2), nn.Dropout2d(0.25), Flatten(), nn.Linear(9216, 128), nn.ReLU(), nn.Dropout2d(0.5), nn.Linear(128, 10), nn.LogSoftmax(dim=1) ) def train(model, device, train_loader, optimizer, epoch): model.train() for batch_idx, (data, target) in enumerate(train_loader): data, target = data.to(device), target.to(device) optimizer.zero_grad() output = model(data) loss = F.nll_loss(output, target) loss.backward() optimizer.step() if batch_idx % 100 == 0: print('Train Epoch: {} [{}/{} ({:.0f}%)]\tLoss: {:.6f}'.format( epoch, batch_idx*len(data), len(train_loader.dataset), 100. * batch_idx/len(train_loader), loss.item())) def test(model, device, test_loader): model.eval() test_loss,correct = 0,0 with torch.no_grad(): for data, target in test_loader: data, target = data.to(device), target.to(device) output = model(data) test_loss += F.nll_loss(output, target, reduction='sum').item() # sum up batch loss pred = output.argmax(dim=1, keepdim=True) # get the index of the max log-probability correct += pred.eq(target.view_as(pred)).sum().item() print('\nTest set: Average loss: {:.4f}, Accuracy: {}/{} ({:.0f}%)\n'.format( test_loss/len(test_loader.dataset), correct, len(test_loader.dataset), 100. 
* correct/len(test_loader.dataset))) batch_size,test_batch_size = 256,512 epochs,lr,gamma = 1,1e-2,0.7 use_cuda = torch.cuda.is_available() device = torch.device("cuda" if use_cuda else "cpu") kwargs = {'num_workers': 1, 'pin_memory': True} if use_cuda else {} train_loader = DataLoader( datasets.MNIST('../data', train=True, download=True, transform=transforms.Compose([ transforms.ToTensor(), transforms.Normalize((0.1307,), (0.3081,)) ])), batch_size=batch_size, shuffle=True, **kwargs) test_loader = DataLoader( datasets.MNIST('../data', train=False, transform=transforms.Compose([ transforms.ToTensor(), transforms.Normalize((0.1307,), (0.3081,)) ])), batch_size=test_batch_size, shuffle=True, **kwargs) model = Net().to(device) optimizer = torch.optim.AdamW(model.parameters(), lr=lr) ``` ## Use the fastai training loop The most important step is to replace the custom training loop with fastai's. That means you can get rid of `train()`, `test()`, and the epoch loop above, and replace it all with just this: ``` data = DataLoaders(train_loader, test_loader).cuda() learn = Learner(data, Net(), loss_func=F.nll_loss, opt_func=Adam, metrics=accuracy) ``` fastai supports many schedulers. We recommend using 1cycle: ``` learn.fit_one_cycle(epochs, lr) ```
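The `Net` above ends with `nn.LogSoftmax(dim=1)` and is trained with `F.nll_loss`; together the two compute cross entropy. A numpy sketch of that pairing, for intuition only (not PyTorch or fastai code):

```python
import numpy as np

def log_softmax(x):
    """Row-wise log-softmax with the usual max-shift for stability."""
    shifted = x - x.max(axis=1, keepdims=True)
    return shifted - np.log(np.exp(shifted).sum(axis=1, keepdims=True))

def nll_loss(log_probs, targets):
    """Mean negative log-probability of the true class, matching
    F.nll_loss's default 'mean' reduction on log-probability inputs."""
    return float(-log_probs[np.arange(len(targets)), targets].mean())
```

For a single two-class row of equal logits, each class gets probability 1/2, so the loss is `log(2)` regardless of the target.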
# LeNet5 Analog Training with Tiki Taka Optimizer Example Training the LeNet5 neural network with the Tiki-Taka analog optimizer on the MNIST dataset, simulated on the analog resistive random-access memory with soft bounds (ReRam) device. <a href="https://colab.research.google.com/github/IBM-AI-Hardware-Center/aihwkit-notebooks/blob/main/examples/analog_training_LeNet5_TT.ipynb" target="_parent"> <img src="https://colab.research.google.com/assets/colab-badge.svg"/> </a> # IBM Analog Hardware Acceleration Kit IBM Analog Hardware Acceleration Kit (AIHWKIT) is an open source Python toolkit for exploring and using the capabilities of in-memory computing devices in the context of artificial intelligence. The pytorch integration consists of a series of primitives and features that allow using the toolkit within PyTorch. The github repository can be found at: https://github.com/IBM/aihwkit There are two possible scenarios for using Analog AI: one where the analog accelerator targets training of DNNs, and one where it aims at accelerating the inference of DNNs. Employing an analog accelerator in the training scenario requires innovation in the algorithm used during backpropagation (BP), which we will explore in this notebook. Employing an analog accelerator in the inference scenario allows the use of a digital accelerator for the training part, with the weights then transferred to the analog hardware for inference; we explore this in the hardware-aware training notebook. ## Training with Analog AI Hardware architectures based on resistive cross-point arrays can provide significant improvements in performance, both in speed and in power. These architectures use existing techniques such as stochastic gradient descent (SGD) and the backpropagation (BP) algorithm to train the neural network.
However, the training accuracy is affected by non-idealities of the devices used in the cross-point array, making innovation necessary at the algorithm level as well. IBM is developing new training algorithms which can alleviate the non-idealities of these devices while achieving high network accuracy. In this notebook we will explore the Tiki-Taka algorithm, which eliminates the stringent symmetry requirement for the increase and decrease of device conductance. SGD and Tiki-Taka both use error backpropagation. Still, they process the gradient information very differently and hence are fundamentally very different algorithms. Tiki-Taka replaces each weight matrix W of SGD with two matrices, referred to as matrix A and C, and creates a coupled dynamical system by exchanging information between the two. We showed that in the Tiki-Taka dynamics, the non-symmetric behavior is a valuable and needed property of the device; therefore, it is ideal for many non-symmetric device technologies. <center><img src="imgs/tt.png" style="width:50%; height:50%"/></center> More details on Tiki-Taka can be found at: https://www.frontiersin.org/articles/10.3389/fnins.2020.00103/full https://www.frontiersin.org/articles/10.3389/frai.2021.699148/full In this notebook we will use the AIHWKIT to train a LeNet5-inspired analog network, using the Tiki-Taka algorithm. The network will be trained using the MNIST dataset, a collection of images representing the digits 0 to 9. The first thing to do is to install the AIHWKIT and dependencies in your environment.
The preferred way to install this package is by using the Python package index (please uncomment this line to install in your environment if not previously installed): ``` # To install the cpu-only enabled kit, uncomment the line below # !pip install aihwkit # To install the gpu enabled wheel, use the commands below !wget https://aihwkit-gpu-demo.s3.us-east.cloud-object-storage.appdomain.cloud/aihwkit-0.4.5-cp37-cp37m-manylinux2014_x86_64.whl !pip install aihwkit-0.4.5-cp37-cp37m-manylinux2014_x86_64.whl ``` If the library was installed correctly, you can use the following snippet for creating an analog layer and predicting the output: ``` from torch import Tensor from aihwkit.nn import AnalogLinear model = AnalogLinear(2, 2) model(Tensor([[0.1, 0.2], [0.3, 0.4]])) ``` Now that the package is installed and running, we can start working on creating the LeNet5 network. AIHWKIT offers different analog layers that can be used to build a network, including AnalogLinear and AnalogConv2d, which will be the main layers used to build the present network. In addition to the standard inputs expected by the PyTorch layers (in_channels, out_channels, etc.) the analog layers also expect an rpu_config input which defines various settings of the RPU tile. Through the rpu_config parameter the user can specify many of the hardware specs such as: the device used in the cross-point array, the bits used by the ADC/DAC converters, noise values, and many others. Additional details on the RPU configuration can be found at https://aihwkit.readthedocs.io/en/latest/using_simulator.html#rpu-configurations For this particular case we will use two devices per cross-point, which will effectively allow us to enable the weight transfer needed to implement the Tiki-Taka algorithm.
``` def create_rpu_config(): from aihwkit.simulator.presets import TikiTakaReRamSBPreset rpu_config = TikiTakaReRamSBPreset() return rpu_config ``` We can now use this rpu_config as input of the network model: ``` from torch.nn import Tanh, MaxPool2d, LogSoftmax, Flatten from aihwkit.nn import AnalogConv2d, AnalogLinear, AnalogSequential def create_analog_network(rpu_config): channel = [16, 32, 512, 128] model = AnalogSequential( AnalogConv2d(in_channels=1, out_channels=channel[0], kernel_size=5, stride=1, rpu_config=rpu_config), Tanh(), MaxPool2d(kernel_size=2), AnalogConv2d(in_channels=channel[0], out_channels=channel[1], kernel_size=5, stride=1, rpu_config=rpu_config), Tanh(), MaxPool2d(kernel_size=2), Tanh(), Flatten(), AnalogLinear(in_features=channel[2], out_features=channel[3], rpu_config=rpu_config), Tanh(), AnalogLinear(in_features=channel[3], out_features=10, rpu_config=rpu_config), LogSoftmax(dim=1) ) return model ``` We will use the cross entropy to calculate the loss and the Stochastic Gradient Descent (SGD) as optimizer: ``` from torch.nn import CrossEntropyLoss criterion = CrossEntropyLoss() from aihwkit.optim import AnalogSGD def create_analog_optimizer(model): """Create the analog-aware optimizer. Args: model (nn.Module): model to be trained Returns: Optimizer: created analog optimizer """ optimizer = AnalogSGD(model.parameters(), lr=0.01) # we will use a learning rate of 0.01 as in the paper optimizer.regroup_param_groups(model) return optimizer ``` We can now write the train function which will optimize the network over the MNIST train dataset. The train_step function will take as input the images to train on, the model to train and the criterion and optimizer to train with: ``` from torch import device, cuda DEVICE = device('cuda' if cuda.is_available() else 'cpu') print('Running the simulation on: ', DEVICE) def train_step(train_data, model, criterion, optimizer): """Train network. 
Args: train_data (DataLoader): training set to iterate over model (nn.Module): model to train criterion (nn.CrossEntropyLoss): criterion to compute loss optimizer (Optimizer): analog model optimizer Returns: train_dataset_loss: epoch loss on the train dataset """ total_loss = 0 model.train() for images, labels in train_data: images = images.to(DEVICE) labels = labels.to(DEVICE) optimizer.zero_grad() # Add training Tensor to the model (input). output = model(images) loss = criterion(output, labels) # Run training (backward propagation). loss.backward() # Optimize weights. optimizer.step() total_loss += loss.item() * images.size(0) train_dataset_loss = total_loss / len(train_data.dataset) return train_dataset_loss ``` Since training can be quite time consuming, it is nice to see the evolution of the training process by testing the model's capabilities on a set of images that it has not seen before (the test dataset). So we write a test_step function: ``` def test_step(validation_data, model, criterion): """Test trained network Args: validation_data (DataLoader): validation set to perform the evaluation model (nn.Module): trained model to be evaluated criterion (nn.CrossEntropyLoss): criterion to compute loss Returns: test_dataset_loss: epoch loss on the test dataset test_dataset_error: error of the test dataset test_dataset_accuracy: accuracy of the test dataset """ total_loss = 0 predicted_ok = 0 total_images = 0 model.eval() for images, labels in validation_data: images = images.to(DEVICE) labels = labels.to(DEVICE) pred = model(images) loss = criterion(pred, labels) total_loss += loss.item() * images.size(0) _, predicted = torch.max(pred.data, 1) total_images += labels.size(0) predicted_ok += (predicted == labels).sum().item() test_dataset_accuracy = predicted_ok/total_images*100 test_dataset_error = (1-predicted_ok/total_images)*100 test_dataset_loss = total_loss / len(validation_data.dataset) return test_dataset_loss, test_dataset_error,
test_dataset_accuracy ``` To reach satisfactory accuracy levels, the train_step will have to be repeated multiple times, so we will implement a loop over a certain number of epochs: ``` def training_loop(model, criterion, optimizer, train_data, validation_data, epochs=15, print_every=1): """Training loop. Args: model (nn.Module): model to train and evaluate criterion (nn.CrossEntropyLoss): criterion to compute loss optimizer (Optimizer): analog model optimizer train_data (DataLoader): training set to iterate over validation_data (DataLoader): validation set to perform the evaluation epochs (int): number of epochs to train for print_every (int): how often (in epochs) to print training progress """ train_losses = [] valid_losses = [] test_error = [] # Train model for epoch in range(0, epochs): # Train_step train_loss = train_step(train_data, model, criterion, optimizer) train_losses.append(train_loss) if epoch % print_every == (print_every - 1): # Validate_step with torch.no_grad(): valid_loss, error, accuracy = test_step(validation_data, model, criterion) valid_losses.append(valid_loss) test_error.append(error) print(f'Epoch: {epoch}\t' f'Train loss: {train_loss:.4f}\t' f'Valid loss: {valid_loss:.4f}\t' f'Test error: {error:.2f}%\t' f'Test accuracy: {accuracy:.2f}%\t') ``` We will now download the MNIST dataset and prepare the images for the training and test: ``` import os from torchvision import datasets, transforms PATH_DATASET = os.path.join('data', 'DATASET') os.makedirs(PATH_DATASET, exist_ok=True) def load_images(): """Load images for train from torchvision datasets.""" transform = transforms.Compose([transforms.ToTensor()]) train_set = datasets.MNIST(PATH_DATASET, download=True, train=True, transform=transform) test_set = datasets.MNIST(PATH_DATASET, download=True, train=False, transform=transform) train_data = torch.utils.data.DataLoader(train_set, batch_size=8, shuffle=True) test_data = torch.utils.data.DataLoader(test_set,
batch_size=8, shuffle=False) return train_data, test_data ``` Put together all the code above to train ``` import torch torch.manual_seed(1) #load the dataset train_data, test_data = load_images() #create the rpu_config rpu_config = create_rpu_config() #create the model model = create_analog_network(rpu_config).to(DEVICE) #define the analog optimizer optimizer = create_analog_optimizer(model) training_loop(model, criterion, optimizer, train_data, test_data) ```
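The coupled A/C dynamics described earlier can be caricatured in a few lines of numpy. This is a toy schematic under strong simplifying assumptions (no device noise, asymmetry, or bounded conductance), not aihwkit's actual implementation; the function name and the transfer rate are illustrative only.

```python
import numpy as np

def tiki_taka_step(A, C, grad, lr=0.01, transfer=0.1):
    """One schematic update of the coupled (A, C) system.

    The gradient is applied to the auxiliary matrix A, and a fraction
    of A is then transferred into C, which plays the role of the
    effective weight matrix W in plain SGD.
    """
    A = A - lr * grad          # fast matrix accumulates gradient information
    C = C + transfer * A      # slow transfer of A's content into C
    return A, C
```

In the real algorithm the transfer happens on analog hardware and exploits the devices' non-symmetric switching; here it is reduced to a single scalar rate to show only the information flow between the two matrices.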
# TensorFlow Tutorial Welcome to this week's programming assignment. Until now, you've always used numpy to build neural networks. Now we will step you through a deep learning framework that will allow you to build neural networks more easily. Machine learning frameworks like TensorFlow, PaddlePaddle, Torch, Caffe, Keras, and many others can speed up your machine learning development significantly. All of these frameworks also have a lot of documentation, which you should feel free to read. In this assignment, you will learn to do the following in TensorFlow: - Initialize variables - Start your own session - Train algorithms - Implement a Neural Network Programming frameworks can not only shorten your coding time, but sometimes also perform optimizations that speed up your code. ## 1 - Exploring the Tensorflow Library To start, you will import the library: ``` import math import numpy as np import h5py import matplotlib.pyplot as plt import tensorflow as tf from tensorflow.python.framework import ops from tf_utils import load_dataset, random_mini_batches, convert_to_one_hot, predict %matplotlib inline np.random.seed(1) ``` Now that you have imported the library, we will walk you through its different applications. You will start with an example, where we compute for you the loss of one training example. $$loss = \mathcal{L}(\hat{y}, y) = (\hat y^{(i)} - y^{(i)})^2 \tag{1}$$ ``` y_hat = tf.constant(36, name='y_hat') # Define y_hat constant. Set to 36. y = tf.constant(39, name='y') # Define y. Set to 39 loss = tf.Variable((y - y_hat)**2, name='loss') # Create a variable for the loss init = tf.global_variables_initializer() # When init is run later (session.run(init)), # the loss variable will be initialized and ready to be computed with tf.Session() as session: # Create a session and print the output session.run(init) # Initializes the variables print(session.run(loss)) # Prints the loss ``` Writing and running programs in TensorFlow has the following steps: 1.
Create Tensors (variables) that are not yet executed/evaluated. 2. Write operations between those Tensors. 3. Initialize your Tensors. 4. Create a Session. 5. Run the Session. This will run the operations you've written above. Therefore, when we created a variable for the loss, we simply defined the loss as a function of other quantities, but did not evaluate its value. To evaluate it, we had to run `init=tf.global_variables_initializer()`. That initialized the loss variable, and in the last line we were finally able to evaluate the value of `loss` and print its value. Now let us look at an easy example. Run the cell below: ``` a = tf.constant(2) b = tf.constant(10) c = tf.multiply(a,b) print(c) ``` As expected, you will not see 20! Instead you get back a tensor object of type "int32" with no value: all you did was add the multiplication to the 'computation graph', but you have not run this computation yet. In order to actually multiply the two numbers, you will have to create a session and run it. ``` sess = tf.Session() print(sess.run(c)) ``` Great! To summarize, **remember to initialize your variables, create a session and run the operations inside the session**. Next, you'll also have to know about placeholders. A placeholder is an object whose value you can specify only later. To specify values for a placeholder, you can pass in values by using a "feed dictionary" (`feed_dict` variable). Below, we created a placeholder for x. This allows us to pass in a number later when we run the session. ``` # Change the value of x in the feed_dict x = tf.placeholder(tf.int64, name = 'x') print(sess.run(2 * x, feed_dict = {x: 3})) sess.close() ``` When you first defined `x` you did not have to specify a value for it. A placeholder is simply a variable that you will assign data to only later, when running the session. We say that you **feed data** to these placeholders when running the session.
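Since the graph examples above only defer plain arithmetic, the same "build now, run later" behavior can be mimicked in a few lines of ordinary Python (a side note of mine, not part of the assignment; the tiny `constant`/`placeholder` helpers below are hypothetical stand-ins for the TensorFlow ops):

```python
# A toy "computation graph": each node is a closure, and nothing is
# evaluated until we call the node with a feed dictionary ("run the session").
def constant(v):        return lambda feed: v
def placeholder(name):  return lambda feed: feed[name]
def multiply(a, b):     return lambda feed: a(feed) * b(feed)
def square_diff(a, b):  return lambda feed: (a(feed) - b(feed)) ** 2

loss  = square_diff(constant(39), constant(36))   # like the y / y_hat example
c     = multiply(constant(2), constant(10))       # like tf.multiply(a, b)
two_x = multiply(constant(2), placeholder('x'))   # like 2 * x with a placeholder

print(loss({}))            # 9  -- evaluated only now, not at definition time
print(c({}))               # 20
print(two_x({'x': 3}))     # 6  -- the feed dict supplies x at "run" time
```

Defining `loss`, `c` and `two_x` builds the graph; calling them plays the role of `sess.run(...)`, with the dictionary argument acting as `feed_dict`.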
Here's what's happening: When you specify the operations needed for a computation, you are telling TensorFlow how to construct a computation graph. The computation graph can have some placeholders whose values you will specify only later. Finally, when you run the session, you are telling TensorFlow to execute the computation graph. ### 1.1 - Linear function Let's start this programming exercise by computing the following equation: $Y = WX + b$, where $W$ and $X$ are random matrices and $b$ is a random vector. **Exercise**: Compute $WX + b$ where $W, X$, and $b$ are drawn from a random normal distribution. W is of shape (4, 3), X is (3,1) and b is (4,1). As an example, here is how you would define a constant X that has shape (3,1): ```python X = tf.constant(np.random.randn(3,1), name = "X") ``` You might find the following functions helpful: - tf.matmul(..., ...) to do a matrix multiplication - tf.add(..., ...) to do an addition - np.random.randn(...) to initialize randomly ``` # GRADED FUNCTION: linear_function def linear_function(): """ Implements a linear function: Initializes W to be a random tensor of shape (4,3) Initializes X to be a random tensor of shape (3,1) Initializes b to be a random tensor of shape (4,1) Returns: result -- runs the session for Y = WX + b """ np.random.seed(1) ### START CODE HERE ### (4 lines of code) X = tf.constant(np.random.randn(3,1), name = "X") W = tf.constant(np.random.randn(4,3), name = "weights") b = tf.constant(np.random.randn(4,1), name = "bias") Y = tf.add(tf.matmul(W, X), b) ### END CODE HERE ### # Create the session using tf.Session() and run it with sess.run(...)
on the variable you want to calculate ### START CODE HERE ### sess = tf.Session() result = sess.run(Y) ### END CODE HERE ### # close the session sess.close() return result print( "result = " + str(linear_function())) ``` *** Expected Output ***: <table> <tr> <td> **result** </td> <td> [[-2.15657382] [ 2.95891446] [-1.08926781] [-0.84538042]] </td> </tr> </table> ### 1.2 - Computing the sigmoid Great! You just implemented a linear function. Tensorflow offers a variety of commonly used neural network functions like `tf.sigmoid` and `tf.softmax`. For this exercise let's compute the sigmoid function of an input. You will do this exercise using a placeholder variable `x`. When running the session, you should use the feed dictionary to pass in the input `z`. In this exercise, you will have to (i) create a placeholder `x`, (ii) define the operations needed to compute the sigmoid using `tf.sigmoid`, and then (iii) run the session. **Exercise**: Implement the sigmoid function below. You should use the following: - `tf.placeholder(tf.float32, name = "...")` - `tf.sigmoid(...)` - `sess.run(..., feed_dict = {x: z})` Note that there are two typical ways to create and use sessions in tensorflow: **Method 1:** ```python sess = tf.Session() # Run the variables initialization (if needed), run the operations result = sess.run(..., feed_dict = {...}) sess.close() # Close the session ``` **Method 2:** ```python with tf.Session() as sess: # run the variables initialization (if needed), run the operations result = sess.run(..., feed_dict = {...}) # This takes care of closing the session for you :) ``` ``` # GRADED FUNCTION: sigmoid def sigmoid(z): """ Computes the sigmoid of z Arguments: z -- input value, scalar or vector Returns: results -- the sigmoid of z """ ### START CODE HERE ### ( approx. 4 lines of code) # Create a placeholder for x. Name it 'x'. x = tf.placeholder(tf.float32, name = "x") # compute sigmoid(x) sigmoid = tf.sigmoid(x) # Create a session, and run it.
Please use the method 2 explained above. # You should use a feed_dict to pass z's value to x. with tf.Session() as sess: # Run session and call the output "result" result = sess.run(sigmoid, feed_dict = {x : z}) ### END CODE HERE ### return result print ("sigmoid(0) = " + str(sigmoid(0))) print ("sigmoid(12) = " + str(sigmoid(12))) ``` *** Expected Output ***: <table> <tr> <td> **sigmoid(0)** </td> <td> 0.5 </td> </tr> <tr> <td> **sigmoid(12)** </td> <td> 0.999994 </td> </tr> </table> <font color='blue'> **To summarize, you now know how to**: 1. Create placeholders 2. Specify the computation graph corresponding to operations you want to compute 3. Create the session 4. Run the session, using a feed dictionary if necessary to specify placeholder variables' values. ### 1.3 - Computing the Cost You can also use a built-in function to compute the cost of your neural network. So instead of needing to write code to compute this as a function of $a^{[2](i)}$ and $y^{(i)}$ for i=1...m: $$ J = - \frac{1}{m} \sum_{i = 1}^m \large ( \small y^{(i)} \log a^{ [2] (i)} + (1-y^{(i)})\log (1-a^{ [2] (i)} )\large )\small\tag{2}$$ you can do it in one line of code in tensorflow! **Exercise**: Implement the cross entropy loss. The function you will use is: - `tf.nn.sigmoid_cross_entropy_with_logits(logits = ..., labels = ...)` Your code should input `z`, compute the sigmoid (to get `a`) and then compute the cross entropy cost $J$.
All this can be done using one call to `tf.nn.sigmoid_cross_entropy_with_logits`, which computes $$- \frac{1}{m} \sum_{i = 1}^m \large ( \small y^{(i)} \log \sigma(z^{[2](i)}) + (1-y^{(i)})\log (1-\sigma(z^{[2](i)}))\large )\small\tag{2}$$ (Note that the TensorFlow call itself returns one loss value per example, without the averaging over $m$; that is why the expected output below is a vector of four values.) ``` # GRADED FUNCTION: cost def cost(logits, labels): """ Computes the cost using the sigmoid cross entropy Arguments: logits -- vector containing z, output of the last linear unit (before the final sigmoid activation) labels -- vector of labels y (1 or 0) Note: What we've been calling "z" and "y" in this class are respectively called "logits" and "labels" in the TensorFlow documentation. So logits will feed into z, and labels into y. Returns: cost -- runs the session of the cost (formula (2)) """ ### START CODE HERE ### # Create the placeholders for "logits" (z) and "labels" (y) (approx. 2 lines) z = tf.placeholder(tf.float32, shape = logits.shape, name = "logits") y = tf.placeholder(tf.float32, shape = labels.shape, name = "label") # Use the loss function (approx. 1 line) cost = tf.nn.sigmoid_cross_entropy_with_logits(labels = y, logits = z) # Create a session (approx. 1 line). See method 1 above. sess = tf.Session() # Run the session (approx. 1 line). cost = sess.run(cost, feed_dict = {z: logits, y: labels}) # Close the session (approx. 1 line). See method 1 above. sess.close() ### END CODE HERE ### return cost logits = sigmoid(np.array([0.2,0.4,0.7,0.9])) cost = cost(logits, np.array([0,0,1,1])) print ("cost = " + str(cost)) ``` **Expected Output**: <table> <tr> <td> **cost** </td> <td> [ 1.00538719 1.03664088 0.41385433 0.39956614] </td> </tr> </table> ### 1.4 - Using One Hot encodings Many times in deep learning you will have a y vector with numbers ranging from 0 to C-1, where C is the number of classes.
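For reference, the expected cost values above can be reproduced with plain numpy (my addition, not part of the assignment; `np_sigmoid` is just a hypothetical helper name), which also makes explicit that the loss is computed per example:

```python
import numpy as np

def np_sigmoid(t):
    # Elementwise sigmoid, the numpy counterpart of tf.sigmoid.
    return 1.0 / (1.0 + np.exp(-t))

# Reproduce the example: the "logits" passed to cost() are sigmoid([0.2, ...]),
# and tf.nn.sigmoid_cross_entropy_with_logits applies another sigmoid to them.
z = np_sigmoid(np.array([0.2, 0.4, 0.7, 0.9]))
y = np.array([0., 0., 1., 1.])
a = np_sigmoid(z)                                       # sigmoid of the logits
cost_vec = -(y * np.log(a) + (1 - y) * np.log(1 - a))   # elementwise cross entropy
print(cost_vec)   # ≈ [1.00538719 1.03664088 0.41385433 0.39956614]
```

Averaging `cost_vec` over the examples would give the scalar $J$ of formula (2).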
If C is for example 4, then you might have the following y vector which you will need to convert as follows: <img src="images/onehot.png" style="width:600px;height:150px;"> This is called a "one hot" encoding, because in the converted representation exactly one element of each column is "hot" (meaning set to 1). To do this conversion in numpy, you might have to write a few lines of code. In tensorflow, you can use one line of code: - tf.one_hot(labels, depth, axis) **Exercise:** Implement the function below to take one vector of labels and the total number of classes $C$, and return the one hot encoding. Use `tf.one_hot()` to do this. ``` # GRADED FUNCTION: one_hot_matrix def one_hot_matrix(labels, C): """ Creates a matrix where the i-th row corresponds to the ith class number and the jth column corresponds to the jth training example. So if example j had a label i. Then entry (i,j) will be 1. Arguments: labels -- vector containing the labels C -- number of classes, the depth of the one hot dimension Returns: one_hot -- one hot matrix """ ### START CODE HERE ### # Create a tf.constant equal to C (depth), name it 'C'. (approx. 1 line) C = tf.constant(C, name = "C") # Use tf.one_hot, be careful with the axis (approx. 1 line) one_hot_matrix = tf.one_hot(labels, C, axis = 0, name = "one_hot") # Create the session (approx. 1 line) sess = tf.Session() # Run the session (approx. 1 line) one_hot = sess.run(one_hot_matrix) # Close the session (approx. 1 line). See method 1 above. sess.close() ### END CODE HERE ### return one_hot labels = np.array([1,2,3,0,2,1]) one_hot = one_hot_matrix(labels, C = 4) print ("one_hot = " + str(one_hot)) ``` **Expected Output**: <table> <tr> <td> **one_hot** </td> <td> [[ 0. 0. 0. 1. 0. 0.] [ 1. 0. 0. 0. 0. 1.] [ 0. 1. 0. 0. 1. 0.] [ 0. 0. 1. 0. 0. 0.]] </td> </tr> </table> ### 1.5 - Initialize with zeros and ones Now you will learn how to initialize a vector of zeros and ones. The function you will be calling is `tf.ones()`. 
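The one-hot matrix in the expected output above can also be reproduced with a one-liner in plain numpy (a sketch of mine; `np_one_hot` is a hypothetical name, not a course or TensorFlow function):

```python
import numpy as np

def np_one_hot(labels, C):
    # Numpy equivalent of tf.one_hot(labels, C, axis=0): the class index runs
    # along the rows, with one column per example.
    return (np.arange(C)[:, None] == np.asarray(labels)[None, :]).astype(float)

labels = np.array([1, 2, 3, 0, 2, 1])
print(np_one_hot(labels, 4))
# [[0. 0. 0. 1. 0. 0.]
#  [1. 0. 0. 0. 0. 1.]
#  [0. 1. 0. 0. 1. 0.]
#  [0. 0. 1. 0. 0. 0.]]
```

The broadcast comparison builds a (C, m) boolean matrix that is True exactly where row index equals the example's label, matching `axis = 0` in the graded function.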
To initialize with zeros you could use tf.zeros() instead. These functions take in a shape and return an array of dimension shape full of zeros or ones, respectively. **Exercise:** Implement the function below to take in a shape and to return an array of ones (of the given shape). - tf.ones(shape) ``` # GRADED FUNCTION: ones def ones(shape): """ Creates an array of ones of dimension shape Arguments: shape -- shape of the array you want to create Returns: ones -- array containing only ones """ ### START CODE HERE ### # Create "ones" tensor using tf.ones(...). (approx. 1 line) ones = tf.ones(shape) # Create the session (approx. 1 line) sess = tf.Session() # Run the session to compute 'ones' (approx. 1 line) ones = sess.run(ones) # Close the session (approx. 1 line). See method 1 above. sess.close() ### END CODE HERE ### return ones print ("ones = " + str(ones([3]))) ``` **Expected Output:** <table> <tr> <td> **ones** </td> <td> [ 1. 1. 1.] </td> </tr> </table> # 2 - Building your first neural network in tensorflow In this part of the assignment you will build a neural network using tensorflow. Remember that there are two parts to implement a tensorflow model: - Create the computation graph - Run the graph Let's delve into the problem you'd like to solve! ### 2.0 - Problem statement: SIGNS Dataset One afternoon, with some friends we decided to teach our computers to decipher sign language. We spent a few hours taking pictures in front of a white wall and came up with the following dataset. It's now your job to build an algorithm that would facilitate communications from a speech-impaired person to someone who doesn't understand sign language. - **Training set**: 1080 pictures (64 by 64 pixels) of signs representing numbers from 0 to 5 (180 pictures per number). - **Test set**: 120 pictures (64 by 64 pixels) of signs representing numbers from 0 to 5 (20 pictures per number). Note that this is a subset of the SIGNS dataset.
The complete dataset contains many more signs. Here are examples for each number, and an explanation of how we represent the labels. These are the original pictures, before we lowered the image resolution to 64 by 64 pixels. <img src="images/hands.png" style="width:800px;height:350px;"><caption><center> <u><font color='purple'> **Figure 1**</u><font color='purple'>: SIGNS dataset <br> <font color='black'> </center> Run the following code to load the dataset. ``` # Loading the dataset X_train_orig, Y_train_orig, X_test_orig, Y_test_orig, classes = load_dataset() ``` Change the index below and run the cell to visualize some examples in the dataset. ``` # Example of a picture index = 0 plt.imshow(X_train_orig[index]) print ("y = " + str(np.squeeze(Y_train_orig[:, index]))) ``` As usual you flatten the image dataset, then normalize it by dividing by 255. On top of that, you will convert each label to a one-hot vector as shown in Figure 1. Run the cell below to do so. ``` # Flatten the training and test images X_train_flatten = X_train_orig.reshape(X_train_orig.shape[0], -1).T X_test_flatten = X_test_orig.reshape(X_test_orig.shape[0], -1).T # Normalize image vectors X_train = X_train_flatten/255. X_test = X_test_flatten/255. # Convert training and test labels to one hot matrices Y_train = convert_to_one_hot(Y_train_orig, 6) Y_test = convert_to_one_hot(Y_test_orig, 6) print ("number of training examples = " + str(X_train.shape[1])) print ("number of test examples = " + str(X_test.shape[1])) print ("X_train shape: " + str(X_train.shape)) print ("Y_train shape: " + str(Y_train.shape)) print ("X_test shape: " + str(X_test.shape)) print ("Y_test shape: " + str(Y_test.shape)) ``` **Note** that 12288 comes from $64 \times 64 \times 3$. Each image is square, 64 by 64 pixels, and 3 is for the RGB colors. Please make sure all these shapes make sense to you before continuing. **Your goal** is to build an algorithm capable of recognizing a sign with high accuracy.
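The reshaping arithmetic above can be sanity-checked on a tiny fake batch (my addition; random pixels stand in for real images, and `m = 5` is arbitrary):

```python
import numpy as np

# Fake batch standing in for X_train_orig: m images of 64x64 RGB pixels.
m = 5
X_orig = np.random.randint(0, 256, size=(m, 64, 64, 3))

X_flat = X_orig.reshape(X_orig.shape[0], -1).T   # one column per example
X_norm = X_flat / 255.                           # pixel values now in [0, 1]

print(X_flat.shape)                                   # (12288, 5)
print(X_norm.min() >= 0.0 and X_norm.max() <= 1.0)    # True
```

Each column of `X_flat` holds one flattened image of length 12288 = 64 * 64 * 3, which is exactly the `n_x` the placeholders below expect.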
To do so, you are going to build a tensorflow model that is almost the same as one you have previously built in numpy for cat recognition (but now using a softmax output). It is a great occasion to compare your numpy implementation to the tensorflow one. **The model** is *LINEAR -> RELU -> LINEAR -> RELU -> LINEAR -> SOFTMAX*. The SIGMOID output layer has been converted to a SOFTMAX. A SOFTMAX layer generalizes SIGMOID to when there are more than two classes. ### 2.1 - Create placeholders Your first task is to create placeholders for `X` and `Y`. This will allow you to later pass your training data in when you run your session. **Exercise:** Implement the function below to create the placeholders in tensorflow. ``` # GRADED FUNCTION: create_placeholders def create_placeholders(n_x, n_y): """ Creates the placeholders for the tensorflow session. Arguments: n_x -- scalar, size of an image vector (num_px * num_px = 64 * 64 * 3 = 12288) n_y -- scalar, number of classes (from 0 to 5, so -> 6) Returns: X -- placeholder for the data input, of shape [n_x, None] and dtype "float" Y -- placeholder for the input labels, of shape [n_y, None] and dtype "float" Tips: - You will use None because it lets us be flexible on the number of examples for the placeholders. In fact, the number of examples during test/train is different. """ ### START CODE HERE ### (approx.
2 lines) X = tf.placeholder(tf.float32, shape = [n_x, None], name = "X") Y = tf.placeholder(tf.float32, shape = [n_y, None], name = "Y") ### END CODE HERE ### return X, Y X, Y = create_placeholders(12288, 6) print ("X = " + str(X)) print ("Y = " + str(Y)) ``` **Expected Output**: <table> <tr> <td> **X** </td> <td> Tensor("Placeholder_1:0", shape=(12288, ?), dtype=float32) (not necessarily Placeholder_1) </td> </tr> <tr> <td> **Y** </td> <td> Tensor("Placeholder_2:0", shape=(6, ?), dtype=float32) (not necessarily Placeholder_2) </td> </tr> </table> ### 2.2 - Initializing the parameters Your second task is to initialize the parameters in tensorflow. **Exercise:** Implement the function below to initialize the parameters in tensorflow. You are going to use Xavier Initialization for weights and Zero Initialization for biases. The shapes are given below. As an example, to help you, for W1 and b1 you could use: ```python W1 = tf.get_variable("W1", [25,12288], initializer = tf.contrib.layers.xavier_initializer(seed = 1)) b1 = tf.get_variable("b1", [25,1], initializer = tf.zeros_initializer()) ``` Please use `seed = 1` to make sure your results match ours. ``` # GRADED FUNCTION: initialize_parameters def initialize_parameters(): """ Initializes parameters to build a neural network with tensorflow. The shapes are: W1 : [25, 12288] b1 : [25, 1] W2 : [12, 25] b2 : [12, 1] W3 : [6, 12] b3 : [6, 1] Returns: parameters -- a dictionary of tensors containing W1, b1, W2, b2, W3, b3 """ tf.set_random_seed(1) # so that your "random" numbers match ours ### START CODE HERE ### (approx.
6 lines of code) W1 = tf.get_variable("W1", shape = [25, 12288], initializer = tf.contrib.layers.xavier_initializer(seed = 1)) b1 = tf.get_variable("b1", shape = [25, 1], initializer = tf.zeros_initializer()) W2 = tf.get_variable("W2", shape = [12, 25], initializer = tf.contrib.layers.xavier_initializer(seed = 1)) b2 = tf.get_variable("b2", shape = [12, 1], initializer = tf.zeros_initializer()) W3 = tf.get_variable("W3", shape = [6, 12], initializer = tf.contrib.layers.xavier_initializer(seed = 1)) b3 = tf.get_variable("b3", shape = [6, 1], initializer = tf.zeros_initializer()) ### END CODE HERE ### parameters = {"W1": W1, "b1": b1, "W2": W2, "b2": b2, "W3": W3, "b3": b3} return parameters tf.reset_default_graph() with tf.Session() as sess: parameters = initialize_parameters() print("W1 = " + str(parameters["W1"])) print("b1 = " + str(parameters["b1"])) print("W2 = " + str(parameters["W2"])) print("b2 = " + str(parameters["b2"])) ``` **Expected Output**: <table> <tr> <td> **W1** </td> <td> < tf.Variable 'W1:0' shape=(25, 12288) dtype=float32_ref > </td> </tr> <tr> <td> **b1** </td> <td> < tf.Variable 'b1:0' shape=(25, 1) dtype=float32_ref > </td> </tr> <tr> <td> **W2** </td> <td> < tf.Variable 'W2:0' shape=(12, 25) dtype=float32_ref > </td> </tr> <tr> <td> **b2** </td> <td> < tf.Variable 'b2:0' shape=(12, 1) dtype=float32_ref > </td> </tr> </table> As expected, the parameters haven't been evaluated yet. ### 2.3 - Forward propagation in tensorflow You will now implement the forward propagation module in tensorflow. The function will take in a dictionary of parameters and it will complete the forward pass. The functions you will be using are: - `tf.add(...,...)` to do an addition - `tf.matmul(...,...)` to do a matrix multiplication - `tf.nn.relu(...)` to apply the ReLU activation **Question:** Implement the forward pass of the neural network. We commented for you the numpy equivalents so that you can compare the tensorflow implementation to numpy. 
It is important to note that the forward propagation stops at `z3`. The reason is that in tensorflow the last linear layer output is given as input to the function computing the loss. Therefore, you don't need `a3`! ``` # GRADED FUNCTION: forward_propagation def forward_propagation(X, parameters): """ Implements the forward propagation for the model: LINEAR -> RELU -> LINEAR -> RELU -> LINEAR -> SOFTMAX Arguments: X -- input dataset placeholder, of shape (input size, number of examples) parameters -- python dictionary containing your parameters "W1", "b1", "W2", "b2", "W3", "b3" the shapes are given in initialize_parameters Returns: Z3 -- the output of the last LINEAR unit """ # Retrieve the parameters from the dictionary "parameters" W1 = parameters['W1'] b1 = parameters['b1'] W2 = parameters['W2'] b2 = parameters['b2'] W3 = parameters['W3'] b3 = parameters['b3'] ### START CODE HERE ### (approx. 5 lines) # Numpy Equivalents: Z1 = tf.add(tf.matmul(W1, X), b1) # Z1 = np.dot(W1, X) + b1 A1 = tf.nn.relu(Z1) # A1 = relu(Z1) Z2 = tf.add(tf.matmul(W2, A1), b2) # Z2 = np.dot(W2, A1) + b2 A2 = tf.nn.relu(Z2) # A2 = relu(Z2) Z3 = tf.add(tf.matmul(W3, A2), b3) # Z3 = np.dot(W3, A2) + b3 ### END CODE HERE ### return Z3 tf.reset_default_graph() with tf.Session() as sess: X, Y = create_placeholders(12288, 6) parameters = initialize_parameters() Z3 = forward_propagation(X, parameters) print("Z3 = " + str(Z3)) ``` **Expected Output**: <table> <tr> <td> **Z3** </td> <td> Tensor("Add_2:0", shape=(6, ?), dtype=float32) </td> </tr> </table> You may have noticed that the forward propagation doesn't output any cache. You will understand why below, when we get to backpropagation. ### 2.4 Compute cost As seen before, it is very easy to compute the cost using: ```python tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits = ..., labels = ...)) ``` **Question**: Implement the cost function below.
- It is important to know that the "`logits`" and "`labels`" inputs of `tf.nn.softmax_cross_entropy_with_logits` are expected to be of shape (number of examples, num_classes). We have thus transposed Z3 and Y for you. - Besides, `tf.reduce_mean` basically does the summation over the examples. ``` # GRADED FUNCTION: compute_cost def compute_cost(Z3, Y): """ Computes the cost Arguments: Z3 -- output of forward propagation (output of the last LINEAR unit), of shape (6, number of examples) Y -- "true" labels vector placeholder, same shape as Z3 Returns: cost - Tensor of the cost function """ # to fit the tensorflow requirement for tf.nn.softmax_cross_entropy_with_logits(...,...) logits = tf.transpose(Z3) labels = tf.transpose(Y) ### START CODE HERE ### (1 line of code) cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits = logits, labels = labels)) ### END CODE HERE ### return cost tf.reset_default_graph() with tf.Session() as sess: X, Y = create_placeholders(12288, 6) parameters = initialize_parameters() Z3 = forward_propagation(X, parameters) cost = compute_cost(Z3, Y) print("cost = " + str(cost)) ``` **Expected Output**: <table> <tr> <td> **cost** </td> <td> Tensor("Mean:0", shape=(), dtype=float32) </td> </tr> </table> ### 2.5 - Backward propagation & parameter updates This is where you become grateful to programming frameworks. All the backpropagation and parameter updates are taken care of in one line of code. It is very easy to incorporate this line in the model. After you compute the cost function, you will create an "`optimizer`" object. You have to call this object along with the cost when running the tf.session. When called, it will perform an optimization on the given cost with the chosen method and learning rate.
For instance, for gradient descent the optimizer would be: ```python optimizer = tf.train.GradientDescentOptimizer(learning_rate = learning_rate).minimize(cost) ``` To make the optimization you would do: ```python _ , c = sess.run([optimizer, cost], feed_dict={X: minibatch_X, Y: minibatch_Y}) ``` This computes the backpropagation by passing through the tensorflow graph in reverse order, from cost to inputs. **Note** When coding, we often use `_` as a "throwaway" variable to store values that we won't need to use later. Here, `_` takes on the evaluated value of `optimizer`, which we don't need (and `c` takes the value of the `cost` variable). ### 2.6 - Building the model Now, you will bring it all together! **Exercise:** Implement the model. You will be calling the functions you had previously implemented. ``` def model(X_train, Y_train, X_test, Y_test, learning_rate = 0.0001, num_epochs = 1500, minibatch_size = 32, print_cost = True): """ Implements a three-layer tensorflow neural network: LINEAR->RELU->LINEAR->RELU->LINEAR->SOFTMAX. Arguments: X_train -- training set, of shape (input size = 12288, number of training examples = 1080) Y_train -- training labels, of shape (output size = 6, number of training examples = 1080) X_test -- test set, of shape (input size = 12288, number of test examples = 120) Y_test -- test labels, of shape (output size = 6, number of test examples = 120) learning_rate -- learning rate of the optimization num_epochs -- number of epochs of the optimization loop minibatch_size -- size of a minibatch print_cost -- True to print the cost every 100 epochs Returns: parameters -- parameters learnt by the model. They can then be used to predict.
""" ops.reset_default_graph() # to be able to rerun the model without overwriting tf variables tf.set_random_seed(1) # to keep consistent results seed = 3 # to keep consistent results (n_x, m) = X_train.shape # (n_x: input size, m : number of examples in the train set) n_y = Y_train.shape[0] # n_y : output size costs = [] # To keep track of the cost # Create Placeholders of shape (n_x, n_y) ### START CODE HERE ### (1 line) X, Y = create_placeholders(n_x, n_y) ### END CODE HERE ### # Initialize parameters ### START CODE HERE ### (1 line) parameters = initialize_parameters() ### END CODE HERE ### # Forward propagation: Build the forward propagation in the tensorflow graph ### START CODE HERE ### (1 line) Z3 = forward_propagation(X, parameters) ### END CODE HERE ### # Cost function: Add cost function to tensorflow graph ### START CODE HERE ### (1 line) cost = compute_cost(Z3, Y) ### END CODE HERE ### # Backpropagation: Define the tensorflow optimizer. Use an AdamOptimizer. ### START CODE HERE ### (1 line) optimizer = tf.train.AdamOptimizer(learning_rate = learning_rate).minimize(cost) ### END CODE HERE ### # Initialize all the variables init = tf.global_variables_initializer() # Start the session to compute the tensorflow graph with tf.Session() as sess: # Run the initialization sess.run(init) # Do the training loop for epoch in range(num_epochs): epoch_cost = 0. # Defines a cost related to an epoch num_minibatches = int(m / minibatch_size) # number of minibatches of size minibatch_size in the train set seed = seed + 1 minibatches = random_mini_batches(X_train, Y_train, minibatch_size, seed) for minibatch in minibatches: # Select a minibatch (minibatch_X, minibatch_Y) = minibatch # IMPORTANT: The line that runs the graph on a minibatch. # Run the session to execute the "optimizer" and the "cost", the feedict should contain a minibatch for (X,Y). 
### START CODE HERE ### (1 line) _ , minibatch_cost = sess.run([optimizer, cost], feed_dict = {X : minibatch_X, Y: minibatch_Y}) ### END CODE HERE ### epoch_cost += minibatch_cost / num_minibatches # Print the cost every 100 epochs, and record it every 5 if print_cost == True and epoch % 100 == 0: print ("Cost after epoch %i: %f" % (epoch, epoch_cost)) if print_cost == True and epoch % 5 == 0: costs.append(epoch_cost) # plot the cost plt.plot(np.squeeze(costs)) plt.ylabel('cost') plt.xlabel('epochs (per fives)') plt.title("Learning rate =" + str(learning_rate)) plt.show() # let's save the parameters in a variable parameters = sess.run(parameters) print ("Parameters have been trained!") # Calculate the correct predictions correct_prediction = tf.equal(tf.argmax(Z3), tf.argmax(Y)) # Calculate accuracy on the test set accuracy = tf.reduce_mean(tf.cast(correct_prediction, "float")) print ("Train Accuracy:", accuracy.eval({X: X_train, Y: Y_train})) print ("Test Accuracy:", accuracy.eval({X: X_test, Y: Y_test})) return parameters ``` Run the following cell to train your model! On our machine it takes about 5 minutes. Your "Cost after epoch 100" should be 1.016458. If it's not, don't waste time; interrupt the training by clicking on the square (⬛) in the upper bar of the notebook, and try to correct your code. If it is the correct cost, take a break and come back in 5 minutes! ``` parameters = model(X_train, Y_train, X_test, Y_test) ``` **Expected Output**: <table> <tr> <td> **Train Accuracy** </td> <td> 0.999074 </td> </tr> <tr> <td> **Test Accuracy** </td> <td> 0.716667 </td> </tr> </table> Amazing, your algorithm can recognize a sign representing a figure between 0 and 5 with 71.7% accuracy. **Insights**: - Your model seems big enough to fit the training set well. However, given the difference between train and test accuracy, you could try to add L2 or dropout regularization to reduce overfitting. - Think about the session as a block of code to train the model.
Each time you run the session on a minibatch, it trains the parameters. In total you have run the session a large number of times (1500 epochs) until you obtained well trained parameters. ### 2.7 - Test with your own image (optional / ungraded exercise) Congratulations on finishing this assignment. You can now take a picture of your hand and see the output of your model. To do that: 1. Click on "File" in the upper bar of this notebook, then click "Open" to go on your Coursera Hub. 2. Add your image to this Jupyter Notebook's directory, in the "images" folder 3. Write your image's name in the following code 4. Run the code and check if the algorithm is right! ``` import scipy from PIL import Image from scipy import ndimage ## START CODE HERE ## (PUT YOUR IMAGE NAME) my_image = "thumbs_up.jpg" ## END CODE HERE ## # We preprocess your image to fit your algorithm. fname = "images/" + my_image image = np.array(ndimage.imread(fname, flatten=False)) my_image = scipy.misc.imresize(image, size=(64,64)).reshape((1, 64*64*3)).T my_image_prediction = predict(my_image, parameters) plt.imshow(image) print("Your algorithm predicts: y = " + str(np.squeeze(my_image_prediction))) ``` You indeed deserved a "thumbs-up" although as you can see the algorithm seems to classify it incorrectly. The reason is that the training set doesn't contain any "thumbs-up", so the model doesn't know how to deal with it! We call that a "mismatched data distribution" and it is one of the topics of the next course on "Structuring Machine Learning Projects". <font color='blue'> **What you should remember**: - Tensorflow is a programming framework used in deep learning - The two main object classes in tensorflow are Tensors and Operators. - When you code in tensorflow you have to take the following steps: - Create a graph containing Tensors (Variables, Placeholders ...) and Operations (tf.matmul, tf.add, ...)
- Create a session - Initialize the session - Run the session to execute the graph - You can execute the graph multiple times as you've seen in model() - The backpropagation and optimization are automatically done when running the session on the "optimizer" object.
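As a closing sanity check (my addition, not part of the graded assignment; `softmax_cross_entropy` is a hypothetical helper name), the quantity that `compute_cost` evaluates can be reproduced for a tiny batch in plain numpy:

```python
import numpy as np

def softmax_cross_entropy(Z3, Y):
    """Numpy sketch of tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(...))
    with logits Z3 and one-hot labels Y of shape (num_classes, num_examples)."""
    Z = Z3 - Z3.max(axis=0, keepdims=True)        # stabilize the exponentials
    log_softmax = Z - np.log(np.exp(Z).sum(axis=0, keepdims=True))
    return -np.mean(np.sum(Y * log_softmax, axis=0))   # mean over the examples

# Tiny example: 3 classes, 2 examples; example 0 has class 0, example 1 class 1.
Z3 = np.array([[2.0, 0.5],
               [1.0, 1.5],
               [0.1, 0.3]])
Y = np.array([[1., 0.],
              [0., 1.],
              [0., 0.]])
print(softmax_cross_entropy(Z3, Y))
```

Per example, the loss is the negative log of the softmax probability assigned to the true class; averaging over examples gives the scalar cost that Adam minimizes in `model()`.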
# Finite element method *A summary of "Introduction to Numerical Methods for Variational Problems", a great open source book by Hans Petter Langtangen and Kent-Andre Mardal. [http://folk.uio.no/kent-and/hpl-fem-book/doc/web/]. The source code has been obtained from [https://github.com/hplgit/fem-book]* * The present book is essentially a book on the finite element method, although we discuss many other choices of basis functions and other applications than partial differential equations. The literature on finite elements contains books of many different flavors, ranging from an abstract view of the method to a more practical, algorithmic treatment of the subject. The present book has a very strong algorithmic focus ("how to compute"), but formulates the method in abstract form with variational forms and function spaces. * It is not our aim to present the established mathematical theory here, but rather provide the reader with the gory details of the implementation in an explicit, user-friendly and simple manner to lower the threshold of usage. At the same time, we want to present tools for verification and debugging that are applicable in general situations such that the method can be used safely beyond what is theoretically proven. * It is our opinion that it is still important to develop codes from scratch in order to learn all the gory details. That is, "programming is understanding", as Kristen Nygaard put it - a favorite quote of both Hans Petter and me. As such, there is a need for teaching material that exposes the internals of a finite element engine and allows for careful scrutiny in a clean environment. In particular, Hans Petter always wanted to lower the bar for introducing finite elements both by avoiding technical details of implementation as well as avoiding the theoretical issues with Sobolev spaces and functional analysis. This is the purpose of this book.
* Through explicit, detailed and sometimes lengthy derivations, the reader will be able to get direct exposition of all components in a finite element engine. * The present book grew out of the need to explain variational formulations in the most intuitive way so FEniCS users can transform their PDE problem into the proper formulation for FEniCS programming. We then added material such that also the details of the most fundamental finite element algorithms could easily be understood. The learning outcomes of this book are five-fold: 1. understanding various types of variational formulations of PDE problems, 2. understanding the machinery of finite element algorithms, with an emphasis on one-dimensional problems, 3. understanding potential artifacts in simulation results, 4. understanding how variational formulations can be used in other contexts (generalized boundary conditions, solving linear systems) 5. understanding how variational methods may be used for complicated PDEs (systems of non-linear and time-dependent PDEs) * The exposition is recognized by very explicit mathematics, i.e., we have tried to write out all details of the finite element "engine" such that a reader can calculate a finite element problem by hand. * **Although we imagine that the reader will use FEniCS or other similar software to actually solve finite element problems, we strongly believe that successful application of such complex software requires a thorough understanding of the underlying method, which is best gained by hand calculations of the steps in the algorithms.** * Also, hand calculations are indispensable for debugging finite element programs: one can run a one-dimensional problem, print out intermediate results, and compare with separate hand calculations. When the program is fully verified in 1D, ideally the program should be turned into a 2D/3D simulation simply by switching from a 1D mesh to the relevant 2D/3D mesh. 
* When working with algorithms and hand calculations in the present book, we emphasize the usefulness of symbolic computing. Our choice is the free SymPy package, which is very easy to use for students and which gives a seamless transition from symbolic to numerical computing. * Another learning outcome (although not needed to be a successful FEniCS user) is to understand how the finite element method is a special case of more general variational approaches to solving equations. We consider approximation in general, solution of PDEs, as well as solving linear systems in a way that hopefully gives the reader an understanding of how seemingly very different numerical methods actually are just variants of a common way of reasoning.
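In the spirit of "programming is understanding", the entire 1D finite element pipeline fits in a few lines of NumPy. The sketch below (ours, not the book's code) solves $-u''=2$, $u(0)=u(1)=0$ with P1 elements on a uniform mesh; for this particular problem the finite element nodal values coincide with the exact solution $u(x)=x(1-x)$:

```python
import numpy as np

n = 8                      # number of elements on [0, 1]
h = 1.0 / n
x = np.linspace(0.0, 1.0, n + 1)

# P1 stiffness matrix for the n-1 interior nodes: tridiag(-1, 2, -1)/h
K = (2 * np.eye(n - 1) - np.eye(n - 1, k=1) - np.eye(n - 1, k=-1)) / h

# load vector for f(x) = 2: b_i = integral of 2*phi_i dx = 2h
b = 2 * h * np.ones(n - 1)

u = np.zeros(n + 1)        # homogeneous Dirichlet values at the boundary
u[1:-1] = np.linalg.solve(K, b)

exact = x * (1 - x)
print(np.max(np.abs(u - exact)))   # numerically zero at the nodes
```

This is exactly the kind of hand-checkable computation the book advocates: every entry of `K` and `b` can be verified by integrating the basis functions by hand.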
github_jupyter
# MAT281 - Lab N°02 <a id='p1'></a> ## Problem 01 A **simple moving average** (SMA) is the average of the last $k$ data points; that is, let $a_1$,$a_2$,...,$a_n$ be an $n$-dimensional array, then the SMA is defined by: $$\displaystyle sma(k) =\dfrac{1}{k}(a_{n}+a_{n-1}+...+a_{n-(k-1)}) = \dfrac{1}{k}\sum_{i=0}^{k-1}a_{n-i} $$ We can also define the SMA with a moving window of size $n$, in which case the result is the sequence of window averages advancing one position at a time: * $a = [1,2,3,4,5]$, the SMA with a window of $n=2$ would be: * sma(2) = [avg(1,2), avg(2,3), avg(3,4), avg(4,5)] = [1.5, 2.5, 3.5, 4.5] * $a = [1,2,3,4,5]$, the SMA with a window of $n=3$ would be: * sma(3) = [avg(1,2,3), avg(2,3,4), avg(3,4,5)] = [2.,3.,4.] Implement a function called `sma` whose input is a one-dimensional array $a$ and an integer $n$, and whose output is the simple moving average over the array, as follows: * **Example**: *sma([5,3,8,10,2,1,5,1,0,2], 2)* = $[4. , 5.5, 9. , 6. , 1.5, 3. , 3. , 0.5, 1. ]$ Here the SMA of the array is computed with a window of $n=2$. **Hint**: use the function `numpy.cumsum`

```
# import libraries
import numpy as np
```

Define the function

```
def sma(a: np.array, window_len: int):
    # return a 1-D array so that its shape matches the expected output of the tests
    out = np.zeros(a.shape[0] - window_len + 1)
    for i in range(a.shape[0] - window_len + 1):
        c = 0
        for j in range(i, window_len + i):
            c = c + a[j]  # sum over the window (a[j], not a[i])
        out[i] = c / window_len
    return out
```

Verify the examples

```
# example 01
a = np.array([1, 2, 3, 4, 5])
np.testing.assert_array_equal(
    sma(a, window_len=2),
    np.array([1.5, 2.5, 3.5, 4.5])
)

# example 02
a = np.array([5, 3, 8, 10, 2, 1, 5, 1, 0, 2])
np.testing.assert_array_equal(
    sma(a, window_len=2),
    np.array([4., 5.5, 9., 6., 1.5, 3., 3., 0.5, 1.])
)
```

<a id='p2'></a> ## Problem 02 The function **strides($a,n,p$)** transforms a one-dimensional array $a$ into a matrix of $n$ columns, in which each row is built by shifting the starting position in the array $p$ steps forward. * For the one-dimensional array $a$ = [ 1, 2, 3, 4, 5, 6, 7, 8, 9, 10], the call strides($a,4,2$) builds a matrix of $4$ columns whose forward shifts are taken two at a time. The result should look like this: $$\begin{pmatrix} 1& 2 &3 &4 \\ 3& 4&5&6 \\ 5& 6 &7 &8 \\ 7& 8 &9 &10 \\ \end{pmatrix}$$ Implement a function called `strides(a,n,p)` whose input is: * $a$: a one-dimensional array, * $n$: the number of columns, * $p$: the number of forward steps, and which returns the matrix of $n$ columns whose forward shifts are taken $p$ at a time. * **Example**: *strides($a$,4,2)* =$\begin{pmatrix} 1& 2 &3 &4 \\ 3& 4&5&6 \\ 5& 6 &7 &8 \\ 7& 8 &9 &10 \\ \end{pmatrix}$ ### Define the function

```
def strides(a: np.array, n: int, p: int):
    c = a.shape[0]
    matrx = np.zeros((1, n))
    mat_ag = np.zeros((1, n))
    for i in range(c):
        d = p * i  # starting offset of row i
        for j in range(n):
            matrx[i, j] = a[d]
            d = d + 1
            if d == c:
                return matrx
        matrx = np.r_[matrx, mat_ag]  # append an empty row for the next pass
```

### Verify the examples

```
# example 01
a = np.array([1, 2, 3, 4, 5, 6, 7, 8, 9, 10])
n = 4
p = 2
np.testing.assert_array_equal(
    strides(a, n, p),
    np.array([
        [1, 2, 3, 4],
        [3, 4, 5, 6],
        [5, 6, 7, 8],
        [7, 8, 9, 10]
    ])
)
```

<a id='p3'></a> ## Problem 03 A **magic square** is an $n \times n$ matrix of positive integers such that the sums of the numbers along every column, every row and both main diagonals are all the same.
Usually, the numbers used to fill the cells are consecutive, from 1 to $n^2$, where $n$ is the number of rows and columns of the magic square. If the numbers are consecutive from 1 to $n^2$, the sum along each column, row and main diagonal equals: $$\displaystyle M_{n} = \dfrac{n(n^2+1)}{2}$$ For example, * $A= \begin{pmatrix} 4& 9 &2 \\ 3& 5&7 \\ 8& 1 &6 \end{pmatrix}$, is a magic square. * $B= \begin{pmatrix} 4& 2 &9 \\ 3& 5&7 \\ 8& 1 &6 \end{pmatrix}$, is not a magic square. Implement a function called `es_cudrado_magico` whose input is a square matrix of size $n$ with consecutive numbers from $1$ to $n^2$, and whose output is *True* if it is a magic square and *False* otherwise. * **Example**: *es_cudrado_magico($A$)* = True, *es_cudrado_magico($B$)* = False **Hint**: Write a helper function that validates that the matrix is square and that its entries are the consecutive numbers from 1 to $n^2$. ### Define the function

```
def es_cudrado_magico(A: np.array):
    c = A.shape[0]
    d = (c * (c**2) + c) / 2  # magic constant M_n = n(n^2+1)/2
    for i in range(c):  # row sums
        suma = 0
        for j in range(c):
            suma = suma + A[i, j]
        if suma != d:
            return False
    for j in range(c):  # column sums
        suma = 0
        for i in range(c):
            suma = suma + A[i, j]
        if suma != d:
            return False
    suma = 0
    for i in range(c):  # main diagonal
        suma = suma + A[i, i]
    if suma != d:
        return False
    suma = 0
    for i in range(c):  # anti-diagonal
        suma = suma + A[c - i - 1, i]
    if suma != d:
        return False
    return True
```

### Verify the examples

```
# example 01
A = np.array([[4, 9, 2], [3, 5, 7], [8, 1, 6]])
assert es_cudrado_magico(A) == True, "example 01 incorrect"

# example 02
B = np.array([[4, 2, 9], [3, 5, 7], [8, 1, 6]])
assert es_cudrado_magico(B) == False, "example 02 incorrect"
```
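The hint in Problem 01 points at `numpy.cumsum`; a vectorized sketch (our own addition, not part of the lab solution) expresses each window sum as a difference of two prefix sums, avoiding the double loop entirely:

```python
import numpy as np

def sma_cumsum(a, window_len):
    # prefix sums turn every window sum into a difference of two cumsum entries
    c = np.cumsum(np.insert(a.astype(float), 0, 0.0))
    return (c[window_len:] - c[:-window_len]) / window_len

print(sma_cumsum(np.array([5, 3, 8, 10, 2, 1, 5, 1, 0, 2]), 2))
# -> the averages [4., 5.5, 9., 6., 1.5, 3., 3., 0.5, 1.]
```

This runs in O(n) with a single pass over the data, regardless of the window size.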
github_jupyter
<img src="qiskit-heading.gif" width="500 px" align="center"> # _*Qiskit Aqua: Experimenting with Traveling Salesman problem with variational quantum eigensolver*_ This notebook is based on an official notebook by the Qiskit team, available at https://github.com/qiskit/qiskit-tutorial under the [Apache License 2.0](https://github.com/Qiskit/qiskit-tutorial/blob/master/LICENSE) license. The original notebook was developed by Antonio Mezzacapo<sup>[1]</sup>, Jay Gambetta<sup>[1]</sup>, Kristan Temme<sup>[1]</sup>, Ramis Movassagh<sup>[1]</sup>, Albert Frisch<sup>[1]</sup>, Takashi Imamichi<sup>[1]</sup>, Giacomo Nannicini<sup>[1]</sup>, Richard Chen<sup>[1]</sup>, Marco Pistoia<sup>[1]</sup>, Stephen Wood<sup>[1]</sup>(<sup>[1]</sup>IBMQ) Your **TASK** is to execute every step of this notebook while learning to use qiskit-aqua and also how to translate general problem modeling into known problems that qiskit-aqua can solve, namely the [Travelling salesman problem](https://en.wikipedia.org/wiki/Travelling_salesman_problem). ## Introduction Many problems in quantitative fields such as finance and engineering are optimization problems. Optimization problems lie at the core of complex decision-making and definition of strategies. Optimization (or combinatorial optimization) means searching for an optimal solution in a finite or countably infinite set of potential solutions. Optimality is defined with respect to some criterion function, which is to be minimized or maximized. This is typically called the cost function or objective function. **Typical optimization problems** Minimization: cost, distance, length of a traversal, weight, processing time, material, energy consumption, number of objects Maximization: profit, value, output, return, yield, utility, efficiency, capacity, number of objects We consider here the Max-Cut problem, which is of practical interest in many fields, and show how it can be mapped onto quantum computers. 
### Weighted Max-Cut Max-Cut is an NP-complete problem, with applications in clustering, network science, and statistical physics. To grasp how practical applications are mapped into given Max-Cut instances, consider a system of many people that can interact and influence each other. Individuals can be represented by vertices of a graph, and their interactions seen as pairwise connections between vertices of the graph, or edges. With this representation in mind, it is easy to model typical marketing problems. For example, suppose that it is assumed that individuals will influence each other's buying decisions, and knowledge is given about how strong they will influence each other. The influence can be modeled by weights assigned on each edge of the graph. It is possible then to predict the outcome of a marketing strategy in which products are offered for free to some individuals, and then ask which is the optimal subset of individuals that should get the free products, in order to maximize revenues. The formal definition of this problem is the following: Consider an $n$-node undirected graph *G = (V, E)* where *|V| = n* with edge weights $w_{ij}>0$, $w_{ij}=w_{ji}$, for $(i, j)\in E$. A cut is defined as a partition of the original set V into two subsets. The cost function to be optimized is in this case the sum of weights of edges connecting points in the two different subsets, *crossing* the cut. By assigning $x_i=0$ or $x_i=1$ to each node $i$, one tries to maximize the global profit function (here and in the following summations run over indices 0,1,...n-1) $$\tilde{C}(\textbf{x}) = \sum_{i,j} w_{ij} x_i (1-x_j).$$ In our simple marketing model, $w_{ij}$ represents the probability that the person $j$ will buy a product after $i$ gets a free one. Note that the weights $w_{ij}$ can in principle be greater than $1$, corresponding to the case where the individual $j$ will buy more than one product. 
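For small graphs the profit function $\tilde{C}(\textbf{x})$ can be maximized by exhaustive search, which gives a useful classical reference for the quantum results. The sketch below uses a hypothetical 4-node symmetric weight matrix (not the instance generated later in this notebook):

```python
import numpy as np
from itertools import product

# hypothetical symmetric weight matrix of a 4-node graph (zero diagonal)
w = np.array([[0., 1., 2., 0.],
              [1., 0., 1., 3.],
              [2., 1., 0., 1.],
              [0., 3., 1., 0.]])

def cut_value(x, w):
    # C~(x) = sum_{i,j} w_ij x_i (1 - x_j); with symmetric w each cut edge counts once
    x = np.asarray(x, dtype=float)
    return float(x @ w @ (1.0 - x))

best = max(product([0, 1], repeat=4), key=lambda x: cut_value(x, w))
print(best, cut_value(best, w))  # the partition {0, 3} vs {1, 2} attains cut weight 7.0
```

Of course this brute force costs $2^n$ evaluations, which is exactly why heuristic quantum approaches are interesting for larger instances.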
Maximizing the total buying probability corresponds to maximizing the total future revenues. In the case where the profit probability will be greater than the cost of the initial free samples, the strategy is a convenient one. An extension to this model has the nodes themselves carry weights, which can be regarded, in our marketing model, as the likelihood that a person granted with a free sample of the product will buy it again in the future. With this additional information in our model, the objective function to maximize becomes $$C(\textbf{x}) = \sum_{i,j} w_{ij} x_i (1-x_j)+\sum_i w_i x_i. $$ In order to find a solution to this problem on a quantum computer, one needs first to map it to an Ising Hamiltonian. This can be done with the assignment $x_i\rightarrow (1-Z_i)/2$, where $Z_i$ is the Pauli Z operator that has eigenvalues $\pm 1$. Doing this we find that $$C(\textbf{Z}) = \sum_{i,j} \frac{w_{ij}}{4} (1-Z_i)(1+Z_j) + \sum_i \frac{w_i}{2} (1-Z_i) = -\frac{1}{2}\left( \sum_{i<j} w_{ij} Z_i Z_j +\sum_i w_i Z_i\right)+\mathrm{const},$$ where const = $\sum_{i<j}w_{ij}/2+\sum_i w_i/2 $. In other terms, the weighted Max-Cut problem is equivalent to minimizing the Ising Hamiltonian $$ H = \sum_i w_i Z_i + \sum_{i<j} w_{ij} Z_iZ_j.$$ Aqua can generate the Ising Hamiltonian for the first profit function $\tilde{C}$. ### Approximate Universal Quantum Computing for Optimization Problems There has been a considerable amount of interest in recent times about the use of quantum computers to find a solution to combinatorial problems. It is important to say that, given the classical nature of combinatorial problems, exponential speedup in using quantum computers compared to the best classical algorithms is not guaranteed. However, due to the nature and importance of the target problems, it is worth investigating heuristic approaches on a quantum computer that could indeed speed up some problem instances. 
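The change of variables $x_i\rightarrow (1-Z_i)/2$ above can be checked numerically: for every spin configuration, the profit function and the (negated, shifted) Ising energy agree. The weights below are random placeholders, purely to exercise the identity:

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(0)
n = 4
w = rng.random((n, n))
w = np.triu(w, 1) + np.triu(w, 1).T        # symmetric edge weights, zero diagonal
wi = rng.random(n)                          # node weights

const = w[np.triu_indices(n, 1)].sum() / 2 + wi.sum() / 2

def profit(x):
    # C(x) = sum_{i,j} w_ij x_i (1 - x_j) + sum_i w_i x_i
    return x @ w @ (1 - x) + wi @ x

def ising(z):
    # -(1/2) * (sum_{i<j} w_ij Z_i Z_j + sum_i w_i Z_i) + const
    zz = sum(w[i, j] * z[i] * z[j] for i in range(n) for j in range(i + 1, n))
    return -0.5 * (zz + wi @ z) + const

max_dev = max(abs(profit((1 - np.array(z)) / 2) - ising(np.array(z)))
              for z in product([-1, 1], repeat=n))
print(max_dev)  # numerically zero
```

Since the two sides agree on all $2^n$ configurations, minimizing the Ising Hamiltonian is indeed equivalent to maximizing the profit function.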
Here we demonstrate an approach that is based on the Quantum Approximate Optimization Algorithm by Farhi, Goldstone, and Gutmann (2014). We frame the algorithm in the context of *approximate quantum computing*, given its heuristic nature. The algorithm works as follows: 1. Choose the $w_i$ and $w_{ij}$ in the target Ising problem. In principle, even higher powers of Z are allowed. 2. Choose the depth of the quantum circuit $m$. Note that the depth can be modified adaptively. 3. Choose a set of controls $\theta$ and make a trial function $|\psi(\boldsymbol\theta)\rangle$, built using a quantum circuit made of C-Phase gates and single-qubit Y rotations, parameterized by the components of $\boldsymbol\theta$. 4. Evaluate $C(\boldsymbol\theta) = \langle\psi(\boldsymbol\theta)~|H|~\psi(\boldsymbol\theta)\rangle = \sum_i w_i \langle\psi(\boldsymbol\theta)~|Z_i|~\psi(\boldsymbol\theta)\rangle+ \sum_{i<j} w_{ij} \langle\psi(\boldsymbol\theta)~|Z_iZ_j|~\psi(\boldsymbol\theta)\rangle$ by sampling the outcome of the circuit in the Z-basis and adding the expectation values of the individual Ising terms together. In general, different control points around $\boldsymbol\theta$ have to be estimated, depending on the classical optimizer chosen. 5. Use a classical optimizer to choose a new set of controls. 6. Continue until $C(\boldsymbol\theta)$ reaches a minimum, close enough to the solution $\boldsymbol\theta^*$. 7. Use the last $\boldsymbol\theta$ to generate a final set of samples from the distribution $|\langle z_i~|\psi(\boldsymbol\theta)\rangle|^2\;\forall i$ to obtain the answer. It is our belief that the difficulty of finding good heuristic algorithms will come down to the choice of an appropriate trial wavefunction. For example, one could consider a trial function whose entanglement best aligns with the target problem, or simply make the amount of entanglement a variable. 
In this tutorial, we will consider a simple trial function of the form $$|\psi(\theta)\rangle = [U_\mathrm{single}(\boldsymbol\theta) U_\mathrm{entangler}]^m |+\rangle$$ where $U_\mathrm{entangler}$ is a collection of C-Phase gates (fully entangling gates), and $U_\mathrm{single}(\theta) = \prod_{i=1}^n Y(\theta_{i})$, where $n$ is the number of qubits and $m$ is the depth of the quantum circuit. The motivation for this choice is that for these classical problems this choice allows us to search over the space of quantum states that have only real coefficients, still exploiting the entanglement to potentially converge faster to the solution. One advantage of using this sampling method compared to adiabatic approaches is that the target Ising Hamiltonian does not have to be implemented directly on hardware, allowing this algorithm not to be limited to the connectivity of the device. Furthermore, higher-order terms in the cost function, such as $Z_iZ_jZ_k$, can also be sampled efficiently, whereas in adiabatic or annealing approaches they are generally impractical to deal with. References: - A. Lucas, Frontiers in Physics 2, 5 (2014) - E. Farhi, J. Goldstone, S. Gutmann e-print arXiv 1411.4028 (2014) - D. Wecker, M. B. Hastings, M. Troyer Phys. Rev. A 94, 022309 (2016) - E. Farhi, J. Goldstone, S. Gutmann, H. 
Neven e-print arXiv 1703.06199 (2017)

```
# useful additional packages
import matplotlib.pyplot as plt
import matplotlib.axes as axes
%matplotlib inline
import numpy as np
import networkx as nx

from qiskit.tools.visualization import plot_histogram
from qiskit.aqua import Operator, run_algorithm, get_algorithm_instance
from qiskit.aqua.input import get_input_instance
from qiskit.aqua.translators.ising import max_cut, tsp

# setup aqua logging
import logging
from qiskit.aqua._logging import set_logging_config, build_logging_config
# set_logging_config(build_logging_config(logging.DEBUG))  # choose INFO, DEBUG to see the log

# ignoring deprecation errors on matplotlib
import warnings
import matplotlib.cbook
warnings.filterwarnings("ignore", category=matplotlib.cbook.mplDeprecation)
```

### [Optional] Setup token to run the experiment on a real device If you would like to run the experiment on a real device, you need to set up your account first. Note: If you have not stored your token yet, use `IBMQ.save_accounts()` to store it first.

```
from qiskit import IBMQ
IBMQ.load_accounts()
```

## Traveling Salesman Problem In addition to being a notorious NP-complete problem that has drawn the attention of computer scientists and mathematicians for well over a century, the Traveling Salesman Problem (TSP) has important bearings on finance and marketing, as its name suggests. Colloquially speaking, the traveling salesman is a person that goes from city to city to sell merchandise. The objective in this case is to find the shortest path that would enable the salesman to visit all the cities and return to his hometown, i.e. the city where he started traveling. By doing this, the salesman gets to maximize potential sales in the least amount of time. The problem derives its importance from its "hardness" and ubiquitous equivalence to other relevant combinatorial optimization problems that arise in practice. The mathematical formulation with some early analysis was proposed by W.R. 
Hamilton in the early 19th century. Mathematically the problem is, as in the case of Max-Cut, best abstracted in terms of graphs. The TSP on the nodes of a graph asks for the shortest *Hamiltonian cycle* that can be taken through each of the nodes. A Hamiltonian cycle is a closed path that visits every vertex of a graph exactly once. No efficient general solution is known, and an algorithm that finds it efficiently (e.g., in polynomial time) is not expected to exist. Find the shortest Hamiltonian cycle in a graph $G=(V,E)$ with $N=|V|$ nodes and distances, $w_{ij}$ (distance from vertex $i$ to vertex $j$). A Hamiltonian cycle is described by $N^2$ variables $x_{i,p}$, where $i$ represents the node and $p$ represents its order in a prospective cycle. The decision variable takes the value 1 if the solution occurs at node $i$ at time order $p$. We require that every node can only appear once in the cycle, and for each time a node has to occur. This amounts to the two constraints (here and in the following, whenever not specified, the summands run over 0,1,...N-1) $$\sum_{i} x_{i,p} = 1 ~~\forall p$$ $$\sum_{p} x_{i,p} = 1 ~~\forall i.$$ For nodes in our prospective ordering, if $x_{i,p}$ and $x_{j,p+1}$ are both 1, then there should be an energy penalty if $(i,j) \notin E$ (not connected in the graph). The form of this penalty is $$\sum_{i,j\notin E}\sum_{p} x_{i,p}x_{j,p+1}>0,$$ where it is assumed the boundary condition of the Hamiltonian cycle $(p=N)\equiv (p=0)$. However, here it will be assumed a fully connected graph and not include this term. The distance that needs to be minimized is $$C(\textbf{x})=\sum_{i,j}w_{ij}\sum_{p} x_{i,p}x_{j,p+1}.$$ Putting this all together in a single objective function to be minimized, we get the following: $$C(\textbf{x})=\sum_{i,j}w_{ij}\sum_{p} x_{i,p}x_{j,p+1}+ A\sum_p\left(1- \sum_i x_{i,p}\right)^2+A\sum_i\left(1- \sum_p x_{i,p}\right)^2,$$ where $A$ is a free parameter. 
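Before mapping to qubits, the combined objective can be checked classically: for a feasible assignment the penalty terms vanish and only the tour length remains. A small sketch with a hypothetical 3-city distance matrix (the penalty weight is chosen per the rule discussed next):

```python
import numpy as np

# hypothetical symmetric 3-city distance matrix
w = np.array([[0., 2., 3.],
              [2., 0., 4.],
              [3., 4., 0.]])
N = 3
A = 1.0 + w.max()   # penalty weight chosen so that A > max(w_ij)

def qubo_cost(x, w, A):
    # x[i, p] = 1 iff city i is visited at time step p (cyclic: step N wraps to 0)
    N = x.shape[0]
    dist = sum(w[i, j] * x[i, p] * x[j, (p + 1) % N]
               for i in range(N) for j in range(N) for p in range(N))
    pen = A * ((1 - x.sum(axis=0)) ** 2).sum() + A * ((1 - x.sum(axis=1)) ** 2).sum()
    return dist + pen

# feasible tour 0 -> 1 -> 2 -> 0: zero penalty, cost = w01 + w12 + w20 = 2 + 4 + 3 = 9.0
x = np.zeros((N, N))
x[0, 0] = x[1, 1] = x[2, 2] = 1
print(qubo_cost(x, w, A))
```

Any infeasible assignment (a city repeated or a time step left empty) picks up at least one penalty of size $A$, which is why $A$ must dominate the distances.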
One needs to ensure that $A$ is large enough so that these constraints are respected. One way to do this is to choose $A$ such that $A > \mathrm{max}(w_{ij})$. Once again, it is easy to map the problem in this form to a quantum computer, and the solution will be found by minimizing an Ising Hamiltonian.

```
# Generating a graph of 3 nodes
n = 3
num_qubits = n ** 2
ins = tsp.random_tsp(n)
G = nx.Graph()
G.add_nodes_from(np.arange(0, n, 1))
colors = ['r' for node in G.nodes()]
pos = {k: v for k, v in enumerate(ins.coord)}
default_axes = plt.axes(frameon=True)
nx.draw_networkx(G, node_color=colors, node_size=600, alpha=.8, ax=default_axes, pos=pos)
print('distance\n', ins.w)
```

### Brute force approach

```
from itertools import permutations

def brute_force_tsp(w, N):
    a = list(permutations(range(1, N)))
    last_best_distance = 1e10
    for i in a:
        distance = 0
        pre_j = 0
        for j in i:
            distance = distance + w[j, pre_j]
            pre_j = j
        distance = distance + w[pre_j, 0]
        order = (0,) + i
        if distance < last_best_distance:
            best_order = order
            last_best_distance = distance
            print('order = ' + str(order) + ' Distance = ' + str(distance))
    return last_best_distance, best_order

best_distance, best_order = brute_force_tsp(ins.w, ins.dim)
print('Best order from brute force = ' + str(best_order) + ' with total distance = ' + str(best_distance))

def draw_tsp_solution(G, order, colors, pos):
    G2 = G.copy()
    n = len(order)
    for i in range(n):
        j = (i + 1) % n
        G2.add_edge(order[i], order[j])
    default_axes = plt.axes(frameon=True)
    nx.draw_networkx(G2, node_color=colors, node_size=600, alpha=.8, ax=default_axes, pos=pos)

draw_tsp_solution(G, best_order, colors, pos)
```

### Mapping to the Ising problem

```
qubitOp, offset = tsp.get_tsp_qubitops(ins)
algo_input = get_input_instance('EnergyInput')
algo_input.qubit_op = qubitOp
```

### Checking that the full Hamiltonian gives the right cost

```
# Making the Hamiltonian in its full form and getting the lowest eigenvalue and eigenvector
algorithm_cfg = {
    'name': 'ExactEigensolver',
}

params = {
    'problem': {'name': 'ising'},
    'algorithm': algorithm_cfg
}
result = run_algorithm(params, algo_input)
print('energy:', result['energy'])
#print('tsp objective:', result['energy'] + offset)

x = tsp.sample_most_likely(result['eigvecs'][0])
print('feasible:', tsp.tsp_feasible(x))
z = tsp.get_tsp_solution(x)
print('solution:', z)
print('solution objective:', tsp.tsp_value(z, ins.w))
draw_tsp_solution(G, z, colors, pos)
```

### Running it on a quantum computer We run the optimization routine using a feedback loop with a quantum computer that uses trial functions built with Y single-qubit rotations, $U_\mathrm{single}(\theta) = \prod_{i=1}^n Y(\theta_{i})$, and entangler steps $U_\mathrm{entangler}$.

```
algorithm_cfg = {
    'name': 'VQE',
    'operator_mode': 'matrix'
}

optimizer_cfg = {
    'name': 'SPSA',
    'max_trials': 300
}

var_form_cfg = {
    'name': 'RY',
    'depth': 5,
    'entanglement': 'linear'
}

params = {
    'problem': {'name': 'ising', 'random_seed': 10598},
    'algorithm': algorithm_cfg,
    'optimizer': optimizer_cfg,
    'variational_form': var_form_cfg,
    'backend': {'name': 'statevector_simulator'}
}
result = run_algorithm(params, algo_input)
print('energy:', result['energy'])
print('time:', result['eval_time'])
#print('tsp objective:', result['energy'] + offset)

x = tsp.sample_most_likely(result['eigvecs'][0])
print('feasible:', tsp.tsp_feasible(x))
z = tsp.get_tsp_solution(x)
print('solution:', z)
print('solution objective:', tsp.tsp_value(z, ins.w))
draw_tsp_solution(G, z, colors, pos)

# run quantum algorithm with shots
params['algorithm']['operator_mode'] = 'grouped_paulis'
params['backend']['name'] = 'qasm_simulator'
params['backend']['shots'] = 1024

result = run_algorithm(params, algo_input)
print('energy:', result['energy'])
print('time:', result['eval_time'])
#print('tsp objective:', result['energy'] + offset)

x = tsp.sample_most_likely(result['eigvecs'][0])
print('feasible:', tsp.tsp_feasible(x))
z = tsp.get_tsp_solution(x)
print('solution:', z)
print('solution objective:', tsp.tsp_value(z, ins.w))
plot_histogram(result['eigvecs'][0])
draw_tsp_solution(G, z, colors, pos)
```
github_jupyter
# Mask R-CNN Demo A quick intro to using the pre-trained model to detect and segment objects.

```
import os
import sys
import random
import math
import numpy as np
import skimage.io
import matplotlib
import matplotlib.pyplot as plt
import copy

# Root directory of the project
ROOT_DIR = os.path.abspath(".")

# Import Mask RCNN
sys.path.append(ROOT_DIR)  # To find local version of the library
from mrcnn import utils
import mrcnn.model as modellib
from mrcnn import visualize

# Import COCO config
sys.path.append(os.path.join(ROOT_DIR, "samples/coco/"))  # To find local version
import coco

# import tracking
from sort.sort import Sort
import glob

# %matplotlib inline

# Directory to save logs and trained model
MODEL_DIR = os.path.join(ROOT_DIR, "logs")

# Local path to trained weights file
COCO_MODEL_PATH = os.path.join(ROOT_DIR, "pretrained_models", "mask_rcnn_coco.h5")
# Download COCO trained weights from Releases if needed
if not os.path.exists(COCO_MODEL_PATH):
    utils.download_trained_weights(COCO_MODEL_PATH)

# Directory of images to run detection on
IMAGE_DIR = os.path.join(ROOT_DIR, "images")

# indicate GPUs
os.environ["CUDA_DEVICE_ORDER"] = "PCI_BUS_ID"
os.environ["CUDA_VISIBLE_DEVICES"] = "0"
```

## Configurations We'll be using a model trained on the MS-COCO dataset. The configurations of this model are in the ```CocoConfig``` class in ```coco.py```. For inferencing, modify the configurations a bit to fit the task. To do so, sub-class the ```CocoConfig``` class and override the attributes you need to change.

```
class InferenceConfig(coco.CocoConfig):
    # Set batch size to 1 since we'll be running inference on
    # one image at a time. Batch size = GPU_COUNT * IMAGES_PER_GPU
    GPU_COUNT = 1
    IMAGES_PER_GPU = 1
    IMAGE_SHAPE = [1024, 1024, 3]
    IMAGE_MAX_DIM = 1024
    # IMAGE_RESIZE_MODE = "none"
    # NUM_CLASSES = 15

config = InferenceConfig()
config.display()
```

## Create Model and Load Trained Weights

```
# Create model object in inference mode.
model = modellib.MaskRCNN(mode="inference", model_dir=MODEL_DIR, config=config)

# Load weights trained on MS-COCO
model.load_weights(COCO_MODEL_PATH, by_name=True)
```

## Class Names The model classifies objects and returns class IDs, which are integer values that identify each class. Some datasets assign integer values to their classes and some don't. For example, in the MS-COCO dataset, the 'person' class is 1 and 'teddy bear' is 88. The IDs are often sequential, but not always. The COCO dataset, for example, has classes associated with class IDs 70 and 72, but not 71. To improve consistency, and to support training on data from multiple sources at the same time, our ```Dataset``` class assigns its own sequential integer IDs to each class. For example, if you load the COCO dataset using our ```Dataset``` class, the 'person' class would get class ID = 1 (just like COCO) and the 'teddy bear' class is 78 (different from COCO). Keep that in mind when mapping class IDs to class names. To get the list of class names, you'd load the dataset and then use the ```class_names``` property like this.

```
# Load COCO dataset
dataset = coco.CocoDataset()
dataset.load_coco(COCO_DIR, "train")
dataset.prepare()

# Print class names
print(dataset.class_names)
```

We don't want to require you to download the COCO dataset just to run this demo, so we're including the list of class names below. The index of the class name in the list represents its ID (first class is 0, second is 1, third is 2, ...etc.)

```
# COCO Class names
# Index of the class in the list is its ID. For example, to get ID of
# the teddy bear class, use: class_names.index('teddy bear')
class_names = ['BG', 'person', 'bicycle', 'car', 'motorcycle', 'airplane',
               'bus', 'train', 'truck', 'boat', 'traffic light',
               'fire hydrant', 'stop sign', 'parking meter', 'bench']#, 'bird',
               # 'cat', 'dog', 'horse', 'sheep', 'cow', 'elephant', 'bear',
               # 'zebra', 'giraffe', 'backpack', 'umbrella', 'handbag', 'tie',
               # 'suitcase', 'frisbee', 'skis', 'snowboard', 'sports ball',
               # 'kite', 'baseball bat', 'baseball glove', 'skateboard',
               # 'surfboard', 'tennis racket', 'bottle', 'wine glass', 'cup',
               # 'fork', 'knife', 'spoon', 'bowl', 'banana', 'apple',
               # 'sandwich', 'orange', 'broccoli', 'carrot', 'hot dog', 'pizza',
               # 'donut', 'cake', 'chair', 'couch', 'potted plant', 'bed',
               # 'dining table', 'toilet', 'tv', 'laptop', 'mouse', 'remote',
               # 'keyboard', 'cell phone', 'microwave', 'oven', 'toaster',
               # 'sink', 'refrigerator', 'book', 'clock', 'vase', 'scissors',
               # 'teddy bear', 'hair drier', 'toothbrush']
num_classes = len(class_names)
num_classes
```

## Run Object Detection on Taiwan SA dataset

```
'''Run on KITTI raw images!'''
# del sys.modules['sort']
# del sys.modules['mrcnn']
from mrcnn import visualize
from sort.sort import Sort
from utils import *
import glob
import time
import numpy as np
from PIL import Image
import os

# only for testing
IMG_DIR = '/media/DATA/VAD_datasets/taiwan_sa/testing/frames/'  #'/media/DATA/traffic_accident_videos/images_10hz/'
OUT_DIR = '/media/DATA/VAD_datasets/taiwan_sa/testing/mask_rcnn_detections/'  #'/media/DATA/traffic_accident_videos/mask_rcnn_detections/'
all_folders = glob.glob(IMG_DIR + '*')

W = 1280
H = 720
ROI = [0, 0, 720, 1280]
saver = True
display = False
colors = np.random.rand(32, 3)

'''for saving observations of each video'''
all_observations = {}
for folder_id, folder in enumerate(all_folders):
    video_name = folder.split('/')[-1]
    print(video_name)

    '''for display'''
    if display:
        colours = np.random.rand(32, 3)*255  # used only for display
        plt.ion()
        fig = plt.figure()

    '''init tracker'''
    # use_dlibTracker = False  # True to use dlib correlation tracker, False to use Kalman Filter tracker
    # all_trackers = Sort(ROI, max_age=3, min_hits=3, use_dlib=use_dlibTracker, track_masks=True)
    all_trackers = Sort(max_age=5, min_hits=3, since_update_thresh=1)

    '''count time'''
    total_time = 0

    # '''write results'''
    out_file = os.path.join(OUT_DIR, video_name + '.txt')
    out_file_with_feature = os.path.join(OUT_DIR, video_name + '.npy')
    try:
        os.stat(OUT_DIR)
        print("video has been processed!")
        # continue
    except:
        os.mkdir(OUT_DIR)
        aa = 1

    f_out = open(out_file, 'w')
    frame = 0

    '''for saving observations of each car'''
    observations = {}
    all_images = sorted(glob.glob(os.path.join(folder, '*.jpg')))

    '''make dir if doesn exist'''
    SAMPLE_IMG_DIR = os.path.join(OUT_DIR, video_name)
    if not os.path.isdir(SAMPLE_IMG_DIR):
        os.mkdir(SAMPLE_IMG_DIR)

    output_with_feature = []
    for image_file in all_images:
        img = np.asarray(Image.open(image_file))
        if img is None:
            break

        mrcnn_detections = model.detect([img], verbose=1)[0]
        interesting_objects = np.where(mrcnn_detections['class_ids'] < num_classes)[0]
        bbox_hash = {}
        bboxes = mrcnn_detections['rois'][interesting_objects]
        masks = mrcnn_detections['masks'][:, :, interesting_objects]
        classes = mrcnn_detections['class_ids'][interesting_objects]
        scores = mrcnn_detections['scores'][interesting_objects]
        features = mrcnn_detections['roi_features'][interesting_objects]
        for i, bbox in enumerate(bboxes):
            bbox_hash[tuple(bbox)] = [classes[i], scores[i], masks[i]]

        start_time = time.time()

        # update tracker
        # trackers, feature_list = all_trackers.update(bboxes.astype('int'),
        #                                              img=img,
        #                                              masks=masks,
        #                                              classes=classes,
        #                                              scores=scores,
        #                                              features=features)  # only cars are considered so far.
        matched, ret = all_trackers.update(bboxes.astype('int'))  # only cars are considered so far.
        if frame == 0:
            matched = [[row[-1], row[-1]] for row in ret]
        print(matched)

        # build a hash mapping from tracker id to bbox number in current detection
        trk2det_id_hash = {}
        for m in matched:
            trk_id = m[0]
            trk2det_id_hash[trk_id] = m[1]

        # use correlation tracker
        # trackers, mask_list = all_trackers.update(bboxes.astype('int'),
        #                                           img=img,
        #                                           classes=classes,
        #                                           scores=scores)  # only cars are considered so far.

        cycle_time = time.time() - start_time
        total_time += cycle_time
        print('frame: %d...took: %3fs' % (frame, cycle_time))

        tracked_boxes = []
        tracked_id = []
        tracked_masks = []
        tracked_classes = []
        tracked_scores = []
        tracked_features = []
        # if not use_dlibTracker:
        #     for j, (d, features) in enumerate(zip(trackers, feature_list)):
        #         tracked_boxes.append(d[:4])
        #         tracked_id.append(d[4])
        #         tracked_classes.append(d[-1])
        #         tracked_scores.append(d[-2])
        #         tracked_features.append(feature_list[j])
        #         # track_id, frame_id, age, class, score, xmin, ymin, xmax, ymax
        #         f_out.write('%d,%d,%d,%d,%d,%.3f,%.3f,%.3f,%.3f\n' %
        #                     (d[4], frame, d[5], d[7], d[6], d[0], d[1], d[2], d[3]))
        # else:
        #     for j, d in enumerate(trackers):
        #         tracked_boxes.append(d[:4])
        #         tracked_id.append(d[4])
        #         tracked_classes.append(d[-1])
        #         tracked_scores.append(d[-2])
        #         tracked_features.append(feature_list[j])
        #         # track_id, frame_id, age, class, score, xmin, ymin, xmax, ymax
        #         f_out.write('%d,%d,%d,%d,%d,%.3f,%.3f,%.3f,%.3f\n' %
        #                     (d[4], frame, d[5], d[7], d[6], d[0], d[1], d[2], d[3]))
        for j, d in enumerate(ret):
            tracked_boxes.append(d[:4])
            tracked_id.append(d[4])
            tracked_classes.append(classes[trk2det_id_hash[d[4]]])
            tracked_scores.append(scores[trk2det_id_hash[d[4]]])
            tracked_masks.append(masks[trk2det_id_hash[d[4]]])
            tracked_features.append(features[trk2det_id_hash[d[4]], :])
            f_out.write('%d,%d,%d,%.3f,%d,%d,%d,%d\n' %
                        (d[4], frame, tracked_classes[j], tracked_scores[j], d[0], d[1], d[2], d[3]))

        tracked_boxes = np.array(tracked_boxes).astype('int')
        tracked_id = np.array(tracked_id)
        if len(tracked_id) == 0:
            continue
        tracked_classes = np.array(tracked_classes).astype('int')
        tracked_scores = np.array(tracked_scores)
        tracked_features = np.array(tracked_features)

        frame_ids = frame * np.ones([tracked_boxes.shape[0], 1])
        complete_output_array = np.hstack([frame_ids,
                                           np.expand_dims(tracked_id, axis=-1),
                                           tracked_boxes,
                                           np.expand_dims(tracked_scores, axis=-1),
                                           tracked_features])
        if len(output_with_feature) == 0:
            output_with_feature = complete_output_array
        else:
            output_with_feature = np.vstack([output_with_feature, complete_output_array])

        # save masked images
        save_path = os.path.join(SAMPLE_IMG_DIR, str(format(frame, '04')) + '.jpg')
        masked_img = visualize.display_tracklets(img, tracked_boxes, tracked_id, tracked_masks,
                                                 tracked_classes, class_names, tracked_scores,
                                                 show_mask=False, colors=colors,
                                                 save_path=save_path)  # used only for display
        frame += 1

        # tracked_features = np.array(tracked_features)
        # print(tracked_features.shape)
        # np.save(feature_out, tracked_features)
        # plt.clf()

        # save mask files
        # total_mask = np.zeros((640,1280), dtype=bool)
        # for i in range(tracked_masks.shape[2]):
        #     total_mask = np.bitwise_or(total_mask, tracked_masks[:,:,i])
        # bbox_mask = np.ones((640,1280))
        # for box in tracked_boxes:
        #     bbox_mask[box[0]:box[2], box[1]:box[3]] = 0
        # write_csv(out_path + str(format(frame,'04')) + '.csv', total_mask)

    np.save(out_file_with_feature, output_with_feature)
    print("One video is written!")
    f_out.close()

    # if folder_id > 5: break

folder

def y1x1y2x2_to_xywh(boxes):
    '''
    Params:
        bounding boxes: (num_boxes, 4) in [ymin,xmin,ymax,xmax] order
    Returns:
        bounding boxes: (num_boxes, 4) in [xmin,ymin,w,h] order
    '''
    boxes = y1x1y2x2_to_x1y1x2y2(boxes)
    boxes[:, 2] -= boxes[:, 0]
    boxes[:, 3] -= boxes[:, 1]
    return boxes

def y1x1y2x2_to_x1y1x2y2(boxes):
    '''
    Params:
        bounding boxes: (num_boxes, 4) in [ymin,xmin,ymax,xmax] order
    Returns:
        bounding boxes: (num_boxes, 4) in [xmin, ymin, xmax, ymax] order
    '''
    tmp = copy.deepcopy(boxes[:, 1])
    boxes[:, 1] = boxes[:, 0]
    boxes[:, 0] = tmp
    tmp = 
copy.deepcopy(boxes[:,3]) boxes[:,3] = boxes[:,2] boxes[:,2] = tmp return boxes '''Only do detection, no tracking''' from mrcnn import visualize from sort.sort import Sort from utils import * import glob import time import numpy as np from PIL import Image import os # only for testing IMG_DIR = '/media/DATA/VAD_datasets/taiwan_sa/testing/frames/'#'/media/DATA/traffic_accident_videos/images_10hz/' OUT_DIR = '/media/DATA/VAD_datasets/taiwan_sa/testing/mask_rcnn_detections/'#'/media/DATA/traffic_accident_videos/mask_rcnn_detections/' all_folders = glob.glob(IMG_DIR + '*') W = 1280 H = 720 ROI = [0, 0, 720, 1280] display = False colors = np.random.rand(32, 3) '''for saving observations of each video''' all_observations = {} for_deepsort = True save_det_images = True for folder_id, folder in enumerate(all_folders): folder = '/media/DATA/VAD_datasets/taiwan_sa/testing/frames/000462' video_name = folder.split('/')[-1] print(video_name) '''for display''' if display: colours = np.random.rand(32, 3)*255 # used only for display plt.ion() fig = plt.figure() '''count time''' total_time = 0 # '''write results''' out_file = os.path.join(OUT_DIR, video_name + '.txt') out_file_with_feature = os.path.join(OUT_DIR, video_name + '.npy') try: os.stat(OUT_DIR) print("video has been processed!") # continue except: os.mkdir(OUT_DIR) aa = 1 # f_out = open(out_file, 'w') frame = 0 '''for saving observations of each car''' observations = {} all_images = sorted(glob.glob(os.path.join(folder,'images', '*.jpg'))) '''make dir if doesn exist''' SAMPLE_IMG_DIR = os.path.join(OUT_DIR, video_name) if not os.path.isdir(SAMPLE_IMG_DIR): os.mkdir(SAMPLE_IMG_DIR) output_with_feature = [] for image_file in all_images: img = np.asarray(Image.open(image_file)) if img is None: break # run detection start_time = time.time() mrcnn_detections = model.detect([img], verbose=1)[0] cycle_time = time.time() - start_time total_time += cycle_time print('frame: %d...took: %3fs'%(frame,cycle_time)) interesting_objects 
= np.where(mrcnn_detections['class_ids'] < num_classes)[0] bboxes = mrcnn_detections['rois'][interesting_objects] # ymin xmin ymax xmax # convert to xywh format for deepsort purpose if for_deepsort: deepsort_bboxes = y1x1y2x2_to_xywh(copy.deepcopy(bboxes)) masks = mrcnn_detections['masks'][:,:,interesting_objects] classes = mrcnn_detections['class_ids'][interesting_objects] scores = mrcnn_detections['scores'][interesting_objects] features = mrcnn_detections['roi_features'][interesting_objects] frame_ids = frame * np.ones([bboxes.shape[0],1]) track_ids = -1 * np.ones([bboxes.shape[0],1]) complete_output_array = np.hstack([frame_ids, track_ids, deepsort_bboxes, np.expand_dims(scores, axis=-1), features]) if len(output_with_feature) == 0: output_with_feature = complete_output_array else: output_with_feature = np.vstack([output_with_feature, complete_output_array]) # save masked images if save_det_images: save_path = os.path.join(SAMPLE_IMG_DIR, str(format(frame,'04'))+'.jpg') visualize.display_instances(img, bboxes, masks, classes, class_names, scores=scores, save_path=save_path, figsize=(16, 16), show_bbox=True) frame += 1 np.save(out_file_with_feature, output_with_feature) print("One video is written!") # f_out.close() # if folder_id > 5: break ``` # Prepare for DeepSort ``` '''Change Taiwan Dataset to DeepSort input''' import shutil TAIWAN_ROOT = '/media/DATA/VAD_datasets/taiwan_sa/testing/frames' all_dirs = sorted(glob.glob(os.path.join(TAIWAN_ROOT,'*'))) for video_dir in all_dirs: video_name = video_dir.split('/')[-1] print(video_name) dest_dir = os.path.join(video_dir, 'images') if not os.path.isdir(dest_dir): os.mkdir(dest_dir) for file in glob.glob(os.path.join(video_dir, '*.jpg')): shutil.move(file, dest_dir) file_name = os.path.join(video_dir, 'seqinfo.ini') with open(file_name, 'w') as f: f.writelines(['[Sequence]\n', 'name=' + video_name + '\n', 'imDir=' + video_dir + '\n', 'frameRate=25\n', 'seqLength=100\n', 'imWidth=1280\n', 'imHeight=720\n', 
'imExt=.jpg\n']) # import csv # import cv2 # import numpy as np # def read_binary(file_path,header=True,delimiter=','): # # The read-in data should be a N*W matrix, # # where N is the length of the time sequences, # # W is the number of sensors/data features # i = 0 # with open(file_path, 'r') as file: # reader = csv.reader(file, delimiter = delimiter) # data=[] # for line in reader: # if i == 0 and header: # i += +1 # else: # for j, element in enumerate(line): # if element == 'True': # line[j] = 0 # elif element == 'False': # line[j] = 255 # else: # raise ValueError("Data type is not boolean!!") # line = np.array(line) # str2float # if i == 0 or (i == 1 and header): # data = line # else: # data = np.vstack((data, line)) # i += 1 # return data # # from data_reader import * # masks = read_binary('/home/yyao/Documents/car_intersection/tracking_output/mask_rcnn/201804171444003136/0001.csv',header=False) ```
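The in-place `y1x1y2x2_to_xywh` / `y1x1y2x2_to_x1y1x2y2` helpers above call `copy.deepcopy`, so `import copy` must be in scope (it is not among the imports shown). As a sanity check on the conversion itself, here is a minimal, side-effect-free NumPy sketch; the function name is illustrative and not from the notebook:

```python
import numpy as np

def yxyx_to_xywh(boxes):
    """Convert boxes from [ymin, xmin, ymax, xmax] to [xmin, ymin, w, h]
    without mutating the input (unlike the in-place helpers above)."""
    boxes = np.asarray(boxes, dtype=float)
    out = np.empty_like(boxes)
    out[:, 0] = boxes[:, 1]                # xmin
    out[:, 1] = boxes[:, 0]                # ymin
    out[:, 2] = boxes[:, 3] - boxes[:, 1]  # width  = xmax - xmin
    out[:, 3] = boxes[:, 2] - boxes[:, 0]  # height = ymax - ymin
    return out

print(yxyx_to_xywh([[10, 20, 50, 80]]))  # [[20. 10. 60. 40.]]
```

This matches what chaining `y1x1y2x2_to_x1y1x2y2` and the width/height subtraction does, but leaves the caller's array untouched.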
### Solve the global sequence alignment problem using the Needleman-Wunsch algorithm ``` import numpy as np

equal_score = 1
unequal_score = -1
space_score = -2

# the Needleman-Wunsch algorithm can produce negative scores
def createScoreMatrix(list1, list2, debug=False):
    lenList1, lenList2 = len(list1), len(list2)
    # initialize matrix
    scoreMatrix = np.zeros((lenList1+1, lenList2+1), dtype=int)
    for i in range(1, lenList1+1):
        scoreMatrix[i][0] = i * space_score
    for j in range(1, lenList2+1):
        scoreMatrix[0][j] = j * space_score
    # populate the matrix
    for i, x in enumerate(list1):
        for j, y in enumerate(list2):
            if x == y:
                scoreMatrix[i+1][j+1] = scoreMatrix[i][j] + equal_score
            else:
                scoreMatrix[i+1][j+1] = max(scoreMatrix[i][j+1] + space_score,
                                            scoreMatrix[i+1][j] + space_score,
                                            scoreMatrix[i][j] + unequal_score)
    if debug:
        print("score Matrix:")
        print(scoreMatrix)
    return scoreMatrix

list1 = [1, 2, 4, 6, 7, 8, 0]
list2 = [4, 5, 7, 1, 2, 0]
print(createScoreMatrix(list1, list2))

list1 = list("GCCCTAGCG")
list2 = list("GCGCAATG")
print(createScoreMatrix(list1, list2))

def traceBack(list1, list2, scoreMatrix):
    '''
    Return: alignedList1, alignedList2, commonSub
    '''
    commonSub = []
    alignedList1 = []
    alignedList2 = []
    i, j = scoreMatrix.shape[0]-1, scoreMatrix.shape[1]-1
    if i == 0 or j == 0:
        return list1, list2, commonSub
    else:
        while i != 0 and j != 0:
            # check moves in order: diagonal (up-left), up, left
            if list1[i-1] == list2[j-1]:
                commonSub.append(list1[i-1])
                alignedList1.append(list1[i-1])
                alignedList2.append(list2[j-1])
                i -= 1
                j -= 1
            elif scoreMatrix[i][j] == scoreMatrix[i-1][j-1] + unequal_score:
                alignedList1.append(list1[i-1])
                alignedList2.append(list2[j-1])
                i -= 1
                j -= 1
            elif scoreMatrix[i][j] == scoreMatrix[i-1][j] + space_score:
                alignedList1.append(list1[i-1])
                alignedList2.append('_')
                i -= 1
            else:  # scoreMatrix[i][j] == scoreMatrix[i][j-1] + space_score
                alignedList1.append('_')
                alignedList2.append(list2[j-1])
                j -= 1
        # backtracked to the leftmost column or the top row, but not yet at position (0, 0)
        while i > 0:
            alignedList1.append(list1[i-1])
            alignedList2.append('_')
            i -= 1
        while j > 0:
            alignedList2.append(list2[j-1])
            alignedList1.append('_')
            j -= 1
    alignedList1.reverse()
    alignedList2.reverse()
    commonSub.reverse()
    return alignedList1, alignedList2, commonSub

list1 = [1, 2, 4, 6, 7, 8, 0]
list2 = [4, 5, 7, 1, 2, 0]
alignedList1, alignedList2, commonSub = traceBack(list1, list2, createScoreMatrix(list1, list2))
print(alignedList1)
print(alignedList2)
print(commonSub)

def needleman_wunsch(list1, list2, debug=False):
    return traceBack(list1, list2, createScoreMatrix(list1, list2, debug))

list1 = list("GCCCTAGCG")
list2 = list("GCGCAATG")
alignedList1, alignedList2, commonSub = needleman_wunsch(list1, list2, True)
print(alignedList1)
print(alignedList2)
print(commonSub)

text1 = "this is a test for text alignment from xxxx"
text2 = "Hi, try A test for alignment , Heirish"
list1 = text1.lower().split(" ")
list2 = text2.lower().split(" ")
alignedList1, alignedList2, commonSub = needleman_wunsch(list1, list2)
print(alignedList1)
print(alignedList2)
print(commonSub) ```
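Given the scoring scheme above (match +1, mismatch -1, gap -2), the score of any finished alignment can be recomputed column by column; for an optimal alignment it should equal the bottom-right entry of the score matrix. A small illustrative helper, not part of the original notebook:

```python
equal_score, unequal_score, space_score = 1, -1, -2

def alignment_score(aligned1, aligned2):
    """Recompute the total score of a finished alignment, where '_' marks a gap."""
    assert len(aligned1) == len(aligned2)
    total = 0
    for a, b in zip(aligned1, aligned2):
        if a == '_' or b == '_':
            total += space_score   # gap column
        elif a == b:
            total += equal_score   # match
        else:
            total += unequal_score # mismatch
    return total

print(alignment_score(list("GC_G"), list("GCAG")))  # 1 + 1 - 2 + 1 = 1
```

This is handy for unit-testing `traceBack`: the recomputed score of its output should match `scoreMatrix[-1][-1]`.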
``` import numpy as np import tensorflow as tf from sklearn.utils import shuffle import re import time import collections import os def build_dataset(words, n_words, atleast=1): count = [['GO', 0], ['PAD', 1], ['EOS', 2], ['UNK', 3]] counter = collections.Counter(words).most_common(n_words) counter = [i for i in counter if i[1] >= atleast] count.extend(counter) dictionary = dict() for word, _ in count: dictionary[word] = len(dictionary) data = list() unk_count = 0 for word in words: index = dictionary.get(word, dictionary['UNK']) if index == dictionary['UNK']: unk_count += 1 data.append(index) count[3][1] = unk_count reversed_dictionary = dict(zip(dictionary.values(), dictionary.keys())) return data, count, dictionary, reversed_dictionary lines = open('movie_lines.txt', encoding='utf-8', errors='ignore').read().split('\n') conv_lines = open('movie_conversations.txt', encoding='utf-8', errors='ignore').read().split('\n') id2line = {} for line in lines: _line = line.split(' +++$+++ ') if len(_line) == 5: id2line[_line[0]] = _line[4] convs = [ ] for line in conv_lines[:-1]: _line = line.split(' +++$+++ ')[-1][1:-1].replace("'","").replace(" ","") convs.append(_line.split(',')) questions = [] answers = [] for conv in convs: for i in range(len(conv)-1): questions.append(id2line[conv[i]]) answers.append(id2line[conv[i+1]]) def clean_text(text): text = text.lower() text = re.sub(r"i'm", "i am", text) text = re.sub(r"he's", "he is", text) text = re.sub(r"she's", "she is", text) text = re.sub(r"it's", "it is", text) text = re.sub(r"that's", "that is", text) text = re.sub(r"what's", "what is", text) text = re.sub(r"where's", "where is", text) text = re.sub(r"how's", "how is", text) text = re.sub(r"\'ll", " will", text) text = re.sub(r"\'ve", " have", text) text = re.sub(r"\'re", " are", text) text = re.sub(r"\'d", " would", text) text = re.sub(r"won't", "will not", text) text = re.sub(r"can't", "cannot", text) text = re.sub(r"n't", " not", text) text = re.sub(r"n'",
"ng", text) text = re.sub(r"'bout", "about", text) text = re.sub(r"'til", "until", text) text = re.sub(r"[-()\"#/@;:<>{}`+=~|.!?,]", "", text) return ' '.join([i.strip() for i in filter(None, text.split())]) clean_questions = [] for question in questions: clean_questions.append(clean_text(question)) clean_answers = [] for answer in answers: clean_answers.append(clean_text(answer)) min_line_length = 2 max_line_length = 5 short_questions_temp = [] short_answers_temp = [] i = 0 for question in clean_questions: if len(question.split()) >= min_line_length and len(question.split()) <= max_line_length: short_questions_temp.append(question) short_answers_temp.append(clean_answers[i]) i += 1 short_questions = [] short_answers = [] i = 0 for answer in short_answers_temp: if len(answer.split()) >= min_line_length and len(answer.split()) <= max_line_length: short_answers.append(answer) short_questions.append(short_questions_temp[i]) i += 1 question_test = short_questions[500:550] answer_test = short_answers[500:550] short_questions = short_questions[:500] short_answers = short_answers[:500] concat_from = ' '.join(short_questions+question_test).split() vocabulary_size_from = len(list(set(concat_from))) data_from, count_from, dictionary_from, rev_dictionary_from = build_dataset(concat_from, vocabulary_size_from) print('vocab from size: %d'%(vocabulary_size_from)) print('Most common words', count_from[4:10]) print('Sample data', data_from[:10], [rev_dictionary_from[i] for i in data_from[:10]]) print('filtered vocab size:',len(dictionary_from)) print("% of vocab used: {}%".format(round(len(dictionary_from)/vocabulary_size_from,4)*100)) concat_to = ' '.join(short_answers+answer_test).split() vocabulary_size_to = len(list(set(concat_to))) data_to, count_to, dictionary_to, rev_dictionary_to = build_dataset(concat_to, vocabulary_size_to) print('vocab from size: %d'%(vocabulary_size_to)) print('Most common words', count_to[4:10]) print('Sample data', data_to[:10], [rev_dictionary_to[i] 
for i in data_to[:10]]) print('filtered vocab size:',len(dictionary_to)) print("% of vocab used: {}%".format(round(len(dictionary_to)/vocabulary_size_to,4)*100)) GO = dictionary_from['GO'] PAD = dictionary_from['PAD'] EOS = dictionary_from['EOS'] UNK = dictionary_from['UNK'] for i in range(len(short_answers)): short_answers[i] += ' EOS' class Chatbot: def __init__(self, size_layer, num_layers, embedded_size, from_dict_size, to_dict_size, learning_rate, batch_size): def cells(reuse=False): return tf.nn.rnn_cell.LSTMCell(size_layer,initializer=tf.orthogonal_initializer(), reuse=reuse) self.X = tf.placeholder(tf.int32, [None, None]) self.Y = tf.placeholder(tf.int32, [None, None]) self.X_seq_len = tf.placeholder(tf.int32, [None]) self.Y_seq_len = tf.placeholder(tf.int32, [None]) encoder_embeddings = tf.Variable(tf.random_uniform([from_dict_size, embedded_size], -1, 1)) decoder_embeddings = tf.Variable(tf.random_uniform([to_dict_size, embedded_size], -1, 1)) encoder_embedded = tf.nn.embedding_lookup(encoder_embeddings, self.X) main = tf.strided_slice(self.X, [0, 0], [batch_size, -1], [1, 1]) decoder_input = tf.concat([tf.fill([batch_size, 1], GO), main], 1) decoder_embedded = tf.nn.embedding_lookup(encoder_embeddings, decoder_input) attention_mechanism = tf.contrib.seq2seq.BahdanauAttention(num_units = size_layer, memory = encoder_embedded) rnn_cells = tf.contrib.seq2seq.AttentionWrapper(cell = tf.nn.rnn_cell.MultiRNNCell([cells() for _ in range(num_layers)]), attention_mechanism = attention_mechanism, attention_layer_size = size_layer) _, last_state = tf.nn.dynamic_rnn(rnn_cells, encoder_embedded, dtype = tf.float32) last_state = tuple(last_state[0][-1] for _ in range(num_layers)) with tf.variable_scope("decoder"): rnn_cells_dec = tf.nn.rnn_cell.MultiRNNCell([cells() for _ in range(num_layers)]) outputs, _ = tf.nn.dynamic_rnn(rnn_cells_dec, decoder_embedded, initial_state = last_state, dtype = tf.float32) self.logits = tf.layers.dense(outputs,to_dict_size) masks = 
tf.sequence_mask(self.Y_seq_len, tf.reduce_max(self.Y_seq_len), dtype=tf.float32) self.cost = tf.contrib.seq2seq.sequence_loss(logits = self.logits, targets = self.Y, weights = masks) self.optimizer = tf.train.AdamOptimizer(learning_rate = learning_rate).minimize(self.cost) size_layer = 128 num_layers = 2 embedded_size = 128 learning_rate = 0.001 batch_size = 16 epoch = 20 tf.reset_default_graph() sess = tf.InteractiveSession() model = Chatbot(size_layer, num_layers, embedded_size, len(dictionary_from), len(dictionary_to), learning_rate,batch_size) sess.run(tf.global_variables_initializer()) def str_idx(corpus, dic): X = [] for i in corpus: ints = [] for k in i.split(): ints.append(dic.get(k,UNK)) X.append(ints) return X X = str_idx(short_questions, dictionary_from) Y = str_idx(short_answers, dictionary_to) X_test = str_idx(question_test, dictionary_from) Y_test = str_idx(answer_test, dictionary_from) def pad_sentence_batch(sentence_batch, pad_int): padded_seqs = [] seq_lens = [] max_sentence_len = 10 for sentence in sentence_batch: padded_seqs.append(sentence + [pad_int] * (max_sentence_len - len(sentence))) seq_lens.append(10) return padded_seqs, seq_lens def check_accuracy(logits, Y): acc = 0 for i in range(logits.shape[0]): internal_acc = 0 count = 0 for k in range(len(Y[i])): try: if Y[i][k] == logits[i][k]: internal_acc += 1 count += 1 if Y[i][k] == EOS: break except: break acc += (internal_acc / count) return acc / logits.shape[0] for i in range(epoch): total_loss, total_accuracy = 0, 0 for k in range(0, (len(short_questions) // batch_size) * batch_size, batch_size): batch_x, seq_x = pad_sentence_batch(X[k: k+batch_size], PAD) batch_y, seq_y = pad_sentence_batch(Y[k: k+batch_size], PAD) predicted, loss, _ = sess.run([tf.argmax(model.logits,2), model.cost, model.optimizer], feed_dict={model.X:batch_x, model.Y:batch_y, model.X_seq_len:seq_x, model.Y_seq_len:seq_y}) total_loss += loss total_accuracy += check_accuracy(predicted,batch_y) total_loss /= 
(len(short_questions) // batch_size) total_accuracy /= (len(short_questions) // batch_size) print('epoch: %d, avg loss: %f, avg accuracy: %f'%(i+1, total_loss, total_accuracy)) for i in range(len(batch_x)): print('row %d'%(i+1)) print('QUESTION:',' '.join([rev_dictionary_from[n] for n in batch_x[i] if n not in [0,1,2,3]])) print('REAL ANSWER:',' '.join([rev_dictionary_to[n] for n in batch_y[i] if n not in[0,1,2,3]])) print('PREDICTED ANSWER:',' '.join([rev_dictionary_to[n] for n in predicted[i] if n not in[0,1,2,3]]),'\n') batch_x, seq_x = pad_sentence_batch(X_test[:batch_size], PAD) batch_y, seq_y = pad_sentence_batch(Y_test[:batch_size], PAD) predicted = sess.run(tf.argmax(model.logits,2), feed_dict={model.X:batch_x,model.X_seq_len:seq_x}) for i in range(len(batch_x)): print('row %d'%(i+1)) print('QUESTION:',' '.join([rev_dictionary_from[n] for n in batch_x[i] if n not in [0,1,2,3]])) print('REAL ANSWER:',' '.join([rev_dictionary_to[n] for n in batch_y[i] if n not in[0,1,2,3]])) print('PREDICTED ANSWER:',' '.join([rev_dictionary_to[n] for n in predicted[i] if n not in[0,1,2,3]]),'\n') ```
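The `pad_sentence_batch` helper above hard-codes `max_sentence_len = 10` and assumes every sentence is at most that long; a longer sentence would receive negative padding (i.e. none at all) and break the fixed-width batch. A minimal sketch that also truncates, assuming PAD id 1 as in the vocabulary built above:

```python
PAD = 1  # assumed PAD id, matching the special tokens defined earlier

def pad_or_truncate(batch, pad_int, max_len=10):
    """Pad every sentence to exactly max_len with pad_int, truncating
    longer sentences so every row in the batch has the same width."""
    padded = []
    for sent in batch:
        sent = sent[:max_len]  # truncate over-long sentences first
        padded.append(sent + [pad_int] * (max_len - len(sent)))
    return padded

print(pad_or_truncate([[5, 6], [7, 8, 9]], PAD, max_len=4))
# [[5, 6, 1, 1], [7, 8, 9, 1]]
```

With this variant, `seq_lens` could be computed as `min(len(sent), max_len)` per sentence instead of the constant 10 used in the notebook.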
## Regression Analysis: First Machine Learning Algorithm!

### Machine learning is an application of artificial intelligence (AI) that provides systems the __ability to automatically learn and improve from experience without being explicitly programmed__.

<img style="float: left;" src = "./img/ml_definition.png" width="600" height="600"> <img style="float: left;" src = "./img/traditionalVsml.png" width="600" height="600">

### Types of Machine Learning

<img style="float: left;" src = "./img/types-ml.png" width="700" height="600"> <br> <br> <img style="float: left;" src = "./img/ml-ex.png" width="800" height="700">

__Why use linear regression?__

1. Easy to use
2. Easy to interpret
3. Basis for many methods
4. Runs fast
5. Most people have heard about it :-)

### Libraries in Python for Linear Regression

The two most popular ones are

1. `scikit-learn`
2. `statsmodels`

We highly recommend learning `scikit-learn`, since it is also the general-purpose machine learning package in Python.

### Linear regression

Let's use `scikit-learn` for this example. Linear regression is of the form:

$y = \beta_0 + \beta_1x_1 + \beta_2x_2 + ... + \beta_nx_n$

- $y$ is what we have to predict: the dependent (response) variable
- $\beta_0$ is the intercept
- $\beta_1$ is the coefficient for $x_1$ (the first feature, an independent variable)
- $\beta_n$ is the coefficient for $x_n$ (the nth feature, an independent variable)

The $\beta$ values are called *model coefficients*. The model coefficients are estimated during fitting (in machine learning parlance, the weights are learned by the algorithm). The objective function is the least squares criterion. <br> **Least Squares Method**: identify the weights so that the overall solution minimizes the sum of the squares of the errors made in the results of every single equation.
[Wiki](https://en.wikipedia.org/wiki/Least_squares)

<img style="float: left;" src = "./img/lin_reg.jpg" width="600" height="600">

<h2> Model Building & Testing Methodology </h2>

<img src="./img/train_test.png" alt="Train & Test Methodology" width="700" height="600"> <br> <br> <br>

### Must-read blog: Interpretable Machine Learning by Christoph Molnar https://christophm.github.io/interpretable-ml-book/intro.html

```
# Step 1: Import packages
import pandas as pd
import numpy as np
import matplotlib
import matplotlib.pyplot as plt
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
import seaborn as sns
sns.set(color_codes=True)
%matplotlib inline

# Step 2: Load our data
df = pd.read_csv('Mall_Customers.csv')
df.rename(columns={'CustomerID':'id','Spending Score (1-100)':'score','Annual Income (k$)':'income'}, inplace=True)
df.head()  # visualize first 5 rows of data

df.tail()

# Step 3: Feature engineering - transform variables into suitable inputs for the machine learning algorithm
# transform the categorical variable Gender using one-hot encoding
gender_onhot = pd.get_dummies(df['Gender'])
gender_onhot.tail()

# Create input dataset aka X
X = pd.merge(df[['Age','score']], gender_onhot, left_index=True, right_index=True)
X.head()

sns.pairplot(X[['Age','score']])

print("Correlation between variables.........")
X.iloc[:,:4].corr()

# Create target variable
Y = df['income']
Y.head()

# Step 4: Split data into train & test sets
X_train, X_test, y_train, y_test = train_test_split(X, Y, test_size=0.10, random_state=35)
print('Shape of Training Xs:{}'.format(X_train.shape))
print('Shape of Test Xs:{}'.format(X_test.shape))

# Step 5: Build the linear regression model
learner = LinearRegression()          # initialize the linear regression model
learner.fit(X_train, y_train)         # train the linear regression model
y_predicted = learner.predict(X_test)
score = learner.score(X_test, y_test) # evaluate the model on the test set
```

### Interpretation
__Score__: R^2 (pronounced "R squared"), also called the __coefficient of determination__ of the prediction.

__Range of score values__: at most 1. A score of 1 is the best case, where the predicted values equal the actual values; a score of 0 means the model does no better than always predicting the mean of Y; the score can even be negative for a model that does worse than that.

__Formula for the score__: R^2 = 1 - u/v, where u is the residual sum of squares `((y_true - y_pred) ** 2).sum()` and v is the total sum of squares `((y_true - y_true.mean()) ** 2).sum()`

```
print(score)
print(y_predicted)

sns.boxplot(x = df['score'])

sns.distplot(df['score'])

# Step 6: Check accuracy of the model
df_new = pd.DataFrame({"true_income": y_test, "predicted_income": y_predicted})
df_new

# Step 7: Diagnostic analysis
from sklearn.metrics import mean_squared_error, r2_score
print("Intercept is at: %.2f" % (learner.intercept_))
# The coefficients
print('Coefficients: \n', learner.coef_)
# The mean squared error
print("Mean squared error: %.2f" % mean_squared_error(y_test, y_predicted))
# Explained variance score: 1 is perfect prediction
print('Variance score: %.4f' % r2_score(y_test, y_predicted))
```
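The score formula quoted above is easy to verify by hand on a toy example; this sketch recomputes R^2 from the residual and total sums of squares using plain NumPy (the numbers are made up for illustration):

```python
import numpy as np

y_true = np.array([1.0, 2.0, 3.0, 4.0])
y_pred = np.array([1.1, 1.9, 3.2, 3.8])

u = ((y_true - y_pred) ** 2).sum()         # residual sum of squares = 0.10
v = ((y_true - y_true.mean()) ** 2).sum()  # total sum of squares = 5.0
r2 = 1 - u / v
print(round(r2, 4))  # 0.98
```

The same value comes back from `learner.score(X_test, y_test)` and `r2_score(y_test, y_predicted)` when applied to a fitted model's predictions.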
# Detecting COVID-19 with Chest X-Rays Classify Chest X-rays into 3 classes: Normal, Viral Pneumonia, COVID-19 Used The COVID-19 Radiography Dataset on Kaggle: https://www.kaggle.com/tawsifurrahman/covid19-radiography-database ``` %matplotlib inline import os import shutil import random import torch import torch.nn as nn import torch.nn.functional as F from torchvision import datasets, transforms, models from torchvision.utils import make_grid from torch.utils.data import DataLoader import numpy as np import pandas as pd import matplotlib.pyplot as plt import seaborn as sn from sklearn.metrics import confusion_matrix from PIL import Image from matplotlib import pyplot as plt torch.manual_seed(0) print('Using PyTorch version', torch.__version__) import warnings from IPython.display import display warnings.filterwarnings('ignore') ``` # Sort Files into Training and Testing ``` class_names = ["normal", "viral", "covid"] root = "COVID-19 Radiography Database" source_names = ["NORMAL", "COVID-19", "Viral Pneumonia"] if os.path.isdir(os.path.join(root, 'train', source_names[1])): os.mkdir(os.path.join(root, 'test')) for i, d in enumerate(source_names): os.rename(os.path.join(root, d), os.path.join(root, class_names[i])) for c in class_names: os.mkdir(os.path.join(root, 'test', c)) for c in class_names: images = [x for x in os.listdir(os.path.join(root, c)) if x.lower().endswith('png')] print(len(images)) select = random.sample(images, 30) for image in select: source = os.path.join(root, c, image) target = os.path.join(root, 'test', c, image) shutil.move(source, target) ``` # Transform Dataset ``` root = '../COVID-19/COVID-19 Radiography Database/' train_transform = transforms.Compose([ transforms.Resize(size=(224, 224)), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]) ]) test_transform = transforms.Compose([ transforms.Resize(size=(224, 224)), transforms.ToTensor(), 
transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]) ]) train_dset = datasets.ImageFolder(os.path.join(root, 'train'), transform=train_transform) test_dset = datasets.ImageFolder(os.path.join(root, 'test'), transform=test_transform) ``` # Prepare Dataloader ``` class_names = train_dset.classes class_names trainloader = DataLoader(train_dset, batch_size=6, shuffle=True) testloader = DataLoader(test_dset, batch_size=6, shuffle=False) print(f'Num training batches: {len(trainloader)}') print(f'Num testing batches: {len(testloader)}') len(train_dset) len(test_dset) ``` # Visualize Images ``` for images, labels in trainloader: break im = make_grid(images, nrow=3) inv_norm = transforms.Compose([ transforms.Normalize([-0.485/0.229, -0.456/0.224, -0.406/0.225], [1/0.229, 1/0.224, 1/0.225]) ]) new_im = inv_norm(im) print(labels.numpy()) plt.figure(figsize=(12,4)) plt.imshow(np.transpose(new_im.numpy(), (1, 2, 0))) ResNetmodel = models.resnet18(pretrained=True) ResNetmodel for param in ResNetmodel.parameters(): param.requires_grad = False ResNetmodel.fc = nn.Sequential(nn.Linear(512, 64), nn.ReLU(inplace=True), nn.Dropout(0.5), nn.Linear(64, 3), nn.LogSoftmax(dim=1)) ResNetmodel criterion = nn.CrossEntropyLoss() optimizer = torch.optim.Adam(ResNetmodel.fc.parameters(), lr=0.001) import time start_time = time.time() trn_btch_lmt = 800 tst_btch_lmt = 300 epochs = 15 train_losses = [] test_losses = [] train_correct = [] test_correct = [] for i in range(epochs): trn_corr = 0 tst_corr = 0 for b, (X_train, y_train) in enumerate(trainloader): if b == trn_btch_lmt: break y_pred = ResNetmodel(X_train) predicted = torch.max(y_pred, 1)[1] batch_corr = (predicted == y_train).sum() trn_corr += batch_corr loss = criterion(y_pred, y_train) if (b+1)%200 == 0: print(f'EPOCH: {i} LOSS: {loss.item()}') optimizer.zero_grad() loss.backward() optimizer.step() train_losses.append(loss.item()) train_correct.append(trn_corr) with torch.no_grad(): for b, (X_test, y_test) in 
enumerate(testloader): if b == tst_btch_lmt: break y_pred = ResNetmodel(X_test) predicted = torch.max(y_pred, 1)[1] tst_corr += (predicted == y_test).sum() loss = criterion(y_pred, y_test) test_losses.append(loss.item()) test_correct.append(tst_corr) print(f'training took {(time.time() - start_time)/60} minutes') train_correct[-1].item()/len(train_dset) torch.save(ResNetmodel.state_dict(), 'NewResNet.pt') ```
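The earlier visualization cell undoes the ImageNet normalization with `transforms.Normalize([-m/s, ...], [1/s, ...])`. That works because applying Normalize(-m/s, 1/s) to an already-normalized value algebraically recovers the original: ((x - m)/s + m/s) * s = x. A scalar sketch of the identity:

```python
m, s = 0.485, 0.229  # one channel's ImageNet mean/std, as used above
x = 0.7              # an arbitrary pixel value

x_norm = (x - m) / s                    # forward: Normalize(m, s)
x_back = (x_norm - (-m / s)) / (1 / s)  # inverse: Normalize(-m/s, 1/s)
print(abs(x_back - x) < 1e-9)  # True
```

The same cancellation happens per channel inside the `inv_norm` transform, which is why `new_im` can be shown with `plt.imshow` directly.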
```
#hide
#skip
! [ -e /content ] && pip install -Uqq fastai  # upgrade fastai on colab

#export
from fastai.basics import *
from fastai.text.core import *
from fastai.text.data import *
from fastai.text.models.core import *
from fastai.text.models.awdlstm import *
from fastai.callback.rnn import *
from fastai.callback.progress import *

#hide
from nbdev.showdoc import *

#default_exp text.learner
```
# Learner for the text application

> All the functions necessary to build `Learner` suitable for transfer learning in NLP

The most important functions of this module are `language_model_learner` and `text_classifier_learner`. They will help you define a `Learner` using a pretrained model. See the [text tutorial](http://docs.fast.ai/tutorial.text) for examples of use.

## Loading a pretrained model

In text, to load a pretrained model, we need to adapt the embeddings of the vocabulary used for the pre-training to the vocabulary of our current corpus.
```
#export
def match_embeds(old_wgts, old_vocab, new_vocab):
    "Convert the embedding in `old_wgts` to go from `old_vocab` to `new_vocab`."
    bias, wgts = old_wgts.get('1.decoder.bias', None), old_wgts['0.encoder.weight']
    wgts_m = wgts.mean(0)
    new_wgts = wgts.new_zeros((len(new_vocab),wgts.size(1)))
    if bias is not None:
        bias_m = bias.mean(0)
        new_bias = bias.new_zeros((len(new_vocab),))
    old_o2i = old_vocab.o2i if hasattr(old_vocab, 'o2i') else {w:i for i,w in enumerate(old_vocab)}
    for i,w in enumerate(new_vocab):
        idx = old_o2i.get(w, -1)
        new_wgts[i] = wgts[idx] if idx>=0 else wgts_m
        if bias is not None: new_bias[i] = bias[idx] if idx>=0 else bias_m
    old_wgts['0.encoder.weight'] = new_wgts
    if '0.encoder_dp.emb.weight' in old_wgts: old_wgts['0.encoder_dp.emb.weight'] = new_wgts.clone()
    old_wgts['1.decoder.weight'] = new_wgts.clone()
    if bias is not None: old_wgts['1.decoder.bias'] = new_bias
    return old_wgts
```
For words in `new_vocab` that don't have a corresponding match in `old_vocab`, we use the mean of all pretrained embeddings.
``` wgts = {'0.encoder.weight': torch.randn(5,3)} new_wgts = match_embeds(wgts.copy(), ['a', 'b', 'c'], ['a', 'c', 'd', 'b']) old,new = wgts['0.encoder.weight'],new_wgts['0.encoder.weight'] test_eq(new[0], old[0]) test_eq(new[1], old[2]) test_eq(new[2], old.mean(0)) test_eq(new[3], old[1]) #hide #With bias wgts = {'0.encoder.weight': torch.randn(5,3), '1.decoder.bias': torch.randn(5)} new_wgts = match_embeds(wgts.copy(), ['a', 'b', 'c'], ['a', 'c', 'd', 'b']) old_w,new_w = wgts['0.encoder.weight'],new_wgts['0.encoder.weight'] old_b,new_b = wgts['1.decoder.bias'], new_wgts['1.decoder.bias'] test_eq(new_w[0], old_w[0]) test_eq(new_w[1], old_w[2]) test_eq(new_w[2], old_w.mean(0)) test_eq(new_w[3], old_w[1]) test_eq(new_b[0], old_b[0]) test_eq(new_b[1], old_b[2]) test_eq(new_b[2], old_b.mean(0)) test_eq(new_b[3], old_b[1]) #export def _get_text_vocab(dls): vocab = dls.vocab if isinstance(vocab, L): vocab = vocab[0] return vocab #export def load_ignore_keys(model, wgts): "Load `wgts` in `model` ignoring the names of the keys, just taking parameters in order" sd = model.state_dict() for k1,k2 in zip(sd.keys(), wgts.keys()): sd[k1].data = wgts[k2].data.clone() return model.load_state_dict(sd) #export def _rm_module(n): t = n.split('.') for i in range(len(t)-1, -1, -1): if t[i] == 'module': t.pop(i) break return '.'.join(t) #export #For previous versions compatibility, remove for release def clean_raw_keys(wgts): keys = list(wgts.keys()) for k in keys: t = k.split('.module') if f'{_rm_module(k)}_raw' in keys: del wgts[k] return wgts #export #For previous versions compatibility, remove for release def load_model_text(file, model, opt, with_opt=None, device=None, strict=True): "Load `model` from `file` along with `opt` (if available, and if `with_opt`)" distrib_barrier() if isinstance(device, int): device = torch.device('cuda', device) elif device is None: device = 'cpu' state = torch.load(file, map_location=device) hasopt = set(state)=={'model', 'opt'} model_state = 
state['model'] if hasopt else state get_model(model).load_state_dict(clean_raw_keys(model_state), strict=strict) if hasopt and ifnone(with_opt,True): try: opt.load_state_dict(state['opt']) except: if with_opt: warn("Could not load the optimizer state.") elif with_opt: warn("Saved file doesn't contain an optimizer state.") #export @log_args(but_as=Learner.__init__) @delegates(Learner.__init__) class TextLearner(Learner): "Basic class for a `Learner` in NLP." def __init__(self, dls, model, alpha=2., beta=1., moms=(0.8,0.7,0.8), **kwargs): super().__init__(dls, model, moms=moms, **kwargs) self.add_cbs([ModelResetter(), RNNRegularizer(alpha=alpha, beta=beta)]) def save_encoder(self, file): "Save the encoder to `file` in the model directory" if rank_distrib(): return # don't save if child proc encoder = get_model(self.model)[0] if hasattr(encoder, 'module'): encoder = encoder.module torch.save(encoder.state_dict(), join_path_file(file, self.path/self.model_dir, ext='.pth')) def load_encoder(self, file, device=None): "Load the encoder `file` from the model directory, optionally ensuring it's on `device`" encoder = get_model(self.model)[0] if device is None: device = self.dls.device if hasattr(encoder, 'module'): encoder = encoder.module distrib_barrier() wgts = torch.load(join_path_file(file,self.path/self.model_dir, ext='.pth'), map_location=device) encoder.load_state_dict(clean_raw_keys(wgts)) self.freeze() return self def load_pretrained(self, wgts_fname, vocab_fname, model=None): "Load a pretrained model and adapt it to the data vocabulary." 
old_vocab = load_pickle(vocab_fname) new_vocab = _get_text_vocab(self.dls) distrib_barrier() wgts = torch.load(wgts_fname, map_location = lambda storage,loc: storage) if 'model' in wgts: wgts = wgts['model'] #Just in case the pretrained model was saved with an optimizer wgts = match_embeds(wgts, old_vocab, new_vocab) load_ignore_keys(self.model if model is None else model, clean_raw_keys(wgts)) self.freeze() return self #For previous versions compatibility. Remove at release @delegates(load_model_text) def load(self, file, with_opt=None, device=None, **kwargs): if device is None: device = self.dls.device if self.opt is None: self.create_opt() file = join_path_file(file, self.path/self.model_dir, ext='.pth') load_model_text(file, self.model, self.opt, device=device, **kwargs) return self ``` Adds a `ModelResetter` and an `RNNRegularizer` with `alpha` and `beta` to the callbacks, the rest is the same as `Learner` init. This `Learner` adds functionality to the base class: ``` show_doc(TextLearner.load_pretrained) ``` `wgts_fname` should point to the weights of the pretrained model and `vocab_fname` to the vocabulary used to pretrain it. ``` show_doc(TextLearner.save_encoder) ``` The model directory is `Learner.path/Learner.model_dir`. ``` show_doc(TextLearner.load_encoder) ``` ## Language modeling predictions For language modeling, the predict method is quite different from the other applications, which is why it needs its own subclass. 
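Before looking at the implementation below, the core sampling step — temperature-scale a probability vector, renormalize, then draw an index — can be sketched with the stdlib alone. The probabilities below are made up, and `sample_next` is a toy helper, not fastai code:

```python
import random

def sample_next(probs, temperature=1.0, rng=random):
    """Draw an index from `probs` after temperature scaling.

    temperature < 1 sharpens the distribution, temperature > 1 flattens it.
    """
    scaled = [p ** (1.0 / temperature) for p in probs]
    total = sum(scaled)
    weights = [p / total for p in scaled]
    # random.choices performs the multinomial draw over the weights
    return rng.choices(range(len(weights)), weights=weights, k=1)[0]

probs = [0.1, 0.7, 0.2]              # hypothetical next-token probabilities
idx = sample_next(probs, temperature=0.5)
```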
``` #export def decode_spec_tokens(tokens): "Decode the special tokens in `tokens`" new_toks,rule,arg = [],None,None for t in tokens: if t in [TK_MAJ, TK_UP, TK_REP, TK_WREP]: rule = t elif rule is None: new_toks.append(t) elif rule == TK_MAJ: new_toks.append(t[:1].upper() + t[1:].lower()) rule = None elif rule == TK_UP: new_toks.append(t.upper()) rule = None elif arg is None: try: arg = int(t) except: rule = None else: if rule == TK_REP: new_toks.append(t * arg) else: new_toks += [t] * arg return new_toks test_eq(decode_spec_tokens(['xxmaj', 'text']), ['Text']) test_eq(decode_spec_tokens(['xxup', 'text']), ['TEXT']) test_eq(decode_spec_tokens(['xxrep', '3', 'a']), ['aaa']) test_eq(decode_spec_tokens(['xxwrep', '3', 'word']), ['word', 'word', 'word']) #export @log_args(but_as=TextLearner.__init__) class LMLearner(TextLearner): "Add functionality to `TextLearner` when dealing with a language model" def predict(self, text, n_words=1, no_unk=True, temperature=1., min_p=None, no_bar=False, decoder=decode_spec_tokens, only_last_word=False): "Return `text` and the `n_words` that come after" self.model.reset() idxs = idxs_all = self.dls.test_dl([text]).items[0].to(self.dls.device) if no_unk: unk_idx = self.dls.vocab.index(UNK) for _ in (range(n_words) if no_bar else progress_bar(range(n_words), leave=False)): with self.no_bar(): preds,_ = self.get_preds(dl=[(idxs[None],)]) res = preds[0][-1] if no_unk: res[unk_idx] = 0. if min_p is not None: if (res >= min_p).float().sum() == 0: warn(f"There is no item with probability >= {min_p}, try a lower value.") else: res[res < min_p] = 0. 
if temperature != 1.: res.pow_(1 / temperature) idx = torch.multinomial(res, 1).item() idxs = idxs_all = torch.cat([idxs_all, idxs.new([idx])]) if only_last_word: idxs = idxs[-1][None] num = self.dls.train_ds.numericalize tokens = [num.vocab[i] for i in idxs_all if num.vocab[i] not in [BOS, PAD]] sep = self.dls.train_ds.tokenizer.sep return sep.join(decoder(tokens)) @delegates(Learner.get_preds) def get_preds(self, concat_dim=1, **kwargs): return super().get_preds(concat_dim=1, **kwargs) show_doc(LMLearner, title_level=3) show_doc(LMLearner.predict) ``` The words are picked randomly among the predictions, depending on the probability of each index. `no_unk` means we never pick the `UNK` token, `temperature` is applied to the predictions, and if `min_p` is passed, we don't consider the indices with a probability lower than it. Set `no_bar` to `True` if you don't want any progress bar, and you can pass along a custom `decoder` to process the predicted tokens. ## `Learner` convenience functions ``` #export from fastai.text.models.core import _model_meta #export def _get_text_vocab(dls): vocab = dls.vocab if isinstance(vocab, L): vocab = vocab[0] return vocab #export @log_args(to_return=True, but_as=Learner.__init__) @delegates(Learner.__init__) def language_model_learner(dls, arch, config=None, drop_mult=1., backwards=False, pretrained=True, pretrained_fnames=None, **kwargs): "Create a `Learner` with a language model from `dls` and `arch`." 
vocab = _get_text_vocab(dls) model = get_language_model(arch, len(vocab), config=config, drop_mult=drop_mult) meta = _model_meta[arch] learn = LMLearner(dls, model, loss_func=CrossEntropyLossFlat(), splitter=meta['split_lm'], **kwargs) url = 'url_bwd' if backwards else 'url' if pretrained or pretrained_fnames: if pretrained_fnames is not None: fnames = [learn.path/learn.model_dir/f'{fn}.{ext}' for fn,ext in zip(pretrained_fnames, ['pth', 'pkl'])] else: if url not in meta: warn("There are no pretrained weights for that architecture yet!") return learn model_path = untar_data(meta[url] , c_key='model') try: fnames = [list(model_path.glob(f'*.{ext}'))[0] for ext in ['pth', 'pkl']] except IndexError: print(f'The model in {model_path} is incomplete, download again'); raise learn = learn.load_pretrained(*fnames) return learn ``` You can use the `config` to customize the architecture used (change the values from `awd_lstm_lm_config` for this), `pretrained` will use fastai's pretrained model for this `arch` (if available) or you can pass specific `pretrained_fnames` containing your own pretrained model and the corresponding vocabulary. All other arguments are passed to `Learner`. ``` path = untar_data(URLs.IMDB_SAMPLE) df = pd.read_csv(path/'texts.csv') dls = TextDataLoaders.from_df(df, path=path, text_col='text', is_lm=True, valid_col='is_valid') learn = language_model_learner(dls, AWD_LSTM) ``` You can then use the `.predict` method to generate new text. ``` learn.predict('This movie is about', n_words=20) ``` By default the entire sentence is fed again to the model after each predicted word; this little trick improves the quality of the generated text. If you want to feed only the last word, specify the argument `only_last_word`. 
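The feeding strategy just described — whole generated text versus only the last word — can be made concrete with a toy loop. `fake_model` here is a stand-in that maps a context tuple to the next token, not an actual language model:

```python
def generate(first, n_words, fake_model, only_last_word=False):
    """Toy generation loop mirroring the two feeding strategies."""
    ctx, out = list(first), list(first)
    for _ in range(n_words):
        nxt = fake_model(tuple(ctx))
        out.append(nxt)
        # either feed the whole generated text back, or just the last word
        ctx = [nxt] if only_last_word else list(out)
    return out

echo_len = lambda ctx: str(len(ctx))   # fake model: emits its context length
full = generate(['a'], 3, echo_len)                        # ['a', '1', '2', '3']
last = generate(['a'], 3, echo_len, only_last_word=True)   # ['a', '1', '1', '1']
```

With the full context the fake model sees a growing input each step; with `only_last_word=True` it always sees a single token, which is why its output never changes.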
``` learn.predict('This movie is about', n_words=20, only_last_word=True) #export @log_args(to_return=True, but_as=Learner.__init__) @delegates(Learner.__init__) def text_classifier_learner(dls, arch, seq_len=72, config=None, backwards=False, pretrained=True, drop_mult=0.5, n_out=None, lin_ftrs=None, ps=None, max_len=72*20, y_range=None, **kwargs): "Create a `Learner` with a text classifier from `dls` and `arch`." vocab = _get_text_vocab(dls) if n_out is None: n_out = get_c(dls) assert n_out, "`n_out` is not defined, and could not be inferred from data, set `dls.c` or pass `n_out`" model = get_text_classifier(arch, len(vocab), n_out, seq_len=seq_len, config=config, y_range=y_range, drop_mult=drop_mult, lin_ftrs=lin_ftrs, ps=ps, max_len=max_len) meta = _model_meta[arch] learn = TextLearner(dls, model, splitter=meta['split_clas'], **kwargs) url = 'url_bwd' if backwards else 'url' if pretrained: if url not in meta: warn("There are no pretrained weights for that architecture yet!") return learn model_path = untar_data(meta[url], c_key='model') try: fnames = [list(model_path.glob(f'*.{ext}'))[0] for ext in ['pth', 'pkl']] except IndexError: print(f'The model in {model_path} is incomplete, download again'); raise learn = learn.load_pretrained(*fnames, model=learn.model[0]) learn.freeze() return learn ``` You can use the `config` to customize the architecture used (change the values from `awd_lstm_clas_config` for this), `pretrained` will use fastai's pretrained model for this `arch` (if available). `drop_mult` is a global multiplier applied to control all dropouts. `n_out` is usually inferred from the `dls` but you may pass it. The model uses a `SentenceEncoder`, which means the texts are passed `seq_len` tokens at a time, and will only compute the gradients on the last `max_len` steps. `lin_ftrs` and `ps` are passed to `get_text_classifier`. All other arguments are passed to `Learner`. 
``` path = untar_data(URLs.IMDB_SAMPLE) df = pd.read_csv(path/'texts.csv') dls = TextDataLoaders.from_df(df, path=path, text_col='text', label_col='label', valid_col='is_valid') learn = text_classifier_learner(dls, AWD_LSTM) ``` ## Show methods - ``` #export @typedispatch def show_results(x: LMTensorText, y, samples, outs, ctxs=None, max_n=10, **kwargs): if ctxs is None: ctxs = get_empty_df(min(len(samples), max_n)) for i,l in enumerate(['input', 'target']): ctxs = [b.show(ctx=c, label=l, **kwargs) for b,c,_ in zip(samples.itemgot(i),ctxs,range(max_n))] ctxs = [b.show(ctx=c, label='pred', **kwargs) for b,c,_ in zip(outs.itemgot(0),ctxs,range(max_n))] display_df(pd.DataFrame(ctxs)) return ctxs #export @typedispatch def show_results(x: TensorText, y, samples, outs, ctxs=None, max_n=10, trunc_at=150, **kwargs): if ctxs is None: ctxs = get_empty_df(min(len(samples), max_n)) samples = L((s[0].truncate(trunc_at),*s[1:]) for s in samples) ctxs = show_results[object](x, y, samples, outs, ctxs=ctxs, max_n=max_n, **kwargs) display_df(pd.DataFrame(ctxs)) return ctxs #export @typedispatch def plot_top_losses(x: TensorText, y:TensorCategory, samples, outs, raws, losses, trunc_at=150, **kwargs): rows = get_empty_df(len(samples)) samples = L((s[0].truncate(trunc_at),*s[1:]) for s in samples) for i,l in enumerate(['input', 'target']): rows = [b.show(ctx=c, label=l, **kwargs) for b,c in zip(samples.itemgot(i),rows)] outs = L(o + (TitledFloat(r.max().item()), TitledFloat(l.item())) for o,r,l in zip(outs, raws, losses)) for i,l in enumerate(['predicted', 'probability', 'loss']): rows = [b.show(ctx=c, label=l, **kwargs) for b,c in zip(outs.itemgot(i),rows)] display_df(pd.DataFrame(rows)) ``` ## Export - ``` #hide from nbdev.export import notebook2script notebook2script() ```
``` import numpy as np import tensorflow as tf from tensorflow import keras from IPython.display import Image import matplotlib.pyplot as plt import matplotlib.cm as cm from tensorflow.compat.v1 import ConfigProto from tensorflow.compat.v1 import InteractiveSession config = ConfigProto() config.gpu_options.allow_growth = True session = InteractiveSession(config=config) model_builder = keras.applications.xception.Xception img_size = (299, 299) preprocess_input = keras.applications.xception.preprocess_input decode_predictions = keras.applications.xception.decode_predictions last_conv_layer_name = "block14_sepconv2_act" classifier_layer_names = [ "avg_pool", "predictions", ] img_path = './dog.jpeg' display(Image(img_path)) def get_img_array(img_path, size): img = keras.preprocessing.image.load_img(img_path, target_size=size) array = keras.preprocessing.image.img_to_array(img) array = np.expand_dims(array, axis=0) return array def make_gradcam_heatmap( img_array, model, last_conv_layer_name, classifier_layer_names ): last_conv_layer = model.get_layer(last_conv_layer_name) last_conv_layer_model = keras.Model(model.inputs, last_conv_layer.output) classifier_input = keras.Input(shape=last_conv_layer.output.shape[1:]) x = classifier_input for layer_name in classifier_layer_names: x = model.get_layer(layer_name)(x) classifier_model = keras.Model(classifier_input, x) with tf.GradientTape() as tape: last_conv_layer_output = last_conv_layer_model(img_array) tape.watch(last_conv_layer_output) preds = classifier_model(last_conv_layer_output) top_pred_index = tf.argmax(preds[0]) top_class_channel = preds[:, top_pred_index] grads = tape.gradient(top_class_channel, last_conv_layer_output) pooled_grads = tf.reduce_mean(grads, axis=(0, 1, 2)) last_conv_layer_output = last_conv_layer_output.numpy()[0] pooled_grads = pooled_grads.numpy() for i in range(pooled_grads.shape[-1]): last_conv_layer_output[:, :, i] *= pooled_grads[i] heatmap = np.mean(last_conv_layer_output, axis=-1) heatmap 
= np.maximum(heatmap, 0) / np.max(heatmap) return heatmap img_array = preprocess_input(get_img_array(img_path, size=img_size)) model = model_builder(weights="imagenet") preds = model.predict(img_array) heatmap = make_gradcam_heatmap( img_array, model, last_conv_layer_name, classifier_layer_names ) plt.imshow(heatmap) plt.title("Predicted: {}".format(decode_predictions(preds, top=1)[0][0][1].upper().replace('_', ' '))) plt.show() img = keras.preprocessing.image.load_img(img_path) img = keras.preprocessing.image.img_to_array(img) heatmap = np.uint8(255 * heatmap) jet = cm.get_cmap("jet") jet_colors = jet(np.arange(256))[:, :3] jet_heatmap = jet_colors[heatmap] jet_heatmap = keras.preprocessing.image.array_to_img(jet_heatmap) jet_heatmap = jet_heatmap.resize((img.shape[1], img.shape[0])) jet_heatmap = keras.preprocessing.image.img_to_array(jet_heatmap) superimposed_img = jet_heatmap * 0.6 + img superimposed_img = keras.preprocessing.image.array_to_img(superimposed_img) save_path = "." + img_path.split('.')[-2] + '-cam.jpg' superimposed_img.save(save_path) display(Image(save_path)) ``` # References [1] [Grad-CAM class activation visualization](https://keras.io/examples/vision/grad_cam/) [2] [Grad-CAM: Visual Explanations from Deep Networks via Gradient-based Localization](https://arxiv.org/abs/1610.02391)
# Introduction to the Semantic Web ## Brief History * 1969: ARPAnet delivers first message * 1983: TCP/IP added to internet * 1989: WWW Invented along with HTTP proposed and implemented by [Tim Berners-Lee](https://en.wikipedia.org/wiki/Tim_Berners-Lee) * 1999: Term "Semantic Web" defined by [Tim Berners-Lee](https://en.wikipedia.org/wiki/Tim_Berners-Lee) * 1999: RDF recommendation adopted by W3C * 2004: RDF 1.0 spec published ## Key Definitions [Semantic Web](https://www.w3.org/standards/semanticweb/) In addition to the classic “Web of documents” W3C is helping to build a technology stack to support a “Web of data,” the sort of data you find in databases. The ultimate goal of the Web of data is to enable computers to do more useful work and to develop systems that can support trusted interactions over the network. The term “Semantic Web” refers to W3C’s vision of the Web of linked data. Semantic Web technologies enable people to create data stores on the Web, build vocabularies, and write rules for handling data. Linked data are empowered by technologies such as RDF, SPARQL, OWL, and SKOS. [Linked Data](https://www.w3.org/standards/semanticweb/data.html) It is important to have the huge amount of data on the Web available in a standard format, reachable and manageable by Semantic Web tools. Furthermore, not only does the Semantic Web need access to data, but relationships among data should be made available, too, to create a Web of Data (as opposed to a sheer collection of datasets). This collection of interrelated datasets on the Web can also be referred to as Linked Data. [RDF (Resource Description Framework)](https://en.wikipedia.org/wiki/Resource_Description_Framework) The RDF data model is similar to classical conceptual modeling approaches (such as entity–relationship or class diagrams). It is based on the idea of making statements about resources (in particular web resources) in expressions of the form subject–predicate–object, known as triples. 
The subject denotes the resource, and the predicate denotes traits or aspects of the resource, and expresses a relationship between the subject and the object. [Ontology](https://www.w3.org/standards/semanticweb/ontology.html) On the Semantic Web, vocabularies define the concepts and relationships (also referred to as “terms”) used to describe and represent an area of concern. Vocabularies are used to classify the terms that can be used in a particular application, characterize possible relationships, and define possible constraints on using those terms. In practice, vocabularies can be very complex (with several thousands of terms) or very simple (describing one or two concepts only). There is no clear division between what is referred to as “vocabularies” and “ontologies”. The trend is to use the word “ontology” for more complex, and possibly quite formal collection of terms, whereas “vocabulary” is used when such strict formalism is not necessarily used or only in a very loose sense. Vocabularies are the basic building blocks for inference techniques on the Semantic Web. ## References and Resources https://github.com/semantalytics/awesome-semantic-web ## What is actually in a URL? ``` from urllib.parse import urlparse urlparse("https://www.w3.org/People/Berners-Lee/card#i") ``` ## Building the URI (Uniform Resource Identifier) For most of our cases, the scheme will be http or https, but it can really be any protocol used to communicate over a network. The netloc, also known as the authority, is the network device that hosts the data being referenced by the URI. The path represents the location of the component at the authority from which the data can be retrieved. The fragment portion is part of the URN (Uniform Resource Name) and signifies which object at the URL the URI is pointing to. Lastly, from here on we will be using the term IRI (Internationalized Resource Identifier), an internationalized version of the URI. 
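The pieces just described map directly onto the fields `urlparse` returns; for the card URL from the earlier cell:

```python
from urllib.parse import urlparse

parts = urlparse("https://www.w3.org/People/Berners-Lee/card#i")
# scheme → protocol, netloc → authority, path → location, fragment → object at the URL
print(parts.scheme)    # https
print(parts.netloc)    # www.w3.org
print(parts.path)      # /People/Berners-Lee/card
print(parts.fragment)  # i
```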
## Working with RDF data RDF data is an implementation of an unstructured graph data structure. Not all RDF can be cleanly placed into a relational database, but all relational database data can be cleanly represented as a graph, where the Subject is the IRI pointing to the primary key, the Predicate is the IRI pointing to the definition of the column name in a table, and the Object is either a literal value or an IRI pointing to the foreign key. Data in RDFs can be one of 3 types: 1. IRI * When used in an RDF, the IRI is used to denote the absolute or relative location of the resource being referenced. 2. Literal * A basic value that is not an IRI, such as a string, an integer, or an instance of a concrete class. 3. Blank Node * An anonymous reference to an object without an IRI. Usually used as a container type to reference a collection of RDF statements * Subject * IRI * Blank Node * Predicate * IRI * Object * IRI (Denotes a foreign key when describing relational database data) * Literal * Blank Node Example: *Table Name: People* |ID|Name|Age|Type|Knows| |--|----|---|----|------| |0|Tory|32|Human|| |1|Clyde|13|Cat|0| Using the above table, we could make the following RDF statements: ```text Subject(People#0) -> Predicate(has_name) -> Object(Tory) Subject(People#1) -> Predicate(knows) -> Object(People#0) ``` ``` !pip install rdflib rdflib-jsonld import rdflib from rdflib.namespace import FOAF , XSD # create a Graph to store our RDF objects graph = rdflib.Graph() # Declare the namespace that will hold our resources ns = rdflib.Namespace("http://example.org") # Declare new types Person = rdflib.URIRef(ns + "/person") Human = rdflib.URIRef(ns + "/human") Cat = rdflib.URIRef(ns + "/cat") # Create our People Tory = rdflib.URIRef(Person + "/0") Clyde = rdflib.URIRef(Person + "/1") # Start populating the graph graph.add( (Tory, FOAF.name, rdflib.Literal("Tory")) ) graph.add( (Tory, rdflib.RDF.type, Human ) ) graph.add( (Tory, FOAF.age, rdflib.Literal(32) ) ) graph.add( 
(Clyde, FOAF.name, rdflib.Literal("Clyde")) ) graph.add( (Clyde, rdflib.RDF.type, Cat) ) graph.add( (Clyde, FOAF.age, rdflib.Literal(13) ) ) graph.add( (Clyde, FOAF.knows, Tory ) ) for triple in graph: print(triple) ``` ## RDF Serialization Formats Some common serialization formats include: * XML (Extensible Markup Language) * N3 (Notation3) * TTL (Terse RDF Triple Language) * JSON-LD (JSON for Linked-Data) * And MANY more! ``` for fmt in ["xml", "n3", "ttl", "json-ld"]: print("=" * 20, fmt, "=" * 20) print(graph.serialize(format=fmt).decode()) ``` ## Applying Semantics to Linked Data The simplest version of the [data pyramid](https://en.wikipedia.org/wiki/DIKW_pyramid) goes: Data -> Information -> Knowledge -> Wisdom Data is a collection of observations stated as facts. Information is inferred from data by extracting the useful parts out of the data. This is normally done through some sort of ETL (Extract. Transform. Load) process. Knowledge is information that is enriched with domain knowledge to add context to the information, or a combination of multiple sources of information to add context that would otherwise not have that context when standing alone. Wisdom is the shared understanding of the knowledge and how to apply it to business objects, and why it is useful. The value or cell in a database somewhere represents a data point. Each triple in our graph represents a single piece of information. The graph as a whole represents our collection of information. The next step is to apply domain knowledge to lift the information graph to a knowledge graph. The way we will encode and apply our domain knowledge is by using an ontology. ## Vocabularies, Ontologies, and Schemas The two most common ways to encode domain knowledge are: 1. [RDFS (RDF Schema)](https://www.w3.org/TR/rdf-schema/) 2. 
[OWL (Web Ontology Language)](https://www.w3.org/TR/owl-overview/) Vocabularies are themselves also just RDF graphs, but contain an encoding of domain knowledge and a set of constraints to validate or extend an existing knowledge graph. Let's start by bringing in the ontology for foaf (Friend of a Friend), the social network encoding ontology used in our previous example. ``` ontology = rdflib.Graph() url = "http://xmlns.com/foaf/spec/20140114.rdf" namespace = rdflib.Namespace("http://xmlns.com/foaf/0.1/") ontology.parse(url) subjects = set(ontology.subjects()) predicates = set(ontology.predicates()) objects = set(ontology.objects()) print(f"Triples({len(ontology)}), Subjects({len(subjects)}), Predicates({len(predicates)}), Objects({len(objects)})") ``` From here, we are able to categorize our subjects into two groups: 1. Subjects that belong to the FOAF namespace 2. External subjects that enrich the FOAF namespace items ``` internal_subjects = set(sub for sub in subjects if sub.startswith(namespace)) external_subjects = subjects - internal_subjects print(f"Internal({len(internal_subjects)}), External({len(external_subjects)})") from collections import namedtuple IRIRef = namedtuple("IRIRef", ("iri", "delim", "urn")) def split(item): if not isinstance(item, rdflib.term.URIRef): return item iri, delim, urn = item.rpartition("#" if "#" in item else "/") return IRIRef(iri, delim, urn) [split(subject)[2] for subject in internal_subjects] [split(subject) for subject in external_subjects] ``` In our example, the predicates are where things get interesting. Let's start by looking only at the namespaces brought in. In the example below, you can see we are bringing in predicates from not only multiple vocabularies, but multiple types or standards of vocabularies, such as the RDFS and OWL standards. 
When building RDFs, you are encouraged to include as much semantics as possible, and you aren't required to stick to a single namespace, or even a single domain, when describing your graph. ``` set(split(predicate)[0] for predicate in predicates) set(split(predicate) for predicate in predicates) ``` While most RDF graphs describe relationships in your data, a vocabulary describes the relationships in your TYPES of data. In this case, a vocabulary is similar to a UML diagram that would describe the relationships between python classes and python subclasses, to include base types (see the above "http://www.w3.org/1999/02/22-rdf-syntax-ns" and "http://purl.org/dc/elements/1.1") Next, let's look at objects that are references (not literal or blank nodes) that are not internal references inside the namespace. This will tell us how FOAF depends on external vocabularies. ``` foreign_references = tuple(split(i) for i in predicates.union(objects).difference(subjects) if isinstance(i, rdflib.term.URIRef)) foreign_references !pip install owlrl import owlrl combined_graph = graph + ontology owlrl.DeductiveClosure(owlrl.CombinedClosure.RDFS_OWLRL_Semantics).expand(combined_graph) print(f"The original graph contained {len(graph)} triples and the ontology contained {len(ontology)} triples. 
But after automated deductive reasoning it now contains {len(combined_graph)} triples!") # Search the combined graph for all triples where Clyde is the Subject for s, p, o in combined_graph.triples( (Clyde, None, None) ): # None is considered a wildcard for iteration purposes if (s, p, o) in graph: continue # Skip items in original graph, we only want to see new data that was learned through deductive reasoning print(split(p), split(o)) # Search the combined graph for all triples where Clyde is the Object for s, p, o in combined_graph.triples( (None, None, Clyde) ): # None is considered a wildcard for iteration purposes if (s, p, o) in graph: continue # Skip items in original graph, we only want to see new data that was learned through deductive reasoning print(f"{split(s)}\n\t{split(p)}\n\t{split(o)}") ```
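The wildcard matching used with `graph.triples(...)` above can be mimicked over a plain set of tuples — a toy stand-in for rdflib's `Graph`, handy for seeing the semantics without any dependencies. The triples are the ones from the earlier People table example:

```python
def triples(store, pattern):
    """Return triples from `store` matching `pattern`; None is a wildcard."""
    return [t for t in store
            if all(p is None or p == v for p, v in zip(pattern, t))]

store = {
    ("People#0", "has_name", "Tory"),
    ("People#1", "has_name", "Clyde"),
    ("People#1", "knows", "People#0"),
}
about_1 = triples(store, ("People#1", None, None))     # everything about Clyde
knows_0 = triples(store, (None, "knows", "People#0"))  # who knows Tory
```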
``` %%writefile app.py import streamlit as st import tensorflow as tf import cv2 from PIL import Image ,ImageOps import numpy as np from tensorflow.keras.preprocessing.image import load_img, img_to_array @st.cache(allow_output_mutation=True) def predict(img): IMAGE_SIZE = 224 classes = ['Apple - Apple scab', 'Apple - Black rot', 'Apple - Cedar apple rust', 'Apple - healthy', 'Background without leaves', 'Blueberry - healthy', 'Cherry - Powdery mildew', 'Cherry - healthy', 'Corn - Cercospora leaf spot Gray leaf spot', 'Corn - Common rust', 'Corn - Northern Leaf Blight', 'Corn - healthy', 'Grape - Black rot', 'Grape - Esca (Black Measles)', 'Grape - Leaf blight (Isariopsis Leaf Spot)', 'Grape - healthy', 'Orange - Haunglongbing (Citrus greening)', 'Peach - Bacterial spot', 'Peach - healthy', 'Pepper, bell - Bacterial spot', 'Pepper, bell - healthy', 'Potato - Early blight', 'Potato - Late blight', 'Potato - healthy', 'Raspberry - healthy', 'Soybean - healthy', 'Squash - Powdery mildew', 'Strawberry - Leaf scorch', 'Strawberry - healthy', 'Tomato - Bacterial spot', 'Tomato - Early blight', 'Tomato - Late blight', 'Tomato - Leaf Mold', 'Tomato - Septoria leaf spot', 'Tomato - Spider mites Two-spotted spider mite', 'Tomato - Target Spot', 'Tomato - Tomato Yellow Leaf Curl Virus', 'Tomato - Tomato mosaic virus', 'Tomato - healthy'] model_path = r'model' model = tf.keras.models.load_model(model_path) img = Image.open(img) img = img.resize((IMAGE_SIZE, IMAGE_SIZE)) img = img_to_array(img) img = img.reshape((1, IMAGE_SIZE, IMAGE_SIZE, 3)) img = img/255. 
class_probabilities = model.predict(x=img) class_probabilities = np.squeeze(class_probabilities) prediction_index = int(np.argmax(class_probabilities)) prediction_class = classes[prediction_index] prediction_probability = class_probabilities[prediction_index] * 100 prediction_probability = round(prediction_probability, 2) return prediction_class, prediction_probability def load_model(): model=tf.keras.models.load_model('my_model.hdf5') return model model2=load_model() def import_and_predict(image_data , model): size=(256,256) image = ImageOps.fit(image_data,size,Image.ANTIALIAS) img=np.asarray(image) img_reshape=img[np.newaxis,...] prediction=model2.predict(img_reshape) return prediction st.markdown('<style>body{text-align: center;}</style>', unsafe_allow_html=True) # Main app interface st.title('Plant and Soil Classification') st.image('appimage.jpg', width=600) # st.image takes a width, not a height img = st.file_uploader(label='Upload leaf image (PNG, JPG or JPEG)', type=['png', 'jpg', 'jpeg']) st.write('Please specify the type of classifier (soil or plant)') if img is not None: predict_button = st.button(label='Plant Disease Classifier') prediction_class, prediction_probability = predict(img) if predict_button: st.image(image=img.read(), caption='Uploaded image') st.subheader('Prediction') st.info(f'Classification: {prediction_class}, Accuracy: {prediction_probability}%') predict_button2 = st.button(label='Soil Classifier') if predict_button2: image=Image.open(img) st.image(image, width=350) # st.image accepts width, not size st.subheader('Prediction') predictions=import_and_predict(image,model2) class_names=['clay soil', 'gravel soil', 'loam soil', 'sand soil'] score = tf.nn.softmax(predictions[0]) st.info(f'Classification: {class_names[np.argmax(predictions)]}, Accuracy: { 100 * np.max(score)}%') ```
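The last step of both classifiers — softmax over the model's raw scores, then argmax — is independent of TensorFlow and can be checked with the stdlib alone. The scores below are invented for illustration:

```python
import math

def softmax(scores):
    """Numerically stable softmax over a list of raw scores."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def classify(scores, class_names):
    """Return (predicted class, confidence in percent)."""
    probs = softmax(scores)
    i = max(range(len(probs)), key=probs.__getitem__)
    return class_names[i], round(100 * probs[i], 2)

label, pct = classify([2.0, 0.5, 0.1, -1.0],
                      ['clay soil', 'gravel soil', 'loam soil', 'sand soil'])
```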
**This notebook is an exercise in the [Python](https://www.kaggle.com/learn/python) course. You can reference the tutorial at [this link](https://www.kaggle.com/colinmorris/booleans-and-conditionals).** --- In this exercise, you'll put to work what you have learned about booleans and conditionals. To get started, **run the setup code below** before writing your own code (and if you leave this notebook and come back later, don't forget to run the setup code again). ``` from learntools.core import binder; binder.bind(globals()) from learntools.python.ex3 import * print('Setup complete.') ``` # 1. Many programming languages have [`sign`](https://en.wikipedia.org/wiki/Sign_function) available as a built-in function. Python doesn't, but we can define our own! In the cell below, define a function called `sign` which takes a numerical argument and returns -1 if it's negative, 1 if it's positive, and 0 if it's 0. ``` # Your code goes here. Define a function called 'sign' def sign(num): try: return int(num / abs(num)) # int() so we return -1/1, not -1.0/1.0 except ZeroDivisionError: return 0 # Check your answer q1.check() #q1.solution() ``` # 2. We've decided to add "logging" to our `to_smash` function from the previous exercise. ``` def to_smash(total_candies): """Return the number of leftover candies that must be smashed after distributing the given number of candies evenly between 3 friends. >>> to_smash(91) 1 """ print("Splitting", total_candies, "candies") return total_candies % 3 to_smash(91) ``` What happens if we call it with `total_candies = 1`? ``` to_smash(1) ``` That isn't great grammar! Modify the definition in the cell below to correct the grammar of our print statement. (If there's only one candy, we should use the singular "candy" instead of the plural "candies") ``` def to_smash(total_candies): """Return the number of leftover candies that must be smashed after distributing the given number of candies evenly between 3 friends. 
>>> to_smash(91) 1 """ if total_candies == 1: print("Splitting", total_candies, "candy") else: print("Splitting", total_candies, "candies") return total_candies % 3 to_smash(91) to_smash(1) ``` To get credit for completing this problem, and to see the official answer, run the code cell below. ``` # Check your answer (Run this code cell to receive credit!) q2.solution() ``` # 3. <span title="A bit spicy" style="color: darkgreen ">🌶️</span> In the tutorial, we talked about deciding whether we're prepared for the weather. I said that I'm safe from today's weather if... - I have an umbrella... - or if the rain isn't too heavy and I have a hood... - otherwise, I'm still fine unless it's raining *and* it's a workday The function below uses our first attempt at turning this logic into a Python expression. I claimed that there was a bug in that code. Can you find it? To prove that `prepared_for_weather` is buggy, come up with a set of inputs where either: - the function returns `False` (but should have returned `True`), or - the function returned `True` (but should have returned `False`). To get credit for completing this question, your code should return a <font color='#33cc33'>Correct</font> result. ``` def prepared_for_weather(have_umbrella, rain_level, have_hood, is_workday): # Don't change this code. Our goal is just to find the bug, not fix it! return have_umbrella or rain_level < 5 and have_hood or not rain_level > 0 and is_workday # Change the values of these inputs so they represent a case where prepared_for_weather # returns the wrong answer. have_umbrella = False rain_level = 5.5 have_hood = True is_workday = False # Check what the function returns given the current values of the variables above actual = prepared_for_weather(have_umbrella, rain_level, have_hood, is_workday) print(actual) # Check your answer q3.check() #q3.hint() #q3.solution() ``` # 4. 
The function `is_negative` below is implemented correctly - it returns True if the given number is negative and False otherwise. However, it's more verbose than it needs to be. We can actually reduce the number of lines of code in this function by *75%* while keeping the same behaviour. See if you can come up with an equivalent body that uses just **one line** of code, and put it in the function `concise_is_negative`. (HINT: you don't even need Python's ternary syntax) ``` def is_negative(number): if number < 0: return True else: return False def concise_is_negative(number): return True if number < 0 else False # Check your answer q4.check() #q4.hint() #q4.solution() ``` # 5a. The boolean variables `ketchup`, `mustard` and `onion` represent whether a customer wants a particular topping on their hot dog. We want to implement a number of boolean functions that correspond to some yes-or-no questions about the customer's order. For example: ``` def onionless(ketchup, mustard, onion): """Return whether the customer doesn't want onions. """ return not onion def wants_all_toppings(ketchup, mustard, onion): """Return whether the customer wants "the works" (all 3 toppings) """ return all([ketchup, mustard, onion]) # Check your answer q5.a.check() #q5.a.hint() #q5.a.solution() ``` # 5b. For the next function, fill in the body to match the English description in the docstring. ``` def wants_plain_hotdog(ketchup, mustard, onion): """Return whether the customer wants a plain hot dog with no toppings. """ return not any([ketchup, mustard, onion]) # Check your answer q5.b.check() #q5.b.hint() #q5.b.solution() ``` # 5c. You know what to do: for the next function, fill in the body to match the English description in the docstring. ``` def exactly_one_sauce(ketchup, mustard, onion): """Return whether the customer wants either ketchup or mustard, but not both. 
(You may be familiar with this operation under the name "exclusive or")
    """
    return (ketchup and not mustard) or (mustard and not ketchup)

# Check your answer
q5.c.check()
#q5.c.hint()
#q5.c.solution()
```

# 6. <span title="A bit spicy" style="color: darkgreen ">🌶️</span>

We've seen that calling `bool()` on an integer returns `False` if it's equal to 0 and `True` otherwise. What happens if we call `int()` on a bool? Try it out in the notebook cell below.

Can you take advantage of this to write a succinct function that corresponds to the English sentence "does the customer want exactly one topping?"?

```
def exactly_one_topping(ketchup, mustard, onion):
    """Return whether the customer wants exactly one of the three available
    toppings on their hot dog.
    """
    return sum([ketchup, mustard, onion]) == 1

# Check your answer
q6.check()
#q6.hint()
#q6.solution()
```

# 7. <span title="A bit spicy" style="color: darkgreen ">🌶️</span> (Optional)

In this problem we'll be working with a simplified version of [blackjack](https://en.wikipedia.org/wiki/Blackjack) (aka twenty-one). In this version there is one player (who you'll control) and a dealer. Play proceeds as follows:

- The player is dealt two face-up cards. The dealer is dealt one face-up card.
- The player may ask to be dealt another card ('hit') as many times as they wish. If the sum of their cards exceeds 21, they lose the round immediately.
- The dealer then deals additional cards to himself until either:
    - the sum of the dealer's cards exceeds 21, in which case the player wins the round
    - the sum of the dealer's cards is greater than or equal to 17. If the player's total is greater than the dealer's, the player wins. Otherwise, the dealer wins (even in case of a tie).

When calculating the sum of cards, Jack, Queen, and King count for 10. Aces can count as 1 or 11 (when referring to a player's "total" above, we mean the largest total that can be made without exceeding 21. So e.g.
A+8 = 19, A+8+8 = 17) For this problem, you'll write a function representing the player's decision-making strategy in this game. We've provided a very unintelligent implementation below: ``` def should_hit(dealer_total, player_total, player_low_aces, player_high_aces): """Return True if the player should hit (request another card) given the current game state, or False if the player should stay. When calculating a hand's total value, we count aces as "high" (with value 11) if doing so doesn't bring the total above 21, otherwise we count them as low (with value 1). For example, if the player's hand is {A, A, A, 7}, we will count it as 11 + 1 + 1 + 7, and therefore set player_total=20, player_low_aces=2, player_high_aces=1. """ if player_total > dealer_total: return False ``` This very conservative agent *always* sticks with the hand of two cards that they're dealt. We'll be simulating games between your player agent and our own dealer agent by calling your function. Try running the function below to see an example of a simulated game: ``` q7.simulate_one_game() ``` The real test of your agent's mettle is their average win rate over many games. Try calling the function below to simulate 50000 games of blackjack (it may take a couple seconds): ``` q7.simulate(n_games=50000) ``` Our dumb agent that completely ignores the game state still manages to win shockingly often! Try adding some more smarts to the `should_hit` function and see how it affects the results. ``` def should_hit(dealer_total, player_total, player_low_aces, player_high_aces): """Return True if the player should hit (request another card) given the current game state, or False if the player should stay. When calculating a hand's total value, we count aces as "high" (with value 11) if doing so doesn't bring the total above 21, otherwise we count them as low (with value 1). 
For example, if the player's hand is {A, A, A, 7}, we will count it as 11 + 1 + 1 + 7, and therefore set player_total=20, player_low_aces=2, player_high_aces=1. """ return False q7.simulate(n_games=50000) ``` # Keep Going Learn about **[lists and tuples](https://www.kaggle.com/colinmorris/lists)** to handle multiple items of data in a systematic way. --- *Have questions or comments? Visit the [Learn Discussion forum](https://www.kaggle.com/learn-forum/161283) to chat with other Learners.*
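A slightly smarter strategy than "never hit" is a simple threshold rule: hit while the hand is weak, and hit a bit longer on "soft" hands, since a high ace can always be demoted to 1 and therefore drawing one more card cannot immediately bust such a hand. The exact cutoffs below are illustrative guesses, not the course's official answer:

```python
def should_hit(dealer_total, player_total, player_low_aces, player_high_aces):
    """Hit on weak totals; be more willing to hit when a high ace makes the hand soft."""
    if player_high_aces > 0:
        # a high ace can be recounted as 1, so one more card cannot bust us outright
        return player_total <= 17
    return player_total <= 16

print(should_hit(10, 12, 0, 0))  # weak hard hand: hit
print(should_hit(10, 19, 0, 0))  # strong hard hand: stay
```

Plugging a rule like this into `q7.simulate(n_games=50000)` is the easiest way to compare cutoffs empirically.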
# AUTHOR: LAKSHMI VYSHNAVI UMMADISETTI

# TASK2 - PREDICTION USING UNSUPERVISED ML

# DATA SCIENCE AND BUSINESS ANALYTICS INTERNSHIP - THE SPARKS FOUNDATION (GRIPJANUARY22)

# USING IRIS DATASET AND PREDICTING THE OPTIMUM NUMBER OF CLUSTERS AND REPRESENTING THEM VISUALLY.

```
# Importing the libraries
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
from sklearn import datasets
from sklearn.cluster import KMeans

# Importing the dataset
iris = datasets.load_iris()
iris_df = pd.DataFrame(iris.data, columns = iris.feature_names)
iris_df.head()
iris_df.tail()
iris_df.shape
iris_df.columns
iris_df.info()
iris_df.describe()
```

# FINDING THE OPTIMUM NUMBER OF CLUSTERS

Before clustering the data using k-means, we need to specify the number of clusters. In order to find the optimum number of clusters, various methods are available, such as the Silhouette Coefficient and the Elbow method. Here, the elbow method is used.

# THE ELBOW METHOD

In this method, the number of clusters is varied within a certain range. For each number, the within-cluster sum of squares (WCSS) value is calculated and stored in a list. These values are then plotted against the range of cluster counts used before. The location of the bend in the 2D plot indicates the appropriate number of clusters.

```
# Calculating the within-cluster sum of square
within_cluster_sum_of_square = []
clusters_range = range(1,15)
for k in clusters_range:
    km = KMeans(n_clusters=k)
    km = km.fit(iris_df)
    within_cluster_sum_of_square.append(km.inertia_)

# Plotting the "within-cluster sum of square" against clusters range
plt.plot(clusters_range, within_cluster_sum_of_square, 'o--', color='red')
plt.title('The elbow method')
plt.xlabel('Number of clusters')
plt.ylabel('Within-cluster sum of square')
plt.grid()
plt.show()
```

We can clearly see from the graph above why it is called 'the elbow method': the optimum number of clusters is where the elbow occurs.
This is when the within cluster sum of squares (WCSS) doesn't decrease significantly with every iteration. From this we choose the number of clusters as '3'. # K-MEANS CLUSTERING ``` from sklearn.cluster import KMeans model = KMeans(n_clusters = 3, init = 'k-means++', max_iter = 300, n_init = 10, random_state = 0) predictions = model.fit_predict(iris_df) ``` # VISUALISING THE CLUSTERS ``` x = iris_df.iloc[:, [0, 1, 2, 3]].values plt.scatter(x[predictions == 0, 0], x[predictions == 0, 1], s = 25, c = 'blue', label = 'Iris-setosa') plt.scatter(x[predictions == 1, 0], x[predictions == 1, 1], s = 25, c = 'red', label = 'Iris-versicolour') plt.scatter(x[predictions == 2, 0], x[predictions == 2, 1], s = 25, c = 'green', label = 'Iris-virginica') # Plotting the cluster centers plt.scatter(model.cluster_centers_[:, 0], model.cluster_centers_[:,1], s = 100, c = 'yellow', label = 'Centroids') plt.legend() plt.grid() plt.show() ```
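To make concrete what the elbow plot is measuring, here is the within-cluster sum of squares computed by hand on a tiny made-up 2-D dataset; KMeans stores this same quantity in its `inertia_` attribute. The points, centers, and labels below are purely illustrative:

```python
def wcss(points, centers, labels):
    """Within-cluster sum of squares: squared distance from each point
    to the center of the cluster it was assigned to."""
    return sum((px - centers[l][0]) ** 2 + (py - centers[l][1]) ** 2
               for (px, py), l in zip(points, labels))

points  = [(1.0, 1.0), (1.2, 0.8), (5.0, 5.0), (5.2, 5.1)]  # toy 2-D data
centers = [(1.1, 0.9), (5.1, 5.05)]                          # two cluster centers
labels  = [0, 0, 1, 1]                                       # cluster assignment per point
print(wcss(points, centers, labels))  # ≈ 0.065
```

As the number of clusters grows, each point sits closer to its center and this value can only shrink; the "elbow" marks where adding clusters stops buying much reduction.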
```
import numpy as np
import pandas as pd
import scipy as sp
import matplotlib.pyplot as plt
import scipy.fftpack as fft
from numpy import frombuffer, int16  # these re-exports were removed from the scipy namespace
import soundfile as sf
import wave
import struct

h_n = pd.read_csv('hpf_60hz.csv', header=None)
h_n
h_n = h_n[0]
h_n
h_n = h_n.T
h_n
h_n_arr = np.array(h_n)
h_n_arr

wavf = 'test_2_48000.wav'
wr = wave.open(wavf, 'r')

# get the properties of the wave file
ch = wr.getnchannels()
width = wr.getsampwidth()
fr = wr.getframerate()
fn = wr.getnframes()
data = wr.readframes(wr.getnframes())
wr.close()

X = frombuffer(data, dtype=int16)
X
y = np.convolve(X, h_n_arr, mode="full")
y
y = y.astype(np.int16)  # convolution with float taps yields float64; convert back to int16 before writing

outf = "test_hpf_out.wav"
# write out
ww = wave.open(outf, 'w')
ww.setnchannels(ch)
ww.setsampwidth(width)
ww.setframerate(fr)
ww.writeframes(y)
ww.close()

# delta function
deltas = np.zeros(1000, dtype=int16)
deltas
deltas[0] = 1
deltas

# delta convolution
H = np.convolve(deltas, h_n_arr, mode="full")
H
H.shape
h_n_arr * h_n_arr
h_n_arr.max()
h_n_arr_int = h_n_arr / h_n_arr.max() * 32767
i = 0
for element in h_n_arr_int:
    h_n_arr_int[i] = int(element)
    i += 1
print(h_n_arr_int)
h_n_arr_int = h_n_arr_int.astype(np.int16)
print(h_n_arr_int)
for element in h_n_arr_int:
    print(element)

# delta convolution
H = np.convolve(deltas, h_n_arr_int, mode="full")
H
for element in H:
    print(element)

wavf = 'test_2_48000.wav'
wr = wave.open(wavf, 'r')

# get the properties of the wave file
ch = wr.getnchannels()
width = wr.getsampwidth()
fr = wr.getframerate()
fn = wr.getnframes()
data = wr.readframes(wr.getnframes())
wr.close()

X = frombuffer(data, dtype=int16)
X
X.max()
# convolve with the delta function
y_delta = np.convolve(X, deltas, mode="full")

outf = "test_delta_out.wav"
# write out
ww = wave.open(outf, 'w')
ww.setnchannels(ch)
ww.setsampwidth(width)
ww.setframerate(fr)
ww.writeframes(y_delta)
ww.close()

count = 1
size = 1001
#start = 0
#end = 1000
#st = 10000  # start position for sampling
#fs = 44100  # sampling rate
fs = 48000
d = 1.0 / fs  # reciprocal of the sampling rate
freqList = np.fft.fftfreq(size, d)
for i in range(count):
    #n = random.randint(start,end)
    data = np.fft.fft(h_n_arr_int)
    data = data / max(abs(data))  # normalize to 0-1
    plt.plot(freqList, abs(data))
    #plt.axis([0,fs/16,0,1])  # second argument sets the axis ranges of the plot
    plt.axis([0, fs/64, -1, 1])  # second argument sets the axis ranges of the plot
    plt.title("filter_H")
    plt.xlabel("Frequency[Hz]")
    plt.ylabel("amplitude spectrum")
    plt.show()

count = 1
size = 1001
#start = 0
#end = 1000
#st = 10000  # start position for sampling
#fs = 44100  # sampling rate
fs = 48000
d = 1.0 / fs  # reciprocal of the sampling rate
freqList = np.fft.fftfreq(size, d)
for t in range(count):
    #n = random.randint(start,end)
    #data = np.fft.fft(h_n_arr_int)
    data = h_n_arr_int
    data = data / max(abs(data))  # normalize to 0-1
    plt.plot(np.arange(len(data)), data)  # np.linspace(0, len(data)) gives only 50 points and does not match data
    #plt.axis([0,fs/16,0,1])  # second argument sets the axis ranges of the plot
    plt.axis([0, len(data), 0, 1])  # second argument sets the axis ranges of the plot
    plt.title("filter_H")
    plt.xlabel("Time[sample]")
    plt.ylabel("amplitude spectrum")
    plt.show()

#coding:utf-8
import wave
import numpy as np
import scipy.fftpack
from pylab import *

if __name__ == "__main__":
    data_name = "test_2_48000.wav"
    wf = wave.open(data_name, "r")
    fs = wf.getframerate()  # sampling frequency
    x = wf.readframes(wf.getnframes())
    x = frombuffer(x, dtype="int16") / 32768.0  # normalize to -1..+1
    wf.close()

    start = 0  # start position for sampling
    N = 256    # number of FFT samples
    X = np.fft.fft(x[start:start+N])  # FFT
    # X = scipy.fftpack.fft(x[start:start+N])  # scipy version
    freqList = np.fft.fftfreq(N, d=1.0/fs)  # compute the frequency-axis values
    # freqList = scipy.fftpack.fftfreq(N, d=1.0/fs)  # scipy version
    amplitudeSpectrum = [np.sqrt(c.real ** 2 + c.imag ** 2) for c in X]  # amplitude spectrum
    phaseSpectrum = [np.arctan2(c.imag, c.real) for c in X]  # phase spectrum (int() truncation removed)

    # plot the waveform
    subplot(311)  # plot in position 1 of a 3-row, 1-column grid
    plot(range(start, start+N), x[start:start+N])
    axis([start, start+N, -1.0, 1.0])
    xlabel("time [sample]")
    ylabel("amplitude")

    # plot the amplitude spectrum
    subplot(312)
    plot(freqList, amplitudeSpectrum, marker='o', linestyle='-')
    axis([0, fs/2, 0, 50])
    xlabel("frequency [Hz]")
    ylabel("amplitude spectrum")

    # plot the phase spectrum
    subplot(313)
    plot(freqList, phaseSpectrum, marker='o', linestyle='-')
    axis([0, fs/2, -np.pi, np.pi])
    xlabel("frequency [Hz]")
    ylabel("phase spectrum")
    show()
```
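The FFT plots above inspect the filter's frequency response. The same idea can be checked numerically by evaluating individual DFT bins of the tap vector directly, with no plotting. The 2-tap filter below is a toy stand-in for the real `hpf_60hz.csv` taps, chosen so the high-pass behaviour is obvious:

```python
import cmath

def dft_magnitude(h, k, n):
    """Magnitude of the k-th DFT bin of filter taps h, zero-padded to n points."""
    return abs(sum(h[i] * cmath.exp(-2j * cmath.pi * k * i / n)
                   for i in range(len(h))))

# A toy 2-tap high-pass filter h[n] = [1, -1]: it cancels DC and passes
# the fastest alternation (the real hpf_60hz.csv taps are much longer).
h = [1.0, -1.0]
n = 8
print(dft_magnitude(h, 0, n))       # DC bin: 0, so constant signals are removed
print(dft_magnitude(h, n // 2, n))  # Nyquist bin: ≈ 2, so alternating signals pass
```

This is exactly what the notebook's `np.fft.fft(h_n_arr_int)` plot shows, bin by bin.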
<img align="right" src="images/tf.png" width="128"/> <img align="right" src="images/etcbc.png" width="128"/> <img align="right" src="images/syrnt.png" width="128"/> <img align="right" src="images/peshitta.png" width="128"/> # Use lectionaries in the Peshitta (OT and NT) This notebook shows just one way to use the Syriac Lectionary data by Geert Jan Veldman together with the Peshitta texts, OT and NT. It has been used in the Syriac Bootcamp at the ETCBC, VU Amsterdam, on 2019-01-18. ## Provenance The lectionary data can be downloaded from the [DANS archive](https://dans.knaw.nl/en/front-page?set_language=en) through this DOI: [10.17026/dans-26t-hhv7](https://doi.org/10.17026/dans-26t-hhv7). The Peshitta (OT) and (NT) text sources in text-fabric format are on GitHub: * OT: [etcbc/peshitta](https://github.com/ETCBC/peshitta) * NT: [etcbc/syrnt](https://github.com/ETCBC/syrnt) The program that generated the text-fabric features linking the lectionaries with the text is in a Jupyter notebook: * [makeLectio](https://nbviewer.jupyter.org/github/etcbc/linksyr/blob/master/programs/lectionaries/makeLectio.ipynb) ## Run it yourself! Make sure you have installed * Python (3.6.3 or higher) * Jupyter ```pip3 install jupyter``` * Text-Fabric ```pip3 install text-fabric``` If you have already installed text-fabric before, make sure to do ```pip3 install --upgrade text-fabric``` because Text-Fabric is in active development every now and then. ``` %load_ext autoreload %autoreload 2 import os import re from tf.app import use ``` # Context We will be working with two TF data sources, * the `peshitta`, (OT Peshitta) which name we store in variable `P` * the `syrnt`, (NT Peshitta) which name we store in variable `S` They both contain Syriac text and transcriptions, but the SyrNT has linguistic annotations and lexemes, while the Peshitta (OT) lacks them. ``` P = 'peshitta' S = 'syrnt' A = {P: None, S: None} ``` # Text-Fabric browser Let's first look at the data in your own browser. 
What you need to do is to open a command prompt. If you do not know what that is: on Windows it is the program `cmd.exe`, on the Mac it is the app called `Terminal`, and on Linux you know what it is. You can use it from any directory.

If one of the commands below does not work, you have installed things differently than I assume here, or the installation was not successful. For more information, consult [Install](https://annotation.github.io/text-fabric/tf/about/install.html) and/or [FAQ](https://annotation.github.io/text-fabric/tf/about/faq.html)

Start the TF browser as follows:

### Old Testament

```
text-fabric peshitta -c --mod=etcbc/linksyr/data/tf/lectio/peshitta
```

### New Testament

Open a new command prompt and say there:

```
text-fabric syrnt -c --mod=etcbc/linksyr/data/tf/lectio/syrnt
```

### Example queries

In both cases, issue a query such as

```
verse taksa link
```

or a more refined one:

```
verse taksa link
  word word_etcbc=LLJ>
```

You will see all verses that are associated with a lectionary that has a `taksa` and a `link` value.

After playing around with the browsing interface on both testaments, return to this notebook. We are going to load both texts here in our program:

```
for volume in A:
    A[volume] = use(volume+':clone', mod=f'etcbc/linksyr/data/tf/lectio/{volume}')
```

Above you can see that we have loaded the `peshitta` and `syrnt` data sources but also additional data from

* **etcbc/linksyr/data/tf/lectio/peshitta**
* **etcbc/linksyr/data/tf/lectio/syrnt**

From both additional sources we have loaded several features: `lectio`, `mark1`, `mark2`, `siglum`, `taksa`, `taksaTr`. Every lectionary has a number. A lectionary is linked to several verses.
Here is what kind of information the features contain:

feature | description
--- | ---
**lectio** | comma separated list of numbers of lectionaries associated with this verse
**mark1** | comma separated list of words which mark the precise location of where the lectionaries start
**taksa** | newline separated list of liturgical events associated with the lectionaries (in Syriac)
**taksaTr** | same as **taksa**, but now in English
**siglum** | newline separated list of document references that specify the lectionary
**link** | newline separated list of links to the *sigla*
**mark2** | same as **mark1**, but the word is in a different language

When you work with TF, you usually have handy variables called `F`, `L`, `T` ready with which you access all data in the text. Since we use two TF resources in this program, we make a double set of these variables, and instead of just `F`, we'll say `F[P]` for accessing the Peshitta (OT) and `F[S]` for accessing the SyrNT. Same pattern for `L` and `T`.

For the meaning of these variables, consult

* [F Features](https://annotation.github.io/text-fabric/tf/core/nodefeature.html)
* [L Locality](https://annotation.github.io/text-fabric/tf/core/locality.html)
* [T Text](https://annotation.github.io/text-fabric/tf/core/text.html)

```
Fs = {}
F = {}
T = {}
L = {}

for volume in A:
    thisApi = A[volume].api
    F[volume] = thisApi.F
    Fs[volume] = thisApi.Fs
    T[volume] = thisApi.T
    L[volume] = thisApi.L

extraFeatures = '''
    lectio
    mark1
    mark2
'''.strip().split()
```

# Liturgicalness

We measure the *liturgicalness* of a word by counting the number of lectionaries it is involved in. As a first step, we collect for each word the set of lectionaries it is involved in. In the Peshitta OT we use the word form, since we do not have lemmas. The word form is in the feature `word`. In the SyrNT we use the word lemma, which is in the feature `lexeme`.
We collect the information in the dictionary `liturgical`, which maps each word form unto the set of lectionaries it is involved in. ``` # this function can do the collection in either Testament def getLiturgical(volume): wordRep = 'word' if volume == P else 'lexeme' mapping = {} # we traverse all verse nodes for verseNode in F[volume].otype.s('verse'): # we retrieve the value of feature 'lectio' for that verse node lectioStr = F[volume].lectio.v(verseNode) if lectioStr: # we split the lectio string into a set of individual lectio numbers lectios = lectioStr.split(',') # we descend into the words of the verse for wordNode in L[volume].d(verseNode, otype='word'): # we use either the feature 'word' or 'lexeme', depending on the volume word = Fs[volume](wordRep).v(wordNode) # if this is the first time we encounter the word, # we add it to the mapping and give it a start value: the empty set if word not in mapping: mapping[word] = set() # in any case, we add the new found lectio numbers to the existing set for this word mapping[word] |= set(lectios) # we report how many words we have collected print(f'Found {len(mapping)} words in {volume}') # we return the mapping as result return mapping ``` Before we call the function above for Peshitta and SyrNT, we make a place where the results can land: ``` liturgical = {} for volume in A: liturgical[volume] = getLiturgical(volume) ``` Remember that we count word occurrences in the Peshitta, and lemmas in the SyrNT, so we get much smaller numbers for the NT. Let's show some mapping members for each volume: ``` for volume in liturgical: print(f'IN {volume}:') for (word, lectios) in list(liturgical[volume].items())[0:10]: print(f'\t{word}') print(f'\t\t{",".join(sorted(lectios)[0:5])} ...') ``` We are not done yet, because we are not interested in the actual lectionaries, but in their number. So we make a new mapping `liturgicalNess`, which maps each word to the number of lectionaries it is associated with. 
```
liturgicalNess = {}

for volume in liturgical:
    for word in liturgical[volume]:
        nLectio = len(liturgical[volume][word])
        liturgicalNess.setdefault(volume, {})[word] = nLectio
```

Let's print the top twenty of each volume

```
for volume in liturgicalNess:
    print(f'IN {volume}:')
    for (word, lNess) in sorted(
        liturgicalNess[volume].items(),
        key=lambda x: (-x[1], x[0]),
    )[0:20]:
        print(f'\t{lNess:>5} {word}')
```

# Frequency lists

Here is how to get a frequency list of a volume. We can produce the frequency of any feature, but let us do it here for words in the Peshitta (OT) and lexemes in the SyrNT.

There is a hidden snag: in the SyrNT we do not have only word nodes, but also lexeme nodes. When we count frequencies, we have to take care to count word nodes only. The function [freqList](https://annotation.github.io/text-fabric/tf/core/nodefeature.html#tf.core.nodefeature.NodeFeature.freqList) can do that.

Let's use it and produce the top twenty list of frequent words in both sources, and also the number of hapaxes.
```
# first we define a function to generate the table per volume
def showFreqList(volume):
    print(f'IN {volume}:')
    wordRep = 'word' if volume == P else 'lexeme'
    freqs = Fs[volume](wordRep).freqList(nodeTypes={'word'})
    # now the members of freqs are pairs (word, frequency)

    # we print the top frequent words
    for (word, freq) in freqs[0:10]:
        print(f'\t{freq:>5} x {word}')

    # we collect all hapaxes: the items with frequency 1
    hapaxes = [word for (word, freq) in freqs if freq == 1]
    print(f'{len(hapaxes)} hapaxes')
    for hapax in hapaxes[100:105]:
        print(f'\t{hapax}')

# then we execute it on both volumes
for volume in A:
    showFreqList(volume)
```

# Queries

First a simple query with all verses with a lectionary (with taksa and link)

```
query = '''
verse taksa link
'''
```

We run it in both the Old and the New Testament

```
results = {}
for volume in A:
    results[volume] = A[volume].search(query)
```

Let's show some results from the New Testament:

```
A[S].show(results[S], start=1, end=1)
```

Let's show some results from the Old Testament:

```
A[P].show(results[P], start=1, end=1)
```

# Word study: CJN>

We want to study a word, in both volumes. First we show a verse where the word occurs: James 3:18. It is in the New Testament.

The [`T.nodeFromSection()`](https://annotation.github.io/text-fabric/tf/core/text.html#tf.core.text.Text.nodeFromSection) function can find the node (bar code) for a verse specified by a passage reference.
``` # we have to pass the section reference as a triple: section = ('James', 3, 18) # we retrieve the verse node verseNode = T[S].nodeFromSection(('James', 3, 18)) # in case you're curious: here is the node, but it should not be meaningful to you, # only to the program print(verseNode) ``` Finally we show the corresponding verse by means of the function [pretty()](https://annotation.github.io/text-fabric/tf/advanced/display.html#tf.advanced.display.pretty) ``` A[S].pretty(verseNode) ``` Now we use a query to find this word in the New Testament ``` queryS = ''' word lexeme_etcbc=CJN> ''' resultsS = A[S].search(queryS) ``` We show them all: ``` A[S].show(resultsS) ``` For the OT, we do not have the lexeme value, so we try looking for word forms that *match* `CJN>` rather than those that are exactly equal to it. Note that we have replaced '=' by '~' in the query below ``` queryP = ''' word word_etcbc~CJN> ''' resultsP = A[P].search(queryP) # We show only 20 results A[P].show(resultsP, end=20) ``` Here ends the bootcamp session. Interested? Send [me](mailto:dirk.roorda@dans.knaw.nl) a note.
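Outside of text-fabric, the core of the liturgicalness computation above is just set-union bookkeeping followed by counting. Here is a minimal stand-alone sketch of the same two steps; the verse keys, transliterated word forms, and lectionary numbers below are invented placeholders, not data from the corpus:

```python
from collections import Counter

# Hypothetical stand-in for the corpus: verse -> (word forms, lectio ids)
verses = {
    'verse1': (['BR>', '>LH>'], {'12', '30'}),
    'verse2': (['>LH>', 'RWX>'], {'12'}),
    'verse3': (['MRJ>'], {'7', '30', '41'}),
}

# step 1: map each word to the set of lectionaries it occurs in
liturgical = {}
for words, lectios in verses.values():
    for word in words:
        liturgical.setdefault(word, set()).update(lectios)

# step 2: liturgicalness = the size of each word's lectionary set
liturgicalness = Counter({w: len(s) for w, s in liturgical.items()})
print(liturgicalness.most_common(2))
```

Using a set (rather than a list) in step 1 is what prevents a lectionary from being counted twice when a word occurs in it more than once.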
# Python Basics with Numpy (optional assignment) Welcome to your first assignment. This exercise gives you a brief introduction to Python. Even if you've used Python before, this will help familiarize you with functions we'll need. **Instructions:** - You will be using Python 3. - Avoid using for-loops and while-loops, unless you are explicitly told to do so. - Do not modify the (# GRADED FUNCTION [function name]) comment in some cells. Your work would not be graded if you change this. Each cell containing that comment should only contain one function. - After coding your function, run the cell right below it to check if your result is correct. **After this assignment you will:** - Be able to use iPython Notebooks - Be able to use numpy functions and numpy matrix/vector operations - Understand the concept of "broadcasting" - Be able to vectorize code Let's get started! ## About iPython Notebooks ## iPython Notebooks are interactive coding environments embedded in a webpage. You will be using iPython notebooks in this class. You only need to write code between the ### START CODE HERE ### and ### END CODE HERE ### comments. After writing your code, you can run the cell by either pressing "SHIFT"+"ENTER" or by clicking on "Run Cell" (denoted by a play symbol) in the upper bar of the notebook. We will often specify "(≈ X lines of code)" in the comments to tell you about how much code you need to write. It is just a rough estimate, so don't feel bad if your code is longer or shorter. **Exercise**: Set test to `"Hello World"` in the cell below to print "Hello World" and run the two cells below. 
``` ### START CODE HERE ### (≈ 1 line of code) test = "Hello World" ### END CODE HERE ### print ("test: " + test) ``` **Expected output**: test: Hello World <font color='blue'> **What you need to remember**: - Run your cells using SHIFT+ENTER (or "Run cell") - Write code in the designated areas using Python 3 only - Do not modify the code outside of the designated areas ## 1 - Building basic functions with numpy ## Numpy is the main package for scientific computing in Python. It is maintained by a large community (www.numpy.org). In this exercise you will learn several key numpy functions such as np.exp, np.log, and np.reshape. You will need to know how to use these functions for future assignments. ### 1.1 - sigmoid function, np.exp() ### Before using np.exp(), you will use math.exp() to implement the sigmoid function. You will then see why np.exp() is preferable to math.exp(). **Exercise**: Build a function that returns the sigmoid of a real number x. Use math.exp(x) for the exponential function. **Reminder**: $sigmoid(x) = \frac{1}{1+e^{-x}}$ is sometimes also known as the logistic function. It is a non-linear function used not only in Machine Learning (Logistic Regression), but also in Deep Learning. <img src="images/Sigmoid.png" style="width:500px;height:228px;"> To refer to a function belonging to a specific package you could call it using package_name.function(). Run the code below to see an example with math.exp(). ``` # GRADED FUNCTION: basic_sigmoid import math import numpy as np def basic_sigmoid(x): """ Compute sigmoid of x. Arguments: x -- A scalar Return: s -- sigmoid(x) """ ### START CODE HERE ### (≈ 1 line of code) s = math.exp(-1 * x) s = 1 / (1 + s) ### END CODE HERE ### return s basic_sigmoid(3) ``` **Expected Output**: <table style = "width:40%"> <tr> <td>** basic_sigmoid(3) **</td> <td>0.9525741268224334 </td> </tr> </table> Actually, we rarely use the "math" library in deep learning because the inputs of the functions are real numbers. 
In deep learning we mostly use matrices and vectors. This is why numpy is more useful. ``` ### One reason why we use "numpy" instead of "math" in Deep Learning ### x = [1, 2, 3] basic_sigmoid(x) # you will see this give an error when you run it, because x is a vector. ``` In fact, if $ x = (x_1, x_2, ..., x_n)$ is a row vector then $np.exp(x)$ will apply the exponential function to every element of x. The output will thus be: $np.exp(x) = (e^{x_1}, e^{x_2}, ..., e^{x_n})$ ``` import numpy as np # example of np.exp x = np.array([1, 2, 3]) print(np.exp(x)) # result is (exp(1), exp(2), exp(3)) ``` Furthermore, if x is a vector, then a Python operation such as $s = x + 3$ or $s = \frac{1}{x}$ will output s as a vector of the same size as x. ``` # example of vector operation x = np.array([1, 2, 3]) print (x + 3) ``` Any time you need more info on a numpy function, we encourage you to look at [the official documentation](https://docs.scipy.org/doc/numpy-1.10.1/reference/generated/numpy.exp.html). You can also create a new cell in the notebook and write `np.exp?` (for example) to get quick access to the documentation. **Exercise**: Implement the sigmoid function using numpy. **Instructions**: x could now be either a real number, a vector, or a matrix. The data structures we use in numpy to represent these shapes (vectors, matrices...) are called numpy arrays. You don't need to know more for now. $$ \text{For } x \in \mathbb{R}^n \text{, } sigmoid(x) = sigmoid\begin{pmatrix} x_1 \\ x_2 \\ ... \\ x_n \\ \end{pmatrix} = \begin{pmatrix} \frac{1}{1+e^{-x_1}} \\ \frac{1}{1+e^{-x_2}} \\ ... 
\\ \frac{1}{1+e^{-x_n}} \\ \end{pmatrix}\tag{1} $$ ``` # GRADED FUNCTION: sigmoid import numpy as np # this means you can access numpy functions by writing np.function() instead of numpy.function() def sigmoid(x): """ Compute the sigmoid of x Arguments: x -- A scalar or numpy array of any size Return: s -- sigmoid(x) """ ### START CODE HERE ### (≈ 1 line of code) #s = np.exp(np.multiply(-1, x)) #s = np.divide(1, np.add(1, s)) s = 1 / (1 + np.exp(-x)) ### END CODE HERE ### return s x = np.array([1, 2, 3]) sigmoid(x) ``` **Expected Output**: <table> <tr> <td> **sigmoid([1,2,3])**</td> <td> array([ 0.73105858, 0.88079708, 0.95257413]) </td> </tr> </table> ### 1.2 - Sigmoid gradient As you've seen in lecture, you will need to compute gradients to optimize loss functions using backpropagation. Let's code your first gradient function. **Exercise**: Implement the function sigmoid_grad() to compute the gradient of the sigmoid function with respect to its input x. The formula is: $$sigmoid\_derivative(x) = \sigma'(x) = \sigma(x) (1 - \sigma(x))\tag{2}$$ You often code this function in two steps: 1. Set s to be the sigmoid of x. You might find your sigmoid(x) function useful. 2. Compute $\sigma'(x) = s(1-s)$ ``` # GRADED FUNCTION: sigmoid_derivative def sigmoid_derivative(x): """ Compute the gradient (also called the slope or derivative) of the sigmoid function with respect to its input x. You can store the output of the sigmoid function into variables and then use it to calculate the gradient. Arguments: x -- A scalar or numpy array Return: ds -- Your computed gradient. 
""" ### START CODE HERE ### (≈ 2 lines of code) s = sigmoid(x) ds = s * (1 - s) ### END CODE HERE ### return ds x = np.array([1, 2, 3]) print ("sigmoid_derivative(x) = " + str(sigmoid_derivative(x))) ``` **Expected Output**: <table> <tr> <td> **sigmoid_derivative([1,2,3])**</td> <td> [ 0.19661193 0.10499359 0.04517666] </td> </tr> </table> ### 1.3 - Reshaping arrays ### Two common numpy functions used in deep learning are [np.shape](https://docs.scipy.org/doc/numpy/reference/generated/numpy.ndarray.shape.html) and [np.reshape()](https://docs.scipy.org/doc/numpy/reference/generated/numpy.reshape.html). - X.shape is used to get the shape (dimension) of a matrix/vector X. - X.reshape(...) is used to reshape X into some other dimension. For example, in computer science, an image is represented by a 3D array of shape $(length, height, depth = 3)$. However, when you read an image as the input of an algorithm you convert it to a vector of shape $(length*height*3, 1)$. In other words, you "unroll", or reshape, the 3D array into a 1D vector. <img src="images/image2vector_kiank.png" style="width:500px;height:300;"> **Exercise**: Implement `image2vector()` that takes an input of shape (length, height, 3) and returns a vector of shape (length\*height\*3, 1). For example, if you would like to reshape an array v of shape (a, b, c) into a vector of shape (a*b,c) you would do: ``` python v = v.reshape((v.shape[0]*v.shape[1], v.shape[2])) # v.shape[0] = a ; v.shape[1] = b ; v.shape[2] = c ``` - Please don't hardcode the dimensions of image as a constant. Instead look up the quantities you need with `image.shape[0]`, etc. 
```
# GRADED FUNCTION: image2vector

def image2vector(image):
    """
    Argument:
    image -- a numpy array of shape (length, height, depth)

    Returns:
    v -- a vector of shape (length*height*depth, 1)
    """

    ### START CODE HERE ### (≈ 1 line of code)
    v = image.reshape(image.shape[0] * image.shape[1] * image.shape[2], 1)
    ### END CODE HERE ###

    return v

# This is a 3 by 3 by 2 array, typically images will be (num_px_x, num_px_y,3) where 3 represents the RGB values
image = np.array([[[ 0.67826139, 0.29380381],
                   [ 0.90714982, 0.52835647],
                   [ 0.4215251 , 0.45017551]],
                  [[ 0.92814219, 0.96677647],
                   [ 0.85304703, 0.52351845],
                   [ 0.19981397, 0.27417313]],
                  [[ 0.60659855, 0.00533165],
                   [ 0.10820313, 0.49978937],
                   [ 0.34144279, 0.94630077]]])

print ("image2vector(image) = " + str(image2vector(image)))
```

**Expected Output**:

<table style="width:100%"> <tr> <td> **image2vector(image)** </td> <td> [[ 0.67826139] [ 0.29380381] [ 0.90714982] [ 0.52835647] [ 0.4215251 ] [ 0.45017551] [ 0.92814219] [ 0.96677647] [ 0.85304703] [ 0.52351845] [ 0.19981397] [ 0.27417313] [ 0.60659855] [ 0.00533165] [ 0.10820313] [ 0.49978937] [ 0.34144279] [ 0.94630077]]</td> </tr> </table>

### 1.4 - Normalizing rows

Another common technique we use in Machine Learning and Deep Learning is to normalize our data. It often leads to better performance because gradient descent converges faster after normalization. Here, by normalization we mean changing x to $ \frac{x}{\| x\|} $ (dividing each row vector of x by its norm).

For example, if $$x = \begin{bmatrix} 0 & 3 & 4 \\ 2 & 6 & 4 \\ \end{bmatrix}\tag{3}$$ then $$\| x\| = np.linalg.norm(x, axis = 1, keepdims = True) = \begin{bmatrix} 5 \\ \sqrt{56} \\ \end{bmatrix}\tag{4} $$and $$ x\_normalized = \frac{x}{\| x\|} = \begin{bmatrix} 0 & \frac{3}{5} & \frac{4}{5} \\ \frac{2}{\sqrt{56}} & \frac{6}{\sqrt{56}} & \frac{4}{\sqrt{56}} \\ \end{bmatrix}\tag{5}$$

Note that you can divide matrices of different sizes and it works fine: this is called broadcasting and you're going to learn about it in part 5.
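Before the graded cell, the norms in equation (4) can be checked directly (a quick sketch using the example matrix above):

```python
import numpy as np

x = np.array([[0., 3., 4.],
              [2., 6., 4.]])

# Row-wise Euclidean norms; keepdims=True gives shape (2, 1) so the
# division below broadcasts across each row of x
x_norm = np.linalg.norm(x, axis=1, keepdims=True)
print(x_norm.ravel())  # 5 and sqrt(56) ≈ 7.4833, as in equation (4)

# After dividing, each row has unit length
print(np.linalg.norm(x / x_norm, axis=1))
```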
**Exercise**: Implement normalizeRows() to normalize the rows of a matrix. After applying this function to an input matrix x, each row of x should be a vector of unit length (meaning length 1).

```
# GRADED FUNCTION: normalizeRows

def normalizeRows(x):
    """
    Implement a function that normalizes each row of the matrix x (to have unit length).

    Argument:
    x -- A numpy matrix of shape (n, m)

    Returns:
    x -- The normalized (by row) numpy matrix. You are allowed to modify x.
    """

    ### START CODE HERE ### (≈ 2 lines of code)
    # Compute x_norm as the norm 2 of x. Use np.linalg.norm(..., ord = 2, axis = ..., keepdims = True)
    x_norm = np.linalg.norm(x, ord = 2, axis = 1, keepdims = True)

    # Divide x by its norm.
    x = x / x_norm
    ### END CODE HERE ###

    return x

x = np.array([
    [0, 3, 4],
    [1, 6, 4]])
print("normalizeRows(x) = " + str(normalizeRows(x)))
```

**Expected Output**:

<table style="width:60%"> <tr> <td> **normalizeRows(x)** </td> <td> [[ 0. 0.6 0.8 ] [ 0.13736056 0.82416338 0.54944226]]</td> </tr> </table>

**Note**: In normalizeRows(), you can try to print the shapes of x_norm and x, and then rerun the assessment. You'll find out that they have different shapes. This is normal given that x_norm takes the norm of each row of x. So x_norm has the same number of rows but only 1 column. So how did it work when you divided x by x_norm? This is called broadcasting and we'll talk about it now!

### 1.5 - Broadcasting and the softmax function

A very important concept to understand in numpy is "broadcasting". It is very useful for performing mathematical operations between arrays of different shapes. For the full details on broadcasting, you can read the official [broadcasting documentation](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html).

**Exercise**: Implement a softmax function using numpy. You can think of softmax as a normalizing function used when your algorithm needs to classify two or more classes. You will learn more about softmax in the second course of this specialization.
**Instructions**:
- $ \text{for } x \in \mathbb{R}^{1\times n} \text{, } softmax(x) = softmax(\begin{bmatrix} x_1 && x_2 && ... && x_n \end{bmatrix}) = \begin{bmatrix} \frac{e^{x_1}}{\sum_{j}e^{x_j}} && \frac{e^{x_2}}{\sum_{j}e^{x_j}} && ... && \frac{e^{x_n}}{\sum_{j}e^{x_j}} \end{bmatrix} $
- $\text{for a matrix } x \in \mathbb{R}^{m \times n} \text{, $x_{ij}$ maps to the element in the $i^{th}$ row and $j^{th}$ column of $x$, thus we have: }$

$$softmax(x) = softmax\begin{bmatrix} x_{11} & x_{12} & x_{13} & \dots & x_{1n} \\ x_{21} & x_{22} & x_{23} & \dots & x_{2n} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ x_{m1} & x_{m2} & x_{m3} & \dots & x_{mn} \end{bmatrix} = \begin{bmatrix} \frac{e^{x_{11}}}{\sum_{j}e^{x_{1j}}} & \frac{e^{x_{12}}}{\sum_{j}e^{x_{1j}}} & \frac{e^{x_{13}}}{\sum_{j}e^{x_{1j}}} & \dots & \frac{e^{x_{1n}}}{\sum_{j}e^{x_{1j}}} \\ \frac{e^{x_{21}}}{\sum_{j}e^{x_{2j}}} & \frac{e^{x_{22}}}{\sum_{j}e^{x_{2j}}} & \frac{e^{x_{23}}}{\sum_{j}e^{x_{2j}}} & \dots & \frac{e^{x_{2n}}}{\sum_{j}e^{x_{2j}}} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ \frac{e^{x_{m1}}}{\sum_{j}e^{x_{mj}}} & \frac{e^{x_{m2}}}{\sum_{j}e^{x_{mj}}} & \frac{e^{x_{m3}}}{\sum_{j}e^{x_{mj}}} & \dots & \frac{e^{x_{mn}}}{\sum_{j}e^{x_{mj}}} \end{bmatrix} = \begin{pmatrix} softmax\text{(first row of x)} \\ softmax\text{(second row of x)} \\ ... \\ softmax\text{(last row of x)} \\ \end{pmatrix} $$

```
# GRADED FUNCTION: softmax

def softmax(x):
    """Calculates the softmax for each row of the input x.

    Your code should work for a row vector and also for matrices of shape (n, m).

    Argument:
    x -- A numpy matrix of shape (n,m)

    Returns:
    s -- A numpy matrix equal to the softmax of x, of shape (n,m)
    """

    ### START CODE HERE ### (≈ 3 lines of code)
    # Apply exp() element-wise to x. Use np.exp(...).
    x_exp = np.exp(x)

    # Create a vector x_sum that sums each row of x_exp. Use np.sum(..., axis = 1, keepdims = True).
    x_sum = np.sum(x_exp, axis = 1, keepdims = True)

    # Compute softmax(x) by dividing x_exp by x_sum. It should automatically use numpy broadcasting.
    s = x_exp / x_sum
    ### END CODE HERE ###

    return s

x = np.array([
    [9, 2, 5, 0, 0],
    [7, 5, 0, 0, 0]])
print("softmax(x) = " + str(softmax(x)))
```

**Expected Output**:

<table style="width:60%"> <tr> <td> **softmax(x)** </td> <td> [[ 9.80897665e-01 8.94462891e-04 1.79657674e-02 1.21052389e-04 1.21052389e-04] [ 8.78679856e-01 1.18916387e-01 8.01252314e-04 8.01252314e-04 8.01252314e-04]]</td> </tr> </table>

**Note**:
- If you print the shapes of x_exp, x_sum and s above and rerun the assessment cell, you will see that x_sum is of shape (2,1) while x_exp and s are of shape (2,5). **x_exp/x_sum** works due to python broadcasting.

Congratulations! You now have a pretty good understanding of python numpy and have implemented a few useful functions that you will be using in deep learning.

<font color='blue'>
**What you need to remember:**
- np.exp(x) works for any np.array x and applies the exponential function to every coordinate
- the sigmoid function and its gradient
- image2vector is commonly used in deep learning
- np.reshape is widely used. In the future, you'll see that keeping your matrix/vector dimensions straight will go toward eliminating a lot of bugs.
- numpy has efficient built-in functions
- broadcasting is extremely useful

## 2) Vectorization

In deep learning, you deal with very large datasets. Hence, a non-computationally-optimal function can become a huge bottleneck in your algorithm and can result in a model that takes ages to run. To make sure that your code is computationally efficient, you will use vectorization. For example, try to tell the difference between the following implementations of the dot/outer/elementwise product.
``` import time x1 = [9, 2, 5, 0, 0, 7, 5, 0, 0, 0, 9, 2, 5, 0, 0] x2 = [9, 2, 2, 9, 0, 9, 2, 5, 0, 0, 9, 2, 5, 0, 0] ### CLASSIC DOT PRODUCT OF VECTORS IMPLEMENTATION ### tic = time.process_time() dot = 0 for i in range(len(x1)): dot+= x1[i]*x2[i] toc = time.process_time() print ("dot = " + str(dot) + "\n ----- Computation time = " + str(1000*(toc - tic)) + "ms") ### CLASSIC OUTER PRODUCT IMPLEMENTATION ### tic = time.process_time() outer = np.zeros((len(x1),len(x2))) # we create a len(x1)*len(x2) matrix with only zeros for i in range(len(x1)): for j in range(len(x2)): outer[i,j] = x1[i]*x2[j] toc = time.process_time() print ("outer = " + str(outer) + "\n ----- Computation time = " + str(1000*(toc - tic)) + "ms") ### CLASSIC ELEMENTWISE IMPLEMENTATION ### tic = time.process_time() mul = np.zeros(len(x1)) for i in range(len(x1)): mul[i] = x1[i]*x2[i] toc = time.process_time() print ("elementwise multiplication = " + str(mul) + "\n ----- Computation time = " + str(1000*(toc - tic)) + "ms") ### CLASSIC GENERAL DOT PRODUCT IMPLEMENTATION ### W = np.random.rand(3,len(x1)) # Random 3*len(x1) numpy array tic = time.process_time() gdot = np.zeros(W.shape[0]) for i in range(W.shape[0]): for j in range(len(x1)): gdot[i] += W[i,j]*x1[j] toc = time.process_time() print ("gdot = " + str(gdot) + "\n ----- Computation time = " + str(1000*(toc - tic)) + "ms") x1 = [9, 2, 5, 0, 0, 7, 5, 0, 0, 0, 9, 2, 5, 0, 0] x2 = [9, 2, 2, 9, 0, 9, 2, 5, 0, 0, 9, 2, 5, 0, 0] ### VECTORIZED DOT PRODUCT OF VECTORS ### tic = time.process_time() dot = np.dot(x1,x2) toc = time.process_time() print ("dot = " + str(dot) + "\n ----- Computation time = " + str(1000*(toc - tic)) + "ms") ### VECTORIZED OUTER PRODUCT ### tic = time.process_time() outer = np.outer(x1,x2) toc = time.process_time() print ("outer = " + str(outer) + "\n ----- Computation time = " + str(1000*(toc - tic)) + "ms") ### VECTORIZED ELEMENTWISE MULTIPLICATION ### tic = time.process_time() mul = np.multiply(x1,x2) toc = 
time.process_time() print ("elementwise multiplication = " + str(mul) + "\n ----- Computation time = " + str(1000*(toc - tic)) + "ms")

### VECTORIZED GENERAL DOT PRODUCT ###
tic = time.process_time()
gdot = np.dot(W,x1)
toc = time.process_time()
print ("gdot = " + str(gdot) + "\n ----- Computation time = " + str(1000*(toc - tic)) + "ms")
```

As you may have noticed, the vectorized implementation is much cleaner and more efficient. For bigger vectors/matrices, the differences in running time become even bigger.

**Note** that `np.dot()` performs a matrix-matrix or matrix-vector multiplication. This is different from `np.multiply()` and the `*` operator (which is equivalent to `.*` in Matlab/Octave), which perform an element-wise multiplication.

### 2.1 Implement the L1 and L2 loss functions

**Exercise**: Implement the numpy vectorized version of the L1 loss. You may find the function abs(x) (absolute value of x) useful.

**Reminder**:
- The loss is used to evaluate the performance of your model. The bigger your loss is, the more different your predictions ($ \hat{y} $) are from the true values ($y$). In deep learning, you use optimization algorithms like Gradient Descent to train your model and to minimize the cost.
- L1 loss is defined as: $$\begin{align*} & L_1(\hat{y}, y) = \sum_{i=0}^m|y^{(i)} - \hat{y}^{(i)}| \end{align*}\tag{6}$$

```
# GRADED FUNCTION: L1

def L1(yhat, y):
    """
    Arguments:
    yhat -- vector of size m (predicted labels)
    y -- vector of size m (true labels)

    Returns:
    loss -- the value of the L1 loss function defined above
    """

    ### START CODE HERE ### (≈ 1 line of code)
    loss = np.sum(np.abs(y - yhat))
    ### END CODE HERE ###

    return loss

yhat = np.array([.9, 0.2, 0.1, .4, .9])
y = np.array([1, 0, 0, 1, 1])
print("L1 = " + str(L1(yhat,y)))
```

**Expected Output**:

<table style="width:20%"> <tr> <td> **L1** </td> <td> 1.1 </td> </tr> </table>

**Exercise**: Implement the numpy vectorized version of the L2 loss.
There are several ways of implementing the L2 loss, but you may find the function np.dot() useful. As a reminder, if $x = [x_1, x_2, ..., x_n]$, then `np.dot(x,x)` = $\sum_{j=0}^n x_j^{2}$.

- L2 loss is defined as $$\begin{align*} & L_2(\hat{y},y) = \sum_{i=0}^m(y^{(i)} - \hat{y}^{(i)})^2 \end{align*}\tag{7}$$

```
# GRADED FUNCTION: L2

def L2(yhat, y):
    """
    Arguments:
    yhat -- vector of size m (predicted labels)
    y -- vector of size m (true labels)

    Returns:
    loss -- the value of the L2 loss function defined above
    """

    ### START CODE HERE ### (≈ 1 line of code)
    loss = np.dot(y - yhat, y - yhat)
    ### END CODE HERE ###

    return loss

yhat = np.array([.9, 0.2, 0.1, .4, .9])
y = np.array([1, 0, 0, 1, 1])
print("L2 = " + str(L2(yhat,y)))
```

**Expected Output**:

<table style="width:20%"> <tr> <td> **L2** </td> <td> 0.43 </td> </tr> </table>

Congratulations on completing this assignment. We hope that this little warm-up exercise helps you in the future assignments, which will be more exciting and interesting!

<font color='blue'>
**What to remember:**
- Vectorization is very important in deep learning. It provides computational efficiency and clarity.
- You have reviewed the L1 and L2 loss.
- You are familiar with many numpy functions such as np.sum, np.dot, np.multiply, np.maximum, etc...
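As a final recap of the `np.dot` vs `np.multiply` distinction noted earlier in this section, a small sketch:

```python
import numpy as np

a = np.array([1, 2, 3])
b = np.array([4, 5, 6])

# np.dot on two vectors is the inner product: 1*4 + 2*5 + 3*6
print(np.dot(a, b))       # 32 -> a scalar

# np.multiply (and the * operator) is element-wise, like .* in Matlab/Octave
print(np.multiply(a, b))  # [ 4 10 18]
print(a * b)              # [ 4 10 18]
```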
# Setup ``` import sys import os import re import collections import itertools import bcolz import pickle sys.path.append('../../lib') sys.path.append('../') import numpy as np import pandas as pd import gc import random import smart_open import h5py import csv import json import functools import time import string import datetime as dt from tqdm import tqdm_notebook as tqdm import numpy as np import pandas as pd import matplotlib.pyplot as plt import seaborn as sns import global_utils random_state_number = 967898 import tensorflow as tf from tensorflow.python.client import device_lib def get_available_gpus(): local_device_protos = device_lib.list_local_devices() return [x.name for x in local_device_protos if x.device_type == 'GPU'] config = tf.ConfigProto() config.gpu_options.allow_growth=True sess = tf.Session(config=config) get_available_gpus() %pylab %matplotlib inline %load_ext line_profiler %load_ext memory_profiler %load_ext autoreload pd.options.mode.chained_assignment = None pd.options.display.max_columns = 999 color = sns.color_palette() ``` # Data ``` store = pd.HDFStore('../../data_prep/processed/stage1/data_frames.h5') train_df = store['train_df'] test_df = store['test_df'] display(train_df.head()) display(test_df.head()) corpus_vocab_list, corpus_vocab_wordidx = None, None with open('../../data_prep/processed/stage1/vocab_words_wordidx.pkl', 'rb') as f: (corpus_vocab_list, corpus_wordidx) = pickle.load(f) print(len(corpus_vocab_list), len(corpus_wordidx)) ``` # Data Prep To control the vocabulary pass in updated corpus_wordidx ``` from sklearn.model_selection import train_test_split x_train_df, x_val_df = train_test_split(train_df, test_size=0.10, random_state=random_state_number, stratify=train_df.Class) print(x_train_df.shape) print(x_val_df.shape) from tensorflow.contrib.keras.python.keras.utils import np_utils from keras.preprocessing.sequence import pad_sequences from keras.utils.np_utils import to_categorical vocab_size=len(corpus_vocab_list) 
``` ## T:sent_words ### generate data ``` custom_unit_dict = { "gene_unit" : "words", "variation_unit" : "words", # text transformed to sentences attribute "doc_unit" : "words", "doc_form" : "sentences", "divide_document": "multiple_unit" } %autoreload import global_utils gen_data = global_utils.GenerateDataset(x_train_df, corpus_wordidx) x_train_21_T, x_train_21_G, x_train_21_V, x_train_21_C = gen_data.generate_data(custom_unit_dict, has_class=True, add_start_end_tag=True) del gen_data print("Train data") print(np.array(x_train_21_T).shape, x_train_21_T[0]) print(np.array(x_train_21_G).shape, x_train_21_G[0]) print(np.array(x_train_21_V).shape, x_train_21_V[0]) print(np.array(x_train_21_C).shape, x_train_21_C[0]) gen_data = global_utils.GenerateDataset(x_val_df, corpus_wordidx) x_val_21_T, x_val_21_G, x_val_21_V, x_val_21_C = gen_data.generate_data(custom_unit_dict, has_class=True, add_start_end_tag=True) del gen_data print("Val data") print("text",np.array(x_val_21_T).shape) print("gene",np.array(x_val_21_G).shape, x_val_21_G[0]) print("variation",np.array(x_val_21_V).shape, x_val_21_V[0]) print("classes",np.array(x_val_21_C).shape, x_val_21_C[0]) ``` ### format data ``` word_unknown_tag_idx = corpus_wordidx["<UNK>"] char_unknown_tag_idx = global_utils.char_unknown_tag_idx MAX_SENT_LEN = 60 x_train_21_T = pad_sequences(x_train_21_T, maxlen=MAX_SENT_LEN, value=word_unknown_tag_idx, padding="post",truncating="post") x_val_21_T = pad_sequences(x_val_21_T, maxlen=MAX_SENT_LEN, value=word_unknown_tag_idx, padding="post",truncating="post") print(x_train_21_T.shape, x_val_21_T.shape) ``` keras np_utils.to_categorical expects zero index categorical variables https://github.com/fchollet/keras/issues/570 ``` x_train_21_C = np.array(x_train_21_C) - 1 x_val_21_C = np.array(x_val_21_C) - 1 x_train_21_C = np_utils.to_categorical(np.array(x_train_21_C), 9) x_val_21_C = np_utils.to_categorical(np.array(x_val_21_C), 9) print(x_train_21_C.shape, x_val_21_C.shape) ``` ## 
T:text_words ### generate data ``` custom_unit_dict = { "gene_unit" : "words", "variation_unit" : "words", # text transformed to sentences attribute "doc_unit" : "words", "doc_form" : "text", "divide_document": "single_unit" } %autoreload import global_utils gen_data = global_utils.GenerateDataset(x_train_df, corpus_wordidx) x_train_22_T, x_train_22_G, x_train_22_V, x_train_22_C = gen_data.generate_data(custom_unit_dict, has_class=True, add_start_end_tag=True) del gen_data print("Train data") print("text",np.array(x_train_22_T).shape) print("gene",np.array(x_train_22_G).shape, x_train_22_G[0]) print("variation",np.array(x_train_22_V).shape, x_train_22_V[0]) print("classes",np.array(x_train_22_C).shape, x_train_22_C[0]) gen_data = global_utils.GenerateDataset(x_val_df, corpus_wordidx) x_val_22_T, x_val_22_G, x_val_22_V, x_val_22_C = gen_data.generate_data(custom_unit_dict, has_class=True, add_start_end_tag=True) del gen_data print("Val data") print("text",np.array(x_val_22_T).shape) print("gene",np.array(x_val_22_G).shape, x_val_22_G[0]) print("variation",np.array(x_val_22_V).shape, x_val_22_V[0]) print("classes",np.array(x_val_22_C).shape, x_val_22_C[0]) ``` ### format data ``` word_unknown_tag_idx = corpus_wordidx["<UNK>"] char_unknown_tag_idx = global_utils.char_unknown_tag_idx MAX_TEXT_LEN = 5000 x_train_22_T = pad_sequences(x_train_22_T, maxlen=MAX_TEXT_LEN, value=word_unknown_tag_idx, padding="post",truncating="post") x_val_22_T = pad_sequences(x_val_22_T, maxlen=MAX_TEXT_LEN, value=word_unknown_tag_idx, padding="post",truncating="post") print(x_train_22_T.shape, x_val_22_T.shape) MAX_GENE_LEN = 1 MAX_VAR_LEN = 4 x_train_22_G = pad_sequences(x_train_22_G, maxlen=MAX_GENE_LEN, value=word_unknown_tag_idx) x_train_22_V = pad_sequences(x_train_22_V, maxlen=MAX_VAR_LEN, value=word_unknown_tag_idx) x_val_22_G = pad_sequences(x_val_22_G, maxlen=MAX_GENE_LEN, value=word_unknown_tag_idx) x_val_22_V = pad_sequences(x_val_22_V, maxlen=MAX_VAR_LEN, 
value=word_unknown_tag_idx) print(x_train_22_G.shape, x_train_22_V.shape) print(x_val_22_G.shape, x_val_22_V.shape) ``` keras np_utils.to_categorical expects zero index categorical variables https://github.com/fchollet/keras/issues/570 ``` x_train_22_C = np.array(x_train_22_C) - 1 x_val_22_C = np.array(x_val_22_C) - 1 x_train_22_C = np_utils.to_categorical(np.array(x_train_22_C), 9) x_val_22_C = np_utils.to_categorical(np.array(x_val_22_C), 9) print(x_train_22_C.shape, x_val_22_C.shape) ``` ### test Data setup ``` gen_data = global_utils.GenerateDataset(test_df, corpus_wordidx) x_test_22_T, x_test_22_G, x_test_22_V, _ = gen_data.generate_data(custom_unit_dict, has_class=False, add_start_end_tag=True) del gen_data print("Test data") print("text",np.array(x_test_22_T).shape) print("gene",np.array(x_test_22_G).shape, x_test_22_G[0]) print("variation",np.array(x_test_22_V).shape, x_test_22_V[0]) x_test_22_T = pad_sequences(x_test_22_T, maxlen=MAX_TEXT_LEN, value=word_unknown_tag_idx, padding="post",truncating="post") print(x_test_22_T.shape) MAX_GENE_LEN = 1 MAX_VAR_LEN = 4 x_test_22_G = pad_sequences(x_test_22_G, maxlen=MAX_GENE_LEN, value=word_unknown_tag_idx) x_test_22_V = pad_sequences(x_test_22_V, maxlen=MAX_VAR_LEN, value=word_unknown_tag_idx) print(x_test_22_G.shape, x_test_22_V.shape) ``` ## T:text_chars ### generate data ``` custom_unit_dict = { "gene_unit" : "raw_chars", "variation_unit" : "raw_chars", # text transformed to sentences attribute "doc_unit" : "raw_chars", "doc_form" : "text", "divide_document" : "multiple_unit" } %autoreload import global_utils gen_data = global_utils.GenerateDataset(x_train_df, corpus_wordidx) x_train_33_T, x_train_33_G, x_train_33_V, x_train_33_C = gen_data.generate_data(custom_unit_dict, has_class=True, add_start_end_tag=True) del gen_data print("Train data") print("text",np.array(x_train_33_T).shape, x_train_33_T[0]) print("gene",np.array(x_train_33_G).shape, x_train_33_G[0]) print("variation",np.array(x_train_33_V).shape, 
x_train_33_V[0]) print("classes",np.array(x_train_33_C).shape, x_train_33_C[0]) %autoreload import global_utils gen_data = global_utils.GenerateDataset(x_val_df, corpus_wordidx) x_val_33_T, x_val_33_G, x_val_33_V, x_val_33_C = gen_data.generate_data(custom_unit_dict, has_class=True, add_start_end_tag=True) del gen_data print("Val data") print("text",np.array(x_val_33_T).shape, x_val_33_T[98]) print("gene",np.array(x_val_33_G).shape, x_val_33_G[0]) print("variation",np.array(x_val_33_V).shape, x_val_33_V[0]) print("classes",np.array(x_val_33_C).shape, x_val_33_C[0]) ``` ### format data ``` word_unknown_tag_idx = corpus_wordidx["<UNK>"] char_unknown_tag_idx = global_utils.char_unknown_tag_idx MAX_CHAR_IN_SENT_LEN = 150 x_train_33_T = pad_sequences(x_train_33_T, maxlen=MAX_CHAR_IN_SENT_LEN, value=char_unknown_tag_idx, padding="post",truncating="post") x_val_33_T = pad_sequences(x_val_33_T, maxlen=MAX_CHAR_IN_SENT_LEN, value=char_unknown_tag_idx, padding="post",truncating="post") print(x_train_33_T.shape, x_val_33_T.shape) x_train_33_G = pad_sequences(x_train_33_G, maxlen=MAX_CHAR_IN_SENT_LEN, value=char_unknown_tag_idx) x_train_33_V = pad_sequences(x_train_33_V, maxlen=MAX_CHAR_IN_SENT_LEN, value=char_unknown_tag_idx) x_val_33_G = pad_sequences(x_val_33_G, maxlen=MAX_CHAR_IN_SENT_LEN, value=char_unknown_tag_idx) x_val_33_V = pad_sequences(x_val_33_V, maxlen=MAX_CHAR_IN_SENT_LEN, value=char_unknown_tag_idx) print(x_train_33_G.shape, x_train_33_V.shape) print(x_val_33_G.shape, x_val_33_V.shape) ``` keras np_utils.to_categorical expects zero index categorical variables https://github.com/fchollet/keras/issues/570 ``` x_train_33_C = np.array(x_train_33_C) - 1 x_val_33_C = np.array(x_val_33_C) - 1 x_train_33_C = np_utils.to_categorical(np.array(x_train_33_C), 9) x_val_33_C = np_utils.to_categorical(np.array(x_val_33_C), 9) print(x_train_33_C.shape, x_val_33_C.shape) ``` ## T:text_sent_words ### generate data ``` custom_unit_dict = { "gene_unit" : "words", "variation_unit" 
: "words", # text transformed to sentences attribute "doc_unit" : "word_list", "doc_form" : "text", "divide_document" : "single_unit" } %autoreload import global_utils gen_data = global_utils.GenerateDataset(x_train_df, corpus_wordidx) x_train_34_T, x_train_34_G, x_train_34_V, x_train_34_C = gen_data.generate_data(custom_unit_dict, has_class=True, add_start_end_tag=True) del gen_data print("Train data") print("text",np.array(x_train_34_T).shape, x_train_34_T[0][:1]) print("gene",np.array(x_train_34_G).shape, x_train_34_G[0]) print("variation",np.array(x_train_34_V).shape, x_train_34_V[0]) print("classes",np.array(x_train_34_C).shape, x_train_34_C[0]) %autoreload import global_utils gen_data = global_utils.GenerateDataset(x_val_df, corpus_wordidx) x_val_34_T, x_val_34_G, x_val_34_V, x_val_34_C = gen_data.generate_data(custom_unit_dict, has_class=True, add_start_end_tag=True) del gen_data print("Val data") print("text",np.array(x_val_34_T).shape, x_val_34_T[98][:1]) print("gene",np.array(x_val_34_G).shape, x_val_34_G[0]) print("variation",np.array(x_val_34_V).shape, x_val_34_V[0]) print("classes",np.array(x_val_34_C).shape, x_val_34_C[0]) ``` ### format data ``` word_unknown_tag_idx = corpus_wordidx["<UNK>"] char_unknown_tag_idx = global_utils.char_unknown_tag_idx MAX_DOC_LEN = 500 # no of sentences in a document MAX_SENT_LEN = 80 # no of words in a sentence for doc_i, doc in enumerate(x_train_34_T): x_train_34_T[doc_i] = x_train_34_T[doc_i][:MAX_DOC_LEN] # padding sentences if len(x_train_34_T[doc_i]) < MAX_DOC_LEN: for not_used_i in range(0,MAX_DOC_LEN - len(x_train_34_T[doc_i])): x_train_34_T[doc_i].append([word_unknown_tag_idx]*MAX_SENT_LEN) # padding words x_train_34_T[doc_i] = pad_sequences(x_train_34_T[doc_i], maxlen=MAX_SENT_LEN, value=word_unknown_tag_idx) for doc_i, doc in enumerate(x_val_34_T): x_val_34_T[doc_i] = x_val_34_T[doc_i][:MAX_DOC_LEN] # padding sentences if len(x_val_34_T[doc_i]) < MAX_DOC_LEN: for not_used_i in range(0,MAX_DOC_LEN - 
len(x_val_34_T[doc_i])): x_val_34_T[doc_i].append([word_unknown_tag_idx]*MAX_SENT_LEN) # padding words x_val_34_T[doc_i] = pad_sequences(x_val_34_T[doc_i], maxlen=MAX_SENT_LEN, value=word_unknown_tag_idx) x_train_34_T = np.array(x_train_34_T) x_val_34_T = np.array(x_val_34_T) print(x_val_34_T.shape, x_train_34_T.shape) x_train_34_G = pad_sequences(x_train_34_G, maxlen=MAX_SENT_LEN, value=word_unknown_tag_idx) x_train_34_V = pad_sequences(x_train_34_V, maxlen=MAX_SENT_LEN, value=word_unknown_tag_idx) x_val_34_G = pad_sequences(x_val_34_G, maxlen=MAX_SENT_LEN, value=word_unknown_tag_idx) x_val_34_V = pad_sequences(x_val_34_V, maxlen=MAX_SENT_LEN, value=word_unknown_tag_idx) print(x_train_34_G.shape, x_train_34_V.shape) print(x_val_34_G.shape, x_val_34_V.shape) ``` keras np_utils.to_categorical expects zero index categorical variables https://github.com/fchollet/keras/issues/570 ``` x_train_34_C = np.array(x_train_34_C) - 1 x_val_34_C = np.array(x_val_34_C) - 1 x_train_34_C = np_utils.to_categorical(np.array(x_train_34_C), 9) x_val_34_C = np_utils.to_categorical(np.array(x_val_34_C), 9) print(x_train_34_C.shape, x_val_34_C.shape) ``` Need to form 3 dimensional target data for rationale model training ``` temp = (x_train_34_C.shape[0],1,x_train_34_C.shape[1]) x_train_34_C_sent = np.repeat(x_train_34_C.reshape(temp[0],temp[1],temp[2]), MAX_DOC_LEN, axis=1) #sentence test targets temp = (x_val_34_C.shape[0],1,x_val_34_C.shape[1]) x_val_34_C_sent = np.repeat(x_val_34_C.reshape(temp[0],temp[1],temp[2]), MAX_DOC_LEN, axis=1) print(x_train_34_C_sent.shape, x_val_34_C_sent.shape) ``` ## Embedding layer ### for words ``` WORD_EMB_SIZE = 200 %autoreload import global_utils ft_file_path = "/home/bicepjai/Projects/Deep-Survey-Text-Classification/data_prep/processed/stage1/pretrained_word_vectors/ft_sg_200d_50e.vec" trained_embeddings = global_utils.get_embeddings_from_ft(ft_file_path, WORD_EMB_SIZE, corpus_vocab_list) trained_embeddings.shape ``` ### for characters ``` 
CHAR_EMB_SIZE = 64 char_embeddings = np.random.randn(global_utils.CHAR_ALPHABETS_LEN, CHAR_EMB_SIZE) char_embeddings.shape ``` # Models ## prep ``` %autoreload import tensorflow.contrib.keras as keras import tensorflow as tf from keras import backend as K from keras.engine import Layer, InputSpec, InputLayer from keras.models import Model, Sequential from keras.layers import Dropout, Embedding, concatenate from keras.layers import Conv1D, MaxPool1D, Conv2D, MaxPool2D, ZeroPadding1D, GlobalMaxPool1D from keras.layers import Dense, Input, Flatten, BatchNormalization from keras.layers import Concatenate, Dot, Merge, Multiply, RepeatVector from keras.layers import Bidirectional, TimeDistributed from keras.layers import SimpleRNN, LSTM, GRU, Lambda, Permute from keras.layers.core import Reshape, Activation from keras.optimizers import Adam from keras.callbacks import ModelCheckpoint,EarlyStopping,TensorBoard from keras.constraints import maxnorm from keras.regularizers import l2 ``` ## model_1: paper ``` text_seq_input = Input(shape=(MAX_SENT_LEN,), dtype='int32') text_embedding = Embedding(vocab_size, WORD_EMB_SIZE, input_length=MAX_SENT_LEN, weights=[trained_embeddings], trainable=True)(text_seq_input) model_1 = Sequential([ Embedding(vocab_size, WORD_EMB_SIZE, weights=[trained_embeddings], input_length=MAX_SENT_LEN, trainable=True), LSTM(32), Dense(9, activation='softmax') ]) ``` #### training ``` model_1.compile(loss='categorical_crossentropy', optimizer=Adam(), metrics=['categorical_accuracy']) model_1.summary() %rm -rf ./tb_graphs/* tb_callback = keras.callbacks.TensorBoard(log_dir='./tb_graphs', histogram_freq=0, write_graph=True, write_images=True) checkpointer = ModelCheckpoint(filepath="model_1_weights.hdf5", verbose=1, monitor="val_categorical_accuracy", save_best_only=True, mode="max") with tf.Session() as sess: # model = keras.models.load_model('current_model.h5') sess.run(tf.global_variables_initializer()) try: model_1.load_weights("model_1_weights.hdf5") 
except IOError as ioe: print("no checkpoints available !") model_1.fit(x_train_21_T, x_train_21_C, validation_data=(x_val_21_T, x_val_21_C), epochs=5, batch_size=1024, shuffle=True, callbacks=[tb_callback,checkpointer]) #model.save('current_sent_model.h5') ``` ## model_2: with GRU ``` text_seq_input = Input(shape=(MAX_SENT_LEN,), dtype='int32') text_embedding = Embedding(vocab_size, WORD_EMB_SIZE, input_length=MAX_SENT_LEN, weights=[trained_embeddings], trainable=True)(text_seq_input) model_2 = Sequential([ Embedding(vocab_size, WORD_EMB_SIZE, weights=[trained_embeddings], input_length=MAX_SENT_LEN, trainable=True), GRU(32), Dense(9, activation='softmax') ]) ``` #### training ``` model_2.compile(loss='categorical_crossentropy', optimizer=Adam(), metrics=['categorical_accuracy']) model_2.summary() %rm -rf ./tb_graphs/* tb_callback = keras.callbacks.TensorBoard(log_dir='./tb_graphs', histogram_freq=0, write_graph=True, write_images=True) checkpointer = ModelCheckpoint(filepath="model_2_weights.hdf5", verbose=1, monitor="val_categorical_accuracy", save_best_only=True, mode="max") with tf.Session() as sess: # model = keras.models.load_model('current_model.h5') sess.run(tf.global_variables_initializer()) try: model_2.load_weights("model_2_weights.hdf5") except IOError as ioe: print("no checkpoints available !") model_2.fit(x_train_21_T, x_train_21_C, validation_data=(x_val_21_T, x_val_21_C), epochs=5, batch_size=1024, shuffle=True, callbacks=[tb_callback,checkpointer]) #model.save('current_sent_model.h5') ```
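The label shift applied before `np_utils.to_categorical` in the data-prep cells above can be sketched in plain numpy (an illustrative equivalent with made-up labels — not the Keras internals):

```python
import numpy as np

# Hypothetical labels as stored in the Class column (values 1..9)
classes = np.array([1, 9, 3])

# to_categorical expects zero-indexed categories, hence the "- 1" above
zero_indexed = classes - 1

# Equivalent one-hot encoding: pick rows of a 9x9 identity matrix
one_hot = np.eye(9)[zero_indexed]
print(one_hot.shape)               # (3, 9)
print(one_hot.argmax(axis=1) + 1)  # recovers the original labels
```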
# Diagram Widget The same _renderer_ that powers the [Diagram Document](./Diagram%20Document.ipynb) can be used as a computable _Jupyter Widget_, which offers even more power than the [Diagram Rich Display](./Diagram%20Rich%20Display.ipynb). ``` from ipywidgets import HBox, VBox, Textarea, jslink, jsdlink, FloatSlider, IntSlider, Checkbox, Text, SelectMultiple, Accordion from lxml import etree from traitlets import observe, link, dlink from ipydrawio import Diagram diagram = Diagram(layout=dict(min_height="80vh", flex="1")) box = HBox([diagram]) box ``` ## value A `Diagram.source`'s `value` trait is the raw drawio XML. You can use one document for multiple diagrams. > [graphviz2drawio](https://pypi.org/project/graphviz2drawio) is recommended for getting to **give me some drawio XML from my data right now**. ``` Diagram(source=diagram.source, layout=dict(min_height="400px")) diagram.source.value = '''<mxfile host="127.0.0.1" modified="2021-01-27T15:56:33.612Z" agent="5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/88.0.4324.96 Safari/537.36" etag="u04aDhBnb7c9tLWsiHn9" version="13.6.10"> <diagram id="x" name="Page-1"> <mxGraphModel dx="1164" dy="293" grid="1" gridSize="10" guides="1" tooltips="1" connect="1" arrows="1" fold="1" page="1" pageScale="1" pageWidth="850" pageHeight="1100" math="0" shadow="0"> <root> <mxCell id="0"/> <mxCell id="1" parent="0"/> <mxCell id="2" value="" style="edgeStyle=entityRelationEdgeStyle;startArrow=none;endArrow=none;segment=10;curved=1;" parent="1" source="4" target="5" edge="1"> <mxGeometry relative="1" as="geometry"/> </mxCell> <mxCell id="3" value="" style="edgeStyle=entityRelationEdgeStyle;startArrow=none;endArrow=none;segment=10;curved=1;" parent="1" source="4" target="6" edge="1"> <mxGeometry relative="1" as="geometry"> <mxPoint x="260" y="160" as="sourcePoint"/> </mxGeometry> </mxCell> <UserObject label="The Big Idea" treeRoot="1" id="4"> <mxCell 
style="ellipse;whiteSpace=wrap;html=1;align=center;collapsible=0;container=1;recursiveResize=0;" parent="1" vertex="1">
            <mxGeometry x="300" y="140" width="100" height="40" as="geometry"/>
          </mxCell>
        </UserObject>
        <mxCell id="5" value="Branch" style="whiteSpace=wrap;html=1;shape=partialRectangle;top=0;left=0;bottom=1;right=0;points=[[0,1],[1,1]];strokeColor=#000000;fillColor=none;align=center;verticalAlign=bottom;routingCenterY=0.5;snapToPoint=1;collapsible=0;container=1;recursiveResize=0;autosize=1;" parent="1" vertex="1">
          <mxGeometry x="460" y="120" width="80" height="20" as="geometry"/>
        </mxCell>
        <mxCell id="6" value="Sub Topic" style="whiteSpace=wrap;html=1;rounded=1;arcSize=50;align=center;verticalAlign=middle;collapsible=0;container=1;recursiveResize=0;strokeWidth=1;autosize=1;spacing=4;" parent="1" vertex="1">
          <mxGeometry x="460" y="160" width="72" height="26" as="geometry"/>
        </mxCell>
      </root>
    </mxGraphModel>
  </diagram>
</mxfile>'''

value = Textarea(description="value", rows=20)
controls = Accordion([value])
controls.set_title(0, "value")
jslink((diagram.source, "value"), (value, "value"))
box.children = [controls, diagram]
```

There are a number of challenges in using this XML as a protocol:

- it includes the hostname (ick!)
- it includes an etag
- stripping these out creates flicker when updating

At present, tools like jinja2 can template the raw XML as text, while `lxml` can work with it at a higher level, e.g. via XPath.

> Stay tuned for better tools for working with this format with e.g. `networkx`

## Interactive state

A `Diagram` exposes a number of parts of both the content and interactive state of the editor.
```
zoom = FloatSlider(description="zoom", min=0.01)
scroll_x, scroll_y = [FloatSlider(description=f"scroll {x}", min=-1e5, max=1e5) for x in "xy"]
current_page = IntSlider(description="page")
jslink((diagram, "zoom"), (zoom, "value"))
jslink((diagram, "scroll_x"), (scroll_x, "value"))
jslink((diagram, "scroll_y"), (scroll_y, "value"))
jslink((diagram, "current_page"), (current_page, "value"))
controls.children = [VBox([zoom, scroll_x, scroll_y, current_page]), value]
controls._titles = {"0": "ui", "1": "value"}

selected_cells = SelectMultiple(description="selected")
enable_selected = Checkbox(True, description="enable select")

def update_selected(*_):
    if enable_selected.value:
        diagram.selected_cells = [*selected_cells.value]

def update_selected_options(*_):
    try:
        with selected_cells.hold_trait_notifications():
            selected_cells.options = [
                cell.attrib["id"]
                for cell in etree.fromstring(diagram.source.value).xpath("//mxCell")
                if "id" in cell.attrib
            ]
            selected_cells.value = diagram.selected_cells
    except Exception:  # the XML may be briefly unparseable while editing
        pass

selected_cells.observe(update_selected, "value")
diagram.source.observe(update_selected_options, "value")
diagram.observe(update_selected_options, "selected_cells")
update_selected_options()
controls.children = [VBox([zoom, scroll_x, scroll_y, current_page]), VBox([enable_selected, selected_cells]), value]
controls._titles = {"0": "ui", "1": "selection", "2": "value"}
HBox([enable_selected, selected_cells])
```

## Page Information

`Diagrams` actually describe a "real thing", measured in inches.
```
page_format = {
    k: IntSlider(description=k, value=v, min=0, max=1e5)
    for k, v in diagram.page_format.items()
}

def update_format(*_):
    diagram.page_format = {k: v.value for k, v in page_format.items()}

def update_sliders(*_):
    for k, v in page_format.items():
        v.value = diagram.page_format[k]

[v.observe(update_format, "value") for k, v in page_format.items()]
[diagram.observe(update_sliders, "page_format")]

controls.children = [VBox([zoom, scroll_x, scroll_y, current_page]), VBox([enable_selected, selected_cells]), VBox([*page_format.values()]), value]
controls._titles = {"0": "ui", "1": "selection", "2": "page", "3": "value"}
```

## Grid

The styling of the on-screen grid is customizable. This typically _won't_ be included in exports to e.g. SVG.

```
grid_enabled = Checkbox(description="grid")
grid_size = FloatSlider(description="grid size")
grid_color = Text("#66666666", description="grid color")
jslink((diagram, "grid_enabled"), (grid_enabled, "value"))
jslink((diagram, "grid_size"), (grid_size, "value"))
jslink((diagram, "grid_color"), (grid_color, "value"))
controls.children = [VBox([zoom, scroll_x, scroll_y, current_page]), VBox([enable_selected, selected_cells]), VBox([*page_format.values()]), VBox([grid_enabled, grid_size, grid_color]), value]
controls._titles = {"0": "ui", "1": "selection", "2": "page", "3": "grid", "4": "value"}
```
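Returning to the `value` XML discussed earlier: the same kind of "higher-level" query the selection callback does with `lxml` can be sketched with the stdlib `xml.etree.ElementTree`, whose limited XPath subset is enough for simple lookups. The payload below is a pared-down, hypothetical mxfile with the same shape as the real one:

```python
import xml.etree.ElementTree as ET

# A minimal stand-in for a drawio mxfile payload
XML = """<mxfile><diagram id="x" name="Page-1"><mxGraphModel><root>
<mxCell id="0"/><mxCell id="1" parent="0"/>
<mxCell id="2" value="Node" parent="1" vertex="1"/>
</root></mxGraphModel></diagram></mxfile>"""

root = ET.fromstring(XML)
# './/mxCell' finds every cell at any depth, like the '//mxCell' XPath above
cell_ids = [cell.attrib["id"] for cell in root.findall(".//mxCell")]
print(cell_ids)  # ['0', '1', '2']
```

This is how `update_selected_options` derives its option list; swapping in `lxml.etree` gives the full XPath language for anything more involved.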
<a rel="license" href="http://creativecommons.org/licenses/by/4.0/"><img alt="Creative Commons License" style="border-width:0" src="https://i.creativecommons.org/l/by/4.0/88x31.png" /></a><br /><span xmlns:dct="http://purl.org/dc/terms/" property="dct:title"><b>A Magic Square Solver</b></span> by <a xmlns:cc="http://creativecommons.org/ns#" href="http://mate.unipv.it/gualandi" property="cc:attributionName" rel="cc:attributionURL">Stefano Gualandi</a> is licensed under a <a rel="license" href="http://creativecommons.org/licenses/by/4.0/">Creative Commons Attribution 4.0 International License</a>.<br />Based on a work at <a xmlns:dct="http://purl.org/dc/terms/" href="https://github.com/mathcoding/opt4ds" rel="dct:source">https://github.com/mathcoding/opt4ds</a>.

**NOTE:** Run the following script whenever running this notebook on Google Colab.

```
import shutil
import sys
import os.path

if not shutil.which("pyomo"):
    !pip install -q pyomo
    assert(shutil.which("pyomo"))

if not (shutil.which("glpk") or os.path.isfile("glpk")):
    if "google.colab" in sys.modules:
        !apt-get install -y -qq glpk-utils
    else:
        try:
            !conda install -c conda-forge glpk
        except:
            pass
```

# Magic Square Solver

In this notebook, we propose an ILP model for the [Magic Square](https://en.wikipedia.org/wiki/Magic_square) puzzle. The puzzle asks us to place the digits from $1$ to $n^2$ into an $n \times n$ grid, in such a way that the sum of the digits in each row, the sum of the digits in each column, and the sum of the digits on each of the two main diagonals are all equal to the same number.

## ILP Model

The model we propose is as follows.

**Decision Variables:** We use two types of variables:

* The variable $x_{ijk} \in \{0,1\}$ is equal to 1 if the cell in position $(i,j)$ contains the digit $k$, and is equal to 0 otherwise. For ease of exposition, we use the sets $I,J:=\{1,\dots,n\}$ and $K := \{1,\dots,n^2\}$.
* The variable $z\in\mathbb{Z}_+$ represents the magic number.
**Objective function:** Since this is a feasibility problem, we could set the objective function to a constant. Instead, we will simply use $z$ as the objective expression, which also avoids a solver warning about a constant objective.

**Constraints:** We introduce the following linear constraints, which encode the puzzle rules:

1. Every digit must be placed in exactly one position:
$$ \sum_{i \in I}\sum_{j \in J} x_{ijk} = 1, \;\; \forall k \in K $$
2. Every position must contain exactly one digit:
$$ \sum_{k \in K} x_{ijk} = 1, \;\; \forall i \in I, \; \forall j \in J $$
3. The sum of the digits in each row must be equal to $z$:
$$ \sum_{j \in J}\sum_{k \in K} k\, x_{ijk} = z, \;\; \forall i \in I $$
4. The sum of the digits in each column must be equal to $z$:
$$ \sum_{i \in I}\sum_{k \in K} k\, x_{ijk} = z, \;\; \forall j \in J $$
5. The sum of the digits on each of the two main diagonals must be equal to $z$:
$$ \sum_{i \in I} \sum_{k \in K} k\, x_{iik} = z, $$
$$ \sum_{i \in I} \sum_{k \in K} k\, x_{i(n-i+1)k} = z. $$

We show next how to implement this model in Pyomo.

## Pyomo implementation

As a first step, we import the Pyomo libraries.

```
from pyomo.environ import ConcreteModel, Var, Objective, Constraint, SolverFactory
from pyomo.environ import Binary, RangeSet, ConstraintList, PositiveIntegers
```

We create an instance of the class *ConcreteModel*, and we add the *RangeSet* and *Var* objects corresponding to the index sets and the variables of our model. We also set the objective function.

```
# Create concrete model
model = ConcreteModel()

n = 4

# Set of indices
model.I = RangeSet(1, n)
model.J = RangeSet(1, n)
model.K = RangeSet(1, n*n)

# Variables
model.z = Var(within=PositiveIntegers)
model.x = Var(model.I, model.J, model.K, within=Binary)

# Objective Function
model.obj = Objective(expr = model.z)
```

Then, we encode all the constraints of our model using the Pyomo syntax.
``` def Unique(model, k): return sum(model.x[i,j,k] for j in model.J for i in model.I) == 1 model.unique = Constraint(model.K, rule = Unique) def CellUnique(model, i, j): return sum(model.x[i,j,k] for k in model.K) == 1 model.cellUnique = Constraint(model.I, model.J, rule = CellUnique) def Row(model, i): return sum(k*model.x[i,j,k] for j in model.J for k in model.K) == model.z model.row = Constraint(model.I, rule = Row) def Col(model, j): return sum(k*model.x[i,j,k] for i in model.I for k in model.K) == model.z model.column = Constraint(model.J, rule = Col) model.diag1 = Constraint( expr = sum(k*model.x[i,i,k] for i in model.I for k in model.K) == model.z) model.diag2 = Constraint( expr = sum(k*model.x[i,n-i+1,k] for i in model.I for k in model.K) == model.z) ``` Finally, we solve the model for a given $n$ and we check the solution status. ``` # Solve the model sol = SolverFactory('glpk').solve(model) # CHECK SOLUTION STATUS # Get a JSON representation of the solution sol_json = sol.json_repn() # Check solution status if sol_json['Solver'][0]['Status'] != 'ok': print("Problem unsolved") if sol_json['Solver'][0]['Termination condition'] != 'optimal': print("Problem unsolved") ``` If the problem is solved and a feasible solution is available, we write the solution into a colorful **magic square**. ``` def PlotMagicSquare(x, n): # Report solution value import matplotlib.pyplot as plt import numpy as np import itertools sol = np.zeros((n,n), dtype=int) for i, j, k in x: if x[i,j,k]() > 0.5: sol[i-1,j-1] = k cmap = plt.get_cmap('Blues') plt.figure(figsize=(6,6)) plt.imshow(sol, interpolation='nearest', cmap=cmap) plt.title("Magic Square, Size: {}".format(n)) plt.axis('off') for i, j in itertools.product(range(n), range(n)): plt.text(j, i, "{:d}".format(sol[i, j]), fontsize=24, ha='center', va='center') plt.tight_layout() plt.show() PlotMagicSquare(model.x, n) ```
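As a sanity check on the solver's output: for a normal $n \times n$ magic square the magic constant is known in closed form, $z = n(n^2+1)/2$, since the $n$ rows together contain every digit $1,\dots,n^2$ exactly once. A quick sketch:

```python
def magic_constant(n: int) -> int:
    # n rows each summing to z contain all digits 1..n^2 once,
    # so n*z = n^2(n^2 + 1)/2, hence z = n(n^2 + 1)/2.
    return n * (n * n + 1) // 2

print([magic_constant(n) for n in (3, 4, 5)])  # [15, 34, 65]
```

For the $n=4$ instance solved above, the value of `model.z` should therefore come out as 34.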
```
import sys
sys.path.append('..')
sys.path.append('../..')

from stats import *
from sentiment_stats import *
from peewee import SQL
from database.models import RawFacebookComments, RawTwitterComments, RawInstagramComments, RawYouTubeComments, RawHashtagComments

rede_social = 'Hashtags'
modelo = RawHashtagComments

cores = ['#FFA726', '#66BB6A', '#42A5F5', '#FFEE58', '#EF5350', '#AB47BC', '#C8C8C8']
cores2 = ['#FFA726', '#AB47BC', '#FFEE58', '#C8C8C8', '#EF5350', '#66BB6A', '#42A5F5']
cores_val = ['#EF5350', '#C8C8C8', '#66BB6A']
cores_val2 = ['#66BB6A', '#EF5350', '#C8C8C8']

sentimentos = ['ALEGRIA', 'SURPRESA', 'TRISTEZA', 'MEDO', 'RAIVA', 'DESGOSTO', 'NEUTRO']
valencia = ['POSITIVO', 'NEGATIVO', 'NEUTRO']

valencia_dict = OrderedDict()
for val in valencia:
    valencia_dict[val] = 0

sentimentos_dict = OrderedDict()
for sentimento in sentimentos:
    sentimentos_dict[sentimento] = 0

default_clause = [
    SQL('length(clean_comment) > 0'),
]

positivo_clause = [
    SQL('length(emotion) > 0 AND length(valence) > 0'),
    SQL('emotion in ("ALEGRIA", "SURPRESA") AND valence = "POSITIVO"')
]

negativo_clause = [
    SQL('length(emotion) > 0 AND length(valence) > 0'),
    SQL('emotion in ("TRISTEZA", "RAIVA", "MEDO", "DESGOSTO") AND valence = "NEGATIVO"')
]

neutro_clause = [
    SQL('length(emotion) > 0 AND length(valence) > 0'),
    SQL('emotion in ("NEUTRO") AND valence = "NEUTRO"')
]

general = default_clause + [
    SQL('length(emotion) > 0 AND length(valence) > 0'),
    SQL("""
    (emotion in ("ALEGRIA", "SURPRESA") AND valence = "POSITIVO") OR
    (emotion in ("TRISTEZA", "RAIVA", "MEDO", "DESGOSTO") AND valence = "NEGATIVO") OR
    (emotion in ("NEUTRO") AND valence = "NEUTRO")
    """)
]
```

### Overall comment emotions: Twitter Hashtags

```
total_comentarios = modelo.select() \
    .where(default_clause) \
    .count()

comentarios_positivos = modelo.select() \
    .where(reduce(operator.and_, default_clause + positivo_clause)) \
    .order_by(modelo.timestamp)

comentarios_negativos = modelo.select() \
    .where(reduce(operator.and_, default_clause + negativo_clause)) \
    .order_by(modelo.timestamp)

comentarios_neutros = modelo.select() \
    .where(reduce(operator.and_, default_clause + neutro_clause)) \
    .order_by(modelo.timestamp)

comentarios = modelo.select() \
    .where(reduce(operator.and_, general)) \
    .order_by(modelo.timestamp)

alegria, surpresa, tristeza, medo, raiva, desgosto, positivo, negativo, neutro = load_emocoes_comentarios(comentarios_positivos, comentarios_negativos, comentarios_neutros)
print_statistics(rede_social, total_comentarios, comentarios_positivos, comentarios_negativos, comentarios_neutros)
```

#### Total comment count: Valence

```
graph_valence_total(rede_social, cores_val2, valencia, positivo, negativo, neutro)
```

#### Total comment count: Emotions

```
graph_sentimentos_total(rede_social, cores, sentimentos, alegria, surpresa, tristeza, medo, raiva, desgosto, neutro)
```

#### Comments by date: Valence

```
graph_valencia_por_data(rede_social, cores_val, valencia_dict, comentarios)
```

#### Comments by date: Emotions

```
graph_emocoes_por_data(rede_social, cores2, sentimentos_dict, comentarios)
```

### Emotions for selected hashtags: Twitter Hashtags

#### EleNão

```
hashtag_c = [modelo.hashtag == 'EleNão']

total_comentarios = modelo.select() \
    .where(reduce(operator.and_, default_clause + hashtag_c)) \
    .count()

comentarios_positivos = modelo.select() \
    .where(reduce(operator.and_, default_clause + positivo_clause + hashtag_c)) \
    .order_by(modelo.timestamp)

comentarios_negativos = modelo.select() \
    .where(reduce(operator.and_, default_clause + negativo_clause + hashtag_c)) \
    .order_by(modelo.timestamp)

comentarios_neutros = modelo.select() \
    .where(reduce(operator.and_, default_clause + neutro_clause + hashtag_c)) \
    .order_by(modelo.timestamp)

comentarios = modelo.select() \
    .where(reduce(operator.and_, general + hashtag_c)) \
    .order_by(modelo.timestamp)

alegria, surpresa, tristeza, medo, raiva, desgosto, positivo, negativo, neutro = load_emocoes_comentarios(comentarios_positivos, comentarios_negativos, comentarios_neutros)
print_statistics(rede_social, total_comentarios, comentarios_positivos, comentarios_negativos, comentarios_neutros)
```

##### Total comment count: Valence

```
graph_valence_total(rede_social, cores_val2, valencia, positivo, negativo, neutro)
```

##### Total comment count: Emotions

```
graph_sentimentos_total(rede_social, cores, sentimentos, alegria, surpresa, tristeza, medo, raiva, desgosto, neutro)
```

##### Comments by date: Valence

```
graph_valencia_por_data(rede_social, cores_val, valencia_dict, comentarios)
```

##### Comments by date: Emotions

```
graph_emocoes_por_data(rede_social, cores2, sentimentos_dict, comentarios)
```

#### elesimeno1turno

```
hashtag_c = [modelo.hashtag == 'elesimeno1turno']

total_comentarios = modelo.select() \
    .where(reduce(operator.and_, default_clause + hashtag_c)) \
    .count()

comentarios_positivos = modelo.select() \
    .where(reduce(operator.and_, default_clause + positivo_clause + hashtag_c)) \
    .order_by(modelo.timestamp)

comentarios_negativos = modelo.select() \
    .where(reduce(operator.and_, default_clause + negativo_clause + hashtag_c)) \
    .order_by(modelo.timestamp)

comentarios_neutros = modelo.select() \
    .where(reduce(operator.and_, default_clause + neutro_clause + hashtag_c)) \
    .order_by(modelo.timestamp)

comentarios = modelo.select() \
    .where(reduce(operator.and_, general + hashtag_c)) \
    .order_by(modelo.timestamp)

alegria, surpresa, tristeza, medo, raiva, desgosto, positivo, negativo, neutro = load_emocoes_comentarios(comentarios_positivos, comentarios_negativos, comentarios_neutros)
print_statistics(rede_social, total_comentarios, comentarios_positivos, comentarios_negativos, comentarios_neutros)
```

##### Total comment count: Valence

```
graph_valence_total(rede_social, cores_val2, valencia, positivo, negativo, neutro)
```

##### Total comment count: Emotions

```
graph_sentimentos_total(rede_social, cores, sentimentos, alegria, surpresa, tristeza, medo, raiva, desgosto, neutro)
```

##### Comments by date: Valence

```
graph_valencia_por_data(rede_social, cores_val, valencia_dict, comentarios)
```

##### Comments by date: Emotions

```
graph_emocoes_por_data(rede_social, cores2, sentimentos_dict, comentarios)
```

#### ViraViraCiro

```
hashtag_c = [modelo.hashtag == 'ViraViraCiro']

total_comentarios = modelo.select() \
    .where(reduce(operator.and_, default_clause + hashtag_c)) \
    .count()

comentarios_positivos = modelo.select() \
    .where(reduce(operator.and_, default_clause + positivo_clause + hashtag_c)) \
    .order_by(modelo.timestamp)

comentarios_negativos = modelo.select() \
    .where(reduce(operator.and_, default_clause + negativo_clause + hashtag_c)) \
    .order_by(modelo.timestamp)

comentarios_neutros = modelo.select() \
    .where(reduce(operator.and_, default_clause + neutro_clause + hashtag_c)) \
    .order_by(modelo.timestamp)

comentarios = modelo.select() \
    .where(reduce(operator.and_, general + hashtag_c)) \
    .order_by(modelo.timestamp)

alegria, surpresa, tristeza, medo, raiva, desgosto, positivo, negativo, neutro = load_emocoes_comentarios(comentarios_positivos, comentarios_negativos, comentarios_neutros)
print_statistics(rede_social, total_comentarios, comentarios_positivos, comentarios_negativos, comentarios_neutros)
```

##### Total comment count: Valence

```
graph_valence_total(rede_social, cores_val2, valencia, positivo, negativo, neutro)
```

##### Total comment count: Emotions

```
graph_sentimentos_total(rede_social, cores, sentimentos, alegria, surpresa, tristeza, medo, raiva, desgosto, neutro)
```

##### Comments by date: Valence

```
graph_valencia_por_data(rede_social, cores_val, valencia_dict, comentarios)
```

##### Comments by date: Emotions

```
graph_emocoes_por_data(rede_social, cores2, sentimentos_dict, comentarios)
```

#### FicaTemer

```
hashtag_c = [modelo.hashtag == 'FicaTemer']

total_comentarios = modelo.select() \
    .where(reduce(operator.and_, default_clause + hashtag_c)) \
    .count()

comentarios_positivos = modelo.select() \
    .where(reduce(operator.and_, default_clause + positivo_clause + hashtag_c)) \
    .order_by(modelo.timestamp)

comentarios_negativos = modelo.select() \
    .where(reduce(operator.and_, default_clause + negativo_clause + hashtag_c)) \
    .order_by(modelo.timestamp)

comentarios_neutros = modelo.select() \
    .where(reduce(operator.and_, default_clause + neutro_clause + hashtag_c)) \
    .order_by(modelo.timestamp)

comentarios = modelo.select() \
    .where(reduce(operator.and_, general + hashtag_c)) \
    .order_by(modelo.timestamp)

alegria, surpresa, tristeza, medo, raiva, desgosto, positivo, negativo, neutro = load_emocoes_comentarios(comentarios_positivos, comentarios_negativos, comentarios_neutros)
print_statistics(rede_social, total_comentarios, comentarios_positivos, comentarios_negativos, comentarios_neutros)
```

##### Total comment count: Valence

```
graph_valence_total(rede_social, cores_val2, valencia, positivo, negativo, neutro)
```

##### Total comment count: Emotions

```
graph_sentimentos_total(rede_social, cores, sentimentos, alegria, surpresa, tristeza, medo, raiva, desgosto, neutro)
```

##### Comments by date: Valence

```
graph_valencia_por_data(rede_social, cores_val, valencia_dict, comentarios)
```

##### Comments by date: Emotions

```
graph_emocoes_por_data(rede_social, cores2, sentimentos_dict, comentarios)
```

#### MarketeirosDoJair

```
hashtag_c = [modelo.hashtag == 'MarketeirosDoJair']

total_comentarios = modelo.select() \
    .where(reduce(operator.and_, default_clause + hashtag_c)) \
    .count()

comentarios_positivos = modelo.select() \
    .where(reduce(operator.and_, default_clause + positivo_clause + hashtag_c)) \
    .order_by(modelo.timestamp)

comentarios_negativos = modelo.select() \
    .where(reduce(operator.and_, default_clause + negativo_clause + hashtag_c)) \
    .order_by(modelo.timestamp)

comentarios_neutros = modelo.select() \
    .where(reduce(operator.and_, default_clause + neutro_clause + hashtag_c)) \
    .order_by(modelo.timestamp)

comentarios = modelo.select() \
    .where(reduce(operator.and_, general + hashtag_c)) \
    .order_by(modelo.timestamp)

alegria, surpresa, tristeza, medo, raiva, desgosto, positivo, negativo, neutro = load_emocoes_comentarios(comentarios_positivos, comentarios_negativos, comentarios_neutros)
print_statistics(rede_social, total_comentarios, comentarios_positivos, comentarios_negativos, comentarios_neutros)
```

##### Total comment count: Valence

```
graph_valence_total(rede_social, cores_val2, valencia, positivo, negativo, neutro)
```

##### Total comment count: Emotions

```
graph_sentimentos_total(rede_social, cores, sentimentos, alegria, surpresa, tristeza, medo, raiva, desgosto, neutro)
```

##### Comments by date: Valence

```
graph_valencia_por_data(rede_social, cores_val, valencia_dict, comentarios)
```

##### Comments by date: Emotions

```
graph_emocoes_por_data(rede_social, cores2, sentimentos_dict, comentarios)
```

#### ViraVotoHaddad13

```
hashtag_c = [modelo.hashtag == 'ViraVotoHaddad13']

total_comentarios = modelo.select() \
    .where(reduce(operator.and_, default_clause + hashtag_c)) \
    .count()

comentarios_positivos = modelo.select() \
    .where(reduce(operator.and_, default_clause + positivo_clause + hashtag_c)) \
    .order_by(modelo.timestamp)

comentarios_negativos = modelo.select() \
    .where(reduce(operator.and_, default_clause + negativo_clause + hashtag_c)) \
    .order_by(modelo.timestamp)

comentarios_neutros = modelo.select() \
    .where(reduce(operator.and_, default_clause + neutro_clause + hashtag_c)) \
    .order_by(modelo.timestamp)

comentarios = modelo.select() \
    .where(reduce(operator.and_, general + hashtag_c)) \
    .order_by(modelo.timestamp)

alegria, surpresa, tristeza, medo, raiva, desgosto, positivo, negativo, neutro = load_emocoes_comentarios(comentarios_positivos, comentarios_negativos, comentarios_neutros)
print_statistics(rede_social, total_comentarios, comentarios_positivos, comentarios_negativos, comentarios_neutros)
```

##### Total comment count: Valence

```
graph_valence_total(rede_social, cores_val2, valencia, positivo, negativo, neutro)
```

##### Total comment count: Emotions

```
graph_sentimentos_total(rede_social, cores, sentimentos, alegria, surpresa, tristeza, medo, raiva, desgosto, neutro)
```

##### Comments by date: Valence

```
graph_valencia_por_data(rede_social, cores_val, valencia_dict, comentarios)
```

##### Comments by date: Emotions

```
graph_emocoes_por_data(rede_social, cores2, sentimentos_dict, comentarios)
```
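Each hashtag section above rebuilds the same five queries by concatenating clause lists. That composition step can be factored out and checked on its own; here is a sketch with plain strings standing in for the peewee `SQL` clauses (the real code would pass the resulting list to `reduce(operator.and_, ...)` exactly as above — `build_clauses` is a hypothetical helper, not part of the notebook's modules):

```python
def build_clauses(base, extra=None, hashtag=None):
    """Compose a query's clause list: base filters + sentiment filters + an optional hashtag filter."""
    clauses = list(base) + list(extra or [])
    if hashtag is not None:
        # with peewee this would be the expression `modelo.hashtag == hashtag`
        clauses.append(f"hashtag = '{hashtag}'")
    return clauses

default_clause = ["length(clean_comment) > 0"]
positivo_clause = ['emotion in ("ALEGRIA", "SURPRESA") AND valence = "POSITIVO"']

clauses = build_clauses(default_clause, positivo_clause, hashtag="EleNão")
print(clauses)  # base filter, sentiment filter, then the hashtag filter
```

Looping such a helper over the hashtag list would collapse the six near-identical cells into one.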
```
import os
import re
import time
import string
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from pylab import rcParams

from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer
from sklearn.model_selection import train_test_split, KFold
from sklearn.metrics import classification_report, accuracy_score, precision_score, recall_score, confusion_matrix, roc_auc_score
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier, ExtraTreesClassifier
from sklearn.svm import SVC
from sklearn.linear_model import LogisticRegression, LogisticRegressionCV

from catboost import CatBoostClassifier
from xgboost import XGBClassifier
import lightgbm as lgb

from textblob import TextBlob
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer

import nltk
from nltk.corpus import stopwords
from nltk import PorterStemmer
# first time using nltk:
# nltk.download('stopwords')

splits = 10
stop_words = stopwords.words('english')

def clean_text(txt):
    txt = txt.lower()
    txt = re.sub(r"(http\S+|http)", "", txt)  # remove links
    txt = re.sub('[^a-zA-Z ]+', '', txt)  # only allow letters
    # stem & remove stop words
    txt = ' '.join([PorterStemmer().stem(word=word) for word in txt.split(" ") if word not in stop_words])
    return txt

def print_model_performance(target, predicted):
    print('outcome of training')
    print(classification_report(target, predicted))  # full report
    print('test average accuracy ', accuracy_score(target, predicted))
    print(confusion_matrix(target, predicted))

def sentiment_analyzer_scores(text, threshold=0.05, engl=True):
    analyser = SentimentIntensityAnalyzer()
    if engl:
        trans = text
    else:
        # `translator` must be defined externally (e.g. googletrans) for non-English text
        trans = translator.translate(text).text
    score = analyser.polarity_scores(trans)
    lb = score['compound']
    if lb >= threshold:
        return 1
    elif (lb > -threshold) and (lb < threshold):
        return 0
    else:
        return -1

def train_test_split_features(train, test, train_feature, target_feature, vectorise):
    y_train = train[target_feature]
    X_train = train[train_feature]
    y_test = test[target_feature]
    X_test = test[train_feature]
    feature_names = []
    if vectorise:
        vect = TfidfVectorizer(min_df=5, ngram_range=(1, 4))  # create TF-IDF vectorizer
        X_train = vect.fit(X_train).transform(X_train)  # transform text_train into a vector
        X_test = vect.transform(X_test)
        feature_names = vect.get_feature_names()  # all terms used by the vectorizer
    return X_train, X_test, y_train, y_test, feature_names

# get this working a bit better later
def pull_data(dataset, test_run=False):
    data_df = []
    if dataset == 'redit_data':
        dataset_location = "datasets/Twitter and Reddit Sentimental analysis Dataset/Twitter_Data.csv"
        text_variable = 'clean_text'
        target_feature = 'category'
        data_df = pd.read_csv(dataset_location)
    if dataset == 'financial':
        text_variable = 'clean_text'
        target_feature = 'category'
        dataset_location = "datasets/Sentiment Analysis for Financial News/all-data.csv"
        data_df = pd.read_csv(dataset_location, encoding='ISO-8859-1', header='infer')
        data_df.columns = [target_feature, text_variable]
    if dataset == 'us_airline':
        target_feature = 'airline_sentiment'
        text_variable = 'text'
        dataset_location = 'datasets/Twitter US Airline Sentiment/Tweets.csv'
        data_df = pd.read_csv(dataset_location)
        data_df = data_df[[text_variable, target_feature]]

    data_df.columns = ['clean_text', 'target']
    data_df['clean_text'] = data_df['clean_text'].astype(str)
    data_df['clean_text'] = data_df['clean_text'].apply(clean_text)
    data_df['target'] = pd.factorize(data_df['target'])[0]
    if test_run:
        data_df, _, _ = quick_run(data_df)
    data_df = data_df.dropna()
    data_df = data_df.reset_index(drop=True)
    return data_df

# a list of all models used
def all_models():
    # Using the recommended classifiers from https://arxiv.org/abs/1708.05070
    GBC = GradientBoostingClassifier(n_estimators=100, learning_rate=1.0, max_depth=1, random_state=0)
    RFC = RandomForestClassifier(n_estimators=500, max_features=0.25, criterion="entropy")
    SVM = SVC(C=0.01, gamma=0.1, kernel="poly", degree=3, coef0=10.0)
    ETC = ExtraTreesClassifier(n_estimators=1000, max_features="log2", criterion="entropy")
    LR = LogisticRegression(C=1.5, fit_intercept=True)
    # Models not included in the paper and not from sklearn
    XGC = XGBClassifier()
    CBC = CatBoostClassifier(silent=True)
    light_gb = lgb.LGBMClassifier()

    models = [(LR, "linear_regression"), (ETC, "Extra_tree_classifier"), (SVM, "support_vector_classifier"),
              (RFC, "random_forest_classifier"), (GBC, "gradient_boosted_classifier"),
              (XGC, "XGBoost"), (light_gb, "Light_GBM"), (CBC, "catboost_classifier")]
    # this subset was selected due to runtime
    models = [(LR, "linear_regression"), (GBC, "gradient_boosted_classifier"), (XGC, "XGBoost"), (light_gb, "Light_GBM")]
    return models

# Run the relevant machine learning model
def run_features(df, model, splits, features='clean_text', vectorise=True, predict_probability=False):
    cv = KFold(n_splits=splits, shuffle=False)  # random_state is only meaningful when shuffle=True
    full_prediction = []
    for train_index, test_index in cv.split(df):
        train, test = df.loc[train_index], df.loc[test_index]
        X_train, X_test, y_train, y_test, feature_names = train_test_split_features(train, test, features, 'target', vectorise)
        model.fit(X_train, y_train)
        if predict_probability:
            prediction = model.predict_proba(X_test)
        else:
            prediction = model.predict(X_test)
        full_prediction.append(prediction)
    predictions = []
    for set_of_prediction in full_prediction:
        for predicted in set_of_prediction:
            predictions.append(predicted)
    return predictions

# a quick way to run through a dataset to confirm that everything works
def quick_run(df):
    train, test = train_test_split(df, test_size=0.99)
    train, validation = train_test_split(train, test_size=0.125)
    return train, validation, test

# current_datasets = ['redit_data', 'financial', 'us_airline']
data_df=pull_data('us_airline',test_run=False) predict_probability=True model_predicted_names=[] models=all_models() df_copy=data_df.copy() for model, name in models: print(name) start_time = time.time() predictions=run_features(df_copy,model,splits,predict_probability=predict_probability) print("--- %s seconds ---" % (time.time() - start_time)) predicted_name=name+'_prediction' negative_prob=[] neutral_prob=[] positive_prob=[] if (predict_probability): for neg, neut,pos in predictions: negative_prob.append(neg) neutral_prob.append(neut) positive_prob.append(pos) neg_predictions=predicted_name+'_neg' neut_predictions=predicted_name+'_neut' pos_predictions=predicted_name+'_pos' df_copy[neg_predictions]=negative_prob df_copy[neut_predictions]=neutral_prob df_copy[pos_predictions]=positive_prob model_predicted_names.append(neg_predictions) model_predicted_names.append(neut_predictions) model_predicted_names.append(pos_predictions) negative_prob.clear() neutral_prob.clear() positive_prob.clear() else: df_copy[predicted_name]=predictions model_predicted_names.append(predicted_name) print_model_performance(df_copy['target'],predictions) predictions.clear() df_copy['polarity']=df_copy['clean_text'].apply(lambda text: TextBlob(text).sentiment.polarity) df_copy['subjectivity']=df_copy['clean_text'].apply(lambda text: TextBlob(text).sentiment.subjectivity) df_copy['vader_sentiment']=df_copy['clean_text'].apply(lambda tweet: sentiment_analyzer_scores(tweet)) model_predicted_names.append('polarity') model_predicted_names.append('subjectivity') model_predicted_names.append('vader_sentiment') model = RandomForestClassifier() predictions=run_features(df_copy,model,splits, features=model_predicted_names,vectorise=False) print_model_performance(df_copy['target'],predictions) df_copy model_predicted_names ```
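The three-way cutoff used inside `sentiment_analyzer_scores` can be isolated into a pure function and checked independently of VADER (±0.05 on the compound score is the conventional neutral band):

```python
def label_compound(lb: float, threshold: float = 0.05) -> int:
    """Map a VADER-style compound score in [-1, 1] to -1/0/+1."""
    if lb >= threshold:
        return 1        # positive
    elif -threshold < lb < threshold:
        return 0        # neutral
    else:
        return -1       # negative

print([label_compound(s) for s in (0.6, 0.0, -0.6)])  # [1, 0, -1]
```

Note the boundary behaviour: exactly `+threshold` is labelled positive and exactly `-threshold` negative, matching the inequalities in the function above.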
# Process metadata This notebook checks each experiment id is associated with gene expression data, via its run id, and returns a clean list of experiment ids that have gene expression data. ``` %load_ext autoreload %autoreload 2 import os import sys import glob import pandas as pd import numpy as np import random import warnings warnings.filterwarnings(action='ignore') sys.path.append("../../") from functions import utils from numpy.random import seed randomState = 123 seed(randomState) # Read in config variables config_file = os.path.abspath(os.path.join(os.getcwd(),"../../configs", "config_Human_experiment.tsv")) params = utils.read_config(config_file) # Load parameters dataset_name = params["dataset_name"] # Input files # base dir on repo base_dir = os.path.abspath(os.path.join(os.getcwd(),"../..")) mapping_file = os.path.join( base_dir, dataset_name, "data", "metadata", "recount2_metadata.tsv") normalized_data_file = os.path.join( base_dir, dataset_name, "data", "input", "recount2_gene_normalized_data.tsv.xz") # Output file experiment_id_file = os.path.join( base_dir, dataset_name, "data", "metadata", "recount2_experiment_ids.txt") ``` ### Get experiment ids ``` # Read in metadata metadata = pd.read_table( mapping_file, header=0, sep='\t', index_col=0) metadata.head() map_experiment_sample = metadata[['run']] map_experiment_sample.head() experiment_ids = np.unique(np.array(map_experiment_sample.index)).tolist() print("There are {} experiments in the compendium".format(len(experiment_ids))) ``` ### Get sample ids from gene expression data ``` normalized_data = pd.read_table( normalized_data_file, header=0, sep='\t', index_col=0).T normalized_data.head() sample_ids_with_gene_expression = list(normalized_data.index) ``` ### Get samples belonging to selected experiment ``` experiment_ids_with_gene_expression = [] for experiment_id in experiment_ids: # Some project id values are descriptions # We will skip these if len(experiment_id) == 9: print(experiment_id) 
        selected_metadata = metadata.loc[experiment_id]
        #print("There are {} samples in experiment {}".format(selected_metadata.shape[0], experiment_id))
        sample_ids = list(selected_metadata['run'])
        if any(x in sample_ids_with_gene_expression for x in sample_ids):
            experiment_ids_with_gene_expression.append(experiment_id)

print('There are {} experiments with gene expression data'.format(len(experiment_ids_with_gene_expression)))

experiment_ids_with_gene_expression_df = pd.DataFrame(experiment_ids_with_gene_expression, columns=['experiment_id'])
experiment_ids_with_gene_expression_df.head()

# Save experiment ids with gene expression data
experiment_ids_with_gene_expression_df.to_csv(experiment_id_file, sep='\t')
```
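The loop above keeps an experiment whenever any of its run ids appears in the expression matrix. That logic, reduced to a self-contained sketch (the function name and the sample ids are illustrative, not from the recount2 data):

```python
def experiments_with_expression(run_map, samples_with_expression):
    """Keep experiment ids whose run ids overlap the expression data,
    mirroring the filtering loop above (names here are illustrative)."""
    keep = []
    for experiment_id, runs in run_map.items():
        # an experiment qualifies if ANY of its runs has expression data
        if any(run in samples_with_expression for run in runs):
            keep.append(experiment_id)
    return keep
```

For example, an experiment with runs `['SRR001', 'SRR002']` is kept as soon as `SRR002` shows up in the expression matrix, even if `SRR001` does not.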
# Setup

**Make sure to read the instructions carefully!**

If you have other resources used in the Blender project and chose to *make all paths relative*, pack all of them into a zip archive. Alternatively, you can *pack all external files*.

* `blender_version` : Version of blender used to render the scene. Only supports 2.8x
* `blend_file_path` : Path to the blend file after unpacking the zip archive. If blend file is used, this is automatically ignored.
___
* `upload_type` : Select the type of upload method. `gdrive_relative` pulls everything from the folder specified.
* `drive_path` : Path to your blend/zip file relative to the root of your Google Drive if `google_drive` is selected. Must state the file and its extension (.zip/.blend) **unless** `gdrive_relative` is selected.
* `url_blend` : Specify the URL to the blend/zip file if `url` is selected.
___
* `animation` : Specify whether an animation or a still image is rendered. If a **still image** is used, put the frame number in `start_frame`.
* `start_frame, end_frame` : Specify the start and end frame for animation. You may put the same value (such as zero) for both inputs to use the default frame range in the blend file.
___
* `download_type` : Select the type of download method. `gdrive_direct` enables the frames to be outputted directly to Google Drive (zipping will be disabled).
* `output_name` : Name of the output frames, **do NOT include .blend!** (## for frame number)
* `zip_files` : Archive multiple animation frames automatically into a zip file.
* `drive_output_path` : Path to your frames/zip file in Google Drive.
___
* `gpu_enabled, cpu_enabled` : Toggle GPU and CPU for rendering. CPU might give a slight boost in rendering time, but this varies by project.

After you are done, go to Runtime > Run All (Ctrl + F9) and upload your files or have Google Drive authorised below. See the [GitHub repo](https://github.com/syn73/blender-colab) for more info.
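The frame-range rules described above (a still image renders the single frame in `start_frame`; equal start and end values fall back to the range saved in the blend file) can be sketched as the CLI flags the render cell later passes to Blender. `blender_frame_args` is an illustrative helper, not part of the notebook:

```python
def blender_frame_args(animation, start_frame, end_frame):
    """Return the Blender CLI frame flags implied by the settings above
    (helper name is illustrative, not part of the notebook)."""
    if not animation:
        # still image: render the single frame given by start_frame
        return ['-f', str(start_frame)]
    if start_frame == end_frame:
        # same value for both: use the frame range stored in the .blend file
        return ['-a']
    # explicit range: set start (-s) and end (-e) before rendering all (-a)
    return ['-s', str(start_frame), '-e', str(end_frame), '-a']
```

Note that Blender parses `-s`/`-e` positionally, which is why they must precede `-a`, as in the render cell below.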
```
#@markdown #**Check GPU type**
#@markdown ### Factory reset runtime if you don't have the desired GPU.
#@markdown ---
#@markdown V100 = Very Very fast rendering (*Available only for Colab Pro users*)
#@markdown P100 = Faster rendering
#@markdown T4 = Fast rendering
#@markdown K80 = 2x Slower rendering compared to T4
#@markdown P4 = Very slow, not recommended
#@markdown ---
!nvidia-smi -L

#@title Now choose your rendering options. You don't really need to change the file path; just choose your upload and download type, run the cell, then go to the next cell to upload your file. If you choose direct upload, you can directly upload your file in the next cell. If your project is an animation, don't forget to enable "animation", otherwise it will render only a single frame
blender_version = '3.0.0-alpha-cycles-x' #@param ['1.79', '2.79', '2.79a', '2.79b', '2.80rc3', '2.81a', '2.82a', '2.83.16', '2.90.0', '2.90.1', '2.91.0', '2.91.2', '2.92.0', '2.93.0', '2.93.1', '2.93.2', '2.93.3', '2.93.4', '3.0.0-alpha', '3.0.0-alpha-cycles-x'] {allow-input: false}
blend_file_path = 'path/to/file.blend' #@param {type: 'string'}
#@markdown ---
upload_type = 'direct' #@param ['direct', 'google_drive', 'url', 'gdrive_relative'] {allow-input: false}
drive_path = 'path/to/blend.zip' #@param {type: 'string'}
url_blend = '' #@param {type: 'string'}
#@markdown ---
animation = False #@param {type: 'boolean'}
start_frame = 1 #@param {type: 'integer'}
end_frame = 250 #@param {type: 'integer'}
#@markdown ---
download_type = 'direct' #@param ['direct', 'google_drive', 'gdrive_direct'] {allow-input: false}
output_name = 'blender-##' #@param {type: 'string'}
zip_files = True #@param {type: 'boolean'}
drive_output_path = 'blender/output' #@param {type: 'string'}
#@markdown ---
gpu_enabled = True #@param {type:"boolean"}
cpu_enabled = False #@param {type:"boolean"}

#@title Run this cell to upload your file with the selected method
import os
import shutil
from google.colab import files, drive
uploaded_filename = ""

if upload_type == 'google_drive' or upload_type == 'gdrive_relative' or download_type == 'google_drive' or download_type == 'gdrive_direct':
    drive.mount('/drive')

if upload_type == 'direct':
    uploaded = files.upload()
    for fn in uploaded.keys():
        uploaded_filename = fn
elif upload_type == 'url':
    !wget -nc $url_blend
    uploaded_filename = os.path.basename(url_blend)
elif upload_type == 'google_drive':
    shutil.copy('/drive/My Drive/' + drive_path, '.')
    uploaded_filename = os.path.basename(drive_path)

!rm -r render
!mkdir render
if upload_type == 'gdrive_relative':
    if not drive_path.endswith('/'):
        drive_path += '/'
    !cp -r '/drive/My Drive/{drive_path}.' 'render/'
elif uploaded_filename.lower().endswith('.zip'):
    !unzip -o $uploaded_filename -d 'render/'
elif uploaded_filename.lower().endswith('.blend'):
    shutil.copy(uploaded_filename, 'render/')
    blend_file_path = uploaded_filename
else:
    raise SystemExit("Invalid file extension, only .blend and .zip can be uploaded.")

#@title Download blender
blender_url_dict = {'1.79'    : "https://download.blender.org/release/Blender1.73/blender1.73_Linux_i386_libc5-static.tar.gz",
                    '2.79'    : "https://download.blender.org/release/Blender2.79/blender-2.79-linux-glibc219-x86_64.tar.bz2",
                    '2.79a'   : "https://download.blender.org/release/Blender2.79/blender-2.79a-linux-glibc219-x86_64.tar.bz2",
                    '2.79b'   : "https://download.blender.org/release/Blender2.79/blender-2.79b-linux-glibc219-x86_64.tar.bz2",
                    '2.80rc3' : "https://download.blender.org/release/Blender2.80/blender-2.80rc3-linux-glibc224-i686.tar.bz2",
                    '2.81a'   : "https://download.blender.org/release/Blender2.81/blender-2.81a-linux-glibc217-x86_64.tar.bz2",
                    '2.82a'   : "https://download.blender.org/release/Blender2.82/blender-2.82a-linux64.tar.xz",
                    '2.83.16' : "https://download.blender.org/release/Blender2.83/blender-2.83.16-linux-x64.tar.xz",
                    '2.90.0'  : "https://download.blender.org/release/Blender2.90/blender-2.90.0-linux64.tar.xz",
                    '2.90.1'  :
"https://download.blender.org/release/Blender2.90/blender-2.90.1-linux64.tar.xz",
                    '2.91.0'  : "https://download.blender.org/release/Blender2.91/blender-2.91.0-linux64.tar.xz",
                    '2.91.2'  : "https://download.blender.org/release/Blender2.91/blender-2.91.2-linux64.tar.xz",
                    '2.92.0'  : "https://download.blender.org/release/Blender2.92/blender-2.92.0-linux64.tar.xz",
                    '2.93.0'  : "https://download.blender.org/release/Blender2.93/blender-2.93.0-linux-x64.tar.xz",
                    '2.93.1'  : "https://download.blender.org/release/Blender2.93/blender-2.93.1-linux-x64.tar.xz",
                    '2.93.2'  : "https://download.blender.org/release/Blender2.93/blender-2.93.2-linux-x64.tar.xz",
                    '2.93.3'  : "https://download.blender.org/release/Blender2.93/blender-2.93.3-linux-x64.tar.xz",
                    '2.93.4'  : "https://download.blender.org/release/Blender2.93/blender-2.93.4-linux-x64.tar.xz",
                    '3.0.0-alpha' : "https://builder.blender.org/download/daily/blender-3.0.0-alpha+master.a3027fb09416-linux.x86_64-release.tar.xz",
                    '3.0.0-alpha-cycles-x' : "https://builder.blender.org/download/experimental/blender-3.0.0-alpha+cycles-x.342cdb03ee06-linux.x86_64-release.tar.xz"}

blender_url = blender_url_dict[blender_version]
base_url = os.path.basename(blender_url)

!mkdir $blender_version
!wget -nc $blender_url
!tar -xkf $base_url -C ./$blender_version --strip-components=1

#@title Enable GPU rendering (or add custom properties by double clicking here)
data = "import re\n"+\
    "import bpy\n"+\
    "scene = bpy.context.scene\n"+\
    "scene.cycles.device = 'GPU'\n"+\
    "prefs = bpy.context.preferences\n"+\
    "prefs.addons['cycles'].preferences.get_devices()\n"+\
    "cprefs = prefs.addons['cycles'].preferences\n"+\
    "print(cprefs)\n"+\
    "for compute_device_type in ('CUDA', 'OPENCL', 'NONE'):\n"+\
    "    try:\n"+\
    "        cprefs.compute_device_type = compute_device_type\n"+\
    "        print('Device found:',compute_device_type)\n"+\
    "        break\n"+\
    "    except TypeError:\n"+\
    "        pass\n"+\
    "for device in cprefs.devices:\n"+\
    "    if not re.match('intel', device.name, re.I):\n"+\
    "        print('Activating',device)\n"+\
    "        device.use = "+str(gpu_enabled)+"\n"+\
    "    else:\n"+\
    "        device.use = "+str(cpu_enabled)+"\n"
with open('setgpu.py', 'w') as f:
    f.write(data)

#@title Start rendering!
!rm -r output
!mkdir output
if not drive_output_path.endswith('/'):
    drive_output_path += '/'
if download_type != 'gdrive_direct':
    output_path = 'output/' + output_name
else:
    output_path = '/drive/My Drive/' + drive_output_path + output_name
if animation:
    if start_frame == end_frame:
        !sudo ./$blender_version/blender -b 'render/{blend_file_path}' -P setgpu.py -E CYCLES -o '{output_path}' -noaudio -a
    else:
        !sudo ./$blender_version/blender -b 'render/{blend_file_path}' -P setgpu.py -E CYCLES -o '{output_path}' -noaudio -s $start_frame -e $end_frame -a
else:
    !sudo ./$blender_version/blender -b 'render/{blend_file_path}' -P setgpu.py -E CYCLES -o '{output_path}' -noaudio -f $start_frame

#@title Download the result with the selected method
path, dirs, files_folder = next(os.walk("output"))
output_folder_name = output_name.replace('#', '') + 'render'
if download_type == 'gdrive_direct':
    pass
elif len(files_folder) == 1:
    render_img = 'output/' + files_folder[0]
    if download_type == 'direct':
        files.download('output/' + files_folder[0])
    else:
        shutil.copy('/content/' + render_img, '/drive/My Drive/' + drive_output_path)
elif len(files_folder) > 1:
    if zip_files:
        shutil.make_archive(output_folder_name, 'zip', 'output')
        if download_type == 'direct':
            files.download(output_folder_name + '.zip')
        else:
            shutil.copy('/content/' + output_folder_name + ".zip", '/drive/My Drive/' + drive_output_path)
    elif download_type == 'direct':
        for f in files_folder:
            files.download('output/{}'.format(f))
    # Drive, no zip
    else:
        for f in files_folder:
            shutil.copy("/content/output/" + f, '/drive/My Drive/' + drive_output_path + f)
else:
    raise SystemExit("No frames are rendered.")
```
```
# alpha3 version: adds random fluctuation to the transmission matrix;
# the "total" function controls the training time, the result storage folder,
# and the number of predicted results

# First, how to recover an accidentally deleted cell:
# Scenario: you cut or deleted a cell (and have not closed the notebook window yet).
# Fix: press Esc to enter command mode, then press z. Do not press Ctrl+z
# (that only undoes ordinary edits inside a cell, not a deleted cell).
# Telling command mode from edit mode:
# Command mode: the cell's left border is blue.

# torch.nn.functional.tanh should now be deprecated on master, since tensors
# and variables have been merged.
# If you deprecate nn.functional.tanh I could do
# output = nn.Tanh()(input)

import h5py  # import packages
import numpy as np
import os
import torch
import torch.nn as nn
from torch import autograd
import torch.nn.functional as F
import time
import math
from PIL import Image, ImageFilter
import matplotlib.pyplot as plt   # plt is used to display images
import matplotlib.image as mpimg  # mpimg is used to read images
from torch.nn.parameter import Parameter
from torch.nn import init
import zipfile
import torch.optim as optim

class Scattering_system_simulation:  # python class, input shape
    def __init__(self, Win, Wout, device, sparseValue, I_random_range, proportionSigma):
        self.Matrix_R = self.sparseMatrixGenerate(Win, Wout, device, sparseValue)
        self.Matrix_I = self.sparseMatrixGenerate(Win, Wout, device, sparseValue)
        self.Win = Win
        self.Wout = Wout
        self.device = device
        self.I_random_range = I_random_range
        self.sigmaBasicProportion = 0.5/(Win*Win)
        self.proportionSigma = proportionSigma

    def sparseMatrixGenerate(self, Win, Wout, device, sparseValue):
        _sparseValue = torch.tensor([sparseValue], dtype=torch.float32)
        Matrix = torch.rand(Win*Win, Wout*Wout)
        # sparsity mask; torch.rand is uniform on [0, 1)
        _sparseMatrix = torch.rand(Win*Win, Wout*Wout)
        _sparseMatrix = torch.where(_sparseMatrix > _sparseValue, torch.tensor([1.0]), torch.tensor([0.0]))
        Matrix = _sparseMatrix*Matrix
        _div = (Matrix).sum(dim=0, keepdim=True)+1e-6  # add a small constant to avoid division by zero
        Matrix = Matrix/_div
        Matrix = Matrix.to(device)
        return Matrix

    def I_rand_Generation_everyRand(self):
        return torch.randn(1, self.Win*self.Win, device=self.device)*(self.I_random_range)+1

    # def I_rand_Generation_Fixed():
    # def I_rand_Generation_inputCorrelation():

    def generate(self, input):  # input.shape = (Win, Win)
        with torch.no_grad():
input_R=input.view(1,self.Win*self.Win) input_I=self.I_rand_Generation_everyRand() tempMatrix_R = self.Matrix_R tempMatrix_I = self.Matrix_I R1 = torch.matmul(input_R,tempMatrix_R) I1 = torch.matmul(input_I,tempMatrix_R) I2 = torch.matmul(input_R,tempMatrix_I) R2 = torch.matmul(input_I,tempMatrix_I) # return torch.nn.functional.sigmoid(torch.sqrt(torch.pow((R1-R2),2)+torch.pow((I1+I2),2)).view(self.Wout,self.Wout)) return torch.sqrt(torch.pow((R1-R2),2)+torch.pow((I1+I2),2)).view(self.Wout,self.Wout) import matplotlib.pyplot as plt # plt 用于显示图片 import matplotlib.image as mpimg # mpimg 用于读取图片 def show_original_and_speckle(index,samples,labels): plt.figure() plt.subplot(1,2,1) plt.imshow(samples[index][0].cpu(),cmap='gray') plt.subplot(1,2,2) plt.imshow(labels[index][0].cpu(),cmap='gray') plt.show() def show_test_original_and_speckle(index,testSamples,testLabels): plt.figure() plt.subplot(1,2,1) plt.imshow(testSamples[index][0].cpu(),cmap='gray') plt.subplot(1,2,2) plt.imshow(testLabels[index][0].cpu(),cmap='gray') plt.show() #创建Dataset子类 import torch.utils.data.dataloader as DataLoader import torch.utils.data.dataset as Dataset class subDataset(Dataset.Dataset): #初始化,定义数据内容和标签 def __init__(self, Data, Label,W,device): #torch.randn(1,self.Win*self.Win,device=self.device)*(self.I_random_range) self.Data = Data self.Label = Label self.device = device #返回数据集大小 def __len__(self): return len(self.Data) #得到数据内容和标签 def __getitem__(self, index): data = self.Data[index].to(self.device) label = self.Label[index].to(self.device) return data, label #网络构建 :单一全连接 单一通道 存在前conv 无后conv class Transmission_Matrix(nn.Module): def __init__(self,Win,Wout): #batchsize * channelnum * W * W, super(Transmission_Matrix, self).__init__() self.Matrix = nn.Linear(Win*Win, Wout*Wout, bias=True) self.Win = Win self.Wout = Wout def forward(self, input): W = input.shape[2] input = input.view(input.shape[0],input.shape[1],self.Win*self.Win) out = self.Matrix(input) out = 
out.view(input.shape[0],input.shape[1],self.Wout,self.Wout) return out class EnhancedNet(nn.Module): def __init__(self,Win,Wout,Temp_feature_nums): super(EnhancedNet, self).__init__() self.head=nn.Sequential( nn.Conv2d(1,Temp_feature_nums,3,padding=3//2), nn.Tanh(), nn.Conv2d(Temp_feature_nums,Temp_feature_nums,3, padding=3//2), nn.Tanh(), nn.Conv2d(Temp_feature_nums,1,3, padding=3//2), nn.Tanh() ) self.Matrix_r = Transmission_Matrix(Win,Wout) self.tailAct=nn.Sigmoid() def forward(self,input): result = self.tailAct(self.Matrix_r(self.head(input))) return result #showTestResult_XY = PositionXYGenerator(120,device) def showTestResult(net,testSamples,testLabels,index,device): net.train(False) _sample = testSamples[index].to(device) with torch.no_grad(): output = net(_sample.view(1,1,64,64)) plt.figure() plt.subplot(131) plt.title("speckle") plt.imshow(testSamples[index][0].cpu(),cmap='gray') # plt.show()#show不需要写啦!! plt.subplot(132) plt.title("output") img = output[0][0].cpu().numpy() np.where(img > 0, img, 0) plt.imshow(output[0][0].cpu(),cmap='gray') plt.subplot(133) plt.title("real label") img_t = testLabels[index][0].cpu().numpy() np.where(img_t > 0, img_t, 0) plt.imshow(testLabels[index][0].cpu(),cmap='gray') plt.show() net.train(True) import matplotlib.pyplot as plt # plt 用于显示图片 import matplotlib.image as mpimg # mpimg 用于读取图片 def SaveResult(net,testSamples,testLabels,index,device,root_path,windows_or_linux='\\'): net.train(False) _sample = testSamples[index].to(device) with torch.no_grad(): output = net(_sample.view(1,1,64,64)) mpimg.imsave(root_path+windows_or_linux+str(index)+'_speckle'+'.png',testSamples[index][0].cpu().numpy(),cmap='gray') mpimg.imsave(root_path+windows_or_linux+str(index)+'_reconstruction'+'.png',output[0][0].cpu().numpy(),cmap='gray') mpimg.imsave(root_path+windows_or_linux+str(index)+'_realLabel'+'.png',testLabels[index][0].cpu().numpy(),cmap='gray') net.train(True) # PSNR.py import numpy as np import math def psnr(target, ref): # 
    # target: target image; ref: reference image; scale: value range
    # assume RGB image
    target_data = np.array(target)
    ref_data = np.array(ref)
    diff = ref_data - target_data
    diff = diff.flatten('C')
    rmse = math.sqrt(np.mean(diff ** 2.))
    scale = 1.0
    return 20*math.log10(scale/rmse)

import matplotlib.pyplot as plt   # plt is used to display images
import matplotlib.image as mpimg  # mpimg is used to read images

def set_psnr(net, testSamples, testLabels, index, device):
    net.train(False)
    _sample = testSamples[index].to(device)
    with torch.no_grad():
        output = net(_sample.view(1, 1, 64, 64))
        output = psnr(output[0][0].cpu().numpy(), testLabels[index][0].cpu().numpy())
    net.train(True)
    return output

import torch
import torch.nn.functional as F
from math import exp
import numpy as np
from torch.autograd import Variable

# compute a 1-D Gaussian vector
def gaussian(window_size, sigma):
    gauss = torch.Tensor([exp(-(x - window_size//2)**2/float(2*sigma**2)) for x in range(window_size)])
    return gauss/gauss.sum()

# Create the Gaussian kernel as the matrix product of two 1-D Gaussian vectors.
# The channel parameter can be set to extend it to 3 channels.
def create_window(window_size, channel=1):
    _1D_window = gaussian(window_size, 1.5).unsqueeze(1)
    _2D_window = _1D_window.mm(_1D_window.t()).float().unsqueeze(0).unsqueeze(0)
    window = _2D_window.expand(channel, 1, window_size, window_size).contiguous()
    return window

# Compute SSIM directly from its formula, except that the local means are taken
# with a normalized Gaussian-kernel convolution instead of a plain pixel average.
# Variance and covariance use Var(X)=E[X^2]-E[X]^2 and cov(X,Y)=E[XY]-E[X]E[Y],
# with the expectations again computed via the Gaussian convolution.
def ssim(img1, img2, window_size=11, window=None, size_average=True, full=False, val_range=1):
    # Value range can be different from 255. Other common ranges are 1 (sigmoid) and 2 (tanh).
if val_range is None: if torch.max(img1) > 128: max_val = 255 else: max_val = 1 if torch.min(img1) < -0.5: min_val = -1 else: min_val = 0 L = max_val - min_val else: L = val_range padd = 0 (_, channel, height, width) = img1.size() if window is None: real_size = min(window_size, height, width) window = create_window(real_size, channel=channel).to(img1.device) mu1 = F.conv2d(img1, window, padding=padd, groups=channel) mu2 = F.conv2d(img2, window, padding=padd, groups=channel) mu1_sq = mu1.pow(2) mu2_sq = mu2.pow(2) mu1_mu2 = mu1 * mu2 sigma1_sq = F.conv2d(img1 * img1, window, padding=padd, groups=channel) - mu1_sq sigma2_sq = F.conv2d(img2 * img2, window, padding=padd, groups=channel) - mu2_sq sigma12 = F.conv2d(img1 * img2, window, padding=padd, groups=channel) - mu1_mu2 C1 = (0.01 * L) ** 2 C2 = (0.03 * L) ** 2 v1 = 2.0 * sigma12 + C2 v2 = sigma1_sq + sigma2_sq + C2 cs = torch.mean(v1 / v2) # contrast sensitivity ssim_map = ((2 * mu1_mu2 + C1) * v1) / ((mu1_sq + mu2_sq + C1) * v2) if size_average: ret = ssim_map.mean() else: ret = ssim_map.mean(1).mean(1).mean(1) if full: return ret, cs return ret # Classes to re-use window class SSIM(torch.nn.Module): def __init__(self, window_size=11, size_average=True, val_range=None): super(SSIM, self).__init__() self.window_size = window_size self.size_average = size_average self.val_range = val_range # Assume 1 channel for SSIM self.channel = 1 self.window = create_window(window_size) def forward(self, img1, img2): (_, channel, _, _) = img1.size() if channel == self.channel and self.window.dtype == img1.dtype: window = self.window else: window = create_window(self.window_size, channel).to(img1.device).type(img1.dtype) self.window = window self.channel = channel return ssim(img1 ,img2 ,window_size=self.window_size ,window=self.window ,size_average=self.size_average ,val_range=self.val_range) def ssim_calculate(ssim_class,image1,image2): with torch.no_grad(): result=ssim_class(image1,image2) return result 
# Reconstruct from a speckle pattern with the network and compute SSIM against the label
def netAndOriginalAndLabel_to_ssim(ssim_class, net, W1, W2, testSamples, testLabels, index, device):
    net.train(False)
    _sample = (testSamples[index].view(1, 1, W1, W1)).to(device)
    with torch.no_grad():
        output = net(_sample)
        output = ssim_calculate(ssim_class, output.cpu(), (testLabels[index].view(1, 1, W2, W2))).numpy()
    net.train(True)
    return output

import zipfile

def zip_ya(startdir):
    file_news = startdir + '.zip'  # name of the resulting archive
    z = zipfile.ZipFile(file_news, 'w', zipfile.ZIP_DEFLATED)  # first argument: folder name
    for dirpath, dirnames, filenames in os.walk(startdir):
        fpath = dirpath.replace(startdir, '')  # important: without this replace, paths are archived from the root directory
        fpath = fpath and fpath + os.sep or ''  # archive the current folder together with all the files it contains
        for filename in filenames:
            z.write(os.path.join(dirpath, filename), fpath + filename)
    print('archive created successfully')
    z.close()

# if __name__ == "__main__":
#     startdir = ".\\123"  # path of the folder to compress
#     file_news = startdir + '.zip'  # name of the resulting archive
#     zip_ya(startdir, file_news)

def OneTrainingTotalFunction(ScatteringSystem, resultSavedPath, trainTime, reconstructedAndSavedTestNum, imageStrengthFlowSigma):
    print("data preprocessing start...")
    device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
    SSS = ScatteringSystem
    path = r'/root/code/images'
    dirs = os.listdir(path)
    trans_num = 10000
    original_image = []
    scatted = []
    _t_trans_num = 0
    # torch.randn(1,self.Win*self.Win,device=self.device)*(self.I_random_range)
    originalImages_clear = np.zeros((trans_num, 64, 64), dtype=np.float32)
    for dir in dirs:
        if _t_trans_num < trans_num:
            _t_trans_num = _t_trans_num + 1
            file = Image.open(path+'//'+dir)
            i = (np.array(file.convert('L'))/255).astype(np.float32)
            i = torch.from_numpy(i)
            originalImages_clear[_t_trans_num-1] = i
            i = i + torch.randn(64, 64)*(imageStrengthFlowSigma)
            original_image.append(i)
            _t = SSS.generate(i.to(device))
            scatted.append(_t.to('cpu'))
            file.close()
    np_original_data = np.zeros((trans_num, 64, 64), dtype=np.float32)
    np_scatted_data = np.zeros((trans_num, 64, 64), dtype=np.float32)
    for i in range(0, trans_num):
np_original_data[i] = original_image[i] np_scatted_data[i] =scatted[i] samples=torch.from_numpy(np_original_data[0:9500]).view(9500,1,64,64) labels=torch.from_numpy(np_scatted_data[0:9500]).view(9500,1,64,64) testSamples = torch.from_numpy(np_original_data[9500:10000]).view(500,1,64,64) testLabels = torch.from_numpy(np_scatted_data[9500:10000]).view(500,1,64,64) originalImages_clear = torch.from_numpy(originalImages_clear[9500:10000]).view(500,1,64,64) #交换样本和标签 temp=samples samples=labels labels=temp temp=testSamples testSamples=testLabels testLabels=temp print(samples.shape) print(labels.shape) print(testSamples.shape) print(testLabels.shape) print(samples.dtype) print(labels.dtype) print(testSamples.dtype) print(testLabels.dtype) print("Samples and labels OK") index=10 print("show_test_original_and_speckle: ",index) show_test_original_and_speckle(index=index,testSamples=testSamples,testLabels=testLabels) print("dataset processing...") dataset = subDataset(samples,labels,64,device) device_count = 1 if(torch.cuda.device_count()>1): device_count = torch.cuda.device_count() batchSize = 16*device_count print("batchSize:",batchSize) dataloader = DataLoader.DataLoader(dataset,batch_size= batchSize, shuffle = True) print(dataset.__getitem__(10)[0].shape,dataset.__getitem__(10)[1].shape) print("network building...") net = EnhancedNet(Win=64,Wout=64,Temp_feature_nums=64) if torch.cuda.device_count() > 1: print("Let's use", torch.cuda.device_count(), "GPUs!") # dim = 0 [30, xxx] -> [10, ...], [10, ...], [10, ...] 
on 3 GPUs net = nn.DataParallel(net) device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu") net.to(device) index = 8 print("Untrained network reconstruction on test data: ",index) showTestResult(net,testSamples,testLabels,index,device) print("optimizer processing...") #criterion = nn.CrossEntropyLoss() #optimizer = optim.SGD(net.parameters(), lr=0.00001, momentum=0.9) optimizer = optim.Adam(net.parameters(), lr=3e-4) print("planned time is :",trainTime," ,start train...") startTime = time.time() net.train(True) for epoch in range(200): # loop over the dataset multiple times running_loss = 0.0 if (time.time()-startTime)<trainTime: for i, data in enumerate(dataloader, 0): # get the inputs inputs, labels = data # zero the parameter gradients optimizer.zero_grad() # forward + backward + optimize outputs = net(inputs) loss = torch.dist(outputs, labels) loss.backward() optimizer.step() # print statistics #testReconstruction running_loss += loss.item() # if i %100 ==99: # print every 2000 mini-batches # print('[%d, %5d] loss: %.3f' % (epoch + 1, i + 1, running_loss / 200)) # running_loss = 0.0 showTempNetResultNum = 3000 if i %showTempNetResultNum == showTempNetResultNum-1: # print every 2000 mini-batches ticks = time.time()-startTime print("当前时间戳为:", ticks) print('[%d, %5d] loss: %.3f' % (epoch + 1, i + 1, running_loss / showTempNetResultNum)) running_loss = 0.0 index = 10 showTestResult(net,testSamples,testLabels,index,device) net.train(False) print('totalTrainTime : ',(time.time()-startTime)) print('Finished Training') ssim_class=SSIM(val_range=1) print("creat ssim caculate class") #将测试集的散斑图片、原图像和重建图像 以及将对应的名称的PSNR和SSIM保存到list中。 print("计算500测试集中每张PSNR SSIM 并将相关输出保存......") SimulationSystemResult_PSNR_SSIM=[] average_psnr=0 average_ssim=0 root_path = resultSavedPath if not os.path.exists(root_path): os.makedirs(root_path) root_path_for_imageResult = resultSavedPath+"//"+"imageResult" if not os.path.exists(root_path_for_imageResult): 
os.makedirs(root_path_for_imageResult) root_path_for_qualityData = resultSavedPath+"//"+"qualityResult" if not os.path.exists(root_path_for_qualityData): os.makedirs(root_path_for_qualityData) with open(root_path_for_qualityData+"//"+"qualityResult.txt", 'w' ) as f: imageResult_root_path=root_path_for_imageResult windows_or_linux='//' net.train(False) for index in range(500): _sample = testSamples[index].to(device) with torch.no_grad(): output = net(_sample.view(1,1,64,64)) if index < reconstructedAndSavedTestNum : mpimg.imsave(imageResult_root_path+windows_or_linux+str(index)+'_speckle'+'.png',testSamples[index][0].cpu().numpy(),cmap='gray') mpimg.imsave(imageResult_root_path+windows_or_linux+str(index)+'_reconstruction'+'.png',output[0][0].cpu().numpy(),cmap='gray') mpimg.imsave(imageResult_root_path+windows_or_linux+str(index)+'_realLabel'+'.png',testLabels[index][0].cpu().numpy(),cmap='gray') singe_psnr = psnr(output[0][0].cpu().numpy(),originalImages_clear[index][0].cpu().numpy()) average_psnr=average_psnr+singe_psnr singe_ssim = ssim_calculate(ssim_class,output.cpu(),originalImages_clear[index].view(1,1,64,64)).numpy() average_ssim = average_ssim + singe_ssim s="result_"+str(index)+"psnr_and_ssim" SimulationSystemResult_PSNR_SSIM.append((s,singe_psnr,singe_ssim)) f.writelines(s+' '+str(singe_psnr)+' '+str(singe_ssim)+'\n') print((s,singe_psnr,singe_ssim)) average_psnr = average_psnr/500 average_ssim = average_ssim/500 with open(root_path_for_qualityData+"//"+"averageQualityResult.txt", 'w' ) as f: f.writelines('average_psnr : '+str(average_psnr)+'\n') f.writelines('average_ssim : '+str(average_ssim)+'\n') print("average PSNR and SSIM : ",average_psnr,average_ssim) print("result save complete") z = zipfile.ZipFile('images.zip', 'r') z.extractall(path=r'/root/code/images') z.close() device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu") SSS=Scattering_system_simulation(64,64,device,sparseValue=0,I_random_range=0.00,proportionSigma=0) 
DifferentSigmaList = [0, 0.1, 0.2, 0.3, 0.4, 0.5]
resultRootSavedPath = r"/root/code/TestImageStrengthFlow"
for sigma in DifferentSigmaList:
    resultSavedPath = resultRootSavedPath + '//' + 'sigma' + str(sigma) + 'result'
    OneTrainingTotalFunction(SSS, resultSavedPath=resultSavedPath, trainTime=600, reconstructedAndSavedTestNum=20, imageStrengthFlowSigma=sigma)
zip_ya(resultRootSavedPath)
print("processing success")

z = zipfile.ZipFile('images.zip', 'r')
z.extractall(path=r'/root/code/images')
z.close()
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
SSS = Scattering_system_simulation(64, 64, device, sparseValue=0, I_random_range=0.00, proportionSigma=0)
DifferentSigmaList = [0, 0.1, 0.2, 0.3, 0.4, 0.5]
resultRootSavedPath = r"/root/code/TestIFlow"
for sigma in DifferentSigmaList:
    resultSavedPath = resultRootSavedPath + '//' + 'sigma' + str(sigma) + 'result'
    SSS.I_random_range = sigma
    OneTrainingTotalFunction(SSS, resultSavedPath=resultSavedPath, trainTime=600, reconstructedAndSavedTestNum=20, imageStrengthFlowSigma=0)
zip_ya(resultRootSavedPath)
print("processing success")
```
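`OneTrainingTotalFunction` stops training once `trainTime` seconds have elapsed rather than after a fixed epoch count. A minimal sketch of that wall-clock budget pattern (the function name and the injectable `clock` parameter are illustrative; the notebook skips the epoch body instead of breaking, a `break` is used here for brevity):

```python
import time

def train_with_time_budget(step_fn, budget_seconds, max_epochs=200, clock=time.time):
    """Run training epochs until the wall-clock budget is spent, as in the
    notebook's `trainTime` check (helper names are illustrative)."""
    start = clock()
    epochs_run = 0
    for _ in range(max_epochs):
        if clock() - start >= budget_seconds:
            break  # budget exhausted: stop instead of idling through the remaining epochs
        step_fn()
        epochs_run += 1
    return epochs_run
```

Passing `clock` explicitly makes the budget logic testable with a fake clock, while production code simply uses the default `time.time`.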
# Basic Text Classification with Naive Bayes
***
In this mini-project, you'll learn the basics of text analysis using a subset of movie reviews from the rotten tomatoes database. You'll also use a fundamental technique in Bayesian inference, called Naive Bayes. This mini-project is based on [Lab 10 of Harvard's CS109](https://github.com/cs109/2015lab10) class. Please feel free to go to the original lab for additional exercises and solutions.

```
%matplotlib inline
import numpy as np
import scipy as sp
import matplotlib as mpl
import matplotlib.cm as cm
import matplotlib.pyplot as plt
import pandas as pd
import seaborn as sns
from six.moves import range

# Setup Pandas
pd.set_option('display.width', 500)
pd.set_option('display.max_columns', 100)
pd.set_option('display.notebook_repr_html', True)

# Setup Seaborn
sns.set_style("whitegrid")
sns.set_context("poster")
```

# Table of Contents
* [Rotten Tomatoes Dataset](#Rotten-Tomatoes-Dataset)
* [Explore](#Explore)
* [The Vector Space Model and a Search Engine](#The-Vector-Space-Model-and-a-Search-Engine)
* [In Code](#In-Code)
* [Naive Bayes](#Naive-Bayes)
* [Multinomial Naive Bayes and Other Likelihood Functions](#Multinomial-Naive-Bayes-and-Other-Likelihood-Functions)
* [Picking Hyperparameters for Naive Bayes and Text Maintenance](#Picking-Hyperparameters-for-Naive-Bayes-and-Text-Maintenance)
* [Interpretation](#Interpretation)

## Rotten Tomatoes Dataset

```
critics = pd.read_csv('./critics.csv')
# let's drop rows with missing quotes
critics = critics[~critics.quote.isnull()]
critics.head()
#critics.fresh
```

### Explore

```
n_reviews = len(critics)
n_movies = critics.rtid.unique().size
n_critics = critics.critic.unique().size

print("Number of reviews: {:d}".format(n_reviews))
print("Number of critics: {:d}".format(n_critics))
print("Number of movies: {:d}".format(n_movies))

df = critics.copy()
df['fresh'] = df.fresh == 'fresh'
grp = df.groupby('critic')
counts = grp.critic.count()  # number of reviews by each critic
means =
grp.fresh.mean()  # average freshness for each critic

means[counts > 100].hist(bins=10, edgecolor='w', lw=1)
plt.xlabel("Average Rating per critic")
plt.ylabel("Number of Critics")
plt.yticks([0, 2, 4, 6, 8, 10]);

df.head()
```

<h3>Exercise Set I</h3>
<br/>
<b>Exercise:</b> Look at the histogram above. Tell a story about the average ratings per critic. What shape does the distribution look like? What is interesting about the distribution? What might explain these interesting things?

The distribution looks roughly normal below 0.5 and right-skewed above 0.5. Only a few critics are consistently harsh or consistently generous, while many critics average around 0.6. We might expect the typical average to sit at 0.5, yet very few critics fall between 0.5 and 0.6. One possible explanation is that critics prefer to rate slightly above the midpoint rather than exactly at 0.5, since sitting right on the fence could make a critic appear indecisive.

## The Vector Space Model and a Search Engine

All the diagrams here are snipped from [*Introduction to Information Retrieval* by Manning et. al.](http://nlp.stanford.edu/IR-book/) which is a great resource on text processing. For additional information on text mining and natural language processing, see [*Foundations of Statistical Natural Language Processing* by Manning and Schutze](http://nlp.stanford.edu/fsnlp/). Also check out Python packages [`nltk`](http://www.nltk.org/), [`spaCy`](https://spacy.io/), [`pattern`](http://www.clips.ua.ac.be/pattern), and their associated resources. Also see [`word2vec`](https://en.wikipedia.org/wiki/Word2vec).

Let us define the vector derived from document $d$ by $\bar V(d)$. What does this mean? Each document is treated as a vector containing information about the words contained in it.
Each vector has the same length and each entry "slot" in the vector contains some kind of data about the words that appear in the document such as presence/absence (1/0), count (an integer) or some other statistic. Each vector has the same length because each document shares the same vocabulary across the full collection of documents -- this collection is called a *corpus*. To define the vocabulary, we take the union of all words we have seen in all documents. We then just associate an array index with them. So "hello" may be at index 5 and "world" at index 99. Suppose we have the following corpus: `A Fox one day spied a beautiful bunch of ripe grapes hanging from a vine trained along the branches of a tree. The grapes seemed ready to burst with juice, and the Fox's mouth watered as he gazed longingly at them.` Suppose we treat each sentence as a document $d$. The vocabulary (often called the *lexicon*) is the following: $V = \left\{\right.$ `a, along, and, as, at, beautiful, branches, bunch, burst, day, fox, fox's, from, gazed, grapes, hanging, he, juice, longingly, mouth, of, one, ready, ripe, seemed, spied, the, them, to, trained, tree, vine, watered, with`$\left.\right\}$ Then the document `A Fox one day spied a beautiful bunch of ripe grapes hanging from a vine trained along the branches of a tree` may be represented as the following sparse vector of word counts: $$\bar V(d) = \left( 4,1,0,0,0,1,1,1,0,1,1,0,1,0,1,1,0,0,0,0,2,1,0,1,0,0,1,0,0,0,1,1,0,0 \right)$$ or more succinctly as `[(0, 4), (1, 1), (5, 1), (6, 1), (7, 1), (9, 1), (10, 1), (12, 1), (14, 1), (15, 1), (20, 2), (21, 1), (23, 1),` `(26, 1), (30, 1), (31, 1)]` along with a dictionary `` { 0: a, 1: along, 5: beautiful, 6: branches, 7: bunch, 9: day, 10: fox, 12: from, 14: grapes, 15: hanging, 19: mouth, 20: of, 21: one, 23: ripe, 24: seemed, 25: spied, 26: the, 30: tree, 31: vine, } `` Then, a set of documents becomes, in the usual `sklearn` style, a sparse matrix with rows being sparse arrays 
representing documents and columns representing the features/words in the vocabulary. Notice that this representation loses the relative ordering of the terms in the document. That is, "cat ate rat" and "rat ate cat" are the same. Thus, this representation is also known as the Bag-Of-Words representation. Here is another example, from the book quoted above, although the matrix is transposed here so that documents are columns: ![novel terms](terms.png) Such a matrix is also called a Term-Document Matrix. Here, the terms being indexed could be stemmed before indexing; for instance, `jealous` and `jealousy` after stemming are the same feature. One could also make use of other "Natural Language Processing" transformations in constructing the vocabulary. We could use Lemmatization, which reduces words to lemmas: work, working, worked would all reduce to work. We could remove "stopwords" from our vocabulary, such as common words like "the". We could look for particular parts of speech, such as adjectives. This is often done in Sentiment Analysis. And so on. It all depends on our application. From the book: >The standard way of quantifying the similarity between two documents $d_1$ and $d_2$ is to compute the cosine similarity of their vector representations $\bar V(d_1)$ and $\bar V(d_2)$: $$S_{12} = \frac{\bar V(d_1) \cdot \bar V(d_2)}{|\bar V(d_1)| \times |\bar V(d_2)|}$$ ![Vector Space Model](vsm.png) >There is a far more compelling reason to represent documents as vectors: we can also view a query as a vector. Consider the query q = jealous gossip. This query turns into the unit vector $\bar V(q)$ = (0, 0.707, 0.707) on the three coordinates below. ![novel terms](terms2.png) >The key idea now: to assign to each document d a score equal to the dot product: $$\bar V(q) \cdot \bar V(d)$$ Then we can use this simple Vector Model as a search engine. 
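As a concrete sketch of this dot-product scoring idea (assuming `scikit-learn` is available), we can vectorize the two fox "documents" from the corpus example plus a query, and rank the documents by cosine similarity. Note that `CountVectorizer`'s default tokenizer drops single-character tokens such as "a", so its vocabulary differs slightly from the hand-built one above.

```python
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# The two fox sentences from the corpus above, treated as documents
docs = [
    "A Fox one day spied a beautiful bunch of ripe grapes hanging "
    "from a vine trained along the branches of a tree",
    "The grapes seemed ready to burst with juice, and the Fox's mouth "
    "watered as he gazed longingly at them",
]

vectorizer = CountVectorizer()
V = vectorizer.fit_transform(docs)       # term-document count matrix

# Treat a query as a document in the same vector space
q = vectorizer.transform(["ripe grapes"])

# Score each document by cosine similarity with the query
scores = cosine_similarity(q, V).ravel()
best = int(np.argmax(scores))
print(best, scores)
```

The first sentence contains both "ripe" and "grapes" while the second contains only "grapes", so the first document scores higher and would be returned first by this tiny search engine.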
### In Code ``` from sklearn.feature_extraction.text import CountVectorizer text = ['Hop on pop', 'Hop off pop', 'Hop Hop hop'] print("Original text is\n{}".format('\n'.join(text))) vectorizer = CountVectorizer(min_df=0) # call `fit` to build the vocabulary vectorizer.fit(text) # call `transform` to convert text to a bag of words x = vectorizer.transform(text) # CountVectorizer uses a sparse array to save memory, but it's easier in this assignment to # convert back to a "normal" numpy array x = x.toarray() print("") print("Transformed text vector is \n{}".format(x)) # `get_feature_names` tracks which word is associated with each column of the transformed x print("") print("Words for each feature:") print(vectorizer.get_feature_names()) # Notice that the bag of words treatment doesn't preserve information about the *order* of words, # just their frequency def make_xy(critics, vectorizer=None): #Your code here if vectorizer is None: vectorizer = CountVectorizer() X = vectorizer.fit_transform(critics.quote) X = X.tocsc() # some versions of sklearn return COO format y = (critics.fresh == 'fresh').values.astype(int) # plain int; np.int was removed from NumPy return X, y X, y = make_xy(critics) #df1=pd.DataFrame(X.toarray()) ``` ## Naive Bayes From Bayes' Theorem, we have that $$P(c \vert f) = \frac{P(c \cap f)}{P(f)}$$ where $c$ represents a *class* or category, and $f$ represents a feature vector, such as $\bar V(d)$ as above. **We are computing the probability that a document (or whatever we are classifying) belongs to category *c* given the features in the document.** $P(f)$ is really just a normalization constant, so the literature usually writes Bayes' Theorem in the context of Naive Bayes as $$P(c \vert f) \propto P(f \vert c) P(c) $$ $P(c)$ is called the *prior* and is simply the probability of seeing class $c$. But what is $P(f \vert c)$? This is the probability that we see feature set $f$ given that this document is actually in class $c$. This is called the *likelihood* and comes from the data. 
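To make the prior/likelihood decomposition concrete, here is a tiny worked example with invented numbers (the word "superb" and all percentages below are hypothetical, not taken from the critics data):

```python
# Toy illustration of P(c|f) ∝ P(f|c) P(c) with made-up numbers.
# Suppose 60% of reviews are "fresh", and the word "superb" appears in
# 20% of fresh reviews but only 5% of rotten ones (invented values).
p_fresh, p_rotten = 0.6, 0.4                          # priors P(c)
p_word_given_fresh, p_word_given_rotten = 0.20, 0.05  # likelihoods P(f|c)

# Unnormalized posteriors P(f|c) * P(c)
post_fresh = p_word_given_fresh * p_fresh
post_rotten = p_word_given_rotten * p_rotten

# Normalize by P(f) so the two posteriors sum to 1
evidence = post_fresh + post_rotten
posterior = post_fresh / evidence
print(posterior)  # P(fresh | "superb")
```

Here the posterior comes out to roughly 0.86: even though "superb" is not overwhelmingly common in fresh reviews, it is four times more likely there than in rotten ones, and the prior already favors fresh.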
One of the major assumptions of the Naive Bayes model is that the features are *conditionally independent* given the class. While the presence of a particular discriminative word may uniquely identify the document as being part of class $c$ and thus violate general feature independence, conditional independence means that the presence of that term is independent of all the other words that appear *within that class*. This is a very important distinction. Recall that if two events are independent, then: $$P(A \cap B) = P(A) \cdot P(B)$$ Thus, conditional independence implies $$P(f \vert c) = \prod_i P(f_i | c) $$ where $f_i$ is an individual feature (a word in this example). To make a classification, we then choose the class $c$ such that $P(c \vert f)$ is maximal. There is a small caveat when computing these probabilities. To avoid [floating point underflow](http://nlp.stanford.edu/IR-book/html/htmledition/naive-bayes-text-classification-1.html), we change the product into a sum by working in log space (a trick closely related to log-sum-exp). So: $$\log P(f \vert c) = \sum_i \log P(f_i \vert c) $$ There is another caveat. What if we see a term that didn't exist in the training data? This means that $P(f_i \vert c) = 0$ for that term, and thus $P(f \vert c) = \prod_i P(f_i | c) = 0$, which doesn't help us at all. Instead of using zeros, we add a small value $\alpha$ to each count. This is called Laplace Smoothing. $$P(f_i \vert c) = \frac{N_{ic}+\alpha}{N_c + \alpha N_i}$$ where $N_{ic}$ is the number of times feature $i$ was seen in class $c$, $N_c$ is the number of times class $c$ was seen and $N_i$ is the number of times feature $i$ was seen globally. $\alpha$ is sometimes called a regularization parameter. ### Multinomial Naive Bayes and Other Likelihood Functions Since we are modeling word counts, we are using a variation of Naive Bayes called Multinomial Naive Bayes. 
This is because the likelihood function actually takes the form of the multinomial distribution. $$P(f \vert c) = \frac{\left( \sum_i f_i \right)!}{\prod_i f_i!} \prod_{i} P(f_i \vert c)^{f_i} \propto \prod_{i} P(f_i \vert c)$$ where the nasty term out front is absorbed as a normalization constant such that probabilities sum to 1. There are many other variations of Naive Bayes, all of which depend on what type of value $f_i$ takes. If $f_i$ is continuous, we may be able to use *Gaussian Naive Bayes*. First compute the mean and variance for each class $c$. Then the likelihood, $P(f \vert c)$, is given as follows $$P(f_i = v \vert c) = \frac{1}{\sqrt{2\pi \sigma^2_c}} e^{- \frac{\left( v - \mu_c \right)^2}{2 \sigma^2_c}}$$ <h3>Exercise Set II</h3> <p><b>Exercise:</b> Implement a simple Naive Bayes classifier:</p> <ol> <li> split the data set into a training and test set <li> Use `scikit-learn`'s `MultinomialNB()` classifier with default parameters. <li> train the classifier over the training set and test on the test set <li> print the accuracy scores for both the training and the test sets </ol> What do you notice? Is this a good classifier? If not, why not? ``` #your turn from sklearn.model_selection import train_test_split from sklearn.metrics import accuracy_score X, y = make_xy(critics) Xlr, Xtestlr, ylr, ytestlr = train_test_split(X, y, random_state=5) from sklearn.naive_bayes import MultinomialNB clf = MultinomialNB() clf.fit(Xlr, ylr) print("Testing accuracy:") print(accuracy_score(ytestlr, clf.predict(Xtestlr))) print("Training accuracy:") print(accuracy_score(ylr, clf.predict(Xlr))) ``` ### Picking Hyperparameters for Naive Bayes and Text Maintenance We need to know what value to use for $\alpha$, and we also need to know which words to include in the vocabulary. As mentioned earlier, some words are obvious stopwords. 
Other words appear so infrequently that they serve as noise, and other words in addition to stopwords appear so frequently that they may also serve as noise. First, let's find an appropriate value for `min_df` for the `CountVectorizer`. `min_df` can be either an integer or a float/decimal. If it is an integer, `min_df` represents the minimum number of documents a word must appear in for it to be included in the vocabulary. If it is a float, it represents the minimum *percentage* of documents a word must appear in to be included in the vocabulary. From the documentation: >min_df: When building the vocabulary ignore terms that have a document frequency strictly lower than the given threshold. This value is also called cut-off in the literature. If float, the parameter represents a proportion of documents, integer absolute counts. This parameter is ignored if vocabulary is not None. <h3>Exercise Set III</h3> <p><b>Exercise:</b> Construct the cumulative distribution of document frequencies (df). The $x$-axis is a document count $x_i$ and the $y$-axis is the percentage of words that appear less than $x_i$ times. For example, at $x=5$, plot a point representing the percentage or number of words that appear in 5 or fewer documents.</p> <p><b>Exercise:</b> Look for the point at which the curve begins climbing steeply. This may be a good value for `min_df`. If we were interested in also picking `max_df`, we would likely pick the value where the curve starts to plateau. What value did you choose?</p> ``` # Your turn. 
# Compute each word's document frequency (the number of documents it appears in) directly from the sparse document-term matrix, then plot the cumulative distribution. doc_freq = np.asarray((X > 0).sum(axis=0)).ravel() counts = np.arange(1, doc_freq.max() + 1) cumulative = [np.mean(doc_freq <= c) for c in counts] plt.plot(counts, cumulative) plt.xscale('log') plt.xlabel('Document count') plt.ylabel('Fraction of words') plt.title('Cumulative distribution of document frequencies') plt.grid(True) plt.show() ``` The parameter $\alpha$ is chosen to be a small value that simply avoids having zeros in the probability computations. This value can sometimes be chosen arbitrarily with domain expertise, but we will use K-fold cross validation. In K-fold cross-validation, we divide the data into $K$ non-overlapping parts. We train on $K-1$ of the folds and test on the remaining fold. We then iterate, so that each fold serves as the test fold exactly once. The function `cv_score` performs the K-fold cross-validation algorithm for us, but we need to pass a function that measures the performance of the algorithm on each fold. ``` from sklearn.naive_bayes import MultinomialNB from sklearn.model_selection import KFold def cv_score(clf, X, y, scorefunc): result = 0. nfold = 5 for train, test in KFold(nfold).split(X): # split data into train/test groups, 5 times clf.fit(X[train], y[train]) # fit the classifier, passed in as clf result += scorefunc(clf, X[test], y[test]) # evaluate score function on held-out data return result / nfold # average ``` We use the log-likelihood as the score here in `scorefunc`. The higher the log-likelihood, the better. Indeed, what we do in `cv_score` above is to implement the cross-validation part of `GridSearchCV`. 
The custom scoring function `scorefunc` allows us to use different metrics depending on the decision risk we care about (precision, accuracy, profit etc.) directly on the validation set. You will often find people using `roc_auc`, precision, recall, or `F1-score` as the scoring function. ``` def log_likelihood(clf, x, y): prob = clf.predict_log_proba(x) rotten = y == 0 fresh = ~rotten return prob[rotten, 0].sum() + prob[fresh, 1].sum() ``` We'll cross-validate over the regularization parameter $\alpha$. Let's set up the train and test masks first, and then we can run the cross-validation procedure. ``` from sklearn.model_selection import train_test_split itrain, _ = train_test_split(list(range(critics.shape[0])), train_size=0.7) mask = np.zeros(critics.shape[0], dtype=bool) mask[itrain] = True # mask selects the 70% training rows; ~mask selects the test rows ``` <h3>Exercise Set IV</h3> <p><b>Exercise:</b> What does using the function `log_likelihood` as the score mean? What are we trying to optimize for?</p> <p><b>Exercise:</b> Without writing any code, what do you think would happen if you choose a value of $\alpha$ that is too high?</p> <p><b>Exercise:</b> Using the skeleton code below, find the best values of the parameter `alpha`, and use the value of `min_df` you chose in the previous exercise set. Use the `cv_score` function above with the `log_likelihood` function for scoring.</p> Using `log_likelihood` as the score means we measure how much (log) probability the fitted model assigns to the true labels of the held-out data. We are therefore optimizing the quality of the model's posterior probability estimates given the inputs, not just its 0/1 accuracy. If $\alpha$ is chosen too high, the smoothing overwhelms the evidence in the word counts, the model drifts toward the prior, and both the training and testing accuracy decrease. 
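For comparison, here is a hedged sketch of how `GridSearchCV` could perform the same kind of search directly. The toy quotes below are invented stand-ins for `critics.quote` (the real data file is not bundled here), and the built-in `neg_log_loss` scorer plays roughly the role of our `log_likelihood` function:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.model_selection import GridSearchCV
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import Pipeline

# Tiny synthetic stand-in for the critics data (invented examples)
quotes = ["a superb touching film", "remarkable and moving", "great fun",
          "dull and lifeless", "a tedious mess", "painfully boring"] * 5
labels = [1, 1, 1, 0, 0, 0] * 5

pipeline = Pipeline([
    ("vec", CountVectorizer(min_df=1)),
    ("nb", MultinomialNB()),
])

# Search over the smoothing parameter alpha with 5-fold CV,
# scoring each fold by (negative) log loss of the held-out labels
grid = GridSearchCV(pipeline,
                    param_grid={"nb__alpha": [0.1, 1, 5, 10, 50]},
                    scoring="neg_log_loss", cv=5)
grid.fit(quotes, labels)
print(grid.best_params_)
```

The `nb__alpha` naming follows the `Pipeline` convention of `<step name>__<parameter>`; on the real data you would fit this on the training mask only and then evaluate the refit `grid.best_estimator_` on the held-out rows.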
``` from sklearn.naive_bayes import MultinomialNB #the grid of parameters to search over alphas = [.1, 1, 5, 10, 50] best_min_df = 1 # YOUR TURN: put your value of min_df here. #Find the best value for alpha and min_df, and the best classifier best_alpha = None #maxscore=-np.inf score_func=[] for alpha in alphas: vectorizer = CountVectorizer(min_df=best_min_df) Xthis, ythis = make_xy(critics, vectorizer) Xtrainthis = Xthis[mask] ytrainthis = ythis[mask] # your turn clf = MultinomialNB(alpha=alpha) score_func.append(cv_score(clf,Xtrainthis,ytrainthis,log_likelihood)) alpha1=np.argmax(score_func) best_alpha=alphas[alpha1] print("alpha: {}".format(best_alpha)) ``` <h3>Exercise Set V: Working with the Best Parameters</h3> <p><b>Exercise:</b> Using the best value of `alpha` you just found, calculate the accuracy on the training and test sets. Is this classifier better? Why (not)?</p> ``` vectorizer = CountVectorizer(min_df=best_min_df) X, y = make_xy(critics, vectorizer) xtrain=X[mask] ytrain=y[mask] xtest=X[~mask] ytest=y[~mask] clf = MultinomialNB(alpha=best_alpha).fit(xtrain, ytrain) #your turn. Print the accuracy on the test and training dataset training_accuracy = clf.score(xtrain, ytrain) test_accuracy = clf.score(xtest, ytest) print("Accuracy on training data: {:.2f}".format(training_accuracy)) print("Accuracy on test data: {:.2f}".format(test_accuracy)) from sklearn.metrics import confusion_matrix print(confusion_matrix(ytest, clf.predict(xtest))) ``` ## Interpretation ### What are the strongly predictive features? We use a neat trick to identify strongly predictive features (i.e. words). * first, create a data set such that each row has exactly one feature. This is represented by the identity matrix. 
* use the trained classifier to make predictions on this matrix * sort the rows by predicted probabilities, and pick the top and bottom $K$ rows ``` words = np.array(vectorizer.get_feature_names()) x = np.eye(xtest.shape[1]) probs = clf.predict_log_proba(x)[:, 0] ind = np.argsort(probs) good_words = words[ind[:10]] bad_words = words[ind[-10:]] good_prob = probs[ind[:10]] bad_prob = probs[ind[-10:]] print("Good words\t P(fresh | word)") for w, p in zip(good_words, good_prob): print("{:>20}".format(w), "{:.2f}".format(1 - np.exp(p))) print("Bad words\t P(fresh | word)") for w, p in zip(bad_words, bad_prob): print("{:>20}".format(w), "{:.2f}".format(1 - np.exp(p))) words ``` <h3>Exercise Set VI</h3> <p><b>Exercise:</b> Why does this method work? What does the probability for each row in the identity matrix represent?</p> This method works because each row of the identity matrix is a one-hot vector representing a hypothetical document that contains exactly one vocabulary word. The predicted probability for that row is therefore the classifier's estimate of the class probability conditioned on that single word appearing. The above exercise is an example of *feature selection*. There are many other feature selection methods. A list of feature selection methods available in `sklearn` is [here](http://scikit-learn.org/stable/modules/classes.html#module-sklearn.feature_selection). The most common feature selection technique for text mining is the chi-squared $\left( \chi^2 \right)$ [method](http://nlp.stanford.edu/IR-book/html/htmledition/feature-selectionchi2-feature-selection-1.html). ### Prediction Errors We can see mis-predictions as well. 
``` x, y = make_xy(critics, vectorizer) prob = clf.predict_proba(x)[:, 0] predict = clf.predict(x) bad_rotten = np.argsort(prob[y == 0])[:5] bad_fresh = np.argsort(prob[y == 1])[-5:] print("Mis-predicted Rotten quotes") print('---------------------------') for row in bad_rotten: print(critics[y == 0].quote.iloc[row]) print("") print("Mis-predicted Fresh quotes") print('--------------------------') for row in bad_fresh: print(critics[y == 1].quote.iloc[row]) print("") ``` <h3>Exercise Set VII: Predicting the Freshness for a New Review</h3> <br/> <div> <b>Exercise:</b> <ul> <li> Using your best trained classifier, predict the freshness of the following sentence: *'This movie is not remarkable, touching, or superb in any way'* <li> Is the result what you'd expect? Why (not)? </ul> </div> ``` vectorizer = CountVectorizer(min_df=best_min_df) X, y = make_xy(critics, vectorizer) xtrain=X[mask] ytrain=y[mask] xtest=X[~mask] ytest=y[~mask] text=['This movie is not remarkable, touching, or superb in any way'] clf = MultinomialNB(alpha=best_alpha).fit(xtrain, ytrain) txt = vectorizer.transform(text) print(clf.predict(txt)) ``` The sentence contains three positive words ('remarkable', 'touching', 'superb') that are negated by 'not', but the bag-of-words representation discards word order, so the negation is lost. The classifier sees mostly words that are more probable in fresh reviews and therefore predicts fresh. ### Aside: TF-IDF Weighting for Term Importance TF-IDF stands for `Term-Frequency X Inverse Document Frequency`. In the standard `CountVectorizer` model above, we used just the term frequency in a document of words in our vocabulary. In TF-IDF, we weight this term frequency by the inverse of its popularity in all documents. For example, if the word "movie" showed up in all the documents, it would not have much predictive value. It could actually be considered a stopword. By weighing its counts by 1 divided by its overall frequency, we downweight it. We can then use these TF-IDF weighted features as inputs to any classifier. 
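A quick sketch of this downweighting, using toy documents invented for illustration: "movie" appears in every document while "superb" appears in only one, so `TfidfVectorizer` should give "superb" the larger weight within its document.

```python
from sklearn.feature_extraction.text import TfidfVectorizer

# "movie" appears in every document, "superb" in only one,
# so TF-IDF should downweight "movie" relative to "superb".
docs = ["a superb movie", "a dull movie", "a boring movie"]

tfidf = TfidfVectorizer()
X = tfidf.fit_transform(docs).toarray()
vocab = tfidf.vocabulary_        # word -> column index

w_movie = X[0, vocab["movie"]]
w_superb = X[0, vocab["superb"]]
print(w_movie, w_superb)
```

Both words occur once in the first document, so their raw term frequencies are identical; the difference in the printed weights comes entirely from the IDF factor.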
**TF-IDF is essentially a measure of term importance, and of how discriminative a word is in a corpus.** There are a variety of nuances involved in computing TF-IDF, mainly involving where to add the smoothing term to avoid division by 0, or log of 0 errors. The formula for TF-IDF in `scikit-learn` differs from that of most textbooks: $$\mbox{TF-IDF}(t, d) = \mbox{TF}(t, d)\times \mbox{IDF}(t) = n_{td} \left( \log{ \frac{\vert D \vert}{\vert d : t \in d \vert} } + 1 \right)$$ where $n_{td}$ is the number of times term $t$ occurs in document $d$, $\vert D \vert$ is the number of documents, and $\vert d : t \in d \vert$ is the number of documents that contain $t$. ``` # http://scikit-learn.org/dev/modules/feature_extraction.html#text-feature-extraction # http://scikit-learn.org/dev/modules/classes.html#text-feature-extraction-ref from sklearn.feature_extraction.text import TfidfVectorizer tfidfvectorizer = TfidfVectorizer(min_df=1, stop_words='english') Xtfidf=tfidfvectorizer.fit_transform(critics.quote) ``` <h3>Exercise Set VIII: Enrichment</h3> <p> There are several additional things we could try. Try some of these as exercises: <ol> <li> Build a Naive Bayes model where the features are n-grams instead of words. N-grams are phrases containing n words next to each other: a bigram contains 2 words, a trigram contains 3 words, and a 6-gram contains 6 words. This is useful because "not good" and "so good" mean very different things. On the other hand, as n increases, the model does not scale well since the feature set becomes more sparse. <li> Try a model besides Naive Bayes, one that would allow for interactions between words -- for example, a Random Forest classifier. <li> Try adding supplemental features -- information about genre, director, cast, etc. <li> Use word2vec or [Latent Dirichlet Allocation](https://en.wikipedia.org/wiki/Latent_Dirichlet_allocation) to group words into topics and use those topics for prediction. 
<li> Use TF-IDF weighting instead of word counts. </ol> </p> <b>Exercise:</b> Try a few of these ideas to improve the model (or any other ideas of your own). Implement here and report on the result. ``` # Your turn from sklearn.feature_extraction.text import TfidfVectorizer, CountVectorizer from sklearn.decomposition import NMF, LatentDirichletAllocation vectorizer = CountVectorizer(min_df=best_min_df,stop_words='english') no_topics=1 X, y = make_xy(critics, vectorizer) xtrain=X[mask] ytrain=y[mask] xtest=X[~mask] ytest=y[~mask] lda = LatentDirichletAllocation(n_components=no_topics, max_iter=5, learning_method='online', learning_offset=50.,random_state=0).fit(xtrain) feature_names = vectorizer.get_feature_names() def display_topics(model, feature_names, no_top_words): for topic_idx, topic in enumerate(model.components_): print("Topic %d:" % (topic_idx)) print(" ".join([feature_names[i] for i in topic.argsort()[:-no_top_words - 1:-1]])) no_top_words=2 display_topics(lda, feature_names, no_top_words) no_top_words=5 display_topics(lda, feature_names, no_top_words) ``` I have tried LDA and, as you can see above, it is able to recognize that the corpus is about movies and films. You can play around with the number of topics and the number of top words shown for each topic.
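As one more enrichment sketch, combining ideas 2 and 5 from the list above, the snippet below feeds TF-IDF features into a Random Forest, which can capture interactions between words. The quotes are invented stand-ins for the critics data, so treat this as a pattern rather than a result:

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import Pipeline

# Tiny synthetic stand-in for critics.quote and the fresh labels
quotes = ["a superb touching film", "remarkable and moving", "great fun",
          "dull and lifeless", "a tedious mess", "painfully boring"] * 5
labels = [1, 1, 1, 0, 0, 0] * 5

# TF-IDF weighting (idea 5) feeding a model with word interactions (idea 2)
model = Pipeline([
    ("tfidf", TfidfVectorizer(stop_words="english")),
    ("rf", RandomForestClassifier(n_estimators=100, random_state=0)),
])
model.fit(quotes, labels)

preds = model.predict(["a superb touching film", "a tedious mess"])
print(preds)
```

On the real data you would fit this on the training mask and compare its held-out accuracy against the tuned Naive Bayes model above.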
# Batch Normalization – Solutions Batch normalization is most useful when building deep neural networks. To demonstrate this, we'll create a convolutional neural network with 20 convolutional layers, followed by a fully connected layer. We'll use it to classify handwritten digits in the MNIST dataset, which should be familiar to you by now. This is **not** a good network for classifying MNIST digits. You could create a _much_ simpler network and get _better_ results. However, to give you hands-on experience with batch normalization, we had to make an example that was: 1. Complicated enough that training would benefit from batch normalization. 2. Simple enough that it would train quickly, since this is meant to be a short exercise just to give you some practice adding batch normalization. 3. Simple enough that the architecture would be easy to understand without additional resources. This notebook includes two versions of the network that you can edit. The first uses higher level functions from the `tf.layers` package. The second is the same network, but uses only lower level functions in the `tf.nn` package. 1. [Batch Normalization with `tf.layers.batch_normalization`](#example_1) 2. [Batch Normalization with `tf.nn.batch_normalization`](#example_2) The following cell loads TensorFlow, downloads the MNIST dataset if necessary, and loads it into an object named `mnist`. You'll need to run this cell before running anything else in the notebook. 
``` import tensorflow as tf from tensorflow.examples.tutorials.mnist import input_data mnist = input_data.read_data_sets("MNIST_data/", one_hot=True, reshape=False) ``` # Batch Normalization using `tf.layers.batch_normalization`<a id="example_1"></a> This version of the network uses `tf.layers` for almost everything, and expects you to implement batch normalization using [`tf.layers.batch_normalization`](https://www.tensorflow.org/api_docs/python/tf/layers/batch_normalization) We'll use the following function to create fully connected layers in our network. We'll create them with the specified number of neurons and a ReLU activation function. This version of the function does not include batch normalization. ``` """ DO NOT MODIFY THIS CELL """ def fully_connected(prev_layer, num_units): """ Create a fully connected layer with the given layer as input and the given number of neurons. :param prev_layer: Tensor The Tensor that acts as input into this layer :param num_units: int The size of the layer. That is, the number of units, nodes, or neurons. :returns Tensor A new fully connected layer """ layer = tf.layers.dense(prev_layer, num_units, activation=tf.nn.relu) return layer ``` We'll use the following function to create convolutional layers in our network. They are very basic: we're always using a 3x3 kernel, ReLU activation functions, strides of 2x2 on every third layer (whenever the layer's depth is divisible by 3), and strides of 1x1 otherwise. We aren't bothering with pooling layers at all in this network. This version of the function does not include batch normalization. ``` """ DO NOT MODIFY THIS CELL """ def conv_layer(prev_layer, layer_depth): """ Create a convolutional layer with the given layer as input. :param prev_layer: Tensor The Tensor that acts as input into this layer :param layer_depth: int We'll set the strides and number of feature maps based on the layer's depth in the network. 
This is *not* a good way to make a CNN, but it helps us create this example with very little code. :returns Tensor A new convolutional layer """ strides = 2 if layer_depth % 3 == 0 else 1 conv_layer = tf.layers.conv2d(prev_layer, layer_depth*4, 3, strides, 'same', activation=tf.nn.relu) return conv_layer ``` **Run the following cell**, along with the earlier cells (to load the dataset and define the necessary functions). This cell builds the network **without** batch normalization, then trains it on the MNIST dataset. It displays loss and accuracy data periodically while training. ``` """ DO NOT MODIFY THIS CELL """ def train(num_batches, batch_size, learning_rate): # Build placeholders for the input samples and labels inputs = tf.placeholder(tf.float32, [None, 28, 28, 1]) labels = tf.placeholder(tf.float32, [None, 10]) # Feed the inputs into a series of 20 convolutional layers layer = inputs for layer_i in range(1, 20): layer = conv_layer(layer, layer_i) # Flatten the output from the convolutional layers orig_shape = layer.get_shape().as_list() layer = tf.reshape(layer, shape=[-1, orig_shape[1] * orig_shape[2] * orig_shape[3]]) # Add one fully connected layer layer = fully_connected(layer, 100) # Create the output layer with 1 node for each class logits = tf.layers.dense(layer, 10) # Define the loss function and training operation model_loss = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=logits, labels=labels)) train_opt = tf.train.AdamOptimizer(learning_rate).minimize(model_loss) correct_prediction = tf.equal(tf.argmax(logits,1), tf.argmax(labels,1)) accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32)) # Train and test the network with tf.Session() as sess: sess.run(tf.global_variables_initializer()) for batch_i in range(num_batches): batch_xs, batch_ys = mnist.train.next_batch(batch_size) # train this batch sess.run(train_opt, {inputs: batch_xs, labels: batch_ys}) # Periodically check the validation or training loss and accuracy if batch_i % 100 == 0: loss, acc = 
sess.run([model_loss, accuracy], {inputs: mnist.validation.images, labels: mnist.validation.labels}) print('Batch: {:>2}: Validation loss: {:>3.5f}, Validation accuracy: {:>3.5f}'.format(batch_i, loss, acc)) elif batch_i % 25 == 0: loss, acc = sess.run([model_loss, accuracy], {inputs: batch_xs, labels: batch_ys}) print('Batch: {:>2}: Training loss: {:>3.5f}, Training accuracy: {:>3.5f}'.format(batch_i, loss, acc)) # At the end, score the final accuracy for both the validation and test sets acc = sess.run(accuracy, {inputs: mnist.validation.images, labels: mnist.validation.labels}) print('Final validation accuracy: {:>3.5f}'.format(acc)) acc = sess.run(accuracy, {inputs: mnist.test.images, labels: mnist.test.labels}) print('Final test accuracy: {:>3.5f}'.format(acc)) # Score the first 100 test images individually, just to make sure batch normalization really worked correct = 0 for i in range(100): correct += sess.run(accuracy,feed_dict={inputs: [mnist.test.images[i]], labels: [mnist.test.labels[i]]}) print("Accuracy on 100 samples:", correct/100) num_batches = 800 batch_size = 64 learning_rate = 0.002 tf.reset_default_graph() with tf.Graph().as_default(): train(num_batches, batch_size, learning_rate) ``` With this many layers, it's going to take a lot of iterations for this network to learn. By the time you're done training these 800 batches, your final test and validation accuracies probably won't be much better than 10%. (It will be different each time, but will most likely be less than 15%.) Using batch normalization, you'll be able to train this same network to over 90% in that same number of batches. # Add batch normalization To add batch normalization to the layers created by `fully_connected`, we did the following: 1. Added the `is_training` parameter to the function signature so we can pass that information to the batch normalization layer. 2. Removed the bias and activation function from the `dense` layer. 3. 
Used `tf.layers.batch_normalization` to normalize the layer's output. Notice we pass `is_training` to this layer to ensure the network updates its population statistics appropriately. 4. Passed the normalized values into a ReLU activation function. ``` def fully_connected(prev_layer, num_units, is_training): """ Create a fully connected layer with the given layer as input and the given number of neurons. :param prev_layer: Tensor The Tensor that acts as input into this layer :param num_units: int The size of the layer. That is, the number of units, nodes, or neurons. :param is_training: bool or Tensor Indicates whether or not the network is currently training, which tells the batch normalization layer whether or not it should update or use its population statistics. :returns Tensor A new fully connected layer """ layer = tf.layers.dense(prev_layer, num_units, use_bias=False, activation=None) layer = tf.layers.batch_normalization(layer, training=is_training) layer = tf.nn.relu(layer) return layer ``` To add batch normalization to the layers created by `conv_layer`, we did the following: 1. Added the `is_training` parameter to the function signature so we can pass that information to the batch normalization layer. 2. Removed the bias and activation function from the `conv2d` layer. 3. Used `tf.layers.batch_normalization` to normalize the convolutional layer's output. Notice we pass `is_training` to this layer to ensure the network updates its population statistics appropriately. 4. Passed the normalized values into a ReLU activation function. If you compare this function to `fully_connected`, you'll see that – when using `tf.layers` – there really isn't any difference between normalizing a fully connected layer and a convolutional layer. However, if you look at the second example in this notebook, where we restrict ourselves to the `tf.nn` package, you'll see a small difference. 
``` def conv_layer(prev_layer, layer_depth, is_training): """ Create a convolutional layer with the given layer as input. :param prev_layer: Tensor The Tensor that acts as input into this layer :param layer_depth: int We'll set the strides and number of feature maps based on the layer's depth in the network. This is *not* a good way to make a CNN, but it helps us create this example with very little code. :param is_training: bool or Tensor Indicates whether or not the network is currently training, which tells the batch normalization layer whether or not it should update or use its population statistics. :returns Tensor A new convolutional layer """ strides = 2 if layer_depth % 3 == 0 else 1 conv_layer = tf.layers.conv2d(prev_layer, layer_depth*4, 3, strides, 'same', use_bias=False, activation=None) conv_layer = tf.layers.batch_normalization(conv_layer, training=is_training) conv_layer = tf.nn.relu(conv_layer) return conv_layer ``` Batch normalization is still a new enough idea that researchers are still discovering how best to use it. In general, people seem to agree to remove the layer's bias (because the batch normalization already has terms for scaling and shifting) and add batch normalization _before_ the layer's non-linear activation function. However, for some networks it will work well in other ways, too. Just to demonstrate this point, the following three versions of `conv_layer` show other ways to implement batch normalization. If you try running with any of these versions of the function, they should all still work fine (although some versions may still work better than others). 
**Alternate solution that uses bias in the convolutional layer but still adds batch normalization before the ReLU activation function.** ``` def conv_layer(prev_layer, layer_num, is_training): strides = 2 if layer_num % 3 == 0 else 1 conv_layer = tf.layers.conv2d(prev_layer, layer_num*4, 3, strides, 'same', use_bias=True, activation=None) conv_layer = tf.layers.batch_normalization(conv_layer, training=is_training) conv_layer = tf.nn.relu(conv_layer) return conv_layer ``` **Alternate solution that uses a bias and ReLU activation function _before_ batch normalization.** ``` def conv_layer(prev_layer, layer_num, is_training): strides = 2 if layer_num % 3 == 0 else 1 conv_layer = tf.layers.conv2d(prev_layer, layer_num*4, 3, strides, 'same', use_bias=True, activation=tf.nn.relu) conv_layer = tf.layers.batch_normalization(conv_layer, training=is_training) return conv_layer ``` **Alternate solution that uses a ReLU activation function _before_ normalization, but no bias.** ``` def conv_layer(prev_layer, layer_num, is_training): strides = 2 if layer_num % 3 == 0 else 1 conv_layer = tf.layers.conv2d(prev_layer, layer_num*4, 3, strides, 'same', use_bias=False, activation=tf.nn.relu) conv_layer = tf.layers.batch_normalization(conv_layer, training=is_training) return conv_layer ``` To modify `train`, we did the following: 1. Added `is_training`, a placeholder to store a boolean value indicating whether or not the network is training. 2. Passed `is_training` to the `conv_layer` and `fully_connected` functions. 3. Each time we call `run` on the session, we added to `feed_dict` the appropriate value for `is_training`. 4. Moved the creation of `train_opt` inside a `with tf.control_dependencies...` statement. This is necessary to get the normalization layers created with `tf.layers.batch_normalization` to update their population statistics, which we need when performing inference. 
```
def train(num_batches, batch_size, learning_rate):
    # Build placeholders for the input samples and labels
    inputs = tf.placeholder(tf.float32, [None, 28, 28, 1])
    labels = tf.placeholder(tf.float32, [None, 10])

    # Add placeholder to indicate whether or not we're training the model
    is_training = tf.placeholder(tf.bool)

    # Feed the inputs into a series of 20 convolutional layers
    layer = inputs
    for layer_i in range(1, 20):
        layer = conv_layer(layer, layer_i, is_training)

    # Flatten the output from the convolutional layers
    orig_shape = layer.get_shape().as_list()
    layer = tf.reshape(layer, shape=[-1, orig_shape[1] * orig_shape[2] * orig_shape[3]])

    # Add one fully connected layer
    layer = fully_connected(layer, 100, is_training)

    # Create the output layer with 1 node for each class
    logits = tf.layers.dense(layer, 10)

    # Define loss and training operations
    model_loss = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=logits, labels=labels))

    # Tell TensorFlow to update the population statistics while training
    with tf.control_dependencies(tf.get_collection(tf.GraphKeys.UPDATE_OPS)):
        train_opt = tf.train.AdamOptimizer(learning_rate).minimize(model_loss)

    # Create operations to test accuracy
    correct_prediction = tf.equal(tf.argmax(logits, 1), tf.argmax(labels, 1))
    accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))

    # Train and test the network
    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        for batch_i in range(num_batches):
            batch_xs, batch_ys = mnist.train.next_batch(batch_size)

            # train this batch
            sess.run(train_opt, {inputs: batch_xs, labels: batch_ys, is_training: True})

            # Periodically check the validation or training loss and accuracy
            if batch_i % 100 == 0:
                loss, acc = sess.run([model_loss, accuracy], {inputs: mnist.validation.images,
                                                              labels: mnist.validation.labels,
                                                              is_training: False})
                print('Batch: {:>2}: Validation loss: {:>3.5f}, Validation accuracy: {:>3.5f}'.format(batch_i, loss, acc))
            elif batch_i % 25 == 0:
                loss, acc = sess.run([model_loss, accuracy], {inputs: batch_xs, labels: batch_ys, is_training: False})
                print('Batch: {:>2}: Training loss: {:>3.5f}, Training accuracy: {:>3.5f}'.format(batch_i, loss, acc))

        # At the end, score the final accuracy for both the validation and test sets
        acc = sess.run(accuracy, {inputs: mnist.validation.images,
                                  labels: mnist.validation.labels,
                                  is_training: False})
        print('Final validation accuracy: {:>3.5f}'.format(acc))
        acc = sess.run(accuracy, {inputs: mnist.test.images,
                                  labels: mnist.test.labels,
                                  is_training: False})
        print('Final test accuracy: {:>3.5f}'.format(acc))

        # Score the first 100 test images individually, just to make sure batch normalization really worked
        correct = 0
        for i in range(100):
            correct += sess.run(accuracy, feed_dict={inputs: [mnist.test.images[i]],
                                                     labels: [mnist.test.labels[i]],
                                                     is_training: False})
        print("Accuracy on 100 samples:", correct/100)

num_batches = 800
batch_size = 64
learning_rate = 0.002

tf.reset_default_graph()
with tf.Graph().as_default():
    train(num_batches, batch_size, learning_rate)
```

With batch normalization, we now get excellent performance. In fact, validation accuracy is almost 94% after only 500 batches. Notice also the last line of the output: `Accuracy on 100 samples`. If this value is low while everything else looks good, that means you did not implement batch normalization correctly. Specifically, it means you either did not calculate the population mean and variance while training, or you are not using those values during inference.

# Batch Normalization using `tf.nn.batch_normalization`<a id="example_2"></a>

Most of the time you will be able to use higher level functions exclusively, but sometimes you may want to work at a lower level. For example, if you ever want to implement a new feature – something new enough that TensorFlow does not already include a high-level implementation of it, like batch normalization in an LSTM – then you may need to know these sorts of things.
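Before diving into the TensorFlow version, it may help to see the arithmetic that batch normalization performs: normalize to zero mean and unit variance, then apply a learned scale and shift. Below is a hedged NumPy sketch of that computation (the function name `batch_norm` and the toy data are our own, not TensorFlow internals):

```python
import numpy as np

def batch_norm(x, gamma, beta, epsilon=1e-3):
    """Normalize each column of x to zero mean / unit variance, then scale and shift."""
    mean = x.mean(axis=0)
    variance = x.var(axis=0)
    x_hat = (x - mean) / np.sqrt(variance + epsilon)  # normalized activations
    return gamma * x_hat + beta                       # learned scale (gamma) and shift (beta)

rng = np.random.default_rng(0)
x = rng.normal(loc=5.0, scale=2.0, size=(256, 4))  # a batch of 256 samples, 4 units
out = batch_norm(x, gamma=np.ones(4), beta=np.zeros(4))

print(out.mean(axis=0))  # ~0 per unit
print(out.std(axis=0))   # ~1 per unit
```

With `gamma=1` and `beta=0` this is pure normalization; during training the network learns `gamma` and `beta` so it can undo the normalization wherever that helps.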
This version of the network uses `tf.nn` for almost everything, and expects you to implement batch normalization using [`tf.nn.batch_normalization`](https://www.tensorflow.org/api_docs/python/tf/nn/batch_normalization).

This implementation of `fully_connected` is much more involved than the one that uses `tf.layers`. However, if you went through the `Batch_Normalization_Lesson` notebook, things should look pretty familiar. To add batch normalization, we did the following:

1. Added the `is_training` parameter to the function signature so we can pass that information to the batch normalization layer.
2. Removed the bias and activation function from the `dense` layer.
3. Added `gamma`, `beta`, `pop_mean`, and `pop_variance` variables.
4. Used `tf.cond` to handle training and inference differently.
5. When training, we use `tf.nn.moments` to calculate the batch mean and variance. Then we update the population statistics and use `tf.nn.batch_normalization` to normalize the layer's output using the batch statistics. Notice the `with tf.control_dependencies...` statement - this is required to force TensorFlow to run the operations that update the population statistics.
6. During inference (i.e. when not training), we use `tf.nn.batch_normalization` to normalize the layer's output using the population statistics we calculated during training.
7. Passed the normalized values into a ReLU activation function.

If any of this code is unclear, it is almost identical to what we showed in the `fully_connected` function in the `Batch_Normalization_Lesson` notebook. Please see that for extensive comments.

```
def fully_connected(prev_layer, num_units, is_training):
    """
    Create a fully connected layer with the given layer as input and the given number of neurons.

    :param prev_layer: Tensor
        The Tensor that acts as input into this layer
    :param num_units: int
        The size of the layer. That is, the number of units, nodes, or neurons.
:param is_training: bool or Tensor Indicates whether or not the network is currently training, which tells the batch normalization layer whether or not it should update or use its population statistics. :returns Tensor A new fully connected layer """ layer = tf.layers.dense(prev_layer, num_units, use_bias=False, activation=None) gamma = tf.Variable(tf.ones([num_units])) beta = tf.Variable(tf.zeros([num_units])) pop_mean = tf.Variable(tf.zeros([num_units]), trainable=False) pop_variance = tf.Variable(tf.ones([num_units]), trainable=False) epsilon = 1e-3 def batch_norm_training(): batch_mean, batch_variance = tf.nn.moments(layer, [0]) decay = 0.99 train_mean = tf.assign(pop_mean, pop_mean * decay + batch_mean * (1 - decay)) train_variance = tf.assign(pop_variance, pop_variance * decay + batch_variance * (1 - decay)) with tf.control_dependencies([train_mean, train_variance]): return tf.nn.batch_normalization(layer, batch_mean, batch_variance, beta, gamma, epsilon) def batch_norm_inference(): return tf.nn.batch_normalization(layer, pop_mean, pop_variance, beta, gamma, epsilon) batch_normalized_output = tf.cond(is_training, batch_norm_training, batch_norm_inference) return tf.nn.relu(batch_normalized_output) ``` The changes we made to `conv_layer` are _almost_ exactly the same as the ones we made to `fully_connected`. However, there is an important difference. Convolutional layers have multiple feature maps, and each feature map uses shared weights. So we need to make sure we calculate our batch and population statistics **per feature map** instead of per node in the layer. To accomplish this, we do **the same things** that we did in `fully_connected`, with two exceptions: 1. The sizes of `gamma`, `beta`, `pop_mean` and `pop_variance` are set to the number of feature maps (output channels) instead of the number of output nodes. 2. We change the parameters we pass to `tf.nn.moments` to make sure it calculates the mean and variance for the correct dimensions. 
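The axis difference can be checked directly in NumPy. A hedged sketch (the shapes are illustrative only) of why taking moments over axes `[0, 1, 2]` yields one statistic per feature map, while axis `[0]` yields one per node:

```python
import numpy as np

rng = np.random.default_rng(1)

# Fully connected activations: (batch, units) -> one statistic per unit
dense_out = rng.normal(size=(64, 100))
dense_mean = dense_out.mean(axis=0)        # shape (100,)

# Convolutional activations: (batch, height, width, channels) -> one statistic per feature map
conv_out = rng.normal(size=(64, 28, 28, 32))
conv_mean = conv_out.mean(axis=(0, 1, 2))  # shape (32,)
conv_var = conv_out.var(axis=(0, 1, 2))    # shape (32,)

print(dense_mean.shape, conv_mean.shape, conv_var.shape)
```

Averaging over the batch, height, and width dimensions is what lets every spatial position of a feature map share the same normalization, matching the shared weights of the convolution.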
``` def conv_layer(prev_layer, layer_depth, is_training): """ Create a convolutional layer with the given layer as input. :param prev_layer: Tensor The Tensor that acts as input into this layer :param layer_depth: int We'll set the strides and number of feature maps based on the layer's depth in the network. This is *not* a good way to make a CNN, but it helps us create this example with very little code. :param is_training: bool or Tensor Indicates whether or not the network is currently training, which tells the batch normalization layer whether or not it should update or use its population statistics. :returns Tensor A new convolutional layer """ strides = 2 if layer_depth % 3 == 0 else 1 in_channels = prev_layer.get_shape().as_list()[3] out_channels = layer_depth*4 weights = tf.Variable( tf.truncated_normal([3, 3, in_channels, out_channels], stddev=0.05)) layer = tf.nn.conv2d(prev_layer, weights, strides=[1,strides, strides, 1], padding='SAME') gamma = tf.Variable(tf.ones([out_channels])) beta = tf.Variable(tf.zeros([out_channels])) pop_mean = tf.Variable(tf.zeros([out_channels]), trainable=False) pop_variance = tf.Variable(tf.ones([out_channels]), trainable=False) epsilon = 1e-3 def batch_norm_training(): # Important to use the correct dimensions here to ensure the mean and variance are calculated # per feature map instead of for the entire layer batch_mean, batch_variance = tf.nn.moments(layer, [0,1,2], keep_dims=False) decay = 0.99 train_mean = tf.assign(pop_mean, pop_mean * decay + batch_mean * (1 - decay)) train_variance = tf.assign(pop_variance, pop_variance * decay + batch_variance * (1 - decay)) with tf.control_dependencies([train_mean, train_variance]): return tf.nn.batch_normalization(layer, batch_mean, batch_variance, beta, gamma, epsilon) def batch_norm_inference(): return tf.nn.batch_normalization(layer, pop_mean, pop_variance, beta, gamma, epsilon) batch_normalized_output = tf.cond(is_training, batch_norm_training, batch_norm_inference) return 
tf.nn.relu(batch_normalized_output)
```

To modify `train`, we did the following:

1. Added `is_training`, a placeholder to store a boolean value indicating whether or not the network is training.
2. Each time we call `run` on the session, we added to `feed_dict` the appropriate value for `is_training`.
3. We did **not** need to add the `with tf.control_dependencies...` statement that we added in the network that used `tf.layers.batch_normalization` because we handled updating the population statistics ourselves in `conv_layer` and `fully_connected`.

```
def train(num_batches, batch_size, learning_rate):
    # Build placeholders for the input samples and labels
    inputs = tf.placeholder(tf.float32, [None, 28, 28, 1])
    labels = tf.placeholder(tf.float32, [None, 10])

    # Add placeholder to indicate whether or not we're training the model
    is_training = tf.placeholder(tf.bool)

    # Feed the inputs into a series of 20 convolutional layers
    layer = inputs
    for layer_i in range(1, 20):
        layer = conv_layer(layer, layer_i, is_training)

    # Flatten the output from the convolutional layers
    orig_shape = layer.get_shape().as_list()
    layer = tf.reshape(layer, shape=[-1, orig_shape[1] * orig_shape[2] * orig_shape[3]])

    # Add one fully connected layer
    layer = fully_connected(layer, 100, is_training)

    # Create the output layer with 1 node for each class
    logits = tf.layers.dense(layer, 10)

    # Define loss and training operations
    model_loss = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=logits, labels=labels))
    train_opt = tf.train.AdamOptimizer(learning_rate).minimize(model_loss)

    # Create operations to test accuracy
    correct_prediction = tf.equal(tf.argmax(logits, 1), tf.argmax(labels, 1))
    accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))

    # Train and test the network
    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        for batch_i in range(num_batches):
            batch_xs, batch_ys = mnist.train.next_batch(batch_size)

            # train this batch
            sess.run(train_opt, {inputs: batch_xs, labels: batch_ys, is_training: True})

            # Periodically check the validation or training loss and accuracy
            if batch_i % 100 == 0:
                loss, acc = sess.run([model_loss, accuracy], {inputs: mnist.validation.images,
                                                              labels: mnist.validation.labels,
                                                              is_training: False})
                print('Batch: {:>2}: Validation loss: {:>3.5f}, Validation accuracy: {:>3.5f}'.format(batch_i, loss, acc))
            elif batch_i % 25 == 0:
                loss, acc = sess.run([model_loss, accuracy], {inputs: batch_xs, labels: batch_ys, is_training: False})
                print('Batch: {:>2}: Training loss: {:>3.5f}, Training accuracy: {:>3.5f}'.format(batch_i, loss, acc))

        # At the end, score the final accuracy for both the validation and test sets
        acc = sess.run(accuracy, {inputs: mnist.validation.images,
                                  labels: mnist.validation.labels,
                                  is_training: False})
        print('Final validation accuracy: {:>3.5f}'.format(acc))
        acc = sess.run(accuracy, {inputs: mnist.test.images,
                                  labels: mnist.test.labels,
                                  is_training: False})
        print('Final test accuracy: {:>3.5f}'.format(acc))

        # Score the first 100 test images individually, just to make sure batch normalization really worked
        correct = 0
        for i in range(100):
            correct += sess.run(accuracy, feed_dict={inputs: [mnist.test.images[i]],
                                                     labels: [mnist.test.labels[i]],
                                                     is_training: False})
        print("Accuracy on 100 samples:", correct/100)

num_batches = 800
batch_size = 64
learning_rate = 0.002

tf.reset_default_graph()
with tf.Graph().as_default():
    train(num_batches, batch_size, learning_rate)
```

Once again, the model with batch normalization quickly reaches a high accuracy. But in our run, notice that it doesn't seem to learn anything for the first 250 batches, then the accuracy starts to climb. That just goes to show - even with batch normalization, it's important to give your network a bit of time to learn before you decide it isn't working.
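The `pop_mean * decay + batch_mean * (1 - decay)` update used in `batch_norm_training` is an exponential moving average. A small, hedged NumPy sketch (toy numbers, not the notebook's data) of how that running estimate converges to the true mean over many batches:

```python
import numpy as np

rng = np.random.default_rng(42)
decay = 0.99
pop_mean = 0.0  # initialized like tf.zeros, just as in the layer code above

true_mean = 3.0
for _ in range(2000):
    batch = rng.normal(loc=true_mean, scale=1.0, size=64)
    # Same update rule as tf.assign(pop_mean, pop_mean * decay + batch_mean * (1 - decay))
    pop_mean = pop_mean * decay + batch.mean() * (1 - decay)

print(pop_mean)  # close to 3.0
```

The high decay (0.99) makes the estimate slow to move but smooth, which is why a network needs enough training batches before its population statistics are trustworthy at inference time.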
```
%matplotlib inline
from fastai.vision.all import *
from fastai.vision.gan import *
```

## LSUN bedroom data

For this lesson, we'll be using the bedrooms from the [LSUN dataset](http://lsun.cs.princeton.edu/2017/). The full dataset is a bit too large, so we'll use a sample from [kaggle](https://www.kaggle.com/jhoward/lsun_bedroom).

```
path = untar_data(URLs.LSUN_BEDROOMS)
```

We then grab all the images in the folder with the data block API. We don't create a validation set here for reasons we'll explain later. The input consists of random noise of size 100 by default (this can be changed if you replace `generate_noise` by `partial(generate_noise, size=...)`), and the images of bedrooms are the targets.

```
dblock = DataBlock(blocks = (TransformBlock, ImageBlock),
                   get_x = generate_noise,
                   get_items = get_image_files,
                   splitter = IndexSplitter([]))

def get_dls(bs, size):
    dblock = DataBlock(blocks = (TransformBlock, ImageBlock),
                       get_x = generate_noise,
                       get_items = get_image_files,
                       splitter = IndexSplitter([]),
                       item_tfms=Resize(size, method=ResizeMethod.Crop),
                       batch_tfms = Normalize.from_stats(torch.tensor([0.5,0.5,0.5]), torch.tensor([0.5,0.5,0.5])))
    return dblock.dataloaders(path, path=path, bs=bs)
```

We'll begin with a small size since GANs take a lot of time to train.

```
dls = get_dls(128, 64)
dls.show_batch(max_n=16)
```

## Models

GAN stands for [Generative Adversarial Nets](https://arxiv.org/pdf/1406.2661.pdf), invented by Ian Goodfellow. The concept is that we will train two models at the same time: a generator and a critic. The generator will try to make new images similar to the ones in our dataset, and the critic will try to distinguish real images from the ones the generator makes. The generator returns images, the critic a single number (usually 0. for fake images and 1. for real ones).

We train them against each other in the sense that at each step (more or less), we:

1.
Freeze the generator and train the critic for one step by:
   - getting one batch of true images (let's call that `real`)
   - generating one batch of fake images (let's call that `fake`)
   - have the critic evaluate each batch and compute a loss function from that; the important part is that it rewards positively the detection of real images and penalizes the fake ones
   - update the weights of the critic with the gradients of this loss

2. Freeze the critic and train the generator for one step by:
   - generating one batch of fake images
   - evaluate the critic on it
   - return a loss that positively rewards the critic thinking those are real images; the important part is that the generator is rewarded when it fools the critic
   - update the weights of the generator with the gradients of this loss

Here, we'll use the [Wasserstein GAN](https://arxiv.org/pdf/1701.07875.pdf).

We create a generator and a critic that we pass to `GANLearner`. The noise size is the size of the random vector from which our generator creates images.

```
generator = basic_generator(64, n_channels=3, n_extra_layers=1)
critic    = basic_critic (64, n_channels=3, n_extra_layers=1, act_cls=partial(nn.LeakyReLU, negative_slope=0.2))

learn = GANLearner.wgan(dls, generator, critic, opt_func = partial(Adam, mom=0.))

learn.recorder.train_metrics=True
learn.recorder.valid_metrics=False

learn.fit(30, 2e-4, wd=0)

#learn.gan_trainer.switch(gen_mode=True)
learn.show_results(max_n=16, figsize=(8,8), ds_idx=0)
```
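The alternating scheme described above can be sketched without any deep learning framework. Below is a toy example that is entirely our own construction (a linear generator `x = a*z + b`, a linear critic `c(x) = w*x`, analytic gradients, and WGAN-style weight clipping) just to show the critic and generator taking turns:

```python
import numpy as np

rng = np.random.default_rng(0)

a, b = 1.0, 0.0                 # generator: x = a*z + b
w = 0.0                         # critic: c(x) = w*x (a Wasserstein critic outputs a score)
clip, lr_c, lr_g = 0.1, 0.05, 0.5
real_mean = 3.0                 # the "dataset" is just 1-D samples around 3.0

for _ in range(500):
    real = rng.normal(loc=real_mean, scale=1.0, size=256)
    z = rng.normal(size=256)
    fake = a * z + b

    # Critic step: raise scores on real data, lower them on fakes, then clip the weight
    w += lr_c * (real.mean() - fake.mean())
    w = float(np.clip(w, -clip, clip))

    # Generator step: move fakes in the direction the critic scores highly
    b += lr_g * w               # gradient of -mean(w * (a*z + b)) w.r.t. b is -w

print(b)  # the generator's output mean drifts toward real_mean
```

Even in this degenerate setting, the two players settle near an equilibrium where the fakes match the real mean, mirroring the freeze/train alternation the learner performs for us.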
<a href="https://colab.research.google.com/github/rvignav/SimCLR/blob/main/Train.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> ``` %cd /scratch/users/rvignav/SimCLR !pip install -r requirements.txt import numpy as np import pickle import pandas as pd from sklearn.model_selection import train_test_split from tensorflow.keras.applications.vgg16 import VGG16 from evaluate_features import get_features, linear_classifier, tSNE_vis ``` # Load Dataframe ``` import csv class_labels = ["none", "mild", "moderate", "severe", "proliferative"] csv_file = open('data/trainLabels.csv', mode='r') d = csv.DictReader(csv_file) fname = [] label = [] one_hot = [] for row in d: fname.append('/scratch/groups/rubin/SimCLR/stylized-input/' + row['image'] + '.jpeg') l = int(row['level']) label.append(class_labels[l]) arr = [0, 0, 0, 0, 0] arr[l] = 1 one_hot.append(arr) df = pd.DataFrame({"filename": fname, "class_label": label, "class_one_hot": one_hot}) df.head() num_classes = len(df['class_one_hot'][0]) print("# of training instances:", len(df.index), "\n") for label in class_labels: print(f"# of '{label}' training instances: {(df.class_label == label).sum()}") df_train, df_val_test = train_test_split(df, test_size=0.30, random_state=42, shuffle=True) df_val, df_test = train_test_split(df_val_test, test_size=0.50, random_state=42, shuffle=True) print("# of training instances:", len(df_train.index), "\n") for label in class_labels: print(f"# of '{label}' training instances: {(df_train.class_label == label).sum()}") print() print("# of validation instances:", len(df_val.index), "\n") for label in class_labels: print(f"# of '{label}' training instances: {(df_val.class_label == label).sum()}") print() print("# of test instances:", len(df_test.index), "\n") for label in class_labels: print(f"# of '{label}' training instances: {(df_test.class_label == label).sum()}") dfs = { "train": df_train, "val": df_val, "test": df_test 
} # Img size size = 128 height_img = size width_img = size input_shape = (height_img, width_img, 3) ``` # Load pretrained VGG16 & Feature evaluation ``` params_vgg16 = {'weights': "imagenet", 'include_top': False, 'input_shape': input_shape, 'pooling': None} # Design model base_model = VGG16(**params_vgg16) base_model.summary() feat_dim = 2 * 2 * 512 ``` # Build SimCLR-Model ``` from DataGeneratorSimCLR import DataGeneratorSimCLR as DataGenerator from SimCLR import SimCLR ``` ### Properties ``` batch_size = 16 # Projection_head num_layers_ph = 2 feat_dims_ph = [2048, 128] num_of_unfrozen_layers = 4 save_path = 'models/dr-stylized' SimCLR = SimCLR( base_model = base_model, input_shape = input_shape, batch_size = batch_size, feat_dim = feat_dim, feat_dims_ph = feat_dims_ph, num_of_unfrozen_layers = num_of_unfrozen_layers, save_path = save_path ) params_generator = {'batch_size': batch_size, 'shuffle' : True, 'width':width_img, 'height': height_img, 'VGG': True } # Generators data_train = DataGenerator(df_train.reset_index(drop=True), **params_generator) data_val = DataGenerator(df_val.reset_index(drop=True), subset = "val", **params_generator) #val keeps the unity values on the same random places ~42 data_test = DataGenerator(df_test.reset_index(drop=True), subset = "test", **params_generator) #test keeps the unity values on the diagonal ``` ## Training SimCLR ``` SimCLR.unfreeze_and_train(data_train, data_val, num_of_unfrozen_layers = 4, r = 4, lr = 1e-6, epochs = 25) ```
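The two-stage `train_test_split` pattern used earlier (70% train, then the remaining 30% halved into validation and test) can be sketched with plain Python. A hedged illustration using toy indices rather than the notebook's DataFrame:

```python
import random

random.seed(42)
items = list(range(1000))           # stand-ins for the dataframe rows

random.shuffle(items)
n_train = int(0.70 * len(items))    # first split: 70% train / 30% held out
train, held_out = items[:n_train], items[n_train:]

n_val = len(held_out) // 2          # second split: halve the held-out set
val, test = held_out[:n_val], held_out[n_val:]

print(len(train), len(val), len(test))  # 700 150 150
```

The net effect is a 70/15/15 split with no row appearing in more than one set, which is exactly what the chained `train_test_split` calls produce.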
<center> <img src="https://tensorflowkorea.files.wordpress.com/2020/12/4.-e18492e185a9e186abe1848ce185a1-e18480e185a9e186bce18487e185aee18492e185a1e18482e185b3e186ab-e18486e185a5e18489e185b5e186abe18485e185a5e18482e185b5e186bce18483e185b5e186b8e18485e185a5e.png?w=972" width="200" height="200"><br> </center>

## 07-3 Training a Neural Network Model

In this section we'll use the Keras API to look at the various tools needed to train a model, and cover several important concepts and best practices along the way.

### - Loss curves

In section 2, training a model with the fit() method printed a detailed training log with the epoch count, loss, accuracy, and so on. You may also remember a message like the following at the end of the output:

<tensorflow.python.keras.callbacks.History at 0x7f11f82f5710>

A notebook code cell automatically prints the result of its last line even without a print() call, so this message is the printed return value of the fit() method - in other words, evidence that fit() returns something. In fact, Keras's fit() method returns a History object, which stores the metrics computed during training: the loss and accuracy values. We can use them to draw graphs.

First, as in the previous section, load the Fashion MNIST dataset and split it into training and validation sets.

```
from tensorflow import keras
from sklearn.model_selection import train_test_split

(train_input, train_target), (test_input, test_target) = \
    keras.datasets.fashion_mnist.load_data()

train_scaled = train_input / 255.0

train_scaled, val_scaled, train_target, val_target = train_test_split(
    train_scaled, train_target, test_size=0.2, random_state=42)
```

Next, let's build the model. Unlike the previous section, we'll define a simple function that creates the model; it takes one parameter. Let's write the code first.

```
def model_fn(a_layer=None):
    model = keras.Sequential()
    model.add(keras.layers.Flatten(input_shape=(28, 28)))
    model.add(keras.layers.Dense(100, activation='relu'))
    if a_layer:
        model.add(a_layer)
    model.add(keras.layers.Dense(10, activation='softmax'))
    return model
```

Except for the if statement, this code builds the same model as the previous section. The if statement's role is that when a Keras layer is passed to model_fn() (through the a_layer parameter), it inserts that extra layer after the hidden layer - almost like assembling the model programmatically.

Here we call model_fn() without passing a layer through a_layer, and printing the model structure confirms it's the same model as in the previous section.

```
model = model_fn()

model.summary()
```

We train the model just as in the previous section, but this time store the result of the fit() method in a history variable.
```
model.compile(optimizer='rmsprop', loss='sparse_categorical_crossentropy', metrics=['accuracy'])

history = model.fit(train_scaled, train_target, epochs=5, verbose=0)  # verbose=0 suppresses the per-epoch progress bar
```

The history object contains a history dictionary holding the metrics measured during training. Let's check which values it contains.

```
print(history.history.keys())
```

It contains the loss and the accuracy. As mentioned in the previous section, Keras computes the loss at every epoch by default, and the accuracy is included in the history attribute because we added accuracy to the metrics parameter of compile(). The loss and accuracy stored in the history attribute are simply lists of the values computed at each epoch, in order. Let's plot them with matplotlib.

```
import matplotlib.pyplot as plt

plt.plot(history.history['loss'])
plt.xlabel('epoch')
plt.ylabel('loss')
plt.show()
```

Since Python list indices start at 0, the five epochs appear on the x-axis as 0 through 4; the y-axis shows the computed loss. Now let's plot the accuracy.

```
plt.plot(history.history['accuracy'])
plt.xlabel('epoch')
plt.ylabel('accuracy')
plt.show()
```

The loss decreases and the accuracy improves at every epoch. So will training for more epochs reduce the loss further? Let's increase the epoch count to 20, train the model, and plot the loss.

```
model = model_fn()
model.compile(optimizer='rmsprop', loss='sparse_categorical_crossentropy', metrics=['accuracy'])

history = model.fit(train_scaled, train_target, epochs=20, verbose=0)

plt.plot(history.history['loss'])
plt.xlabel('epoch')
plt.ylabel('loss')
plt.show()
```

As expected, the loss keeps decreasing.

### - Validation loss

Earlier, with stochastic gradient descent, we examined the relationship between over/underfitting and the number of epochs. Since artificial neural networks are all trained with some variant of gradient descent, the same concepts apply here. To track overfitting and underfitting across epochs we need scores on a validation set, not just the training set, so plotting only the training loss as above is not enough. Chapter 4 used accuracy to explain over/underfitting; in this chapter we'll use the loss instead.

To compute the validation loss at every epoch, we can pass validation data to the fit() method of a Keras model: supply the validation inputs and targets as a tuple through the validation_data parameter, like this.

```
model = model_fn()
model.compile(optimizer='rmsprop', loss='sparse_categorical_crossentropy', metrics=['accuracy'])

history = model.fit(train_scaled, train_target, epochs=20, verbose=0,
                    validation_data=(val_scaled, val_target))
```

Let's check which keys the returned history.history dictionary contains.
```
print(history.history.keys())
```

The validation loss is stored under val_loss and the validation accuracy under val_accuracy. To investigate overfitting and underfitting, let's plot the training and validation losses on one graph and compare them.

```
plt.plot(history.history['loss'])
plt.plot(history.history['val_loss'])
plt.xlabel('epoch')
plt.ylabel('loss')
plt.legend(['train', 'val'])
plt.show()
```

The validation loss decreases at first, then starts rising again from the fifth epoch, while the training loss keeps falling steadily - a classic overfitting pattern. If we can push the point where the validation loss starts rising further back, the validation loss will drop and the validation accuracy will improve as well.

Instead of the regularization techniques from Chapter 3, we'll cover regularization methods specific to neural networks in the next section. For now, let's see whether adjusting the optimizer hyperparameter can ease the overfitting: we'll train with Adam instead of the default RMSprop.

```
model = model_fn()
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'])

history = model.fit(train_scaled, train_target, epochs=20, verbose=0,
                    validation_data=(val_scaled, val_target))

plt.plot(history.history['loss'])
plt.plot(history.history['val_loss'])
plt.xlabel('epoch')
plt.ylabel('loss')
plt.legend(['train', 'val'])
plt.show()
```

Overfitting is much reduced. The validation loss curve still fluctuates, but the overall downward trend continues through about the tenth epoch, which suggests the Adam optimizer suits this dataset well.

### - Dropout

Dropout was introduced by Geoffrey Hinton, often called the father of deep learning. It curbs overfitting by randomly switching off some of a layer's neurons during training. Which neurons are dropped is random, and how many to drop is another hyperparameter we have to choose.

Why does dropout prevent overfitting? When some neurons of the previous layer are randomly switched off, the network cannot depend too heavily on any particular neuron and has to pay attention to all of its inputs. Since any neuron's output may be missing, the network learns to make more stable predictions.

Keras provides dropout through the Dropout class in the keras.layers package. Placed after a layer, it randomly zeroes that layer's outputs. Dropout is used like a layer, but it has no trainable parameters. Let's pass a Dropout object to the model_fn() function we defined earlier to add it as a layer; here we'll drop about 30% of the neurons.

```
model = model_fn(keras.layers.Dropout(0.3))

model.summary()
```

As the output shows, the dropout layer added after the hidden layer has no trainable parameters, and its input and output sizes are identical: it zeroes some neuron outputs but doesn't change the shape of the output array.

Of course, dropout must not be applied when evaluating or predicting after training - correct predictions need all trained neurons. So do we have to remove the layer again after training? Conveniently, TensorFlow and Keras automatically disable dropout when the model is used for evaluation and prediction, so we can compute validation scores without worrying about it.
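The reason Keras can skip dropout at inference time is that training-time dropout is scaled so the expected activation stays unchanged ("inverted dropout"). A hedged NumPy sketch of the idea (our own toy function, not Keras internals):

```python
import numpy as np

def dropout_train(x, rate, rng):
    """Zero out about `rate` of the activations, scaling survivors by 1/(1-rate)."""
    keep_prob = 1.0 - rate
    mask = (rng.random(x.shape) < keep_prob) / keep_prob  # 0 or 1/keep_prob
    return x * mask

rng = np.random.default_rng(0)
x = np.ones(100_000)
y = dropout_train(x, rate=0.3, rng=rng)

print((y == 0).mean())  # ~0.3 of units dropped
print(y.mean())         # ~1.0: expectation preserved
```

Because the survivors are scaled up during training, the expected output matches the no-dropout output, so inference can simply use all neurons with no rescaling - which is exactly what Keras does for us.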
As before, let's plot and compare the training and validation losses.

```
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'])

history = model.fit(train_scaled, train_target, epochs=20, verbose=0,
                    validation_data=(val_scaled, val_target))

plt.plot(history.history['loss'])
plt.plot(history.history['val_loss'])
plt.xlabel('epoch')
plt.ylabel('loss')
plt.legend(['train', 'val'])
plt.show()
```

Overfitting is clearly reduced. The validation loss stops decreasing at around the tenth epoch but holds roughly steady rather than rising sharply. Since this model trained for 20 epochs, it still ended up somewhat overfit, so to get a model that isn't overfit we should retrain with the epoch count set to 10.

### - Saving and restoring models

Let's set the epoch count back to 10, retrain the model, and then save it.

```
model = model_fn(keras.layers.Dropout(0.3))
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'])

history = model.fit(train_scaled, train_target, epochs=10, verbose=0,
                    validation_data=(val_scaled, val_target))
```

Keras models provide a save_weights() method that saves the trained model's parameters. By default it saves in TensorFlow's checkpoint format, but if the file extension is '.h5' it uses the HDF5 format.

```
model.save_weights('model-weights.h5')
```

There is also a save() method that saves the model structure and parameters together. By default it uses TensorFlow's SavedModel format, but again, with a '.h5' extension it saves in HDF5 format.

```
model.save('model-whole.h5')
```

Now let's check that these two files were created properly, then build a fresh model and restore the saved weights into it.

```
!ls -al *.h5

model = model_fn(keras.layers.Dropout(0.3))
model.load_weights('model-weights.h5')  # restore the parameters saved above
```

Let's check this model's validation accuracy. Unlike scikit-learn, the predict() method in Keras returns the probabilities of all 10 classes for each sample, because Fashion MNIST is a multi-class classification problem.

```
import numpy as np

val_labels = np.argmax(model.predict(val_scaled), axis=-1)
print(np.mean(val_labels == val_target))
```

We used NumPy's argmax() function to pick the largest value from the predict() output. This function returns the index of the largest value in an array - for example, it returns 0 if the first element is the largest. Conveniently, our target values also start at 0, which makes the comparison straightforward.

### - Callbacks

A callback is an object that performs some task during training; callbacks live in the keras.callbacks package and are passed as a list to the callbacks parameter of the fit() method. The ModelCheckpoint callback we'll use here saves, by default, the model that achieves the best validation score. Let's apply it with the save file name set to 'best-model.h5'.
```
model = model_fn(keras.layers.Dropout(0.3))
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'])

checkpoint_cb = keras.callbacks.ModelCheckpoint('best-model.h5', save_best_only=True)
model.fit(train_scaled, train_target, epochs=20, verbose=0, validation_data=(val_scaled, val_target), callbacks=[checkpoint_cb])
```

Building the model is the same as before. We create a checkpoint_cb object of the ModelCheckpoint class and pass it, wrapped in a list, to the callbacks parameter of fit().

In fact, once the validation score starts getting worse, overfitting only grows from there, so there is no need to keep training. Stopping at that point saves compute resources and time. Stopping training before overfitting sets in like this is called early stopping, and it is widely used in deep learning. Early stopping limits the number of training epochs, but because it keeps the model from overfitting, it can also be viewed as a form of regularization.

Keras provides the EarlyStopping callback for early stopping. Its patience parameter specifies how many epochs to wait while the validation score fails to improve. Used together with the ModelCheckpoint callback, EarlyStopping saves the model with the lowest validation loss to a file and stops training when the validation loss starts rising again. After stopping, it also restores the model parameters to the best ones.

```
model = model_fn(keras.layers.Dropout(0.3))
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'])

checkpoint_cb = keras.callbacks.ModelCheckpoint('best-model.h5', save_best_only=True)
early_stopping_cb = keras.callbacks.EarlyStopping(patience=2, restore_best_weights=True)

history = model.fit(train_scaled, train_target, epochs=20, verbose=0, validation_data=(val_scaled, val_target), callbacks=[checkpoint_cb, early_stopping_cb])
```

We passed the two callbacks as a list to the callbacks parameter of fit(). After training, the stopped_epoch attribute of the early_stopping_cb object tells us at which epoch training stopped.

```
print(early_stopping_cb.stopped_epoch)
```

Since epoch numbering starts at 0, a value of 12 means training stopped at the thirteenth epoch. Because patience was set to 2, the best model should be the one from the eleventh epoch.

```
plt.plot(history.history['loss'])
plt.plot(history.history['val_loss'])
plt.xlabel('epoch')
plt.ylabel('loss')
plt.legend(['train', 'val'])

plt.savefig('7_3-07', dpi=300)
plt.show()
```

As the graph shows, the lowest loss occurred at the eleventh epoch and training stopped at the thirteenth. With early stopping in place, you can safely set a large number of epochs.
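The patience bookkeeping described above can be sketched in plain Python. This is an illustration of the idea only; `early_stop_epoch` is a hypothetical helper, not the EarlyStopping implementation:

```python
def early_stop_epoch(val_losses, patience=2):
    """Return (stopped_epoch, best_epoch) for a sequence of per-epoch
    validation losses, stopping once `patience` epochs pass without a
    new minimum (epochs are 0-indexed, as in Keras)."""
    best_epoch = 0
    best_loss = float('inf')
    for epoch, loss in enumerate(val_losses):
        if loss < best_loss:
            best_loss = loss
            best_epoch = epoch
        elif epoch - best_epoch >= patience:
            return epoch, best_epoch  # stop: no improvement for `patience` epochs
    return len(val_losses) - 1, best_epoch

# Loss falls through epoch 10 (0-indexed), then rises: training stops at epoch 12
losses = [1.0 - 0.05 * e for e in range(11)] + [0.6, 0.7, 0.8]
print(early_stop_epoch(losses, patience=2))  # (12, 10)
```

This mirrors the numbers above: a stopped_epoch of 12 is the thirteenth epoch, and the best weights come from epoch 10, the eleventh.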
Finally, let's evaluate the early-stopped model on the validation set.

```
model.evaluate(val_scaled, val_target)
```

Source: 혼자 공부하는 머신러닝 + 딥러닝
## The Basic Idea of Machine-learning

Imagine a monkey drawing on a canvas (say, of `128 * 128` pixels). What's the probability that it draws a human face? Almost none, isn't it? This implies that

* the manifold of human faces embedded in $\mathbb{R}^{128 \times 128}$ has a relatively much smaller dimension.
* Moreover, the manifold is sparse. To see this, imagine you modify the background of a painting with a human face in the foreground: the points in $\mathbb{R}^{128 \times 128}$ before and after the modification are generally far from each other.

Thus, the task of machine-learning is to find this low-dimensional sparse manifold, map the manifold to a lower-dimensional compact space, and map elements there back to generate real-world objects, like paintings. We call the real-world object the "observable", and the low-dimensional sparse manifold the "latent" space.

This serves both data compression and data abstraction. In fact, these are two aspects of one thing: the probability distribution of the data (which we will discuss in the next topic).

## Auto-encoder

### Conceptions

This basic idea naturally leads to the "auto-encoder", which has two parts:

1. Encoder: mapping the observable to the latent.
2. Decoder: mapping the latent to the observable.

Let $X$ be the space of observables, and $Z$ the latent space. Let $f: X \mapsto Z$ denote the encoder, and $g: Z \mapsto X$ the decoder. Then, for $\forall x \in X$, we expect

\begin{equation} g \circ f(x) \approx x. \end{equation}

To characterize this approximation numerically, let $d_{\text{obs}}$ be some pre-defined distance on the space of observables; we can then define the loss

\begin{equation} \mathcal{L}_{\text{recon}} = \frac{1}{|D|} \sum_{x \in D} d_{\text{obs}} \left(x, g \circ f (x) \right). \end{equation}

We call this the "reconstruction" loss, since $g \circ f (x)$ is a reconstruction of $x$.
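As a toy numerical illustration of $\mathcal{L}_{\text{recon}}$, here is a hypothetical linear encoder/decoder with squared Euclidean distance playing the role of $d_{\text{obs}}$ (all names and shapes chosen only for concreteness):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear encoder/decoder: observable space X = R^4, latent space Z = R^2.
W_enc = rng.normal(size=(4, 2))
W_dec = rng.normal(size=(2, 4))

def f(x):
    """Encoder: observable -> latent."""
    return x @ W_enc

def g(z):
    """Decoder: latent -> observable."""
    return z @ W_dec

# Reconstruction loss averaged over a small dataset D, with squared
# Euclidean distance as d_obs.
D = rng.normal(size=(8, 4))
recon = g(f(D))
loss_recon = np.mean(np.sum((D - recon) ** 2, axis=1))
print(loss_recon)  # non-negative; 0 only if g(f(x)) reconstructs every x exactly
```

Training an autoencoder amounts to adjusting the parameters of $f$ and $g$ (here, the two matrices) to push this number toward zero.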
To ensure the compactness of the latent space, an additional regularizer, built from some pre-defined distance $d_{\text{lat}}$ in the latent space, is added to the reconstruction loss. Thus, the total loss is

\begin{equation} \mathcal{L} = \frac{1}{|D|} \sum_{x \in D} \left[ d_{\text{obs}} \left(x, g \circ f (x) \right) + d_{\text{lat}} \left( f(x), 0 \right) \right]. \end{equation}

The task is thus to find the functions $f$ and $g$ that minimize the total loss. This utilizes the universality property of neural networks.

### Reference:

1. [Wikipedia](https://en.wikipedia.org/wiki/Autoencoder).

## Implementation

```
%matplotlib inline

from IPython.display import display
import matplotlib.pyplot as plt
from tqdm import tqdm
from PIL import Image
import numpy as np
import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data

data_path = '../../dat/MNIST/'
mnist = input_data.read_data_sets(
    data_path,
    one_hot=True,
    source_url='http://yann.lecun.com/exdb/mnist/')

def get_encoder(latent_dim, hidden_layers):
    def encoder(observable, name='encoder', reuse=None):
        with tf.variable_scope(name, reuse=reuse):
            hidden = observable
            for hidden_layer in hidden_layers:
                hidden = tf.layers.dense(hidden, hidden_layer, activation=tf.nn.relu)
            latent = tf.layers.dense(hidden, latent_dim, activation=None)
        return latent
    return encoder

def get_decoder(observable_dim, hidden_layers):
    def decoder(latent, name='decoder', reuse=None):
        with tf.variable_scope(name, reuse=reuse):
            hidden = latent
            for hidden_layer in hidden_layers:
                hidden = tf.layers.dense(hidden, hidden_layer, activation=tf.nn.relu)
            reconstructed = tf.layers.dense(hidden, observable_dim, activation=tf.nn.sigmoid)
        return reconstructed
    return decoder

def get_loss(observable, encoder, decoder, regularizer=None, reuse=None):
    if regularizer is None:
        regularizer = lambda latent: 0.0
    with tf.name_scope('loss'):
        # shape: [batch_size, latent_dim]
        latent = encoder(observable, reuse=reuse)
        # shape: [batch_size, observable_dim]
        reconstructed = 
decoder(latent, reuse=reuse)
        # shape: [batch_size]
        squared_errors = tf.reduce_sum(
            (reconstructed - observable) ** 2, axis=1)
        mean_square_error = tf.reduce_mean(squared_errors)
        return mean_square_error + regularizer(latent)

latent_dim = 64

encoder = get_encoder(latent_dim=latent_dim, hidden_layers=[512, 256, 128])
decoder = get_decoder(observable_dim=28*28, hidden_layers=[128, 256, 512])

observable = tf.placeholder(shape=[None, 28*28], dtype='float32', name='observable')
latent_samples = tf.placeholder(shape=[None, latent_dim], dtype='float32', name='latent_samples')
generated = decoder(latent_samples, reuse=tf.AUTO_REUSE)

def regularizer(latent, name='regularizer'):
    with tf.name_scope(name):
        distances = tf.reduce_sum(latent ** 2, axis=1)
        return tf.reduce_mean(distances)

loss = get_loss(observable, encoder, decoder, regularizer=regularizer, reuse=tf.AUTO_REUSE)

optimizer = tf.train.AdamOptimizer(epsilon=1e-3)
train_op = optimizer.minimize(loss)

sess = tf.Session()
sess.run(tf.global_variables_initializer())

loss_vals = []
for i in tqdm(range(100000)):
    X, y = mnist.train.next_batch(batch_size=128)
    _, loss_val = sess.run([train_op, loss], {observable: X})
    if np.isnan(loss_val):
        raise ValueError('Loss has been NaN.')
    loss_vals.append(loss_val)

print('Final loss:', np.mean(loss_vals[-100:]))

plt.plot(loss_vals)
plt.xlabel('steps')
plt.ylabel('loss')
plt.show()

def get_image(array):
    """
    Args:
        array: Numpy array with shape `[28*28]`.

    Returns:
        An image.
    """
    array = 255 * array
    array = array.reshape([28, 28])
    array = array.astype(np.uint8)
    return Image.fromarray(array)

latent_sample_vals = np.random.normal(size=[128, latent_dim])
generated_vals = sess.run(generated, {latent_samples: latent_sample_vals})

# Display the results
n_display = 5
for i in range(n_display):
    print('Generated:')
    display(get_image(generated_vals[i]))
    print()
```
# Monodepth Estimation with OpenVINO This tutorial demonstrates Monocular Depth Estimation with MidasNet in OpenVINO. Model information: https://docs.openvinotoolkit.org/latest/omz_models_model_midasnet.html ![monodepth](https://user-images.githubusercontent.com/36741649/127173017-a0bbcf75-db24-4d2c-81b9-616e04ab7cd9.gif) ### What is Monodepth? Monocular Depth Estimation is the task of estimating scene depth using a single image. It has many potential applications in robotics, 3D reconstruction, medical imaging and autonomous systems. For this demo, we use a neural network model called [MiDaS](https://github.com/intel-isl/MiDaS) which was developed by the [Embodied AI Foundation](https://www.embodiedaifoundation.org/). Check out the research paper below to learn more. R. Ranftl, K. Lasinger, D. Hafner, K. Schindler and V. Koltun, ["Towards Robust Monocular Depth Estimation: Mixing Datasets for Zero-shot Cross-dataset Transfer,"](https://ieeexplore.ieee.org/document/9178977) in IEEE Transactions on Pattern Analysis and Machine Intelligence, doi: 10.1109/TPAMI.2020.3019967. ## Preparation ### Imports ``` import sys import time from pathlib import Path import cv2 import matplotlib.cm import matplotlib.pyplot as plt import numpy as np from IPython.display import ( HTML, FileLink, Pretty, ProgressBar, Video, clear_output, display, ) from openvino.inference_engine import IECore sys.path.append("../utils") from notebook_utils import load_image ``` ### Settings ``` DEVICE = "CPU" MODEL_FILE = "model/MiDaS_small.xml" model_xml_path = Path(MODEL_FILE) ``` ## Functions ``` def normalize_minmax(data): """Normalizes the values in `data` between 0 and 1""" return (data - data.min()) / (data.max() - data.min()) def convert_result_to_image(result, colormap="viridis"): """ Convert network result of floating point numbers to an RGB image with integer values from 0-255 by applying a colormap. 
`result` is expected to be a single network result in 1,H,W shape `colormap` is a matplotlib colormap. See https://matplotlib.org/stable/tutorials/colors/colormaps.html """ cmap = matplotlib.cm.get_cmap(colormap) result = result.squeeze(0) result = normalize_minmax(result) result = cmap(result)[:, :, :3] * 255 result = result.astype(np.uint8) return result def to_rgb(image_data) -> np.ndarray: """ Convert image_data from BGR to RGB """ return cv2.cvtColor(image_data, cv2.COLOR_BGR2RGB) ``` ## Load the Model Load the model in Inference Engine with `ie.read_network` and load it to the specified device with `ie.load_network`. Get input and output keys and the expected input shape for the model. ``` ie = IECore() net = ie.read_network(model=model_xml_path, weights=model_xml_path.with_suffix(".bin")) exec_net = ie.load_network(network=net, device_name=DEVICE) input_key = list(exec_net.input_info)[0] output_key = list(exec_net.outputs.keys())[0] network_input_shape = exec_net.input_info[input_key].tensor_desc.dims network_image_height, network_image_width = network_input_shape[2:] ``` ## Monodepth on Image ### Load, resize and reshape input image The input image is read with OpenCV, resized to network input size, and reshaped to (N,C,H,W) (N=number of images, C=number of channels, H=height, W=width). 
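The reshape step can be previewed on a dummy array (the shapes here are illustrative stand-ins; the real notebook resizes to the network's own input shape):

```python
import numpy as np

# A dummy HWC image standing in for the loaded photo (OpenCV loads images
# as height x width x channels).
image_hwc = np.zeros((480, 640, 3), dtype=np.float32)

# HWC -> CHW, then add a batch axis in front: (N, C, H, W)
image_nchw = np.expand_dims(np.transpose(image_hwc, (2, 0, 1)), 0)
print(image_nchw.shape)  # (1, 3, 480, 640)
```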
``` IMAGE_FILE = "data/coco_bike.jpg" image = load_image(path=IMAGE_FILE) # resize to input shape for network resized_image = cv2.resize(src=image, dsize=(network_image_height, network_image_width)) # reshape image to network input shape NCHW input_image = np.expand_dims(np.transpose(resized_image, (2, 0, 1)), 0) ``` ### Do inference on image Do the inference, convert the result to an image, and resize it to the original image shape ``` result = exec_net.infer(inputs={input_key: input_image})[output_key] # convert network result of disparity map to an image that shows # distance as colors result_image = convert_result_to_image(result=result) # resize back to original image shape. cv2.resize expects shape # in (width, height), [::-1] reverses the (height, width) shape to match this result_image = cv2.resize(result_image, image.shape[:2][::-1]) ``` ### Display monodepth image ``` fig, ax = plt.subplots(1, 2, figsize=(20, 15)) ax[0].imshow(to_rgb(image)) ax[1].imshow(result_image); ``` ## Monodepth on Video By default, only the first 100 frames are processed, in order to quickly check that everything works. Change NUM_FRAMES in the cell below to modify this. Set NUM_FRAMES to 0 to process the whole video. ### Video Settings ``` # Video source: https://www.youtube.com/watch?v=fu1xcQdJRws (Public Domain) VIDEO_FILE = "data/Coco Walking in Berkeley.mp4" # Number of seconds of input video to process. Set to 0 to process # the full video. NUM_SECONDS = 4 # Set ADVANCE_FRAMES to 1 to process every frame from the input video # Set ADVANCE_FRAMES to 2 to process every second frame. This reduces # the time it takes to process the video ADVANCE_FRAMES = 2 # Set SCALE_OUTPUT to reduce the size of the result video # If SCALE_OUTPUT is 0.5, the width and height of the result video # will be half the width and height of the input video SCALE_OUTPUT = 0.5 # The format to use for video encoding. vp09 is slow, # but it works on most systems. 
# Try the THEO encoding if you have FFMPEG installed. # FOURCC = cv2.VideoWriter_fourcc(*"THEO") FOURCC = cv2.VideoWriter_fourcc(*"vp09") # Create Path objects for the input video and the resulting video output_directory = Path("output") output_directory.mkdir(exist_ok=True) result_video_path = output_directory / f"{Path(VIDEO_FILE).stem}_monodepth.mp4" ``` ### Load Video Load video from `VIDEO_FILE`, set in the *Video Settings* cell above. Open the video to read the frame width and height and fps, and compute values for these properties for the monodepth video. ``` cap = cv2.VideoCapture(str(VIDEO_FILE)) ret, image = cap.read() if not ret: raise ValueError(f"The video at {VIDEO_FILE} cannot be read.") input_fps = cap.get(cv2.CAP_PROP_FPS) input_video_frame_height, input_video_frame_width = image.shape[:2] target_fps = input_fps / ADVANCE_FRAMES target_frame_height = int(input_video_frame_height * SCALE_OUTPUT) target_frame_width = int(input_video_frame_width * SCALE_OUTPUT) cap.release() print( f"The input video has a frame width of {input_video_frame_width}, " f"frame height of {input_video_frame_height} and runs at {input_fps:.2f} fps" ) print( "The monodepth video will be scaled with a factor " f"{SCALE_OUTPUT}, have width {target_frame_width}, " f" height {target_frame_height}, and run at {target_fps:.2f} fps" ) ``` ### Do Inference on a Video and Create Monodepth Video ``` # Initialize variables input_video_frame_nr = 0 start_time = time.perf_counter() total_inference_duration = 0 # Open input video cap = cv2.VideoCapture(str(VIDEO_FILE)) # Create result video out_video = cv2.VideoWriter( str(result_video_path), FOURCC, target_fps, (target_frame_width * 2, target_frame_height), ) num_frames = int(NUM_SECONDS * input_fps) total_frames = cap.get(cv2.CAP_PROP_FRAME_COUNT) if num_frames == 0 else num_frames progress_bar = ProgressBar(total=total_frames) progress_bar.display() try: while cap.isOpened(): ret, image = cap.read() if not ret: cap.release() break if 
input_video_frame_nr >= total_frames: break # Only process every second frame # Prepare frame for inference # resize to input shape for network resized_image = cv2.resize(src=image, dsize=(network_image_height, network_image_width)) # reshape image to network input shape NCHW input_image = np.expand_dims(np.transpose(resized_image, (2, 0, 1)), 0) # Do inference inference_start_time = time.perf_counter() result = exec_net.infer(inputs={input_key: input_image})[output_key] inference_stop_time = time.perf_counter() inference_duration = inference_stop_time - inference_start_time total_inference_duration += inference_duration if input_video_frame_nr % (10 * ADVANCE_FRAMES) == 0: clear_output(wait=True) progress_bar.display() # input_video_frame_nr // ADVANCE_FRAMES gives the number of # frames that have been processed by the network display( Pretty( f"Processed frame {input_video_frame_nr // ADVANCE_FRAMES}" f"/{total_frames // ADVANCE_FRAMES}. " f"Inference time: {inference_duration:.2f} seconds " f"({1/inference_duration:.2f} FPS)" ) ) # Transform network result to RGB image result_frame = to_rgb(convert_result_to_image(result)) # Resize image and result to target frame shape result_frame = cv2.resize(result_frame, (target_frame_width, target_frame_height)) image = cv2.resize(image, (target_frame_width, target_frame_height)) # Put image and result side by side stacked_frame = np.hstack((image, result_frame)) # Save frame to video out_video.write(stacked_frame) input_video_frame_nr = input_video_frame_nr + ADVANCE_FRAMES cap.set(1, input_video_frame_nr) progress_bar.progress = input_video_frame_nr progress_bar.update() except KeyboardInterrupt: print("Processing interrupted.") finally: clear_output() processed_frames = num_frames // ADVANCE_FRAMES out_video.release() cap.release() end_time = time.perf_counter() duration = end_time - start_time print( f"Processed {processed_frames} frames in {duration:.2f} seconds. 
"
    f"Total FPS (including video processing): {processed_frames/duration:.2f}. "
    f"Inference FPS: {processed_frames/total_inference_duration:.2f} "
)
print(f"Monodepth Video saved to '{str(result_video_path)}'.")
```

### Display Monodepth Video

```
video = Video(result_video_path, width=800, embed=True)
if not result_video_path.exists():
    plt.imshow(stacked_frame)
    raise ValueError("OpenCV was unable to write the video file. Showing one video frame.")
else:
    print(f"Showing monodepth video saved at\n{result_video_path.resolve()}")
    print(
        "If you cannot see the video in your browser, please click on the "
        "following link to download the video "
    )
    video_link = FileLink(result_video_path)
    video_link.html_link_str = "<a href='%s' download>%s</a>"
    display(HTML(video_link._repr_html_()))

display(video)
```
# Lambda School Data Science - Logistic Regression Logistic regression is the baseline for classification models, as well as a handy way to predict probabilities (since those too live in the unit interval). While relatively simple, it is also the foundation for more sophisticated classification techniques such as neural networks (many of which can effectively be thought of as networks of logistic models). ## Lecture - Where Linear goes Wrong ### Return of the Titanic 🚢 You've likely already explored the rich dataset that is the Titanic - let's use regression and try to predict survival with it. The data is [available from Kaggle](https://www.kaggle.com/c/titanic/data), so we'll also play a bit with [the Kaggle API](https://github.com/Kaggle/kaggle-api). ### Get data, option 1: Kaggle API #### Sign up for Kaggle and get an API token 1. [Sign up for a Kaggle account](https://www.kaggle.com/), if you don’t already have one. 2. [Follow these instructions](https://github.com/Kaggle/kaggle-api#api-credentials) to create a Kaggle “API Token” and download your `kaggle.json` file. If you are using Anaconda, put the file in the directory specified in the instructions. _This will enable you to download data directly from Kaggle. If you run into problems, don’t worry — I’ll give you an easy alternative way to download today’s data, so you can still follow along with the lecture hands-on. And then we’ll help you through the Kaggle process after the lecture._ #### Put `kaggle.json` in the correct location - ***If you're using Anaconda,*** put the file in the directory specified in the [instructions](https://github.com/Kaggle/kaggle-api#api-credentials). 
- ***If you're using Google Colab,*** upload the file to your Google Drive, and run this cell:

```
from google.colab import drive
drive.mount('/content/drive')
%env KAGGLE_CONFIG_DIR=/content/drive/My Drive/
```

#### Install the Kaggle API package and use it to get the data

You also have to join the Titanic competition to have access to the data

```
!pip install kaggle
!kaggle competitions download -c titanic
```

### Get data, option 2: Download from the competition page

1. [Sign up for a Kaggle account](https://www.kaggle.com/), if you don’t already have one.
2. [Go to the Titanic competition page](https://www.kaggle.com/c/titanic) to download the [data](https://www.kaggle.com/c/titanic/data).

### Get data, option 3: Use Seaborn

```
import seaborn as sns
train = sns.load_dataset('titanic')
```

But Seaborn's version of the Titanic dataset is not identical to Kaggle's version, as we'll see during this lesson!

### Read data

```
import pandas as pd
train = pd.read_csv('train.csv')
test = pd.read_csv('test.csv')
train.shape, test.shape
```

Notice that `train.csv` has one more column than `test.csv`: the target, `Survived`. Kaggle provides test features, but not test labels. Instead, you submit your test predictions to Kaggle to get your test scores. Why? This is model validation best practice, makes competitions fair, and helps us learn about over- and under-fitting.

```
train.sample(n=5)
test.sample(n=5)
```

Do some data exploration. About 62% of passengers did not survive.

```
target = 'Survived'
train[target].value_counts(normalize=True)
```

Describe the numeric columns

```
train.describe(include='number')
```

Describe the non-numeric columns

```
train.describe(exclude='number')
```

### How would we try to do this with linear regression?

We choose a few numeric features, split the data into X and y, [impute missing values](https://scikit-learn.org/stable/modules/impute.html), and fit a Linear Regression model on the train set.
```
from sklearn.impute import SimpleImputer
from sklearn.linear_model import LinearRegression

features = ['Pclass', 'Age', 'Fare']
target = 'Survived'

X_train = train[features]
y_train = train[target]
X_test = test[features]

imputer = SimpleImputer()
X_train_imputed = imputer.fit_transform(X_train)
X_test_imputed = imputer.transform(X_test)

lin_reg = LinearRegression()
lin_reg.fit(X_train_imputed, y_train)
```

Let's consider a test case. What does our Linear Regression predict for a 1st class, 5 year-old, with a fare of 500? 119% probability of survival.

```
import numpy as np
test_case = np.array([[1, 5, 500]])  # Rich 5-year old in first class
lin_reg.predict(test_case)
```

Based on the Linear Regression's intercept and coefficients, it will predict probabilities greater than 100%, or less than 0%, given high enough / low enough values for the features.

```
print('Intercept', lin_reg.intercept_)
coefficients = pd.Series(lin_reg.coef_, X_train.columns)
print(coefficients.to_string())
```

### How would we do this with Logistic Regression?

The scikit-learn API is consistent, so the code is similar.

We instantiate our model (here with `LogisticRegression()` instead of `LinearRegression()`)

We use the same method to fit the model on the training data: `.fit(X_train_imputed, y_train)`

We use the same method to make a prediction for our test case: `.predict(test_case)`. But this returns different results. Regressors return continuous values, but classifiers return discrete predictions of the class label. In this binary classification problem, our discrete class labels are `0` (did not survive) or `1` (did survive).

Classifiers also have a `.predict_proba` method, which returns predicted probabilities for each class. The probabilities sum to 1. We predict ~3% probability that our test case did not survive, and 97% probability that our test case did survive.
This result is what we want and expect for our test case: to predict survival, with high probability, but less than 100%.

```
from sklearn.linear_model import LogisticRegression

log_reg = LogisticRegression(solver='lbfgs')
log_reg.fit(X_train_imputed, y_train)
print('Prediction for rich 5 year old:', log_reg.predict(test_case))
print('Predicted probabilities for rich 5 year old:', log_reg.predict_proba(test_case))
```

Logistic Regression produces predicted probabilities in the range 0 to 1. By default, scikit-learn makes a discrete prediction by returning whichever class had the highest predicted probability for that observation. In the case of binary classification, this is equivalent to using a threshold of 0.5. However, we could choose a different threshold, for different trade-offs between false positives versus false negatives.

```
threshold = 0.5
probabilities = log_reg.predict_proba(X_test_imputed)[:,1]
manual_predictions = (probabilities > threshold).astype(int)
direct_predictions = log_reg.predict(X_test_imputed)
all(manual_predictions == direct_predictions)
```

### How accurate is the Logistic Regression?

Scikit-learn estimators provide a convenient method, `.score`. It uses the X features to generate predictions, compares the predictions to the y ground truth labels, and returns the score. For regressors, `.score` returns R^2. For classifiers, `.score` returns accuracy.

Our Logistic Regression model has 70% training accuracy. (This is higher than the 62% accuracy we would get with a baseline that predicts every passenger does not survive.)

```
score = log_reg.score(X_train_imputed, y_train)
print('Train Accuracy Score', score)
```

Accuracy is just the number of correct predictions divided by the total number of predictions.
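That ratio is easy to verify by hand on toy labels (made-up values, not the Titanic data):

```python
y_true = [0, 1, 1, 0, 1]  # ground truth labels (toy example)
y_pred = [0, 1, 0, 0, 1]  # model predictions (toy example)

# Accuracy = correct predictions / total predictions
correct = sum(t == p for t, p in zip(y_true, y_pred))
accuracy = correct / len(y_true)
print(accuracy)  # 0.8
```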
For example, we can look at our first five predictions: ``` y_pred = log_reg.predict(X_train_imputed) y_pred[:5] ``` And compare to the ground truth labels for these first five observations: ``` y_train[:5].values ``` We have four correct predictions, divided by five total predictions, for 80% accuracy. ``` correct_predictions = 4 total_predictions = 5 accuracy = correct_predictions / total_predictions print(accuracy) ``` scikit-learn's `accuracy_score` function works the same way and returns the same result. ``` from sklearn.metrics import accuracy_score accuracy_score(y_train[:5], y_pred[:5]) ``` We don't want to just score our model on the training data. We cannot calculate a test accuracy score ourselves in this notebook, because Kaggle does not provide test labels. We could split the train data into train and validation sets. However, we don't have many observations. (Fewer than 1,000.) As another alternative, we can use cross-validation: ``` from sklearn.model_selection import cross_val_score scores = cross_val_score(log_reg, X_train_imputed, y_train, cv=10) print('Cross-Validation Accuracy Scores', scores) ``` We can see a range of scores: ``` scores = pd.Series(scores) scores.min(), scores.mean(), scores.max() ``` To learn more about Cross-Validation, see these links: - https://scikit-learn.org/stable/modules/cross_validation.html - https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.cross_val_score.html - https://github.com/LambdaSchool/DS-Unit-2-Sprint-3-Classification-Validation/blob/master/module2-baselines-validation/model-validation-preread.md#what-is-cross-validation ### What's the equation for Logistic Regression? 
https://en.wikipedia.org/wiki/Logistic_function https://en.wikipedia.org/wiki/Logistic_regression#Probability_of_passing_an_exam_versus_hours_of_study ``` print('Intercept', log_reg.intercept_[0]) coefficients = pd.Series(log_reg.coef_[0], X_train.columns) print(coefficients.to_string()) # The logistic sigmoid "squishing" function, # implemented to work with numpy arrays def sigmoid(x): return 1 / (1 + np.e**(-x)) sigmoid(np.dot(log_reg.coef_, test_case.T) + log_reg.intercept_) ``` Or we can write the code with the `@` operator instead of numpy's dot product function ``` sigmoid(log_reg.coef_ @ test_case.T + log_reg.intercept_) ``` Either way, we get the same result as our scikit-learn Logistic Regression ``` log_reg.predict_proba(test_case) ``` ## Feature Engineering Get the [Category Encoder](http://contrib.scikit-learn.org/categorical-encoding/) library If you're running on Google Colab: ``` !pip install category_encoders ``` If you're running locally with Anaconda: ``` !conda install -c conda-forge category_encoders ``` #### Notice that Seaborn's version of the Titanic dataset has more features than Kaggle's version ``` import seaborn as sns sns_titanic = sns.load_dataset('titanic') print(sns_titanic.shape) sns_titanic.head() ``` #### We can make the `adult_male` and `alone` features, and we can extract features from `Name` ``` def make_features(X): X = X.copy() X['adult_male'] = (X['Sex'] == 'male') & (X['Age'] >= 16) X['alone'] = (X['SibSp'] == 0) & (X['Parch'] == 0) X['last_name'] = X['Name'].str.split(',').str[0] X['title'] = X['Name'].str.split(',').str[1].str.split('.').str[0] return X train = make_features(train) test = make_features(test) train.head() train['adult_male'].value_counts() train['alone'].value_counts() train['title'].value_counts() train.describe(include='number') train.describe(exclude='number') ``` ### Category Encoders! 
http://contrib.scikit-learn.org/categorical-encoding/onehot.html

End-to-end example

```
import category_encoders as ce

pd.set_option('display.max_columns', 1000)

features = ['Pclass', 'Age', 'Fare', 'Sex', 'Embarked', 'adult_male', 'alone', 'title']
target = 'Survived'

X_train = train[features]
X_test = test[features]
y_train = train[target]

encoder = ce.OneHotEncoder(use_cat_names=True)
imputer = SimpleImputer()
log_reg = LogisticRegression(solver='lbfgs', max_iter=1000)

X_train_encoded = encoder.fit_transform(X_train)
X_test_encoded = encoder.transform(X_test)

X_train_imputed = imputer.fit_transform(X_train_encoded)
X_test_imputed = imputer.transform(X_test_encoded)

scores = cross_val_score(log_reg, X_train_imputed, y_train, cv=10)
print('Cross-Validation Accuracy Scores', scores)
```

Here's what the one-hot encoded data looks like

```
X_train_encoded.sample(n=5)
```

The cross-validation accuracy scores improve with the additional features

```
%matplotlib inline
import matplotlib.pyplot as plt

log_reg.fit(X_train_imputed, y_train)
coefficients = pd.Series(log_reg.coef_[0], X_train_encoded.columns)
plt.figure(figsize=(10,10))
coefficients.sort_values().plot.barh(color='grey');
```

### Scaler

https://scikit-learn.org/stable/modules/preprocessing.html#scaling-features-to-a-range

End-to-end example

```
from sklearn.preprocessing import MinMaxScaler

encoder = ce.OneHotEncoder(use_cat_names=True)
imputer = SimpleImputer()
scaler = MinMaxScaler()
log_reg = LogisticRegression(solver='lbfgs', max_iter=1000)

X_train_encoded = encoder.fit_transform(X_train)
X_test_encoded = encoder.transform(X_test)

X_train_imputed = imputer.fit_transform(X_train_encoded)
X_test_imputed = imputer.transform(X_test_encoded)

X_train_scaled = scaler.fit_transform(X_train_imputed)
X_test_scaled = scaler.transform(X_test_imputed)

scores = cross_val_score(log_reg, X_train_scaled, y_train, cv=10)
print('Cross-Validation Accuracy Scores', scores)
```

Now all the features have
a min of 0 and a max of 1 ``` pd.DataFrame(X_train_scaled).describe() ``` The model coefficients change with scaling ``` log_reg.fit(X_train_scaled, y_train) coefficients = pd.Series(log_reg.coef_[0], X_train_encoded.columns) plt.figure(figsize=(10,10)) coefficients.sort_values().plot.barh(color='grey'); ``` ### Pipeline https://scikit-learn.org/stable/modules/compose.html#pipeline ``` from sklearn.pipeline import make_pipeline pipe = make_pipeline( ce.OneHotEncoder(use_cat_names=True), SimpleImputer(), MinMaxScaler(), LogisticRegression(solver='lbfgs', max_iter=1000) ) scores = cross_val_score(pipe, X_train, y_train, cv=10) print('Cross-Validation Accuracy Scores', scores) pipe.fit(X_train, y_train) y_pred = pipe.predict(X_test) submission = test[['PassengerId']].copy() submission['Survived'] = y_pred submission.to_csv('kaggle-submission-001.csv', index=False) ``` ## Assignment: real-world classification We're going to check out a larger dataset - the [FMA Free Music Archive data](https://github.com/mdeff/fma). It has a selection of CSVs with metadata and calculated audio features that you can load and try to use to classify genre of tracks. To get you started: ### Get and unzip the data #### Google Colab ``` !wget https://os.unil.cloud.switch.ch/fma/fma_metadata.zip !unzip fma_metadata.zip ``` #### Windows - Download the [zip file](https://os.unil.cloud.switch.ch/fma/fma_metadata.zip) - You may need to use [7zip](https://www.7-zip.org/download.html) to unzip it #### Mac - Download the [zip file](https://os.unil.cloud.switch.ch/fma/fma_metadata.zip) - You may need to use [p7zip](https://superuser.com/a/626731) to unzip it ### Look at first 4 lines of raw `tracks.csv` file ``` !head -n 4 fma_metadata/tracks.csv ``` ### Read with pandas https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.read_csv.html ``` tracks = pd.read_csv('fma_metadata/tracks.csv', header=[0,1], index_col=0) tracks.head() ``` ### More data prep Get value counts of the target. 
(The syntax is different because the header has two levels: it's a "MultiIndex.") The target has multiple classes, and many missing values. ``` tracks['track']['genre_top'].value_counts(normalize=True, dropna=False) ``` We can't do supervised learning where targets are missing. (In other words, we can't do supervised learning without supervision.) So, only keep observations where the target is not null. ``` target_not_null = tracks['track']['genre_top'].notnull() tracks = tracks[target_not_null] ``` Load `features.csv`: "common features extracted from the audio with [librosa](https://librosa.github.io/librosa/)" It has 3 levels of columns! ``` features = pd.read_csv('fma_metadata/features.csv', header=[0,1,2], index_col=0) features.head() ``` I want to [drop a level](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.MultiIndex.droplevel.html) here from the audio features dataframe, so it has the same number of levels (2) as the tracks metadata dataframe, so that I can better merge the two together. ``` features.columns = features.columns.droplevel(level=2) features.head() ``` Merge the metadata with the audio features, on track id (the index for both dataframes). ``` df = pd.merge(tracks, features, left_index=True, right_index=True) ``` And drop a level of columns again, because dealing with MultiIndex is hard ``` df.columns = df.columns.droplevel() ``` This is now a pretty big dataset. Almost 50,000 rows, over 500 columns, and over 200 megabytes in RAM. ``` print(df.shape) df.info() ``` ### Fit Logistic Regression!
``` from sklearn.model_selection import train_test_split y = df['genre_top'] X = df.select_dtypes('number').drop(columns=['longitude', 'latitude']) X_train, X_test, y_train, y_test = train_test_split(X, y, train_size=0.50, test_size=0.50, random_state=42, stratify=y) X_train.shape, X_test.shape, y_train.shape, y_test.shape model = LogisticRegression(solver='lbfgs', multi_class='auto') model.fit(X_train, y_train) ``` Accuracy is 37%, which sounds bad, BUT ... ``` model.score(X_test, y_test) ``` ... remember we have 16 classes, and the majority class (Rock) occurs 29% of the time, so at 37% the model actually beats both random guessing and the majority-class baseline for this problem ``` y.value_counts(normalize=True) ``` This dataset is bigger than many you've worked with so far, and while it should fit in Colab, it can take a while to run. That's part of the challenge! Your tasks: - Clean up the variable names in the dataframe - Use logistic regression to fit a model predicting (primary/top) genre - Inspect, iterate, and improve your model - Answer the following questions (written, ~paragraph each): - What are the best predictors of genre? - What information isn't very useful for predicting genre? - What surprised you the most about your results? *Important caveats*: - This is going to be difficult data to work with - don't let the perfect be the enemy of the good! - Be creative in cleaning it up - if the best way you know how to do it is download it locally and edit as a spreadsheet, that's OK! - If the data size becomes problematic, consider sampling/subsetting, or [downcasting numeric datatypes](https://www.dataquest.io/blog/pandas-big-data/). - You do not need perfect or complete results - just something plausible that runs, and that supports the reasoning in your written answers If you find that fitting a model to classify *all* genres isn't very good, it's totally OK to limit to the most frequent genres, or perhaps to try combining or clustering genres as a preprocessing step.
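A minimal sketch of that genre-combining idea, assuming the target column is named `genre_top` as above; the toy dataframe here is just a stand-in for the real FMA metadata, and `TOP_N` is an assumed tuning knob:

```python
import pandas as pd

# Toy stand-in for the FMA metadata (the real notebook's target is df['genre_top'])
df = pd.DataFrame({
    "genre_top": ["Rock", "Rock", "Electronic", "Rock",
                  "Electronic", "Folk", "Jazz", "Rock"],
    "feature_1": range(8),
})

TOP_N = 2  # keep only the N most frequent genres (assumed threshold)

# value_counts sorts by frequency, so nlargest gives the most common genres
top_genres = df["genre_top"].value_counts().nlargest(TOP_N).index
df_top = df[df["genre_top"].isin(top_genres)]

print(sorted(df_top["genre_top"].unique()))  # ['Electronic', 'Rock']
print(len(df_top))  # 6
```

The same two lines apply unchanged to the full dataset: restricting to the most frequent genres shrinks the label space and usually makes both training and interpretation easier.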
Even then, there will be limits to how good a model can be with just this metadata - if you really want to train an effective genre classifier, you'll have to involve the other data (see stretch goals). This is real data - there is no "one correct answer", so you can take this in a variety of directions. Just make sure to support your findings, and feel free to share them as well! This is meant to be practice for dealing with other "messy" data, a common task in data science. ## Resources and stretch goals - Check out the other .csv files from the FMA dataset, and see if you can join them or otherwise fit interesting models with them - [Logistic regression from scratch in numpy](https://blog.goodaudience.com/logistic-regression-from-scratch-in-numpy-5841c09e425f) - if you want to dig in a bit more to both the code and math (also takes a gradient descent approach, introducing the logistic loss function) - Create a visualization to show predictions of your model - ideally show a confidence interval based on error! - Check out and compare classification models from scikit-learn, such as [SVM](https://scikit-learn.org/stable/modules/svm.html#classification), [decision trees](https://scikit-learn.org/stable/modules/tree.html#classification), and [naive Bayes](https://scikit-learn.org/stable/modules/naive_bayes.html). The underlying math will vary significantly, but the API (how you write the code) and interpretation will actually be fairly similar. - Sign up for [Kaggle](https://kaggle.com), and find a competition to try logistic regression with - (Not logistic regression related) If you enjoyed the assignment, you may want to read up on [music informatics](https://en.wikipedia.org/wiki/Music_informatics), which is how those audio features were actually calculated. 
The FMA includes the actual raw audio, so (while this is more of a long-term project than a stretch goal, and won't fit in Colab) if you'd like, you can check those out and see what sort of deeper analysis you can do.
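For the classifier-comparison stretch goal, here is a hedged sketch of the "same API, different math" point: the synthetic data below is only a stand-in for the FMA features, and the models and settings are illustrative choices.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score

# Synthetic multiclass data standing in for the real audio features
X, y = make_classification(n_samples=500, n_features=20, n_informative=5,
                           n_classes=3, random_state=42)

# Different underlying math, identical fit/predict/score interface
for model in [LogisticRegression(solver='lbfgs', max_iter=1000),
              DecisionTreeClassifier(random_state=42)]:
    scores = cross_val_score(model, X, y, cv=5)
    print(type(model).__name__, scores.mean().round(3))
```

Because the API is shared, swapping models in an experiment like this is usually a one-line change, which is what makes the comparison exercise cheap to try.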
<h2> Import Libraries</h2> ``` %matplotlib inline import pandas as pd import numpy as np import matplotlib.pyplot as plt from sklearn.datasets import load_boston from sklearn.model_selection import train_test_split from sklearn.linear_model import LinearRegression ``` ## Load the Data The Boston house-price dataset is one of the datasets that ships with scikit-learn, so it does not require downloading any files from an external website. The code below loads the Boston dataset. ``` data = load_boston() df = pd.DataFrame(data.data, columns=data.feature_names) df['target'] = data.target df.head() ``` <h2> Remove Missing or Impute Values</h2> If you want to build models with your data, null values are (almost) never allowed. It is important to always see how many samples have missing values and for which columns. ``` # Look at the shape of the dataframe df.shape # There are no missing values in the dataset df.isnull().sum() ``` <h2> Arrange Data into Features Matrix and Target Vector </h2> What we are predicting is the continuous column "target", which is the median value of owner-occupied homes in $1000’s. ``` X = df.loc[:, ['RM', 'LSTAT', 'PTRATIO']] y = df.loc[:, 'target'] ``` ## Splitting Data into Training and Test Sets ``` # Original random state is 2 X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=2) ``` ## Train Test Split Visualization A relatively new feature of pandas is conditional formatting.
https://pandas.pydata.org/pandas-docs/stable/user_guide/style.html ``` X_train = pd.DataFrame(X_train, columns=['RM', 'LSTAT', 'PTRATIO']) X_test = pd.DataFrame(X_test, columns=['RM', 'LSTAT', 'PTRATIO']) X_train['split'] = 'train' X_test['split'] = 'test' X_train X_train['target'] = y_train X_test['target'] = y_test fullDF = pd.concat([X_train, X_test], axis = 0, ignore_index=False) fullDF.head(10) len(fullDF.index) len(np.unique(fullDF.index)) fullDFsplit = fullDF.copy() fullDF = fullDF.drop(columns = ['split']) def highlight_color(s, fullDFsplit): ''' Apply background colors that distinguish train/test rows and feature/target columns. ''' colorDF = s.copy() colorDF.loc[fullDFsplit['split'] == 'train', ['RM', 'LSTAT', 'PTRATIO']] = 'background-color: #40E0D0' colorDF.loc[fullDFsplit['split'] == 'test', ['RM', 'LSTAT', 'PTRATIO']] = 'background-color: #00FFFF' colorDF.loc[fullDFsplit['split'] == 'train', ['target']] = 'background-color: #FFD700' colorDF.loc[fullDFsplit['split'] == 'test', ['target']] = 'background-color: #FFFF00' return(colorDF) temp = fullDF.sort_index().loc[0:9,:].style.apply(lambda x: highlight_color(x,pd.DataFrame(fullDFsplit['split'])), axis = None) temp.set_properties(**{'border-color': 'black', 'border': '1px solid black'}) ``` <h3>Train test split key</h3> ``` # Train test split key temp = pd.DataFrame(data = [['X_train','X_test','y_train','y_test']]).T temp def highlight_mini(s): ''' Color the key cells to match the highlighted dataframe.
''' colorDF = s.copy() colorDF.loc[0, [0]] = 'background-color: #40E0D0' # train features colorDF.loc[1, [0]] = 'background-color: #00FFFF' # test features colorDF.loc[2, [0]] = 'background-color: #FFD700' # train target colorDF.loc[3, [0]] = 'background-color: #FFFF00' # test target return(colorDF) temp2 = temp.sort_index().style.apply(lambda x: highlight_mini(x), axis = None) temp2.set_properties(**{'border-color': 'black', 'border': '1px solid black', }) ``` After that I was lazy and used PowerPoint to make that graph.
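As a quick sketch of what `train_test_split` does under the defaults used above (no `test_size` given means scikit-learn holds out 25% of the rows; the toy arrays below are assumptions, not the Boston data):

```python
import numpy as np
from sklearn.model_selection import train_test_split

X = np.arange(100).reshape(50, 2)  # 50 toy samples, 2 features
y = np.arange(50)

# With no sizes specified, sklearn defaults to a 75/25 train/test split
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=2)

print(len(X_train), len(X_test))  # 37 13
```

Note that scikit-learn rounds the test-set size up, so 25% of 50 samples gives 13 test rows rather than 12, and the two index sets never overlap.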
# Impulse response equivalence In this notebook we explore the equivalence between the two-layer model and a two-timescale impulse response approach. ## Background Following [Geoffroy et al., 2013, Part 2](https://journals.ametsoc.org/doi/full/10.1175/JCLI-D-12-00196.1), with notation altered to match our implementation, the two-layer model with efficacy ($\epsilon$) and state-dependent climate feedback can be written as \begin{align} C \frac{dT}{dt} & = F - (\lambda_0 - a T) T - \epsilon \eta (T - T_D) \\ C_D \frac{dT_D}{dt} & = \eta (T - T_D) \end{align} If the state-dependent feedback factor, $a$, is non-zero, the two-layer model and impulse response approaches are not equivalent. However, if $a=0$, they become the same. Hereafter we assume $a=0$; however, this assumption should not be forgotten. In the case $a=0$, the two-layer model can be written as follows (adding an $\epsilon$ to the deep-ocean equation too, for simplicity later). \begin{align} C \frac{dT}{dt} & = F - \lambda_0 T - \epsilon \eta (T - T_D) \\ \epsilon C_D \frac{dT_D}{dt} & = \epsilon \eta (T - T_D) \end{align} In matrix notation we have \begin{align} \frac{d\mathbf{X}}{dt} = \mathbf{A} \mathbf{X} + \mathbf{B} \end{align} where $\mathbf{X} = \begin{pmatrix}T \\T_D\end{pmatrix}$, $\mathbf{A} = \begin{bmatrix} - \frac{\lambda_0 + \epsilon \eta}{C} & \frac{\epsilon \eta}{C} \\ \frac{\epsilon \eta}{\epsilon C_D} & -\frac{\epsilon \eta}{\epsilon C_D} \end{bmatrix}$ and $\mathbf{B} = \begin{pmatrix} \frac{F}{C} \\ 0 \end{pmatrix}$. As shown in [Geoffroy et al., 2013, Part 1](https://journals.ametsoc.org/doi/full/10.1175/JCLI-D-12-00195.1), $\mathbf{A}$ can be diagonalised, i.e. written in the form $\mathbf{A} = \mathbf{\Phi} \mathbf{D} \mathbf{\Phi}^{-1}$, where $\mathbf{D}$ is a diagonal matrix.
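As a quick numerical check of this diagonalisation (the parameter values below are illustrative assumptions, not fitted values), the eigenvalues of $\mathbf{A}$ should be $-1/\tau_1$ and $-1/\tau_2$, with the $\tau$ expressions from Geoffroy et al., 2013 given in the text:

```python
import numpy as np

# Illustrative two-layer parameters (assumed values, W m^-2 K^-1 style units)
C, C_D = 8.0, 100.0      # upper and deep ocean heat capacities
lambda_0 = 1.3           # climate feedback parameter
eta = 0.7                # heat exchange coefficient
eps = 1.2                # efficacy

A = np.array([
    [-(lambda_0 + eps * eta) / C, eps * eta / C],
    [eta / C_D, -eta / C_D],
])

# Diagonalise numerically: eigenvalues of A are -1/tau_1 and -1/tau_2
taus = np.sort(-1.0 / np.linalg.eigvals(A))

# Analytic expressions for the timescales
b = (lambda_0 + eps * eta) / C + eta / C_D
delta = b ** 2 - 4 * lambda_0 * eta / (C * C_D)
tau_1 = C * C_D / (2 * lambda_0 * eta) * (b - np.sqrt(delta))
tau_2 = C * C_D / (2 * lambda_0 * eta) * (b + np.sqrt(delta))

print(np.allclose(taus, [tau_1, tau_2]))  # True
```

Both eigenvalues are real and negative for physically sensible parameters, so the system relaxes on two positive timescales, one fast (years) and one slow (centuries).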
Applying the solution given in [Geoffroy et al., 2013, Part 1](https://journals.ametsoc.org/doi/full/10.1175/JCLI-D-12-00195.1) to our impulse response notation, we have \begin{align} \mathbf{D} = \begin{bmatrix} - \frac{1}{\tau_1} & 0 \\ 0 & - \frac{1}{\tau_2} \end{bmatrix} \end{align} and \begin{align} \mathbf{\Phi} = \begin{bmatrix} 1 & 1 \\ \phi_1 & \phi_2 \end{bmatrix} \end{align} where \begin{align} \tau_1 = \frac{C C_D}{2 \lambda_0 \eta} (b - \sqrt{\delta}) \end{align} \begin{align} \tau_2 = \frac{C C_D}{2 \lambda_0 \eta} (b + \sqrt{\delta}) \end{align} \begin{align} \phi_1 = \frac{C}{2 \epsilon \eta} (b^* - \sqrt{\delta}) \end{align} \begin{align} \phi_2 = \frac{C}{2 \epsilon \eta} (b^* + \sqrt{\delta}) \end{align} \begin{align} b = \frac{\lambda_0 + \epsilon \eta}{C} + \frac{\eta}{C_D} \end{align} \begin{align} b^* = \frac{\lambda_0 + \epsilon \eta}{C} - \frac{\eta}{C_D} \end{align} \begin{align} \delta = b^2 - 4 \frac{\lambda_0 \eta}{C C_D} \end{align} Given this, we can re-write the system as \begin{align} \frac{d\mathbf{X}}{dt} &= \mathbf{\Phi} \mathbf{D} \mathbf{\Phi}^{-1} \mathbf{X} + \mathbf{B} \\ \mathbf{\Phi}^{-1}\frac{d\mathbf{X}}{dt} &= \mathbf{D} \mathbf{\Phi}^{-1} \mathbf{X} + \mathbf{\Phi}^{-1} \mathbf{B} \\ \frac{d\mathbf{Y}}{dt} &= \mathbf{D} \mathbf{Y} + \mathbf{\Phi}^{-1} \mathbf{B} \\ \end{align} Defining $\mathbf{Y} = \begin{pmatrix} T_1 \\ T_2 \end{pmatrix}$, we have \begin{align} \frac{d}{dt} \begin{pmatrix} T_1 \\ T_2 \end{pmatrix} = \begin{bmatrix} - \frac{1}{\tau_1} & 0 \\ 0 & - \frac{1}{\tau_2} \end{bmatrix} \begin{pmatrix} T_1 \\ T_2 \end{pmatrix} + \frac{1}{\phi_2 - \phi_1}\begin{bmatrix} \phi_2 & -1 \\ -\phi_1 & 1 \end{bmatrix} \begin{pmatrix} \frac{F}{C} \\ 0 \end{pmatrix} \end{align} or, \begin{align} \frac{dT_1}{dt} & = \frac{-T_1}{\tau_1} + \frac{\phi_2}{\phi_2 - \phi_1} \frac{F}{C} \\ \frac{dT_2}{dt} & = \frac{-T_2}{\tau_2} - \frac{\phi_1}{\phi_2 - \phi_1} \frac{F}{C} \end{align} Re-writing, we have, \begin{align} 
\frac{dT_1}{dt} & = \frac{1}{\tau_1} \left( \frac{\tau_1 \phi_2}{\phi_2 - \phi_1} \frac{F}{C} - T_1 \right) \\ \frac{dT_2}{dt} & = \frac{1}{\tau_2} \left( \frac{-\tau_2 \phi_1}{\phi_2 - \phi_1} \frac{F}{C} - T_2 \right) \end{align} We can compare this to the notation of [Millar et al., 2017](https://doi.org/10.5194/acp-17-7213-2017) and see that \begin{align} d_1 = \tau_1 \\ d_2 = \tau_2 \\ q_1 = \frac{\tau_1 \phi_2}{C(\phi_2 - \phi_1)} \\ q_2 = - \frac{\tau_2 \phi_1}{C(\phi_2 - \phi_1)} \\ \end{align} Hence we have redemonstrated the equivalence of the two-layer model and a two-timescale impulse response model. Given the parameters of the two-layer model, we can now trivially derive the equivalent parameters of the two-timescale model. Doing the reverse is possible, but requires some more work in order to make a useable route drop out. The first step is to follow [Geoffroy et al., 2013, Part 1](https://journals.ametsoc.org/doi/full/10.1175/JCLI-D-12-00195.1), and define two extra constants \begin{align} a_1 = \frac{\phi_2 \tau_1}{C(\phi_2 - \phi_1)} \lambda_0 \\ a_2 = - \frac{\phi_1 \tau_2}{C(\phi_2 - \phi_1)} \lambda_0 \\ \end{align} These constants have the useful property that $a_1 + a_2 = 1$ (proof in Appendix A). From above, we also see that \begin{align} a_1 = \lambda_0 q_1 \\ a_2 = \lambda_0 q_2 \\ \end{align} Hence \begin{align} a_1 + a_2 = \lambda_0 q_1 + \lambda_0 q_2 = 1 \\ \lambda_0 = \frac{1}{q_1 + q_2} \end{align} Next we calculate $C$ via \begin{align} \frac{q_1}{d_1} + \frac{q_2}{d_2} &= \frac{\phi_2}{C(\phi_2 - \phi_1)} - \frac{\phi_1}{C(\phi_2 - \phi_1)} = \frac{1}{C} \\ C &= \frac{d_1 d_2}{q_1 d_2 + q_2 d_1} \end{align} We then use further relationships from Table 1 of [Geoffroy et al., 2013, Part 1](https://journals.ametsoc.org/doi/full/10.1175/JCLI-D-12-00195.1) (proof is left to the reader) to calculate the rest of the constants. 
Firstly, \begin{align} \tau_1 a_1 + \tau_2 a_2 = \frac{C + \epsilon C_D}{\lambda_0} \\ \epsilon C_D = \lambda_0 (\tau_1 a_1 + \tau_2 a_2) - C \end{align} and then finally, \begin{align} \tau_1 a_2 + \tau_2 a_1 = \frac{\epsilon C_D}{\epsilon \eta} \\ \epsilon \eta = \frac{\epsilon C_D}{\tau_1 a_2 + \tau_2 a_1} \end{align} The final thing to notice here is that $C_D$, $\epsilon$ and $\eta$ are not uniquely defined. This makes sense: as shown by [Geoffroy et al., 2013, Part 2](https://journals.ametsoc.org/doi/full/10.1175/JCLI-D-12-00196.1), the introduction of the efficacy factor does not alter the behaviour of the system (it is still the same mathematical system), and so the two-timescale temperature response alone cannot uniquely define all three of these quantities. It can only define the products $\epsilon C_D$ and $\epsilon \eta$. Hence when translating from the two-timescale model to the two-layer model with efficacy, an explicit choice for the efficacy must be made. This does not alter the temperature response but it does alter the implied ocean heat uptake of the two-timescale model. Long story short, when deriving two-layer model parameters from a two-timescale model, one must specify the efficacy. Given that $\mathbf{Y} = \mathbf{\Phi}^{-1} \mathbf{X}$, i.e. $\mathbf{X} = \mathbf{\Phi} \mathbf{Y}$, we can also relate the impulse response boxes to the two layers. \begin{aligned} \begin{pmatrix} T \\ T_D \end{pmatrix} = \begin{bmatrix} 1 & 1 \\ \phi_1 & \phi_2 \end{bmatrix} \begin{pmatrix} T_1 \\ T_2 \end{pmatrix} \\ \begin{pmatrix} T \\ T_D \end{pmatrix} = \begin{pmatrix} T_1 + T_2 \\ \phi_1 T_1 + \phi_2 T_2 \end{pmatrix} \end{aligned} Finally, the equivalence of the two-timescale and two-layer models allows us to also calculate the heat uptake of a two-timescale impulse response model.
It is given by \begin{aligned} \text{Heat uptake} &= C \frac{dT}{dt} + C_D \frac{dT_D}{dt} \\ &= F - \lambda_0 T + (1 - \epsilon) \eta (T - T_D) \\ &= F - \lambda_0 (T_1 + T_2) + (1 - \epsilon) \eta ((1 - \phi_1) T_1 + (1 - \phi_2) T_2) \\ &= F - \lambda_0 (T_1 + T_2) - \eta (\epsilon - 1)((1 - \phi_1) T_1 + (1 - \phi_2) T_2) \end{aligned} ## Running the code Here we actually run the two implementations to explore their similarity. ``` import datetime as dt import numpy as np import pandas as pd import openscm_units.unit_registry as ur from scmdata.run import ScmRun, run_append from openscm_twolayermodel import ImpulseResponseModel, TwoLayerModel import matplotlib.pyplot as plt ``` First we define a scenario to run. ``` time = np.arange(1750, 2501) forcing = 0.05 * np.sin(time / 15 * 2 * np.pi) + 3.0 * time / time.max() inp = ScmRun( data=forcing, index=time, columns={ "scenario": "test_scenario", "model": "unspecified", "climate_model": "junk input", "variable": "Effective Radiative Forcing", "unit": "W/m^2", "region": "World", }, ) inp # NBVAL_IGNORE_OUTPUT inp.lineplot() ``` Next we run the two-layer model. In order for it to be convertible to a two-timescale model, we must turn state-dependence off (a=0). ``` two_layer_config = { "du": 55 * ur("m"), "efficacy": 1.2 * ur("dimensionless"), # "efficacy": 1.0 * ur("dimensionless"), "a": 0 * ur("W/m^2/delta_degC^2"), } # NBVAL_IGNORE_OUTPUT twolayer = TwoLayerModel(**two_layer_config) res_twolayer = twolayer.run_scenarios(inp) res_twolayer # NBVAL_IGNORE_OUTPUT fig = plt.figure(figsize=(16, 9)) ax = fig.add_subplot(121) res_twolayer.filter(variable="*Temperature*").lineplot(hue="variable", ax=ax) ax = fig.add_subplot(122) res_twolayer.filter(variable="Heat Uptake").lineplot(hue="variable", ax=ax) ``` Next we get the parameters with which we get the equivalent impulse response model. 
``` two_timescale_paras = twolayer.get_impulse_response_parameters() two_timescale_paras # NBVAL_IGNORE_OUTPUT impulse_response = ImpulseResponseModel(**two_timescale_paras) res_impulse_response = impulse_response.run_scenarios(inp) res_impulse_response # NBVAL_IGNORE_OUTPUT fig = plt.figure(figsize=(16, 9)) ax = fig.add_subplot(121) res_impulse_response.filter(variable="*Temperature*").lineplot( hue="variable", ax=ax ) ax = fig.add_subplot(122) res_impulse_response.filter(variable="Heat Uptake").lineplot( hue="variable", ax=ax ) ``` We can compare the two responses as well. ``` # NBVAL_IGNORE_OUTPUT combined = run_append([res_impulse_response, res_twolayer]) combined ``` To within numerical errors they are equal. ``` # NBVAL_IGNORE_OUTPUT fig = plt.figure(figsize=(16, 9)) ax = fig.add_subplot(121) combined.filter(variable="*Temperature*").lineplot( hue="variable", style="climate_model", alpha=0.7, linewidth=2, ax=ax ) ax.legend(loc="upper left") ax = fig.add_subplot(122) combined.filter(variable="Heat Uptake").lineplot( hue="climate_model", style="climate_model", alpha=0.7, linewidth=2, ax=ax ) ``` ## Appendix A We begin with the definitions of the $a$ constants, \begin{align} a_1 = \frac{\phi_2 \tau_1}{C(\phi_2 - \phi_1)} \lambda_0 \\ a_2 = - \frac{\phi_1 \tau_2}{C(\phi_2 - \phi_1)} \lambda_0 \\ \end{align} We then have \begin{align} a_1 + a_2 &= \frac{\phi_2 \tau_1}{C(\phi_2 - \phi_1)} \lambda_0 - \frac{\phi_1 \tau_2}{C(\phi_2 - \phi_1)} \lambda_0 \\ &= \frac{\lambda_0}{C(\phi_2 - \phi_1)} (\phi_2 \tau_1 - \phi_1 \tau_2) \end{align} Recalling the definition of the $\phi$ parameters, \begin{align} \phi_1 = \frac{C}{2 \epsilon \eta} (b^* - \sqrt{\delta}) \end{align} \begin{align} \phi_2 = \frac{C}{2 \epsilon \eta} (b^* + \sqrt{\delta}) \end{align} We have, \begin{align} \phi_2 - \phi_1 = \frac{C \sqrt{\delta}}{\epsilon \eta} \end{align} Recalling the definition of the $\tau$ parameters, \begin{align} \tau_1 = \frac{C C_D}{2 \lambda_0 \eta} (b - \sqrt{\delta}) 
\end{align} \begin{align} \tau_2 = \frac{C C_D}{2 \lambda_0 \eta} (b + \sqrt{\delta}) \end{align} We have, \begin{align} \phi_2 \tau_1 - \phi_1 \tau_2 &= \frac{C}{2 \epsilon \eta} (b^* + \sqrt{\delta}) \times \frac{C C_D}{2 \lambda_0 \eta} (b - \sqrt{\delta}) - \frac{C}{2 \epsilon \eta} (b^* - \sqrt{\delta}) \times \frac{C C_D}{2 \lambda_0 \eta} (b + \sqrt{\delta}) \\ &= \frac{C^2 C_d}{4 \epsilon \eta^2 \lambda_0} \left[ (b^* + \sqrt{\delta}) (b - \sqrt{\delta}) - (b^* - \sqrt{\delta}) (b + \sqrt{\delta}) \right] \\ &= \frac{C^2 C_d}{4 \epsilon \eta^2 \lambda_0} \left[ b^*b + b\sqrt{\delta} -b^*\sqrt{\delta} - \delta - bb^* + b\sqrt{\delta} - b^*\sqrt{\delta} + \delta \right] \\ &= \frac{C^2 C_d}{2 \epsilon \eta^2 \lambda_0} \left[ b\sqrt{\delta} -b^*\sqrt{\delta} \right] \\ &= \frac{C^2 C_d \sqrt{\delta}}{2 \epsilon \eta^2 \lambda_0} \left[ b -b^*\right] \end{align} Recalling the definition of the $b$ parameters, \begin{align} b = \frac{\lambda_0 + \epsilon \eta}{C} + \frac{\eta}{C_D} \end{align} \begin{align} b^* = \frac{\lambda_0 + \epsilon \eta}{C} - \frac{\eta}{C_D} \end{align} We then have \begin{align} \phi_2 \tau_1 - \phi_1 \tau_2 &= \frac{C^2 C_d \sqrt{\delta}}{2 \epsilon \eta^2 \lambda_0} \left[ \frac{2\eta}{C_D} \right] \\ &= \frac{C^2 \sqrt{\delta}}{\epsilon \eta \lambda_0} \end{align} Putting it all back together, \begin{align} a_1 + a_2 &= \frac{\lambda_0}{C(\phi_2 - \phi_1)} (\phi_2 \tau_1 - \phi_1 \tau_2) \\ &= \frac{\lambda_0}{C} \frac{\epsilon \eta}{C \sqrt{\delta}} \frac{C^2 \sqrt{\delta}}{\epsilon \eta \lambda_0} \\ &= 1 \end{align}
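A quick numerical spot-check of this identity, using illustrative (assumed) parameter values plugged straight into the definitions above:

```python
import numpy as np

# Illustrative two-layer parameters (assumed values)
C, C_D = 8.0, 100.0
lambda_0 = 1.3
eta = 0.7
eps = 1.2

b = (lambda_0 + eps * eta) / C + eta / C_D
b_star = (lambda_0 + eps * eta) / C - eta / C_D
delta = b ** 2 - 4 * lambda_0 * eta / (C * C_D)

tau_1 = C * C_D / (2 * lambda_0 * eta) * (b - np.sqrt(delta))
tau_2 = C * C_D / (2 * lambda_0 * eta) * (b + np.sqrt(delta))
phi_1 = C / (2 * eps * eta) * (b_star - np.sqrt(delta))
phi_2 = C / (2 * eps * eta) * (b_star + np.sqrt(delta))

# Definitions of a_1 and a_2 from the appendix
a_1 = phi_2 * tau_1 / (C * (phi_2 - phi_1)) * lambda_0
a_2 = -phi_1 * tau_2 / (C * (phi_2 - phi_1)) * lambda_0

print(round(a_1 + a_2, 10))  # 1.0
```

The identity is exact algebraically, so the sum should equal 1 to within floating-point rounding for any valid parameter choice.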
# Practice: Statistical Significance Let's say that we've collected data for a web-based experiment. In the experiment, we're testing the change in layout of a product information page to see if this affects the proportion of people who click on a button to go to the download page. This experiment has been designed to have a cookie-based diversion, and we record two things from each user: which page version they received, and whether or not they accessed the download page during the data recording period. (We aren't keeping track of any other factors in this example, such as number of pageviews, or time between accessing the page and making the download, that might be of further interest.) Your objective in this notebook is to perform a statistical test on both recorded metrics to see if there is a statistical difference between the two groups. ``` # import packages import numpy as np import pandas as pd import scipy.stats as stats from statsmodels.stats import proportion as proptests import matplotlib.pyplot as plt %matplotlib inline # import data data = pd.read_csv('data/statistical_significance_data.csv') data.head(10) ``` In the dataset, the 'condition' column takes a 0 for the control group, and 1 for the experimental group. The 'click' column takes a value of 0 for no click, and 1 for a click. ## Checking the Invariant Metric First of all, we should check that the number of visitors assigned to each group is similar. It's important to check the invariant metrics as a prerequisite so that our inferences on the evaluation metrics are founded on solid ground. If we find that the two groups are imbalanced on the invariant metric, then this will require us to look carefully at how the visitors were split so that any sources of bias are accounted for. It's possible that a statistically significant difference in an invariant metric will require us to revise random assignment procedures and re-do data collection.
In this case, we want to do a two-sided hypothesis test on the proportion of visitors assigned to one of our conditions. Choosing the control or the experimental condition doesn't matter: you'll get the same result either way. Feel free to use whatever method you'd like: we'll highlight two main avenues below. If you want to take a simulation-based approach, you can simulate the number of visitors that would be assigned to each group for the number of total observations, assuming that we have an expected 50/50 split. Do this many times (200,000 repetitions should provide a good speed-variability balance in this case) and then see in how many simulated cases we get as extreme or more extreme a deviation from 50/50 than we actually observed. Don't forget that, since we have a two-sided test, an extreme case also includes values on the opposite side of 50/50. (e.g. if the observed proportion is 0.48, then simulated outcomes of 0.48 and lower count as at least as extreme, and so do simulated outcomes of 0.52 and higher.) The proportion of flagged simulation outcomes gives us a p-value on which to assess our observed proportion. We hope to see a large p-value, i.e. insufficient evidence to reject the null hypothesis. If you want to take an analytic approach, you could use the exact binomial distribution to compute a p-value for the test. The more usual approach, however, is to use the normal distribution approximation. Recall that this is possible thanks to our large sample size and the central limit theorem. To get a precise p-value, you should also perform a continuity correction, either adding or subtracting 0.5 to the total count before computing the area underneath the curve. (e.g. If we had 415 / 850 assigned to the control group, then the normal approximation would take the area to the left of $(415 + 0.5) / 850 = 0.489$ and to the right of $(435 - 0.5) / 850 = 0.511$.)
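The parenthetical 415-out-of-850 example can be checked directly; here is a minimal sketch of the continuity-corrected two-sided test for those numbers (the counts are taken from the example in the text, not from the dataset):

```python
import numpy as np
import scipy.stats as stats

# Worked example from the text: 415 of 850 visitors in the control group,
# under a null hypothesis of a 50/50 split
n_obs, n_control = 850, 415

p = 0.5
sd = np.sqrt(p * (1 - p) * n_obs)  # binomial standard deviation of the count

# Continuity correction: move the observed count 0.5 toward the null mean
z = ((n_control + 0.5) - p * n_obs) / sd
p_value = 2 * stats.norm.cdf(z)  # two-sided, z is negative here
print(round(p_value, 3))  # ≈ 0.515
```

A p-value this large gives no reason to doubt the 50/50 assignment for these example counts.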
### Analytic Approach ``` # Analytic approach: # get number of trials and number of 'successes' n_obs = data.shape[0] n_control = data.groupby('condition').size()[0] print(f'number of observations: {n_obs}, \nnumber in control group: {n_control}') # Compute a z-score and p-value p = 0.5 sd = np.sqrt(p * (1-p) * n_obs) z = ((n_control + 0.5) - p * n_obs) / sd print(z) print(2 * stats.norm.cdf(z)) ``` ### Simulation Approach ``` # get number of trials and number of 'successes' n_obs = data.shape[0] n_control = data.groupby('condition').size()[0] # simulate outcomes under null, compare to observed outcome p = 0.5 n_trials = 200_000 samples = np.random.binomial(n_obs, p, n_trials) p_value = np.logical_or(samples <= n_control, samples >= (n_obs - n_control)).mean() print(p_value) ``` Taking an analytic approach with the normal approximation, we get a p-value around .613. Since the difference between groups isn't statistically significant, we should feel fine moving on to the test on the evaluation metric.
While a continuity correction is possible in this case as well, the corrected p-value will be much more conservative than the p-value that a simulation will usually imply. Computing the z-score and resulting p-value without a continuity correction should be closer to the simulation's outcomes, though slightly more optimistic about there being a statistical difference between groups. ``` p_click = data.groupby('condition').mean()['click'] p_click p_click[1] - p_click[0] ``` ### Analytic Approach ``` # get number of trials and overall 'success' rate under null n_control = data.groupby('condition').size()[0] n_exper = data.groupby('condition').size()[1] p_null = data['click'].mean() # compute standard error, z-score, and p-value se_p = np.sqrt(p_null * (1-p_null) * (1/n_control + 1/n_exper)) z = (p_click[1] - p_click[0]) / se_p print(z) print(1-stats.norm.cdf(z)) ``` ### Simulation Approach ``` # get number of trials and overall 'success' rate under null n_control = data.groupby('condition').size()[0] n_exper = data.groupby('condition').size()[1] p_null = data['click'].mean() # simulate outcomes under null, compare to observed outcome n_trials = 200_000 ctrl_clicks = np.random.binomial(n_control, p_null, n_trials) exp_clicks = np.random.binomial(n_exper, p_null, n_trials) samples = exp_clicks / n_exper - ctrl_clicks / n_control p_value = (samples >= (p_click[1] - p_click[0])).mean() print(p_value) ``` A p-value of 0.03894 would be considered a statistically significant difference at an alpha = .05 level, so we could conclude that the experiment had the desired effect. ## Checkpoint ### 1. What is the p-value for the test on the invariant metric (number of visitors assigned to each group)? Since the difference between groups isn't statistically significant, we should feel fine moving on to the test on the evaluation metric. ### 2. What is the p-value for the test on the evaluation metric (difference in click-through rates across groups)?
# Assignment 5: Exploring Hashing In this exercise, we will begin to explore the concept of hashing and how it relates to various object containers with respect to computational complexity. We will begin with the base code as described in Chapter 5 of Grokking Algorithms (Bhargava 2016). ## Deliverables: We will again generate random data for this assignment. 1) Create a list of 100,000 names (randomly pick 10 characters e.g. abcdefghij, any order is fine, just make sure there are no duplicate names) and store those names in an unsorted list. 2) Now store the above names in a set 3) Make a separate copy of the list and sort it using any sorting algorithm that you have learned so far and justify why you are using it. Capture the time it takes to sort the list. 4) Pick the names from the unsorted array that are at the 10,000th, 30,000th, 50,000th, 70,000th, 90,000th, and 100,000th positions, and store them in a temporary array somewhere for later use. 5) Search for these six names in each of the collections. Use linear search for the unsorted list, binary search for the sorted list, and use the set.remove() (or the in keyword) builtin for the set. Capture the time it takes using all three algorithms. 6) Create a table and plot comparing times of linear search, binary search and set lookup for the six names using Python (matplotlib or Seaborn) or JavaScript (D3) visualization tools to illustrate algorithm performance. ### Prepare an executive summary of your results, referring to the table and figures you have generated. Explain how your results relate to big O notation. Describe your results in language that management can understand. This summary should be included as text paragraphs in the Jupyter notebook. Explain how the algorithm works and why it is useful to data engineers. # A.
Setup: Library imports, Function construction and Array generation ``` import numpy as np import pandas as pd import seaborn as sns import time import random import string RANDOM_SEED = 8 #sets random seed def random_string(str_length, num_strings): str_list = [] #instantiates an empty list to hold the strings for i in range(0,num_strings): #loop to generate the specified number of strings str_list.append(''.join(random.choice(string.ascii_lowercase) for m in range(str_length))) #generates a string of the defined character length (uniqueness is not enforced here, but with 26^10 possible strings, a collision among 100,000 draws is vanishingly unlikely) return str_list #returns the string list def MergeSort(arr): if len(arr) > 1: mid = len(arr)//2 # gets middle Left = arr[:mid] #splits elements left of middle Right = arr[mid:] #splits elements right of middle MergeSort(Left) #recursive call on left MergeSort(Right) #recursive call on right #set all indices to 0 i=0 k=0 j=0 #below compares the front elements of Left and Right and merges the smaller one back into the original list while i < len(Left) and j < len(Right): if Left[i] < Right[j]: arr[k] = Left[i] #makes k index of arr left[i] if it's less than Right[j] i += 1 #increments i (the left index) else: arr[k] = Right[j] #if right value is less than left, makes arr[k] the value of right and increments the right index j += 1 #increments j k += 1 #increments the arr index while i < len(Left): #checks to see if remaining elements in left (less than mid), if so adds to arr at k index and increments i and k arr[k] = Left[i] i += 1 #increments i k += 1 #increments k while j < len(Right): #checks to see if remaining elements in right (greater than mid), if so adds to arr at k index and increments j and k.
arr[k] = Right[j] j += 1 #increments j k += 1 #increments k return arr def Container(arr, fun): objects = [] #instantiates an empty list to collect the returns times = [] #instantiates an empty list to collect times for each computation start= time.perf_counter() #collects the start time obj = fun(arr) # applies the function to the arr object end = time.perf_counter() # collects end time duration = (end-start)* 1E3 #converts to milliseconds objects.append(obj)# adds the returns of the functions to the objects list times.append(duration) # adds the duration for computation to list return objects, times #returns the function outputs and the recorded durations #function SimpleSearch uses an index counter "i" which increments after each unsuccessful equality check for the item within a given array. It returns the milliseconds elapsed. def SimpleSearch(array, item): i = 0 guess = array[i] start = time.perf_counter() # gets fractional seconds while item != guess: i += 1 guess = array[i] #increments i and takes the next guess end = time.perf_counter() # gets fractional seconds duration = end - start # calculates difference in fractional seconds MilliElapsed = duration*1E3 # converts to milliseconds before returning return MilliElapsed #function BinarySearch determines the range of the array and guesses the midpoint of the range. A loop continues to perform iterative range evaluations so long as the low index is less than or equal to the high index of the array. When the guess converges to the item of interest, the time elapsed in milliseconds is returned. 
# binary search for the sorted list def BinarySearch(array, item): i = 0 length = len(array)-1 low = array[i] #finds lowest value in array high = array[length] #finds highest value in array register = [] # creates empty register of increments; for debug purposes start = time.perf_counter() # gets fractional seconds while i <= length: mid= (i + length)/2 # calculates midpoint of the range guess = int(mid) register.append(array[guess]) # appends increments to register; for debug purposes if array[guess] == item: end = time.perf_counter() #datetime.utcnow() duration = end - start MilliElapsed = duration*1E3 #print('the string is found for:', n) #returns a tuple which contains search time in milliseconds and register of the guesses return MilliElapsed #, register elif array[guess] > item: ##### loop for if guess is higher than the item high = array[guess] #resets high to the item at the guess index low = array[i] #resets low to the item at the i index (typically index 0) length = guess#resets length to guess #print('The guess went too high!', n, i, array[guess]) elif array[guess] < item: ######loop for if guess is lower the the item low = array[guess] #reset low to the index of guess length = len(array)-1 #get the length of the array to pass to high high = array[length] #reset high to be the end of the list i = guess+1 #make sure we increment i so that it can become the end of the list, otherwise you are going to have a bad time! 
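#Hypothetical alternative (an illustration, not part of the graded solution above):
#a conventional index-based binary search. Tracking the integer bounds lo/hi is
#simpler to reason about than tracking the low/high *values* as BinarySearch does,
#and it returns None cleanly when the item is absent.
def BinarySearchByIndex(array, item):
    lo, hi = 0, len(array) - 1
    start = time.perf_counter()
    while lo <= hi:
        mid = (lo + hi) // 2  #midpoint index of the current search range
        if array[mid] == item:
            return (time.perf_counter() - start) * 1E3  #elapsed milliseconds
        elif array[mid] > item:
            hi = mid - 1  #discard the upper half of the range
        else:
            lo = mid + 1  #discard the lower half of the range
    return None  #item is not present in the (sorted) array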
#print('The guess went too low!',n, i, high, length, low) str100000 = random_string(str_length=10, num_strings=100000) #generates random strings (uniqueness is not enforced, but duplicates are astronomically unlikely among 26^10 possibilities) str100000_copy = str100000[:] #creates a copy of the random strings start = time.perf_counter() MergeSort(str100000) end = time.perf_counter() duration = end - start MS_time = duration*1E3 positions = [9999, 29999, 49999, 69999, 89999, 99999] #zero-based indices of the names (needles) needles = [str100000_copy[i] for i in positions] #collects the needles from the haystack str100000_container =Container(str100000, MergeSort) #uses mergesort to sort the strings (str100000 was already sorted in place above, so this re-sorts an already-sorted list) temp =str100000_container[0] str100000_sorted =temp[0] set_str100000 = set(str100000_copy) print('the needles are:' , needles) print('the length of the set is:' ,len(set_str100000)) print('the length of the unsorted copy is:' , len(str100000_copy)) print('the length of the sorted list (mergesort) is:', len(str100000_sorted)) ``` # B. Searching Search for these six names in each of the collections. Use linear search for the unsorted list, binary search for the sorted list, and use the set.remove() (or the in keyword) builtin for the set. Capture the time it takes using all three algorithms. ### B1. Linear Search of the unsorted list ``` #linear search for the unsorted list Linear_times = [] for n in needles: temp_time = SimpleSearch(str100000_copy, n) Linear_times.append(temp_time) print('The time required for each element in the unsorted array using linear search is:', Linear_times) ``` ### B2. Binary Search of the sorted list ``` Binary_times = [] for n in needles: temp_time = BinarySearch(str100000, n) Binary_times.append(temp_time) print('The time required for each element in the sorted array using binary search is:', Binary_times) ``` ### B3. 
Set Lookup for the Set ``` set_needles = set(needles) set_times = {} for needle in set_needles: start = time.perf_counter() needle in set_str100000 #hash-based membership test via the in keyword (note: intersection(needle) would intersect with the needle's individual characters, not test membership) end = time.perf_counter() duration = end - start MilliElapsed = duration*1E3 set_times[needle] = MilliElapsed set_times ``` # C. Summary ## Table 1: Search times in milliseconds for Strings within an array of 100000 elements (each string 10 random lowercase alpha characters) ``` Strings = { 'String': [needles[0], needles[1],needles[2], needles[3],needles[4], needles[5]], 'PositionInSortedArray': [10000, 30000, 50000, 70000, 90000, 100000], 'LinearSearch(Unsorted)': [Linear_times[0], Linear_times[1], Linear_times[2], Linear_times[3], Linear_times[4], Linear_times[5]], 'BinarySearch(Sorted)': [Binary_times[0], Binary_times[1], Binary_times[2], Binary_times[3], Binary_times[4], Binary_times[5]], 'SetLookup(Unsorted)': [set_times.get(needles[0]), set_times.get(needles[1]), set_times.get(needles[2]), set_times.get(needles[3]), set_times.get(needles[4]), set_times.get(needles[5])] } string_df = pd.DataFrame.from_dict(Strings) string_df['Binary+Sort'] = string_df['BinarySearch(Sorted)']+MS_time string_df ``` ### Reshape Table 1 into long format for plotting ``` long_df = string_df.melt(id_vars=['String', 'PositionInSortedArray'], value_vars=['LinearSearch(Unsorted)', 'BinarySearch(Sorted)', 'SetLookup(Unsorted)'],var_name='Algo', value_name='Time(ms)') ``` ## Figure 1: Search Algorithm Time Complexity ``` plot = sns.barplot(data = long_df, x='PositionInSortedArray', hue='Algo', y='Time(ms)') plot.set_yscale('log') ``` ## Figure 2: Merge and Quick Sort time complexity # Discussion Three sorting algorithms were tested for their time complexity in sorting lists of varying sizes of string elements. Each string element in the list was randomly populated with 50 alphabetic lower case characters. 
The number of elements within the list was varied. Five lists containing 200, 400, 600, 800, and 1000 strings were sorted via BubbleSort, MergeSort, and QuickSort. The times (given in milliseconds) required to perform the sort are collected and displayed in Table 1. By far, the most inefficient sorting algorithm demonstrated here is the bubble sort, whose complexity is shown graphically (Figure 1) to grow at an O(n^2) rate. This makes sense for bubble sort, as it compares each of n elements against n elements. Alternatively, the other two methodologies utilize a divide and conquer strategy. The list of strings, when using QuickSort, is divided into two arrays (greater and less) containing the values greater than or less than a pivot value. In MergeSort a similar strategy is achieved by dividing the list into two arrays (left and right), which sit left and right respectively of the center element of the list. In both cases recursion is used, as the QuickSort and MergeSort functions are called on the subarrays. The result of this divide and conquer strategy is a complexity of O(n log n). A direct comparison of the times required for sorting the lists with these two methodologies is shown in Figure 2. In rare instances QuickSort may also dramatically underperform, since the pivot element is always selected as the first item of the array (or subarray). If the array were already sorted largest to smallest, each partition would peel off only a single element rather than splitting the list, again yielding O(n^2) complexity. It is interesting that QuickSort seems to perform slightly better than MergeSort, but both are quite efficient. Because of the splitting methodology employed by MergeSort, there is no risk of deviating from the O(n log n) complexity. The beginning array and subarrays are always split in half size-wise. 
It's therefore recommended that the MergeSort method be used, as its time complexity is consistently O(n log n) regardless of the input ordering. # ------------------------ END ------------------------
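As a closing illustration of the hashing theme (a supplementary sketch, not one of the graded deliverables; the variable names below are illustrative): Python's `set` stores its elements in a hash table, so the `in` keyword resolves membership by hashing the query string and probing a single bucket, which is O(1) on average, while `in` on a list must scan element by element, which is O(n). The snippet makes that gap concrete by placing the needle at the very end of the list, the worst case for a linear scan:

```python
import time

names = ['name{:06d}'.format(i) for i in range(100_000)]  #unique strings
names_set = set(names)  #hash-based container built from the same data
needle = names[-1]  #last element: worst case for a linear scan

start = time.perf_counter()
in_list = needle in names  #O(n): walks the list until it reaches the needle
list_ms = (time.perf_counter() - start) * 1E3

start = time.perf_counter()
in_set = needle in names_set  #O(1) average: one hash plus one bucket probe
set_ms = (time.perf_counter() - start) * 1E3

print('list: {:.4f} ms, set: {:.4f} ms'.format(list_ms, set_ms))
```

On typical hardware the set lookup is several orders of magnitude faster, and (unlike the linear scan) its cost does not grow with the size of the collection.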
# Building your Deep Neural Network: Step by Step Welcome to your week 4 assignment (part 1 of 2)! You have previously trained a 2-layer Neural Network (with a single hidden layer). This week, you will build a deep neural network, with as many layers as you want! - In this notebook, you will implement all the functions required to build a deep neural network. - In the next assignment, you will use these functions to build a deep neural network for image classification. **After this assignment you will be able to:** - Use non-linear units like ReLU to improve your model - Build a deeper neural network (with more than 1 hidden layer) - Implement an easy-to-use neural network class **Notation**: - Superscript $[l]$ denotes a quantity associated with the $l^{th}$ layer. - Example: $a^{[L]}$ is the $L^{th}$ layer activation. $W^{[L]}$ and $b^{[L]}$ are the $L^{th}$ layer parameters. - Superscript $(i)$ denotes a quantity associated with the $i^{th}$ example. - Example: $x^{(i)}$ is the $i^{th}$ training example. - Subscript $i$ denotes the $i^{th}$ entry of a vector. - Example: $a^{[l]}_i$ denotes the $i^{th}$ entry of the $l^{th}$ layer's activations. Let's get started! ## 1 - Packages Let's first import all the packages that you will need during this assignment. - [numpy](http://www.numpy.org) is the main package for scientific computing with Python. - [matplotlib](http://matplotlib.org) is a library to plot graphs in Python. - dnn_utils provides some necessary functions for this notebook. - testCases provides some test cases to assess the correctness of your functions - np.random.seed(1) is used to keep all the random function calls consistent. It will help us grade your work. Please don't change the seed. 
``` import numpy as np import h5py import matplotlib.pyplot as plt from testCases_v4 import * from dnn_utils_v2 import sigmoid, sigmoid_backward, relu, relu_backward %matplotlib inline plt.rcParams['figure.figsize'] = (5.0, 4.0) # set default size of plots plt.rcParams['image.interpolation'] = 'nearest' plt.rcParams['image.cmap'] = 'gray' %load_ext autoreload %autoreload 2 np.random.seed(1) import inspect import re def describe(arg): frame = inspect.currentframe() callerframeinfo = inspect.getframeinfo(frame.f_back) try: context = inspect.getframeinfo(frame.f_back).code_context caller_lines = ''.join([line.strip() for line in context]) m = re.search(r'describe\s*\((.+?)\)$', caller_lines) if m: caller_lines = m.group(1) position = str(callerframeinfo.filename) + "@" + str(callerframeinfo.lineno) # Add additional info such as array shape or string length additional = '' if hasattr(arg, "shape"): additional += "[shape={}]".format(arg.shape) elif hasattr(arg, "__len__"): # shape includes length information additional += "[len={}]".format(len(arg)) # Use str() representation if it is printable str_arg = str(arg) str_arg = str_arg if str_arg.isprintable() else repr(arg) print(position, "describe(" + caller_lines + ") = ", end='') print(arg.__class__.__name__ + "(" + str_arg + ")", additional) else: print("Describe: couldn't find caller context") finally: del frame del callerframeinfo ``` ## 2 - Outline of the Assignment To build your neural network, you will be implementing several "helper functions". These helper functions will be used in the next assignment to build a two-layer neural network and an L-layer neural network. Each small helper function you will implement will have detailed instructions that will walk you through the necessary steps. Here is an outline of this assignment, you will: - Initialize the parameters for a two-layer network and for an $L$-layer neural network. - Implement the forward propagation module (shown in purple in the figure below). 
- Complete the LINEAR part of a layer's forward propagation step (resulting in $Z^{[l]}$). - We give you the ACTIVATION function (relu/sigmoid). - Combine the previous two steps into a new [LINEAR->ACTIVATION] forward function. - Stack the [LINEAR->RELU] forward function L-1 times (for layers 1 through L-1) and add a [LINEAR->SIGMOID] at the end (for the final layer $L$). This gives you a new L_model_forward function. - Compute the loss. - Implement the backward propagation module (denoted in red in the figure below). - Complete the LINEAR part of a layer's backward propagation step. - We give you the gradient of the ACTIVATION function (relu_backward/sigmoid_backward). - Combine the previous two steps into a new [LINEAR->ACTIVATION] backward function. - Stack [LINEAR->RELU] backward L-1 times and add [LINEAR->SIGMOID] backward in a new L_model_backward function. - Finally update the parameters. <img src="images/final outline.png" style="width:800px;height:500px;"> <caption><center> **Figure 1**</center></caption><br> **Note** that for every forward function, there is a corresponding backward function. That is why at every step of your forward module you will be storing some values in a cache. The cached values are useful for computing gradients. In the backpropagation module you will then use the cache to calculate the gradients. This assignment will show you exactly how to carry out each of these steps. ## 3 - Initialization You will write two helper functions that will initialize the parameters for your model. The first function will be used to initialize parameters for a two layer model. The second one will generalize this initialization process to $L$ layers. ### 3.1 - 2-layer Neural Network **Exercise**: Create and initialize the parameters of the 2-layer neural network. **Instructions**: - The model's structure is: *LINEAR -> RELU -> LINEAR -> SIGMOID*. - Use random initialization for the weight matrices. Use `np.random.randn(shape)*0.01` with the correct shape. 
- Use zero initialization for the biases. Use `np.zeros(shape)`. ``` # GRADED FUNCTION: initialize_parameters def initialize_parameters(n_x, n_h, n_y): """ Argument: n_x -- size of the input layer n_h -- size of the hidden layer n_y -- size of the output layer Returns: parameters -- python dictionary containing your parameters: W1 -- weight matrix of shape (n_h, n_x) b1 -- bias vector of shape (n_h, 1) W2 -- weight matrix of shape (n_y, n_h) b2 -- bias vector of shape (n_y, 1) """ np.random.seed(1) ### START CODE HERE ### (≈ 4 lines of code) W1 = np.random.randn(n_h, n_x) * 0.01 b1 = np.zeros((n_h, 1)) W2 = np.random.randn(n_y, n_h) * 0.01 b2 = np.zeros((n_y, 1)) ### END CODE HERE ### assert(W1.shape == (n_h, n_x)) assert(b1.shape == (n_h, 1)) assert(W2.shape == (n_y, n_h)) assert(b2.shape == (n_y, 1)) parameters = {"W1": W1, "b1": b1, "W2": W2, "b2": b2} return parameters parameters = initialize_parameters(3,2,1) print("W1 = " + str(parameters["W1"])) print("b1 = " + str(parameters["b1"])) print("W2 = " + str(parameters["W2"])) print("b2 = " + str(parameters["b2"])) ``` **Expected output**: <table style="width:80%"> <tr> <td> **W1** </td> <td> [[ 0.01624345 -0.00611756 -0.00528172] [-0.01072969 0.00865408 -0.02301539]] </td> </tr> <tr> <td> **b1**</td> <td>[[ 0.] [ 0.]]</td> </tr> <tr> <td>**W2**</td> <td> [[ 0.01744812 -0.00761207]]</td> </tr> <tr> <td> **b2** </td> <td> [[ 0.]] </td> </tr> </table> ### 3.2 - L-layer Neural Network The initialization for a deeper L-layer neural network is more complicated because there are many more weight matrices and bias vectors. When completing the `initialize_parameters_deep`, you should make sure that your dimensions match between each layer. Recall that $n^{[l]}$ is the number of units in layer $l$. 
Thus for example if the size of our input $X$ is $(12288, 209)$ (with $m=209$ examples) then: <table style="width:100%"> <tr> <td> </td> <td> **Shape of W** </td> <td> **Shape of b** </td> <td> **Activation** </td> <td> **Shape of Activation** </td> <tr> <tr> <td> **Layer 1** </td> <td> $(n^{[1]},12288)$ </td> <td> $(n^{[1]},1)$ </td> <td> $Z^{[1]} = W^{[1]} X + b^{[1]} $ </td> <td> $(n^{[1]},209)$ </td> <tr> <tr> <td> **Layer 2** </td> <td> $(n^{[2]}, n^{[1]})$ </td> <td> $(n^{[2]},1)$ </td> <td>$Z^{[2]} = W^{[2]} A^{[1]} + b^{[2]}$ </td> <td> $(n^{[2]}, 209)$ </td> <tr> <tr> <td> $\vdots$ </td> <td> $\vdots$ </td> <td> $\vdots$ </td> <td> $\vdots$</td> <td> $\vdots$ </td> <tr> <tr> <td> **Layer L-1** </td> <td> $(n^{[L-1]}, n^{[L-2]})$ </td> <td> $(n^{[L-1]}, 1)$ </td> <td>$Z^{[L-1]} = W^{[L-1]} A^{[L-2]} + b^{[L-1]}$ </td> <td> $(n^{[L-1]}, 209)$ </td> <tr> <tr> <td> **Layer L** </td> <td> $(n^{[L]}, n^{[L-1]})$ </td> <td> $(n^{[L]}, 1)$ </td> <td> $Z^{[L]} = W^{[L]} A^{[L-1]} + b^{[L]}$</td> <td> $(n^{[L]}, 209)$ </td> <tr> </table> Remember that when we compute $W X + b$ in python, it carries out broadcasting. For example, if: $$ W = \begin{bmatrix} j & k & l\\ m & n & o \\ p & q & r \end{bmatrix}\;\;\; X = \begin{bmatrix} a & b & c\\ d & e & f \\ g & h & i \end{bmatrix} \;\;\; b =\begin{bmatrix} s \\ t \\ u \end{bmatrix}\tag{2}$$ Then $WX + b$ will be: $$ WX + b = \begin{bmatrix} (ja + kd + lg) + s & (jb + ke + lh) + s & (jc + kf + li)+ s\\ (ma + nd + og) + t & (mb + ne + oh) + t & (mc + nf + oi) + t\\ (pa + qd + rg) + u & (pb + qe + rh) + u & (pc + qf + ri)+ u \end{bmatrix}\tag{3} $$ **Exercise**: Implement initialization for an L-layer Neural Network. **Instructions**: - The model's structure is *[LINEAR -> RELU] $ \times$ (L-1) -> LINEAR -> SIGMOID*. I.e., it has $L-1$ layers using a ReLU activation function followed by an output layer with a sigmoid activation function. - Use random initialization for the weight matrices. 
Use `np.random.randn(shape) * 0.01`. - Use zero initialization for the biases. Use `np.zeros(shape)`. - We will store $n^{[l]}$, the number of units in different layers, in a variable `layer_dims`. For example, the `layer_dims` for the "Planar Data classification model" from last week would have been [2,4,1]: There were two inputs, one hidden layer with 4 hidden units, and an output layer with 1 output unit. This means `W1`'s shape was (4,2), `b1` was (4,1), `W2` was (1,4) and `b2` was (1,1). Now you will generalize this to $L$ layers! - Here is the implementation for $L=1$ (one layer neural network). It should inspire you to implement the general case (L-layer neural network). ```python if L == 1: parameters["W" + str(L)] = np.random.randn(layer_dims[1], layer_dims[0]) * 0.01 parameters["b" + str(L)] = np.zeros((layer_dims[1], 1)) ``` ``` # GRADED FUNCTION: initialize_parameters_deep def initialize_parameters_deep(layer_dims): """ Arguments: layer_dims -- python array (list) containing the dimensions of each layer in our network Returns: parameters -- python dictionary containing your parameters "W1", "b1", ..., "WL", "bL": Wl -- weight matrix of shape (layer_dims[l], layer_dims[l-1]) bl -- bias vector of shape (layer_dims[l], 1) """ np.random.seed(3) parameters = {} L = len(layer_dims) # number of layers in the network for l in range(1, L): ### START CODE HERE ### (≈ 2 lines of code) parameters['W' + str(l)] = np.random.randn(layer_dims[l], layer_dims[l-1]) * 0.01 parameters['b' + str(l)] = np.zeros((layer_dims[l], 1)) ### END CODE HERE ### assert(parameters['W' + str(l)].shape == (layer_dims[l], layer_dims[l-1])) assert(parameters['b' + str(l)].shape == (layer_dims[l], 1)) return parameters parameters = initialize_parameters_deep([5,4,3]) print("W1 = " + str(parameters["W1"])) print("b1 = " + str(parameters["b1"])) print("W2 = " + str(parameters["W2"])) print("b2 = " + str(parameters["b2"])) ``` **Expected output**: <table style="width:80%"> <tr> <td> **W1** 
</td> <td>[[ 0.01788628 0.0043651 0.00096497 -0.01863493 -0.00277388] [-0.00354759 -0.00082741 -0.00627001 -0.00043818 -0.00477218] [-0.01313865 0.00884622 0.00881318 0.01709573 0.00050034] [-0.00404677 -0.0054536 -0.01546477 0.00982367 -0.01101068]]</td> </tr> <tr> <td>**b1** </td> <td>[[ 0.] [ 0.] [ 0.] [ 0.]]</td> </tr> <tr> <td>**W2** </td> <td>[[-0.01185047 -0.0020565 0.01486148 0.00236716] [-0.01023785 -0.00712993 0.00625245 -0.00160513] [-0.00768836 -0.00230031 0.00745056 0.01976111]]</td> </tr> <tr> <td>**b2** </td> <td>[[ 0.] [ 0.] [ 0.]]</td> </tr> </table> ## 4 - Forward propagation module ### 4.1 - Linear Forward Now that you have initialized your parameters, you will do the forward propagation module. You will start by implementing some basic functions that you will use later when implementing the model. You will complete three functions in this order: - LINEAR - LINEAR -> ACTIVATION where ACTIVATION will be either ReLU or Sigmoid. - [LINEAR -> RELU] $\times$ (L-1) -> LINEAR -> SIGMOID (whole model) The linear forward module (vectorized over all the examples) computes the following equations: $$Z^{[l]} = W^{[l]}A^{[l-1]} +b^{[l]}\tag{4}$$ where $A^{[0]} = X$. **Exercise**: Build the linear part of forward propagation. **Reminder**: The mathematical representation of this unit is $Z^{[l]} = W^{[l]}A^{[l-1]} +b^{[l]}$. You may also find `np.dot()` useful. If your dimensions don't match, printing `W.shape` may help. ``` # GRADED FUNCTION: linear_forward def linear_forward(A, W, b): """ Implement the linear part of a layer's forward propagation. 
Arguments: A -- activations from previous layer (or input data): (size of previous layer, number of examples) W -- weights matrix: numpy array of shape (size of current layer, size of previous layer) b -- bias vector, numpy array of shape (size of the current layer, 1) Returns: Z -- the input of the activation function, also called pre-activation parameter cache -- a python dictionary containing "A", "W" and "b" ; stored for computing the backward pass efficiently """ ### START CODE HERE ### (≈ 1 line of code) Z = np.dot(W, A) + b ### END CODE HERE ### assert(Z.shape == (W.shape[0], A.shape[1])) cache = (A, W, b) return Z, cache A, W, b = linear_forward_test_case() Z, linear_cache = linear_forward(A, W, b) print("Z = " + str(Z)) ``` **Expected output**: <table style="width:35%"> <tr> <td> **Z** </td> <td> [[ 3.26295337 -1.23429987]] </td> </tr> </table> ### 4.2 - Linear-Activation Forward In this notebook, you will use two activation functions: - **Sigmoid**: $\sigma(Z) = \sigma(W A + b) = \frac{1}{ 1 + e^{-(W A + b)}}$. We have provided you with the `sigmoid` function. This function returns **two** items: the activation value "`a`" and a "`cache`" that contains "`Z`" (it's what we will feed in to the corresponding backward function). To use it you could just call: ``` python A, activation_cache = sigmoid(Z) ``` - **ReLU**: The mathematical formula for ReLu is $A = RELU(Z) = max(0, Z)$. We have provided you with the `relu` function. This function returns **two** items: the activation value "`A`" and a "`cache`" that contains "`Z`" (it's what we will feed in to the corresponding backward function). To use it you could just call: ``` python A, activation_cache = relu(Z) ``` For more convenience, you are going to group two functions (Linear and Activation) into one function (LINEAR->ACTIVATION). Hence, you will implement a function that does the LINEAR forward step followed by an ACTIVATION forward step. 
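The `sigmoid` and `relu` helpers come from `dnn_utils_v2`, which is not shown in this notebook. If you want to run these cells without that file, a minimal stand-in consistent with the interface described above (each returns the activation plus a cache holding `Z`) might look like this sketch; it is an assumption about the helpers' internals, not the official implementation:

```python
import numpy as np

def sigmoid(Z):
    """Sigmoid activation: A = 1 / (1 + e^-Z); the cache stores Z for backprop."""
    A = 1 / (1 + np.exp(-Z))
    return A, Z

def relu(Z):
    """ReLU activation: A = max(0, Z) elementwise; the cache stores Z for backprop."""
    A = np.maximum(0, Z)
    return A, Z

# Tiny demonstration on a 1x3 array of pre-activations
Z = np.array([[-1.0, 0.0, 2.0]])
A_sig, cache_sig = sigmoid(Z)
A_relu, cache_relu = relu(Z)
```

Both functions match the call pattern `A, activation_cache = sigmoid(Z)` used throughout this section, so they can be dropped in for experimentation.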
**Exercise**: Implement the forward propagation of the *LINEAR->ACTIVATION* layer. Mathematical relation is: $A^{[l]} = g(Z^{[l]}) = g(W^{[l]}A^{[l-1]} +b^{[l]})$ where the activation "g" can be sigmoid() or relu(). Use linear_forward() and the correct activation function. ``` # GRADED FUNCTION: linear_activation_forward def linear_activation_forward(A_prev, W, b, activation): """ Implement the forward propagation for the LINEAR->ACTIVATION layer Arguments: A_prev -- activations from previous layer (or input data): (size of previous layer, number of examples) W -- weights matrix: numpy array of shape (size of current layer, size of previous layer) b -- bias vector, numpy array of shape (size of the current layer, 1) activation -- the activation to be used in this layer, stored as a text string: "sigmoid" or "relu" Returns: A -- the output of the activation function, also called the post-activation value cache -- a python dictionary containing "linear_cache" and "activation_cache"; stored for computing the backward pass efficiently """ # Ravi's note: duplicated line 1 of written code if activation == "sigmoid": # Inputs: "A_prev, W, b". Outputs: "A, activation_cache". ### START CODE HERE ### (≈ 2 lines of code) Z, linear_cache = linear_forward(A_prev, W, b) A, activation_cache = sigmoid(Z) ### END CODE HERE ### elif activation == "relu": # Inputs: "A_prev, W, b". Outputs: "A, activation_cache". 
### START CODE HERE ### (≈ 2 lines of code) Z, linear_cache = linear_forward(A_prev, W, b) A, activation_cache = relu(Z) ### END CODE HERE ### assert (A.shape == (W.shape[0], A_prev.shape[1])) cache = (linear_cache, activation_cache) return A, cache A_prev, W, b = linear_activation_forward_test_case() A, linear_activation_cache = linear_activation_forward(A_prev, W, b, activation = "sigmoid") print("With sigmoid: A = " + str(A)) A, linear_activation_cache = linear_activation_forward(A_prev, W, b, activation = "relu") print("With ReLU: A = " + str(A)) ``` **Expected output**: <table style="width:35%"> <tr> <td> **With sigmoid: A ** </td> <td > [[ 0.96890023 0.11013289]]</td> </tr> <tr> <td> **With ReLU: A ** </td> <td > [[ 3.43896131 0. ]]</td> </tr> </table> **Note**: In deep learning, the "[LINEAR->ACTIVATION]" computation is counted as a single layer in the neural network, not two layers. ### d) L-Layer Model For even more convenience when implementing the $L$-layer Neural Net, you will need a function that replicates the previous one (`linear_activation_forward` with RELU) $L-1$ times, then follows that with one `linear_activation_forward` with SIGMOID. <img src="images/model_architecture_kiank.png" style="width:600px;height:300px;"> <caption><center> **Figure 2** : *[LINEAR -> RELU] $\times$ (L-1) -> LINEAR -> SIGMOID* model</center></caption><br> **Exercise**: Implement the forward propagation of the above model. **Instruction**: In the code below, the variable `AL` will denote $A^{[L]} = \sigma(Z^{[L]}) = \sigma(W^{[L]} A^{[L-1]} + b^{[L]})$. (This is sometimes also called `Yhat`, i.e., this is $\hat{Y}$.) **Tips**: - Use the functions you had previously written - Use a for loop to replicate [LINEAR->RELU] (L-1) times - Don't forget to keep track of the caches in the "caches" list. To add a new value `c` to a `list`, you can use `list.append(c)`. 
``` # GRADED FUNCTION: L_model_forward def L_model_forward(X, parameters): """ Implement forward propagation for the [LINEAR->RELU]*(L-1)->LINEAR->SIGMOID computation Arguments: X -- data, numpy array of shape (input size, number of examples) parameters -- output of initialize_parameters_deep() Returns: AL -- last post-activation value caches -- list of caches containing: every cache of linear_activation_forward() (there are L of them, indexed from 0 to L-1) """ caches = [] A = X L = len(parameters) // 2 # number of layers in the neural network # // is floor division: integer division that rounds down # Implement [LINEAR -> RELU]*(L-1). Add "cache" to the "caches" list. for l in range(1, L): A_prev = A ### START CODE HERE ### (≈ 2 lines of code) A, cache = linear_activation_forward(A_prev, parameters['W' + str(l)], parameters['b' + str(l)], "relu") caches.append(cache) ### END CODE HERE ### # Implement LINEAR -> SIGMOID. Add "cache" to the "caches" list. ### START CODE HERE ### (≈ 2 lines of code) AL, cache = linear_activation_forward(A, parameters['W' + str(L)], parameters['b' + str(L)], "sigmoid") caches.append(cache) ### END CODE HERE ### assert(AL.shape == (1,X.shape[1])) return AL, caches X, parameters = L_model_forward_test_case_2hidden() AL, caches = L_model_forward(X, parameters) print("AL = " + str(AL)) print("Length of caches list = " + str(len(caches))) ``` **Expected output**: <table style="width:50%"> <tr> <td> **AL** </td> <td > [[ 0.03921668 0.70498921 0.19734387 0.04728177]]</td> </tr> <tr> <td> **Length of caches list ** </td> <td > 3 </td> </tr> </table> Great! Now you have a full forward propagation that takes the input X and outputs a row vector $A^{[L]}$ containing your predictions. It also records all intermediate values in "caches". Using $A^{[L]}$, you can compute the cost of your predictions. ## 5 - Cost function Now you will implement forward and backward propagation. 
You need to compute the cost, because you want to check if your model is actually learning. **Exercise**: Compute the cross-entropy cost $J$, using the following formula: $$-\frac{1}{m} \sum\limits_{i = 1}^{m} (y^{(i)}\log\left(a^{[L] (i)}\right) + (1-y^{(i)})\log\left(1- a^{[L](i)}\right)) \tag{7}$$ ``` # GRADED FUNCTION: compute_cost def compute_cost(AL, Y): """ Implement the cost function defined by equation (7). Arguments: AL -- probability vector corresponding to your label predictions, shape (1, number of examples) Y -- true "label" vector (for example: containing 0 if non-cat, 1 if cat), shape (1, number of examples) Returns: cost -- cross-entropy cost """ m = Y.shape[1] # Compute loss from aL and y. ### START CODE HERE ### (≈ 1 lines of code) cost = -1/m * np.sum(np.multiply(Y, np.log(AL)) + np.multiply(1-Y, np.log(1-AL))) # Use dot product as it's all summed at the end anyway: # But for some reason the marker didn't like it... not float64. #cost = -1/m * np.dot(Y, np.log(AL.T)) + np.dot(1 - Y, np.log(1 - AL.T)) ### END CODE HERE ### cost = np.squeeze(cost) # To make sure your cost's shape is what we expect (e.g. this turns [[17]] into 17). assert(cost.shape == ()) # cost = np.asscalar(cost) # Try to make marker happy return cost Y, AL = compute_cost_test_case() print("cost = " + str(compute_cost(AL, Y))) describe(compute_cost(AL, Y)) ``` **Expected Output**: <table> <tr> <td>**cost** </td> <td> 0.41493159961539694</td> </tr> </table> ## 6 - Backward propagation module Just like with forward propagation, you will implement helper functions for backpropagation. Remember that back propagation is used to calculate the gradient of the loss function with respect to the parameters. 
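Before building the backward helpers, here is a hedged numerical sketch (not one of the graded cells; the toy values are made up) of what "gradient of the loss" means: differentiate the cross-entropy cost of equation (7) with respect to $A^{[L]}$ analytically, then confirm the result against a central finite-difference estimate. Note the $\frac{1}{m}$ factor appears in the gradient here because equation (7) averages over the $m$ examples.

```python
import numpy as np

# Toy labels and predictions (illustrative values only)
Y = np.array([[1.0, 0.0]])
AL = np.array([[0.8, 0.3]])
m = Y.shape[1]

def cross_entropy(AL):
    # Equation (7): average cross-entropy over the m examples
    return -1 / m * np.sum(Y * np.log(AL) + (1 - Y) * np.log(1 - AL))

# Analytic gradient of equation (7) with respect to AL
dAL = -(np.divide(Y, AL) - np.divide(1 - Y, 1 - AL)) / m

# Central finite-difference estimate for the first entry of AL
eps = 1e-6
AL_plus = AL.copy(); AL_plus[0, 0] += eps
AL_minus = AL.copy(); AL_minus[0, 0] -= eps
numeric = (cross_entropy(AL_plus) - cross_entropy(AL_minus)) / (2 * eps)
```

The analytic and numerical values agree to high precision, which is exactly the kind of check you can use later to debug your backward functions.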
**Reminder**: <img src="images/backprop_kiank.png" style="width:650px;height:250px;"> <caption><center> **Figure 3** : Forward and Backward propagation for *LINEAR->RELU->LINEAR->SIGMOID* <br> *The purple blocks represent the forward propagation, and the red blocks represent the backward propagation.* </center></caption> <!-- For those of you who are expert in calculus (you don't need to be to do this assignment), the chain rule of calculus can be used to derive the derivative of the loss $\mathcal{L}$ with respect to $z^{[1]}$ in a 2-layer network as follows: $$\frac{d \mathcal{L}(a^{[2]},y)}{{dz^{[1]}}} = \frac{d\mathcal{L}(a^{[2]},y)}{{da^{[2]}}}\frac{{da^{[2]}}}{{dz^{[2]}}}\frac{{dz^{[2]}}}{{da^{[1]}}}\frac{{da^{[1]}}}{{dz^{[1]}}} \tag{8} $$ In order to calculate the gradient $dW^{[1]} = \frac{\partial L}{\partial W^{[1]}}$, you use the previous chain rule and you do $dW^{[1]} = dz^{[1]} \times \frac{\partial z^{[1]} }{\partial W^{[1]}}$. During the backpropagation, at each step you multiply your current gradient by the gradient corresponding to the specific layer to get the gradient you wanted. Equivalently, in order to calculate the gradient $db^{[1]} = \frac{\partial L}{\partial b^{[1]}}$, you use the previous chain rule and you do $db^{[1]} = dz^{[1]} \times \frac{\partial z^{[1]} }{\partial b^{[1]}}$. This is why we talk about **backpropagation**. !--> Now, similar to forward propagation, you are going to build the backward propagation in three steps: - LINEAR backward - LINEAR -> ACTIVATION backward where ACTIVATION computes the derivative of either the ReLU or sigmoid activation - [LINEAR -> RELU] $\times$ (L-1) -> LINEAR -> SIGMOID backward (whole model) ### 6.1 - Linear backward For layer $l$, the linear part is: $Z^{[l]} = W^{[l]} A^{[l-1]} + b^{[l]}$ (followed by an activation). Suppose you have already calculated the derivative $dZ^{[l]} = \frac{\partial \mathcal{L} }{\partial Z^{[l]}}$. You want to get $(dW^{[l]}, db^{[l]}, dA^{[l-1]})$.
<img src="images/linearback_kiank.png" style="width:250px;height:300px;"> <caption><center> **Figure 4** </center></caption> The three outputs $(dW^{[l]}, db^{[l]}, dA^{[l-1]})$ are computed using the input $dZ^{[l]}$. Here are the formulas you need: $$ dW^{[l]} = \frac{\partial \mathcal{L} }{\partial W^{[l]}} = \frac{1}{m} dZ^{[l]} A^{[l-1] T} \tag{8}$$ $$ db^{[l]} = \frac{\partial \mathcal{L} }{\partial b^{[l]}} = \frac{1}{m} \sum_{i = 1}^{m} dZ^{[l](i)}\tag{9}$$ $$ dA^{[l-1]} = \frac{\partial \mathcal{L} }{\partial A^{[l-1]}} = W^{[l] T} dZ^{[l]} \tag{10}$$ **Exercise**: Use the 3 formulas above to implement linear_backward(). ``` # GRADED FUNCTION: linear_backward def linear_backward(dZ, cache): """ Implement the linear portion of backward propagation for a single layer (layer l) Arguments: dZ -- Gradient of the cost with respect to the linear output (of current layer l) cache -- tuple of values (A_prev, W, b) coming from the forward propagation in the current layer Returns: dA_prev -- Gradient of the cost with respect to the activation (of the previous layer l-1), same shape as A_prev dW -- Gradient of the cost with respect to W (current layer l), same shape as W db -- Gradient of the cost with respect to b (current layer l), same shape as b """ A_prev, W, b = cache m = A_prev.shape[1] ### START CODE HERE ### (≈ 3 lines of code) dW = 1/m * np.dot(dZ, A_prev.T) db = 1/m * np.sum(dZ, axis=1, keepdims=True) dA_prev = np.dot(W.T, dZ) ### END CODE HERE ### assert (dA_prev.shape == A_prev.shape) assert (dW.shape == W.shape) assert (db.shape == b.shape) return dA_prev, dW, db # Set up some test inputs dZ, linear_cache = linear_backward_test_case() dA_prev, dW, db = linear_backward(dZ, linear_cache) print ("dA_prev = "+ str(dA_prev)) print ("dW = " + str(dW)) print ("db = " + str(db)) ``` **Expected Output**: <table style="width:90%"> <tr> <td> **dA_prev** </td> <td > [[ 0.51822968 -0.19517421] [-0.40506361 0.15255393] [ 2.37496825 -0.89445391]] </td> </tr> <tr> <td>
**dW** </td> <td > [[-0.10076895 1.40685096 1.64992505]] </td> </tr> <tr> <td> **db** </td> <td> [[ 0.50629448]] </td> </tr> </table> ### 6.2 - Linear-Activation backward Next, you will create a function that merges the two helper functions: **`linear_backward`** and the backward step for the activation **`linear_activation_backward`**. To help you implement `linear_activation_backward`, we provided two backward functions: - **`sigmoid_backward`**: Implements the backward propagation for SIGMOID unit. You can call it as follows: ```python dZ = sigmoid_backward(dA, activation_cache) ``` - **`relu_backward`**: Implements the backward propagation for RELU unit. You can call it as follows: ```python dZ = relu_backward(dA, activation_cache) ``` If $g(.)$ is the activation function, `sigmoid_backward` and `relu_backward` compute $$dZ^{[l]} = dA^{[l]} * g'(Z^{[l]}) \tag{11}$$. **Exercise**: Implement the backpropagation for the *LINEAR->ACTIVATION* layer. ``` # GRADED FUNCTION: linear_activation_backward def linear_activation_backward(dA, cache, activation): """ Implement the backward propagation for the LINEAR->ACTIVATION layer. 
Arguments: dA -- post-activation gradient for current layer l cache -- tuple of values (linear_cache, activation_cache) we store for computing backward propagation efficiently activation -- the activation to be used in this layer, stored as a text string: "sigmoid" or "relu" Returns: dA_prev -- Gradient of the cost with respect to the activation (of the previous layer l-1), same shape as A_prev dW -- Gradient of the cost with respect to W (current layer l), same shape as W db -- Gradient of the cost with respect to b (current layer l), same shape as b """ linear_cache, activation_cache = cache if activation == "relu": ### START CODE HERE ### (≈ 2 lines of code) dZ = relu_backward(dA, activation_cache) dA_prev, dW, db = linear_backward(dZ, linear_cache) ### END CODE HERE ### elif activation == "sigmoid": ### START CODE HERE ### (≈ 2 lines of code) dZ = sigmoid_backward(dA, activation_cache) dA_prev, dW, db = linear_backward(dZ, linear_cache) ### END CODE HERE ### return dA_prev, dW, db dAL, linear_activation_cache = linear_activation_backward_test_case() dA_prev, dW, db = linear_activation_backward(dAL, linear_activation_cache, activation = "sigmoid") print ("sigmoid:") print ("dA_prev = "+ str(dA_prev)) print ("dW = " + str(dW)) print ("db = " + str(db) + "\n") dA_prev, dW, db = linear_activation_backward(dAL, linear_activation_cache, activation = "relu") print ("relu:") print ("dA_prev = "+ str(dA_prev)) print ("dW = " + str(dW)) print ("db = " + str(db)) ``` **Expected output with sigmoid:** <table style="width:100%"> <tr> <td > dA_prev </td> <td >[[ 0.11017994 0.01105339] [ 0.09466817 0.00949723] [-0.05743092 -0.00576154]] </td> </tr> <tr> <td > dW </td> <td > [[ 0.10266786 0.09778551 -0.01968084]] </td> </tr> <tr> <td > db </td> <td > [[-0.05729622]] </td> </tr> </table> **Expected output with relu:** <table style="width:100%"> <tr> <td > dA_prev </td> <td > [[ 0.44090989 0. ] [ 0.37883606 0. ] [-0.2298228 0. 
]] </td> </tr> <tr> <td > dW </td> <td > [[ 0.44513824 0.37371418 -0.10478989]] </td> </tr> <tr> <td > db </td> <td > [[-0.20837892]] </td> </tr> </table> ### 6.3 - L-Model Backward Now you will implement the backward function for the whole network. Recall that when you implemented the `L_model_forward` function, at each iteration, you stored a cache which contains (X,W,b, and z). In the back propagation module, you will use those variables to compute the gradients. Therefore, in the `L_model_backward` function, you will iterate through all the hidden layers backward, starting from layer $L$. On each step, you will use the cached values for layer $l$ to backpropagate through layer $l$. Figure 5 below shows the backward pass. <img src="images/mn_backward.png" style="width:450px;height:300px;"> <caption><center> **Figure 5** : Backward pass </center></caption> ** Initializing backpropagation**: To backpropagate through this network, we know that the output is, $A^{[L]} = \sigma(Z^{[L]})$. Your code thus needs to compute `dAL` $= \frac{\partial \mathcal{L}}{\partial A^{[L]}}$. To do so, use this formula (derived using calculus which you don't need in-depth knowledge of): ```python dAL = - (np.divide(Y, AL) - np.divide(1 - Y, 1 - AL)) # derivative of cost with respect to AL ``` You can then use this post-activation gradient `dAL` to keep going backward. As seen in Figure 5, you can now feed in `dAL` into the LINEAR->SIGMOID backward function you implemented (which will use the cached values stored by the L_model_forward function). After that, you will have to use a `for` loop to iterate through all the other layers using the LINEAR->RELU backward function. You should store each dA, dW, and db in the grads dictionary. To do so, use this formula : $$grads["dW" + str(l)] = dW^{[l]}\tag{15} $$ For example, for $l=3$ this would store $dW^{[l]}$ in `grads["dW3"]`. **Exercise**: Implement backpropagation for the *[LINEAR->RELU] $\times$ (L-1) -> LINEAR -> SIGMOID* model. 
``` # GRADED FUNCTION: L_model_backward def L_model_backward(AL, Y, caches): """ Implement the backward propagation for the [LINEAR->RELU] * (L-1) -> LINEAR -> SIGMOID group Arguments: AL -- probability vector, output of the forward propagation (L_model_forward()) Y -- true "label" vector (containing 0 if non-cat, 1 if cat) caches -- list of caches containing: every cache of linear_activation_forward() with "relu" (it's caches[l], for l in range(L-1) i.e l = 0...L-2) the cache of linear_activation_forward() with "sigmoid" (it's caches[L-1]) Returns: grads -- A dictionary with the gradients grads["dA" + str(l)] = ... grads["dW" + str(l)] = ... grads["db" + str(l)] = ... """ grads = {} L = len(caches) # the number of layers m = AL.shape[1] Y = Y.reshape(AL.shape) # after this line, Y is the same shape as AL # Initializing the backpropagation ### START CODE HERE ### (1 line of code) dAL = - (np.divide(Y, AL) - np.divide(1 - Y, 1 - AL)) # derivative of cost with respect to AL ### END CODE HERE ### # Lth layer (SIGMOID -> LINEAR) gradients. Inputs: "dAL, current_cache". Outputs: "grads["dAL-1"], grads["dWL"], grads["dbL"] ### START CODE HERE ### (approx. 2 lines) current_cache = caches[L-1] # Caches is 0-indexed grads["dA" + str(L-1)], grads["dW" + str(L)], grads["db" + str(L)] \ = linear_activation_backward(dAL, current_cache, "sigmoid") ### END CODE HERE ### # Loop from l=L-2 to l=0 for l in reversed(range(L-1)): # lth layer: (RELU -> LINEAR) gradients. # Inputs: "grads["dA" + str(l + 1)], current_cache". Outputs: "grads["dA" + str(l)] , grads["dW" + str(l + 1)] , grads["db" + str(l + 1)] ### START CODE HERE ### (approx. 
5 lines) #current_cache = caches[l] # Note: caches[l] corresponds to grads["X" + str(l+1)] as layers are 1-indexed dA_prev_temp, dW_temp, db_temp = linear_activation_backward(grads["dA" + str(l+1)], caches[l], "relu") grads["dA" + str(l)] = dA_prev_temp grads["dW" + str(l + 1)] = dW_temp grads["db" + str(l + 1)] = db_temp ### END CODE HERE ### return grads AL, Y_assess, caches = L_model_backward_test_case() grads = L_model_backward(AL, Y_assess, caches) print_grads(grads) ``` **Expected Output** <table style="width:60%"> <tr> <td > dW1 </td> <td > [[ 0.41010002 0.07807203 0.13798444 0.10502167] [ 0. 0. 0. 0. ] [ 0.05283652 0.01005865 0.01777766 0.0135308 ]] </td> </tr> <tr> <td > db1 </td> <td > [[-0.22007063] [ 0. ] [-0.02835349]] </td> </tr> <tr> <td > dA1 </td> <td > [[ 0.12913162 -0.44014127] [-0.14175655 0.48317296] [ 0.01663708 -0.05670698]] </td> </tr> </table> ### 6.4 - Update Parameters In this section you will update the parameters of the model, using gradient descent: $$ W^{[l]} = W^{[l]} - \alpha \text{ } dW^{[l]} \tag{16}$$ $$ b^{[l]} = b^{[l]} - \alpha \text{ } db^{[l]} \tag{17}$$ where $\alpha$ is the learning rate. After computing the updated parameters, store them in the parameters dictionary. **Exercise**: Implement `update_parameters()` to update your parameters using gradient descent. **Instructions**: Update parameters using gradient descent on every $W^{[l]}$ and $b^{[l]}$ for $l = 1, 2, ..., L$. ``` # GRADED FUNCTION: update_parameters def update_parameters(parameters, grads, learning_rate): """ Update parameters using gradient descent Arguments: parameters -- python dictionary containing your parameters grads -- python dictionary containing your gradients, output of L_model_backward Returns: parameters -- python dictionary containing your updated parameters parameters["W" + str(l)] = ... parameters["b" + str(l)] = ... """ L = len(parameters) // 2 # number of layers in the neural network # Update rule for each parameter. Use a for loop. 
### START CODE HERE ### (≈ 3 lines of code) for l in range(L): parameters["W" + str(l+1)] -= np.multiply(learning_rate, grads["dW" + str(l+1)]) parameters["b" + str(l+1)] -= np.multiply(learning_rate, grads["db" + str(l+1)]) ### END CODE HERE ### return parameters parameters, grads = update_parameters_test_case() parameters = update_parameters(parameters, grads, 0.1) print ("W1 = "+ str(parameters["W1"])) print ("b1 = "+ str(parameters["b1"])) print ("W2 = "+ str(parameters["W2"])) print ("b2 = "+ str(parameters["b2"])) ``` **Expected Output**: <table style="width:100%"> <tr> <td > W1 </td> <td > [[-0.59562069 -0.09991781 -2.14584584 1.82662008] [-1.76569676 -0.80627147 0.51115557 -1.18258802] [-1.0535704 -0.86128581 0.68284052 2.20374577]] </td> </tr> <tr> <td > b1 </td> <td > [[-0.04659241] [-1.28888275] [ 0.53405496]] </td> </tr> <tr> <td > W2 </td> <td > [[-0.55569196 0.0354055 1.32964895]]</td> </tr> <tr> <td > b2 </td> <td > [[-0.84610769]] </td> </tr> </table> ## 7 - Conclusion Congrats on implementing all the functions required for building a deep neural network! We know it was a long assignment but going forward it will only get better. The next part of the assignment is easier. In the next assignment you will put all these together to build two models: - A two-layer neural network - An L-layer neural network You will in fact use these models to classify cat vs non-cat images!
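As a closing aside, the gradient-descent update of equations (16) and (17) can be watched in isolation on a toy one-parameter least-squares problem (all values below are invented for illustration, not part of the assignment):

```python
import numpy as np

# Standalone illustration of the update rule W <- W - alpha * dW:
# fit a single weight w so that w * x approximates y, by repeatedly
# stepping against the gradient of the squared error 0.5 * (w*x - y)**2.
w, alpha = 0.0, 0.1      # initial parameter and learning rate (made up)
x, y = 2.0, 6.0          # we want w * x ≈ y, so w should approach 3
for _ in range(200):
    dw = (w * x - y) * x  # gradient of 0.5 * (w*x - y)**2 w.r.t. w
    w -= alpha * dw       # the same update rule as update_parameters
print(w)  # converges to 3.0
```

In the L-layer network, the only difference is that this scalar update runs over every entry of every $W^{[l]}$ and $b^{[l]}$.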
<a href="https://colab.research.google.com/github/neerjavashist/PUBH6859/blob/main/assignmet5_Neerja.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> Assignment 5 Konzo is a distinct upper motor neuron disease that is prevalent in sub-Saharan Africa. As part of a pilot study to investigate the relationship between the gut microbiome and konzo, individuals with a heavy reliance on cassava, whose consumption without proper detoxification is implicated in konzo, were assessed from regions with varying prevalence of konzo. Samples were taken from the urban capital of Kinshasa (Kin), where no outbreaks of konzo are documented. Additional samples were also taken from a rural control, Masimanimba (Mas), where no outbreaks of konzo are historically reported. Individuals from two regions of high (HPZ) and low (LPZ) konzo prevalence in the Kahemba region were sampled, with unaffected (U) and konzo (K) individuals from each. Bacteroides and Prevotella are genera with known associations with urban and rural lifestyles, respectively. Here we use the Kruskal-Wallis test to assess whether there is a significant difference in the relative abundance of these genera across the six groups, and the data are visualized using box plots. ``` import pandas as pd import numpy as np import scipy import scipy.stats import plotly.express as px from google.colab import drive drive.mount('/content/drive') #get csv file that contains read counts per genus for different samples. Header is true #1st column contains genus names. Columns 2 to 4 contain additional info. Starting at column 5 are sample read counts for each genus genus = pd.read_csv("/content/drive/My Drive/KinshasaControl_Konzo3_Bacteria_Genus_ReadCounts.csv") genus #meta file contains sample data (such as which geographical location the samples were collected from) meta = pd.read_csv("/content/drive/My Drive/KinshasaControl_Konzo3_sampleData.csv") meta #set NA's to 0.
Remove the unnecessary columns genus = genus.replace(np.nan,0) genus.pop("taxRank") genus.pop("taxID") genus.pop("Max") genus #Make the column with genus into row names so data structure is only readcounts. genus = genus.set_index('name') #Convert read counts to relative abundance (done with each column since each sample is one column) genus = genus.apply(lambda x : x / x.sum()) genus #genus #transpose data frame so samples are rows. #Also remove NA's since those got introduced for genera whose sum was 0 (making denominator 0 for relative abundance calculation) genus_t = genus.transpose() genus_t = genus_t.replace(np.nan,0) #might be a better way to do this, but convert rownames back to column so we can merge the meta file with sample name genus_t.index.name = 'name' genus_t.reset_index(inplace=True) #name column Sample to match meta file genus_t = genus_t.rename(columns=str).rename(columns={'name':'Sample'}) #Merge meta data with genus_t genus_tj = pd.merge(genus_t, meta, on=['Sample']) genus_tj genus_tj = genus_tj.set_index('Sample') #Do Kruskal-Wallis tests to see if there is a significant difference in the relative abundance of each genus between the six groups #microbiome data tends to not be normally distributed so a non-parametric test is appropriate #Bacteroides has been previously shown to be enriched in urban populations bact_kw = scipy.stats.kruskal(*[group["Bacteroides"].values for name, group in genus_tj.groupby("Status")]) bact_kw #KruskalResult(statistic=2.0190546347452027, pvalue=0.8465022320762265) #Prevotella has previously been shown to be enriched in rural populations prev_kw = scipy.stats.kruskal(*[group["Prevotella"].values for name, group in genus_tj.groupby("Status")]) prev_kw #KruskalResult(statistic=39.928496009821856, pvalue=1.5437782911043988e-07) Bact = px.box(genus_tj, x="Status", y="Bacteroides", color = "Status", category_orders={ "Status" : ["Kinshasa", "Masimanimba", "Unaffected_Low_Prevalence_Zone", "Konzo_Low_Prevalence_Zone",
"Unaffected_High_Prevalence_Zone", "Konzo_High_Prevalence_Zone"]}, boxmode="overlay") Bact.update_layout( xaxis = dict( tickvals = ["Kinshasa", "Masimanimba", "Unaffected_Low_Prevalence_Zone", "Konzo_Low_Prevalence_Zone", "Unaffected_High_Prevalence_Zone", "Konzo_High_Prevalence_Zone"], ticktext = ["Kin", "Mas", "ULPZ", "KLPZ", "UHPZ", "KHPZ"] ), showlegend=False ) Bact.show() #Although Kruskal-Wallis test resulted in a p-value > 0.05, a post-hoc test may be considered to see if there is an enrichment of Bacteroides in urban population in this dataset. Prev = px.box(genus_tj, x="Status", y="Prevotella", color = "Status", category_orders={ "Status" : ["Kinshasa", "Masimanimba", "Unaffected_Low_Prevalence_Zone", "Konzo_Low_Prevalence_Zone", "Unaffected_High_Prevalence_Zone", "Konzo_High_Prevalence_Zone"]}, boxmode="overlay") Prev.update_layout( xaxis = dict( tickvals = ["Kinshasa", "Masimanimba", "Unaffected_Low_Prevalence_Zone", "Konzo_Low_Prevalence_Zone", "Unaffected_High_Prevalence_Zone", "Konzo_High_Prevalence_Zone"], ticktext = ["Kin", "Mas", "ULPZ", "KLPZ", "UHPZ", "KHPZ"] ), showlegend=False ) Prev.show() #The Kruskal-Wallis test resulted in a p-value < 0.01, a post-hoc test is necessary to see if there is an enrichment of Prevotella in rural population in specific pairwise comparisons ```
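The `scipy.stats.kruskal` call pattern used above can be sketched in isolation on toy data (the numbers below are invented for illustration, not from the konzo dataset):

```python
import scipy.stats

# Minimal Kruskal-Wallis example: three hypothetical groups of
# made-up relative-abundance values, with group_b clearly shifted.
group_a = [0.10, 0.12, 0.11, 0.13]
group_b = [0.30, 0.28, 0.35, 0.31]
group_c = [0.09, 0.11, 0.10, 0.12]

# One positional argument per group, exactly as in the starred call above.
stat, pvalue = scipy.stats.kruskal(group_a, group_b, group_c)
print(stat, pvalue)  # a large H statistic and a small p-value
```

Because the test is rank-based, it makes no normality assumption, which is why it suits relative-abundance data; a significant result only says that at least one group differs, hence the post-hoc pairwise comparisons mentioned above.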
# Demo: RAIL Evaluation The purpose of this notebook is to demonstrate the application of the metrics scripts to be used on the photo-z PDF catalogs produced by the PZ working group. The first implementation of the _evaluation_ module is based on the refactoring of the code used in [Schmidt et al. 2020](https://arxiv.org/pdf/2001.03621.pdf), available on the Github repository [PZDC1paper](https://github.com/LSSTDESC/PZDC1paper). To run this notebook, you must install qp and have the notebook in the same directory as `utils.py` (available in RAIL's examples directory). You must also install some run-of-the-mill Python packages: numpy, scipy, matplotlib, and seaborn. ### Contents * [Data](#data) - [Photo-z Results](#fzboost) * [CDF-based metrics](#metrics) - [PIT](#pit) - [QQ plot](#qq) * [Summary statistics of CDF-based metrics](#summary_stats) - [KS](#ks) - [CvM](#cvm) - [AD](#ad) - [KLD](#kld) * [CDE loss](#cde_loss) * [Summary](#summary) ``` from rail.evaluation.metrics.pit import * from rail.evaluation.metrics.cdeloss import * from utils import read_pz_output, plot_pit_qq, ks_plot from main import Summary import qp import os %matplotlib inline %reload_ext autoreload %autoreload 2 ``` <a class="anchor" id="data"></a> # Data To compute the photo-z metrics of a given test sample, it is necessary to read the output of a photo-z code containing galaxies' photo-z PDFs. Let's use the toy data available in `tests/data/` (**test_dc2_training_9816.hdf5** and **test_dc2_validation_9816.hdf5**) and the configuration file available in `examples/configs/FZBoost.yaml` to generate a small sample of photo-z PDFs using the **FZBoost** algorithm available on RAIL's _estimation_ module.
<a class="anchor" id="fzboost"></a> ### Photo-z Results #### Run FZBoost Go to dir `<your_path>/RAIL/examples/estimation/` and run the command: `python main.py configs/FZBoost.yaml` The photo-z output files (inputs for this notebook) will be written at: `<your_path>/RAIL/examples/estimation/results/FZBoost/test_FZBoost.hdf5`. Let's use the ancillary function **read_pz_output** to facilitate the reading of all necessary data. ``` my_path = '/Users/sam/WORK/software/TMPRAIL/RAIL' # replace this with your local path to RAIL's parent dir pdfs_file = os.path.join(my_path, "examples/estimation/results/FZBoost/test_FZBoost.hdf5") ztrue_file = os.path.join(my_path, "tests/data/test_dc2_validation_9816.hdf5") pdfs, zgrid, ztrue, photoz_mode = read_pz_output(pdfs_file, ztrue_file) # all numpy arrays ``` The inputs for the metrics shown above are the array of true (or spectroscopic) redshifts, and an ensemble of photo-z PDFs (a `qp.Ensemble` object). ``` fzdata = qp.Ensemble(qp.interp, data=dict(xvals=zgrid, yvals=pdfs)) ``` *** <a class="anchor" id="metrics"></a> # Metrics <a class="anchor" id="pit"></a> ## PIT The Probability Integral Transform (PIT) is the Cumulative Distribution Function (CDF) of the photo-z PDF $$ \mathrm{CDF}(f, q)\ =\ \int_{-\infty}^{q}\ f(z)\ dz $$ evaluated at the galaxy's true redshift for every galaxy $i$ in the catalog. $$ \mathrm{PIT}(p_{i}(z);\ z_{i})\ =\ \int_{-\infty}^{z^{true}_{i}}\ p_{i}(z)\ dz $$ ``` pitobj = PIT(fzdata, ztrue) quant_ens, metamets = pitobj.evaluate() ``` The _evaluate_ method of the PIT class returns two objects, a quantile distribution based on the full set of PIT values (a frozen distribution object), and a dictionary of meta metrics associated to PIT (to be detailed below). ``` quant_ens metamets ``` PIT values ``` pit_vals = np.array(pitobj._pit_samps) pit_vals ``` ### PIT outlier rate The PIT outlier rate is a global metric defined as the fraction of galaxies in the sample with extreme PIT values.
The lower and upper limits for considering a PIT as outlier are optional parameters set at the Metrics instantiation (default values are: PIT $<10^{-4}$ or PIT $>0.9999$). ``` pit_out_rate = PITOutRate(pit_vals, quant_ens).evaluate() print(f"PIT outlier rate of this sample: {pit_out_rate:.6f}") ``` <a class="anchor" id="qq"></a> ## PIT-QQ plot The histogram of PIT values is a useful tool for a qualitative assessment of PDF quality. It shows whether the PDFs are: * biased (tilted PIT histogram) * under-dispersed (excess counts close to the boundaries 0 and 1) * over-dispersed (lack of counts close to the boundaries 0 and 1) * well-calibrated (flat histogram) Following the standards in the DC1 paper, the PIT histogram is accompanied by the quantile-quantile (QQ) plot, which can be used to qualitatively compare the PIT distribution obtained with the PDFs against the ideal case (uniform distribution). The closer the QQ plot is to the diagonal, the better the PDFs' calibration. ``` plot_pit_qq(pdfs, zgrid, ztrue, title="PIT-QQ - toy data", code="FZBoost", pit_out_rate=pit_out_rate, savefig=False) ``` The black horizontal line represents the ideal case where the PIT histogram would behave as a uniform distribution U(0,1). *** <a class="anchor" id="summary_stats"></a> # Summary statistics of CDF-based metrics To globally evaluate the quality of the PDF estimates, `rail.evaluation` provides a set of metrics to compare the empirical distributions of PIT values with the reference uniform distribution, U(0,1). <a class="anchor" id="ks"></a> ### Kolmogorov-Smirnov Let's start with the traditional Kolmogorov-Smirnov (KS) statistic test, which is the maximum difference between the empirical and the expected cumulative distributions of PIT values: $$ \mathrm{KS} \equiv \max_{PIT} \Big( \left| \ \mathrm{CDF} \small[ \hat{f}, z \small] - \mathrm{CDF} \small[ \tilde{f}, z \small] \ \right| \Big) $$ Where $\hat{f}$ is the PIT distribution and $\tilde{f}$ is U(0,1).
Therefore, the smaller the value of KS, the closer the PIT distribution is to uniform. The `evaluate` method of the PITKS class returns a named tuple with the statistic and p-value. ``` ksobj = PITKS(pit_vals, quant_ens) ks_stat_and_pval = ksobj.evaluate() ks_stat_and_pval ``` Visual interpretation of the KS statistic: ``` ks_plot(pitobj) print(f"KS metric of this sample: {ks_stat_and_pval.statistic:.4f}") ``` <a class="anchor" id="cvm"></a> ### Cramer-von Mises Similarly, let's calculate the Cramer-von Mises (CvM) test, a variant of the KS statistic defined as the mean-square difference between the CDFs of an empirical PDF and the true PDFs: $$ \mathrm{CvM}^2 \equiv \int_{-\infty}^{\infty} \Big( \mathrm{CDF} \small[ \hat{f}, z \small] \ - \ \mathrm{CDF} \small[ \tilde{f}, z \small] \Big)^{2} \mathrm{dCDF}(\tilde{f}, z) $$ on the distribution of PIT values, which should be uniform if the PDFs are perfect. ``` cvmobj = PITCvM(pit_vals, quant_ens) cvm_stat_and_pval = cvmobj.evaluate() print(f"CvM metric of this sample: {cvm_stat_and_pval.statistic:.4f}") ``` <a class="anchor" id="ad"></a> ### Anderson-Darling Another variation of the KS statistic is the Anderson-Darling (AD) test, a weighted mean-squared difference featuring enhanced sensitivity to discrepancies in the tails of the distribution. $$ \mathrm{AD}^2 \equiv N_{tot} \int_{-\infty}^{\infty} \frac{\big( \mathrm{CDF} \small[ \hat{f}, z \small] \ - \ \mathrm{CDF} \small[ \tilde{f}, z \small] \big)^{2}}{\mathrm{CDF} \small[ \tilde{f}, z \small] \big( 1 \ - \ \mathrm{CDF} \small[ \tilde{f}, z \small] \big)}\mathrm{dCDF}(\tilde{f}, z) $$ ``` adobj = PITAD(pit_vals, quant_ens) ad_stat_crit_sig = adobj.evaluate() ad_stat_crit_sig print(f"AD metric of this sample: {ad_stat_crit_sig.statistic:.4f}") ``` It is possible to remove catastrophic outliers before calculating the integral for the sake of numerical stability. For instance, Schmidt et al.
computed the Anderson-Darling statistic within the interval (0.01, 0.99). ``` ad_stat_crit_sig_cut = adobj.evaluate(pit_min=0.01, pit_max=0.99) print(f"AD metric of this sample: {ad_stat_crit_sig.statistic:.4f}") print(f"AD metric for 0.01 < PIT < 0.99: {ad_stat_crit_sig_cut.statistic:.4f}") ``` <a class="anchor" id="cde_loss"></a> # CDE Loss In the absence of true photo-z posteriors, the metric used to evaluate individual PDFs is the **Conditional Density Estimate (CDE) Loss**, a metric analogue to the root-mean-squared-error: $$ L(f, \hat{f}) \equiv \int \int {\big(f(z | x) - \hat{f}(z | x) \big)}^{2} dzdP(x), $$ where $f(z | x)$ is the true photo-z PDF and $\hat{f}(z | x)$ is the estimated PDF in terms of the photometry $x$. Since $f(z | x)$ is unknown, we estimate the **CDE Loss** as described in [Izbicki & Lee, 2017 (arXiv:1704.08095)](https://arxiv.org/abs/1704.08095). : $$ \mathrm{CDE} = \mathbb{E}\big( \int{{\hat{f}(z | X)}^2 dz} \big) - 2{\mathbb{E}}_{X, Z}\big(\hat{f}(Z, X) \big) + K_{f}, $$ where the first term is the expectation value of photo-z posterior with respect to the marginal distribution of the covariates X, and the second term is the expectation value with respect to the joint distribution of observables X and the space Z of all possible redshifts (in practice, the centroids of the PDF bins), and the third term is a constant depending on the true conditional densities $f(z | x)$. ``` cdelossobj = CDELoss(fzdata, zgrid, ztrue) cde_stat_and_pval = cdelossobj.evaluate() cde_stat_and_pval print(f"CDE loss of this sample: {cde_stat_and_pval.statistic:.2f}") ``` <a class="anchor" id="summary"></a> # Summary ``` summary = Summary(pdfs, zgrid, ztrue) summary.markdown_metrics_table(pitobj=pitobj) # pitobj as optional input to speed-up metrics evaluation summary.markdown_metrics_table(pitobj=pitobj, show_dc1="FlexZBoost") ```
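The PIT/KS logic summarized in this notebook can also be illustrated without qp or RAIL at all. The sketch below uses entirely synthetic Gaussian "photo-z PDFs" (all values invented): if each object's true redshift really is drawn from its reported PDF, the PIT values come out uniform and the KS statistic against U(0,1) is small.

```python
import numpy as np
import scipy.stats

# Synthetic calibration check: each object gets a hypothetical Gaussian
# PDF N(mu_i, sigma), and the "true" redshift is drawn from that PDF,
# so the PIT values should be distributed as U(0, 1).
rng = np.random.default_rng(42)
sigma = 0.05
mu = rng.uniform(0.2, 1.0, size=5000)   # made-up PDF centers
ztrue = rng.normal(mu, sigma)           # truths consistent with the PDFs

pit = scipy.stats.norm.cdf(ztrue, loc=mu, scale=sigma)   # CDF at the truth
ks_stat, ks_pval = scipy.stats.kstest(pit, "uniform")    # compare to U(0,1)
out_rate = np.mean((pit < 1e-4) | (pit > 0.9999))        # PIT outlier rate
print(ks_stat, out_rate)  # small KS statistic, near-zero outlier rate
```

Biasing `ztrue` or using the wrong `sigma` in the `cdf` call reproduces the tilted and under/over-dispersed PIT histograms described in the PIT-QQ section.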
# IBM Advanced Data Science Capstone Project ## Sentiment Analysis of Amazon Customer Reviews ### Harsh V Singh, Apr 2021 ## Extract, Transform, Load (ETL) This notebook contains the comprehensive step-by-step process for preparing the raw data to be used in the project. The data that we are using is available in the form of two csv files (train.csv/ test.csv). We will read these files into memory and then store them in parquet files with the same name. *Spark csv reader is not able to handle commas within the quoted text of the reviews. Hence, we will first read the files into Pandas dataframes and then export them into parquet files*. ## Importing required Python libraries and initializing Apache Spark environment ``` import pandas as pd import csv import time from pathlib import Path import findspark findspark.init() from pyspark import SparkContext, SparkConf from pyspark.sql import SQLContext, SparkSession from pyspark.sql.types import StructType, StructField, DoubleType, IntegerType, StringType conf = SparkConf().setMaster("local[*]") \ .setAll([("spark.driver.memory", "16g"),\ ("spark.executor.memory", "4g"), \ ("spark.driver.maxResultSize", "16g"), \ ("spark.executor.cores", "4")]) sc = SparkContext.getOrCreate(conf=conf) from pyspark.sql import SparkSession spark = SparkSession \ .builder \ .getOrCreate() ``` ## Reading data from CSV and storing local copies The data that we are using for this project is available to us in the form of two csv files (train.csv/ test.csv). We will read these files into memory and then store them in parquet files with the same name. We will write a function called **readSparkDFFromParquet** which will read the parquet files into memory as Spark dataframes. In case the parquet files are not found, this function will call another function called **savePandasDFToParquet** which reads the original csv files into Pandas dataframes and saves them as **parquet** files.
*The reason why we need to read the csv files into a Pandas dataframe is because the Spark csv reader function is not able to handle commas within the quoted text of the reviews. In order to solve that, we will use the Pandas csv reader to process the data initially and then export them into parquet files*. ``` # Function to print time taken by a particular process, given the start and end times def printElapsedTime(startTime, endTime): elapsedTime = endTime - startTime print("-- Process time = %.2f seconds --"%(elapsedTime)) # Schema that defines the columns and datatypes of the data in the csv files rawSchema = StructType([ StructField("rating", IntegerType(), True), StructField("review_heading", StringType(), True), StructField("review_text", StringType(), True) ]) # Function to save a Pandas dataframe as a parquet file def savePandasDFToParquet(csvPath, parqPath, rawSchema, printTime=False): startTime = time.time() pandasDF = pd.read_csv(csvPath, header=None) pandasDF.columns = rawSchema.names pandasDF.to_parquet(parqPath, engine="pyarrow") endTime = time.time() if printTime: printElapsedTime(startTime=startTime, endTime=endTime) return # Function to read a parquet file into a Spark dataframe # If the parquet file is not found, it will be created from the original csv def readSparkDFFromParquet(csvPath, parqPath, rawSchema, printTime=False): if (Path(parqPath).is_file() == False): print("Parquet file not found... converting %s to parquet!"%(csvPath)) savePandasDFToParquet(csvPath=csvPath, parqPath=parqPath, rawSchema=rawSchema, printTime=printTime) sparkDF = spark.read.parquet(parqPath) return (sparkDF) ``` ## Load local data for sanity check We will load the train and test sets and print a few samples as well as the size of the datasets.
``` trainRaw = readSparkDFFromParquet(csvPath="data/rawCSVs/train.csv", parqPath="data/trainRaw.parquet", rawSchema=rawSchema, printTime=True) testRaw = readSparkDFFromParquet(csvPath="data/rawCSVs/test.csv", parqPath="data/testRaw.parquet", rawSchema=rawSchema, printTime=True) trainRaw.show(5) print("There are %d/ %d samples in the training/ test data."%(trainRaw.count(), testRaw.count())) print("Sample review text: %s"%(trainRaw.take(1)[0]["review_text"])) spark.sparkContext.stop() ```
# Model Selection/Evaluation with Yellowbrick Oftentimes with a new dataset, the choice of the best machine learning algorithm is not always obvious at the outset. Thanks to the scikit-learn API, we can easily approach the problem of model selection using model *evaluation*. As we'll see in these examples, Yellowbrick is helpful for facilitating the process. ## Evaluating Classifiers Classification models attempt to predict a target in a discrete space, that is, assign each instance to one or more discrete categories. Classification score visualizers display the differences between classes as well as a number of classifier-specific visual evaluations. ### ROCAUC A `ROCAUC` (Receiver Operating Characteristic/Area Under the Curve) plot allows the user to visualize the tradeoff between the classifier’s sensitivity and specificity. The Receiver Operating Characteristic (ROC) is a measure of a classifier’s predictive quality that compares and visualizes the tradeoff between the model’s sensitivity and specificity. When plotted, a ROC curve displays the true positive rate on the Y axis and the false positive rate on the X axis, on both a global average and per-class basis. The ideal point is therefore the top-left corner of the plot: false positives are zero and true positives are one. This leads to another metric, the area under the curve (AUC), which is a computation of the relationship between false positives and true positives. The higher the AUC, the better the model generally is. However, it is also important to inspect the “steepness” of the curve, as this describes the maximization of the true positive rate while minimizing the false positive rate.
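The AUC described above also has a useful probabilistic reading: it equals the probability that a randomly chosen positive instance is scored higher than a randomly chosen negative one. A minimal sketch with made-up classifier scores (pure NumPy; these are hypothetical numbers, not the occupancy data used below):

```python
import numpy as np

# Hypothetical classifier scores for 4 positive and 4 negative instances
pos = np.array([0.35, 0.80, 0.90, 0.65])
neg = np.array([0.10, 0.40, 0.20, 0.30])

# AUC = fraction of (positive, negative) pairs that the classifier ranks correctly
auc = np.mean(pos[:, None] > neg[None, :])
print(auc)  # 0.9375 (15 of the 16 pairs are ordered correctly)
```

With ties in the scores, each tied pair contributes one half; `sklearn.metrics.roc_auc_score` handles that case and computes the same quantity from the ROC curve.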
``` from sklearn.linear_model import LogisticRegression from sklearn.model_selection import train_test_split from yellowbrick.classifier import ROCAUC from yellowbrick.datasets import load_occupancy # Load the classification data set X, y = load_occupancy() # Specify the classes of the target classes = ["unoccupied", "occupied"] # Create the train and test data X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2) # Instantiate the visualizer with the classification model visualizer = ROCAUC(LogisticRegression( multi_class="auto", solver="liblinear" ), classes=classes, size=(1080, 720) ) visualizer.fit(X_train, y_train) # Fit the training data to the visualizer visualizer.score(X_test, y_test) # Evaluate the model on the test data visualizer.show() # Draw the data ``` Yellowbrick’s `ROCAUC` Visualizer also allows for plotting **multiclass** classification curves. ROC curves are typically used in binary classification, and in fact the Scikit-Learn `roc_curve` metric is only able to compute metrics for binary classifiers. Yellowbrick addresses this by binarizing the output (per-class) or by using one-vs-rest (micro score) or one-vs-all (macro score) strategies of classification.
``` from sklearn.linear_model import RidgeClassifier from sklearn.preprocessing import OrdinalEncoder, LabelEncoder from yellowbrick.datasets import load_game # Load multi-class classification dataset X, y = load_game() classes = ['win', 'loss', 'draw'] # Encode the non-numeric columns X = OrdinalEncoder().fit_transform(X) y = LabelEncoder().fit_transform(y) # Create the train and test data X_train, X_test, y_train, y_test = train_test_split( X, y, test_size=0.2 ) visualizer = ROCAUC( RidgeClassifier(), classes=classes, size=(1080, 720) ) visualizer.fit(X_train, y_train) # Fit the training data to the visualizer visualizer.score(X_test, y_test) # Evaluate the model on the test data visualizer.show() # Draw the data ``` ### ClassificationReport Heatmap The classification report visualizer displays the precision, recall, F1, and support scores for the model. In order to support easier interpretation and problem detection, the report integrates numerical scores with a color-coded heatmap. All heatmaps are in the range `(0.0, 1.0)` to facilitate easy comparison of classification models across different classification reports. ``` from sklearn.naive_bayes import GaussianNB from yellowbrick.classifier import ClassificationReport # Load the classification data set X, y = load_occupancy() classes = ["unoccupied", "occupied"] # Create the train and test data X_train, X_test, y_train, y_test = train_test_split( X, y, test_size=0.2 ) # Instantiate the classification model and visualizer bayes = GaussianNB() visualizer = ClassificationReport( bayes, classes=classes, support=True, size=(1080, 720) ) visualizer.fit(X_train, y_train) # Fit the visualizer and the model visualizer.score(X_test, y_test) # Evaluate the model on the test data visualizer.show() # Draw the data ``` The classification report shows a representation of the main classification metrics on a per-class basis. 
This gives a deeper intuition of the classifier behavior over global accuracy, which can mask functional weaknesses in one class of a multiclass problem. Visual classification reports are used to compare classification models to select models that are “redder”, e.g. have stronger classification metrics or that are more balanced. The metrics are defined in terms of true and false positives, and true and false negatives. Positive and negative in this case are generic names for the classes of a binary classification problem. In the example above, we would consider true and false occupied and true and false unoccupied. Therefore a true positive is when the actual class is positive as is the estimated class. A false positive is when the actual class is negative but the estimated class is positive. Using this terminology the metrics are defined as follows: **precision** Precision is the ability of a classifier not to label an instance positive that is actually negative. For each class it is defined as the ratio of true positives to the sum of true and false positives. Said another way, “for all instances classified positive, what percent was correct?” **recall** Recall is the ability of a classifier to find all positive instances. For each class it is defined as the ratio of true positives to the sum of true positives and false negatives. Said another way, “for all instances that were actually positive, what percent was classified correctly?” **f1 score** The F1 score is a weighted harmonic mean of precision and recall such that the best score is 1.0 and the worst is 0.0. Generally speaking, F1 scores are lower than accuracy measures as they embed precision and recall into their computation. As a rule of thumb, the weighted average of F1 should be used to compare classifier models, not global accuracy. **support** Support is the number of actual occurrences of the class in the specified dataset.
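The precision, recall, and F1 definitions above reduce to simple ratios of the confusion counts. A quick worked example with hypothetical counts for a single class (not taken from the report above):

```python
# Hypothetical confusion counts for one class
tp, fp, fn = 40, 10, 20

precision = tp / (tp + fp)   # "for all instances classified positive, what percent was correct?"
recall = tp / (tp + fn)      # "for all instances actually positive, what percent was found?"
f1 = 2 * precision * recall / (precision + recall)  # harmonic mean of the two

print(precision, recall, round(f1, 3))  # 0.8, ~0.667, 0.727
```

Note that the F1 value lands between precision and recall but is pulled toward the smaller of the two, which is why it penalizes unbalanced classifiers more than a simple average would.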
Imbalanced support in the training data may indicate structural weaknesses in the reported scores of the classifier and could indicate the need for stratified sampling or rebalancing. Support doesn’t change between models but instead diagnoses the evaluation process. ### ClassPredictionError The Yellowbrick `ClassPredictionError` plot is a twist on other and sometimes more familiar classification model diagnostic tools like the Confusion Matrix and Classification Report. Like the Classification Report, this plot shows the support (number of training samples) for each class in the fitted classification model as a stacked bar chart. Each bar is segmented to show the proportion of predictions (including false negatives and false positives, like a Confusion Matrix) for each class. You can use a `ClassPredictionError` to visualize which classes your classifier is having a particularly difficult time with, and more importantly, what incorrect answers it is giving on a per-class basis. This can often enable you to better understand strengths and weaknesses of different models and particular challenges unique to your dataset. The class prediction error chart provides a way to quickly understand how good your classifier is at predicting the right classes. 
``` from sklearn.model_selection import train_test_split from sklearn.ensemble import RandomForestClassifier from yellowbrick.classifier import ClassPredictionError from yellowbrick.datasets import load_credit X, y = load_credit() classes = ['account in default', 'current with bills'] # Perform 80/20 training/test split X_train, X_test, y_train, y_test = train_test_split( X, y, test_size=0.20, random_state=42 ) # Instantiate the classification model and visualizer visualizer = ClassPredictionError( RandomForestClassifier(n_estimators=10), classes=classes, size=(1080, 720) ) # Fit the training data to the visualizer visualizer.fit(X_train, y_train) # Evaluate the model on the test data visualizer.score(X_test, y_test) # Draw visualization visualizer.show() ``` ## Evaluating Regressors Regression models attempt to predict a target in a continuous space. Regressor score visualizers display the instances in model space to better understand how the model is making predictions. ### PredictionError A prediction error plot shows the actual targets from the dataset against the predicted values generated by our model. This allows us to see how much variance is in the model. Data scientists can diagnose regression models using this plot by comparing against the 45 degree line, where the prediction exactly matches the model. 
``` from sklearn.linear_model import Lasso from yellowbrick.regressor import PredictionError from yellowbrick.datasets import load_concrete # Load regression dataset X, y = load_concrete() # Create the train and test data X_train, X_test, y_train, y_test = train_test_split( X, y, test_size=0.2, random_state=42 ) # Instantiate the linear model and visualizer model = Lasso() visualizer = PredictionError(model, size=(1080, 720)) visualizer.fit(X_train, y_train) # Fit the training data to the visualizer visualizer.score(X_test, y_test) # Evaluate the model on the test data visualizer.show() # Draw the data ``` ### Residuals Plot Residuals, in the context of regression models, are the difference between the observed value of the target variable (y) and the predicted value (ŷ), i.e. the error of the prediction. The residuals plot shows the difference between residuals on the vertical axis and the dependent variable on the horizontal axis, allowing you to detect regions within the target that may be susceptible to more or less error. 
``` from sklearn.linear_model import Ridge from yellowbrick.regressor import ResidualsPlot # Instantiate the linear model and visualizer model = Ridge() visualizer = ResidualsPlot(model, size=(1080, 720)) visualizer.fit(X_train, y_train) # Fit the training data to the visualizer visualizer.score(X_test, y_test) # Evaluate the model on the test data visualizer.show() # Draw the data ``` ### Try them all ``` from sklearn.svm import SVR from sklearn.neural_network import MLPRegressor from sklearn.neighbors import KNeighborsRegressor from sklearn.linear_model import BayesianRidge, LinearRegression regressors = { "support vector machine": SVR(), "multilayer perceptron": MLPRegressor(), "nearest neighbors": KNeighborsRegressor(), "bayesian ridge": BayesianRidge(), "linear regression": LinearRegression(), } for _, regressor in regressors.items(): visualizer = ResidualsPlot(regressor) visualizer.fit(X_train, y_train) visualizer.score(X_test, y_test) visualizer.show() ``` ## Diagnostics Target visualizers specialize in visually describing the dependent variable for supervised modeling, often referred to as y or the target. ### Class Balance Report One of the biggest challenges for classification models is an imbalance of classes in the training data. Severe class imbalances may be masked by relatively good F1 and accuracy scores – the classifier is simply guessing the majority class and not making any evaluation on the underrepresented class. There are several techniques for dealing with class imbalance such as stratified sampling, down sampling the majority class, weighting, etc. But before these actions can be taken, it is important to understand what the class balance is in the training data. The `ClassBalance` visualizer supports this by creating a bar chart of the support for each class, that is the frequency of the classes’ representation in the dataset. 
``` from yellowbrick.target import ClassBalance # Load multi-class classification dataset X, y = load_game() # Instantiate the visualizer visualizer = ClassBalance( labels=["draw", "loss", "win"], size=(1080, 720) ) visualizer.fit(y) visualizer.show() ``` Yellowbrick visualizers are intended to steer the model selection process. Generally, model selection is a search problem defined as follows: given N instances described by numeric properties and (optionally) a target for estimation, find a model described by a triple composed of features, an algorithm and hyperparameters that best fits the data. For most purposes the “best” triple refers to the triple that receives the best cross-validated score for the model type. The yellowbrick.model_selection package provides visualizers for inspecting the performance of cross validation and hyperparameter tuning. Many visualizers wrap functionality found in `sklearn.model_selection` and others build upon it for performing multi-model comparisons. ### Cross Validation Generally we determine whether a given model is optimal by looking at its F1, precision, recall, and accuracy (for classification), or its coefficient of determination (R2) and error (for regression). However, real world data is often distributed somewhat unevenly, meaning that the fitted model is likely to perform better on some sections of the data than on others. Yellowbrick’s `CVScores` visualizer enables us to visually explore these variations in performance using different cross validation strategies. Cross-validation starts by shuffling the data (to prevent any unintentional ordering errors) and splitting it into `k` folds. Then `k` models are fit on $\frac{k-1} {k}$ of the data (called the training split) and evaluated on $\frac {1} {k}$ of the data (called the test split). The results from each evaluation are averaged together for a final score, then the final model is fit on the entire dataset for operationalization.
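The shuffle-then-split procedure just described can be sketched in a few lines of plain NumPy (hypothetical n = 20 and k = 5; `CVScores` and `sklearn.model_selection` perform the equivalent bookkeeping internally):

```python
import numpy as np

n, k = 20, 5
rng = np.random.default_rng(0)
idx = rng.permutation(n)          # shuffle to prevent unintentional ordering effects
folds = np.array_split(idx, k)    # k disjoint test splits

for test_idx in folds:
    train_idx = np.setdiff1d(idx, test_idx)  # the remaining (k-1)/k of the data
    # fit on train_idx, score on test_idx, then average the k scores at the end
    assert len(train_idx) + len(test_idx) == n

print([len(f) for f in folds])  # [4, 4, 4, 4, 4]
```

Every instance appears in exactly one test split, so the averaged score uses all of the data without ever scoring a model on points it was trained on.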
In Yellowbrick, the `CVScores` visualizer displays cross-validated scores as a bar chart (one bar for each fold) with the average score across all folds plotted as a horizontal dotted line. ``` from sklearn.naive_bayes import MultinomialNB from sklearn.model_selection import StratifiedKFold from yellowbrick.model_selection import CVScores # Load the classification data set X, y = load_occupancy() # Create a cross-validation strategy # (shuffle=True is required when passing random_state in recent scikit-learn versions) cv = StratifiedKFold(n_splits=12, shuffle=True, random_state=42) # Instantiate the classification model and visualizer model = MultinomialNB() visualizer = CVScores( model, cv=cv, scoring='f1_weighted', size=(1080, 720) ) visualizer.fit(X, y) visualizer.show() ``` Visit the Yellowbrick docs for more about visualizers for [classification](http://www.scikit-yb.org/en/latest/api/classifier/index.html), [regression](http://www.scikit-yb.org/en/latest/api/regressor/index.html) and [model selection](http://www.scikit-yb.org/en/latest/api/model_selection/index.html)!
``` from functools import partial from collections import defaultdict import os import pickle import numpy as np import scipy.sparse as sp import scipy.io as spio from scipy import stats #needed by r2() below import matplotlib.pyplot as plt from torchray_extremal_perturbation_sequence import extremal_perturbation, contrastive_reward, simple_reward from torchray.utils import get_device import torch import torch.nn as nn from torch.autograd import Variable from torch import optim import torch.nn.functional as F from sklearn import preprocessing import pandas as pd class MySequence : def __init__(self) : self.dummy = 1 import tensorflow as tf import tensorflow.keras tf.keras.utils.Sequence = MySequence from sequence_logo_helper import plot_dna_logo, dna_letter_at #Load data dataset_name = "optimus5_synth" def one_hot_encode(df, col='utr', seq_len=50): # Dictionary returning one-hot encoding of nucleotides. nuc_d = {'a':[1,0,0,0],'c':[0,1,0,0],'g':[0,0,1,0],'t':[0,0,0,1], 'n':[0,0,0,0]} # Create empty matrix. vectors=np.empty([len(df),seq_len,4]) # Iterate through UTRs and one-hot encode for i,seq in enumerate(df[col].str[:seq_len]): seq = seq.lower() a = np.array([nuc_d[x] for x in seq]) vectors[i] = a return vectors def r2(x,y): slope, intercept, r_value, p_value, std_err = stats.linregress(x,y) return r_value**2 #Train data e_train = pd.read_csv("bottom5KIFuAUGTop5KIFuAUG.csv") e_train.loc[:,'scaled_rl'] = preprocessing.StandardScaler().fit_transform(e_train.loc[:,'rl'].values.reshape(-1,1)) seq_e_train = one_hot_encode(e_train,seq_len=50) x_train = seq_e_train x_train = np.reshape(x_train, (x_train.shape[0], 1, x_train.shape[1], x_train.shape[2])) y_train = np.array(e_train['scaled_rl'].values) y_train = np.reshape(y_train, (y_train.shape[0],1)) y_train = (y_train >= 0.) y_train = np.concatenate([1.
- y_train, y_train], axis=1) print("x_train.shape = " + str(x_train.shape)) print("y_train.shape = " + str(y_train.shape)) #Test data allFiles = ["optimus5_synthetic_random_insert_if_uorf_1_start_1_stop_variable_loc_512.csv", "optimus5_synthetic_random_insert_if_uorf_1_start_2_stop_variable_loc_512.csv", "optimus5_synthetic_random_insert_if_uorf_2_start_1_stop_variable_loc_512.csv", "optimus5_synthetic_random_insert_if_uorf_2_start_2_stop_variable_loc_512.csv"] x_tests = [] for csv_to_open in allFiles : #Load dataset for benchmarking dataset_name = csv_to_open.replace(".csv", "") benchmarkSet = pd.read_csv(csv_to_open) seq_e_test = one_hot_encode(benchmarkSet, seq_len=50) x_test = seq_e_test[:, None, ...] print(x_test.shape) x_tests.append(x_test) x_test = np.concatenate(x_tests, axis=0) y_test = -1. * np.ones((x_test.shape[0], 1)) y_test = (y_test >= 0.) y_test = np.concatenate([1. - y_test, y_test], axis=1) print("x_test.shape = " + str(x_test.shape)) print("y_test.shape = " + str(y_test.shape)) #Load predictor model class CNNClassifier(nn.Module) : def __init__(self, batch_size) : super(CNNClassifier, self).__init__() self.conv1 = nn.Conv2d(4, 120, kernel_size=(1, 8), padding=(0, 4)) self.conv2 = nn.Conv2d(120, 120, kernel_size=(1, 8), padding=(0, 4)) self.conv3 = nn.Conv2d(120, 120, kernel_size=(1, 8), padding=(0, 4)) self.fc1 = nn.Linear(in_features=50 * 120, out_features=40) self.drop1 = nn.Dropout(p=0.2) self.fc2 = nn.Linear(in_features=40, out_features=1) self.batch_size = batch_size self.use_cuda = True if torch.cuda.is_available() else False def forward(self, x): #x = x.transpose(1, 2) x = F.relu(self.conv1(x))[..., 1:] x = F.relu(self.conv2(x))[..., 1:] x = F.relu(self.conv3(x))[..., 1:] x = x.transpose(1, 3) x = x.reshape(-1, 50 * 120) x = F.relu(self.fc1(x)) x = self.fc2(x) #Transform sigmoid logits to 2-input softmax scores x = torch.cat([-1 * x, x], axis=1) return x model_pytorch = CNNClassifier(batch_size=1) _ = 
model_pytorch.load_state_dict(torch.load("optimusRetrainedMain_pytorch.pth")) #Create pytorch input tensor x_test_pytorch = Variable(torch.FloatTensor(np.transpose(x_test, (0, 3, 1, 2)))) x_test_pytorch = x_test_pytorch.cuda() if model_pytorch.use_cuda else x_test_pytorch digit_test = np.array(np.argmax(y_test, axis=1), dtype=int) #Predict using pytorch model device = get_device() model_pytorch.to(device) model_pytorch.eval() y_pred_pytorch = np.concatenate([model_pytorch(x_test_pytorch[i:i+1]).data.cpu().numpy() for i in range(x_test.shape[0])], axis=0) digit_pred_test = np.argmax(y_pred_pytorch, axis=-1) print("Test accuracy = " + str(round(np.sum(digit_test == digit_pred_test) / digit_test.shape[0], 4))) device = get_device() model_pytorch.to(device) x_test_pytorch = x_test_pytorch.to(device) #Gradient saliency/backprop visualization import matplotlib.collections as collections import operator import matplotlib.pyplot as plt import matplotlib.cm as cm import matplotlib.colors as colors import matplotlib as mpl from matplotlib.text import TextPath from matplotlib.patches import PathPatch, Rectangle from matplotlib.font_manager import FontProperties from matplotlib import gridspec from matplotlib.ticker import FormatStrFormatter def plot_importance_scores(importance_scores, ref_seq, figsize=(12, 2), score_clip=None, sequence_template='', plot_start=0, plot_end=96) : end_pos = ref_seq.find("#") fig = plt.figure(figsize=figsize) ax = plt.gca() if score_clip is not None : importance_scores = np.clip(np.copy(importance_scores), -score_clip, score_clip) max_score = np.max(np.sum(importance_scores[:, :], axis=0)) + 0.01 for i in range(0, len(ref_seq)) : mutability_score = np.sum(importance_scores[:, i]) dna_letter_at(ref_seq[i], i + 0.5, 0, mutability_score, ax) plt.sca(ax) plt.xlim((0, len(ref_seq))) plt.ylim((0, max_score)) plt.axis('off') plt.yticks([0.0, max_score], [0.0, max_score], fontsize=16) for axis in fig.axes : axis.get_xaxis().set_visible(False)
axis.get_yaxis().set_visible(False) plt.tight_layout() plt.show() class IdentityEncoder : def __init__(self, seq_len, channel_map) : self.seq_len = seq_len self.n_channels = len(channel_map) self.encode_map = channel_map self.decode_map = { val : key for key, val in channel_map.items() } def encode(self, seq) : encoding = np.zeros((self.seq_len, self.n_channels)) for i in range(len(seq)) : if seq[i] in self.encode_map : channel_ix = self.encode_map[seq[i]] encoding[i, channel_ix] = 1. return encoding def encode_inplace(self, seq, encoding) : for i in range(len(seq)) : if seq[i] in self.encode_map : channel_ix = self.encode_map[seq[i]] encoding[i, channel_ix] = 1. def encode_inplace_sparse(self, seq, encoding_mat, row_index) : raise NotImplementedError() def decode(self, encoding) : seq = '' for pos in range(0, encoding.shape[0]) : argmax_nt = np.argmax(encoding[pos, :]) max_nt = np.max(encoding[pos, :]) if max_nt == 1 : seq += self.decode_map[argmax_nt] else : seq += "0" return seq def decode_sparse(self, encoding_mat, row_index) : encoding = np.array(encoding_mat[row_index, :].todense()).reshape(-1, 4) return self.decode(encoding) #Initialize sequence encoder seq_length = 50 residue_map = {'A': 0, 'C': 1, 'G': 2, 'T': 3} encoder = IdentityEncoder(seq_length, residue_map) y_pred_pytorch[:10] #Execute method on test set i = 0 area = 0.2 variant_mode = "preserve" perturbation_mode = "blur" masks = [] m, _ = extremal_perturbation( model_pytorch, x_test_pytorch[i:i + 1], int(digit_test[i]), reward_func=contrastive_reward, debug=True, jitter=False, areas=[area], variant=variant_mode, perturbation=perturbation_mode, num_levels=8, step=3, sigma=3 ) imp_s = np.tile(m[0, 0, :, :].cpu().numpy(), (4, 1)) * x_test[i, 0, :, :].T score_clip = None plot_dna_logo(x_test[i, 0, :, :], sequence_template='N'*50, figsize=(12, 1), plot_start=0, plot_end=50) plot_importance_scores(imp_s, encoder.decode(x_test[i, 0, :, :]), figsize=(12, 1), score_clip=score_clip, sequence_template='N'*50,
plot_start=0, plot_end=50) #Execute method on test set n_to_test = x_test.shape[0] area = 0.2 variant_mode = "preserve" perturbation_mode = "blur" masks = [] for i in range(n_to_test) : if i % 100 == 0 : print("Processing example " + str(i) + "...") m, _ = extremal_perturbation( model_pytorch, x_test_pytorch[i:i + 1], int(digit_test[i]), reward_func=contrastive_reward, debug=False, jitter=False, areas=[area], variant=variant_mode, perturbation=perturbation_mode, num_levels=8, step=3, sigma=3 ) masks.append(np.expand_dims(m.cpu().numpy()[:, 0, ...], axis=-1)) importance_scores_test = np.concatenate(masks, axis=0) #Visualize a few images for plot_i in range(0, 5) : print("Test sequence " + str(plot_i) + ":") imp_s = np.tile(importance_scores_test[plot_i, :, :, 0], (4, 1)) * x_test[plot_i, 0, :, :].T score_clip = None plot_dna_logo(x_test[plot_i, 0, :, :], sequence_template='N'*50, figsize=(12, 1), plot_start=0, plot_end=50) plot_importance_scores(imp_s, encoder.decode(x_test[plot_i, 0, :, :]), figsize=(12, 1), score_clip=score_clip, sequence_template='N'*50, plot_start=0, plot_end=50) #Save predicted importance scores model_name = "extremal_" + "optimus5_synthetic_random_insert_if_uorf_1_start_1_stop_variable_loc_512" + "_mode_" + variant_mode + "_perturbation_" + perturbation_mode + "_area_" + str(area).replace(".", "") np.save(model_name + "_importance_scores_test", importance_scores_test[0:512, ...]) model_name = "extremal_" + "optimus5_synthetic_random_insert_if_uorf_1_start_2_stop_variable_loc_512" + "_mode_" + variant_mode + "_perturbation_" + perturbation_mode + "_area_" + str(area).replace(".", "") np.save(model_name + "_importance_scores_test", importance_scores_test[512:1024, ...]) model_name = "extremal_" + "optimus5_synthetic_random_insert_if_uorf_2_start_1_stop_variable_loc_512" + "_mode_" + variant_mode + "_perturbation_" + perturbation_mode + "_area_" + str(area).replace(".", "") np.save(model_name + "_importance_scores_test", 
importance_scores_test[1024:1536, ...]) model_name = "extremal_" + "optimus5_synthetic_random_insert_if_uorf_2_start_2_stop_variable_loc_512" + "_mode_" + variant_mode + "_perturbation_" + perturbation_mode + "_area_" + str(area).replace(".", "") np.save(model_name + "_importance_scores_test", importance_scores_test[1536:2048, ...]) #Execute method on test set n_to_test = x_test.shape[0] area = 0.2 variant_mode = "preserve" perturbation_mode = "fade" masks = [] for i in range(n_to_test) : if i % 100 == 0 : print("Processing example " + str(i) + "...") m, _ = extremal_perturbation( model_pytorch, x_test_pytorch[i:i + 1], int(digit_test[i]), reward_func=contrastive_reward, debug=False, jitter=False, areas=[area], variant=variant_mode, perturbation=perturbation_mode, num_levels=8, step=3, sigma=3 ) masks.append(np.expand_dims(m.cpu().numpy()[:, 0, ...], axis=-1)) importance_scores_test = np.concatenate(masks, axis=0) #Visualize a few images for plot_i in range(0, 5) : print("Test sequence " + str(plot_i) + ":") imp_s = np.tile(importance_scores_test[plot_i, :, :, 0], (4, 1)) * x_test[plot_i, 0, :, :].T score_clip = None plot_dna_logo(x_test[plot_i, 0, :, :], sequence_template='N'*50, figsize=(12, 1), plot_start=0, plot_end=50) plot_importance_scores(imp_s, encoder.decode(x_test[plot_i, 0, :, :]), figsize=(12, 1), score_clip=score_clip, sequence_template='N'*50, plot_start=0, plot_end=50) #Save predicted importance scores model_name = "extremal_" + "optimus5_synthetic_random_insert_if_uorf_1_start_1_stop_variable_loc_512" + "_mode_" + variant_mode + "_perturbation_" + perturbation_mode + "_area_" + str(area).replace(".", "") np.save(model_name + "_importance_scores_test", importance_scores_test[0:512, ...]) model_name = "extremal_" + "optimus5_synthetic_random_insert_if_uorf_1_start_2_stop_variable_loc_512" + "_mode_" + variant_mode + "_perturbation_" + perturbation_mode + "_area_" + str(area).replace(".", "") np.save(model_name + "_importance_scores_test", 
importance_scores_test[512:1024, ...]) model_name = "extremal_" + "optimus5_synthetic_random_insert_if_uorf_2_start_1_stop_variable_loc_512" + "_mode_" + variant_mode + "_perturbation_" + perturbation_mode + "_area_" + str(area).replace(".", "") np.save(model_name + "_importance_scores_test", importance_scores_test[1024:1536, ...]) model_name = "extremal_" + "optimus5_synthetic_random_insert_if_uorf_2_start_2_stop_variable_loc_512" + "_mode_" + variant_mode + "_perturbation_" + perturbation_mode + "_area_" + str(area).replace(".", "") np.save(model_name + "_importance_scores_test", importance_scores_test[1536:2048, ...]) ```
``` # default_exp core ``` # fastrl > A Concise introduction to key ideas in RL. ``` #hide from nbdev.showdoc import * #hide from fastrl.core import * ``` ## Introduction Let's start with humans learning from experience. Say we are trying to learn to ride a bicycle. We are driven by a goal to `stay balanced and pedal`. Along the way we fall. Now we need to start again. Somewhere in the gap between each iteration, we are learning and improving. Let's hash out the features of this process, if we are to model it: 1. **Past actions influence future output**: There is no immediate feedback. Each micro-action (exerting more pressure on the pedal, ...) along the way either leads to falling off balance or keeps us moving. `Computational Problem`: How do we assign credit to actions when they are not temporally connected? 2. **Outcomes might not be deterministic**: There are features of the environment (road, weather, etc.) that we do not fully understand that can affect the outcome of an action. `Computational Problem`: How do we make inferences about the properties of a system under uncertainty? Since the latter of the two seems simpler, let's start by building our intuition about non-deterministic systems. Say we are given a non-deterministic system whose properties we are unaware of. How can we build our knowledge about the properties of this system? Can we come up with a systematic way (an algorithm) to estimate its behaviour? ``` #hide import matplotlib.pyplot as plt import torch def query(): return torch.randn(1).item()*10 query(),query(),query() #Fires different measurements each time. ``` Now let's measure a **property** of this system... say its **mean**: 1. How will your estimate of this mean change with each query? 2. How does your confidence in this mean change with an increasing number of queries $n$?
``` def estimate_mean(n): """Returns a list of the estimated mean after each of n queries.""" list_n = [] # keep track of all the outputs from our system list_mean = [] # collect the means after every sample. for i in range(n): out = query() list_n.append(out) list_mean.append(sum(list_n)/len(list_n)) return list_mean estimate_mean(10) # The list of estimates for n = 1...10 #let's plot these means list_means = estimate_mean(100) plt.hist(list_means) ``` We can see that the calculated means vary a lot but seem to be closer to $0$ most often. But how does our estimate itself depend on $n$ (the number of queries)? ``` #let's measure the stability of our estimates by taking the difference of each pair of successive estimates. diff_means = [list_means[i]-list_means[i-1] for i in range(1,len(list_means))] plt.plot(diff_means) ``` Our estimate of the mean doesn't seem to change much after a while. This is interesting, and it aligns with our intuition: with more samples, we can be more confident. We can even go so far as to claim that, whatever the dynamics of the system, its mean might be constant.
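This "more samples, more confidence" intuition can be quantified: for independent samples, the standard deviation of the sample mean falls off as $\sigma/\sqrt{n}$. A quick sketch, assuming the system above behaves like a Gaussian with $\sigma = 10$ (as `query()` suggests), repeating each size-$n$ experiment 200 times:

```python
import numpy as np

rng = np.random.default_rng(1)
sigma = 10.0
for n in (10, 100, 1000):
    # spread of the sample mean across 200 repeated experiments of size n
    means = [rng.normal(0.0, sigma, n).mean() for _ in range(200)]
    # empirical spread vs. the theoretical sigma/sqrt(n)
    print(n, round(np.std(means), 2), round(sigma / np.sqrt(n), 2))
```

The two printed columns track each other closely, so each tenfold increase in $n$ buys roughly a $\sqrt{10}\approx 3.2\times$ tighter estimate.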
Your name here. Your section number here. # Homework 5: Fitting ##### ** Submit this notebook to bcourses to receive a credit for this assignment. ** Please complete this homework assignment in code cells in the iPython notebook. Please submit both a PDF of the jupyter notebook to bcourses and the notebook itself (.ipynb file). Note that when saving as PDF you don't want to use the option with latex because it crashes, but rather the one to save it directly as a PDF. ## Problem 1: Gamma-ray peak [Some of you may recognize this problem from Advanced Lab's Error Analysis Exercise. That's not an accident. You may also recognize this dataset from Homework04. That's not an accident either.] You are given a dataset (peak.dat) from a gamma-ray experiment consisting of ~1000 hits. Each line in the file corresponds to one recorded gamma-ray event, and stores the measured energy of the gamma-ray. We will assume that the energies are randomly distributed about a common mean, and that each event is uncorrelated with the others. Read the dataset from the enclosed file and: 1. Produce a histogram of the distribution of energies. Choose the number of bins wisely, i.e. so that the width of each bin is smaller than the width of the peak, and at the same time so that the number of entries in the most populated bin is relatively large. Since this plot represents randomly-collected data, plotting error bars would be appropriate. 1. Fit the distribution to a Gaussian function using an unbinned fit (<i>Hint:</i> use the <tt>scipy.stats.norm.fit()</tt> function), and compare the parameters of the fitted Gaussian with the mean and standard deviation computed in Homework04 1. Fit the distribution to a Gaussian function using a binned least-squares fit (<i>Hint:</i> use the <tt>scipy.optimize.curve_fit()</tt> function), and compare the parameters of the fitted Gaussian and their uncertainties to the parameters obtained in the unbinned fit above. 1.
Re-make your histogram from (1) with twice as many bins, and repeat the binned least-squares fit from (3) on the new histogram. How sensitive are your results to binning? 1. How consistent is the distribution with a Gaussian? In other words, compare the histogram from (1) to the fitted curve, and compute a goodness-of-fit value, such as $\chi^2$/d.f. ``` import numpy as np import matplotlib.pyplot as plt import scipy.stats import scipy.optimize as fitter # Once again, feel free to play around with the matplotlib parameters plt.rcParams['figure.figsize'] = 8,4 plt.rcParams['font.size'] = 14 energies = np.loadtxt('peak.dat') # MeV ``` Recall `plt.hist()` isn't great when you need error bars, so it's better to first use [`np.histogram()`](https://docs.scipy.org/doc/numpy/reference/generated/numpy.histogram.html) -- which returns the counts in each bin, along with the edges of the bins (there are $n + 1$ edges for $n$ bins). Once you find the bin centers and errors on the counts, you can make the actual plot with [`plt.bar()`](https://matplotlib.org/api/_as_gen/matplotlib.pyplot.bar.html). Start with something close to `bins = 25` as the second input parameter to `np.histogram()`. ``` # use numpy.histogram to get the counts and bin edges # bin_centers = 0.5*(bin_edges[1:]+bin_edges[:-1]) works for finding the bin centers # assume Poisson errors on the counts – errors go as the square root of the count # now use plt.bar() to make the histogram with error bars (remember to label the plot) ``` You can use the list of `energies` directly as input to `scipy.stats.norm.fit()`; the returned values are the mean and standard deviation of a fit to the data. ``` # Find the mean and standard deviation using scipy.stats.norm.fit() # Compare these to those computed in the previous homework (or just find them again here) ``` Now, using the binned values (found above with `np.histogram()`) and their errors, use `scipy.optimize.curve_fit()` to fit the data.
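As a concrete end-to-end sketch of this binning-and-fitting recipe (using synthetic stand-in data, since `peak.dat` is not reproduced here — the peak location and width below are assumptions for illustration):

```python
import numpy as np
import scipy.optimize as fitter

# synthetic stand-in for the real energies in peak.dat (assumed peak parameters)
rng = np.random.default_rng(1)
energies = rng.normal(loc=1.17, scale=0.05, size=1000)  # MeV

counts, bin_edges = np.histogram(energies, bins=25)
bin_centers = 0.5 * (bin_edges[1:] + bin_edges[:-1])
counts_err = np.sqrt(counts)         # Poisson errors on the counts
counts_err[counts_err == 0] = 1      # avoid division by zero in the fit

def gaussian(x, A, mu, sigma):
    return A * np.exp(-(x - mu) ** 2 / (2 * sigma ** 2))

p0 = [counts.max(), energies.mean(), energies.std()]  # starting parameters
popt, pcov = fitter.curve_fit(gaussian, bin_centers, counts,
                              p0=p0, sigma=counts_err, absolute_sigma=True)
# sigma enters the model squared, so take its absolute value
A_fit, mu_fit, sigma_fit = popt[0], popt[1], abs(popt[2])

chi2 = np.sum(((counts - gaussian(bin_centers, *popt)) / counts_err) ** 2)
ndf = len(counts) - len(popt)        # bins minus free parameters
print(mu_fit, sigma_fit, chi2 / ndf)
```

On real data the same pattern applies: replace the synthetic `energies` with the loaded dataset and keep the rest of the recipe unchanged.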
``` # Remember, curve_fit() will need a model function defined def model(x, A, mu, sigma): '''Model function to use with curve_fit(); it should take the form of a 1-D Gaussian''' # Also make sure you define some starting parameters for curve_fit (we typically called these par0 or p0 in the past workshop) '''# You can use this to ensure the errors are greater than 0 to avoid division by 0 within fitter.curve_fit() for i, err in enumerate(counts_err): if err == 0: counts_err[i] = 1''' # Now use fitter.curve_fit() on the binned data and compare the best-fit parameters to those found by scipy.stats.norm.fit() # It's also useful to plot the fitted curve over the histogram you made in part 1 to check that things are working properly # At this point, it's also useful to find the chi^2 and reduced chi^2 value of this binned fit ``` Repeat this process with twice as many bins (i.e. now use `bins = 50` in `np.histogram()`, or a similar value). Compute the $\chi^2$ and reduced $\chi^2$ and compare these values, along with the best-fit parameters between the two binned fits. Feel free to continue to play with the number of bins and see how it changes the fit. ## Problem 2: Optical Pumping experiment One of the experiments in the 111B (111-ADV) lab is the study of the optical pumping of atomic rubidium. In that experiment, we measure the resonant frequency of a Zeeman transition as a function of the applied current (local magnetic field). Consider a mock data set: <table border="1" align="center"> <tr> <td>Current <i>I</i> (Amps) </td><td>0.0 </td><td> 0.2 </td><td> 0.4 </td><td> 0.6 </td><td> 0.8 </td><td> 1.0 </td><td> 1.2 </td><td> 1.4 </td><td> 1.6 </td><td> 1.8 </td><td> 2.0 </td><td> 2.2 </td></tr> <tr> <td>Frequency <i>f</i> (MHz) </td><td> 0.14 </td><td> 0.60 </td><td> 1.21 </td><td> 1.94 </td><td> 2.47 </td><td> 3.07 </td><td> 3.83 </td><td> 4.16 </td><td> 4.68 </td><td> 5.60 </td><td> 6.31 </td><td> 6.78 </td></tr></table> 1. Plot a graph of the pairs of values. 
Assuming a linear relationship between $I$ and $f$, determine the slope and the intercept of the best-fit line using the least-squares method with equal weights, and draw the best-fit line through the data points in the graph. 1. From what s/he knows about the equipment used to measure the resonant frequency, your lab partner hastily estimates the uncertainty in the measurement of $f$ to be $\sigma(f) = 0.01$ MHz. Estimate the probability that the straight line you found is an adequate description of the observed data if it is distributed with the uncertainty guessed by your lab partner. (Hint: use scipy.stats.chi2 class to compute the quantile of the chi2 distribution). What can you conclude from these results? 1. Repeat the analysis assuming your partner estimated the uncertainty to be $\sigma(f) = 1$ MHz. What can you conclude from these results? 1. Assume that the best-fit line found in Part 1 is a good fit to the data. Estimate the uncertainty in measurement of $y$ from the scatter of the observed data about this line. Again, assume that all the data points have equal weight. Use this to estimate the uncertainty in both the slope and the intercept of the best-fit line. This is the technique you will use in the Optical Pumping lab to determine the uncertainties in the fit parameters. 1. Now assume that the uncertainty in each value of $f$ grows with $f$: $\sigma(f) = 0.03 + 0.03 * f$ (MHz). 
Determine the slope and the intercept of the best-fit line using the least-squares method with unequal weights (weighted least-squares fit). ``` import numpy as np import matplotlib.pyplot as plt from numpy.linalg import * import scipy.stats import scipy.optimize as fitter # Use current as the x-variable in your plots/fitting current = np.arange(0, 2.3, .2) # Amps frequency = np.array([.14, .6, 1.21, 1.94, 2.47, 3.07, 3.83, 4.16, 4.68, 5.6, 6.31, 6.78]) # MHz def linear_model(x, slope, intercept): '''Model function to use with curve_fit(); it should take the form of a line''' # Use fitter.curve_fit() to get the line of best fit # Plot this line, along with the data points -- remember to label ``` The rest is pretty short, but the statistics might be a bit complicated. Ask questions if you need advice or help. Next, the problem is basically asking you to compute the $\chi^2$ for the above fit twice, once with $0.01$ MHz as the error for each point (in the 'denominator' of the $\chi^2$ formula) and once with $1$ MHz. These values can then be compared to a "range of acceptable $\chi^2$ values", found with `scipy.stats.chi2.ppf()` -- which takes two inputs. The second input should be the number of degrees of freedom used during fitting (# data points minus the 2 free parameters). The first input should be something like $0.05$ and $0.95$ (one function call of `scipy.stats.chi2.ppf()` for each endpoint of the acceptable range). If the calculated $\chi^2$ statistic falls within this range, then the assumed uncertainty is reasonable. Now, estimate the uncertainty in the frequency measurements, and use this to find the uncertainty in the best-fit parameters. [This document](https://pages.mtu.edu/~fmorriso/cm3215/UncertaintySlopeInterceptOfLeastSquaresFit.pdf) is a good resource for learning to propagate errors in the context of linear fitting. Finally, repeat the fitting with the weighted errors (from the $\sigma(f)$ uncertainty formula) given to `scipy.optimize.curve_fit()`.
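Putting the final step together, a minimal sketch of the weighted fit (restating the data arrays from the cell above; the $\chi^2$ band from `scipy.stats.chi2.ppf()` is computed the same way for the earlier unweighted cases):

```python
import numpy as np
import scipy.optimize as fitter
import scipy.stats

current = np.arange(0, 2.3, 0.2)  # Amps
frequency = np.array([0.14, 0.60, 1.21, 1.94, 2.47, 3.07,
                      3.83, 4.16, 4.68, 5.60, 6.31, 6.78])  # MHz

def linear_model(x, slope, intercept):
    return slope * x + intercept

# unequal weights from the stated model: sigma(f) = 0.03 + 0.03 * f (MHz)
sigma_f = 0.03 + 0.03 * frequency
popt, pcov = fitter.curve_fit(linear_model, current, frequency,
                              sigma=sigma_f, absolute_sigma=True)
perr = np.sqrt(np.diag(pcov))  # uncertainties on slope and intercept

# acceptable chi^2 band for 12 points minus 2 free parameters = 10 d.o.f.
lo, hi = scipy.stats.chi2.ppf([0.05, 0.95], df=10)
chi2 = np.sum(((frequency - linear_model(current, *popt)) / sigma_f) ** 2)
print(popt, perr, (lo, hi), chi2)
```

Passing `sigma=` makes `curve_fit()` minimize the weighted residuals, and `absolute_sigma=True` tells it the errors are absolute (in MHz) rather than relative, so `pcov` can be read off directly as parameter variances.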
``` # Copyright 2021 NVIDIA Corporation. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # ============================================================================== ``` # Scaling Criteo: Triton Inference with TensorFlow ## Overview The last step is to deploy the ETL workflow and saved model to production. In the production setting, we want to transform the input data in the same way as during training (ETL). We need to apply the same mean/std to continuous features and use the same categorical mapping to convert the categories to continuous integers before we use the deep learning model for a prediction. Therefore, we deploy the NVTabular workflow together with the TensorFlow model as an ensemble model to Triton Inference. The ensemble model guarantees that the same transformations are applied to the raw inputs. <img src='./imgs/triton-tf.png' width="25%"> ### Learning objectives In this notebook, we learn how to deploy our models to production: - Use **NVTabular** to generate config and model files for Triton Inference Server - Deploy an ensemble of the NVTabular workflow and TensorFlow model - Send an example request to Triton Inference Server ## Inference with Triton and TensorFlow First, we need to generate the Triton Inference Server configurations and save the models in the correct format.
In the previous notebooks [02-ETL-with-NVTabular](./02-ETL-with-NVTabular.ipynb) and [03-Training-with-TF](./03-Training-with-TF.ipynb) we saved the NVTabular workflow and TensorFlow model to disk. We will load them. ### Saving Ensemble Model for Triton Inference Server ``` import os import tensorflow as tf import nvtabular as nvt BASE_DIR = os.environ.get("BASE_DIR", "/raid/data/criteo") input_path = os.path.join(BASE_DIR, "test_dask/output") workflow = nvt.Workflow.load(os.path.join(input_path, "workflow")) model = tf.keras.models.load_model(os.path.join(input_path, "model.savedmodel")) ``` TensorFlow expects integer features as the `int32` datatype. Therefore, we need to set the NVTabular output datatypes to `int32` for the categorical features. ``` for key in workflow.output_dtypes.keys(): if key.startswith("C"): workflow.output_dtypes[key] = "int32" ``` NVTabular provides an easy function to deploy the ensemble model for Triton Inference Server. ``` from nvtabular.inference.triton import export_tensorflow_ensemble export_tensorflow_ensemble(model, workflow, "criteo", "/models", ["label"]) ``` We can take a look at the generated files. ``` !tree /models ``` ### Loading Ensemble Model with Triton Inference Server We have only saved the models for Triton Inference Server. We started Triton Inference Server in explicit mode, meaning that we need to send a request so that Triton loads the ensemble model. First, we restart this notebook to free the GPU memory. ``` import IPython app = IPython.Application.instance() app.kernel.do_shutdown(True) ``` We define the BASE_DIR again. ``` import os BASE_DIR = os.environ.get("BASE_DIR", "/raid/data/criteo") ``` We connect to the Triton Inference Server. ``` import tritonhttpclient try: triton_client = tritonhttpclient.InferenceServerClient(url="triton:8000", verbose=True) print("client created.") except Exception as e: print("channel creation failed: " + str(e)) ``` We deactivate warnings.
``` import warnings warnings.filterwarnings("ignore") ``` We check if the server is alive. ``` triton_client.is_server_live() ``` We check the available models in the repositories: - criteo: Ensemble - criteo_nvt: NVTabular - criteo_tf: TensorFlow model ``` triton_client.get_model_repository_index() ``` We load the ensemble model. ``` %%time triton_client.load_model(model_name="criteo") ``` ### Example Request to Triton Inference Server Now the models are loaded and we can create a sample request. We read an example **raw batch** for inference. ``` import cudf # read in the workflow (to get input/output schema to call triton with) batch_path = os.path.join(BASE_DIR, "converted/criteo") batch = cudf.read_parquet(os.path.join(batch_path, "*.parquet"), num_rows=3) batch = batch[[x for x in batch.columns if x != "label"]] print(batch) ``` We prepare the batch for inference by using the correct column names and data types. We use the same datatypes as defined in our dataframe. ``` batch.dtypes import tritonclient.http as httpclient from tritonclient.utils import np_to_triton_dtype import numpy as np inputs = [] col_names = list(batch.columns) col_dtypes = [np.int32] * len(col_names) for i, col in enumerate(batch.columns): d = batch[col].values_host.astype(col_dtypes[i]) d = d.reshape(len(d), 1) inputs.append(httpclient.InferInput(col_names[i], d.shape, np_to_triton_dtype(col_dtypes[i]))) inputs[i].set_data_from_numpy(d) ``` We send the request to the Triton server and collect the last output. ``` # placeholder variables for the output outputs = [httpclient.InferRequestedOutput("output")] # build a client to connect to our server. # This InferenceServerClient object is what we'll be using to talk to Triton. # make the request with tritonclient.http.InferInput object response = triton_client.infer("criteo", inputs, request_id="1", outputs=outputs) print("predicted softmax result:\n", response.as_numpy("output")) ``` Let's unload the models. We need to unload each model separately.
``` triton_client.unload_model(model_name="criteo") triton_client.unload_model(model_name="criteo_nvt") triton_client.unload_model(model_name="criteo_tf") ```
``` %matplotlib inline import gym import itertools import matplotlib import numpy as np import sys import tensorflow as tf import collections if "../" not in sys.path: sys.path.append("../") from lib.envs.cliff_walking import CliffWalkingEnv from lib import plotting matplotlib.style.use('ggplot') env = CliffWalkingEnv() class PolicyEstimator(): """ Policy Function approximator. """ def __init__(self, learning_rate=0.01, scope="policy_estimator"): with tf.variable_scope(scope): self.state = tf.placeholder(tf.int32, [], "state") self.action = tf.placeholder(dtype=tf.int32, name="action") self.target = tf.placeholder(dtype=tf.float32, name="target") # This is just table lookup estimator state_one_hot = tf.one_hot(self.state, int(env.observation_space.n)) self.output_layer = tf.contrib.layers.fully_connected( inputs=tf.expand_dims(state_one_hot, 0), num_outputs=env.action_space.n, activation_fn=None, weights_initializer=tf.zeros_initializer) self.action_probs = tf.squeeze(tf.nn.softmax(self.output_layer)) self.picked_action_prob = tf.gather(self.action_probs, self.action) # Loss and train op self.loss = -tf.log(self.picked_action_prob) * self.target self.optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate) self.train_op = self.optimizer.minimize( self.loss, global_step=tf.contrib.framework.get_global_step()) def predict(self, state, sess=None): sess = sess or tf.get_default_session() return sess.run(self.action_probs, { self.state: state }) def update(self, state, target, action, sess=None): sess = sess or tf.get_default_session() feed_dict = { self.state: state, self.target: target, self.action: action } _, loss = sess.run([self.train_op, self.loss], feed_dict) return loss class ValueEstimator(): """ Value Function approximator. 
""" def __init__(self, learning_rate=0.1, scope="value_estimator"): with tf.variable_scope(scope): self.state = tf.placeholder(tf.int32, [], "state") self.target = tf.placeholder(dtype=tf.float32, name="target") # This is just table lookup estimator state_one_hot = tf.one_hot(self.state, int(env.observation_space.n)) self.output_layer = tf.contrib.layers.fully_connected( inputs=tf.expand_dims(state_one_hot, 0), num_outputs=1, activation_fn=None, weights_initializer=tf.zeros_initializer) self.value_estimate = tf.squeeze(self.output_layer) self.loss = tf.squared_difference(self.value_estimate, self.target) self.optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate) self.train_op = self.optimizer.minimize( self.loss, global_step=tf.contrib.framework.get_global_step()) def predict(self, state, sess=None): sess = sess or tf.get_default_session() return sess.run(self.value_estimate, { self.state: state }) def update(self, state, target, sess=None): sess = sess or tf.get_default_session() feed_dict = { self.state: state, self.target: target } _, loss = sess.run([self.train_op, self.loss], feed_dict) return loss def actor_critic(env, estimator_policy, estimator_value, num_episodes, discount_factor=1.0): """ Actor Critic Algorithm. Optimizes the policy function approximator using policy gradient. Args: env: OpenAI environment. estimator_policy: Policy Function to be optimized estimator_value: Value function approximator, used as a critic num_episodes: Number of episodes to run for discount_factor: Time-discount factor Returns: An EpisodeStats object with two numpy arrays for episode_lengths and episode_rewards. 
""" # Keeps track of useful statistics stats = plotting.EpisodeStats( episode_lengths=np.zeros(num_episodes), episode_rewards=np.zeros(num_episodes)) Transition = collections.namedtuple("Transition", ["state", "action", "reward", "next_state", "done"]) for i_episode in range(num_episodes): # Reset the environment and pick the first action state = env.reset() episode = [] # One step in the environment for t in itertools.count(): # Take a step action_probs = estimator_policy.predict(state) action = np.random.choice(np.arange(len(action_probs)), p=action_probs) next_state, reward, done, _ = env.step(action) # Keep track of the transition episode.append(Transition( state=state, action=action, reward=reward, next_state=next_state, done=done)) # Update statistics stats.episode_rewards[i_episode] += reward stats.episode_lengths[i_episode] = t # Calculate TD Target value_next = estimator_value.predict(next_state) td_target = reward + discount_factor * value_next td_error = td_target - estimator_value.predict(state) # Update the value estimator estimator_value.update(state, td_target) # Update the policy estimator # using the td error as our advantage estimate estimator_policy.update(state, td_error, action) # Print out which step we're on, useful for debugging. print("\rStep {} @ Episode {}/{} ({})".format( t, i_episode + 1, num_episodes, stats.episode_rewards[i_episode - 1]), end="") if done: break state = next_state return stats tf.reset_default_graph() global_step = tf.Variable(0, name="global_step", trainable=False) policy_estimator = PolicyEstimator() value_estimator = ValueEstimator() with tf.Session() as sess: sess.run(tf.global_variables_initializer()) # Note, due to randomness in the policy the number of episodes you need to learn a good # policy may vary. ~300 seemed to work well for me. stats = actor_critic(env, policy_estimator, value_estimator, 300) plotting.plot_episode_stats(stats, smoothing_window=10) ```
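The TD-target computation inside the loop is the heart of the algorithm, and it is easy to check with plain numbers. A minimal sketch, independent of TensorFlow:

```python
def td_update(reward, value_next, value_state, discount_factor=1.0):
    """One TD(0) step: the critic's target, and the error used as the advantage."""
    td_target = reward + discount_factor * value_next
    td_error = td_target - value_state
    return td_target, td_error

# a step with reward -1 while the (untrained) critic values both states at 0:
target, error = td_update(reward=-1.0, value_next=0.0, value_state=0.0)
print(target, error)  # -1.0 -1.0
```

The critic is trained toward `td_target`, while the actor scales `-log π(a|s)` by `td_error`, exactly as in `estimator_value.update(state, td_target)` and `estimator_policy.update(state, td_error, action)` above.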
``` import os import numpy as np import matplotlib.pyplot as plt import tensorflow as tf import random from matplotlib import pyplot from pandas.io.parsers import read_csv from sklearn.utils import shuffle FTRAIN = '../../../data/raw/FacialKeyPointDetection/training.csv' FTEST = '../../../data/raw/FacialKeyPointDetection/test.csv' def load(test=False, cols=None): """Loads data from FTEST if *test* is True, otherwise from FTRAIN. Pass a list of *cols* if you're only interested in a subset of the target columns. """ fname = FTEST if test else FTRAIN df = read_csv(os.path.expanduser(fname)) # load pandas dataframe # The Image column has pixel values separated by space; convert # the values to numpy arrays: df['Image'] = df['Image'].apply(lambda im: np.fromstring(im, sep=' ')) if cols: # get a subset of columns df = df[list(cols) + ['Image']] print(df.count()) # prints the number of values for each column df = df.dropna() # drop all rows that have missing values in them X = np.vstack(df['Image'].values) / 255. # scale pixel values to [0, 1] X = X.astype(np.float32) # Copy of the array, cast to a specified type. if not test: # only FTRAIN has any target columns y = df[df.columns[:-1]].values y = (y - 48) / 48 # scale target coordinates to [-1, 1] X, y = shuffle(X, y, random_state=42) # shuffle train data, random_state corresponding to the seed y = y.astype(np.float32) # Copy of the array, cast to a specified type. 
else: y = None return X, y X_train, y_train = load() print("X_train.shape == {}; X_train.min == {:.3f}; X_train.max == {:.3f}".format( X_train.shape, X_train.min(), X_train.max())) print("y.shape == {}; y.min == {:.3f}; y.max == {:.3f}".format( y_train.shape, y_train.min(), y_train.max())) x1 = tf.placeholder(tf.float32, [None, 9216]) W1 = tf.Variable(tf.zeros([9216, 100])) W2 = tf.Variable(tf.zeros([100, 30])) b1 = tf.Variable(tf.zeros([100])) b2 = tf.Variable(tf.zeros([30])) y = tf.nn.relu(tf.matmul(x1, W1) + b1) # the equation y1 = tf.nn.tanh(tf.matmul(y, W2) + b2) y_ = tf.placeholder(tf.float32, [None, 30]) mse = tf.reduce_mean(tf.square(tf.subtract(y_, y1))) train_step = tf.train.AdamOptimizer(0.0001).minimize(mse) init = tf.global_variables_initializer() sess = tf.Session() sess.run(init) TRAINING_EPOCHS = 2000 loss = [] avg_loss = [] for j in range(TRAINING_EPOCHS): for i in range(20): nb = np.array([random.randint(0, len(X_train) - 1) for _ in range(100)]) batch_xs = X_train[nb] batch_ys = y_train[nb] _, c = sess.run([train_step, mse], feed_dict={x1: batch_xs, y_: batch_ys}) loss.append(c) avg_loss.append(np.mean(loss)) if j % 100 == 0: print("Epoch " + str(j) + ": loss=", str(np.mean(loss))) print("done") plt.plot(avg_loss) plt.show() def plot_sample(x, y, axis): img = x.reshape(96, 96) axis.imshow(img, cmap='gray') axis.scatter(y[0::2] * 48 + 48, y[1::2] * 48 + 48, marker='x', s=10) X_test, y = load(True) classification = sess.run(y1, feed_dict={x1: X_test}) fig = plt.figure(figsize=(6, 6)) fig.subplots_adjust( left=0, right=1, bottom=0, top=1, hspace=0.05, wspace=0.05) for i in range(16): ax = fig.add_subplot(4, 4, i + 1, xticks=[], yticks=[]) plot_sample(X_test[i], classification[i], ax) plt.show() ```
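The coordinate scaling used above — targets mapped to $[-1, 1]$ in `load()` and mapped back to pixels in `plot_sample()` — is worth sanity-checking in isolation:

```python
import numpy as np

def scale_keypoints(y_pixels):
    """Map keypoint coordinates from [0, 96] pixel space to [-1, 1], as in load()."""
    return (y_pixels - 48) / 48

def unscale_keypoints(y_scaled):
    """Inverse map back to pixel space, as used when plotting predictions."""
    return y_scaled * 48 + 48

coords = np.array([0.0, 48.0, 96.0])
scaled = scale_keypoints(coords)
print(scaled)                     # [-1.  0.  1.]
print(unscale_keypoints(scaled))  # [ 0. 48. 96.]
```

Keeping targets in $[-1, 1]$ matches the range of the `tanh` output layer used in the network, which is why the round trip matters.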
<h1>Table of Contents<span class="tocSkip"></span></h1> <div class="toc"><ul class="toc-item"><li><span><a href="#Goal" data-toc-modified-id="Goal-1"><span class="toc-item-num">1&nbsp;&nbsp;</span>Goal</a></span></li><li><span><a href="#Var" data-toc-modified-id="Var-2"><span class="toc-item-num">2&nbsp;&nbsp;</span>Var</a></span><ul class="toc-item"><li><span><a href="#Init" data-toc-modified-id="Init-2.1"><span class="toc-item-num">2.1&nbsp;&nbsp;</span>Init</a></span></li></ul></li><li><span><a href="#DeepMAsED-SM" data-toc-modified-id="DeepMAsED-SM-3"><span class="toc-item-num">3&nbsp;&nbsp;</span>DeepMAsED-SM</a></span><ul class="toc-item"><li><span><a href="#Config" data-toc-modified-id="Config-3.1"><span class="toc-item-num">3.1&nbsp;&nbsp;</span>Config</a></span></li><li><span><a href="#Run" data-toc-modified-id="Run-3.2"><span class="toc-item-num">3.2&nbsp;&nbsp;</span>Run</a></span></li></ul></li><li><span><a href="#--WAITING--" data-toc-modified-id="--WAITING---4"><span class="toc-item-num">4&nbsp;&nbsp;</span>--WAITING--</a></span></li><li><span><a href="#Summary" data-toc-modified-id="Summary-5"><span class="toc-item-num">5&nbsp;&nbsp;</span>Summary</a></span><ul class="toc-item"><li><span><a href="#Communities" data-toc-modified-id="Communities-5.1"><span class="toc-item-num">5.1&nbsp;&nbsp;</span>Communities</a></span></li><li><span><a href="#Feature-tables" data-toc-modified-id="Feature-tables-5.2"><span class="toc-item-num">5.2&nbsp;&nbsp;</span>Feature tables</a></span><ul class="toc-item"><li><span><a href="#No.-of-contigs" data-toc-modified-id="No.-of-contigs-5.2.1"><span class="toc-item-num">5.2.1&nbsp;&nbsp;</span>No. 
of contigs</a></span></li><li><span><a href="#Misassembly-types" data-toc-modified-id="Misassembly-types-5.2.2"><span class="toc-item-num">5.2.2&nbsp;&nbsp;</span>Misassembly types</a></span></li></ul></li></ul></li><li><span><a href="#sessionInfo" data-toc-modified-id="sessionInfo-6"><span class="toc-item-num">6&nbsp;&nbsp;</span>sessionInfo</a></span></li></ul></div> # Goal * Replicate metagenome assemblies using intra-spec training genome dataset * Richness = 0.5 (50% of all ref genomes used) # Var ``` ref_dir = '/ebio/abt3_projects/databases_no-backup/DeepMAsED/GTDB_ref_genomes/intraSpec/' ref_file = file.path(ref_dir, 'GTDBr86_genome-refs_train_clean.tsv') work_dir = '/ebio/abt3_projects/databases_no-backup/DeepMAsED/train_runs/intra-species/diff_richness/n1000_r6_rich0p5/' # params pipeline_dir = '/ebio/abt3_projects/databases_no-backup/bin/deepmased/DeepMAsED-SM/' ``` ## Init ``` library(dplyr) library(tidyr) library(ggplot2) library(data.table) source('/ebio/abt3_projects/software/dev/DeepMAsED/bin/misc_r_functions/init.R') #' "cat {file}" in R cat_file = function(file_name){ cmd = paste('cat', file_name, collapse=' ') system(cmd, intern=TRUE) %>% paste(collapse='\n') %>% cat } ``` # DeepMAsED-SM ## Config ``` config_file = file.path(work_dir, 'config.yaml') cat_file(config_file) ``` ## Run ``` (snakemake_dev) @ rick:/ebio/abt3_projects/databases_no-backup/bin/deepmased/DeepMAsED-SM $ screen -L -S DM-intraS-rich0.5 ./snakemake_sge.sh /ebio/abt3_projects/databases_no-backup/DeepMAsED/train_runs/intra-species/diff_richness/n1000_r6_rich0p5/config.yaml cluster.json /ebio/abt3_projects/databases_no-backup/DeepMAsED/train_runs/intra-species/diff_richness/n1000_r6_rich0p5/SGE_log 48 ``` # Summary ## Communities ``` comm_files = list.files(file.path(work_dir, 'MGSIM'), 'comm_wAbund.txt', full.names=TRUE, recursive=TRUE) comm_files %>% length %>% print comm_files %>% head comms = list() for(F in comm_files){ df = read.delim(F, sep='\t') df$Rep = 
basename(dirname(F)) comms[[F]] = df } comms = do.call(rbind, comms) rownames(comms) = 1:nrow(comms) comms %>% dfhead p = comms %>% mutate(Perc_rel_abund = ifelse(Perc_rel_abund == 0, 1e-5, Perc_rel_abund)) %>% group_by(Taxon) %>% summarize(mean_perc_abund = mean(Perc_rel_abund), sd_perc_abund = sd(Perc_rel_abund)) %>% ungroup() %>% mutate(neg_sd_perc_abund = mean_perc_abund - sd_perc_abund, pos_sd_perc_abund = mean_perc_abund + sd_perc_abund, neg_sd_perc_abund = ifelse(neg_sd_perc_abund <= 0, 1e-5, neg_sd_perc_abund)) %>% mutate(Taxon = Taxon %>% reorder(-mean_perc_abund)) %>% ggplot(aes(Taxon, mean_perc_abund)) + geom_linerange(aes(ymin=neg_sd_perc_abund, ymax=pos_sd_perc_abund), size=0.3, alpha=0.3) + geom_point(size=0.5, alpha=0.4, color='red') + labs(y='% abundance') + theme_bw() + theme( axis.text.x = element_blank(), panel.grid.major.x = element_blank(), panel.grid.major.y = element_blank(), panel.grid.minor.x = element_blank(), panel.grid.minor.y = element_blank() ) dims(10,2.5) plot(p) dims(10,2.5) plot(p + scale_y_log10()) ``` ## Feature tables ``` feat_files = list.files(file.path(work_dir, 'map'), 'features.tsv.gz', full.names=TRUE, recursive=TRUE) feat_files %>% length %>% print feat_files %>% head feats = list() for(F in feat_files){ cmd = glue::glue('gunzip -c {F}', F=F) df = fread(cmd, sep='\t') %>% distinct(contig, assembler, Extensive_misassembly) df$Rep = basename(dirname(dirname(F))) feats[[F]] = df } feats = do.call(rbind, feats) rownames(feats) = 1:nrow(feats) feats %>% dfhead ``` ### No. 
of contigs ``` feats_s = feats %>% group_by(assembler, Rep) %>% summarize(n_contigs = n_distinct(contig)) %>% ungroup feats_s$n_contigs %>% summary ``` ### Misassembly types ``` p = feats %>% mutate(Extensive_misassembly = ifelse(Extensive_misassembly == '', 'None', Extensive_misassembly)) %>% group_by(Extensive_misassembly, assembler, Rep) %>% summarize(n = n()) %>% ungroup() %>% ggplot(aes(Extensive_misassembly, n, color=assembler)) + geom_boxplot() + scale_y_log10() + labs(x='metaQUAST extensive mis-assembly', y='Count') + coord_flip() + theme_bw() + theme( axis.text.x = element_text(angle=45, hjust=1) ) dims(8,4) plot(p) ``` # sessionInfo ``` sessionInfo() ```
<center> A crash course in <br><br> <b><font size=44px>Surviving Titanic</font></b> <br><br> (with numpy and matplotlib) </center> --- This notebook is going to teach you to use the basic data science stack for Python: Jupyter, Numpy, matplotlib, and sklearn. ### Part I: Jupyter notebooks in a nutshell * You are reading this line in a jupyter notebook. * A notebook consists of cells. A cell can contain either code or hypertext. * This cell contains hypertext. The next cell contains code. * You can __run a cell__ with code by selecting it (click) and pressing `Ctrl + Enter` to execute the code and display output(if any). * If you're running this on a device with no keyboard, ~~you are doing it wrong~~ use the top bar (esp. play/stop/restart buttons) to run code. * Behind the curtains, there's a Python interpreter that runs that code and remembers anything you defined. Run these cells to get started ``` a = 5 print(a * 2) ``` * `Ctrl + S` to save changes (or use the button that looks like a floppy disk) * Top menu → Kernel → Interrupt (or Stop button) if you want it to stop running cell midway. * Top menu → Kernel → Restart (or cyclic arrow button) if interrupt doesn't fix the problem (you will lose all variables). * For shortcut junkies like us: Top menu → Help → Keyboard Shortcuts * More: [Hacker's guide](http://arogozhnikov.github.io/2016/09/10/jupyter-features.html), [Beginner's guide](https://jupyter-notebook-beginner-guide.readthedocs.io/en/latest/), [Datacamp tutorial](https://www.datacamp.com/community/tutorials/tutorial-jupyter-notebook) Now __the most important feature__ of jupyter notebooks for this course: * if you're typing something, press `Tab` to see automatic suggestions, use arrow keys + enter to pick one. * if you move your cursor inside some function and press `Shift + Tab`, you'll get a help window. `Shift + (Tab , Tab)` (press `Tab` twice) will expand it. 
``` # run this first import math # place your cursor at the end of the unfinished line below to find a function # that computes arctangent from two parameters (should have 2 in it's name) # once you chose it, press shift + tab + tab(again) to see the docs math.a # <--- ``` ### Part II: Loading data with Pandas Pandas is a library that helps you load the data, prepare it and perform some lightweight analysis. The god object here is the `pandas.DataFrame` - a 2D table with batteries included. In the cells below we use it to read the data on the infamous titanic shipwreck. __please keep running all the code cells as you read__ ``` # If you are running in Google Colab, this cell will download the dataset from our repository. # Otherwise, this cell will do nothing. import sys if 'google.colab' in sys.modules: !wget -q https://raw.githubusercontent.com/yandexdataschool/Practical_RL/coursera/week1_intro/primer/train.csv import pandas as pd # this yields a pandas.DataFrame data = pd.read_csv("train.csv", index_col='PassengerId') # Selecting rows head = data[:10] head # if you leave an expression at the end of a cell, jupyter will "display" it automatically ``` #### About the data Here's some of the columns * Name - a string with person's full name * Survived - 1 if a person survived the shipwreck, 0 otherwise. * Pclass - passenger class. Pclass == 3 is cheap'n'cheerful, Pclass == 1 is for moneybags. * Sex - a person's gender (in those good ol' times when there were just 2 of them) * Age - age in years, if available * Sibsp - number of siblings on a ship * Parch - number of parents on a ship * Fare - ticket cost * Embarked - port where the passenger embarked * C = Cherbourg; Q = Queenstown; S = Southampton ``` # table dimensions print("len(data) =", len(data)) print("data.shape =", data.shape) # select a single row by PassengerId (using .loc) print(data.loc[4]) # select a single row by index (using .iloc) print(data.iloc[3]) # select a single column. 
ages = data["Age"] print(ages[:10]) # alternatively: data.Age # select several columns and rows at once # alternatively: data[["Fare","Pclass"]].loc[5:10] data.loc[5:10, ("Fare", "Pclass")] ``` ## Your turn: ``` # Select passengers number 13 and 666 (with these PassengerId values). Did they survive? data.loc[[13, 666]] # Compute the overall survival rate: what fraction of passengers survived the shipwreck? len(data[data['Survived']==1])/len(data) ``` --- Pandas also has some basic data analysis tools. For one, you can quickly display statistical aggregates for each column using `.describe()` ``` data.describe() ``` Some columns contain __NaN__ values - this means that there is no data there. For example, passenger `#6` has unknown age. To simplify the future data analysis, we'll replace NaN values by using pandas `fillna` function. _Note: we do this so easily because it's a tutorial. In general, you think twice before you modify data like this._ ``` data.loc[6] data['Age'] = data['Age'].fillna(value=data['Age'].mean()) data['Fare'] = data['Fare'].fillna(value=data['Fare'].mean()) data.loc[6] ``` More pandas: * A neat [tutorial](http://pandas.pydata.org/) from pydata * Official [tutorials](https://pandas.pydata.org/pandas-docs/stable/tutorials.html), including this [10 minutes to pandas](https://pandas.pydata.org/pandas-docs/stable/10min.html#min) * Bunch of cheat sheets awaits just one google query away from you (e.g. [basics](http://blog.yhat.com/static/img/datacamp-cheat.png), [combining datasets](https://pbs.twimg.com/media/C65MaMpVwAA3v0A.jpg) and so on). ### Part III: Numpy and vectorized computing Almost any machine learning model requires some computational heavy lifting usually involving linear algebra problems. Unfortunately, raw Python is terrible at this because each operation is interpreted at runtime. So instead, we'll use `numpy` - a library that lets you run blazing fast computation with vectors, matrices and other tensors.
Again, the god object here is `numpy.ndarray`:

```
import numpy as np

a = np.array([1, 2, 3, 4, 5])
b = np.array([5, 4, 3, 2, 1])

print("a =", a)
print("b =", b)

# math and boolean operations can be applied to each element of an array
print("a + 1 =", a + 1)
print("a * 2 =", a * 2)
print("a == 2", a == 2)

# ... or corresponding elements of two (or more) arrays
print("a + b =", a + b)
print("a * b =", a * b)

# Your turn: compute half-products of a and b elements (i.e. ½ of the products of corresponding elements)
(a * b) / 2

# compute elementwise quotient between squared a and (b plus 1)
a * a / (b + 1)
```

---

### How fast is it, Harry?

![img](https://img.buzzfeed.com/buzzfeed-static/static/2015-11/6/7/enhanced/webdr10/enhanced-buzz-22847-1446811476-0.jpg)

Let's compare computation time for Python and Numpy
* Two arrays of $10^6$ elements
  * first one: from 0 to 1 000 000
  * second one: from 99 to 1 000 099
* Computing:
  * elementwise sum
  * elementwise product
  * square root of first array
  * sum of all elements in the first array

```
%%time
# ^-- this "magic" measures and prints cell computation time

# Option I: pure Python
arr_1 = range(1000000)
arr_2 = range(99, 1000099)

a_sum = []
a_prod = []
sqrt_a1 = []
for i in range(len(arr_1)):
    a_sum.append(arr_1[i] + arr_2[i])
    a_prod.append(arr_1[i] * arr_2[i])
    sqrt_a1.append(arr_1[i] ** 0.5)  # note: the square roots go into sqrt_a1, not a_sum

arr_1_sum = sum(arr_1)

%%time
# Option II: start from Python, convert to numpy
arr_1 = range(1000000)
arr_2 = range(99, 1000099)

arr_1, arr_2 = np.array(arr_1), np.array(arr_2)

a_sum = arr_1 + arr_2
a_prod = arr_1 * arr_2
sqrt_a1 = arr_1 ** .5
arr_1_sum = arr_1.sum()

%%time
# Option III: pure numpy
arr_1 = np.arange(1000000)
arr_2 = np.arange(99, 1000099)

a_sum = arr_1 + arr_2
a_prod = arr_1 * arr_2
sqrt_a1 = arr_1 ** .5
arr_1_sum = arr_1.sum()
```

If you want more serious benchmarks, take a look at [this](http://brilliantlywrong.blogspot.ru/2015/01/benchmarks-of-speed-numpy-vs-all.html).
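Speed aside, it is worth checking that the vectorized version computes the same thing as the loop. A small self-contained sketch (toy-sized arrays, so it runs instantly):

```python
import numpy as np

# small toy arrays instead of the million-element ones above
arr_1 = list(range(10))
arr_2 = list(range(99, 109))

# pure-Python elementwise sum and product
py_sum = [x + y for x, y in zip(arr_1, arr_2)]
py_prod = [x * y for x, y in zip(arr_1, arr_2)]

# vectorized numpy equivalents
np_1, np_2 = np.array(arr_1), np.array(arr_2)
np_sum = np_1 + np_2
np_prod = np_1 * np_2

# both approaches agree element by element
assert py_sum == np_sum.tolist()
assert py_prod == np_prod.tolist()
print("results match")
```

The only thing that changes between the three options above is *how* the arithmetic is dispatched, not *what* is computed.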
---

There's also a bunch of pre-implemented operations including logarithms, trigonometry, vector/matrix products and aggregations.

```
a = np.array([1, 2, 3, 4, 5])
b = np.array([5, 4, 3, 2, 1])

print("numpy.sum(a) =", np.sum(a))
print("numpy.mean(a) =", np.mean(a))
print("numpy.min(a) =", np.min(a))
print("numpy.argmin(b) =", np.argmin(b))  # index of minimal element

# dot product. Also used for matrix/tensor multiplication
print("numpy.dot(a,b) =", np.dot(a, b))

print(
    "numpy.unique(['male','male','female','female','male']) =",
    np.unique(['male', 'male', 'female', 'female', 'male']))
```

There is a lot more stuff. Check out a Numpy cheat sheet [here](https://s3.amazonaws.com/assets.datacamp.com/blog_assets/Numpy_Python_Cheat_Sheet.pdf).

The important part: all this functionality works with dataframes:

```
print("Max ticket price: ", np.max(data["Fare"]))

print("\nThe guy who paid the most:\n", data.iloc[np.argmax(data["Fare"])])

# your code: compute mean passenger age and the oldest guy on the ship
print("Mean passenger age: ", np.mean(data['Age']))
print("The oldest guy on the ship:\n", data.loc[data['Age'].idxmax()])

print("Boolean operations")

print('a =', a)
print('b =', b)

print("a > 2", a > 2)
print("numpy.logical_not(a>2) =", np.logical_not(a > 2))
print("numpy.logical_and(a>2,b>2) =", np.logical_and(a > 2, b > 2))
print("numpy.logical_or(a>2,b<3) =", np.logical_or(a > 2, b < 3))

print()
print("shortcuts")
print("~(a > 2) =", ~(a > 2))                    # logical_not(a > 2)
print("(a > 2) & (b > 2) =", (a > 2) & (b > 2))  # logical_and
print("(a > 2) | (b < 3) =", (a > 2) | (b < 3))  # logical_or
```

The final Numpy feature we'll need is indexing: selecting elements from an array. Aside from Python indexes and slices (e.g. `a[1:4]`), Numpy also allows you to select several elements at once.
```
a = np.array([0, 1, 4, 9, 16, 25])
ix = np.array([1, 2, 5])
print("a =", a)

print("Select by element index")
print("a[[1,2,5]] =", a[ix])

print("\nSelect by boolean mask")
# select all elements in a that are greater than 5
print("a[a > 5] =", a[a > 5])
print("(a % 2 == 0) =", a % 2 == 0)  # True for even, False for odd
print("a[a % 2 == 0] =", a[a % 2 == 0])  # select all elements in a that are even

# select male children
print("data[(data['Age'] < 18) & (data['Sex'] == 'male')] = (below)")
data[(data['Age'] < 18) & (data['Sex'] == 'male')]
```

### Your turn

Use numpy and pandas to answer a few questions about data

```
# who on average paid more for their ticket, men or women?
mean_fare_men = data[data['Sex'] == 'male']['Fare'].mean()
mean_fare_women = data[data['Sex'] == 'female']['Fare'].mean()
print(mean_fare_men, mean_fare_women)

# who is more likely to survive: a child (<18 yo) or an adult?
child_survival_rate = data[data['Age'] < 18]['Survived'].mean()
adult_survival_rate = data[data['Age'] >= 18]['Survived'].mean()
print(child_survival_rate, adult_survival_rate)
```

# Part IV: plots and matplotlib

Using Python to visualize the data is covered by yet another library: matplotlib.

Just like Python itself, matplotlib has an awesome tendency of keeping simple things simple while still allowing you to write complicated stuff with convenience (e.g. super-detailed plots or custom animations).

```
import matplotlib.pyplot as plt
%matplotlib inline

# ^-- this "magic" tells all future matplotlib plots to be drawn inside notebook and not in a separate window.
# line plot
plt.plot([0, 1, 2, 3, 4, 5], [0, 1, 4, 9, 16, 25])

# scatter-plot
plt.scatter([0, 1, 2, 3, 4, 5], [0, 1, 4, 9, 16, 25])

plt.show()  # show the first plot and begin drawing next one

# draw a scatter plot with custom markers and colors
plt.scatter([1, 1, 2, 3, 4, 4.5], [3, 2, 2, 5, 15, 24],
            c=["red", "blue", "orange", "green", "cyan", "gray"], marker="x")

# without .show(), several plots will be drawn on top of one another
plt.plot([0, 1, 2, 3, 4, 5], [0, 1, 4, 9, 16, 25], c="black")

# adding more sugar
plt.title("Conspiracy theory proven!!!")
plt.xlabel("Per capita alcohol consumption")
plt.ylabel("# Layers in state of the art image classifier")

# fun with correlations: http://bit.ly/1FcNnWF

# histogram - showing data density
plt.hist([0, 1, 1, 1, 2, 2, 3, 3, 3, 3, 3, 4, 4, 5, 5, 5, 6, 7, 7, 8, 9, 10])
plt.show()

plt.hist([0, 1, 1, 1, 2, 2, 3, 3, 3, 3, 3, 4, 4, 5, 5, 5, 6, 7, 7, 8, 9, 10], bins=5)

# plot a histogram of age and a histogram of ticket fares on separate plots
plt.hist(data['Age'])
plt.show()
plt.hist(data['Fare'])

# bonus: use tab shift-tab to see if there is a way to draw a 2D histogram of age vs fare.

# make a scatter plot of passenger age vs ticket fare
# note: colors must be passed via the keyword c=..., since the third
# positional argument of plt.scatter is the marker size, not the color
colors = np.where(data['Sex'] == 'male', 'blue', 'red')
plt.scatter(data['Age'], data['Fare'], c=colors)
plt.show()

# kudos if you add separate colors for men and women
```

* Extended [tutorial](https://matplotlib.org/2.0.2/users/pyplot_tutorial.html)
* A [cheat sheet](http://bit.ly/2koHxNF)
* Other libraries for more sophisticated stuff: [Plotly](https://plot.ly/python/) and [Bokeh](https://bokeh.pydata.org/en/latest/)

### Part V (final): machine learning with scikit-learn

<img src='https://imgs.xkcd.com/comics/machine_learning.png' width=320px>

Scikit-learn is _the_ tool for simple machine learning pipelines.
It's a single library that unites a whole bunch of models under the common interface:
* Create: `model = sklearn.whatever.ModelNameHere(parameters_if_any)`
* Train: `model.fit(X, y)`
* Predict: `model.predict(X_test)`

It also contains utilities for feature extraction, quality estimation or cross-validation.

```
data["Sex"] = (data["Sex"] == "male") * 1

from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score

features = data[["Fare", "SibSp", "Sex"]].copy()
answers = data["Survived"]

model = RandomForestClassifier(n_estimators=100)
model.fit(features[:-100], answers[:-100])

test_predictions = model.predict(features[-100:])
print("Test accuracy:", accuracy_score(answers[-100:], test_predictions))
```

Final quest: add more features to achieve accuracy of at least 0.80

__Hint:__ for string features like "Sex" or "Embarked" you will have to compute some kind of numeric representation. For example, 1 if male and 0 if female or vice versa

__Hint II:__ you can use `model.feature_importances_` to get a hint on how much the model relied on each of your features.

Here are more resources for sklearn:
* [Tutorials](http://scikit-learn.org/stable/tutorial/index.html)
* [Examples](http://scikit-learn.org/stable/auto_examples/index.html)
* [Cheat sheet](http://scikit-learn.org/stable/_static/ml_map.png)

---

Okay, here's what we've learned: to survive a shipwreck you need to become an underage girl with parents on the ship.

Be sure to use this helpful advice next time you find yourself in a shipwreck.
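A starting point for the final quest above: string columns such as `Embarked` need a numeric representation before they can go into the model. One common option is one-hot encoding via `pandas.get_dummies`; here is a sketch on a toy frame (not the real dataset):

```python
import pandas as pd

# toy stand-in for the real Embarked column
toy = pd.DataFrame({"Embarked": ["S", "C", "Q", "S"]})

# one 0/1 indicator column per port, named Embarked_C / Embarked_Q / Embarked_S
onehot = pd.get_dummies(toy["Embarked"], prefix="Embarked")
print(onehot)
```

The resulting indicator columns can be concatenated onto the `features` frame with `pd.concat([features, onehot], axis=1)` before fitting the model.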
<div class="alert alert-block alert-info" style="margin-top: 20px">
 <a href="https://cocl.us/corsera_da0101en_notebook_top">
 <img src="https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/DA0101EN/Images/TopAd.png" width="750" align="center">
 </a>
</div>

<a href="https://www.bigdatauniversity.com"><img src="https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/DA0101EN/Images/CCLog.png" width=300, align="center"></a>

<h1 align=center><font size=5>Data Analysis with Python</font></h1>

<h1>Module 5: Model Evaluation and Refinement</h1>

We have built models and made predictions of vehicle prices. Now we will determine how accurate these predictions are.

<h1>Table of Contents</h1>
<ul>
 <li><a href="#ref1">Model Evaluation</a></li>
 <li><a href="#ref2">Over-fitting, Under-fitting and Model Selection</a></li>
 <li><a href="#ref3">Ridge Regression</a></li>
 <li><a href="#ref4">Grid Search</a></li>
</ul>

This dataset was hosted on IBM Cloud Object Storage; click <a href="https://cocl.us/DA101EN_object_storage">HERE</a> for free storage.

```
import pandas as pd
import numpy as np

# Import clean data
path = 'https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/DA0101EN/module_5_auto.csv'
df = pd.read_csv(path)

df.to_csv('module_5_auto.csv')
```

First let's only use numeric data

```
df = df._get_numeric_data()
df.head()
```

Libraries for plotting

```
%%capture
!
pip install ipywidgets

from IPython.display import display
from IPython.html import widgets
from ipywidgets import interact, interactive, fixed, interact_manual
```

<h2>Functions for plotting</h2>

```
def DistributionPlot(RedFunction, BlueFunction, RedName, BlueName, Title):
    width = 12
    height = 10
    plt.figure(figsize=(width, height))

    ax1 = sns.distplot(RedFunction, hist=False, color="r", label=RedName)
    ax2 = sns.distplot(BlueFunction, hist=False, color="b", label=BlueName, ax=ax1)

    plt.title(Title)
    plt.xlabel('Price (in dollars)')
    plt.ylabel('Proportion of Cars')

    plt.show()
    plt.close()

def PollyPlot(xtrain, xtest, y_train, y_test, lr, poly_transform):
    width = 12
    height = 10
    plt.figure(figsize=(width, height))

    # xtrain, xtest: training and testing data
    # lr: linear regression object
    # poly_transform: polynomial transformation object

    xmax = max([xtrain.values.max(), xtest.values.max()])
    xmin = min([xtrain.values.min(), xtest.values.min()])

    x = np.arange(xmin, xmax, 0.1)

    plt.plot(xtrain, y_train, 'ro', label='Training Data')
    plt.plot(xtest, y_test, 'go', label='Test Data')
    plt.plot(x, lr.predict(poly_transform.fit_transform(x.reshape(-1, 1))), label='Predicted Function')
    plt.ylim([-10000, 60000])
    plt.ylabel('Price')
    plt.legend()
```

<h1 id="ref1">Part 1: Training and Testing</h1>

<p>An important step in testing your model is to split your data into training and testing data. We will place the target data <b>price</b> in a separate Series <b>y</b>:</p>

```
y_data = df['price']
```

drop price data in x data

```
x_data = df.drop('price', axis=1)
```

Now we randomly split our data into training and testing data using the function <b>train_test_split</b>.
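Under the hood, the `train_test_split` call in the next cell just shuffles the row indices and cuts them at the requested proportion. The idea can be sketched by hand with numpy (toy sizes, hypothetical data):

```python
import numpy as np

rng = np.random.default_rng(1)
n_rows, test_size = 20, 0.15

# shuffle the row indices, then cut at the requested proportion
idx = rng.permutation(n_rows)
n_test = int(n_rows * test_size)
test_idx, train_idx = idx[:n_test], idx[n_test:]

print("train rows:", len(train_idx), "test rows:", len(test_idx))
```

Every row ends up in exactly one of the two sets, which is what makes the test score an honest estimate.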
```
from sklearn.model_selection import train_test_split

x_train, x_test, y_train, y_test = train_test_split(x_data, y_data, test_size=0.15, random_state=1)

print("number of test samples :", x_test.shape[0])
print("number of training samples:", x_train.shape[0])
```

The <b>test_size</b> parameter sets the proportion of data that is split into the testing set. In the above, the testing set is 15% of the total dataset.

<div class="alert alert-danger alertdanger" style="margin-top: 20px">
<h1> Question #1):</h1>

<b>Use the function "train_test_split" to split up the data set such that 40% of the data samples will be utilized for testing, set the parameter "random_state" equal to zero. The output of the function should be the following: "x_train_1" , "x_test_1", "y_train_1" and "y_test_1".</b>
</div>

```
# Write your code below and press Shift+Enter to execute
from sklearn.model_selection import train_test_split
x_train_1, x_test_1, y_train_1, y_test_1 = train_test_split(x_data, y_data, test_size=0.4, random_state=0)
print("number of test samples:", x_test_1.shape[0])
print("number of training samples:", x_train_1.shape[0])
```

Double-click <b>here</b> for the solution.

<!-- The answer is below:

x_train1, x_test1, y_train1, y_test1 = train_test_split(x_data, y_data, test_size=0.4, random_state=0)
print("number of test samples :", x_test1.shape[0])
print("number of training samples:", x_train1.shape[0])

-->

Let's import <b>LinearRegression</b> from the module <b>linear_model</b>.

```
from sklearn.linear_model import LinearRegression
```

We create a Linear Regression object:

```
lre = LinearRegression()
```

we fit the model using the feature horsepower

```
lre.fit(x_train[['horsepower']], y_train)
```

Let's calculate the R^2 on the test data:

```
lre.score(x_test[['horsepower']], y_test)
```

we can see the R^2 is much smaller using the test data.
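The value that `.score` returns is R^2: one minus the ratio of residual variance to total variance. A hand-rolled sketch on toy numbers:

```python
import numpy as np

# toy actual and predicted values (hypothetical, not the auto dataset)
y_true = np.array([10.0, 12.0, 14.0, 16.0])
y_pred = np.array([11.0, 11.5, 14.5, 15.0])

# R^2 = 1 - SS_res / SS_tot
ss_res = np.sum((y_true - y_pred) ** 2)
ss_tot = np.sum((y_true - y_true.mean()) ** 2)
r2 = 1 - ss_res / ss_tot
print("R^2 =", r2)
```

A perfect fit gives R^2 = 1, predicting the mean gives 0, and anything worse than the mean goes negative; the training-data score in the next cell uses exactly the same formula.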
```
lre.score(x_train[['horsepower']], y_train)
```

<div class="alert alert-danger alertdanger" style="margin-top: 20px">
<h1> Question #2): </h1>
<b> Find the R^2 on the test data using 90% of the data for training data </b>
</div>

```
# Write your code below and press Shift+Enter to execute
x_train1, x_test1, y_train1, y_test1 = train_test_split(x_data, y_data, test_size=0.1, random_state=0)
lre.fit(x_train1[['horsepower']], y_train1)
lre.score(x_test1[['horsepower']], y_test1)
```

Double-click <b>here</b> for the solution.

<!-- The answer is below:

x_train1, x_test1, y_train1, y_test1 = train_test_split(x_data, y_data, test_size=0.1, random_state=0)
lre.fit(x_train1[['horsepower']],y_train1)
lre.score(x_test1[['horsepower']],y_test1)

-->

Sometimes you do not have sufficient testing data; as a result, you may want to perform cross-validation. Let's go over several methods that you can use for cross-validation.

<h2>Cross-validation Score</h2>

Let's import <b>cross_val_score</b> from the module <b>model_selection</b>.

```
from sklearn.model_selection import cross_val_score
```

We input the object, the feature in this case 'horsepower', and the target data (y_data). The parameter 'cv' determines the number of folds; in this case 4.

```
Rcross = cross_val_score(lre, x_data[['horsepower']], y_data, cv=4)
```

The default scoring is R^2; each element in the array is the R^2 value for one fold:

```
Rcross
```

We can calculate the average and standard deviation of our estimate:

```
print("The mean of the folds is", Rcross.mean(), "and the standard deviation is", Rcross.std())
```

We can use negative squared error as a score by setting the parameter 'scoring' metric to 'neg_mean_squared_error'.
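Before switching the scoring metric in the next cell, it helps to see what `cross_val_score` did above: split the data into `cv` folds, train on all but one fold, score on the held-out fold, and repeat. A bare-bones sketch with a trivial mean-predictor model (synthetic data, not the auto set):

```python
import numpy as np

rng = np.random.default_rng(0)
y = rng.normal(size=12)
cv = 4

fold_scores = []
folds = np.array_split(np.arange(len(y)), cv)
for k in range(cv):
    test_idx = folds[k]
    train_idx = np.concatenate([folds[j] for j in range(cv) if j != k])
    # "model": predict the training mean; score with mean squared error
    pred = y[train_idx].mean()
    fold_scores.append(np.mean((y[test_idx] - pred) ** 2))

print("per-fold MSE:", fold_scores)
print("average MSE:", np.mean(fold_scores))
```

`cross_val_score` does the same bookkeeping, just with a real estimator and R^2 (or whatever `scoring` metric you pass) instead of raw MSE.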
```
-1 * cross_val_score(lre, x_data[['horsepower']], y_data, cv=4, scoring='neg_mean_squared_error')
```

<div class="alert alert-danger alertdanger" style="margin-top: 20px">
<h1> Question #3): </h1>
<b> Calculate the average R^2 using two folds, utilizing the horsepower as a feature: </b>
</div>

```
# Write your code below and press Shift+Enter to execute
Rc = cross_val_score(lre, x_data[['horsepower']], y_data, cv=2)
Rc.mean()
```

Double-click <b>here</b> for the solution.

<!-- The answer is below:

Rc=cross_val_score(lre,x_data[['horsepower']], y_data,cv=2)
Rc.mean()

-->

You can also use the function 'cross_val_predict' to predict the output. The function splits up the data into the specified number of folds; for each fold, the model is trained on the remaining folds and then used to predict on that held-out fold. First import the function:

```
from sklearn.model_selection import cross_val_predict
```

We input the object, the feature in this case <b>'horsepower'</b>, and the target data <b>y_data</b>. The parameter 'cv' determines the number of folds; in this case 4. We can produce an output:

```
yhat = cross_val_predict(lre, x_data[['horsepower']], y_data, cv=4)
yhat[0:5]
```

<h1 id="ref2">Part 2: Overfitting, Underfitting and Model Selection</h1>

<p>It turns out that the test data, sometimes referred to as the out-of-sample data, is a much better measure of how well your model performs in the real world. One reason for this is overfitting; let's go over some examples. It turns out these differences are more apparent in Multiple Linear Regression and Polynomial Regression, so we will explore overfitting in that context.</p>

Let's create Multiple Linear Regression objects and train the model using <b>'horsepower'</b>, <b>'curb-weight'</b>, <b>'engine-size'</b> and <b>'highway-mpg'</b> as features.
```
lr = LinearRegression()
lr.fit(x_train[['horsepower', 'curb-weight', 'engine-size', 'highway-mpg']], y_train)
```

Prediction using training data:

```
yhat_train = lr.predict(x_train[['horsepower', 'curb-weight', 'engine-size', 'highway-mpg']])
yhat_train[0:5]
```

Prediction using test data:

```
yhat_test = lr.predict(x_test[['horsepower', 'curb-weight', 'engine-size', 'highway-mpg']])
yhat_test[0:5]
```

Let's perform some model evaluation using our training and testing data separately. First we import the seaborn and matplotlib libraries for plotting.

```
import matplotlib.pyplot as plt
%matplotlib inline
import seaborn as sns
```

Let's examine the distribution of the predicted values of the training data.

```
Title = 'Distribution Plot of Predicted Value Using Training Data vs Training Data Distribution'
DistributionPlot(y_train, yhat_train, "Actual Values (Train)", "Predicted Values (Train)", Title)
```

Figure 1: Plot of predicted values using the training data compared to the training data.

So far the model seems to be doing well in learning from the training dataset. But what happens when the model encounters new data from the testing dataset? When the model generates new values from the test data, we see the distribution of the predicted values is much different from the actual target values.

```
Title = 'Distribution Plot of Predicted Value Using Test Data vs Data Distribution of Test Data'
DistributionPlot(y_test, yhat_test, "Actual Values (Test)", "Predicted Values (Test)", Title)
```

Figure 2: Plot of predicted values using the test data compared to the test data.

<p>Comparing Figure 1 and Figure 2, it is evident that the predicted distribution matches the actual data much better on the training data (Figure 1) than on the test data (Figure 2). The difference in Figure 2 is most apparent in the $5,000 to $15,000 range, where the distribution shapes are exceptionally different.
Let's see if polynomial regression also exhibits a drop in the prediction accuracy when analysing the test dataset.</p>

```
from sklearn.preprocessing import PolynomialFeatures
```

<h4>Overfitting</h4>

<p>Overfitting occurs when the model fits the noise, not the underlying process. Therefore, when testing your model using the test set, your model does not perform as well, since it is modelling noise rather than the underlying process that generated the relationship. Let's create a degree 5 polynomial model.</p>

Let's use 55 percent of the data for training and the rest for testing:

```
x_train, x_test, y_train, y_test = train_test_split(x_data, y_data, test_size=0.45, random_state=0)
```

We will perform a degree 5 polynomial transformation on the feature <b>'horsepower'</b>.

```
pr = PolynomialFeatures(degree=5)
x_train_pr = pr.fit_transform(x_train[['horsepower']])
x_test_pr = pr.fit_transform(x_test[['horsepower']])
pr
```

Now let's create a linear regression model "poly" and train it.

```
poly = LinearRegression()
poly.fit(x_train_pr, y_train)
```

We can see the output of our model using the method "predict", then assign the values to "yhat".

```
yhat = poly.predict(x_test_pr)
yhat[0:5]
```

Let's take the first four predicted values and compare them to the actual targets.

```
print("Predicted values:", yhat[0:4])
print("True values:", y_test[0:4].values)
```

We will use the function "PollyPlot" that we defined at the beginning of the lab to display the training data, testing data, and the predicted function.

```
PollyPlot(x_train[['horsepower']], x_test[['horsepower']], y_train, y_test, poly, pr)
```

Figure 4: A polynomial regression model; red dots represent training data, green dots represent test data, and the blue line represents the model prediction.

We see that the estimated function appears to track the data but around 200 horsepower, the function begins to diverge from the data points.
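This divergence is the signature of overfitting, and it can be reproduced on synthetic data: raising the polynomial degree drives the training error down while the test error grows. A numpy-only sketch (made-up sine data, not the auto set):

```python
import numpy as np

rng = np.random.default_rng(0)

def target(x):
    # hidden "true" process: a sine wave
    return np.sin(2 * np.pi * x)

# 15 noisy training points and 15 held-out test points
x_train = np.linspace(0, 1, 15)
x_test = np.linspace(0.02, 0.98, 15)
y_train = target(x_train) + rng.normal(scale=0.3, size=x_train.size)
y_test = target(x_test) + rng.normal(scale=0.3, size=x_test.size)

def train_test_mse(degree):
    # least-squares polynomial fit on the training points only
    p = np.poly1d(np.polyfit(x_train, y_train, degree))
    return (np.mean((p(x_train) - y_train) ** 2),
            np.mean((p(x_test) - y_test) ** 2))

for degree in (3, 9):
    tr, te = train_test_mse(degree)
    print(f"degree {degree}: train MSE {tr:.4f}, test MSE {te:.4f}")
```

The training MSE can only go down as the degree rises (the smaller model is nested inside the larger one), which is exactly why it cannot be trusted for model selection.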
R^2 of the training data:

```
poly.score(x_train_pr, y_train)
```

R^2 of the test data:

```
poly.score(x_test_pr, y_test)
```

We see the R^2 for the training data is 0.5567, while the R^2 on the test data is -29.87. The lower the R^2, the worse the model; a negative R^2 is a sign of overfitting.

Let's see how the R^2 changes on the test data for different order polynomials and then plot the results:

```
Rsqu_test = []

order = [1, 2, 3, 4]
for n in order:
    pr = PolynomialFeatures(degree=n)

    x_train_pr = pr.fit_transform(x_train[['horsepower']])
    x_test_pr = pr.fit_transform(x_test[['horsepower']])

    lr.fit(x_train_pr, y_train)
    Rsqu_test.append(lr.score(x_test_pr, y_test))

plt.plot(order, Rsqu_test)
plt.xlabel('order')
plt.ylabel('R^2')
plt.title('R^2 Using Test Data')
plt.text(3, 0.75, 'Maximum R^2 ')
```

We see the R^2 gradually increases until an order three polynomial is used. Then the R^2 dramatically decreases at order four.

The following function will be used in the next section; please run the cell.

```
def f(order, test_data):
    x_train, x_test, y_train, y_test = train_test_split(x_data, y_data, test_size=test_data, random_state=0)
    pr = PolynomialFeatures(degree=order)
    x_train_pr = pr.fit_transform(x_train[['horsepower']])
    x_test_pr = pr.fit_transform(x_test[['horsepower']])
    poly = LinearRegression()
    poly.fit(x_train_pr, y_train)
    PollyPlot(x_train[['horsepower']], x_test[['horsepower']], y_train, y_test, poly, pr)
```

The following interface allows you to experiment with different polynomial orders and different amounts of data.

```
interact(f, order=(0, 6, 1), test_data=(0.05, 0.95, 0.05))
```

<div class="alert alert-danger alertdanger" style="margin-top: 20px">
<h1> Question #4a):</h1>

<b>We can perform polynomial transformations with more than one feature. Create a "PolynomialFeatures" object "pr1" of degree two.</b>
</div>

```
pr1 = PolynomialFeatures(degree=2)
```

Double-click <b>here</b> for the solution.
<!-- The answer is below:

pr1=PolynomialFeatures(degree=2)

-->

<div class="alert alert-danger alertdanger" style="margin-top: 20px">
<h1> Question #4b): </h1>

<b> Transform the training and testing samples for the features 'horsepower', 'curb-weight', 'engine-size' and 'highway-mpg'. Hint: use the method "fit_transform". </b>
</div>

```
x_train_pr1 = pr1.fit_transform(x_train[['horsepower', 'curb-weight', 'engine-size', 'highway-mpg']])
x_test_pr1 = pr1.fit_transform(x_test[['horsepower', 'curb-weight', 'engine-size', 'highway-mpg']])
```

Double-click <b>here</b> for the solution.

<!-- The answer is below:

x_train_pr1=pr1.fit_transform(x_train[['horsepower', 'curb-weight', 'engine-size', 'highway-mpg']])
x_test_pr1=pr1.fit_transform(x_test[['horsepower', 'curb-weight', 'engine-size', 'highway-mpg']])

-->

<div class="alert alert-danger alertdanger" style="margin-top: 20px">
<h1> Question #4c): </h1>

<b> How many dimensions does the new feature have? Hint: use the attribute "shape" </b>
</div>

```
x_train_pr1.shape
```

Double-click <b>here</b> for the solution.

<!-- The answer is below:

There are now 15 features:
x_train_pr1.shape

-->

<div class="alert alert-danger alertdanger" style="margin-top: 20px">
<h1> Question #4d): </h1>

<b> Create a linear regression model "poly1" and train the object using the method "fit" with the polynomial features.</b>
</div>

```
poly1 = LinearRegression()
poly1.fit(x_train_pr1, y_train)
```

Double-click <b>here</b> for the solution.
<!-- The answer is below:

poly1 = LinearRegression()
poly1.fit(x_train_pr1, y_train)

-->

<div class="alert alert-danger alertdanger" style="margin-top: 20px">
<h1> Question #4e): </h1>

<b>Use the method "predict" to predict an output on the polynomial features, then use the function "DistributionPlot" to display the distribution of the predicted output vs the test data.</b>
</div>

```
poly1 = LinearRegression()
poly1.fit(x_train_pr1, y_train)

yhat_test1 = poly1.predict(x_test_pr1)
Title = 'Distribution Plot of Predicted Value Using Test Data vs Data Distribution of Test Data'
DistributionPlot(y_test, yhat_test1, "Actual Values (Test)", "Predicted Values (Test)", Title)
```

Double-click <b>here</b> for the solution.

<!-- The answer is below:

yhat_test1=poly1.predict(x_test_pr1)
Title='Distribution Plot of Predicted Value Using Test Data vs Data Distribution of Test Data'
DistributionPlot(y_test, yhat_test1, "Actual Values (Test)", "Predicted Values (Test)", Title)

-->

<div class="alert alert-danger alertdanger" style="margin-top: 20px">
<h1> Question #4f): </h1>

<b>Use the distribution plot to determine the two regions where the predicted prices are less accurate than the actual prices.</b>
</div>

```
# The predicted value is lower than the actual value for cars in the $10,000 range;
# conversely, the predicted price is larger than the actual price in the $30,000 to $40,000 range.
# As such, the model is not as accurate in these ranges.
```

Double-click <b>here</b> for the solution.

<!-- The answer is below:

The predicted value is lower than the actual value for cars in the $10,000 range;
conversely, the predicted price is larger than the actual price in the $30,000 to $40,000 range.
As such, the model is not as accurate in these ranges.

-->

<img src="https://ibm.box.com/shared/static/c35ipv9zeanu7ynsnppb8gjo2re5ugeg.png" width=700 align="center">

<h2 id="ref3">Part 3: Ridge regression</h2>

In this section, we will review Ridge Regression and see how the parameter alpha changes the model. Just a note here: our test data will be used as validation data.

Let's perform a degree two polynomial transformation on our data.

```
pr = PolynomialFeatures(degree=2)
x_train_pr = pr.fit_transform(x_train[['horsepower', 'curb-weight', 'engine-size', 'highway-mpg', 'normalized-losses', 'symboling']])
x_test_pr = pr.fit_transform(x_test[['horsepower', 'curb-weight', 'engine-size', 'highway-mpg', 'normalized-losses', 'symboling']])
```

Let's import <b>Ridge</b> from the module <b>linear_model</b>.

```
from sklearn.linear_model import Ridge
```

Let's create a Ridge regression object, setting the regularization parameter alpha to 0.1

```
RigeModel = Ridge(alpha=0.1)
```

Like regular regression, you can fit the model using the method <b>fit</b>.

```
RigeModel.fit(x_train_pr, y_train)
```

Similarly, you can obtain a prediction:

```
yhat = RigeModel.predict(x_test_pr)
```

Let's compare the first four predicted samples to our test set

```
print('predicted:', yhat[0:4])
print('test set :', y_test[0:4].values)
```

We select the value of alpha that minimizes the test error; for example, we can use a for loop.
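The loop that follows scans alpha values with sklearn; the shrinkage it relies on can also be seen directly in ridge's closed-form solution, w = (X^T X + alpha I)^-1 X^T y, where larger alpha pulls the coefficients toward zero. A numpy sketch on synthetic data (intercept omitted for simplicity; not the lab's dataset):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 4))
y = X @ np.array([3.0, -2.0, 1.0, 0.5]) + rng.normal(scale=0.1, size=50)

def ridge_coefs(alpha):
    # closed form: (X^T X + alpha * I)^-1 X^T y
    n_feat = X.shape[1]
    return np.linalg.solve(X.T @ X + alpha * np.eye(n_feat), X.T @ y)

for alpha in [0.0, 1.0, 100.0]:
    w = ridge_coefs(alpha)
    print(f"alpha={alpha:>6}: ||w|| = {np.linalg.norm(w):.3f}")
```

The coefficient norm shrinks monotonically as alpha grows; the for loop in the next cell is measuring the downstream effect of that same shrinkage on the R^2 scores.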
```
Rsqu_test = []
Rsqu_train = []
dummy1 = []
ALPHA = 10 * np.array(range(0, 1000))
for alpha in ALPHA:
    RigeModel = Ridge(alpha=alpha)
    RigeModel.fit(x_train_pr, y_train)
    Rsqu_test.append(RigeModel.score(x_test_pr, y_test))
    Rsqu_train.append(RigeModel.score(x_train_pr, y_train))
```

We can plot out the value of R^2 for different alphas

```
width = 12
height = 10
plt.figure(figsize=(width, height))

plt.plot(ALPHA, Rsqu_test, label='validation data ')
plt.plot(ALPHA, Rsqu_train, 'r', label='training Data ')
plt.xlabel('alpha')
plt.ylabel('R^2')
plt.legend()
```

Figure 6: The blue line represents the R^2 of the validation data, and the red line represents the R^2 of the training data. The x-axis represents the different values of alpha.

The red line in Figure 6 represents the R^2 of the training data: as alpha increases, the R^2 decreases, so larger alpha values make the model perform worse on the training data. The blue line represents the R^2 on the validation data: as the value of alpha increases, the R^2 eventually decreases as well.

<div class="alert alert-danger alertdanger" style="margin-top: 20px">
<h1> Question #5): </h1>

Perform Ridge regression and calculate the R^2 using the polynomial features, use the training data to train the model and test data to test the model. The parameter alpha should be set to 10.
</div>

```
# Write your code below and press Shift+Enter to execute
RigeModel = Ridge(alpha=10)
RigeModel.fit(x_train_pr, y_train)
RigeModel.score(x_test_pr, y_test)
```

Double-click <b>here</b> for the solution.

<!-- The answer is below:

RigeModel = Ridge(alpha=10)
RigeModel.fit(x_train_pr, y_train)
RigeModel.score(x_test_pr, y_test)

-->

<h2 id="ref4">Part 4: Grid Search</h2>

The term alpha is a hyperparameter. sklearn has the class <b>GridSearchCV</b> to make the process of finding the best hyperparameter simpler.

Let's import <b>GridSearchCV</b> from the module <b>model_selection</b>.
```
from sklearn.model_selection import GridSearchCV
```

We create a dictionary of parameter values:

```
parameters1 = [{'alpha': [0.001, 0.1, 1, 10, 100, 1000, 10000, 100000]}]
parameters1
```

Create a Ridge regression object:

```
RR = Ridge()
RR
```

Create a ridge grid search object

```
Grid1 = GridSearchCV(RR, parameters1, cv=4)
```

Fit the model

```
Grid1.fit(x_data[['horsepower', 'curb-weight', 'engine-size', 'highway-mpg']], y_data)
```

The object finds the best parameter values on the validation data. We can obtain the estimator with the best parameters and assign it to the variable BestRR as follows:

```
BestRR = Grid1.best_estimator_
BestRR
```

We now test our model on the test data

```
BestRR.score(x_test[['horsepower', 'curb-weight', 'engine-size', 'highway-mpg']], y_test)
```

<div class="alert alert-danger alertdanger" style="margin-top: 20px">
<h1> Question #6): </h1>

Perform a grid search for the alpha parameter and the normalization parameter, then find the best values of the parameters
</div>

```
# Write your code below and press Shift+Enter to execute
parameters2 = [{'alpha': [0.001, 0.1, 1, 10, 100, 1000, 10000, 100000], 'normalize': [True, False]}]
Grid2 = GridSearchCV(Ridge(), parameters2, cv=4)
Grid2.fit(x_data[['horsepower', 'curb-weight', 'engine-size', 'highway-mpg']], y_data)
Grid2.best_estimator_
```

Double-click <b>here</b> for the solution.
<!-- The answer is below: parameters2= [{'alpha': [0.001,0.1,1, 10, 100, 1000,10000,100000,100000],'normalize':[True,False]} ] Grid2 = GridSearchCV(Ridge(), parameters2,cv=4) Grid2.fit(x_data[['horsepower', 'curb-weight', 'engine-size', 'highway-mpg']],y_data) Grid2.best_estimator_ --> <h1>Thank you for completing this notebook!</h1> <div class="alert alert-block alert-info" style="margin-top: 20px"> <p><a href="https://cocl.us/corsera_da0101en_notebook_bottom"><img src="https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/DA0101EN/Images/BottomAd.png" width="750" align="center"></a></p> </div> <h3>About the Authors:</h3> This notebook was written by <a href="https://www.linkedin.com/in/mahdi-noorian-58219234/" target="_blank">Mahdi Noorian PhD</a>, <a href="https://www.linkedin.com/in/joseph-s-50398b136/" target="_blank">Joseph Santarcangelo</a>, Bahare Talayian, Eric Xiao, Steven Dong, Parizad, Hima Vsudevan and <a href="https://www.linkedin.com/in/fiorellawever/" target="_blank">Fiorella Wenver</a> and <a href=" https://www.linkedin.com/in/yi-leng-yao-84451275/ " target="_blank" >Yi Yao</a>. <p><a href="https://www.linkedin.com/in/joseph-s-50398b136/" target="_blank">Joseph Santarcangelo</a> is a Data Scientist at IBM, and holds a PhD in Electrical Engineering. His research focused on using Machine Learning, Signal Processing, and Computer Vision to determine how videos impact human cognition. Joseph has been working for IBM since he completed his PhD.</p> <hr> <p>Copyright &copy; 2018 IBM Developer Skills Network. This notebook and its source code are released under the terms of the <a href="https://cognitiveclass.ai/mit-license/">MIT License</a>.</p>
``` !pip install yfinance --upgrade --no-cache-dir ``` ### Importing the libraries required for the implementation ``` # LinearRegression is a machine learning library for linear regression from sklearn.linear_model import LinearRegression # pandas and numpy are used for data manipulation import pandas as pd import numpy as np # matplotlib and seaborn are used for plotting graphs import matplotlib.pyplot as plt %matplotlib inline plt.style.use('seaborn-darkgrid') import yfinance as yf ``` ### Reading daily Gold ETF price data (2006 to 2022) and storing it in a DataFrame #### Plotting the CLOSE prices of Gold ``` # Read data Df = yf.download('GLD', '2006-01-01', '2022-01-03', auto_adjust=True) # Only keep the Close column Df = Df[['Close']] # Drop rows with missing values Df = Df.dropna() # Plot the closing price of GLD Df.Close.plot(figsize=(10, 7),color='r') plt.ylabel("Gold ETF Prices") plt.title("Gold ETF Price Series") plt.show() ``` ### Viewing the DataFrame collected from the ticker/index ``` Df ``` ### Some EDA on the collected data ``` #Create histogram with density plot import seaborn as sns sns.distplot(Df, hist=True, kde=True, bins=20, color = 'blue', hist_kws={'edgecolor':'black'}, kde_kws={'linewidth': 2}) # yearly Box-Plot on Data single_year = Df groups = single_year.groupby(pd.Grouper(freq='A')) years = pd.concat([pd.DataFrame(x[1].values) for x in groups], axis=1) years = pd.DataFrame(years) years.boxplot(figsize=(25,15)) plt.show() # create a scatter plot from pandas.plotting import lag_plot plt.figure(figsize=(20,10)) lag_plot(Df) plt.show() # Creating Autocorrelation plot pd.plotting.autocorrelation_plot(Df) plt.show() ``` ### Explanatory variables = the features used to predict the next day's price of the commodity ### Dependent variable = the value that depends on the explanatory variables (the Gold price in this case) #### S3 = moving average of the last 3 days #### S15 = moving average of the last 15 days #### y = the next-day Gold price to be predicted ``` #
Define explanatory variables Df['S3'] = Df['Close'].rolling(window=3).mean() Df['S15'] = Df['Close'].rolling(window=15).mean() Df['next_day_price'] = Df['Close'].shift(-1) Df = Df.dropna() X = Df[['S3', 'S15']] # Define dependent variable y = Df['next_day_price'] Df ``` ### Splitting the data into training and test sets; the training set is used to fit the Linear Regression model #### - First 80% of the data is used for training and the remaining 20% for testing ``` # Split the data into train and test dataset t = .8 t = int(t*len(Df)) # Train dataset X_train = X[:t] y_train = y[:t] # Test dataset X_test = X[t:] y_test = y[t:] ``` ### Y = m1 * X1 + m2 * X2 + C ### Gold ETF price = m1 * 3 days moving average + m2 * 15 days moving average + c ``` # Create a linear regression model linear = LinearRegression().fit(X_train, y_train) print("Linear Regression model") print("Gold ETF Price (y) = %.2f * 3 Days Moving Average (x1) \ + %.2f * 15 Days Moving Average (x2) \ + %.2f (constant)" % (linear.coef_[0], linear.coef_[1], linear.intercept_)) ``` ### Predict the Gold ETF prices ``` predicted_price = linear.predict(X_test) predicted_price = pd.DataFrame( predicted_price, index=y_test.index, columns=['price']) predicted_price.plot(figsize=(20, 14)) y_test.plot() plt.legend(['predicted_price', 'actual_price']) plt.ylabel("Gold ETF Price") plt.show() ``` ### Computing the goodness of the fit ``` # R square r2_score = linear.score(X[t:], y[t:])*100 float("{0:.2f}".format(r2_score)) ``` ### Plotting cumulative returns ``` gold = pd.DataFrame() gold['price'] = Df[t:]['Close'] gold['predicted_price_next_day'] = predicted_price gold['actual_price_next_day'] = y_test gold['gold_returns'] = gold['price'].pct_change().shift(-1) gold['signal'] = np.where(gold.predicted_price_next_day.shift(1) < gold.predicted_price_next_day,1,0) gold['strategy_returns'] = gold.signal * gold['gold_returns'] ((gold['strategy_returns']+1).cumprod()).plot(figsize=(20,14),color='g') plt.ylabel('Cumulative Returns') plt.show() ``` 
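The feature construction above (trailing moving averages plus a `shift(-1)` target) can be checked on a tiny hand-made series, where every value is easy to verify:

```python
# Tiny hand-checkable version of the feature construction used above:
# a trailing moving average via rolling(...).mean() and a next-day
# target via shift(-1); dropna() removes rows where either is undefined.
import pandas as pd

s = pd.Series([1.0, 2.0, 3.0, 4.0, 5.0])
features = pd.DataFrame({
    'S3': s.rolling(window=3).mean(),   # NaN, NaN, 2.0, 3.0, 4.0
    'y': s.shift(-1),                   # 2.0, 3.0, 4.0, 5.0, NaN
}).dropna()                             # only rows 2 and 3 survive
print(features)
```

Note that the first `window-1` rows and the last row are lost, which is why the notebook calls `dropna()` before splitting into train and test sets.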
#### A 'buy' trading signal, represented by "1", is generated when the next day's predicted price is higher than the current day's predicted price; otherwise no position is taken ``` gold ``` ### Calculating the Sharpe Ratio #### The Sharpe ratio is a measure of the risk-adjusted return of a financial portfolio. ``` sharpe = gold['strategy_returns'].mean()/gold['strategy_returns'].std()*(252**0.5) 'Sharpe Ratio %.2f' % (sharpe) ``` ### Predict the daily move from the generated signal ``` # import datetime and get today's date import datetime as dt current_date = dt.datetime.now() # Get the data data = yf.download('GLD', '2007-01-01', current_date, auto_adjust=True) data['S3'] = data['Close'].rolling(window=3).mean() data['S15'] = data['Close'].rolling(window=15).mean() data = data.dropna() # Forecast the price data['predicted_gold_price'] = linear.predict(data[['S3', 'S15']]) data['signal'] = np.where(data.predicted_gold_price.shift(1) < data.predicted_gold_price,"Buy","No Position") # Print the forecast data.tail(1)[['signal','predicted_gold_price']].T ``` ## Model Training: Random Forest Regressor ``` from sklearn.model_selection import train_test_split from sklearn.ensemble import RandomForestRegressor from sklearn import metrics regressor = RandomForestRegressor(n_estimators=100) regressor.fit(X_train,y_train) ``` ### Model Evaluation ``` test_data_prediction = regressor.predict(X_test) error_score = metrics.r2_score(y_test, test_data_prediction) print("R squared score : ", error_score) ``` ### Comparing the Actual Values and the Predicted Values ``` y_test = list(y_test) plt.plot(y_test, color='blue', label = 'Actual Value') plt.plot(test_data_prediction, color='green', label='Predicted Value') plt.title('Actual Price vs Predicted Price') plt.xlabel('Number of values') plt.ylabel('GLD Price') plt.legend() plt.show() from sklearn import model_selection # prepare configuration for cross validation test harness seed = 7 X1 = Df[['S3', 'S15']] # Define dependent variable Y = Df['next_day_price'] #
prepare models models = [] models.append(('LR', linear)) models.append(('RFR', regressor)) # evaluate each model in turn results = [] names = [] scoring = 'r2' # R^2 for regression models; 'accuracy' only applies to classifiers for name, model in models: kfold = model_selection.KFold(n_splits=10) cv_results = model_selection.cross_val_score(model, X1, Y, cv=kfold, scoring=scoring) results.append(cv_results) names.append(name) msg = "%s: %f (%f)" % (name, cv_results.mean(), cv_results.std()) print(msg) # boxplot algorithm comparison fig = plt.figure() fig.suptitle('Algorithm Comparison') ax = fig.add_subplot(111) plt.boxplot(results) ax.set_xticklabels(names) plt.show() ```
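A minimal self-contained sketch of the same k-fold harness on made-up regression data; `'r2'` is used as the scorer because `'accuracy'` is a classification metric and does not apply to regressors:

```python
# Minimal sketch of scoring a regressor with k-fold cross-validation.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import KFold, cross_val_score

rng = np.random.RandomState(7)
X = rng.rand(60, 2)
y = 2.0 * X[:, 0] - 1.0 * X[:, 1] + 0.01 * rng.randn(60)  # synthetic target

kfold = KFold(n_splits=5)
scores = cross_val_score(LinearRegression(), X, y, cv=kfold, scoring='r2')
print("LR: %f (%f)" % (scores.mean(), scores.std()))
```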
``` import sys sys.path.append('../code') from utils import plot_utils %load_ext autoreload %autoreload 2 %matplotlib inline import os import glob import torch import numpy as np import pickle import stg_node from model.dyn_stg import SpatioTemporalGraphCVAEModel from model.model_registrar import ModelRegistrar from utils.scene_utils import create_batch_scene_graph import timeit import matplotlib.pyplot as plt from scipy.integrate import cumtrapz from PIL import Image import imageio import random from collections import defaultdict ``` # Options ``` hyperparams = { ### Training ## Batch Sizes 'batch_size': 16, ## Learning Rate 'learning_rate': 0.001, 'min_learning_rate': 0.00001, 'learning_decay_rate': 0.9999, ## Optimizer # 'optimizer': tf.train.AdamOptimizer, 'optimizer_kwargs': {}, 'grad_clip': 1.0, ### Prediction 'minimum_history_length': 5, # 0.5 seconds 'prediction_horizon': 15, # 1.5 seconds (at least as far as the loss function is concerned) ### Variational Objective ## Objective Formulation 'alpha': 1, 'k': 3, # number of samples from z during training 'k_eval': 50, # number of samples from z during evaluation 'use_iwae': False, # only matters if alpha = 1 'kl_exact': True, # relevant only if alpha = 1 ## KL Annealing/Bounding 'kl_min': 0.07, 'kl_weight': 1.0, 'kl_weight_start': 0.0001, 'kl_decay_rate': 0.99995, 'kl_crossover': 8000, 'kl_sigmoid_divisor': 6, ### Network Parameters ## RNNs/Summarization 'rnn_kwargs': {"dropout_keep_prob": 0.75}, 'MLP_dropout_keep_prob': 0.9, 'rnn_io_dropout_keep_prob': 1.0, 'enc_rnn_dim_multiple_inputs': 8, 'enc_rnn_dim_edge': 8, 'enc_rnn_dim_edge_influence': 8, 'enc_rnn_dim_history': 32, 'enc_rnn_dim_future': 32, 'dec_rnn_dim': 128, 'dec_GMM_proj_MLP_dims': None, 'sample_model_during_dec': True, 'dec_sample_model_prob_start': 0.0, 'dec_sample_model_prob_final': 0.0, 'dec_sample_model_prob_crossover': 20000, 'dec_sample_model_prob_divisor': 6, ## q_z_xy (encoder) 'q_z_xy_MLP_dims': None, ## p_z_x (encoder) 'p_z_x_MLP_dims': 
16, ## p_y_xz (decoder) 'fuzz_factor': 0.05, 'GMM_components': 16, 'log_sigma_min': -10, 'log_sigma_max': 10, 'log_p_yt_xz_max': 50, ### Discrete Latent Variable 'N': 2, 'K': 5, ## Relaxed One-Hot Temperature Annealing 'tau_init': 2.0, 'tau_final': 0.001, 'tau_decay_rate': 0.9999, ## Logit Clipping 'use_z_logit_clipping': False, 'z_logit_clip_start': 0.05, 'z_logit_clip_final': 3.0, 'z_logit_clip_crossover': 8000, 'z_logit_clip_divisor': 6 } os.environ["CUDA_VISIBLE_DEVICES"] = "0" device = 'cpu' data_dir = './data' eval_data_dict_path = 'eval_data_dict_2_files_100_rows.pkl' model_dir = './logs/models_28_Jan_2019_15_35_05' robot_node = stg_node.STGNode('Al Horford', 'HomeC') hyperparams['dynamic_edges'] = 'yes' hyperparams['edge_addition_filter'] = [0.04, 0.06, 0.09, 0.12, 0.17, 0.25, 0.35, 0.5, 0.7, 1.0] hyperparams['edge_removal_filter'] = [1.0, 0.7, 0.5, 0.35, 0.25, 0.17, 0.12, 0.09, 0.06, 0.04] hyperparams['edge_state_combine_method'] = 'sum' hyperparams['edge_influence_combine_method'] = 'bi-rnn' hyperparams['edge_radius'] = 2.0 * 3.28084 if not torch.cuda.is_available() or device == 'cpu': device = torch.device('cpu') else: if torch.cuda.device_count() == 1: # If you have CUDA_VISIBLE_DEVICES set, which you should, # then this will prevent leftover flag arguments from # messing with the device allocation. device = 'cuda:0' device = torch.device(device) print(device) ``` # Visualization ``` with open(os.path.join(data_dir, eval_data_dict_path), 'rb') as f: eval_data_dict = pickle.load(f, encoding='latin1') model_registrar = ModelRegistrar(model_dir, device) model_registrar.load_models(699) model_registrar = model_registrar.cpu() # This keeps colors consistent across timesteps, rerun this cell if you want to reset the colours. 
color_dict = defaultdict(dict) plot_utils.plot_predictions(eval_data_dict, model_registrar, robot_node, hyperparams, device, dt=eval_data_dict['dt'], max_speed=40.76, color_dict=color_dict, data_id=0, t_predict=10, figsize=(10, 10), ylim=(0, 40), xlim=(0, 40), num_samples=400, radius_of_influence=hyperparams['edge_radius'], node_circle_size=0.45, circle_edge_width=1.0, line_alpha=0.9, line_width=0.2, edge_width=4, dpi=300, tick_fontsize=16, robot_circle=None, omit_names=False, legend_loc='best', title='', xlabel='Longitudinal Court Position (ft)', ylabel='Lateral Court Position (ft)' ) ```
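A note on the `color_dict` comment above: because it is a `defaultdict(dict)`, colour assignments persist across plotting calls until the object is re-created, which is why re-running that cell resets the colours. A small sketch (the `'pedestrian'`/`'node_1'` keys below are made up for illustration):

```python
# color_dict is a defaultdict(dict): indexing with a new key silently
# creates an empty per-category dict, so colours can be cached lazily.
# Re-creating the defaultdict (re-running the cell) clears the cache.
from collections import defaultdict

color_dict = defaultdict(dict)
color_dict['pedestrian']['node_1'] = 'red'   # colour assigned on first use
print(color_dict['pedestrian'])

color_dict = defaultdict(dict)               # "rerun the cell": cache reset
print('node_1' in color_dict['pedestrian'])  # the cached colour is gone
```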
``` ## Here is a data wrangling project I would work on. Kindly follow the link for background info on the project ## https://data.world/exercises/data-wrangling-exercise-1 ## The main goal here is to narrow down a list of schools nationwide based on a set of criteria. ## I will start off by working on the ColScorecard.csv file. ## First, I will take a look at the dictionary CollegeScorecardDataDictionary-09-12-2015.csv import numpy as np import pandas as pd CSDictionary = pd.read_csv('CollegeScorecardDataDictionary-09-12-2015.csv') list(CSDictionary) CSDictionary.head() CSDictionary #Here is a list of columns that are of interest to me Interest = ['OPEID','opeid6','CURROPER','INSTNM','CITY','ZIP','STABBR','HIGHDEG','DISTANCEONLY','CIP','st_fips','LOCALE','CIP11CERT4','CIP11BACHL','INSTURL','main'] # I am going to narrow down the College Scorecard dictionary to only the columns that tells me what I want to know CSDictSummary = pd.DataFrame(CSDictionary, columns = ['NAME OF DATA ELEMENT','VARIABLE NAME','VALUE','LABEL','SCORECARD? Y/N']) CSDictSummary.head() #This shows the description of the columns I selected CSDictSummary[CSDictSummary['VARIABLE NAME'].isin(Interest)] #Now that I know the Variables of interest I am going to read the file ColScorecard.csv and zoom in to the variables ColScorecard = pd.read_csv('CollegeScorecard.csv') ColScorecard.shape ColScorecard = pd.DataFrame(ColScorecard, columns=Interest) ColScorecard.head() ColScorecard.shape #There's currently about 7803 possible different school campuses on this table. #Not all schools offer a 2 to 4 year Information Science or Technology program #columns 'CIP11CERT4' and 'CIP11BACHL' will help us narrow down. 0- indicates not offered, 1-indicates offered # and 2-indicates the program is offered in an exclusive Distance Learning program # only schools with 1 will satisfy the criteria. 
ColScorecard = ColScorecard[(ColScorecard['CIP11CERT4']==1) | (ColScorecard['CIP11BACHL']==1)] ColScorecard.shape #Now I have narrowed down the list to 1305 schools, still some way to go. #I noticed there is a column that says distance only, I wouldn't expect any of the remaining universities to be DistanceOnly ColScorecard['DISTANCEONLY'].unique() # :) None of them is distance only so I will drop this column ColScorecard = ColScorecard.drop(columns=['DISTANCEONLY']) #Here's how the table looks now ColScorecard.head() #Not all of these schools are currently operating. We can filter that out through the column 'CURROPER'. ColScorecard = ColScorecard[ColScorecard['CURROPER'] == 1] ColScorecard.shape #I will come back to this later. #Now I'm going to work on the crime data file Crime = pd.read_csv('Crime_2015.csv') Crime.head() for i in list(Crime): print(Crime[i].describe()) #I noticed some of the numeric columns (e.g. Theft, Burglary) are stored as a different type. #I will convert them to float64 str_float = ['Theft','Burglary','PropertyCrime','ViolentCrime'] for i in str_float: Crime[i] = pd.to_numeric(Crime[i],errors = 'coerce') Crime['Theft'].describe() Crime = Crime.dropna(how='all') #The previous call only drops rows in which every column is null. #What I really want to do is drop rows for cities with no reported crime values. 
Crime = Crime.dropna(subset=['Theft','Burglary','PropertyCrime','ViolentCrime','MotorVehicleTheft','AggravatedAssault','Robbery','Rape','Murder'],how='all') Crime.shape Crime['Theft'].isnull().sum() Crime['Burglary'].isnull().sum() subset=['Theft','Burglary','PropertyCrime','ViolentCrime','MotorVehicleTheft','AggravatedAssault','Robbery','Rape','Murder'] for i in subset: print(i) print(Crime[i].isnull().sum()) print(' ') #The Theft and PropertyCrime columns have a lot of null values, I will drop them. Crime = Crime.drop(['Theft','PropertyCrime'], 1) Crime.head() #I will go ahead and divide the values in each column by the column mean. #This will help while I carry out further operations on the data later subset=['Burglary','ViolentCrime','MotorVehicleTheft','AggravatedAssault','Robbery','Rape','Murder'] for i in subset: Crime[i] = Crime[i] / Crime[i].mean() Crime.head() #I will replace all null values in each column by the column mean, this will help minimize any distortion in my analysis. for i in subset: Crime[i].fillna(Crime[i].mean(), inplace=True) Crime.head() #I'm going to verify that there are no null values. for i in subset: print(i) print(Crime[i].isnull().sum()) print(' ') #I will aggregate the crime values, creating a new column. 
#The ratios will be 0.2 for ViolentCrime, Murder, Rape and 0.1 for Burglary, MotorVehicleTheft, AggravatedAssault, Robbery Crime['Aggregate'] = 0.2*Crime['ViolentCrime']+0.2*Crime['Murder']+0.2*Crime['Rape']+0.1*Crime['Burglary']+0.1*Crime['MotorVehicleTheft']+0.1*Crime['AggravatedAssault']+0.1*Crime['Robbery'] Crime['Aggregate'][:5] #I would prefer to sort the cities list according to the values in the Aggregate column Crime = Crime.sort_values(by='Aggregate', ascending=False) Crime.head() #I will now drop all other columns except City, State and Aggregate Crime_New = pd.DataFrame(Crime, columns=['City','State','Aggregate']) #I will drop all cities with a crime aggregate at or above the 50th percentile Crime_New = Crime_New[Crime_New['Aggregate'] < Crime_New['Aggregate'].quantile(0.5)] Crime_New.shape Crime_New.head() ColScorecard['HIGHDEG'].unique() #Going back to the ColScorecard #There are a few of them that are not currently operating and some of them are non-degree-granting institutions. #I will proceed to drop them ColScorecard = ColScorecard[ColScorecard['CURROPER'] ==1] ColScorecard = ColScorecard[ColScorecard['HIGHDEG'] != 0] ColScorecard.head() #There are a few columns in ColScorecard that are no longer needed, I will drop them now ColScorecard = ColScorecard.drop(['opeid6','CURROPER','STABBR','CIP','st_fips','CIP11CERT4','CIP11BACHL'], 1) ColScorecard.head() #Part of the criteria was that the school should be in a big city, the different locales are shown below ColScorecard['LOCALE'].unique() #The Locale codes for a large or mid-size city are 12 and 13 ColScorecard = ColScorecard[(ColScorecard['LOCALE'] == 12) | (ColScorecard['LOCALE'] == 13)] ColScorecard.shape #I will proceed to merge ColScorecard with Crime_New, this will also drop all cities that are not on the Crime_New list CS = ColScorecard.merge(Crime_New, left_on='CITY', right_on='City', how='right') CS.shape CS.head() CS.nunique() #Part of the criteria also says to work with schools that are not in cold places. 
#Here is a file of city mean temperatures that I prepared earlier MeanTemperature = pd.read_csv('MeanTemp.csv') MeanTemperature.head() list(MeanTemperature) MeanTemperature = pd.DataFrame(MeanTemperature, columns=['ZIP','CITY',' ANN']) MeanTemperature.head() MeanTemperature[MeanTemperature['CITY'] == 'Montgomery'] #I will now merge the two DataFrames. MeanTemperature = MeanTemperature.drop(['ZIP'],1) FinalList = CS.merge(MeanTemperature, on='CITY', how='inner') CS.head() FinalList.head() FinalList.shape FinalList.tail() FinalList #I see we have duplicate city columns so I will drop one along with some other unnecessary ones FinalCollegeList = FinalList.drop(['OPEID','CITY','ZIP','HIGHDEG','LOCALE','main'], 1) FinalCollegeList.head() FinalCollegeList.tail() #I have succeeded in narrowing down a nationwide list of universities to these 73 based on the set of criteria given. #Thank You ```
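The 50th-percentile cut applied to the crime aggregate above can be verified on toy numbers:

```python
# Toy check of the percentile cut used on the crime aggregate above:
# quantile(0.5) returns the median, and the filter keeps only rows
# strictly below it.
import pandas as pd

crime = pd.DataFrame({'City': ['A', 'B', 'C', 'D', 'E'],
                      'Aggregate': [1.0, 2.0, 3.0, 4.0, 5.0]})
median = crime['Aggregate'].quantile(0.5)      # 3.0 for this column
kept = crime[crime['Aggregate'] < median]      # cities A and B survive
print(kept['City'].tolist())
```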
``` dataset=0 # [Fashion MNIST, CIFAR10] import torch from torchvision import transforms, datasets import numpy as np import matplotlib.pyplot as plt print('Done') # get x and y axis to quantify HP/LP structure def get_axes(size_im): f_axis_0=np.arange(size_im) f_axis_0[f_axis_0>np.floor(size_im/2)]=np.flip(np.arange(np.ceil(size_im/2)-1)+1) f_axis_0=np.fft.fftshift(f_axis_0) f_axis_1=np.arange(size_im) f_axis_1[f_axis_1>np.floor(size_im/2)]=np.flip(np.arange(np.ceil(size_im/2)-1)+1) f_axis_1=np.fft.fftshift(f_axis_1) Y,X=np.meshgrid(f_axis_0/size_im,f_axis_1/size_im) return Y,X # Define dataloader for training data_transform=transforms.Compose([transforms.ToTensor()]) if dataset==0: FMNIST_dataset=datasets.FashionMNIST(root='.', train=True,\ transform=data_transform, download=True) dataset_loader=torch.utils.data.DataLoader(FMNIST_dataset) elif dataset==1: CIFAR10_dataset=datasets.CIFAR10(root='.', train=True,\ transform=data_transform, download=True) dataset_loader=torch.utils.data.DataLoader(CIFAR10_dataset) print('Defined data loader...') n_classes=10 if dataset==0: size_im=28 elif dataset==1: size_im=32 h=np.hamming(size_im) h_win=np.outer(h,h) Y,X=get_axes(size_im) euc_dist=np.sqrt(X**2+Y**2) mean_fft=np.zeros((size_im,size_im,n_classes)) n_samples=np.zeros(n_classes) all_mean_f_response=np.zeros(len(dataset_loader)) class_mean_f_response=np.zeros((int(len(dataset_loader)/n_classes),n_classes)) print('Started loop') image_processed=0 for X, y in dataset_loader: if dataset==0: X=np.squeeze(X.numpy()) elif dataset==1: X=np.squeeze(np.mean(X.numpy(),axis=1)) # GAUSSIAN BLUR X # Get fft of input image X=X*h_win fft_sample=np.abs(np.fft.fft2(X))**2 mean_fft[:,:,y]+=fft_sample # unnormalized fft_sample=np.fft.fftshift(fft_sample/np.sum(fft_sample)) # normalized # Get HP/LP histogram mean_f_response=np.mean(euc_dist*fft_sample) all_mean_f_response[image_processed]=mean_f_response class_mean_f_response[int(n_samples[y]),y]=mean_f_response n_samples[y]+=1 
image_processed+=1 if image_processed%10000==0: print('Done with '+str(image_processed)+' images\n') hist,bins=np.histogram(all_mean_f_response,bins=100) if dataset==0: classes=['T-shirt/top','Trouser','Pullover','Dress','Coat','Sandal','Shirt','Sneaker','Bag','Ankle boot'] elif dataset==1: classes=['airplane','automobile','bird','cat','deer','dog','frog','horse','ship','truck'] print(n_samples) plt.figure(1) for i in range(n_classes): plt.imshow(np.log10(np.fft.fftshift(mean_fft[:,:,i]/n_samples[i]))) plt.title(classes[i]) plt.colorbar() plt.pause(0.1) plt.figure(2) plt.plot(bins[0:-1],hist) plt.xlim((0,0.00034)) plt.figure(3) for i in range(n_classes): class_hist,class_bins=np.histogram(class_mean_f_response[:,i],bins=100) plt.plot(class_bins[0:-1],class_hist,label=classes[i]) plt.xlim((0,0.00034)) plt.legend() ```
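The per-image statistic computed in the loop above is a power-weighted mean radial frequency: normalise the power spectrum to sum to one, weight each bin by its radial frequency, and average. A compact variant (using `np.fft.fftfreq` instead of the notebook's hand-built axes and no Hamming window, so the exact numbers differ) shows the expected behaviour: a constant image has essentially all its power at DC, while a high-frequency checkerboard scores much higher.

```python
# Power-weighted mean radial frequency of an image, as a sanity check
# on the HP/LP statistic computed in the loop above.
import numpy as np

def mean_radial_freq(img):
    n = img.shape[0]
    f = np.fft.fftshift(np.fft.fftfreq(n))      # per-axis cycles/pixel
    Y, X = np.meshgrid(f, f)
    radius = np.sqrt(X**2 + Y**2)               # radial-frequency map
    p = np.abs(np.fft.fft2(img))**2
    p = np.fft.fftshift(p / p.sum())            # normalised power spectrum
    return np.mean(radius * p)

flat = np.ones((28, 28))                        # all power at DC
checker = np.indices((28, 28)).sum(axis=0) % 2  # alternating 0/1 pattern
print(mean_radial_freq(flat), mean_radial_freq(checker))
```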
# <center> Pandas, part 2 </center> ### By the end of this talk, you will be able to - modify/clean columns - evaluate the runtime of your scripts - merge and append data frames ### <font color='LIGHTGRAY'>By the end of this talk, you will be able to</font> - **modify/clean columns** - **evaluate the runtime of your scripts** - <font color='LIGHTGRAY'>merge and append data frames</font> ## What is data cleaning? - the data you scrape from sites are often not in a format you can directly work with - the temp column in weather.csv contains the temperature value but also other strings - the year column in imdb.csv contains the year the movie came out but also - or () and roman numbers - data cleaning means that you bring the data in a format that's easier to analyze/visualize ## How is it done? - you read in your data into a pandas data frame and either - modify the values of a column - create a new column that contains the modified values - either works, I'll show you how to do both - ADVICE: when you save the modified data frame, it is usually good practice to not overwrite the original csv or excel file that you scraped. 
- save the modified data into a new file instead ## Ways to modify a column - there are several ways to do this - with a for loop - with a list comprehension - using the .apply method - we will compare run times - investigate which approach is faster - important when you work with large datasets (>1,000,000 lines) ## The task: runtime column in imdb.csv - the column is string in the format 'n min' where n is the length of the movie in minutes - for plotting purposes, it is better if the runtime is not a string but a number (float or int) - you can't create a histogram of runtime using strings - task: clean the runtime column and convert it to float ## Approach 1: for loop ``` # read in the data import pandas as pd df_imdb = pd.read_csv('data/imdb.csv') print(df_imdb.head()) import time start = time.time() # start the clock for i in range(100): # repeat everything 100 times to get better estimate of elapsed time # the actual code to clean the runtime column comes here runtime_lst = [] for x in df_imdb['runtime']: if type(x) == str: runtime = float(x[:-4].replace(',','')) else: runtime = 0e0 runtime_lst.append(runtime) df_imdb['runtime min'] = runtime_lst end = time.time() # stop the timer print('cpu time = ',end-start,'sec') ``` ## Approach 2: list comprehension ``` start = time.time() # start the clock for i in range(100): # repeat everything 100 times to get better estimate of elapsed time # the actual code to clean the runtime column comes here df_imdb['runtime min'] = [float(x[:-4].replace(',','')) if type(x) == str else 0e0 for x in df_imdb['runtime']] end = time.time() # stop the timer print('cpu time = ',end-start,'sec') ``` ## Approach 3: the .apply method ``` def clean_runtime(x): if type(x) == str: runtime = float(x[:-4].replace(',','')) else: runtime = 0e0 return runtime start = time.time() # start the clock for i in range(100): # repeat everything 100 times to get better estimate of elapsed time # the actual code to clean the runtime column comes here 
df_imdb['runtime min'] = df_imdb['runtime'].apply(clean_runtime) end = time.time() # stop the timer print('cpu time = ',end-start,'sec') ``` ## Summary - the for loop is slower - the list comprehension and the apply method are equally quick - it is down to personal preference to choose between list comprehension and .apply - **the same ranking is not guaranteed for a different task!** - **always try a few different approaches if runtime is an issue (when you work with large data)!** ## Exercise 1 Clean the `temp` column in the `data/weather.csv` file. The new temperature column should be an integer or a float. Work through at least one of the approaches we discussed. If you have time, work through all three methods. If you have even more time, look for other approaches to clean a column and time it using the `runtime` column of the imdb.csv. Try to beat my cpu time and find an even faster approach! :) ### <font color='LIGHTGRAY'>By the end of this talk, you will be able to</font> - <font color='LIGHTGRAY'>modify/clean columns</font> - <font color='LIGHTGRAY'>evaluate the runtime of your scripts</font> - **merge and append data frames** ### How to merge dataframes? 
Merge - data are distributed in multiple files ``` # We have two datasets from two hospitals hospital1 = {'ID':['ID1','ID2','ID3','ID4','ID5','ID6','ID7'],'col1':[5,8,2,6,0,2,5],'col2':['y','j','w','b','a','b','t']} df1 = pd.DataFrame(data=hospital1) print(df1) hospital2 = {'ID':['ID2','ID5','ID6','ID10','ID11'],'col3':[12,76,34,98,65],'col2':['q','u','e','l','p']} df2 = pd.DataFrame(data=hospital2) print(df2) # we are interested in only patients from hospital1 #df_left = df1.merge(df2,how='left',on='ID') # IDs from the left dataframe (df1) are kept #print(df_left) # we are interested in only patients from hospital2 #df_right = df1.merge(df2,how='right',on='ID') # IDs from the right dataframe (df2) are kept #print(df_right) # we are interested in patients who were in both hospitals #df_inner = df1.merge(df2,how='inner',on='ID') # merging on IDs present in both dataframes #print(df_inner) # we are interested in all patients who visited at least one of the hospitals #df_outer = df1.merge(df2,how='outer',on='ID') # merging on IDs present in any dataframe #print(df_outer) ``` ### How to append dataframes? Append - new data comes in over a period of time. E.g., one file per month/quarter/fiscal year etc. You want to combine these files into one data frame. ``` #df_append = df1.append(df2) # note that rows with ID2, ID5, and ID6 are duplicated! Indices are duplicated too. #print(df_append) df_append = df1.append(df2,ignore_index=True) # note that rows with ID2, ID5, and ID6 are duplicated! #print(df_append) d3 = {'ID':['ID23','ID94','ID56','ID17'],'col1':['rt','h','st','ne'],'col2':[23,86,23,78]} df3 = pd.DataFrame(data=d3) #print(df3) df_append = df1.append([df2,df3],ignore_index=True) # multiple dataframes can be appended to df1 print(df_append) ``` ### Exercise 2 - Create three data frames from raw_data_1, 2, and 3. - Append the first two data frames and assign the result to df_append. 
- Merge the third data frame with df_append such that only subject_ids from df_append are present. - Assign the new data frame to df_merge. - How many rows and columns do we have in df_merge? ``` raw_data_1 = { 'subject_id': ['1', '2', '3', '4', '5'], 'first_name': ['Alex', 'Amy', 'Allen', 'Alice', 'Ayoung'], 'last_name': ['Anderson', 'Ackerman', 'Ali', 'Aoni', 'Atiches']} raw_data_2 = { 'subject_id': ['6', '7', '8', '9', '10'], 'first_name': ['Billy', 'Brian', 'Bran', 'Bryce', 'Betty'], 'last_name': ['Bonder', 'Black', 'Balwner', 'Brice', 'Btisan']} raw_data_3 = { 'subject_id': ['1', '2', '3', '4', '5', '7', '8', '9', '10', '11'], 'test_id': [51, 15, 15, 61, 16, 14, 15, 1, 61, 16]} ``` ### Always check that the resulting dataframe is what you wanted to end up with! - small toy datasets are ideal to test your code. ### If you need to do a more complicated dataframe operation, check out pd.concat()!
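As a quick sketch of the `pd.concat()` function mentioned above: with `axis=0` (the default) it stacks frames vertically like `append`, and with `axis=1` it aligns them side by side on the index, much like a merge on the index. The toy frames below echo the hospital example.

```python
# pd.concat stacks frames vertically (axis=0) or aligns them on the
# index side by side (axis=1); missing cells are filled with NaN.
import pandas as pd

df_a = pd.DataFrame({'ID': ['ID1', 'ID2'], 'col1': [5, 8]})
df_b = pd.DataFrame({'ID': ['ID2', 'ID5'], 'col3': [12, 76]})

stacked = pd.concat([df_a, df_b], ignore_index=True)  # 4 rows, 3 columns
side = pd.concat([df_a.set_index('ID'),
                  df_b.set_index('ID')], axis=1)      # aligned on the ID index
print(stacked.shape, side.shape)
```

Here `side` has one row per unique ID (ID1, ID2, ID5); only ID2 appears in both inputs, so only its row has both `col1` and `col3` filled.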
<a href="https://colab.research.google.com/github/VictoriaDraganova/Attention-Based-Siamese-Text-CNN-for-Stance-Detection/blob/master/CW.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> ``` from google.colab import drive drive.mount('/content/gdrive') # Execute this code block to install dependencies when running on colab try: import torch except: from os.path import exists from wheel.pep425tags import get_abbr_impl, get_impl_ver, get_abi_tag platform = '{}{}-{}'.format(get_abbr_impl(), get_impl_ver(), get_abi_tag()) cuda_output = !ldconfig -p|grep cudart.so|sed -e 's/.*\.\([0-9]*\)\.\([0-9]*\)$/cu\1\2/' accelerator = cuda_output[0] if exists('/dev/nvidia0') else 'cpu' !pip install -q http://download.pytorch.org/whl/{accelerator}/torch-1.0.0-{platform}-linux_x86_64.whl torchvision try: import torchbearer except: !pip install torchbearer # automatically reload external modules if they change %load_ext autoreload %autoreload 2 import torch import torch.nn.functional as F import torchvision.transforms as transforms import torchbearer import tqdm.notebook as tq from torch import nn from torch import optim from torch.utils.data import DataLoader from torchvision.datasets import MNIST from torchbearer import Trial import numpy as np import copy from torch.utils.data import Dataset from torch.autograd import Variable import torch.nn.functional as F from torchvision.utils import save_image import os import statistics as st import os.path from os import path import math n_classes = 0 device = "cuda:0" if torch.cuda.is_available() else "cpu" channel = 1 im_size = [] trainset = [] testset = [] trainset_copy = [] images_all = [] labels_all = [] indices_class = [] class Synthetic(Dataset): def __init__(self, data, targets): self.data = data.detach().float() self.targets = targets.detach() def __getitem__(self, index): return self.data[index], self.targets[index] def __len__(self): return self.data.shape[0] def 
sample_batch(data): batches = DataLoader(data, batch_size=256, shuffle=True) for data, target in (batches): data, target = data.to(device), target.to(device) return data, target def updateNetwork(optimizer, steps, loss_function, net, syn_data_data, syn_data_target): for s in range(steps): net.train() prediction_syn = net(syn_data_data) loss_syn = loss_function(prediction_syn, syn_data_target) optimizer.zero_grad() loss_syn.backward() optimizer.step() #based on author's published code def distance(grad1, grad2): dist = torch.tensor(0.0).to(device) for gr, gs in zip(grad1, grad2): shape=gr.shape if len(shape) == 4: gr = gr.reshape(shape[0], shape[1] * shape[2] * shape[3]) gs = gs.reshape(shape[0], shape[1] * shape[2] * shape[3]) elif len(shape) == 3: gr = gr.reshape(shape[0], shape[1] * shape[2]) gs = gs.reshape(shape[0], shape[1] * shape[2]) elif len(shape) == 2: tmp = 'do nothing' elif len(shape) == 1: gr = gr.reshape(1, shape[0]) gs = gs.reshape(1, shape[0]) continue dis_weight = torch.sum(1 - torch.sum(gr * gs, dim=-1) / (torch.norm(gr, dim=-1) * torch.norm(gs, dim=-1)+ 0.000001)) dist+=dis_weight return dist #from author's published code def get_images(c, n): # get random n images from class c idx_shuffle = np.random.permutation(indices_class[c])[:n] return images_all[idx_shuffle] #create synthetic data def train_synthetic(model, dataset, images_per_class, iterations, network_steps): synthetic_datas = [] T = images_per_class K = iterations for i in range(1): #to generate 1 synthetic datasets #create synthetic data data_syn = torch.randn(size=(n_classes*T, channel, im_size[0], im_size[1]), dtype=torch.float, requires_grad=True, device=device) targets_syn = torch.tensor([np.ones(T)*i for i in range(n_classes)], dtype=torch.long, requires_grad=False, device=device).view(-1) #optimizer for image optimizer_img = torch.optim.SGD([data_syn, ], lr=0.1) # optimizer_img for synthetic data; only update synthetic image, labels don't change optimizer_img.zero_grad() 
loss_function = nn.CrossEntropyLoss().to(device) #training synthetic data for k in tq.tqdm(range(K)): net = new_network(model).to(device) net.train() net_parameters = list(net.parameters()) optimizer_net = torch.optim.SGD(net.parameters(), lr=0.01) # optimizer_net for network optimizer_net.zero_grad() loss_avg = 0 for t in range(T): loss = torch.tensor(0.0).to(device) for c in range(n_classes): img_real = get_images(c, 256) targets_real = torch.ones((img_real.shape[0],), device=device, dtype=torch.long) * c prediction_real = net(img_real) # makes prediction loss_real = loss_function(prediction_real, targets_real) # computes the cross entropy loss gw_real = torch.autograd.grad(loss_real, net_parameters) # returns the sum of the gradients of the loss wrt the network parameters data_synth = data_syn[c*T:(c+1)*T].reshape((T, channel, im_size[0], im_size[1])) targets_synth = torch.ones((T,), device=device, dtype=torch.long) * c prediction_syn = net(data_synth) loss_syn = loss_function(prediction_syn, targets_synth) gw_syn = torch.autograd.grad(loss_syn, net_parameters, create_graph=True) dist = distance(gw_syn, gw_real) loss+=dist optimizer_img.zero_grad() loss.backward() optimizer_img.step() loss_avg += loss.item() if t == T - 1: break updateNetwork(optimizer_net, network_steps, loss_function, net, data_syn, targets_syn) loss_avg /= (n_classes*T) if k%10 == 0: print('iter = %.4f, loss = %.4f' % (k, loss_avg)) # model_save_name = 'data_syn.pt' # path = F"/content/gdrive/MyDrive/{model_save_name}" #to save synthetic data # torch.save(data_syn, path) synthetic_datas.append(data_syn) print('Synthetic %d created ' % (i)) return synthetic_datas #evaluation of synthetic data produced def evaluation(model, all_synthetic_data, images_per_class): accuracies = [] targets_syn = torch.tensor([np.ones(images_per_class)*i for i in range(n_classes)], dtype=torch.long, requires_grad=False, device=device).view(-1) for data in all_synthetic_data: loss_function = 
nn.CrossEntropyLoss().to(device) for it in range(20): #number of random models for evaluation print(it) net = new_network(model).to(device) net.train() net_parameters = list(net.parameters()) optimizer_train = torch.optim.SGD(net.parameters(), lr=0.01) optimizer_train.zero_grad() trial = Trial(net,optimizer=optimizer_train, criterion=loss_function, metrics=['loss', 'accuracy'], verbose=0).to(device) syn_data_whole = Synthetic(data, targets_syn) train_loader = DataLoader(syn_data_whole, batch_size=256, shuffle=True) test_loader = DataLoader(testset, batch_size=256, shuffle=False) trial.with_generators(train_loader, test_generator=test_loader) trial.run(epochs=300) results = trial.evaluate(data_key=torchbearer.TEST_DATA) print() print(results) accuracies.append(results['test_acc']) average_acc = sum(accuracies)/len(accuracies) std_acc = st.pstdev(accuracies) print("Model is: ", model) print("Standard deviation is : " , std_acc) print("Average is : " ,average_acc) def createData(dataset): global im_size global trainset global testset global trainset_copy global n_classes global channel global images_all global labels_all global indices_class if dataset == "MNIST": !wget https://artist-cloud.ecs.soton.ac.uk/s/sFkQ7HYOekDoDEG/download !unzip download !mv mnist MNIST from torchvision.datasets import MNIST mean = [0.1307] std = [0.3015] transform = transforms.Compose([ transforms.ToTensor(), transforms.Normalize(mean=mean, std=std) ]) trainset = MNIST(".", train=True, download=True, transform=transform) testset = MNIST(".", train=False, download=True, transform=transform) trainset_copy = MNIST(".", train=True, download=True, transform=transform) n_classes = 10 channel = 1 im_size = [28,28] elif dataset == "FashionMNIST": from torchvision.datasets import FashionMNIST mean = [0.2860] std = [0.3205] transform = transforms.Compose([ transforms.ToTensor(), transforms.Normalize(mean=mean, std=std) ]) trainset = FashionMNIST(".", train=True, download=True, transform=transform) 
testset = FashionMNIST(".", train=False, download=True, transform=transform) trainset_copy = FashionMNIST(".", train=True, download=True, transform=transform) n_classes = 10 channel = 1 im_size = [28,28] elif dataset == "SVHN": from torchvision.datasets import SVHN mean = [0.4377, 0.4438, 0.4728] std = [0.1201, 0.1231, 0.1052] transform = transforms.Compose([ transforms.ToTensor(), transforms.Normalize(mean=mean, std=std) ]) trainset = SVHN(".", split='train', transform=transform, download=True) testset = SVHN(".", split='test', transform=transform, download=True) trainset_copy = SVHN(".", split='test', transform=transform, download=True) n_classes = 10 channel = 3 im_size = [32,32] elif dataset == "CIFAR10": from torchvision.datasets import CIFAR10 mean = [0.4914, 0.4822, 0.4465] std = [0.2023, 0.1994, 0.2010] transform = transforms.Compose([ transforms.ToTensor(), transforms.Normalize(mean=mean, std=std) ]) trainset = CIFAR10(".", train=True, download=True, transform=transform) testset = CIFAR10(".", train=False, download=True, transform=transform) trainset_copy = CIFAR10(".", train=True, download=True, transform=transform) n_classes = 10 channel = 3 im_size = [32,32] #from author's published code indices_class = [[] for c in range(n_classes)] images_all = [torch.unsqueeze(trainset[i][0], dim=0) for i in range(len(trainset))] labels_all = [trainset[i][1] for i in range(len(trainset))] for i, lab in enumerate(labels_all): indices_class[lab].append(i) images_all = torch.cat(images_all, dim=0).to(device) labels_all = torch.tensor(labels_all, dtype=torch.long, device=device) ``` **Networks** ``` #to calculate image output size def calculate(size, kernel, stride, padding): return int(((size+(2*padding)-kernel)/stride) + 1) #based on https://cs231n.github.io/convolutional-networks/ class CNN(torch.nn.Module): def __init__(self): super(CNN, self).__init__() outsize = im_size[0] self.conv1 = nn.Conv2d(in_channels=channel, out_channels=128, kernel_size=3, padding=1) 
#32*32 outsize = calculate(outsize,3,1,1) self.norm1 = nn.GroupNorm(128, 128) self.avg_pooling1 = nn.AvgPool2d(kernel_size=2, stride=2) # (n+2p-f)/s+1 => 32+0-2/2 + 1 =16 outsize = calculate(outsize,2,2,0) self.conv2 = nn.Conv2d(in_channels=128, out_channels=128, kernel_size=3, padding=1) #out = (n+2p-f)/s+1 => 16+2-3/1 + 1 => 16 outsize = calculate(outsize,3,1,1) self.norm2 = nn.GroupNorm(128, 128) self.avg_pooling2 = nn.AvgPool2d(kernel_size=2, stride=2) #out = (n+2p-f)/s+1 => 16+0-2/2 +1 => 8 outsize = calculate(outsize,2,2,0) self.conv3 = nn.Conv2d(in_channels=128, out_channels=128, kernel_size=3, padding=1) #out = (n+2p-f)/s+1 => 8+2-3/1 + 1 => 8 outsize = calculate(outsize,3,1,1) self.norm3 = nn.GroupNorm(128, 128) self.avg_pooling3 = nn.AvgPool2d(kernel_size=2, stride=2) #out = (n+2p-f)/s+1 => 8+0-2/2 +1 => 4 outsize = calculate(outsize,2,2,0) self.classifier = nn.Linear(outsize*outsize*128, 10) def forward(self, x): out = self.conv1(x) out = self.norm1(out) out = F.relu(out) out = self.avg_pooling1(out) out = self.conv2(out) out = self.norm2(out) out = F.relu(out) out = self.avg_pooling2(out) out = self.conv3(out) out = self.norm3(out) out = F.relu(out) out = self.avg_pooling3(out) out = out.view(out.size(0), -1) out = self.classifier(out) return out class MLP(nn.Module): def __init__(self): super(MLP, self).__init__() self.fc1 = nn.Linear(im_size[0]*im_size[1]*channel, 128) self.fc2 = nn.Linear(128, 128) self.fc3 = nn.Linear(128, n_classes) def forward(self, x): out = x.view(x.size(0), -1) out = F.relu(self.fc1(out)) out = F.relu(self.fc2(out)) out = self.fc3(out) return out #based on https://en.wikipedia.org/wiki/LeNet class LeNet(nn.Module): def __init__(self): super(LeNet, self).__init__() outsize = 28 self.conv1 = nn.Conv2d(channel, 6, kernel_size=5) outsize = calculate(outsize, 5, 1,0) self.avg1 = nn.AvgPool2d(kernel_size=2, stride=2) outsize = calculate(outsize, 2, 2,0) self.conv2 = nn.Conv2d(6,16,kernel_size=5) outsize = calculate(outsize, 5, 1,0) 
self.avg2 = nn.AvgPool2d(kernel_size=2, stride=2) outsize = calculate(outsize, 2, 2,0) self.fc1 = nn.Linear(outsize*outsize*16, 120) self.fc2 = nn.Linear(120, 84) self.fc3 = nn.Linear(84, n_classes) def forward(self, x): out = self.conv1(x) out = F.sigmoid(out) out = self.avg1(out) out = self.conv2(out) out = F.sigmoid(out) out = self.avg2(out) out = out.view(out.size(0), -1) out = F.sigmoid(self.fc1(out)) out = F.sigmoid(self.fc2(out)) out = self.fc3(out) return out trans = transforms.Resize((227,227)) #based on https://www.analyticsvidhya.com/blog/2021/03/introduction-to-the-architecture-of-alexnet/ class AlexNet(torch.nn.Module): def __init__(self): super(AlexNet, self).__init__() outsize = 227 self.conv1 = nn.Conv2d(in_channels=channel, out_channels=96, kernel_size=11, padding=0, stride=4) #32*32 outsize = calculate(outsize,11,4,0) self.max_pooling1 = nn.MaxPool2d(kernel_size=3, stride=2) # (n+2p-f)/s+1 => 32+0-2/2 + 1 =16 outsize = calculate(outsize,3,2,0) self.conv2 = nn.Conv2d(in_channels=96, out_channels=256, kernel_size=5, padding=2, stride=1) #32*32 outsize = calculate(outsize,5,1,2) self.max_pooling2 = nn.MaxPool2d(kernel_size=3, stride=2) # (n+2p-f)/s+1 => 32+0-2/2 + 1 =16 outsize = calculate(outsize,3,2,0) self.conv3 = nn.Conv2d(in_channels=256, out_channels=384, kernel_size=3, padding=1, stride=1) #32*32 outsize = calculate(outsize,3,1,1) self.conv4 = nn.Conv2d(in_channels=384, out_channels=384, kernel_size=3, padding=1, stride=1) #32*32 outsize = calculate(outsize,3,1,1) self.conv5 = nn.Conv2d(in_channels=384, out_channels=256, kernel_size=3, padding=1, stride=1) #32*32 outsize = calculate(outsize,3,1,1) self.max_pooling3 = nn.MaxPool2d(kernel_size=3, stride=2) # (n+2p-f)/s+1 => 32+0-2/2 + 1 =16 outsize = calculate(outsize,3,2,0) self.dropout1 = nn.Dropout(p=0.5) self.fc1 = nn.Linear(outsize*outsize*256, 4096) self.dropout2 = nn.Dropout(p=0.5) self.fc2 = nn.Linear(4096, 4096) self.fc3 = nn.Linear(4096, 10) def forward(self, x): x = trans(x) out = 
self.conv1(x) out = F.relu(out) out = self.max_pooling1(out) out = self.conv2(out) out = F.relu(out) out = self.max_pooling2(out) out = self.conv3(out) out = F.relu(out) out = self.conv4(out) out = F.relu(out) out = self.conv5(out) out = F.relu(out) out = self.max_pooling3(out) out = self.dropout1(out) out = out.view(out.size(0), -1) out = self.fc1(out) out = F.relu(out) out = self.dropout2(out) out = self.fc2(out) out = F.relu(out) out = self.fc3(out) out = F.softmax(out) return out #Author's published implementation of LeNet class LeNetTheirs(nn.Module): def __init__(self, channel, num_classes): super(LeNetTheirs, self).__init__() self.features = nn.Sequential( nn.Conv2d(channel, 6, kernel_size=5, padding=2 if channel==1 else 0), nn.ReLU(inplace=True), nn.MaxPool2d(kernel_size=2, stride=2), nn.Conv2d(6, 16, kernel_size=5), nn.ReLU(inplace=True), nn.MaxPool2d(kernel_size=2, stride=2), ) self.fc_1 = nn.Linear(16 * 5 * 5, 120) self.fc_2 = nn.Linear(120, 84) self.fc_3 = nn.Linear(84, num_classes) def forward(self, x): x = self.features(x) x = x.view(x.size(0), -1) x = F.relu(self.fc_1(x)) x = F.relu(self.fc_2(x)) x = self.fc_3(x) return x #Author's published implementation of AlexNet class AlexNetTheirs(nn.Module): def __init__(self, channel, num_classes): super(AlexNetTheirs, self).__init__() self.features = nn.Sequential( nn.Conv2d(channel, 128, kernel_size=5, stride=1, padding=4 if channel==1 else 2), nn.ReLU(inplace=True), nn.MaxPool2d(kernel_size=2, stride=2), nn.Conv2d(128, 192, kernel_size=5, padding=2), nn.ReLU(inplace=True), nn.MaxPool2d(kernel_size=2, stride=2), nn.Conv2d(192, 256, kernel_size=3, padding=1), nn.ReLU(inplace=True), nn.Conv2d(256, 192, kernel_size=3, padding=1), nn.ReLU(inplace=True), nn.Conv2d(192, 192, kernel_size=3, padding=1), nn.ReLU(inplace=True), nn.MaxPool2d(kernel_size=2, stride=2), ) self.fc = nn.Linear(192 * 4 * 4, num_classes) def forward(self, x): x = self.features(x) x = x.view(x.size(0), -1) x = self.fc(x) return x #to get the 
model specified def new_network(model): if model == "CNN": return CNN() if model == "AlexNet": return AlexNet() if model == "AlexNetTheirs": return AlexNetTheirs(channel,n_classes) if model == "MLP": return MLP() if model == "LeNet": return LeNet() if model == "LeNetTheirs": return LeNetTheirs(channel,n_classes) # Experiment 1 def experiment1(model, dataset, images_per_class, iterations, network_steps): createData(dataset) all_synthetic_datas = train_synthetic(model,dataset, images_per_class, iterations, network_steps) evaluation(model,all_synthetic_datas, images_per_class) # Experiment 2 def experiment2(model, dataset, images_per_class, iterations, network_steps): createData(dataset) all_synthetic_datas = train_synthetic(model,dataset, images_per_class, iterations, network_steps) models = ["CNN", "MLP", "LeNet", "AlexNet"] #models used to evaluate the synthetic data for m in models: evaluation(m, all_synthetic_datas, images_per_class) experiment1("CNN", "SVHN", 1, 1000, 1) #CNN model, SVHN dataset, 1 image per class, 1000 iterations, 1 network step experiment2("AlexNet", "MNIST", 1, 1000, 1) #AlexNet model, MNIST dataset, 1 image per class, 1000 iterations, 1 network step ``` **For mean and std** ``` #To find the mean and std of the datasets transform = transforms.Compose([ transforms.ToTensor() ]) from torchvision.datasets import MNIST trainset = MNIST(".", train=True, download=True, transform=transform) testset = MNIST(".", train=False, download=True, transform=transform) trainset_copy = MNIST(".", train=True, download=True, transform=transform) # from torchvision.datasets import FashionMNIST # trainset = FashionMNIST(".", train=True, download=True, transform=transform) # testset = FashionMNIST(".", train=False, download=True, transform=transform) # trainset_copy = FashionMNIST(".", train=True, download=True, transform=transform) # from torchvision.datasets import CIFAR10 # trainset = CIFAR10(".", train=True, download=True, transform=transform) # testset = CIFAR10(".", train=False,
download=True, transform=transform) # trainset_copy = CIFAR10(".", train=True, download=True, transform=transform) # from torchvision.datasets import SVHN # trainset = SVHN(".", split='train', transform=transform, download=True) # testset = SVHN(".", split='test', transform=transform, download=True) # trainset_copy = SVHN(".", split='test', transform=transform, download=True) loader = DataLoader(trainset, batch_size=256, num_workers=0, shuffle=False) mean = 0. std = 0. for images, _ in loader: batch_samples = images.size(0) images = images.view(batch_samples, images.size(1), -1) mean += images.mean(2).sum(0) std += images.std(2).sum(0) mean /= len(loader.dataset) std /= len(loader.dataset) print(mean) print(std) ``` MNIST mean = [0.1307], std = [0.3015] FashionMNIST mean = [0.2860], std = [0.3205] SVHN mean = [0.4377, 0.4438, 0.4728], std = [0.1201, 0.1231, 0.1052] CIFAR10 mean = [0.4914, 0.4822, 0.4465], std = [0.2023, 0.1994, 0.2010] If there is a crash, reload: ``` path = F"/content/gdrive/MyDrive/data_syn.pt" syn=torch.load(path) print(syn.shape) all_synthetic_data=[] all_synthetic_data.append(syn) loss_function = nn.CrossEntropyLoss().to(device) #evaluation("CNN",all_synthetic_data, 1) ```
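The layer-wise gradient distance driving the condensation loop above can be exercised on its own. Below is a minimal NumPy sketch of the same computation; the name `gradient_distance` and the toy gradient lists are ours, not part of the notebook, and the PyTorch version above additionally keeps the synthetic-gradient graph for backprop.

```python
import numpy as np

def gradient_distance(grads_real, grads_syn, eps=1e-6):
    """Sum over layers of (1 - cosine similarity) between matching gradient
    tensors, flattening each tensor so every row is one output unit."""
    dist = 0.0
    for gr, gs in zip(grads_real, grads_syn):
        # Flatten trailing dims; promote 1-D bias gradients to a single row
        gr = gr.reshape(gr.shape[0], -1) if gr.ndim > 1 else gr.reshape(1, -1)
        gs = gs.reshape(gs.shape[0], -1) if gs.ndim > 1 else gs.reshape(1, -1)
        cos = np.sum(gr * gs, axis=-1) / (
            np.linalg.norm(gr, axis=-1) * np.linalg.norm(gs, axis=-1) + eps)
        dist += np.sum(1.0 - cos)
    return dist

# Identical gradients give a distance near 0; opposed gradients give ~2 per row.
g = [np.ones((2, 3)), np.array([0.5, -0.5])]
print(gradient_distance(g, g))                # ~0.0
print(gradient_distance(g, [-t for t in g]))  # ~6.0 (2 rows + 1 row, ~2 each)
```

This is why the matching loss is bounded per layer: each row contributes a value in [0, 2], so perfectly aligned real/synthetic gradients drive the loss toward zero.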
# Function Practice Exercises

Problems are arranged in increasing difficulty:
* Warmup - these can be solved using basic comparisons and methods
* Level 1 - these may involve if/then conditional statements and simple methods
* Level 2 - these may require iterating over sequences, usually with some kind of loop
* Challenging - these will take some creativity to solve

## WARMUP SECTION:

#### LESSER OF TWO EVENS: Write a function that returns the lesser of two given numbers *if* both numbers are even, but returns the greater if one or both numbers are odd

    lesser_of_two_evens(2,4) --> 2
    lesser_of_two_evens(2,5) --> 5

```
def lesser_of_two_evens(a,b):
    if a % 2 == 0 and b % 2 == 0:
        return b if a > b else a
    else:
        return a if a > b else b

# Check
lesser_of_two_evens(2,4)

# Check
lesser_of_two_evens(2,5)
```

#### ANIMAL CRACKERS: Write a function that takes a two-word string and returns True if both words begin with the same letter

    animal_crackers('Levelheaded Llama') --> True
    animal_crackers('Crazy Kangaroo') --> False

```
def animal_crackers(text):
    words = text.split()
    if words[0][0] == words[1][0]:
        return True
    else:
        return False

# Check
animal_crackers('Levelheaded Llama')

# Check
animal_crackers('Crazy Kangaroo')
```

#### MAKES TWENTY: Given two integers, return True if the sum of the integers is 20 *or* if one of the integers is 20.
If not, return False

    makes_twenty(20,10) --> True
    makes_twenty(12,8) --> True
    makes_twenty(2,3) --> False

```
def makes_twenty(n1,n2):
    if n1 == 20 or n2 == 20 or n1 + n2 == 20:
        return True
    else:
        return False

# Check
makes_twenty(10,20)
makes_twenty(12,8)
makes_twenty(2,7)

# Check
makes_twenty(2,3)
```

# LEVEL 1 PROBLEMS

#### OLD MACDONALD: Write a function that capitalizes the first and fourth letters of a name

    old_macdonald('macdonald') --> MacDonald

Note: `'macdonald'.capitalize()` returns `'Macdonald'`

```
def old_macdonald(name):
    new_word = list(name)
    new_word[0] = new_word[0].capitalize()
    new_word[3] = new_word[3].capitalize()
    return ''.join(new_word)

# Check
old_macdonald('macdonald')
```

#### MASTER YODA: Given a sentence, return a sentence with the words reversed

    master_yoda('I am home') --> 'home am I'
    master_yoda('We are ready') --> 'ready are We'

Note: The .join() method may be useful here. The .join() method allows you to join together strings in a list with some connector string.
For example, some uses of the .join() method:

    >>> "--".join(['a','b','c'])
    >>> 'a--b--c'

This means if you had a list of words you wanted to turn back into a sentence, you could just join them with a single space string:

    >>> " ".join(['Hello','world'])
    >>> "Hello world"

```
def master_yoda(text):
    list1 = text.split()
    list1.reverse()
    return " ".join(list1)

# Check
master_yoda('I am home')

# Check
master_yoda('We are ready')
```

#### ALMOST THERE: Given an integer n, return True if n is within 10 of either 100 or 200

    almost_there(90) --> True
    almost_there(104) --> True
    almost_there(150) --> False
    almost_there(209) --> True

NOTE: `abs(num)` returns the absolute value of a number

```
##def almost_there(n):
##    return n in range(90,111) or n in range(190,211)

def almost_there(n):
    return abs(100 - n) <= 10 or abs(200 - n) <= 10

# Check
almost_there(104)

# Check
almost_there(150)

# Check
almost_there(209)
```

# LEVEL 2 PROBLEMS

#### FIND 33: Given a list of ints, return True if the array contains a 3 next to a 3 somewhere.

    has_33([1, 3, 3]) → True
    has_33([1, 3, 1, 3]) → False
    has_33([3, 1, 3]) → False

```
def has_33(nums):
    for i in range(len(nums)-1):
        if nums[i:i+2] == [3, 3]:
            return True
    return False

# Check
has_33([1, 3 , 3])

# Check
has_33([1, 3, 1, 3])

# Check
has_33([3, 1, 3])
```

#### PAPER DOLL: Given a string, return a string where for every character in the original there are three characters

    paper_doll('Hello') --> 'HHHeeellllllooo'
    paper_doll('Mississippi') --> 'MMMiiissssssiiippppppiii'

```
def paper_doll(text):
    result = ''
    for char in text:
        result += char * 3
    return result

# Check
paper_doll('Hello')

# Check
paper_doll('Mississippi')
```

#### BLACKJACK: Given three integers between 1 and 11, if their sum is less than or equal to 21, return their sum. If their sum exceeds 21 *and* there's an eleven, reduce the total sum by 10.
Finally, if the sum (even after adjustment) exceeds 21, return 'BUST'

    blackjack(5,6,7) --> 18
    blackjack(9,9,9) --> 'BUST'
    blackjack(9,9,11) --> 19

```
def blackjack(a,b,c):
    total = a + b + c
    if total > 21 and 11 in (a, b, c):
        total -= 10
    return total if total <= 21 else 'BUST'

# Check
blackjack(5,6,7)

# Check
blackjack(9,9,9)

# Check
blackjack(9,9,11)
```

#### SUMMER OF '69: Return the sum of the numbers in the array, except ignore sections of numbers starting with a 6 and extending to the next 9 (every 6 will be followed by at least one 9). Return 0 for no numbers.

    summer_69([1, 3, 5]) --> 9
    summer_69([4, 5, 6, 7, 8, 9]) --> 9
    summer_69([2, 1, 6, 9, 11]) --> 14

```
def summer_69(arr):
    total = 0
    skipping = False
    for num in arr:
        if skipping:
            # Inside a 6...9 section: resume summing after the next 9
            if num == 9:
                skipping = False
        elif num == 6:
            skipping = True
        else:
            total += num
    return total

# Check
summer_69([1, 3, 5])

# Check
summer_69([4, 5, 6, 7, 8, 9])

# Check
summer_69([2, 1, 6, 9, 11])
```

# CHALLENGING PROBLEMS

#### SPY GAME: Write a function that takes in a list of integers and returns True if it contains 007 in order

    spy_game([1,2,4,0,0,7,5]) --> True
    spy_game([1,0,2,4,0,5,7]) --> True
    spy_game([1,7,2,0,4,5,0]) --> False

```
def spy_game(nums):
    # Look for 0, 0, 7 in order (not necessarily adjacent)
    code = [0, 0, 7]
    for num in nums:
        if code and num == code[0]:
            code.pop(0)
    return len(code) == 0

# Check
spy_game([1,2,4,0,0,7,5])

# Check
spy_game([1,0,2,4,0,5,7])

# Check
spy_game([1,7,2,0,4,5,0])
```

#### COUNT PRIMES: Write a function that returns the *number* of prime numbers that exist up to and including a given number

    count_primes(100) --> 25

By convention, 0 and 1 are not prime.
```
def count_primes(num):
    x = 0
    for n in range(2, num+1):
        prime = True
        for i in range(2, n):
            if n % i == 0:
                prime = False
                break
        if prime:
            x += 1
    return x

# Check
count_primes(100)
```

### Just for fun:

#### PRINT BIG: Write a function that takes in a single letter, and returns a 5x5 representation of that letter

    print_big('a')

    out:

      *  
     * * 
    *****
    *   *
    *   *

HINT: Consider making a dictionary of possible patterns, and mapping the alphabet to specific 5-line combinations of patterns. <br>For purposes of this exercise, it's ok if your dictionary stops at "E".

```
def print_big(letter):
    patterns = {1:'  *  ',2:' * * ',3:'*   *',4:'*****',5:'**** ',6:'   * ',7:' *   ',8:'* *  ',9:'*    '}
    alphabet = {'A':[1,2,4,3,3],'B':[5,3,5,3,5],'C':[4,9,9,9,4],'D':[5,3,3,3,5],'E':[4,9,4,9,4]}
    for pattern in alphabet[letter.upper()]:
        print(patterns[pattern])

print_big('a')
print_big('b')
print_big('c')
```

## Great Job!
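A footnote on COUNT PRIMES: the nested-loop solution above performs O(n²) trial divisions in the worst case. A Sieve of Eratosthenes produces the same count far faster; this sketch (the name `count_primes_sieve` is ours) is a drop-in alternative:

```python
def count_primes_sieve(num):
    """Count primes up to and including num; by convention 0 and 1 are not prime."""
    if num < 2:
        return 0
    is_prime = [True] * (num + 1)
    is_prime[0] = is_prime[1] = False
    for p in range(2, int(num ** 0.5) + 1):
        if is_prime[p]:
            # Multiples of p below p*p were already crossed out by smaller primes
            for multiple in range(p * p, num + 1, p):
                is_prime[multiple] = False
    return sum(is_prime)

print(count_primes_sieve(100))  # 25, matching count_primes(100)
```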
# Results: mutagenesis2 Original <b> MIL </b> <i>stratified k fold Validation</i> is performed. Metrics: <br> - AUC - Accuracie ### Import Libraries ``` import sys,os import warnings os.chdir('/Users/josemiguelarrieta/Documents/MILpy') sys.path.append(os.path.realpath('..')) from sklearn.utils import shuffle import random as rand import numpy as np from data import load_data warnings.filterwarnings('ignore') from MILpy.functions.mil_cross_val import mil_cross_val #Import Algorithms from MILpy.Algorithms.simpleMIL import simpleMIL from MILpy.Algorithms.MILBoost import MILBoost from MILpy.Algorithms.maxDD import maxDD from MILpy.Algorithms.CKNN import CKNN from MILpy.Algorithms.EMDD import EMDD from MILpy.Algorithms.MILES import MILES from MILpy.Algorithms.BOW import BOW ``` ### Load data ``` bags,labels,X = load_data('mutagenesis2_original') folds = 5 runs = 5 ``` #### Simple MIL [max] ``` SMILa = simpleMIL() parameters_smil = {'type': 'max'} print '\n========= SIMPLE MIL RESULT [MAX] =========' AUC = [] ACCURACIE=[] for i in range(runs): print '\n run #'+ str(i) #Shuffle Data bags,labels = shuffle(bags, labels, random_state=rand.randint(0, 100)) accuracie, results_accuracie, auc,results_auc, elapsed = mil_cross_val(bags=bags,labels=labels.ravel(), model=SMILa, folds=folds, parameters=parameters_smil, timer = True) print '\n'+'AUC: ' + str(auc)+'\n'+'Accuracie: '+ str(accuracie)+'\n'+'Elapsed: '+ str(round(elapsed,2)) AUC.append(auc) ACCURACIE.append(accuracie) print '\n MEAN AUC: '+ str(np.mean(AUC)) + '\n MEAN ACCURACIE: '+ str(np.mean(ACCURACIE)) ``` #### Simple MIL [min] ``` parameters_smil = {'type': 'min'} print '\n========= SIMPLE MIL RESULT [MIN] =========' AUC = [] ACCURACIE=[] for i in range(runs): print '\n run #'+ str(i) bags,labels = shuffle(bags, labels, random_state=rand.randint(0, 100)) accuracie, results_accuracie, auc,results_auc, elapsed = mil_cross_val(bags=bags,labels=labels.ravel(), model=SMILa, folds=folds,parameters=parameters_smil, 
timer=True) print '\n'+'AUC: ' + str(auc)+'\n'+'Accuracie: '+ str(accuracie)+'\n'+'Elapsed: '+ str(round(elapsed,2)) AUC.append(auc) ACCURACIE.append(accuracie) print '\n MEAN AUC: '+ str(np.mean(AUC)) + '\n MEAN ACCURACIE: '+ str(np.mean(ACCURACIE)) ``` #### Simple MIL [extreme] ``` parameters_smil = {'type': 'extreme'} print '\n========= SIMPLE MIL RESULT [EXTREME] =========' AUC = [] ACCURACIE=[] for i in range(runs): print '\n run #'+ str(i) #Shuffle Data bags,labels = shuffle(bags, labels, random_state=rand.randint(0, 100)) accuracie, results_accuracie, auc,results_auc, elapsed = mil_cross_val(bags=bags,labels=labels.ravel(), model=SMILa, folds=folds,parameters=parameters_smil, timer=True) print '\n'+'AUC: ' + str(auc)+'\n'+'Accuracie: '+ str(accuracie)+'\n'+'Elapsed: '+ str(round(elapsed,2)) AUC.append(auc) ACCURACIE.append(accuracie) print '\n MEAN AUC: '+ str(np.mean(AUC)) + '\n MEAN ACCURACIE: '+ str(np.mean(ACCURACIE)) ``` #### Simple MIL [average] ``` parameters_smil = {'type': 'average'} print '\n========= SIMPLE MIL RESULT [AVERAGE] =========' AUC = [] ACCURACIE=[] for i in range(runs): print '\n run #'+ str(i) bags,labels = shuffle(bags, labels, random_state=rand.randint(0, 100)) accuracie, results_accuracie, auc,results_auc, elapsed = mil_cross_val(bags=bags,labels=labels.ravel(), model=SMILa, folds=folds,parameters=parameters_smil, timer=True) print '\n'+'AUC: ' + str(auc)+'\n'+'Accuracie: '+ str(accuracie)+'\n'+'Elapsed: '+ str(round(elapsed,2)) AUC.append(auc) ACCURACIE.append(accuracie) print '\n MEAN AUC: '+ str(np.mean(AUC)) + '\n MEAN ACCURACIE: '+ str(np.mean(ACCURACIE)) ``` #### Bag of Words ``` bow_classifier = BOW() parameters_bow = {'k':100,'covar_type':'diag','n_iter':20} print '\n========= BAG OF WORDS RESULT =========' AUC = [] ACCURACIE=[] for i in range(runs): print '\n run #'+ str(i) bags,labels = shuffle(bags, labels, random_state=rand.randint(0, 100)) accuracie, results_accuracie, auc,results_auc, elapsed =
mil_cross_val(bags=bags,labels=labels.ravel(), model=bow_classifier, folds=folds,parameters=parameters_bow, timer=True) print '\n'+'AUC: ' + str(auc)+'\n'+'Accuracie: '+ str(accuracie)+'\n'+'Elapsed: '+ str(round(elapsed,2)) AUC.append(auc) ACCURACIE.append(accuracie) print '\n MEAN AUC: '+ str(np.mean(AUC)) + '\n MEAN ACCURACIE: '+ str(np.mean(ACCURACIE)) ``` #### Citation KNN ``` cknn_classifier = CKNN() parameters_cknn = {'references': 3, 'citers': 5} print '\n========= CKNN RESULT =========' AUC = [] ACCURACIE=[] for i in range(runs): print '\n run #'+ str(i) bags,labels = shuffle(bags, labels, random_state=rand.randint(0, 100)) accuracie, results_accuracie, auc,results_auc, elapsed = mil_cross_val(bags=bags,labels=labels.ravel(), model=cknn_classifier, folds=folds,parameters=parameters_cknn, timer=True) print '\n'+'AUC: ' + str(auc)+'\n'+'Accuracie: '+ str(accuracie)+'\n'+'Elapsed: '+ str(round(elapsed,2)) AUC.append(auc) ACCURACIE.append(accuracie) print '\n MEAN AUC: '+ str(np.mean(AUC)) + '\n MEAN ACCURACIE: '+ str(np.mean(ACCURACIE)) ``` #### Diverse Density ``` maxDD_classifier = maxDD() print '\n========= DIVERSE DENSITY RESULT=========' AUC = [] ACCURACIE=[] for i in range(runs): print '\n run #'+ str(i) bags,labels = shuffle(bags, labels, random_state=rand.randint(0, 100)) accuracie, results_accuracie, auc,results_auc, elapsed = mil_cross_val(bags=bags,labels=labels.ravel(), model=maxDD_classifier, folds=folds,parameters={}, timer=True) print '\n'+'AUC: ' + str(auc)+'\n'+'Accuracie: '+ str(accuracie)+'\n'+'Elapsed: '+ str(round(elapsed,2)) AUC.append(auc) ACCURACIE.append(accuracie) print '\n MEAN AUC: '+ str(np.mean(AUC)) + '\n MEAN ACCURACIE: '+ str(np.mean(ACCURACIE)) ``` #### EM-DD ``` emdd_classifier = EMDD() print '\n========= EM-DD RESULT =========' AUC = [] ACCURACIE=[] for i in range(runs): print '\n run #'+ str(i) bags,labels = shuffle(bags, labels, random_state=rand.randint(0, 100)) accuracie, results_accuracie, auc,results_auc, elapsed = 
mil_cross_val(bags=bags,labels=labels.ravel(), model=emdd_classifier, folds=folds,parameters={}, timer=True) print '\n'+'AUC: ' + str(auc)+'\n'+'Accuracie: '+ str(accuracie)+'\n'+'Elapsed: '+ str(round(elapsed,2)) AUC.append(auc) ACCURACIE.append(accuracie) print '\n MEAN AUC: '+ str(np.mean(AUC)) + '\n MEAN ACCURACIE: '+ str(np.mean(ACCURACIE)) ``` #### MILBoost ``` milboost_classifier = MILBoost() print '\n========= MILBOOST RESULT =========' AUC = [] ACCURACIE=[] for i in range(runs): print '\n run #'+ str(i) bags,labels = shuffle(bags, labels, random_state=rand.randint(0, 100)) accuracie, results_accuracie, auc,results_auc, elapsed = mil_cross_val(bags=bags,labels=labels, model=milboost_classifier, folds=folds,parameters={}, timer=True) print '\n'+'AUC: ' + str(auc)+'\n'+'Accuracie: '+ str(accuracie)+'\n'+'Elapsed: '+ str(round(elapsed,2)) AUC.append(auc) ACCURACIE.append(accuracie) print '\n MEAN AUC: '+ str(np.mean(AUC)) + '\n MEAN ACCURACIE: '+ str(np.mean(ACCURACIE)) ``` #### Miles ``` #Pending ```
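Every block above repeats the same shuffle → `mil_cross_val` → aggregate pattern, so it could be collapsed into one helper. A hedged sketch follows (written in Python 3, unlike the Python 2 cells above; `repeated_cv` is our name, and the lambda is a toy stand-in for the real shuffle-and-cross-validate call):

```python
import random
import statistics as st

def repeated_cv(run_once, runs=5, seed=0):
    """Run a cross-validation routine `runs` times and aggregate the metrics,
    mirroring the per-algorithm AUC / accuracy loops above.
    `run_once(rng)` must return an (auc, accuracy) pair."""
    rng = random.Random(seed)
    aucs, accs = [], []
    for i in range(runs):
        auc, acc = run_once(rng)
        print('run #%d  AUC: %.3f  Accuracy: %.3f' % (i, auc, acc))
        aucs.append(auc)
        accs.append(acc)
    return {'mean_auc': st.mean(aucs), 'std_auc': st.pstdev(aucs),
            'mean_acc': st.mean(accs), 'std_acc': st.pstdev(accs)}

# Toy stand-in for a real shuffle + mil_cross_val call
summary = repeated_cv(lambda rng: (0.7 + 0.05 * rng.random(),
                                   0.8 + 0.05 * rng.random()))
print(summary['mean_auc'], summary['mean_acc'])
```

With a helper like this, each algorithm section reduces to one call that passes the classifier and its parameter dict, which also removes the risk of stale copy-pasted headers.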
# Computation of NPP signal to noise ratio in 30 year windows ### UKESM model output Steps: - Load the data - Define the signal to noise function - Using a 30 year moving window (for loop) to calculate the signal:noise ratio at each grid cell at each year **Import packages** ``` !pwd import numpy as np import netCDF4 as nc import pandas as pd import matplotlib.pyplot as plt from matplotlib.gridspec import GridSpec import cartopy.crs as ccrs import cartopy.feature as cfeature import xarray as xr import matplotlib.cm as cm ``` **Import the pre-processed data** ``` ### UKESM data = nc.Dataset('/gws/pw/j05/cop26_hackathons/bristol/project09/data/ETOPO_intpp_Omon_UKESM1-0-LL_historical_r1i1p1f2_185001-201412_yearmonths.nc') hist_npp = data.variables['intpp'][...]*86400*12 # convert from mol m-2 s-1 --> g m-2 s-1 data = nc.Dataset('/gws/pw/j05/cop26_hackathons/bristol/project09/data/ETOPO_intpp_Omon_UKESM1-0-LL_ssp585_r1i1p1f2_2015001-210012_yearmonths.nc') ssp585_npp = data.variables['intpp'][...]*86400*12 # convert from mol m-2 s-1 --> g m-2 s-1 lon = data.variables['ETOPO60X'][...] lat = data.variables['ETOPO60Y'][...] month = data.variables['time'][...] 
years = np.arange(1850.5,2100.6,1)
ukesm_npp = np.ma.concatenate((hist_npp, ssp585_npp), axis=0)

# The remaining CMIP6 models are loaded the same way: historical + ssp585,
# converted from mol m-2 s-1 to g C m-2 d-1, then concatenated in time.
# Two obvious filename typos are corrected here: 'ri1p1f1' -> 'r1i1p1f1'
# (GFDL-CM4 ssp585) and 'MIRIC-ES2L' -> 'MIROC-ES2L'.
root = '/gws/pw/j05/cop26_hackathons/bristol/project09/data/'

def load_npp(hist_file, ssp_file):
    hist = nc.Dataset(root + hist_file).variables['intpp'][...]*86400*12  # mol m-2 s-1 --> g C m-2 d-1
    ssp = nc.Dataset(root + ssp_file).variables['intpp'][...]*86400*12
    return np.ma.concatenate((hist, ssp), axis=0)

model_files = {
    'access':   ('ETOPO_intpp_Omon_ACCESS-ESM1-5_historical_r1i1p1f1_185001-201412_yearmonths.nc',
                 'ETOPO_intpp_Omon_ACCESS-ESM1-5_ssp585_r1i1p1f1_2015001-230012_yearmonths.nc'),
    'canesm':   ('ETOPO_intpp_Omon_CanESM5_historical_r1i1p2f1_185001-201412_yearmonths.nc',
                 'ETOPO_intpp_Omon_CanESM5_ssp585_r1i1p2f1_2015001-210012_yearmonths.nc'),
    'cesm':     ('ETOPO_intpp_Omon_CESM2_historical_r4i1p1f1_185001-201412_yearmonths.nc',
                 'ETOPO_intpp_Omon_CESM2_ssp585_r4i1p1f1_2015001-210012_yearmonths.nc'),
    'cnrm':     ('ETOPO_intpp_Omon_CNRM-ESM2-1_historical_r1i1p1f2_185001-201412_yearmonths.nc',
                 'ETOPO_intpp_Omon_CNRM-ESM2-1_ssp585_r1i1p1f2_2015001-210012_yearmonths.nc'),
    'gfdlcm4':  ('ETOPO_intpp_Omon_GFDL-CM4_historical_r1i1p1f1_185001-201412_yearmonths.nc',
                 'ETOPO_intpp_Omon_GFDL-CM4_ssp585_r1i1p1f1_2015001-210012_yearmonths.nc'),
    'gfdlesm4': ('ETOPO_intpp_Omon_GFDL-ESM4_historical_r1i1p1f1_185001-201412_yearmonths.nc',
                 'ETOPO_intpp_Omon_GFDL-ESM4_ssp585_r1i1p1f1_2015001-210012_yearmonths.nc'),
    'ipsl':     ('ETOPO_intpp_Omon_IPSL-CM6A-LR_historical_r1i1p1f1_185001-201412_yearmonths.nc',
                 'ETOPO_intpp_Omon_IPSL-CM6A-LR_ssp585_r1i1p1f1_2015001-210012_yearmonths.nc'),
    'miroc':    ('ETOPO_intpp_Omon_MIROC-ES2L_historical_r1i1p1f2_185001-201412_yearmonths.nc',
                 'ETOPO_intpp_Omon_MIROC-ES2L_ssp585_r1i1p1f2_2015001-210012_yearmonths.nc'),
    'mri':      ('ETOPO_intpp_Omon_MRI-ESM2-0_historical_r1i2p1f1_185001-201412_yearmonths.nc',
                 'ETOPO_intpp_Omon_MRI-ESM2-0_ssp585_r1i2p1f1_2015001-210012_yearmonths.nc'),
    'noresm':   ('ETOPO_intpp_Omon_NorESM2-LM_historical_r1i1p1f1_185001-201412_yearmonths.nc',
                 'ETOPO_intpp_Omon_NorESM2-LM_ssp585_r1i1p1f1_2015001-210012_yearmonths.nc'),
}

npp_models = {'ukesm': ukesm_npp}
for name, (hist_file, ssp_file) in model_files.items():
    npp_models[name] = load_npp(hist_file, ssp_file)

# A monthly slice of any model is npp_models[name][:, m, :, :] with m = 0 (Jan) .. 11 (Dec)
for name in ('access', 'canesm', 'cesm', 'cnrm', 'gfdlcm4', 'gfdlesm4',
             'ipsl', 'miroc', 'mri', 'noresm', 'ukesm'):
    print(np.shape(npp_models[name]))
print(np.shape(lon))
print(np.shape(lat))
print(np.shape(month))
```

**Put the historical and SSP585 scenario together into one time series and separate into months**

```
npp = npp_models['ukesm']  # the figure labels below refer to UKESM, so use the UKESM series here
print(np.shape(npp))
npp_jan = npp[:,0,:,:]
npp_feb = npp[:,1,:,:]
npp_mar = npp[:,2,:,:]
npp_apr = npp[:,3,:,:]
npp_may = npp[:,4,:,:]
npp_jun = npp[:,5,:,:]
npp_jul = npp[:,6,:,:]
npp_aug = npp[:,7,:,:]
npp_sep = npp[:,8,:,:]
npp_oct = npp[:,9,:,:]
npp_nov = npp[:,10,:,:]
npp_dec = npp[:,11,:,:]
```

**Define signal:noise function**

```
def signal_to_noise(ny,data):
    # data: (ny, nlat, nlon) window of annual values for one calendar month
    mask = np.ma.getmask(data[0])
    # signal: mean year-to-year change over the window (approximates the linear trend slope)
    tre = data[1::] - data[0:-1]
    trend = np.ma.average(tre, axis=0)
    # noise: standard deviation of the residuals around that linear trend
    # (fixed: the original vector[0, np.newaxis, np.newaxis] kept the fitted line flat in time)
    vector = np.arange(1,ny+1,1)
    scaler = np.ones(np.shape(trend)) * vector[:, np.newaxis, np.newaxis]
    error = (trend * scaler + data[0]) - data
    noise = np.ma.std(error, axis=0)  # per grid cell (the original collapsed this to one global number)
    s2n = (ny*trend) / noise
    return np.ma.masked_where(mask,s2n)
```

**Calculate the signal:noise ratio using a 30-year moving window**

```
# define new arrays to fill
npp_monthly = {'jan': npp_jan, 'feb': npp_feb, 'mar': npp_mar, 'apr': npp_apr,
               'may': npp_may, 'jun': npp_jun, 'jul': npp_jul, 'aug': npp_aug,
               'sep': npp_sep, 'oct': npp_oct, 'nov': npp_nov, 'dec': npp_dec}
s2n_monthly = {mon: np.ma.zeros(np.ma.shape(arr)) for mon, arr in npp_monthly.items()}
s2n_npp_jan = s2n_monthly['jan']
s2n_npp_feb = s2n_monthly['feb']
s2n_npp_mar = s2n_monthly['mar']
s2n_npp_apr = s2n_monthly['apr']
s2n_npp_may = s2n_monthly['may']
s2n_npp_jun = s2n_monthly['jun']
s2n_npp_jul = s2n_monthly['jul']
s2n_npp_aug = s2n_monthly['aug']
s2n_npp_sep = s2n_monthly['sep']
s2n_npp_oct = s2n_monthly['oct']
s2n_npp_nov = s2n_monthly['nov']
s2n_npp_dec = s2n_monthly['dec']

# slide a 30-year window over the years; stop once the window would run past 2100
ny = 30
for yr,year in enumerate(years):
    yr2 = yr+ny
    if (year+ny > 2100):
        print("end")
        break
    for mon in npp_monthly:
        s2n_monthly[mon][yr,:,:] = signal_to_noise(ny, npp_monthly[mon][yr:yr2,:,:])
```

**Have a look at the output**

```
fstic = 13
fslab = 15
fig = plt.figure(figsize=(10,8))
gs = GridSpec(1,1)
la = 90
lo = 180
ax1 = plt.subplot(gs[0])
ax1.spines['top'].set_visible(False)
ax1.spines['right'].set_visible(False)
ax1.tick_params(labelsize=fstic)
plt.plot(years,s2n_npp_jan[:,la,lo])
plt.plot((1850,2100),(-1,-1),'k--')
plt.plot((1850,2100),(1,1),'k--')
plt.ylabel('Signal : Noise', fontsize=fslab)
plt.xlabel('year', fontsize=fslab)
plt.xlim(1850,2070)
```

**Import the heatmap of Arctic Tern area use**

```
data = nc.Dataset('/gws/pw/j05/cop26_hackathons/bristol/project09/tern_heatmap/bird_heatmap.nc')
tern_density = data.variables['density'][...]
tern_lon = data.variables['longitude'][...]
tern_lat = data.variables['latitude'][...]
print(np.shape(tern_density))
# roll the tern grid so that longitudes run 0-360, matching the model grid
tern_lon = np.ma.concatenate((tern_lon[:,200::], tern_lon[:,0:200]+360.0), axis=1)
tern_density = np.ma.concatenate((tern_density[:,200::], tern_density[:,0:200]), axis=1)

proj = ccrs.Robinson(central_longitude=20)
levs1 = np.arange(0,21,1)*0.1
levs2 = np.arange(-20,21,2)*0.1
colmap1 = cm.viridis
colmap2 = cm.coolwarm
fstic = 13
fslab = 15

fig = plt.figure(figsize=(12,16))
gs = GridSpec(2,1)

ax1 = plt.subplot(gs[0], projection=proj)
p1 = plt.contourf(tern_lon, tern_lat, tern_density, transform=ccrs.PlateCarree(), cmap=colmap1,
                  levels=levs1, vmin=np.min(levs1), vmax=np.max(levs1), extend='max')
ax1.add_feature(cfeature.LAND, color='w', zorder=2)
ax1.coastlines(zorder=2)

ax2 = plt.subplot(gs[1], projection=proj)
p2 = plt.contourf(lon, lat, s2n_npp_jan[0,:,:], transform=ccrs.PlateCarree(), cmap=colmap2,
                  levels=levs2, vmin=np.min(levs2), vmax=np.max(levs2), extend='both')
ax2.coastlines(zorder=2)

plt.subplots_adjust(right=0.85)
cbax1 = fig.add_axes([0.9, 0.55, 0.05, 0.3])
cbar1 = plt.colorbar(p1, cax=cbax1, orientation='vertical', ticks=levs1[::2])
cbar1.ax.set_ylabel('Area use density', fontsize=fslab)
cbar1.ax.tick_params(labelsize=fstic)
cbax2 = fig.add_axes([0.9, 0.15, 0.05, 0.3])
cbar2 = plt.colorbar(p2, cax=cbax2, orientation='vertical', ticks=levs2[::2])
cbar2.ax.set_ylabel('signal:noise ratio (1850)', fontsize=fslab)
cbar2.ax.tick_params(labelsize=fstic)
fig.savefig('TernDensity_UKESMintpp_Signal2Noise1850.png', dpi=300, bbox_inches='tight')
```

**Extract the signal:noise using the tern heatmap**

```
### select the foraging regions (index ranges into the lat/lon grid)
months = ['jan','feb','mar','apr','may','jun','jul','aug','sep','oct','nov','dec']
s2n_monthly = {'jan': s2n_npp_jan, 'feb': s2n_npp_feb, 'mar': s2n_npp_mar, 'apr': s2n_npp_apr,
               'may': s2n_npp_may, 'jun': s2n_npp_jun, 'jul': s2n_npp_jul, 'aug': s2n_npp_aug,
               'sep': s2n_npp_sep, 'oct': s2n_npp_oct, 'nov': s2n_npp_nov, 'dec': s2n_npp_dec}

regions = {'NA': (125, 155, 250, 330),   # North Atlantic
           'BE': (50, 90, 330, 360),     # Benguela Upwelling
           'AI': (35, 70, 30, 80),       # Amsterdam Island
           'SO': (0, 35, 0, 360)}        # Southern Ocean

mask = np.ma.getmask(s2n_npp_jan[:,:,:])

### tern-density weighting: weighted mean of a 2D field, weights normalised to sum to 1
def weighting(data,density):
    ww = density / np.ma.sum(density)
    return np.ma.sum(data * ww)

s2n_weighted = {}
for reg, (la1, la2, lo1, lo2) in regions.items():
    print(reg, "longitude =", lon[lo1], lon[lo2-1])
    print(reg, "latitude =", lat[la1], lat[la2-1])
    mask_reg = mask[:, la1:la2, lo1:lo2]
    dens_reg = np.ma.masked_where(mask_reg[0,:,:], tern_density[la1:la2, lo1:lo2])
    for mon in months:
        field = np.ma.masked_where(mask_reg, s2n_monthly[mon][:, la1:la2, lo1:lo2])
        s2n_weighted[(reg, mon)] = np.ma.array([weighting(field[yr,:,:], dens_reg)
                                                for yr in range(field.shape[0])])

### put it all together in easy to use arrays (months x years); np.ma.vstack keeps the masks
s2n_npp_NA_weighted = np.ma.vstack([s2n_weighted[('NA', mon)] for mon in months])
s2n_npp_BE_weighted = np.ma.vstack([s2n_weighted[('BE', mon)] for mon in months])
s2n_npp_AI_weighted = np.ma.vstack([s2n_weighted[('AI', mon)] for mon in months])
s2n_npp_SO_weighted = np.ma.vstack([s2n_weighted[('SO', mon)] for mon in months])
print(np.shape(s2n_npp_NA_weighted))
print(np.shape(s2n_npp_BE_weighted))
print(np.shape(s2n_npp_AI_weighted))
print(np.shape(s2n_npp_SO_weighted))
```

**Plot results**

```
fstic = 13
fslab = 15
alf = [0.25,0.25,0.25,0.25]
ls = ['-','-','-','-']
lw = [1.5,1.5,1.5,1.5]
labs = ['North Atlantic (Jul-Aug)', 'Benguela Upwelling (Aug-Oct)', 'Amsterdam Island (Nov-Dec)', 'Southern Ocean (Jan-Mar)']
cols = ['k', 'royalblue', 'firebrick', 'goldenrod']

fig = plt.figure(figsize=(10,8))
gs = GridSpec(1,1)
ax1 = plt.subplot(gs[0])
ax1.spines['top'].set_visible(False)
ax1.spines['right'].set_visible(False)
ax1.tick_params(labelsize=fstic)
plt.plot(years, np.ma.average(s2n_npp_NA_weighted[6:8,:],axis=0), color=cols[0], label=labs[0], linewidth=lw[0], alpha=alf[0], linestyle=ls[0])
plt.plot(years, np.ma.average(s2n_npp_BE_weighted[7:10,:],axis=0), color=cols[1], label=labs[1], linewidth=lw[1], alpha=alf[1], linestyle=ls[1])
plt.plot(years, np.ma.average(s2n_npp_AI_weighted[10:12,:],axis=0), color=cols[2], label=labs[2], linewidth=lw[2], alpha=alf[2], linestyle=ls[2])
plt.plot(years, np.ma.average(s2n_npp_SO_weighted[0:3,:],axis=0), color=cols[3], label=labs[3], linewidth=lw[3], alpha=alf[3], linestyle=ls[3])  # 0:3 = Jan-Mar, matching the label
plt.plot((1850,2100),(0,0),'k--')
plt.plot((1850,2100),(-1,-1),'k--')
plt.plot((1850,2100),(1,1),'k--')
plt.ylabel('Signal : Noise', fontsize=fslab)
plt.xlabel('year', fontsize=fslab)
plt.xlim(1850,2070)
plt.legend(frameon=False, loc='upper right', ncol=1)
fig.savefig('UKESMintpp_Signal2Noise_TernForagingRegions.png', dpi=300, bbox_inches='tight')
```
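The signal-to-noise definition used above (mean year-to-year change as the signal, standard deviation of the residuals around the implied linear trend as the noise, scaled by the window length) can be sanity-checked on a synthetic series whose trend and noise level are known. This is a minimal 1-D sketch, independent of the model data; the slope, noise level, and random seed are invented for illustration:

```python
import numpy as np

def signal_to_noise_1d(ny, data):
    # 1-D version of the same idea: signal = mean annual change over the window
    trend = np.mean(np.diff(data))
    # noise = std of the residuals around the implied linear trend line
    t = np.arange(1, ny + 1)
    error = (trend * t + data[0]) - data
    noise = np.std(error)
    return ny * trend / noise

rng = np.random.default_rng(1)
ny = 30
t = np.arange(ny)
slope, sigma = 0.1, 0.2                 # made-up trend (per year) and noise level
series = slope * t + rng.normal(0.0, sigma, ny)

s2n = signal_to_noise_1d(ny, series)
print('signal:noise over a 30-year window:', round(s2n, 2))
# the 30-year change (30*0.1 = 3.0) is much larger than the year-to-year noise (~0.2),
# so the ratio comes out well above 1: the trend "emerges" within this window
```

With a weaker trend or stronger interannual variability the ratio drops below 1, which is exactly the regime the dashed ±1 lines mark in the plots above.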
# Information Retrieval in High Dimensional Data
# Assignment #1, 03.11.2017
# Curse of Dimensionality

Group Number: G10

Group Members:
- Achtner Martin
- Arifi Ridon
- Ehrhardt Daniel
- Fichtner Lukas
- Hassis Rafif

## Task 1

Assume $X$ to be uniformly distributed in $C_1$. Determine $d$ in dependence of $p$ such that $P(X \in C_d) = q$ with $q \in [0, 1]$.

$p$: dimensionality of the hypercube

$d$: edge length of the hypercube

$q$: arbitrary probability between $0$ and $1$

$\rightarrow$ The probability of $X$ lying in $C_d$ can be expressed as the ratio of the corresponding volumes:

\begin{equation} P(X \in C_d) = q = \frac{d^p}{1^p} = d^p \end{equation}

\begin{equation} d = \sqrt[p]{q} \end{equation}

Let the components of the $p$-dimensional random variable $X^p$ be independent and have the standard normal distribution. It is known that $P( | X^1 | \leqslant 2.576) = 0.99$. For an arbitrary $p$, determine the probability $P(X^p \notin C_{5.152})$ for any of the components of $X^p$ to lie outside of the interval $[-2.576, 2.576]$.

\begin{equation} P(X^p \notin C_{5.152}) = 1 - P( | X^1 | \leqslant 2.576)^p = 1 - 0.99^p \end{equation}

### Evaluate the value for $p = 2$, $p = 3$ and $p = 500$.

```
import numpy as np
import matplotlib.pyplot as plt

p_values = (2, 3, 500)
Px = np.asarray([1 - 0.99**i for i in p_values])
for i in range(3):
    print('Px[{}] = {:.3f}'.format(p_values[i], Px[i]))
```

## Task 2

### Sample 100 uniformly distributed random vectors from the hypercube $[-1, 1]^p$ for $p = 2$.

```
samples = np.random.uniform(low=-1.0, high=1.0, size=(2, 100))
np.set_printoptions(formatter={'float': lambda x: "{0:0.3f}".format(x)})
print('{}'.format(samples))
```

### For each of the 100 vectors determine the minimum angle to all other vectors. Then compute the average of these minimum angles.
```
%%time
min_angles = np.zeros((samples.shape[1]))
for i in range(samples.shape[1]):
    indices = [j for j in range(samples.shape[1]) if not j==i]
    angles = np.zeros(0)
    x = samples[:, i]
    for j in indices:
        y = samples[:, j]
        cosine = np.dot(x,y) / (np.linalg.norm(x) * np.linalg.norm(y))
        angles = np.append(angles, np.arccos(cosine))
    min_angles[i] = np.amin(angles)
print('Average minimum angle = ', np.mean(min_angles))
```

### Repeat the above for dimensions p = 1...1000 and use the results to plot the average minimum angle against the dimension.

```
%%time
N_DIMENSIONS = 1000
SAMPLESIZE = 100
min_angles = np.zeros(SAMPLESIZE)
max_angles = np.zeros(SAMPLESIZE)
min_avg_angles = np.zeros(N_DIMENSIONS)
max_avg_angles = np.zeros(N_DIMENSIONS)
samples = np.random.uniform(low=-1.0, high=1.0, size=(N_DIMENSIONS, SAMPLESIZE))
# The last row of `samples` is dropped after each pass, so pass number p works in
# dimension N_DIMENSIONS - p + 1; storing at index N_DIMENSIONS - p therefore
# places the result for dimension k at array index k - 1.
for p in range(1, N_DIMENSIONS+1):
    for i in range(SAMPLESIZE):
        indices = [j for j in range(SAMPLESIZE) if not i==j]
        x = samples[:, i]
        angles = np.zeros(0)
        for j in indices:
            y = samples[:, j]
            cosine = np.dot(x,y) / (np.linalg.norm(x) * np.linalg.norm(y))
            angles = np.append(angles, np.arccos(cosine))
        min_angles[i] = np.amin(angles)
        max_angles[i] = np.amax(angles)
    min_avg_angles[N_DIMENSIONS-p] = np.mean(min_angles)
    max_avg_angles[N_DIMENSIONS-p] = np.mean(max_angles)
    samples = np.delete(samples, np.s_[-1:], axis=0)

plt.plot(range(1,N_DIMENSIONS+1), min_avg_angles/np.pi)
plt.ylabel('Average minimum angle (multiples of $\pi$)')
plt.xlabel('Dimension')
plt.show()
```

### Give an interpretation of the result.

$\rightarrow$ The average minimum angle between two vectors converges to $\frac{\pi}{2}$ (the plotted quantity, angle divided by $\pi$, converges to $0.5$).

### What conclusions can you draw for 2 randomly sampled vectors in a p-dimensional space?

$\rightarrow$ The higher the value of $p$, the more likely it is that the two vectors are (nearly) orthogonal.

### Does the result change if the sample size increases?
```
%%time
N_DIMENSIONS = 1000
SAMPLESIZE = 200

min_angles = np.zeros(SAMPLESIZE)
max_angles = np.zeros(SAMPLESIZE)
min_avg_angles = np.zeros(N_DIMENSIONS)
max_avg_angles = np.zeros(N_DIMENSIONS)

samples = np.random.uniform(low=-1.0, high=1.0, size=(N_DIMENSIONS, SAMPLESIZE))
for p in range(1, N_DIMENSIONS + 1):
    for i in range(SAMPLESIZE):
        indices = [j for j in range(SAMPLESIZE) if not i == j]
        x = samples[:, i]
        angles = np.zeros(0)
        for j in indices:
            y = samples[:, j]
            cosine = np.dot(x, y) / (np.linalg.norm(x) * np.linalg.norm(y))
            angles = np.append(angles, np.arccos(cosine))
        min_angles[i] = np.amin(angles)
        max_angles[i] = np.amax(angles)
    min_avg_angles[N_DIMENSIONS - p] = np.mean(min_angles)
    max_avg_angles[N_DIMENSIONS - p] = np.mean(max_angles)
    samples = np.delete(samples, np.s_[-1:], axis=0)

plt.plot(range(1, N_DIMENSIONS + 1), min_avg_angles / np.pi)
plt.ylabel('Average minimum angle (fraction of $\pi$)')
plt.xlabel('Dimension')
plt.show()
```

$\rightarrow$ The more samples are drawn in a space of dimension $p$, the more likely it is that 2 vectors point in almost the same direction. Hence the average minimum angle converges more slowly, and with more oscillations, towards $\frac{\pi}{2}$.

## Task 3

Draw a circle with radius $\frac{1}{2}$ around each corner (note that each circle touches its two neighboring circles). Now draw a circle around the origin with a radius such that it touches all of the four previously drawn circles. What radius does it have?

$$ \rightarrow r = \sqrt{0.5^2 + 0.5^2} - R = \sqrt{0.5^2 + 0.5^2} - 0.5 = \sqrt{0.25 + 0.25} - 0.5 = \sqrt{0.5} - 0.5 = 0.207106... \approx 0.207 $$

Motivate your claim.

$\rightarrow$ The distance from the center $0$ to one of the corner points of the hypercube can be calculated using the Theorem of Pythagoras; subtracting the corner circles' radius $R = 0.5$ gives the radius of the inner circle.
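The same Pythagorean argument gives the inner radius in an arbitrary dimension $p$ (assuming, as in the task, an edge length of $1$ and corner-sphere radius $R = \frac{1}{2}$), which the following code evaluates numerically:

\begin{equation} r(p) = \sqrt{\sum_{i=1}^{p}\left(\frac{1}{2}\right)^{2}} - \frac{1}{2} = \frac{\sqrt{p}}{2} - \frac{1}{2} \end{equation}

In particular $r(4) = \frac{1}{2}$ and $r(9) = 1$.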
```
def calculate_radius(d, p):
    return np.sqrt(np.sum(np.asarray([d**2 for i in range(p)]))) - 0.5

dimensions = np.asarray(range(1, 21))
result = np.zeros((20,))
for p in dimensions:
    result[p-1] = calculate_radius(0.5, p)
    print('p = ' + str(p) + ', r = ' + str(result[p-1]))
```

$\rightarrow$ At $p = 4$ the inner hypersphere touches the faces of the hypercube ($r(4) = 0.5$), and from the 5th dimension onwards it extends beyond them.

$\rightarrow$ From the 9th dimension onwards the radius of the inner hypersphere is equal to or larger than the edge length of the hypercube ($r(9) = 1$).

## Statistical Decision Making

## Task 4

Answer the following questions. All answers must be justified.

<div style="text-align: center">$\begin{array}{c|ccc} p_{X,Y}(x, y) & Y = 1 & Y = 2 & Y = 3 \\ \hline\hline X = 2 & 0.4 & 0.14 & 0.05 \\ X = 1 & 0.02 & 0.26 & 0.13\end{array}$</div>

+ The numbers in Figure 1 describe the probability of the respective events (e.g. $P(X=1, Y=1) = 0.02$). Is this table a probability table? Justify your answer.

$ \rightarrow $ The individual probabilities of all possible events must be real, non-negative and sum up to 1.

```
# Check that the probabilities sum to 1 and that none is negative.
joint_prob = np.array([[0.4, 0.14, 0.05],
                       [0.02, 0.26, 0.13]])
P_sum = np.sum(joint_prob)
print('Sum of all probabilities: {}'.format(P_sum))

negative = bool(np.any(joint_prob < 0))
print('Is any probability negative?: {}'.format(negative))
```

So the answer is yes, this table is a probability table.

+ By means of Figure 1, provide the conditional expectation $\mathbb{E}_{y|X=2}[Y]$ and the probability of the event $X=1$ under the condition that $Y=3$.

$ \rightarrow $ To calculate $\mathbb{E}_{y|X=2}[Y]$ we simply have to multiply the possible outcomes of $Y$ with their respective joint probabilities, add them up and divide them by the probability that $X$ equals two.
$\mathbb{E}_{y|X=2}[Y] = \frac{\sum\limits_{i=1}^3 i \times P(X=2, Y=i)}{P(X=2)}$

```
y_values = np.array([1, 2, 3])
E_y_x2 = np.dot(joint_prob[0], y_values) / np.sum(joint_prob[0])
print('Conditional Expectation: {:.2f}'.format(E_y_x2))
```

$ \rightarrow $ A similar formula holds for the second probability:

$ P(X=1|Y=3) = \frac{P(X=1,Y=3)}{P(Y=3)} = \frac{P(X=1,Y=3)}{\sum\limits_{i=1}^2 P(X=i,Y=3)} $

```
P_x1_cond_y3 = joint_prob[1, 2] / (joint_prob[0, 2] + joint_prob[1, 2])
print('Conditional Probability: {:.2f}'.format(P_x1_cond_y3))
```

+ Is the function $p(x,y)$ given by

\begin{equation} p(x,y) = \begin{cases} 1 & \text{for } 0 \leq x \leq 1 \text{, } 0\leq y \leq \frac{1}{2} \\0 & \text{otherwise}\end{cases} \end{equation}

a joint density function for two random variables?

$ \rightarrow $ For this function to be a joint density function it must fulfill the property

\begin{equation} \int\limits_{-\infty}^{\infty}\int\limits_{-\infty}^{\infty}p(x,y)\,dx\,dy = 1 \end{equation}

and for our function this condition becomes

\begin{equation} \int\limits_{0}^{\frac{1}{2}}\int\limits_{0}^{1}1\, dx\,dy = 1 \end{equation}

$ \int\limits_{0}^{\frac{1}{2}}\int\limits_{0}^{1}1\, dx\,dy = \int\limits_{0}^{\frac{1}{2}}\left[x\right]_0^1 dy = \int\limits_{0}^{\frac{1}{2}}1\,dy = \left[y\right]_0^{\frac{1}{2}} = \frac{1}{2} \neq 1$

Therefore this function is not a joint density function.

```
# Validate the integral symbolically
from sympy import *

init_printing(use_unicode=True)
x, y = symbols('x y')
integrate(integrate(1, (x, 0, 1)), (y, 0, Rational(1, 2)))
```

+ For two random variables $X$ and $Y$, let the joint density function be given by

\begin{equation} p(x,y) = \begin{cases} 2e^{-(x+y)} & \text{for } 0 \leq x \leq y \text{, } 0\leq y \\0 & \text{otherwise.}\end{cases} \end{equation}

What are the marginal density functions for $X$ and $Y$ respectively?
$ \rightarrow $ The marginal density functions are defined as

$ f_X(x) = \int\limits_{-\infty}^{+\infty}p(x,y)\,dy \qquad f_Y(y) = \int\limits_{-\infty}^{+\infty}p(x,y)\,dx$

Compute $f_X(x)$ first. Note that the density is only non-zero for $y \geq x$, so the inner integral runs from $x$ to $\infty$:

$ f_X(x) = \int\limits_{x}^{+\infty}2e^{-(x+y)}\,dy = \lim\limits_{\alpha \to \infty} \left[-2e^{-(x+y)}\right]_x^{\alpha} = \lim\limits_{\alpha \to \infty} \left(-2e^{-(x + \alpha)} + 2e^{-2x}\right) = 2e^{-2x} \quad \text{for } x\geq0$

Now for $f_Y(y)$, where $x$ ranges from $0$ to $y$:

$ f_Y(y) = \int\limits_0^y 2e^{-(x+y)}\,dx = \left[-2e^{-(x+y)}\right]_0^y = -2e^{-2y} + 2e^{-y} \quad \text{for } y\geq0$

```
# Validate f_X(x); the lower limit is x because the support requires y >= x
fxy = 2*exp(-(x+y))
integrate(fxy, (y, x, oo))

# Validate f_Y(y)
integrate(fxy, (x, 0, y))
```

+ Let the joint density function of two random variables $X$ and $Y$ be given by

\begin{equation} p(x,y) = \begin{cases} \frac{1}{15}(2x+4y) & \text{for } 0 < x < 3 \text{, } 0 < y < 1 \\0 & \text{otherwise.}\end{cases} \end{equation}

Determine the probability for $X \leq 2$ under the condition that $Y = \frac{1}{2}$.

$\rightarrow$ The conditional probability is the integral of the joint density divided by the marginal density of $Y$ at $\frac{1}{2}$. Here $f_Y(\frac{1}{2}) = \int_0^3 \frac{1}{15}(2x+2)\,dx = \frac{1}{15}(9+6) = 1$, so

$P(X \leq 2 \mid Y=\tfrac{1}{2}) = \frac{1}{f_Y(\frac{1}{2})}\int\limits_0^2 p(x,\tfrac{1}{2})\,dx = \int\limits_0^2 \frac{1}{15}(2x+2)\,dx = \left[\frac{1}{15}(x^2 + 2x)\right]_0^2 = \frac{1}{15}(4+4) = \frac{8}{15}$

```
# Validate P(X <= 2 | Y = 1/2)
pxy = Rational(1, 15)*(2*x + 4*y)
prob = integrate(pxy, (x, 0, 2))
prob.subs(y, Rational(1, 2))
```
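As a quick numeric cross-check (plain Python with midpoint Riemann sums; the step count is an arbitrary choice), the marginal $f_Y(\frac{1}{2})$ indeed evaluates to $1$, which is why the bare integral over $x$ already equals the conditional probability $\frac{8}{15} \approx 0.533$:

```
# Midpoint Riemann sums over the joint density p(x, y) = (2x + 4y)/15
# on 0 < x < 3, 0 < y < 1 (zero elsewhere).
def p(x, y):
    return (2 * x + 4 * y) / 15 if (0 < x < 3 and 0 < y < 1) else 0.0

n = 100_000
h = 3 / n
xs = [(i + 0.5) * h for i in range(n)]

f_y_half = sum(p(x, 0.5) for x in xs) * h           # marginal density f_Y(1/2)
joint = sum(p(x, 0.5) for x in xs if x <= 2) * h    # integral of p(x, 1/2) over x <= 2
cond = joint / f_y_half                             # P(X <= 2 | Y = 1/2)

print(f_y_half)  # ~1.0
print(cond)      # ~0.533 = 8/15
```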
# Course 2 week 1 lecture notebook Exercise 03

<a name="combine-features"></a>

## Combine features

In this exercise, you will practice how to combine features in a pandas dataframe. This will help you in the graded assignment at the end of the week.

In addition, you will explore why it makes more sense to multiply two features rather than add them in order to create interaction terms.

First, you will generate some data to work with.

```
# Import pandas
import pandas as pd

# Import a pre-defined function that generates data
from utils import load_data

# Generate features and labels
X, y = load_data(100)
X.head()

feature_names = X.columns
feature_names
```

### Combine strings

Even though you can visually see feature names and type the name of the combined feature, you can programmatically create interaction features so that you can apply this to any dataframe.

Use f-strings to combine two strings. There are other ways to do this, but Python's f-strings are quite useful.

```
name1 = feature_names[0]
name2 = feature_names[1]
print(f"name1: {name1}")
print(f"name2: {name2}")

# Combine the names of two features into a single string, separated by '_&_' for clarity
combined_names = f"{name1}_&_{name2}"
combined_names
```

### Add two columns

- Add the values from two columns and put them into a new column.
- You'll do something similar in this week's assignment.

```
X[combined_names] = X['Age'] + X['Systolic_BP']
X.head(2)
```

### Why we multiply two features instead of adding

Why do you think it makes more sense to multiply two features together rather than adding them together?

Please take a look at two features, and compare what you get when you add them, versus when you multiply them together.
```
# Generate a small dataset with two features
df = pd.DataFrame({'v1': [1, 1, 1, 2, 2, 2, 3, 3, 3],
                   'v2': [100, 200, 300, 100, 200, 300, 100, 200, 300]
                  })

# add the two features together
df['v1 + v2'] = df['v1'] + df['v2']

# multiply the two features together
df['v1 x v2'] = df['v1'] * df['v2']
df
```

It may not be immediately apparent how adding or multiplying makes a difference; either way you get unique values for each of these operations.

To view the data in a more helpful way, rearrange the data (pivot it) so that:

- feature 1 is the row index
- feature 2 is the column name.
- Then set the sum of the two features as the value.

Display the resulting data in a heatmap.

```
# Import seaborn in order to use a heatmap plot
import seaborn as sns

# Pivot the data so that v1 + v2 is the value
df_add = df.pivot(index='v1',
                  columns='v2',
                  values='v1 + v2'
                 )
print("v1 + v2\n")
display(df_add)
print()
sns.heatmap(df_add);
```

Notice that it doesn't seem like you can easily distinguish clearly when you vary feature 1 (which ranges from 1 to 3), since feature 2 is so much larger in magnitude (100 to 300). This is because you added the two features together.

#### View the 'multiply' interaction

Now pivot the data so that:

- feature 1 is the row index
- feature 2 is the column name.
- The values are 'v1 x v2'

Use a heatmap to visualize the table.

```
df_mult = df.pivot(index='v1',
                   columns='v2',
                   values='v1 x v2'
                  )
print('v1 x v2')
display(df_mult)
print()
sns.heatmap(df_mult);
```

Notice how when you multiply the features, the heatmap looks more like a 'grid' shape instead of three vertical bars. This means that you are more clearly able to make a distinction as feature 1 varies from 1 to 2 to 3.

### Discussion

When you find the interaction between two features, you ideally hope to see how varying one feature makes an impact on the interaction term. This is better achieved by multiplying the two features together rather than adding them together.
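A small plain-Python check (independent of the dataframe above) makes the distinction concrete: increasing `v1` by 1 always changes the sum by exactly 1 no matter what `v2` is, whereas the change in the product scales with `v2` -- which is precisely the interaction effect:

```
def marginal_effect(f, v2, dv1=1):
    # Change in f(v1, v2) when v1 increases from 1 to 1 + dv1
    return f(1 + dv1, v2) - f(1, v2)

add = lambda v1, v2: v1 + v2
mul = lambda v1, v2: v1 * v2

add_effects = [marginal_effect(add, v2) for v2 in (100, 200, 300)]
mul_effects = [marginal_effect(mul, v2) for v2 in (100, 200, 300)]
print(add_effects)  # [1, 1, 1] -- constant, no interaction
print(mul_effects)  # [100, 200, 300] -- depends on v2: an interaction
```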
Another way to think of this is that you want to separate the feature space into a "grid", which you can do by multiplying the features together.

In this week's assignment, you will create interaction terms!

### This is the end of this practice section.

Please continue on with the lecture videos!

---
# Path and CrossSection

You can create a `Path` in gdsfactory and extrude it with an arbitrary `CrossSection`.

Let's create a path:

- Create a blank `Path`.
- Append points to the `Path` either using the built-in functions (`arc()`, `straight()`, `euler()` ...) or by providing your own lists of points
- Specify `CrossSection` with layers and offsets.
- Extrude `Path` with a `CrossSection` to create a Component with the path polygons in it.

## Path

The first step is to generate the list of points we want the path to follow. Let's start out by creating a blank `Path` and using the built-in functions to make a few smooth turns.

```
import gdsfactory as gf
import numpy as np
import matplotlib.pyplot as plt

P = gf.Path()
P.append(gf.path.arc(radius=10, angle=90))  # Circular arc
P.append(gf.path.straight(length=10))  # Straight section
P.append(gf.path.euler(radius=3, angle=-90))  # Euler bend (aka "racetrack" curve)
P.append(gf.path.straight(length=40))
P.append(gf.path.arc(radius=8, angle=-45))
P.append(gf.path.straight(length=10))
P.append(gf.path.arc(radius=8, angle=45))
P.append(gf.path.straight(length=10))

f = gf.plot(P)

p2 = P.copy().rotate()
f = gf.plot([P, p2])

P.points - p2.points
```

You can also modify your Path in the same ways as any other gdsfactory object:

- Manipulation with `move()`, `rotate()`, `mirror()`, etc
- Accessing properties like `xmin`, `y`, `center`, `bbox`, etc

```
P.movey(10)
P.xmin = 20
f = gf.plot(P)
```

You can also check the length of the curve with the `length()` method:

```
P.length()
```

## CrossSection

Now that you've got your path defined, the next step is to define the cross-section of the path. To do this, you can create a blank `CrossSection` and add whatever cross-sections you want to it.
You can then combine the `Path` and the `CrossSection` using the `gf.path.extrude()` function to generate a Component:

### Option 1: Single layer and width cross-section

The simplest option is to just set the cross-section to be a constant width by passing a number to `extrude()` like so:

```
# Extrude the Path and the CrossSection
c = gf.path.extrude(P, layer=(1, 0), width=1.5)
c
```

### Option 2: Linearly-varying width

A slightly more advanced version is to make the cross-section width vary linearly from start to finish by passing a 2-element list to `extrude()` like so:

```
# Extrude the Path and the CrossSection
c = gf.path.extrude(P, layer=(1, 0), widths=(1, 3))
c
```

### Option 3: Arbitrary Cross-section

You can also extrude an arbitrary cross_section.

Now, what if we want a more complicated straight? For instance, in some photonic applications it's helpful to have a shallow etch that appears on either side of the straight (often called a trench or sleeve). Additionally, it might be nice to have a Port on either end of the center section so we can snap other geometries to it. Let's try adding something like that in:

```
import gdsfactory as gf

p = gf.path.straight()

# Add a few "sections" to the cross-section
s1 = gf.Section(width=2, offset=2, layer=(2, 0))
s2 = gf.Section(width=2, offset=-2, layer=(2, 0))
x = gf.CrossSection(
    width=1, offset=0, layer=(1, 0), port_names=("in", "out"), sections=[s1, s2]
)

c = gf.path.extrude(p, cross_section=x)
c

p = gf.path.arc()

# Combine the Path and the CrossSection
b = gf.path.extrude(p, cross_section=x)
b
```

## Building Paths quickly

You can pass `append()` lists of path segments. This makes it easy to combine paths very quickly.
Below we show 3 examples using this functionality:

**Example 1:** Assemble a complex path by making a list of Paths and passing it to `append()`

```
P = gf.Path()

# Create the basic Path components
left_turn = gf.path.euler(radius=4, angle=90)
right_turn = gf.path.euler(radius=4, angle=-90)
straight = gf.path.straight(length=10)

# Assemble a complex path by making a list of Paths and passing it to `append()`
P.append(
    [
        straight,
        left_turn,
        straight,
        right_turn,
        straight,
        straight,
        right_turn,
        left_turn,
        straight,
    ]
)

f = gf.plot(P)
```

**Example 2:** Create an "S-turn" just by making a list of `[left_turn, right_turn]`

```
P = gf.Path()

# Create an "S-turn" just by making a list
s_turn = [left_turn, right_turn]
P.append(s_turn)
f = gf.plot(P)
```

**Example 3:** Repeat the S-turn 3 times by nesting our S-turn list in another list

```
P = gf.Path()

# Create an "S-turn" using a list
s_turn = [left_turn, right_turn]

# Repeat the S-turn 3 times by nesting our S-turn list 3x times in another list
triple_s_turn = [s_turn, s_turn, s_turn]
P.append(triple_s_turn)
f = gf.plot(P)
```

Note you can also use the Path() constructor to immediately construct your Path:

```
P = gf.Path([straight, left_turn, straight, right_turn, straight])
f = gf.plot(P)
```

## Waypoint smooth paths

You can also build smooth paths between waypoints with the `smooth()` function

```
points = np.array([(20, 10), (40, 10), (20, 40), (50, 40), (50, 20), (70, 20)])
plt.plot(points[:, 0], points[:, 1], ".-")
plt.axis("equal")

import gdsfactory as gf
import numpy as np
import matplotlib.pyplot as plt

points = np.array([(20, 10), (40, 10), (20, 40), (50, 40), (50, 20), (70, 20)])
P = gf.path.smooth(
    points=points,
    radius=2,
    bend=gf.path.euler,  # Alternatively, use gf.path.arc
    use_eff=False,
)
f = gf.plot(P)
```

## Waypoint sharp paths

It's also possible to make more traditional angular paths (e.g. electrical wires) in a few different ways.
**Example 1:** Using a simple list of points

```
P = gf.Path([(20, 10), (30, 10), (40, 30), (50, 30), (50, 20), (70, 20)])
f = gf.plot(P)
```

**Example 2:** Using the "turn and move" method, where you manipulate the end angle of the Path so that when you append points to it, they're in the correct direction.

*Note: It is crucial that the number of points per straight section is set to 2 (`gf.path.straight(length, npoints=2)`), otherwise the extrusion algorithm will show defects.*

```
P = gf.Path()
P.append(gf.path.straight(length=10, npoints=2))
P.end_angle += 90  # "Turn" 90 deg (left)
P.append(gf.path.straight(length=10, npoints=2))  # "Walk" length of 10
P.end_angle += -135  # "Turn" -135 degrees (right)
P.append(gf.path.straight(length=15, npoints=2))  # "Walk" length of 15
P.end_angle = 0  # Force the direction to be 0 degrees
P.append(gf.path.straight(length=10, npoints=2))  # "Walk" length of 10
f = gf.plot(P)

import gdsfactory as gf

s1 = gf.Section(width=1.5, offset=2.5, layer=(2, 0))
s2 = gf.Section(width=1.5, offset=-2.5, layer=(3, 0))
X = gf.CrossSection(width=1, offset=0, layer=(1, 0), sections=[s1, s2])
component = gf.path.extrude(P, X)
component
```

## Custom curves

Now let's have some fun and try to make a loop-de-loop structure with parallel straights and several Ports.

To create a new type of curve we simply make a function that produces an array of points. The best way to do that is to create a function which allows you to specify a large number of points along that curve -- in the case below, the `looploop()` function outputs 1000 points along a looping path. Later, if we want to reduce the number of points in our geometry we can trivially `simplify` the path.
```
import numpy as np
import gdsfactory as gf


def looploop(num_pts=1000):
    """Simple limacon looping curve"""
    t = np.linspace(-np.pi, 0, num_pts)
    r = 20 + 25 * np.sin(t)
    x = r * np.cos(t)
    y = r * np.sin(t)
    return np.array((x, y)).T


# Create the path points
P = gf.Path()
P.append(gf.path.arc(radius=10, angle=90))
P.append(gf.path.straight())
P.append(gf.path.arc(radius=5, angle=-90))
P.append(looploop(num_pts=1000))
P.rotate(-45)

# Create the crosssection
s1 = gf.Section(width=0.5, offset=2, layer=(2, 0))
s2 = gf.Section(width=0.5, offset=4, layer=(3, 0))
s3 = gf.Section(width=1, offset=0, layer=(4, 0))
X = gf.CrossSection(
    width=1.5, offset=0, layer=(1, 0), port_names=["in", "out"], sections=[s1, s2, s3]
)
c = gf.path.extrude(P, X)
c
```

You can create Paths from any array of points -- just be sure that they form smooth curves! If we examine our path `P` we can see that all we've done is create a long list of points:

```
import numpy as np

path_points = P.points  # Curve points are stored as a numpy array in P.points
print(np.shape(path_points))  # The shape of the array is Nx2
print(len(P))  # Equivalently, use len(P) to see how many points are inside
```

## Simplifying / reducing point usage

One of the chief concerns of generating smooth curves is that too many points are generated, inflating file sizes and making boolean operations computationally expensive. Fortunately, PHIDL has a fast implementation of the [Ramer-Douglas–Peucker algorithm](https://en.wikipedia.org/wiki/Ramer%E2%80%93Douglas%E2%80%93Peucker_algorithm) that lets you reduce the number of points in a curve without changing its shape. All that needs to be done is to specify the `simplify` argument when extruding the path with a cross_section.
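For intuition, the Ramer-Douglas-Peucker idea can be sketched in a few lines of plain Python. This is a simplified recursive version for illustration only, not gdsfactory's actual implementation:

```
import math


def rdp(points, epsilon):
    """Keep the endpoints; recursively keep the point farthest from the
    chord whenever its perpendicular distance exceeds epsilon."""
    (x0, y0), (x1, y1) = points[0], points[-1]
    dx, dy = x1 - x0, y1 - y0
    norm = (dx * dx + dy * dy) ** 0.5 or 1.0  # guard against a zero-length chord
    dmax, imax = 0.0, 0
    for i in range(1, len(points) - 1):
        px, py = points[i]
        # Perpendicular distance from (px, py) to the chord
        d = abs(dx * (py - y0) - dy * (px - x0)) / norm
        if d > dmax:
            dmax, imax = d, i
    if dmax > epsilon:
        # Split at the farthest point and simplify both halves
        return rdp(points[:imax + 1], epsilon)[:-1] + rdp(points[imax:], epsilon)
    return [points[0], points[-1]]


# A quarter circle of radius 10 sampled with 1000 points collapses to far fewer
arc = [(10 * math.cos(t / 999 * math.pi / 2), 10 * math.sin(t / 999 * math.pi / 2))
       for t in range(1000)]
simplified = rdp(arc, 1e-3)
print(len(simplified))  # far fewer than the original 1000 points
```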
If we specify `simplify = 1e-3`, the number of points in the line drops from 12,000 to 4,000, and the remaining points form a line that is identical to within `1e-3` distance from the original (for the default 1 micron unit size, this corresponds to 1 nanometer resolution):

```
# The remaining points form an identical line to within `1e-3` from the original
c = gf.path.extrude(p=P, cross_section=X, simplify=1e-3)
c
```

Let's say we need fewer points. We can increase the simplify tolerance by specifying `simplify = 1e-1`. This drops the number of points to ~400; those points form a line that is identical to within `1e-1` distance from the original:

```
c = gf.path.extrude(P, cross_section=X, simplify=1e-1)
c
```

Taken to absurdity, what happens if we set `simplify = 0.3`? Once again, the ~200 remaining points form a line that is within `0.3` units from the original -- but that line looks pretty bad.

```
c = gf.path.extrude(P, cross_section=X, simplify=0.3)
c
```

## Curvature calculation

The `Path` class has a `curvature()` method that computes the curvature `K` of your smooth path (K = 1/(radius of curvature)). This can be helpful for verifying that your curves transition smoothly such as in [track-transition curves](https://en.wikipedia.org/wiki/Track_transition_curve) (also known as "Euler" bends in the photonics world). Euler bends have lower mode-mismatch loss as explained in [this paper](https://www.osapublishing.org/oe/fulltext.cfm?uri=oe-27-22-31394&id=422321).

Note this curvature is numerically computed, so areas where the curvature jumps instantaneously (such as between an arc and a straight segment) will be slightly interpolated, and sudden changes in point density along the curve can cause discontinuities.
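As a sanity check on what such a numerical curvature means, the curvature at three consecutive samples can be estimated as the inverse radius of their circumscribed circle (Menger curvature); for points lying exactly on a circle of radius R this recovers 1/R. A stdlib-only sketch, illustrative rather than gdsfactory's implementation:

```
import math


def menger_curvature(pts):
    """Signed curvature at each interior point from consecutive triples:
    K = 4 * signed_area(a, b, c) / (|ab| * |bc| * |ca|)."""
    ks = []
    for (ax, ay), (bx, by), (cx, cy) in zip(pts, pts[1:], pts[2:]):
        cross = (bx - ax) * (cy - ay) - (by - ay) * (cx - ax)  # = 2 * signed area
        ab = math.hypot(bx - ax, by - ay)
        bc = math.hypot(cx - bx, cy - by)
        ca = math.hypot(cx - ax, cy - ay)
        ks.append(2.0 * cross / (ab * bc * ca))
    return ks


# Points on a counter-clockwise circle of radius 10: K should be 1/10
circle = [(10 * math.cos(t), 10 * math.sin(t))
          for t in [i * (math.pi / 2) / 199 for i in range(200)]]
ks = menger_curvature(circle)
print(min(ks), max(ks))  # both ~0.1
```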
```
import matplotlib.pyplot as plt
import gdsfactory as gf

straight_points = 100

P = gf.Path()
P.append(
    [
        gf.path.straight(length=10, npoints=straight_points),  # Should have a curvature of 0
        gf.path.euler(radius=3, angle=90, p=0.5, use_eff=False),  # Euler straight-to-bend transition with min. bend radius of 3 (max curvature of 1/3)
        gf.path.straight(length=10, npoints=straight_points),  # Should have a curvature of 0
        gf.path.arc(radius=10, angle=90),  # Should have a curvature of 1/10
        gf.path.arc(radius=5, angle=-90),  # Should have a curvature of -1/5
        gf.path.straight(length=2, npoints=straight_points),  # Should have a curvature of 0
    ]
)
f = gf.plot(P)
```

Arc paths are equivalent to `bend_circular` and euler paths are equivalent to `bend_euler`

```
s, K = P.curvature()
plt.plot(s, K, ".-")
plt.xlabel("Position along curve (arc length)")
plt.ylabel("Curvature")

P = gf.path.euler(radius=3, angle=90, p=1.0, use_eff=False)
P.append(gf.path.euler(radius=3, angle=90, p=0.2, use_eff=False))
P.append(gf.path.euler(radius=3, angle=90, p=0.0, use_eff=False))
gf.plot(P)

s, K = P.curvature()
plt.plot(s, K, ".-")
plt.xlabel("Position along curve (arc length)")
plt.ylabel("Curvature")
```

You can compare two 90 degrees euler bends with one 180 degrees euler bend. A 180 degrees euler bend is shorter, and has less loss, than two 90 degrees euler bends.

```
import matplotlib.pyplot as plt
import gdsfactory as gf

straight_points = 100

P = gf.Path()
P.append(
    [
        gf.path.euler(radius=3, angle=90, p=1, use_eff=False),
        gf.path.euler(radius=3, angle=90, p=1, use_eff=False),
        gf.path.straight(length=6, npoints=100),
        gf.path.euler(radius=3, angle=180, p=1, use_eff=False),
    ]
)
f = gf.plot(P)

s, K = P.curvature()
plt.plot(s, K, ".-")
plt.xlabel("Position along curve (arc length)")
plt.ylabel("Curvature")
```

## Transitioning between cross-sections

Often a critical element of building paths is being able to transition between cross-sections.
You can use the `transition()` function to do exactly this: you simply feed it two `CrossSection`s and it will output a new `CrossSection` that smoothly transitions between the two.

Let's start off by creating two cross-sections we want to transition between. Note we give all the cross-sectional elements names by specifying the `name` argument in each `Section` -- this is important because the transition function will try to match names between the two input cross-sections, and any names not present in both inputs will be skipped.

```
import numpy as np
import gdsfactory as gf

# Create our first CrossSection
s1 = gf.Section(width=2.2, offset=0, layer=(3, 0), name="etch")
s2 = gf.Section(width=1.1, offset=3, layer=(1, 0), name="wg2")
X1 = gf.CrossSection(
    width=1.2,
    offset=0,
    layer=(2, 0),
    name="wg",
    port_names=("o1", "o2"),
    sections=[s1, s2],
)

# Create the second CrossSection that we want to transition to
s1 = gf.Section(width=3.5, offset=0, layer=(3, 0), name="etch")
s2 = gf.Section(width=3, offset=5, layer=(1, 0), name="wg2")
X2 = gf.CrossSection(
    width=1,
    offset=0,
    layer=(2, 0),
    name="wg",
    port_names=("o1", "o2"),
    sections=[s1, s2],
)

# To show the cross-sections, let's create two Paths and
# create Devices by extruding them
P1 = gf.path.straight(length=5)
P2 = gf.path.straight(length=5)
wg1 = gf.path.extrude(P1, X1)
wg2 = gf.path.extrude(P2, X2)

# Place both cross-section Devices and quickplot them
c = gf.Component("demo")
wg1ref = c << wg1
wg2ref = c << wg2
wg2ref.movex(7.5)
c
```

Now let's create the transitional CrossSection by calling `transition()` with these two CrossSections as input. If we want the width to vary as a smooth sinusoid between the sections, we can set `width_type` to `'sine'` (alternatively we could also use `'linear'`).
```
# Create the transitional CrossSection
Xtrans = gf.path.transition(cross_section1=X1, cross_section2=X2, width_type="sine")
# Create a Path for the transitional CrossSection to follow
P3 = gf.path.straight(length=15, npoints=100)
# Use the transitional CrossSection to create a Component
straight_transition = gf.path.extrude(P3, Xtrans)
straight_transition

wg1
wg2
```

Now that we have all of our components, let's `connect()` everything and see what it looks like

```
c = gf.Component("transition_demo")

wg1ref = c << wg1
wgtref = c << straight_transition
wg2ref = c << wg2

wgtref.connect("o1", wg1ref.ports["o2"])
wg2ref.connect("o1", wgtref.ports["o2"])
c
```

Note that since `transition()` outputs a `CrossSection`, we can make the transition follow an arbitrary path:

```
# Transition along a curving Path
P4 = gf.path.euler(radius=25, angle=45, p=0.5, use_eff=False)
wg_trans = gf.path.extrude(P4, Xtrans)

c = gf.Component()
wg1_ref = c << wg1  # First cross-section Component
wg2_ref = c << wg2
wgt_ref = c << wg_trans

wgt_ref.connect("o1", wg1_ref.ports["o2"])
wg2_ref.connect("o1", wgt_ref.ports["o2"])
c
```

## Variable width / offset

In some instances, you may want to vary the width or offset of the path's cross-section as it travels. This can be accomplished by giving the `CrossSection` arguments that are functions or lists. Let's say we wanted a width that varies sinusoidally along the length of the Path. To do this, we need to make a width function that is parameterized from 0 to 1: for an example function `my_width_fun(t)` where the width at `t==0` is the width at the beginning of the Path and the width at `t==1` is the width at the end.
```
def my_custom_width_fun(t):
    # Note: Custom width/offset functions MUST be vectorizable--you must be able
    # to call them with an array input like my_custom_width_fun([0, 0.1, 0.2, 0.3, 0.4])
    num_periods = 5
    return 3 + np.cos(2 * np.pi * t * num_periods)


# Create the Path
P = gf.path.straight(length=40, npoints=30)

# Create two cross-sections: one fixed width, one modulated by my_custom_width_fun
s = gf.Section(width=my_custom_width_fun, offset=0, layer=(1, 0))
X = gf.CrossSection(width=3, offset=-6, layer=(2, 0), sections=[s])

# Extrude the Path to create the Component
c = gf.path.extrude(P, cross_section=X)
c
```

We can do the same thing with the offset argument:

```
def my_custom_offset_fun(t):
    # Note: Custom width/offset functions MUST be vectorizable--you must be able
    # to call them with an array input like my_custom_offset_fun([0, 0.1, 0.2, 0.3, 0.4])
    num_periods = 3
    return 3 + np.cos(2 * np.pi * t * num_periods)


# Create the Path
P = gf.path.straight(length=40, npoints=30)

# Create two cross-sections: one fixed offset, one modulated by my_custom_offset_fun
s = gf.Section(
    width=1, offset=my_custom_offset_fun, layer=(2, 0), port_names=["clad1", "clad2"]
)
X = gf.CrossSection(width=1, offset=0, layer=(1, 0), sections=[s])

# Extrude the Path to create the Component
c = gf.path.extrude(P, cross_section=X)
c
```

## Offsetting a Path

Sometimes it's convenient to start with a simple Path and offset the line it follows to suit your needs (without using a custom-offset CrossSection). Here, we start with two copies of a simple straight Path and use the `offset()` function to directly modify each Path.
``` def my_custom_offset_fun(t): # Note: Custom width/offset functions MUST be vectorizable--you must be able # to call them with an array input like my_custom_offset_fun([0, 0.1, 0.2, 0.3, 0.4]) num_periods = 1 return 2 + np.cos(2 * np.pi * t * num_periods) P1 = gf.path.straight(length=40) P2 = P1.copy() # Make a copy of the Path P1.offset(offset=my_custom_offset_fun) P2.offset(offset=my_custom_offset_fun) P2.mirror((1, 0)) # reflect across X-axis f = gf.plot([P1, P2]) ``` ## Modifying a CrossSection In case you need to modify the CrossSection, it can be done simply by specifying a `name` argument for the cross-sectional element you want to modify later. Here is an example where we name one of thee cross-sectional elements `'myelement1'` and `'myelement2'`: ``` # Create the Path P = gf.path.arc(radius=10, angle=45) # Create two cross-sections: one fixed width, one modulated by my_custom_offset_fun s = gf.Section(width=1, offset=3, layer=(2, 0), name="waveguide") X = gf.CrossSection( width=1, offset=0, layer=(1, 0), port_names=("o1", "o2"), name="heater", sections=[s], ) c = gf.path.extrude(P, X) c ``` In case we want to change any of the CrossSection elements, we simply access the Python dictionary that specifies that element and modify the values ``` import gdsfactory as gf # Create our first CrossSection s1 = gf.Section(width=2.2, offset=0, layer=(3, 0), name="etch") s2 = gf.Section(width=1.1, offset=3, layer=(1, 0), name="wg2") X1 = gf.CrossSection( width=1.2, offset=0, layer=(2, 0), name="wg", port_names=("o1", "o2"), sections=[s1, s2], ) # Create the second CrossSection that we want to transition to s1 = gf.Section(width=3.5, offset=0, layer=(3, 0), name="etch") s2 = gf.Section(width=3, offset=5, layer=(1, 0), name="wg2") X2 = gf.CrossSection( width=1, offset=0, layer=(2, 0), name="wg", port_names=("o1", "o2"), sections=[s1, s2], ) Xtrans = gf.path.transition(cross_section1=X1, cross_section2=X2, width_type="sine") P1 = gf.path.straight(length=5) P2 = 
gf.path.straight(length=5) wg1 = gf.path.extrude(P1, X1) wg2 = gf.path.extrude(P2, X2) P4 = gf.path.euler(radius=25, angle=45, p=0.5, use_eff=False) wg_trans = gf.path.extrude(P4, Xtrans) # WG_trans = P4.extrude(Xtrans) c = gf.Component("demo") wg1_ref = c << wg1 wg2_ref = c << wg2 wgt_ref = c << wg_trans wgt_ref.connect("o1", wg1_ref.ports["o2"]) wg2_ref.connect("o1", wgt_ref.ports["o2"]) c len(c.references) ``` **Note** Any unamed section in the CrossSection won't be transitioned. If you don't add any named sections in a cross-section it will give you an error when making a transition ``` import gdsfactory as gf import numpy as np P = gf.Path() P.append(gf.path.arc(radius=10, angle=90)) # Circular arc P.append(gf.path.straight(length=10)) # Straight section P.append(gf.path.euler(radius=3, angle=-90)) # Euler bend (aka "racetrack" curve) P.append(gf.path.straight(length=40)) P.append(gf.path.arc(radius=8, angle=-45)) P.append(gf.path.straight(length=10)) P.append(gf.path.arc(radius=8, angle=45)) P.append(gf.path.straight(length=10)) f = gf.plot(P) X1 = gf.CrossSection(width=1, offset=0, layer=(2, 0)) c = gf.path.extrude(P, X1) c X2 = gf.CrossSection(width=2, offset=0, layer=(2, 0)) c = gf.path.extrude(P, X2) c ``` For example this will give you an error ``` T = gf.path.transition(X, X2) ``` **Solution** ``` P = gf.path.straight(length=10, npoints=101) s = gf.Section(width=3, offset=0, layer=gf.LAYER.SLAB90) X1 = gf.CrossSection( width=1, offset=0, layer=gf.LAYER.WG, name="core", port_names=("o1", "o2"), sections=[s], ) c = gf.path.extrude(P, X1) c X2 = gf.CrossSection( width=3, offset=0, layer=gf.LAYER.WG, name="core", port_names=("o1", "o2") ) c2 = gf.path.extrude(P, X2) c2 T = gf.path.transition(X1, X2) c3 = gf.path.extrude(P, T) c3 c4 = gf.Component() start_ref = c4 << c trans_ref = c4 << c3 end_ref = c4 << c2 trans_ref.connect("o1", start_ref.ports["o2"]) end_ref.connect("o1", trans_ref.ports["o2"]) c4 ``` ## cross-section You can create functions that return 
a cross_section in two ways: - `gf.partial` can customize an existing cross-section, for example `gf.cross_section.strip` - define a function that returns a cross_section What parameters does `cross_section` take? ``` import gdsfactory as gf from gdsfactory.tech import Section help(gf.cross_section.cross_section) pin = gf.partial( gf.cross_section.strip, layer=(2, 0), sections=( Section(layer=gf.LAYER.P, width=2, offset=+2), Section(layer=gf.LAYER.N, width=2, offset=-2), ), ) c = gf.components.straight(cross_section=pin) c pin5 = gf.components.straight(cross_section=pin, length=5) pin5 ``` Finally, you can also pass most components a Dict that defines the cross-section ``` gf.components.straight( layer=(1, 0), width=0.5, sections=( Section(layer=gf.LAYER.P, width=1, offset=+2), Section(layer=gf.LAYER.N, width=1, offset=-2), ), ) import numpy as np import gdsfactory as gf # Create our first CrossSection s1 = gf.Section(width=0.2, offset=0, layer=(3, 0), name="slab") X1 = gf.CrossSection( width=0.5, offset=0, layer=(1, 0), name="wg", port_names=("o1", "o2"), sections=[s1] ) # Create the second CrossSection that we want to transition to s = gf.Section(width=3.0, offset=0, layer=(3, 0), name="slab") X2 = gf.CrossSection( width=0.5, offset=0, layer=(1, 0), name="wg", port_names=("o1", "o2"), sections=[s] ) # To show the cross-sections, let's create two Paths and # create Devices by extruding them P1 = gf.path.straight(length=5) P2 = gf.path.straight(length=5) wg1 = gf.path.extrude(P1, X1) wg2 = gf.path.extrude(P2, X2) # Place both cross-section Devices and quickplot them c = gf.Component() wg1ref = c << wg1 wg2ref = c << wg2 wg2ref.movex(7.5) # Create the transitional CrossSection Xtrans = gf.path.transition(cross_section1=X1, cross_section2=X2, width_type="linear") # Create a Path for the transitional CrossSection to follow P3 = gf.path.straight(length=15, npoints=100) # Use the transitional CrossSection to create a Component straight_transition = gf.path.extrude(P3, Xtrans)
straight_transition s = gf.export.to_3d(straight_transition, layer_set=gf.layers.LAYER_SET) s.show() ``` ## Waveguides with Shear Faces By default, an extruded path will end in a face orthogonal to the direction of the path. In some cases, it is desired to have a sheared face that tilts at a given angle from this orthogonal baseline. This can be done by supplying the parameters `shear_angle_start` and `shear_angle_end` to the `extrude()` function. ``` import numpy as np import gdsfactory as gf P = gf.path.straight(length=10) s = gf.Section(width=3, offset=0, layer=gf.LAYER.SLAB90) X1 = gf.CrossSection( width=1, offset=0, layer=gf.LAYER.WG, name="core", port_names=("o1", "o2"), sections=[s], ) c = gf.path.extrude(P, X1, shear_angle_start=10, shear_angle_end=45) c ``` By default, the shear angle parameters are `None`, in which case shearing will not be applied to the face. ``` c = gf.path.extrude(P, X1, shear_angle_start=None, shear_angle_end=10) c ``` Shearing should work on paths of arbitrary orientation, as long as their end segments are sufficiently long. ``` angle = 45 P = gf.path.straight(length=10).rotate(angle) c = gf.path.extrude(P, X1, shear_angle_start=angle, shear_angle_end=angle) c ``` For a non-linear path or width profile, the algorithm will intersect the path when sheared inwards and extrapolate linearly going outwards. ``` angle = 15 P = gf.path.euler() c = gf.path.extrude(P, X1, shear_angle_start=angle, shear_angle_end=angle) c ``` The port location, width and orientation remain the same for a sheared component. However, an additional property, `shear_angle`, is set to the value of the shear angle. In general, shear ports can be safely connected together.
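The facet geometry behind these parameters can be reasoned about with a few lines of trigonometry. The helper below is an illustrative sketch, not gdsfactory's internal implementation: for a facet of a given width tilted by a shear angle away from the orthogonal baseline, it returns how far each corner of the facet shifts along the path direction.

```python
import math

def shear_edge_offsets(width: float, shear_angle_deg: float):
    """Longitudinal shift of the two facet corners for a sheared end face.

    Tilting the end facet by shear_angle_deg away from the orthogonal
    baseline moves the two corners by -/+ (width / 2) * tan(angle)
    along the path direction.
    """
    half = 0.5 * width * math.tan(math.radians(shear_angle_deg))
    return -half, half

# A 1 um wide core sheared at 45 degrees: corners shift by roughly -/+ 0.5 um.
lo, hi = shear_edge_offsets(1.0, 45.0)
```

This also illustrates why two components sheared at the same angle mate cleanly: the corner offsets of one facet are the negatives of the other's.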
``` P = gf.path.straight(length=10) P_skinny = gf.path.straight(length=0.5) s = gf.Section(width=3, offset=0, layer=gf.LAYER.SLAB90, name="slab") X1 = gf.CrossSection( width=1, offset=0, layer=gf.LAYER.WG, name="core", port_names=("o1", "o2"), sections=[s], ) c = gf.path.extrude(P, X1, shear_angle_start=45, shear_angle_end=45) c_skinny = gf.path.extrude(P_skinny, X1, shear_angle_start=45, shear_angle_end=45) circuit = gf.Component("shear_sample") c1 = circuit << c c2 = circuit << c_skinny c3 = circuit << c c1.connect(port="o1", destination=c2.ports["o1"]) c3.connect(port="o1", destination=c2.ports["o2"]) print(c1.ports["o1"].to_dict()) print(c3.ports["o2"].to_dict()) circuit ``` ### Transitions with Shear faces You can also create a transition with a shear face ``` import numpy as np import gdsfactory as gf P = gf.path.straight(length=10) s = gf.Section(width=3, offset=0, layer=gf.LAYER.SLAB90, name="slab") X1 = gf.CrossSection( width=1, offset=0, layer=gf.LAYER.WG, name="core", port_names=("o1", "o2"), sections=[s], ) s2 = gf.Section(width=2, offset=0, layer=gf.LAYER.SLAB90, name="slab") X2 = gf.CrossSection( width=0.5, offset=0, layer=gf.LAYER.WG, name="core", port_names=("o1", "o2"), sections=[s2], ) t = gf.path.transition(X1, X2, width_type="linear") c = gf.path.extrude(P, t, shear_angle_start=10, shear_angle_end=45) c ``` This will also work with curves and non-linear width profiles. Keep in mind that points outside the original geometry will be extrapolated linearly. ``` angle = 15 P = gf.path.euler() c = gf.path.extrude(P, t, shear_angle_start=angle, shear_angle_end=angle) c ``` ## bbox_layers vs cladding_layers For extruding waveguides you have two options: 1. bbox_layers for squared bounding box 2. cladding_layers for extruding a layer that follows the shape of the path. 
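The difference between the two options can be sketched in plain Python before looking at the layout calls. These helpers are hypothetical illustrations of the geometry, not the gdsfactory implementation:

```python
def bbox_outline(points, offset):
    """bbox_layers behaviour: one rectangle over the whole path extent,
    padded by `offset` on every side."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    return (min(xs) - offset, min(ys) - offset, max(xs) + offset, max(ys) + offset)

def cladding_half_width(core_width, offset):
    """cladding_layers behaviour: a section that follows the path,
    extending `offset` beyond the core on each side."""
    return core_width / 2 + offset

# An L-shaped path: the bbox covers the whole dogleg as one rectangle,
# while the cladding stays a constant half-width around the core.
box = bbox_outline([(0, 0), (10, 0), (10, 10)], offset=3)
half = cladding_half_width(0.5, offset=3)
```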
``` import gdsfactory as gf xs_bbox = gf.cross_section.cross_section(bbox_layers=[(3, 0)], bbox_offsets=[3]) w1 = gf.components.bend_euler(cross_section=xs_bbox, with_bbox=True) w1 xs_clad = gf.cross_section.cross_section(cladding_layers=[(3, 0)], cladding_offsets=[3]) w2 = gf.components.bend_euler(cross_section=xs_clad) w2 ``` Based on the PHIDL waveguides tutorial.
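The `width_type="linear"` transitions used earlier interpolate the width between the two cross-sections along the path. A minimal sketch of that profile (an illustration under simple assumptions, not gdsfactory's internal code):

```python
def linear_transition_width(w1: float, w2: float, t: float) -> float:
    """Width at normalized position t in [0, 1] along a linear transition."""
    if not 0.0 <= t <= 1.0:
        raise ValueError("t must be in [0, 1]")
    return w1 + (w2 - w1) * t

# Sampling a transition from a 1 um wide core to a 3 um wide core.
profile = [linear_transition_width(1.0, 3.0, i / 4) for i in range(5)]
# profile == [1.0, 1.5, 2.0, 2.5, 3.0]
```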
Docs: https://doc.demarches-simplifiees.fr/pour-aller-plus-loin/graphql and https://demarches-simplifiees-graphql.netlify.app/query.doc.html ``` import requests import json import pandas as pd pd.options.display.max_columns = 500 pd.options.display.max_rows = 500 import configparser config = configparser.ConfigParser() config.read('./secret.ini') arr = [] endCursor = '' hasNextPage = True while(hasNextPage): query = """query{ demarche(number: 30928) { id dossiers(state:accepte after:\""""+endCursor+"""\") { nodes { champs { label stringValue } annotations { label stringValue } number id state demandeur { ... on PersonnePhysique { civilite nom prenom } ... on PersonneMorale { siret codePostal naf } } groupeInstructeur{ label } } pageInfo { hasNextPage endCursor } } } }""" headers = {"Authorization": "Bearer "+config['DS']['bearer']} url = 'https://www.demarches-simplifiees.fr/api/v2/graphql' r = requests.post(url, headers=headers, json={'query': query}) print(r.status_code) data = r.json() hasNextPage = data['data']['demarche']['dossiers']['pageInfo']['hasNextPage'] endCursor = data['data']['demarche']['dossiers']['pageInfo']['endCursor'] for d in data['data']['demarche']['dossiers']['nodes']: mydict = {} mydict['number'] = d['number'] mydict['id'] = d['id'] mydict['state'] = d['state'] mydict['siret'] = d['demandeur']['siret'] mydict['codePostal'] = d['demandeur']['codePostal'] mydict['naf'] = d['demandeur']['naf'] mydict['groupe_instructeur'] = d['groupeInstructeur']['label'] for c in d['champs']: mydict[c['label']] = c['stringValue'] for a in d['annotations']: mydict[a['label']] = a['stringValue'] arr.append(mydict) df = pd.DataFrame(arr) df.shape df.state.value_counts() dffinal = df[['id','siret','naf','state', 'Montant total du prêt demandé', 'Montant proposé','Durée du prêt','Quelle forme prend l\'aide ?', 'codePostal','Quels sont vos effectifs ?', 'groupe_instructeur','Département']] dffinal = dffinal.rename(columns={ 'naf':'code_naf', 'state':'statut',
'Montant total du prêt demandé':'montant_demande', 'Montant proposé':'montant', 'Quelle forme prend l\'aide ?':'type_aide', 'Durée du prêt':'duree', 'Quels sont vos effectifs ?':'effectifs', 'Département':'departement' }) dffinal.effectifs = dffinal.effectifs.astype(float) dffinal[dffinal['effectifs'] < 0] dffinal.montant = dffinal.montant.astype(float) dffinal.montant.sum() dffinal['dep'] = dffinal['departement'].apply(lambda x: x.split(" - ")[0]) import time siret_for_api = dffinal.siret.unique() # For each SIRET, call the entreprise API to see whether we can retrieve information # at the SIRET level arr = [] i = 0 for siret in siret_for_api: # We must not overload the API, so wait between requests (there is a quota not to exceed) # This takes a while... time.sleep(0.3) i = i + 1 if(i%10 == 0): print(str(i)) row = {} url = "https://entreprise.api.gouv.fr/v2/effectifs_annuels_acoss_covid/"+siret[:9]+"?non_diffusables=true&recipient=13001653800014&object=dgefp&context=%22dashboard%20aides%20aux%20entreprises%22&token="+config['API']['token'] try: r = requests.get(url=url) try: mydict = r.json() except: pass if "effectifs_annuels" in mydict: arr.append(mydict) except requests.exceptions.ConnectionError: print("Error : "+siret) dfapi = pd.DataFrame(arr) dfapi = dfapi.drop_duplicates(subset=['siren'], keep='first') dffinal['siren'] = dffinal['siret'].apply(lambda x: str(x)[:9]) dffinal = pd.merge(dffinal,dfapi,on='siren',how='left') dffinal['effectifs_annuels'] = dffinal['effectifs_annuels'].apply(lambda x: int(float(x)) if x == x else x) dffinal.loc[dffinal.effectifs_annuels.isna(),'effectifs_annuels']=dffinal['effectifs'] reg = pd.read_csv("https://raw.githubusercontent.com/etalab/dashboard-aides-entreprises/master/utils/region2019.csv",dtype=str) dep = pd.read_csv("https://raw.githubusercontent.com/etalab/dashboard-aides-entreprises/master/utils/departement2019.csv",dtype=str) dep = dep[['dep','reg']] reg = reg[['reg','libelle']] dep =
pd.merge(dep,reg,on='reg',how='left') dffinal = pd.merge(dffinal,dep,on='dep',how='left') dffinal = dffinal[['montant','effectifs_annuels','reg','libelle','type_aide']] dffinal = dffinal.rename(columns={'effectifs_annuels':'effectifs'}) dffinal.effectifs = dffinal.effectifs.astype(float) dffinal.montant = dffinal.montant.astype(float) arr = [] mydict = {} mydict['type'] = [] mydict2 = {} mydict2['libelle'] = 'Prêt à taux bonifié' mydict2['nombre'] = str(dffinal[dffinal['type_aide'] == 'Prêt à taux bonifié'].shape[0]) mydict2['montant'] = str(dffinal[dffinal['type_aide'] == 'Prêt à taux bonifié'].montant.sum()) mydict2['effectifs'] = str(dffinal[dffinal['type_aide'] == 'Prêt à taux bonifié'].effectifs.sum()) mydict['type'].append(mydict2) mydict2 = {} mydict2['libelle'] = 'Avance remboursable' mydict2['nombre'] = str(dffinal[dffinal['type_aide'] == 'Avance remboursable'].shape[0]) mydict2['montant'] = str(dffinal[dffinal['type_aide'] == 'Avance remboursable'].montant.sum()) mydict2['effectifs'] = str(dffinal[dffinal['type_aide'] == 'Avance remboursable'].effectifs.sum()) mydict['type'].append(mydict2) mydict['nombre'] = str(dffinal.shape[0]) mydict['montant'] = str(dffinal.montant.sum()) mydict['effectifs'] = str(dffinal.effectifs.sum()) arr.append(mydict) arr with open('arpb-maille-national.json', 'w') as outfile: json.dump(arr, outfile) dffinal[dffinal['type_aide'] == 'Prêt à taux bonifié'].effectifs.sum() arr = [] for r in reg.reg.unique(): if(dffinal[dffinal['reg'] == r].shape[0] > 0): mydict = {} mydict['type'] = [] mydict2 = {} mydict2['libelle'] = 'Prêt à taux bonifié' mydict2['nombre'] = str(dffinal[(dffinal['type_aide'] == 'Prêt à taux bonifié') & (dffinal['reg'] == r)].shape[0]) mydict2['montant'] = str(dffinal[(dffinal['type_aide'] == 'Prêt à taux bonifié') & (dffinal['reg'] == r)].montant.sum()) mydict2['effectifs'] = str(dffinal[(dffinal['type_aide'] == 'Prêt à taux bonifié') & (dffinal['reg'] == r)].effectifs.sum()) mydict['type'].append(mydict2) 
mydict2 = {} mydict2['libelle'] = 'Avance remboursable' mydict2['nombre'] = str(dffinal[(dffinal['type_aide'] == 'Avance remboursable') & (dffinal['reg'] == r)].shape[0]) mydict2['montant'] = str(dffinal[(dffinal['type_aide'] == 'Avance remboursable') & (dffinal['reg'] == r)].montant.sum()) mydict2['effectifs'] = str(dffinal[(dffinal['type_aide'] == 'Avance remboursable') & (dffinal['reg'] == r)].effectifs.sum()) mydict['type'].append(mydict2) mydict['nombre'] = str(dffinal[dffinal['reg'] == r].shape[0]) mydict['montant'] = str(dffinal[dffinal['reg'] == r].montant.sum()) mydict['effectifs'] = str(dffinal[dffinal['reg'] == r].effectifs.sum()) else: mydict = {} mydict['type'] = None mydict['nombre'] = None mydict['montant'] = None mydict['effectifs'] = None mydict['reg'] = r mydict['libelle'] = reg[reg['reg'] == r].iloc[0]['libelle'] arr.append(mydict) arr with open('arpb-maille-regional.json', 'w') as outfile: json.dump(arr, outfile) arr = [] with open('arpb-maille-departemental.json', 'w') as outfile: json.dump(arr, outfile) dfgb = dffinal.groupby(['reg','libelle','type_aide'],as_index=False).sum() dfgb.to_csv("prets-directs-etat-regional.csv",index=False) dfgb = dfgb.groupby(['type_aide'],as_index=False).sum() dfgb.to_csv("prets-directs-etat-national.csv",index=False) dfgb ```
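The `while(hasNextPage)` loop at the top of this notebook is standard cursor pagination. The same pattern, isolated from the GraphQL specifics and driven by a fake two-page API (all names here are hypothetical), looks like this:

```python
def fetch_all(fetch_page):
    """Drain a cursor-paginated endpoint: keep requesting pages,
    feeding each response's endCursor into the next request,
    until hasNextPage is False."""
    nodes, cursor, has_next = [], "", True
    while has_next:
        page = fetch_page(cursor)
        nodes.extend(page["nodes"])
        has_next = page["pageInfo"]["hasNextPage"]
        cursor = page["pageInfo"]["endCursor"]
    return nodes

# Fake two-page API standing in for the demarches-simplifiees endpoint.
PAGES = {
    "": {"nodes": [1, 2], "pageInfo": {"hasNextPage": True, "endCursor": "c1"}},
    "c1": {"nodes": [3], "pageInfo": {"hasNextPage": False, "endCursor": "c1end"}},
}
dossiers = fetch_all(lambda cursor: PAGES[cursor])
# dossiers == [1, 2, 3]
```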
``` import pandas as pd import numpy as np import matplotlib.pyplot as plt import json from pandas.io.json import json_normalize games = pd.read_json("datas2.json", lines = True, orient="values") ``` #Champion Static content ``` championsDataJson = pd.read_json("champions.json", lines = True, orient="values") championsDataJson = json.loads(championsDataJson.data.to_json())['0'] ``` ###Adding the remaining champs ``` #Zoe 142 newdata = {'key':'142', 'tags':['Mage'], 'id':'Zoe'} championsDataJson['Zoe'] = newdata #Kayn 141 newdata = {'key':'141', 'tags':['Fighter'], 'id':'Kayn'} championsDataJson['Kayn'] = newdata #Neeko 518 newdata = {'key':'518', 'tags':['Mage'], 'id':'Neeko'} championsDataJson['Neeko'] = newdata #Xayah 498 newdata = {'key':'498', 'tags':['Marksman'], 'id':'Xayah'} championsDataJson['Xayah'] = newdata #Rakan 497 newdata = {'key':'497', 'tags':['Support'], 'id':'Rakan'} championsDataJson['Rakan'] = newdata #Kaisa 145 newdata = {'key':'145', 'tags':['Marksman'], 'id':'Kaisa'} championsDataJson['Kaisa'] = newdata #Sylas 517 newdata = {'key':'517', 'tags':['Mage'], 'id':'Sylas'} championsDataJson['Sylas'] = newdata #Pyke 555 newdata = {'key':'555', 'tags':['Support'], 'id':'Pyke'} championsDataJson['Pyke'] = newdata #Ornn 516 newdata = {'key':'516', 'tags':['Tank'], 'id':'Ornn'} championsDataJson['Ornn'] = newdata championsDataJson['Pyke'] ``` ####EXPORT DATA OF CHAMPIONS COMPLETE ``` with open('championsComplete.json', 'w') as f: json.dump(championsDataJson, f) ``` #Data management ### Removal of unnecessary columns ``` #games = games.drop(['gameCreation'], axis=1) games = games.drop(['gameVersion'], axis=1) games = games.drop(['platformId'], axis=1) games = games.drop(['seasonId'], axis=1) games = games.drop(['queueId'], axis=1) games = games.drop(['mapId'], axis=1) games = games.drop(['participantIdentities'], axis=1) games = games.drop(['gameType'], axis=1) games.head() games.columns ``` ### Normal or ranked games filter ``` games =
games.loc[games['gameMode'] == 'CLASSIC'] #Now that it's filtered, it's unnecessary to keep the gameMode column games = games.drop(['gameMode'], axis=1) ``` ### Inspect the filtered games ``` print(games['participants'][0]) print(games['teams'][0]) for key, value in games['participants'][0][0].items() : print (key, value) for key, value in games['participants'][0][0]['stats'].items() : print (key, value) print(games['participants'][9]) len(games) ``` ### Check data ``` games.columns for teams in games['teams']: for team in teams: print(team['firstTower']) for teams in games['teams']: for team in teams: print(team['firstTower']) with open('game.json', 'w') as f: f.write(games.to_json(orient='records', lines=True)) ``` ##GET MOST PLAYED CHAMP ``` champs = {} games.participants[0][0] games['champions'] = games.participants.apply(lambda game: np.array([player['championId'] for player in game])) games.champions for x in range(0, max(games['champions'].apply(lambda x: max(x)))+1): champs[x] = 0 for champ in np.concatenate(games['champions'].values): champs[champ] += 1 champs maximito = max(champs.values()) maximito keyMostPlayedChamp = "0" for keyita, valuito in champs.items(): # for name, age in dictionary.iteritems(): (for Python 2.x) if valuito == maximito: print(str(keyita)) keyMostPlayedChamp = str(keyita) champ = "" for key, value in championsDataJson.items(): if(value['key'] == keyMostPlayedChamp): print(key + " " + value['key']) ```
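The manual dictionary tally above can be written more compactly with `collections.Counter`. This sketch assumes each game has already been reduced to a plain list of champion ids:

```python
from collections import Counter

def most_played(games):
    """games: iterable of per-match champion-id lists.
    Returns (champion_id, play_count) for the most picked champion."""
    counts = Counter(champ for game in games for champ in game)
    return counts.most_common(1)[0]

champ_id, n_games = most_played([[142, 141], [142, 518], [142]])
# champ_id == 142 (Zoe), n_games == 3
```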
``` %matplotlib inline import pandas as pd import matplotlib.pyplot as plt import numpy as np from scipy.sparse import hstack from sklearn.neural_network import MLPClassifier ``` # Train set ``` df_train = pd.read_csv('train.csv.zip') df_train.head() print('Train set size: {}'.format(len(df_train))) q1_train_empty = df_train['question1'].isnull().values q2_train_empty = df_train['question2'].isnull().values if q1_train_empty.any(): print('Indices of data points where `question1` is empty:') print(np.where(q1_train_empty == True)) if q2_train_empty.any(): print('Indices of data points where `question2` is empty:') print(np.where(q2_train_empty == True)) ``` # Test set ``` df_test = pd.read_csv('test.csv.zip') df_test.head() print('Test set size: {}'.format(len(df_test))) q1_test_empty = df_test['question1'].isnull().values q2_test_empty = df_test['question2'].isnull().values if q1_test_empty.any(): print('Indices of data points where `question1` is empty:') print(np.where(q1_test_empty == True)) if q2_test_empty.any(): print('Indices of data points where `question2` is empty:') print(np.where(q2_test_empty == True)) ``` # Features ``` from nltk.corpus import stopwords # Why cannot I use this function?
def words(row, qid): return str(row['question{}'.format(qid)]).lower().split() def word_count_difference(row): length1 = len(str(row['question1']).lower().split()) length2 = len(str(row['question2']).lower().split()) return abs(length1 - length2) / max(length1, length2) plt.figure(figsize = (15, 5)) train_word_match = df_train.apply(word_count_difference, axis = 1, raw = True) plt.hist(train_word_match[df_train['is_duplicate'] == 0], bins = 20, density = True, label = 'Not Duplicate') plt.hist(train_word_match[df_train['is_duplicate'] == 1], bins = 20, density = True, alpha = 0.7, label = 'Duplicate') plt.legend() plt.title('Label distribution over word_count_difference', fontsize = 15) plt.xlabel('word_count_difference', fontsize = 15) stops = set(stopwords.words('english')) def word_match_share(row): q1words = {} q2words = {} for word in str(row['question1']).lower().split(): if word not in stops: q1words[word] = 1 for word in str(row['question2']).lower().split(): if word not in stops: q2words[word] = 1 if len(q1words) == 0 or len(q2words) == 0: # The computer-generated chaff includes a few questions that are nothing but stopwords return 0 shared_words_in_q1 = [w for w in q1words.keys() if w in q2words] shared_words_in_q2 = [w for w in q2words.keys() if w in q1words] R = (len(shared_words_in_q1) + len(shared_words_in_q2)) / (len(q1words) + len(q2words)) return R plt.figure(figsize = (15, 5)) train_word_match = df_train.apply(word_match_share, axis = 1, raw = True) plt.hist(train_word_match[df_train['is_duplicate'] == 0], bins = 20, density = True, label = 'Not Duplicate') plt.hist(train_word_match[df_train['is_duplicate'] == 1], bins = 20, density = True, alpha = 0.7, label = 'Duplicate') plt.legend() plt.title('Label distribution over word_match_share', fontsize = 15) plt.xlabel('word_match_share', fontsize = 15) ``` ## sklearn.feature_extraction.text ``` from sklearn.feature_extraction.text import TfidfVectorizer from sklearn.feature_extraction.text import
HashingVectorizer from sklearn.feature_extraction.text import TfidfTransformer # drop rows with missing questions df_train.dropna(inplace=True) vectorizer = HashingVectorizer(stop_words = 'english', alternate_sign = False, n_features = 10) X1_train = vectorizer.transform(df_train['question1']) X2_train = vectorizer.transform(df_train['question2']) y_train = df_train['is_duplicate'] X_train = hstack([X1_train, X2_train]) clf = MLPClassifier(solver='lbfgs', alpha=1e-5, hidden_layer_sizes=(5, 2), random_state=1) clf.fit(X_train, y_train) q1_null_indices = np.where(df_test['question1'].isnull().values == True) q2_null_indices = np.where(df_test['question2'].isnull().values == True) dummy = 'xxxxxxxxxxxxx' for i in q1_null_indices: df_test.loc[i, 'question1'] = dummy for i in q2_null_indices: df_test.loc[i, 'question2'] = dummy X1_test = vectorizer.transform(df_test['question1']) X2_test = vectorizer.transform(df_test['question2']) X_test = hstack([X1_test, X2_test]) y_test = clf.predict(X_test) print(y_test.size) for i in q1_null_indices: y_test[i] = 0 for i in q2_null_indices: y_test[i] = 0 sub = pd.DataFrame() sub['test_id'] = df_test['test_id'] sub['is_duplicate'] = y_test sub.to_csv('hash-mlp.csv', index=False) ```
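The logic of `word_match_share` can be restated with sets: the score is twice the number of shared non-stopwords divided by the total non-stopword count of both questions. A self-contained sketch using a toy stopword list (the notebook itself uses NLTK's English list):

```python
STOPS = {"what", "is", "the", "of", "a"}  # toy stand-in for NLTK stopwords

def word_match_share(q1: str, q2: str) -> float:
    w1 = {w for w in q1.lower().split() if w not in STOPS}
    w2 = {w for w in q2.lower().split() if w not in STOPS}
    if not w1 or not w2:  # questions made entirely of stopwords score 0
        return 0.0
    return 2 * len(w1 & w2) / (len(w1) + len(w2))

score = word_match_share(
    "What is the capital of France", "What is the capital of Spain"
)
# score == 0.5: "capital" is shared, "france"/"spain" are not.
```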
# Import Modules ``` import os print(os.getcwd()) import sys import json import pickle import pandas as pd # ######################################################### from methods import get_df_dft ``` # Read Data ## Read bulk_ids of octahedral and unique polymorphs ``` # ######################################################## data_path = os.path.join( os.environ["PROJ_irox_oer"], "workflow/creating_slabs/selecting_bulks", "out_data/data.json") with open(data_path, "r") as fle: data = json.load(fle) # ######################################################## bulk_ids__octa_unique = data["bulk_ids__octa_unique"] df_dft = get_df_dft() df_dft_i = df_dft[df_dft.index.isin(bulk_ids__octa_unique)] # df_dft_i.sort_values("num_atoms", ascending=False).iloc[0:15] # df_dft_i.sort_values? ``` ``` # [print(i) for i in df_dft_i.index.tolist()] directory = "out_data/all_bulks" if not os.path.exists(directory): os.makedirs(directory) directory = "out_data/layered_bulks" if not os.path.exists(directory): os.makedirs(directory) for i_cnt, (bulk_id_i, row_i) in enumerate(df_dft_i.iterrows()): i_cnt_str = str(i_cnt).zfill(3) atoms_i = row_i.atoms atoms_i.write("out_data/all_bulks/" + i_cnt_str + "_" + bulk_id_i + ".cif") ``` # Reading `bulk_manual_classification.csv` ``` # df_bulk_class = pd.read_csv("./bulk_manual_classification.csv") from methods import get_df_bulk_manual_class df_bulk_class = get_df_bulk_manual_class() df_bulk_class.head() print("Total number of bulks being considered:", df_bulk_class.shape[0]) df_bulk_class_layered = df_bulk_class[df_bulk_class.layered == True] print("Number of layered structures", df_bulk_class[df_bulk_class.layered == True].shape) for i_cnt, (bulk_id_i, row_i) in enumerate(df_bulk_class_layered.iterrows()): i_cnt_str = str(i_cnt).zfill(3) # ##################################################### row_dft_i = df_dft.loc[bulk_id_i] # ##################################################### atoms_i = row_dft_i.atoms # 
##################################################### atoms_i.write("out_data/layered_bulks/" + i_cnt_str + "_" + bulk_id_i + ".cif") ``` ``` # df_bulk_class = df_bulk_class.fillna(value=False) # df[['a', 'b']] = df[['a','b']].fillna(value=0) # df_bulk_class.fillna? # def read_df_bulk_manual_class(): # """ # """ # # ################################################# # path_i = os.path.join( # os.environ["PROJ_irox_oer"], # "workflow/process_bulk_dft/manually_classify_bulks", # "bulk_manual_classification.csv") # df_bulk_class = pd.read_csv(path_i) # # df_bulk_class = pd.read_csv("./bulk_manual_classification.csv") # # ################################################# # # Filling empty spots of layerd column with False (if not True) # df_bulk_class[["layered"]] = df_bulk_class[["layered"]].fillna(value=False) # # Setting index # df_bulk_class = df_bulk_class.set_index("bulk_id", drop=False) # return(df_bulk_class) ```
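The `str(i).zfill(3)` counter prefix used for the `.cif` files keeps the directory listing sorted in write order. A minimal sketch of the naming scheme (the bulk id below is a made-up placeholder):

```python
def cif_filename(index: int, bulk_id: str, width: int = 3) -> str:
    """Zero-padded, lexicographically sortable file name, e.g. '007_abc123.cif'."""
    return str(index).zfill(width) + "_" + bulk_id + ".cif"

name = cif_filename(7, "abc123")
# name == "007_abc123.cif"
```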