# Optical Data Reduction using Python

by Steve Crawford (South African Astronomical Observatory)

```
%matplotlib inline
%load_ext autoreload
%autoreload 2
```

In addition to instrument-specific Python pipelines, there now exists a suite of tools for the general reduction of optical observations. This includes *ccdproc*, an astropy-affiliated package for basic CCD reductions. The package is useful for everything from student tutorials on CCD reduction to building science-quality reduction pipelines for observatories. In addition, we also present *specreduce*, a Python package for reducing optical spectroscopy. The package includes an interactive graphical user interface for line identification as well as tools for extracting spectra. With this set of tools, pipelines can be built for instruments in a relatively short time. While nearly complete, further improvements and enhancements are still needed and contributions are welcome.

# Introduction

For our purposes, we will use the [same data](http://iraf.noao.edu/iraf/ftp/iraf/misc/) as the [IRAF tutorials](http://iraf.noao.edu/tutorials/). However, we will show how to reduce the data in the same way using only current general-purpose Python tools.

```
import numpy as np
from matplotlib import pyplot as plt

from astropy import units as u
from astropy.io import fits
from astropy import modeling as mod
```

## CCD Reduction

```
import ccdproc
from ccdproc import CCDData, ImageFileCollection
```

[*ccdproc*](https://github.com/astropy/ccdproc) is an astropy-affiliated package for handling CCD data reductions. The code contains all the necessary content to produce pipelines for basic CCD reductions. Here we use *ccdproc* to reduce a spectroscopic data set. The *ImageFileCollection* class in *ccdproc* is useful for reading and sorting the FITS files in a directory.
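The bias frames below are combined with `clip_extrema`, which rejects the lowest and highest value at each pixel before averaging. As a rough illustration of that idea (a pure-numpy sketch, not ccdproc's actual implementation; the frame values are made up):

```python
import numpy as np

def combine_clip_extrema(frames, nlow=1, nhigh=1):
    """Average a stack of frames, dropping the nlow lowest and
    nhigh highest values at each pixel before averaging."""
    stack = np.sort(np.stack(frames), axis=0)    # sort pixel-wise along the stack axis
    kept = stack[nlow:stack.shape[0] - nhigh]    # discard the extremes
    return kept.mean(axis=0)

# three tiny fake "bias frames"; one has a hot pixel at (0, 0)
frames = [np.full((2, 2), 100.0),
          np.full((2, 2), 102.0),
          np.array([[500.0, 101.0], [101.0, 101.0]])]
master = combine_clip_extrema(frames, nlow=1, nhigh=1)
print(master)  # the 500.0 outlier no longer dominates the average
```

This is why extrema clipping is a good default for small stacks of bias frames: a single cosmic-ray hit or hot pixel cannot pull the combined value away from the true bias level.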
```
from ccdproc import ImageFileCollection

image_dir = 'exercises/spec/'
ic = ImageFileCollection('exercises/spec/')  # read in all FITS files in the directory
ic.summary
```

```
# create the master bias frame

# function used for fitting bias frames
cheb_1 = mod.models.Chebyshev1D(1)

# create a list of bias frames
bias_list = []
for hdu, fname in ic.hdus(return_fname=True, object='biases 1st afternoon'):
    ccd = CCDData.read(image_dir + fname, unit='adu')
    ccd = ccdproc.subtract_overscan(ccd, fits_section='[105:130,1:1024]')
    ccd = ccdproc.trim_image(ccd, fits_section="[34:74,1:1022]")
    bias_list.append(ccd)

# combine into a master bias frame
master_bias = ccdproc.combine(bias_list, method='average', clip_extrema=True,
                              nlow=1, nhigh=1)

# process the flat data
# In this case we use ccdproc.ccd_process instead of each individual step

# create a list of flat frames
flat_list = []
for hdu, fname in ic.hdus(return_fname=True, object='flats-6707'):
    ccd = CCDData.read(image_dir + fname, unit='adu')
    ccd = ccdproc.ccd_process(ccd, oscan='[105:130,1:1024]', oscan_model=cheb_1,
                              trim="[34:74,1:1022]", master_bias=master_bias)
    flat_list.append(ccd)

# combine into a single master flat
master_flat = ccdproc.combine(flat_list, method='average', sigma_clip=True,
                              low_thresh=3, high_thresh=3)

# process the sky flat data

# create a list of sky flat frames
skyflat_list = []
exp_list = []
for fname in ['sp0011.fits', 'sp0012.fits', 'sp0013.fits']:
    ccd = CCDData.read(image_dir + fname, unit='adu')
    exp_list.append(ccd.header['EXPTIME'])
    ccd = ccdproc.ccd_process(ccd, oscan='[105:130,1:1024]', oscan_model=cheb_1,
                              trim="[34:74,1:1022]", master_bias=master_bias)
    skyflat_list.append(ccd)

# combine into a single master sky flat
master_sky = ccdproc.combine(skyflat_list, method='average', scale=np.median,
                             weights=np.array(exp_list), sigma_clip=True,
                             low_thresh=3, high_thresh=3)

# correct for the response function
cheb_5 = mod.models.Chebyshev1D(5)
fitter = mod.fitting.LinearLSQFitter()
fy = master_flat.data.sum(axis=1) / master_flat.shape[1]
yarr = np.arange(len(fy))
resp_func = fitter(cheb_5, yarr, fy)
response = master_flat.divide(resp_func(yarr).reshape((len(yarr), 1)) * u.dimensionless_unscaled)

# correct for the illumination
sky = master_sky.divide(resp_func(yarr).reshape((len(yarr), 1)) * u.dimensionless_unscaled)
cheb_22 = mod.models.Chebyshev2D(2, 2)
yarr, xarr = np.indices(sky.data.shape)
illum = fitter(cheb_22, xarr, yarr, sky.data)  # add fitting with rejection
# todo: update to fit set regions
sky.data = illum(xarr, yarr)
sky = sky.divide(sky.data.mean())
super_flat = sky.multiply(response)

img_list = []
for fname in ['sp0018.fits', 'sp0020.fits', 'sp0021.fits', 'sp0022.fits',
              'sp0023.fits', 'sp0024.fits', 'sp0025.fits', 'sp0027.fits']:
    ccd = CCDData.read(image_dir + fname, unit='adu')
    hdr = ccd.header
    ccd = ccdproc.ccd_process(ccd, oscan='[105:130,1:1024]', oscan_model=cheb_1,
                              trim="[34:74,1:1022]", master_bias=master_bias,
                              master_flat=super_flat)
    # add cosmic ray cleaning
    ccd = ccdproc.cosmicray_lacosmic(ccd, sigclip=3., sigfrac=0.3,
                                     gain=hdr['GAIN'], readnoise=hdr['RDNOISE'])
    img_list.append(ccd)
    ccd.write('p' + fname, overwrite=True)
```

## Spectroscopic Reductions

```
from specreduce.interidentify import InterIdentify
from specreduce import spectools as st
from specreduce import WavelengthSolution
```

[*specreduce*](https://github.com/crawfordsm/specreduce) is a package for handling the wavelength calibration of spectroscopic data. The code contains all the necessary content to identify arc lines and to rectify spectra. It can be used for longslit, multi-object, or echelle spectrographs.
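At its core, a wavelength solution like the one *InterIdentify* produces below is a low-order fit mapping pixel position to wavelength. A minimal numpy-only sketch of that mapping, using hypothetical arc-line pixel centroids and wavelengths (the numbers are made up for illustration):

```python
import numpy as np
from numpy.polynomial import chebyshev as cheb

# hypothetical identified arc lines: pixel centroids and known wavelengths (Angstroms)
pixels = np.array([50.0, 220.0, 410.0, 633.0, 801.0, 990.0])
wavelengths = np.array([6563.1, 6598.9, 6640.0, 6688.2, 6724.6, 6765.3])

# fit a 3rd-order Chebyshev polynomial, pixel -> wavelength
coeffs = cheb.chebfit(pixels, wavelengths, deg=3)

# evaluate the solution over the full chip to build a wavelength array
xarr = np.arange(1024)
warr = cheb.chebval(xarr, coeffs)

# the fit should reproduce the identified lines closely
resid = wavelengths - cheb.chebval(pixels, coeffs)
print(resid)
```

A real pipeline would iterate this fit with sigma rejection of misidentified lines, which is what the `sigma` and `niter` arguments to *InterIdentify* control.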
```
# read in the line lists -- if line ratios are available, it is easier to find
# an automatic solution
slines = np.loadtxt('thorium.dat')
sfluxes = np.ones_like(slines)

# set up the data and correct for its orientation
arc1 = img_list[0]
data = arc1.data.T
data = data[:, ::-1]
xarr = np.arange(data.shape[1])
istart = int(data.shape[0] / 2.0)

# initial guess for the wavelength solution
ws_init = mod.models.Chebyshev1D(3)
ws_init.domain = [xarr.min(), xarr.max()]
ws = WavelengthSolution.WavelengthSolution(xarr, xarr, ws_init)

iws = InterIdentify(xarr, data, slines, sfluxes, ws, mdiff=20, rstep=5,
                    function='poly', order=3, sigma=3, niter=5, wdiff=0.5,
                    res=0.2, dres=0.05, dc=3, ndstep=50, istart=istart,
                    method='Zeropoint', smooth=0, filename=None, subback=0,
                    textcolor='black', log=None, verbose=True)

# correct for curvature in the arc
ws_init = mod.models.Chebyshev1D(2)
ws_init.domain = [xarr.min(), xarr.max()]
ws = WavelengthSolution.WavelengthSolution(xarr, xarr, ws_init)
aws = st.arc_straighten(data, istart, ws, rstep=1)

# create the wavelength map and apply the wavelength solution to all data
wave_map = st.wave_map(data, aws)
k = 20.5
ws = iws[k]
for i in range(data.shape[0]):
    wave_map[i, :] = iws[k](wave_map[i, :])

# extract the data
obj_data = img_list[3].data
obj_data = obj_data.T
obj_data = obj_data[:, ::-1]

plt.imshow(obj_data, aspect='auto')
ax = plt.gca()
xt = ax.get_xticks()
ax.set_xticklabels([int(x) for x in ws(xt)])
plt.xlabel(r'Wavelength ($\AA$)', size='x-large')
ax.set_yticklabels([])
plt.savefig('spec2d.pdf')
plt.show()

# sum the spectra between two rows
# The spectra could be traced for better results,
# or extracted using more optimal methods
warr = ws(xarr)
flux = np.zeros_like(warr)
for i in range(18, 25):
    f = np.interp(warr, wave_map[i], obj_data[i])
    flux += f

sky = np.zeros_like(warr)
for i in range(25, 32):
    f = np.interp(warr, wave_map[i], obj_data[i])
    sky += f

plt.plot(warr, flux - sky)
plt.xlabel(r'Wavelength ($\AA$)', size='x-large')
plt.ylabel('counts')
plt.savefig('spec1d.pdf')
plt.show()

#import pickle
#pickle.dump(iws, open('iws.db', 'wb'))
#iws = pickle.load(open('iws.db', 'rb'))
```
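The extraction loop above resamples each spatial row onto a common wavelength grid with `np.interp` before summing, which compensates for the row-to-row wavelength shifts stored in the wavelength map. The same pattern, demonstrated on synthetic data (a fake Gaussian emission line with a small per-row wavelength shift):

```python
import numpy as np

warr = np.linspace(6500.0, 6600.0, 200)  # common wavelength grid

def line(w):
    """Synthetic emission line: Gaussian at 6550 A, sigma = 2 A, peak = 100."""
    return 100.0 * np.exp(-0.5 * ((w - 6550.0) / 2.0) ** 2)

flux = np.zeros_like(warr)
for i in range(5):
    row_wave = warr + 0.3 * i                    # per-row wavelength map (curved arc)
    row_data = line(row_wave)                    # what the detector recorded in that row
    flux += np.interp(warr, row_wave, row_data)  # resample to the common grid, then sum

peak_wave = warr[np.argmax(flux)]
print(peak_wave)  # close to the true line centre of 6550 Angstroms
```

Without the per-row resampling, the summed line would be artificially broadened by the 0.3 Å shift between rows.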
------

# **Dementia Patients -- Analysis and Prediction**

### ***Author: Akhilesh Vyas***

### ***Date: August, 2019***

# ***Result Plots***

- <a href='#00'>0. Setup</a>
  - <a href='#00.1'>0.1. Load libraries</a>
  - <a href='#00.2'>0.2. Define paths</a>
- <a href='#01'>1. Data Preparation</a>
  - <a href='#01.1'>1.1. Read Data</a>
  - <a href='#01.2'>1.2. Prepare data</a>
  - <a href='#01.3'>1.3. Prepare target</a>
  - <a href='#01.4'>1.4. Removing Unwanted Features</a>
- <a href='#02'>2. Data Analysis</a>
  - <a href='#02.1'>2.1. Feature</a>
  - <a href='#02.2'>2.2. Target</a>
- <a href='#03'>3. Data Preparation and Vector Transformation</a>
- <a href='#04'>4. Analysis and Imputing Missing Values</a>
- <a href='#05'>5. Feature Analysis</a>
  - <a href='#05.1'>5.1. Correlation Matrix</a>
  - <a href='#05.2'>5.2. Feature and target</a>
  - <a href='#05.3'>5.3. Feature Selection Models</a>
- <a href='#06'>6. Machine Learning - Classification Model</a>

# <a id='00'>0. Setup</a>

# <a id='00.1'>0.1 Load libraries</a>

```
import sys
sys.path.insert(1, '../preprocessing/')

import numpy as np
import pickle
import scipy.stats as spstats
import matplotlib.pyplot as plt
import seaborn as sns
import pandas_profiling
from sklearn.datasets.base import Bunch
#from data_transformation_cls import FeatureTransform
from ast import literal_eval

import plotly.figure_factory as ff
import plotly.offline as py
import plotly.graph_objects as go

import pandas as pd
pd.set_option('display.max_columns', None)
pd.set_option('display.max_rows', None)
pd.set_option('display.max_colwidth', -1)

from ordered_set import OrderedSet

%matplotlib inline
```

# <a id='00.2'>0.2 Define paths</a>

```
# data paths
data_path = '../../../datalcdem/data/optima/dementia_18July/data_notasked/'
result_path = '../../../datalcdem/data/optima/dementia_18July/data_notasked/results/'
optima_path = '../../../datalcdem/data/optima/optima_excel/'
```

# <a id='01'>1. Data Preparation</a>

## <a id='01.1'>1.1. Read Data</a>

```
# Prepare features from the raw data

# Patient comorbidities data
'''patient_com_raw_df = pd.read_csv(data_path + 'optima_patients_comorbidities.csv')\
    .groupby(by=['patient_id', 'EPISODE_DATE'], as_index=False)\
    .agg(lambda x: x.tolist())[['patient_id', 'EPISODE_DATE', 'Comorbidity_cui']]
display(patient_com_raw_df.head(5))
patient_com_raw_df['EPISODE_DATE'] = pd.to_datetime(patient_com_raw_df['EPISODE_DATE'])

# Patient treatment data
patient_treat_raw_df = pd.read_csv(data_path + 'optima_patients_treatments.csv')\
    .groupby(by=['patient_id', 'EPISODE_DATE'], as_index=False)\
    .agg(lambda x: x.tolist())[['patient_id', 'EPISODE_DATE', 'Medication_cui']]
display(patient_treat_raw_df.head(5))
patient_treat_raw_df['EPISODE_DATE'] = pd.to_datetime(patient_treat_raw_df['EPISODE_DATE'])

# Join patient treatment and comorbidities data
patient_com_treat_raw_df = pd.merge(patient_com_raw_df, patient_treat_raw_df,
                                    on=['patient_id', 'EPISODE_DATE'], how='outer')
patient_com_treat_raw_df.sort_values(by=['patient_id', 'EPISODE_DATE'], axis=0,
                                     inplace=True, ascending=True)
patient_com_treat_raw_df.reset_index(drop=True, inplace=True)
patient_com_treat_raw_df.head(5)

# Save the data
patient_com_treat_raw_df.to_csv(data_path + 'patient_com_treat_episode_df.csv', index=False)'''

# Extract selected features from the raw data
def rename_columns(col_list):
    d = {}
    for i in col_list:
        if i == 'GLOBAL_PATIENT_DB_ID':
            d[i] = 'patient_id'
        elif 'CAMDEX SCORES: ' in i:
            d[i] = i.replace('CAMDEX SCORES: ', '').replace(' ', '_')
        elif 'CAMDEX ADMINISTRATION 1-12: ' in i:
            d[i] = i.replace('CAMDEX ADMINISTRATION 1-12: ', '').replace(' ', '_')
        elif 'DIAGNOSIS 334-351: ' in i:
            d[i] = i.replace('DIAGNOSIS 334-351: ', '').replace(' ', '_')
        elif 'OPTIMA DIAGNOSES V 2010: ' in i:
            d[i] = i.replace('OPTIMA DIAGNOSES V 2010: ', '').replace(' ', '_')
        elif 'PM INFORMATION: ' in i:
            d[i] = i.replace('PM INFORMATION: ', '').replace(' ', '_')
        else:
            d[i] = i.replace(' ', '_')
    return d

sel_col_df = pd.read_excel(data_path + 'Variable_Guide_Highlighted_Fields_.xlsx')
display(sel_col_df.head(5))
sel_cols = [i + j.replace('+', ':') for i, j in zip(sel_col_df['Sub Category '].tolist(),
                                                   sel_col_df['Variable Label'].tolist())]
rem_cols = ['OPTIMA DIAGNOSES V 2010: OTHER SYSTEMIC ILLNESS: COMMENT']  # missing column in the dataset
sel_cols = sorted(list(set(sel_cols) - set(rem_cols)))
print(sel_cols)
columns_selected = list(OrderedSet(['GLOBAL_PATIENT_DB_ID', 'EPISODE_DATE'] + sel_cols))

df_datarequest = pd.read_excel(optima_path + 'Data_Request_Jan_2019_final.xlsx')
display(df_datarequest.head(1))
df_datarequest_features = df_datarequest[columns_selected]
display(df_datarequest_features.columns)
columns_renamed = rename_columns(df_datarequest_features.columns.tolist())
df_datarequest_features.rename(columns=columns_renamed, inplace=True)
display(df_datarequest_features.head(5))

# df_datarequest_features.drop(columns=['Age_At_Episode', 'PETERSEN_MCI_TYPE'], inplace=True)
display(df_datarequest_features.head(5))

# drop rows having an out-of-range MMSE value
#df_datarequest_features = df_datarequest_features[(df_datarequest_features['MINI_MENTAL_SCORE']<=30)
#                                                  & (df_datarequest_features['MINI_MENTAL_SCORE']>=0)]

# Merge patient treatment, comorbidities and selected features from the raw data
#patient_com_treat_raw_df['EPISODE_DATE'] = pd.to_datetime(patient_com_treat_raw_df['EPISODE_DATE'])
#patient_com_treat_fea_raw_df = pd.merge(patient_com_treat_raw_df, df_datarequest_features,
#                                        on=['patient_id', 'EPISODE_DATE'], how='left')
#patient_com_treat_fea_raw_df.sort_values(by=['patient_id', 'EPISODE_DATE'], axis=0, inplace=True, ascending=True)
#patient_com_treat_fea_raw_df.reset_index(inplace=True, drop=True)
#display(patient_com_treat_fea_raw_df.head(5))

patient_com_treat_fea_raw_df = df_datarequest_features  # needs to be changed ------------------------

# Fill missing MMSE values with the patient-group average
#patient_com_treat_fea_raw_df['MINI_MENTAL_SCORE']\
#    = patient_com_treat_fea_raw_df.groupby(by=['patient_id'])['MINI_MENTAL_SCORE']\
#        .transform(lambda x: x.fillna(x.mean()))
display(patient_com_treat_fea_raw_df.head(5))

# 19<=Mild<=24, 14<=Moderate<=18, Severe<=13
#patient_com_treat_fea_raw_df['MINI_MENTAL_SCORE_CATEGORY'] = np.nan
def change_minimentalscore_to_category(df):
    df.loc[(df['MINI_MENTAL_SCORE']<=30) & (df['MINI_MENTAL_SCORE']>24), 'MINI_MENTAL_SCORE_CATEGORY'] = 'Normal'
    df.loc[(df['MINI_MENTAL_SCORE']<=24) & (df['MINI_MENTAL_SCORE']>=19), 'MINI_MENTAL_SCORE_CATEGORY'] = 'Mild'
    df.loc[(df['MINI_MENTAL_SCORE']<=18) & (df['MINI_MENTAL_SCORE']>=14), 'MINI_MENTAL_SCORE_CATEGORY'] = 'Moderate'
    df.loc[(df['MINI_MENTAL_SCORE']<=13) & (df['MINI_MENTAL_SCORE']>=0), 'MINI_MENTAL_SCORE_CATEGORY'] = 'Severe'
    return df

#patient_com_treat_fea_raw_df = change_minimentalscore_to_category(patient_com_treat_fea_raw_df)

# save the file
patient_com_treat_fea_raw_df.to_csv(data_path + 'patient_com_treat_fea_episode_raw_without_expand_df.csv',
                                    index=False)

# Set the line number for each treatment line
def setLineNumber(lst):
    lst_dict = {ide: 0 for ide in lst}
    lineNumber_list = []
    for idx in lst:
        if idx in lst_dict:
            lst_dict[idx] = lst_dict[idx] + 1
            lineNumber_list.append(lst_dict[idx])
    return lineNumber_list

patient_com_treat_fea_raw_df['lineNumber'] = setLineNumber(patient_com_treat_fea_raw_df['patient_id'].tolist())
display(patient_com_treat_fea_raw_df.head(5))

# Expand episode data into columns
def extend_episode_data(df):
    id_dict = {i: 0 for i in df['patient_id'].tolist()}
    for x in df['patient_id'].tolist():
        if x in id_dict:
            id_dict[x] = id_dict[x] + 1
    line_updated = [int(j) for i in id_dict.values() for j in range(1, i + 1)]
    # print(line_updated[0:10])
    df.update(pd.Series(line_updated, name='lineNumber'), errors='ignore')
    print('\n----------------After creating a line number for each patient------------------')
    display(df.head(20))
    # merge episodes based on id, creating new columns for each episode
    r = df['lineNumber'].max()
    print('Max line:', r)
    l = [df[df['lineNumber']==i] for i in range(1, int(r + 1))]
    print('Number of DFs to merge: ', len(l))
    df_new = pd.DataFrame()
    tmp_id = []
    for i, df_l in enumerate(l):
        df_l = df_l[~df_l['patient_id'].isin(tmp_id)]
        for j, df_ll in enumerate(l[i+1:]):
            #df_l = df_l.merge(df_ll, on='id', how='left', suffix=(str(j), str(j+1)))  # suffix is not working
            df_l = df_l.join(df_ll.set_index('patient_id'), on='patient_id', rsuffix='_' + str(j + 1))
        tmp_id = tmp_id + df_l['patient_id'].tolist()
        #display(df_l)
        df_new = df_new.append(df_l, ignore_index=True, sort=False)
    return df_new

patient_com_treat_fea_raw_df['lineNumber'] = setLineNumber(patient_com_treat_fea_raw_df['patient_id'].tolist())

# drop rows with a duplicated episode for a patient
patient_com_treat_fea_raw_df = patient_com_treat_fea_raw_df.drop_duplicates(subset=['patient_id', 'EPISODE_DATE'])
patient_com_treat_fea_raw_df.sort_values(by=['patient_id', 'EPISODE_DATE'], inplace=True)
columns = patient_com_treat_fea_raw_df.columns.tolist()
patient_com_treat_fea_raw_df = patient_com_treat_fea_raw_df[columns[0:2] + columns[-1:]
                                                            + columns[2:4] + columns[-2:-1]
                                                            + columns[4:-2]]

# Expand patient episodes
#patient_com_treat_fea_raw_df = extend_episode_data(patient_com_treat_fea_raw_df)
display(patient_com_treat_fea_raw_df.head(2))

# Save the extended episodes of each patient
#patient_com_treat_fea_raw_df.to_csv(data_path + 'patient_com_treat_fea_episode_raw_df.csv', index=False)

patient_com_treat_fea_raw_df.shape
display(patient_com_treat_fea_raw_df.describe(include='all'))
display(patient_com_treat_fea_raw_df.info())

tmp_l = []
for i in range(len(patient_com_treat_fea_raw_df.index)):
    # print("NaN in row ", i, " : ", patient_com_treat_fea_raw_df.iloc[i].isnull().sum())
    tmp_l.append(patient_com_treat_fea_raw_df.iloc[i].isnull().sum())
plt.hist(tmp_l)
plt.show()

# count NaN and "not asked" (code 9) values that occur after a filled value
def findnotasked(v):
    c = 0.0
    flag = False
    try:
        for i in v:
            if float(i) < 9.0 and float(i) >= 0.0 and flag == False:
                flag = True
            elif float(i) == 9.0 and flag == True:
                c = c + 1
    except:
        pass
    return c

def findnan(v):
    c = 0.0
    flag = False
    try:
        for i in v:
            if float(i) < 9.0 and float(i) >= 0.0 and flag == False:
                flag = True
            elif float(i) != float(i) and flag == True:  # NaN check
                c = c + 1
    except:
        pass
    return c

df = patient_com_treat_fea_raw_df[list(set([col for col in patient_com_treat_fea_raw_df.columns.tolist()])
                                       - set(['EPISODE_DATE']))]
tmpdf = pd.DataFrame(data=df['patient_id'].unique(), columns=['patient_id'])
display(tmpdf.head(5))
for col in df.columns.tolist():
    tmp_df1 = df.groupby(by=['patient_id'])[col].apply(lambda x: findnotasked(x)).reset_index(name='Count(notAsked)_' + col)
    tmp_df2 = df.groupby(by=['patient_id'])[col].apply(lambda x: findnan(x)).reset_index(name='Count(nan)_' + col)
    tmpdf = tmpdf.merge(tmp_df1, on=['patient_id'], how='inner')
    tmpdf = tmpdf.merge(tmp_df2, on=['patient_id'], how='inner')

col_notasked = [col for col in tmpdf.columns if 'Count(notAsked)_' in col]
col_nan = [col for col in tmpdf.columns.tolist() if 'Count(nan)_' in col]
tmpdf['count_Total(notasked)'] = tmpdf[col_notasked].agg(lambda x: x.sum(), axis=1)
tmpdf['count_Total(nan)'] = tmpdf[col_nan].agg(lambda x: x.sum(), axis=1)
display(tmpdf.head(5))

profile = tmpdf.profile_report(title='Dementia Profiling Report')
profile.to_file(output_file=result_path + "dementia_data_profiling_report_output_all_patients_notasked_nan.html")

# count NaN and "not asked" values over the whole sequence
def findnotasked_full(v):
    c = 0.0
    try:
        for i in v:
            if float(i) == 9.0:
                c = c + 1
    except:
        pass
    return c

def findnan_full(v):
    c = 0.0
    try:
        for i in v:
            if float(i) != float(i):  # NaN check
                c = c + 1
    except:
        pass
    return c

df = patient_com_treat_fea_raw_df[list(set([col for col in patient_com_treat_fea_raw_df.columns.tolist()])
                                       - set(['EPISODE_DATE']))]
tmpdf_full = pd.DataFrame(data=df['patient_id'].unique(), columns=['patient_id'])
display(tmpdf_full.head(5))
for col in df.columns.tolist():
    tmp_df1_full = df.groupby(by=['patient_id'])[col].apply(lambda x: findnotasked_full(x)).reset_index(name='Count(notAsked)_' + col)
    tmp_df2_full = df.groupby(by=['patient_id'])[col].apply(lambda x: findnan_full(x)).reset_index(name='Count(nan)_' + col)
    tmpdf_full = tmpdf_full.merge(tmp_df1_full, on=['patient_id'], how='inner')
    tmpdf_full = tmpdf_full.merge(tmp_df2_full, on=['patient_id'], how='inner')

col_notasked_full = [col for col in tmpdf_full.columns if 'Count(notAsked)_' in col]
col_nan_full = [col for col in tmpdf_full.columns.tolist() if 'Count(nan)_' in col]
tmpdf_full['count_Total(notasked)'] = tmpdf_full[col_notasked_full].agg(lambda x: x.sum(), axis=1)
tmpdf_full['count_Total(nan)'] = tmpdf_full[col_nan_full].agg(lambda x: x.sum(), axis=1)
display(tmpdf_full.head(5))

profile = tmpdf_full.profile_report(title='Dementia Profiling Report')
profile.to_file(output_file=result_path + "dementia_data_profiling_report_output_all_patients_notasked_nan_full.html")

# profile = patient_com_treat_fea_raw_df.profile_report(title='Dementia Profiling Report', style={'full_width': True})
profile = patient_com_treat_fea_raw_df.profile_report(title='Dementia Profiling Report')
profile.to_file(output_file=result_path + "dementia_data_profiling_report_output_all_patients_notasked.html")

# column-wise sums
total_notasked_nan = tmpdf.sum(axis=0, skipna=True)
total_notasked_nan.to_csv(data_path + 'total_notasked_nan.csv', index=True)

total_notasked_nan_com = tmpdf_full.sum(axis=0, skipna=True)
total_notasked_nan_com.to_csv(data_path + 'total_notasked_nan_com.csv', index=True)

patient_com_treat_fea_raw_df.describe()
```
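The MMSE banding used by `change_minimentalscore_to_category` above (Normal > 24, Mild 19-24, Moderate 14-18, Severe 0-13) can be sanity-checked in isolation. A dependency-free sketch of the same binning:

```python
def mmse_category(score):
    """Map a Mini-Mental State Examination score (0-30) to a severity band.
    Mirrors the banding in change_minimentalscore_to_category above."""
    if score is None or not (0 <= score <= 30):
        return None          # out-of-range or missing scores stay uncategorized
    if score > 24:
        return 'Normal'
    if score >= 19:
        return 'Mild'
    if score >= 14:
        return 'Moderate'
    return 'Severe'

for s in (30, 24, 19, 18, 14, 13, 0):
    print(s, mmse_category(s))
```

Checking the band edges (24/19/14/13) this way guards against off-by-one errors in the chained `df.loc` conditions.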
HMMs Library
============================

#### (Discrete & Continuous Hidden Markov Models)

This document contains a tutorial (usage explained by example) for the hidden Markov models library [link to pip].

* [The **first** part](#dthmm) will cover the discrete-time hidden Markov model (**DtHMM**)
* [The **second** part](#cthmm) will be dedicated to the continuous-time hidden Markov model (**CtHMM**)
* [The **third** part](#conv) will compare the convergence of **both** models
* [The **fourth** part](#dataset) will explain how to use more complex **datasets** and run **multiple trains** with one function call

All of the parts are independent, so you do not need to run the whole notebook if you are interested in only one of them.

If you are not familiar with hidden Markov model theory, we recommend ... %todo: refer to DP theory, (simple guide to cthmm?), github, sources

<a id='dthmm'></a>
Part 1: Discrete Time Hidden Markov Model
---------------------------------------------------

```
import numpy as np
import matplotlib.pyplot as plt
import hmms
%matplotlib inline
```

### Construct DtHMM

You can directly initialize the DtHMM by passing the **model parameters**. We will create a simple DtHMM with two hidden states and three output variables.

```
# A is the matrix of transition probabilities from state [row] to state [column].
A = np.array([[0.9,0.1],[0.4,0.6]])
# B is the matrix of probabilities that the state [row] will emit output variable [column].
B = np.array([[0.9,0.08,0.02],[0.2,0.5,0.3]])
# Pi is the vector of initial state probabilities.
Pi = np.array( [0.8,0.2] )

# Create the DtHMM from the given parameters.
dhmm = hmms.DtHMM(A,B,Pi)
```

Or you can initialize it with **random parameters**, passing the number of hidden states and output variables.

```
dhmm_random = hmms.DtHMM.random(2,3)
```

### Save & Read from File

Once you have created the model, you can **save** its parameters to a file simply by calling the *save_params* method.
```
dhmm.save_params("hello_dthmm")
```

The method stores the parameters in the *.npz* format. The saved file can later be used to **read** the parameters for model initialization.

```
dhmm_from_file = hmms.DtHMM.from_file( "hello_dthmm.npz" )
```

### Set & Get Parameters

Later you can always **set** the parameters with the triple of methods corresponding to the constructors.

```
dhmm.set_params(A,B,Pi)
dhmm.set_params_random(2,3)
dhmm.set_params_from_file( "hello_dthmm.npz" )
```

You can **get** the parameters by calling them separately,

```
dhmm.a, dhmm.b, dhmm.pi
```

or **get** them **all** together as a triple.

```
(A,B,Pi) = dhmm.params
```

### Generate a Random State and Emission Sequence

Now we can use our model to generate a state and emission sequence. The model will randomly choose which transition or emission happens, taking into consideration the parameters we have previously defined.

```
seq_len = 20
s_seq, e_seq = dhmm.generate( seq_len )

# resize plot
plt.rcParams['figure.figsize'] = [20,20]
hmms.plot_hmm( s_seq, e_seq )
```

### Find the Most Likely State Sequence

If we have the model parameters and an emission sequence, we can find the most probable state sequence that would generate it. Notice that it can differ from the actual sequence that generated the emissions. We will use the Viterbi algorithm for the calculation.

```
( log_prob, s_seq ) = dhmm.viterbi( e_seq )

# Let's print the most likely state sequence; it can be the same as or differ from the sequence above.
hmms.plot_hmm( s_seq, e_seq )
```

The *log_prob* variable stores the probability of the sequence. All probabilities in the library are stored as logarithms of their actual values: as the number of possible sequences grows exponentially with the sequence length, working with raw probabilities could easily lead to float underflow. You can transform a value back to the normal scale by applying the *exp* function.
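For reference, the Viterbi recursion used above can be sketched in log space, which avoids exactly the underflow just described. This is a compact sketch of the standard algorithm over the same A, B, Pi defined earlier, not the library's implementation:

```python
import numpy as np

def viterbi_log(A, B, Pi, e_seq):
    """Return (log_prob, state_sequence) of the most likely state path."""
    with np.errstate(divide='ignore'):          # log(0) -> -inf is fine here
        logA, logB, logPi = np.log(A), np.log(B), np.log(Pi)
    n, T = A.shape[0], len(e_seq)
    delta = logPi + logB[:, e_seq[0]]           # best log-prob of a path ending in each state
    psi = np.zeros((T, n), dtype=int)           # back-pointers
    for t in range(1, T):
        cand = delta[:, None] + logA            # cand[i, j]: come from state i, go to state j
        psi[t] = np.argmax(cand, axis=0)
        delta = cand[psi[t], np.arange(n)] + logB[:, e_seq[t]]
    path = [int(np.argmax(delta))]
    for t in range(T - 1, 0, -1):               # follow the back-pointers
        path.append(int(psi[t][path[-1]]))
    return float(np.max(delta)), list(reversed(path))

A  = np.array([[0.9, 0.1], [0.4, 0.6]])
B  = np.array([[0.9, 0.08, 0.02], [0.2, 0.5, 0.3]])
Pi = np.array([0.8, 0.2])
log_prob, s_seq = viterbi_log(A, B, Pi, [0, 0, 1, 2, 0])
print(np.exp(log_prob), s_seq)
```

Because everything is summed in log space, sequences of thousands of steps pose no numerical problem, while the product form would underflow long before that.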
```
np.exp( log_prob )
```

### State Confidence

We may want to know the probability that an emission was generated by some concrete state. You can get the result for every state at every time by calling the method *states_confidence*. **Notice** that the Viterbi most probable sequence presented above does not necessarily contain the most probable states from this method, since here the transition probabilities between consecutive states play no role.

```
log_prob_table = dhmm.states_confidence( e_seq )
np.exp( log_prob_table )
```

<a id='dtest'></a>
### The Probability of the Emission Sequence

We can compute the probability that the model will generate the emission sequence.

```
np.exp( dhmm.emission_estimate( e_seq ) )
```

### The Probability of the State and Emission Sequences

Similarly, we can compute the probability of the state and emission sequences given the model parameters.

```
np.exp( dhmm.estimate( s_seq, e_seq ) )
```

**Notice!** - You can compute the probability estimates for a whole dataset with one command; see [Chapter 4](#dsest).

### Generate an Artificial Dataset

You can easily generate many sequences at once by using the *generate_data* function. The generated emission sequences are in a form that is suitable for training the parameters. You can pass *times=True* if you want to generate also the corresponding equidistant time sequences.

```
seq_num = 3  # number of data sequences
seq_len = 10 # length of each sequence
dhmm.generate_data( (seq_num,seq_len) )
```

### Parameter Estimation - Baum-Welch Algorithm

We usually do not know the real parameters of the model. But if we have sufficient data, we can estimate them with the EM algorithm. Here we will have several output-variable (emission) sequences, and we will show how to use them to train the model parameters.

Let's start by creating some artificial data. We will use the previously defined *dhmm* model for it.
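The *emission_estimate* and *data_estimate* calls above (and the Baum-Welch E-step below) rest on the forward algorithm, which sums the probability of an emission sequence over all possible state paths in O(T·n²) time. A minimal numpy sketch of the idea (not the library's implementation), using the A, B, Pi defined earlier:

```python
import numpy as np

def forward_likelihood(A, B, Pi, e_seq):
    """Probability that the model emits e_seq, by the forward algorithm."""
    alpha = Pi * B[:, e_seq[0]]          # joint prob of the prefix and the current state
    for o in e_seq[1:]:
        alpha = (alpha @ A) * B[:, o]    # propagate one step, then emit
    return float(alpha.sum())

A  = np.array([[0.9, 0.1], [0.4, 0.6]])
B  = np.array([[0.9, 0.08, 0.02], [0.2, 0.5, 0.3]])
Pi = np.array([0.8, 0.2])
obs = [0, 1, 2, 0]
print(forward_likelihood(A, B, Pi, obs))
```

For long sequences a real implementation works in log space or rescales `alpha` at each step, for the same underflow reason discussed earlier; this plain-probability version is only safe for short sequences.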
**Notice!** - For more detailed information about possible datasets, see [Chapter 4](#dataset).

```
seq_num = 5
seq_len = 50
_ , data = dhmm.generate_data( (seq_num,seq_len) )
data
```

Now we will create a model with random parameters, which will eventually be trained to match the data.

```
dhmm_r = hmms.DtHMM.random( 2,3 )

# We can print all the parameters.
hmms.print_parameters( dhmm_r )
```

Let's compare the dataset likelihood estimates of the model used for generating the data and of the random-parameter model.

```
print( "Generator model:" , np.exp( dhmm.data_estimate(data) ) )
print( "Random model: " , np.exp( dhmm_r.data_estimate(data) ) )
```

Most likely the probability that the data was generated by the random model is extremely low. Now we can take the random model and re-estimate it to fit the data better.

```
dhmm_r.baum_welch( data, 10 )
print( "Reestimated model after 10 iterations: " , np.exp( dhmm_r.data_estimate(data) ) )
```

The probability of the re-estimated model should now be similar to (possibly even higher than) the generator model's. If it is not, you can try running the estimation procedure more times from different randomly generated models; the estimation can fall into a local optimum. If you are satisfied with the results, you can run some more iterations to fine-tune the model.

```
dhmm_r.baum_welch( data, 100 )
print( "Reestimated model after 110 iterations: " , np.exp( dhmm_r.data_estimate(data) ) )
```

We can compare the parameters of the two models.

```
hmms.print_parameters( dhmm_r )
hmms.print_parameters( dhmm )
```

Alternatively, we can run *baum_welch* with *est=True* to get the learning curve of estimated probabilities.

```
dhmm_r = hmms.DtHMM.random(2,3)
out = dhmm_r.baum_welch( data, 50, est=True )
np.exp(out)
```

Let's plot it in a graph, comparing the results as a ratio with the *real* data-generator model. (Notice, it is the ratio of logarithmic probability values.)

```
real = dhmm.data_estimate(data)

# For better visibility of the graph, we cut the first two values.
plt.plot( out[2:] / real )
plt.show()
```

### Maximum Likelihood Estimation

Sometimes we may have a dataset of full observations (i.e. both emission and hidden state sequences). We can use the method *maximum_likelihood_estimation* to estimate the most likely parameters.

```
seq_num = 5
seq_len = 50

# generate an artificial dataset of both hidden state and emission sequences
s_seqs , e_seqs = dhmm.generate_data( (seq_num,seq_len) )

dhmm_r = hmms.DtHMM.random(2,3)
dhmm_r.maximum_likelihood_estimation(s_seqs,e_seqs)

log_est = dhmm.full_data_estimate ( s_seqs, e_seqs )
log_est_mle = dhmm_r.full_data_estimate( s_seqs, e_seqs )

print("The probability of the dataset being generated by the original model is:", \
      np.exp(log_est), "." )
print("The probability of the dataset being generated by the MLE model is:", \
      np.exp(log_est_mle), "." )
```

For the discrete-time model, the dataset probability estimated with parameters from *maximum_likelihood_estimation* will always be higher than or equal to the probability of the dataset being generated by the original model. This is a consequence of the statistical inaccuracy of the finite dataset.

<a id='cthmm'></a>
Part 2: Continuous Time Hidden Markov Model
-----------------------------------------------------

```
import numpy as np
import matplotlib.pyplot as plt
import hmms
%matplotlib inline
```

### Construct CtHMM

Construction of a CtHMM is similar to the discrete model. You can directly initialize the CtHMM by passing the **model parameters**. We will create a simple CtHMM with three hidden states and three output variables.

```
# Q is the matrix of transition rates from state [row] to state [column].
Q = np.array( [[-0.375,0.125,0.25],[0.25,-0.5,0.25],[0.25,0.125,-0.375]] )
# B is the matrix of probabilities that the state [row] will emit output variable [column].
B = np.array( [[0.8,0.05,0.15],[0.05,0.9,0.05],[0.2,0.05,0.75]] )
# Pi is the vector of initial state probabilities.
Pi = np.array( [0.6,0,0.4] )

# Create the CtHMM from the given parameters.
chmm = hmms.CtHMM(Q,B,Pi)
```

Or you can initialize it with **random parameters**, passing the number of hidden states and output variables. By default the parameters are generated from an exponential distribution and then normalized to sum to one.

```
chmm_random = hmms.CtHMM.random(3,3)
```

You can choose generation from a uniform distribution by passing the parameter *method*.

```
chmm_random = hmms.CtHMM.random(3,3,method="unif")
```

### Save & Read from File

Once you have created the model, you can save its parameters to a file simply by calling the *save_params* method.

```
chmm.save_params("hello_cthmm")
```

The method stores the parameters in the *.npz* format. The saved file can later be used to read the parameters for model initialization.

```
chmm_from_file = hmms.CtHMM.from_file( "hello_cthmm.npz" )
```

### Set & Get Parameters

Later you can always set the parameters with the triple of methods corresponding to the constructors.

```
chmm.set_params(Q,B,Pi)
chmm.set_params_random(3,3)
chmm.set_params_from_file( "hello_cthmm.npz" )
```

You can **get** the parameters by calling them separately,

```
chmm.q, chmm.b, chmm.pi
```

or get them all together as a triple.

```
(Q,B,Pi) = chmm.params
```

### Generate a Random Sequence

Now we can use our model to **generate** a time, state, and emission sequence. The model will **randomly** choose which transition or emission happens, taking into consideration the parameters we have previously defined. The times are generated with **exponential** waiting times; you can define the parameter of the exponential distribution as the second optional parameter.

```
seq_len = 10
t_seq, s_seq, e_seq = chmm.generate( seq_len, 0.5)

# resize plot
plt.rcParams['figure.figsize'] = [20,20]
hmms.plot_hmm( s_seq, e_seq, time=t_seq )
```

Optionally, you can generate the sequences by supplying your own time sequence (as a list or numpy array) with the desired observation times.
```
t_seq, s_seq, e_seq = chmm.generate( 7, time=[0,3,5,7,8,11,14])

# resize plot
plt.rcParams['figure.figsize'] = [20,20]

hmms.plot_hmm( s_seq, e_seq, time=t_seq )
```

### Find Most Likely State Sequence

If we have corresponding time and emission sequences, we can find the most probable state sequence that would generate them, given the current model parameters. Notice that it can differ from the actual sequence that generated the emissions. We will use the Viterbi algorithm for the calculation.

```
( log_prob, s_seq ) = chmm.viterbi( t_seq, e_seq )

# Let's print the most likely state sequence; it can be the same as, or differ from, the sequence above.
hmms.plot_hmm( s_seq, e_seq, time = t_seq )

print( "Probability of being generated by the found state sequence:", np.exp( log_prob ) )
```

### State Confidence

We may want to know the probability that an emission was generated by some concrete state. You can get the result for every state at every time by calling the method *states_confidence*. **Notice** that the Viterbi most probable sequence presented above does not necessarily contain the most probable states from this method, since here the transition probabilities between consecutive states play no role.

```
log_prob_table = chmm.states_confidence( t_seq, e_seq )
np.exp( log_prob_table )
```

### The Probability of the Time and Emission Sequences

We can compute the probability of the emission sequence given the model and its time sequence.

```
np.exp( chmm.emission_estimate( t_seq, e_seq ) )
```

### The Probability of the State, Time and Emission Sequences

Similarly, we can compute the probability of the state, time and emission sequences given the model parameters.

```
np.exp( chmm.estimate( s_seq, t_seq, e_seq ) )
```

**Notice!** - You can compute the probability estimates for the whole dataset with a single command; see [Chapter 4](#dsest).

### Generate Artificial Dataset

You can easily generate many sequences at once by using the *generate_data* function.
The generated data are in a form suitable for training the parameters. You can switch *states=True* if you also want to generate the corresponding state sequences. The times are generated with **exponential** waiting times; you can define the parameter of the exponential distribution as a second optional parameter.

```
seq_num= 5   #number of data sequences
seq_len= 30  #length of each sequence

t,e = chmm.generate_data( (seq_num,seq_len) )
t,e
```

### Parameter Estimation - the Continuous Version of the Baum-Welch Algorithm

We will use the previously generated data for the training of a randomly generated model.

**Notice!** - Always use integers in your time-point dataset. Float times are also supported, but they can make the computation significantly slower, and you should know why you are using them. For more detailed information, see [Chapter 4](#dataset).

```
chmm_r = hmms.CtHMM.random( 3,3 )

# We can print all the parameters.
hmms.print_parameters( chmm_r )
```

Now we can compare the probabilities that the data was generated by each model. The ratio will most probably not be as large as in the discrete model. This is because the intervals between the observations introduce many unknowns, which pushes the probability under the real model down.

```
print( "Generator model:" , np.exp( chmm.data_estimate(t,e) ) )
print( "Random model: " ,np.exp( chmm_r.data_estimate(t,e) ) )
```

Let's run the EM algorithm for a couple of iterations.

```
out = chmm_r.baum_welch( t,e, 100, est=True )
np.exp(out)
```

We will plot its probability estimates as a ratio with the generator model. (Notice that it is the ratio of the logarithms of the probabilities.)

```
real = chmm.data_estimate( t, e )

# For better visibility of the graph, we cut the first two values.
plt.plot( out[2:] / real )
plt.show()
```

### Maximum Likelihood Estimation

Sometimes we have a dataset of full observations (i.e. both the emission and hidden-state sequences).
We can use the method *maximum_likelihood_estimation* to estimate the most likely parameters. Its usage and parameters are similar to the *baum_welch* method.

```
seq_num = 5
seq_len = 50

# generate an artificial dataset of time, hidden-state and emission sequences
t_seqs, s_seqs, e_seqs = chmm.generate_data( (seq_num,seq_len), states=True )

chmm_r = hmms.CtHMM.random(3,3)
graph = chmm_r.maximum_likelihood_estimation(s_seqs,t_seqs,e_seqs,100,est=True )

# print the convergence graph
log_est = chmm.full_data_estimate ( s_seqs,t_seqs,e_seqs )
plt.plot( graph / log_est )
plt.show()
```

<a id='conv'></a>

Part 3: Comparison of Model Convergences
-----------------------------------------------------

In this chapter we will compare the convergence rates of the discrete and continuous models. It will also show some functions useful for converting between model parameters.

```
import numpy as np
import matplotlib.pyplot as plt
import hmms
%matplotlib inline
```

We will start by defining the continuous-time model. For those who have read the previous section, it will be familiar.

```
Q = np.array( [[-0.375,0.125,0.25],[0.25,-0.5,0.25],[0.25,0.125,-0.375]] )
B = np.array( [[0.8,0.05,0.15],[0.05,0.9,0.05],[0.2,0.05,0.75]] )
Pi = np.array( [0.6,0,0.4] )

chmm = hmms.CtHMM( Q,B,Pi )
hmms.print_parameters( chmm )
```

We can simply create a discrete model with equivalent parameters using the function *get_dthmm_params*. By default, it will create a model whose transition probabilities equal the one-time-unit transition probabilities of the continuous model. You can pass an optional parameter for a different time step.

```
dhmm = hmms.DtHMM( *chmm.get_dthmm_params() )
hmms.print_parameters( dhmm )
```

We can let the discrete model generate data sufficient for both models by passing the *times* parameter as *True*.
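The equivalence that *get_dthmm_params* relies on can be checked directly: the one-step transition matrix of the discrete model is the matrix exponential of the rate matrix, $P(\Delta t) = e^{Q \Delta t}$, which is a valid stochastic matrix. A pure-numpy sketch (the truncated Taylor-series helper is our own illustration, not part of `hmms`):

```python
import numpy as np

Q = np.array([[-0.375, 0.125, 0.25],
              [0.25, -0.5, 0.25],
              [0.25, 0.125, -0.375]])

def expm_taylor(M, terms=40):
    """Matrix exponential via a truncated Taylor series
    (adequate for the small, well-scaled rate matrices used here)."""
    out = np.eye(M.shape[0])
    term = np.eye(M.shape[0])
    for k in range(1, terms):
        term = term @ M / k      # accumulates M^k / k!
        out = out + term
    return out

P = expm_taylor(Q)        # one-time-unit transition probabilities
print(P.sum(axis=1))      # each row sums to 1: a valid stochastic matrix
```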
```
t,_,e = dhmm.generate_data( (50,50), times=True )
# The free space in the return triple is for the state sequences;
# we do not need them for the training.
```

We can compare the estimates of the data using both models. (They should be the same.)

```
creal = chmm.data_estimate(t,e)
dreal = dhmm.data_estimate(e)
print("Data estimation by continuous model:", creal)
print("Data estimation by discrete model: ", dreal)
```

Now we will create two equivalent random models.

```
ct = hmms.CtHMM.random(3,3)
dt = hmms.DtHMM( *ct.get_dthmm_params() )
hmms.print_parameters( ct )
hmms.print_parameters( dt )
```

We will train them on our dataset. (It can take a while.)

```
iter_num = 50
outd = dt.baum_welch( e, iter_num, est=True )
outc = ct.baum_welch( t,e, iter_num, est=True )
outd,outc
```

We can plot and compare both convergence rates. By the nature of the models, the continuous model will probably converge a bit more slowly, but will eventually reach a similar value.

```
plt.plot( outd[1:] / dreal )
plt.plot( outc[1:] / dreal, color="red" )
#plt.savefig('my_plot.svg') #Optionally save the figure
plt.show()
```

<a id='dataset'></a>

## Part 4: Advanced Work with Datasets

```
import numpy as np
import matplotlib.pyplot as plt
import hmms
%matplotlib inline
```

### Various Lengths of Training Vectors

There are two supported data structures that you can pass to the training:

#### 1. The Numpy Matrix

A two-dimensional array whose rows are the training sequences. However, this way is restricted in that all the vectors need to have the same size.

```
data_n = np.array( [[0, 0, 0, 1, 1, 1, 1, 1, 1, 1],
                    [0, 0, 0, 0, 1, 1, 1, 0, 1, 0],
                    [2, 0, 1, 0, 0, 0, 0, 0, 0, 0]] )

dhmm_r = hmms.DtHMM.random( 2,3 )
graph_n = dhmm_r.baum_welch( data_n, 10, est=True )

np.exp( dhmm_r.data_estimate(data_n) )
```

#### 2. The List of Numpy Vectors

A standard Python list consisting of numpy vectors. Every vector can have a different length.
```
data_l = [ np.array( [0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1] ) ,
           np.array( [0, 1, 0, 0, 1, 0, 1 ] ),
           np.array( [2, 0, 1, 0, 0, 0, 0, 0, 0, 0] ) ]

dhmm_r = hmms.DtHMM.random( 2,3 )
graph_l = dhmm_r.baum_welch( data_l, 10, est=True )

np.exp( dhmm_r.data_estimate(data_l) )

# you can plot the graphs, just for fun.
plt.plot( graph_n, color='red' )
plt.plot( graph_l )
```

#### Continuous-Time HMM

Working with datasets in CtHMM is analogous.

```
data_n = np.array( [[0, 0, 0, 1],
                    [0, 2, 0, 0],
                    [2, 0, 1, 0] ] )
time_n = np.array( [[0, 1, 3, 4],
                    [0, 2, 3, 5],
                    [0, 2, 4, 6] ] )

chmm_r = hmms.CtHMM.random( 2,3 )
graph_n = chmm_r.baum_welch( time_n, data_n, 10, est=True )
np.exp( chmm_r.data_estimate(time_n, data_n) )

data_l = [ np.array( [0, 0, 2, 0 ] ) ,
           np.array( [0, 1, 0, 0, 1 ] ),
           np.array( [2, 0, 1 ] ) ]
time_l = [ np.array( [0, 1, 2, 4 ] ) ,
           np.array( [0, 1, 3, 5, 6 ] ),
           np.array( [0, 2, 3 ] ) ]

chmm_r = hmms.CtHMM.random( 2,3 )
graph_n = chmm_r.baum_welch( time_l, data_l, 10, est=True )
np.exp( chmm_r.data_estimate(time_l, data_l) )
```

### Time Sequences in Floats

The time-sequence datatypes can be **integers** or **floats**. Although both datatypes are allowed, it is strongly *advised* to use integers, or floats with *integral distances* (be careful about floating-point imprecision here). *Non-integral* time intervals between two neighbouring observations are *computationally costly*, as they do not allow computing matrix powers, so more complex operations are needed. Below are two examples with float data and possible *tricks* to make the computation *faster*.
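Why integral distances are cheap can be seen directly: for an integer gap $n$, the transition matrix is just the $n$-th matrix power of the one-unit matrix, which can be cached and reused, while every distinct float gap needs its own matrix exponential. The Taylor helper below is our own illustration, not the `hmms` implementation:

```python
import numpy as np

Q = np.array([[-0.375, 0.125, 0.25],
              [0.25, -0.5, 0.25],
              [0.25, 0.125, -0.375]])

def expm_taylor(M, terms=60):
    """Truncated-Taylor matrix exponential (adequate for these small rates)."""
    out, term = np.eye(M.shape[0]), np.eye(M.shape[0])
    for k in range(1, terms):
        term = term @ M / k
        out = out + term
    return out

P1 = expm_taylor(Q)                       # transition matrix for a gap of 1
P3_power = np.linalg.matrix_power(P1, 3)  # gap of 3: cheap reuse of P1
P3_expm = expm_taylor(3 * Q)              # same matrix, computed from scratch
print(np.max(np.abs(P3_power - P3_expm)))
```

The two results agree to machine precision, which is exactly what integer time stamps let the training loop exploit.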
#### Example 1: Change interval lengths to integers

```
data = np.array( [[0, 0, 0, 1],
                  [0, 2, 0, 0],
                  [2, 0, 1, 0] ] )
time = np.array( [[0, 1.5, 3.4, 4.7],
                  [0, 2.6, 5.7, 8.9],
                  [0, 2.2, 4.1, 9.8] ] )
```

Use the data as floats

```
chmm_r = hmms.CtHMM.random( 2,3 )
graph_f = chmm_r.baum_welch( time, data, 10, est=True )
np.exp( chmm_r.data_estimate(time, data) )
```

or use the **trick** of making the intervals integral. Here it is enough to **multiply** the times by 10.

**Notice**: Here we are working with a randomly generated jump-rate matrix; otherwise you would need to rescale its values when multiplying the times.

```
chmm_r = hmms.CtHMM.random( 2,3 )
graph = chmm_r.baum_welch( time*10, data, 10, est=True )
np.exp( chmm_r.data_estimate( time*10, data ) )
```

#### Example 2: Approximate the time values

Sometimes, depending upon the data, the exact observation time may not be important, so a small approximation can help to get a better computation time.

```
data = np.array( [[0, 0, 0, 1],
                  [0, 2, 0, 0],
                  [2, 0, 1, 0] ] )
time = np.array( [[0, 1.54587435, 3.4435434, 4.74535345],
                  [0, 2.64353245, 5.7435435, 8.94353454],
                  [0, 2.24353455, 4.1345435, 9.83454354] ] )
```

Use the data as floats

```
chmm_r = hmms.CtHMM.random( 2,3 )
graph_f = chmm_r.baum_welch( time, data, 10, est=True )
np.exp( chmm_r.data_estimate(time, data) )
```

or apply the **trick**: here, multiply by 100 and round to integers.

```
time = np.round( time * 100 )

chmm_r = hmms.CtHMM.random( 2,3 )
graph = chmm_r.baum_welch( time, data, 10, est=True )
np.exp( chmm_r.data_estimate(time, data) )
```

<a id='dsest'></a>

### Dataset Probability Estimations

We have shown previously how to compute sequence probability estimates for [the discrete](#dtest) and [continuous](#ctest) models. Here we show how to do it for a whole dataset with just one command. (We show it on the continuous-time model; the discrete one is analogous, just omit the time sequences.)
```
seq_num= 10  #number of data sequences
seq_len= 10  #length of each sequence

# Create the model and generate the data
chmm = hmms.CtHMM.random(3,3)
t,s,e = chmm.generate_data( (seq_num,seq_len), states=True )
```

#### The Probability of the Time and Emission Sequences

We can compute the probability of the emission sequences given the model and their time sequences.

```
np.exp( chmm.data_estimate( t, e ) )
```

#### The Probability of the State, Time and Emission Sequences

Similarly, we can compute the probability of the state, time and emission sequences given the model parameters.

```
np.exp( chmm.full_data_estimate( s, t, e ) )
```

### Multi Training

For more convenient training from various random initializations, you can use the *multi_train* function. It has the parameter *method*:

- 'exp' - [default] Use the exponential distribution for random initialization
- 'unif' - Use the uniform distribution for random initialization

and *ret*:

- 'all' - Return all trained models, sorted by their probability estimation
- 'best' - [default] Return only the model with the best probability estimation

```
t,e = chmm.generate_data( (5,10) )

hidden_states = 3
runs = 10
iterations = 50

out = hmms.multi_train_ct( hidden_states , t, e, runs, iterations, ret='all', method='exp')
out
```

<hr/>

You can play with the models as you like, and feel free to share your results with me if you have made some interesting experiment!
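The restart strategy behind *multi_train* can be sketched generically: run training several times from random initializations and keep the best-scoring result. This is a hypothetical helper with a toy stand-in for one EM run, independent of the `hmms` internals:

```python
import numpy as np

def multi_train(train_once, runs, ret='best'):
    """Run `train_once()` several times; each call returns a
    (score, model) pair. Results are sorted by score, descending."""
    results = sorted((train_once() for _ in range(runs)),
                     key=lambda r: r[0], reverse=True)
    return results if ret == 'all' else results[0]

# toy stand-in for one EM run: the score is a random log-likelihood
rng = np.random.default_rng(1)
score, model = multi_train(lambda: (rng.normal(-100, 5), None), runs=10)
print(score)
```

With `ret='all'` you get the whole sorted list, mirroring the two return modes described above.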
Contact: (lopatovsky@gmail.com)

### Experimental features

#### Fast Convergence

```
#The experiment is frozen

seq_num= 1 #number of data sequences
seq_len= 4 #length of each sequence

t,e = chmm.generate_data( (seq_num,seq_len) )
t,e

t = np.array([[ 0,1,3,5,6,7,9,11,12]])
e = np.array([[ 0,0,0,1,2,1,0,0,1]])

ct1 = hmms.CtHMM.random(3,3)
ct2 = hmms.CtHMM( *ct1.params )

iter_num = 50
out1 = ct1.baum_welch( t,e, iter_num, est=True )
out2 = ct2.baum_welch( t,e, iter_num, est=True )  # uncommented so the plots below have data
out1,out2

plt.plot( out1[1:] / dreal , color = "red" )
plt.plot( out2[1:] / dreal )
#plt.savefig('graph.svg') #Optionally save the figure
plt.show()

hmms.print_parameters(ct1)
hmms.print_parameters(ct2)
```

#### Exponential random generation

```
import numpy as np
import matplotlib.pyplot as plt
import hmms
%matplotlib inline

# Q is the matrix of transition rates from state [row] to state [column].
Q = np.array( [[-0.375,0.125,0.25],[0.25,-0.5,0.25],[0.25,0.125,-0.375]] )

# B is the matrix of probabilities that the state [row] will emit output variable [column].
B = np.array( [[0.8,0.05,0.15],[0.05,0.9,0.05],[0.2,0.05,0.75]] )

# Pi is the vector of initial state probabilities.
Pi = np.array( [0.6,0,0.4] )

# Create CtHMM from the given parameters.
chmm = hmms.CtHMM(Q,B,Pi)

seq_num= 5  #number of data sequences
seq_len= 30 #length of each sequence

t,e = chmm.generate_data( (seq_num,seq_len) )

chmm_r = hmms.CtHMM.random( 3,3, method='unif' )
chmm_re = hmms.CtHMM.random( 3,3, method='exp' )

out = chmm_r.baum_welch( t,e, 10 )
oute = chmm_re.baum_welch( t,e, 10 )

#aout = np.average(out, axis=0)
#aoute = np.average(oute, axis=0)

out = hmms.multi_train(3, t, e, 10, 200, ret='all', method='exp')

aout = np.average(out, axis=0)
aoute = np.average(oute, axis=0)
mout = np.min(out, axis=0)
moute = np.min(oute, axis=0)

real = chmm.data_estimate( t, e )

# For better visibility of the graph, we cut the first two values.
offset = 3

#plt.plot( aout[offset:] / real , color = "red" )
#plt.plot( aoute[offset:] / real , color = "blue" )
#plt.plot( mout[offset:] / real , color = "orange" )
#plt.plot( moute[offset:] / real , color = "green")

for line in out:
    print( line/real )
    plt.plot( line[offset:] / real )
plt.show()

real = chmm.data_estimate( t, e )
offset = 3

print(out)

for line in out:
    #graph= line[1]
    #print( type(line) )
    #print( line[1]/real )
    plt.plot( line[1][offset:] / real )

oute
```

### Test different lengths of vectors

```
import numpy as np
import matplotlib.pyplot as plt
import hmms
%matplotlib inline

data_l = [ np.array( [0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1] ) ,
           np.array( [0, 1, 0, 0, 1, 0, 1 ] ),
           np.array( [2, 0, 1, 0, 0, 0, 0, 0, 0, 0] ) ]

data_n = np.array( [[0, 0, 0, 1, 1, 1, 1, 1, 1, 1],
                    [0, 0, 0, 2, 2, 2, 1, 0, 1, 0],
                    [2, 0, 1, 0, 0, 0, 0, 0, 0, 0]] )
```

#### Test the Numpy matrix

```
dhmm_r = hmms.DtHMM.random( 2,3 )
graph_n = dhmm_r.baum_welch( data_n, 10, est=True )
dhmm_r.maximum_likelihood_estimation(data_n, data_n)
np.exp( dhmm_r.data_estimate(data_n) )
```

#### Test the list of numpy arrays

```
dhmm_r = hmms.DtHMM.random( 2,3 )
graph_l = dhmm_r.baum_welch( data_l, 10, est=True )
np.exp( dhmm_r.data_estimate(data_l) )

plt.plot( graph_n, color='red' )
plt.plot( graph_l )
```

Do the same for the continuous model

```
data_l = [ np.array( [0, 0, 2, 0 ] ) ,
           np.array( [0, 1, 0, 0, 1 ] ),
           np.array( [2, 0, 1 ] ) ]
data_n = np.array( [[0, 0, 0, 1],
                    [0, 2, 0, 0],
                    [2, 0, 1, 0] ] )
time_l = [ np.array( [0, 1, 2, 4 ] ) ,
           np.array( [0, 1, 3, 5, 6 ] ),
           np.array( [0, 2, 3 ] ) ]
time_n = np.array( [[0, 1, 3, 4],
                    [0, 2, 3, 5],
                    [0, 2, 4, 6] ] )

chmm_r = hmms.CtHMM.random( 2,3 )
graph_n = chmm_r.baum_welch( time_n, data_n, 10, est=True )
np.exp( chmm_r.data_estimate(time_n, data_n) )

chmm_r = hmms.CtHMM.random( 2,3 )
graph_n = chmm_r.baum_welch( time_l, data_l, 10, est=True )
np.exp( chmm_r.data_estimate(time_l, data_l) )
```

### Test double times

```
import numpy as np
import matplotlib.pyplot as plt
import hmms
%matplotlib inline

data = [ np.array( [0, 0, 2, 0 ] ) ,
         np.array( [0, 1, 0, 0, 1 ] ),
         np.array( [2, 0, 1 ] ) ]
time_i = [ np.array( [0, 1, 2, 4 ] ) ,
           np.array( [0, 1, 3, 5, 6 ] ),
           np.array( [0, 2, 3 ] ) ]
time_f = [ np.array( [0, 1.1, 2.1, 4.1 ] ) ,
           np.array( [0, 1.1, 3.1, 5.1, 6.1 ] ),
           np.array( [0, 2.1, 3.1 ] ) ]

chmm_r = hmms.CtHMM.random( 2,3 )
graph_i = chmm_r.baum_welch( time_i, data, 10, est=True )
np.exp( chmm_r.data_estimate(time_i, data) )
```

with doubles

```
chmm_r = hmms.CtHMM.random( 2,3 )
graph_f = chmm_r.baum_welch( time_f, data, 10, est=True )
np.exp( chmm_r.data_estimate(time_f, data) )

plt.plot( graph_i, color='red' )
plt.plot( graph_f )
```

### Soft & Hard

```
import numpy as np
import matplotlib.pyplot as plt
import hmms
%matplotlib inline

Q = np.array( [[-0.375,0.125,0.25],[0.25,-0.5,0.25],[0.25,0.125,-0.375]] )
B = np.array( [[0.8,0.05,0.15],[0.05,0.9,0.05],[0.2,0.05,0.75]] )
Pi = np.array( [0.6,0,0.4] )
chmm = hmms.CtHMM( Q,B,Pi )
#chmm = hmms.CtHMM.random(15,15)

t,e = chmm.generate_data( (50,10) )

chmm_s = hmms.CtHMM.random( 3,3 )
chmm_h = hmms.CtHMM( * chmm_s.params )
chmm_c = hmms.CtHMM( * chmm_s.params )

print("comb")
#graph_comb = chmm_c.baum_welch( t, e, 5, est=True, method="hard" )
#graph_comb = np.append( graph_comb, chmm_c.baum_welch( t, e, 95, est=True, method="soft" ) )
print("hard")
graph_hard = chmm_h.baum_welch( t, e, 100, est=True, method="hard" )
print("soft")
graph_soft = chmm_s.baum_welch( t, e, 100, est=True, method="soft" )

real = chmm.data_estimate( t,e )
#real = 0
#for tt,ee in zip(t,e):
#    x,_ = chmm.viterbi( tt, ee )
#    real += x

# For better visibility of the graph, we cut the first two values.
plt.plot( graph_soft[1:] / real, color="red" )
plt.plot( graph_hard[1:] / real, color="blue" )
##plt.plot( graph_comb[1:-1] / real, color="purple")
plt.rcParams['figure.figsize'] = [20,20]
plt.savefig('graph.png')
plt.show()

print( chmm_h.data_estimate( t,e ) )

hmms.print_parameters( chmm )
hmms.print_parameters( chmm_s )
hmms.print_parameters( chmm_h )

chmm_h.check_params()
```

### Int-intervals vs Double-intervals

```
import numpy as np
import matplotlib.pyplot as plt
import hmms
%matplotlib inline

Q = np.array( [[-0.375,0.125,0.25],[0.25,-0.5,0.25],[0.25,0.125,-0.375]] )
B = np.array( [[0.8,0.05,0.15],[0.05,0.9,0.05],[0.2,0.05,0.75]] )
Pi = np.array( [0.6,0,0.4] )
chmm = hmms.CtHMM( Q,B,Pi )

t,e = chmm.generate_data( (50,50) )

chmm_i = hmms.CtHMM.random( 3,3 )
chmm_d = hmms.CtHMM( * chmm_i.params )

import time

time0 = time.time()
graph_i = chmm_i.baum_welch( t, e, 100, est=True, method="soft", fast=True )
time1 = time.time()
graph_d = chmm_d.baum_welch( t, e, 100, est=True, method="soft", fast=False )
time2 = time.time()

print(time2-time1)
print(time1-time0)

chmm_i.print_ts()
chmm_d.print_ts()

real = chmm.data_estimate( t,e )

plt.plot( graph_i[1:] / real, color="red" )
plt.plot( graph_d[1:] / real, color="blue" )

hmms.print_parameters( chmm_i )
hmms.print_parameters( chmm_d )

plt.rcParams['figure.figsize'] = [25,25]
plt.show()

chmm_i.q[0,0]
chmm_d.q[0,0]
```

#### zeros

```
import numpy as np
import matplotlib.pyplot as plt
import hmms
%matplotlib inline

# Q is the matrix of transition rates from state [row] to state [column].
Q = np.array( [[-0.375,0.125,0.25],[0.25,-0.5,0.25],[0.25,0.125,-0.375]] )

# B is the matrix of probabilities that the state [row] will emit output variable [column].
B = np.array( [[0.8,0.05,0.15],[0.05,0.9,0.05],[0.2,0.05,0.75]] )

# Pi is the vector of initial state probabilities.
Pi = np.array( [0.6,0,0.4] )

# Create CtHMM from the given parameters.
chmm = hmms.CtHMM(Q,B,Pi)

t,e = chmm.generate_data( (10,50) )

# Q is the matrix of transition rates from state [row] to state [column].
Q = np.array( [[-0.125,0.125,0.0],[0.45,-0.45,0.0],[0.25,0.125,-0.375]] )

# B is the matrix of probabilities that the state [row] will emit output variable [column].
B = np.array( [[0.8,0.05,0.15],[0.05,0.9,0.05],[0.2,0.05,0.75]] )

# Pi is the vector of initial state probabilities.
Pi = np.array( [0.6,0.4,0.0] )

# Create CtHMM from the given parameters.
chmm_i = hmms.CtHMM(Q,B,Pi)

graph_i = chmm_i.baum_welch( t, e, 100, est=True, method="soft", fast=True )

hmms.print_parameters( chmm_i )
```

#### random tests

```
import numpy as np
import matplotlib.pyplot as plt
import hmms
%matplotlib inline

Q = np.array( [[-0.375,0.125,0.25],[0.25,-0.5,0.25],[0.25,0.125,-0.375]] )
B = np.array( [[0.8,0.05,0.15],[0.05,0.9,0.05],[0.2,0.05,0.75]] )
Pi = np.array( [0.6,0,0.4] )
chmm = hmms.CtHMM( Q,B,Pi )

t,e = chmm.generate_data( (50,50) )

chmm_i = hmms.CtHMM.random( 3,3 )
chmm_d = hmms.CtHMM( * chmm_i.params )

graph_i = chmm_i.baum_welch( t, e, 100, est=True, method="soft", fast=True )

real = chmm.data_estimate( t,e )
plt.plot( graph_i[1:]/real , color="red" )
#hmms.print_parameters( chmm_i )
#plt.rcParams['figure.figsize'] = [25,25]
plt.show()

real = chmm.data_estimate( t,e )
plt.plot( real-graph_i[1:] , color="red" )
#hmms.print_parameters( chmm_i )
#plt.rcParams['figure.figsize'] = [25,25]
plt.show()

real = chmm.data_estimate( t,e )
plt.plot( np.exp(graph_i[1:] - real) , color="red" )
print(np.exp(graph_i[1:] - real))
#hmms.print_parameters( chmm_i )
#plt.rcParams['figure.figsize'] = [25,25]
plt.show()

real = chmm.data_estimate( t,e )
fig = plt.figure()
ax = fig.add_subplot(111)
ax.set_yscale("log", nonposy='clip')  # log scale on y; `nonposy` (not `nonposx`) matches the y axis
plt.plot( np.exp(graph_i[1:] - real) , color="red" )
#plt.rcParams['figure.figsize'] = [25,25]
plt.show()
```
# Notebook 13: Using Deep Learning to Study SUSY with Pytorch

## Learning Goals

The goal of this notebook is to introduce the powerful PyTorch framework for building neural networks, and to use it to analyze the SUSY dataset. After this notebook, the reader should understand the mechanics of PyTorch and how to construct DNNs using this package. In addition, the reader is encouraged to explore the GPU backend available in PyTorch on this dataset.

## Overview

In this notebook, we use Deep Neural Networks to classify the supersymmetry dataset, first introduced by Baldi et al. in [Nature Communications (2015)](https://www.nature.com/articles/ncomms5308). The SUSY data set consists of 5,000,000 Monte-Carlo samples of supersymmetric and non-supersymmetric collisions with $18$ features. The signal process is the production of electrically-charged supersymmetric particles, which decay to $W$ bosons and an electrically-neutral supersymmetric particle that is invisible to the detector. The first $8$ features are "raw" kinematic features that can be directly measured from collisions. The final $10$ features are "hand-constructed" features that have been chosen using physical knowledge and are known to be important in distinguishing supersymmetric and non-supersymmetric collision events. More specifically, they are given by the column names below. In this notebook, we study this dataset using PyTorch.

```
from __future__ import print_function, division
import os,sys
import numpy as np

import torch # pytorch package, allows using GPUs

# fix seed
seed=17
np.random.seed(seed)
torch.manual_seed(seed)
```

## Structure of the Procedure

Constructing a Deep Neural Network to solve ML problems is a multiple-stage process.
Quite generally, one can identify the key steps as follows:

* ***step 1:*** Load and process the data
* ***step 2:*** Define the model and its architecture
* ***step 3:*** Choose the optimizer and the cost function
* ***step 4:*** Train the model
* ***step 5:*** Evaluate the model performance on the *unseen* test data
* ***step 6:*** Modify the hyperparameters to optimize performance for the specific data set

Below, we sometimes combine some of these steps for convenience.

Notice that we take a rather different approach compared to the simpler MNIST Keras notebook: we first define a set of classes and functions, and run the actual computation only at the very end.

### Step 1: Load and Process the SUSY Dataset

The supersymmetry dataset can be downloaded from the UCI Machine Learning repository at [https://archive.ics.uci.edu/ml/machine-learning-databases/00279/SUSY.csv.gz](https://archive.ics.uci.edu/ml/machine-learning-databases/00279/SUSY.csv.gz). The dataset is quite large. Download the dataset and unzip it in a directory.

Loading data in PyTorch is done by creating a user-defined class, which we name `SUSY_Dataset`, and which is a child of the `torch.utils.data.Dataset` class. This ensures that all necessary attributes required for the processing of the data during the training and test stages are easily inherited. The `__init__` method of our custom data class should contain the usual code for loading the data, which is problem-specific and has been discussed for the SUSY data set in Notebook 5. More importantly, the user-defined data class must override the `__len__` and `__getitem__` methods of the parent `Dataset` class. The former returns the size of the data set, while the latter allows the user to access a particular data point from the set by specifying its index.
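The required protocol is independent of the data itself: any object with `__len__` and `__getitem__` will do. A minimal plain-numpy sketch of the protocol (a toy stand-in, not the SUSY loader):

```python
import numpy as np

class ToyDataset:
    """Minimal Dataset-style object: indexable, with a length."""
    def __init__(self, n_samples=100, n_features=18, seed=0):
        rng = np.random.default_rng(seed)
        self.X = rng.normal(size=(n_samples, n_features)).astype(np.float32)
        self.y = rng.integers(0, 2, size=n_samples)  # binary labels

    def __len__(self):
        return len(self.y)

    def __getitem__(self, idx):
        # returns one (features, label) sample, as DataLoader expects
        return self.X[idx], self.y[idx]

ds = ToyDataset()
x, label = ds[7]          # what a DataLoader does behind the scenes
print(len(ds), x.shape, label)
```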
```
from torchvision import datasets

# load data
class SUSY_Dataset(torch.utils.data.Dataset):
    """SUSY pytorch dataset."""

    def __init__(self, data_file, root_dir, dataset_size, train=True, transform=None, high_level_feats=None):
        """
        Args:
            data_file (string): Path to the csv file with the data.
            root_dir (string): Directory containing the data file.
            train (bool, optional): If set to `True`, load training data.
            transform (callable, optional): Optional transform to be applied on a sample.
            high_level_feats (bool, optional): If set to `True`, work with high-level features only.
                If set to `False`, work with low-level features only.
                Default is `None`: work with all features.
        """

        import pandas as pd

        features=['SUSY','lepton 1 pT', 'lepton 1 eta', 'lepton 1 phi', 'lepton 2 pT', 'lepton 2 eta',
                  'lepton 2 phi', 'missing energy magnitude', 'missing energy phi', 'MET_rel',
                  'axial MET', 'M_R', 'M_TR_2', 'R', 'MT2', 'S_R', 'M_Delta_R', 'dPhi_r_b', 'cos(theta_r1)']

        low_features=['lepton 1 pT', 'lepton 1 eta', 'lepton 1 phi', 'lepton 2 pT', 'lepton 2 eta',
                      'lepton 2 phi', 'missing energy magnitude', 'missing energy phi']
        high_features=['MET_rel', 'axial MET', 'M_R', 'M_TR_2', 'R', 'MT2','S_R', 'M_Delta_R',
                       'dPhi_r_b', 'cos(theta_r1)']

        # number of datapoints to work with
        df = pd.read_csv(root_dir+data_file, header=None,nrows=dataset_size,engine='python')
        df.columns=features

        Y = df['SUSY']
        X = df[[col for col in df.columns if col!="SUSY"]]

        # set training and test data size
        train_size=int(0.8*dataset_size)
        self.train=train

        if self.train:
            X=X[:train_size]
            Y=Y[:train_size]
            print("Training on {} examples".format(train_size))
        else:
            X=X[train_size:]
            Y=Y[train_size:]
            print("Testing on {} examples".format(dataset_size-train_size))

        self.root_dir = root_dir
        self.transform = transform

        # make datasets using all features, only the 10 high-level features, or only the 8 low-level features
        if high_level_feats is None:
            self.data=(X.values.astype(np.float32),Y.values.astype(int))
            print("Using both high- and low-level features")
        elif high_level_feats is True:
            self.data=(X[high_features].values.astype(np.float32),Y.values.astype(int))
            print("Using high-level features only.")
        elif high_level_feats is False:
            self.data=(X[low_features].values.astype(np.float32),Y.values.astype(int))
            print("Using low-level features only.")

    # override __len__ and __getitem__ of the Dataset() class
    def __len__(self):
        return len(self.data[1])

    def __getitem__(self, idx):
        sample=(self.data[0][idx,...],self.data[1][idx])
        if self.transform:
            sample=self.transform(sample)
        return sample
```

Last, we define a helper function `load_data()` that accepts the set of parameters `args` as a required argument and returns two generators, `train_loader` and `test_loader`, which readily return mini-batches.

```
def load_data(args):

    data_file='SUSY.csv'
    root_dir=os.path.expanduser('~')+'/ML_review/SUSY_data/'

    kwargs = {} # CUDA arguments, if enabled

    # load and normalise train and test data
    train_loader = torch.utils.data.DataLoader(
        SUSY_Dataset(data_file,root_dir,args.dataset_size,train=True,high_level_feats=args.high_level_feats),
        batch_size=args.batch_size, shuffle=True, **kwargs)

    test_loader = torch.utils.data.DataLoader(
        SUSY_Dataset(data_file,root_dir,args.dataset_size,train=False,high_level_feats=args.high_level_feats),
        batch_size=args.test_batch_size, shuffle=True, **kwargs)

    return train_loader, test_loader
```

### Step 2: Define the Neural Net and its Architecture

To construct neural networks with PyTorch, we make another class, called `model`, a child of PyTorch's `nn.Module` class. The `model` class initializes the types of layers needed for the deep neural net in its `__init__` method, while the DNN is assembled in a function method called `forward`, which accepts an `autograd.Variable` object and returns the output layer. Using this convention, PyTorch will automatically recognize the structure of the DNN, and the `autograd` module will pull the gradients forward and backward using backprop.
Our code below is constructed in such a way that one can choose to use the high-level features only, the low-level features only, or all features together. This choice determines the size of the fully-connected input layer `fc1`. Therefore, the `__init__` method accepts the optional argument `high_level_feats`.

```
import torch.nn as nn

# construct NN
class model(nn.Module):

    def __init__(self,high_level_feats=None):
        # inherit attributes and methods of nn.Module
        super(model, self).__init__()

        # an affine operation: y = Wx + b
        if high_level_feats is None:
            self.fc1 = nn.Linear(18, 200)  # all features
        elif high_level_feats:
            self.fc1 = nn.Linear(10, 200)  # high-level features only
        else:
            self.fc1 = nn.Linear(8, 200)   # low-level features only

        self.batchnorm1=nn.BatchNorm1d(200, eps=1e-05, momentum=0.1)
        self.batchnorm2=nn.BatchNorm1d(100, eps=1e-05, momentum=0.1)

        self.fc2 = nn.Linear(200, 100) # see forward function for dimensions
        self.fc3 = nn.Linear(100, 2)

    def forward(self, x):
        '''Defines the feed-forward function for the NN.
        A backward function is automatically defined using `torch.autograd`.

        Parameters
        ----------
        x : autograd.Tensor
            input data

        Returns
        -------
        autograd.Tensor
            output layer of NN
        '''

        # apply rectified linear unit
        x = F.relu(self.fc1(x))
        # apply dropout
        #x=self.batchnorm1(x)
        x = F.dropout(x, training=self.training)
        # apply rectified linear unit
        x = F.relu(self.fc2(x))
        # apply dropout
        #x=self.batchnorm2(x)
        x = F.dropout(x, training=self.training)
        # apply affine operation fc3
        x = self.fc3(x)
        # soft-max layer
        x = F.log_softmax(x,dim=1)

        return x
```

### Steps 3+4+5: Choose the Optimizer and the Cost Function. Train and Evaluate the Model

Next, we define the function `evaluate_model`. The first argument, `args`, contains all hyperparameters needed for the DNN (see below). The second and third arguments are the `train_loader` and the `test_loader` objects, returned by the function `load_data()` we defined in Step 1 above.
The `evaluate_model` function returns the final `test_loss` and `test_accuracy` of the model.

First, we initialize a `model` and call the object `DNN`. In order to define the loss function and the optimizer, we use the modules `torch.nn.functional` (imported here as `F`) and `torch.optim`. As a loss function we choose the negative log-likelihood, and store it under the variable `criterion`. As usual, we can choose any of a variety of different SGD-based optimizers, but here we focus on traditional SGD.

Next, we define two functions: `train()` and `test()`. They are called at the end of `evaluate_model`, where we loop over the training epochs to train and test our model.

The `train` function accepts an integer called `epoch`, which is only used for printing the training progress. We first set the `DNN` in train mode using the `train()` method inherited from `nn.Module`. Then we loop over the mini-batches in `train_loader`. We cast the data as a pytorch `Variable`, reset the `optimizer` gradient buffers, perform the forward step by calling the `DNN` model on the `data`, and compute the `loss`. The backprop algorithm is then easily run using the `backward()` method of the computed `loss`. We use `optimizer.step` to update the weights of the `DNN`. Last, we print the performance for every minibatch. `train` returns the loss on the data.

The `test` function is similar to `train`, but its purpose is to test the performance of a trained model. Once we set the `DNN` model in `eval()` mode, the following steps are similar to those in `train`. We then compute the `test_loss` and the number of `correct` predictions, print the results, and return them.
```
import torch.nn.functional as F # implements forward and backward definitions of an autograd operation
import torch.optim as optim # different update rules such as SGD, Nesterov-SGD, Adam, RMSProp, etc.

def evaluate_model(args,train_loader,test_loader):

    # create model
    DNN = model(high_level_feats=args.high_level_feats)

    # negative log-likelihood (nll) loss for training: takes class labels NOT one-hot vectors!
    criterion = F.nll_loss
    # define SGD optimizer
    optimizer = optim.SGD(DNN.parameters(), lr=args.lr, momentum=args.momentum)
    #optimizer = optim.Adam(DNN.parameters(), lr=0.001, betas=(0.9, 0.999))

    ################################################
    def train(epoch):
        '''Trains a NN using minibatches.

        Parameters
        ----------
        epoch : int
            Training epoch number.

        '''
        # set model to training mode (affects Dropout and BatchNorm)
        DNN.train()

        # loop over training data
        for batch_idx, (data, label) in enumerate(train_loader):
            # zero gradient buffers
            optimizer.zero_grad()
            # compute output of final layer: forward step
            output = DNN(data)
            # compute loss
            loss = criterion(output, label)
            # run backprop: backward step
            loss.backward()
            # update weights of NN
            optimizer.step()

            # print loss at current epoch
            if batch_idx % args.log_interval == 0:
                print('Train Epoch: {} [{}/{} ({:.0f}%)]\tLoss: {:.6f}'.format(
                    epoch, batch_idx * len(data), len(train_loader.dataset),
                    100. * batch_idx / len(train_loader), loss.item()))

        return loss.item()

    ################################################
    def test():
        '''Tests NN performance.
        '''
        # set model to evaluation mode (affects Dropout and BatchNorm)
        DNN.eval()

        test_loss = 0 # loss function on test data
        correct = 0   # number of correct predictions

        # loop over test data
        for data, label in test_loader:
            # compute model prediction softmax probability
            output = DNN(data)
            # compute test loss: reduction='sum' sums up the batch loss
            test_loss += criterion(output, label, reduction='sum').item()
            # find most likely prediction
            pred = output.data.max(1, keepdim=True)[1] # get the index of the max log-probability
            # update number of correct predictions
            correct += pred.eq(label.data.view_as(pred)).cpu().sum().item()

        # print test loss
        test_loss /= len(test_loader.dataset)
        print('\nTest set: Average loss: {:.4f}, Accuracy: {}/{} ({:.3f}%)\n'.format(
            test_loss, correct, len(test_loader.dataset),
            100. * correct / len(test_loader.dataset)))

        return test_loss, correct / len(test_loader.dataset)

    ################################################

    train_loss=np.zeros((args.epochs,))
    test_loss=np.zeros_like(train_loss)
    test_accuracy=np.zeros_like(train_loss)

    epochs=range(1, args.epochs + 1)
    for epoch in epochs:
        train_loss[epoch-1] = train(epoch)
        test_loss[epoch-1], test_accuracy[epoch-1] = test()

    return test_loss[-1], test_accuracy[-1]
```

### Step 6: Modify the Hyperparameters to Optimize Performance of the Model

To study the performance of the model for a variety of different `dataset_sizes` and `learning_rates`, we do a grid search. Let us define a function `grid_search`, which accepts the `args` variable containing all hyperparameters needed for the problem. After choosing logarithmically-spaced `dataset_sizes` and `learning_rates`, we first loop over all `dataset_sizes`, update the `args` variable, and call the `load_data` function. We then loop once again over all `learning_rates`, update `args` and call `evaluate_model`.
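As a quick reference for the logarithmic spacing used below, `np.logspace(a, b, n)` returns `n` values evenly spaced in the exponent between `10**a` and `10**b`:

```python
import numpy as np

# the five learning rates used in the grid search: 1e-5, 1e-4, 1e-3, 1e-2, 1e-1
learning_rates = np.logspace(-5, -1, 5)
print(learning_rates)
```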
```
def grid_search(args):

    # perform grid search over learning rate and data set size
    dataset_sizes=[1000, 10000, 100000, 200000] #np.logspace(2,5,4).astype('int')
    learning_rates=np.logspace(-5,-1,5)

    # pre-allocate data
    test_loss=np.zeros((len(dataset_sizes),len(learning_rates)),dtype=np.float64)
    test_accuracy=np.zeros_like(test_loss)

    # do grid search
    for i, dataset_size in enumerate(dataset_sizes):
        # update data set size parameters
        args.dataset_size=dataset_size
        args.batch_size=int(0.01*dataset_size)
        # load data
        train_loader, test_loader = load_data(args)
        for j, lr in enumerate(learning_rates):
            # update learning rate
            args.lr=lr
            print("\n training DNN with %5d data points and SGD lr=%0.6f. \n" %(dataset_size,lr) )
            test_loss[i,j],test_accuracy[i,j] = evaluate_model(args,train_loader,test_loader)

    plot_data(learning_rates,dataset_sizes,test_accuracy)
```

Last, we use the function `plot_data`, defined below, to plot the results.

```
import matplotlib.pyplot as plt

def plot_data(x,y,data):

    # plot results
    fontsize=16
    fig = plt.figure()
    ax = fig.add_subplot(111)
    cax = ax.matshow(data, interpolation='nearest', vmin=0, vmax=1)
    cbar=fig.colorbar(cax)
    cbar.ax.set_ylabel('accuracy (%)',rotation=90,fontsize=fontsize)
    cbar.set_ticks([0,.2,.4,0.6,0.8,1.0])
    cbar.set_ticklabels(['0%','20%','40%','60%','80%','100%'])

    # put text on matrix elements
    for i, x_val in enumerate(np.arange(len(x))):
        for j, y_val in enumerate(np.arange(len(y))):
            c = "${0:.1f}\\%$".format( 100*data[j,i])
            ax.text(x_val, y_val, c, va='center', ha='center')

    # convert axis values to string labels
    x=[str(i) for i in x]
    y=[str(i) for i in y]

    ax.set_xticklabels(['']+x)
    ax.set_yticklabels(['']+y)
    ax.set_xlabel('$\\mathrm{learning\\ rate}$',fontsize=fontsize)
    ax.set_ylabel('$\\mathrm{data\\ set\\ size}$',fontsize=fontsize)

    plt.tight_layout()
    plt.show()
```

## Run Code

As we mentioned in the beginning of the notebook, all functions and classes discussed above only specify the procedure but do not actually
perform any computations. This allows us to re-use them for different problems. Actually running the training and testing for every point in the grid search is done below. The `argparse` module allows us to conveniently keep track of all hyperparameters, stored in the variable `args`, which enters most of the functions we defined above. To run the simulation, we call the function `grid_search`.

## Exercises

* One of the advantages of Pytorch is that it allows one to automatically use the CUDA library for fast performance on GPUs. For the sake of clarity, we have omitted this in the above notebook. Go online to check how to put the CUDA commands back into the code above. _Hint:_ study the [Pytorch MNIST tutorial](https://github.com/pytorch/examples/blob/master/mnist/main.py) to see how this works in practice.

```
import argparse # handles arguments
import sys; sys.argv=['']; del sys # required to use parser in jupyter notebooks

# Training settings
parser = argparse.ArgumentParser(description='PyTorch SUSY Example')
parser.add_argument('--dataset_size', type=int, default=100000, metavar='DS',
                    help='size of data set (default: 100000)')
parser.add_argument('--high_level_feats', type=bool, default=None, metavar='HLF',
                    help='toggles high level features (default: None)')
parser.add_argument('--batch-size', type=int, default=100, metavar='N',
                    help='input batch size for training (default: 100)')
parser.add_argument('--test-batch-size', type=int, default=1000, metavar='N',
                    help='input batch size for testing (default: 1000)')
parser.add_argument('--epochs', type=int, default=10, metavar='N',
                    help='number of epochs to train (default: 10)')
parser.add_argument('--lr', type=float, default=0.05, metavar='LR',
                    help='learning rate (default: 0.05)')
parser.add_argument('--momentum', type=float, default=0.8, metavar='M',
                    help='SGD momentum (default: 0.8)')
parser.add_argument('--no-cuda', action='store_true', default=False,
                    help='disables CUDA training')
parser.add_argument('--seed',
type=int, default=2, metavar='S',
                    help='random seed (default: 2)')
parser.add_argument('--log-interval', type=int, default=10, metavar='N',
                    help='how many batches to wait before logging training status')
args = parser.parse_args()

# set seed of random number generator
torch.manual_seed(args.seed)

grid_search(args)
```
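When experimenting interactively, the `sys.argv` workaround above can be avoided altogether by building the `args` object directly; a minimal sketch using the standard-library `argparse.Namespace` (field values mirror the defaults above):

```python
import argparse

# build the hyperparameter container directly, with no command-line parsing
args = argparse.Namespace(
    dataset_size=100000, high_level_feats=None,
    batch_size=100, test_batch_size=1000,
    epochs=10, lr=0.05, momentum=0.8,
    no_cuda=False, seed=2, log_interval=10)

print(args.lr, args.epochs)  # → 0.05 10
```

The resulting `args` can then be passed to `grid_search` exactly like the parsed version.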
# Start-to-Finish Example: Setting up Exact Initial Data for Einstein's Equations, in Curvilinear Coordinates

## Authors: Brandon Clark, George Vopal, and Zach Etienne

## This module sets up initial data for a specified exact solution written in terms of ADM variables, using the [*Exact* ADM Spherical to BSSN Curvilinear initial data module](../edit/BSSN/ADM_Exact_Spherical_or_Cartesian_to_BSSNCurvilinear.py).

**Notebook Status:** <font color='green'><b> Validated </b></font>

**Validation Notes:** This module has been validated, confirming that all initial data sets exhibit convergence to zero of the Hamiltonian and momentum constraints at the expected rate or better.

### NRPy+ Source Code for this module:
* [BSSN/ADM_Exact_Spherical_or_Cartesian_to_BSSNCurvilinear.py](../edit/BSSN/ADM_Exact_Spherical_or_Cartesian_to_BSSNCurvilinear.py); [\[**tutorial**\]](Tutorial-ADM_Initial_Data-Converting_Exact_ADM_Spherical_or_Cartesian_to_BSSNCurvilinear.ipynb): *Exact* Spherical ADM$\to$Curvilinear BSSN converter function
* [BSSN/BSSN_constraints.py](../edit/BSSN/BSSN_constraints.py); [\[**tutorial**\]](Tutorial-BSSN_constraints.ipynb): Hamiltonian & momentum constraints in BSSN curvilinear basis/coordinates

## Introduction:
Here we use NRPy+ to generate a C code confirming that specified *exact* initial data satisfy Einstein's equations of general relativity. The following exact initial data types are supported:

* Shifted Kerr-Schild spinning black hole initial data
* "Static" Trumpet black hole initial data
* Brill-Lindquist two black hole initial data
* UIUC black hole initial data

<a id='toc'></a>

# Table of Contents
$$\label{toc}$$

This notebook is organized as follows 0.
[Preliminaries](#prelim): The Choices for Initial Data
1. [Choice 1](#sks): Shifted Kerr-Schild spinning black hole initial data
1. [Choice 2](#st): "Static" Trumpet black hole initial data
1. [Choice 3](#bl): Brill-Lindquist two black hole initial data
1. [Choice 4](#uiuc): UIUC black hole initial data
1. [Step 2](#initializenrpy): Set core NRPy+ parameters for numerical grids and reference metric
1. [Step 3](#adm_id): Import Black Hole ADM initial data C function from NRPy+ module
1. [Step 4](#validate): Validating that the black hole initial data satisfy the Hamiltonian constraint
    1. [Step 4.a](#ham_const_output): Output C code for evaluating the Hamiltonian and Momentum constraint violation
    1. [Step 4.b](#enforce3metric): Enforce conformal 3-metric $\det{\bar{\gamma}_{ij}}=\det{\hat{\gamma}_{ij}}$ constraint
    1. [Step 4.c](#bc_functs): Apply singular, curvilinear coordinate boundary conditions
1. [Step 5](#mainc): `Initial_Data_Playground.c`: The Main C Code
1. [Step 6](#plot): Plotting the initial data
1. [Step 7](#convergence): Validation: Convergence of numerical errors (Hamiltonian constraint violation) to zero
1. [Step 8](#latex_pdf_output): Output this notebook to $\LaTeX$-formatted PDF file

<a id='prelim'></a>

# Preliminaries: The Choices for Initial Data
$$\label{prelim}$$

<a id='sks'></a>

## Shifted Kerr-Schild spinning black hole initial data \[Back to [top](#toc)\]
$$\label{sks}$$

Here we use NRPy+ to generate initial data for a spinning black hole. Shifted Kerr-Schild spinning black hole initial data has been <font color='green'><b> validated </b></font> to exhibit convergence to zero of both the Hamiltonian and momentum constraint violations at the expected order to the exact solution.

**NRPy+ Source Code:**
* [BSSN/ShiftedKerrSchild.py](../edit/BSSN/ShiftedKerrSchild.py); [\[**tutorial**\]](Tutorial-ADM_Initial_Data-ShiftedKerrSchild.ipynb)

The [BSSN.ShiftedKerrSchild](../edit/BSSN/ShiftedKerrSchild.py) NRPy+ module does the following: 1.
Set up shifted Kerr-Schild initial data, represented by [ADM](https://en.wikipedia.org/wiki/ADM_formalism) quantities in the **Spherical basis**, as [documented here](Tutorial-ADM_Initial_Data-ShiftedKerrSchild.ipynb). 1. Convert the exact ADM **Spherical quantities** to **BSSN quantities in the desired Curvilinear basis** (set by `reference_metric::CoordSystem`), as [documented here](Tutorial-ADM_Initial_Data-Converting_Numerical_ADM_Spherical_or_Cartesian_to_BSSNCurvilinear.ipynb). 1. Sets up the standardized C function for setting all BSSN Curvilinear gridfunctions in a pointwise fashion, as [written here](../edit/BSSN/BSSN_ID_function_string.py), and returns the C function as a Python string. <a id='st'></a> ## "Static" Trumpet black hole initial data \[Back to [top](#toc)\] $$\label{st}$$ Here we use NRPy+ to generate initial data for a single trumpet black hole ([Dennison & Baumgarte, PRD ???](https://arxiv.org/abs/??)). "Static" Trumpet black hole initial data has been <font color='green'><b> validated </b></font> to exhibit convergence to zero of the Hamiltonian constraint violation at the expected order to the exact solution. It was carefully ported from the [original NRPy+ code](https://bitbucket.org/zach_etienne/nrpy). **NRPy+ Source Code:** * [BSSN/StaticTrumpet.py](../edit/BSSN/StaticTrumpet.py); [\[**tutorial**\]](Tutorial-ADM_Initial_Data-StaticTrumpet.ipynb) The [BSSN.StaticTrumpet](../edit/BSSN/StaticTrumpet.py) NRPy+ module does the following: 1. Set up static trumpet black hole initial data, represented by [ADM](https://en.wikipedia.org/wiki/ADM_formalism) quantities in the **Spherical basis**, as [documented here](Tutorial-ADM_Initial_Data-StaticTrumpetBlackHole.ipynb). 1. Convert the exact ADM **Spherical quantities** to **BSSN quantities in the desired Curvilinear basis** (set by `reference_metric::CoordSystem`), as [documented here](Tutorial-ADM_Initial_Data-Converting_Numerical_ADM_Spherical_or_Cartesian_to_BSSNCurvilinear.ipynb). 1. 
Sets up the standardized C function for setting all BSSN Curvilinear gridfunctions in a pointwise fashion, as [written here](../edit/BSSN/BSSN_ID_function_string.py), and returns the C function as a Python string. <a id='bl'></a> ## Brill-Lindquist initial data \[Back to [top](#toc)\] $$\label{bl}$$ Here we use NRPy+ to generate initial data for two black holes (Brill-Lindquist, [Brill & Lindquist, Phys. Rev. 131, 471, 1963](https://journals.aps.org/pr/abstract/10.1103/PhysRev.131.471); see also Eq. 1 of [Brandt & Brügmann, arXiv:gr-qc/9711015v1](https://arxiv.org/pdf/gr-qc/9711015v1.pdf)). [//]: # " and then we use it to generate the RHS expressions for [Method of Lines](https://reference.wolfram.com/language/tutorial/NDSolveMethodOfLines.html) time integration based on the [explicit Runge-Kutta fourth-order scheme](https://en.wikipedia.org/wiki/Runge%E2%80%93Kutta_methods) (RK4)." Brill-Lindquist initial data has been <font color='green'><b> validated </b></font> to exhibit convergence to zero of the Hamiltonian constraint violation at the expected order to the exact solution, and all quantities have been validated against the [original SENR code](https://bitbucket.org/zach_etienne/nrpy). **NRPy+ Source Code:** * [BSSN/BrillLindquist.py](../edit/BSSN/BrillLindquist.py); [\[**tutorial**\]](Tutorial-ADM_Initial_Data-Brill-Lindquist.ipynb) * [BSSN/BSSN_ID_function_string.py](../edit/BSSN/BSSN_ID_function_string.py) The [BSSN.BrillLindquist](../edit/BSSN/BrillLindquist.py) NRPy+ module does the following: 1. Set up Brill-Lindquist initial data [ADM](https://en.wikipedia.org/wiki/ADM_formalism) quantities in the **Cartesian basis**, as [documented here](Tutorial-ADM_Initial_Data-Brill-Lindquist.ipynb). 1. Convert the ADM **Cartesian quantities** to **BSSN quantities in the desired Curvilinear basis** (set by `reference_metric::CoordSystem`), as [documented here](Tutorial-ADM_Initial_Data-Converting_ADMCartesian_to_BSSNCurvilinear.ipynb). 1. 
Sets up the standardized C function for setting all BSSN Curvilinear gridfunctions in a pointwise fashion, as [written here](../edit/BSSN/BSSN_ID_function_string.py), and returns the C function as a Python string.

<a id='uiuc'></a>

## UIUC black hole initial data \[Back to [top](#toc)\]
$$\label{uiuc}$$

UIUC black hole initial data has been <font color='green'><b> validated </b></font> to exhibit convergence to zero of the Hamiltonian constraint violation at the expected order to the exact solution, and all quantities have been validated against the [original SENR code](https://bitbucket.org/zach_etienne/nrpy).

**NRPy+ Source Code:**
* [BSSN/UIUCBlackHole.py](../edit/BSSN/UIUCBlackHole.py); [\[**tutorial**\]](Tutorial-ADM_Initial_Data-UIUCBlackHole.ipynb)

The [BSSN.UIUCBlackHole](../edit/BSSN/UIUCBlackHole.py) NRPy+ module does the following:

1. Set up UIUC black hole initial data, represented by [ADM](https://en.wikipedia.org/wiki/ADM_formalism) quantities in the **Spherical basis**, as [documented here](Tutorial-ADM_Initial_Data-UIUCBlackHole.ipynb).
1. Convert the numerical ADM **Spherical quantities** to **BSSN quantities in the desired Curvilinear basis** (set by `reference_metric::CoordSystem`), as [documented here](Tutorial-ADM_Initial_Data-Converting_Numerical_ADM_Spherical_or_Cartesian_to_BSSNCurvilinear.ipynb).
1. Sets up the standardized C function for setting all BSSN Curvilinear gridfunctions in a pointwise fashion, as [written here](../edit/BSSN/BSSN_ID_function_string.py), and returns the C function as a Python string.

<a id='pickid'></a>

# Step 1: Specify the Initial Data to Test \[Back to [top](#toc)\]
$$\label{pickid}$$

Here you have a choice for which initial data you would like to import and test for convergence. The following is a list of the currently compatible `initial_data_string` options for you to choose from.
* `"Shifted KerrSchild"` * `"Static Trumpet"` * `"Brill-Lindquist"` * `"UIUC"` ``` import collections ################# # For the User: Choose initial data, default is Shifted KerrSchild. # You are also encouraged to adjust any of the # DestGridCoordSystem, freeparams, or EnableMomentum parameters! # NOTE: Only DestGridCoordSystem == Spherical or SinhSpherical # currently work out of the box; additional modifications # will likely be necessary for other CoordSystems. ################# initial_data_string = "Shifted KerrSchild" # "UIUC" dictID = {} IDmod_retfunc = collections.namedtuple('IDmod_retfunc', 'modulename functionname DestGridCoordSystem freeparams EnableMomentum') dictID['Shifted KerrSchild'] = IDmod_retfunc( modulename = "BSSN.ShiftedKerrSchild", functionname = "ShiftedKerrSchild", DestGridCoordSystem = "Spherical", freeparams = ["const REAL M = 1.0;", "const REAL a = 0.9;", "const REAL r0 = 1.0;"], EnableMomentum = True) dictID['Static Trumpet'] = IDmod_retfunc( modulename = "BSSN.StaticTrumpet", functionname = "StaticTrumpet", DestGridCoordSystem = "Spherical", freeparams = ["const REAL M = 1.0;"], EnableMomentum = False) dictID['Brill-Lindquist'] = IDmod_retfunc( modulename = "BSSN.BrillLindquist", functionname = "BrillLindquist", DestGridCoordSystem = "Spherical", freeparams = ["const REAL BH1_posn_x =-1.0,BH1_posn_y = 0.0,BH1_posn_z = 0.0;", "const REAL BH2_posn_x = 1.0,BH2_posn_y = 0.0,BH2_posn_z = 0.0;", "const REAL BH1_mass = 0.5,BH2_mass = 0.5;"], EnableMomentum = False) dictID['UIUC'] = IDmod_retfunc(modulename = "BSSN.UIUCBlackHole", functionname = "UIUCBlackHole", DestGridCoordSystem = "SinhSpherical", freeparams = ["const REAL M = 1.0;", "const REAL chi = 0.99;"], EnableMomentum = True) ``` <a id='initializenrpy'></a> # Step 2: Set up the needed NRPy+ infrastructure and declare core gridfunctions \[Back to [top](#toc)\] $$\label{initializenrpy}$$ We will import the core modules of NRPy that we will need and specify the main gridfunctions we 
will need. ``` # Step P1: Import needed NRPy+ core modules: from outputC import lhrh,outC_function_dict,outCfunction # NRPy+: Core C code output module import finite_difference as fin # NRPy+: Finite difference C code generation module import NRPy_param_funcs as par # NRPy+: Parameter interface import grid as gri # NRPy+: Functions having to do with numerical grids import indexedexp as ixp # NRPy+: Symbolic indexed expression (e.g., tensors, vectors, etc.) support import reference_metric as rfm # NRPy+: Reference metric support import cmdline_helper as cmd # NRPy+: Multi-platform Python command-line interface import shutil, os, sys, time # Standard Python modules for multiplatform OS-level functions, benchmarking import importlib # Standard Python module for interactive module imports # Step P2: Create C code output directory: Ccodesdir = os.path.join("BlackHoleID_Ccodes/") # First remove C code output directory if it exists # Courtesy https://stackoverflow.com/questions/303200/how-do-i-remove-delete-a-folder-that-is-not-empty # !rm -r ScalarWaveCurvilinear_Playground_Ccodes shutil.rmtree(Ccodesdir, ignore_errors=True) # Then create a fresh directory cmd.mkdir(Ccodesdir) # Step P3: Create executable output directory: outdir = os.path.join(Ccodesdir,"output/") cmd.mkdir(outdir) # Step 1: Set the spatial dimension parameter # to three this time, and then read # the parameter as DIM. par.set_parval_from_str("grid::DIM",3) DIM = par.parval_from_str("grid::DIM") # Step 2: Set some core parameters, including CoordSystem MoL timestepping algorithm, # FD order, floating point precision, and CFL factor: # Choices are: Spherical, SinhSpherical, SinhSphericalv2, Cylindrical, SinhCylindrical, # SymTP, SinhSymTP CoordSystem = "Spherical" # Step 2.a: Set defaults for Coordinate system parameters. # These are perhaps the most commonly adjusted parameters, # so we enable modifications at this high level. 
# domain_size sets the default value for: # * Spherical's params.RMAX # * SinhSpherical*'s params.AMAX # * Cartesians*'s -params.{x,y,z}min & .{x,y,z}max # * Cylindrical's -params.ZMIN & .{Z,RHO}MAX # * SinhCylindrical's params.AMPL{RHO,Z} # * *SymTP's params.AMAX domain_size = 3.0 # sinh_width sets the default value for: # * SinhSpherical's params.SINHW # * SinhCylindrical's params.SINHW{RHO,Z} # * SinhSymTP's params.SINHWAA sinh_width = 0.4 # If Sinh* coordinates chosen # sinhv2_const_dr sets the default value for: # * SinhSphericalv2's params.const_dr # * SinhCylindricalv2's params.const_d{rho,z} sinhv2_const_dr = 0.05# If Sinh*v2 coordinates chosen # SymTP_bScale sets the default value for: # * SinhSymTP's params.bScale SymTP_bScale = 0.5 # If SymTP chosen FD_order = 4 # Finite difference order: even numbers only, starting with 2. 12 is generally unstable REAL = "double" # Best to use double here. # Step 3: Set the coordinate system for the numerical grid par.set_parval_from_str("reference_metric::CoordSystem",CoordSystem) rfm.reference_metric() # Create ReU, ReDD needed for rescaling B-L initial data, generating BSSN RHSs, etc. # Step 4: Set the finite differencing order to FD_order (set above). par.set_parval_from_str("finite_difference::FD_CENTDERIVS_ORDER", FD_order) # Step 5: Set the direction=2 (phi) axis to be the symmetry axis; i.e., # axis "2", corresponding to the i2 direction. # This sets all spatial derivatives in the phi direction to zero. par.set_parval_from_str("indexedexp::symmetry_axes","2") # Step 6: The MoLtimestepping interface is only used for memory allocation/deallocation import MoLtimestepping.C_Code_Generation as MoL from MoLtimestepping.RK_Butcher_Table_Dictionary import Butcher_dict RK_method = "Euler" # DOES NOT MATTER; Again MoL interface is only used for memory alloc/dealloc. 
RK_order = Butcher_dict[RK_method][1] cmd.mkdir(os.path.join(Ccodesdir,"MoLtimestepping/")) MoL.MoL_C_Code_Generation(RK_method, RHS_string = "", post_RHS_string = "", outdir = os.path.join(Ccodesdir,"MoLtimestepping/")) ``` <a id='adm_id'></a> # Step 3: Import Black Hole ADM initial data C function from NRPy+ module \[Back to [top](#toc)\] $$\label{adm_id}$$ ``` # Import Black Hole initial data IDmodule = importlib.import_module(dictID[initial_data_string].modulename) IDfunc = getattr(IDmodule, dictID[initial_data_string].functionname) IDfunc() # Registers ID C function in dictionary, used below to output to file. with open(os.path.join(Ccodesdir,"initial_data.h"),"w") as file: file.write(outC_function_dict["initial_data"]) ``` <a id='cparams_rfm_and_domainsize'></a> ## Step 3.a: Output C codes needed for declaring and setting Cparameters; also set `free_parameters.h` \[Back to [top](#toc)\] $$\label{cparams_rfm_and_domainsize}$$ Based on declared NRPy+ Cparameters, first we generate `declare_Cparameters_struct.h`, `set_Cparameters_default.h`, and `set_Cparameters[-SIMD].h`. Then we output `free_parameters.h`, which sets initial data parameters, as well as grid domain & reference metric parameters, applying `domain_size` and `sinh_width`/`SymTP_bScale` (if applicable) as set above ``` # Step 3.a.i: Set free_parameters.h # Output to $Ccodesdir/free_parameters.h reference metric parameters based on generic # domain_size,sinh_width,sinhv2_const_dr,SymTP_bScale, # parameters set above. 
rfm.out_default_free_parameters_for_rfm(os.path.join(Ccodesdir,"free_parameters.h"), domain_size,sinh_width,sinhv2_const_dr,SymTP_bScale) # Step 3.a.ii: Generate set_Nxx_dxx_invdx_params__and__xx.h: rfm.set_Nxx_dxx_invdx_params__and__xx_h(Ccodesdir) # Step 3.a.iii: Generate xx_to_Cart.h, which contains xx_to_Cart() for # (the mapping from xx->Cartesian) for the chosen # CoordSystem: rfm.xx_to_Cart_h("xx_to_Cart","./set_Cparameters.h",os.path.join(Ccodesdir,"xx_to_Cart.h")) # Step 3.a.iv: Generate declare_Cparameters_struct.h, set_Cparameters_default.h, and set_Cparameters[-SIMD].h par.generate_Cparameters_Ccodes(os.path.join(Ccodesdir)) ``` <a id='validate'></a> # Step 4: Validating that the black hole initial data satisfy the Hamiltonian constraint \[Back to [top](#toc)\] $$\label{validate}$$ We will validate that the black hole initial data satisfy the Hamiltonian constraint, modulo numerical finite differencing error. <a id='ham_const_output'></a> ## Step 4.a: Output C code for evaluating the Hamiltonian and Momentum constraint violation \[Back to [top](#toc)\] $$\label{ham_const_output}$$ First output C code for evaluating the Hamiltonian constraint violation. For the initial data where `EnableMomentum = True` we must also output C code for evaluating the Momentum constraint violation. ``` import BSSN.BSSN_constraints as bssncon # Now register the Hamiltonian & momentum constraints as gridfunctions. 
H = gri.register_gridfunctions("AUX","H") MU = ixp.register_gridfunctions_for_single_rank1("AUX", "MU") # Generate symbolic expressions for Hamiltonian & momentum constraints import BSSN.BSSN_constraints as bssncon bssncon.BSSN_constraints(add_T4UUmunu_source_terms=False) # Generate optimized C code for Hamiltonian constraint desc="Evaluate the Hamiltonian constraint" name="Hamiltonian_constraint" outCfunction( outfile = os.path.join(Ccodesdir,name+".h"), desc=desc, name=name, params = """const paramstruct *restrict params, REAL *restrict xx[3], REAL *restrict in_gfs, REAL *restrict aux_gfs""", body = fin.FD_outputC("returnstring",lhrh(lhs=gri.gfaccess("aux_gfs", "H"), rhs=bssncon.H), params="outCverbose=False"), loopopts = "InteriorPoints,Read_xxs") # Generate optimized C code for momentum constraint desc="Evaluate the momentum constraint" name="momentum_constraint" outCfunction( outfile = os.path.join(Ccodesdir,name+".h"), desc=desc, name=name, params = """const paramstruct *restrict params, REAL *restrict xx[3], REAL *restrict in_gfs, REAL *restrict aux_gfs""", body = fin.FD_outputC("returnstring", [lhrh(lhs=gri.gfaccess("aux_gfs", "MU0"), rhs=bssncon.MU[0]), lhrh(lhs=gri.gfaccess("aux_gfs", "MU1"), rhs=bssncon.MU[1]), lhrh(lhs=gri.gfaccess("aux_gfs", "MU2"), rhs=bssncon.MU[2])], params="outCverbose=False"), loopopts = "InteriorPoints,Read_xxs") ``` <a id='enforce3metric'></a> ## Step 4.b: Enforce conformal 3-metric $\det{\bar{\gamma}_{ij}}=\det{\hat{\gamma}_{ij}}$ constraint \[Back to [top](#toc)\] $$\label{enforce3metric}$$ Then enforce conformal 3-metric $\det{\bar{\gamma}_{ij}}=\det{\hat{\gamma}_{ij}}$ constraint (Eq. 
53 of [Ruchlin, Etienne, and Baumgarte (2018)](https://arxiv.org/abs/1712.07658)), as [documented in the corresponding NRPy+ tutorial notebook](Tutorial-BSSN_enforcing_determinant_gammabar_equals_gammahat_constraint.ipynb) Applying curvilinear boundary conditions should affect the initial data at the outer boundary, and will in general cause the $\det{\bar{\gamma}_{ij}}=\det{\hat{\gamma}_{ij}}$ constraint to be violated there. Thus after we apply these boundary conditions, we must always call the routine for enforcing the $\det{\bar{\gamma}_{ij}}=\det{\hat{\gamma}_{ij}}$ constraint: ``` # Set up the C function for the det(gammahat) = det(gammabar) import BSSN.Enforce_Detgammahat_Constraint as EGC enforce_detg_constraint_symb_expressions = EGC.Enforce_Detgammahat_Constraint_symb_expressions() EGC.output_Enforce_Detgammahat_Constraint_Ccode(Ccodesdir,exprs=enforce_detg_constraint_symb_expressions, Read_xxs=True) ``` <a id='bc_functs'></a> ## Step 4.c: Set up boundary condition functions for chosen singular, curvilinear coordinate system \[Back to [top](#toc)\] $$\label{bc_functs}$$ Next apply singular, curvilinear coordinate boundary conditions [as documented in the corresponding NRPy+ tutorial notebook](Tutorial-Start_to_Finish-Curvilinear_BCs.ipynb) ``` import CurviBoundaryConditions.CurviBoundaryConditions as cbcs cbcs.Set_up_CurviBoundaryConditions(os.path.join(Ccodesdir,"boundary_conditions/"),Cparamspath=os.path.join("../")) ``` <a id='mainc'></a> # Step 5: `Initial_Data_Playground.c`: The Main C Code \[Back to [top](#toc)\] $$\label{mainc}$$ ``` # Part P0: Set the number of ghost cells, from NRPy+'s FD_CENTDERIVS_ORDER # set REAL=double, so that all floating point numbers are stored to at least ~16 significant digits. 
with open(os.path.join(Ccodesdir,"Initial_Data_Playground_REAL__NGHOSTS.h"), "w") as file:
    file.write("""
// Part P0.a: Set the number of ghost cells, from NRPy+'s FD_CENTDERIVS_ORDER
#define NGHOSTS """+str(int(par.parval_from_str("finite_difference::FD_CENTDERIVS_ORDER")/2)+1)+"""\n
// Part P0.b: Set the numerical precision (REAL) to double, ensuring all floating point
// numbers are stored to at least ~16 significant digits
#define REAL double\n""")
```

```
%%writefile $Ccodesdir/Initial_Data_Playground.c

// Step P0: Define REAL and NGHOSTS. This header is generated by NRPy+.
#include "Initial_Data_Playground_REAL__NGHOSTS.h"
#include "declare_Cparameters_struct.h"

// Step P1: Import needed header files
#include "stdio.h"
#include "stdlib.h"
#include "math.h"
#ifndef M_PI
#define M_PI 3.141592653589793238462643383279502884L
#endif
#ifndef M_SQRT1_2
#define M_SQRT1_2 0.707106781186547524400844362104849039L
#endif

// Step P2: Declare the IDX4S(gf,i,j,k) macro, which enables us to store 4-dimensions of
//          data in a 1D array. In this case, consecutive values of "i"
//          (all other indices held to a fixed value) are consecutive in memory, where
//          consecutive values of "j" (fixing all other indices) are separated by
//          Nxx_plus_2NGHOSTS0 elements in memory. Similarly, consecutive values of
//          "k" are separated by Nxx_plus_2NGHOSTS0*Nxx_plus_2NGHOSTS1 in memory, etc.
#define IDX4S(g,i,j,k) \ ( (i) + Nxx_plus_2NGHOSTS0 * ( (j) + Nxx_plus_2NGHOSTS1 * ( (k) + Nxx_plus_2NGHOSTS2 * (g) ) ) ) #define IDX4ptS(g,idx) ( (idx) + (Nxx_plus_2NGHOSTS0*Nxx_plus_2NGHOSTS1*Nxx_plus_2NGHOSTS2) * (g) ) #define IDX3S(i,j,k) ( (i) + Nxx_plus_2NGHOSTS0 * ( (j) + Nxx_plus_2NGHOSTS1 * ( (k) ) ) ) #define LOOP_REGION(i0min,i0max, i1min,i1max, i2min,i2max) \ for(int i2=i2min;i2<i2max;i2++) for(int i1=i1min;i1<i1max;i1++) for(int i0=i0min;i0<i0max;i0++) #define LOOP_ALL_GFS_GPS(ii) _Pragma("omp parallel for") \ for(int (ii)=0;(ii)<Nxx_plus_2NGHOSTS_tot*NUM_EVOL_GFS;(ii)++) // Step P3: Set UUGF and VVGF macros, as well as xx_to_Cart() #include "boundary_conditions/gridfunction_defines.h" // Step P4: Set xx_to_Cart(const paramstruct *restrict params, // REAL *restrict xx[3], // const int i0,const int i1,const int i2, // REAL xCart[3]), // which maps xx->Cartesian via // {xx[0][i0],xx[1][i1],xx[2][i2]}->{xCart[0],xCart[1],xCart[2]} #include "xx_to_Cart.h" // Step P5: Defines set_Nxx_dxx_invdx_params__and__xx(const int EigenCoord, const int Nxx[3], // paramstruct *restrict params, REAL *restrict xx[3]), // which sets params Nxx,Nxx_plus_2NGHOSTS,dxx,invdx, and xx[] for // the chosen Eigen-CoordSystem if EigenCoord==1, or // CoordSystem if EigenCoord==0. #include "set_Nxx_dxx_invdx_params__and__xx.h" // Step P6: Include basic functions needed to impose curvilinear // parity and boundary conditions. #include "boundary_conditions/CurviBC_include_Cfunctions.h" // Step P8: Include function for enforcing detgammabar constraint. #include "enforce_detgammahat_constraint.h" // Step P10: Declare function necessary for setting up the initial data. 
// Step P10.a: Define BSSN_ID() for BrillLindquist initial data // Step P10.b: Set the generic driver function for setting up BSSN initial data #include "initial_data.h" // Step P11: Declare function for evaluating Hamiltonian constraint (diagnostic) #include "Hamiltonian_constraint.h" #include "momentum_constraint.h" // main() function: // Step 0: Read command-line input, set up grid structure, allocate memory for gridfunctions, set up coordinates // Step 1: Set up initial data to an exact solution // Step 2: Start the timer, for keeping track of how fast the simulation is progressing. // Step 3: Integrate the initial data forward in time using the chosen RK-like Method of // Lines timestepping algorithm, and output periodic simulation diagnostics // Step 3.a: Output 2D data file periodically, for visualization // Step 3.b: Step forward one timestep (t -> t+dt) in time using // chosen RK-like MoL timestepping algorithm // Step 3.c: If t=t_final, output conformal factor & Hamiltonian // constraint violation to 2D data file // Step 3.d: Progress indicator printing to stderr // Step 4: Free all allocated memory int main(int argc, const char *argv[]) { paramstruct params; #include "set_Cparameters_default.h" // Step 0a: Read command-line input, error out if nonconformant if((argc != 4) || atoi(argv[1]) < NGHOSTS || atoi(argv[2]) < NGHOSTS || atoi(argv[3]) < 2 /* FIXME; allow for axisymmetric sims */) { fprintf(stderr,"Error: Expected three command-line arguments: ./BrillLindquist_Playground Nx0 Nx1 Nx2,\n"); fprintf(stderr,"where Nx[0,1,2] is the number of grid points in the 0, 1, and 2 directions.\n"); fprintf(stderr,"Nx[] MUST BE larger than NGHOSTS (= %d)\n",NGHOSTS); exit(1); } // Step 0b: Set up numerical grid structure, first in space... 
const int Nxx[3] = { atoi(argv[1]), atoi(argv[2]), atoi(argv[3]) }; if(Nxx[0]%2 != 0 || Nxx[1]%2 != 0 || Nxx[2]%2 != 0) { fprintf(stderr,"Error: Cannot guarantee a proper cell-centered grid if number of grid cells not set to even number.\n"); fprintf(stderr," For example, in case of angular directions, proper symmetry zones will not exist.\n"); exit(1); } // Step 0c: Set free parameters, overwriting Cparameters defaults // by hand or with command-line input, as desired. #include "free_parameters.h" // Step 0d: Uniform coordinate grids are stored to *xx[3] REAL *xx[3]; // Step 0d.i: Set bcstruct bc_struct bcstruct; { int EigenCoord = 1; // Step 0d.ii: Call set_Nxx_dxx_invdx_params__and__xx(), which sets // params Nxx,Nxx_plus_2NGHOSTS,dxx,invdx, and xx[] for the // chosen Eigen-CoordSystem. set_Nxx_dxx_invdx_params__and__xx(EigenCoord, Nxx, &params, xx); // Step 0d.iii: Set Nxx_plus_2NGHOSTS_tot #include "set_Cparameters-nopointer.h" const int Nxx_plus_2NGHOSTS_tot = Nxx_plus_2NGHOSTS0*Nxx_plus_2NGHOSTS1*Nxx_plus_2NGHOSTS2; // Step 0e: Find ghostzone mappings; set up bcstruct #include "boundary_conditions/driver_bcstruct.h" // Step 0e.i: Free allocated space for xx[][] array for(int i=0;i<3;i++) free(xx[i]); } // Step 0f: Call set_Nxx_dxx_invdx_params__and__xx(), which sets // params Nxx,Nxx_plus_2NGHOSTS,dxx,invdx, and xx[] for the // chosen (non-Eigen) CoordSystem. int EigenCoord = 0; set_Nxx_dxx_invdx_params__and__xx(EigenCoord, Nxx, &params, xx); // Step 0g: Set all C parameters "blah" for params.blah, including // Nxx_plus_2NGHOSTS0 = params.Nxx_plus_2NGHOSTS0, etc. #include "set_Cparameters-nopointer.h" const int Nxx_plus_2NGHOSTS_tot = Nxx_plus_2NGHOSTS0*Nxx_plus_2NGHOSTS1*Nxx_plus_2NGHOSTS2; // Step 0j: Error out if the number of auxiliary gridfunctions outnumber evolved gridfunctions. // This is a limitation of the RK method. You are always welcome to declare & allocate // additional gridfunctions by hand. 
if(NUM_AUX_GFS > NUM_EVOL_GFS) { fprintf(stderr,"Error: NUM_AUX_GFS > NUM_EVOL_GFS. Either reduce the number of auxiliary gridfunctions,\n"); fprintf(stderr," or allocate (malloc) by hand storage for *diagnostic_output_gfs. \n"); exit(1); } // Step 0k: Allocate memory for gridfunctions #include "MoLtimestepping/RK_Allocate_Memory.h" REAL *restrict auxevol_gfs = (REAL *)malloc(sizeof(REAL) * NUM_AUXEVOL_GFS * Nxx_plus_2NGHOSTS_tot); // Step 1: Set up initial data to an exact solution initial_data(&params, xx, y_n_gfs); // Step 1b: Apply boundary conditions, as initial data // are sometimes ill-defined in ghost zones. // E.g., spherical initial data might not be // properly defined at points where r=-1. apply_bcs_curvilinear(&params, &bcstruct, NUM_EVOL_GFS,evol_gf_parity, y_n_gfs); enforce_detgammahat_constraint(&params, xx, y_n_gfs); // Evaluate Hamiltonian & momentum constraint violations Hamiltonian_constraint(&params, xx, y_n_gfs, diagnostic_output_gfs); momentum_constraint( &params, xx, y_n_gfs, diagnostic_output_gfs); /* Step 2: 2D output: Output conformal factor (CFGF) and constraint violations (HGF, MU0GF, MU1GF, MU2GF). */ const int i0MIN=NGHOSTS; // In spherical, r=Delta r/2. 
const int i1mid=Nxx_plus_2NGHOSTS1/2; const int i2mid=Nxx_plus_2NGHOSTS2/2; LOOP_REGION(NGHOSTS,Nxx_plus_2NGHOSTS0-NGHOSTS, i1mid,i1mid+1, NGHOSTS,Nxx_plus_2NGHOSTS2-NGHOSTS) { REAL xCart[3]; xx_to_Cart(&params, xx, i0,i1,i2, xCart); int idx = IDX3S(i0,i1,i2); printf("%e %e %e %e %e %e %e\n",xCart[0],xCart[1], y_n_gfs[IDX4ptS(CFGF,idx)], log10(fabs(diagnostic_output_gfs[IDX4ptS(HGF,idx)])), log10(fabs(diagnostic_output_gfs[IDX4ptS(MU0GF,idx)])+1e-200), log10(fabs(diagnostic_output_gfs[IDX4ptS(MU1GF,idx)])+1e-200), log10(fabs(diagnostic_output_gfs[IDX4ptS(MU2GF,idx)])+1e-200)); } // Step 4: Free all allocated memory #include "boundary_conditions/bcstruct_freemem.h" #include "MoLtimestepping/RK_Free_Memory.h" free(auxevol_gfs); for(int i=0;i<3;i++) free(xx[i]); return 0; } import cmdline_helper as cmd cmd.C_compile(os.path.join(Ccodesdir,"Initial_Data_Playground.c"), "Initial_Data_Playground") cmd.delete_existing_files("out*.txt") cmd.delete_existing_files("out*.png") args_output_list = [["96 96 96", "out96.txt"], ["48 48 48", "out48.txt"]] for args_output in args_output_list: cmd.Execute("Initial_Data_Playground", args_output[0], args_output[1]) ``` <a id='plot'></a> # Step 6: Plotting the initial data \[Back to [top](#toc)\] $$\label{plot}$$ Here we plot the evolved conformal factor of these initial data on a 2D grid, such that darker colors imply stronger gravitational fields. Hence, we see the black hole(s) centered at $x/M=\pm 1$, where $M$ is an arbitrary mass scale (conventionally the [ADM mass](https://en.wikipedia.org/w/index.php?title=ADM_formalism&oldid=846335453) is chosen), and our formulation of Einstein's equations adopt $G=c=1$ [geometrized units](https://en.wikipedia.org/w/index.php?title=Geometrized_unit_system&oldid=861682626). ``` # First install scipy if it's not yet installed. This will have no effect if it's already installed. 
!pip install scipy import numpy as np from scipy.interpolate import griddata from pylab import savefig import matplotlib.pyplot as plt import matplotlib.cm as cm from IPython.display import Image x96,y96,valuesCF96,valuesHam96,valuesmomr96,valuesmomtheta96,valuesmomphi96 = np.loadtxt('out96.txt').T #Transposed for easier unpacking pl_xmin = -3. pl_xmax = +3. pl_ymin = -3. pl_ymax = +3. grid_x, grid_y = np.mgrid[pl_xmin:pl_xmax:100j, pl_ymin:pl_ymax:100j] points96 = np.zeros((len(x96), 2)) for i in range(len(x96)): points96[i][0] = x96[i] points96[i][1] = y96[i] grid96 = griddata(points96, valuesCF96, (grid_x, grid_y), method='nearest') grid96cub = griddata(points96, valuesCF96, (grid_x, grid_y), method='cubic') plt.clf() plt.title("Initial Data") plt.xlabel("x/M") plt.ylabel("y/M") # fig, ax = plt.subplots() #ax.plot(grid96cub.T, extent=(pl_xmin,pl_xmax, pl_ymin,pl_ymax)) plt.imshow(grid96.T, extent=(pl_xmin,pl_xmax, pl_ymin,pl_ymax)) savefig("ID.png") plt.close() Image("ID.png") # # interpolation='nearest', cmap=cm.gist_rainbow) ``` <a id='convergence'></a> # Step 7: Validation: Convergence of numerical errors (Hamiltonian & momentum constraint violations) to zero \[Back to [top](#toc)\] $$\label{convergence}$$ **Special thanks to George Vopal for creating the following plotting script.** The equations behind these initial data solve Einstein's equations exactly, at a single instant in time. One reflection of this solution is that the Hamiltonian constraint violation should be exactly zero in the initial data. However, when evaluated on numerical grids, the Hamiltonian constraint violation will *not* generally evaluate to zero due to the associated numerical derivatives not being exact. However, these numerical derivatives (finite difference derivatives in this case) should *converge* to the exact derivatives as the density of numerical sampling points approaches infinity. 
In this case, all of our finite difference derivatives agree with the exact solution, with an error term that drops with the uniform gridspacing to the fourth power: $\left(\Delta x^i\right)^4$. Here, as in the [Start-to-Finish Scalar Wave (Cartesian grids) NRPy+ tutorial](Tutorial-Start_to_Finish-ScalarWave.ipynb) and the [Start-to-Finish Scalar Wave (curvilinear grids) NRPy+ tutorial](Tutorial-Start_to_Finish-ScalarWaveCurvilinear.ipynb) we confirm this convergence. First, let's take a look at what the numerical error looks like on the x-y plane at a given numerical resolution, plotting $\log_{10}|H|$, where $H$ is the Hamiltonian constraint violation: ``` RefData=[valuesHam96,valuesmomr96,valuesmomtheta96,valuesmomphi96] SubTitles=["\mathcal{H}",'\mathcal{M}^r',r"\mathcal{M}^{\theta}","\mathcal{M}^{\phi}"] axN = [] #this will let us automate the subplots in the loop that follows grid96N = [] #we need to calculate the grid96 data for each constraint for use later plt.clf() # We want to create four plots. 
One for the Hamiltonian, and three for the momentum
# constraints (r,th,ph)

# Define the size of the overall figure
fig = plt.figure(figsize=(12,12)) # 12 in x 12 in

num_plots = 4
if dictID[initial_data_string].EnableMomentum == False:
    num_plots = 1

for p in range(num_plots):
    grid96 = griddata(points96, RefData[p], (grid_x, grid_y), method='nearest')
    grid96N.append(grid96)
    grid96cub = griddata(points96, RefData[p], (grid_x, grid_y), method='cubic')

    #fig, axes = plt.subplots(nrows=2, ncols=2, sharex=True, sharey=True)
    #Generate the subplot for each constraint
    ax = fig.add_subplot(221+p)
    axN.append(ax) # Grid of 2x2
    axN[p].set_xlabel('x/M')
    axN[p].set_ylabel('y/M')
    axN[p].set_title('$96^3$ Numerical Err.: $log_{10}|'+SubTitles[p]+'|$')
    fig96cub = plt.imshow(grid96cub.T, extent=(pl_xmin,pl_xmax, pl_ymin,pl_ymax))
    cb = plt.colorbar(fig96cub)

# Adjust the spacing between plots
plt.tight_layout(pad=4)
```

Next, we set up the same initial data on a lower-resolution, $48^3$ grid. Since the constraint violation (the numerical error associated with the fourth-order-accurate finite-difference derivatives) should converge to zero with the uniform gridspacing to the fourth power, $\left(\Delta x^i\right)^4$, we expect the constraint violation to increase (relative to the $96^3$ grid) by a factor of $\left(96/48\right)^4$. Here we demonstrate that this order of convergence is indeed observed: all points *except* those immediately surrounding the coordinate center of the black hole (where the spatial slice excises the physical singularity through [the puncture method](http://gr.physics.ncsu.edu/UMD_June09.pdf)) exhibit numerical errors that drop as $\left(\Delta x^i\right)^4$.
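The expected fourth-order behavior can also be checked in isolation. Here is a minimal standalone sketch (not part of the NRPy+-generated code; the test function and step sizes are arbitrary choices) showing that halving the step size of a fourth-order centered finite-difference first derivative shrinks the error by roughly $2^4=16$:

```python
import numpy as np

def fd4_deriv(f, x, h):
    # Fourth-order-accurate centered finite-difference first derivative:
    # f'(x) ~ (-f(x+2h) + 8f(x+h) - 8f(x-h) + f(x-2h)) / (12h)
    return (-f(x + 2*h) + 8*f(x + h) - 8*f(x - h) + f(x - 2*h)) / (12*h)

x0 = 0.7
# Errors against the exact derivative cos(x0), at h and h/2
errs = [abs(fd4_deriv(np.sin, x0, h) - np.cos(x0)) for h in (0.1, 0.05)]
ratio = errs[0] / errs[1]  # expect roughly (0.1/0.05)**4 = 16
```

This is the same convergence test performed below on the full grids: scale the coarse-grid error by the expected factor and check it overlays the fine-grid error.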
```
x48,y48,valuesCF48,valuesHam48,valuesmomr48,valuesmomtheta48,valuesmomphi48 = np.loadtxt('out48.txt').T #Transposed for easier unpacking

points48 = np.zeros((len(x48), 2))
for i in range(len(x48)):
    points48[i][0] = x48[i]
    points48[i][1] = y48[i]

RefData=[valuesHam48,valuesmomr48,valuesmomtheta48,valuesmomphi48]
SubTitles=["\mathcal{H}",'\mathcal{M}^r',r"\mathcal{M}^{\theta}","\mathcal{M}^{\phi}"]
axN = []
plt.clf()

# We want to create four plots. One for the Hamiltonian, and three for the momentum
# constraints (r,th,ph)

# Define the size of the overall figure
fig = plt.figure(figsize=(12,12)) # 12 in x 12 in

for p in range(num_plots): #loop to cycle through our constraints and plot the data
    grid48 = griddata(points48, RefData[p], (grid_x, grid_y), method='nearest')
    griddiff_48_minus_96 = np.zeros((100,100))
    griddiff_48_minus_96_1darray = np.zeros(100*100)
    gridx_1darray_yeq0 = np.zeros(100)
    grid48_1darray_yeq0 = np.zeros(100)
    grid96_1darray_yeq0 = np.zeros(100)
    count = 0
    for i in range(100):
        for j in range(100):
            griddiff_48_minus_96[i][j] = grid48[i][j] - grid96N[p][i][j]
            griddiff_48_minus_96_1darray[count] = griddiff_48_minus_96[i][j]
            if j==49:
                gridx_1darray_yeq0[i] = grid_x[i][j]
                grid48_1darray_yeq0[i] = grid48[i][j] + np.log10((48./96.)**4)
                grid96_1darray_yeq0[i] = grid96N[p][i][j]
            count = count + 1

    #Generate the subplot for each constraint
    ax = fig.add_subplot(221+p)
    axN.append(ax) # Grid of 2x2
    axN[p].set_title('Plot Demonstrating $4^{th}$-Order Convergence of $'+SubTitles[p]+'$')
    axN[p].set_xlabel("x/M")
    axN[p].set_ylabel("$log_{10}$(Relative Error)")

    ax.plot(gridx_1darray_yeq0, grid96_1darray_yeq0, 'k-', label='Nr=96')
    ax.plot(gridx_1darray_yeq0, grid48_1darray_yeq0, 'k--', label='Nr=48, mult by (48/96)^4')
    ax.set_ylim([-14,4.])
    legend = ax.legend(loc='lower right', shadow=True, fontsize='x-large')
    legend.get_frame().set_facecolor('C1')

# Adjust the spacing between plots
plt.tight_layout(pad=4)
```

<a id='latex_pdf_output'></a>

# Step 8: Output this
notebook to $\LaTeX$-formatted PDF file \[Back to [top](#toc)\] $$\label{latex_pdf_output}$$ The following code cell converts this Jupyter notebook into a proper, clickable $\LaTeX$-formatted PDF file. After the cell is successfully run, the generated PDF may be found in the root NRPy+ tutorial directory, with filename [Tutorial-Start_to_Finish-BSSNCurvilinear-Setting_up_Exact_Initial_Data.pdf](Tutorial-Start_to_Finish-BSSNCurvilinear-Setting_up_Exact_Initial_Data.pdf) (Note that clicking on this link may not work; you may need to open the PDF file through another means.) ``` import cmdline_helper as cmd # NRPy+: Multi-platform Python command-line interface cmd.output_Jupyter_notebook_to_LaTeXed_PDF("Tutorial-Start_to_Finish-BSSNCurvilinear-Setting_up_Exact_Initial_Data") ```
## Silicon above 15 m ``` import datetime as dt import matplotlib.pyplot as plt from matplotlib.colors import LogNorm import numpy as np import pandas as pd import statsmodels.api as sm import xarray as xr foramt = "{:.2}" myformat = {'bias': foramt, 'rmse': foramt, 'swillmott': foramt, 'slopedev': foramt, 'const': foramt, 'systematic': foramt, 'nonsystematic':foramt, 'spread': foramt} with xr.open_dataset('/home/sallen/MEOPAR/grid/mesh_mask201702.nc') as mesh: deptht = mesh.gdept_1d[0].values def bias(df, obs, mod): diffy = df[mod] - df[obs] return diffy.count(), diffy.mean() def rmse(df, obs, mod): return (np.sqrt(((df[mod] - df[obs])**2).mean())) def swillmott(df, obs, mod): meanobs = df[obs].mean() return (((df[mod] - df[obs])**2).sum() /(( (df[mod] - meanobs).abs() + (df[obs] - meanobs).abs() )**2).sum()) def slope_inter(df, obs, mod): X = df[obs] X2 = X**2 y = df[mod] X = np.column_stack((X, X2)) X = sm.add_constant(X) # Fit and make the predictions by the model model = sm.OLS(y, X, missing='drop').fit() predictions = model.predict(X) nonsyst = np.sqrt(((y - predictions)**2).mean()) systematic = np.sqrt(((predictions - df[obs])**2).mean()) return predictions, model.params['x2'], model.params['x1'], model.params['const'], systematic, nonsyst def spread(df, obs, mod): return 1 - ((df[mod] - df[mod].mean())**2).mean() / ((df[obs] - df[obs].mean())**2).mean() def read_pieces(pieces): temp1 = pd.read_csv(pieces[0]) for piece in pieces[1:]: nextpiece = pd.read_csv(piece) temp1 = pd.concat([temp1, nextpiece], ignore_index=True) return temp1 def plot_and_stats(temp1, name, idepth, jdepth): fig, ax = plt.subplots(1, 1, figsize=(6, 5)) vmax = 90 vmin = 0 condition = (temp1.k >= idepth) & (temp1.k <= jdepth) print (deptht[idepth], deptht[jdepth]) counts, xedges, yedges, color = ax.hist2d(temp1.Si[(temp1.k >= idepth) & (temp1.k <= jdepth)], temp1.mod_silicon[(temp1.k >= idepth) & (temp1.k <= jdepth)], bins=np.arange(vmin, vmax, 1), norm=LogNorm()); fig.colorbar(color) 
    number, tbias = bias(temp1[condition], 'Si', 'mod_silicon')
    trmse = rmse(temp1[condition], 'Si', 'mod_silicon')
    tswillmott = swillmott(temp1[condition], 'Si', 'mod_silicon')
    predictions, m2, m, c, syst, nonsyst = slope_inter(temp1[condition], 'Si', 'mod_silicon')
    tspread = spread(temp1[condition], 'Si', 'mod_silicon')

    ax.plot([vmin, vmax], [vmin, vmax], 'w-');
    xr = np.arange(vmin, vmax, 0.5)
    ax.plot(temp1.Si[condition], predictions, 'r.');

    sc = 5
    sh = 2*sc-1
    bot = 65
    top = bot + 2*sh
    ax.arrow(5, bot, 0, sh-np.abs(tbias)/2, head_width=0.5*sc, head_length=0.2*sc, length_includes_head=True)
    ax.arrow(5, top, 0, -sh+np.abs(tbias)/2, head_width=0.5*sc, head_length=0.2*sc, length_includes_head=True)
    ax.arrow(9, bot, 0, sh-syst/2, head_width=0.5*sc, head_length=0.2*sc, length_includes_head=True)
    ax.arrow(9, top, 0, -sh+syst/2, head_width=0.5*sc, head_length=0.2*sc, length_includes_head=True)
    ax.arrow(13, bot, 0, sh-nonsyst/2, head_width=0.5*sc, head_length=0.2*sc, length_includes_head=True)
    ax.arrow(13, top, 0, -sh+nonsyst/2, head_width=0.5*sc, head_length=0.2*sc, length_includes_head=True);

    Cp2 = {'number': number, 'bias': tbias, 'rmse': trmse, 'swillmott': tswillmott,
           'slopedev': 1-m, 'const': c, 'systematic': syst, 'nonsystematic': nonsyst,
           'spread': tspread}
    ax.text(5-1.2, 48, 'bias', rotation=90)
    ax.text(9-1.2, 48, 'systematic', rotation=90)
    ax.text(13-1.2, 42, 'non-systematic', rotation=90)
    ax.set_title(f'{name}, Silicon between 0 and 15 m');
    dCp2 = pd.DataFrame(data=Cp2, index=[name])
    return dCp2

pieces = ('/home/sallen/202007/H201812/ObsModel_201812_Bio_20150101-20151231.csv',
          '/home/sallen/202007/H201812/ObsModel_201812_Bio_20160101-20161231.csv',
          '/home/sallen/202007/H201812/ObsModel_201812_Bio_20170101-20171231.csv',
          '/home/sallen/202007/H201812/ObsModel_201812_PSF_20150101-20151231.csv',
          #
'/home/sallen/202007/H201905/ObsModel_201812_PSF_20160101-20161231.csv', # '/home/sallen/202007/H201905/ObsModel_201812_PSF_20170101-20171231.csv', '/home/sallen/202007/H201812/ObsModel_H201812_pug_20150101_20151231.csv', '/home/sallen/202007/H201812/ObsModel_H201812_pug_20160101_20161231.csv', '/home/sallen/202007/H201812/ObsModel_H201812_pug_20170101_20171231.csv', '/home/sallen/202007/H201812/ObsModel_H201812_hplc_20150101_20151231.csv', '/home/sallen/202007/H201812/ObsModel_H201812_hplc_20160101_20161231.csv', '/home/sallen/202007/H201812/ObsModel_H201812_hplc_20170101_20171231.csv') temp1 = read_pieces(pieces) temp1['Si'] = temp1.Si.fillna(value=temp1['Silicate [umol/L]']) idepth = 0 jdepth = 14 d201812 = plot_and_stats(temp1, 'H201812', idepth, jdepth) d201812.style.format(myformat) pieces = ('/home/sallen/202007/H201905/ObsModel_201905_Bio_20150101-20151231.csv', '/home/sallen/202007/H201905/ObsModel_201905_Bio_20160101-20161231.csv', '/home/sallen/202007/H201905/ObsModel_201905_Bio_20170101-20171231.csv', '/home/sallen/202007/H201905/ObsModel_201905_PSF_20150101-20151231.csv', # '/home/sallen/202007/H201905/ObsModel_201905_PSF_20160101-20161231.csv', # '/home/sallen/202007/H201905/ObsModel_201905_PSF_20170101-20171231.csv', '/home/sallen/202007/H201905/ObsModel_H201905_pug_20150101_20151231.csv', '/home/sallen/202007/H201905/ObsModel_H201905_pug_20160101_20161231.csv', '/home/sallen/202007/H201905/ObsModel_H201905_pug_20170101_20171231.csv', '/home/sallen/202007/H201905/ObsModel_H201905_hplc_20150101_20151231.csv', '/home/sallen/202007/H201905/ObsModel_H201905_hplc_20160101_20161231.csv', '/home/sallen/202007/H201905/ObsModel_H201905_hplc_20170101_20171231.csv') temp1 = read_pieces(pieces) temp1['Si'] = temp1.Si.fillna(value=temp1['Silicate [umol/L]']) d201905 = plot_and_stats(temp1, 'H201905', idepth, jdepth) d201905.style.format(myformat) pieces = ('/home/sallen/202007/202007C-p2/ObsModel_202007Cp2_Bio_20150101-20151231.csv', 
'/home/sallen/202007/202007C-p2/ObsModel_202007Cp2_Bio_20160101-20161231.csv', '/home/sallen/202007/202007C-p2/ObsModel_202007Cp2_Bio_20170101-20171231.csv', '/home/sallen/202007/202007C-p2/ObsModel_202007Cp2_PSF_20150101-20151231.csv', # '/home/sallen/202007/202007C-p2/ObsModel_202007Cp2_PSF_20160101-20161231.csv', # '/home/sallen/202007/202007C-p2/ObsModel_202007Cp2_PSF_20170101-20171231.csv', '/home/sallen/202007/202007C-p2/ObsModel_202007Cp2_PUG_20150101-20151231.csv', '/home/sallen/202007/202007C-p2/ObsModel_202007Cp2_PUG_20160101-20161231.csv', '/home/sallen/202007/202007C-p2/ObsModel_202007Cp2_PUG_20170101-20171231.csv', '/home/sallen/202007/202007C-p2/ObsModel_202007Cp2_hplc_20150101_20151231.csv', '/home/sallen/202007/202007C-p2/ObsModel_202007Cp2_hplc_20160101_20161231.csv', '/home/sallen/202007/202007C-p2/ObsModel_202007Cp2_hplc_20170101_20171231.csv') temp1 = read_pieces(pieces) temp1['Si'] = temp1.Si.fillna(value=temp1['Silicate [umol/L]']) dCp2 = plot_and_stats(temp1, 'Cp2', idepth, jdepth) dCp2.style.format(myformat) pieces = ('/home/sallen/202007/202007C-p3/ObsModel_202007Cp3_bot_20150101_20151231.csv', '/home/sallen/202007/202007C-p3/ObsModel_202007Cp3_bot_20160101_20161231.csv', '/home/sallen/202007/202007C-p3/ObsModel_202007Cp3_bot_20170101_20171231.csv', '/home/sallen/202007/202007C-p3/ObsModel_202007Cp3_psf_20150101_20150331.csv', '/home/sallen/202007/202007C-p3/ObsModel_202007Cp3_psf_20150401_20150630.csv', '/home/sallen/202007/202007C-p3/ObsModel_202007Cp3_PSF_20150701-20151231.csv', # '/home/sallen/202007/202007C-p3/ObsModel_202007Cp3_psf_20160101_20161231.csv', # '/home/sallen/202007/202007C-p3/ObsModel_202007Cp3_psf_20170101_20171231.csv', '/home/sallen/202007/202007C-p3/ObsModel_202007Cp3_pug_20150101_20150331.csv', '/home/sallen/202007/202007C-p3/ObsModel_202007Cp3_pug_20150401_20150630.csv', '/home/sallen/202007/202007C-p3/ObsModel_202007Cp3_PUG_20150701-20151231.csv', 
'/home/sallen/202007/202007C-p3/ObsModel_202007Cp3_pug_20160101_20161231.csv', '/home/sallen/202007/202007C-p3/ObsModel_202007Cp3_pug_20170101_20171231.csv', '/home/sallen/202007/202007C-p3/ObsModel_202007Cp3_hplc_20150101_20151231.csv', '/home/sallen/202007/202007C-p3/ObsModel_202007Cp3_hplc_20160101_20161231.csv', '/home/sallen/202007/202007C-p3/ObsModel_202007Cp3_hplc_20170101_20171231.csv') temp1 = read_pieces(pieces) temp1['Si'] = temp1.Si.fillna(value=temp1['Silicate [umol/L]']) dCp3 = plot_and_stats(temp1, 'Cp3', idepth, jdepth) dCp3.style.format(myformat) pieces = ('/home/sallen/202007/202007D-again/ObsModel_202007D-again_Bio_20150101-20151231.csv', '/home/sallen/202007/202007D-again/ObsModel_202007D-again_Bio_20160101-20161231.csv', '/home/sallen/202007/202007D-again/ObsModel_202007D-again_Bio_20170101-20171231.csv', '/home/sallen/202007/202007D-again/ObsModel_202007D-again_PSF_20150101-20151231.csv', # '/home/sallen/202007/202007D-again/ObsModel_202007D-again_PSF_20160101-20161231.csv', # '/home/sallen/202007/202007D-again/ObsModel_202007D-again_PSF_20170101-20171231.csv', '/home/sallen/202007/202007D-again/ObsModel_202007D-again_pug_20150101_20151231.csv', '/home/sallen/202007/202007D-again/ObsModel_202007D-again_pug_20160101_20161231.csv', '/home/sallen/202007/202007D-again/ObsModel_202007D-again_pug_20170101_20171231.csv', '/home/sallen/202007/202007D-again/ObsModel_202007D-again_hplc_20150101_20151231.csv', '/home/sallen/202007/202007D-again/ObsModel_202007D-again_hplc_20160101_20161231.csv', '/home/sallen/202007/202007D-again/ObsModel_202007D-again_hplc_20170101_20171231.csv' ) temp1 = read_pieces(pieces) temp1['Si'] = temp1.Si.fillna(value=temp1['Silicate [umol/L]']) Dagain = plot_and_stats(temp1, 'Dagain', idepth, jdepth) Dagain.style.format(myformat) pieces = ('/home/sallen/202007/202007D-lowR/ObsModel_202007D-lowR_Bio_20150101-20151231.csv', '/home/sallen/202007/202007D-lowR/ObsModel_202007D-lowR_bot_20160101_20161231.csv', 
'/home/sallen/202007/202007D-lowR/ObsModel_202007D-lowR_bot_20170101_20171231.csv', '/home/sallen/202007/202007D-lowR/ObsModel_202007D-lowR_PSF_20150101-20151231.csv', # '/home/sallen/202007/202007D-lowR/ObsModel_202007D-lowR_psf_20160101_20161231.csv', # '/home/sallen/202007/202007D-lowR/ObsModel_202007D-lowR_psf_20170101_20171231.csv', '/home/sallen/202007/202007D-lowR/ObsModel_202007D-lowR_pug_20150101_20151231.csv', '/home/sallen/202007/202007D-lowR/ObsModel_202007D-lowR_pug_20160101_20161231.csv', '/home/sallen/202007/202007D-lowR/ObsModel_202007D-lowR_pug_20170101_20171231.csv', '/home/sallen/202007/202007D-lowR/ObsModel_202007D-lowR_hplc_20150101_20151231.csv', '/home/sallen/202007/202007D-lowR/ObsModel_202007D-lowR_hplc_20160101_20161231.csv', '/home/sallen/202007/202007D-lowR/ObsModel_202007D-lowR_hplc_20170101_20171231.csv') temp1 = read_pieces(pieces) temp1['Si'] = temp1.Si.fillna(value=temp1['Silicate [umol/L]']) DlowR = plot_and_stats(temp1, 'D-lowR', idepth, jdepth) DlowR.style.format(myformat) pieces = ('/home/sallen/202007/202007F/ObsModel_202007F_bot_20150101_20151231.csv', '/home/sallen/202007/202007F/ObsModel_202007F_bot_20160101_20161231.csv', '/home/sallen/202007/202007F/ObsModel_202007F_bot_20170101_20171231.csv', '/home/sallen/202007/202007F/ObsModel_202007F_psf_20150101_20151231.csv', # '/home/sallen/202007/202007F/ObsModel_202007F_psf_20160101_20161231.csv', # '/home/sallen/202007/202007F/ObsModel_202007F_psf_20170101_20171231.csv', '/home/sallen/202007/202007F/ObsModel_202007F_pug_20150101_20151231.csv', '/home/sallen/202007/202007F/ObsModel_202007F_pug_20160101_20161231.csv', '/home/sallen/202007/202007F/ObsModel_202007F_pug_20170101_20171231.csv', '/home/sallen/202007/202007F/ObsModel_202007F_hplc_20150101_20151231.csv', '/home/sallen/202007/202007F/ObsModel_202007F_hplc_20160101_20161231.csv', '/home/sallen/202007/202007F/ObsModel_202007F_hplc_20170101_20171231.csv') temp1 = read_pieces(pieces) temp1['Si'] = 
temp1.Si.fillna(value=temp1['Silicate [umol/L]']) modF = plot_and_stats(temp1, 'F', idepth, jdepth) modF.style.format(myformat) pieces = ('/home/sallen/202007/202007G-p1/ObsModel_202007Gp1_bot_20150101_20151231.csv', '/home/sallen/202007/202007G-p1/ObsModel_202007Gp1_bot_20160101_20161231.csv', '/home/sallen/202007/202007G-p1/ObsModel_202007Gp1_bot_20170101_20171231.csv', '/home/sallen/202007/202007G-p1/ObsModel_202007Gp1_psf_20150101_20151231.csv', #bd '/home/sallen/202007/202007G-p1/ObsModel_202007Gp1_psf_20160101_20161231.csv', # bd '/home/sallen/202007/202007G-p1/ObsModel_202007Gp1_psf_20170101_20171231.csv', '/home/sallen/202007/202007G-p1/ObsModel_202007Gp1_pug_20150101_20151231.csv', '/home/sallen/202007/202007G-p1/ObsModel_202007Gp1_pug_20160101_20161231.csv', '/home/sallen/202007/202007G-p1/ObsModel_202007Gp1_pug_20170101_20171231.csv', '/home/sallen/202007/202007G-p1/ObsModel_202007Gp1_hplc_20150101_20151231.csv', '/home/sallen/202007/202007G-p1/ObsModel_202007Gp1_hplc_20160101_20161231.csv', '/home/sallen/202007/202007G-p1/ObsModel_202007Gp1_hplc_20170101_20171231.csv' ) temp1 = read_pieces(pieces) temp1['Si'] = temp1.Si.fillna(value=temp1['Silicate [umol/L]']) modGp1 = plot_and_stats(temp1, 'Gp1', idepth, jdepth) modGp1.style.format(myformat) pieces = ('/home/sallen/202007/202007G-p2/ObsModel_202007Gp2_bot_20150101_20151231.csv', '/home/sallen/202007/202007G-p2/ObsModel_202007Gp2f0_bot_20160101_20161231.csv', '/home/sallen/202007/202007G-p2/ObsModel_202007Gp2f0_bot_20170101_20171231.csv', '/home/sallen/202007/202007G-p2/ObsModel_202007Gp2_psf_20150101_20151231.csv', # '/home/sallen/202007/202007C-p3/ObsModel_202007Cp3_psf_20160101_20161231.csv', # '/home/sallen/202007/202007C-p3/ObsModel_202007Cp3_psf_20170101_20171231.csv', '/home/sallen/202007/202007G-p2/ObsModel_202007Gp2_pug_20150101_20151231.csv', '/home/sallen/202007/202007G-p2/ObsModel_202007Gp2f0_pug_20160101_20161231.csv', 
'/home/sallen/202007/202007G-p2/ObsModel_202007Gp2f0_pug_20170101_20171231.csv',
          '/home/sallen/202007/202007G-p2/ObsModel_202007Gp2_hplc_20150101_20151231.csv',
          '/home/sallen/202007/202007G-p2/ObsModel_202007Gp2f0_hplc_20160101_20161231.csv',
          '/home/sallen/202007/202007G-p2/ObsModel_202007Gp2f0_hplc_20170101_20171231.csv')
temp1 = read_pieces(pieces)
temp1['Si'] = temp1.Si.fillna(value=temp1['Silicate [umol/L]'])
modGp2 = plot_and_stats(temp1, 'Gp2', idepth, jdepth)
modGp2.style.format(myformat)

pieces = ('/home/sallen/202007/202007H/ObsModel_202007H_bot_20150101_20151231.csv',
          '/home/sallen/202007/202007H/ObsModel_202007H_bot_20160101_20161231.csv',
          '/home/sallen/202007/202007H/ObsModel_202007H_bot_20170101_20171231.csv',
          '/home/sallen/202007/202007H/ObsModel_202007H_hplc_20150101_20151231.csv',
          '/home/sallen/202007/202007H/ObsModel_202007H_hplc_20160101_20161231.csv',
          '/home/sallen/202007/202007H/ObsModel_202007H_hplc_20170101_20171231.csv',
          '/home/sallen/202007/202007H/ObsModel_202007H_pug_20150101_20151231.csv',
          '/home/sallen/202007/202007H/ObsModel_202007H_pug_20160101_20161231.csv',
          '/home/sallen/202007/202007H/ObsModel_202007H_pug_20170101_20171231.csv',
          '/home/sallen/202007/202007H/ObsModel_202007H_psf_20150101_20151231.csv',
         )
temp1 = read_pieces(pieces)
temp1['Si'] = temp1.Si.fillna(value=temp1['Silicate [umol/L]'])
modH = plot_and_stats(temp1, 'H', idepth, jdepth)
modH.style.format(myformat)

def highlight_max_min(s):
    ''' Color the largest-magnitude value in a Series red and the smallest dark green. '''
    is_max = abs(s) == abs(s).max()
    is_min = abs(s) == abs(s).min()
    color = []
    for v, v2 in zip(is_max, is_min):
        if v:
            color.append('red')
        elif v2:
            color.append('darkgreen')
        else:
            color.append('black')
    return ['color: %s' % color[i] for i in range(len(is_max))]

alltogether = pd.concat([d201812, d201905, dCp2, dCp3, Dagain, DlowR, modF, modGp1, modGp2, modH], axis=0)
foramt = "{:.2}"
alltogether.style.format(myformat).apply(highlight_max_min)
```

**Conclusion**

None of these look great.
The best is Gp1, probably a little better than Gp2. Once again Gp1 beats Gp2, consistent with everything except salinity.
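For reference, the systematic vs. non-systematic error split reported in the tables comes from a quadratic least-squares fit of model against observations (`slope_inter` above, which uses statsmodels). A self-contained sketch on synthetic data illustrates how the RMSE decomposes; the numbers here (true slope 1.1, offset -2, noise sigma 3) are made up for the illustration, and `numpy.polyfit` stands in for the statsmodels OLS fit:

```python
import numpy as np

rng = np.random.default_rng(0)
obs = rng.uniform(0, 60, 500)                # synthetic "observations"
mod = 1.1 * obs - 2 + rng.normal(0, 3, 500)  # synthetic "model": bias + noise

# Quadratic least-squares fit of model vs. observations, mirroring slope_inter()
coeffs = np.polyfit(obs, mod, 2)
pred = np.polyval(coeffs, obs)

rmse = np.sqrt(np.mean((mod - obs) ** 2))
systematic = np.sqrt(np.mean((pred - obs) ** 2))     # error captured by the fit
nonsystematic = np.sqrt(np.mean((mod - pred) ** 2))  # scatter about the fit
```

Because the fit is least-squares, the residuals are orthogonal to the fitted curve, so `rmse**2 == systematic**2 + nonsystematic**2` holds up to floating point; the two arrow pairs on each plot above show exactly these two components.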
# Archived code

This notebook contains code for variables that I removed from the model.

## Previous Week Averages

Note: when add_pwa is used for both the t and co columns, the first pwa rows end up being removed twice

```
# function for calculating the pwa of a column (excludes the day measured in the t and co columns)
def calc_pwa(data, col, pwa):
    pwa_data = pd.DataFrame()
    # shifting the data for the number of days and creating a new column for each day
    for i in range(1, pwa+1):
        pwa_data[col + '_' + str(i)] = data.shift(i)[col]
    # saving the row means as a new x_pwa column in the original dataframe
    data[col + '_pwa'] = pwa_data.aggregate(np.mean, axis=1)
    return data

# interface that makes calls to calc_pwa depending on whether only one or all cities are used
def add_pwa(data, col, pwa):
    if use_all_cities:
        # creating a unique id for every row
        data['id'] = data.city + data.index
        data_pwa = []
        names_pwa = []
        # iterating over all cities
        for city in unique_cities:
            subset = data.loc[data.city == city].copy()
            subset = calc_pwa(subset, col, pwa)
            # saving the pwa values and row ids for the current city
            data_pwa.extend(subset[col+'_pwa'].iloc[pwa:])
            names_pwa.extend(subset.id[pwa:])
        # joining the data from each city with the original dataframe and removing the unique ids
        data_pwa = pd.Series(data_pwa, index=names_pwa, name=col+'_pwa')
        data = data.join(data_pwa, how='inner', on='id')
        data.drop(columns='id', inplace=True)
    else:
        data = calc_pwa(data, col, pwa)
        data = data[pwa:]
    return data
```

## Temperature experiments

```
counts, values = np.histogram(df_comb.t, bins=100)
values = values[:-1] + ((values[1:] - values[:-1])/2)
values = values.astype('float64')
y = np.array(pd.Series(counts).rolling(window=7).mean())
maxima = find_peaks(y)[0]
minima = argrelmin(y)[0]

plt.plot(values, counts)
plt.plot(values, y, c='orange', alpha=0.75)
for maximum in maxima:
    plt.plot(np.repeat(values[maximum], 2), np.array([0, y[maximum]]))
for minimum in minima:
    plt.plot(np.repeat(values[minimum], 2),
np.array([0, y[minimum]])) t_split = values[maxima][0] print('Splitting on:', t_split) t_split_mask = df_comb.t > t_split df_high_t = df.loc[t_split_mask] df_comb_high_t = df_comb.loc[t_split_mask] dates_high_t = dates.loc[t_split_mask] print('Using', df_comb_high_t.shape[0], 'out of', df_comb.shape[0], 'data points', '(', np.round(100*df_comb_high_t.shape[0]/df_comb.shape[0], 2), '% )\n') y_train, y_test, log_y_test, X_train, X_test, dates_test = split_data(df_high_t, df_comb_high_t, dates_high_t) ```
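The previous-week-average logic in `calc_pwa` — a mean over the `pwa` rows preceding each row, with the current row excluded — can be sanity-checked without pandas. This is an illustrative sketch, not the notebook's code; positions without a full history are left as `None` here, mirroring the rows that `add_pwa` drops:

```python
def pwa_by_hand(series, pwa):
    """Mean of the pwa values preceding each position (current value excluded)."""
    out = []
    for i in range(len(series)):
        window = series[max(0, i - pwa):i]
        # no full pwa-day history yet -> no average (add_pwa drops these rows)
        out.append(sum(window) / pwa if len(window) == pwa else None)
    return out

vals = [1, 2, 3, 4, 5, 6, 7, 8]
print(pwa_by_hand(vals, 7))  # last entry: mean of the previous 7 days = 4.0
```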
# Baseline Model - Initial parameters search. - Search parameter for baseline model. #### Author: Israel Oliveira [\[e-mail\]](mailto:'Israel%20Oliveira%20'<prof.israel@gmail.com>) ``` %load_ext watermark import pandas as pd import numpy as np from sklearn.model_selection import train_test_split, ParameterGrid from sklearn.ensemble import RandomForestClassifier from sklearn.metrics import f1_score, roc_auc_score from tqdm import tqdm from glob import glob # import matplotlib.pyplot as plt # %matplotlib inline # from matplotlib import rcParams # from cycler import cycler # rcParams['figure.figsize'] = 12, 8 # 18, 5 # rcParams['axes.spines.top'] = False # rcParams['axes.spines.right'] = False # rcParams['axes.grid'] = True # rcParams['axes.prop_cycle'] = cycler(color=['#365977']) # rcParams['lines.linewidth'] = 2.5 # import seaborn as sns # sns.set_theme() # pd.set_option("max_columns", None) # pd.set_option("max_rows", None) # pd.set_option('display.max_colwidth', None) from IPython.display import Markdown, display def md(arg): display(Markdown(arg)) # from pandas_profiling import ProfileReport # #report = ProfileReport(#DataFrame here#, minimal=True) # #report.to # import pyarrow.parquet as pq # #df = pq.ParquetDataset(path_to_folder_with_parquets, filesystem=None).read_pandas().to_pandas() # import json # def open_file_json(path,mode='r',var=None): # if mode == 'w': # with open(path,'w') as f: # json.dump(var, f) # if mode == 'r': # with open(path,'r') as f: # return json.load(f) # import functools # import operator # def flat(a): # return functools.reduce(operator.iconcat, a, []) # import json # from glob import glob # from typing import NewType # DictsPathType = NewType("DictsPath", str) # def open_file_json(path): # with open(path, "r") as f: # return json.load(f) # class LoadDicts: # def __init__(self, dict_path: DictsPathType = "./data"): # Dicts_glob = glob(f"{dict_path}/*.json") # self.List = [] # self.Dict = {} # for path_json in Dicts_glob: # name = 
path_json.split("/")[-1].replace(".json", "") # self.List.append(name) # self.Dict[name] = open_file_json(path_json) # setattr(self, name, self.Dict[name]) # Run this cell before close. %watermark -d --iversion -b -r -g -m -v !cat /proc/cpuinfo |grep 'model name'|head -n 1 |sed -e 's/model\ name/CPU/' !free -h |cut -d'i' -f1 |grep -v total ``` # Initial search ``` n_jobs = 4 N_fraud_test = 200 N_truth_test = int(2e4) N_truth_train = int(2e5) split_seeds = [13, 17, 47, 53] # random_state used by RandomForestClassifier random_state = 42 # Number of trees in random forest n_estimators = [200, 400, 800] # Number of features to consider at every split max_features = ['auto', 'sqrt'] # Minimum number of samples required to split a node min_samples_split = [2, 8] # Minimum number of samples required at each leaf node min_samples_leaf = [1, 4] # Method of selecting samples for training each tree bootstrap = [True] # Create the random grid search_grid = {'n_estimators': n_estimators, 'max_features': max_features, 'min_samples_split': min_samples_split, 'min_samples_leaf': min_samples_leaf, 'bootstrap': bootstrap} print(search_grid) target_col = 'Class' ds_cols = ['V1', 'V2', 'V3', 'V4', 'V5', 'V6', 'V7', 'V8', 'V9', 'V10', 'V11', 'V12', 'V13', 'V14', 'V15', 'V16', 'V17', 'V18', 'V19', 'V20', 'V21', 'V22', 'V23', 'V24', 'V25', 'V26', 'V27', 'V28', 'Amount'] glob_paths = glob('/work/data/creditcard*.csv') total_exps = len(glob_paths)*len(split_seeds)*len(ParameterGrid(search_grid)) print(total_exps) with tqdm(total=total_exps) as progress_bar: def RunGrid(df_train, df_test, random_state): out = [] for params in ParameterGrid(search_grid): params['random_state'] = random_state params['n_jobs'] = n_jobs rf = RandomForestClassifier(**params) rf.fit(df_train[ds_cols].to_numpy(), df_train[target_col].to_numpy()) probs = rf.predict_proba(df_test[ds_cols].to_numpy()) exp = { 'probs' : probs, 'rf_classes': rf.classes_, 'params': params } out.append(exp) progress_bar.update(1)
return out Results = {} for ds_path in glob_paths: df = pd.read_csv(ds_path) df = df[ds_cols+[target_col]] df_fraud = df.query('Class == 1').reset_index(drop=True).copy() df_truth = df.query('Class == 0').reset_index(drop=True).copy() del df set_exp = {} for seed in split_seeds: df_fraud_train, df_fraud_test = train_test_split(df_fraud, test_size=N_fraud_test, random_state=seed) df_truth_train, df_truth_test = train_test_split(df_truth, train_size=N_truth_train, test_size=N_truth_test, random_state=seed) df_train = pd.concat([df_fraud_train, df_truth_train]).reset_index(drop=True) df_test = pd.concat([df_fraud_test, df_truth_test]).reset_index(drop=True) out = RunGrid(df_train, df_test, random_state) set_exp[seed] = { 'target_test': df_test[target_col].to_numpy(), 'exps': out } Results[ds_path] = set_exp cols_results = ['ds_path', 'seed'] cols_param = ['bootstrap', 'max_features', 'min_samples_leaf', 'min_samples_split', 'n_estimators', 'random_state'] cols_metrics = ['Fraud_True_Sum','Truth_False_Sum', 'Fraud_False_Sum', 'F1_M', 'AUC_ROC_M', 'TP_0', 'TP_1'] cols = cols_results+cols_param+cols_metrics ', '.join(cols_metrics) ''.join([ f'param[\'{col}\'], ' for col in cols_param]) data = [] for ds_path,sets_exp in Results.items(): for seed,set_exp in sets_exp.items(): target_test = set_exp['target_test'] for exp in set_exp['exps']: df_exp = pd.DataFrame(exp['probs'], columns=exp['rf_classes']) df_exp['pred'] = df_exp[[0, 1]].apply(lambda x: exp['rf_classes'][np.argmax(x)], axis=1) df_exp['target'] = target_test Fraud_True_Sum = df_exp.loc[(df_exp.pred == 1) & (df_exp.target == 1)][1].sum()/sum(df_exp.target == 1) Truth_False_Sum = df_exp.loc[(df_exp.pred == 0) & (df_exp.target == 1)][0].sum()/sum(df_exp.target == 1) Fraud_False_Sum = df_exp.loc[(df_exp.pred == 1) & (df_exp.target == 0)][1].sum()/sum(df_exp.target == 0) F1_M = f1_score(target_test, df_exp['pred'].to_numpy(), average='macro') AUC_ROC_M = roc_auc_score(target_test, df_exp['pred'].to_numpy(), 
average='macro') TP_0 = df_exp.loc[(df_exp.pred == 0) & (df_exp.target == 0)].shape[0]/sum(df_exp.target == 0) TP_1 = df_exp.loc[(df_exp.pred == 1) & (df_exp.target == 1)].shape[0]/sum(df_exp.target == 1) param = exp['params'] data.append([ ds_path, seed, param['bootstrap'], param['max_features'], param['min_samples_leaf'], param['min_samples_split'], param['n_estimators'], param['random_state'], Fraud_True_Sum, Truth_False_Sum, Fraud_False_Sum, F1_M, AUC_ROC_M, TP_0, TP_1 ]) df_Results = pd.DataFrame(data, columns=cols) #df_Results.to_csv('/work/data/Results_creditcard_Init.csv', index=False) df_Results.to_csv('/work/data/Results_creditcard_Init.csv', index=False) df_Results = pd.read_csv('/work/data/Results_creditcard_Init.csv') map_ds_path = { '/work/data/creditcard_trans_float.csv': 'Float', '/work/data/creditcard.csv': 'Original', '/work/data/creditcard_trans_int.csv': 'Integer' } df_Results['DS'] = df_Results.ds_path.apply(lambda x: map_ds_path[x]) for metric in cols_metrics: md(f'# {metric}') display(df_Results.sort_values(metric, ascending=False).head(20)[['DS', 'seed']+cols_param[:-1]+cols_metrics]) for col in cols_param[:-1]: md(f'# {col}') display(df_Results[['DS', col]+cols_metrics].groupby(['DS', col]).mean()) ``` # Baseline model. 
``` N_fraud_test = 200 N_truth_test = int(2e4) N_truth_train = int(2e5) split_seeds = [13, 17, 19, 41] # random_state used by RandomForestClassifier random_state = 42 # Number of trees in random forest n_estimators = [100, 200, 400, 800] # Number of features to consider at every split max_features = ['auto'] # Minimum number of samples required to split a node min_samples_split = [2] # Minimum number of samples required at each leaf node min_samples_leaf = [1] # Method of selecting samples for training each tree bootstrap = [True] # Create the random grid search_grid = {'n_estimators': n_estimators, 'max_features': max_features, 'min_samples_split': min_samples_split, 'min_samples_leaf': min_samples_leaf, 'bootstrap': bootstrap} print(search_grid) glob_paths = ['/work/data/creditcard_trans_int.csv'] total_exps = len(glob_paths)*len(split_seeds)*len(ParameterGrid(search_grid)) print(total_exps) with tqdm(total=total_exps) as progress_bar: def RunGrid(df_train, df_test, random_state): out = [] for params in ParameterGrid(search_grid): params['random_state'] = random_state params['n_jobs'] = n_jobs rf = RandomForestClassifier(**params) rf.fit(df_train[ds_cols].to_numpy(), df_train[target_col].to_numpy()) probs = rf.predict_proba(df_test[ds_cols].to_numpy()) exp = { 'probs' : probs, 'rf_classes': rf.classes_, 'params': params } out.append(exp) progress_bar.update(1) return out Results = {} for ds_path in glob_paths: df = pd.read_csv(ds_path) df = df[ds_cols+[target_col]] df_fraud = df.query('Class == 1').reset_index(drop=True).copy() df_truth = df.query('Class == 0').reset_index(drop=True).copy() del df set_exp = {} for seed in split_seeds: df_fraud_train, df_fraud_test = train_test_split(df_fraud, test_size=N_fraud_test, random_state=seed) df_truth_train, df_truth_test = train_test_split(df_truth, train_size=N_truth_train, test_size=N_truth_test, random_state=seed) df_train = pd.concat([df_fraud_train, df_truth_train]).reset_index(drop=True) df_test =
pd.concat([df_fraud_test, df_truth_test]).reset_index(drop=True) out = RunGrid(df_train, df_test, random_state) set_exp[seed] = { 'target_test': df_test[target_col].to_numpy(), 'exps': out } Results[ds_path] = set_exp data = [] for ds_path,sets_exp in Results.items(): for seed,set_exp in sets_exp.items(): target_test = set_exp['target_test'] for exp in set_exp['exps']: df_exp = pd.DataFrame(exp['probs'], columns=exp['rf_classes']) df_exp['pred'] = df_exp[[0, 1]].apply(lambda x: exp['rf_classes'][np.argmax(x)], axis=1) df_exp['target'] = target_test Fraud_True_Sum = df_exp.loc[(df_exp.pred == 1) & (df_exp.target == 1)][1].sum()/sum(df_exp.target == 1) Truth_False_Sum = df_exp.loc[(df_exp.pred == 0) & (df_exp.target == 1)][0].sum()/sum(df_exp.target == 1) Fraud_False_Sum = df_exp.loc[(df_exp.pred == 1) & (df_exp.target == 0)][1].sum()/sum(df_exp.target == 0) F1_M = f1_score(target_test, df_exp['pred'].to_numpy(), average='macro') AUC_ROC_M = roc_auc_score(target_test, df_exp['pred'].to_numpy(), average='macro') TP_0 = df_exp.loc[(df_exp.pred == 0) & (df_exp.target == 0)].shape[0]/sum(df_exp.target == 0) TP_1 = df_exp.loc[(df_exp.pred == 1) & (df_exp.target == 1)].shape[0]/sum(df_exp.target == 1) param = exp['params'] data.append([ ds_path, seed, param['bootstrap'], param['max_features'], param['min_samples_leaf'], param['min_samples_split'], param['n_estimators'], param['random_state'], Fraud_True_Sum, Truth_False_Sum, Fraud_False_Sum, F1_M, AUC_ROC_M, TP_0, TP_1 ]) df_Results = pd.DataFrame(data, columns=cols) df_Results.to_csv('/work/data/Results_creditcard_Baseline.csv', index=False) df_Results for metric in cols_metrics: md(f'# {metric}') display(df_Results.sort_values(metric, ascending=False).head(20)[cols_param[:-1]+cols_metrics]) ```
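The post-processing loops above reduce each experiment's `predict_proba` matrix to hard labels (row-wise argmax) and class-conditional rates such as `TP_0`/`TP_1`. The bookkeeping can be sketched with made-up numbers (not the notebook's data):

```python
# Made-up predict_proba output for 4 test points: columns are (P(class 0), P(class 1))
probs = [(0.9, 0.1), (0.2, 0.8), (0.6, 0.4), (0.3, 0.7)]
target = [0, 1, 0, 0]

# Hard label per row = argmax over the probability row, as in df_exp['pred']
# (ties resolved here in favour of class 0)
pred = [0 if p0 >= p1 else 1 for p0, p1 in probs]

# TP_1: fraction of true class-1 (fraud) points predicted 1; TP_0: same for class 0
tp_1 = sum(p == 1 and t == 1 for p, t in zip(pred, target)) / target.count(1)
tp_0 = sum(p == 0 and t == 0 for p, t in zip(pred, target)) / target.count(0)
print(pred, tp_0, tp_1)
```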
# Node classification with Graph ATtention Network (GAT) <table><tr><td>Run the latest release of this notebook:</td><td><a href="https://mybinder.org/v2/gh/stellargraph/stellargraph/master?urlpath=lab/tree/demos/node-classification/gat-node-classification.ipynb" alt="Open In Binder" target="_parent"><img src="https://mybinder.org/badge_logo.svg"/></a></td><td><a href="https://colab.research.google.com/github/stellargraph/stellargraph/blob/master/demos/node-classification/gat-node-classification.ipynb" alt="Open In Colab" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg"/></a></td></tr></table> Import NetworkX and stellar: ``` # install StellarGraph if running on Google Colab import sys if 'google.colab' in sys.modules: %pip install -q stellargraph[demos]==1.3.0b # verify that we're using the correct version of StellarGraph for this notebook import stellargraph as sg try: sg.utils.validate_notebook_version("1.3.0b") except AttributeError: raise ValueError( f"This notebook requires StellarGraph version 1.3.0b, but a different version {sg.__version__} is installed. Please see <https://github.com/stellargraph/stellargraph/issues/1172>." ) from None import networkx as nx import pandas as pd import os import stellargraph as sg from stellargraph.mapper import FullBatchNodeGenerator from stellargraph.layer import GAT from tensorflow.keras import layers, optimizers, losses, metrics, Model from sklearn import preprocessing, feature_extraction, model_selection from stellargraph import datasets from IPython.display import display, HTML import matplotlib.pyplot as plt %matplotlib inline ``` ## Loading the CORA network (See [the "Loading from Pandas" demo](../basics/loading-pandas.ipynb) for details on how data can be loaded.) ``` dataset = datasets.Cora() display(HTML(dataset.description)) G, node_subjects = dataset.load() print(G.info()) ``` We aim to train a graph-ML model that will predict the "subject" attribute on the nodes. 
These subjects are one of 7 categories: ``` set(node_subjects) ``` ### Splitting the data For machine learning we want to take a subset of the nodes for training, and use the rest for validation and testing. We'll use scikit-learn again to do this. Here we're taking 140 node labels for training, 500 for validation, and the rest for testing. ``` train_subjects, test_subjects = model_selection.train_test_split( node_subjects, train_size=140, test_size=None, stratify=node_subjects ) val_subjects, test_subjects = model_selection.train_test_split( test_subjects, train_size=500, test_size=None, stratify=test_subjects ) ``` Note using stratified sampling gives the following counts: ``` from collections import Counter Counter(train_subjects) ``` The training set has class imbalance that might need to be compensated, e.g., via using a weighted cross-entropy loss in model training, with class weights inversely proportional to class support. However, we will ignore the class imbalance in this example, for simplicity. ### Converting to numeric arrays For our categorical target, we will use one-hot vectors that will be fed into a soft-max Keras layer during training. To do this conversion ... ``` target_encoding = preprocessing.LabelBinarizer() train_targets = target_encoding.fit_transform(train_subjects) val_targets = target_encoding.transform(val_subjects) test_targets = target_encoding.transform(test_subjects) ``` We now do the same for the node attributes we want to use to predict the subject. These are the feature vectors that the Keras model will use as input. The CORA dataset contains attributes 'w_x' that correspond to words found in that publication. If a word occurs more than once in a publication the relevant attribute will be set to one, otherwise it will be zero. ## Creating the GAT model in Keras To feed data from the graph to the Keras model we need a generator. 
Since GAT is a full-batch model, we use the `FullBatchNodeGenerator` class to feed the node features and graph adjacency matrix to the model. ``` generator = FullBatchNodeGenerator(G, method="gat") ``` For training we map only the training nodes returned from our splitter and the target values. ``` train_gen = generator.flow(train_subjects.index, train_targets) ``` Now we can specify our machine learning model; we need a few more parameters for this: * `layer_sizes` is a list of hidden feature sizes of each layer in the model. In this example we use two GAT layers with 8-dimensional hidden node features for the first layer and the 7-class classification output for the second layer. * `attn_heads` is the number of attention heads in all but the last GAT layer in the model * `activations` is a list of activations applied to each layer's output * Arguments such as `bias`, `in_dropout`, `attn_dropout` are internal parameters of the model; execute `?GAT` for details. To follow the GAT model architecture used for the Cora dataset in the original paper [Graph Attention Networks. P. Veličković et al. ICLR 2018 https://arxiv.org/abs/1710.10903], let's build a 2-layer GAT model, with the second layer being the classifier that predicts paper subject: it thus should have an output size of `train_targets.shape[1]` (7 subjects) and a softmax activation.
``` gat = GAT( layer_sizes=[8, train_targets.shape[1]], activations=["elu", "softmax"], attn_heads=8, generator=generator, in_dropout=0.5, attn_dropout=0.5, normalize=None, ) ``` Expose the input and output tensors of the GAT model for node prediction, via GAT.in_out_tensors() method: ``` x_inp, predictions = gat.in_out_tensors() ``` ### Training the model Now let's create the actual Keras model with the input tensors `x_inp` and output tensors being the predictions `predictions` from the final dense layer ``` model = Model(inputs=x_inp, outputs=predictions) model.compile( optimizer=optimizers.Adam(lr=0.005), loss=losses.categorical_crossentropy, metrics=["acc"], ) ``` Train the model, keeping track of its loss and accuracy on the training set, and its generalisation performance on the validation set (we need to create another generator over the validation data for this) ``` val_gen = generator.flow(val_subjects.index, val_targets) ``` Create callbacks for early stopping (if validation accuracy stops improving) and best model checkpoint saving: ``` from tensorflow.keras.callbacks import EarlyStopping, ModelCheckpoint if not os.path.isdir("logs"): os.makedirs("logs") es_callback = EarlyStopping( monitor="val_acc", patience=20 ) # patience is the number of epochs to wait before early stopping in case of no further improvement mc_callback = ModelCheckpoint( "logs/best_model.h5", monitor="val_acc", save_best_only=True, save_weights_only=True ) ``` Train the model ``` history = model.fit( train_gen, epochs=50, validation_data=val_gen, verbose=2, shuffle=False, # this should be False, since shuffling data means shuffling the whole graph callbacks=[es_callback, mc_callback], ) ``` Plot the training history: ``` sg.utils.plot_history(history) ``` Reload the saved weights of the best model found during the training (according to validation accuracy) ``` model.load_weights("logs/best_model.h5") ``` Evaluate the best model on the test set ``` test_gen = 
generator.flow(test_subjects.index, test_targets) test_metrics = model.evaluate(test_gen) print("\nTest Set Metrics:") for name, val in zip(model.metrics_names, test_metrics): print("\t{}: {:0.4f}".format(name, val)) ``` ### Making predictions with the model Now let's get the predictions for all nodes: ``` all_nodes = node_subjects.index all_gen = generator.flow(all_nodes) all_predictions = model.predict(all_gen) ``` These predictions will be the output of the softmax layer, so to get final categories we'll use the `inverse_transform` method of our target attribute specification to turn these values back to the original categories. Note that for full-batch methods the batch size is 1 and the predictions have shape $(1, N_{nodes}, N_{classes})$ so we remove the batch dimension to obtain predictions of shape $(N_{nodes}, N_{classes})$. ``` node_predictions = target_encoding.inverse_transform(all_predictions.squeeze()) ``` Let's have a look at a few predictions after training the model: ``` df = pd.DataFrame({"Predicted": node_predictions, "True": node_subjects}) df.head(20) ``` ## Node embeddings Evaluate node embeddings as activations of the output of the 1st GraphAttention layer in the GAT layer stack (the one before the top classification layer predicting paper subjects), and visualise them, coloring nodes by their true subject label. We expect to see nice clusters of papers in the node embedding space, with papers of the same subject belonging to the same cluster. Let's create a new model with the same inputs `x_inp` as we used previously, but now the output is the embeddings rather than the predicted class. We find the embedding layer by taking the first graph attention layer in the stack of Keras layers. Additionally note that the weights trained previously are kept in the new model.
``` emb_layer = next(l for l in model.layers if l.name.startswith("graph_attention")) print( "Embedding layer: {}, output shape {}".format(emb_layer.name, emb_layer.output_shape) ) embedding_model = Model(inputs=x_inp, outputs=emb_layer.output) ``` The embeddings can now be calculated using the predict function. Note that the embeddings returned are 64 dimensional features (8 dimensions for each of the 8 attention heads) for all nodes. ``` emb = embedding_model.predict(all_gen) emb.shape ``` Project the embeddings to 2d using either TSNE or PCA transform, and visualise, coloring nodes by their true subject label ``` from sklearn.decomposition import PCA from sklearn.manifold import TSNE import pandas as pd import numpy as np ``` Note that the embeddings from the GAT model have a batch dimension of 1 so we `squeeze` this to get a matrix of $N_{nodes} \times N_{emb}$. ``` X = emb.squeeze() y = np.argmax(target_encoding.transform(node_subjects), axis=1) if X.shape[1] > 2: transform = TSNE # PCA trans = transform(n_components=2) emb_transformed = pd.DataFrame(trans.fit_transform(X), index=list(G.nodes())) emb_transformed["label"] = y else: emb_transformed = pd.DataFrame(X, index=list(G.nodes())) emb_transformed = emb_transformed.rename(columns={"0": 0, "1": 1}) emb_transformed["label"] = y alpha = 0.7 fig, ax = plt.subplots(figsize=(7, 7)) ax.scatter( emb_transformed[0], emb_transformed[1], c=emb_transformed["label"].astype("category"), cmap="jet", alpha=alpha, ) ax.set(aspect="equal", xlabel="$X_1$", ylabel="$X_2$") plt.title( "{} visualization of GAT embeddings for cora dataset".format(transform.__name__) ) plt.show() ``` <table><tr><td>Run the latest release of this notebook:</td><td><a href="https://mybinder.org/v2/gh/stellargraph/stellargraph/master?urlpath=lab/tree/demos/node-classification/gat-node-classification.ipynb" alt="Open In Binder" target="_parent"><img src="https://mybinder.org/badge_logo.svg"/></a></td><td><a 
href="https://colab.research.google.com/github/stellargraph/stellargraph/blob/master/demos/node-classification/gat-node-classification.ipynb" alt="Open In Colab" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg"/></a></td></tr></table>
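The label-decoding step used earlier — softmax probabilities, then `inverse_transform` back to subject names — amounts to a per-row argmax against the binarizer's class list. A framework-free sketch (the class names below are a hypothetical subset, not the full Cora label set):

```python
# Hypothetical stand-in for target_encoding.classes_
classes = ["Case_Based", "Genetic_Algorithms", "Neural_Networks"]

# One softmax output row per node, with the batch dimension already squeezed away
probs = [[0.1, 0.2, 0.7],
         [0.8, 0.1, 0.1]]

# inverse_transform on one-hot-style outputs is an argmax plus a class lookup
preds = [classes[max(range(len(row)), key=row.__getitem__)] for row in probs]
print(preds)
```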
# State preparation with the SLM mask ## Basics When performing quantum computations with global pulses, it might be hard to prepare the system in an arbitrary initial state. This is especially true in the XY mode, where only a global $\sigma^x$ pulse can produce excitations whose number is otherwise conserved during free evolution. A partial solution to this problem is to utilize an SLM mask. <br> Assume a system of three qubits in XY mode is initially in state $\left| \downarrow \downarrow \downarrow \right\rangle$, and that we are interested in preparing the state $\left| \uparrow \downarrow \downarrow \right\rangle$. Acting naively with a global $\sigma^x$ pulse of area $\pi$ would result in state $\left| \uparrow \uparrow \uparrow \right\rangle$. Using an SLM pattern, however, it is possible to detune the last two qubits away from resonance, and the same global $\sigma^x$ pulse will instead produce the desired state $\left| \uparrow \downarrow \downarrow \right\rangle$. <br> Let's see how it works in practice.
First create the register: ``` import numpy as np from pulser import Pulse, Sequence, Register from pulser.devices import MockDevice from pulser.waveforms import BlackmanWaveform from pulser.simulation import Simulation # Qubit register qubits = {"q0": (-5,0), "q1": (0,0), "q2": (5,0)} reg = Register(qubits) reg.draw() ``` Now create the sequence and add a global $\sigma^x$ pulse of area $\pi$ in XY mode: ``` # Create the sequence seq = Sequence(reg, MockDevice) # Declare a global XY channel and add the pi pulse seq.declare_channel('ch', 'mw_global') pulse = Pulse.ConstantDetuning(BlackmanWaveform(200, np.pi), 0, 0) seq.add(pulse, 'ch') ``` Drawing the sequence will show the following: ``` seq.draw() ``` To set up the SLM mask all we need to do is to create a list that contains the name of the qubits that we want to mask, and pass it to the $\verb:Sequence.config_slm_mask:$ method: ``` # Mask the last two qubits masked_qubits = ["q1", "q2"] seq.config_slm_mask(masked_qubits) ``` At this point it is possible to visualize the mask by drawing the sequence. The masked pulse will appear with a shaded background, and the names of the masked qubits will be shown in the bottom left corner. ``` seq.draw() ``` The sequence drawing method also allows to visualize the register. If an SLM mask is defined, the masked qubits will appear with a shaded square halo around them: ``` seq.draw(draw_register=True) ``` Now let's see how the system evolves under this masked pulse. 
Since the pulse only acts on the first qubit, we expect the final state to be $\left| \uparrow \downarrow \downarrow \right\rangle$, or, according to Pulser's conventions for XY basis states, $(1,0)^T \otimes (0,1)^T \otimes (0,1)^T$ in the Hilbert space $C^8$: ``` import qutip qutip.tensor(qutip.basis(2, 0), qutip.basis(2, 1), qutip.basis(2, 1)) ``` Now run the simulation and print the final state as given by Pulser: ``` sim = Simulation(seq) results = sim.run() results.get_final_state() ``` As expected, the two states agree up to numerical errors. ## Notes Since the SLM mask is mostly useful for state preparation, its use in Pulser is restricted to the first pulse in the sequence. This can be seen by adding an extra pulse in the previous example and drawing the sequence: ``` seq.add(pulse, 'ch') seq.draw() ``` This example also illustrates the fact that the SLM mask can be configured at any moment during the creation of a sequence (either before or after adding pulses) and it will automatically latch to the first pulse. <br> However, in order to reflect real hardware constraints, the mask can be configured only once. Trying to configure the mask a second time will raise an error: ``` try: seq.config_slm_mask(masked_qubits) except ValueError as err: print(err) ``` Although the example shown here makes use of the XY mode, everything translates directly to the Ising mode as well with the same syntax and restrictions.
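Why the masked $\pi$ pulse prepares $\left| \uparrow \downarrow \downarrow \right\rangle$ (up to a global phase) can be checked with plain linear algebra, independently of the Pulser API: a resonant pulse of area $\pi$ applies an $R_x(\pi)$ rotation to the unmasked qubit, while the detuned (masked) qubits are idealized here as untouched. This is a sketch, not Pulser code:

```python
import math

down = [0 + 0j, 1 + 0j]  # XY basis: up = (1, 0), down = (0, 1)

def rx(theta, state):
    # exp(-i*theta*sigma_x/2) acting on a single-qubit state vector
    c, s = math.cos(theta / 2), -1j * math.sin(theta / 2)
    a, b = state
    return [c * a + s * b, s * a + c * b]

def kron(u, v):
    # Kronecker product of two state vectors
    return [x * y for x in u for y in v]

q0 = rx(math.pi, down)        # pi pulse flips the unmasked qubit (global phase -1j)
q1 = q2 = down                # masked qubits: idealized as unaffected
psi = kron(kron(q0, q1), q2)  # 8-component state, weight on |up, down, down>
```

All the amplitude ends up on index 3 of `psi`, which is exactly the basis state $(1,0)^T \otimes (0,1)^T \otimes (0,1)^T$ printed by qutip above.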
``` # general imports import random import numpy as np from sklearn.ensemble import RandomForestClassifier from sklearn.metrics import accuracy_score from sklearn.metrics import confusion_matrix import torch import torch.nn as nn import torch.utils.data as utils import matplotlib.pyplot as plt plt.rcParams["legend.loc"] = "best" plt.rcParams['figure.facecolor'] = 'white' %matplotlib inline # filter python warnings import warnings warnings.filterwarnings("ignore") # prepare Fashion MNIST data import torchvision.datasets as datasets # train data mnist_trainset = datasets.FashionMNIST(root='./data/fashion', train=True, download=True, transform=None) mnist_train_images = mnist_trainset.train_data.numpy()[..., np.newaxis] mnist_train_labels = mnist_trainset.train_labels.numpy() # test data mnist_testset = datasets.FashionMNIST(root='./data/fashion', train=False, download=True, transform=None) mnist_test_images = mnist_testset.test_data.numpy()[..., np.newaxis] mnist_test_labels = mnist_testset.test_labels.numpy() # The Deep Convolution Random Forest class (for binary classification) class ConvRF(object): def __init__(self, kernel_size=5, stride=2): self.kernel_size = kernel_size self.stride = stride self.kernel_forests = None def _convolve_chop(self, images, labels=None, flatten=False): batch_size, in_dim, _, num_channels = images.shape out_dim = int((in_dim - self.kernel_size) / self.stride) + 1 # calculate output dimensions # create matrix to hold the chopped images out_images = np.zeros((batch_size, out_dim, out_dim, self.kernel_size, self.kernel_size, num_channels)) out_labels = None curr_y = out_y = 0 # move kernel vertically across the image while curr_y + self.kernel_size <= in_dim: curr_x = out_x = 0 # move kernel horizontally across the image while curr_x + self.kernel_size <= in_dim: # chop images out_images[:, out_x, out_y] = images[:, curr_x:curr_x + self.kernel_size, curr_y:curr_y+self.kernel_size, :] curr_x += self.stride out_x += 1 curr_y += self.stride 
out_y += 1 if flatten: out_images = out_images.reshape(batch_size, out_dim, out_dim, -1) if labels is not None: out_labels = np.zeros((batch_size, out_dim, out_dim)) out_labels[:, ] = labels.reshape(-1, 1, 1) return out_images, out_labels def convolve_fit(self, images, labels): num_channels = images.shape[-1] sub_images, sub_labels = self._convolve_chop(images, labels=labels, flatten=True) batch_size, out_dim, _, _ = sub_images.shape self.kernel_forests = np.zeros((out_dim, out_dim), dtype=np.int).tolist() convolved_image = np.zeros((images.shape[0], out_dim, out_dim, 1)) for i in range(out_dim): for j in range(out_dim): self.kernel_forests[i][j] = RandomForestClassifier(n_estimators=32) self.kernel_forests[i][j].fit(sub_images[:, i, j], sub_labels[:, i, j]) convolved_image[:, i, j] = self.kernel_forests[i][j].predict_proba(sub_images[:, i, j])[..., 1][..., np.newaxis] return convolved_image def convolve_predict(self, images): if not self.kernel_forests: raise Exception("Should fit training data before predicting") num_channels = images.shape[-1] sub_images, _ = self._convolve_chop(images, flatten=True) batch_size, out_dim, _, _ = sub_images.shape kernel_predictions = np.zeros((images.shape[0], out_dim, out_dim, 1)) for i in range(out_dim): for j in range(out_dim): kernel_predictions[:, i, j] = self.kernel_forests[i][j].predict_proba(sub_images[:, i, j])[..., 1][..., np.newaxis] return kernel_predictions # define a simple CNN arhcitecture from torch.autograd import Variable import torch.nn.functional as F class SimpleCNNOneFilter(torch.nn.Module): def __init__(self): super(SimpleCNNOneFilter, self).__init__() self.conv1 = torch.nn.Conv2d(1, 1, kernel_size=10, stride=2) self.fc1 = torch.nn.Linear(100, 2) def forward(self, x): x = F.relu(self.conv1(x)) x = x.view(-1, 100) x = self.fc1(x) return(x) class SimpleCNN32Filter(torch.nn.Module): def __init__(self): super(SimpleCNN32Filter, self).__init__() self.conv1 = torch.nn.Conv2d(1, 32, kernel_size=10, stride=2) # try 
64 too, if possible self.fc1 = torch.nn.Linear(100*32, 2) def forward(self, x): x = F.relu(self.conv1(x)) x = x.view(-1, 100*32) x = self.fc1(x) return(x) class SimpleCNN32Filter2Layers(torch.nn.Module): def __init__(self): super(SimpleCNN32Filter2Layers, self).__init__() self.conv1 = torch.nn.Conv2d(1, 32, kernel_size=10, stride=2) self.conv2 = torch.nn.Conv2d(32, 32, kernel_size=7, stride=1) self.fc1 = torch.nn.Linear(16*32, 2) def forward(self, x): x = F.relu(self.conv1(x)) x = F.relu(self.conv2(x)) x = x.view(-1, 16*32) x = self.fc1(x) return(x) def run_naive_rf(train_images, train_labels, test_images, test_labels, fraction_of_train_samples, class1=3, class2=8): num_train_samples_class_1 = int(np.sum(train_labels==class1) * fraction_of_train_samples) num_train_samples_class_2 = int(np.sum(train_labels==class2) * fraction_of_train_samples) # get only train images and labels for class 1 and class 2 train_images = np.concatenate([train_images[train_labels==class1][:num_train_samples_class_1], train_images[train_labels==class2][:num_train_samples_class_2]]) train_labels = np.concatenate([np.repeat(0, num_train_samples_class_1), np.repeat(1, num_train_samples_class_2)]) # get only test images and labels for class 1 and class 2 test_images = np.concatenate([test_images[test_labels==class1], test_images[test_labels==class2]]) test_labels = np.concatenate([np.repeat(0, np.sum(test_labels==class1)), np.repeat(1, np.sum(test_labels==class2))]) # Train clf = RandomForestClassifier(n_estimators=100) clf.fit(train_images.reshape(-1, 28*28*1), train_labels) # Test test_preds = clf.predict(test_images.reshape(-1, 28*28*1)) return accuracy_score(test_labels, test_preds) def run_one_layer_deep_conv_rf(train_images, train_labels, test_images, test_labels, fraction_of_train_samples, class1=3, class2=8): num_train_samples_class_1 = int(np.sum(train_labels==class1) * fraction_of_train_samples) num_train_samples_class_2 = int(np.sum(train_labels==class2) * fraction_of_train_samples) 
# get only train images and labels for class 1 and class 2 train_images = np.concatenate([train_images[train_labels==class1][:num_train_samples_class_1], train_images[train_labels==class2][:num_train_samples_class_2]]) train_labels = np.concatenate([np.repeat(0, num_train_samples_class_1), np.repeat(1, num_train_samples_class_2)]) # get only test images and labels for class 1 and class 2 test_images = np.concatenate([test_images[test_labels==class1], test_images[test_labels==class2]]) test_labels = np.concatenate([np.repeat(0, np.sum(test_labels==class1)), np.repeat(1, np.sum(test_labels==class2))]) ## Train # ConvRF (layer 1) conv1 = ConvRF(kernel_size=10, stride=2) conv1_map = conv1.convolve_fit(train_images, train_labels) # Full RF conv1_full_RF = RandomForestClassifier(n_estimators=100) conv1_full_RF.fit(conv1_map.reshape(len(train_images), -1), train_labels) ## Test (after ConvRF 1 and Full RF) conv1_map_test = conv1.convolve_predict(test_images) mnist_test_preds = conv1_full_RF.predict(conv1_map_test.reshape(len(test_images), -1)) return accuracy_score(test_labels, mnist_test_preds) def run_two_layer_deep_conv_rf(train_images, train_labels, test_images, test_labels, fraction_of_train_samples, class1=3, class2=8): num_train_samples_class_1 = int(np.sum(train_labels==class1) * fraction_of_train_samples) num_train_samples_class_2 = int(np.sum(train_labels==class2) * fraction_of_train_samples) # get only train images and labels for class 1 and class 2 train_images = np.concatenate([train_images[train_labels==class1][:num_train_samples_class_1], train_images[train_labels==class2][:num_train_samples_class_2]]) train_labels = np.concatenate([np.repeat(0, num_train_samples_class_1), np.repeat(1, num_train_samples_class_2)]) # get only test images and labels for class 1 and class 2 test_images = np.concatenate([test_images[test_labels==class1], test_images[test_labels==class2]]) test_labels = np.concatenate([np.repeat(0, np.sum(test_labels==class1)), np.repeat(1, 
np.sum(test_labels==class2))]) ## Train # ConvRF (layer 1) conv1 = ConvRF(kernel_size=10, stride=2) conv1_map = conv1.convolve_fit(train_images, train_labels) # ConvRF (layer 2) conv2 = ConvRF(kernel_size=7, stride=1) conv2_map = conv2.convolve_fit(conv1_map, train_labels) # Full RF conv1_full_RF = RandomForestClassifier(n_estimators=100) conv1_full_RF.fit(conv2_map.reshape(len(train_images), -1), train_labels) ## Test (after ConvRF 1 and Full RF) conv1_map_test = conv1.convolve_predict(test_images) conv2_map_test = conv2.convolve_predict(conv1_map_test) test_preds = conv1_full_RF.predict(conv2_map_test.reshape(len(test_images), -1)) return accuracy_score(test_labels, test_preds) def cnn_train_test(cnn_model, x_train, y_train, x_test, y_test): # set params num_epochs = 25 learning_rate = 0.001 # prepare data tensor_x = torch.stack([torch.Tensor(i.reshape(1, 28, 28)) for i in x_train]).float() # transform to torch tensors tensor_y = torch.stack([torch.Tensor([i]) for i in y_train]).long() my_dataset = utils.TensorDataset(tensor_x,tensor_y) # create your datset train_loader = utils.DataLoader(my_dataset, batch_size=64, shuffle=True) # create your dataloader tensor_x = torch.stack([torch.Tensor(i.reshape(1, 28, 28)) for i in x_test]).float() # transform to torch tensors tensor_y = torch.stack([torch.Tensor([i]) for i in y_test]).long() my_dataset = utils.TensorDataset(tensor_x,tensor_y) # create your datset test_loader = utils.DataLoader(my_dataset, batch_size=64) # create your dataloader # define model model = cnn_model() # loss and optimizer criterion = nn.CrossEntropyLoss() optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate) # train the model total_step = len(train_loader) for epoch in range(num_epochs): for i, (images, labels) in enumerate(train_loader): images = images labels = labels # Forward pass outputs = model(images) loss = criterion(outputs, labels.view(-1)) # Backward and optimize optimizer.zero_grad() loss.backward() optimizer.step() # 
test the model accuracy = 0 model.eval() # eval mode (batchnorm uses moving mean/variance instead of mini-batch mean/variance) with torch.no_grad(): correct = 0 total = 0 for images, labels in test_loader: images = images labels = labels outputs = model(images) _, predicted = torch.max(outputs.data, 1) total += labels.size(0) correct += (predicted.view(-1) == labels.view(-1)).sum() accuracy = float(correct) / float(total) return accuracy def run_cnn(cnn_model, train_images, train_labels, test_images, test_labels, fraction_of_train_samples, class1=3, class2=8): num_train_samples_class_1 = int(np.sum(train_labels==class1) * fraction_of_train_samples) num_train_samples_class_2 = int(np.sum(train_labels==class2) * fraction_of_train_samples) # get only train images and labels for class 1 and class 2 train_images = np.concatenate([train_images[train_labels==class1][:num_train_samples_class_1], train_images[train_labels==class2][:num_train_samples_class_2]]) train_labels = np.concatenate([np.repeat(0, num_train_samples_class_1), np.repeat(1, num_train_samples_class_2)]) # get only test images and labels for class 1 and class 2 test_images = np.concatenate([test_images[test_labels==class1], test_images[test_labels==class2]]) test_labels = np.concatenate([np.repeat(0, np.sum(test_labels==class1)), np.repeat(1, np.sum(test_labels==class2))]) return cnn_train_test(cnn_model, train_images, train_labels, test_images, test_labels) # Sneaker(7) vs Ankle Boot(9) classification # get only train images and labels for two classes: 7 and 9 mnist_train_images_7_9 = np.concatenate([mnist_train_images[mnist_train_labels==7], mnist_train_images[mnist_train_labels==9]]) mnist_train_labels_7_9 = np.concatenate([np.repeat(0, np.sum(mnist_train_labels==7)), np.repeat(1, np.sum(mnist_train_labels==9))]) # visualize data and labels # 7 (label 0) index = 3000 print("Label:", mnist_train_labels_7_9[index]) plt.imshow(mnist_train_images_7_9[index].reshape(28, 28),cmap='gray') plt.show() # 9 (label
1) index = 8000 print("Label:", mnist_train_labels_7_9[index]) plt.imshow(mnist_train_images_7_9[index].reshape(28, 28),cmap='gray') plt.show() # accuracy vs num training samples (naive_rf) naive_rf_acc_vs_n = list() fraction_of_train_samples_space = np.geomspace(0.01, 1.0, num=10) for fraction_of_train_samples in fraction_of_train_samples_space: best_accuracy = np.mean([run_naive_rf(mnist_train_images, mnist_train_labels, mnist_test_images, mnist_test_labels, fraction_of_train_samples, 7, 9) for _ in range(3)]) naive_rf_acc_vs_n.append(best_accuracy) print("Train Fraction:", str(fraction_of_train_samples)) print("Accuracy:", str(best_accuracy)) # accuracy vs num training samples (one layer deep_conv_rf) deep_conv_rf_acc_vs_n = list() fraction_of_train_samples_space = np.geomspace(0.01, 1.0, num=10) for fraction_of_train_samples in fraction_of_train_samples_space: best_accuracy = np.mean([run_one_layer_deep_conv_rf(mnist_train_images, mnist_train_labels, mnist_test_images, mnist_test_labels, fraction_of_train_samples, 7, 9) for _ in range(3)]) deep_conv_rf_acc_vs_n.append(best_accuracy) print("Train Fraction:", str(fraction_of_train_samples)) print("Accuracy:", str(best_accuracy)) # accuracy vs num training samples (one layer cnn) cnn_acc_vs_n = list() fraction_of_train_samples_space = np.geomspace(0.01, 1.0, num=10) for fraction_of_train_samples in fraction_of_train_samples_space: best_accuracy = np.mean([run_cnn(SimpleCNNOneFilter, mnist_train_images, mnist_train_labels, mnist_test_images, mnist_test_labels, fraction_of_train_samples, 7, 9) for _ in range(3)]) cnn_acc_vs_n.append(best_accuracy) print("Train Fraction:", str(fraction_of_train_samples)) print("Accuracy:", str(best_accuracy)) # accuracy vs num training samples (one layer cnn (32 filters)) cnn32_acc_vs_n = list() fraction_of_train_samples_space = np.geomspace(0.01, 1.0, num=10) for fraction_of_train_samples in fraction_of_train_samples_space: best_accuracy = np.mean([run_cnn(SimpleCNN32Filter, 
mnist_train_images, mnist_train_labels, mnist_test_images, mnist_test_labels, fraction_of_train_samples, 7, 9) for _ in range(3)]) cnn32_acc_vs_n.append(best_accuracy) print("Train Fraction:", str(fraction_of_train_samples)) print("Accuracy:", str(best_accuracy)) # accuracy vs num training samples (two layer deep_conv_rf) deep_conv_rf_two_layer_acc_vs_n = list() fraction_of_train_samples_space = np.geomspace(0.01, 1.0, num=10) for fraction_of_train_samples in fraction_of_train_samples_space: best_accuracy = np.mean([run_two_layer_deep_conv_rf(mnist_train_images, mnist_train_labels, mnist_test_images, mnist_test_labels, fraction_of_train_samples, 7, 9) for _ in range(3)]) deep_conv_rf_two_layer_acc_vs_n.append(best_accuracy) print("Train Fraction:", str(fraction_of_train_samples)) print("Accuracy:", str(best_accuracy)) # accuracy vs num training samples (two layer cnn (32 filters)) cnn32_two_layer_acc_vs_n = list() fraction_of_train_samples_space = np.geomspace(0.01, 1.0, num=10) for fraction_of_train_samples in fraction_of_train_samples_space: best_accuracy = np.mean([run_cnn(SimpleCNN32Filter2Layers, mnist_train_images, mnist_train_labels, mnist_test_images, mnist_test_labels, fraction_of_train_samples, 7, 9) for _ in range(3)]) cnn32_two_layer_acc_vs_n.append(best_accuracy) print("Train Fraction:", str(fraction_of_train_samples)) print("Accuracy:", str(best_accuracy)) plt.rcParams['figure.figsize'] = 10, 8 plt.rcParams['font.size'] = 20 plt.rcParams['legend.fontsize'] = 20 plt.rcParams['figure.titlesize'] = 20 fig, ax = plt.subplots() # create a new figure with a default 111 subplot ax.plot(fraction_of_train_samples_space, naive_rf_acc_vs_n, marker='X', markerfacecolor='blue', markersize=10, color='orange', linewidth=4, label="Naive RF") ax.plot(fraction_of_train_samples_space, deep_conv_rf_acc_vs_n, marker='X', markerfacecolor='blue', markersize=10, color='green', linewidth=4, label="Deep Conv RF") ax.plot(fraction_of_train_samples_space, 
deep_conv_rf_two_layer_acc_vs_n, marker='X', markerfacecolor='blue', markersize=10, color='grey', linewidth=4, label="Deep Conv RF Two Layer") ax.plot(fraction_of_train_samples_space, cnn_acc_vs_n, marker='X', markerfacecolor='blue', markersize=10, color='red', linewidth=4, label="CNN") ax.plot(fraction_of_train_samples_space, cnn32_acc_vs_n, marker='X', markerfacecolor='blue', markersize=10, color='brown', linewidth=4, label="CNN (32 filters)") ax.plot(fraction_of_train_samples_space, cnn32_two_layer_acc_vs_n, marker='X', markerfacecolor='blue', markersize=10, color='black', linewidth=4, label="CNN Two Layer (32 filters)") ax.set_xlabel('Fraction of Train Samples') ax.set_xlim(0, 1.0) ax.set_ylabel('Accuracy') ax.set_ylim(0.81, 1) ax.set_title("Sneaker(7) vs Ankle Boot(9) Classification") plt.legend() plt.show() plt.rcParams['figure.figsize'] = 10, 8 plt.rcParams['font.size'] = 20 plt.rcParams['legend.fontsize'] = 20 plt.rcParams['figure.titlesize'] = 20 fig, ax = plt.subplots() # create a new figure with a default 111 subplot ax.plot(fraction_of_train_samples_space, naive_rf_acc_vs_n, marker='X', markerfacecolor='blue', markersize=10, color='orange', linewidth=4, label="Naive RF") ax.plot(fraction_of_train_samples_space, deep_conv_rf_acc_vs_n, marker='X', markerfacecolor='blue', markersize=10, color='green', linewidth=4, label="Deep Conv RF") ax.plot(fraction_of_train_samples_space, deep_conv_rf_two_layer_acc_vs_n, marker='X', markerfacecolor='blue', markersize=10, color='grey', linewidth=4, label="Deep Conv RF 2 Layer") ax.plot(fraction_of_train_samples_space, cnn_acc_vs_n, marker='X', markerfacecolor='blue', markersize=10, color='red', linewidth=4, label="CNN") ax.plot(fraction_of_train_samples_space, cnn32_acc_vs_n, marker='X', markerfacecolor='blue', markersize=10, color='brown', linewidth=4, label="CNN (32 filters)") ax.plot(fraction_of_train_samples_space, cnn32_two_layer_acc_vs_n, marker='X', markerfacecolor='blue', markersize=10, color='black',
linewidth=4, label="CNN 2 Layer (32 filters)") ax.set_xlabel('Fraction of Train Samples') ax.set_xlim(0, 1.0) ax.set_ylabel('Accuracy') ax.set_ylim(0.92, 0.98) ax.set_title("Sneaker(7) vs Ankle Boot(9) Classification") plt.legend() plt.show() ```
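Each `run_*` function above repeats the same two-class subsetting block verbatim. That logic can be factored into a single helper; the sketch below (the helper name is ours, not from the original notebook) reproduces the subsetting under the same assumptions:

```python
import numpy as np

def subsample_two_classes(images, labels, class1, class2, fraction):
    # Keep the first `fraction` of each of the two classes, in dataset order,
    # and relabel them 0/1 -- the same subsetting done in every run_* above.
    n1 = int(np.sum(labels == class1) * fraction)
    n2 = int(np.sum(labels == class2) * fraction)
    sub_images = np.concatenate([images[labels == class1][:n1],
                                 images[labels == class2][:n2]])
    sub_labels = np.concatenate([np.repeat(0, n1), np.repeat(1, n2)])
    return sub_images, sub_labels
```

Calling it as `subsample_two_classes(train_images, train_labels, 7, 9, fraction_of_train_samples)` would replace the duplicated block in each experiment.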
github_jupyter
##### Copyright 2020 The TensorFlow Authors. ``` #@title Licensed under the Apache License, Version 2.0 # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. ``` # TensorFlow Addons Losses: TripletSemiHardLoss <table class="tfo-notebook-buttons" align="left"> <td> <a target="_blank" href="https://www.tensorflow.org/addons/tutorials/losses_triplet"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a> </td> <td> <a target="_blank" href="https://colab.research.google.com/github/tensorflow/addons/blob/master/docs/tutorials/losses_triplet.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a> </td> <td> <a target="_blank" href="https://github.com/tensorflow/addons/blob/master/docs/tutorials/losses_triplet.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a> </td> <td> <a href="https://storage.googleapis.com/tensorflow_docs/addons/docs/tutorials/losses_triplet.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a> </td> </table> ## Overview This notebook will demonstrate how to use the TripletSemiHardLoss function in TensorFlow Addons. 
### Resources: * [FaceNet: A Unified Embedding for Face Recognition and Clustering](https://arxiv.org/pdf/1503.03832.pdf) * [Oliver Moindrot's blog does an excellent job of describing the algorithm in detail](https://omoindrot.github.io/triplet-loss) ## TripletLoss As first introduced in the FaceNet paper, TripletLoss is a loss function that trains a neural network to closely embed features of the same class while maximizing the distance between embeddings of different classes. To do this an anchor is chosen along with one negative and one positive sample. ![fig3](https://user-images.githubusercontent.com/18154355/61485418-1cbb1f00-a96f-11e9-8de8-3c46eef5a7dc.png) **The loss function is described as a Euclidean distance function:** ![function](https://user-images.githubusercontent.com/18154355/61484709-7589b800-a96d-11e9-9c3c-e880514af4b7.png) Where A is our anchor input, P is the positive sample input, N is the negative sample input, and alpha is some margin we use to specify when a triplet has become too "easy" and we no longer want to adjust the weights from it. ## SemiHard Online Learning As shown in the paper, the best results are from triplets known as "Semi-Hard". These are defined as triplets where the negative is farther from the anchor than the positive, but still produces a positive loss. To efficiently find these triplets we utilize online learning and only train from the Semi-Hard examples in each batch. ## Setup ``` !pip install -U tensorflow-addons import io import numpy as np import tensorflow as tf import tensorflow_addons as tfa import tensorflow_datasets as tfds ``` ## Prepare the Data ``` def _normalize_img(img, label): img = tf.cast(img, tf.float32) / 255. 
return (img, label) train_dataset, test_dataset = tfds.load(name="mnist", split=['train', 'test'], as_supervised=True) # Build your input pipelines train_dataset = train_dataset.shuffle(1024).batch(32) train_dataset = train_dataset.map(_normalize_img) test_dataset = test_dataset.batch(32) test_dataset = test_dataset.map(_normalize_img) ``` ## Build the Model ![fig2](https://user-images.githubusercontent.com/18154355/61485417-1cbb1f00-a96f-11e9-8d6a-94964ce8c4db.png) ``` model = tf.keras.Sequential([ tf.keras.layers.Conv2D(filters=64, kernel_size=2, padding='same', activation='relu', input_shape=(28,28,1)), tf.keras.layers.MaxPooling2D(pool_size=2), tf.keras.layers.Dropout(0.3), tf.keras.layers.Conv2D(filters=32, kernel_size=2, padding='same', activation='relu'), tf.keras.layers.MaxPooling2D(pool_size=2), tf.keras.layers.Dropout(0.3), tf.keras.layers.Flatten(), tf.keras.layers.Dense(256, activation=None), # No activation on final dense layer tf.keras.layers.Lambda(lambda x: tf.math.l2_normalize(x, axis=1)) # L2 normalize embeddings ]) ``` ## Train and Evaluate ``` # Compile the model model.compile( optimizer=tf.keras.optimizers.Adam(0.001), loss=tfa.losses.TripletSemiHardLoss()) # Train the network history = model.fit( train_dataset, epochs=5) # Evaluate the network results = model.predict(test_dataset) # Save test embeddings for visualization in projector np.savetxt("vecs.tsv", results, delimiter='\t') out_m = io.open('meta.tsv', 'w', encoding='utf-8') for img, labels in tfds.as_numpy(test_dataset): [out_m.write(str(x) + "\n") for x in labels] out_m.close() try: from google.colab import files files.download('vecs.tsv') files.download('meta.tsv') except: pass ``` ## Embedding Projector The vector and metadata files can be loaded and visualized here: https://projector.tensorflow.org/ You can see the results of our embedded test data when visualized with UMAP: 
![embedding](https://user-images.githubusercontent.com/18154355/61600295-e6470380-abfd-11e9-8a00-2b25e7e6916f.png)
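The distance-based formula above can be sanity-checked on a single (anchor, positive, negative) triple. This is only an illustrative numpy sketch of the squared-Euclidean form, not TensorFlow Addons' batched implementation:

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=1.0):
    # max(||A - P||^2 - ||A - N||^2 + margin, 0): the loss reaches zero once
    # the negative is more than `margin` farther from the anchor than the
    # positive is.
    d_pos = np.sum((anchor - positive) ** 2)
    d_neg = np.sum((anchor - negative) ** 2)
    return max(d_pos - d_neg + margin, 0.0)
```

A semi-hard triplet is then one where `d_pos < d_neg` but the returned loss is still positive.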
``` import numpy as np import pandas as pd %load_ext autoreload %autoreload 2 ``` # Overview What does this thing look like? - Object that you can import - Can call train, load, featurize, import - Inherits from sklearn.transform? Multiple inheritance is hard... # I. Load Data - words: np.ndarray of all characters - dataset: np.ndarray of character indices ``` import codecs import pickle #=====[ Load a whole corpus ]===== def load_data(data_dir='./data/tinyshakespeare/'): vocab = {} print('%s/input.txt' % data_dir) words = codecs.open('%s/input.txt' % data_dir, 'rb', 'utf-8').read() words = list(words) dataset = np.ndarray((len(words),), dtype=np.int32) for i, word in enumerate(words): if word not in vocab: vocab[word] = len(vocab) dataset[i] = vocab[word] print('corpus length (in characters):', len(words)) print('vocab size:', len(vocab)) return dataset, words, vocab #dataset, words, vocab = load_data() #=====[ Load only the vocabulary ]===== vocab = pickle.load(open('./data/audit_data/vocab.bin', 'rb')) ivocab = {i:c for c, i in vocab.items()} print('vocab size:', len(vocab)) ``` # II. Load Model ``` import pickle from CharRNN import CharRNN, make_initial_state from chainer import cuda #####[ PARAMS ]##### n_units = 128 seq_length = 50 batchsize = 50 seed = 123 length = 50 #################### np.random.seed(seed) model = pickle.load(open('./data/audit_data/audit_model.chainermodel', 'rb')) n_units = model.embed.W.data.shape[1] initial_state = make_initial_state(n_units, batchsize=1, train=False) print('# of units: ', n_units) ``` # III.
Create TextFeaturizer ``` class TextFeaturizer(object): """Featurizes text using a CharRNN""" def __init__(self, model, vocab): self.model = model self.vocab = vocab self.n_units = model.embed.W.data.shape[1] def preprocess(self, text): """returns preprocessed version of text""" if not isinstance(text, str): raise NotImplementedError("Must pass in a string") return np.array([self.vocab[c] for c in text]).astype(np.int32) def featurize(self, text): """returns a list of feature vectors for the text""" #=====[ Step 1: Convert to an array ]===== dataset = self.preprocess(text) #=====[ Step 2: Create initial state ]===== initial_state = make_initial_state(self.n_units, batchsize=1, train=False) init_char = np.array([0]).astype(np.int32) state, prob = self.model.forward_one_step(init_char, init_char, initial_state, train=False) #=====[ Step 3: Find feature vectors ]===== states = [] for i in range(len(dataset)): cur_char = np.array([dataset[i]]).astype(np.int32) state, prob = self.model.forward_one_step(cur_char, cur_char, state, train=False) states.append(state['h2'].data.copy()) #=====[ Step 4: Sanity check ]===== if not all([s.shape == (1, self.n_units) for s in states]): raise Exception("Generated the wrong shape: {}".format(np.array(states).shape)) return states featurizer = TextFeaturizer(model, vocab) #=====[ TEST ]===== text = 'Conducted an investigation of WalMart and concluded air and fire safety were correct' states = featurizer.featurize(text) ```
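The `featurize` loop depends on the pickled CharRNN, but the idea it implements is general: step a recurrent net over character indices and keep the hidden state after each step as that character's feature vector. A toy vanilla-RNN sketch of the same loop, with made-up weights rather than the CharRNN's actual parameters:

```python
import numpy as np

def rnn_featurize(indices, W_embed, W_xh, W_hh, b_h):
    # Same loop structure as TextFeaturizer.featurize: one hidden-state
    # snapshot per input character. All weight names are illustrative.
    h = np.zeros(W_hh.shape[0])
    states = []
    for idx in indices:
        x = W_embed[idx]                        # character embedding
        h = np.tanh(W_xh @ x + W_hh @ h + b_h)  # recurrent update
        states.append(h.copy())
    return states
```

With a trained model, each snapshot in `states` summarizes the text read so far, which is what makes it usable as a per-character feature vector.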
``` from PIL import Image from matplotlib import pyplot as plt %matplotlib notebook import seaborn as sns import os import sys sys.path.append("..") import numpy as np from tensorflow.keras.preprocessing import image_dataset_from_directory from tensorflow.keras.preprocessing import image from tensorflow.keras.applications.resnet50 import preprocess_input import tensorflow as tf from sklearn.decomposition import PCA from sklearn.model_selection import train_test_split from sklearn.metrics import accuracy_score, confusion_matrix, classification_report from sklearn.preprocessing import StandardScaler import utils import shell # dataset_directory = "data/people" # dataset_directory = "data/STL10/images" dataset_directory = "data/fruits" # shell_family = shell.ShellFamily("resnet50") shell_family = shell.ShellFamily() shell_family.create_preprocessor("resnet50") # shell_family.load("test.pkl") def get_class_folders_from_main_directory(directory): sub_folders = next(os.walk(directory))[1] classes = sub_folders.copy() for i in range(len(sub_folders)): sub_folders[i] = os.path.join(directory, sub_folders[i]) return sorted(sub_folders), classes def extract_classes_sub_folder(sub_folders): all_image_filepaths = np.array([]) class_array = np.array([]) for i in range(len(sub_folders)): image_files = os.listdir(sub_folders[i]) image_filepaths = list(map(lambda x : os.path.join(sub_folders[i], x), image_files)) new_image_filepaths = [] for j in image_filepaths: try: Image.open(j) new_image_filepaths.append(j) except: continue all_image_filepaths = np.append(all_image_filepaths, new_image_filepaths) class_array = np.append(class_array, [i] * len(new_image_filepaths)) # all_image_filepaths = np.append(all_image_filepaths, image_filepaths) # class_array = np.append(class_array, [i] * len(image_filepaths)) return all_image_filepaths, class_array.astype(np.int32) class ImageGenerator(): def __init__(self, filepath_array, class_array, batch_size, target_size): self.filepath_array = 
filepath_array self.class_array = class_array self.batch_size = batch_size self.target_size = target_size self.steps = len(self.class_array) // self.batch_size self.index = 0 print("Found {} images with {} classes!".format(self.__len__(), len(np.unique(self.class_array)))) def __iter__(self): return self def __len__(self): assert(len(self.class_array) == len(self.filepath_array)) return len(self.class_array) def __next__(self): if self.index == self.__len__(): raise StopIteration elif self.index + self.batch_size >= self.__len__(): batch_filepaths = self.filepath_array[self.index : self.__len__()] batch_images = np.array([np.asarray(Image.open(i).convert("RGB").resize(self.target_size))[..., :3] for i in batch_filepaths]).astype(np.float32) batch_images = preprocess_input(batch_images) batch_classes = self.class_array[self.index : self.__len__()] self.index = self.__len__() return (batch_images, batch_filepaths, batch_classes) else: batch_filepaths = self.filepath_array[self.index : self.index + self.batch_size] batch_images = np.array([np.asarray(Image.open(i).convert("RGB").resize(self.target_size))[..., :3] for i in batch_filepaths]).astype(np.float32) batch_images = preprocess_input(batch_images) batch_classes = self.class_array[self.index : self.index + self.batch_size] self.index += self.batch_size return (batch_images, batch_filepaths, batch_classes) sub_folders, classes = get_class_folders_from_main_directory(dataset_directory) filepath_array, class_array = extract_classes_sub_folder(sub_folders) X_train, X_test, y_train, y_test = train_test_split(filepath_array, class_array, test_size=0.2, random_state=42, shuffle=True, stratify=class_array) train_image_generator = ImageGenerator(X_train, y_train, 2048, (224, 224)) test_image_generator = ImageGenerator(X_test, y_test, 1, (224, 224)) shell_family.fit(train_image_generator, classes, "test.pkl") y_predict = [] for processed_image_array, _, groundtruth in test_image_generator: sample_features = 
shell_family.preprocessor.predict(processed_image_array) class_index, class_name, score, full_results = shell_family.score(sample_features, 0.5, with_update=False, return_full_results=True) y_predict.append(class_index) print("predicted: {}, groundtruth: {}".format(class_name, shell_family.mapping[groundtruth[0]])) full_results y_predict = np.array(y_predict) xtick_labels = ["Predicted {}".format(class_name) for class_name in shell_family.mapping] ytick_labels = ["Actual {}".format(class_name) for class_name in shell_family.mapping] print("Classification Report\n") print(classification_report(y_test, y_predict)) print("") print("Confusion Matrix\n") plt.figure(figsize=(10,6)) sns.heatmap(confusion_matrix(y_test, y_predict), cmap="viridis", annot=True, fmt="d", xticklabels=xtick_labels, yticklabels=ytick_labels) plt.title("Confusion Matrix", fontsize=6) plt.tick_params(axis='both', which='major', labelsize=4) plt.show() ``` 2D PCA ``` mean_array = [] for shell in shell_family.classifiers: mean_array.append(shell_family.classifiers[shell].shell_mean[0]) scaler = StandardScaler() scaled_mean = scaler.fit_transform(mean_array) pca = PCA(n_components=2, random_state=42) transformed = pca.fit_transform(scaled_mean) for i in range(len(transformed)): plt.scatter(transformed[i, 0], transformed[i, 1]) plt.text(transformed[i, 0] + 0.5, transformed[i, 1] + 0.5, shell_family.mapping[i]) plt.title("PCA shell mean") ``` 3D PCA ``` mean_array = [] for shell in shell_family.classifiers: mean_array.append(shell_family.classifiers[shell].shell_mean[0]) scaler = StandardScaler() scaled_mean = scaler.fit_transform(mean_array) pca = PCA(n_components=3, random_state=42) transformed = pca.fit_transform(scaled_mean) fig = plt.figure() ax = fig.add_subplot(111, projection='3d') for i in range(len(transformed)): ax.scatter(transformed[i, 0], transformed[i, 1], transformed[i, 2]) ax.text(transformed[i, 0] + 0.5, transformed[i, 1] + 0.5, transformed[i, 2], shell_family.mapping[i]) 
plt.title("PCA shell mean") ```
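`ImageGenerator.__next__` above interleaves batching with image loading and preprocessing. The batching contract on its own — successive fixed-size batches plus a short final batch — can be sketched independently of the image I/O:

```python
import numpy as np

def batched(array, batch_size):
    # Yield fixed-size slices, with a short final batch when the length is
    # not a multiple of batch_size -- the same contract as
    # ImageGenerator.__next__, minus the opening/resizing/preprocess_input.
    for start in range(0, len(array), batch_size):
        yield array[start:start + batch_size]
```

This is a sketch of the iteration logic only; the real generator also opens each file, converts to RGB, resizes, and applies `preprocess_input` per batch.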
# Deep Learning on JuiceFS Tutorial - 01. Getting Started JuiceFS is a shared POSIX file system for the cloud. You can replace existing solutions with JuiceFS at zero cost, turning any object store into a shared POSIX file system. Sign up for a 1T free quota at https://juicefs.com Source code for this tutorial can be found at https://github.com/juicedata/juicefs-dl-tutorial ## 0. Requirements It's very easy to set up JuiceFS on your remote HPC machine, Google Colab, or CoCalc by inserting just one command into your Jupyter Notebook: ``` !curl -sL https://juicefs.com/static/juicefs -o juicefs && chmod +x juicefs ``` Here we go, let's try the magic of JuiceFS! ## 1. Mounting your JuiceFS After creating your JuiceFS volume by following the [documentation](https://juicefs.com/docs/en/getting_started.html), you have two ways to mount it: ### 1.1 The secure way Just run the mount command and enter the access key and secret key from your public cloud or storage provider. This approach is for people who want to collaborate with others while protecting their credentials. It also lets your teammates use their own JuiceFS volumes, and lets you share the notebook publicly. ``` !./juicefs mount {JFS_VOLUMN_NAME} /jfs ``` ### 1.2 The convenient way Maybe, however, you are working alone, are not worried about leaking credentials, and don't want the hassle of entering credentials every time the kernel restarts. In that case, you can save your token and access secrets in the notebook; just change the corresponding fields in the following command to your own. ``` !./juicefs auth --token {JUICEFS_TOKEN} --accesskey {ACCESSKEY} --secretkey {SECRETKEY} JuiceFS !./juicefs mount -h ``` ## 2. Preparing dataset Okay, let's assume you have already mounted your JuiceFS volume. You can test it by listing your files:
``` !ls /jfs ``` There are many ways to get data into your JuiceFS volume: mount it on your local machine and drag and drop files, mount it on cloud servers and write data there, or crawl data and save it directly. Here we take the [MNIST dataset](http://yann.lecun.com/exdb/mnist/) (with a training set of 60,000 images and a test set of 10,000 images) as an example. If you do not have the MNIST dataset ready, you can execute the following block: ``` !curl -sL https://s3.amazonaws.com/img-datasets/mnist.npz -o /jfs/mnist.npz ``` ## 3. Training model Once our dataset is ready in JuiceFS, we can begin the training process. ``` %matplotlib inline import numpy as np import matplotlib.pyplot as plt import seaborn as sns import keras from keras.models import Sequential from keras.layers import Dense, Dropout, Flatten from keras.layers import Conv2D, MaxPooling2D import warnings warnings.simplefilter(action='ignore') ``` First, load the MNIST dataset from the JuiceFS volume. ``` with np.load('/jfs/mnist.npz') as f: X_train, y_train = f['x_train'], f['y_train'] X_test, y_test = f['x_test'], f['y_test'] ``` Visualize some data to ensure we have successfully loaded it from JuiceFS. ``` sns.countplot(y_train) fig, ax = plt.subplots(6, 6, figsize = (12, 12)) fig.suptitle('First 36 images in MNIST') fig.tight_layout(pad = 0.3, rect = [0, 0, 0.9, 0.9]) for x, y in [(i, j) for i in range(6) for j in range(6)]: ax[x, y].imshow(X_train[x + y * 6].reshape((28, 28)), cmap = 'gray') ax[x, y].set_title(y_train[x + y * 6]) ``` Cool! We have successfully loaded the MNIST dataset from JuiceFS! Let's train a CNN model.
``` batch_size = 128 num_classes = 10 epochs = 12 img_rows, img_cols = 28, 28 X_train = X_train.reshape(X_train.shape[0], img_rows, img_cols, 1) X_test = X_test.reshape(X_test.shape[0], img_rows, img_cols, 1) input_shape = (img_rows, img_cols, 1) X_train = X_train.astype('float32') X_test = X_test.astype('float32') X_train /= 255 X_test /= 255 y_train = keras.utils.to_categorical(y_train, num_classes) y_test = keras.utils.to_categorical(y_test, num_classes) model = Sequential() model.add(Conv2D(32, kernel_size=(3, 3), activation='relu', input_shape=input_shape)) model.add(Conv2D(64, (3, 3), activation='relu')) model.add(MaxPooling2D(pool_size=(2, 2))) model.add(Dropout(0.25)) model.add(Flatten()) model.add(Dense(128, activation='relu')) model.add(Dropout(0.5)) model.add(Dense(num_classes, activation='softmax')) model.compile(loss=keras.losses.categorical_crossentropy, optimizer=keras.optimizers.Adadelta(), metrics=['accuracy']) model.fit(X_train, y_train, batch_size=batch_size, epochs=epochs, verbose=1, validation_data=(X_test, y_test)) score = model.evaluate(X_test, y_test, verbose=0) print('Test loss:', score[0]) print('Test accuracy:', score[1]) ``` ## 4. Saving model Awesome! We have trained a simple CNN model; now let's write the model back to JuiceFS. Thanks to JuiceFS's POSIX compatibility, we can save the model as usual, with no additional effort. ``` model.save('/jfs/mnist_model.h5') ``` ## 5. Loading model Suppose you want to debug the model on your local machine, or sync it with the production environment. You can load your model from JuiceFS on any machine in real time. JuiceFS's strong consistency ensures that all confirmed changes to your data are reflected on every machine immediately.
``` from keras.models import load_model model_from_jfs = load_model('/jfs/mnist_model.h5') ``` We have successfully loaded our previous model from JuiceFS. Let's randomly pick an image from the test dataset and use the loaded model to make a prediction. ``` import random pick_idx = random.randint(0, X_test.shape[0] - 1) ``` What image have we picked? ``` plt.imshow(X_test[pick_idx].reshape((28, 28)), cmap = 'gray') ``` Let's make a prediction using the model loaded from JuiceFS. ``` y_pred = np.argmax(model_from_jfs.predict(np.expand_dims(X_test[pick_idx], axis=0))) print(f'Prediction: {y_pred}') ``` That's it. We will cover advanced usage and public datasets in the next tutorials.
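The save/load round trip in sections 4 and 5 relies only on JuiceFS behaving like an ordinary POSIX path. The same pattern can be sketched with `pickle` and a temporary directory standing in for `/jfs` (the dict below is a toy stand-in for a model, not a Keras object):

```python
import os
import pickle
import tempfile

# Save a "model" to a plain POSIX path and reload it; with JuiceFS mounted,
# `path` would simply live under /jfs and the reload could happen on a
# different machine sharing the same volume.
model_params = {"weights": [0.1, 0.2, 0.3], "classes": 10}
path = os.path.join(tempfile.mkdtemp(), "model.pkl")
with open(path, "wb") as f:
    pickle.dump(model_params, f)
with open(path, "rb") as f:
    reloaded = pickle.load(f)
assert reloaded == model_params  # round trip preserves the object
```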
```
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pylab as plt

# Draw u_i from Gaussian(0,1), i = 1,...,10 (unknown to agent)
# arms = number of possible actions (known to agent)
arms = 10
u = np.random.randn(arms,1)

# Reward of action i at time t => x_i(t) ~ N(u_i,1)
# x_i(t) is itself randomly drawn, so it's not deterministic
rounds = 1000
x = np.zeros((rounds,1))

# Initialize Q: agent's record of rewards
# Initialize estQ: agent's estimate of rewards
Q = np.zeros((arms, rounds))
estQ = np.zeros((arms,1))

# Initialize the first column of Q with small nonzero "expectations"
# (this also keeps the count_nonzero division below from dividing by zero)
Q[:,0] = 0.01 # np.random.randn(arms)

# Set the probability of exploring (epsilon)
exploreProbability = 0.1

# Initialize rolling_sum and average returns
rolling_sum = 0
avg_returns = []

for time in range(rounds):
    # Estimate return belief for each arm
    # Belief => Q_t(a) = sum of rewards / number of times action taken
    for i in range(arms):
        estQ[i] = sum(Q[i,:])/np.count_nonzero(Q[i,:])

    if np.random.uniform(0,1) > exploreProbability:
        # argmax(estQ) chooses the row (arm) with the highest value
        # np.random.normal(u[chosenLever],1) generates N(u_i,1)
        chosenLever = np.argmax(estQ)
        Q[chosenLever, time] = np.random.normal(u[chosenLever],1)
        rolling_sum += Q[chosenLever,time]
        avg_returns.append(rolling_sum/(time+1)) # because time starts at 0!
    else: # go exploring
        chosenLever = np.random.choice(arms)
        Q[chosenLever, time] = np.random.normal(u[chosenLever],1)
        rolling_sum += Q[chosenLever,time]
        avg_returns.append(rolling_sum/(time+1))

# Determine average return
average_return = rolling_sum/(rounds)

# Create the plot.
ax = sns.heatmap(Q, annot = False, cmap="YlGnBu") #, linewidth=0)
plt.ylabel('Arm')
plt.xlabel('Round')
plt.show()

print("The average return achieved is ", average_return)

## Optimal Play!
# Determine the "ideal" arm to pick, label ideal_pick
ideal_pick = np.argmax(u)

# Initialize avg_perfect = list, records the average return achieved at time t
# Initialize rolling_perfect = cumulative return you achieve
avg_perfect = []
rolling_perfect = 0

for time in range(rounds):
    rolling_perfect += np.random.normal(u[ideal_pick],1)
    avg_perfect.append(rolling_perfect/(time+1))

plt.plot(avg_returns[5:])
plt.plot(avg_perfect[5:])
plt.title("Perfect vs. Our Choice")
plt.ylabel('Avg Return Achieved')
plt.xlabel('Round')
plt.show()

final_avg_perfect_return = rolling_perfect/rounds
print(final_avg_perfect_return)
print("The mean for optimal was ", u[ideal_pick])

## Did we pick the right arm?
non_zero_elements = (Q != 0).sum(1) # this creates an array of the # of non-zero occurrences per arm
most_picked_arm = np.argmax(non_zero_elements)

print("we picked arm ", most_picked_arm)
print("the optimal arm was ", ideal_pick)

plt.plot(u)
plt.title("average means of arms")
plt.ylabel("mean")
plt.xlabel("arms")

# How many times did you pick the optimal arm?
# (Q[ideal_pick,:] != 0).sum(0) counts the non-zero entries (i.e. pulls) of the optimal arm
our_accuracy = (Q[ideal_pick,:] != 0).sum(0)/rounds *100
print("We picked the optimal arm with a probability of ", our_accuracy, "%.")
```
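As a side note (not part of the original notebook), the sample-average estimates recomputed from the full `Q` matrix each round can equivalently be maintained incrementally with running counts, which avoids both the nonzero-count bookkeeping and the O(arms × rounds) memory. A minimal sketch of the same epsilon-greedy agent:

```python
import numpy as np

rng = np.random.default_rng(0)

arms, rounds, eps = 10, 5000, 0.1
u = rng.normal(size=arms)      # true (hidden) arm means

counts = np.zeros(arms)        # times each arm was pulled
est = np.zeros(arms)           # running mean reward per arm
total = 0.0

for t in range(rounds):
    if rng.random() > eps and counts.max() > 0:
        a = int(np.argmax(est))        # exploit the current best estimate
    else:
        a = int(rng.integers(arms))    # explore uniformly at random
    r = rng.normal(u[a], 1.0)
    counts[a] += 1
    est[a] += (r - est[a]) / counts[a] # incremental mean update
    total += r

avg_return = total / rounds
```

The update `est[a] += (r - est[a]) / counts[a]` produces exactly the sample average while storing only one number per arm.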
# Cocco, Gomes, and Maenhout (2005)

# "[Consumption and Portfolio Choice Over the Life Cycle](https://academic.oup.com/rfs/article-abstract/18/2/491/1599892)"

- Notebook created by Matt Zahn

## Summary

This paper uses a realistically calibrated model of consumption and portfolio choice with non-tradable assets and borrowing constraints and finds that the optimal share invested in equities decreases over the agent's life span. The authors also examine the importance of human capital for investment behavior as well as the implications of introducing endogenous borrowing constraints. The main findings are summarized below.

- The shape of the income profile over the life span induces investors to reduce proportional stock holdings as they age, which rationalizes the advice from the popular finance literature.
- Labor income that is uncorrelated with equity returns is perceived as a closer substitute for risk-free asset holdings than for equities. Thus, the presence of labor income increases the demand for stocks, particularly earlier in an investor's life.
- Even small risks to employment income have a significant effect on investing behavior. The authors describe it as a "crowding out" effect that is particularly strong for younger households.
- The lower bound of the income distribution is very relevant to understanding borrowing capacity and portfolio allocations.

This notebook summarizes the setup and calibration of the model.

## Model

### Setup

__Time parameters and preferences__

An investor's age is denoted as $t$ and they live for a maximum of $T$ periods. The investor works for the first $K$ periods of their life, where $K$ is assumed to be exogenous and deterministic. The probability that an investor alive in period $t$ is alive in $t+1$ is $p_t$.
Investor $i$ has the following time-separable utility function:

\begin{eqnarray*}
E_1 \sum^{T}_{t=1} \delta^{t-1} \left( \prod^{t-2}_{j=0}p_j \right) \left[p_{t-1} \frac{C_{it}^{1-\gamma}}{1-\gamma} + b(1-p_{t-1}) \frac{D^{1-\gamma}_{it}}{1-\gamma}\right]
\end{eqnarray*}

where $\delta<1$ is the discount factor, $C_{it}$ is the level of consumption in period $t$, $\gamma>0$ is the coefficient of relative risk aversion, and $D_{it}$ is the level of wealth that the investor bequeaths to descendants upon their death. Bequests enter utility in the same way as consumption while living; the parameter $b$ controls the intensity of the bequest motive.

__Labor income__

Before retiring, investor $i$ at age $t$ learns labor income $Y_{it}$, which is exogenously given by:

\begin{eqnarray*}
\log(Y_{it}) = f(t, Z_{it}) + v_{it} + \epsilon_{it} \text{ for } t \leq K
\end{eqnarray*}

where $f(t, Z_{it})$ is a deterministic function of the investor's age and individual characteristics. An idiosyncratic temporary shock is given by $\epsilon_{it}$, distributed according to $N(0, \sigma^2_\epsilon)$, and $v_{it}$ is given by:

\begin{eqnarray*}
v_{it} = v_{i, t-1} + u_{it}
\end{eqnarray*}

where $u_{it}$ is distributed according to $N(0,\sigma^2_u)$ and is independent of $\epsilon_{it}$. Pre-retirement log income is thus the sum of a deterministic component, calibrated to capture the hump-shaped profile observed in the data, and stochastic components. The authors model $v_{it}$ as a random walk and assume that $\epsilon_{it}$ is uncorrelated across households. The permanent shock can be decomposed into an aggregate component $\xi_t$, with distribution $N(0,\sigma^2_\xi)$, and an idiosyncratic component $\omega_{it}$, distributed $N(0,\sigma^2_\omega)$:

\begin{eqnarray*}
u_{it} = \xi_t + \omega_{it}
\end{eqnarray*}

This construction allows the authors to model the random component of labor income as a random walk.
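As an aside (not part of the original notebook), the pre-retirement income process above is straightforward to simulate. In the sketch below the shock standard deviations and the hump-shaped profile $f(t)$ are illustrative placeholders, not the paper's PSID-based calibration:

```python
import numpy as np

rng = np.random.default_rng(1)

T, N = 45, 1000                                    # working years, households (illustrative)
sigma_eps, sigma_xi, sigma_om = 0.25, 0.10, 0.10   # assumed shock std devs (not calibrated)

t = np.arange(T)
f = -0.001 * (t - 25) ** 2 + 3.0                   # hypothetical hump-shaped deterministic profile

xi = rng.normal(0, sigma_xi, size=T)               # aggregate permanent shock, shared across i
omega = rng.normal(0, sigma_om, size=(N, T))       # idiosyncratic permanent shock
u = xi + omega                                     # u_it = xi_t + omega_it
v = np.cumsum(u, axis=1)                           # random walk: v_it = v_{i,t-1} + u_it
eps = rng.normal(0, sigma_eps, size=(N, T))        # transitory shock

logY = f + v + eps                                 # panel of log incomes
```

Because $v_{it}$ is a random walk, the cross-sectional dispersion of log income fans out with age, which is the feature the Carroll–Samwick variance decomposition below exploits.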
Additionally, they allow for correlation between innovations to excess stock returns and labor income shocks through the aggregate component $\xi_t$, which will be discussed later. Retirement income is a constant fraction $\lambda$ of permanent labor income from the last year worked by the investor:

\begin{eqnarray*}
\log(Y_{it}) = \log(\lambda) + f(t, Z_{it}) + v_{it}
\end{eqnarray*}

__Financial assets__

The riskless asset, _Treasury bills_, has a constant gross return of $\bf{\bar{R}}_f$. The dollar amount of the investor's Treasury bill holdings at time $t$ is given by $B_{it}$. The risky asset has a gross return of $R_t$ with an excess return given by:

\begin{eqnarray*}
R_{t+1} - \bf{\bar{R}}_f = \mu + \eta_{t+1}
\end{eqnarray*}

where $\eta_{t+1}$ is the innovation to excess returns, assumed to be $iid$ over time and distributed according to $N(0,\sigma^2_\eta)$. These innovations are allowed to be correlated with innovations to the aggregate component of labor income $\xi_t$, with correlation coefficient $\rho$. The risky asset is called _stocks_ and the investor's dollar holding of it at time $t$ is denoted $S_{it}$.

The investor faces the following short-sale constraints:

\begin{eqnarray*}
B_{it} &\geq& 0 \\
S_{it} &\geq& 0
\end{eqnarray*}

These ensure that the agent cannot hold negative amounts of either asset and cannot borrow against future labor income or retirement wealth. The authors define $\alpha_{it}$ as the proportion of the investor's savings invested in stocks; the constraints imply that $\alpha_{it} \in [0,1]$ and that wealth is non-negative.

### Investor's problem

In each period $t$ the investor faces the following problem. They start off with wealth $W_{it}$ and then realize their labor income $Y_{it}$. Cash on hand is defined as $X_{it} = W_{it} + Y_{it}$. The investor must then decide how much to consume, $C_{it}$, and how to allocate their savings between stocks and Treasury bills.
Next period's wealth is given by: \begin{eqnarray*} W_{i,t+1} = R^p_{i,t+1}(W_{it} + Y_{it} - C_{it}) \end{eqnarray*} Where $R^p_{i,t+1}$ is the return on the investor's total portfolio and is given by: \begin{eqnarray*} R^p_{i,t+1} \equiv \alpha_{it}R_{t+1} + (1-\alpha_{it})\bf{\bar{R}}_f \end{eqnarray*} The control variables in the problem are $\{C_{it}, \alpha_{it}\}^T_{t=1}$ and the state variables are $\{t, X_{it}, v_{it} \}^T_{t=1}$. The agent solves for a policy rule as a function of the state variables $C_{it}(X_{it}, v_{it})$ and $\alpha_{it}(X_{it}, v_{it})$. The Bellman equation for the problem is: \begin{eqnarray*} V_{it}(X_{it}) &=& \max_{C_{it}\geq 0, 0\leq \alpha_{it} \leq 1} \left[U(C_{it}) + \delta p_t E_t V_{i,t+1}(X_{i,t+1})\right] \text{ for } t<T, \\ \text{where} \\ X_{i,t+1} &=& Y_{i,t+1} + (X_{it}-C_{it})(\alpha_{it}R_{t+1} + (1-\alpha_{it}) \bf{\bar{R}}_f) \end{eqnarray*} This cannot be solved analytically and is done numerically via backward induction. ## Calibration __Labor income process__ The PSID is used to estimate the labor income equation and its permanent component. This estimation controls for family specific fixed effects. In order to control for education, the sample was split into three groups: no high school, high school but no college degree, and college graduates. Across each of these groups, $f(t,Z_{it})$ is assumed to be additively separable across its arguments. The vector of personal characteristics $Z_{it}$ includes age, household fixed effects, marital status, household size/composition. The sample uses households that have a head between the age of 20 and 65. For the retirement stage, $\lambda$ is calibrated as the ratio of the average of labor income in a given education group to the average labor income in the last year of work before retirement. The error structure of the labor income process is estimated by following the variance decomposition method described in Carroll and Samwick (1997). 
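A quick way to build intuition for the Carroll and Samwick (1997) decomposition just mentioned is to simulate a residual log-income panel and recover the variances by OLS using the relation $\mathrm{Var}(r_{id}) = d\,\sigma^2_u + 2\,\sigma^2_\epsilon$. The "true" variances below are made up for illustration, not the paper's estimates:

```python
import numpy as np

rng = np.random.default_rng(2)

N, T = 20000, 25
sigma_u2, sigma_eps2 = 0.02, 0.05   # assumed true variances (illustrative)

# residual log income log(Y*) = v + eps, with f(t, Z) already removed
v = np.cumsum(rng.normal(0, np.sqrt(sigma_u2), (N, T)), axis=1)
eps = rng.normal(0, np.sqrt(sigma_eps2), (N, T))
logY = v + eps

# Var(r_d) for d-year log-income differences, pooled over households and years
ds = np.arange(1, 11)
var_rd = np.array([np.var(logY[:, d:] - logY[:, :-d]) for d in ds])

# OLS of Var(r_d) on d and a constant: slope -> sigma_u^2, intercept -> 2*sigma_eps^2
slope, intercept = np.polyfit(ds, var_rd, 1)
sigma_u2_hat = slope
sigma_eps2_hat = intercept / 2
```

With enough simulated households the regression recovers the permanent and transitory variances closely, which is exactly how the PSID residuals are used in the paper.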
A similar method is used to estimate the correlation parameter $\rho$. Define $r_{id}$ as:

\begin{eqnarray*}
r_{id} &\equiv& \log(Y^*_{i,t+d}) - \log(Y^*_{it}), \quad d\in \{1,2,\ldots,22\}, \\
\text{where } \log(Y^*_{it}) &\equiv& \log(Y_{it}) - f(t,Z_{it}), \\
\text{then } \mathrm{Var}(r_{id}) &=& d\,\sigma^2_u + 2\,\sigma^2_\epsilon
\end{eqnarray*}

The variance estimates can be obtained by an OLS regression of $\mathrm{Var}(r_{id})$ on $d$ and a constant term. These estimated values are assumed to be the same across all individuals. For the correlation parameter, start by writing the change in $\log(Y^*_{it})$ as:

\begin{eqnarray*}
r_{i1} = \xi_t + \omega_{it} + \epsilon_{it} - \epsilon_{i,t-1}
\end{eqnarray*}

Averaging across individuals gives:

\begin{eqnarray*}
\bar{r}_1 = \xi_t
\end{eqnarray*}

The correlation coefficient is then obtained via OLS by regressing $\overline{\Delta \log(Y^*_t)}$ on demeaned excess returns:

\begin{eqnarray*}
\bar{r}_1 = \beta(R_{t+1} - \bf{\bar{R}}_f - \mu) + \psi_t
\end{eqnarray*}

__Other parameters__

Adults start at age 20 without a college degree and at age 22 with a college degree. The retirement age is 65 for all households. The investor dies for sure if they reach age 100. Prior to this age, survival probabilities come from the mortality tables published by the National Center for Health Statistics. The discount factor $\delta$ is calibrated to be $0.96$ and the coefficient of relative risk aversion ($\gamma$) is set to $10$. The mean equity premium $\mu$ is $4\%$, the risk-free rate is $2\%$, and the standard deviation of innovations to the risky asset is set to the historical value of $0.157$.

## Core Results

__Baseline model__

The results from the baseline model contextualize the results in the popular finance literature. Early in life, agents invest fully in stocks and hit their borrowing constraints. In midlife, as retirement comes closer, agents begin to save more and invest more in the risk-free asset.
Finally, in retirement the portfolio rule shifts in and wealth runs down quickly, which leads to an increase in the share held in risky assets.

__Heterogeneity and sensitivity analysis__

- Labor income risk: Income risk may vary across employment sectors relative to the baseline model. The authors examine extreme cases for industries with high and low standard deviations of permanent and temporary income shocks. While some differences appear across sectors, the results are generally in line with the baseline model.
- Disastrous labor income shocks: The authors find that even a small probability of zero labor income lowers the optimal portfolio allocation in stocks, while the qualitative features of the baseline model are preserved.
- Uncertain retirement income: The authors consider two types of uncertainty for retirement income: making it stochastic and correlated with current stock market performance, and allowing for disastrous labor income draws before retirement. The first extension has results essentially the same as the baseline case. The second leads to more conservative portfolio allocations but is broadly consistent with the model.
- Endogenous borrowing constraints: The authors add borrowing to their model by building on credit-market imperfections. They find that the average investor borrows about \$5,000 and is in debt for most of their working life. They eventually pay off this debt and save for retirement. Relative to the benchmark model, the investor puts less money into their portfolio and arrives at retirement with substantially less wealth. These results are particularly pronounced at the lower end of the income distribution relative to the higher end. Additional details are available in the text.
- Bequest motive: The authors introduce a bequest motive into the agent's utility function (i.e., $b>0$). Young investors are more impatient and tend to save less for bequests. As the agent ages, saving increases and is strongest once the agent is retired.
This affects the agent's portfolio allocation, although, taking a step back, these effects are not very large unless $b$ is large.
- Educational attainment: The authors generally find that savings behavior is consistent across education groups. They note that, for a given age, the importance of future income is increasing with education level. This implies that riskless asset holdings are larger for these households.
- Risk aversion and intertemporal substitution: Lowering the level of risk aversion in the model leads to changes in the optimal portfolio allocation and wealth accumulation. Less risk-averse investors accumulate less precautionary savings and invest more in risky assets.
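To make the numerical solution method concrete, here is a heavily simplified backward-induction sketch over a cash-on-hand grid: constant labor income, a two-point return distribution, coarse grids, and illustrative parameters. It is not the authors' calibration or code, only a demonstration of the recursion $V_t(X) = \max_{C,\alpha}\,[u(C) + \delta\, E_t V_{t+1}(X')]$:

```python
import numpy as np

# Illustrative parameters (not the paper's calibration)
T, gamma, delta = 10, 10.0, 0.96
Rf, mu, sigma = 1.02, 0.04, 0.157
Y = 1.0                                             # constant labor income (simplification)

R_nodes = Rf + mu + sigma * np.array([-1.0, 1.0])   # two-point risky return distribution
R_prob = np.array([0.5, 0.5])

Xg = np.linspace(0.1, 20, 60)                       # cash-on-hand grid
cg = np.linspace(0.05, 1.0, 20)                     # consumption as a fraction of X
ag = np.linspace(0.0, 1.0, 11)                      # equity share grid

u = lambda c: c ** (1 - gamma) / (1 - gamma)        # CRRA utility

V = u(Xg)                                           # terminal period: consume everything
policy_alpha = np.zeros((T, Xg.size))               # last row (terminal period) stays unused

for t in range(T - 2, -1, -1):                      # backward induction
    Vnew = np.empty_like(Xg)
    for i, X in enumerate(Xg):
        best = -np.inf
        for c in cg:
            C = c * X
            S = X - C                               # savings
            for a in ag:
                Rp = a * R_nodes + (1 - a) * Rf     # portfolio return at each node
                Xnext = S * Rp + Y                  # next-period cash on hand
                EV = R_prob @ np.interp(Xnext, Xg, V)  # expected continuation value
                val = u(C) + delta * EV
                if val > best:
                    best, best_a = val, a
        Vnew[i] = best
        policy_alpha[t, i] = best_a
    V = Vnew
```

The full model additionally carries the permanent income state $v_{it}$, stochastic survival, and Gaussian-quadrature shocks, but the structure of the recursion is the same.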
[source](../../api/alibi_detect.cd.mmd_online.rst)

# Online Maximum Mean Discrepancy

## Overview

The online [Maximum Mean Discrepancy (MMD)](http://jmlr.csail.mit.edu/papers/v13/gretton12a.html) detector is a kernel-based method for online drift detection. The MMD is a distance-based measure between 2 distributions *p* and *q* based on the mean embeddings $\mu_{p}$ and $\mu_{q}$ in a reproducing kernel Hilbert space $F$:

$$
MMD(F, p, q) = || \mu_{p} - \mu_{q} ||^2_{F}
$$

Given reference samples $\{X_i\}_{i=1}^{N}$ and test samples $\{Y_i\}_{i=t}^{t+W}$ we may compute an unbiased estimate $\widehat{MMD}^2(F, \{X_i\}_{i=1}^N, \{Y_i\}_{i=t}^{t+W})$ of the squared MMD between the two underlying distributions. The estimate can be updated at low cost as new data points enter the test-window. We use a [radial basis function kernel](https://en.wikipedia.org/wiki/Radial_basis_function_kernel) by default, but users are free to pass their own kernel of preference to the detector.

Online detectors assume the reference data is large and fixed and operate on single data points at a time (rather than batches). These data points are passed into the test-window and a two-sample test-statistic (in this case squared MMD) between the reference data and test-window is computed at each time-step. When the test-statistic exceeds a preconfigured threshold, drift is detected. Configuration of the thresholds requires specification of the expected run-time (ERT): the number of time-steps the detector should, on average, run for in the absence of drift before making a false detection. It also requires specification of a test-window size; smaller windows allow faster responses to severe drift while larger windows give more power to detect slight drift.

For high-dimensional data, we typically want to reduce the dimensionality before passing it to the detector.
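Before turning to preprocessing, the unbiased squared-MMD estimate described above can be illustrated in plain NumPy. This is a self-contained sketch for intuition only; it is independent of the detector's actual incremental implementation:

```python
import numpy as np

def rbf(A, B, sigma=1.0):
    # Gaussian RBF kernel matrix: k(a, b) = exp(-||a - b||^2 / (2 * sigma^2))
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def mmd2_unbiased(X, Y, sigma=1.0):
    m, n = len(X), len(Y)
    Kxx, Kyy, Kxy = rbf(X, X, sigma), rbf(Y, Y, sigma), rbf(X, Y, sigma)
    # drop the diagonal (self-similarity) terms for an unbiased estimate
    return ((Kxx.sum() - np.trace(Kxx)) / (m * (m - 1))
            + (Kyy.sum() - np.trace(Kyy)) / (n * (n - 1))
            - 2 * Kxy.mean())

rng = np.random.default_rng(0)
same = mmd2_unbiased(rng.normal(0, 1, (200, 2)), rng.normal(0, 1, (200, 2)))  # ~0: no drift
diff = mmd2_unbiased(rng.normal(0, 1, (200, 2)), rng.normal(2, 1, (200, 2)))  # clearly positive
```

Samples from the same distribution yield an estimate close to zero, while a mean shift yields a clearly positive value; the online detector thresholds exactly this kind of statistic as the test-window slides.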
Following suggestions in [Failing Loudly: An Empirical Study of Methods for Detecting Dataset Shift](https://arxiv.org/abs/1810.11953), we incorporate Untrained AutoEncoders (UAE) and black-box shift detection using the classifier's softmax outputs ([BBSDs](https://arxiv.org/abs/1802.03916)) as out-of-the box preprocessing methods and note that [PCA](https://en.wikipedia.org/wiki/Principal_component_analysis) can also be easily implemented using `scikit-learn`. Preprocessing methods which do not rely on the classifier will usually pick up drift in the input data, while BBSDs focuses on label shift. Detecting input data drift (covariate shift) $\Delta p(x)$ for text data requires a custom preprocessing step. We can pick up changes in the semantics of the input by extracting (contextual) embeddings and detect drift on those. Strictly speaking we are not detecting $\Delta p(x)$ anymore since the whole training procedure (objective function, training data etc) for the (pre)trained embeddings has an impact on the embeddings we extract. The library contains functionality to leverage pre-trained embeddings from [HuggingFace's transformer package](https://github.com/huggingface/transformers) but also allows you to easily use your own embeddings of choice. Both options are illustrated with examples in the [Text drift detection on IMDB movie reviews](../../examples/cd_text_imdb.ipynb) notebook. ## Usage ### Initialize Arguments: * `x_ref`: Data used as reference distribution. * `ert`: The expected run-time in the absence of drift, starting from *t=0*. * `window_size`: The size of the sliding test-window used to compute the test-statistic. Smaller windows focus on responding quickly to severe drift, larger windows focus on ability to detect slight drift. Keyword arguments: * `backend`: Backend used for the MMD implementation and configuration. * `preprocess_fn`: Function to preprocess the data before computing the data drift metrics. 
* `kernel`: Kernel used for the MMD computation, defaults to Gaussian RBF kernel. * `sigma`: Optionally set the GaussianRBF kernel bandwidth. Can also pass multiple bandwidth values as an array. The kernel evaluation is then averaged over those bandwidths. If `sigma` is not specified, the 'median heuristic' is adopted whereby `sigma` is set as the median pairwise distance between reference samples. * `n_bootstraps`: The number of bootstrap simulations used to configure the thresholds. The larger this is the more accurately the desired ERT will be targeted. Should ideally be at least an order of magnitude larger than the ERT. * `verbose`: Whether or not to print progress during configuration. * `input_shape`: Shape of input data. * `data_type`: Optionally specify the data type (tabular, image or time-series). Added to metadata. Additional PyTorch keyword arguments: * `device`: Device type used. The default None tries to use the GPU and falls back on CPU if needed. Can be specified by passing either 'cuda', 'gpu' or 'cpu'. Only relevant for 'pytorch' backend. Initialized drift detector example: ```python from alibi_detect.cd import MMDDriftOnline cd = MMDDriftOnline(x_ref, ert, window_size, backend='tensorflow') ``` The same detector in PyTorch: ```python cd = MMDDriftOnline(x_ref, ert, window_size, backend='pytorch') ``` We can also easily add preprocessing functions for both frameworks. 
The following example uses a randomly initialized image encoder in PyTorch: ```python from functools import partial import torch import torch.nn as nn from alibi_detect.cd.pytorch import preprocess_drift device = torch.device('cuda' if torch.cuda.is_available() else 'cpu') # define encoder encoder_net = nn.Sequential( nn.Conv2d(3, 64, 4, stride=2, padding=0), nn.ReLU(), nn.Conv2d(64, 128, 4, stride=2, padding=0), nn.ReLU(), nn.Conv2d(128, 512, 4, stride=2, padding=0), nn.ReLU(), nn.Flatten(), nn.Linear(2048, 32) ).to(device).eval() # define preprocessing function preprocess_fn = partial(preprocess_drift, model=encoder_net, device=device, batch_size=512) cd = MMDDriftOnline(x_ref, ert, window_size, backend='pytorch', preprocess_fn=preprocess_fn) ``` The same functionality is supported in TensorFlow and the main difference is that you would import from `alibi_detect.cd.tensorflow import preprocess_drift`. Other preprocessing steps such as the output of hidden layers of a model or extracted text embeddings using transformer models can be used in a similar way in both frameworks. TensorFlow example for the hidden layer output: ```python from alibi_detect.cd.tensorflow import HiddenOutput, preprocess_drift model = # TensorFlow model; tf.keras.Model or tf.keras.Sequential preprocess_fn = partial(preprocess_drift, model=HiddenOutput(model, layer=-1), batch_size=128) cd = MMDDriftOnline(x_ref, ert, window_size, backend='tensorflow', preprocess_fn=preprocess_fn) ``` Check out the [Online Drift Detection on the Wine Quality Dataset](../../examples/cd_online_wine.ipynb) example for more details. 
Alibi Detect also includes custom text preprocessing steps in both TensorFlow and PyTorch based on Huggingface's [transformers](https://github.com/huggingface/transformers) package: ```python import torch import torch.nn as nn from transformers import AutoTokenizer from alibi_detect.cd.pytorch import preprocess_drift from alibi_detect.models.pytorch import TransformerEmbedding device = torch.device('cuda' if torch.cuda.is_available() else 'cpu') model_name = 'bert-base-cased' tokenizer = AutoTokenizer.from_pretrained(model_name) embedding_type = 'hidden_state' layers = [5, 6, 7] embed = TransformerEmbedding(model_name, embedding_type, layers) model = nn.Sequential(embed, nn.Linear(768, 256), nn.ReLU(), nn.Linear(256, enc_dim)).to(device).eval() preprocess_fn = partial(preprocess_drift, model=model, tokenizer=tokenizer, max_len=512, batch_size=32) # initialise drift detector cd = MMDDriftOnline(x_ref, ert, window_size, backend='pytorch', preprocess_fn=preprocess_fn) ``` Again the same functionality is supported in TensorFlow but with `from alibi_detect.cd.tensorflow import preprocess_drift` and `from alibi_detect.models.tensorflow import TransformerEmbedding` imports. ### Detect Drift We detect data drift by sequentially calling `predict` on single instances `x_t` (no batch dimension) as they each arrive. We can return the test-statistic and the threshold by setting `return_test_stat` to *True*. The prediction takes the form of a dictionary with `meta` and `data` keys. `meta` contains the detector's metadata while `data` is also a dictionary which contains the actual predictions stored in the following keys: * `is_drift`: 1 if the test-window (of the most recent `window_size` observations) has drifted from the reference data and 0 otherwise. * `time`: The number of observations that have been so far passed to the detector as test instances. * `ert`: The expected run-time the detector was configured to run at in the absence of drift. 
* `test_stat`: MMD^2 metric between the reference data and the test_window if `return_test_stat` equals *True*.
* `threshold`: The value the test-statistic is required to exceed for drift to be detected if `return_test_stat` equals *True*.

```python
preds = cd.predict(x_t, return_test_stat=True)
```

Resetting the detector with the same reference data and thresholds but with a new and empty test-window is straightforward:

```python
cd.reset()
```

## Examples

[Online Drift Detection on the Wine Quality Dataset](../../examples/cd_online_wine.ipynb)

[Online Drift Detection on the Camelyon medical imaging dataset](../../examples/cd_online_camelyon.ipynb)
# Thermal data exploration notebook

*Author: Giulia Mazzotti - heavily relying on Steven Pestana's amazing Thermal IR tutorials!*

### Main goal - scientific

On February 8th, airborne thermal IR data was acquired over the Grand Mesa domain during multiple overpasses. The goal of this notebook is to kick off working with this dataset. Some initial explorative analysis is done, and ideas for further data mining are provided. We consider two aspects:
1. How do surface temperatures compare to ground measurements over time?
2. How do surface temperatures vary in space and time in forested and open areas?

### Personal goal

First steps with Python and Jupyter, and get familiar with SnowEx thermal data!

## 0. Preparatory steps

So... we need some data! We're going to work with two datasets:
1. A NetCDF file including all airborne acquisitions of that day
2. Temperature data from a snow pit that had stationary sensors

---
**<<<The code below only needs to be run once, to download the data file. You can comment it out afterwards>>>**

Download a sample airborne IR netcdf file that contains 17 image mosaics from the morning of Feb. 8th, 2020. (Start by downloading [driveanon](https://github.com/friedrichknuth/driveanon) to download the sample file from Google Drive using "pip install")

```
%%capture
!pip install git+https://github.com/friedrichknuth/driveanon.git

# import driveanon
import driveanon as da

# download and save the file
folder_blob_id = '1BYz63HsSilPcQpCWPNZOp62ZZU2OdeWO'
file_names, file_blob_ids = da.list_blobs(folder_blob_id,'.nc')
print(file_names, file_blob_ids)
da.save(file_blob_ids[0])
```

Download the pit data

```
# import the temp data in lack of a better approach
!aws s3 sync --quiet s3://snowex-data/tutorial-data/thermal-ir/ /tmp/thermal-ir/
```

**<<<The above code only needs to be run once, to download the data file.
You can comment it out afterwards>>>**

-----

Import packages we're going to need

```
# import xarray and rioxarray packages to work with the airborne raster data
import xarray as xr
import rioxarray
import matplotlib.pyplot as plt

# Import some general-purpose packages for handling different data structures
import numpy as np # for working with n-D arrays
import pandas as pd # for reading our csv data file and working with tabular data

# Import some packages for working with the SnowEx SQL database
from snowexsql.db import get_db # Import the connection function from the snowexsql library
from snowexsql.data import SiteData # Import the table classes from our data module which is where our ORM classes are defined
from datetime import date # Import some tools to build dates
from snowexsql.conversions import query_to_geopandas # Import a useful function for plotting and saving queries! See https://snowexsql.readthedocs.io/en/latest/snowexsql.html#module-snowexsql.conversions
```

## 1. Comparing time series of airborne images to snowpit data

### Open and inspect the NetCDF file containing airborne IR time series

```
# open the NetCDF file as dataset
ds = xr.open_dataset('SNOWEX2020_IR_PLANE_2020Feb08_mosaicked_APLUW_v2.nc')
ds
```

This command opens a NetCDF file and creates a dataset that contains all 17 airborne thermal IR acquisitions throughout the day on February 8th, as seen by printing the timestamps. Some data conversion tasks are now necessary:

```
# To make rioxarray happy, we should rename our spatial coordinates "x" and "y" (it automatically looks for coordinates with these names)
# We want to look at the variable "STBmosaic" (temperatures in degrees C), so we can drop everything else.
da = ds.STBmosaic.rename({'easting':'x', 'northing':'y'}) # create a new data array of "STBmosaic" with the renamed coordinates # We also need to perform a coordinate transformation to ensure compatibility with the pit dataset da = da.rio.write_crs('EPSG:32613') # assign current crs da = da.rio.reproject('EPSG:26912') # overwrite with new reprojected data array # Create a pandas timestamp array, subtract 7 hours from UTC time to get local time (MST, UTC-7) # Ideally programmatically by reading out the entries in da.time air_timestamps = [pd.Timestamp(2020,2,8,8,7,17), pd.Timestamp(2020,2,8,8,16,44), pd.Timestamp(2020,2,8,8,28,32), pd.Timestamp(2020,2,8,8,43,2),\ pd.Timestamp(2020,2,8,8,55,59), pd.Timestamp(2020,2,8,9,7,54), pd.Timestamp(2020,2,8,11,7,37), pd.Timestamp(2020,2,8,11,19,15),\ pd.Timestamp(2020,2,8,11,29,16), pd.Timestamp(2020,2,8,11,40,56), pd.Timestamp(2020,2,8,11,50,20), pd.Timestamp(2020,2,8,12,1,9),\ pd.Timestamp(2020,2,8,12,6,22), pd.Timestamp(2020,2,8,12,18,49), pd.Timestamp(2020,2,8,12,31,35), pd.Timestamp(2020,2,8,12,44,28),\ pd.Timestamp(2020,2,8,12,56,16)] ``` *Additional plotting ideas for later: create interactive plot of the 17 acquisitions with time slider to get an idea of data coverage* Get the location for snow pit 2S10 from the SnowEx SQL database (query [SiteData](https://snowexsql.readthedocs.io/en/latest/database_structure.html#sites-table) using [filter_by](https://docs.sqlalchemy.org/en/14/orm/query.html#sqlalchemy.orm.Query.filter_by) to find the entry with the site ID that we want). 
Then preview the resulting geodataframe and perform some necessary data wrangling steps, as demonstrated by Steven in his tutorial.

```
# Standard commands to access the database according to the tutorials
db_name = 'snow:hackweek@52.32.183.144/snowex'
engine, session = get_db(db_name)

# Form the query to receive site_id='2S10' from the sites table
qry = session.query(SiteData).filter_by(site_id='2S10')

# Convert the record received into a geopandas dataframe
siteData_df = query_to_geopandas(qry, engine)

# Preview the resulting geopandas dataframe; this is just the site info, not the data!
# siteData_df

# Check that the coordinate systems indeed match
# siteData_df.crs

# create the column headers
column_headers = ['table', 'year', 'doy', 'time',  # year, day of year, time of day (local time, UTC-7)
                  'rad_avg', 'rad_max', 'rad_min', 'rad_std',  # radiometer surface temperature
                  'sb_avg', 'sb_max', 'sb_min', 'sb_std',  # radiometer sensor body temperature (for calibration)
                  'temp1_avg', 'temp1_max', 'temp1_min', 'temp1_std',  # temperature at 5 cm below snow surface
                  'temp2_avg', 'temp2_max', 'temp2_min', 'temp2_std',  # 10 cm
                  'temp3_avg', 'temp3_max', 'temp3_min', 'temp3_std',  # 15 cm
                  'temp4_avg', 'temp4_max', 'temp4_min', 'temp4_std',  # 20 cm
                  'temp5_avg', 'temp5_max', 'temp5_min', 'temp5_std',  # 30 cm
                  'batt_a', 'batt_b',  # battery voltage data
                  ]

# Read the actual data and include the column headers.
# After the filepath we specify header=None because the file doesn't contain column headers,
# then we specify names=column_headers to give our own names for each column.
df = pd.read_csv('/tmp/thermal-ir/SNEX20_VPTS_Raw/Level-0/snow-temperature-timeseries/CR10X_GM1_final_storage_1.dat',
                 header=None, names=column_headers)

# Create a zero-padded time string (e.g. for 9:30 AM we are changing '930' into '0930')
df['time_str'] = [('0' * (4 - len(str(df.time[i])))) + str(df.time[i]) for i in range(df.shape[0])]

# Change midnight from '2400' to '0000' ... this might introduce some funny things
df.time_str.replace('2400', '0000', inplace=True)

def compose_date(years, months=1, days=1, weeks=None, hours=None, minutes=None,
                 seconds=None, milliseconds=None, microseconds=None, nanoseconds=None):
    '''Compose a datetime object from various datetime components.
    This clever solution is from:
    https://stackoverflow.com/questions/34258892/converting-year-and-day-of-year-into-datetime-index-in-pandas'''
    years = np.asarray(years) - 1970
    months = np.asarray(months) - 1
    days = np.asarray(days) - 1
    types = ('<M8[Y]', '<m8[M]', '<m8[D]', '<m8[W]', '<m8[h]',
             '<m8[m]', '<m8[s]', '<m8[ms]', '<m8[us]', '<m8[ns]')
    vals = (years, months, days, weeks, hours, minutes,
            seconds, milliseconds, microseconds, nanoseconds)
    return sum(np.asarray(v, dtype=t) for t, v in zip(types, vals) if v is not None)

# Create a datetime value from the date fields and the zero-padded time_str field, and set this as our dataframe's index
df.index = compose_date(df['year'], days=df['doy'],
                        hours=df['time_str'].str[:2], minutes=df['time_str'].str[2:])

# Remove entries that are from table "102" (this contains datalogger battery information we're not interested in at the moment)
df = df[df.table != 102]

# Drop the columns we no longer need
df.drop(columns=['table', 'year', 'doy', 'time', 'time_str', 'batt_a', 'batt_b'], inplace=True)
```

### Clip the airborne IR data around the pit and plot in time

Make a simple plot of the data.
We are interested in the variable `rad_avg`, which is the average temperature measured by the radiometer over each 5-minute period.

```
# Reminder of where our snow pit is, to create a bounding box around it
siteData_df.geometry.bounds

# Let's first look at a 100 m grid cell around the pit
minx = 743026
miny = 4322639
maxx = 743126
maxy = 4322739

# clip
da_clipped = da.rio.clip_box(minx, miny, maxx, maxy)

# Quick check to see where the bounding box and the pit data overlap by at least 80% (approx)
# for da_clipped_step in da_clipped:
#     fig, ax = plt.subplots()
#     da_clipped_step.plot(ax=ax, cmap='magma')

# Save the corresponding indices for use in the next plot; skip index 1 because it's going to be plotted differently
ints = [3, 5, 7, 9, 11, 12, 14, 16]

plt.figure(figsize=(10,4))

# plot radiometer average temperature
df.rad_avg.plot(linestyle='-', marker='', markersize=1, c='k', label='Ground-based $T_s$')

# plot the mean airborne IR temperature from the area around the snow pit:
plt.plot(air_timestamps[1], da_clipped.isel(time = 1).mean(),
         marker='o', c='r', linestyle='none',
         label='Airborne IR mean $T_s$ for 100 m bounding box area')

# plot an error bar showing the maximum and minimum airborne IR temperature around the snow pit
plt.errorbar(air_timestamps[1], da_clipped.isel(time = 1).mean(),
             yerr=[[da_clipped.isel(time = 1).mean() - da_clipped.isel(time = 1).min()],
                   [da_clipped.isel(time = 1).max() - da_clipped.isel(time = 1).mean()]],
             capsize=3, fmt='none', ecolor='b',
             label='Airborne IR $T_s$ range for 100 m bounding box area')

for i in ints:
    # plot the mean airborne IR temperature from the area around the snow pit:
    plt.plot(air_timestamps[i], da_clipped.isel(time = i).mean(),
             marker='o', c='r', linestyle='none')  # label='Airborne IR mean $T_s$ for 100 m radius area'
    # plot an error bar showing the maximum and minimum airborne IR temperature around the snow pit
    plt.errorbar(air_timestamps[i], da_clipped.isel(time = i).mean(),
                 yerr=[[da_clipped.isel(time = i).mean() - da_clipped.isel(time = i).min()],
                       [da_clipped.isel(time = i).max() - da_clipped.isel(time = i).mean()]],
                 capsize=3, fmt='none', ecolor='b')  # label='Airborne IR $T_s$ range for 100 m radius area'

# set axes limits
plt.ylim((-15,0))
plt.xlim((pd.Timestamp(2020,2,8,6,0), pd.Timestamp(2020,2,8,20,0)))  # zoom in to daytime hours on Feb. 8, 2020

# add a legend to the plot
plt.legend()

# set axes labels
plt.ylabel('Temperature [$\degree C$]')
plt.xlabel('Time')

# add grid lines to the plot
plt.grid('on')

# set the plot title
plt.title('Snow Surface Temperature at Snow Pit 2S10 compared to Airborne IR imagery (100 m box)');

plt.savefig('timeseries.jpg')
```

### Compare spatial patterns in open and forest

```
# Create new bounding boxes and quickly verify where they have overlapping data.
# Now let's look at 500 m pixels just to make sure we see something.

# set up our box bounding coordinates - pit site (verified that it doesn't contain trees)
minx = 742826
miny = 4322439
maxx = 743326
maxy = 4322939

# bounding box in forest for comparison
minx_for = 743926
miny_for = 4322839
maxx_for = 744426
maxy_for = 4323339

# clip
da_clipped = da.rio.clip_box(minx, miny, maxx, maxy)
da_clipped_for = da.rio.clip_box(minx_for, miny_for, maxx_for, maxy_for)

# Both cases have cool datasets for i = 1 and i = 16
# for da_clipped_step in da_clipped_for:
#     fig, ax = plt.subplots()
#     da_clipped_step.plot(ax=ax, cmap='magma')

fig, ax = plt.subplots(nrows=1, ncols=2, figsize=(10,4), tight_layout=True)

airborne_ir_area_temperature = da_clipped.isel(time = 1)
# plot the portion of the airborne TIR image we selected within the buffer area geometry
airborne_ir_area_temperature.plot(cmap='magma', vmin=-20, vmax=5, ax=ax[0],
                                  cbar_kwargs={'label': 'Temperature $\degree C$'})
ax[0].set_title('Airborne TIR image within\n500 m pixel - open area - 8AM\n')
ax[0].set_aspect('equal')
ax[0].set_xlabel('Eastings UTM 12N (m)')
ax[0].set_ylabel('Northings UTM 12N (m)')
# ax[0].set_xlim((xmin-150, xmax+150))  # x axis limits to +/- 150 m from our point's "total bounds"
# ax[0].set_ylim((ymin-150, ymax+150))  # y axis limits to +/- 150 m from our point's "total bounds"

airborne_ir_area_temperature = da_clipped_for.isel(time = 1)
# plot the portion of the airborne TIR image we selected within the buffer area geometry
airborne_ir_area_temperature.plot(cmap='magma', vmin=-20, vmax=5, ax=ax[1],
                                  cbar_kwargs={'label': 'Temperature $\degree C$'})
ax[1].set_title('Airborne TIR image within\n500 m pixel - forested area - 8AM\n')
ax[1].set_aspect('equal')
ax[1].set_xlabel('Eastings UTM 12N (m)')
ax[1].set_ylabel('Northings UTM 12N (m)')

plt.savefig('spatial_8am.jpg')

fig, ax = plt.subplots(nrows=1, ncols=2, figsize=(10,4), tight_layout=True)

airborne_ir_area_temperature = da_clipped.isel(time = 16)
# plot the portion of the airborne TIR image we selected within the buffer area geometry
airborne_ir_area_temperature.plot(cmap='magma', vmin=-20, vmax=5, ax=ax[0],
                                  cbar_kwargs={'label': 'Temperature $\degree C$'})
ax[0].set_title('Airborne TIR image within\n500 m pixel - open area - 12PM\n')
ax[0].set_aspect('equal')
ax[0].set_xlabel('Eastings UTM 12N (m)')
ax[0].set_ylabel('Northings UTM 12N (m)')
# ax[0].set_xlim((xmin-150, xmax+150))  # x axis limits to +/- 150 m from our point's "total bounds"
# ax[0].set_ylim((ymin-150, ymax+150))  # y axis limits to +/- 150 m from our point's "total bounds"

airborne_ir_area_temperature = da_clipped_for.isel(time = 16)
# plot the portion of the airborne TIR image we selected within the buffer area geometry
airborne_ir_area_temperature.plot(cmap='magma', vmin=-20, vmax=5, ax=ax[1],
                                  cbar_kwargs={'label': 'Temperature $\degree C$'})
ax[1].set_title('Airborne TIR image within\n500 m pixel - forested area - 12PM\n')
ax[1].set_aspect('equal')
ax[1].set_xlabel('Eastings UTM 12N (m)')
ax[1].set_ylabel('Northings UTM 12N (m)')

plt.savefig('spatial_12pm.jpg')

fig, ax = plt.subplots(nrows=1, ncols=4, figsize=(16,4), tight_layout=True)

airborne_ir_area_temperature = da_clipped.isel(time = 1)
# plot a histogram of image temperature data within the buffer area geometry
airborne_ir_area_temperature.plot.hist(ax=ax[0], color='grey',
                                       zorder=1,  # use zorder to make sure this plots below the point
                                       label='zonal $T_S$ histogram')
# plot a vertical line for the mean temperature within the buffer area geometry
ax[0].axvline(airborne_ir_area_temperature.mean(),
              color='b', linestyle='--',  # set color and style
              zorder=2,  # use zorder to make sure this plots on top of the histogram
              label='zonal mean $T_S$')
ax[0].legend(loc='upper left')  # add a legend
ax[0].set_xlim((-22,-2))  # set xlim to same values as colorbar in image plot
ax[0].set_title('Open area - 8AM')
ax[0].set_ylabel('Number of pixels');

airborne_ir_area_temperature = da_clipped_for.isel(time = 1)
# plot a histogram of image temperature data within the buffer area geometry
airborne_ir_area_temperature.plot.hist(ax=ax[1], color='grey',
                                       zorder=1,  # use zorder to make sure this plots below the point
                                       label='zonal $T_S$ histogram')
# plot a vertical line for the mean temperature within the buffer area geometry
ax[1].axvline(airborne_ir_area_temperature.mean(),
              color='g', linestyle='--',  # set color and style
              zorder=2,  # use zorder to make sure this plots on top of the histogram
              label='zonal mean $T_S$')
ax[1].legend(loc='upper left')  # add a legend
ax[1].set_xlim((-22,-2))  # set xlim to same values as colorbar in image plot
ax[1].set_title('Forest area - 8AM')
ax[1].set_ylabel('Number of pixels');

airborne_ir_area_temperature = da_clipped.isel(time = 16)
# plot a histogram of image temperature data within the buffer area geometry
airborne_ir_area_temperature.plot.hist(ax=ax[2], color='grey',
                                       zorder=1,  # use zorder to make sure this plots below the point
                                       label='zonal $T_S$ histogram')
# plot a vertical line for the mean temperature within the buffer area geometry
ax[2].axvline(airborne_ir_area_temperature.mean(),
              color='b', linestyle='--',  # set color and style
              zorder=2,  # use zorder to make sure this plots on top of the histogram
              label='zonal mean $T_S$')
ax[2].legend(loc='upper left')  # add a legend
ax[2].set_xlim((-15,5))  # set xlim to same values as colorbar in image plot
# ax[1].set_ylim((0,400))  # set ylim
ax[2].set_title('Open area - 12PM')
ax[2].set_ylabel('Number of pixels');

airborne_ir_area_temperature = da_clipped_for.isel(time = 16)
# plot a histogram of image temperature data within the buffer area geometry
airborne_ir_area_temperature.plot.hist(ax=ax[3], color='grey',
                                       zorder=1,  # use zorder to make sure this plots below the point
                                       label='zonal $T_S$ histogram')
# plot a vertical line for the mean temperature within the buffer area geometry
ax[3].axvline(airborne_ir_area_temperature.mean(),
              color='g', linestyle='--',  # set color and style
              zorder=2,  # use zorder to make sure this plots on top of the histogram
              label='zonal mean $T_S$')
ax[3].legend(loc='upper left')  # add a legend
ax[3].set_xlim((-15,5))  # set xlim to same values as colorbar in image plot
# ax[1].set_ylim((0,400))  # set ylim
ax[3].set_title('Forest area - 12PM')
ax[3].set_ylabel('Number of pixels');

plt.savefig('histograms.jpg')
```
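For reference, the day-of-year timestamp handling from the data-wrangling step can be sketched on a tiny made-up table (the two records below are hypothetical; the real data come from the CR10X file). Here `pd.to_datetime`/`pd.to_timedelta` is used as a simpler stand-in for the `compose_date` helper:

```
import pandas as pd

# Hypothetical toy records: year, day of year, and HHMM local time as an integer
df = pd.DataFrame({'year': [2020, 2020], 'doy': [39, 39], 'time': [930, 2400]})

# Zero-pad the integer time to four characters ('930' -> '0930'), as in the tutorial
df['time_str'] = df['time'].astype(str).str.zfill(4)
# The logger writes midnight as '2400'; pandas expects '0000'
df['time_str'] = df['time_str'].replace('2400', '0000')

# Build a datetime index from year + day-of-year + hour + minute
df.index = (pd.to_datetime(df['year'].astype(str), format='%Y')
            + pd.to_timedelta(df['doy'] - 1, unit='D')
            + pd.to_timedelta(df['time_str'].str[:2].astype(int), unit='h')
            + pd.to_timedelta(df['time_str'].str[2:].astype(int), unit='m'))

print(df.index[0])  # day 39 of 2020 at 09:30 -> 2020-02-08 09:30:00
```

Note that mapping '2400' to '0000' keeps the same calendar day, which is the same simplification the tutorial makes (the "funny things" mentioned above).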
#Classes

Variables, lists, dictionaries, etc. in Python are all objects. Without getting into the theory of object-oriented programming, the concepts will be explained along the way in this tutorial.

A class is declared as follows:

```
class class_name:
    Functions
```

```
class FirstClass:
    pass
```

**pass** in python means do nothing.

Above, a class object named "FirstClass" is declared. Now consider an "egclass" which has all the characteristics of "FirstClass". So all you have to do is equate "egclass" to "FirstClass". In python jargon this is called creating an instance: "egclass" is an instance of "FirstClass".

```
egclass = FirstClass()

type(egclass)

type(FirstClass)
```

Now let us add some "functionality" to the class so that our "FirstClass" is defined in a better way. A function inside a class is called a "method" of that class.

Most classes will have a function named "\_\_init\_\_". These are called magic methods. In this method you basically initialize the variables of that class; any other initial algorithm applicable to all methods is also specified in this method. A variable inside a class is called an attribute. Magic methods help simplify the process of initializing an instance. Without the magic method \_\_init\_\_, which is otherwise called a constructor, one had to define an **init( )** method and call it explicitly:

```
eg0 = FirstClass()
eg0.init()
```

But when the constructor is defined, \_\_init\_\_ is called automatically, thus initializing the instance created. We will make our "FirstClass" accept two variables, name and symbol. (I will explain "self" in a while.)

```
class FirstClass:
    def __init__(self,name,symbol):
        self.name = name
        self.symbol = symbol
```

Now that we have defined a function and added the \_\_init\_\_ method, we can create an instance of FirstClass which now accepts two arguments.
```
eg1 = FirstClass('one',1)
eg2 = FirstClass('two',2)

print eg1.name, eg1.symbol
print eg2.name, eg2.symbol
```

The **dir( )** function comes in very handy for looking into what a class contains and what methods it offers:

```
dir(FirstClass)
```

**dir( )** of an instance also shows its defined attributes:

```
dir(eg1)
```

Changing the FirstClass function a bit,

```
class FirstClass:
    def __init__(self,name,symbol):
        self.n = name
        self.s = symbol
```

Changing self.name and self.symbol to self.n and self.s respectively will yield,

```
eg1 = FirstClass('one',1)
eg2 = FirstClass('two',2)

print eg1.name, eg1.symbol
print eg2.name, eg2.symbol
```

AttributeError. Remember, variables are nothing but attributes inside a class? So this means we have not used the correct attribute for the instance.

```
dir(eg1)

print eg1.n, eg1.s
print eg2.n, eg2.s
```

So now we have solved the error. Now let us compare the two examples that we saw. When I declared self.name and self.symbol, there was no attribute error for eg1.name and eg1.symbol, and when I declared self.n and self.s, there was no attribute error for eg1.n and eg1.s.

From the above we can conclude that self is nothing but the instance itself. Remember, self is not predefined; it is user-defined. You can use any name you are comfortable with, but it has become common practice to use self.

```
class FirstClass:
    def __init__(asdf1234,name,symbol):
        asdf1234.n = name
        asdf1234.s = symbol

eg1 = FirstClass('one',1)
eg2 = FirstClass('two',2)

print eg1.n, eg1.s
print eg2.n, eg2.s
```

Since eg1 and eg2 are instances of FirstClass, they need not necessarily be limited to FirstClass itself. An instance can extend itself by declaring other attributes, without those attributes being declared inside FirstClass:

```
eg1.cube = 1
eg2.cube = 8

dir(eg1)
```

Just like the global and local variables we saw earlier, classes also have their own types of variables.
Class attribute: an attribute defined outside the methods, applicable to all instances.

Instance attribute: an attribute defined inside a method (typically \_\_init\_\_), unique to each instance.

```
class FirstClass:
    test = 'test'
    def __init__(self,name,symbol):
        self.name = name
        self.symbol = symbol
```

Here test is a class attribute and name is an instance attribute.

```
eg3 = FirstClass('Three',3)

print eg3.test, eg3.name
```

Let us add some more methods to FirstClass.

```
class FirstClass:
    def __init__(self,name,symbol):
        self.name = name
        self.symbol = symbol
    def square(self):
        return self.symbol * self.symbol
    def cube(self):
        return self.symbol * self.symbol * self.symbol
    def multiply(self, x):
        return self.symbol * x

eg4 = FirstClass('Five',5)

print eg4.square()
print eg4.cube()

eg4.multiply(2)
```

The above can also be written as,

```
FirstClass.multiply(eg4,2)
```

##Inheritance

There might be cases where a new class has all the characteristics of an already defined class. The new class can then "inherit" the previous class and add its own methods to it. This is called inheritance.

Consider a class SoftwareEngineer which has a method salary.

```
class SoftwareEngineer:
    def __init__(self,name,age):
        self.name = name
        self.age = age
    def salary(self, value):
        self.money = value
        print self.name,"earns",self.money

a = SoftwareEngineer('Kartik',26)
a.salary(40000)

dir(SoftwareEngineer)
```

Now consider another class Artist which tells us about the amount of money an artist earns and his artform.

```
class Artist:
    def __init__(self,name,age):
        self.name = name
        self.age = age
    def money(self,value):
        self.money = value
        print self.name,"earns",self.money
    def artform(self, job):
        self.job = job
        print self.name,"is a", self.job

b = Artist('Nitin',20)
b.money(50000)
b.artform('Musician')

dir(Artist)
```

The money method and the salary method are the same.
So we can generalize the method to salary and have the Artist class inherit the SoftwareEngineer class. Now the Artist class becomes,

```
class Artist(SoftwareEngineer):
    def artform(self, job):
        self.job = job
        print self.name,"is a", self.job

c = Artist('Nishanth',21)

dir(Artist)

c.salary(60000)
c.artform('Dancer')
```

Suppose that, while inheriting, a particular inherited method is not suitable for the new class. One can override this method by defining a method with the same name inside the new class.

```
class Artist(SoftwareEngineer):
    def artform(self, job):
        self.job = job
        print self.name,"is a", self.job
    def salary(self, value):
        self.money = value
        print self.name,"earns",self.money
        print "I am overriding the SoftwareEngineer class's salary method"

c = Artist('Nishanth',21)
c.salary(60000)
c.artform('Dancer')
```

If you are not sure how many times a method will be called, it becomes difficult to declare a separate variable to carry each result. It is then better to declare a list and append the results to it.

```
class emptylist:
    def __init__(self):
        self.data = []
    def one(self,x):
        self.data.append(x)
    def two(self, x ):
        self.data.append(x**2)
    def three(self, x):
        self.data.append(x**3)

xc = emptylist()

xc.one(1)
print xc.data
```

Since xc.data is a list, direct list operations can also be performed:

```
xc.data.append(8)
print xc.data

xc.two(3)
print xc.data
```

If the number of input arguments varies from instance to instance, an asterisk can be used as shown:

```
class NotSure:
    def __init__(self, *args):
        self.data = ''.join(list(args))

yz = NotSure('I', 'Do' , 'Not', 'Know', 'What', 'To','Type')
yz.data
```

#Where to go from here?

Practice alone can help you get the hang of python. Give yourself problem statements and solve them. You can also sign up to any competitive coding platform for problem statements. The more you code, the more you discover and the more you start appreciating the language.
Now that you have been introduced to python, you can try out the different python libraries in the field of your interest. I highly recommend checking out this curated list of Python frameworks, libraries and software: http://awesome-python.com

The official python documentation: https://docs.python.org/2/

You can also check out Python practice programs written by my friend, Kartik Kannapur.

Github Repo: https://github.com/rajathkumarmp/Python-Lectures

Enjoy solving problem statements because life is short, you need python!

Peace.

Rajath Kumar M.P ( rajathkumar dot exe at gmail dot com)
# Decision Tree

Classification And Regression Trees (CART for short) is a term introduced by [Leo Breiman](https://en.wikipedia.org/wiki/Leo_Breiman) to refer to Decision Tree algorithms that can be used for classification or regression predictive modeling problems. In this lab assignment, you will implement various ways to calculate the impurity used to split data when constructing decision trees, and apply the Decision Tree algorithm to solve two real-world problems: a classification one and a regression one.

```
# import packages
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
import pandas as pd

from matplotlib.legend_handler import HandlerLine2D

from sklearn.metrics import accuracy_score
from sklearn.metrics import confusion_matrix
from sklearn.metrics import mean_squared_error
from sklearn.metrics import mean_absolute_error
from sklearn.tree import DecisionTreeClassifier
from sklearn.tree import DecisionTreeRegressor
from sklearn.model_selection import GridSearchCV

# make this notebook's output stable across runs
np.random.seed(0)

import warnings
warnings.filterwarnings("ignore")
```

## Gini impurity and Entropy

<span style="color:orange">**Coding Part 1: Implement the functions to calculate Gini impurity and entropy.**</span>

#### Gini impurity

The CART algorithm recursively splits the training set into two subsets using a single feature $k$ and a threshold $t_k$. The best feature and threshold are chosen to produce the purest subsets, weighted by their size. **Gini impurity** measures the impurity of the data points in a set and is used to evaluate how good a split is when the CART algorithm searches for the best pair of feature and threshold.

To compute the Gini impurity for a set of items with $J$ classes, suppose $i \in \{1, 2, \dots, J\}$ and let $p_i$ be the fraction of items labeled with class $i$ in the set.
\begin{align}
I(p) = 1 - \sum_{i=1}^J p_i^2
\end{align}

In this exercise, you will implement the function to calculate Gini impurity.

```
def gini_impurity(x):
    """
    TODO: This function calculates the Gini impurity of an array.
    Args:
        x: a numpy ndarray
    """
    classes, n_items_per_class = np.unique(x, return_counts=True)
    items = len(x)
    impurity = 1 - np.sum([np.square(i/items) for i in n_items_per_class])
    return impurity

np.testing.assert_equal(0, gini_impurity(np.array([1, 1, 1])))
np.testing.assert_equal(0.5, gini_impurity(np.array([1, 0, 1, 0])))
np.testing.assert_equal(3/4, gini_impurity(np.array(['a', 'b', 'c', 'd'])))
np.testing.assert_almost_equal(2.0/3, gini_impurity(np.array([1, 2, 3, 1, 2, 3])))
```

#### Entropy

Another popular measure of impurity is called **entropy**, which measures the average information content of a message. Entropy is zero when all messages are identical. When applied to CART, a set's entropy is zero when it contains instances of only one class. Entropy is calculated as follows:

\begin{align}
I(p) = - \sum_{i=1}^J p_i \log_2{p_i}
\end{align}

Here, you will implement the entropy impurity function.

```
def entropy(x):
    """
    TODO: This function calculates the entropy of an array.
    Args:
        x: a numpy ndarray
    """
    classes, n_items_per_class = np.unique(x, return_counts=True)
    items = len(x)
    entropy = -1*np.sum([(i/items)*np.log2(i/items) for i in n_items_per_class])
    if entropy == 0.0:
        return 0  # avoid returning -0.0 for a pure set
    else:
        return entropy

np.testing.assert_equal(0, entropy(np.array([1, 1, 1])))
np.testing.assert_equal(1.0, entropy(np.array([1, 0, 1, 0])))
np.testing.assert_equal(2.0, entropy(np.array(['a', 'b', 'c', 'd'])))
np.testing.assert_almost_equal(1.58496, entropy(np.array([1, 2, 3, 1, 2, 3])), 4)
```

---

## Decision Tree Classifier

<span style="color:orange">**Coding Part 2: In this exercise, we will apply the Decision Tree classifier to classify the Iris flower data.**</span>

### Iris dataset

The Iris data set contains the morphologic variation of Iris flowers of three related species (Iris setosa, Iris virginica and Iris versicolor). Four features were measured from each observation (see image below):

- Sepal.Length: sepal length in centimeters.
- Sepal.Width: sepal width in centimeters.
- Petal.Length: petal length in centimeters.
- Petal.Width: petal width in centimeters.
<table>
<tr>
<td><img src="https://upload.wikimedia.org/wikipedia/commons/thumb/5/56/Kosaciec_szczecinkowaty_Iris_setosa.jpg/180px-Kosaciec_szczecinkowaty_Iris_setosa.jpg" style="width:250px"></td>
<td><img src="https://www.math.umd.edu/~petersd/666/html/iris_with_labels.jpg" width="250px"></td>
<td><img src="https://upload.wikimedia.org/wikipedia/commons/thumb/9/9f/Iris_virginica.jpg/295px-Iris_virginica.jpg" width="250px"></td>
</tr>
<tr>
<td>Iris setosa</td>
<td>Iris versicolor</td>
<td>Iris virginica</td>
</tr>
</table>

```
# load the iris train and test data from CSV files
train = pd.read_csv("../data/iris_train.csv")
test = pd.read_csv("../data/iris_test.csv")

train_x = train.iloc[:,0:4]
train_y = train.iloc[:,4]

test_x = test.iloc[:,0:4]
test_y = test.iloc[:,4]

# print the number of instances in each class
print(train_y.value_counts().sort_index())
print(test_y.value_counts().sort_index())
```

### Train and visualize a simple Decision Tree

```
# TODO: read the scikit-learn doc on DecisionTreeClassifier and train a Decision Tree with max depth of 2
# DecisionTreeClassifier?
dtc = DecisionTreeClassifier(max_depth=2).fit(train_x, train_y)
```

Now let's visualize the decision tree we just trained on the iris dataset and see how it makes predictions.

```
from io import StringIO
from IPython.display import Image
from sklearn.tree import export_graphviz
import pydotplus

dot_data = StringIO()
feature_names = train_x.columns
class_names = train_y.unique()
class_names.sort()

export_graphviz(dtc, out_file=dot_data,
                feature_names=feature_names,
                class_names=class_names,
                filled=True, rounded=True)
graph = pydotplus.graph_from_dot_data(dot_data.getvalue())
Image(graph.create_png())

# from sklearn import tree
# tree.plot_tree(dtc)
# plt.show()
```

Decision trees are easy to interpret and are often referred to as a *whitebox* machine learning algorithm. Let's see how the decision tree represented above makes predictions.
Suppose you find an iris flower and want to classify it as setosa, versicolor or virginica. You start at the root node (the very top node in the tree). In this node, we check whether the flower's petal length is smaller than or equal to 2.35 cm. If it is, we move to the left child and predict setosa to be its class. Otherwise, we move to the right child node. Then, similarly, we check whether the petal length is smaller than or equal to 4.95 cm. If it is, we move to its left child node and predict versicolor to be its class. Otherwise, we move to its right child and predict virginica to be its class.

### Prediction with Decision tree

With the simple decision tree above, we can make predictions on the test dataset and evaluate its performance.

```
# TODO: use the trained decision tree model to make predictions on the test data and evaluate the model performance
# using accuracy and a confusion matrix.
test_z = dtc.predict(test_x)

print("model accuracy: {}".format(accuracy_score(test_y, test_z)))
print("model confusion matrix:\n {}".format(confusion_matrix(test_y, test_z,
      labels=['Iris-setosa', 'Iris-versicolor', 'Iris-virginica'])))
```

### Hyper-parameters

Hyper-parameters control the complexity of the decision tree model. For example, the deeper the tree is, the more complex the patterns the model will be able to capture. In this exercise, we train decision trees with increasing maximum depth and plot their performance. We should see the accuracy on the training data increase as the tree grows deeper, but the accuracy on the test data might not, as the model will eventually start to overfit and will not generalize well to the unseen test data.

```
max_depths = np.linspace(1, 8, 8, endpoint=True)

train_results = []
test_results = []
for max_depth in max_depths:
    # TODO: train the decision tree model with various max_depth, make predictions and evaluate on both train and test data.
    dt = DecisionTreeClassifier(max_depth=max_depth)
    dt.fit(train_x, train_y)

    train_z = dt.predict(train_x)
    train_accuracy = accuracy_score(train_y, train_z)
    train_results.append(train_accuracy)

    test_z = dt.predict(test_x)
    test_accuracy = accuracy_score(test_y, test_z)
    test_results.append(test_accuracy)

# plot the accuracy on train and test data
line1, = plt.plot(max_depths, train_results, 'b', label="Train Accuracy")
line2, = plt.plot(max_depths, test_results, 'r', label="Test Accuracy")
plt.legend(handler_map={line1: HandlerLine2D(numpoints=2)})
plt.ylabel("Accuracy")
plt.xlabel("Max Depth")
plt.show()
```

### Fine-tune the decision tree classifier

A decision tree is a very powerful model with very few assumptions about the incoming training data (unlike linear models, which assume the data is linear); however, it is more likely to overfit the data and won't generalize well to unseen data. To avoid overfitting, we need to restrict the decision tree's freedom during training via regularization (e.g. max_depth, min_samples_split, max_leaf_nodes, etc.).

To fine-tune the model and combat overfitting, use grid search with cross-validation (with the help of the GridSearchCV class) to find the best hyper-parameter settings for the DecisionTreeClassifier. In particular, we would like to fine-tune the following hyper-parameters:

- **criterion**: this defines how we measure the quality of a split. We can choose either "gini" for the Gini impurity or "entropy" for the information gain.
- **max_depth**: the maximum depth of the tree. This indicates how deep the tree can be. The deeper the tree, the more splits it has and the more information it captures about the data. Meanwhile, deeper trees are more likely to overfit the data. For this practice, we will choose from {1, 2, 3} given there are only 4 features in the iris dataset.
- **min_samples_split**: the minimum number of samples required to split an internal node.
The smaller this value is, the deeper the tree will grow, and thus the more likely it is to overfit. On the other hand, if the value is really large (the size of the training data in the extreme case), the tree will be very shallow and could suffer from underfitting. In this practice, we choose from {0.05, 0.1, 0.2}.

```
# TODO: fine-tune the model, use grid search with cross-validation.
# cv: None, to use the default 5-fold cross validation
parameters = {
    'criterion': ['gini', 'entropy'],
    'max_depth': [1, 2, 4, 8],
    'min_samples_split': [0.01, 0.05, 0.1, 0.2, 0.4, 0.8]
}

dt = DecisionTreeClassifier()
grid = GridSearchCV(dt, parameters)
grid.fit(train_x, train_y)

# summarize the results of the grid search
print("The best score is {}".format(grid.best_score_))
print("The best hyper parameter setting is {}".format(grid.best_params_))
```

### Prediction and Evaluation

Now that we have a fine-tuned decision tree classifier based on the training data, let's apply this model to make predictions on the test data and evaluate its performance.

```
test_z = grid.predict(test_x)

print("model accuracy: {}".format(accuracy_score(test_y, test_z)))
print("model confusion matrix:\n {}".format(confusion_matrix(test_y, test_z,
      labels=['Iris-setosa', 'Iris-versicolor', 'Iris-virginica'])))
```

---

## Decision Tree Regressor

<span style="color:orange">**Coding Part 3: In this exercise, we will apply the Decision Tree regressor to predict the California housing prices.**</span>

### California Housing Dataset

The California Housing dataset appeared in a 1997 paper titled Sparse Spatial Autoregressions by Pace, R. Kelley and Ronald Barry, published in the Statistics and Probability Letters journal. They built it using the 1990 California census data. It contains one row per census block group. A block group is the smallest geographical unit for which the U.S. Census Bureau publishes sample data (a block group typically has a population of 600 to 3,000 people).

```
# Load train and test data from CSV files.
train = pd.read_csv("../data/housing_train.csv")
test = pd.read_csv("../data/housing_test.csv")

train_x = train.iloc[:,0:8]
train_y = train.iloc[:,8]

test_x = test.iloc[:,0:8]
test_y = test.iloc[:,8]

train_x.head()

# TODO: fine-tune the model, use grid search with cross-validation.
# cv: None, to use the default 5-fold cross validation
parameters = {
    'max_depth': [4, 8, 16, 32],
    'min_samples_split': [0.001, 0.01, 0.05, 0.1, 0.2],
    'max_features': [2, 4, 8]
}

# This is a regression problem, so we use DecisionTreeRegressor (not DecisionTreeClassifier)
dt = DecisionTreeRegressor()
grid = GridSearchCV(dt, parameters)
grid.fit(train_x, train_y)

# summarize the results of the grid search
print("The best score is {}".format(grid.best_score_))
print("The best hyper parameter setting is {}".format(grid.best_params_))

# Use the fine-tuned model to make predictions on the test data
test_z = grid.predict(test_x)

print("model Root Mean Squared Error: {}".format(round(np.sqrt(mean_squared_error(test_y, test_z)))))
print("model Mean Absolute Error: {}".format(round(mean_absolute_error(test_y, test_z))))

# Compare to a regressor with default hyper-parameters
# (accuracy is not defined for regression, so we report RMSE and MAE only)
test_z = dt.fit(train_x, train_y).predict(test_x)

print("model Root Mean Squared Error: {}".format(round(np.sqrt(mean_squared_error(test_y, test_z)))))
print("model Mean Absolute Error: {}".format(round(mean_absolute_error(test_y, test_z))))
```

### End of ML 310 Lab Assignment 2

---

<p><b>Conceptual Overview</b></p>
<p>
In this lesson, I learned methods of measuring the impurity used to separate data into leaves. I also learned how to tune these models so that they don't overfit the training data and have large variance. Some of the most common hyper-parameters to tune are max depth, min samples to split on, and max features. Using grid search, model performance can improve drastically: in the first example the test score improved from 96.2% to 97.8%. It should also be noted that training time for decision trees can be costly!
Also, be cautious when using them with a mostly numerical dataset.
</p>
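As a footnote to the overview above, the two impurity criteria the grid searched over (Gini and entropy) can be computed directly from a node's labels. This is a minimal illustrative sketch, not part of the lab's original code:

```python
from collections import Counter
import math

def gini(labels):
    """Gini impurity: 1 - sum of squared class proportions."""
    n = len(labels)
    return 1.0 - sum((c / n) ** 2 for c in Counter(labels).values())

def entropy(labels):
    """Shannon entropy: -sum of p * log2(p) over class proportions."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

# A pure node has zero impurity; a 50/50 split maximizes it.
print(gini(['a', 'a', 'a', 'a']))     # 0.0
print(gini(['a', 'a', 'b', 'b']))     # 0.5
print(entropy(['a', 'a', 'b', 'b']))  # 1.0
```

A split is chosen to maximize the drop in impurity from the parent node to the (weighted) children, which is why a pure leaf scores 0 under both criteria.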
# Probability theory

## Random experiment

When we toss an unbiased coin, we say that it lands heads up with probability $\frac{1}{2}$ and tails up with probability $\frac{1}{2}$. Such a coin toss is an example of a **random experiment** and the set of **outcomes** of this random experiment is the **sample space** $\Omega = \{h, t\}$, where $h$ stands for "heads" and $t$ stands for "tails".

What if we toss a coin twice? We could view the two coin tosses as a single random experiment with the sample space $\Omega = \{hh, ht, th, tt\}$, where $ht$ (for example) denotes heads on the first toss and tails on the second.

What if, instead of tossing a coin, we roll a die? The sample space for this random experiment is $\Omega = \{1, 2, 3, 4, 5, 6\}$.

## Events

An **event**, then, is a subset of the sample space. In our example of the two consecutive coin tosses, getting heads on all coin tosses is an event:
$$A = \text{"getting heads on all coin tosses"} = \{hh\} \subseteq \{hh, ht, th, tt\} = \Omega.$$
Getting distinct results on the two coin tosses is also an event:
$$D = \{ht, th\} \subseteq \{hh, ht, th, tt\} = \Omega.$$

We can simulate a coin toss in Python as follows:

```
import numpy as np
np.random.seed(42)
np.random.randint(0, 2)
```

(Let's say 0 is heads and 1 is tails.)

Similarly, in our roll-of-a-die example, the following are all events:
$$S = \text{"six shows up"} = \{6\} \subseteq \{1, 2, 3, 4, 5, 6\} = \Omega,$$
$$E = \text{"even number shows up"} = \{2, 4, 6\} \subseteq \{1, 2, 3, 4, 5, 6\} = \Omega,$$
$$O = \text{"odd number shows up"} = \{1, 3, 5\} \subseteq \{1, 2, 3, 4, 5, 6\} = \Omega.$$

The empty set, $\emptyset = \{\}$, represents the **impossible event**, whereas the sample space $\Omega$ itself represents the **certain event**: one of the numbers $1, 2, 3, 4, 5, 6$ always occurs when a die is rolled, so $\Omega$ always occurs.
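Since events are just subsets of the sample space, they can be mirrored directly with Python sets. The following is a small illustrative sketch (not from the original notebook) using the die-roll events $S$, $E$ and $O$ defined above:

```python
import random

omega = {1, 2, 3, 4, 5, 6}  # sample space for a die roll
S = {6}                     # "six shows up"
E = {2, 4, 6}               # "even number shows up"
O = {1, 3, 5}               # "odd number shows up"

# An event occurs when the outcome is a member of the event's set.
outcome = random.choice(sorted(omega))
print(outcome in E)

# E and O partition the sample space: together they form the certain event,
# and they can never both occur on the same roll.
print(E | O == omega)  # True
print(E & O == set())  # True
```

Set union, intersection and complement on these objects correspond exactly to "or", "and" and "not" on events.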
We can simulate the roll of a die in Python as follows:

```
np.random.randint(1, 7)
```

If we get 4, say, $S$ has not occurred, since $4 \notin S$; $E$ has occurred, since $4 \in E$; $O$ has not occurred, since $4 \notin O$.

When all outcomes are equally likely, and the sample space is finite, the probability of an event $A$ is given by
$$\mathbb{P}(A) = \frac{|A|}{|\Omega|},$$
where $|\cdot|$ denotes the number of elements in a given set. Thus, the probability of the event $E$, "even number shows up", is equal to
$$\mathbb{P}(E) = \frac{|E|}{|\Omega|} = \frac{3}{6} = \frac{1}{2}.$$

If Python's random number generator is decent enough, we should get pretty close to this number by simulating die rolls:

```
outcomes = np.random.randint(1, 7, 100)
len([x for x in outcomes if x % 2 == 0]) / len(outcomes)
```

Here we have used 100 simulated "rolls". If we used 1,000,000, say, we would get even closer to $\frac{1}{2}$:

```
outcomes = np.random.randint(1, 7, 1000000)
len([x for x in outcomes if x % 2 == 0]) / len(outcomes)
```
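As a cross-check on the simulation, we can also evaluate $\mathbb{P}(A) = |A|/|\Omega|$ exactly by enumerating the sample space. This is a small sketch (not from the original notebook) for the two-coin-toss events $A$ and $D$:

```python
from fractions import Fraction
from itertools import product

# Sample space of two coin tosses: {hh, ht, th, tt}
omega = set(product('ht', repeat=2))
A = {('h', 'h')}                  # heads on all tosses
D = {('h', 't'), ('t', 'h')}      # distinct results on the two tosses

def prob(event, sample_space):
    # P(A) = |A| / |Omega| for equally likely outcomes
    return Fraction(len(event), len(sample_space))

print(prob(A, omega))  # 1/4
print(prob(D, omega))  # 1/2
```

Using `Fraction` keeps the probabilities exact instead of approximating them with floats.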
# Train word2vec locally

This allows a smart initialization of our neural net's word embeddings. It seems that initializing the embeddings by training them locally, as opposed to using pre-trained word2vec embeddings (available online), can lead to better performance.

```
import os
import sys
print(sys.executable)

from gensim.models.word2vec import Word2Vec

import logging
logging.basicConfig(format='%(asctime)s : %(levelname)s : %(message)s', level=logging.INFO)

TRAIN = os.path.join('..', 'train')
TEST = os.path.join('..', 'test')
POS_TWEET_FILE = os.path.join(TRAIN, 'train_pos_full.txt')
NEG_TWEET_FILE = os.path.join(TRAIN, 'train_neg_full.txt')
TEST_TWEET_FILE = os.path.join(TEST, 'test_data.txt')
EMBEDDING_SIZE = 300

def read_tweets(fname):
    """Read the tweets in the given file.

    Returns a 2d array where every row is a tweet, split into words.
    """
    with open(fname, 'r') as f:
        return [l.split() for l in f.readlines()]

pos_tweets = read_tweets(POS_TWEET_FILE)
neg_tweets = read_tweets(NEG_TWEET_FILE)
test_tweets = read_tweets(TEST_TWEET_FILE)
sentences = pos_tweets + neg_tweets + test_tweets
print(len(sentences))

tokens = [item.strip() for sentence in sentences for item in sentence]

# Check for Nikos's 1st stage substitutions.
assert '<num>' in tokens

# Another sanity check
print(len([t for t in tokens if 'bootstrap' == t]))

# Download this for testing: https://github.com/arfon/word2vec/blob/master/questions-words.txt
# Highly recommended!
question_file = "questions-words.txt"

def eval_embeddings(model):
    accuracy_results = model.accuracy(question_file)
    summary = accuracy_results[-1]
    assert summary['section'] == 'total'
    incorrect = summary['incorrect']
    correct = summary['correct']
    incorrect_n = len(incorrect)
    correct_n = len(correct)
    # NB: the "accuracy" reported throughout this notebook is the ratio of
    # correct to incorrect analogies (correct / incorrect), not correct / total.
    acc = correct_n / incorrect_n
    return acc, correct_n, incorrect_n

WORKERS = 8

# Note: Moises's team uses size=200 as of June 13.
# See: https://groups.google.com/forum/#!msg/gensim/ggCHGncd5rU/Z_pQDD69AAAJ
# for some parameter hints.
model = Word2Vec(sentences, size=EMBEDDING_SIZE, window=10, min_count=5,
                 workers=WORKERS)  # , alpha=0.05, cbow_mean=1)

# Yet another sanity check.
model.vocab['bootstrap'].count

# Should be queen
model.most_similar(positive=['woman', 'king'], negative=['man'])

# Should be germany
model.most_similar(positive=['france', 'berlin'], negative=['paris'])

model.doesnt_match("breakfast cereal dinner lunch".split())
model.estimate_memory()

# A few more sanity checks
print(model.similarity('woman', 'man'))
print(model.similarity('woman', 'coffee'))
print(model.similarity('woman', 'penis'))
print(model.similarity('woman', 'football'))
print(model.similarity('car','man'))
print(model.similarity('car','truck'))

acc, correct_n, incorrect_n = eval_embeddings(model)
print("{0:5.3f} accuracy; Analogies: {1} correct, {2} incorrect".format(
    acc, correct_n, incorrect_n))
```

### Accuracies (full Twitter data)

* Vanilla (size=225, window=5, min_count=5): 0.319 accuracy; Analogies: 2233 correct, 7004 incorrect
* (size=300, window=5, min_count=5): 0.337 accuracy; Analogies: 2329 correct, 6908 incorrect
* (size=500, window=5, min_count=5): 0.330 accuracy; Analogies: 2292 correct, 6945 incorrect
* (size=300, window=10, min_count=5): 0.346 accuracy; Analogies: 2374 correct, 6863 incorrect
* (size=300, window=15, min_count=5): 0.342 accuracy; Analogies: 2356 correct, 6881 incorrect
* (size=400, window=10, min_count=5): 0.341 accuracy; Analogies: 2340 correct, 6870 incorrect

### Accuracies (full Twitter data + Nikos 1st stage preprocessing)

* (size=200, window=10, min_count=5): 0.327 accuracy; Analogies: 2316 correct, 7093 incorrect
* (size=225, window=10, min_count=5): 0.331 accuracy; Analogies: 2342 correct, 7067 incorrect
* (size=250, window=10, min_count=5): 0.330 accuracy; Analogies: 2336 correct, 7073 incorrect
* (size=275, window=10, min_count=5): 0.337 accuracy; Analogies: 2374 correct, 7035 incorrect
* (size=300, window=10, min_count=5): 0.334 accuracy; Analogies: 2355 correct, 7054
incorrect
* (size=325, window=10, min_count=5): 0.334 accuracy; Analogies: 2356 correct, 7053 incorrect
* (size=350, window=10, min_count=5): 0.330 accuracy; Analogies: 2334 correct, 7075 incorrect
* (size=400, window=10, min_count=5): 0.321 accuracy; Analogies: 2289 correct, 7120 incorrect

### After fixing Andrei's bug

* (size=300, window=10, min_count=5): 0.438 accuracy; Analogies: 3071 correct, 7019 incorrect

```
print("Embedding dimensionality: {0}".format(EMBEDDING_SIZE))
fname = "./word2vec-local-gensim-{0}.bin".format(EMBEDDING_SIZE)
print("Writing embeddings to file {0}.".format(fname))
model.save(fname)
print("Done! Happy neural networking!")
```

### Some experimentation

```
emb_sizes = [225, 250, 275, 300, 325, 350]

for w_size in [5, 8, 10, 12]:
    for emb_size in emb_sizes:
        print("Computing embeddings of size {0} and window {1}...".format(emb_size, w_size))
        model = Word2Vec(sentences, size=emb_size, window=w_size, min_count=5, workers=4)
        print("Evaluating embeddings of size {0}...".format(emb_size))
        acc, correct_n, incorrect_n = eval_embeddings(model)
        print("Size {3}; wsize {4}: {0:5.3f} accuracy; Analogies: {1} correct, {2} incorrect".format(
            acc, correct_n, incorrect_n, emb_size, w_size))
```

```
Computing embeddings of size 225 and window 5...
Evaluating embeddings of size 225...
Size 225; wsize 5: 0.388 accuracy; Analogies: 2948 correct, 7598 incorrect
Computing embeddings of size 250 and window 5...
Evaluating embeddings of size 250...
Size 250; wsize 5: 0.381 accuracy; Analogies: 2907 correct, 7639 incorrect
Computing embeddings of size 275 and window 5...
Evaluating embeddings of size 275...
Size 275; wsize 5: 0.381 accuracy; Analogies: 2909 correct, 7637 incorrect
Computing embeddings of size 300 and window 5...
Evaluating embeddings of size 300...
Size 300; wsize 5: 0.392 accuracy; Analogies: 2968 correct, 7578 incorrect
Computing embeddings of size 325 and window 5...
Evaluating embeddings of size 325...
Size 325; wsize 5: 0.393 accuracy; Analogies: 2977 correct, 7569 incorrect
Computing embeddings of size 350 and window 5...
Evaluating embeddings of size 350...
Size 350; wsize 5: 0.391 accuracy; Analogies: 2967 correct, 7579 incorrect
Computing embeddings of size 225 and window 8...
Evaluating embeddings of size 225...
Size 225; wsize 8: 0.408 accuracy; Analogies: 3055 correct, 7491 incorrect
Computing embeddings of size 250 and window 8...
Evaluating embeddings of size 250...
Size 250; wsize 8: 0.410 accuracy; Analogies: 3064 correct, 7482 incorrect
Computing embeddings of size 275 and window 8...
Evaluating embeddings of size 275...
Size 275; wsize 8: 0.402 accuracy; Analogies: 3026 correct, 7520 incorrect
Computing embeddings of size 300 and window 8...
Evaluating embeddings of size 300...
Size 300; wsize 8: 0.410 accuracy; Analogies: 3069 correct, 7477 incorrect
Computing embeddings of size 325 and window 8...
Evaluating embeddings of size 325...
Size 325; wsize 8: 0.405 accuracy; Analogies: 3039 correct, 7507 incorrect
Computing embeddings of size 350 and window 8...
Evaluating embeddings of size 350...
Size 350; wsize 8: 0.407 accuracy; Analogies: 3049 correct, 7497 incorrect
Computing embeddings of size 225 and window 10...
Evaluating embeddings of size 225...
Size 225; wsize 10: 0.406 accuracy; Analogies: 3046 correct, 7500 incorrect
Computing embeddings of size 250 and window 10...
Evaluating embeddings of size 250...
Size 250; wsize 10: 0.417 accuracy; Analogies: 3102 correct, 7444 incorrect
Computing embeddings of size 275 and window 10...
Evaluating embeddings of size 275...
Size 275; wsize 10: 0.411 accuracy; Analogies: 3070 correct, 7476 incorrect
Computing embeddings of size 300 and window 10...
Evaluating embeddings of size 300...
Size 300; wsize 10: 0.417 accuracy; Analogies: 3106 correct, 7440 incorrect
Computing embeddings of size 325 and window 10...
Evaluating embeddings of size 325...
Size 325; wsize 10: 0.411 accuracy; Analogies: 3071 correct, 7475 incorrect
Computing embeddings of size 350 and window 10...
Evaluating embeddings of size 350...
Size 350; wsize 10: 0.404 accuracy; Analogies: 3035 correct, 7511 incorrect
Computing embeddings of size 225 and window 12...
Evaluating embeddings of size 225...
Size 225; wsize 12: 0.399 accuracy; Analogies: 3008 correct, 7538 incorrect
Computing embeddings of size 250 and window 12...
Evaluating embeddings of size 250...
Size 250; wsize 12: 0.419 accuracy; Analogies: 3115 correct, 7431 incorrect
Computing embeddings of size 275 and window 12...
Evaluating embeddings of size 275...
Size 275; wsize 12: 0.423 accuracy; Analogies: 3134 correct, 7412 incorrect
Computing embeddings of size 300 and window 12...
Evaluating embeddings of size 300...
Size 300; wsize 12: 0.417 accuracy; Analogies: 3104 correct, 7442 incorrect
Computing embeddings of size 325 and window 12...
Evaluating embeddings of size 325...
Size 325; wsize 12: 0.428 accuracy; Analogies: 3162 correct, 7384 incorrect
Computing embeddings of size 350 and window 12...
Evaluating embeddings of size 350...
Size 350; wsize 12: 0.413 accuracy; Analogies: 3080 correct, 7466 incorrect
Computing embeddings of size 225 and window 13...
Evaluating embeddings of size 225...
Size 225; wsize 13: 0.415 accuracy; Analogies: 3094 correct, 7452 incorrect
Computing embeddings of size 250 and window 13...
Evaluating embeddings of size 250...
Size 250; wsize 13: 0.412 accuracy; Analogies: 3078 correct, 7468 incorrect
Computing embeddings of size 275 and window 13...
Evaluating embeddings of size 275...
Size 275; wsize 13: 0.420 accuracy; Analogies: 3121 correct, 7425 incorrect
Computing embeddings of size 300 and window 13...
Evaluating embeddings of size 300...
Size 300; wsize 13: 0.410 accuracy; Analogies: 3067 correct, 7479 incorrect
Computing embeddings of size 325 and window 13...
Evaluating embeddings of size 325...
Size 325; wsize 13: 0.411 accuracy; Analogies: 3074 correct, 7472 incorrect
Computing embeddings of size 350 and window 13...
Evaluating embeddings of size 350...
Size 350; wsize 13: 0.426 accuracy; Analogies: 3150 correct, 7396 incorrect
Computing embeddings of size 225 and window 14...
Evaluating embeddings of size 225...
Size 225; wsize 14: 0.421 accuracy; Analogies: 3125 correct, 7421 incorrect
Computing embeddings of size 250 and window 14...
Evaluating embeddings of size 250...
Size 250; wsize 14: 0.426 accuracy; Analogies: 3150 correct, 7396 incorrect
Computing embeddings of size 275 and window 14...
Evaluating embeddings of size 275...
Size 275; wsize 14: 0.422 accuracy; Analogies: 3132 correct, 7414 incorrect
Computing embeddings of size 300 and window 14...
Evaluating embeddings of size 300...
Size 300; wsize 14: 0.426 accuracy; Analogies: 3149 correct, 7397 incorrect
Computing embeddings of size 325 and window 14...
Evaluating embeddings of size 325...
Size 325; wsize 14: 0.418 accuracy; Analogies: 3107 correct, 7439 incorrect
Computing embeddings of size 350 and window 14...
Evaluating embeddings of size 350...
Size 350; wsize 14: 0.426 accuracy; Analogies: 3150 correct, 7396 incorrect
Computing embeddings of size 225 and window 15...
Evaluating embeddings of size 225...
Size 225; wsize 15: 0.421 accuracy; Analogies: 3124 correct, 7422 incorrect
Computing embeddings of size 250 and window 15...
Evaluating embeddings of size 250...
Size 250; wsize 15: 0.431 accuracy; Analogies: 3174 correct, 7372 incorrect
Computing embeddings of size 275 and window 15...
Evaluating embeddings of size 275...
Size 275; wsize 15: 0.427 accuracy; Analogies: 3154 correct, 7392 incorrect
Computing embeddings of size 300 and window 15...
Evaluating embeddings of size 300...
Size 300; wsize 15: 0.432 accuracy; Analogies: 3183 correct, 7363 incorrect
Computing embeddings of size 325 and window 15...
Evaluating embeddings of size 325...
Size 325; wsize 15: 0.434 accuracy; Analogies: 3191 correct, 7355 incorrect
Computing embeddings of size 350 and window 15...
Evaluating embeddings of size 350...
Size 350; wsize 15: 0.441 accuracy; Analogies: 3227 correct, 7319 incorrect
Computing embeddings of size 225 and window 16...
Evaluating embeddings of size 225...
Size 225; wsize 16: 0.409 accuracy; Analogies: 3063 correct, 7483 incorrect
Computing embeddings of size 250 and window 16...
Evaluating embeddings of size 250...
Size 250; wsize 16: 0.423 accuracy; Analogies: 3133 correct, 7413 incorrect
Computing embeddings of size 275 and window 16...
Evaluating embeddings of size 275...
Size 275; wsize 16: 0.413 accuracy; Analogies: 3084 correct, 7462 incorrect
Computing embeddings of size 300 and window 16...
Evaluating embeddings of size 300...
Size 300; wsize 16: 0.421 accuracy; Analogies: 3126 correct, 7420 incorrect
Computing embeddings of size 325 and window 16...
Evaluating embeddings of size 325...
Size 325; wsize 16: 0.423 accuracy; Analogies: 3133 correct, 7413 incorrect
Computing embeddings of size 350 and window 16...
Evaluating embeddings of size 350...
Size 350; wsize 16: 0.421 accuracy; Analogies: 3122 correct, 7424 incorrect
Computing embeddings of size 225 and window 17...
Evaluating embeddings of size 225...
Size 225; wsize 17: 0.404 accuracy; Analogies: 3034 correct, 7512 incorrect
Computing embeddings of size 250 and window 17...
Evaluating embeddings of size 250...
Size 250; wsize 17: 0.429 accuracy; Analogies: 3168 correct, 7378 incorrect
Computing embeddings of size 275 and window 17...
Evaluating embeddings of size 275...
Size 275; wsize 17: 0.436 accuracy; Analogies: 3204 correct, 7342 incorrect
Computing embeddings of size 300 and window 17...
Evaluating embeddings of size 300...
Size 300; wsize 17: 0.427 accuracy; Analogies: 3158 correct, 7388 incorrect
Computing embeddings of size 325 and window 17...
Evaluating embeddings of size 325...
Size 325; wsize 17: 0.429 accuracy; Analogies: 3166 correct, 7380 incorrect
Computing embeddings of size 350 and window 17...
Evaluating embeddings of size 350...
Size 350; wsize 17: 0.417 accuracy; Analogies: 3106 correct, 7440 incorrect
Computing embeddings of size 225 and window 18...
Evaluating embeddings of size 225...
Size 225; wsize 18: 0.427 accuracy; Analogies: 3156 correct, 7390 incorrect
Computing embeddings of size 250 and window 18...
Evaluating embeddings of size 250...
Size 250; wsize 18: 0.417 accuracy; Analogies: 3105 correct, 7441 incorrect
Computing embeddings of size 275 and window 18...
Evaluating embeddings of size 275...
Size 275; wsize 18: 0.428 accuracy; Analogies: 3160 correct, 7386 incorrect
Computing embeddings of size 300 and window 18...
Evaluating embeddings of size 300...
Size 300; wsize 18: 0.421 accuracy; Analogies: 3126 correct, 7420 incorrect
Computing embeddings of size 325 and window 18...
Evaluating embeddings of size 325...
Size 325; wsize 18: 0.434 accuracy; Analogies: 3193 correct, 7353 incorrect
Computing embeddings of size 350 and window 18...
Evaluating embeddings of size 350...
Size 350; wsize 18: 0.418 accuracy; Analogies: 3107 correct, 7439 incorrect
Computing embeddings of size 225 and window 19...
Evaluating embeddings of size 225...
Size 225; wsize 19: 0.417 accuracy; Analogies: 3105 correct, 7441 incorrect
Computing embeddings of size 250 and window 19...
Evaluating embeddings of size 250...
Size 250; wsize 19: 0.421 accuracy; Analogies: 3125 correct, 7421 incorrect
Computing embeddings of size 275 and window 19...
Evaluating embeddings of size 275...
Size 275; wsize 19: 0.439 accuracy; Analogies: 3219 correct, 7327 incorrect
Computing embeddings of size 300 and window 19...
Evaluating embeddings of size 300...
Size 300; wsize 19: 0.438 accuracy; Analogies: 3212 correct, 7334 incorrect
Computing embeddings of size 325 and window 19...
Evaluating embeddings of size 325...
Size 325; wsize 19: 0.426 accuracy; Analogies: 3153 correct, 7393 incorrect
Computing embeddings of size 350 and window 19...
Evaluating embeddings of size 350...
Size 350; wsize 19: 0.428 accuracy; Analogies: 3161 correct, 7385 incorrect
Computing embeddings of size 225 and window 20...
Evaluating embeddings of size 225...
Size 225; wsize 20: 0.429 accuracy; Analogies: 3166 correct, 7380 incorrect
Computing embeddings of size 250 and window 20...
Evaluating embeddings of size 250...
Size 250; wsize 20: 0.424 accuracy; Analogies: 3139 correct, 7407 incorrect
Computing embeddings of size 275 and window 20...
Evaluating embeddings of size 275...
Size 275; wsize 20: 0.427 accuracy; Analogies: 3155 correct, 7391 incorrect
Computing embeddings of size 300 and window 20...
Evaluating embeddings of size 300...
Size 300; wsize 20: 0.419 accuracy; Analogies: 3116 correct, 7430 incorrect
Computing embeddings of size 325 and window 20...
Evaluating embeddings of size 325...
Size 325; wsize 20: 0.438 accuracy; Analogies: 3211 correct, 7335 incorrect
Computing embeddings of size 350 and window 20...
Evaluating embeddings of size 350...
Size 350; wsize 20: 0.409 accuracy; Analogies: 3061 correct, 7485 incorrect
Computing embeddings of size 225 and window 21...
Evaluating embeddings of size 225...
Size 225; wsize 21: 0.414 accuracy; Analogies: 3088 correct, 7458 incorrect
Computing embeddings of size 250 and window 21...
Evaluating embeddings of size 250...
Size 250; wsize 21: 0.415 accuracy; Analogies: 3094 correct, 7452 incorrect
Computing embeddings of size 275 and window 21...
Evaluating embeddings of size 275...
Size 275; wsize 21: 0.415 accuracy; Analogies: 3093 correct, 7453 incorrect
Computing embeddings of size 300 and window 21...
Evaluating embeddings of size 300...
Size 300; wsize 21: 0.438 accuracy; Analogies: 3213 correct, 7333 incorrect
Computing embeddings of size 325 and window 21...
Evaluating embeddings of size 325...
Size 325; wsize 21: 0.431 accuracy; Analogies: 3178 correct, 7368 incorrect
Computing embeddings of size 350 and window 21...
Evaluating embeddings of size 350...
Size 350; wsize 21: 0.429 accuracy; Analogies: 3164 correct, 7382 incorrect
Computing embeddings of size 225 and window 22...
Evaluating embeddings of size 225...
Size 225; wsize 22: 0.424 accuracy; Analogies: 3142 correct, 7404 incorrect
Computing embeddings of size 250 and window 22...
Evaluating embeddings of size 250...
Size 250; wsize 22: 0.410 accuracy; Analogies: 3066 correct, 7480 incorrect
Computing embeddings of size 275 and window 22...
Evaluating embeddings of size 275...
Size 275; wsize 22: 0.423 accuracy; Analogies: 3134 correct, 7412 incorrect
Computing embeddings of size 300 and window 22...
Evaluating embeddings of size 300...
Size 300; wsize 22: 0.427 accuracy; Analogies: 3158 correct, 7388 incorrect
Computing embeddings of size 325 and window 22...
Evaluating embeddings of size 325...
Size 325; wsize 22: 0.424 accuracy; Analogies: 3138 correct, 7408 incorrect
Computing embeddings of size 350 and window 22...
Evaluating embeddings of size 350...
Size 350; wsize 22: 0.426 accuracy; Analogies: 3151 correct, 7395 incorrect
```
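Rather than eyeballing the best run in the output above, the log lines can be mined programmatically. This is a hedged sketch (the log format is assumed from the printed lines above; the three sample lines are taken from the window-15 runs) that extracts the best (size, window) configuration:

```python
import re

# A few lines in the same format as the experiment output above.
log = """\
Size 250; wsize 15: 0.431 accuracy; Analogies: 3174 correct, 7372 incorrect
Size 300; wsize 15: 0.432 accuracy; Analogies: 3183 correct, 7363 incorrect
Size 350; wsize 15: 0.441 accuracy; Analogies: 3227 correct, 7319 incorrect"""

pattern = re.compile(r"Size (\d+); wsize (\d+): ([\d.]+) accuracy")
# Put accuracy first in the tuple so max() returns the best run.
results = [(float(acc), int(size), int(wsize))
           for size, wsize, acc in pattern.findall(log)]
best_acc, best_size, best_window = max(results)
print(best_size, best_window, best_acc)  # 350 15 0.441
```

Feeding the full notebook output through the same pattern would rank all (size, window) combinations at once.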
# NLP 2 - Text Preprocessing and Modern Models

Hey everyone! In the last class, we had an introduction to the world of NLP: the BoW (Bag of Words) model and the TF-IDF algorithm. Although very practical, we observed some characteristics of NLP and of these techniques:

- NLP is naturally a high-dimensionality problem, which leads us straight into the "curse of dimensionality"
- The BoW model, even with the concept of N-grams, has trouble carrying sequential word information, since it only captures sequences of terms, not of concepts
- Understanding and implementing linguistic concepts is extremely important for the processing and modeling to perform well. In this sense, NLP is guided by linguistic understanding

<br>

That said, the NLP world today has tools, approaches and technologies that implement linguistic concepts more efficiently, so that we can build better models. In this class, we will explore these techniques with the SpaCy and gensim libraries and the word2vec architecture!

If you don't have SpaCy or gensim on your machine, uncomment and run the cells below:

```
# ! pip install spacy
# ! pip install gensim
```

## SpaCy Basics

```
import spacy

# We need to instantiate an NLP object, specifying which language it will use.
# In this case, let's start with Portuguese
nlp = spacy.load('pt')
```

Oops, the command above raised an error! SpaCy needs not only to be installed; its language packages need to be downloaded as well. Uncomment and run the cells below to download the English and Portuguese packages:

```
# ! python -m spacy download en
# ! python -m spacy download pt
```

OK! Now we are all set to start working with SpaCy. Let's instantiate the linguistic tool for Portuguese:

```
nlp = spacy.load('pt')

# Let's create a document for testing and demonstrating SpaCy!
# It's very important that the texts passed in are unicode-encoded,
# hence the u before the string
doc = nlp(u'Você encontrou o livro que eu te falei, Carla?')
doc.text.split()
```

OK, we have a punctuation problem here: the split method (or REGEX in general) does not understand that the comma is an entity of its own - we will call these entities tokens. So it doesn't make much sense to break up the text with those methods. Let's use a list comprehension instead. `nlp` understands the difference between them, so when we use the tokens within the document structure, we get a more coherent division:

```
tokens = [token for token in doc]
tokens
```

To extract the string of each token, we use `orth_`:

```
[token.orth_ for token in doc]
```

We can see that SpaCy understands the difference between punctuation and actual words:

```
[token.orth_ for token in doc if not token.is_punct]
```

A very important concept in NLP is similarity. How do we measure whether 2 words carry similar information? This can be useful, for example, to compress our text, or to discover the meaning of unknown words, terms and slang. For that, we use the `.similarity()` method of one token with respect to another:

```
print(tokens[0].similarity(tokens[5]))
print(tokens[0].similarity(tokens[3]))
```

In the cell below, feel free to run whatever similarity tests you want in Portuguese! When we load a language package, we are also loading notions of the grammatical, syntactic and syntagmatic structure of the language. We can, for example, use the `.pos_` attribute, short for Part of Speech (POS), to extract the function of each token in the sentence:

```
[(token.orth_, token.pos_) for token in doc]
```

OK, but how do we deal with the dimensionality problem? We can use 2 concepts called **lemmatization** and **stemming**.
In linguistics, lemmatization is the process of grouping the inflected forms of a word so that they can be analyzed as a single item, identified by the word's lemma, or dictionary form. Stemming, in turn, looks for the root of the word:

```
[token.lemma_ for token in doc if token.pos_ == 'VERB']  # lemmatization
```

In the cell below, create a new doc and apply lemmatization to its verbs:

```
doc = nlp(u'encontrei, encontraram, encontrarão, encontrariam')
[token.lemma_ for token in doc if token.pos_ == 'VERB']  # lemmatization

doc = nlp(u'encontrar encontrei')
tokens = [token for token in doc]
tokens[0].is_ancestor(tokens[1])  # checking shared roots
```

Finally, we want to extract entities from a sentence. Think of entities as the characters in a doc. We can access the entities of a sentence by calling `ents` on a doc:

```
doc = nlp(u'Machado de Assis um dos melhores escritores do Brasil, foi o primeiro presidente da Academia Brasileira de Letras')
doc.ents
```

When analyzing the entities of a sentence, we can even tell what type each entity belongs to:

```
[(entity, entity.label_) for entity in doc.ents]

wiki_obama = """Barack Obama is an American politician who served as the 44th President of the United States from 2009 to 2017. He is the first African American to have served as president, as well as the first born outside the contiguous United States."""
```

And this works for every language package you use:

```
nlp = spacy.load('en')
nlp_obama = nlp(wiki_obama)
[(i, i.label_) for i in nlp_obama.ents]
```

## SpaCy + Scikit Learn

To demonstrate how to preprocess a linguistic dataset and how to connect SpaCy and sklearn, let's build a simple emotion recognizer:

```
# stopwords are tokens of a language that carry little information, such as connectors and punctuation.
# Be careful when using this!
# For example, @ and # are extremely important punctuation marks in a use case
# with Twitter data
from sklearn.feature_extraction.stop_words import ENGLISH_STOP_WORDS as stopwords

# Our BoW model
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.metrics import accuracy_score
from sklearn.base import TransformerMixin
from sklearn.pipeline import Pipeline
from sklearn.svm import LinearSVC
import string
punctuations = string.punctuation

from spacy.lang.en import English
parser = English()

# Custom transformer using spaCy
class predictors(TransformerMixin):
    def transform(self, X, **transform_params):
        return [clean_text(text) for text in X]

    def fit(self, X, y=None, **fit_params):
        return self

    def get_params(self, deep=True):
        return {}

# Let's clean the text by lowercasing everything
def clean_text(text):
    return text.strip().lower()
```

Let's create a function that tokenizes our dataset, already applying lemmatization and removing stopwords:

```
def spacy_tokenizer(sentence):
    tokens = parser(sentence)
    tokens = [tok.lemma_.lower().strip() if tok.lemma_ != "-PRON-" else tok.lower_ for tok in tokens]
    tokens = [tok for tok in tokens if (tok not in stopwords and tok not in punctuations)]
    return tokens

# create vectorizer object to generate feature vectors, we will use custom spacy's tokenizer
vectorizer = CountVectorizer(tokenizer = spacy_tokenizer, ngram_range=(1,2))
classifier = LinearSVC()

# Create the pipeline to clean, tokenize, vectorize, and classify
pipe = Pipeline([("cleaner", predictors()),
                 ('vectorizer', vectorizer),
                 ('classifier', classifier)])

# Load sample data
train = [('I love this sandwich.', 'pos'),
         ('this is an amazing place!', 'pos'),
         ('I feel very good about these beers.', 'pos'),
         ('this is my best work.', 'pos'),
         ("what an awesome view", 'pos'),
         ('I do not like this restaurant', 'neg'),
         ('I am tired of this stuff.', 'neg'),
         ("I can't deal with this", 'neg'),
         ('he is my sworn enemy!', 'neg'),
         ('my boss is horrible.', 'neg')]
test =
[('the beer was good.', 'pos'),
        ('I do not enjoy my job', 'neg'),
        ("I ain't feelin dandy today.", 'neg'),
        ("I feel amazing!", 'pos'),
        ('Gary is a good friend of mine.', 'pos'),
        ("I can't believe I'm doing this.", 'neg')]

# Create model and measure accuracy
pipe.fit([x[0] for x in train], [x[1] for x in train])
pred_data = pipe.predict([x[0] for x in test])
for (sample, pred) in zip(test, pred_data):
    print(sample, pred)
print("Accuracy:", accuracy_score([x[1] for x in test], pred_data))
```

Nice! We managed to connect SpaCy and sklearn into a simple sentiment analysis tool! Now let's move on to a more complex problem:

<img src="imgs/simpsons.jpg" align="left" width="60%">

## Simpsons Dataset

This __[dataset](https://www.kaggle.com/wcukierski/the-simpsons-by-the-data/downloads/simpsons_script_lines.csv/1)__ is quite famous in NLP; it contains characters, locations, script lines and other info from 600+ episodes of The Simpsons! We will build a model that can understand the language of The Simpsons and perform linguistic operations on it.

```
import re  # For preprocessing
import pandas as pd
from time import time
from collections import defaultdict  # For word frequency

import logging  # Setting up the logging to monitor gensim. Data scientists live on logs!
logging.basicConfig(format="%(levelname)s - %(asctime)s: %(message)s", datefmt= '%H:%M:%S', level=logging.INFO)

df = pd.read_csv('./data/simpsons_script_lines.csv', error_bad_lines=False, usecols = ['raw_character_text', 'spoken_words'])
df.shape
df.head()
```

Let's do a sanity check and see whether we have any null values:

```
df.isnull().sum()
```

OK, time for the famous `.dropna()` to clean our dataset.
In NLP use cases, we can do this at this scale:

```
df = df.dropna().reset_index(drop=True)
df.isnull().sum()

nlp = spacy.load('en', disable=['ner', 'parser'])  # disabling Named Entity Recognition for speed

def cleaning(doc):
    # Lemmatizes and removes stopwords
    # doc needs to be a spacy Doc object
    txt = [token.lemma_ for token in doc if not token.is_stop]
    # Word2Vec uses context words to learn the vector representation of a target word,
    # if a sentence is only one or two words long,
    # the benefit for the training is very small
    if len(txt) > 2:
        return ' '.join(txt)
```

Let's remove the non-alphabetic characters:

```
brief_cleaning = (re.sub("[^A-Za-z']+", ' ', str(row)).lower() for row in df['spoken_words'])  # REGEX
```

OK, let's run our cleaning function over the entire dataset! Notice how the shape will change. SpaCy lets us build pipelines for this process:

```
t = time()
txt = [cleaning(doc) for doc in nlp.pipe(brief_cleaning, batch_size=5000, n_threads=-1)]
print('Time to clean up everything: {} mins'.format(round((time() - t) / 60, 2)))

df_clean = pd.DataFrame({'clean': txt})
df_clean = df_clean.dropna().drop_duplicates()
df_clean.shape
```

Time to use the Gensim library.
Gensim is an open-source library for unsupervised topic modelling and natural language processing, built on modern statistical machine learning:

```
from gensim.models.phrases import Phrases, Phraser

sent = [row.split() for row in df_clean['clean']]
phrases = Phrases(sent, min_count=30, progress_per=10000)
```

Let's use Gensim's __[bigrams](https://radimrehurek.com/gensim/models/phrases.html)__ to detect common expressions such as Bart Simpson and Mr Burns:

```
bigram = Phraser(phrases)
sentences = bigram[sent]

word_freq = defaultdict(int)
for sent in sentences:
    for i in sent:
        word_freq[i] += 1

len(word_freq)
sorted(word_freq, key=word_freq.get, reverse=True)[:10]
```

Let's build Gensim's __[word2vec](https://radimrehurek.com/gensim/models/word2vec.html)__ model. Before that, let's understand the model:

<img src="imgs/word2vec.png" align="left" width="80%">

The word2vec model was introduced by the Google Research team in 2013 with the goal of vectorizing tokens and entities. Its premise is that similar terms appear in similar contexts, so if two terms appear in the same contexts, there is a good chance they carry related information. This lets us build an n-dimensional space of terms and perform vector operations on those words!

```
import multiprocessing

cores = multiprocessing.cpu_count()  # Count the number of cores in a computer

from gensim.models import Word2Vec

w2v_model = Word2Vec(min_count=20,
                     window=2,
                     size=300,
                     sample=6e-5,
                     alpha=0.03,
                     min_alpha=0.0007,
                     negative=20,
                     workers=cores-1)
```

The hyperparameters used are:

- min_count = int - Ignores all words with total absolute frequency lower than this - (2, 100)
- window = int - The maximum distance between the current and predicted word within a sentence. E.g. window words on the left and window words on the right of our target - (2, 10)
- size = int - Dimensionality of the feature vectors.
- (50, 300)
- sample = float - The threshold for configuring which higher-frequency words are randomly downsampled. Highly influential. - (0, 1e-5)
- alpha = float - The initial learning rate - (0.01, 0.05)
- min_alpha = float - Learning rate will linearly drop to min_alpha as training progresses. To set it: alpha - (min_alpha * epochs) ~ 0.00
- negative = int - If > 0, negative sampling will be used; the int for negative specifies how many "noise words" should be drawn. If set to 0, no negative sampling is used. - (5, 20)
- workers = int - Use these many worker threads to train the model (=faster training with multicore machines)

With the model instantiated, we need to build our **corpus**, or vocabulary. Let's feed the model with our docs:

```
t = time()
w2v_model.build_vocab(sentences, progress_per=10000)
print('Time to build vocab: {} mins'.format(round((time() - t) / 60, 2)))
```

All set! Let's train our model!

```
t = time()
w2v_model.train(sentences, total_examples=w2v_model.corpus_count, epochs=30, report_delay=1)
print('Time to train the model: {} mins'.format(round((time() - t) / 60, 2)))

w2v_model.init_sims(replace=True)

w2v_model.wv.most_similar(positive=["homer"])
w2v_model.wv.most_similar(positive=["marge"])
w2v_model.wv.most_similar(positive=["bart"])
w2v_model.wv.similarity('maggie', 'baby')
w2v_model.wv.similarity('bart', 'nelson')
w2v_model.wv.doesnt_match(['jimbo', 'milhouse', 'kearney'])
w2v_model.wv.doesnt_match(["nelson", "bart", "milhouse"])
w2v_model.wv.most_similar(positive=["woman", "homer"], negative=["marge"], topn=3)
w2v_model.wv.most_similar(positive=["woman", "bart"], negative=["man"], topn=3)

import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
import seaborn as sns
sns.set_style("darkgrid")

from sklearn.decomposition import PCA
from sklearn.manifold import TSNE

def tsnescatterplot(model, word, list_names):
    """ Plot in seaborn the results from the t-SNE dimensionality reduction algorithm of
the vectors of a query word, its list of most similar words, and a list of words.
    """
    arrays = np.empty((0, 300), dtype='f')
    word_labels = [word]
    color_list = ['red']

    # adds the vector of the query word
    arrays = np.append(arrays, model.wv.__getitem__([word]), axis=0)

    # gets list of most similar words
    close_words = model.wv.most_similar([word])

    # adds the vector for each of the closest words to the array
    for wrd_score in close_words:
        wrd_vector = model.wv.__getitem__([wrd_score[0]])
        word_labels.append(wrd_score[0])
        color_list.append('blue')
        arrays = np.append(arrays, wrd_vector, axis=0)

    # adds the vector for each of the words from list_names to the array
    for wrd in list_names:
        wrd_vector = model.wv.__getitem__([wrd])
        word_labels.append(wrd)
        color_list.append('green')
        arrays = np.append(arrays, wrd_vector, axis=0)

    # Reduces the dimensionality from 300 to 19 dimensions with PCA
    reduc = PCA(n_components=19).fit_transform(arrays)

    # Finds t-SNE coordinates for 2 dimensions
    np.set_printoptions(suppress=True)
    Y = TSNE(n_components=2, random_state=0, perplexity=15).fit_transform(reduc)

    # Sets everything up to plot
    df = pd.DataFrame({'x': [x for x in Y[:, 0]],
                       'y': [y for y in Y[:, 1]],
                       'words': word_labels,
                       'color': color_list})

    fig, _ = plt.subplots()
    fig.set_size_inches(9, 9)

    # Basic plot
    p1 = sns.regplot(data=df,
                     x="x",
                     y="y",
                     fit_reg=False,
                     marker="o",
                     scatter_kws={'s': 40, 'facecolors': df['color']})

    # Adds annotations one by one with a loop
    for line in range(0, df.shape[0]):
        p1.text(df["x"][line],
                df['y'][line],
                ' ' + df["words"][line].title(),
                horizontalalignment='left',
                verticalalignment='bottom',
                size='medium',
                color=df['color'][line],
                weight='normal').set_size(15)

    plt.xlim(Y[:, 0].min()-50, Y[:, 0].max()+50)
    plt.ylim(Y[:, 1].min()-50, Y[:, 1].max()+50)

    plt.title('t-SNE visualization for {}'.format(word.title()))

tsnescatterplot(w2v_model, 'homer', ['dog', 'bird', 'ah', 'maude', 'bob', 'mel', 'apu', 'duff'])
tsnescatterplot(w2v_model, 'maggie', [i[0] for i in
w2v_model.wv.most_similar(negative=["maggie"])])

tsnescatterplot(w2v_model, "mr_burn", [t[0] for t in w2v_model.wv.most_similar(positive=["mr_burn"], topn=20)][10:])
```
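All of the `wv.similarity` and `wv.most_similar` queries above boil down to cosine similarity between word vectors. As a minimal sketch of that arithmetic (using made-up 3-d vectors rather than the trained 300-d embeddings):

```python
import numpy as np

def cosine_similarity(u, v):
    # The score behind wv.similarity: u . v / (|u| * |v|)
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# Hypothetical 3-d "embeddings", purely for illustration
homer = np.array([0.9, 0.2, 0.1])
marge = np.array([0.8, 0.3, 0.1])
donut = np.array([0.1, 0.2, 0.9])

print(cosine_similarity(homer, marge))  # high: the vectors point the same way
print(cosine_similarity(homer, donut))  # low: nearly orthogonal vectors
```

Analogy queries such as `most_similar(positive=["woman", "homer"], negative=["marge"])` then rank every word in the vocabulary by its cosine similarity to the combined vector `woman + homer - marge`.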
# Developing Custom Models

Panel ships with a number of custom Bokeh models, which have both Python and Javascript components. When developing Panel these custom models have to be compiled. This happens automatically with `pip install -e .` or `python setup.py develop`; however, when actively developing you can rebuild the extension with `panel build panel`. The build command is just an alias for `bokeh build`; see the [Bokeh developer guide](https://docs.bokeh.org/en/latest/docs/dev_guide/setup.html) for more information about developing bokeh models or the [Awesome Panel - Bokeh Extensions Guide](https://awesome-panel.readthedocs.io/en/latest/guides/awesome-panel-extensions-guide/bokeh-extensions.html).

Just like any other Javascript (or Typescript) library, Panel defines `package.json` and `package-lock.json` files. When adding, updating or removing a dependency in the `package.json` file, ensure you commit the changes to the `package-lock.json` after running `npm install`.

## Adding a new Custom Model

This example will guide you through adding a new model. We will use the `ChartJS` model as an example, but you should replace `ChartJS` and similar with the name of your own model. Here we will start by adding a simple Button model, though we call it `ChartJS`. My experience is that you should start small with a working example and then continue in small, incremental steps. When I started out learning about Custom Models, trying to copy a large, complex example and refactor it did not work for me.

1. Create a new branch `chartjs`.
2. Add the files and code for a *minimum working model*.
This includes:

- A Panel Python model
- A Bokeh Python and TypeScript model

#### Add the Panel Python Model

Add the file *panel/pane/chartjs.py* and the code

```python
import param

from panel.widgets.base import Widget

from ..models import ChartJS as _BkChartJS


class ChartJS(Widget):
    # Set the Bokeh model to use
    _widget_type = _BkChartJS

    # Rename Panel Parameters -> Bokeh Model properties
    # Parameters like title that do not exist on the Bokeh model should be renamed to None
    _rename = {
        "title": None,
    }

    # Parameters to be mapped to Bokeh model properties
    object = param.String(default="Click Me!")
    clicks = param.Integer(default=0)
```

Add the Panel model to `panel/pane/__init__.py`

```python
from .chartjs import ChartJS
```

#### Add the Bokeh Python Model

Add the file *panel/models/chartjs.py* and the code

```python
from bokeh.core.properties import Int, String
from bokeh.models import HTMLBox


class ChartJS(HTMLBox):
    """Custom ChartJS Model"""

    object = String()
    clicks = Int()
```

Add the Bokeh model to the `panel/models/__init__.py` file

```python
from .chartjs import ChartJS
```

#### Add the Bokeh TypeScript Model

Add the file *panel/models/chartjs.ts* and the code

```typescript
// See https://docs.bokeh.org/en/latest/docs/reference/models/layouts.html
import { HTMLBox, HTMLBoxView } from "@bokehjs/models/layouts/html_box"

// See https://docs.bokeh.org/en/latest/docs/reference/core/properties.html
import * as p from "@bokehjs/core/properties"

// The view of the Bokeh extension/ HTML element
// Here you can define how to render the model as well as react to model changes or View events.
export class ChartJSView extends HTMLBoxView {
    model: ChartJS
    objectElement: any // Element

    connect_signals(): void {
        super.connect_signals()

        this.connect(this.model.properties.object.change, () => {
            this.render();
        })
    }

    render(): void {
        super.render()

        this.el.innerHTML = `<button type="button">${this.model.object}</button>`
        this.objectElement = this.el.firstElementChild

        this.objectElement.addEventListener("click", () => {this.model.clicks+=1;}, false)
    }
}

export namespace ChartJS {
    export type Attrs = p.AttrsOf<Props>
    export type Props = HTMLBox.Props & {
        object: p.Property<string>,
        clicks: p.Property<number>,
    }
}

export interface ChartJS extends ChartJS.Attrs { }

// The Bokeh .ts model corresponding to the Bokeh .py model
export class ChartJS extends HTMLBox {
    properties: ChartJS.Props

    constructor(attrs?: Partial<ChartJS.Attrs>) {
        super(attrs)
    }

    static __module__ = "panel.models.chartjs"

    static init_ChartJS(): void {
        this.prototype.default_view = ChartJSView;

        this.define<ChartJS.Props>(({Int, String}) => ({
            object: [String, "Click Me!"],
            clicks: [Int, 0],
        }))
    }
}
```

Add the `ChartJS` typescript model to *panel/models/index.ts*

```typescript
export {ChartJS} from "./chartjs"
```

#### Build the Model

You can now build the model using `panel build panel`. It should look similar to

```bash
(base) root@475bb36209a9:/workspaces/panel# panel build panel
Working directory: /workspaces/panel/panel
Using /workspaces/panel/panel/tsconfig.json
Compiling styles
Compiling TypeScript (45 files)
Linking modules
Output written to /workspaces/panel/panel/dist
All done.
```

#### Test the Model

Add the file *panel/tests/pane/test_chartjs.py* and the code

```python
import panel as pn

def test_constructor():
    chartjs = pn.pane.ChartJS(object="Click Me Now!")

def get_app():
    chartjs = pn.pane.ChartJS(object="Click Me Now!")
    return pn.Column(
        chartjs, pn.Param(chartjs, parameters=["object", "clicks"])
    )

if __name__.startswith("bokeh"):
    get_app().servable()
```

Run `pytest panel/tests/pane/test_chartjs.py` and make sure it passes.

Serve the app with `panel serve panel/tests/pane/test_chartjs.py --auto --show`

You have to *hard refresh* your browser to reload the new panel `.js` files with your `ChartJS` model. In Chrome I press `CTRL+F5`. See [How to hard refresh in Chrome, Firefox and IE](https://www.namecheap.com/support/knowledgebase/article.aspx/10078/2194/how-to-do-a-hard-refresh-in-chrome-firefox-and-ie/) for other browsers.

Now you can manually test your model

![Chart JS Button](../assets/chartjs-button.gif)

#### Save your new Model

Finally you should save your changes via `git add .` and maybe even commit them: `git commit -m "First iteration on ChartJS model"`

## Build a small HTML Example

In the beginning of your journey into Custom Models there will be things that break, and when you combine several new technologies it can be really difficult to figure out why. Is the problem Panel, Bokeh, Python, Javascript, Node or something else? So I suggest creating a small, working example in plain HTML/JS before you start combining it with Panel and Bokeh models.

Please note the example below works out of the box. It is not always that easy importing javascript libraries in a Notebook, so it can be a good idea to work in a `.html` file first.
```
%%HTML
<script src="https://cdn.jsdelivr.net/npm/chart.js@2.8.0"></script>
<div class="chart-container" style="position: relative; height:400px; width:100%">
    <canvas id="myChart"></canvas>
</div>
<script>
var ctx = document.getElementById('myChart').getContext('2d');
var chart = new Chart(ctx, {
    // The type of chart we want to create
    type: 'line',

    // The data for our dataset
    data: {
        labels: ['January', 'February', 'March', 'April', 'May', 'June', 'July'],
        datasets: [{
            label: 'My First dataset',
            backgroundColor: 'rgb(255, 99, 132)',
            borderColor: 'rgb(255, 99, 132)',
            data: [0, 10, 5, 2, 20, 30, 45]
        }]
    },

    // Configuration options go here
    options: {
        responsive: true,
        maintainAspectRatio: false,
    }
});
</script>
```

## Using the Javascript Model

Getting something shown using the `ChartJS` `js` library would be the next step. It might require a bit of experimentation, looking at other examples, Google, or support from the community. Here I found that a good next step was making the following changes.

#### Import the Javascript Library

Update *test_chartjs.py* to

```python
import panel as pn

def test_constructor():
    chartjs = pn.pane.ChartJS(object="Click Me Now!")

def get_app():
    chartjs = pn.pane.ChartJS(object="Click Me Now!")
    return pn.Column(
        chartjs, pn.Param(chartjs, parameters=["object", "clicks"])
    )

if __name__.startswith("bokeh"):
    pn.config.js_files["chartjs"]="https://cdn.jsdelivr.net/npm/chart.js@2.8.0"
    get_app().servable()
```

#### Render the Plot

In the *chartjs.ts* file add `import { canvas, div } from "@bokehjs/core/dom";` at the top and change the `render` function to

```typescript
render(): void {
    super.render()

    var object = {
        type: 'line',
        data: {
            labels: ['January', 'February', 'March', 'April', 'May', 'June', 'July'],
            datasets: [{
                label: 'My First dataset',
                backgroundColor: 'rgb(255, 99, 132)',
                borderColor: 'rgb(255, 99, 132)',
                data: [0, 10, 5, 2, 20, 30, 45]
            }]
        },
        options: {
            responsive: true,
            maintainAspectRatio: false,
        }
    }
    var chartContainer = div({class:
"chartjs-container", style: "position: relative; height:400px; width:100%"})
    var chartCanvas = canvas({class: "chartjs"})
    chartContainer.appendChild(chartCanvas)
    var ctx: any = chartCanvas.getContext('2d');
    new (window as any).Chart(ctx, object);
    this.el.appendChild(chartContainer)
}
```

#### Build and Test

Run `panel build panel` and hard refresh your browser. You should see

![ChartJS Hello World](../assets/chartjs-hello-world.png)

#### Save Your Model

Remember to stage and/or commit your working changes.

## Next Steps

- Enable setting the Python `ChartJS.object` parameter to any ChartJS dictionary.
- Check out support for different sizing modes, responsiveness and window maximize.
- Configure the javascript, css, ... dependencies in the Bokeh Python file.
- ...

## Check List

When you develop and test your model, eventually you should consider implementing and testing

- Dynamic updates to the `object` parameter and any other parameters added.
- Resizing
    - Does it resize when `width` is changed dynamically?
    - Does it resize when `height` is changed dynamically?
    - Does it work with `sizing_mode="stretch_width"` etc.
- Themes (Light, Dark)
- Window Resizing, Window Maximizing, Window Minimizing.
- Streaming of Data. Is it efficient?
- Events (Click, Hover etc.)
- Consider supporting the Python wrapper (ECharts -> PyECharts, ChartJS -> [PyChart.JS](https://pypi.org/project/pyChart.JS/))
- Tests
- Reference Notebook
- Communication with, for example, the ChartJS community and developers.

## Tips and Tricks

- Work in small increments and stage your changes when they work.
- Remember to `panel build panel` and hard refresh before you test.
- Add [console.log](https://www.w3schools.com/jsref/met_console_log.asp) statements to your `.ts` code for debugging.
- Use the [*Developer Tools*](https://developers.google.com/web/tools/chrome-devtools) *console* to see the `console.log` output and identify errors. In my browsers I toggle the Developer Tools using `CTRL+SHIFT+I`.
- Find inspiration for next steps in the [existing Panel Custom Models](https://github.com/holoviz/panel/tree/master/panel/models). For `ChartJS` one of the most relevant custom models would be `Echarts`. See Panel [echarts.py](https://github.com/holoviz/panel/blob/master/panel/pane/echarts.py), Bokeh [echarts.py](https://github.com/holoviz/panel/blob/master/panel/models/echarts.py) and [echarts.ts](https://github.com/holoviz/panel/blob/master/panel/models/echarts.ts).
- Use the existing documentation:
    - [Panel - Developer Guide](https://panel.holoviz.org/developer_guide/index.html)
    - [Bokeh - Extending Bokeh](https://docs.bokeh.org/en/latest/docs/user_guide/extensions.html)
    - [Awesome Panel - Bokeh Extensions Guide](https://awesome-panel.readthedocs.io/en/latest/guides/awesome-panel-extensions-guide/bokeh-extensions.html)
- Use Google Search. You don't have to be an expert javascript or typescript developer. It's a very small subset of those languages that is used when developing Custom Models.
- Ask for help in the [PyViz Gitter](https://gitter.im/pyviz/pyviz), [HoloViz Discourse](https://discourse.holoviz.org/) and [Bokeh Discourse](https://discourse.bokeh.org/) forums.
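One detail worth recapping from the model above: Panel forwards parameter values on the Python object to properties on the Bokeh model, and the `_rename` dict controls which parameters are forwarded, with `None` dropping parameters that exist only on the Panel side. A rough, framework-free sketch of that mapping (a hypothetical helper, not Panel's actual implementation):

```python
def map_params_to_props(params, rename):
    """Sketch of the Parameter -> Bokeh property mapping controlled by _rename."""
    props = {}
    for name, value in params.items():
        target = rename.get(name, name)  # default: same name on both sides
        if target is None:
            continue  # e.g. 'title' has no counterpart on the Bokeh model
        props[target] = value
    return props

# Mirrors the ChartJS example: 'title' is dropped, the rest pass through
props = map_params_to_props(
    {"title": "My Chart", "object": "Click Me!", "clicks": 0},
    rename={"title": None},
)
print(props)  # {'object': 'Click Me!', 'clicks': 0}
```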
``` from IPython.display import Markdown as md ### change to reflect your notebook _nb_loc = "07_training/07a_ingest.ipynb" _nb_title = "Writing an efficient ingest Loop" ### no need to change any of this _nb_safeloc = _nb_loc.replace('/', '%2F') md(""" <table class="tfo-notebook-buttons" align="left"> <td> <a target="_blank" href="https://console.cloud.google.com/ai-platform/notebooks/deploy-notebook?name={1}&url=https%3A%2F%2Fgithub.com%2FGoogleCloudPlatform%2Fpractical-ml-vision-book%2Fblob%2Fmaster%2F{2}&download_url=https%3A%2F%2Fgithub.com%2FGoogleCloudPlatform%2Fpractical-ml-vision-book%2Fraw%2Fmaster%2F{2}"> <img src="https://raw.githubusercontent.com/GoogleCloudPlatform/practical-ml-vision-book/master/logo-cloud.png"/> Run in AI Platform Notebook</a> </td> </td> <td> <a target="_blank" href="https://colab.research.google.com/github/GoogleCloudPlatform/practical-ml-vision-book/blob/master/{0}"> <img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a> </td> <td> <a target="_blank" href="https://github.com/GoogleCloudPlatform/practical-ml-vision-book/blob/master/{0}"> <img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a> </td> <td> <a href="https://raw.githubusercontent.com/GoogleCloudPlatform/practical-ml-vision-book/master/{0}"> <img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a> </td> </table> """.format(_nb_loc, _nb_title, _nb_safeloc)) ``` # Efficient Ingest In this notebook, we speed the ingest of training/evaluation data into the model. ## Enable GPU and set up helper functions This notebook and pretty much every other notebook in this repository will run faster if you are using a GPU. 
On Colab: - Navigate to Edit→Notebook Settings - Select GPU from the Hardware Accelerator drop-down On Cloud AI Platform Notebooks: - Navigate to https://console.cloud.google.com/ai-platform/notebooks - Create an instance with a GPU or select your instance and add a GPU Next, we'll confirm that we can connect to the GPU with tensorflow: ``` import tensorflow as tf print('TensorFlow version: ' + tf.version.VERSION) print('Built with GPU support? ' + ('Yes!' if tf.test.is_built_with_cuda() else 'Noooo!')) print('There are {} GPUs'.format(len(tf.config.experimental.list_physical_devices("GPU")))) device_name = tf.test.gpu_device_name() if device_name != '/device:GPU:0': raise SystemError('GPU device not found') print('Found GPU at: {}'.format(device_name)) ``` ## Original code This is the original code, from [../06_preprocessing/06e_colordistortion.ipynb](../06_preprocessing/06e_colordistortion.ipynb). We have a few variations of creating a preprocessed dataset. ``` import matplotlib.pylab as plt import numpy as np import tensorflow as tf import tensorflow_hub as hub import os # Load compressed models from tensorflow_hub os.environ['TFHUB_MODEL_LOAD_FORMAT'] = 'COMPRESSED' from tensorflow.data.experimental import AUTOTUNE IMG_HEIGHT = 448 # note *twice* what we used to have IMG_WIDTH = 448 IMG_CHANNELS = 3 CLASS_NAMES = 'daisy dandelion roses sunflowers tulips'.split() def training_plot(metrics, history): f, ax = plt.subplots(1, len(metrics), figsize=(5*len(metrics), 5)) for idx, metric in enumerate(metrics): ax[idx].plot(history.history[metric], ls='dashed') ax[idx].set_xlabel("Epochs") ax[idx].set_ylabel(metric) ax[idx].plot(history.history['val_' + metric]); ax[idx].legend([metric, 'val_' + metric]) class _Preprocessor: def __init__(self): # nothing to initialize pass def read_from_tfr(self, proto): feature_description = { 'image': tf.io.VarLenFeature(tf.float32), 'shape': tf.io.VarLenFeature(tf.int64), 'label': tf.io.FixedLenFeature([], tf.string, default_value=''),
'label_int': tf.io.FixedLenFeature([], tf.int64, default_value=0), } rec = tf.io.parse_single_example( proto, feature_description ) shape = tf.sparse.to_dense(rec['shape']) img = tf.reshape(tf.sparse.to_dense(rec['image']), shape) label_int = rec['label_int'] return img, label_int def read_from_jpegfile(self, filename): # same code as in 05_create_dataset/jpeg_to_tfrecord.py img = tf.io.read_file(filename) img = tf.image.decode_jpeg(img, channels=IMG_CHANNELS) img = tf.image.convert_image_dtype(img, tf.float32) return img def preprocess(self, img): return tf.image.resize_with_pad(img, IMG_HEIGHT, IMG_WIDTH) def create_preproc_dataset_plain(pattern): preproc = _Preprocessor() trainds = tf.data.TFRecordDataset( [filename for filename in tf.io.gfile.glob(pattern)], compression_type='GZIP' ).map(preproc.read_from_tfr).map( lambda img, label: (preproc.preprocess(img), label) ) return trainds # note: addition of AUTOTUNE to the map() calls def create_preproc_dataset_parallelmap(pattern): preproc = _Preprocessor() def _preproc_img_label(img, label): return (preproc.preprocess(img), label) trainds = ( tf.data.TFRecordDataset( [filename for filename in tf.io.gfile.glob(pattern)], compression_type='GZIP' ) .map(preproc.read_from_tfr, num_parallel_calls=AUTOTUNE) .map(_preproc_img_label, num_parallel_calls=AUTOTUNE) ) return trainds # note: splits the files into two halves and interleaves datasets def create_preproc_dataset_interleave(pattern, num_parallel=None): preproc = _Preprocessor() files = [filename for filename in tf.io.gfile.glob(pattern)] if len(files) > 1: print("Interleaving the reading of {} files.".format(len(files))) def _create_half_ds(x): if x == 0: half = files[:(len(files)//2)] else: half = files[(len(files)//2):] return tf.data.TFRecordDataset(half, compression_type='GZIP') trainds = tf.data.Dataset.range(2).interleave( _create_half_ds, num_parallel_calls=AUTOTUNE) else: trainds = tf.data.TFRecordDataset(files, compression_type='GZIP') def 
_preproc_img_label(img, label): return (preproc.preprocess(img), label) trainds = (trainds .map(preproc.read_from_tfr, num_parallel_calls=num_parallel) .map(_preproc_img_label, num_parallel_calls=num_parallel) ) return trainds def create_preproc_image(filename): preproc = _Preprocessor() img = preproc.read_from_jpegfile(filename) return preproc.preprocess(img) class RandomColorDistortion(tf.keras.layers.Layer): def __init__(self, contrast_range=[0.5, 1.5], brightness_delta=[-0.2, 0.2], **kwargs): super(RandomColorDistortion, self).__init__(**kwargs) self.contrast_range = contrast_range self.brightness_delta = brightness_delta def call(self, images, training=None): if not training: return images contrast = np.random.uniform( self.contrast_range[0], self.contrast_range[1]) brightness = np.random.uniform( self.brightness_delta[0], self.brightness_delta[1]) images = tf.image.adjust_contrast(images, contrast) images = tf.image.adjust_brightness(images, brightness) images = tf.clip_by_value(images, 0, 1) return images ``` ## Speeding up the reading of data To try it out, we'll simply read through the data several times and compute some quantity on the images. ``` def loop_through_dataset(ds, nepochs): lowest_mean = tf.constant(1.) for epoch in range(nepochs): thresh = np.random.uniform(0.3, 0.7) # random threshold count = 0 sumsofar = tf.constant(0.) 
for (img, label) in ds: # mean of channel values > thresh mean = tf.reduce_mean(tf.where(img > thresh, img, 0)) sumsofar = sumsofar + mean count = count + 1 if count%100 == 0: print('.', end='') mean = sumsofar/count print(mean) if mean < lowest_mean: lowest_mean = mean return lowest_mean PATTERN_SUFFIX, NUM_EPOCHS = '-0000[01]-*', 2 # 2 files, 2 epochs #PATTERN_SUFFIX, NUM_EPOCHS = '-*', 20 # 16 files, 20 epochs %%time ds = create_preproc_dataset_plain( 'gs://practical-ml-vision-book/flowers_tfr/train' + PATTERN_SUFFIX ) loop_through_dataset(ds, NUM_EPOCHS) %%time # parallel map ds = create_preproc_dataset_parallelmap( 'gs://practical-ml-vision-book/flowers_tfr/train' + PATTERN_SUFFIX ) loop_through_dataset(ds, NUM_EPOCHS) %%time # with interleave ds = create_preproc_dataset_interleave( 'gs://practical-ml-vision-book/flowers_tfr/train' + PATTERN_SUFFIX, num_parallel=None ) loop_through_dataset(ds, NUM_EPOCHS) %%time # with interleave and parallel maps ds = create_preproc_dataset_interleave( 'gs://practical-ml-vision-book/flowers_tfr/train' + PATTERN_SUFFIX, num_parallel=AUTOTUNE ) loop_through_dataset(ds, NUM_EPOCHS) ``` When I did this, this is what I got: | Method | CPU time | Wall time | | ---------------------- | ----------- | ------------ | | Plain | 7.53s | 7.99s | | Parallel Map | 8.30s | 5.94s | | Interleave | 8.60s | 5.47s | | Interleave+Parallel Map| 8.44s | 5.23s | ## ML model The computation above was pretty cheap, involving merely adding up all the pixel values. What happens if we need a bit more complexity (gradient calc, etc.)?
``` def train_simple_model(ds, nepochs): model = tf.keras.Sequential([ tf.keras.layers.Flatten( input_shape=(IMG_HEIGHT, IMG_WIDTH, IMG_CHANNELS)), #tf.keras.layers.Dense(32, activation='relu'), tf.keras.layers.Dense(len(CLASS_NAMES), activation='softmax') ]) model.compile(optimizer=tf.keras.optimizers.Adam(), loss=tf.keras.losses.SparseCategoricalCrossentropy( from_logits=False), metrics=['accuracy']) model.fit(ds, epochs=nepochs) %%time ds = create_preproc_dataset_plain( 'gs://practical-ml-vision-book/flowers_tfr/train' + PATTERN_SUFFIX ).batch(1) train_simple_model(ds, NUM_EPOCHS) %%time # parallel map ds = create_preproc_dataset_parallelmap( 'gs://practical-ml-vision-book/flowers_tfr/train' + PATTERN_SUFFIX ).batch(1) train_simple_model(ds, NUM_EPOCHS) %%time # with interleave ds = create_preproc_dataset_interleave( 'gs://practical-ml-vision-book/flowers_tfr/train' + PATTERN_SUFFIX, num_parallel=None ).batch(1) train_simple_model(ds, NUM_EPOCHS) %%time # with interleave and parallel maps ds = create_preproc_dataset_interleave( 'gs://practical-ml-vision-book/flowers_tfr/train' + PATTERN_SUFFIX, num_parallel=AUTOTUNE ).batch(1) train_simple_model(ds, NUM_EPOCHS) ``` We note that the improvement remains: | Method | CPU time | Wall time | | -----------------------| ----------- | ------------ | | Plain | 9.91s | 9.39s | | Parallel Map | 10.7s | 8.17s | | Interleave | 10.5s | 7.54s | | Interleave+Parallel Map| 10.3s | 7.17s | ## Speeding up the handling of data ``` # alias to the more efficient one def create_preproc_dataset(pattern): return create_preproc_dataset_interleave(pattern, num_parallel=AUTOTUNE) %%time # add prefetching ds = create_preproc_dataset( 'gs://practical-ml-vision-book/flowers_tfr/train' + PATTERN_SUFFIX ).prefetch(AUTOTUNE).batch(1) train_simple_model(ds, NUM_EPOCHS) %%time # Add batching of different sizes ds = create_preproc_dataset( 'gs://practical-ml-vision-book/flowers_tfr/train' + PATTERN_SUFFIX ).prefetch(AUTOTUNE).batch(8)
train_simple_model(ds, NUM_EPOCHS) %%time # Add batching of different sizes ds = create_preproc_dataset( 'gs://practical-ml-vision-book/flowers_tfr/train' + PATTERN_SUFFIX ).prefetch(AUTOTUNE).batch(16) train_simple_model(ds, NUM_EPOCHS) %%time # Add batching of different sizes ds = create_preproc_dataset( 'gs://practical-ml-vision-book/flowers_tfr/train' + PATTERN_SUFFIX ).prefetch(AUTOTUNE).batch(32) train_simple_model(ds, NUM_EPOCHS) %%time # add caching: always do this optimization last. ds = create_preproc_dataset( 'gs://practical-ml-vision-book/flowers_tfr/train' + PATTERN_SUFFIX ).cache().batch(32) train_simple_model(ds, NUM_EPOCHS) %%time # add caching: always do this optimization last. ds = create_preproc_dataset( 'gs://practical-ml-vision-book/flowers_tfr/train' + PATTERN_SUFFIX ).prefetch(AUTOTUNE).cache().batch(32) train_simple_model(ds, NUM_EPOCHS) %%time # add caching: always do this optimization last. ds = create_preproc_dataset( 'gs://practical-ml-vision-book/flowers_tfr/train' + PATTERN_SUFFIX ).cache().prefetch(AUTOTUNE).batch(32) train_simple_model(ds, NUM_EPOCHS) ``` Adding to the previous table: | Method | CPU time | Wall time | | -----------------------| ----------- | ------------ | | Plain | 9.91s | 9.39s | | Parallel Map | 10.7s | 8.17s | | Interleave | 10.5s | 7.54s | | Interleave+Parallel Map| 10.3s | 7.17s | | Interleave + Parallel, and then adding: | - | - | | Prefetch | 11.4s | 8.09s | | Batch size 8 | 9.56s | 6.90s | | Batch size 16 | 9.90s | 6.70s | | Batch size 32 | 9.68s | 6.37s | | Interleave + Parallel + batchsize 32, and then adding: | - | - | | Cache | 6.16s | 4.36s | | Prefetch + Cache | 5.76s | 4.04s | | Cache + Prefetch | 5.65s | 4.19s | So, the best option is: <pre> ds = create_preproc_dataset_interleave(pattern, num_parallel=AUTOTUNE).prefetch(AUTOTUNE).cache().batch(32) </pre> ## License Copyright 2020 Google Inc. 
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.
### <font color = "darkblue">Updates to Assignment</font> #### If you were working on the older version: * Please click on the "Coursera" icon in the top right to open up the folder directory. * Navigate to the folder: Week 3/ Planar data classification with one hidden layer. You can see your prior work in version 6b: "Planar data classification with one hidden layer v6b.ipynb" #### List of bug fixes and enhancements * Clarifies that the classifier will learn to classify regions as either red or blue. * compute_cost function fixes np.squeeze by casting it as a float. * compute_cost instructions clarify the purpose of np.squeeze. * compute_cost clarifies that "parameters" parameter is not needed, but is kept in the function definition until the auto-grader is also updated. * nn_model removes extraction of parameter values, as the entire parameter dictionary is passed to the invoked functions. # Planar data classification with one hidden layer Welcome to your week 3 programming assignment. It's time to build your first neural network, which will have a hidden layer. You will see a big difference between this model and the one you implemented using logistic regression. **You will learn how to:** - Implement a 2-class classification neural network with a single hidden layer - Use units with a non-linear activation function, such as tanh - Compute the cross entropy loss - Implement forward and backward propagation ## 1 - Packages ## Let's first import all the packages that you will need during this assignment. - [numpy](https://www.numpy.org/) is the fundamental package for scientific computing with Python. - [sklearn](http://scikit-learn.org/stable/) provides simple and efficient tools for data mining and data analysis. - [matplotlib](http://matplotlib.org) is a library for plotting graphs in Python. 
- testCases provides some test examples to assess the correctness of your functions
- planar_utils provides various useful functions used in this assignment

```
# Package imports
import numpy as np
import matplotlib.pyplot as plt
from testCases_v2 import *
import sklearn
import sklearn.datasets
import sklearn.linear_model
from planar_utils import plot_decision_boundary, sigmoid, load_planar_dataset, load_extra_datasets

%matplotlib inline

np.random.seed(1) # set a seed so that the results are consistent
```

## 2 - Dataset ##

First, let's get the dataset you will work on. The following code will load a "flower" 2-class dataset into variables `X` and `Y`.

```
X, Y = load_planar_dataset()
```

Visualize the dataset using matplotlib. The data looks like a "flower" with some red (label y=0) and some blue (y=1) points. Your goal is to build a model to fit this data. In other words, we want the classifier to define regions as either red or blue.

```
X.shape
Y.shape
np.squeeze(Y).shape

# Visualize the data:
plt.scatter(X[0, :], X[1, :], c=np.squeeze(Y), s=40, cmap=plt.cm.Spectral);
```

You have:
- a numpy-array (matrix) X that contains your features (x1, x2)
- a numpy-array (vector) Y that contains your labels (red:0, blue:1).

Let's first get a better sense of what our data is like.

**Exercise**: How many training examples do you have? In addition, what is the `shape` of the variables `X` and `Y`?

**Hint**: How do you get the shape of a numpy array? [(help)](https://docs.scipy.org/doc/numpy/reference/generated/numpy.ndarray.shape.html)

```
### START CODE HERE ### (≈ 3 lines of code)
shape_X = X.shape
shape_Y = Y.shape
m = X.shape[1] # training set size
### END CODE HERE ###

print ('The shape of X is: ' + str(shape_X))
print ('The shape of Y is: ' + str(shape_Y))
print ('I have m = %d training examples!' % (m))
```

**Expected Output**:

<table style="width:20%">
  <tr>
    <td>**shape of X**</td>
    <td> (2, 400) </td>
  </tr>
  <tr>
    <td>**shape of Y**</td>
    <td> (1, 400) </td>
  </tr>
  <tr>
    <td>**m**</td>
    <td> 400 </td>
  </tr>
</table>

## 3 - Simple Logistic Regression

Before building a full neural network, let's first see how logistic regression performs on this problem. You can use sklearn's built-in functions to do that. Run the code below to train a logistic regression classifier on the dataset.

```
# Train the logistic regression classifier
clf = sklearn.linear_model.LogisticRegressionCV();
clf.fit(X.T, Y.T);
```

You can now plot the decision boundary of these models. Run the code below.

```
# Plot the decision boundary for logistic regression
plot_decision_boundary(lambda x: clf.predict(x), X, np.squeeze(Y))
plt.title("Logistic Regression")

# Print accuracy
LR_predictions = clf.predict(X.T)
print ('Accuracy of logistic regression: %d ' % float((np.dot(Y,LR_predictions) + np.dot(1-Y,1-LR_predictions))/float(Y.size)*100) +
       '% ' + "(percentage of correctly labelled datapoints)")
```

**Expected Output**:

<table style="width:20%">
  <tr>
    <td>**Accuracy**</td>
    <td> 47% </td>
  </tr>
</table>

**Interpretation**: The dataset is not linearly separable, so logistic regression doesn't perform well. Hopefully a neural network will do better. Let's try this now!

## 4 - Neural Network model

Logistic regression did not work well on the "flower dataset". You are going to train a Neural Network with a single hidden layer.
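The accuracy expression in the logistic-regression cell above packs the whole correct-prediction count into two dot products. A minimal sketch, with made-up labels, of why `np.dot(Y, pred) + np.dot(1-Y, 1-pred)` counts correct predictions:

```python
import numpy as np

# Made-up labels and predictions, just to illustrate the accuracy expression.
Y_toy = np.array([1, 0, 1, 1, 0])
pred_toy = np.array([1, 0, 0, 1, 1])

# Y.pred counts examples where both are 1; (1-Y).(1-pred) counts examples
# where both are 0 -- together, every correctly labelled example.
correct = np.dot(Y_toy, pred_toy) + np.dot(1 - Y_toy, 1 - pred_toy)
accuracy = correct / Y_toy.size * 100
print(accuracy)  # 60.0: 3 of the 5 labels match
```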
**Here is our model**: <img src="images/classification_kiank.png" style="width:600px;height:300px;"> **Mathematically**: For one example $x^{(i)}$: $$z^{[1] (i)} = W^{[1]} x^{(i)} + b^{[1]}\tag{1}$$ $$a^{[1] (i)} = \tanh(z^{[1] (i)})\tag{2}$$ $$z^{[2] (i)} = W^{[2]} a^{[1] (i)} + b^{[2]}\tag{3}$$ $$\hat{y}^{(i)} = a^{[2] (i)} = \sigma(z^{ [2] (i)})\tag{4}$$ $$y^{(i)}_{prediction} = \begin{cases} 1 & \mbox{if } a^{[2](i)} > 0.5 \\ 0 & \mbox{otherwise } \end{cases}\tag{5}$$ Given the predictions on all the examples, you can also compute the cost $J$ as follows: $$J = - \frac{1}{m} \sum\limits_{i = 0}^{m} \large\left(\small y^{(i)}\log\left(a^{[2] (i)}\right) + (1-y^{(i)})\log\left(1- a^{[2] (i)}\right) \large \right) \small \tag{6}$$ **Reminder**: The general methodology to build a Neural Network is to: 1. Define the neural network structure ( # of input units, # of hidden units, etc). 2. Initialize the model's parameters 3. Loop: - Implement forward propagation - Compute loss - Implement backward propagation to get the gradients - Update parameters (gradient descent) You often build helper functions to compute steps 1-3 and then merge them into one function we call `nn_model()`. Once you've built `nn_model()` and learnt the right parameters, you can make predictions on new data. ### 4.1 - Defining the neural network structure #### **Exercise**: Define three variables: - n_x: the size of the input layer - n_h: the size of the hidden layer (set this to 4) - n_y: the size of the output layer **Hint**: Use shapes of X and Y to find n_x and n_y. Also, hard code the hidden layer size to be 4. 
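Before coding `layer_sizes()`, it can help to see what these sizes imply for the parameter and activation shapes. A quick shape check, wrapped in a helper so it does not touch the notebook's own variables, with illustrative values (n_x = 2, n_h = 4, n_y = 1, m = 400 — not the graded test-case sizes):

```python
import numpy as np

def shape_check(n_x=2, n_h=4, n_y=1, m=400):
    """Illustrative sizes only; not the graded test-case values."""
    W1 = np.random.randn(n_h, n_x)   # (4, 2)
    b1 = np.zeros((n_h, 1))          # (4, 1)
    W2 = np.random.randn(n_y, n_h)   # (1, 4)
    b2 = np.zeros((n_y, 1))          # (1, 1)
    X = np.random.randn(n_x, m)      # one column per example
    Z1 = W1 @ X + b1                 # (4, 400); b1 broadcasts across columns
    Z2 = W2 @ np.tanh(Z1) + b2       # (1, 400)
    return Z2.shape

print(shape_check())  # (1, 400)
```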
``` # GRADED FUNCTION: layer_sizes def layer_sizes(X, Y): """ Arguments: X -- input dataset of shape (input size, number of examples) Y -- labels of shape (output size, number of examples) Returns: n_x -- the size of the input layer n_h -- the size of the hidden layer n_y -- the size of the output layer """ ### START CODE HERE ### (≈ 3 lines of code) n_x = X.shape[0] # size of input layer n_h = 4 n_y = Y.shape[0] # size of output layer ### END CODE HERE ### return (n_x, n_h, n_y) X_assess, Y_assess = layer_sizes_test_case() (n_x, n_h, n_y) = layer_sizes(X_assess, Y_assess) print("The size of the input layer is: n_x = " + str(n_x)) print("The size of the hidden layer is: n_h = " + str(n_h)) print("The size of the output layer is: n_y = " + str(n_y)) ``` **Expected Output** (these are not the sizes you will use for your network, they are just used to assess the function you've just coded). <table style="width:20%"> <tr> <td>**n_x**</td> <td> 5 </td> </tr> <tr> <td>**n_h**</td> <td> 4 </td> </tr> <tr> <td>**n_y**</td> <td> 2 </td> </tr> </table> ### 4.2 - Initialize the model's parameters #### **Exercise**: Implement the function `initialize_parameters()`. **Instructions**: - Make sure your parameters' sizes are right. Refer to the neural network figure above if needed. - You will initialize the weights matrices with random values. - Use: `np.random.randn(a,b) * 0.01` to randomly initialize a matrix of shape (a,b). - You will initialize the bias vectors as zeros. - Use: `np.zeros((a,b))` to initialize a matrix of shape (a,b) with zeros. 
``` # GRADED FUNCTION: initialize_parameters def initialize_parameters(n_x, n_h, n_y): """ Argument: n_x -- size of the input layer n_h -- size of the hidden layer n_y -- size of the output layer Returns: params -- python dictionary containing your parameters: W1 -- weight matrix of shape (n_h, n_x) b1 -- bias vector of shape (n_h, 1) W2 -- weight matrix of shape (n_y, n_h) b2 -- bias vector of shape (n_y, 1) """ np.random.seed(2) # we set up a seed so that your output matches ours although the initialization is random. ### START CODE HERE ### (≈ 4 lines of code) W1 = np.random.randn(n_h,n_x) * 0.01 b1 = np.zeros((n_h,1)) W2 = np.random.randn(n_y,n_h) * 0.01 b2 = np.zeros((n_y,1)) ### END CODE HERE ### assert (W1.shape == (n_h, n_x)) assert (b1.shape == (n_h, 1)) assert (W2.shape == (n_y, n_h)) assert (b2.shape == (n_y, 1)) parameters = {"W1": W1, "b1": b1, "W2": W2, "b2": b2} return parameters n_x, n_h, n_y = initialize_parameters_test_case() parameters = initialize_parameters(n_x, n_h, n_y) print("W1 = " + str(parameters["W1"])) print("b1 = " + str(parameters["b1"])) print("W2 = " + str(parameters["W2"])) print("b2 = " + str(parameters["b2"])) ``` **Expected Output**: <table style="width:90%"> <tr> <td>**W1**</td> <td> [[-0.00416758 -0.00056267] [-0.02136196 0.01640271] [-0.01793436 -0.00841747] [ 0.00502881 -0.01245288]] </td> </tr> <tr> <td>**b1**</td> <td> [[ 0.] [ 0.] [ 0.] [ 0.]] </td> </tr> <tr> <td>**W2**</td> <td> [[-0.01057952 -0.00909008 0.00551454 0.02292208]]</td> </tr> <tr> <td>**b2**</td> <td> [[ 0.]] </td> </tr> </table> ### 4.3 - The Loop #### **Question**: Implement `forward_propagation()`. **Instructions**: - Look above at the mathematical representation of your classifier. - You can use the function `sigmoid()`. It is built-in (imported) in the notebook. - You can use the function `np.tanh()`. It is part of the numpy library. - The steps you have to implement are: 1. 
Retrieve each parameter from the dictionary "parameters" (which is the output of `initialize_parameters()`) by using `parameters[".."]`. 2. Implement Forward Propagation. Compute $Z^{[1]}, A^{[1]}, Z^{[2]}$ and $A^{[2]}$ (the vector of all your predictions on all the examples in the training set). - Values needed in the backpropagation are stored in "`cache`". The `cache` will be given as an input to the backpropagation function. ``` # GRADED FUNCTION: forward_propagation def forward_propagation(X, parameters): """ Argument: X -- input data of size (n_x, m) parameters -- python dictionary containing your parameters (output of initialization function) Returns: A2 -- The sigmoid output of the second activation cache -- a dictionary containing "Z1", "A1", "Z2" and "A2" """ # Retrieve each parameter from the dictionary "parameters" ### START CODE HERE ### (≈ 4 lines of code) W1 = parameters["W1"] b1 = parameters["b1"] W2 = parameters["W2"] b2 = parameters["b2"] ### END CODE HERE ### # Implement Forward Propagation to calculate A2 (probabilities) ### START CODE HERE ### (≈ 4 lines of code) Z1 = (np.dot(W1,X) + b1) A1 = np.tanh(Z1) Z2 = np.dot(W2,A1) + b2 A2 = sigmoid(Z2) ### END CODE HERE ### assert(A2.shape == (1, X.shape[1])) cache = {"Z1": Z1, "A1": A1, "Z2": Z2, "A2": A2} return A2, cache X_assess, parameters = forward_propagation_test_case() A2, cache = forward_propagation(X_assess, parameters) # Note: we use the mean here just to make sure that your output matches ours. 
print(np.mean(cache['Z1']) ,np.mean(cache['A1']),np.mean(cache['Z2']),np.mean(cache['A2'])) ``` **Expected Output**: <table style="width:50%"> <tr> <td> 0.262818640198 0.091999045227 -1.30766601287 0.212877681719 </td> </tr> </table> Now that you have computed $A^{[2]}$ (in the Python variable "`A2`"), which contains $a^{[2](i)}$ for every example, you can compute the cost function as follows: $$J = - \frac{1}{m} \sum\limits_{i = 1}^{m} \large{(} \small y^{(i)}\log\left(a^{[2] (i)}\right) + (1-y^{(i)})\log\left(1- a^{[2] (i)}\right) \large{)} \small\tag{13}$$ **Exercise**: Implement `compute_cost()` to compute the value of the cost $J$. **Instructions**: - There are many ways to implement the cross-entropy loss. To help you, we give you how we would have implemented $- \sum\limits_{i=0}^{m} y^{(i)}\log(a^{[2](i)})$: ```python logprobs = np.multiply(np.log(A2),Y) cost = - np.sum(logprobs) # no need to use a for loop! ``` (you can use either `np.multiply()` and then `np.sum()` or directly `np.dot()`). Note that if you use `np.multiply` followed by `np.sum` the end result will be a type `float`, whereas if you use `np.dot`, the result will be a 2D numpy array. We can use `np.squeeze()` to remove redundant dimensions (in the case of single float, this will be reduced to a zero-dimension array). We can cast the array as a type `float` using `float()`. ``` # GRADED FUNCTION: compute_cost def compute_cost(A2, Y, parameters): """ Computes the cross-entropy cost given in equation (13) Arguments: A2 -- The sigmoid output of the second activation, of shape (1, number of examples) Y -- "true" labels vector of shape (1, number of examples) parameters -- python dictionary containing your parameters W1, b1, W2 and b2 [Note that the parameters argument is not used in this function, but the auto-grader currently expects this parameter. Future version of this notebook will fix both the notebook and the auto-grader so that `parameters` is not needed. 
For now, please include `parameters` in the function signature, and also when invoking this function.] Returns: cost -- cross-entropy cost given equation (13) """ m = Y.shape[1] # number of example first_part = np.multiply(np.log(A2),Y) second_part = np.multiply(np.log(1-A2),(1-Y)) # Compute the cross-entropy cost ### START CODE HERE ### (≈ 2 lines of code) logprobs = -np.sum(first_part+second_part) cost = logprobs/m ### END CODE HERE ### cost = float(np.squeeze(cost)) # makes sure cost is the dimension we expect. # E.g., turns [[17]] into 17 assert(isinstance(cost, float)) return cost A2, Y_assess, parameters = compute_cost_test_case() print("cost = " + str(compute_cost(A2, Y_assess, parameters))) ``` **Expected Output**: <table style="width:20%"> <tr> <td>**cost**</td> <td> 0.693058761... </td> </tr> </table> Using the cache computed during forward propagation, you can now implement backward propagation. **Question**: Implement the function `backward_propagation()`. **Instructions**: Backpropagation is usually the hardest (most mathematical) part in deep learning. To help you, here again is the slide from the lecture on backpropagation. You'll want to use the six equations on the right of this slide, since you are building a vectorized implementation. 
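The `np.multiply`/`np.sum` versus `np.dot` distinction mentioned in the compute_cost instructions can be checked directly. A small sketch with made-up probabilities, showing that the two routes agree once the `np.dot` result is squeezed and cast:

```python
import numpy as np

# Made-up sigmoid outputs and labels, just to compare the two routes.
A2 = np.array([[0.8, 0.4, 0.9]])
Y = np.array([[1, 0, 1]])
m = Y.shape[1]

# Route 1: elementwise multiply + sum -> a plain scalar.
logprobs = np.multiply(np.log(A2), Y) + np.multiply(np.log(1 - A2), 1 - Y)
cost_mult = -np.sum(logprobs) / m

# Route 2: dot products -> a (1, 1) array that still needs squeezing.
cost_dot = -(np.dot(np.log(A2), Y.T) + np.dot(np.log(1 - A2), (1 - Y).T)) / m
cost_dot = float(np.squeeze(cost_dot))

print(np.isclose(cost_mult, cost_dot))  # True
```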
<img src="images/grad_summary.png" style="width:600px;height:300px;"> <!-- $\frac{\partial \mathcal{J} }{ \partial z_{2}^{(i)} } = \frac{1}{m} (a^{[2](i)} - y^{(i)})$ $\frac{\partial \mathcal{J} }{ \partial W_2 } = \frac{\partial \mathcal{J} }{ \partial z_{2}^{(i)} } a^{[1] (i) T} $ $\frac{\partial \mathcal{J} }{ \partial b_2 } = \sum_i{\frac{\partial \mathcal{J} }{ \partial z_{2}^{(i)}}}$ $\frac{\partial \mathcal{J} }{ \partial z_{1}^{(i)} } = W_2^T \frac{\partial \mathcal{J} }{ \partial z_{2}^{(i)} } * ( 1 - a^{[1] (i) 2}) $ $\frac{\partial \mathcal{J} }{ \partial W_1 } = \frac{\partial \mathcal{J} }{ \partial z_{1}^{(i)} } X^T $ $\frac{\partial \mathcal{J} _i }{ \partial b_1 } = \sum_i{\frac{\partial \mathcal{J} }{ \partial z_{1}^{(i)}}}$ - Note that $*$ denotes elementwise multiplication. - The notation you will use is common in deep learning coding: - dW1 = $\frac{\partial \mathcal{J} }{ \partial W_1 }$ - db1 = $\frac{\partial \mathcal{J} }{ \partial b_1 }$ - dW2 = $\frac{\partial \mathcal{J} }{ \partial W_2 }$ - db2 = $\frac{\partial \mathcal{J} }{ \partial b_2 }$ !--> - Tips: - To compute dZ1 you'll need to compute $g^{[1]'}(Z^{[1]})$. Since $g^{[1]}(.)$ is the tanh activation function, if $a = g^{[1]}(z)$ then $g^{[1]'}(z) = 1-a^2$. So you can compute $g^{[1]'}(Z^{[1]})$ using `(1 - np.power(A1, 2))`. ``` # GRADED FUNCTION: backward_propagation def backward_propagation(parameters, cache, X, Y): """ Implement the backward propagation using the instructions above. Arguments: parameters -- python dictionary containing our parameters cache -- a dictionary containing "Z1", "A1", "Z2" and "A2". X -- input data of shape (2, number of examples) Y -- "true" labels vector of shape (1, number of examples) Returns: grads -- python dictionary containing your gradients with respect to different parameters """ m = X.shape[1] # First, retrieve W1 and W2 from the dictionary "parameters". 
### START CODE HERE ### (≈ 2 lines of code) W1 = parameters["W1"] W2 = parameters["W2"] ### END CODE HERE ### # Retrieve also A1 and A2 from dictionary "cache". ### START CODE HERE ### (≈ 2 lines of code) A1 = cache["A1"] A2 = cache["A2"] ### END CODE HERE ### # Backward propagation: calculate dW1, db1, dW2, db2. ### START CODE HERE ### (≈ 6 lines of code, corresponding to 6 equations on slide above) dZ2 = A2-Y dW2 = np.dot(dZ2,A1.T)/m db2 = np.sum(dZ2,axis=1,keepdims=True)/m dZ1 = np.dot(W2.T,dZ2) * (1-np.power(A1,2)) dW1 = np.dot(dZ1,X.T)/m db1 = np.sum(dZ1,axis=1,keepdims=True)/m ### END CODE HERE ### grads = {"dW1": dW1, "db1": db1, "dW2": dW2, "db2": db2} return grads parameters, cache, X_assess, Y_assess = backward_propagation_test_case() grads = backward_propagation(parameters, cache, X_assess, Y_assess) print ("dW1 = "+ str(grads["dW1"])) print ("db1 = "+ str(grads["db1"])) print ("dW2 = "+ str(grads["dW2"])) print ("db2 = "+ str(grads["db2"])) ``` **Expected output**: <table style="width:80%"> <tr> <td>**dW1**</td> <td> [[ 0.00301023 -0.00747267] [ 0.00257968 -0.00641288] [-0.00156892 0.003893 ] [-0.00652037 0.01618243]] </td> </tr> <tr> <td>**db1**</td> <td> [[ 0.00176201] [ 0.00150995] [-0.00091736] [-0.00381422]] </td> </tr> <tr> <td>**dW2**</td> <td> [[ 0.00078841 0.01765429 -0.00084166 -0.01022527]] </td> </tr> <tr> <td>**db2**</td> <td> [[-0.16655712]] </td> </tr> </table> **Question**: Implement the update rule. Use gradient descent. You have to use (dW1, db1, dW2, db2) in order to update (W1, b1, W2, b2). **General gradient descent rule**: $ \theta = \theta - \alpha \frac{\partial J }{ \partial \theta }$ where $\alpha$ is the learning rate and $\theta$ represents a parameter. **Illustration**: The gradient descent algorithm with a good learning rate (converging) and a bad learning rate (diverging). Images courtesy of Adam Harley. 
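The converging and diverging behaviour in the illustration can be reproduced on a one-parameter toy cost. A minimal sketch (a toy problem, not the notebook's network): gradient descent on J(θ) = θ² converges for a small learning rate and diverges once the step overshoots:

```python
# Gradient descent on J(theta) = theta**2, so dJ/dtheta = 2*theta.
def descend(lr, steps=50, theta=5.0):
    for _ in range(steps):
        theta = theta - lr * 2 * theta   # theta := theta - alpha * dJ/dtheta
    return theta

print(abs(descend(lr=0.1)))   # tiny: each step multiplies theta by 0.8
print(abs(descend(lr=1.1)))   # huge: each step multiplies theta by -1.2
```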
<img src="images/sgd.gif" style="width:400;height:400;"> <img src="images/sgd_bad.gif" style="width:400;height:400;"> ``` # GRADED FUNCTION: update_parameters def update_parameters(parameters, grads, learning_rate = 1.2): """ Updates parameters using the gradient descent update rule given above Arguments: parameters -- python dictionary containing your parameters grads -- python dictionary containing your gradients Returns: parameters -- python dictionary containing your updated parameters """ # Retrieve each parameter from the dictionary "parameters" ### START CODE HERE ### (≈ 4 lines of code) W1 = parameters["W1"] b1 = parameters["b1"] W2 = parameters["W2"] b2 = parameters["b2"] ### END CODE HERE ### # Retrieve each gradient from the dictionary "grads" ### START CODE HERE ### (≈ 4 lines of code) dW1 = grads["dW1"] db1 = grads["db1"] dW2 = grads["dW2"] db2 = grads["db2"] ## END CODE HERE ### # Update rule for each parameter ### START CODE HERE ### (≈ 4 lines of code) W1 = W1 - learning_rate * dW1 b1 = b1 - learning_rate * db1 W2 = W2 - learning_rate * dW2 b2 = b2 - learning_rate * db2 ### END CODE HERE ### parameters = {"W1": W1, "b1": b1, "W2": W2, "b2": b2} return parameters parameters, grads = update_parameters_test_case() parameters = update_parameters(parameters, grads) print("W1 = " + str(parameters["W1"])) print("b1 = " + str(parameters["b1"])) print("W2 = " + str(parameters["W2"])) print("b2 = " + str(parameters["b2"])) ``` **Expected Output**: <table style="width:80%"> <tr> <td>**W1**</td> <td> [[-0.00643025 0.01936718] [-0.02410458 0.03978052] [-0.01653973 -0.02096177] [ 0.01046864 -0.05990141]]</td> </tr> <tr> <td>**b1**</td> <td> [[ -1.02420756e-06] [ 1.27373948e-05] [ 8.32996807e-07] [ -3.20136836e-06]]</td> </tr> <tr> <td>**W2**</td> <td> [[-0.01041081 -0.04463285 0.01758031 0.04747113]] </td> </tr> <tr> <td>**b2**</td> <td> [[ 0.00010457]] </td> </tr> </table> ### 4.4 - Integrate parts 4.1, 4.2 and 4.3 in nn_model() #### **Question**: Build your 
neural network model in `nn_model()`. **Instructions**: The neural network model has to use the previous functions in the right order. ``` # GRADED FUNCTION: nn_model def nn_model(X, Y, n_h, num_iterations = 10000, print_cost=False): """ Arguments: X -- dataset of shape (2, number of examples) Y -- labels of shape (1, number of examples) n_h -- size of the hidden layer num_iterations -- Number of iterations in gradient descent loop print_cost -- if True, print the cost every 1000 iterations Returns: parameters -- parameters learnt by the model. They can then be used to predict. """ np.random.seed(3) n_x = layer_sizes(X, Y)[0] n_y = layer_sizes(X, Y)[2] # Initialize parameters ### START CODE HERE ### (≈ 1 line of code) parameters = initialize_parameters(n_x,n_h,n_y) ### END CODE HERE ### # Loop (gradient descent) for i in range(0, num_iterations): ### START CODE HERE ### (≈ 4 lines of code) # Forward propagation. Inputs: "X, parameters". Outputs: "A2, cache". A2, cache = forward_propagation(X,parameters) # Cost function. Inputs: "A2, Y, parameters". Outputs: "cost". cost = compute_cost(A2,Y,parameters) # Backpropagation. Inputs: "parameters, cache, X, Y". Outputs: "grads". grads = backward_propagation(parameters,cache,X,Y) # Gradient descent parameter update. Inputs: "parameters, grads". Outputs: "parameters". 
parameters = update_parameters(parameters,grads) ### END CODE HERE ### # Print the cost every 1000 iterations if print_cost and i % 1000 == 0: print ("Cost after iteration %i: %f" %(i, cost)) return parameters X_assess, Y_assess = nn_model_test_case() parameters = nn_model(X_assess, Y_assess, 4, num_iterations=10000, print_cost=True) print("W1 = " + str(parameters["W1"])) print("b1 = " + str(parameters["b1"])) print("W2 = " + str(parameters["W2"])) print("b2 = " + str(parameters["b2"])) ``` **Expected Output**: <table style="width:90%"> <tr> <td> **cost after iteration 0** </td> <td> 0.692739 </td> </tr> <tr> <td> <center> $\vdots$ </center> </td> <td> <center> $\vdots$ </center> </td> </tr> <tr> <td>**W1**</td> <td> [[-0.65848169 1.21866811] [-0.76204273 1.39377573] [ 0.5792005 -1.10397703] [ 0.76773391 -1.41477129]]</td> </tr> <tr> <td>**b1**</td> <td> [[ 0.287592 ] [ 0.3511264 ] [-0.2431246 ] [-0.35772805]] </td> </tr> <tr> <td>**W2**</td> <td> [[-2.45566237 -3.27042274 2.00784958 3.36773273]] </td> </tr> <tr> <td>**b2**</td> <td> [[ 0.20459656]] </td> </tr> </table> ### 4.5 Predictions **Question**: Use your model to predict by building predict(). Use forward propagation to predict results. **Reminder**: predictions = $y_{prediction} = \mathbb 1 \text{{activation > 0.5}} = \begin{cases} 1 & \text{if}\ activation > 0.5 \\ 0 & \text{otherwise} \end{cases}$ As an example, if you would like to set the entries of a matrix X to 0 and 1 based on a threshold you would do: ```X_new = (X > threshold)``` ``` # GRADED FUNCTION: predict def predict(parameters, X): """ Using the learned parameters, predicts a class for each example in X Arguments: parameters -- python dictionary containing your parameters X -- input data of size (n_x, m) Returns predictions -- vector of predictions of our model (red: 0 / blue: 1) """ # Computes probabilities using forward propagation, and classifies to 0/1 using 0.5 as the threshold. 
### START CODE HERE ### (≈ 2 lines of code) A2, cache = forward_propagation(X,parameters) predictions = A2 > 0.5 ### END CODE HERE ### return predictions parameters, X_assess = predict_test_case() predictions = predict(parameters, X_assess) print("predictions mean = " + str(np.mean(predictions))) ``` **Expected Output**: <table style="width:40%"> <tr> <td>**predictions mean**</td> <td> 0.666666666667 </td> </tr> </table> It is time to run the model and see how it performs on a planar dataset. Run the following code to test your model with a single hidden layer of $n_h$ hidden units. ``` # Build a model with a n_h-dimensional hidden layer parameters = nn_model(X, Y, n_h = 4, num_iterations = 10000, print_cost=True) # Plot the decision boundary plot_decision_boundary(lambda x: predict(parameters, x.T), X, np.squeeze(Y)) plt.title("Decision Boundary for hidden layer size " + str(4)) ``` **Expected Output**: <table style="width:40%"> <tr> <td>**Cost after iteration 9000**</td> <td> 0.218607 </td> </tr> </table> ``` # Print accuracy predictions = predict(parameters, X) print ('Accuracy: %d' % float((np.dot(Y,predictions.T) + np.dot(1-Y,1-predictions.T))/float(Y.size)*100) + '%') ``` **Expected Output**: <table style="width:15%"> <tr> <td>**Accuracy**</td> <td> 90% </td> </tr> </table> Accuracy is really high compared to Logistic Regression. The model has learnt the leaf patterns of the flower! Neural networks are able to learn even highly non-linear decision boundaries, unlike logistic regression. Now, let's try out several hidden layer sizes. ### 4.6 - Tuning hidden layer size (optional/ungraded exercise) ### Run the following code. It may take 1-2 minutes. You will observe different behaviors of the model for various hidden layer sizes. 
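One way to see why larger hidden layers can overfit: the number of trainable parameters grows linearly with n_h. A quick count for this notebook's 2 → n_h → 1 architecture (a sketch assuming n_x = 2 and n_y = 1):

```python
# W1 is (n_h, n_x), b1 is (n_h, 1), W2 is (n_y, n_h), b2 is (n_y, 1).
def n_params(n_x, n_h, n_y):
    return n_h * n_x + n_h + n_y * n_h + n_y

for n_h in [1, 2, 3, 4, 5, 20, 50]:
    print(n_h, n_params(2, n_h, 1))   # e.g. n_h = 4 gives 17 parameters
```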
```
# This may take about 2 minutes to run

plt.figure(figsize=(16, 32))
hidden_layer_sizes = [1, 2, 3, 4, 5, 20, 50]
for i, n_h in enumerate(hidden_layer_sizes):
    plt.subplot(5, 2, i+1)
    plt.title('Hidden Layer of size %d' % n_h)
    parameters = nn_model(X, Y, n_h, num_iterations = 5000)
    plot_decision_boundary(lambda x: predict(parameters, x.T), X, np.squeeze(Y))
    predictions = predict(parameters, X)
    accuracy = float((np.dot(Y,predictions.T) + np.dot(1-Y,1-predictions.T))/float(Y.size)*100)
    print ("Accuracy for {} hidden units: {} %".format(n_h, accuracy))
```

**Interpretation**:
- The larger models (with more hidden units) are able to fit the training set better, until eventually the largest models overfit the data.
- The best hidden layer size seems to be around n_h = 5. Indeed, a value around here seems to fit the data well without incurring noticeable overfitting.
- You will also learn later about regularization, which lets you use very large models (such as n_h = 50) without much overfitting.

**Optional questions**:

**Note**: Remember to submit the assignment by clicking the blue "Submit Assignment" button at the upper-right.

Some optional/ungraded questions that you can explore if you wish:
- What happens when you change the tanh activation for a sigmoid activation or a ReLU activation?
- Play with the learning_rate. What happens?
- What if we change the dataset? (See part 5 below!)

<font color='blue'>
**You've learnt to:**
- Build a complete neural network with a hidden layer
- Make good use of a non-linear unit
- Implement forward propagation and backpropagation, and train a neural network
- See the impact of varying the hidden layer size, including overfitting.
</font>

Nice work!

## 5) Performance on other datasets

If you want, you can rerun the whole notebook (minus the dataset part) for each of the following datasets.
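A note on part 5: `load_extra_datasets()` returns data in the sklearn orientation (one row per example), while this notebook keeps one column per example, which is why the next cell transposes X and reshapes Y. A small sketch of that conversion with made-up shapes:

```python
import numpy as np

# Made-up sklearn-style data: one ROW per example.
X_rows = np.random.randn(200, 2)        # shape (m, n) = (200, 2)
Y_rows = np.random.randint(0, 2, 200)   # shape (m,)

# This notebook's convention: one COLUMN per example.
Xc = X_rows.T                # (2, 200)
Yc = Y_rows.reshape(1, -1)   # (1, 200)
print(Xc.shape, Yc.shape)    # (2, 200) (1, 200)
```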
``` # Datasets noisy_circles, noisy_moons, blobs, gaussian_quantiles, no_structure = load_extra_datasets() datasets = {"noisy_circles": noisy_circles, "noisy_moons": noisy_moons, "blobs": blobs, "gaussian_quantiles": gaussian_quantiles} ### START CODE HERE ### (choose your dataset) dataset = "noisy_moons" ### END CODE HERE ### X, Y = datasets[dataset] X, Y = X.T, Y.reshape(1, Y.shape[0]) # make blobs binary if dataset == "blobs": Y = Y%2 # Visualize the data plt.scatter(X[0, :], X[1, :], c=np.squeeze(Y), s=40, cmap=plt.cm.Spectral); ``` Congrats on finishing this Programming Assignment! Reference: - http://scs.ryerson.ca/~aharley/neural-networks/ - http://cs231n.github.io/neural-networks-case-study/
``` # This Youtube video walks through this notebook from IPython.display import YouTubeVideo YouTubeVideo('cvVl1lQ4agU') ``` # Tests on PDA ``` from jove.SystemImports import * from jove.DotBashers import * from jove.Def_md2mc import * from jove.Def_PDA import * pdaDyck = md2mc('''PDA IF : (, #; (# -> A A : (, (; (( -> A A : ), (; '' -> A A : '',#; # -> IF ''') DOpdaDyck = dotObj_pda(pdaDyck, FuseEdges=True) DOpdaDyck explore_pda("", pdaDyck) explore_pda("()", pdaDyck) explore_pda("()()(())", pdaDyck) explore_pda("()()(()", pdaDyck) pda1 = md2mc('''PDA I : a, b; c -> F ''') DOpda1 = dotObj_pda(pda1, FuseEdges=True) DOpda1 pda2 = md2mc('''PDA I : a, b ; c -> F I : '', ''; d -> A A : '', d ; '' -> F ''') DOpda2 = dotObj_pda(pda2, FuseEdges=True) DOpda2 DOpda2.source explore_pda("a", pda1) explore_pda("a", pda2) pda3 = md2mc('''PDA I : a, b ; c -> F I : '', ''; d -> A A : a, d ; '' -> F ''') DOpda3 = dotObj_pda(pda3, FuseEdges=True) DOpda3 DOpda3.source explore_pda("a", pda3) pda4 = md2mc('''PDA I : a, # ; c -> F I : '', ''; d -> A A : a, d ; '' -> F ''') DOpda4 = dotObj_pda(pda4, FuseEdges=True) DOpda4 explore_pda("a", pda4) pda5 = md2mc('''PDA I : a, # ; c -> F I : '', ''; d -> A A : '', ''; '' -> A A : a, d ; '' -> F ''') DOpda5 = dotObj_pda(pda5, FuseEdges=True) DOpda5 DOpda5.source explore_pda("a", pda5) pda6 = md2mc('''PDA I : a, # ; c -> F I : '', ''; d -> A A : '', ''; z -> A A : '', z ; '' -> B B : '', z ; '' -> C C : '', z ; '' -> C C : '', # ; '' | a, d; '' -> F A : a, d ; '' -> F ''') DOpda6 = dotObj_pda(pda6, FuseEdges=True) DOpda6 DOpda6.source explore_pda("a", pda6) explore_pda("a", pda6, chatty=True) explore_pda("a", pda6, STKMAX = 6, chatty=True) f27sip = md2mc('''PDA !!--------------------------------------------------------------------------- !! This is a PDA From Sipser's book !! This matches a's and b's ignoring c's !! or matches a's and c's, ignoring b's in the middle !! 
thus matching either a^m b^m c^n or a^m b^n c^m !!--------------------------------------------------------------------------- !!--------------------------------------------------------------------------- !! State: in , sin ; spush -> tostates !! comment !!--------------------------------------------------------------------------- iq1 : '' , '' ; $ -> q2 !! start in init state by pushing a $ q2 : a , '' ; a -> q2 !! stack a's q2 : '' , '' ; '' -> q3,q5 !! split non-det for a^m b^m c^n (q3) !! or a^m b^n c^m (q5) q3 : b , a ; '' -> q3 !! match b's against a's q3 : '' , $ ; '' -> fq4 !! hope for acceptance when $ surfaces fq4 : c , '' ; '' -> fq4 !! be happy so long as c's come !! will choke and reject if anything !! other than c's come q5 : b , '' ; '' -> q5 !! here, we are going to punt over b's q5 : '' , '' ; '' -> q6 !! and non-det decide to honor c's matching !! against a's q6 : c , a ; '' -> q6 !! OK to match so long as c's keep coming q6 : '' , $ ; '' -> fq7 !! when $ surfaces, be ready to accept in !! state fq7. However, anything else coming in !! now will foil match and cause rejection. !!--------------------------------------------------------------------------- !! You may use the line below as an empty shell to populate for your purposes !! Also serves as a syntax reminder for entering PDAs. !! !! State : i1 , si1 ; sp1 | i2 , si2 ; sp2 -> tos1, tos2 !! comment !! !! .. : .. , .. ; .. | .. , .. ; .. -> .. , .. !! .. !!--------------------------------------------------------------------------- !!--------------------------------------------------------------------------- !! !! Good commenting and software-engineering methods, good clean indentation, !! grouping of similar states, columnar alignment, etc etc. are HUGELY !! important in any programming endeavor -- especially while programming !! automata. Otherwise, you can easily make a mistake in your automaton !! code. Besides, you cannot rely upon others to find your mistakes, as !! 
they will find your automaton code impossible to read! !! !!--------------------------------------------------------------------------- ''') Dof27sip = dotObj_pda(f27sip, FuseEdges=True) Dof27sip explore_pda("aaabbbccc", f27sip) # Parsing an arithmetic expression pdaEamb = md2mc('''PDA !!E -> E * E | E + E | ~E | ( E ) | 2 | 3 I : '', # ; E# -> M M : '', E ; ~E -> M M : '', E ; E+E -> M M : '', E ; E*E -> M M : '', E ; (E) -> M M : '', E ; 2 -> M M : '', E ; 3 -> M M : ~, ~ ; '' -> M M : 2, 2 ; '' -> M M : 3, 3 ; '' -> M M : (, ( ; '' -> M M : ), ) ; '' -> M M : +, + ; '' -> M M : *, * ; '' -> M M : '', # ; # -> F ''' ) DOpdaEamb = dotObj_pda(pdaEamb, FuseEdges=True) DOpdaEamb explore_pda("3+2*3+2*3", pdaEamb, STKMAX=8) # Parsing an arithmetic expression pdaE = md2mc('''PDA !!E -> E+T | T !!T -> T*F | F !!F -> 2 | 3 | ~F | (E) I : '', # ; E# -> M M : '', E ; E+T -> M M : '', E ; T -> M M : '', T ; T*F -> M M : '', T ; F -> M M : '', F ; 2 -> M M : '', F ; 3 -> M M : '', F ; ~F -> M M : '', F ; (E) -> M M : ~, ~ ; '' -> M M : 2, 2 ; '' -> M M : 3, 3 ; '' -> M M : (, ( ; '' -> M M : ), ) ; '' -> M M : +, + ; '' -> M M : *, * ; '' -> M M : '', # ; # -> F ''' ) DOpdaE = dotObj_pda(pdaE, FuseEdges=True) DOpdaE explore_pda("3+3+2+2*3+2*3", pdaE, STKMAX=5) explore_pda("3*2*~3+~~3*~3", pdaEamb, STKMAX=5) explore_pda("3*~2*3+~2*3+3+~2+3*~2", pdaE, STKMAX=13) ```
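The pdaE machine above parses with the unambiguous grammar but does not evaluate. As a cross-check, a recursive-descent evaluator for the same grammar can be sketched in plain Python. Treating ~ as unary minus is an assumption (the PDA only checks syntax), and the left recursion is rewritten as iteration:

```python
# Recursive-descent evaluator for:
#   E -> E+T | T ;  T -> T*F | F ;  F -> 2 | 3 | ~F | (E)
def evaluate(s):
    pos = 0

    def peek():
        return s[pos] if pos < len(s) else None

    def eat(c):
        nonlocal pos
        assert peek() == c, f"expected {c!r} at position {pos}"
        pos += 1

    def E():                       # E -> T ('+' T)*
        v = T()
        while peek() == '+':
            eat('+')
            v += T()
        return v

    def T():                       # T -> F ('*' F)*
        v = F()
        while peek() == '*':
            eat('*')
            v *= F()
        return v

    def F():                       # F -> '~' F | '(' E ')' | '2' | '3'
        if peek() == '~':
            eat('~')
            return -F()
        if peek() == '(':
            eat('(')
            v = E()
            eat(')')
            return v
        v = int(peek())
        eat(peek())
        return v

    result = E()
    assert pos == len(s)           # whole input consumed, like the PDA draining to #
    return result

print(evaluate("3+2*3"))   # 9: * binds tighter than +, as the layered grammar encodes
print(evaluate("3*~2"))    # -6
```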
# Day 16-17: Urban/Rural - Land I need to catch up... This won't be the most artistic day, but I would find it useful in my life to have code that downloads GHSL datasets and plots them. ## Configuration ``` import os import rioxarray import matplotlib.pyplot as plt import matplotlib.colors as colors %matplotlib inline %config InlineBackend.figure_format = 'svg' ``` ## GHSL-POP Population dataset from [Global Human Settlement Layer, GHSL](https://ghsl.jrc.ec.europa.eu/) ``` # Choose tile of interest tile = "28_9" # Arrange URL url = ("zip+https://cidportal.jrc.ec.europa.eu/"\ "ftp/jrc-opendata/GHSL/"\ "GHS_POP_MT_GLOBE_R2019A/"\ "GHS_POP_E2015_GLOBE_R2019A_54009_1K/"\ "V1-0/tiles/"\ f"GHS_POP_E2015_GLOBE_R2019A_54009_1K_V1_0_{tile}.zip"\ f"!GHS_POP_E2015_GLOBE_R2019A_54009_1K_V1_0_{tile}.tif") # Read data pop = rioxarray.open_rasterio(url, masked=True) # Preview pop # Construct plot fig, ax = plt.subplots(figsize=(8,6)) im = pop.squeeze().plot.imshow( ax=ax, vmin=0, vmax=1000, cmap='inferno', cbar_kwargs={"label": "Population"} ) ax.axis('off') ax.set(title="GHSL-POP") # Save out_file = f"16-17_POP.png" out_path = os.path.join("..", "contributions", out_file) fig.savefig(out_path, dpi=300, facecolor="w", bbox_inches="tight") # Preview plt.show() ``` ## GHSL-SMOD Settlement dataset from [Global Human Settlement Layer, GHSL](https://ghsl.jrc.ec.europa.eu/) ``` # Arrange URL url = ("zip+https://cidportal.jrc.ec.europa.eu/"\ "ftp/jrc-opendata/GHSL/"\ "GHS_SMOD_POP_GLOBE_R2019A/"\ "GHS_SMOD_POP2015_GLOBE_R2019A_54009_1K/"\ "V2-0/tiles/"\ f"GHS_SMOD_POP2015_GLOBE_R2019A_54009_1K_V2_0_{tile}.zip"\ f"!GHS_SMOD_POP2015_GLOBE_R2019A_54009_1K_V2_0_{tile}.tif") # Read data smod = rioxarray.open_rasterio(url, masked=True) # Preview smod # Set colors cmap_discrete = colors.ListedColormap(["#ffffff","#7ab6f5","#cdf57a","#abcd66","#375623","#ffff00","#a87000","#732600","#ff0000"]) cmap_labels = ["n/a","Water","Very low density rural","Low density rural","Rural","Suburban or 
peri-urban","Semi-dense urban cluster","Dense urban cluster","Urban centre"] cmap_vals = [-200,10,11,12,13,21,22,23,30.1] # Construct map fig, ax = plt.subplots(figsize=(8,6)) im = smod.squeeze().plot.imshow(ax=ax, cmap=cmap_discrete, levels=cmap_vals, add_colorbar=False, ) cbar = ax.figure.colorbar(im, ax=ax, ticks=cmap_vals) cbar.set_ticklabels(cmap_labels) ax.set(title="GHSL-SMOD") ax.axis('off') # Save out_file = f"16-17_SMOD.png" out_path = os.path.join("..", "contributions", out_file) fig.savefig(out_path, dpi=300, facecolor="w", bbox_inches="tight") # Preview plt.show() ```
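The SMOD map above relies on a discrete colormap whose class boundaries line up with labelled colorbar ticks. That pattern can be isolated into a minimal, self-contained matplotlib sketch; the three classes and boundary values here are invented for illustration and are not taken from the GHSL product:

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # render off-screen; no display needed
import matplotlib.colors as colors
import matplotlib.pyplot as plt

# Hypothetical 3-class raster standing in for a settlement-class tile
data = np.array([[10, 13], [30, 13]])

cmap = colors.ListedColormap(["#7ab6f5", "#abcd66", "#ff0000"])
# BoundaryNorm maps each value interval onto one colormap entry
norm = colors.BoundaryNorm([5, 11.5, 25, 35], cmap.N)

fig, ax = plt.subplots()
im = ax.imshow(data, cmap=cmap, norm=norm)
cbar = fig.colorbar(im, ticks=[10, 13, 30])
cbar.set_ticklabels(["Water", "Rural", "Urban centre"])
ax.set_axis_off()
```

`BoundaryNorm` is an alternative to passing `levels=` to xarray's `plot.imshow`; both pin each class value to one fixed colour.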
```
import arff
import numpy as np
import pandas as pd
import os
import subprocess
import matplotlib.pyplot as plt
import re
from sklearn.metrics import accuracy_score
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

df1 = pd.read_csv('/home/veerlosar/Desktop/df.csv', encoding='utf-8') # Unnamed column csv
df1.head()

print(list(df1).index('name'))
print(list(df1).index('frameTime'))
print(list(df1).index('F0_sma_min'))
print(list(df1).index('F0env_sma_minPos'))
print(list(df1).index('class'))

y = df1['class']
df = df1.drop(['class', 'Unnamed: 0', 'name', 'frameTime', 'F0_sma_min', 'F0env_sma_minPos'], axis=1)
df.head()

'''
from scipy.stats import zscore
df = df.astype('float64').apply(zscore)
df.head()
'''

df1.drop(['Unnamed: 0', 'name', 'frameTime', 'F0_sma_min', 'F0env_sma_minPos'], axis=1, inplace=True)
df1.head()
df.describe()

from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(df, y, test_size=0.18, random_state=42)
X_test.isnull().sum().sum()

# feature selection
from sklearn.feature_selection import SelectKBest, chi2, f_classif
selector = SelectKBest(score_func=f_classif, k=10).fit(X_train, y_train)
ranking = np.argsort(selector.scores_)[::-1]
print('Top-10 features according to SelectKBest, f_classif: ')
print()
print('{}'.format(df.columns[ranking][0:10]))

# models
from sklearn.ensemble import RandomForestClassifier
clf = RandomForestClassifier(n_estimators=250, max_depth=10, random_state=0)
clf.fit(X_train, y_train)
y_pred_clf_test = clf.predict(X_test)
y_pred_clf_train = clf.predict(X_train)
print(accuracy_score(y_test, y_pred_clf_test)) # no pca for random forest

from sklearn import svm
svm_ = svm.SVC(kernel='linear')
svm_

# track train accuracy against the number of PCA components
t = dict()
for i in range(10, 986, 5):
    std_pca_svm_ = make_pipeline(StandardScaler(), PCA(n_components=i), svm_)
    std_pca_svm_.fit(X_train, y_train)
    t[i] = accuracy_score(y_train, std_pca_svm_.predict(X_train))

svm_pipeline = make_pipeline(StandardScaler(), PCA(n_components=230), svm_)
svm_pipeline.fit(X_train, y_train)
print(accuracy_score(y_test, svm_pipeline.predict(X_test)))

plt.figure(figsize=(12,6))
plt.plot(list(t.keys()), list(t.values()), color='red')
plt.show()

df.head()

svm_.fit(X_train, y_train)
y_pred_svm_test = svm_.predict(X_test)
y_pred_svm_train = svm_.predict(X_train)

from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import AdaBoostClassifier
dt = DecisionTreeClassifier(criterion='gini', max_depth=10)
dt.fit(X_train, y_train)
y_pred_dt_test = dt.predict(X_test)
y_pred_dt_train = dt.predict(X_train)

'''
too slow
ada_dt = AdaBoostClassifier(DecisionTreeClassifier(max_depth=10), n_estimators=250, learning_rate=1.0)
ada_dt.fit(X_train, y_train)
y_pred_ada_dt_test = ada_dt.predict(X_test)
y_pred_ada_dt_train = ada_dt.predict(X_train)
'''

from sklearn.svm import LinearSVC
lin_svm = LinearSVC(loss='squared_hinge', penalty='l1', dual=False)
lin_svm.fit(X_train, y_train)
y_pred_lin_svm_test = lin_svm.predict(X_test)
y_pred_lin_svm_train = lin_svm.predict(X_train)

# accuracy without scaling and pca
print('Random Forests: ', accuracy_score(y_test, y_pred_clf_test), accuracy_score(y_train, y_pred_clf_train))
print('Kernel SVM: ', accuracy_score(y_test, y_pred_svm_test), accuracy_score(y_train, y_pred_svm_train), svm_.score(X_test, y_test))
print('Decision Tree: ', accuracy_score(y_test, y_pred_dt_test), accuracy_score(y_train, y_pred_dt_train))
#print('Decision Tree with AdaBoost: ', accuracy_score(y_test, y_pred_ada_dt_test), accuracy_score(y_train, y_pred_ada_dt_train))
print('Linear SVM: ', accuracy_score(y_test, y_pred_lin_svm_test), accuracy_score(y_train, y_pred_lin_svm_train))

importances = clf.feature_importances_
print(list(importances).index(max(importances)), max(importances))
indices = np.argsort(importances)[::-1]
top_10_features = df.columns[indices][0:10]
print(top_10_features) plt.figure(figsize=(25, 10)) plt.bar(indices, importances[indices]) plt.show() def predict_new(filename, model): arff_file = arff.load(open('{}.arff'.format(filename[:-4]), 'r')) df = pd.DataFrame(np.array(arff_file['data'])) df.drop([0, 1, 459, 481, 990], axis=1, inplace=True) print(model.predict(df)) return df from sklearn.metrics import confusion_matrix preds = [y_pred_lin_svm_test, y_pred_dt_test, svm_pipeline.predict(X_test), y_pred_clf_test] import itertools def plot_confusion_matrix(cm, classes, normalize=False, title='Confusion matrix', cmap=plt.cm.Blues): if normalize: cm = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis] print("Normalized confusion matrix") else: print('Confusion matrix, without normalization') print(cm) plt.imshow(cm, interpolation='nearest', cmap=cmap) plt.title(title) plt.colorbar() tick_marks = np.arange(len(classes)) plt.xticks(tick_marks, classes, rotation=45) plt.yticks(tick_marks, classes) fmt = '.2f' if normalize else 'd' thresh = cm.max() / 2. 
    for i, j in itertools.product(range(cm.shape[0]), range(cm.shape[1])):
        plt.text(j, i, format(cm[i, j], fmt),
                 horizontalalignment="center",
                 color="white" if cm[i, j] > thresh else "black")
    plt.ylabel('True label')
    plt.xlabel('Predicted label')
    plt.tight_layout()

class_names = ['angry', 'fear', 'happy', 'neutral']
for i in preds:
    plt.figure()
    plot_confusion_matrix(confusion_matrix(y_test, i), classes=class_names)
    plt.show()

#x = df1[df1['class'] == 2.].loc[1178:1180].drop(['class'], axis=1) # :D
colors = ['red', 'green', 'yellow', 'blue']
plt.figure(figsize=(12,6))
for i, color in enumerate(colors):
    plt.scatter(df1['F0_sma_iqr2-3'].loc[df1['class']==i], df1['F0env_sma_quartile3'].loc[df1['class']==i], color=color)
plt.show()

top_10_features

df2 = df1.drop(df1.columns[:449], axis=1).drop(df1.columns[501:949], axis=1)
df3 = df1.drop(df1.columns[:449], axis=1).drop(df1.columns[501:], axis=1)
list(df1).index('voiceProb_sma_kurtosis')
df2.head()

X2_train, X2_test, y2_train, y2_test = train_test_split(df2, y, test_size=0.18, random_state=42)
print(X2_train.shape)
print(y2_train.shape)
X2_train.head()

clf.fit(X2_train, y2_train)
print(accuracy_score(y2_test, clf.predict(X2_test)))

X3_train, X3_test, y3_train, y3_test = train_test_split(df3, y, test_size=0.18, random_state=42)
len(X2_train.columns)
df3.columns

svm2_pipeline = make_pipeline(StandardScaler(), PCA(n_components=70), svm_)
svm2_pipeline.fit(X_train, y_train)
svm2_pipeline.predict(X_test)
print(accuracy_score(y_test, svm2_pipeline.predict(X_test)))

clf_pipeline = make_pipeline(PCA(n_components=50), clf)
clf_pipeline.fit(X_train, y_train)
print(accuracy_score(y_test, clf_pipeline.predict(X_test)))

predict_new('/home/veerlosar/Downloads/ERHS/neutral_maria2.wav', clf_pipeline)
predict_new('/home/veerlosar/Downloads/ERHS/neutral_lalit.wav', clf_pipeline)
predict_new('/home/veerlosar/Downloads/ERHS/happy_maria.wav', clf_pipeline)
predict_new('/home/veerlosar/Downloads/ERHS/happy_lalit.wav', clf_pipeline) acc = dict() cs = np.arange(0.25, 2.0, 0.10) for c in cs: for n in range(2, 50, 4): svm4_pipeline = make_pipeline(StandardScaler(), PCA(n_components=n), svm.SVC(C=c, kernel='poly', degree=2)) svm4_pipeline.fit(X3_train, y3_train) acc[(c, n)] = (accuracy_score(y3_test, svm4_pipeline.predict(X3_test))) svm4_pipeline = make_pipeline(StandardScaler(), PCA(n_components=50), svm.SVC(C=1.0, kernel='poly', degree=2)) svm4_pipeline.fit(X_train, y_train) print(accuracy_score(y_test, svm4_pipeline.predict(X_test))) predict_new('/home/veerlosar/Downloads/ERHS/angry_show.wav', svm4_pipeline) predict_new('/home/veerlosar/Downloads/ERHS/neutral_maria2.wav', svm4_pipeline) predict_new('/home/veerlosar/Downloads/ERHS/neutral_lalit.wav', svm4_pipeline) predict_new('/home/veerlosar/Downloads/ERHS/happy_lalit.wav', svm4_pipeline) predict_new('/home/veerlosar/Downloads/ERHS/angry_show.wav', svm_pipeline) predict_new('/home/veerlosar/Downloads/ERHS/neutral_maria2.wav', svm_pipeline) predict_new('/home/veerlosar/Downloads/ERHS/neutral_lalit.wav', svm_pipeline) predict_new('/home/veerlosar/Downloads/ERHS/happy_maria.wav', svm_pipeline) predict_new('/home/veerlosar/Downloads/ERHS/happy_lalit.wav', svm_pipeline) svmlin_pipeline = make_pipeline(StandardScaler(), PCA(n_components=50), lin_svm) svmlin_pipeline.fit(X_train, y_train) print(accuracy_score(y_test, svmlin_pipeline.predict(X_test))) predict_new('/home/veerlosar/Downloads/ERHS/angry_show.wav', svm2_pipeline) predict_new('/home/veerlosar/Downloads/ERHS/neutral_maria2.wav', svm2_pipeline) predict_new('/home/veerlosar/Downloads/ERHS/neutral_lalit.wav', svm2_pipeline) predict_new('/home/veerlosar/Downloads/ERHS/happy_maria.wav', svm2_pipeline) predict_new('/home/veerlosar/Downloads/ERHS/happy_lalit.wav', svm2_pipeline) ```
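The scaler, then PCA, then SVM pattern used repeatedly above can be distilled into a small self-contained sketch on synthetic data. The dataset and hyperparameters here are illustrative stand-ins for the real feature matrix, not the actual experiment:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic 4-class problem standing in for the emotion feature matrix
X, y = make_classification(n_samples=400, n_features=60, n_informative=12,
                           n_classes=4, random_state=42)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=42)

# Scaling before PCA keeps high-variance features from dominating the components
pipe = make_pipeline(StandardScaler(), PCA(n_components=15), SVC(kernel='linear'))
pipe.fit(X_tr, y_tr)
acc = accuracy_score(y_te, pipe.predict(X_te))
```

Bundling the scaler and PCA into the pipeline also guarantees they are fit only on the training split, avoiding the leakage that fitting them on the full matrix would cause.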
# Parsing San Jose PD's firearm search reports This example uses `pdfplumber`'s visual debugging and text-extraction features to parse a fixed-width table embedded in a PDF. Thanks to [Ron Campbell](https://twitter.com/campbellronaldw) for the sample PDF. ``` import pdfplumber import re print(pdfplumber.__version__) ``` ## Load the PDF ``` pdf = pdfplumber.open("../pdfs/san-jose-pd-firearm-sample.pdf") ``` ## Examine the first page ``` p0 = pdf.pages[0] im = p0.to_image() im ``` ## See where the characters are on the page Below, we draw rectangles around each of the `char` objects that `pdfplumber` detected. By doing so, we can see that every line of the main part of the report is the same width, and that there are space (`" "`) characters padding out each field. That means we can parse those lines a lot like we'd parse a standard fixed-width data file. ``` im.reset().draw_rects(p0.chars) ``` ## Extract the text from the PDF Using the `Page.extract_text(...)` method, we grab every character on the page, line by line: ``` text = p0.extract_text() print(text) ``` ## Stripping away the header and footer In this step, we use a regular expression to focus on the core part of the page — the table. ``` core_pat = re.compile(r"LOCATION[\-\s]+(.*)\n\s+Flags = e", re.DOTALL) core = re.search(core_pat, text).group(1) print(core) ``` ## Parse each group of two lines In the report, each firearm takes up two lines. 
The code below splits the core table into two-line groups, and then parses out the fields, based on the number of characters in each field: ``` lines = core.split("\n") line_groups = list(zip(lines[::2], lines[1::2])) print(line_groups[0]) def parse_row(first_line, second_line): return { "type": first_line[:20].strip(), "item": first_line[21:41].strip(), "make": first_line[44:89].strip(), "model": first_line[90:105].strip(), "calibre": first_line[106:111].strip(), "status": first_line[112:120].strip(), "flags": first_line[124:129].strip(), "serial_number": second_line[0:13].strip(), "report_tag_number": second_line[21:41].strip(), "case_file_number": second_line[44:64].strip(), "storage_location": second_line[68:91].strip(), } parsed = [ parse_row(first_line, second_line) for first_line, second_line in line_groups ] ``` ## Result Below, you can see the parsed data for the first two firearms in the report: ``` parsed[:2] ``` ## Preview via `pandas.DataFrame` To make it a little easier to read, here's the full table, parsed and represented as a `pandas.DataFrame` (for ease of viewing): ``` import pandas as pd columns = list(parsed[0].keys()) pd.DataFrame(parsed)[columns] ``` --- --- ---
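The same slice-and-strip approach works on any fixed-width text, not just PDF extractions. Here is a toy sketch; the record layout and column offsets are invented for illustration and are unrelated to the real report:

```python
# Two-line toy record laid out with fixed column widths
first_line = "PISTOL".ljust(20) + "GLOCK 19".ljust(20) + "9MM".ljust(5)
second_line = "AB123456".ljust(13) + "TAG-0042".ljust(13)

def parse_toy_row(first, second):
    # Slice by column offset, then strip the space padding
    return {
        "type": first[0:20].strip(),
        "make": first[20:40].strip(),
        "calibre": first[40:45].strip(),
        "serial_number": second[0:13].strip(),
        "tag": second[13:26].strip(),
    }

row = parse_toy_row(first_line, second_line)
```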
## OOP

A programming paradigm that provides a means of structuring programs so that properties and behaviors are bundled into individual objects.

Pros:
* code modularisation, and thus ease in troubleshooting.
* reuse of code through inheritance.
* flexibility through polymorphism (multiple usage).

### 1. Class Definition

> Classes define functions called methods, which identify the behaviors and actions that an object created from the class can perform with its data.

```
# function definition
def fix_laptop(harddisk, money):
    # check if laptop is ok
    if 'not' in harddisk:
        print('Laptop has a problem')
    else:
        print('Laptop is OK')
    # fix it if it has an issue
    if money == 'yes':
        return 'DJ can get his laptop fixed'
    else:
        return 'DJ is fucked'

# class definition
class Kisauni():
    # class attributes
    security = 'mateja everywhere'
    ethnicity = 'waswahili'

    # instance attributes // dunder methods
    def __init__(self, mtaa, drainage_system, housing_style):
        self.mtaa = mtaa
        self.drainage_system = drainage_system
        self.housing_style = housing_style

    def __str__(self):
        return 'A class indicating conditions in Kisauni'

    # instance methods (customised functions)
    def students(self, status, name, age, campus):
        if 'yes' in status.lower():
            return f'{name} is a {age} year old at The {campus}'
        else:
            return f'{name} is a {age} year old non student'

    def relationships(self, status, name, sex):
        if 'YES' in status.upper():
            return f'{name} is a {sex}'
        else:
            return f'{name} is a bi'

    def rehabilitations(self, status, name, age):
        if 'yes' in status.lower():
            return f'{name} is a {age} year old who must go to rehab.'
        else:
            return f'{name} is a {age} year old; no rehab.'
# inheritance ( - overriding; - extending)
class Birds():
    def flight(self):
        return 'Almost all birds can fly'
    def edibility(self):
        return 'almost all birds are edible'

class Chicken(Birds):
    def flight(self):
        print('chicken cannot fly')
    def food(self):
        return 'chicken feed on mash'

class Student:
    # class attributes (uniform for all class objects)
    campus = 'Technical University of Munich'

    ## dunder methods
    '''
    universal properties (instance attributes - not necessarily uniform for all objects)
    - arguments must be supplied when calling the class
    '''
    def __init__(self, name, age, level, academic_year):
        self.name = name
        self.age = age
        self.level = level
        self.academic_year = academic_year

    ''' Class descriptor'''
    def __str__(self):
        return f" This is a Student class with methods: course, year and location."

    ## Instance Methods
    '''- begin with a self, and can only be called from an instance of the class '''
    # course
    def course(self, course_name):
        return f"{self.name} is pursuing a {self.level} in {course_name} at the {self.campus}"

    # year
    def year(self, year, gender):
        if 'f' in gender.lower():
            return f" She is a {self.age} year old currently in her {year} year."
        else:
            return f" He is a {self.age} year old currently in his {year} year."

    # location
    def location(self, location):
        return f" Residing in {location}"

    # race
    def race(self, race):
        pass

ola = Student('ola', 25, 'PhD', 3)
ola.course('Machine Learning')

# creating a class object/instantiating the class
student = Student('Ada', 21, 'B.Sc', 4)
print('Object type/description:', student)
student.course('Mathematics and Computer Science'), student.year(4, 'female'), student.location('Kisauni')
```

### 2. Inheritance

> A class takes on the attributes/methods of another. Newly formed classes are called child classes, and the classes that child classes are derived from are called parent classes.

> **extending** - having additional attributes.

> **overriding** - overwriting inherited attributes.
```
'''
Using the example of class Student, every time we pass a new student, we have
to pass the gender argument, which determines the way the year function is
returned. We can create child classes of different genders that inherit
attributes of the Student class and override the year method
'''
class Female(Student):
    # overriding
    def year(self, year, gender='female'):
        return f" She is a {self.age} year old currently in her {year} year."

    # extending
    def under_18(self):
        if self.age < 18:
            return True
        else:
            return False

class Male(Student):
    # overriding
    def year(self, year, gender='male'):
        return f" He is a {self.age} year old currently in his {year} year."

    # extending
    def under_18(self):
        if self.age < 18:
            return True
        else:
            return False

ada = Female('Ada', 21, 'B.Sc', 4)
ada.year(4)

f = Female('Denise', 17, 'B.Sc', 4)
f.course('Mathematics and Computer Science'), f.year(4), f.location('Berlin'), f.under_18()

m = Male('Denis', 20, 'B.Sc', 4)
m.course('Mathematics and Finance'), m.year(3), m.location('Munich'), m.under_18()
```

### 3. Polymorphism

> same function name (but different signatures) being used for different types.

```
print('ada is a lady')
print(456)
print([5,6])

''' Polymorphism with uniform class methods'''
class Kenya():
    def capital(self):
        print("Nairobi is the capital of Kenya.")
    def president(self):
        print("Kenyatta is the president of Kenya.")

class USA():
    def capital(self):
        print("Washington D.C. is the capital of USA.")
    def president(self):
        print("Biden is their newly elected president.")

k = Kenya()
u = USA()
for country in [k, u]:
    country.capital()
    country.president()

'''Polymorphism with a function and object'''
# in the previous example. Instead of looping: creating a function.
def func(obj):
    obj.capital()
    obj.president()

k = Kenya()
u = USA()
func(k)
func(u)

'''Polymorphism with inheritance'''
# This is equal to overriding in inheritance.
class Bird:
    def intro(self):
        print("There are many types of birds.")
    def flight(self):
        print("Most of the birds can fly but some cannot.")

class Sparrow(Bird):
    def flight(self):
        print("Sparrows can fly.")
```

## Procedural Programming

> Structures a program like a recipe in that it provides a set of steps, in the form of functions and code blocks, that flow sequentially in order to complete a task.

> Relies on procedure calls to create modularized code. This approach simplifies your application code by breaking it into small pieces that a developer can view easily.

```
## summing elements of a list
def sum_elements(my_list):
    total = 0
    for x in my_list:
        total += x
    return total

print(sum_elements([4,5,6,7]))
```

#### Task:

Create a class Rectangle and define the following methods:
* create_rectangle. Input parameters: x, y, width, height. Return value: instance of Rectangle. Operation: create a new instance of Rectangle
* area_of_rectangle
* perimeter_of_rectangle
* product_of_the_diagonals

Create a class Square that inherits from class Rectangle, with an additional function of:
* angle_between_diagonals
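One possible sketch of the task above. Treating "product of the diagonals" as the product of the two (equal) diagonal lengths is an interpretation, since the prompt does not define it:

```python
import math

class Rectangle:
    def __init__(self, x, y, width, height):
        self.x, self.y = x, y
        self.width, self.height = width, height

    @classmethod
    def create_rectangle(cls, x, y, width, height):
        # factory method returning an instance, as the task specifies
        return cls(x, y, width, height)

    def area_of_rectangle(self):
        return self.width * self.height

    def perimeter_of_rectangle(self):
        return 2 * (self.width + self.height)

    def product_of_the_diagonals(self):
        # both diagonals have length sqrt(w**2 + h**2), so their product is w**2 + h**2
        return self.width ** 2 + self.height ** 2

class Square(Rectangle):
    def __init__(self, x, y, side):
        super().__init__(x, y, side, side)

    def angle_between_diagonals(self):
        # the diagonals of a square always intersect at right angles
        return 90.0

r = Rectangle.create_rectangle(0, 0, 3, 4)
s = Square(0, 0, 2)
```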
#### Copyright 2020 Google LLC. ``` # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. ``` # Regression Quality So far in this course, we have spent some time building and testing regression models. But how can we measure how good these models are? In this Colab, we will examine a few of the ways that we can measure and graph the results of a regression model in order to better understand the quality of the model. ## Building a Dataset In order to discuss regression quality, we should probably start by building a regression model. We will start by creating an artificial dataset to model. Start by importing [NumPy](http://numpy.org) and setting a random seed so that we get reproducible results. Remember: **Do not set a random seed in production code!** ``` import numpy as np np.random.seed(0xFACADE) ``` Recall that linear regression is about trying to fit a straight line through a set of data points. The equation for a straight line is: > $y = m*x + b$ where: - $x$ is the feature - $y$ is the outcome - $m$ is the slope of the line - $b$ is the intercept of the line on the $y$-axis But at this point we don't even have $x$-values! We will use NumPy's [random.uniform](https://docs.scipy.org/doc/numpy-1.15.0/reference/generated/numpy.random.uniform.html) function to generate 50 random numbers between 0 and 200 as $x$-values. 
``` X = np.random.uniform(low=0, high=200, size=50) print(f'min: {np.min(X)}') print(f'max: {np.max(X)}') print(f'mean: {np.mean(X)}') print(f'count: {np.size(X)}') ``` You should see a: * minimum value near, but not below 0 * maximum value near, but not above 200 * mean value somewhere near 100 * count value of exactly 50 Let's visualize the $x$-values, just to get some idea of the distribution of the values in the range of 0-200. *How do we plot a one-dimensional list of values in two-dimensional space?* We can plot it against itself! ``` import matplotlib.pyplot as plt plt.plot(X, X, 'g.') plt.show() ``` As you can see, we have a straight line of $x$-values that span from roughly 0 to 200. Let's now create some $y$-values via the equation $y=4x+10$ (i.e. the slope is 4 and the intercept is 10). We'll call the new variable `Y_PRED` since it is the predicted variable. ``` SLOPE = 4 INTERCEPT = 10 Y_PRED = (SLOPE * X) + INTERCEPT plt.plot(X, Y_PRED, 'b.') plt.plot(X, Y_PRED, 'r-') plt.show() ``` This regression line fits amazingly well! If only we could have this perfect of a fit in the real world. Unfortunately, this is almost never the case. There is always some variability. Let's add some randomness into our $y$-values to get a more realistic dataset. We will keep our original $y$-values in order to remember our base regression line. We'll recreate our original $y$-values and store them in `Y_PRED`. Then, we'll create `Y` with the same equation but with added randomness. ``` Y_PRED = (SLOPE * X) + INTERCEPT randomness = np.random.uniform(low=-200, high=200, size=50) Y = SLOPE * X + randomness + INTERCEPT plt.plot(X, Y, 'b.') plt.plot(X, Y_PRED, 'r-') plt.show() ``` We now have the line that was used to generate the data plotted in red, and the randomly displaced data points in blue. The dots, though definitely not close to the line, at least follow the linear trend in general. This seems like a reasonable dataset for a linear regression. 
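As a quick sanity check that a least-squares fit can recover the generating line from data like this, here is a self-contained sketch. It regenerates similar synthetic data rather than reusing `X` and `Y`, so the exact numbers differ:

```python
import numpy as np
np.random.seed(0)

slope, intercept = 4, 10
x = np.random.uniform(low=0, high=200, size=500)
y = slope * x + intercept + np.random.uniform(low=-200, high=200, size=500)

# np.polyfit with deg=1 returns the least-squares [slope, intercept]
m_hat, b_hat = np.polyfit(x, y, 1)
```

With noise spanning plus or minus 200, the slope estimate lands close to 4, while the intercept is recovered more loosely.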
Let's remind ourselves of the key variables in the model: * `X`: the $x$-values that we used to "train" the model * `Y`: the $y$-values that represent the actual values that correlate to `X` * `Y_PRED`: the $y$-values that the model would predict for each $x$-value ## Coefficient of Determination The **coefficient of determination**, denoted $R^2$, is one of the most important metrics in regression. It tells us how much of the data is "explained" by the model. Before we can define the metric itself, we need to define a few other key terms. A **residual** is the difference between the target value $y_i$ and the predicted value $\hat{y_i}$. The **residual sum of squares** is the summation of the square of every residual in the prediction set. > $$ SS_{res} = \sum_{i}(y_i - \hat{y_i})^2$$ ``` ss_res = ((Y - Y_PRED) ** 2).sum(axis=0, dtype=np.float64) print(ss_res) ``` The **total sum of squares** is the sum of the squares of the difference between each value $y_i$ and their mean. > $$\bar{y} = \frac{1}{n}\sum_{i=1}^{n}y_{i}$$ > $$SS_{tot} = \sum_{i}(y_{i}-\bar{y})^2$$ ``` y_mean = np.average(Y, axis=0) print(y_mean) ss_tot = ((Y - y_mean)**2).sum(axis=0, dtype=np.float64) print(ss_tot) ``` Given the total sum of squares and the residual sum of squares, we can calculate the coefficient of determination $R^2$. > $$R^{2} = 1 - \frac{SS_{res}}{SS_{tot}}$$ ``` r2 = 1 - (ss_res/ss_tot) print(r2) ``` If you just ran the cells in this Colab from top to bottom you probably got a score of `0.846`. Is this good? Bad? Mediocre? The $R^2$ score measures how well the actual variance from $x$-values to $y$-values is represented in the variance between the $x$-values and the predicted $y$-values. Typically, this score ranges from 0 to 1, where 0 is bad and 1 is a perfect mapping. However, the score can also be negative. Can you guess why? If a line drawn horizontally through the data points performs better than your regression, then the $R^2$ score would be negative. 
If you see this, try again. Your model *really* isn't working. For values in the range 0-1, interpreting the $R^2$ is more subjective. The closer to 0, the worse your model is at fitting the data. And generally, the closer to 1, the better (but you also don't want to overfit). This is where testing, observation, and experience come into play. It turns out that scikit-learn can calculate $R^2$ for us: ``` from sklearn.metrics import r2_score print(r2_score(Y, Y_PRED)) ``` Knowing that we don't have to manually do all of the math again, let's now see the perfect and a very imperfect case of a regression fitting a dataset. To begin with, we'll show a perfect fit. What happens if our predictions and our actual values are identical? ``` print(r2_score(Y, Y)) print(r2_score(Y_PRED, Y_PRED)) ``` 1.0: just what we thought! A perfect fit. Now let's see if we can make a regression so poor that $R^2$ is negative. In this case, we need to make our predicted data look different than our actuals. To do this, we'll negate our predictions and save them into a new variable called `Y_PRED_BAD`. ``` Y_PRED_BAD = -Y_PRED plt.plot(X, Y, 'y.') plt.plot(X, Y_PRED_BAD, 'r-') ``` That prediction line looks horrible! Indeed, a horizontal line would fit this data better. Let's check the $R^2$. ``` print(r2_score(Y, Y_PRED_BAD)) ``` A negative $R^2$ is rare in practice. But if you do ever see one, it means the model has gone quite wrong. ## Predicted vs. Actual Plots We have now seen a quantitative way to measure the goodness-of-fit of our regressions: the coefficient of determination. We know that if we see negative numbers that our model is very broken and if we see numbers approaching 1, the model is decent (or overfitted). But what about the in-between? This is where qualitative observations based on expert opinion needs to come into play. There are numerous ways to visualize regression predictions, but one of the most basic is the "predicted vs. actual" plot. 
To create this plot, we scatter-plot the actual $y$-values used to train our model against the predicted $y$-values generated from the training features. We then draw a line from the lowest prediction to the largest.

```
plt.plot(Y_PRED, Y, 'b.')
plt.plot([Y_PRED.min(), Y_PRED.max()], [Y_PRED.min(), Y_PRED.max()], 'r-')
plt.xlabel('Predicted')
plt.ylabel('Actual')
plt.show()
```

In this case, the data points scatter pretty evenly around the prediction-to-actual line. So what does a bad plot look like? Let's first negate all of our predictions, making them exactly the opposite of what they should be. This creates the exact opposite of a good actual-vs-predicted line.

```
Y_BAD = -Y_PRED
plt.plot(Y_BAD, Y, 'b.')
plt.plot([Y_BAD.min(), Y_BAD.max()], [Y_BAD.min(), Y_BAD.max()], 'r-')
plt.xlabel('Predicted')
plt.ylabel('Actual')
plt.show()
```

In this case we made a very contrived example where the predictions are exact opposites of the actual values. When you see this case, you have a model predicting roughly the opposite of what it should be predicting. Let's look at another case, where we add a large positive bias to every prediction.

```
Y_BAD = Y_PRED + 200
plt.plot(Y_BAD, Y, 'b.')
plt.plot([Y_BAD.min(), Y_BAD.max()], [Y_BAD.min(), Y_BAD.max()], 'r-')
plt.xlabel('Predicted')
plt.ylabel('Actual')
plt.show()
```

Now we have a situation where there is an obvious **bias**. All predictions are higher than the actual values, so the model needs to be adjusted to make smaller predictions. Most cases aren't quite so obvious. In the chart below, you can see that the predictions are okay for low values, but tend to underpredict for larger target values.

```
Y_BAD = Y_PRED - Y_PRED / 4
plt.plot(Y_BAD, Y, 'b.')
plt.plot([Y_BAD.min(), Y_BAD.max()], [Y_BAD.min(), Y_BAD.max()], 'r-')
plt.xlabel('Predicted')
plt.ylabel('Actual')
plt.show()
```

Predicted vs. actual charts are a useful tool for giving you a visual indication of how your model is performing.
While single measures like $R^2$ give you an aggregated metric, charts allow you to see if there is a trend or outlier where your model isn't performing well. If you identify problem areas, you can work on retraining your model. ## Residual Plots Another helpful visualization tool is to plot the regression residuals. As a reminder, residuals are the difference between the actual values and the predicted values. We plot residuals on the $y$-axis against the predicted values on the $x$-axis, and draw a horizontal line through $y=0$. Cases where our predictions were too low are above the line. Cases where our predictions were too high are below the line. ``` RESIDUALS = Y - Y_PRED plt.plot(Y_PRED, RESIDUALS, 'b.') plt.plot([0, Y_PRED.max()], [0, 0], 'r-') plt.xlabel('Predicted') plt.ylabel('Residual') plt.show() ``` In the "Predicted vs. Actual" section above, we plotted a case where there was a large positive bias in our predictions. Plotting the same biased data on a residual plot shows all of the residuals below the zero line. ``` RESIDUALS = Y - (Y_PRED + 200) plt.plot(Y_PRED, RESIDUALS, 'b.') plt.plot([0, Y_PRED.max()], [0, 0], 'r-') plt.xlabel('Predicted') plt.ylabel('Residual') plt.show() ``` The other example in the "Predicted vs. Actual" section reduced our predictions by an amount proportional to the scale of the predictions. Below is the residual plot for that scenario. 
``` RESIDUALS = Y - (Y_PRED - Y_PRED / 4) plt.plot(Y_PRED, RESIDUALS, 'b.') plt.plot([0, Y_PRED.max()], [0, 0], 'r-') plt.xlabel('Predicted') plt.ylabel('Residual') plt.show() ``` ## Resources * [Coefficient of Determination](https://en.wikipedia.org/wiki/Coefficient_of_determination) * [Interpreting Residual Plots](http://docs.statwing.com/interpreting-residual-plots-to-improve-your-regression/#gallery) # Exercises The [Interpreting Residual Plots](http://docs.statwing.com/interpreting-residual-plots-to-improve-your-regression/#gallery) resource gives examples of patterns in different residual plots and what those patterns might mean for your model. Each exercise below contains code that generates an image. Run the code to view the image, and then find the corresponding pattern name in [Interpreting Residual Plots](http://docs.statwing.com/interpreting-residual-plots-to-improve-your-regression/#gallery). Note the name of the pattern in the answer cell, and provide a one or two sentence explanation of what this could signal about your model's predictions. ## Exercise 1 Run the code below to generate an image. Identify the corresponding residual plot pattern, and write a sentence or two about what this could signal about the model. ``` import numpy as np import matplotlib.pyplot as plt np.random.seed(42) x = np.linspace(-10.0, 10.0, 100) y = np.linspace(-10.0, 10.0, 100) f = x**2 + y**2 + np.random.uniform(low=-14, high=14, size=100) plt.plot(x, f, 'b.') plt.plot([x.min(), x.max()], [0, 0], 'r-') plt.xlabel('Predicted') plt.ylabel('Residual') plt.show() ``` ### **Student Solution** *Which [plot pattern](http://docs.statwing.com/interpreting-residual-plots-to-improve-your-regression/#gallery) does this residual plot follow? And what might it mean about the model?* --- ## Exercise 2 Run the code below to generate an image. Identify the corresponding residual plot pattern, and write a sentence or two about what this could signal about the model. 
``` import numpy as np import matplotlib.pyplot as plt np.random.seed(42) x = np.linspace(0.0, 10.0, 100) y = np.concatenate([ np.random.uniform(low=-5, high=5, size=90), np.random.uniform(low=50, high=55, size=10) ]) plt.plot(x, y, 'b.') plt.plot([x.min(), x.max()], [0, 0], 'r-') plt.xlabel('Predicted') plt.ylabel('Residual') plt.show() ``` ### **Student Solution** *Which [plot pattern](http://docs.statwing.com/interpreting-residual-plots-to-improve-your-regression/#gallery) does this residual plot follow? And what might it mean about the model?* --- ## Exercise 3 Run the code below to generate an image. Identify the corresponding residual plot pattern, and write a sentence or two about what this could signal about the model. ``` import numpy as np import matplotlib.pyplot as plt np.random.seed(42) x = np.concatenate([ np.random.uniform(low=0, high=2, size=90), np.random.uniform(low=4, high=10, size=10) ]) y = np.random.uniform(low=-5, high=5, size=100) plt.plot(x, y, 'b.') plt.plot([x.min(), x.max()], [0, 0], 'r-') plt.xlabel('Predicted') plt.ylabel('Residual') plt.show() ``` ### **Student Solution** *Which [plot pattern](http://docs.statwing.com/interpreting-residual-plots-to-improve-your-regression/#gallery) does this residual plot follow? And what might it mean about the model?* ---
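As a final self-contained check of the $R^2$ formulas from earlier in this Colab, the manual computation can be compared against scikit-learn on a tiny hand-made dataset:

```python
import numpy as np
from sklearn.metrics import r2_score

y_true = np.array([1.0, 2.0, 3.0, 4.0])
y_hat = np.array([1.1, 1.9, 3.2, 3.8])

# Manual computation following the SS_res / SS_tot definitions above
ss_res = np.sum((y_true - y_hat) ** 2)
ss_tot = np.sum((y_true - np.mean(y_true)) ** 2)
r2_manual = 1 - ss_res / ss_tot
```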
# openvino2tensorflow

This tutorial explains the use case of openvino2tensorflow while using arachne.

`openvino2tensorflow` is developed in the following GitHub repository: https://github.com/PINTO0309/openvino2tensorflow

When you convert an onnx model to a tensorflow model with `onnx-tf`, the converted model includes many unnecessary transpose layers. This is because onnx uses the NCHW layer format while tensorflow uses NHWC. The many unnecessary transpose layers cause performance degradation in inference. By using openvino2tensorflow, you can avoid the inclusion of unnecessary transpose layers when converting a model to tensorflow.

In this tutorial, we compare two conversion methods and their converted models:

1. PyTorch -> (torch2onnx) -> ONNX -> (onnx-simplifier) -> ONNX -> (onnx-tf) -> Tensorflow -> (tflite_converter) -> TfLite
2. PyTorch -> (torch2onnx) -> ONNX -> (onnx-simplifier) -> ONNX -> (openvino_mo) -> OpenVino -> (openvino2tensorflow) -> Tensorflow -> (tflite_converter) -> TfLite

The developers of openvino2tensorflow provide a detailed article about the advantages of using openvino2tensorflow: [Converting PyTorch, ONNX, Caffe, and OpenVINO (NCHW) models to Tensorflow / TensorflowLite (NHWC) in a snap](https://qiita.com/PINTO/items/ed06e03eb5c007c2e102)

## Create Simple Model

Here we create and save a very simple PyTorch model to be converted.

```
import torch
from torch import nn
import torch.onnx

model = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1),
    nn.Conv2d(16, 16, 3, padding=1),
)
torch.save(model.eval(), "./sample.pth")
```

Save the model input and output information in yaml format for `arachne`.

```
yml = """
inputs:
  - dtype: float32
    name: input
    shape:
      - 1
      - 3
      - 224
      - 224
outputs:
  - dtype: float32
    name: output
    shape:
      - 1
      - 16
      - 224
      - 224
"""
open("sample.yml", "w").write(yml)
```

## Convert using onnx-tf

You can apply multiple tools in sequence with `arachne.pipeline`.
Models are converted in the following order: PyTorch -> (torch2onnx) -> ONNX -> (onnx-simplifier) -> ONNX -> (onnx-tf) -> Tensorflow -> (tflite_converter) -> TfLite ``` !python -m arachne.driver.pipeline \ +pipeline=[torch2onnx,onnx_simplifier,onnx_tf,tflite_converter] \ model_file=./sample.pth \ output_path=./pipeline1.tar \ model_spec_file=./sample.yml ``` Extract tarfile and see network structure of the converted tflite model. You can visualize model structure in netron: `netron ./pipeline1/model_0.tflite`. ``` !mkdir -p pipeline1 && tar xvf pipeline1.tar -C ./pipeline1 import tensorflow as tf def list_layers(model_path): interpreter = tf.lite.Interpreter(model_path) layer_details = interpreter.get_tensor_details() interpreter.allocate_tensors() for layer in layer_details: print("Layer Name: {}".format(layer['name'])) list_layers("./pipeline1/model_0.tflite") ``` We have confirmed that the transpose layer is unexpectedly included. ## Convert using openvino2tensorflow Next, try the second conversion method using openvino2tensorflow. Models are converted in the following order: PyTorch -> (torch2onnx) -> ONNX -> (onnx-simplifier) -> ONNX -> (openvino_mo) -> OpenVino -> (openvino2tensorflow) -> Tensorflow -> (tflite_converter) -> TfLite ``` !python -m arachne.driver.pipeline \ +pipeline=[torch2onnx,onnx_simplifier,openvino_mo,openvino2tf,tflite_converter] \ model_file=./sample.pth \ output_path=./pipeline2.tar \ model_spec_file=./sample.yml ``` Extract tarfile and see network structure of the converted tflite model. You can visualize model structure in netron: `netron ./pipeline2/model_0.tflite`. ``` !mkdir -p pipeline2 && tar xvf pipeline2.tar -C ./pipeline2 list_layers("./pipeline2/model_0.tflite") ``` We have confirmed that the transpose layer is NOT included.
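The layout mismatch behind those transpose layers can be seen directly in numpy (a standalone illustration of the permutation, not part of the arachne pipeline): ONNX tensors use the NCHW layout (batch, channels, height, width), TensorFlow expects NHWC, and each inserted Transpose layer performs exactly this axis permutation at inference time.

```python
import numpy as np

# A tensor in ONNX's NCHW layout: (batch, channels, height, width)
x_nchw = np.arange(2 * 3 * 4 * 4, dtype=np.float32).reshape(2, 3, 4, 4)

# The axis permutation a Transpose layer applies to reach TensorFlow's NHWC layout
x_nhwc = x_nchw.transpose(0, 2, 3, 1)

print(x_nchw.shape, "->", x_nhwc.shape)  # (2, 3, 4, 4) -> (2, 4, 4, 3)
```

openvino2tensorflow rewrites the weights into NHWC once at conversion time, so no such permutation has to run per inference.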
<center> <img src="https://gitlab.com/ibm/skills-network/courses/placeholder101/-/raw/master/labs/module%201/images/IDSNlogo.png" width="300" alt="cognitiveclass.ai logo" /> </center>

# **Hands-on Lab : Web Scraping**

Estimated time needed: **30 to 45** minutes

## Objectives

In this lab you will perform the following:

* Extract information from a given web site
* Write the scraped data into a csv file.

## Extract information from the given web site

You will extract the data from the below web site: <br>

```
#this url contains the data you need to scrape
url = "https://cf-courses-data.s3.us.cloud-object-storage.appdomain.cloud/IBM-DA0321EN-SkillsNetwork/labs/datasets/Programming_Languages.html"
```

The data you need to scrape is the **name of the programming language** and **average annual salary**.<br> It is a good idea to open the url in your web browser and study the contents of the web page before you start to scrape.

Import the required libraries

```
# Your code here
from bs4 import BeautifulSoup
import requests
import pandas as pd
```

Download the webpage at the url

```
#your code goes here
url = "https://cf-courses-data.s3.us.cloud-object-storage.appdomain.cloud/IBM-DA0321EN-SkillsNetwork/labs/datasets/Programming_Languages.html"
```

Create a soup object

```
#your code goes here
data = requests.get(url).text
soup = BeautifulSoup(data, "html5lib")
```

Scrape the `Language name` and `annual average salary`.
```
#your code goes here
table = soup.find('table')

for row in table.find_all('tr'):
    cols = row.find_all('td')
    language_name = cols[1].getText()
    annual_average_salary = cols[3].getText()
    print("{}--->{}".format(language_name, annual_average_salary))
```

Save the scraped data into a file named *popular-languages.csv*

```
# your code goes here
import pandas as pd

table_rows = table.find_all('tr')

# Use the table's first row for the column names, and the remaining rows for the data
headers = [cell.text for cell in table_rows[0].find_all('td')]
l = []
for tr in table_rows[1:]:
    td = tr.find_all('td')
    row = [cell.text for cell in td]
    l.append(row)

df = pd.DataFrame(l, columns=headers)
df

df.to_csv("popular-languages.csv")  # save the data into a csv file

from IPython.display import HTML
import base64

def create_download_link(df, title="Download CSV file", filename="/home/wsuser/work/popular-languages.csv"):
    csv = df.to_csv()
    b64 = base64.b64encode(csv.encode())
    payload = b64.decode()
    html = '<a download="{filename}" href="data:text/csv;base64,{payload}" target="_blank">{title}</a>'
    html = html.format(payload=payload, title=title, filename=filename)
    return HTML(html)

create_download_link(df)  # creates a link to download the csv file from the notebook
```

## Authors

Ramesh Sannareddy

### Other Contributors

Rav Ahuja

## Change Log

| Date (YYYY-MM-DD) | Version | Changed By        | Change Description                 |
| ----------------- | ------- | ----------------- | ---------------------------------- |
| 2020-10-17        | 0.1     | Ramesh Sannareddy | Created initial version of the lab |

Copyright © 2020 IBM Corporation. This notebook and its source code are released under the terms of the [MIT License](https://cognitiveclass.ai/mit-license/?utm_medium=Exinfluencer&utm_source=Exinfluencer&utm_content=000026UJ&utm_term=10006555&utm_id=NA-SkillsNetwork-Channel-SkillsNetworkCoursesIBMDA0321ENSkillsNetwork21426264-2021-01-01).
# Working with Data

Data is the foundation on which machine learning models are built. Managing data centrally in the cloud, and making it accessible to teams of data scientists who are running experiments and training models on multiple workstations and compute targets, is an important part of any professional data science solution.

In this notebook, you'll explore two Azure Machine Learning objects for working with data: datastores and datasets.

## Connect to your workspace

To get started, connect to your workspace.

> **Note**: If you haven't already established an authenticated session with your Azure subscription, you'll be prompted to authenticate by clicking a link, entering an authentication code, and signing in to Azure.

```
import azureml.core
from azureml.core import Workspace

# Load the workspace from the saved config file
ws = Workspace.from_config()
print('Ready to use Azure ML {} to work with {}'.format(azureml.core.VERSION, ws.name))
```

## Work with datastores

In Azure ML, a datastore is a reference to a storage location, such as an Azure Storage blob container. Every workspace has a default datastore, which is usually the Azure Storage blob container that was created with the workspace. If you need to work with data that is stored in a different location, you can add custom datastores to your workspace and set any of them to be the default.

### View datastores

Run the following code to determine the datastores in your workspace:

```
# Get the default datastore
default_ds = ws.get_default_datastore()

# Enumerate all datastores, indicating which is the default
for ds_name in ws.datastores:
    print(ds_name, "- Default =", ds_name == default_ds.name)
```

You can also view and manage the datastores in your workspace on the **Datasets** page for your workspace in [Azure Machine Learning studio](https://ml.azure.com).

### Upload data to a datastore

Now that you have determined the available datastores, you can upload files from your local file system to a datastore so that they will be accessible to experiments running in the workspace, regardless of where the experiment script is actually being run.

```
default_ds.upload_files(files=['./data/diabetes.csv', './data/diabetes2.csv'], # Upload the diabetes csv files in /data
                        target_path='diabetes-data/', # Put it in a folder path in the datastore
                        overwrite=True, # Replace existing files of the same name
                        show_progress=True)
```

## Work with datasets

Azure Machine Learning provides an abstraction for data in the form of datasets. A dataset is a versioned reference to a specific set of data that you may want to use in an experiment. Datasets can be tabular or file-based.

### Create a tabular dataset

Next, create a dataset from the diabetes data you uploaded to the datastore, and view the first 20 records. In this case, the data is in a structured format in CSV files, so we'll use a tabular dataset.

```
from azureml.core import Dataset

# Get the default datastore
default_ds = ws.get_default_datastore()

#Create a tabular dataset from the path on the datastore (this may take a short while)
tab_data_set = Dataset.Tabular.from_delimited_files(path=(default_ds, 'diabetes-data/*.csv'))

# Display the first 20 rows as a Pandas dataframe
tab_data_set.take(20).to_pandas_dataframe()
```

As you can see in the code above, it's easy to convert a tabular dataset to a Pandas dataframe, enabling you to work with the data using common python techniques.

### Create a file dataset

The dataset you created is a tabular dataset that can be read as a dataframe containing all of the data in the structured files that are included in the dataset definition. This works well for tabular data, but in some machine learning scenarios you might need to work with unstructured data; or you may simply want to handle reading the data from files in your own code. To accomplish this, you can use a file dataset, which creates a list of file paths in a virtual mount point that you can use to read the data in the files.

```
#Create a file dataset from the path on the datastore (this may take a short while)
file_data_set = Dataset.File.from_files(path=(default_ds, 'diabetes-data/*.csv'))

# Get the files in the dataset
for file_path in file_data_set.to_path():
    print(file_path)
```

### Register datasets

Now that you have created datasets that reference the diabetes data, you can register them to make them easily accessible to any experiment being run in the workspace.

We'll register the tabular dataset as **diabetes dataset**, and the file dataset as **diabetes files**.

```
# Register the tabular dataset
try:
    tab_data_set = tab_data_set.register(workspace=ws,
                                         name='diabetes dataset',
                                         description='diabetes data',
                                         tags = {'format':'CSV'},
                                         create_new_version=True)
except Exception as ex:
    print(ex)

# Register the file dataset
try:
    file_data_set = file_data_set.register(workspace=ws,
                                           name='diabetes file dataset',
                                           description='diabetes files',
                                           tags = {'format':'CSV'},
                                           create_new_version=True)
except Exception as ex:
    print(ex)

print('Datasets registered')
```

You can view and manage datasets on the **Datasets** page for your workspace in [Azure Machine Learning studio](https://ml.azure.com). You can also get a list of datasets from the workspace object:

```
print("Datasets:")
for dataset_name in list(ws.datasets.keys()):
    dataset = Dataset.get_by_name(ws, dataset_name)
    print("\t", dataset.name, 'version', dataset.version)
```

The ability to version datasets enables you to redefine datasets without breaking existing experiments or pipelines that rely on previous definitions. By default, the latest version of a named dataset is returned, but you can retrieve a specific version of a dataset by specifying the version number, like this:

```python
dataset_v1 = Dataset.get_by_name(ws, 'diabetes dataset', version = 1)
```

### Train a model from a tabular dataset

Now that you have datasets, you're ready to start training models from them. You can pass datasets to scripts as inputs in the estimator being used to run the script.

Run the following two code cells to create:

1. A folder named **diabetes_training_from_tab_dataset**
2. A script that trains a classification model by using a tabular dataset that is passed to it as an argument.

```
import os

# Create a folder for the experiment files
experiment_folder = 'diabetes_training_from_tab_dataset'
os.makedirs(experiment_folder, exist_ok=True)
print(experiment_folder, 'folder created')

%%writefile $experiment_folder/diabetes_training.py
# Import libraries
import os
import argparse
from azureml.core import Run, Dataset
import pandas as pd
import numpy as np
import joblib
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.metrics import roc_curve

# Get the script arguments (regularization rate and training dataset ID)
parser = argparse.ArgumentParser()
parser.add_argument('--regularization', type=float, dest='reg_rate', default=0.01, help='regularization rate')
parser.add_argument("--input-data", type=str, dest='training_dataset_id', help='training dataset')
args = parser.parse_args()

# Set regularization hyperparameter (passed as an argument to the script)
reg = args.reg_rate

# Get the experiment run context
run = Run.get_context()

# Get the training dataset
print("Loading Data...")
diabetes = run.input_datasets['training_data'].to_pandas_dataframe()

# Separate features and labels
X, y = diabetes[['Pregnancies','PlasmaGlucose','DiastolicBloodPressure','TricepsThickness','SerumInsulin','BMI','DiabetesPedigree','Age']].values, diabetes['Diabetic'].values

# Split data into training set and test set
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.30, random_state=0)

# Train a logistic regression model
print('Training a logistic regression model with regularization rate of', reg)
run.log('Regularization Rate', np.float(reg))
model = LogisticRegression(C=1/reg, solver="liblinear").fit(X_train, y_train)

# calculate accuracy
y_hat = model.predict(X_test)
acc = np.average(y_hat == y_test)
print('Accuracy:', acc)
run.log('Accuracy', np.float(acc))

# calculate AUC
y_scores = model.predict_proba(X_test)
auc = roc_auc_score(y_test,y_scores[:,1])
print('AUC: ' + str(auc))
run.log('AUC', np.float(auc))

os.makedirs('outputs', exist_ok=True)
# note file saved in the outputs folder is automatically uploaded into experiment record
joblib.dump(value=model, filename='outputs/diabetes_model.pkl')

run.complete()
```

> **Note**: In the script, the dataset is passed as a parameter (or argument). In the case of a tabular dataset, this argument will contain the ID of the registered dataset, so you could write code in the script to get the experiment's workspace from the run context and then get the dataset using its ID, like this:
>
> ```
> run = Run.get_context()
> ws = run.experiment.workspace
> dataset = Dataset.get_by_id(ws, id=args.training_dataset_id)
> diabetes = dataset.to_pandas_dataframe()
> ```
>
> However, the Azure Machine Learning runtime automatically identifies arguments that reference named datasets and adds them to the run's **input_datasets** collection, so you can also retrieve the dataset from this collection by specifying its "friendly name" (which, as you'll see shortly, is specified in the argument definition in the script run configuration for the experiment). This is the approach taken in the script above.

Now you can run the script as an experiment, defining an argument for the training dataset, which is read by the script.

> **Note**: The **Dataset** class depends on some components in the **azureml-dataprep** package, so you need to include this package in the environment where the training experiment will be run. The **azureml-dataprep** package is included in the **azureml-defaults** package.

```
from azureml.core import Experiment, ScriptRunConfig, Environment
from azureml.widgets import RunDetails

# Create a Python environment for the experiment (from a .yml file)
env = Environment.from_conda_specification("experiment_env", "environment.yml")

# Get the training dataset
diabetes_ds = ws.datasets.get("diabetes dataset")

# Create a script config
script_config = ScriptRunConfig(source_directory=experiment_folder,
                                script='diabetes_training.py',
                                arguments = ['--regularization', 0.1, # Regularizaton rate parameter
                                             '--input-data', diabetes_ds.as_named_input('training_data')], # Reference to dataset
                                environment=env)

# submit the experiment
experiment_name = 'mslearn-train-diabetes'
experiment = Experiment(workspace=ws, name=experiment_name)
run = experiment.submit(config=script_config)
RunDetails(run).show()
run.wait_for_completion()
```

> **Note**: The **--input-data** argument passes the dataset as a named input that includes a friendly name for the dataset, which the script uses to read it from the **input_datasets** collection in the experiment run. The string value in the **--input-data** argument is actually the registered dataset's ID. As an alternative approach, you could simply pass `diabetes_ds.id`, in which case the script can access the dataset ID from the script arguments and use it to get the dataset from the workspace, but not from the **input_datasets** collection.

The first time the experiment is run, it may take some time to set up the Python environment - subsequent runs will be quicker.

When the experiment has completed, in the widget, view the **azureml-logs/70_driver_log.txt** output log and the metrics generated by the run.

### Register the trained model

As with any training experiment, you can retrieve the trained model and register it in your Azure Machine Learning workspace.

```
from azureml.core import Model

run.register_model(model_path='outputs/diabetes_model.pkl', model_name='diabetes_model',
                   tags={'Training context':'Tabular dataset'}, properties={'AUC': run.get_metrics()['AUC'], 'Accuracy': run.get_metrics()['Accuracy']})

for model in Model.list(ws):
    print(model.name, 'version:', model.version)
    for tag_name in model.tags:
        tag = model.tags[tag_name]
        print ('\t',tag_name, ':', tag)
    for prop_name in model.properties:
        prop = model.properties[prop_name]
        print ('\t',prop_name, ':', prop)
    print('\n')
```

### Train a model from a file dataset

You've seen how to train a model using training data in a tabular dataset; but what about a file dataset?

When you're using a file dataset, the dataset argument passed to the script represents a mount point containing file paths. How you read the data from these files depends on the kind of data in the files and what you want to do with it. In the case of the diabetes CSV files, you can use the Python **glob** module to create a list of files in the virtual mount point defined by the dataset, and read them all into Pandas dataframes that are concatenated into a single dataframe.

Run the following two code cells to create:

1. A folder named **diabetes_training_from_file_dataset**
2. A script that trains a classification model by using a file dataset that is passed to it as an input.

```
import os

# Create a folder for the experiment files
experiment_folder = 'diabetes_training_from_file_dataset'
os.makedirs(experiment_folder, exist_ok=True)
print(experiment_folder, 'folder created')

%%writefile $experiment_folder/diabetes_training.py
# Import libraries
import os
import argparse
from azureml.core import Dataset, Run
import pandas as pd
import numpy as np
import joblib
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.metrics import roc_curve
import glob

# Get script arguments (regularization rate and file dataset mount point)
parser = argparse.ArgumentParser()
parser.add_argument('--regularization', type=float, dest='reg_rate', default=0.01, help='regularization rate')
parser.add_argument('--input-data', type=str, dest='dataset_folder', help='data mount point')
args = parser.parse_args()

# Set regularization hyperparameter (passed as an argument to the script)
reg = args.reg_rate

# Get the experiment run context
run = Run.get_context()

# load the diabetes dataset
print("Loading Data...")
data_path = run.input_datasets['training_files'] # Get the training data path from the input
# (You could also just use args.dataset_folder if you don't want to rely on a hard-coded friendly name)

# Read the files
all_files = glob.glob(data_path + "/*.csv")
diabetes = pd.concat((pd.read_csv(f) for f in all_files), sort=False)

# Separate features and labels
X, y = diabetes[['Pregnancies','PlasmaGlucose','DiastolicBloodPressure','TricepsThickness','SerumInsulin','BMI','DiabetesPedigree','Age']].values, diabetes['Diabetic'].values

# Split data into training set and test set
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.30, random_state=0)

# Train a logistic regression model
print('Training a logistic regression model with regularization rate of', reg)
run.log('Regularization Rate', np.float(reg))
model = LogisticRegression(C=1/reg, solver="liblinear").fit(X_train, y_train)

# calculate accuracy
y_hat = model.predict(X_test)
acc = np.average(y_hat == y_test)
print('Accuracy:', acc)
run.log('Accuracy', np.float(acc))

# calculate AUC
y_scores = model.predict_proba(X_test)
auc = roc_auc_score(y_test,y_scores[:,1])
print('AUC: ' + str(auc))
run.log('AUC', np.float(auc))

os.makedirs('outputs', exist_ok=True)
# note file saved in the outputs folder is automatically uploaded into experiment record
joblib.dump(value=model, filename='outputs/diabetes_model.pkl')

run.complete()
```

Just as with tabular datasets, you can retrieve a file dataset from the **input_datasets** collection by using its friendly name. You can also retrieve it from the script argument, which in the case of a file dataset contains a mount path to the files (rather than the dataset ID passed for a tabular dataset).

Next we need to change the way the dataset is passed to the script - it needs to define a path from which the script can read the files. You can use either the **as_download** or **as_mount** method to do this. Using **as_download** causes the files in the file dataset to be downloaded to a temporary location on the compute where the script is being run, while **as_mount** creates a mount point from which the files can be streamed directly from the dataset.

You can combine the access method with the **as_named_input** method to include the dataset in the **input_datasets** collection in the experiment run (if you omit this, for example by setting the argument to `diabetes_ds.as_mount()`, the script will be able to access the dataset mount point from the script arguments, but not from the **input_datasets** collection).

```
from azureml.core import Experiment
from azureml.widgets import RunDetails

# Get the training dataset
diabetes_ds = ws.datasets.get("diabetes file dataset")

# Create a script config
script_config = ScriptRunConfig(source_directory=experiment_folder,
                                script='diabetes_training.py',
                                arguments = ['--regularization', 0.1, # Regularizaton rate parameter
                                             '--input-data', diabetes_ds.as_named_input('training_files').as_download()], # Reference to dataset location
                                environment=env) # Use the environment created previously

# submit the experiment
experiment_name = 'mslearn-train-diabetes'
experiment = Experiment(workspace=ws, name=experiment_name)
run = experiment.submit(config=script_config)
RunDetails(run).show()
run.wait_for_completion()
```

When the experiment has completed, in the widget, view the **azureml-logs/70_driver_log.txt** output log to verify that the files in the file dataset were downloaded to a temporary folder, enabling the script to read the files.

### Register the trained model

Once again, you can register the model that was trained by the experiment.

```
from azureml.core import Model

run.register_model(model_path='outputs/diabetes_model.pkl', model_name='diabetes_model',
                   tags={'Training context':'File dataset'}, properties={'AUC': run.get_metrics()['AUC'], 'Accuracy': run.get_metrics()['Accuracy']})

for model in Model.list(ws):
    print(model.name, 'version:', model.version)
    for tag_name in model.tags:
        tag = model.tags[tag_name]
        print ('\t',tag_name, ':', tag)
    for prop_name in model.properties:
        prop = model.properties[prop_name]
        print ('\t',prop_name, ':', prop)
    print('\n')
```

> **More Information**: For more details about using datasets for training, see [Train with datasets](https://docs.microsoft.com/azure/machine-learning/how-to-train-with-datasets) in the Azure ML documentation.
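The glob-and-concat pattern used by the file-dataset training script above can be tried locally without any Azure resources. Here is a minimal sketch using temporary CSV files (the file names and reduced column set are stand-ins for the mounted dataset files):

```python
import glob
import os
import tempfile

import pandas as pd

# Create two small CSV files in a temporary folder, standing in for the dataset mount point
data_path = tempfile.mkdtemp()
pd.DataFrame({"PlasmaGlucose": [85, 92], "Diabetic": [0, 1]}).to_csv(
    os.path.join(data_path, "part1.csv"), index=False)
pd.DataFrame({"PlasmaGlucose": [110, 78], "Diabetic": [1, 0]}).to_csv(
    os.path.join(data_path, "part2.csv"), index=False)

# Read every CSV under the mount point and concatenate into a single dataframe,
# mirroring what the training script does with the path it receives
all_files = glob.glob(data_path + "/*.csv")
diabetes = pd.concat((pd.read_csv(f) for f in all_files), sort=False)

print(len(diabetes))  # 4 rows: two from each file
```

Because the files share the same columns, the concatenated dataframe behaves exactly like one read from a single larger CSV.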
<div class="alert alert-block alert-info" style="margin-top: 20px">
    <a href="https://cocl.us/PY0101EN_edx_add_top">
        <img src="https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/PY0101EN/Ad/TopAd.png" width="750" align="center">
    </a>
</div>

<a href="https://cognitiveclass.ai/">
    <img src="https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/PY0101EN/Ad/CCLog.png" width="200" align="center">
</a>

<h1>Tuples in Python</h1>

<p><strong>Welcome!</strong> This notebook will teach you about tuples in the Python Programming Language. By the end of this lab, you'll know the basic tuple operations in Python, including indexing, slicing and sorting.</p>

<h2>Table of Contents</h2>
<div class="alert alert-block alert-info" style="margin-top: 20px">
    <ul>
        <li>
            <a href="#dataset">About the Dataset</a>
        </li>
        <li>
            <a href="#tuple">Tuples</a>
            <ul>
                <li><a href="#index">Indexing</a></li>
                <li><a href="#slice">Slicing</a></li>
                <li><a href="#sort">Sorting</a></li>
            </ul>
        </li>
        <li>
            <a href="#quiz">Quiz on Tuples</a>
        </li>
    </ul>
    <p>
        Estimated time needed: <strong>15 min</strong>
    </p>
</div>

<hr>

<h2 id="dataset">About the Dataset</h2>

Imagine you received album recommendations from your friends and compiled all of the recommendations into a table, with specific information about each album.
The table has one row for each album and several columns:

- **artist** - Name of the artist
- **album** - Name of the album
- **released_year** - Year the album was released
- **length_min_sec** - Length of the album (hours,minutes,seconds)
- **genre** - Genre of the album
- **music_recording_sales_millions** - Music recording sales (millions in USD) on [SONG://DATABASE](http://www.song-database.com/)
- **claimed_sales_millions** - Album's claimed sales (millions in USD) on [SONG://DATABASE](http://www.song-database.com/)
- **date_released** - Date on which the album was released
- **soundtrack** - Indicates if the album is the movie soundtrack (Y) or (N)
- **rating_of_friends** - Indicates the rating from your friends from 1 to 10

<br>
<br>

The dataset can be seen below:

<font size="1">
<table font-size:xx-small style="width:100%">
  <tr>
    <th>Artist</th>
    <th>Album</th>
    <th>Released</th>
    <th>Length</th>
    <th>Genre</th>
    <th>Music recording sales (millions)</th>
    <th>Claimed sales (millions)</th>
    <th>Released</th>
    <th>Soundtrack</th>
    <th>Rating (friends)</th>
  </tr>
  <tr>
    <td>Michael Jackson</td>
    <td>Thriller</td>
    <td>1982</td>
    <td>00:42:19</td>
    <td>Pop, rock, R&B</td>
    <td>46</td>
    <td>65</td>
    <td>30-Nov-82</td>
    <td></td>
    <td>10.0</td>
  </tr>
  <tr>
    <td>AC/DC</td>
    <td>Back in Black</td>
    <td>1980</td>
    <td>00:42:11</td>
    <td>Hard rock</td>
    <td>26.1</td>
    <td>50</td>
    <td>25-Jul-80</td>
    <td></td>
    <td>8.5</td>
  </tr>
  <tr>
    <td>Pink Floyd</td>
    <td>The Dark Side of the Moon</td>
    <td>1973</td>
    <td>00:42:49</td>
    <td>Progressive rock</td>
    <td>24.2</td>
    <td>45</td>
    <td>01-Mar-73</td>
    <td></td>
    <td>9.5</td>
  </tr>
  <tr>
    <td>Whitney Houston</td>
    <td>The Bodyguard</td>
    <td>1992</td>
    <td>00:57:44</td>
    <td>Soundtrack/R&B, soul, pop</td>
    <td>26.1</td>
    <td>50</td>
    <td>25-Jul-80</td>
    <td>Y</td>
    <td>7.0</td>
  </tr>
  <tr>
    <td>Meat Loaf</td>
    <td>Bat Out of Hell</td>
    <td>1977</td>
    <td>00:46:33</td>
    <td>Hard rock, progressive rock</td>
    <td>20.6</td>
    <td>43</td>
    <td>21-Oct-77</td>
    <td></td>
    <td>7.0</td>
  </tr>
<tr> <td>Eagles</td> <td>Their Greatest Hits (1971-1975)</td> <td>1976</td> <td>00:43:08</td> <td>Rock, soft rock, folk rock</td> <td>32.2</td> <td>42</td> <td>17-Feb-76</td> <td></td> <td>9.5</td> </tr> <tr> <td>Bee Gees</td> <td>Saturday Night Fever</td> <td>1977</td> <td>1:15:54</td> <td>Disco</td> <td>20.6</td> <td>40</td> <td>15-Nov-77</td> <td>Y</td> <td>9.0</td> </tr> <tr> <td>Fleetwood Mac</td> <td>Rumours</td> <td>1977</td> <td>00:40:01</td> <td>Soft rock</td> <td>27.9</td> <td>40</td> <td>04-Feb-77</td> <td></td> <td>9.5</td> </tr> </table></font> <hr> <h2 id="tuple">Tuples</h2> In Python, there are different data types: string, integer and float. These data types can all be contained in a tuple as follows: <img src="https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/PY0101EN/Chapter%202/Images/TuplesType.png" width="750" align="center" /> Now, let us create your first tuple with string, integer and float. ``` # Create your first tuple tuple1 = ("disco",10,1.2 ) tuple1 ``` The type of variable is a **tuple**. ``` # Print the type of the tuple you created type(tuple1) ``` <h3 id="index">Indexing</h3> Each element of a tuple can be accessed via an index. The following table represents the relationship between the index and the items in the tuple. Each element can be obtained by the name of the tuple followed by a square bracket with the index number: <img src="https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/PY0101EN/Chapter%202/Images/TuplesIndex.gif" width="750" align="center"> We can print out each value in the tuple: ``` # Print the variable on each index print(tuple1[0]) print(tuple1[1]) print(tuple1[2]) ``` We can print out the **type** of each value in the tuple: ``` # Print the type of value on each index print(type(tuple1[0])) print(type(tuple1[1])) print(type(tuple1[2])) ``` We can also use negative indexing. 
We use the same table above with corresponding negative values:

<img src="https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/PY0101EN/Chapter%202/Images/TuplesNeg.png" width="750" align="center">

We can obtain the last element as follows (this time we will not use the print statement to display the values):

```
# Use negative index to get the value of the last element
tuple1[-1]
```

We can display the next two elements as follows:

```
# Use negative index to get the value of the second last element
tuple1[-2]

# Use negative index to get the value of the third last element
tuple1[-3]
```

<h3 id="concate">Concatenate Tuples</h3>

We can concatenate or combine tuples by using the **+** sign:

```
# Concatenate two tuples
tuple2 = tuple1 + ("hard rock", 10)
tuple2
```

We can slice tuples to obtain multiple values, as demonstrated by the figure below:

<img src="https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/PY0101EN/Chapter%202/Images/TuplesSlice.gif" width="750" align="center">

<h3 id="slice">Slicing</h3>

We can slice tuples, obtaining new tuples with the corresponding elements:

```
# Slice from index 0 to index 2
tuple2[0:3]
```

We can obtain the last two elements of the tuple:

```
# Slice from index 3 to index 4
tuple2[3:5]
```

We can obtain the length of a tuple using the <code>len</code> command:

```
# Get the length of tuple
len(tuple2)
```

This figure shows the number of elements:

<img src="https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/PY0101EN/Chapter%202/Images/TuplesElement.png" width="750" align="center">

<h3 id="sort">Sorting</h3>

Consider the following tuple:

```
# A sample tuple
Ratings = (0, 9, 6, 5, 10, 8, 9, 6, 2)
```

We can sort the values in a tuple and save the result to a new list (note that <code>sorted</code> returns a list, not a tuple):

```
# Sort the tuple; sorted() returns a new sorted list
RatingsSorted = sorted(Ratings)
RatingsSorted
```

<h3 id="nest">Nested Tuple</h3>

A tuple can contain another tuple as well as other more complex data types.
This process is called 'nesting'. Consider the following tuple with several elements: ``` # Create a nest tuple NestedT =(1, 2, ("pop", "rock") ,(3,4),("disco",(1,2))) ``` Each element in the tuple including other tuples can be obtained via an index as shown in the figure: <img src="https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/PY0101EN/Chapter%202/Images/TuplesNestOne.png" width="750" align="center"> ``` # Print element on each index print("Element 0 of Tuple: ", NestedT[0]) print("Element 1 of Tuple: ", NestedT[1]) print("Element 2 of Tuple: ", NestedT[2]) print("Element 3 of Tuple: ", NestedT[3]) print("Element 4 of Tuple: ", NestedT[4]) ``` We can use the second index to access other tuples as demonstrated in the figure: <img src="https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/PY0101EN/Chapter%202/Images/TuplesNestTwo.png" width="750" align="center"> We can access the nested tuples : ``` # Print element on each index, including nest indexes print("Element 2, 0 of Tuple: ", NestedT[2][0]) print("Element 2, 1 of Tuple: ", NestedT[2][1]) print("Element 3, 0 of Tuple: ", NestedT[3][0]) print("Element 3, 1 of Tuple: ", NestedT[3][1]) print("Element 4, 0 of Tuple: ", NestedT[4][0]) print("Element 4, 1 of Tuple: ", NestedT[4][1]) ``` We can access strings in the second nested tuples using a third index: ``` # Print the first element in the second nested tuples NestedT[2][1][0] # Print the second element in the second nested tuples NestedT[2][1][1] ``` We can use a tree to visualise the process. 
Each new index corresponds to a deeper level in the tree: <img src="https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/PY0101EN/Chapter%202/Images/TuplesNestThree.gif" width="750" align="center"> Similarly, we can access elements nested deeper in the tree with a fourth index: ``` # Print the first element in the second nested tuples NestedT[4][1][0] # Print the second element in the second nested tuples NestedT[4][1][1] ``` The following figure shows the relationship of the tree and the element <code>NestedT[4][1][1]</code>: <img src="https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/PY0101EN/Chapter%202/Images/TuplesNestFour.gif" width="750" align="center"> <h2 id="quiz">Quiz on Tuples</h2> Consider the following tuple: ``` # sample tuple genres_tuple = ("pop", "rock", "soul", "hard rock", "soft rock", \ "R&B", "progressive rock", "disco") genres_tuple ``` Find the length of the tuple, <code>genres_tuple</code>: ``` # Write your code below and press Shift+Enter to execute ``` <img src="https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/PY0101EN/Chapter%202/Images/TuplesQuiz.png" width="1100" align="center"> Double-click __here__ for the solution. <!-- Your answer is below: len(genres_tuple) --> Access the element, with respect to index 3: ``` # Write your code below and press Shift+Enter to execute ``` Double-click __here__ for the solution. <!-- Your answer is below: genres_tuple[3] --> Use slicing to obtain indexes 3, 4 and 5: ``` # Write your code below and press Shift+Enter to execute ``` Double-click __here__ for the solution. <!-- Your answer is below: genres_tuple[3:6] --> Find the first two elements of the tuple <code>genres_tuple</code>: ``` # Write your code below and press Shift+Enter to execute ``` Double-click __here__ for the solution. 
<!-- Your answer is below: genres_tuple[0:2] --> Find the first index of <code>"disco"</code>: ``` # Write your code below and press Shift+Enter to execute ``` Double-click __here__ for the solution. <!-- Your answer is below: genres_tuple.index("disco") --> Generate a sorted List from the Tuple <code>C_tuple=(-5, 1, -3)</code>: ``` # Write your code below and press Shift+Enter to execute ``` Double-click __here__ for the solution. <!-- Your answer is below: C_tuple = (-5, 1, -3) C_list = sorted(C_tuple) C_list --> <hr> <h2>The last exercise!</h2> <p>Congratulations, you have completed your first lesson and hands-on lab in Python. However, there is one more thing you need to do. The Data Science community encourages sharing work. The best way to share and showcase your work is to share it on GitHub. By sharing your notebook on GitHub you are not only building your reputation with fellow data scientists, but you can also show it off when applying for a job. Even though this was your first piece of work, it is never too early to start building good habits. So, please read and follow <a href="https://cognitiveclass.ai/blog/data-scientists-stand-out-by-sharing-your-notebooks/" target="_blank">this article</a> to learn how to share your work. <hr> <div class="alert alert-block alert-info" style="margin-top: 20px"> <h2>Get IBM Watson Studio free of charge!</h2> <p><a href="https://cocl.us/PY0101EN_edx_add_bbottom"><img src="https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/PY0101EN/Ad/BottomAd.png" width="750" align="center"></a></p> </div> <h3>About the Authors:</h3> <p><a href="https://www.linkedin.com/in/joseph-s-50398b136/" target="_blank">Joseph Santarcangelo</a> is a Data Scientist at IBM, and holds a PhD in Electrical Engineering. His research focused on using Machine Learning, Signal Processing, and Computer Vision to determine how videos impact human cognition. 
Joseph has been working for IBM since he completed his PhD.</p>

Other contributors: <a href="https://www.linkedin.com/in/jiahui-mavis-zhou-a4537814a" target="_blank">Mavis Zhou</a>

<hr>

<p>Copyright &copy; 2018 IBM Developer Skills Network. This notebook and its source code are released under the terms of the <a href="https://cognitiveclass.ai/mit-license/">MIT License</a>.</p>
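As a recap, the quiz operations covered above can be run together in one cell:

```python
genres_tuple = ("pop", "rock", "soul", "hard rock", "soft rock",
                "R&B", "progressive rock", "disco")

length = len(genres_tuple)                 # number of elements
fourth = genres_tuple[3]                   # element at index 3
middle = genres_tuple[3:6]                 # slice with indexes 3, 4 and 5
first_two = genres_tuple[0:2]              # first two elements
disco_index = genres_tuple.index("disco")  # first index of "disco"
C_list = sorted((-5, 1, -3))               # tuples are immutable, so sorted() returns a new list
print(length, fourth, middle, first_two, disco_index, C_list)
```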
```
import pandas as pd
import numpy as np
import os
from xml.etree import cElementTree as ET
from dbfread import DBF
import time
import sqlite3

os.getcwd()
os.chdir("/Users/ninarethaller/Downloads/")
```

The election years: 1965, 1969, 1974, 1981, 1988, 1995, 2002, 2007, 2012, 2017. For 2017, we use the closest available date.

### Building the unemployment-rate table

```
file = "TXCHO-DEP.xml"
tree_emploi = ET.parse(file)
root_emploi = tree_emploi.getroot()

columns_name = ['time_period', 'dept_lib', 'dept_nb', 'rate']
data_emploi = pd.DataFrame(columns=columns_name)

for dept in root_emploi:
    dept_lib = dept.attrib["TITLE"].split(" - ")[1].replace(" ", "")
    dept_nb = dept.attrib["DEPARTEMENT"]
    for serie in dept:
        value = serie.attrib["OBS_VALUE"]
        date = serie.attrib["TIME_PERIOD"]
        it_dict = {'time_period': date, 'dept_lib': dept_lib,
                   "dept_nb": dept_nb, 'rate': value}
        data_emploi = data_emploi.append(it_dict, ignore_index=True)

np.unique(data_emploi["dept_nb"])
data_emploi.head()
```

Keep only the election years, and compute a rate of change between the start and end of each term to measure the impact of the five-year term on the election. We also compute a one-year rolling unemployment rate over the year before the election, to measure a short-term effect of unemployment during the final year.

```
def retry_quarter(q, value):
    q1 = np.nan
    q2 = np.nan
    q3 = np.nan
    q4 = np.nan
    if q == "Q1":
        q1 = float(value)
    elif q == "Q2":
        q2 = float(value)
    elif q == "Q3":
        q3 = float(value)
    else:
        q4 = float(value)
    return q1, q2, q3, q4

# Reshape the table into one row per year and department
columns_name = ['time_period', 'dept_lib', 'dept_nb',
                'Q1_rate', 'Q2_rate', 'Q3_rate', 'Q4_rate']
data_emploi_2 = pd.DataFrame(columns=columns_name)

for i in range(data_emploi.shape[0]):
    time_an = data_emploi.iloc[i, 0].split("-")[0]
    time_q = data_emploi.iloc[i, 0].split("-")[1]
    dept_lib = data_emploi.iloc[i, 1]
    dept_nb = data_emploi.iloc[i, 2]
    rate = data_emploi.iloc[i, 3]
    list_index = data_emploi_2[(data_emploi_2["time_period"] == time_an) &
                               (data_emploi_2["dept_nb"] == dept_nb)].index
    q1, q2, q3, q4 = retry_quarter(time_q, rate)
    if len(list_index) == 0:
        data_emploi_2 = data_emploi_2.append({"time_period": time_an, "dept_lib": dept_lib,
                                              "dept_nb": dept_nb, "Q1_rate": q1, "Q2_rate": q2,
                                              "Q3_rate": q3, "Q4_rate": q4}, ignore_index=True)
    elif len(list_index) == 1:
        # fill in the one quarter carried by this row (as a float, via retry_quarter)
        if not np.isnan(q1):
            data_emploi_2.iloc[list_index.values, 3] = q1
        if not np.isnan(q2):
            data_emploi_2.iloc[list_index.values, 4] = q2
        if not np.isnan(q3):
            data_emploi_2.iloc[list_index.values, 5] = q3
        if not np.isnan(q4):
            data_emploi_2.iloc[list_index.values, 6] = q4
    else:
        print("Unexpected index-list length")
        print("List length: %s" % len(list_index))
        print("Dataframe index: %s" % i)

data_emploi_2.head()
len(np.unique(data_emploi_2["dept_lib"]))
len(np.unique(data_emploi_2["time_period"]))
np.unique(data_emploi_2["time_period"])
96*35
data_emploi_2.shape
data_emploi_2.to_csv("data_emploi.csv")
```

For the variations, shouldn't we use the annual average instead? One problem: the series only starts in 1982, so the variations cost us yet another year of history. Try national figures instead?
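The row-by-row reshaping above can also be expressed with a single `pivot_table`. A small sketch on made-up data (the column names follow the notebook; the numbers are invented):

```python
import pandas as pd

# long format: one row per (year, department, quarter), as in data_emploi
long_df = pd.DataFrame({
    'time_period': ['1982', '1982', '1982', '1983'],
    'dept_nb': ['01', '01', '01', '01'],
    'quarter': ['Q1', 'Q2', 'Q3', 'Q1'],
    'rate': [5.1, 5.3, 5.2, 5.6],
})

# wide format: one row per (year, department), one column per quarter;
# quarters missing from the data simply come out as NaN
wide = long_df.pivot_table(index=['time_period', 'dept_nb'],
                           columns='quarter', values='rate').reset_index()
print(wide)
```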
```
# Create the rate variations and keep only the election years
# 1981 1988 1995 2002 2007 2012 2017
```

### Civil registry

```
file2 = "ETAT-CIVIL-DEP.xml"
tree_civil = ET.parse(file2)
root_civil = tree_civil.getroot()

def each_serie(root, list_serie):
    iteration = 0
    for name_serie in list_serie:
        columns_name = ['time_period', 'dept_lib', 'dept_nb'] + [name_serie]
        data_serie = pd.DataFrame(columns=columns_name)
        for dept in root:
            dept_lib = dept.attrib["TITLE"].split(" - ")[1].strip(" ")
            dept_nb = dept.attrib["DEPARTEMENT"]
            serie_lib = dept.attrib["TITLE"].split(" - ")[0].strip(" ")
            if serie_lib == name_serie:
                for serie in dept:
                    serie_value = serie.attrib["OBS_VALUE"]
                    serie_date = serie.attrib["TIME_PERIOD"]
                    it_dict = {'time_period': serie_date, 'dept_lib': dept_lib,
                               "dept_nb": dept_nb, str(name_serie): serie_value}
                    data_serie = data_serie.append(it_dict, ignore_index=True)
        iteration += 1
        data_serie.sort_values(by=["dept_nb", "time_period"])
        if iteration == 1:
            data = data_serie
        else:
            data = pd.merge(data, data_serie, how='left',
                            on=['time_period', 'dept_lib', 'dept_nb'])
    print(data.shape)
    print(data.head())
    return data.sort_values(by=["dept_nb", "time_period"])

serie_a_garder = ["Naissances domiciliées par département",
                  "Nombre total de mariages domiciliés",
                  "Décès domiciliés par département"]
data_etat_civil = each_serie(root_civil, serie_a_garder)

np.unique(data_etat_civil["dept_nb"])
len(np.unique(data_etat_civil["time_period"]))
41*101
data_etat_civil.head()
data_etat_civil[data_etat_civil["dept_nb"] == "971"]
data_etat_civil.shape
```

### Population

```
def load_pop(start_year, end_year):
    iteration = 0
    for year in range(start_year, end_year, 1):
        if year in [2015, 2016, 2014, 1990]:
            data_pop = pd.read_excel("estim-pop-dep-sexe-gca-1975-2016.xls",
                                     sheetname=str(year), skiprows=4, skip_footer=10,
                                     parse_cols=7,
                                     names=["dept_nb", "dept_lib", "0-19ans", "20-39ans",
                                            "40-59ans", "60-74ans", "75+ans", "Total"])
        elif year > 1989:
            data_pop = pd.read_excel("estim-pop-dep-sexe-gca-1975-2016.xls",
                                     sheetname=str(year), skiprows=4, skip_footer=9,
                                     parse_cols=7,
                                     names=["dept_nb", "dept_lib", "0-19ans", "20-39ans",
                                            "40-59ans", "60-74ans", "75+ans", "Total"])
        else:
            data_pop = pd.read_excel("estim-pop-dep-sexe-gca-1975-2016.xls",
                                     sheetname=str(year), skiprows=4, skip_footer=4,
                                     parse_cols=7,
                                     names=["dept_nb", "dept_lib", "0-19ans", "20-39ans",
                                            "40-59ans", "60-74ans", "75+ans", "Total"])
        iteration += 1
        data_pop["time_period"] = str(year)
        print(data_pop.iloc[-1, :])
        if iteration == 1:
            data = data_pop
        else:
            data = pd.concat([data, data_pop])
    print(data.head())
    print(data.shape)
    return data.sort_values(by=["dept_nb", "time_period"])

data_pop = load_pop(1975, 2017)
data_pop.iloc[-1, :]
42*96
```

### Civil registry + population

```
print(data_pop.shape)
print(min(data_pop["time_period"]))
print(max(data_pop["time_period"]))
print(len(np.unique(data_pop["dept_nb"])))
print(len(np.unique(data_pop["time_period"])))
41*96

print(data_etat_civil.shape)
print(min(data_etat_civil["time_period"]))
print(max(data_etat_civil["time_period"]))
print(len(np.unique(data_etat_civil["dept_nb"])))
print(len(np.unique(data_etat_civil["time_period"])))

data_civil_pop = pd.merge(data_pop, data_etat_civil, how='left',
                          on=['time_period', 'dept_lib', 'dept_nb'])
data_civil_pop.head()

print(data_civil_pop.shape[0])
print(len(np.unique(data_civil_pop["dept_nb"])))
print(data_emploi_2.shape[0])
print(len(np.unique(data_emploi_2["dept_nb"])))

data_civil_pop.to_csv("data_civil_pop.csv")
```

### Harmonized data, 1968-2013

```
table1 = DBF('rp19682013_individus_dbase/rp19682013.dbf')
table2 = DBF('rp19682013_individus_dbase/rp19682013_varlist.dbf', encoding='latin-1')
table3 = DBF('rp19682013_individus_dbase/rp19682013_varmod.dbf', encoding='latin-1')

len(table1)

for re in table2:
    print(re)
    print("\n")

for re in table3:
    print(re)
    print("\n")

conn = sqlite3.connect('rp19682013_individus_dbase/example.sqlite')
c = conn.cursor()

for row in c.execute('SELECT AN_RECENS,COUNT(AN_RECENS) FROM rp19682013 GROUP BY AN_RECENS'):
    print(row)

for row in c.execute('SELECT AN_RECENS,NES4,COUNT(NES4) FROM rp19682013 GROUP BY NES4, AN_RECENS'):
    print(row)

for row in c.execute('SELECT TYP_ACT,SUM(POND) FROM rp19682013 WHERE AN_RECENS="2013" GROUP BY TYP_ACT, AN_RECENS'):
    print(row)

26733241.530168764 + 4175157.586438414 + 14240929.046412589
(4175157.586438414 / 45149328.16301977) * 100

# Year-on-year variation of the Q4 rate, with no value for 2016
# (assumes rows are sorted by department and year)
for i in range(data.shape[0]):
    if data.loc[i, 'annee'] == 2016:
        data.loc[i, 'new_variable'] = np.nan
    else:
        data.loc[i, 'new_variable'] = data.loc[i, 'Q4'] - data.loc[i - 1, 'Q4']
```
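The year-on-year variation sketched above can be computed without an explicit loop using `groupby(...).diff()`, which also handles the first year of each department correctly. A minimal sketch on invented numbers:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({
    'dept_nb': ['01', '01', '01', '02', '02'],
    'annee':   [2014, 2015, 2016, 2015, 2016],
    'Q4':      [9.0, 8.5, 8.1, 10.0, 9.6],
}).sort_values(['dept_nb', 'annee']).reset_index(drop=True)

# difference with the previous year within each department;
# the first year of each department is NaN by construction
df['variation_Q4'] = df.groupby('dept_nb')['Q4'].diff()
print(df)
```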
``` !pip install git+https://github.com/zhy0/dmarket_rl !pip install ray[rllib] import pandas as pd import numpy as np import matplotlib.pyplot as plt from dmarket.environments import MultiAgentTrainingEnv from dmarket.info_settings import OfferInformationSetting, BlackBoxSetting from dmarket.agents import GymRLAgent from ray.rllib.env.multi_agent_env import MultiAgentEnv from ray.rllib.agents import pg from ray.tune import register_env from gym.spaces import Box, Discrete, Tuple class MultiWrapper(MultiAgentTrainingEnv, MultiAgentEnv): def __init__(self, env_config): super().__init__(**env_config) rl_agents = [ GymRLAgent('seller', 90, 'S1', max_factor=0.25, discretization=20), GymRLAgent('seller', 90, 'S2', max_factor=0.25, discretization=20), GymRLAgent('seller', 90, 'S3', max_factor=0.25, discretization=20), GymRLAgent('buyer', 110, 'B1', max_factor=0.25, discretization=20), GymRLAgent('buyer', 110, 'B2', max_factor=0.25, discretization=20), GymRLAgent('buyer', 110, 'B3', max_factor=0.25, discretization=20), ] fixed_agents = [] setting = BlackBoxSetting() env = MultiAgentTrainingEnv(rl_agents, fixed_agents, setting) my_policy = (None, env.observation_space, Discrete(20), {}) my_group = ( None, Tuple([env.observation_space for i in range(3)]), Tuple([Discrete(20) for i in range(3)]), {}, ) def select_policy(agent_id): """This function maps the agent id to the policy id""" return agent_id # We name our policies the same as our RL agents policies = { 'S1': my_policy, 'S2': my_policy, 'S3': my_policy, 'B1': my_policy, 'B2': my_policy, 'B3': my_policy, 'Sellers': my_group } EXP_NAME = "multi_seller_group" def run_experiment(iterations): grouped_sellers = lambda config: MultiWrapper(config).with_agent_groups(groups={ "Sellers": ['S1', 'S2', 'S3'] }) register_env("grouped_sellers", grouped_sellers) trainer = pg.PGTrainer(env="grouped_sellers", config={ "env_config": {"rl_agents": rl_agents, "fixed_agents": fixed_agents, "setting": setting}, "log_level": "ERROR", 
"timesteps_per_iteration": 30, "multiagent": { "policies": policies, "policy_mapping_fn": select_policy } }) rewards = [] episodes = [] episode_len = [] for i in range(iterations): result = trainer.train() rewards.append(result['policy_reward_mean']) episodes.append(result['episodes_total']) episode_len.append(result['episode_len_mean']) df = pd.DataFrame(rewards, index=episodes) df.index.name = 'Episodes' df['episode_len'] = episode_len return df %%time N_iter = 500 runs = 5 for i in range(runs): result = run_experiment(N_iter) plt.figure() result.plot() result.to_csv(f'{EXP_NAME}_iter{N_iter}_run{i}.csv') ```
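The `policy_mapping_fn` contract is just a function from agent id to policy id; the notebook uses the identity mapping because its policies are named after the agents. If one also wanted the grouped sellers to resolve explicitly to their shared policy, a mapping along these lines could be used (a hypothetical variant, not taken from the notebook above):

```python
def select_policy_grouped(agent_id):
    """Map an agent id to a policy id: grouped sellers share the
    'Sellers' policy, every other agent keeps its own policy."""
    if agent_id in ('S1', 'S2', 'S3', 'Sellers'):
        return 'Sellers'
    return agent_id

print(select_policy_grouped('S2'), select_policy_grouped('B1'))
```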
# Ingest Text Data

Labeled text data can be in a structured data format, such as reviews for sentiment analysis, news headlines for topic modeling, or documents for text classification. In these cases, you may have one column for the label, one column for the text, and sometimes other columns for attributes. You can treat this structured data like tabular data and ingest it in one of the ways discussed in the previous notebook [011_Ingest_tabular_data.ipynb](011_Ingest_tabular_data_v1.ipynb). Sometimes text data, especially raw text data, comes as unstructured data and is often in .json or .txt format; this section discusses how to ingest these types of data files into a SageMaker notebook.

## Set Up Notebook

```
%pip install -qU 'sagemaker>=2.15.0' 's3fs==0.4.2'

import pandas as pd
import json
import glob
import s3fs
import sagemaker

# Get SageMaker session & default S3 bucket
sagemaker_session = sagemaker.Session()
bucket = sagemaker_session.default_bucket()  # replace with your own bucket if you have one
s3 = sagemaker_session.boto_session.resource('s3')

prefix = 'text_spam/spam'
prefix_json = 'json_jeo'
filename = 'SMSSpamCollection.txt'
filename_json = 'JEOPARDY_QUESTIONS1.json'
```

## Downloading data from Online Sources

### Text data (in structured .csv format): Twitter -- sentiment140

**Sentiment140** This is the sentiment140 dataset. It contains 1.6M tweets extracted using the twitter API. The tweets have been annotated with sentiment (0 = negative, 4 = positive) and topics (hashtags used to retrieve tweets). The dataset contains the following columns:

* `target`: the polarity of the tweet (0 = negative, 4 = positive)
* `ids`: The id of the tweet ( 2087)
* `date`: the date of the tweet (Sat May 16 23:58:44 UTC 2009)
* `flag`: The query (lyx). If there is no query, then this value is NO_QUERY.
* `user`: the user that tweeted (robotickilldozr)
* `text`: the text of the tweet (Lyx is cool)

[Second Twitter data](https://github.com/guyz/twitter-sentiment-dataset) is a Twitter data set collected as an extension to the Sanders Analytics Twitter sentiment corpus, originally designed for training and testing Twitter sentiment analysis algorithms. We will use this data to showcase how to aggregate two data sets if you want to enhance your current data set by adding more data to it.

```
#helper functions to upload data to s3
def write_to_s3(filename, bucket, prefix):
    #put one file in a separate folder. This is helpful if you read and prepare data with Athena
    filename_key = filename.split('.')[0]
    key = "{}/{}/{}".format(prefix, filename_key, filename)
    return s3.Bucket(bucket).upload_file(filename, key)

def upload_to_s3(bucket, prefix, filename):
    url = 's3://{}/{}/{}'.format(bucket, prefix, filename)
    print('Writing to {}'.format(url))
    write_to_s3(filename, bucket, prefix)

#run this cell if you are in a SageMaker Studio notebook
#!apt-get install unzip

#download the first twitter dataset
!wget http://cs.stanford.edu/people/alecmgo/trainingandtestdata.zip -O sentimen140.zip
# Uncompressing
!unzip -o sentimen140.zip -d sentiment140

#upload the files to the S3 bucket
csv_files = glob.glob("sentiment140/*.csv")
for filename in csv_files:
    upload_to_s3(bucket, 'text_sentiment140', filename)

#download the second twitter dataset
!wget https://raw.githubusercontent.com/zfz/twitter_corpus/master/full-corpus.csv

filename = 'full-corpus.csv'
upload_to_s3(bucket, 'text_twitter_sentiment_2', filename)
```

### Text data (in .txt format): SMS Spam data

[SMS Spam Data](https://archive.ics.uci.edu/ml/datasets/sms+spam+collection) was manually extracted from the Grumbletext Web site. This is a UK forum in which cell phone users make public claims about SMS spam messages, most of them without reporting the very spam message received.
Each line in the text file has the correct class followed by the raw message. We will use this data to showcase how to ingest text data in .txt format.

```
#download and extract the archive first, then upload the extracted files
!wget http://www.dt.fee.unicamp.br/~tiago/smsspamcollection/smsspamcollection.zip -O spam.zip
!unzip -o spam.zip -d spam

txt_files = glob.glob("spam/*.txt")
for filename in txt_files:
    upload_to_s3(bucket, 'text_spam', filename)
```

### Text Data (in .json format): Jeopardy Question data

[Jeopardy Question](https://j-archive.com/) was obtained by crawling the Jeopardy question archive website. It is an unordered list of questions where each question has the following key-value pairs:

* `category` : the question category, e.g. "HISTORY"
* `value`: dollar value of the question as string, e.g. "\$200"
* `question`: text of question
* `answer` : text of answer
* `round`: one of "Jeopardy!", "Double Jeopardy!", "Final Jeopardy!" or "Tiebreaker"
* `show_number` : string of show number, e.g '4680'
* `air_date` : the show air date in format YYYY-MM-DD

```
#json file format
!wget http://skeeto.s3.amazonaws.com/share/JEOPARDY_QUESTIONS1.json.gz
# Uncompressing
!gunzip -f JEOPARDY_QUESTIONS1.json.gz

filename = 'JEOPARDY_QUESTIONS1.json'
upload_to_s3(bucket, 'json_jeo', filename)
```

## Ingest Data into a SageMaker Notebook

## Method 1: Copying data to the Instance

You can use the AWS Command Line Interface (CLI) to copy your data from S3 to your SageMaker instance. This is a quick and easy approach when you are dealing with medium-sized data files, or when you are experimenting and doing exploratory analysis. The documentation can be found [here](https://docs.aws.amazon.com/cli/latest/reference/s3/cp.html).
``` #Specify file names prefix = 'text_spam/spam' prefix_json = 'json_jeo' filename = 'SMSSpamCollection.txt' filename_json = 'JEOPARDY_QUESTIONS1.json' prefix_spam_2 = 'text_spam/spam_2' #copy data to your sagemaker instance using AWS CLI !aws s3 cp s3://$bucket/$prefix_json/ text/$prefix_json/ --recursive data_location = "text/{}/{}".format(prefix_json, filename_json) with open(data_location) as f: data = json.load(f) print(data[0]) ``` ## Method 2: Use AWS compatible Python Packages When you are dealing with large data sets, or do not want to lose any data when you delete your Sagemaker Notebook Instance, you can use pre-built packages to access your files in S3 without copying files into your instance. These packages, such as `Pandas`, have implemented options to access data with a specified path string: while you will use `file://` on your local file system, you will use `s3://` instead to access the data through the AWS boto library. For `pandas`, any valid string path is acceptable. The string could be a URL. Valid URL schemes include http, ftp, s3, and file. For file URLs, a host is expected. You can find additional documentation [here](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.read_csv.html). For text data, most of the time you can read it as line-by-line files or use Pandas to read it as a DataFrame by specifying a delimiter. ``` data_s3_location = "s3://{}/{}/{}".format(bucket, prefix, filename) # S3 URL s3_tabular_data = pd.read_csv(data_s3_location, sep="\t", header=None) s3_tabular_data.head() ``` For JSON files, depending on the structure, you can also use `Pandas` `read_json` function to read it if it's a flat json file. 
``` data_json_location = "s3://{}/{}/{}".format(bucket, prefix_json, filename_json) s3_tabular_data_json = pd.read_json(data_json_location, orient='records') s3_tabular_data_json.head() ``` ## Method 3: Use AWS Native methods #### s3fs [S3Fs](https://s3fs.readthedocs.io/en/latest/) is a Pythonic file interface to S3. It builds on top of botocore. The top-level class S3FileSystem holds connection information and allows typical file-system style operations like cp, mv, ls, du, glob, etc., as well as put/get of local files to/from S3. ``` fs = s3fs.S3FileSystem() data_s3fs_location = "s3://{}/{}/".format(bucket, prefix) # To List all files in your accessible bucket fs.ls(data_s3fs_location) # open it directly with s3fs data_s3fs_location = "s3://{}/{}/{}".format(bucket, prefix, filename) # S3 URL with fs.open(data_s3fs_location) as f: print(pd.read_csv(f, sep = '\t', nrows = 2)) ``` # Aggregating Data Set If you would like to enhance your data with more data collected for your use cases, you can always aggregate your newly-collected data with your current data set. We will use the two data set -- Sentiment140 and Sanders Twitter Sentiment to show how to aggregate data together. ``` prefix_tw1 = 'text_sentiment140/sentiment140' filename_tw1 = 'training.1600000.processed.noemoticon.csv' prefix_added = 'text_twitter_sentiment_2' filename_added = 'full-corpus.csv' ``` Let's read in our original data and take a look at its format and schema: ``` data_s3_location_base = "s3://{}/{}/{}".format(bucket, prefix_tw1, filename_tw1) # S3 URL # we will showcase with a smaller subset of data for demonstration purpose text_data = pd.read_csv(data_s3_location_base, header = None, encoding = "ISO-8859-1", low_memory=False, nrows = 10000) text_data.columns = ['target', 'tw_id', 'date', 'flag', 'user', 'text'] ``` We have 6 columns, `date`, `text`, `flag` (which is the topic the twitter was queried), `tw_id` (tweet's id), `user` (user account name), and `target` (0 = neg, 4 = pos). 
``` text_data.head(1) ``` Let's read in and take a look at the data we want to add to our original data. We will start by checking for columns for both data sets. The new data set has 5 columns, `TweetDate` which maps to `date`, `TweetText` which maps to `text`, `Topic` which maps to `flag`, `TweetId` which maps to `tw_id`, and `Sentiment` mapped to `target`. In this new data set, we don't have `user account name` column, so when we aggregate two data sets we can add this column to the data set to be added and fill it with `NULL` values. You can also remove this column from the original data if it does not provide much valuable information based on your use cases. ``` data_s3_location_added = "s3://{}/{}/{}".format(bucket, prefix_added, filename_added) # S3 URL # we will showcase with a smaller subset of data for demonstration purpose text_data_added = pd.read_csv(data_s3_location_added, encoding = "ISO-8859-1", low_memory=False, nrows = 10000) text_data_added.head(1) ``` #### Add the missing column to the new data set and fill it with `NULL` ``` text_data_added['user'] = "" ``` #### Renaming the new data set columns to combine two data sets ``` text_data_added.columns = ['flag', 'target', 'tw_id', 'date', 'text', 'user'] text_data_added.head(1) ``` #### Change the `target` column to the same format as the `target` in the original data set Note that the `target` column in the new data set is marked as "positive", "negative", "neutral", and "irrelevant", whereas the `target` in the original data set is marked as "0" and "4". So let's map "positive" to 4, "neutral" to 2, and "negative" to 0 in our new data set so that they are consistent. For "irrelevant", which are either not English or Spam, you can either remove these if it is not valuable for your use case (In our use case of sentiment analysis, we will remove those since these text does not provide any value in terms of predicting sentiment) or map them to -1. 
```
#remove tweets labeled as irrelevant
text_data_added = text_data_added[text_data_added['target'] != 'irrelevant']

# convert string labels to number targets
target_map = {'positive': 4, 'negative': 0, 'neutral': 2}
text_data_added['target'] = text_data_added['target'].map(target_map)
```

#### Combine the two data sets and save as one new file

```
text_data_new = pd.concat([text_data, text_data_added])

filename = 'sentiment_full.csv'
text_data_new.to_csv(filename, index=False)
upload_to_s3(bucket, 'text_twitter_sentiment_full', filename)
```

### Citation

Twitter140 Data, Go, A., Bhayani, R. and Huang, L., 2009. Twitter sentiment classification using distant supervision. CS224N Project Report, Stanford, 1(2009), p.12.

SMS Spam data, Almeida, T.A., Gómez Hidalgo, J.M., Yamakami, A. Contributions to the Study of SMS Spam Filtering: New Collection and Results. Proceedings of the 2011 ACM Symposium on Document Engineering (DOCENG'11), Mountain View, CA, USA, 2011.

J! Archive, J! Archive is created by fans, for fans. The Jeopardy! game show and all elements thereof, including but not limited to copyright and trademark thereto, are the property of Jeopardy Productions, Inc. and are protected under law. This website is not affiliated with, sponsored by, or operated by Jeopardy Productions, Inc.
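As a self-contained recap, the aggregation steps above can be exercised end-to-end on tiny synthetic frames, without touching S3 (the column names follow the notebook; the rows are made up):

```python
import pandas as pd

original = pd.DataFrame({
    'target': [0, 4], 'tw_id': [1, 2], 'date': ['d1', 'd2'],
    'flag': ['NO_QUERY', 'NO_QUERY'], 'user': ['u1', 'u2'],
    'text': ['bad day', 'great day'],
})
added = pd.DataFrame({
    'flag': ['apple'], 'target': ['positive'], 'tw_id': [3],
    'date': ['d3'], 'text': ['love it'],
})

# same steps as above: add the missing column, drop irrelevant rows,
# map string labels to numbers, then concatenate
added['user'] = ""
added = added[added['target'] != 'irrelevant']
added['target'] = added['target'].map({'positive': 4, 'negative': 0, 'neutral': 2})
combined = pd.concat([original, added], ignore_index=True)
print(combined[['target', 'text']])
```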
## VQE ansatz: 001 ``` import os import numpy import pennylane as qml from pennylane import numpy as np from pennylane.optimize import AdamOptimizer, GradientDescentOptimizer import sys sys.path.append("..") from wordsToNumbers import Corpus from wordsToNumbers import fibonacci_vocabulary from wordsToQubits import put_word_on_sphere from utils import get_corpus_from_directory, working_window, get_word_from_sphere np.random.seed(73) ``` ## Corpus ``` corpus_path='/Users/voicutu/Desktop/Qountry/CountryMixt' corpus_tex = get_corpus_from_directory(corpus_path, limit=1) corpus= Corpus(corpus_tex) print(corpus.prop()) parameterize_vovabulary = fibonacci_vocabulary(corpus.vocabulary) print("corpus:",corpus.split_text) ``` ## Training set ``` history_lenghth = 3 x,y = working_window(history_lenghth, splited_text=corpus.split_text) print("len training set:", len(x)) ``` ## Circuit ``` dev = qml.device("default.qubit", wires=history_lenghth+1) # circuit initializer def circuit_initializer(words): for i in range(len(words)): put_word_on_sphere(words[i], qubit=i) def layer_type1(param, wires=[0,1,2]): qml.CRZ( param[0], wires=[0,1]) qml.CRX( param[1], wires=[0,1]) qml.CRY( param[2], wires=[0,1]) qml.CRZ( param[3], wires=[1,2]) qml.CRX( param[4], wires=[1,2]) qml.CRY( param[5], wires=[1,2]) qml.CRZ( param[6], wires=[2,3]) qml.CRX( param[7], wires=[2,3]) qml.CRY( param[8], wires=[2,3]) qml.RX( param[9], wires=0) qml.RY( param[10], wires=0) qml.RX( param[11], wires=1) qml.RY( param[12], wires=1) qml.RX( param[13], wires=2) qml.RY( param[14], wires=2) qml.RX( param[15], wires=3) qml.RY( param[16], wires=3) @qml.qnode(dev) def next_gen(params, x, obs='z'): """ obs:'z', 'x' or 'y' """ # initialize the circuit circuit_initializer(x) # for param in params: layer_type1(param, wires=[0,1,2]) #circuit_initializer(x) # just for a test # measure if obs=='z': return qml.expval(qml.PauliZ(3)) if obs=='x': return qml.expval(qml.PauliX(3)) if obs=='y': return qml.expval(qml.PauliY(3)) x_vec=[ 
parameterize_vovabulary[w] for w in x[0]]
print("x:", x[0])
print("x_vec:", x_vec)

params = np.random.uniform(size=(1, 17), requires_grad=True)

print("example tensor:", x_vec)
print("\n\n")
print(qml.draw(next_gen)(params, x_vec, obs='z'))

pred_vector = [next_gen(params, x_vec, obs=o) for o in ['x', 'y', 'z']]
```

## Learning

```
def pred_target_distance(pred_vector, target):
    distance = np.linalg.norm(pred_vector - target)
    return distance

print("target word:", y[0])
print("target vector:", parameterize_vovabulary[y[0]])
print("prediction:", pred_vector)

pred_word = get_word_from_sphere(pred_vector, parameterize_vovabulary)
print("prediction word:", pred_word)

distance = pred_target_distance(np.array(pred_vector),
                                np.array(parameterize_vovabulary[y[0]]))
print("distance:", distance)

distance = pred_target_distance(np.array(parameterize_vovabulary[y[1]]),
                                np.array(parameterize_vovabulary[y[1]]))
print("distance check", distance)

def cost(par, x, y):
    predictions = [[next_gen(par, w_input, obs='x'),
                    next_gen(par, w_input, obs='y'),
                    next_gen(par, w_input, obs='z')] for w_input in x]
    c = 0.0
    for i in range(len(predictions)):
        # compare each prediction with its own target vector
        c = c + pred_target_distance(np.array(predictions[i]), y[i])
    c = c / len(predictions)
    return np.array(c)

def accuracy(predictions, y):
    pred_words = [get_word_from_sphere(p_v, parameterize_vovabulary) for p_v in predictions]
    target_words = [get_word_from_sphere(p_v, parameterize_vovabulary) for p_v in y]
    ac = 0
    for i in range(len(pred_words)):
        if pred_words[i] == target_words[i]:
            ac = ac + 1
    return ac / len(pred_words)
```

## Training:

```
## shuffling data
index = np.array([i for i in range(len(x))])
print(index)

X_train = []
Y_train = []
Y_train_w = []
for i in range(len(x)):
    vec = [parameterize_vovabulary[w] for w in x[i]]
    X_train.append(vec)
    Y_train.append(parameterize_vovabulary[y[i]])
    Y_train_w.append(y[i])

X_train = np.array(X_train)
Y_train = np.array(Y_train)

## Model parameters
num_layers = 1
layer_param = 17
params = np.random.uniform(size=(num_layers, layer_param), requires_grad=True)

learning_rate = 0.6
opt = AdamOptimizer(learning_rate, beta1=0.9, beta2=0.999)

nr_epochs = 300
ls_progres = []
ac_progres = []
for e in range(nr_epochs):
    params, ls = opt.step_and_cost(lambda p: cost(par=p, x=X_train, y=Y_train), params)
    print("Iter:{} | train_cost:{}".format(e, ls))
    ls_progres.append(ls)
    if e % 10 == 0:
        predictions = [[next_gen(params, w_input, obs='x'),
                        next_gen(params, w_input, obs='y'),
                        next_gen(params, w_input, obs='z')] for w_input in X_train]
        ac = accuracy(predictions, y=Y_train)
        print("ac:", ac)
        ac_progres.append(ac)
```

## Results

```
import matplotlib.pyplot as plt

fig = plt.figure()
plt.plot([x for x in range(0, len(ls_progres))], np.array(ls_progres), label="train loss")
plt.legend()
plt.title("loss_VQE_001_C1")
plt.xlabel("epoch")
plt.ylabel("loss")
print("last loss:", ls_progres[-1])

fig = plt.figure()
plt.plot([x for x in range(0, len(ac_progres) * 10, 10)], np.array(ac_progres), label="train accuracy")
plt.legend()
plt.title("accuracy_VQE_001_C1")
plt.xlabel("epoch")
plt.ylabel("accuracy")
print("last accuracy:", ac_progres[-1])

for i in range(len(x)):
    words_vec = [parameterize_vovabulary[w] for w in x[i]]
    pred_vector = [next_gen(params, words_vec, obs=o) for o in ['x', 'y', 'z']]
    pred_word = get_word_from_sphere(pred_vector, parameterize_vovabulary)
    text = ""
    for w in x[i]:
        text = text + w + " "
    print("{} {}|{}".format(text, pred_word, y[i]))
```
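`get_word_from_sphere` is imported from the project's `utils`; conceptually it maps a measured expectation-value vector back to the closest word vector in the vocabulary. A minimal nearest-neighbour version might look like this (an assumption about its behaviour, not the project's actual code):

```python
import numpy as np

def nearest_word(vec, vocabulary):
    """Return the vocabulary word whose 3-vector is closest, in
    Euclidean distance, to the measured (<X>, <Y>, <Z>) vector."""
    words = list(vocabulary)
    dists = [np.linalg.norm(np.array(vec) - np.array(vocabulary[w])) for w in words]
    return words[int(np.argmin(dists))]

# toy vocabulary with three words placed on the sphere's axes
vocab = {'cat': [1.0, 0.0, 0.0], 'dog': [0.0, 1.0, 0.0], 'sun': [0.0, 0.0, 1.0]}
print(nearest_word([0.9, 0.1, 0.05], vocab))
```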
# New Style HDMI input and Pixel Formatting

This notebook introduces the new features of PYNQ 2.0 for interacting with the video pipeline. The API has been completely redesigned with high performance image processing applications in mind. To start, download the base overlay and instantiate the HDMI input and output.

```
from pynq.overlays.base import BaseOverlay
from pynq.lib.video import *

base = BaseOverlay("base.bit")
hdmi_in = base.video.hdmi_in
hdmi_out = base.video.hdmi_out
```

## Getting started

First we'll use the default pixel format which is 24 bit-per-pixel BGR formatted data for ease of use with OpenCV.

```
hdmi_in.configure()
hdmi_out.configure(hdmi_in.mode)

hdmi_in.start()
hdmi_out.start()
```

The monitor should turn on and show a blank screen. To pass the image data through we can tie the output to the input. The tie will last until we send something else to be displayed.

```
hdmi_in.tie(hdmi_out)
```

While this provides a fast way of passing video data through the pipeline, there is no way to access or modify the frames. For that we need a loop calling `readframe` and `writeframe`.

```
import time

numframes = 600

start = time.time()
for _ in range(numframes):
    f = hdmi_in.readframe()
    hdmi_out.writeframe(f)
end = time.time()

print("Frames per second: " + str(numframes / (end - start)))
```

Next we can start adding some OpenCV processing into the mix. For all of these examples we are going to use a Laplacian gradient filter. The first loop is going to perform the grayscale conversion in software.
```
import cv2
import numpy as np

numframes = 10

grayscale = np.ndarray(shape=(hdmi_in.mode.height, hdmi_in.mode.width), dtype=np.uint8)
result = np.ndarray(shape=(hdmi_in.mode.height, hdmi_in.mode.width), dtype=np.uint8)

start = time.time()
for _ in range(numframes):
    inframe = hdmi_in.readframe()
    cv2.cvtColor(inframe, cv2.COLOR_BGR2GRAY, dst=grayscale)
    inframe.freebuffer()
    cv2.Laplacian(grayscale, cv2.CV_8U, dst=result)
    outframe = hdmi_out.newframe()
    cv2.cvtColor(result, cv2.COLOR_GRAY2BGR, dst=outframe)
    hdmi_out.writeframe(outframe)
end = time.time()

print("Frames per second: " + str(numframes / (end - start)))
```

## Cleaning up

Finally you must always stop the interfaces when you are done with them. Otherwise bad things can happen when the bitstream is reprogrammed. You can also use the HDMI interfaces in a context manager to ensure that the cleanup is always performed.

```
hdmi_out.close()
hdmi_in.close()
```

## Cacheable and non-cacheable frames

By default frames used by the HDMI subsystem are marked as cacheable, meaning that the CPU cache is available for speeding up software operation at the expense of needing to flush frames prior to handing them off to the video system. This flushing is handled by PYNQ but can still impose a significant performance penalty, particularly on 32-bit ARM architectures. To mitigate this the `cacheable_frames` property can be set to `False` on the `hdmi_in` and `hdmi_out` subsystems. This will improve the performance of passing frames between accelerators at the expense of software libraries operating more slowly or in some cases not working at all.
``` base.download() hdmi_in.configure() hdmi_out.configure(hdmi_in.mode) hdmi_out.cacheable_frames = False hdmi_in.cacheable_frames = False hdmi_out.start() hdmi_in.start() ``` Re-running the plain read-write loop now shows 60 FPS ``` numframes = 600 start = time.time() for _ in range(numframes): f = hdmi_in.readframe() hdmi_out.writeframe(f) end = time.time() print("Frames per second: " + str(numframes / (end - start))) ``` At the expense of much slower OpenCV performance ``` numframes = 10 start = time.time() for _ in range(numframes): inframe = hdmi_in.readframe() cv2.cvtColor(inframe,cv2.COLOR_BGR2GRAY,dst=grayscale) inframe.freebuffer() cv2.Laplacian(grayscale, cv2.CV_8U, dst=result) outframe = hdmi_out.newframe() cv2.cvtColor(result, cv2.COLOR_GRAY2BGR,dst=outframe) hdmi_out.writeframe(outframe) end = time.time() print("Frames per second: " + str(numframes / (end - start))) hdmi_out.close() hdmi_in.close() ``` ## Gray-scale Using the new infrastructure we can delegate the color conversion to the hardware as well as only passing a single grayscale pixel to and from the processor. First reconfigure the pipelines in grayscale mode and tie the two together to make sure everything is working correctly. ``` base.download() hdmi_in.configure(PIXEL_GRAY) hdmi_out.configure(hdmi_in.mode) hdmi_in.cacheable_frames = True hdmi_out.cacheable_frames = True hdmi_in.start() hdmi_out.start() hdmi_in.tie(hdmi_out) ``` Now we can rewrite the loop without the software colour conversion. ``` start = time.time() numframes = 30 for _ in range(numframes): inframe = hdmi_in.readframe() outframe = hdmi_out.newframe() cv2.Laplacian(inframe, cv2.CV_8U, dst=outframe) inframe.freebuffer() hdmi_out.writeframe(outframe) end = time.time() print("Frames per second: " + str(numframes / (end - start))) hdmi_out.close() hdmi_in.close() ``` ## Other modes There are two other 24 bit modes that are useful for interacting with PIL. The first is regular RGB mode. 
``` base.download() hdmi_in.configure(PIXEL_RGB) hdmi_out.configure(hdmi_in.mode, PIXEL_RGB) hdmi_in.start() hdmi_out.start() hdmi_in.tie(hdmi_out) ``` This is useful for easily creating and displaying frames with Pillow. ``` import PIL.Image frame = hdmi_in.readframe() image = PIL.Image.fromarray(frame) image ``` An alternative mode is YCbCr, which is useful for some image processing algorithms or exporting JPEG files. Because we are not changing the number of bits per pixel, we can update the colorspace of the input dynamically. ``` hdmi_in.colorspace = COLOR_IN_YCBCR ``` It's probably worth updating the output colorspace as well to avoid the psychedelic effects. ``` hdmi_out.colorspace = COLOR_OUT_YCBCR ``` Now we can use PIL to read in the frame and perform the conversion back for us. ``` import PIL.Image frame = hdmi_in.readframe() image = PIL.Image.fromarray(frame, "YCbCr") frame.freebuffer() image.convert("RGB") hdmi_out.close() hdmi_in.close() ``` ## Next Steps This notebook has only provided an overview of the base overlay pipeline. One of the reasons for the changes was to make it easier to add hardware accelerated functions by supporting a wider range of pixel formats without software conversion and separating out the HDMI front end from the video DMA. Explore the code in pynq/lib/video.py for more details.
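The context-manager cleanup mentioned under "Cleaning up" can be sketched without a board. `MockHdmi` below is a hypothetical stand-in for an HDMI interface (the real objects come from the PYNQ overlay), illustrating why entering a `with` block guarantees `close()` runs even if an error occurs mid-loop:

```
# MockHdmi is a made-up illustration, not part of the PYNQ API.
class MockHdmi:
    def __init__(self):
        self.closed = False

    def __enter__(self):
        # entering the with-block hands back the interface itself
        return self

    def __exit__(self, exc_type, exc, tb):
        # runs on normal exit AND on exceptions
        self.close()
        return False

    def close(self):
        self.closed = True

hdmi = MockHdmi()
with hdmi:
    pass  # read/write frames here
print(hdmi.closed)  # True
```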
## Deep Learning Regularization 😓Be well prepared that code which worked for me may not work for you any more. It took me so much time tonight to debug, upgrade/install packages, change deprecated functions or just ignore warnings.... All because of the frequent changes in these open source packages. So, when it's your turn to try the code, who knows whether it still works... 💝However, when you are seeing my code, you are lucky! At least I took notes on the things you need to care about, including the solutions. ❣️Also note, for the model evaluation here I didn't evaluate all the testing data, because the labeling time for all those testing images can be huge and I'm really busy. <b>However</b>, you can pay attention to val_acc and val_loss: higher val_acc and lower val_loss are better. Reference: https://www.analyticsvidhya.com/blog/2018/04/fundamentals-deep-learning-regularization-techniques/?utm_source=feedburner&utm_medium=email&utm_campaign=Feed%3A+AnalyticsVidhya+%28Analytics+Vidhya%29 <b>Get data from here</b>: https://datahack.analyticsvidhya.com/contest/practice-problem-identify-the-digits/ ``` %matplotlib inline import os import numpy as np import pandas as pd from imageio import imread from sklearn.metrics import accuracy_score import pylab import tensorflow as tf import keras ``` ### NOTE You may get an error saying it cannot import the module "weakref". This problem did not exist before but just appeared... Here's my solution: 1. Find your tensorflow path by typing `pip show tensorflow` 2. Find tensorflow/python/util/tf_should_use.py, open it 3. Change `from backports import weakref` to `import weakref` 4. Then comment out the line that contains the `finalize()` function; this is for garbage collection, but the finalize function does not exist in weakref in my case.... 😓 5. 
Restart your ipython ``` seed = 10 rng = np.random.RandomState(seed) train = pd.read_csv('digit_recognition/train.csv') test = pd.read_csv('digit_recognition/test.csv') train.head() img_name = rng.choice(train.filename) training_image_path = 'digit_recognition/Images/train/' + img_name training_img = imread(training_image_path, as_gray=True) pylab.imshow(training_img, cmap='gray') pylab.axis('off') pylab.show() training_img[7:9] # store all images as numpy arrays, to make data manipulation easier temp = [] for img_name in train.filename: training_image_path = 'digit_recognition/Images/train/' + img_name training_img = imread(training_image_path, as_gray=True) img = training_img.astype('float32') temp.append(img) train_x = np.stack(temp) train_x /= 255.0 train_x = train_x.reshape(-1, 784).astype('float32') temp = [] for img_name in test.filename: testing_image_path = 'digit_recognition/Images/test/' + img_name testing_img = imread(testing_image_path, as_gray=True) img = testing_img.astype('float32') temp.append(img) test_x = np.stack(temp) test_x /= 255.0 test_x = test_x.reshape(-1, 784).astype('float32') train_x train_y = keras.utils.np_utils.to_categorical(train.label.values) # split into training and validation sets, 7:3 split_size = int(train_x.shape[0]*0.7) train_x, val_x = train_x[:split_size], train_x[split_size:] train_y, val_y = train_y[:split_size], train_y[split_size:] train.label.iloc[split_size:split_size+2] from keras.models import Sequential from keras.layers import Dense # define variables input_num_units = 784 hidden1_num_units = 500 hidden2_num_units = 500 hidden3_num_units = 500 hidden4_num_units = 500 hidden5_num_units = 500 output_num_units = 10 epochs = 10 batch_size = 128 ``` ### NOTE Keras has updated to 2.0. Without updating keras, the way you use the `Dense()` function may keep giving warnings. * Here's the Keras 2.0 documentation: https://keras.io/ * To update keras, type `sudo pip install --upgrade keras==2.1.3`. 
It has to be keras 2.1.3; if it's higher, softmax may get an error below.... (this is why I hate deep learning when you have to use open source!) * Holy s**t, even after the updating, you will get many warnings again, just ignore them.. ``` # Method 1 - Without Regularization import warnings warnings.filterwarnings('ignore') model = Sequential() model.add(Dense(output_dim=hidden1_num_units, input_dim=input_num_units, activation='relu')) model.add(Dense(output_dim=hidden2_num_units, input_dim=hidden1_num_units, activation='relu')) model.add(Dense(output_dim=hidden3_num_units, input_dim=hidden2_num_units, activation='relu')) model.add(Dense(output_dim=hidden4_num_units, input_dim=hidden3_num_units, activation='relu')) model.add(Dense(output_dim=hidden5_num_units, input_dim=hidden4_num_units, activation='relu')) model.add(Dense(output_dim=output_num_units, input_dim=hidden5_num_units, activation='softmax')) model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy']) trained_model_5d = model.fit(train_x, train_y, nb_epoch=epochs, batch_size=batch_size, validation_data=(val_x, val_y)) # one sample evaluation pred = model.predict_classes(test_x) img_name = rng.choice(test.filename) testing_image_path = 'digit_recognition/Images/test/' + img_name testing_img = imread(testing_image_path, as_gray=True) test_index = int(img_name.split('.')[0]) - train.shape[0] print("Prediction is:", pred[test_index]) pylab.imshow(testing_img, cmap='gray') pylab.axis('off') pylab.show() from keras import regularizers # Method 2 - With L2 regularizer model = Sequential([ Dense(output_dim=hidden1_num_units, input_dim=input_num_units, activation='relu', kernel_regularizer=regularizers.l2(0.0001)), # lambda = 0.0001 Dense(output_dim=hidden2_num_units, input_dim=hidden1_num_units, activation='relu', kernel_regularizer=regularizers.l2(0.0001)), Dense(output_dim=hidden3_num_units, 
input_dim=hidden2_num_units, activation='relu', kernel_regularizer=regularizers.l2(0.0001)), Dense(output_dim=hidden4_num_units, input_dim=hidden3_num_units, activation='relu', kernel_regularizer=regularizers.l2(0.0001)), Dense(output_dim=hidden5_num_units, input_dim=hidden4_num_units, activation='relu', kernel_regularizer=regularizers.l2(0.0001)), Dense(output_dim=output_num_units, input_dim=hidden5_num_units, activation='softmax'), ]) model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy']) trained_model_5d = model.fit(train_x, train_y, nb_epoch=epochs, batch_size=batch_size, validation_data=(val_x, val_y)) # Method 3 - L1 Regularizer model = Sequential([ Dense(output_dim=hidden1_num_units, input_dim=input_num_units, activation='relu', kernel_regularizer=regularizers.l1(0.0001)), Dense(output_dim=hidden2_num_units, input_dim=hidden1_num_units, activation='relu', kernel_regularizer=regularizers.l1(0.0001)), Dense(output_dim=hidden3_num_units, input_dim=hidden2_num_units, activation='relu', kernel_regularizer=regularizers.l1(0.0001)), Dense(output_dim=hidden4_num_units, input_dim=hidden3_num_units, activation='relu', kernel_regularizer=regularizers.l1(0.0001)), Dense(output_dim=hidden5_num_units, input_dim=hidden4_num_units, activation='relu', kernel_regularizer=regularizers.l1(0.0001)), Dense(output_dim=output_num_units, input_dim=hidden5_num_units, activation='softmax'), ]) model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy']) trained_model_5d = model.fit(train_x, train_y, nb_epoch=epochs, batch_size=batch_size, validation_data=(val_x, val_y)) # method 4 - Dropout from keras.layers.core import Dropout model = Sequential([ Dense(output_dim=hidden1_num_units, input_dim=input_num_units, activation='relu'), Dropout(0.25), # the drop probability is 0.25 Dense(output_dim=hidden2_num_units, input_dim=hidden1_num_units, activation='relu'), Dropout(0.25), Dense(output_dim=hidden3_num_units, 
input_dim=hidden2_num_units, activation='relu'), Dropout(0.25), Dense(output_dim=hidden4_num_units, input_dim=hidden3_num_units, activation='relu'), Dropout(0.25), Dense(output_dim=hidden5_num_units, input_dim=hidden4_num_units, activation='relu'), Dropout(0.25), Dense(output_dim=output_num_units, input_dim=hidden5_num_units, activation='softmax'), ]) model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy']) trained_model_5d = model.fit(train_x, train_y, nb_epoch=epochs, batch_size=batch_size, validation_data=(val_x, val_y)) # method 5 - early stopping from keras.callbacks import EarlyStopping from keras.layers.core import Dropout import warnings warnings.filterwarnings('ignore') model = Sequential([ Dense(output_dim=hidden1_num_units, input_dim=input_num_units, activation='relu'), Dropout(0.25), # the drop probability is 0.25 Dense(output_dim=hidden2_num_units, input_dim=hidden1_num_units, activation='relu'), Dropout(0.25), Dense(output_dim=hidden3_num_units, input_dim=hidden2_num_units, activation='relu'), Dropout(0.25), Dense(output_dim=hidden4_num_units, input_dim=hidden3_num_units, activation='relu'), Dropout(0.25), Dense(output_dim=hidden5_num_units, input_dim=hidden4_num_units, activation='relu'), Dropout(0.25), Dense(output_dim=output_num_units, input_dim=hidden5_num_units, activation='softmax'), ]) model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy']) trained_model_5d = model.fit(train_x, train_y, nb_epoch=epochs, batch_size=batch_size, validation_data=(val_x, val_y), callbacks = [EarlyStopping(monitor='val_acc', patience=2)]) # method 6 - Data Augmentation from keras.preprocessing.image import ImageDataGenerator datagen = ImageDataGenerator(zca_whitening=True) # zca_whitening as the argument, will highlight the outline of each digit train = pd.read_csv('digit_recognition/train.csv') temp = [] for img_name in train.filename: training_image_path = 'digit_recognition/Images/train/' + img_name 
training_img = imread(training_image_path, as_gray=True) img = training_img.astype('float32') temp.append(img) train_x = np.stack(temp) # The difference with above starts from here: train_x = train_x.reshape(train_x.shape[0], 1, 28, 28) train_x = train_x.astype('float32') # fit parameters from data ## fit the training data in order to augment datagen.fit(train_x) # This will often cause the kernel to die on my machine # data splitting split_size = int(train_x.shape[0]*0.7) train_x, val_x = train_x[:split_size], train_x[split_size:] train_y, val_y = train_y[:split_size], train_y[split_size:] # train the model with drop out model = Sequential([ Dense(output_dim=hidden1_num_units, input_dim=input_num_units, activation='relu'), Dropout(0.25), # the drop probability is 0.25 Dense(output_dim=hidden2_num_units, input_dim=hidden1_num_units, activation='relu'), Dropout(0.25), Dense(output_dim=hidden3_num_units, input_dim=hidden2_num_units, activation='relu'), Dropout(0.25), Dense(output_dim=hidden4_num_units, input_dim=hidden3_num_units, activation='relu'), Dropout(0.25), Dense(output_dim=hidden5_num_units, input_dim=hidden4_num_units, activation='relu'), Dropout(0.25), Dense(output_dim=output_num_units, input_dim=hidden5_num_units, activation='softmax'), ]) model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy']) trained_model_5d = model.fit(train_x, train_y, nb_epoch=epochs, batch_size=batch_size, validation_data=(val_x, val_y)) ``` ### Observations 1. Comparing the val_loss and val_acc between each regularizer and the first method, we can see dropout works best and it is the only one that has lower val_loss and higher val_acc. 2. In the experiments here, after we applied early stopping on dropout it didn't give better results; maybe it needs more `patience`, because if we observe each epoch, the val_loss is not simply dropping along the way, it could increase in the middle and then drop again. 
This is why we need to be careful about the number of epochs/patience 3. L1, L2 tend to give higher val_loss, especially L1 4. On my machine, with limited memory now, data augmentation failed, it will simply kill the kernel all the time. No wonder dropout is the most frequently used regularizer.....
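To make observation 3 more concrete, here is a toy numpy sketch (illustrative weight values, not fitted ones) of the penalty terms that `regularizers.l2(0.0001)` and `regularizers.l1(0.0001)` add to the loss:

```
import numpy as np

lam = 0.0001  # the lambda used in the kernel_regularizer calls above
w = np.array([0.5, -1.0, 2.0])  # made-up layer weights, for illustration only

# L2 adds lambda * sum(w^2); L1 adds lambda * sum(|w|)
l2_penalty = lam * float(np.sum(w ** 2))
l1_penalty = lam * float(np.sum(np.abs(w)))
print(round(l2_penalty, 6))  # 0.000525
print(round(l1_penalty, 6))  # 0.00035
```

One design note: L1's gradient has constant magnitude, so it pushes small weights to exactly zero, while L2 only shrinks them proportionally; that aggressive sparsification is one plausible reason L1 gives the highest val_loss here.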
### Maricopa Agricultural Center Season 6 ### Citation for Input Trait Data LeBauer, David et al. (2020), Data From: TERRA-REF, An open reference data set from high resolution genomics, phenomics, and imaging sensors, v6, Dryad, Dataset, https://doi.org/10.5061/dryad.4b8gtht99 ##### Environmental weather data can be downloaded from the MAC weather station [website](https://cals.arizona.edu/azmet/06.htm) Please email dlebauer@email.arizona.edu or ejcain@email.arizona.edu with any questions or comments, or create an issue in this [repository](https://github.com/genophenoenvo/terraref-datasets) ``` import datetime import matplotlib.pyplot as plt import numpy as np import os import pandas as pd import requests def download_csv(url, folder_name, file_name): response = requests.get(url) with open(os.path.join(folder_name, file_name), 'wb') as f: f.write(response.content) def read_in_csv(folder_name, file_name): df = pd.read_csv(folder_name + '/' + file_name, low_memory=False) return df def plot_hist(df, value_column, trait_column): trait_name = df[trait_column].unique()[0] return df[value_column].hist(color='navy').set_xlabel(trait_name); def check_for_nulls_duplicates(df): print( f'Sum of null values:\n{df.isnull().sum()}\n-----\n' f'Value counts for duplicates:\n{df.duplicated().value_counts()}' ) def check_unique_values(df): for col in df.columns: if df[col].nunique() < 5: print(f'{df[col].nunique()} unique value(s) for {col} column: {df[col].unique()}') else: print(f'{df[col].nunique()} values for {col} column') def extract_range_column_values(working_df, plot_column): new_df = working_df.copy() new_df['range'] = new_df[plot_column].str.extract("Range (\d+)").astype(int) new_df['column'] = new_df[plot_column].str.extract("Column (\d+)").astype(int) return new_df def convert_datetime_column(working_df, date_column): new_datetimes = pd.to_datetime(working_df[date_column]) new_df_0 = working_df.drop(labels=date_column, axis=1) new_df_1 = new_df_0.copy() new_df_1['date'] 
= new_datetimes return new_df_1 def rename_value_column(working_df, value_column, trait_column): trait = working_df[trait_column].unique()[0] new_df_0 = working_df.rename({value_column: trait}, axis=1) new_df_1 = new_df_0.drop(labels=trait_column, axis=1) return new_df_1 def reorder_columns(working_df, new_col_order_list): working_df_1 = pd.DataFrame(data=working_df, columns=new_col_order_list) return working_df_1 def save_to_csv_with_timestamp(df, name_of_dataset): timestamp = datetime.datetime.now().replace(microsecond=0).isoformat() output_filename = ('data/processed/' + f'{name_of_dataset}_' + f'{timestamp}.csv').replace(':', '') df.to_csv(output_filename, index=False) def save_to_csv_without_timestamp(list_of_dfs, list_of_output_filenames): for i,j in zip(list_of_dfs, list_of_output_filenames): i.to_csv(j, index=False) ``` #### A. Aboveground Dry Biomass ``` folder_name = 'data' if not os.path.exists(folder_name): os.makedirs(folder_name) aboveground_dry_biomass_s6_url = 'https://de.cyverse.org/dl/d/1333BF0F-9462-4F0A-8D35-2B446F0CC989/season_6_aboveground_dry_biomass_manual.csv' aboveground_dry_biomass_s6_input_filename = 'aboveground_dry_biomass_s6.csv' download_csv(aboveground_dry_biomass_s6_url, folder_name=folder_name, file_name=aboveground_dry_biomass_s6_input_filename) adb_0 = read_in_csv(folder_name=folder_name, file_name=aboveground_dry_biomass_s6_input_filename) # print(adb_0.shape) # adb_0.head() # plot_hist(adb_0, 'mean', 'trait') # check_for_nulls_duplicates(adb_0) adb_1 = extract_range_column_values(adb_0, 'plot') # print(adb_1.shape) # adb_1.sample(n=3) ``` #### Add Blocking Heights ``` bh_s6_url = 'https://de.cyverse.org/dl/d/73900334-1A0F-4C56-8F96-FAC303671431/s6_blocks.csv.txt' bh_s6_input_filename = 'blocking_heights_s6.csv' download_csv(bh_s6_url, folder_name=folder_name, file_name=bh_s6_input_filename) bh_df = read_in_csv(folder_name=folder_name, file_name=bh_s6_input_filename) # print(bh_df.shape) # bh_df.head() # 
bh_df.height_block.value_counts() # check_for_nulls_duplicates(bh_df) bh_df_1 = bh_df.dropna(axis=0, how='all') # bh_df_1.shape ``` #### Merge blocking heights with aboveground dry biomass dataframe ``` adb_2 = adb_1.merge(bh_df_1, how='left', left_on='plot', right_on='plot') # print(adb_2.shape) # adb_2.head(3) adb_3 = convert_datetime_column(adb_2, 'date') # print(adb_3.shape) # adb_3.head(3) adb_4 = rename_value_column(adb_3, 'mean', 'trait') # print(adb_4.shape) # adb_4.tail(3) cols_to_drop = ['checked', 'author', 'season', 'treatment'] adb_5 = adb_4.drop(labels=cols_to_drop, axis=1) # print(adb_5.shape) # adb_5.head(3) ``` ##### Add units (kg/ha) column to aboveground dry biomass dataset ``` adb_6 = adb_5.copy() adb_6['units'] = 'kg/ha' # print(adb_6.shape) # adb_6.tail(3) new_col_order = ['date', 'plot', 'range', 'column', 'scientificname', 'genotype', 'height_block', 'method', 'aboveground_dry_biomass', 'units', 'method_type'] adb_7 = reorder_columns(adb_6, new_col_order) # print(adb_7.shape) # adb_7.head(3) ``` #### B. 
Canopy Height - Sensor ``` canopy_height_s6_url = 'https://de.cyverse.org/dl/d/D069737A-76F3-4B69-A213-4B8811A357C0/season_6_canopy_height_sensor.csv' canopy_height_s6_input_filename = 'canopy_height_s6.csv' download_csv(canopy_height_s6_url, folder_name=folder_name, file_name=canopy_height_s6_input_filename) ch_0 = read_in_csv(folder_name=folder_name, file_name=canopy_height_s6_input_filename) # print(ch_0.shape) # ch_0.head() # check_for_nulls_duplicates(ch_0) ``` #### Drop duplicates ``` ch_1 = ch_0.drop_duplicates(ignore_index=True) # print(ch_1.shape) # check_for_nulls_duplicates(ch_1) ch_2 = extract_range_column_values(ch_1, 'plot') # print(ch_2.shape) # ch_2.sample(n=3) ch_3 = convert_datetime_column(ch_2, 'date') # print(ch_3.shape) # ch_3.dtypes ch_4 = rename_value_column(ch_3, 'mean', 'trait') # print(ch_4.shape) # ch_4.tail(3) # add units (cm) to column name ch_5 = ch_4.rename({'canopy_height': 'canopy_height_cm'}, axis=1) # ch_5.sample(n=3) ``` #### Add blocking heights ``` # bh_df_1.head(3) print(bh_df_1['plot'].nunique()) print(ch_0['plot'].nunique()) ``` There is not a height block provided for every plot, so the final canopy height dataframe will contain some nulls. ``` ch_6 = ch_5.merge(bh_df_1, how='left', left_on='plot', right_on='plot') # print(ch_6.shape) # ch_6.tail(3) ch_7 = ch_6.drop(labels=['checked', 'author', 'season', 'treatment'], axis=1) # print(ch_7.shape) # ch_7.tail(3) # ch_7.isnull().sum() new_col_order = ['date', 'plot', 'range', 'column', 'scientificname', 'genotype', 'method', 'canopy_height_cm', 'height_block', 'method_type'] ch_8 = reorder_columns(ch_7, new_col_order) # print(ch_8.shape) # ch_8.head(3) ``` #### IV. Write derived data to csv files ``` list_of_dfs = [adb_7, ch_8] list_of_file_output_names = ['mac_season_6_aboveground_dry_biomass.csv', 'mac_season_6_canopy_height_sensor.csv'] save_to_csv_without_timestamp(list_of_dfs, list_of_file_output_names) ```
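The left-merge behaviour relied on above (plots missing from the blocking-height table come through with nulls) can be seen in a toy example. The plot names and values below are made up for illustration, not taken from the season-6 files:

```
import pandas as pd

# one plot has a height block, the other does not
heights = pd.DataFrame({'plot': ['Range 1 Column 1'],
                        'height_block': ['short']})
canopy = pd.DataFrame({'plot': ['Range 1 Column 1', 'Range 2 Column 5'],
                       'canopy_height_cm': [101.0, 87.5]})

# how='left' keeps every canopy row; unmatched plots get NaN in height_block
merged = canopy.merge(heights, how='left', on='plot')
print(merged['height_block'].isnull().sum())  # 1
```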
# Exploring Observation Data From TILDE, Application to DART Data ## &nbsp;Table of contents ### 1. Introduction ### 2. Building a Query for a specific sensor code/stream ### 3. Building a Query without sensor code/stream ### 4. Building a Query for the latest data ### 5. Building a Query for aggregated data ### 6. Getting the data using ObsPy ## &nbsp;1. Introduction In this tutorial we will be learning how to use Python to access the TILDE API `data` endpoint. To highlight the different functionalities and the statistics available we will be using the DART (Deep-ocean Assessment and Reporting of Tsunamis) dataset. TILDE is the GeoNet API (Application Programming Interface) for accessing DART time series data. You do not need to know anything about APIs to use this tutorial. If you would like more info see https://tilde.geonet.org.nz/. This tutorial assumes you have basic knowledge of Python. ###### About GeoNet DART data GeoNet uses the 12 DART Tsunameters deployed offshore New Zealand and around the Southwestern Pacific Ocean to monitor ocean height. When a change of a certain magnitude has been detected, the buoy will "trigger" and go into a heightened detection mode. The DARTs have two operational reporting modes: standard and event. When in standard reporting mode, the BPR (bottom pressure recorder) and buoy system send four six-hour bundles of 15 minute water height values. When in event reporting mode, BPR data are sampled at 15 second intervals and are sent more regularly. The buoy surface location (latitude and longitude) will also be sent daily. <br> TILDE provides access to the 15 minute and 15 second sampled data. For more DART information see the GeoNet page: https://www.geonet.org.nz/tsunami/dart ## &nbsp;2. 
Building a Query for a specific sensor code/stream ###### Import required modules and set the source URL ``` import requests import json import pandas as pd import matplotlib.pyplot as plt from io import StringIO source = 'https://tilde.geonet.org.nz' ``` ### Request data for a specific sensor/stream with date range, returning a CSV file This query returns the observations of the specified data held in TILDE. <br> The endpoint we are going to use is `https://tilde.geonet.org.nz/v1/data/`. The minimum required parameters are: - domain = `dart` - key = `NZE/water-height/40/WTZ`, this is 15 minute sampled data for the station NZE - startdate = '2021-03-05' - enddate = '2021-03-05' We will ask for data for 2021 March 05. We begin by setting the URL with these new parameters. ``` url = source+'/v1/data/dart/NZE/water-height/40/WTZ/2021-03-05/2021-03-05' ``` We will now query the URL and ask for a CSV format to be returned. ``` r = requests.get(url, headers={'Accept':'text/csv'}) print (r) ``` We use `requests.get` to retrieve the data from the URL. The response status code says whether we were successful in getting the requested data and, if we were unsuccessful, why: <ul> <li>200 -- everything went okay, and the result has been returned (if any) <li>301 -- the server is redirecting you to a different endpoint. This can happen when a company switches domain names, or an endpoint name is changed. <li>400 -- the server thinks you made a bad request. This can happen when you don't send along the right data, among other things. <li>404 -- the resource you tried to access wasn't found on the server. </ul> To work with the observation data we will use python's pandas module (https://pandas.pydata.org/). We will now store the response of our request in a pandas dataframe (`df`), using `pd.read_csv`. By using `parse_dates=['time']` we can convert the 'time' to a datetime and with `index_col` we can set the time as the index of the dataframe. 
More information on `pd.read_csv` can be found here: https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.read_csv.html. We need to use the `StringIO` function with the text returned from our query. By printing the beginning of the result (`df.head()`) we can also see the result of this new index column. ``` df = pd.read_csv(StringIO(r.text),parse_dates=['time'], index_col='time') df.head() ``` #### Data Summary By using `df.describe` we can summarise the returned data, as this feature generates descriptive statistics from dataframes. From the result we can see that there are 59 values (and errors), and all of the qc values are currently undefined. By default, we also get to see the mean, standard deviation, minimum, maximum, and some percentile values. ``` df.describe() ``` #### Basic Data Plot By using the `value` column from our dataframe we are able to plot the data against time. For this plot we will use dots and a line so periods without data are obvious. As we are currently plotting low rate data (WTZ), when there is a gap in the data it is likely we have high rate data (UTZ) for this period instead. ``` fig,ax = plt.subplots(figsize=(15,5)) df['value'].plot(ax=ax, color='#41b0f0', label='low rate (15 mins)') df['value'].plot(ax=ax, color='blue', marker='.', linestyle='None') #stop exponential format for y-axis label ax.ticklabel_format(axis='y', useOffset=False, style='plain') plt.legend(loc='best') ``` ## &nbsp;3. Building a Query without sensor code/stream TILDE can also provide all of the available data for a date range without having to specify the sensor code and stream parameters. The minimum required parameters are: - domain = `dart` - key = `NZE/water-height/-/-` - startdate = '2021-03-05' - enddate = '2021-03-05' Using the above parameters will return data from all sensor codes (for NZE there is only sensor code 40 for this date) and all data types (WTZ and UTZ). These will be returned in CSV format. 
The available location codes for each station are available in GeoNet's github repository (https://github.com/GeoNet/delta/blob/main/network/sites.csv). Sensor codes will be provided in the `Location` column (and for DART those will be either 40 or 41). <br> We will begin by following similar steps to Section 2: changing the URL with the new parameters, querying the URL and asking for the CSV format to be returned. ``` url = source+'/v1/data/dart/NZE/water-height/-/-/2021-03-05/2021-03-05' r = requests.get(url, headers={'Accept':'text/csv'}) df = pd.read_csv(StringIO(r.text),parse_dates=['time'], index_col='time') ``` ##### Sorted by type not date By printing the top 2 rows of the dataframe (`df.head(2)`) and the bottom 2 rows (`df.tail(2)`), we can see that the data returned is sorted by type (UTZ/WTZ) and not by date. ``` df.head(2) df.tail(2) ``` #### Separate UTZ and WTZ data into different dataframes <br> By separating into two distinct dataframes using the `type` column we can then separate out the UTZ and the WTZ values. ``` dfw = df[df['type']=='WTZ'] dfu = df[df['type']=='UTZ'] ``` #### Basic visualization <br> As the two datatypes (UTZ and WTZ) have now been separated, we can plot them with different colours. This is similar to the plot above, but now it is possible to see the low rate and the high rate data and how they fit together. ``` fig,ax = plt.subplots(figsize=(15,5)) ax.plot(dfw['value'], color='#41b0f0', label='low rate (15 mins)') ax.plot(dfw['value'], color='blue', marker='.', linestyle='None') ax.plot(dfu['value'], color='#f07b41', marker='.', linestyle='None', label='high rate (15 secs)') #stop exponential format for y-axis label ax.ticklabel_format(axis='y', useOffset=False, style='plain') plt.legend(loc='best') ``` ## &nbsp;4. Building a Query for the latest data This query returns the observations of the specified data held in TILDE. The query is `https://tilde.geonet.org.nz/v1/data/`. 
The minimum required parameters are: - domain = `dart` - key = `NZE/water-height/40/WTZ`, this is 15 minute sampled data - startdate = 'latest' - enddate = '30d' We will begin by following the same steps as before: changing the URL with the new parameters, querying the URL and asking for the CSV format to be returned. This request will return data in a CSV format for the last 30 days. ``` url = source+'/v1/data/dart/NZE/water-height/40/WTZ/latest/30d' r = requests.get(url, headers={'Accept':'text/csv'}) df = pd.read_csv(StringIO(r.text),parse_dates=['time'], index_col='time') ``` We can see in the tail of the dataframe that we have the `latest` or most recent data. ``` df.tail() ``` #### Data volume <br> By using `len(df.index)` we can easily generate a row count of the dataframe. This could be useful to see how many values we received in a certain period of time for a station. As we are looking at the low rate data, this is likely to be quite predictable, as this is the data that we regularly receive; however, for the high rate (triggered) data we would likely expect far fewer values, unless there has been a lot of recent activity. In this case, for the last 30 days we have received nearly 3000 observations of low rate data. ``` len(df.index) ``` #### Basic visualization of the data <br> Plotting the low rate data over the 30 day period allows us to visualise the different tidal periods. ``` fig,ax = plt.subplots(figsize=(15,5)) df['value'].plot(ax=ax, color='#41b0f0', label='low rate (15 mins)') #stop exponential format for y-axis label ax.ticklabel_format(axis='y', useOffset=False, style='plain') plt.legend(loc='best') ``` ## &nbsp;5. Building a query for aggregated data <br> When requesting a large amount of data over a long time period, the query can be optimized for quick visualisation using the optional `aggregationPeriod` and `aggregationFunction` parameters. 
We will use the same example as above, but for an 8 month range (2021-01-01 to 2021-08-31), and use an aggregation period of 1 day to return a daily average of the values. Notice that, due to the aggregation, our dataframe's index column has time values 00:00:00. ``` url = source+'/v1/data/dart/NZE/water-height/-/-/2021-01-01/2021-08-31?aggregationPeriod=1d&aggregationFunction=avg' r = requests.get(url, headers={'Accept':'text/csv'}) df = pd.read_csv(StringIO(r.text),parse_dates=['time'], index_col='time') df.tail(5) fig,ax = plt.subplots(figsize=(15,5)) df['value'].plot(ax=ax, color='#41b0f0', label='daily average') ax.ticklabel_format(axis='y', useOffset=False, style='plain') plt.legend(loc='best') ``` ## &nbsp;6. Getting the data using ObsPy ObsPy (https://github.com/obspy/obspy/wiki) is a python module used for analysis of seismological data. By getting the data into ObsPy we can use all of the functionality that comes with it. To enable us to do this, we will use the TSPAIR format. More information on this can be found here: https://docs.obspy.org/packages/autogen/obspy.io.ascii.core._write_tspair.html To begin, we will create a dataframe column that is formatted as needed. A change of formatting is required for the time series file, as obspy modules can't read it as it is, so we will change the format to be like this: YYYY-MM-DDTHH:MM:SS ('%Y-%m-%dT%H:%M:%S'). ``` #importing the obspy read module from obspy import read url = source+'/v1/data/dart/NZE/water-height/40/WTZ/latest/30d' r = requests.get(url, headers={'Accept':'text/csv'}) df = pd.read_csv(StringIO(r.text),parse_dates=['time'], index_col='time') df['tseries'] = df.index.strftime('%Y-%m-%dT%H:%M:%S') #print(df['tseries']) ``` Next we need to generate a header for the time-series file, where we specify a few parameters. This is required so that ObsPy can read the file, and has important data such as the sampling rate. TSPAIR is a simple ASCII time series format. 
Each continuous time series segment (no gaps or overlaps) is represented with a header line followed by data samples in time-sample pairs. There are no restrictions on how the segments are organized into files; a file might contain a single segment or many, concatenated segments either for the same channel or many different channels.

Header lines have the general form: TIMESERIES SourceName, # samples, # sps, Time, Format, Type, Units.

The sourcename should be of the format `Net_Sta_Loc_Chan_Qual`, so for NZE this is `NZ_NZE_40_WTZ_R`. For the number of samples, we can use `len(df.index)` as we used above. For the number of samples per second, as we are using low rate data (WTZ) sampled every 15 minutes (900 seconds), this is 1/900 ≈ 0.0011111 samples per second. For time, we are using the time dataframe column that we generated above. The format is TSPAIR, the datatype is a float and the units are in `mm`.

```
sourcename = 'NZ_NZE_40_WTZ_R'
samples = len(df.index)
sps = 0.0011111111111111111
time = df['tseries'][0]
dformat = 'TSPAIR'
dtype = 'FLOAT'
units = 'mm'

headerstr = 'TIMESERIES '+sourcename+', '+str(samples)+' samples, '+str(sps)+' sps, '+time+', TSPAIR, FLOAT, mm\n'
```

First we open a new file called tspair.dat, then we write the appropriate header string, and then using `df.to_csv` we can write the time-series data to the same file. Finally, we close the file.

```
f = open('tspair.dat', 'w')
f.write(headerstr)
df.to_csv(f, columns=['tseries', 'value'], sep=' ', index=False, header=False, mode='a')
f.close()
```

We can now read the file `tspair.dat` as an ObsPy stream using `read()`, where the file format is TSPAIR as specified when we generated the header string.

```
st = read('tspair.dat', format='TSPAIR')
```

From this stream we can then pull out the first trace (`tr`) and print its statistics. These statistics are generated from the header string that we made beforehand, which is why it is important that those details are correct.
``` tr=st[0] tr.stats ``` As a final step, we can also plot this trace and see how it compares to the waveform that we generated at the end of Section 4. ``` tr.plot() ```
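The hard-coded header fields above can also be assembled by a small helper that derives the sample rate from the sampling interval — a sketch (`tspair_header` is our own hypothetical function, not part of ObsPy):

```python
# Sketch: build a TSPAIR header line from a sampling interval.
# `tspair_header` is a hypothetical helper, not an ObsPy function.
def tspair_header(sourcename, samples, interval_s, start_time,
                  dformat="TSPAIR", dtype="FLOAT", units="mm"):
    sps = 1.0 / interval_s  # e.g. 15-minute data -> 1/900 sps
    return (f"TIMESERIES {sourcename}, {samples} samples, {sps} sps, "
            f"{start_time}, {dformat}, {dtype}, {units}\n")

hdr = tspair_header("NZ_NZE_40_WTZ_R", 2880, 15 * 60, "2021-01-01T00:00:00")
print(hdr)
```

Deriving `sps` this way avoids transcription errors in the hard-coded value, which ObsPy relies on when computing trace timing.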
github_jupyter
## _*Quantum SVM (variational method)*_

The QSVMKernel notebook here demonstrates a kernel-based approach. This notebook shows a variational method. For further information please see: [https://arxiv.org/pdf/1804.11326.pdf](https://arxiv.org/pdf/1804.11326.pdf)

**This notebook shows the SVM implementation based on the variational method.**

In this file, we show two ways of using the quantum variational method: (1) the non-programming way and (2) the programming way.

### Part I: non-programming way.

In the non-programming way, we write a JSON-like configuration, which defines how the svm instance is internally constructed. After the execution, it returns a JSON-like output, which carries the important information (e.g., the details of the svm instance) and the processed results.

```
from datasets import *
from qiskit_aqua.utils import split_dataset_to_data_and_labels, map_label_to_class_name
from qiskit import Aer
from qiskit_aqua.input import SVMInput
from qiskit_aqua import run_algorithm, QuantumInstance
from qiskit_aqua.algorithms import QSVMVariational
from qiskit_aqua.components.optimizers import SPSA
from qiskit_aqua.components.feature_maps import SecondOrderExpansion
from qiskit_aqua.components.variational_forms import RYRZ
```

First we prepare the dataset, which is used for training, testing and the final prediction.

*Note: You can easily switch to a different dataset, such as the Breast Cancer dataset, by replacing 'ad_hoc_data' with 'Breast_cancer' below.*

```
feature_dim = 2  # dimension of each data point
training_dataset_size = 20
testing_dataset_size = 10
random_seed = 10598
shots = 1024

sample_Total, training_input, test_input, class_labels = ad_hoc_data(
    training_size=training_dataset_size,
    test_size=testing_dataset_size,
    n=feature_dim, gap=0.3, PLOT_DATA=True)
datapoints, class_to_label = split_dataset_to_data_and_labels(test_input)
print(class_to_label)
```

Now we create the svm in the non-programming way.
In the following json, we configure:
- the algorithm name
- the variational form
- the feature map
- the optimizer

```
params = {
    'problem': {'name': 'svm_classification', 'random_seed': 10598},
    'algorithm': {'name': 'QSVM.Variational', 'override_SPSA_params': True},
    'backend': {'shots': 1024},
    'optimizer': {'name': 'SPSA', 'max_trials': 200, 'save_steps': 1},
    'variational_form': {'name': 'RYRZ', 'depth': 3},
    'feature_map': {'name': 'SecondOrderExpansion', 'depth': 2}
}

svm_input = SVMInput(training_input, test_input, datapoints[0])
backend = Aer.get_backend('qasm_simulator')
```

With everything set up, we can now run the algorithm. For the testing, the result includes the details and the success ratio. For the prediction, the result includes the predicted labels.

```
result = run_algorithm(params, svm_input, backend=backend)
print("testing success ratio: ", result['testing_accuracy'])
print("predicted classes:", result['predicted_classes'])
```

### Part II: programming way.

We construct the svm instance directly from the classes. The programming way offers the users better accessibility, e.g., the users can access the internal state of the svm instance or invoke its methods.

Now we create the svm in the programming way:
- We build the optimizer instance (required by the svm instance) by instantiating the class SPSA.
- We build the feature map instance (required by the svm instance) by instantiating the class SecondOrderExpansion.
- We build the variational form instance (required by the svm instance) by instantiating the class RYRZ.
- We build the svm instance by instantiating the class QSVMVariational.
```
backend = Aer.get_backend('qasm_simulator')

optimizer = SPSA(max_trials=100, c0=4.0, skip_calibration=True)
optimizer.set_options(save_steps=1)
feature_map = SecondOrderExpansion(num_qubits=feature_dim, depth=2)
var_form = RYRZ(num_qubits=feature_dim, depth=3)
svm = QSVMVariational(optimizer, feature_map, var_form, training_input, test_input)
quantum_instance = QuantumInstance(backend, shots=shots, seed=random_seed, seed_mapper=random_seed)
```

Now we run it.

```
result = svm.run(quantum_instance)
print("testing success ratio: ", result['testing_accuracy'])
```

Different from the non-programming way, the programming way allows the users to invoke APIs upon the svm instance directly. In the following, we invoke the API "predict" upon the trained svm instance to predict the labels for newly provided data. We use the trained model to evaluate data directly; the instance stores a label_to_class and a class_to_label mapping to help convert between labels and class names.

```
predicted_probs, predicted_labels = svm.predict(datapoints[0])
predicted_classes = map_label_to_class_name(predicted_labels, svm.label_to_class)
print("prediction: {}".format(predicted_labels))
```
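The label-to-class bookkeeping that `map_label_to_class_name` performs can be sketched with plain dictionaries — a simplified stand-in, not the actual Aqua implementation:

```python
# Sketch of label <-> class-name conversion (simplified stand-in,
# not the actual qiskit_aqua implementation).
class_to_label = {'A': 0, 'B': 1}
label_to_class = {v: k for k, v in class_to_label.items()}

predicted_labels = [0, 1, 1, 0]
predicted_classes = [label_to_class[l] for l in predicted_labels]
print(predicted_classes)  # ['A', 'B', 'B', 'A']
```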
# Building a blockchain in python

Inspired from [Python Tutorial: Build A Blockchain In < 60 Lines of Code](https://medium.com/coinmonks/python-tutorial-build-a-blockchain-713c706f6531) and [Learn Blockchains by Building One](https://hackernoon.com/learn-blockchains-by-building-one-117428612f46).

```
import datetime
import hashlib
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
```

## The SHA-256 function

```
hashlib.sha256(b"Moritz Voss is the man").hexdigest()
```

## The block class

```
class Block:
    index = 0                            # Index of the block inside the chain
    timestamp = datetime.datetime.now()  # Time of creation of the block
    nonce = 0                            # Solution to the cryptopuzzle
    transactions = []                    # Transaction data
    mined = False                        # Boolean set to True whenever the problem has been solved
    previous_hash = 0x0                  # Hash of the previous block

    def __init__(self, transactions):
        self.transactions = transactions
        self.timestamp = datetime.datetime.now()

    # Compute the hash of the block data
    def hash(self):
        h = hashlib.sha256()
        h.update(
            str(self.nonce).encode('utf-8') +
            str(self.transactions).encode('utf-8') +
            str(self.previous_hash).encode('utf-8')
        )
        return h.hexdigest()

    # Add a new transaction to a block
    def new_transaction(self, sender, recipient, amount, fee):
        transaction = {
            'sender': sender,
            'recipient': recipient,
            'amount': amount,
            'fee': fee
        }
        self.transactions.append(transaction)

    # Print the block info
    def __str__(self):
        return "Block Height: " + str(self.index) + \
               "\nBlock Hash: " + str(self.hash()) + \
               "\nTime:" + str(self.timestamp) + \
               "\nBlock data: " + str(self.transactions) + \
               "\nMined: " + str(self.mined) + \
               "\nPrevious block hash: " + str(self.previous_hash) + "\n--------------"

# Create a block with a couple of transactions
B1 = Block([])
B1.new_transaction("Coinbase", "Satoshi", 100, 0)
B1.new_transaction("Satoshi", "Pierre-O", 5, 2)
print(B1)
```

## The blockchain class

```
class Blockchain:
    diff = 0
    maxNonce = 2**32
    target = 2 **
(256-diff)
    reward = 50
    miner = "Miner"

    def __init__(self, genesis_block):
        self.chain = [genesis_block]
        self.pending_transactions = []

    def new_transaction(self, sender, recipient, amount, fee):
        transaction = {
            'sender': sender,
            'recipient': recipient,
            'amount': amount,
            'fee': fee
        }
        self.pending_transactions.append(transaction)

    # Append a new block holding the pending transactions to the chain
    def add_block(self):
        current_block = self.chain[-1]
        new_block = Block(self.pending_transactions)
        new_block.index = current_block.index + 1
        new_block.previous_hash = current_block.hash()
        self.chain.append(new_block)
        self.pending_transactions = []

    def adjust_difficulty(self, new_diff):
        self.diff = new_diff
        self.target = 2 ** (256-new_diff)

    def halve_reward(self):
        self.reward = self.reward / 2

    def mine(self):
        block = self.chain[-1]
        target = self.target
        if block.transactions:
            fee = pd.DataFrame.from_records(block.transactions).fee.sum()
        else:
            fee = 0
        block.new_transaction("Coinbase", self.miner, self.reward + fee, 0)
        # Guess random nonces until the block hash falls below the target
        while int(block.hash(), 16) > target:
            block.nonce = int(np.random.uniform(low=0, high=2**32 + 1))
        block.mined = True
        block.timestamp = datetime.datetime.now()

genesis_block = Block([])
blockchain = Blockchain(genesis_block)
print(blockchain.chain[-1])

blockchain.adjust_difficulty(4)
blockchain.halve_reward()
blockchain.mine()
print(blockchain.chain[-1])

blockchain.new_transaction("miner", "Pierre-O", 5, 0.1)
blockchain.new_transaction("miner", "Satoshi", 10, 0.2)
# print(blockchain.pending_transactions)
blockchain.add_block()
for block in blockchain.chain:
    print(block)

blockchain.mine()
for block in blockchain.chain:
    print(block)
```
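For intuition on the `diff`/`target` relationship used above: a SHA-256 hash, treated as a uniform 256-bit integer, falls below `2**(256 - diff)` with probability `2**-diff`, so mining takes about `2**diff` random guesses on average. A quick sketch:

```python
# Expected number of nonce guesses as difficulty grows.
# P(hash < 2**(256 - d)) = 2**(256 - d) / 2**256 = 2**-d for a uniform hash.
for d in [0, 4, 8, 16]:
    p = 2.0 ** -d
    expected_tries = 1 / p
    print(f"diff={d}: success prob per guess {p}, ~{int(expected_tries)} tries")
```

This is why `adjust_difficulty(4)` above still mines almost instantly (~16 tries), while real networks use difficulties large enough to make mining expensive.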
``` import xai import pandas as pd import numpy as np import matplotlib.pyplot as plt import seaborn as sn from sklearn import preprocessing from sklearn.impute import SimpleImputer from sklearn.model_selection import train_test_split from sklearn.metrics import f1_score, confusion_matrix, precision_score, recall_score, roc_curve, auc, accuracy_score from AutoAIAlgorithm.ParticleSwarmOptimization import PSO df = pd.read_csv("data/covid-19.csv") columns_to_delete = ['Patient ID', 'Patient addmited to semi-intensive unit (1=yes, 0=no)', 'Patient addmited to intensive care unit (1=yes, 0=no)', 'Patient addmited to regular ward (1=yes, 0=no)', 'Metapneumovirus', 'Respiratory Syncytial Virus', 'Influenza A', 'Influenza B', 'Parainfluenza 1', 'CoronavirusNL63', 'Rhinovirus/Enterovirus', 'Mycoplasma pneumoniae', 'Coronavirus HKU1', 'Parainfluenza 3', 'Chlamydophila pneumoniae', 'Adenovirus', 'Parainfluenza 4', 'Coronavirus229E', 'CoronavirusOC43', 'Inf A H1N1 2009', 'Bordetella pertussis', 'Metapneumovirus', 'Parainfluenza 2', 'Influenza B, rapid test', 'Influenza A, rapid test', 'Strepto A'] df = df.drop(columns_to_delete, axis=1) df_no_nan = df.dropna(subset=['Hematocrit']) df_clean = df_no_nan.loc[:, df_no_nan.isnull().mean() < 0.7] df_clean ims = xai.imbalance_plot(df_clean, "SARS-Cov-2 exam result") bal_df = xai.balance(df_clean, "SARS-Cov-2 exam result", upsample=0.4) y = np.asarray(bal_df['SARS-Cov-2 exam result']) y = [1 if x == 'positive' else 0 for x in y] X = bal_df.drop(['SARS-Cov-2 exam result'], axis=1) imputer = SimpleImputer(missing_values=np.nan, strategy='mean') idf = pd.DataFrame(imputer.fit_transform(X)) idf.columns=X.columns idf.index=X.index X = idf X x_train, x_test, y_train, y_test = train_test_split(X, y, test_size=0.3) num_particles=10 num_iterations=10 pso = PSO(particle_count=num_particles, distance_between_initial_particles=1.0, evaluation_metric=f1_score) best_metric, best_model = pso.fit(X_train=x_train, X_test=x_test, Y_train=y_train, 
Y_test=y_test, maxiter=num_iterations, verbose=True, max_distance=0.05) best_model best_metric y_pred = best_model.predict(x_test) import seaborn as sn df_cm = pd.DataFrame(confusion_matrix(y_test, y_pred), ["negative", "positive"], ["negative", "positive"]) # plt.figure(figsize=(10,7)) sn.set(font_scale=1.4) # for label size sn.heatmap(df_cm, annot=True, annot_kws={"size": 16}, cmap="Blues", fmt='d') # font size plt.show() accuracy_score(y_test, y_pred) precision_score(y_test, y_pred) recall_score(y_test, y_pred) f1_score(y_test, y_pred) fpr, tpr, thresholds = roc_curve(y_test, y_pred) auc(fpr, tpr) def get_avg(x, y): return f1_score(y, best_model.predict(x)) imp = xai.feature_importance(x_test, y_test, get_avg) imp.head() ```
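The accuracy/precision/recall/F1 scores computed above with sklearn can be related back to the four confusion-matrix entries by hand — a sketch with made-up counts, not the notebook's actual results:

```python
# Sketch: metrics from a 2x2 confusion matrix, using made-up counts
# (tn, fp, fn, tp), matching the definitions behind the sklearn calls above.
tn, fp, fn, tp = 50, 5, 10, 35
accuracy = (tp + tn) / (tn + fp + fn + tp)
precision = tp / (tp + fp)
recall = tp / (tp + fn)
f1 = 2 * precision * recall / (precision + recall)
print(accuracy, precision, recall, f1)
```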
<a href="https://colab.research.google.com/github/Eoli-an/Exam-topic-prediction/blob/main/Slides_vs_Transcribes_Frequency.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>

# Plot for Dense Ranks of Word Usage in Slides and Transcribes of Relevant Words

For this plot we analyse the relationship between the word frequency of the slides versus the word frequency of the transcribes of the lecture. We only analyse hand-picked words that are relevant for predicting exam topics or their difficulties.

```
!pip install scattertext
!pip install tika
!pip install textblob

import pandas as pd
import glob
import seaborn as sns
import matplotlib.pyplot as plt
%matplotlib inline
import scattertext as st
from tika import parser
from textblob import TextBlob
import nltk
nltk.download('punkt')
nltk.download('averaged_perceptron_tagger')
nltk.download('brown')
```

The slides are expected to be in a folder called Slides. The transcribes are expected to be in a folder called Transcribes.

```
lectures_spoken = []
path = 'Transcribes/*.txt'
files = glob.glob(path)
for file in sorted(files):
    with open(file, 'r') as f:
        lectures_spoken.append(f.read())
lectures_spoken = " ".join(lectures_spoken)

lectures_pdf = []
path = 'Slides/*.pdf'
files = glob.glob(path)
for file in sorted(files):
    lectures_pdf.append(parser.from_file(file)["content"])
lectures_pdf = " ".join(lectures_pdf)
```

Create a TextBlob of the text. This is used to extract the noun phrases.

```
blob_spoken = TextBlob(lectures_spoken)
freq_spoken = nltk.FreqDist(blob_spoken.noun_phrases)

blob_pdf = TextBlob(lectures_pdf)
freq_pdf = nltk.FreqDist(blob_pdf.noun_phrases)
```

This function checks if a noun phrase is sufficiently similar to a relevant word (template). "Sufficiently similar" is defined as the template being a substring of the noun phrase.
```
def convert_to_template(df_element, template):
    for template_element in template:
        if template_element in df_element:
            return template_element
    return "None"
```

We first create a pandas dataframe of all the noun phrases and their frequencies in both slides and transcribes. After that, we extract all words that are similar to a relevant word (in the sense of the convert_to_template function). Then we group by the relevant words.

```
relevant_words = ['bayes', 'frequentist', 'fairness', 'divergence', 'reproduc', 'regulariz',
                  'pca', 'principal c', 'bootstrap', 'nonlinear function', 'linear function',
                  'entropy', 'maximum likelihood estimat', 'significa', 'iid', 'bayes theorem',
                  'visualization', 'score function', 'dimensionality reduction', 'estimat',
                  'bayes', 'consumption', 'fisher', 'independence', 'logistic regression',
                  'bias', 'standard deviation', 'linear discriminant analysis', 'information matrix',
                  'null hypothesis', 'log likelihood', 'linear regression', 'hypothesis test',
                  'confidence', 'variance', 'sustainability', 'gaussian', 'linear model',
                  'climate', 'laplace',
                  ]

df_spoken = pd.DataFrame.from_dict({"word": list(freq_spoken.keys()), "freq_spoken": list(freq_spoken.values())})
df_pdf = pd.DataFrame.from_dict({"word": list(freq_pdf.keys()), "freq_pdf": list(freq_pdf.values())})
df = df_spoken.merge(df_pdf, how="outer", on="word")

df["word"] = df["word"].apply(lambda x: convert_to_template(x, relevant_words))
df = df.groupby(["word"]).sum().reset_index()
df = df[df["word"] != "None"].reset_index()
```

We use the dense_rank functionality of the scattertext library to convert the absolute number of occurrences of a word to a dense rank. This means that we only consider the relative order of the frequencies of the words and discard all information about how far apart two word frequencies are.
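For intuition, a [0, 1]-scaled dense rank can be reproduced with plain pandas — a sketch of the idea, not scattertext's exact implementation:

```python
import pandas as pd

# Sketch: dense ranking scaled to [0, 1]; ties share a rank and the
# gaps between frequencies are discarded. Not scattertext's exact code.
freqs = pd.Series([3, 1, 4, 1, 5])
dense = freqs.rank(method="dense")        # 1, 2, 3, ... with ties shared
scaled = (dense - 1) / (dense.max() - 1)  # map smallest -> 0, largest -> 1
print(scaled.tolist())
```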
``` df["freq_spoken"] = st.Scalers.dense_rank(df["freq_spoken"]) df["freq_pdf"] = st.Scalers.dense_rank(df["freq_pdf"]) df plt.figure(figsize=(20,12)) sns.set_theme(style="dark") p1 = sns.scatterplot(x='freq_spoken', # Horizontal axis y='freq_pdf', # Vertical axis data=df, # Data source s = 80, legend=False, color="orange", #marker = "s" ) for line in range(0,df.shape[0]): if line == 6:#divergence p1.text(df.freq_spoken[line]-0.12, df.freq_pdf[line]-0.007, df.word[line], horizontalalignment='left', size='xx-large', color='black', weight='normal') elif line == 21:#linear regression p1.text(df.freq_spoken[line]-0.18, df.freq_pdf[line]-0.007, df.word[line], horizontalalignment='left', size='xx-large', color='black', weight='normal') elif line == 18:#linear discriminant analysis p1.text(df.freq_spoken[line]-0.05, df.freq_pdf[line]-0.05, df.word[line], horizontalalignment='left', size='xx-large', color='black', weight='normal') elif line == 19:#linear function p1.text(df.freq_spoken[line]-0.02, df.freq_pdf[line]-0.04, df.word[line], horizontalalignment='left', size='xx-large', color='black', weight='normal') elif line == 29:#reproduce p1.text(df.freq_spoken[line]-0.03, df.freq_pdf[line]+0.03, df.word[line], horizontalalignment='left', size='xx-large', color='black', weight='normal') elif line == 12:#gaussian: p1.text(df.freq_spoken[line]-0.1, df.freq_pdf[line]-0.007, df.word[line], horizontalalignment='left', size='xx-large', color='black', weight='normal') elif line == 16:#information matrix: p1.text(df.freq_spoken[line]+0.01, df.freq_pdf[line]-0.025, df.word[line], horizontalalignment='left', size='xx-large', color='black', weight='normal') elif line == 25:#nonlinear function: p1.text(df.freq_spoken[line]+0.01, df.freq_pdf[line]-0.025, df.word[line], horizontalalignment='left', size='xx-large', color='black', weight='normal') elif line == 24:#maximum likelihood estimat: p1.text(df.freq_spoken[line]-0.07, df.freq_pdf[line]+0.02, df.word[line], 
horizontalalignment='left', size='xx-large', color='black', weight='normal') elif line == 17:#laplace: p1.text(df.freq_spoken[line]-0.08, df.freq_pdf[line]-0.007, df.word[line], horizontalalignment='left', size='xx-large', color='black', weight='normal') else: p1.text(df.freq_spoken[line]+0.01, df.freq_pdf[line]-0.007, df.word[line], horizontalalignment='left', size='xx-large', color='black', weight='normal') #plt.title('Dense Ranks of Word Usage in Slides and Transcribes of Relevant Words',size = "xx-large") # Set x-axis label plt.xlabel('Transcribes Frequency',size = "xx-large") # Set y-axis label plt.ylabel('Slides Frequency',size = "xx-large") p1.set_xticks([0,0.5,1]) # <--- set the ticks first p1.set_xticklabels(["Infrequent", "Average", "Frequent"],size = "x-large") p1.set_yticks([0,0.5,1]) # <--- set the ticks first p1.set_yticklabels(["Infrequent", "Average", "Frequent"],size = "x-large") plt.show() ```
## Guest Lecture COMP7230 # Using Python packages for Linked Data & spatial data #### by Dr Nicholas Car This Notebook is the resource used to deliver a guest lecture for the [Australian National University](https://www.anu.edu.au)'s course [COMP7230](https://programsandcourses.anu.edu.au/2020/course/COMP7230): *Introduction to Programming for Data Scientists* Click here to run this lecture in your web browser: [![Binder](https://mybinder.org/badge_logo.svg)](https://mybinder.org/v2/gh/nicholascar/comp7230-training/HEAD?filepath=lecture_01.ipynb) ## About the lecturer **Nicholas Car**: * PhD in informatics for irrigation * A former CSIRO informatics researcher * worked on integrating environmental data across government / industry * developed data standards * Has worked in operation IT in government * Now in a private IT consulting company, [SURROUND Australia Pty Ltd](https://surroundaustralia.com) supplying Data Science solutions Relevant current work: * building data processing systems for government & industry * mainly using Python * due to its large number of web and data science packages * maintains the [RDFlib](https://rdflib.net) Python toolkit * for processing [RDF](https://en.wikipedia.org/wiki/Resource_Description_Framework) * co-chairs the [Australian Government Linked Data Working Group](https://www.linked.data.gov.au) with Armin Haller * plans for multi-agency data integration * still developing data standards * in particular GeoSPARQL 1.1 (https://opengeospatial.github.io/ogc-geosparql/geosparql11/spec.html) * for graph representations of spatial information ## 0. Lecture Outline 1. Notes about this training material 2. Accessing RDF data 3. Parsing RDF data 4. Data 'mash up' 5. Data Conversions & Display ## 1. 
Notes about this training material #### This tool * This is a Jupyter Notebook - interactive Python scripting * You will cover Jupyter Notebooks more, later in this course * Access this material online at: * GitHub: <https://github.com/nicholascar/comp7230-training> [![Binder](https://mybinder.org/badge_logo.svg)](https://mybinder.org/v2/gh/nicholascar/comp7230-training/?filepath=lecture_01.ipynb) #### Background data concepts - RDF _Nick will talk RDF using these web pages:_ * [Semantic Web](https://www.w3.org/standards/semanticweb/) - the concept * [RDF](https://en.wikipedia.org/wiki/Resource_Description_Framework) - the data model * refer to the RDF image below * [RDFlib](https://rdflib.net) - the (Python) toolkit * [RDFlib training Notebooks are available](https://github.com/surroundaustralia/rdflib-training) The LocI project: * The Location Index project: <http://loci.cat> RDF image, from [the RDF Primer](https://www.w3.org/TR/rdf11-primer/), for discussion: ![](lecture_resources/img/example-graph-iris.jpg) Note that: * _everything_ is "strongly" identified * including all relationships * unlike lots of related data * many of the identifiers resolve * to more info (on the web) ## 2. Accessing RDF data * Here we use an online structured dataset, the Geocoded National Address File for Australia * Dataset Persistent Identifier: <https://linked.data.gov.au/dataset/gnaf> * The above link redirects to the API at <https://gnafld.net> * GNAF-LD Data is presented according to *Linked Data* principles * online * in HTML & machine-readable form, RDF * RDF is a Knowledge Graph: a graph containing data + model * each resource is available via a URI * e.g. <https://linked.data.gov.au/dataset/gnaf/address/GAACT714845933> ![GAACT714845933](lecture_resources/img/GAACT714845933.png) 2.1. 
Get the Address GAACT714845933 using the *requests* package ``` import requests # NOTE: you must have installed requests first, it's not a standard package r = requests.get( "https://linked.data.gov.au/dataset/gnaf/address/GAACT714845933" ) print(r.text) ``` 2.2 Get machine-readable data, RDF triples Use HTTP Content Negotiation Same URI, different *format* of data ``` r = requests.get( "https://linked.data.gov.au/dataset/gnaf/address/GAACT714845933", headers={"Accept": "application/n-triples"} ) print(r.text) ``` 2.3 Get machine-readable data, Turtle Easier to read ``` r = requests.get( "https://linked.data.gov.au/dataset/gnaf/address/GAACT714845933", headers={"Accept": "text/turtle"} ) print(r.text) ``` ## 3. Parsing RDF data Import the RDFlib library for manipulating RDF data Add some namespaces to shorten URIs ``` import rdflib from rdflib.namespace import RDF, RDFS GNAF = rdflib.Namespace("http://linked.data.gov.au/def/gnaf#") ADDR = rdflib.Namespace("http://linked.data.gov.au/dataset/gnaf/address/") GEO = rdflib.Namespace("http://www.opengis.net/ont/geosparql#") print(GEO) ``` Create a graph and add the namespaces to it ``` g = rdflib.Graph() g.bind("gnaf", GNAF) g.bind("addr", ADDR) g.bind("geo", GEO) ``` Parse in the machine-readable data from the GNAF-LD ``` r = requests.get( "https://linked.data.gov.au/dataset/gnaf/address/GAACT714845933", headers={"Accept": "text/turtle"} ) g.parse(data=r.text, format="text/turtle") ``` Print graph length (no. of triples) to check ``` print(len(g)) ``` Print graph content, in Turtle ``` print(g.serialize(format="text/turtle").decode()) ``` ### 3.1 Getting multi-address data: 3.1.1. Retrieve an index of 10 addresses, in RDF 3.1.2. For each address in the index, get each Address' data * use paging URI: <https://linked.data.gov.au/dataset/gnaf/address/?page=1> 3.1.3. Get only the street address and map coordinates #### 3.1.1. 
Retrieve index

```
# clear the graph
g = rdflib.Graph()

r = requests.get(
    "https://linked.data.gov.au/dataset/gnaf/address/?page=1",
    headers={"Accept": "text/turtle"}
)
g.parse(data=r.text, format="text/turtle")
print(len(g))
```

#### 3.1.2. Parse in each address' data

```
for s, p, o in g.triples((None, RDF.type, GNAF.Address)):
    print(s.split("/")[-1])
    r = requests.get(
        str(s),
        headers={"Accept": "text/turtle"}
    )
    g.parse(data=r.text, format="turtle")
print(len(g))
```

The graph model used by the GNAF-LD is based on [GeoSPARQL 1.1](https://opengeospatial.github.io/ogc-geosparql/geosparql11/spec.html) and looks like this:

![](lecture_resources/img/geosparql-model.png)

#### 3.1.3. Extract (& print) street address text & coordinates (TSV)

```
addresses_tsv = "GNAF ID\tAddress\tCoordinates\n"
for s, p, o in g.triples((None, RDF.type, GNAF.Address)):
    for s2, p2, o2 in g.triples((s, RDFS.comment, None)):
        txt = str(o2)
    for s2, p2, o2 in g.triples((s, GEO.hasGeometry, None)):
        for s3, p3, o3 in g.triples((o2, GEO.asWKT, None)):
            coords = str(o3).replace("<http://www.opengis.net/def/crs/EPSG/0/4283> ", "")
    addresses_tsv += "{}\t{}\t{}\n".format(str(s).split("/")[-1], txt, coords)
print(addresses_tsv)
```

#### 3.1.4. Convert the TSV data to a pandas DataFrame

```
import pandas
from io import StringIO

s = StringIO(addresses_tsv)
df1 = pandas.read_csv(s, sep="\t")
print(df1)
```

#### 3.1.5. SPARQL querying RDF data

A graph query, similar to a database SQL query, can traverse the graph and retrieve the same details as the multiple loops and Python code above in 3.1.3.

```
q = """
SELECT ?id ?addr ?coords
WHERE {
    ?uri a gnaf:Address ;
         rdfs:comment ?addr .

    ?uri geo:hasGeometry/geo:asWKT ?coords_dirty .

    BIND (STRAFTER(STR(?uri), "address/") AS ?id)
    BIND (STRAFTER(STR(?coords_dirty), "4283> ") AS ?coords)
}
ORDER BY ?id
"""
for r in g.query(q):
    print("{}, {}, {}".format(r["id"], r["addr"], r["coords"]))
```

## 4.
Data 'mash up'

Add some fake data to the GNAF data - people count per address. The GeoSPARQL model extension used is:

![](lecture_resources/img/geosparql-model-extension.png)

Note that for real Semantic Web work, the `xxx:` properties and classes would be "properly defined", removing any ambiguity of use.

```
import pandas

df2 = pandas.read_csv('fake_data.csv')
print(df2)
```

Merge DataFrames

```
df3 = pandas.merge(df1, df2)
print(df3.head())
```

## 5. Spatial Data Conversions & Display

Often you will want to display or export data.

#### 5.1 Display directly in Jupyter

Using standard Python plotting (matplotlib). First, extract addresses, longitudes & latitudes into a dataframe using a SPARQL query to build a CSV string.

```
import re

addresses_csv = "Address,Longitude,Latitude\n"
q = """
SELECT ?addr ?coords
WHERE {
    ?uri a gnaf:Address ;
         rdfs:comment ?addr .

    ?uri geo:hasGeometry/geo:asWKT ?coords_dirty .

    BIND (STRAFTER(STR(?uri), "address/") AS ?id)
    BIND (STRAFTER(STR(?coords_dirty), "4283> ") AS ?coords)
}
ORDER BY ?id
"""
for r in g.query(q):
    match = re.search("POINT\((\d+\.\d+)\s(\-\d+\.\d+)\)", r["coords"])
    long = float(match.group(1))
    lat = float(match.group(2))
    addresses_csv += f'\"{r["addr"]}\",{long},{lat}\n'
print(addresses_csv)
```

Read the CSV into a DataFrame.

```
import pandas as pd
from io import StringIO

addresses_df = pd.read_csv(StringIO(addresses_csv))
print(addresses_df["Longitude"])
```

Display the first 5 rows of the DataFrame directly using matplotlib.

```
from matplotlib import pyplot as plt

addresses_df[:5].plot(kind="scatter", x="Longitude", y="Latitude", s=50, figsize=(10,10))
for i in range(len(addresses_df[:5])):
    plt.annotate(addresses_df["Address"][i], (addresses_df["Longitude"][i], addresses_df["Latitude"][i]))
plt.show()
```

#### 5.2 Convert to common format - GeoJSON

Import Python conversion tools (shapely).
```
import shapely.wkt
from shapely.geometry import MultiPoint
import json
```

Loop through the graph using ordinary Python loops, not a query.

```
points_list = []
for s, p, o in g.triples((None, RDF.type, GNAF.Address)):
    for s2, p2, o2 in g.triples((s, GEO.hasGeometry, None)):
        for s3, p3, o3 in g.triples((o2, GEO.asWKT, None)):
            points_list.append(
                shapely.wkt.loads(str(o3).replace("<http://www.opengis.net/def/crs/EPSG/0/4283> ", ""))
            )

mp = MultiPoint(points=points_list)
geojson = shapely.geometry.mapping(mp)
print(json.dumps(geojson, indent=4))
```

Another, better, GeoJSON export - including Feature information. First, build a Python dictionary matching the GeoJSON specification, then export it to JSON.

```
geo_json_features = []
# same query as above
for r in g.query(q):
    match = re.search("POINT\((\d+\.\d+)\s(\-\d+\.\d+)\)", r["coords"])
    long = float(match.group(1))
    lat = float(match.group(2))
    geo_json_features.append({
        "type": "Feature",
        "properties": {
            "name": r["addr"]
        },
        "geometry": {
            "type": "Point",
            "coordinates": [long, lat]
        }
    })

geo_json_data = {
    "type": "FeatureCollection",
    "name": "test-points-short-named",
    "crs": {
        "type": "name",
        "properties": {
            "name": "urn:ogc:def:crs:OGC:1.3:CRS84"
        }
    },
    "features": geo_json_features
}

import json
geo_json = json.dumps(geo_json_data, indent=4)
print(geo_json)
```

Export the data and view it in a GeoJSON map viewer, such as http://geojsonviewer.nsspot.net/ or QGIS (desktop).

## Concluding remarks

* Semantic Web, realised through Linked Data, builds a global machine-readable data system
    * the RDF data structure is used
        * to link things
        * to define things, and the links
* specialised parts of the Sem Web can represent a/any domain
    * e.g. spatial
    * e.g. Addresses
* powerful graph pattern matching queries, SPARQL, can be used to subset (federated) Sem Web data
* RDF manipulation libraries exist
    * can convert to other, common forms, e.g.
CSV GeoJSON * _do as much data science work as you can with well-defined models!_ ## License All the content in this repository is licensed under the [CC BY 4.0 license](https://creativecommons.org/licenses/by/4.0/). Basically, you can: * copy and redistribute the material in any medium or format * remix, transform, and build upon the material for any purpose, even commercially You just need to: * give appropriate credit, provide a link to the license, and indicate if changes were made * not apply legal terms or technological measures that legally restrict others from doing anything the license permits ## Contact Information **Dr Nicholas J. Car**<br /> *Data Systems Architect*<br /> [SURROUND Australia Pty Ltd](https://surroundaustralia.com)<br /> <nicholas.car@surroundaustralia.com><br /> GitHub: [nicholascar](https://github.com/nicholascar)<br /> ORCID: <https://orcid.org/0000-0002-8742-7730><br />
## Basic Pandas Examples

This notebook will walk you through some very basic Pandas concepts. We will start by importing the typical data science libraries:

```
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
```

### Series

A Series is like a list or 1D array, but with an index. All operations are index-aligned.

```
a = pd.Series(range(1,10))
b = pd.Series(["I","like","to","use","Python","and","Pandas","very","much"],index=range(0,9))
print(a,b)
```

One frequent use of series is a **time series**. In a time series, the index has a special structure, typically a range of dates or datetimes. We can create such an index with `pd.date_range`. Suppose we have a series that shows the amount of product bought every day, and we know that every Sunday we also need to take one item for ourselves. Here is how to model that using series:

```
start_date = "Jan 1, 2020"
end_date = "Dec 31, 2020"
idx = pd.date_range(start_date,end_date)
print(f"Length of index is {len(idx)}")
items_sold = pd.Series(np.random.randint(25,50,size=len(idx)),index=idx)
items_sold.plot(figsize=(10,3))
plt.show()
additional_items = pd.Series(10,index=pd.date_range(start_date,end_date,freq="W"))
print(f"Additional items (10 items each week):\n{additional_items}")
total_items = items_sold+additional_items
print(f"Total items (sum of two series):\n{total_items}")
```

As you can see, we run into a problem here: days not mentioned in the weekly series are treated as missing (`NaN`), and adding `NaN` to a number gives us `NaN`.
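The same index-alignment behaviour can be seen in isolation with a tiny pair of hand-made series (a stand-alone sketch, not part of the sales example):

```python
import numpy as np
import pandas as pd

s1 = pd.Series([1, 2, 3], index=["a", "b", "c"])
s2 = pd.Series([10], index=["b"])

# Only label "b" is present in both series, so "a" and "c" come out as NaN
result = s1 + s2
print(result)
```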
To get the correct result, we need to specify `fill_value` when adding the series:

```
total_items = items_sold.add(additional_items,fill_value=0)
print(total_items)
total_items.plot(figsize=(10,3))
plt.show()
monthly = total_items.resample("1M").mean()
ax = monthly.plot(kind='bar',figsize=(10,3))
ax.set_xticklabels([x.strftime("%b-%Y") for x in monthly.index], rotation=45)
plt.show()
```

## DataFrame

A DataFrame is essentially a collection of series with the same index. We can combine several series together into a DataFrame. Given the `a` and `b` series defined above:

```
df = pd.DataFrame([a,b])
df
```

We can also use the series as columns, and specify column names using a dictionary:

```
df = pd.DataFrame({ 'A' : a, 'B' : b })
df
```

The same result can be achieved by transposing (and then renaming columns, to match the previous example):

```
pd.DataFrame([a,b]).T.rename(columns={ 0 : 'A', 1 : 'B' })
```

**Selecting columns** from a DataFrame can be done like this:

```
print(f"Column A (series):\n{df['A']}")
print(f"Columns B and A (DataFrame):\n{df[['B','A']]}")
```

**Selecting rows** based on a filter expression:

```
df[df['A']<5]
```

The way it works is that the expression `df['A']<5` returns a boolean series, which indicates whether the expression is `True` or `False` for each element of the series. When a boolean series is used as an index, it returns the subset of rows in the DataFrame. Thus it is not possible to use an arbitrary Python boolean expression; for example, writing `df[df['A']>5 and df['A']<7]` would be wrong. Instead, you should use the special `&` operation on boolean series:

```
df[(df['A']>5) & (df['A']<7)]
```

**Creating new computable columns**. We can easily create new computable columns for our DataFrame using intuitive expressions. The code below calculates the divergence of A from its mean value.

```
df['DivA'] = df['A']-df['A'].mean()
df
```

What actually happens is that we compute a series, and then assign this series to the left-hand side, creating another column.
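By contrast, a plain Python `if` cannot build a conditional column, because the condition would be an entire boolean series, not one value per row. Vectorized helpers such as `numpy.where` evaluate the condition element-wise instead (a stand-alone sketch using a small made-up frame; `ADescr` is just an illustrative column name):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({"A": range(1, 10)})

# numpy.where picks "Low" or "Hi" per element of the boolean series df["A"] < 5
df["ADescr"] = np.where(df["A"] < 5, "Low", "Hi")
print(df["ADescr"].tolist())
```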
```
# WRONG: a plain Python `if` sees the whole boolean series, not each element
# df['ADescr'] = "Low" if df['A'] < 5 else "Hi"
df['LenB'] = len(df['B'])  # Wrong result: the length of the whole column, broadcast to every row
df['LenB'] = df['B'].apply(lambda x: len(x))  # or: df['LenB'] = df['B'].apply(len)
df
```

**Selecting rows by number** can be done using the `iloc` construct. For example, to select the first 5 rows of the DataFrame:

```
df.iloc[:5]
```

**Grouping** is often used to get a result similar to *pivot tables* in Excel. Suppose that we want to compute the mean value of column `A` for each value of `LenB`. Then we can group our DataFrame by `LenB`, and call `mean`:

```
df.groupby(by='LenB').mean()
```

If we need to compute the mean and the number of elements in each group, then we can use the more complex `aggregate` function:

```
df.groupby(by='LenB') \
  .aggregate({ 'DivA' : len, 'A' : lambda x: x.mean() }) \
  .rename(columns={ 'DivA' : 'Count', 'A' : 'Mean'})
```

## Printing and Plotting

A data scientist often has to explore the data, so it is important to be able to visualize it. When a DataFrame is big, we often just want to make sure we are doing everything correctly by printing out the first few rows. This can be done by calling `df.head()`. If you are running it from a Jupyter notebook, it will print out the DataFrame in a nice tabular form.

```
df.head()
```

We have also seen the usage of the `plot` function to visualize some columns. While `plot` is very useful for many tasks and supports many different graph types via the `kind=` parameter, you can always use the raw `matplotlib` library to plot something more complex. We will cover data visualization in detail in separate course lessons.

```
df['A'].plot()
plt.show()
df['A'].plot(kind='bar')
plt.show()
```

This overview covers the most important concepts of Pandas; however, the library is very rich, and there is no limit to what you can do with it! Let's now apply this knowledge to solving a specific problem.
<a href="https://colab.research.google.com/github/JimKing100/DS-Unit-2-Regression-Classification/blob/master/module4/assignment_regression_classification_4e.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> ``` # Installs %%capture !pip install --upgrade category_encoders plotly # Imports import os, sys os.chdir('/content') !git init . !git remote add origin https://github.com/LambdaSchool/DS-Unit-2-Regression-Classification.git !git pull origin master os.chdir('module4') # Disable warning import warnings warnings.filterwarnings(action='ignore', category=FutureWarning, module='numpy') ``` ### Load Data ``` import pandas as pd train_features = pd.read_csv('../data/tanzania/train_features.csv') train_labels = pd.read_csv('../data/tanzania/train_labels.csv') test_features = pd.read_csv('../data/tanzania/test_features.csv') sample_submission = pd.read_csv('../data/tanzania/sample_submission.csv') assert train_features.shape == (59400, 40) assert train_labels.shape == (59400, 2) assert test_features.shape == (14358, 40) assert sample_submission.shape == (14358, 2) ``` ### Train/Validate/Test Split ``` # Load initial train features and labels from sklearn.model_selection import train_test_split X_train = train_features y_train = train_labels['status_group'] X_train.shape, y_train.shape # Split the initial train features and labels 80% into new train and new validation X_train, X_val, y_train, y_val = train_test_split( X_train, y_train, train_size = 0.80, test_size = 0.20, stratify = y_train, random_state=42 ) X_train.shape, X_val.shape, y_train.shape, y_val.shape # Check values of new train labels y_train.value_counts(normalize=True) # Check values of new validation labels y_val.value_counts(normalize=True) ``` ### One-Hot Encoding - Quantity ``` # Check values of quantity feature X_train['quantity'].value_counts(normalize=True) # Recombine X_train and y_train, for exploratory data analysis train = 
X_train.copy() train['status_group'] = y_train train.groupby('quantity')['status_group'].value_counts(normalize=True) # Plot the values, dry shows a strong relationship to functional import matplotlib.pyplot as plt import seaborn as sns train['functional']= (train['status_group'] == 'functional').astype(int) train[['status_group', 'functional']] sns.catplot(x='quantity', y='functional', data=train, kind='bar', color='grey') plt.title('% of Waterpumps Functional by Water Quantity') ``` ### One-Hot Encoding - Waterpoint Type ``` X_train['waterpoint_type'].value_counts(normalize=True) # Recombine X_train and y_train, for exploratory data analysis train = X_train.copy() train['status_group'] = y_train train.groupby('waterpoint_type')['status_group'].value_counts(normalize=True) train['functional']= (train['status_group'] == 'functional').astype(int) train[['status_group', 'functional']] sns.catplot(x='waterpoint_type', y='functional', data=train, kind='bar', color='grey') plt.title('% of Waterpumps Functional by Waterpoint Type') ``` ### One-Hot Encoding - Extraction Type ``` X_train['extraction_type'].value_counts(normalize=True) train = X_train.copy() train['status_group'] = y_train train.groupby('extraction_type')['status_group'].value_counts(normalize=True) train['functional']= (train['status_group'] == 'functional').astype(int) train[['status_group', 'functional']] sns.catplot(x='extraction_type', y='functional', data=train, kind='bar', color='grey') plt.title('% of Waterpumps Functional by Extraction Type') ``` ### Bin and One-Hot Encoding - Installer ``` X_train['installer'] = X_train['installer'].str.lower() X_val['installer'] = X_val['installer'].str.lower() X_train['installer'] = X_train['installer'].str.replace('danid', 'danida') X_val['installer'] = X_val['installer'].str.replace('danid', 'danida') X_train['installer'] = 
X_train['installer'].str.replace('disti', 'district council') X_val['installer'] = X_val['installer'].str.replace('disti', 'district council') X_train['installer'] = X_train['installer'].str.replace('commu', 'community') X_val['installer'] = X_val['installer'].str.replace('commu', 'community') X_train['installer'] = X_train['installer'].str.replace('central government', 'government') X_val['installer'] = X_val['installer'].str.replace('central government', 'government') X_train['installer'] = X_train['installer'].str.replace('kkkt _ konde and dwe', 'kkkt') X_val['installer'] = X_val['installer'].str.replace('kkkt _ konde and dwe', 'kkkt') X_train['installer'].value_counts(normalize=True) top10 = X_train['installer'].value_counts()[:5].index X_train.loc[~X_train['installer'].isin(top10), 'installer'] = 'Other' X_val.loc[~X_val['installer'].isin(top10), 'installer'] = 'Other' train = X_train.copy() train['status_group'] = y_train train.groupby('installer')['status_group'].value_counts(normalize=True) train['functional']= (train['status_group'] == 'functional').astype(int) train[['status_group', 'functional']] sns.catplot(x='installer', y='functional', data=train, kind='bar', color='grey') plt.title('% of Waterpumps Functional by Installer') ``` ### New Feature - Pump Age ``` X_train['pump_age'] = 2013 - X_train['construction_year'] X_train.loc[X_train['pump_age'] == 2013, 'pump_age'] = 0 X_val['pump_age'] = 2013 - X_val['construction_year'] X_val.loc[X_val['pump_age'] == 2013, 'pump_age'] = 0 X_train.loc[X_train['pump_age'] == 0, 'pump_age'] = 10 X_val.loc[X_val['pump_age'] == 0, 'pump_age'] = 10 train = X_train.copy() train['status_group'] = y_train train.groupby('pump_age')['status_group'].value_counts(normalize=True) train['functional']= (train['status_group'] == 'functional').astype(int) train[['status_group', 'functional']] sns.catplot(x='pump_age', y='functional', data=train, kind='bar', color='grey') plt.title('% of Waterpumps Functional by Pump Age') ``` ### 
Bin and One-Hot Encoding - Funder ``` X_train['funder'] = X_train['funder'].str.lower() X_val['funder'] = X_val['funder'].str.lower() X_train['funder'] = X_train['funder'].str[:3] X_val['funder'] = X_val['funder'].str[:3] X_train['funder'].value_counts(normalize=True) top10 = X_train['funder'].value_counts()[:20].index X_train.loc[~X_train['funder'].isin(top10), 'funder'] = 'Other' X_val.loc[~X_val['funder'].isin(top10), 'funder'] = 'Other' train = X_train.copy() train['status_group'] = y_train train.groupby('funder')['status_group'].value_counts(normalize=True) train['functional']= (train['status_group'] == 'functional').astype(int) train[['status_group', 'functional']] sns.catplot(x='funder', y='functional', data=train, kind='bar', color='grey') plt.title('% of Waterpumps Functional by Funder') ``` ### One-Hot Encoding - Water Quality ``` X_train['water_quality'].value_counts(normalize=True) train = X_train.copy() train['status_group'] = y_train train.groupby('water_quality')['status_group'].value_counts(normalize=True) train['functional']= (train['status_group'] == 'functional').astype(int) train[['status_group', 'functional']] sns.catplot(x='water_quality', y='functional', data=train, kind='bar', color='grey') plt.title('% of Waterpumps Functional by Water Quality') ``` ### One-Hot Encoding - Basin ``` X_train['basin'].value_counts(normalize=True) train = X_train.copy() train['status_group'] = y_train train.groupby('basin')['status_group'].value_counts(normalize=True) train['functional']= (train['status_group'] == 'functional').astype(int) train[['status_group', 'functional']] sns.catplot(x='basin', y='functional', data=train, kind='bar', color='grey') plt.title('% of Waterpumps Functional by Basin') ``` ### One-Hot Encoding - Region ``` X_train['region'].value_counts(normalize=True) train = X_train.copy() train['status_group'] = y_train train.groupby('region')['status_group'].value_counts(normalize=True) train['functional']= (train['status_group'] == 
'functional').astype(int) train[['status_group', 'functional']] sns.catplot(x='region', y='functional', data=train, kind='bar', color='grey') plt.title('% of Waterpumps Functional by Region') ``` ### Use Mean for GPS Height Missing Values ``` X_train.loc[X_train['gps_height'] == 0, 'gps_height'] = X_train['gps_height'].mean() X_val.loc[X_val['gps_height'] == 0, 'gps_height'] = X_val['gps_height'].mean() train = X_train.copy() train['status_group'] = y_train train.groupby('gps_height')['status_group'].value_counts(normalize=True) train['functional']= (train['status_group'] == 'functional').astype(int) train[['status_group', 'functional']]; #sns.catplot(x='amount_tsh', y='functional', data=train, kind='bar', color='grey') #plt.title('% of Waterpumps Functional by Pump Age') ``` ### One-Hot Encoding - Payment ``` X_train['payment'].value_counts(normalize=True) train = X_train.copy() train['status_group'] = y_train train.groupby('payment')['status_group'].value_counts(normalize=True) train['functional']= (train['status_group'] == 'functional').astype(int) train[['status_group', 'functional']] sns.catplot(x='payment', y='functional', data=train, kind='bar', color='grey') plt.title('% of Waterpumps Functional by Payment') ``` ### One-Hot Encoded - Source ``` X_train['source'].value_counts(normalize=True) train = X_train.copy() train['status_group'] = y_train train.groupby('source')['status_group'].value_counts(normalize=True) train['functional']= (train['status_group'] == 'functional').astype(int) train[['status_group', 'functional']] sns.catplot(x='source', y='functional', data=train, kind='bar', color='grey') plt.title('% of Waterpumps Functional by Source') ``` ### Bin and One-Hot Encoded - LGA ``` X_train['lga'].value_counts(normalize=True) top10 = X_train['lga'].value_counts()[:10].index X_train.loc[~X_train['lga'].isin(top10), 'lga'] = 'Other' X_val.loc[~X_val['lga'].isin(top10), 'lga'] = 'Other' train = X_train.copy() train['status_group'] = y_train 
train.groupby('lga')['status_group'].value_counts(normalize=True) train['functional']= (train['status_group'] == 'functional').astype(int) train[['status_group', 'functional']] sns.catplot(x='lga', y='functional', data=train, kind='bar', color='grey') plt.title('% of Waterpumps Functional by LGA') ``` ### Bin and One-Hot Encoded - Ward ``` X_train['ward'].value_counts(normalize=True) top10 = X_train['ward'].value_counts()[:20].index X_train.loc[~X_train['ward'].isin(top10), 'ward'] = 'Other' X_val.loc[~X_val['ward'].isin(top10), 'ward'] = 'Other' train = X_train.copy() train['status_group'] = y_train train.groupby('ward')['status_group'].value_counts(normalize=True) train['functional']= (train['status_group'] == 'functional').astype(int) train[['status_group', 'functional']] sns.catplot(x='ward', y='functional', data=train, kind='bar', color='grey') plt.title('% of Waterpumps Functional by Ward') ``` ### One-Hot Encode - Scheme Management ``` X_train['scheme_management'].value_counts(normalize=True) train = X_train.copy() train['status_group'] = y_train train.groupby('scheme_management')['status_group'].value_counts(normalize=True) train['functional']= (train['status_group'] == 'functional').astype(int) train[['status_group', 'functional']] sns.catplot(x='scheme_management', y='functional', data=train, kind='bar', color='grey') plt.title('% of Waterpumps Functional by Scheme Management') ``` ### One-Hot Encode - Management ``` X_train['management'].value_counts(normalize=True) train = X_train.copy() train['status_group'] = y_train train.groupby('management')['status_group'].value_counts(normalize=True) train['functional']= (train['status_group'] == 'functional').astype(int) train[['status_group', 'functional']] sns.catplot(x='management', y='functional', data=train, kind='bar', color='grey') plt.title('% of Waterpumps Functional by Management') ``` ### Create a Region/District Feature ``` X_train['region_code'].value_counts(normalize=True) train = X_train.copy() 
train['status_group'] = y_train train.groupby('region_code')['status_group'].value_counts(normalize=True) train['functional']= (train['status_group'] == 'functional').astype(int) train[['status_group', 'functional']] sns.catplot(x='region_code', y='functional', data=train, kind='bar', color='grey') plt.title('% of Waterpumps Functional by Region Code') X_train['region_district'] = X_train['region_code'].astype(str) + X_train['district_code'].astype(str) X_val['region_district'] = X_val['region_code'].astype(str) + X_val['district_code'].astype(str) train = X_train.copy() train['status_group'] = y_train train.groupby('region_district')['status_group'].value_counts(normalize=True) train['functional']= (train['status_group'] == 'functional').astype(int) train[['status_group', 'functional']] sns.catplot(x='region_district', y='functional', data=train, kind='bar', color='grey') plt.title('% of Waterpumps Functional by Region/District') ``` ### One-Hot Encode - Subvillage ``` X_train['subvillage'].value_counts(normalize=True) top10 = X_train['subvillage'].value_counts()[:10].index X_train.loc[~X_train['subvillage'].isin(top10), 'subvillage'] = 'Other' X_val.loc[~X_val['subvillage'].isin(top10), 'subvillage'] = 'Other' train = X_train.copy() train['status_group'] = y_train train.groupby('subvillage')['status_group'].value_counts(normalize=True) train['functional']= (train['status_group'] == 'functional').astype(int) train[['status_group', 'functional']] sns.catplot(x='subvillage', y='functional', data=train, kind='bar', color='grey') plt.title('% of Waterpumps Functional by Subvillage') ``` ### Lat/Long Cleanup ``` #test['region'].value_counts() average_lat = X_train.groupby('region').latitude.mean().reset_index() average_long = X_train.groupby('region').longitude.mean().reset_index() shinyanga_lat = average_lat.loc[average_lat['region'] == 'Shinyanga', 'latitude'] shinyanga_long = average_long.loc[average_lat['region'] == 'Shinyanga', 'longitude'] X_train.loc[(X_train['region'] == 'Shinyanga') & (X_train['latitude'] > -1), ['latitude']] = shinyanga_lat[17] X_train.loc[(X_train['region'] == 'Shinyanga') & (X_train['longitude'] == 0), ['longitude']] = shinyanga_long[17] mwanza_lat = average_lat.loc[average_lat['region'] == 'Mwanza', 'latitude'] mwanza_long = average_long.loc[average_lat['region'] == 'Mwanza', 'longitude'] X_train.loc[(X_train['region'] == 'Mwanza') & (X_train['latitude'] > -1), ['latitude']] = mwanza_lat[13] X_train.loc[(X_train['region'] == 'Mwanza') & (X_train['longitude'] == 0) , ['longitude']] = mwanza_long[13] ``` ### Impute Amount TSH ``` def tsh_calc(tsh, source, base, waterpoint): if tsh == 0: if (source, base, waterpoint) in tsh_dict: new_tsh = tsh_dict[source, base, waterpoint] return new_tsh else: return tsh return tsh temp = X_train[X_train['amount_tsh'] != 0].groupby(['source_class', 'basin', 'waterpoint_type_group'])['amount_tsh'].mean() tsh_dict = dict(temp) X_train['amount_tsh'] = X_train.apply(lambda x: tsh_calc(x['amount_tsh'], x['source_class'], x['basin'], x['waterpoint_type_group']), axis=1) #X_train #X_train.loc[X_train['amount_tsh'] == 0, 'amount_tsh'] = X_train['amount_tsh'].median() #X_val.loc[X_val['amount_tsh'] == 0, 'amount_tsh'] = X_val['amount_tsh'].median() train = X_train.copy() train['status_group'] = y_train train.groupby('amount_tsh')['status_group'].value_counts(normalize=True) train['functional']= (train['status_group'] == 'functional').astype(int) train[['status_group', 'functional']]; 
#X_train.loc[X_train['public_meeting'].isnull(), 'public_meeting'] = False #X_val.loc[X_val['public_meeting'].isnull(), 'public_meeting'] = False #train = X_train.copy() #train['status_group'] = y_train #train.groupby('public_meeting')['status_group'].value_counts(normalize=True) #train['functional']= (train['status_group'] == 'functional').astype(int) #train[['status_group', 'functional']]; #sns.catplot(x='public_meeting', y='functional', data=train, kind='bar', color='grey') #plt.title('% of Waterpumps Functional by Region/District') ``` ### Run the Logistic Regression ``` import sklearn sklearn.__version__ # Import the class from sklearn.linear_model import LogisticRegressionCV # Import package and scaler import category_encoders as ce from sklearn.preprocessing import StandardScaler # use the categorical features and the numeric features, but drop id and num_private categorical_features = ['quantity', 'waterpoint_type', 'extraction_type', 'installer', 'basin', 'region', 'payment', 'source', 'lga', 'public_meeting', 'scheme_management', 'permit', 'management', 'region_district', 'subvillage', 'funder', 'water_quality', 'ward'] numeric_features = X_train.select_dtypes('number').columns.drop('id').drop('num_private').tolist() features = categorical_features + numeric_features # make subsets using the categorical features and all numeric features except id X_train_subset = X_train[features] X_val_subset = X_val[features] # Do the encoding encoder = ce.OneHotEncoder(use_cat_names=True) X_train_encoded = encoder.fit_transform(X_train_subset) X_val_encoded = encoder.transform(X_val_subset) # Use the scaler scaler = StandardScaler() X_train_scaled = scaler.fit_transform(X_train_encoded) X_val_scaled = scaler.transform(X_val_encoded) # Fit the model and check the accuracy model = LogisticRegressionCV(n_jobs = -1) model.fit(X_train_scaled, y_train) print('Validation Accuracy', model.score(X_val_scaled, y_val)); ``` ### Run RandomForestClassifier ``` from sklearn.ensemble import 
RandomForestClassifier model = RandomForestClassifier(n_estimators=1000, random_state=42, max_features = 'auto', n_jobs=-1, verbose = 1) model.fit(X_train_scaled, y_train) print('Validation Accuracy', model.score(X_val_scaled, y_val)); test_features['pump_age'] = 2013 - test_features['construction_year'] test_features.loc[test_features['pump_age'] == 2013, 'pump_age'] = 0 test_features['region_district'] = test_features['region_code'].astype(str) + test_features['district_code'].astype(str) test_features = test_features.drop(columns=['num_private']) X_test_subset = test_features[features] X_test_encoded = encoder.transform(X_test_subset) X_test_scaled = scaler.transform(X_test_encoded) assert all(X_test_encoded.columns == X_train_encoded.columns) y_pred = model.predict(X_test_scaled) submission = sample_submission.copy() submission['status_group'] = y_pred submission.to_csv('/content/submission-01.csv', index=False) ```
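The encoder and scaler above are fit on the training split only and then reused, via `transform`, on the validation and test splits, which keeps validation statistics from leaking into the model. A minimal stand-alone sketch of the same fit-then-transform pattern with toy numbers:

```python
import numpy as np
from sklearn.preprocessing import StandardScaler

X_train = np.array([[1.0], [2.0], [3.0]])
X_val = np.array([[2.0]])

scaler = StandardScaler()
X_train_scaled = scaler.fit_transform(X_train)  # learns mean/std from the training data only
X_val_scaled = scaler.transform(X_val)          # reuses the training statistics

print(X_val_scaled)  # 2.0 is exactly the train mean, so it scales to 0.0
```

Calling `fit_transform` again on the validation data would silently give each split its own scale, making the two incomparable.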
##### Copyright 2018 The TensorFlow Authors. ``` #@title Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. #@title MIT License # # Copyright (c) 2017 François Chollet # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the "Software"), # to deal in the Software without restriction, including without limitation # the rights to use, copy, modify, merge, publish, distribute, sublicense, # and/or sell copies of the Software, and to permit persons to whom the # Software is furnished to do so, subject to the following conditions: # # The above copyright notice and this permission notice shall be included in # all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR # IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, # FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL # THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER # LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING # FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER # DEALINGS IN THE SOFTWARE. 
```

# Text classification with movie reviews

<table class="tfo-notebook-buttons" align="left"> <td> <a target="_blank" href="https://www.tensorflow.org/tutorials/keras/text_classification"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a> </td> <td> <a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/pt-br/tutorials/keras/text_classification.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a> </td> <td> <a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/pt-br/tutorials/keras/text_classification.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a> </td> <td> <a href="https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/pt-br/tutorials/keras/text_classification.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a> </td> </table>

Note: Our TensorFlow community has translated these documents. Because community translations are *best-effort*, there is no guarantee that they are an accurate and up-to-date reflection of the [official English documentation](https://www.tensorflow.org/?hl=en). If you have suggestions to improve this translation, please send a pull request to the [tensorflow/docs](https://github.com/tensorflow/docs) GitHub repository. To volunteer to write or review community translations, contact the [docs@tensorflow.org list](https://groups.google.com/a/tensorflow.org/forum/#!forum/docs).

This notebook classifies movie reviews as **positive** or **negative** using the text of the review. This is an example of *binary*, or two-class, classification, an important and widely applicable kind of machine learning problem.
We'll use the [IMDB dataset](https://www.tensorflow.org/api_docs/python/tf/keras/datasets/imdb), which contains reviews of more than 50000 movies from the [Internet Movie Database](https://www.imdb.com/). The dataset is split into 25000 reviews for training and 25000 for testing. The training and testing sets are *balanced*, meaning they contain an equal number of positive and negative reviews.

This notebook uses [tf.keras](https://www.tensorflow.org/guide/keras), a high-level API to build and train models in TensorFlow. For a more advanced text classification tutorial using `tf.keras`, see the [MLCC Text Classification Guide](https://developers.google.com/machine-learning/guides/text-classification/).

```
import tensorflow as tf
from tensorflow import keras

import numpy as np

print(tf.__version__)
```

## Download the IMDB dataset

The dataset comes packaged with TensorFlow. It has already been preprocessed so that the reviews (sequences of words) have been converted to sequences of integers, where each integer represents a specific word in a dictionary. The code below downloads the IMDB dataset to your machine (or uses a cached copy if you've already downloaded it):

```
imdb = keras.datasets.imdb

(train_data, train_labels), (test_data, test_labels) = imdb.load_data(num_words=10000)
```

The argument `num_words=10000` keeps the 10000 most frequently occurring words in the training data. The rarer words are discarded to keep the size of the data manageable.

## Explore the data

Let's take a moment to understand the format of the data. The dataset comes preprocessed: each example is an array of integers representing the words of the movie review. Each label is an integer value of either 0 or 1, where 0 is a negative review and 1 is a positive review.
```
print("Training entries: {}, labels: {}".format(len(train_data), len(train_labels)))
```

The text of the reviews has been converted to integers, where each integer represents a specific word in a dictionary. Here's what the first review looks like:

```
print(train_data[0])
```

Movie reviews may have different lengths. The code below shows the number of words in the first and second reviews. Since the inputs to a neural network must all have the same length, we'll need to resolve this later.

```
len(train_data[0]), len(train_data[1])
```

### Convert the integers back to words

It may be useful to know how to convert integers back to text. Here, we'll create a helper function to query a dictionary object that maps integers to strings:

```
# A dictionary mapping words to an integer index
word_index = imdb.get_word_index()

# The first indices are reserved
word_index = {k:(v+3) for k,v in word_index.items()}
word_index["<PAD>"] = 0
word_index["<START>"] = 1
word_index["<UNK>"] = 2  # unknown
word_index["<UNUSED>"] = 3

reverse_word_index = dict([(value, key) for (key, value) in word_index.items()])

def decode_review(text):
    return ' '.join([reverse_word_index.get(i, '?') for i in text])
```

Now we can use the `decode_review` function to display the text of the first review:

```
decode_review(train_data[0])
```

## Prepare the data

The reviews, which are arrays of integers, must be converted to tensors before being fed into the neural network. This conversion can be done in two ways:

* Convert the arrays into vectors of 0s and 1s indicating word occurrence, similar to one-hot encoding. For example, the sequence [3, 5] would become a 10000-dimensional vector that is all zeros except for indices 3 and 5, which are ones. Then, make this the first layer of our network, a Dense layer, which can handle floating point data.
This approach is memory intensive, though, requiring a matrix of size `num_words * num_reviews`.

* Alternatively, we can pad the arrays so they all have the same length, then create an integer tensor of shape `max_length * num_reviews`. We can use an embedding layer capable of handling this shape as the first layer of our network.

In this tutorial, we'll use the second approach.

Since the movie reviews must all be the same length, we'll use the [pad_sequences](https://www.tensorflow.org/api_docs/python/tf/keras/preprocessing/sequence/pad_sequences) function to standardize the lengths:

```
train_data = keras.preprocessing.sequence.pad_sequences(train_data,
                                                        value=word_index["<PAD>"],
                                                        padding='post',
                                                        maxlen=256)

test_data = keras.preprocessing.sequence.pad_sequences(test_data,
                                                       value=word_index["<PAD>"],
                                                       padding='post',
                                                       maxlen=256)
```

Now let's look at the length of the examples:

```
len(train_data[0]), len(train_data[1])
```

And inspect the (now padded) first review:

```
print(train_data[0])
```

## Build the model

The neural network is created by stacking layers; this requires two main architectural decisions:

* How many layers will be used in the model?
* How many *hidden units* are used in each layer?

In this example, the input data is an array of word-indices. The labels to predict are either 0 or 1. Let's build a model for this problem:

```
# The input shape is the vocabulary count used for the movie reviews (10000 words)
vocab_size = 10000

model = keras.Sequential()
model.add(keras.layers.Embedding(vocab_size, 16))
model.add(keras.layers.GlobalAveragePooling1D())
model.add(keras.layers.Dense(16, activation='relu'))
model.add(keras.layers.Dense(1, activation='sigmoid'))

model.summary()
```

The layers are stacked sequentially to build the classifier:

1. The first layer is an `Embedding` layer.
   This layer takes the integer-encoded vocabulary and looks up the embedding vector for each word-index. These vectors are learned as the model trains, and they add a dimension to the output array. The resulting dimensions are: `(batch, sequence, embedding)`.
2. Next, a `GlobalAveragePooling1D` layer returns a fixed-length output vector for each example by averaging over the sequence dimension. This allows the model to handle inputs of variable length in the simplest way possible.
3. This fixed-length output vector is piped through a fully-connected (`Dense`) layer with 16 *hidden units*.
4. The last layer is densely connected with a single output node. Using the `sigmoid` activation function, this value is a float between 0 and 1, representing a probability, or confidence level.

### Hidden units

The model above has two intermediate or _"hidden"_ layers between the input and the output. The number of outputs (units, nodes, or neurons) is the dimension of the representational space of the layer. In other words, the amount of freedom the network is allowed when learning an internal representation.

If a model has more *hidden units* (a higher-dimensional representational space) and/or more layers, then the network can learn more complex representations. However, this makes the network computationally more expensive and may lead to learning unwanted patterns: patterns that improve performance on the training data but not on the test data. This is called *overfitting*, and we will explore it later.

### Loss function and optimizer

The model needs a loss function and an optimizer for training. Since this is a binary classification problem and the model outputs a probability (a single unit with a sigmoid activation), we will use the `binary_crossentropy` loss function.
This is not the only choice for a loss function; you could, for instance, choose `mean_squared_error`. But, generally, `binary_crossentropy` is better suited for dealing with probabilities: it measures the "distance" between probability distributions, or, in our case, between the true distribution and the predictions.

Later, when we explore regression problems (such as predicting the price of a house), we will see how to use another loss function called *mean squared error*.

Now, configure the model to use the optimizer and the loss function:

```
model.compile(optimizer='adam',
              loss='binary_crossentropy',
              metrics=['accuracy'])
```

## Create a validation set

When training, we want to check the accuracy of the model on data it has never seen before. Create a *validation set* by setting aside 10000 examples from the original training data. (Why not use the test set now? Our goal is to develop and tune our model using only the training data, and then use the test data only once to evaluate the accuracy.)

```
x_val = train_data[:10000]
partial_x_train = train_data[10000:]

y_val = train_labels[:10000]
partial_y_train = train_labels[10000:]
```

## Train the model

Train the model for 40 *epochs* in *mini-batches* of 512 examples. That is 40 iterations over all the examples in the `x_train` and `y_train` tensors. While training, monitor the model's loss and accuracy on the 10000 examples of the validation set:

```
history = model.fit(partial_x_train,
                    partial_y_train,
                    epochs=40,
                    batch_size=512,
                    validation_data=(x_val, y_val),
                    verbose=1)
```

## Evaluate the model

Let's see how the model performs. Two values are returned: loss (a number representing our error; lower values are better) and accuracy.

```
results = model.evaluate(test_data, test_labels, verbose=2)

print(results)
```

This fairly naive approach achieves an accuracy of about 87%. With more advanced approaches, the model should get closer to 95%.
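As an aside on the `binary_crossentropy` loss chosen above, its value can be computed by hand. Below is a minimal pure-Python sketch of the formula, independent of Keras (the clipping constant is illustrative):

```python
import math

def binary_crossentropy(y_true, y_pred, eps=1e-7):
    """Mean binary cross-entropy over a batch of scalar probabilities."""
    total = 0.0
    for t, p in zip(y_true, y_pred):
        p = min(max(p, eps), 1 - eps)  # clip to avoid log(0)
        total += -(t * math.log(p) + (1 - t) * math.log(1 - p))
    return total / len(y_true)

# A confident correct prediction costs little; a confident wrong one costs a lot.
print(binary_crossentropy([1, 0], [0.9, 0.1]))  # small loss (about 0.105)
print(binary_crossentropy([1, 0], [0.1, 0.9]))  # large loss
```

Because the loss grows without bound as a wrong prediction approaches certainty, it penalizes overconfident mistakes far more than `mean_squared_error` would.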
## Create a graph of accuracy and loss over time

`model.fit()` returns a `History` object that contains a dictionary with everything that happened during training:

```
history_dict = history.history
history_dict.keys()
```

There are four entries: one for each metric monitored during training and validation. We can use them to plot the training and validation loss for comparison, as well as the training and validation accuracy:

```
import matplotlib.pyplot as plt

acc = history_dict['accuracy']
val_acc = history_dict['val_accuracy']
loss = history_dict['loss']
val_loss = history_dict['val_loss']

epochs = range(1, len(acc) + 1)

# "bo" is for "blue dot"
plt.plot(epochs, loss, 'bo', label='Training loss')
# b is for "solid blue line"
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and validation loss')
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.legend()

plt.show()

plt.clf()   # clear figure

plt.plot(epochs, acc, 'bo', label='Training acc')
plt.plot(epochs, val_acc, 'b', label='Validation acc')
plt.title('Training and validation accuracy')
plt.xlabel('Epochs')
plt.ylabel('Accuracy')
plt.legend()

plt.show()
```

In the plots, the dots represent the training loss and accuracy, and the solid lines are the validation loss and accuracy.

Notice that the training loss *decreases* with each epoch and the training accuracy *increases*. This is expected when using gradient descent optimization: it should minimize the desired quantity on every iteration.

This is not the case for the validation loss and accuracy: they seem to peak after about twenty epochs. This is an example of *overfitting*: the model performs better on the training data than it does on data it has never seen before. After this point, the model over-optimizes and learns representations *specific* to the training data that do not *generalize* to the test data.

For this particular case, we could prevent overfitting by simply stopping the training after twenty or so epochs.
Later, you will see how to do this automatically with a *callback*.
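The stop-early idea can be sketched without any framework; the patience logic below mirrors what a Keras `EarlyStopping`-style callback does (the class and the loss values here are illustrative, not the Keras API):

```python
class EarlyStopping:
    """Stop when the monitored value has not improved for `patience` epochs."""
    def __init__(self, patience=3):
        self.patience = patience
        self.best = float("inf")
        self.wait = 0

    def should_stop(self, val_loss):
        if val_loss < self.best:
            self.best = val_loss   # new best value: reset the counter
            self.wait = 0
        else:
            self.wait += 1         # no improvement this epoch
        return self.wait >= self.patience

stopper = EarlyStopping(patience=2)
# Validation loss falls, then rises: training should stop on the second rise.
history = [0.50, 0.40, 0.35, 0.38, 0.41, 0.45]
for epoch, loss in enumerate(history, start=1):
    if stopper.should_stop(loss):
        print(f"stopping at epoch {epoch}")  # stopping at epoch 5
        break
```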
# Introduction to Deep Learning with PyTorch In this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks. All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks. ## Neural Networks Deep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output. <img src="assets/simple_neuron.png" width=400px> Mathematically this looks like: $$ \begin{align} y &= f(w_1 x_1 + w_2 x_2 + b) \\ y &= f\left(\sum_i w_i x_i +b \right) \end{align} $$ With vectors this is the dot/inner product of two vectors: $$ h = \begin{bmatrix} x_1 \, x_2 \cdots x_n \end{bmatrix} \cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n \end{bmatrix} $$ ## Tensors It turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). The fundamental data structure for neural networks are tensors and PyTorch (as well as pretty much every other deep learning framework) is built around tensors. 
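The single-neuron formula above, $y = f\left(\sum_i w_i x_i + b\right)$, can be written directly in plain Python before reaching for tensors (the input values below are made up for illustration):

```python
import math

def sigmoid(z):
    """Sigmoid activation: squashes any real number into (0, 1)."""
    return 1 / (1 + math.exp(-z))

def neuron(x, w, b):
    """One unit: weighted sum of the inputs plus a bias, passed through sigmoid."""
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return sigmoid(z)

y = neuron(x=[0.5, -1.0], w=[0.8, 0.2], b=0.1)
print(y)  # sigmoid(0.4 - 0.2 + 0.1) = sigmoid(0.3)
```

The tensor version later in this notebook computes exactly this, just vectorized over whole batches at once.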
<img src="assets/tensor_examples.svg" width=600px> With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network. ``` # First, import PyTorch import torch def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 5 random normal variables features = torch.randn((1, 5)) # True weights for our data, random normal variables again weights = torch.randn_like(features) # and a true bias term bias = torch.randn((1, 1)) ``` Above I generated data we can use to get the output of our simple network. This is all just random for now, going forward we'll start using normal data. Going through each relevant line: `features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one. `weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution. Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution. PyTorch tensors can be added, multiplied, subtracted, etc, just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network. > **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.html#torch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function. 
```
## Calculate the output of this network using the weights and bias tensors
y = activation(torch.sum(features * weights) + bias)

y = activation((features * weights).sum() + bias)
```

You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs.

Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.html#torch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.html#torch.matmul) which is somewhat more complicated and supports broadcasting. If we try to do it with `features` and `weights` as they are, we'll get an error

```python
>> torch.mm(features, weights)

---------------------------------------------------------------------------
RuntimeError                              Traceback (most recent call last)
<ipython-input-13-15d592eb5279> in <module>()
----> 1 torch.mm(features, weights)

RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033
```

As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal the number of rows in the second tensor. Both `features` and `weights` have the same shape, `(1, 5)`. This means we need to change the shape of `weights` to get the matrix multiplication to work.

**Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`. If you're building neural networks, you'll be using this method often.
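The shape rule behind that `RuntimeError`, that the number of columns of the first operand must match the number of rows of the second, can be sketched in plain Python:

```python
def matmul(a, b):
    """Multiply two matrices given as lists of rows, checking shapes first."""
    rows_a, cols_a = len(a), len(a[0])
    rows_b, cols_b = len(b), len(b[0])
    if cols_a != rows_b:
        raise ValueError(f"size mismatch: ({rows_a}x{cols_a}) @ ({rows_b}x{cols_b})")
    return [[sum(a[i][k] * b[k][j] for k in range(cols_a))
             for j in range(cols_b)] for i in range(rows_a)]

# (1x5) @ (1x5) fails, just like torch.mm(features, weights)...
try:
    matmul([[1, 2, 3, 4, 5]], [[1, 2, 3, 4, 5]])
except ValueError as e:
    print(e)

# ...but (1x5) @ (5x1) works, which is what reshaping to a column achieves.
print(matmul([[1, 2, 3, 4, 5]], [[1], [2], [3], [4], [5]]))  # [[55]]
```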
There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.view).

* `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`; sometimes this is a view of the same data, and sometimes a clone, as in it copies the data to another part of memory.
* `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory. Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.
* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`.

I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.

> **Exercise**: Calculate the output of our little network using matrix multiplication.

```
## Calculate the output of this network using matrix multiplication
print(features.shape, weights.shape)
y = activation(torch.mm(features, weights.view(5,1)) + bias)
```

### Stack them up!

That's how you can calculate the output for a single neuron. The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. The output of one layer of neurons becomes the input for the next layer.
With multiple input units and output units, we now need to express the weights as a matrix. <img src='assets/multilayer_diagram_weights.png' width=450px> The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$ \vec{h} = [h_1 \, h_2] = \begin{bmatrix} x_1 \, x_2 \cdots \, x_n \end{bmatrix} \cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2} \end{bmatrix} $$ The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply $$ y = f_2 \! \left(\, f_1 \! \left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right) $$ ``` ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) ``` > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. ``` ## Your solution here h = activation(torch.mm(features, W1) + B1) output = activation(torch.mm(h, W2) + B2) print(output) ``` If you did this correctly, you should see the output `tensor([[ 0.3171]])`. 
The number of hidden units is a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions.

## Numpy to Torch and back

Special bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method.

```
import numpy as np
a = np.random.rand(4,3)
a

b = torch.from_numpy(a)
b

b.numpy()
```

The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well.

```
# Multiply PyTorch Tensor by 2, in place
b.mul_(2)

# Numpy array matches new values from Tensor
a
```
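The same shared-memory behaviour exists inside Numpy itself: a view shares the underlying buffer with the original array, much as the tensor returned by `torch.from_numpy()` does. A small Numpy-only sketch of the idea (no PyTorch required):

```python
import numpy as np

a = np.ones(3)
b = a.view()   # a view shares the same underlying buffer with `a`
b *= 2         # an in-place change through the view...
print(a)       # ...is visible through the original: [2. 2. 2.]

c = a.copy()   # an explicit copy breaks the link
c *= 10
print(a)       # `a` is unchanged: [2. 2. 2.]
```

If you want a tensor that does *not* track the Numpy array, copy the array first (`torch.from_numpy(a.copy())`).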
``` import tensorflow as tf import tensorflow.keras as keras import portpicker import multiprocessing def create_in_process_cluster(num_workers: int, num_ps: int): """ Create and start local servers and return the cluster_resolver. """ worker_ports = [portpicker.pick_unused_port() for _ in range(num_workers)] ps_ports = [portpicker.pick_unused_port() for _ in range(num_ps)] cluster_dict = {} cluster_dict["worker"] = [f"localhost:{port}" for port in worker_ports ] if num_ps > 0: cluster_dict["ps"] = [f"localhost:{port}" for port in ps_ports] cluster_spec = tf.train.ClusterSpec(cluster_dict) # workers need some inter_ops threads to work properly worker_config = tf.compat.v1.ConfigProto() if multiprocessing.cpu_count() < num_workers + 1: worker_config.inter_op_parallelism_threads = num_workers + 1 for i in range(num_workers): tf.distribute.Server( cluster_spec, job_name="worker", task_index=i, protocol='grpc' ) for i in range(num_ps): tf.distribute.Server( cluster_spec, job_name="ps", task_index=i, protocol='grpc' ) cluster_resolver = tf.distribute.cluster_resolver.SimpleClusterResolver( cluster_spec=cluster_spec, rpc_layer='grpc' ) return cluster_resolver # Set the environment variable to allow reporting worker and ps failure to the # coordinator. This is a workaround and won't be necessary in the future. 
import os  # needed for os.environ below and for the path handling further down

os.environ["GRPC_FAIL_FAST"] = "use_caller"

NUM_WORKERS = 3
NUM_PS = 2
cluster_resolver = create_in_process_cluster(NUM_WORKERS, NUM_PS)

variable_partitioner = (
    tf.distribute.experimental.partitioners.MinSizePartitioner(
        min_shard_bytes=(256<<10),
        max_shards=NUM_PS))

strategy = tf.distribute.experimental.ParameterServerStrategy(
    cluster_resolver,
    variable_partitioner=variable_partitioner)

def dataset_fn(input_context):
    global_batch_size = 64
    batch_size = input_context.get_per_replica_batch_size(global_batch_size)

    x = tf.random.uniform((10, 10))
    y = tf.random.uniform((10,))

    dataset = tf.data.Dataset.from_tensor_slices((x, y)).shuffle(10).repeat()
    dataset = dataset.shard(
        input_context.num_input_pipelines,
        input_context.input_pipeline_id)
    dataset = dataset.batch(batch_size)
    dataset = dataset.prefetch(2)

    return dataset

dc = tf.keras.utils.experimental.DatasetCreator(dataset_fn)

with strategy.scope():
    model = tf.keras.models.Sequential([tf.keras.layers.Dense(10)])

    model.compile(tf.keras.optimizers.SGD(), loss='mse', steps_per_execution=10)

working_dir = '/tmp/my_working_dir'
log_dir = os.path.join(working_dir, 'log')
ckpt_filepath = os.path.join(working_dir, 'ckpt')
backup_dir = os.path.join(working_dir, 'backup')

callbacks = [
    tf.keras.callbacks.TensorBoard(log_dir=log_dir),
    tf.keras.callbacks.ModelCheckpoint(filepath=ckpt_filepath),
    tf.keras.callbacks.BackupAndRestore(backup_dir=backup_dir),
]

model.fit(dc, epochs=5, steps_per_epoch=20, callbacks=callbacks)

feature_vocab = [
    "avenger", "ironman", "batman", "hulk", "spiderman", "kingkong", "wonder_woman"
]
label_vocab = ["yes", "no"]

with strategy.scope():
    feature_lookup_layer = tf.keras.layers.StringLookup(
        vocabulary=feature_vocab,
        mask_token=None)
    label_lookup_layer = tf.keras.layers.StringLookup(
        vocabulary=label_vocab,
        num_oov_indices=0,
        mask_token=None)

    raw_feature_input = tf.keras.layers.Input(
        shape=(3,),
        dtype=tf.string,
        name="feature")
    feature_id_input = feature_lookup_layer(raw_feature_input)
feature_preprocess_stage = tf.keras.Model( {"features": raw_feature_input}, feature_id_input) raw_label_input = tf.keras.layers.Input( shape=(1,), dtype=tf.string, name="label") label_id_input = label_lookup_layer(raw_label_input) label_preprocess_stage = tf.keras.Model( {"label": raw_label_input}, label_id_input) import random def feature_and_label_gen(num_examples=200): examples = {"features": [], "label": []} for _ in range(num_examples): features = random.sample(feature_vocab, 3) label = ["yes"] if "avenger" in features else ["no"] examples["features"].append(features) examples["label"].append(label) return examples examples = feature_and_label_gen() def dataset_fn(_): raw_dataset = tf.data.Dataset.from_tensor_slices(examples) train_dataset = raw_dataset.map( lambda x: ( {"features": feature_preprocess_stage(x["features"])}, label_preprocess_stage(x["label"]) )).shuffle(200).batch(32).repeat() return train_dataset # These variables created under the `Strategy.scope` will be placed on parameter # servers in a round-robin fashion. with strategy.scope(): # Create the model. The input needs to be compatible with Keras processing layers. 
model_input = tf.keras.layers.Input( shape=(3,), dtype=tf.int64, name="model_input") emb_layer = tf.keras.layers.Embedding( input_dim=len(feature_lookup_layer.get_vocabulary()), output_dim=16384) emb_output = tf.reduce_mean(emb_layer(model_input), axis=1) dense_output = tf.keras.layers.Dense( units=1, activation="sigmoid")(emb_output) model = tf.keras.Model({"features": model_input}, dense_output) optimizer = tf.keras.optimizers.RMSprop(learning_rate=0.1) accuracy = tf.keras.metrics.Accuracy() assert len(emb_layer.weights) == 2 assert emb_layer.weights[0].shape == (4, 16384) assert emb_layer.weights[1].shape == (4, 16384) assert emb_layer.weights[0].device == "/job:ps/replica:0/task:0/device:CPU:0" assert emb_layer.weights[1].device == "/job:ps/replica:0/task:1/device:CPU:0" @tf.function def step_fn(iterator): def replica_fn(batch_data, labels): with tf.GradientTape() as tape: pred = model(batch_data, training=True) per_example_loss = tf.keras.losses.BinaryCrossentropy( reduction=tf.keras.losses.Reduction.NONE)(labels, pred) loss = tf.nn.compute_average_loss(per_example_loss) gradients = tape.gradient(loss, model.trainable_variables) optimizer.apply_gradients(zip(gradients, model.trainable_variables)) actual_pred = tf.cast(tf.greater(pred, 0.5), tf.int64) accuracy.update_state(labels, actual_pred) return loss batch_data, labels = next(iterator) losses = strategy.run(replica_fn, args=(batch_data, labels)) return strategy.reduce(tf.distribute.ReduceOp.SUM, losses, axis=None) coordinator = tf.distribute.experimental.coordinator.ClusterCoordinator( strategy) @tf.function def per_worker_dataset_fn(): return strategy.distribute_datasets_from_function(dataset_fn) per_worker_dataset = coordinator.create_per_worker_dataset( per_worker_dataset_fn) per_worker_iterator = iter(per_worker_dataset) num_epoches = 4 steps_per_epoch = 5 for i in range(num_epoches): accuracy.reset_states() for _ in range(steps_per_epoch): coordinator.schedule(step_fn, args=(per_worker_iterator,)) # 
Wait at epoch boundaries. coordinator.join() print("Finished epoch %d, accuracy is %f." % (i, accuracy.result().numpy())) loss = coordinator.schedule(step_fn, args=(per_worker_iterator,)) print("Final loss is %f" % loss.fetch()) eval_dataset = tf.data.Dataset.from_tensor_slices( feature_and_label_gen(num_examples=16)).map( lambda x: ( {"features": feature_preprocess_stage(x["features"])}, label_preprocess_stage(x["label"]) )).batch(8) eval_accuracy = tf.keras.metrics.Accuracy() for batch_data, labels in eval_dataset: pred = model(batch_data, training=False) actual_pred = tf.cast(tf.greater(pred, 0.5), tf.int64) eval_accuracy.update_state(labels, actual_pred) print("Evaluation accuracy: %f" % eval_accuracy.result()) with strategy.scope(): # Define the eval metric on parameter servers. eval_accuracy = tf.keras.metrics.Accuracy() @tf.function def eval_step(iterator): def replica_fn(batch_data, labels): pred = model(batch_data, training=False) actual_pred = tf.cast(tf.greater(pred, 0.5), tf.int64) eval_accuracy.update_state(labels, actual_pred) batch_data, labels = next(iterator) strategy.run(replica_fn, args=(batch_data, labels)) def eval_dataset_fn(): return tf.data.Dataset.from_tensor_slices( feature_and_label_gen(num_examples=16)).map( lambda x: ( {"features": feature_preprocess_stage(x["features"])}, label_preprocess_stage(x["label"]) )).shuffle(16).repeat().batch(8) per_worker_eval_dataset = coordinator.create_per_worker_dataset( eval_dataset_fn) per_worker_eval_iterator = iter(per_worker_eval_dataset) eval_steps_per_epoch = 2 for _ in range(eval_steps_per_epoch): coordinator.schedule(eval_step, args=(per_worker_eval_iterator,)) coordinator.join() print("Evaluation accuracy: %f" % eval_accuracy.result()) ```
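The round-robin placement asserted in the code above (`/job:ps/.../task:0`, then `task:1`) follows a simple modulo pattern. A toy illustration in plain Python; the helper below is hypothetical, not part of the TensorFlow API:

```python
def round_robin_placement(num_variables, num_ps):
    """Assign each variable index to a parameter-server task in turn."""
    return [f"/job:ps/task:{i % num_ps}" for i in range(num_variables)]

# With 2 parameter servers, shards alternate between task 0 and task 1,
# which matches the device asserts on emb_layer.weights above.
print(round_robin_placement(num_variables=4, num_ps=2))
```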
**Objective**. This notebook contains illustrating examples for the utilities in the [polyhedron_tools](https://github.com/mforets/polyhedron_tools) module.

```
%display typeset
```

## 1. Modeling with Polyhedra: back and forth with half-space representation

We present examples for creating Polyhedra from matrices and, conversely, for obtaining matrices from Polyhedra.

```
from polyhedron_tools.misc import polyhedron_from_Hrep, polyhedron_to_Hrep

A = matrix([[-1.0, 0.0, 0.0, 0.0, 0.0, 0.0],
            [ 1.0, 0.0, 0.0, 0.0, 0.0, 0.0],
            [ 0.0, 1.0, 0.0, 0.0, 0.0, 0.0],
            [ 0.0, -1.0, 0.0, 0.0, 0.0, 0.0],
            [ 0.0, 0.0, -1.0, 0.0, 0.0, 0.0],
            [ 0.0, 0.0, 1.0, 0.0, 0.0, 0.0],
            [ 0.0, 0.0, 0.0, -1.0, 0.0, 0.0],
            [ 0.0, 0.0, 0.0, 1.0, 0.0, 0.0],
            [ 0.0, 0.0, 0.0, 0.0, 1.0, 0.0],
            [ 0.0, 0.0, 0.0, 0.0, -1.0, 0.0],
            [ 0.0, 0.0, 0.0, 0.0, 0.0, 1.0],
            [ 0.0, 0.0, 0.0, 0.0, 0.0, -1.0]])

b = vector([0.0, 10.0, 0.0, 0.0, 0.2, 0.2, 0.1, 0.1, 0.0, 0.0, 0.0, 0.0])

P = polyhedron_from_Hrep(A, b); P

P.inequalities()

P.equations()
```

It is possible to obtain the matrices that represent the inequality and the equality constraints separately, using the keyword argument `separate_equality_constraints`. This type of information is sometimes useful for optimization solvers.

```
[A, b, Aeq, beq] = polyhedron_to_Hrep(P, separate_equality_constraints = True)

A, b

Aeq, beq
```

## 2. Generating polyhedra

### Constructing hyperrectangles

Let's construct a ball in the infinity norm, specifying the center and radius. We remark that the case of a hypercube is covered by Sage's library as `polytopes.hypercube(n)`, where `n` is the dimension. However, as of Sage v7.6 there is no such hyperrectangle function (or n-orthotope, see the [wikipedia page](https://en.wikipedia.org/wiki/Hyperrectangle)), so we use `BoxInfty`.
```
from polyhedron_tools.misc import BoxInfty

P = BoxInfty(center=[1,2,3], radius=0.1); P.plot(aspect_ratio=1)
```

As a side note, the function also works when the arguments are not named, as in

```
P = BoxInfty([1,2,3], 0.1); P
```

Another use of `BoxInfty` is to specify the lengths of the sides. For example:

```
P = BoxInfty([[0,1], [2,3]]); P
```

### Random polyhedra

The `random_polygon_2d` function receives the number of vertices as input and produces a polygon whose vertices are randomly sampled from the unit circle. See the docstring for more options.

```
from polyhedron_tools.polygons import random_polygon_2d

random_polygon_2d(5)
```

### Opposite polyhedron

```
from polyhedron_tools.misc import BoxInfty, opposite_polyhedron

P = BoxInfty([1,1], 0.5);
mp = opposite_polyhedron(P);

P.plot(aspect_ratio=1) + mp.plot(color='red')
```

## 3. Miscellaneous functions

### Support function of a polytope

```
from polyhedron_tools.misc import support_function

P = BoxInfty([1,2,3,4,5], 1); P

support_function(P, [1,-1,1,-1,1], verbose=1)
```

It is also possible to input the polyhedron in matrix form, $[A, b]$. When this is possible, it is preferable, since it is often faster. Below is an example with $12$ variables. We get between 3x and 4x improvement in the second case.

```
reset('P, A, b')
P = BoxInfty([1,2,3,4,5,6,7,8,9,10,11,12], 1); P
[A, b] = polyhedron_to_Hrep(P)

timeit('support_function(P, [1,-1,1,-1,1,-1,1,-1,1,-1,1,-1])')

support_function([A, b], [1,-1,1,-1,1,-1,1,-1,1,-1,1,-1])

timeit('support_function([A, b], [1,-1,1,-1,1,-1,1,-1,1,-1,1,-1])')
```

### Support function of an ellipsoid

```
from polyhedron_tools.misc import support_function, support_function_ellipsoid
import random

# Generate a random ellipsoid and check support function outer approximation.
# Define an ellipse as: x^T*Q*x <= 1
M = random_matrix(RR, 2, distribution="uniform")
Q = M.T*M
f = lambda x, y : Q[0,0]*x^2 + Q[1,1]*y^2 + (Q[0,1]+Q[1,0])*x*y-1
E = implicit_plot(f,(-5,5),(-3,3),fill=True,alpha=0.5,plot_points=600)

# generate at random k directions, and compute the overapproximation of E using support functions
# It works 'on average': we might get unbounded domains (the random choice did not enclose the ellipsoid).
# It is recommended to use QQ as base_ring to avoid 'frozen set' issues.
k=15
A = matrix(RR,k,2); b = vector(RR,k)
for i in range(k):
    theta = random.uniform(0, 2*pi.n(digits=5))
    d = vector(RR,[cos(theta), sin(theta)])
    s_fun = support_function_ellipsoid(Q, d)
    A.set_row(i,d); b[i] = s_fun

OmegaApprox = polyhedron_from_Hrep(A, b, base_ring = QQ)

E + OmegaApprox.plot(fill=False, color='red')
```

### Supremum norm of a polyhedron

```
from polyhedron_tools.misc import BoxInfty, radius

P = BoxInfty([-13,24,-51,18.54,309],27.04);

radius(P)

radius(polyhedron_to_Hrep(P))

8401/25.n()
```

Consider a higher-dimensional system. We obtain almost a 200x improvement for a 15-dimensional set. This is because when we call `radius` with a polytope, the `bounding_box()` function consumes time.

```
%%time
P = BoxInfty([1,1,1,1,1,1,1,1,1,1,1,1,1,1,1], 14.28);
radius(P)

%%time
[A, b] = BoxInfty([1,1,1,1,1,1,1,1,1,1,1,1,1,1,1], 14.28, base_ring = RDF, return_HSpaceRep = True)
radius([A, b])
```

The improvement in speed is quite interesting! Moreover, for a 20-dimensional set, the polyhedron construction does not finish.

### Linear map of a polyhedron

The operation `matrix x polyhedron` can be performed in Sage with the usual command, `*`.
```
U = BoxInfty([[0,1],[0,1]])
B = matrix([[1,1],[1,-1]])
P = B * U

P.plot(color='red', alpha=0.5) + U.plot()
```

### Chebyshev center

```
from polyhedron_tools.misc import chebyshev_center, BoxInfty
from polyhedron_tools.polygons import random_polygon_2d

P = random_polygon_2d(10, base_ring = QQ)
c = chebyshev_center(P)

B = BoxInfty([[1,2],[0,1]])
b = chebyshev_center(B)

fig = point(c, color='blue') + P.plot(color='blue', alpha=0.2)
fig += point(b, color='red') + B.plot(color='red', alpha=0.2)
fig += point(P.center().n(), color='green',marker='x')
fig += point(B.center().n(), color='green',marker='x')
fig
```

The `center()` method of the Polyhedra class computes the average of the vertices. In contrast, the Chebyshev center is the center of the largest ball inscribed in the polytope.

```
B.bounding_box()

P.bounding_box()

e = [ (P.bounding_box()[0][0] + P.bounding_box()[1][0])/2, (P.bounding_box()[0][1] + P.bounding_box()[1][1])/2]
l = [[P.bounding_box()[0][0], P.bounding_box()[1][0]], [P.bounding_box()[0][1], P.bounding_box()[1][1]] ]

fig += point(e,color='black') + BoxInfty(lengths=l).plot(alpha=0.1,color='grey')
fig
```

Here we have added in grey the bounding box obtained from the `bounding_box()` method. To make the picture complete, we should also add the ball centered at the Chebyshev center with the maximal radius that fits inside the polytope.

## 4. Approximate projections

```
from polyhedron_tools.projections import lotov_algo
from polyhedron_tools.misc import polyhedron_to_Hrep, polyhedron_from_Hrep

A = matrix([[-1.0, 0.0, 0.0, 0.0, 0.0, 0.0],
            [ 1.0, 0.0, 0.0, 0.0, 0.0, 0.0],
            [ 0.0, 1.0, 0.0, 0.0, 0.0, 0.0],
            [ 0.0, -1.0, 0.0, 0.0, 0.0, 0.0],
            [ 0.0, 0.0, -1.0, 0.0, 0.0, 0.0],
            [ 0.0, 0.0, 1.0, 0.0, 0.0, 0.0],
            [ 0.0, 0.0, 0.0, -1.0, 0.0, 0.0],
            [ 0.0, 0.0, 0.0, 1.0, 0.0, 0.0],
            [ 0.0, 0.0, 0.0, 0.0, 1.0, 0.0],
            [ 0.0, 0.0, 0.0, 0.0, -1.0, 0.0],
            [ 0.0, 0.0, 0.0, 0.0, 0.0, 1.0],
            [ 0.0, 0.0, 0.0, 0.0, 0.0, -1.0]])

b = vector([0.0, 10.0, 0.0, 0.0, 0.2, 0.2, 0.1, 0.1, 0.0, 0.0, 0.0, 0.0])

P = polyhedron_from_Hrep(A, b); P

lotov_algo(A, b, [1,0,0], [0,1,0], 0.5)
```
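For the axis-aligned boxes used throughout this notebook, the support function from section 3 has a closed form: $\rho(d) = c \cdot d + \sum_i r_i |d_i|$, where $c$ is the center and $r$ the vector of per-axis radii. A framework-free sketch in plain Python (not part of polyhedron_tools):

```python
def box_support_function(center, radius, d):
    """Support function of the box {x : |x_i - c_i| <= r_i} in direction d."""
    return sum(c * di for c, di in zip(center, d)) + \
           sum(r * abs(di) for r, di in zip(radius, d))

# Unit box centered at the origin: the support in direction (1, 1) is 2.
print(box_support_function([0, 0], [1, 1], [1, 1]))  # 2
```

This is why passing the $[A, b]$ matrix form to `support_function` is fast for boxes: no vertex enumeration is needed, only a linear pass over the coordinates.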
github_jupyter
<a href="https://colab.research.google.com/github/R-aryan/Image_Classification_VGG16/blob/master/Classification_Cat_VS_Dogs_Transfer_Learning.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>

```
import keras,os
from keras.models import Sequential
from keras.layers import Dense, Conv2D, MaxPool2D , Flatten
from keras.preprocessing.image import ImageDataGenerator
from keras.applications.vgg16 import VGG16
from keras.models import Model
from keras import optimizers
from keras.models import load_model
import numpy as np
import shutil
from os import listdir
from os.path import splitext
from keras.preprocessing import image
import matplotlib.pyplot as plt

train_directory= "/content/drive/My Drive/classification_Dataset/cat_VS_dogs/train"
test_directory="/content/drive/My Drive/classification_Dataset/cat_VS_dogs/test1"
src= '/content/drive/My Drive/classification_Dataset/cat_VS_dogs/train'
dest_d='/content/drive/My Drive/classification_Dataset/cat_VS_dogs/train/Dogs'
dest_c='/content/drive/My Drive/classification_Dataset/cat_VS_dogs/train/Cats'
validation_set='/content/drive/My Drive/classification_Dataset/cat_VS_dogs/validation_data'

trdata = ImageDataGenerator()
traindata = trdata.flow_from_directory(directory=src,target_size=(224,224),batch_size=32)
tsdata = ImageDataGenerator()
testdata = tsdata.flow_from_directory(directory=validation_set, target_size=(224,224),batch_size=32)
```

Here, using the `ImageDataGenerator` class in Keras, I will import all the images of cats and dogs into the model. `ImageDataGenerator` will automatically label the data, mapping each label to its corresponding images.

```
vggmodel = VGG16(weights='imagenet', include_top=True)
```

In this part I will import VGG16 from Keras with pre-trained weights that were trained on ImageNet. As you can see, the `include_top` parameter is set to `True`, which means that the weights for the whole model will be downloaded.
If this is set to `False`, the pre-trained weights are downloaded only for the convolutional layers, and no weights are downloaded for the dense layers.

```
vggmodel.summary()
```

Running `vggmodel.summary()` prints the summary of the whole downloaded VGG model.

After the model has been downloaded, I need to adapt it to my problem statement, which is to detect cats and dogs. I will not be training the weights of the first 19 layers and will use them as they are, so I set the `trainable` attribute to `False` for the first 19 layers.

```
vggmodel.layers
for layer in vggmodel.layers[:19]:
    print(layer)
    layer.trainable = False
```

Since my problem is to detect cats and dogs, which has two classes, the last dense layer of my model should be a 2-unit softmax dense layer. Here I take the second-to-last layer of the model, the dense layer with 4096 units, and add a 2-unit softmax dense layer at the end. In this way I remove the last layer of the VGG16 model, which was made to predict 1000 classes.

```
X = vggmodel.layers[-2].output
predictions = Dense(2, activation="softmax")(X)
# Keras expects the keyword arguments `inputs`/`outputs`
# (the old `input`/`output` spellings are not supported in recent versions).
model_final = Model(inputs=vggmodel.input, outputs=predictions)
```

Now I will compile the new model. I set the learning rate of the SGD (Stochastic Gradient Descent) optimiser using the `lr` parameter, and since I have a 2-unit dense layer at the end, I use `categorical_crossentropy` as the loss, because the output of the model is categorical.
``` model_final.compile(loss = "categorical_crossentropy", optimizer = optimizers.SGD(lr=0.0001, momentum=0.9), metrics=["accuracy"]) model_final.summary() from keras.callbacks import ModelCheckpoint, EarlyStopping checkpoint = ModelCheckpoint("/content/drive/My Drive/classification_Dataset/vgg16_tl.h5", monitor='val_acc', verbose=1, save_best_only=True, save_weights_only=False, mode='auto', period=1) early = EarlyStopping(monitor='val_acc', min_delta=0, patience=40, verbose=1, mode='auto') model_final.fit_generator(generator= traindata, steps_per_epoch= 2, epochs= 100, validation_data= testdata, validation_steps=1, callbacks=[checkpoint,early]) model_final.save_weights("/content/drive/My Drive/classification_Dataset/vgg16_tl.h5") ``` Predicting the output ``` # from keras.preprocessing import image # import matplotlib.pyplot as plt img = image.load_img("/content/drive/My Drive/classification_Dataset/cat_VS_dogs/test1/12500.jpg",target_size=(224,224)) img = np.asarray(img) plt.imshow(img) img = np.expand_dims(img, axis=0) from keras.models import load_model model_final.load_weights("/content/drive/My Drive/classification_Dataset/vgg16_tl.h5") #saved_model.compile() output = model_final.predict(img) if output[0][0] > output[0][1]: print("cat") else: print('dog') def prediction(path_image): img = image.load_img(path_image,target_size=(224,224)) img = np.asarray(img) plt.imshow(img) img = np.expand_dims(img, axis=0) model_final.load_weights("/content/drive/My Drive/classification_Dataset/vgg16_tl.h5") output = model_final.predict(img) if output[0][0] > output[0][1]: print("cat") else: print('dog') prediction("/content/drive/My Drive/classification_Dataset/cat_VS_dogs/test1/12500.jpg") prediction("/content/drive/My Drive/classification_Dataset/cat_VS_dogs/test1/12499.jpg") ```
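The final 2-unit softmax layer turns the network's two raw scores into class probabilities, and the prediction helper above simply compares `output[0][0]` against `output[0][1]`. A stdlib-only sketch of that decision with made-up logits (not values produced by the actual model):

```python
import math

def softmax(logits):
    # Subtract the max for numerical stability, then normalize the exponentials.
    m = max(logits)
    exps = [math.exp(v - m) for v in logits]
    s = sum(exps)
    return [e / s for e in exps]

labels = ["cat", "dog"]
probs = softmax([2.0, 0.5])  # hypothetical logits from the final dense layer
print(probs)                 # two probabilities summing to 1
print(labels[probs.index(max(probs))])  # → cat
```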
```
import matplotlib.pyplot as plt
%matplotlib inline
```

### Plotting a graph

```
import numpy as np

# Independent (x) and dependent (y) variables
x = np.linspace(0, 10, 50)
y = x

# Plot the graph
plt.title("Linear dependence y = x")  # title
plt.xlabel("x")                       # x-axis label
plt.ylabel("y")                       # y-axis label
plt.grid()                            # enable the grid
plt.plot(x, y)                        # draw the plot

# Plot the graph
plt.title("Linear dependence y = x")  # title
plt.xlabel("x")                       # x-axis label
plt.ylabel("y")                       # y-axis label
plt.grid()                            # enable the grid
plt.plot(x, y, "r--")                 # draw the plot
```

### Several graphs in one plot area

```
# Linear dependence
x = np.linspace(0, 10, 50)
y1 = x
# Quadratic dependence
y2 = [i**2 for i in x]

# Plot the graph
plt.title("Dependencies: y1 = x, y2 = x^2")  # title
plt.xlabel("x")                              # x-axis label
plt.ylabel("y1, y2")                         # y-axis label
plt.grid()                                   # enable the grid
plt.plot(x, y1, x, y2)                       # draw the plot
```

### Several separate plot areas

```
# Linear dependence
x = np.linspace(0, 10, 50)
y1 = x
# Quadratic dependence
y2 = [i**2 for i in x]

# Plot the graphs
plt.figure(figsize=(9, 9))
plt.subplot(2, 1, 1)
plt.plot(x, y1)                              # draw the plot
plt.title("Dependencies: y1 = x, y2 = x^2")  # title
plt.ylabel("y1", fontsize=14)                # y-axis label
plt.grid(True)                               # enable the grid
plt.subplot(2, 1, 2)
plt.plot(x, y2)                              # draw the plot
plt.xlabel("x", fontsize=14)                 # x-axis label
plt.ylabel("y2", fontsize=14)                # y-axis label
plt.grid(True)                               # enable the grid
```

### A bar chart for categorical data

```
fruits = ["apple", "peach", "orange", "banana", "melon"]
counts = [34, 25, 43, 31, 17]
plt.bar(fruits, counts)
plt.title("Fruits!")
plt.xlabel("Fruit")
plt.ylabel("Count")
```

### Main elements of a graph

```
import matplotlib.pyplot as plt
from matplotlib.ticker import (MultipleLocator, FormatStrFormatter,
                               AutoMinorLocator)
import numpy as np

x = np.linspace(0, 10, 10)
y1 = 4*x
y2 = [i**2 for i in x]

fig, ax = plt.subplots(figsize=(8, 6))
ax.set_title("Graphs of the dependencies: y1=4*x, y2=x^2", fontsize=16)
ax.set_xlabel("x", fontsize=14)
ax.set_ylabel("y1, y2", fontsize=14)
ax.grid(which="major", linewidth=1.2)
ax.grid(which="minor", linestyle="--", color="gray", linewidth=0.5)
ax.scatter(x, y1, c="red", label="y1 = 4*x")
ax.plot(x, y2, label="y2 = x^2")
ax.legend()
ax.xaxis.set_minor_locator(AutoMinorLocator())
ax.yaxis.set_minor_locator(AutoMinorLocator())
ax.tick_params(which='major', length=10, width=2)
ax.tick_params(which='minor', length=5, width=1)
plt.show()
```

### Text labels on a graph

```
x = [1, 5, 10, 15, 20]
y = [1, 7, 3, 5, 11]
plt.plot(x, y, label='steel price')
plt.title('Chart price', fontsize=15)
plt.xlabel('Day', fontsize=12, color='blue')
plt.ylabel('Price', fontsize=12, color='blue')
plt.legend()
plt.grid(True)
plt.text(15, 4, 'grow up!')
```

### Working with a line plot

```
x = [1, 5, 10, 15, 20]
y = [1, 7, 3, 5, 11]
plt.plot(x, y, '--')

x = [1, 5, 10, 15, 20]
y = [1, 7, 3, 5, 11]
line = plt.plot(x, y)
plt.setp(line, linestyle='--')

x = [1, 5, 10, 15, 20]
y1 = [1, 7, 3, 5, 11]
y2 = [i*1.2 + 1 for i in y1]
y3 = [i*1.2 + 1 for i in y2]
y4 = [i*1.2 + 1 for i in y3]
plt.plot(x, y1, '-', x, y2, '--', x, y3, '-.', x, y4, ':')

plt.plot(x, y1, '-')
plt.plot(x, y2, '--')
plt.plot(x, y3, '-.')
plt.plot(x, y4, ':')
```

### Line color

```
x = [1, 5, 10, 15, 20]
y = [1, 7, 3, 5, 11]
plt.plot(x, y, '--r')
```

### Plot type

```
plt.plot(x, y, 'ro')
plt.plot(x, y, 'bx')
```

### Working with the subplot() function

```
# Source data
x = [1, 5, 10, 15, 20]
y1 = [1, 7, 3, 5, 11]
y2 = [i*1.2 + 1 for i in y1]
y3 = [i*1.2 + 1 for i in y2]
y4 = [i*1.2 + 1 for i in y3]

# Set the figure size
plt.figure(figsize=(12, 7))

# Draw the plots
plt.subplot(2, 2, 1)
plt.plot(x, y1, '-')
plt.subplot(2, 2, 2)
plt.plot(x, y2, '--')
plt.subplot(2, 2, 3)
plt.plot(x, y3, '-.')
plt.subplot(2, 2, 4)
plt.plot(x, y4, ':')
```

### A second way to use subplot()

```
# Draw the plots
plt.subplot(221)
plt.plot(x, y1, '-')
plt.subplot(222)
plt.plot(x, y2, '--')
plt.subplot(223)
plt.plot(x, y3, '-.')
plt.subplot(224)
plt.plot(x, y4, ':')
```

### Working with the subplots() function

```
fig, axs = plt.subplots(2, 2, figsize=(12, 7))
axs[0, 0].plot(x, y1, '-')
axs[0, 1].plot(x, y2, '--')
axs[1, 0].plot(x, y3, '-.')
axs[1, 1].plot(x, y4, ':')
```
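Note that `plt.subplot(2, 2, 3)` and `plt.subplot(223)` address the same cell: panels are numbered row by row, starting from 1. A small helper (hypothetical, not part of matplotlib) makes the mapping to the `axs[row, col]` indexing of `subplots()` explicit:

```python
def subplot_cell(nrows, ncols, index):
    # matplotlib numbers subplot cells row by row, starting at 1;
    # convert that 1-based index to the 0-based (row, col) pair of subplots().
    if not 1 <= index <= nrows * ncols:
        raise ValueError("index out of range")
    row, col = divmod(index - 1, ncols)
    return row, col

print(subplot_cell(2, 2, 3))  # (1, 0): second row, first column
```

This matches the example above, where `plt.subplot(2, 2, 3)` and `axs[1, 0]` both hold the `y3` plot.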
# Interactions

This is a collection of interactions, mostly from the book. If you are reading a print version of the book, or are reading it online via GitHub or nbviewer, you will be unable to run the interactions, so I have created this notebook.

Here is how you run an interaction if you do not have IPython installed on your computer.

1. Go to try.jupyter.org in your browser. It will launch a temporary notebook server for you.
2. Click the **New** button and select `Python 3`. This will create a new notebook that will run Python 3 for you in your browser.
3. Copy the entire contents of a cell from this notebook and paste it into a 'code' cell in the notebook on your browser.
4. Press CTRL+ENTER to execute the cell.
5. Have fun! Change code. Play. Experiment. Hack.

Your server and notebook are not permanently saved. Once you close the session your data is lost. Yes, it says it is saving your file if you press save, and you can see it in the directory. But that is just happening in a Docker container that will be deleted as soon as you close the window. Copy and paste any changes you want to keep to an external file.

Of course, if you have IPython installed you can download this notebook and run it on your own computer. Type `ipython notebook` in a command prompt from the directory where you downloaded this file. Click on the name of this file to open it.

# Experimenting with FPF'

The Kalman filter uses the equation $P^- = FPF^\mathsf{T}$ to compute the prior of the covariance matrix during the prediction step, where $P$ is the covariance matrix and $F$ is the system transition function. For a Newtonian system $x = \dot{x}\Delta t + x_0$, $F$ might look like

$$F = \begin{bmatrix}1 & \Delta t\\0 & 1\end{bmatrix}$$

$FPF^\mathsf{T}$ alters $P$ by taking into account the correlation between the position ($x$) and velocity ($\dot{x}$). This interactive plot lets you see the effect that different designs of $F$ have on this value. For example,

* what if $x$ is not correlated to $\dot{x}$? (set F01 to 0)
* what if $x = 2\dot{x}\Delta t + x_0$? (set F01 to 2)
* what if $x = \dot{x}\Delta t + 2*x_0$? (set F00 to 2)
* what if $x = \dot{x}\Delta t$? (set F00 to 0)

```
%matplotlib inline
from IPython.html.widgets import interact, interactive, fixed
import IPython.html.widgets as widgets
import numpy as np
import numpy.linalg as linalg
import math
import matplotlib.pyplot as plt
from matplotlib.patches import Ellipse


def plot_covariance_ellipse(x, P, edgecolor='k', ls='solid'):
    U, s, v = linalg.svd(P)
    # matplotlib's Ellipse expects the angle in degrees, not radians
    angle = math.degrees(math.atan2(U[1, 0], U[0, 0]))
    width = math.sqrt(s[0]) * 2
    height = math.sqrt(s[1]) * 2

    ax = plt.gca()
    e = Ellipse(xy=(0, 0), width=width, height=height, angle=angle,
                edgecolor=edgecolor, facecolor='none', lw=2, ls=ls)
    ax.add_patch(e)
    ax.set_aspect('equal')


def plot_FPFT(F00, F01, F10, F11, covar):
    dt = 1.
    x = np.array((0, 0.))
    P = np.array(((1, covar), (covar, 2)))
    F = np.array(((F00, F01), (F10, F11)))

    plot_covariance_ellipse(x, P)
    plot_covariance_ellipse(x, np.dot(F, P).dot(F.T), edgecolor='r')
    #plt.axis('equal')
    plt.xlim(-4, 4)
    plt.ylim(-4, 4)
    plt.title(str(F))
    plt.xlabel('position')
    plt.ylabel('velocity')

interact(plot_FPFT,
         F00=widgets.IntSliderWidget(value=1, min=0, max=2.),
         F01=widgets.FloatSliderWidget(value=1, min=0., max=2., description='F01(dt)'),
         F10=widgets.FloatSliderWidget(value=0, min=0., max=2.),
         F11=widgets.FloatSliderWidget(value=1, min=0., max=2.),
         covar=widgets.FloatSliderWidget(value=0, min=0, max=1.));
```
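The prediction step $P^- = FPF^\mathsf{T}$ can also be checked numerically without the interactive widget. A minimal pure-Python sketch for the Newtonian $F$ with $\Delta t = 1$ and an uncorrelated initial $P$; note that the prediction itself introduces correlation between position and velocity:

```python
def matmul(A, B):
    # Plain nested-list matrix product.
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def transpose(A):
    return [list(row) for row in zip(*A)]

dt = 1.0
F = [[1.0, dt],
     [0.0, 1.0]]
P = [[1.0, 0.0],   # no initial correlation between x and x-dot
     [0.0, 2.0]]

# P_prior = F P F^T
P_prior = matmul(matmul(F, P), transpose(F))
print(P_prior)  # [[3.0, 2.0], [2.0, 2.0]]
```

Even though the initial covariance is diagonal, the off-diagonal terms of the prior are nonzero, which is exactly the effect the sliders above let you explore.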
1

```
class Rectangle(object):
    def __init__(self, width, height):
        self.width = width
        self.height = height
    def getArea(self):
        q = self.height * self.width
        print("The area of this rectangle is:", q)
    def getPerimeter(self):
        w = 2 * self.height + 2 * self.width
        print("The perimeter of this rectangle is:", w)

qwe = Rectangle(10, 11)
qwe.getArea()
qwe.getPerimeter()
```

2

```
class Account(object):
    def __init__(self):
        self.lilv = 0
        self.ann = 100
        self.lixi = 0
    def shuju(self, id, ann):
        self.id = id
        self.ann = ann
    def getMonthlyInterestRate(self, lilv):
        self.lilv = lilv
    def getMonthlyInterest(self):
        q = self.ann * self.lilv
        self.lixi = q
    def withdraw(self):
        print("Enter the withdrawal amount")
        res = input("Input: ")
        self.ann = self.ann - int(res)
        print("You successfully withdrew", res, "yuan")
    def deposit(self):
        print("Enter the deposit amount")
        res1 = input("Input: ")
        self.ann = self.ann + int(res1)
        print("You successfully deposited", res1, "yuan")
    def dayin(self):
        print(self.id, "your account balance is:", self.ann,
              "the interest rate is:", self.lilv, "the interest is:", self.lixi)

qwe = Account()
qwe.shuju(1122, 20000)
qwe.getMonthlyInterestRate(0.045)
qwe.getMonthlyInterest()
qwe.withdraw()
qwe.deposit()
qwe.dayin()
```

3

```
class Fan(object):
    def __init__(self, speed=1, on=False, r=5, color='blue'):
        self.SLOW = 1
        self.MEDIUM = 2
        self.FAST = 3
        self.__speed = int(speed)
        self.__on = bool(on)
        self.__r = float(r)
        self.__color = str(color)
    def prints(self):
        if self.__speed == 1:
            print('SLOW')
        elif self.__speed == 2:
            print('MEDIUM')
        elif self.__speed == 3:
            print('FAST')
        if self.__on:  # __on is a bool, so test it directly rather than comparing to the string 'on'
            print('on')
        else:
            print('off')
        print(self.__r)
        print(self.__color)

if __name__ == "__main__":
    speed = int(input('Choose a speed: '))
    on = bool(input('on or off: '))  # note: any non-empty input becomes True
    r = float(input('The radius is: '))
    color = str(input('The color is: '))
    fan = Fan(speed, on, r, color)
    fan.prints()
```

4

```
import math
class RegularPolygon(object):
    def __init__(self, n, side, x, y):
        self.n = n
        self.side = side
        self.x = x
        self.y = y
    def getPerimenter(self):
        print(self.n * self.side)
    def getArea(self):
        # area of a regular n-gon: n * s^2 / (4 * tan(pi/n))
        Area = self.n * self.side**2 / (4 * math.tan(math.pi / self.n))
        print(Area)

qwe = RegularPolygon(20, 5, 10.6, 7.8)
qwe.getPerimenter()
qwe.getArea()
```

5

```
class LinearEquation(object):
    def __init__(self, a, b, c, d, e, f):
        self.__a = a
        self.__b = b
        self.__c = c
        self.__d = d
        self.__e = e
        self.__f = f
    def set_a(self, a):
        self.__a = a
    def get_a(self):
        return self.__a
    def set_b(self, b):
        self.__b = b
    def get_b(self):
        return self.__b
    def set_c(self, c):
        self.__c = c
    def get_c(self):
        return self.__c
    def set_d(self, d):
        self.__d = d
    def get_d(self):
        return self.__d
    def set_e(self, e):
        self.__e = e
    def get_e(self):
        return self.__e
    def set_f(self, f):
        self.__f = f
    def get_f(self):
        return self.__f
    def isSolvable(self):
        if (self.__a * self.__d) - (self.__c * self.__b) != 0:
            return True
        else:
            return print('This equation has no solution')
    def getX(self):
        s = (self.__a * self.__d) - (self.__b * self.__c)
        x = ((self.__e * self.__d) - (self.__b * self.__f)) / s  # divide the whole numerator by s
        print('The value of X is: %.2f' % x)
    def getY(self):
        s = (self.__a * self.__d) - (self.__b * self.__c)
        y = ((self.__a * self.__f) - (self.__e * self.__c)) / s  # divide the whole numerator by s
        print('The value of Y is: %.2f' % y)

if __name__ == "__main__":
    a = int(input('The value of a is: '))
    b = int(input('The value of b is: '))
    c = int(input('The value of c is: '))
    d = int(input('The value of d is: '))
    e = int(input('The value of e is: '))
    f = int(input('The value of f is: '))
    l = LinearEquation(a, b, c, d, e, f)
    l.isSolvable()
    l.getX()
    l.getY()
```

6

```
class zuobao:
    def shur(self):
        import math
        x1, y1, x2, y2 = map(float, input().split())
        x3, y3, x4, y4 = map(float, input().split())
        u1 = (x4-x3)*(y1-y3) - (x1-x3)*(y4-y3)
        v1 = (x4-x3)*(y2-y3) - (x2-x3)*(y4-y3)
        u = math.fabs(u1)
        v = math.fabs(v1)
        x5 = (x1*v + x2*u) / (u + v)
        y5 = (y1*v + y2*u) / (u + v)
        print(x5, y5)

re = zuobao()
re.shur()
```

7

```
class LinearEquation(object):
    def __init__(self, a, b, c, d, e, f):
        self.__a = a
        self.__b = b
        self.__c = c
        self.__d = d
        self.__e = e
        self.__f = f
    def set_a(self, a):
        self.__a = a
    def get_a(self):
        return self.__a
    def set_b(self, b):
        self.__b = b
    def get_b(self):
        return self.__b
    def set_c(self, c):
        self.__c = c
    def get_c(self):
        return self.__c
    def set_d(self, d):
        self.__d = d
    def get_d(self):
        return self.__d
    def set_e(self, e):
        self.__e = e
    def get_e(self):
        return self.__e
    def set_f(self, f):
        self.__f = f
    def get_f(self):
        return self.__f
    def isSolvable(self):
        if (self.__a * self.__d) - (self.__c * self.__b) != 0:
            return True
        else:
            return print('This equation has no solution')
    def getX(self):
        s = (self.__a * self.__d) - (self.__b * self.__c)
        x = ((self.__e * self.__d) - (self.__b * self.__f)) / s  # divide the whole numerator by s
        print('The value of X is: %.2f' % x)
    def getY(self):
        s = (self.__a * self.__d) - (self.__b * self.__c)
        y = ((self.__a * self.__f) - (self.__e * self.__c)) / s  # divide the whole numerator by s
        print('The value of Y is: %.2f' % y)

if __name__ == "__main__":
    a = int(input('The value of a is: '))
    b = int(input('The value of b is: '))
    c = int(input('The value of c is: '))
    d = int(input('The value of d is: '))
    e = int(input('The value of e is: '))
    f = int(input('The value of f is: '))
    l = LinearEquation(a, b, c, d, e, f)
    l.isSolvable()
    l.getX()
    l.getY()
```
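Exercise 5 (repeated in exercise 7) applies Cramer's rule to the system $ax + by = e$, $cx + dy = f$; when coding it, the division by the determinant must apply to the whole numerator. A self-contained check with hypothetical coefficients:

```python
def solve_2x2(a, b, c, d, e, f):
    # Cramer's rule for: a*x + b*y = e, c*x + d*y = f
    det = a * d - b * c
    if det == 0:
        return None  # no unique solution
    x = (e * d - b * f) / det  # parentheses around the whole numerator
    y = (a * f - e * c) / det
    return x, y

x, y = solve_2x2(1, 2, 3, 4, 5, 6)
print(x, y)  # -4.0 4.5
# Substitute back to verify both equations hold.
assert 1 * x + 2 * y == 5 and 3 * x + 4 * y == 6
```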
# Hyperparameter tuning with AI Platform ``` import os import time ``` ## Configure environment *You need to walk through the `local-experimentation.ipynb` notebook to create training and validation datasets.* ``` PROJECT_ID = !(gcloud config get-value core/project) ARTIFACT_STORE = 'gs://{}-artifact-store'.format(PROJECT_ID[0]) TRAINING_DATA_PATH = '{}/datasets/training.csv'.format(ARTIFACT_STORE) TESTING_DATA_PATH = '{}/datasets/testing.csv'.format(ARTIFACT_STORE) REGION = "us-central1" JOBDIR_BUCKET = '{}/jobs'.format(ARTIFACT_STORE) ``` ## Create a training application package ### Create a training module ``` TRAINING_APP_FOLDER = '../hypertune_app/trainer' os.makedirs(TRAINING_APP_FOLDER, exist_ok=True) !touch $TRAINING_APP_FOLDER/__init__.py %%writefile $TRAINING_APP_FOLDER/train.py import logging import os import subprocess import sys import fire import numpy as np import pandas as pd import hypertune from sklearn.model_selection import cross_val_score from sklearn.externals import joblib from sklearn.decomposition import PCA from sklearn.linear_model import Ridge from sklearn.pipeline import Pipeline from sklearn.preprocessing import StandardScaler def train(job_dir, data_path, n_components, alpha): # Load data from GCS df_train = pd.read_csv(data_path) y = df_train.octane X = df_train.drop('octane', axis=1) # Configure a training pipeline pipeline = Pipeline([ ('scale', StandardScaler()), ('reduce_dim', PCA(n_components=n_components)), ('regress', Ridge(alpha=alpha)) ]) # Calculate the performance metric scores = cross_val_score(pipeline, X, y, cv=10, scoring='neg_mean_squared_error') # Log it with hypertune hpt = hypertune.HyperTune() hpt.report_hyperparameter_tuning_metric( hyperparameter_metric_tag='neg_mean_squared_error', metric_value=scores.mean() ) # Fit the model on a full dataset pipeline.fit(X, y) # Save the model model_filename = 'model.joblib' joblib.dump(value=pipeline, filename=model_filename) gcs_model_path = "{}/{}".format(job_dir, 
model_filename) subprocess.check_call(['gsutil', 'cp', model_filename, gcs_model_path], stderr=sys.stdout) logging.info("Saved model in: {}".format(gcs_model_path)) if __name__ == "__main__": logging.basicConfig(level=logging.INFO) fire.Fire(train) ``` ### Create hyperparameter configuration file ``` %%writefile $TRAINING_APP_FOLDER/hptuning_config.yaml trainingInput: hyperparameters: goal: MAXIMIZE maxTrials: 12 maxParallelTrials: 3 hyperparameterMetricTag: neg_mean_squared_error enableTrialEarlyStopping: TRUE params: - parameterName: n_components type: DISCRETE discreteValues: [ 2, 3, 4, 5, 6, 7, 8 ] - parameterName: alpha type: DOUBLE minValue: 0.0001 maxValue: 0.1 scaleType: UNIT_LINEAR_SCALE ``` ### Configure dependencies ``` %%writefile $TRAINING_APP_FOLDER/../setup.py from setuptools import find_packages from setuptools import setup REQUIRED_PACKAGES = ['fire', 'gcsfs', 'cloudml-hypertune'] setup( name='trainer', version='0.1', install_requires=REQUIRED_PACKAGES, packages=find_packages(), include_package_data=True, description='My training application package.' ) ``` ## Submit a training job ``` JOB_NAME = "JOB_{}".format(time.strftime("%Y%m%d_%H%M%S")) SCALE_TIER = "BASIC" MODULE_NAME = "trainer.train" RUNTIME_VERSION = "2.1" PYTHON_VERSION = "3.7" !gcloud ai-platform jobs submit training $JOB_NAME \ --region $REGION \ --job-dir $JOBDIR_BUCKET/$JOB_NAME \ --package-path $TRAINING_APP_FOLDER \ --module-name $MODULE_NAME \ --scale-tier $SCALE_TIER \ --python-version $PYTHON_VERSION \ --runtime-version $RUNTIME_VERSION \ --config $TRAINING_APP_FOLDER/hptuning_config.yaml \ -- \ --data_path $TRAINING_DATA_PATH !gcloud ai-platform jobs describe $JOB_NAME ```
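The `goal: MAXIMIZE` setting works together with the metric reported by `train.py`: because `cross_val_score` returns the *negated* mean squared error, maximizing the reported value selects the trial with the lowest MSE. A quick pure-Python illustration with hypothetical trial scores:

```python
def neg_mean_squared_error(y_true, y_pred):
    # Negated MSE: larger (closer to zero) means a better fit.
    n = len(y_true)
    return -sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / n

# Hypothetical cross-validation results from three tuning trials.
trials = {
    "trial_1": neg_mean_squared_error([88.0, 87.5], [88.5, 87.0]),  # MSE 0.25
    "trial_2": neg_mean_squared_error([88.0, 87.5], [88.1, 87.6]),  # MSE 0.01
    "trial_3": neg_mean_squared_error([88.0, 87.5], [89.0, 86.5]),  # MSE 1.00
}
best = max(trials, key=trials.get)  # the MAXIMIZE goal picks the lowest-MSE trial
print(best)  # → trial_2
```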
<table align="center"> <td align="center"><a target="_blank" href="http://introtodeeplearning.com"> <img src="https://i.ibb.co/Jr88sn2/mit.png" style="padding-bottom:5px;" /> Visit MIT Deep Learning</a></td> <td align="center"><a target="_blank" href="https://colab.research.google.com/github/aamini/introtodeeplearning/blob/master/lab1/Part1_TensorFlow.ipynb"> <img src="https://i.ibb.co/2P3SLwK/colab.png" style="padding-bottom:5px;" />Run in Google Colab</a></td> <td align="center"><a target="_blank" href="https://github.com/aamini/introtodeeplearning/blob/master/lab1/Part1_TensorFlow.ipynb"> <img src="https://i.ibb.co/xfJbPmL/github.png" height="70px" style="padding-bottom:5px;" />View Source on GitHub</a></td> </table> # Copyright Information ``` # Copyright 2022 MIT 6.S191 Introduction to Deep Learning. All Rights Reserved. # # Licensed under the MIT License. You may not use this file except in compliance # with the License. Use and/or modification of this code outside of 6.S191 must # reference: # # © MIT 6.S191: Introduction to Deep Learning # http://introtodeeplearning.com # ``` # Lab 1: Intro to TensorFlow and Music Generation with RNNs In this lab, you'll get exposure to using TensorFlow and learn how it can be used for solving deep learning tasks. Go through the code and run each cell. Along the way, you'll encounter several ***TODO*** blocks -- follow the instructions to fill them out before running those cells and continuing. # Part 1: Intro to TensorFlow ## 0.1 Install TensorFlow TensorFlow is a software library extensively used in machine learning. Here we'll learn how computations are represented and how to define a simple neural network in TensorFlow. For all the labs in 6.S191 2022, we'll be using the latest version of TensorFlow, TensorFlow 2, which affords great flexibility and the ability to imperatively execute operations, just like in Python. You'll notice that TensorFlow 2 is quite similar to Python in its syntax and imperative execution. 
Let's install TensorFlow and a couple of dependencies.

```
%tensorflow_version 2.x
import tensorflow as tf

# Download and import the MIT 6.S191 package
!pip install mitdeeplearning
import mitdeeplearning as mdl

import numpy as np
import matplotlib.pyplot as plt
```

## 1.1 Why is TensorFlow called TensorFlow?

TensorFlow is called 'TensorFlow' because it handles the flow (node/mathematical operation) of Tensors, which are data structures that you can think of as multi-dimensional arrays. Tensors are represented as n-dimensional arrays of base datatypes such as a string or integer -- they provide a way to generalize vectors and matrices to higher dimensions.

The ```shape``` of a Tensor defines its number of dimensions and the size of each dimension. The ```rank``` of a Tensor provides the number of dimensions (n-dimensions) -- you can also think of this as the Tensor's order or degree.

Let's first look at 0-d Tensors, of which a scalar is an example:

```
sport = tf.constant("Tennis", tf.string)
number = tf.constant(1.41421356237, tf.float64)

print("`sport` is a {}-d Tensor".format(tf.rank(sport).numpy()))
print("`number` is a {}-d Tensor".format(tf.rank(number).numpy()))
```

Vectors and lists can be used to create 1-d Tensors:

```
sports = tf.constant(["Tennis", "Basketball"], tf.string)
numbers = tf.constant([3.141592, 1.414213, 2.71821], tf.float64)

print("`sports` is a {}-d Tensor with shape: {}".format(tf.rank(sports).numpy(), tf.shape(sports)))
print("`numbers` is a {}-d Tensor with shape: {}".format(tf.rank(numbers).numpy(), tf.shape(numbers)))
```

Next we consider creating 2-d (i.e., matrices) and higher-rank Tensors. For example, in future labs involving image processing and computer vision, we will use 4-d Tensors. Here the dimensions correspond to the number of example images in our batch, image height, image width, and the number of color channels.
``` ### Defining higher-order Tensors ### '''TODO: Define a 2-d Tensor''' matrix = tf.constant([[1,2,3],[4,5,6]], tf.int32) assert isinstance(matrix, tf.Tensor), "matrix must be a tf Tensor object" assert tf.rank(matrix).numpy() == 2 print(tf.shape(matrix)) '''TODO: Define a 4-d Tensor.''' # Use tf.zeros to initialize a 4-d Tensor of zeros with size 10 x 256 x 256 x 3. # You can think of this as 10 images where each image is RGB 256 x 256. images = tf.zeros((10, 256, 256, 3)) assert isinstance(images, tf.Tensor), "matrix must be a tf Tensor object" assert tf.rank(images).numpy() == 4, "matrix must be of rank 4" assert tf.shape(images).numpy().tolist() == [10, 256, 256, 3], "matrix is incorrect shape" ``` As you have seen, the ```shape``` of a Tensor provides the number of elements in each Tensor dimension. The ```shape``` is quite useful, and we'll use it often. You can also use slicing to access subtensors within a higher-rank Tensor: ``` row_vector = matrix[1] column_vector = matrix[:,2] scalar = matrix[1, 2] print("`row_vector`: {}".format(row_vector.numpy())) print("`column_vector`: {}".format(column_vector.numpy())) print("`scalar`: {}".format(scalar.numpy())) ``` ## 1.2 Computations on Tensors A convenient way to think about and visualize computations in TensorFlow is in terms of graphs. We can define this graph in terms of Tensors, which hold data, and the mathematical operations that act on these Tensors in some order. Let's look at a simple example, and define this computation using TensorFlow: ![alt text](https://raw.githubusercontent.com/aamini/introtodeeplearning/master/lab1/img/add-graph.png) ``` # Create the nodes in the graph, and initialize values a = tf.constant(15) b = tf.constant(61) # Add them! 
c1 = tf.add(a,b) c2 = a + b # TensorFlow overrides the "+" operation so that it is able to act on Tensors print(c1) print(c2) ``` Notice how we've created a computation graph consisting of TensorFlow operations, and how the output is a Tensor with value 76 -- we've just created a computation graph consisting of operations, and it's executed them and given us back the result. Now let's consider a slightly more complicated example: ![alt text](https://raw.githubusercontent.com/aamini/introtodeeplearning/master/lab1/img/computation-graph.png) Here, we take two inputs, `a, b`, and compute an output `e`. Each node in the graph represents an operation that takes some input, does some computation, and passes its output to another node. Let's define a simple function in TensorFlow to construct this computation function: ``` ### Defining Tensor computations ### # Construct a simple computation function def func(a,b): '''TODO: Define the operation for c, d, e (use tf.add, tf.subtract, tf.multiply).''' c = a+b d = b-1 e = c*d return e ``` Now, we can call this function to execute the computation graph given some inputs `a,b`: ``` # Consider example values for a,b a, b = 1.5, 2.5 # Execute the computation e_out = func(a,b) print(e_out) ``` Notice how our output is a Tensor with value defined by the output of the computation, and that the output has no shape as it is a single scalar value. ## 1.3 Neural networks in TensorFlow We can also define neural networks in TensorFlow. TensorFlow uses a high-level API called [Keras](https://www.tensorflow.org/guide/keras) that provides a powerful, intuitive framework for building and training deep learning models. Let's first consider the example of a simple perceptron defined by just one dense layer: $ y = \sigma(Wx + b)$, where $W$ represents a matrix of weights, $b$ is a bias, $x$ is the input, $\sigma$ is the sigmoid activation function, and $y$ is the output. 
We can also visualize this operation using a graph: ![alt text](https://raw.githubusercontent.com/aamini/introtodeeplearning/master/lab1/img/computation-graph-2.png) Tensors can flow through abstract types called [```Layers```](https://www.tensorflow.org/api_docs/python/tf/keras/layers/Layer) -- the building blocks of neural networks. ```Layers``` implement common neural networks operations, and are used to update weights, compute losses, and define inter-layer connectivity. We will first define a ```Layer``` to implement the simple perceptron defined above. ``` ### Defining a network Layer ### # n_output_nodes: number of output nodes # input_shape: shape of the input # x: input to the layer class OurDenseLayer(tf.keras.layers.Layer): def __init__(self, n_output_nodes): super(OurDenseLayer, self).__init__() self.n_output_nodes = n_output_nodes def build(self, input_shape): d = int(input_shape[-1]) # Define and initialize parameters: a weight matrix W and bias b # Note that parameter initialization is random! self.W = self.add_weight("weight", shape=[d, self.n_output_nodes]) # note the dimensionality self.b = self.add_weight("bias", shape=[1, self.n_output_nodes]) # note the dimensionality def call(self, x): '''TODO: define the operation for z (hint: use tf.matmul)''' z = tf.matmul(x, self.W) + self.b '''TODO: define the operation for out (hint: use tf.sigmoid)''' y = tf.sigmoid(z) return y # Since layer parameters are initialized randomly, we will set a random seed for reproducibility tf.random.set_seed(1) layer = OurDenseLayer(3) layer.build((1,2)) x_input = tf.constant([[1,2.]], shape=(1,2)) y = layer.call(x_input) # test the output! print(y.numpy()) mdl.lab1.test_custom_dense_layer_output(y) ``` Conveniently, TensorFlow has defined a number of ```Layers``` that are commonly used in neural networks, for example a [```Dense```](https://www.tensorflow.org/api_docs/python/tf/keras/layers/Dense?version=stable). 
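The forward pass implemented by `OurDenseLayer`, $y = \sigma(Wx + b)$, can be sanity-checked in plain Python with hand-picked weights (TensorFlow initializes its own randomly; the numbers below are only illustrative):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def dense_forward(x, W, b):
    # y_j = sigmoid(sum_i x_i * W[i][j] + b[j]), matching tf.matmul(x, W) + b
    n_out = len(b)
    z = [sum(x[i] * W[i][j] for i in range(len(x))) + b[j] for j in range(n_out)]
    return [sigmoid(v) for v in z]

# 2 inputs -> 3 outputs, hand-picked weights.
W = [[0.5, -0.5, 1.0],
     [1.0,  0.0, 0.5]]
b = [0.0, 0.0, 0.0]
y = dense_forward([1.0, 2.0], W, b)
print(y)  # three values, each squashed into (0, 1) by the sigmoid
```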
Now, instead of using a single ```Layer``` to define our simple neural network, we'll use the [`Sequential`](https://www.tensorflow.org/versions/r2.0/api_docs/python/tf/keras/Sequential) model from Keras and a single [`Dense` ](https://www.tensorflow.org/versions/r2.0/api_docs/python/tf/keras/layers/Dense) layer to define our network. With the `Sequential` API, you can readily create neural networks by stacking together layers like building blocks. ``` ### Defining a neural network using the Sequential API ### # Import relevant packages from tensorflow.keras import Sequential from tensorflow.keras.layers import Dense # Define the number of outputs n_output_nodes = 3 # First define the model model = Sequential() '''TODO: Define a dense (fully connected) layer to compute z''' # https://www.tensorflow.org/api_docs/python/tf/keras/layers/Dense?version=stable dense_layer = Dense(n_output_nodes, activation='sigmoid', input_shape=(2, )) # Add the dense layer to the model model.add(dense_layer) ``` That's it! We've defined our model using the Sequential API. Now, we can test it out using an example input: ``` # Test model with example input x_input = tf.constant([[1,2.]], shape=(1,2)) '''TODO: feed input into the model and predict the output!''' print(model(x_input)) ``` In addition to defining models using the `Sequential` API, we can also define neural networks by directly subclassing the [`Model`](https://www.tensorflow.org/api_docs/python/tf/keras/Model?version=stable) class, which groups layers together to enable model training and inference. The `Model` class captures what we refer to as a "model" or as a "network". Using Subclassing, we can create a class for our model, and then define the forward pass through the network using the `call` function. Subclassing affords the flexibility to define custom layers, custom training loops, custom activation functions, and custom models. 
Let's define the same neural network as above now using Subclassing rather than the `Sequential` model. ``` ### Defining a model using subclassing ### from tensorflow.keras import Model from tensorflow.keras.layers import Dense class SubclassModel(tf.keras.Model): # In __init__, we define the Model's layers def __init__(self, n_output_nodes): super(SubclassModel, self).__init__() '''Our model consists of a single Dense layer''' self.dense_layer = Dense(n_output_nodes, activation='sigmoid', input_shape=(2, )) # In the call function, we define the Model's forward pass. def call(self, inputs): return self.dense_layer(inputs) ``` Just like the model we built using the `Sequential` API, let's test out our `SubclassModel` using an example input. ``` n_output_nodes = 3 model = SubclassModel(n_output_nodes) x_input = tf.constant([[1,2.]], shape=(1,2)) print(model.call(x_input)) ``` ## 1.4 Automatic differentiation in TensorFlow [Automatic differentiation](https://en.wikipedia.org/wiki/Automatic_differentiation) is one of the most important parts of TensorFlow and is the backbone of training with [backpropagation](https://en.wikipedia.org/wiki/Backpropagation). We will use the TensorFlow GradientTape [`tf.GradientTape`](https://www.tensorflow.org/api_docs/python/tf/GradientTape?version=stable) to trace operations for computing gradients later. When a forward pass is made through the network, all forward-pass operations get recorded to a "tape"; then, to compute the gradient, the tape is played backwards. By default, the tape is discarded after it is played backwards; this means that a particular `tf.GradientTape` can only compute one gradient, and subsequent calls throw a runtime error. However, we can compute multiple gradients over the same computation by creating a ```persistent``` gradient tape. First, we will look at how we can compute gradients using GradientTape and access them for computation. 
We define the simple function $ y = x^2$ and compute the gradient: ``` ### Gradient computation with GradientTape ### # y = x^2 # Example: x = 3.0 x = tf.Variable(3.0) # Initiate the gradient tape with tf.GradientTape() as tape: # Define the function y = x * x # Access the gradient -- derivative of y with respect to x dy_dx = tape.gradient(y, x) assert dy_dx.numpy() == 6.0 ``` In training neural networks, we use differentiation and stochastic gradient descent (SGD) to optimize a loss function. Now that we have a sense of how `GradientTape` can be used to compute and access derivatives, we will look at an example where we use automatic differentiation and SGD to find the minimum of $L=(x-x_f)^2$. Here $x_f$ is a variable for a desired value we are trying to optimize for; $L$ represents a loss that we are trying to minimize. While we can clearly solve this problem analytically ($x_{min}=x_f$), considering how we can compute this using `GradientTape` sets us up nicely for future labs where we use gradient descent to optimize entire neural network losses. ``` ### Function minimization with automatic differentiation and SGD ### # Initialize a random value for our initial x x = tf.Variable([tf.random.normal([1])]) print("Initializing x={}".format(x.numpy())) learning_rate = 1e-2 # learning rate for SGD history = [] # Define the target value x_f = 4 # We will run SGD for a number of iterations. At each iteration, we compute the loss, # compute the derivative of the loss with respect to x, and perform the SGD update. for i in range(500): with tf.GradientTape() as tape: '''TODO: define the loss as described above''' loss = (x - x_f)*(x - x_f) # loss minimization using gradient tape grad = tape.gradient(loss, x) # compute the derivative of the loss with respect to x new_x = x - learning_rate*grad # sgd update x.assign(new_x) # update the value of x history.append(x.numpy()[0]) # Plot the evolution of x as we optimize towards x_f! 
plt.plot(history) plt.plot([0, 500],[x_f,x_f]) plt.legend(('Predicted', 'True')) plt.xlabel('Iteration') plt.ylabel('x value') ``` `GradientTape` provides an extremely flexible framework for automatic differentiation. In order to back propagate errors through a neural network, we track forward passes on the Tape, use this information to determine the gradients, and then use these gradients for optimization using SGD.
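As a sanity check on the loop above, the same SGD minimization of $L=(x-x_f)^2$ can be written in plain NumPy: the gradient is simply $2(x-x_f)$, so we can verify convergence to $x_f$ without a framework. This is a sketch for intuition, not part of the lab itself:

```python
import numpy as np

x_f = 4.0                      # target value, as in the TensorFlow example
rng = np.random.default_rng(0)
x = rng.standard_normal()      # random initialization
learning_rate = 1e-2           # same learning rate as above

history = []
for _ in range(500):
    grad = 2.0 * (x - x_f)         # d/dx of (x - x_f)^2, derived by hand
    x = x - learning_rate * grad   # SGD update
    history.append(x)

print(round(x, 4))  # ends up very close to x_f = 4.0
```

The only difference from the TensorFlow version is that the derivative is written out by hand instead of being recorded on a tape, which is exactly the bookkeeping `GradientTape` automates.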
# In this notebook we will simulate the interconnection of LTI systems

```
# import the required libraries
import numpy as np               # arrays
import matplotlib.pyplot as plt  # plots
plt.rcParams.update({'font.size': 14})
import IPython.display as ipd    # to play signals
import sounddevice as sd
import soundfile as sf
# The next modules are used to create our LTI system
from scipy.signal import butter, lfilter, freqz, chirp, impulse
```

# Let's create 2 LTI systems

First, let's create two LTI systems: a high-pass filter and a low-pass filter. You can later change the order of one of the filters and its cutoff frequency, and then observe what happens in the FRF of the concatenated system.

```
# Filter variables
order1 = 6
fs = 44100      # sample rate, Hz
cutoff1 = 1000  # desired cutoff frequency of the filter, Hz

# Low-pass
b, a = butter(order1, 2*cutoff1/fs, btype='low', analog=False)
w, H1 = freqz(b, a)

# High-pass
cutoff2 = 1000
order2 = 6
b, a = butter(order2, 2*cutoff2/fs, btype='high', analog=False)
w, H2 = freqz(b, a)

plt.figure(figsize=(15,5))
plt.subplot(1,2,1)
plt.semilogx(fs*w/(2*np.pi), 20 * np.log10(abs(H1)), 'b', linewidth = 2, label = 'Low-pass')
plt.semilogx(fs*w/(2*np.pi), 20 * np.log10(abs(H2)), 'r', linewidth = 2, label = 'High-pass')
plt.title('Magnitude')
plt.xlabel('Frequency [Hz]')
plt.ylabel('Amplitude [dB]')
plt.margins(0, 0.1)
plt.grid(which='both', axis='both')
plt.ylim((-100, 20))

plt.subplot(1,2,2)
plt.semilogx(fs*w/(2*np.pi), np.angle(H1), 'b', linewidth = 2, label = 'Low-pass')
plt.semilogx(fs*w/(2*np.pi), np.angle(H2), 'r', linewidth = 2, label = 'High-pass')
plt.legend(loc = 'upper right')
plt.title('Phase')
plt.xlabel('Frequency [Hz]')
plt.ylabel('Phase [rad]')
plt.margins(0, 0.1)
plt.grid(which='both', axis='both')
plt.show()
```

# Series interconnection

\begin{equation} H(\mathrm{j}\omega) = H_1(\mathrm{j}\omega)H_2(\mathrm{j}\omega) \end{equation}

```
Hs = H1*H2

plt.figure(figsize=(15,5))
plt.subplot(1,2,1)
plt.semilogx(fs*w/(2*np.pi), 20 * np.log10(abs(H1)), '--k', linewidth = 2)
plt.semilogx(fs*w/(2*np.pi), 20 * np.log10(abs(H2)), '--k', linewidth = 2)
plt.semilogx(fs*w/(2*np.pi), 20 * np.log10(abs(Hs)), 'b', linewidth = 2, label = 'R: Band pass')
plt.title('Magnitude')
plt.xlabel('Frequency [Hz]')
plt.ylabel('Amplitude [dB]')
plt.margins(0, 0.1)
plt.grid(which='both', axis='both')
plt.ylim((-100, 20))

plt.subplot(1,2,2)
plt.semilogx(fs*w/(2*np.pi), np.angle(H1), '--k', linewidth = 2)
plt.semilogx(fs*w/(2*np.pi), np.angle(H2), '--k', linewidth = 2)
plt.semilogx(fs*w/(2*np.pi), np.angle(Hs), 'b', linewidth = 2, label = 'R: Band pass')
plt.legend(loc = 'upper right')
plt.title('Phase')
plt.xlabel('Frequency [Hz]')
plt.ylabel('Phase [rad]')
plt.margins(0, 0.1)
plt.grid(which='both', axis='both')
plt.show()
```

# Parallel interconnection

\begin{equation} H(\mathrm{j}\omega) = H_1(\mathrm{j}\omega)+H_2(\mathrm{j}\omega) \end{equation}

```
Hs = H1+H2

plt.figure(figsize=(15,5))
plt.subplot(1,2,1)
plt.semilogx(fs*w/(2*np.pi), 20 * np.log10(abs(H1)), '--k', linewidth = 2)
plt.semilogx(fs*w/(2*np.pi), 20 * np.log10(abs(H2)), '--k', linewidth = 2)
plt.semilogx(fs*w/(2*np.pi), 20 * np.log10(abs(Hs)), 'b', linewidth = 2, label = 'R: All pass')
plt.title('Magnitude')
plt.xlabel('Frequency [Hz]')
plt.ylabel('Amplitude [dB]')
plt.margins(0, 0.1)
plt.grid(which='both', axis='both')
plt.ylim((-100, 20))

plt.subplot(1,2,2)
plt.semilogx(fs*w/(2*np.pi), np.angle(H1), '--k', linewidth = 2)
plt.semilogx(fs*w/(2*np.pi), np.angle(H2), '--k', linewidth = 2)
plt.semilogx(fs*w/(2*np.pi), np.angle(Hs), 'b', linewidth = 2, label = 'R: All pass')
plt.legend(loc = 'upper right')
plt.title('Phase')
plt.xlabel('Frequency [Hz]')
plt.ylabel('Phase [rad]')
plt.margins(0, 0.1)
plt.grid(which='both', axis='both')
plt.show()
```
# Week 2

## Matlab Resources

### Onramp
- Go to: https://matlabacademy.mathworks.com/ and click on the MATLAB Onramp button to start learning MATLAB

### Tutorials

#### Get Started with MATLAB and MATLAB Online
- [What is MATLAB?](https://youtu.be/WYG2ZZjgp5M)\*
- [MATLAB Variables](https://youtu.be/0w9NKt6Fixk)\*
- [MATLAB as a Calculator](https://youtu.be/aRSkNpCSgWY)\*
- [MATLAB Functions](https://youtu.be/RJp46UVQBic)\*
- [Getting Started with MATLAB Online](https://youtu.be/XjzxCVWKz58)
- [Managing Files in MATLAB Online](https://youtu.be/B3lWLIrYjC0)

#### Vectors
- [Creating Vectors](https://youtu.be/R5Mnkrk9Mos)\*
- [Creating Uniformly Spaced Vectors](https://youtu.be/_zqTOV5yl8Y)\*
- [Accessing Elements of a Vector Using Conditions](https://youtu.be/8D04GW_foQ0)\*
- [Calculations with Vectors](https://youtu.be/VQaZ0TvjF0c)\*
- [Vector Transpose](https://youtu.be/vgRLwjHBmsg)

#### Visualization
- [Line Plots](https://youtu.be/-hhJoveE4sY)\*
- [Annotating Graphs](https://youtu.be/JyovEGPSdoI)\*
- [Multiple Plots](https://youtu.be/fBx8EFuXFLM)\*

#### Matrices
- [Creating Matrices](https://youtu.be/qdTdwTh6jMo)\*
- [Calculations with Matrices](https://youtu.be/mzzJ9gnMrYE)\*
- [Accessing Elements of a Matrix](https://youtu.be/uWPHxpTuZRA)\*
- [Matrix Creation Functions](https://youtu.be/VPcbpVd_mPA)\*
- [Combining Matrices](https://youtu.be/ejTr3ekTTyA)
- [Determining Array Size and Length](https://youtu.be/IF9-ffmxuy8)
- [Matrix Multiplication](https://youtu.be/4hsx3bdNjGk)
- [Reshaping Arrays](https://youtu.be/UQpDIHlFo8A)
- [Statistical Functions with Matrices](https://youtu.be/Y97W3_u7cM4)

#### MATLAB Programming
- [Logical Variables](https://youtu.be/bRMg4GsFDQ8)\*
- [If-Else Statements](https://youtu.be/JZSuU-Laigo)\*
- [Writing a FOR loop](https://youtu.be/lg65bzgvI5c)\*
- [Writing a WHILE Loop](https://youtu.be/PKH5lCMJXbk)
- [Writing Functions](https://youtu.be/GrcNN04eqXU)
- [Passing Functions as Inputs](https://youtu.be/aNCwR9dRjHs)

#### Troubleshooting
- [Using Online Documentation](https://youtu.be/54n5zJwR8aM)\*
- [Which File or Variable Am I Using?](https://youtu.be/Z09BvGeYNdE)
- [Troubleshooting Code with the Debugger](https://youtu.be/DB4aJMnZtNQ)

***Indicates content covered in Onramp**

## Multivariate Linear Regression

### Notation
- $n$ = number of features
- $x^{(i)}$ = input (features) of $i^{th}$ training example
- $x_j^{(i)}$ = value of feature $j$ in $i^{th}$ training example

### Hypothesis
- For convenience of notation, we define $x_0^{(i)}=1$, so all $x_0$'s are equal to 1
- $h_{\theta}(x) = \theta_0 x_0 + \theta_1 x_1 + \theta_2 x_2 + \cdots + \theta_n x_n$
- $x, \theta \in \mathbb{R}^{n+1}$
- This way it can also be written in vector form:
  - $h_{\theta}(x)=\theta^T x=\begin{bmatrix}\theta_0 & \theta_1 & \cdots & \theta_n\end{bmatrix} \cdot \begin{bmatrix}x_0 \\ x_1 \\ \vdots \\ x_n \end{bmatrix}$

### Cost Function
$$J(\theta)=\frac{1}{2m}\sum_{i=1}^{m}(h_{\theta}(x^{(i)})-y^{(i)})^2$$

### Gradient Descent $(n\geq 1)$
- Repeat the following for $j=0,\dots, n$:
$$\theta_j := \theta_j - \alpha\frac{1}{m}\sum_{i=1}^m(h_{\theta}(x^{(i)})-y^{(i)})x_j^{(i)}$$

### Feature Scaling
- Speeds gradient descent up, because $\theta$ will descend quickly on small ranges and slowly on large ranges, and so will oscillate inefficiently down to the optimum when the variables are very uneven
- To prevent this, we rescale the ranges to be roughly the same:
  - $-1 \leq x_{(i)} \leq 1$ or
  - $-0.5 \leq x_{(i)} \leq 0.5$
- We use **feature scaling** (division part) and **mean normalization** (subtraction part): $$x_i := \frac{x_i - \mu_i}{s_i}$$
  - $\mu_i$ is the **average** of all the values for feature $(i)$
  - $s_i$ is the **range** of values (max - min) or the **standard deviation**

### Debugging Gradient Descent
- Make a plot with *number of iterations* on the x-axis and plot the cost function $J(\theta)$ over the number of iterations of gradient descent
- If $J(\theta)$ ever increases, then you probably need
to decrease the learning rate $\alpha$
- **Automatic convergence tests** are also possible:
  - You declare convergence if $J(\theta)$ decreases by less than $E$ in one iteration, where $E$ is some small value such as $10^{-3}$
  - However, in practice it's difficult to choose $E$

### Features and Polynomial Regression
- We can combine multiple features into one feature, e.g. width and height into one feature, area (= width x height)
- Also, our hypothesis function need not be linear, if that doesn't fit the data well
  - We could use a quadratic, cubic, square-root function etc.
  - Square-root function example: $h_{\theta}(x) = \theta_0 + \theta_1 x_1 + \theta_2 \sqrt{x_1}$
- Choosing features this way, don't forget that feature scaling becomes even more important

### Normal Equation

#### How it Works
- Another way of minimizing $J$
- Explicit, non-iterative, analytical way
- We minimize $J$ by explicitly taking its derivatives with respect to the $\theta_j$'s, and setting them to zero
$$\theta = (X^T X)^{-1}X^Ty$$
- Don't forget to add $x_0^{(i)}$ (which equals 1) to the $X$ matrix
- There is **no need** to do feature scaling with the normal equation
- Because it needs to calculate the inverse of $X^T X$, it's slow if $n$ is very large
  - As a broad rule, you should switch to gradient descent for $n \geq 10000$
  - The normal equation has a runtime of $O(n^3)$, compared to gradient descent, which has a runtime of $O(kn^2)$

#### Noninvertibility
- Using `pinv` in Octave/Matlab will give us a value of $\theta$ even if $X^T X$ is not invertible (singular/degenerate)
- If $X^T X$ is **noninvertible**, possible reasons are:
  - Redundant features, where two features are very closely related (i.e. linearly dependent)
  - Too many features (e.g.
$m \leq n$)
    - In this case delete some features or
    - Use **regularization**

## Octave/Matlab Commands

### Basic Operations

```
% equal
1 == 2
% not equal
1 ~= 2
% and
1 && 0
% or
1 || 0
% xor
xor(1,0)
% change prompt
PS1('>> ')
% semicolon suppresses output
a = 3;
% display (print)
disp('Hello World')
% string format
sprintf('2 decimals: %0.2f', pi)
% change how many digits should be shown
format short
format long
% generate row vector start:step:end
1:0.25:2
% you can also leave out the step param i.e. start:end, this will by default increment by 1
1:5
% generate matrix consisting of ones (row count, column count)
ones(2,3)
% generate matrix consisting of zeros (row count, column count)
zeros(1,3)
% generate matrix consisting of random values between 0 and 1 (row count, column count)
rand(1,3)
% generate matrix consisting of normally distributed random values (row count, column count)
randn(1,3)
% plot histogram (data, optional: bin/bucket count) NOTE: in matlab histogram should be used instead of hist
hist(randn(1, 10000))
% generate identity matrix for the given dimension
eye(6)
% help for given function
help eye
```

### Moving Data Around

```
% number of rows
size(A, 1)
% number of columns
size(A, 2)
% gives the size of the longest dimension, but usually only used on vectors
length(A)
% current working directory
pwd
% change directory
cd
% list files and folders
ls
% load data
load featuresX.dat
% or the same calling
load('featuresX.dat')
% shows variables in current scope
who
% or for the detailed view
whos
% remove variable from scope
clear featuresX
% get first 10 elements
priceY(1:10)
% saves variable v into file hello.mat
save hello.mat v
% clear all variables
clear
% saves in a human readable format (no metadata like variable name)
save hello.txt v -ascii
% fetch everything in the second row (":" means every element along that row/column)
A(2,:)
% fetch everything from first and third row
A([1 3], :)
% can also be used for assignments
A(:,2) = [10; 11; 12]
% append another column vector to the right
A = [A, [100; 101; 102]]
% put all elements of A into a single vector
A(:)
% concat two matrices
C = [A B]
% or the same as
C = [A, B]
% or put it on the bottom
C = [A; B]
```

### Computing on Data

```
% multiply A11 with B11, A12 with B12 etc. (element-wise)
A .* B
% element-wise squaring
A .^ 2
% element-wise inverse
1 ./ A
% element-wise log
log(v)
% element-wise exp
exp(v)
% element-wise abs
abs(v)
% same as -1*v
-v
% element-wise increment by e.g. 1
v + ones(length(v), 1)
% or use this (+ and - are element-wise)
v + 1
% returns max value and index
[val, ind] = max(a)
% element-wise comparison
a < 3
% tells the indexes of the variables for which the condition is true
find(a < 3)
% generates an n x n matrix where all rows, columns and diagonals sum up to the same value
magic(3)
% find used on matrices, returns rows and columns
[r,c] = find(A >= 7)
% adds up all elements
sum(a)
% product of all elements
prod(a)
% round down
floor(a)
% round up
ceil(a)
% element-wise max
max(A, B)
% column-wise max
max(A,[],1)
% or use
max(A)
% row-wise max
max(A,[],2)
% max element
max(max(A))
% or turn A into a vector
max(A(:))
% column-wise sum
sum(A,1)
% row-wise sum
sum(A,2)
% diagonal sum
sum(sum(A .* eye(length(A))))
% other diagonal sum
sum(sum(A .* flipud(eye(length(A)))))
```

### Plotting Data

```
t=[0:0.01:0.98]
% plot given x and y data
plot(t, sin(t))
% plots next figures on top of the open one (old one)
hold on
% sets x-axis label
xlabel('time')
% sets y-axis label
ylabel('value')
% show legend
legend('sin', 'cos')
% show title
title('my plot')
% saves open plot as png
print -dpng 'myPlot.png'
% close open plot
close
% multiple plots
figure(1); plot(t, sin(t));
figure(2); plot(t, cos(t));
% divides plot into a 1x2 grid, access first element
subplot(1,2,1)
% set x-axis range to 0.5 -> 1 and y-axis range to -1 -> 1
axis([0.5 1 -1 1])
% clear plot
clf
% plot matrix
imagesc(A)
% show colorbar with values
colorbar
% change to gray colormap
colormap gray
% comma chaining of commands, useful e.g. if you want to change colormap etc. (output doesn't get suppressed like when using ";")
a=1, b=2, c=3
```

### Control Statements

```
% initialize zero vector of dimension 10
v = zeros(10,1)
% for loop
for i=1:10,
  v(i) = 2^i;
end;
% same for loop
indices = 1:10;
for i=indices,
  v(i) = 2^i;
end;
% break and continue also work in octave/matlab
% while loop
i = 1;
while i <= 5,
  v(i) = 100;
  i = i + 1;
end;
% if and break
i = 1;
while true,
  v(i) = 999;
  i = i + 1;
  if i == 6,
    break;
  end;
end;
% if, elseif, else
v(1) = 2;
if v(1)==1,
  disp('The value is one');
elseif v(1)==2,
  disp('The value is two');
else
  disp('The value is not one or two')
end;
% functions need to be defined in a .m file
% also note you need to cd into this directory
% to call the function, or add the folder to
% your search path using addpath
% the file name only matters as function identification
% squareThisNumber.m
function y = squareThisNumber(x)
  y = x^2;
% return multiple values
% squareAndCubeThisNumber.m
function [y1,y2] = squareAndCubeThisNumber(x)
  y1 = x^2;
  y2 = x^3;
```

### Vectorization

```
% it's simpler and more efficient
% instead of writing a for loop, you write
prediction = theta' * x
```
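The same vectorization idea carries over directly to NumPy, and it pairs naturally with the normal equation $\theta = (X^T X)^{-1}X^Ty$ from above. A minimal sketch on synthetic, noise-free data (variable names are illustrative):

```python
import numpy as np

# Synthetic data: m examples, n features
m, n = 100, 2
rng = np.random.default_rng(0)
X = np.hstack([np.ones((m, 1)), rng.standard_normal((m, n))])  # prepend x_0 = 1
true_theta = np.array([4.0, 3.0, -2.0])
y = X @ true_theta  # noise-free, so the fit should recover true_theta exactly

# Normal equation: theta = (X^T X)^{-1} X^T y
# (solving the linear system is preferred over forming the inverse explicitly,
# in the same spirit as the pinv advice above)
theta = np.linalg.solve(X.T @ X, X.T @ y)

# Vectorized hypothesis for all examples at once: h = X @ theta,
# the NumPy analogue of Octave's  prediction = theta' * x
predictions = X @ theta

print(np.allclose(theta, true_theta))
print(np.allclose(predictions, y))
```

No feature scaling is needed here, matching the note above that the normal equation does not require it.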
```
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import xgboost
import math

from scipy.stats import pearsonr
from sklearn.linear_model import LinearRegression
from sklearn import tree, linear_model
# sklearn.cross_validation was deprecated and removed;
# train_test_split now lives in sklearn.model_selection
from sklearn.model_selection import train_test_split
from sklearn.metrics import explained_variance_score
```

# 1. Exploratory Data Analysis

```
# Read the data into a data frame
data = pd.read_csv('../input/kc_house_data.csv')

# Check the number of data points in the data set
print(len(data))
# Check the number of features in the data set
print(len(data.columns))
# Check the data types
print(data.dtypes.unique())
```

- Since there are Python objects in the data set, we may have some categorical features. Let's check them.

```
data.select_dtypes(include=['O']).columns.tolist()
```

- We only have the date column, which is a timestamp that we will ignore.

```
# Check the number of columns with NaN
print(data.isnull().any().sum(), ' / ', len(data.columns))
# Check the number of data points with NaN
print(data.isnull().any(axis=1).sum(), ' / ', len(data))
```

- The data set is well structured and doesn't have any NaN values, so we can jump straight into finding correlations between the features and the target variable.

# 2. Correlations between features and target

```
features = data.iloc[:,3:].columns.tolist()
target = data.iloc[:,2].name

correlations = {}
for f in features:
    data_temp = data[[f,target]]
    x1 = data_temp[f].values
    x2 = data_temp[target].values
    key = f + ' vs ' + target
    correlations[key] = pearsonr(x1,x2)[0]

data_correlations = pd.DataFrame(correlations, index=['Value']).T
data_correlations.loc[data_correlations['Value'].abs().sort_values(ascending=False).index]
```

- We can see that the top 5 features are the most correlated features with the target "price"
- Let's plot the best 2 regressors jointly

```
y = data.loc[:,['sqft_living','grade',target]].sort_values(target, ascending=True).values
x = np.arange(y.shape[0])

%matplotlib inline
plt.subplot(3,1,1)
plt.plot(x,y[:,0])
plt.title('Sqft and Grade vs Price')
plt.ylabel('Sqft')

plt.subplot(3,1,2)
plt.plot(x,y[:,1])
plt.ylabel('Grade')

plt.subplot(3,1,3)
plt.plot(x,y[:,2],'r')
plt.ylabel("Price")
plt.show()
```

# 3. Predicting House Sales Prices

```
# Train a simple linear regression model
regr = linear_model.LinearRegression()
new_data = data[['sqft_living','grade', 'sqft_above', 'sqft_living15','bathrooms',
                 'view','sqft_basement','lat','waterfront','yr_built','bedrooms']]

X = new_data.values
y = data.price.values

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)
regr.fit(X_train, y_train)
print(regr.predict(X_test))
regr.score(X_test,y_test)
```

- The prediction score ($R^2$) is about 0.70, which is not really optimal

```
# Calculate the Root Mean Squared Error
print("RMSE: %.2f" % math.sqrt(np.mean((regr.predict(X_test) - y_test) ** 2)))

# Let's try the XGBoost algorithm to see if we can get better results
xgb = xgboost.XGBRegressor(n_estimators=100, learning_rate=0.08, gamma=0, subsample=0.75,
                           colsample_bytree=1, max_depth=7)
xgb.fit(X_train,y_train)
predictions = xgb.predict(X_test)
# Note the argument order: sklearn metrics expect (y_true, y_pred)
print(explained_variance_score(y_test, predictions))
```

- The explained variance score comes out between roughly 0.79 and 0.84, which I think is close to an optimal solution.
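The same evaluation loop can be reproduced end-to-end on synthetic data. This is a sketch only: the data is fabricated, and scikit-learn's built-in `GradientBoostingRegressor` stands in for XGBoost so the example carries no extra dependency:

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import explained_variance_score

# Synthetic "house price" data: price driven by two features plus noise
rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(2000, 2))
y = 300000 * X[:, 0] + 50000 * X[:, 1] ** 2 + rng.normal(0, 10000, 2000)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

linear = LinearRegression().fit(X_train, y_train)
boosted = GradientBoostingRegressor(n_estimators=100, learning_rate=0.08,
                                    subsample=0.75, max_depth=7).fit(X_train, y_train)

for name, model in [("linear", linear), ("boosted", boosted)]:
    preds = model.predict(X_test)
    rmse = np.sqrt(np.mean((preds - y_test) ** 2))
    # explained_variance_score takes (y_true, y_pred), in that order
    print(name, round(explained_variance_score(y_test, preds), 3), round(rmse))
```

Reporting RMSE alongside explained variance, as in the notebook above, keeps the error in the same units as the target, which is usually easier to reason about than a unitless score.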
# Process the JRC Excel files ### JRC Data ExpDam is direct expected damage per year from river flooding in Euro (2010 values). Data includes baseline values (average 1976-2005) and impact at SWLs. All figures are multi-model averages based on EC-EARTH r1 to r7 (7 models) PopAff is population affected per year from river flooding. Data includes baseline values (average 1976-2005) and impact at SWLs. All figures are multi-model averages based on EC-EARTH r1 to r7 (7 models) Reference Alfieri, L., Bisselink, B., Dottori, F., Naumann, G., de Roo, A., Salamon, P., Wyser, K. and Feyen, L.: Global projections of river flood risk in a warmer world, Earths Future, doi:10.1002/2016EF000485, 2017. ### Note: We need to calculate anomalies against the historical base period. ``` import pandas as pd import geopandas as gpd from iso3166 import countries import os import warnings warnings.filterwarnings('ignore') def identify_netcdf_and_csv_files(path='data/'): """Crawl through a specified folder and return a dict of the netcdf d['nc'] and csv d['csv'] files contained within. Returns something like {'nc':'data/CNRS_data/cSoil/orchidee-giss-ecearth.SWL_15.eco.cSoil.nc'} """ netcdf_files = [] csv_files = [] for root, dirs, files in os.walk(path): if isinstance([], type(files)): for f in files: if f.split('.')[-1] in ['nc']: netcdf_files.append(''.join([root,'/',f])) elif f.split('.')[-1] in ['csv']: csv_files.append(''.join([root,'/',f])) return {'nc':netcdf_files,'csv':csv_files} def extract_value(df, swl, verbose =False): """Extract the historical and absolute SWL values and calculate an anomaly. 
""" if verbose: print(df[swl].values) if 'PopAff_1976-2005' in df: historical_key = 'PopAff_1976-2005' #print("In pop aff") elif 'ExpDam_1976-2005' in df: historical_key = 'ExpDam_1976-2005' else: raise ValueError('Found no historical data in the file') # Get the SWL mean try: tmp_abs = float(''.join(df[swl].values[0].split(","))) except: tmp_abs = None # Get the historical mean try: tmp_historical = float(''.join(df[historical_key].values[0].split(","))) if tmp_historical == 0: tmp_historical = None except: tmp_historical = None #print(tmp_historical, tmp_abs) if all([tmp_historical, tmp_abs]): anomaly = int(tmp_abs - tmp_historical) else: anomaly = None return anomaly def gen_output_fname(fnm, swl_label): path = '/'.join(fnm.split('/')[1:3]) file_name = swl_label+'_'+fnm.split('/')[-1] tmp_out = '/'.join(['./processed/admin0/', path, file_name]) return tmp_out def process_JRC_excel(fnm, verbose=False): # I should loop over the set of shapes in gadams8 shapefile and look for the country in the data... # SIMPLIFIED SHAPES FOR ADMIN 0 LEVEL s = gpd.read_file("./data/gadm28_countries/gadm28_countries.shp") raw_data = pd.read_csv(fnm) # Note 184 are how many valid admin 0 areas we got with the netcdf data. 
keys =['name_0','iso','variable','swl_info', 'count', 'max','min','mean','std','impact_tag','institution', 'model_long_name','model_short_name','model_taxonomy', 'is_multi_model_summary','is_seasonal','season','is_monthly', 'month'] swl_dic = {'SWL1.5':1.5, 'SWL2':2.0, 'SWL4':4.0} possible_vars = {'data/JRC_data/river_floods/PopAff_SWLs_Country.csv':'river_floods_PopAff', 'data/JRC_data/river_floods/ExpDam_SWLs_Country.csv':'river_floods_ExpDam'} num_swls = 0 for swl in ['SWL1.5','SWL2', 'SWL4']: num_swls += 1 tot = 0 valid = 0 extracted_values = [] meta_level1 = {'variable':possible_vars[fnm], 'swl_info':swl_dic[swl], 'is_multi_model_summary':True, 'model_short_name':'EC-EARTH', 'model_long_name': "Projections of average changes in river flood risk per country at SWLs, obtained with the JRC impact model based on EC-EARTH r1-r7 climate projections.", 'model_taxonomy': 'EC-EARTH', 'is_seasonal': False, 'season': None, 'is_monthly':False, 'month': None, 'impact_tag': 'w', 'institution': "European Commission - Joint Research Centre", } for i in s.index: tot += 1 meta_level2 = {'name_0': s['name_engli'][i], 'iso': s['iso'][i],} tmp_mask = raw_data['ISO3_countryname'] == meta_level2['iso'] data_slice = raw_data[tmp_mask] if len(data_slice) == 1: #print(meta_level2['iso']) #return data_slice extracted = extract_value(data_slice, swl) if verbose: print(meta_level2['iso'], meta_level1['swl_info'], extracted) dic_level3 = {'min':None, 'mean': extracted, 'max': None, 'count':None, 'std':None} valid += 1 # FIND ALL VALUES NEEDED BY KEY # WRITE TO EXTRACTED_VALUES d = {**meta_level1, **meta_level2, **dic_level3} extracted_values.append([d[key] for key in keys]) tmp_df = pd.DataFrame(extracted_values, columns=keys) output_filename = gen_output_fname(fnm, swl) path_check ='/'.join(output_filename.split('/')[:-1]) # WRITE EXTRACTED VALUES TO A SPECIFIC SWL CSV FILE IN PROCESSED if not os.path.exists(path_check): os.makedirs(path_check) #return tmp_df 
tmp_df.to_csv(output_filename, index=False) if verbose: print('Created ', output_filename) print('TOTAL in loop:', tot) print('valid:', valid) print("Looped for", num_swls, 'swls') fs = identify_netcdf_and_csv_files(path='data/JRC_data') fs['csv'] for fnm in fs['csv']: print(fnm) process_JRC_excel(fnm) ```
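The value parsing inside `extract_value` hinges on stripping thousands separators before casting, since the JRC CSVs format numbers like `"1,234,567"`. A tiny standalone sketch of that step and the anomaly it feeds (the numbers here are made up for illustration):

```python
def parse_jrc_number(s):
    """Strip comma thousands separators and cast to float,
    mirroring float(''.join(df[swl].values[0].split(','))) above."""
    return float(''.join(s.split(',')))

historical = parse_jrc_number("1,200")  # baseline (1976-2005 average)
swl_value = parse_jrc_number("1,950")   # value at a given SWL

# The anomaly reported downstream is the SWL value minus the baseline
anomaly = int(swl_value - historical)
print(anomaly)  # 750
```

This is also why the note at the top stresses computing anomalies against the historical base period: the raw SWL columns are absolute values, not changes.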
<img src="https://s3.amazonaws.com/edu-static.mongodb.com/lessons/M220/notebook_assets/screen_align.png" style="margin: 0 auto;">

<h1 style="text-align: center; font-size=58px;">Change Streams</h1>

In this lesson, we're going to use change streams to track real-time changes to the data that our application's using.

### Change Streams

- Report changes at the collection level
- Accept pipelines to transform change events

As of MongoDB 3.6, change streams report changes at the collection level, so we open a change stream against a specific collection. But by default it will return any change to the data in that collection regardless of what it is, so we can also pass a pipeline to transform the change events we get back from the stream.

```
from pymongo import MongoClient, errors
uri = "mongodb+srv://m220-user:m220-pass@m220-lessons-mcxlm.mongodb.net/test"
client = MongoClient(uri)
```

So here I'm just initializing my MongoClient object,

```
lessons = client.lessons
inventory = lessons.inventory
inventory.drop()

fruits = [ "strawberries", "bananas", "apples" ]
for fruit in fruits:
    inventory.insert_one( { "type": fruit, "quantity": 100 } )

list(inventory.find())
```

And I'm using a new collection for this lesson, `inventory`. If you imagine we have a store that sells fruits, this collection will store the total quantities of every fruit that we have in stock. In this case, we have a very small store that only sells three types of fruits, and I've just updated the inventory to reflect that we just got a shipment for 100 of each fruit.

Now I'm just going to verify that our collection looks the way we expect.

(run cell)

And it looks like we have 100 of each fruit in the collection. But people will start buying them, cause you know, people like fruit. They'll go pretty quickly, and we want to make sure we don't run out. So I'm going to open a change stream against this collection, and track data changes to the `inventory` collection in real time.
```
try:
    with inventory.watch(full_document='updateLookup') as change_stream_cursor:
        for data_change in change_stream_cursor:
            print(data_change)
except errors.PyMongoError:
    # note: we imported "errors" from pymongo above
    print('Change stream closed because of an error.')
```

So here I'm opening a change stream against the `inventory` (point) collection, using the `watch()` method. `watch()` (point) returns a cursor object, so we can iterate through it in Python to return whatever document is next in the cursor. We've wrapped this in a try-except block so if something happens to the connection used for the change stream, we'll know immediately.

(start the while loop)

(go to `updates_every_one_second` notebook and start up process)

(come back here)

So the change stream cursor is just gonna spit out anything it gets, with no filter. Any change to the data in the `inventory` collection will appear in this output. But really, this is noise. We don't care when the quantity drops to 71 (point) or 60 (point), we only want to know when it's close to zero.

```
low_quantity_pipeline = [ { "$match": { "fullDocument.quantity": { "$lt": 20 } } } ]

try:
    with inventory.watch(pipeline=low_quantity_pipeline, full_document='updateLookup') as change_stream_cursor:
        for data_change in change_stream_cursor:
            current_quantity = data_change["fullDocument"].get("quantity")
            fruit = data_change["fullDocument"].get("type")
            msg = "There are only {0} units left of {1}!".format(current_quantity, fruit)
            print(msg)
except errors.PyMongoError:
    print('Change stream closed because of an error.')
```

Let's say we want to know if any of our quantities (point to quantity values) dip below 20 units, so we know when to buy more. Here I've defined a pipeline for the change event documents returned by the cursor. In this case, if the cursor returns a change event to me, it's because that event caused one of our quantities to fall below 20 units.
(open the change stream) (go to `updates_every_one_second` and start the third cell) (come back here) So if we just wait for the customers to go about their business... (wait for a print statement) And now we know that we need to buy more strawberries! ## Summary - Change streams can be opened against a collection - Tracks data changes in real time - Aggregation pipelines can be used to transform change event documents So change streams are a great way to track changes to the data in a collection. And if you're using Mongo 4.0, you can open a change stream against a whole database, and even a whole cluster. We also have the flexibility to pass an aggregation pipeline to the change stream, to transform or filter out some of the change event documents.
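The `$match` pipeline is plain Python data, so it can be built and inspected without a live cluster. A small sketch (the threshold and the sample event are illustrative) of constructing the same filter programmatically:

```python
def low_quantity_filter(threshold):
    """Build a change-stream pipeline matching change events whose
    full document has a quantity below `threshold`."""
    return [{"$match": {"fullDocument.quantity": {"$lt": threshold}}}]

pipeline = low_quantity_filter(20)
print(pipeline)  # [{'$match': {'fullDocument.quantity': {'$lt': 20}}}]

# A sample change event of the rough shape the cursor yields (abridged):
event = {"operationType": "update",
         "fullDocument": {"type": "apples", "quantity": 12}}
threshold = pipeline[0]["$match"]["fullDocument.quantity"]["$lt"]
print(event["fullDocument"]["quantity"] < threshold)  # True: this event would pass the filter
```

Parameterizing the pipeline this way makes it easy to reuse the same watcher for different alert thresholds.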
```
import pandas as pd
import numpy as np
import pyflux as pf
import matplotlib.pyplot as plt
from fbprophet import Prophet
%matplotlib inline

plt.rcParams['figure.figsize'] = (20, 10)
plt.style.use('ggplot')
```

### Load the data

For this work, we're going to use the same retail sales data that we've used before. It can be found in the examples directory of this repository.

```
sales_df = pd.read_csv('../examples/retail_sales.csv', index_col='date', parse_dates=True)
sales_df.head()
```

Like all good modeling projects, we need to take a look at the data to get an idea of what it looks like.

```
sales_df.plot()
```

It's pretty clear from this data that we are looking at a trending dataset with some seasonality. This is actually a pretty good dataset for prophet, since the additive model and prophet's implementation do well with this type of data. With that in mind, let's take a look at what prophet does from a modeling standpoint to compare with the dynamic linear regression model. For more details on this, you can take a look at my blog post titled **Forecasting Time Series data with Prophet – Part 4** (http://pythondata.com/forecasting-time-series-data-prophet-part-4/)

```
# Prep data for prophet and run prophet
df = sales_df.reset_index()
df = df.rename(columns={'date': 'ds', 'sales': 'y'})

model = Prophet(weekly_seasonality=True)
model.fit(df);

future = model.make_future_dataframe(periods=24, freq='m')
forecast = model.predict(future)
model.plot(forecast);
```

With our prophet model ready for comparison, let's build a model with pyflux's dynamic linear regression model.

### More Data Viz

Now that we've run our prophet model and can see what it has done, it's time to walk through what I call the 'long form' of model building. This is more involved than throwing data at a library and accepting the results. For this data, let's first look at the differenced log values of our sales data (to try to make it more stationary).
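As a quick sanity check on why diff-of-logs helps: differencing logged values is the same as taking the log of consecutive ratios, so each point reads as an approximate percent change per period. The array below is made-up data, not the retail sales series.

```python
import numpy as np

# Made-up series growing exactly 10% per step (NOT the retail sales data).
sales = np.array([100.0, 110.0, 121.0, 133.1])
diff_log = np.diff(np.log(sales))

# Differencing logs equals logging consecutive ratios:
assert np.allclose(diff_log, np.log(sales[1:] / sales[:-1]))

# For modest changes, each value approximates the percent change per period.
print(diff_log)  # every entry is log(1.1), about 0.0953
```

This is also why the predictions later get passed through `np.exp` before being compared with the raw sales values.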
```
diff_log = pd.DataFrame(np.diff(np.log(sales_df['sales'].values)))
diff_log.index = sales_df.index.values[1:sales_df.index.values.shape[0]]
diff_log.columns = ["Sales DiffLog"]

sales_df['logged'] = np.log(sales_df['sales'])
sales_df.tail()
sales_df.plot(subplots=True)
```

With our original data (top pane in orange), we can see a very pronounced trend. With the differenced log values (bottom pane in blue), we've removed that trend and made the data stationary (or hopefully we have).

Now, let's take a look at an autocorrelation plot, which will tell us whether future sales are correlated with past data. I won't go into detail on autocorrelation, but if you don't understand whether you have autocorrelation (and to what degree), you might be in for a hard time :)

Let's take a look at the autocorrelation plot (ACF) of the differenced log values, as well as the ACF of the square of the differenced log values.

```
pf.acf_plot(diff_log.values.T[0])
pf.acf_plot(np.square(diff_log.values.T[0]))
```

We can see that at a lag of 1 and 2 months there are positive correlations for sales, but as time goes on, that correlation drops quickly to a negative correlation that stays in place over time, which hints at the fact that there are some autoregressive effects within this data. Because of this fact, we can start our modeling by using an ARMA model of some sort.

```
Logged = pd.DataFrame(np.log(sales_df['sales']))
Logged.index = pd.to_datetime(sales_df.index)
Logged.columns = ['Sales - Logged']
Logged.head()

modelLLT = pf.LLT(data=Logged)
x = modelLLT.fit()
x.summary()

modelLLT.plot_fit(figsize=(20, 10))
modelLLT.plot_predict_is(h=len(Logged)-1, figsize=(20, 10))

predicted = modelLLT.predict_is(h=len(Logged)-1)
predicted.columns = ['Predicted']
predicted.tail()
np.exp(predicted).plot()

# join the back-transformed (exponentiated) predictions onto the original
# sales data by their shared datetime index
final_sales = sales_df.join(np.exp(predicted))
final_sales.tail()
final_sales.plot()
```
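For intuition about what those ACF plots display, the lag-k autocorrelation can be computed directly from its textbook definition. This sketch uses synthetic data, not the sales series, and is not pyflux's exact implementation.

```python
import numpy as np

def acf(x, k):
    """Lag-k sample autocorrelation (textbook definition)."""
    x = np.asarray(x, dtype=float)
    xm = x - x.mean()
    if k == 0:
        return 1.0
    return float(np.sum(xm[:-k] * xm[k:]) / np.sum(xm * xm))

rng = np.random.default_rng(0)
walk = np.cumsum(rng.normal(size=500))   # random walk: strong positive ACF
white = rng.normal(size=500)             # white noise: ACF near zero

print(acf(walk, 1))   # close to 1
print(acf(white, 1))  # close to 0
```

A slowly decaying ACF like the random walk's is the signature of trend/autoregression; differencing (as done above with the logged sales) is what pushes a series toward the white-noise case.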
``` import urllib.request import json import pandas as pd from datetime import datetime import seaborn as sns cm = sns.light_palette("red", as_cmap=True) #https://www.trilhaseaventuras.com.br/siglas-dos-principais-aeroportos-do-mundo-iata/ #urlOneWay #https://www.decolar.com/shop/flights-busquets/api/v1/web/search?adults=1&children=0&infants=0&limit=4&site=BR&channel=site&from=POA&to=MIA&departureDate=2020-03-04&groupBy=default&orderBy=total_price_ascending&viewMode=CLUSTER&language=pt_BR&airlineSummary=false&chargesDespegar=false&user=e1861e3a-3357-4a76-861e-3a3357ea76c0&h=38dc1f66dbf4f5c8df105321c3286b5c&flow=SEARCH&di=1-0&clientType=WEB&disambiguationApplied=true&newDisambiguationService=true&initialOrigins=POA&initialDestinations=MIA&pageViewId=62ef8aab-ab53-406c-8429-885702acecbd import requests url = "https://www.pontosmultiplus.com.br/service/facilities/handle-points" payload = "logado=&select-name=1000&points=1000&action=calculate" headers = { 'authority': 'www.pontosmultiplus.com.br', 'accept': 'application/json, text/javascript, */*; q=0.01', 'origin': 'https://www.pontosmultiplus.com.br', 'x-requested-with': 'XMLHttpRequest', 'user-agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/79.0.3945.88 Safari/537.36', 'dnt': '1', 'content-type': 'application/x-www-form-urlencoded; charset=UTF-8', 'sec-fetch-site': 'same-origin', 'sec-fetch-mode': 'cors', 'referer': 'https://www.pontosmultiplus.com.br/facilidades/compradepontos', 'accept-encoding': 'gzip, deflate, br', 'accept-language': 'pt-BR,pt;q=0.9,en-US;q=0.8,en;q=0.7', 'cookie': 'userPrefLanguage=pt_BR; sback_client=573a40fecdbbbb66963e544d; sback_partner=false; sb_days=1549545557254; sback_browser=0-14236400-1548261174b4075e5fdbd390aa38772d39e7c7a352593b045121165093285c48973622c2c1-45877488-170246360, 20525122160-1549545560; sback_customer=$2QUxcVTzd0dOVENUtmd6dlTmp3RjlHVF90bxETQ1oWOad0dWF0QUN3T0hnYBlFVx0UO5BVWTRFVNZTblh2YqRkW2$12; 
chaordic_browserId=09b01e60-2300-11e9-8ced-6fbc9e419cda; chaordic_anonymousUserId=anon-09b01e60-2300-11e9-8ced-6fbc9e419cda; sback_total_sessions=2; _ducp=eyJfZHVjcCI6ImE4MzY0NWM2LTI3ZWYtNGUzZS1iMzNjLTI3YmY4ZTY4MDMwOCIsIl9kdWNwcHQiOiIifQ==; _fbp=fb.2.1550499169207.1066950068; cto_h2h=B; s_fid=2E4956A0C0C14E48-2CB286BB7EF81637; cto_lwid=01abc4e4-21f3-450f-9f35-57dee229928a; __utmz=196304045.1569964079.10.2.utmcsr=multiplus-emkt|utmccn=20190930_EMAIL_INSTITUCIONAL_BOAS_VINDAS_NOVA_MARCA_BRASIL-20191001|utmcmd=emkt|utmctr=14602|utmcct=cabecalho-ver_extrato_deslogado; s_vnum=1596641437499%26vn%3D2; s_lv=1569964112519; optionExchange=0; origin=[{%22city%22:{%22iataCode%22:%22POA%22%2C%22name%22:%22Porto%20Alegre%22}%2C%22type%22:%22airport%22%2C%22iataCode%22:%22POA%22%2C%22name%22:%22Salgado%20Filho%22%2C%22value%22:%22POA_airport%22%2C%22orderCodeNumber%22:%222%22%2C%22orderCode%22:%22Porto%20Alegre2%22%2C%22label%22:%22Porto%20Alegre%20(POA)%2C%20Salgado%20Filho%20(POA)%2C%20Brasil%22%2C%22position%22:%2200002Porto%20Alegre%20(POA)%2C%20Salgado%20Filho%20(POA)%2C%20Brasil%22}]; destiny=[{%22city%22:{%22iataCode%22:%22FRA%22%2C%22name%22:%22Frankfurt%22}%2C%22type%22:%22airport%22%2C%22iataCode%22:%22FRA%22%2C%22name%22:%22Frankfurt%20Intl.%22%2C%22value%22:%22FRA_airport%22%2C%22orderCodeNumber%22:%222%22%2C%22orderCode%22:%22Frankfurt2%22%2C%22label%22:%22Frankfurt%20(FRA)%2C%20Frankfurt%20Intl.%20(FRA)%2C%20Alemanha%22%2C%22position%22:%2200002Frankfurt%20(FRA)%2C%20Frankfurt%20Intl.%20(FRA)%2C%20Alemanha%22}]; cabinClass=Y; classesSuggestions=[{%22idCabin%22:1%2C%22cabinClass%22:%22Y%22%2C%22cabinName%22:%22Economy%22}%2C{%22idCabin%22:2%2C%22cabinClass%22:%22W%22%2C%22cabinName%22:%22Premium%20Economy%22}%2C{%22idCabin%22:3%2C%22cabinClass%22:%22J%22%2C%22cabinName%22:%22Premium%20Business%22}]; _gcl_au=1.1.278670892.1578924604; _hjid=59ae5b53-f6c8-48b1-bc67-fb8182856ead; 
chaordic_testGroup=%7B%22experiment%22%3Anull%2C%22group%22%3Anull%2C%22testCode%22%3Anull%2C%22code%22%3Anull%2C%22session%22%3Anull%7D; country_code=br; language_code=pt; __utmc=196304045; _esvan_ref.50060.=; language_country=pt_br; _ga=GA1.3.1171237216.1579530427; _gid=GA1.3.911523691.1579530427; _gaZ=GA1.3.1171237216.1579530427; _gaZ_gid=GA1.3.911523691.1579530427; return=Sat%20Apr%2011%202020%2012:00:00%20GMT-0300%20(Hor%C3%A1rio%20Padr%C3%A3o%20de%20Bras%C3%ADlia); trip=ida_vuelta; departure=Sat%20Apr%2004%202020%2012:00:00%20GMT-0300%20(Hor%C3%A1rio%20Padr%C3%A3o%20de%20Bras%C3%ADlia); SMSESSION=LOGGEDOFF; userIdZ=; __utma=196304045.1744687836.1549545551.1579547569.1579636257.15; analyticsHelper.cd38=ef144e288de8d22700e20cda9fce9ee5ee61b5d25b61bd0dab35f4ddc72e95ce; ATGSESSIONID=yiPNORqQ9P7PZ74G-Syy7CLAjB8uk3Tw0Wc4dHWdUyC7KjCIe4s0u0021-680739279; __zjc7749=4962761565; userTags=%7B%22id%22%3A%22Anonimo%22%2C%22age%22%3A0%2C%22gender%22%3Anull%2C%22email%22%3Anull%2C%22emailHash%22%3Anull%2C%22country%22%3Anull%2C%22city%22%3Anull%2C%22state%22%3Anull%2C%22zipCode%22%3Anull%2C%22typeOfParticipation%22%3Anull%2C%22balance%22%3Anull%2C%22status%22%3A%22deslogado%22%7D; _gac_UA-83192457-1=1.1579696070.CjwKCAiAgqDxBRBTEiwA59eEN-j8nGbsIpfJMIrCCHTfzUi4saF5CmN227pOPsXIuXAOZmOQs_DMSRoCBtMQAvD_BwE; _gcl_aw=GCL.1579696070.CjwKCAiAgqDxBRBTEiwA59eEN-j8nGbsIpfJMIrCCHTfzUi4saF5CmN227pOPsXIuXAOZmOQs_DMSRoCBtMQAvD_BwE; _dc_gtm_UA-83192457-1=1; _gac_UA-83192457-13=1.1579696070.CjwKCAiAgqDxBRBTEiwA59eEN-j8nGbsIpfJMIrCCHTfzUi4saF5CmN227pOPsXIuXAOZmOQs_DMSRoCBtMQAvD_BwE; _dc_gtm_UA-83192457-13=1; __z_a=3200530082274793935727479; JSESSIONID=_hHNOSuko30OZo1X7XyjT4_6rnAXanFcwA7M9PShrPBBjztzhMrIu0021-1010243761; SS_X_JSESSIONID=KoLNOSzOIq0SooUobVecEo7ju0GL-8Y2O_kOVlqjZsm5rKnmkG33u0021-183582721; akavpau_multiplusgeral=1579696676~id=48e1b4d4309a5f9f09664afd46406b0e; __zjc872=4962761577; _gat=1' } response = requests.request("POST", url, headers=headers, data = payload) resultPontos = 
response.text.encode('utf8') resPontos = json.loads(resultPontos.decode('utf-8')) print(resPontos['data']['total']) PONTOSMULTIPLUS = resPontos['data']['total'] dataInicial = '2020-07-03' dataFinal = '2020-07-19' idaEvolta=True #tripType='' #dataInicial = '2020-04-08' #dataFinal = '2020-04-22' #if idaEvolta: # tripType = 'roundtrip' #else: # tripType = 'oneway' specificDate = False origens = ['POA','GRU','GIG'] destinos = ['ATL','MIA','MDZ','BRC','LIM','CTG','ADZ','FRA'] #dfDict.append({'de':origem,'para':destino,'Ida': p['departureDate'],'Volta':arr['arrivalDate'],'preco':arr['price']["amount"]}) resumo = [] dfDict =[] for origem in origens: for destino in destinos: minValue = 999999999 fraseFinal= '' print(origem + ' -> '+ destino) urlDecolar = '''https://www.decolar.com/shop/flights-busquets/api/v1/web/calendar-prices/matrix?adults=1&children=0&infants=0&limit=4&site=BR&channel=site&from={origem}&to={destino}&departureDate={dataInicial}&returnDate={dataFinal}&orderBy=total_price_ascending&viewMode=CLUSTER&language=pt_BR&clientType=WEB&initialOrigins={origem}&initialDestinations={destino}&pageViewId=b35e67df-abc9-4308-875f-c3810b3729e4&mustIncludeDates=NA_NA&currency=BRL&breakdownType=TOTAL_FARE_ONLY'''.format(dataInicial=dataInicial,dataFinal=dataFinal,origem=origem,destino=destino) #print(urlDecolar) with urllib.request.urlopen(urlDecolar) as url: s = url.read() data = json.loads(s.decode('utf-8')) #print(data) for p in data['departures']: for arr in p['arrivals']: if 'price' in arr: dfDict.append({'DataPesquisa':datetime.now().strftime("%d/%m/%Y %H:%M:%S"),'de':origem,'para':destino,'Ida': p['departureDate'],'Volta':arr['arrivalDate'],'preco':arr['price']["amount"]}) if specificDate: if p['departureDate'] == dataInicial and arr['arrivalDate'] == dataFinal: if minValue > arr['price']["amount"]: minValue = arr['price']["amount"] fraseFinal = 'Voo mais barato '+origem + ' -> '+ destino+' de:' + p['departureDate'], ' ate ',arr['arrivalDate'],'- valor: ' + 
str(arr['price']["amount"]) resumo.append(fraseFinal) print('de:' + p['departureDate'], ' ate ',arr['arrivalDate'],'- valor: ' + str(arr['price']["amount"])) else: if minValue > arr['price']["amount"]: minValue = arr['price']["amount"] fraseFinal = 'Voo mais barato '+origem + ' -> '+ destino+' de:' + p['departureDate'], ' ate ',arr['arrivalDate'],'- valor: ' + str(arr['price']["amount"]) resumo.append(fraseFinal) print('de:' + p['departureDate'], ' ate ',arr['arrivalDate'],'- valor: ' + str(arr['price']["amount"])) print('') print(fraseFinal) print(minValue) print('') for r in resumo: print(r) df = pd.DataFrame.from_dict(dfDict) if specificDate: df = df[df['Ida']==dataInicial] df = df[df['Volta']==dataFinal] display(df.describe()) df.sort_values(by='preco',ascending=True).head(5).style.background_gradient(cmap='OrRd') with open('historicoPesquisaPrecos.csv', 'a') as f: df.to_csv(f, mode='a',header=f.tell()==0) dfGrafico = pd.read_csv("historicoPesquisaPrecos.csv") dfGrafico = dfGrafico[dfGrafico['Ida']>='2020-07-03'] dfGrafico = dfGrafico[dfGrafico['Ida']<='2020-07-07'] dfGrafico = dfGrafico[dfGrafico['Volta']>='2020-07-17'] dfGrafico = dfGrafico[dfGrafico['Volta']<='2020-07-20'] dfGrafico['DataPesquisa'] = dfGrafico['DataPesquisa'].apply(lambda x:x[0:13]) dfGrafico['DataPesquisaDATA']=dfGrafico['DataPesquisa'].apply(lambda x:pd.to_datetime(x[0:10])) dfGrafico['Dias'] = dfGrafico.apply(lambda x: int(str(pd.to_datetime(x['Volta'])- pd.to_datetime(x['Ida']))[0:2]),axis=1) dfGrafico['OrigemDestino'] = dfGrafico.apply(lambda x: x['de'] + x['para'],axis=1) dfGrafico['EspecificoIda'] = dfGrafico.apply(lambda x: x['de'] + x['para']+'-'+x['Ida'],axis=1) dfGrafico['EspecificoVolta'] = dfGrafico.apply(lambda x: x['de'] + x['para']+'-'+x['Volta'],axis=1) dfGrafico['EspecificoTodos'] = dfGrafico.apply(lambda x: x['de'] + x['para']+'-'+x['Ida']+'-'+x['Volta'],axis=1) display(dfGrafico) #dfGraficoPOA_ATL = dfGrafico.query('de == "POA" & para == "ATL"') #dfGraficoPOA_MIA = 
dfGrafico.query('de == "POA" & para == "MIA"')
#dfGraficoGRU_MIA = dfGrafico.query('de == "GRU" & para == "MIA"')
#dfGraficoGRU_ATL = dfGrafico.query('de == "GRU" & para == "ATL"')
#dfGraficoGRU_MDZ = dfGrafico.query('de == "GRU" & para == "MDZ"')
#dfGraficoPOA_MDZ = dfGrafico.query('de == "POA" & para == "MDZ"')
#datasets = [dfGrafico,dfGraficoPOA_ATL,dfGraficoPOA_MIA,dfGraficoGRU_MIA,dfGraficoGRU_ATL,dfGraficoGRU_MDZ,dfGraficoPOA_MDZ]
#print(dfGraficoPOA_ATL['Ida'].count())
#print(dfGraficoPOA_MIA['Ida'].count())
#print(dfGraficoGRU_MIA['Ida'].count())
#print(dfGraficoGRU_ATL['Ida'].count())
#print(dfGraficoGRU_MDZ['Ida'].count())
#print(dfGraficoPOA_MDZ['Ida'].count())

#import plotly.express as px
#for graph in datasets:
#    #graph = graph.query('Ida =="2020-07-05" & Volta =="2020-07-20"')
#    graph = graph.query('de =="POA" & Dias >=14 & Dias <=17')# | de =="GRU"')
#    fig = px.line(graph.drop_duplicates(), x="DataPesquisa", y="preco", color="EspecificoTodos", hover_data=['de','para','Ida', 'Volta','preco'])
#    fig.show()

import pandas_profiling
print(dfGraficoPOA_MIA.columns)
#pandasDf=dfGraficoPOA_MIA[['Ida', 'Volta', 'de', 'para', 'preco','DataPesquisaDATA', 'Dias']]
#display(pandasDf.head(3))
#pandas_profiling.ProfileReport(pandasDf)

dfPivot = dfGrafico.query('de == "POA" or de=="GRU"')
#display(dfPivot.head(3))
dfPivot = pd.pivot_table(dfPivot, values='preco', index=['de','para','Dias','Ida'], columns='DataPesquisa')
```

## Highest values of the historical series

```
#display(dfPivot)
#dfPivot.style.apply(highlight_max)
```

## Lowest values of the historical series

```
#dfPivot.style.apply(highlight_min)
#dfLastSearch = dfGrafico.query('de == "POA" or de=="GRU"')
#print(dfLastSearch.groupby(['de','para']).count())
#dfLastSearch = dfLastSearch[dfLastSearch['DataPesquisaDATA']>='21/01/2020']
#dfLastSearchPivot = pd.pivot_table(dfLastSearch,values='preco',index=['de','para','Dias','Ida','Volta'],columns='DataPesquisa')
#dfLastSearchPivot.style.apply(highlight_min)

import pandas
as pd import matplotlib.pyplot as plt from matplotlib import colors def background_gradient(s, m, M, cmap='PuBu', low=0, high=0): rng = M - m norm = colors.Normalize(m - (rng * low), M + (rng * high)) normed = norm(s.values) c = [colors.rgb2hex(x) for x in plt.cm.get_cmap(cmap)(normed)] return ['background-color: %s' % color for color in c] #df = pd.DataFrame([[3,2,10,4],[20,1,3,2],[5,4,6,1]]) #dfLastSearchPivot.fillna(0,inplace=True) #dfLastSearchPivot.query('para == "MIA"').style.background_gradient(cmap='OrRd') #display(dfLastSearchPivot.style.background_gradient(cmap='OrRd')) #print(dfLastSearchPivot.groupby(['de','para']).count()) #dfLastSearchPivot.style.apply(background_gradient,cmap='OrRd',m=dfLastSearchPivot.min().min(),M=dfLastSearchPivot.max().max(),low=0,high=7000) urlPontoLatam = 'https://bff.latam.com/ws/proxy/booking-webapp-bff/v1/public/redemption/recommendations/outbound?departure={dataInicial}&origin={origem}&destination={destino}&cabin=Y&country=BR&language=PT&home=pt_br&return={dataFinal}&adult=1&tierCode=LTAM&tierType=low' origensPontos = ['POA','GRU','GIG'] destinosPontos = ['ATL','MIA','MDZ','BRC','LIM','CTG','ADZ','FRA'] dataPesquisa = datetime.now().strftime("%d/%m/%Y %H:%M:%S") dfPontosListIda =[] dfPontosListVolta =[] meuSaldoAtual = 22000 for origem in origensPontos: for destino in destinosPontos: minValue = 999999999 fraseFinal= '' print(origem + ' -> '+ destino) urlPontos = urlPontoLatam.format(dataInicial=dataInicial,dataFinal=dataFinal,origem=origem,destino=destino) #print(urlDecolar) with urllib.request.urlopen(urlPontos) as url: s = url.read() data = json.loads(s.decode('utf-8')) try: for flight in data['data']: for cabins in flight['flights']: paradas = cabins['stops'] dataChegada=cabins['arrival']['date'] horaChegada = cabins['arrival']['time']['hours'] minutoChegada = cabins['arrival']['time']['minutes'] overnight = cabins['arrival']['overnights'] #partida dataPartida=cabins['departure']['date'] horaPartida = 
cabins['departure']['time']['hours'] minutoPartida = cabins['departure']['time']['minutes'] for price in cabins['cabins']: dfPontosListIda.append({'DataPesquisa':dataPesquisa,'De':origem,'Para':destino,'PartidaData':dataPartida,'PartidaHora':horaPartida,'PartidaMinuto':minutoPartida,'ChegadaData':dataChegada,'ChegadaHora':horaChegada,'ChegadaMinuto':minutoChegada,'overnight':overnight,'Paradas':paradas,'pontos':price['displayPrice'],'preco':(PONTOSMULTIPLUS *price['displayPrice'])/1000,'precoMenosSaldo':(PONTOSMULTIPLUS *(price['displayPrice']-meuSaldoAtual))/1000}) dfPontosIda = pd.DataFrame.from_dict(dfPontosListIda) except: print('erro') print(destino + ' -> '+ origem) urlPontos = urlPontoLatam.format(dataInicial=dataFinal,dataFinal=dataFinal,origem=destino,destino=origem) with urllib.request.urlopen(urlPontos) as url: s = url.read() data = json.loads(s.decode('utf-8')) try: for flight in data['data']: for cabins in flight['flights']: paradas = cabins['stops'] dataChegada=cabins['arrival']['date'] horaChegada = cabins['arrival']['time']['hours'] minutoChegada = cabins['arrival']['time']['minutes'] overnight = cabins['arrival']['overnights'] #partida dataPartida=cabins['departure']['date'] horaPartida = cabins['departure']['time']['hours'] minutoPartida = cabins['departure']['time']['minutes'] for price in cabins['cabins']: dfPontosListVolta.append({'DataPesquisa':dataPesquisa,'De':destino,'Para':origem,'PartidaData':dataPartida,'PartidaHora':horaPartida,'PartidaMinuto':minutoPartida,'ChegadaData':dataChegada,'ChegadaHora':horaChegada,'ChegadaMinuto':minutoChegada,'overnight':overnight,'Paradas':paradas,'pontos':price['displayPrice'],'valorPontos':PONTOSMULTIPLUS,'preco':(PONTOSMULTIPLUS *price['displayPrice'])/1000,'precoMenosSaldo':(PONTOSMULTIPLUS *(price['displayPrice']-meuSaldoAtual))/1000}) dfPontosVolta = pd.DataFrame.from_dict(dfPontosListVolta) except: print('erro') with open('historicoPesquisaPontosIda.csv', 'a') as f: dfPontosIda.to_csv(f, 
mode='a',header=f.tell()==0) with open('historicoPesquisaPontosVolta.csv', 'a') as f: dfPontosVolta.to_csv(f, mode='a',header=f.tell()==0) #dfLoadPontosIda = pd.read_csv("historicoPesquisaPontosIda.csv") #dfLoadPontosVolta = pd.read_csv("historicoPesquisaPontosVolta.csv") #dfPontosC = dfLoadPontosVolta[['DataPesquisa','De','Para','PartidaData','PartidaHora', 'PartidaMinuto','ChegadaData', 'ChegadaHora', 'ChegadaMinuto','Paradas','overnight', 'pontos', 'preco','precoMenosSaldo']] #display(dfPontosC.sort_values(by='preco',ascending=True).style.background_gradient(cmap='OrRd')) uriPontos = 'https://www.pontosmultiplus.com.br/service/facilities/handle-points' #dfT = dfLastSearch #dfTeste = dfT[dfT['DataPesquisaDATA']=='24/01/2020'] #dfTeste = pd.pivot_table(dfLastSearch,values='preco',index=['de','para','Ida'],columns='Volta') #dfTeste.fillna(0,inplace=True) #display(dfTeste.style.background_gradient(cmap='OrRd')) aa #POSTMAN ONE WAY import requests dataInicial = '2020-04-08' dataFinal = '2020-04-22' origens = ['POA','GRU','GIG','BSB','FOR'] destinos = ['ATL','MIA'] url = "https://www.decolar.com/shop/flights-busquets/api/v1/web/search" for origem in origens: for destino in destinos: querystring = {"adults":"1","limit":"4","site":"BR","channel":"site","from":"{origem}".format(origem=origem),"to":"{destino}".format(destino=destino),"departureDate":"2020-03-04","orderBy":"total_price_ascending","viewMode":"CLUSTER","language":"pt_BR","h":"38dc1f66dbf4f5c8df105321c3286b5c","flow":"SEARCH","clientType":"WEB","initialOrigins":"{origem}".format(origem=origem),"initialDestinations":"{destino}".format(destino=destino)} headers = { 'Connection': "keep-alive", 'DNT': "1", 'X-UOW': "results-13-1579106681089", 'X-RequestId': "xzTTJ6fDfw", 'User-Agent': "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/79.0.3945.88 Safari/537.36", 'Accept': "application/json, text/javascript, */*; q=0.01", 'X-Requested-With': "XMLHttpRequest", 
'XDESP-REFERRER': "https://www.decolar.com/shop/flights/search/oneway/{origem}/{destino}/2020-03-04/2/0/0/NA/NA/NA/NA/?from=SB&di=2-0".format(origem=origem,destino=destino), 'Sec-Fetch-Site': "same-origin", 'Sec-Fetch-Mode': "cors", 'Referer': "https://www.decolar.com/shop/flights/search/oneway/{origem}/{destino}/2020-03-04/1/0/0/NA/NA/NA/NA/?from=SB&di=1-0".format(origem=origem,destino=destino), 'Accept-Encoding': "gzip, deflate, br", 'Accept-Language': "pt-BR,pt;q=0.9,en-US;q=0.8,en;q=0.7", 'Cookie': 'trackerid=e1861e3a-3357-4a76-861e-3a3357ea76c0; xdesp-rand-usr=292; xdsid=C632CEAAF251AE2A72F165ECA9A4A2CA; xduid=1727A02D2FAA249C654A094113369154; _ga=GA1.2.772144563.1579011917; _gid=GA1.2.317154519.1579011917; trackeame_cookie=%7B%22id%22%3A%22UPA_e1861e3a-3357-4a76-861e-3a3357ea76c0%22%2C%22version%22%3A%225.0%22%2C%22upa_id%22%3A%22e1861e3a-3357-4a76-861e-3a3357ea76c0%22%2C%22creation_date%22%3A%222020-01-14T14%3A25%3A17Z%22%7D; __ssid=41de76d348be0e334af8e657f6801b8; _gcl_au=1.1.1367791908.1579011932; _fbp=fb.1.1579011933564.1470255143; __gads=ID=9139db3a836078f5:T=1579011933:S=ALNI_MawboBo55i9nPvoDvzaF396HudEKg; abzTestingId="{\"flightsFisherAB\":90,\"pkgImbatibleBrand_ctrl\":76,\"s_flights_s_violet_sbox_v1\":21,\"upsellingConfig\":58,\"twoOneWayForceMX\":0,\"filterLandingFlights\":41,\"s_loyalty_v2_ctrl\":5,\"s_flights_l_violet_sbox_v1\":0,\"s_flights_l_loyalty_v2\":58,\"mostProfitablePromotion\":0,\"despechecks\":72,\"s_loyalty_v2_review\":33,\"platform\":55,\"selected_radio_button\":0,\"fisher_2ow\":0,\"loyalty_non_adherents\":63,\"paymentMethod\":55,\"shifuMobileProductLabels\":0,\"obFee\":40,\"twoOneWay\":0,\"s_violet_sbox_v1\":17,\"s_flights_s_loyalty_v2\":14,\"flights_loyalty_non_adherents\":63,\"pkgImbatibleBrand-ctrl\":60,\"crossBorderTicketing\":0}; chktkn=ask3r5kj6ed0ksqrs7eio4cebk; searchId=243920d8-49cc-4271-972a-60d05221ef20; _gat_UA-36944350-2=1,trackerid=e1861e3a-3357-4a76-861e-3a3357ea76c0; xdesp-rand-usr=292; 
xdsid=C632CEAAF251AE2A72F165ECA9A4A2CA; xduid=1727A02D2FAA249C654A094113369154; _ga=GA1.2.772144563.1579011917; _gid=GA1.2.317154519.1579011917; trackeame_cookie=%7B%22id%22%3A%22UPA_e1861e3a-3357-4a76-861e-3a3357ea76c0%22%2C%22version%22%3A%225.0%22%2C%22upa_id%22%3A%22e1861e3a-3357-4a76-861e-3a3357ea76c0%22%2C%22creation_date%22%3A%222020-01-14T14%3A25%3A17Z%22%7D; __ssid=41de76d348be0e334af8e657f6801b8; _gcl_au=1.1.1367791908.1579011932; _fbp=fb.1.1579011933564.1470255143; __gads=ID=9139db3a836078f5:T=1579011933:S=ALNI_MawboBo55i9nPvoDvzaF396HudEKg; abzTestingId="{\"flightsFisherAB\":90,\"pkgImbatibleBrand_ctrl\":76,\"s_flights_s_violet_sbox_v1\":21,\"upsellingConfig\":58,\"twoOneWayForceMX\":0,\"filterLandingFlights\":41,\"s_loyalty_v2_ctrl\":5,\"s_flights_l_violet_sbox_v1\":0,\"s_flights_l_loyalty_v2\":58,\"mostProfitablePromotion\":0,\"despechecks\":72,\"s_loyalty_v2_review\":33,\"platform\":55,\"selected_radio_button\":0,\"fisher_2ow\":0,\"loyalty_non_adherents\":63,\"paymentMethod\":55,\"shifuMobileProductLabels\":0,\"obFee\":40,\"twoOneWay\":0,\"s_violet_sbox_v1\":17,\"s_flights_s_loyalty_v2\":14,\"flights_loyalty_non_adherents\":63,\"pkgImbatibleBrand-ctrl\":60,\"crossBorderTicketing\":0}"; chktkn=ask3r5kj6ed0ksqrs7eio4cebk; searchId=243920d8-49cc-4271-972a-60d05221ef20; _gat_UA-36944350-2=1; xdsid=DCF9EDC0035E07BEDBFEE30E55F725C5; xduid=55D857BEFC5E27A8B84A7407D4A86B38; xdesp-rand-usr=292; 
abzTestingId="{\"flightsFisherAB\":90,\"pkgImbatibleBrand_ctrl\":76,\"s_flights_s_violet_sbox_v1\":21,\"upsellingConfig\":58,\"twoOneWayForceMX\":0,\"filterLandingFlights\":41,\"s_loyalty_v2_ctrl\":5,\"s_flights_l_violet_sbox_v1\":0,\"s_flights_l_loyalty_v2\":58,\"mostProfitablePromotion\":0,\"despechecks\":72,\"s_loyalty_v2_review\":33,\"platform\":55,\"selected_radio_button\":0,\"fisher_2ow\":0,\"loyalty_non_adherents\":63,\"paymentMethod\":55,\"shifuMobileProductLabels\":0,\"obFee\":40,\"twoOneWay\":0,\"s_violet_sbox_v1\":17,\"s_flights_s_loyalty_v2\":14,\"flights_loyalty_non_adherents\":63,\"pkgImbatibleBrand-ctrl\":60,\"crossBorderTicketing\":0}', 'Cache-Control': "no-cache", 'Postman-Token': "4c6c6b9f-ed0a-477f-a787-c8cde039475b,4e35a9da-93ed-4602-825a-283f619d543b", 'Host': "www.decolar.com", 'cache-control': "no-cache" } response = requests.request("GET", url, headers=headers, params=querystring) dataOneWay = json.loads(response.text) print(origem, '->' , destino) print(querystring) print(dataOneWay) if 'clusters' in dataOneWay: for i in dataOneWay['clusters']: print(i['priceDetail']['mainFare']['amount']) ```
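Stripping away the scraping details, the core of the price-tracking analysis above is one reshaping step: append each search to a CSV of long-format rows, then pivot so each search timestamp becomes a column. A self-contained sketch with made-up prices (the column names mirror the ones used in this notebook):

```python
import pandas as pd

# Made-up search history: one row per (route, departure date, query time).
history = pd.DataFrame({
    "de":           ["POA", "POA", "GRU", "GRU"],
    "para":         ["MIA", "MIA", "MIA", "MIA"],
    "Ida":          ["2020-07-03"] * 4,
    "DataPesquisa": ["2020-01-20", "2020-01-21", "2020-01-20", "2020-01-21"],
    "preco":        [3500.0, 3400.0, 2900.0, 3100.0],
})

# One row per route, one column per search date.
pivot = pd.pivot_table(history, values="preco",
                       index=["de", "para", "Ida"], columns="DataPesquisa")
print(pivot)
```

Reading across a row then shows how a route's price evolved between searches; in a notebook, chaining `.style.background_gradient(cmap='OrRd')` onto the pivot shades cheap and expensive cells, as done above.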
``` import pandas as pd pd.set_option('display.max_columns', None) import numpy as np import matplotlib.pyplot as plt import seaborn as sns plt.style.use('seaborn') #from pandas_profiling import ProfileReportofileReport import warnings warnings.filterwarnings('ignore') from fairlearn.metrics import MetricFrame from fairlearn.metrics import selection_rate, false_positive_rate,true_positive_rate,count from sklearn.metrics import accuracy_score from sklearn.pipeline import Pipeline from sklearn.linear_model import LogisticRegression,Lasso from sklearn.preprocessing import LabelEncoder from sklearn.metrics import accuracy_score,precision_score,recall_score,roc_auc_score from sklearn.model_selection import train_test_split from sklearn.svm import SVC from sklearn.ensemble import RandomForestClassifier,GradientBoostingClassifier from xgboost import XGBRegressor,XGBClassifier from catboost import CatBoostClassifier import shap from category_encoders.target_encoder import TargetEncoder from category_encoders.m_estimate import MEstimateEncoder from category_encoders.cat_boost import CatBoostEncoder from category_encoders.leave_one_out import LeaveOneOutEncoder from tqdm.notebook import tqdm from collections import defaultdict #pd.read_csv('propublica_data_for_fairml.csv').head() df = pd.read_csv("data/compas-scores-raw.csv") df["Score"] = df["DecileScore"] # df.loc[df["DecileScore"] > 7, "Score"] = 2 # df.loc[(df["DecileScore"] > 4) & (df["DecileScore"] < 8), "Score"] = 1 # df.loc[df["DecileScore"] < 5, "Score"] = 0 df.loc[df["DecileScore"] > 4, "Score"] = 1 df.loc[df["DecileScore"] <= 4, "Score"] = 0 cols = [ "Person_ID", "AssessmentID", "Case_ID", "LastName", "FirstName", "MiddleName", "DateOfBirth", "ScaleSet_ID", "Screening_Date", "RecSupervisionLevel", "Agency_Text", "AssessmentReason", "Language", "Scale_ID", "IsCompleted", "IsDeleted", "AssessmentType", "DecileScore", ] df = df.drop(columns=cols) possible_targets = ["RawScore", "ScoreText", "Score"] X = 
df.drop(columns=possible_targets) y = df[["Score"]] X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.33, random_state=42) te = CatBoostEncoder(sigma=3) model = XGBClassifier() pipe = Pipeline([('encoder', te), ('model', model)]) pipe.fit(X_tr,y_tr) preds = pipe.predict(X_te) explainer = shap.TreeExplainer(pipe.named_steps["model"]) shap_values = explainer.shap_values(pipe[:-1].transform(X_tr)) shap_values shap.initjs() shap.force_plot(explainer.expected_value, shap_values[0,:], X.iloc[0,:]) shap_values.squeeze() shap.plots.bar(shap_values.values) gm = MetricFrame( metrics=accuracy_score, y_true=y_te, y_pred=preds, sensitive_features=X_te["Sex_Code_Text"], ) print(gm.overall) print(gm.by_group) gm = MetricFrame( metrics=accuracy_score, y_true=y_te, y_pred=preds, sensitive_features=X_te["Ethnic_Code_Text"], ) print(gm.by_group) gm = MetricFrame( metrics=accuracy_score, y_true=y_te, y_pred=preds, sensitive_features=X_te["RecSupervisionLevelText"], ) print(gm.by_group) def fit_predict(modelo, enc, data, target, test): pipe = Pipeline([("encoder", enc), ("model", modelo)]) pipe.fit(data, target) return pipe.predict(test) def auc_group(model, data, y_true, dicc, group: str = ""): aux = data.copy() aux["target"] = y_true cats = aux[group].unique().tolist() cats = cats + ["all"] if len(dicc) == 0: dicc = defaultdict(list, {k: [] for k in cats}) for cat in cats: if cat != "all": aux2 = aux[aux[group] == cat] preds = model.predict_proba(aux2.drop(columns="target"))[:, 1] truth = aux2["target"] dicc[cat].append(roc_auc_score(truth, preds)) else: dicc[cat].append(roc_auc_score(y_true, model.predict_proba(data)[:, 1])) return dicc for metrics in [selection_rate, false_positive_rate, true_positive_rate]: gms = [] gms_rec = [] ms = [] auc = {} #param = [0, 1, 2, 5, 10, 20, 50, 100, 200, 500, 1000, 2000, 5000, 10_000, 20_000] param = np.linspace(0,1,20) for m in tqdm(param): # encoder = MEstimateEncoder(m=m) # encoder = TargetEncoder(smoothing=m) encoder = 
LeaveOneOutEncoder(sigma=m) model = LogisticRegression() #model = GradientBoostingClassifier() pipe = Pipeline([("encoder", encoder), ("model", model)]) pipe.fit(X_tr, y_tr) preds = pipe.predict(X_te) gm = MetricFrame( metrics=metrics, y_true=y_te, y_pred=preds, sensitive_features=X_te["Ethnic_Code_Text"], ) auc = auc_group( model=pipe, data=X_te, y_true=y_te, dicc=auc, group="Ethnic_Code_Text" ) gm_rec = MetricFrame( metrics=metrics, y_true=y_te, y_pred=preds, sensitive_features=X_te["RecSupervisionLevelText"], ) gms.append(gm) gms_rec.append(gm_rec) ms.append(m) # Impact Score plt.figure() title = "Impact of encoding regularization in category fairness " + str( metrics.__name__ ) plt.title(title) plt.xlabel("M parameter") plt.plot(ms, [gm.overall for gm in gms_rec], label="Overall") plt.plot(ms, [gm.by_group["Low"] for gm in gms_rec], label="Low") plt.plot(ms, [gm.by_group["High"] for gm in gms_rec], label="High") plt.plot(ms, [gm.by_group["Medium"] for gm in gms_rec], label="Medium") plt.plot( ms, [gm.by_group["Medium with Override Consideration"] for gm in gms_rec], label="Medium with Override Consideration", ) plt.legend(bbox_to_anchor=(1.1, 1)) plt.show() # Ethnic plt.figure() title = "Impact of encoding regularization in category fairness " + str( metrics.__name__ ) plt.title(title) plt.xlabel("M parameter") plt.plot(ms, [gm.overall for gm in gms], label="Overall") plt.plot(ms, [gm.by_group["Caucasian"] for gm in gms], label="Caucasian") plt.plot( ms, [gm.by_group["African-American"] for gm in gms], label="AfricanAmerican" ) plt.plot(ms, [gm.by_group["Arabic"] for gm in gms], label="Arabic") plt.plot(ms, [gm.by_group["Hispanic"] for gm in gms], label="Hispanic") # plt.plot(ms,[gm.by_group['Oriental'] for gm in gms],label='Oriental') plt.legend(bbox_to_anchor=(1.1, 1)) plt.show() # AUC ROC plt.title("AUC ROC") plt.xlabel("M parameter") plt.plot(ms, auc["all"], label="Overall") plt.plot(ms, auc["Caucasian"], label="Caucasian") plt.plot(ms, 
auc["African-American"], label="African-American") plt.plot(ms, auc["Arabic"], label="Arabic") plt.plot(ms, auc["Hispanic"], label="Hispanic") plt.legend(bbox_to_anchor=(1.1, 1)) plt.show() X_tr.head() d = {} for i in range(0, 2): d = auc_group(model=pipe, data=X_tr, y_true=y_tr, dicc=d, group='Sex_Code_Text') d ``` The m-estimate smoothing applied by these encoders blends each category mean with the global mean: $\frac{group\_mean \cdot n\_samples + global\_mean \cdot m}{n\_samples + m}$ ``` # Sketch of what the encoder computes per category, e.g. # groupby(X['feat_cat'])[y].mean() with a global target mean of 0.5, # a 'spain' category mean of 0.2, a 'france' category mean of 0.8, # n = 10000 samples, and m acting as the regularization strength. ``` # Other Data ``` import numpy as np import pandas as pd import matplotlib.pyplot as plt from sklearn.datasets import fetch_openml data = fetch_openml(data_id=1590, as_frame=True) X = pd.get_dummies(data.data) y_true = (data.target == '>50K') * 1 sex = data.data['sex'] sex.value_counts() from fairlearn.metrics import MetricFrame from sklearn.metrics import accuracy_score from sklearn.tree import DecisionTreeClassifier classifier = DecisionTreeClassifier(min_samples_leaf=10, max_depth=4) classifier.fit(X, y_true) y_pred = classifier.predict(X) gm = MetricFrame(metrics=accuracy_score, y_true=y_true, y_pred=y_pred, sensitive_features=sex) ```
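The m-estimate smoothing formula above can be made concrete with a small self-contained sketch. The `m_estimate_encode` helper and the toy country data are illustrative (not part of the notebook); the category-encoders library's `MEstimateEncoder` implements the same idea.

```python
import pandas as pd

def m_estimate_encode(X_cat, y, m):
    """M-estimate target encoding: shrink each category mean toward the
    global mean, as if m pseudo-observations of the global mean were added."""
    global_mean = y.mean()
    stats = y.groupby(X_cat).agg(['mean', 'count'])
    smoothed = (stats['mean'] * stats['count'] + global_mean * m) / (stats['count'] + m)
    return X_cat.map(smoothed)

# toy data: hypothetical country categories, binary target
X_cat = pd.Series(['spain'] * 3 + ['france'] * 2)
y = pd.Series([0, 0, 1, 1, 1])

print(m_estimate_encode(X_cat, y, m=0))    # raw per-category means (no shrinkage)
print(m_estimate_encode(X_cat, y, m=100))  # heavily shrunk toward the global mean
```

With m = 0 the encoding is the raw category mean; as m grows, every category collapses toward the global target mean, which is exactly the regularization swept over in the fairness plots above.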
Copyright 2020 The Google Research Authors. Licensed under the Apache License, Version 2.0 (the "License"); You may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ``` import numpy as np import matplotlib.pyplot as plt import seaborn as sns import pandas as pd from pathlib import Path # dictionary of metrics to plot (each metric is shown in an individual plot) # dictionary key is name of metric in logs.csv, dict value is label in the final plot plot_metrics = {'ens_acc': 'Test accuracy', # Ensemble accuracy 'ens_ce': 'Test cross entropy'} # Ensemble cross entropy # directory of results # should include 'run_sweeps.csv', generated by run_resnet_experiments.sh/run_resnet_experiments.sh results_dir = '/tmp/google_research/cold_posterior_bnn/results_resnet/' # load csv with results of all runs sweeps_df = pd.read_csv(results_dir+'run_sweeps.csv').set_index('id') # add final performance of run as columns to sweep_df for metric in plot_metrics.keys(): sweeps_df[metric] = [0.] 
* len(sweeps_df) for i in range(len(sweeps_df)): # get logs of run log_dir = sweeps_df.loc[i, 'dir'] logs_df = pd.read_csv('{}{}/logs.csv'.format(results_dir, log_dir)) for metric in plot_metrics: # get final performance of run and add to df idx = 0 final_metric = float('nan') while np.isnan(final_metric): idx += 1 final_metric = logs_df.tail(idx)[metric].values[0] # indexing starts with 1 sweeps_df.at[i, metric] = final_metric # save/update csv file sweeps_df.to_csv(results_dir+'run_sweeps.csv') # plot font_scale = 1.1 line_width = 3 marker_size = 7 cm_lines = sns.color_palette('deep') cm_points = sns.color_palette('bright') # style settings sns.reset_defaults() sns.set_context("notebook", font_scale=font_scale, rc={"lines.linewidth": line_width, "lines.markersize": marker_size} ) sns.set_style("whitegrid") for metric, metric_label in plot_metrics.items(): # plot SG-MCMC fig, ax = plt.subplots(figsize=(7.0, 2.85)) g = sns.lineplot(x='temperature', y=metric, data=sweeps_df, marker='o', label='SG-MCMC', color=cm_lines[0], zorder=2, ci='sd') # finalize plot plt.legend(loc=3, fontsize=14) g.set_xscale('log') #g.set_ylim(bottom=0.88, top=0.94) g.set_xlim(left=1e-4, right=1) fig.tight_layout() ax.set_frame_on(False) ax.set_xlabel('Temperature $T$') ax.set_ylabel(metric_label) ax.margins(0,0) plt.savefig('{}resnet_{}.pdf'.format(results_dir, metric_label), format="pdf", dpi=300, bbox_inches="tight", pad_inches=0) ```
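The while-loop above walks backward from the tail of `logs.csv` until it hits a non-NaN value. The same "last recorded metric" lookup can be expressed in one call with pandas' `last_valid_index`; a minimal sketch with illustrative data standing in for a real log file:

```python
import numpy as np
import pandas as pd

# toy metric log with trailing NaNs (illustrative, mimicking logs.csv)
logs_df = pd.DataFrame({'ens_acc': [0.90, 0.91, np.nan, 0.93, np.nan]})

# pandas equivalent of the backward while-loop: the last non-NaN entry
last_idx = logs_df['ens_acc'].last_valid_index()
final_metric = logs_df['ens_acc'].loc[last_idx]
print(final_metric)  # 0.93
```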
``` from skimage.measure import block_reduce from framework.dependency import * from framework.utli import * def pipeline(X, Qstep=16, Q_mode=1, ML_inv=True, write=False, name='tmp.txt', tPCA=None, isDCT=True): H, W = X.shape[0], X.shape[1] X = X.reshape(1, H, W, -1) X_p, X_q, X_r = X[:,:,:,0:1], block_reduce(X[:,:,:,1:2], (1, 2, 2, 1), np.mean), block_reduce(X[:,:,:,2:], (1, 2, 2, 1), np.mean) # P def proP(X_p, Qstep, Q_mode, isDCT): X_block = Shrink(X_p, win=8) if isDCT == False: trans_pca = myPCA(is2D=True, H=8, W=8) trans_pca.fit(X_block) else: trans_pca = DCT(8,8) tX = trans_pca.transform(X_block) tX = Q(trans_pca, tX, Qstep, mode=Q_mode) return trans_pca, tX # Quant def inv_proP(trans_pca, tX, Qstep, Xraw, ML_inv, Q_mode): Xraw = Shrink(Xraw, win=8) tX = dQ(trans_pca, tX, Qstep, mode=Q_mode) if ML_inv == True: iX = trans_pca.ML_inverse_transform(Xraw, tX) else: iX = trans_pca.inverse_transform(tX) iX_p = invShrink(iX, win=8) return iX_p # QR def proQR(trans_pca, X_q, Qstep, Q_mode): X_block = Shrink(X_q, win=8) tX = trans_pca.transform(X_block) tX = Q(trans_pca, tX, Qstep, mode=Q_mode) return tX if tPCA == None: tPCA, tX_p = proP(X_p, Qstep, Q_mode, isDCT) else: tX_p = proQR(tPCA, X_p, Qstep, Q_mode) if write == True: write_to_txt(tX_p, name) iX_p = inv_proP(tPCA, tX_p, Qstep, X_p, ML_inv, Q_mode) tX_q = proQR(tPCA, X_q, Qstep, Q_mode) if write == True: write_to_txt(tX_q, name) iX_q = inv_proP(tPCA, tX_q, Qstep, X_q, ML_inv, Q_mode) iX_q = cv2.resize(iX_q[0,:,:,0], (W, H)).reshape(1, H, W, 1) tX_r = proQR(tPCA, X_r, Qstep, Q_mode) if write == True: write_to_txt(tX_r, name) with open(name, 'a') as f: f.write('-1111') iX_r = inv_proP(tPCA, tX_r, Qstep, X_r, ML_inv, Q_mode) iX_r = cv2.resize(iX_r[0,:,:,0], (W, H)).reshape(1, H, W, 1) return np.concatenate((iX_p, iX_q, iX_r), axis=-1) def run(tPCA=None, ML_inv=True, ML_color_inv=True, img=0, write=False, isYUV=True, name=None, Q_mode=0, isDCT=False): psnr = [] if isDCT == True: q = np.arange(5, 99, 5) else: q = 
[200, 160, 140, 120, 100, 90, 80, 70, 60, 50, 40.0,32.0,26.6,22.8,20.0,17.7,16.0,14.4,12.8,11.2,9.6,8.0,6.4,4.8,3.2] q = np.arange(5, 99, 5) for i in range(len(q)): X_bgr = cv2.imread('/Users/alex/Desktop/proj/compression/data/Kodak/Kodak/'+str(img)+'.bmp') if isYUV == False: color_pca, X = BGR2PQR(X_bgr) else: X = BGR2YUV(X_bgr) iX = pipeline(X, Qstep=q[i], Q_mode=Q_mode, ML_inv=ML_inv, write=write, name='../result/'+name+'/'+str(img)+'_'+str(i)+'.txt', tPCA=tPCA, isDCT=isDCT) if ML_color_inv == True: iX = ML_inv_color(X_bgr, iX) #cv2.imwrite(str(img)+'_'+str(i)+'.png', copy.deepcopy(iX)) psnr.append(PSNR(iX, X_bgr)) else: if isYUV == True: iX = YUV2BGR(iX) else: iX = PQR2BGR(iX, color_pca) psnr.append(PSNR(iX[0], X_bgr)) #break return psnr psnr = [] name = 'tmp' for i in range(24): psnr.append(run(None, 1, 1, img=i, write=1, isYUV=False, name=name, Q_mode=3, isDCT=False)) #break psnr = np.array(psnr) with open('../result/psnr_'+name+'.pkl', 'wb') as f: pickle.dump(psnr,f) ```
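The chroma handling at the top of `pipeline` downsamples the second and third color planes with `skimage.measure.block_reduce` (2×2 mean pooling over the spatial axes, akin to 4:2:0 subsampling). A minimal sketch of that call on toy data, with the same `(batch, H, W, channel)` layout the pipeline uses:

```python
import numpy as np
from skimage.measure import block_reduce

# a toy 4x4 single-channel plane (illustrative, not Kodak data)
chan = np.arange(16, dtype=float).reshape(1, 4, 4, 1)

# 2x2 mean pooling over the spatial axes, as applied to the chroma planes above
down = block_reduce(chan, (1, 2, 2, 1), np.mean)
print(down.shape)        # (1, 2, 2, 1)
print(down[0, 0, 0, 0])  # mean of the top-left 2x2 block: (0+1+4+5)/4 = 2.5
```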
``` # from numba import jit # from tqdm import trange # import pandas as pd # eo_df = pd.read_csv("/mnt/sda1/cvpr21/Classification/Aerial-View-Object-Classification/data/train_EO.csv") # eo_df = eo_df.sort_values(by='img_name') # sar_df = pd.read_csv("/mnt/sda1/cvpr21/Classification/Aerial-View-Object-Classification/data/train_SAR.csv") # sar_df = sar_df.sort_values(by='img_name') # @jit() # def equal(): # notsame_image = 0 # notsame_label = 0 # t = trange(len(sar_df)) # for i in t: # t.set_postfix({'nums of not same label:': notsame_label}) # eo_label = next(eo_df.iterrows())[1].class_id # sar_label = next(sar_df.iterrows())[1].class_id # # if not eo_image == sar_image: # # notsame_image += 1 # if not eo_label == sar_label: # notsame_label += 1 # # notsame_label += 1 # # print("nums of not same imageid:", notsame_image) # #print("nums of not same label:", notsame_label) # equal() from __future__ import print_function, division import torch import math import torch.nn as nn import torch.optim as optim from torch.optim import lr_scheduler import numpy as np import torchvision from torchvision import datasets, models, transforms import matplotlib.pyplot as plt import time import os import copy from torch.autograd import Variable import random import torch.nn.functional as F exp_num = "45_kd_sar-teacher_eo-student_pretrain-on-sar" def seed_everything(seed): random.seed(seed) os.environ['PYTHONHASHSEED'] = str(seed) np.random.seed(seed) torch.manual_seed(seed) torch.cuda.manual_seed(seed) torch.backends.cudnn.deterministic = True # tf.set_random_seed(seed) seed = 2019 seed_everything(seed) #https://github.com/4uiiurz1/pytorch-auto-augment import random import numpy as np import scipy from scipy import ndimage from PIL import Image, ImageEnhance, ImageOps class AutoAugment(object): def __init__(self): self.policies = [ ['Invert', 0.1, 7, 'Contrast', 0.2, 6], ['Rotate', 0.7, 2, 'TranslateX', 0.3, 9], ['Sharpness', 0.8, 1, 'Sharpness', 0.9, 3], ['ShearY', 0.5, 8, 
'TranslateY', 0.7, 9], ['AutoContrast', 0.5, 8, 'Equalize', 0.9, 2], ['ShearY', 0.2, 7, 'Posterize', 0.3, 7], ['Color', 0.4, 3, 'Brightness', 0.6, 7], ['Sharpness', 0.3, 9, 'Brightness', 0.7, 9], ['Equalize', 0.6, 5, 'Equalize', 0.5, 1], ['Contrast', 0.6, 7, 'Sharpness', 0.6, 5], ['Color', 0.7, 7, 'TranslateX', 0.5, 8], ['Equalize', 0.3, 7, 'AutoContrast', 0.4, 8], ['TranslateY', 0.4, 3, 'Sharpness', 0.2, 6], ['Brightness', 0.9, 6, 'Color', 0.2, 8], ['Solarize', 0.5, 2, 'Invert', 0.0, 3], ['Equalize', 0.2, 0, 'AutoContrast', 0.6, 0], ['Equalize', 0.2, 8, 'Equalize', 0.6, 4], ['Color', 0.9, 9, 'Equalize', 0.6, 6], ['AutoContrast', 0.8, 4, 'Solarize', 0.2, 8], ['Brightness', 0.1, 3, 'Color', 0.7, 0], ['Solarize', 0.4, 5, 'AutoContrast', 0.9, 3], ['TranslateY', 0.9, 9, 'TranslateY', 0.7, 9], ['AutoContrast', 0.9, 2, 'Solarize', 0.8, 3], ['Equalize', 0.8, 8, 'Invert', 0.1, 3], ['TranslateY', 0.7, 9, 'AutoContrast', 0.9, 1], ] def __call__(self, img): img = apply_policy(img, self.policies[random.randrange(len(self.policies))]) return img operations = { 'ShearX': lambda img, magnitude: shear_x(img, magnitude), 'ShearY': lambda img, magnitude: shear_y(img, magnitude), 'TranslateX': lambda img, magnitude: translate_x(img, magnitude), 'TranslateY': lambda img, magnitude: translate_y(img, magnitude), 'Rotate': lambda img, magnitude: rotate(img, magnitude), 'AutoContrast': lambda img, magnitude: auto_contrast(img, magnitude), 'Invert': lambda img, magnitude: invert(img, magnitude), 'Equalize': lambda img, magnitude: equalize(img, magnitude), 'Solarize': lambda img, magnitude: solarize(img, magnitude), 'Posterize': lambda img, magnitude: posterize(img, magnitude), 'Contrast': lambda img, magnitude: contrast(img, magnitude), 'Color': lambda img, magnitude: color(img, magnitude), 'Brightness': lambda img, magnitude: brightness(img, magnitude), 'Sharpness': lambda img, magnitude: sharpness(img, magnitude), 'Cutout': lambda img, magnitude: cutout(img, magnitude), } def 
apply_policy(img, policy): if random.random() < policy[1]: img = operations[policy[0]](img, policy[2]) if random.random() < policy[4]: img = operations[policy[3]](img, policy[5]) return img def transform_matrix_offset_center(matrix, x, y): o_x = float(x) / 2 + 0.5 o_y = float(y) / 2 + 0.5 offset_matrix = np.array([[1, 0, o_x], [0, 1, o_y], [0, 0, 1]]) reset_matrix = np.array([[1, 0, -o_x], [0, 1, -o_y], [0, 0, 1]]) transform_matrix = offset_matrix @ matrix @ reset_matrix return transform_matrix def shear_x(img, magnitude): img = np.array(img) magnitudes = np.linspace(-0.3, 0.3, 11) transform_matrix = np.array([[1, random.uniform(magnitudes[magnitude], magnitudes[magnitude+1]), 0], [0, 1, 0], [0, 0, 1]]) transform_matrix = transform_matrix_offset_center(transform_matrix, img.shape[0], img.shape[1]) affine_matrix = transform_matrix[:2, :2] offset = transform_matrix[:2, 2] img = np.stack([ndimage.interpolation.affine_transform( img[:, :, c], affine_matrix, offset) for c in range(img.shape[2])], axis=2) img = Image.fromarray(img) return img def shear_y(img, magnitude): img = np.array(img) magnitudes = np.linspace(-0.3, 0.3, 11) transform_matrix = np.array([[1, 0, 0], [random.uniform(magnitudes[magnitude], magnitudes[magnitude+1]), 1, 0], [0, 0, 1]]) transform_matrix = transform_matrix_offset_center(transform_matrix, img.shape[0], img.shape[1]) affine_matrix = transform_matrix[:2, :2] offset = transform_matrix[:2, 2] img = np.stack([ndimage.interpolation.affine_transform( img[:, :, c], affine_matrix, offset) for c in range(img.shape[2])], axis=2) img = Image.fromarray(img) return img def translate_x(img, magnitude): img = np.array(img) magnitudes = np.linspace(-150/331, 150/331, 11) transform_matrix = np.array([[1, 0, 0], [0, 1, img.shape[1]*random.uniform(magnitudes[magnitude], magnitudes[magnitude+1])], [0, 0, 1]]) transform_matrix = transform_matrix_offset_center(transform_matrix, img.shape[0], img.shape[1]) affine_matrix = transform_matrix[:2, :2] offset = 
transform_matrix[:2, 2] img = np.stack([ndimage.interpolation.affine_transform( img[:, :, c], affine_matrix, offset) for c in range(img.shape[2])], axis=2) img = Image.fromarray(img) return img def translate_y(img, magnitude): img = np.array(img) magnitudes = np.linspace(-150/331, 150/331, 11) transform_matrix = np.array([[1, 0, img.shape[0]*random.uniform(magnitudes[magnitude], magnitudes[magnitude+1])], [0, 1, 0], [0, 0, 1]]) transform_matrix = transform_matrix_offset_center(transform_matrix, img.shape[0], img.shape[1]) affine_matrix = transform_matrix[:2, :2] offset = transform_matrix[:2, 2] img = np.stack([ndimage.interpolation.affine_transform( img[:, :, c], affine_matrix, offset) for c in range(img.shape[2])], axis=2) img = Image.fromarray(img) return img def rotate(img, magnitude): img = np.array(img) magnitudes = np.linspace(-30, 30, 11) theta = np.deg2rad(random.uniform(magnitudes[magnitude], magnitudes[magnitude+1])) transform_matrix = np.array([[np.cos(theta), -np.sin(theta), 0], [np.sin(theta), np.cos(theta), 0], [0, 0, 1]]) transform_matrix = transform_matrix_offset_center(transform_matrix, img.shape[0], img.shape[1]) affine_matrix = transform_matrix[:2, :2] offset = transform_matrix[:2, 2] img = np.stack([ndimage.interpolation.affine_transform( img[:, :, c], affine_matrix, offset) for c in range(img.shape[2])], axis=2) img = Image.fromarray(img) return img def auto_contrast(img, magnitude): img = ImageOps.autocontrast(img) return img def invert(img, magnitude): img = ImageOps.invert(img) return img def equalize(img, magnitude): img = ImageOps.equalize(img) return img def solarize(img, magnitude): magnitudes = np.linspace(0, 256, 11) img = ImageOps.solarize(img, random.uniform(magnitudes[magnitude], magnitudes[magnitude+1])) return img def posterize(img, magnitude): magnitudes = np.linspace(4, 8, 11) img = ImageOps.posterize(img, int(round(random.uniform(magnitudes[magnitude], magnitudes[magnitude+1])))) return img def contrast(img, magnitude): 
magnitudes = np.linspace(0.1, 1.9, 11) img = ImageEnhance.Contrast(img).enhance(random.uniform(magnitudes[magnitude], magnitudes[magnitude+1])) return img def color(img, magnitude): magnitudes = np.linspace(0.1, 1.9, 11) img = ImageEnhance.Color(img).enhance(random.uniform(magnitudes[magnitude], magnitudes[magnitude+1])) return img def brightness(img, magnitude): magnitudes = np.linspace(0.1, 1.9, 11) img = ImageEnhance.Brightness(img).enhance(random.uniform(magnitudes[magnitude], magnitudes[magnitude+1])) return img def sharpness(img, magnitude): magnitudes = np.linspace(0.1, 1.9, 11) img = ImageEnhance.Sharpness(img).enhance(random.uniform(magnitudes[magnitude], magnitudes[magnitude+1])) return img def cutout(org_img, magnitude=None): magnitudes = np.linspace(0, 60/331, 11) img = np.copy(org_img) mask_val = img.mean() if magnitude is None: mask_size = 16 else: mask_size = int(round(img.shape[0]*random.uniform(magnitudes[magnitude], magnitudes[magnitude+1]))) top = np.random.randint(0 - mask_size//2, img.shape[0] - mask_size) left = np.random.randint(0 - mask_size//2, img.shape[1] - mask_size) bottom = top + mask_size right = left + mask_size if top < 0: top = 0 if left < 0: left = 0 img[top:bottom, left:right, :].fill(mask_val) img = Image.fromarray(img) return img class Cutout(object): def __init__(self, length=16): self.length = length def __call__(self, img): img = np.array(img) mask_val = img.mean() top = np.random.randint(0 - self.length//2, img.shape[0] - self.length) left = np.random.randint(0 - self.length//2, img.shape[1] - self.length) bottom = top + self.length right = left + self.length top = 0 if top < 0 else top left = 0 if left < 0 else left img[top:bottom, left:right, :] = mask_val img = Image.fromarray(img) return img ``` ### MIXUP ``` alpha_ = 0.4 # def mixup_data(x, y, alpha=alpha_, use_cuda=True): # if alpha > 0: # lam = np.random.beta(alpha, alpha) # else: # lam = 1 # batch_size = x.size()[0] # if use_cuda: # index = 
torch.randperm(batch_size).cuda() # else: # index = torch.randperm(batch_size) # mixed_x = lam * x + (1 - lam) * x[index, :] # y_a, y_b = y, y[index] # return mixed_x, y_a, y_b, lam # def mixup_criterion(criterion, pred, y_a, y_b, lam): # return lam * criterion(pred.float().cuda(), y_a.float().cuda()) + (1 - lam) * criterion(pred.float().cuda(), y_b.float().cuda()) def mixup_data(x, y, alpha=1.0, use_cuda=True): '''Returns mixed inputs, pairs of targets, and lambda''' if alpha > 0: lam = np.random.beta(alpha, alpha) else: lam = 1 batch_size = x.size()[0] if use_cuda: index = torch.randperm(batch_size).cuda() else: index = torch.randperm(batch_size) mixed_x = lam * x + (1 - lam) * x[index, :] # print(y) y_a, y_b = y, y[index] return mixed_x, y_a, y_b, lam def mixup_criterion(criterion, pred, y_a, y_b, lam): return lam * criterion(pred, y_a) + (1 - lam) * criterion(pred, y_b) class ConcatDataset(torch.utils.data.Dataset): def __init__(self, *datasets): self.datasets = datasets def __getitem__(self, i): return tuple(d[i] for d in self.datasets) def __len__(self): return min(len(d) for d in self.datasets) plt.ion() # interactive mode EO_data_transforms = { 'Training': transforms.Compose([ transforms.Grayscale(num_output_channels=3), transforms.Resize((30,30)), AutoAugment(), Cutout(), # transforms.RandomRotation(15,), # transforms.RandomResizedCrop(30), # transforms.RandomHorizontalFlip(), # transforms.RandomVerticalFlip(), transforms.Grayscale(num_output_channels=1), transforms.ToTensor(), transforms.Normalize([0.2913437], [0.12694514]) #transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]), # transforms.Lambda(lambda x: x.repeat(3, 1, 1)), #transforms.Normalize(mean=[0.507, 0.487, 0.441], std=[0.267, 0.256, 0.276]) ]), 'Test': transforms.Compose([ transforms.Grayscale(num_output_channels=1), transforms.Resize(30), transforms.ToTensor(), transforms.Normalize([0.2913437], [0.12694514]) #transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 
0.224, 0.225]), # transforms.Lambda(lambda x: x.repeat(3, 1, 1)), # transforms.Normalize(mean=[0.507, 0.487, 0.441], std=[0.267, 0.256, 0.276]) ]), 'valid_EO': transforms.Compose([ transforms.Grayscale(num_output_channels=1), transforms.Resize((30,30)), # AutoAugment(), # transforms.RandomRotation(15,), # transforms.RandomResizedCrop(48), # transforms.RandomHorizontalFlip(), # transforms.RandomVerticalFlip(), transforms.ToTensor(), transforms.Normalize([0.2913437], [0.12694514]) # transforms.Grayscale(num_output_channels=1), # transforms.Resize(48), # transforms.ToTensor(), # transforms.Normalize([0.5], [0.5]) # transforms.Lambda(lambda x: x.repeat(3, 1, 1)), # transforms.Normalize(mean=[0.507, 0.487, 0.441], std=[0.267, 0.256, 0.276]) ]), } # Data augmentation and normalization for training # Just normalization for validation data_transforms = { 'Training': transforms.Compose([ transforms.Grayscale(num_output_channels=1), transforms.Resize((52,52)), transforms.RandomRotation(15,), transforms.RandomResizedCrop(48), transforms.RandomHorizontalFlip(), transforms.RandomVerticalFlip(), transforms.ToTensor(), transforms.Normalize([0.4062625], [0.12694514]) #transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]), # transforms.Lambda(lambda x: x.repeat(3, 1, 1)), #transforms.Normalize(mean=[0.507, 0.487, 0.441], std=[0.267, 0.256, 0.276]) ]), 'Test': transforms.Compose([ transforms.Grayscale(num_output_channels=1), transforms.Resize(48), transforms.ToTensor(), transforms.Normalize([0.4062625], [0.12694514]) #transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]), # transforms.Lambda(lambda x: x.repeat(3, 1, 1)), # transforms.Normalize(mean=[0.507, 0.487, 0.441], std=[0.267, 0.256, 0.276]) ]), 'valid': transforms.Compose([ transforms.Grayscale(num_output_channels=1), transforms.Resize((52,52)), transforms.RandomRotation(15,), transforms.RandomResizedCrop(48), transforms.RandomHorizontalFlip(), transforms.RandomVerticalFlip(), 
transforms.ToTensor(), transforms.Normalize([0.4062625], [0.12694514]) # transforms.Grayscale(num_output_channels=1), # transforms.Resize(48), # transforms.ToTensor(), # transforms.Normalize([0.5], [0.5]) # transforms.Lambda(lambda x: x.repeat(3, 1, 1)), # transforms.Normalize(mean=[0.507, 0.487, 0.441], std=[0.267, 0.256, 0.276]) ]), } # data_dir = '/mnt/sda1/cvpr21/Classification/ram' # EO_image_datasets = {x: datasets.ImageFolder(os.path.join(data_dir, x), # EO_data_transforms[x]) # for x in ['Training', 'Test']} # EO_dataloaders = {x: torch.utils.data.DataLoader(EO_image_datasets[x], batch_size=256, # shuffle=True, num_workers=64, pin_memory=True) # for x in ['Training', 'Test']} # EO_dataset_sizes = {x: len(EO_image_datasets[x]) for x in ['Training', 'Test']} # EO_class_names = EO_image_datasets['Training'].classes # image_datasets = {x: datasets.ImageFolder(os.path.join(data_dir, x), # data_transforms[x]) # for x in ['Training', 'Test']} # combine_dataset = ConcatDataset(EO_image_datasets, image_datasets) # dataloaders = {x: torch.utils.data.DataLoader(combine_dataset[x], batch_size=256, # shuffle=True, num_workers=64, pin_memory=True) # for x in ['Training', 'Test']} # dataset_sizes = {x: len(image_datasets[x]) for x in ['Training', 'Test']} # class_names = image_datasets['Training'].classes device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu") # def imshow(inp, title=None): # """Imshow for Tensor.""" # inp = inp.numpy().transpose((1, 2, 0)) # # mean = np.array([0.1786, 0.4739, 0.5329]) # # std = np.array([[0.0632, 0.1361, 0.0606]]) # # inp = std * inp + mean # inp = np.clip(inp, 0, 1) # plt.imshow(inp) # if title is not None: # plt.title(title) # plt.pause(0.001) # pause a bit so that plots are updated # # Get a batch of training data # EO_inputs, EO_classes = next(iter(EO_dataloaders['Training'])) # inputs, classes, k ,_= next(iter(dataloaders)) # # Make a grid from batch # EO_out = torchvision.utils.make_grid(EO_inputs) # out = 
torchvision.utils.make_grid(inputs) # imshow(EO_out, title=[EO_class_names[x] for x in classes]) # imshow(out, title=[class_names[x] for x in classes]) from torch.utils import data from tqdm import tqdm from PIL import Image output_dim = 10 class SAR_EO_Combine_Dataset(data.Dataset): def __init__(self,df_sar,dirpath_sar,transform_sar,df_eo=None,dirpath_eo=None,transform_eo=None,test = False): self.df_sar = df_sar self.test = test self.dirpath_sar = dirpath_sar self.transform_sar = transform_sar self.df_eo = df_eo # self.test = test self.dirpath_eo = dirpath_eo self.transform_eo = transform_eo #image data # if not self.test: # self.image_arr = np.asarray(str(self.dirpath)+'/'+self.df.iloc[:, 0]+'.png') # else: # self.image_arr = np.asarray(str(self.dirpath)+'/'+self.df.iloc[:, 0]) # #labels data # if not self.test: # self.label_df = self.df.iloc[:,1] # Calculate length of df self.data_len = len(self.df_sar.index) def __len__(self): return self.data_len def __getitem__(self, idx): image_name_sar = self.df_sar.img_name[idx] image_name_sar = os.path.join(self.dirpath_sar, image_name_sar) img_sar = Image.open(image_name_sar)#.convert('RGB') img_tensor_sar = self.transform_sar(img_sar) image_name_eo = self.df_eo.img_name[idx] image_name_eo = os.path.join(self.dirpath_eo, image_name_eo) img_eo = Image.open(image_name_eo)#.convert('RGB') img_tensor_eo = self.transform_eo(img_eo) # image_name = self.df.img_name[idx] # img = Image.open(image_name)#.convert('RGB') # img_tensor = self.transform(img) if not self.test: image_labels = int(self.df_sar.class_id[idx]) # label_tensor = torch.zeros((1, output_dim)) # for label in image_labels.split(): # label_tensor[0, int(label)] = 1 image_label = torch.tensor(image_labels,dtype= torch.long) image_label = image_label.squeeze() image_labels_eo = int(self.df_eo.class_id[idx]) # label_tensor_eo = torch.zeros((1, output_dim)) # for label_eo in image_labels_eo.split(): # label_tensor_eo[0, int(label_eo)] = 1 image_label_eo = 
torch.tensor(image_labels_eo,dtype= torch.long) image_label_eo = image_label_eo.squeeze() # print(image_label_eo) return (img_tensor_sar,image_label), (img_tensor_eo, image_label_eo) return (img_tensor_sar) class SAR_EO_Combine_Dataset2(data.Dataset): def __init__(self,df_sar,dirpath_sar,transform_sar,df_eo=None,dirpath_eo=None,transform_eo=None,test = False): self.df_sar = df_sar self.test = test self.dirpath_sar = dirpath_sar self.transform_sar = transform_sar self.df_eo = df_eo # self.test = test self.dirpath_eo = dirpath_eo self.transform_eo = transform_eo #image data # if not self.test: # self.image_arr = np.asarray(str(self.dirpath)+'/'+self.df.iloc[:, 0]+'.png') # else: # self.image_arr = np.asarray(str(self.dirpath)+'/'+self.df.iloc[:, 0]) # #labels data # if not self.test: # self.label_df = self.df.iloc[:,1] # Calculate length of df self.data_len = len(self.df_sar.index) def __len__(self): return self.data_len def __getitem__(self, idx): image_name_sar = self.df_sar.img_name[idx] image_name_sar = os.path.join(self.dirpath_sar, image_name_sar) img_sar = Image.open(image_name_sar)#.convert('RGB') img_tensor_sar = self.transform_sar(img_sar) image_name_eo = self.df_eo.img_name[idx] image_name_eo = os.path.join(self.dirpath_eo, image_name_eo) img_eo = Image.open(image_name_eo)#.convert('RGB') img_tensor_eo = self.transform_eo(img_eo) # image_name = self.df.img_name[idx] # img = Image.open(image_name)#.convert('RGB') # img_tensor = self.transform(img) if not self.test: image_labels = int(self.df_sar.class_id[idx]) # label_tensor = torch.zeros((1, output_dim)) # for label in image_labels.split(): # label_tensor[0, int(label)] = 1 image_label = torch.tensor(image_labels,dtype= torch.long) image_label = image_label.squeeze() image_labels_eo = int(self.df_eo.class_id[idx]) # label_tensor_eo = torch.zeros((1, output_dim)) # for label_eo in image_labels_eo.split(): # label_tensor_eo[0, int(label_eo)] = 1 image_label_eo = torch.tensor(image_labels_eo,dtype= 
torch.long) image_label_eo = image_label_eo.squeeze() # print(image_label_eo) return (img_tensor_sar,image_label), (img_tensor_eo, image_label_eo) return (img_tensor_sar) import pandas as pd eo_df = pd.read_csv("/home/hans/sandisk/dataset_mover/kd_train_EO.csv") eo_df = eo_df.sort_values(by='img_name') sar_df = pd.read_csv("/home/hans/sandisk/dataset_mover/kd_train_SAR.csv") sar_df = sar_df.sort_values(by='img_name') eo_test_df = pd.read_csv("/home/hans/sandisk/dataset_mover/kd_test_EO.csv") eo_test_df = eo_test_df.sort_values(by='img_name') sar_test_df = pd.read_csv("/home/hans/sandisk/dataset_mover/kd_test_SAR.csv") sar_test_df = sar_test_df.sort_values(by='img_name') BATCH_SIZE = 512 dirpath_sar = "/home/hans/sandisk/dataset_mover/kd_train_SAR" dirpath_eo = "/home/hans/sandisk/dataset_mover/kd_train_EO" SAR_EO_Combine = SAR_EO_Combine_Dataset(sar_df,dirpath_sar,data_transforms["Test"],eo_df,dirpath_eo,EO_data_transforms["Training"],test = False) testpath_sar = "/home/hans/sandisk/dataset_mover/kd_val_SAR" testpath_eo = "/home/hans/sandisk/dataset_mover/kd_val_EO" test_set = SAR_EO_Combine_Dataset(sar_test_df,testpath_sar,data_transforms["Test"],eo_test_df,testpath_eo,EO_data_transforms["Test"],test = False) # test_loader = data.DataLoader(dataset=test_dataset,batch_size=BATCH_SIZE,shuffle=False) train_size = len(SAR_EO_Combine) test_size = len(test_set) # from sklearn.model_selection import train_test_split # train_dataset, test_dataset = train_test_split(SAR_EO_Combine[0], SAR_EO_Combine[2], test_size=0.2, random_state=2017, stratify = SAR_EO_Combine[2]) # train_dataset, test_dataset = torch.utils.data.random_split(SAR_EO_Combine, [train_size, test_size]) data_loader = data.DataLoader(dataset=SAR_EO_Combine,batch_size=BATCH_SIZE,shuffle=True,pin_memory = True) test_loader = data.DataLoader(dataset=test_set,batch_size=BATCH_SIZE,shuffle=True,pin_memory = True) def imshow(inp, title=None): """Imshow for Tensor.""" inp = inp.numpy().transpose((1, 2, 0)) # mean = 
np.array([0.1786, 0.4739, 0.5329]) # std = np.array([[0.0632, 0.1361, 0.0606]]) # inp = std * inp + mean inp = np.clip(inp, 0, 1) plt.imshow(inp) if title is not None: plt.title(title) plt.pause(0.001) # pause a bit so that plots are updated SAR, EO = next(iter(data_loader)) # Get a batch of training data EO_inputs, EO_classes = EO[0],EO[1] inputs, classes = SAR[0],SAR[1] # EO_class_names = SAR_EO_Combine.image_label # Make a grid from batch EO_out = torchvision.utils.make_grid(EO_inputs) out = torchvision.utils.make_grid(inputs) imshow(EO_out)#, title=[EO_class_names[x] for x in classes]) imshow(out)#, title=[class_names[x] for x in classes]) print(len(EO_classes)) print(classes) # a = 0 # for i in range(500): # SAR, EO = next(iter(data_loader)) # # Get a batch of training data # EO_inputs, EO_classes = EO[0],EO[1] # inputs, classes = SAR[0],SAR[1] # if 9 in classes: # a+=1 # print(a) ``` ### check if paired succeed ``` # from tqdm import trange # def equal(): # notsame_image = 0 # notsame_label = 0 # t = trange(len(sar_df)) # for i in t: # t.set_postfix({'nums of not same label:': notsame_label}) # sar, eo = next(iter(data_loader)) # eo_label = eo[1][0].tolist() # sar_label = sar[1][0].tolist() # # print(eo_label) # # print(sar_label) # # if not eo_image == sar_image: # # notsame_image += 1 # # eoval = next(eo_label) # # sarval = next(sar_label) # if not eo_label==sar_label: # notsame_label += 1 # # notsame_label += 1 # # print("nums of not same imageid:", notsame_image) # #print("nums of not same label:", notsame_label) # equal() # #next(iter(data_loader)) len(sar_df) == len(eo_df) next(eo_df.iterrows())[1] Num_class=10 num_classes = Num_class num_channel = 1 # model_ft = models.resnet34(pretrained=False) model_ft = torch.load("10/pre_resnet34_model_epoch99.pt") ## Attention: you need to change to the path of pre_EO.pt file, which located in the repo folder pre-train # model_ft.conv1 = nn.Conv2d(num_channel, 64, kernel_size=7, stride=2, padding=3,bias=False) # # 
model_ft.avgpool = SpatialPyramidPooling((3,3)) # model_ft.fc = nn.Linear(512, Num_class) # model_ft.conv0 = nn.Conv2d( # model_ft.features[0] = nn.Conv2d(num_channel, 16, kernel_size=3, stride=2, padding=1,bias=False) # model_ft.classifier[3] = nn.Linear(1024, Num_class, bias=True) model_ft.eval() data_dir = '/mnt/sda1/cvpr21/Classification/ram' weights = [] for i in range(len(os.listdir(os.path.join(data_dir, "Training")))): img_num = len([lists for lists in os.listdir(os.path.join(data_dir, "Training",str(i)))]) print('filenum:',len([lists for lists in os.listdir(os.path.join(data_dir, "Training",str(i)))]))# if os.path.isfile(os.path.join(data_dir, lists))])) weights.append(img_num) print(weights) weights = torch.tensor(weights, dtype=torch.float32).cuda() weights = weights / weights.sum() print(weights) weights = 1.0 / weights weights = weights / weights.sum() print(weights) ``` ### Teacher model (SAR) ``` netT = torch.load("10/resnet34_model_epoch119.pt") ## Attention: you need to change to the path of pre_SAR.pt file, which located in the repo folder pre-train # netT = torch.load('29_auto_aug_eo_sar_noimagenet/pre_resnet34_eo_epoch99.pt') criterion2 = nn.KLDivLoss() netT.eval() from tqdm.notebook import trange from tqdm import tqdm_notebook as tqdm import warnings warnings.filterwarnings('ignore') def train_model(model, criterion, optimizer, scheduler, num_epochs=25): since = time.time() print("---------------Start KD FIT( TEACHER AND STUDENT )-----------------") best_model_wts = copy.deepcopy(model.state_dict()) best_acc = 0.0 best_train_acc = 0.0 kd_alpha = 0.2 Loss_list = [] Accuracy_list = [] T_Loss_list = [] T_Accuracy_list = [] for epoch in trange(num_epochs): print('Epoch {}/{}'.format(epoch, num_epochs - 1)) # Each epoch has a training and validation phase for phase in ['Training', 'Test']:#['Test','Training']: #: if phase == 'Training': model.train() # Set model to training mode else: model.eval() # Set model to evaluate mode running_loss = 0.0 
running_corrects = 0 # Iterate over data. if phase == 'Training': for SAR, EO in tqdm(data_loader): inputs, labels = EO[0], EO[1] inputs = inputs.to(device) labels = labels.to(device) T_input, T_labels = SAR[0], SAR[1] T_input = T_input.to(device) T_labels = T_labels.to(device) # print(T_labels, labels) # labels = torch.argmax(labels, 0) # T_labels = torch.argmax(T_labels, 0) # confusion_matrix = torch.zeros(10, 10) # zero the parameter gradients optimizer.zero_grad() # forward # track history if only in train with torch.no_grad(): soft_target = netT(T_input) with torch.set_grad_enabled(phase == 'Training'): outputs = model(inputs) # _, #print(outputs.dim()) _, preds = torch.max(outputs, 1) # for t, p in zip(labels.view(-1), preds.view(-1)): # confusion_matrix[t.long(), p.long()] += 1 # print(confusion_matrix.diag()/confusion_matrix.sum(1)) # _, T_preds = torch.max(soft_target, 1) T = 2 outputs_S = F.log_softmax(outputs/T, dim=1) outputs_T = F.softmax(soft_target/T, dim=1) # print(outputs_S.size()) # print(outputs_T.size()) loss2 = criterion2(outputs_S, outputs_T) * T * T #print(preds) if phase == 'Training': inputs, y_a, y_b, lam = mixup_data(inputs, labels) inputs, y_a, y_b = map(Variable, (inputs, y_a, y_b)) # print(y_a) # print(y_b) loss = mixup_criterion(criterion, outputs, y_a, y_b, lam) loss = loss*(1-kd_alpha) + loss2*kd_alpha else: loss = criterion(outputs, labels) loss = loss*(1-kd_alpha) + loss2*kd_alpha # backward + optimize only if in training phase if phase == 'Training': loss.backward() optimizer.step() # statistics running_loss += loss.item() * inputs.size(0) running_corrects += torch.sum(preds == labels.data) if phase == 'Training': scheduler.step() epoch_loss = running_loss / train_size epoch_acc = running_corrects.double() / train_size print('{} Loss: {:.4f} Acc: {:.4f}'.format( phase, epoch_loss, epoch_acc)) if phase == "Training": Loss_list.append(epoch_loss) Accuracy_list.append(100 * epoch_acc) else: T_Loss_list.append(epoch_loss) 
T_Accuracy_list.append(100 * epoch_acc) # deep copy the model if phase == 'Test' and epoch_acc > best_acc: best_acc = epoch_acc best_model_wts = copy.deepcopy(model.state_dict()) PATH = os.path.join(str(exp_num), "resnet34_kd_best.pt")#"resnet18_model_epoch{}.pt".format(epoch) if not os.path.exists(str(exp_num)): os.makedirs(str(exp_num)) torch.save(model, PATH) time_elapsed = time.time() - since print('Time from Start {:.0f}m {:.0f}s'.format( time_elapsed // 60, time_elapsed % 60)) if phase == 'Training' and epoch_acc > best_train_acc: best_train_acc = epoch_acc # PATH = os.path.join(str(exp_num), "resnet34_kd_best.pt")#"resnet18_model_epoch{}.pt".format(epoch) # if not os.path.exists(str(exp_num)): # os.makedirs(str(exp_num)) # torch.save(model, PATH) ############################################################################# elif phase == 'Test': acc_matrix_sum = torch.zeros(10) for SAR, EO in tqdm(test_loader): inputs, labels = EO[0], EO[1] inputs = inputs.to(device) # print(inputs) labels = labels.to(device) T_input, T_labels = SAR[0], SAR[1] T_input = T_input.to(device) T_labels = T_labels.to(device) # print(T_labels, labels) # labels = torch.argmax(labels, 0) # T_labels = torch.argmax(T_labels, 0) # zero the parameter gradients optimizer.zero_grad() # forward # track history if only in train with torch.no_grad(): soft_target = netT(T_input) with torch.set_grad_enabled(phase == 'Training'): outputs = model(inputs) # _, #print(outputs.dim()) _, preds = torch.max(outputs, 1) confusion_matrix = torch.zeros(10, 10) for t, p in zip(labels.view(-1), preds.view(-1)): confusion_matrix[t.long(), p.long()] += 1 acc_matrix_batch = (confusion_matrix.diag()/confusion_matrix.sum(1)) # _, T_preds = torch.max(soft_target, 1) T = 2 outputs_S = F.log_softmax(outputs/T, dim=1) outputs_T = F.softmax(soft_target/T, dim=1) # print(outputs_S.size()) # print(outputs_T.size()) loss2 = criterion2(outputs_S, outputs_T) * T * T #print(preds) if phase == 'Training': inputs, y_a, y_b, 
lam = mixup_data(inputs, labels)
                            inputs, y_a, y_b = map(Variable, (inputs, y_a, y_b))
                            # print(y_a)
                            # print(y_b)
                            loss = mixup_criterion(criterion, outputs, y_a, y_b, lam)
                            loss = loss * (1 - kd_alpha) + loss2 * kd_alpha
                        else:
                            loss = criterion(outputs, labels)
                            loss = loss * (1 - kd_alpha) + loss2 * kd_alpha

                        # backward + optimize only if in training phase
                        if phase == 'Training':
                            loss.backward()
                            optimizer.step()

                    # statistics
                    running_loss += loss.item() * inputs.size(0)
                    running_corrects += torch.sum(preds == labels.data)
                    acc_matrix_sum += acc_matrix_batch

                # average the per-batch per-class accuracies over the number of
                # batches; dividing by the dataset size would underestimate them
                acc_matrix = acc_matrix_sum / len(test_loader)
                print("acc for each class: {}".format(acc_matrix))
                #################
                if phase == 'Training':
                    scheduler.step()

                epoch_loss = running_loss / test_size
                epoch_acc = running_corrects.double() / test_size

                print('{} Loss: {:.4f} Acc: {:.4f}'.format(
                    phase, epoch_loss, epoch_acc))
                if phase == "Training":
                    Loss_list.append(epoch_loss)
                    Accuracy_list.append(100 * epoch_acc)
                else:
                    T_Loss_list.append(epoch_loss)
                    T_Accuracy_list.append(100 * epoch_acc)

                # deep copy the model
                if phase == 'Test' and epoch_acc > best_acc:
                    best_acc = epoch_acc
                    best_model_wts = copy.deepcopy(model.state_dict())
                    PATH = os.path.join(str(exp_num), "resnet34_kd_best.pt")
                    if not os.path.exists(str(exp_num)):
                        os.makedirs(str(exp_num))
                    torch.save(model, PATH)
                    time_elapsed = time.time() - since
                    print('Time from Start {:.0f}m {:.0f}s'.format(
                        time_elapsed // 60, time_elapsed % 60))
                if phase == 'Training' and epoch_acc > best_train_acc:
                    best_train_acc = epoch_acc
                    # PATH = os.path.join(str(exp_num), "resnet34_kd_best.pt")
                    # if not os.path.exists(str(exp_num)):
                    #     os.makedirs(str(exp_num))
                    # torch.save(model, PATH)

        print()
        PATH = os.path.join(str(exp_num), "resnet34_kd{}.pt".format(epoch))
        if not os.path.exists(str(exp_num)):
            os.makedirs(str(exp_num))
        torch.save(model, PATH)
        # torch.save(best_model_wts, "best.pt")
time_elapsed = time.time() - since
    print('Training complete in {:.0f}m {:.0f}s'.format(
        time_elapsed // 60, time_elapsed % 60))
    print('Best train Acc: {:.4f}'.format(best_train_acc))
    print('Best val Acc: {:.4f}'.format(best_acc))

    # load best model weights
    model.load_state_dict(best_model_wts)

    ##### PLOT
    x1 = range(0, num_epochs)
    x2 = range(0, num_epochs)
    y1 = Accuracy_list
    y2 = Loss_list
    plt.subplot(2, 1, 1)
    plt.plot(x1, y1, 'o-')
    plt.title('Train accuracy vs. epochs')
    plt.ylabel('Train accuracy')
    plt.subplot(2, 1, 2)
    plt.plot(x2, y2, '.-')
    plt.xlabel('Train loss vs. epochs')
    plt.ylabel('Train loss')
    plt.savefig("Train_accuracy_loss.jpg")  # save before show(), which clears the figure
    plt.show()

    x1 = range(0, num_epochs)
    x2 = range(0, num_epochs)
    y1 = T_Accuracy_list
    y2 = T_Loss_list
    plt.subplot(2, 1, 1)
    plt.plot(x1, y1, 'o-')
    plt.title('Test accuracy vs. epochs')
    plt.ylabel('Test accuracy')
    plt.subplot(2, 1, 2)
    plt.plot(x2, y2, '.-')
    plt.xlabel('Test loss vs. epochs')
    plt.ylabel('Test loss')
    plt.savefig("Test_accuracy_loss.jpg")  # save before show()
    plt.show()

    return model


model_ft = model_ft.to(device)

# criterion = nn.CrossEntropyLoss()
criterion = nn.CrossEntropyLoss(weight=weights)

# def train_model(model, criterion, optimizer, scheduler, num_epochs=25):
#     since = time.time()
#     print("---------------Start KD FIT( TEACHER AND STUDENT )-----------------")
#     best_model_wts = copy.deepcopy(model.state_dict())
#     best_acc = 0.0
#     best_train_acc = 0.0
#     kd_alpha = 0.8
#     Loss_list = []
#     Accuracy_list = []
#     T_Loss_list = []
#     T_Accuracy_list = []
#     for epoch in trange(num_epochs):
#         print('Epoch {}/{}'.format(epoch, num_epochs - 1))
#         # Each epoch has a training and validation phase
#         for phase in ['Training', 'Test']:
#             ##################################### train #############################
#             if phase == 'Training':
#                 model.train()  # Set model to training mode
#             else:
#                 model.eval()  # Set model to evaluate mode
#             running_loss = 0.0
#             running_corrects = 0
#             # Iterate over data.
# if phase == "Training": # for SAR, EO in tqdm(data_loader): # inputs, labels = SAR[0], SAR[1] # inputs = inputs.to(device) # labels = labels.to(device) # T_input, T_labels = EO[0], EO[1] # T_input = T_input.to(device) # T_labels = T_labels.to(device) # # print(T_labels, labels) # # labels = torch.argmax(labels, 0) # # T_labels = torch.argmax(T_labels, 0) # # zero the parameter gradients # optimizer.zero_grad() # # forward # # track history if only in train # with torch.no_grad(): # soft_target = netT(T_input) # with torch.set_grad_enabled(): # outputs = model(inputs) # _, # #print(outputs.dim()) # _, preds = torch.max(outputs, 1) # # _, T_preds = torch.max(soft_target, 1) # T = 2 # outputs_S = F.log_softmax(outputs/T, dim=1) # outputs_T = F.softmax(soft_target/T, dim=1) # # print(outputs_S.size()) # # print(outputs_T.size()) # loss2 = criterion2(outputs_S, outputs_T) * T * T # #print(preds) # inputs, y_a, y_b, lam = mixup_data(inputs, labels) # inputs, y_a, y_b = map(Variable, (inputs, y_a, y_b)) # # print(y_a) # # print(y_b) # loss = mixup_criterion(criterion, outputs, y_a, y_b, lam) # loss = loss*(1-kd_alpha) + loss2*kd_alpha # running_loss += loss.item() * inputs.size(0) # running_corrects += torch.sum(preds == labels.data) # loss.backward() # optimizer.step() # scheduler.step() # ##############################test############################# # else: # for SAR, EO in tqdm(test_data_loader): # inputs, labels = SAR[0], SAR[1] # inputs = inputs.to(device) # labels = labels.to(device) # T_input, T_labels = EO[0], EO[1] # T_input = T_input.to(device) # T_labels = T_labels.to(device) # optimizer.zero_grad() # # forward # # track history if only in train # with torch.no_grad(): # soft_target = netT(T_input) # outputs = model(inputs) # _, # #print(outputs.dim()) # _, preds = torch.max(outputs, 1) # # _, T_preds = torch.max(soft_target, 1) # T = 2 # outputs_S = F.log_softmax(outputs/T, dim=1) # outputs_T = F.softmax(soft_target/T, dim=1) # # print(outputs_S.size()) # 
# print(outputs_T.size()) # loss2 = criterion2(outputs_S, outputs_T) * T * T # loss = criterion(outputs, labels) # loss = loss*(1-kd_alpha) + loss2*kd_alpha # ################################ # running_loss += loss.item() * inputs.size(0) # running_corrects += torch.sum(preds == labels.data) # epoch_loss = running_loss / dataset_sizes[phase] # epoch_acc = running_corrects.double() / dataset_sizes[phase] # print('{} Loss: {:.4f} Acc: {:.4f}'.format( # phase, epoch_loss, epoch_acc)) # if phase == "Training": # Loss_list.append(epoch_loss) # Accuracy_list.append(100 * epoch_acc) # else: # T_Loss_list.append(epoch_loss) # T_Accuracy_list.append(100 * epoch_acc) # # deep copy the model # if phase == 'Test' and epoch_acc > best_acc: # best_acc = epoch_acc # best_model_wts = copy.deepcopy(model.state_dict()) # PATH = os.path.join(str(exp_num), "resnet34_kd_best.pt")#"resnet18_model_epoch{}.pt".format(epoch) # if not os.path.exists(str(exp_num)): # os.makedirs(str(exp_num)) # torch.save(model, PATH) # time_elapsed = time.time() - since # print('Time from Start {:.0f}m {:.0f}s'.format( # time_elapsed // 60, time_elapsed % 60)) # if phase == 'Training' and epoch_acc > best_train_acc: # best_train_acc = epoch_acc # # PATH = os.path.join(str(exp_num), "resnet34_kd_best.pt")#"resnet18_model_epoch{}.pt".format(epoch) # # if not os.path.exists(str(exp_num)): # # os.makedirs(str(exp_num)) # # torch.save(model, PATH) # print() # PATH = os.path.join(str(exp_num), "resnet34_kd{}.pt".format(epoch))#"resnet18_model_epoch{}.pt".format(epoch) # if not os.path.exists(str(exp_num)): # os.makedirs(str(exp_num)) # torch.save(model, PATH) # time_elapsed = time.time() - since # print('Training complete in {:.0f}m {:.0f}s'.format( # time_elapsed // 60, time_elapsed % 60)) # print('Best train Acc: {:4f}'.format(best_train_acc)) # print('Best val Acc: {:4f}'.format(best_acc)) # # load best model weights # model.load_state_dict(best_model_wts) # ##### PLOT # x1 = range(0, num_epochs) # x2 = 
range(0, num_epochs) # y1 = Accuracy_list # y2 = Loss_list # plt.subplot(2, 1, 1) # plt.plot(x1, y1, 'o-') # plt.title('Train accuracy vs. epoches') # plt.ylabel('Train accuracy') # plt.subplot(2, 1, 2) # plt.plot(x2, y2, '.-') # plt.xlabel('Train loss vs. epoches') # plt.ylabel('Train loss') # plt.show() # plt.savefig("Train_accuracy_loss.jpg") # x1 = range(0, num_epochs) # x2 = range(0, num_epochs) # y1 = T_Accuracy_list # y2 = T_Loss_list # plt.subplot(2, 1, 1) # plt.plot(x1, y1, 'o-') # plt.title('Test accuracy vs. epoches') # plt.ylabel('Test accuracy') # plt.subplot(2, 1, 2) # plt.plot(x2, y2, '.-') # plt.xlabel('Test loss vs. epoches') # plt.ylabel('Test loss') # plt.show() # plt.savefig("Test_accuracy_loss.jpg") # return model # model_ft = model_ft.to(device) # # #criterion = nn.CrossEntropyLoss() # criterion = nn.CrossEntropyLoss(weight=weights) #weight=weights, # os.environ["CUDA_LAUNCH_BLOCKING"] = "1" optimizer_ft = optim.SGD(model_ft.parameters(), lr=0.01, momentum=0.9) # Decay LR by a factor of 0.1 every 7 epochs exp_lr_scheduler = lr_scheduler.StepLR(optimizer_ft, step_size=20, gamma=0.5) model_ft = train_model(model_ft, criterion, optimizer_ft, exp_lr_scheduler, num_epochs=120) !mkdir test !unzip NTIRE2021_Class_test_images_EO.zip -d ./test model = torch.load("45_kd_sar-teacher_eo-student_pretrain-on-sar/resnet34_kd114.pt") import pandas as pd from torch.utils import data from tqdm import tqdm from PIL import Image class ImageData(data.Dataset): def __init__(self,df,dirpath,transform,test = False): self.df = df self.test = test self.dirpath = dirpath self.conv_to_tensor = transform #image data if not self.test: self.image_arr = np.asarray(str(self.dirpath)+'/'+self.df.iloc[:, 0]+'.png') else: self.image_arr = np.asarray(str(self.dirpath)+'/'+self.df.iloc[:, 0]) #labels data if not self.test: self.label_df = self.df.iloc[:,1] # Calculate length of df self.data_len = len(self.df.index) def __len__(self): return self.data_len def __getitem__(self, 
idx): image_name = self.image_arr[idx] img = Image.open(image_name)#.convert('RGB') img_tensor = self.conv_to_tensor(img) if not self.test: image_labels = self.label_df[idx] label_tensor = torch.zeros((1, output_dim)) for label in image_labels.split(): label_tensor[0, int(label)] = 1 image_label = torch.tensor(label_tensor,dtype= torch.float32) return (img_tensor,image_label.squeeze()) return (img_tensor) BATCH_SIZE = 1 test_dir = "./test" test_dir_ls = os.listdir(test_dir) test_dir_ls.sort() test_df = pd.DataFrame(test_dir_ls) test_dataset = ImageData(test_df,test_dir,EO_data_transforms["valid_EO"],test = True) test_loader = data.DataLoader(dataset=test_dataset,batch_size=BATCH_SIZE,shuffle=False) output_dim = 10 DISABLE_TQDM = False predictions = np.zeros((len(test_dataset), output_dim)) i = 0 for test_batch in tqdm(test_loader,disable = DISABLE_TQDM): test_batch = test_batch.to(device) batch_prediction = model(test_batch).detach().cpu().numpy() predictions[i * BATCH_SIZE:(i+1) * BATCH_SIZE, :] = batch_prediction i+=1 predictions[170] ``` ### submission balance for class 0 ``` m = nn.Softmax(dim=1) predictions_tensor = torch.from_numpy(predictions) output_softmax = m(predictions_tensor) # output_softmax = output_softmax/output_softmax.sum() pred = np.argmax(predictions,axis = 1) plot_ls = [] idx = 0 for each_pred in pred: if each_pred == 0: plot_ls.append(output_softmax[idx][0].item()) idx+=1 # plot_ls # idx = 0 # # print(output_softmax) # for i in pred: # # print(predictions_tensor[idx]) # each_output_softmax = output_softmax[idx]/output_softmax[idx].sum() # print(each_output_softmax) # if i == 0: # new_list = set(predictions[idx]) # new_list.remove(max(new_list)) # index = predictions[idx].tolist().index(max(new_list)) # # index = predictions[idx].index() # # print(index) # idx+=1 import matplotlib.pyplot as plt plt.hist(plot_ls, bins=80, histtype="stepfilled", alpha=.8) plot_ls.sort() val = plot_ls[-85] print(val) plt.vlines(val, ymin = 0, ymax = 22, colors = 
'r') # print(output_softmax) idx = 0 counter = 0 for i in pred: # print(predictions_tensor[idx]) # each_output_softmax = output_softmax[idx]/output_softmax[idx].sum() # print(each_output_softmax) if i == 0 and output_softmax[idx][0] < val: new_list = set(predictions[idx]) new_list.remove(max(new_list)) index = predictions[idx].tolist().index(max(new_list)) # index = predictions[idx].index() # print(index) pred[idx] = index output_softmax[idx][0] = -100.0 counter += 1 idx+=1 print(counter) ``` ### submission balance for class 1 ``` plot_ls = [] idx = 0 for each_pred in pred: if each_pred == 1: plot_ls.append(output_softmax[idx][1].item()) idx+=1 # plot_ls # idx = 0 # # print(output_softmax) # for i in pred: # # print(predictions_tensor[idx]) # each_output_softmax = output_softmax[idx]/output_softmax[idx].sum() # print(each_output_softmax) # if i == 0: # new_list = set(predictions[idx]) # new_list.remove(max(new_list)) # index = predictions[idx].tolist().index(max(new_list)) # # index = predictions[idx].index() # # print(index) # idx+=1 import matplotlib.pyplot as plt plt.hist(plot_ls, bins=80, histtype="stepfilled", alpha=.8) plot_ls.sort() val = plot_ls[-85] print(val) plt.vlines(val, ymin = 0, ymax = 22, colors = 'r') # print(output_softmax) idx = 0 counter = 0 for i in pred: # print(predictions_tensor[idx]) # each_output_softmax = output_softmax[idx]/output_softmax[idx].sum() # print(each_output_softmax) if i == 1 and output_softmax[idx][1] < val: new_list = set(output_softmax[idx]) new_list.remove(max(new_list)) index = output_softmax[idx].tolist().index(max(new_list)) # index = predictions[idx].index() # print(index) pred[idx] = index output_softmax[idx][1] = -100.0 counter += 1 idx+=1 print(counter) ``` ### submission balance for class 2 ``` plot_ls = [] idx = 0 for each_pred in pred: if each_pred == 2: plot_ls.append(output_softmax[idx][2].item()) idx+=1 # plot_ls # idx = 0 # # print(output_softmax) # for i in pred: # # print(predictions_tensor[idx]) # 
each_output_softmax = output_softmax[idx]/output_softmax[idx].sum() # print(each_output_softmax) # if i == 0: # new_list = set(predictions[idx]) # new_list.remove(max(new_list)) # index = predictions[idx].tolist().index(max(new_list)) # # index = predictions[idx].index() # # print(index) # idx+=1 import matplotlib.pyplot as plt plt.hist(plot_ls, bins=80, histtype="stepfilled", alpha=.8) plot_ls.sort() val = plot_ls[-85] print(val) plt.vlines(val, ymin = 0, ymax = 22, colors = 'r') # print(output_softmax) idx = 0 counter = 0 for i in pred: # print(predictions_tensor[idx]) # each_output_softmax = output_softmax[idx]/output_softmax[idx].sum() # print(each_output_softmax) if i == 2 and output_softmax[idx][2] < val: new_list = set(output_softmax[idx]) new_list.remove(max(new_list)) index = output_softmax[idx].tolist().index(max(new_list)) # index = predictions[idx].index() # print(index) pred[idx] = index output_softmax[idx][2] = -100.0 counter += 1 idx+=1 print(counter) ``` ### submission balance for class 3 ``` plot_ls = [] idx = 0 for each_pred in pred: if each_pred == 3: plot_ls.append(output_softmax[idx][3].item()) idx+=1 # plot_ls # idx = 0 # # print(output_softmax) # for i in pred: # # print(predictions_tensor[idx]) # each_output_softmax = output_softmax[idx]/output_softmax[idx].sum() # print(each_output_softmax) # if i == 0: # new_list = set(predictions[idx]) # new_list.remove(max(new_list)) # index = predictions[idx].tolist().index(max(new_list)) # # index = predictions[idx].index() # # print(index) # idx+=1 import matplotlib.pyplot as plt plt.hist(plot_ls, bins=80, histtype="stepfilled", alpha=.8) plot_ls.sort() val = plot_ls[-85] print(val) plt.vlines(val, ymin = 0, ymax = 22, colors = 'r') # print(output_softmax) idx = 0 counter = 0 for i in pred: # print(predictions_tensor[idx]) # each_output_softmax = output_softmax[idx]/output_softmax[idx].sum() # print(each_output_softmax) if i == 3 and output_softmax[idx][3] < val: new_list = set(output_softmax[idx]) 
new_list.remove(max(new_list))
        index = output_softmax[idx].tolist().index(max(new_list))
        # index = predictions[idx].index()
        # print(index)
        pred[idx] = index
        output_softmax[idx][3] = -100.0
        counter += 1
    idx += 1
print(counter)
```

### submission balance for class 4

```
# Rebalancing was not applied to class 4; if needed, repeat the
# thresholding procedure from classes 0-3 with the class index set to 4.
```

### submission balance for class 5

```
# Rebalancing was not applied to class 5 (same procedure, class index 5).
```

### submission balance for class 6 (not applied)

```
# Rebalancing was not applied to class 6 (same procedure, class index 6).

len(plot_ls)
```

### submission balance for class 7

```
# Rebalancing was not applied to class 7 (same procedure, class index 7).
```

### submission balance for class 8

### submission balance for class 9

```
# Rebalancing was not applied to classes 8 and 9.

pred

# pred = np.argmax(predictions, axis=1)
pred_list = []
for i in range(len(pred)):
    result = [pred[i]]
    pred_list.append(result)
pred_list

predicted_class_idx = pred_list
test_df['class_id'] = predicted_class_idx
test_df['class_id'] = test_df['class_id'].apply(lambda x: ' '.join(map(str, list(x))))
test_df = test_df.rename(columns={0: 'image_id'})
test_df['image_id'] = test_df['image_id'].apply(lambda x: x.split('.')[0])
test_df

# Assigning to `row` inside iterrows() never writes back to the DataFrame,
# so apply the split to the column instead:
test_df['image_id'] = test_df['image_id'].apply(lambda x: x.split('_')[1])

for k in range(10):
    i = 0
    for (idx, row) in test_df.iterrows():
        if row.class_id == str(k):
            i += 1
    print(i)

test_df

test_df.to_csv('results.csv', index=False)
```
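The per-class "submission balance" cells above repeat the same confidence-thresholding logic with only the class index changed. The whole procedure can be collapsed into one function; this is a sketch with a hypothetical `rebalance_class` helper (NumPy only), where `keep` plays the role of the hard-coded `plot_ls[-85]` cutoff:

```python
import numpy as np

def rebalance_class(pred, probs, cls, keep):
    """Reassign the lowest-confidence predictions of `cls` to the runner-up class.

    pred  -- (N,) predicted class indices (modified in place)
    probs -- (N, C) class scores (modified in place)
    cls   -- over-predicted class to thin out
    keep  -- number of highest-confidence `cls` predictions to retain
    Returns the number of reassigned samples.
    """
    idx = np.where(pred == cls)[0]
    if len(idx) <= keep:
        return 0                      # nothing to reassign
    conf = probs[idx, cls]
    threshold = np.sort(conf)[-keep]  # confidence of the keep-th strongest prediction
    moved = 0
    for i in idx[conf < threshold]:
        probs[i, cls] = -np.inf       # rule this class out for sample i
        pred[i] = int(np.argmax(probs[i]))  # fall back to the runner-up class
        moved += 1
    return moved
```

With such a helper, each per-class cell reduces to something like `rebalance_class(pred, output_softmax.numpy(), k, keep=85)` for every over-predicted class `k`.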
<font size="+5">#04. Why Do Neural Networks Deeply Learn a Mathematical Formula?</font>

- Book + Private Lessons [Here ↗](https://sotastica.com/reservar)
- Subscribe to my [Blog ↗](https://blog.pythonassembly.com/)
- Let's keep in touch on [LinkedIn ↗](https://www.linkedin.com/in/jsulopz) 😄

# Machine Learning, what does it mean?

> - The Machine Learns...
>
> But, **what does it learn?**

```
%%HTML
<blockquote class="twitter-tweet" data-lang="en"><p lang="en" dir="ltr">Machine Learning, what does it mean? ⏯<br><br>· The machine learns...<br><br>Ha ha, not funny! 🤨 What does it learn?<br><br>· A mathematical equation. For example: <a href="https://t.co/sjtq9F2pq7">pic.twitter.com/sjtq9F2pq7</a></p>&mdash; Jesús López (@sotastica) <a href="https://twitter.com/sotastica/status/1449735653328031745?ref_src=twsrc%5Etfw">October 17, 2021</a></blockquote> <script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>
```

# How does the Machine Learn?

## In a Linear Regression

```
%%HTML
<iframe width="560" height="315" src="https://www.youtube.com/embed/Ht3rYS-JilE" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>
```

## In a Neural Network

```
%%HTML
<iframe width="560" height="315" src="https://www.youtube.com/embed/IHZwWFHWa-w?start=329" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>
```

A Practical Example → [Tesla Autopilot](https://www.tesla.com/AI)

An Example where It Fails → [Tesla Confuses the Moon with a Traffic Light](https://twitter.com/Carnage4Life/status/1418920100086784000?s=20)

# Load the Data

> - Simply execute the following lines of code to load the data.
> - This dataset contains **statistics about Car Accidents** (columns)
> - In each one of **USA States** (rows)

https://www.kaggle.com/fivethirtyeight/fivethirtyeight-bad-drivers-dataset/

```
import seaborn as sns

df = sns.load_dataset(name='car_crashes', index_col='abbrev')
df.sample(5)
```

# Neural Network Concepts in Python

## Initializing the `Weights`

> - https://keras.io/api/layers/initializers/

### How to `kernel_initializer` the weights?

$$ accidents = speeding \cdot w_1 + alcohol \cdot w_2 + \dots + ins\_losses \cdot w_6 $$

```
from tensorflow.keras import Sequential, Input
from tensorflow.keras.layers import Dense

df.shape

model = Sequential()
model.add(layer=Input(shape=(6,)))
model.add(layer=Dense(units=3, kernel_initializer='zeros'))
model.add(layer=Dense(units=1))
```

#### Make a Prediction with the Neural Network

> - Can we make a prediction for `Alabama` (the first row) accidents
> - With the already initialized Mathematical Equation?

```
X = df.drop(columns='total')
y = df.total

AL = X[:1]
AL
```

#### Observe the numbers for the `weights`

```
model.get_weights()
```

#### Predictions vs Reality

> 1. Calculate the Predicted Accidents and
> 2. Compare it with the Real Total Accidents

```
model.predict(x=AL)
```

#### `fit()` the `model` and compare again

```
model.compile(loss='mse', metrics=['mse'])
model.fit(X, y, epochs=500, verbose=1)
```

##### Observe the numbers for the `weights`

```
model.get_weights()
```

##### Predictions vs Reality

> 1. Calculate the Predicted Accidents and
> 2. Compare it with the Real Total Accidents

```
y_pred = model.predict(X)

dfsel = df[['total']].copy()
dfsel['pred_zeros_after_fit'] = y_pred
dfsel.head()

mse = ((dfsel.total - dfsel.pred_zeros_after_fit)**2).mean()
mse
```

### How to `kernel_initializer` the weights to 1?

```
dfsel['pred_ones_after_fit'] = y_pred
dfsel.head()

mse = ((dfsel.total - dfsel.pred_ones_after_fit)**2).mean()
mse
```

### How to `kernel_initializer` the weights to `glorot_uniform` (default)?
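For reference, `glorot_uniform` is what Keras uses when no `kernel_initializer` is given: each weight is drawn from a uniform distribution whose bounds depend on the layer's fan-in and fan-out. A NumPy sketch of the sampling rule (illustrative only — Keras does this internally):

```python
import numpy as np

def glorot_uniform(fan_in, fan_out, seed=None):
    """Sample a weight matrix the way Keras's default initializer does."""
    # limit = sqrt(6 / (fan_in + fan_out)) keeps the variance of the
    # activations roughly constant from layer to layer (Glorot & Bengio, 2010)
    limit = np.sqrt(6.0 / (fan_in + fan_out))
    rng = np.random.default_rng(seed)
    return rng.uniform(-limit, limit, size=(fan_in, fan_out))

W = glorot_uniform(6, 3)  # first Dense layer: 6 features -> 3 units
print(W.shape)
```

Unlike `zeros`, a random symmetric draw breaks the symmetry between units, so each neuron can learn a different weight vector during `fit()`.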
## Play with the Activation Function > - https://keras.io/api/layers/activations/ ``` %%HTML <iframe width="560" height="315" src="https://www.youtube.com/embed/IHZwWFHWa-w?start=558" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe> ``` ### Use `sigmoid` activation in last layer ``` model = Sequential() model.add(layer=Input(shape=(6,))) model.add(layer=Dense(units=3, kernel_initializer='glorot_uniform')) model.add(layer=Dense(units=1, activation='sigmoid')) model.compile(loss='mse', metrics=['mse']) ``` #### `fit()` the Model ``` model.fit(X, y, epochs=500, verbose=0) ``` #### Predictions vs Reality > 1. Calculate the Predicted Accidents and > 2. Compare it with the Real Total Accidents ``` y_pred = model.predict(X) dfsel['pred_sigmoid'] = y_pred dfsel.head() mse = ((dfsel.total - dfsel.pred_sigmoid)**2).mean() mse ``` #### Observe the numbers for the `weights` > - Have they changed? ``` model.get_weights() ``` ### Use `linear` activation in last layer ### Use `tanh` activation in last layer ### Use `relu` activation in last layer ### How are the predictions changing? Why? ## Optimizer > - https://keras.io/api/optimizers/#available-optimizers Optimizers comparison in GIF → https://mlfromscratch.com/optimizers-explained/#adam Tesla's Neural Network Models is composed of 48 models trainned in 70.000 hours of GPU → https://tesla.com/ai 1 Year with a 8 GPU Computer → https://twitter.com/thirdrowtesla/status/1252723358342377472 ### Use Gradient Descent `SGD` ``` model = Sequential() model.add(layer=Input(shape=(6,))) model.add(layer=Dense(units=3, kernel_initializer='glorot_uniform')) model.add(layer=Dense(units=1, activation='sigmoid')) ``` #### `compile()` the model ``` model.compile(optimizer='sgd', loss='mse', metrics=['mse']) ``` #### `fit()` the Model ``` history = model.fit(X, y, epochs=500, verbose=0) ``` #### Predictions vs Reality > 1. 
Calculate the Predicted Accidents and
> 2. Compare it with the Real Total Accidents

```
y_pred = model.predict(X)

dfsel['pred_sgd'] = y_pred
dfsel.head()

mse = ((dfsel.total - dfsel.pred_sgd)**2).mean()
mse
```

#### Observe the numbers for the `weights`

> - Have they changed?

```
model.get_weights()
```

#### View History

```
import matplotlib.pyplot as plt

# the fit above did not use validation data, so only the training loss is available
plt.plot(history.history['loss'])
plt.title('model loss')
plt.ylabel('loss')
plt.xlabel('epoch')
plt.legend(['train'], loc='upper left')
plt.show()
```

### Use `ADAM`

### Use `RMSPROP`

### Does it take different times to get the best accuracy? Why?

## Loss Functions

> - https://keras.io/api/losses/

### `binary_crossentropy`

### `sparse_categorical_crossentropy`

### `mean_absolute_error`

### `mean_squared_error`

## In the end, what should be a feasible configuration of the Neural Network for this data?

# Common Errors

## The `kernel_initializer` Matters

## The `activation` Function Matters

## The `optimizer` Matters

## The Number of `epochs` Matters

## The `loss` Function Matters

# Neural Network's importance to find **Non-Linear Patterns** in the Data

> - The number of Neurons & Hidden Layers

https://towardsdatascience.com/beginners-ask-how-many-hidden-layers-neurons-to-use-in-artificial-neural-networks-51466afa0d3e

https://playground.tensorflow.org/#activation=tanh&batchSize=10&dataset=circle&regDataset=reg-plane&learningRate=0.03&regularizationRate=0&noise=0&networkShape=4,2&seed=0.87287&showTestData=false&discretize=false&percTrainData=50&x=true&y=true&xTimesY=false&xSquared=false&ySquared=false&cosX=false&sinX=false&cosY=false&sinY=false&collectStats=false&problem=classification&initZero=false&hideText=false

## Summary

- Mathematical Formula
- Weights / Kernel Initializer
- Loss Function
- Activation Function
- Optimizers

## What can you not change arbitrarily in a Neural Network?
- Input Neurons
- Output Neurons
- Loss Functions
- Activation Functions
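As a tentative answer to the "feasible configuration" question above: for a regression target such as total accidents, a `linear` output activation (the target is unbounded), `mse` loss, and an adaptive optimizer like `adam` are a reasonable default. A sketch on synthetic stand-in data (the real notebook would use `X` and `y` from the crash dataframe):

```python
import numpy as np
from tensorflow.keras import Sequential, Input
from tensorflow.keras.layers import Dense

rng = np.random.default_rng(0)
X_demo = rng.normal(size=(51, 6))      # stand-in for 51 states x 6 features
y_demo = X_demo @ rng.normal(size=6)   # stand-in for the regression target

model = Sequential()
model.add(layer=Input(shape=(6,)))
model.add(layer=Dense(units=3, activation='relu'))
model.add(layer=Dense(units=1, activation='linear'))  # unbounded output for regression
model.compile(optimizer='adam', loss='mse')

history = model.fit(X_demo, y_demo, epochs=200, verbose=0)
print(history.history['loss'][0], '->', history.history['loss'][-1])
```

Compare this with the `sigmoid` output experiment above: a sigmoid caps predictions at 1, so it can never reach realistic accident totals no matter how long it trains.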
```
import pandas as pd
import tensorflow as tf
from tensorflow.keras.preprocessing.image import ImageDataGenerator
import glob
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report
from sklearn.metrics import accuracy_score
from sklearn.utils import class_weight
from imblearn.under_sampling import RandomUnderSampler

tf.__version__

from tensorflow.compat.v1 import ConfigProto
from tensorflow.compat.v1 import InteractiveSession

config = ConfigProto()
config.gpu_options.allow_growth = True
session = InteractiveSession(config=config)

df = pd.read_csv('E:\chord-detection-challenge\DataBase\CSV/status_A1.csv')
df.sort_values(by='title', inplace=True)
images_path = sorted(glob.glob('E:\chord-detection-challenge\DataBase\clean_windows/Train/*'))
df['Unnamed: 0'] = np.array(images_path)
df.reset_index(inplace=True, drop=True)
df['status'] = df['status'].astype(str)
```

### Splitting train and test sets with a 70-30% split

```
### split 70-30%, keeping the sample frequencies for train, validation and test according to the title and status (network label) columns
X_train, X_test, y_train, y_test = train_test_split(df[['Unnamed: 0', 'title']], df['status'], test_size=0.30, random_state=42, stratify=df[['status', 'title']])
df_train = pd.concat([X_train, y_train], axis=1)
X_train, X_val, y_train, y_val = train_test_split(df_train[['Unnamed: 0', 'title']], df_train['status'], test_size=0.30, random_state=42, stratify=df_train[['status', 'title']])

### concatenate input attributes and label into a single dataframe in order to use tensorflow's flow_from_dataframe
df_test = pd.concat([X_test, y_test], axis=1)
df_train = pd.concat([X_train, y_train], axis=1)
df_val = pd.concat([X_val, y_val], axis=1)

print('Total training images', len(df_train))
print('Total validation images', len(df_val))
print('Total test images', len(df_test))

undersample_train = RandomUnderSampler(sampling_strategy='majority')
undersample_validation = RandomUnderSampler(sampling_strategy='majority')

X_undertrain, y_undertrain = undersample_train.fit_resample(df_train[['Unnamed: 0', 'title']], df_train['status'])
X_undervalidation, y_undervalidation = undersample_validation.fit_resample(df_val[['Unnamed: 0', 'title']], df_val['status'])

df_train = pd.concat([X_undertrain, y_undertrain], axis=1)
df_val = pd.concat([X_undervalidation, y_undervalidation], axis=1)
```

### Splitting train and test sets with different songs

```
songs, n = df['title'].unique(), 5
index = np.random.choice(len(songs), n, replace=False)
selected_songs = songs[index]  ## select n of the available songs for testing

df_test = df[df['title'].isin(selected_songs)]  ## the test set contains all the spectrograms of the n songs selected above
df_train = df[~(df['title'].isin(selected_songs))]  ## the training set contains the spectrograms of every song EXCEPT those selected above for testing

X_train, X_val, y_train, y_val = train_test_split(df_train[['Unnamed: 0', 'title']], df_train['status'], test_size=0.30, random_state=42, stratify=df_train[['status', 'title']])  ## split off 30% for validation, balanced according to title and status

### concatenate input attributes and label into a single dataframe in order to use tensorflow's flow_from_dataframe
df_train = pd.concat([X_train, y_train], axis=1)
df_val = pd.concat([X_val, y_val], axis=1)

print('Total training images', len(df_train))
print('Total validation images', len(df_val))
print('Total test images', len(df_test))

datagen = ImageDataGenerator(rescale=1./255)

train_generator = datagen.flow_from_dataframe(dataframe=df_train, directory='E:\chord-detection-challenge\DataBase\clean_windows/Train/', x_col='Unnamed: 0', y_col="status", class_mode="binary", target_size=(224,224), batch_size=32)
valid_generator = datagen.flow_from_dataframe(dataframe=df_val, directory='E:\chord-detection-challenge\DataBase\clean_windows/Train/', x_col='Unnamed: 0', y_col="status", class_mode="binary", target_size=(224,224), batch_size=32)
test_generator = datagen.flow_from_dataframe(dataframe=df_test, directory='E:\chord-detection-challenge\DataBase\clean_windows/Train/', x_col='Unnamed: 0', y_col="status", class_mode="binary", target_size=(224,224), batch_size=1, shuffle=False)

#from tensorflow.keras.models import Model
# note: despite the `restnet` name, this backbone is VGG16
restnet = tf.keras.applications.VGG16(
    include_top=False,  # do not reuse the classification head
    weights=None,       # do not use the ImageNet weights
    input_shape=(224,224,3)
)
output = restnet.layers[-1].output
output = tf.keras.layers.Flatten()(output)
restnet = tf.keras.Model(inputs=restnet.input, outputs=output)
for layer in restnet.layers:  # train everything from scratch
    layer.trainable = True
restnet.summary()

mc = tf.keras.callbacks.ModelCheckpoint('resnet_model.h5', monitor='val_binary_accuracy', mode='max', save_best_only=True)

model = tf.keras.models.Sequential()
model.add(restnet)
model.add(tf.keras.layers.Dense(128, activation='relu'))
model.add(tf.keras.layers.Dropout(0.5))
model.add(tf.keras.layers.Dense(1, activation='sigmoid'))

# tf.keras.layers.Conv2D(32, (3, 3), padding='same',
#                        input_shape=(32,32,3)),
# tf.keras.layers.MaxPool2D(),
# tf.keras.layers.Conv2D(64, (3, 3)),
# tf.keras.layers.Conv2D(128, (3, 3)),
# tf.keras.layers.Flatten(),
# tf.keras.layers.Dense(128,activation='relu'),
# tf.keras.layers.Dense(2)
#)

model.compile(
    optimizer=tf.keras.optimizers.Adamax(),
    loss=tf.keras.losses.BinaryCrossentropy(),
    metrics=[tf.keras.metrics.BinaryAccuracy()]
    #weighted_metrics=[tf.keras.metrics.BinaryAccuracy()]
)

class_weights = class_weight.compute_class_weight(class_weight='balanced', classes=np.unique(df_train['status']), y=df_train['status'])
class_weights = dict(enumerate(class_weights))

STEP_SIZE_TRAIN = train_generator.n//train_generator.batch_size
STEP_SIZE_VALID = valid_generator.n//valid_generator.batch_size

model.fit(train_generator,
          steps_per_epoch=STEP_SIZE_TRAIN,
          validation_data=valid_generator,
          validation_steps=STEP_SIZE_VALID,
          #class_weight=class_weights,
          epochs=10,
          callbacks=[mc])

STEP_SIZE_TEST = test_generator.n//test_generator.batch_size
print('---------------Test-------------')
test_generator.reset()
predictions = model.predict(test_generator, steps=STEP_SIZE_TEST, verbose=1)
predictions

y_pred = predictions > 0.5
predicted_class_indices = np.argmax(predictions, axis=1)
predicted_class_indices

print(accuracy_score(test_generator.labels, y_pred))
print(classification_report(test_generator.labels, y_pred))
```
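The by-song split above can also be expressed with scikit-learn's `GroupShuffleSplit`, which guarantees that no song's spectrograms appear on both sides of the split without manual `isin` filtering. A sketch on a toy dataframe standing in for the real one (same column names):

```python
import pandas as pd
from sklearn.model_selection import GroupShuffleSplit

# toy stand-in: 3 samples each from 4 songs
df_demo = pd.DataFrame({
    'title':  ['a', 'a', 'a', 'b', 'b', 'b', 'c', 'c', 'c', 'd', 'd', 'd'],
    'status': ['0', '1', '0', '1', '0', '1', '0', '1', '0', '1', '0', '1'],
})

# groups=title keeps every row of a song on the same side of the split
gss = GroupShuffleSplit(n_splits=1, test_size=0.25, random_state=42)
train_idx, test_idx = next(gss.split(df_demo, groups=df_demo['title']))
df_train_demo, df_test_demo = df_demo.iloc[train_idx], df_demo.iloc[test_idx]

# no song appears in both train and test
overlap = set(df_train_demo['title']) & set(df_test_demo['title'])
print(overlap)
```

The manual `np.random.choice` approach in the notebook is equivalent; `GroupShuffleSplit` just makes the grouping constraint explicit and reproducible via `random_state`.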
# Example notebook

This example will contain the following examples

- Creating and saving a graph
- Plotting the graph
- Executing a node
- Loading a graph from disk

```
%matplotlib inline
import matplotlib.pyplot as plt
import networkx as nx
from importlib import reload
import os
import autodepgraph as adg
from autodepgraph import AutoDepGraph_DAG
```

## Creating a custom graph

A graph can be instantiated and nodes can be added to the graph as with any `networkx` graph object. It is important to specify a `calibrate_function` for each node.

```
cal_True_delayed = 'autodepgraph.node_functions.calibration_functions.test_calibration_True_delayed'

test_graph = AutoDepGraph_DAG('test graph')
for node in ['A', 'B', 'C', 'D', 'E']:
    test_graph.add_node(node, calibrate_function=cal_True_delayed)

test_graph.add_node?
```

Some nodes require other nodes to be in a `good` or calibrated state. Such dependencies are defined by setting edges in the graph.

```
test_graph.add_edge('C', 'A')
test_graph.add_edge('C', 'B')
test_graph.add_edge('B', 'A')
test_graph.add_edge('D', 'A')
test_graph.add_edge('E', 'D')
```

## Visualizing the graph

We support two ways of visualizing graphs:

- matplotlib in the notebook
- an svg in an html page that updates in real-time

### Realtime svg/html visualization

```
# The default plotting mode is SVG
test_graph.cfg_plot_mode = 'svg'

# Updates the monitor, in this case the svg/html page
test_graph.update_monitor()

# Updating the monitor overwrites an svg file whose location is determined by the attribute:
test_graph.cfg_svg_filename

from IPython.display import display, SVG
display(SVG(test_graph.cfg_svg_filename))

# The html page is located at the location specified by the url.
# The page is generated from a template when the open_html_viewer command is called.
url = test_graph.open_html_viewer()
print(url)
```

### Matplotlib drawing of the graph

```
# Alternatively a render in matplotlib can be drawn
test_graph.draw_mpl()
```

# Maintaining the graph

```
test_graph.set_all_node_states('needs calibration')
test_graph.maintain_B()
display(SVG(test_graph.cfg_svg_filename))

# Update the plotting monitor (default matplotlib) to show your graph
test_graph.update_monitor()

test_graph.set_all_node_states('needs calibration')
test_graph.maintain_node('E')
display(SVG(test_graph.cfg_svg_filename))
```

### Three qubit example

This example shows a more realistic graph. The examples below show ways of exploring the graph.

```
test_dir = os.path.join(adg.__path__[0], 'tests', 'test_data')
fn = os.path.join(test_dir, 'three_qubit_graph.yaml')
DAG = nx.readwrite.read_yaml(fn)

DAG.cfg_plot_mode = 'svg'
DAG.update_monitor()

# This graph is so big, the html visualization is more suitable.
display(SVG(DAG.cfg_svg_filename))

url = DAG.open_html_viewer()
url
```

### Reset the state of all nodes

```
DAG.nodes['CZ q0-q1']
DAG.set_all_node_states('needs calibration')
# DAG.set_all_node_states('unknown')
DAG.update_monitor()

DAG._construct_maintenance_methods(DAG.nodes.keys())
DAG.maintain_CZ_q0_q1()
```
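Since an `AutoDepGraph_DAG` is a `networkx` directed graph, the order in which maintenance must visit nodes can be sketched with plain `networkx`. Here is a `DiGraph` with the same edges as `test_graph` (an edge X → Y meaning "X requires Y"):

```python
import networkx as nx

# same dependency edges as test_graph above
g = nx.DiGraph()
g.add_edges_from([('C', 'A'), ('C', 'B'), ('B', 'A'), ('D', 'A'), ('E', 'D')])

# reversing the graph turns "requires" into "must be calibrated before",
# so a topological sort of the reversed graph gives a valid calibration order
order = list(nx.topological_sort(g.reverse()))
print(order)
```

In this graph `A` has no prerequisites, so any valid order starts with `A`, and `D` always precedes `E`; this is the ordering that maintaining node `E` implicitly walks.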
# Dimensionality Reduction

Reducing the number of dimensions, which means that the number of new features is lower than the number of original features.

First, we need to import numpy, matplotlib, and scikit-learn and get the UCI ML digit image data. Scikit-learn already comes with this data (or will automatically download it for you) so we don't have to deal with uncompressing it ourselves! Additionally, I've provided a function that will produce a nice visualization of our data.

We are going to use the following libraries and packages:

* **numpy**: "NumPy is the fundamental package for scientific computing with Python." (http://www.numpy.org/)
* **matplotlib**: "Matplotlib is a Python 2D plotting library which produces publication quality figures in a variety of hardcopy formats and interactive environments across platforms." (https://matplotlib.org/)
* **sklearn**: Scikit-learn is a machine learning library for the Python programming language. (https://scikit-learn.org/stable/)
* **pandas**: "Pandas provides easy-to-use data structures and data analysis tools for Python." (https://pandas.pydata.org/)

```
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.cm as cm
from matplotlib import offsetbox
import pandas as pd
```

# t-distributed Stochastic Neighbor Embedding (t-SNE)

t-SNE is an algorithm to optimally map a higher dimensional space to lower dimensions while paying attention to short distances. The transformation is different for different regions. SNE is the general concept behind this type of mapping, and the "t" indicates the use of the t-distribution in t-SNE.

## Synthetic data

Let's generate synthetic data as follows:

1) Points are scattered in 2 dimensional space as follows.
In the remaining N-2 dimensions, all the points share the same value in each dimension.

2) We will reduce the dimensionality of the data to 2D

```
group_1_X = np.repeat(2,90)+np.random.normal(loc=0, scale=1,size=90)
group_1_Y = np.repeat(2,90)+np.random.normal(loc=0, scale=1,size=90)
group_2_X = np.repeat(10,90)+np.random.normal(loc=0, scale=1,size=90)
group_2_Y = np.repeat(10,90)+np.random.normal(loc=0, scale=1,size=90)

plt.scatter(group_1_X,group_1_Y, c='blue')
plt.scatter(group_2_X,group_2_Y,c='green')
plt.xlabel('1st dimension')
plt.ylabel('2nd dimension')
```

### Implementing t-SNE on the synthetic data

```
####
combined = np.column_stack((np.concatenate([group_1_X,group_2_X]),np.concatenate([group_1_Y,group_2_Y])))
print(combined.shape)

####
from sklearn import manifold
combined_tSNE = manifold.TSNE(n_components=2, init='pca',perplexity=30,learning_rate=200,n_iter=500,random_state=2).fit_transform(combined)

####
import umap
combined_UMAP = umap.UMAP(n_neighbors=10, min_dist=0.3, n_components=2,random_state=2).fit_transform(combined)

fig, (ax1, ax2) = plt.subplots(1, 2, sharey=True)
ax1.scatter(combined_tSNE[0:90,0], combined_tSNE[0:90,1], c='blue')
ax1.scatter(combined_tSNE[90:180,0], combined_tSNE[90:180,1], c='green')
ax1.set_title('t-SNE')
ax2.scatter(combined_UMAP[0:90,0], combined_UMAP[0:90,1], c='blue')
ax2.scatter(combined_UMAP[90:180,0], combined_UMAP[90:180,1], c='green')
ax2.set_title('UMAP')
```

**Parameters of t-SNE:**

* ***Perplexity (perplexity)***: roughly reflects the number of close neighbors each point has. Hence, perplexity should be smaller than the number of points. There is a suggested range for perplexity in the original paper: "The performance of SNE is fairly robust to changes in the perplexity, and typical values are between 5 and 50." Although perplexity=5 is usually not optimal, values higher than 50 may also result in odd groupings of the data points and shapes in 2 dimensional space.
* ***Number of iterations (n_iter)***: the number of iterations required for convergence depends on the input dataset. There is no fixed number that guarantees convergence, but there are some rules of thumb to check it. For example, if there are pinched shapes in the t-SNE plot, it is better to run the approach for a higher number of iterations to make sure that the resulting shapes and clusters are not artifacts of an unconverged t-SNE.

**Parameters of UMAP:**

* ***Number of neighbors (n_neighbors)***: number of neighboring data points used in the process of local manifold approximation. This parameter is suggested to be between 5 and 50.
* ***Minimum distance (min_dist)***: a measure of the allowed compression of points together in low dimensional space. This parameter is suggested to be between 0.001 and 0.5.

### Let's change the structure of the synthetic data

Let's generate synthetic data as follows:

```
group_1_X = np.arange(10,100)
group_1_Y = np.arange(10,100)+np.random.normal(loc=0, scale=0.3,size=90)-np.repeat(4,90)
group_2_X = np.arange(10,100)
group_2_Y = np.arange(10,100)+np.random.normal(loc=0, scale=0.3,size=90)+np.repeat(4,90)

plt.scatter(group_1_X,group_1_Y, c='blue')
plt.scatter(group_2_X,group_2_Y,c='green')
plt.xlabel('1st dimension')
plt.ylabel('2nd dimension')
```

### Implementing t-SNE and UMAP on the synthetic data

```
####
combined = np.column_stack((np.concatenate([group_1_X,group_2_X]),np.concatenate([group_1_Y,group_2_Y])))
print(combined.shape)

####
from sklearn import manifold
combined_tSNE = manifold.TSNE(n_components=2, init='pca',perplexity=30,learning_rate=200,n_iter=500,random_state=2).fit_transform(combined)

####
import umap
combined_UMAP = umap.UMAP(n_neighbors=5, min_dist=0.01, n_components=2,random_state=2).fit_transform(combined)

fig, (ax1, ax2) = plt.subplots(1, 2, sharey=True)
ax1.scatter(combined_tSNE[0:90,0], combined_tSNE[0:90,1], c='blue')
ax1.scatter(combined_tSNE[90:180,0],
combined_tSNE[90:180,1], c='green')
ax1.set_title('t-SNE')
ax2.scatter(combined_UMAP[0:90,0], combined_UMAP[0:90,1], c='blue')
ax2.scatter(combined_UMAP[90:180,0], combined_UMAP[90:180,1], c='green')
ax2.set_title('UMAP')
```

### Another synthetic dataset

Let's generate synthetic data as follows:

```
group_1_X = np.arange(start=0,stop=1**2,step=0.001)
group_1_Y = np.sqrt(np.repeat(1**2,1000)-group_1_X**2)
group_2_X = np.arange(start=0,stop=1.5,step=0.001)
group_2_Y = np.sqrt(np.repeat(1.5**2,1500)-group_2_X**2)

plt.scatter(group_1_X,group_1_Y, c='blue', )
plt.scatter(group_2_X,group_2_Y,c='green')
plt.xlabel('1st dimension')
plt.ylabel('2nd dimension')
plt.xlim(0,2.5)
plt.ylim(0,2.5)
```

### Implementing t-SNE on the synthetic data

```
####
combined = np.column_stack((np.concatenate([group_1_X,group_2_X]),np.concatenate([group_1_Y,group_2_Y])))
print(combined.shape)

####
from sklearn import manifold
combined_tSNE = manifold.TSNE(n_components=2, init='pca',perplexity=30,learning_rate=200,n_iter=500,random_state=2).fit_transform(combined)

####
import umap
combined_UMAP = umap.UMAP(n_neighbors=10, min_dist=0.9, n_components=2,random_state=2).fit_transform(combined)

fig, (ax1, ax2) = plt.subplots(1, 2, sharey=True)
ax1.scatter(combined_tSNE[0:1000,0], combined_tSNE[0:1000,1], c='blue')
ax1.scatter(combined_tSNE[1000:2500,0], combined_tSNE[1000:2500,1], c='green')
ax1.set_title('t-SNE')
ax2.scatter(combined_UMAP[0:1000,0], combined_UMAP[0:1000,1], c='blue')
ax2.scatter(combined_UMAP[1000:2500,0], combined_UMAP[1000:2500,1], c='green')
ax2.set_title('UMAP')
```

### UCI ML digit image data

* load and return the digit data set

```
from sklearn import datasets

# Loading digit images
digits = datasets.load_digits()
X = digits.data
y = digits.target

n_samples, n_features = X.shape
print("number of samples (data points):", n_samples)
print("number of features:", n_features)
```

Pixels of the images have values between 0 and 16:

```
np.max(X)
```

Let's write a function to use it for
visualization of the results of all the dimension reduction methods.

#### Let's visualize some of the images

```
fig, ax_array = plt.subplots(1,10)
axes = ax_array.flatten()
for i, ax in enumerate(axes):
    ax.imshow(digits.images[i])
plt.setp(axes, xticks=[], yticks=[])
plt.tight_layout(h_pad=0.5, w_pad=0.01)
```

Now that we understand how t-SNE works, let's implement it on the UCI ML digit image data:

```
from sklearn import manifold
X_tsne = manifold.TSNE(n_components=2, init='pca',perplexity=30,learning_rate=200,n_iter=500,random_state=2).fit_transform(X)
```

Now, we use the plotting function to show the two t-SNE dimensions for all the data points.

```
def embedding_plot(X,labels,title):
    plt.scatter(X[:, 0], X[:, 1], c=labels, cmap='Spectral', s=5)
    plt.gca().set_facecolor((1, 1, 1))
    plt.xlabel('1st dimension', fontsize=24)
    plt.ylabel('2nd dimension', fontsize=24)
    plt.colorbar(boundaries=np.arange(11)-0.5).set_ticks(np.arange(10))
    plt.grid(False)
    plt.title(title, fontsize=24);

embedding_plot(X_tsne, y,"t-SNE")
```

**t-SNE is an unsupervised approach similar to PCA and ICA. We add color for the sample labels afterward.**

## Normalizing data before dimensionality reduction

It is usually a good idea to normalize the data so that the scale of the values for different features becomes similar.

```
from sklearn import preprocessing
X_norm = pd.DataFrame(preprocessing.scale(X))

Xnorm_tsne = manifold.TSNE(n_components=2, init='pca',perplexity=30,learning_rate=200,n_iter=500,random_state=2).fit_transform(X_norm)
embedding_plot(Xnorm_tsne, y,"t-SNE")
```

# Uniform Manifold Approximation and Projection (UMAP)

UMAP is a manifold learning method that is competitive with t-SNE in visualization quality while, unlike t-SNE, preserving the global structure of the data. UMAP has no computational restriction on the embedding dimension and, unlike t-SNE, is scalable to extremely large datasets, like GoogleNews.
UMAP builds a k-nearest-neighbor graph and uses stochastic gradient descent to minimize the difference between the distances in the high dimensional and low dimensional spaces.

**Definitions**

* An n-dimensional manifold (n-manifold) M is a topological space that is locally homeomorphic to the Euclidean space of dimension n.
* Locally homeomorphic means that every point in the space M is contained in an open set U such that there is a one-to-one onto map f:U -> M.
* A one-to-one onto map f:U -> M means that each element of M is mapped by exactly one element of U.
* A topological space is a collection of open sets (with some mathematical properties).
* A Riemannian (smooth) manifold M is a real smooth manifold with an inner product that varies smoothly from point to point in the tangent space of M.
* A Riemannian metric is the collection of all the inner products of the points in the manifold M on the tangent space of M.
* A simplicial complex K in n-dimensional real space is a collection of simplices in the space such that 1) every face of a simplex of K is in K, and 2) the intersection of any two simplices of K is a face of each of them (Munkres 1993, p. 7; http://mathworld.wolfram.com/).
* A simplex is the generalization of a tetrahedral region of space to n dimensions (http://mathworld.wolfram.com/).
```
import umap
X_umap = umap.UMAP(n_neighbors=10, min_dist=0.3, n_components=2, random_state=2).fit_transform(X)
embedding_plot(X_umap, y,"umap")
```

## Boston housing dataset

```
from sklearn import datasets

# Loading the Boston housing data
housing = datasets.load_boston()
Xhouse = pd.DataFrame(housing.data)
Xhouse.columns = housing.feature_names
yhouse = housing.target

n_samples, n_features = Xhouse.shape
print("number of samples (data points):", n_samples)
print("number of features:", n_features)
```

### Normalizing the data

```
from sklearn import preprocessing
Xhouse_norm = pd.DataFrame(preprocessing.scale(Xhouse), columns=Xhouse.columns)
```

## Implementing t-SNE on the Boston housing data

```
Xhousenorm_tSNE = manifold.TSNE(n_components=2, init='pca',perplexity=30,learning_rate=200,n_iter=500,random_state=2).fit_transform(Xhouse_norm)
Xhousenorm_tSNE.shape
```

### Visualizing the results of t-SNE implemented on the Boston housing dataset

```
import seaborn as sns
cmap = sns.cubehelix_palette(as_cmap=True)

fig, ax = plt.subplots()
points = ax.scatter(x=Xhousenorm_tSNE[:,0], y=Xhousenorm_tSNE[:,1], c=yhouse, s=10, cmap=cmap)
fig.colorbar(points)
```

## Implementing UMAP on the Boston housing data

```
Xhousenorm_umap = umap.UMAP(n_neighbors=10, min_dist=0.4, n_components=2, random_state=2).fit_transform(Xhouse_norm)

fig, ax = plt.subplots()
points = ax.scatter(x=Xhousenorm_umap[:,0], y=Xhousenorm_umap[:,1], c=yhouse, s=10, cmap=cmap)
fig.colorbar(points)
```

### Removing outliers and repeating the analysis

```
Xhouse_norm_noout = Xhouse_norm.iloc[np.where((Xhouse_norm.max(axis=1) < 3)==True)[0],:]

Xhousenorm_noout_umap = umap.UMAP(n_neighbors=5, min_dist=0.4, n_components=2, random_state=2).fit_transform(Xhouse_norm_noout)

fig, ax = plt.subplots()
points = ax.scatter(x=Xhousenorm_noout_umap[:,0], y=Xhousenorm_noout_umap[:,1], c=yhouse[np.where((Xhouse_norm.max(axis=1) < 3)==True)[0]], s=10, cmap=cmap)
fig.colorbar(points)
```
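The perplexity guidance above (typical values between 5 and 50) is easy to probe empirically: re-running t-SNE on the same data with several perplexity values and comparing the embeddings. A minimal sketch on synthetic two-cluster data (parameter values are illustrative, not tuned):

```python
import numpy as np
from sklearn.manifold import TSNE

rng = np.random.default_rng(2)
# two well-separated Gaussian clusters, 60 points each
data = np.vstack([rng.normal(0, 1, size=(60, 2)),
                  rng.normal(10, 1, size=(60, 2))])

# perplexity must stay below the number of points (120 here)
embeddings = {}
for perp in (5, 30, 50):
    embeddings[perp] = TSNE(n_components=2, perplexity=perp,
                            random_state=2).fit_transform(data)

for perp, emb in embeddings.items():
    print(perp, emb.shape)
```

Plotting each embedding side by side (e.g. with the `embedding_plot` helper above, adapted for two labels) makes the perplexity-dependent cluster shapes visible.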
## Face and Facial Keypoint detection

After you've trained a neural network to detect facial keypoints, you can then apply this network to *any* image that includes faces. The neural network expects a Tensor of a certain size as input and, so, to detect any face, you'll first have to do some pre-processing.

1. Detect all the faces in an image using a face detector (we'll be using a Haar Cascade detector in this notebook).
2. Pre-process those face images so that they are grayscale, and transformed to a Tensor of the input size that your net expects. This step will be similar to the `data_transform` you created and applied in Notebook 2, whose job was to rescale, normalize, and turn any image into a Tensor to be accepted as input to your CNN.
3. Use your trained model to detect facial keypoints on the image.

---

In the next python cell we load in required libraries for this section of the project.

```
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
%matplotlib inline
```

#### Select an image

Select an image to perform facial keypoint detection on; you can select any image of faces in the `images/` directory.

```
import cv2

# load in color image for face detection
image = cv2.imread('images/obamas.jpg')

# switch red and blue color channels
# --> by default OpenCV assumes BLUE comes first, not RED as in many images
image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)

# plot the image
fig = plt.figure(figsize=(9,9))
plt.imshow(image)
```

## Detect all faces in an image

Next, you'll use one of OpenCV's pre-trained Haar Cascade classifiers, all of which can be found in the `detector_architectures/` directory, to find any faces in your selected image.

In the code below, we loop over each face in the original image and draw a red square on each face (in a copy of the original image, so as not to modify the original).
You can even [add eye detections](https://docs.opencv.org/3.4.1/d7/d8b/tutorial_py_face_detection.html) as an *optional* exercise in using Haar detectors.

An example of face detection on a variety of images is shown below.

<img src='images/haar_cascade_ex.png' width=80% height=80%/>

```
# load in a haar cascade classifier for detecting frontal faces
face_cascade = cv2.CascadeClassifier('detector_architectures/haarcascade_frontalface_default.xml')

# run the detector
# the output here is an array of detections; the corners of each detection box
# if necessary, modify these parameters until you successfully identify every face in a given image
faces = face_cascade.detectMultiScale(image, 1.2, 2)

# make a copy of the original image to plot detections on
image_with_detections = image.copy()

# loop over the detected faces, mark the image where each face is found
for (x,y,w,h) in faces:
    # draw a rectangle around each detected face
    # you may also need to change the width of the rectangle drawn depending on image resolution
    cv2.rectangle(image_with_detections,(x,y),(x+w,y+h),(255,0,0),3)

fig = plt.figure(figsize=(9,9))
plt.imshow(image_with_detections)
```

## Loading in a trained model

Once you have an image to work with (and, again, you can select any image of faces in the `images/` directory), the next step is to pre-process that image and feed it into your CNN facial keypoint detector.

First, load your best model by its filename.

```
import torch
from models import Net

net = Net()

## TODO: load the best saved model parameters (by your path name)
## You'll need to un-comment the line below and add the correct name for *your* saved model
net.load_state_dict(torch.load('saved_models/model_3200_1600_smoothL1.pt'))

## print out your net and prepare it for testing (uncomment the line below)
```

## Keypoint detection

Now, we'll loop over each detected face in an image (again!) only this time, you'll transform those faces into Tensors that your CNN can accept as input images.
### TODO: Transform each detected face into an input Tensor

You'll need to perform the following steps for each detected face:

1. Convert the face from RGB to grayscale
2. Normalize the grayscale image so that its color range falls in [0,1] instead of [0,255]
3. Rescale the detected face to be the expected square size for your CNN (224x224, suggested)
4. Reshape the numpy image into a torch image.

You may find it useful to consult the transformation code in `data_load.py` to help you perform these processing steps.

### TODO: Detect and display the predicted keypoints

After each face has been appropriately converted into an input Tensor for your network to see as input, you'll wrap that Tensor in a Variable() and can apply your `net` to each face. The output should be the predicted facial keypoints. These keypoints will need to be "un-normalized" for display, and you may find it helpful to write a helper function like `show_keypoints`. You should end up with an image like the following with facial keypoints that closely match the facial features on each individual face:

<img src='images/michelle_detected.png' width=30% height=30%/>

```
net.eval()

image_copy = np.copy(image)

from torchvision import transforms, utils
from data_load import Rescale, RandomCrop, Normalize, ToTensor
image_transform = transforms.Compose([Normalize(), Rescale((224,224)), ToTensor()])

# loop over the detected faces from your haar cascade
for (x,y,w,h) in faces:

    # Select the region of interest that is the face in the image
    # (slice from the untouched copy so later iterations are not affected)
    roi = image_copy[y:y+h, x:x+w]

    ## TODO: Convert the face region from RGB to grayscale
    face = cv2.cvtColor(roi, cv2.COLOR_RGB2GRAY)

    ## TODO: Normalize the grayscale image so that its color range falls in [0,1] instead of [0,255]
    face = face/255.0

    ## TODO: Rescale the detected face to be the expected square size for your CNN (224x224, suggested)
    face = cv2.resize(face, (224, 224))

    ## TODO: Reshape the numpy image shape (H x W x C) into a torch image shape (C x H x W)
    if(len(face.shape) == 2):
        face = face.reshape(face.shape[0], face.shape[1], 1)
    face = face.transpose((2, 0, 1))
    face = torch.from_numpy(face)
    face = face.type(torch.FloatTensor)
    face.unsqueeze_(0)
    print(face.size())

    ## TODO: Make facial keypoint predictions using your loaded, trained network
    ## perform a forward pass to get the predicted facial keypoints
    output_pts = net(face)

    ## TODO: Display each detected face and the corresponding keypoints
```
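For the final display step, a sketch of the `show_keypoints`-style helpers the text suggests. The un-normalization constants (× 50, + 100) are an assumption based on a common `Normalize` transform for this kind of project — check what your own `data_load.py` actually applies and adjust accordingly:

```python
import numpy as np
import matplotlib.pyplot as plt

def unnormalize_keypoints(pts):
    """Undo an assumed (pts - 100) / 50 normalization.

    pts is a flat array of 136 values (or already (68, 2));
    returns the 68 (x, y) keypoints in pixel coordinates.
    """
    pts = np.asarray(pts, dtype=float).reshape(68, 2)
    return pts * 50.0 + 100.0   # assumed constants -- verify against data_load.py

def show_keypoints(face, keypoints, ax=None):
    """Plot a grayscale face with predicted keypoints overlaid."""
    if ax is None:
        _, ax = plt.subplots()
    ax.imshow(face, cmap='gray')
    ax.scatter(keypoints[:, 0], keypoints[:, 1], s=20, marker='.', c='m')

# tiny demo on zeroed "predictions": un-normalizing all-zeros gives (100, 100) for every point
demo = unnormalize_keypoints(np.zeros(136))
print(demo.shape)
```

Inside the loop above you would call these with `unnormalize_keypoints(output_pts.detach().numpy())` and the 224×224 grayscale face, one subplot per detected face.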
# Probability

Notationally, we write $P(E)$ to mean "the probability of event $E$".

## Dependence and Independence

Mathematically, we say that two events E and F are independent if the probability that they both happen is the product of the probabilities that each one happens:

$P(E,F) = P(E)P(F)$

## Conditional Probability

When two events $E$ and $F$ are independent, then by definition we have:

$P(E,F) = P(E)P(F)$

If they are not necessarily independent (and if the probability of $F$ is not zero), then we define the probability of $E$ "conditional on $F$" as:

$P(E|F) = P(E,F)/P(F)$

This is the probability that $E$ happens, given that we know that $F$ happens. It is often rewritten as:

$P(E,F) = P(E|F)P(F)$

### Example code

```
import enum, random

# An Enum is a typed set of enumerated values. Used to make code more descriptive and readable.
class Kid(enum.Enum):
    BOY = 0
    GIRL = 1

def random_kid() -> Kid:
    return random.choice([Kid.BOY, Kid.GIRL])

both_girls = 0
older_girl = 0
either_girl = 0

random.seed(0)

for _ in range(10000):
    younger = random_kid()
    older = random_kid()
    if older == Kid.GIRL:
        older_girl += 1
    if older == Kid.GIRL and younger == Kid.GIRL:
        both_girls += 1
    if older == Kid.GIRL or younger == Kid.GIRL:
        either_girl += 1

print("P(both | older):", both_girls / older_girl)     # 0.514 ~ 1/2
print("P(both | either): ", both_girls / either_girl)  # 0.342 ~ 1/3
```

## Bayes's Theorem

One of the data scientist's best friends is Bayes's theorem, which is a way of "reversing" conditional probabilities. Let's say we need to know the probability of some event $E$ conditional on some other event $F$ occurring. But we only have information about the probability of $F$ conditional on $E$ occurring.

$P(E|F) = P(F|E)P(E) / [P(F|E)P(E) + P(F|\neg E)P(\neg E)]$

## Random Variables

A **random variable** is a variable whose possible values have an associated probability distribution.
A very simple random variable equals 1 if a coin flip turns up heads and 0 if the flip turns up tails. ## Continuous Distributions ``` def uniform_pdf(x: float) -> float: return 1 if 0 <= x < 1 else 0 def uniform_cdf(x: float) -> float: """Returns the probability that a uniform random variable is <= x""" if x < 0: return 0 # uniform random is never less than 0 elif x < 1: return x # e.g. P(X <= 0.4) = 0.4 else: return 1 # uniform random is always less than 1 ``` ## The Normal Distribution $\frac{1}{\sqrt{2\pi\sigma^2}}\,e^{-\frac{(x-\mu)^2}{2\sigma^2}}$ ``` import math SQRT_TWO_PI = math.sqrt(2 * math.pi) def normal_pdf(x: float, mu: float = 0, sigma: float = 1) -> float: return math.exp(-(x-mu) ** 2 / 2 / sigma ** 2) / (SQRT_TWO_PI * sigma) import matplotlib.pyplot as plt xs = [x / 10.0 for x in range(-50, 50)] plt.plot(xs,[normal_pdf(x,sigma=1) for x in xs],'-',label='mu=0,sigma=1') plt.plot(xs,[normal_pdf(x,sigma=2) for x in xs],'--',label='mu=0,sigma=2') plt.plot(xs,[normal_pdf(x,sigma=0.5) for x in xs],':',label='mu=0,sigma=0.5') plt.plot(xs,[normal_pdf(x,mu=-1) for x in xs],'-.',label='mu=-1,sigma=1') plt.legend() plt.title("Various Normal pdfs") def normal_cdf(x: float, mu: float = 0, sigma: float = 1) -> float: return (1 + math.erf((x - mu) / math.sqrt(2) / sigma)) / 2 xs = [x / 10.0 for x in range(-50, 50)] plt.plot(xs,[normal_cdf(x,sigma=1) for x in xs],'-',label='mu=0,sigma=1') plt.plot(xs,[normal_cdf(x,sigma=2) for x in xs],'--',label='mu=0,sigma=2') plt.plot(xs,[normal_cdf(x,sigma=0.5) for x in xs],':',label='mu=0,sigma=0.5') plt.plot(xs,[normal_cdf(x,mu=-1) for x in xs],'-.',label='mu=-1,sigma=1') plt.legend(loc=4) # bottom right plt.title("Various Normal cdfs") def inverse_normal_cdf(p: float, mu: float = 0, sigma: float = 1, tolerance: float = 0.00001) -> float: """Find approximate inverse using binary search""" # if not standard, compute standard and rescale if mu != 0 or sigma != 1: return mu + sigma * inverse_normal_cdf(p, tolerance=tolerance)
low_z = -10.0 # normal_cdf(-10) is (very close to) 0 hi_z = 10.0 # normal_cdf(10) is (very close to) 1 while hi_z - low_z > tolerance: mid_z = (low_z + hi_z) / 2 # Consider the midpoint mid_p = normal_cdf(mid_z) # and the cdf's value there if mid_p < p: low_z = mid_z # Midpoint too low, search above it else: hi_z = mid_z # Midpoint too high, search below it return mid_z ``` ## The Central Limit Theorem A random variable defined as the average of a large number of independent and identically distributed random variables is itself approximately normally distributed. ``` import random def bernoulli_trial(p: float) -> int: """Returns 1 with probability p and 0 with probability 1-p""" return 1 if random.random() < p else 0 def binomial(n: int, p: float) -> int: """Returns the sum of n bernoulli(p) trials""" return sum(bernoulli_trial(p) for _ in range(n)) ``` The mean of a $Bernoulli(p)$ is $p$, and its standard deviation is $\sqrt{p(1-p)}$. As $n$ gets large, a $Binomial(n,p)$ variable is approximately a normal random variable with mean $\mu = np$ and standard deviation $\sigma = \sqrt{np(1-p)}$ ``` from collections import Counter def binomial_histogram(p: float, n: int, num_points: int) -> None: """Picks points from a Binomial(n, p) and plots their histogram""" data = [binomial(n, p) for _ in range(num_points)] # use a bar chart to show the actual binomial samples histogram = Counter(data) plt.bar([x - 0.4 for x in histogram.keys()], [v / num_points for v in histogram.values()], 0.8, color='0.75') mu = p * n sigma = math.sqrt(n * p * (1 - p)) # use a line chart to show the normal approximation xs = range(min(data), max(data) + 1) ys = [normal_cdf(i + 0.5, mu, sigma) - normal_cdf(i - 0.5, mu, sigma) for i in xs] plt.plot(xs,ys) plt.title("Binomial Distribution vs. Normal Approximation") plt.show() ```
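The approximation above can be checked empirically: the sample mean and standard deviation of many $Binomial(100, 0.5)$ draws should land close to $\mu = np = 50$ and $\sigma = \sqrt{np(1-p)} = 5$. A minimal, self-contained sketch:

```python
import math
import random

def bernoulli_trial(p: float) -> int:
    """Returns 1 with probability p and 0 with probability 1-p."""
    return 1 if random.random() < p else 0

def binomial(n: int, p: float) -> int:
    """Returns the sum of n bernoulli(p) trials."""
    return sum(bernoulli_trial(p) for _ in range(n))

random.seed(0)
n, p, num_points = 100, 0.5, 10_000
data = [binomial(n, p) for _ in range(num_points)]

sample_mean = sum(data) / num_points
sample_std = math.sqrt(sum((x - sample_mean) ** 2 for x in data) / num_points)

mu = n * p                           # theoretical mean: 50
sigma = math.sqrt(n * p * (1 - p))   # theoretical std: 5
# sample_mean and sample_std should land close to 50 and 5
```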
# Self-Driving Car Engineer Nanodegree ## Project: **Finding Lane Lines on the Road** *** In this project, you will use the tools you learned about in the lesson to identify lane lines on the road. You can develop your pipeline on a series of individual images, and later apply the result to a video stream (really just a series of images). Check out the video clip "raw-lines-example.mp4" (also contained in this repository) to see what the output should look like after using the helper functions below. Once you have a result that looks roughly like "raw-lines-example.mp4", you'll need to get creative and try to average and/or extrapolate the line segments you've detected to map out the full extent of the lane lines. You can see an example of the result you're going for in the video "P1_example.mp4". Ultimately, you would like to draw just one line for the left side of the lane, and one for the right. In addition to implementing code, there is a brief writeup to complete. The writeup should be completed in a separate file, which can be either a markdown file or a pdf document. There is a [write up template](https://github.com/udacity/CarND-LaneLines-P1/blob/master/writeup_template.md) that can be used to guide the writing process. Completing both the code in the IPython notebook and the writeup template will cover all of the [rubric points](https://review.udacity.com/#!/rubrics/322/view) for this project. --- Let's have a look at our first image called 'test_images/solidWhiteRight.jpg'. Run the 2 cells below (hit Shift-Enter or the "play" button above) to display the image. **Note: If, at any point, you encounter frozen display windows or other confounding issues, you can always start again with a clean slate by going to the "Kernel" menu above and selecting "Restart & Clear Output".** --- **The tools you have are color selection, region of interest selection, grayscaling, Gaussian smoothing, Canny Edge Detection and Hough Transform line detection.
You are also free to explore and try other techniques that were not presented in the lesson. Your goal is to piece together a pipeline to detect the line segments in the image, then average/extrapolate them and draw them onto the image for display (as below). Once you have a working pipeline, try it out on the video stream below.** --- <figure> <img src="examples/line-segments-example.jpg" width="380" alt="Combined Image" /> <figcaption> <p></p> <p style="text-align: center;"> Your output should look something like this (above) after detecting line segments using the helper functions below </p> </figcaption> </figure> <p></p> <figure> <img src="examples/laneLines_thirdPass.jpg" width="380" alt="Combined Image" /> <figcaption> <p></p> <p style="text-align: center;"> Your goal is to connect/average/extrapolate line segments to get output like this</p> </figcaption> </figure> **Run the cell below to import some packages. If you get an `import error` for a package you've already installed, try changing your kernel (select the Kernel menu above --> Change Kernel). Still have problems? Try relaunching Jupyter Notebook from the terminal prompt.
Also, consult the forums for more troubleshooting tips.** ## Import Packages ``` #importing some useful packages import matplotlib.pyplot as plt import matplotlib.image as mpimg import numpy as np import cv2 %matplotlib inline ``` ## Read in an Image ``` #reading in an image image = mpimg.imread('test_images/solidWhiteRight.jpg') #printing out some stats and plotting print('This image is:', type(image), 'with dimensions:', image.shape) plt.imshow(image) # if you wanted to show a single color channel image called 'gray', for example, call as plt.imshow(gray, cmap='gray') ``` ## Ideas for Lane Detection Pipeline **Some OpenCV functions (beyond those introduced in the lesson) that might be useful for this project are:** `cv2.inRange()` for color selection `cv2.fillPoly()` for regions selection `cv2.line()` to draw lines on an image given endpoints `cv2.addWeighted()` to coadd / overlay two images `cv2.cvtColor()` to grayscale or change color `cv2.imwrite()` to output images to file `cv2.bitwise_and()` to apply a mask to an image **Check out the OpenCV documentation to learn about these and discover even more awesome functionality!** ## Helper Functions Below are some helper functions to help get you started. They should look familiar from the lesson! 
``` import math def grayscale(img): """Applies the Grayscale transform This will return an image with only one color channel but NOTE: to see the returned image as grayscale (assuming your grayscaled image is called 'gray') you should call plt.imshow(gray, cmap='gray')""" return cv2.cvtColor(img, cv2.COLOR_RGB2GRAY) # Or use BGR2GRAY if you read an image with cv2.imread() # return cv2.cvtColor(img, cv2.COLOR_BGR2GRAY) def canny(img, low_threshold, high_threshold): """Applies the Canny transform""" return cv2.Canny(img, low_threshold, high_threshold) def gaussian_blur(img, kernel_size): """Applies a Gaussian Noise kernel""" return cv2.GaussianBlur(img, (kernel_size, kernel_size), 0) def region_of_interest(img, vertices): """ Applies an image mask. Only keeps the region of the image defined by the polygon formed from `vertices`. The rest of the image is set to black. `vertices` should be a numpy array of integer points. """ #defining a blank mask to start with mask = np.zeros_like(img) #defining a 3 channel or 1 channel color to fill the mask with depending on the input image if len(img.shape) > 2: channel_count = img.shape[2] # i.e. 3 or 4 depending on your image ignore_mask_color = (255,) * channel_count else: ignore_mask_color = 255 #filling pixels inside the polygon defined by "vertices" with the fill color cv2.fillPoly(mask, vertices, ignore_mask_color) #returning the image only where mask pixels are nonzero masked_image = cv2.bitwise_and(img, mask) return masked_image def draw_lines(img, lines, color=[255, 0, 0], thickness=2): """ NOTE: this is the function you might want to use as a starting point once you want to average/extrapolate the line segments you detect to map out the full extent of the lane (going from the result shown in raw-lines-example.mp4 to that shown in P1_example.mp4). Think about things like separating line segments by their slope ((y2-y1)/(x2-x1)) to decide which segments are part of the left line vs. the right line. 
Then, you can average the position of each of the lines and extrapolate to the top and bottom of the lane. This function draws `lines` with `color` and `thickness`. Lines are drawn on the image inplace (mutates the image). If you want to make the lines semi-transparent, think about combining this function with the weighted_img() function below """ for line in lines: for x1,y1,x2,y2 in line: cv2.line(img, (x1, y1), (x2, y2), color, thickness) def hough_lines(img, rho, theta, threshold, min_line_len, max_line_gap): """ `img` should be the output of a Canny transform. Returns an image with hough lines drawn. """ lines = cv2.HoughLinesP(img, rho, theta, threshold, np.array([]), minLineLength=min_line_len, maxLineGap=max_line_gap) line_img = np.zeros((img.shape[0], img.shape[1], 3), dtype=np.uint8) draw_lines(line_img, lines) return line_img # Python 3 has support for cool math symbols. def weighted_img(img, initial_img, α=0.8, β=1., γ=0.): """ `img` is the output of the hough_lines(), An image with lines drawn on it. Should be a blank image (all black) with lines drawn on it. `initial_img` should be the image before any processing. The result image is computed as follows: initial_img * α + img * β + γ NOTE: initial_img and img must be the same shape! """ return cv2.addWeighted(initial_img, α, img, β, γ) ``` ## Test Images Build your pipeline to work on the images in the directory "test_images" **You should make sure your pipeline works well on these images before you try the videos.** ``` import os os.listdir("test_images/") ``` ## Build a Lane Finding Pipeline Build the pipeline and run your solution on all test_images. Make copies into the `test_images_output` directory, and you can use the images in your writeup report. Try tuning the various parameters, especially the low and high Canny thresholds as well as the Hough lines parameters. ``` # TODO: Build your pipeline that will draw lane lines on the test_images # then save them to the test_images_output directory. 
``` ## Test on Videos You know what's cooler than drawing lanes over images? Drawing lanes over video! We can test our solution on two provided videos: `solidWhiteRight.mp4` `solidYellowLeft.mp4` **Note: if you get an import error when you run the next cell, try changing your kernel (select the Kernel menu above --> Change Kernel). Still have problems? Try relaunching Jupyter Notebook from the terminal prompt. Also, consult the forums for more troubleshooting tips.** **If you get an error that looks like this:** ``` NeedDownloadError: Need ffmpeg exe. You can download it by calling: imageio.plugins.ffmpeg.download() ``` **Follow the instructions in the error message and check out [this forum post](https://discussions.udacity.com/t/project-error-of-test-on-videos/274082) for more troubleshooting tips across operating systems.** ``` # Import everything needed to edit/save/watch video clips from moviepy.editor import VideoFileClip from IPython.display import HTML def process_image(image): # NOTE: The output you return should be a color image (3 channel) for processing video below # TODO: put your pipeline here, # you should return the final output (image where lines are drawn on lanes) return result ``` Let's try the one with the solid white lane on the right first ... ``` white_output = 'test_videos_output/solidWhiteRight.mp4' ## To speed up the testing process you may want to try your pipeline on a shorter subclip of the video ## To do so add .subclip(start_second,end_second) to the end of the line below ## Where start_second and end_second are integer values representing the start and end of the subclip ## You may also uncomment the following line for a subclip of the first 5 seconds ##clip1 = VideoFileClip("test_videos/solidWhiteRight.mp4").subclip(0,5) clip1 = VideoFileClip("test_videos/solidWhiteRight.mp4") white_clip = clip1.fl_image(process_image) #NOTE: this function expects color images!! 
%time white_clip.write_videofile(white_output, audio=False) ``` Play the video inline, or if you prefer find the video in your filesystem (should be in the same directory) and play it in your video player of choice. ``` HTML(""" <video width="960" height="540" controls> <source src="{0}"> </video> """.format(white_output)) ``` ## Improve the draw_lines() function **At this point, if you were successful with making the pipeline and tuning parameters, you probably have the Hough line segments drawn onto the road, but what about identifying the full extent of the lane and marking it clearly as in the example video (P1_example.mp4)? Think about defining a line to run the full length of the visible lane based on the line segments you identified with the Hough Transform. As mentioned previously, try to average and/or extrapolate the line segments you've detected to map out the full extent of the lane lines. You can see an example of the result you're going for in the video "P1_example.mp4".** **Go back and modify your draw_lines function accordingly and try re-running your pipeline. The new output should draw a single, solid line over the left lane line and a single, solid line over the right lane line. The lines should start from the bottom of the image and extend out to the top of the region of interest.** Now for the one with the solid yellow lane on the left. This one's more tricky! 
``` yellow_output = 'test_videos_output/solidYellowLeft.mp4' ## To speed up the testing process you may want to try your pipeline on a shorter subclip of the video ## To do so add .subclip(start_second,end_second) to the end of the line below ## Where start_second and end_second are integer values representing the start and end of the subclip ## You may also uncomment the following line for a subclip of the first 5 seconds ##clip2 = VideoFileClip('test_videos/solidYellowLeft.mp4').subclip(0,5) clip2 = VideoFileClip('test_videos/solidYellowLeft.mp4') yellow_clip = clip2.fl_image(process_image) %time yellow_clip.write_videofile(yellow_output, audio=False) HTML(""" <video width="960" height="540" controls> <source src="{0}"> </video> """.format(yellow_output)) ``` ## Writeup and Submission If you're satisfied with your video outputs, it's time to make the report writeup in a pdf or markdown file. Once you have this IPython notebook ready along with the writeup, it's time to submit for review! Here is a [link](https://github.com/udacity/CarND-LaneLines-P1/blob/master/writeup_template.md) to the writeup template file. ## Optional Challenge Try your lane finding pipeline on the video below. Does it still work? Can you figure out a way to make it more robust? If you're up for the challenge, modify your pipeline so it works with this video and submit it along with the rest of your project!
``` challenge_output = 'test_videos_output/challenge.mp4' ## To speed up the testing process you may want to try your pipeline on a shorter subclip of the video ## To do so add .subclip(start_second,end_second) to the end of the line below ## Where start_second and end_second are integer values representing the start and end of the subclip ## You may also uncomment the following line for a subclip of the first 5 seconds ##clip3 = VideoFileClip('test_videos/challenge.mp4').subclip(0,5) clip3 = VideoFileClip('test_videos/challenge.mp4') challenge_clip = clip3.fl_image(process_image) %time challenge_clip.write_videofile(challenge_output, audio=False) HTML(""" <video width="960" height="540" controls> <source src="{0}"> </video> """.format(challenge_output)) ```
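For the draw_lines() improvement described earlier, one possible approach (a sketch only, not the required solution) is to separate Hough segments by slope sign, average each side's slope and intercept, and extrapolate to fixed y-values. `average_lane_lines` below is a hypothetical helper written in plain Python, independent of OpenCV:

```python
def average_lane_lines(segments, y_bottom, y_top):
    """Separate Hough segments by slope sign, average each side's
    slope/intercept, and extrapolate to y_bottom..y_top.
    `segments` is a list of (x1, y1, x2, y2) tuples; returns a dict
    {'left': (x1, y1, x2, y2), 'right': (x1, y1, x2, y2)}."""
    sides = {'left': [], 'right': []}
    for x1, y1, x2, y2 in segments:
        if x2 == x1:
            continue  # skip vertical segments (undefined slope)
        slope = (y2 - y1) / (x2 - x1)
        if abs(slope) < 0.3:
            continue  # ignore near-horizontal noise
        intercept = y1 - slope * x1
        # image y grows downward, so the left lane line has negative slope
        sides['left' if slope < 0 else 'right'].append((slope, intercept))
    lanes = {}
    for side, fits in sides.items():
        if not fits:
            continue
        m = sum(f[0] for f in fits) / len(fits)
        b = sum(f[1] for f in fits) / len(fits)
        # solve x = (y - b) / m at the bottom and top of the region
        lanes[side] = (int((y_bottom - b) / m), y_bottom,
                       int((y_top - b) / m), y_top)
    return lanes

segments = [(0, 540, 100, 440),    # left-side segment, slope -1
            (700, 440, 800, 540)]  # right-side segment, slope +1
lanes = average_lane_lines(segments, y_bottom=540, y_top=340)
# lanes['left'] == (0, 540, 200, 340); lanes['right'] == (800, 540, 600, 340)
```

The two extrapolated endpoints per side could then be passed to `cv2.line()` inside draw_lines().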
<div style="width:100%; background-color: #D9EDF7; border: 1px solid #CFCFCF; text-align: left; padding: 10px;"> <b>Time series: Processing Notebook</b> <ul> <li><a href="main.ipynb">Main Notebook</a></li> <li>Processing Notebook</li> </ul> <br>This Notebook is part of the <a href="http://data.open-power-system-data.org/time_series">Time series Data Package</a> of <a href="http://open-power-system-data.org">Open Power System Data</a>. </div> # Introductory Notes This Notebook handles missing data, performs calculations and aggregations, and creates the output files. # Settings ## Set version number and recent changes Executing this script to the end will create a new version of the data package. The version number specifies the local directory for the data. <br> We include a note on what has been changed. ``` version = '2019-01-31' changes = '''Added a new source, Terna.''' ``` ## Import Python libraries This section loads libraries and sets up a log. Note that the download module makes use of the [pycountry](https://pypi.python.org/pypi/pycountry) library, which is not part of Anaconda. Install it with `pip install pycountry` from the command line.
``` # Python modules from datetime import datetime, date, timedelta, time import pandas as pd import numpy as np import logging import json import sqlite3 import yaml import itertools import os import pytz from shutil import copyfile import pickle # Scripts from the time-series repository from timeseries_scripts.read import read from timeseries_scripts.download import download from timeseries_scripts.imputation import find_nan from timeseries_scripts.imputation import resample_markers, glue_markers, mark_own_calc from timeseries_scripts.make_json import make_json, get_sha_hash # Reload modules with execution of any code, to avoid having to restart # the kernel after editing timeseries_scripts %load_ext autoreload %autoreload 2 # speed up tab completion in Jupyter Notebook %config Completer.use_jedi = False ``` ## Display options ``` # Allow pretty-display of multiple variables from IPython.core.interactiveshell import InteractiveShell InteractiveShell.ast_node_interactivity = "all" # Adjust the way pandas DataFrames are displayed to fit more columns pd.reset_option('display.max_colwidth') pd.options.display.max_columns = 60 # pd.options.display.max_colwidth=5 ``` ## Set directories ``` # make sure the working directory is this file's directory try: os.chdir(home_path) except NameError: home_path = os.path.realpath('.') # optionally, set a different directory to store outputs and raw data, # which will take up around 15 GB of disk space # save_path is None <=> use_external_dir == False use_external_dir = True if use_external_dir: save_path = os.path.join('C:', os.sep, 'OPSD_time_series_data') else: save_path = home_path input_path = os.path.join(home_path, 'input') sources_yaml_path = os.path.join(home_path, 'input', 'sources.yml') areas_csv_path = os.path.join(home_path, 'input', 'areas.csv') data_path = os.path.join(save_path, version, 'original_data') out_path = os.path.join(save_path, version) temp_path = os.path.join(save_path, 'temp') for path in
[data_path, out_path, temp_path]: os.makedirs(path, exist_ok=True) # change to temp directory os.chdir(temp_path) os.getcwd() ``` ## Chromedriver If you want to download from sources which require scraping, download the appropriate version of Chromedriver for your platform, name it `chromedriver`, create folder `chromedriver` in the working directory, and move the driver to it. It is used by `Selenium` to scrape the links from web pages. The current list of sources which require scraping (as of December 2018): - Terna - Note that the package contains a database of Terna links up to **20 December 2018**. By default, the links are first looked up in this database, so if the end date of your query is not after **20 December 2018**, you won't need Selenium. In the case that you need later dates, you have two options. If you set the variable `extract_new_terna_urls` to `True`, then Selenium will be used to download the files for those later dates. If you set `extract_new_terna_urls` to `False` (which is the default value), only the recorded links will be consulted and Selenium will not be used. - Note: Make sure that the database file, `recorded_terna_urls.csv`, is located in the working directory.
``` # Deciding whether to use the provided database of Terna links extract_new_terna_urls = False # Saving the choice with open("extract_new_terna_urls.pickle", "wb") as f: pickle.dump(extract_new_terna_urls, f) ``` ## Set up a log ``` import logging.handlers # Configure the display of logs in the notebook and attach it to the root logger logstream = logging.StreamHandler() logstream.setLevel(logging.INFO) # threshold for log messages displayed here logging.basicConfig(level=logging.INFO, handlers=[logstream]) # Set up an additional logger for debug messages from the scripts script_logger = logging.getLogger('timeseries_scripts') script_logger.setLevel(logging.DEBUG) formatter = logging.Formatter(fmt='%(asctime)s %(name)s %(levelname)s %(message)s', datefmt='%Y-%m-%d %H:%M:%S',) logfile = logging.handlers.TimedRotatingFileHandler(os.path.join(temp_path, 'logfile.log')) logfile.setFormatter(formatter) logfile.setLevel(logging.DEBUG) # threshold for log messages in the logfile script_logger.addHandler(logfile) # Set up a logger for logs from the notebook logger = logging.getLogger('notebook') logger.addHandler(logfile) ``` Execute the following for more detailed logging messages (may slow down computation). ``` logstream.setLevel(logging.DEBUG) ``` ## Select timerange This section: select the time range and the data sources for download and read. Default: all data sources implemented, full time range available. **Source parameters** are specified in [input/sources.yml](input/sources.yml), which describes, for each source, the datasets (such as wind and solar generation) alongside all the parameters necessary to execute the downloads. The option to perform downloading and reading of subsets is for testing only. To be able to run the script successfully until the end, all sources have to be included, or otherwise the script will run into errors (i.e. the step where aggregate German timeseries are calculated requires data from all four German TSOs to be loaded).
In order to do this, specify the beginning and end of the interval for which to attempt the download. Type `None` to download all available data. ``` start_from_user = date(2010, 1, 1) end_from_user = date(2019, 1, 21) ``` ## Select download source Instead of downloading from the sources, the complete raw data can be downloaded as a zip file from the OPSD Server. Advantages are: - much faster download - back up of raw data in case it is deleted from the server at the original source In order to do this, specify an archive version to use the raw data from that version that has been cached on the OPSD server as input. All data from that version will be downloaded - timerange and subset will be ignored. Type `None` to download directly from the original sources. ``` archive_version = None # i.e. '2016-07-14' ``` ## Select subset Optionally, specify a subset to download/read.<br> The next cell prints the available sources and datasets.<br> ``` with open(sources_yaml_path, 'r') as f: sources = yaml.load(f.read()) for k, v in sources.items(): print(yaml.dump({k: list(v.keys())}, default_flow_style=False)) ``` Copy from its output and paste it into the following cell to get the right format.<br> Type `subset = None` to include all data. ``` subset = yaml.load(''' Terna: - generation_by_source ''') #subset = None # to include all sources # need to exclude Elia data due to unclear copyright situation exclude = yaml.load(''' - Elia ''') ``` Now eliminate sources and variables not in subset.
```
with open(sources_yaml_path, 'r') as f:
    sources = yaml.load(f.read())

if subset:  # eliminate sources and variables not in subset
    sources = {source_name: {k: v
               for k, v in sources[source_name].items()
               if k in variable_list}
               for source_name, variable_list in subset.items()}

if exclude:  # eliminate sources and variables in exclude
    sources = {source_name: variable_dict
               for source_name, variable_dict in sources.items()
               if not source_name in exclude}

# Printing the selected sources (all of them or just a subset)
print("Selected sources: ")
for k, v in sources.items():
    print(yaml.dump({k: list(v.keys())}, default_flow_style=False))
```

# Download

This section: download data. Takes about 1 hour to run for the complete data set (`subset=None`).

First, a data directory is created on your local computer. Then, download parameters for each data source are defined, including the URL. These parameters are then turned into a YAML-string. Finally, the download is executed file by file.

Each file is saved under its original filename. Note that the original file names are often not self-explanatory (called "data" or "January"). The file's content is revealed by its place in the directory structure.

Some sources (currently only ENTSO-E Transparency) require an account to allow downloading. For ENTSO-E Transparency, set up an account [here](https://transparency.entsoe.eu/usrm/user/createPublicUser).

```
auth = yaml.load('''
ENTSO-E Transparency FTP:
    username: your_email
    password: your_password
''')
```

## Automatic download (for most sources)

```
download(sources, data_path, input_path, auth,
         archive_version=archive_version,
         start_from_user=start_from_user,
         end_from_user=end_from_user,
         testmode=False)
```

## Manual download

### Energinet.dk

Go to http://osp.energinet.dk/_layouts/Markedsdata/framework/integrations/markedsdatatemplate.aspx.
**Check the boxes as specified below:**

- Periode
  - Hent udtræk fra perioden: **01-01-2005** Til: **01-01-2018**
  - Select all months
- Datakolonner
  - Elspot Pris, Valutakode/MWh: **Select all**
  - Produktion og forbrug, MWh/h: **Select all**
- Udtræksformat
  - Valutakode: **EUR**
  - Decimalformat: **Engelsk talformat (punktum som decimaltegn)**
  - Datoformat: **Andet datoformat (ÅÅÅÅ-MM-DD)**
  - Hent Udtræk: **Til Excel**

Click **Hent Udtræk**.

You will receive a file `Markedsata.xls` of about 50 MB. Open the file in Excel. There will be a warning from Excel saying that file extension and content are in conflict. Select "open anyways" and save the file as `.xlsx`.

In order to be found by the read-function, place the downloaded file in the following subdirectory:
**`{{data_path}}{{os.sep}}Energinet.dk{{os.sep}}prices_wind_solar{{os.sep}}2005-01-01_2017-12-31`**

### CEPS

Go to http://www.ceps.cz/en/all-data#GenerationRES

**Check boxes as specified below:**

- DISPLAY DATA FOR: **Generation RES**
- TURN ON FILTER: **checked**
- FILTER SETTINGS:
  - Set the date range
    - interval
    - from: **2012** to: **2018**
  - Agregation and data version
    - Aggregation: **Hour**
    - Agregation function: **average (AVG)**
    - Data version: **real data**
  - Filter
    - Type of power plant: **ALL**
  - Click **USE FILTER**
- DOWNLOAD DATA: **DATA V TXT**

You will receive a file `data.txt` of about 1.5 MB.

In order to be found by the read-function, place the downloaded file in the following subdirectory:
**`{{data_path}}{{os.sep}}CEPS{{os.sep}}wind_pv{{os.sep}}2012-01-01_2018-01-31`**

### ENTSO-E Power Statistics

Go to https://www.entsoe.eu/data/statistics/Pages/monthly_hourly_load.aspx

**Check boxes as specified below:**

- Date From: **01-01-2016** Date To: **30-04-2016**
- Country: **(Select All)**
- Scale values to 100% using coverage ratio: **NO**
- **View Report**
- Click the Save symbol and select **Excel**

You will receive a file `1.01 Monthly%5FHourly Load%5FValues%5FStatistical.xlsx` of about 1 MB.
In order to be found by the read-function, place the downloaded file in the following subdirectory:
**`{{os.sep}}original_data{{os.sep}}ENTSO-E Power Statistics{{os.sep}}load{{os.sep}}2016-01-01_2016-04-30`**

The data covers the period from 01-01-2016 up to the present, but 4 months of data seems to be the maximum that the interface supports for a single download request, so you have to repeat the download procedure for 4-month periods to cover the whole period until the present.

# Read

This section: read each downloaded file into a pandas DataFrame and merge data from different sources if it has the same time resolution. Takes ~15 minutes to run.

## Preparations

Set the titles of the rows at the top of the data used to store metadata internally. The order of this list determines the order of the levels in the resulting output.

```
headers = ['region', 'variable', 'attribute', 'source', 'web', 'unit']
```

Read a prepared table containing metadata on the geographical areas.

```
areas = pd.read_csv(areas_csv_path)
```

View the areas table.

```
areas.loc[areas['area ID'].notnull(), :'EIC'].fillna('')
```

## Reading loop

Loop through sources and variables to do the reading. First read the original CSV, Excel etc. files into pandas DataFrames.
```
areas = pd.read_csv(areas_csv_path)

# For each source in the source dictionary
for source_name, source_dict in sources.items():
    # For each variable from source_name
    for variable_name, param_dict in source_dict.items():
        # variable_dir = os.path.join(data_path, source_name, variable_name)
        res_list = param_dict['resolution']
        for res_key in res_list:
            df = read(data_path, areas, source_name, variable_name,
                      res_key, headers, param_dict,
                      start_from_user=start_from_user,
                      end_from_user=end_from_user)

            os.makedirs(res_key, exist_ok=True)
            filename = '_'.join([source_name, variable_name]) + '.pickle'
            df.to_pickle(os.path.join(res_key, filename))
```

Then combine the DataFrames that have the same temporal resolution.

```
# Create a dictionary of empty DataFrames to be populated with data
data_sets = {'15min': pd.DataFrame(),
             '30min': pd.DataFrame(),
             '60min': pd.DataFrame()}
entso_e = {'15min': pd.DataFrame(),
           '30min': pd.DataFrame(),
           '60min': pd.DataFrame()}

for res_key in data_sets.keys():
    if not os.path.isdir(res_key):
        continue
    for filename in os.listdir(res_key):
        source_name = filename.split('_')[0]
        if subset and not source_name in subset.keys():
            continue
        logger.info('include %s', filename)
        df_portion = pd.read_pickle(os.path.join(res_key, filename))

        if source_name == 'ENTSO-E Transparency FTP':
            dfs = entso_e
        else:
            dfs = data_sets

        if dfs[res_key].empty:
            dfs[res_key] = df_portion
        elif not df_portion.empty:
            dfs[res_key] = dfs[res_key].combine_first(df_portion)
        else:
            logger.warning(filename + ' WAS EMPTY')

for res_key, df in data_sets.items():
    logger.info(res_key + ': %s', df.shape)
for res_key, df in entso_e.items():
    logger.info('ENTSO-E ' + res_key + ': %s', df.shape)
```

Display some rows of the dataframes to get a first impression of the data.

```
data_sets['60min'].head()
```

## Save raw data

Save the DataFrames created by the read function to disk.
This way you have the raw data to fall back on if something goes wrong in the remainder of this notebook, without having to repeat the previous steps.

```
data_sets['15min'].to_pickle('raw_data_15.pickle')
data_sets['30min'].to_pickle('raw_data_30.pickle')
data_sets['60min'].to_pickle('raw_data_60.pickle')

entso_e['15min'].to_pickle('raw_entso_e_15.pickle')
entso_e['30min'].to_pickle('raw_entso_e_30.pickle')
entso_e['60min'].to_pickle('raw_entso_e_60.pickle')
```

Load the DataFrames saved above.

```
data_sets = {}
data_sets['15min'] = pd.read_pickle('raw_data_15.pickle')
data_sets['30min'] = pd.read_pickle('raw_data_30.pickle')
data_sets['60min'] = pd.read_pickle('raw_data_60.pickle')

entso_e = {}
entso_e['15min'] = pd.read_pickle('raw_entso_e_15.pickle')
entso_e['30min'] = pd.read_pickle('raw_entso_e_30.pickle')
entso_e['60min'] = pd.read_pickle('raw_entso_e_60.pickle')
```

# Processing

This section: missing data handling, aggregation of sub-national to national data, aggregation of 15'-data to 60'-resolution. Takes 30 minutes to run.

## Missing data handling

### Interpolation

Patch missing data. At this stage, only small gaps (up to 2 hours) are filled by linear interpolation. This catches most of the missing data due to daylight saving time transitions, while leaving bigger gaps untouched.

The exact locations of missing data are stored in the `nan_table` DataFrames.

Where data has been interpolated, it is marked in a new column `comment`. For example, the comment `solar_DE-transnetbw_generation;` means that in the original data, there is a gap in the solar generation timeseries from TransnetBW in the time period where the marker appears.

Patch the datasets and display the locations of missing data in the original data. Takes ~5 minutes to run.
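The gap-filling rule described above — interpolate only NaN runs up to a fixed length, leave longer gaps untouched — can be sketched as follows. This is a hypothetical stand-in for illustration, not the actual `find_nan` implementation from the project's scripts:

```
import numpy as np
import pandas as pd

def patch_small_gaps(s, max_gap=2):
    """Linearly interpolate only NaN runs of length <= max_gap (illustrative helper).

    Longer gaps are left untouched, mirroring the behaviour described above.
    """
    isna = s.isna()
    # Give every NaN run an id: the cumulative count of preceding non-NaN values
    run_id = (~isna).cumsum()
    # Length of the NaN run each position belongs to
    run_len = isna.groupby(run_id).transform('sum')
    fill_mask = isna & (run_len <= max_gap)
    out = s.copy()
    out[fill_mask] = s.interpolate(method='linear')[fill_mask]
    return out

s = pd.Series([1.0, np.nan, np.nan, 3.0, np.nan, np.nan, np.nan, 5.0])
patched = patch_small_gaps(s, max_gap=2)
# The 2-value gap is filled; the 3-value gap keeps its NaNs
```

At 60' resolution a `max_gap` of 2 corresponds to the 2-hour limit mentioned above; at 15' resolution the equivalent limit would be 8 values.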
```
nan_tables = {}
overviews = {}

for res_key, df in data_sets.items():
    data_sets[res_key], nan_tables[res_key], overviews[res_key] = find_nan(
        df, res_key, headers, patch=True)

for res_key, df in entso_e.items():
    entso_e[res_key], nan_tables[res_key + ' ENTSO-E'], overviews[res_key + ' ENTSO-E'] = find_nan(
        df, res_key, headers, patch=True)
```

Execute this to see an example of where the data has been patched.

```
data_sets['60min'][data_sets['60min']['interpolated_values'].notnull()].tail()
```

Display the table of regions with missing values.

```
nan_tables['60min']
```

You can export the NaN-tables to Excel in order to inspect where there are NaNs.

```
writer = pd.ExcelWriter('NaN_table.xlsx')
for res_key, df in nan_tables.items():
    df.to_excel(writer, res_key)
writer.save()

writer = pd.ExcelWriter('Overview.xlsx')
for res_key, df in overviews.items():
    df.to_excel(writer, res_key)
writer.save()
```

Save/Load the patched data sets.

```
data_sets['15min'].to_pickle('patched_15.pickle')
data_sets['30min'].to_pickle('patched_30.pickle')
data_sets['60min'].to_pickle('patched_60.pickle')

entso_e['15min'].to_pickle('patched_entso_e_15.pickle')
entso_e['30min'].to_pickle('patched_entso_e_30.pickle')
entso_e['60min'].to_pickle('patched_entso_e_60.pickle')

data_sets = {}
data_sets['15min'] = pd.read_pickle('patched_15.pickle')
data_sets['30min'] = pd.read_pickle('patched_30.pickle')
data_sets['60min'] = pd.read_pickle('patched_60.pickle')

entso_e = {}
entso_e['15min'] = pd.read_pickle('patched_entso_e_15.pickle')
entso_e['30min'] = pd.read_pickle('patched_entso_e_30.pickle')
entso_e['60min'] = pd.read_pickle('patched_entso_e_60.pickle')
```

## Country specific calculations

### Calculate onshore wind generation for German TSOs

For 50 Hertz, it is already in the data. For TenneT, it is calculated by subtracting offshore from total generation. For Amprion and TransnetBW, onshore wind generation is just total wind generation. Takes <1 second to run.
```
# Some of the following operations require the DataFrames to be
# lexsorted in the columns
for res_key, df in data_sets.items():
    df.sort_index(axis=1, inplace=True)

for area, source, url in zip(
        ['DE_amprion', 'DE_tennet', 'DE_transnetbw'],
        ['Amprion', 'TenneT', 'TransnetBW'],
        ['http://www.amprion.net/en/wind-feed-in',
         'http://www.tennettso.de/site/en/Transparency/publications/network-figures/actual-and-forecast-wind-energy-feed-in',
         'https://www.transnetbw.com/en/transparency/market-data/key-figures']):

    new_col_header = {
        'variable': 'wind_onshore',
        'region': '{area}',
        'attribute': 'generation_actual',
        'source': '{source}',
        'web': '{url}',
        'unit': 'MW'
    }

    if area == 'DE_tennet':
        colname = ('DE_tennet', 'wind_offshore', 'generation_actual', 'TenneT')
        offshore = data_sets['15min'].loc[:, colname]
    else:
        offshore = 0

    data_sets['15min'][
        tuple(new_col_header[level].format(area=area, source=source, url=url)
              for level in headers)
    ] = (data_sets['15min'][(area, 'wind', 'generation_actual', source)] - offshore)

# Sort again
data_sets['15min'].sort_index(axis=1, inplace=True)
```

### Calculate aggregate wind capacity for Germany (on + offshore)

Apart from being interesting on its own, this is also required to calculate an aggregated wind-profile for Germany.

```
new_col_header = {
    'variable': 'wind',
    'region': 'DE',
    'attribute': 'capacity',
    'source': 'own calculation based on BNetzA and netztransparenz.de',
    'web': 'http://data.open-power-system-data.org/renewable_power_plants',
    'unit': 'MW'
}
new_col_header = tuple(new_col_header[level] for level in headers)

data_sets['15min'][new_col_header] = (
    data_sets['15min']
    .loc[:, ('DE', ['wind_onshore', 'wind_offshore'], 'capacity')]
    .sum(axis=1, skipna=False))

# Sort again
data_sets['15min'].sort_index(axis=1, inplace=True)
```

### Aggregate German data from individual TSOs and calculate availabilities/profiles

The wind and solar in-feed data for the 4 German balancing areas is summed up and stored in new columns, which are then used to calculate profiles, that is, the share of wind/solar capacity producing at a given time. The column headers are created in the fashion introduced in the read script. Takes 5 seconds to run.

```
control_areas_DE = ['DE_50hertz', 'DE_amprion', 'DE_tennet', 'DE_transnetbw']

for variable in ['solar', 'wind', 'wind_onshore', 'wind_offshore']:
    # we could also include 'generation_forecast'
    for attribute in ['generation_actual']:
        # Calculate aggregate German generation
        sum_col = data_sets['15min'].loc(axis=1)[
            (control_areas_DE, variable, attribute)].sum(axis=1, skipna=False).to_frame()

        # Create a new MultiIndex
        new_col_header = {
            'variable': '{variable}',
            'region': 'DE',
            'attribute': '{attribute}',
            'source': 'own calculation based on German TSOs',
            'web': '',
            'unit': 'MW'
        }
        tuples = [tuple(new_col_header[level].format(
            variable=variable, attribute=attribute) for level in headers)]
        sum_col.columns = pd.MultiIndex.from_tuples(tuples, names=headers)

        # append aggregate German generation to the dataset after rounding
        data_sets['15min'] = data_sets['15min'].combine_first(sum_col.round(0))

        if attribute == 'generation_actual':
            # Calculate the profile column
            profile_col = (sum_col.values /
                           data_sets['15min']['DE', variable, 'capacity']).round(4)

            # Create a new MultiIndex and append profile to the dataset
            new_col_header = {
                'variable': '{variable}',
                'region': 'DE',
                'attribute': 'profile',
                'source': 'own calculation based on German TSOs, BNetzA and netztranzparenz.de',
                'web': '',
                'unit': 'fraction'
            }
            tuples = [tuple(new_col_header[level].format(variable=variable)
                            for level in headers)]
            profile_col.columns = pd.MultiIndex.from_tuples(
                tuples, names=headers)
            data_sets['15min'] = data_sets['15min'].combine_first(profile_col)
```

### Aggregate Italian data

The data for Italy come by regions (North, Central North, Sicily, etc.), so they need to be aggregated in order to get the data for Italy as a whole.
In the next cell, we sum up the data by region for each variable-attribute pair present in the Terna dataset header.

```
bidding_zones_IT = ["IT_CNOR", "IT_CSUD", "IT_NORD", "IT_SARD", "IT_SICI", "IT_SUD"]

for variable in ["solar", "wind_onshore"]:
    sum_col = data_sets['60min'].loc(axis=1)[
        (bidding_zones_IT, variable)].sum(axis=1, skipna=False)  #.to_frame()

    # Create a new MultiIndex
    new_col_header = {
        "region": "IT",
        "variable": variable,
        "attribute": "generation_actual",
        "source": "own calculation based on Terna",
        "web": "",
        "unit": "MW"
    }
    tuples = tuple(new_col_header[level] for level in headers)
    data_sets['60min'][tuples] = sum_col
    # data_sets['60min'].loc[:, (italian_regions, variable, attribute)].sum(axis=1)

# Sort again
data_sets['60min'].sort_index(axis=1, inplace=True)
```

Another savepoint.

```
data_sets['15min'].to_pickle('calc_15.pickle')
data_sets['30min'].to_pickle('calc_30.pickle')
data_sets['60min'].to_pickle('calc_60.pickle')

os.chdir(temp_path)
data_sets = {}
data_sets['15min'] = pd.read_pickle('calc_15.pickle')
data_sets['30min'] = pd.read_pickle('calc_30.pickle')
data_sets['60min'] = pd.read_pickle('calc_60.pickle')

entso_e = {}
entso_e['15min'] = pd.read_pickle('patched_entso_e_15.pickle')
entso_e['30min'] = pd.read_pickle('patched_entso_e_30.pickle')
entso_e['60min'] = pd.read_pickle('patched_entso_e_60.pickle')
```

## Fill columns not retrieved directly from TSO websites with ENTSO-E Transparency data

```
for res_key, df in entso_e.items():
    # Combine with TSO data
    # Copy entire 30min data from ENTSO-E
    if data_sets[res_key].empty:
        data_sets[res_key] = df
    else:
        # Keep only region, variable, attribute in MultiIndex for comparison
        data_cols = data_sets[res_key].columns.droplevel(
            ['source', 'web', 'unit'])
        # Compare columns from ENTSO-E against ours, keep those we don't have yet
        tuples = [col for col in df.columns if not col[:3] in data_cols]
        add_cols = pd.MultiIndex.from_tuples(tuples, names=headers)
        data_sets[res_key] = data_sets[res_key].combine_first(df[add_cols])

        # Add the ENTSO-E markers (but only for the columns actually copied)
        add_cols = ['_'.join(col[:3]) for col in tuples]
        # Spread marker column out over a DataFrame for easier comparison
        # Filter out every second column, which contains the delimiter " | "
        # from the marker
        marker_table = (df['interpolated_values'].str.split(' | ', expand=True)
                        .filter(regex='^\d*[02468]$', axis='columns'))
        # Replace cells with markers marking columns not copied with NaNs
        marker_table[~marker_table.isin(add_cols)] = np.nan
        for col_name, col in marker_table.iteritems():
            if col_name == 0:
                marker_entso_e = col
            else:
                marker_entso_e = glue_markers(marker_entso_e, col)

        # Glue ENTSO-E marker onto our old marker
        marker = data_sets[res_key]['interpolated_values']
        data_sets[res_key].loc[:, 'interpolated_values'] = glue_markers(
            marker, df['interpolated_values'].reindex(marker.index))
```

## Resample higher frequencies to 60'

Some data comes in 15 or 30-minute intervals (e.g. German or British renewable generation), other data in 60-minute intervals (e.g. load data from ENTSO-E and prices). We resample the 15 and 30-minute data to hourly resolution and append it to the 60-minute dataset.

The marker column is resampled separately in such a way that all information on where data has been interpolated is preserved.

The `.resample('H').mean()` method calculates the mean of the values for the 4 quarter hours [:00, :15, :30, :45] of an hour, inserts it at :00, and drops the other 3 entries. Takes 15 seconds to run.
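The averaging behaviour described above can be seen on a toy series (illustrative values only):

```
import pandas as pd

# Eight quarter-hourly values spanning two hours
idx = pd.date_range('2018-01-01 00:00', periods=8, freq='15min')
s = pd.Series([1.0, 2.0, 3.0, 4.0, 10.0, 20.0, 30.0, 40.0], index=idx)

# The four values at :00, :15, :30, :45 are averaged and stored at :00
hourly = s.resample('H').mean()
# hourly now holds 2.5 at 00:00 and 25.0 at 01:00
```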
```
#marker_60 = data_sets['60min']['interpolated_values']
for res_key, df in data_sets.items():
    if res_key == '60min':
        break

    # Resample first the marker column
    marker_resampled = df['interpolated_values'].groupby(
        pd.Grouper(freq='60Min', closed='left', label='left')
    ).agg(resample_markers, drop_region='DE_AT_LU')
    marker_resampled = marker_resampled.reindex(data_sets['60min'].index)

    # Glue condensed 15 min marker onto 60 min marker
    data_sets['60min'].loc[:, 'interpolated_values'] = glue_markers(
        data_sets['60min']['interpolated_values'],
        marker_resampled.reindex(data_sets['60min'].index))

    # Drop DE_AT_LU bidding zone data from the 15 minute resolution data to
    # be resampled since it is already provided in 60 min resolution by
    # ENTSO-E Transparency
    df = df.drop('DE_AT_LU', axis=1, errors='ignore')

    # Do the resampling
    resampled = df.resample('H').mean()
    resampled.columns = resampled.columns.map(mark_own_calc)
    resampled.columns.names = headers

    # Round the resampled columns
    for col in resampled.columns:
        if col[2] == 'profile':
            resampled.loc[:, col] = resampled.loc[:, col].round(4)
        else:
            resampled.loc[:, col] = resampled.loc[:, col].round(0)

    data_sets['60min'] = data_sets['60min'].combine_first(resampled)
```

## Insert a column with Central European (Summer-)time

The index column of the data sets defines the start of the time period represented by each row of that data set in **UTC** time. We include an additional column for the **CE(S)T** Central European (Summer-) Time, as this might help align the output data with other data sources.
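The conversion can be illustrated on a few timestamps around a DST transition (illustrative only):

```
import pandas as pd

# Three hourly UTC timestamps spanning the spring 2018 DST transition
utc_idx = pd.date_range('2018-03-25 00:00', periods=3, freq='H', tz='UTC')
cet_cest = utc_idx.tz_convert('Europe/Brussels')
# 00:00 UTC -> 01:00+01:00 (CET), 01:00 UTC -> 03:00+02:00 (CEST):
# the local hour 02:00 does not exist on this date
```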
```
info_cols = {'utc': 'utc_timestamp',
             'cet': 'cet_cest_timestamp',
             'marker': 'interpolated_values'}

for res_key, df in data_sets.items():
    if df.empty:
        continue
    df.index.rename(info_cols['utc'], inplace=True)
    df.insert(0, info_cols['cet'],
              df.index.tz_localize('UTC').tz_convert('Europe/Brussels'))
```

# Create a final savepoint

```
data_sets['15min'].to_pickle('final_15.pickle')
data_sets['30min'].to_pickle('final_30.pickle')
data_sets['60min'].to_pickle('final_60.pickle')

os.chdir(temp_path)
data_sets = {}
data_sets['15min'] = pd.read_pickle('final_15.pickle')
data_sets['30min'] = pd.read_pickle('final_30.pickle')
data_sets['60min'] = pd.read_pickle('final_60.pickle')
```

Show the column names contained in the final DataFrame in a table.

```
col_info = pd.DataFrame()
df = data_sets['60min']
for level in df.columns.names:
    col_info[level] = df.columns.get_level_values(level)

col_info
```

# Write data to disk

This section: save as [Data Package](http://data.okfn.org/doc/tabular-data-package) (data in CSV, metadata in JSON file). All files are saved in the directory of this notebook. Alternative file formats (SQL, XLSX) are also exported. Takes about 1 hour to run.
## Limit time range

Cut off the data outside of `[start_from_user:end_from_user]`.

```
for res_key, df in data_sets.items():
    # In order to make sure that the respective time period is covered in both
    # UTC and CE(S)T, we set the start in CE(S)T, but the end in UTC
    if start_from_user:
        start_from_user = (
            pytz.timezone('Europe/Brussels')
            .localize(datetime.combine(start_from_user, time()))
            .astimezone(pytz.timezone('UTC')))
    if end_from_user:
        end_from_user = (
            pytz.timezone('UTC')
            .localize(datetime.combine(end_from_user, time()))
            # Appropriate offset to include the end of period
            + timedelta(days=1, minutes=-int(res_key[:2])))
    # Then cut off the data_set
    data_sets[res_key] = df.loc[start_from_user:end_from_user, :]
```

## Different shapes

Data are provided in three different "shapes":

- SingleIndex (easy to read for humans, compatible with datapackage standard, small file size)
  - Fileformat: CSV, SQLite
- MultiIndex (easy to read into GAMS, not compatible with datapackage standard, small file size)
  - Fileformat: CSV, Excel
- Stacked (compatible with data package standard, large file size, many rows, too many for Excel)
  - Fileformat: CSV

The different shapes need to be created internally before they can be saved to files. Takes about 1 minute to run.
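On a toy frame, the SingleIndex and Stacked shapes look like this (illustrative only; the real column index has six levels, not three):

```
import pandas as pd

cols = pd.MultiIndex.from_tuples(
    [('DE', 'wind', 'generation_actual'),
     ('DE', 'solar', 'generation_actual')],
    names=['region', 'variable', 'attribute'])
df = pd.DataFrame([[1.0, 2.0], [3.0, 4.0]],
                  index=pd.date_range('2018-01-01', periods=2, freq='H'),
                  columns=cols)

# SingleIndex: flatten the levels into one underscore-joined name per column
single = df.copy()
single.columns = ['_'.join(col) for col in df.columns]

# Stacked: one row per (region, variable, attribute, timestamp) combination
stacked = df.transpose().stack().to_frame(name='data')
```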
```
data_sets_singleindex = {}
data_sets_multiindex = {}
data_sets_stacked = {}

for res_key, df in data_sets.items():
    if df.empty:
        continue

#     # Round floating point numbers to 2 digits
#     for col_name, col in df.iteritems():
#         if col_name[0] in info_cols.values():
#             pass
#         elif col_name[2] == 'profile':
#             df[col_name] = col.round(4)
#         else:
#             df[col_name] = col.round(3)

    # MultiIndex
    data_sets_multiindex[res_key + '_multiindex'] = df

    # SingleIndex
    df_singleindex = df.copy()
    # use first 3 levels of multiindex to create singleindex
    df_singleindex.columns = [
        col_name[0] if col_name[0] in info_cols.values()
        else '_'.join([level for level in col_name[0:3] if not level == ''])
        for col_name in df.columns.values]

    data_sets_singleindex[res_key + '_singleindex'] = df_singleindex

    # Stacked
    stacked = df.copy().drop(columns=info_cols['cet'], level=0)
    stacked.columns = stacked.columns.droplevel(['source', 'web', 'unit'])
    # Concatenate all columns below each other (="stack").
    # df.transpose().stack() is faster than stacking all column levels
    # separately
    stacked = stacked.transpose().stack(dropna=True).to_frame(name='data')
    data_sets_stacked[res_key + '_stacked'] = stacked
```

## Write to SQL-database

This file format is required for the filtering function on the OPSD website. This takes ~3 minutes to complete.

```
os.chdir(out_path)
for res_key, df in data_sets_singleindex.items():
    table = 'time_series_' + res_key
    df = df.copy()
    df.index = df.index.strftime('%Y-%m-%dT%H:%M:%SZ')
    cet_col_name = info_cols['cet']
    df[cet_col_name] = (df[cet_col_name].dt.strftime('%Y-%m-%dT%H:%M:%S%z'))
    df.to_sql(table, sqlite3.connect('time_series.sqlite'),
              if_exists='replace', index_label=info_cols['utc'])
```

## Write to Excel

Writing the full tables to Excel takes extremely long. As a workaround, only the timestamp columns are exported. The rest of the data can then be inserted manually from the `_multiindex.csv` files.
```
os.chdir(out_path)
writer = pd.ExcelWriter('time_series1.xlsx')
for res_key, df in data_sets_multiindex.items():
    # Need to convert CE(S)T-timestamps to tz-naive, otherwise Excel converts
    # them back to UTC
    excel_timestamps = df.loc[:, (info_cols['cet'], '', '', '', '', '')]
    excel_timestamps = excel_timestamps.dt.tz_localize(None)
    excel_timestamps.to_excel(writer, res_key.split('_')[0],
                              float_format='%.2f', merge_cells=True)
    # merge_cells=False doesn't work properly with multiindex
writer.save()
```

## Write to CSV

This takes about 10 minutes to complete.

```
os.chdir(out_path)
# itertools.chain() allows iterating over multiple dicts at once
for res_stacking_key, df in itertools.chain(
        data_sets_singleindex.items(),
        data_sets_multiindex.items(),
        data_sets_stacked.items()):
    df = df.copy()

    # convert the format of the cet_cest-timestamp to ISO-8601
    if not res_stacking_key.split('_')[1] == 'stacked':
        df.iloc[:, 0] = df.iloc[:, 0].dt.strftime('%Y-%m-%dT%H:%M:%S%z')
        # https://frictionlessdata.io/specs/table-schema/#date

    filename = 'time_series_' + res_stacking_key + '.csv'
    df.to_csv(filename, float_format='%.4f',
              date_format='%Y-%m-%dT%H:%M:%SZ')
```

# Create metadata

This section: create the metadata, both general and column-specific. All metadata will be stored as a JSON file. Takes 10s to run.

```
os.chdir(out_path)
make_json(data_sets, info_cols, version, changes, headers, areas,
          start_from_user, end_from_user)
```

## Write checksums.txt

We publish SHA-checksums for the output files on GitHub to allow verifying the integrity of the output files on the OPSD server.
```
os.chdir(out_path)
files = os.listdir(out_path)

# Create checksums.txt in the output directory
with open('checksums.txt', 'w') as f:
    for file_name in files:
        if file_name.split('.')[-1] in ['csv', 'sqlite', 'xlsx']:
            file_hash = get_sha_hash(file_name)
            f.write('{},{}\n'.format(file_name, file_hash))

# Copy the file to the root directory from where it will be pushed to GitHub,
# leaving a copy in the version directory for reference
copyfile('checksums.txt', os.path.join(home_path, 'checksums.txt'))
```
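The `get_sha_hash` helper above comes from the project's scripts. A minimal stand-in that chunks a file through `hashlib` might look like this (hypothetical — the real helper may use a different algorithm or signature):

```
import hashlib

def get_sha_hash(path, blocksize=65536):
    """Return the SHA-256 hex digest of a file, read in fixed-size chunks."""
    sha = hashlib.sha256()
    with open(path, 'rb') as f:
        for block in iter(lambda: f.read(blocksize), b''):
            sha.update(block)
    return sha.hexdigest()
```

Reading in chunks keeps memory usage constant even for the multi-GB SQLite and CSV outputs.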
```
import gym
import torch
from torch import nn
import torch.nn.functional as F
from torch.autograd import Variable
from torch.distributions import Bernoulli
import matplotlib.pyplot as plt


class PolicyNet(nn.Module):
    def __init__(self, input_dim, output_dim):
        super(PolicyNet, self).__init__()
        self.input_dim = input_dim
        self.output_dim = output_dim
        self.fc1 = nn.Linear(self.input_dim, 32)
        self.fc2 = nn.Linear(32, 32)
        self.output = nn.Linear(32, self.output_dim)

    def forward(self, x):
        output = F.relu(self.fc1(x))
        output = F.relu(self.fc2(output))
        output = torch.sigmoid(self.output(output))
        return output


def convert_to_torch_variable(arr):
    """Converts a numpy array to torch variable"""
    return Variable(torch.from_numpy(arr).float())


# Define environment
env = gym.make("CartPole-v0")
env.seed(0)

# Create environment monitor for video recording
video_monitor_callable = lambda _: True
# monitored_env = gym.wrappers.Monitor(env, './cartpole_videos', force=True,
#                                      video_callable=video_monitor_callable)

state_dim = env.observation_space.shape[0]
action_dim = env.action_space.n
bernoulli_action_dim = 1

# Initialize policy network
policy_net = PolicyNet(input_dim=state_dim, output_dim=bernoulli_action_dim)

# Hyperparameters
NUM_EPISODES = 500
GAMMA = 0.99
BATCH_SIZE = 5
LEARNING_RATE = 0.01

# Let baseline be 0 for now
baseline = 0.0

# Define optimizer
optimizer = torch.optim.RMSprop(policy_net.parameters(), lr=LEARNING_RATE)

# Collect trajectory rewards for plotting purpose
traj_reward_history = []

# training loop
for ep_i in range(NUM_EPISODES):
    loss = 0.0

    # Record states, actions and discounted rewards of this episode
    states = []
    actions = []
    rewards = []
    cumulative_undiscounted_reward = 0.0

    for traj_i in range(BATCH_SIZE):
        time_step = 0
        done = False

        # initialize environment
        cur_state = env.reset()
        cur_state = convert_to_torch_variable(cur_state)

        discount_factor = 1.0
        discounted_rewards = []
        grad_log_params = []

        while not done:
            # Compute action probability using the current policy
            action_prob = policy_net(cur_state)

            # Sample action according to action probability
            action_sampler = Bernoulli(probs=action_prob)
            action = action_sampler.sample()
            action = action.numpy().astype(int)[0]

            # Record the states and actions -- will be used for policy gradient later
            states.append(cur_state)
            actions.append(action)

            # take a step in the environment, and collect data
            next_state, reward, done, _ = env.step(action)

            # Discount the reward, and append to reward list
            discounted_reward = reward * discount_factor
            discounted_rewards.append(discounted_reward)
            cumulative_undiscounted_reward += reward

            # Prepare for taking the next step
            cur_state = convert_to_torch_variable(next_state)
            time_step += 1
            discount_factor *= GAMMA

        # Finished collecting data for the current trajectory.
        # Recall temporal structure in policy gradient.
        # Construct the "cumulative future discounted reward" at each time step.
        for time_i in range(time_step):
            # relevant reward is the sum of rewards from time t to the end of trajectory
            relevant_reward = sum(discounted_rewards[time_i:])
            rewards.append(relevant_reward)

    # Finished collecting data for this batch. Update policy using policy gradient.
    avg_traj_reward = cumulative_undiscounted_reward / BATCH_SIZE
    traj_reward_history.append(avg_traj_reward)

    if (ep_i + 1) % 10 == 0:
        print("Episode {}: Average reward per trajectory = {}".format(
            ep_i + 1, avg_traj_reward))

    #if (ep_i + 1) % 100 == 0:
    #    record_video()

    optimizer.zero_grad()
    data_len = len(states)
    loss = 0.0

    # Compute the policy gradient
    for data_i in range(data_len):
        action_prob = policy_net(states[data_i])
        action_sampler = Bernoulli(probs=action_prob)
        loss -= (action_sampler.log_prob(torch.Tensor([actions[data_i]]))
                 * (rewards[data_i] - baseline))
    loss /= float(data_len)

    loss.backward()
    optimizer.step()

# Don't forget to close the environments.
#monitored_env.close()
env.close()

# Plot learning curve
plt.figure()
plt.plot(traj_reward_history)
plt.title("Learning to Solve CartPole-v0 with Policy Gradient")
plt.xlabel("Episode")
plt.ylabel("Average Reward per Trajectory")
plt.savefig("CartPole-pg.png")
plt.show()
plt.close()
```
## Forecasting, updating datasets, and the "news" In this notebook, we describe how to use Statsmodels to compute the impacts of updated or revised datasets on out-of-sample forecasts or in-sample estimates of missing data. We follow the approach of the "Nowcasting" literature (see references at the end), by using a state space model to compute the "news" and impacts of incoming data. **Note**: this notebook applies to Statsmodels v0.12+. In addition, it only applies to the state space models or related classes, which are: `sm.tsa.statespace.ExponentialSmoothing`, `sm.tsa.arima.ARIMA`, `sm.tsa.SARIMAX`, `sm.tsa.UnobservedComponents`, `sm.tsa.VARMAX`, and `sm.tsa.DynamicFactor`. ``` %matplotlib inline import numpy as np import pandas as pd import statsmodels.api as sm import matplotlib.pyplot as plt macrodata = sm.datasets.macrodata.load_pandas().data macrodata.index = pd.period_range('1959Q1', '2009Q3', freq='Q') ``` Forecasting exercises often start with a fixed set of historical data that is used for model selection and parameter estimation. Then, the fitted selected model (or models) can be used to create out-of-sample forecasts. Most of the time, this is not the end of the story. As new data comes in, you may need to evaluate your forecast errors, possibly update your models, and create updated out-of-sample forecasts. This is sometimes called a "real-time" forecasting exercise (by contrast, a pseudo real-time exercise is one in which you simulate this procedure). If all that matters is minimizing some loss function based on forecast errors (like MSE), then when new data comes in you may just want to completely redo model selection, parameter estimation and out-of-sample forecasting, using the updated datapoints. If you do this, your new forecasts will have changed for two reasons: 1. You have received new data that gives you new information 2. 
Your forecasting model or the estimated parameters are different In this notebook, we focus on methods for isolating the first effect. The way we do this comes from the so-called "nowcasting" literature, and in particular Bańbura, Giannone, and Reichlin (2011), Bańbura and Modugno (2014), and Bańbura et al. (2013). They describe this exercise as computing the "**news**", and we follow them in using this language in Statsmodels. These methods are perhaps most useful with multivariate models, since multiple variables may update at the same time, and it is not immediately obvious which updated variable caused which change in the forecasts. However, they can still be useful for thinking about forecast revisions in univariate models. We will therefore start with the simpler univariate case to explain how things work, and then move to the multivariate case afterwards. **Note on revisions**: the framework that we are using is designed to decompose changes to forecasts from newly observed datapoints. It can also take into account *revisions* to previously published datapoints, but it does not decompose them separately. Instead, it only shows the aggregate effect of "revisions". **Note on `exog` data**: the framework that we are using only decomposes changes to forecasts from newly observed datapoints for *modeled* variables. These are the "left-hand-side" variables that in Statsmodels are given in the `endog` arguments. This framework does not decompose or account for changes to unmodeled "right-hand-side" variables, like those included in the `exog` argument. ### Simple univariate example: AR(1) We will begin with a simple autoregressive model, an AR(1): $$y_t = \phi y_{t-1} + \varepsilon_t$$ - The parameter $\phi$ captures the persistence of the series We will use this model to forecast inflation. 
To make it simpler to describe the forecast updates in this notebook, we will work with inflation data that has been de-meaned, but it is straightforward in practice to augment the model with a mean term. ``` # De-mean the inflation series y = macrodata['infl'] - macrodata['infl'].mean() ``` #### Step 1: fitting the model on the available dataset Here, we'll simulate an out-of-sample exercise, by constructing and fitting our model using all of the data except the last five observations. We'll assume that we haven't observed these values yet, and then in subsequent steps we'll add them back into the analysis. ``` y_pre = y.iloc[:-5] y_pre.plot(figsize=(15, 3), title='Inflation'); ``` To construct forecasts, we first estimate the parameters of the model. This returns a results object that we will be able to use to produce forecasts. ``` mod_pre = sm.tsa.arima.ARIMA(y_pre, order=(1, 0, 0), trend='n') res_pre = mod_pre.fit() print(res_pre.summary()) ``` Creating the forecasts from the results object `res_pre` is easy - you can just call the `forecast` method with the number of forecasts you want to construct. In this case, we'll construct four out-of-sample forecasts. ``` # Compute the forecasts forecasts_pre = res_pre.forecast(4) # Plot the last 3 years of data and the four out-of-sample forecasts y_pre.iloc[-12:].plot(figsize=(15, 3), label='Data', legend=True) forecasts_pre.plot(label='Forecast', legend=True); ``` For the AR(1) model, it is also easy to manually construct the forecasts. Denoting the last observed variable as $y_T$ and the $h$-step-ahead forecast as $y_{T+h|T}$, we have: $$y_{T+h|T} = \hat \phi^h y_T$$ where $\hat \phi$ is our estimated value for the AR(1) coefficient. From the summary output above, we can see that this is the first parameter of the model, which we can access from the `params` attribute of the results object. 
``` # Get the estimated AR(1) coefficient phi_hat = res_pre.params[0] # Get the last observed value of the variable y_T = y_pre.iloc[-1] # Directly compute the forecasts at the horizons h=1,2,3,4 manual_forecasts = pd.Series([phi_hat * y_T, phi_hat**2 * y_T, phi_hat**3 * y_T, phi_hat**4 * y_T], index=forecasts_pre.index) # We'll print the two to double-check that they're the same print(pd.concat([forecasts_pre, manual_forecasts], axis=1)) ``` #### Step 2: computing the "news" from a new observation Suppose that time has passed, and we have now received another observation. Our dataset is now larger, and we can evaluate our forecast error and produce updated forecasts for the subsequent quarters. ``` # Get the next observation after the "pre" dataset y_update = y.iloc[-5:-4] # Print the forecast error print('Forecast error: %.2f' % (y_update.iloc[0] - forecasts_pre.iloc[0])) ``` To compute forecasts based on our updated dataset, we will create an updated results object `res_post` using the `append` method, to append our new observation to the previous dataset. Note that by default, the `append` method does not re-estimate the parameters of the model. This is exactly what we want here, since we want to isolate the effect on the forecasts of the new information only. ``` # Create a new results object by passing the new observations to the `append` method res_post = res_pre.append(y_update) # Since we now know the value for 2008Q3, we will only use `res_post` to # produce forecasts for 2008Q4 through 2009Q2 forecasts_post = pd.concat([y_update, res_post.forecast('2009Q2')]) print(forecasts_post) ``` In this case, the forecast error is quite large - inflation was more than 10 percentage points below the AR(1) model's forecast. (This was largely because of large swings in oil prices around the global financial crisis). To analyse this in more depth, we can use Statsmodels to isolate the effect of the new information - or the "**news**" - on our forecasts. 
This means that we do not yet want to change our model or re-estimate the parameters. Instead, we will use the `news` method that is available in the results objects of state space models. Computing the news in Statsmodels always requires a *previous* results object or dataset, and an *updated* results object or dataset. Here we will use the original results object `res_pre` as the previous results and the `res_post` results object that we just created as the updated results. Once we have previous and updated results objects or datasets, we can compute the news by calling the `news` method. Here, we will call `res_pre.news`, and the first argument will be the updated results, `res_post` (however, if you have two results objects, the `news` method can be called on either one). In addition to specifying the comparison object or dataset as the first argument, there are a variety of other arguments that are accepted. The most important ones specify the "impact periods" that you want to consider. These "impact periods" correspond to the forecasted periods of interest; i.e. these dates specify which periods will have forecast revisions decomposed. To specify the impact periods, you must pass two of `start`, `end`, and `periods` (similar to the Pandas `date_range` method). If your time series was a Pandas object with an associated date or period index, then you can pass dates as values for `start` and `end`, as we do below. 
``` # Compute the impact of the news on the four periods that we previously # forecasted: 2008Q3 through 2009Q2 news = res_pre.news(res_post, start='2008Q3', end='2009Q2') # Note: one alternative way to specify these impact dates is # `start='2008Q3', periods=4` ``` The variable `news` is an object of the class `NewsResults`, and it contains details about the updates to the data in `res_post` compared to `res_pre`, the new information in the updated dataset, and the impact that the new information had on the forecasts in the period between `start` and `end`. One easy way to summarize the results is with the `summary` method. ``` print(news.summary()) ``` **Summary output**: the default summary for this news results object printed four tables: 1. Summary of the model and datasets 2. Details of the news from updated data 3. Summary of the impacts of the new information on the forecasts between `start='2008Q3'` and `end='2009Q2'` 4. Details of how the updated data led to the impacts on the forecasts between `start='2008Q3'` and `end='2009Q2'` These are described in more detail below. *Notes*: - There are a number of arguments that can be passed to the `summary` method to control this output. Check the documentation / docstring for details. - Table (4), showing details of the updates and impacts, can become quite large if the model is multivariate, there are multiple updates, or a large number of impact dates are selected. It is only shown by default for univariate models. **First table: summary of the model and datasets** The first table, above, shows: - The type of model from which the forecasts were made. Here this is an ARIMA model, since an AR(1) is a special case of an ARIMA(p,d,q) model. - The date and time at which the analysis was computed. 
- The original sample period, which here corresponds to `y_pre` - The endpoint of the updated sample period, which here is the last date in `y_post` **Second table: the news from updated data** This table simply shows the forecasts from the previous results for observations that were updated in the updated sample. *Notes*: - Our updated dataset `y_post` did not contain any *revisions* to previously observed datapoints. If it had, there would be an additional table showing the previous and updated values of each such revision. **Third table: summary of the impacts of the new information** *Columns*: The third table, above, shows: - The previous forecast for each of the impact dates, in the "estimate (prev)" column - The impact that the new information (the "news") had on the forecasts for each of the impact dates, in the "impact of news" column - The updated forecast for each of the impact dates, in the "estimate (new)" column *Notes*: - In multivariate models, this table contains additional columns describing the relevant impacted variable for each row. - Our updated dataset `y_post` did not contain any *revisions* to previously observed datapoints. If it had, there would be additional columns in this table showing the impact of those revisions on the forecasts for the impact dates. - Note that `estimate (new) = estimate (prev) + impact of news` - This table can be accessed independently using the `summary_impacts` method. *In our example*: Notice that in our example, the table shows the values that we computed earlier: - The "estimate (prev)" column is identical to the forecasts from our previous model, contained in the `forecasts_pre` variable. - The "estimate (new)" column is identical to our `forecasts_post` variable, which contains the observed value for 2008Q3 and the forecasts from the updated model for 2008Q4 - 2009Q2. 
**Fourth table: details of updates and their impacts** The fourth table, above, shows how each new observation translated into specific impacts at each impact date. *Columns*: The first three columns of the table describe the relevant **update** (an "update" is a new observation): - The first column ("update date") shows the date of the variable that was updated. - The second column ("forecast (prev)") shows the value that would have been forecasted for the update variable at the update date based on the previous results / dataset. - The third column ("observed") shows the actual observed value of that updated variable / update date in the updated results / dataset. The last four columns describe the **impact** of a given update (an impact is a changed forecast within the "impact periods"). - The fourth column ("impact date") gives the date at which the given update made an impact. - The fifth column ("news") shows the "news" associated with the given update (this is the same for each impact of a given update, but is just not sparsified by default) - The sixth column ("weight") describes the weight that the "news" from the given update has on the impacted variable at the impact date. In general, weights will be different between each "updated variable" / "update date" / "impacted variable" / "impact date" combination. - The seventh column ("impact") shows the impact that the given update had on the given "impacted variable" / "impact date". *Notes*: - In multivariate models, this table contains additional columns to show the relevant variable that was updated and variable that was impacted for each row. Here, there is only one variable ("infl"), so those columns are suppressed to save space. - By default, the updates in this table are "sparsified" with blanks, to avoid repeating the same values for "update date", "forecast (prev)", and "observed" for each row of the table. This behavior can be overridden using the `sparsify` argument. 
- Note that `impact = news * weight`. - This table can be accessed independently using the `summary_details` method. *In our example*: - For the update to 2008Q3 and impact date 2008Q3, the weight is equal to 1. This is because we only have one variable, and once we have incorporated the data for 2008Q3, there is no remaining ambiguity about the "forecast" for this date. Thus all of the "news" about this variable at 2008Q3 passes through to the "forecast" directly. #### Addendum: manually computing the news, weights, and impacts For this simple example with a univariate model, it is straightforward to compute all of the values shown above by hand. First, recall the formula for forecasting $y_{T+h|T} = \phi^h y_T$, and note that it follows that we also have $y_{T+h|T+1} = \phi^{h-1} y_{T+1}$. Finally, note that $y_{T|T+1} = y_T$, because if we know the value of the observations through $T+1$, we know the value of $y_T$. **News**: The "news" is nothing more than the forecast error associated with one of the new observations. So the news associated with observation $T+1$ is: $$n_{T+1} = y_{T+1} - y_{T+1|T} = y_{T+1} - \phi y_T$$ **Impacts**: The impact of the news is the difference between the updated and previous forecasts, $i_h \equiv y_{T+h|T+1} - y_{T+h|T}$. - The previous forecasts for $h=1, \dots, 4$ are: $\begin{pmatrix} \phi y_T & \phi^2 y_T & \phi^3 y_T & \phi^4 y_T \end{pmatrix}'$. - The updated forecasts for $h=1, \dots, 4$ are: $\begin{pmatrix} y_{T+1} & \phi y_{T+1} & \phi^2 y_{T+1} & \phi^3 y_{T+1} \end{pmatrix}'$. The impacts are therefore: $$\{ i_h \}_{h=1}^4 = \begin{pmatrix} y_{T+1} - \phi y_T \\ \phi (y_{T+1} - \phi y_T) \\ \phi^2 (y_{T+1} - \phi y_T) \\ \phi^3 (y_{T+1} - \phi y_T) \end{pmatrix}$$ **Weights**: To compute the weights, we just need to note that we can rewrite the impacts in terms of the forecast errors, $n_{T+1}$. 
$$\{ i_h \}_{h=1}^4 = \begin{pmatrix} 1 \\ \phi \\ \phi^2 \\ \phi^3 \end{pmatrix} n_{T+1}$$ The weights are then simply $w = \begin{pmatrix} 1 \\ \phi \\ \phi^2 \\ \phi^3 \end{pmatrix}$. We can check that this is what the `news` method has computed. ``` # Print the news, computed by the `news` method print(news.news) # Manually compute the news print() print((y_update.iloc[0] - phi_hat * y_pre.iloc[-1]).round(6)) # Print the total impacts, computed by the `news` method # (Note: news.total_impacts = news.revision_impacts + news.update_impacts, but # here there are no data revisions, so total and update impacts are the same) print(news.total_impacts) # Manually compute the impacts print() print(forecasts_post - forecasts_pre) # Print the weights, computed by the `news` method print(news.weights) # Manually compute the weights print() print(np.array([1, phi_hat, phi_hat**2, phi_hat**3]).round(6)) ``` ### Multivariate example: dynamic factor In this example, we'll consider forecasting monthly core price inflation based on the Personal Consumption Expenditures (PCE) price index and the Consumer Price Index (CPI), using a Dynamic Factor model. Both of these measures track prices in the US economy and are based on similar source data, but they have a number of definitional differences. Nonetheless, they track each other relatively well, so modeling them jointly using a single dynamic factor seems reasonable. One reason that this kind of approach can be useful is that the CPI is released earlier in the month than the PCE. Once the CPI is released, therefore, we can update our dynamic factor model with that additional datapoint, and obtain an improved forecast for that month's PCE release. A more involved version of this kind of analysis is available in Knotek and Zaman (2017). 
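Before turning to the multivariate case, the news/weights/impacts algebra above can also be replayed in plain NumPy. The values below ($\phi = 0.7$, last observation $y_T = 1.0$, new observation $y_{T+1} = 0.2$) are hypothetical, chosen only for illustration — they are not the fitted estimates:

```python
import numpy as np

# Hypothetical AR(1) values (illustrative only, not the fitted estimates)
phi = 0.7    # assumed AR(1) coefficient
y_T = 1.0    # last previously observed value
y_T1 = 0.2   # newly observed value at T+1

# News: the one-step-ahead forecast error n_{T+1}
news = y_T1 - phi * y_T

# Previous and updated forecasts at horizons h = 1..4
prev_forecasts = np.array([phi**h * y_T for h in range(1, 5)])
new_forecasts = np.array([phi**(h - 1) * y_T1 for h in range(1, 5)])

# Impacts: updated minus previous forecasts
impacts = new_forecasts - prev_forecasts

# Weights: (1, phi, phi^2, phi^3)', so that impact = news * weight
weights = np.array([1.0, phi, phi**2, phi**3])
print(impacts)
print(news * weights)  # identical to the impacts
```

The two printed arrays agree term by term, confirming the identity `impact = news * weight` at every horizon.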
We start by downloading the core CPI and PCE price index data from [FRED](https://fred.stlouisfed.org/), converting them to annualized monthly inflation rates, and removing two outliers (the mean of each series is handled below with a constant `exog` term, since the dynamic factor model does not include a mean term itself). ``` import pandas_datareader as pdr levels = pdr.get_data_fred(['PCEPILFE', 'CPILFESL'], start='1999', end='2019').to_period('M') infl = np.log(levels).diff().iloc[1:] * 1200 infl.columns = ['PCE', 'CPI'] # Remove two outliers infl['PCE'].loc['2001-09':'2001-10'] = np.nan ``` To show how this works, we'll imagine that it is April 14, 2017, which is the date of the March 2017 CPI release. So that we can show the effect of multiple updates at once, we'll assume that we haven't updated our data since the end of January, so that: - Our **previous dataset** will consist of all values for the PCE and CPI through January 2017 - Our **updated dataset** will additionally incorporate the CPI for February and March 2017 and the PCE data for February 2017. But it will not yet include the March PCE (the March 2017 PCE price index was not released until May 1, 2017). ``` # Previous dataset runs through 2017-01 y_pre = infl.loc[:'2017-01'].copy() const_pre = np.ones(len(y_pre)) print(y_pre.tail()) # The updated dataset additionally includes the February data for both # series and the CPI value for 2017-03 y_post = infl.loc[:'2017-03'].copy() y_post.loc['2017-03', 'PCE'] = np.nan const_post = np.ones(len(y_post)) # Notice the missing value for PCE in 2017-03 print(y_post.tail()) ``` We chose this particular example because in March 2017, core CPI prices fell for the first time since 2010, and this information may be useful in forecasting core PCE prices for that month. The graph below shows the CPI and PCE price data as it would have been observed on April 14th$^\dagger$. ----- $\dagger$ This statement is not entirely true, because both the CPI and PCE price indexes can be revised to a certain extent after the fact. 
As a result, the series that we're pulling are not exactly like those observed on April 14, 2017. This could be fixed by pulling the archived data from [ALFRED](https://alfred.stlouisfed.org/) instead of [FRED](https://fred.stlouisfed.org/), but the data we have is good enough for this tutorial. ``` # Plot the updated dataset fig, ax = plt.subplots(figsize=(15, 3)) y_post.plot(ax=ax) ax.hlines(0, '2009', '2017-06', linewidth=1.0) ax.set_xlim('2009', '2017-06'); ``` To perform the exercise, we first construct and fit a `DynamicFactor` model. Specifically: - We are using a single dynamic factor (`k_factors=1`) - We are modeling the factor's dynamics with an AR(6) model (`factor_order=6`) - We have included a vector of ones as an exogenous variable (`exog=const_pre`), because the inflation series we are working with are not mean-zero. ``` mod_pre = sm.tsa.DynamicFactor(y_pre, exog=const_pre, k_factors=1, factor_order=6) res_pre = mod_pre.fit() print(res_pre.summary()) ``` With the fitted model in hand, we now construct the news and impacts associated with observing the CPI for March 2017. The updated data is for February 2017 and part of March 2017, and we'll examine the impacts on both March and April. In the univariate example, we first created an updated results object, and then passed that to the `news` method. Here, we're creating the news by directly passing the updated dataset. Notice that: 1. `y_post` contains the entire updated dataset (not just the new datapoints) 2. We also had to pass an updated `exog` array. This array must cover **both**: - The entire period associated with `y_post` - Any additional datapoints after the end of `y_post` through the last impact date, specified by `end` Here, `y_post` ends in March 2017, so we needed our `exog` to extend one more period, to April 2017. 
``` # Create the news results # Note: the exog array must cover y_post plus one more period, # through the last impact date (2017-04) const_post_plus1 = np.ones(len(y_post) + 1) news = res_pre.news(y_post, exog=const_post_plus1, start='2017-03', end='2017-04') ``` > **Note**: > > In the univariate example, above, we first constructed a new results object, and then passed that to the `news` method. We could have done that here too, although there is an extra step required. Since we are requesting an impact for a period beyond the end of `y_post`, we would still need to pass the additional value for the `exog` variable during that period to `news`: > > ```python res_post = res_pre.apply(y_post, exog=const_post) news = res_pre.news(res_post, exog=[1.], start='2017-03', end='2017-04') ``` Now that we have computed the `news`, printing `summary` is a convenient way to see the results. ``` # Show the summary of the news results print(news.summary()) ``` Because we have multiple variables, by default the summary only shows the news from updated data and the total impacts. From the first table, we can see that our updated dataset contains three new data points, with most of the "news" from these data coming from the very low reading in March 2017. The second table shows that these three datapoints substantially impacted the estimate for PCE in March 2017 (which was not yet observed). This estimate revised down by nearly 1.5 percentage points. The updated data also impacted the forecasts in the first out-of-sample month, April 2017. After incorporating the new data, the model's forecasts for CPI and PCE inflation in that month revised down 0.29 and 0.17 percentage points, respectively. While these tables show the "news" and the total impacts, they do not show how much of each impact was caused by each updated datapoint. To see that information, we need to look at the details tables. One way to see the details tables is to pass `include_details=True` to the `summary` method. 
To avoid repeating the tables above, however, we'll just call the `summary_details` method directly. ``` print(news.summary_details()) ``` This table shows that most of the revisions to the estimate of PCE in April 2017, described above, came from the news associated with the CPI release in March 2017. By contrast, the CPI release in February had only a little effect on the April forecast, and the PCE release in February had essentially no effect. ### Bibliography Bańbura, Marta, Domenico Giannone, and Lucrezia Reichlin. "Nowcasting." The Oxford Handbook of Economic Forecasting. July 8, 2011. Bańbura, Marta, Domenico Giannone, Michele Modugno, and Lucrezia Reichlin. "Now-casting and the real-time data flow." In Handbook of economic forecasting, vol. 2, pp. 195-237. Elsevier, 2013. Bańbura, Marta, and Michele Modugno. "Maximum likelihood estimation of factor models on datasets with arbitrary pattern of missing data." Journal of Applied Econometrics 29, no. 1 (2014): 133-160. Knotek, Edward S., and Saeed Zaman. "Nowcasting US headline and core inflation." Journal of Money, Credit and Banking 49, no. 5 (2017): 931-968.
# Investigate a Dataset: Titanic Data ## Table of Contents 1. [Introduction](#1-Introduction) 2. [Data Wrangling](#2-Data-Wrangling) 2.1 [Handling Data Types](#2.1-Handling-Data-Types) 2.2 [Handling Missing Values](#2.2-Handling-Missing-Values) &nbsp;&nbsp;&nbsp;&nbsp;2.2.1 [Age](#2.2.1-Age) &nbsp;&nbsp;&nbsp;&nbsp;2.2.2 [Cabin](#2.2.2-Cabin) &nbsp;&nbsp;&nbsp;&nbsp;2.2.3 [Embarked (Port)](#2.2.3-Embarked-(Port)) 3. [Data Exploration 1: Initial Examination of Single Variables](#3-Data-Exploration-1:-Initial-Examination-of-Single-Variables) 3.1 [Survival](#3.1-Survival) 3.2 [Class](#3.2-Class) 3.3 [Sex](#3.3-Sex) 3.4 [Age](#3.4-Age) 3.5 [SibSp](#3.5-SibSp) 3.6 [ParCh](#3.6-ParCh) 3.7 [Fare](#3.7-Fare) 3.8 [Cabin](#3.8-Cabin) 3.9 [Port](#3.9-Port) 4. [Data Exploration 2: What factors make people more likely to survive?](#4-Data-Exploration-2:-What-factors-make-people-more-likely-to-survive?) 4.1 [Survived vs Class](#4.1-Survived-vs-Class) 4.2 [Survived vs Sex](#4.2-Survived-vs-Sex) 4.3 [Survived vs Age](#4.3-Survived-vs-Age) 4.4 [Survived vs SibSp](#4.4-Survived-vs-SibSp) 4.5 [Survived vs ParCh](#4.5-Survived-vs-ParCh) 4.6 [Survived vs Fare](#4.6-Survived-vs-Fare) 5. [Data Exploration 3: What money can buy? Explore relations among passenger class, cabin, and fare](#5-Data-Exploration-3:-What-money-can-buy?-Explore-relations-among-passenger-class,-cabin,-and-fare) 5.1 [Class vs Fare](#5.1-Class-vs-Fare) 5.2 [Cabin vs Fare vs Class](#5.2-Carbin-vs-Fare-vs-Class) 6. [Conclusion](#6-Conclusion) ## 1 Introduction In this report, I will investigate the Titanic survivor data using exploratory data analysis. In the data wrangling phase, I will determine the appropriate datatypes for our dataset, and I will also show how to handle missing values. In the data exploration phase, I will first look at each variable and its distribution. After that, I will answer two questions: 1. What factors make people more likely to survive? 2. What money can buy? 
-- Explore relations among passenger class, cabin, and fare. Last, I will conclude this report by summarizing the findings and stating the limitations of my analysis. ## 2 Data Wrangling ``` import numpy as np import pandas as pd import seaborn as sns import matplotlib.pyplot as plt %pylab inline # Change figure size into 8 by 6 inches matplotlib.rcParams['figure.figsize'] = (8, 6) ``` ### 2.1 Handling Data Types After reviewing the original description of the dataset from [the Kaggle website](https://www.kaggle.com/c/titanic/data), the data type of each variable is chosen as following, and categorical variables will be converted to more descriptive labels: | Variable | Definition | Key | Type | |----------|--------------------------------------------|------------------------------------------------|-----------------| | Survived | Survival | 0 = No, 1 = Yes | int (Survived)* | | Pclass | Ticket class | 1 = 1st, 2 = 2nd, 3 = 3rd | int (Class) | | Sex | Sex | | str | | Age | Age in years | | float | | SibSp | # of siblings / spouses aboard the Titanic | | int | | ParCh | # of parents / children aboard the Titanic | | int | | Ticket | Ticket number | | int | | Fare | Passenger fare | | float | | Cabin | Cabin number | | str | | Embarked | Port of embarkation | C = Cherbourg, Q = Queenstown, S = Southampton | str (Port) | \* indicate the name of converted categorical variable ``` data_file = 'titanic-data.csv' titanic_df = pd.read_csv( data_file, dtype = {'PassengerId': str} ) # Convert categorical variables to more descriptive labels. 
# Create descriptive Survival column from Survived column titanic_df['Survival'] = titanic_df['Survived'].map({0: 'Died', 1: 'Survived'}) # Create descriptive Class column from Pclass column titanic_df['Class'] = titanic_df['Pclass'].map({1: 'First Class', 2: 'Second Class', 3: 'Third Class'}) # Create descriptive Port column from Embarked column titanic_df['Port'] = titanic_df['Embarked'].map({'C': 'Cherbourg', 'Q': 'Queenstown', 'S': 'Southampton'}) ``` ### 2.2 Handling Missing Values ``` total_passengers = len(titanic_df) print("There are {} passengers on board.".format(total_passengers)) titanic_df.isnull().sum() ``` There are three columns with missing values: Age, Cabin, and Embarked (Port). #### 2.2.1 Age There are 177 out of 891 passengers missing Age values. Because filling in the missing Age values will affect the distribution of the Age variable, and this analysis is mainly exploratory, I will **drop rows with missing Age values**. But as an exercise, I will show here a way of filling in the missing Age values based on the assumption that Age values can be inferred from SibSp, ParCh, Class, and Sex. We'll store the filled Age values in a new column called AgeFilled. ``` titanic_df['AgeFilled'] = titanic_df['Age'] ``` ##### SibSp and ParCh SibSp (number of siblings/spouses): The number of spouses can only be 0 or 1, therefore any SibSp value greater than 1 implies that this person is traveling with sibling(s). ParCh (number of parents/children): The number of parents can be at most 2, so any ParCh value greater than 2 implies that this person is traveling with child(ren). From these two observations, we can infer two rules: 1. If SibSp >= 2, this passenger has at least 1 sibling on board, and if also his/her ParCh >= 1, then he/she is most likely a **child** traveling with sibling(s) and parent(s). 2. If ParCh >= 3, this passenger has at least 1 child, so he/she has to be an **adult** parent traveling with child(ren). 
From these two rules, we can divide all rows into three categories: **Child**, **Adult**, **Unknown**, and store this categorical value in a new column ChAd. And we can fill in the Age values of the rows categorized as Child or Adult with the category's median age value. ``` def child_or_adult(sibsp, parch): '''Categorize a person as Child, Adult or Unknown Args: sibsp: the number of siblings/spouses parch: the number of parents/children Return: A string denoting Child, Adult or Unknown ''' if sibsp >= 2 and parch >= 1: return 'Child' if parch >= 3: return 'Adult' return 'Unknown' titanic_df['ChAd'] = titanic_df[['SibSp', 'ParCh']].apply(lambda x: child_or_adult(*x), axis=1) sns.boxplot(x='ChAd', y='AgeFilled', data=titanic_df) ``` Fill the Adult and Child groups' missing Age values with the groups' median values. Leave the Unknown group's missing Age values as NaN. ``` # Iterating over groupby yields copies, so instead compute the group medians # with a grouped transform and assign the fill back to the DataFrame median_fill = titanic_df[titanic_df['ChAd'] != 'Unknown'].groupby('ChAd')['AgeFilled'].transform('median') titanic_df['AgeFilled'] = titanic_df['AgeFilled'].fillna(median_fill) ``` ##### Class and Sex For the passengers categorized as Unknown in the ChAd column, I will fill the missing Age values with the median value of the same Class and Sex group. ``` sns.boxplot(x='Class', y='AgeFilled', hue='Sex', data=titanic_df) fillna_with_median = lambda x: x.fillna(x.median(), inplace=False) titanic_df['AgeFilled'] = titanic_df.groupby(['Sex', 'Pclass'])['AgeFilled'].transform(fillna_with_median) ``` I will look at how the distribution of Age has changed after filling the missing values in the next section. #### 2.2.2 Cabin There are 687 out of 891 passengers missing Cabin values. Because the majority of rows are missing cabin values, I decide to exclude these rows during analysis when cabin value is considered. #### 2.2.3 Embarked (Port) There are 2 out of 891 passengers missing Embarked values. Here I choose to fill these two missing values with the most frequent value (mode). 
``` titanic_df['Embarked'] = titanic_df['Embarked'].fillna(titanic_df['Embarked'].mode().iloc[0]); titanic_df['Port'] = titanic_df['Port'].fillna(titanic_df['Port'].mode().iloc[0]); ``` ## 3 Data Exploration 1: Initial Examination of Single Variables For the first exploration phase, I will look at some of the variables individually. ``` def categorical_count_and_frequency(series): '''Calculate count and frequency table Given a categorical variable pandas Series, return a DataFrame containing counts and frequencies of each possible value. Arg: series: A pandas Series from a categorical variable Returns: A DataFrame containing counts and frequencies ''' counts = series.value_counts() frequencies = series.value_counts(normalize=True) return counts.to_frame(name='Counts').join(frequencies.to_frame(name='Frequencies')).sort_index() ``` ### 3.1 Survival ``` categorical_count_and_frequency(titanic_df['Survival']) ``` Out of 891 passengers, 342 survived and 549 died. The overall survival rate is about 38.4%. ### 3.2 Class ``` categorical_count_and_frequency(titanic_df['Class']) ``` There are 216 first class ticket passengers (24%), 184 second class ticket passengers (21%), and 491 third class ticket passengers (55%). ### 3.3 Sex ``` categorical_count_and_frequency(titanic_df['Sex']) ``` There are 577 male passengers (65%) and 314 female passengers (35%). ### 3.4 Age ``` titanic_df['Age'].describe() ax = sns.distplot(titanic_df['Age'].dropna()) ax.set_title('Histogram of Age') ax.set_ylabel('Frequency') plt.xticks(np.linspace(0, 80, 9)) ``` The age distribution is bimodal. One mode is centered around 5 years old representing children. The other mode is positively skewed with a peak between 20 and 30 representing teenagers and adults. I will also take a look at the age distribution with missing values filled from the previous section. 
```
sns.distplot(titanic_df['Age'].dropna())
ax = sns.distplot(titanic_df['AgeFilled'].dropna())
ax.set_title('Histogram of Age vs AgeFilled')
ax.set_ylabel('Frequency')
plt.xticks(np.linspace(0, 80, 9))
plt.yticks(np.linspace(0, 0.065, 14))
```

Comparing the Age (blue) and AgeFilled (orange) histograms, one can clearly see that filling the missing Age values changed the distribution. Most of the filled Age values are concentrated around the largest peak of the original age distribution.

### 3.5 SibSp

```
categorical_count_and_frequency(titanic_df['SibSp'])
```

608 passengers (68%) travel without any sibling or spouse. 209 passengers (23%) travel with only 1 sibling or spouse, and in this case it is likely to be a spouse. The remaining 74 passengers (9%) travel with more than 1 sibling or spouse, and in this case they are likely to be siblings.

### 3.6 ParCh

```
categorical_count_and_frequency(titanic_df['ParCh'])
```

678 passengers (76%) travel without any parent or child. The remaining 213 passengers travel with 1 to 6 parent(s) or child(ren).

### 3.7 Fare

```
titanic_df['Fare'].describe()

ax = sns.distplot(titanic_df['Fare'])
ax.set_title('Histogram of Fare')
ax.set_ylabel('Frequency')
```

From the histogram we can see the fare distribution is positively skewed.

### 3.8 Cabin

```
titanic_df['Cabin'].describe()
```

Out of the 891 passengers on board, only 204 have their cabin information. The cabin number can be interpreted as follows:

- The first letter indicates the cabin level.
- The following digits indicate the room number.

For our analysis purposes, we are going to look only at the cabin level for passengers with a known cabin number, and keep NaN as NaN for passengers without any cabin number.

Add a column 'CabinLevel' that contains the cabin level (the first letter of the 'Cabin' column).
```
titanic_df['CabinLevel'] = titanic_df['Cabin'].str[0]
categorical_count_and_frequency(titanic_df['CabinLevel'])
```

### 3.9 Port

```
categorical_count_and_frequency(titanic_df['Port'])
```

After filling in the 2 passengers who were missing their port of embarkation, 168 passengers (19%) boarded at Cherbourg, 77 (9%) boarded at Queenstown, and 646 (72%) boarded at Southampton.

## 4 Data Exploration 2: What factors make people more likely to survive?

```
def survival_by(variable_name):
    '''Calculate survival rate for a given variable

    For the titanic pandas DataFrame titanic_df, calculate the survival
    count and survival rate grouped by a given categorical variable.

    Arg:
        variable_name: a categorical variable name to group by

    Returns:
        survival_by_df: a pandas DataFrame that contains the total number
            of passengers, survived number of passengers, and survival
            rate for each category of the given categorical variable
    '''
    grouped = titanic_df.groupby(variable_name)
    total = grouped.size()  # number of total passengers by variable
    survived = grouped['Survived'].sum()  # number of survived passengers by variable
    survival_rate = survived / total  # survival rate by variable
    survival_by_df = pd.DataFrame(
        data = [total, survived, survival_rate],
        index = ['Total', 'Survived', 'Survival Rate']
    )
    return survival_by_df

def survival_by_plot(survival_by_df):
    '''Plot a bar graph showing Survival Rate vs a categorical variable

    For survival_by_df, a pandas DataFrame showing the survival count and
    survival rate for the titanic_df pandas DataFrame, this function plots
    a bar graph showing the survival rate across a given categorical
    variable.

    Arg:
        survival_by_df: a pandas DataFrame generated by the
            survival_by(variable_name) function.
    '''
    ax = survival_by_df.T['Survival Rate'].plot(kind = 'bar')
    ax.set_ylabel('Survival Rate')
    ax.set_title('Barplot of Survival Rate vs {}'.format(survival_by_df.columns.name))
```

### 4.1 Survived vs Class

```
survival_by_class_df = survival_by('Class')
survival_by_class_df
survival_by_plot(survival_by_class_df)
```

Class (passenger class) is an important factor that correlates with survival rate. First class passenger survival rate (63%) > second class passenger survival rate (47%) > third class passenger survival rate. Since Class is a proxy for socio-economic status (1st ~ Upper; 2nd ~ Middle; 3rd ~ Lower), the result suggests that the higher your socio-economic status is, the higher your survival rate is.

### 4.2 Survived vs Sex

```
survival_by_sex_df = survival_by('Sex')
survival_by_sex_df
survival_by_plot(survival_by_sex_df)
```

Sex is an important factor that correlates with survival rate. The female survival rate (74%) is much higher than the male survival rate (19%).

### 4.3 Survived vs Age

Because Age is a continuous variable, I need to convert it into discrete groups first. Here I choose Age groups in 10-year intervals ranging from 0 to 80 in order to include all passengers with Age values. These Age groups are stored in a new column called 'AgeGrp'.

```
titanic_df['AgeGrp'] = pd.cut(
    titanic_df['Age'],
    bins = np.linspace(0, 80, 9),
    include_lowest = True
)

survival_by_age_df = survival_by('AgeGrp')
survival_by_age_df
survival_by_plot(survival_by_age_df)
```

Young kids aged below 10 have a high survival rate close to 60%, old people aged above 60 have a low survival rate around 20%, and everyone else has a similar survival rate around 40%.

### 4.4 Survived vs SibSp

```
survival_by_sibsp_df = survival_by('SibSp')
survival_by_sibsp_df
survival_by_plot(survival_by_sibsp_df)
```

Passengers who travel without any sibling/spouse have a survival rate around 35%, which is lower than the survival rate of passengers who travel with 1 (54%) or 2 (46%) siblings/spouses.
The number of passengers who travel with 3 or more siblings/spouses is too small to draw any definitive conclusion.

### 4.5 Survived vs ParCh

```
survival_by_parch_df = survival_by('ParCh')
survival_by_parch_df
survival_by_plot(survival_by_parch_df)
```

Passengers who travel without any parent/child have a survival rate of 34%, which is lower than the survival rate of passengers who travel with 1 (55%) or 2 (50%) parents/children. The number of passengers who travel with 3 or more parents/children is too small to draw any definitive conclusion.

### 4.6 Survived vs Fare

The histogram from the previous section shows that the fare distribution is highly positively skewed. If we categorized fare into groups with fixed-width bins, the low fare bin would end up with too many passengers while the high fare bins would have only very few. Here I decide to choose bins corresponding to quartiles, so that every bin has a similar number of passengers. The quartile categorization is stored in a new column called 'FareGrp'.

```
titanic_df['FareGrp'] = pd.cut(
    titanic_df['Fare'],
    bins = titanic_df['Fare'].quantile([0, 0.25, 0.5, 0.75, 1]),
    labels = ['1Q', '2Q', '3Q', '4Q'],
    include_lowest = True
)

survival_by_fare_df = survival_by('FareGrp')
survival_by_fare_df
survival_by_plot(survival_by_fare_df)
```

From the bar plot one can clearly see that a higher fare correlates with a higher survival rate.

## 5 Data Exploration 3: What money can buy? Explore relations among passenger class, cabin, and fare

### 5.1 Class vs Fare

Because there are only 204 out of 891 passengers with Cabin information, we'll look at only 'Class' and 'Fare' first.
```
ax = sns.boxplot(x = 'Class', y = 'Fare', data = titanic_df, order=['First Class', 'Second Class', 'Third Class']);
ax.set_title('Boxplot of Fare vs Class')

titanic_df.groupby('Class')['Fare'].median()
```

The median First Class fare is about 4 times the median Second Class fare, and the median Second Class fare is about 2 times the median Third Class fare.

### 5.2 Cabin vs Fare vs Class

Now I will include Cabin in the analysis, using only the 204 passengers with given Cabin values. Because the number of data points is not too large, I will use a swarmplot to show all data points and look at all 3 variables at once.

```
ax = sns.swarmplot(
    x = 'CabinLevel',
    y = 'Fare',
    hue = 'Class',
    order = list('TABCDEFG'),
    hue_order = ['First Class', 'Second Class', 'Third Class'],
    data = titanic_df
)
ax.set_title('Swarmplot of Fare vs CabinLevel vs Class')
```

There are some interesting findings from this plot:

1. Most of the cabin numbers are recorded for First Class passengers; very few Second and Third Class passengers have their cabin numbers on record.
2. From the records, cabin levels T, A, B and C are exclusively for First Class passengers, and cabin levels F and G only accommodate Second and Third Class passengers.

## 6 Conclusion

By analyzing the Titanic dataset, I tried to answer two interesting questions:

Q1. What factors make people more likely to survive?

A1. The survival rates of the Titanic disaster:

- First Class > Second Class > Third Class
- Female > Male
- Age 0-10 (Children) > Age 10-60 > Age 60-80 (Senior Citizens)
- Traveling with 1 or 2 sibling(s)/spouse(s) > traveling without sibling/spouse
- Traveling with 1 or 2 parent(s)/child(ren) > traveling without parent/child
- Expensive fare > Cheap fare

Q2. What money can buy?

A2. A typical First Class passenger pays a higher fare than Second and Third Class passengers, but he/she gets to stay on a higher cabin level where no or very few Second and Third Class passengers stay.
Although the findings of the analysis are clear, there are a few limitations that could be improved in the future.

- When dealing with missing values, I used simple direct methods (filling with the group median (Age), dropping rows (Cabin), filling with the mode (Embarked)). To improve, I could use a machine learning method to build a model that fills the missing values more rigorously.
- I did not include any statistical testing in the report, since I am only exploring the dataset to find some interesting observations.
- This report does not include any machine learning algorithm to mine the dataset, due to the limited scope of this report.
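As a hedged illustration of the model-based imputation mentioned in the first limitation, scikit-learn's experimental `IterativeImputer` can fill missing Age values by regressing on the other numeric columns. The toy DataFrame below only stands in for this notebook's numeric columns; the column names mirror the notebook but the values are invented for the sketch:

```python
import numpy as np
import pandas as pd
# IterativeImputer is experimental and must be enabled explicitly
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer

# Toy frame standing in for titanic_df's numeric columns (values are made up)
df = pd.DataFrame({
    'Pclass': [1, 3, 3, 2, 1, 3],
    'SibSp':  [0, 1, 0, 0, 1, 4],
    'ParCh':  [0, 0, 0, 1, 0, 1],
    'Fare':   [71.3, 7.9, 8.1, 21.0, 53.1, 25.5],
    'Age':    [38.0, np.nan, 26.0, 14.0, np.nan, 2.0],
})

# Each missing entry is predicted from the other columns; observed values are kept
imputer = IterativeImputer(random_state=0)
filled = pd.DataFrame(imputer.fit_transform(df), columns=df.columns)
print(filled['Age'].isna().sum())  # 0
```

Unlike a single group median, this lets every feature contribute to the estimate, at the cost of being harder to explain.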
```
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision import datasets, transforms
from torch.autograd import Variable
from collections import OrderedDict

import numpy as np
import matplotlib.pyplot as plt
plt.rcParams['image.cmap'] = 'gray'
%matplotlib inline

# input batch size for training (default: 64)
batch_size = 64
# input batch size for testing (default: 1000)
test_batch_size = 1000
# number of epochs to train (default: 10)
epochs = 10
# learning rate (default: 0.01)
lr = 0.01
# SGD momentum (default: 0.5)
momentum = 0.5
# disables CUDA training
no_cuda = True
# random seed (default: 1)
seed = 1
# how many batches to wait before logging training status
log_interval = 10

# Setting seed for reproducibility.
torch.manual_seed(seed)

cuda = not no_cuda and torch.cuda.is_available()
print("CUDA: {}".format(cuda))
if cuda:
    torch.cuda.manual_seed(seed)

cudakwargs = {'num_workers': 1, 'pin_memory': True} if cuda else {}

mnist_transform = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize((0.1307,), (0.3081,))  # Precalculated values.
])

train_set = datasets.MNIST(
    root='data',
    train=True,
    transform=mnist_transform,
    download=True,
)
test_set = datasets.MNIST(
    root='data',
    train=False,
    transform=mnist_transform,
    download=True,
)

train_loader = torch.utils.data.DataLoader(
    dataset=train_set,
    batch_size=batch_size,
    shuffle=True,
    **cudakwargs
)
test_loader = torch.utils.data.DataLoader(
    dataset=test_set,
    batch_size=test_batch_size,
    shuffle=True,
    **cudakwargs
)
```

## Loading the model.

Here we will focus only on `nn.Sequential` model types, as they are easier to deal with. Generalizing the methods described here to `nn.Module` will require more work.
```
class Flatten(nn.Module):
    def forward(self, x):
        return x.view(x.size(0), -1)

    def __str__(self):
        return 'Flatten()'

model = nn.Sequential(OrderedDict([
    ('conv2d_1', nn.Conv2d(in_channels=1, out_channels=32, kernel_size=3)),
    ('relu_1', nn.ReLU()),
    ('max_pooling2d_1', nn.MaxPool2d(kernel_size=2)),
    ('conv2d_2', nn.Conv2d(in_channels=32, out_channels=32, kernel_size=3)),
    ('relu_2', nn.ReLU()),
    ('dropout_1', nn.Dropout(p=0.25)),
    ('flatten_1', Flatten()),
    ('dense_1', nn.Linear(3872, 64)),
    ('relu_3', nn.ReLU()),
    ('dropout_2', nn.Dropout(p=0.5)),
    ('dense_2', nn.Linear(64, 10)),
    ('readout', nn.LogSoftmax())
]))

model.load_state_dict(torch.load('example_torch_mnist_model.pth'))
```

## Accessing the layers

A `torch.nn.Sequential` module serves as an iterable and subscriptable container for all its child modules.

```
for i, layer in enumerate(model):
    print('{}\t{}'.format(i, layer))
```

Moreover, `.modules` and `.children` provide generators for accessing layers.

```
for m in model.modules():
    print(m)

for c in model.children():
    print(c)
```

## Getting the weights.

```
conv2d_1_weight = model[0].weight.data.numpy()
conv2d_1_weight.shape

for i in range(32):
    plt.imshow(conv2d_1_weight[i, 0])
    plt.show()
```

### Getting layer properties

The layer objects themselves expose most properties as attributes.

```
conv2d_1 = model[0]
conv2d_1.kernel_size
conv2d_1.stride
conv2d_1.dilation
conv2d_1.in_channels, conv2d_1.out_channels
conv2d_1.padding
conv2d_1.output_padding

dropout_1 = model[5]
dropout_1.p

dense_1 = model[7]
dense_1.in_features, dense_1.out_features
```
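As noted earlier, generalizing to arbitrary `nn.Module` models takes more work, but `named_modules()` and `named_parameters()` already give a model-agnostic way to enumerate layers and weight shapes. A minimal sketch; `TinyNet` is a stand-in model for illustration, not the MNIST model above:

```python
import torch
import torch.nn as nn

class TinyNet(nn.Module):
    """A stand-in nn.Module used only to demonstrate model-agnostic introspection."""
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(1, 8, kernel_size=3)
        self.fc = nn.Linear(8, 10)

    def forward(self, x):
        x = self.conv(x)
        return self.fc(x.flatten(1))

model = TinyNet()

# named_modules() walks the whole module tree (the root has name ''),
# so filtering for childless modules yields the leaf layers
leaf_layers = {name: m for name, m in model.named_modules()
               if len(list(m.children())) == 0 and name != ''}
print(sorted(leaf_layers))  # ['conv', 'fc']

# named_parameters() exposes every weight tensor regardless of nesting
shapes = {name: tuple(p.shape) for name, p in model.named_parameters()}
print(shapes['conv.weight'])  # (8, 1, 3, 3)
```

The same two calls work unchanged on the `nn.Sequential` model defined above, since `Sequential` is itself an `nn.Module`.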
```
%matplotlib inline
import pandas as pd
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.preprocessing import LabelEncoder, StandardScaler
from sklearn.model_selection import KFold
from sklearn.ensemble import RandomForestRegressor
import matplotlib.pyplot as plt
from sklearn.metrics import explained_variance_score
from scipy.stats import pearsonr, ranksums, poisson  # poisson is needed by get_sim below
import seaborn as sns
import os
from scipy.cluster import hierarchy as sch
sns.set_style('white')

def positioning(x):
    return x[-1]

def count_to_cpm(count_array):
    count_array = np.true_divide(count_array, count_array.sum()) * 1e6
    return count_array

def get_end(x):
    if 'head' in x:
        return "5'"
    elif 'tail' in x:
        return "3'"

def make_column_name(colnames):
    col_d = pd.DataFrame({'nucleotide': colnames.str.slice(-1),
                          'position': colnames.str.slice(4, 5),
                          'end': colnames.map(get_end)}) \
        .assign(offset = lambda d: np.where(d.end == "5'", -1, 3)) \
        .assign(adjusted_position = lambda d: np.abs(d.position.astype(int) - d.offset)) \
        .assign(colnames = colnames)
    #print col_d
    return col_d.end + '-position:' + col_d.adjusted_position.astype(str) + ':' + col_d.nucleotide

def preprocess_dataframe(df):
    nucleotides = df.columns[df.columns.str.contains('head|tail')]
    dummies = pd.get_dummies(df[nucleotides])
    dummies.columns = make_column_name(dummies.columns)
    df = pd.concat([df, dummies], axis=1) \
        .drop(nucleotides, axis=1)
    return df

df = pd.read_table('../test/test_train_set.tsv')\
    .assign(expected_cpm = lambda d: count_to_cpm(d['expected_count']))\
    .assign(cpm = lambda d: count_to_cpm(d['experimental_count']))\
    .assign(log_cpm = lambda d: np.log2(d.cpm + 1) - np.log2(d.expected_cpm + 1))\
    .pipe(preprocess_dataframe) \
    .drop(['experimental_count', 'cpm'], axis=1)
df.head()

X = df.filter(regex='^5|^3')
X.columns

def train_lm(d, ax):
    X = d.drop(['seq_id', 'log_cpm', 'expected_count', 'expected_cpm'], axis=1)
    Y = d['log_cpm'].values
    lm = Ridge()
    lm.fit(X, Y)
    pred_Y = lm.predict(X)
    rsqrd = explained_variance_score(Y, pred_Y)
    rho, pval = pearsonr(pred_Y, Y)
    ax.scatter(Y, pred_Y)
    ax.text(0, -5, '$R^2$ = %.3f' % (rsqrd), fontsize=15)
    ax.text(0, -6, r'$\rho$ = %.3f' % rho, fontsize=15)
    ax.spines['top'].set_visible(False)
    ax.spines['right'].set_visible(False)
    ax.set_xlabel(r'True $\Delta$CPM from TGIRT-seq (log2)')
    ax.set_ylabel(r'Predicted $\Delta$CPM from end nucleotides (log2)')
    sns.despine()
    return lm, X, Y

#.assign(side = lambda d: d.X.str.replace('-position:[0-9]-[ACTG]$','')) \
#    .assign(X = lambda d: d.X.str.replace('-[ACTG]$','')) \
def coefficient_plot(d, lm, ax):
    X_factor = d.drop(['log_cpm', 'seq_id', 'expected_count', 'expected_cpm'], axis=1).columns
    coefficient = lm.coef_
    colors = sns.color_palette('Dark2', 10)
    d = pd.DataFrame({'X': X_factor, 'coef': coefficient}) \
        .assign(side = lambda d: d.X.str.replace('', '')) \
        .assign(color = lambda d: list(map(lambda x: colors[0] if "5'" in x else colors[1], d.side))) \
        .sort_values('coef') \
        .assign(bar_color = lambda d: list(map(lambda x: 'green' if x < 0 else 'purple', d.coef)))
    ax.bar(np.arange(len(d.coef)), d.coef, color = d.bar_color)
    ax.xaxis.set_ticks(np.arange(len(d.coef)))
    x = ax.set_xticklabels(d.X, rotation=90)
    for xt, col in zip(ax.get_xticklabels(), d.color):
        xt.set_color(col)
    ax.legend(title = ' ')
    ax.set_ylabel('Coefficients')
    #ax.set_title('Coefficients for positional bases')
    ax.set_xlabel(' ')
    ax.spines['top'].set_visible(False)
    ax.spines['right'].set_visible(False)
    ax.hlines(xmin=-4, xmax=len(d.coef), y = 0)
    ax.set_xlim(-1, len(d.coef))

def get_sim(df):
    sim = poisson.rvs(10000, range(df.shape[0]))
    sim = np.log2(count_to_cpm(sim) + 1) - df['expected_count']
    return sim

def cross_validation(X_train, X_test, Y_train, Y_test, i):
    lm = Ridge()
    lm.fit(X_train, Y_train)
    err = X_test * lm.coef_
    err = err.sum(axis=1)
    error_df = pd.DataFrame({'Corrected': err, 'Uncorrected': Y_test})\
        .pipe(pd.melt, var_name = 'correction', value_name = 'error') \
        .groupby('correction', as_index=False) \
        .agg({'error': lambda x: np.sqrt(x.pow(2).mean())})
    return error_df

def plot_error(ax, X, Y, df):
    X = df.drop(['seq_id', 'log_cpm', 'expected_count', 'expected_cpm'], axis=1).values
    Y = df['log_cpm'].values
    kf = KFold(n_splits=20, random_state=0)
    dfs = []
    for i, (train_index, test_index) in enumerate(kf.split(X)):
        X_train, X_test = X[train_index], X[test_index]
        Y_train, Y_test = Y[train_index], Y[test_index]
        df = cross_validation(X_train, X_test, Y_train, Y_test, i)
        dfs.append(df)
    error_df = pd.concat(dfs)
    sns.swarmplot(data=error_df, x = 'correction', y = 'error', order=['Uncorrected', 'Corrected'])
    ax.set_xlabel(' ')
    ax.set_ylabel('Root-mean-square error (log2 CPM)')
    ax.spines['top'].set_visible(False)
    ax.spines['right'].set_visible(False)
    return error_df

def remove_grid(ax):
    for o in ['top', 'bottom', 'left', 'right']:
        ax.spines[o].set_visible(False)

def make_cluster(d, ax):
    linkage_color = ['black', 'darkcyan', 'darkseagreen', 'darkgoldenrod']
    sch.set_link_color_palette(linkage_color)
    z = sch.linkage(compare_df)
    l = sch.dendrogram(z, orientation='left', color_threshold=100, no_labels=True)
    ax.xaxis.set_visible(False)
    ax.yaxis.set_visible(False)
    remove_grid(ax)
    return l

def plot_heat(compare_df, cl, ax, cax):
    plot_df = compare_df.iloc[cl['leaves'], [1, 0]] + np.log2(1040.5)
    hm = ax.imshow(plot_df, cmap='inferno', aspect='auto')
    ax.xaxis.set_ticks(range(0, 2))
    ax.set_xticklabels(plot_df.columns)
    ax.yaxis.set_visible(False)
    cb = plt.colorbar(hm, cax=cax)
    cb.set_label('log2(CPM)', rotation=270, labelpad=18)
    remove_grid(ax)
    remove_grid(cax)

plt.rc('xtick', labelsize=14)
plt.rc('ytick', labelsize=14)
plt.rc('axes', labelsize=14)
fig = plt.figure(figsize=(9, 12))
ax = fig.add_axes([0, 0.55, 0.45, 0.5])
ax2 = fig.add_axes([0.55, 0.6, 0.48, 0.4])
lm, X, Y = train_lm(df, ax)
coefficient_plot(df, lm, ax2)

#plot heatmap and error
compare_df = pd.DataFrame({'Corrected': Y - np.sum(lm.coef_ * X, axis=1),
                           'Uncorrected': df.log_cpm})
d_ax = fig.add_axes([-0.05, 0, 0.1, 0.45])
cl = make_cluster(compare_df, d_ax)
map_ax = fig.add_axes([0.05, 0, 0.3, 0.45])
cbar_ax = fig.add_axes([0.36, 0, 0.02, 0.45])
plot_heat(compare_df, cl, map_ax, cbar_ax)
ax3 = fig.add_axes([0.58, 0, 0.45, 0.45])
error_df = plot_error(ax3, X, Y, df)
#fig.tight_layout()
fig.text(-0.05, 1.03, 'A', size=15)
fig.text(0.52, 1.03, 'B', size=15)
fig.text(-0.05, 0.47, 'C', size=15)
fig.text(0.5, 0.47, 'D', size=15)
figurename = os.getcwd() + '/expression_prediction.pdf'
fig.savefig(figurename, bbox_inches='tight', transparent=True)
print('plotted ', figurename)
```
# Case 1 - Santander - Tuning the Original Model's Hyperparameters
## Marcio de Lima

<img style="float: left;" src="https://guardian.ng/wp-content/uploads/2016/08/Heart-diseases.jpg" width="350px"/>

```
import warnings
warnings.filterwarnings('ignore')

#%pip install -U scikit-learn

import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns #for plotting
from sklearn.ensemble import RandomForestClassifier #for the model
from sklearn.tree import DecisionTreeClassifier
from sklearn.tree import export_graphviz #plot tree
from sklearn.metrics import roc_curve, auc #for model evaluation
from sklearn.metrics import classification_report #for model evaluation
from sklearn.metrics import confusion_matrix #for model evaluation
from sklearn.model_selection import train_test_split #for data splitting
import eli5 #for permutation importance
from eli5.sklearn import PermutationImportance
import shap #for SHAP values
from pdpbox import pdp, info_plots #for partial plots
np.random.seed(123) #ensure reproducibility
pd.options.mode.chained_assignment = None #hide any pandas warnings

#Marcio de Lima
from sklearn.model_selection import GridSearchCV
from sklearn.model_selection import RandomizedSearchCV
```

<a id='section2'></a>
# The Data

```
dt = pd.read_csv("../dados/heart.csv")
dt.columns = ['age', 'sex', 'chest_pain_type', 'resting_blood_pressure', 'cholesterol', 'fasting_blood_sugar', 'rest_ecg', 'max_heart_rate_achieved', 'exercise_induced_angina', 'st_depression', 'st_slope', 'num_major_vessels', 'thalassemia', 'target']

dt['sex'][dt['sex'] == 0] = 'female'
dt['sex'][dt['sex'] == 1] = 'male'
dt['chest_pain_type'][dt['chest_pain_type'] == 1] = 'typical angina'
dt['chest_pain_type'][dt['chest_pain_type'] == 2] = 'atypical angina'
dt['chest_pain_type'][dt['chest_pain_type'] == 3] = 'non-anginal pain'
dt['chest_pain_type'][dt['chest_pain_type'] == 4] = 'asymptomatic'
dt['fasting_blood_sugar'][dt['fasting_blood_sugar'] == 0] = 'lower than 120mg/ml'
dt['fasting_blood_sugar'][dt['fasting_blood_sugar'] == 1] = 'greater than 120mg/ml'
dt['rest_ecg'][dt['rest_ecg'] == 0] = 'normal'
dt['rest_ecg'][dt['rest_ecg'] == 1] = 'ST-T wave abnormality'
dt['rest_ecg'][dt['rest_ecg'] == 2] = 'left ventricular hypertrophy'
dt['exercise_induced_angina'][dt['exercise_induced_angina'] == 0] = 'no'
dt['exercise_induced_angina'][dt['exercise_induced_angina'] == 1] = 'yes'
dt['st_slope'][dt['st_slope'] == 1] = 'upsloping'
dt['st_slope'][dt['st_slope'] == 2] = 'flat'
dt['st_slope'][dt['st_slope'] == 3] = 'downsloping'
dt['thalassemia'][dt['thalassemia'] == 1] = 'normal'
dt['thalassemia'][dt['thalassemia'] == 2] = 'fixed defect'
dt['thalassemia'][dt['thalassemia'] == 3] = 'reversable defect'

dt['sex'] = dt['sex'].astype('object')
dt['chest_pain_type'] = dt['chest_pain_type'].astype('object')
dt['fasting_blood_sugar'] = dt['fasting_blood_sugar'].astype('object')
dt['rest_ecg'] = dt['rest_ecg'].astype('object')
dt['exercise_induced_angina'] = dt['exercise_induced_angina'].astype('object')
dt['st_slope'] = dt['st_slope'].astype('object')
dt['thalassemia'] = dt['thalassemia'].astype('object')

dt = pd.get_dummies(dt, drop_first=True)
```

# The Model

The next part fits a random forest model to the data.

```
X_train, X_test, y_train, y_test = train_test_split(dt.drop('target', 1), dt['target'], test_size = .2, random_state=10) #split the data

model = RandomForestClassifier(max_depth=5)
model.fit(X_train, y_train)

y_predict = model.predict(X_test)
y_pred_quant = model.predict_proba(X_test)
y_pred_bin = model.predict(X_test)

confusion_matrix = confusion_matrix(y_test, y_pred_bin)
confusion_matrix

total = sum(sum(confusion_matrix))

sensitivity = confusion_matrix[0,0]/(confusion_matrix[0,0]+confusion_matrix[1,0])
print('Sensitivity : ', sensitivity)

specificity = confusion_matrix[1,1]/(confusion_matrix[1,1]+confusion_matrix[0,1])
print('Specificity : ', specificity)

print('Accuracy of RandomForest Regression Classifier on train set: {:.2f}'.format(model.score(X_train, y_train)*100))
print('Accuracy of RandomForest Regression Classifier on test set: {:.2f}'.format(model.score(X_test, y_test)*100))

print(classification_report(y_test, model.predict(X_test)))
```

<a id='section4'></a>
# Tuning the Model - Version 1

```
def rodarTunning(X_train, y_train, X_test, y_test, rf_classifier):
    param_grid = {'n_estimators': [50, 75, 100, 125, 150, 175],
                  'min_samples_split': [2, 4, 6, 8, 10],
                  'min_samples_leaf': [1, 2, 3, 4],
                  'max_depth': [5, 10, 15, 20, 25]}

    grid_obj = GridSearchCV(rf_classifier,
                            return_train_score=True,
                            param_grid=param_grid,
                            scoring='roc_auc',
                            cv=10)

    grid_fit = grid_obj.fit(X_train, y_train)
    rf_opt = grid_fit.best_estimator_

    print('='*20)
    print("best params: " + str(grid_obj.best_estimator_))
    print("best params: " + str(grid_obj.best_params_))
    print('best score:', grid_obj.best_score_)
    print('='*20)

    print(classification_report(y_test, rf_opt.predict(X_test)))

    print('New Accuracy of Model on train set: {:.2f}'.format(rf_opt.score(X_train, y_train)*100))
    print('New Accuracy of Model on test set: {:.2f}'.format(rf_opt.score(X_test, y_test)*100))

    return rf_opt

rf_classifier = RandomForestClassifier(class_weight = "balanced", random_state=7)
rf_opt = rodarTunning(X_train, y_train, X_test, y_test, rf_classifier)
```

# Tuning the Model - Version 2

### Features on different scales - Applying MinMaxScaler

```
from sklearn.preprocessing import MinMaxScaler

scaler = MinMaxScaler(feature_range=(0, 5))
df_HR = dt
HR_col = list(df_HR.columns)
HR_col.remove('target')
for col in HR_col:
    df_HR[col] = df_HR[col].astype(float)
    df_HR[[col]] = scaler.fit_transform(df_HR[[col]])
df_HR['target'] = pd.to_numeric(df_HR['target'], downcast='float')

X_train_hr, X_test_hr, y_train_hr, y_test_hr = train_test_split(df_HR.drop('target', 1), df_HR['target'], test_size = .2, random_state=10) #split the data

rf_classifier = RandomForestClassifier(class_weight = "balanced", random_state=7)
rf_opt2 = rodarTunning(X_train_hr, y_train_hr, X_test_hr, y_test_hr, rf_classifier)
```

# Tuning the Model - Version 3
## Evaluating other models

```
from sklearn import svm, tree, linear_model, neighbors
from sklearn import naive_bayes, ensemble, discriminant_analysis, gaussian_process
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier
from xgboost import XGBClassifier
from sklearn import feature_selection
from sklearn import model_selection
from sklearn import metrics

def testarModelos(X_train, X_test, y_train, y_test):
    models = []
    models.append(('Logistic Regression', LogisticRegression(solver='liblinear', class_weight='balanced')))
    models.append(('SVM', SVC(gamma='auto')))
    models.append(('KNN', KNeighborsClassifier()))
    models.append(('Decision Tree Classifier', DecisionTreeClassifier()))
    models.append(('Gaussian NB', GaussianNB()))
    models.append(('Xgboost', XGBClassifier(learning_rate=0.02, n_estimators=600, objective='binary:logistic', silent=True, nthread=1)))
    models.append(('RandomForestClassifier', RandomForestClassifier(max_depth=5)))

    acc_results = []
    auc_results = []
    names = []

    col = ['Algorithm', 'ROC AUC Mean', 'ROC AUC STD', 'Accuracy Mean', 'Accuracy STD']
    df_results = pd.DataFrame(columns=col)

    i = 0
    for name, model in models:
        kfold = model_selection.KFold(n_splits=10, shuffle=True)  # 10-fold cross-validation
        cv_acc_results = model_selection.cross_val_score(  # accuracy scoring
            model, X_train, y_train, cv=kfold, scoring='accuracy')
        cv_auc_results = model_selection.cross_val_score(  # roc_auc scoring
            model, X_train, y_train, cv=kfold, scoring='roc_auc')
        acc_results.append(cv_acc_results)
        auc_results.append(cv_auc_results)
        names.append(name)
        df_results.loc[i] = [name,
                             round(cv_auc_results.mean()*100, 2),
                             round(cv_auc_results.std()*100, 2),
                             round(cv_acc_results.mean()*100, 2),
                             round(cv_acc_results.std()*100, 2)]
        i += 1

    return df_results.sort_values(by=['ROC AUC Mean'], ascending=False)

#Without MinMaxScaler
rf_classifier = RandomForestClassifier(class_weight = "balanced", random_state=7)
rf_opt2 = rodarTunning(X_train, y_train, X_test, y_test, rf_classifier)

df_results = testarModelos(X_train, X_test, y_train, y_test)
print(df_results)

#With MinMaxScaler
df_results = testarModelos(X_train_hr, X_test_hr, y_train_hr, y_test_hr)
print(df_results)

X_train, X_test, y_train, y_test = train_test_split(dt.drop('target', 1), dt['target'], test_size = .2, random_state=10)

rf_classifier = XGBClassifier(learning_rate=0.02, objective='binary:logistic')
rf_opt3 = rodarTunning(X_train, y_train, X_test, y_test, rf_classifier)
```

## Tuning 1 showed the best accuracy and the most balanced hits across the 2 target classes (0 and 1)

#### We obtained a 3% increase on the training set and the same result on the test set, but according to the confusion matrix and the classification report, the hits were equalized between the two classes, making the model more general.

#### Applying scaling to the dataset made little difference, so it was not used.

#### The XGBClassifier model looks promising, but for this case we will follow the Data Scientist's (the author's) decision and keep RandomForestClassifier.

```
#Save Model Tuning Version 1 - Marcio de Lima
import pickle

filename = 'modelo/tunning_model_v2.pkl'
pickle.dump(rf_opt, open(filename, 'wb'))
```
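To reuse the saved model later, the pickle can be loaded back and used for prediction. A minimal self-contained round-trip sketch; the classifier and data here are stand-ins fitted on toy values, and the file name mirrors the cell above but is written to a temporary directory:

```python
import os
import pickle
import tempfile
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Stand-in for the tuned model saved above: a small classifier on toy data
rng = np.random.RandomState(7)
X = rng.rand(40, 4)
y = (X[:, 0] > 0.5).astype(int)
clf = RandomForestClassifier(n_estimators=10, random_state=7).fit(X, y)

# Round-trip through pickle, mirroring the save cell in the notebook
path = os.path.join(tempfile.mkdtemp(), 'tunning_model_v2.pkl')
with open(path, 'wb') as f:
    pickle.dump(clf, f)
with open(path, 'rb') as f:
    restored = pickle.load(f)

# The restored model predicts identically to the original
print((restored.predict(X) == clf.predict(X)).all())  # True
```

Note that unpickling requires a compatible scikit-learn version, and pickles should only be loaded from trusted sources.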
```
# %load IBSI_benchmark_evaluator.py
import pandas as pd
import argparse


def main(args):
    try:
        pipeline_df = pd.read_csv(args.pipeline_csv_file)
        benchmark_df = pd.read_csv(args.benchmark_csv_file)
        mapping_df = pd.read_csv(args.mapping_csv_file)
    except:
        print("Error in reading csv files.")
        exit()

    tags_of_interest = []
    benchmark_df["pyradiomics_tag"] = benchmark_df["tag"]
    for f_ibsi, f_pyradiomics in zip(mapping_df["IBSIName"], mapping_df["PyradiomicsFeature"]):
        f_ibsi = f_ibsi.lstrip("F").replace(".", "_")
        match_condition = benchmark_df['tag'].str.contains(f_ibsi)
        # use .loc so the assignment is guaranteed to write back to benchmark_df
        benchmark_df.loc[match_condition, 'your_result'] = pipeline_df[f_pyradiomics].values[0]
        benchmark_df.loc[match_condition, 'pyradiomics_tag'] = f_pyradiomics
        tags_of_interest.append(benchmark_df[match_condition & benchmark_df['benchmark_value'].notnull()])

    matched_df = pd.concat(tags_of_interest)
    matched_df["difference"] = (matched_df["your_result"] - matched_df["benchmark_value"]).abs()
    matched_df["check"] = matched_df["difference"] <= matched_df["tolerance"]
    matched_df.to_csv(args.save_csv)


if __name__ == "__main__":
    parser = argparse.ArgumentParser()
    parser.add_argument("--pipeline_csv_file", help="Path to the pipeline generated CSV file")
    parser.add_argument("--benchmark_csv_file", help="Path to CSV file provided by IBSI for benchmarking", default='IBSI-1-configA.csv')
    parser.add_argument("--mapping_csv_file", help="Mapping file with correspondences between tags of IBSI and pyradiomics", default='Pyradiomics2IBSIFeatures.csv')
    parser.add_argument("--save_csv", help="Output csv file path", default="out.csv")
    args = parser.parse_args()
    main(args)

import pandas as pd

# input files
pipeline_csv_file = r"C:\Users\ivan.zhovannik\OneDrive - Maastro - Clinic\_PhD_stuff\Papers\2021\SQLite4Radiomics\_SQLite4Radiomics_benchmarking\1 CT_configE.csv"
benchmark_csv_file = "IBSI-1-configE.csv"
mapping_csv_file = "Pyradiomics2IBSIFeatures.csv"
save_csv = "_test_E.csv"

pipeline_df = pd.read_csv(pipeline_csv_file)
benchmark_df = pd.read_csv(benchmark_csv_file)
mapping_df = pd.read_csv(mapping_csv_file)

tags_of_interest = []
benchmark_df["pyradiomics_tag"] = benchmark_df["tag"]
for f_ibsi, f_pyradiomics in zip(mapping_df["IBSIName"], mapping_df["PyradiomicsFeature"]):
    print(f_ibsi)
    f_ibsi = f_ibsi.lstrip("F").replace(".", "_")
    print(f_ibsi)
    match_condition = benchmark_df['tag'].str.contains(f_ibsi)
    print(match_condition.loc[match_condition])
    display(benchmark_df['your_result'][match_condition])
    benchmark_df.loc[match_condition, 'your_result'] = pipeline_df[f_pyradiomics].values[0]
    display(benchmark_df['your_result'][match_condition])
    print(f_pyradiomics)
    benchmark_df.loc[match_condition, 'pyradiomics_tag'] = f_pyradiomics
    tags_of_interest.append(benchmark_df[match_condition & benchmark_df['benchmark_value'].notnull()])
    display(benchmark_df[match_condition & benchmark_df['benchmark_value'].notnull()])

matched_df = pd.concat(tags_of_interest)
matched_df["difference"] = (matched_df["your_result"] - matched_df["benchmark_value"]).abs()
matched_df["check"] = matched_df["difference"] <= matched_df["tolerance"]
matched_df.to_csv(save_csv)

matched_df.loc[matched_df.check]
matched_df.check.sum()
```
# Image Classification using GoogLeNet Architecture from Scratch

#### In this notebook we build an object classifier using a CNN based on [GoogLeNet](https://arxiv.org/pdf/1409.4842.pdf).

- We used the **Google Cloud Platform** to train and test our model.
- We used image augmentation to boost the performance of the deep network.
- Because our CNN overfit and achieved low accuracy on the [dog dataset](http://vision.stanford.edu/aditya86/ImageNetDogs/) (a subset of the ImageNet dataset), we also tried another CNN on CIFAR-10 and got 68% accuracy in 50 epochs.

#### Steps to set up GCP (Google Cloud Platform) for Keras and TensorFlow

1. Set up a Virtual Machine instance in GCP using this [link](https://cloud.google.com/compute/docs/instances/). Make sure your instance has a GPU.
- Follow all the steps in this [link](https://medium.com/google-cloud/running-jupyter-notebooks-on-gpu-on-google-cloud-d44f57d22dbd) to set up Anaconda, TensorFlow, and Keras with the GPU driver.
- To import the dataset to the VM, ssh into the VM and run:
    - `gcloud compute scp ~/localdirectory/ example-instance:~/destinationdirectory`
- Navigate to Jupyter in your local browser and create a new notebook.

**Some more libraries to install to run the notebook:**

- Install PIL: `pip install pillow` --> the Python Imaging Library, which adds image processing capabilities to your Python interpreter.
- Install tqdm: `pip install tqdm` --> used to show progress bars.
- Install h5py: `pip install h5py` --> used to store weights locally.

```
#Import libraries
import keras
from keras.datasets import cifar10
from keras.layers import Input
from keras.models import Model
from keras.layers import Dense, Dropout, Flatten, Input, AveragePooling2D
from keras.layers import Conv2D, MaxPooling2D, BatchNormalization, GlobalAveragePooling2D
from keras.layers import Concatenate
from keras.optimizers import SGD
from keras.models import model_from_json

#pre-processing images
from sklearn.datasets import load_files
from keras.utils import np_utils
import numpy as np
from glob import glob
from keras.preprocessing.image import ImageDataGenerator
# from PIL import ImageFile
# ImageFile.LOAD_TRUNCATED_IMAGES = True
from keras.preprocessing import image
from tqdm import tqdm
```

### Allow the GPU to allocate memory as needed rather than pre-allocating it
- You can find more details on TensorFlow GPU usage [here](https://www.tensorflow.org/programmers_guide/tensors)

```
# backend
import tensorflow as tf
from keras import backend as k

# Don't pre-allocate memory; allocate as needed
config = tf.ConfigProto()
config.gpu_options.allow_growth = True

# Create a session with the above options specified.
k.tensorflow_backend.set_session(tf.Session(config=config))

# function to load the dataset
def load_dataset(path):
    # load files from path
    data = load_files(path)
    # take the filenames and put them in an array
    dog_files = np.array(data['filenames'])
    # one-hot encoding of the 133 breed labels
    dog_targets = np_utils.to_categorical(np.array(data['target']), 133)
    return dog_files, dog_targets

train_files, train_targets = load_dataset('Dog/train')
valid_files, valid_targets = load_dataset('Dog/valid')
test_files, test_targets = load_dataset('Dog/test')

# Just getting the first 5 dog breeds
dog_names = [item.split('.')[1].rstrip('\/') for item in sorted(glob("Dog/train/*/"))]
dog_names[:5]

print('There are %d total dog categories.' % len(dog_names))
print('There are %s total dog images.\n' % len(np.hstack([train_files, valid_files, test_files])))
print('There are %d training dog images.' % len(train_files))
print('There are %d validation dog images.' % len(valid_files))
print('There are %d test dog images.' % len(test_files))
```

##### This dataset is already split into train, validation, and test parts. As the training set consists of 6670 images, there are only about 50 dogs per breed on average.

#### Preprocess the data
- `path_to_tensor` takes an image path, converts the image to an array, and returns a 4D tensor with shape (1, 224, 224, 3) (batch, height, width, color).
- `paths_to_tensor` takes an array of image paths and returns the stacked tensor of all the images.

```
def path_to_tensor(img_path):
    # loads RGB image as PIL.Image.Image type
    img = image.load_img(img_path, target_size=(224, 224))
    # convert PIL.Image.Image type to 3D tensor with shape (224, 224, 3)
    x = image.img_to_array(img)
    # convert 3D tensor to 4D tensor with shape (1, 224, 224, 3) and return 4D tensor
    return np.expand_dims(x, axis=0)

def paths_to_tensor(img_paths):
    list_of_tensors = [path_to_tensor(img_path) for img_path in tqdm(img_paths)]
    return np.vstack(list_of_tensors)
```

Rescale the images by dividing every pixel in every image by 255.

```
from PIL import ImageFile
ImageFile.LOAD_TRUNCATED_IMAGES = True

train_tensors = paths_to_tensor(train_files).astype('float32')/255
valid_tensors = paths_to_tensor(valid_files).astype('float32')/255
test_tensors = paths_to_tensor(test_files).astype('float32')/255
```

## Image Augmentation

#### When we first trained the CNN we found that it was overfitting by a huge margin: training accuracy was 50% while validation accuracy was only 18% after 70 epochs. To reduce the overfitting we turned to image augmentation.

#### This helps prevent overfitting and helps the model generalize better.
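For intuition about the simplest of the augmentations used below: a horizontal flip is just a reversal of each row along the width axis. A minimal stdlib sketch on a toy 2×3 "image" stored as nested lists:

```python
# toy 2x3 "image": two rows of three pixel values
img = [[1, 2, 3],
       [4, 5, 6]]

# horizontal flip = reverse every row (reversal along the width axis)
flipped = [row[::-1] for row in img]
print(flipped)  # -> [[3, 2, 1], [6, 5, 4]]
```

Flipping twice recovers the original image, which is why the label can safely stay unchanged.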
```
from keras.preprocessing.image import ImageDataGenerator

# this is the augmentation configuration we will use for training
train_datagen = ImageDataGenerator(
    rescale=1./255,       # rescaling factor
    shear_range=0.2,      # shear angle in counter-clockwise direction in degrees
    zoom_range=0.2,       # range for random zoom
    horizontal_flip=True, # randomly flip inputs horizontally
    fill_mode='nearest'   # strategy used for filling in newly created pixels
)

batch_size = 16

# this is the augmentation configuration we will use for testing:
# only rescaling
test_datagen = ImageDataGenerator(rescale=1./255)
```

##### This is a generator that will read pictures found in subfolders of 'Dog/train' and indefinitely generate batches of augmented image data.

```
train_generator = train_datagen.flow_from_directory(
    'Dog/train',             # this is the target directory
    target_size=(224, 224),  # all images will be resized to 224 x 224
    batch_size=batch_size,
    class_mode='categorical')  # since we use categorical labels
```

##### This is a similar generator, for validation data.

```
validation_generator = test_datagen.flow_from_directory(
    'Dog/valid',
    target_size=(224, 224),  # all images will be resized to 224 x 224
    batch_size=batch_size,
    class_mode='categorical')
```

<center> **GoogLeNet Inception Architecture**

![GoogeLeNet inception Architecture](http://yeephycho.github.io/blog_img/GoogLeNet.JPG)

#### It is generally difficult to decide which architecture will work well for a particular dataset; when building a CNN from scratch it is mostly trial and error. A pre-trained CNN will usually reach higher accuracy in fewer iterations than a network trained from scratch, because the latter has to learn everything from the beginning.
**This notebook will only work with TensorFlow, not with Theano.**

**Theano uses channels first, whereas TensorFlow uses channels last.**

- Let's start with the input tensor, which will be:

```
input = Input(shape = (224, 224, 3))

## So let's start building the CNN for our dataset, the dog dataset, which contains 133 classes with 8341 images in total
```

- Starting from the diagram, the first CNN layer is a `convolution` with a `7 x 7` patch size and a `stride` of `(2,2)` on the `224 x 224` input image, followed by `BatchNormalization` for faster learning and higher overall accuracy. If you want to know more, [this](https://medium.com/deeper-learning/glossary-of-deep-learning-batch-normalisation-8266dcd2fa82) blog has a good explanation.

```
x = Conv2D(64, (7, 7), strides=(2, 2), padding='same', activation='relu')(input)
x = BatchNormalization()(x)  # default axis is 3; if you are using Theano it would be 1
```

- `MaxPooling` with a `3 x 3` window and a stride of `2`:

```
x = MaxPooling2D((3, 3), strides=(2, 2), padding='same')(x)
```

- Next, GoogLeNet uses a reduce convolution with 64 filters followed by a `3 x 3` convolution with 192 filters at stride 1. As our dataset is far smaller than ImageNet, we reduce the filter counts (the code below uses 48 and 64 filters).

```
x = Conv2D(48, (1, 1), strides=(1, 1), padding='same', activation='relu')(x)
x = BatchNormalization()(x)
x = Conv2D(64, (1, 1), strides=(1, 1), padding='same', activation='relu')(x)
x = BatchNormalization()(x)
```

### Inception 3a type in the GoogLeNet Architecture
- It takes a couple of steps to build this layer, so we make it a function that we can reuse.
- a `1 x 1` convolution with 64 filters (the reduce branch), followed by BatchNormalization
- a `1 x 1` convolution with 80 filters whose output feeds the `3 x 3` branch (16 filters), followed by BatchNormalization
- the same `1 x 1` output also feeds the `5 x 5` branch (32 filters), followed by BatchNormalization
- the last branch is pooling: MaxPooling followed by a `1 x 1` convolution with 32 filters
- merge the outputs of the reduce, `3 x 3`, `5 x 5`, and pooling branches along the last (channel) axis

So the function is called as `add_module(input, 64, 80, 16, 32, 32)`.

```
def add_module(input, reduce_1, onex1, threex3, fivex5, pool):
    #print(input.shape)
    Conv2D_reduce = Conv2D(reduce_1, (1, 1), strides=(2, 2), activation='relu', padding='same')(input)
    Conv2D_reduce = BatchNormalization()(Conv2D_reduce)
    #print(Conv2D_reduce.shape)
    Conv2D_1_1 = Conv2D(onex1, (1, 1), activation='relu', padding='same')(input)
    Conv2D_1_1 = BatchNormalization()(Conv2D_1_1)
    #print(Conv2D_1_1.shape)
    Conv2D_3_3 = Conv2D(threex3, (3, 3), strides=(2, 2), activation='relu', padding='same')(Conv2D_1_1)
    Conv2D_3_3 = BatchNormalization()(Conv2D_3_3)
    #print(Conv2D_3_3.shape)
    Conv2D_5_5 = Conv2D(fivex5, (5, 5), strides=(2, 2), activation='relu', padding='same')(Conv2D_1_1)
    Conv2D_5_5 = BatchNormalization()(Conv2D_5_5)
    #print(Conv2D_5_5.shape)
    MaxPool2D_3_3 = MaxPooling2D(pool_size=(2, 2), strides=(2, 2))(input)
    #print(MaxPool2D_3_3.shape)
    Cov2D_Pool = Conv2D(pool, (1, 1), activation='relu', padding='same')(MaxPool2D_3_3)
    Cov2D_Pool = BatchNormalization()(Cov2D_Pool)
    #print(Cov2D_Pool.shape)
    concat = Concatenate(axis=-1)([Conv2D_reduce, Conv2D_3_3, Conv2D_5_5, Cov2D_Pool])
    #print(concat.shape)
    return concat
```

### Inception 3b
- It takes a couple of steps to build this layer.
- a `1 x 1` reduce branch with 48 filters, followed by BatchNormalization
- a `1 x 1` convolution with 80 filters whose output feeds the `3 x 3` branch (16 filters), followed by BatchNormalization
- the same `1 x 1` output also feeds the `5 x 5` branch (48 filters), followed by BatchNormalization
- the pooling branch ends in a `1 x 1` convolution with 64 filters
- merge the outputs of the four branches along the last (channel) axis

So the function is called as `add_module(input, 48, 80, 16, 48, 64)`.

A `3 x 3` max pooling can then follow.

### So putting it all together

I am not using the full, more complex architecture because it might overfit on our dataset. As shown in the diagram, ImageNet has 1000 categories with more than 1000 images per category, whereas we have a small dataset, and implementing the whole architecture would overfit the model.

#### For the final layer I use a softmax activation with a Dense layer of 133 units (num_classes).
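As a quick sanity check before assembling the model: the number of output channels of `add_module` is the sum of the four concatenated branches (reduce, `3 x 3`, `5 x 5`, pool), while the `onex1` filters are only an intermediate bottleneck and do not appear in the output. A stdlib-only sketch (`inception_out_channels` is a helper name introduced here, not part of the notebook's code):

```python
def inception_out_channels(reduce_1, onex1, threex3, fivex5, pool):
    # the concatenation joins the reduce, 3x3, 5x5 and pooled branches;
    # onex1 only feeds the 3x3 and 5x5 branches internally
    return reduce_1 + threex3 + fivex5 + pool

print(inception_out_channels(64, 80, 16, 32, 32))  # inception 3a-style call -> 144
print(inception_out_channels(48, 80, 16, 48, 64))  # inception 3b-style call -> 176
```

Checking these counts against `model.summary()` is an easy way to confirm the module is wired the way you intended.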
```
input = Input(shape=(224, 224, 3))

x = Conv2D(64, (7, 7), strides=(2, 2), padding='same', activation='relu')(input)
x = BatchNormalization()(x)
x = MaxPooling2D((3, 3), strides=(2, 2), padding='same')(x)

x = Conv2D(48, (1, 1), strides=(1, 1), padding='same', activation='relu')(x)
x = BatchNormalization()(x)
x = Conv2D(64, (1, 1), strides=(1, 1), padding='same', activation='relu')(x)
x = BatchNormalization()(x)
x = MaxPooling2D((3, 3), strides=(2, 2), padding='same')(x)

x = add_module(x, 64, 80, 16, 32, 32)
# x = MaxPooling2D((3, 3), strides=(2, 2), padding='same')(x)
x = add_module(x, 48, 80, 16, 48, 64)
# x = MaxPooling2D((3, 3), strides=(2, 2), padding='same')(x)
# x = add_module(x)

# --- Last layers (after the inception modules) ---
x = AveragePooling2D((7, 7), strides=(1, 1), padding='valid')(x)
x = Dropout(0.5)(x)
x = GlobalAveragePooling2D()(x)
x = Dense(1024, activation='linear')(x)
Output = Dense(133, activation='softmax')(x)
```

# Let's make the model

```
model = Model(inputs=input, outputs=Output)
model.summary()
```

#### Training starts

We set up the VM with two GPUs attached to the instance, so we are going to use Keras's multi-GPU parallel model.

```
from keras.utils import multi_gpu_model
parallel_model = multi_gpu_model(model, gpus=2)
```

### Choosing the optimizer is very important for good accuracy. As per the paper I use the SGD optimizer, as it gave the best results on ImageNet.

```
parallel_model.compile(optimizer=SGD(lr=0.0001, momentum=0.9), loss='categorical_crossentropy', metrics=["accuracy"])
```

#### A first run of 20 epochs took ~145 s per epoch on 2 Tesla K-80 GPUs with 16 GB memory; the resulting training loss was 3.8295 and the validation loss 4.2274.
#### We ran only 20 epochs first to check that the model was working.
#### After 40 more epochs we got a validation accuracy of 12% and a training accuracy of 20%.
#### The reason for running few epochs is to check whether the model is overfitting or not.
#### The training call below should be commented out when revisiting this file — running it by mistake takes forever and blocks everything else in the notebook.

![First-20-Epochs](https://github.com/vishal6557/ADS/blob/master/Screen%20Shot%202018-04-23%20at%204.01.17%20AM.png?raw=true)

```
parallel_model.fit_generator(
    train_generator,
    steps_per_epoch=6670 // batch_size,
    epochs=20,
    validation_data=validation_generator,
    validation_steps=835 // batch_size)

model.save_weights('testing_fina1.h5')
```

#### Save the model in JSON format

```
model_json = model.to_json()
with open("final_ model.json", "w") as json_file:
    json_file.write(model_json)
```

#### Load the model from JSON with its weights.

```
from keras.models import model_from_json

json_file = open('final_ model.json', 'r')
loaded_model_json = json_file.read()
json_file.close()
loaded_model = model_from_json(loaded_model_json)
loaded_model.load_weights("testing_fina17.h5")
```

#### Don't forget to compile the model before using it, or else it will give an error.

```
parallel_model = multi_gpu_model(loaded_model, gpus=2)
loaded_model.compile(optimizer=SGD(lr=0.0001, momentum=0.9), loss='categorical_crossentropy', metrics=["accuracy"])
```

### Create a generator to get the accuracy from the model you trained

```
generator = train_datagen.flow_from_directory(
    'Dog/train',
    target_size=(224, 224),
    batch_size=batch_size,
    class_mode=None,  # this means our generator will only yield batches of data, no labels
    shuffle=False)

score = loaded_model.evaluate_generator(validation_generator, 800/16, workers=12)
scores = loaded_model.predict_generator(validation_generator, 800/16, workers=12)

correct = 0
for i, n in enumerate(validation_generator.filenames):
    if "Affenpinscher" in n and scores[i][0] <= 0.5:
        correct += 1

print("Correct:", correct, " Total: ", len(validation_generator.filenames))
print("Loss: ", score[0], "Accuracy: ", score[1]*100, "%")
```

#### This will give you the accuracy for the dog breed we searched for, `_Affenpinscher_`

#
Trying CIFAR-10 for GoogLeNet

- As the image size in CIFAR-10 is small (32 x 32), we use only one level, as shown in the figure.

![googlenet](https://qph.fs.quoracdn.net/main-qimg-1593dbc4944be77ade976bbb8e1dc0b2-c)

```
# Hyperparameters
batch_size = 128
num_classes = 10
epochs = 50

from keras.datasets import cifar10
from keras.optimizers import RMSprop  # RMSprop was not imported earlier

# Load CIFAR10 data
(x_train, y_train), (x_test, y_test) = cifar10.load_data()
img_height, img_width, channel = x_train.shape[1], x_train.shape[2], x_train.shape[3]

# convert to one-hot encoding
y_train = keras.utils.to_categorical(y_train, num_classes)
y_test = keras.utils.to_categorical(y_test, num_classes)

input = Input(shape=(img_height, img_width, channel,))

Conv2D_1 = Conv2D(64, (3, 3), activation='relu', padding='same')(input)
MaxPool2D_1 = MaxPooling2D(pool_size=(2, 2), strides=(2, 2))(Conv2D_1)
BatchNorm_1 = BatchNormalization()(MaxPool2D_1)

Module_1 = add_module(BatchNorm_1, 16, 16, 16, 16, 16)
Module_1 = add_module(Module_1, 16, 16, 16, 16, 16)

Output = Flatten()(Module_1)
Output = Dense(num_classes, activation='softmax')(Output)

model = Model(inputs=[input], outputs=[Output])
model.summary()

parallel_model = multi_gpu_model(model, gpus=2)
RMsprop = RMSprop(lr=0.0001, rho=0.9, epsilon=None, decay=0.0)
parallel_model.compile(loss='categorical_crossentropy', optimizer=RMsprop, metrics=['accuracy'])
parallel_model.fit(x_train, y_train, epochs=epochs, validation_data=(x_test, y_test))

# I think this run stopped because I kept the VM running while my laptop slept, so it did not autosave.
scores = model.evaluate(x_test, y_test, verbose=0)
print("Accuracy: %.2f%%" % (scores[1]*100))
```

###### Challenges we faced doing this project that might help others who want to do a similar project:

- Make sure you have good computation power, with at least two GPUs.
- Try to run the model in parallel so it takes less time to train and test.
- Start with a simple architecture: run 30-40 epochs first and check whether the model is overfitting before committing to a long training run.
- Try to keep the best weights at every epoch. It is a bit tricky with a parallel model, but it can be done.
- Run the Jupyter notebook in the background using nohup; the documentation for that is at this [link](https://hackernoon.com/aws-ec2-part-4-starting-a-jupyter-ipython-notebook-server-on-aws-549d87a55ba9)
- If you want to try a notebook that contains Theano code, you can do it in 3 easy steps:
    1. First [install](http://deeplearning.net/software/theano/install.html) Theano with GPU support.
    2. Run `nano ~/.keras/keras.json` and change the following:
       { "image_dim_ordering": "th", "backend": "theano", "image_data_format": "channels_first" }
    3. Restart the Jupyter notebook.
- My model was overfitting and I was not able to figure out why, so the first thing I tried was image augmentation, then changing the learning rate, then changing the layers, and so on. It is, I guess, a **trial and error** method. In my case each epoch took around 160-180 s with two Tesla K-80 GPUs; it was time consuming but a good way to learn. You should have patience.

The content of this project itself is licensed under the, and the underlying source code used to format and display that content is licensed under the [MIT LICENSE](https://github.com/hirenpatel27/ADS/blob/master/LICENSE)

## Citations

<a id='google-net'> [1] Alex Krizhevsky, Ilya Sutskever, and Geoffrey E. Hinton. "ImageNet Classification with Deep Convolutional Neural Networks." NIPS 2012 <br>
<a id='inception-v1-paper'> [2] Christian Szegedy, Wei Liu, Yangqing Jia, Pierre Sermanet, Scott Reed, Dragomir Anguelov, Dumitru Erhan, Andrew Rabinovich. "Going Deeper with Convolutions." CVPR 2015. <br>
<a id='vgg-paper'> [3] Karen Simonyan and Andrew Zisserman. "Very Deep Convolutional Networks for Large-Scale Image Recognition."
ICLR 2015 <br>
<a id='resnet-cvpr'> [4] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. "Deep Residual Learning for Image Recognition." CVPR 2016. <br>
<a id='resnet-eccv'> [5] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. "Identity Mappings in Deep Residual Networks." ECCV 2016.

## References

[1] [GoogleNet in Keras](http://joelouismarino.github.io/blog_posts/blog_googlenet_keras.html) for understanding the GoogLeNet architecture.

[2] [Keras Documentation](https://keras.io/) for how to use Keras

[3] [Convolution Neural Networks for Visual Recognition](http://cs231n.github.io/convolutional-networks/) for understanding how CNNs work

[4] [How convolution neural network Works](https://www.youtube.com/watch?v=FmpDIaiMIeA&t=634s)

[5] [Dog breed classification with Keras](http://machinememos.com/python/keras/artificial%20intelligence/machine%20learning/transfer%20learning/dog%20breed/neural%20networks/convolutional%20neural%20network/tensorflow/image%20classification/imagenet/2017/07/11/dog-breed-image-classification.html)

[6] Keras blog [Image Augmentation](https://blog.keras.io/building-powerful-image-classification-models-using-very-little-data.html)

[7] [ImageDataGenerator Methods](https://keras.io/preprocessing/image/#imagedatagenerator-methods)
## 1. Obtain and review raw data
<p>One day, my old running friend and I were chatting about our running styles, training habits, and achievements, when I suddenly realized that I could take an in-depth analytical look at my training. I have been using a popular GPS fitness tracker called <a href="https://runkeeper.com/">Runkeeper</a> for years and decided it was time to analyze my running data to see how I was doing.</p>
<p>Since 2012, I've been using the Runkeeper app, and it's great. One key feature: its excellent data export. Anyone who has a smartphone can download the app and analyze their data like we will in this notebook.</p>
<p><img src="https://assets.datacamp.com/production/project_727/img/runner_in_blue.jpg" alt="Runner in blue" title="Explore world, explore your data!"></p>
<p>After logging your run, the first step is to export the data from Runkeeper (which I've done already). Then import the data and start exploring to find potential problems. After that, create data cleaning strategies to fix the issues. Finally, analyze and visualize the clean time-series data.</p>
<p>I exported seven years worth of my training data, from 2012 through 2018. The data is a CSV file where each row is a single training activity. Let's load and inspect it.</p>

```
# Import pandas
import pandas as pd

# Define file containing dataset
runkeeper_file = 'datasets/cardioActivities.csv'

# Create DataFrame with parse_dates and index_col parameters
df_activities = pd.read_csv(runkeeper_file, parse_dates=True, index_col='Date')

# First look at exported data: select sample of 3 random rows
display(df_activities.sample(3))

# Print DataFrame summary
df_activities.info()
```

## 2. Data preprocessing
<p>Lucky for us, the column names Runkeeper provides are informative, and we don't need to rename any columns.</p>
<p>But, we do notice missing values using the <code>info()</code> method. What are the reasons for these missing values? It depends.
Some heart rate information is missing because I didn't always use a cardio sensor. In the case of the <code>Notes</code> column, it is an optional field that I sometimes left blank. Also, I only used the <code>Route Name</code> column once, and never used the <code>Friend's Tagged</code> column.</p>
<p>We'll fill in missing values in the heart rate column to avoid misleading results later, but right now, our first data preprocessing steps will be to:</p>
<ul>
<li>Remove columns not useful for our analysis.</li>
<li>Replace the "Other" activity type with "Unicycling" because that was always the "Other" activity.</li>
<li>Count missing values.</li>
</ul>

```
# Define list of columns to be deleted
cols_to_drop = ['Friend\'s Tagged','Route Name','GPX File','Activity Id','Calories Burned', 'Notes']

# Delete unnecessary columns
df_activities.drop(cols_to_drop, axis=1, inplace=True)

# Count types of training activities
display(df_activities['Type'].value_counts())

# Rename 'Other' type to 'Unicycling'
df_activities['Type'] = df_activities['Type'].str.replace('Other', 'Unicycling')

# Count missing values for each column
df_activities.isnull().sum()
```

## 3. Dealing with missing values
<p>As we can see from the last output, there are 214 missing entries for my average heart rate.</p>
<p>We can't go back in time to get those data, but we can fill in the missing values with an average value. This process is called <em>mean imputation</em>. When imputing the mean to fill in missing data, we need to consider that the average heart rate varies for different activities (e.g., walking vs. running).
We'll filter the DataFrames by activity type (<code>Type</code>) and calculate each activity's mean heart rate, then fill in the missing values with those means.</p>

```
# Calculate sample means for heart rate for each training activity type
avg_hr_run = df_activities[df_activities['Type'] == 'Running']['Average Heart Rate (bpm)'].mean()
avg_hr_cycle = df_activities[df_activities['Type'] == 'Cycling']['Average Heart Rate (bpm)'].mean()

# Split whole DataFrame into several, specific for different activities
df_run = df_activities[df_activities['Type'] == 'Running'].copy()
df_walk = df_activities[df_activities['Type'] == 'Walking'].copy()
df_cycle = df_activities[df_activities['Type'] == 'Cycling'].copy()

# Fill missing values with the calculated means (a fixed 110 bpm for walking)
df_walk['Average Heart Rate (bpm)'].fillna(110, inplace=True)
df_run['Average Heart Rate (bpm)'].fillna(int(avg_hr_run), inplace=True)
df_cycle['Average Heart Rate (bpm)'].fillna(int(avg_hr_cycle), inplace=True)

# Count missing values for each column in running data
df_run.isnull().sum()
```

## 4. Plot running data
<p>Now we can create our first plot! As we found earlier, most of the activities in my data were running (459 of them to be exact). There are only 29, 18, and two instances for cycling, walking, and unicycling, respectively. So for now, let's focus on plotting the different running metrics.</p>
<p>An excellent first visualization is a figure with four subplots, one for each running metric (each numerical column). Each subplot will have a different y-axis, which is explained in each legend.
The x-axis, <code>Date</code>, is shared among all subplots.</p>

```
# Import matplotlib, set style and ignore warning
import matplotlib.pyplot as plt
%matplotlib inline
import warnings
plt.style.use('ggplot')
warnings.filterwarnings(
    action='ignore',
    module='matplotlib.figure',
    category=UserWarning,
    message=('This figure includes Axes that are not compatible with tight_layout, so results might be incorrect.')
)

# Prepare data subsetting period from 2013 till 2018
# (the index is sorted newest-first, so the slice runs from 2019 down to 2013)
runs_subset_2013_2018 = df_run.loc['20190101':'20130101']

# Create, plot and customize in one step
runs_subset_2013_2018.plot(subplots=True,
                           sharex=False,
                           figsize=(12, 16),
                           linestyle='none',
                           marker='o',
                           markersize=3,
                           )

# Show plot
plt.show()
```

## 5. Running statistics
<p>No doubt, running helps people stay mentally and physically healthy and productive at any age. And it is great fun! When runners talk to each other about their hobby, we not only discuss our results, but we also discuss different training strategies. </p>
<p>You'll know you're with a group of runners if you commonly hear questions like:</p>
<ul>
<li>What is your average distance?</li>
<li>How fast do you run?</li>
<li>Do you measure your heart rate?</li>
<li>How often do you train?</li>
</ul>
<p>Let's find the answers to these questions in my data. If you look back at plots in Task 4, you can see the answer to, <em>Do you measure your heart rate?</em> Before 2015: no. To look at the averages, let's only use the data from 2015 through 2018.</p>
<p>In pandas, the <code>resample()</code> method is similar to the <code>groupby()</code> method - with <code>resample()</code> you group by a specific time span. We'll use <code>resample()</code> to group the time series data by a sampling period and apply several methods to each sampling period.
In our case, we'll resample annually and weekly.</p>

```
# Prepare running data for the last 4 years
runs_subset_2015_2018 = df_run.loc['20190101':'20150101']

# Calculate annual statistics
print('How my average run looks in last 4 years:')
display(runs_subset_2015_2018.resample('A').mean())

# Calculate weekly statistics
print('Weekly averages of last 4 years:')
display(runs_subset_2015_2018.resample('W').mean().mean())

# Mean weekly counts
weekly_counts_average = runs_subset_2015_2018['Distance (km)'].resample('W').count().mean()
print('How many trainings per week I had on average:', weekly_counts_average)
```

## 6. Visualization with averages
<p>Let's plot the long term averages of my distance run and my heart rate with their raw data to visually compare the averages to each training session. Again, we'll use the data from 2015 through 2018.</p>
<p>In this task, we will use <code>matplotlib</code> functionality for plot creation and customization.</p>

```
# Prepare data
runs_distance = runs_subset_2015_2018['Distance (km)']
runs_hr = runs_subset_2015_2018['Average Heart Rate (bpm)']

# Create plot
fig, (ax1, ax2) = plt.subplots(2, sharex=True, figsize=(12, 8))

# Plot and customize first subplot
runs_distance.plot(ax=ax1)
ax1.set(ylabel='Distance (km)', title='Historical data with averages')
ax1.axhline(runs_distance.mean(), color='blue', linewidth=1, linestyle='-.')

# Plot and customize second subplot
runs_hr.plot(ax=ax2, color='gray')
ax2.set(xlabel='Date', ylabel='Average Heart Rate (bpm)')
ax2.axhline(runs_hr.mean(), color='blue', linewidth=1, linestyle='-.')

# Show plot
plt.show()
```

## 7. Did I reach my goals?
<p>To motivate myself to run regularly, I set a target goal of running 1000 km per year. Let's visualize my annual running distance (km) from 2013 through 2018 to see if I reached my goal each year.
Only stars in the green region indicate success.</p>

```
# Prepare data
df_run_dist_annual = df_run.sort_index()['20130101':'20181231']['Distance (km)'] \
    .resample('A').sum()

# Create plot
fig = plt.figure(figsize=(8, 5))

# Plot and customize
ax = df_run_dist_annual.plot(marker='*', markersize=14, linewidth=0, color='blue')
ax.set(ylim=[0, 1210], xlim=['2012', '2019'],
       ylabel='Distance (km)', xlabel='Years',
       title='Annual totals for distance')

ax.axhspan(1000, 1210, color='green', alpha=0.4)
ax.axhspan(800, 1000, color='yellow', alpha=0.3)
ax.axhspan(0, 800, color='red', alpha=0.2)

# Show plot
plt.show()
```

## 8. Am I progressing?
<p>Let's dive a little deeper into the data to answer a tricky question: am I progressing in terms of my running skills? </p>
<p>To answer this question, we'll decompose my weekly distance run and visually compare it to the raw data. A red trend line will represent the weekly distance run.</p>
<p>We are going to use <code>statsmodels</code> library to decompose the weekly trend.</p>

```
# Import required library
import statsmodels.api as sm

# Prepare data
df_run_dist_wkly = df_run.loc['20190101':'20130101']['Distance (km)'] \
    .resample('W').bfill()
decomposed = sm.tsa.seasonal_decompose(df_run_dist_wkly, extrapolate_trend=1, freq=52)

# Create plot
fig = plt.figure(figsize=(12, 5))

# Plot and customize
ax = decomposed.trend.plot(label='Trend', linewidth=2)
ax = decomposed.observed.plot(label='Observed', linewidth=0.5)

ax.legend()
ax.set_title('Running distance trend')

# Show plot
plt.show()
```

## 9. Training intensity
<p>Heart rate is a popular metric used to measure training intensity. Depending on age and fitness level, heart rates are grouped into different zones that people can target depending on training goals.
A target heart rate during moderate-intensity activities is about 50-70% of maximum heart rate, while during vigorous physical activity it’s about 70-85% of maximum.</p>
<p>We'll create a distribution plot of my heart rate data by training intensity. It will be a visual presentation of the number of activities in predefined training zones. </p>

```
# Prepare data
hr_zones = [100, 125, 133, 142, 151, 173]
zone_names = ['Easy', 'Moderate', 'Hard', 'Very hard', 'Maximal']
zone_colors = ['green', 'yellow', 'orange', 'tomato', 'red']
df_run_hr_all = df_run['Average Heart Rate (bpm)']

# Create plot
fig, ax = plt.subplots(figsize=(8, 5))

# Plot and customize
n, bins, patches = ax.hist(df_run_hr_all, bins=hr_zones, alpha=0.5)
for i in range(0, len(patches)):
    patches[i].set_facecolor(zone_colors[i])

ax.set(title='Distribution of HR', ylabel='Number of runs')
ax.xaxis.set(ticks=hr_zones)

# Show plot
plt.show()
```

## 10. Detailed summary report
<p>With all this data cleaning, analysis, and visualization, let's create detailed summary tables of my training. </p>
<p>To do this, we'll create two tables. The first table will be a summary of the distance (km) and climb (m) variables for each training activity. The second table will list the summary statistics for the average speed (km/hr), climb (m), and distance (km) variables for each training activity.</p>

```
# Concatenating three DataFrames
df_run_walk_cycle = pd.concat([df_run, df_walk, df_cycle]).sort_index(ascending=False)

dist_climb_cols, speed_col = ['Distance (km)', 'Climb (m)'], ['Average Speed (km/h)']

# Calculating total distance and climb for each type of activity
df_totals = df_run_walk_cycle.groupby('Type')[dist_climb_cols].sum()

print('Totals for different training types:')
display(df_totals)

# Calculating summary statistics for each type of activity
df_summary = df_run_walk_cycle.groupby('Type')[dist_climb_cols + speed_col].describe()

# Combine totals with summary
for i in dist_climb_cols:
    df_summary[i, 'total'] = df_totals[i]

print('Summary statistics for different training types:')
display(df_summary.stack())
``` ## 11. Fun facts <p>To wrap up, let’s pick some fun facts out of the summary tables and solve the last exercise.</p> <p>These data (my running history) represent 6 years, 2 months and 21 days. And I remember how many running shoes I went through–7.</p> <pre><code>FUN FACTS - Average distance: 11.38 km - Longest distance: 38.32 km - Highest climb: 982 m - Total climb: 57,278 m - Total number of km run: 5,224 km - Total runs: 459 - Number of running shoes gone through: 7 pairs </code></pre> <p>The story of Forrest Gump is well known–the man who, for no particular reason, decided to go for a "little run." His epic run lasted 3 years, 2 months and 14 days (1169 days). In the picture you can see Forrest’s route of 24,700 km. </p> <pre><code>FORREST RUN FACTS - Average distance: 21.13 km - Total number of km run: 24,700 km - Total runs: 1169 - Number of running shoes gone through: ... </code></pre> <p>Assuming Forrest and I go through running shoes at the same rate, figure out how many pairs of shoes Forrest needed for his run.</p> <p><img src="https://assets.datacamp.com/production/project_727/img/Forrest_Gump_running_route.png" alt="Forrest's route" title="Little run of Forrest Gump"></p> ``` # Average shoe lifetime (km per pair) from our fun facts average_shoes_lifetime = ... # Number of pairs of shoes for Forrest's run distance shoes_for_forrest_run = ... print('Forrest Gump would need {} pairs of shoes!'.format(shoes_for_forrest_run)) ```
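One way to fill in the template above — a minimal sketch using the fun-fact numbers (5,224 km over 7 pairs), assuming a partially worn pair still counts as a pair, hence rounding up:

```python
import math

# Average shoe lifetime in km per pair, from the fun facts: 5,224 km over 7 pairs
average_shoes_lifetime = 5224 / 7  # roughly 746 km per pair

# Forrest ran 24,700 km; round up, since a partially worn-out pair still counts
shoes_for_forrest_run = math.ceil(24700 / average_shoes_lifetime)

print('Forrest Gump would need {} pairs of shoes!'.format(shoes_for_forrest_run))
```

Rounding down instead (floor division) would give the number of pairs fully worn out rather than the number needed.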
One of the simplest clustering methods is k-means, in which the number of clusters k is chosen in advance, after which the goal is to partition the inputs into sets S1, ..., Sk in a way that minimizes the total sum of squared distances from each point to the mean of its assigned cluster. We will settle for an iterative algorithm that usually finds a good clustering: - Start with a set of k means, which are points in d-dimensional space, chosen at random - Assign each point to the mean it is closest to - If no point's assignment has changed, stop and keep the clusters - If some point's assignment has changed, recompute the means and return to step 2 ``` from linear_algebra import Vector # helper function that counts how many cluster assignments have changed def num_differences(v1: Vector, v2: Vector) -> int: assert len(v1) == len(v2) return len([x1 for x1, x2 in zip(v1, v2) if x1 != x2]) assert num_differences([1, 2, 3], [2, 1, 3]) == 2 assert num_differences([1, 2], [1, 2]) == 0 from typing import List from linear_algebra import vector_mean import random def cluster_means(k: int, inputs: List[Vector], assignments: List[int]) -> List[Vector]: # clusters[i] contains the inputs whose assignment is i clusters = [[] for i in range(k)] for input, assignment in zip(inputs, assignments): clusters[assignment].append(input) # if a cluster is empty, just use a random point return [vector_mean(cluster) if cluster else random.choice(inputs) for cluster in clusters] inputs = [[-1, 1], [-2,3], [-3, 4], [4, 5], [-2, 6], [0, 3]] assignments = [0, 0, 2, 2, 2, 1] cluster_means(6, inputs, assignments) import itertools from linear_algebra import squared_distance class KMeans: def __init__(self, k: int) -> None: self.k = k # number of clusters self.means = None def classify(self, input: Vector) -> int: """return the index of the cluster closest to the input""" # self.means
will already have been computed by the time classify is called # len(self.means) == k return min(range(self.k), key=lambda i: squared_distance(input, self.means[i])) def train(self, inputs: List[Vector]) -> None: # start with random assignments assignments = [random.randrange(self.k) for _ in inputs] for _ in itertools.count(): # compute means self.means = cluster_means(self.k, inputs, assignments) # and find new assignments new_assignments = [self.classify(input) for input in inputs] # check how many assignments have changed; if none, we're done num_changed = num_differences(assignments, new_assignments) if num_changed == 0: return # otherwise keep the new assignments and iterate again assignments = new_assignments print(f"changed: {num_changed} / {len(inputs)}") inputs: List[List[float]] = [[-14,-5],[13,13],[20,23],[-19,-11],[-9,-16],[21,27],[-49,15],[26,13],[-46,5],[-34,-1],[11,15], [-49,0],[-22,-16],[19,28],[-12,-8],[-13,-19],[-41,8],[-11,-6],[-25,-9],[-18,-3]] k = 2 random.seed(0) clusterer = KMeans(k) clusterer.train(inputs) means = sorted(clusterer.means) assert len(means) == k means # check that the means are close to what we expect squared_distance(means[0], [-44, 5]) ``` #### Choosing k There are various ways to choose k.
One that is reasonably easy to develop intuition for involves plotting the sum of squared errors (between each point and the mean of its cluster) as a function of k, and looking at where the graph "bends". ``` from matplotlib import pyplot as plt def squared_clustering_errors(inputs: List[Vector], k: int) -> float: """finds the total squared error from k-means clustering the inputs""" clusterer = KMeans(k) clusterer.train(inputs) means = clusterer.means # there isn't an assignments attribute, so recompute the assignments assignments = [clusterer.classify(input) for input in inputs] return sum(squared_distance(input, means[cluster]) for input, cluster in zip(inputs, assignments)) clusterer = KMeans(3) clusterer.train(inputs) means = clusterer.means means assignments = [clusterer.classify(input) for input in inputs] assignments ```
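The elbow idea can be demonstrated without the book's `linear_algebra` helpers. A self-contained sketch, re-implementing `squared_distance` and `vector_mean` inline, on the small example points used earlier:

```python
import random
from typing import List

Vector = List[float]

def squared_distance(v: Vector, w: Vector) -> float:
    """Sum of squared coordinate differences."""
    return sum((v_i - w_i) ** 2 for v_i, w_i in zip(v, w))

def vector_mean(vectors: List[Vector]) -> Vector:
    """Component-wise mean of a non-empty list of vectors."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def kmeans_sse(inputs: List[Vector], k: int, iters: int = 100) -> float:
    """Run a basic k-means and return the total sum of squared errors."""
    means = random.sample(inputs, k)
    for _ in range(iters):
        # assign each point to its nearest mean
        assignments = [min(range(k), key=lambda i: squared_distance(p, means[i]))
                       for p in inputs]
        # recompute each mean; an empty cluster falls back to a random point
        means = [vector_mean([p for p, a in zip(inputs, assignments) if a == i])
                 if any(a == i for a in assignments) else random.choice(inputs)
                 for i in range(k)]
    # final assignments against the converged means
    assignments = [min(range(k), key=lambda i: squared_distance(p, means[i]))
                   for p in inputs]
    return sum(squared_distance(p, means[a]) for p, a in zip(inputs, assignments))

random.seed(0)
points = [[-1.0, 1.0], [-2.0, 3.0], [-3.0, 4.0], [4.0, 5.0], [-2.0, 6.0], [0.0, 3.0]]
errors = {k: kmeans_sse(points, k) for k in range(1, 5)}
print(errors)  # SSE shrinks as k grows; the "bend" suggests a good k
```

Plotting `errors` against k with the `squared_clustering_errors` helper above gives the same elbow picture for the real data set.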
``` # Dependencies and Setup import matplotlib.pyplot as plt import pandas as pd import numpy as np import requests import time import json import scipy.stats as st from scipy.stats import linregress # Import API key from api_keys import g_key from citipy import citipy # https://pypi.org/project/citipy/ (pip install citipy) # Range of latitudes and longitudes lat_range = (-90, 90) lng_range = (-180, 180) # Lists for holding latitude-longitude pairs and city names lat_lngs = [] cities = [] # Create a set of random lat and lng combinations latitudes = np.random.uniform(low=-90.000, high=90.000, size=1500) longitudes = np.random.uniform(low=-180.000, high=180.000, size=1500) latitudes_longitudes = zip(latitudes, longitudes) # Identify the nearest city for each latitude-longitude combination for lat_lng in latitudes_longitudes: city = citipy.nearest_city(lat_lng[0], lat_lng[1]).city_name # If the city is not already present, add it to our cities list if city not in cities: cities.append(city) # Print the city count to confirm a sufficient count len(cities) #Perform API Calls cloudiness = [] country = [] date = [] humidity = [] latitude_list = [] longitude_list = [] maximum_temp = [] wind_speed = [] from api_keys import weather_api_key index_counter = 0 set_counter = 1 # Save config information. url = "http://api.openweathermap.org/data/2.5/weather?"
units = "imperial" # Build partial query URL query_url = f"{url}appid={weather_api_key}&units={units}&q=" # For each city in the cities list, request its weather; use exception handling to skip cities the API can't find city_names = [] for index, city in enumerate(cities, start = 1): try: response = requests.get(query_url + city).json() cloudiness.append(response["clouds"]["all"]) country.append(response["sys"]["country"]) date.append(response["dt"]) humidity.append(response["main"]["humidity"]) latitude_list.append(response["coord"]["lat"]) longitude_list.append(response["coord"]["lon"]) maximum_temp.append(response['main']['temp_max']) wind_speed.append(response["wind"]["speed"]) city_names.append(city) if index_counter > 49: index_counter = 0 set_counter = set_counter + 1 else: index_counter = index_counter + 1 print(f"Processing Record {index_counter} of Set {set_counter} : {city}") except(KeyError, IndexError): print("City not found. Skipping...") # Use the list of successfully fetched city names (not the loop variable) so all columns have equal length weather_dictionary = { "City": city_names, "Cloudiness": cloudiness, "Country": country, "Date": date, "Humidity": humidity, "Lat": latitude_list, "Lng": longitude_list, "Max Temp": maximum_temp, "Wind Speed": wind_speed } weather_dataframe = pd.DataFrame(weather_dictionary) weather_dataframe.head(10) weather_dataframe.to_csv("weather_df.csv", index=False) ``` #Plotting the Data Use proper labeling of the plots using plot titles (including date of analysis) and axes labels. Save the plotted figures as .pngs. ``` #Latitude vs. Temperature Plot plt.scatter(weather_dataframe["Lat"], weather_dataframe["Max Temp"], facecolor = "lightblue", edgecolor = "black") plt.title("City Latitude vs. Max Temperature (01/17/20)") plt.xlabel("Latitude") plt.ylabel("Maximum Temperature (F)") plt.grid(linestyle='-', linewidth=1, alpha = 0.5) # Save the plotted figure as .pngs plt.savefig("Latitude vs Max Temperature.png") ``` Observation- Latitude vs Max Temperature.png: As latitude increases away from the equator, the temperature drops; the maximum temperatures are found around latitude 0. ``` #Latitude vs. 
Humidity Plot plt.scatter(weather_dataframe["Lat"], weather_dataframe["Humidity"], marker='o', s=30, edgecolors= "black") plt.title("City Latitude vs Humidity") plt.ylabel("Humidity Level (%)") plt.xlabel("Latitude") plt.grid() plt.savefig("Latitude vs Humidity.png") ``` Observation- Latitude vs Humidity: As the latitude gets higher, humidity tends to get higher too. ``` #Latitude vs. Cloudiness Plot plt.scatter(weather_dataframe["Lat"], weather_dataframe["Cloudiness"], marker='o', s=30, edgecolors= "black") plt.title("City Latitude vs Cloudiness") plt.ylabel("Cloudiness Level (%)") plt.xlabel("Latitude") plt.grid() # plt.show() plt.savefig("Latitude vs Cloudiness.png") ``` Observation- Latitude vs Cloudiness: Cloudiness values are spread across all latitudes. ``` #Latitude vs. Wind Speed Plot plt.scatter(weather_dataframe["Lat"], weather_dataframe["Wind Speed"], marker='o', s=30, edgecolors= "black") plt.title("City Latitude vs Wind Speed") plt.ylabel("Wind Speed (mph)") plt.xlabel("Latitude") plt.grid() plt.savefig("Latitude vs Wind Speed.png") ``` Observation- Latitude vs Wind Speed: Wind speed values are present across all latitudes. ``` #Linear Regression # Create Northern and Southern Hemisphere DataFrames northern_hemisphere = weather_dataframe.loc[weather_dataframe["Lat"] >= 0] southern_hemisphere = weather_dataframe.loc[weather_dataframe["Lat"] < 0] #Northern Hemisphere - Max Temp vs. Latitude Linear Regression #Create a Scatter Plot for Latitude vs Temperature of City x_values = northern_hemisphere['Lat'] y_values = northern_hemisphere['Max Temp'] (slope, intercept, rvalue, pvalue, stderr) = linregress(x_values, y_values) reg_values = x_values * slope + intercept line_eq = "y = " + str(round(slope,2)) + "x + " + str(round(intercept,2)) plt.scatter(x_values,y_values) plt.plot(x_values,reg_values,"r-") plt.annotate(line_eq,(5,10),fontsize=15,color="red") plt.ylim(0,100) plt.xlim(0, 80) plt.ylabel("Max. 
Temp") plt.xlabel("Latitude") plt.savefig("North Max Temp vs Latitude Regression.png") ``` Observation: negative correlation between latitude and Max Temperature in the Northern Hemisphere ``` #Southern Hemisphere - Max Temp vs. Latitude Linear Regression #Create a Scatter Plot for Latitude vs Temperature of City (Southern Hemisphere) x_values = southern_hemisphere['Lat'] y_values = southern_hemisphere['Max Temp'] (slope, intercept, rvalue, pvalue, stderr) = linregress(x_values, y_values) reg_values = x_values * slope + intercept line_eq = "y = " + str(round(slope,2)) + "x + " + str(round(intercept,2)) plt.scatter(x_values,y_values) plt.plot(x_values,reg_values,"r-") plt.annotate(line_eq,(6,10),fontsize=15,color="red") plt.ylim(30, 100) plt.xlim(-60, 0) plt.ylabel("Max. Temp") plt.xlabel("Latitude") plt.savefig("South Max Temp vs Latitude Regression.png") #Northern Hemisphere - Humidity (%) vs. Latitude Linear Regression x_values = northern_hemisphere['Lat'] y_values = northern_hemisphere['Humidity'] (slope, intercept, rvalue, pvalue, stderr) = linregress(x_values, y_values) reg_values = x_values * slope + intercept line_eq = "y = " + str(round(slope,2)) + "x + " + str(round(intercept,2)) plt.scatter(x_values,y_values) plt.plot(x_values,reg_values,"r-") plt.annotate(line_eq,(6,10),fontsize=15,color="red") plt.ylabel("Humidity") plt.xlabel("Latitude") plt.savefig("North Humidity vs Latitude Linear Regression.png") ``` Observation: weak negative correlation ``` #Southern Hemisphere - Humidity (%) vs. Latitude Linear Regression x_values = southern_hemisphere['Lat'] y_values = southern_hemisphere['Humidity'] (slope, intercept, rvalue, pvalue, stderr) = linregress(x_values, y_values) reg_values = x_values * slope + intercept line_eq = "y = " + str(round(slope,2)) + "x + " + str(round(intercept,2)) plt.scatter(x_values,y_values) plt.plot(x_values,reg_values,"r-") plt.annotate(line_eq,(-25,10),fontsize=15,color="red") plt.ylim(0, 100) plt.ylabel("Humidity") plt.xlabel("Latitude") plt.savefig("South Humidity vs Latitude Linear Regression.png") #Northern Hemisphere - Cloudiness (%) vs. 
Latitude Linear Regression x_values = northern_hemisphere['Lat'] y_values = northern_hemisphere['Cloudiness'] (slope, intercept, rvalue, pvalue, stderr) = linregress(x_values, y_values) reg_values = x_values * slope + intercept line_eq = "y = " + str(round(slope,2)) + "x + " + str(round(intercept,2)) plt.scatter(x_values,y_values) plt.plot(x_values,reg_values,"r-") plt.annotate(line_eq,(6,10),fontsize=15,color="red") plt.ylabel("Cloudiness") plt.xlabel("Latitude") plt.savefig("North Cloudiness vs Latitude Linear Regression.png") #Southern Hemisphere - Cloudiness (%) vs. Latitude Linear Regression x_values = southern_hemisphere['Lat'] y_values = southern_hemisphere['Cloudiness'] (slope, intercept, rvalue, pvalue, stderr) = linregress(x_values, y_values) reg_values = x_values * slope + intercept line_eq = "y = " + str(round(slope,2)) + "x + " + str(round(intercept,2)) plt.scatter(x_values,y_values) plt.plot(x_values,reg_values,"r-") plt.annotate(line_eq,(-25,10),fontsize=15,color="red") plt.ylabel("Cloudiness") plt.xlabel("Latitude") plt.savefig("South Cloudiness vs Latitude Linear Regression.png") #Northern Hemisphere - Wind Speed (mph) vs. Latitude Linear Regression x_values = northern_hemisphere['Lat'] y_values = northern_hemisphere['Wind Speed'] (slope, intercept, rvalue, pvalue, stderr) = linregress(x_values, y_values) reg_values = x_values * slope + intercept line_eq = "y = " + str(round(slope,2)) + "x + " + str(round(intercept,2)) plt.scatter(x_values,y_values) plt.plot(x_values,reg_values,"r-") plt.annotate(line_eq,(45,22),fontsize=15,color="red") plt.ylabel("Wind Speed (mph)") plt.xlabel("Latitude") plt.savefig("North Wind speed vs Latitude Linear Regression.png") ``` Observation: low positive correlation
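The per-cell regression boilerplate above could be factored into a single helper. A minimal sketch using `numpy.polyfit`, whose degree-1 fit returns the same slope and intercept as `linregress`, shown on hypothetical synthetic data (the lat/temp values below are made up for illustration):

```python
import numpy as np

def fit_line(x, y):
    """Least-squares line y = slope*x + intercept (slope/intercept as in linregress)."""
    slope, intercept = np.polyfit(x, y, deg=1)
    return slope, intercept

# Hypothetical data: temperature falling linearly with latitude
lat = np.array([0.0, 10.0, 20.0, 30.0, 40.0, 50.0])
temp = 90.0 - 0.7 * lat  # exact line, so the fit recovers slope -0.7

slope, intercept = fit_line(lat, temp)
line_eq = "y = " + str(round(slope, 2)) + "x + " + str(round(intercept, 2))
print(line_eq)  # y = -0.7x + 90.0
```

Calling such a helper once per hemisphere/variable pair would also have avoided the stale-`slope` reuse that the cells above are prone to.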
## Next Task: compute precision and recall threshold 25: zoomy, sustain -> thick, smooth (user results) zoomy, sustain -> dark, smooth (word2word matcher results) smooth tp dark fp thick fn precision = tp/(tp+fp) recall = tp/(tp+fn) for one word, we can't compute recall later: tensorflow language models, optimising (Kullback-Leibler divergence) for the distribution However, note: Let A and B be any sets with |A|=|B| (|.| being the set cardinality, i.e. the number of elements in the set). It follows that fp = |B\A∩B| = |B|-|A∩B| = |A|-|A∩B| = |A\A∩B| = fn. It hence follows that precision = tp/(tp+fp) = tp/(tp+fn) = recall. I understood your definition "A is the set of words in our ground truth, when you apply a threshold to the sliders; B is the set of words from the output of our words matcher" in a way such that |A|=|B| ``` import sys import ipdb import pandas as pd import numpy as np from tqdm import tqdm sys.path.append(r'C:\Temp\SoundOfAI\rg_text_to_sound\tts_pipeline\src') from match_word_to_words import prepare_dataset, word_to_wordpair_estimator, word_to_words_matcher import matplotlib.pyplot as plt df = pd.read_csv('text_to_qualities.csv') colnames = df.columns display(df.head(2)) df.shape df = pd.read_csv('text_to_qualities.csv') dfnew = df.copy() dfnew[dfnew.description.str.match('\'')] dfnew['description'] = dfnew.description.str.replace("'","") dfnew['description']=dfnew.description.str.lower().str.replace('(\(not.*\))','',regex=True) dfnew = dfnew[~dfnew.description.str.match('\(.*\)')] dfnew.head() wordlist = dfnew.description unique_word_list = np.unique(wordlist).tolist() len(wordlist),len(unique_word_list) ``` # word pair estimator ``` df_score = dfnew.iloc[:,1:] descriptions = dfnew.iloc[:,0] wordpairnames = df_score.columns.tolist() df_score.head() 
target_word_pairs = [('bright', 'dark'), ('full', 'hollow'), ('smooth', 'rough'), ('warm', 'metallic'), ('clear', 'muddy'), ('thin', 'thick'), ('pure', 'noisy'), ('rich', 'sparse'), ('soft', 'hard')] wordpairnames_to_wordpair_dict = {s:t for s,t in zip(wordpairnames,target_word_pairs)} wordpairnames_to_wordpair_dict A=set([1,2,3]) B=set([3,4,5]) AandB = A.intersection(B) B.difference(AandB) def single_word_precision_recall(word,scorerow,threshold,w2wpe,wordpairnames_to_wordpair_dict): elems_above = scorerow[(scorerow>(100-threshold)) ] elems_below = scorerow[(scorerow<=threshold) ] words_above = [wordpairnames_to_wordpair_dict[wordpairname][1] for wordpairname in elems_above.index] words_below = [wordpairnames_to_wordpair_dict[wordpairname][0] for wordpairname in elems_below.index] A = set(words_above+words_below) opposite_pairs_beyond_threshold = elems_above.index.tolist()+elems_below.index.tolist() B = set([w2wpe.match_word_to_wordpair(word,ind)['closest word'] for ind in opposite_pairs_beyond_threshold]) assert len(A)==len(B), 'This should never occur!' 
AandB = set(A).intersection(B) tp = AandB fp = B.difference(AandB) # were found but shouldn't have been fn = A.difference(AandB) # were not found but should have been den = len(tp)+len(fp) if den==0: precision = np.NaN else: precision = len(tp)/den den = len(tp)+len(fn) if den==0: recall = np.NaN else: recall = len(tp)/den if precision!=recall and not np.isnan(precision): print('This should never occur!') print('word, A,B,AandB,tp,fp,fn,precision,recall') print(word, A,B,AandB,tp,fp,fn,precision,recall) return precision,recall,len(A) lang_model='en_core_web_sm' w2wpe = word_to_wordpair_estimator() w2wpe.build(wordpairnames,target_word_pairs,lang_model=lang_model) w2wpe.match_word_to_wordpair('full','full_vs_hollow') word = descriptions[0] scorerow = df_score.iloc[0,:] prec_50_list=[] NrRelevantWordpairList=[] for word, (irow,scorerow) in tqdm(zip(descriptions, df_score.iterrows())): prec,rec,NrRelevantWordpairs = single_word_precision_recall(word,scorerow,10,w2wpe,wordpairnames_to_wordpair_dict) prec_50_list.append(prec) NrRelevantWordpairList.append(NrRelevantWordpairs) pd.Series(prec_50_list).dropna() len(prec_50_list),np.mean(prec_50_list) ' '.join([f'{i:1.1f}' for i in thresholdlist]) def compute_accuracy(lang_model='en_core_web_lg',thresholdlist=None): w2wpe = word_to_wordpair_estimator() w2wpe.build(wordpairnames,target_word_pairs,lang_model=lang_model) if thresholdlist is None: thresholdlist = list(np.arange(0,50,2))+list(np.arange(45,50,0.5))+[50.] mean_accuracy_list = [] nrrelevantlist = [] for threshold in tqdm(thresholdlist): acc_list=[] NrRelevantWordpairList=[] for word, (irow,scorerow) in zip(descriptions, df_score.iterrows()): precision,recall,NrRelevantWordpairs = single_word_precision_recall(word,scorerow,threshold,w2wpe,wordpairnames_to_wordpair_dict) acc_list.append(precision) NrRelevantWordpairList.append(NrRelevantWordpairs) assert len(acc_list)>0, 'something is wrong...' 
meanAccuracyVal = pd.Series(acc_list).dropna().mean() NrRelevantVal = np.mean(NrRelevantWordpairList) mean_accuracy_list.append(meanAccuracyVal) nrrelevantlist.append(NrRelevantVal) return mean_accuracy_list,nrrelevantlist %time lang_model1 = 'en_core_web_sm' lang_model2 = 'en_core_web_lg' mean_accuracy_list1,nrrelevantlist1 = compute_accuracy(lang_model=lang_model1) mean_accuracy_list2,nrrelevantlist2 = compute_accuracy(lang_model=lang_model2) lang_model3 = 'en_core_web_md' thresholdlist = list(np.arange(0,50,2))+list(np.arange(45,50,0.5))+[50.] mean_accuracy_list3,nrrelevantlist3 = compute_accuracy(lang_model=lang_model3,thresholdlist=thresholdlist) from nltk.corpus import wordnet # Then, we're going to use the term "program" to find synsets like so: syns = wordnet.synsets("program") if np.all(np.isclose(np.array(nrrelevantlist1),np.array(nrrelevantlist2))): nrrelevantlist = nrrelevantlist1 plt.figure(1,figsize=(15,7)) plt.subplot(3,1,1) plt.plot(thresholdlist,mean_accuracy_list1,marker='o',label='Accuracy') plt.suptitle(f'Accuracy vs. Threshold\nWords considered have (score <= threshold) or (score > 100-threshold)') plt.title(f'Accuracy of {lang_model1}') plt.ylabel('Accuracy') plt.legend() plt.subplot(3,1,2) plt.plot(thresholdlist,mean_accuracy_list2,marker='o',label='Accuracy') plt.title(f'Accuracy of {lang_model2}') plt.ylabel('Accuracy') plt.legend() plt.subplot(3,1,3) plt.plot(thresholdlist,nrrelevantlist,marker='o') plt.title('Average number of relevant sliders') plt.xlabel('threshold value') plt.ylabel('Nr of Sliders') plt.yticks(np.arange(1,10,2)) plt.subplots_adjust(hspace=.6) plt.figure(1,figsize=(15,7)) plt.subplot(1,1,1) plt.plot(thresholdlist,mean_accuracy_list3,marker='o',label='Accuracy') plt.suptitle(f'Accuracy vs. 
Threshold\nWords considered have (score <= threshold) or (score > 100-threshold)') plt.title(f'Accuracy of {lang_model3}') plt.ylabel('Accuracy') plt.legend() plt.figure(1,figsize=(15,7)) plt.subplot(2,1,1) plt.plot(thresholdlist,mean_accuracy_list1,marker='o',label=f'Accuracy of {lang_model1}') plt.plot(thresholdlist,mean_accuracy_list2,marker='o',label=f'Accuracy of {lang_model2}') plt.suptitle(f'Accuracy vs. Threshold\nWords considered have (score <= threshold) or (score > 100-threshold)') plt.ylabel('Accuracy') plt.legend() plt.subplot(2,1,2) plt.plot(thresholdlist,nrrelevantlist,marker='o') plt.title('Average number of relevant sliders') plt.xlabel('threshold value') plt.ylabel('Nr of Sliders') plt.yticks(np.arange(1,10,2)) plt.subplots_adjust(hspace=.6) plt.savefig('Accuracy_vs_Threshold.svg') lang_model = 'en_core_web_sm' w2wpe = word_to_wordpair_estimator() w2wpe.build(wordpairnames,target_word_pairs,lang_model=lang_model) prediction_dict = w2wpe.match_word_to_wordpair(word,ind) ind, prediction_dict def compute_mean_acc(dfnew,df_score,thresholdmargin,threshold=50, required_confidence=0, lang_model='en_core_web_sm'): """ Take the opposite quality pairs for which the slider value is outside the 50 +/- <thresholdmargin> band. Compute the accuracy in predicting the correct opposite-pair word for each such pair. threshold: where to split a score into the lower or upper quality of a pair; 50 is the most natural value. The prediction must be made with a (minimum) confidence <required_confidence>, otherwise the prediction is deemed unsure. 
The returned accuracy is computed as accuracy = NrCorrect/(NrCorrect+NrWrong+NrUnsure) averaged over all words in <dfnew>.description """ w2wpe = word_to_wordpair_estimator() w2wpe.build(wordpairnames,target_word_pairs,lang_model=lang_model) acc_list = [] unsure_list = [] NrCorrect = 0 NrWrong = 0 NrUnsure = 0 for word, (irow,scorerow) in zip(dfnew.description, df_score.iterrows()): #determine which opposite quality pairs will be correctly predicted as the first and second word in the word pair, respectively valid_qualities = scorerow[(scorerow > threshold+thresholdmargin )|(scorerow < threshold-thresholdmargin)] below_th = valid_qualities[valid_qualities<threshold].index.tolist()#first word in the word pair is correct above_th = valid_qualities[valid_qualities>threshold].index.tolist()#second word in the word pair is correct #word_pair_tuple = wordpairnames_to_wordpair_dict[word_pair] NrCorrect = 0 NrWrong = 0 NrUnsure = 0 for word_pair in above_th: res = w2wpe.match_word_to_wordpair(word,word_pair) if res['slider value']>(threshold+required_confidence):# Add prediction threshold? NrCorrect+=1 elif res['slider value']<(threshold-required_confidence): NrWrong+=1 else: NrUnsure+=1 #if required confidence was not reached for word_pair in below_th: res = w2wpe.match_word_to_wordpair(word,word_pair) if res['slider value']<(threshold-required_confidence):# Add prediction threshold? 
NrCorrect+=1 elif res['slider value']>threshold+required_confidence: NrWrong+=1 else: NrUnsure+=1 # if the required confidence was not reached if len(below_th)+len(above_th)==0: continue accuracy = NrCorrect/(NrCorrect+NrWrong+NrUnsure) unsure_ratio = NrUnsure/(NrCorrect+NrWrong+NrUnsure) # the fraction of cases where the prediction did not reach the required confidence acc_list.append(accuracy) unsure_list.append(unsure_ratio) mean_acc = np.mean(acc_list) # mean over the per-word accuracies, across all available sliders mean_unsure = np.mean(unsure_list) del w2wpe return mean_acc,mean_unsure def f(): ipdb.set_trace() return wordpair_matcher_dict['bright_vs_dark'].match_word_to_words('sunny') f() y = np.array([np.where(row['bright_vs_dark']>=50,1,0) for row in rowlist]) y.shape,yhat1.shape yhat_binary = np.array([0 if yhatelem==target_word_pair[0] else 1 for yhatelem in yhat1]) yhat_binary.shape len(yhat1),len(rowlist) accuracy_score(y,yhat_binary) yhat1 df_detailed = pd.DataFrame(index=wordlist) df_detailed.head(7) wordlist = [w for r,w in generate_training_examples(df)] rowlist = [r for r,w in generate_training_examples(df)] acc_scores=dict() for target_word_pair,opposite_quality_pair in zip(target_word_pairs,colnames): y = np.array([np.where(row[opposite_quality_pair]>=50,1,0) for row in rowlist]) print(target_word_pair,opposite_quality_pair) w2wm = word_to_words_matcher() w2wm.build(target_word_pair) yhat1 = np.array(f(wordlist,w2wm,variant=1)) df_detailed[opposite_quality_pair] = yhat1 yhat_binary = np.array([0 if yhatelem==target_word_pair[0] else 1 for yhatelem in yhat1]) acc_score = accuracy_score(y,yhat_binary) print(f'{acc_score:1.3f}') acc_scores[opposite_quality_pair] = acc_score print(df_detailed.shape) df_detailed.to_excel('predicted_qualities.xlsx') df_detailed.head(20) pd.Series(acc_scores).plot.bar(ylabel='accuracy') plt.plot(plt.xlim(),[0.5,0.5],'--',c='k') plt.title(f'Accuracy of 
Spacy word vectors in predicting\ntext_to_qualities.csv ({len(wordlist)} qualities)') plt.ylim(0,1) ```
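The set identities in the notes can be checked directly. A small sketch of the threshold-25 example (ground truth A = {thick, smooth} from the user results, matcher output B = {dark, smooth}):

```python
# Ground truth words (A) and matcher predictions (B) for one input word
A = {'thick', 'smooth'}   # user results
B = {'dark', 'smooth'}    # word-to-words matcher results

tp = A & B                # {'smooth'}: predicted and correct
fp = B - A                # {'dark'}: predicted but shouldn't have been
fn = A - B                # {'thick'}: should have been predicted but wasn't

precision = len(tp) / (len(tp) + len(fp))
recall = len(tp) / (len(tp) + len(fn))

# Because |A| == |B|, fp and fn have the same size, so precision == recall
print(precision, recall)  # 0.5 0.5
```

This is exactly why `single_word_precision_recall` asserts `len(A) == len(B)` and then treats precision and recall interchangeably.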
## Questions: 1. Which platform has the most titles in its catalog? 2. After visualizing the number of titles on each platform, I want to know how many movies and series available on Netflix are also tied to other streaming platforms, and so on. 3. Ranking by IMDb score, after removing the column's missing values, which platform is the best one for a customer to subscribe to? ### Importing libraries: ``` import pandas as pd import numpy as np import matplotlib.pyplot as plt import seaborn as sns ``` ### Reading files ``` df_dados = pd.read_csv("MoviesOnStreamingPlatforms_updated.csv") df_dados.head() ``` ### Dropping columns and reindexing ``` # Drop unnecessary columns using the .drop() function: df_dados.drop(['Unnamed: 0','ID','Type'], axis = 1, inplace=True) df_dados.head() ``` ### Missing-value analysis ``` # Check for possible missing data df_dados.isnull().sum() ``` - For the sake of deeper learning, I will work with reindexing across my useful columns; - I decided to do this to try to understand the method; it would not necessarily hurt or improve the process of answering my hypotheses ``` nome_colunas = ['Title','Year','Age','IMDb','Rotten Tomatoes','Netflix','Hulu','Prime Video','Disney+', 'Directors','Genres','Country','Language','Runtime'] # Create a list with my DataFrame's column names, in an order I feel comfortable with df_dados = df_dados.reindex(columns=nome_colunas) ``` ### Selection with loc and iloc - To select subsets of rows and columns of my DataFrame, I will use .loc and .iloc, which select by axis labels (**.loc**) and by integer position (**.iloc**).
- To understand the loc selection structure, let's select the rows with labels 0 through 15 (remember that .loc is end-inclusive) and request only the title and directors columns: ``` df_dados.loc[0:15, ['Title','Directors']] ``` ### Which platform has the most titles in its catalog: ``` # Titles available on Netflix netflix_data = df_dados.loc[df_dados['Netflix'] == 1] # Titles available on Hulu hulu_data = df_dados.loc[df_dados['Hulu'] ==1] # Titles available on Prime Video prime_data = df_dados.loc[df_dados['Prime Video'] ==1] # Titles available on Disney+ disney_data = df_dados.loc[df_dados['Disney+'] ==1] ``` ### Creating lists to store the number of titles belonging to each streaming platform: ``` # I could create a variable for each filter, but I decided to inspect the lists along the way numero_titulos = [netflix_data['Title'].count(), hulu_data['Title'].count(), prime_data['Title'].count() , disney_data['Title'].count()] # List of platform names: nomes_plataformas = ['Netflix','Hulu','Prime Video','Disney+'] ``` ### Creating a bar chart for comparison: ``` # Build the bar chart: plt.bar(nomes_plataformas, numero_titulos, color='Blue') # Chart title plt.title('Number of titles x Platforms') # Y-axis label: plt.ylabel('Number of titles') # X-axis label: plt.xlabel('Platforms') plt.show() ``` - From the chart above, we can see that the platform with the most titles is Amazon Prime Video, with more than 12,000 titles ### After visualizing the number of titles on each platform, I want to know how many movies and series on Netflix are also available on other streaming platforms, and so on: ``` netflix_data[['Netflix','Hulu','Prime Video','Disney+']].sum() ``` #### Of Netflix's 3560 titles, we can see that: 1. 25 titles are also on Hulu 2. 345 titles are also on Prime Video 3. 
10 titles are also on Disney+ ``` hulu_data[['Netflix','Hulu','Prime Video','Disney+']].sum() ``` #### Of Hulu's 903 titles, we can see that: 1. 25 titles are also on Netflix 2. 241 titles are also on Prime Video 3. 7 titles are also on Disney+ ``` prime_data[['Netflix','Hulu','Prime Video','Disney+']].sum() ``` #### Of Prime Video's 12354 titles, we can see that: 1. 345 titles are also on Netflix 2. 241 titles are also on Hulu 3. 19 titles are also on Disney+ ``` disney_data[['Netflix','Hulu','Prime Video','Disney+']].sum() ``` #### Of Disney+'s 564 titles, we can see that: 1. 10 titles are also on Netflix 2. 7 titles are also on Hulu 3. 19 titles are also on Prime Video ### Ranking by IMDb score, dropping the column's missing values, which platform is best for a customer to subscribe to: ``` print("Mean IMDb score of titles on Netflix : {:.2f}".format(netflix_data['IMDb'].dropna().mean())) print("Mean IMDb score of titles on Disney+ : {:.2f}".format(disney_data['IMDb'].dropna().mean())) print("Mean IMDb score of titles on Hulu : {:.2f}".format(hulu_data['IMDb'].dropna().mean())) print("Mean IMDb score of titles on Prime Video : {:.2f}".format(prime_data['IMDb'].dropna().mean())) ``` ### Analyzing title genres ``` netflix_data['Genres'].str.contains('Drama').dropna().sum() print("Number of drama-related titles on Netflix : {}".format(netflix_data['Genres'].str.contains('Drama').dropna().sum())) print("Number of drama-related titles on Hulu : {}".format(hulu_data['Genres'].str.contains('Drama').dropna().sum())) print("Number of drama-related titles on Disney+ : {}".format(disney_data['Genres'].str.contains('Drama').dropna().sum())) print("Number of drama-related titles on Prime Video : 
{}".format(prime_data['Genres'].str.contains('Drama').dropna().sum())) print("Número de títulos relacionados ao gênero de ação na Netflix : {}" .format(netflix_data['Genres'].str.contains('Action').dropna().sum())) print("Número de títulos relacionados ao gênero de ação na Hulu : {}" .format(hulu_data['Genres'].str.contains('Action').dropna().sum())) print("Número de títulos relacionados ao gênero de ação na Disney+ : {}" .format(disney_data['Genres'].str.contains('Action').dropna().sum())) print("Número de títulos relacionados ao gênero de ação na Prime Video : {}" .format(prime_data['Genres'].str.contains('Action').dropna().sum())) ```
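The per-platform overlap counts above were obtained with four separate `.sum()` calls; the same information can be computed in one shot as a platform-by-platform matrix. A minimal sketch, assuming a toy DataFrame standing in for `df_dados` (the 0/1 indicator columns are the only part of the schema relied on):

```python
import pandas as pd

# Toy stand-in for df_dados: one row per title, 0/1 indicator column per platform
df = pd.DataFrame({
    'Title':       ['A', 'B', 'C', 'D', 'E'],
    'Netflix':     [1, 1, 0, 0, 1],
    'Hulu':        [0, 1, 1, 0, 0],
    'Prime Video': [1, 0, 1, 1, 0],
    'Disney+':     [0, 0, 0, 1, 1],
})

platforms = ['Netflix', 'Hulu', 'Prime Video', 'Disney+']
ind = df[platforms]

# ind.T @ ind is a platform-by-platform matrix: the diagonal holds each
# platform's catalogue size, the off-diagonals hold shared-title counts
overlap = ind.T @ ind
print(overlap)
```

The diagonal reproduces each platform's title count and each off-diagonal entry is the number of shared titles, so the four per-platform `.sum()` tables collapse into a single symmetric matrix.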
``` import keras import tensorflow as tf import librosa import numpy as np import pandas import pickle import os from os import listdir from os.path import isfile, join import matplotlib.pyplot as plt from mpl_toolkits.mplot3d import Axes3D import h5py import math import random import re import csv from sklearn.manifold import TSNE, MDS from keras.models import Model, load_model from keras.layers import Dropout, concatenate, Concatenate, Activation, Input, Dense, Conv2D, GRU, MaxPooling2D, MaxPooling1D, Flatten, Reshape, LeakyReLU, PReLU, BatchNormalization, Bidirectional, TimeDistributed, Lambda, GlobalMaxPool1D, GlobalMaxPool2D, GlobalAveragePooling2D, Multiply, GlobalAveragePooling2D from keras.optimizers import Adam, SGD import keras.backend as K from keras import regularizers from keras.initializers import random_normal, glorot_uniform, glorot_normal import tensorflow as tf from keras.callbacks import TensorBoard, ModelCheckpoint, LearningRateScheduler, ReduceLROnPlateau, Callback, EarlyStopping path_mel = './ML4BL_ZF/melspecs/' path_files = './ML4BL_ZF/files/' train_triplet_file = 'train_triplets_50_70_single.pckl' train_gt_file = 'train_gt_50_70_single.pckl' train_cons_file = 'train_cons_50_70_single.pckl' train_trials_file = 'train_trials_50_70_single.pckl' test_triplet_file = 'test_triplets_50_70_single.pckl' test_gt_file = 'test_gt_50_70_single.pckl' test_cons_file = 'test_cons_50_70_single.pckl' test_trials_file = 'test_trials_50_70_single.pckl' luscinia_triplets_file = 'luscinia_triplets_filtered.csv' luscinia_triplets = [] with open(path_files+luscinia_triplets_file, 'r', newline='') as csvfile: csv_r = csv.reader(csvfile, delimiter=',') for row in csv_r: luscinia_triplets.append(row) luscinia_triplets = luscinia_triplets[1:] luscinia_train_len = round(8*len(luscinia_triplets)/10) luscinia_val_len = len(luscinia_triplets) - luscinia_train_len f = open(path_files+'mean_std_luscinia_pretraining.pckl', 'rb') train_dict = pickle.load(f) M_l = 
train_dict['mean'] S_l = train_dict['std'] f.close() f = open(path_files+'training_setup_1_ordered_acc_single_cons_50_70_trials.pckl', 'rb') train_dict = pickle.load(f) train_keys = train_dict['train_keys'] training_triplets = train_dict['train_triplets'] val_keys = train_dict['val_keys'] validation_triplets = train_dict['vali_triplets'] test_triplet = train_dict['test_triplets'] test_keys = train_dict['test_keys'] M = train_dict['train_mean'] S = train_dict['train_std'] f.close() ``` # Network ``` def convBNpr(a, dilation, num_filters, kernel): c1 = Conv2D(filters=num_filters, kernel_size=kernel, strides=(1, 1), dilation_rate=dilation, padding='same', use_bias=False, kernel_initializer=glorot_uniform(seed=123), kernel_regularizer=regularizers.l2(1e-4))(a) c1 = BatchNormalization(axis=-1, momentum=0.99, epsilon=0.001)(c1) c1 = LeakyReLU(alpha=0.3)(c1) return c1 def createModelMatrix(emb_size, input_shape=(170, 150, 1)): a = Input(shape=(input_shape)) c = convBNpr(a, 1, 12, (3,3)) c = convBNpr(c, 2, 32, (3,3)) c = convBNpr(c, 4, 64, (3,3)) c = convBNpr(c, 8, 128, (3,3)) # attention with sigmoid a1 = Conv2D(filters=128, kernel_size=(1,1), strides=(1, 1), padding='same', activation = 'sigmoid', use_bias=True, kernel_regularizer=regularizers.l2(1e-4),kernel_initializer=glorot_uniform(seed=123))(c) # sum of sum of attention s = Lambda(lambda x: K.sum(K.sum(x,axis=1, keepdims=True), axis=2, keepdims=True))(a1) s = Lambda(lambda x: K.repeat_elements(x, 170, axis=1))(s) s = Lambda(lambda x: K.repeat_elements(x, 150, axis=2))(s) # probability matrix of attention p = Lambda(lambda x: x[0]/x[1])([a1,s]) # inner product of attention and projection matrices m = Multiply()([c, p]) # output out_sum = Lambda(lambda x: K.sum(K.sum(x, axis=1), axis=1))(m) # attention side d3 = Dense(emb_size*100, kernel_initializer=glorot_normal(seed=222), kernel_regularizer=regularizers.l2(1e-4),activation='relu')(out_sum) d3 = Dropout(.2, seed=222)(d3) d4 = Dense(emb_size*10, 
kernel_initializer=glorot_normal(seed=333), kernel_regularizer=regularizers.l2(1e-4),activation='relu')(d3) d4 = Dropout(.2, seed=333)(d4) d5 = Dense(emb_size, kernel_initializer=glorot_normal(seed=132), kernel_regularizer=regularizers.l2(1e-4))(d4) d5 = Dropout(.2, seed = 132)(d5) # maxpool side x = convBNpr(c, 1, 64, (3,3)) x = MaxPooling2D(pool_size=(2, 2))(x) x = convBNpr(x, 1, 32, (3,3)) x = MaxPooling2D(pool_size=(2, 2))(x) x = convBNpr(x, 1, 12, (3,3)) f = Flatten()(x) df1 = Dense(emb_size*100, kernel_initializer=glorot_normal(seed=456), kernel_regularizer=regularizers.l2(1e-4), activation='relu')(f)# df1 = Dropout(.2, seed=456)(df1) df2 = Dense(emb_size*10, kernel_initializer=glorot_normal(seed=654), kernel_regularizer=regularizers.l2(1e-4), activation='relu')(df1)# df2 = Dropout(.2, seed=654)(df2) df3 = Dense(emb_size, kernel_initializer=glorot_normal(seed=546), kernel_regularizer=regularizers.l2(1e-4))(df2)# df3 = Dropout(.2, seed=546)(df3) concat = Concatenate(axis=-1)([d5, df3]) dd = Dense(emb_size, kernel_initializer=glorot_normal(seed=999), kernel_regularizer=regularizers.l2(1e-4))(concat) sph = Lambda(lambda x: K.l2_normalize(x,axis=1))(dd) # base model creation base_model = Model(a,sph) # triplet framework input_anchor = Input(shape=(input_shape)) input_positive = Input(shape=(input_shape)) input_negative = Input(shape=(input_shape)) net_anchor = base_model(input_anchor) net_positive = base_model(input_positive) net_negative = base_model(input_negative) base_model.summary() merged_vector = concatenate([net_anchor, net_positive, net_negative], axis=-1) model = Model([input_anchor, input_positive, input_negative], outputs=merged_vector) return model ``` # Functions ``` def masked_weighted_triplet_loss(margin, emb_size, m = 0 , w = 0, lh = 1): def lossFunction(y_true,y_pred): weight = y_true[:, 0] # acc cons = y_true[:, 1] # consistency trials = y_true[:, 2] # number of trials anchor = y_pred[:, 0:emb_size] positive = y_pred[:, emb_size:emb_size*2] 
negative = y_pred[:, emb_size*2:emb_size*3] # distance between the anchor and the positive pos_dist = K.sqrt(K.sum(K.square(anchor - positive), axis=1)) # l2 distance #pos_dist = K.sum(K.abs(anchor-positive), axis=1) # l1 distance # distance between the anchor and the negative neg_dist = K.sqrt(K.sum(K.square(anchor - negative), axis=1)) # l2 distance #neg_dist = K.sum(K.abs(anchor-negative), axis=1) # l1 distance loss_h = 0 loss_l = 0 if lh == 1: # DOES NOT WORK WITH MASKED LOSS # low-high margin loss p_c = K.square(neg_dist) - K.square(pos_dist) - margin p_i = K.square(neg_dist) - K.square(pos_dist) loss_1 = cons*(1-K.exp(p_c)) + (1-cons)*(1-K.exp(-K.abs(p_i))) if m != 0: # masked loss basic_loss = pos_dist - neg_dist + margin threshold = K.max(basic_loss) * m mask = 2 + margin - K.maximum(basic_loss, threshold) loss_1 = basic_loss * mask if w == 1: # weighted based on acc weighted_loss = weight*loss_1 else: # non-weighted weighted_loss = loss_1 loss = K.maximum(weighted_loss, 0.0) return loss return lossFunction def discard_some_low(triplet_list, cons, acc): low_margin = [] high_margin = [] for i in range(len(triplet_list)): if float(triplet_list[i][-1]) < cons: # low margin if float(triplet_list[i][-2]) >= acc: # ACC low_margin.append(triplet_list[i]) else: # high margin high_margin.append(triplet_list[i]) random.seed(123) random.shuffle(low_margin) random.shuffle(high_margin) low_margin.extend(high_margin) return low_margin def balance_input(triplet_list, cons, hi_balance = 6, lo_balance = 6): batchsize = hi_balance + lo_balance low_margin = [] high_margin = [] for i in range(len(triplet_list)): if float(triplet_list[i][-1]) < cons: # low margin low_margin.append(triplet_list[i]) else: # high margin high_margin.append(triplet_list[i]) random.seed(123) random.shuffle(low_margin) random.shuffle(high_margin) new_triplet_list = [] maxlen = np.maximum(len(low_margin), len(high_margin)) hi_start = 0 lo_start = 0 for i in 
range(0,int(maxlen/hi_balance)*batchsize,batchsize): for j in range(hi_start,hi_start+hi_balance,1): new_triplet_list.append(high_margin[np.mod(j,len(high_margin))]) hi_start+=hi_balance for j in range(lo_start, lo_start+lo_balance,1): new_triplet_list.append(low_margin[np.mod(j,len(low_margin))]) lo_start+=lo_balance return low_margin, high_margin, new_triplet_list ``` # Generators ``` def train_generator_mixed(triplet_list, M, S, luscinia_triplets, M_l, S_l, batchsize, lo, hi, lu, emb_size, path_mel): acc_gt = np.zeros((batchsize, emb_size)) random.seed(123) random.shuffle(luscinia_triplets) while 1: anchors_input = np.empty((batchsize, 170, 150, 1)) positives_input = np.empty((batchsize, 170, 150, 1)) negatives_input = np.empty((batchsize, 170, 150, 1)) imax = int(len(triplet_list)/(lo+hi)) list_cnt = 0 luscinia_cnt = 0 for i in range(imax): for j in range(batchsize): if j < (lo+hi): triplet = triplet_list[list_cnt] list_cnt += 1 tr_anc = triplet[3][:-4]+'.pckl' tr_pos = triplet[1][:-4]+'.pckl' tr_neg = triplet[2][:-4]+'.pckl' acc_gt[j][0] = float(triplet[-2]) # acc acc_gt[j][1] = 1 if float(triplet[-1])>=0.7 else 0 # cons acc_gt[j][2] = int(triplet[-3]) # number of trials else: triplet = luscinia_triplets[luscinia_cnt] luscinia_cnt += 1 tr_anc = triplet[2][:-4]+'.pckl' tr_pos = triplet[0][:-4]+'.pckl' tr_neg = triplet[1][:-4]+'.pckl' acc_gt[j][0] = 1 # acc acc_gt[j][1] = 1 # cons acc_gt[j][2] = 1 # number of trials f = open(path_mel+tr_anc, 'rb') anc = pickle.load(f).T f.close() anc = (anc - M)/S anc = np.expand_dims(anc, axis=-1) f = open(path_mel+tr_pos, 'rb') pos = pickle.load(f).T f.close() pos = (pos - M)/S pos = np.expand_dims(pos, axis=-1) f = open(path_mel+tr_neg, 'rb') neg = pickle.load(f).T f.close() neg = (neg - M)/S neg = np.expand_dims(neg, axis=-1) anchors_input[j] = anc positives_input[j] = pos negatives_input[j] = neg yield [anchors_input, positives_input, negatives_input], acc_gt def train_generator_luscinia(triplet_list, M, S, batchsize, 
emb_size, path_mel, ordered = True): acc_gt = np.zeros((batchsize, emb_size)) random.seed(123) random.shuffle(triplet_list) while 1: anchors_input = np.empty((batchsize, 170, 150, 1)) positives_input = np.empty((batchsize, 170, 150, 1)) negatives_input = np.empty((batchsize, 170, 150, 1)) imax = int(len(triplet_list)/batchsize) for i in range(imax): for j in range(batchsize): triplet = triplet_list[i*batchsize+j] tr_anc = triplet[2][:-4]+'.pckl' tr_pos = triplet[0][:-4]+'.pckl' tr_neg = triplet[1][:-4]+'.pckl' acc_gt[j][0] = 1 # acc acc_gt[j][1] = 1 # cons acc_gt[j][2] = 1 # number of trials f = open(path_mel+tr_anc, 'rb') anc = pickle.load(f).T f.close() anc = (anc - M)/S anc = np.expand_dims(anc, axis=-1) f = open(path_mel+tr_pos, 'rb') pos = pickle.load(f).T f.close() pos = (pos - M)/S pos = np.expand_dims(pos, axis=-1) f = open(path_mel+tr_neg, 'rb') neg = pickle.load(f).T f.close() neg = (neg - M)/S neg = np.expand_dims(neg, axis=-1) anchors_input[j] = anc positives_input[j] = pos negatives_input[j] = neg yield [anchors_input, positives_input, negatives_input], acc_gt def train_generator(triplet_list, M, S, batchsize, emb_size, path_mel, ordered = True): acc_gt = np.zeros((batchsize, emb_size)) random.seed(123) random.shuffle(triplet_list) while 1: anchors_input = np.empty((batchsize, 170, 150, 1)) positives_input = np.empty((batchsize, 170, 150, 1)) negatives_input = np.empty((batchsize, 170, 150, 1)) imax = int(len(triplet_list)/batchsize) for i in range(imax): for j in range(batchsize): triplet = triplet_list[i*batchsize+j] tr_anc = triplet[3][:-4]+'.pckl' if ordered == False: if triplet[-1] == '0': tr_pos = triplet[1][:-4]+'.pckl' tr_neg = triplet[2][:-4]+'.pckl' else: tr_pos = triplet[2][:-4]+'.pckl' tr_neg = triplet[1][:-4]+'.pckl' else: tr_pos = triplet[1][:-4]+'.pckl' tr_neg = triplet[2][:-4]+'.pckl' acc_gt[j][0] = float(triplet[-2]) # acc acc_gt[j][1] = 1 if float(triplet[-1])>=0.7 else 0 # cons acc_gt[j][2] = int(triplet[-3]) # number of trials f = 
open(path_mel+tr_anc, 'rb') anc = pickle.load(f).T f.close() anc = (anc - M)/S anc = np.expand_dims(anc, axis=-1) f = open(path_mel+tr_pos, 'rb') pos = pickle.load(f).T f.close() pos = (pos - M)/S pos = np.expand_dims(pos, axis=-1) f = open(path_mel+tr_neg, 'rb') neg = pickle.load(f).T f.close() neg = (neg - M)/S neg = np.expand_dims(neg, axis=-1) anchors_input[j] = anc positives_input[j] = pos negatives_input[j] = neg yield [anchors_input, positives_input, negatives_input], acc_gt ``` # Training ``` emb_size=16 margin = 0.1 m = 0 lr = 1e-8 adam = Adam(lr = lr) triplet_model = createModelMatrix(emb_size=emb_size, input_shape=(170, 150, 1)) triplet_model.summary() triplet_model.compile(loss=masked_weighted_triplet_loss(margin=margin, emb_size=emb_size, m=m, w = 0, lh = 1),optimizer=adam) ``` # PRE (pretraining on Luscinia triplets) ``` lo = 6 hi = 8 lu = 10 batchsize = lo+hi+lu cpCallback = ModelCheckpoint('ZF_emb_'+str(emb_size)+'D_LUSCINIA_PRE_margin_loss_backup.h5', monitor='val_loss', save_best_only=True, save_weights_only=True, mode='min', save_freq='epoch') reduce_lr = ReduceLROnPlateau(monitor='val_loss', factor=0.5, patience=10, verbose=1, min_lr=1e-12) earlystop = EarlyStopping(monitor='val_loss', min_delta=0, patience=20, verbose=1, mode='auto') history = triplet_model.fit(train_generator_luscinia(luscinia_triplets[:int(luscinia_train_len/10)], M_l, S_l, batchsize, emb_size, path_mel), steps_per_epoch=int(int(luscinia_train_len/10)/batchsize), epochs=1000, verbose=1, validation_data=train_generator_luscinia(luscinia_triplets[luscinia_train_len:luscinia_train_len+200], M_l, S_l, batchsize, emb_size, path_mel), validation_steps=int(200/batchsize), callbacks=[cpCallback, reduce_lr, earlystop]) ``` # PRE trained (training on bird decisions after pretraining on Luscinia triplets) ``` # load pretrained model triplet_model.load_weights('ZF_emb_'+str(emb_size)+'D_LUSCINIA_PRE_margin_loss_backup.h5') lo = 6 hi = 8 lu = 10 batchsize = lo+hi+lu cpCallback = 
ModelCheckpoint('ZF_emb_'+str(emb_size)+'D_LUSCINIA_PRE_margin_loss_trained.h5', monitor='val_loss', save_best_only=True, save_weights_only=True, mode='min', save_freq='epoch') reduce_lr = ReduceLROnPlateau(monitor='val_loss', factor=0.5, patience=10, verbose=1, min_lr=1e-12) earlystop = EarlyStopping(monitor='val_loss', min_delta=0, patience=20, verbose=1, mode='auto') dis_tr_triplets = discard_some_low(training_triplets, 0.7, 0.7) dis_val_triplets = discard_some_low(validation_triplets, 0.7, 0.7) history = triplet_model.fit(train_generator(dis_tr_triplets, M, S, batchsize, emb_size, path_mel), steps_per_epoch=int(len(dis_tr_triplets)/batchsize), epochs=1000, verbose=1, validation_data=train_generator(dis_val_triplets, M,S, batchsize, emb_size, path_mel), validation_steps=int(len(dis_val_triplets)/batchsize), callbacks=[cpCallback, reduce_lr,earlystop]) ``` # MIXED (training on both bird decisions and Luscinia triplets - w/o pretraining) ``` lo = 6 hi = 8 lu = 10 batchsize = lo+hi+lu #24 cpCallback = ModelCheckpoint('ZF_emb_'+str(emb_size)+'D_LUSCINIA_MIXED_margin_loss.h5', monitor='val_loss', save_best_only=True, save_weights_only=True, mode='min', save_freq='epoch') reduce_lr = ReduceLROnPlateau(monitor='val_loss', factor=0.5, patience=10, verbose=1, min_lr=1e-12) earlystop = EarlyStopping(monitor='val_loss', min_delta=0, patience=20, verbose=1, mode='auto') dis_tr_triplets = discard_some_low(training_triplets, 0.7, 0.7) dis_val_triplets = discard_some_low(validation_triplets, 0.7, 0.7) low_margin, high_margin, bal_training_triplets = balance_input(dis_tr_triplets, 0.7, hi_balance = hi, lo_balance = lo) vlow_margin, vhigh_margin, bal_val_triplets = balance_input(dis_val_triplets, 0.7, hi_balance = hi, lo_balance = lo) history = triplet_model.fit(train_generator_mixed(bal_training_triplets, M, S, luscinia_triplets[:luscinia_train_len],M_l, S_l, batchsize, lo, hi, lu, emb_size, path_mel), steps_per_epoch=int(len(bal_training_triplets)/(lo+hi)), epochs=1000,
verbose=1, validation_data=train_generator_mixed(bal_val_triplets, M, S, luscinia_triplets[luscinia_train_len:],M_l, S_l, batchsize, lo, hi, lu, emb_size, path_mel), validation_steps=int(len(bal_val_triplets)/(lo+hi)), callbacks=[cpCallback, reduce_lr, earlystop]) ``` # PRE + MIXED (training on both bird decisions and Luscinia triplets - w/ pretraining on Luscinia) ``` # load pre-trained model triplet_model.load_weights('ZF_emb_'+str(emb_size)+'D_LUSCINIA_PRE_margin_loss_backup.h5') lo = 6 hi = 8 lu = 10 batchsize = lo+hi+lu cpCallback = ModelCheckpoint('ZF_emb_'+str(emb_size)+'D_LUSCINIA_PRE_MIXED_margin_loss.h5', monitor='val_loss', save_best_only=True, save_weights_only=True, mode='min', save_freq='epoch') reduce_lr = ReduceLROnPlateau(monitor='val_loss', factor=0.5, patience=10, verbose=1, min_lr=1e-12) earlystop = EarlyStopping(monitor='val_loss', min_delta=0, patience=20, verbose=1, mode='auto') dis_tr_triplets = discard_some_low(training_triplets, 0.7, 0.7) dis_val_triplets = discard_some_low(validation_triplets, 0.7, 0.7) low_margin, high_margin, bal_training_triplets = balance_input(dis_tr_triplets, 0.7, hi_balance = hi, lo_balance = lo) vlow_margin, vhigh_margin, bal_val_triplets = balance_input(dis_val_triplets, 0.7, hi_balance = hi, lo_balance = lo) history = triplet_model.fit(train_generator_mixed(bal_training_triplets, M, S, luscinia_triplets[:luscinia_train_len],M_l, S_l, batchsize, lo, hi, lu, emb_size, path_mel), steps_per_epoch=int(len(bal_training_triplets)/(lo+hi)), epochs=1000, verbose=1, validation_data=train_generator_mixed(bal_val_triplets, M, S, luscinia_triplets[luscinia_train_len:],M_l, S_l, batchsize, lo, hi, lu, emb_size, path_mel), validation_steps=int(len(bal_val_triplets)/(lo+hi)), callbacks=[cpCallback, reduce_lr, earlystop]) ``` # Evaluation (for both ambiguous and unambiguous triplets - based on different distance margins) ``` def low_high_evaluation(path_mel, vhigh_margin, vlow_margin, triplet_model, margin = 0.00000001, 
max_margin = 0.0000001, step = 0.00000001): # pos, neg, anc while margin < max_margin: acc_cnt = 0 high_cnt = 0 low_cnt = 0 for triplet in vhigh_margin: tr_pos = triplet[1][:-4]+'.pckl' tr_neg = triplet[2][:-4]+'.pckl' tr_anc = triplet[3][:-4]+'.pckl' f = open(path_mel+tr_anc, 'rb') anc = pickle.load(f).T f.close() anc = (anc - M)/S anc = np.expand_dims(anc, axis=0) anc = np.expand_dims(anc, axis=-1) f = open(path_mel+tr_pos, 'rb') pos = pickle.load(f).T f.close() pos = (pos - M)/S pos = np.expand_dims(pos, axis=0) pos = np.expand_dims(pos, axis=-1) f = open(path_mel+tr_neg, 'rb') neg = pickle.load(f).T f.close() neg = (neg - M)/S neg = np.expand_dims(neg, axis=0) neg = np.expand_dims(neg, axis=-1) y_pred = triplet_model.predict([anc, pos, neg]) anchor1 = y_pred[:, 0:emb_size] positive1 = y_pred[:, emb_size:emb_size*2] negative1 = y_pred[:, emb_size*2:emb_size*3] pos_dist = np.sqrt(np.sum(np.square(anchor1 - positive1), axis=1))[0] neg_dist = np.sqrt(np.sum(np.square(anchor1 - negative1), axis=1))[0] if np.square(neg_dist) > np.square(pos_dist) + margin: acc_cnt += 1 high_cnt += 1 for triplet in vlow_margin: tr_pos = triplet[1][:-4]+'.pckl' tr_neg = triplet[2][:-4]+'.pckl' tr_anc = triplet[3][:-4]+'.pckl' f = open(path_mel+tr_anc, 'rb') anc = pickle.load(f).T f.close() anc = (anc - M)/S anc = np.expand_dims(anc, axis=0) anc = np.expand_dims(anc, axis=-1) f = open(path_mel+tr_pos, 'rb') pos = pickle.load(f).T f.close() pos = (pos - M)/S pos = np.expand_dims(pos, axis=0) pos = np.expand_dims(pos, axis=-1) f = open(path_mel+tr_neg, 'rb') neg = pickle.load(f).T f.close() neg = (neg - M)/S neg = np.expand_dims(neg, axis=0) neg = np.expand_dims(neg, axis=-1) y_pred = triplet_model.predict([anc, pos, neg]) anchor1 = y_pred[:, 0:emb_size] positive1 = y_pred[:, emb_size:emb_size*2] negative1 = y_pred[:, emb_size*2:emb_size*3] pos_dist = np.sqrt(np.sum(np.square(anchor1 - positive1), axis=1))[0] neg_dist = np.sqrt(np.sum(np.square(anchor1 - negative1), axis=1))[0] if 
np.abs(np.square(pos_dist) - np.square(neg_dist)) <= margin: acc_cnt += 1 low_cnt += 1 print('MARGIN = ', margin) print('Macro-average Low-High margin accuracy: ', 0.5*(high_cnt/(len(vhigh_margin)) + low_cnt/(len(vlow_margin)))*100, '%') print('Micro-average Low-High margin accuracy: ', (acc_cnt/(len(vhigh_margin)+len(vlow_margin)))*100, '%') print('High margin accuracy: ', (high_cnt/(len(vhigh_margin)))*100, '%') print('Low margin accuracy: ', (low_cnt/(len(vlow_margin)))*100, '%') margin += step return # Separate sets between low-margin (ambiguous) and high-margin (unambiguous) triplets low_margin = pickle.load(open(path_files+'train_triplets_low_50_70_ACC70.pckl', 'rb')) vlow_margin = pickle.load(open(path_files+'val_triplets_low_50_70_ACC70.pckl', 'rb')) high_margin = pickle.load(open(path_files+'train_triplets_high_50_70_ACC70.pckl', 'rb')) vhigh_margin = pickle.load(open(path_files+'val_triplets_high_50_70_ACC70.pckl', 'rb')) tlow_margin = pickle.load(open(path_files+'test_triplets_low_50_70_ACC70.pckl', 'rb')) thigh_margin = pickle.load(open(path_files+'test_triplets_high_50_70_ACC70.pckl', 'rb')) # run evaluation on a high margin and low margin set of the same split low_high_evaluation(path_mel, high_margin, low_margin, triplet_model, margin = 0.0, max_margin = 0.01, step = 0.005) ```
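The decision rule inside `low_high_evaluation` reduces to comparing squared embedding distances against the distance margin. A minimal NumPy sketch of just that rule, using made-up 3-D embeddings rather than the model's output:

```python
import numpy as np

def triplet_correct(anchor, positive, negative, margin, high_margin=True):
    # Same test as low_high_evaluation: unambiguous (high-margin) triplets
    # must satisfy d(a,n)^2 > d(a,p)^2 + margin, while ambiguous (low-margin)
    # triplets must satisfy |d(a,p)^2 - d(a,n)^2| <= margin.
    pos_dist = np.sqrt(np.sum(np.square(anchor - positive)))
    neg_dist = np.sqrt(np.sum(np.square(anchor - negative)))
    if high_margin:
        return neg_dist**2 > pos_dist**2 + margin
    return abs(pos_dist**2 - neg_dist**2) <= margin

a = np.array([0.0, 0.0, 0.0])
p = np.array([0.1, 0.0, 0.0])  # close to the anchor
n = np.array([1.0, 0.0, 0.0])  # far from the anchor

print(triplet_correct(a, p, n, margin=0.1, high_margin=True))   # counted correct for a high-margin triplet
print(triplet_correct(a, p, n, margin=0.1, high_margin=False))  # too well separated to count as ambiguous
```

A triplet therefore scores as correct under exactly one of the two regimes, which is why the function reports both macro- and micro-averages over the two subsets.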
``` import numpy as np import matplotlib.pyplot as plt from solve import solve_dual_entropic ### INITIALIZE TEST # set seed np.random.seed(0) # set dimension n_source = 30 n_target = 30 # set target measure #a = np.ones(n_source) #a = a/a.sum() a = 1.5 + np.sin([1*(k / n_source) * 15 for k in range(n_source)]) a = a / a.sum() # set prior measure b = np.ones(n_target) b = b/b.sum() # random distance matrix # rng = np.random.RandomState(0) # X_source = rng.randn(n_source, 2) # Y_target = rng.randn(n_target, 2) # M = ot.dist(X_source, Y_target) # discrete distance # M = (np.ones(n_source) - np.identity(n_source)) # distance on the line X_source = np.array([k for k in range(n_source)]) Y_target = X_source M = abs(X_source[:,None] - Y_target[None, :]) # normalize distance matrix (optional) M = M / (M.max() - M.min()) # make distance matrix positive (mandatory!) M = M - M.min() # graph of matrix and target measure fig, ax = plt.subplots() print("Cost matrix") ax.imshow(M, cmap=plt.cm.Blues) plt.axis('off') plt.show() fig.savefig('tmp/cost_matrix.pdf', bbox_inches='tight') print("Target measure") plt.bar(range(n_source), a) plt.show() from graphs import performance_graphs ### PERFORMANCE TEST 1 # set seed np.random.seed(1) # set regularizer parameter reg1 = 0.1 reg2 = 0.1 # set learning rate (reg1 is close to optimal, using a bigger one might make it diverge) lr = reg1 # set batch size (0 means use of full gradient in beta, while stochastic in alpha) batch_size = 1 # set algorithmic parameters (count between 10000 and 70000 iterations per seconds) numItermax = 1000000 maxTime = 100 avg_alpha_list, avg_beta_list, time_list = solve_dual_entropic(a, b, M, reg1, reg2, numItermax, batch_size, lr, maxTime) performance_graphs(a, b, M, reg1, reg2, avg_alpha_list, avg_beta_list, time_list) ### PERFORMANCE TEST 2 # set seed np.random.seed(1) # set regularizer parameter reg1 = 0.01 reg2 = 0.01 # set learning rate (reg1 is close to optimal, using a bigger one might make it diverge) 
lr = reg1 # set batch size (0 means use of full gradient in beta, while stochastic in alpha) batch_size = 1 # set algorithmic parameters (count between 10000 and 70000 iterations per seconds) numItermax = 1000000 maxTime = 100 avg_alpha_list, avg_beta_list, time_list = solve_dual_entropic(a, b, M, reg1, reg2, numItermax, batch_size, lr, maxTime) performance_graphs(a, b, M, reg1, reg2, avg_alpha_list, avg_beta_list, time_list) from graphs import compare_results # PERFORMANCE: when regularization varies numItermax = 1000000 np.random.seed(3) reg1 = 0.01 reg2 = 0.01 lr = reg1 avg_alpha_list1, avg_beta_list1, time_list1 = solve_dual_entropic(a, b, M, reg1, reg2, numItermax, batch_size, lr/8, maxTime) avg_alpha_list2, avg_beta_list2, time_list2 = solve_dual_entropic(a, b, M, reg1, reg2, numItermax, batch_size, lr/4, maxTime) avg_alpha_list3, avg_beta_list3, time_list3 = solve_dual_entropic(a, b, M, reg1, reg2, numItermax, batch_size, lr/2, maxTime) avg_alpha_list4, avg_beta_list4, time_list4 = solve_dual_entropic(a, b, M, reg1, reg2, numItermax, batch_size, lr, maxTime) avg_alpha_list5, avg_beta_list5, time_list5 = solve_dual_entropic(a, b, M, reg1, reg2, numItermax, batch_size, 2 * lr, maxTime) list_results_alpha = [avg_alpha_list1, avg_alpha_list2, avg_alpha_list3, avg_alpha_list4, avg_alpha_list5] list_results_beta = [avg_beta_list1, avg_beta_list2, avg_beta_list3, avg_beta_list4, avg_beta_list5] compare_results(list_results_alpha, list_results_beta) # PERFORMANCE: when dimension varies # This initializes the variables with sample gaussians def generate_var(n_source, n_target): X_source = rng.randn(n_source, 3) Y_target = rng.randn(n_target, 3) M = ot.dist(X_source, Y_target) M = M / (M.max() - M.min()) M = M - M.min() a = np.ones(n_source)/n_source b = np.ones(n_target)/n_target return X_source, Y_target, M, a, b batch_size = 1 np.random.seed(4) reg1 = 0.01 reg2 = 0.01 lr = reg1 numItermax = 1000000 list_dimensions = [[30, 30], [100, 30], [30,100], [100, 100]] 
import ot # POT package: generate_var above calls ot.dist list_results_alpha, list_results_beta = [], [] rng = np.random.RandomState(0) fig, ax = plt.subplots() SetPlotRC() for D in list_dimensions: n_source = D[0] n_target = D[1] X_source, Y_target, M, a, b = generate_var(n_source, n_target) avg_alpha_list, avg_beta_list, time_list = solve_dual_entropic(a, b, M, reg1, reg2, numItermax, batch_size, lr, maxTime) target_list = [dual_to_target(b, reg2, beta) for beta in avg_beta_list] norm0 = norm_grad_dual(a, b, target_list[0], M, reg1, avg_alpha_list[0], avg_beta_list[0]) grad_norm_list = [norm_grad_dual(a, b, target_list[i], M, reg1, avg_alpha_list[i], avg_beta_list[i])/norm0 for i in range(len(avg_alpha_list))] print("Final gradient norm:", grad_norm_list[-1]) graph_loglog(grad_norm_list) labels=['I = ' + str(D[0]) + ', J = ' + str(D[1]) for D in list_dimensions] plt.grid() plt.ylabel('Gradient norm', fontsize=12) plt.xlabel('Number of iterations', fontsize=12) plt.legend(labels) ApplyFont(plt.gca()) fig.savefig('tmp/dimension_sens.pdf', bbox_inches='tight') plt.show() ## PREPARE FOR ANIMATION import matplotlib.pyplot as plt from matplotlib import animation from IPython.display import HTML target_list = [dual_to_target(b, reg2, beta) for beta in avg_beta_list[:50000]] def transport_map(alpha, beta, M, reg1, a, b): G = np.exp((alpha[:, None] + beta[None, :] - M) / reg1) * a[:, None] * b[None, :] return G / G.sum() # needed by the transport-map animation below transport_map_list = [transport_map(avg_alpha_list[i], avg_beta_list[i], M, reg1, a, b) for i in range(len(avg_alpha_list))] # ANIMATION OF THE TARGET MEASURE fig=plt.figure() # Number of frames (100 frames per second) n = 1000 k = int(len(target_list)) // n barWidth = 0.4 r1 = np.arange(n_source) r2 = [x + barWidth for x in r1] barcollection = plt.bar(r1, target_list[0], width=barWidth) barcollection1 = plt.bar(r2, target_list[-1], width=barWidth) plt.ylim(0, np.max(target_list[-1])*1.1) def animate(t): y=target_list[t * k] for i, b in enumerate(barcollection): b.set_height(y[i]) anim=animation.FuncAnimation(fig,
animate, repeat=False, blit=False, frames=n, interval=10) HTML(anim.to_html5_video()) # ANIMATION OF THE TRANSPORT MAP fig=plt.figure() # Number of frames (100 frames per second) n = 500 k = len(transport_map_list) // n ims = [] for i in range(n): im = plt.imshow(transport_map_list[i*k], animated=True, cmap=plt.cm.Blues) ims.append([im]) anim2 = animation.ArtistAnimation(fig, ims, repeat=False, blit=False, interval=10) HTML(anim2.to_html5_video()) ```
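`solve_dual_entropic` comes from the local `solve` module and is not shown here; as a reference point, the entropically regularized plan it approximates can also be obtained with plain Sinkhorn iterations. A minimal sketch for the single-regularizer case (not the two-regularizer variant used above), on a small 1-D cost matrix like the one built at the top of this notebook:

```python
import numpy as np

def sinkhorn(a, b, M, reg, n_iter=1000):
    # Classic Sinkhorn iterations for entropic OT: alternately rescale the
    # Gibbs kernel K = exp(-M/reg) so the plan's row sums match a and its
    # column sums match b.
    K = np.exp(-M / reg)
    u = np.ones_like(a)
    for _ in range(n_iter):
        v = b / (K.T @ u)
        u = a / (K @ v)
    return u[:, None] * K * v[None, :]

n = 5
a = np.ones(n) / n          # uniform source measure
b = np.ones(n) / n          # uniform target measure
x = np.arange(n, dtype=float)
M = np.abs(x[:, None] - x[None, :])  # distance on the line
M = M / M.max()

G = sinkhorn(a, b, M, reg=0.1)
print(np.allclose(G.sum(axis=1), a, atol=1e-6))  # marginals recovered
```

On a problem this small the iterations match both marginals to high accuracy almost immediately; the stochastic dual solver trades this O(IJ) per-iteration cost for cheap sampled updates, which is what the performance tests above measure.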
# The ISB-CGC open-access TCGA tables in Big-Query The goal of this notebook is to introduce you to a new publicly-available, open-access dataset in BigQuery. This set of BigQuery tables was produced by the [ISB-CGC](http://www.isb-cgc.org) project, based on the open-access [TCGA](http://cancergenome.nih.gov/) data available at the TCGA [Data Portal](https://tcga-data.nci.nih.gov/tcga/). You will need to have access to a Google Cloud Platform (GCP) project in order to use BigQuery. If you don't already have one, you can sign up for a [free-trial](https://cloud.google.com/free-trial/) or contact [us](mailto://info@isb-cgc.org) and become part of the community evaluation phase of our Cancer Genomics Cloud pilot. (You can find more information about this NCI-funded program [here](https://cbiit.nci.nih.gov/ncip/nci-cancer-genomics-cloud-pilots).) We are not attempting to provide a thorough BigQuery or IPython tutorial here, as a wealth of such information already exists. Here are links to some resources that you might find useful: * [BigQuery](https://cloud.google.com/bigquery/what-is-bigquery), * the BigQuery [web UI](https://bigquery.cloud.google.com/) where you can run queries interactively, * [IPython](http://ipython.org/) (now known as [Jupyter](http://jupyter.org/)), and * [Cloud Datalab](https://cloud.google.com/datalab/) the recently announced interactive cloud-based platform that this notebook is being developed on. There are also many tutorials and samples available on github (see, in particular, the [datalab](https://github.com/GoogleCloudPlatform/datalab) repo and the [Google Genomics]( https://github.com/googlegenomics) project). In order to work with BigQuery, the first thing you need to do is import the [gcp.bigquery](http://googlecloudplatform.github.io/datalab/gcp.bigquery.html) package: ``` import gcp.bigquery as bq ``` The next thing you need to know is how to access the specific tables you are interested in. 
BigQuery tables are organized into datasets, and datasets are owned by a specific GCP project. The tables we are introducing in this notebook are in a dataset called **`tcga_201607_beta`**, owned by the **`isb-cgc`** project. A full table identifier is of the form `<project_id>:<dataset_id>.<table_id>`. Let's start by getting some basic information about the tables in this dataset: ``` d = bq.DataSet('isb-cgc:tcga_201607_beta') for t in d.tables(): print '%10d rows %12d bytes %s' \ % (t.metadata.rows, t.metadata.size, t.name.table_id) ``` These tables are based on the open-access TCGA data as of July 2016. The molecular data is all "Level 3" data, and is divided according to platform/pipeline. See [here](https://tcga-data.nci.nih.gov/tcga/tcgaDataType.jsp) for additional details regarding the TCGA data levels and data types. Additional notebooks go into each of these tables in more detail, but here is an overview, in the same alphabetical order that they are listed in above and in the BigQuery web UI: - **Annotations**: This table contains the annotations that are also available from the interactive [TCGA Annotations Manager](https://tcga-data.nci.nih.gov/annotations/). Annotations can be associated with any type of "item" (*eg* Patient, Sample, Aliquot, etc), and a single item may have more than one annotation. Common annotations include "Item flagged DNU", "Item is noncanonical", and "Prior malignancy." More information about this table can be found in the [TCGA Annotations](https://github.com/isb-cgc/examples-Python/blob/master/notebooks/TCGA%20Annotations.ipynb) notebook. - **Biospecimen_data**: This table contains information obtained from the "biospecimen" and "auxiliary" XML files in the TCGA Level-1 "bio" archives. Each row in this table represents a single "biospecimen" or "sample". 
Most participants in the TCGA project provided two samples: a "primary tumor" sample and a "blood normal" sample, but others provided normal-tissue, metastatic, or other types of samples. This table contains metadata about all of the samples, and more information about exploring this table and using this information to create your own custom analysis cohort can be found in the [Creating TCGA cohorts (part 1)](https://github.com/isb-cgc/examples-Python/blob/master/notebooks/Creating%20TCGA%20cohorts%20--%20part%201.ipynb) and [(part 2)](https://github.com/isb-cgc/examples-Python/blob/master/notebooks/Creating%20TCGA%20cohorts%20--%20part%202.ipynb) notebooks. - **Clinical_data**: This table contains information obtained from the "clinical" XML files in the TCGA Level-1 "bio" archives. Not all fields in the XML files are represented in this table, but any field which was found to be significantly filled-in for at least one tumor-type has been retained. More information about exploring this table and using this information to create your own custom analysis cohort can be found in the [Creating TCGA cohorts (part 1)](https://github.com/isb-cgc/examples-Python/blob/master/notebooks/Creating%20TCGA%20cohorts%20--%20part%201.ipynb) and [(part 2)](https://github.com/isb-cgc/examples-Python/blob/master/notebooks/Creating%20TCGA%20cohorts%20--%20part%202.ipynb) notebooks. - **Copy_Number_segments**: This table contains Level-3 copy-number segmentation results generated by The Broad Institute, from Genome Wide SNP 6 data using the CBS (Circular Binary Segmentation) algorithm. The values are base2 log(copynumber/2), centered on 0. More information about this data table can be found in the [Copy Number segments](https://github.com/isb-cgc/examples-Python/blob/master/notebooks/Copy%20Number%20segments.ipynb) notebook. - **DNA_Methylation_betas**: This table contains Level-3 summary measures of DNA methylation for each interrogated locus (beta values: M/(M+U)). 
This table contains data from two different platforms: the Illumina Infinium HumanMethylation 27k and 450k arrays. More information about this data table can be found in the [DNA Methylation](https://github.com/isb-cgc/examples-Python/blob/master/notebooks/DNA%20Methylation.ipynb) notebook. Note that individual chromosome-specific DNA Methylation tables are also available to cut down on the amount of data that you may need to query (depending on your use case). - **Protein_RPPA_data**: This table contains the normalized Level-3 protein expression levels based on each antibody used to probe the sample. More information about how this data was generated by the RPPA Core Facility at MD Anderson can be found [here](https://wiki.nci.nih.gov/display/TCGA/Protein+Array+Data+Format+Specification#ProteinArrayDataFormatSpecification-Expression-Protein), and more information about this data table can be found in the [Protein expression](https://github.com/isb-cgc/examples-Python/blob/master/notebooks/Protein%20expression.ipynb) notebook. - **Somatic_Mutation_calls**: This table contains annotated somatic mutation calls. All current MAF (Mutation Annotation Format) files were annotated using [Oncotator](http://onlinelibrary.wiley.com/doi/10.1002/humu.22771/abstract) v1.5.1.0, and merged into a single table. More information about this data table can be found in the [Somatic Mutations](https://github.com/isb-cgc/examples-Python/blob/master/notebooks/Somatic%20Mutations.ipynb) notebook, including an example of how to use the [Tute Genomics annotations database in BigQuery](http://googlegenomics.readthedocs.org/en/latest/use_cases/annotate_variants/tute_annotation.html). - **mRNA_BCGSC_HiSeq_RPKM**: This table contains mRNAseq-based gene expression data produced by the [BC Cancer Agency](http://www.bcgsc.ca/).
(For details about a very similar table, take a look at a [notebook](https://github.com/isb-cgc/examples-Python/blob/master/notebooks/UNC%20HiSeq%20mRNAseq%20gene%20expression.ipynb) describing the other mRNAseq gene expression table.) - **mRNA_UNC_HiSeq_RSEM**: This table contains mRNAseq-based gene expression data produced by [UNC Lineberger](https://unclineberger.org/). More information about this data table can be found in the [UNC HiSeq mRNAseq gene expression](https://github.com/isb-cgc/examples-Python/blob/master/notebooks/UNC%20HiSeq%20mRNAseq%20gene%20expression.ipynb) notebook. - **miRNA_expression**: This table contains miRNAseq-based expression data for mature microRNAs produced by the [BC Cancer Agency](http://www.bcgsc.ca/). More information about this data table can be found in the [microRNA expression](https://github.com/isb-cgc/examples-Python/blob/master/notebooks/BCGSC%20microRNA%20expression.ipynb) notebook. ### Where to start? We suggest that you start with the two "Creating TCGA cohorts" notebooks ([part 1](https://github.com/isb-cgc/examples-Python/blob/master/notebooks/Creating%20TCGA%20cohorts%20--%20part%201.ipynb) and [part 2](https://github.com/isb-cgc/examples-Python/blob/master/notebooks/Creating%20TCGA%20cohorts%20--%20part%202.ipynb)) which describe and make use of the Clinical and Biospecimen tables. From there you can delve into the various molecular data tables as well as the Annotations table. For now these sample notebooks are intentionally simple and do not do any analysis that integrates data from multiple tables, but once you have a grasp of how to use the data, developing your own more complex analyses should not be difficult. You could even contribute an example back to our github repository! You are also welcome to submit bug reports, comments, and feature-requests as [github issues](https://github.com/isb-cgc/examples-Python/issues).
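As a small worked example of the copy-number encoding described above (the **Copy_Number_segments** values are base-2 log(copy-number/2), centered on 0), a segment mean can be converted back to an absolute copy-number estimate in one line. The helper name below is ours, purely for illustration, and is not part of any ISB-CGC tooling:

```python
# Hypothetical helper (not part of any ISB-CGC API): invert the encoding
# seg_mean = log2(CN / 2) used by the Copy_Number_segments table.
def segment_mean_to_copy_number(seg_mean):
    return 2.0 * 2.0 ** seg_mean

# seg_mean = 0 is the normal diploid state (CN = 2); +1 doubles it (CN = 4).
print(segment_mean_to_copy_number(0.0))  # 2.0
print(segment_mean_to_copy_number(1.0))  # 4.0
```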
### A note about BigQuery tables and "tidy data" You may be used to thinking about a molecular data table such as a gene-expression table as a matrix where the rows are genes and the columns are samples (or *vice versa*). These BigQuery tables instead use the [tidy data](https://cran.r-project.org/web/packages/tidyr/vignettes/tidy-data.html) approach, with each "cell" from the traditional data-matrix becoming a single row in the BigQuery table. A 10,000 gene x 500 sample matrix would therefore become a 5,000,000 row BigQuery table.
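The reshaping described above is easy to see with pandas (an illustrative sketch using made-up gene and sample names, not part of the BigQuery tooling): a genes x samples matrix "melts" into one row per (gene, sample, value) cell.

```python
import pandas as pd

# A tiny 3-gene x 2-sample expression "matrix" in the traditional wide layout.
wide = pd.DataFrame(
    {"TCGA-01": [5.1, 0.3, 2.2], "TCGA-02": [4.8, 0.0, 2.5]},
    index=["EGFR", "TP53", "MYC"],
)
wide.index.name = "gene"

# Tidy form: one row per (gene, sample) cell -- 3 * 2 = 6 rows.
tidy = wide.reset_index().melt(id_vars="gene", var_name="sample", value_name="expr")
print(len(tidy))  # 6
```

The same arithmetic scales up directly: 10,000 genes x 500 samples melts to 5,000,000 rows.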
``` # %%writefile train.py # Import Packages import pandas as pd import numpy as np from sklearn.preprocessing import LabelEncoder import xgboost as xgb from pre_ml_process import pre_ml_process import pickle from plot_confusion_matrix import plot_confusion_matrix from sklearn.metrics import confusion_matrix, average_precision_score, precision_score, recall_score, roc_auc_score import matplotlib.pyplot as plt %matplotlib inline # location of training data training_data_loc = input("Please input the training data dir:") # Import Data df_raw = pd.read_csv(training_data_loc, encoding="cp1252") # Data cleaning df = df_raw.dropna() df = df.loc[df["f7"] != "#"] df["f7"] = df["f7"].astype(float) # f9 - remove the unknown record and binary encode the remaining two classes df = df.loc[df["f9"] != "unknown"] le_f9 = LabelEncoder() df["f9"] = le_f9.fit_transform(df["f9"]) # isolate the numerical columns numerical_cols = df.dtypes[df.dtypes != object].index.tolist() df_num = df[numerical_cols] # drop employee id primary key df_num = df_num.drop("employee_id", axis=1) # label encode string columns def fit_label_encoders(df_in): fitted_label_encoders = {} for col in df_in.dtypes[df_in.dtypes == object].index.tolist(): fitted_label_encoders[col] = LabelEncoder().fit(df_in[col]) return fitted_label_encoders fitted_label_encoders = fit_label_encoders(df.drop("employee_id", axis=1)) # concat the label encoded dataframe with the baseline dataframe def add_label_encoded(df_baseline, df_to_le, cols, fitted_label_encoders): df_out = df_baseline.copy() for col in cols: df_le = fitted_label_encoders[col].transform(df_to_le[col]) df_out[col] = df_le return df_out df_num_allLE = add_label_encoded(df_num, df, ["f1", "f2", "f3", "f4", "f10", "f12"], fitted_label_encoders) XGC=xgb.XGBClassifier(random_state=0, n_estimators=100) # parameters split_random_state=42 xgb_fit_eval_metric="aucpr" train_test_split_random_state=0 RandomOverSampler_random_state=0 test_size=0.33 # preprocessing 
df_ignore, X, y, X_train, X_test, y_train, y_test, \ scaler, X_train_resample_scaled, y_train_resample, \ X_test_scaled, ros, poly_ignore = \ pre_ml_process(df_num_allLE, test_size, train_test_split_random_state, RandomOverSampler_random_state) # save scaler to file pickle.dump(scaler, open("../../models/scaler.p", "wb")) # Train with XGBoost Classifier clf_XG = XGC.fit(X_train_resample_scaled, y_train_resample, eval_metric=xgb_fit_eval_metric) # Model evaluation # Get test set predictions y_test_hat = clf_XG.predict(X_test_scaled) y_test_proba = clf_XG.predict_proba(X_test_scaled)[:,1] # Confusion Matrix df_cm = confusion_matrix(y_test, y_test_hat, labels=[1, 0]) plot_confusion_matrix(df_cm, target_names=[1, 0], title="%s Confusion Matrix" % (type(clf_XG).__name__), normalize=True) plt.show() # Accuracy metrics ap = average_precision_score(y_test, y_test_proba) ps = precision_score(y_test, y_test_hat) rs = recall_score(y_test, y_test_hat) roc = roc_auc_score(y_test, y_test_hat) print("average_precision_score = {:.3f}".format(ap)) print("precision_score = {:.3f}".format(ps)) print("recall_score = {:.3f}".format(rs)) print("roc_auc_score = {:.3f}".format(roc)) # Feature Importances df_feature_importances = pd.DataFrame(clf_XG.feature_importances_, columns=["Importance"]) col_names = df_num_allLE.columns.tolist() col_names.remove("has_left") df_feature_importances["Feature"] = col_names df_feature_importances.sort_values("Importance", ascending=False, inplace=True) df_feature_importances = df_feature_importances.round(4) df_feature_importances = df_feature_importances.reset_index(drop=True) print(df_feature_importances) # export trained model pickle.dump(clf_XG, open("../../models/xgb_model.p", "wb")) # %%writefile test.py # import packages import pandas as pd import numpy as np import matplotlib.pyplot as plt %matplotlib inline from sklearn.linear_model import LogisticRegression from sklearn.ensemble import RandomForestClassifier from sklearn.preprocessing import 
MinMaxScaler, LabelEncoder from sklearn.metrics import confusion_matrix, average_precision_score, precision_score, recall_score, roc_auc_score import xgboost as xgb from sklearn.decomposition import PCA from pre_ml_process import pre_ml_process from plot_confusion_matrix import plot_confusion_matrix import pickle # model location model_loc = input("Please input the trained model dir:") # Import trained model clf = pickle.load(open(model_loc, "rb")) # Import Data df_raw_loc = input("Please input the testing or prediction data dir:") df_raw = pd.read_csv(df_raw_loc, encoding="cp1252") # Data cleaning df = df_raw.dropna() df = df.loc[df["f7"] != "#"] df["f7"] = df["f7"].astype(float) # f9 - remove the unknown record and binary encode the remaining two classes df = df.loc[df["f9"] != "unknown"] le_f9 = LabelEncoder() df["f9"] = le_f9.fit_transform(df["f9"]) # isolate the numerical columns numerical_cols = df.dtypes[df.dtypes != object].index.tolist() df_num = df[numerical_cols] # drop employee id primary key df_num = df_num.drop("employee_id", axis=1) # label encode string columns def fit_label_encoders(df_in): fitted_label_encoders = {} for col in df_in.dtypes[df_in.dtypes == object].index.tolist(): fitted_label_encoders[col] = LabelEncoder().fit(df_in[col]) return fitted_label_encoders fitted_label_encoders = fit_label_encoders(df.drop("employee_id", axis=1)) # concat the label encoded dataframe with the baseline dataframe def add_label_encoded(df_baseline, df_to_le, cols, fitted_label_encoders): df_out = df_baseline.copy() for col in cols: df_le = fitted_label_encoders[col].transform(df_to_le[col]) df_out[col] = df_le return df_out df_num_allLE = add_label_encoded(df_num, df, ["f1", "f2", "f3", "f4", "f10", "f12"], fitted_label_encoders) # Separate X and y y_col = "has_left" y = df_num_allLE[y_col] X = df_num_allLE.drop(y_col, axis=1) X = X.astype(float) # Scale predictors scaler = pickle.load(open("scaler.p", "rb")) X_scaled = scaler.transform(X) # Get predictions 
y_hat = clf.predict(X_scaled) y_proba = clf.predict_proba(X_scaled)[:,1] # Confusion Matrix df_cm = confusion_matrix(y, y_hat, labels=[1, 0]) plot_confusion_matrix(df_cm, target_names=[1, 0], title="%s Confusion Matrix" % (type(clf).__name__), normalize=True) # accuracy metrics ap = average_precision_score(y, y_proba) ps = precision_score(y, y_hat) rs = recall_score(y, y_hat) roc = roc_auc_score(y, y_hat) print("average_precision_score = {:.3f}".format(ap)) print("precision_score = {:.3f}".format(ps)) print("recall_score = {:.3f}".format(rs)) print("roc_auc_score = {:.3f}".format(roc)) # Feature Importances df_feature_importances = pd.DataFrame(clf.feature_importances_, columns=["Importance"]) col_names = df_num_allLE.columns.tolist() col_names.remove("has_left") df_feature_importances["Feature"] = col_names df_feature_importances.sort_values("Importance", ascending=False, inplace=True) df_feature_importances = df_feature_importances.round(4) df_feature_importances = df_feature_importances.reset_index(drop=True) print(df_feature_importances) # concat test data with predictions; reset the index first so the filtered # dataframe's rows align with the freshly indexed prediction Series df_in_with_predictions = pd.concat([df_num_allLE.reset_index(drop=True), pd.Series(y_hat, name="y_hat"), pd.Series(y_proba, name="y_hat_probability")], axis=1) # Export predictions df_in_with_predictions.to_csv("../../data/prediction/prediction_export.csv", index=False) ```
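One caveat with the scripts above: `test.py` re-fits its `LabelEncoder`s on the incoming data, so a category that appears in a different order (or is missing) at prediction time can silently map to different integer codes than at training time. (Note also that `train.py` writes the scaler to `../../models/scaler.p` while `test.py` reads `scaler.p` from the working directory; the two must point at the same file.) Persisting the fitted mapping alongside the scaler avoids the re-fit problem. Below is a minimal sketch of that pattern, with a plain dict standing in for sklearn's `LabelEncoder` and an in-memory buffer standing in for the pickle file, so it is self-contained:

```python
import io
import pickle

# Fit a category -> code mapping once, on the training data only.
train_categories = ["sales", "hr", "engineering", "hr"]
mapping = {cat: code for code, cat in enumerate(sorted(set(train_categories)))}

# Persist it exactly as train.py persists the scaler (here: an in-memory buffer
# instead of a file on disk).
buf = io.BytesIO()
pickle.dump(mapping, buf)
buf.seek(0)

# At prediction time, load and reuse the *same* mapping instead of re-fitting.
loaded = pickle.load(buf)
encoded = [loaded[cat] for cat in ["hr", "sales"]]
print(encoded)  # codes are stable across train and test runs
```

In the real scripts, the `fitted_label_encoders` dict returned by `fit_label_encoders` could be pickled next to `scaler.p` in the same way.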
``` import pandas as pd import numpy as np import matplotlib.pyplot as plt import seaborn as sns import statsmodels.api as sm import patsy # Data: https://archive.ics.uci.edu/ml/datasets/Bike+Sharing+Dataset # UCI citation: # Dua, D. and Graff, C. (2019). UCI Machine Learning Repository [http://archive.ics.uci.edu/ml]. Irvine, CA: University of California, School of Information and Computer Science. # Source: # # Hadi Fanaee-T # Laboratory of Artificial Intelligence and Decision Support (LIAAD), University of Porto # INESC Porto, Campus da FEUP # Rua Dr. Roberto Frias, 378 # 4200 - 465 Porto, Portugal # Original Source: http://capitalbikeshare.com/system-data bikes = pd.read_csv('bikes.csv') # Fit model1 model1 = sm.OLS.from_formula('cnt ~ temp + windspeed + holiday', data=bikes).fit() # Fit model2 model2 = sm.OLS.from_formula('cnt ~ hum + season + weekday', data=bikes).fit() # Print R-squared for both models print(model1.rsquared) print(model2.rsquared) sns.scatterplot(x='temp', y='cnt', data = bikes) plt.show() model1 = sm.OLS.from_formula('cnt ~ temp', data=bikes).fit() xs = pd.DataFrame({'temp': np.linspace(bikes.temp.min(), bikes.temp.max(), 100)}) ys = model1.predict(xs) sns.scatterplot(x='temp', y='cnt', data = bikes) plt.plot(xs, ys, color = 'black', linewidth=4) plt.show() model2 = sm.OLS.from_formula('cnt ~ temp + np.power(temp, 2)', data=bikes).fit() xs = pd.DataFrame({'temp': np.linspace(bikes.temp.min(), bikes.temp.max(), 100)}) ys = model2.predict(xs) sns.scatterplot(x='temp', y='cnt', data = bikes) plt.plot(xs, ys, color = 'black', linewidth=4) plt.show() model3 = sm.OLS.from_formula('cnt ~ temp + np.power(temp, 2) + np.power(temp, 3)', data=bikes).fit() xs = pd.DataFrame({'temp': np.linspace(bikes.temp.min(), bikes.temp.max(), 100)}) ys = model3.predict(xs) sns.scatterplot(x='temp', y='cnt', data = bikes) plt.plot(xs, ys, color = 'black', linewidth=4) plt.show() model4 = sm.OLS.from_formula('cnt ~ temp + np.power(temp, 2) + np.power(temp, 3) + 
np.power(temp, 4) + np.power(temp, 5)', data=bikes).fit() xs = pd.DataFrame({'temp': np.linspace(bikes.temp.min(), bikes.temp.max(), 100)}) ys = model4.predict(xs) sns.scatterplot(x='temp', y='cnt', data = bikes) plt.plot(xs, ys, color = 'black', linewidth=4) plt.show() model5 = sm.OLS.from_formula('cnt ~ temp + np.power(temp, 2) + np.power(temp, 3) + np.power(temp, 4) + np.power(temp, 5) + np.power(temp, 6) + np.power(temp, 7) + np.power(temp, 8) + np.power(temp, 9) + np.power(temp, 10)', data=bikes).fit() xs = pd.DataFrame({'temp': np.linspace(bikes.temp.min(), bikes.temp.max(), 100)}) ys = model5.predict(xs) sns.scatterplot(x='temp', y='cnt', data = bikes) plt.plot(xs, ys, color = 'black', linewidth=4) plt.show() print(model1.rsquared) print(model2.rsquared) print(model3.rsquared) print(model4.rsquared) print(model5.rsquared) print(model1.rsquared_adj) print(model2.rsquared_adj) print(model3.rsquared_adj) print(model4.rsquared_adj) print(model5.rsquared_adj) from statsmodels.stats.anova import anova_lm anova_results = anova_lm(model1, model2, model3, model4, model5) print(anova_results.round(2)) print(model1.llf) print(model2.llf) print(model3.llf) print(model4.llf) print(model5.llf) print(model1.aic) print(model2.aic) print(model3.aic) print(model4.aic) print(model5.aic) print(model1.bic) print(model2.bic) print(model3.bic) print(model4.bic) print(model5.bic) # Set seed (don't change this) np.random.seed(123) # Split bikes data indices = range(len(bikes)) s = int(0.8*len(indices)) train_ind = np.random.choice(indices, size = s, replace = False) test_ind = list(set(indices) - set(train_ind)) bikes_train = bikes.iloc[train_ind] bikes_test = bikes.iloc[test_ind] # Fit model1 model1 = sm.OLS.from_formula('cnt ~ temp + atemp + hum', data=bikes_train).fit() # Fit model2 model2 = sm.OLS.from_formula('cnt ~ season + windspeed + weekday', data=bikes_train).fit() # Calculate predicted cnt based on model1 fitted1 = model1.predict(bikes_test) # Calculate predicted cnt 
based on model2 fitted2 = model2.predict(bikes_test) # Calculate PRMSE for model1 true = bikes_test.cnt prmse1 = np.mean((true-fitted1)**2)**.5 # Calculate PRMSE for model2 prmse2 = np.mean((true-fitted2)**2)**.5 # Print PRMSE for both models print(prmse1) print(prmse2) ```
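The adjusted R-squared comparison above applies a penalty for each extra predictor: R2_adj = 1 - (1 - R2) * (n - 1) / (n - p - 1), where n is the number of observations and p the number of predictors. Since the formula is short, here is a self-contained check of it (plain Python, no statsmodels required; the n and p values are arbitrary illustrations):

```python
def adjusted_r_squared(r2, n, p):
    """Adjusted R^2 for n observations and p predictors."""
    return 1.0 - (1.0 - r2) * (n - 1) / (n - p - 1)

# With many observations the penalty is mild...
print(adjusted_r_squared(0.40, n=731, p=3))
# ...but with few observations, piling on terms is punished hard
# (adjusted R^2 can even go negative).
print(adjusted_r_squared(0.40, n=15, p=10))
```

This is why `model5`'s raw R-squared can keep climbing with every added polynomial term while its adjusted R-squared, AIC, and BIC tell a more cautious story.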
<h1 align="center">SimpleITK Spatial Transformations</h1> **Summary:** 1. Points are represented by vector-like data types: Tuple, Numpy array, List. 2. Matrices are represented by vector-like data types in row major order. 3. Default transformation initialization as the identity transform. 4. Angles specified in radians, distances specified in unknown but consistent units (nm,mm,m,km...). 5. All global transformations **except translation** are of the form: $$T(\mathbf{x}) = A(\mathbf{x}-\mathbf{c}) + \mathbf{t} + \mathbf{c}$$ Nomenclature (when printing your transformation): * Matrix: the matrix $A$ * Center: the point $\mathbf{c}$ * Translation: the vector $\mathbf{t}$ * Offset: $\mathbf{t} + \mathbf{c} - A\mathbf{c}$ 6. Bounded transformations, BSplineTransform and DisplacementFieldTransform, behave as the identity transform outside the defined bounds. 7. DisplacementFieldTransform: * Initializing the DisplacementFieldTransform using an image requires that the image's pixel type be sitk.sitkVectorFloat64. * Initializing the DisplacementFieldTransform using an image will "clear out" your image (your alias to the image will point to an empty, zero sized, image). 8. Composite transformations are applied in stack order (first added, last applied). # Transformation Types This notebook introduces the transformation types supported by SimpleITK and illustrates how to "promote" transformations from a lower to higher parameter space (e.g. 3D translation to 3D rigid). 
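The Matrix/Center/Translation/Offset bookkeeping in point 5 can be checked numerically: with Offset defined as $\mathbf{t} + \mathbf{c} - A\mathbf{c}$, the transform reduces to $T(\mathbf{x}) = A\mathbf{x} + \textrm{Offset}$. This notebook otherwise works in R, but the identity is language-independent; a purely illustrative NumPy sketch:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(3, 3))   # the "Matrix"
c = rng.normal(size=3)        # the "Center"
t = rng.normal(size=3)        # the "Translation"
x = rng.normal(size=3)        # an arbitrary point

# T(x) = A(x - c) + t + c  is equivalent to  T(x) = A x + offset
offset = t + c - A @ c
lhs = A @ (x - c) + t + c
rhs = A @ x + offset
print(np.allclose(lhs, rhs))  # True
```

This is exactly what SimpleITK prints as "Offset" when you print a global transform.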
| Class Name | Details| |:-------------|:---------| |[TranslationTransform](https://simpleitk.org/doxygen/latest/html/classitk_1_1simple_1_1TranslationTransform.html) | 2D or 3D, translation| |[VersorTransform](https://simpleitk.org/doxygen/latest/html/classitk_1_1simple_1_1VersorTransform.html)| 3D, rotation represented by a versor| |[VersorRigid3DTransform](https://simpleitk.org/doxygen/latest/html/classitk_1_1simple_1_1VersorRigid3DTransform.html)|3D, rigid transformation with rotation represented by a versor| |[Euler2DTransform](https://simpleitk.org/doxygen/latest/html/classitk_1_1simple_1_1Euler2DTransform.html)| 2D, rigid transformation with rotation represented by a Euler angle| |[Euler3DTransform](https://simpleitk.org/doxygen/latest/html/classitk_1_1simple_1_1Euler3DTransform.html)| 3D, rigid transformation with rotation represented by Euler angles| |[Similarity2DTransform](https://simpleitk.org/doxygen/latest/html/classitk_1_1simple_1_1Similarity2DTransform.html)| 2D, composition of isotropic scaling and rigid transformation with rotation represented by a Euler angle| |[Similarity3DTransform](https://simpleitk.org/doxygen/latest/html/classitk_1_1simple_1_1Similarity3DTransform.html) | 3D, composition of isotropic scaling and rigid transformation with rotation represented by a versor| |[ScaleTransform](https://simpleitk.org/doxygen/latest/html/classitk_1_1simple_1_1ScaleTransform.html)|2D or 3D, anisotropic scaling| |[ScaleVersor3DTransform](https://simpleitk.org/doxygen/latest/html/classitk_1_1simple_1_1ScaleVersor3DTransform.html)| 3D, rigid transformation and anisotropic scale is **added** to the rotation matrix part (not composed as one would expect)| |[ScaleSkewVersor3DTransform](https://simpleitk.org/doxygen/latest/html/classitk_1_1simple_1_1ScaleSkewVersor3DTransform.html#details)|3D, rigid transformation with anisotropic scale and skew matrices **added** to the rotation matrix part (not composed as one would expect) | 
|[AffineTransform](https://simpleitk.org/doxygen/latest/html/classitk_1_1simple_1_1AffineTransform.html)| 2D or 3D, affine transformation| |[BSplineTransform](https://simpleitk.org/doxygen/latest/html/classitk_1_1simple_1_1BSplineTransform.html)|2D or 3D, deformable transformation represented by a sparse regular grid of control points | |[DisplacementFieldTransform](https://simpleitk.org/doxygen/latest/html/classitk_1_1simple_1_1DisplacementFieldTransform.html)| 2D or 3D, deformable transformation represented as a dense regular grid of vectors| |[CompositeTransform](https://simpleitk.org/doxygen/latest/html/classitk_1_1simple_1_1CompositeTransform.html)| 2D or 3D, stack of transformations concatenated via composition, last added, first applied| |[Transform](https://simpleitk.org/doxygen/latest/html/classitk_1_1simple_1_1Transform.html#details) | 2D or 3D, parent/superclass for all transforms| ``` library(SimpleITK) library(scatterplot3d) OUTPUT_DIR <- "Output" print(Version()) ``` ## Points in SimpleITK ### Utility functions A number of functions that deal with point data in a uniform manner. ``` # Format a point for printing, based on specified precision with trailing zeros. Uniform printing for vector-like data # (vector, array, list). # @param point (vector-like): nD point with floating point coordinates. # @param precision (int): Number of digits after the decimal point. # @return: String representation of the given point "xx.xxx yy.yyy zz.zzz...". point2str <- function(point, precision=1) { precision_str <- sprintf("%%.%df",precision) return(paste(lapply(point, function(x) sprintf(precision_str, x)), collapse=", ")) } # Generate random (uniform within bounds) nD point cloud. Dimension is based on the number of pairs in the # bounds input. # @param bounds (list(vector-like)): List where each vector defines the coordinate bounds. # @param num_points (int): Number of points to generate. # @return (matrix): Matrix whose columns are the set of points.
uniform_random_points <- function(bounds, num_points) { return(t(sapply(bounds, function(bnd,n=num_points) runif(n, min(bnd),max(bnd))))) } # Distances between points transformed by the given transformation and their # location in another coordinate system. When the points are only used to evaluate # registration accuracy (not used in the registration) this is the target registration # error (TRE). # @param tx (SimpleITK transformation): Transformation applied to the points in point_list # @param point_data (matrix): Matrix whose columns are points which we transform using tx. # @param reference_point_data (matrix): Matrix whose columns are points to which we compare # the transformed point data. # @return (vector): Distances between the transformed points and the reference points. target_registration_errors <- function(tx, point_data, reference_point_data) { transformed_points_mat <- apply(point_data, MARGIN=2, tx$TransformPoint) return (sqrt(colSums((transformed_points_mat - reference_point_data)^2))) } # Check whether two transformations are "equivalent" in an arbitrary spatial region # either 3D or 2D, [x=(-10,10), y=(-100,100), z=(-1000,1000)]. This is just a sanity check, # as we are just looking at the effect of the transformations on a random set of points in # the region. 
print_transformation_differences <- function(tx1, tx2) { if (tx1$GetDimension()==2 && tx2$GetDimension()==2) { bounds <- list(c(-10,10), c(-100,100)) } else if(tx1$GetDimension()==3 && tx2$GetDimension()==3) { bounds <- list(c(-10,10), c(-100,100), c(-1000,1000)) } else stop('Transformation dimensions mismatch, or unsupported transformation dimensionality') num_points <- 10 point_data <- uniform_random_points(bounds, num_points) tx1_point_data <- apply(point_data, MARGIN=2, tx1$TransformPoint) differences <- target_registration_errors(tx2, point_data, tx1_point_data) cat(tx1$GetName(), "-", tx2$GetName(), ":\tminDifference: ", toString(min(differences)), " maxDifference: ",toString(max(differences))) } ``` In SimpleITK points can be represented by any vector-like data type. In R these include vector, array, and list. In general R will treat these data types differently, as illustrated by the print function below. ``` # SimpleITK points represented by vector-like data structures. point_vector <- c(9.0, 10.531, 11.8341) point_array <- array(c(9.0, 10.531, 11.8341),dim=c(1,3)) point_list <- list(9.0, 10.531, 11.8341) print(point_vector) print(point_array) print(point_list) # Uniform printing with specified precision. precision <- 2 print(point2str(point_vector, precision)) print(point2str(point_array, precision)) print(point2str(point_list, precision)) ``` ## Global Transformations All global transformations <i>except translation</i> are of the form: $$T(\mathbf{x}) = A(\mathbf{x}-\mathbf{c}) + \mathbf{t} + \mathbf{c}$$ In ITK speak (when printing your transformation): <ul> <li>Matrix: the matrix $A$</li> <li>Center: the point $\mathbf{c}$</li> <li>Translation: the vector $\mathbf{t}$</li> <li>Offset: $\mathbf{t} + \mathbf{c} - A\mathbf{c}$</li> </ul> ## TranslationTransform ``` # A 3D translation. Note that you need to specify the dimensionality, as the sitk TranslationTransform # represents both 2D and 3D translations. 
dimension <- 3 offset <- c(1,2,3) # offset can be any vector-like data translation <- TranslationTransform(dimension, offset) print(translation) translation$GetOffset() # Transform a point and use the inverse transformation to get the original back. point <- c(10, 11, 12) transformed_point <- translation$TransformPoint(point) translation_inverse <- translation$GetInverse() cat(paste0("original point: ", point2str(point), "\n", "transformed point: ", point2str(transformed_point), "\n", "back to original: ", point2str(translation_inverse$TransformPoint(transformed_point)))) ``` ## Euler2DTransform ``` point <- c(10, 11) rotation2D <- Euler2DTransform() rotation2D$SetTranslation(c(7.2, 8.4)) rotation2D$SetAngle(pi/2.0) cat(paste0("original point: ", point2str(point), "\n", "transformed point: ", point2str(rotation2D$TransformPoint(point)),"\n")) # Change the center of rotation so that it coincides with the point we want to # transform, why is this a unique configuration? rotation2D$SetCenter(point) cat(paste0("original point: ", point2str(point), "\n", "transformed point: ", point2str(rotation2D$TransformPoint(point)),"\n")) ``` ## VersorTransform ``` # Rotation only, parametrized by Versor (vector part of unit quaternion), # quaternion defined by rotation of theta around axis n: # q = [n*sin(theta/2), cos(theta/2)] # 180 degree rotation around z axis # Use a versor: rotation1 <- VersorTransform(c(0,0,1,0)) # Use axis-angle: rotation2 <- VersorTransform(c(0,0,1), pi) # Use a matrix: rotation3 <- VersorTransform() rotation3$SetMatrix(c(-1, 0, 0, 0, -1, 0, 0, 0, 1)) point <- c(10, 100, 1000) p1 <- rotation1$TransformPoint(point) p2 <- rotation2$TransformPoint(point) p3 <- rotation3$TransformPoint(point) cat(paste0("Points after transformation:\np1=", point2str(p1,15), "\np2=", point2str(p2,15),"\np3=", point2str(p3,15))) ``` We applied the "same" transformation to the same point, so why are the results slightly different for the second initialization method? 
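One way to see what is going on is to rebuild the rotation matrix in plain NumPy (illustrative only, not SimpleITK code). For a 180-degree rotation about z, the versor (0,0,1,0) is exact, but the axis-angle route evaluates cos(pi/2), which comes out as roughly 6e-17 rather than exactly 0 in double precision, and that residue leaks into the off-diagonal matrix entries:

```python
import numpy as np

theta = np.pi
x, y, z = 0.0, 0.0, 1.0                      # rotation axis
w = np.cos(theta / 2)                        # exactly 0 in theory, ~6.1e-17 here
s = np.sin(theta / 2)                        # exactly 1.0
qx, qy, qz = x * s, y * s, z * s

# Rotation matrix from the unit quaternion (w, qx, qy, qz).
R = np.array([
    [1 - 2 * (qy**2 + qz**2), 2 * (qx * qy - w * qz), 2 * (qx * qz + w * qy)],
    [2 * (qx * qy + w * qz), 1 - 2 * (qx**2 + qz**2), 2 * (qy * qz - w * qx)],
    [2 * (qx * qz - w * qy), 2 * (qy * qz + w * qx), 1 - 2 * (qx**2 + qy**2)],
])
R_exact = np.diag([-1.0, -1.0, 1.0])
print(np.abs(R - R_exact).max())  # tiny but nonzero
```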
This is where theory meets practice. Using the axis-angle initialization method involves trigonometric functions which on a fixed precision machine lead to these slight differences. In many cases this is not an issue, but it is something to remember. From here on we will sweep it under the rug (printing with a more reasonable precision). ## Translation to Rigid [3D] Copy the translational component. ``` dimension <- 3 trans <- c(1,2,3) translation <- TranslationTransform(dimension, trans) # Only need to copy the translational component. rigid_euler <- Euler3DTransform() rigid_euler$SetTranslation(translation$GetOffset()) rigid_versor <- VersorRigid3DTransform() rigid_versor$SetTranslation(translation$GetOffset()) # Sanity check to make sure the transformations are equivalent. bounds <- list(c(-10,10), c(-100,100), c(-1000,1000)) num_points <- 10 point_data <- uniform_random_points(bounds, num_points) transformed_point_data <- apply(point_data, MARGIN=2, translation$TransformPoint) # Draw the original and transformed points. all_data <- cbind(point_data, transformed_point_data) xbnd <- range(all_data[1,]) ybnd <- range(all_data[2,]) zbnd <- range(all_data[3,]) s3d <- scatterplot3d(t(point_data), color = "blue", pch = 19, xlab='', ylab='', zlab='', xlim=xbnd, ylim=ybnd, zlim=zbnd) s3d$points3d(t(transformed_point_data), col = "red", pch = 17) legend("topleft", col= c("blue", "red"), pch=c(19,17), legend = c("Original points", "Transformed points")) euler_errors <- target_registration_errors(rigid_euler, point_data, transformed_point_data) versor_errors <- target_registration_errors(rigid_versor, point_data, transformed_point_data) cat(paste0("Euler\tminError:", point2str(min(euler_errors))," maxError: ", point2str(max(euler_errors)),"\n")) cat(paste0("Versor\tminError:", point2str(min(versor_errors))," maxError: ", point2str(max(versor_errors)),"\n")) ``` ## Rotation to Rigid [3D] Copy the matrix or versor and <b>center of rotation</b>. 
``` rotationCenter <- c(10, 10, 10) rotation <- VersorTransform(c(0,0,1,0), rotationCenter) rigid_euler <- Euler3DTransform() rigid_euler$SetMatrix(rotation$GetMatrix()) rigid_euler$SetCenter(rotation$GetCenter()) rigid_versor <- VersorRigid3DTransform() rigid_versor$SetRotation(rotation$GetVersor()) #rigid_versor$SetCenter(rotation$GetCenter()) #intentional error # Sanity check to make sure the transformations are equivalent. bounds <- list(c(-10,10),c(-100,100), c(-1000,1000)) num_points <- 10 point_data <- uniform_random_points(bounds, num_points) transformed_point_data <- apply(point_data, MARGIN=2, rotation$TransformPoint) euler_errors <- target_registration_errors(rigid_euler, point_data, transformed_point_data) versor_errors <- target_registration_errors(rigid_versor, point_data, transformed_point_data) # Draw the points transformed by the original transformation and after transformation # using the incorrect transformation, illustrate the effect of center of rotation. incorrect_transformed_point_data <- apply(point_data, 2, rigid_versor$TransformPoint) all_data <- cbind(transformed_point_data, incorrect_transformed_point_data) xbnd <- range(all_data[1,]) ybnd <- range(all_data[2,]) zbnd <- range(all_data[3,]) s3d <- scatterplot3d(t(transformed_point_data), color = "blue", pch = 19, xlab='', ylab='', zlab='', xlim=xbnd, ylim=ybnd, zlim=zbnd) s3d$points3d(t(incorrect_transformed_point_data), col = "red", pch = 17) legend("topleft", col= c("blue", "red"), pch=c(19,17), legend = c("Correctly transformed points", "Incorrectly transformed points")) cat(paste0("Euler\tminError:", point2str(min(euler_errors))," maxError: ", point2str(max(euler_errors)),"\n")) cat(paste0("Versor\tminError:", point2str(min(versor_errors))," maxError: ", point2str(max(versor_errors)),"\n")) ``` ## Similarity [2D] When the center of the similarity transformation is not at the origin, the effect of the transformation is not what most of us expect.
This is readily visible if we limit the transformation to scaling: $T(\mathbf{x}) = s\mathbf{x}-s\mathbf{c} + \mathbf{c}$. Changing the transformation's center results in scale + translation. ``` # 2D square centered on (0,0) points <- matrix(data=c(-1.0,-1.0, -1.0,1.0, 1.0,1.0, 1.0,-1.0), ncol=4, nrow=2) # Scale by 2 (center default is [0,0]) similarity <- Similarity2DTransform(); similarity$SetScale(2) scaled_points <- apply(points, MARGIN=2, similarity$TransformPoint) #Uncomment the following lines to change the transformation's center and see what happens: #similarity$SetCenter(c(0,2)) #scaled_points <- apply(points, 2, similarity$TransformPoint) plot(points[1,],points[2,], xlim=c(-10,10), ylim=c(-10,10), pch=19, col="blue", xlab="", ylab="", las=1) points(scaled_points[1,], scaled_points[2,], col="red", pch=17) legend('top', col= c("red", "blue"), pch=c(17,19), legend = c("transformed points", "original points")) ``` ## Rigid to Similarity [3D] Copy the translation, center, and matrix or versor. ``` rotation_center <- c(100, 100, 100) theta_x <- 0.0 theta_y <- 0.0 theta_z <- pi/2.0 translation <- c(1,2,3) rigid_euler <- Euler3DTransform(rotation_center, theta_x, theta_y, theta_z, translation) similarity <- Similarity3DTransform() similarity$SetMatrix(rigid_euler$GetMatrix()) similarity$SetTranslation(rigid_euler$GetTranslation()) similarity$SetCenter(rigid_euler$GetCenter()) # Apply the transformations to the same set of random points and compare the results # (see utility functions at top of notebook). print_transformation_differences(rigid_euler, similarity) ``` ## Similarity to Affine [3D] Copy the translation, center and matrix.
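In plain matrix terms the promotion works because the similarity's matrix already contains the scale, i.e. `GetMatrix()` returns $sR$. A quick base-R sketch of the idea, independent of SimpleITK (all names below are ours; we use a rotation about z by $\pi/2$ for concreteness):

```r
# The similarity map s*R*(x - c) + t + c and the affine map A*(x - c) + t + c
# with A = s*R agree on every point, which is why copying the matrix,
# translation and center is sufficient.
s  <- 2.0
theta <- pi/2
R  <- matrix(c(cos(theta), -sin(theta), 0,
               sin(theta),  cos(theta), 0,
               0,           0,          1), nrow=3, byrow=TRUE)
c0 <- c(100, 100, 100)   # center
t0 <- c(1, 2, 3)         # translation
x  <- c(10, 20, 30)      # an arbitrary test point

similarity_map <- function(x) as.vector(s * (R %*% (x - c0))) + t0 + c0
A <- s * R               # what GetMatrix() of the similarity holds
affine_map <- function(x) as.vector(A %*% (x - c0)) + t0 + c0

stopifnot(all.equal(similarity_map(x), affine_map(x)))
```

The SimpleITK cell below does exactly this copy via `SetMatrix`, `SetTranslation` and `SetCenter`.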
``` rotation_center <- c(100, 100, 100) axis <- c(0,0,1) angle <- pi/2.0 translation <- c(1,2,3) scale_factor <- 2.0 similarity <- Similarity3DTransform(scale_factor, axis, angle, translation, rotation_center) affine <- AffineTransform(3) affine$SetMatrix(similarity$GetMatrix()) affine$SetTranslation(similarity$GetTranslation()) affine$SetCenter(similarity$GetCenter()) # Apply the transformations to the same set of random points and compare the results # (see utility functions at top of notebook). print_transformation_differences(similarity, affine) ``` ## Scale Transform Just as was the case for the similarity transformation above, when the transformation's center is not at the origin, instead of a pure anisotropic scaling we also have translation ($T(\mathbf{x}) = S\mathbf{x}-S\mathbf{c} + \mathbf{c}$, where $S = \textrm{diag}(\mathbf{s})$). ``` # 2D square centered on (0,0). points <- matrix(data=c(-1.0,-1.0, -1.0,1.0, 1.0,1.0, 1.0,-1.0), ncol=4, nrow=2) # Scale by half in x and 2 in y. scale <- ScaleTransform(2, c(0.5,2)); scaled_points <- apply(points, 2, scale$TransformPoint) #Uncomment the following lines to change the transformation's center and see what happens: #scale$SetCenter(c(0,2)) #scaled_points <- apply(points, 2, scale$TransformPoint) plot(points[1,],points[2,], xlim=c(-10,10), ylim=c(-10,10), pch=19, col="blue", xlab="", ylab="", las=1) points(scaled_points[1,], scaled_points[2,], col="red", pch=17) legend('top', col= c("red", "blue"), pch=c(17,19), legend = c("transformed points", "original points")) ``` ## Scale Versor This is not what you would expect from the name (a composition of anisotropic scaling and a rigid transformation). This is: $$T(x) = (R+S)(\mathbf{x}-\mathbf{c}) + \mathbf{t} + \mathbf{c},\;\; \textrm{where } S= \left[\begin{array}{ccc} s_0-1 & 0 & 0 \\ 0 & s_1-1 & 0 \\ 0 & 0 & s_2-1 \end{array}\right]$$ There is no natural way of "promoting" the similarity transformation to this transformation.
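The $(R+S)$ form above can be sanity-checked with plain base-R matrices (a sketch, not SimpleITK; the variable names are ours, and we take the identity rotation for simplicity):

```r
# With R = I, R + S reduces to diag(s): pure anisotropic scaling about
# the center cc, followed by the translation tt.
s  <- c(0.5, 0.7, 0.9)   # the per-axis scales
R  <- diag(3)            # identity rotation
S  <- diag(s - 1)        # the S matrix from the formula
cc <- c(1, 1, 1)         # center
tt <- c(1, 2, 3)         # translation
x  <- c(2, 4, 6)         # an arbitrary test point

y <- as.vector((R + S) %*% (x - cc)) + tt + cc
stopifnot(all.equal(y, s * (x - cc) + tt + cc))
```

With a non-identity rotation the result is $R(\mathbf{x}-\mathbf{c})$ plus a scaling term added to the rotation matrix, which is exactly why this is not a composition of a scaling with a rigid transformation.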
``` scales <- c(0.5,0.7,0.9) translation <- c(1,2,3) axis <- c(0,0,1) angle <- 0.0 scale_versor <- ScaleVersor3DTransform(scales, axis, angle, translation) print(scale_versor) ``` ## Scale Skew Versor Again, not what you would expect based on the name; this is not a composition of transformations. This is: $$T(x) = (R+S+K)(\mathbf{x}-\mathbf{c}) + \mathbf{t} + \mathbf{c},\;\; \textrm{where } S = \left[\begin{array}{ccc} s_0-1 & 0 & 0 \\ 0 & s_1-1 & 0 \\ 0 & 0 & s_2-1 \end{array}\right]\;\; \textrm{and } K = \left[\begin{array}{ccc} 0 & k_0 & k_1 \\ k_2 & 0 & k_3 \\ k_4 & k_5 & 0 \end{array}\right]$$ In practice this is an over-parametrized version of the affine transform: 15 parameters (scale, skew, versor, translation) vs. 12 parameters (matrix, translation). ``` scale <- c(2,2.1,3) skew <- seq(0, 1, length.out=6) # six equally spaced values in [0,1], an arbitrary choice translation <- c(1,2,3) versor <- c(0,0,0,1.0) scale_skew_versor <- ScaleSkewVersor3DTransform(scale, skew, versor, translation) print(scale_skew_versor) ``` ## Bounded Transformations SimpleITK supports two types of bounded non-rigid transformations, BSplineTransform (sparse representation) and DisplacementFieldTransform (dense representation). Transforming a point that is outside the bounds will return the original point - an identity transform. ``` # # This function displays the effects of the deformable transformation on a grid of points by scaling the # initial displacements (either of the control points for the BSpline or of the deformation field itself). It does # assume that all points are contained in the rectangle (-2.5,-2.5) to (2.5,2.5) - for display.
# display_displacement_scaling_effect <- function(s, original_x_mat, original_y_mat, tx, original_control_point_displacements) { if(tx$GetDimension()!=2) stop('display_displacement_scaling_effect only works in 2D') tx$SetParameters(s*original_control_point_displacements) transformed_points <- mapply(function(x,y) tx$TransformPoint(c(x,y)), original_x_mat, original_y_mat) plot(original_x_mat,original_y_mat, xlim=c(-2.5,2.5), ylim=c(-2.5,2.5), pch=19, col="blue", xlab="", ylab="", las=1) points(transformed_points[1,], transformed_points[2,], col="red", pch=17) legend('top', col= c("red", "blue"), pch=c(17,19), legend = c("transformed points", "original points")) } ``` ## BSpline Using a sparse set of control points to control a free form deformation. ``` # Create the transformation (when working with images it is easier to use the BSplineTransformInitializer function # or its object oriented counterpart BSplineTransformInitializerFilter). dimension <- 2 spline_order <- 3 direction_matrix_row_major <- c(1.0,0.0,0.0,1.0) # identity, mesh is axis aligned origin <- c(-1.0,-1.0) domain_physical_dimensions <- c(2,2) bspline <- BSplineTransform(dimension, spline_order) bspline$SetTransformDomainOrigin(origin) bspline$SetTransformDomainDirection(direction_matrix_row_major) bspline$SetTransformDomainPhysicalDimensions(domain_physical_dimensions) bspline$SetTransformDomainMeshSize(c(4,3)) # Random displacement of the control points. originalControlPointDisplacements <- runif(length(bspline$GetParameters())) bspline$SetParameters(originalControlPointDisplacements) # Apply the bspline transformation to a grid of points # starting the point set exactly at the origin of the bspline mesh is problematic as # these points are considered outside the transformation's domain, # remove epsilon below and see what happens. 
numSamplesX = 10 numSamplesY = 20 eps <- .Machine$double.eps coordsX <- seq(origin[1] + eps, origin[1] + domain_physical_dimensions[1], (domain_physical_dimensions[1]-eps)/(numSamplesX-1)) coordsY <- seq(origin[2] + eps, origin[2] + domain_physical_dimensions[2], (domain_physical_dimensions[2]-eps)/(numSamplesY-1)) # next two lines equivalent to Python's/MATLAB's meshgrid XX <- outer(coordsY*0, coordsX, "+") YY <- outer(coordsY, coordsX*0, "+") display_displacement_scaling_effect(0.0, XX, YY, bspline, originalControlPointDisplacements) #uncomment the following line to see the effect of scaling the control point displacements # on our set of points (we recommend keeping the scaling in the range [-1.5,1.5] due to display bounds) #display_displacement_scaling_effect(0.5, XX, YY, bspline, originalControlPointDisplacements) ``` ## DisplacementField A dense set of vectors representing the displacement inside the given domain. The most generic representation of a transformation. ``` # Create the displacement field. # When working with images the safer thing to do is use the image based constructor, # DisplacementFieldTransform(my_image), all the fixed parameters will be set correctly and the displacement # field is initialized using the vectors stored in the image. SimpleITK requires that the image's pixel type be # "sitkVectorFloat64". 
displacement <- DisplacementFieldTransform(2) field_size <- c(10,20) field_origin <- c(-1.0,-1.0) field_spacing <- c(2.0/9.0,2.0/19.0) field_direction <- c(1,0,0,1) # direction cosine matrix (row major order) # Concatenate all the information into a single list displacement$SetFixedParameters(c(field_size, field_origin, field_spacing, field_direction)) # Set the interpolator, either sitkLinear which is default or nearest neighbor displacement$SetInterpolator("sitkNearestNeighbor") originalDisplacements <- runif(length(displacement$GetParameters())) displacement$SetParameters(originalDisplacements) coordsX <- seq(field_origin[1], field_origin[1]+(field_size[1]-1)*field_spacing[1], field_spacing[1]) coordsY <- seq(field_origin[2], field_origin[2]+(field_size[2]-1)*field_spacing[2], field_spacing[2]) # next two lines equivalent to Python's/MATLAB's meshgrid XX <- outer(coordsY*0, coordsX, "+") YY <- outer(coordsY, coordsX*0, "+") display_displacement_scaling_effect(0.0, XX, YY, displacement, originalDisplacements) #uncomment the following line to see the effect of scaling the control point displacements # on our set of points (we recommend keeping the scaling in the range [-1.5,1.5] due to display bounds) #display_displacement_scaling_effect(0.5, XX, YY, displacement, originalDisplacements) ``` Displacement field transform created from an image. Remember that SimpleITK will clear the image you provide, as shown in the cell below. ``` displacement_image <- Image(c(64,64), "sitkVectorFloat64") # The only point that has any displacement is at physical SimpleITK index (0,0), R index (1,1) displacement <- c(0.5,0.5) # Note that SimpleITK indexing starts at zero. 
displacement_image$SetPixel(c(0,0), displacement) cat('Original displacement image size: ',point2str(displacement_image$GetSize()),"\n") displacement_field_transform <- DisplacementFieldTransform(displacement_image) cat("After using the image to create a transform, displacement image size: ", point2str(displacement_image$GetSize()), "\n") # Check that the displacement field transform does what we expect. cat("Expected result: ",point2str(displacement), "\nActual result: ", displacement_field_transform$TransformPoint(c(0,0)),"\n") ``` ## CompositeTransform This class represents a composition of transformations, multiple transformations applied one after the other. The choice of whether to use a composite transformation or compose transformations on your own has subtle differences in the registration framework. Below we represent the composite transformation $T_{affine}(T_{rigid}(x))$ in two ways: (1) use a composite transformation to contain the two; (2) combine the two into a single affine transformation. We can use both as initial transforms (SetInitialTransform) for the registration framework (ImageRegistrationMethod). The difference is that in the former case the optimized parameters belong to the rigid transformation and in the latter they belong to the combined-affine transformation. ``` # Create a composite transformation: T_affine(T_rigid(x)). rigid_center <- c(100,100,100) theta_x <- 0.0 theta_y <- 0.0 theta_z <- pi/2.0 rigid_translation <- c(1,2,3) rigid_euler <- Euler3DTransform(rigid_center, theta_x, theta_y, theta_z, rigid_translation) affine_center <- c(20, 20, 20) affine_translation <- c(5,6,7) # Matrix is represented as vector-like data in row major order. affine_matrix <- runif(9) affine <- AffineTransform(affine_matrix, affine_translation, affine_center) # Using the composite transformation we just add them in (stack based, first in - last applied).
composite_transform <- CompositeTransform(affine) composite_transform$AddTransform(rigid_euler) # Create a single transform manually. This is a recipe for composing any two global transformations # into an affine transformation, T_0(T_1(x)): # A = A0*A1 # c = c1 # t = A0*[t1+c1-c0] + t0+c0-c1 A0 <- t(matrix(affine$GetMatrix(), 3, 3)) c0 <- affine$GetCenter() t0 <- affine$GetTranslation() A1 <- t(matrix(rigid_euler$GetMatrix(), 3, 3)) c1 <- rigid_euler$GetCenter() t1 <- rigid_euler$GetTranslation() combined_mat <- A0%*%A1 combined_center <- c1 combined_translation <- A0 %*% (t1+c1-c0) + t0+c0-c1 combined_affine <- AffineTransform(c(t(combined_mat)), combined_translation, combined_center) # Check if the two transformations are "equivalent". cat("Apply the two transformations to the same point cloud:\n") print_transformation_differences(composite_transform, combined_affine) cat("\nTransform parameters:\n") cat(paste("\tComposite transform: ", point2str(composite_transform$GetParameters(),2),"\n")) cat(paste("\tCombined affine: ", point2str(combined_affine$GetParameters(),2),"\n")) cat("Fixed parameters:\n") cat(paste("\tComposite transform: ", point2str(composite_transform$GetFixedParameters(),2),"\n")) cat(paste("\tCombined affine: ", point2str(combined_affine$GetFixedParameters(),2),"\n")) ``` Composite transforms enable a combination of a global transformation with multiple local/bounded transformations. This is useful if we want to apply deformations only in regions that deform while other regions are only affected by the global transformation. The following code illustrates this, where the whole region is translated and subregions have different deformations. ``` # Global transformation. translation <- TranslationTransform(2, c(1.0,0.0)) # Displacement in region 1.
displacement1 <- DisplacementFieldTransform(2) field_size <- c(10,20) field_origin <- c(-1.0,-1.0) field_spacing <- c(2.0/9.0,2.0/19.0) field_direction <- c(1,0,0,1) # direction cosine matrix (row major order) # Concatenate all the information into a single list. displacement1$SetFixedParameters(c(field_size, field_origin, field_spacing, field_direction)) displacement1$SetParameters(rep(1.0, length(displacement1$GetParameters()))) # Displacement in region 2. displacement2 <- DisplacementFieldTransform(2) field_size <- c(10,20) field_origin <- c(1.0,-3) field_spacing <- c(2.0/9.0,2.0/19.0) field_direction <- c(1,0,0,1) #direction cosine matrix (row major order) # Concatenate all the information into a single list. displacement2$SetFixedParameters(c(field_size, field_origin, field_spacing, field_direction)) displacement2$SetParameters(rep(-1.0, length(displacement2$GetParameters()))) # Composite transform which applies the global and local transformations. composite <- CompositeTransform(translation) composite$AddTransform(displacement1) composite$AddTransform(displacement2) # Apply the composite transformation to points in ([-1,-3],[3,1]) and # display the deformation using a quiver plot. # Generate points. numSamplesX <- 10 numSamplesY <- 10 coordsX <- seq(-1.0, 3.0, 4.0/(numSamplesX-1)) coordsY <- seq(-3.0, 1.0, 4.0/(numSamplesY-1)) # next two lines equivalent to Python's/MATLAB's meshgrid original_x_mat <- outer(coordsY*0, coordsX, "+") original_y_mat <- outer(coordsY, coordsX*0, "+") # Transform points and plot. 
original_points <- mapply(function(x,y) c(x,y), original_x_mat, original_y_mat) transformed_points <- mapply(function(x,y) composite$TransformPoint(c(x,y)), original_x_mat, original_y_mat) plot(0,0,xlim=c(-1.0,3.0), ylim=c(-3.0,1.0), las=1) arrows(original_points[1,], original_points[2,], transformed_points[1,], transformed_points[2,]) ``` ## Transform This class represents a generic transform and is the return type from the registration framework (if not done in place). Underneath the generic facade is one of the actual classes. To find out who is hiding under the hood we can query the transform to obtain the [TransformEnum](https://simpleitk.org/doxygen/latest/html/namespaceitk_1_1simple.html#a527cb966ed81d0bdc65999f4d2d4d852). We can then downcast the generic transform to its actual type and obtain access to the relevant methods. Note that attempting to access such a method on the generic transform fails without raising an exception, so we cannot use `try` or `tryCatch` to detect the underlying type. ``` tx <- Transform(TranslationTransform(2,c(1.0,0.0))) if(tx$GetTransformEnum() == 'sitkTranslation') { translation <- TranslationTransform(tx) cat(paste(c('Translation is:', translation$GetOffset()), collapse=' ')) } ``` ## Writing and Reading The `ReadTransform()` function returns a SimpleITK `Transform`. The content of the file can be any of the SimpleITK transformations or a composite (set of transformations). **Details of note**: 1. When read from file, the type of the returned transform is the generic `Transform`. We can then obtain the "true" transform type via the `Downcast` method. 2. Writing of nested composite transforms is not supported; you will need to "flatten" the transform before writing it to file. ``` # Create a 2D rigid transformation, write it to disk and read it back.
basic_transform <- Euler2DTransform() basic_transform$SetTranslation(c(1,2)) basic_transform$SetAngle(pi/2.0) full_file_name <- file.path(OUTPUT_DIR, "euler2D.tfm") WriteTransform(basic_transform, full_file_name) # The ReadTransform function returns a SimpleITK Transform no matter the type of the transform # found in the file (global, bounded, composite). read_result <- ReadTransform(full_file_name) cat(paste("Original type: ",basic_transform$GetName(),"\nType after reading: ", read_result$GetName(),"\n")) print_transformation_differences(basic_transform, read_result) # Create a composite transform then write and read. displacement <- DisplacementFieldTransform(2) field_size <- c(10,20) field_origin <- c(-10.0,-100.0) field_spacing <- c(20.0/(field_size[1]-1),200.0/(field_size[2]-1)) field_direction <- c(1,0,0,1) #direction cosine matrix (row major order) # Concatenate all the information into a single list. displacement$SetFixedParameters(c(field_size, field_origin, field_spacing, field_direction)) displacement$SetParameters(runif(length(displacement$GetParameters()))) composite_transform <- Transform(basic_transform) composite_transform$AddTransform(displacement) full_file_name <- file.path(OUTPUT_DIR, "composite.tfm") WriteTransform(composite_transform, full_file_name) read_result <- ReadTransform(full_file_name) cat("\n") print_transformation_differences(composite_transform, read_result) x_translation <- TranslationTransform(2,c(1,0)) y_translation <- TranslationTransform(2,c(0,1)) # Create composite transform with the x_translation repeated 3 times composite_transform1 <- CompositeTransform(x_translation) composite_transform1$AddTransform(x_translation) composite_transform1$AddTransform(x_translation) # Create a nested composite transform composite_transform <- CompositeTransform(y_translation) composite_transform$AddTransform(composite_transform1) cat(paste0('Nested composite transform contains ',composite_transform$GetNumberOfTransforms(), ' transforms.\n')) 
# We cannot write nested composite transformations, so we # flatten it (unravel the nested part) composite_transform$FlattenTransform() cat(paste0('Nested composite transform after flattening contains ',composite_transform$GetNumberOfTransforms(), ' transforms.\n')) full_file_name <- file.path(OUTPUT_DIR, "composite.tfm") WriteTransform(composite_transform, full_file_name) ```
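As a closing sanity check, the manual composition recipe from the CompositeTransform section above ($A = A_0 A_1$, $c = c_1$, $t = A_0(t_1+c_1-c_0) + t_0+c_0-c_1$) can be verified with plain base-R matrices, independent of SimpleITK (the names below are ours):

```r
# Verify point-wise that T0(T1(x)) equals the single combined affine
# produced by the recipe, using two random centered affine maps.
set.seed(42)
A0 <- matrix(runif(9), 3, 3); c0 <- c(1, 2, 3);  t0 <- c(4, 5, 6)
A1 <- matrix(runif(9), 3, 3); c1 <- c(-1, 0, 1); t1 <- c(2, 2, 2)

T0 <- function(x) as.vector(A0 %*% (x - c0)) + t0 + c0
T1 <- function(x) as.vector(A1 %*% (x - c1)) + t1 + c1

A  <- A0 %*% A1                                     # A = A0*A1
cc <- c1                                            # c = c1
tt <- as.vector(A0 %*% (t1 + c1 - c0)) + t0 + c0 - c1  # t per the recipe
Tc <- function(x) as.vector(A %*% (x - cc)) + tt + cc

x <- c(7, 8, 9)
stopifnot(all.equal(T0(T1(x)), Tc(x)))
```

Any center would do for the combined transform; choosing $c = c_1$ is simply what makes the translation formula above come out this compactly.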