---
jupyter:
  jupytext:
    text_representation:
      extension: .q
      format_name: light
      format_version: '1.5'
      jupytext_version: 1.14.4
  kernelspec:
    display_name: SQL
    language: sql
    name: SQL
---

# Intelligent Query Processing in SQL Server 2019 - Table Variable Deferred Compilation

## Step 1 - Create the stored procedure

This procedure populates a table variable from a user table and then joins it with another user table to produce its output. T-SQL functions like COUNT and SUM are often seen in analytic queries that benefit from Intelligent Query Processing. Note: in this example the TOP 1 T-SQL syntax is used so that the procedure produces only one row. This is done only to keep the output easy to read in this workshop and demo, since the procedure will be executed multiple times; normal use of this procedure might not include TOP. Examine the statements in the stored procedure.

```sql
USE WideWorldImporters
GO
CREATE OR ALTER PROCEDURE [Sales].[CustomerProfits]
AS
BEGIN
-- Declare the table variable
DECLARE @ilines TABLE
(
    [InvoiceLineID] [int] NOT NULL PRIMARY KEY,
    [InvoiceID] [int] NOT NULL,
    [StockItemID] [int] NOT NULL,
    [Description] [nvarchar](100) NOT NULL,
    [PackageTypeID] [int] NOT NULL,
    [Quantity] [int] NOT NULL,
    [UnitPrice] [decimal](18, 2) NULL,
    [TaxRate] [decimal](18, 3) NOT NULL,
    [TaxAmount] [decimal](18, 2) NOT NULL,
    [LineProfit] [decimal](18, 2) NOT NULL,
    [ExtendedPrice] [decimal](18, 2) NOT NULL,
    [LastEditedBy] [int] NOT NULL,
    [LastEditedWhen] [datetime2](7) NOT NULL
)

-- Insert all the rows from InvoiceLines into the table variable
INSERT INTO @ilines
SELECT * FROM Sales.InvoiceLines

-- Find the total profit by customer
SELECT TOP 1 COUNT(i.CustomerID) AS customer_count, SUM(il.LineProfit) AS total_profit
FROM Sales.Invoices i
INNER JOIN @ilines il
ON i.InvoiceID = il.InvoiceID
GROUP BY i.CustomerID
END
GO
```

## Step 2 - Run the stored procedure with database compatibility of 130

You have been told this procedure executes fairly quickly for a single execution (a few seconds), but over several iterations the total duration, over 20 seconds, is not acceptable to the application.

The script ensures the database is in a compatibility level lower than 150, so Intelligent Query Processing will NOT be enabled. The script also turns off rowcount messages returned to the client, to reduce network traffic for this test. Then the script executes the stored procedure. Notice the **GO 25** syntax: this client-tool directive runs the batch 25 times (and avoids having to construct a loop).

When you click Play to run the script, look for these messages on the total elapsed time (your time may vary):

<pre>Beginning execution loop
Batch execution completed 25 times...
Total execution time: 00:00:40.3520665</pre>

```sql
USE master
GO
ALTER DATABASE WideWorldImporters SET compatibility_level = 130
GO
USE WideWorldImporters
GO
SET NOCOUNT ON
GO
EXEC [Sales].[CustomerProfits]
GO 25
SET NOCOUNT OFF
GO
```

## Step 3 - Run the stored procedure with database compatibility of 150

Now let's run the same exact test but with database compatibility of 150. You will not make any changes to the stored procedure.

Notice this is the same script except that database compatibility level 150 is used. This time, the query processor in SQL Server enables table variable deferred compilation, so a better query plan can be chosen.

The script should execute far faster than before. Your speeds may vary but should be 15 seconds or less.

When you click Play to run the script, look for these messages on the total elapsed time (your time may vary):

<pre>Beginning execution loop
Batch execution completed 25 times...
Total execution time: 00:00:10.9975239</pre>

```sql
USE master
GO
ALTER DATABASE WideWorldImporters SET compatibility_level = 150
GO
USE WideWorldImporters
GO
SET NOCOUNT ON
GO
EXEC [Sales].[CustomerProfits]
GO 25
SET NOCOUNT OFF
GO
```

## Step 4 - Restore the database compatibility mode

Restore the original database compatibility mode for WideWorldImporters.

```sql
USE master
GO
ALTER DATABASE WideWorldImporters SET compatibility_level = 130
GO
```
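The effect of **GO 25** (run the batch repeatedly and time the whole loop) can also be reproduced outside the client tools. This is a minimal Python sketch of such a timing harness; the workload here is a stand-in callable, since an actual run would invoke the stored procedure through a SQL client library.

```python
import time

def run_batch(batch, iterations=25):
    """Run `batch` (a callable) `iterations` times and return total elapsed
    seconds, mimicking the GO 25 client directive used in the workshop."""
    start = time.perf_counter()
    for _ in range(iterations):
        batch()
    return time.perf_counter() - start

# Stand-in workload: a real test would execute EXEC [Sales].[CustomerProfits]
# through a client library instead of appending to a list.
calls = []
elapsed = run_batch(lambda: calls.append(1), iterations=25)
print(f"Batch execution completed 25 times, total: {elapsed:.7f}s")
```

Comparing the totals from two such loops (one per compatibility level) is exactly the comparison Steps 2 and 3 perform.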
SQLonOpenShift/sqlonopenshift/03_performance/iqp/03_IQP_Table_Variable.ipynb
---
jupyter:
  jupytext:
    text_representation:
      extension: .py
      format_name: light
      format_version: '1.5'
      jupytext_version: 1.14.4
  kernelspec:
    display_name: Python 3
    language: python
    name: python3
---

From the 3-RAV4 drive on June 16, 2021 (VU2, VU1, Jonathan): CAN bus data & high-end GPS data instrumented on VU1 and VU2.

- get the time range in which all three cars are on the testbed
- transform from GPS to relative coordinates:

```
0,0     -> 36.005437, -86.610796
0,48    -> 36.005532, -86.610684
1000,0  -> 36.003404, -86.608529
1000,48 -> 36.003496, -86.608414
320,0   -> 36.004790, -86.610073
```

```python
import pandas as pd
import importlib
import utils
importlib.reload(utils)
from os import path
import time
import pathlib
import matplotlib.pyplot as plt
import numpy as np
from mpl_toolkits import mplot3d
from datetime import datetime
import cv2

data_path = pathlib.Path().absolute().joinpath('../I24_motion_GPS/GPS')
file_name = data_path.joinpath('tableview_210617_192904_VU1.csv')
df1 = utils.read_data(file_name, 0, False)
file_name = data_path.joinpath('tableview_210617_192126_VU2.csv')
df2 = utils.read_data(file_name, 0, False)
file_name = data_path.joinpath('low_end_GPS/Jonathan_2021-06-16-20-00-18_2T3Y1RFV8KC014025_GPS_Messages.csv')
df3 = utils.read_data(file_name, 0, False)

# convert string to datetime and to timestamps
df1 = df1.assign(datetime=[datetime.strptime(df1.UTC[i], '%H:%M:%S.%f %m/%d/%Y') for i in range(len(df1))])
df2 = df2.assign(UTC=[df2.UTC[i].replace('??/??/????', '06/16/2021') for i in range(len(df2))])
df2 = df2.assign(datetime=[datetime.strptime(df2.UTC[i], '%H:%M:%S.%f %m/%d/%Y') for i in range(len(df2))])
df1['timestamp'] = df1.datetime.apply(lambda x: x.timestamp())
df2['timestamp'] = df2.datetime.apply(lambda x: x.timestamp())
# GPS time is more reliable, but system time can be synchronized successfully
df3['timestamp'] = df3.Systime

# select the testbed region based on high-end GPS only
df1s = df1[df1['Lat'] < 36.014]
df2s = df2[df2['Lat'] < 36.014]
```
```python
# select the time range of df3
# find the overlapping time range
laterStart = max(df1s.timestamp.iloc[0], df2s.timestamp.iloc[0], df3.timestamp.iloc[0])
earlyFinish = min(df1s.timestamp.iloc[-1], df2s.timestamp.iloc[-1], df3.timestamp.iloc[-1])
print(earlyFinish - laterStart)
df1s = df1[(df1['timestamp'] >= laterStart) & (df1['timestamp'] <= earlyFinish)]
df2s = df2[(df2['timestamp'] >= laterStart) & (df2['timestamp'] <= earlyFinish)]
df3s = df3[(df3['timestamp'] >= laterStart) & (df3['timestamp'] <= earlyFinish)]
df1s = df1s.reset_index()
df2s = df2s.reset_index()
df3s = df3s.reset_index()

# drop irrelevant columns
df1s = df1s.loc[:, df1s.columns.intersection(['datetime', 'timestamp', 'Lat', 'Lon'])]
df2s = df2s.loc[:, df2s.columns.intersection(['datetime', 'timestamp', 'Lat', 'Lon'])]
df3s = df3s.loc[:, df3s.columns.intersection(['datetime', 'timestamp', 'Lat', 'Long'])]

# resample to a 10 Hz frequency
df1re = df1s.resample('0.1S', on='datetime').mean().interpolate('linear').reset_index()
df2re = df2s.resample('0.1S', on='datetime').mean().interpolate('linear').reset_index()
# resample df3s according to the resampled timestamps
df3s['datetime'] = pd.to_datetime(df3s['timestamp'], unit='s')
df3re = df3s.resample('0.1S', on='datetime').mean().interpolate('linear').reset_index()
nlen = min(len(df1re), len(df2re), len(df3re))
df1re = df1re.iloc[:nlen, :]
df2re = df2re.iloc[:nlen, :]
df3re = df3re.iloc[:nlen, :]
# df2re.drop(df2re.tail(1).index, inplace=True)  # drop last n rows

# modify timestamp
df1re['timestamp'] = df1re.datetime.apply(lambda x: x.timestamp())
df2re['timestamp'] = df2re.datetime.apply(lambda x: x.timestamp())
df3re['timestamp'] = df3re.datetime.apply(lambda x: x.timestamp())

# rename columns and concatenate into one DataFrame
df1re = df1re.rename(columns={'Lat': 'lat1', 'Lon': 'lon1'})
df2re = df2re.rename(columns={'Lat': 'lat2', 'Lon': 'lon2'})
df3re = df3re.rename(columns={'Lat': 'lat3', 'Long': 'lon3'})
df2re = df2re.drop(columns=['datetime'])
df3re = df3re.drop(columns=['datetime'])
# dfre = pd.merge(df1re, df2re, df3re, on='timestamp')  # invalid: pd.merge joins only two frames
dfre = df1re.merge(df2re, on='timestamp').merge(df3re, on='timestamp')

# plt.scatter(df1re.Lon, df1re.Lat, s=0.2, c=df1re.timestamp)
# plt.colorbar()
fig, (ax1, ax2, ax3) = plt.subplots(1, 3, figsize=(14, 5))
ax1.scatter(df1re.lon1, df1re.lat1, s=0.2, c=df1re.timestamp)
ax1.set_title('car 1')
ax2.scatter(df2re.lon2, df2re.lat2, s=0.2, c=df2re.timestamp)
ax2.set_title('car 2')
ax3.scatter(df3re.lon3, df3re.lat3, s=0.2, c=df3re.timestamp)  # color by df3re, not df2re
ax3.set_title('car 3')

dfre.to_csv('GPS_vu123_10hz.csv')

# start to add CAN bus information, such as speed and acceleration
# TODO: ask Matt to decode Jonathan's data
data_path = pathlib.Path().absolute().joinpath('../I24_motion_GPS/CAN')
file_name = data_path.joinpath('vu1_2021-06-16-20-15-47_2T3MWRFVXLW056972_masterArray_withRPM.csv')
can1 = utils.read_data(file_name, 0, False)
file_name = data_path.joinpath('vu2_2021-06-16-21-07-19_JTMB6RFV5MD010181_masterArray_withRPM.csv')
can2 = utils.read_data(file_name, 0, False)

can1['timestamp'] = can1.Time
can2['timestamp'] = can2.Time
can1 = can1[(can1['timestamp'] >= laterStart) & (can1['timestamp'] <= earlyFinish)]
can2 = can2[(can2['timestamp'] >= laterStart) & (can2['timestamp'] <= earlyFinish)]
can1 = can1.reset_index()
can2 = can2.reset_index()
# drop irrelevant columns
can1 = can1.loc[:, can1.columns.intersection(['timestamp', 'Velocity', 'Acceleration', 'SpaceGap', 'RelativeVelocity'])]
can2 = can2.loc[:, can2.columns.intersection(['timestamp', 'Velocity', 'Acceleration', 'SpaceGap', 'RelativeVelocity'])]

# resample to a 10 Hz frequency
can1['datetime'] = pd.to_datetime(can1['timestamp'], unit='s')
can2['datetime'] = pd.to_datetime(can2['timestamp'], unit='s')
can1re = can1.resample('0.1S', on='datetime').mean().interpolate('linear').reset_index()
can2re = can2.resample('0.1S', on='datetime').mean().interpolate('linear').reset_index()
nlen = min(len(can1re), len(can2re))
can1re = can1re.iloc[:nlen, :]
can2re = can2re.iloc[:nlen, :]
# modify timestamp
can1re['timestamp'] = can1re.datetime.apply(lambda x: x.timestamp())
can2re['timestamp'] = can2re.datetime.apply(lambda x: x.timestamp())

can1re = can1re.drop(columns=['datetime'])
can2re = can2re.drop(columns=['datetime'])
canre = pd.merge(can1re, can2re, on='timestamp', suffixes=('_1', '_2'))
# canre = df1re.merge(df2re, on='timestamp').merge(df3re, on='timestamp')
canre
canre = canre[1:].reset_index(drop=True)
canre.to_csv('CAN_vu12_10hz.csv')

import utils_optimization as opt
importlib.reload(opt)
import utils
importlib.reload(utils)
import os.path
from os import path
import pathlib

data_path = pathlib.Path().absolute().joinpath('../June 2021 Data - 1 minute 5 cameras w RAV 4/rectified')
df = pd.DataFrame()
for root, dirs, files in os.walk(str(data_path)):
    for file in files:
        if file.endswith(".csv"):
            file_name = data_path.joinpath(file)
            camera_name = utils.find_camera_name(file)
            print(file)
            df1 = utils.read_data(file_name)
            df = pd.concat([df, df1])

# df = df[(df['ID']==316120)|(df['ID']==344120)]
df

df1 = df[df['ID'] == 399120]
df2 = df[df['ID'] == 316120]
df3 = df[df['ID'] == 344120]
can = canre[(canre.timestamp >= df.Timestamp.iloc[0]) & (canre.timestamp <= df.Timestamp.iloc[-1])]
gps = dfre[(dfre.timestamp >= df.Timestamp.iloc[0]) & (dfre.timestamp <= df.Timestamp.iloc[-1])]
can1 = canre[(canre.timestamp >= df1.Timestamp.iloc[0]) & (canre.timestamp <= df1.Timestamp.iloc[-1])]
can2 = canre[(canre.timestamp >= df2.Timestamp.iloc[0]) & (canre.timestamp <= df2.Timestamp.iloc[-1])]
gps1 = dfre[(dfre.timestamp >= df1.Timestamp.iloc[0]) & (dfre.timestamp <= df1.Timestamp.iloc[-1])]
gps2 = dfre[(dfre.timestamp >= df2.Timestamp.iloc[0]) & (dfre.timestamp <= df2.Timestamp.iloc[-1])]

v1_can = can1['Velocity_1'] / 1.609  # km/h -> mph
v2_can = can1['Velocity_2'] / 1.609
len(df1.speed.values[::3])
len(v1_can)
error1 = df1.speed.values[::3] * 2.237 - v1_can[:97]  # m/s -> mph
plt.plot(can1.timestamp, v1_can, label='can')
plt.plot(df1.Timestamp, df1.speed * 2.237, label='camera')
plt.legend()
```
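The `canre` frame above is built with `pd.merge(..., suffixes=('_1', '_2'))`, which disambiguates column names that appear in both inputs. A minimal illustration with hypothetical two-vehicle frames sharing a `timestamp` key:

```python
import pandas as pd

# Hypothetical per-vehicle frames; both carry a 'Velocity' column, so the
# suffixes argument renames the overlapping columns in the merged result.
a = pd.DataFrame({'timestamp': [0.0, 0.1, 0.2], 'Velocity': [30.0, 31.0, 32.0]})
b = pd.DataFrame({'timestamp': [0.0, 0.1, 0.2], 'Velocity': [28.0, 29.0, 30.0]})
merged = pd.merge(a, b, on='timestamp', suffixes=('_1', '_2'))
print(list(merged.columns))  # ['timestamp', 'Velocity_1', 'Velocity_2']
```

Without explicit suffixes, pandas would default to `_x`/`_y`, which is harder to read back as "vehicle 1" and "vehicle 2".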
```python
n, bins, patches = plt.hist(error1, density=True, alpha=0.75)
plt.xlabel('speed error (mph)')
plt.ylabel('probability')
plt.title('CAN bus vs. Camera')
# plt.xlim([55, 75])
plt.legend()
plt.grid(True)
plt.show()

from datetime import datetime
print('df camera: ', datetime.fromtimestamp(min(df.Timestamp)), datetime.fromtimestamp(max(df.Timestamp)))
print('can: ', datetime.fromtimestamp(min(can.timestamp)), datetime.fromtimestamp(max(can.timestamp)))
print('GPS dfre: ', datetime.fromtimestamp(min(gps.timestamp)), datetime.fromtimestamp(max(gps.timestamp)))

v1_camera = df1.speed.values * 2.237  # m/s -> mph
v2_camera = df2.speed.values * 2.237
v3_camera = df3.speed.values * 2.237
v1_gps = np.zeros(len(gps1))
v2_gps = np.zeros(len(gps2))
# v3_gps = np.zeros(len(gps3))
for i in range(1, len(gps)):
    dt = gps.timestamp.iloc[i] - gps.timestamp.iloc[i-1]
    d1, _, _ = utils.euclidean_distance(gps['lat1'].iloc[i], gps['lon1'].iloc[i], gps['lat1'].iloc[i-1], gps['lon1'].iloc[i-1])
    v1_gps[i] = d1 / dt
    d2, _, _ = utils.euclidean_distance(gps['lat2'].iloc[i], gps['lon2'].iloc[i], gps['lat2'].iloc[i-1], gps['lon2'].iloc[i-1])
    v2_gps[i] = d2 / dt
    # d3, _, _ = utils.euclidean_distance(gps['lat3'].iloc[i], gps['lon3'].iloc[i], gps['lat3'].iloc[i-1], gps['lon3'].iloc[i-1])
    # v3_gps[i] = d3 / dt
v1_gps[0] = v1_gps[1]
v2_gps[0] = v2_gps[1]
# v3_gps[0] = v3_gps[1]
v1_gps = v1_gps * 2.237  # m/s -> mph
v2_gps = v2_gps * 2.237
# v3_gps = v3_gps * 2.237

# the histogram of the data
n, bins, patches = plt.hist(v1_camera, density=True, alpha=0.75,
                            label='camera, $\\mu$={:.2f}, $\\sigma$={:.2f}'.format(np.mean(v1_camera), np.std(v1_camera)))
n, bins, patches = plt.hist(v1_can[(v1_can < 80) & (v1_can > 50)], density=True, alpha=0.75, label='CAN bus')
n, bins, patches = plt.hist(v1_gps[(v1_gps < 80) & (v1_gps > 50)], density=True, alpha=0.75, label='GPS')
plt.xlabel('speed (mph)')
plt.ylabel('probability')
plt.title('vu1 speed distribution')
plt.xlim([55, 75])
plt.legend()
plt.grid(True)
plt.show()

len(df3)
np.std(v1_camera)

# v1_camera, v2_camera and v3_camera are already in mph; do not multiply by 2.237 again
plt.plot(v1_camera)
plt.plot(v2_camera)
plt.plot(v3_camera)
```
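The comparisons above repeatedly convert units: CAN bus velocity in km/h is divided by 1.609 to get mph, and camera/GPS speeds in m/s are multiplied by 2.237. Keeping those factors behind named helpers makes double-conversion mistakes easier to spot; a small sketch (helper names are mine, not from the notebook):

```python
# Conversion factors as used in the plots above (rounded engineering values)
MPS_TO_MPH = 2.237    # metres/second -> miles/hour
KMH_PER_MPH = 1.609   # kilometres/hour per mile/hour

def mps_to_mph(v_mps):
    """Camera and GPS speeds are in m/s; convert to mph."""
    return v_mps * MPS_TO_MPH

def kmh_to_mph(v_kmh):
    """CAN bus Velocity is in km/h; convert to mph."""
    return v_kmh / KMH_PER_MPH

print(mps_to_mph(10.0))   # roughly 22.4 mph
print(kmh_to_mph(160.9))  # roughly 100 mph
```

Converting once, at load time, and tagging the array name (e.g. `v1_camera_mph`) avoids the double `* 2.237` that crept into the original plotting cells.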
```python
# v1_gps and v2_gps were already converted to mph above; plot as-is
plt.plot(v1_gps)
plt.plot(v2_gps)
# plt.plot(v3_gps)
plt.plot(v1_can)
plt.plot(v2_can)

# spacing between vehicles from GPS
s_gps, _, _ = utils.euclidean_distance(gps.lat2, gps.lon2, gps.lat3, gps.lon3)
s_gps

plt.scatter(canre.timestamp, canre.Velocity_1, s=0.1)
plt.scatter(canre.timestamp, canre.Velocity_2, s=0.1)

utils.draw_map_scatter(df2s.Lat.values[::50], df2s.Lon.values[::50])

plt.scatter(dfre.timestamp, dfre.lon1, s=0.1)
plt.scatter(dfre.timestamp, dfre.lon2, s=0.1)
plt.scatter(dfre.timestamp, dfre.lon3, s=0.1)

points = np.array(dfre[['lat1', 'lon1', 'lat2', 'lon2']])
d, _, _ = utils.euclidean_distance(points[:, 0], points[:, 1], points[:, 2], points[:, 3])

# now look at CAN bus data
data_path = pathlib.Path().absolute().joinpath('../I24_motion_GPS')
file_name = data_path.joinpath('CAN/vu1_2021-06-16-20-15-47_2T3MWRFVXLW056972_masterArray_withRPM.csv')
df1 = utils.read_data(file_name, 0, False)
file_name = data_path.joinpath('CAN/vu2_2021-06-16-21-07-19_JTMB6RFV5MD010181_masterArray_withRPM.csv')
df2 = utils.read_data(file_name, 0, False)
# select the testbed region
df1s = df1[df1['LatitudeGPS'] < 36.04]
df2s = df2[df2['LatitudeGPS'] < 36.04]
gps_pts = np.array(df1s[['LatitudeGPS', 'LongitudeGPS']])
road_pts = utils.gps_to_road(gps_pts)
road_pts
plt.scatter(road_pts[:, 0], road_pts[:, 1])

# visualize GPS data from 08/06 vandertest
# LCS to GPS transformation points:
# 0,0     -> 36.005437, -86.610796
# 0,48    -> 36.005532, -86.610684
# 1000,0  -> 36.003404, -86.608529
# 1000,48 -> 36.003496, -86.608414

# 320, 0      -> 36.004790, -86.610073
# 320, 48     -> 36.004878, -86.609956
# 320+1691, 0 -> 36.001382, -86.606184
# 320+1691,48 -> 36.001482, -86.606080
import utils
importlib.reload(utils)
d, dx, dy = utils.euclidean_distance(36.004790, -86.610073, 36.001382, -86.606184)
d * 3.28  # metres -> feet

gps_ref = np.array([[36.004790, -86.610073], [36.004878, -86.609956], [36.001382, -86.606184], [36.001482, -86.606080]])
lmcs_ref = np.array([[320, 0], [320, 48], [320+1691, 0], [320+1691, 48]])
H, mask = cv2.findHomography(gps_ref, lmcs_ref)  # gps to feet

# read data and convert to local coordinates
data_path = pathlib.Path().absolute().joinpath('../gps_0806')
file_name = data_path.joinpath('2021-08-06-11-40-19_2T3MWRFVXLW056972_GPS_Messages.csv')
df = utils.read_data(file_name)
df = df[df.Gpstime > 0]

import utils
importlib.reload(utils)
gps_pts = np.array(df[['Lat', 'Long']])
lmcs_pts = utils.transform_pt_array(gps_pts, H)
fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 5))
ax1.scatter(lmcs_pts[:, 0], lmcs_pts[:, 1], s=0.1)
ax1.scatter(lmcs_ref[:, 0], lmcs_ref[:, 1], c='red')
ax1.set_xlabel('local x (feet)')
ax1.set_ylabel('local y (feet)')
# ax1.set_xlim([-100, 2000])
# ax1.set_ylim([-10, 150])
ax2.scatter(df.Long, df.Lat, s=0.1)
ax2.scatter(gps_ref[:, 1], gps_ref[:, 0], c='red')
ax2.set_xlabel('Longitude')
ax2.set_ylabel('Latitude')

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 5))
ax1.scatter(df.Gpstime, lmcs_pts[:, 1], s=0.1)
ax2.scatter(df.Gpstime, df.Long, s=0.1)

# get another set of reference points
```
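`cv2.findHomography` above solves for the 3x3 projective mapping from the four GPS/local reference pairs. For intuition, here is a minimal NumPy stand-in using the direct linear transform (DLT); the helper names are mine, and the check uses a well-conditioned toy square, since raw lat/lon coordinates should be normalized (as OpenCV does internally) before solving.

```python
import numpy as np

def find_homography(src, dst):
    """Planar homography from >= 4 point pairs via the direct linear transform.
    Builds the 2n x 9 DLT system and takes its null-space vector from the SVD."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    return Vt[-1].reshape(3, 3)  # smallest singular vector, reshaped to H

def transform_pts(pts, H):
    """Apply H to an (n, 2) array of points, including the homogeneous divide."""
    ph = np.hstack([pts, np.ones((len(pts), 1))]) @ H.T
    return ph[:, :2] / ph[:, 2:3]

# Toy check: a unit square mapped to a 2x-scaled square
src = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
dst = 2.0 * src
H = find_homography(src, dst)
center = transform_pts(np.array([[0.5, 0.5]]), H)
print(center)  # approximately [[1.0, 1.0]]
```

With exactly four non-degenerate correspondences the DLT system has a one-dimensional null space, so the homography is exact up to numerical error, which is why four reference points suffice for the GPS-to-feet mapping above.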
.ipynb_checkpoints/I24_CAN_bus-checkpoint.ipynb
---
jupyter:
  jupytext:
    text_representation:
      extension: .py
      format_name: light
      format_version: '1.5'
      jupytext_version: 1.14.4
  kernelspec:
    display_name: Python 3 (ipykernel)
    language: python
    name: python3
---

# Normal (Gaussian) Distribution

The Normal Distribution is one of the most important distributions.

It is also called the Gaussian Distribution, after the German mathematician <NAME>.

It fits the probability distribution of many events, e.g. IQ scores, heartbeat, etc.

Use the `random.normal()` method to get a Normal Data Distribution.

It has three parameters:

#### loc - (Mean) where the peak of the bell exists.

#### scale - (Standard Deviation) how flat the graph distribution should be.

#### size - The shape of the returned array.

```python
from numpy import random

x = random.normal(size=(2, 3))
x
```

### Generate a random normal distribution of size 2x3 with mean at 1 and standard deviation of 2:

```python
x = random.normal(loc=1, scale=2, size=(2, 3))
x
```

## Visualization of Normal Distribution

```python
import seaborn as sns
import matplotlib.pyplot as plt

# Note: distplot is deprecated in recent seaborn releases; kdeplot/displot are the replacements
sns.distplot(random.normal(size=(1000), loc=3, scale=2), hist=False)
plt.show()
```

#### Note: The curve of a Normal Distribution is also known as the Bell Curve because of the bell-shaped curve.
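As a quick sanity check of what `loc` and `scale` mean, draw a large sample (here with NumPy's newer `Generator` API rather than the legacy `numpy.random` interface used above) and compare the empirical statistics with the parameters:

```python
import numpy as np

# Same loc/scale as the 2x3 example above, but a large 1-D sample so the
# sample mean and standard deviation land close to loc=1 and scale=2.
rng = np.random.default_rng(0)
x = rng.normal(loc=1, scale=2, size=100_000)
print(x.mean(), x.std())  # close to 1 and 2
```

With n = 100,000 the standard error of the mean is about 2/sqrt(n), roughly 0.006, so the printed values should agree with the parameters to about two decimal places.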
Python Library/NumPy/NumPy_Random/normal_distribution.ipynb
---
jupyter:
  jupytext:
    text_representation:
      extension: .py
      format_name: light
      format_version: '1.5'
      jupytext_version: 1.14.4
  kernel_info:
    name: python38-azureml
  kernelspec:
    display_name: Python 3.8 - AzureML
    language: python
    name: python38-azureml
---

# iJungle Tutorial

*TODO: Summary of the iJungle technique*

**Main parts of the tutorial**

1. Data preparation using kdd cup sample data
2. Sequential execution using train bundle
3. Sequential execution step by step
4. Parallel training
5. Parallel inference

```python
import iJungle
from azureml.core import Workspace, Datastore, Dataset
import pandas as pd
import os
import pickle
import matplotlib.pyplot as plt
import numpy as np
from sklearn.metrics import confusion_matrix, average_precision_score, precision_score
from sklearn.metrics import recall_score, auc, roc_curve, f1_score

print("iJungle version:", iJungle.__version__)
```

# 1. Data preparation

*TODO: description of the data*

1. Use the following data in this repository (*TODO: KDD url to download the files*)
   - kddcup.names
   - kddcup.data
   - corrected

```python
# Move to data directory
os.chdir(os.path.dirname(os.path.abspath('__file__')) + '/../data')

# Generate DataFrame with kdd data (csv format)
names = list(pd.read_csv('kddcup.names', sep=':', header=None)[0])
df = pd.read_csv('kddcup.data.gz', header=None, names=names)
df_test = pd.read_csv('corrected.gz', header=None, names=names)
print("Shape of raw data:", df.shape)
print("Shape of test data:", df_test.shape)

# Remove entries whose protocol is not HTTP
df = df[df.service == 'http']
df_test = df_test[df_test.service == 'http']
print("Shape of train data:", df.shape)
print("Shape of test data:", df_test.shape)

# Preparation of labels
y_train = df.pop('label')
y_test = df_test.pop('label')
y_train = pd.Series([1 if val == 'normal.' else -1 for val in y_train])
y_test = pd.Series([1 if val == 'normal.' else -1 for val in y_test])
print("Shape of train labels:", y_train.shape)
print("Shape of test labels:", y_test.shape)

# Encode categorical columns as integer codes
df.drop(['service'], axis=1, inplace=True)
df_test.drop(['service'], axis=1, inplace=True)
cat_columns = ['protocol_type', 'flag']
for col in cat_columns:
    df_test[col] = df_test[col].astype('category')
    df[col] = df[col].astype('category')
cat_columns = df.select_dtypes(['category']).columns
df[cat_columns] = df[cat_columns].apply(lambda x: x.cat.codes)
cat_columns = df_test.select_dtypes(['category']).columns
df_test[cat_columns] = df_test[cat_columns].apply(lambda x: x.cat.codes)
print("Shape of train data:", df.shape)
print("Shape of test data:", df_test.shape)

df.columns
```

# 2. Sequential execution using train bundle

## Train bundle

The `iJungle.train_bundle` function executes all the steps that will be explained in the next section and returns the best **iForest** model found using the **iJungle** strategy. This model can be used directly to do inference on the test dataset.

```python
# %%time
model = iJungle.train_bundle(df, train_size=0.1, overhead_size=0.1)
```

The following cell simply does the inference on the test dataset using the best **iForest** found.
```python
y_infer = model.predict(df_test)
```

`show_confusion_matrix` is a simple function to display the confusion matrix in a fancy way.

```python
def show_confusion_matrix(C, class_labels=['0', '1']):
    """
    C: ndarray, shape (2,2) as given by the scikit-learn confusion_matrix function
    class_labels: list of strings, default simply labels 0 and 1.

    Draws confusion matrix with associated metrics.
    """
    assert C.shape == (2, 2), "Confusion matrix should be from binary classification only."

    # true negative, false positive, etc...
    tn = C[0, 0]; fp = C[0, 1]; fn = C[1, 0]; tp = C[1, 1]
    NP = fn + tp  # Num positive examples
    NN = tn + fp  # Num negative examples
    N = NP + NN

    fig = plt.figure(figsize=(8, 8))
    ax = fig.add_subplot(111)
    ax.imshow(C, interpolation='nearest', cmap=plt.cm.gray)

    # Draw the grid boxes
    ax.set_xlim(-0.5, 2.5)
    ax.set_ylim(2.5, -0.5)
    ax.plot([-0.5, 2.5], [0.5, 0.5], '-k', lw=2)
    ax.plot([-0.5, 2.5], [1.5, 1.5], '-k', lw=2)
    ax.plot([0.5, 0.5], [-0.5, 2.5], '-k', lw=2)
    ax.plot([1.5, 1.5], [-0.5, 2.5], '-k', lw=2)

    # Set xlabels
    ax.set_xlabel('Predicted Label', fontsize=16)
    ax.set_xticks([0, 1, 2])
    ax.set_xticklabels(class_labels + [''])
    ax.xaxis.set_label_position('top')
    ax.xaxis.tick_top()
    # These coordinates might require some tinkering. Ditto for y, below.
    ax.xaxis.set_label_coords(0.34, 1.06)

    # Set ylabels
    ax.set_ylabel('True Label', fontsize=16, rotation=90)
    ax.set_yticklabels(class_labels + [''], rotation=90)
    ax.set_yticks([0, 1, 2])
    ax.yaxis.set_label_coords(-0.09, 0.65)

    # Fill in initial metrics: tp, tn, etc...
    ax.text(0, 0, 'True Neg: %d\n(Num Neg: %d)' % (tn, NN), va='center', ha='center', bbox=dict(fc='w', boxstyle='round,pad=1'))
    ax.text(0, 1, 'False Neg: %d' % fn, va='center', ha='center', bbox=dict(fc='w', boxstyle='round,pad=1'))
    ax.text(1, 0, 'False Pos: %d' % fp, va='center', ha='center', bbox=dict(fc='w', boxstyle='round,pad=1'))
    ax.text(1, 1, 'True Pos: %d\n(Num Pos: %d)' % (tp, NP), va='center', ha='center', bbox=dict(fc='w', boxstyle='round,pad=1'))
    # Fill in secondary metrics: accuracy, true pos rate, etc...
    ax.text(2, 0, 'False Pos Rate: %.2f' % (fp / (fp + tn + 0.)), va='center', ha='center', bbox=dict(fc='w', boxstyle='round,pad=1'))
    ax.text(2, 1, 'True Pos Rate: %.2f' % (tp / (tp + fn + 0.)), va='center', ha='center', bbox=dict(fc='w', boxstyle='round,pad=1'))
    ax.text(2, 2, 'Accuracy: %.2f' % ((tp + tn + 0.) / N), va='center', ha='center', bbox=dict(fc='w', boxstyle='round,pad=1'))
    ax.text(0, 2, 'Neg Pred Val: %.2f' % (1 - fn / (fn + tn + 0.)), va='center', ha='center', bbox=dict(fc='w', boxstyle='round,pad=1'))
    ax.text(1, 2, 'Pos Pred Val: %.2f' % (tp / (tp + fp + 0.)), va='center', ha='center', bbox=dict(fc='w', boxstyle='round,pad=1'))

    plt.tight_layout()
    plt.show()
```

The following cell shows the main metrics used to evaluate the model, together with the confusion matrix.

```python
y_test = list(y_test)
y_infer = list(y_infer)
C = confusion_matrix(y_test, y_infer)
show_confusion_matrix(C, ['Anomaly', 'Normal'])
print("Average precision (AP): {}".format(average_precision_score(y_test, y_infer)))
print("Precision: {}".format(precision_score(y_test, y_infer)))
print("Recall: {}".format(recall_score(y_test, y_infer)))
fpr, tpr, thresholds = roc_curve(y_test, y_infer)
print("AUC: {}".format(auc(fpr, tpr)))
print("F1: {}".format(f1_score(y_test, y_infer, average='weighted')))
```

# 3. Sequential execution step by step

The following are the main high-level steps of the iJungle strategy:

1. Train a number of iForest models in a grid created by two hyper-parameters: number of trees and sub-sample size. The default grid is created with the options below. The number of iForest models trained per point in the grid is given by $\frac{m}{\max(\text{sub-sample sizes})}$, where $m$ is the number of training samples.
   - Number of trees: 10, 20, 100, 500
   - Sub-sample sizes: 512, 1024, 2048, 4096
2. Using the set of trained iForests, evaluate the training data set on them.
3. Select the most representative iForest: the one that produces results closest to the average results among all iForests.

## 3.1 Step 1. Train iForest using the hyper-parameter grid

```python
train_new_model = True

# %%time
if train_new_model:
    iJungle.grid_train(df, train_size=0.1)
else:
    print('No model trained (using last trained model)')
```

## 3.2 - Evaluate all iForests using a sample of train data (overhead_size)

*TODO: Update this documentation*

If the model is to be evaluated (i.e. `evaluate_model == True`), run the trained model on the test dataset. The results are stored in a dictionary (`results_dic`) for later analysis.

To do this, the different combinations of `trees` and `subsample_size` are traversed. For each such combination, the corresponding pickle file is loaded (this corresponds to the different `iFor_dic` created during training) and `model_eval_fun` is run. Remember that `model_eval_fun` returns a matrix with the results for each instance of the dataset being evaluated and each model in `iFor_dic`. These results are then stored in the dictionary `results_dic`.

Note that `results_dic` has one level for sub-sample size. This in turn has another level for the number of trees. Finally, the last level contains a matrix with the results derived from the different models created with the same hyper-parameters but trained on different samples of the dataset. This matrix therefore holds several results for each instance of the test set.

Finally, the results are stored in a pickle file.

```python
evaluate_model = True

# %%time
if evaluate_model:
    results = iJungle.grid_eval(df, overhead_size=.1)
else:
    results = iJungle.get_grid_eval_results()
```

*TODO: Update this documentation*

As a spoiler of what is to come, the following cell traverses the `results` variable. For each sub-sample size and number-of-trees combination it averages the anomaly score of all the models created (remember that for each such combination several models with the same hyper-parameters were created, and each one was evaluated on each of the test set instances).

This average is then divided by the number of sub-sample size and number-of-trees combinations. Finally, it presents how many of them had a "perfect anomaly score": in this context, a perfect anomaly score means that all the models, for all the different combinations, agreed that the instance was an anomaly.
## 3.3 Select the best (most representative) iForest model

```python
model = iJungle.best_iforest(results)
model

y_infer = list(model.predict(df_test))
C = confusion_matrix(y_test, y_infer)
show_confusion_matrix(C, ['Anomaly', 'Normal'])
print("Average precision (AP): {}".format(average_precision_score(y_test, y_infer)))
print("Precision: {}".format(precision_score(y_test, y_infer)))
print("Recall: {}".format(recall_score(y_test, y_infer)))
fpr, tpr, thresholds = roc_curve(y_test, y_infer)
print("AUC: {}".format(auc(fpr, tpr)))
print("F1: {}".format(f1_score(y_test, y_infer, average='weighted')))
```

# 4 - Parallel training

*TODO: complete the explanation of the use of HyperDrive to achieve parallelism during training*

```python
from azureml.core.compute import ComputeTarget, AmlCompute

ws = Workspace.from_config()

# Choose a name for your CPU cluster
cluster_name = "rcg-clust-pll-41"

# Verify that the cluster does not already exist
try:
    compute_cluster = ComputeTarget(workspace=ws, name=cluster_name)
    print('Found existing cluster, use it.')
except:
    compute_config = AmlCompute.provisioning_configuration(vm_size='STANDARD_D2_V2', max_nodes=4)
    compute_cluster = ComputeTarget.create(ws, cluster_name, compute_config)
```

## 4.1 Parallel training of iForest models

```python
from azureml.core import Dataset

train_dataset = Dataset.Tabular.register_pandas_dataframe(df, ws.get_default_datastore(), 'training_data')
train_dataset

subsample_list = [4096, 2048, 1024, 512]
trees_list = [500, 100, 20, 10]
train_size = 0.1

from azureml.core import Experiment, Environment, ScriptRunConfig
from azureml.core.conda_dependencies import CondaDependencies
from azureml.train.hyperdrive import GridParameterSampling, HyperDriveConfig, PrimaryMetricGoal, choice
from azureml.widgets import RunDetails

sklearn_env = Environment("iJungle-env")
whl_filename = "../dist/iJungle-" + iJungle.__version__ + "-py3-none-any.whl"
print("iJungle whl filename:", whl_filename)
whl_url = Environment.add_private_pip_wheel(workspace=ws, file_path=whl_filename, exist_ok=True)
packages = CondaDependencies.create(conda_packages=['scikit-learn', 'pip'],
                                    pip_packages=['azureml-defaults', 'azureml-dataprep[pandas]'])
packages.add_pip_package(whl_url)
sklearn_env.python.conda_dependencies = packages

script_config = ScriptRunConfig(source_directory='../operation',
                                script='parallel_train.py',
                                arguments=['--input-data', train_dataset.as_named_input('training_data'),
                                           '--max-subsample-size', max(subsample_list),
                                           '--train-size', train_size],
                                environment=sklearn_env,
                                compute_target=compute_cluster)

params = GridParameterSampling({
    '--trees': choice(trees_list),
    '--subsample-size': choice(subsample_list)
})

hyperdrive = HyperDriveConfig(run_config=script_config,
                              hyperparameter_sampling=params,
                              policy=None,
                              primary_metric_name='Dummy',
                              primary_metric_goal=PrimaryMetricGoal.MAXIMIZE,
                              max_total_runs=16,
                              max_concurrent_runs=4)

experiment = Experiment(workspace=ws, name='iJungle-parallel-training')
train_run = experiment.submit(config=hyperdrive)
RunDetails(train_run).show()
train_run.wait_for_completion()

# Register the models produced by the child runs
for child_run in train_run.get_children():
    metrics = child_run.get_metrics()
    child_trees = metrics['trees']
    child_subsample_size = metrics['subsample_size']
    model_name = 'iJungle_light_' + str(child_trees) + '_' + str(child_subsample_size)
    model_path = os.path.join(iJungle._MODEL_DIR, model_name + '.pkl')
    print("Registering", model_name)
    child_run.register_model(model_path=model_path, model_name=model_name, tags={'type': 'iJungle-train'})

overhead_size = .1
W = iJungle.select_overhead_data(df, overhead_size=overhead_size)
overhead_dataset = Dataset.Tabular.register_pandas_dataframe(W, ws.get_default_datastore(), 'overhead_data')
overhead_dataset

script_config = ScriptRunConfig(source_directory='../operation',
                                script='parallel_overhead.py',
                                arguments=['--input-data', overhead_dataset.as_named_input('overhead_data')],
                                environment=sklearn_env,
                                compute_target=compute_cluster)

params = GridParameterSampling({
    '--trees': choice(trees_list),
    '--subsample-size': choice(subsample_list)
})

hyperdrive = HyperDriveConfig(run_config=script_config,
                              hyperparameter_sampling=params,
                              policy=None,
                              primary_metric_name='Dummy',
                              primary_metric_goal=PrimaryMetricGoal.MAXIMIZE,
                              max_total_runs=16,
                              max_concurrent_runs=4)

experiment = Experiment(workspace=ws, name='iJungle-parallel-overhead')
overhead_run = experiment.submit(config=hyperdrive)
RunDetails(overhead_run).show()
overhead_run.wait_for_completion()

# Collect overhead-evaluation results from the child runs
os.makedirs(iJungle._MODEL_DIR, exist_ok=True)
results_dic = {}
for child_run in overhead_run.get_children():
    metrics = child_run.get_metrics()
    child_trees = metrics['trees']
    child_subsample_size = metrics['subsample_size']
    model_name = 'iJungle_light_' + str(child_trees) + '_' + str(child_subsample_size)
    model_path = os.path.join(iJungle._MODEL_DIR, model_name + '_results.pkl')
    print("Working on", model_name)
    child_run.download_file(model_path, iJungle._MODEL_DIR)
    with open(model_path, 'rb') as model_file:
        model = pickle.load(model_file)
    if not (str(child_subsample_size) in results_dic):
        results_dic[str(child_subsample_size)] = {}
    results_dic[str(child_subsample_size)][str(child_trees)] = model

print("Writing overhead results")
filename_results = 'iJungle_light_results_overhead.pkl'
results = pd.DataFrame(results_dic)
with open(os.path.join(iJungle._MODEL_DIR, filename_results), 'wb') as outfile:
    pickle.dump(results, outfile)

from azureml.core import Model
import joblib

best_subsample_size, best_trees, best_iF_k = iJungle.best_iforest_params(results)
model_name = 'iJungle_light_' + str(best_trees) + '_' + str(best_subsample_size)
model_path = os.path.join(iJungle._MODEL_DIR, model_name + ".pkl")
print("Downloading", model_path)
ws.models[model_name].download(iJungle._MODEL_DIR, exist_ok=True)
print("Loading best iFor_list from", model_path)
with open(model_path, 'rb') as infile:
    iFor_list = pickle.load(infile)
model = iFor_list[best_iF_k]
print("Model selected!")
print("Registering model...")
best_model_name = 'best_iforest.pkl'
best_model_path = os.path.join(iJungle._MODEL_DIR, best_model_name)
joblib.dump(model, best_model_path)
Model.register(workspace=ws, model_path=best_model_path, model_name=best_model_name)
```

# 5.
Parallel Inference # + jupyter={"source_hidden": false, "outputs_hidden": false} nteract={"transient": {"deleting": false}} gather={"logged": 1630469703945} inference_dataset = Dataset.Tabular.register_pandas_dataframe(df_test,ws.get_default_datastore(),'inference_data') named_inference_dataset = inference_dataset.as_named_input('inference_data') # + jupyter={"source_hidden": false, "outputs_hidden": false} nteract={"transient": {"deleting": false}} gather={"logged": 1630469704747} from azureml.pipeline.steps import ParallelRunConfig, ParallelRunStep from azureml.pipeline.core import PipelineData inference_env = Environment("iJungle-inference-env") packages = CondaDependencies.create(conda_packages=['scikit-learn','pip'], pip_packages=['azureml-defaults','azureml-dataprep[pandas]','azureml-dataset-runtime[fuse,pandas]']) inference_env.python.conda_dependencies = packages default_ds = ws.get_default_datastore() output_dir = PipelineData(name='inferences', datastore=default_ds, output_path_on_compute='results') parallel_run_config = ParallelRunConfig( source_directory='../operation', entry_script="parallel_inference.py", # the user script to run against each input mini_batch_size='1024KB', error_threshold=5, output_action='append_row', # append_row_file_name="kdd_outputs.txt", environment=inference_env, compute_target=compute_cluster, node_count=4, run_invocation_timeout=60 ) parallelrun_step = ParallelRunStep( name='ijungle-inference', inputs=[named_inference_dataset], output=output_dir, parallel_run_config=parallel_run_config, arguments=[], allow_reuse=False ) print('Steps defined') # + jupyter={"source_hidden": false, "outputs_hidden": false} nteract={"transient": {"deleting": false}} gather={"logged": 1630470438158} from azureml.pipeline.core import Pipeline pipeline = Pipeline(workspace=ws, steps=[parallelrun_step]) pipeline_run = Experiment(ws, 'iJungle-inference').submit(pipeline) RunDetails(pipeline_run).show() pipeline_run.wait_for_completion() # + 
jupyter={"source_hidden": false, "outputs_hidden": false} nteract={"transient": {"deleting": false}} gather={"logged": 1630481799250} os.chdir(os.path.dirname(os.path.abspath('__file__'))) prediction_run = next(pipeline_run.get_children()) prediction_run.download_file(name='inferences', output_file_path='results')
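The two HyperDrive configurations above both use `GridParameterSampling` over two `choice()` parameters, which enumerates the full Cartesian product of the search space; that is why `max_total_runs=16` matches a 4 × 4 grid. A plain-Python sketch of that enumeration and the resulting model-name convention (the `trees_list` values match the notebook; `subsample_list` is defined elsewhere in it, so the values here are assumptions for illustration):

```python
from itertools import product

# trees_list matches the notebook; subsample_list is assumed for illustration
trees_list = [500, 100, 20, 10]
subsample_list = [4096, 2048, 1024, 512]

# GridParameterSampling over two choice() parameters enumerates the full
# Cartesian product, hence max_total_runs=16 for a 4 x 4 grid
grid = list(product(trees_list, subsample_list))

# Same naming convention as the registration loop above
model_names = ['iJungle_light_%s_%s' % (trees, size) for trees, size in grid]
print(len(grid))  # 16
```

Each tuple in `grid` corresponds to one child run, and the `iJungle_light_<trees>_<subsample_size>` name is how the registration and overhead loops above look the models back up.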
notebooks/iJungle-tutorial.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + [markdown] colab_type="text" id="LgSzX-nVxwkV" # Here is a simpler example of the use of LIME for image classification by using TensorFlow/Keras (v2 or greater) # - # !pip install tensorflow==2.3.1 # !pip install lime==0.2.0.1 # + colab={"base_uri": "https://localhost:8080/", "height": 54} colab_type="code" id="_KS7nU0axwkY" outputId="21a4f2b5-120c-40bd-8ea1-b73edd8bce0a" import os import tensorflow.keras from tensorflow.keras.applications import inception_v3 as inc_net from tensorflow.keras.preprocessing import image from tensorflow.keras.applications.imagenet_utils import decode_predictions from skimage.io import imread import matplotlib.pyplot as plt # %matplotlib inline import numpy as np print('Notebook run using keras:', tensorflow.keras.__version__) # + [markdown] colab_type="text" id="w-eyXVAHxwkj" # # Using Inception # Here we create a standard InceptionV3 pretrained model and use it on images by first preprocessing them with the preprocessing tools # + colab={"base_uri": "https://localhost:8080/", "height": 74} colab_type="code" id="GF8hPe1Rxwkl" outputId="fdbae3ad-ddfc-409c-bd3d-3ed8ead94a25" inet_model = inc_net.InceptionV3() # + colab={} colab_type="code" id="GaQ0zT5Txwks" def transform_img_fn(path_list): out = [] for img_path in path_list: img = image.load_img(img_path, target_size=(299, 299)) x = image.img_to_array(img) x = np.expand_dims(x, axis=0) x = inc_net.preprocess_input(x) out.append(x) return np.vstack(out) # + [markdown] colab_type="text" id="26Le95e5xwk1" # ## Let's see the top 5 prediction for some image # + colab={"base_uri": "https://localhost:8080/", "height": 398} colab_type="code" id="3sxO8vzlxwk2" outputId="82b2b75b-6bd8-4fab-ea2c-5b365806fb9e" images = transform_img_fn([os.path.join('data','cat_mouse.jpg')]) # 
I'm dividing by 2 and adding 0.5 because of how this Inception represents images plt.imshow(images[0] / 2 + 0.5) preds = inet_model.predict(images) for x in decode_predictions(preds)[0]: print(x) # + [markdown] colab_type="text" id="_MCRFeNJxwk9" # ## Explanation # Now let's get an explanation # + colab={"base_uri": "https://localhost:8080/", "height": 54} colab_type="code" id="aP6LjonHxwlA" outputId="07b6f31d-5207-4cf8-e75f-500fb3f8c939" # %load_ext autoreload # %autoreload 2 import os,sys try: import lime except: sys.path.append(os.path.join('..', '..')) # add the current directory import lime from lime import lime_image # + colab={} colab_type="code" id="2b7ow1YtxwlH" explainer = lime_image.LimeImageExplainer() # + [markdown] colab_type="text" id="3ADfGCzxxwlO" # hide_color is the color for a superpixel turned OFF. Alternatively, if it is NONE, the superpixel will be replaced by the average of its pixels. Here, we set it to 0 (in the representation used by inception model, 0 means gray) # + colab={"base_uri": "https://localhost:8080/", "height": 104, "referenced_widgets": ["1fb1b32b563b4cf2bc565e4f150d5886", "17bb7656e07842dda40e13da228550c8", "3b0b05ed4a1e4a7492bad6d208b005a1", "30f68d0a496d4e97a5f9e59b6fce9f5e", "bfe238fe53be4860b53021934755f0b1", "f7d2484df9f24bceafc1910e361bc03f", "059e79101bc04a6dac3d6c8065d5f960", "e8a5f1d84d784fe7b75b1b965aec3fb8"]} colab_type="code" id="1L0lWZ8yxwlQ" outputId="ee3a3678-bf31-46ba-dec4-e098c9c55eaa" # %%time # Hide color is the color for a superpixel turned OFF. Alternatively, if it is NONE, the superpixel will be replaced by the average of its pixels explanation = explainer.explain_instance(images[0].astype('double'), inet_model.predict, top_labels=5, hide_color=0, num_samples=1000) # + [markdown] colab_type="text" id="5XAf25PNxwlY" # Image classifiers are a bit slow. 
Notice that an explanation on my Surface Book dGPU took 1min 12s # + [markdown] colab_type="text" id="uWl908SyxwlZ" # ### Now let's see the explanation for the top class ( Black Bear) # + [markdown] colab_type="text" id="WXMdU3hwxwla" # We can see the top 5 superpixels that are most positive towards the class with the rest of the image hidden # + colab={} colab_type="code" id="rgs3IwZnxwlc" from skimage.segmentation import mark_boundaries # + colab={"base_uri": "https://localhost:8080/", "height": 287} colab_type="code" id="5LJMsp3Oxwlk" outputId="64b82262-8a45-4232-e844-2e15c5fc2d09" temp, mask = explanation.get_image_and_mask(explanation.top_labels[0], positive_only=True, num_features=5, hide_rest=True) plt.imshow(mark_boundaries(temp / 2 + 0.5, mask)) # + [markdown] colab_type="text" id="aAFdw9oJxwlr" # Or with the rest of the image present: # + colab={"base_uri": "https://localhost:8080/", "height": 287} colab_type="code" id="-6eei5uQxwls" outputId="a9f29754-15aa-43a9-a64d-493f1f01dea2" temp, mask = explanation.get_image_and_mask(explanation.top_labels[0], positive_only=True, num_features=5, hide_rest=False) plt.imshow(mark_boundaries(temp / 2 + 0.5, mask)) # + [markdown] colab_type="text" id="y7aoh_OExwl0" # We can also see the 'pros and cons' (pros in green, cons in red) # + colab={"base_uri": "https://localhost:8080/", "height": 287} colab_type="code" id="jHow3M4Oxwl1" outputId="7cc10571-84aa-4e8f-d6cc-80a56cf7d054" temp, mask = explanation.get_image_and_mask(explanation.top_labels[0], positive_only=False, num_features=10, hide_rest=False) plt.imshow(mark_boundaries(temp / 2 + 0.5, mask)) # + [markdown] colab_type="text" id="e5FIt55txwl9" # Or the pros and cons that have weight at least 0.1 # + colab={"base_uri": "https://localhost:8080/", "height": 287} colab_type="code" id="ExFQHJw_xwl-" outputId="2b7e3bfd-5c66-42ef-eaf7-60506b25dbe3" temp, mask = explanation.get_image_and_mask(explanation.top_labels[0], positive_only=False, num_features=1000, 
hide_rest=False, min_weight=0.1) plt.imshow(mark_boundaries(temp / 2 + 0.5, mask)) # + [markdown] colab_type="text" id="R2W22gjS1oiJ" # Alternatively, we can also plot explanation weights onto a heatmap visualization. The colorbar shows the values of the weights. # + colab={"base_uri": "https://localhost:8080/", "height": 287} colab_type="code" id="qXBM71Bo1UPc" outputId="9f32237b-eb29-4355-a0a3-96163c2360c5" #Select the same class explained on the figures above. ind = explanation.top_labels[0] #Map each explanation weight to the corresponding superpixel dict_heatmap = dict(explanation.local_exp[ind]) heatmap = np.vectorize(dict_heatmap.get)(explanation.segments) #Plot. The visualization makes more sense if a symmetrical colorbar is used. plt.imshow(heatmap, cmap = 'RdBu', vmin = -heatmap.max(), vmax = heatmap.max()) plt.colorbar() # + [markdown] colab_type="text" id="YaQK8BZexwmG" # ### Let's see the explanation for the second highest prediction # + [markdown] colab_type="text" id="ClAt2PowxwmH" # Most positive towards wombat: # + colab={"base_uri": "https://localhost:8080/", "height": 287} colab_type="code" id="cROfKjUMxwmI" outputId="37bfe983-d4bc-45b2-bca2-964b91f1258f" temp, mask = explanation.get_image_and_mask(106, positive_only=True, num_features=5, hide_rest=True) plt.imshow(mark_boundaries(temp / 2 + 0.5, mask)) # + [markdown] colab_type="text" id="Z2MSbZ6kxwmP" # Pros and cons: # + colab={"base_uri": "https://localhost:8080/", "height": 287} colab_type="code" id="6HmKJVr1xwmR" outputId="7d369717-cefc-461d-d01f-49389dddd177" temp, mask = explanation.get_image_and_mask(106, positive_only=False, num_features=10, hide_rest=False) plt.imshow(mark_boundaries(temp / 2 + 0.5, mask))
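The heatmap construction above (`dict_heatmap` plus `np.vectorize`) can be illustrated without a trained model. This sketch uses a hypothetical 3 × 3 segmentation and made-up weights in LIME's `(superpixel_id, weight)` pair format, standing in for `explanation.segments` and `explanation.local_exp[ind]`:

```python
import numpy as np

# Hypothetical stand-ins for explanation.segments and explanation.local_exp[ind]
segments = np.array([[0, 0, 1],
                     [0, 2, 1],
                     [2, 2, 1]])                 # superpixel id per pixel
local_exp = [(0, 0.4), (1, -0.2), (2, 0.1)]      # (superpixel, weight) pairs

# Same recipe as above: map each explanation weight onto its superpixel's pixels
dict_heatmap = dict(local_exp)
heatmap = np.vectorize(dict_heatmap.get)(segments)
print(heatmap)
```

Every pixel belonging to superpixel 0 gets weight 0.4, and so on, which is exactly what the `RdBu` heatmap then visualizes with a symmetric colorbar.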
doc/notebooks/Tutorial - Image Classification Keras.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Frequentist Inference Case Study - Part B # ## Learning objectives # Welcome to Part B of the Frequentist inference case study! The purpose of this case study is to help you apply the concepts associated with Frequentist inference in Python. In particular, you'll practice writing Python code to apply the following statistical concepts: # * the _z_-statistic # * the _t_-statistic # * the difference and relationship between the two # * the Central Limit Theorem, including its assumptions and consequences # * how to estimate the population mean and standard deviation from a sample # * the concept of a sampling distribution of a test statistic, particularly for the mean # * how to combine these concepts to calculate a confidence interval # In the previous notebook, we used only data from a known normal distribution. **You'll now tackle real data, rather than simulated data, and answer some relevant real-world business problems using the data.** # ## Hospital medical charges # Imagine that a hospital has hired you as their data scientist. An administrator is working on the hospital's business operations plan and needs you to help them answer some business questions. # # In this assignment notebook, you're going to use frequentist statistical inference on a data sample to answer the questions: # * has the hospital's revenue stream fallen below a key threshold? # * are patients with insurance really charged different amounts than those without? # # Answering that last question with a frequentist approach makes some assumptions, and requires some knowledge, about the two groups. # We are going to use some data on medical charges obtained from [Kaggle](https://www.kaggle.com/easonlai/sample-insurance-claim-prediction-dataset). 
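The CLT behaviour listed in the learning objectives is easy to see numerically. A minimal sketch on synthetic data (an exponential distribution, not the medical charges): sample means of a highly skewed distribution cluster approximately normally around the population mean, with spread shrinking like 1/sqrt(n):

```python
import numpy as np

# CLT sketch: means of size-50 samples from a skewed exponential(scale=1)
# distribution (population mean 1, population sd 1)
rng = np.random.default_rng(0)
sample_means = rng.exponential(1.0, size=(10_000, 50)).mean(axis=1)

print(sample_means.mean())        # close to the population mean, 1.0
print(sample_means.std(ddof=1))   # close to 1/sqrt(50), about 0.141
```

The raw draws are strongly right-skewed, yet the distribution of the 10,000 sample means is already close to normal at n = 50, which is exactly what the frequentist tests in this notebook rely on.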
# # For the purposes of this exercise, assume the observations are the result of random sampling from our single hospital. Recall that in the previous assignment, we introduced the Central Limit Theorem (CLT), and its consequence that the distributions of sample statistics approach a normal distribution as $n$ increases. The amazing thing about this is that it applies to the sampling distributions of statistics that have been calculated from even highly non-normal distributions of data! Recall, also, that hypothesis testing is very much based on making inferences about such sample statistics. You're going to rely heavily on the CLT to apply frequentist (parametric) tests to answer the questions in this notebook. # + import pandas as pd import numpy as np import matplotlib.pyplot as plt from scipy.stats import t from numpy.random import seed medical = pd.read_csv('insurance2.csv') # - medical.shape medical.head() # __Q1:__ Plot the histogram of charges and calculate the mean and standard deviation. Comment on the appropriateness of these statistics for the data. # __A:__ # + medical.hist('charges',bins=40) plt.show() # + # summary stats med_mean = np.mean(medical.charges) med_std = np.std(medical.charges) med_mean, med_std # - # The data have a long right tail and no left tail, so this data set is far from a Gaussian distribution; the mean and standard deviation can still be computed, but they are poor summaries of such a skewed shape. # Use the sample standard deviation (ddof=1), since these data are a sample med_std = np.std(medical.charges,ddof=1) med_std # __Q2:__ The administrator is concerned that the actual average charge has fallen below 12,000, threatening the hospital's operational model. On the assumption that these data represent a random sample of charges, how would you justify that these data allow you to answer that question? And what would be the most appropriate frequentist test, of the ones discussed so far, to apply?
# __A:__ The data are a random sample, so by the CLT the sampling distribution of the sample mean is approximately normal even though the charges themselves are highly skewed. That lets us estimate the population mean and quantify its uncertainty from this one sample. Since the population standard deviation is unknown, the most appropriate test is a one-sample _t_-test on the mean. # __Q3:__ Given the nature of the administrator's concern, what is the appropriate confidence interval in this case? A ***one-sided*** or ***two-sided*** interval? (Refresh your understanding of this concept on p. 399 of the *AoS*). Calculate the critical value and the relevant 95% confidence interval for the mean, and comment on whether the administrator should be concerned. # __A:__ As we are only concerned with whether the mean charge has fallen below one value, a one-sided confidence interval is appropriate. import scipy from scipy import stats # + # critical value = t score for 95% confidence and 1337 degrees of freedom print(stats.t.ppf(.95, df=1337)) # + # margin of error = t(alpha, df) * s/sqrt(n), one-sided me = (stats.t.ppf(.95, df=1337)) * (med_std/np.sqrt(1338)) me # + # one-sided interval: we only need the lower bound on the mean lower_bound = med_mean - me lower_bound # - # The lower bound (about 12,725) is above 12,000, so the administrator should not be concerned. help(scipy.stats.norm.ppf) help(scipy.stats.norm.cdf) # + help(scipy.stats.t.ppf) # + # For Q4 below: the null hypothesis is that there is no difference in charges between the two groups # - # The administrator then wants to know whether people with insurance really are charged a different amount to those without. # # __Q4:__ State the null and alternative hypothesis here. Use the _t_-test for the difference between means, where the pooled standard deviation of the two groups is given by: # \begin{equation} # s_p = \sqrt{\frac{(n_0 - 1)s^2_0 + (n_1 - 1)s^2_1}{n_0 + n_1 - 2}} # \end{equation} # # and the *t*-test statistic is then given by: # # \begin{equation} # t = \frac{\bar{x}_0 - \bar{x}_1}{s_p \sqrt{1/n_0 + 1/n_1}}. # \end{equation} # # (If you need some reminding of the general definition of ***t-statistic***, check out the definition on p. 404 of *AoS*). # # What assumption about the variances of the two groups are we making here?
# __A:__ We are assuming the two groups have equal variances (homogeneity of variance). # __Q5:__ Perform this hypothesis test both manually, using the above formulae, and then using the appropriate function from [scipy.stats](https://docs.scipy.org/doc/scipy/reference/stats.html#statistical-tests) (hint, you're looking for a function to perform a _t_-test on two independent samples). For the manual approach, calculate the value of the test statistic and then its probability (the p-value). Verify you get the same results from both. # __A:__ no_in = medical.loc[medical['insuranceclaim'] == 0] no_in.info() insur = medical.loc[medical.insuranceclaim == 1] insur.info() # + # pooled standard deviation, using the sample variances (ddof=1) of the charges df_no_in = len(no_in)-1 df_insur = len(insur)-1 numerator = (df_no_in*np.var(no_in.charges,ddof=1))+(df_insur*np.var(insur.charges,ddof=1)) denominator = len(no_in)+len(insur)-2 sp = np.sqrt(numerator/denominator) sp # + ## check the variance of the two samples np.var(no_in.charges,ddof=1), np.var(insur.charges,ddof=1) # + # t statistic: the *difference* of the sample means over the pooled standard error t_num = np.mean(insur.charges) - np.mean(no_in.charges) t_den = sp*np.sqrt((1/len(no_in.charges))+(1/len(insur.charges))) t_stat = t_num/t_den t_stat # + # two-sided p-value of the calculated t statistic, with n0 + n1 - 2 degrees of freedom dof = len(no_in)+len(insur)-2 2*scipy.stats.t.sf(np.abs(t_stat), df=dof) # - # scipy equivalent; equal_var=True matches the pooled (equal-variance) formula above stats.ttest_ind(a=no_in.charges, b=insur.charges, equal_var=True) stats.ttest_ind(a=insur.charges, b=no_in.charges, equal_var=True) # - # The p-value is effectively zero: a difference this large would essentially never arise under the null hypothesis, so we reject it. # Congratulations! Hopefully you got the exact same numerical results. This shows that you correctly calculated the numbers by hand. Secondly, you used the correct function and saw that it's much easier to use.
All you need to do is pass your data to it. # __Q6:__ Conceptual question: look through the documentation for statistical test functions in scipy.stats. You'll see the above _t_-test for a sample, but can you see an equivalent one for performing a *z*-test from a sample? Comment on your answer. # __A:__ There is no equivalent _z_-test in scipy.stats. In practice the population standard deviation is rarely known, so the _t_-test is the appropriate choice when working from a sample; for large $n$ the _t_-distribution converges to the normal anyway. (If a _z_-test is needed, one is available in `statsmodels.stats.weightstats.ztest`.) # ## Learning outcomes # Having completed this project notebook, you now have good hands-on experience: # * using the central limit theorem to help you apply frequentist techniques to answer questions that pertain to very non-normally distributed data from the real world # * performing inference using such data to answer business questions # * forming a hypothesis and framing the null and alternative hypotheses # * testing this using a _t_-test
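As a quick self-check of the pooled-variance formulae used above, the manual computation can be compared against `scipy.stats.ttest_ind` on synthetic data (a sketch; the two groups below are made up, not the medical charges):

```python
import numpy as np
from scipy import stats

# Synthetic two-group data (made up; not the medical charges)
rng = np.random.default_rng(42)
x0 = rng.normal(10.0, 2.0, size=50)
x1 = rng.normal(11.0, 2.0, size=60)
n0, n1 = len(x0), len(x1)

# Pooled standard deviation, with sample variances (ddof=1)
sp = np.sqrt(((n0 - 1) * np.var(x0, ddof=1) + (n1 - 1) * np.var(x1, ddof=1))
             / (n0 + n1 - 2))

# Manual t statistic and two-sided p-value
t_manual = (np.mean(x0) - np.mean(x1)) / (sp * np.sqrt(1 / n0 + 1 / n1))
p_manual = 2 * stats.t.sf(abs(t_manual), df=n0 + n1 - 2)

# scipy's pooled (equal-variance) t-test should agree to floating-point precision
t_scipy, p_scipy = stats.ttest_ind(x0, x1, equal_var=True)
print(t_manual, p_manual)
```

With `equal_var=True`, `ttest_ind` uses exactly the pooled formula, so the manual and library results match to floating-point precision; `equal_var=False` would instead run Welch's test, which relaxes the equal-variance assumption.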
Frequentist Case Study/Frequentist Inference Case Study - Part B (2).ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Phase Correlation # Phase correlation says that if we compute the Discrete Fourier Transform of two images $f$ and $h$: # # $$ F = \mathcal{F}(f); $$$$ H = \mathcal{F}(h) $$ # # and then compute the correlation $R$ of the transforms: # $$ R = \dfrac{F H^*}{|F H^*|} $$ # # and afterwards apply the inverse transform to $R$: # $$ g = \mathcal{F}^{-1}(R) $$ # # then the translation between the two images can be found as: # $$ (row, col) = \arg\max\{g\} $$ # # ## Identifying the translation between 2 images # # 1. Compute the Fourier Transform of the 2 images to be compared; # 2. Compute the phase correlation using the *phasecorr* function; # 3. Find the maximum point of the resulting correlation map. # %matplotlib inline import matplotlib.pyplot as plt import matplotlib.image as mpimg import numpy as np from numpy.fft import * import sys,os ia898path = os.path.abspath('../../') if ia898path not in sys.path: sys.path.append(ia898path) import ia898.src as ia # + f = mpimg.imread('../data/cameraman.tif') # Translating the image by (x,y) x = 20 y = 30 #f_trans = ia.ptrans(f,(20,30)) f_trans = np.zeros(f.shape) f_trans[x:,y:] = f[:-x,:-y] plt.figure(1,(10,10)) plt.subplot(1,2,1) plt.imshow(f, cmap='gray') plt.title('Original image') plt.subplot(1,2,2) plt.imshow(f_trans, cmap='gray') plt.title('Translated image') # + # Computing the phase correlation g = ia.phasecorr(f,f_trans) # Finding the point of maximum correlation i = np.argmax(g) row,col = np.unravel_index(i,g.shape) v = np.array(f.shape) - np.array((row,col)) print('Point of maximum correlation: ',v) # - plt.figure(2,(6,6)) f[v[0]-1:v[0]+1,v[1]-1:v[1]+1] = 0 plt.imshow(f, cmap='gray') plt.title('Point of maximum correlation marked (in black)') # ## Identifying the
rotation between 2 images # # 1. Compute the Fourier Transform of the 2 images to be compared; # 2. Convert the resulting images to polar coordinates; # 3. Compute the phase correlation using the *phasecorr* function; # 4. Find the maximum point of the resulting correlation map. # + f = mpimg.imread('../data/cameraman.tif') # Inserting a border of zeros to allow the image to be rotated t = np.zeros(np.array(f.shape)+100,dtype=np.uint8) t[50:f.shape[0]+50,50:f.shape[1]+50] = f f = t t1 = np.array([ [1,0,-f.shape[0]/2.], [0,1,-f.shape[1]/2.], [0,0,1]]); t2 = np.array([ [1,0,f.shape[0]/2.], [0,1,f.shape[1]/2.], [0,0,1]]); # Rotating the image by 30 degrees theta = np.radians(30) r1 = np.array([ [np.cos(theta),-np.sin(theta),0], [np.sin(theta),np.cos(theta),0], [0,0,1]]); T = t2.dot(r1).dot(t1) f_rot = ia.affine(f,T,0) plt.figure(1,(10,10)) plt.subplot(1,2,1) plt.imshow(f, cmap='gray') plt.title('Original image') plt.subplot(1,2,2) plt.imshow(f_rot, cmap='gray') plt.title('Rotated image') # + W,H = f.shape f_polar = ia.polar(f,(150,200),2*np.pi) f_rot_polar = ia.polar(f_rot,(150,200),2*np.pi) plt.figure(1,(10,10)) plt.subplot(1,2,1) plt.imshow(f_polar, cmap='gray') plt.title('Original image (polar coords.)') plt.subplot(1,2,2) plt.imshow(f_rot_polar, cmap='gray') plt.title('Rotated image (polar coords.)') # + # Computing the phase correlation g = ia.phasecorr(f_polar,f_rot_polar) # Finding the point of maximum correlation i = np.argmax(g) corr = np.unravel_index(i,g.shape) # Calculate the angle from the column of the peak ang = (float(corr[1])/g.shape[1])*360 print('Rotation angle (degrees): ',ang) # - # ### Links # # - [Polar coordinate conversion function](../src/polar.ipynb) # - [Phase correlation function](../src/phasecorr.ipynb)
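For reference, the translation recipe can be sketched in pure NumPy, without the `ia.phasecorr` helper (a minimal version; the library function may differ in normalization, argument order, and edge handling):

```python
import numpy as np

def phase_corr(f, h):
    """Correlation map g = F^{-1}( F H* / |F H*| )."""
    F = np.fft.fft2(f)
    H = np.fft.fft2(h)
    R = F * np.conj(H)
    R /= np.abs(R) + 1e-12           # normalize to pure phase; avoid division by zero
    return np.real(np.fft.ifft2(R))

# Synthetic check: circularly shift a random image by (20, 30)
rng = np.random.default_rng(0)
f = rng.random((64, 64))
f_trans = np.roll(f, shift=(20, 30), axis=(0, 1))

# With the shifted image as the first argument, the peak of g sits directly
# at the shift (with the other argument order, it sits at shape - shift)
g = phase_corr(f_trans, f)
row, col = np.unravel_index(np.argmax(g), g.shape)
print(row, col)  # recovers the (20, 30) shift
```

Because the shift here is circular, the correlation map is an almost perfect delta function at the displacement; with the zero-padded (non-circular) shift used above, the peak is broader but still at the same location.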
2S2018/13 Correlacao de fase.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # The Spectral Bias # + import torch import torch.nn as nn import torch.nn.functional as F import numpy as np from argparse import Namespace from functools import reduce import matplotlib from matplotlib import pyplot as plt import seaborn as sns sns.set() # %matplotlib inline # - # ## Data Generation # # The code for generating the dataset and evaluating FFT. # + def make_phased_waves(opt): t = np.arange(0, 1, 1./opt.N) if opt.A is None: yt = reduce(lambda a, b: a + b, [np.sin(2 * np.pi * ki * t + 2 * np.pi * phi) for ki, phi in zip(opt.K, opt.PHI)]) else: yt = reduce(lambda a, b: a + b, [Ai * np.sin(2 * np.pi * ki * t + 2 * np.pi * phi) for ki, Ai, phi in zip(opt.K, opt.A, opt.PHI)]) return t, yt def fft(opt, yt): n = len(yt) # length of the signal k = np.arange(n) T = n/opt.N frq = k/T # two sides frequency range frq = frq[range(n//2)] # one side frequency range # ------------- FFTYT = np.fft.fft(yt)/n # fft computing and normalization FFTYT = FFTYT[range(n//2)] fftyt = abs(FFTYT) return frq, fftyt def to_torch_dataset_1d(opt, t, yt): t = torch.from_numpy(t).view(-1, 1).float() yt = torch.from_numpy(yt).view(-1, 1).float() if opt.CUDA: t = t.cuda() yt = yt.cuda() return t, yt # - # ## Model Training # + class Lambda(nn.Module): def __init__(self, lambd): super(Lambda, self).__init__() self.lambd = lambd def forward(self, x): return self.lambd(x) def make_model(opt): layers = [] layers.append(nn.Linear(opt.INP_DIM, opt.WIDTH)) layers.append(nn.ReLU()) layers.append(Lambda(lambda x: x + torch.randn_like(x)*0.1)) for _ in range(opt.DEPTH - 2): layers.append(nn.Linear(opt.WIDTH, opt.WIDTH)) layers.append(nn.ReLU()) layers.append(Lambda(lambda x: x + torch.randn_like(x)*0.1)) layers.extend([nn.Linear(opt.WIDTH, opt.OUT_DIM)]) model = 
nn.Sequential(*layers) if opt.CUDA: model = model.cuda() return model # + def power_iteration(A, num_simulations=10): # Choose a random start vector, to decrease the chance that it is # orthogonal to the dominant eigenvector A = A.data b_k = A.new(A.shape[1], 1).normal_() for _ in range(num_simulations): # calculate the matrix-by-vector product Ab b_k1 = A @ b_k # calculate the norm b_k1_norm = torch.norm(b_k1) # re-normalize the vector b_k = b_k1 / b_k1_norm # Rayleigh quotient; the denominator must be parenthesized so that it is # a scalar division, not a trailing matrix product return ((b_k.t() @ A @ b_k) / (b_k.t() @ b_k)).squeeze().abs() def spectral_norm(model): norms = [] for layer in model: if isinstance(layer, nn.Linear): if layer.in_features == layer.out_features: norms.append(power_iteration(layer.weight).cpu().numpy()) elif layer.in_features == 1 or layer.out_features == 1: norms.append(torch.norm(layer.weight.data)) return norms # - def train_model(opt, model, input_, target): # Build loss loss_fn = nn.MSELoss() # Build optim optim = torch.optim.Adam(model.parameters(), lr=opt.LR) # Rec frames = [] model.train() # To cuda if opt.CUDA: input_ = input_.cuda() target = target.cuda() # Loop!
for iter_num in range(opt.NUM_ITER): if iter_num % (opt.NUM_ITER // 100) == 0: print(">", end='') x = input_ yt = target.view(-1, opt.OUT_DIM) optim.zero_grad() y = model(x) loss = loss_fn(y, yt) loss.backward() optim.step() if iter_num % opt.REC_FRQ == 0: # Measure spectral norm frames.append(Namespace(iter_num=iter_num, prediction=y.data.cpu().numpy(), loss=loss.item(), spectral_norms=spectral_norm(model))) # Done model.eval() return frames # ## Visualization # + def plot_inferred_wave(opt, x, y, yinf): # A single axes is enough here; subplots(1, 2) would return an array of axes fig, ax = plt.subplots(figsize=(4.5, 4)) ax.set_title("Function") ax.plot(x, y, label='Target') ax.plot(x, yinf, label='Learnt') ax.set_xlabel("x") ax.set_ylabel("f(x)") ax.legend() plt.show() def plot_wave_and_spectrum(opt, x, yox): # Btw, "yox" --> "y of x" # Compute fft k, yok = fft(opt, yox) # Plot fig, (ax0, ax1) = plt.subplots(1, 2, figsize=(9, 4)) ax0.set_title("Function") ax0.plot(x, yox) ax0.set_xlabel("x") ax0.set_ylabel("f(x)") ax1.set_title("FT of Function") ax1.plot(k, yok) ax1.set_xlabel("k") ax1.set_ylabel("f(k)") plt.show() # + def compute_spectra(opt, frames): # Make array for heatmap dynamics = [] xticks = [] for iframe, frame in enumerate(frames): # Compute fft of prediction frq, yfft = fft(opt, frame.prediction.squeeze()) dynamics.append(yfft) xticks.append(frame.iter_num) return np.array(frq), np.array(dynamics), np.array(xticks) def plot_spectral_dynamics(opt, all_frames): all_dynamics = [] # Compute spectra for all frames for frames in all_frames: frq, dynamics, xticks = compute_spectra(opt, frames) all_dynamics.append(dynamics) # Average dynamics over multiple frames # mean_dynamics.shape = (num_iterations, num_frequencies) mean_dynamics = np.array(all_dynamics).mean(0) # Select the frequencies which are present in the target spectrum freq_selected = mean_dynamics[:, np.sum(frq.reshape(-1, 1) == np.array(opt.K).reshape(1, -1), axis=-1, dtype='bool')] # Normalize by the amplitude. Remember to account for the fact that the measured spectra
Remember to account for the fact that the measured spectra # are single-sided (positive freqs), so multiply by 2 accordingly norm_dynamics = 2 * freq_selected / np.array(opt.A).reshape(1, -1) # Plot heatmap plt.figure(figsize=(7, 6)) # plt.title("Evolution of Frequency Spectrum (Increasing Amplitudes)") sns.heatmap(norm_dynamics[::-1], xticklabels=opt.K, yticklabels=[(frame.iter_num if frame.iter_num % 10000 == 0 else '') for _, frame in zip(range(norm_dynamics.shape[0]), frames)][::-1], vmin=0., vmax=1., cmap=sns.cubehelix_palette(8, start=.5, rot=-.75, reverse=True, as_cmap=True)) plt.xlabel("Frequency [Hz]") plt.ylabel("Training Iteration") plt.show() def plot_multiple_spectral_norms(all_frames): iter_nums = np.array([frame.iter_num for frame in all_frames[0]]) norms = np.array([np.array(list(zip(*[frame.spectral_norms for frame in frames]))).squeeze() for frames in all_frames]) means = norms.mean(0) stds = norms.std(0) plt.xlabel("Training Iteration") plt.ylabel("Spectral Norm of Layer Weights") for layer_num, (mean_curve, std_curve) in enumerate(zip(means, stds)): p = plt.plot(iter_nums, mean_curve, label=f'Layer {layer_num + 1}') plt.fill_between(iter_nums, mean_curve + std_curve, mean_curve - std_curve, color=p[0].get_color(), alpha=0.15) plt.legend() plt.show() # - # ## Play opt = Namespace() # Data Generation opt.N = 200 opt.K = [5, 10, 15, 20, 25, 30, 35, 40, 45, 50] opt.A = [1 for _ in opt.K] opt.PHI = [np.random.rand() for _ in opt.K] # Model parameters opt.INP_DIM = 1 opt.OUT_DIM = 1 opt.WIDTH = 256 opt.DEPTH = 6 # Training opt.CUDA = False opt.NUM_ITER = 60000 opt.REC_FRQ = 100 opt.LR = 0.0003 # ### Plot the Functions # # ... as a sanity check. x, y = make_phased_waves(opt) plot_wave_and_spectrum(opt, x, y) # Note that we only plot the positive frequencies, which is why the peaks in the spectrum are at $0.5$ (half the power is in the negative frequencies). 
# ### Train the Model def go(opt, repeats=10): all_frames = [] for _ in range(repeats): # Sample random phase opt.PHI = [np.random.rand() for _ in opt.K] # Generate data x, y = to_torch_dataset_1d(opt, *make_phased_waves(opt)) # Make model model = make_model(opt) # Train frames = train_model(opt, model, x, y) all_frames.append(frames) print('', end='\n') return all_frames # #### Case 1: All Frequencies with Same Amplitude opt.K = [5, 10, 15, 20, 25, 30, 35, 40, 45, 50] opt.A = [1 for _ in opt.K] eq_amp_frames = go(opt, 20) plot_spectral_dynamics(opt, eq_amp_frames) plot_multiple_spectral_norms(eq_amp_frames) # #### Case 2: Higher Amplitude for Higher Frequencies opt.A = [0.1 * (a + 1) for a in range(len(opt.K))] inc_amp_frames = go(opt, 20) plot_spectral_dynamics(opt, inc_amp_frames) plot_multiple_spectral_norms(inc_amp_frames)
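The `2 *` in the spectrum normalization above accounts for folding a two-sided spectrum onto positive frequencies: half the power of each sine sits in the negative-frequency bins. A self-contained numpy sketch of that bookkeeping (the notebook's own `fft` helper is defined in an earlier cell; `single_sided_amplitude` here is an illustrative stand-in):

```python
import numpy as np

def single_sided_amplitude(y, dx):
    # Two-sided normalization |rfft| / N, then double the positive
    # frequencies (DC and Nyquist excepted) to recover true amplitudes.
    k = np.fft.rfftfreq(len(y), d=dx)
    amp = 2 * np.abs(np.fft.rfft(y)) / len(y)
    return k, amp

# A 5 Hz sine of amplitude 0.7, sampled like the notebook's waves
N = 200
x = np.linspace(0, 1, N, endpoint=False)
y = 0.7 * np.sin(2 * np.pi * 5 * x)
k, amp = single_sided_amplitude(y, x[1] - x[0])
print(round(amp[np.argmin(np.abs(k - 5))], 3))  # 0.7
```

Without the factor of 2, every amplitude in the heatmap would saturate at 0.5 instead of 1.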
.ipynb_checkpoints/ICML_SpectralBias-checkpoint.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: 'Python 3.9.4 64-bit (''eemont'': conda)' # language: python # name: python3 # --- # [![colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/davemlz/eemont/blob/master/docs/tutorials/032-Combining-eemont-wxee.ipynb) # [![Open in SageMaker Studio Lab](https://studiolab.sagemaker.aws/studiolab.svg)](https://studiolab.sagemaker.aws/import/github/davemlz/eemont/blob/master/docs/tutorials/032-Combining-eemont-wxee.ipynb) # [![Open in Planetary Computer](https://img.shields.io/badge/Open-Planetary%20Computer-black?style=flat&logo=microsoft)](https://pccompute.westeurope.cloudapp.azure.com/compute/hub/user-redirect/git-pull?repo=https://github.com/davemlz/eemont&urlpath=lab/tree/eemont/docs/tutorials/032-Combining-eemont-wxee.ipynb&branch=master) # + [markdown] id="jZEthLln92Ep" # # Monthly Global kNDVI using eemont and wxee # + [markdown] id="dNa470OZ8Oec" # _Tutorial created by **<NAME>**_: [GitHub](https://github.com/davemlz) | [Twitter](https://twitter.com/dmlmont) # # - GitHub Repo: [https://github.com/davemlz/eemont](https://github.com/davemlz/eemont) # - PyPI link: [https://pypi.org/project/eemont/](https://pypi.org/project/eemont/) # - Conda-forge: [https://anaconda.org/conda-forge/eemont](https://anaconda.org/conda-forge/eemont) # - Documentation: [https://eemont.readthedocs.io/](https://eemont.readthedocs.io/) # - More tutorials: [https://github.com/davemlz/eemont/tree/master/docs/tutorials](https://github.com/davemlz/eemont/tree/master/docs/tutorials) # + [markdown] id="CD7h0hbi92Er" # ## Let's start! # + [markdown] id="E0rc6Cya92Es" # If required, please uncomment: # + id="NYzyvKtk92Es" # #!pip install eemont # #!pip install wxee # + [markdown] id="x3Rm3qt_92Et" # Import the required packages.
# + id="H0C9S_Hh92Et" import ee, eemont, wxee # + [markdown] id="k1sdX2p592Eu" # Authenticate and Initialize Earth Engine. # + id="7QDXqVwy8Oef" ee.Initialize() # - # Use the force of `eemont` and `wxee` to get your data in an easy way! MOD = ee.ImageCollection("MODIS/006/MCD43A4") \ .filterDate("2021-01-01","2022-01-01") \ .scaleAndOffset() \ .spectralIndices("kNDVI") \ .wx.to_time_series() \ .aggregate_time(frequency="month",reducer=ee.Reducer.mean()) \ .wx.to_xarray(region = ee.Geometry.BBox(-180, -90, 180, 90)) # Visualize everything! MOD.kNDVI.plot(col="time", col_wrap=3, cmap="viridis", figsize=(15, 10))
docs/tutorials/032-Combining-eemont-wxee.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Simple image denoising example using 2-dimensional FFT # # In this example we illustrate a very elementary type of image denoising # with the two-dimensional Fourier Transform. # + # %matplotlib inline import sys import numpy as np import matplotlib.pyplot as plt # - def plot_spectrum(F, amplify=1000, ax=None): """Normalise, amplify and plot an amplitude spectrum.""" # Note: the problem here is that we have a spectrum whose histogram is # *very* sharply peaked at small values. To get a meaningful display, a # simple strategy to improve the display quality consists of simply # amplifying the values in the array and then clipping. # Compute the magnitude of the input F (call it mag). Then, rescale mag by # amplify/maximum_of_mag. mag = abs(F) mag *= amplify/mag.max() # Next, clip all values larger than one to one. mag[mag > 1] = 1 if ax is None: ax = plt.gca() ax.imshow(mag, plt.cm.Blues) # Read in original image, convert to floating point for further # manipulation; imread returns a MxNx4 RGBA image. Since the image is # grayscale, just extract the 1st channel # # **Hints:** # # * use plt.imread() to load the file # * convert to a float array with the .astype() method # * extract all rows, all columns, 0-th plane to get the first # channel # * the resulting array should have 2 dimensions only fname = 'data/moonlanding.png' im = plt.imread(fname).astype(float) print("Image shape: %s" % str(im.shape)) # Compute the 2d FFT of the input image # # Hint: Look for a 2-d FFT in np.fft. # # Note: call this variable 'F', which is the name we'll be using below. # # In the lines following, we'll make a copy of the original spectrum and # truncate coefficients. 
# + F = np.fft.fft2(im) # Define the fraction of coefficients (in each direction) we keep keep_fraction = 0.1 # Call ff a copy of the original transform. Numpy arrays have a copy # method for this purpose. ff = F.copy() # Set r and c to be the number of rows and columns of the array. r, c = ff.shape # Set to zero all rows with indices between r*keep_fraction and # r*(1-keep_fraction). Note the int() casts: NumPy slices must use # integer indices, so the float products need to be truncated first. ff[int(r*keep_fraction):int(r*(1-keep_fraction))] = 0 # Similarly with the columns: ff[:, int(c*keep_fraction):int(c*(1-keep_fraction))] = 0 # - # Reconstruct the denoised image from the filtered spectrum, keep only the # real part for display. # Hint: There's an inverse 2d fft in the np.fft module as well (don't # forget that you only want the real part). # Call the result im_new and plot the results # + im_new = np.fft.ifft2(ff).real fig, ax = plt.subplots(2, 2, figsize=(10,7)) ax[0,0].set_title('Original image') ax[0,0].imshow(im, plt.cm.gray) ax[0,1].set_title('Fourier transform') plot_spectrum(F, ax=ax[0,1]) ax[1,1].set_title('Filtered Spectrum') plot_spectrum(ff, ax=ax[1,1]) ax[1,0].set_title('Reconstructed Image') ax[1,0].imshow(im_new, plt.cm.gray);
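As a quick standalone check of the idea (not part of the original exercise), the corner-keeping mask really does act as a low-pass filter: on a synthetic image, zeroing the high-frequency band removes most of the added noise while preserving a smooth gradient. All names below are illustrative:

```python
import numpy as np

def lowpass(im, keep_fraction=0.1):
    # Keep only the low-frequency "corners" of the 2-D spectrum --
    # the same masking strategy applied to the moon photo above.
    F = np.fft.fft2(im)
    r, c = F.shape
    F[int(r * keep_fraction):int(r * (1 - keep_fraction)), :] = 0
    F[:, int(c * keep_fraction):int(c * (1 - keep_fraction))] = 0
    return np.fft.ifft2(F).real

rng = np.random.default_rng(0)
smooth = np.outer(np.linspace(0, 1, 64), np.linspace(0, 1, 64))
noisy = smooth + 0.3 * rng.standard_normal((64, 64))

# Mean absolute error against the clean image, before and after filtering
err_before = np.abs(noisy - smooth).mean()
err_after = np.abs(lowpass(noisy) - smooth).mean()
print(err_before, err_after)
```

White noise spreads its power evenly over all frequencies, so discarding ~96% of the coefficients discards ~96% of the noise power while leaving the (low-frequency) gradient nearly intact.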
intro_python/python_tutorials/jupyter-notebook_intro/exercises/fourier_moon_denoise-solution.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3.9 # language: python # name: python3 # --- # <center> # <img src="https://gitlab.com/ibm/skills-network/courses/placeholder101/-/raw/master/labs/module%201/images/IDSNlogo.png" width="300" alt="cognitiveclass.ai logo" /> # </center> # # # **SpaceX Falcon 9 First Stage Landing Prediction** # # ## Assignment: Exploring and Preparing Data # # Estimated time needed: **70** minutes # # In this assignment, we will predict if the Falcon 9 first stage will land successfully. SpaceX advertises Falcon 9 rocket launches on its website with a cost of 62 million dollars; other providers cost upward of 165 million dollars each. Much of the savings is due to the fact that SpaceX can reuse the first stage. # # In this lab, you will perform Exploratory Data Analysis and Feature Engineering. # # Falcon 9 first stage will land successfully # # ![](https://cf-courses-data.s3.us.cloud-object-storage.appdomain.cloud/IBMDeveloperSkillsNetwork-DS0701EN-SkillsNetwork/api/Images/landing\_1.gif) # # Several examples of an unsuccessful landing are shown here: # # ![](https://cf-courses-data.s3.us.cloud-object-storage.appdomain.cloud/IBMDeveloperSkillsNetwork-DS0701EN-SkillsNetwork/api/Images/crash.gif) # # Most unsuccessful landings are planned. SpaceX performs a controlled landing in the oceans. # # ## Objectives # # Perform exploratory Data Analysis and Feature Engineering using `Pandas` and `Matplotlib` # # * Exploratory Data Analysis # * Preparing Data Feature Engineering # # *** # # ### Import Libraries and Define Auxiliary Functions # # We will import the following libraries for the lab # # Pandas is a software library written for the Python programming language for data manipulation and analysis.
import pandas as pd #NumPy is a library for the Python programming language, adding support for large, multi-dimensional arrays and matrices, along with a large collection of high-level mathematical functions to operate on these arrays import numpy as np # Matplotlib is a plotting library for python and pyplot gives us a MatLab like plotting framework. We will use this in our plotter function to plot data. import matplotlib.pyplot as plt #Seaborn is a Python data visualization library based on matplotlib. It provides a high-level interface for drawing attractive and informative statistical graphics import seaborn as sns # ## Exploratory Data Analysis # # First, let's read the SpaceX dataset into a Pandas dataframe and print its summary # # + df=pd.read_csv("https://cf-courses-data.s3.us.cloud-object-storage.appdomain.cloud/IBM-DS0321EN-SkillsNetwork/datasets/dataset_part_2.csv") # If you were unable to complete the previous lab correctly you can uncomment and load this csv # df = pd.read_csv('https://cf-courses-data.s3.us.cloud-object-storage.appdomain.cloud/IBMDeveloperSkillsNetwork-DS0701EN-SkillsNetwork/api/dataset_part_2.csv') df.head(5) # - # First, let's try to see how the `FlightNumber` (indicating the continuous launch attempts.) and `Payload` variables would affect the launch outcome. # # We can plot out the <code>FlightNumber</code> vs. <code>PayloadMass</code>and overlay the outcome of the launch. We see that as the flight number increases, the first stage is more likely to land successfully. The payload mass is also important; it seems the more massive the payload, the less likely the first stage will return. # sns.catplot(y="PayloadMass", x="FlightNumber", hue="Class", data=df, aspect = 5) plt.xlabel("Flight Number",fontsize=20) plt.ylabel("Pay load Mass (kg)",fontsize=20) plt.show() # We see that different launch sites have different success rates. 
<code>CCAFS LC-40</code> has a success rate of 60%, while <code>KSC LC-39A</code> and <code>VAFB SLC 4E</code> have a success rate of 77%. # # Next, let's drill down to each site and visualize its detailed launch records. # # ### TASK 1: Visualize the relationship between Flight Number and Launch Site # # Use the function <code>catplot</code> to plot <code>FlightNumber</code> vs <code>LaunchSite</code>: set the <code>x</code> parameter to <code>FlightNumber</code>, set the <code>y</code> to <code>Launch Site</code> and set the parameter <code>hue</code> to <code>'class'</code> # # Plot a scatter point chart with x axis to be Flight Number and y axis to be the launch site, and hue to be the class value sns.catplot(y="LaunchSite", x="FlightNumber", hue="Class", data=df, aspect = 5) plt.xlabel("Flight Number",fontsize=20) plt.ylabel("Launch Site",fontsize=20) plt.show() # Now try to explain the patterns you found in the Flight Number vs. Launch Site scatter point plots. # # We can see that the launch sites VAFB and KSC weren't used from the start. The most used was CCAFS. Later tries on CCAFS are more successful. # ### TASK 2: Visualize the relationship between Payload and Launch Site # # We also want to observe if there is any relationship between launch sites and their payload mass. # # Plot a scatter point chart with x axis to be Pay Load Mass (kg) and y axis to be the launch site, and hue to be the class value sns.catplot(y="LaunchSite", x="PayloadMass", hue="Class", data=df, aspect = 5) plt.xlabel("Pay load Mass (kg)",fontsize=20) plt.ylabel("Launch Site",fontsize=20) plt.show() # Now if you observe the Payload vs. Launch Site scatter point chart, you will find that for the VAFB-SLC launch site there are no rockets launched for heavy payload mass (greater than 10000 kg). # # ### TASK 3: Visualize the relationship between the success rate and each orbit type # # Next, we want to visually check if there is any relationship between success rate and orbit type.
# # Let's create a `bar chart` for the success rate of each orbit # # HINT: use the groupby method on the Orbit column and get the mean of the Class column orbit = df.groupby(['Orbit']).mean()['Class'] orbit = orbit.to_frame() orbit['Orbit'] = orbit.index sns.barplot(y="Class", x="Orbit", data=orbit, color = 'Blue') plt.xlabel("Orbit",fontsize=20) plt.ylabel("Average success rate",fontsize=20) plt.show() # Analyze the plotted bar chart and try to find which orbits have a high success rate. # # ### TASK 4: Visualize the relationship between FlightNumber and Orbit type # # For each orbit, we want to see if there is any relationship between FlightNumber and Orbit type. # # Plot a scatter point chart with x axis to be FlightNumber and y axis to be the Orbit, and hue to be the class value sns.catplot(y="Orbit", x="FlightNumber", hue="Class", data=df, aspect = 5) plt.xlabel("Flight Number",fontsize=20) plt.ylabel("Orbit",fontsize=20) plt.show() # You should see that in the LEO orbit, success appears related to the number of flights; on the other hand, there seems to be no relationship with flight number in GTO orbit. # # ### TASK 5: Visualize the relationship between Payload and Orbit type # # Similarly, we can plot the Payload vs. Orbit scatter point charts to reveal the relationship between Payload and Orbit type # # Plot a scatter point chart with x axis to be Payload and y axis to be the Orbit, and hue to be the class value sns.catplot(y="Orbit", x="PayloadMass", hue="Class", data=df, aspect = 5) plt.xlabel("Pay load Mass (kg)",fontsize=20) plt.ylabel("Orbit",fontsize=20) plt.show() # With heavy payloads, the successful (positive) landing rate is higher for the Polar, LEO and ISS orbits. # # However, for GTO we cannot distinguish this well, as both positive and negative (unsuccessful mission) landing outcomes are present.
# # ### TASK 6: Visualize the launch success yearly trend # # You can plot a line chart with x axis to be <code>Year</code> and y axis to be average success rate, to get the average launch success trend. # # The function will help you get the year from the date: # # A function to extract the years from the dates year=[] def Extract_year(): for i in df["Date"]: year.append(i.split("-")[0]) return year Extract_year() df["Year"]=year average_by_year = df.groupby(by="Year").mean() average_by_year.reset_index(inplace=True) # Plot a line chart with x axis to be the extracted year and y axis to be the success rate plt.plot(average_by_year["Year"],average_by_year["Class"]) plt.xlabel("Year") plt.ylabel("Success/Failure") plt.show() # You can observe that the success rate kept increasing from 2013 until 2020 # # ## Feature Engineering # # By now, you should have obtained some preliminary insights about how each important variable affects the success rate; we will select the features that will be used in success prediction in the future module. # features = df[['FlightNumber', 'PayloadMass', 'Orbit', 'LaunchSite', 'Flights', 'GridFins', 'Reused', 'Legs', 'LandingPad', 'Block', 'ReusedCount', 'Serial']] features.head() # ### TASK 7: Create dummy variables for categorical columns # # Use the function <code>get_dummies</code> and the <code>features</code> dataframe to apply OneHotEncoder to the columns <code>Orbit</code>, <code>LaunchSite</code>, <code>LandingPad</code>, and <code>Serial</code>. Assign the value to the variable <code>features_one_hot</code>, display the results using the method head. Your result dataframe must include all features including the encoded ones.
# # HINT: Use get_dummies() function on the categorical columns features_one_hot=pd.get_dummies(features, columns=['Orbit','LaunchSite', 'LandingPad', 'Serial']) features_one_hot # ### TASK 8: Cast all numeric columns to `float64` # # Now that our <code>features_one_hot</code> dataframe only contains numbers cast the entire dataframe to variable type <code>float64</code> # # HINT: use astype function features_one_hot = features_one_hot.astype('float64') # We can now export it to a <b>CSV</b> for the next section,but to make the answers consistent, in the next lab we will provide data in a pre-selected date range. # # <code>features_one_hot.to_csv('dataset_part\_3.csv', index=False)</code> # # ## Authors # # <a href="https://www.linkedin.com/in/joseph-s-50398b136/?utm_medium=Exinfluencer&utm_source=Exinfluencer&utm_content=000026UJ&utm_term=10006555&utm_id=NA-SkillsNetwork-Channel-SkillsNetworkCoursesIBMDS0321ENSkillsNetwork26802033-2021-01-01">Jose<NAME></a> has a PhD in Electrical Engineering, his research focused on using machine learning, signal processing, and computer vision to determine how videos impact human cognition. Joseph has been working for IBM since he completed his PhD. # # <a href="https://www.linkedin.com/in/nayefaboutayoun/?utm_medium=Exinfluencer&utm_source=Exinfluencer&utm_content=000026UJ&utm_term=10006555&utm_id=NA-SkillsNetwork-Channel-SkillsNetworkCoursesIBMDS0321ENSkillsNetwork26802033-2021-01-01"><NAME></a> is a Data Scientist at IBM and pursuing a Master of Management in Artificial intelligence degree at Queen's University. # # ## Change Log # # | Date (YYYY-MM-DD) | Version | Changed By | Change Description | # | ----------------- | ------- | ------------- | ----------------------- | # | 2021-10-12 | 1.1 | Lakshmi Holla | Modified markdown | # | 2020-09-20 | 1.0 | Joseph | Modified Multiple Areas | # | 2020-11-10 | 1.1 | Nayef | updating the input data | # # Copyright © 2020 IBM Corporation. All rights reserved. #
EDA with Data Visualization.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 2 # language: python # name: python2 # --- import tensorflow as tf lib_path = './als_tf.so' als_module = tf.load_op_library(lib_path) # + import numpy as np #input data csrVal = np.fromfile('../data/netflix/R_train_csr.data.bin',dtype=np.float32) csrCol = np.fromfile('../data/netflix/R_train_csr.indices.bin',dtype=np.int32) csrRow = np.fromfile('../data/netflix/R_train_csr.indptr.bin',dtype=np.int32) cscVal = np.fromfile('../data/netflix/R_train_csc.data.bin',dtype=np.float32) cscRow = np.fromfile('../data/netflix/R_train_csc.indices.bin',dtype=np.int32) cscCol = np.fromfile('../data/netflix/R_train_csc.indptr.bin',dtype=np.int32) cooRow = np.fromfile('../data/netflix/R_train_coo.row.bin',dtype=np.int32) cooRowTest = np.fromfile('../data/netflix/R_test_coo.row.bin',dtype=np.int32) cooColTest = np.fromfile('../data/netflix/R_test_coo.col.bin',dtype=np.int32) cooValTest = np.fromfile('../data/netflix/R_test_coo.data.bin',dtype=np.float32) # - m = 17770 n = 480189 f = 100 nnz = 99072112 nnz_test = 1408395 llambda = 0.048 iters =10 xbatch = 1 thetabatch = 3 with tf.device('/cpu:0'): hello = tf.constant('Hello, TensorFlow!') [thetaT,XT, rmse] = als_module.do_als(csrRow, csrCol, csrVal, cscRow, cscCol, cscVal, cooRow, cooRowTest, cooColTest, cooValTest, m, n, f, nnz, nnz_test, llambda, iters,xbatch, thetabatch, 0) sess = tf.Session(config=tf.ConfigProto(log_device_placement=True)) with sess: print sess.run(hello) print sess.run(rmse)
tensorflow/cumf_as_tensorflow_ops_test.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import xarray as xr from glob import glob from os import path import os import numpy as np # # Create list of CCMP filenames dir_pattern = 'F:/data/sat_data/ccmp/v02.0/' dir_pattern_zarr = 'F:/data/sat_data/ccmp/zarr2/' pattern = 'F:/data/sat_data/ccmp/v02.0/*/*/*_V02.0_L3.0_RSS.nc' files = [x for x in glob(pattern)] print(files[0]) print(files[-1]) # %%time #list files files = [x for x in glob(pattern)] #open dataset ds=xr.open_mfdataset(files,combine='nested',concat_dim='time',decode_cf=False,mask_and_scale=False) # if you don't need all the variables, drop extra ones here # ds = ds.drop({'nobs'}) ds.close() #remove any duplicates -- CCMP has both NRT and RT data so some files are duplicates -- # this returns the 1st occurrence; check this is what you want _, index = np.unique(ds['time'], return_index=True) ds = ds.isel(time=index) # # Set Chunk Size Here # + #rechunk data #cci SUGGESTION itime_chunk = 2000 #200 ilat_chunk = 157 #300 ilon_chunk = 180 #600 ds = ds.chunk({'time':itime_chunk,'latitude':ilat_chunk,'longitude':ilon_chunk}) # - # # Subset & write out zarr store - subset only needed for this test run # %%time #test this chunk size by subsetting data and checking both time to write and filesize subset = ds.isel(time=slice(0,itime_chunk),latitude=slice(0,ilat_chunk),longitude=slice(0,ilon_chunk)) #output test zarr data subset.to_zarr(dir_pattern_zarr) # # Check file size - if too small/big adjust cell above & try again #list files and check size for var in subset: varname = var #this just gets last var in file, you can set value if you already know eg.
'uwnd' files = [x for x in glob(dir_pattern_zarr+str(varname)+'/0*')] print('testing file:',files[0]) file_size=os.stat(files[0]).st_size/1e6 print('The Zarr chunk is '+'%.0f' % file_size+' mb') print('you want 100-300 mb per file') # # Okay you have right chunk size set, now write the entire file # # first delete all files in the zarr directory # %%time #only need to run once to write data ds.to_zarr(dir_pattern_zarr) # # Test reading zarr file and making some plots # %%time dir_pattern_zarr = 'F:/data/sat_data/ccmp/zarr2/' ds2= xr.open_zarr(dir_pattern_zarr) ts = ds2.uwnd.sel(latitude=slice(-10,0),longitude=slice(170,180)).mean({'latitude','longitude'}).plot() # %%time dir_pattern_zarr = 'F:/data/sat_data/ccmp/zarr/' ds2= xr.open_zarr(dir_pattern_zarr) ts = ds2.uwnd.sel(latitude=slice(-10,0),longitude=slice(170,180)).mean({'latitude','longitude'}).plot() dir_pattern_zarr = 'F:/data/sat_data/ccmp/zarr2/' ds2= xr.open_zarr(dir_pattern_zarr) ds2 # # create CMC SST zarr data files dir_pattern_zarr = 'F:/data/sst/cmc/zarr/' dir_pattern = 'F:/data/sst/cmc/CMC0.2deg/v2/' pattern = 'F:/data/sst/cmc/CMC0.2deg/v2/data/*/*/*-v02.0-fv02.0.nc' files = [x for x in glob(pattern)] dir_pattern = 'F:/data/sst/cmc/CMC0.1deg/v3/' pattern = 'F:/data/sst/cmc/CMC0.1deg/v3/*/*/*-v02.0-fv03.0.nc' files2 = [x for x in glob(pattern)] # %%time #open dataset ds=xr.open_mfdataset(files,combine='nested',concat_dim='time',decode_cf=False,mask_and_scale=False) ds.close() ds2=xr.open_mfdataset(files2,combine='nested',concat_dim='time',decode_cf=False,mask_and_scale=False) ds2.close() #interpolate the v3 0.1 data onto the older 0.2 deg grid for continuity ds2_interp = ds2.interp(lat=ds.lat,lon=ds.lon) #concat the two datasets together ds_all = xr.concat([ds,ds2_interp],dim='time') #remove any duplicates _, index = np.unique(ds_all['time'], return_index=True) ds_all=ds_all.isel(time=index) #rechunk data #data in int16 = 2 bytes ds_all = ds_all.chunk({'time':1000,'lat':300,'lon':300}) #output data to
zarr format ds_all.to_zarr(dir_pattern_zarr) # # create AVISO zarr data files dir_pattern = 'F:/data/sat_data/aviso/data' dir_pattern_zarr = 'F:/data/sat_data/aviso/zarr/' pattern = 'F:/data/sat_data/aviso/data/*/*.nc' files = [x for x in glob(pattern)] # %%time #open dataset ds=xr.open_mfdataset(files,combine='nested',concat_dim='time',decode_cf=False,mask_and_scale=False) ds.close() #remove any duplicates _, index = np.unique(ds['time'], return_index=True) ds=ds.isel(time=index) #rechunck data #data in int16 = 2 bytes ds = ds.chunk({'time':1000,'latitude':180,'longitude':180}) # # test on opening zarr time # %%time ds_zarr = xr.open_zarr(dir_pattern_zarr) # # test time to make area average time series using netCDF and zarr files # %%time ts = ds.sla.sel(latitude=slice(-10,0),longitude=slice(170,180)).mean({'latitude','longitude'}).plot() # %%time ts = ds_zarr.sla.sel(latitude=slice(-10,0),longitude=slice(170,180)).mean({'latitude','longitude'}).plot() #output data to zarr format ds.to_zarr(dir_pattern_zarr)
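The 100–300 MB-per-chunk target used above can be estimated arithmetically before writing anything. A hedged sketch (it assumes the CCMP variables are stored as float32; the CMC SST cell notes its data is int16, i.e. 2 bytes per value, and compression will shrink the on-disk size further):

```python
import numpy as np

def chunk_size_mb(shape, dtype):
    """Uncompressed size of one chunk in megabytes."""
    return int(np.prod(shape)) * np.dtype(dtype).itemsize / 1e6

# CCMP chunking from the cells above: (time, lat, lon) = (2000, 157, 180)
print(chunk_size_mb((2000, 157, 180), np.float32))  # 226.08
# CMC SST chunking, int16 on disk: (1000, 300, 300)
print(chunk_size_mb((1000, 300, 300), np.int16))    # 180.0
```

Both land inside the 100–300 MB window, which is why the test-subset write was accepted.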
Make_CCMP_zarr.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + [markdown] Collapsed="false" toc-hr-collapsed=false # # Pipy pipeline demo # ## An example workflow # + [markdown] Collapsed="false" # #### Introduction # + [markdown] Collapsed="false" # `pipy` is a library that allows developers to build data pipelines in `sk-learn` or others and turn them into interactive pipelines using `ipywidgets`. # + [markdown] Collapsed="false" # #### Goal # + [markdown] Collapsed="false" # The ultimate goal of `pipy` is to create an environment where Python Engineers, Data Scientists, and end users throughout businesses can work together seamlessly and efficiently. # # *Note:* this is a demo and it shows work in progress # + [markdown] Collapsed="false" # ### 1) Importing the library # + Collapsed="false" import pipy # + [markdown] Collapsed="false" toc-hr-collapsed=true # ### 2) Extracting data from our Postgres database # + Collapsed="false" csv = pipy.pipeline.extract.CSV({'path': './dummy_data.csv'}) # + [markdown] Collapsed="false" # So, we used code to set up the `CSV` stage and can use the mouse and keyboard to make changes to it.
# + [markdown] Collapsed="false" # #### 2.1) Running the Postgres extract # + [markdown] Collapsed="false" # We can _run_ each stage by calling `transform` (or, for `Extract` stages we can use `extract` as well): # + Collapsed="false" df = csv.extract() # or use `csv.transform()` df.head() # + [markdown] Collapsed="false" toc-hr-collapsed=true # ### 3) Creating a pipeline # + [markdown] Collapsed="false" # Let's create some more stages so we can build a pipeline: # + Collapsed="false" weekday = pipy.pipeline.transform.DayOfWeek(columns={'in': 'date'}) # + [markdown] Collapsed="false" # #### 3.1) Using `sklearn` for Machine Learning models # + [markdown] Collapsed="false" # `sklearn` is the most popular way by Data Scientists to build Machine Learning models. (see: https://github.blog/2019-01-24-the-state-of-the-octoverse-machine-learning/) # # `pipy` has a `SkLearnModelWrapper` so we can use all `sklearn` models within our own pipeline, meaning that also `sklearn` models are interactive. # + Collapsed="false" from sklearn.linear_model import LinearRegression # + Collapsed="false" ols = pipy.pipeline.model.SkLearnModelWrapper( columns={ 'target': 'sales', 'features': ['date|DayOfWeek'], }, params={ 'sklearn_model': LinearRegression(fit_intercept=True, normalize=False), } ) # + [markdown] Collapsed="false" # #### 3.3) Combining the stages to create a pipeline # + Collapsed="false" pipe = pipy.pipeline.Pipeline({'steps': [csv, weekday, ols]}) # + [markdown] Collapsed="false" # Again, we can simply print the pipeline to get interactive controls to make adjustment to the pipeline: # + Collapsed="false" pipe # + [markdown] Collapsed="false" # ### 4) Running the pipeline to extract, transform, and load the data to Tableau # + Collapsed="false" df = pipe.run() # + Collapsed="false" df.head() # -
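For readers without access to `pipy` (the project's own library), roughly the same flow — extract, derive a day-of-week feature, fit OLS — can be sketched with a plain `sklearn` pipeline. The data and column names below are invented stand-ins for `dummy_data.csv`:

```python
import pandas as pd
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import FunctionTransformer

# Invented stand-in for dummy_data.csv: two weeks of daily sales
df = pd.DataFrame({
    "date": pd.date_range("2021-01-04", periods=14, freq="D"),
    "sales": [10.0, 12, 11, 13, 20, 25, 9] * 2,
})

# DayOfWeek stage: derive the single feature column from 'date'
weekday = FunctionTransformer(
    lambda d: d.assign(day_of_week=d["date"].dt.dayofweek)[["day_of_week"]]
)

pipe = Pipeline([("weekday", weekday), ("ols", LinearRegression())])
pipe.fit(df, df["sales"])
print(pipe.predict(df.head(3)))
```

The difference is interactivity: `pipy` stages render `ipywidgets` controls when displayed, while the `sklearn` version is configured purely in code.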
notebooks/pipeline.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Exercise 15 # # # Fraud Detection # ## Introduction # # - Fraud Detection Dataset from Microsoft Azure: [data](http://gallery.cortanaintelligence.com/Experiment/8e9fe4e03b8b4c65b9ca947c72b8e463) # # Fraud detection is one of the earliest industrial applications of data mining and machine learning. Fraud detection is typically handled as a binary classification problem, but the class population is unbalanced because instances of fraud are usually very rare compared to the overall volume of transactions. Moreover, when fraudulent transactions are discovered, the business typically takes measures to block the accounts from transacting to prevent further losses. # %matplotlib inline import pandas as pd import numpy as np from sklearn.model_selection import cross_val_score from sklearn.linear_model import LogisticRegression from sklearn.tree import DecisionTreeClassifier from sklearn import metrics # + import pandas as pd url = 'https://raw.githubusercontent.com/albahnsen/PracticalMachineLearningClass/master/datasets/15_fraud_detection.csv.zip' df = pd.read_csv(url, index_col=0) df.head() # - df.head() df.shape, df.Label.sum(), df.Label.mean() # # Exercise 15.1 # # Estimate a Logistic Regression and a Decision Tree # # Evaluate using the following metrics: # * Accuracy # * F1-Score # * F_Beta-Score (Beta=10) # # Comment on the results # # Exercise 15.2 # # Under-sample the negative class using random-under-sampling # # Which value of the target_percentage parameter did you choose? # How do the results change?
# # **Only apply under-sampling to the training set, evaluate using the whole test set** # # Exercise 15.3 # # Same analysis using random-over-sampling # # Exercise 15.4 (3 points) # # Evaluate the results using SMOTE # # Which parameters did you choose? # # Exercise 15.5 (3 points) # # Evaluate the results using Adaptive Synthetic Sampling Approach for Imbalanced # Learning (ADASYN) # # http://www.ele.uri.edu/faculty/he/PDFfiles/adasyn.pdf # https://imbalanced-learn.readthedocs.io/en/stable/generated/imblearn.over_sampling.ADASYN.html#rf9172e970ca5-1 # # Exercise 15.6 (3 points) # # Compare and comment about the results
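A minimal sketch of Exercise 15.2's random under-sampling, run on synthetic stand-in data (the real dataset loads from the URL above). Here `target_percentage` is the positive-class share after sampling, and 0.5 is just one possible choice; note the sampling touches only the training split:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score, fbeta_score

rng = np.random.default_rng(0)

# Synthetic imbalanced stand-in for the fraud data (~4% positives)
X = rng.normal(size=(5000, 4))
y = (X[:, 0] + rng.normal(scale=2.0, size=5000) > 4.0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

def random_under_sample(X, y, target_percentage=0.5, seed=0):
    """Drop negatives until positives make up target_percentage of the set."""
    rng = np.random.default_rng(seed)
    pos = np.flatnonzero(y == 1)
    neg = np.flatnonzero(y == 0)
    n_neg = int(len(pos) * (1 - target_percentage) / target_percentage)
    keep = np.concatenate([pos, rng.choice(neg, size=n_neg, replace=False)])
    return X[keep], y[keep]

# Under-sample ONLY the training set; evaluate on the untouched test set
X_rus, y_rus = random_under_sample(X_tr, y_tr, target_percentage=0.5)
clf = LogisticRegression().fit(X_rus, y_rus)
pred = clf.predict(X_te)
print("F1:", f1_score(y_te, pred))
print("F-beta (beta=10):", fbeta_score(y_te, pred, beta=10))
```

The later exercises swap the hand-rolled sampler for `imblearn`'s over-sampling, SMOTE, and ADASYN implementations, with evaluation on the same untouched test set.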
exercises/E15-fraud_detection.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # Copyright 2019 Google LLC # # Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. # !git clone https://github.com/google-research/google-research.git import sys import os import tarfile import urllib sys.path.append('./google-research') os.environ["CUDA_VISIBLE_DEVICES"] = "-1" # no need to use gpu import tensorflow as tf import tensorflow.compat.v1 as tf1 import logging from kws_streaming.models import models from kws_streaming.train import model_flags from kws_streaming.layers.modes import Modes from kws_streaming.train import test from kws_streaming.models import utils from kws_streaming.data import input_data from kws_streaming.models import model_params tf1.__version__ config = tf1.ConfigProto() config.gpu_options.allow_growth=True sess = tf1.Session(config=config) tf1.disable_eager_execution() DATA_VERSION = 1 if DATA_VERSION == 2: DATA_URL = "https://storage.googleapis.com/download.tensorflow.org/data/speech_commands_v0.02.tar.gz" DATA_PATH = "/home/tmp/kws_streaming/data22/" elif DATA_VERSION == 1: DATA_URL = "http://download.tensorflow.org/data/speech_commands_v0.01.tar.gz" DATA_PATH = "/home/tmp/kws_streaming/data11/" else: assert(False) base_name = os.path.basename(DATA_URL) base_name arch_file_name = os.path.join(DATA_PATH, base_name) if not 
os.path.isfile(arch_file_name): # download data if sys.version_info >= (2, 5): file_path = urllib.request.urlretrieve(DATA_URL, filename=arch_file_name)[0] else: file_path = urllib.urlretrieve(DATA_URL, filename=arch_file_name)[0] # unpack it file_name, file_extension = os.path.splitext(base_name) tar = tarfile.open(file_path) tar.extractall(DATA_PATH) tar.close() # default parameters for data splitting flags = model_params.Params() flags.data_dir = DATA_PATH flags.data_url = DATA_URL flags = model_flags.update_flags(flags) audio_processor = input_data.AudioProcessor(flags) testing_set_size = audio_processor.set_size('testing') print("testing_set_size " + str(testing_set_size)) training_set_size = audio_processor.set_size('training') print("training_set_size " + str(training_set_size)) validation_set_size = audio_processor.set_size('validation') print("validation_set_size " + str(validation_set_size)) # + # V2 # testing_set_size 4890 # training_set_size 36923 # validation_set_size 4445 # V1 # testing_set_size 3081 # training_set_size 22246 # validation_set_size 3093 # - # all words used for modeling: we have target words + unknown words (with index 1) audio_processor.word_to_index # find the start of the file name where label begins string = audio_processor.data_index["validation"][0]['file'] res = [i for i in range(len(string)) if string.startswith('/', i)] start_file = res[-2]+1 start_file audio_processor.data_index["validation"][0]['file'][start_file:] # + index_to_label = {} unknown_category = [] # labels used for training for word in audio_processor.word_to_index.keys(): if audio_processor.word_to_index[word] == input_data.SILENCE_INDEX: index_to_label[audio_processor.word_to_index[word]] = input_data.SILENCE_LABEL elif audio_processor.word_to_index[word] == input_data.UNKNOWN_WORD_INDEX: index_to_label[audio_processor.word_to_index[word]] = input_data.UNKNOWN_WORD_LABEL unknown_category.append(word) else: index_to_label[audio_processor.word_to_index[word]] =
word # training labels index_to_label # - # words belonging to the unknown category unknown_category def get_distribution(mode): distrib_label = {} distrib_words = {} files = {} for data in audio_processor.data_index[mode]: word = data['label'] file = data['file'][start_file:] index = audio_processor.word_to_index[word] label = index_to_label[index] if word in files: files[word].append(file) else: files[word] = [file] if label in distrib_label: distrib_label[label] = distrib_label[label] + 1 else: distrib_label[label] = 1 # count the first occurrence too if word in distrib_words: distrib_words[word] = distrib_words[word] + 1 else: distrib_words[word] = 1 return distrib_words, distrib_label, files # distribution of labels in testing data distrib_words_testing, distrib_labels_testing, files_testing = get_distribution('testing') distrib_labels_testing # distribution of labels in training data distrib_words_training, distrib_labels_training, files_training = get_distribution('training') distrib_labels_training def parse_files(set_list_fname, label='yes'): set_files = [] with open(set_list_fname) as f: while True: line = f.readline() if not line: break if line.split('/')[0]==label: set_files.append(line[:-1]) return set_files def validate(my_list1, list2, print_in_list2=False): cnt_my_val2 = 0 cnt_my_val1 = 0 for my_val in my_list1: if my_val in list2: cnt_my_val2 = cnt_my_val2 + 1 if print_in_list2: print(my_val) else: cnt_my_val1 = cnt_my_val1 + 1 if not print_in_list2: print(my_val) return cnt_my_val1, cnt_my_val2 # + file_list = os.path.join(DATA_PATH, "testing_list.txt") # validate that all wav files used during testing belong to testing_list for word in files_testing.keys(): if word != '_silence_': word_files = parse_files(file_list, label=word) _, cnt_val = validate(files_testing[word], word_files, False) assert(cnt_val-len(files_testing[word])==0) # + distrib_words_training, distrib_labels_training, files_training = get_distribution('training') # validate that all wav files used during testing do not belong to training data for word in files_testing.keys(): if word != '_silence_': # the silence file does not matter because it is multiplied by zero word_files = files_testing[word] _, cnt_val = validate(files_training[word], word_files, True) assert(cnt_val==0) # -
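The leakage check above boils down to per-label set disjointness between the splits; a minimal, self-contained sketch of the same idea (the file lists below are hypothetical, standing in for the audio_processor index):

```python
# Minimal sketch of the train/test leakage check above: for every label,
# no file from the testing split may also appear in the training split.
def assert_disjoint(files_testing, files_training):
    for label, test_files in files_testing.items():
        overlap = set(test_files) & set(files_training.get(label, []))
        assert not overlap, f"leaked files for {label!r}: {sorted(overlap)}"

# hypothetical per-label file lists, mimicking the structure built above
testing = {"yes": ["yes/a1.wav", "yes/a2.wav"]}
training = {"yes": ["yes/b1.wav", "yes/b2.wav"], "no": ["no/c1.wav"]}
assert_disjoint(testing, training)  # passes: the splits share no files
```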
kws_streaming/colab/check-data.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + id="FSD6mGVTC4Hv" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 51} outputId="10b53002-8206-449b-e016-9d48b7ba8dea" # !ls # %cd cvnd-exercises/1_2_Convolutional_Filters_Edge_Detection/ # + [markdown] id="pPqe76p0C1g_" colab_type="text" # ## Hough Circle Detection # + id="tHqyaeD1C1hB" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 17} outputId="9a4d4889-deaa-43cc-b553-fbe92edb797a" import numpy as np import matplotlib.pyplot as plt import cv2 # %matplotlib inline # + id="Qn5vXtvsC1hH" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 286} outputId="8bd40911-ac06-4dfd-cff6-cdb19231f1d0" # Read in the image image = cv2.imread('images/round_farms.jpg') # Change color to RGB (from BGR) image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB) plt.imshow(image) # + id="Dbl1Bf9tC1hK" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 286} outputId="6c4728db-ede4-4314-a3d3-79c80ad8044e" # Gray and blur gray = cv2.cvtColor(image,cv2.COLOR_BGR2GRAY) gray_blur = cv2.GaussianBlur(gray, (3, 3), 0) plt.imshow(gray_blur, cmap='gray') # + [markdown] id="dGQ1YRpwC1hN" colab_type="text" # ### HoughCircles function # # `HoughCircles` takes in a few things as its arguments: # * an input image, detection method (Hough gradient), resolution factor between the detection and image (1), # * minDist - the minimum distance between circles # * param1 - the higher value for performing Canny edge detection # * param2 - threshold for circle detection, a smaller value --> more circles will be detected # * min/max radius for detected circles # # The variable you should change will be the last two: min/max radius for detected circles. 
Take a look at the image above and estimate how many pixels the average circle is in diameter; use this estimate to provide values for min/max arguments. You may also want to see what happens if you change minDist. # + id="05EVVPA8C1hO" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 286} outputId="6a3b64e7-afe2-457e-f86e-9ec86d007ebb" # for drawing circles on circles_im = np.copy(image) ## TODO: use HoughCircles to detect circles # right now there are too many, large circles being detected # try changing the value of maxRadius, minRadius, and minDist circles = cv2.HoughCircles(gray_blur, cv2.HOUGH_GRADIENT, 1, minDist=45, param1=70, param2=11, minRadius=20, maxRadius=40) # convert circles into expected type circles = np.uint16(np.around(circles)) # draw each one for i in circles[0,:]: # draw the outer circle cv2.circle(circles_im,(i[0],i[1]),i[2],(0,255,0),2) # draw the center of the circle cv2.circle(circles_im,(i[0],i[1]),2,(0,0,255),3) plt.imshow(circles_im) print('Circles shape: ', circles.shape)
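To make the radius and threshold parameters above less mysterious, here is a toy, numpy-only version of the underlying transform: a Hough accumulator for a single known radius. This is not OpenCV's optimized gradient method, just the voting idea — each edge point votes for every center that could have produced it.

```python
import numpy as np

def hough_circle_single_radius(edge_points, shape, radius, n_angles=360):
    # Accumulator over candidate centers: each edge point votes along the
    # circle of the given radius centered on itself.
    acc = np.zeros(shape, dtype=int)
    thetas = np.linspace(0, 2 * np.pi, n_angles, endpoint=False)
    for (y, x) in edge_points:
        cy = np.round(y - radius * np.sin(thetas)).astype(int)
        cx = np.round(x - radius * np.cos(thetas)).astype(int)
        ok = (cy >= 0) & (cy < shape[0]) & (cx >= 0) & (cx < shape[1])
        np.add.at(acc, (cy[ok], cx[ok]), 1)
    return acc

# synthetic edge map: points on a circle of radius 10 centered at (30, 40)
ts = np.linspace(0, 2 * np.pi, 80, endpoint=False)
pts = [(30 + 10 * np.sin(t), 40 + 10 * np.cos(t)) for t in ts]
acc = hough_circle_single_radius(pts, (64, 64), radius=10)
cy, cx = np.unravel_index(acc.argmax(), acc.shape)
print(cy, cx)  # the accumulator peaks at (or next to) the true center (30, 40)
```

The `param2` threshold in `HoughCircles` plays the role of a minimum vote count in this accumulator, which is why lowering it yields more (and noisier) detections.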
1_2_Convolutional_Filters_Edge_Detection/6_2. Hough circles, agriculture.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Mini-project NLP Question Answering (Chatbot) # + import numpy as np import nltk nltk.download('punkt') from nltk.stem.porter import PorterStemmer stemmer = PorterStemmer() def tokenize(sentence): """ split sentence into array of words/tokens a token can be a word or punctuation character, or number """ return nltk.word_tokenize(sentence) def stem(word): """ stemming = find the root form of the word examples: words = ["organize", "organizes", "organizing"] words = [stem(w) for w in words] -> ["organ", "organ", "organ"] """ return stemmer.stem(word.lower()) def bag_of_words(tokenized_sentence, words): """ return bag of words array: 1 for each known word that exists in the sentence, 0 otherwise example: sentence = ["hello", "how", "are", "you"] words = ["hi", "hello", "I", "you", "bye", "thank", "cool"] bag = [ 0 , 1 , 0 , 1 , 0 , 0 , 0] """ # stem each word sentence_words = [stem(word) for word in tokenized_sentence] # initialize bag with 0 for each word bag = np.zeros(len(words), dtype=np.float32) for idx, w in enumerate(words): if w in sentence_words: bag[idx] = 1 return bag # + import numpy as np import random import json import torch import torch.nn as nn from torch.utils.data import Dataset, DataLoader with open('intents.json', 'r') as f: intents = json.load(f) all_words = [] tags = [] xy = [] # loop through each sentence in our intents patterns for intent in intents['intents']: tag = intent['tag'] # add to tag list tags.append(tag) for pattern in intent['patterns']: # tokenize each word in the sentence w = tokenize(pattern) # add to our words list all_words.extend(w) # add to xy pair xy.append((w, tag)) # stem and lower each word ignore_words = ['?', '.', '!'] all_words = [stem(w) for w in all_words if w not in ignore_words] # remove
duplicates and sort all_words = sorted(set(all_words)) tags = sorted(set(tags)) print(len(xy), "patterns") print(len(tags), "tags:", tags) print(len(all_words), "unique stemmed words:", all_words) # create training data X_train = [] y_train = [] for (pattern_sentence, tag) in xy: # X: bag of words for each pattern_sentence bag = bag_of_words(pattern_sentence, all_words) X_train.append(bag) # y: PyTorch CrossEntropyLoss needs only class labels, not one-hot label = tags.index(tag) y_train.append(label) X_train = np.array(X_train) y_train = np.array(y_train) # Hyper-parameters num_epochs = 1000 batch_size = 8 learning_rate = 0.001 input_size = len(X_train[0]) hidden_size = 8 output_size = len(tags) print(input_size, output_size) class ChatDataset(Dataset): def __init__(self): self.n_samples = len(X_train) self.x_data = X_train self.y_data = y_train # support indexing such that dataset[i] can be used to get i-th sample def __getitem__(self, index): return self.x_data[index], self.y_data[index] # we can call len(dataset) to return the size def __len__(self): return self.n_samples dataset = ChatDataset() train_loader = DataLoader(dataset=dataset, batch_size=batch_size, shuffle=True, num_workers=0) device = torch.device('cuda' if torch.cuda.is_available() else 'cpu') model = NeuralNet(input_size, hidden_size, output_size).to(device) # Loss and optimizer criterion = nn.CrossEntropyLoss() optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate) # Train the model for epoch in range(num_epochs): for (words, labels) in train_loader: words = words.to(device) labels = labels.to(dtype=torch.long).to(device) # Forward pass outputs = model(words) # if y would be one-hot, we must apply # labels = torch.max(labels, 1)[1] loss = criterion(outputs, labels) # Backward and optimize optimizer.zero_grad() loss.backward() optimizer.step() if (epoch+1) % 100 == 0: print (f'Epoch [{epoch+1}/{num_epochs}], Loss: {loss.item():.4f}') print(f'final loss: {loss.item():.4f}') data = { 
"model_state": model.state_dict(), "input_size": input_size, "hidden_size": hidden_size, "output_size": output_size, "all_words": all_words, "tags": tags } FILE = "data.pth" torch.save(data, FILE) print(f'training complete. file saved to {FILE}') # + import torch import torch.nn as nn class NeuralNet(nn.Module): def __init__(self, input_size, hidden_size, num_classes): super(NeuralNet, self).__init__() self.l1 = nn.Linear(input_size, hidden_size) self.l2 = nn.Linear(hidden_size, hidden_size) self.l3 = nn.Linear(hidden_size, num_classes) self.relu = nn.ReLU() def forward(self, x): out = self.l1(x) out = self.relu(out) out = self.l2(out) out = self.relu(out) out = self.l3(out) # no activation and no softmax at the end return out # + import random import json import os import torch device = torch.device('cuda' if torch.cuda.is_available() else 'cpu') with open('intents.json', 'r') as json_data: intents = json.load(json_data) FILE = "data.pth" data = torch.load(FILE) input_size = data["input_size"] hidden_size = data["hidden_size"] output_size = data["output_size"] all_words = data['all_words'] tags = data['tags'] model_state = data["model_state"] model = NeuralNet(input_size, hidden_size, output_size).to(device) model.load_state_dict(model_state) model.eval() bot_name = "mahmoudi" print("Let's chat! (type 'quit' to exit)") nom = input("what's ur name ? ") while True: # sentence = "do you use credit cards?" sentence = input(nom +":") if sentence == "quit": break sentence = tokenize(sentence) X = bag_of_words(sentence, all_words) X = X.reshape(1, X.shape[0]) X = torch.from_numpy(X).to(device) output = model(X) _, predicted = torch.max(output, dim=1) tag = tags[predicted.item()] probs = torch.softmax(output, dim=1) prob = probs[0][predicted.item()] if prob.item() > 0.75: for intent in intents['intents']: if tag == intent["tag"]: print(f"{bot_name}: {random.choice(intent['responses'])}") else: print(f"{bot_name}: I do not understand...") # -
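The `bag_of_words` helper above is easy to sanity-check against its own docstring example; this standalone sketch swaps the Porter stemmer for plain lowercasing so nltk is not required:

```python
import numpy as np

def bag_of_words(tokenized_sentence, words, stem=str.lower):
    # 1.0 where a vocabulary word occurs in the (stemmed) sentence, else 0.0;
    # stem defaults to lowercasing here instead of the Porter stemmer.
    sentence_words = [stem(w) for w in tokenized_sentence]
    bag = np.zeros(len(words), dtype=np.float32)
    for idx, w in enumerate(words):
        if w in sentence_words:
            bag[idx] = 1.0
    return bag

sentence = ["hello", "how", "are", "you"]
words = ["hi", "hello", "I", "you", "bye", "thank", "cool"]
print(bag_of_words(sentence, words))  # [0. 1. 0. 1. 0. 0. 0.]
```

This vector is exactly what the trained `NeuralNet` receives as input, which is why `input_size` equals `len(all_words)`.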
Question_answering.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # !pip install --quiet climetlab # # Retrieve ERA5 data from the CDS import climetlab as cml # + tags=[] source = cml.load_source("cds", "reanalysis-era5-single-levels", variable=["2t", "msl"], product_type= "reanalysis", area=[50, -50, 20, 50], date= "2012-12-12", time= "12:00") for s in source: cml.plot_map(s) # + tags=[] source = cml.load_source("cds", "reanalysis-era5-single-levels", variable=["2t", "msl"], product_type= "reanalysis", area=[50, -50, 20, 50], date= "2012-12-12", time= "12:00", format="netcdf") for s in source: cml.plot_map(s) # - source.to_xarray() cml.plot_map(source.to_xarray(), title=True)
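The `area=[50, -50, 20, 50]` argument follows the CDS `[North, West, South, East]` convention. A numpy-only sketch of what that sub-setting means on a hypothetical regular 1-degree grid (not the actual CDS output):

```python
import numpy as np

# hypothetical global 1-degree grid, just to illustrate the sub-setting
lats = np.arange(90, -91, -1)      # 90 .. -90, north-to-south ordering
lons = np.arange(-180, 180)        # -180 .. 179
field = np.random.rand(lats.size, lons.size)

north, west, south, east = 50, -50, 20, 50   # same box as the request above
lat_mask = (lats <= north) & (lats >= south)
lon_mask = (lons >= west) & (lons <= east)
sub = field[np.ix_(lat_mask, lon_mask)]
print(sub.shape)  # (31, 101): 50..20 inclusive, -50..50 inclusive
```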
docs/examples/03-source-cds.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # stack # + class stack: def __init__(self, list1, n): self.list1 = list1 self.limit = n def push(self, element): if self.limit == len(list1): print("stack is full") else: self.list1.append(element) print("ele push in stack ") def pop(self): if not list1: print("empty stack") else: print("ele pop from stack: {}".format(self.list1.pop())) def display(self): print("stack element : ") for i in range(len(self.list1)): print(self.list1[i],end=" ") print() list1=[] n = int(input("enter limit to stack : ")) s = stack(list1, n) while True: print("Select the operation 1.push 2.pop 3. display 4.quit") ch = int(input("Enter choice : ")) if ch == 1: print("Enter element :") ele = int(input()) s.push(ele) if ch == 2: s.pop() if ch == 3: s.display() if ch == 4: print("done") break # - from collections import deque stack = deque() stack stack.append(1) stack stack.pop() not stack stack[-2] import queue stack = queue.LifoQueue(2) stack.put(10) stack.put(20) stack.put(30,timeout = 1) stack.get() stack.get() stack.get(timeout = 1) # # queue list1=[] list1.append(1) list1.append(2) list1.append(3) list1 list1.pop(0) print(list1) list1.pop(0) list2=[] list2.insert(0,1) list2.insert(0,2) list2 list2.pop() list2.insert(0,3) list2 list2[-1] list2[1] # # S linked list class node: def __init__(self, data): self.data = data self.ref = None class sll_add: def __init__(self): self.head = None def add(self,data): if self.head == None: self.head = node(data) else: temp = self.head while temp.ref != None: temp = temp.ref temp.ref = node(data) def del_b(self): temp = self.head self.head = temp.ref def add_b(self, data): n = node(data) n.ref = self.head self.head = n def add_e(self,data): if self.head == None: self.head = node(data) else: temp = self.head while temp.ref != None: 
temp = temp.ref temp.ref = node(data) def del_e(self): if self.head == None: print("ll is empty") else: temp = self.head while temp.ref != None: pre = temp temp = temp.ref pre.ref = None def add_ap(self,data,pos): temp = self.head for i in range(1,pos): temp = temp.ref n = node(data) n.ref = temp.ref temp.ref = n def del_ap(self,pos): temp = self.head pre =temp nxt = temp for i in range(1,pos-1): temp = temp.ref temp.ref = temp.ref.ref def add_bp(self,data,pos): temp = self.head for i in range(1,pos-1): temp = temp.ref n = node(data) n.ref = temp.ref temp.ref = n def del_bp(self,pos): temp = self.head for i in range(1,pos): temp = temp.ref temp.ref = temp.ref.ref def print_ll(self): if self.head == None: print("Linked list is empty") else: n = self.head print("linked list element : ") while n != None: print(n.data) n= n.ref ll = sll_add() ll.add(2) ll.add(4) ll.add(3) #ll.print_ll() ll.add_b(1) #ll.print_ll() ll.add_e(5) #ll.print_ll() ll.add_ap(6,2) #ll.print_ll() ll.add_bp(7,4) #ll.print_ll() ll.del_b() #ll.print_ll() ll.del_e() #ll.print_ll() ll.del_ap(2) ll.print_ll() ll.del_bp(2) ll.print_ll() # # D ll class node: def __init__(self,data): self.data = data self.ref = None self.pre = None class dll: def __init__(self): self.head = None def create_dll(self, data): n = node(data) if self.head == None: self.head = n else: temp = self.head while temp.ref != None: temp = temp.ref temp.ref = n n.pre = temp def add_b(self, data): n = node(data) if self.head == None: self.head = n else: n.ref = self.head self.head.pre = n self.head = n def add_e(self,data): n = node(data) if self.head == None: self.head = n else: temp = self.head while temp.ref != None: temp = temp.ref temp.ref = n n.pre = temp def add_p(self, data, pos): n = node(data) temp = self.head for i in range(0,pos): a = temp temp = temp.ref n.ref = temp temp.pre = n n.pre = a a.ref = n def print_ll(self): if self.head == None: print("Dll is Empty") else: temp = self.head print("ll elements: ") while temp != 
None: print(temp.data) temp = temp.ref d = dll() d.create_dll(3) d.create_dll(5) d.create_dll(7) #d.print_ll() d.add_b(1) #d.print_ll() d.add_e(8) d.print_ll() d.add_p(2, 1) d.add_p(4, 3) d.print_ll()
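The list-based queue above dequeues with `list.pop(0)`, which is O(n) because every remaining element shifts left; `collections.deque` (already used for the stack earlier) gives O(1) operations at both ends, so it is the usual choice for a FIFO queue as well. A minimal sketch:

```python
from collections import deque

q = deque()
q.append(1)          # enqueue at the right end
q.append(2)
q.append(3)
print(q.popleft())   # dequeue from the left end -> 1 (FIFO order)
print(q.popleft())   # -> 2
print(list(q))       # [3]
```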
data structure.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # ### HITO: On the use of Matched filter and Hypothesis testing # As explained in the report and other notebooks, a matched filter is a linear filter whose purpose is to maximize the signal-to-noise ratio of observed data $x(t)$ by finding an optimal impulse response filter $h(-t)$, which we call the Template Waveform. The full formulation of this matched filter is given in the report. Basically we calculate: # # $$ SNR_{t_0} = \frac{(x|h)_{t_0}}{\sqrt{(h|h)_{t_0=0}}} = \frac{\int_{-\infty}^{\infty} \frac{\tilde{x}(f)\tilde{h}^{*}(f)}{S_n(f)} e^{2\pi i f t_0} df}{\sqrt{\int_{-\infty}^{\infty} \frac{|\tilde{h}(f)|^{2}}{S_n(f)} df}}$$ # # where the tilde denotes the Fourier transform of the variable, $S_n(f)$ is the PSD of the detector noise, and $t_0$ is the time-shift at which the template best matches the observed data. # # We also implement a hypothesis test in order to know when a detection is significant under a given threshold; the full formulation is also in the report. The main idea is that, for a given threshold $\eta$, we use the likelihood ratio $\Lambda$ to decide which hypothesis is more likely: # # $$ln[\Lambda\lbrace(x|h)\rbrace] = \lbrace (x|h) - \frac{1}{2}(h|h) \rbrace \begin{array}{c}H_1 \\ > \\ < \\ H_0 \end{array} ln\left[ \frac{p_0}{p_1} \right] = ln[\eta]$$ # # which is written more clearly as: # # $$ (x|h)\begin{array}{c}H_1 \\ > \\ < \\ H_0 \end{array} ln[\eta] + \frac{1}{2}(h|h) = \eta'$$ # # Here the threshold $\eta'$ depends on the a priori probabilities $P(H_0)$ and $P(H_1)$ and on the value of the optimal linear filter $(h|h)_0$.
Then we just need to know whether our detected linear filter $(x|h)$ satisfies the threshold cut or not, which is written in terms of SNR as follows: # # # $$ SNR \begin{array}{c}H_1 \\ > \\ < \\ H_0 \end{array} \frac{ln[\eta]}{\sqrt{(h|h)_0}} + \frac{1}{2}\sqrt{(h|h)_0} = \eta'$$ # # ## Experiments # Here we are going to show how the implementation of the Matched Filter works, using signal decomposition by linear regressors to do the Fourier Transform. For that we are going to create some simulated data composed of sinusoidal signals and Gaussian noise, then apply the matched filter and check the results with the hypothesis test. # # + # imports import numpy as np import scipy as sp import matplotlib.pyplot as plt from astropy.stats import LombScargle # in order to use custom modules in parent path import os import sys nb_dir = os.path.split(os.getcwd())[0] if nb_dir not in sys.path: sys.path.append(nb_dir) from mfilter.implementations.simulate import SimulateSignal from mfilter.regressions import * from mfilter.types import FrequencySamples, TimeSeries, FrequencySeries, TimesSamples from mfilter.filter import * #from mfilter.hypothesistest.probabilities import HypothesisTesting # %matplotlib inline plt.style.use('seaborn') # - # #### Simulating data # + def gen_data(n_samples, underlying_delta=50, min_freq=None): min_freq = 1 / (n_samples * underlying_delta) if min_freq is None else min_freq freq = [min_freq, min_freq * 2, min_freq * 3] weights=[1, 0.3, 0.3] config="slight" noise_level=0.5 simulated = SimulateSignal(n_samples, freq, weights=weights, noise_level=noise_level, dwindow="tukey", underlying_delta=underlying_delta) # get the times times = simulated.get_times(configuration=config) # next generate 2 templates of different number of peaks and position of start pos_start_t1 = 0 pos_start_t2 = 0 temp1 = simulated.get_data(pos_start_peaks=pos_start_t1, n_peaks=1, with_noise=False, configuration=config) temp1 = abs(temp1) temp2 =
simulated.get_data(pos_start_peaks=pos_start_t2, n_peaks=0.5, with_noise=False, configuration=config) temp2 = abs(temp2) # and generate the noise noise = simulated.get_noise(None) # finally we create another template with different form freq2 = [min_freq, min_freq * 2] weights2=[1, 0.4] simulated2 = SimulateSignal(n_samples, freq2, weights=weights2, dwindow="tukey", underlying_delta=underlying_delta) pos_start_t3 = pos_start_t1 temp3 = simulated2.get_data(pos_start_peaks=pos_start_t3, n_peaks=1, with_noise=False, configuration=config) temp3 = abs(temp3) # and create the data data = noise + temp1 # set all as TimeSeries, in the future the SimulateData class should return timeseries on its own times = TimesSamples(initial_array=times) temp1 = TimeSeries(temp1, times=times) temp2 = TimeSeries(temp2, times=times) temp3 = TimeSeries(temp3, times=times) data = TimeSeries(data, times=times) noise = TimeSeries(noise, times=times) return times, data, noise, temp1, temp2, temp3, freq times, data, noise, temp1, temp2, temp3, freq = gen_data(500, underlying_delta=0.001) print(freq) # put all in a plot plt.figure(figsize=(15, 5)) plt.plot(times, data, 'k', label="data") plt.plot(times, temp1, label="template1", alpha=0.5) plt.plot(times, temp2, label="template2", alpha=0.5) plt.plot(times, temp3, label="template3", alpha=0.5) plt.legend() # - # #### Implement the Matched Filter methods # For the implementation with linear regressors we need to set the regressor; there are three options for now: # * Ridge Regression # * LASSO # * Elastic Net # # Since we are doing overfitting we don't care too much about which regressor is the best, and here I will just use the Ridge Regressor. # These regressors demand the computation of a dictionary; in this case the dictionary is the Fourier Matrix, for which we have implemented a class too.
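The `Dictionary` and `RidgeRegression` classes come from the project's own mfilter package, but the core idea — fit Fourier coefficients to irregularly sampled data by regularized least squares — can be sketched with numpy alone (the frequency grid and `alpha` below are illustrative, not the notebook's values):

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0, 10, 200))           # irregular sampling times
y = np.sin(2 * np.pi * 0.5 * t) + 0.1 * rng.standard_normal(t.size)

trial_freqs = np.arange(0.1, 2.01, 0.1)        # assumed trial-frequency grid
# Fourier "dictionary": one cosine and one sine column per trial frequency
Phi = np.hstack([np.cos(2 * np.pi * np.outer(t, trial_freqs)),
                 np.sin(2 * np.pi * np.outer(t, trial_freqs))])

alpha = 1.0                                     # ridge regularization strength
# closed-form ridge solution: (Phi^T Phi + alpha I)^-1 Phi^T y
coef = np.linalg.solve(Phi.T @ Phi + alpha * np.eye(Phi.shape[1]), Phi.T @ y)

# amplitude per frequency from its cosine/sine coefficient pair
amps = np.hypot(coef[:trial_freqs.size], coef[trial_freqs.size:])
print(trial_freqs[np.argmax(amps)])  # the injected 0.5 Hz component dominates
```

The fitted coefficients play the role of the Fourier transform on an irregular grid, which is what `make_frequency_series` computes below via the regressor.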
# + # first define the sampling grid samples_per_peak = 10 freqs = FrequencySamples(input_time=times, minimum_frequency=-max(freq)*2, maximum_frequency=max(freq)*2, samples_per_peak=samples_per_peak) F = Dictionary(times, freqs) reg = RidgeRegression(alpha=1000, phi=F) # reg = ElasticNetRegression(alpha=0.01, l1_ratio=0.7, phi=F) # reg = LassoRegression(alpha=0.001, phi=F) print(F.shape(splited=False)) # - # We also need to compute a PSD; luckily we have implemented Lomb-Scargle on the FrequencySamples object. # In the future we expect to implement a Lomb-Welch average PSD to estimate the noise PSD using input data psd = freqs.lomb_scargle(times, data, norm="psd") psd.plot(by_components=False) print(psd.sum()) psd = None # Also we will calculate the FrequencySeries in advance for all templates and data # + stilde = make_frequency_series(data, frequency_grid=freqs, reg=reg) ntilde = make_frequency_series(noise, frequency_grid=freqs, reg=reg) htilde1 = make_frequency_series(temp1, frequency_grid=freqs, reg=reg) htilde2 = make_frequency_series(temp2, frequency_grid=freqs, reg=reg) htilde3 = make_frequency_series(temp3, frequency_grid=freqs, reg=reg) h1_norm = sigmasq(htilde1, psd=psd) h2_norm = sigmasq(htilde2, psd=psd) h3_norm = sigmasq(htilde3, psd=psd) s_norm = sigmasq(stilde, psd=psd) n_norm = sigmasq(ntilde, psd=psd) fig, [ax1, ax2] = plt.subplots(2, 1, sharex=True, figsize=(8, 10)) stilde.plot(axis=ax1, by_components=False, _show=False) ax1.axvline(freq[0], color="black", linestyle='solid', label="freq1") ax1.axvline(freq[1], color="black", linestyle='solid', label="freq2") ax1.axvline(freq[2], color="black", linestyle='solid', label="freq3") ax1.legend() htilde1.plot(axis=ax2, by_components=False, _show=False) ax2.axvline(freq[0], color="black", linestyle='solid', label="freq1") ax2.axvline(freq[1], color="black", linestyle='solid', label="freq2") ax2.axvline(freq[2], color="black", linestyle='solid', label="freq3") ax2.legend() ax2.set_xlim([0, freq[2] + freq[0]])
# - plt.plot(freqs, abs(ntilde)**2 * F.shape(splited=False)[1]) print((abs(ntilde)**2).sum()) # + # t_shifted = times - times[len(data)//2] # noise_psd = freqs.lomb_scargle(times, noise, norm="psd").value # noise_psd = abs(stilde)**2 # noise_psd = np.ones(len(freqs)) * np.median(noise_psd) # white_filter = FrequencySeries(1/np.sqrt(noise_psd * len(freqs)), frequency_grid=freqs, epoch=data.epoch) # white_filter.plot(by_components=False) # print((noise_psd * (white_filter.value**2)).sum()) # lft1, lff1 = linear_filter(white_filter, noise, psd=psd, frequency_grid=freqs, reg=reg) # lft1.plot() # noise.plot() def checking(n): underlying_delta=1 n_samples = 50 min_freq = 1 / (n_samples * underlying_delta) freq = [min_freq, min_freq * 2, min_freq * 3] weights=[1, 0.3, 0.3] config="slight" noise_level=0.5 simulated = SimulateSignal(n_samples, freq, weights=weights, noise_level=noise_level, dwindow="tukey", underlying_delta=underlying_delta) # get the times times = simulated.get_times(configuration=config) times = TimesSamples(initial_array=times) temp1 = simulated.get_data(pos_start_peaks=0, n_peaks=1, with_noise=False, configuration=config) temp1 = abs(temp1) temp1 = TimeSeries(temp1, times=times) temp2 = simulated.get_data(pos_start_peaks=20, n_peaks=1, with_noise=False, configuration=config) temp2 = abs(temp2) temp2 = TimeSeries(temp2, times=times) samples_per_peak = 5 freqs = FrequencySamples(input_time=times, minimum_frequency=-1 / (2 * underlying_delta), maximum_frequency=1 / (2 * underlying_delta), samples_per_peak=samples_per_peak) print(len(freqs)) F = Dictionary(times, freqs) reg = RidgeRegression(alpha=1000, phi=F) r = [] r0 = [] s = [] for i in range(n): noise = TimeSeries(simulated.get_noise(None), times=times) data = TimeSeries(temp1 + noise, times=times) ntilde = noise.to_frequencyseries(frequency_grid=freqs, reg=reg) stilde = data.to_frequencyseries(frequency_grid=freqs, reg=reg) r0.append(noise[0]) noise_psd = freqs.lomb_scargle(times, noise, 
norm="psd").value white_filter = FrequencySeries(1/np.sqrt(noise_psd), frequency_grid=freqs, epoch=data.epoch) # lft1, lff1 = linear_filter(white_filter, noise, psd=None, frequency_grid=freqs, reg=reg) # r.append(lft1[0]) h_filter = temp2.to_frequencyseries(frequency_grid=freqs, reg=reg) n_norm = (ntilde * ntilde.conj() * white_filter.value**2).sum() s_norm = (stilde * stilde.conj() * white_filter.value**2).sum() h_norm = (h_filter * h_filter.conj() * white_filter.value**2).sum() fered_noise = (ntilde* h_filter.conj() * white_filter.value**2).sum() fered_data = (stilde * h_filter.conj() * white_filter.value**2).sum() r.append(fered_noise.real / np.sqrt(n_norm.real) / np.sqrt(h_norm.real)) s.append(fered_data.real / np.sqrt(s_norm.real) / np.sqrt(h_norm.real)) return r0, r, s r0, r, s = checking(100) plt.figure() aa = plt.hist(r) plt.figure() _ = plt.hist(r0) plt.figure() _ = plt.hist(s) print(np.std(r), np.std(r0), np.std(s)) aa # + t_shifted = times - times[len(data)//2] fig, [ax1, ax2] = plt.subplots(1, 2, sharey=True, figsize=(16, 4)) unit_e = False snr1_data = matched_filter(htilde1, stilde, psd=psd, reg=reg, times=times, unitary_energy=unit_e) snr1_noise = matched_filter(htilde1, ntilde, psd=psd, reg=reg, times=times, unitary_energy=unit_e) ax1.plot(t_shifted, np.roll(snr1_data, len(snr1_data)//2), label="temp1") ax2.plot(t_shifted, np.roll(snr1_noise, len(snr1_noise)//2)) snr2_data = matched_filter(htilde2, stilde, psd=psd, reg=reg, times=times, unitary_energy=unit_e) snr2_noise = matched_filter(htilde2, ntilde, psd=psd, reg=reg, times=times, unitary_energy=unit_e) ax1.plot(t_shifted, np.roll(snr2_data, len(snr2_data)//2), label="temp2") ax2.plot(t_shifted, np.roll(snr2_noise, len(snr2_noise)//2)) snr3_data = matched_filter(htilde3, stilde, psd=psd, reg=reg, times=times, unitary_energy=unit_e) snr3_noise = matched_filter(htilde3, ntilde, psd=psd, reg=reg, times=times, unitary_energy=unit_e) ax1.plot(t_shifted, np.roll(snr3_data, len(snr3_data)//2), 
label="temp3") ax2.plot(t_shifted, np.roll(snr3_noise, len(snr3_noise)//2)) ax1.legend() test_snr1_noise = matched_filter(htilde1, ntilde, h_norm=1, psd=psd, reg=reg, times=times, unitary_energy=unit_e) print(np.std(snr1_noise), np.std(snr2_noise), np.std(snr3_noise), np.std(test_snr1_noise)) print(np.std(snr1_data), np.std(snr2_data), np.std(snr3_data), np.std(test_snr1_noise)) print(np.std(noise)) print(np.mean(abs(ntilde)**2)) print(0.2 / np.sqrt(h1_norm)) # - max_snr, idx_max = match(htilde1, stilde, reg=reg, v1_norm=h1_norm, v2_norm=None, psd=psd, times=times) opt_snr, idx_opt_max = match(htilde1, htilde1, reg=reg, v1_norm=h1_norm, v2_norm=None, psd=psd, times=times) noise_snr, idx_noise_max = match(htilde1, ntilde, reg=reg, v1_norm=h1_norm, v2_norm=None, psd=psd, times=times) print(max_snr, opt_snr, noise_snr, "idx is: ", idx_max) max_snr, idx_max = match(htilde2, stilde, reg=reg, v1_norm=h2_norm, v2_norm=None, psd=psd, times=times) opt_snr, idx_opt_max = match(htilde2, htilde2, reg=reg, v1_norm=h2_norm, v2_norm=None, psd=psd, times=times) noise_snr, idx_noise_max = match(htilde2, ntilde, reg=reg, v1_norm=h2_norm, v2_norm=None, psd=psd, times=times) print(max_snr, opt_snr, noise_snr, "idx is: ", idx_max) max_snr, idx_max = match(htilde3, stilde, reg=reg, v1_norm=h3_norm, v2_norm=None, psd=psd, times=times) opt_snr, idx_opt_max = match(htilde3, htilde3, reg=reg, v1_norm=h3_norm, v2_norm=None, psd=psd, times=times) noise_snr, idx_noise_max = match(htilde3, ntilde, reg=reg, v1_norm=h3_norm, v2_norm=None, psd=psd, times=times) print(max_snr, opt_snr, noise_snr, "idx is: ", idx_max) # Here we would like to know how the SNR for noise-only data is distributed; from theory we expect it to be Gaussian, with distribution $N(0,1)$, which is supposed to be independent of the # + def testing(n): v1 = [] v2 = [] v3 = [] v4 = [] plt.figure() underlying_delta=0.001 n_samples = 100 min_freq = 1 / (n_samples * underlying_delta) freq = [min_freq, min_freq * 2,
min_freq * 3] weights=[1, 0.3, 0.3] config="slight" noise_level=0.5 simulated = SimulateSignal(n_samples, freq, weights=weights, noise_level=noise_level, dwindow="tukey", underlying_delta=underlying_delta) # get the times times = simulated.get_times(configuration=config) # next generate 2 templates of different number of peaks and position of start pos_start_t1 = 0 pos_start_t2 = 0 temp1 = simulated.get_data(pos_start_peaks=pos_start_t1, n_peaks=1, with_noise=False, configuration=config) temp1 = abs(temp1) times = TimesSamples(initial_array=times) temp1 = TimeSeries(temp1, times=times) for i in range(n): noise = simulated.get_noise(None) noise = TimeSeries(noise, times=times) # first define the sampling grid samples_per_peak = 10 freqs = FrequencySamples(input_time=times, minimum_frequency=-max(freq)*2, maximum_frequency=max(freq)*2, samples_per_peak=samples_per_peak) F = Dictionary(times, freqs) reg = RidgeRegression(alpha=1000, phi=F) psd = freqs.lomb_scargle(times, noise, norm="psd") ntilde = make_frequency_series(noise, frequency_grid=freqs, reg=reg) htilde1 = make_frequency_series(temp1, frequency_grid=freqs, reg=reg) snr_noise = matched_filter(htilde1, ntilde, psd=psd, reg=reg, times=times, unitary_energy=False) plt.plot(times, snr_noise, 'k', alpha=0.3) v1.append(snr_noise[0]) v2.append(snr_noise[50]) v3.append(snr_noise[30]) v4.append(snr_noise[99]) return v1, v2, v3, v4 v1, v2, v3, v4 = testing(100) # - print(np.std(v1), np.std(v2), np.std(v3), np.std(v4)) print(np.mean(v1), np.mean(v2), np.mean(v3), np.mean(v4)) # #### Hypothesis Testing # Using results of the SNR for every Template we can estimate the probability of that SNR representing a real signal, # here we need to give a fixed probability of false alarm or probability of detection in order to define the threshold, or you could give directly a threshold # + false_alarm = 0.1 p_detect = None def set_threshold_params(h, false_alarm=None, p_detect=None, name_template="h"): test = 
HypothesisTesting(h.real, false_alarm=false_alarm, p_detect=p_detect) test.set_threshold() print("threshold value for {} is: ".format(name_template), round(test.threshold, 3), " with... \n:::: false alarm of: ", round(test.false_alarm(), 3), " and prob. of detect: ", round(test.p_detection(), 3)) return test h1_test = set_threshold_params(h1.real, false_alarm=false_alarm, p_detect=None, name_template="h1") h2_test = set_threshold_params(h2.real, false_alarm=false_alarm, p_detect=None, name_template="h2") h3_test = set_threshold_params(h3.real, false_alarm=false_alarm, p_detect=None, name_template="h3") def choose(snr_max, hyp_test, name_snr="snr", name_template="h"): h_true = hyp_test.decide(snr_max) print("for SNR of {} under template {} we choose hypothesis H{} as correct and...".format(name_snr, name_template, h_true), "\n::::: false alarm for SNR is: ", hyp_test.false_alarm(threshold=snr_max), " and prob. of detect is: ", hyp_test.p_detection(threshold=snr_max)) return def plot_hyp(loc, scale, snr_max): norm1 = sp.stats.norm(loc=0, scale=scale) norm2 = sp.stats.norm(loc=loc, scale=scale) vals = np.linspace(0 - 2 * scale, loc + 2 * scale, 100) print("\n-------------------\n") choose(max(snr1.data), h1_test, name_snr="snr1", name_template="h1") choose(max(snr1_noise.data), h1_test, name_snr="snr1_noise", name_template="h1") print(" ") choose(max(snr2.data), h2_test, name_snr="snr2", name_template="h2") choose(max(snr2_noise.data), h2_test, name_snr="snr2_noise", name_template="h2") print(" ") choose(max(snr3.data), h3_test, name_snr="snr3", name_template="h3") choose(max(snr3_noise.data), h3_test, name_snr="snr3_noise", name_template="h3") # - norm1 = sp.stats.norm(loc=0, scale=1) norm2 = sp.stats.norm(loc=np.sqrt(h1.real), scale=1) vals = np.linspace(-4, 4 + np.sqrt(h1.real), 200) h_0 = norm1.pdf(vals) h_1 = norm2.pdf(vals) plt.figure(figsize=(15, 4)) plt.plot(vals, h_0, 'b', label="H0") plt.plot(vals, h_1, 'g', label="H1") plt.fill_between(vals, 0, h_0,
where=vals > h1_test.threshold, facecolor="blue") plt.axvline(h1_test.threshold, color='k', linestyle='solid', label="threshold") plt.axvline(np.max(snr1.data), color='r', label="SNR detection") plt.legend()
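# The threshold logic used by `HypothesisTesting` above can be sketched with nothing but the standard library, assuming (as in the plot) that the noise-only SNR follows $N(0,1)$ and the signal-present SNR follows $N(\sqrt{\rho}, 1)$. This is a sketch of the idea, not the actual API of the `HypothesisTesting` class:

```python
from statistics import NormalDist

std_norm = NormalDist()  # N(0, 1): distribution of the SNR under pure noise

def threshold_from_false_alarm(p_fa):
    # Solve P(SNR > gamma | noise only) = p_fa, i.e. gamma = Phi^{-1}(1 - p_fa)
    return std_norm.inv_cdf(1.0 - p_fa)

def detection_probability(gamma, expected_snr):
    # P(SNR > gamma | signal present), with signal-present SNR ~ N(expected_snr, 1)
    return 1.0 - NormalDist(mu=expected_snr).cdf(gamma)

gamma = threshold_from_false_alarm(0.1)
print(round(gamma, 3))                            # threshold for a 10% false alarm
print(round(detection_probability(gamma, 4.0), 3))  # detection prob. for expected SNR of 4
```

# Raising the threshold lowers the false-alarm rate at the cost of detection probability, which is the trade-off the `false_alarm`/`p_detect` arguments above expose.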
developing/notebooks/MatchedFilterWithRegressionsUsage.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- from cssi_cp2k.classes import SIM as sim # read this: https://www.cp2k.org/_media/events:2015_cecam_tutorial:watkins_optimization.pdf for more knowledge import novice_functions # + atom_list=['Cl','F'] num_atoms=len(atom_list) functional='PBE' # assumption: the XC functional was used below but never defined in this notebook data_dir='' # assumption: path prefix to the CP2K data files (the original used the undefined built-in name `dir`) mySim = sim.SIM() mySim.GLOBAL.RUN_TYPE = "GEO_OPT" mySim.GLOBAL.PROJECT = "molecule_opt" mySim.GLOBAL.PRINT_LEVEL = "LOW" #FORCE EVAL SECTION mySim.FORCE_EVAL.METHOD='QUICKSTEP' mySim.FORCE_EVAL.SUBSYS.CELL.ABC='15.135 15.135 15.135' mySim.FORCE_EVAL.SUBSYS.COORD.DEFAULT_KEYWORD='mol_unopt_coord.xyz' mySim.FORCE_EVAL.SUBSYS.init_atoms(num_atoms); for i in range(num_atoms): mySim.FORCE_EVAL.SUBSYS.KIND[i+1].SECTION_PARAMETERS=atom_list[i] mySim.FORCE_EVAL.SUBSYS.KIND[i+1].BASIS_SET=novice_functions.basis_set(atom_list[i]) mySim.FORCE_EVAL.SUBSYS.KIND[i+1].POTENTIAL=novice_functions.potential(atom_list[i],functional) mySim.FORCE_EVAL.DFT.BASIS_SET_FILE_NAME=data_dir+'BASIS_MOLOPT' mySim.FORCE_EVAL.DFT.POTENTIAL_FILE_NAME=data_dir+'GTH_POTENTIALS' mySim.FORCE_EVAL.DFT.QS.EPS_DEFAULT=1E-10 mySim.FORCE_EVAL.DFT.MGRID.CUTOFF=400 mySim.FORCE_EVAL.DFT.MGRID.REL_CUTOFF=40 mySim.FORCE_EVAL.DFT.MGRID.NGRIDS=5 mySim.FORCE_EVAL.DFT.XC.XC_FUNCTIONAL.SECTION_PARAMETERS=functional mySim.FORCE_EVAL.DFT.XC.VDW_POTENTIAL.POTENTIAL_TYPE='PAIR_POTENTIAL' mySim.FORCE_EVAL.DFT.XC.VDW_POTENTIAL.PAIR_POTENTIAL.TYPE='DFTD3' mySim.FORCE_EVAL.DFT.XC.VDW_POTENTIAL.PAIR_POTENTIAL.PARAMETER_FILE_NAME='dftd3.dat' mySim.FORCE_EVAL.DFT.XC.VDW_POTENTIAL.PAIR_POTENTIAL.REFERENCE_FUNCTIONAL=functional mySim.FORCE_EVAL.DFT.XC.VDW_POTENTIAL.PAIR_POTENTIAL.R_CUTOFF=11 mySim.FORCE_EVAL.DFT.SCF.SCF_GUESS='ATOMIC' mySim.FORCE_EVAL.DFT.SCF.MAX_SCF=200 mySim.FORCE_EVAL.DFT.SCF.EPS_SCF=1E-6 mySim.MOTION.GEO_OPT.TYPE='MINIMIZATION' mySim.MOTION.GEO_OPT.OPTIMIZER='BFGS'
mySim.MOTION.GEO_OPT.MAX_ITER=200 mySim.MOTION.GEO_OPT.MAX_DR=1e-3 mySim.MOTION.CONSTRAINT.FIXED_ATOMS.LIST ='1' mySim.write_changeLog(fn="mol_opt-changeLog.out") mySim.write_errorLog() mySim.write_inputFile(fn='mol_opt.inp') # -
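# The `novice_functions` module used above is not shown in this notebook. A minimal sketch of what it might look like is a plain lookup table; the basis-set and potential names below are assumptions, so verify them against the `BASIS_MOLOPT` and `GTH_POTENTIALS` files shipped with CP2K before relying on them:

```python
# Hypothetical stand-in for the novice_functions module used above.
# The entries are assumptions, not guaranteed to match the real module.
_BASIS_SETS = {
    "Cl": "DZVP-MOLOPT-SR-GTH",
    "F": "DZVP-MOLOPT-SR-GTH",
}
_VALENCE_ELECTRONS = {"Cl": 7, "F": 7}  # valence counts used by the GTH potentials

def basis_set(atom):
    # return the basis-set name for a given element symbol
    return _BASIS_SETS[atom]

def potential(atom, functional):
    # GTH potential names follow the pattern GTH-<FUNCTIONAL>-q<valence>
    return "GTH-{}-q{}".format(functional, _VALENCE_ELECTRONS[atom])

print(basis_set("Cl"), potential("Cl", "PBE"))
```

# A dictionary keeps the per-element choices explicit and makes unsupported elements fail loudly with a `KeyError` instead of silently producing a bad input file.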
cp2kmdpy/tests/delete/presented_to_ilja/molecule_opt.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 (Data Science) # language: python # name: python3__SAGEMAKER_INTERNAL__arn:aws:sagemaker:eu-west-1:470317259841:image/datascience-1.0 # --- # !pip install -q -U sagemaker ipywidgets # # ResNet-50 comparison: compiled vs uncompiled, featuring Inferentia # # In this notebook we will see how to deploy a pretrained model from the PyTorch Vision library, in particular a ResNet50, to Amazon SageMaker. We will also test how it performs on different hardware configurations, and the effects of model compilation with Amazon SageMaker Neo. In a nutshell, we will test: # # - ResNet50 on a ml.c5.xlarge, uncompiled # - ResNet50 on a ml.g4dn.xlarge, uncompiled # - ResNet50 on a ml.c5.xlarge, compiled # - ResNet50 on a ml.g4dn.xlarge, compiled # - ResNet50 on a ml.inf1.xlarge, compiled # # **NOTE**: this notebook has been tested with the PyTorch 1.8 CPU kernel. Please use that one if you're using SageMaker Studio. # ### Set-up model and SageMaker helper functions # + import sagemaker from sagemaker import Session, get_execution_role from sagemaker.pytorch.model import PyTorchModel from sagemaker.utils import name_from_base print(sagemaker.__version__) sess = Session() bucket = sess.default_bucket() role = get_execution_role() endpoints = {} # - # Let's download the model for the PyTorch Hub, and create an archive that can be used by SageMaker to deploy this model. For using PyTorch in Script Mode, Amazon SageMaker expects a single archive file in `.tar.gz` format, containing a model file and the code for inference in a `code` folder. 
The structure of the archive will be as follows: # # ``` # /model.tar.gz # /--- model.pth # /--- code/ # /--- /--- inference.py # /--- /--- requirements.txt (optional) # ``` # # By setting the variable `download_the_model=False`, you can skip the download phase and provide your own path to S3 in the `model_data` variable. # + download_the_model = True if download_the_model: import torch, tarfile # Load the model model = torch.hub.load('pytorch/vision:v0.9.0', 'resnet50', pretrained=True) inp = torch.rand(1, 3, 224, 224) model_trace = torch.jit.trace(model, inp) # Save your model. The following code saves it with the .pth file extension model_trace.save('model.pth') with tarfile.open('model.tar.gz', 'w:gz') as f: f.add('model.pth') f.add('code/uncompiled-inference.py', 'code/inference.py') f.close() pytorch_resnet50_prefix = 'pytorch/resnet50' model_data = sess.upload_data('model.tar.gz', bucket, pytorch_resnet50_prefix) else: pytorch_resnet50_prefix = 'pytorch/resnet50' model_data = f's3://{bucket}/{pytorch_resnet50_prefix}/model.tar.gz' print(f'Model stored in {model_data}') # - # ## Deploy and test on CPU # # In our first test, we will deploy the model on a `ml.c5.xlarge` instance, without compiling the model. Although this is a CNN, it is still possible to run it on CPU, even if the performances won't be that good. This can give us a nice baseline of the performances of our model. pth_model = PyTorchModel(model_data=model_data, entry_point='uncompiled-inference.py', source_dir='code', role=role, framework_version='1.7', py_version='py3' ) predictor = pth_model.deploy(1, 'ml.c5.xlarge') endpoints['cpu_uncompiled'] = predictor.endpoint_name predictor.endpoint_name # ## Deploy and test on GPU # # The instance chosen this time is a `ml.g4dn.xlarge`. It has great throughput and is the cheapest way of running GPU inferences on the AWS cloud.
pth_model = PyTorchModel(model_data=model_data, entry_point='uncompiled-inference.py', source_dir='code', role=role, framework_version='1.6', py_version='py3' ) predictor = pth_model.deploy(1, 'ml.g4dn.xlarge') endpoints['gpu_uncompiled'] = predictor.endpoint_name predictor.endpoint_name # # Compiled Models # # A common tactic in more advanced use cases is to improve model performances, in terms of latency and throughput, by compiling the model. # # Amazon SageMaker features its own compiler, Amazon SageMaker Neo, that enables data scientists to optimize ML models for inference on SageMaker in the cloud and supported devices at the edge. # # You start with a machine learning model already built with DarkNet, Keras, MXNet, PyTorch, TensorFlow, TensorFlow-Lite, ONNX, or XGBoost and trained in Amazon SageMaker or anywhere else. Then you choose your target hardware platform, which can be a SageMaker hosting instance or an edge device based on processors from Ambarella, Apple, ARM, Intel, MediaTek, Nvidia, NXP, Qualcomm, RockChip, Texas Instruments, or Xilinx. With a single click, SageMaker Neo optimizes the trained model and compiles it into an executable. # # For inference in the cloud, SageMaker Neo speeds up inference and saves cost by creating an inference optimized container in SageMaker hosting. For inference at the edge, SageMaker Neo saves developers months of manual tuning by automatically tuning the model for the selected operating system and processor hardware. # ### Create the `model_data` for compilation # # To compile the model, we need to provide a `tar.gz` archive just like before, with very few changes. In particular, since SageMaker will use a different DL runtime for running compiled models, we will let it use the default function for serving the model, and only provide a script containing how to preprocess data. Let's create this archive and upload it to S3.
# + import tarfile with tarfile.open('model-to-compile.tar.gz', 'w:gz') as f: f.add('model.pth') f.add('code/compiled-inference.py', 'code/inference.py') f.close() model_data = sess.upload_data('model-to-compile.tar.gz', bucket, pytorch_resnet50_prefix) # - output_path = f's3://{bucket}/{pytorch_resnet50_prefix}/compiled' # ## Compile for CPU # # Let's run the same baseline test from before, and compile and deploy for CPU instances. pth_model = PyTorchModel(model_data=model_data, entry_point='compiled-inference.py', source_dir='code', role=role, framework_version='1.6', py_version='py3' ) # + output_path = f's3://{bucket}/{pytorch_resnet50_prefix}/compiled' compiled_model = pth_model.compile( target_instance_family='ml_c5', input_shape={"input0": [1, 3, 224, 224]}, output_path=output_path, role=role, job_name=name_from_base('pytorch-resnet50-c5') ) # - predictor = compiled_model.deploy(1, 'ml.c5.xlarge') endpoints['cpu_compiled'] = predictor.endpoint_name predictor.endpoint_name # ## Compile for GPU pth_model = PyTorchModel(model_data=model_data, entry_point='compiled-inference.py', source_dir='code', role=role, framework_version='1.6', py_version='py3' ) # + output_path = f's3://{bucket}/{pytorch_resnet50_prefix}/compiled' compiled_model = pth_model.compile( target_instance_family='ml_g4dn', input_shape={"input0": [1, 3, 224, 224]}, output_path=output_path, role=role, job_name=name_from_base('pytorch-resnet50-g4dn') ) # - predictor = compiled_model.deploy(1, 'ml.g4dn.xlarge') endpoints['gpu_compiled'] = predictor.endpoint_name predictor.endpoint_name # ## Compile for Inferentia instances # # There is one more thing we can try to improve our model performances: using Inferentia instances. # # Amazon EC2 Inf1 instances deliver high-performance ML inference at the lowest cost in the cloud. They deliver up to 2.3x higher throughput and up to 70% lower cost per inference than comparable current generation GPU-based Amazon EC2 instances.
Inf1 instances are built from the ground up to support machine learning inference applications. They feature up to 16 AWS Inferentia chips, high-performance machine learning inference chips designed and built by AWS. pth_model = PyTorchModel( model_data=model_data, entry_point='compiled-inference.py', source_dir='code', role=role, framework_version='1.7', py_version='py3' ) compiled_model = pth_model.compile( target_instance_family='ml_inf1', input_shape={"input0": [1, 3, 224, 224]}, output_path=output_path, role=role, job_name=name_from_base('pytorch-resnet50-inf1'), compile_max_run=1000 ) predictor = compiled_model.deploy(1, 'ml.inf1.xlarge') endpoints['inferentia'] = predictor.endpoint_name predictor.endpoint_name # # Test # For testing our models and endpoints, we will use the following picture of a beagle pup, freely available on [Wikimedia](https://commons.wikimedia.org/wiki/File:Beagle_puppy_sitting_on_grass.jpg). We will pass it to our endpoints as `application/x-image`, and no particular pre-processing is needed on client-side. # + from IPython.display import Image Image('doggo.jpg') # - # Finally, we will use the following function to benchmark our SageMaker endpoints, measuring the latency of our predictions. This specific version uses both the `Predictor` from the SageMaker Python SDK and boto3's `invoke_endpoint()` function - just change the last parameter from `boto3` to `sm` if you want to use the Python SDK. # + from load_test import load_tester num_thread = 16 from IPython.display import JSON JSON(endpoints) # - # CPU - Uncompiled load_tester(num_thread, endpoints['cpu_uncompiled'], 'doggo.jpg', 'boto3') # Let's discuss the above results: # # As we can see from the latency tests, the model takes way too long to generate an inference, averaging about 6 transactions per second (TPS). This may be sufficient for some low-throughput use cases or rarely used endpoints, but it's very likely that it won't be enough.
Let's see how those numbers change when using a GPU. # GPU - Uncompiled load_tester(num_thread, endpoints['gpu_uncompiled'], 'doggo.jpg', 'boto3') # Now we're talking! The GPU helps us achieve 77 TPS on average, with much lower latency percentiles across the board. Nice! # CPU - Compiled load_tester(num_thread, endpoints['cpu_compiled'], 'doggo.jpg', 'boto3') # With a simple compilation job, we more than DOUBLED the performances of our model on our c5 instance, achieving 13 TPS and half the latency percentiles. Let's see if the same results can be seen on GPU. # GPU - Compiled load_tester(num_thread, endpoints['gpu_compiled'], 'doggo.jpg', 'boto3') # Results are also consistent in the compiled version of the model on GPU. Almost double the TPS, with 7 ms latency. Let's take it one step further and test Inferentia. # Inferentia load_tester(num_thread, endpoints['inferentia'], 'doggo.jpg', 'boto3') # The best results so far! Up to 290 TPS at the same latency percentiles as GPU, or lower. All of this for a fraction of the cost. # # Reviewing the results # # Let's plot the results obtained from the previous tests, by taking into account the last printed line of each load test. We will ignore the error rate, and take into account the TPS obtained with our tests, dividing it by the cost of the machine in the region of testing (DUB) - a metric we will call "throughput per dollar".
# + import matplotlib.pyplot as plt from io import StringIO import pandas as pd data = StringIO(''' experiment|TPS|p50|cost/hour c5|5.900|215.23876|0.23 c5 + Neo|12.900|121.60897|0.23 g4dn|74.700|21.19393|0.821 g4dn + Neo|140.400|11.28725|0.821 inf1|304.300|4.90315|0.33 ''') df = pd.read_csv(data, sep='|') df['throughput per dollar (divided by 100, higher is better)'] = df['TPS']/df['cost/hour']/100 # Divide by 100 to normalize df['cost per 1M inferences in $ (lower is better)'] = (1000000/df['TPS'])/3600*df['cost/hour'] df['number of inferences per dollar (higher is better)'] = df['TPS']*3600/df['cost/hour'] df['transactions per hour'] = df['TPS']*3600 df.head() # - df.plot(x="experiment", y="cost per 1M inferences in $ (lower is better)", kind="bar") ax = df.plot(x="experiment", y="cost per 1M inferences in $ (lower is better)", legend=False, kind="bar", color="red", ylabel='cost in dollar ($)') ax.figure.legend() plt.rcParams["figure.figsize"] = (8,5) plt.savefig('./images/cost-1M-inferences.png', dpi=300, format='png') plt.show() ax = df.plot(x="experiment", y="number of inferences per dollar (higher is better)", legend=False, kind="bar", color="green", ylabel="number of transactions") ax.figure.legend() plt.yticks([0,500000,1000000,1500000,2000000,2500000, 3000000], ["0", "500K", "1M", "1.5M", "2M", "2.5M", "3M"]) plt.rcParams["figure.figsize"] = (8,5) plt.savefig('images/inferences-per dollar.png', dpi=300, format='png') plt.show() ax = df.plot(x="experiment", y="transactions per hour", legend=False, kind="bar", color="blue", ylabel="number of transactions") ax.figure.legend() plt.rcParams["figure.figsize"] = (8,5) plt.yticks([0,250000,500000,750000, 1000000], ["0", "250K", "500K", "750K", "1M"]) plt.savefig('images/transactions-per-hour.png', dpi=300, format='png') plt.show() ax = df.plot(x="experiment", y="cost/hour", legend=False, kind="bar", color="orange", ylabel="number of transactions") ax.figure.legend() plt.rcParams["figure.figsize"] = (8,5) 
plt.savefig('images/cost-per-hour.png', dpi=300, format='png') plt.show() # The results speak for themselves. The `inf1` instance proves extremely competitive, thanks to its high TPS, incredibly low latency and undeniable value. Compiling the model for a GPU just like the `g4dn` is also a very interesting approach. # # Clean-up for endpoint in endpoints: pred = sagemaker.predictor.Predictor(endpoint_name=endpoints[endpoint]) pred.delete_endpoint()
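# The derived columns in the comparison table boil down to simple arithmetic on the measured TPS and the hourly instance price. As a sanity check, here is the same computation in plain Python for the `inf1` row (using the eu-west-1 figures quoted in the table):

```python
def cost_per_million_inferences(tps, cost_per_hour):
    # seconds needed to serve 1M requests, converted to hours, times the hourly price
    return (1_000_000 / tps) / 3600 * cost_per_hour

def inferences_per_dollar(tps, cost_per_hour):
    # how many requests one dollar of instance time buys at the measured throughput
    return tps * 3600 / cost_per_hour

tps, price = 304.3, 0.33  # inf1 row from the table above
print(round(cost_per_million_inferences(tps, price), 3))  # ~0.30 $ per 1M inferences
print(round(inferences_per_dollar(tps, price)))           # ~3.3M inferences per $
```

# The same two functions applied to the c5 row (5.9 TPS at $0.23/hour) give roughly $10.8 per million inferences, which is what makes the inf1 bar so short in the cost plot.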
resnet50.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 (ipykernel) # language: python # name: python3 # --- # ### Network # A network is used to provide access to a collection of domains at once, i.e., if a user agrees to a Network Agreement, then they automatically agree to the conditions of the Domains enlisted in that Network. # # Through networks, a user can list all the domains attached to a network. The user can select any domain and log in to it by entering their credentials for that domain. # # For the scope of this course, a Network can perform the following set of operations: # # - The user is able to list all the available networks. **[P0]** # # Following are the properties of the network visible to the user on performing the list operation. # - Id # - Name of the Network # - Network Url **[P1]** # - Tags # - Description # - Hosted Domains - Number of hosted domains on the Network # - Hosted Datasets - Number of hosted datasets on the domains in the Network # - The user is able to select a network via `Id` or `Index` **[P0]** # - The user is able to filter through a list of networks via `Title` or `Tags` **[P1]** import syft as sy import pandas as pd # Let's explore all the available networks sy.networks # + # Let's select a network # We can select a network either via the `Id`/`Name` of the Network or via its index # Let's select the first Network in the list via Index un_network = sy.networks[0] # Or via `Id` un_network = sy.networks["21a68e773ba747f0a4b6169bf28e8bed"] # Or via `Name` un_network = sy.networks["United Nations"] # + # If there are multiple networks with the same `Name`, then throw an error.
# Let's assume there is more than one network with the same name `United Nations` # then un_network = sy.networks["United Nations"] # + # On selecting a network, the user can access all its public properties print(f"Network Id: {un_network.id}") print(f"Network Name: {un_network.name}") print(f"Number of hosted Domains: {un_network.hosted_domains}") print(f"Number of hosted datasets: {un_network.hosted_datasets}") print(f"Description of the Network: {un_network.description}") print(f"Tags: {un_network.tags}") # + # Now let's say the user wants to filter across the available networks # If filtering via Name sy.networks.filter( name="United Nations" ) # Returns the networks whose Name is "United Nations". # - # If filtering via Tag sy.networks.filter(tags="Health") # Returns the networks with tag "Health" in their Tags list. # Similarly, we can create a filter to filter on more than one property of the Network sy.networks.filter(name="United Nations", tags="Health") # + # For a more complex operation, like filtering all the networks where a property # of the network contains any of the values specified in the list. # To achieve this we will add a `__in` suffix with the property name. # e.g. If a user wants to filter the networks with a list of tags, then the user adds # an additional `__in` suffix at the end of the property `tags` and passes the list as the value. # This is equivalent to a query where the output returns # a list of networks which contain "Health" or "Cancer" in their tags sy.networks.filter(tags__in=["Health", "Cancer"]) # + # Performing a sub-string match # To achieve this we will add a `__contains` suffix at the end of the property.
sy.networks.filter( name__contains="United" ) # Returns all the networks where "United" is present in the name of the network # + # User selects a network with invalid Id or Index sy.networks[23] # Or sy.networks["asdafaf23r14sdfsfsdgfsdgs"] # - # ### Dummy Data Creation # # # + import pandas as pd from enum import Enum import uuid class bcolors(Enum): HEADER = "\033[95m" OKBLUE = "\033[94m" OKCYAN = "\033[96m" OKGREEN = "\033[92m" WARNING = "\033[93m" FAIL = "\033[91m" ENDC = "\033[0m" BOLD = "\033[1m" UNDERLINE = "\033[4m" # + hide_input=false # Print available networks available_networks = [ { "Id": f"{uuid.uuid4().hex}", "Name": "United Nations", "Hosted Domains": 4, "Hosted Datasets": 6, "Description": "The UN hosts data related to the commodity and Census data.", "Tags": ["Commodities", "Census", "Health"], "Url": "https://un.openmined.org", }, { "Id": f"{uuid.uuid4().hex}", "Name": "World Health Organisation", "Hosted Domains": 3, "Hosted Datasets": 5, "Description": "WHO hosts data related to health sector of different parts of the worlds.", "Tags": ["Virology", "Cancer", "Health"], "Url": "https://who.openmined.org", }, { "Id": f"{uuid.uuid4().hex}", "Name": "International Space Station", "Hosted Domains": 2, "Hosted Datasets": 4, "Description": "ISS hosts data related to the topography of different exoplanets.", "Tags": ["Exoplanets", "Extra-Terrestrial"], "Url": "https://iss.openmined.org", }, ] networks_df = pd.DataFrame(available_networks) filtered_network_via_name = [ { "Id": f"{uuid.uuid4().hex}", "Name": "United Nations", "Hosted Domains": 4, "Hosted Datasets": 6, "Description": "The UN hosts data related to the commodity and Census data.", "Tags": ["Commodities", "Census"], "Url": "https://un.openmined.org", }, ] filtered_df_via_name = pd.DataFrame(filtered_network_via_name) filtered_network_via_tag = [ { "Id": f"{uuid.uuid4().hex}", "Name": "United Nations", "Hosted Domains": 4, "Hosted Datasets": 6, "Description": "The UN hosts data related to 
the commodity and Census data.", "Tags": ["Commodities", "Census", "Health"], "Url": "https://un.openmined.org", }, { "Id": f"{uuid.uuid4().hex}", "Name": "World Health Organisation", "Hosted Domains": 3, "Hosted Datasets": 5, "Description": "WHO hosts data related to health sector of different parts of the worlds.", "Tags": ["Virology", "Cancer", "Health"], "Url": "https://who.openmined.org", }, ] filtered_df_via_tag = pd.DataFrame(filtered_network_via_tag) not_found_error = f''' {bcolors.FAIL.value}NetworkNotFoundException {bcolors.ENDC.value}: No Network found with given Id or Index. ''' # print(f"Network Id: 21a68e773ba747f0a4b6169bf28e8bed") # print(f"Network Name: United Nations") # print(f"Number of hosted Domains: 4") # print(f"Number of hosted datasets: 6") # print(f"Description of the Network: The UN hosts data related to the commodity and Census data") # print(f"Tags: [Commodities, Census]") # - multiple_networks = f""" {bcolors.FAIL.value}MutipleNetworksReturned{bcolors.ENDC.value}: Mutiple Networks with the `Name`: `United Nations` were found. Please select a network using `Id` or `index` instead. """
notebooks/course3/DS Search and Select a Network.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Data types in Python (Python 3 version) # ## 1. Numeric # ### int x = 5 print(x) print(type(x)) # + a = 4 + 5 b = 4 * 5 c = 5 / 4 print(a, b, c) # - print(-5 / 4) print(-(5 / 4)) # ### long (in Python 3 this is just `int`) x = 5 * 1000000 * 1000000 * 1000000 * 1000000 + 1 print(x) print(type(x)) y = 5 print(type(y)) y = x print(type(y)) # ### float y = 5.7 print(y) print(type(y)) # + a = 4.2 + 5.1 b = 4.2 * 5.1 c = 5.0 / 4.0 print(a, b, c) # - a = 5 b = 4 print(float(a) / float(b)) print(5.0 / 4) print(5 / 4.0) print(float(a) / b) # ### bool # + a = True b = False print(a) print(type(a)) print(b) print(type(b)) # - print(a + b) print(a + a) print(b + b) print(int(a), int(b)) print(True and False) print(True and True) print(False and False) print(True or False) print(True or True) print(False or False) # ## 2. None z = None print(z) print(type(z)) print(int(z)) # ## 3.
Strings # ### str x = "abc" print(x) print(type(x)) a = 'Ivan' b = "Ivanov" s = a + " " + b print(s) print(a.upper()) print(a.lower()) print(len(a)) print(bool(a)) print(bool("")) print(int(a)) print(a) print(a[0]) print(a[1]) print(a[0:3]) print(a[0:4:2]) # ### unicode x = u"abc" print(x) print(type(x)) x = u'<NAME>' print(x, type(x)) y = x.encode('utf-8') print(y, type(y)) z = y.decode('utf-8') print(z, type(z)) q = y.decode('cp1251') print(q, type(q)) 1 x = u'<NAME>' print(x, type(x)) y = x.encode('utf-8') print(y, type(y)) z = y.decode('utf-8') print(z, type(z)) q = y.decode('cp1251') print(q, type(q)) print(str(x)) print(y[1:]) print(len(y), type(y)) print(len(x), type(x)) y = u'<NAME>'.encode('utf-8') print(y.decode('utf-8')) print(y.decode('cp1251')) splitted_line = "<NAME>".split(' ') print(splitted_line) print(type(splitted_line)) print("<NAME>".split(" ")) print("\x98") print(u"<NAME>".split(" ")) # ## 4. Arrays # ### list saled_goods_count = [33450, 34010, 33990, 33200] print(saled_goods_count) print(type(saled_goods_count)) # + income = [u'Высокий', u'Средний', u'Высокий'] names = [u'<NAME>', u'<NAME>', u'<NAME>'] print(income) print(names) # - print ("---".join(income)) features = ['<NAME>', 'Medium', 500000, 12, True] print (features) print(features[0]) print(features[1]) print(features[3]) print(features[0:5]) print(features[:5]) print(features[1:]) print(features[2:5]) print(features[:-1]) features.append('One more element in list') print(features) del features[-2] print(features) # ### tuple features_tuple = ('<NAME>', 'Medium', 500000, 12, True) print(type(features_tuple)) features_tuple[2:5] features_tuple.append('one more element') # ## 5.
Sets and dictionaries # ### set names = {'Ivan', 'Petr', 'Konstantin'} print(type(names)) print('Ivan' in names) print('Mikhail' in names) names.add('Mikhail') print(names) names.add('Mikhail') print(names) names.remove('Mikhail') print(names) names.add(['Vladimir', 'Vladimirovich']) names.add(('Vladimir', 'Vladimirovich')) print(names) a = range(10000) b = range(10000) b = set(b) print(a[:5]) print(a[-5:]) # %%time print(9999 in a) # %%time print(9999 in b) # ### dict # + words_frequencies = dict() words_frequencies['I'] = 1 words_frequencies['am'] = 1 words_frequencies['I'] += 1 print(words_frequencies) # - print(words_frequencies['I']) words_frequencies = {'I': 2, 'am': 1} print(words_frequencies) yet_another_dict = {'abc': 3.4, 5: 7.8, u'123': None} print(yet_another_dict) yet_another_dict[(1,2,5)] = [4, 5, 7] print(yet_another_dict) yet_another_dict[[1,2,7]] = [4, 5] # ## Where else to learn Python # * https://www.coursera.org/courses?query=Python # * https://www.codeacademy.com # * http://www.pythontutor.ru # * http://www.learnpythonthehardway.org # * http://snakify.org # * https://www.checkio.org
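# The failures demonstrated above (`names.add([...])` and the list used as a dict key) both come from the same rule: set elements and dict keys must be hashable, and mutable containers like `list` are not. A quick check:

```python
# Tuples are immutable and hashable, so they work as set elements and dict keys
print(hash(('Vladimir', 'Vladimirovich')))

# Lists are mutable and therefore unhashable
try:
    hash(['Vladimir', 'Vladimirovich'])
except TypeError as e:
    print(e)  # unhashable type: 'list'

# frozenset is the immutable counterpart of set, usable as a dict key
d = {frozenset({1, 2, 7}): [4, 5]}
print(d[frozenset({2, 1, 7})])  # key equality ignores element order
```

# This is why converting a list to a tuple (or a set to a frozenset) makes the `add` call above succeed.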
MathematicsAndPython/materials/notebooks/1-2.TypesInPython.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Machine Learning and Topological Data Analysis # <NAME>, https://mathieucarriere.github.io/website/ import numpy as np import gudhi as gd import gudhi.representations import matplotlib.pyplot as plt # In this notebook, we will see how to efficiently combine machine learning and topological data analysis with the Gudhi library and its [representations](https://gudhi.inria.fr/python/3.1.0.rc1/representations.html) module. We will see how to compute the various Hilbert representations of persistence diagrams and how to use them in order to classify a set of persistence diagrams! Ready? Let's go! # First, we will generate persistence diagrams with orbits of dynamical systems. This dataset is very common in TDA and was introduced in the [persistence image](https://arxiv.org/abs/1507.06217) paper. We use the following system, which depends on a parameter $r>0$: # # $$x_{n+1}=x_n+ry_n(1-y_n)\ \ \ \ \text{(mod 1)}$$ # $$y_{n+1}=y_n+rx_{n+1}(1-x_{n+1})\ \ \ \ \text{(mod 1)}$$ # Let's first see what the point cloud looks like for a given choice of $r$. num_pts = 1000 r = 3.5 # Here we generate the point cloud. X = np.empty([num_pts,2]) x, y = np.random.uniform(), np.random.uniform() for i in range(num_pts): X[i,:] = [x, y] x = (X[i,0] + r * X[i,1] * (1-X[i,1])) % 1. y = (X[i,1] + r * x * (1-x)) % 1. # And now, we visualize it. plt.scatter(X[:,0], X[:,1], s=3) plt.show() # Mmmh, looks like a random point cloud... We will see later on that interesting topology can appear for some specific values of $r$. # An easy way to generate persistence diagrams from this cloud is by using alpha filtrations. 
This can be done in two lines with Gudhi: acX = gd.AlphaComplex(points=X).create_simplex_tree() dgmX = acX.persistence() # We can also easily visualize the persistence diagram: gd.plot_persistence_diagram(dgmX) # Now, let's see what Gudhi has to offer to turn this diagram into a vector in a [scikit-learn](https://scikit-learn.org/stable/) fashion, that is, with estimators that have fit(), transform(), and fit_transform() methods! # The first method that was introduced historically is the [persistence landscape]( http://jmlr.org/papers/v16/bubenik15a.html). A persistence landscape is basically obtained by rotating the persistence diagram by $-\pi/4$ (so that the diagonal becomes the $x$-axis), and then putting tent functions on each point. The $k$th landscape is then defined as the $k$th largest value among all these tent functions. It is eventually turned into a vector by evaluating it on a bunch of uniformly sampled points on the x-axis. LS = gd.representations.Landscape(resolution=1000) L = LS.fit_transform([acX.persistence_intervals_in_dimension(1)]) plt.plot(L[0][:1000]) plt.plot(L[0][1000:2000]) plt.plot(L[0][2000:3000]) plt.title("Landscape") plt.show() # A variation, called the [silhouette](https://arxiv.org/abs/1312.0308), takes a weighted average of these tent functions instead. Here, we weight each tent function by the distance of the corresponding point to the diagonal. SH = gd.representations.Silhouette(resolution=1000, weight=lambda x: np.power(x[1]-x[0],1)) sh = SH.fit_transform([acX.persistence_intervals_in_dimension(1)]) plt.plot(sh[0]) plt.title("Silhouette") # The second method is the [persistence image](http://jmlr.org/papers/v18/16-337.html). A persistence image is obtained by rotating by $-\pi/4$, centering Gaussian functions on all diagram points (usually weighted by a parameter function---here we consider the squared distance to the diagonal) and summing all these Gaussians. This gives a 2D function, that is pixelized into an image. 
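Before looking at Gudhi's output, here is a NumPy-only sketch of the persistence-image construction just described (rotation to birth/persistence coordinates, Gaussians weighted by squared persistence, pixelization on a grid). This is a toy illustration under assumed grid range and bandwidth, not Gudhi's actual implementation:

```python
import numpy as np

def persistence_image(diagram, res=20, bandwidth=0.1):
    # Rotate each (birth, death) point to (birth, persistence) coordinates
    pts = np.array([(b, d - b) for b, d in diagram])
    xs = np.linspace(0, 1, res)
    gx, gy = np.meshgrid(xs, xs)
    img = np.zeros((res, res))
    for (x, y) in pts:
        w = y ** 2  # weight: squared distance to the diagonal (persistence^2)
        img += w * np.exp(-((gx - x) ** 2 + (gy - y) ** 2) / (2 * bandwidth ** 2))
    return img

img = persistence_image([(0.1, 0.6), (0.2, 0.3)])
```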
PI = gd.representations.PersistenceImage(bandwidth=1e-4, weight=lambda x: x[1]**2, \ im_range=[0,.004,0,.004], resolution=[100,100]) pi = PI.fit_transform([acX.persistence_intervals_in_dimension(1)]) plt.imshow(np.flip(np.reshape(pi[0], [100,100]), 0)) plt.title("Persistence Image") # Neat, right? Gudhi also contains implementations of less common vectorization methods, such as the [Betti curve](https://www.researchgate.net/publication/316604237_Time_Series_Classification_via_Topological_Data_Analysis), the [complex polynomial]( https://link.springer.com/chapter/10.1007%2F978-3-319-23231-7_27), or the [topological vector](https://diglib.eg.org/handle/10.1111/cgf12692). You can check the [representations](https://gudhi.inria.fr/python/3.1.0.rc1/representations.html) module to see everything that is available. # Gudhi also contains implementations of various kernels, i.e., scalar products for persistence diagrams. More precisely, a kernel is a function $k$ that takes a pair of diagrams as inputs and outputs a real value such that: # $$k(D,D')=\langle \Phi(D), \Phi(D')\rangle_{\mathcal{H}},$$ # for some implicit Hilbert space $\mathcal{H}$ and continuous function $\Phi:\mathcal{D}\rightarrow\mathcal{H}$. Many algorithms, such as SVM or PCA, only require pairwise scalar products to be able to run on data. With Gudhi, you can compute the [persistence scale space kernel](https://www.cv-foundation.org/openaccess/content_cvpr_2015/papers/Reininghaus_A_Stable_Multi-Scale_2015_CVPR_paper.pdf), [persistence weighted Gaussian kernel](http://proceedings.mlr.press/v48/kusano16.html), [persistence Fisher kernel](https://papers.nips.cc/paper/8205-persistence-fisher-kernel-a-riemannian-manifold-kernel-for-persistence-diagrams) and [sliced Wasserstein kernel]( http://proceedings.mlr.press/v70/carriere17a.html).
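To make the defining identity concrete, here is a toy kernel with an explicit feature map $\Phi$ — purely illustrative and not one of Gudhi's kernels: mapping a diagram to the sorted, zero-padded vector of its points' persistences and taking a Euclidean dot product gives a symmetric, positive semi-definite kernel by construction:

```python
import numpy as np

# Toy explicit feature map Phi: diagram -> fixed-length vector of persistences.
# The induced k(D, D') = <Phi(D), Phi(D')> is symmetric and PSD by construction.
def phi(diagram, size=4):
    pers = sorted((d - b for b, d in diagram), reverse=True)[:size]
    return np.pad(np.array(pers, dtype=float), (0, size - len(pers)))

def k(D1, D2):
    return float(np.dot(phi(D1), phi(D2)))

D1 = [(0.0, 2.0), (1.0, 1.5)]
D2 = [(0.0, 1.0)]
```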
Of course, you can also directly compute the bottleneck and Wasserstein distances ;-) # In order to test these functions, we need a second point cloud and corresponding persistence diagram. Let's pick another $r$. r = 4.1 Y = np.empty([num_pts,2]) x, y = np.random.uniform(), np.random.uniform() for i in range(num_pts): Y[i,:] = [x, y] x = (Y[i,0] + r * Y[i,1] * (1-Y[i,1])) % 1. y = (Y[i,1] + r * x * (1-x)) % 1. acY = gd.AlphaComplex(points=Y).create_simplex_tree() dgmY = acY.persistence() gd.plot_persistence_diagram(dgmY) # Looks like this one has interesting homology! Let's plot it. plt.scatter(Y[:,0], Y[:,1], s=3) plt.show() # Indeed, there is a hole in the middle now for some reason... # Let's now check all pairwise kernels and metrics! # + PWG = gd.representations.PersistenceWeightedGaussianKernel(bandwidth=0.01, kernel_approx=None,\ weight=lambda x: np.arctan(np.power(x[1], 1))) PWG.fit([acX.persistence_intervals_in_dimension(1)]) pwg = PWG.transform([acY.persistence_intervals_in_dimension(1)]) print("PWG kernel is " + str(pwg[0][0])) PSS = gd.representations.PersistenceScaleSpaceKernel(bandwidth=1.)
PSS.fit([acX.persistence_intervals_in_dimension(1)]) pss = PSS.transform([acY.persistence_intervals_in_dimension(1)]) print("PSS kernel is " + str(pss[0][0])) PF = gd.representations.PersistenceFisherKernel(bandwidth_fisher=.001, bandwidth=.001, kernel_approx=None) PF.fit([acX.persistence_intervals_in_dimension(1)]) pf = PF.transform([acY.persistence_intervals_in_dimension(1)]) print("PF kernel is " + str(pf[0][0])) SW = gd.representations.SlicedWassersteinKernel(bandwidth=.1, num_directions=100) SW.fit([acX.persistence_intervals_in_dimension(1)]) sw = SW.transform([acY.persistence_intervals_in_dimension(1)]) print("SW kernel is " + str(sw[0][0])) BD = gd.representations.BottleneckDistance(epsilon=.001) BD.fit([acX.persistence_intervals_in_dimension(1)]) bd = BD.transform([acY.persistence_intervals_in_dimension(1)]) print("Bottleneck distance is " + str(bd[0][0])) WD = gd.representations.WassersteinDistance(internal_p=2, order=2) WD.fit([acX.persistence_intervals_in_dimension(1)]) wd = WD.transform([acY.persistence_intervals_in_dimension(1)]) print("Wasserstein distance is " + str(wd[0][0])) # - # Cool! However, you have probably noticed that there are quite a lot of parameters to choose. In practice, it is better to cross-validate among a bunch of them and pick the best ones. We will see shortly that it is actually very easy to do this with Gudhi! # Now let's generate a complete dataset. We will generate point clouds and corresponding persistence diagrams for various radii $r$. Of course, you can increase the size of the dataset by modifying the num_diag_per_class variable. # + num_diag_per_class = 10 dgms, labs = [], [] for idx, radius in enumerate([2.5, 3.5, 4., 4.1, 4.3]): for _ in range(num_diag_per_class): labs.append(idx) X = np.empty([num_pts,2]) x, y = np.random.uniform(), np.random.uniform() for i in range(num_pts): X[i,:] = [x, y] x = (X[i,0] + radius * X[i,1] * (1-X[i,1])) % 1. y = (X[i,1] + radius * x * (1-x)) % 1. 
ac = gd.AlphaComplex(points=X).create_simplex_tree(max_alpha_square=1e12) dgm = ac.persistence() dgms.append(ac.persistence_intervals_in_dimension(1)) # - # Then, we shuffle the data and create an 80/20 split for train and test. test_size = 0.2 perm = np.random.permutation(len(labs)) limit = int(test_size * len(labs)) test_sub, train_sub = perm[:limit], perm[limit:] train_labs = np.array(labs)[train_sub] test_labs = np.array(labs)[test_sub] train_dgms = [dgms[i] for i in train_sub] test_dgms = [dgms[i] for i in test_sub] # In order to cross-validate among all methods and parameters available, we create a scikit-learn pipeline for processing the diagrams. The pipeline will: # 1. extract the points of the persistence diagrams with finite coordinates (i.e. the non-essential points) # 2. scale or not the diagrams in the unit square # 3. handle diagrams with vectorization or kernel methods with Gudhi # 4. train a classifier from the scikit-learn package # # As you can see from the code below, it is quite simple! Here, we cross-validate among a kernel-SVM with sliced Wasserstein and persistence weighted Gaussian kernels, C-SVM on persistence images, random forests on landscapes, and $k$-nearest neighbors on bottleneck distances. We also try uniformly scaling the diagrams to the unit square. # + from sklearn.preprocessing import MinMaxScaler from sklearn.pipeline import Pipeline from sklearn.svm import SVC from sklearn.ensemble import RandomForestClassifier from sklearn.neighbors import KNeighborsClassifier # Definition of pipeline pipe = Pipeline([("Separator", gd.representations.DiagramSelector(limit=np.inf, point_type="finite")), ("Scaler", gd.representations.DiagramScaler(scalers=[([0,1], MinMaxScaler())])), ("TDA", gd.representations.PersistenceImage()), ("Estimator", SVC())]) # Parameters of pipeline.
This is the place where you specify the methods you want to use to handle diagrams param = [{"Scaler__use": [False], "TDA": [gd.representations.SlicedWassersteinKernel()], "TDA__bandwidth": [0.1, 1.0], "TDA__num_directions": [20], "Estimator": [SVC(kernel="precomputed", gamma="auto")]}, {"Scaler__use": [False], "TDA": [gd.representations.PersistenceWeightedGaussianKernel()], "TDA__bandwidth": [0.1, 0.01], "TDA__weight": [lambda x: np.arctan(x[1]-x[0])], "Estimator": [SVC(kernel="precomputed", gamma="auto")]}, {"Scaler__use": [True], "TDA": [gd.representations.PersistenceImage()], "TDA__resolution": [ [5,5], [6,6] ], "TDA__bandwidth": [0.01, 0.1, 1.0, 10.0], "Estimator": [SVC()]}, {"Scaler__use": [True], "TDA": [gd.representations.Landscape()], "TDA__resolution": [100], "Estimator": [RandomForestClassifier()]}, {"Scaler__use": [False], "TDA": [gd.representations.BottleneckDistance()], "TDA__epsilon": [0.1], "Estimator": [KNeighborsClassifier(metric="precomputed")]} ] # - # Our final model is the best estimator found after 3-fold cross-validation of our pipeline. # + from sklearn.model_selection import GridSearchCV model = GridSearchCV(pipe, param, cv=3) # - # Now it is time to train the model. Since we perform cross-validation, the computation can be quite long, especially if using k-NN with bottleneck distances, which is quite time-consuming. You may consider grabbing a cup of coffee at this point. model = model.fit(train_dgms, train_labs) # Training is finally over! Let us check which method is best for persistence diagrams on this classification problem. print(model.best_params_) # Looks like random forests and landscapes did the best for this small dataset! Let's see our model accuracy on the test set. print("Train accuracy = " + str(model.score(train_dgms, train_labs))) print("Test accuracy = " + str(model.score(test_dgms, test_labs))) # 70%, not so bad for such a small dataset!
The accuracy you get can even improve for bigger datasets and more parameters in the cross-validation (but training time will increase as well ;-)).
Tuto-GUDHI-representations.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import matplotlib.pyplot as plt import numpy as np import numpy.linalg as nplin import itertools np.random.seed(0) def operators(s): #generate terms in the energy function n_seq,n_var = s.shape ops = np.zeros((n_seq,n_var+int(n_var*(n_var-1)/2.0))) jindex = 0 for index in range(n_var): ops[:,jindex] = s[:,index] jindex +=1 for index in range(n_var-1): for index1 in range(index+1,n_var): ops[:,jindex] = s[:,index]*s[:,index1] jindex +=1 return ops def energy_ops(ops,w): return np.sum(ops*w[np.newaxis,:],axis=1) def generate_seqs(n_var,n_seq,n_sample=30,g=2.0): samples = np.random.choice([1.0,-1.0],size=(n_seq*n_sample,n_var),replace=True) ops = operators(samples) n_ops = ops.shape[1] #w_true = g*(np.random.rand(ops.shape[1])-0.5)/np.sqrt(float(n_var)) w_true = np.random.normal(0.,g/np.sqrt(n_var),size=n_ops) sample_energy = energy_ops(ops,w_true) p = np.exp(sample_energy) p /= np.sum(p) out_samples = np.random.choice(np.arange(n_seq*n_sample),size=n_seq,replace=True,p=p) return w_true,samples[out_samples] #,p[out_samples],sample_energy[out_samples] def eps_machine(s,eps_scale=.01,max_iter=151,alpha=0.1): MSE = np.zeros(max_iter) KL = np.zeros(max_iter) E_av = np.zeros(max_iter) n_seq,n_var = s.shape ops = operators(s) n_ops = ops.shape[1] cov_inv = np.eye(ops.shape[1]) np.random.seed(13) w = np.random.rand(n_var+int(n_var*(n_var-1)/2.0))-0.5 for i in range(max_iter): energies_w = energy_ops(ops,w) probs_w = np.exp(-energies_w*(1-eps_scale)) z_data = np.sum(probs_w) probs_w /= z_data ops_expect_w = np.sum(probs_w[:,np.newaxis]*ops,axis=0) #if iterate%int(max_iter/5.0)==0: E_exp = (probs_w*energies_w).sum() KL[i] = -E_exp - np.log(z_data) + np.sum(np.log(np.cosh(w*eps_scale))) E_av[i] = energies_w.mean() MSE[i] = ((w-w_true)**2).mean() #print(RMSE[i]) 
#print(eps_scale,iterate,nplin.norm(w-w_true),RMSE,KL,E_av) sec_order = w*eps_scale w += alpha*cov_inv.dot((ops_expect_w - sec_order)) #print('final ',eps_scale,iterate,nplin.norm(w-w_true)) return MSE,KL,E_av # + max_iter = 151 n_var,n_seq = 25,1000 g = 0.5 w_true,seqs = generate_seqs(n_var,n_seq,g=g) eps_list = np.linspace(0.2,1.,17,endpoint=True) n_eps = len(eps_list) #w = np.zeros((n_eps,max_iter,)) MSE = np.zeros((n_eps,max_iter)) KL = np.zeros((n_eps,max_iter)) E_av = np.zeros((n_eps,max_iter)) for i,eps in enumerate(eps_list): print('eps:',eps) MSE[i,:],KL[i,:],E_av[i,:] = eps_machine(seqs,eps_scale=eps,max_iter=max_iter) # + nx,ny = 3,n_eps fig, ax = plt.subplots(ny,nx,figsize=(nx*3.5,ny*3)) #lims = [np.min([w_true, w]), np.max([w_true, w])] for i,eps in enumerate(eps_list): ax[i,0].set_title('eps=%s'%eps) ax[i,0].plot(MSE[i]) ax[i,1].plot(KL[i]) ax[i,2].plot(E_av[i]) #ax[i,0].set_ylim([0,0.01]) #ax[i,1].set_ylim([-8,-5]) #ax[i,2].set_ylim([1,2.5]) #ax[0,1].set_ylim([-6.6,-6.1]) #ax[1,1].set_ylim([-6.8,-6.3]) #ax[2,1].set_ylim([-7.2,-7.0]) #ax[3,1].set_ylim([-7.5,-7.3]) plt.tight_layout(h_pad=1, w_pad=1.5) #plt.savefig('fig.pdf', format='pdf', dpi=100) # - i,j = np.unravel_index(MSE.argmin(), MSE.shape) eps0 = eps_list[i] print('optimal eps:',eps0) for i,eps in enumerate(eps_list): j0 = np.argmin(MSE[i,:]) j = np.argmin(KL[i,:]) print(i,eps,MSE[i,j0],MSE[i,j],KL[i,j],np.max(E_av[i,:]))
Ref/old/2019.07.22/eps_machine_stopping_g0.5.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 (ipykernel) # language: python # name: python3 # --- # **SALES ANALYSIS** # 1) Input: # - A year of sales by month in csvs # # 2) Output: # - Exploratory Data Analysis with insights # # 3) Tasks: # # - read and merge csv's into a single file; # # - check data dimension # # - check data types # # - check for NAs # # - check duplicates # # - analysis to better understand features # .Descriptive analysis (numerical and categorical) # - hypothesis mindmap # # - EDA # .Univariate Analysis # .Bivariate Analysis # .Multivariate Analysis # # Imports # + import os import inflection import pandas as pd import numpy as np import matplotlib.pyplot as plt import seaborn as sns from IPython.core.display import HTML from IPython.display import Image # + [markdown] heading_collapsed=true # ## Helper Functions # + hidden=true def jupyter_settings(): # %matplotlib inline # %pylab inline plt.style.use( 'bmh' ) plt.rcParams['figure.figsize'] = [25, 12] plt.rcParams['font.size'] = 24 display( HTML( '<style>.container { width:100% !important; }</style>') ) pd.options.display.max_columns = None pd.options.display.max_rows = None pd.set_option( 'display.expand_frame_repr', False ) sns.set() # + hidden=true jupyter_settings() # - # # Loading Dataset # + files = [file for file in os.listdir('./Sales_Data')] df_all_months = pd.DataFrame() for file in files: df = pd.read_csv("./Sales_Data/"+file) df_all_months = pd.concat([df_all_months, df]) df_all_months.to_csv('all_months.csv', index = False) # - df_all_months.head() files = [file for file in os.listdir('./Sales_Data')] files df_test = pd.read_csv('./Sales_Data/Sales_December_2019.csv') df_test.head() # # DATA DESCRIPTION df1 = df_all_months.copy() df1.columns # ## Rename Columns # + cols_old = ['Order_ID', 'Product', 'Quantity_Ordered', 'Price_Each', 'Order_Date', 'Purchase_Address'] snakecase
= lambda x: inflection.underscore(x) cols_new = list( map( snakecase, cols_old ) ) #Rename Columns df1.columns = cols_new # - df1.head() # ## Data Dimension print('Number of Rows: {}'.format(df1.shape[0])) print('Number of Columns: {}'.format(df1.shape[1])) # ## Data Types df1.dtypes df1.info() # ## Check NA Values df1.isnull().sum() # + # checking NA values sum missing_count = df1.isnull().sum() #the count of missing values value_count = df1.isnull().count() missing_percentage = round(missing_count/value_count * 100, 2) # the percentage of missing values missing_df = pd.DataFrame({'missing value count': missing_count, 'percentage': missing_percentage}) missing_df # - # checking whether the NA values came from merging the csv's or not for file in files: df_test = pd.read_csv("./Sales_Data/"+file) print(df_test.isnull().sum()) df1.head(500) # rows that are entirely NaN; since all values are missing, we can drop them nan_df = df1[df1.isna().any(axis=1)] nan_df.head() # ## Drop Duplicates df1.duplicated(keep = 'first').sum() df1[df1.duplicated(keep = False)] df1 = df1.drop_duplicates() df1.isna().sum() # ## Fill Out NA # + code_folding=[] #drop NAs df1 = df1.dropna() df1.isna().sum() # - #cleaning some rows temp_df = df1[df1['order_date'] == 'Order Date'] temp_df.head() df1 = df1[df1['order_date'] != 'Order Date'] temp2_df = df1[df1['order_date'] == 'Order Date'] temp2_df.head() # ## Change Types df1.dtypes import datetime df1['order_date'] = pd.to_datetime(df1['order_date']) df1.head() df1 = df1.astype({'order_id': 'int64', 'quantity_ordered': 'int64', 'price_each': 'float64'}) df1.dtypes # ## Descriptive Statistics # + # useful to get a first understanding of the features and the business problem # we can also detect some data errors # + # separate numerical and categorical attributes num_attributes = df1.select_dtypes( include = 'number') cat_attributes = df1.select_dtypes( include = 'object') # - # ### Numerical Attributes # + # Central Tendency - Mean, median ct1 = pd.DataFrame(
num_attributes.apply( np.mean ) ).T ct2 = pd.DataFrame( num_attributes.apply( np.median ) ).T # Dispersion - std, min, max, range, skew, kurtosis d1 = pd.DataFrame(num_attributes.apply( np.std )).T d2 = pd.DataFrame(num_attributes.apply( min )).T d3 = pd.DataFrame(num_attributes.apply( max )).T d4 = pd.DataFrame(num_attributes.apply( lambda x: x.max() - x.min() )).T d5 = pd.DataFrame(num_attributes.apply( lambda x: x.skew() )).T d6 = pd.DataFrame(num_attributes.apply( lambda x: x.kurtosis() )).T #concatenate m = pd.concat( [d2, d3, d4, ct1, ct2, d1, d5, d6] ).T.reset_index() m.columns = ['attributes', 'min', 'max', 'range', 'mean', 'median', 'std', 'skew', 'kurtosis'] m # - # ### Categorical Attributes # check unique values of categorical features cat_attributes.apply(lambda x: x.unique().shape[0]) # # FEATURE ENGINEERING df2 = df1.copy() # ## Hypothesis Mindmap # ## Creating Hypothesis # ## Final Hypothesis List # ## Feature Engineering # # VARIABLE FILTERING # + [markdown] heading_collapsed=true # ## Line Filtering # + hidden=true # + [markdown] heading_collapsed=true # ## Columns Selection # + hidden=true # - # # EXPLORATORY DATA ANALYSIS (EDA) # Q1: What was the best month for sales? How much was earned that month?
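A minimal sketch of how Q1 could be answered — shown on a tiny hypothetical frame shaped like `df1` (the real CSVs are not bundled here): derive a sales column, group by month, and take the argmax:

```python
import pandas as pd

# Hypothetical mini-frame standing in for df1 (same relevant columns).
df = pd.DataFrame({
    "order_date": pd.to_datetime(["2019-04-01", "2019-04-15", "2019-12-20"]),
    "quantity_ordered": [2, 1, 3],
    "price_each": [10.0, 5.0, 20.0],
})
df["sales"] = df["quantity_ordered"] * df["price_each"]
# Total revenue per calendar month, then the month with the highest total
monthly = df.groupby(df["order_date"].dt.month)["sales"].sum()
best_month, best_total = monthly.idxmax(), monthly.max()
```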
# ## Univariate Analysis # + [markdown] heading_collapsed=true # ### Response Variable # + hidden=true # + [markdown] heading_collapsed=true # ### Numerical Variable # + hidden=true # + [markdown] heading_collapsed=true # ### Categorical Variable # + hidden=true # + [markdown] heading_collapsed=true # ## Bivariate Analysis # + hidden=true # - # ## Multivariate Analysis # # DATA PREPARATION # + [markdown] heading_collapsed=true # ## Feature Normalization # + hidden=true # + [markdown] heading_collapsed=true # ## Feature Rescaling # + hidden=true # + [markdown] heading_collapsed=true # ## Feature Transformation # + [markdown] heading_collapsed=true hidden=true # ### Enconding # + hidden=true # + [markdown] heading_collapsed=true hidden=true # ### Target Variable Transformation # + hidden=true # + [markdown] heading_collapsed=true hidden=true # ### Nature Transformation # + hidden=true # + [markdown] heading_collapsed=true # # FEATURE SELECTION # + hidden=true # + [markdown] heading_collapsed=true hidden=true # ## Spliting dataframe into training and test dataset # + hidden=true # + [markdown] heading_collapsed=true hidden=true # ## Feature Selector (boruta?) # + hidden=true # + [markdown] heading_collapsed=true hidden=true # ## Best Features # + hidden=true # + [markdown] heading_collapsed=true # # MACHINE LEARNING ALGORITHM MODELS # + [markdown] hidden=true # ## Modelo 1 # + [markdown] hidden=true # ## Modelo 2 # + [markdown] hidden=true # ## Modelo 3 # + [markdown] hidden=true # ## Compare Model's Performance # + hidden=true # - # # HYPERPARAMETERS FINE TUNING # # ERROR INTERPRETATION # # MODEL DEPLOYMENT
Sales-Data/.ipynb_checkpoints/Sales_Analysis-checkpoint.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 (ipykernel) # language: python # name: python3 # --- # ## Sorting # # Vizzu provides multiple options to sort data. By default, the data is sorted by the order it is added to Vizzu. This is why we suggest adding temporal data such as dates in chronological order - from oldest to newest. # # You can also sort the elements by value, which will provide you with an ascending order. # + from ipyvizzu import Chart, Data, Config chart = Chart() data = Data.from_json("../data/music_example_data.json") chart.animate(data) chart.animate(Config({ "channels": { "y": { "set": ["Popularity", "Types"] }, "x": { "set": "Genres" }, "label": { "attach": "Popularity" } }, "color": { "attach": "Types" }, "title": "Switch to ascending order..." })) chart.animate(Config({ "sort": "byValue", })) snapshot1 = chart.store() # - # If you want descending order instead, you have to set the reverse parameter to true. When used without setting the sorting to byValue, the elements will be in the opposite order to the one in which they appear in the data set added to Vizzu. # + chart.animate(snapshot1) chart.animate(Config({"title": "...or descending order."})) chart.animate(Config({ "reverse": True, })) snapshot2 = chart.store() # - # This is how to switch back to the default sorting. # + chart.animate(snapshot2) chart.animate(Config({"title": "Let's get back to where we were"})) chart.animate(Config({ "sort": "none", "reverse": False, })) snapshot3 = chart.store() # - # When you have more than one dimension on a channel, their order determines how the elements are grouped.
# + chart.animate(snapshot3) chart.animate(Config({"title": "With two discretes on one axis..."})) chart.animate(Config({ "channels": { "y": { "detach": "Types" }, "x": { "set": ["Genres", "Types"] }, } })) snapshot4 = chart.store() # - # When switching the order of dimensions on the x-axis Vizzu will rearrange the elements according to this new logic. # + chart.animate(snapshot4) chart.animate(Config({"title": "...grouping is determined by their order."})) chart.animate(Config({ "channels": { "x": { "set": ["Types", "Genres"] }, } }))
docs/examples/sorting.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python [conda root] # language: python # name: conda-root-py # --- import random import matplotlib.pyplot as plt import numpy as np import seaborn as sns from main.main import BlackjackGame, on_policy_first_visit_mc_control, win_rate # + random.seed(42) mc = BlackjackGame() iters = np.logspace(0, 4) # train a policy for each iteration budget and record its win rate win_rates = [win_rate(mc, on_policy_first_visit_mc_control(mc, eps=0.4, max_iter=n_iter)) for n_iter in iters] sns.set_style("darkgrid") plt.plot(iters, win_rates) plt.xlabel('Num of iterations') plt.ylabel('Win rate') plt.title('On policy first visit MC blackjack') plt.show()
hw3/hw3.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + id="viRUSYe9SK_6" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="291ae02e-ae3d-40dc-89e7-50b0783c5e94" # necessary imports import cv2 as cv import numpy as np import matplotlib.pyplot as plt from google.colab.patches import cv2_imshow as cv_imshow # version check cv.__version__ # + id="1d-cC2EdSYgp" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 52} outputId="9952456b-f212-4338-b675-cfc3b2493890" ## Creating a function to download img from a url specified by the USER import urllib.request as urlrequest def dl_img(url, file_path, file_name): _path = file_path + file_name + '.jpg' urlrequest.urlretrieve(url, _path) url = input("URL: ") saveas = input("File Name: ") dl_img(url, '/content/', saveas) # https://cdn.shopify.com/s/files/1/1893/0477/products/5PCS_Framed_Colorful_Lion_Canvas_Prints_grande.png?v=1504331897 # https://www.artgalleryofhamilton.com/wp-content/uploads/2018/04/abstract-painting.jpg # https://pythonprogramming.net/static/images/opencv/bookpage.jpg # https://matplotlib.org/3.1.1/_images/sphx_glr_scatter_piecharts_thumb.png # https://matplotlib.org/3.1.1/_images/sphx_glr_scatter_001.png # https://matplotlib.org/3.1.1/_images/sphx_glr_scatter3d_001.png # https://www.mathworks.com/help/examples/matlabmobile/win64/Scatter3DPlotExample_01.png # + id="6m7Z_cReSeTL" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 242} outputId="d5c8279b-5634-4f85-9402-723167312d1e" # load an img img1 = cv.imread('abstract_book_page.jpg', 1) # resizing imgs. to same size & display the img. imgr1 = cv.resize(img1, None, None, 0.5, 0.5, interpolation = cv.INTER_AREA) cv_imshow(imgr1) # + [markdown] id="4U2YUXE7TacP" colab_type="text" # * THRESHOLDING (WHY??)
# > * Extreme Simplification of an image # * Colored --> Gray-scale --> 0(black) or 1(white) --> applying gradients # * Gray-scale: loses the readability on application of threshold (based only on white or black depending upon the lighting) # # # * GAUSSIAN ADAPTIVE THRESHOLDING # > * Thresholding based on the region where it is being applied # * cv.adaptiveThreshold(gray_imgr1, maxValue, cv.ADAPTIVE_THRESH_GAUSSIAN_C(adaptiveMethod), cv.THRESH_BINARY(thresholdType), blockSize, C): # * blockSize = size of the Gaussian window (sub-region size) # * C = a constant subtracted from the threshold value for each sub-region # + id="dD2VZz0oxlCP" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 242} outputId="8c5af949-5299-4667-983a-fe7915661fab" # thresholding retval, threshold = cv.threshold(imgr1, 12, 255, cv.THRESH_BINARY) # display cv_imshow(threshold) # + id="81cgkB2Hyv17" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 242} outputId="5e2d7abb-45f6-4dc8-dc3e-0491ef548e30" # convert to gray-scale & then applying threshold to it gray_imgr1 = cv.cvtColor(imgr1, cv.COLOR_BGR2GRAY) retval, threshold = cv.threshold(gray_imgr1, 12, 255, cv.THRESH_BINARY) cv_imshow(threshold) # + id="Pf_BCCT40Mry" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 757} outputId="b419ecfc-9b76-483f-d359-906fe54c97fe" # convert to gray-scale & then applying threshold to it gray_imgr1 = cv.cvtColor(imgr1, cv.COLOR_BGR2GRAY) help(cv.adaptiveThreshold) gaussian_threshold = cv.adaptiveThreshold(gray_imgr1, 255, cv.ADAPTIVE_THRESH_GAUSSIAN_C, cv.THRESH_BINARY, 115, 1) cv_imshow(gaussian_threshold)
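To see what "adaptive" means mechanically, here is a NumPy-only sketch of the mean variant (`cv.ADAPTIVE_THRESH_MEAN_C`): each pixel is compared against its local neighborhood mean minus `C`. OpenCV's Gaussian variant replaces the plain mean with a Gaussian-weighted one; this sketch is an illustration, not a bit-exact reproduction of OpenCV's behavior:

```python
import numpy as np

def adaptive_threshold_mean(gray, block_size=3, C=1):
    # Compare each pixel to the mean of its block_size x block_size
    # neighborhood minus the constant C (edge-replicated padding).
    pad = block_size // 2
    padded = np.pad(gray.astype(float), pad, mode="edge")
    out = np.zeros(gray.shape, dtype=np.uint8)
    for i in range(gray.shape[0]):
        for j in range(gray.shape[1]):
            local_mean = padded[i:i + block_size, j:j + block_size].mean()
            out[i, j] = 255 if gray[i, j] > local_mean - C else 0
    return out

# A bright pixel surrounded by dark ones survives; the dark background does not.
gray = np.array([[0, 0, 0], [0, 255, 0], [0, 0, 0]], dtype=np.uint8)
binary = adaptive_threshold_mean(gray, block_size=3, C=1)
```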
OpenCV_with_Python_06.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: 'Python 3.8.5 64-bit (''hlu'': conda)' # language: python # name: python3 # --- # # Hierarchical Clustering Using Iris Dataset # # # + # # Importing required libraries/modules: # %reset -f # algorithm import numpy as np import pandas as pd from sklearn.datasets import load_iris # plotting import seaborn as sns import matplotlib.pyplot as plt # %matplotlib inline rng = np.random.RandomState(17711) # + #collapse-output data_df, targets = load_iris(as_frame=True, return_X_y=True) data_features = ["l_sepal", "w_sepal", "l_petal", "w_petal"] data_df = pd.DataFrame(data_df) data_df.columns = data_features data_df["species"] = targets.map({0: "setosa", 1: "versicolor", 2: "virginica"}) data_df = data_df.sample(frac=1, random_state=rng) data_df = data_df.reset_index(drop=True) data_points = data_df[data_features].to_numpy() data_df.head(10) # + # n = len(data_df) dist_mtrx = np.zeros([n, n]) # Euclidean distance of each point to all others (sqrt of summed squares) for idx, pt in enumerate(data_points): dist_mtrx[idx] = np.sqrt(np.sum(np.subtract(pt, data_points) ** 2, axis=1)) print(f"{dist_mtrx.shape= }") dist_mtrx # - # # function `hierarchical_clustering()` : # collapse-hide def hierarchical_clustering(mtrx, criterion="max", n_clusters=None, verbose=False): """ Implement the agglomerative hierarchical clustering algorithm given a distance matrix and linkage criterion. Parameters: ----------- mtrx: list, numpy array The distance (dissimilarity) matrix. This should be a square symmetric matrix. criterion: ["min", "max"], optional, default=`max` The linkage criterion to be used: single-linkage `min` or complete-linkage `max`. n_clusters: int, optional, default=`1` The desired number of clusters. Returns: ---------- clusters: tuple / list of tuples If `n_clusters` is specified, the clustering output at that level only; Else, a list of the clustering outputs at all levels.
""" assert criterion in ["min", "max"], "Pass ['min'/'max'] for clustering linkage criterion!" mtrx = np.array(mtrx.copy(), dtype=float) assert len(mtrx.shape) == 2, "The distance matrix must be a square, symmetric matrix!" assert mtrx.shape[0] == mtrx.shape[1], "The distance matrix must be a square, symmetric matrix!" # create auto-generated items array items = [f"P{i}" for i in range(mtrx.shape[0])] items = np.array(items.copy(), dtype=object) print(f"auto-generated {items[:5]= } ...") if verbose else None criterion = np.min if criterion == "min" else np.max hist_clustering = [tuple(items)] # prevent the zero-diagonal from interfering with np.min() for (i, j) in np.argwhere(mtrx == 0): mtrx[i, j] = 99 while len(items) != (n_clusters if n_clusters else 1): # get closest clusters min_value = np.min(mtrx) idxs_min_value = np.argwhere(mtrx == min_value)[0] idx_0, idx_1 = idxs_min_value # create current clusters and update items and clustering history list items[idx_0] = "".join(items[idxs_min_value]) items = np.delete(items, idx_1) hist_clustering.append(tuple(items)) # update distance matrix mtrx[:, idx_0] = criterion([mtrx[:, idx_0], mtrx[:, idx_1]], axis=0) mtrx[idx_0, :] = criterion([mtrx[:, idx_0], mtrx[:, idx_1]], axis=0) mtrx = np.delete(mtrx, idx_1, axis=0) mtrx = np.delete(mtrx, idx_1, axis=1) mtrx[idx_0, idx_0] = 99 # print clustering progress; if verbose print(f"{'-' * 75}\n {min_value= :.4f} @ {list(idxs_min_value)= }") if verbose else None print(f" {list(items)= }\n {mtrx}") if verbose else None print(f"{'-' *75}") if verbose else None return hist_clustering[-1] if n_clusters else hist_clustering # # function `decode_clustering_output()` : # collapse-hide def decode_clustering_output(df, clusters, criterion): """ Decode the output of the `hierarchical_clustering()` function given the original dataframe. Calculate the clustering accuracy. Plot the clusters. 
Parameters: ----------- df: pandas dataframe The original dataframe from which the distance matrix was computed. clusters: list/tuple of strings The output of the `hierarchical_clustering()` function. criterion: ["min", "max"] The linkage criterion passed to the `hierarchical_clustering()` function. """ assert type(clusters) in [list, tuple], "`clusters` should be a list/tuple of strings" assert type(clusters[0]) == str, "`clusters` should be a list/tuple of strings" criterion = "Single" if criterion == "min" else "Complete" temp_df = df.copy() for c in clusters: # convert the string into an array of integer indices c_i = np.array([int(i) for i in str(c).replace("P", " ").split()]) # get the species in the dataframe at those indices c_i_species = temp_df.loc[c_i]["species"].value_counts().to_dict() # assign the current cluster to the specie with the most occurrence c_i_max_specie = sorted(c_i_species.items(), key=lambda i: i[1])[-1][0] temp_df.loc[c_i, "clusters"] = c_i_max_specie print(f"{len(c_i)= }, {c_i_species= }, {c_i_max_specie=}") accuracy = sum(temp_df.species == temp_df.clusters) / len(temp_df) print(f"clustering {accuracy= :.4f}") plt.figure(figsize=(12, 5)) plt.grid() plt.title(f"Iris Species Classification Using {criterion}-Linkage Criterion", size="x-large") sns.scatterplot( x=temp_df[temp_df.columns[0]], y=temp_df[temp_df.columns[3]], s=85, hue=temp_df["clusters"], style=temp_df["clusters"], palette=["orange", "magenta", "dodgerblue"], ); #hide-input plt.figure(figsize=(12, 5)) plt.grid() plt.title("Iris Species Classification From Dataset", size="x-large") sns.scatterplot( x=data_df[data_df.columns[0]], y=data_df[data_df.columns[3]], s=85, hue=data_df["species"], style=data_df["species"], palette=["orange", "magenta", "dodgerblue"], ); # complete_linkage = hierarchical_clustering(dist_mtrx, n_clusters=3, criterion="max") decode_clustering_output(data_df, complete_linkage, criterion="max") # single_linkage = hierarchical_clustering(dist_mtrx, 
n_clusters=3, criterion="min") decode_clustering_output(data_df, single_linkage, criterion="min") # <hr>
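The `hierarchical_clustering()` routine above masks the zero diagonal with the literal value 99 before calling `np.min`, which silently breaks if any pairwise distance reaches 99. A minimal sketch of the closest-pair step using `np.inf` as the sentinel instead (standalone NumPy, not the notebook's full merge loop):

```python
import numpy as np

def closest_pair(mtrx):
    """Return the indices of the two closest clusters, masking the
    zero diagonal with np.inf (a safer sentinel than a finite literal,
    which fails once real distances exceed it)."""
    m = np.array(mtrx, dtype=float)   # copy, since fill_diagonal mutates
    np.fill_diagonal(m, np.inf)
    return np.unravel_index(np.argmin(m), m.shape)

# toy symmetric distance matrix: points 0 and 1 are closest
D = np.array([[0.0, 1.0, 5.0],
              [1.0, 0.0, 4.0],
              [5.0, 4.0, 0.0]])
i, j = closest_pair(D)
```

`np.fill_diagonal` mutates its argument in place, which is why the sketch copies the matrix first.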
_notebooks/ipynb_data/Hierarchical-Clustering-Iris-Dataset.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: "Python 3.7 (Intel\xAE oneAPI)" # language: python # name: c009-intel_distribution_of_python_3_oneapi-beta05-python # --- # + # ============================================================= # Copyright © 2020 Intel Corporation # # SPDX-License-Identifier: MIT # ============================================================= # - # # Intel Python sklearn Getting Started Example for Shared Memory Systems # # Intel(R) Extension for Scikit-learn is a seamless way to speed up your Scikit-learn application. The acceleration is achieved through the use of the Intel(R) oneAPI Data Analytics Library (oneDAL). # # In this example we will be recognizing **handwritten digits** using a machine learning classification algorithm. Handwritten digits Dataset is from sklearn toy datasets. Digits dataset contains 1797 input images and for each image there are 64 pixels(8x8 matrix) as features. Output has 10 classes corresponding to all the digits(0-9). Support Vector Machine(SVM) classifier is being used as machine learning classification algorithm. # # ## Importing and Organizing Data # Let's start by **importing** all necessary data and packages. Intel(R) Extension for Scikit-learn* dynamically patches scikit-learn estimators to use Intel(R) oneAPI Data Analytics Library as the underlying solver, while getting the same solution faster. 
To undo the patch, run *sklearnex.unpatch_sklearn()* # + import numpy as np import matplotlib.pyplot as plt import joblib import random #Intel(R) Extension for Scikit-learn dynamically patches scikit-learn estimators to use oneDAL as the underlying solver from sklearnex import patch_sklearn patch_sklearn() # Import datasets, svm classifier and performance metrics from sklearn import datasets, svm, metrics, preprocessing from sklearn.model_selection import train_test_split # - # Now let's **load** in the dataset and check some examples of input images # + # Load the handwritten digits dataset from sklearn datasets digits = datasets.load_digits() #Check the examples of input images corresponding to each digit fig, axes = plt.subplots(nrows=1, ncols=10, figsize=(12, 12)) for i,ax in enumerate(axes): ax.set_axis_off() ax.imshow(digits.images[i], cmap=plt.cm.gray_r) ax.set_title(' %i' % digits.target[i]) # - # Split the dataset into train and test and **organize** it as necessary to work with our model. # + # digits.data stores flattened ndarray size 64 from 8x8 images. 
X,Y = digits.data, digits.target # Split dataset into 80% train images and 20% test images X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size=0.2, shuffle=True) # normalize the input values by scaling each feature by its maximum absolute value X_train = preprocessing.maxabs_scale(X_train) X_test = preprocessing.maxabs_scale(X_test) # - # ## Training and Saving the Model # Let's **train our model** and **save the training model to a file**: # + # Create a classifier: a support vector classifier model = svm.SVC(gamma=0.001, C=100) # Learn the digits on the train subset model.fit(X_train, Y_train) # Save the model to a file filename = 'finalized_svm_model.sav' joblib.dump(model, filename) # - # ## Making a Prediction and Saving the Results # Time to **make a prediction!** # Now predicting the digit for test images using the trained model loaded_model = joblib.load(filename) Y_pred = loaded_model.predict(X_test) # + # Predict the value of the digit on the random subset of test images fig, axes = plt.subplots(nrows=1, ncols=6, figsize=(8, 3)) random_examples = random.sample(list(range(len(X_test))),6) for i,ax in zip(random_examples, axes): ax.set_axis_off() ax.imshow(X_test[i].reshape(8,8), cmap=plt.cm.gray_r) ax.set_title(f'Predicted: {Y_pred[i]}') # - # To **get the accuracy of trained model on test data** result = loaded_model.score(X_test, Y_test) print(f"Model accuracy on test data: {result}") # Now let's **export the results to a CSV file**. np.savetxt("svm_results.csv", Y_pred, delimiter = ",") print("[CODE_SAMPLE_COMPLETED_SUCCESFULLY]")
AI-and-Analytics/Getting-Started-Samples/Intel_Extension_For_SKLearn_GettingStarted/Intel_Extension_For_SKLearn_GettingStarted.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + [markdown] _uuid="92402d90610521dc80c7b21931c83a771027ca4d" # # Introduction # - HW Example # # + _cell_guid="b1076dfc-b9ad-4769-8c92-a6c4dae69d19" _uuid="8f2839f25d086af736a60e9eeb907d3b93b6e0e5" # This Python 3 environment comes with many helpful analytics libraries installed # It is defined by the kaggle/python docker image: https://github.com/kaggle/docker-python # For example, here's several helpful packages to load in import numpy as np # linear algebra import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv) import matplotlib.pyplot as plt # Input data files are available in the "../input/" directory. # For example, running this (by clicking run or pressing Shift+Enter) will list the files in the input directory import os print(os.listdir("./input")) # Any results you write to the current directory are saved as output. 
# + _cell_guid="79c7e3d0-c299-4dcb-8224-4455121ee9b0" _uuid="d629ff2d2480ee46fbb7e2d37f6b5fab8052498a" #%% import dataset data = pd.read_csv("./input/data.csv") data.drop(['Unnamed: 32',"id"], axis=1, inplace=True) data.diagnosis = [1 if each == "M" else 0 for each in data.diagnosis] y = data.diagnosis.values x_data = data.drop(['diagnosis'], axis=1) # + _uuid="e9220c53a8bc334fc47c6bb819555599b9cda116" # %% normalization x = (x_data -np.min(x_data))/(np.max(x_data)-np.min(x_data)).values # + _uuid="74c43fcdc6070bb900cecc98a1581b80c319673f" # # %%train test split from sklearn.model_selection import train_test_split x_train, x_test, y_train, y_test = train_test_split(x, y, test_size=0.15, random_state=42) x_train = x_train.T x_test = x_test.T y_train = y_train.T y_test = y_test.T print("x train: ",x_train.shape) print("x test: ",x_test.shape) print("y train: ",y_train.shape) print("y test: ",y_test.shape) # + _uuid="24e68200504171fada25524fbb32ee0f29dd8a8c" # # %%initialize # lets initialize parameters # So what we need is dimension 4096 that is number of pixels as a parameter for our initialize method(def) def initialize_weights_and_bias(dimension): w = np.full((dimension,1),0.01) b = 0.0 return w, b # + _uuid="8dd6fd6bfe4aa96c61e63d371dec633cde4dbd18" #%% sigmoid # calculation of z #z = np.dot(w.T,x_train)+b def sigmoid(z): y_head = 1/(1+np.exp(-z)) return y_head #y_head = sigmoid(5) # + _uuid="62cfc50a638ba548a4b9fe8432e4498f26dfb2bd" #%% forward and backward # In backward propagation we will use y_head that found in forward progation # Therefore instead of writing backward propagation method, lets combine forward propagation and backward propagation def forward_backward_propagation(w,b,x_train,y_train): # forward propagation z = np.dot(w.T,x_train) + b y_head = sigmoid(z) loss = -y_train*np.log(y_head)-(1-y_train)*np.log(1-y_head) cost = (np.sum(loss))/x_train.shape[1] # x_train.shape[1] is for scaling # backward propagation derivative_weight = 
(np.dot(x_train,((y_head-y_train).T)))/x_train.shape[1] # x_train.shape[1] is for scaling derivative_bias = np.sum(y_head-y_train)/x_train.shape[1] # x_train.shape[1] is for scaling gradients = {"derivative_weight": derivative_weight,"derivative_bias": derivative_bias} return cost,gradients # + _uuid="f444c8aaaf16b9b43bcfff6606d98dbd5da251b1" #%%# Updating(learning) parameters def update(w, b, x_train, y_train, learning_rate,number_of_iterarion): cost_list = [] cost_list2 = [] index = [] # updating(learning) parameters is number_of_iterarion times for i in range(number_of_iterarion): # make forward and backward propagation and find cost and gradients cost,gradients = forward_backward_propagation(w,b,x_train,y_train) cost_list.append(cost) # let's update w = w - learning_rate * gradients["derivative_weight"] b = b - learning_rate * gradients["derivative_bias"] if i % 10 == 0: cost_list2.append(cost) index.append(i) print ("Cost after iteration %i: %f" %(i, cost)) # we update(learn) parameters weights and bias parameters = {"weight": w,"bias": b} plt.plot(index,cost_list2) plt.xticks(index,rotation='vertical') plt.xlabel("Number of Iterations") plt.ylabel("Cost") plt.show() return parameters, gradients, cost_list # + _uuid="eba8961cefa99182f2a41a4bb42d4f5a28df6684" #%% # prediction def predict(w,b,x_test): # x_test is an input for forward propagation z = sigmoid(np.dot(w.T,x_test)+b) Y_prediction = np.zeros((1,x_test.shape[1])) # if z is bigger than 0.5, our prediction is sign one (y_head=1), # if z is smaller than 0.5, our prediction is sign zero (y_head=0), for i in range(z.shape[1]): if z[0,i]<= 0.5: Y_prediction[0,i] = 0 else: Y_prediction[0,i] = 1 return Y_prediction # predict(parameters["weight"],parameters["bias"],x_test) # + _uuid="35ddf5ebf6e3e7dbf7a7a5e3ff4e6b8c56bcce08" # %% def logistic_regression(x_train, y_train, x_test, y_test, learning_rate , num_iterations): # initialize dimension = x_train.shape[0] # that is 30, the number of features w,b =
initialize_weights_and_bias(dimension) # do not change learning rate parameters, gradients, cost_list = update(w, b, x_train, y_train, learning_rate,num_iterations) y_prediction_test = predict(parameters["weight"],parameters["bias"],x_test) y_prediction_train = predict(parameters["weight"],parameters["bias"],x_train) # Print train/test Errors print("train accuracy: {} %".format(100 - np.mean(np.abs(y_prediction_train - y_train)) * 100)) print("test accuracy: {} %".format(100 - np.mean(np.abs(y_prediction_test - y_test)) * 100)) logistic_regression(x_train, y_train, x_test, y_test,learning_rate = 1, num_iterations = 100) # + _uuid="e18dae7a427351bedcc5fb011d632382448c32dc" # sklearn from sklearn import linear_model logreg = linear_model.LogisticRegression(random_state = 42,max_iter= 150) print("test accuracy: {} ".format(logreg.fit(x_train.T, y_train.T).score(x_test.T, y_test.T))) print("train accuracy: {} ".format(logreg.fit(x_train.T, y_train.T).score(x_train.T, y_train.T)))
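A side note on the `sigmoid` defined in this kernel: evaluating `1/(1+np.exp(-z))` directly overflows (with a RuntimeWarning) for large negative `z`. A commonly used numerically stable variant, sketched independently of the kernel's code:

```python
import numpy as np

def sigmoid_stable(z):
    """Logistic function evaluated without overflowing np.exp:
    uses 1/(1+e^-z) for z >= 0 and e^z/(1+e^z) for z < 0."""
    z = np.asarray(z, dtype=float)
    out = np.empty_like(z)
    pos = z >= 0
    out[pos] = 1.0 / (1.0 + np.exp(-z[pos]))
    ez = np.exp(z[~pos])              # e^z is tiny here, never overflows
    out[~pos] = ez / (1.0 + ez)
    return out

probs = sigmoid_stable(np.array([-1000.0, 0.0, 1000.0]))
```

Both branches are algebraically identical to the textbook formula; they only differ in which exponential is computed, so the argument of `np.exp` is always non-positive.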
No-12-Logistic-Regression-Implementation/Logistic Regression Implementation.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # ## 1. Import Python libraries # + import os import json import h5py import math import shap import itertools import numpy as np import pandas as pd from tensorflow import keras from sklearn.metrics import mean_squared_error, mean_absolute_error, mean_absolute_percentage_error from sklearn.model_selection import cross_val_score from sklearn.preprocessing import StandardScaler, MinMaxScaler from keras.models import model_from_json from datetime import timedelta as td, datetime from IPython.display import clear_output # - # ## 2. Define global variables # + lr_value = 0.0001 ## LEARNING RATE no_of_runs = 3 ## NO. OF RANDOM INTIALIZATION RUNS train_percent = 0.80 ## PERCENTAGE OF TRAINING DATASET batch_size = [4,8,16] ## BATCH SIZES number_of_epochs = [500] ## EPOCHS COMBINATIONS_LIST = [batch_size,number_of_epochs] COMBINATIONS = list(itertools.product(*COMBINATIONS_LIST)) continents_list = ['Asia','Africa','Europe','Global-R','North America','South America'] multi_time_steps = [1,3,5,7,9] ## NO. OF MULTI-TIME STEPS (DAYS) MA_day_list = [3,5,7] ## NO. OF MOVING AVERAGE DAYS targets_list = ['G','D'] ## TARGET PARAMETERS crit_MAPE_score = 0.700 ## MAXIMUM ALLOWABLE MEAN ABSOLUTE PERCENTAGE ERROR (MAPE) SCORE (%) tol_runs = 10 ## TOTAL NO. OF TOLERANCE RUNS analysis_folder = '' ## YOUR DESIRED DATA LOCATION FOR RESULTS STORAGE # - # ## 3. Import data DEFINE_YOUR_OWN_HDF5_LOC = '' ## YOUR DEFINED FILE LOCATION STORING THE PROCESSED HDF5 DATA FILES data_path_location = DEFINE_YOUR_OWN_HDF5_LOC list_h5_files = os.listdir(data_path_location) # ## 4. 
Define deep neural network function def deep_nn(x_data_array, y_data_array, ass_x_array): # size the layers from the function arguments rather than the global x_train/y_train layers_neurons = [int(x_data_array.shape[1]/2), int(x_data_array.shape[1]/4), int(x_data_array.shape[1]/8), 1, 3, 3, y_data_array.shape[1]] first_input = keras.Input(shape=(x_data_array.shape[1], )) second_dense = keras.layers.Dense(layers_neurons[0], activation='relu')(first_input) third_dense = keras.layers.Dense(layers_neurons[1], activation='relu')(second_dense) fourth_dense = keras.layers.Dense(layers_neurons[2], activation='relu')(third_dense) fifth_dense = keras.layers.Dense(layers_neurons[3], activation='relu')(fourth_dense) merge_one = keras.layers.concatenate([second_dense, third_dense, fourth_dense, fifth_dense]) ## Data assimilation component second_input = keras.Input(shape=(ass_x_array.shape[1], )) merge_two = keras.layers.concatenate([merge_one,second_input]) ## merging 1D VD-FCNN with data assimilation component dummy_output_1 = keras.layers.Dense(layers_neurons[-3], activation='relu')(merge_two) dummy_output_2 = keras.layers.Dense(layers_neurons[-2], activation='relu')(dummy_output_1) final_output = keras.layers.Dense(layers_neurons[-1], activation='relu')(dummy_output_2) final_model = keras.Model(inputs=[first_input, second_input], outputs = final_output) opt = keras.optimizers.Adam(learning_rate = lr_value) final_model.compile(optimizer = opt, loss = 'mean_squared_error') final_model.summary() return final_model # ## 5.
Model training and validation # + DEFINE_YOUR_SUMMARY_FILES_LOC = '' ## YOUR DEFINED FILE LOCATION STORING SUMMARY DATA FILES processed_path_location = DEFINE_YOUR_SUMMARY_FILES_LOC for target_type in targets_list: final_analysis_path_location = data_path_location + '/' + 'With ' + target_type + "assimilation" if not os.path.exists(final_analysis_path_location): os.mkdir(final_analysis_path_location) for continent in continents_list: targets_filename = continent + '_G_D_targets.csv' final_targets_path = processed_path_location + '/' + targets_filename data_targets = pd.read_csv(final_targets_path) DATES_DATA_TARGETS = list(data_targets['Date']) train_dates_list = DATES_DATA_TARGETS[:int(train_percent*len(DATES_DATA_TARGETS))] test_dates_list = DATES_DATA_TARGETS[int(train_percent*len(DATES_DATA_TARGETS)):] for multi_time_value in multi_time_steps: for MA_days in MA_day_list: h5_file_location = data_path_location + '/' + continent + '_' + str(multi_time_value) + '_day_multi_time_steps-' \ + str(MA_days) + '_days_MA.h5' data_records_file = h5py.File(h5_file_location, 'r') x_array = data_records_file.get('X_array') y_array = data_records_file.get(target_type + '_array') ass_x_array = data_records_file.get('ASS' + '_' + target_type + '_array') x_scaler = StandardScaler() x_array = np.array(x_array).reshape(len(x_array),-1) scaled_x_array = x_scaler.fit_transform(x_array) x_train = scaled_x_array[:int(train_percent*len(scaled_x_array))] y_train = y_array[:int(train_percent*len(y_array))] x_test = scaled_x_array[int(train_percent*len(scaled_x_array)):] y_test = y_array[int(train_percent*len(y_array)):] sep_x_train = ass_x_array[:int(train_percent*len(ass_x_array))] sep_x_test = ass_x_array[int(train_percent*len(ass_x_array)):] h5_file_folder = final_analysis_path_location + '/' + continent + '_' + str(multi_time_value) \ + '_day_multi_time_steps-' + str(MA_days) + '_days_MA' if not os.path.exists(h5_file_folder): os.mkdir(h5_file_folder) directory_models = h5_file_folder 
+ '/' + 'Saved_Models' if not os.path.exists(directory_models): os.makedirs(directory_models) directory_results = h5_file_folder + '/' + 'Saved_Results' if not os.path.exists(directory_results): os.makedirs(directory_results) directory_figures = h5_file_folder + '/' + 'Saved_Figures' if not os.path.exists(directory_figures): os.makedirs(directory_figures) directory_text_files = h5_file_folder + '/' + 'Saved_Texts' if not os.path.exists(directory_text_files): os.makedirs(directory_text_files) train_val_df = pd.DataFrame() test_df = pd.DataFrame() training_loss_df = pd.DataFrame() validation_loss_df = pd.DataFrame() train_val_df['Dates'] = train_dates_list train_val_df['Actual'] = [item[0] for item in y_train] test_df['Dates'] = test_dates_list test_df['Actual'] = [item[0] for item in y_test] ## TO STORE INTERMEDIATE MAPE RESULT SCORES ## results_text_file_location = directory_text_files + '/' + 'Results_LOG.txt' f= open(results_text_file_location,"w+") with open(results_text_file_location, 'w') as f: f.write('Commence analysis =D') f.write('\n') for item in COMBINATIONS: number_of_epochs = item[1] bs = item[0] error_counter = 0 dummy_mape_tol_value = crit_MAPE_score while True: deep_model = deep_nn(x_train, y_train, sep_x_train) history_model = deep_model.fit([x_train, sep_x_train], y_train, epochs = number_of_epochs, validation_split = 0.2, batch_size = bs, verbose = 2) predictions_train = deep_model.predict([x_train, sep_x_train]) predictions_test = deep_model.predict([x_test, sep_x_test]) MAPE = mean_absolute_percentage_error([item[0] for item in y_test], [item[0] for item in predictions_test]) with open(results_text_file_location, 'a') as f: result_lines = 'Batch size = ' + str(bs) + ', Epochs = ' + str(number_of_epochs) + ': MAPE = ' \ + str(round(MAPE,3)) f.writelines(result_lines) f.write('\n') ## TO INITIATE SHAP ANALYSIS FOR EXPLAINABLE DEEP LEARNING COMPONENT ## if MAPE < dummy_mape_tol_value: model_json = deep_model.to_json() json_filename = 'Batch size = 
' + str(bs) + ', Epochs = ' + str(number_of_epochs) + '.json' with open(directory_models + '/' + json_filename, "w") as json_file: json_file.write(model_json) h5_filename = 'Batch size = ' + str(bs) + ', Epochs = ' + str(number_of_epochs) + '.h5' deep_model.save_weights(directory_models + '/' + h5_filename) explainer = shap.KernelExplainer(deep_model.predict, x_train) shap_values = explainer.shap_values(x_test, nsamples=100) ## ADJUST nsamples DEPENDING ON ## TOTAL DATA INSTANCES AVAILABLE IN ## THE TESTING DATASET break else: error_counter += 1 if error_counter % tol_runs == 0: dummy_mape_tol_value += 0.05 clear_output(wait=True) training_loss = history_model.history['loss'] validation_loss = history_model.history['val_loss'] training_loss_list = [item for item in training_loss] validation_loss_list = [item for item in validation_loss] training_loss_df[str(number_of_epochs) + '_epochs_' + str(bs) + '_batch_size'] = training_loss_list validation_loss_df[str(number_of_epochs) + '_epochs_' + str(bs) + '_batch_size'] = validation_loss_list train_val_df[str(number_of_epochs) + '_epochs_' + str(bs) + '_batch_size'] = [item[0] for item in predictions_train] test_df[str(number_of_epochs) + '_epochs_' + str(bs) + '_batch_size'] = [item[0] for item in predictions_test] training_loss_df.to_csv(directory_results + '/' + 'training_loss.csv', index = False) validation_loss_df.to_csv(directory_results + '/' + 'validation_loss.csv', index = False) train_val_df.to_csv(directory_results + '/' + 'train_val_predictions.csv', index = False) test_df.to_csv(directory_results + '/' + 'test_predictions.csv', index = False) clear_output(wait=True) # -
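The retraining loop above is easy to lose in the surrounding bookkeeping: it retrains until the MAPE beats `crit_MAPE_score`, and after every `tol_runs` consecutive failures it loosens the threshold by 0.05. The control flow in isolation, with a hypothetical `train_fn` standing in for model fitting and scoring:

```python
def train_until_good(train_fn, crit=0.7, tol_runs=10, step=0.05, max_tries=1000):
    """Retry `train_fn` (returns an error score, lower is better) until the
    score beats the threshold; loosen the threshold by `step` after every
    `tol_runs` consecutive failures."""
    threshold, failures = crit, 0
    for _ in range(max_tries):
        score = train_fn()
        if score < threshold:
            return score, threshold
        failures += 1
        if failures % tol_runs == 0:
            threshold += step
    raise RuntimeError("no run beat the (loosened) threshold")

# deterministic stand-in: three training runs with improving MAPE
runs = iter([1.0, 0.9, 0.65])
score, threshold = train_until_good(lambda: next(runs))
```

Loosening the acceptance criterion guarantees termination even when random initialization never reaches the original target, at the cost of accepting a weaker model.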
jupyter_notebooks/Stages B & C_Predictive Modelling & SHAP Analysis (G-rate & D-rate).ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .jl # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Julia 0.6.1 # language: julia # name: julia-0.6 # --- # # From day 10 # + function knot_hash_base(lens, size=256, rounds=1) i = 0 skip = 0 elem = collect(0:(size)-1) for r = 1:rounds for l in lens elem = circshift(reverse(circshift(elem, -i), 1, l), i) i = (i + l + skip) % size skip += 1 end end elem end function encode_str(values) vcat(map(Int, collect(values)), [17, 31, 73, 47, 23]) end function encode_hexa(elem) elem = reshape(elem, (16, 16)) elem = mapslices(c->reduce(xor,0,c), elem, 1) join(map(x->hex(x,2), elem)) end function knot_hash(value) encode_hexa(knot_hash_base(encode_str(value), 256, 64)) end # - # # Part 1 function fragmentation_matrix(key) [[(c == '1' ? 1 : 0) for b in map(bits, hex2bytes(knot_hash(key*"-"*string(row)))) for c in collect(b)] for row=0:127] end sum(sum(fragmentation_matrix("flqrgnkx"))) sum(sum(fragmentation_matrix("vbqugkhl"))) # # Part 2 # + function fill_region(m, r, c, region) if r>0 && r<129 && c>0 && c<129 && m[r][c]==1 m[r][c] = region fill_region(m, r-1, c, region) fill_region(m, r , c-1, region) fill_region(m, r+1, c, region) fill_region(m, r , c+1, region) end end function count_regions(key) m = fragmentation_matrix(key) region = 2 for r=1:128 for c=1:128 if m[r][c] == 1 fill_region(m, r, c, region) region += 1 end end end region - 2 end # - count_regions("flqrgnkx") count_regions("vbqugkhl")
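As a cross-check of the Julia `knot_hash` above, here is the same Advent of Code 2017 knot hash sketched in Python (ASCII lengths plus the fixed suffix, 64 rounds of circular reversals, then XOR-folding each 16-byte block); the empty-string hash asserted below is the example given in the Day 10 puzzle text:

```python
from functools import reduce
from operator import xor

def knot_hash(text):
    """Knot hash: ASCII lengths + fixed suffix, 64 rounds of circular
    reversals over 0..255, then XOR-fold 16-byte blocks into hex."""
    lengths = [ord(c) for c in text] + [17, 31, 73, 47, 23]
    elems, pos, skip = list(range(256)), 0, 0
    for _ in range(64):
        for length in lengths:
            idx = [(pos + k) % 256 for k in range(length)]
            rev = [elems[k] for k in reversed(idx)]
            for k, v in zip(idx, rev):      # reverse the circular sublist
                elems[k] = v
            pos = (pos + length + skip) % 256
            skip += 1
    return "".join("%02x" % reduce(xor, elems[k:k + 16])
                   for k in range(0, 256, 16))
```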
2017/jordi/Day 14 - Disk Defragmentation.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- from compas.datastructures import Mesh from compas.datastructures import subdivision as sd import ipyvolume as ipv from utilities import draw_compas_mesh from compas.geometry import Vector mesh = Mesh.from_polyhedron(12) mesh.summary() mesh2 = sd.mesh_subdivide_tri(mesh) mesh3 = sd.trimesh_subdivide_loop(mesh2) draw_compas_mesh(mesh3) # + subd = mesh3.copy() height = .2 for fkey in mesh3.faces(): centroid = mesh3.face_centroid(fkey) centroid_vector = Vector(*centroid) normal = mesh3.face_normal(fkey) normal_vector = Vector(*normal) new_vertex = centroid_vector + normal_vector * height subd.insert_vertex(fkey, xyz=new_vertex) draw_compas_mesh(subd) # + def mesh_subdivide_pyramid(mesh, k=1, height=1.0): """Subdivide a mesh using insertion of vertex at centroid + height * face normal. Parameters ---------- mesh : Mesh The mesh object that will be subdivided. k : int, optional The number of levels of subdivision. Default is ``1``. height : float, optional The distance of the new vertex to the face. Returns ------- Mesh A new subdivided mesh. 
""" if k != 1: raise NotImplementedError subd = mesh.copy() for fkey in mesh.faces(): centroid = mesh.face_centroid(fkey) centroid_vector = Vector(*centroid) normal = mesh.face_normal(fkey) normal_vector = Vector(*normal) new_vertex = centroid_vector + normal_vector * height subd.insert_vertex(fkey, xyz=new_vertex) return subd # - our_mesh = mesh_subdivide_pyramid(mesh3, height=0.3) draw_compas_mesh(our_mesh) def mesh_subdivide_tapered(mesh, k=1, height=1.0, ratio=0.5): """ """ if ratio == int(1.): return mesh_subdivide_pyramid(mesh, k, height) #ubd = mesh.copy() for _ in range(k): subd = mesh.copy() for fkey in mesh.faces(): centroid = mesh.face_centroid(fkey) centroid_vector = Vector(*centroid) normal = mesh.face_normal(fkey) normal_vector = Vector(*normal) normal_vector *= height face_verts = mesh.face_vertices(fkey) new_verts = [] for v in face_verts: v_coords = mesh.vertex_coordinates(v) v_vector = Vector(*v_coords) vert_to_center = centroid_vector - v_vector vert_to_center *= ratio new_vertex = v_vector + vert_to_center + normal_vector x, y, z = new_vertex new_verts.append(subd.add_vertex(x=x, y=y, z=z)) for i, v in enumerate(face_verts): next_v = face_verts[(i+1) % len(face_verts)] new_v = new_verts[i] next_new_v = new_verts[(i+1) % len(face_verts)] new_face_key = subd.add_face([v, next_v, next_new_v, new_v]) subd.set_face_attribute(new_face_key, 'material', 'frame') top_face_key = subd.add_face(new_verts) subd.set_face_attribute(top_face_key, 'material', 'glass') del subd.face[fkey] mesh = subd return subd # # + mesh4 = sd.mesh_subdivide_catmullclark(mesh3) #tapered_mesh = mesh_subdivide_tapered(mesh2, k=5, height=0.3, ratio=.4) #draw_compas_mesh(tapered_mesh) # tapered_mesh.get_faces_attribute(tapered_mesh.faces(), 'material') # + face_verts_list = [] mesh.faces() for f_vert in mesh3.faces(): face_verts_list.append(mesh3.face_vertices(f_vert)) # + first_vertices = [v[0] for v in face_verts_list[0]] dosa = sd.mesh_subdivide_doosabin(mesh3, k=5, 
fixed=first_vertices) # fixed vertices are passed as vertex keys draw_compas_mesh(dosa) # - from utilities import export_obj_by_attribute #export_obj_by_attribute('spacestation.obj', tapered_mesh, 'material')
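`mesh_subdivide_pyramid` above places each new vertex at `centroid + height * normal`. The same geometry without compas, for a single triangular face (a sketch; it assumes the face is a triangle and normalizes the cross product by hand):

```python
import numpy as np

def pyramid_apex(face_pts, height):
    """Apex of the pyramid erected on a triangular face: the face
    centroid offset along the unit face normal by `height`."""
    p0, p1, p2 = (np.asarray(p, dtype=float) for p in face_pts)
    centroid = (p0 + p1 + p2) / 3.0
    normal = np.cross(p1 - p0, p2 - p0)   # right-hand rule orientation
    normal /= np.linalg.norm(normal)
    return centroid + height * normal

# unit right triangle in the xy-plane; apex is lifted 0.3 along +z
apex = pyramid_apex([(0, 0, 0), (1, 0, 0), (0, 1, 0)], height=0.3)
```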
T1/09_mesh_subdivision/191008_compas_subdivision_pyramid_attr_tetov.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 (ipykernel) # language: python # name: python3 # --- import numpy as np import pandas as pd df = pd.read_csv('stocks.csv')[['ticker','date','ret']] df.date = df.date//100 df = df[df.date>201000] df['ret'] = df['ret']*100 n = 28 sample = [] for stock in set(df.ticker): x = df[df.ticker==stock].set_index('date').dropna() if len(x)>120: sample.append(x[['ret']].rename({'ret':stock},axis=1)) df = pd.concat(sample,axis=1) w = [1/n]*n # Portfolio variance: $w^\top V w$ # + # portfolio variance std = np.sqrt(df.cov().mul(w,axis=0).mul(w,axis=1).sum().sum()) eret = df.mul(w,axis=1).sum(axis=1).mean()*12 # - np.random.normal(20) # + w = np.random.rand(n) w = w / w.sum() # - def ret_var(w): # below code is to use variance-covariance matrix # std = np.sqrt(df.cov().mul(w,axis=0).mul(w,axis=1).sum().sum())*np.sqrt(12) # below code is to use std of monthly portfolio return std = df.mul(w,axis=1).sum(axis=1).std()*np.sqrt(12) eret = df.mul(w,axis=1).sum(axis=1).mean()*12 return eret,std,w list_of_retvar = [] for _ in range(5000): w = np.random.rand(n) - np.random.rand(n) w = w / w.sum() if w.max()<3 and w.min()>=-1: list_of_retvar.append(ret_var(w)) ports = pd.DataFrame(list_of_retvar,columns=['ret','std','w']) ports.plot(x='std',y='ret',kind='scatter')#.plot(x='std',y='ret',kind='scatter',xticks=range(10),yticks=range(25)) # + ports['sharpe'] = (ports['ret'] - 1.6)/ports['std'] ports.sort_values('sharpe') # - from concurrent.futures import ProcessPoolExecutor def compute(_): r = np.random.RandomState() w = r.rand(n) - r.rand(n) w = w / w.sum() if w.max()<3 and w.min()>=-1: return ret_var(w) with ProcessPoolExecutor() as p: ports = pd.DataFrame(p.map(compute,range(1000000)),columns=['ret','std','w']) ports['sharpe'] = (ports['ret'] - 1.6)/ports['std'] ports.dropna().sort_values('sharpe')
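The note $w^\top V w$ above is the quadratic form for portfolio variance; it agrees exactly with taking the sample variance of the weighted return series, which is what `ret_var` computes. A self-contained check on synthetic returns (not the `stocks.csv` data used above):

```python
import numpy as np

rng = np.random.default_rng(0)
rets = rng.normal(size=(120, 4))        # 120 periods x 4 assets, synthetic
w = np.full(4, 0.25)                    # equal weights

V = np.cov(rets, rowvar=False)          # sample covariance (ddof=1)
var_quadratic = w @ V @ w               # w' V w
var_series = (rets @ w).var(ddof=1)     # variance of the portfolio series
# the two agree up to floating-point error because (Xw - mean(Xw)) = (X - mean(X)) w
```

Note the explicit `ddof=1`: NumPy's `ndarray.var` defaults to `ddof=0`, while `np.cov` and pandas use the unbiased `n-1` denominator, so the identity only holds when the two conventions match.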
efficient_portfolio/1.compute_weight.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + [markdown] id="6b430e1db639" # ## 3.5 The Influence of Network Topologies # # In Sec. 3.2 and 3.3, it is established that both ATC-DGD and AWC-DGD suffer from a limiting bias: # # \begin{align} # \limsup_{k\to \infty} \frac{1}{n}\sum_{i=1}^n \|x_i^{(k)} - x^\star \|^2 = O\Big( \frac{\alpha^2 \rho^2 b^2}{(1-\rho)^2}\Big) # \end{align} # # where $x^\star$ is the global solution to the optimization problem, $\alpha$ is the step-size, $\rho = \max\{|\lambda_2(W)|, |\lambda_n(W)|\}$ and $b^2 = \frac{1}{n}\sum_{i=1}^n \|\nabla f_i(x^\star)\|^2$ denotes the data heterogeneity between nodes. The quantity $1-\rho$ measures the connectivity of the network topology. It is observed that $1-\rho$ can significantly affect the limiting bias, especially when $1-\rho \to 0$. This section will examine how different network topologies affect the limiting bias of ATC-DGD. # # ### 3.5.1 Connectivity and maximum degree of commonly used topologies # # The following table, which we have discussed in Sec. 2.7, summarizes the connectivity (i.e., $1-\rho$) and the maximum degree of some common network topologies. The combination matrix $W$ of the undirected topologies, i.e., the ring, 2D-mesh, and fully-connected topology, is generated through the Metropolis-Hastings rule (see Sec. 2.3), while the combination matrix of the exponential-2 topology (which is a directed network) is generated through the Averaging rule (see Sec. 2.4).
# # # | Network topology | connectivity ($1-\rho$) | maximum degree | # | -: | :-: | :-: | # | Undirected Ring | $O(\frac{1}{n^2})$ | 2 | # | Undirected 2D-Mesh | $O(\frac{1}{n})$ | 4 | # | Exponential-2 | $O(\frac{1}{\ln(n)})$ | $\ln(n)$ | # | Fully connected| $1$ |$n-1$ | # # - **Connectivity**: the larger $1-\rho$ is, the better the network connectivity is, and the smaller the limiting bias that DGD has. Generally speaking, a dense network has better connectivity. # # # - **Maximum degree:** The maximum degree influences the communication efficiency of *one step* of the average consensus, i.e., $x_i^{k+1} = \sum_{j=1}^n w_{ij} x_j^k$. Apparently, more non-zero $w_{ij}$'s imply more communications. In the extreme example of a fully connected network, each agent will exchange information with $n-1$ neighbors to conduct averaging. In contrast, each agent in the undirected ring only needs to communicate with 2 neighbors to finish one update, which is more communication efficient. Generally speaking, a sparse network has a smaller maximum degree. # # # - **Trade-off:** If the network topology can be designed freely, the exponential-2 graph empirically reaches a nice trade-off between per-iteration communication efficiency and the convergence rate; see the table above. # # ### 3.5.2 Examine the influence of network topology on DGD # # In this section, we examine the influence of different network topologies on the limiting bias of DGD. The code is the same as in Sec. 3.2. The difference is that we will test ATC-DGD's performance with different topologies. # # #### 3.5.2.1 Set up BlueFog # # In the following code, you should be able to see the ids of your CPUs. We use 8 CPUs to conduct the following experiment.
# + id="d95de8f34908" import ipyparallel as ipp import numpy as np import torch import networkx as nx import matplotlib.pyplot as plt # %matplotlib inline rc = ipp.Client(profile="bluefog") print(rc.ids) dview = rc[:] # A DirectView of all engines dview.block = True # + id="e3365426d8cd" # %%px import numpy as np import bluefog.torch as bf import torch from bluefog.common import topology_util import networkx as nx import matplotlib.pyplot as plt # %matplotlib inline bf.init() print(f"Hello, I am {bf.rank()} among {bf.size()} processes") # + [markdown] id="89b3996764a0" # #### 3.2.4.2 Generate local data $A_i$ and $b_i$ # + id="ea4df0376b8b" # %%px def generate_data(m, n, x_o): A = torch.randn(m, n).to(torch.double) # x_o = torch.randn(n, 1).to(torch.double) ns = 0.1 * torch.randn(m, 1).to(torch.double) b = A.mm(x_o) + ns return A, b # + [markdown] id="944b0a521e1d" # #### 3.2.4.3 Distributed gradient descent method # + id="0aae4adb8a38" # %%px def distributed_grad_descent(A, b, maxite=5000, alpha=1e-1): x_opt = torch.zeros(n, 1, dtype=torch.double) for _ in range(maxite): # calculate local gradient grad_local = A.t().mm(A.mm(x_opt) - b) # global gradient grad = bf.allreduce(grad_local, name="gradient") # distributed gradient descent x_opt = x_opt - alpha * grad grad_local = A.t().mm(A.mm(x_opt) - b) grad = bf.allreduce(grad_local, name="gradient") # global gradient # evaluate the convergence of distributed gradient descent # the norm of global gradient is expected to 0 (optimality condition) global_grad_norm = torch.norm(grad, p=2) if bf.rank() == 0: print( "[Distributed Grad Descent] Rank {}: global gradient norm: {}".format( bf.rank(), global_grad_norm ) ) return x_opt # + [markdown] id="190909dad627" # In the following code we run distributed gradient descent to achieve the global solution $x^\star$ to the optimization problem. To validate whether $x^\star$ is optimal, it is enough to examine $\frac{1}{n}\sum_{i=1}^n \nabla f_i(x^\star) = 0$. 
# + id="7f50e459f11e" m, n = 20, 5 # the jupyter engine generates a reference solution x_o and pushes it to each worker x_o = torch.randn(n, 1).to(torch.double) dview.push({"x_o": x_o}, block=True) # + id="8c5dddfdc96b" # %%px m, n = 20, 5 A, b = generate_data(m, n, x_o) x_opt = distributed_grad_descent(A, b, maxite=300, alpha=1e-2) # + [markdown] id="f9ff60fbb175" # #### 3.5.2.4 Decentralized gradient descent method # # In this section, we depict the convergence curve of decentralized gradient descent (the ATC version). We will utilize the $x^\star$ achieved by distributed gradient descent as the optimal solution. First, we define one step of the ATC-DGD method. # + id="036d2399154b" # %%px def ATC_DGD_one_step(x, x_opt, A, b, alpha=1e-2): # one-step ATC-DGD. # The combination weights have been determined by the associated combination matrix. grad_local = A.t().mm(A.mm(x) - b) # compute local grad y = x - alpha * grad_local # adaptation step x_new = bf.neighbor_allreduce(y) # combination step # the relative error: |x^k - x*| / |x*| rel_error = torch.norm(x_new - x_opt, p=2) / torch.norm(x_opt, p=2) return x_new, rel_error # + [markdown] id="c7e6f6850a85" # Next we run the ATC-DGD algorithm. # + id="6cc351ba0b1b" # %%px # uncomment the line of the topology you want to use G = topology_util.RingGraph(bf.size()) # Set topology as ring topology. # G = topology_util.ExponentialTwoGraph(bf.size()) # Set topology as exponential-two topology. bf.set_topology(G) maxite = 200 x = torch.zeros(n, 1, dtype=torch.double) # Initialize x rel_error = torch.zeros((maxite, 1)) for ite in range(maxite): if bf.rank() == 0: if ite % 10 == 0: print("Progress {}/{}".format(ite, maxite)) x, rel_error[ite] = ATC_DGD_one_step( x, x_opt, A, b, alpha=3e-3 ) # you can adjust alpha to different values # + [markdown] id="ce147cc87db7" # In the following, we switch to different topologies to examine their influence on the convergence rate and limiting bias.
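# The limiting-bias behavior can also be reproduced without BlueFog. This standalone sketch (not part of the workshop code; names and step size are illustrative) simulates ATC-DGD serially for the same least-squares problem, keeping agent i's iterate as row i of `X` and applying a uniform ring mixing matrix:

```python
import numpy as np

np.random.seed(0)
n_agents, m, d = 8, 20, 5
x_true = np.random.randn(d, 1)
A = [np.random.randn(m, d) for _ in range(n_agents)]
b = [A[i] @ x_true + 0.1 * np.random.randn(m, 1) for i in range(n_agents)]

# Uniform ring mixing matrix: average with left/right neighbor and self.
W = np.zeros((n_agents, n_agents))
for i in range(n_agents):
    for j in (i - 1, i, i + 1):
        W[i, j % n_agents] = 1 / 3

# Global least-squares solution as the reference x*.
x_star, *_ = np.linalg.lstsq(np.vstack(A), np.vstack(b), rcond=None)

alpha = 3e-3
X = np.zeros((n_agents, d))                  # row i = agent i's iterate
for _ in range(500):
    G = np.stack([(A[i].T @ (A[i] @ X[i, :, None] - b[i])).ravel()
                  for i in range(n_agents)])
    X = W @ (X - alpha * G)                  # adapt (local step), then combine

rel_error = (np.linalg.norm(X.mean(axis=0)[:, None] - x_star)
             / np.linalg.norm(x_star))
```

Rerunning with a denser mixing matrix, e.g. `W = np.ones((n_agents, n_agents)) / n_agents` for the fully connected graph, shrinks `rel_error`, which is exactly the topology effect being measured here.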
# + id="aeefd0ec62fa" # pull the result into the variable matching the topology that was just run (rerun the cell above once per topology) rel_error_ring = dview.pull("rel_error", block=True, targets=0) rel_error_exp2 = dview.pull("rel_error", block=True, targets=0) # + id="f51ad31443cd" import matplotlib.pyplot as plt # %matplotlib inline plt.semilogy(rel_error_ring) plt.semilogy(rel_error_exp2) plt.legend(["Ring", "Expo"], fontsize=16) plt.xlabel("Iteration", fontsize=16) plt.ylabel("Relative error", fontsize=16) # + [markdown] id="a80ec49df478" # It is observed that the exponential-two graph enables DGD to converge to a more accurate solution due to its better-connected topology.
Section 3/Sec-3.5-Network-influence-on-DGD.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # https://stackoverflow.com/questions/7853332/how-to-change-git-log-date-formats # # <code>git log --date=short | grep Date: | sed 's/Date://' > log_entries.dat</code> import matplotlib.pyplot as plt import matplotlib.dates as mdates import datetime import time import seaborn from collections import Counter import pandas with open('log_entries.dat','r') as fil: cont = fil.read().split() # http://strftime.org/ year_and_month = Counter([datetime.datetime.strptime(x[:-3], "%Y-%m") for x in cont]) datetime.datetime.strptime(str(min(dict(year_and_month).keys()).year),"%Y") df = pandas.DataFrame(list(dict(year_and_month).values()), list(dict(year_and_month).keys() ), ["commits"]) ax=seaborn.scatterplot(data=df) _=ax.set_xlim(datetime.datetime.strptime(str(min(dict(year_and_month).keys()).year),"%Y"), None) #ax.autoscale(tight=True) _=ax.get_legend().remove() _=plt.title('commits per month') plt.savefig('pdg_commits_per_month.png')
sandbox/git_log_analysis.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python [conda env:ML] # language: python # name: conda-env-ML-py # --- from edahelper import * wsb = pd.read_pickle("../Data/subreddit_WallStreetBets/otherdata/wsb_cleaned.pkl") # Only keep posts where the author data is there. wsb = wsb.loc[wsb.author != "None"] author_posting = dict(wsb.author.value_counts()) author_df = (wsb.groupby('author')[['ups']] .agg('sum' )) wsb['author_total_upvotes'] = wsb.author.apply( lambda x : author_df.loc[x]['ups']) wsb['author_proliferacy'] = wsb.author.apply( lambda x : author_posting[x]) # + # num authors: len ( set ( wsb['author'])) # - wsb['author'].value_counts().head(100) author_df = wsb[['author', 'ups']].groupby('author').agg( [lambda x: tuple(x), 'count', 'sum', 'mean']) pd.set_option('display.max_colwidth', None) author_df[author_df[('ups', 'mean')] > 200].sort_values(by = ('ups', 'count'), ascending = False)['ups']['<lambda_0>'].head(20) author_df[author_df[('ups', 'mean')] > 200].sort_values(by = ('ups', 'count'), ascending = False)['ups']['<lambda_0>'].head(4).explode().groupby('author').plot(kind = 'hist', legend = True) # They look kind of power-law-ish # # So, maybe a reasonable model is to assume that, conditioned on the author, each post's upvotes come from a power-law distribution with a parameter that depends on the author. We can use empirical Bayes to estimate the parameters for the authors. # # (of course this isn't literally true, but the parameter from this model could be a useful feature that measures author popularity.)
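# The power-law idea above can be sketched directly: for a continuous Pareto tail $p(x) \propto x^{-\alpha-1}$, $x \ge x_\mathrm{min}$, the tail exponent has a closed-form MLE (the Hill estimator). This is a standalone sketch on synthetic draws, not fitted to the `wsb` frame, and the choice of `xmin` is an assumption:

```python
import numpy as np

def pareto_alpha_mle(upvotes, xmin=1.0):
    """Hill/MLE estimate of the tail exponent alpha for a continuous
    Pareto law p(x) = alpha * xmin**alpha * x**(-alpha - 1), x >= xmin."""
    x = np.asarray(upvotes, dtype=float)
    x = x[x >= xmin]
    if x.size == 0:
        return np.nan
    return x.size / np.log(x / xmin).sum()

# Sanity check on synthetic draws: inverse-CDF sampling of Pareto(alpha=2).
rng = np.random.default_rng(0)
u = 1.0 - rng.random(20000)            # uniform in (0, 1]
samples = u ** (-1.0 / 2.0)            # Pareto with xmin=1, alpha=2
alpha_hat = pareto_alpha_mle(samples)
```

Applied per author (e.g. something like `wsb.groupby('author').ups.apply(pareto_alpha_mle)`, a hypothetical usage), a small alpha would flag a heavy-tailed, occasionally-viral author; shrinking the per-author estimates toward the global fit would be the empirical-Bayes step.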
# + for name, df in author_df[author_df[('ups', 'mean')] > 500].sort_values(by = ('ups', 'count'), ascending = False)['ups']['<lambda_0>'].head(5).explode().groupby('author'): df.plot( kind = 'hist', title = name, bins = 100 ) plt.show() # - # Need to look at the comment history of these people also, this will give a stronger history of their popularity / representativeness in the network. Otherwise, this seems too sparse. # # ## How can we deal with popularity changing over time? Does it happen? If so, how do you measure it? # # ### First, does it happen? wsb # ## One hit wonder ratio # # def compute_ohw(wsb, threshold): wsb['popular_post'] = wsb.ups > threshold agged = wsb[['author', 'popular_post']][wsb['popular_post']].groupby('author').agg('count') total_number = agged.sum().popular_post if total_number == 0: return np.nan one_hit_wonders = len( agged[agged.popular_post == 1]) ohw_ratio = one_hit_wonders/total_number return ohw_ratio ohw_ratios = [] thresholds = [4**x for x in range(1,10)] thresholds = np.linspace(start = 10, stop = 600000, num = 100) used_thresholds = [] for threshold in thresholds: ohw = compute_ohw( wsb, threshold) if ohw is np.nan: break ohw_ratios.append(ohw) used_thresholds.append(threshold) sns.scatterplot( x = used_thresholds, y = ohw_ratios) # + #wsb[wsb.author == "DeepFuckingValue"].ups.sort_values() # -
Sandbox/Notebooks/ExploratoryDataAnalysis/Author_Anthropology.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + # Some IPython magic # Put these at the top of every notebook, here nbagg is used for interactive plots # %reload_ext autoreload # %autoreload 2 # %matplotlib nbagg import numpy as np import matplotlib.pyplot as plt import pandas as pd # set floating-point print format float_formatter = lambda x: "%.3f" % x np.set_printoptions(formatter={'float_kind':float_formatter}) # - def plot_decision_boundary(model, X, y): """ Use this to plot the decision boundary of a trained model. """ xx, yy = np.mgrid[0:1:.01, 0:1:.01] grid = np.c_[xx.ravel(), yy.ravel()] probs = model.predict_proba(grid)[:, 1].reshape(xx.shape) f, ax = plt.subplots(figsize=(8, 6)) contour = ax.contourf(xx, yy, probs, 25, cmap="RdBu", vmin=0, vmax=1) Z = model.predict(grid) Z = Z.reshape(xx.shape) cs = plt.contourf(xx, yy, Z, cmap=plt.cm.RdYlBu) ax.scatter(X.values[:,0], X.values[:, 1], c=y, s=50, cmap="RdBu", vmin=-.2, vmax=1.2, edgecolor="white", linewidth=1) ax.set(aspect="equal", xlim=(-0.25, 1.25), ylim=(-.25, 1.25), xlabel="$X_1$", ylabel="$X_2$") # + def countsForFeature(data, feature, plot = True): counts = data.groupby(feature)[feature].count() if plot : fig = plt.figure() plt.hist(counts, range = (counts.values.min(), counts.values.max())) plt.show() else : return counts # countsForFeature(data, '<NAME>') # + data = pd.read_csv('Ghidoveanu_A_Mihai_train.csv') # print(list(data)) # see all features all_classes = list(data['Breed Name'].unique()) print("=======Some examples from data===") print(data.head()) print("\n=====Columns info=======") print(data.info()) # observe how balanced the data set is print("\n=====Summary Statistics=====") print(data.describe()) print("\n=====Classes distribution=============") print(data.groupby('Breed Name')['Breed Name'].count()) # - # ## 
Data Info # # We see that we have some missing values on the Height feature to deal with. # Fortunately enough, we have a balanced data set with samples from each class in approximately equal amounts # + ## CHECK MISSING VALUES # we check if we have missing values in the form of NaN, space, or zero entries where they shouldn't be def any_df(df): checked = df.any()[lambda x : x] if checked.count() != 0: return list(checked.keys()) else: return False conditions = { "inappropriate zero entries" : lambda : data == 0, "inappropriate empty string entries" : lambda : data == ' ', "NaN entries" : lambda : data.isna() } columns_with_missing = [] for key, value in conditions.items(): df_func = value checked = any_df(df_func()) print("Dataset has {} : {}".format(key, checked)) if checked != False : columns_with_missing += checked print("Columns with missing values : ", columns_with_missing) # - # ## Data formatting # # + 'Sex' feature will be one-hot encoded because its categories don't have an inherent order. # + Unlike it, all the other categorical features have an innate ordering and distance, so we use ordinal encoders for them e.g. 
high is closer to medium than to low # # + ## DATA FORMATTING # preformat features from sklearn import preprocessing # all_features = ['Breed Name','Weight(g)','Height(cm)', 'Longevity(yrs)', 'Energy level', # 'Attention Needs', 'Coat Lenght', 'Sex', 'Owner Name'] ordinal_features = ['Energy level' , 'Attention Needs', 'Coat Lenght', 'Owner Name', 'Breed Name'] onehot_features = ['Sex'] # encoding ordinal features ordinalEnc = preprocessing.OrdinalEncoder() ordinalEnc.fit(data[ordinal_features]) data[ordinal_features] = ordinalEnc.transform(data[ordinal_features]) # encoding one hot features data = pd.get_dummies(data, prefix = ['is'], columns = onehot_features) # - # ## Plots of data # # Two of the classes are pretty mangled in their continuous attributes plot_features = ['Height(cm)', 'Weight(g)'] x_plot = data[plot_features] y_plot = data['Breed Name'] fig = plt.figure() plt.scatter(x_plot['Height(cm)'], x_plot['Weight(g)'],c = y_plot) fig.suptitle("Data") plt.xlabel('Height (cm)') plt.ylabel('Weight (g)') # + ## Plot data from mpl_toolkits.mplot3d import Axes3D # all_features = ['Breed Name','Weight(g)','Height(cm)', 'Longevity(yrs)', 'Energy level', # 'Attention Needs', 'Coat Lenght', 'Sex', 'Owner Name'] plot_features = ['Height(cm)', 'Weight(g)', 'Longevity(yrs)'] x_plot = data[plot_features] y_plot = data['Breed Name'] fig = plt.figure() ax = Axes3D(fig) ax.scatter(x_plot.iloc[:,0], x_plot.iloc[:,1], x_plot.iloc[:,2], c=y_plot, marker='o') ax.set_xlabel(plot_features[0]) ax.set_ylabel(plot_features[1]) ax.set_zlabel(plot_features[2]) # - # scale all current features scale_features = list(data) scaler = preprocessing.MinMaxScaler() scaler.fit(data) data[scale_features] = scaler.transform(data[scale_features]) # + from sklearn.linear_model import LogisticRegression from sklearn.tree import DecisionTreeClassifier from sklearn.ensemble import RandomForestClassifier from sklearn.model_selection import train_test_split from sklearn.metrics import accuracy_score 
from sklearn.linear_model import Ridge from sklearn.neighbors import KNeighborsRegressor, KNeighborsClassifier RSTATE = 42 def classify(X, Y): x_train, x_test, y_train, y_test = train_test_split(X, Y, test_size = 0.3, random_state = RSTATE) lr = LogisticRegression(solver = 'liblinear', multi_class = 'auto') # dtc = DecisionTreeClassifier() knn = KNeighborsClassifier() rfc = RandomForestClassifier(n_estimators = 10) clfs = [lr, knn, rfc] scores = [] for clf in clfs: clf.fit(x_train, y_train) scores.append(clf.score(x_test,y_test)) # plot_decision_boundary(model,x_test,y_test) return scores def regress(X, Y): x_train, x_test, y_train, y_test = train_test_split(X, Y, test_size = 0.3, random_state = RSTATE) ridge = Ridge() knnr = KNeighborsRegressor() regs = [ridge, knnr] scores = [] for reg in regs: reg.fit(x_train, y_train) scores.append(reg.score(x_test,y_test)) return scores # + from sklearn import impute def features_labels_split(df): breed = pd.factorize(df['Breed Name'])[0] longevity = df['Longevity(yrs)'] input_data = df.drop(['Breed Name', 'Longevity(yrs)'], axis = 1) return input_data, breed, longevity # extracting the columns that will act as labels for our data input_data, y_clas, y_reg = features_labels_split(data) all_features = list(input_data) strategies = ['mean', 'median', 'most_frequent'] # try accuracies for all strategies of replacing missing values for stg in strategies: imp = impute.SimpleImputer(missing_values = np.nan, strategy = stg) X = imp.fit_transform(input_data) print("Strategy : ", stg) print("Classification : ", classify(X, y_clas)) print("Regression : ", regress(X, y_reg)) X, y_clas, y_reg = features_labels_split(data.dropna()) # try accuracy for dropping missing values print("Strategy : Drop all missing") print("Classification : ", classify(X, y_clas)) print("Regression : ", regress(X, y_reg)) ### Conclusion : we drop the entries with missing values, since we gain more accuracy this way # + ## TUNING Logistic Regression from 
sklearn.model_selection import KFold kf = KFold(n_splits = 4, shuffle = True, random_state = RSTATE) def cross_validate(X, Y, model, kf = kf): scores = [] for train, test in kf.split(X, Y): model.fit(X.iloc[train], Y[train]) scores.append(model.score(X.iloc[test], Y[test])) return ( np.mean(scores), np.std(scores) ) x_train, x_test, y_train, y_test = train_test_split(X, y_clas, test_size = 0.3, random_state = RSTATE) clf = LogisticRegression(solver = 'newton-cg', multi_class = 'multinomial', C = 4, max_iter = 300) def try_regularize(clf, c_regs): mean_errs = [] for c in c_regs: # lbfgs and multinomial gave the best results clf.C = c mean_err, _ = cross_validate(x_train, y_train, clf) mean_errs.append(mean_err) return mean_errs def try_solvers(clf, solvers): mean_errs = [] for solver in solvers: if solver == 'liblinear': clf.multi_class = 'ovr' else: clf.multi_class = 'multinomial' clf.solver = solver mean_err, _ = cross_validate(x_train, y_train, clf) mean_errs.append(mean_err) return mean_errs # try many regularizations c_regs = np.arange(0.01, 10, 1) r_errs = try_regularize(clf, c_regs) # try many solvers solvers = ['newton-cg', 'lbfgs', 'sag', 'saga', 'liblinear'] s_errs = try_solvers(clf, solvers) plt.plot(c_regs, r_errs, 'o-') max_solver = np.argmax(s_errs) print("Max solvers : {} : {}".format(solvers[max_solver], s_errs[max_solver])) ## Conclusion : we will keep C = 4, since accuracy changes very little across these values ## Conclusion : keep the newton-cg solver, because it has the best accuracy clf.C = 4 clf.solver = 'newton-cg' clf.multi_class = 'multinomial' clf.fit(x_train, y_train) print("Test score : %.3f " % clf.score(x_test, y_test)) # - # !conda install python-graphviz # # !pip install graphviz # # !apt install graphviz import graphviz # + ## TUNING Decision Tree from sklearn import tree # checking many max_depth values -- currently between 3 and 4 # we choose min_samples_leaf = 1 because we tackle a classification problem # because max_depth is pretty small, we 
let the tree split on any sample number ??? / try more values # maybe perform dimensionality reduction clf = DecisionTreeClassifier(random_state=RSTATE, max_depth = 4, min_samples_leaf = 1, min_samples_split = 2, criterion = 'gini') validation_score, std = cross_validate(x_train, y_train, clf) # train with cross validation print("Validation score : {:.3f} +- {:.3f}".format(validation_score, std)) dot_data = tree.export_graphviz(clf, out_file=None, feature_names= all_features, class_names= all_classes, filled=True, rounded=True) graph = graphviz.Source(dot_data) graph print("Test score : %f " % clf.score(x_test, y_test)) # + ## TUNING RANDOM FOREST # try more n_estimators and pick the best one # check some values for max_depth clf = RandomForestClassifier(random_state = RSTATE, n_estimators=100 ) validation_score, std = cross_validate(x_train, y_train, clf) # train with cross validation print("Validation score : {:.3f} +- {:.3f}".format(validation_score, std)) dt = clf.estimators_[0] dot_data = tree.export_graphviz(dt, out_file=None, feature_names= all_features, class_names= all_classes, filled=True, rounded=True) graph = graphviz.Source(dot_data) # graph print("Test score : %f " % clf.score(x_test, y_test)) # + ## TUNING KNN Classifier # try more n_neighbors values and pick the best one clf = KNeighborsClassifier(n_neighbors=5) validation_score, std = cross_validate(x_train, y_train, clf) # train with cross validation print("Validation score : {:.3f} +- {:.3f}".format(validation_score, std)) # graph print("Test score : %f " % clf.score(x_test, y_test)) # + ## TUNING LINEAR REGRESSION - RIDGE reg = Ridge() validation_score, std = cross_validate(x_train, y_train, reg) # train with cross validation print("Validation score : {:.3f} +- {:.3f}".format(validation_score, std)) # graph print("Test score : %f " % reg.score(x_test, y_test)) # + ## TUNING LINEAR REGRESSION - LASSO from sklearn.linear_model import Lasso reg = Lasso() validation_score, std = cross_validate(x_train, y_train, reg) # train 
with cross validation print("Validation score : {:.3f} +- {:.3f}".format(validation_score, std)) # graph print("Test score : %f " % reg.score(x_test, y_test))
1.1.MachineLearning/Chapter1/Task1.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Preprocessing import numpy as np import matplotlib.pyplot as plt import sklearn sklearn.set_config(print_changed_only=True) from sklearn.datasets import load_boston boston = load_boston() from sklearn.model_selection import train_test_split X, y = boston.data, boston.target X_train, X_test, y_train, y_test = train_test_split( X, y, random_state=0) print(boston.DESCR) fig, axes = plt.subplots(3, 5, figsize=(20, 10)) for i, ax in enumerate(axes.ravel()): if i > 12: ax.set_visible(False) continue ax.plot(X[:, i], y, 'o', alpha=.5) ax.set_title("{}: {}".format(i, boston.feature_names[i])) ax.set_ylabel("MEDV") plt.boxplot(X) plt.xticks(np.arange(1, X.shape[1] + 1), boston.feature_names, rotation=30, ha="right"); from sklearn.preprocessing import StandardScaler scaler = StandardScaler() X_train_scaled = scaler.fit_transform(X_train) plt.boxplot(X_train_scaled) plt.xticks(np.arange(1, X.shape[1] + 1), boston.feature_names, rotation=30, ha="right"); from sklearn.neighbors import KNeighborsRegressor knr = KNeighborsRegressor().fit(X_train, y_train) knr.score(X_train, y_train) knr.score(X_test, y_test) knr_scaled = KNeighborsRegressor().fit(X_train_scaled, y_train) knr_scaled.fit(X_train_scaled, y_train) knr_scaled.score(X_train_scaled, y_train) X_test_scaled = scaler.transform(X_test) knr_scaled.score(X_test_scaled, y_test) from sklearn.ensemble import RandomForestRegressor rf = RandomForestRegressor(random_state=0) rf.fit(X_train, y_train) rf.score(X_test, y_test) rf_scaled = RandomForestRegressor(random_state=0) rf_scaled.fit(X_train_scaled, y_train) rf_scaled.score(X_test_scaled, y_test) # # Categorical Variables import pandas as pd df = pd.DataFrame({'salary': [103, 89, 142, 54, 63, 219], 'boro': ['Manhattan', 'Queens', 'Manhattan', 'Brooklyn', 
'Brooklyn', 'Bronx']}) df pd.get_dummies(df) from sklearn.compose import make_column_transformer from sklearn.preprocessing import OneHotEncoder categorical = df.dtypes == object categorical ~categorical ct = make_column_transformer((OneHotEncoder(), categorical), (StandardScaler(), ~categorical)) ct.fit_transform(df) ct = make_column_transformer((OneHotEncoder(sparse=False), categorical)) ct.fit_transform(df) ct = make_column_transformer((OneHotEncoder(), categorical), remainder='passthrough') ct.fit_transform(df) ct = make_column_transformer((OneHotEncoder(), categorical), remainder=StandardScaler()) ct.fit_transform(df) # # Exercises # # ## Exercise 1 # Load the "adult" dataset, consisting of income data from the census, including whether someone's salary is less than \$50k or more. Look at the data using the ``head`` method. Our final goal in Exercise 4 will be to classify entries into those making less than \$50k and those that make more. # # ## Exercise 2 # Experiment with visualizing the data. Can you find out which features influence the income the most? # # ## Exercise 3 # Separate the target variable from the features. # Split the data into training and test set. # Apply dummy encoding and scaling. # How did this change the number of variables? # # ## Exercise 4 # Build and evaluate a LogisticRegression model on the data. # # data = pd.read_csv("data/adult.csv", index_col=0) # + # # %load solutions/load_adult.py
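# A sketch of what Exercises 3–4 could look like, using a tiny synthetic stand-in for the adult table (the column names and data here are made up for illustration; the real `data/adult.csv` schema differs):

```python
import numpy as np
import pandas as pd
from sklearn.compose import make_column_transformer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler

rng = np.random.default_rng(0)
n = 400
X = pd.DataFrame({
    'age': rng.integers(18, 70, n),
    'hours-per-week': rng.integers(10, 60, n),
    'workclass': rng.choice(['Private', 'Self-emp', 'Gov'], n),
})
# Synthetic binary target standing in for "income > 50k".
y = (X['age'] + X['hours-per-week'] + 10 * (X['workclass'] == 'Gov') > 90).astype(int)

# Dummy-encode the categorical columns, scale the numeric ones,
# then feed everything into LogisticRegression via a pipeline.
categorical = X.dtypes == object
ct = make_column_transformer((OneHotEncoder(), categorical),
                             (StandardScaler(), ~categorical))
pipe = make_pipeline(ct, LogisticRegression())
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
pipe.fit(X_train, y_train)
score = pipe.score(X_test, y_test)
```

The pipeline keeps the encoding and scaling inside `fit`, so the test set is transformed with statistics learned on the training set only.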
notebooks/03 - Preprocessing.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 2 # language: python # name: python2 # --- # This notebook was prepared by [wdonahoe](https://github.com/wdonahoe). Source and license info is on [GitHub](https://github.com/donnemartin/interactive-coding-challenges). # # Challenge Notebook # ## Problem: Implement a function that groups identical items based on their order in the list. # # * [Constraints](#Constraints) # * [Test Cases](#Test-Cases) # * [Algorithm](#Algorithm) # * [Code](#Code) # * [Unit Test](#Unit-Test) # * [Solution Notebook](#Solution-Notebook) # ## Constraints # # * Can we use extra data structures? # * Yes # ## Test Cases # # * group_ordered([1,2,1,3,2]) -> [1,1,2,2,3] # * group_ordered(['a','b','a']) -> ['a','a','b'] # * group_ordered([1,1,2,3,4,5,2,1]) -> [1,1,1,2,2,3,4,5] # * group_ordered([]) -> [] # * group_ordered([1]) -> [1] # * group_ordered(None) -> None # ## Algorithm # # Refer to the [solution notebook](https://github.com/donnemartin/interactive-coding-challenges/templates/foo_solution.ipynb). If you are stuck and need a hint, the solution notebook's algorithm discussion might be a good place to start. 
# ## Code def group_ordered(list_in): # TODO: Implement me pass # ## Unit Test # **The following unit test is expected to fail until you solve the challenge.** # + # # %load test_group_ordered.py from nose.tools import assert_equal class TestGroupOrdered(object): def test_group_ordered(self, func): assert_equal(func(None), None) print('Success: ' + func.__name__ + " None case.") assert_equal(func([]), []) print('Success: ' + func.__name__ + " Empty case.") assert_equal(func([1]), [1]) print('Success: ' + func.__name__ + " Single element case.") assert_equal(func([1, 2, 1, 3, 2]), [1, 1, 2, 2, 3]) assert_equal(func(['a', 'b', 'a']), ['a', 'a', 'b']) assert_equal(func([1, 1, 2, 3, 4, 5, 2, 1]), [1, 1, 1, 2, 2, 3, 4, 5]) assert_equal(func([1, 2, 3, 4, 3, 4]), [1, 2, 3, 3, 4, 4]) print('Success: ' + func.__name__) def main(): test = TestGroupOrdered() test.test_group_ordered(group_ordered) try: test.test_group_ordered(group_ordered_alt) except NameError: # Alternate solutions are only defined # in the solutions file pass if __name__ == '__main__': main() # - # ## Solution Notebook # # Review the [solution notebook](https://github.com/donnemartin/interactive-coding-challenges/sorting_searching/group_ordered/group_ordered_solution.ipynb) for a discussion on algorithms and code solutions.
staging/sorting_searching/group_ordered/group_ordered_challenge.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python [Root] # language: python # name: Python [Root] # --- # %run ../Python_files/util.py # number of links m = 74 # + node_neighbors_dict = {} node_neighbors_dict['1'] = [2, 3] node_neighbors_dict['2'] = [1, 3, 4] node_neighbors_dict['3'] = [1, 2, 6] node_neighbors_dict['4'] = [2, 5, 6, 7, 18] node_neighbors_dict['5'] = [4, 8, 9] node_neighbors_dict['6'] = [3, 4, 7] node_neighbors_dict['7'] = [4, 6, 9, 11, 18] node_neighbors_dict['8'] = [5, 10, 12] node_neighbors_dict['9'] = [5, 7, 10] node_neighbors_dict['10'] = [8, 9, 11, 13] node_neighbors_dict['11'] = [7, 10, 14] node_neighbors_dict['12'] = [8, 13, 15, 19, 20] node_neighbors_dict['13'] = [10, 12, 14, 16, 19, 20, 21] node_neighbors_dict['14'] = [11, 13, 16, 21, 22] node_neighbors_dict['15'] = [12, 13, 17] node_neighbors_dict['16'] = [13, 14, 17, 22] node_neighbors_dict['17'] = [15, 16] node_neighbors_dict['18'] = [4, 7] node_neighbors_dict['19'] = [12, 13] node_neighbors_dict['20'] = [12, 13] node_neighbors_dict['21'] = [13, 14] node_neighbors_dict['22'] = [14, 16] # - for i in node_neighbors_dict['1']: for j in node_neighbors_dict['6']: if j in node_neighbors_dict[str(i)]: print('1->%s->%s->6'%(i,j)) OD_pair_label_dict_ = zload('../temp_files/OD_pair_label_dict__ext.pkz') OD_pair_label_dict = zload('../temp_files/OD_pair_label_dict_ext.pkz') od_pairs = OD_pair_label_dict_.values() len(od_pairs) with open('../temp_files/link_length_dict_ext_insert_links.json', 'r') as json_file: link_length_dict = json.load(json_file) link_label_dict = zload('../temp_files/link_label_dict_ext.pkz') link_label_dict_ = zload('../temp_files/link_label_dict_ext_.pkz') # compute length of a route def routeLength(route): link_list = [] node_list = [] for i in route.split('->'): node_list.append(int(i)) for i in range(len(node_list))[:-1]: link_list.append('%d->%d' 
%(node_list[i], node_list[i+1])) length_of_route = sum([link_length_dict[str(link_label_dict_[link])] for link in link_list]) return length_of_route with open('../temp_files/path-link_incidence_ext_insert_links.txt', 'w') as the_file: for od in od_pairs: origi = od[0] desti = od[1] the_file.write('O-D pair (%s, %s):\n'%(origi, desti)) route_list = [] if desti in node_neighbors_dict[str(origi)]: route_list.append('%s->%s\n'%(origi,desti)) for i in node_neighbors_dict[str(origi)]: if i in node_neighbors_dict[str(desti)]: flag = [origi, i, desti] if len(set(flag)) == len(flag): route_list.append('%s->%s->%s\n'%(origi,i,desti)) for j in node_neighbors_dict[str(i)]: if j in node_neighbors_dict[str(desti)]: flag = [origi, i, j, desti] if len(set(flag)) == len(flag): route_list.append('%s->%s->%s->%s\n'%(origi,i,j,desti)) for k in node_neighbors_dict[str(j)]: if k in node_neighbors_dict[str(desti)]: flag = [origi, i, j, k, desti] if len(set(flag)) == len(flag): route_list.append('%s->%s->%s->%s->%s\n'%(origi,i,j,k,desti)) for l in node_neighbors_dict[str(k)]: if l in node_neighbors_dict[str(desti)]: flag = [origi, i, j, k, l, desti] if len(set(flag)) == len(flag): route_list.append('%s->%s->%s->%s->%s->%s\n'%(origi,i,j,k,l,desti)) for m in node_neighbors_dict[str(l)]: if m in node_neighbors_dict[str(desti)]: flag = [origi, i, j, k, l, m, desti] if len(set(flag)) == len(flag): route_list.append('%s->%s->%s->%s->%s->%s->%s\n'%(origi,i,j,k,l,m,desti)) for n in node_neighbors_dict[str(m)]: if n in node_neighbors_dict[str(desti)]: flag = [origi, i, j, k, l, m, n, desti] if len(set(flag)) == len(flag): route_list.append('%s->%s->%s->%s->%s->%s->%s->%s\n'%(origi,i,j,k,l,m,n,desti)) for o in node_neighbors_dict[str(n)]: if o in node_neighbors_dict[str(desti)]: flag = [origi, i, j, k, l, m, n, o, desti] if len(set(flag)) == len(flag): route_list.append('%s->%s->%s->%s->%s->%s->%s->%s->%s\n'%(origi,i,j,k,l,m,n,o,desti)) for p in node_neighbors_dict[str(o)]: if p in 
node_neighbors_dict[str(desti)]: flag = [origi, i, j, k, l, m, n, o, p, desti] if len(set(flag)) == len(flag): route_list.append('%s->%s->%s->%s->%s->%s->%s->%s->%s->%s\n'%(origi,i,j,k,l,m,n,o,p,desti)) for q in node_neighbors_dict[str(p)]: if q in node_neighbors_dict[str(desti)]: flag = [origi, i, j, k, l, m, n, o, p, q, desti] if len(set(flag)) == len(flag): route_list.append('%s->%s->%s->%s->%s->%s->%s->%s->%s->%s->%s\n'%(origi,i,j,k,l,m,n,o,p,q,desti)) route_length_list = np.array([routeLength(route) for route in route_list]) refined_route_list = [] if len(route_list) <= 10: refined_route_list = route_list else: refined_idx_list = [np.argsort(route_length_list)[i] for i in range(10)] refined_route_list = [route_list[idx] for idx in refined_idx_list] for route in refined_route_list: the_file.write(route) the_file.write('\n') with open('../temp_files/path-link_incidence_ext_insert_links.txt', 'r') as the_file: # path counts i = 0 for row in the_file: if '->' in row: i = i + 1 i
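# The nested `for i ... for q` blocks above hand-unroll a depth-limited search. The same enumeration can be written once as a recursive sketch (illustrative only; it assumes a neighbor dict shaped like `node_neighbors_dict`, with string keys and integer neighbor lists, and `max_hops=9` matches the deepest nested level above):

```python
def simple_paths(neighbors, origin, dest, max_hops=9):
    """Enumerate simple paths 'o->i->...->d' from origin to dest with
    at most max_hops intermediate nodes, depth-first."""
    paths = []

    def dfs(node, visited):
        if node == dest:
            paths.append('->'.join(str(v) for v in visited))
            return
        if len(visited) > max_hops + 1:   # origin + max_hops intermediates
            return
        for nxt in neighbors[str(node)]:
            if nxt not in visited:        # keep paths simple (no repeats)
                dfs(nxt, visited + [nxt])

    dfs(origin, [origin])
    return paths
```

Combined with `routeLength` and the `np.argsort` truncation above, this produces the same `'1->...->6'`-style strings as the unrolled loops while keeping the depth limit a parameter.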
02_network_labels_ifac17/02_find_paths_of_a_road_network.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # ## Visualize The Policy Schedule of PBA # # # + # !pip install seaborn import PIL import copy import matplotlib.pyplot as plt import numpy as np import pandas as pd import seaborn as sns from cycler import cycler import pba.augmentation_transforms_hp as augmentation_transforms_hp from pba.utils import parse_log_schedule from pba.data_utils import parse_policy # Ignore the divide-by-zero warning in the probability calculation np.seterr(divide='ignore', invalid='ignore') # - # Initialize CIFAR & SVHN policies. cifar_policy = (parse_log_schedule('schedules/rcifar10_16_wrn.txt', 200), 'cifar10_4000') svhn_policy = (parse_log_schedule('schedules/rsvhn_16_wrn.txt', 160), 'svhn_1000') # + policy, dset = cifar_policy # Initialize dictionaries probability_results = {'Rotate':[], 'TranslateX':[], 'TranslateY':[], 'Brightness':[], 'Color':[], 'Invert':[], 'Sharpness':[], 'Posterize':[], 'ShearX':[], 'Solarize':[], 'ShearY':[], 'Equalize':[], 'AutoContrast':[], 'Cutout':[], 'Contrast':[]} magnitude_results = copy.deepcopy(probability_results) upper_probability = copy.deepcopy(probability_results) upper_magnitude = copy.deepcopy(probability_results) bottom_probability = copy.deepcopy(probability_results) bottom_magnitude = copy.deepcopy(probability_results) def parse_policy_hyperparams(policy_hyperparams): """We have two sets of hparams for each operation, which we need to split up.""" split = len(policy_hyperparams) // 2 policy = parse_policy(policy_hyperparams[:split], augmentation_transforms_hp) policy.extend(parse_policy(policy_hyperparams[split:], augmentation_transforms_hp)) return policy def mean_hyperparams(policy): """Get the mean of the two sets of hparams for each operation, both magnitude and probability.""" for one_policy in policy: parsed_policy = 
parse_policy_hyperparams(one_policy) half_policy = len(parsed_policy) // 2 # Follow the policy order in augmentation_transforms_hp.py # line 64: name, probability, level = xform for i in parsed_policy[:half_policy]: upper_probability[i[0]].append(i[1]) upper_magnitude[i[0]].append(i[2]) for i in parsed_policy[half_policy:]: bottom_probability[i[0]].append(i[1]) bottom_magnitude[i[0]].append(i[2]) for key, value in upper_probability.items(): upper_array = np.array(upper_probability[key]) bottom_array = np.array(bottom_probability[key]) tmp_array = np.divide((upper_array+bottom_array),2) tmp_array = np.round(tmp_array, decimals=4).tolist() probability_results[key]=tmp_array for key, value in upper_magnitude.items(): upper_array = np.array(upper_magnitude[key]) bottom_array = np.array(bottom_magnitude[key]) tmp_array = np.divide((upper_array+bottom_array),2) tmp_array = np.round(tmp_array, decimals=4).tolist() magnitude_results[key]=tmp_array return probability_results, magnitude_results probability_results, magnitude_results = mean_hyperparams(policy) # + # Set a unique color for every operation in the plot. sns.set() sns.set_palette(sns.color_palette("hls", 20)) ind_constant = len(magnitude_results["Rotate"]) ind = np.arange(ind_constant) # Get the sum of each operation's magnitude at each epoch. # [sum of epoch1, sum of epoch2, sum of epoch3...] 
def sum_results_array (index_to_sum, results_dict): sums = np.zeros(len(next(iter(results_dict.values())))) i = 0 for value in results_dict.values(): value = np.array(value) sums = sums+value if i==index_to_sum: break i=i+1 return sums # Plot each bar plt.rcParams["figure.figsize"] = [30,10] Rotate_bars = plt.bar(ind, magnitude_results["Rotate"]) TranslateX_bars = plt.bar(ind, magnitude_results["TranslateX"], bottom=sum_results_array(0,magnitude_results)) TranslateY_bars = plt.bar(ind, magnitude_results["TranslateY"], bottom=sum_results_array(1,magnitude_results)) Brightness_bars = plt.bar(ind, magnitude_results["Brightness"], bottom=sum_results_array(2,magnitude_results)) Color_bars = plt.bar(ind, magnitude_results["Color"], bottom=sum_results_array(3,magnitude_results)) Invert_bars = plt.bar(ind, magnitude_results["Invert"], bottom=sum_results_array(4,magnitude_results)) Sharpness_bars = plt.bar(ind, magnitude_results["Sharpness"], bottom=sum_results_array(5,magnitude_results)) Posterize_bars = plt.bar(ind, magnitude_results["Posterize"], bottom=sum_results_array(6,magnitude_results)) ShearX_bars = plt.bar(ind, magnitude_results["ShearX"], bottom=sum_results_array(7,magnitude_results)) Solarize_bars = plt.bar(ind, magnitude_results["Solarize"], bottom=sum_results_array(8,magnitude_results)) ShearY_bars = plt.bar(ind, magnitude_results["ShearY"], bottom=sum_results_array(9,magnitude_results)) Equalize_bars = plt.bar(ind, magnitude_results["Equalize"], bottom=sum_results_array(10,magnitude_results)) AutoContrast_bars = plt.bar(ind, magnitude_results["AutoContrast"], bottom=sum_results_array(11,magnitude_results)) Cutout_bars = plt.bar(ind, magnitude_results["Cutout"], bottom=sum_results_array(12,magnitude_results)) Contrast_bars = plt.bar(ind, magnitude_results["Contrast"], bottom=sum_results_array(13,magnitude_results)) plt.ylabel('Sum(magnitude)') plt.xlabel('Epochs') plt.title('Magnitudes') plt.xlim([0, ind_constant]) bars_tag = 
["Rotate","TranslateX","TranslateY","Brightness","Color","Invert", "Sharpness","Posterize","ShearX","Solarize","ShearY","Equalize", "AutoContrast","Cutout","Contrast"] plt.legend(bars_tag, loc='center left', bbox_to_anchor=(1, 0.5)) plt.savefig("magnitude.png",bbox_inches='tight') plt.show() # + # Get total epochs epochs = len(next(iter(probability_results.values()))) # Sum of probability. #[sum of epoch1, sum of epoch2, sum of epoch3...] sum_results = sum_results_array(epochs, probability_results) # Get every operation's probability portion in that epoch Rotate_1=np.nan_to_num(np.divide(probability_results["Rotate"], sum_results)) TranslateX_1 = np.nan_to_num(np.divide(probability_results["TranslateX"], sum_results)) TranslateY_1 = np.nan_to_num(np.divide(probability_results["TranslateY"], sum_results)) Brightness_1 = np.nan_to_num(np.divide(probability_results["Brightness"], sum_results)) Color_1 = np.nan_to_num(np.divide(probability_results["Color"], sum_results)) Invert_1 = np.nan_to_num(np.divide(probability_results["Invert"], sum_results)) Sharpness_1 = np.nan_to_num(np.divide(probability_results["Sharpness"], sum_results)) Posterize_1 = np.nan_to_num(np.divide(probability_results["Posterize"], sum_results)) ShearX_1 = np.nan_to_num(np.divide(probability_results["ShearX"], sum_results)) Solarize_1 = np.nan_to_num(np.divide(probability_results["Solarize"], sum_results)) ShearY_1 = np.nan_to_num(np.divide(probability_results["ShearY"], sum_results)) Equalize_1 = np.nan_to_num(np.divide(probability_results["Equalize"], sum_results)) AutoContrast_1 = np.nan_to_num(np.divide(probability_results["AutoContrast"], sum_results)) Cutout_1 = np.nan_to_num(np.divide(probability_results["Cutout"], sum_results)) Contrast_1 = np.nan_to_num(np.divide(probability_results["Contrast"], sum_results)) # Plot plt.rcParams["figure.figsize"] = [20,10] ind = np.arange(epochs) plt.stackplot(ind, Rotate_1, TranslateX_1, TranslateY_1,Brightness_1,Color_1, 
Invert_1,Sharpness_1,Posterize_1,ShearX_1,Solarize_1,ShearY_1, Equalize_1,AutoContrast_1,Cutout_1,Contrast_1,labels=bars_tag) plt.legend(bars_tag, loc='center left', bbox_to_anchor=(1, 0.5)) plt.xlim([0, epochs]) plt.ylim([0, 1]) plt.ylabel('Probability') plt.xlabel('Epochs') plt.title('Probability') plt.savefig("probability.png",bbox_inches='tight') plt.show()
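The normalization used for the stacked probability plot can be exercised on a toy schedule. A minimal sketch with a hypothetical two-operation, four-epoch schedule (not the real PBA data), using the same `np.divide`/`np.nan_to_num` pattern as the notebook:

```python
import numpy as np

np.seterr(divide='ignore', invalid='ignore')  # same guard as the notebook

# Hypothetical per-epoch probabilities for two operations over four epochs.
probability_results = {
    "Rotate": [0.0, 0.2, 0.4, 0.4],
    "Cutout": [0.0, 0.6, 0.4, 0.2],
}

# Per-epoch totals over all operations (the stackplot denominator).
totals = np.sum([np.array(v) for v in probability_results.values()], axis=0)

# Each operation's share per epoch; nan_to_num maps 0/0 epochs to 0.
shares = {op: np.nan_to_num(np.divide(np.array(v), totals))
          for op, v in probability_results.items()}
```

Each epoch's shares then sum to 1 (or 0 where no operation is active), which is what keeps the stacked probability plot bounded by 1.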
policy_visualization.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + import pandas as pd import numpy as np import matplotlib.pyplot as plt import seaborn as sns from sklearn.preprocessing import StandardScaler, PolynomialFeatures, LabelEncoder from sklearn.model_selection import train_test_split, GridSearchCV from sklearn.linear_model import LinearRegression, ElasticNet from sklearn.pipeline import make_pipeline from category_encoders import OneHotEncoder from sklearn.metrics import mean_squared_error from sklearn.compose import make_column_transformer from sklearn.compose import make_column_selector from sklearn.neighbors import KNeighborsRegressor from sklearn.ensemble import RandomForestRegressor from tensorflow.keras.models import Sequential from tensorflow.keras.layers import Input, Dense, BatchNormalization # - # ## Import Data #read the dataframe in & observe df = pd.read_csv('./googleplaystore.csv') df.head(3) df.info() # Ratings have some missing data so we'll remove those rows. Installs, price & reviews should be int. 
Last Updated should be a date. # ## Clean Data df.dropna(inplace=True) df.shape #(9360,13) df.head(3) # + tags=[] #converting installs to integers (regex=False so '+' is treated literally) df['Installs']=df['Installs'].str.replace('+','',regex=False).str.replace(',','',regex=False) df['Installs'] = df['Installs'].astype(int) #change price to float df['Price']=df['Price'].str.replace('$','',regex=False).astype(float) #convert reviews to int df['Reviews'] = df['Reviews'].astype(int) #convert dates to date time df['Last Updated'] = pd.to_datetime(df['Last Updated']) #convert rating content to string df['Content Rating'] = df['Content Rating'].astype(str) df['Size'] = df['Size'].str.replace('M','').str.replace('k','').str.replace('Varies with device','308') df['Size'] = df['Size'].astype(float) #observe df.info() # - # ## EDA plt.figure(figsize=(10,10)) plt.barh(df['Category'].value_counts().index,df['Category'].value_counts()); plt.ylabel('App Category', size = 20) plt.xlabel('Number of Apps', size =20) plt.title('Number of Apps per Category',size = 20); plt.figure(figsize=(10,10)) plt.barh(df['Content Rating'].value_counts().index,df['Content Rating'].value_counts()); plt.ylabel('Content Rating', size = 20) plt.xlabel('Number of Apps', size =20) plt.title('Content Rating Frequency',size = 20); plt.hist(df['Rating']); plt.ylabel('Frequency', size = 20) plt.xlabel('App Ratings', size =20) plt.title('App Ratings',size = 20); plt.hist(np.log(df['Rating'])); plt.ylabel('Frequency', size = 20) plt.xlabel('Log of App Ratings', size =20) plt.title('Log of App Ratings',size = 20); plt.hist(df['Reviews']); plt.ylabel('Frequency', size = 20) plt.xlabel('Reviews', size =20) plt.title('Number of Customer Reviews',size = 20); plt.hist(df['Price']); plt.ylabel('Frequency', size = 20) plt.xlabel('App Price ($)', size =20) plt.title('Frequency of App Price',size = 20); df.loc[ (df['Rating'] == 5.0)] df['Price'].max() df.loc[(df['Price'] == 400.0)] df['Rating'].max() sns.pairplot(df, corner=True) sns.scatterplot(data = df, x='Rating', y='Reviews');#,
hue='Rating'); sns.heatmap(df.corr(), annot = True) # ### Baseline accuracy y = df['Rating'] mean_squared_error(y,np.full_like(y,np.mean(y))) # ### Choose X & Y df.head(3) # ### Basic Linear Regression # + X = df[['Reviews','Installs','Size']] y = df['Rating'] X_train, X_test, y_train ,y_test = train_test_split(X,y, random_state=518) #Create pipe pipe_lr = make_pipeline(StandardScaler(), LinearRegression()) #fit model pipe_lr.fit(X_train,y_train) #score on train pipe_lr.score(X_train,y_train) #0.0061 #score on test #pipe_lr.score(X_test,y_test) #0.0059 # - preds_lr = pipe_lr.predict(X_test) mean_squared_error(y_test,preds_lr) # + # # - # ### Linear Regression with OHE & Pipe # + X = df[['Category','Reviews','Installs','Size']] y = df['Rating'] X_train, X_test, y_train ,y_test = train_test_split(X,y, random_state=518) #transform column with OHE trans_col = make_column_transformer((OneHotEncoder(use_cat_names=True), make_column_selector('Category')) ) #create pipe pipe_lr = make_pipeline(trans_col, StandardScaler(), LinearRegression()) #fit model pipe_lr.fit(X_train,y_train) #score on train pipe_lr.score(X_train,y_train) #0.034 #score on test pipe_lr.score(X_test,y_test) #0.016 # - preds_lr = pipe_lr.predict(X_test) mean_squared_error(y_test,preds_lr) # + # # - # ### Elastic Net with OHE & Grid Search # + tags=[] X = df[['Category','Reviews','Installs','Size']] y = df['Rating'] X_train, X_test, y_train ,y_test = train_test_split(X,y, random_state=518) #transform column with OHE trans_col = make_column_transformer((OneHotEncoder(use_cat_names=True), make_column_selector('Category')) ) #create pipe pipe_elastic = make_pipeline(trans_col, StandardScaler(), ElasticNet()) params_elastic = {'elasticnet__alpha':[0.01, 0.1, 1, 10, 100,1000], 'elasticnet__l1_ratio':[0.2,0.4,0.6,0.8]} gs_elastic = GridSearchCV(pipe_elastic, params_elastic, n_jobs=-1) gs_elastic.fit(X_train, y_train) gs_elastic.best_params_ # alpha 0.01, l1ratio 0.2 gs_elastic.score(X_train,y_train) #0.0338 
gs_elastic.score(X_test,y_test) # 0.0174 # - preds_elastic = gs_elastic.predict(X_test) mean_squared_error(y_test,preds_elastic) # + # # - # ### KNN with GridSearch # + X = df[['Category','Reviews','Installs','Price','Size']] y = df['Rating'] X_train, X_test, y_train ,y_test = train_test_split(X,y, random_state=518) #transform column with OHE trans_col = make_column_transformer((OneHotEncoder(use_cat_names=True), make_column_selector('Category')) ) #create pipe pipe_knn = make_pipeline(trans_col, StandardScaler(), KNeighborsRegressor()) params_knn = {'kneighborsregressor__n_neighbors':[10,50,200,300,400,500], 'kneighborsregressor__leaf_size':[10,30,60,100]} gs_knn = GridSearchCV(pipe_knn, params_knn, n_jobs=-1) gs_knn.fit(X_train,y_train) gs_knn.best_params_ #leaf_size 10, n_neighbors 200 gs_knn.score(X_train,y_train) #0.0306 gs_knn.score(X_test,y_test) #0.0203 # - preds_knn = gs_knn.predict(X_test) mean_squared_error(y_test,preds_knn) # + # # - # ### Random Forest # + X = df[['Category','Reviews','Installs','Price','Size']] y = df['Rating'] X_train, X_test, y_train ,y_test = train_test_split(X,y, random_state=518) #transform column with OHE trans_col = make_column_transformer((OneHotEncoder(use_cat_names=True), make_column_selector('Category')) ) #create pipe pipe_rf = make_pipeline(trans_col, StandardScaler(), RandomForestRegressor()) params_rf = {'randomforestregressor__n_estimators':[100,200,300,400,500], 'randomforestregressor__max_depth':[100,500,1000]} gs_rf = GridSearchCV(pipe_rf, params_rf, n_jobs = -1) gs_rf.fit(X_train,y_train) gs_rf.best_params_ #max_depth 500, n_estimators 100 gs_rf.score(X_train,y_train) #0.034 gs_rf.score(X_test,y_test) #0.0168 # - preds_rf = gs_rf.predict(X_test) mean_squared_error(y_test,preds_rf) # ### Import new text data (Reviews of apps) reviews = pd.read_csv('./googleplaystore_user_reviews.csv') # + #observe & merge with existing df reviews.shape #(64295, 5) df.shape #(9360, 13) reviews.drop_duplicates(inplace=True) 
reviews.shape #(30679, 5) merged_df = pd.merge(left = reviews, right = df, on='App').dropna() merged_df.shape #(46691, 17) merged_df.isna().sum() # all 0 merged_df.head(3) # - # #### Label encode # + le = LabelEncoder() merged_df['Sentiment'] = le.fit_transform(merged_df['Sentiment']) merged_df['Sentiment'].unique() #array([2, 1, 0]) # - sns.heatmap(merged_df.corr()[['Rating']], annot = True) # ### Predict on best performing model # + tags=[] X = merged_df[['Category','Sentiment_Polarity','Sentiment_Subjectivity','Sentiment','Installs','Price','Size']] y = merged_df['Rating'] X_train, X_test, y_train ,y_test = train_test_split(X,y, random_state=518) #transform column with OHE trans_col = make_column_transformer((OneHotEncoder(use_cat_names=True), make_column_selector('Category')) ) #create pipe pipe_knn = make_pipeline(trans_col, StandardScaler(), KNeighborsRegressor()) params_knn = {'kneighborsregressor__n_neighbors':[200], 'kneighborsregressor__leaf_size':[10]} gs_knn = GridSearchCV(pipe_knn, params_knn, n_jobs=-1) gs_knn.fit(X_train,y_train) gs_knn.best_params_ #leaf_size 10, n_neighbors 200 gs_knn.score(X_train,y_train) #0.15 gs_knn.score(X_test,y_test) #0.15 # - preds_knn = gs_knn.predict(X_test) mean_squared_error(y_test,preds_knn) # + #next poly features, neural nets # - # + #the network needs encoded, scaled inputs: one-hot encode Category then scale prep = make_pipeline(trans_col, StandardScaler()) X_train_sc = prep.fit_transform(X_train) X_test_sc = prep.transform(X_test) # - model = Sequential() model.add(Input(shape=(X_train_sc.shape[1],))) model.add(Dense(12,activation= 'relu')) model.add(Dense(1)) model.compile(optimizer='adam', loss='mse',metrics=['mae']) # + jupyter={"outputs_hidden": true} tags=[] history = model.fit( X_train_sc, y_train, batch_size = 128, validation_data=(X_test_sc,y_test), epochs = 100) # - preds = model.predict(X_test_sc) preds model.evaluate(X_test_sc,y_test)
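The "next poly features" note above can be sketched with the already-imported `PolynomialFeatures`. A minimal sketch on synthetic data standing in for the numeric app features (so the numbers here are illustrative only, not results on the Play Store data):

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures, StandardScaler
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

# Synthetic stand-in for three numeric features, with a squared effect in y.
rng = np.random.default_rng(518)
X = rng.normal(size=(300, 3))
y = X[:, 0] ** 2 + 0.5 * X[:, 1] + rng.normal(scale=0.1, size=300)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=518)

# A plain linear model cannot capture the squared term...
pipe_lin = make_pipeline(StandardScaler(), LinearRegression())
mse_lin = mean_squared_error(y_test, pipe_lin.fit(X_train, y_train).predict(X_test))

# ...while a degree-2 expansion makes the relationship linear in the new features.
pipe_poly = make_pipeline(PolynomialFeatures(degree=2), StandardScaler(), LinearRegression())
mse_poly = mean_squared_error(y_test, pipe_poly.fit(X_train, y_train).predict(X_test))
```

On the real data the expansion would slot into the existing pipelines right before `StandardScaler()`.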
google_rating_predictions.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Task 6: Prediction using Decision Tree Algorithm # ### DecisionTree # Decision Trees (DTs) are a non-parametric supervised learning method used for classification and regression. The goal is to create a model that predicts the value of a target variable by learning simple decision rules inferred from the data features. # # For instance, decision trees can learn from data to approximate a sine curve with a set of if-then-else decision rules. The deeper the tree, the more complex the decision rules and the fitter the model. # #### _Author: <NAME>_ # ### Importing Libraries import pandas as pd import numpy as np import matplotlib.pyplot as plt import seaborn as sns # %matplotlib inline from sklearn.model_selection import train_test_split from sklearn.tree import DecisionTreeClassifier from sklearn import tree from sklearn.metrics import classification_report,confusion_matrix,accuracy_score # ### Loading Data and exploring # Loading the iris dataset df=pd.read_csv("Iris.csv") df.head() df.shape df.describe() # ### Data Preparation and cleaning sns.jointplot(x='SepalLengthCm', y='SepalWidthCm', data=df) # Dropping the Id and Species columns df_new=df.drop(['Species','Id'],axis=1) df_new.head() # ### Separating training and testing data X=df_new y=df["Species"] # Split Train and Test data X_train,X_test,y_train,y_test=train_test_split(X,y,test_size=0.33, random_state=42) X_train y_train # ### Training our model # Train the decision tree classifier tree_new=DecisionTreeClassifier(criterion='entropy') tree_new.fit(X_train, y_train) y_pred=tree_new.predict(X_test) print(classification_report(y_test,y_pred)) print(accuracy_score(y_test,y_pred)) dtree=list(df_new.columns.values) dtree # ### Exploring our decision tree plt.figure(figsize=(20,9))
tree.plot_tree(tree_new, feature_names=dtree, filled=True, precision=3, proportion=True, rounded=True) plt.show()
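The same fitted rules can also be inspected as plain text with `export_text`. A minimal sketch using scikit-learn's bundled iris data as a stand-in for `Iris.csv`, so it runs without the file:

```python
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

# Bundled iris data: same four features and three species as Iris.csv.
iris = load_iris()
clf = DecisionTreeClassifier(criterion='entropy', random_state=42)
clf.fit(iris.data, iris.target)

# export_text prints the same if-then-else rules that plot_tree draws.
rules = export_text(clf, feature_names=list(iris.feature_names))
print(rules)
```

This text view is handy when the plotted tree becomes too deep to read.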
Task-6_Exploratory-Data-Analysis--Retail/DecisionTree-Algorithm.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + # %matplotlib inline # ---- settings ---- import os os.environ["CUDA_VISIBLE_DEVICES"] = "" import json Settings = json.load(open('../settings.txt')) import matplotlib.pyplot as plt from mpl_toolkits.mplot3d import Axes3D import sys sys.path.insert(0, '../'); sys.path.insert(0, '.') import paf_loader from os.path import join, isdir import numpy as np import cv2 from cselect import color as cs from time import time from mvpose import settings from mvpose.data import umpm params = settings.get_settings(ms_radius=30) root = join(Settings['data_root'], 'pak') user = Settings['UMPM']['username'] pwd = Settings['UMPM']['password'] tmp = Settings['tmp'] FRAME = 1430
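The `settings.txt` read at the top is plain JSON. A minimal sketch of the structure those lookups assume — the keys are inferred from the accesses above, and the values here are placeholders, not real paths or credentials:

```python
import json
import os
import tempfile

# Hypothetical settings file with the keys the notebook reads.
settings = {
    "data_root": "/data",
    "tmp": "/tmp/mvpose",
    "UMPM": {"username": "user", "password": "secret"},
}
path = os.path.join(tempfile.mkdtemp(), "settings.txt")
with open(path, "w") as f:
    json.dump(settings, f)

# Same access pattern as the notebook.
Settings = json.load(open(path))
root = os.path.join(Settings["data_root"], "pak")
user = Settings["UMPM"]["username"]
```

Keeping credentials in an untracked `settings.txt` like this is what lets the notebook stay free of secrets.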
samples/Human36m.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python [default] # language: python # name: python3 # --- import PIL import numpy as np import tensorflow as tf import glob import io # + TFRECORDS_PATH = list(glob.glob('/data/lila/nacti/cropped_tfrecords/t*'))[0] print(TFRECORDS_PATH) sess = tf.InteractiveSession() # - record_iterator = tf.python_io.tf_record_iterator(TFRECORDS_PATH) for string_record in record_iterator: example = tf.train.Example() example.ParseFromString(string_record) encoded = example.features.feature["image/encoded"].bytes_list.value[0] break example im_path = example.features.feature["image/filename"].bytes_list.value[0].decode('utf8') print(im_path) print(PIL.Image.open(im_path).size) PIL.Image.open(im_path) print(PIL.Image.open(io.BytesIO(encoded)).size) PIL.Image.open(io.BytesIO(encoded)) w = example.features.feature['image/width'].int64_list.value[0] h = example.features.feature['image/height'].int64_list.value[0] w,h
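The decode step above — PIL reading the `image/encoded` bytes — can be reproduced without a TFRecord. A small sketch that round-trips an in-memory JPEG; the image itself is synthetic, not a NACTI crop:

```python
import io
from PIL import Image

# Synthetic image standing in for a cropped camera-trap image.
img = Image.new("RGB", (32, 16), color=(255, 0, 0))
buf = io.BytesIO()
img.save(buf, format="JPEG")
encoded = buf.getvalue()  # analogous to the image/encoded bytes field

# Decoding is the same BytesIO trick used on the TFRecord bytes above.
decoded = Image.open(io.BytesIO(encoded))
w, h = decoded.size
```

Comparing `decoded.size` against the `image/width`/`image/height` features is a quick sanity check that the record was written correctly.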
data_management/tfrecords/tools/inspect_tfrecords.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Deriving water balance on irrigated farms using remote sensing images # # ## This notebook demonstrates how to estimate Actual Evapo-Transpiration (AET) using the [CMRSET methodology](https://www.wateraccounting.org/files/guerschman_j%20_of_hydr.pdf) # # #### [Example](https://www.waterconnect.sa.gov.au/Content/Publications/DEW/DEWNR-TN-2016-10.pdf) of application of this methodology: # # # ### 1.- Load datacube environment # + import xarray as xr import numpy as np from datetime import date, timedelta import datacube dc = datacube.Datacube(app="watersense_irrigated") # - # ### 2.- Define location and query parameters # + locs = {"Norwood": {"coords": [149.7543738, -29.3741845]}, "Redbank": {"coords": [150.0313090, -29.4199982]}} coords = locs["Norwood"]["coords"] query = {'lat': (coords[1]+0.05, coords[1]-0.05), 'lon': (coords[0]-0.05, coords[0]+0.05), 'output_crs': 'EPSG:3577', 'resolution': (-25, 25), 'measurements': ['nbart_red', 'nbart_green', 'nbart_blue', 'nbart_nir', 'nbart_swir_1', 'nbart_swir_2'], 'time':("2020-09-01", "2021-01-01")} # Scale values between 0-1 ds = dc.load(product='ga_ls8c_ard_3', group_by='solar_day', **query)/10000 ds # - # ### 3.- Plot the RGB query results ds[['nbart_red','nbart_green','nbart_blue']].clip(0,0.15).to_array().plot.imshow(col='time', col_wrap=4) # ### 4.- Select cloud free samples, one per month, and show results # + ds = ds.isel(time=[0,2,4,5]) ds[['nbart_red','nbart_green','nbart_blue']].clip(0,0.15).to_array().plot.imshow(col='time', col_wrap=4) # - # ### 5.- Defining our custom functions using DEA data. 
NDVI Example # # ##### NDVI = (NIR - RED) / (NIR + RED) # + def ndvi(dataset): return (dataset.nbart_nir - dataset.nbart_red) / (dataset.nbart_nir + dataset.nbart_red) ndvi(ds).plot(col='time', col_wrap=4, cmap="summer_r", vmin=0, vmax=1) # - # #### Do you know what the bright yellow patches are? # # ### 6.- WOfS dataset # + query = {'lat': (coords[1]+0.05, coords[1]-0.05), 'lon': (coords[0]-0.05, coords[0]+0.05), 'output_crs': 'EPSG:3577', 'resolution': (-25, 25), #'time':("2018-04-01", "2018-04-10")} 'time':("2020-10-01", "2020-10-10")} ds_wofs = dc.load(product='wofs_albers', **query) ds_wofs.water.plot(cmap="Blues") # - # ### 7.- Our own version of WOfS with NDVI thresholding (ndvi(ds)<0).plot(cmap="Blues", col='time', col_wrap=4) # #### Adding all the images together ((ndvi(ds)<0).sum(dim="time")>1).plot(cmap="Blues") # ### 8.- Enhanced Vegetation Index (EVI) (Huete et al., 2002), an improved version of NDVI # # #### While the EVI is calculated similarly to NDVI, it corrects for some distortions in the reflected light caused by the particles in the air as well as the ground cover below the vegetation. The EVI data product also does not become saturated as easily as the NDVI when viewing dense forests.
# # ##### EVI = 2.5 * (NIR - RED) / (NIR + 6 * RED - 7.5 * BLUE + 1) # + def evi(ds): G = 2.5 C1 = 6 C2 = 7.5 L = 1 return G * (ds.nbart_nir - ds.nbart_red) / (ds.nbart_nir + C1 * ds.nbart_red - C2 * ds.nbart_blue + L) evi(ds).plot(col='time', col_wrap=4, cmap="summer_r") # - # ### 9.- EVIr, a rescaled version of EVI # # ##### EVIr = min(1, max(0, EVI/0.9 )) # + def evir(ds): return np.clip(evi(ds)/0.9, 0, 1) evir(ds).plot(col='time', col_wrap=4, cmap="summer_r") # - # ### 10.- Global Vegetation Moisture Index (GVMI) (Ceccato et al., 2002) # # ##### GVMI=((NIR+0.1)-(SWIR+0.02)) / ((NIR+0.1)+(SWIR+0.02)) # + def gvmi(ds): return ((ds.nbart_nir + .1) - (ds.nbart_swir_2 + .02)) / ((ds.nbart_nir + .1) + (ds.nbart_swir_2 + .02)) gvmi(ds).plot(col='time', col_wrap=4, cmap="Blues") # - # ### 11.- Residual Moisture Index (RMI) # # ##### combines the Enhanced Vegetation Index (EVI) and the Global Vegetation Moisture Index (GVMI) # # ##### RMI = min(1, max(0, GVMI - (0.775*EVI-0.0757))) # + def rmi(ds): return np.clip(gvmi(ds) - (0.775 * evi(ds) - 0.0757), 0, 1) rmi(ds).plot(col='time', col_wrap=4, cmap="Blues") # - # ### 12.- Crop factor kc # # ##### The crop’s water use can be determined by multiplying an estimated PET by a crop coefficient, Kc, which takes into account the difference in ET between the crop and reference evapotranspiration. # # ##### ET = PET * Kc # # ##### The crop coefficients, Kc values, represent the crop type and the development of the crop. There may be several Kc values for a single crop depending on the crop’s stage of development.
# + def kc(ds): c1 = 0.680 c2 = -14.12 c3 = 2.482 c4 = -7.991 c5 = 0.890 return c1 * (1 - np.exp(c2 * np.power(evir(ds), c3) + c4 * np.power(rmi(ds), c5))) kc(ds).plot(col='time', col_wrap=4, cmap="copper") # - # ### Adding climate variables # # ### 13.- Potential Evapotranspiration (PET) from BoM online servers # # [AWRA BoM page](http://www.bom.gov.au/water/landscape/) # + ds_pet = xr.open_dataset("http://www.bom.gov.au/jsp/awra/thredds/dodsC/AWRACMS/values/month/e0.nc", decode_times=False) base = date(1900, 1, 1) ds_pet['time'] = [np.datetime64(base + timedelta(days=int(x))) for x in ds_pet.time.values] ds_pet.e0.isel(time=-1).plot(cmap="Blues") # - # ### 14.- Selecting the grid point corresponding to our location # + pet = ds_pet.e0.sel(longitude=coords[0], latitude=coords[1], time=ds.time, method="nearest") pet.plot() # - # ### 15.- Precipitation # + ds_prec = xr.open_dataset("http://www.bom.gov.au/jsp/awra/thredds/dodsC/AWRACMS/values/month/rain_day.nc", decode_times=False) base = date(1900, 1, 1) ds_prec['time'] = [np.datetime64(base + timedelta(days=int(x))) for x in ds_prec.time.values] ds_prec.rain_day.isel(time=-1).plot(cmap="Blues") # - prec = ds_prec.rain_day.sel(longitude=coords[0], latitude=coords[1], time=ds.time, method="nearest") prec.plot() # ### 16.- Finally, let's calculate the Actual Evapotranspiration (AET) # # ##### AET = kc * PET (kc(ds)*pet.values[:,None,None]).plot(col='time', col_wrap=4, cmap="Blues") # ### 17.- Plot the temporal evolution over the whole area (kc(ds)*pet.values[:,None,None]).mean(dim=["x","y"]).plot() # ### 18.- Can you repeat the process for Redbank? # + locs = {"Norwood": {"coords": [149.7543738, -29.3741845]}, "Redbank": {"coords": [150.0313090, -29.4199982]}} coords = locs["Redbank"]["coords"] query = {'lat': (coords[1]+0.05, coords[1]-0.05), 'lon': (coords[0]-0.05, coords[0]+0.05), 'output_crs': 'EPSG:3577', 'resolution': (-25, 25), 'measurements': ['nbart_red', 'nbart_green', 'nbart_blue', 'nbart_nir',
'nbart_swir_1', 'nbart_swir_2'], 'time':("2020-09-01", "2021-01-01")} # Scale values between 0-1 ds = dc.load(product='ga_ls8c_ard_3', group_by='solar_day', **query)/10000 ds[['nbart_red','nbart_green','nbart_blue']].clip(0,0.15).to_array().plot.imshow(col='time', col_wrap=4) # -
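The CMRSET crop factor in step 12 is just a pointwise function of EVIr and RMI, so its behaviour can be checked on scalars. The two sample points below are hypothetical index values, not data from the scene:

```python
import numpy as np

def kc_from_indices(evir, rmi):
    # Same CMRSET coefficients as the kc() function in the notebook.
    c1, c2, c3, c4, c5 = 0.680, -14.12, 2.482, -7.991, 0.890
    return c1 * (1 - np.exp(c2 * np.power(evir, c3) + c4 * np.power(rmi, c5)))

kc_bare = kc_from_indices(0.05, 0.02)  # sparse, dry surface
kc_crop = kc_from_indices(0.90, 0.60)  # dense, moist canopy
```

A dense, moist canopy saturates kc near the c1 = 0.68 ceiling while bare ground stays low, which is exactly the contrast that drives the AET maps in step 16.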
dea_materials/day2/WV_evapotranspiration.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + [markdown] id="4jhzqLi1jkAu" # Built-in Functions # # https://docs.python.org/3/library/functions.html # # https://docs.python.org/3/library/functions.html#map # + id="pATj8dLMjg6K" def calSquare(num): result = num * num return result # + id="wQoZoyeWlEXx" numbers = (1, 2) # + id="6TJ_R_zrlGnR" # list dictionary # | | # map(* args, **kwargs) # + colab={"base_uri": "https://localhost:8080/"} id="GJ5U0bWUlqZZ" outputId="ab46858f-01ef-4830-fbab-52ecb9321a73" result = map(calSquare, numbers) print(set(result)) # + colab={"base_uri": "https://localhost:8080/"} id="tEBORgkHmbxR" outputId="2b6d1cc1-85c7-4af9-9071-38b4ad22c45b" calSquare(1), calSquare(2) # + colab={"base_uri": "https://localhost:8080/"} id="mnWL56q5p_lz" outputId="84e5d455-027e-4e83-90f8-4fe9a1b4d032" calplus = (lambda first, second : first + second) type(calplus) # + colab={"base_uri": "https://localhost:8080/"} id="6ARzDlDwqbm8" outputId="c816d257-64aa-4c52-db68-232fb29b7f5a" number01 = [1, 2, 3] number02 = [4, 5, 6] type(number01), type(number02) # + colab={"base_uri": "https://localhost:8080/"} id="JNvIkvsKq1E9" outputId="d08227aa-1e37-4179-d70a-52cdbdf5ec22" calplus(number01[0], number02[0]) # + colab={"base_uri": "https://localhost:8080/"} id="CTu6uCmOrF56" outputId="b0439f64-0039-4c96-e4a7-fe2d6e0e5292" calplus(number01[1], number02[1]) # + colab={"base_uri": "https://localhost:8080/"} id="ZpHSD3MhrKnq" outputId="6f7b96bc-6b81-4285-dc2a-03375807d7bc" result = map(calplus, number01, number02) # map iterates internally, so we don't have to call the function once per pair of parameters list(result) # + id="xlxJq4nOr8NT"
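The last `map` cell has an equivalent comprehension spelling; a short sketch of both, using the same lists as above:

```python
number01 = [1, 2, 3]
number02 = [4, 5, 6]

# map applies the lambda element-wise across both lists...
mapped = list(map(lambda first, second: first + second, number01, number02))

# ...which is the same as an explicit comprehension over zip.
comprehended = [first + second for first, second in zip(number01, number02)]
```

Note that a `map` object is a one-shot iterator: once consumed by `list()` or `set()`, iterating it again yields nothing.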
python_map.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Shallow regression for vector data # # This script reads zip code data produced by **vectorDataPreparations** and creates different machine learning models for # predicting the average zip code income from population and spatial variables. # # It assesses the model accuracy with a test dataset but also predicts the number to all zip codes and writes it to a geopackage # for closer inspection # # 1. Read the data # + import time import geopandas as gpd import pandas as pd from math import sqrt import os import matplotlib.pyplot as plt from sklearn.ensemble import GradientBoostingRegressor, RandomForestRegressor, BaggingRegressor,ExtraTreesRegressor, AdaBoostRegressor from sklearn.model_selection import train_test_split from sklearn.metrics import mean_squared_error, mean_absolute_error,r2_score # - # ### 1.1 Input and output file paths # + paavo_data = "../data/paavo" ### Relative path to the zip code geopackage file that was prepared by vectorDataPreparations.py input_geopackage_path = os.path.join(paavo_data,"zip_code_data_after_preparation.gpkg") ### Output file. You can change the name to identify different regression models output_geopackage_path = os.path.join(paavo_data,"median_income_per_zipcode_shallow_model.gpkg") # - # ### 1.2 Read the input data to a Geopandas dataframe original_gdf = gpd.read_file(input_geopackage_path) original_gdf.head() # # 2. Train the model # # Here we try training different models. We encourage you to dive into the documentation of different models a bit and try different parameters. # # Which one is the best model? Can you figure out how to improve it even more? 
# ### 2.1 Split the dataset into train and test datasets # + ### Split the gdf to x (the predictor attributes) and y (the attribute to be predicted) y = original_gdf['hr_mtu'] # Average income ### Remove geometry and textual fields x = original_gdf.drop(['geometry','postinumer','nimi','hr_mtu'],axis=1) ### Split both datasets into train (80%) and test (20%) datasets x_train, x_test, y_train, y_test = train_test_split(x, y, test_size=.2, random_state=42) # - # ### 2.2 These are the functions used for training, estimating and predicting. # + def trainModel(x_train, y_train, model): start_time = time.time() print(model) model.fit(x_train,y_train) print('Model training took: ', round((time.time() - start_time), 2), ' seconds') return model def estimateModel(x_test,y_test, model): ### Predict the average income for the test dataset prediction = model.predict(x_test) ### Assess the accuracy of the model with root mean squared error, mean absolute error and coefficient of determination r2 rmse = sqrt(mean_squared_error(y_test, prediction)) mae = mean_absolute_error(y_test, prediction) r2 = r2_score(y_test, prediction) print(f"\nMODEL ACCURACY METRICS WITH TEST DATASET: \n" + f"\t Root mean squared error: {round(rmse)} \n" + f"\t Mean absolute error: {round(mae)} \n" + f"\t Coefficient of determination: {round(r2,4)} \n") # - # ### 2.3 Run different models # ### Gradient Boosting Regressor # * https://scikit-learn.org/stable/modules/generated/sklearn.ensemble.GradientBoostingRegressor.html # * https://scikit-learn.org/stable/modules/ensemble.html#regression model = GradientBoostingRegressor(n_estimators=30, learning_rate=0.1,verbose=1) model_name = "Gradient Boosting Regressor" trainModel(x_train, y_train,model) estimateModel(x_test,y_test, model) # ### Random Forest Regressor # # * https://scikit-learn.org/stable/modules/generated/sklearn.ensemble.RandomForestRegressor.html # * https://scikit-learn.org/stable/modules/ensemble.html#forest model =
RandomForestRegressor(n_estimators=30,verbose=1) model_name = "Random Forest Regressor" trainModel(x_train, y_train,model) estimateModel(x_test,y_test, model) # ### Extra Trees Regressor # # * https://scikit-learn.org/stable/modules/generated/sklearn.ensemble.ExtraTreesRegressor.html model = ExtraTreesRegressor(n_estimators=30,verbose=1) model_name = "Extra Trees Regressor" trainModel(x_train, y_train,model) estimateModel(x_test,y_test, model) # ### Bagging Regressor # # * https://scikit-learn.org/stable/modules/generated/sklearn.ensemble.BaggingRegressor.html # * https://scikit-learn.org/stable/modules/ensemble.html#bagging model = BaggingRegressor(n_estimators=30,verbose=1) model_name = "Bagging Regressor" trainModel(x_train, y_train,model) estimateModel(x_test,y_test, model) # ### AdaBoost Regressor # # * https://scikit-learn.org/stable/modules/generated/sklearn.ensemble.AdaBoostRegressor.html # * https://scikit-learn.org/stable/modules/ensemble.html#adaboost model = AdaBoostRegressor(n_estimators=30) model_name = "AdaBoost Regressor" trainModel(x_train, y_train,model) estimateModel(x_test,y_test, model) # # 3. Predict average income to all zip codes # Here we predict the average income to the whole dataset. Prediction is done with the model you have stored in the model variable - the one you ran last # + ### Print chosen model (the one you ran last) print(model) ### Drop the not-used columns from original_gdf as done before model training. 
x = original_gdf.drop(['geometry','postinumer','nimi','hr_mtu'],axis=1) ### Predict the median income with already trained model prediction = model.predict(x) ### Join the predictions to the original geodataframe and pick only interesting columns for results original_gdf['predicted_hr_mtu'] = prediction.round(0) original_gdf['difference'] = original_gdf['predicted_hr_mtu'] - original_gdf['hr_mtu'] resulting_gdf = original_gdf[['postinumer','nimi','hr_mtu','predicted_hr_mtu','difference','geometry']] # - fig, ax = plt.subplots(figsize=(20, 10)) ax.set_title("Predicted average income by zip code " + model_name, fontsize=25) ax.set_axis_off() resulting_gdf.plot(column='predicted_hr_mtu', ax=ax, legend=True, cmap="magma") # # 4. EXERCISE: Calculate the difference between real and predicted incomes # # Calculate the difference of real and predicted income amounts by zip code level and plot a map of it # # * **original_gdf** is the original dataframe # * **resulting_gdf** is the predicted one
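For the exercise, the arithmetic is the same `difference` column computed in section 3. A sketch with a tiny hypothetical frame (plain pandas, no geometry; with the real data, `resulting_gdf.plot(column='difference', legend=True)` draws the map):

```python
import pandas as pd

# Hypothetical stand-in for resulting_gdf, minus the geometry column.
resulting = pd.DataFrame({
    "postinumer": ["00100", "00120"],
    "hr_mtu": [30000, 25000],
    "predicted_hr_mtu": [28000.0, 26000.0],
})

# Predicted minus real income per zip code, as in section 3.
resulting["difference"] = resulting["predicted_hr_mtu"] - resulting["hr_mtu"]
```

A diverging colormap (e.g. `cmap="RdBu"`) works well here, since the sign of the difference distinguishes over- from under-prediction.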
machineLearning/02_shallows/03_regression.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#     jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 2
#     language: python
#     name: python2
# ---

# # Transfer Learning
# *by <NAME>*
# <img src="../images/keras-tensorflow-logo.jpg" width="400">

# # Using Transfer Learning to Train an Image Classification Model
#
# Deep learning allows you to learn features automatically from the data. In general this requires a lot of training examples, especially for problems where the input samples are very high-dimensional, like images.
#
# Deep learning models are often trained from scratch, but this can be an expensive and time-consuming process. Fortunately, deep learning models are by nature highly repurposable. Specifically, in the case of computer vision, models can be pre-trained on very large-scale datasets (such as ImageNet) and then reused to solve a different task with high performance. This kind of warm-start training is called Transfer Learning.
#
# There are two main Transfer Learning schemes:
# - Pre-trained Convolutional layers as a fixed feature extractor
# - Fine-tuning on pre-trained Convolutional layers

# # Pre-trained Convolutional layers as fixed feature extractor
#
# <img src="../images/transfer_learning_1.jpg" width="400">
#
# This scheme treats the Convolutional layers as a fixed feature extractor for the new dataset. The Convolutional layers keep their fixed weights and are not trained; they are only used to extract features and construct a rich vector embedding for every image. Once these embeddings have been computed for all images, they become the new inputs and can be used to train a linear classifier or a fully connected network for the new dataset.
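# The fixed-extractor idea above can be illustrated with a toy numpy sketch: a frozen stand-in projection plays the role of the pre-trained conv layers (the weights and shapes here are purely illustrative, not a real CNN):

```python
import numpy as np

rng = np.random.RandomState(0)

def frozen_extractor(x):
    # Stand-in for the pre-trained conv layers: a fixed, never-updated projection
    w = np.linspace(-1.0, 1.0, x.shape[1] * 4).reshape(x.shape[1], 4)
    return np.maximum(x @ w, 0.0)  # ReLU "features"

x = rng.randn(8, 6)          # 8 toy "images", 6 raw dimensions each
feats = frozen_extractor(x)  # embeddings: only these feed the trainable head
print(feats.shape)           # (8, 4)
```

# Only the classifier head trained on `feats` ever receives gradient updates; the extractor stays untouched.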
# +
import tensorflow as tf
import numpy as np
import glob
from scipy import misc
import matplotlib.pyplot as plt
# %matplotlib inline

tf_keras = tf.contrib.keras
# -

# # Feature Extraction
# ## Load a pre-trained VGG Network on ImageNet

# load pre-trained VGG model
model = tf_keras.applications.VGG19(weights='imagenet', input_shape=(224, 224, 3))

# # Make Predictions on Pre-Trained Model
# ImageNet is a famous Computer Vision dataset. It is made up of 1.2 million images in the training set, and is composed of 1000 categories that cover a wide variety of objects, animals and scenes.

def make_prediction(img_path):
    # load and resize image
    img = tf_keras.preprocessing.image.load_img(img_path, target_size=(224, 224))
    # transform image into a 4D tensor
    x = tf_keras.preprocessing.image.img_to_array(img)
    x = np.expand_dims(x, axis=0)
    # normalize/preprocess image
    x = tf_keras.applications.vgg19.preprocess_input(x)
    # make prediction
    preds = model.predict(x)
    # decode the results into a list of tuples
    # (class, description, probability)
    result = tf_keras.applications.vgg19.decode_predictions(preds, top=3)[0]
    print("Predictions:\n")
    for idx, (_, name, prob) in enumerate(result):
        print("{}.".format(idx + 1))
        print("Name: {}".format(name))
        print("Probability: {}\n".format(prob))

def plot_image(img_path):
    # figure size
    fig = plt.figure(figsize=(8, 8))
    # load image
    image = tf_keras.preprocessing.image.load_img(img_path)
    img_array = tf_keras.preprocessing.image.img_to_array(image)
    print("Image size: {}".format(img_array.shape))
    # plot image
    plt.imshow(image)
    plt.xticks(np.array([]))
    plt.yticks(np.array([]))
    plt.show()

# # Out of Sample Image

cat_path = "../examples/cat_example.jpg"
plot_image(cat_path)

# # Make Predictions

make_prediction(cat_path)

dog_path = "../examples/dog_example.jpg"
plot_image(dog_path)

make_prediction(dog_path)

# # Use pre-trained model for Feature Extraction

train_data_dir = "data/training"
test_data_dir = "data/testing"

# 25000 images
train_size = 20000
test_size = 5000

input_shape = (150, 150, 3)
batch_size = 32

# # Load Pre-trained VGG Model (conv layers only)

# load pre-trained VGG model and exclude top dense layers
model = tf_keras.applications.VGG16(include_top=False, weights='imagenet')

# # Load Images to Tensor

def load_data_array(img_files):
    img_size = (150, 150, 3)
    images = []
    for img in img_files:
        try:
            image_ar = misc.imresize(misc.imread(img), img_size)
            if np.asarray(image_ar).shape == img_size:
                images.append(image_ar)
        except Exception:
            print("ERROR: {}".format(img))
            continue
    images = np.asarray(images)
    return images

path_d = glob.glob("data/training/cat/*.jpg")
train_cat = load_data_array(path_d)

path_d = glob.glob("data/training/dog/*.jpg")
train_dog = load_data_array(path_d)

path_d = glob.glob("data/testing/cat/*.jpg")
test_cat = load_data_array(path_d)

path_d = glob.glob("data/testing/dog/*.jpg")
test_dog = load_data_array(path_d)

# # Feature Extracting Function

def extract_vgg_features(model, images, data_name):
    # extract image features
    extracted_features = model.predict(images)
    # save new features (binary mode, since .npy files are binary)
    file_name = "extracted_features_{}.npy".format(data_name)
    np.save(open(file_name, 'wb'), extracted_features)

# # Extract and Save Train Set Features
# If these functions take too long to run, you can instead load the binary files provided.
# train set (this can take a long time, GPU recommended)
extract_vgg_features(model, train_cat, data_name='train_cat')
extract_vgg_features(model, train_dog, data_name='train_dog')

# # Extract and Save Test Set Features

# test set (this can take a long time, GPU recommended)
extract_vgg_features(model, test_cat, data_name='test_cat')
extract_vgg_features(model, test_dog, data_name='test_dog')

# # Load Generated Features And Reconstruct Label Vectors

# +
# load train set
train_data_cat = np.load(open('extracted_features_train_cat.npy', 'rb'))
train_data_dog = np.load(open('extracted_features_train_dog.npy', 'rb'))
train_data = np.vstack((train_data_cat, train_data_dog))

# generate train labels (the image extracted features were saved in order)
train_labels = np.array([0] * train_data_cat.shape[0] +
                        [1] * train_data_dog.shape[0])

print("Train size: {}".format(train_data.shape))

# +
# load test set
test_data_cat = np.load(open('extracted_features_test_cat.npy', 'rb'))
test_data_dog = np.load(open('extracted_features_test_dog.npy', 'rb'))
test_data = np.vstack((test_data_cat, test_data_dog))

# generate test labels (the image extracted features were saved in order)
test_labels = np.array([0] * test_data_cat.shape[0] +
                       [1] * test_data_dog.shape[0])

print("Test size: {}".format(test_data.shape))
# -

# # Define a Simple Fully Connected Model

# +
def DNN_Classifier():
    # input tensor of extracted VGG features
    inputs = tf_keras.layers.Input(shape=(4, 4, 512))
    # flatten/reshape layer
    net = tf_keras.layers.Flatten()(inputs)
    # fully connected layer
    net = tf_keras.layers.Dense(256, activation=tf.nn.relu)(net)
    # dropout layer
    net = tf_keras.layers.Dropout(0.6)(net)
    # final Dense layer with binary classification
    outputs = tf_keras.layers.Dense(1, activation=tf.nn.sigmoid)(net)
    # model
    model = tf_keras.models.Model(inputs=inputs, outputs=outputs)
    return model

def compile_model(model):
    # optimizer
    optimizer = tf_keras.optimizers.RMSprop(lr=0.0001)
    # compile the model with loss, optimizer and evaluation metrics
    model.compile(loss=tf_keras.losses.binary_crossentropy,
                  optimizer=optimizer,
                  metrics=[tf_keras.metrics.binary_accuracy])
    print(model.summary())
    return model
# -

model = DNN_Classifier()
model = compile_model(model)

# # Train Model

history = model.fit(x=train_data, y=train_labels, batch_size=32, verbose=2,
                    epochs=20, validation_data=(test_data, test_labels))

# # Evaluate Model

model.evaluate(test_data, test_labels, batch_size=32, verbose=1)

# # Plot Accuracy and Loss Over Time

def plot_accuracy_and_loss(history):
    plt.figure(1, figsize=(15, 10))
    # plot train and test accuracy
    plt.subplot(221)
    plt.plot(history.history['binary_accuracy'])
    plt.plot(history.history['val_binary_accuracy'])
    plt.title('vgg accuracy')
    plt.ylabel('accuracy')
    plt.xlabel('epoch')
    plt.legend(['train', 'test'], loc='upper left')
    # plot train and test loss
    plt.subplot(222)
    plt.plot(history.history['loss'])
    plt.plot(history.history['val_loss'])
    plt.title('vgg loss')
    plt.ylabel('loss')
    plt.xlabel('epoch')
    plt.legend(['train', 'test'], loc='upper right')
    plt.show()

plot_accuracy_and_loss(history)

# # Save Model

# save model
model_json = model.to_json()
open('cat_and_dog_model.json', 'w').write(model_json)
model.save_weights('image_classifier_cat_and_dog.h5', overwrite=True)

# # Define Fully Connected Network with DNNClassifier Class
# TensorFlow provides pre-built estimators, which make it very easy to construct a multi-layer fully connected network.
# +
feature_columns = tf.contrib.learn.infer_real_valued_columns_from_input(train_data)

clf = tf.contrib.learn.DNNClassifier(hidden_units=[256],
                                     feature_columns=feature_columns,
                                     n_classes=2,
                                     optimizer=tf.train.RMSPropOptimizer(learning_rate=0.0001),
                                     activation_fn=tf.nn.relu,
                                     dropout=0.6)

print("Training the classifier...")
clf.fit(train_data, train_labels, steps=5, batch_size=32)

test_pred = clf.predict(test_data, batch_size=32)

print("Evaluating the classifier...")
# tf.metrics.accuracy returns (metric, update_op) tensors rather than a number,
# so compare the predicted labels directly instead
score = np.mean(np.asarray(list(test_pred)) == test_labels)
print("Accuracy: %f" % score)
# -

# # Fine-tuning on pre-trained Convolutional Layers
#
# To further improve the performance of our image classifier, we can "fine-tune" a pre-trained VGG model alongside the top-level classifier. Fine-tuning consists of starting from a trained network, then re-training it on a new dataset using very small weight updates.
#
# <img src="../images/transfer_learning_2.jpeg" width="900">
#
# This consists of the following steps:
#
# - Load pretrained weights from a model trained on another dataset
# - Re-initialize the top fully-connected layers with fresh weights
# - Train the model on the new dataset (optionally freezing the convolutional layers)
#
# This scheme treats the Convolutional layers as part of the model and applies backpropagation through the whole model. This fine-tunes the weights of the pretrained network to the new task. It is also possible to keep some of the earlier layers fixed (due to overfitting concerns) and only fine-tune some higher-level portion of the network.
# load pre-trained VGG model and exclude top dense layers base_model = tf_keras.applications.VGG16(include_top=False, weights='imagenet', input_shape=(150, 150, 3)) # + def fine_tune_VGG(base_model): # output of convolutional layers net = base_model.output # flatten/reshape layer net = tf_keras.layers.Flatten( input_shape=base_model.output_shape[1:])(net) # fully connected layer net = tf_keras.layers.Dense(256, activation=tf.nn.relu)(net) # dropout layer net = tf_keras.layers.Dropout(0.5)(net) # final Dense layer with binary classification outputs = tf_keras.layers.Dense(1, activation=tf.nn.sigmoid)(net) # define model with base_model's input model = tf_keras.models.Model(inputs=base_model.input, outputs=outputs) # freeze weights of conv blocks 1-4 (15 layers) # fine-tune last conv block and dense layers for layer in model.layers[:15]: layer.trainable = False return model def compile_model(model): # SGD/optimizer (very slow learning rate) optimizer = tf_keras.optimizers.SGD(lr=1e-4, momentum=0.9) # compile the model with loss, optimizer and evaluation metrics model.compile(loss = tf_keras.losses.binary_crossentropy, optimizer = optimizer, metrics = [tf_keras.metrics.binary_accuracy]) print(model.summary()) return model # - model = fine_tune_VGG(base_model) model = compile_model(model) # # Define Train and Test Data Generators # + batch_size = 16 # prepare data augmentation configuration train_datagen = tf_keras.preprocessing.image.ImageDataGenerator( rescale=1./255, shear_range=0.2, zoom_range=0.2, horizontal_flip=True) test_datagen = tf_keras.preprocessing.image.ImageDataGenerator( rescale=1./255) train_generator = train_datagen.flow_from_directory( train_data_dir, target_size=(150,150), batch_size=batch_size, class_mode='binary') validation_generator = test_datagen.flow_from_directory( test_data_dir, target_size=(150,150), batch_size=batch_size, class_mode='binary') # - # # Train and Fine-tune Model # fine-tune the model history = model.fit_generator( 
train_generator, steps_per_epoch= 5, #train_size // batch_size, epochs=5, validation_data=validation_generator, validation_steps= 10) #test_size // batch_size) # # Evaluate Trained Model # evaluate the model on batches with real-time data augmentation loss, acc= model.evaluate_generator(validation_generator, steps = 10) print("loss: {}".format(loss)) print("accuracy: {}".format(acc)) plot_accuracy_and_loss(history) #save model model_json = model.to_json() open('cat_and_dog_fine_tune_model.json', 'w').write(model_json) model.save_weights('image_classifier_cat_and_dog_fine_tune_.h5', overwrite=True) # ## The End # # <img src="../images/divider.png" width="100">
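# The fine-tuning run above uses `steps_per_epoch=5` purely as a quick demo; the commented-out expressions show where the real values come from. A minimal sketch of that arithmetic, using the dataset sizes and batch size defined earlier in the notebook:

```python
# Full-dataset settings corresponding to the commented-out values above
train_size, test_size, batch_size = 20000, 5000, 16

steps_per_epoch = train_size // batch_size   # batches per training epoch
validation_steps = test_size // batch_size   # batches per validation pass

print(steps_per_epoch, validation_steps)  # 1250 312
```

# With these values every image is seen once per epoch, at the cost of a much longer run (a GPU is strongly recommended).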
Learning Computer Vision with TensorFlow/Jupyter Notebook files/Section 3/Transfer Learning (2).ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#     jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

# # Gensim word vector visualization of various word vectors

# +
import numpy as np

# Get the interactive Tools for Matplotlib
# %matplotlib notebook
import matplotlib.pyplot as plt
plt.style.use('ggplot')

from sklearn.decomposition import PCA

from gensim.test.utils import datapath, get_tmpfile
from gensim.models import KeyedVectors
from gensim.scripts.glove2word2vec import glove2word2vec
# -

# For looking at word vectors, I'll use Gensim. We also use it in hw1 for word vectors. Gensim isn't really a deep learning package. It's a package for word and text similarity modeling, which started with (LDA-style) topic models and grew into SVD and neural word representations. But it's efficient and scalable, and quite widely used.

# Our homegrown Stanford offering is GloVe word vectors. Gensim doesn't give them first-class support, but allows you to convert a file of GloVe vectors into word2vec format. You can download the GloVe vectors from [the GloVe page](https://nlp.stanford.edu/projects/glove/). They're inside [this zip file](https://nlp.stanford.edu/data/glove.6B.zip).

# (I use the 100d vectors below as a mix between speed and smallness vs. quality. If you try out the 50d vectors, they basically work for similarity but clearly aren't as good for analogy problems. If you load the 300d vectors, they're even better than the 100d vectors.)
glove_file = datapath('/Users/manning/Corpora/GloVe/glove.6B.100d.txt') word2vec_glove_file = get_tmpfile("glove.6B.100d.word2vec.txt") glove2word2vec(glove_file, word2vec_glove_file) model = KeyedVectors.load_word2vec_format(word2vec_glove_file) model.most_similar('obama') model.most_similar('banana') model.most_similar(negative='banana') result = model.most_similar(positive=['woman', 'king'], negative=['man']) print("{}: {:.4f}".format(*result[0])) def analogy(x1, x2, y1): result = model.most_similar(positive=[y1, x2], negative=[x1]) return result[0][0] # ![Analogy](imgs/word2vec-king-queen-composition.png) analogy('japan', 'japanese', 'australia') analogy('australia', 'beer', 'france') analogy('obama', 'clinton', 'reagan') analogy('tall', 'tallest', 'long') analogy('good', 'fantastic', 'bad') print(model.doesnt_match("breakfast cereal dinner lunch".split())) def display_pca_scatterplot(model, words=None, sample=0): if words == None: if sample > 0: words = np.random.choice(list(model.vocab.keys()), sample) else: words = [ word for word in model.vocab ] word_vectors = np.array([model[w] for w in words]) twodim = PCA().fit_transform(word_vectors)[:,:2] plt.figure(figsize=(6,6)) plt.scatter(twodim[:,0], twodim[:,1], edgecolors='k', c='r') for word, (x,y) in zip(words, twodim): plt.text(x+0.05, y+0.05, word) display_pca_scatterplot(model, ['coffee', 'tea', 'beer', 'wine', 'brandy', 'rum', 'champagne', 'water', 'spaghetti', 'borscht', 'hamburger', 'pizza', 'falafel', 'sushi', 'meatballs', 'dog', 'horse', 'cat', 'monkey', 'parrot', 'koala', 'lizard', 'frog', 'toad', 'monkey', 'ape', 'kangaroo', 'wombat', 'wolf', 'france', 'germany', 'hungary', 'luxembourg', 'australia', 'fiji', 'china', 'homework', 'assignment', 'problem', 'exam', 'test', 'class', 'school', 'college', 'university', 'institute']) display_pca_scatterplot(model, sample=300)
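# Under the hood, `most_similar` ranks the vocabulary by cosine similarity to the combined query vector (gensim also unit-normalizes each vector before combining). A toy sketch of the king - man + woman arithmetic, with made-up 3-d vectors rather than real GloVe embeddings:

```python
import numpy as np

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# Made-up 3-d "embeddings", chosen only to illustrate the arithmetic
vecs = {
    'king':   np.array([0.8, 0.6, 0.1]),
    'man':    np.array([0.9, 0.1, 0.0]),
    'woman':  np.array([0.1, 0.2, 0.9]),
    'queen':  np.array([0.0, 0.7, 1.0]),
    'banana': np.array([1.0, 0.0, 0.0]),
}

# king - man + woman, then rank the remaining vocabulary by cosine similarity
target = vecs['king'] - vecs['man'] + vecs['woman']
candidates = [w for w in vecs if w not in ('king', 'man', 'woman')]
best = max(candidates, key=lambda w: cosine(vecs[w], target))
print(best)  # queen
```

# This is the same computation the `analogy` helper above delegates to `model.most_similar` for.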
Gensim word vector visualization.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#     jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     name: python3
# ---

# + [markdown] id="JsWuTduwaagq"
# # [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/cene555/ru-clip-tiny/blob/main/notebooks/ru_CLIP_tiny_onnx.ipynb)

# + [markdown] id="VCCzmQdKJPkv"
# ## Select a GPU runtime to continue:
#
# Click Runtime -> Change Runtime Type -> switch "Hardware accelerator" to GPU. Save it, and you may be connected to a GPU.

# + cellView="form" id="6xdy_cPJEYXV" colab={"base_uri": "https://localhost:8080/"} outputId="9b5c5751-3377-4623-fd90-f59c21118c80"
#@title Allowed Resources
import multiprocessing
import torch
from psutil import virtual_memory

ram_gb = round(virtual_memory().total / 1024**3, 1)
print('CPU:', multiprocessing.cpu_count())
print('RAM GB:', ram_gb)
print("PyTorch version:", torch.__version__)
print("CUDA version:", torch.version.cuda)
print("cuDNN version:", torch.backends.cudnn.version())

device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
print("device:", device.type)

# !nvidia-smi

# + [markdown] id="hmNP7iJBj6XZ"
# ## Restart the colab session after installation
# Reload the session if something doesn't work (this may be needed multiple times)

# + [markdown] id="WWXCt_2NLhN_"
# ## Install requirements

# + id="FWEEtd7Vryaf"
# %%capture
# !gdown -O ru-clip-tiny.pkl https://drive.google.com/uc?id=1-3g3J90pZmHo9jbBzsEmr7ei5zm3VXOL
# !pip install git+https://github.com/cene555/ru-clip-tiny.git
# !pip install git+https://github.com/Lednik7/CLIP-ONNX.git
# !pip install onnxruntime-gpu
# !wget -c -O CLIP.png https://github.com/openai/CLIP/blob/main/CLIP.png?raw=true

# + id="bUFx02Dhjap4" colab={"base_uri": "https://localhost:8080/"} outputId="f595c387-da47-47e5-f96a-2d84adf3286b"
import onnxruntime

# priority device (if available)
print(onnxruntime.get_device())

# +
[markdown] id="PHb4CAoRL3qC" # ## Import libraries # + id="cznZ7ozDL5-M" import torch from rucliptiny import RuCLIPtiny from rucliptiny.utils import get_transform from rucliptiny.tokenizer import Tokenizer # + id="57COx0BKCmFA" import warnings warnings.filterwarnings("ignore", category=UserWarning) # + [markdown] id="ithu4-z0PIm5" # ## Load model # + id="GqKM04tP4Vv3" cellView="form" #@title speed_test function import time def speed_test(func, data_gen, n=5, empty_cache=True, is_text=False, first_run=True): if empty_cache: torch.cuda.empty_cache() if first_run: if is_text: input_data1, input_data2 = data_gen() func(input_data1, input_data2) else: input_data = data_gen() func(input_data) torch.cuda.empty_cache() values = [] for _ in range(n): if is_text: input_data1, input_data2 = data_gen() else: input_data = data_gen() if is_text: t = time.time() func(input_data1, input_data2) else: t = time.time() func(input_data) values.append(time.time() - t) if empty_cache: torch.cuda.empty_cache() return sum(values) / n # + id="SSOHYDRQGif-" torch.manual_seed(1) device = torch.device('cuda' if torch.cuda.is_available() else 'cpu') # + id="OpFAZfq-_nJe" model = RuCLIPtiny() model.load_state_dict(torch.load('ru-clip-tiny.pkl', map_location=device)) model = model.to(device).eval() for x in model.parameters(): x.requires_grad = False torch.cuda.empty_cache() # + id="KEZj2WrwkzZz" colab={"base_uri": "https://localhost:8080/", "height": 145, "referenced_widgets": ["5319c7971f234d4bb615508f76475f9e", "c43027a0735e459ca1f710e5a9c43177", "c00c959249db4f2a9b97adabd2684c3c", "d9e4edd05c1e40f991eb2c2f1fc9ebc1", "ab4928c0a86449a384e36d8c0bc25717", "3251223dac8f43c081701ff7f663cb35", "d63d5559ce534b86969132d3ff8d875b", "59618f021fc4495e9c401a421d28d4a0", "<KEY>", "<KEY>", "f9466e0349c84633a0fb8ceeffa2a984", "10ee9777b41e42129e2c9cc9327ad88f", "<KEY>", "a64a223312144f2f9736729b63ab1ce5", "7e7bce13eeed41179e4e15fc7afc89d5", "7bacd13c23cf415fa5d58e9243c4a785", 
"9fe6e1167e5d45fbad2adab3d59e017d", "<KEY>", "<KEY>", "64dfa71e3dff4236908e0592e4f90250", "ec8d98c1edb148d3ae1c518b61e8155b", "<KEY>", "9a2d4d7da3024cc0828b1a6dafd0dd16", "fe4daa4d7d024187aa2f622dbf3577a8", "8380bf9a899645e8aef576e640b41ea2", "<KEY>", "<KEY>", "<KEY>", "<KEY>", "8513d262d8764d99aa5d3f2f178b875e", "<KEY>", "<KEY>", "<KEY>", "f8958c6de2394fecab9f95388a365431", "<KEY>", "<KEY>", "<KEY>", "973300b095554b10ac290244772e0a6f", "<KEY>", "fbf430940c8a49949953155b57d07766", "<KEY>", "4f11a71d7df943e48ac9ea3bab5c6771", "b6004e09152045e18503cf75e32d4fa6", "590fb707d26948b5b9c8bb3b896f29e1"]} outputId="466854ba-7fa2-4154-ada2-391626146c95" transforms = get_transform() tokenizer = Tokenizer() # + [markdown] id="BGsIitOkCCLE" # ## [Speed test] Batch 64 # + id="5Ii9OlgUjR9J" colab={"base_uri": "https://localhost:8080/"} outputId="21f1c03c-3e45-4650-d892-be2d83021d21" speed_test(model.encode_image, lambda: torch.randint(1, 255, (64, 3, 224, 224)).to(device)) # + id="3Ho_rGd6j0_8" colab={"base_uri": "https://localhost:8080/"} outputId="1026df31-ea4b-4f50-e76d-f300bee0299a" speed_test(model.encode_text, lambda: (torch.randint(1, 255, (64, 77)).to(device), torch.randint(0, 2, (64, 77)).to(device)), is_text=True) # + [markdown] id="81uWLBrMkl3T" # ## Prepare functions # + id="ry5BqVbzk-gM" from PIL import Image import numpy as np # + id="<KEY>" # batch first image = transforms(Image.open("CLIP.png")).unsqueeze(0).cpu() # [1, 3, 224, 224] # batch first texts = ['диаграмма', 'собака', 'кошка'] text_tokens, attention_mask = tokenizer.tokenize(texts, max_len=77) text_tokens, attention_mask = text_tokens.cpu(), attention_mask.cpu() # [3, 77] # batch second dummy_input_text = torch.stack([text_tokens, attention_mask]).detach().cpu() # + id="9SJJmuuWlSjS" text_tokens_onnx = text_tokens.detach().cpu().numpy().astype(np.int64) attention_mask_onnx = attention_mask.detach().cpu().numpy().astype(np.int64) image_onnx = image.detach().cpu().numpy().astype(np.float32) text_onnx = 
torch.stack([text_tokens, attention_mask]).detach().cpu()\ .numpy().astype(np.int64) # + [markdown] id="Y7V4BjOGkRcu" # ## Convert RuCLIP model to ONNX # + id="HzGiuIo8m341" class Textual(torch.nn.Module): def __init__(self, model): super().__init__() self.model = model def forward(self, input_data): input_ids, attention_mask = input_data x = self.model.transformer(input_ids=input_ids, attention_mask=attention_mask) x = x.last_hidden_state[:, 0, :] x = self.model.final_ln(x) return x # + id="k5eQK8gJla5a" colab={"base_uri": "https://localhost:8080/"} outputId="09ec9b1d-70f0-4d01-87be-5bb622c14e89" from clip_onnx import clip_onnx from clip_onnx.utils import DEFAULT_EXPORT visual_path = "clip_visual.onnx" textual_path = "clip_textual.onnx" textual_export_params = DEFAULT_EXPORT.copy() textual_export_params["dynamic_axes"] = {'input': {1: 'batch_size'}, 'output': {0: 'batch_size'}} onnx_model = clip_onnx(model.cpu(), visual_path=visual_path, textual_path=textual_path) onnx_model.convert2onnx(image, dummy_input_text, verbose=True, textual_wrapper=Textual, textual_export_params=textual_export_params) # + [markdown] id="QQ0A0gUFzQr-" # ## [ONNX] CUDA inference mode # + id="YR-Pv3E8q_mz" # Optional cell, can be skipped visual_path = "clip_visual.onnx" textual_path = "clip_textual.onnx" onnx_model.load_onnx(visual_path, textual_path, 29.9119) # model.logit_scale.exp() # + id="J2qxXvmfo2eu" # ['TensorrtExecutionProvider', 'CUDAExecutionProvider', 'CPUExecutionProvider'] onnx_model.start_sessions(providers=["CUDAExecutionProvider"]) # cuda mode # + colab={"base_uri": "https://localhost:8080/"} id="yq05H9f7vyQy" outputId="2c39c48b-db02-4610-addd-901429497a43" onnx_model.visual_session.get_providers() # + [markdown] id="EieJHr_CA2ui" # ## [Speed test] Batch 64 # + id="kyF8lyTXnwCz" colab={"base_uri": "https://localhost:8080/"} outputId="d675b77b-7979-44e6-f7d9-45013a1b17b8" speed_test(onnx_model.encode_image, lambda: np.random.uniform(1, 255, (64, 3, 224, 224))\ 
.astype(np.float32)) # + id="AmShwsCtoYte" colab={"base_uri": "https://localhost:8080/"} outputId="9cd94020-d813-4ddd-cd74-cb6d7d922930" speed_test(onnx_model.encode_text, lambda: np.stack([np.random.randint(1, 255, (64, 77)), np.random.randint(0, 2, (64, 77))])) # + [markdown] id="zejMPUDCB2Mi" # ## [Speed test] Compare Pytorch and ONNX # + id="HqLSjsiGCJXW" import random import torch import time def set_seed(): torch.manual_seed(12) torch.cuda.manual_seed(12) np.random.seed(12) random.seed(12) torch.backends.cudnn.deterministic=True # + colab={"base_uri": "https://localhost:8080/"} id="YILIR6qMB_eb" outputId="95e2c9a0-26bb-4203-f0e5-50589f44ddaf" n = 20 model = model.to(device) clip_results = {"encode_image": [], "encode_text": []} onnx_results = {"encode_image": [], "encode_text": []} for batch in [2, 8, 16, 32, 64]: set_seed() result = speed_test(onnx_model.encode_image, lambda: np.random.uniform(1, 255, (batch, 3, 224, 224))\ .astype(np.float32), n=n) result = round(result, 3) onnx_results["encode_image"].append([batch, result]) print("onnx", batch, "encode_image", result) set_seed() with torch.inference_mode(): result = speed_test(model.encode_image, lambda: torch.randint(1, 255, (batch, 3, 224, 224))\ .to(device), n=n) result = round(result, 3) print("torch", batch, "encode_image", result) clip_results["encode_image"].append([batch, result]) set_seed() result = speed_test(onnx_model.encode_text, lambda: np.stack([np.random.randint(1, 255, (batch, 77)), np.random.randint(0, 2, (batch, 77))]), n=n) result = round(result, 3) onnx_results["encode_text"].append([batch, result]) print("onnx", batch, "encode_text", result) set_seed() with torch.inference_mode(): result = speed_test(model.encode_text, lambda: (torch.randint(1, 255, (batch, 77)).to(device), torch.randint(0, 2, (batch, 77)).to(device)), is_text=True, n=n) result = round(result, 3) print("torch", batch, "encode_text", result) clip_results["encode_text"].append([batch, result]) print("-" * 78) # + 
colab={"base_uri": "https://localhost:8080/", "height": 362} id="WAWUKqQOGd-2" outputId="725a771f-1b75-4e3a-afa8-7ee7c9caac1f" import pandas as pd pd.DataFrame({"backend": ["onnx", "torch"] * 5, "batch": [2, 2, 8, 8, 16, 16, 32, 32, 64, 64], "encode_image": [j[1] for i in zip(onnx_results["encode_image"], clip_results["encode_image"]) for j in i], "encode_text": [j[1] for i in zip(onnx_results["encode_text"], clip_results["encode_text"]) for j in i]}) # + colab={"base_uri": "https://localhost:8080/"} id="ol9_RiUoG34e" outputId="82be9e0e-b92e-4e3c-8132-9269eb22a41d" onnx_df = pd.DataFrame({"ONNX": ["RuCLIPtiny"] * 5, "batch": [2, 8, 16, 32, 64], "encode_image": [i[1] for i in onnx_results["encode_image"]], "encode_text": [i[1] for i in onnx_results["encode_text"]]}) onnx_df["total"] = onnx_df["encode_image"] + onnx_df["encode_text"] print(onnx_df.to_markdown(index=False)) # + colab={"base_uri": "https://localhost:8080/"} id="qw8ZK9XeG4LY" outputId="326b24f9-9d21-47ed-d62c-d7594e786b96" clip_df = pd.DataFrame({"TORCH": ["RuCLIPtiny"] * 5, "batch": [2, 8, 16, 32, 64], "encode_image": [i[1] for i in clip_results["encode_image"]], "encode_text": [i[1] for i in clip_results["encode_text"]]}) clip_df["total"] = clip_df["encode_image"] + clip_df["encode_text"] print(clip_df.to_markdown(index=False))
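# A quick way to compare the two backends is a per-batch-size speedup ratio. The timings below are placeholders; substitute the measured `total` columns from the tables above:

```python
import pandas as pd

# Hypothetical total times in seconds (replace with the measured values above)
batches = [2, 8, 16, 32, 64]
onnx_total = pd.Series([0.05, 0.18, 0.34, 0.66, 1.30], index=batches)
torch_total = pd.Series([0.09, 0.27, 0.51, 1.05, 2.08], index=batches)

speedup = (torch_total / onnx_total).round(2)
print(speedup.to_dict())
```

# A value above 1.0 means ONNX Runtime is faster at that batch size; plotting `speedup` against batch size shows how the advantage changes as batches grow.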
examples/ru_CLIP_tiny_onnx.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernel_info: # name: python3-azureml # kernelspec: # display_name: Python 3.6 - AzureML # language: python # name: python3-azureml # --- # + jupyter={"source_hidden": false, "outputs_hidden": false} nteract={"transient": {"deleting": false}} gather={"logged": 1598660882373} # !pip install -Uqq fastbook import fastbook fastbook.setup_book() from fastbook import * from fastai.vision.widgets import * # + gather={"logged": 1598660915702} aaf_url = 'https://www.dropbox.com/s/a0lj1ddd54ns8qy/All-Age-Faces%20Dataset.zip' # !wget $aaf_url # + jupyter={"source_hidden": false, "outputs_hidden": false} nteract={"transient": {"deleting": false}} import zipfile with zipfile.ZipFile('All-Age-Faces Dataset.zip', 'r') as zip_ref: zip_ref.extractall('aaf/') # + jupyter={"source_hidden": false, "outputs_hidden": false} nteract={"transient": {"deleting": false}}
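# A self-contained sanity check of the extract step, using a tiny stand-in archive instead of the real AAF zip (the file names and layout here are illustrative only):

```python
import tempfile
import zipfile
from pathlib import Path

# Build a tiny stand-in archive, extract it, and count the extracted images
tmp = Path(tempfile.mkdtemp())
zpath = tmp / 'demo.zip'
with zipfile.ZipFile(zpath, 'w') as zf:
    zf.writestr('faces/00001A02.jpg', b'fake image bytes')
    zf.writestr('faces/00002A21.jpg', b'fake image bytes')

with zipfile.ZipFile(zpath, 'r') as zf:
    zf.extractall(tmp / 'aaf')

n_images = len(list((tmp / 'aaf').rglob('*.jpg')))
print(n_images)  # 2
```

# The same `rglob('*.jpg')` count on the real `aaf/` directory is a quick way to confirm the full dataset extracted before training.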
.ipynb_aml_checkpoints/AAF-v1-checkpoint2020-7-28-17-33-52.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Visualization with Seaborn # ## Seaborn Versus Matplotlib # # Here is an example of a simple random-walk plot in Matplotlib, using its classic plot formatting and colors. # We start with the typical imports: import matplotlib.pyplot as plt plt.style.use('classic') # %matplotlib inline import numpy as np import pandas as pd # Now we create some random walk data: # Create some data rng = np.random.RandomState(0) x = np.linspace(0, 10, 500) y = np.cumsum(rng.randn(500, 6), 0) # And do a simple plot: # Plot the data with Matplotlib defaults plt.plot(x, y) plt.legend('ABCDEF', ncol=2, loc='upper left'); # Although the result contains all the information we'd like it to convey, it does so in a way that is not all that aesthetically pleasing, and even looks a bit old-fashioned in the context of 21st-century data visualization. # # Now let's take a look at how it works with Seaborn. # As we will see, Seaborn has many of its own high-level plotting routines, but it can also overwrite Matplotlib's default parameters and in turn get even simple Matplotlib scripts to produce vastly superior output. # We can set the style by calling Seaborn's ``set()`` method. # By convention, Seaborn is imported as ``sns``: import seaborn as sns sns.set() # Now let's rerun the same two lines as before: # same plotting code as above! plt.plot(x, y) plt.legend('ABCDEF', ncol=2, loc='upper left'); # ## Exploring Seaborn Plots # # The main idea of Seaborn is that it provides high-level commands to create a variety of plot types useful for statistical data exploration, and even some statistical model fitting. # # Let's take a look at a few of the datasets and plot types available in Seaborn. 
Note that all of the following *could* be done using raw Matplotlib commands (this is, in fact, what Seaborn does under the hood) but the Seaborn API is much more convenient. # ### Histograms, KDE, and densities # # Often in statistical data visualization, all you want is to plot histograms and joint distributions of variables. # We have seen that this is relatively straightforward in Matplotlib: # + data = np.random.multivariate_normal([0, 0], [[5, 2], [2, 2]], size=5000) data = pd.DataFrame(data, columns=['x', 'y']) for col in 'xy': plt.hist(data[col], density=True, alpha=0.5) # - # Rather than a histogram, we can get a smooth estimate of the distribution using a kernel density estimation, which Seaborn does with ``sns.kdeplot``: for col in 'xy': sns.kdeplot(data[col], shade=True) # Histograms and KDE can be combined using ``distplot``: sns.distplot(data['x']) sns.distplot(data['y']); with sns.axes_style('white'): sns.jointplot("x", "y", data, kind='kde'); # There are other parameters that can be passed to ``jointplot``—for example, we can use a hexagonally based histogram instead: with sns.axes_style('white'): sns.jointplot("x", "y", data, kind='hex') # ### Pair plots # # When you generalize joint plots to datasets of larger dimensions, you end up with *pair plots*. This is very useful for exploring correlations between multidimensional data, when you'd like to plot all pairs of values against each other. # # We'll demo this with the well-known Iris dataset, which lists measurements of petals and sepals of three iris species: iris = sns.load_dataset("iris") iris.head() # Visualizing the multidimensional relationships among the samples is as easy as calling ``sns.pairplot``: sns.pairplot(iris, hue='species', height=2.5); # ### Faceted histograms # # Sometimes the best way to view data is via histograms of subsets. Seaborn's ``FacetGrid`` makes this extremely simple. 
# We'll take a look at some data that shows the amount that restaurant staff receive in tips based on various indicator data: tips = sns.load_dataset('tips') tips.head() # + tips['tip_pct'] = 100 * tips['tip'] / tips['total_bill'] grid = sns.FacetGrid(tips, row="sex", col="time", margin_titles=True) grid.map(plt.hist, "tip_pct", bins=np.linspace(0, 40, 15)); # - # ### Factor plots # # Factor plots (drawn with ``sns.catplot``, the renamed ``factorplot``) can be useful for this kind of visualization as well. They allow you to view the distribution of a parameter within bins defined by any other parameter: with sns.axes_style(style='ticks'): g = sns.catplot("day", "total_bill", "sex", data=tips, kind="box") g.set_axis_labels("Day", "Total Bill"); # ### Joint distributions # # Similar to the pairplot we saw earlier, we can use ``sns.jointplot`` to show the joint distribution between different datasets, along with the associated marginal distributions: with sns.axes_style('white'): sns.jointplot("total_bill", "tip", data=tips, kind='hex') # The joint plot can even do some automatic kernel density estimation and regression: sns.jointplot("total_bill", "tip", data=tips, kind='reg'); # ### Bar plots # # Time series can be plotted using ``sns.catplot`` (formerly ``sns.factorplot``). In the following example, we'll use Seaborn's Planets dataset. planets = sns.load_dataset('planets') planets.head() with sns.axes_style('white'): g = sns.catplot("year", data=planets, aspect=2, kind="count", color='steelblue') g.set_xticklabels(step=5) # We can learn more by looking at the *method* of discovery of each of these planets: with sns.axes_style('white'): g = sns.catplot("year", data=planets, aspect=4.0, kind='count', hue='method', order=range(2001, 2015)) g.set_ylabels('Number of Planets Discovered') # ## Example: Exploring Marathon Finishing Times # # Please download the file by uncommenting the lines below, then move the file to our previously created data subfolder.
# + # #!curl -O https://raw.githubusercontent.com/jakevdp/marathon-data/master/marathon-data.csv # + # #!move marathon-data.csv data/marathon-data.csv # - # Here we'll look at using Seaborn to help visualize and understand finishing results from a marathon. # We will start by downloading the data from # the Web, and loading it into Pandas. data = pd.read_csv('data/marathon-data.csv') data.head() # By default, Pandas loaded the time columns as Python strings (type ``object``); we can see this by looking at the ``dtypes`` attribute of the DataFrame: data.dtypes # Let's fix this by providing a converter for the times: # + import datetime def convert_time(s): h, m, s = map(int, s.split(':')) return datetime.timedelta(hours=h, minutes=m, seconds=s) data = pd.read_csv('data/marathon-data.csv', converters={'split':convert_time, 'final':convert_time}) data.head() # - data.dtypes # That looks much better. For the purpose of our Seaborn plotting utilities, let's next add columns that give the times in seconds: data['split_sec'] = data['split'].astype('timedelta64[s]') data['final_sec'] = data['final'].astype('timedelta64[s]') data.head() # To get an idea of what the data looks like, we can plot a ``jointplot`` over the data: with sns.axes_style('white'): g = sns.jointplot("split_sec", "final_sec", data, kind='hex') g.ax_joint.plot(np.linspace(4000, 16000)) # The dotted line shows where someone's time would lie if they ran the marathon at a perfectly steady pace. The fact that the distribution lies above this indicates (as you might expect) that most people slow down over the course of the marathon. # If you have run competitively, you'll know that those who do the opposite—run faster during the second half of the race—are said to have "negative-split" the race. # Let's see whether there is any correlation between this split fraction and other variables. 
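The text above refers to "this split fraction", but the cell that computes it is missing from this excerpt. A minimal sketch, assuming the usual definition (the `split_frac` name and the sample times below are illustrative, not from the original data):

```python
# Hypothetical sketch: the split fraction measures how far the second half
# of the race deviates from an even split. Negative => faster second half,
# i.e. a "negative split". The sample times here are made up.
import pandas as pd

df = pd.DataFrame({'split_sec': [7000.0, 8000.0],
                   'final_sec': [15000.0, 15500.0]})
df['split_frac'] = 1 - 2 * df['split_sec'] / df['final_sec']
print(df['split_frac'])
```

A negative `split_frac` marks a negative-split runner; most runners have a positive value, consistent with the slowdown visible in the joint plot above.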
We'll do this using a ``PairGrid``, which draws plots of all these correlations: g = sns.PairGrid(data, vars=['age', 'split_sec', 'final_sec'], hue='gender', palette='RdBu_r') g.map(plt.scatter, alpha=0.8) g.add_legend();
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # + colab={} colab_type="code" id="wvGxjjeV-9Ls" # Project by <NAME> # start here # import libraries import numpy as np import seaborn as sns import matplotlib.pyplot as plt import utils # utility functions, used here to display sample images from every category and check the distribution of images import os # %matplotlib inline from tensorflow.keras.preprocessing.image import ImageDataGenerator from tensorflow.keras.layers import Dense, Input, Dropout,Flatten, Conv2D from tensorflow.keras.layers import BatchNormalization, Activation, MaxPooling2D from tensorflow.keras.models import Model, Sequential from tensorflow.keras.optimizers import Adam from tensorflow.keras.callbacks import ModelCheckpoint, ReduceLROnPlateau from tensorflow.keras.utils import plot_model from IPython.display import SVG, Image from livelossplot import PlotLossesKerasTF import tensorflow as tf print("Tensorflow version:", tf.__version__) # - # Plotting sample images to get an understanding of the data utils.datasets.fer.plot_example_images(plt).show() # shows images from every category.
# + colab={"base_uri": "https://localhost:8080/", "height": 139} colab_type="code" id="TalL_1Qr-9Qz" outputId="f5fb9b05-976a-4979-ea23-33c3d87efb94" # To get the spread of images across classes, checking for a class imbalance problem for expression in os.listdir("train/"): print(str(len(os.listdir("train/" + expression))) + " " + expression + " images") # number of images in each expression folder # + colab={"base_uri": "https://localhost:8080/", "height": 52} colab_type="code" id="iri8ehFw-9Tj" outputId="1cff3826-c0a9-41ff-a61b-5a677de101ae" # Generate training and validation batches # During training, loss is reduced using mini-batch gradient descent # (a mini-batch is a fixed number of training examples used for a single step of gradient descent). # Each epoch is a series of steps: pick a mini-batch, # feed it to the NN to calculate the mean gradient over the mini-batch, then # use the mean gradient to update the weights, and # repeat until the loss converges to a local optimum. img_size = 48 # images are resized to 48*48 batch_size = 64 # batch size of 64 images datagen_train = ImageDataGenerator(horizontal_flip=True) # data generator for training; the augmentation randomly mirrors images horizontally train_generator = datagen_train.flow_from_directory("train/", target_size=(img_size,img_size), color_mode="grayscale", batch_size=batch_size, class_mode='categorical', shuffle=True) # load batches of images from the train directory datagen_validation = ImageDataGenerator(horizontal_flip=True) # data generator for validation, with the same augmentation validation_generator = datagen_validation.flow_from_directory("test/", target_size=(img_size,img_size), color_mode="grayscale", batch_size=batch_size, class_mode='categorical', shuffle=False) # load batches of images from the test directory # + #
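The count loop above is meant to surface class imbalance. One common mitigation, not used in this notebook, is passing class weights to `model.fit(..., class_weight=...)`. A hedged sketch with made-up counts (real counts come from the `train/` sub-folders, and Keras generators assign class indices by the alphabetical order of the sub-folder names, which the `sorted()` below mirrors):

```python
# Hypothetical sketch: turning per-class image counts into class weights.
# The counts below are made up for illustration.
counts = {'angry': 3995, 'happy': 7215, 'sad': 4830, 'neutral': 4965}

total = sum(counts.values())
n_classes = len(counts)
# inverse-frequency weighting: rarer classes get larger weights
class_weight = {i: total / (n_classes * n)
                for i, (label, n) in enumerate(sorted(counts.items()))}
print(class_weight)
```

The resulting dict could then be passed as `class_weight=class_weight` when fitting, so that errors on under-represented expressions cost more.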
https://scholar.google.co.in/scholar?q=doi:10.1016/j.neunet.2014.09.005&hl=en&as_sdt=0&as_vis=1&oi=scholart # + # Initialising the CNN model = Sequential() # 1st convolution layer model.add(Conv2D(64,(3,3), padding='same', input_shape=(48, 48,1))) # 64 filters, each 3x3, with same padding so we don't lose spatial information; input is a 48*48 image with 1 channel (grayscale) model.add(BatchNormalization()) # batch normalization applies a transformation that keeps the mean output close to 0 and the output standard deviation close to 1 model.add(Activation('relu')) # ReLU non-linearity so the network can learn more complex functions model.add(MaxPooling2D(pool_size=(2, 2))) # shrinks height and width by a factor of 2 as the pool size is (2, 2) model.add(Dropout(0.25)) # dropout regularization to prevent overfitting to the training data # 2nd convolution layer model.add(Conv2D(128,(5,5), padding='same')) # 128 filters, each 5x5, with same padding model.add(BatchNormalization()) model.add(Activation('relu')) model.add(MaxPooling2D(pool_size=(2, 2))) model.add(Dropout(0.25)) # 3rd convolution layer model.add(Conv2D(512,(3,3), padding='same')) # 512 filters, each 3x3, with same padding model.add(BatchNormalization()) model.add(Activation('relu')) model.add(MaxPooling2D(pool_size=(2, 2))) model.add(Dropout(0.25)) # 4th convolution layer model.add(Conv2D(512,(3,3), padding='same')) model.add(BatchNormalization()) model.add(Activation('relu')) model.add(MaxPooling2D(pool_size=(2, 2))) model.add(Dropout(0.25)) # Flattening model.add(Flatten()) # flatten so we can pass through fully connected layers # 1st fully connected layer model.add(Dense(256)) model.add(BatchNormalization()) model.add(Activation('relu')) model.add(Dropout(0.25)) # 2nd fully connected layer model.add(Dense(512)) model.add(BatchNormalization()) model.add(Activation('relu')) model.add(Dropout(0.25))
model.add(Dense(7, activation='softmax')) opt = Adam(lr=0.0005) #learning rate=0.0005 model.compile(optimizer=opt, loss='categorical_crossentropy', metrics=['accuracy']) model.summary() # - # # # + #Train and Evaluate Model # %%time epochs = 15 #epochs for training and validation steps_per_epoch = train_generator.n//train_generator.batch_size validation_steps = validation_generator.n//validation_generator.batch_size reduce_lr = ReduceLROnPlateau(monitor='val_loss', factor=0.1, patience=2, min_lr=0.00001, mode='auto') checkpoint = ModelCheckpoint("model_weights.h5", monitor='val_accuracy', save_weights_only=True, mode='max', verbose=1) callbacks = [PlotLossesKerasTF(), checkpoint, reduce_lr] history = model.fit( x=train_generator, steps_per_epoch=steps_per_epoch, epochs=epochs, validation_data = validation_generator, validation_steps = validation_steps, callbacks=callbacks ) # - # # + colab={} colab_type="code" id="cHw8ir7CVAE0" #Representing Model as JSON String model_json = model.to_json() with open("model.json", "w") as json_file: json_file.write(model_json) # -
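Once trained, the model's 7-way softmax output has to be mapped back to an emotion label. A sketch, assuming FER-style class names in the alphabetical order a Keras directory generator would assign (`decode_prediction` is a hypothetical helper, not part of the notebook):

```python
# Hypothetical sketch: decode one softmax output vector into an emotion label.
# Class order is assumed to match the alphabetical order of the training
# sub-folders, which is how Keras generators index classes.
import numpy as np

EMOTIONS = ['angry', 'disgust', 'fear', 'happy', 'neutral', 'sad', 'surprise']

def decode_prediction(probs):
    """Return (label, confidence) for one 7-way softmax output vector."""
    probs = np.asarray(probs)
    idx = int(np.argmax(probs))
    return EMOTIONS[idx], float(probs[idx])

label, conf = decode_prediction([0.05, 0.0, 0.1, 0.6, 0.1, 0.1, 0.05])
print(label, conf)  # -> happy 0.6
```

In practice you would check the generator's `class_indices` attribute rather than hard-coding the order.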
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Music streaming services # # Running a Pareto-II probability model to compare acquisition rates and forecast future growth of premium (paid) subscribers on three prominent music services. Based off of Prof. <NAME>'s STAT 476 course at Wharton. # # #### Dataset sources: # Spotify: [statista-spotify](https://www.statista.com/statistics/244995/number-of-paying-spotify-subscribers/) # Apple: [statista-apple](https://www.statista.com/statistics/604959/number-of-apple-music-subscribers/) # Pandora: [statista-pandora-estimates](https://www.statista.com/statistics/253850/number-of-pandoras-paying-subscribers/) # # #### Methodology sources: # <NAME>. and <NAME>. [Applied Probability Models in Marketing Research](http://www.brucehardie.com/talks/supp_mats02_part1.pdf) # <NAME>. ['STAT/MKTG 476/776 - 2018A, Section 401 Syllabus'](https://apps.wharton.upenn.edu/syllabi/2018A/MKTG476401/) # ### Import data # Manually scraped data for each time period off of those three charts from Statista. Not all charts had values at each date. # + import pandas as pd import numpy as np from datetime import datetime import warnings warnings.filterwarnings("ignore") # - music = pd.read_excel('music-service-data.xlsx', 0) # Normalizing dates to a time variable. 
# + # baseline for reference baseline = np.datetime64('2010-01-01') # month difference since baseline music['t'] = music['Date'].apply(lambda x: (x.year - 2010) * 12 + (x.month - 1)) # - music.head() # ### Separation of analysis # Running the model on Spotify and then generalizing it to the other services # separating the datasets spotify = music[['t', 'Spotify']] apple = music[['t', 'Apple']] pandora = music[['t', 'Pandora']] # + # creating parameters for each model and overall dataframe to compare spotify_params = {'r': .5, 'alpha': .5, 'c': .5} apple_params = spotify_params.copy() pandora_params = spotify_params.copy() def parameter_table(spotify_params, apple_params, pandora_params): # global spotify_params, apple_params, pandora_params return pd.DataFrame({'Spotify': spotify_params, 'Apple': apple_params, 'Pandora': pandora_params}).T[['r', 'alpha', 'c']] # viewing parameter dataframe parameter_table(spotify_params, apple_params, pandora_params) # - # Running the Pareto II on Spotify using the `spotify_params` dictionary, and generalizing functions. # # The Pareto II is a probability mixture model: an exponential-gamma mixture, here extended with duration dependence in the hazard (the rate of the event, here subscribing to a service, changing over time).
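How the closed form used below arises can be sketched as a mixture integral (assuming the Weibull-gamma formulation, where `c` is the duration-dependence parameter; with c = 1 this reduces to the exponential-gamma, i.e. the Pareto II/Lomax distribution):

```latex
% Individual-level timing: T \mid \lambda sim Weibull, S(t \mid \lambda) = e^{-\lambda t^{c}}
% Heterogeneity across individuals: \lambda \sim \mathrm{Gamma}(r, \alpha)
P(T > t) = \int_0^\infty e^{-\lambda t^{c}}\,
  \frac{\alpha^{r}\lambda^{r-1}e^{-\alpha\lambda}}{\Gamma(r)}\,d\lambda
  = \left(\frac{\alpha}{\alpha + t^{c}}\right)^{r},
\qquad
F(t) = 1 - \left(\frac{\alpha}{\alpha + t^{c}}\right)^{r}
```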
# For more reference into the Pareto II, see [Wikipedia: Pareto II/Lomax Distribution](https://en.wikipedia.org/wiki/Pareto_distribution) # + # CDF of the spotify data under random pre-set values def compute_CDF(params, dataset): a = params['alpha'] r = params['r'] c = params['c'] return 1 - (a/(a + dataset['t']**c))**r # Weibull-gamma survival (alpha/(alpha + t^c))^r; note t**c, not (a + t)**c # PDF accordingly - this is not good Python syntax, but is most convenient def compute_PDF(dataset): dataset['PDF'] = dataset['CDF'][0] for i in range(1, len(dataset)): dataset['PDF'][i] = dataset['CDF'][i] - dataset['CDF'][i - 1] return dataset['PDF'] # Incrementals def compute_incremental(dataset, name): dataset['x_t'] = dataset[name][0] for i in range(1, len(dataset)): dataset['x_t'][i] = dataset[name][i] - dataset[name][i - 1] return dataset['x_t'] spotify['x_t'] = compute_incremental(spotify, 'Spotify') spotify['CDF'] = compute_CDF(spotify_params, spotify) spotify['PDF'] = compute_PDF(spotify) # + # computing log-likelihoods def compute_LL(params, dataset): dataset['LL'] = np.log(dataset['PDF']) * dataset['x_t'] params['sum_LL'] = sum(dataset['LL'].dropna()) return params spotify_params = compute_LL(spotify_params, spotify) spotify['sum_E[X]'] = spotify['CDF'] * 100 # + # Apple and Pandora apple['x_t'] = compute_incremental(apple, 'Apple') apple['CDF'] = compute_CDF(apple_params, apple) apple['PDF'] = compute_PDF(apple) apple['sum_E[X]'] = apple['CDF'] * 100 apple_params = compute_LL(apple_params, apple) pandora['x_t'] = compute_incremental(pandora, 'Pandora') pandora['CDF'] = compute_CDF(pandora_params, pandora) pandora['PDF'] = compute_PDF(pandora) pandora['sum_E[X]'] = pandora['CDF'] * 100 pandora_params = compute_LL(pandora_params, pandora) # - # dropping null values apple_clean = apple.dropna() spotify_clean = spotify.dropna() pandora_clean = pandora.dropna() apple_clean # ### Maximum likelihood estimation # Source: [<NAME>, WM.edu](http://rlhick.people.wm.edu/posts/estimating-custom-mle.html) from scipy.stats import norm import statsmodels.api as sm from statsmodels.base.model import GenericLikelihoodModel from scipy.optimize import minimize # + # MLE, assuming 100MM people in population - unfinished
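The cell above is marked unfinished. A hedged sketch of how the MLE could be completed with `scipy.optimize.minimize`, maximising the same incremental log-likelihood that the notebook's `compute_LL` sums (the data points below are made up, and the CDF assumes the Weibull-gamma form \(1 - (\alpha/(\alpha + t^c))^r\)):

```python
# Hypothetical sketch: fit (r, alpha, c) by minimising the negative
# incremental log-likelihood. All data values here are illustrative.
import numpy as np
from scipy.optimize import minimize

def neg_ll(params, t, x_t):
    r, alpha, c = params
    cdf = 1 - (alpha / (alpha + t**c))**r   # Weibull-gamma / Pareto II CDF
    pdf = np.diff(cdf, prepend=0)           # incremental probability per period
    pdf = np.clip(pdf, 1e-12, None)         # guard the log against zeros
    return -np.sum(x_t * np.log(pdf))

t = np.array([6.0, 12.0, 18.0, 24.0, 30.0])    # months since baseline
cum = np.array([5.0, 12.0, 18.0, 22.0, 25.0])  # cumulative subscribers (MM)
x_t = np.diff(cum, prepend=0)                  # incremental subscribers

res = minimize(neg_ll, x0=[0.5, 0.5, 0.5], args=(t, x_t),
               bounds=[(1e-4, None)] * 3, method='L-BFGS-B')
r_hat, alpha_hat, c_hat = res.x
print(res.x)
```

The fitted parameters can then be fed back through `compute_CDF` to produce the `sum_E[X]` forecasts.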
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # ## First day: list comprehensions and generators # > List comprehensions and generators are in my top 5 favorite Python features leading to clean, robust and Pythonic code. # + from collections import Counter import calendar import itertools import random import re import string import requests # - # ### List comprehensions # Let's dive straight into a practical example. We all know how to use the classic for loop in Python. Say I want to loop through a bunch of names, title-casing each one: names = 'pybites mike <NAME> <NAME>'.split() names for name in names: print(name.title()) # Then I want to only keep the names that start with A-M; the `string` module makes it easier (we love Python's standard library!): first_half_alphabet = list(string.ascii_lowercase)[:13] first_half_alphabet new_names = [] for name in names: if name[0] in first_half_alphabet: new_names.append(name.title()) new_names # Feels verbose, no? # # If you don't know about list comprehensions you might start using them everywhere after seeing the next refactoring: new_names2 = [name.title() for name in names if name[0] in first_half_alphabet] new_names2 assert new_names == new_names2 # From 4 to 1 lines of code, and it reads pretty well too. That's why we love and stick with Python! # # Here is another example I used recently to do a most common word count on Harry Potter.
I used some list comprehensions to clean up the words before counting them: resp = requests.get('http://projects.bobbelderbos.com/pcc/harry.txt') words = resp.text.lower().split() words[:5] cnt = Counter(words) cnt.most_common(5) # Hmm, we should not count stopwords; also: '-' in words # Let's first clean up any non-alphabetic characters: words = [re.sub(r'\W+', r'', word) for word in words] '-' in words 'the' in words # Ok, let's filter out those stopwords, plus the empty strings caused by the previous list comprehension: resp = requests.get('http://projects.bobbelderbos.com/pcc/stopwords.txt') stopwords = resp.text.lower().split() stopwords[:5] words = [word for word in words if word.strip() and word not in stopwords] words[:5] 'the' in words # Now it looks way better: cnt = Counter(words) cnt.most_common(5) # What's interesting here is that the first bit of the list comprehension can be an expression like `re.sub`. The final bit can be a compound statement: here we checked for a non-empty word (' ' -> `strip()` -> '' = `False` in Python) `and` we checked `word not in stopwords`. # # Again, a lot is going on in one line of code, but the beauty of it is that it is totally fine, because it reads like plain English :) # ### Generators # A generator is a function that returns an iterator. It generates values using the `yield` keyword when called with next() (a for loop does this implicitly), and it raises a `StopIteration` exception when there are no more values to generate. Let's see what this means with a very simple example:
- [Generators are Awesome, Learning by Example](https://pybit.es/generators.html) # # # # Since learning about generators, a common pattern I use is to build up my sequences: options = 'red yellow blue white black green purple'.split() options # My older code: def create_select_options(options=options): select_list = [] for option in options: select_list.append(f'<option value={option}>{option.title()}</option>') return select_list from pprint import pprint as pp pp(create_select_options()) # Using a generator you can write this in 2 lines of code - my newer code: def create_select_options_gen(options=options): for option in options: yield f'<option value={option}>{option.title()}</option>' print(create_select_options_gen()) # Note that generators are _lazy_, so you need to explicitly consume them by iterating over them, for example by looping over them. Another way is to pass them into the `list()` constructor: list(create_select_options_gen()) # Especially when working with large data sets you definitely want to use generators. Lists can only get as big as available memory allows. Generators are lazily evaluated, meaning that they only hold a certain amount of data in memory at once. Just for the sake of giving Python something to do, let's calculate leap years for a million years, and compare the performance of a list vs a generator: # + # list def leap_years_lst(n=1000000): leap_years = [] for year in range(1, n+1): if calendar.isleap(year): leap_years.append(year) return leap_years # generator def leap_years_gen(n=1000000): for year in range(1, n+1): if calendar.isleap(year): yield year # - # PRO tip: [since Python 3.3](https://docs.python.org/3/whatsnew/3.3.html) you can use the `yield from` syntax. # this had me waiting for a few seconds # %timeit -n1 leap_years_lst() # this was instant # %timeit -n1 leap_years_gen() # That is pretty impressive. This is an important concept to know about because Big Data is here to stay!
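The `yield from` PRO tip above is mentioned without an example; a minimal sketch of the delegation it enables:

```python
# Minimal sketch of `yield from` delegation (Python >= 3.3)
def inner():
    yield from range(3)      # delegate to any iterable
    yield 'done'

def outer():
    yield 'start'
    yield from inner()       # flatten the inner generator into this one

print(list(outer()))  # -> ['start', 0, 1, 2, 'done']
```

The leap-year generator above could likewise be written as a one-liner delegating to a generator expression.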
# ## Second day: practice # Look at your code and see if you can refactor it to use list comprehensions. Same for generators. Are you building up a list somewhere where you could potentially use a generator? # And/or exercise here, take this list of names: NAMES = ['<NAME>', '<NAME>', '<NAME>', '<NAME>', '<NAME>', '<NAME>', '<NAME>', '<NAME>', '<NAME>', '<NAME>', '<NAME>', '<NAME>', '<NAME>'] # Can you write a simple list comprehension to convert these names to title case (brad pitt -> Brad Pitt). Or reverse the first and last name? # Then use this same list and make a little generator, for example to randomly return a pair of names, try to make this work: # # pairs = gen_pairs() # for _ in range(10): # next(pairs) # # Should print (values might change as random): # # Arnold teams up with Brad # Alec teams up with Julian # # Have fun! my_names = [name.title() for name in NAMES] my_names def pairs_gen(names=NAMES): for _ in range(10): yield random.sample(names, 2) my_pairs_gen = pairs_gen(my_names) for pairs in my_pairs_gen: print(f'{pairs[0]} pairs up with {pairs[1]}') # ## Third day: solution / simulate unix pipelines # I hope yesterday's exercise was reasonably doable for you. Here are the answers in case you got stuck: # list comprehension to title case names [name.title() for name in NAMES] # + # list comprehension to reverse first and last names # using a helper here to show you that list comprehensions can be passed in functions! def reverse_first_last_names(name): first, last = name.split() # ' '.join([last, first]) -- wait we have f-strings now (>= 3.6) return f'{last} {first}' [reverse_first_last_names(name) for name in NAMES] # - def gen_pairs(): # again a list comprehension is great here to get the first names # and title case them in just 1 line of code (this comment took 2) first_names = [name.split()[0].title() for name in NAMES] while True: # added this when I saw Julian teaming up with Julian (always test your code!) 
first, second = None, None while first == second: first, second = random.sample(first_names, 2) yield f'{first} teams up with {second}' pairs = gen_pairs() for _ in range(10): print(next(pairs)) # Another way to get a slice of a generator is using `itertools.islice`: first_ten = itertools.islice(pairs, 10) first_ten list(first_ten) # ### Further practice # Read up on set and dict comprehensions, then try these two Bites: # - [Bite 5. Parse a list of names](https://codechalleng.es/bites/5/) (use a set comprehension in first function) # - [Bite 26. Dictionary comprehensions are awesome](https://codechalleng.es/bites/promo/awesome-dict-comprehensions) # Here is a more advanced generators exercise you can try: [Code Challenge 11 - Generators for Fun and Profit](https://codechalleng.es/challenges/11/) # ### Time to share what you've accomplished! # # Be sure to share your last couple of days work on Twitter or Facebook. Use the hashtag **#100DaysOfCode**. # # Here are [some examples](https://twitter.com/search?q=%23100DaysOfCode) to inspire you. Consider including [@talkpython](https://twitter.com/talkpython) and [@pybites](https://twitter.com/pybites) in your tweets. # # *See a mistake in these instructions? Please [submit a new issue](https://github.com/talkpython/100daysofcode-with-python-course/issues) or fix it and [submit a PR](https://github.com/talkpython/100daysofcode-with-python-course/pulls).*
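As a quick taste of the set and dict comprehensions the "Further practice" section points to (the names below are made up):

```python
# Set and dict comprehensions use the same syntax as list comprehensions,
# just with curly braces. Names here are illustrative.
names = ['ada lovelace', 'grace hopper', 'Ada Lovelace']

# set comprehension: unique title-cased names (duplicates collapse)
unique_names = {name.title() for name in names}

# dict comprehension: title-cased name -> length of the first name
first_name_len = {name.title(): len(name.split()[0]) for name in names}

print(unique_names)
print(first_name_len)
```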
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + [markdown] toc=true # <h1>Table of Contents<span class="tocSkip"></span></h1> # <div class="toc"><ul class="toc-item"><li><span><a href="#Đừng-Bỏ-hết-Trứng-vào-Một-Giỏ" data-toc-modified-id="Đừng-Bỏ-hết-Trứng-vào-Một-Giỏ-1"><span class="toc-item-num">1&nbsp;&nbsp;</span>Don't Put All Your Eggs in One Basket</a></span></li><li><span><a href="#Ước-Lượng-Mạnh-Hai-Lần" data-toc-modified-id="Ước-Lượng-Mạnh-Hai-Lần-2"><span class="toc-item-num">2&nbsp;&nbsp;</span>Doubly Robust Estimation</a></span></li><li><span><a href="#Ý-tưởng-chủ-đạo" data-toc-modified-id="Ý-tưởng-chủ-đạo-3"><span class="toc-item-num">3&nbsp;&nbsp;</span>Key Ideas</a></span></li><li><span><a href="#Tài-liệu-tham-khảo" data-toc-modified-id="Tài-liệu-tham-khảo-4"><span class="toc-item-num">4&nbsp;&nbsp;</span>References</a></span></li><li><span><a href="#Bảng-Thuật-ngữ" data-toc-modified-id="Bảng-Thuật-ngữ-5"><span class="toc-item-num">5&nbsp;&nbsp;</span>Glossary</a></span></li></ul></div> # - # # Don't Put All Your Eggs in One Basket # We have learned how to use linear regression and propensity score weighting to estimate \\(E[Y|T=1] - E[Y|T=0] \ | \ X\\). But which of the two should we use, and when? When in doubt, use both! Doubly robust estimation is a way of combining the propensity score and linear regression so that you don't have to rely on either single method alone. # # To see how it works, let's consider the mindset experiment: a randomised controlled study in U.S. public schools investigating the effect of a growth mindset. (A growth mindset is the belief that intelligence can be developed over time through hard work, smart strategies, and asking for help from others when needed.
It stands in opposition to the belief that intelligence is a fixed quality one is born with.) Students were given a seminar to instil a growth mindset. They were then followed during their college years to see how well they performed academically. The outcome is measured as a standardized score. The real data from this study is not publicly available, in order to protect the students' privacy. However, we have simulated data with the same statistical properties, provided by [<NAME> Wager](https://arxiv.org/pdf/1902.07409.pdf), so we will use that instead. # + import warnings warnings.filterwarnings('ignore') import pandas as pd import numpy as np from matplotlib import style from matplotlib import pyplot as plt import seaborn as sns # %matplotlib inline style.use("fivethirtyeight") pd.set_option("display.max_columns", 6) # - data = pd.read_csv("./data/learning_mindset.csv") data.sample(5, random_state=5) # Although the study was randomised, the data does not appear to be entirely free of confounding. One possible reason is that the treatment variable is measured by the student's attendance at the seminar. Although the opportunity to attend was randomised, attendance itself was not. We are dealing here with a case of non-compliance. One piece of evidence for this is that a student's expectation of success is correlated with seminar attendance: students who rate their own expectation of success highly are more likely to have attended the growth mindset seminar. data.groupby("success_expect")["intervention"].mean() # Since we will be using regression methods (linear and logistic regression), we need to convert the categorical variables into dummies.
# + categ = ["ethnicity", "gender", "school_urbanicity"] cont = ["school_mindset", "school_achievement", "school_ethnic_minority", "school_poverty", "school_size"] data_with_categ = pd.concat([ data.drop(columns=categ), # dataset without the categorical features pd.get_dummies(data[categ], columns=categ, drop_first=False)# dataset without categorical converted to dummies ], axis=1) print(data_with_categ.shape) # - # We are now ready to see how doubly robust estimation works. # # # Doubly Robust Estimation # # ![img](./data/img/doubly-robust/double.png) # # Instead of showing how the estimator is derived, I'll first present the formula and only then explain why it is so cool. # # $ # \hat{ATE} = \frac{1}{N}\sum \bigg( \dfrac{T_i(Y_i - \hat{\mu_1}(X_i))}{\hat{P}(X_i)} + \hat{\mu_1}(X_i) \bigg) - \frac{1}{N}\sum \bigg( \dfrac{(1-T_i)(Y_i - \hat{\mu_0}(X_i))}{1-\hat{P}(X_i)} + \hat{\mu_0}(X_i) \bigg) # $ # # where \\(\hat{P}(x)\\) is an estimate of the propensity score (using logistic regression, for example), \\(\hat{\mu_1}(x)\\) is an estimate of \\(E[Y|X, T=1]\\) (using linear regression, for example), and \\(\hat{\mu_0}(x)\\) is an estimate of \\(E[Y|X, T=0]\\). As you might guess, the first term of the doubly robust estimator estimates \\(E[Y_1]\\) and the second term estimates \\(E[Y_0]\\). Let's examine the first term; the same principle applies to the second. # # But first, let's see it in action on the actual data.
# + from sklearn.linear_model import LogisticRegression, LinearRegression def doubly_robust(df, X, T, Y): ps = LogisticRegression(C=1e6).fit(df[X], df[T]).predict_proba(df[X])[:, 1] mu0 = LinearRegression().fit(df.query(f"{T}==0")[X], df.query(f"{T}==0")[Y]).predict(df[X]) mu1 = LinearRegression().fit(df.query(f"{T}==1")[X], df.query(f"{T}==1")[Y]).predict(df[X]) return ( np.mean(df[T]*(df[Y] - mu1)/ps + mu1) - np.mean((1-df[T])*(df[Y] - mu0)/(1-ps) + mu0) ) # + T = 'intervention' Y = 'achievement_score' X = data_with_categ.columns.drop(['schoolid', T, Y]) doubly_robust(data_with_categ, X, T, Y) # - # The doubly robust estimator says that we should expect individuals who attended the growth mindset seminar to score about 0.388 standard deviations higher than the control group. Once again, we can use the bootstrap to construct confidence intervals. # + from joblib import Parallel, delayed # for parallel processing np.random.seed(88) # run 1000 bootstrap samples bootstrap_sample = 1000 ates = Parallel(n_jobs=4)(delayed(doubly_robust)(data_with_categ.sample(frac=1, replace=True), X, T, Y) for _ in range(bootstrap_sample)) ates = np.array(ates) # - print(f"ATE 95% CI:", (ates.mean() - 1.96*ates.std(), ates.mean() + 1.96*ates.std())) sns.distplot(ates, kde=False) plt.vlines(ates.mean()-1.96*ates.std(), 0, 20, linestyles="dotted") plt.vlines(ates.mean()+1.96*ates.std(), 0, 20, linestyles="dotted", label="95% CI") plt.title("ATE Bootstrap Distribution") plt.legend(); # Now, let's see why the doubly robust estimator is so great. It is called doubly robust because it only requires one of the two models, \\(\hat{P}(x)\\) or \\(\hat{\mu}(x)\\), to be correctly specified. To see this, take a close look at the first term, which estimates \\(E[Y_1]\\). # # $ # \hat{E}[Y_1] = \frac{1}{N}\sum \bigg( \dfrac{T_i(Y_i - \hat{\mu_1}(X_i))}{\hat{P}(X_i)} + \hat{\mu_1}(X_i) \bigg) # $ # # Assume, first, that \\(\hat{\mu_1}(x)\\) is correct.
If the propensity score model is wrong, we don't need to worry. Because if \\(\hat{\mu_1}(x)\\) is correct, then \\(E[T_i(Y_i - \hat{\mu_1}(X_i))]=0\\). That is because the multiplication by \\(T_i\\) selects only the treated, and the residuals of \\(\hat{\mu_1}\\) on the treated have, by definition, mean zero. This causes the whole expression to reduce to \\(\hat{\mu_1}(X_i)\\), which is a correct estimate of \\(E[Y_1]\\) by assumption. So, when the model \\(\hat{\mu_1}(X_i)\\) is correctly specified, it removes the requirement that the propensity score model be correctly specified. The same reasoning applies to the estimator of \\(E[Y_0]\\). # # But don't just take my word for it. Let the code prove it! In the following estimator, I've replaced the logistic regression used to estimate the propensity score with a uniform random variable ranging from 0.1 to 0.9 (I avoid very small weights, which would blow up the propensity score variance). Since it is randomly generated, there is no way this is a good propensity score model, yet we will see that the doubly robust estimator still produces an estimate very close to the one obtained when the propensity score was estimated with logistic regression. # + from sklearn.linear_model import LogisticRegression, LinearRegression def doubly_robust_wrong_ps(df, X, T, Y): # wrong PS model np.random.seed(654) ps = np.random.uniform(0.1, 0.9, df.shape[0]) mu0 = LinearRegression().fit(df.query(f"{T}==0")[X], df.query(f"{T}==0")[Y]).predict(df[X]) mu1 = LinearRegression().fit(df.query(f"{T}==1")[X], df.query(f"{T}==1")[Y]).predict(df[X]) return ( np.mean(df[T]*(df[Y] - mu1)/ps + mu1) - np.mean((1-df[T])*(df[Y] - mu0)/(1-ps) + mu0) ) # - doubly_robust_wrong_ps(data_with_categ, X, T, Y) # If we use the bootstrap, we can see that the variance is slightly higher than when the propensity score was estimated with logistic regression.
# +
from joblib import Parallel, delayed # for parallel processing

np.random.seed(88)
parallel_fn = delayed(doubly_robust_wrong_ps)
wrong_ps = Parallel(n_jobs=4)(parallel_fn(data_with_categ.sample(frac=1, replace=True), X, T, Y)
                              for _ in range(bootstrap_sample))
wrong_ps = np.array(wrong_ps)
# -

print(f"ATE 95% CI:", (wrong_ps.mean() - 1.96*wrong_ps.std(), wrong_ps.mean() + 1.96*wrong_ps.std()))

# Even though the propensity model is wrong, this confidence interval still covers the correct result. What about the other case? Let's look again at the first term of the estimator, but first let's rearrange its components:
#
# $
# \hat{E}[Y_1] = \frac{1}{N}\sum \bigg( \dfrac{T_i(Y_i - \hat{\mu_1}(X_i))}{\hat{P}(X_i)} + \hat{\mu_1}(X_i) \bigg)
# $
#
# $
# \hat{E}[Y_1] = \frac{1}{N}\sum \bigg( \dfrac{T_iY_i}{\hat{P}(X_i)} - \dfrac{T_i\hat{\mu_1}(X_i)}{\hat{P}(X_i)} + \hat{\mu_1}(X_i) \bigg)
# $
#
# $
# \hat{E}[Y_1] = \frac{1}{N}\sum \bigg( \dfrac{T_iY_i}{\hat{P}(X_i)} - \bigg(\dfrac{T_i}{\hat{P}(X_i)} - 1\bigg) \hat{\mu_1}(X_i) \bigg)
# $
#
# $
# \hat{E}[Y_1] = \frac{1}{N}\sum \bigg( \dfrac{T_iY_i}{\hat{P}(X_i)} - \bigg(\dfrac{T_i - \hat{P}(X_i)}{\hat{P}(X_i)}\bigg) \hat{\mu_1}(X_i) \bigg)
# $
#
# Now, assume that the propensity score \\(\hat{P}(X_i)\\) is correctly specified. In this case, \\(E[T_i - \hat{P}(X_i)]=0\\), which wipes out the term that depends on \\(\hat{\mu_1}(X_i)\\). This reduces the doubly robust estimator to the propensity score weighting estimator \\(\frac{T_iY_i}{\hat{P}(X_i)}\\), which is correct by assumption. So even if \\(\hat{\mu_1}(X_i)\\) is wrong, the estimator will still be correct, provided that the propensity score is correctly specified.
#
# Once again, if you trust code more than formulas, here is the empirical verification. In the code below, both regression models have been replaced with a normal random variable.
# There is no doubt here: \\(\hat{\mu}(X_i)\\) cannot possibly be correctly specified. Still, we will see that the doubly robust estimator can recover the same \\(\hat{ATE}\\) of about 0.38 that we saw above.

# +
from sklearn.linear_model import LogisticRegression, LinearRegression

def doubly_robust_wrong_model(df, X, T, Y):
    np.random.seed(654)
    ps = LogisticRegression(C=1e6).fit(df[X], df[T]).predict_proba(df[X])[:, 1]
    # wrong mu(x) model
    mu0 = np.random.normal(0, 1, df.shape[0])
    mu1 = np.random.normal(0, 1, df.shape[0])
    return (
        np.mean(df[T]*(df[Y] - mu1)/ps + mu1) -
        np.mean((1-df[T])*(df[Y] - mu0)/(1-ps) + mu0)
    )
# -

doubly_robust_wrong_model(data_with_categ, X, T, Y)

# Once again, we can use the bootstrap and see that the variance is just slightly higher.

# +
from joblib import Parallel, delayed # for parallel processing

np.random.seed(88)
parallel_fn = delayed(doubly_robust_wrong_model)
wrong_mux = Parallel(n_jobs=4)(parallel_fn(data_with_categ.sample(frac=1, replace=True), X, T, Y)
                               for _ in range(bootstrap_sample))
wrong_mux = np.array(wrong_mux)
# -

print(f"ATE 95% CI:", (wrong_mux.mean() - 1.96*wrong_mux.std(), wrong_mux.mean() + 1.96*wrong_mux.std()))

# In practice, we never have a propensity score model or an outcome model that is 100% correct. They are both wrong, just in different ways. Doubly robust estimation combines the two wrong models to make them less wrong.
#
#
# # Key Ideas
#
# Here we looked at a simple way of combining linear regression with the propensity score to produce a doubly robust estimator. The name comes from the fact that it only requires one of the two models to be correct. If the propensity score model is correct, we can identify the causal effect even when the outcome model is wrong. On the flip side, if the outcome model is correct, we can identify the causal effect even when the propensity score model is wrong.
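# The double-robustness property can also be checked on synthetic data where the true effect is known. The sketch below is my addition, not part of the original notebook; it assumes only numpy, pandas and scikit-learn, simulates a confounded dataset with a true ATE of 2, and shows that the doubly robust estimate lands close to it:

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression, LinearRegression

np.random.seed(0)
n = 10_000
x = np.random.normal(size=n)                                      # confounder
t = (np.random.uniform(size=n) < 1/(1 + np.exp(-x))).astype(int)  # treatment depends on x
y = 2*t + x + np.random.normal(size=n)                            # true ATE = 2
df = pd.DataFrame(dict(x=x, t=t, y=y))

# same recipe as doubly_robust above: PS model + two outcome models
ps = LogisticRegression(C=1e6).fit(df[["x"]], df["t"]).predict_proba(df[["x"]])[:, 1]
mu0 = LinearRegression().fit(df.query("t==0")[["x"]], df.query("t==0")["y"]).predict(df[["x"]])
mu1 = LinearRegression().fit(df.query("t==1")[["x"]], df.query("t==1")["y"]).predict(df[["x"]])

ate = (np.mean(df["t"]*(df["y"] - mu1)/ps + mu1)
       - np.mean((1 - df["t"])*(df["y"] - mu0)/(1 - ps) + mu0))
print(round(ate, 2))  # close to the true ATE of 2
```

# A naive comparison of group means on this data would be biased upward by the confounder x, so recovering 2 is not automatic.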
#
# # References
#
# I would like to dedicate this series to honor <NAME>, <NAME> and <NAME> for their fantastic Econometrics classes. Most of the ideas in this series are taken from their classes held by the American Economic Association. Watching them is what got me through the rough year of 2020.
# * [Cross-Section Econometrics](https://www.aeaweb.org/conference/cont-ed/2017-webcasts)
# * [Mastering Mostly Harmless Econometrics](https://www.aeaweb.org/conference/cont-ed/2020-webcasts)
#
# I would also like to recommend Angrist's delightful books. They showed me that Econometrics, or 'Metrics as they call it, is not only extremely useful but also profoundly fun.
#
# * [Mostly Harmless Econometrics](https://www.mostlyharmlesseconometrics.com/)
# * [Mastering 'Metrics](https://www.masteringmetrics.com/)
#
# My final reference is the book by <NAME> and <NAME>. It has been my trusted companion for the thorniest causal questions.
#
# * [Causal Inference Book](https://www.hsph.harvard.edu/miguel-hernan/causal-inference-book/)
#
# The data used here is from the paper [Estimating Treatment Effects with Causal Forests: An Application](https://arxiv.org/pdf/1902.07409.pdf), by <NAME> and <NAME>.
# # Glossary
# | Vietnamese term | English |
# | --- | --- |
# |biến can thiệp|treatment variable|
# |biến giả|dummy, dummy variable|
# |biến ngẫu nhiên phân phối thường|random normal variable|
# |biến ngẫu nhiên phân phối đều|random uniform variable|
# |biến phân loại|categorical variable|
# |bootstrap|bootstrap|
# |code|code|
# |hồi quy lô-gít|logistic regression|
# |hồi quy tuyến tính|linear regression|
# |khoảng tin cậy|confidence interval|
# |kinh tế lượng|econometrics|
# |mô hình|model|
# |mô hình hồi quy|regression model|
# |mô hình kết quả|outcome model|
# |mô hình xu hướng|propensity model|
# |mô hình điểm xu hướng|propensity score model|
# |mô hình ước lượng|estimator|
# |mô hình ước lượng mạnh hai lần|doubly robust estimator|
# |mô hình ước lượng điểm xu hướng theo trọng số|propensity score weighting estimator|
# |mạnh hai lần|doubly robust|
# |phương pháp hồi quy|regression method|
# |phương sai|variance|
# |suy luận nhân quả|causal inference, causal reasoning|
# |điểm xu hướng|propensity score|
# |điểm xu hướng theo trọng số|propensity score weighting|
# |đặc tả|specified, specify|
# |độ lệch chuẩn|standard deviation|
# |ước lượng mạnh hai lần|doubly robust estimation|
#
ipynb/12-Ước-lượng-mạnh-hai-lần-VN.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + #Imports # #%%capture import sys # If you're on Colab: if 'google.colab' in sys.modules: #DATA_PATH = 'https://raw.githubusercontent.com/LambdaSchool/DS-Unit-2-Applied-Modeling/master/data/' # !pip install category_encoders==2.* # !pip install eli5 # !pip install pdpbox # !pip install shap # If you're working locally: else: DATA_PATH = '../data/' import datetime import pandas as pd import numpy as np from category_encoders import OrdinalEncoder, OneHotEncoder from sklearn.preprocessing import StandardScaler from sklearn.metrics import accuracy_score, mean_absolute_error from sklearn.impute import SimpleImputer from sklearn.model_selection import train_test_split from sklearn.pipeline import make_pipeline #Yes today (Regression) from sklearn.linear_model import Ridge, LinearRegression from sklearn.ensemble import AdaBoostRegressor, RandomForestRegressor, GradientBoostingRegressor, BaggingRegressor #Not today (Classification) #Tomorrow, both from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier, AdaBoostClassifier from sklearn.inspection import permutation_importance #import xgboost as xgb from xgboost import XGBClassifier, XGBRegressor import matplotlib.pyplot as plt # - # + #Raw meteorological data met_readings = pd.read_csv('BW_MET46251.txt', sep=' ', skipinitialspace=True) #Basically government-made features made out of spectral data directions = pd.read_csv('BW_SPEC46251.spec.txt', sep=' ', skipinitialspace=True) def timeFixer(df): #first row is weird row1 = df.index[0] #drop weird first row df.drop(df.index[0],inplace=True) #add 00 for seconds df['ss'] = '00' #create date column df['Date'] = df['#YY']+'/'+df['MM']+'/'+df['DD']+' '+df['hh']+':'+df['mm']+':'+df['ss'] #Convert df['Date'] to DateTime object
#df['Date'] = datetime.datetime.strptime(df['Date'], '%Y/%m/%d %H:%M:%s.%f') df['Date'] = pd.to_datetime(df['Date'], format='%Y/%m/%d %H:%M:%S') #Index on date df.index = df['Date'] #Drop now unneeded date & time columns df.drop(columns=['#YY','MM','DD','hh','mm','ss','Date'],inplace=True) #If more than 11 columns, drop extras extras = ['WDIR','WSPD','GST','PRES','ATMP','DEWP','VIS','PTDY','TIDE','MWD','WVHT','APD'] if len(df.columns) > 11: df.drop(columns=extras, inplace=True) return df met = timeFixer(met_readings) directions = timeFixer(directions) new_test1 = pd.read_csv('Test46251.txt', sep=' ', skipinitialspace=True) new_test2 = pd.read_csv('test2.spec.txt', sep=' ', skipinitialspace=True) nt1 = timeFixer(new_test1) nt2 = timeFixer(new_test2) display(met.head(1)) display(directions.head(1)) combined = pd.concat([met,directions],axis=1) new_test = pd.concat([nt1,nt2],axis=1) display(combined.head(2)) print('Combined shape:',combined.shape) combined = combined.dropna() new_test = new_test.dropna() print('Combined shape minus rows with NaNs',combined.shape) #Convert numericals to floats numerical_cols = ['WVHT','DPD','WTMP','SwH','SwP','WWH','WWP','APD','MWD'] categorical_cols = ['SwD','WWD','STEEPNESS'] combined[numerical_cols] = combined[numerical_cols].astype('float') new_test[numerical_cols] = new_test[numerical_cols].astype('float') combined_comp = combined['STEEPNESS'].value_counts(normalize=True) most_class = combined['STEEPNESS'].value_counts(normalize=True).max() print('---') print('Relative Frequency:',combined_comp) print('---') print('Proportion of most common class:',(most_class*100)) #The most common class is 'average' at 58% #There are 45 days so 27/9/9 split cutoff_train = '2021-04-01 00:00:00' cutoff_val = '2021-04-10 00:00:00' cutoff_test = '2021-04-20 00:00:00' cutoff_new_test = '2021-04-20 00:00:00' train = combined.loc[combined.index < cutoff_train] combined = combined.loc[combined.index > cutoff_train] val = combined.loc[combined.index <
cutoff_val] combined = combined.loc[combined.index > cutoff_val] test = combined.loc[combined.index < cutoff_test] test_new = new_test.loc[new_test.index>cutoff_test] train_range = [train.index[1],train.index[-1]] val_range = [val.index[1],val.index[-1]] test_range = [test.index[1],test.index[-1]] print('---') print('Train:',train_range,'Length:',len(train)) print('Val:',val_range,'Length:',len(val)) print('Test:',test_range,'Length:',len(test)) print(len(test_new)) target = 'MWD' X_train = train.drop(columns=target) y_train = train[target] X_val = val.drop(columns=target) y_val = val[target] X_test = test.drop(columns=target) y_test = test[target] X_new_test = new_test.drop(columns=target) y_new_test = new_test[target] print('---') print('Train:',X_train.shape,y_train.shape,'Val:',X_val.shape,y_val.shape,'Test:',X_test.shape,y_test.shape) print('---') baseline_acc = y_train.mean() y_pred = [y_train.mean()] * len(y_train) print('Mean Wave Direction:',round(baseline_acc,2)) print('Mean Absolute Error of Naive Regressor:',mean_absolute_error(y_train,y_pred)) # - #Adaptive Boosting Regressor model_abr = make_pipeline( OneHotEncoder(use_cat_names=True), AdaBoostRegressor(random_state=42) ) model_abr.fit(X_train,y_train) # + def check_metrics(model): print('---') print(model) print('Training MAE:', mean_absolute_error(y_train, model.predict(X_train))) print('Validation MAE:', mean_absolute_error(y_val, model.predict(X_val))) print('Validation R^2:', model.score(X_val,y_val)) print() print() check_metrics(model_abr) # + #X_new test prediction of MWD y_pred = model_abr.predict(X_new_test) test_mae = mean_absolute_error(y_new_test,model_abr.predict(X_new_test)) test_R2 = model_abr.score(X_new_test,y_new_test) print(test_mae,test_R2) # + ##Below is tidal stuff # + #Working with tides tides_dates_times = pd.read_csv('BW_MET46251.txt', sep=' ', skipinitialspace=True) tides = pd.read_csv('9413745.txt',delim_whitespace=True) test_times_X = X_test.copy() 
tides_dates_times['Date'] = tides_dates_times['#YY']+'/'+tides_dates_times['MM']+'/'+tides_dates_times['DD']+' '+tides_dates_times['hh']+':'+tides_dates_times['mm'] #Now I need to convert both time formats into common datetime format expected by prophet, which I'm #Hoping to use to infill tidal values, though there is likely a fundamentally more sound way #To accomplish this. Tides need to be converted from 24-hour time to datetime. #from datetime import datetime tides['Time'] = tides['Day']+' '+tides['Time'] for i in tides.Time: tides['a'] = tides['Time'].str[:2].astype('int') tides['b'] = tides['Time'].str[3:5].astype('int') tides['c'] = tides['Time'].str[-2:] tides['d'] = tides['Time'] drop = ['Day','Time'] tides.drop(columns=drop,inplace=True) a = tides.loc[tides['c']=='PM'] d = a.copy() d['a'] = d['a']+12 b = tides.loc[tides['c']=='AM'] c = pd.concat([b,d]) #c['dt'] = c.index+' '+c['a'].astype('str').str[0:4]+':'+c['b'].astype('str')+':'+'00' c['e'] = c.index+' '+c['d'] c['dt'] = pd.to_datetime(c['e'],infer_datetime_format=True) c.index = c['dt'] drop=['a','b','c','d','dt','e'] c.drop(columns=drop,inplace=True) c = c.sort_index() c['time'] = c.index c['time2'] = c['time'].shift(1) c['time3'] = (c['time']-c['time2']).astype('timedelta64[h]') c['Pred2'] = c['Pred'].shift(1) c['per_dif'] = c['Pred']-c['Pred2'] c['del'] = c['per_dif'] / c['time3'] c
['Date','time2','time3','Pred2','per_dif','del'] cmp.drop(columns=cmp_drop,inplace=True) cmp['day'] = cmp.index.day cmp['time'] = cmp.index.hour cmp = cmp.sort_values(['day','time'],ascending=[True,True]) cmp = cmp.reset_index() cmp # + c_drop = ['time'] c.drop(columns=c_drop,inplace=True) c['day'] = c.index.day c['time'] = c.index.hour c = c.reset_index() c ccmp = pd.concat([cmp,c]) ccmp = ccmp.sort_values(['day','time'],ascending=[True,True]) ccmp.reset_index() ccmp['key'] = (ccmp['time']).astype('str') + (ccmp['day']).astype('str') ccmp.drop(columns=['High/Low','dt','Date','time2','time3','Pred2','per_dif','del'],inplace=True) ccmp.dropna(inplace=True) ccmp # + #I've imputed my tidal data, now I'll combine it into my df #I'll just make a copy of the old one X_train_tides = X_train.copy() y_train_tides = y_train.copy() X_train_tides['time'] = X_train_tides.index.hour X_train_tides['day'] = X_train_tides.index.day X_train_tides['key'] = (X_train_tides['time']).astype('str') + (X_train_tides['day']).astype('str') X_train_tides_c = pd.merge(X_train_tides,ccmp,on='key',how='inner') X_train_tides_c.drop_duplicates(inplace=True) X_train_tides_c.index = X_train_tides_c['index'] X_train_tides_c.drop(columns=['time_x','day_x','key','index','time_y','day_y'],inplace=True) print(X_train_tides_c.shape) X_train_tides_c = X_train_tides_c.drop_duplicates(subset='Pred',keep='first') X_train_tides_c.sort_index().head(5) # + X_test_tides = X_test.copy() X_test_tides['dam'] = X_test_tides.index X_test_tides['hour'] = X_test_tides['dam'].dt.hour X_test_tides['day'] = X_test_tides['dam'].dt.day X_test_tides['key'] = (X_test_tides['day']).astype('str') + (X_test_tides['hour']).astype('str') X_test_tides = X_test_tides.reset_index() tides_test_raw = pd.read_csv('24hr9413745.txt',delim_whitespace=True) tides_test_raw['both'] = tides_test_raw['Date']+' '+tides_test_raw['Time'] tides_test_raw['both'] = pd.to_datetime(tides_test_raw['both']) tides_test_raw['both2'] = 
tides_test_raw['both'].shift(1) tides_test_raw['time3'] = tides_test_raw['both'] - tides_test_raw['both2'] tides_test_raw['time3'] = (pd.to_numeric(tides_test_raw['time3'].dt.seconds,downcast='integer')/3600) tides_test_raw = tides_test_raw[1:] tides_test_raw['time3'] = round(tides_test_raw['time3'],0).astype('int') tides_test_raw['day'] = tides_test_raw['both'].dt.day tides_test_raw['Pred2'] = tides_test_raw['Pred'].shift(1) tides_test_raw['dif'] = tides_test_raw['Pred'] - tides_test_raw['Pred2'] tides_test_raw['vel'] = tides_test_raw['dif'] / tides_test_raw['time3'] tides_test_raw['day'] = tides_test_raw['both'].dt.day tides_test_raw['hour'] = tides_test_raw['both'].dt.hour tides_test_raw['key'] = tides_test_raw['day'].astype('str')+tides_test_raw['hour'].astype('str') tides_test_raw #To impute tidal data t = [] pp = [] p = [] for j in range(1,len(tides_test_raw)): #For length of tide dataframe p = [] for i in range(1,((tides_test_raw['time3'][j]).astype('int')-1)): #For the number of hours between observations j & j+1 interval = datetime.timedelta(hours=i) #Add iterating hours t.append(tides_test_raw['both2'][j]+interval) #add to list of 6 datetimes p.append(abs(tides_test_raw['Pred'][j]+(tides_test_raw['vel'][j]-(i)))) #Add prediction (prev pred + i * rate) p.reverse() pp.extend(p) data = {'time':t,'Pred':pp} mp_test = pd.DataFrame(data) mp_test.index = mp_test['time'] #I know this worked #I need to now combine these predictions with my test_set mp_test mp_test['day'] = mp_test.index.day mp_test['hour'] = mp_test.index.hour mp_test['key'] = mp_test['day'].astype('str')+mp_test['hour'].astype('str') mp_test.drop(columns=['time','day','hour'],inplace=True) mp_test = mp_test.reset_index() mp_test.drop(columns='time',inplace=True) mp_test # - test_tides = pd.merge(left=mp_test,right=X_test_tides,on='key') test_tides = test_tides.drop_duplicates(subset='Pred',keep='first') test_tides['Index'] = test_tides['Date'] holder = test_tides['Pred'] test_tides.index = 
test_tides['Index'] test_tides['Pred2'] = test_tides['Pred'] test_tides.drop(columns=['Date','key','dam','hour','day','Pred','Index'],inplace=True) test_tides['Pred'] = test_tides['Pred2'] test_tides.drop(columns=['Pred2'],inplace=True) test_tides # + #Now that I've finally fitted in my tidal data, I don't really want to do the same thing #For the validation and test sets. I could probably do them together (data leak?) #What I should do is make an advanced wrangle function that does all the steps automatically #It wouldn't even take that long #For now, I'm going to focus on some visualizations to satisfy the project constraints # - X_train_tides_c # + #mp_test.head(10) #X_test_tides['time'] = X_test_tides.index #X_test_tides = X_test_tides.reset_index() #cmp = pd.merge(left=tides_test_raw,right=mp_test,on='key') #cmp.sort_index().head(5) # cmp_drop = ['Date','time2','time3','Pred2','per_dif','del'] # cmp.drop(columns=cmp_drop,inplace=True) # cmp['day'] = cmp.index.day # cmp['time'] = cmp.index.hour # cmp = cmp.sort_values(['day','time'],ascending=[True,True]) # cmp = cmp.reset_index() #cmp = cmp.drop_duplicates(subset='Pred',keep='first') #tides_test_raw['time2'] = pd.to_datetime(tides_test_raw['time2']) #tides_test_raw['time2'] = pd.to_datetime(tides_test_raw['time2']) #tides_test_raw['time3'] = tides_test_raw['time']-tides_test_raw['time2'] # tides_test_raw['Pred2'] = tides_test_raw['Pred'].shift(1) # #tides_test_raw['per_dif'] = tides_test_raw['Pred']-tides_test_raw['Pred2'] # #tides_test_raw['del'] = tides_test_raw['per_dif'] / tides_test_raw['time3'] # #tides_test_raw['time'] = datetime.timestamp(tides_test_raw['Time']) # #tides_test_raw['time'] = tides_test_raw #tides_test_raw['time'] = tides_test_raw['both'].dt.time #tides_test_raw['time2'] = tides_test_raw['time'].shift(1)
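# As an aside on the manual tide infilling above: if the goal is an hourly grid of tide heights between high/low predictions, pandas can do the same linear-in-time fill directly. The sketch below is my addition (made-up prediction values standing in for the buoy/tide-station data used here, which isn't available in this notebook):

```python
import pandas as pd

# hypothetical high/low tide predictions at irregular times
tides = pd.Series(
    [5.1, 0.8, 4.7],
    index=pd.to_datetime(["2021-04-01 03:12", "2021-04-01 09:40", "2021-04-01 15:55"]),
    name="Pred",
)

# hourly grid spanning the predictions
hourly_index = pd.date_range(tides.index.min().ceil("H"),
                             tides.index.max().floor("H"), freq="H")

# linear-in-time interpolation, equivalent to the manual rate-times-hours fill above
hourly = (tides.reindex(tides.index.union(hourly_index))
               .interpolate(method="time")
               .reindex(hourly_index))
print(hourly.head())
```

# This replaces the nested loops and the day/hour key-merging with a single reindex-interpolate-reindex chain, and it generalizes to any spacing between tide extremes.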
Archive/Waveheight_Modelling/WaveHeight_ModellingV3(Old).ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Introduction to CT # # In this exercise sheet we will get to know the Computed Tomography reconstruction problem. # # ## Load Data # + import torch import matplotlib.pyplot as plt from torchvision import datasets, transforms # %matplotlib inline batch_size = 4 # datasets (MNIST) transform_test = transforms.Compose([ transforms.ToTensor() ]) mnist_test = datasets.MNIST('/data', train=False, download=True, transform=transform_test) # dataloaders test_loader = torch.utils.data.DataLoader(mnist_test, batch_size=batch_size) batch, labels = next(iter(test_loader)) phantoms = [batch[i][0].numpy() for i in range(batch_size)] ########################################## # TODO: show first phantom ... ########################################## # - # ## Computed Tomography # # In computed tomography, the tomography reconstruction problem is to obtain a tomographic slice image from a set of projections. A projection is formed by drawing a set of parallel rays through the 2D object of interest, assigning the integral of the object’s contrast along each ray to a single pixel in the projection. A single projection of a 2D object is one dimensional. To enable computed tomography reconstruction of the object, several projections must be acquired, each of them corresponding to a different angle between the rays with respect to the object. A collection of projections at several angles is called a sinogram, which is a linear transform of the original image. # + import numpy as np from skimage.transform import radon n, m = 28, 28 ########################################## # TODO: specify number of angles! angles = ...
########################################## detectors = 40 ########################################## # TODO: create operator matrix filled with zeros operator = ... ########################################## theta = np.linspace(0.0, 180.0, angles, endpoint=False) for i in range(n * m): unit = np.zeros(n * m) unit[i] = 1 operator[:, i] = radon(unit.reshape(n, m), theta, circle=False).reshape(-1) # - sinograms = [] for phantom in phantoms: plt.figure(figsize=(10, 4)) # clean image plt.subplot(1, 3, 1) plt.imshow(phantom) # clean sinogram ########################################## # TODO: multiply operator matrix with phantom to get sinogram sinogram = ... ########################################## plt.subplot(1, 3, 2) plt.title('Clean sinogram') plt.imshow(sinogram) # noisy sinogram plt.subplot(1, 3, 3) ########################################## # TODO: add noise to the sinogram sinogram += ... ########################################## sinograms.append(sinogram) plt.title('Noisy sinogram') plt.imshow(sinogram) # # Direct Inverse # + from skimage.measure import compare_psnr plt.figure(figsize=(15, 4)) for i, phantom in enumerate(phantoms): plt.subplot(1, len(phantoms), i+1) ########################################## # TODO: compute direct inverse by inverting the matrix x_rec = ... ########################################## plt.xlabel('PSNR: %.2f' % compare_psnr(phantom, x_rec)) plt.imshow(x_rec) # - # # Filtered Back Projection # + from skimage.transform import iradon plt.figure(figsize=(15, 4)) for i, phantom in enumerate(phantoms): plt.subplot(1, len(phantoms), i+1) x_rec = iradon(sinograms[i], theta, circle=False) plt.xlabel('PSNR: %.2f' % compare_psnr(phantom, x_rec)) plt.imshow(x_rec) # - # # TSVD # + ########################################## # TODO: Compute SVD of the operator and plot the singular values U, S, V = ... 
########################################## # + def truncated_svd(U, S, V, y, k): S_inv = [] sigma = np.zeros((V.shape[0], U.shape[0])) for i in range(len(S)): if i < k and S[i] > 1e-9: sigma[i,i] = 1/S[i] else: sigma[i,i] = 0 A_inv = np.dot(np.dot(V.T, sigma), U.T) return np.dot(A_inv, y) for k in [len(S)//8, len(S)//4, len(S)//2, 3*len(S)//4, 7*len(S)//8, len(S)]: plt.figure(figsize=(15, 4)) for i, phantom in enumerate(phantoms): plt.subplot(1, len(phantoms), i+1) x_rec = truncated_svd(U, S, V, sinograms[i].reshape(-1), k).reshape(n,m) plt.title(r'$k=%d$' % k) plt.xlabel('PSNR: %.2f' % compare_psnr(phantom, x_rec)) plt.imshow(x_rec) plt.show()
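# The regularizing effect of truncation can be seen on a tiny ill-conditioned system, independent of the CT operator. This is a self-contained illustration added alongside the exercise (not part of its solution): inverting only the large singular values keeps the noise in the small-singular-value direction from being amplified.

```python
import numpy as np

np.random.seed(0)
A = np.array([[1.0, 0.0],
              [0.0, 1e-6]])                    # condition number 1e6
x_true = np.array([1.0, 2.0])
y = A @ x_true + 1e-4 * np.random.randn(2)     # small measurement noise

U, S, Vt = np.linalg.svd(A)

def tsvd_solve(U, S, Vt, y, k):
    # invert only the k largest singular values, zero out the rest
    S_inv = np.array([1/s if i < k else 0.0 for i, s in enumerate(S)])
    return Vt.T @ (S_inv * (U.T @ y))

x_full = tsvd_solve(U, S, Vt, y, k=2)   # the 1e-6 singular value amplifies the noise ~1e6x
x_trunc = tsvd_solve(U, S, Vt, y, k=1)  # truncation keeps the estimate bounded

print(np.linalg.norm(x_full - x_true), np.linalg.norm(x_trunc - x_true))
```

# The full pseudo-inverse is unbiased but wildly noisy; truncation trades a bias (the dropped component of x_true) for a large variance reduction, which is exactly the trade-off swept over k in the cell above.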
03-inverse-problems/01-ct-introduction.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#     jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

# ## Basic Concepts
# * Scalar: a single number, usually denoted by an italic lowercase letter, e.g. $x=1$.
#
#
# * Vector: an ordered one-dimensional array, in which an index uniquely identifies each element. Usually denoted by a bold italic lowercase letter, e.g. $\boldsymbol{x} = \{1,2,3,4,5 \}$, whose $i$-th element is written $x_i$.
#
#
# * Matrix: an ordered two-dimensional array, in which each element is identified by two indices giving its row and column. Usually denoted by a bold italic uppercase letter, e.g. $\boldsymbol{A} = \left[ \begin{matrix}1 & 2 \\ 3 & 4 \end{matrix} \right]$. An $n$-dimensional vector can be viewed as a $1 \times n$ matrix.
#
#
# * Tensor: an ordered multi-dimensional array, in which each element is identified by several indices. Usually denoted by a bold uppercase letter, e.g. $\bf{A}$. Vectors and matrices can be viewed as one- and two-dimensional tensors.
#

import numpy as np

## create a 3x4 tensor filled with zeros
np.zeros((3,4))

## create a random three-dimensional 2x3x4 tensor
np.random.rand(2,3,4)

# * Square matrix: a matrix whose number of rows equals its number of columns
# * Identity matrix: ones on the diagonal, zeros everywhere else

np.eye(4)

# ## Common Operations
# * reshape: change the number of dimensions of a tensor and the size of each dimension

x = np.arange(12)
x
x.shape
# reshape to 1x12
x = x.reshape(1,12)
x
x.shape
x = x.reshape(3,4)
x

# * Transpose: for vectors and matrices, transposition swaps rows and columns; the transpose of matrix $\boldsymbol{A}$ is written $\boldsymbol{A}^T$. For tensors of three or more dimensions, the axes to be swapped must be specified explicitly, as shown below:

x = np.arange(5).reshape(1,-1)
x
x.T

## create a 3x4 matrix and transpose it
A = np.arange(12).reshape(3,4)
A
A.T

## a 2x3x4 tensor
B=np.arange(24).reshape(2,3,4)
B
B.transpose(1,0,2)

# * Matrix multiplication: writing the two matrices as A and B with $C=A*B$, then $C_{ij} = \sum_k {A_{i,k}B_{k,j}}$. The formula shows that two matrices can be multiplied only when the number of columns of the first equals the number of rows of the second, as illustrated below:
#

A=np.arange(6).reshape(3,2)
B=np.arange(6).reshape(2,3)
A
B
np.matmul(A,B)

# * Element-wise operations: the collective name for operations on tensors of the same shape, including element-wise product, addition, and so on

A=np.arange(1,7).reshape(3,2)  # start from 1 so A/A and A%A avoid division by zero
A*A
A+A
A-A
A%A
A/A

# * Inverse matrix: the inverse of a square matrix $\boldsymbol{A}$ is written $\boldsymbol{A}^{-1}$ and satisfies $\boldsymbol{A*A}^{-1}=\boldsymbol{I}$, as illustrated below:

A = np.arange(4).reshape(2,2)
A
np.linalg.inv(A)

# * Eigendecomposition: one of the most widely used matrix decompositions, factoring a (square) matrix into a set of eigenvectors and eigenvalues. Suppose the square matrix $A$ has n linearly independent eigenvectors ${v_1,v_2,\cdots,v_n}$ with corresponding eigenvalues ${\lambda_1,\lambda_2,\cdots,\lambda_n}$. Collecting the eigenvectors into a matrix $V$, the eigendecomposition of $A$ can be written $A=Vdiag(\lambda)V^{-1}$.
# Not every matrix can be eigendecomposed, however. In particular, every real symmetric matrix can be decomposed into real eigenvectors and real eigenvalues: $A=Qdiag(\lambda)Q^T$, where $Q$ is an orthogonal matrix, i.e. it satisfies $QQ^T=I$.
# In Python, the eigenvalues and eigenvectors of a matrix are computed as follows:

A = np.arange(4).reshape(2,2)
A
eigvals, eigvectors = np.linalg.eig(A)
eigvals
eigvectors

# * Trace: the trace of a square matrix is the sum of its diagonal elements, Tr($A$)=$\sum\limits_iA_{ii}$.
# Properties of the trace:
# 1. The trace is invariant under transposition: Tr($A$)=Tr($A^T$).
# 2. The trace of a product of matrices is invariant under cyclic permutation of the factors (as long as the product stays square). For example, with $A\in R^{m\times n}$ and $B\in R^{n\times m}$, Tr($AB$)=Tr($BA$).
# An example of the trace:

A=np.arange(4).reshape(2,2)
A
np.trace(A)

# * Determinant: written det($A$). The determinant equals the product of the eigenvalues, and its absolute value can be interpreted as the volume of an n-dimensional parallelepiped. Example:

A=np.arange(4).reshape(2,2)
A
np.linalg.det(A)

# * Principal Component Analysis (PCA): in Python, use PCA from the sklearn library

# +
from sklearn.decomposition import PCA
from sklearn.datasets import load_iris

data=load_iris()
x=data.data

pca=PCA(n_components=2)  # load the PCA algorithm; keep k=2 principal components
reduced_x=pca.fit_transform(x)  # reduce the dimensionality of the samples

ratio=pca.explained_variance_ratio_  # variance ratio explained by each component; the total should reach at least 80%
ratio
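# To make the identity $A=Vdiag(\lambda)V^{-1}$ concrete, the following check (an addition to the notebook, using only NumPy) reconstructs the matrix from its eigenvalues and eigenvectors:

```python
import numpy as np

A = np.arange(4).reshape(2, 2)     # [[0, 1], [2, 3]] has two distinct eigenvalues
eigvals, V = np.linalg.eig(A)      # columns of V are the eigenvectors

# rebuild A as V @ diag(lambda) @ V^{-1}
A_rebuilt = V @ np.diag(eigvals) @ np.linalg.inv(V)

print(np.allclose(A, A_rebuilt))  # True
```

# Because this A has distinct eigenvalues, V is invertible and the reconstruction is exact up to floating-point error; for a defective matrix (too few independent eigenvectors) this identity would not hold.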
DeepLearningBook/linalg.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Error Handling # # * [Errors and Exceptions](https://docs.python.org/3/tutorial/errors.html) # * [Built-in Exceptions](https://docs.python.org/3/library/exceptions.html) # * [User-defined Exceptions](https://docs.python.org/3/tutorial/errors.html#user-defined-exceptions) # ## Error Types # # - Syntax errors # - Runtime errors # - Logic errors # ## Overall Syntax # # ```python # try: # something_dangerous() # except (ValueError, ArithmeticError): # pass # except TypeError as e: # pass # ``` # + e = 42 try: 1 + "42" except TypeError as e: pass # - e # ## [BaseException](https://docs.python.org/3/library/exceptions.html#BaseException) BaseException.__subclasses__() # - Only system exceptions and exceptions that interrupt the interpreter should be inherited from the `BaseException` class # - Other exceptions, including user-defined exceptions, should be inherited from the `Exception` class try: 1/0 except Exception: # catch all exceptions pass # ## Built-in Exceptions # ### [AssertionError](https://docs.python.org/3/library/exceptions.html#AssertionError) assert 2 + 2 == 5, ("Math", "still", "works") # ⚠️ _Don't catch AssertionError_. # ### ModuleNotFoundError, NameError import foobar foobar # ### AttributeError, LookupError object().foobar {}["foobar"] [][0] # `LookupError` is a base class for `KeyError` and `IndexError`. # ### ValueError, TypeError "foobar".split("") b"foo" + "bar" # ## User-defined Exceptions # 💡 Best practice for library authors: define your own base exception class and inherit all other exceptions from it. 
# # ```python # class HttpClientException(Exception): # pass # ``` # # It will help users of a library to catch all its specific exceptions: # # ```python # try: # get("https://google.com/") # except HttpClientException: # pass # ``` # ## Args and Traceback try: 1 + "42" except Exception as e: caught = e caught.args caught.__traceback__ import traceback traceback.print_tb(caught.__traceback__) # ## raise raise TypeError("type mismatch") raise 42 # ℹ️ `raise` without an argument will raise the last caught exception or `RuntimeError` if it doesn't exist. raise try: 1/0 except Exception: raise # ## raise from try: {}["foobar"] except KeyError as e: raise RuntimeError("Ooops!") from e # ## try...finally import sys try: handle = open("tmp.txt", "wt") try: pass finally: handle.close() except IOError as e: print(e, file=sys.stderr) # ## try...else try: handle = open("tmp.txt", "wt") except IOError as e: print(e, file=sys.stderr) else: print("No exception happened.") # --- try: {}["foobar"] except KeyError: "foobar".split("")
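# One detail worth knowing about `raise ... from`: the original exception is attached to the new one as `__cause__`, so handlers further up the stack can still inspect it. A small self-contained check (an addition to the notes; `lookup` is a made-up helper):

```python
def lookup(d, key):
    try:
        return d[key]
    except KeyError as e:
        # chain the low-level KeyError onto a higher-level error
        raise RuntimeError(f"bad key: {key!r}") from e

caught = None
try:
    lookup({}, "foobar")
except RuntimeError as e:
    caught = e

print(type(caught.__cause__).__name__)  # KeyError
print(caught.__cause__.args)            # ('foobar',)
```

# This is also what produces the "The above exception was the direct cause of the following exception" section in tracebacks.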
06_error_handling.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # ### Closing Generators # We can actually close a generator by sending it a special message, calling its `close()` method. # # When that happens, an exception is raised **inside** the generator, and we may or may not want to do something - maybe cleaning up a resource, committing a transaction to a database, etc. # Let's try it out, without any exception handling first: from inspect import getgeneratorstate # + import csv def parse_file(f_name): print('opening file...') f = open(f_name, 'r') try: dialect = csv.Sniffer().sniff(f.read(2000)) f.seek(0) reader = csv.reader(f, dialect=dialect) for row in reader: yield row finally: print('closing file...') f.close() # - # You may notice by the way, that this could easily be turned into a context manager by yielding `reader` instead of yielding individual `rows` from within a loop (as it stands, you cannot make it into a context manager - remember that for context managers there should be a **single** yield!) # + import itertools parser = parse_file('cars.csv') for row in itertools.islice(parser, 10): print(row) # - # At this point, we have read 10 rows from the file, but since we have not exhausted our generator, the file is still open. # # How do we close it without iterating through the entire generator? # # Easy, we call the `close()` method on it: parser.close() # And the state of the generator is now closed: from inspect import getgeneratorstate getgeneratorstate(parser) # Which means we can no longer call `next()` on it - we'll just get a `StopIteration` exception: next(parser) # What's actually happening is that when we call `close()`, an exception is raised **inside** our generator. Notice that we don't actually catch that exception - we have a finally block, but we do not catch the exception.
# # So, an exception is raised while processing that loop, which means our `finally` block runs right away. # # But we are not actually catching the exception, yet we do not actually see the exception appear in our console. This is because that exception is handled specially by Python. When it receives that exception it simply knows that the generator state is now closed. # # This is similar to how the `StopIteration` exception that is raised when we use a `for` loop on an iterator, does not actually show up - Python handles it silently, noting that the iterator is exhausted. # OK, so now, let's catch that exception inside our generator. The exception is called `GeneratorExit` (and inherits from `BaseException`, not `Exception`, if that matters to you at this point). # # But we have to be careful here - when we call `close()`, Python **expects** one of three things to happen: # * the generator raises a `GeneratorExit` exception # * the generator exits cleanly # * some other exception is raised - in which case it will propagate the exception to the caller. # # If we trap it, silence it, and try to continue running the generator, Python **will** complain and throw a runtime exception! # # So, it's OK to catch the exception, but if we do, we need to make sure we re-raise it, terminate the function, or raise another exception (though that exception will bubble up to the caller). # # Here's what the Python docs have to say about that: # # ``` # generator.close() # Raises a GeneratorExit at the point where the generator function was paused. If the generator function then exits gracefully, is already closed, or raises GeneratorExit (by not catching the exception), close returns to its caller. If the generator yields a value, a RuntimeError is raised. If the generator raises any other exception, it is propagated to the caller. close() does nothing if the generator has already exited due to an exception or normal exit.
# ``` # Let's look at an example of this: def parse_file(f_name): print('opening file...') f = open(f_name, 'r') try: dialect = csv.Sniffer().sniff(f.read(2000)) f.seek(0) next(f) # skip header row reader = csv.reader(f, dialect=dialect) for row in reader: yield row except Exception as ex: print('some exception occurred', str(ex)) except GeneratorExit: print('Generator was closed!') finally: print('cleaning up...') f.close() # Now let's try that again: parser = parse_file('cars.csv') for row in itertools.islice(parser, 5): print(row) parser.close() # You'll notice that the exception occurred, and then the generator ran the `finally` block and had a clean exit - in other words, the `GeneratorExit` exception was silenced, but the generator terminated (returned), so that's perfectly fine. # # But what happens if we catch that exception inside a loop maybe, and simply ignore it and try to keep going? def parse_file(f_name): print('opening file...') f = open(f_name, 'r') try: dialect = csv.Sniffer().sniff(f.read(2000)) f.seek(0) next(f) # skip header row reader = csv.reader(f, dialect=dialect) for row in reader: try: yield row except GeneratorExit: print('ignoring call to close generator...') finally: print('cleaning up...') f.close() parser = parse_file('cars.csv') for row in itertools.islice(parser, 5): print(row) parser.close() # Aha! See, one does not simply ignore a call to `close()` the generator! # Generators should be cooperative, and ignoring a request to close down is not exactly being cooperative.
# If we really want to catch the exception inside our loop, we have to either re-raise it or return from the generator: # So, both of these will work just fine: def parse_file(f_name): print('opening file...') f = open(f_name, 'r') try: dialect = csv.Sniffer().sniff(f.read(2000)) f.seek(0) next(f) # skip header row reader = csv.reader(f, dialect=dialect) for row in reader: try: yield row except GeneratorExit: print('got a close...') raise finally: print('cleaning up...') f.close() parser = parse_file('cars.csv') for row in itertools.islice(parser, 5): print(row) parser.close() # As will this: def parse_file(f_name): print('opening file...') f = open(f_name, 'r') try: dialect = csv.Sniffer().sniff(f.read(2000)) f.seek(0) next(f) # skip header row reader = csv.reader(f, dialect=dialect) for row in reader: try: yield row except GeneratorExit: print('got a close...') return finally: print('cleaning up...') f.close() parser = parse_file('cars.csv') for row in itertools.islice(parser, 5): print(row) parser.close() # And of course, our `finally` block still ran. # If we want to we can also raise an exception, but this will then be received by the caller, who either has to handle it, or let it bubble up: def parse_file(f_name): print('opening file...') f = open(f_name, 'r') try: dialect = csv.Sniffer().sniff(f.read(2000)) f.seek(0) next(f) # skip header row reader = csv.reader(f, dialect=dialect) for row in reader: try: yield row except GeneratorExit: print('got a close...') raise Exception('why, oh why, did you do this?') from None finally: print('cleaning up...') f.close() parser = parse_file('cars.csv') for row in itertools.islice(parser, 5): print(row) parser.close() # Another very important point to note is that the `GeneratorExit` exception does not inherit from `Exception` - because of that, you can still trap an `Exception`, even very broadly, without accidentally catching, and potentially silencing, a `GeneratorExit` exception. 
# # We'll see an example of this next. # So, what about applying the same `close()` to generators acting not as iterators, but as coroutines? # Suppose we have a generator whose job is to open a database transaction, receive data to be written to the database, and then commit the transactions to the database once the work is "over". # We can certainly do it using a context manager - but we can also do it using a coroutine. def save_to_db(): print('starting new transaction') while True: try: data = yield print('sending data to database:', data) except GeneratorExit: print('committing transaction') raise trans = save_to_db() next(trans) trans.send('data 1') trans.send('data 2') trans.close() # But of course, something could go wrong while writing the data to the database, in which case we would want to abort the transaction instead: def save_to_db(): print('starting new transaction') while True: try: data = yield print('sending data to database:', eval(data)) except Exception: print('aborting transaction') except GeneratorExit: print('committing transaction') raise trans = save_to_db() next(trans) trans.send('1 + 10') trans.send('1/0') # But we have a slight problem: trans.send('2 + 2') # We'll circle back to this in a bit. # But we can still commit the transaction when things do not go wrong: trans = save_to_db() next(trans) trans.send('1+10') trans.send('2+10') trans.close() # OK, so this works but is far from ideal: # # 1. We do not know that an exception occurred and that a rollback happened (well we do from the console output, but not programmatically) # 2. If an abort took place, we really need to close the generator # 3. It would be safer to have a `finally` clause that either commits or rolls back the transaction - we could have an exception that is not caught by any of our exception handlers - and that would be a problem!
# # Let's fix those issues up: def save_to_db(): print('starting new transaction') is_abort = False try: while True: data = yield print('sending data to database:', eval(data)) except Exception: is_abort = True raise finally: if is_abort: print('aborting transaction') else: print('committing transaction') # Notice how we're not even catching the `GeneratorExit` exception - we really don't need to - that exception will be raised, the `finally` block will run, and the `GeneratorExit` exception will be bubbled up to Python, which will expect it after the call to `close()`. trans = save_to_db() next(trans) trans.send('1 + 1') trans.close() trans = save_to_db() next(trans) trans.send('1 / 0')
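# The cleanup guarantee discussed above can also be obtained with `contextlib.closing`, which calls `close()` for us when a `with` block exits - even on partial consumption. A minimal, self-contained sketch (the `numbers` generator here is a hypothetical stand-in, not the file parser from the lesson):

```python
from contextlib import closing
from inspect import getgeneratorstate

def numbers():
    # stand-in for a generator holding a resource (like the open file above)
    try:
        yield from range(3)
    finally:
        print('cleaning up...')

g = numbers()
with closing(g):
    first = next(g)          # consume only part of the generator

# close() was called on exit, so the finally block ran and the generator is closed
print(getgeneratorstate(g))  # GEN_CLOSED
```

# This saves the explicit `parser.close()` call, at the cost of requiring the caller to use a `with` block.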
dd_1/Part 2/Section 12 - Generator Based Coroutines/05 - Closing Generators/Closing Generators.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # ## Learning Curves for Decision Trees import numpy as np import matplotlib.pyplot as plt # + from sklearn import datasets # note: load_boston was deprecated in scikit-learn 1.0 and removed in 1.2 boston = datasets.load_boston() X = boston.data y = boston.target # + from sklearn.model_selection import train_test_split X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=666) # - # ### Learning curves # Plot learning curves based on RMSE # + from sklearn.tree import DecisionTreeRegressor from sklearn.metrics import mean_squared_error def plot_learning_curve(algo, X_train, X_test, y_train, y_test): train_score = [] test_score = [] for i in range(1, len(X_train)+1): algo.fit(X_train[:i], y_train[:i]) y_train_predict = algo.predict(X_train[:i]) train_score.append(mean_squared_error(y_train[:i], y_train_predict)) y_test_predict = algo.predict(X_test) test_score.append(mean_squared_error(y_test, y_test_predict)) plt.plot([i for i in range(1, len(X_train)+1)], np.sqrt(train_score), label="train") plt.plot([i for i in range(1, len(X_train)+1)], np.sqrt(test_score), label="test") plt.legend() plt.show() plot_learning_curve(DecisionTreeRegressor(), X_train, X_test, y_train, y_test) # - # Plot learning curves based on the R^2 score # + from sklearn.metrics import r2_score def plot_learning_curve_r2(algo, X_train, X_test, y_train, y_test): train_score = [] test_score = [] for i in range(1, len(X_train)+1): algo.fit(X_train[:i], y_train[:i]) y_train_predict = algo.predict(X_train[:i]) train_score.append(r2_score(y_train[:i], y_train_predict)) y_test_predict = algo.predict(X_test) test_score.append(r2_score(y_test, y_test_predict)) plt.plot([i for i in range(1, len(X_train)+1)], train_score, label="train") plt.plot([i for i in range(1, len(X_train)+1)], test_score, label="test") plt.legend() plt.axis([0, len(X_train)+1, -0.1, 1.1]) plt.show() plot_learning_curve_r2(DecisionTreeRegressor(), X_train,
X_test, y_train, y_test) # - # Using max_depth as an example, compare the learning curves for different parameter values plot_learning_curve_r2(DecisionTreeRegressor(max_depth=1), X_train, X_test, y_train, y_test) plot_learning_curve_r2(DecisionTreeRegressor(max_depth=3), X_train, X_test, y_train, y_test) plot_learning_curve_r2(DecisionTreeRegressor(max_depth=5), X_train, X_test, y_train, y_test) plot_learning_curve_r2(DecisionTreeRegressor(max_depth=20), X_train, X_test, y_train, y_test)
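# For comparison, scikit-learn ships a built-in `learning_curve` helper that computes the same kind of train/validation score arrays (with cross-validation) in one call. A minimal sketch on a synthetic dataset, so it does not depend on the Boston data loaded above:

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.model_selection import learning_curve
from sklearn.tree import DecisionTreeRegressor

# synthetic regression data (stand-in for boston.data / boston.target)
X, y = make_regression(n_samples=200, n_features=5, noise=10.0, random_state=666)

# scores at 5 training-set sizes, each evaluated over 5 CV folds
train_sizes, train_scores, test_scores = learning_curve(
    DecisionTreeRegressor(max_depth=3), X, y,
    train_sizes=np.linspace(0.1, 1.0, 5), cv=5, scoring='r2')

print(train_sizes)               # absolute training-set sizes
print(test_scores.mean(axis=1))  # mean validation R^2 per size
```

# Unlike the manual loop above (which refits once per added sample), this refits only at a handful of sizes, so it is much faster on larger datasets.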
Part5Improve/12-Decision-Tree/Optional-01-Learning-Curve-for-Decision-Tree/Optional-01-Learning-Curve-for-Decision-Tree.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # The trick for **highlighting a specific group** is to plot all the groups with thin and discreet lines first. Then, replot the interesting group(s) with strong and really visible line(s). Moreover, it is good practice to annotate this highlighted group with a custom annotation. The following example shows how to do that by using the `color`, `linewidth` and `alpha` parameters of the `plot()` function of **matplotlib**. # + # libraries import matplotlib.pyplot as plt import numpy as np import pandas as pd # Make a data frame df=pd.DataFrame({'x': range(1,11), 'y1': np.random.randn(10), 'y2': np.random.randn(10)+range(1,11), 'y3': np.random.randn(10)+range(11,21), 'y4': np.random.randn(10)+range(6,16), 'y5': np.random.randn(10)+range(4,14)+(0,0,0,0,0,0,0,-3,-8,-6), 'y6': np.random.randn(10)+range(2,12), 'y7': np.random.randn(10)+range(5,15), 'y8': np.random.randn(10)+range(4,14) }) # Change the style of plot plt.style.use('seaborn-darkgrid') # set figure size my_dpi=96 plt.figure(figsize=(480/my_dpi, 480/my_dpi), dpi=my_dpi) # plot multiple lines for column in df.drop('x', axis=1): plt.plot(df['x'], df[column], marker='', color='grey', linewidth=1, alpha=0.4) # Now redo the interesting curve, but bigger and with a distinct color plt.plot(df['x'], df['y5'], marker='', color='orange', linewidth=4, alpha=0.7) # Change x axis limit plt.xlim(0,12) # Let's annotate the plot num=0 for i in df.values[9][1:]: num+=1 name=list(df)[num] if name != 'y5': plt.text(10.2, i, name, horizontalalignment='left', size='small', color='grey') # And add a special annotation for the group we are interested in plt.text(10.2, df.y5.tail(1), '<NAME>', horizontalalignment='left', size='small', color='orange') # Add titles plt.title("Evolution of Mr Orange vs other students",
loc='left', fontsize=12, fontweight=0, color='orange') plt.xlabel("Time") plt.ylabel("Score") # Show the graph plt.show()
src/notebooks/123-highlight-a-line-in-line-plot.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + pycharm={"is_executing": false} from bertviz import model_view from transformers import GPT2Tokenizer, GPT2Model # + pycharm={"is_executing": false} language="javascript" # require.config({ # paths: { # d3: '//cdnjs.cloudflare.com/ajax/libs/d3/5.7.0/d3.min', # jquery: '//ajax.googleapis.com/ajax/libs/jquery/2.0.0/jquery.min', # } # }); # + pycharm={"is_executing": false} model_version = 'gpt2' model = GPT2Model.from_pretrained(model_version, output_attentions=True) tokenizer = GPT2Tokenizer.from_pretrained(model_version) text = "The quick brown fox jumps over the lazy dogs." inputs = tokenizer.encode_plus(text, return_tensors='pt', add_special_tokens=True) input_ids = inputs['input_ids'] attention = model(input_ids)[-1] input_id_list = input_ids[0].tolist() # Batch index 0 tokens = tokenizer.convert_ids_to_tokens(input_id_list) model_view(attention, tokens) # + pycharm={"is_executing": false}
model_view_gpt2.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # UN Regions # **[Work in progress]** # # This notebook creates a .csv file with UN geographic regions, subregions, and intermediate region information for ingestion into the Knowledge Graph. # # Data source: [Statistics Division of the United Nations Secretariat](https://unstats.un.org/unsd/methodology/m49/) # # Data set: [M49](https://unstats.un.org/unsd/methodology/m49/overview) # # Authors: <NAME> (<EMAIL>), <NAME> (<EMAIL>) import os from pathlib import Path import pandas as pd pd.options.display.max_rows = None # display all rows pd.options.display.max_columns = None # display all columns NEO4J_IMPORT = Path(os.getenv('NEO4J_IMPORT')) print(NEO4J_IMPORT) # ### UN regions, subregions, and intermediate regions df = pd.read_excel("../../reference_data/UNSDMethodology.xlsx", dtype='str') df = df[['Region Name', 'Region Code', 'Sub-region Name', 'Sub-region Code', 'Intermediate Region Name', 'Intermediate Region Code', 'ISO-alpha3 Code']] df.fillna('', inplace=True) # for now exclude region without an iso code (Channel Islands) df = df.query("`ISO-alpha3 Code` != ''") # Antarctica has no region code df = df.query("`Region Name` != ''") df.head() # Assign names without spaces df.rename(columns={'Region Name': 'UNRegion'}, inplace=True) df.rename(columns={'Region Code': 'UNRegionCode'}, inplace=True) df.rename(columns={'Sub-region Name': 'UNSubRegion'}, inplace=True) df.rename(columns={'Sub-region Code': 'UNSubRegionCode'}, inplace=True) df.rename(columns={'Intermediate Region Name': 'UNIntermediateRegion'}, inplace=True) df.rename(columns={'Intermediate Region Code': 'UNIntermediateRegionCode'}, inplace=True) df.rename(columns={'ISO-alpha3 Code': 'iso3'}, inplace=True) # ### Assign unique identifiers # Use m49 as a prefix for the M49 standard by
the Statistics Division of the United Nations Secretariat df['UNRegionCode'] = 'm49:' + df['UNRegionCode'] df['UNSubRegionCode'] = 'm49:' + df['UNSubRegionCode'] df['UNIntermediateRegionCode'] = 'm49:' + df['UNIntermediateRegionCode'] df.head() # ### Add missing region information (from hand-curated list) additions = pd.read_csv("../../reference_data/UNRegionAdditions.csv") additions.fillna('', inplace=True) additions.tail(10) df = pd.concat([df, additions]) df.to_csv(NEO4J_IMPORT / "00k-UNAll.csv", index=False) # ### Save region assignments in separate files # This is done so iso3 country codes can be linked to the lowest level in the UN region hierarchy. intermediateRegion = df[df['UNIntermediateRegion'] != ''] intermediateRegion.to_csv(NEO4J_IMPORT / "00k-UNIntermediateRegion.csv", index=False) intermediateRegion.head() subRegion = df[(df['UNSubRegion'] != '') & (df['UNIntermediateRegion'] == '')] subRegion.to_csv(NEO4J_IMPORT / "00k-UNSubRegion.csv", index=False) subRegion.head() region = df[(df['UNSubRegion'] == '') & (df['UNIntermediateRegion'] == '')] region.to_csv(NEO4J_IMPORT / "00k-UNRegion.csv", index=False) region.head()
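# The chain of single-column `rename` calls above can be collapsed into one call with a mapping dict, and the deprecated `DataFrame.append` replaced by `pd.concat`. A minimal sketch on a toy frame (hypothetical rows, not the real M49 spreadsheet):

```python
import pandas as pd

df = pd.DataFrame({'Region Name': ['Africa'], 'Region Code': ['002'],
                   'ISO-alpha3 Code': ['DZA']})

# one rename call instead of one call per column
df = df.rename(columns={'Region Name': 'UNRegion',
                        'Region Code': 'UNRegionCode',
                        'ISO-alpha3 Code': 'iso3'})

# same m49 prefixing as above
df['UNRegionCode'] = 'm49:' + df['UNRegionCode']

# pd.concat replaces the deprecated df.append(additions)
additions = pd.DataFrame({'UNRegion': ['Oceania'], 'UNRegionCode': ['m49:009'],
                          'iso3': ['FJI']})
df = pd.concat([df, additions], ignore_index=True)
print(df)
```

# A single `rename` with a dict is both shorter and atomic: either every column is renamed or none.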
notebooks/dataprep/00k-UNRegion.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 2 # language: python # name: python2 # --- # #Step1. linux sh # # Check the current directory using Linux commands. # In particular, copy the practice data file imgs_subset.zip into the current directory. # !pwd # !cp /home/ubuntu/dlday/imgs_subset.zip ./. # !ls # #Step2. unzip # # The imgs_subset.zip file contains 500 right whale images. # You can extract it with the unzip command. # # !unzip imgs_subset.zip # !du -h imgs_subset # !ls imgs_subset > filelist.txt && head filelist.txt # #Step3. view img files # # You can open a file with IPython's Image command. from IPython.display import Image Image("./imgs_subset/w_0.jpg") # #Step4. separate train/test dataset # # The Kaggle dataset provides the list of training data through the train.csv file. # Copy it into the practice folder to use it. # # Files not listed in train.csv make up the test dataset. # The train.csv file provides pairs of whale image file names and whale IDs. # # !cp /home/ubuntu/dlday/*.csv ./. # !head train.csv # The Python code below # # moves only the files from the extracted imgs_subset folder that are listed in train.csv # # into imgs_subset_train, which splits the data into train and test datasets. # # The code below specifies the csv file name and the folder names. # + import os import pandas as pd # filename usefile1 = 'train_sort500.csv' usefile2 = 'train.csv' usefile3 = 'train_sort.csv' train = pd.read_csv(usefile1, index_col='Image') os.makedirs('./imgs_subset_train/') # # move images from the original folder to the train folder for image in train.index: old = './imgs_subset/{}'.format(image) new = './imgs_subset_train/{}'.format(image) try: os.rename(old, new) except: print('{} '.format(image)) # - # # Using the du command, you can confirm that imgs_subset now holds the test dataset and imgs_subset_train holds the training dataset. # This exercise works with the training dataset. # # !du -h imgs_subset && du -h imgs_subset_train # !ls imgs_subset_train > trainfiles.txt && head trainfiles.txt # # #Step5 resize # # The images provided by Kaggle are about 3000x2000 pixels.
# # Since the images are too large to use for deep learning as-is, # the Python code below shrinks them to 15% of their original size. # The smaller files make the exercise easier to run. # # + #resize import os import sys from PIL import Image def resize(folder, fileName, factor): filePath = os.path.join(folder, fileName) im = Image.open(filePath) w, h = im.size newIm = im.resize((int(w*factor), int(h*factor))) # this overwrites the original in place; you could instead save to another folder newIm.save(filePath) def bulkResize(imageFolder, factor): imgExts = ['png', 'bmp', 'jpg'] for path, dirs, files in os.walk(imageFolder): for fileName in files: ext = fileName[-3:].lower() if ext not in imgExts: continue resize(path, fileName, factor) imageFolder = 'imgs_subset_train' resizeFactor = float(15)/100 bulkResize(imageFolder,resizeFactor) # - # Checking the size of the train images again confirms that they are smaller. # !du -h imgs_subset && du -h imgs_subset_train # # Step6 Check the resized image # # Unlike the image shown earlier, you can see that this image has been shrunk. from IPython.display import Image Image("./imgs_subset_train/w_100.jpg") # #Step7. create label subfolders # # This is the most important step in preparing a DIGITS dataset. # # DIGITS requires a subfolder for each label under the dataset folder. # # # The Python code below performs the following steps: # # 1. Read the train.csv file. # 2. Set the destination folder. # 3. Create a subfolder for each whale ID and move the files into it. # + import os import pandas as pd # filename usefile1 = 'train_sort500.csv' usefile2 = 'train.csv' usefile3 = 'train_sort.csv' train = pd.read_csv(usefile1, index_col='Image') #foldername of whale ID whaleIDs = list(train['whaleID'].unique()) #make subdirectory with whale ID for w in whaleIDs: os.makedirs('./imgs_subset_train_subfolder/'+w) # # move images from the original folder to the label subfolders for image in train.index: folder = train.loc[image, 'whaleID'] old = './imgs_subset_train/{}'.format(image) new = './imgs_subset_train_subfolder/{}/{}'.format(folder, image) try: os.rename(old, new) except: print('{} - {}'.format(image,folder)) # - # Checking the folders, # # you can see that a folder was created for each whale ID inside the data folder. # You can also pick one of the folders and list the files it contains.
# Because only the train portion of the 500-image imgs_subset data was used for this exercise, # each whale ID folder contains only a few images. # !du -h ./imgs_subset_train_subfolder # Finally, check the location of the dataset. # # # !pwd & ls # #Step8 create the DIGITS DB # You can now register the dataset in DIGITS. # The data is located at: # # /home/ubuntu/notebook/class2/imgs_subset_train_subfolder
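# The train/test split in Step 4 boils down to moving every file whose name appears in train.csv; a toy, self-contained sketch of that move (temporary directories and made-up file names, not the real whale data):

```python
import os
import tempfile

root = tempfile.mkdtemp()
src = os.path.join(root, 'imgs_subset')        # will end up holding the test set
dst = os.path.join(root, 'imgs_subset_train')  # will hold the train set
os.makedirs(src)
os.makedirs(dst)

for name in ['w_0.jpg', 'w_1.jpg', 'w_2.jpg']:
    open(os.path.join(src, name), 'w').close()

# names that would come from train.csv's Image column
train_names = ['w_0.jpg', 'w_2.jpg']
for name in train_names:
    os.rename(os.path.join(src, name), os.path.join(dst, name))

print(sorted(os.listdir(src)))  # the files left behind form the implicit test set
print(sorted(os.listdir(dst)))  # the files listed in train.csv
```

# os.rename moves rather than copies, which is why the original folder shrinks to exactly the test set.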
class2/.ipynb_checkpoints/CLASS2-whale_subset-checkpoint.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .r # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: R # language: R # name: ir # --- # # Visualizing Data for Classification # # In a previous lab you explored the automotive price dataset to understand the relationships for a regression problem. In this lab you will explore the German bank credit dataset to understand the relationships for a **classification** problem. The difference being, that in classification problems the label is a categorical variable. # # Visualization for classification problems shares much in common with visualization for regression problems. Colinear features should be identified so they can be eliminated or otherwise dealt with. However, for classification problems you are looking for features that help **separate the label categories**. Separation is achieved when there are distinctive feature values for each label category. Good separation results in low classification error rate. # ## Load and prepare the data set # # As a first step you must load the dataset. # # Execute the code in the cell below to load the packages required for the rest of this notebook. # # > **Note:** If you are running in Azure Notebooks, make sure that you run the code in the `setup.ipynb` notebook at the start of you session to ensure your environment is correctly configured. ## Import packages library(ggplot2) library(repr) options(repr.plot.width=4, repr.plot.height=4) # Set the initial plot area dimensions # The code in the cell below loads the dataset and assigns human-readable names to the columns. The dimension and head of the data frame are then printed. 
Execute this code: credit = read.csv('German_Credit.csv', header = FALSE) names(credit) = c('Customer_ID','checking_account_status', 'loan_duration_mo', 'credit_history', 'purpose', 'loan_amount', 'savings_account_balance', 'time_employed_yrs', 'payment_pcnt_income','gender_status', 'other_signators', 'time_in_residence', 'property', 'age_yrs', 'other_credit_outstanding', 'home_ownership', 'number_loans', 'job_category', 'dependents', 'telephone', 'foreign_worker', 'bad_credit') print(dim(credit)) head(credit) # The first column is Customer_ID, which is an identifier. And then there are 20 features plus a label column. These features represent information a bank might have on its customers. There are both numeric and categorical features. However, the categorical features are coded in a way that makes them hard to understand. Further, the label is coded as $\{ 1,2 \}$ which is a bit awkward. # # The code in the cell below uses a list of lists to recode the categorical features with human-readable text. The processing is performed with these steps: # 1. Lists for each of the human-readable codes are created for each column. The names of these lists are the codes in the raw data. # 2. A list of lists is created with the column names used as the list names. # 3. A list of categorical columns is created. # 4. A for loop iterates over the column names. `sapply` is used to iterate over the codes in each column. The codes are used to generate names for the list lookup.
# # Execute this code and examine the result: # + checking_account_status = c('< 0 DM', '0 - 200 DM', '> 200 DM or salary assignment', 'none') names(checking_account_status) = c('A11', 'A12', 'A13', 'A14') credit_history = c('no credit - paid', 'all loans at bank paid', 'current loans paid', 'past payment delays', 'critical account - other non-bank loans') names(credit_history) = c('A30', 'A31', 'A32', 'A33', 'A34') purpose = c( 'car (new)', 'car (used)', 'furniture/equipment', 'radio/television', 'domestic appliances', 'repairs', 'education', 'vacation', 'retraining', 'business', 'other') names(purpose) = c('A40', 'A41', 'A42', 'A43', 'A44', 'A45', 'A46', 'A47', 'A48', 'A49', 'A410') savings_account_balance = c('< 100 DM', '100 - 500 DM', '500 - 1000 DM', '>= 1000 DM', 'unknown/none') names(savings_account_balance) = c('A61', 'A62', 'A63', 'A64', 'A65') time_employed_yrs = c('unemployed', '< 1 year', '1 - 4 years', '4 - 7 years', '>= 7 years') names(time_employed_yrs) = c('A71', 'A72', 'A73', 'A74', 'A75') gender_status = c('male-divorced/separated', 'female-divorced/separated/married', 'male-single', 'male-married/widowed', 'female-single') names(gender_status) = c('A91', 'A92', 'A93', 'A94', 'A95') other_signators = c('none', 'co-applicant', 'guarantor') names(other_signators) = c('A101', 'A102', 'A103') property = c('real estate', 'building society savings/life insurance', 'car or other', 'unknown-none') names(property) = c('A121', 'A122', 'A123', 'A124') other_credit_outstanding = c('bank', 'stores', 'none') names(other_credit_outstanding) = c('A141', 'A142', 'A143') home_ownership = c('rent', 'own', 'for free') names(home_ownership) = c('A151', 'A152', 'A153') job_category = c('unemployed-unskilled-non-resident', 'unskilled-resident', 'skilled', 'highly skilled') names(job_category) =c('A171', 'A172', 'A173', 'A174') telephone = c('none', 'yes') names(telephone) = c('A191', 'A192') foreign_worker = c('yes', 'no') names(foreign_worker) = c('A201', 'A202') 
bad_credit = c(1, 0) names(bad_credit) = c(2, 1) codes = c('checking_account_status' = checking_account_status, 'credit_history' = credit_history, 'purpose' = purpose, 'savings_account_balance' = savings_account_balance, 'time_employed_yrs' = time_employed_yrs, 'gender_status' = gender_status, 'other_signators' = other_signators, 'property' = property, 'other_credit_outstanding' = other_credit_outstanding, 'home_ownership' = home_ownership, 'job_category' = job_category, 'telephone' = telephone, 'foreign_worker' = foreign_worker, 'bad_credit' = bad_credit) cat_cols = c('checking_account_status', 'credit_history', 'purpose', 'savings_account_balance', 'time_employed_yrs','gender_status', 'other_signators', 'property', 'other_credit_outstanding', 'home_ownership', 'job_category', 'telephone', 'foreign_worker', 'bad_credit') for(col in cat_cols){ credit[,col] = sapply(credit[,col], function(code){codes[[paste(col, '.', code, sep = '')]]}) } #credit$bad_credit = as.numeric(credit$bad_credit) head(credit) # - # The categorical features now have meaningful coding. Additionally, the label is now coded as a binary variable. # # ## Examine classes and class imbalance # # In this case, the label has significant **class imbalance**. Class imbalance means that there are unequal numbers of cases for the categories of the label. Class imbalance can seriously bias the training of classifier algorithms. In many cases, the imbalance leads to a higher error rate for the minority class. Most real-world classification problems have class imbalance, sometimes severe class imbalance, so it is important to test for this before training any model. # # Fortunately, it is easy to test for class imbalance using a frequency table. Execute the code in the cell below to display a frequency table of the classes: table(credit$bad_credit) # Notice that only 30% of the cases have bad credit. This is not surprising, since a bank would typically retain customers with good credit.
While this is not a case of severe imbalance, it is enough to bias the training of any model. # ## Visualize class separation by numeric features # # As stated previously, the primary goal of visualization for classification problems is to understand which features are useful for class separation. In this section, you will start by visualizing the separation quality of numeric features. # # Execute the code, examine the results, and answer **Question 1** on the course page. # + plot_box = function(df, cols, col_x = 'bad_credit'){ options(repr.plot.width=4, repr.plot.height=3.5) # Set the initial plot area dimensions for(col in cols){ p = ggplot(df, aes_string(col_x, col)) + geom_boxplot() + ggtitle(paste('Box plot of', col, '\n vs.', col_x)) print(p) } } num_cols = c('loan_duration_mo', 'loan_amount', 'payment_pcnt_income', 'age_yrs', 'number_loans', 'dependents') plot_box(credit, num_cols) # - # How can you interpret these results? Box plots are useful, since by their very construction you are forced to focus on the overlap (or not) of the quartiles of the distribution. In this case, the question is: are there sufficient differences in the quartiles for the feature to be useful in separating the label classes? There are two cases displayed above: # # 1. For loan_duration_mo, loan_amount, and payment as a percent of income (payment_pcnt_income), there is useful separation between good and bad credit customers. As one might expect, bad credit customers have longer loan duration on larger loans and with payments being a greater percentage of their income. # 2. On the other hand, age in years, number_loans and dependents do not seem to matter. In the latter two cases, this situation seems to result from the median value being zero. There are just not enough non-zero cases to make these useful features. # # As an alternative to box plots, you can use violin plots to examine the separation of label cases by numeric features.
Execute the code in the cell below and examine the results: # + plot_violin = function(df, cols, col_x = 'bad_credit'){ options(repr.plot.width=4, repr.plot.height=3.5) # Set the initial plot area dimensions for(col in cols){ p = ggplot(df, aes_string(col_x, col)) + geom_violin() + ggtitle(paste('Violin plot of', col, '\n vs.', col_x)) print(p) } } plot_violin(credit, num_cols) # - # The interpretation of these plots is largely the same as the box plots. However, there is one detail worth noting. The differences between loan_duration_mo and loan_amount for good and bad credit customers are only for the more extreme values. It may be that these features are less useful than the box plots indicate. # ## Visualizing class separation by categorical features # # Now you will turn to the problem of visualizing the ability of categorical features to separate classes of the label. Ideally, a categorical feature will have very different counts of the categories for each of the label values. A good way to visualize these relationships is with bar plots. # # The code in the cell below creates side-by-side plots of the categorical variables for each of the label categories. The `grid.arrange` function from the gridExtra package is used to arrange the two plots side by side. # # Execute this code, examine the results, and answer **Question 2** on the course page.
# + library(gridExtra) plot_bars = function(df, catcols){ options(repr.plot.width=6, repr.plot.height=5) # Set the initial plot area dimensions temp0 = df[df$bad_credit == 0,] temp1 = df[df$bad_credit == 1,] for(col in catcols){ p1 = ggplot(temp0, aes_string(col)) + geom_bar() + ggtitle(paste('Bar plot of \n', col, '\n for good credit')) + theme(axis.text.x = element_text(angle = 90, hjust = 1)) p2 = ggplot(temp1, aes_string(col)) + geom_bar() + ggtitle(paste('Bar plot of \n', col, '\n for bad credit')) + theme(axis.text.x = element_text(angle = 90, hjust = 1)) grid.arrange(p1,p2, nrow = 1) } } plot_bars(credit, cat_cols) # - # There is a lot of information in these plots. The key to interpretation of these plots is comparing the proportion of the categories for each of the label values. If these proportions are distinctly different for each label category, the feature is likely to be useful in separating the label. # # There are several cases evident in these plots: # # 1. Some features such as checking_account_status and credit_history have significantly different distribution of categories between the label categories. # 2. Other features such as gender_status and telephone show small differences, but these differences are unlikely to be significant. # 3. Other features like other_signators, foreign_worker, home_ownership, and job_category have a dominant category with very few cases of other categories. These features will likely have very little power to separate the cases. # # Notice that only a few of these categorical features will be useful in separating the cases. # ## Summary # # In this lab you have performed exploration and visualization to understand the relationships in a classification dataset. Specifically, you have: # 1. Looked for imbalance in the label cases using a frequency table. # 2. Used box, violin, and bar plots to find the numeric and categorical features that best separate the label cases.
docs/post/module2/VisualizingDataForClassification.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import numpy as np import pylab as plt from tqdm import tqdm from matplotlib import cm from matplotlib.ticker import LinearLocator # + pycharm={"name": "#%%\n"} # https://hal.archives-ouvertes.fr/hal-00921656v1/document # https://hal.archives-ouvertes.fr/hal-00921656v2/document # https://www.researchgate.net/publication/281441538_An_introduction_to_the_Split_Step_Fourier_Method_using_MATLAB # Unit definitions ps = 1e-12 km = 1e3 µm = 1e-6 nm = 1e-9 # Constants c = 3e8 # Speed of light # Laser Properties P0 = 0.5 # Lasing power λ = 1550 * nm # wavelength f0 = 1e10 # modulation frequency T0 = 1/f0 # Fiber properties (SMF-28) L = 5e3 # Delay line neff = 1.5 D = - 20 * ps / nm / km # Dispersion factor α = 0#0.046 / km # Fiber losses β2 = -λ**2*D/(2*np.pi*c) # Dispersion constant #n2 = 2.4e-20 # non-linear index (SPM) #Aeff = 70 * (µm)**2 # Effective core area γ = 1.1 / km #0.78 / km #2*np.pi*n2/(λ*Aeff) # Non-linear factor # Parameters Nt = int(20000) # Time sampling size Nz = int(10000) # Space sampling size n = 1 # Time window factor T = np.linspace(-n*T0,+n*T0, Nt) # Time vector (local) dt = T[1]-T[0] # Timestep F = (np.fft.fftfreq(Nt,d=dt)) # Frequency vector δ = 0*1.5*np.pi θ = np.pi/2 h = L/Nz # Space-step A = np.zeros((Nz, Nt), dtype=complex) # Local-Time/Space Field matrix S = np.zeros((Nz, Nt), dtype=complex) # Frequency/Space Field Matrix A0 = np.sqrt(P0)*np.cos(np.pi*T/T0)**2#+ np.sqrt(P0/10000)*np.random.randn(Nt) # Initial Pulse A0 = A0*np.exp(1j*δ*np.abs(A0)*np.cos(2*np.pi*f0*T+θ)) # Initialization A[0,:] = A0 S[0,:] = np.fft.fft(A0) # + pycharm={"name": "#%%\n"} D = -β2*0.5*1j*(2*np.pi*F)**2 - 0.5*α # Calculate Dispersion operator # + pycharm={"name": "#%%\n"} def next_step(A): # Calculates the value of the pulse field after a propagation of length h N =
1j*γ*np.abs(A)**2 # Nonlinear operator Ai = np.exp(0.5*h*D)*np.fft.fft(A) # disp on half step Ai = np.exp(h*N)*np.fft.ifft(Ai) # full step NL Ai = np.exp(0.5*h*D)*np.fft.fft(Ai) # disp on half step return np.fft.ifft(Ai) # + pycharm={"name": "#%%\n"} for i in tqdm(range(1,Nz)): # Space iteration (Delay line) A[i,:] = next_step(A[i-1,:]) S[i,:] = np.fft.fftshift(np.fft.fft(A[i,:])) # + pycharm={"name": "#%%\n"} plt.figure(figsize=(10,5)) plt.subplot(1,2,1) plt.imshow(np.abs(A)**2, aspect=1) plt.xlabel(r"Local Time (ps)") plt.ylabel(r"Propagation Distance") plt.subplot(1,2,2) plt.imshow(np.log10(np.abs(S)**2)) plt.gca().set_aspect("equal") plt.xlabel(r"Frequency (GHz)") plt.tight_layout() plt.show() # + pycharm={"name": "#%%\n"} lmax = np.argmax(np.abs(A[:,0])) plt.plot(np.abs(A[-1,:])) plt.plot(np.abs(A[0,:])) plt.plot() # + pycharm={"name": "#%%\n"} print("Energy in : {:.2e} / Energy out : {:.2e}".format(np.sum(np.abs(A[0,:])**2)*dt,np.sum(np.abs(A[lmax,:])**2)*dt)) # + pycharm={"name": "#%%\n"} fig, ax = plt.subplots(subplot_kw={"projection": "3d"}) l = np.linspace(0,L, Nz) t,ll = np.meshgrid(T, l) surf = ax.plot_surface(t/ps, ll/km,np.abs(A)**2, antialiased=True, cmap='viridis') ax.set_xlabel(r"Local time (ps)") ax.set_ylabel(r"Distance (km)") ax.set_zlabel(r"Power (W)") plt.show()
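A quick way to sanity-check a split-step propagator like `next_step` is to switch the nonlinearity off: with γ = 0 and α = 0 the per-step dispersion exponentials compose exactly, so marching many small steps must reproduce a single application of the full-span operator, and the pulse energy must be conserved (the operator is then a pure phase). A minimal sketch with toy grid values (deliberately smaller than the Nt, Nz used above, purely for speed):

```python
import numpy as np

# Toy grid (assumed values, chosen small so the check runs quickly)
Nt, Nz = 1024, 200
span = 5e3                                   # 5 km, as in the delay line above
T = np.linspace(-20e-12, 20e-12, Nt)         # 40 ps time window
dt = T[1] - T[0]
F = np.fft.fftfreq(Nt, d=dt)
beta2 = -2.17e-26                            # s^2/m, roughly SMF-28 at 1550 nm
h = span / Nz

D = -0.5j * beta2 * (2 * np.pi * F) ** 2     # lossless dispersion operator

A0 = np.exp(-(T / 2e-12) ** 2)               # 2 ps Gaussian input pulse

# March the field with dispersion-only split steps...
A = A0.copy().astype(complex)
for _ in range(Nz):
    A = np.fft.ifft(np.exp(h * D) * np.fft.fft(A))

# ...and compare against applying the exact operator once over the whole span.
A_exact = np.fft.ifft(np.exp(Nz * h * D) * np.fft.fft(A0))

err = np.max(np.abs(A - A_exact))
energy_in = np.sum(np.abs(A0) ** 2) * dt
energy_out = np.sum(np.abs(A) ** 2) * dt
print(err, energy_in, energy_out)
```

Both checks should pass to near machine precision; if they do not, the step-size bookkeeping or the sign convention in the dispersion operator is suspect.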
NonlinearOptics/NLSE_Propagation.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Mission to Mars # #### In this assignment, you will build a web application that scrapes various websites for data related to the Mission to Mars and displays the information in a single HTML page. # ## File Information and Author # #### File Name: mission_to_mars.ipynb # #### Date Due: Saturday June 13, 2020 # #### Author: <NAME> # Setup and Dependencies import pandas as pd import os import requests import pymongo from bs4 import BeautifulSoup as bs from splinter import Browser import time # Chrome Driver. (Local file for GitHub reference - easy for end-users) executable_path = {'executable_path': 'chromedriver.exe'} browser = Browser('chrome', **executable_path, headless=False) time.sleep(10) # # Web Scraping (Step 1) # #### Complete your initial scraping using Jupyter Notebook, BeautifulSoup, Pandas, and Requests/Splinter. # ## NASA Mars News # #### Scrape the NASA Mars News Site and collect the latest News Title and Paragraph Text. Assign the text to variables that you can reference later. # Visit url for NASA Mars News site and scrape -- Latest News nasa_news_url = "https://mars.nasa.gov/news/" browser.visit(nasa_news_url) time.sleep(5) nasa_news_html = browser.html # Parse HTML with Beautiful Soup nasa_news_soup = bs(nasa_news_html, "html.parser") # Extract article title and paragraph text article = nasa_news_soup.find("div", class_='list_text') news_title = article.find("div", class_="content_title").text news_p = article.find("div", class_="article_teaser_body").text print(news_title) print(news_p) # ## JPL Mars Space Images - Featured Image # #### Visit the url for JPL Featured Space Image here. Use splinter to navigate the site and find the image url for the current Featured Mars Image.
Assign the url string to a variable called featured_image_url. Make sure to find the image url to the full size .jpg image. Make sure to save a complete url string for this image. # Visit url for JPL Mars Space website and scrape the featured image image_url = "https://www.jpl.nasa.gov/spaceimages/?search=&category=Mars" browser.visit(image_url) time.sleep(5) # Go to 'FULL IMAGE' browser.click_link_by_partial_text('FULL IMAGE') # Go to 'more info' browser.click_link_by_partial_text('more info') # Parse HTML with BeautifulSoup jpl_image_html = browser.html jpl_image_soup = bs(jpl_image_html, 'html.parser') # Scrape the URL featured_image_url = jpl_image_soup.find('figure', class_='lede').a['href'] featured_jpl_image_url = f'https://www.jpl.nasa.gov{featured_image_url}' print(featured_jpl_image_url) # ## Mars Weather # #### Visit the Mars Weather twitter account here and scrape the latest Mars weather tweet from the page. Save the tweet text for the weather report as a variable called mars_weather. 
# Visit Twitter url for the latest Mars Weather mars_weather_tweet_url = "https://twitter.com/marswxreport?lang=en" browser.visit(mars_weather_tweet_url) time.sleep(3) mars_weather_html = browser.html time.sleep(3) # Parse HTML with BeautifulSoup mars_weather_soup = bs(mars_weather_html, 'html.parser') # + # Extract latest tweet - method 1 tweet_container = mars_weather_soup.find_all('div', class_="js-tweet-text-container") # Loop through latest tweets and find the tweet that has weather information for tweet in tweet_container: mars_weather = tweet.find('p').text # Keep the tweet only if it mentions both 'sol' and 'pressure' if 'sol' in mars_weather and 'pressure' in mars_weather: print(mars_weather) print(tweet) break else: print(mars_weather) pass # + # Extract latest tweet - method 2 #tweets = mars_weather_soup.find('ol', class_='stream-items') #mars_weather = tweets.find('p', class_="tweet-text").text #print(mars_weather) # - # ## Mars Facts # #### Visit the Mars Facts webpage here and use Pandas to scrape the table containing facts about the planet including Diameter, Mass, etc. Use Pandas to convert the data to an HTML table string. # Visit Mars Facts webpage for interesting facts about Mars facts_url = "https://space-facts.com/mars/" browser.visit(facts_url) time.sleep(3) html = browser.html # Use Pandas to scrape the table containing facts about Mars table = pd.read_html(facts_url) mars_facts = table[0] # Rename columns mars_facts.columns = ['Description','Value'] # Set the index to Description mars_facts.set_index('Description', inplace=True) mars_facts # Use Pandas to convert the data to an HTML table string mars_facts.to_html('table.html') # ## Mars Hemispheres # #### Visit the USGS Astrogeology site here to obtain high resolution images for each of Mars' hemispheres. You will need to click each of the links to the hemispheres in order to find the image url to the full resolution image. Save both the image url string for the full resolution hemisphere image, and the Hemisphere title containing the hemisphere name.
Use a Python dictionary to store the data using the keys img_url and title. Append the dictionary with the image url string and the hemisphere title to a list. This list will contain one dictionary for each hemisphere. # Visit USGS webpage for Mars hemisphere images hemispheres_url = "https://astrogeology.usgs.gov/search/results?q=hemisphere+enhanced&k1=target&v1=Mars" browser.visit(hemispheres_url) time.sleep(5) html = browser.html # Parse HTML with BeautifulSoup soup = bs(html, "html.parser") # Create a list to store titles & links to images hemisphere_image_urls = [] # Retrieve all elements that contain image information results = soup.find("div", class_="result-list") hemispheres = results.find_all("div", class_="item") # + # Iterate through each image for hemisphere in hemispheres: title = hemisphere.find("h3").text title = title.replace("Enhanced", "") end_link = hemisphere.find("a")["href"] image_link = "https://astrogeology.usgs.gov/" + end_link browser.visit(image_link) html = browser.html soup = bs(html, "html.parser") downloads = soup.find("div", class_="downloads") image_url = downloads.find("a")["href"] hemisphere_image_urls.append({"title": title, "img_url": image_url}) # Print image title and url print(hemisphere_image_urls) # + # scrape images of Mars' hemispheres from the USGS site mars_hemisphere_url = 'https://astrogeology.usgs.gov/search/results?q=hemisphere+enhanced&k1=target&v1=Mars' hemi_dicts = [] for i in range(1,9,2): hemi_dict = {} browser.visit(mars_hemisphere_url) time.sleep(1) hemispheres_html = browser.html hemispheres_soup = bs(hemispheres_html, 'html.parser') hemi_name_links = hemispheres_soup.find_all('a', class_='product-item') hemi_name = hemi_name_links[i].text.replace('Enhanced', '') # remove the word; str.strip would drop a character set detail_links = browser.find_by_css('a.product-item') detail_links[i].click() time.sleep(1) browser.find_link_by_text('Sample').first.click() time.sleep(1) browser.windows.current = browser.windows[-1] hemi_img_html = browser.html
browser.windows.current = browser.windows[0] browser.windows[-1].close() hemi_img_soup = bs(hemi_img_html, 'html.parser') hemi_img_path = hemi_img_soup.find('img')['src'] print(hemi_name) hemi_dict['title'] = hemi_name.strip() print(hemi_img_path) hemi_dict['img_url'] = hemi_img_path hemi_dicts.append(hemi_dict) # + ## EOF ##
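One subtle point in the hemisphere-title cleanup above: `str.strip('Enhanced')` does not remove the word "Enhanced" — it strips any leading and trailing characters drawn from the set {'E', 'n', 'h', 'a', 'c', 'e', 'd'}, which happens to work here only because a space precedes the word. A safer cleanup sketch, using only the standard library (the sample titles are hypothetical):

```python
def clean_hemisphere_title(raw):
    """Drop a trailing 'Enhanced' word plus surrounding whitespace from a title."""
    title = raw.strip()
    suffix = "Enhanced"
    if title.endswith(suffix):
        # Slice instead of str.removesuffix so this also runs on Python < 3.9
        title = title[:-len(suffix)]
    return title.strip()

# str.strip treats its argument as a character *set*, which can eat real text:
print("dance".strip("Enhanced"))  # every character is in the set -> ''
print(clean_hemisphere_title("Cerberus Hemisphere Enhanced"))
```

Using `endswith` plus a slice only removes the word when it is a genuine suffix, so titles that never contained "Enhanced" pass through unchanged.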
Mission_to_Mars/.ipynb_checkpoints/mission_to_mars_version4-checkpoint.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + # %matplotlib inline import sys sys.path.append("../") from haxml.utils import ( get_matches_metadata, get_stadiums, get_opposing_goalpost, load_match, is_target_stadium, is_scored_goal, total_scored_goals, total_kicks, goal_fraction, stadium_distance, angle_from_goal, train_test_split_matches_even_count ) from haxml.viz import ( plot_positions ) import math import matplotlib.pyplot as plt import pandas as pd from tqdm import tqdm stadiums = get_stadiums("../data/stadiums.json") metadata = get_matches_metadata("../data/matches_metadata.csv") # - train, test = train_test_split_matches_even_count(metadata) # add to utils def get_positions_at_time(positions, t): """ Return a list of positions (dicts) closest to, but before time t. """ # Assume positions list is already sorted. # frame is a list of positions (dicts) that have the same timestamp. 
frame = [] time = 0.0 for pos in positions: if pos["time"] > t: break if pos["time"] == time: frame.append(pos) else: frame = [] time = pos["time"] return frame def defender_feature(match,kick,dist): """ For a given kick, find the closest defender and the number of defenders within `dist` units """ positions = get_positions_at_time(match["positions"], kick["time"]) ret = [0,0] closest_defender = float('inf') defenders_pressuring = 0 for person in positions: if person['team'] != kick['fromTeam'] and person['type'] == "player": # compare values, not identity defender_dist = ((kick['fromX'] - person['x'])**2 + (kick['fromY'] - person['y'])**2)**(1/2) # distance formula if defender_dist < closest_defender: closest_defender = defender_dist ret[0] = closest_defender if defender_dist <= dist: defenders_pressuring = defenders_pressuring + 1 ret[1] = defenders_pressuring return ret # + def is_in_range(person,goal_low,goal_high,fromX,goal_x, kick_team): is_x = False is_y = False if kick_team == "red": if(person['x']>=fromX and person['x']<=goal_x): is_x = True else: if(person['x']>=goal_x and person['x']<=fromX): is_x = True if(person['y']>=goal_low and person['y']<=goal_high): is_y = True return is_x and is_y def defender_box(match,stadium,kick): #is_there_players = #height,width = #area = count = 0 gp = get_opposing_goalpost(stadium,kick["fromTeam"]) gp_y_high = max([p["y"] for p in gp["posts"]]) gp_y_low = min([p["y"] for p in gp["posts"]]) goal_x = gp["posts"][0]["x"] positions = get_positions_at_time(match["positions"], kick["time"]) kicker = None for person in positions: if person["playerId"] == kick["fromId"]: kicker = person break if kicker is None: return 0 #print("positions time = ", positions[0]["time"]) for person in positions: if person["type"] == "ball" or person["playerId"] == kicker["playerId"]: continue if is_in_range(person,gp_y_low,gp_y_high,kicker['x'],goal_x, kicker["team"]): count = count + 1 return count # + def speed_player(match,kick,player_name, offset): ''' Speed of the player
Args: match: Which match it is kick: Which kick we want to measure player_name: What player do we want to measure the speed for Returns: Int that represents the speed of the player ''' #Getting time range to be able to measure distance #start = get_positions_at_time(match["positions"], kick["time"] - offset) #end = get_positions_at_time(match["positions"], kick["time"]) #getting positions positions = get_positions_in_range(match["positions"], kick["time"] - offset,kick["time"]) #print(positions) player_pos = [] for i in positions: if i['name'] == player_name: player_pos.append(i) #print(player_pos) #Getting the time if len(player_pos) > 1: last = len(player_pos)-1#getting last index) time = player_pos[last]['time'] - player_pos[0]['time'] #Getting the distance distance = stadium_distance(player_pos[0]['x'],player_pos[0]['y'],player_pos[last]['x'],player_pos[last]['y']) #dist_formula(player_pos[0]['x'],player_pos[0]['y'],player_pos[last]['x'],player_pos[last]['y']) #print("dist:" + str(distance)) #print("time:" + str(time)) #Returns speed #NEED TO CHANGE TIME INTO SECONDS SO THAT IT IS CONSTANT AND NOT DIVIDING BY DIFF VALS return distance/time else: return 0 # - def generate_rows_demo(match, stadium): """ Generates target and features for each kick in the match. Produces two features for demo classifiers: goal_distance: Distance from where ball was kicked to goal midpoint. goal_angle: Angle (in radians) between straight shot from where ball was kicked to goal midpoint. Args: match: Inflated match data (dict). stadium: Stadium data (dict). Returns: Generator of dicts with values for each kick in the given match. Includes prediction target "ag" (actual goals) which is 1 for a scored goal (goal or error) and 0 otherwise, "index" which is the index of the kick in the match kick list, and all the other features needed for prediction and explanation. 
""" for i, kick in enumerate(match["kicks"]): gp = get_opposing_goalpost(stadium, kick["fromTeam"]) x = kick["fromX"] y = kick["fromY"] gx = gp["mid"]["x"] gy = gp["mid"]["y"] dist = stadium_distance(x, y, gx, gy) angle = angle_from_goal(x, y, gx, gy) closest_defender,defender_within = defender_feature(match,kick,100) defenders_box = defender_box(match,stadium,kick) row = { "ag": 1 if is_scored_goal(kick) else 0, "index": i, "time": kick["time"], "x": x, "y": y, "goal_x": gx, "goal_y": gy, "goal_distance": dist, "goal_angle": angle, "team": kick["fromTeam"], "stadium": match["stadium"], "closest_defender": closest_defender, "defender_within": defender_within, "defenders_box": defenders_box } yield row def make_df(metadata, callback, progress=False): """ Transforms match metadata into a DataFrame of records for each kick, including target label and features. Args: metadata: Match metadata (list of dicts). callback: Method to run on each match to extract kicks. progress: Whether or not to show progress bar (boolean). Returns: DataFrame where each row is a kick record. """ rows = [] bar = tqdm(metadata) if progress else metadata for meta in bar: key = meta["match_id"] infile = "../data/packed_matches/{}.json".format(key) try: s = stadiums[meta["stadium"]] row_gen = load_match(infile, lambda m: callback(m, s)) for row in row_gen: row["match"] = key rows.append(row) except FileNotFoundError: pass return pd.DataFrame(rows) d_train = make_df(train, generate_rows_demo, progress=True) d_test = make_df(test, generate_rows_demo, progress=True) from sklearn.linear_model import LogisticRegression from sklearn.metrics import ( accuracy_score, precision_score, recall_score, roc_auc_score ) def summarize_model(yt, yp): """ Helper method to summarize some prediction metrics. Args: yt: Array of true scored goal values. yp: Array of predicted scored goal values. 
""" print("Accuracy = {:.3f}".format(accuracy_score(yt, yp))) print("Precision = {:.3f}".format(precision_score(yt, yp))) print("Recall = {:.3f}".format(recall_score(yt, yp))) print("ROC AUC = {:.3f}".format(roc_auc_score(yt, yp))) def model_features(features,classifier,kwargs): X_train = d_train[features] y_train = d_train["ag"] X_test = d_test[features] y_test = d_test["ag"] clf = classifier(**kwargs) clf.fit(X_train, y_train) #print("Train Scores:") #summarize_model(y_train, clf.predict(X_train)) #print() print("Test Scores:") summarize_model(y_test, clf.predict(X_test)) return clf #best model so far from sklearn.ensemble import GradientBoostingClassifier features = ["goal_distance","goal_angle","defenders_box"] clf = model_features(features, GradientBoostingClassifier, {"n_estimators":100, "learning_rate":1.0,"max_depth":1, "random_state":0}) clf p_test = clf.predict_proba(d_test[features])[:,1] df_results = pd.DataFrame(d_test) df_results["xg"] = p_test df_results.groupby(["match", "team"])[["ag", "xg"]].sum().head(10) # + import joblib joblib.dump(clf, "../models/gradientBoost.pkl") # - def predict_xg_demo(match, stadium, generate_rows, clf): """ Augments match data with XG predictions. Args: match: Inflated match data (dict). stadium: Stadium data (dict). generate_rows: function(match, stadium) to generate kick records. clf: Classifier following scikit-learn interface. Returns: Inflated match data with "xg" field added to each kick (dict). 
""" features = ["goal_distance", "goal_angle"] d_kicks = pd.DataFrame(generate_rows(match, stadium)) d_kicks["xg"] = clf.predict_proba(d_kicks[features])[:,1] for kick in d_kicks.to_dict(orient="records"): match["kicks"][kick["index"]]["xg"] = kick["xg"] return match test_meta = test[45] s = stadiums[test_meta["stadium"]] demo_clf = joblib.load("../models/demo_logistic_regression.pkl") test_match = load_match( "../data/packed_matches/{}.json".format(test_meta["match_id"]), lambda m: predict_xg_demo(m, s, generate_rows_demo, demo_clf) ) test_meta pd.DataFrame(test_match["kicks"]).query("type == 'goal'").head()
notebooks/edwin_defender-features.ipynb
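As an addendum to the defender-features notebook above: the core distance logic of `defender_feature` can be factored into a small pure function that is easy to test without loading match data. A sketch under the same position-dict assumptions (field names like `fromX` and `team` follow the notebook; the sample frame below is hypothetical), using `!=` for the team comparison since `is not` tests object identity rather than string equality:

```python
import math

def closest_defender(kick, frame, radius=100.0):
    """Return (distance to nearest opposing player, count within radius).

    kick: dict with 'fromX', 'fromY', 'fromTeam'.
    frame: list of position dicts with 'type', 'team', 'x', 'y'.
    """
    nearest = math.inf
    pressuring = 0
    for p in frame:
        # Skip the ball and the kicker's own teammates
        if p.get("type") != "player" or p.get("team") == kick["fromTeam"]:
            continue
        d = math.hypot(kick["fromX"] - p["x"], kick["fromY"] - p["y"])
        nearest = min(nearest, d)
        if d <= radius:
            pressuring += 1
    return nearest, pressuring

kick = {"fromX": 0.0, "fromY": 0.0, "fromTeam": "red"}
frame = [
    {"type": "player", "team": "blue", "x": 30.0, "y": 40.0},    # 50 away
    {"type": "player", "team": "blue", "x": 300.0, "y": 400.0},  # 500 away
    {"type": "player", "team": "red", "x": 1.0, "y": 1.0},       # teammate: skipped
    {"type": "ball", "team": None, "x": 2.0, "y": 2.0},          # not a player
]
print(closest_defender(kick, frame))  # (50.0, 1)
```

Keeping the feature a pure function of a kick and one frame of positions means the same code can be reused for both training-set generation and live prediction.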
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Variables and values # # Elements of Data Science # # by [<NAME>](https://allendowney.com) # # [MIT License](https://opensource.org/licenses/MIT) # # ### Goals # # The goal of these notebooks is to give you the tools you need to execute a data science project from beginning to end. We'll start with some basic programming concepts and work our way toward data science tools. # # If you already have some programming experience, the first few notebooks might be a little slow. But there might be some material, specific to data science, that you have not seen before. # # If you have not programmed before, you should expect to face some challenges. I have done my best to explain everything as we go along; I try not to assume anything. And there are exercises throughout the notebooks that should help you learn, and remember what you learned. # # Programming is a super power. As you learn to program, I hope you feel empowered to take on bigger challenges. But programming can also be frustrating. It might take some persistence to get past some rough spots. # # The topics in this notebook include: # # * Using Jupyter to write and run Python code. # # * Basic programming features in Python: variables and values. # # * Translating formulas from math notation to Python. # # Along the way, we'll review a couple of math topics I assume you have seen before, logarithms and algebra. # ## Jupyter # # This is a Jupyter notebook. Jupyter is a software development environment, which means you can use it to write and run programs in Python and other programming languages. # # A Jupyter notebook is made up of cells, where each cell contains either text or code you can run. 
# # If you are running this notebook on Colab, you should see buttons in the top left that say "+ Code" and "+ Text". The first one adds a code cell and the second adds a text cell. # # If you want to try them out, select this cell by clicking on it, then press the "+ Text" button. A new cell should appear below this one. # # Type something in the cell. You can use the buttons to format it, or you can mark up the text using [Markdown](https://www.markdownguide.org/basic-syntax/). When you are done, hold down Shift and press Enter, which will format the text you just typed and then move to the next cell. # # If you select a Code cell, you should see a button on the left with a triangle inside a circle, which is the icon for "Play". If you press this button, Jupyter runs the code in the cell and displays the results. # # When you run code in a notebook for the first time, you might get a message warning you about the things a notebook can do. If you are running a notebook from a source you trust, which I hope includes me, you can press "Run Anyway". # # Instead of clicking the "Play" button, you can also run the code in a cell by holding down Shift and pressing Enter. # ## Numbers # # This notebook introduces the most fundamental tools for working with data: representing numbers and other values, and performing arithmetic operations. # # Python provides tools for working with numbers, words, dates, times, and locations (latitude and longitude). # # Let's start with numbers. Python can handle several types of numbers, but the two most common are: # # * `int`, which represents integer values like `3`, and # * `float`, which represents numbers that have a fraction part, like `3.14159`. # # Most often, we use `int` to represent counts and `float` to represent measurements. # Here's an example of an `int` and a `float`: 3 3.14159 # `float` is short for "floating-point", which is the name for the way these numbers are stored.
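One way to check which of these two types Python assigns to a value is the built-in `type` function — a small sketch:

```python
# The built-in type function reports the type of a value
print(type(3))        # <class 'int'>
print(type(3.14159))  # <class 'float'>

# Any literal with a decimal point or an exponent is a float
print(type(2.5e3))    # <class 'float'>
print(2.5e3)          # 2500.0
```

This distinction matters later: some operations preserve the type of their inputs, while others always produce a `float`.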
# **Exercise:** Create a code cell below this one and type in the following number: `1.2345e3` # # Then run the cell. The output should be `1234.5` # # The `e` in `1.2345e3` stands for "exponent". This way of writing numbers is a version of scientific notation that means $1.2345 \times 10^{3}$. If you are not familiar with scientific notation, [you might want to read this](https://en.wikipedia.org/wiki/Scientific_notation). # ## Arithmetic # # Python provides operators that perform arithmetic. The operators that perform addition and subtraction are `+` and `-`: 2 + 1 2 - 1 # The operators that perform multiplication and division are `*` and `/`: 2 * 3 2 / 3 # And the operator for exponentiation is `**`: 2**3 # Unlike math notation, Python does not allow "implicit multiplication". For example, in math notation, if you write $3 (2 + 1)$, that's understood to be the same as $3 \times (2+ 1)$. # # Python does not allow that notation: 3 (2 + 1) # In this example, the error message is not very helpful, which is why I am warning you now. If you want to multiply, you have to use the `*` operator: 3 * (2 + 1) # The arithmetic operators follow the rules of precedence you might have learned as "PEMDAS": # # * Parentheses before # * Exponentiation before # * Multiplication and division before # * Addition and subtraction # # So in this expression: 1 + 2 * 3 # The multiplication happens first. If that's not what you want, you can use parentheses to make the order of operations explicit: (1 + 2) * 3 # **Exercise:** Write a Python expression that raises `1+2` to the power `3*4`. The answer should be `531441`. # # Note: in the cell below, it should say # # ``` # # Solution goes here # ``` # # Lines like this that begin with `#` are "comments"; they provide information, but they have no effect when the program runs. # # When you do this exercise, you should delete the comment and replace it with your solution. 
# + # Solution goes here # - # ## Math functions # # Python provides functions that compute all the usual mathematical functions, like `sin` and `cos`, `exp` and `log`. # # However, they are not part of Python itself; they are in a "library", which is a collection of functions that supplement the Python language. # # Actually, there are several libraries that provide math functions; the one we'll use is called NumPy, which stands for "Numerical Python", and is pronounced "num' pie". # # Before you can use a library, you have to "import" it. Here's how we import NumPy: import numpy as np # It is conventional to import `numpy` as `np`, which means we can refer to it by the short name `np` rather than the longer name `numpy`. # # Note that pretty much everything is case-sensitive, which means that `numpy` is not the same as `NumPy`. So even though the name of the library is NumPy, when we import it we have to call it `numpy`. If you run the following cell, you should get an error: import NumPy as np # But if we import `np` correctly, we can use it to read the value `pi`, which is an approximation of the mathematical constant $\pi$. np.pi # The result is a `float` with 16 digits. As you probably know, we can't represent $\pi$ with a finite number of digits, so this result is only approximate. # `numpy` provides `log`, which computes the natural logarithm, and `exp`, which raises the constant `e` to a power. np.log(100) np.exp(1) # **Exercise:** Use these functions to confirm the mathematical identity $\log(e^x) = x$, which should be true for any value of $x$. # # With floating-point values, this identity should work for values of $x$ between -700 and 700. What happens when you try it with larger and smaller values? # + # Solution goes here # - # As this example shows, floating-point numbers are finite approximations, which means they don't always behave like math. 
# # As another example, see what happens when you add up `0.1` three times: 0.1 + 0.1 + 0.1 # The result is close to `0.3`, but not exact. # # We'll see other examples of floating-point approximation later, and learn some ways to deal with it. # ## Variables # # A variable is a name that refers to a value. # # The following statement assigns the `int` value 5 to a variable named `x`: x = 5 # The variable we just created has the name `x` and the value 5. # # If a variable name appears at the end of a cell, Jupyter displays its value. x # If we use `x` as part of an arithmetic operation, it represents the value 5: x + 1 x**2 # We can also use `x` with `numpy` functions: np.exp(x) # Notice that the result from `exp` is a `float`, even though the value of `x` is an `int`. # **Exercise:** If you have not programmed before, one of the things you have to get used to is that programming languages are picky about details. Natural languages, like English, and semi-formal languages, like math notation, are more forgiving. # # 4 exampel in Ingli\$h you kin get prackicly evRiThing rong-rong-rong and sti11 be undr3stud. # # As another example, in math notation, parentheses and square brackets mean the same thing, you can write # # $\sin (\omega t)$ # # or # # $\sin [\omega t]$ # # Either one is fine. And you can leave out the parentheses altogether, as long as the meaning is clear: # # $\sin \omega t$ # # In Python, every character counts. For example, the following are all different: # # ``` # np.exp(x) # np.Exp(x) # np.exp[x] # np.exp x # ``` # # While you are learning, I encourage you to make mistakes on purpose to see what goes wrong. Read the error messages carefully. Sometimes they are helpful and tell you exactly what's wrong. Other times they can be misleading. But if you have seen the message before, you might remember some likely causes. # # In the next cell, try out the different versions of `np.exp(x)` above, and see what error messages you get. 
np.exp(x) # **Exercise:** Search the NumPy documentation to find the function that computes square roots, and use it to compute a floating-point approximation of the golden ratio: # # $\phi = \frac{1 + \sqrt{5}}{2}$ # # Hint: The result should be close to `1.618`. # + # Solution goes here # - # ## Save your work # # If you are running on Colab and you want to save your work, now is a good time to press the "Copy to Drive" button (near the upper left), which saves a copy of this notebook in your Google Drive. # # If you want to change the name of the file, you can click on the name in the upper left. # # If you don't use Google Drive, look under the File menu to see other options. # # Once you make a copy, any additional changes you make will be saved automatically, so now you can continue without worrying about losing your work. # ## Calculation with variables # # Now let's use variables to solve a problem involving mathematical calculation. # # Suppose we have the following formula for computing compound interest [from Wikipedia](https://en.wikipedia.org/wiki/Compound_interest#Periodic_compounding): # # "The total accumulated value, including the principal sum $P$ plus compounded interest $I$, is given by the formula: # # $V=P\left(1+{\frac {r}{n}}\right)^{nt}$ # # where: # # * $P$ is the original principal sum # * $V$ is the total accumulated value # * $r$ is the nominal annual interest rate # * $n$ is the compounding frequency # * $t$ is the overall length of time the interest is applied (expressed using the same time units as $r$, usually years). # # "Suppose a principal amount of \$1,500 is deposited in a bank paying an annual interest rate of 4.3\%, compounded quarterly. 
# Then the balance after 6 years is found by using the formula above, with P = 1500 r = 0.043 n = 4 t = 6 # We can compute the total accumulated value by translating the mathematical formula into Python syntax: P * (1 + r/n)**(n*t) # **Exercise:** Continuing the example from Wikipedia: # # "Suppose the same amount of \$1,500 is compounded biennially", so `n = 1/2`. # # What would the total value be after 6 years? Hint: we expect the answer to be a bit less than the previous answer. # + # Solution goes here # - # **Exercise:** If interest is compounded continuously, the value after time $t$ is given by the formula: # # $V=P~e^{rt}$ # # Translate this formula into Python and use it to compute the value of the investment in the previous example with continuous compounding. Hint: we expect the answer to be a bit more than the previous answers. # + # Solution goes here # - # **Exercise:** Applying your algebra skills, solve the previous equation for $r$. Now use the formula you just derived to answer this question. # # "Harvard's tuition in 1970 was \$4,070 (not including room, board, and fees). # # "In 2019 it is \$46,340. What was the annual rate of increase over that period, treating it as if it had compounded continuously?" # + # Solution goes here # - # The point of this exercise is to practice using variables. But it is also a reminder about logarithms, which we will use extensively. # ## A little more Jupyter # # Here are a few tips on using Jupyter to compute and display values. # # Generally, if there is a single expression in a cell, Jupyter computes the value of the expression and displays the result.
# # For example, we've already seen how to display the value of `np.pi`: np.pi # Here's a more complex example with functions, operators, and numbers: 1 / np.sqrt(2 * np.pi) * np.exp(-3**2 / 2) # If you put more than one expression in a cell, Jupyter computes them all, but it only displays the result from the last: 1 2 + 3 np.exp(1) (1 + np.sqrt(5)) / 2 # If you want to display more than one value, you can separate them with commas: 1, 2 + 3, np.exp(1), (1 + np.sqrt(5)) / 2 # That result is actually a tuple, which you will learn about in the next notebook. # # Here's one last Jupyter tip: when you assign a value to a variable, Jupyter does not display the value: phi = (1 + np.sqrt(5)) / 2 # So it is idiomatic to assign a value to a variable and immediately display the result: phi = (1 + np.sqrt(5)) / 2 phi # **Exercise:** Display the value of $\phi$ and its inverse, $1/\phi$, on a single line. # + # Solution goes here # - # ## The Colab mental model # # Congratulations on completing the first notebook! # # Now that you have worked with Colab, you might find it helpful to watch this video, where I explain a little more about how it works: # + from IPython.display import YouTubeVideo YouTubeVideo("eIY-PsYBrPs") # -
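# Returning to the compound-interest example from earlier in this notebook, the two formulas can be checked numerically. This is a sketch, not one of the notebook's exercises: it uses the Wikipedia example values ($P = 1500$, $r = 0.043$, $n = 4$, $t = 6$), and the values in the comments are approximate.

```python
import numpy as np

# Periodic compounding: V = P * (1 + r/n)**(n*t)
P, r, n, t = 1500, 0.043, 4, 6
V_quarterly = P * (1 + r/n)**(n*t)

# Continuous compounding: V = P * e**(r*t)
V_continuous = P * np.exp(r * t)

print(round(V_quarterly, 2))   # about 1938.84
print(round(V_continuous, 2))  # about 1941.51, slightly more, as expected
```

# As the hints in the exercises suggest, continuous compounding yields a little more than quarterly compounding at the same nominal rate.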
01_variables.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: '' # name: sagemath # --- # + language="html" # <link href="http://mathbook.pugetsound.edu/beta/mathbook-content.css" rel="stylesheet" type="text/css" /> # <link href="https://aimath.org/mathbook/mathbook-add-on.css" rel="stylesheet" type="text/css" /> # <style>.subtitle {font-size:medium; display:block}</style> # <link href="https://fonts.googleapis.com/css?family=Open+Sans:400,400italic,600,600italic" rel="stylesheet" type="text/css" /> # <link href="https://fonts.googleapis.com/css?family=Inconsolata:400,700&subset=latin,latin-ext" rel="stylesheet" type="text/css" /><!-- Hide this cell. --> # <script> # var cell = $(".container .cell").eq(0), ia = cell.find(".input_area") # if (cell.find(".toggle-button").length == 0) { # ia.after( # $('<button class="toggle-button">Toggle hidden code</button>').click( # function (){ ia.toggle() } # ) # ) # ia.hide() # } # </script> # # - # **Important:** to view this notebook properly you will need to execute the cell above, which assumes you have an Internet connection. It should already be selected, or place your cursor anywhere above to select. Then press the "Run" button in the menu bar above (the right-pointing arrowhead), or press Shift-Enter on your keyboard. 
# $\newcommand{\identity}{\mathrm{id}} # \newcommand{\notdivide}{\nmid} # \newcommand{\notsubset}{\not\subset} # \newcommand{\lcm}{\operatorname{lcm}} # \newcommand{\gf}{\operatorname{GF}} # \newcommand{\inn}{\operatorname{Inn}} # \newcommand{\aut}{\operatorname{Aut}} # \newcommand{\Hom}{\operatorname{Hom}} # \newcommand{\cis}{\operatorname{cis}} # \newcommand{\chr}{\operatorname{char}} # \newcommand{\Null}{\operatorname{Null}} # \newcommand{\lt}{<} # \newcommand{\gt}{>} # \newcommand{\amp}{&} # $ # <div class="mathbook-content"><h2 class="heading hide-type" alt="Section 13.2 Solvable Groups"><span class="type">Section</span><span class="codenumber">13.2</span><span class="title">Solvable Groups</span></h2><a href="section-solvable-groups.ipynb" class="permalink">¶</a></div> # <div class="mathbook-content"><p id="p-1957">A <dfn class="terminology">subnormal series</dfn> of a group $G$ is a finite sequence of subgroups</p><div class="displaymath"> # \begin{equation*} # G = H_n \supset H_{n-1} \supset \cdots \supset H_1 \supset H_0 = \{ e \}, # \end{equation*} # </div><p>where $H_i$ is a normal subgroup of $H_{i+1}\text{.}$ If each subgroup $H_i$ is normal in $G\text{,}$ then the series is called a <dfn class="terminology">normal series</dfn>. The <dfn class="terminology">length</dfn> of a subnormal or normal series is the number of proper inclusions.</p></div> # <div class="mathbook-content"><article class="example-like" id="example-normal-series"><h6 class="heading"><span class="type">Example</span><span class="codenumber">13.11</span></h6><p id="p-1958">Any series of subgroups of an abelian group is a normal series. Consider the following series of groups:</p><div class="displaymath"> # \begin{gather*} # {\mathbb Z} \supset 9{\mathbb Z} \supset 45{\mathbb Z} \supset 180{\mathbb Z} \supset \{0\},\\ # {\mathbb Z}_{24} \supset \langle 2 \rangle \supset \langle 6 \rangle \supset \langle 12 \rangle \supset \{0\}. 
# \end{gather*} # </div></article></div> # <div class="mathbook-content"><article class="example-like" id="example-subnormal-series"><h6 class="heading"><span class="type">Example</span><span class="codenumber">13.12</span></h6><p id="p-1959">A subnormal series need not be a normal series. Consider the following subnormal series of the group $D_4\text{:}$</p><div class="displaymath"> # \begin{equation*} # D_4 \supset \{ (1), (12)(34), (13)(24), (14)(23) \} \supset \{ (1), (12)(34) \} \supset \{ (1) \}. # \end{equation*} # </div><p>The subgroup $\{ (1), (12)(34) \}$ is not normal in $D_4\text{;}$ consequently, this series is not a normal series.</p></article></div> # <div class="mathbook-content"><p id="p-1960">A subnormal (normal) series $\{ K_j \}$ is a <dfn class="terminology">refinement of a subnormal (normal) series</dfn> $\{ H_i \}$ if $\{ H_i \} \subset \{ K_j \}\text{.}$ That is, each $H_i$ is one of the $K_j\text{.}$</p></div> # <div class="mathbook-content"><article class="example-like" id="example-refinement"><h6 class="heading"><span class="type">Example</span><span class="codenumber">13.13</span></h6><p id="p-1961">The series</p><div class="displaymath"> # \begin{equation*} # {\mathbb Z} \supset 3{\mathbb Z} \supset 9{\mathbb Z} \supset 45{\mathbb Z} \supset 90{\mathbb Z} \supset 180{\mathbb Z} \supset \{0\} # \end{equation*} # </div><p>is a refinement of the series</p><div class="displaymath"> # \begin{equation*} # {\mathbb Z} \supset 9{\mathbb Z} \supset 45{\mathbb Z} \supset 180{\mathbb Z} \supset \{0\}. 
# \end{equation*} # </div></article></div> # <div class="mathbook-content"><p id="p-1962">The best way to study a subnormal or normal series of subgroups, $\{ H_i \}$ of $G\text{,}$ is actually to study the factor groups $H_{i+1}/H_i\text{.}$ We say that two subnormal (normal) series $\{H_i \}$ and $\{ K_j \}$ of a group $G$ are <dfn class="terminology">isomorphic</dfn> if there is a one-to-one correspondence between the collections of factor groups $\{H_{i+1}/H_i \}$ and $\{ K_{j+1}/ K_j \}\text{.}$</p></div> # <div class="mathbook-content"><article class="example-like" id="example-isomorph-series"><h6 class="heading"><span class="type">Example</span><span class="codenumber">13.14</span></h6><p id="p-1963">The two normal series</p><div class="displaymath"> # \begin{gather*} # {\mathbb Z}_{60} \supset \langle 3 \rangle \supset \langle 15 \rangle \supset \{ 0 \}\\ # {\mathbb Z}_{60} \supset \langle 4 \rangle \supset \langle 20 \rangle \supset \{ 0 \} # \end{gather*} # </div><p>of the group ${\mathbb Z}_{60}$ are isomorphic since</p><div class="displaymath"> # \begin{gather*} # {\mathbb Z}_{60} / \langle 3 \rangle \cong \langle 20 \rangle / \{ 0 \} \cong {\mathbb Z}_{3}\\ # \langle 3 \rangle / \langle 15 \rangle \cong \langle 4 \rangle / \langle 20 \rangle \cong {\mathbb Z}_{5}\\ # \langle 15 \rangle / \{ 0 \} \cong {\mathbb Z}_{60} / \langle 4 \rangle \cong {\mathbb Z}_4. # \end{gather*} # </div></article></div> # <div class="mathbook-content"><p id="p-1964">A subnormal series $\{ H_i \}$ of a group $G$ is a <dfn class="terminology">composition series</dfn> if all the factor groups are simple; that is, if none of the factor groups of the series contains a normal subgroup. 
A normal series $\{ H_i \}$ of $G$ is a <dfn class="terminology">principal series</dfn> if all the factor groups are simple.</p></div> # <div class="mathbook-content"><article class="example-like" id="example-composition-series"><h6 class="heading"><span class="type">Example</span><span class="codenumber">13.15</span></h6><p id="p-1965">The group ${\mathbb Z}_{60}$ has a composition series</p><div class="displaymath"> # \begin{equation*} # {\mathbb Z}_{60} \supset \langle 3 \rangle \supset \langle 15 \rangle \supset \langle 30 \rangle \supset \{ 0 \} # \end{equation*} # </div><p>with factor groups</p><div class="displaymath"> # \begin{align*} # {\mathbb Z}_{60} / \langle 3 \rangle & \cong {\mathbb Z}_{3}\\ # \langle 3 \rangle / \langle 15 \rangle & \cong {\mathbb Z}_{5}\\ # \langle 15 \rangle / \langle 30 \rangle & \cong {\mathbb Z}_{2}\\ # \langle 30 \rangle / \{ 0 \} & \cong {\mathbb Z}_2. # \end{align*} # </div><p>Since ${\mathbb Z}_{60}$ is an abelian group, this series is automatically a principal series. Notice that a composition series need not be unique. 
The series</p><div class="displaymath"> # \begin{equation*} # {\mathbb Z}_{60} \supset \langle 2 \rangle \supset \langle 4 \rangle \supset \langle 20 \rangle \supset \{ 0 \} # \end{equation*} # </div><p>is also a composition series.</p></article></div> # <div class="mathbook-content"><article class="example-like" id="example-sn_series"><h6 class="heading"><span class="type">Example</span><span class="codenumber">13.16</span></h6><p id="p-1966">For $n \geq 5\text{,}$ the series</p><div class="displaymath"> # \begin{equation*} # S_n \supset A_n \supset \{ (1) \} # \end{equation*} # </div><p>is a composition series for $S_n$ since $S_n / A_n \cong {\mathbb Z}_2$ and $A_n$ is simple.</p></article></div> # <div class="mathbook-content"><article class="example-like" id="example-z-series"><h6 class="heading"><span class="type">Example</span><span class="codenumber">13.17</span></h6><p id="p-1967">Not every group has a composition series or a principal series. Suppose that</p><div class="displaymath"> # \begin{equation*} # \{ 0 \} = H_0 \subset H_1 \subset \cdots \subset H_{n-1} \subset H_n = {\mathbb Z} # \end{equation*} # </div><p>is a subnormal series for the integers under addition. Then $H_1$ must be of the form $k {\mathbb Z}$ for some $k \in {\mathbb N}\text{.}$ In this case $H_1 / H_0 \cong k {\mathbb Z}$ is an infinite cyclic group with many nontrivial proper normal subgroups.</p></article></div> # <div class="mathbook-content"><p id="p-1968">Although composition series need not be unique as in the case of ${\mathbb Z}_{60}\text{,}$ it turns out that any two composition series are related. The factor groups of the two composition series for ${\mathbb Z}_{60}$ are ${\mathbb Z}_2\text{,}$ ${\mathbb Z}_2\text{,}$ ${\mathbb Z}_3\text{,}$ and ${\mathbb Z}_5\text{;}$ that is, the two composition series are isomorphic. 
The Jordan-Hölder Theorem says that this is always the case.</p></div> # <div class="mathbook-content"><article class="theorem-like" id="theorem-66"><h6 class="heading"><span class="type">Theorem</span><span class="codenumber">13.18</span><span class="title">Jordan-Hölder</span></h6><p id="p-1969">Any two composition series of $G$ are isomorphic.</p></article><article class="proof" id="proof-85"><h6 class="heading"><span class="type">Proof</span></h6><p id="p-1970">We shall employ mathematical induction on the length of the composition series. If the length of a composition series is 1, then $G$ must be a simple group. In this case any two composition series are isomorphic.</p><p id="p-1971">Suppose now that the theorem is true for all groups having a composition series of length $k\text{,}$ where $1 \leq k \lt n\text{.}$ Let</p><div class="displaymath"> # \begin{gather*} # G = H_n \supset H_{n-1} \supset \cdots \supset H_1 \supset H_0 = \{ e \}\\ # G = K_m \supset K_{m-1} \supset \cdots \supset K_1 \supset K_0 = \{ e \} # \end{gather*} # </div><p>be two composition series for $G\text{.}$ We can form two new subnormal series for $G$ since $H_i \cap K_{m-1}$ is normal in $H_{i+1} \cap K_{m-1}$ and $K_j \cap H_{n-1}$ is normal in $K_{j+1} \cap H_{n-1}\text{:}$</p><div class="displaymath"> # \begin{gather*} # G = H_n \supset H_{n-1} \supset H_{n-1} \cap K_{m-1} \supset \cdots \supset H_0 \cap K_{m-1} = \{ e \}\\ # G = K_m \supset K_{m-1} \supset K_{m-1} \cap H_{n-1} \supset \cdots \supset K_0 \cap H_{n-1} = \{ e \}. 
# \end{gather*} # </div><p>Since $H_i \cap K_{m-1}$ is normal in $H_{i+1} \cap K_{m-1}\text{,}$ the Second Isomorphism Theorem (Theorem <a href="section-group-isomorphism-theorems.ipynb#theorem-second-isomorphism" class="xref" alt="Theorem 11.12 Second Isomorphism Theorem" title="Theorem 11.12 Second Isomorphism Theorem">11.12</a>) implies that</p><div class="displaymath"> # \begin{align*} # (H_{i+1} \cap K_{m-1}) / (H_i \cap K_{m-1}) & = (H_{i+1} \cap K_{m-1}) / (H_i \cap ( H_{i+1} \cap K_{m-1} ))\\ # & \cong H_i (H_{i+1} \cap K_{m-1})/ H_i, # \end{align*} # </div><p>where $H_i$ is normal in $H_i (H_{i+1} \cap K_{m-1})\text{.}$ Since $\{ H_i \}$ is a composition series, $H_{i+1} / H_i$ must be simple; consequently, $H_i (H_{i+1} \cap K_{m-1})/ H_i$ is either $H_{i+1}/H_i$ or $H_i/H_i\text{.}$ That is, $H_i (H_{i+1} \cap K_{m-1})$ must be either $H_i$ or $H_{i+1}\text{.}$ Removing any nonproper inclusions from the series</p><div class="displaymath"> # \begin{equation*} # H_{n-1} \supset H_{n-1} \cap K_{m-1} \supset \cdots \supset H_0 \cap K_{m-1} = \{ e \}, # \end{equation*} # </div><p>we have a composition series for $H_{n-1}\text{.}$ Our induction hypothesis says that this series must be equivalent to the composition series</p><div class="displaymath"> # \begin{equation*} # H_{n-1} \supset \cdots \supset H_1 \supset H_0 = \{ e \}. # \end{equation*} # </div><p>Hence, the composition series</p><div class="displaymath"> # \begin{equation*} # G = H_n \supset H_{n-1} \supset \cdots \supset H_1 \supset H_0 = \{ e \} # \end{equation*} # </div><p>and</p><div class="displaymath"> # \begin{equation*} # G = H_n \supset H_{n-1} \supset H_{n-1} \cap K_{m-1} \supset \cdots \supset H_0 \cap K_{m-1} = \{ e \} # \end{equation*} # </div><p>are equivalent. 
If $H_{n-1} = K_{m-1}\text{,}$ then the composition series $\{H_i \}$ and $\{ K_j \}$ are equivalent and we are done; otherwise, $H_{n-1} K_{m-1}$ is a normal subgroup of $G$ properly containing $H_{n-1}\text{.}$ In this case $H_{n-1} K_{m-1} = G$ and we can apply the Second Isomorphism Theorem once again; that is,</p><div class="displaymath"> # \begin{equation*} # K_{m-1} / (K_{m-1} \cap H_{n-1}) \cong (H_{n-1} K_{m-1}) / H_{n-1} = G/H_{n-1}. # \end{equation*} # </div><p>Therefore,</p><div class="displaymath"> # \begin{equation*} # G = H_n \supset H_{n-1} \supset H_{n-1} \cap K_{m-1} \supset \cdots \supset H_0 \cap K_{m-1} = \{ e \} # \end{equation*} # </div><p>and</p><div class="displaymath"> # \begin{equation*} # G = K_m \supset K_{m-1} \supset K_{m-1} \cap H_{n-1} \supset \cdots \supset K_0 \cap H_{n-1} = \{ e \} # \end{equation*} # </div><p>are equivalent and the proof of the theorem is complete.</p></article></div> # <div class="mathbook-content"><p id="p-1972">A group $G$ is <dfn class="terminology">solvable</dfn> if it has a subnormal series $\{ H_i \}$ such that all of the factor groups $H_{i+1} / H_i$ are abelian. Solvable groups will play a fundamental role when we study Galois theory and the solution of polynomial equations.</p></div> # <div class="mathbook-content"><article class="example-like" id="solvable"><h6 class="heading"><span class="type">Example</span><span class="codenumber">13.19</span></h6><p id="p-1973">The group $S_4$ is solvable since</p><div class="displaymath"> # \begin{equation*} # S_4 \supset A_4 \supset \{ (1), (12)(34), (13)(24), (14)(23) \} \supset \{ (1) \} # \end{equation*} # </div><p>has abelian factor groups; however, for $n \geq 5$ the series</p><div class="displaymath"> # \begin{equation*} # S_n \supset A_n \supset \{ (1) \} # \end{equation*} # </div><p>is a composition series for $S_n$ with a nonabelian factor group. Therefore, $S_n$ is not a solvable group for $n \geq 5\text{.}$</p></article></div>
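# **A computational aside** (not part of the original text): the factor-group orders in the ${\mathbb Z}_{60}$ examples can be checked with plain Python, with no Sage required, using the fact that in ${\mathbb Z}_n$ the subgroup $\langle k \rangle$ has order $n/\gcd(n,k)\text{.}$ The two composition series of ${\mathbb Z}_{60}$ give the same multiset of factor orders, as the Jordan-Hölder Theorem requires.

```python
from math import gcd

n = 60

def subgroup_order(k):
    # In Z_n, <k> has order n / gcd(n, k); gcd(n, 0) = n gives order 1 for {0}.
    return n // gcd(n, k)

def factor_orders(generators):
    # Orders of the successive factor groups H_{i+1}/H_i along the series.
    orders = [subgroup_order(k) for k in generators]
    return [a // b for a, b in zip(orders, orders[1:])]

series_a = [1, 3, 15, 30, 0]  # Z_60 > <3> > <15> > <30> > {0}
series_b = [1, 2, 4, 20, 0]   # Z_60 > <2> > <4> > <20> > {0}

print(factor_orders(series_a))  # [3, 5, 2, 2]
print(factor_orders(series_b))  # [2, 2, 5, 3]

# Same factors up to ordering: the two composition series are isomorphic.
assert sorted(factor_orders(series_a)) == sorted(factor_orders(series_b))
```

# Every factor order is prime, so each factor group is cyclic of prime order and hence simple, confirming that both series are composition series.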
aata/section-solvable-groups.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 (ipykernel) # language: python # name: python3 # --- # # Introduction to Jupyter (formerly known as IPython) Notebook # ## 1) Click HERE only once (so the blue bar shows up on the left) # ## 2) Use your arrow keys to move up and down # ## 3) Stop on this row and press enter – notice you're now editing the row # ## 4) Press shift + enter to execute the row # ## 5) Now press B to create a new cell below the cursor – or A for above. # ## 6) Practice step 5 a few times # ## 7) Press ESC and use your arrow keys to highlight one of the new empty cells you created # ## 8) Delete the empty cell by pressing d + d (be careful!) # ## 9) Run the Python code below by highlighting the row and pressing shift + enter print("good job!") # A few notes: # # - Rows can contain many different languages, today we're just using Python # - We can also use Markdown in cells and display pictures, web sites, etc # - When in doubt, press ESC once to exit editing mode and use your arrows, A, B, dd, etc # ## 10) Finally, go to File > Download as > Python (.py) to save this as a Python file
notebook_intro.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # ## Handedness evaluation according to Oldfield and Waterloo # ### 1. Oldfield # import modules import pandas as pd import csv import os from glob import glob # path to Oldfield files of 23 subjects (sorted) oldfield_files = sorted(glob('/home/benedikt/Documents/Studium/Masterarbeit/data/questionaires/data_haendigkeit/*oldfield*.csv')) oldfield_files # iterate over all files, concatenating all information into one file pieces = [] columns = ['q1_schreiben', 'q2_zeichnen', 'q3_werfen', 'q4_Schere', 'q5_zahnbuerste', 'q6_messer', 'q7_loeffel', 'q8_besen', 'q9_streichholz', 'q10_kiste oeffnen', 'mean', 'result'] for i in oldfield_files: df = pd.read_csv(i, names=columns) pieces.append(df) allData = pd.concat(pieces, ignore_index=True) # + # create new dataframe without headers for each participant df2=[] for i in range(1, len(allData.values), 2): df2.append(allData.values[i]) df3 = pd.DataFrame(df2) # - # add new headers for all data df3.columns = ['q1_schreiben', 'q2_zeichnen', 'q3_werfen', 'q4_Schere', 'q5_zahnbuerste', 'q6_messer', 'q7_loeffel', 'q8_besen', 'q9_streichholz', 'q10_kiste oeffnen', 'mean', 'result'] df3 # + # Handedness score is calculated using this formula: 100*((Right - Left) / (Right + Left)). 
#http://zhanglab.wdfiles.com/local--files/survey/handedness.html # - # ### 2. Waterloo # + # import modules import pandas as pd import numpy as np import csv from glob import glob import functools # function to remove NaN-values and shift cells up def drop_and_roll(col, na_position='last', fillvalue=np.nan): result = np.full(len(col), fillvalue, dtype=col.dtype) mask = col.notnull() N = mask.sum() if na_position == 'last': result[:N] = col.loc[mask] elif na_position == 'first': result[-N:] = col.loc[mask] else: raise ValueError('na_position {!r} unrecognized'.format(na_position)) return result # - # path to Waterloo files of 23 subjects (sorted) waterloo_files = sorted(glob('/home/benedikt/Documents/Studium/Masterarbeit/data/questionaires/data_haendigkeit/*waterloo*.csv')) waterloo_files # read csv, only the needed columns x = pd.read_csv(waterloo_files[22], usecols = [0,1,2,4,6,8,10,12,14,16,18,20,22,24,26,28,30,32,34,36,38,40,42,44,46,48,50,52,54,56,58,60,62,64,66,68,70,72]) # apply the drop_and_roll function to get rid of NaN-cells in csv y = x.apply(functools.partial(drop_and_roll, fillvalue='')) # extract the 1st row as the only interesting one, write to dataframe df = pd.DataFrame(y[:1]) # switch rows and columns df_readable = df.transpose() df_readable # count the answers on the 34 questions on hand preference (Ra + Ru + Eq + Lu + La) df_readable[0].value_counts() # + # handedness-score: scored on a scale from -2 to 2 --> Ra=+2, Ru=+1, Eq = 1 for both, Lu=-1, La=-2 # count up values for each participant to get an absolute score value (+100 <--> -100) # score = 100*((Right - Left) / (Right + Left))
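# The scoring comments above can be made concrete. Below is a minimal sketch of the laterality quotient; the answer counts and the handling of "Eq" (counted once for each hand) are illustrative assumptions, not data from this study:

```python
def handedness_score(right, left):
    # Laterality quotient: 100 * (R - L) / (R + L); +100 fully right, -100 fully left
    return 100 * (right - left) / (right + left)

# Hypothetical Waterloo answer counts for one participant (Ra/Ru/Eq/Lu/La)
counts = {"Ra": 20, "Ru": 8, "Eq": 2, "Lu": 3, "La": 1}

# Ra counts 2 right, Ru 1 right, Eq 1 for both hands, Lu 1 left, La 2 left
right = 2 * counts["Ra"] + counts["Ru"] + counts["Eq"]   # 50
left  = 2 * counts["La"] + counts["Lu"] + counts["Eq"]   # 7

score = handedness_score(right, left)
print(round(score, 1))  # 75.4 --> strongly right-handed
```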
scripts/analysis_pyton/questionnaires/questionnaires_handedness.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: 'Python 3.8.12 64-bit (''gan'': conda)' # language: python # name: python3 # --- # + colab={"base_uri": "https://localhost:8080/"} id="b8FSyDx06xEP" outputId="b4a1a8db-868e-479a-c1b0-42230f2b328a" # !pip install wandb torchmetrics torchmetrics[image] # + colab={"base_uri": "https://localhost:8080/"} id="Hw6vI9e26xER" outputId="5daf4830-18ea-4e76-d67a-82a91d8885f0" # # !git clone https://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@github.com/kiritowu/GAN.git # %cd GAN # + id="fWIEo33u6xES" import os from functools import partial import numpy as np import torch import torch.nn as nn import torch.nn.functional as F import torch.optim as optim from torch.utils import data from torchvision import transforms import matplotlib.pyplot as plt from models.utils import weights_init from models.bigresnet import Generator, Discriminator from utils.data import get_CIFAR10, _CIFAR_MEAN, _CIFAR_STD from utils.metrics import FID10k, IS10k from utils.plot import plot_grid, inverseNormalize, classnames_from_tensor, save_all_generated_img # + id="ZsCaV3YY6xET" device = torch.device("cuda" if torch.cuda.is_available() else "cpu") # + colab={"base_uri": "https://localhost:8080/", "height": 85} id="QWooUn3S6xET" outputId="ea2b6c68-3f68-47f6-ec0f-49f27ff3e360" hparams = dict( batch_size = 128, latent_dim = 80, n_classes = 10, image_size = 32, shared_embedding_dim=128, d_cond_mtd="AC", channels = 3, train_d_times = 2, train_g_times = 1, save_wandb = True ) if hparams.get("save_wandb"): import wandb wandb.login() wandb.init( entity="kiritowu", project="ACGAN-CIFAR10", config=hparams ) # + colab={"base_uri": "https://localhost:8080/"} id="PYRU0XP-6xEU" outputId="732e663e-ccd4-4734-89f8-a23a9afab815" cifar_data = get_CIFAR10(concatDataset=True) cifar_loader = data.DataLoader( cifar_data, batch_size=hparams.get("batch_size",64), 
shuffle=True ) cifar10_classnames=["airplane", "automobile", "bird", "cat", "deer", "dog", "frog", "horse", "ship", "truck"] # + id="udCeBIXa6xEU" """ Hinge Loss Reference : https://github.com/POSTECH-CVLab/PyTorch-StudioGAN/blob/8c9aa5a2e9bb33eca711327c085db5ce50ff7fc0/src/utils/losses.py """ def d_hinge(d_logit_real, d_logit_fake): return torch.mean(F.relu(1. - d_logit_real)) + torch.mean(F.relu(1. + d_logit_fake)) def g_hinge(d_logit_fake): return -torch.mean(d_logit_fake) # + id="meAofWBe6xEV" def train_one_batch_acgan_md( epoch:int, data_loader:data.DataLoader, generator:nn.Module, discriminator:nn.Module, d_hinge, g_hinge, aux_loss:nn.CrossEntropyLoss, g_optimizer:optim.Adam, d_optimizer:optim.Adam, device:torch.device, n_classes: int, latent_dim:int, train_d_times: int, train_g_times: int, **kwargs ): generator.train() discriminator.train() d_losses = [] g_losses = [] cls_accs = [] for real_imgs, real_labels in data_loader: batch_size = real_imgs.shape[0] real_imgs, real_labels = real_imgs.to(device), real_labels.to(device) """ Training of Discriminator """ d_optimizer.zero_grad() for _ in range(train_d_times): latent_space = torch.normal(0,1,(batch_size, latent_dim), device=device, requires_grad=False) gen_labels = torch.randint(0,n_classes, (batch_size,), device=device, requires_grad=False) # Generate fake image with Generator fake_imgs = generator(latent_space, gen_labels) # Loss for real images real_pred, real_aux = discriminator(real_imgs) fake_pred, fake_aux = discriminator(fake_imgs.detach()) # Detach to not calculate gradient # Calculate Discriminator Loss d_loss = d_hinge(real_pred, fake_pred) + aux_loss(real_aux, real_labels) # adjust gradients for applying gradient accumluation trick d_loss = d_loss / train_d_times # Calculate gradient to the loss d_loss.backward() # Calculate Discriminator Auxillary Accuracy pred = np.concatenate([real_aux.data.cpu().numpy(), fake_aux.data.cpu().numpy()], axis=0) gt = 
np.concatenate([real_labels.data.cpu().numpy(), gen_labels.data.cpu().numpy()], axis=0) d_acc = np.mean(np.argmax(pred, axis=1) == gt) # Append d_loss d_losses.append(d_loss.cpu().item()) # Append cls_acc cls_accs.append(d_acc*100) # Update discriminator weights d_optimizer.step() """ Training of Generator """ g_optimizer.zero_grad() for _ in range(train_g_times): latent_space = torch.normal(0,1,(batch_size, latent_dim), device=device, requires_grad=False) gen_labels = torch.randint(0,n_classes, (batch_size,), device=device, requires_grad=False) # Generate fake image with Generator fake_imgs = generator(latent_space, gen_labels) # Get Adversarial and Auxillary(class) prediction from Discriminator adversarial, pred_labels = discriminator(fake_imgs) # Calculate Generator Loss g_loss = g_hinge(adversarial) + aux_loss(pred_labels, gen_labels) # adjust gradients for applying gradient accumluation trick g_loss = g_loss / train_g_times # Calculate gradient to the loss g_loss.backward() # Append g_loss g_losses.append(g_loss.cpu().item()) # Update generator weights g_optimizer.step() # Wandb Logging if kwargs.get("save_wandb"): wandb.log(dict(DLoss=np.mean(d_losses), GLoss=np.mean(g_losses), ClsAcc=np.mean(cls_accs))) print(f"[Epoch {epoch}] DLoss: {np.mean(d_losses):.4f} GLoss: {np.mean(g_losses):.4f} AuxAcc: {np.mean(cls_accs):.2f}") # + id="NYeoAr6D6xEX" def evaluate( epoch: int, generator: nn.Module, real_data:data.Dataset, batch_size:int, latent_dim:int, n_classes:int, **kwargs ): with torch.no_grad(): latent_space = torch.normal( 0, 1, (batch_size, latent_dim), device=device, requires_grad=False) gen_labels = torch.randint( 0, n_classes, (batch_size,), device=device, requires_grad=False) imgs = generator(latent_space, gen_labels) # Evaluate FID10k fid10k = FID10k() fid_score = fid10k.evaluate10k(generator, real_data, latent_dim, n_classes) print(f"FID-Score-10k: {fid_score}") if kwargs.get("save_wandb"): wandb.log({"Fid_score": fid_score}, commit=False) # Evaluate 
IS10k is10k = IS10k() is_score = is10k.evaluate10k(generator, latent_dim, n_classes) print(f"Inception-Score-10k: {is_score}") if kwargs.get("save_wandb"): wandb.log({"IS_score": is_score}) # Plot Image if not os.path.exists('images'): os.makedirs('images') plot_grid( epoch, imgs.cpu(), labels=classnames_from_tensor(gen_labels.cpu(), cifar10_classnames), save_path="images", inv_preprocessing=[ partial(inverseNormalize, mean=_CIFAR_MEAN, std=_CIFAR_STD)], save_wandb=kwargs.get("save_wandb", False) ) # + id="7ja243dP6xEX" epoch = 0 generator = Generator(**hparams).apply(weights_init).to(device) discriminator = Discriminator(**hparams).apply(weights_init).to(device) aux_loss = nn.CrossEntropyLoss(label_smoothing=0.1) g_optimizer = optim.Adam(generator.parameters(),lr=0.0002, betas=(0.5, 0.999)) d_optimizer = optim.Adam(discriminator.parameters(),lr=0.0002, betas=(0.5, 0.999)) # + colab={"base_uri": "https://localhost:8080/", "height": 1000} id="6xOIXwQT6xEY" outputId="93c75b3e-84aa-4a57-9264-ca2c7cf104d2" for _ in range(500): train_one_batch_acgan_md( epoch, cifar_loader, generator, discriminator, d_hinge, g_hinge, aux_loss, g_optimizer, d_optimizer, device, **hparams ) if epoch % 5 == 0: evaluate(epoch, generator, cifar_data, **hparams) if epoch % 10 == 0: save_all_generated_img(epoch=epoch, base_folder="acgan-mod-bigresnet", generator=generator, image_num=1000, n_classes=10, latent_dim=hparams.get("latent_dim", 100), classname_mapping = cifar10_classnames, device=device, inv_preprocessing=partial(inverseNormalize, mean=_CIFAR_MEAN, std=_CIFAR_STD) ) epoch += 1
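# As a sanity check on the hinge losses defined earlier, here is a NumPy re-statement of the same formulas evaluated on toy logits (the example values are made up for illustration; the training code itself uses the PyTorch versions above):

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def d_hinge(d_logit_real, d_logit_fake):
    # Discriminator hinge: penalize real logits below +1 and fake logits above -1
    return np.mean(relu(1.0 - d_logit_real)) + np.mean(relu(1.0 + d_logit_fake))

def g_hinge(d_logit_fake):
    # Generator hinge: push the discriminator's fake logits up
    return -np.mean(d_logit_fake)

real = np.array([2.0, 0.5])   # one confident real, one borderline
fake = np.array([-2.0, 0.5])  # one confident fake, one borderline

print(d_hinge(real, fake))  # mean([0, 0.5]) + mean([0, 1.5]) = 0.25 + 0.75 = 1.0
print(g_hinge(fake))        # -mean([-2.0, 0.5]) = 0.75
```

# Note that logits already on the correct side of the margin contribute nothing to the discriminator loss, which is what makes hinge losses robust to confident examples.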
GAN/AC-BIGGAN-with-CIFAR10/notebook-archieve/acgan/acgan-mod-bigresnet.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import matplotlib.pyplot as plt import numpy as np import numpy.linalg as nplin import itertools #from coniii import * from sklearn.linear_model import LinearRegression np.random.seed(0) def operators(s): #generate terms in the energy function n_seq,n_var = s.shape ops = np.zeros((n_seq,n_var+int(n_var*(n_var-1)/2.0))) jindex = 0 for index in range(n_var): ops[:,jindex] = s[:,index] jindex +=1 for index in range(n_var-1): for index1 in range(index+1,n_var): ops[:,jindex] = s[:,index]*s[:,index1] jindex +=1 return ops def energy_ops(ops,w): return np.sum(ops*w[np.newaxis,:],axis=1) def generate_seqs(n_var,n_seq,n_sample=30,g=1.0): n_ops = n_var+int(n_var*(n_var-1)/2.0) #w_true = g*(np.random.rand(ops.shape[1])-0.5)/np.sqrt(float(n_var)) w_true = np.random.normal(0.,g/np.sqrt(n_var),size=n_ops) samples = np.random.choice([1.0,-1.0],size=(n_seq*n_sample,n_var),replace=True) ops = operators(samples) #n_ops = ops.shape[1] sample_energy = energy_ops(ops,w_true) p = np.exp(sample_energy) p /= np.sum(p) out_samples = np.random.choice(np.arange(n_seq*n_sample),size=n_seq,replace=True,p=p) return w_true,samples[out_samples] #,p[out_samples],sample_energy[out_samples] def hopfield_model(s): ops = operators(s) w = np.mean(ops,axis=0) #print('hopfield error ',nplin.norm(w-w_true)) return w def boltzmann_machine_exact(s,s_all,max_iter=150,alpha=5e-2,cov=False): n_seq,n_var = s.shape ops = operators(s) cov_inv = np.eye(ops.shape[1]) ops_obs = np.mean(ops,axis=0) ops_model = operators(s_all) n_ops = ops.shape[1] np.random.seed(13) w = np.random.rand(n_ops)-0.5 for iterate in range(max_iter): energies_w = energy_ops(ops_model,w) probs_w = np.exp(energies_w) probs_w /= np.sum(probs_w) if iterate%10 == 0: #print(iterate,nplin.norm(w-w_true)) 
#,nplin.norm(spin_cov_w-spin_cov_obs)) MSE = ((w-w_true)**2).mean() print(iterate,MSE) w += alpha*cov_inv.dot(ops_obs - np.sum(ops_model*probs_w[:,np.newaxis],axis=0)) print('final',iterate,MSE) return w def eps_machine(s,eps_scale=0.1,max_iter=151,alpha=0.1): MSE = np.zeros(max_iter) KL = np.zeros(max_iter) E_av = np.zeros(max_iter) n_seq,n_var = s.shape ops = operators(s) n_ops = ops.shape[1] cov_inv = np.eye(ops.shape[1]) np.random.seed(13) w = np.random.rand(n_ops)-0.5 w_iter = np.zeros((max_iter,n_ops)) for i in range(max_iter): #eps_scale = np.random.rand()/np.max([1.,np.max(np.abs(w))]) energies_w = energy_ops(ops,w) probs_w = np.exp(energies_w*(eps_scale-1)) z_data = np.sum(probs_w) probs_w /= z_data ops_expect_w = np.sum(probs_w[:,np.newaxis]*ops,axis=0) #if iterate%int(max_iter/5.0)==0: #E_exp = (probs_w*energies_w).sum() #KL[i] = -E_exp - np.log(z_data) + np.sum(np.log(np.cosh(w*eps_scale))) + n_var*np.log(2.) E_av[i] = energies_w.mean() MSE[i] = ((w-w_true)**2).mean() #print(RMSE[i]) #print(eps_scale,iterate,nplin.norm(w-w_true),RMSE,KL,E_av) sec_order = w*eps_scale w += alpha*cov_inv.dot((ops_expect_w - sec_order)) #print('final ',eps_scale,iterate,nplin.norm(w-w_true)) #w_iter[i,:] = w return MSE,KL,-E_av,w # + max_iter = 100 n_var,n_seq = 20,80000 g = 0.5 w_true,seqs = generate_seqs(n_var,n_seq,g=g) MSE,KL,E_av,w = eps_machine(seqs,eps_scale=0.5,max_iter=max_iter) # - plt.plot(w_true,w,'ro') plt.plot([-0.6,0.6],[-0.6,0.6]) # + # Z_all_true s_all = np.asarray(list(itertools.product([1.0, -1.0], repeat=n_var))) ops_all = operators(s_all) E_all_true = energy_ops(ops_all,w_true) P_all_true = np.exp(E_all_true) Z_all_true = P_all_true.sum() np.log(Z_all_true) # + # random configs n_random = 10000 i_random = np.random.choice(s_all.shape[0],n_random) s_random = s_all[i_random] ops_random = operators(s_random) E_true = energy_ops(ops_random,w_true) P_true = np.exp(E_true) p0 = P_true/Z_all_true # - seq_unique,i_seq,seq_count1 = 
np.unique(seqs,return_inverse=True,return_counts=True,axis=0) seq_count = seq_count1[i_seq] def partition_data(seqs,eps=0.999): ops = operators(seqs) energies_w = energy_ops(ops,w) probs_w = np.exp(energies_w*(eps-1)) z_data = np.sum(probs_w) probs_w /= z_data x = np.log(seq_count*probs_w).reshape(-1,1) y = eps*energies_w.reshape(-1,1) reg = LinearRegression().fit(x,y) score = reg.score(x,y) b = reg.intercept_[0] m = reg.coef_[0][0] # slope # set slope = 1 lnZ_data = (eps*energies_w).mean() - (np.log(seq_count*probs_w)).mean() # exact (to compare) #probs_all = np.exp(eps*energies_all) #Z_all = np.sum(probs_all) #lnZ_all[i] = np.log(Z_all) print(eps,score,m,b,lnZ_data) return lnZ_data # + lnZ_data = partition_data(seqs,eps=0.9999) print(lnZ_data) # Z_infer: Z_infer = np.exp(lnZ_data) ### NOTE E_infer = energy_ops(ops_random,w) P_infer = np.exp(E_infer) p1 = P_infer/Z_infer plt.plot(-np.log(p0),-np.log(p1),'ko',markersize=3) plt.plot([5,35],[5,35]) # -
Ref/19.08.1400_finite_scaling/.ipynb_checkpoints/find_Z_80k_g05-checkpoint.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#       jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

# # gym-anytrading
#
# `AnyTrading` is a collection of [OpenAI Gym](https://github.com/openai/gym) environments for reinforcement learning-based trading algorithms.
#
# Trading algorithms are mostly implemented in two markets: [FOREX](https://en.wikipedia.org/wiki/Foreign_exchange_market) and [Stock](https://en.wikipedia.org/wiki/Stock). AnyTrading aims to provide some Gym environments to improve and facilitate the procedure of developing and testing RL-based algorithms in this area. This purpose is achieved by implementing three Gym environments: **TradingEnv**, **ForexEnv**, and **StocksEnv**.
#
# TradingEnv is an abstract environment which is defined to support all kinds of trading environments. ForexEnv and StocksEnv are simply two environments that inherit and extend TradingEnv. In the sections that follow, more explanation will be given about them, but before that, some environment properties should be discussed.
#
# ## Installation
#
# ### Via PIP
# ```bash
# pip install gym-anytrading
# ```
#
# ### From Repository
# ```bash
# git clone https://github.com/AminHP/gym-anytrading
#
# cd gym-anytrading
# pip install -e .
#
# ## or
#
# pip install --upgrade --no-deps --force-reinstall https://github.com/AminHP/gym-anytrading/archive/master.zip
# ```
#
# ## Environment Properties
# First of all, **you can't simply expect an RL agent to do everything for you and just sit back in your chair in such complex trading markets!**
# Things need to be simplified as much as possible in order to let the agent learn in a faster and more efficient way. In all trading algorithms, the first thing that should be done is to define **actions** and **positions**. In the two following subsections, I will explain these actions and positions and how to simplify them.
#
# ### Trading Actions
# If you search on the Internet for trading algorithms, you will find them using numerous actions such as **Buy**, **Sell**, **Hold**, **Enter**, **Exit**, etc.
# Referring to the first statement of this section, a typical RL agent can only solve part of the main problem in this area. If you work in trading markets, you will learn that deciding whether to hold, enter, or exit a pair (in FOREX) or stock (in Stocks) is a statistical decision depending on many parameters, such as your budget, the pairs or stocks you trade, your money-distribution policy across multiple markets, etc. It's a massive burden for an RL agent to consider all these parameters, and it could take years to develop such an agent! In that case, you certainly would not use this environment; you would extend it into your own.
#
# So after months of work, I finally found out that these actions just make things complicated with no real positive impact. In fact, they just increase the learning time, and an action like **Hold** will barely be used by a well-trained agent because it doesn't want to miss a single penny. Therefore there is no need for so many actions, and the two actions `Sell=0` and `Buy=1` are adequate to train an agent just as well.
#
# ### Trading Positions
# If you're not familiar with trading positions, refer [here](https://en.wikipedia.org/wiki/Position_\(finance\)). It's a very important concept and you should learn it as soon as possible.
#
# In a simple view: a **Long** position buys shares when prices are low and profits by holding them while their value goes up, and a **Short** position sells shares of high value and uses that money to buy shares at a lower value, keeping the difference as profit.
#
# Again, in some trading algorithms, you may find numerous positions such as **Short**, **Long**, **Flat**, etc. As discussed earlier, I use only `Short=0` and `Long=1` positions.
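# With only two actions and two positions, the trade logic collapses to a tiny rule: a trade happens whenever the chosen action disagrees with the current position, which flips it. The sketch below is an illustration of that idea, not the library's actual source:

```python
# Illustrative sketch of the two-action / two-position scheme described
# above (an assumption-level model, not gym-anytrading's exact code).
SELL, BUY = 0, 1     # actions
SHORT, LONG = 0, 1   # positions

def next_position(position, action):
    """Position after taking `action` while holding `position`."""
    # a trade occurs only when the action disagrees with the position
    if action != position:
        return LONG if position == SHORT else SHORT
    return position

assert next_position(SHORT, BUY) == LONG   # buying while short goes long
assert next_position(LONG, SELL) == SHORT  # selling while long goes short
assert next_position(LONG, BUY) == LONG    # agreeing action changes nothing
```

# Reducing the action set this way is exactly why the extra **Hold**/**Enter**/**Exit** actions add learning time without adding expressive power here.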
#
# ## Trading Environments
# As I mentioned earlier, now it's time to introduce the three environments. Before creating this project, I spent a lot of time searching for a simple and flexible Gym environment for any trading market but didn't find one. They were mostly collections of complex code with many unclear parameters that you couldn't simply read and comprehend. So I decided to implement this project with a strong focus on simplicity, flexibility, and comprehensiveness.
#
# In the three following subsections, I will introduce these trading environments; in the next section, some IPython examples are presented and briefly explained.
#
# ### TradingEnv
# TradingEnv is an abstract class which inherits `gym.Env`. This class aims to provide a general-purpose environment for all kinds of trading markets. Here I explain its public properties and methods. But feel free to take a look at the complete [source code](https://github.com/AminHP/gym-anytrading/blob/master/gym_anytrading/envs/trading_env.py).
#
# * Properties:
# > `df`: An abbreviation for **DataFrame**. It's a **pandas** DataFrame which contains your dataset and is passed in the class' constructor.
# >
# > `prices`: Real prices over time. Used to calculate profit and render the environment.
# >
# > `signal_features`: Extracted features over time. Used to create *Gym observations*.
# >
# > `window_size`: Number of ticks (current and previous ticks) returned as a *Gym observation*. It is passed in the class' constructor.
# >
# > `action_space`: The *Gym action_space* property. Containing discrete values of **0=Sell** and **1=Buy**.
# >
# > `observation_space`: The *Gym observation_space* property. Each observation is a window on `signal_features` from index **current_tick - window_size + 1** to **current_tick**. So `_start_tick` of the environment would be equal to `window_size`. In addition, the initial value for `_last_trade_tick` is **window_size - 1**.
# > # > `shape`: Shape of a single observation. # > # > `history`: Stores the information of all steps. # # * Methods: # > `seed`: Typical *Gym seed* method. # > # > `reset`: Typical *Gym reset* method. # > # > `step`: Typical *Gym step* method. # > # > `render`: Typical *Gym render* method. Renders the information of the environment's current tick. # > # > `render_all`: Renders the whole environment. # > # > `close`: Typical *Gym close* method. # # * Abstract Methods: # > `_process_data`: It is called in the constructor and returns `prices` and `signal_features` as a tuple. In different trading markets, different features need to be obtained. So this method enables our TradingEnv to be a general-purpose environment and specific features can be returned for specific environments such as *FOREX*, *Stocks*, etc. # > # > `_calculate_reward`: The reward function for the RL agent. # > # > `_update_profit`: Calculates and updates total profit which the RL agent has achieved so far. Profit indicates the amount of units of currency you have achieved by starting with *1.0* unit (Profit = FinalMoney / StartingMoney). # > # > `max_possible_profit`: The maximum possible profit that an RL agent can obtain regardless of trade fees. # # ### ForexEnv # This is a concrete class which inherits TradingEnv and implements its abstract methods. Also, it has some specific properties for the *FOREX* market. For more information refer to the [source code](https://github.com/AminHP/gym-anytrading/blob/master/gym_anytrading/envs/forex_env.py). # # * Properties: # > `frame_bound`: A tuple which specifies the start and end of `df`. It is passed in the class' constructor. # > # > `unit_side`: Specifies the side you start your trading. Containing string values of **left** (default value) and **right**. As you know, there are two sides in a currency pair in *FOREX*. 
For example in the *EUR/USD* pair, when you choose the `left` side, your currency unit is *EUR* and you start your trading with 1 EUR. It is passed in the class' constructor. # > # > `trade_fee`: A default constant fee which is subtracted from the real prices on every trade. # # # ### StocksEnv # Same as ForexEnv but for the *Stock* market. For more information refer to the [source code](https://github.com/AminHP/gym-anytrading/blob/master/gym_anytrading/envs/stocks_env.py). # # * Properties: # > `frame_bound`: A tuple which specifies the start and end of `df`. It is passed in the class' constructor. # > # > `trade_fee_bid_percent`: A default constant fee percentage for bids. For example with trade_fee_bid_percent=0.01, you will lose 1% of your money every time you sell your shares. # > # > `trade_fee_ask_percent`: A default constant fee percentage for asks. For example with trade_fee_ask_percent=0.005, you will lose 0.5% of your money every time you buy some shares. # # Besides, you can create your own customized environment by extending TradingEnv or even ForexEnv or StocksEnv with your desired policies for calculating reward, profit, fee, etc. # # ## Examples # # ### Create an environment # + import gym import gym_anytrading env = gym.make('forex-v0') # env = gym.make('stocks-v0') # - # - This will create the default environment. You can change any parameters such as dataset, frame_bound, etc. # ### Create an environment with custom parameters # I put two default datasets for [*FOREX*](https://github.com/AminHP/gym-anytrading/blob/master/gym_anytrading/datasets/data/FOREX_EURUSD_1H_ASK.csv) and [*Stocks*](https://github.com/AminHP/gym-anytrading/blob/master/gym_anytrading/datasets/data/STOCKS_GOOGL.csv) but you can use your own. 
# + from gym_anytrading.datasets import FOREX_EURUSD_1H_ASK, STOCKS_GOOGL custom_env = gym.make('forex-v0', df = FOREX_EURUSD_1H_ASK, window_size = 10, frame_bound = (10, 300), unit_side = 'right') # custom_env = gym.make('stocks-v0', # df = STOCKS_GOOGL, # window_size = 10, # frame_bound = (10, 300)) # - # - It is to be noted that the first element of `frame_bound` should be greater than or equal to `window_size`. # ### Print some information # + print("env information:") print("> shape:", env.shape) print("> df.shape:", env.df.shape) print("> prices.shape:", env.prices.shape) print("> signal_features.shape:", env.signal_features.shape) print("> max_possible_profit:", env.max_possible_profit()) print() print("custom_env information:") print("> shape:", custom_env.shape) print("> df.shape:", env.df.shape) print("> prices.shape:", custom_env.prices.shape) print("> signal_features.shape:", custom_env.signal_features.shape) print("> max_possible_profit:", custom_env.max_possible_profit()) # - # - Here `max_possible_profit` signifies that if the market didn't have trade fees, you could have earned **4.054414887146586** (or **1.122900180008982**) units of currency by starting with **1.0**. In other words, your money is almost *quadrupled*. # ### Plot the environment env.reset() env.render() # - **Short** and **Long** positions are shown in `red` and `green` colors. # - As you see, the starting *position* of the environment is always **Short**. 
# ### A complete example # + import gym import gym_anytrading from gym_anytrading.envs import TradingEnv, ForexEnv, StocksEnv, Actions, Positions from gym_anytrading.datasets import FOREX_EURUSD_1H_ASK, STOCKS_GOOGL import matplotlib.pyplot as plt env = gym.make('forex-v0', frame_bound=(50, 100), window_size=10) # env = gym.make('stocks-v0', frame_bound=(50, 100), window_size=10) observation = env.reset() while True: action = env.action_space.sample() observation, reward, done, info = env.step(action) # env.render() if done: print("info:", info) break plt.cla() env.render_all() plt.show() # - # - You can use `render_all` method to avoid rendering on each step and prevent time-wasting. # - As you see, the first **10** points (`window_size`=10) on the plot don't have a *position*. Because they aren't involved in calculating reward, profit, etc. They just display the first observations. So the environment's `_start_tick` and initial `_last_trade_tick` are **10** and **9**. # #### Mix with `stable-baselines` and `quantstats` # # [Here](https://github.com/AminHP/gym-anytrading/blob/master/examples/a2c_quantstats.ipynb) is an example that mixes `gym-anytrading` with the mentioned famous libraries and shows how to utilize our trading environments in other RL or trading libraries. 
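# The window arithmetic above (`_start_tick` = 10 and an initial `_last_trade_tick` = 9 for `window_size` = 10) can be sketched in plain Python; this is an illustration of the indexing, not the environment's source:

```python
# Sketch of the observation windowing described above (assumed indexing,
# not the environment's actual implementation).
window_size = 10
signal_features = list(range(50))   # one made-up feature value per tick

start_tick = window_size            # first tick with a full window of history
last_trade_tick = start_tick - 1    # 9 when window_size is 10
current_tick = start_tick

# an observation is the window ending at (and including) the current tick
obs = signal_features[current_tick - window_size + 1 : current_tick + 1]
assert len(obs) == window_size
assert obs[-1] == signal_features[current_tick]
```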
# ### Extend and manipulate TradingEnv # # In case you want to process data and extract features outside the environment, it can be simply done by two methods: # # **Method 1 (Recommended):** # + def my_process_data(env): start = env.frame_bound[0] - env.window_size end = env.frame_bound[1] prices = env.df.loc[:, 'Low'].to_numpy()[start:end] signal_features = env.df.loc[:, ['Close', 'Open', 'High', 'Low']].to_numpy()[start:end] return prices, signal_features class MyForexEnv(ForexEnv): _process_data = my_process_data env = MyForexEnv(df=FOREX_EURUSD_1H_ASK, window_size=12, frame_bound=(12, len(FOREX_EURUSD_1H_ASK))) # - # **Method 2:** # + def my_process_data(df, window_size, frame_bound): start = frame_bound[0] - window_size end = frame_bound[1] prices = df.loc[:, 'Low'].to_numpy()[start:end] signal_features = df.loc[:, ['Close', 'Open', 'High', 'Low']].to_numpy()[start:end] return prices, signal_features class MyStocksEnv(StocksEnv): def __init__(self, prices, signal_features, **kwargs): self._prices = prices self._signal_features = signal_features super().__init__(**kwargs) def _process_data(self): return self._prices, self._signal_features prices, signal_features = my_process_data(df=STOCKS_GOOGL, window_size=30, frame_bound=(30, len(STOCKS_GOOGL))) env = MyStocksEnv(prices, signal_features, df=STOCKS_GOOGL, window_size=30, frame_bound=(30, len(STOCKS_GOOGL)))
README.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#       jupytext_version: 1.14.4
#   kernelspec:
#     display_name: 'Python 3.6.13 64-bit (''drlnd'': conda)'
#     name: python3
# ---

import torch
import torch.nn.functional as F
from torch.autograd import Variable
import matplotlib.pyplot as plt

# First, let's create some fake data.
# linspace(start, end, num) returns num evenly spaced values between start and end, inclusive
x = torch.linspace(-5, 5, 200); print(type(x), x.shape)
x = Variable(x); print(type(x), x.shape)  # Variable is a legacy wrapper; modern PyTorch uses tensors directly
x_np = x.data.numpy(); print(type(x_np), x_np.shape)

# Let's build the most common activation functions.
y_relu = F.relu(x).data.numpy()
y_sigmoid = torch.sigmoid(x).data.numpy()
y_tanh = torch.tanh(x).data.numpy()  # torch.tanh replaces the deprecated F.tanh
y_softplus = F.softplus(x).data.numpy()

# Plot the activation functions defined above with plt.
# %matplotlib inline

# +
plt.figure(1, figsize=(8, 6))
plt.subplot(221)
plt.plot(x_np, y_relu, c = 'red', label = 'relu')
plt.ylim((-1, 5))
plt.legend(loc = 'best')

plt.subplot(222)
plt.plot(x_np, y_sigmoid, c = 'red', label = 'sigmoid')
plt.ylim((-0.2, 1.2))
plt.legend(loc = 'best')

plt.subplot(223)
plt.plot(x_np, y_tanh, c = 'red', label = 'tanh')
plt.ylim((-1.2, 1.2))
plt.legend(loc = 'best')

plt.subplot(224)
plt.plot(x_np, y_softplus, c = 'red', label = 'softplus')  # was plotting y_relu by mistake
plt.ylim((-0.2, 6))
plt.legend(loc = 'best')

plt.show()
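# As a quick numerical sanity check, the four activations can be re-derived from their plain math formulas — a standalone sketch, independent of PyTorch:

```python
import math

# scalar versions of the activations plotted above
def relu(x):     return max(0.0, x)
def sigmoid(x):  return 1.0 / (1.0 + math.exp(-x))
def softplus(x): return math.log(1.0 + math.exp(x))

# softplus is a smooth approximation of relu: the curves agree for large |x|
assert abs(softplus(10.0) - relu(10.0)) < 1e-4
assert abs(softplus(-10.0) - relu(-10.0)) < 1e-4

# tanh and sigmoid are rescalings of each other: tanh(x) = 2*sigmoid(2x) - 1
assert abs(math.tanh(1.5) - (2 * sigmoid(3.0) - 1)) < 1e-12
```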
my_hand/203_activation.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 2 # language: python # name: KERNEL_NAME # --- import theano from theano import tensor # declare two symbolic floating-point scalars a = tensor.dscalar() b = tensor.dscalar() # create a simple expression c = a + b # convert the expression into a callable object that takes (a,b) # values as input and computes a value for c f = theano.function([a,b], c) # bind 1.5 to 'a', 2.5 to 'b', and evaluate 'c' assert 4.0 == f(1.5, 2.5)
integration-tests/examples/test_templates/deeplearning/template_theano.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#       jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

# # TEXT MINING for BEGINNER
# - This material was created for research and teaching purposes in text mining.
# - If you would like to use this material for lectures, please contact the email address below first.
# - Unauthorized distribution of this material is prohibited.
# - For inquiries regarding lectures, copyright, publishing, patents, or co-authorship, please get in touch.
# - **Contact : ADMIN(<EMAIL>)**
#
# ---
# ## DAY 01. Python Basics
# - Covers the basic Python syntax needed for text mining.
#
# ---

# ### 1. The PRINT statement: prints characters or numbers to the screen.
#
# ---

# A string following a hash (#) is a comment and has no effect on the result.
# print(12345)
'''
When you need a multi-line comment,
wrap it in three single quotes, like this.
print("hello world!")
'''
print("hello world!")

# 1) Print a number.
print(12345)

# 2) Print a string enclosed in double quotes.
print("Enter the content inside double quotes.")

# 3) Print a string enclosed in single quotes.
print('Enter the content inside single quotes.')

# 4) Put 'single quotes' inside "double quotes" and print them.
print("Enter 'single quotes' inside double quotes and print them.")

# 5) Put a backslash before a double quote to print "double quotes".
print("Printing \"double quotes\".")

# 6-1) Concatenate strings with the plus (+) operator and print them.
print("Using the plus (+) operator, " + "strings are concatenated and printed.")

# 6-2) Combine strings with a comma (,) and print them.
print("Using a comma (,),", "strings are combined and printed.")

# 7-1) When combining a number with a string, you must first convert it to a string with str(NUM).
print("With str(NUM), a 'string+" + str(10) + "' combination of text and number is possible.")

# 7-2) When combining a number with a string using a comma (,), str(NUM) is not required.
print("With a comma (,), a 'string+", 10, "' combination of text and number is possible.")

# ### 2. The PRINT statement: inserts a tab or a linefeed into a string.
#
# ---

# 1) print("string") appends a newline (\n) at the end.
print("Because there is a newline at the end of the string,")
print("the next print continues on the following line.")

# 2) With the end option, the trailing newline (\n) can be replaced with another string.
print("Because the trailing newline was replaced with another string, ", end="[another string] ")
print("this continues on the same line.")

# 3) Passing a newline (\n) to the end option behaves the same as the default print("string").
print("You can also type the trailing newline yourself,", end="\n")
print("and the result is the same.")

# 4) A tab (\t) can also be used as the trailing string.
print("If you put a tab (\\t) instead of the trailing newline,", end="\t")
print("this continues after the tab.")

# 5) Tab (\t) and newline (\n) symbols can be placed anywhere inside the quotes.
print("Inside the quotes, you can put them \tanywhere\nyou like.")

# 6) A comma (,) outside the quotes joins the strings and adds a space between them.
print("Even", "without", "typing", "spaces,", "blanks are included.")

# ### 3. The PRINT statement: arithmetic between numbers is possible inside print.
#
# ---

# 1) Addition
print(1 + 1)

# 2) Subtraction
print(9 - 2)

# 3) Multiplication
print(5 * 4)

# 4) Division
print(8 / 2)

# 5) Exponentiation
print(2 ** 10)

# 6) Integer (floor) division
print(25 // 4)

# 7) Remainder (modulo)
print(25 % 4)

# 8) Equality operators
print(2 == 2)
print(2 != 2)

# 9) Comparison operators
print(3 > 2)
print(3 < 2)

# 10-1) Logical operator (and)
print(True and True)
print(False and True)
print(True and False)
print(False and False)

# 10-2) Logical operator (or)
print(True or True)
print(False or True)
print(True or False)
print(False or False)

# 11-1) Combined logical expressions
print(2 > 1 and 3 < 3)
print(2 > 1 or 3 <= 3)

# 11-2) Combined logical expressions (bitwise operators)
print((2 > 1) & (3 < 3))
print((2 > 1) | (3 <= 3))

# ### 4. The PRINT statement: uses variables instead of literal strings and numbers.
#
# ---

# 1-1) Store values or data structures in variables.
a = 10
b = 20
new_list = [1, 2, 3]
new_dict = {"Seoul": 1, "Busan": 2, "Daegu": 3}
new_text = " A string with a leading space and a trailing newline. \n"

# 1-2) Print the variables holding the values or data structures.
print("a, b :", a, b)
print("new_list :", new_list)
print("new_dict :", new_dict)
print("new_text :", new_text)

# 1-3) Check the type of the value or data structure stored in a variable with type(variable).
print("Type of a, b :", type(a), type(b))
print("Type of new_list :", type(new_list))
print("Type of new_dict :", type(new_dict))
print("Type of new_text :", type(new_text))

# 1-4) Arithmetic between numbers also works through variables.
print("a + b :", a + b)
print("a - b :", a - b)
print("a * b :", a * b)
print("a / b :", a / b)

# 1-5) Operators combined with the equals sign store the result directly back into the variable.
a = 1
print("a :", a)
a = a + 1
print("a + 1 :", a)
a += 1
print("a += 1 :", a)
a -= 5
print("a -= 5 :", a)
a *= 3
print("a *= 3 :", a)
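# Modern Python (3.6+) also offers f-strings, which embed values in a string without str(NUM) or comma-joining — a small addition beyond the examples above:

```python
# f-strings format values inline, so neither str() nor comma-joining is needed
a, b = 10, 20
print(f"a + b = {a + b}")                 # a + b = 30
print(f"22 / 7 is roughly {22 / 7:.3f}")  # 22 / 7 is roughly 3.143
```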
practice-note/01_text-mining-for-beginner_python-basic.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#       jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

# # Performing Database-style Operations on Dataframes
#
# ## About the data
# In this notebook, we will be using daily weather data that was taken from the [National Centers for Environmental Information (NCEI) API](https://www.ncdc.noaa.gov/cdo-web/webservices/v2). The [`0-weather_data_collection.ipynb`](./0-weather_data_collection.ipynb) notebook contains the process that was followed to collect the data. Consult the dataset's [documentation](https://www1.ncdc.noaa.gov/pub/data/cdo/documentation/GHCND_documentation.pdf) for information on the fields.
#
# *Note: The NCEI is part of the National Oceanic and Atmospheric Administration (NOAA) and, as you can see from the URL for the API, this resource was created when the NCEI was called the NCDC. Should the URL for this resource change in the future, you can search for "NCEI weather API" to find the updated one.*
#
# ## Setup

# +
import pandas as pd

weather = pd.read_csv('data/nyc_weather_2018.csv')
weather.head()
# -

# ## Querying DataFrames
# The `query()` method is an easier way of filtering based on some criteria.
For example, we can use it to find all entries where snow was recorded from a station with `US1NY` in its station ID: snow_data = weather.query('datatype == "SNOW" and value > 0 and station.str.contains("US1NY")') snow_data.head() # This is equivalent to querying the `weather.db` SQLite database for # # ```sql # SELECT * # FROM weather # WHERE datatype == "SNOW" AND value > 0 AND station LIKE "%US1NY%" # ``` # + import sqlite3 with sqlite3.connect('data/weather.db') as connection: snow_data_from_db = pd.read_sql( 'SELECT * FROM weather WHERE datatype == "SNOW" AND value > 0 and station LIKE "%US1NY%"', connection ) snow_data.reset_index().drop(columns='index').equals(snow_data_from_db) # - # Note this is also equivalent to creating Boolean masks: weather[ (weather.datatype == 'SNOW') & (weather.value > 0) & weather.station.str.contains('US1NY') ].equals(snow_data) # ## Merging DataFrames # We have data for many different stations each day; however, we don't know what the stations are&mdash;just their IDs. We can join the data in the `weather_stations.csv` file which contains information from the `stations` endpoint of the NCEI API. Consult the [`0-weather_data_collection.ipynb`](./0-weather_data_collection.ipynb) notebook to see how this was collected. It looks like this: station_info = pd.read_csv('data/weather_stations.csv') station_info.head() # As a reminder, the weather data looks like this: weather.head() # We can join our data by matching up the `station_info.id` column with the `weather.station` column. Before doing that though, let's see how many unique values we have: station_info.id.describe() # While `station_info` has one row per station, the `weather` dataframe has many entries per station. Notice it also has fewer uniques: weather.station.describe() # When working with joins, it is important to keep an eye on the row count. Some join types will lead to data loss. 
Remember that we can get this with `shape`: station_info.shape[0], weather.shape[0] # Since we will be doing this often, it makes more sense to write a function: def get_row_count(*dfs): return [df.shape[0] for df in dfs] get_row_count(station_info, weather) # By default, `merge()` performs an inner join. We simply specify the columns to use for the join. The left dataframe is the one we call `merge()` on, and the right one is passed in as an argument: inner_join = weather.merge(station_info, left_on='station', right_on='id') inner_join.sample(5, random_state=0) # We can remove the duplication of information in the `station` and `id` columns by renaming one of them before the merge and then simply using `on`: weather.merge(station_info.rename(dict(id='station'), axis=1), on='station').sample(5, random_state=0) # We are losing stations that don't have weather observations associated with them, if we don't want to lose these rows, we perform a right or left join instead of the inner join: # + left_join = station_info.merge(weather, left_on='id', right_on='station', how='left') right_join = weather.merge(station_info, left_on='station', right_on='id', how='right') right_join[right_join.datatype.isna()].head() # - # The left and right join as we performed above are equivalent because the side for which we kept the rows without matches was the same in both cases: left_join.sort_index(axis=1).sort_values(['date', 'station'], ignore_index=True).equals( right_join.sort_index(axis=1).sort_values(['date', 'station'], ignore_index=True) ) # Note we have additional rows in the left and right joins because we kept all the stations that didn't have weather observations: get_row_count(inner_join, left_join, right_join) # If we query the station information for stations that have `US1NY` in their ID and perform an outer join, we can see where the mismatches occur: # + outer_join = weather.merge( station_info[station_info.id.str.contains('US1NY')], left_on='station', right_on='id', 
    how='outer', indicator=True
)

pd.concat([
    outer_join.query(f'_merge == "{kind}"').sample(2, random_state=0)
    for kind in outer_join._merge.unique()
]).sort_index()
# -

# These joins are equivalent to their SQL counterparts. Below is the inner join. Note that to use `equals()` you will have to do some manipulation of the dataframes to line them up:

# +
import sqlite3

with sqlite3.connect('data/weather.db') as connection:
    inner_join_from_db = pd.read_sql(
        'SELECT * FROM weather JOIN stations ON weather.station == stations.id',
        connection
    )

inner_join_from_db.shape == inner_join.shape
# -

# Let's revisit the dirty data from chapter 3's [`5-handling_data_issues.ipynb`](../ch_03/5-handling_data_issues.ipynb) notebook.
#
# Data meanings:
# - `PRCP`: precipitation in millimeters
# - `SNOW`: snowfall in millimeters
# - `SNWD`: snow depth in millimeters
# - `TMAX`: maximum daily temperature in Celsius
# - `TMIN`: minimum daily temperature in Celsius
# - `TOBS`: temperature at time of observation in Celsius
# - `WESF`: water equivalent of snow in millimeters
#
# Read in the data, dropping duplicates and the uninformative `SNWD` column:

dirty_data = pd.read_csv(
    'data/dirty_data.csv', index_col='date'
).drop_duplicates().drop(columns='SNWD')
dirty_data.head()

# We need to create two dataframes for the join. We will also drop some unnecessary columns for easier viewing:

valid_station = dirty_data.query('station != "?"').drop(columns=['WESF', 'station'])
station_with_wesf = dirty_data.query('station == "?"').drop(columns=['station', 'TOBS', 'TMIN', 'TMAX'])

# Our column for the join is the index in both dataframes, so we must specify `left_index` and `right_index`:

valid_station.merge(
    station_with_wesf, how='left', left_index=True, right_index=True
).query('WESF > 0').head()

# The columns that existed in both dataframes but didn't form part of the join got suffixes added to their names: `_x` for columns from the left dataframe and `_y` for columns from the right dataframe.
We can customize this with the `suffixes` argument: valid_station.merge( station_with_wesf, how='left', left_index=True, right_index=True, suffixes=('', '_?') ).query('WESF > 0').head() # Since we are joining on the index, an easier way is to use the `join()` method instead of `merge()`. Note that the suffix parameter is now `lsuffix` for the left dataframe's suffix and `rsuffix` for the right one's: valid_station.join(station_with_wesf, how='left', rsuffix='_?').query('WESF > 0').head() # Joins can be very resource-intensive, so it's a good idea to figure out what type of join you need using set operations before trying the join itself. The `pandas` set operations are performed on the index, so whichever columns we will be joining on will need to be the index. Let's go back to the `weather` and `station_info` dataframes and set the station ID columns as the index: weather.set_index('station', inplace=True) station_info.set_index('id', inplace=True) # The intersection will tell us the stations that are present in both dataframes. The result will be the index when performing an inner join: weather.index.intersection(station_info.index) # The set difference will tell us what we lose from each side. When performing an inner join, we lose nothing from the `weather` dataframe: weather.index.difference(station_info.index) # We lose 169 stations from the `station_info` dataframe, however: station_info.index.difference(weather.index) # The symmetric difference tells us what we lose from both sides. It is the combination of the set differences in each direction: # + ny_in_name = station_info[station_info.index.str.contains('US1NY')] ny_in_name.index.difference(weather.index).shape[0]\ + weather.index.difference(ny_in_name.index).shape[0]\ == weather.index.symmetric_difference(ny_in_name.index).shape[0] # - # The union will show us everything that will be present after a full outer join. 
Note that we pass in the unique values of the index to make sure we can see the number of stations we will be left with: weather.index.unique().union(station_info.index) # Note that the symmetric difference is actually the union of the set differences: # + ny_in_name = station_info[station_info.index.str.contains('US1NY')] ny_in_name.index.difference(weather.index).union(weather.index.difference(ny_in_name.index)).equals( weather.index.symmetric_difference(ny_in_name.index) ) # - # <hr> # <div> # <a href="../ch_03/5-handling_data_issues.ipynb"> # <button>&#8592; Chapter 3</button> # </a> # <a href="./0-weather_data_collection.ipynb"> # <button>Weather Data Collection</button> # </a> # <a href="./2-dataframe_operations.ipynb"> # <button style="float: right;">Next Notebook &#8594;</button> # </a> # </div> # <hr>
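# The set-operation workflow above can be condensed into a small standalone sketch. The station IDs below are made up for illustration:

```python
import pandas as pd

# hypothetical station IDs for the left (weather) and right (station_info) sides
left = pd.Index(['US1NYA', 'US1NYB', 'USW0001'])
right = pd.Index(['US1NYA', 'USW0001', 'USC0999'])

# an inner join keeps the intersection
assert sorted(left.intersection(right)) == ['US1NYA', 'USW0001']
# a left join additionally keeps left-only rows (matched with NaN on the right)
assert sorted(left.difference(right)) == ['US1NYB']
# the symmetric difference is the union of the two one-sided differences
assert sorted(left.symmetric_difference(right)) == sorted(
    set(left.difference(right)) | set(right.difference(left))
)
```

# Running these cheap index checks first tells you the join type you need before paying for the join itself.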
ch_04/1-querying_and_merging.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Fitting impedance spectra # ## 1. Import and initialize equivalent circuit(s) # # To begin we will import the Randles' circuit and a custom circuit from the impedance package. A full list of currently available circuits are available in the [documentation](https://impedancepy.readthedocs.io/en/latest/circuits.html). # + import sys sys.path.append('../../../') from impedance.circuits import Randles, CustomCircuit # - # The classes we just imported represent different equivalent circuit models. To actually use them we want to initialize a specific instance and provide an initial guess for the parameters and any other options. # # *E.g. for the randles circuit, one of the options is for a constant phase element (CPE) instead of an ideal capacitor.* randles = Randles(initial_guess=[.01, .005, .1, .001, 200]) randlesCPE = Randles(initial_guess=[.01, .005, .1, .9, .001, 200], CPE=True) # Defining the custom circuit works a little differently. Here we pass a string comprised of the circuit elements grouped either in series (separated with a `-`) or in parallel (using the form `p(X,Y)`). Elements with multiple parameters are given in the form `X/Y` customCircuit = CustomCircuit(initial_guess=[.01, .005, .1, .005, .1, .001, 200], circuit='R0-p(R1,C1)-p(R2,C2)-W1') # As of version 0.4, you can now specify values you want to hold constant. For example, customConstantCircuit = CustomCircuit(initial_guess=[None, .005, .1, .005, .1, .001, None], constants={'R0': 0.02, 'W1_1': 200}, circuit='R0-p(R1,C1)-p(R2,C2)-W1') # Each of the circuit objects we create can be printed in order to see the properties that have been defined for that circuit. print(customConstantCircuit) # ## 2. 
Formulate data # Several convenience functions for importing data exist in the impedance.preprocessing module; however, here we will simply read in a `.csv` file containing frequencies as well as real and imaginary impedances using the numpy package. # + import numpy as np data = np.genfromtxt('../../../data/exampleData.csv', delimiter=',') frequencies = data[:,0] Z = data[:,1] + 1j*data[:,2] # keep only the impedance data in the first quadrant frequencies = frequencies[np.imag(Z) < 0] Z = Z[np.imag(Z) < 0] # - # ## 3. Fit the equivalent circuits to a spectrum # Each of the circuit classes has a `.fit()` method which finds the best-fitting parameters. # # After fitting a circuit, the fit parameters rather than the initial guesses are shown when printing. # + randles.fit(frequencies, Z) randlesCPE.fit(frequencies, Z) customCircuit.fit(frequencies, Z) customConstantCircuit.fit(frequencies, Z) print(customConstantCircuit) # - # ## 4a. Predict circuit model and visualize with matplotlib # + import matplotlib.pyplot as plt from impedance.plotting import plot_nyquist f_pred = np.logspace(5,-2) randles_fit = randles.predict(f_pred) randlesCPE_fit = randlesCPE.predict(f_pred) customCircuit_fit = customCircuit.predict(f_pred) customConstantCircuit_fit = customConstantCircuit.predict(f_pred) fig, ax = plt.subplots(figsize=(5,5)) plot_nyquist(ax, frequencies, Z) plot_nyquist(ax, f_pred, randles_fit, fmt='-') plot_nyquist(ax, f_pred, randlesCPE_fit, fmt='-') plot_nyquist(ax, f_pred, customCircuit_fit, fmt='-') plot_nyquist(ax, f_pred, customConstantCircuit_fit, fmt='-') ax.legend(['Data', 'Randles', 'Randles w/ CPE', 'Custom Circuit', 'Custom Circuit w/ Constant R0 and W1_1']) plt.show() # - # ## 4b. Or use the convenient plotting method included in the package # # This is an experimental feature with many improvements (interactive plots, Bode plots, better confidence intervals) coming soon!!
# + randles.plot(f_data=frequencies, Z_data=Z) randlesCPE.plot(f_data=frequencies, Z_data=Z) customCircuit.plot(f_data=frequencies, Z_data=Z) customConstantCircuit.plot(f_data=frequencies, Z_data=Z) plt.show() # -
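Looking back at the circuit-string notation (`R0-p(R1,C1)-...`), the series/parallel grouping maps directly onto impedance arithmetic: elements in series add, and elements in parallel add as reciprocals. A minimal plain-Python sketch for the fragment `R0-p(R1,C1)`, independent of the impedance package and using made-up parameter values:

```python
import math

def z_series_parallel(f, R0, R1, C1):
    """Impedance of R0-p(R1,C1): R0 in series with R1 parallel to C1, at f in Hz."""
    w = 2 * math.pi * f
    z_c = 1 / (1j * w * C1)          # ideal capacitor impedance
    z_par = 1 / (1 / R1 + 1 / z_c)   # parallel elements add as reciprocals
    return R0 + z_par                # series elements add directly

# Low frequency: the capacitor blocks, so |Z| approaches R0 + R1.
# High frequency: the capacitor shorts out R1, so |Z| approaches R0.
print(abs(z_series_parallel(1e-3, 0.01, 0.005, 0.1)))
print(abs(z_series_parallel(1e6, 0.01, 0.005, 0.1)))
```

This is only a sketch of the arithmetic the circuit string encodes; the package itself handles arbitrary nesting and the Warburg element `W1` as well.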
docs/source/examples/fitting_example.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 (ipykernel) # language: python # name: python3 # --- # # 1. Interactive Gershgorin Theorem Plotter # ## 1.1 Background # This interactive app visualizes the results of the Gershgorin Circle Theorem. Choose the integer values of a 4x4 matrix below, and the notebook will draw the Gershgorin circles. # # In brief, the Gershgorin Circle Theorem tells us that every eigenvalue of the 4x4 matrix lies within at least one of the circles. A deeper dive [here](https://tdgriffith.github.io/blog/2020/04/06/gershgorin.html). # # So, if you draw the circles and they all fall in the stable left half plane, all of the system's eigenvalues must lie in the left half plane. Conversely, if all the circles lie in the right half plane, all the system eigenvalues are unstable. Finally, if it's a mix of both, we don't learn anything except that sometimes math makes us sad.
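The theorem is easy to check numerically. A minimal plain-Python sketch for a 2x2 example matrix (eigenvalues via the quadratic formula); this uses the row-sum discs, while the notebook's code also computes the column sums and keeps the smaller radius, which the theorem equally permits:

```python
import cmath

A = [[-5.0, 1.0],
     [2.0, -4.0]]

# Gershgorin discs: center = diagonal entry, radius = sum of |off-diagonal| in that row
discs = []
for i, row in enumerate(A):
    radius = sum(abs(v) for j, v in enumerate(row) if j != i)
    discs.append((row[i], radius))

# Eigenvalues of a 2x2 matrix from its characteristic polynomial
a, b, c, d = A[0][0], A[0][1], A[1][0], A[1][1]
root = cmath.sqrt((a + d) ** 2 - 4 * (a * d - b * c))
eigs = [((a + d) + root) / 2, ((a + d) - root) / 2]

# Every eigenvalue lies in at least one disc
for lam in eigs:
    assert any(abs(lam - center) <= r for center, r in discs), lam
print(discs, [e.real for e in eigs])
```

Here both discs lie in the left half plane, so the theorem certifies stability without computing the eigenvalues at all.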
import numpy as np import pandas as pd import math from bqplot import pyplot as plt import random from IPython.display import display from ipywidgets.widgets import IntSlider, HBox, Button, interact import ipywidgets.widgets as widgets # %matplotlib inline w11=widgets.IntText( value=-5, disabled=False ) w12=widgets.IntText( value=-1, disabled=False ) w13=widgets.IntText( value=0, disabled=False ) w14=widgets.IntText( value=1, disabled=False ) w21=widgets.IntText( value=1, disabled=False ) w22=widgets.IntText( value=-4, disabled=False ) w23=widgets.IntText( value=1, disabled=False ) w24=widgets.IntText( value=1, disabled=False ) w31=widgets.IntText( value=1, disabled=False ) w32=widgets.IntText( value=0, disabled=False ) w33=widgets.IntText( value=-7, disabled=False ) w34=widgets.IntText( value=1, disabled=False ) w41=widgets.IntText( value=-1, disabled=False ) w42=widgets.IntText( value=-1, disabled=False ) w43=widgets.IntText( value=-1, disabled=False ) w44=widgets.IntText( value=-11, disabled=False ) # + # system_order=w.value; system_order # - # if system_order==2: # if w2.value=='Stable': # input_mat=np.array([[-3, 1], [0.5, -1]]) # if w2.value=='Unstable': # input_mat=np.array([[2, 3], [0.5, 4]]) # if w2.value=='Random': # input_mat=make_spd_matrix(2) # + # test=np.random.randint(-20,0,size=(system_order,system_order)) # + # np.linalg.eigvals(input_mat) # + # test=make_spd_matrix(system_order) # + #input_mat=np.matrix('10 -1 0 1; 0.2 8 0.2 0.2;1 1 2 1; -1 -1 -1 -11'); input_mat # - input_mat=np.array([[-5, -1, 0, 1], [1, -4, 1, 1], [1, 0, -7, 1], [-1, -1, -1, -11]]); eigs_real=np.linalg.eigvals(input_mat) eigs_real; # + # input_mat=np.array([[2, 3], [0.5, 4]]); input_mat # + #input_mat=np.array([[ 10, -1, 0, 1], [0.2, 8, 0.2, 0.2], [1, 1, 2, 1], [-1, -1, -1, -11]]); input_mat # - center=np.diagonal(input_mat); circle_center=np.transpose(np.copy(center)); circle_center; n_circles=len(circle_center); n_circles; df = pd.DataFrame({'Circle_Center': circle_center, 
'Y_axis': np.transpose(np.zeros(n_circles))}) df; np.fill_diagonal(input_mat,0); input_mat; row_sum=np.sum(abs(input_mat), axis=0); row_sum; col_sum=np.transpose(np.sum(abs(input_mat), axis=1)); col_sum; circle_radius=np.minimum(row_sum,col_sum); circle_radius; circle_area=math.pi*circle_radius*circle_radius; circle_area; df['Radius']=circle_radius df['Area']=circle_area df['Eigenvalues']=eigs_real circle_radius[0]; circle_center[0]; # + # import matplotlib.pyplot as plt # circle0 = plt.Circle((circle_center[0], 0), circle_radius[0], color='r') # circle1 = plt.Circle((circle_center[1], 0), circle_radius[1], color='b') # circle2 = plt.Circle((circle_center[2], 0), circle_radius[2], color='g') # circle3 = plt.Circle((circle_center[3], 0), circle_radius[3], color='g') # fig, ax = plt.subplots() # plt.xlim(-15,15) # plt.ylim(-15,15) # plt.grid(linestyle='--') # ax.set_aspect(1) # ax.add_artist(circle0) # ax.add_artist(circle1) # ax.add_artist(circle2) # ax.add_artist(circle3) # plt.title('Gershorin', fontsize=8) # plt.show() # - # + # New functions def circ(x, radius=2): y = np.sqrt(np.clip(radius**2 - x**2, .0001, None)) y2 = -y return y, y2 # def circ(x, radius=2): # y = np.sqrt(radius**2 - x**2, .0001) # return y def _simulate_many_pis(n_simulations, n_per_sample): n_simulations = np.max([1, n_simulations]) ratios = [] for ratio in range(n_simulations): sample, inside_dots, ratio, samples_y_curve = _simulate_pi(n_per_sample) ratios.append(ratio) return ratios, sample, inside_dots, samples_y_curve def _simulate_pi(n_samples): sample = np.random.random((n_samples, 2)) * 2 inside_dots, ratio, samples_y_curve = calculate_ratio(sample) return sample, inside_dots, ratio, samples_y_curve def calculate_ratio(sample): # Calculate the y-value of the curve for each input sample's x value samples_y_curve = circ(sample[:, 0]) inside_dots = samples_y_curve > sample[:, 1] # Calculate ratio of inside dots to outside dots for this quarter circle ratio = 4 * (sum(inside_dots) / 
len(inside_dots)) return inside_dots, ratio, samples_y_curve def _update_plot(change=None): df=[] v11 = w11.value v12 = w12.value v13 = w13.value v14 = w14.value v21 = w21.value v22 = w22.value v23 = w23.value v24 = w24.value v31 = w31.value v32 = w32.value v33 = w33.value v34 = w34.value v41 = w41.value v42 = w42.value v43 = w43.value v44 = w44.value input_mat=np.array([[v11, v12, v13, v14], [v21, v22, v23, v24], [v31, v32, v33, v34], [v41, v42, v43, v44]]) eigs_real=np.linalg.eigvals(input_mat) center=np.diagonal(input_mat); circle_center=np.transpose(np.copy(center)); n_circles=len(circle_center); df = pd.DataFrame({'Circle_Center': circle_center, 'Y_axis': np.transpose(np.zeros(n_circles))}) np.fill_diagonal(input_mat,0); row_sum=np.sum(abs(input_mat), axis=0); col_sum=np.transpose(np.sum(abs(input_mat), axis=1)); circle_radius=np.minimum(row_sum,col_sum); circle_area=math.pi*circle_radius*circle_radius; df['Radius']=circle_radius; df['Area']=circle_area; df['Eigenvalues']=eigs_real plt.clear() plot_data = {'ratios': []} MAX_SAMPLES = 1000 MAX_SIMULATIONS = 50 width = '350px' # Draw the circle fig_circle = plt.figure() for i in range(0,len(df.Radius)): if df.Radius[i]+df.Circle_Center[i]>=0: color_stab='FireBrick' else: color_stab='Green' radius=df.Radius[i] x = np.linspace(-radius, radius, 1000) y = circ(x, radius) half_circle = plt.plot(x=x+df.Circle_Center[i], y=y, s=[100] * len(y), colors=[color_stab]*len(x)) #scat = plt.scatter([], [], options={'s': 1}) plt.xlim(-(max(df.Radius)+max(abs(df.Circle_Center))), (max(df.Radius)+max(abs(df.Circle_Center)))) plt.ylim(-(max(df.Radius)+max(abs(df.Circle_Center))),(max(df.Radius)+max(abs(df.Circle_Center)))) fig_circle.layout.height = width fig_circle.layout.width = width fig_circle.title="Gershgorin Circle Plot" with output: display(df) # - df.Radius[1]+df.Circle_Center[1]; # # 2. Input User Defined Matrix # There must be a better way to input the matrix, but interactive notebooks can be a little limited. 
# # Green circles fall in the stable region. Red circles fall in the unstable region. Black crosses mark the locations of the actual eigenvalues. Can you find a matrix which has a Gershgorin circle in the right half plane, but is still stable? # # # # # $\dot{x}=Ax$ # # $A=$ # + from ipywidgets import Layout, Button, Box, VBox box_layout = Layout(display='flex', flex_flow='row', align_items='stretch', width='100%') row1 = Box(children=[w11, w12, w13, w14], layout=box_layout) row2 = Box(children=[w21, w22, w23, w24], layout=box_layout) row3 = Box(children=[w31, w32, w33, w34], layout=box_layout) row4 = Box(children=[w41, w42, w43, w44], layout=box_layout) VBox([row1, row2, row3, row4]) # - np.real(df.Eigenvalues[1]); # # 3. Output Plots # + # Set up figure plt.clear() plot_data = {'ratios': []} MAX_SAMPLES = 1000 MAX_SIMULATIONS = 50 width = '350px' # Draw the circle fig_circle = plt.figure() for i in range(0,len(df.Radius)): if df.Radius[i]+df.Circle_Center[i]>=0: color_stab='FireBrick' else: color_stab='Green' radius=df.Radius[i] x = np.linspace(-radius, radius, 1000) y = circ(x, radius) half_circle = plt.plot(x=x+df.Circle_Center[i], y=y, s=[100] * len(y), colors=[color_stab]*len(x)) #scat = plt.scatter([], [], options={'s': 1}) horz_line=plt.plot(x=np.zeros(100), y=np.linspace(-40,40,100)) scat=plt.scatter(x=np.real(df.Eigenvalues),y=np.imag(df.Eigenvalues), colors=['black'], marker='cross') plt.xlim(-(max(df.Radius)+max(abs(df.Circle_Center))), (max(df.Radius)+max(abs(df.Circle_Center)))) plt.ylim(-(max(df.Radius)+max(abs(df.Circle_Center))),(max(df.Radius)+max(abs(df.Circle_Center)))) fig_circle.layout.height = width fig_circle.layout.width = width fig_circle.title="Gershgorin Circle Plot" xax, yax = plt.axes()['x'], plt.axes()['y'] # # Figure with PI estimation # fig_ratios = plt.figure() # fig_ratios.layout.height = width # fig_ratios.layout.width = width # fig_ratios.title = "Estimation of Pi" # pi_line_x = np.arange(0, MAX_SAMPLES, 100) # line = 
plt.plot(pi_line_x, [np.pi]*len(pi_line_x), ls='--', colors=['black']*len(pi_line_x)) # ratio_scatter = plt.scatter([], [], default_size=10) # plt.xlim(0, MAX_SAMPLES) # plt.ylim(2, 4) # Slider to control number of samples samplesslider = IntSlider(value=20, min=10, max=MAX_SAMPLES-1, description="$N_{per\_sample}$") # Slider to control number of simulations to run simulationslider = IntSlider(value=1, min=1, max=MAX_SIMULATIONS-1, description="$N_{simulations}$") # Button to run a simulation simulatebutton = Button(description="Calculate") simulatebutton.on_click(_update_plot) # simulatebutton.on_click(display(df)) # Initialize our UI box = HBox([fig_circle]) box2 = HBox([simulatebutton]) display(box) # + button = widgets.Button(description="Click Me!") output = widgets.Output() display(button, output) def on_button_clicked(b): df=[] v11 = w11.value v12 = w12.value v13 = w13.value v14 = w14.value v21 = w21.value v22 = w22.value v23 = w23.value v24 = w24.value v31 = w31.value v32 = w32.value v33 = w33.value v34 = w34.value v41 = w41.value v42 = w42.value v43 = w43.value v44 = w44.value input_mat=np.array([[v11, v12, v13, v14], [v21, v22, v23, v24], [v31, v32, v33, v34], [v41, v42, v43, v44]]) eigs_real=np.linalg.eigvals(input_mat) center=np.diagonal(input_mat); circle_center=np.transpose(np.copy(center)); n_circles=len(circle_center); df = pd.DataFrame({'Circle_Center': circle_center, 'Y_axis': np.transpose(np.zeros(n_circles))}) np.fill_diagonal(input_mat,0); row_sum=np.sum(abs(input_mat), axis=0); col_sum=np.transpose(np.sum(abs(input_mat), axis=1)); circle_radius=np.minimum(row_sum,col_sum); circle_area=math.pi*circle_radius*circle_radius; df['Radius']=circle_radius; df['Area']=circle_area; df['Eigenvalues']=eigs_real plt.clear() plot_data = {'ratios': []} MAX_SAMPLES = 1000 MAX_SIMULATIONS = 50 width = '350px' # Draw the circle #fig_circle = plt.figure() for i in range(0,len(df.Radius)): if df.Radius[i]+df.Circle_Center[i]>=0: color_stab='FireBrick' else: 
color_stab='Green' radius=df.Radius[i] x = np.linspace(-radius, radius, 1000) y = circ(x, radius) half_circle = plt.plot(x=x+df.Circle_Center[i], y=y, s=[100] * len(y), colors=[color_stab]*len(x)) #scat = plt.scatter([], [], options={'s': 1}) scat=plt.scatter(x=np.real(df.Eigenvalues),y=np.imag(df.Eigenvalues), colors=['black'], marker='cross') horz_line=plt.plot(x=np.zeros(100), y=np.linspace(-40,40,100)) plt.xlim(-(max(df.Radius)+max(abs(df.Circle_Center))), (max(df.Radius)+max(abs(df.Circle_Center)))) plt.ylim(-(max(df.Radius)+max(abs(df.Circle_Center))),(max(df.Radius)+max(abs(df.Circle_Center)))) fig_circle.layout.height = width fig_circle.layout.width = width fig_circle.title="Gershgorin Circle Plot" with output: display(df) button.on_click(on_button_clicked) # -
2020-04-06_Gershgorin-interactive_01.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # JSON examples and exercise # **** # + get familiar with packages for dealing with JSON # + study examples with JSON strings and files # + work on exercise to be completed and submitted # **** # + reference: http://pandas-docs.github.io/pandas-docs-travis/io.html#json # + data source: http://jsonstudio.com/resources/ # **** import pandas as pd # ## imports for Python, Pandas import json from pandas.io.json import json_normalize # ## JSON example, with string # # + demonstrates creation of normalized dataframes (tables) from nested json string # + source: http://pandas-docs.github.io/pandas-docs-travis/io.html#normalization # define json string data = [{'state': 'Florida', 'shortname': 'FL', 'info': {'governor': '<NAME>'}, 'counties': [{'name': 'Dade', 'population': 12345}, {'name': 'Broward', 'population': 40000}, {'name': '<NAME>', 'population': 60000}]}, {'state': 'Ohio', 'shortname': 'OH', 'info': {'governor': '<NAME>'}, 'counties': [{'name': 'Summit', 'population': 1234}, {'name': 'Cuyahoga', 'population': 1337}]}] # use normalization to create tables from nested element json_normalize(data, 'counties') # further populate tables created from nested element json_normalize(data, 'counties', ['state', 'shortname', ['info', 'governor']]) # **** # ## JSON example, with file # # + demonstrates reading in a json file as a string and as a table # + uses small sample file containing data about projects funded by the World Bank # + data source: http://jsonstudio.com/resources/ # load json as string json.load((open('data/world_bank_projects_less.json'))) # load as Pandas dataframe sample_json_df = pd.read_json('data/world_bank_projects_less.json') sample_json_df # **** # ## JSON exercise # # Using data in file 'data/world_bank_projects.json' and the 
techniques demonstrated above, # 1. Find the 10 countries with the most projects # 2. Find the top 10 major project themes (using column 'mjtheme_namecode') # 3. In 2. above you will notice that some entries have only the code and the name is missing. Create a dataframe with the missing names filled in. # load json data frame dataFrame = pd.read_json('data/world_bank_projects.json') dataFrame dataFrame.info() dataFrame.columns # ## Top 10 Countries with the most projects dataFrame.groupby(dataFrame.countryshortname).count().sort_values('_id', ascending=False).head(10) # + themeNameCode = [] for codes in dataFrame.mjtheme_namecode: themeNameCode += codes themeNameCode = json_normalize(themeNameCode) themeNameCode['count']=themeNameCode.groupby('code').transform('count') themeNameCode.sort_values('count', ascending=False).drop_duplicates().head(10) # + dataFrame = pd.read_json('data/world_bank_projects.json') # Create dictionary Code:Name to replace empty names. codeNameDict = {} for codes in dataFrame.mjtheme_namecode: for code in codes: if code['name']!='': codeNameDict[code['code']]=code['name'] index=0 for codes in dataFrame.mjtheme_namecode: innerIndex=0 for code in codes: if code['name']=='': print("Code name empty ", code['code']) dataFrame.mjtheme_namecode[index][innerIndex]['name']=codeNameDict[code['code']] innerIndex += 1 index += 1 dataFrame.mjtheme_namecode themeNameCode = [] for codes in dataFrame.mjtheme_namecode: themeNameCode += codes themeNameCode # themeNameCode = json_normalize(themeNameCode) # themeNameCode['count']=themeNameCode.groupby('code').transform('count') # themeNameCode.sort_values('count', ascending=False).drop_duplicates().head(10) # -
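The fill-in step in exercise 3 reduces to two passes: learn a code-to-name lookup from the entries whose name is present, then apply it to the empty ones. A self-contained sketch with a hypothetical sample mimicking the `mjtheme_namecode` column structure (the codes and names here are illustrative, not the real World Bank values):

```python
# Hypothetical records shaped like the 'mjtheme_namecode' column
rows = [
    [{"code": "1", "name": "Economic management"}, {"code": "8", "name": ""}],
    [{"code": "8", "name": "Human development"}],
    [{"code": "1", "name": ""}],
]

# Pass 1: learn code -> name from entries whose name is non-empty
code_to_name = {d["code"]: d["name"] for row in rows for d in row if d["name"]}

# Pass 2: fill in the blanks in place
for row in rows:
    for d in row:
        if not d["name"]:
            d["name"] = code_to_name[d["code"]]

print(rows)
```

This is the same logic as the notebook's nested-loop version, without the manual index bookkeeping.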
Week_1/DATA_WRANGLING/WORKING_WITH_DATA_IN_FILES/data_wrangling_json/.ipynb_checkpoints/sliderule_dsi_json_exercise (copy)-checkpoint.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + from datetime import datetime import numpy as np import torch import torch.nn as nn import torch.nn.functional as F from torch.utils.tensorboard import SummaryWriter from langmodels.models import * import langmodels.utf8codec as utf8codec from langmodels.utils.tools import * from langmodels.utils.helpers import * from langmodels.utils.preprocess_conllu import * from langmodels.train import * # - checkpoint_path = "/home/leo/projects/minibrain/predictors/sequence/text/trained_models/GatedConv1DCol" checkpoints = sorted(get_all_files_recurse(checkpoint_path)) # + # checkpoints # - device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu") # device = "cuda:0" # device = "cpu" utf8codes = np.load(fcodebook) utf8codes = utf8codes.reshape(1987, 64) model = GatedConv1DPoS(utf8codes).to(device) model = model.eval() # + # checkpoint = checkpoints[0] # model.network.load_checkpoint(checkpoint) # model = model.to(device) # - def get_epoch(chk): return int(path_leaf(chk).split(".")[0].split("_")[-1]) # %%time test_data_path = BASE_DATA_DIR_UD_TREEBANK # epoch_count = get_epoch(checkpoint) test_data = load_test_data(test_data_path) # + # epoch_count # + # # %%time # print("launching test in CPU") # test(model, pos_loss_function, test_data, epoch_count, device, 50) # print("launching Accuracy test in CPU") # test_accuracy(model, test_data, epoch_count, device, max_data) # - # # %%time # for checkpoint in checkpoints: # model = GatedConv1DPoS(utf8codes).to(device) # model.network.load_checkpoint(checkpoint) # model = model.to(device) # model.eval() # epoch_count = get_epoch(checkpoint) # print("Launching test in {} for {}".format(device, path_leaf(checkpoint))) # %time test(model, pos_loss_function, test_data, epoch_count, device, 45) # # %time 
test_accuracy(model, test_data, epoch_count, device, 50) # %%time for checkpoint in checkpoints: model = GatedConv1DPoS(utf8codes).to(device) model.network.load_checkpoint(checkpoint) model = model.to(device) model.eval() epoch_count = get_epoch(checkpoint) print("Launching Accuracy test in {} for {}".format(device, path_leaf(checkpoint))) # %time test(model, pos_loss_function, test_data, epoch_count, device, 50) # %time test_accuracy(model, test_data, epoch_count, device, 45)
predictors/sequence/text/AsyncTest.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + id="mvlw9p3tPJjr" colab_type="code" outputId="2a4a8f52-315e-42b3-9d49-e9c7c3358979" colab={"base_uri": "https://localhost:8080/", "height": 129.0} # code by <NAME> @graykode import numpy as np import torch import torch.nn as nn import torch.optim as optim from torch.autograd import Variable dtype = torch.FloatTensor sentences = [ "i like dog", "i love coffee", "i hate milk"] word_list = " ".join(sentences).split() word_list = list(set(word_list)) word_dict = {w: i for i, w in enumerate(word_list)} number_dict = {i: w for i, w in enumerate(word_list)} n_class = len(word_dict) # number of Vocabulary # NNLM Parameter n_step = 2 # n-1 in paper n_hidden = 2 # h in paper m = 2 # m in paper def make_batch(sentences): input_batch = [] target_batch = [] for sen in sentences: word = sen.split() input = [word_dict[n] for n in word[:-1]] target = word_dict[word[-1]] input_batch.append(input) target_batch.append(target) return input_batch, target_batch # Model class NNLM(nn.Module): def __init__(self): super(NNLM, self).__init__() self.C = nn.Embedding(n_class, m) self.H = nn.Parameter(torch.randn(n_step * m, n_hidden).type(dtype)) self.W = nn.Parameter(torch.randn(n_step * m, n_class).type(dtype)) self.d = nn.Parameter(torch.randn(n_hidden).type(dtype)) self.U = nn.Parameter(torch.randn(n_hidden, n_class).type(dtype)) self.b = nn.Parameter(torch.randn(n_class).type(dtype)) def forward(self, X): X = self.C(X) X = X.view(-1, n_step * m) # [batch_size, n_step * n_class] tanh = torch.tanh(self.d + torch.mm(X, self.H)) # [batch_size, n_hidden] output = self.b + torch.mm(X, self.W) + torch.mm(tanh, self.U) # [batch_size, n_class] return output model = NNLM() criterion = nn.CrossEntropyLoss() optimizer = optim.Adam(model.parameters(), lr=0.001) input_batch, target_batch = 
make_batch(sentences) input_batch = Variable(torch.LongTensor(input_batch)) target_batch = Variable(torch.LongTensor(target_batch)) # Training for epoch in range(5000): optimizer.zero_grad() output = model(input_batch) # output : [batch_size, n_class], target_batch : [batch_size] (LongTensor, not one-hot) loss = criterion(output, target_batch) if (epoch + 1)%1000 == 0: print('Epoch:', '%04d' % (epoch + 1), 'cost =', '{:.6f}'.format(loss)) loss.backward() optimizer.step() # Predict predict = model(input_batch).data.max(1, keepdim=True)[1] # Test print([sen.split()[:2] for sen in sentences], '->', [number_dict[n.item()] for n in predict.squeeze()])
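The `make_batch` helper above is plain Python, so the indexing scheme can be checked without torch. A self-contained sketch of the same idea (a word-to-index vocabulary, then the first n-1 words as input and the final word as target); the `sorted` call is added here only to make the indices deterministic, whereas the notebook's `list(set(...))` ordering varies between runs:

```python
sentences = ["i like dog", "i love coffee", "i hate milk"]
word_list = sorted(set(" ".join(sentences).split()))  # sorted for determinism
word_dict = {w: i for i, w in enumerate(word_list)}   # word -> index

def make_batch(sents):
    inputs, targets = [], []
    for sen in sents:
        words = sen.split()
        inputs.append([word_dict[w] for w in words[:-1]])  # context: first n-1 words
        targets.append(word_dict[words[-1]])               # target: last word
    return inputs, targets

inputs, targets = make_batch(sentences)
print(inputs, targets)
```

These integer lists are exactly what gets wrapped in `torch.LongTensor` before training.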
1-1.NNLM/NNLM_Torch.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- from sklearn import tree from sklearn.datasets import load_wine from sklearn.model_selection import train_test_split wine = load_wine() wine.data.shape wine.target.shape import pandas as pd pd.concat([pd.DataFrame(wine.data),pd.DataFrame(wine.target)],axis=1) wine.feature_names # names of the dataset's features wine.target_names # names of the labels # test_size=0.3 means 30% of the data becomes the test set and 70% the training set; note the order of the returned values Xtrain,Xtest,Ytrain,Ytest = train_test_split(wine.data,wine.target,test_size=0.3) clf = tree.DecisionTreeClassifier(criterion='entropy') clf = clf.fit(Xtrain,Ytrain) score = clf.score(Xtest,Ytest) # returns the prediction accuracy score import graphviz # + dot_data = tree.export_graphviz(clf, feature_names = wine.feature_names, class_names = wine.target_names, filled = True, rounded=True ) graph = graphviz.Source(dot_data) graph # -
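The `criterion='entropy'` option above scores candidate splits by information gain, i.e. the reduction in Shannon entropy of the node's labels. A minimal plain-Python sketch of the quantity involved (this is not scikit-learn's implementation, just the formula it is based on):

```python
from collections import Counter
from math import log2

def entropy(labels):
    """Shannon entropy: -sum(p * log2(p)) over the class frequencies."""
    n = len(labels)
    return 0.0 - sum((c / n) * log2(c / n) for c in Counter(labels).values())

print(entropy([0, 0, 1, 1]))  # maximally mixed two-class node -> 1.0
print(entropy([0, 0, 0, 0]))  # pure node -> 0.0
```

A split is good when the children's weighted entropy is much lower than the parent's; the tree greedily picks the split with the largest such drop.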
机器学习/算法/.ipynb_checkpoints/1.1模型-决策树-分类树-checkpoint.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 2 # language: python # name: python2 # --- # %pylab inline import dxchange import matplotlib.pyplot as plt from convnet.transform import model from convnet.transform import predict # + batch_size = 800 nb_epoch = 40 dim_img = 20 nb_filters = 32 nb_conv = 3 patch_step = 4 patch_size = (dim_img, dim_img) # - mdl = model(dim_img, nb_filters, nb_conv) mdl.load_weights('training_weights.h5') fname = '../../convnet/data/predict_test.tiff' img_test = dxchange.read_tiff(fname) plt.imshow(img_test, cmap='Greys_r') plt.show() fname_save = '../../convnet/data/predict_test_result' img_rec = predict(mdl, img_test, patch_size, patch_step, batch_size, dim_img) dxchange.write_tiff(img_rec, fname_save, dtype='float32') plt.imshow(img_rec, cmap='Greys_r') plt.show()
doc/demo/transform_predict.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: 'Python 3.8.5 64-bit (''venv'': venv)' # language: python # name: python385jvsc74a57bd09d6766ad3736c29ebfe40ecf2d41a2944950e1cce237755c2a58ee0718f8bfc6 # --- # + [markdown] tags=[] cell_id="00000-b98ee5de-4796-4f03-9cd5-4523c8709e6c" deepnote_cell_type="markdown" # # Analyzing Bird Audio # # **Authors**: <NAME>, <NAME> # # > This report details our process for analyzing bird audio, with some snippets of code. You can find the full project # > repo [on github](https://github.com/adithyabsk/pracds_final). # + tags=[] cell_id="00000-3155da5a-5d99-4ec4-b125-8e5d4eecb1de" deepnote_to_be_reexecuted=false source_hash="8886d04a" execution_start=1621312147533 execution_millis=7 deepnote_cell_type="code" import IPython.display as ipd import requests # + tags=[] cell_id="00002-36821c94-74e6-4e9a-8cc6-1e620079af98" deepnote_to_be_reexecuted=false source_hash="d2fb25ab" execution_start=1621312147545 execution_millis=15 deepnote_cell_type="code" def display_code_block(url, start_line=1, end_line=-1): raw_audio = requests.get(url) text = "\n".join(raw_audio.content.decode().split("\n")[start_line - 1 : end_line]) ipd.display( ipd.Markdown( f""" ```python {text} ``` """ ) ) # + [markdown] cell_id="00001-bb8420c6-c1fe-4532-aa2c-b93684e8d62d" tags=[] deepnote_cell_type="markdown" # ## Introduction # # We aim to accurately classify bird sounds as songs or calls. We used 3 different approaches and models based on # recording metadata, the audio data itself, and spectrogram images of the recording to perform this classification task. 
# # <img src="https://raw.githubusercontent.com/adithyabsk/bird_audio/main/notebooks/assets/pipeline.png" alt="Pipeline Overview" style="width: 60%; display: block; margin-left: auto; margin-right: auto;"/> # # ### Motivation # # The primary motivation to address this problem is to make it easier for scientists to collect data on bird populations # and verify community-sourced labels. # # The other motivation is more open-ended: to understand the "hidden" insights in bird sounds. Bird calls reveal regional # dialects, a sense of humor, information about predators in the area, indicators of ecosystem health—and inevitably also # the threat on their ecosystems posed by human activity. Through the process of exploring bird call audio data, we hope # we can build towards better understanding the impacts of the sounds produced by humans and become better listeners. # # ### Songs vs Calls # # Bird sounds have a variety of different dimensions, but one of the first levels of categorizing bird sounds is # classifying them as a song or a call, as each have distinct functions and reveal different aspects of the birds’ # ecology ([1](https://www.audubon.org/news/a-beginners-guide-common-bird-sounds-and-what-they-mean), # [2](https://www.youtube.com/watch?v=4_1zIwEENt8)). # + [markdown] tags=[] cell_id="00002-103f41dc-76c7-4eaf-b8a6-f926a930b69a" deepnote_cell_type="markdown" # #### Songs # Songs tend to be longer, more melodic, and used for marking territory and attracting mates. Birds' song repertoire and # song rate can indicate their health and the quality of their habitat, including pollutant levels and plant diversity # ([3](https://en.wikipedia.org/wiki/Bird_vocalization#Function), [4](https://www.jstor.org/stable/20062442), # [5](https://www.fs.usda.gov/treesearch/pubs/46856)). 
# # + tags=[] cell_id="00002-cd34809b-a692-480f-80ba-9dc5f863eeea" deepnote_to_be_reexecuted=false source_hash="51ec5df" execution_start=1621312147564 execution_millis=29 deepnote_cell_type="code" # song sparrow - song ipd.Audio("/work/pracds_final/notebooks/assets/574080.mp3") # + [markdown] tags=[] cell_id="00004-e80cb207-d18b-40d6-8269-9db79d96b116" deepnote_cell_type="markdown" # #### Calls # Calls are shorter than songs, and perform a wider range of functions like signalling food, maintaining social cohesion # and contact, coordinating flight, resolving conflicts, and sounding alarms (distress, mobbing, hawk alarms) ([6](https://doi.org/10.1196/annals.1298.034)). # Bird alarm calls can be understood and passed along across species, and have been found to encode information about the # size and threat of a potential predator, so birds can respond accordingly - i.e. more intense mobbing for a higher # threat ([7](https://www.nationalgeographic.com/animals/article/nuthatches-chickadees-communication-danger), # [8](https://doi.org/10.1126/science.1108841)). Alarm calls can also give scientists an estimate of the number of # predators in an area. # # # + tags=[] cell_id="00003-0d793527-be8e-4939-a9e7-edd4df025c6e" deepnote_to_be_reexecuted=false source_hash="16381a37" execution_start=1621312147587 execution_millis=22 deepnote_cell_type="code" # song sparrow - call ipd.Audio("/work/pracds_final/notebooks/assets/585148.mp3") # + [markdown] cell_id="00002-0e69944b-7b03-4fc1-afaf-fb18c9b9839b" tags=[] deepnote_cell_type="markdown" # ## Related Work # **Allometry of Alarm Calls: Black-Capped Chickadees Encode Information About Predator Size** ([8](https://doi.org/10.1126/science.1108841)) # The number of D-notes in chickadee alarm mobbing calls varies indirectly with the size of predator. 
# # **Gender identification using acoustic analysis in birds without external sexual dimorphism** ([9](https://doi.org/10.1186/s40657-015-0033-y)) # Bird sounds were analyzed to classify gender. Important acoustic features were: fundamental frequency (mean, max, # count), note duration, syllable count and spacing, and amplitude modulation. # # **Regional dialects have been discovered among many bird species, and the Yellowhammer is a great example** ([10](http://www.yellowhammers.net/about), [11](https://doi.org/10.1093/beheco/arz114)) # Yellowhammer bird sounds in the Czech Republic and UK were studied to identify regional dialects, which differed in # frequency and length of final syllables. # + [markdown] tags=[] cell_id="00008-f5cecd3b-b746-4567-a0e6-25e3106a045f" deepnote_cell_type="markdown" # ## DVC # # [Data Version Control (DVC)](https://dvc.org/) is a useful tool for data science projects. You can think of it like git # but for data. We built out our pipeline first in jupyter notebooks, and then in DVC, making it easy to change parameters # and run the full pipeline from one place. # # <div class="alert alert-block alert-info"> # <b>Note:</b> Due to the size of the datasets, we chose not to include inline Jupyter snippets of code processing real # data and instead opted to present only the outputs of the DVC scripts. (Python files, not notebooks) # </dev> # + [markdown] cell_id="00001-2e28dd94-127d-483c-9d7d-dc9b6fe29aaf" tags=[] deepnote_cell_type="markdown" # ## Collecting Data # # For our analysis, we used audio files and metadata from [xeno-canto.org]. Xeno-canto (XC) is a website for collecting # and sharing audio recordings of birds. Recordings and identifications on XC are sourced from the community (anyone can # join). 
#
# <img src="https://raw.githubusercontent.com/adithyabsk/bird_audio/main/notebooks/assets/xenocanto.png" alt="Xeno Canto
# API Page" style="width: 50%; display: block; margin-left: auto; margin-right: auto;"/>
#
# XC has a [straightforward API](https://www.xeno-canto.org/explore/api) that allows us to make RESTful queries, and
# specify a number of [filter parameters](https://www.xeno-canto.org/help/search) including country, species, recording
# quality, and duration. We used the XC API to get metadata and IDs for all recordings in the United States, and saved the
# JSON payload as a dataframe and csv. Below we see the main snippet of code from the DVC step that parallelizes data
# collection from XC.

# + tags=[] cell_id="00011-00d2df97-550a-4a22-aee9-76e828027cba" deepnote_to_be_reexecuted=false source_hash="9987edd" execution_start=1621312147602 execution_millis=145 deepnote_cell_type="code"
display_code_block(
    "https://raw.githubusercontent.com/adithyabsk/bird_audio/main/pracds_final/data/build_meta.py",
    45,
    80,
)

# + [markdown] cell_id="00004-5c180cbe-59b0-4882-aab3-66dea2ba570d" deepnote_cell_type="markdown"
# ## Filtering & Labeling
# Through our DVC pipeline, we further filtered the data to the top 220 species, recordings under 20 seconds, recording
# quality A or B, and recordings with spectrograms available on XC. This reduced our dataset from ~60,000 recordings to a
# dataframe of 5,800. We created labels (1 for call, 0 for song) by parsing the 'type' column of the df.
#
# The following scripts handle that process:
#
# 1. [build_filter.py](https://github.com/adithyabsk/bird_audio/blob/main/pracds_final/data/build_filter.py)
# 1. [build_song_vs_call.py](https://github.com/adithyabsk/bird_audio/blob/main/pracds_final/data/build_song_vs_call.py)
# 1. 
[proc_svc_meta.py](https://github.com/adithyabsk/bird_audio/blob/main/pracds_final/features/proc_svc_meta.py)

# + [markdown] cell_id="00005-12c735bf-8e1a-4347-8dd5-2328bc3e5475" deepnote_cell_type="markdown"
# ## Exploring & Visualizing Data
#
# With our dataset assembled, we began exploring it visually. A distribution of recordings by genus, split into songs and
# calls, shows that the most represented genus in the dataset is the warblers (*Setophaga*), with many more song than call
# recordings. We can also see that, as expected, woodpeckers (*Melanerpes*), jays, magpies, and crows (*Cyanocitta*,
# *Corvus*) have almost no song recordings in the dataset.
#
# <img src="https://raw.githubusercontent.com/adithyabsk/bird_audio/main/notebooks/assets/svc_count_vs_genus.png" alt="Count vs Genus for the Top 20 Largest Genus" style="width: 50%; display: block; margin-left: auto; margin-right: auto;"/>
#
# A map of recording density shows the regions most represented in the dataset, which are, unsurprisingly, bird watching
# hot spots.
#
# <img src="https://raw.githubusercontent.com/adithyabsk/bird_audio/main/notebooks/assets/svc_sample_density_usa.png" alt="Observation Count KDE Plot" style="width: 50%; display: block; margin-left: auto; margin-right: auto;"/>
#
# Given our domain knowledge that songs serve an important function in mating, we expected to see a higher proportion of
# songs in the spring, which is confirmed by the data.
#
# <img src="https://raw.githubusercontent.com/adithyabsk/bird_audio/main/notebooks/assets/svc_vs_month.png" alt="Song and Call Percent vs Month" style="width: 50%; display: block; margin-left: auto; margin-right: auto;"/>

# + [markdown] cell_id="00006-0f0fd362-cef4-4dc3-a785-6e8caaa53266" deepnote_cell_type="markdown"
# ## Metadata Classification Model
#
# In our first model, we used the tabular metadata from XC entries to train a Gradient Boosted Decision Tree (GBDT) model using
XGBoost, is a particular Python implementation of GBDTs that is # designed to work on large amounts of data. # # <br/> # <a href="https://xgboost.readthedocs.io/en/latest/"> # <img src="https://raw.githubusercontent.com/dmlc/dmlc.github.io/master/img/logo-m/xgboost.png" alt="XGBoost Logo" style="width: 15%; display: block; margin-left: auto; margin-right: auto;"/> # </a> # <br/> # # We used the genus, species, English name, and location (latitude and longitude) from XC metadata. These features were # then all mapped and imputed using sklearn transformers to one-hot encoded form apart from latitude, longitude, and time # (all mapped using standard or min-max scaling, and time features transformed with a sin function). We can see 10 rows of # unprocessed data in the HTML table below. # + tags=[] cell_id="00015-9d18140d-330b-4490-9f71-eeeeef9898c6" deepnote_to_be_reexecuted=false source_hash="5be256b" execution_start=1621312147752 execution_millis=9 deepnote_cell_type="code" ipd.display( ipd.HTML( "<table border=\"1\" class=\"dataframe sample table table-striped\"><thead><tr style=\"text-align: right;\"><th></th><th>df_index</th><th>id</th><th>gen</th><th>sp</th><th>ssp</th><th>en</th><th>rec</th><th>cnt</th><th>loc</th><th>lat</th><th>lng</th><th>alt</th><th>type</th><th>url</th><th>file</th><th>file-name</th><th>sono</th><th>lic</th><th>q</th><th>length</th><th>time</th><th>date</th><th>uploaded</th><th>also</th><th>rmk</th><th>bird-seen</th><th>playback-used</th><th>pred</th><th>gender</th><th>age</th><th>month</th><th>day</th><th>hour</th><th>minute</th></tr></thead><tbody><tr><th>0</th><td>96</td><td>454911</td><td>Branta</td><td>canadensis</td><td>NaN</td><td>Canada Goose</td><td>Bruce Lagerquist</td><td>United States</td><td>Sedro-Woolley, Skagit County, Washington</td><td>48.5237</td><td>-122.0185</td><td>30</td><td>call</td><td>//www.xeno-canto.org/454911</td><td>//www.xeno-canto.org/454911/download</td><td>XC454911-190202_02 Canadian 
Geese.mp3</td><td>{'small': '//www.xeno-canto.org/sounds/uploaded/JHFICMRVUX/ffts/XC454911-small.png', 'med': '//www.xeno-canto.org/sounds/uploaded/JHFICMRVUX/ffts/XC454911-med.png', 'large': '//www.xeno-canto.org/sounds/uploaded/JHFICMRVUX/ffts/XC454911-large.png', 'full': '//www.xeno-canto.org/sounds/uploaded/JHFICMRVUX/ffts/XC454911-full.png'}</td><td>//creativecommons.org/licenses/by-nc-sa/4.0/</td><td>A</td><td>0:17</td><td>1900-01-01 11:30:00</td><td>2019-02-02</td><td>2019-02-04</td><td>['Cygnus buccinator']</td><td>Mixed flock of Trumpeter Swans and Canada Geese feeding in an agricultural field. Recording of Swan's here XC454910</td><td>yes</td><td>no</td><td>1</td><td>NaN</td><td>NaN</td><td>2.0</td><td>2.0</td><td>11.0</td><td>30.0</td></tr><tr><th>1</th><td>97</td><td>418340</td><td>Branta</td><td>canadensis</td><td>NaN</td><td>Canada Goose</td><td>Sue Riffe</td><td>United States</td><td>Au Sable SF - Big Creek Rd, Michigan</td><td>44.0185</td><td>-83.7560</td><td>180</td><td>song</td><td>//www.xeno-canto.org/418340</td><td>//www.xeno-canto.org/418340/download</td><td>XC418340-Canada Goose on 5.11.18 at Au Sable SF MI at 11.20 for .14 _0908 .mp3</td><td>{'small': '//www.xeno-canto.org/sounds/uploaded/PVQOLRXXWL/ffts/XC418340-small.png', 'med': '//www.xeno-canto.org/sounds/uploaded/PVQOLRXXWL/ffts/XC418340-med.png', 'large': '//www.xeno-canto.org/sounds/uploaded/PVQOLRXXWL/ffts/XC418340-large.png', 'full': '//www.xeno-canto.org/sounds/uploaded/PVQOLRXXWL/ffts/XC418340-full.png'}</td><td>//creativecommons.org/licenses/by-nc-sa/4.0/</td><td>A</td><td>0:14</td><td>1900-01-01 11:20:00</td><td>2018-05-11</td><td>2018-06-03</td><td>['Agelaius phoeniceus']</td><td>Natural vocalization</td><td>yes</td><td>no</td><td>0</td><td>NaN</td><td>NaN</td><td>5.0</td><td>11.0</td><td>11.0</td><td>20.0</td></tr><tr><th>2</th><td>107</td><td>291051</td><td>Branta</td><td>canadensis</td><td>NaN</td><td>Canada Goose</td><td><NAME></td><td>United States</td><td>San Juan River, 
Cottonwood Day-Use Area, Navajo Lake State Park, San Juan County, New Mexico</td><td>36.8068</td><td>-107.6789</td><td>1800</td><td>call</td><td>//www.xeno-canto.org/291051</td><td>//www.xeno-canto.org/291051/download</td><td>XC291051-CANG_11515_1730_SanJuanRiver-NavajoDam.mp3</td><td>{'small': '//www.xeno-canto.org/sounds/uploaded/BCFUZDOSJZ/ffts/XC291051-small.png', 'med': '//www.xeno-canto.org/sounds/uploaded/BCFUZDOSJZ/ffts/XC291051-med.png', 'large': '//www.xeno-canto.org/sounds/uploaded/BCFUZDOSJZ/ffts/XC291051-large.png', 'full': '//www.xeno-canto.org/sounds/uploaded/BCFUZDOSJZ/ffts/XC291051-full.png'}</td><td>//creativecommons.org/licenses/by-nc-sa/4.0/</td><td>A</td><td>0:15</td><td>1900-01-01 17:30:00</td><td>2015-11-15</td><td>2015-11-18</td><td>['']</td><td>Flock calling while flying over at dusk. Amplification, low and high pass filters used in Audacity.</td><td>yes</td><td>no</td><td>1</td><td>NaN</td><td>NaN</td><td>11.0</td><td>15.0</td><td>17.0</td><td>30.0</td></tr><tr><th>3</th><td>108</td><td>283618</td><td>Branta</td><td>canadensis</td><td>NaN</td><td>Canada Goose</td><td><NAME></td><td>United States</td><td>Beluga--North Bog, Kenai Peninsula Borough, Alaska</td><td>61.2089</td><td>-151.0103</td><td>40</td><td>call, flight call</td><td>//www.xeno-canto.org/283618</td><td>//www.xeno-canto.org/283618/download</td><td>XC283618-LS100466.mp3</td><td>{'small': '//www.xeno-canto.org/sounds/uploaded/CDHIAMGTRT/ffts/XC283618-small.png', 'med': '//www.xeno-canto.org/sounds/uploaded/CDHIAMGTRT/ffts/XC283618-med.png', 'large': '//www.xeno-canto.org/sounds/uploaded/CDHIAMGTRT/ffts/XC283618-large.png', 'full': '//www.xeno-canto.org/sounds/uploaded/CDHIAMGTRT/ffts/XC283618-full.png'}</td><td>//creativecommons.org/licenses/by-nc-sa/4.0/</td><td>A</td><td>0:10</td><td>1900-01-01 11:00:00</td><td>2015-05-20</td><td>2015-10-03</td><td>['']</td><td>Natural vocalizations from a pair of birds in flight. 
Recording not modified.</td><td>yes</td><td>no</td><td>1</td><td>NaN</td><td>NaN</td><td>5.0</td><td>20.0</td><td>11.0</td><td>0.0</td></tr><tr><th>4</th><td>110</td><td>209702</td><td>Branta</td><td>canadensis</td><td>NaN</td><td>Canada Goose</td><td>Albert @ <NAME></td><td>United States</td><td>Oyster Bay (near Lattingtown), Nassau, New York</td><td>40.8881</td><td>-73.5851</td><td>10</td><td>call</td><td>//www.xeno-canto.org/209702</td><td>//www.xeno-canto.org/209702/download</td><td>XC209702-Poecile atricapillus Dec_27,_2014,_4_05_PM,C1.mp3</td><td>{'small': '//www.xeno-canto.org/sounds/uploaded/LELYWQKUZX/ffts/XC209702-small.png', 'med': '//www.xeno-canto.org/sounds/uploaded/LELYWQKUZX/ffts/XC209702-med.png', 'large': '//www.xeno-canto.org/sounds/uploaded/LELYWQKUZX/ffts/XC209702-large.png', 'full': '//www.xeno-canto.org/sounds/uploaded/LELYWQKUZX/ffts/XC209702-full.png'}</td><td>//creativecommons.org/licenses/by-nc-sa/4.0/</td><td>A</td><td>0:11</td><td>1900-01-01 16:00:00</td><td>2014-12-27</td><td>2015-01-09</td><td>['Poecile atricapillus']</td><td>NaN</td><td>yes</td><td>no</td><td>1</td><td>NaN</td><td>NaN</td><td>12.0</td><td>27.0</td><td>16.0</td><td>0.0</td></tr><tr><th>5</th><td>118</td><td>165398</td><td>Branta</td><td>canadensis</td><td>parvipes</td><td>Canada Goose</td><td>Ted Floyd</td><td>United States</td><td>Boulder, Colorado</td><td>40.0160</td><td>-105.2765</td><td>1600</td><td>call</td><td>//www.xeno-canto.org/165398</td><td>//www.xeno-canto.org/165398/download</td><td>XC165398-CanG for Xeno-Canto.mp3</td><td>{'small': '//www.xeno-canto.org/sounds/uploaded/KADPGEQPZI/ffts/XC165398-small.png', 'med': '//www.xeno-canto.org/sounds/uploaded/KADPGEQPZI/ffts/XC165398-med.png', 'large': '//www.xeno-canto.org/sounds/uploaded/KADPGEQPZI/ffts/XC165398-large.png', 'full': '//www.xeno-canto.org/sounds/uploaded/KADPGEQPZI/ffts/XC165398-full.png'}</td><td>//creativecommons.org/licenses/by-nc-sa/3.0/</td><td>A</td><td>0:19</td><td>1900-01-01 
09:30:00</td><td>2014-01-24</td><td>2014-01-25</td><td>['']</td><td>A large flock of Canada Geese taking off. I believe most of the birds in this flock were parvipes (\"Lesser\") Canada Geese, but there were also larger (subspecies moffitti?) Canada Geese and a few Cackling Geese (several of the subspecies hutchinsii and possibly one of the subspecies minima) in the general vicinity. \r\n\r\nIn the old days this would have been an \"obvious\" or \"easy\" flock of \"Canada Geese.\" Now we're dealing with perhaps two species and probably two or three subspecies in the recording. Again, I believe most of the birds audible here are parvipes (\"Lesser\") Canada Geese.</td><td>yes</td><td>no</td><td>1</td><td>NaN</td><td>NaN</td><td>1.0</td><td>24.0</td><td>9.0</td><td>30.0</td></tr><tr><th>6</th><td>129</td><td>1136</td><td>Branta</td><td>canadensis</td><td>NaN</td><td>Canada Goose</td><td>Don Jones</td><td>United States</td><td>Brace Road, Southampton, NJ</td><td>39.9337</td><td>-74.7170</td><td>?</td><td>song</td><td>//www.xeno-canto.org/1136</td><td>//www.xeno-canto.org/1136/download</td><td>bird034.mp3</td><td>{'small': '//www.xeno-canto.org/sounds/uploaded/BCWZQTGMSO/ffts/XC1136-small.png', 'med': '//www.xeno-canto.org/sounds/uploaded/BCWZQTGMSO/ffts/XC1136-med.png', 'large': '//www.xeno-canto.org/sounds/uploaded/BCWZQTGMSO/ffts/XC1136-large.png', 'full': '//www.xeno-canto.org/sounds/uploaded/BCWZQTGMSO/ffts/XC1136-full.png'}</td><td>//creativecommons.org/licenses/by-nc-nd/2.5/</td><td>A</td><td>0:10</td><td>NaT</td><td>1997-10-17</td><td>2008-11-20</td><td>['']</td><td>NaN</td><td>unknown</td><td>unknown</td><td>0</td><td>NaN</td><td>NaN</td><td>10.0</td><td>17.0</td><td>NaN</td><td>NaN</td></tr><tr><th>7</th><td>132</td><td>536877</td><td>Branta</td><td>canadensis</td><td>NaN</td><td>Canada Goose</td><td>Sue Riffe</td><td>United States</td><td>S Cape May Meadows, Cape May Cty, New Jersey</td><td>38.9381</td><td>-74.9446</td><td>0</td><td>adult, call, sex 
uncertain</td><td>//www.xeno-canto.org/536877</td><td>//www.xeno-canto.org/536877/download</td><td>XC536877-Canada Goose on 10.18.19 at S Cape May Meadows NJ at 18.52 for .19.mp3</td><td>{'small': '//www.xeno-canto.org/sounds/uploaded/PVQOLRXXWL/ffts/XC536877-small.png', 'med': '//www.xeno-canto.org/sounds/uploaded/PVQOLRXXWL/ffts/XC536877-med.png', 'large': '//www.xeno-canto.org/sounds/uploaded/PVQOLRXXWL/ffts/XC536877-large.png', 'full': '//www.xeno-canto.org/sounds/uploaded/PVQOLRXXWL/ffts/XC536877-full.png'}</td><td>//creativecommons.org/licenses/by-nc-sa/4.0/</td><td>B</td><td>0:19</td><td>1900-01-01 18:52:00</td><td>2019-10-18</td><td>2020-03-21</td><td>['Charadrius vociferus']</td><td>Natural vocalization of a flock of geese landing on the water near sunset. Windy</td><td>yes</td><td>no</td><td>1</td><td>NaN</td><td>adult</td><td>10.0</td><td>18.0</td><td>18.0</td><td>52.0</td></tr><tr><th>8</th><td>133</td><td>511453</td><td>Branta</td><td>canadensis</td><td>NaN</td><td>Canada Goose</td><td>Phoenix Birder</td><td>United States</td><td>Gilbert, Maricopa County, Arizona</td><td>33.3634</td><td>-111.7341</td><td>380</td><td>adult, call, female, male</td><td>//www.xeno-canto.org/511453</td><td>//www.xeno-canto.org/511453/download</td><td>XC511453-CaGo.2019.12.10.AZ.Maricopa.RiparianPreserve.mp3</td><td>{'small': '//www.xeno-canto.org/sounds/uploaded/UKNISVRBBF/ffts/XC511453-small.png', 'med': '//www.xeno-canto.org/sounds/uploaded/UKNISVRBBF/ffts/XC511453-med.png', 'large': '//www.xeno-canto.org/sounds/uploaded/UKNISVRBBF/ffts/XC511453-large.png', 'full': '//www.xeno-canto.org/sounds/uploaded/UKNISVRBBF/ffts/XC511453-full.png'}</td><td>//creativecommons.org/licenses/by-nc-sa/4.0/</td><td>B</td><td>0:19</td><td>1900-01-01 08:52:00</td><td>2019-12-10</td><td>2019-12-10</td><td>['Toxostoma curvirostre']</td><td>Sound Devices MixPre-3 Wildtronics Stereo Model #WTPMMSA 22” Parabolic Reflector, 
<EMAIL></td><td>yes</td><td>no</td><td>1</td><td>male</td><td>adult</td><td>12.0</td><td>10.0</td><td>8.0</td><td>52.0</td></tr><tr><th>9</th><td>134</td><td>504983</td><td>Branta</td><td>canadensis</td><td>canadensis</td><td>Canada Goose</td><td>nick talbot</td><td>United States</td><td>Central Park, New York city,USA</td><td>40.7740</td><td>-73.9710</td><td>20</td><td>call</td><td>//www.xeno-canto.org/504983</td><td>//www.xeno-canto.org/504983/download</td><td>XC504983-2019_10_21 Branta canadensis2.mp3</td><td>{'small': '//www.xeno-canto.org/sounds/uploaded/CCUCXWCPSW/ffts/XC504983-small.png', 'med': '//www.xeno-canto.org/sounds/uploaded/CCUCXWCPSW/ffts/XC504983-med.png', 'large': '//www.xeno-canto.org/sounds/uploaded/CCUCXWCPSW/ffts/XC504983-large.png', 'full': '//www.xeno-canto.org/sounds/uploaded/CCUCXWCPSW/ffts/XC504983-full.png'}</td><td>//creativecommons.org/licenses/by-nc-sa/4.0/</td><td>B</td><td>0:13</td><td>1900-01-01 13:00:00</td><td>2019-10-21</td><td>2019-10-30</td><td>['']</td><td>A pair of birds calling from a lake</td><td>yes</td><td>no</td><td>1</td><td>NaN</td><td>NaN</td><td>10.0</td><td>21.0</td><td>13.0</td><td>0.0</td></tr></tbody></table>" ) ) # + [markdown] tags=[] cell_id="00016-724f6cc1-fb6c-45f6-b9b5-3d999ad78565" deepnote_cell_type="markdown" # Here we also see a snippet of the data transformation pipeline and model training code which was done in # [the following jupyter notebook](https://github.com/adithyabsk/bird_audio/blob/main/notebooks/4.0-ab-metadata-model.ipynb). 
# + tags=[] cell_id="00017-a783add5-e577-48bf-a8b7-2661612c6459" deepnote_to_be_reexecuted=false source_hash="50be9578" execution_start=1621312147758 execution_millis=120 deepnote_cell_type="code" metadata_notebook = requests.get( "https://raw.githubusercontent.com/adithyabsk/bird_audio/main/notebooks/4.0-ab-metadata-model.ipynb" ).json() mapping_snippet = "".join(metadata_notebook["cells"][7]["source"][30:]) traintest_snippet = "".join(metadata_notebook["cells"][8]["source"]) xgb_snippet = "".join(metadata_notebook["cells"][10]["source"]) ipd.display( ipd.Markdown( f""" ```python {mapping_snippet} {traintest_snippet} {xgb_snippet} ``` """ ) ) # + [markdown] cell_id="00007-1aada947-ee4f-4c14-93c9-3c1735f917c2" deepnote_cell_type="markdown" # ## Audio Classification Model # # In one model we used the bird audio recordings themselves (mp3 and wav files), converted into time series arrays using # [librosa](https://librosa.org/) and processed with [tsfresh](https://tsfresh.readthedocs.io/en/latest/index.html) to # extract features, which we used to train a Gradient Boosted Tree model. # # <a href="https://github.com/blue-yonder/tsfresh"> # <img src="https://github.com/blue-yonder/tsfresh/raw/main/docs/images/tsfresh_logo.svg" alt="ts-fresh Logo" style="width: 15%; display: block; margin-left: auto; margin-right: auto;"/> # </a> # + [markdown] tags=[] cell_id="00017-c168c5c6-054f-4985-ad18-29c7c7aeb3f3" deepnote_cell_type="markdown" # ### Building Audio Features # # We ran audio data through a high-pass [Butterworth filter](https://docs.scipy.org/doc/scipy/reference/generated/scipy.signal.butter.html) # to take out background noise. We tested different parameters for Butterworth and Firwin filters, then examined resulting # spectrograms and audio to determine which best reduced background noise without clipping bird sound frequencies. 
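# As a hedged illustration of the filtering described above, the snippet below applies a Butterworth high-pass to a
# synthetic signal using scipy; the cutoff frequency and filter order here are placeholders for demonstration, not the
# values tuned in the pipeline.

```python
import numpy as np
from scipy.signal import butter, sosfilt


def highpass(audio, sr, cutoff_hz=1000, order=4):
    """Butterworth high-pass using second-order sections (numerically stable)."""
    sos = butter(order, cutoff_hz, btype="highpass", fs=sr, output="sos")
    return sosfilt(sos, audio)


# Toy signal: a 100 Hz hum (stand-in for background noise) plus a 3 kHz tone
# (stand-in for bird sound); the filter should keep the tone and drop the hum.
sr = 22050  # a common librosa default sample rate
t = np.arange(sr) / sr
signal = np.sin(2 * np.pi * 100 * t) + 0.5 * np.sin(2 * np.pi * 3000 * t)
filtered = highpass(signal, sr)
```

# In the real pipeline the input array comes from `librosa.load`, and the result is judged by listening and by
# inspecting spectrograms rather than by a synthetic test like this one.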
#
# <img src="https://raw.githubusercontent.com/adithyabsk/bird_audio/main/notebooks/assets/audio_comparing_filters.png" alt="Filter Comparisons" style="width: 75%; display: block; margin-left: auto; margin-right: auto;"/>
#
# The below code snippet shows the process of loading the `.mp3` file and performing the above filtering steps before
# saving as a `pd.DataFrame`, which is what ts-fresh expects.

# + tags=[] cell_id="00018-f28125f0-d0a4-42d6-8b16-2a260199c1b1" deepnote_to_be_reexecuted=false source_hash="496b2af7" execution_start=1621312147870 execution_millis=92 deepnote_cell_type="code"
display_code_block(
    "https://raw.githubusercontent.com/adithyabsk/bird_audio/main/pracds_final/features/proc_audio.py",
    18,
    46,
)

# + [markdown] tags=[] cell_id="00017-2998f038-06c6-4c7f-8c1b-c4ddc702cb90" deepnote_cell_type="markdown"
# #### Feature Selection & Extraction
#
# We used ts-fresh to featurize each audio array after unpacking and filtering it; featurizing one array at a time
# avoids running out of memory. ts-fresh takes in dataframes with an id column, time column, and value column.
#
# <img src="https://raw.githubusercontent.com/adithyabsk/bird_audio/main/notebooks/assets/timeseriesdf_singleid.png" alt="Time Series Input DF for a Single ID" style="width: 20%; display: block; margin-left: auto; margin-right: auto;"/>
#
# ts-fresh provides feature calculator presets, but because of their long runtimes, and that of `librosa.load` (13+ hours
# for 5% of the dataset), we manually specified the following small set of features based on our domain understanding of
# bird audio analysis.
#
# Lastly, we passed this "static" time series feature dataframe into a similar XGBoost model (from above) to predict the
# output class.
# + tags=[] cell_id="00022-43cdda12-dce5-466b-9fa8-146ddf835f04" deepnote_to_be_reexecuted=false source_hash="6d3af666" execution_start=1621312147958 execution_millis=19 deepnote_cell_type="code"
display_code_block(
    "https://raw.githubusercontent.com/adithyabsk/bird_audio/main/pracds_final/features/proc_audio.py",
    71,
    102,
)

# + [markdown] cell_id="00018-80daee04-77cd-4ffe-933d-fce90cb81112" deepnote_cell_type="markdown"
# <img src="https://raw.githubusercontent.com/adithyabsk/bird_audio/main/notebooks/assets/audio_Xdf.png" alt="Feat Output for all IDs" style="width: 66%; display: block; margin-left: auto; margin-right: auto;"/>

# + [markdown] tags=[] cell_id="00021-6305e544-bfaf-448e-bc79-a3baa50ad2f8" deepnote_cell_type="markdown"
# ## Spectrogram Classification Model
#
# [Training Notebook Link](https://github.com/adithyabsk/bird_audio/blob/main/notebooks/5.0-ab-sonogram-model.ipynb)
#
# We used a computer vision approach to analyze spectrograms using a [fast.ai](https://docs.fast.ai/) pre-trained model. We used an [`xresnet18`](https://github.com/fastai/fastai/blob/d7779196359c8e497a80e2f7f85c327318777c1a/fastai/vision/models/xresnet.py#L64) architecture pre-trained on ImageNet.
#
# <a href="https://github.com/fastai/fastai">
# <img src="https://fastpages.fast.ai/images/logo.png" alt="Fast AI Logo" style="width: 15%; display: block; margin-left: auto; margin-right: auto;"/>
# </a>
#
# We loaded the data using fast.ai's `ImageDataLoader`. The model was then cut at the pooling layer (with its weights
# frozen), and its last layers were trained to utilize transfer learning on our spectrogram images. A diagram of the
# architecture, pulled directly from the original resnet paper, is included below.
#
# <img src="https://yann-leguilly.gitlab.io/img/bagstricks/bag_tricks_figure_1.webp" alt="ResNet50 Architecture" style="width: 30%; display: block; margin-left: auto; margin-right: auto;"/>
#
# The model itself was trained on a Tesla K80 using [Google Colab](https://colab.research.google.com/signup) to speed up
# the training process. Additionally, we used [Weights and Biases](https://wandb.ai/site) to track the training and
# improve the model tuning. We've listed the main snippets of code below that handle the training process.

# + tags=[] cell_id="00025-69ceacb1-3871-42bb-8226-048ff891c3fa" deepnote_to_be_reexecuted=false source_hash="b7bbbce4" execution_start=1621313567751 execution_millis=22 deepnote_cell_type="code"
sonogram_notebook = requests.get(
    "https://raw.githubusercontent.com/adithyabsk/bird_audio/main/notebooks/5.0-ab-sonogram-model.ipynb"
).json()
datasetup_snippet = "".join(sonogram_notebook["cells"][5]["source"])
smodel_train_snippet = "".join(sonogram_notebook["cells"][6]["source"])
ipd.display(
    ipd.Markdown(
        f"""
```python
{datasetup_snippet}
{smodel_train_snippet}
```
"""
    )
)

# + [markdown] cell_id="00021-6151fea3-72b3-4168-9880-88c243f10ed6" deepnote_cell_type="markdown"
# ## Results
#
# Across our three models, we achieved scores in a range of 64-77%. This is above the baseline score of 55% (mean of
# labels), and we believe that with more time to tune and ensemble the models, one could achieve an even more accurate
# classifier. Given that the metadata model clearly outperforms the other two in accuracy, we are encouraged by the
# amount of room the time-series-based and sonogram-based models still have for improvement.
#
# <center>
#
# | Model | Train Log Loss | Test Log Loss | Train Accuracy | Test Accuracy |
# |-|-|-|-|-|
# | Metadata Model | 0.331 | 0.507 | 0.879 | 0.773 |
# | Audio Model | 0.255 | 0.694 | 0.957 | 0.639 |
# | Spectrogram Model | 0.661 | 0.675 | 0.682 | 0.682 |
# | Baseline | 0.69 | 0.55 | 0.55 | 0.54 |
#
# </center>
#
#
# ### Plots
#
# #### Metadata Model
#
# We note a plateau in the XGBoost validation accuracy, which tends to suggest that further improvements could be
# achieved with early stopping.
#
# <img src="https://raw.githubusercontent.com/adithyabsk/bird_audio/main/notebooks/assets/svc_meta_xgb_loss.png" alt="XGBoost Log Loss" style="width: 30%; display: block; margin-left: auto; margin-right: auto;"/>
#
# Additionally, due to the nature of the decision tree based model we are able to compute feature importance. The most
# important features include the genera - this is not so surprising when we recall our genus-count distribution and see
# that the genera here are mostly those with recordings that are almost entirely songs or calls. The other important
# feature is month - again, we recall that in the spring the ratio of songs to calls goes up, so time of year is a "good"
# feature.
#
# <img src="https://raw.githubusercontent.com/adithyabsk/bird_audio/main/notebooks/assets/svc_meta_xgb_imp.png" alt="XGBoost Feature Importance" style="width: 40%; display: block; margin-left: auto; margin-right: auto;"/>
#
# #### Time Series Model
#
# We can see that the test loss increases due to over-fitting, also evidenced by the very high training accuracy. This is
# a potential area of improvement in further research.
#
# <img src="https://raw.githubusercontent.com/adithyabsk/bird_audio/main/notebooks/assets/svc_audiotime_xgb_loss.png" alt="LogReg Log Loss" style="width: 50%; display: block; margin-left: auto; margin-right: auto;"/>
#
# #### Spectrogram Model
#
# This is the direct output from WandB which depicts the training process for the fine-tuned xresnet model.
# It is important to note that the X axis shows steps and not epochs, as this model was only trained for a single epoch
# (to save time and memory).
#
# <img src="https://raw.githubusercontent.com/adithyabsk/bird_audio/main/notebooks/assets/cnn_train_loss.png" alt="Wandb Train Output" style="width: 40%; display: block; margin-left: auto; margin-right: auto;"/>

# + [markdown] tags=[] cell_id="00027-a1d83e5d-3845-471e-a552-c760b2ca975d" deepnote_cell_type="markdown"
# ## Future Work
#
# We would like to note that there are a couple of immediate next steps that the project could take to dramatically
# improve the model performance:
#
# - Ensembling the 3 models using a [`VotingClassifier`](https://scikit-learn.org/stable/modules/generated/sklearn.ensemble.VotingClassifier.html)
# - More training time for the Spectrogram model (only 30 minutes was provided for fine-tuning)
#   * Additional epochs (only 1 epoch was provided)
# - Filtering features in the audio classification model (ts-fresh likely generates more features than are needed)
#
# Long term: integrate the model with Xeno Canto to provide tag suggestions based on the audio clip.

# + [markdown] tags=[] cell_id="00026-74080aa6-cd0b-4563-8a9d-d0d68f85b704" deepnote_cell_type="markdown"
# ## Conclusion
#
# The classification of song vs call is the first distinction one can make in bird audio data across species, and on its
# own can give insights into the number of predators in an ecosystem, the timing of mating season, and other behaviors.
# It could also be valuable as part of a larger system of models. This report presents a promising start on this problem,
# with three separate machine learning models of reasonable accuracy. These models will likely prove quite handy
# in downstream classification tasks that look to find species, gender, location, and other parameters from the bird audio
# + [markdown] cell_id="00022-1e1949a9-24c9-4dd3-b9bf-9ff82eedeb9d" deepnote_cell_type="markdown" # ## References # 1. "A Beginner’s Guide to Common Bird Sounds and What They Mean." [*Audubon.org.*](https://www.audubon.org/news/a-beginners-guide-common-bird-sounds-and-what-they-mean) # 2. "Two Types of Communication Between Birds: Understanding Bird Language Songs And Calls." [*Youtube.*](https://www.youtube.com/watch?v=4_1zIwEENt8) # 3. "Bird Vocalization." [*Wikipedia.*](https://en.wikipedia.org/wiki/Bird_vocalization#Function) # 4. <NAME>, et al. “Heavy Metal Pollution Affects Dawn Singing Behaviour in a Small Passerine Bird.” *Oecologia*, vol. 145, no. 3, 2005, pp. 504–509. [JSTOR](https://www.jstor.org/stable/20062442) # 5. <NAME>.; <NAME>; <NAME>. 2014. Invasive plant erodes local song diversity in a migratory passerine. *Ecology.* 95(2): 458-465. [Ecological Society of America](https://www.fs.usda.gov/treesearch/pubs/46856) # 6. <NAME>. (2004), Bird Calls: Their Potential for Behavioral Neurobiology. Annals of the New York Academy of Sciences, 1016: 31-44. [https://doi.org/10.1196/annals.1298.034](https://doi.org/10.1196/annals.1298.034) # 7. "These birds 'retweet' alarm calls—but are careful about spreading rumors." [*National Geographic.*](https://www.nationalgeographic.com/animals/article/nuthatches-chickadees-communication-danger) # 8. Templeton, <NAME>., et al. “Allometry of Alarm Calls: Black-Capped Chickadees Encode Information About Predator Size.” Science, vol. 308, no. 5730, American Association for the Advancement of Science, 2005, pp. 1934–37, [doi:10.1126/science.1108841](https://doi.org/10.1126/science.1108841). # 9. <NAME>., <NAME>., <NAME>. et al. Gender identification using acoustic analysis in birds without external sexual dimorphism. Avian Res 6, 20 (2015). [https://doi.org/10.1186/s40657-015-0033-y](https://doi.org/10.1186/s40657-015-0033-y) # 10. "About yellowhammers." [*Yellowhammer Dialects.*](http://www.yellowhammers.net/about) # 11. 
<NAME>, <NAME>, <NAME>, <NAME>, <NAME>, Causes and consequences of intraspecific variation in animal responses to anthropogenic noise, Behavioral Ecology, Volume 30, Issue 6, November/December 2019, Pages 1501–1511, [https://doi.org/10.1093/beheco/arz114](https://doi.org/10.1093/beheco/arz114)
# 12. "Open-source Version Control System for Machine Learning Projects." [*DVC.*](https://dvc.org/)
# 13. [*xeno-canto.*](https://www.xeno-canto.org/explore/api)
# 14. [*scikit-learn.*](https://scikit-learn.org/stable/index.html)
# 15. [*xgboost.*](https://xgboost.readthedocs.io/en/latest/)
# 16. [*fast.ai.*](https://docs.fast.ai/)
#
# ### Metrics
#
# #### Word Count
#
# TODO
#
# #### Code Line count
#
# We used [CLOC](https://github.com/AlDanial/cloc) to generate the code line counts
#
# | Language  | Files  | Code  |
# |-|-|-|
# | Jupyter Notebook  | 9  | 1195  |
# | Python  | 8  | 397  |
# | Sum  | **17**  | **1592**  |
notebooks/0.0-mk-final-report.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # %matplotlib inline # # Tutorial 0: Preparing your data for gradient analysis # ===================================================== # In this example, we will introduce how to preprocess raw MRI data and how # to prepare it for subsequent gradient analysis in the next tutorials. # # Requirements # ------------ # For this tutorial, you will need to install the Python package # `load_confounds <https://github.com/SIMEXP/fmriprep_load_confounds>`_. You can # do it using ``pip``:: # # pip install load_confounds # # # Preprocessing # ------------- # Begin with an MRI dataset that is organized in `BIDS # <https://bids.neuroimaging.io/>`_ format. We recommend preprocessing your data # using `fmriprep <http://fmriprep.readthedocs.io/>`_, as described below, but # any preprocessing pipeline will work. 
# # Following is example code to run `fmriprep <http://fmriprep.readthedocs.io/>`_ # using docker from the command line:: # # docker run -ti --rm \ # -v <local_BIDS_data_dir>:/data:ro \ # -v <local_output_dir>:/out poldracklab/fmriprep:latest \ # --output-spaces fsaverage5 \ # --fs-license-file license.txt \ # /data /out participant # # <div class="alert alert-info"><h4>Note</h4><p>For this tutorial, it is crucial to output the data onto a cortical surface # template space.</p></div> # # Import the dataset as timeseries # ++++++++++++++++++++++++++++++++ # The timeseries should be a numpy array with the dimensions: nodes x timepoints # # Following is an example for reading in data:: # # import nibabel as nib # import numpy as np # # filename = 'filename.{}.mgz' # where {} will be replaced with 'lh' and 'rh' # timeseries = [None] * 2 # for i, h in enumerate(['lh', 'rh']): # timeseries[i] = nib.load(filename.format(h)).get_fdata().squeeze() # timeseries = np.vstack(timeseries) # # # As a **working example**, simply fetch timeseries: # # from brainspace.datasets import fetch_timeseries_preprocessing timeseries = fetch_timeseries_preprocessing() # Confound regression # ++++++++++++++++++++++++ # To remove confound regressors from the output of the fmriprep pipeline, first # extract the confound columns. 
For example:: # # import load_confounds # confounds_out = load_confounds("path to confound file", # strategy='minimal', # n_components=0.95, # motion_model='6params') # # # As a **working example**, simply read in confounds # # from brainspace.datasets import load_confounds_preprocessing confounds_out = load_confounds_preprocessing() # Do the confound regression # # from nilearn import signal clean_ts = signal.clean(timeseries.T, confounds=confounds_out).T # And extract the cleaned timeseries onto a set of labels # # # + import numpy as np from nilearn import datasets from brainspace.utils.parcellation import reduce_by_labels # Fetch surface atlas atlas = datasets.fetch_atlas_surf_destrieux() # Remove non-cortex regions regions = atlas['labels'].copy() masked_regions = [b'Medial_wall', b'Unknown'] masked_labels = [regions.index(r) for r in masked_regions] for r in masked_regions: regions.remove(r) # Build Destrieux parcellation and mask labeling = np.concatenate([atlas['map_left'], atlas['map_right']]) mask = ~np.isin(labeling, masked_labels) # Distinct labels for left and right hemispheres lab_lh = atlas['map_left'] labeling[lab_lh.size:] += lab_lh.max() + 1 # extract mean timeseries for each label seed_ts = reduce_by_labels(clean_ts[mask], labeling[mask], axis=1, red_op='mean') # - # Calculate functional connectivity matrix # ++++++++++++++++++++++++++++++++++++++++ # The following example uses # `nilearn <https://nilearn.github.io/auto_examples/03_connectivity/plot_ # signal_extraction.html#compute-and-display-a-correlation-matrix/>`_: # # # + from nilearn.connectome import ConnectivityMeasure correlation_measure = ConnectivityMeasure(kind='correlation') correlation_matrix = correlation_measure.fit_transform([seed_ts.T])[0] # - # Plot the correlation matrix: # # # + from nilearn import plotting # Reduce matrix size, only for visualization purposes mat_mask = np.where(np.std(correlation_matrix, axis=1) > 0.2)[0] c = correlation_matrix[mat_mask][:, mat_mask] # 
Create corresponding region names regions_list = ['%s_%s' % (h, r.decode()) for h in ['L', 'R'] for r in regions] masked_regions = [regions_list[i] for i in mat_mask] corr_plot = plotting.plot_matrix(c, figure=(15, 15), labels=masked_regions, vmax=0.8, vmin=-0.8, reorder=True) # - # Run gradient analysis and visualize # +++++++++++++++++++++++++++++++++++ # # Run gradient analysis # # # + from brainspace.gradient import GradientMaps gm = GradientMaps(n_components=2, random_state=0) gm.fit(correlation_matrix) # - # Visualize results # # # + from brainspace.datasets import load_fsa5 from brainspace.plotting import plot_hemispheres from brainspace.utils.parcellation import map_to_labels # Map gradients to original parcels grad = [None] * 2 for i, g in enumerate(gm.gradients_.T): grad[i] = map_to_labels(g, labeling, mask=mask, fill=np.nan) # Load fsaverage5 surfaces surf_lh, surf_rh = load_fsa5() # sphinx_gallery_thumbnail_number = 2 plot_hemispheres(surf_lh, surf_rh, array_name=grad, size=(1200, 400), cmap='viridis_r', color_bar=True, label_text=['Grad1', 'Grad2'], zoom=1.5) # - # This concludes the setup tutorial. The following tutorials can be run using # either the output generated here or the example data. # #
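The parcellation step above hinges on `reduce_by_labels`, which averages vertex timeseries within each label. As a rough, dependency-free sketch of what that reduction does (the `mean_by_labels` helper and the toy array below are illustrative stand-ins, not part of the BrainSpace API):

```python
import numpy as np

def mean_by_labels(data, labels):
    """Average rows of `data` (nodes x timepoints) within each label.

    A simplified stand-in for brainspace's reduce_by_labels with
    red_op='mean' applied along the node axis."""
    uniq = np.unique(labels)
    out = np.empty((uniq.size, data.shape[1]))
    for i, lab in enumerate(uniq):
        out[i] = data[labels == lab].mean(axis=0)
    return out

# Toy example: 4 nodes, 3 timepoints, 2 labels
ts = np.array([[1.0, 2.0, 3.0],
               [3.0, 4.0, 5.0],
               [10.0, 10.0, 10.0],
               [20.0, 20.0, 20.0]])
labels = np.array([0, 0, 1, 1])
seed_ts = mean_by_labels(ts, labels)
print(seed_ts)  # [[ 2.  3.  4.], [15. 15. 15.]]
```

The real function additionally supports masks and other reduction operations (`median`, `max`, ...), but the label-wise mean shown here is all that the tutorial's `seed_ts` computation requires.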
docs/python_doc/auto_examples/plot_tutorial0.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 2 # language: python # name: python2 # --- # + # %matplotlib inline import os import pandas import numpy import sklearn import matplotlib.pyplot as plt from sklearn.tree import DecisionTreeRegressor from sklearn.ensemble import AdaBoostRegressor # + polls_file_name = 'full_bracket.poll_of_polls.csv' streak_file_name = 'composite_streak_data.csv' points_file_name = 'team_points.csv' polls = pandas.read_csv(polls_file_name, sep='|') streak = pandas.read_csv(streak_file_name, sep='|').drop('Unnamed: 0', axis=1) points = pandas.read_csv(points_file_name, sep='|').drop('Unnamed: 0', axis=1) polls[:5] # + root_path = 'data/kaggle_2018/DataFiles' teams = pandas.read_csv(os.path.join(root_path, 'Teams.csv')) slots = pandas.read_csv(os.path.join(root_path, 'NCAATourneySlots.csv')) seeds = pandas.read_csv(os.path.join(root_path, 'NCAATourneySeeds.csv')) results = pandas.read_csv(os.path.join(root_path, 'NCAATourneyCompactResults.csv')) results['minID'] = results[['WTeamID', 'LTeamID']].min(axis=1) results['maxID'] = results[['WTeamID', 'LTeamID']].max(axis=1) # + # Load all historical tournament data rounds_by_season = dict() for season in range(2010, 2017+1): # initialize df = slots.merge(seeds, left_on=['Season', 'StrongSeed'], right_on=['Season', 'Seed'], how='left') df = df.rename(index=str, columns={"TeamID": "StrongTeamID"}).drop('Seed', axis=1) df = df.merge(seeds, left_on=['Season', 'WeakSeed'], right_on=['Season', 'Seed'], how='left') df = df.rename(index=str, columns={"TeamID": "WeakTeamID"}).drop('Seed', axis=1) # play-in games pi_games = ~(df['Slot'].str.contains('R')) & (df['Season'] == season) pi = df[pi_games] pi.loc[pi_games, 'minID'] = pi[['StrongTeamID', 'WeakTeamID']].min(axis=1) pi.loc[pi_games, 'maxID'] = pi[['StrongTeamID', 'WeakTeamID']].max(axis=1) pi = pi.merge(results, 
on=['Season', 'minID', 'maxID'], how='left') pi = pi.merge(teams[['TeamID', 'TeamName']], left_on=['StrongTeamID'], right_on=['TeamID'], how='left') pi = pi.rename(index=str, columns={"TeamName": "StrongTeamName"}).drop('TeamID', axis=1) pi = pi.merge(teams[['TeamID', 'TeamName']], left_on=['WeakTeamID'], right_on=['TeamID'], how='left') pi = pi.rename(index=str, columns={"TeamName": "WeakTeamName"}).drop('TeamID', axis=1) pi = pi.merge(teams[['TeamID', 'TeamName']], left_on=['WTeamID'], right_on=['TeamID'], how='left') pi = pi.rename(index=str, columns={"TeamName": "WTeamName"}).drop('TeamID', axis=1) pi = pi.merge(teams[['TeamID', 'TeamName']], left_on=['LTeamID'], right_on=['TeamID'], how='left') pi = pi.rename(index=str, columns={"TeamName": "LTeamName"}).drop('TeamID', axis=1) pi # regular rounds rounds = [pi] for rnd in range(1, 6 + 1): last_rnd = rounds[-1] r_games = (df['Slot'].str.contains('R{}..'.format(rnd))) & (df['Season'] == season) r = df[r_games] r = r.merge(last_rnd[['Slot', 'WTeamID']], left_on='StrongSeed', right_on='Slot', how='left', suffixes=['', '__']) r.loc[r['StrongTeamID'].isnull(), 'StrongTeamID'] = r['WTeamID'] r = r.drop(['Slot__', 'WTeamID'], axis=1) r = r.merge(last_rnd[['Slot', 'WTeamID']], left_on='WeakSeed', right_on='Slot', how='left', suffixes=['', '__']) r.loc[r['WeakTeamID'].isnull(), 'WeakTeamID'] = r['WTeamID'] r = r.drop(['Slot__', 'WTeamID'], axis=1) r['minID'] = r[['StrongTeamID', 'WeakTeamID']].min(axis=1) r['maxID'] = r[['StrongTeamID', 'WeakTeamID']].max(axis=1) r = r.merge(results, on=['Season', 'minID', 'maxID'], how='left') r = r.merge(teams[['TeamID', 'TeamName']], left_on=['StrongTeamID'], right_on=['TeamID'], how='left') r = r.rename(index=str, columns={"TeamName": "StrongTeamName"}).drop('TeamID', axis=1) r = r.merge(teams[['TeamID', 'TeamName']], left_on=['WeakTeamID'], right_on=['TeamID'], how='left') r = r.rename(index=str, columns={"TeamName": "WeakTeamName"}).drop('TeamID', axis=1) r = 
r.merge(teams[['TeamID', 'TeamName']], left_on=['WTeamID'], right_on=['TeamID'], how='left') r = r.rename(index=str, columns={"TeamName": "WTeamName"}).drop('TeamID', axis=1) r = r.merge(teams[['TeamID', 'TeamName']], left_on=['LTeamID'], right_on=['TeamID'], how='left') r = r.rename(index=str, columns={"TeamName": "LTeamName"}).drop('TeamID', axis=1) rounds.append(r.copy()) rounds_by_season[season] = rounds # - def score_bracket(season, ranking, beta, historical_rounds, weights): # season: a year between 2010 and 2017 for which the ranking was created # ranking: ['TeamID', 'Rank'] - all D1 teams ranked as of the last day before the NCAA Tournament # historical_rounds: all historical NCAA tournament results by round between 2010 and 2017 # weights: number of points to award per round in the scoring system # Get the Tournament data for the indicated season rounds = historical_rounds[season] # Get the 64 teams in Round 1 teams_64 = map(int, list(rounds[1]['StrongTeamID']) + list(rounds[1]['WeakTeamID'])) # Get the bracket implied by the ranking bracket_64 = ranking[ranking['TeamID'].isin(teams_64)] initial_rank_64 = bracket_64['Rank'].copy() score = 0 # First round - determine matchups and winners rnd = 1 # Add noise to the rank each round bracket_64['Rank'] = initial_rank_64 + numpy.random.normal(scale=beta, size=bracket_64.shape[0]) x = rounds[rnd].merge(bracket_64, left_on='WTeamID', right_on='TeamID', how='left') x = x.rename(index=str, columns={"Rank": "WTeamRank"}).drop('TeamID', axis=1) x = x.merge(bracket_64, left_on='LTeamID', right_on='TeamID', how='left') x = x.rename(index=str, columns={"Rank": "LTeamRank"}).drop('TeamID', axis=1) x['Correct'] = (x['WTeamRank'] < x['LTeamRank']) # Update score picked = x['Correct'].sum() score += weights[rnd]*picked # print rnd, x['Correct'].count(), picked, weights[rnd]*picked, total # Eliminate wrong picks my_remaining_teams = x[x['Correct'] == True]['WTeamID'] # Remaining rounds - determine matchups and winners for rnd 
in range(2, 6+1): #my_remaining_bracket = bracket[bracket['TeamID'].isin(my_remaining_teams)] #my_remaining_bracket = ranking[ranking['TeamID'].isin(my_remaining_teams)] # Add noise to the rank each round #bracket_64['Rank'] = initial_rank_64 + numpy.random.normal(scale=beta/float(rnd), size=bracket_64.shape[0]) bracket_64['Rank'] = initial_rank_64 + numpy.random.normal(scale=beta/numpy.power(2.0, rnd), size=bracket_64.shape[0]) x = rounds[rnd].merge(bracket_64, left_on='WTeamID', right_on='TeamID', how='left') x = x.rename(index=str, columns={"Rank": "WTeamRank"}).drop('TeamID', axis=1) x = x.merge(bracket_64, left_on='LTeamID', right_on='TeamID', how='left') x = x.rename(index=str, columns={"Rank": "LTeamRank"}).drop('TeamID', axis=1) x['Correct'] = (x['WTeamRank'] < x['LTeamRank']) # Update score picked = x[x['WTeamID'].isin(my_remaining_teams)]['Correct'].sum() score += weights[rnd]*picked #print rnd, x['Correct'].count(), picked, weights[rnd]*picked, total # Eliminate wrong picks my_remaining_teams = x[(x['Correct'] == True) & (x['WTeamID'].isin(my_remaining_teams))]['WTeamID'] return score # + scores = list() for season in range(2010, 2017+1): for i in range(20): bracket_file_name = 'bracket.combined.{}.{}.csv'.format(season, i) schema = [ ('TeamID', int), ('Rank', float) ] bracket = pandas.read_csv(bracket_file_name, sep=' ', names=[x[0] for x in schema], dtype=dict(schema)) # + weights = {1: 1, 2: 2, 3: 4, 4: 8, 5: 16, 6: 32} score_bracket(season, bracket, rounds_by_season, weights) # + import random # - 5*5*5 # + #n = 60 #trials = 10 n = 100 trials = 1 alpha_min = -1.0 alpha_max = 0.0 alpha_n = 100 beta_min = numpy.log(0.001) beta_max = numpy.log(0.1) beta_n = n h_model_data = dict([(f, list()) for f in ['Season', 'alpha', 'beta', 'score']]) #data = None for season in range(2010, 2017+1): print season for alpha in [0.0]: #for alpha in numpy.linspace(-0.2, 0.2, n): # for alpha in numpy.random.random(alpha_n)*(alpha_max - alpha_min) + alpha_min: for beta in 
numpy.linspace(0.01, 1.0, n): #for beta in numpy.logspace(-3.0, 0.0, n): #for beta in numpy.random.random(beta_n)*(beta_max - beta_min) + beta_min: #for beta in numpy.exp(numpy.random.random(beta_n)*(beta_max - beta_min) + beta_min): #for beta in [0.0]: for i in range(trials): w = streak[(streak['Season'] == season) & (streak['StreakLen'] == 6)] \ .merge(polls[['TeamID', 'Season', 'norm_rank', 'noise']], on=['Season', 'TeamID'], how='left') \ .sort_values('norm_rank') #w['Rank'] = w['norm_rank'] - alpha*w['streak'] + beta*(numpy.random.random(w.shape[0]) - 0.5) #w['Rank'] = w['norm_rank'] - alpha*w['streak'] #w['Rank'] = w['norm_rank'] + beta*(numpy.random.random(w.shape[0]) - 0.5) w['Rank'] = w['norm_rank'] w['alpha'] = alpha w['beta'] = beta w['Season'] = season score = score_bracket(season, w[['TeamID', 'Rank']], beta, rounds_by_season, weights) # print season, alpha, beta, score h_model_data['Season'].append(season) h_model_data['alpha'].append(alpha) h_model_data['beta'].append(beta) h_model_data['score'].append(score) #if data is None: # data = w[['alpha', 'beta', 'Season', 'TeamID', 'Rank']].copy() #else: # data = data.append(w[['alpha', 'beta', 'Season', 'TeamID', 'Rank']].copy()) h_model_data = pandas.DataFrame(h_model_data) # - regr = AdaBoostRegressor(DecisionTreeRegressor(max_depth=6), n_estimators=500) # + holdout_season = random.sample(range(2010, 2017+1), 1)[0] X = h_model_data[h_model_data['Season'] != holdout_season][['beta']] y = h_model_data[h_model_data['Season'] != holdout_season]['score'] print holdout_season print X.shape regr.fit(X, y) # + # randomness is necessary, but not much - it's generally a negative trend # the right amount is the smallest amount possible that has any effect y_pred = regr.predict(h_model_data[h_model_data['Season'] == holdout_season][['beta']]) axis_feature = 'beta' plt.figure() px, py = zip(*sorted(zip(X[axis_feature], y))) plt.scatter(px, py, c="k", label="training samples") px, py_pred = 
zip(*sorted(zip(X[axis_feature], y_pred))) plt.scatter(px, py_pred, c="r", label="n_estimators=300") plt.xlabel(axis_feature) plt.ylabel("target") plt.title("Boosted Decision Tree Regression") plt.legend(bbox_to_anchor=(1.5, 1.0)) plt.show() # - edges = [0, 0.125, 0.25, 0.4] plt.boxplot([pya[(pxa > lower) & (pxa <= upper)] for lower, upper in zip(edges[:-1], edges[1:])]); # + # randomness is necessary, but not much - it's generally a negative trend # the right amount is the smallest amount possible that has any effect y_pred = regr.predict(h_model_data[h_model_data['Season'] == holdout_season][['beta']]) axis_feature = 'beta' plt.figure() px, py = zip(*sorted(zip(X[axis_feature], y))) plt.scatter(px, py, c="k", label="training samples") px, py_pred = zip(*sorted(zip(X[axis_feature], y_pred))) plt.scatter(px, py_pred, c="r", label="n_estimators=300") plt.xlabel(axis_feature) plt.ylabel("target") plt.title("Boosted Decision Tree Regression") plt.legend(bbox_to_anchor=(1.5, 1.0)) plt.show() # - edges = [0, 0.125, 0.25, 0.4] plt.boxplot([pya[(pxa > lower) & (pxa <= upper)] for lower, upper in zip(edges[:-1], edges[1:])]); # + # randomness is necessary, but not much - it's generally a negative trend # the right amount is the smallest amount possible that has any effect y_pred = regr.predict(h_model_data[h_model_data['Season'] == holdout_season][['beta']]) axis_feature = 'beta' plt.figure() px, py = zip(*sorted(zip(X[axis_feature], y))) plt.scatter(px, py, c="k", label="training samples") px, py_pred = zip(*sorted(zip(X[axis_feature], y_pred))) plt.scatter(px, py_pred, c="r", label="n_estimators=300") plt.xlabel(axis_feature) plt.ylabel("target") plt.title("Boosted Decision Tree Regression") plt.legend(bbox_to_anchor=(1.5, 1.0)) plt.show() # + # what happens with a positive correlation on streaks? 
y_pred = regr.predict(h_model_data[h_model_data['Season'] == holdout_season][['alpha']]) axis_feature = 'alpha' plt.figure() px, py = zip(*sorted(zip(X[axis_feature], y))) plt.scatter(px, py, c="k", label="training samples") px, py_pred = zip(*sorted(zip(X[axis_feature], y_pred))) plt.scatter(px, py_pred, c="r", label="n_estimators=300") plt.xlabel(axis_feature) plt.ylabel("target") plt.title("Boosted Decision Tree Regression") plt.legend(bbox_to_anchor=(1.5, 1.0)) plt.show() # + # what happens with a positive correlation on streaks? y_pred = regr.predict(h_model_data[h_model_data['Season'] == holdout_season][['alpha']]) axis_feature = 'alpha' plt.figure() px, py = zip(*sorted(zip(X[axis_feature], y))) plt.scatter(px, py, c="k", label="training samples") px, py_pred = zip(*sorted(zip(X[axis_feature], y_pred))) plt.scatter(px, py_pred, c="r", label="n_estimators=300") plt.xlabel(axis_feature) plt.ylabel("target") plt.title("Boosted Decision Tree Regression") plt.legend(bbox_to_anchor=(1.5, 1.0)) plt.show() # + # only use a dash of randomness # randomness is a binary question y_pred = regr.predict(h_model_data[h_model_data['Season'] == holdout_season][['beta']]) axis_feature = 'beta' plt.figure() px, py = zip(*sorted(zip(X[axis_feature], y))) plt.scatter(px, py, c="k", label="training samples") px, py_pred = zip(*sorted(zip(X[axis_feature], y_pred))) plt.scatter(px, py_pred, c="r", label="n_estimators=300") plt.xlabel(axis_feature) plt.ylabel("target") plt.title("Boosted Decision Tree Regression") plt.legend(bbox_to_anchor=(1.5, 1.0)) plt.show() # + # randomness is necessary, but not much - it's generally a negative trend # the right amount is the smallest amount possible that has any effect y_pred = regr.predict(h_model_data[h_model_data['Season'] == holdout_season][['beta']]) axis_feature = 'beta' plt.figure() px, py = zip(*sorted(zip(X[axis_feature], y))) plt.scatter(px, py, c="k", label="training samples") px, py_pred = zip(*sorted(zip(X[axis_feature], y_pred))) 
plt.scatter(px, py_pred, c="r", label="n_estimators=300") plt.xlabel(axis_feature) plt.ylabel("target") plt.title("Boosted Decision Tree Regression") plt.legend(bbox_to_anchor=(1.5, 1.0)) plt.show() # + y_pred = regr.predict(h_model_data[h_model_data['Season'] == holdout_season][['alpha']]) axis_feature = 'alpha' plt.figure() px, py = zip(*sorted(zip(X[axis_feature], y))) plt.scatter(px, py, c="k", label="training samples") px, py_pred = zip(*sorted(zip(X[axis_feature], y_pred))) plt.scatter(px, py_pred, c="r", label="n_estimators=300") plt.xlabel(axis_feature) plt.ylabel("target") plt.title("Boosted Decision Tree Regression") plt.legend(bbox_to_anchor=(1.5, 1.0)) plt.show() # + trials = 100 beta = 0.05 score_data = dict([(f, list()) for f in ['Season', 'alpha', 'beta', 'streak', 'score']]) for season in range(2010, 2017+1): print season for alpha in [-0.2, -0.1, 0.0]: for streak_len in [6, 10]: for i in range(trials): w = streak[(streak['Season'] == season) & (streak['StreakLen'] == streak_len)] \ .merge(polls[['TeamID', 'Season', 'norm_rank', 'noise']], on=['Season', 'TeamID'], how='left') \ .sort_values('norm_rank') w['Rank'] = w['norm_rank'] - alpha*w['streak'] + beta*(numpy.random.random(w.shape[0]) - 0.5) # w['Rank'] = w['norm_rank'] - alpha*w['streak'] # w['Rank'] = w['norm_rank'] + beta*(numpy.random.random(w.shape[0]) - 0.5) w['alpha'] = alpha w['beta'] = beta w['streak'] = streak_len w['Season'] = season score = score_bracket(season, w[['TeamID', 'Rank']], beta, rounds_by_season, weights) score_data['Season'].append(season) score_data['alpha'].append(alpha) score_data['beta'].append(beta) score_data['streak'].append(streak_len) score_data['score'].append(score) score_data = pandas.DataFrame(score_data) # - gb = score_data.groupby(['Season', 'alpha', 'beta', 'streak']) gb.quantile(0.75) # + plt.figure(figsize=(20,10)) plt_data = list() for season in range(2010, 2017+1): #plt_data.append(list(score_data[(score_data['alpha'] == 0.0) & (score_data['streak'] == 6) & 
(score_data['Season'] == season)]['score'])) plt_data.append(list(score_data[(score_data['alpha'] == -0.1) & (score_data['streak'] == 6) & (score_data['Season'] == season)]['score'])) #plt_data.append(list(score_data[(score_data['alpha'] == -0.2) & (score_data['streak'] == 6) & (score_data['Season'] == season)]['score'])) #plt_data.append(list(score_data[(score_data['alpha'] == 0.0) & (score_data['streak'] == 10) & (score_data['Season'] == season)]['score'])) plt_data.append(list(score_data[(score_data['alpha'] == -0.1) & (score_data['streak'] == 10) & (score_data['Season'] == season)]['score'])) #plt_data.append(list(score_data[(score_data['alpha'] == -0.2) & (score_data['streak'] == 10) & (score_data['Season'] == season)]['score'])) plt.boxplot(plt_data);
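The scoring system above uses weights that double each round (`{1: 1, 2: 2, 3: 4, 4: 8, 5: 16, 6: 32}`). A quick sanity check of that scheme, assuming a standard 64-team bracket (the per-round game counts below are standard tournament structure, not taken from the notebook's data):

```python
# Games per round in a 64-team single-elimination bracket
games = {1: 32, 2: 16, 3: 8, 4: 4, 5: 2, 6: 1}
# The doubling point weights used by score_bracket
weights = {1: 1, 2: 2, 3: 4, 4: 8, 5: 16, 6: 32}

# With doubling weights every round carries the same total value,
# so early-round accuracy and late-round accuracy matter equally.
per_round = {rnd: games[rnd] * weights[rnd] for rnd in games}
perfect_score = sum(per_round.values())
print(per_round)      # each round contributes 32 points
print(perfect_score)  # 192 for a perfect bracket
```

This is why a single correct championship pick (32 points) offsets an entire round of first-round misses, which in turn is what makes the `beta` noise experiments above interesting: randomness hurts early rounds slightly but can pay off late.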
complete model.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + [markdown] _cell_guid="d47f823d-475e-7cae-5bc0-5e2206a17785" _uuid="e0a86b11e207b04e4dc00564669e11045a97dd40" # Step 1: # Creating data_frame named housing_data # + _cell_guid="9eb8fa24-97c0-e9e1-b27b-661fb758b7b2" _uuid="b5857cdebcf0e1ac9a3b733f16092e723f40eeb2" import pandas as pd housing_data = pd.read_csv("../input/kc_house_data.csv") housing_data.head() # + [markdown] _cell_guid="b7fe9261-ccc8-0e42-87b2-63739d433437" _uuid="4d87a7d8d436a8288fb7f529afe7d0992523831e" # Step 2: # # Calculating age of house for better analysis # # Creating another column named age_of_house for visualization # # # + _cell_guid="e78eee19-ec46-e85d-fc1b-c369ae21453f" _uuid="83c420319cb659886def7186711cfaa9814991ff" import datetime current_year = datetime.datetime.now().year housing_data["age_of_house"] = current_year - pd.to_datetime(housing_data["date"]).dt.year housing_data.head() # + [markdown] _cell_guid="23a371ad-cb03-b1d8-2274-c1dcd7efd6ea" _uuid="6bd11e816ad8ffb257fc819e035e7c67030e9f7c" # Data Frame Info. 
(Quick View) # + _cell_guid="01c92600-d130-2474-f3a2-565ecc569ce3" _uuid="70ef88d4dfa3acb77b0a4eb0f5e5b3262440e650" housing_data.info() # + [markdown] _cell_guid="3ea9555a-7e33-6515-2eb1-c223e6c4a0fd" _uuid="f3708566488c816775ac71cd8e4f87b827b7e168" # Populating Column Names # + _cell_guid="c7d8e5c5-7d0f-7039-59d2-0cafad6f7c28" _uuid="dbf425d8b9e9382bf0207a2273a58a97f68f47bb" housing_data.columns # + [markdown] _cell_guid="98c224c9-d345-d3c5-6612-959bdc627a9f" _uuid="25667938712102e02898907eef7883947a9e3e72" # Step 3: # Selecting features and target # + _cell_guid="e25ae397-9437-5f3c-850a-8a9c9c61b298" _uuid="734cd35f01650d161cb69e7d6e722813e88798a6" feature_cols = [ u'age_of_house', u'bedrooms', u'bathrooms', u'sqft_living', u'sqft_lot', u'floors', u'waterfront', u'view', u'condition', u'grade', u'sqft_above', u'sqft_basement', u'yr_built', u'yr_renovated'] x = housing_data[feature_cols] y = housing_data["price"] # + [markdown] _cell_guid="9c9ed082-2184-f5ff-fa5e-cc1a261da378" _uuid="5e63dffa35e6761479b91f8a0a1d28d4f396a47a" # Visualizing Feature Columns against target # + _cell_guid="90b78aa1-0fe6-1dbb-1143-4a301fda01f0" _uuid="c7b0e897d4dce8b74972416d5c07de79cfbe05d7" import seaborn as sns # %matplotlib inline sns.pairplot(housing_data,x_vars=feature_cols,y_vars="price",size=7,aspect=0.7,kind = 'reg') # + [markdown] _cell_guid="5bd25d0f-d939-ec24-3cb9-13fced81fb26" _uuid="92888b1d7d99180e7994d71c8af667c58c046378" # Step 4: # Splitting Training and Test Data # + _cell_guid="05aed26c-fa90-10f0-0750-74bf5006fbb4" _uuid="eb26399ce134ec066ba2c6a9717d468baab114c4" from sklearn.model_selection import train_test_split x_train,x_test,y_train,y_test = train_test_split(x, y, random_state=3) # + [markdown] _cell_guid="eabe6f62-2885-c885-3d19-ad57cf0cd2da" _uuid="cff9a7d97a93286471deee89880eda4e72e505f6" # Step 5: # Fitting Data to Linear Regressor using scikit # + _cell_guid="54085c71-ee32-fbc4-b04b-81cab1930e51" _uuid="b5c5d4b9e22ac6d20c95e55c842517c023401730" from 
sklearn.linear_model import LinearRegression regressor = LinearRegression() regressor.fit(x_train, y_train) # + [markdown] _cell_guid="18f3420f-04cd-b40a-ccec-4d1891e0a5e1" _uuid="445ffc72aba3616ee15c7bf181884a59b10eeba9" # Achieved Accuracy: 66% # which is not so bad at initial commit :) # + _cell_guid="7c5849ac-ced1-3673-b102-d2a041644acd" _uuid="f5d6770b752b774698e355322baf056607c16fec" accuracy = regressor.score(x_test, y_test) "Accuracy: {}%".format(int(round(accuracy * 100)))
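Worth noting: for a regressor, `score` reports the coefficient of determination (R²), not a classification accuracy. A dependency-free sketch of what that "66%" figure actually measures (the toy numbers below are illustrative, not from the housing dataset):

```python
import numpy as np

def r2_score(y_true, y_pred):
    """Coefficient of determination: 1 - SS_res / SS_tot.

    This is what sklearn's LinearRegression.score computes on a test set."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    ss_res = np.sum((y_true - y_pred) ** 2)          # residual sum of squares
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)   # total sum of squares
    return 1.0 - ss_res / ss_tot

y_true = [3.0, -0.5, 2.0, 7.0]
y_pred = [2.5, 0.0, 2.0, 8.0]
print(round(r2_score(y_true, y_pred), 3))  # 0.949
```

An R² of 0.66 means the model explains about two-thirds of the variance in sale prices around their mean; a model that always predicted the mean would score 0.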
downloaded_kernels/house_sales/kernel_42.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Autoname # # Problem: # # 1. In GDS different cells must have different names. Relying on the incremental # naming convention can be dangerous when you merge masks that have different # cells built at different run times or if you merge masks with other tools like Klayout. # 2. In GDS two cells cannot have the same name. # # Solution: The decorator `pp.autoname` fixes both issues: # # 1. By giving the cell a unique name depending on the parameters that you pass # 2. By creating a cache of cells where we use the cell name as the key, avoiding the creation of two cells with the same name during the same Python runtime # # Let's see how it works # + import pp @pp.autoname def wg(length=10, width=1): c = pp.Component() c.add_polygon([(0, 0), (length, 0), (length, width), (0, width)], layer=(1, 0)) c.add_port(name="W0", midpoint=[0, width / 2], width=width, orientation=180) c.add_port(name="E0", midpoint=[length, width / 2], width=width, orientation=0) return c # - # See how the cells get the name from the parameters that you pass them # + c = wg() print(c) c = wg(width=0.5) print(c) # - # How can you add two different references to a cell with the same parameters? # + # Problem import pp c = pp.Component('problem') R1 = pp.c.rectangle(size=(4,2)) # Creates a rectangle (same Unique ID uid) R2 = pp.c.rectangle(size=(4,2)) # Try to create a new rectangle that we want to change (but it has the same name so we will get R1 from the cache) print(R1 == R2) print(R1) print(R2) r1r = c << R1 # Add the first rectangle to c r2r = c << R2 # Add the second rectangle to c # But now I want to rotate R2 -- I can't because it doesn't exist! 
R2.rotate(45) # I think I'm rotating a second rectangle, but actually R2 points to R1 even though I specifically tried to create two rectangles pp.qp(c) # - # if you run the cell above you will see the cell rotating. This is a bad way to manipulate cells. That's why we wrote the references tutorial. # + # Solution: use references import pp c = pp.Component('solution') R = pp.c.rectangle(size=(4,2)) r1 = c << R # Add the first rectangle reference to c r2 = c << R # Add the second rectangle reference to c r2.rotate(45) print(c) pp.qp(c) # - # # Adding port markers # # When we decorate with autoname we can also pass a `pins` flag that will # add port markers to our component and a device recognition layer c = pp.c.waveguide(pins=True) pp.qp(c) pp.show(c) # We can also pass a custom function via `pins_function` to add the port # markers # + from pp.add_pins import add_pins_triangle c = pp.c.waveguide(length=5, pins=True, pins_function=add_pins_triangle) pp.qp(c) pp.show(c) # - # # Cache # # To avoid building two identical cells, the `autoname` decorator has a # cache: if the component has already been built it will return the component # from the cache # # You can always override this with `cache=False`. This is helpful when you are # changing the code inside the function that is being cached. @pp.autoname def wg(length=10, width=1): c = pp.Component() c.add_polygon([(0, 0), (length, 0), (length, width), (0, width)], layer=(1, 0)) print("calling wg function") return c wg1 = wg() # autoname builds a waveguide print(wg1) wg2 = wg() # autoname returns the same waveguide as before print(wg2) pp.qp(wg2) # Let's say that we improve the code of the waveguide function in a Jupyter notebook like this one. # # I use Vim/VsCode/PyCharm when building new cells in Python but some people love Jupyter notebooks. 
@pp.autoname def wg(length=10, width=1, layer=(2, 0)): """ Adding layer as a function parameter and using layer (2, 0) as default""" c = pp.Component() c.add_polygon([(0, 0), (length, 0), (length, width), (0, width)], layer=layer) print("calling wg function") return c wg3 = wg() # Error! autoname returns the same waveguide as before! This waveguide should be in layer (2, 0) print(wg3) pp.qp(wg3) # + wg4 = wg(cache=False) # Forces a rebuild of the cache. This is very helpful when changing the function `wg` print(wg4) pp.qp(wg4) # Note the waveguide has a different layer now and a different uid # -
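The name-from-parameters plus cache mechanism can be sketched without the `pp` library. The `autoname_sketch` decorator below is a toy stand-in for `pp.autoname`, not the real implementation (which also hashes long names and includes default parameters); a plain dict plays the role of `pp.Component`:

```python
import functools

_CACHE = {}

def autoname_sketch(func):
    """Toy stand-in for pp.autoname (illustration only, not the real API).

    Derives a name from the keyword arguments and caches the built object
    under that name, so identical calls return the very same object."""
    @functools.wraps(func)
    def wrapper(cache=True, **kwargs):
        # Build a deterministic name like "wg_width0.5" from the kwargs
        name = func.__name__ + "".join(
            "_{}{}".format(k, v) for k, v in sorted(kwargs.items())
        )
        if cache and name in _CACHE:
            return _CACHE[name]        # cache hit: same object as before
        component = func(**kwargs)     # cache miss or cache=False: rebuild
        component["name"] = name
        _CACHE[name] = component
        return component
    return wrapper

@autoname_sketch
def wg(length=10, width=1):
    return {"length": length, "width": width}  # dict stands in for a Component

a = wg()            # builds and caches "wg"
b = wg()            # cache hit: the exact same object
c = wg(width=0.5)   # different parameters, new name, new object
d = wg(cache=False) # forces a rebuild even though "wg" is cached
print(a is b, a is c, a is d)  # True False False
```

This is the behavior the two `wg` examples above demonstrate: the stale `wg3` comes from a cache hit on the old name, and `cache=False` is the escape hatch while iterating on the function body.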
notebooks/03_autoname.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Analyzing Tabular Data using Python and Pandas # # ![](https://i.imgur.com/zfxLzEv.png) # # ### Part 7 of "Data Analysis with Python: Zero to Pandas" # # # This tutorial series is a beginner-friendly introduction to programming and data analysis using the Python programming language. These tutorials take a practical and coding-focused approach. The best way to learn the material is to execute the code and experiment with it yourself. Check out the full series here: # # 1. [First Steps with Python and Jupyter](https://jovian.ai/aakashns/first-steps-with-python) # 2. [A Quick Tour of Variables and Data Types](https://jovian.ai/aakashns/python-variables-and-data-types) # 3. [Branching using Conditional Statements and Loops](https://jovian.ai/aakashns/python-branching-and-loops) # 4. [Writing Reusable Code Using Functions](https://jovian.ai/aakashns/python-functions-and-scope) # 5. [Reading from and Writing to Files](https://jovian.ai/aakashns/python-os-and-filesystem) # 6. [Numerical Computing with Python and Numpy](https://jovian.ai/aakashns/python-numerical-computing-with-numpy) # 7. [Analyzing Tabular Data using Pandas](https://jovian.ai/aakashns/python-pandas-data-analysis) # 8. [Data Visualization using Matplotlib & Seaborn](https://jovian.ai/aakashns/python-matplotlib-data-visualization) # 9. 
[Exploratory Data Analysis - A Case Study](https://jovian.ai/aakashns/python-eda-stackoverflow-survey) # # This tutorial covers the following topics: # # - Reading a CSV file into a Pandas data frame # - Retrieving data from Pandas data frames # - Querying, sorting, and analyzing data # - Merging, grouping, and aggregation of data # - Extracting useful information from dates # - Basic plotting using line and bar charts # - Writing data frames to CSV files # ### How to run the code # # This tutorial is an executable [Jupyter notebook](https://jupyter.org) hosted on [Jovian](https://www.jovian.ai). You can _run_ this tutorial and experiment with the code examples in a couple of ways: *using free online resources* (recommended) or *on your computer*. # # #### Option 1: Running using free online resources (1-click, recommended) # # The easiest way to start executing the code is to click the **Run** button at the top of this page and select **Run on Binder**. You can also select "Run on Colab" or "Run on Kaggle", but you'll need to create an account on [Google Colab](https://colab.research.google.com) or [Kaggle](https://kaggle.com) to use these platforms. # # # #### Option 2: Running on your computer locally # # To run the code on your computer locally, you'll need to set up [Python](https://www.python.org), download the notebook and install the required libraries. We recommend using the [Conda](https://docs.conda.io/projects/conda/en/latest/user-guide/install/) distribution of Python. Click the **Run** button at the top of this page, select the **Run Locally** option, and follow the instructions. # # > **Jupyter Notebooks**: This tutorial is a [Jupyter notebook](https://jupyter.org) - a document made of _cells_. Each cell can contain code written in Python or explanations in plain English. You can execute code cells and view the results, e.g., numbers, messages, graphs, tables, files, etc., instantly within the notebook. 
Jupyter is a powerful platform for experimentation and analysis. Don't be afraid to mess around with the code & break things - you'll learn a lot by encountering and fixing errors. You can use the "Kernel > Restart & Clear Output" menu option to clear all outputs and start again from the top. # ## Reading a CSV file using Pandas # # [Pandas](https://pandas.pydata.org/) is a popular Python library used for working with tabular data (similar to the data stored in a spreadsheet). Pandas provides helper functions to read data from various file formats like CSV, Excel spreadsheets, HTML tables, JSON, SQL, and more. Let's download a file `italy-covid-daywise.csv` which contains day-wise Covid-19 data for Italy in the following format: # # ``` # date,new_cases,new_deaths,new_tests # 2020-04-21,2256.0,454.0,28095.0 # 2020-04-22,2729.0,534.0,44248.0 # 2020-04-23,3370.0,437.0,37083.0 # 2020-04-24,2646.0,464.0,95273.0 # 2020-04-25,3021.0,420.0,38676.0 # 2020-04-26,2357.0,415.0,24113.0 # 2020-04-27,2324.0,260.0,26678.0 # 2020-04-28,1739.0,333.0,37554.0 # ... # ``` # # This format of storing data is known as *comma-separated values* or CSV. # # > **CSVs**: A comma-separated values (CSV) file is a delimited text file that uses a comma to separate values. Each line of the file is a data record. Each record consists of one or more fields, separated by commas. A CSV file typically stores tabular data (numbers and text) in plain text, in which case each line will have the same number of fields. (Wikipedia) # # # We'll download this file using the `urlretrieve` function from the `urllib.request` module. from urllib.request import urlretrieve urlretrieve('https://hub.jovian.ml/wp-content/uploads/2020/09/italy-covid-daywise.csv', 'italy-covid-daywise.csv') # To read the file, we can use the `read_csv` method from Pandas. First, let's install the Pandas library. # !pip install pandas --upgrade --quiet # We can now import the `pandas` module. 
As a convention, it is imported with the alias `pd`. import pandas as pd covid_df = pd.read_csv('italy-covid-daywise.csv') # Data from the file is read and stored in a `DataFrame` object - one of the core data structures in Pandas for storing and working with tabular data. We typically use the `_df` suffix in the variable names for dataframes. type(covid_df) covid_df # Here's what we can tell by looking at the dataframe: # # - The file provides four day-wise counts for COVID-19 in Italy # - The metrics reported are new cases, deaths, and tests # - Data is provided for 248 days: from Dec 12, 2019, to Sep 3, 2020 # # Keep in mind that these are officially reported numbers. The actual number of cases & deaths may be higher, as not all cases are diagnosed. # # We can view some basic information about the data frame using the `.info` method. covid_df.info() # It appears that each column contains values of a specific data type. You can view statistical information for numerical columns (mean, standard deviation, minimum/maximum values, and the number of non-empty values) using the `.describe` method. covid_df.describe() # The `columns` property contains the list of columns within the data frame. covid_df.columns # You can also retrieve the number of rows and columns in the data frame using the `.shape` property. covid_df.shape # Here's a summary of the functions & methods we've looked at so far: # # * `pd.read_csv` - Read data from a CSV file into a Pandas `DataFrame` object # * `.info()` - View basic information about rows, columns & data types # * `.describe()` - View statistical information about numeric columns # * `.columns` - Get the list of column names # * `.shape` - Get the number of rows & columns as a tuple # # ### Save and upload your notebook # # Whether you're running this Jupyter notebook online or on your computer, it's essential to save your work from time to time. 
You can continue working on a saved notebook later or share it with friends and colleagues to let them execute your code. [Jovian](https://www.jovian.ai) offers an easy way of saving and sharing your Jupyter notebooks online. # Install the library # !pip install jovian --upgrade --quiet import jovian jovian.commit(project='python-pandas-data-analysis') # The first time you run `jovian.commit`, you'll be asked to provide an API Key to securely upload the notebook to your Jovian account. You can get the API key from your [Jovian profile page](https://jovian.ai) after logging in / signing up. # # # `jovian.commit` uploads the notebook to your Jovian account, captures the Python environment, and creates a shareable link for your notebook, as shown above. You can use this link to share your work and let anyone (including you) run your notebooks and reproduce your work. # ## Retrieving data from a data frame # # The first thing you might want to do is retrieve data from this data frame, e.g., the counts of a specific day or the list of values in a particular column. To do this, it might help to understand the internal representation of data in a data frame. Conceptually, you can think of a dataframe as a dictionary of lists: keys are column names, and values are lists/arrays containing data for the respective columns. # Pandas format is similar to this covid_data_dict = { 'date': ['2020-08-30', '2020-08-31', '2020-09-01', '2020-09-02', '2020-09-03'], 'new_cases': [1444, 1365, 996, 975, 1326], 'new_deaths': [1, 4, 6, 8, 6], 'new_tests': [53541, 42583, 54395, None, None] } # Representing data in the above format has a few benefits: # # * All values in a column typically have the same type of value, so it's more efficient to store them in a single array. # * Retrieving the values for a particular row simply requires extracting the elements at a given index from each column array. 
# * The representation is more compact (column names are recorded only once) compared to other formats that use a dictionary for each row of data (see the example below). # Pandas format is not similar to this covid_data_list = [ {'date': '2020-08-30', 'new_cases': 1444, 'new_deaths': 1, 'new_tests': 53541}, {'date': '2020-08-31', 'new_cases': 1365, 'new_deaths': 4, 'new_tests': 42583}, {'date': '2020-09-01', 'new_cases': 996, 'new_deaths': 6, 'new_tests': 54395}, {'date': '2020-09-02', 'new_cases': 975, 'new_deaths': 8 }, {'date': '2020-09-03', 'new_cases': 1326, 'new_deaths': 6}, ] # With the dictionary of lists analogy in mind, you can now guess how to retrieve data from a data frame. For example, we can get a list of values from a specific column using the `[]` indexing notation. covid_data_dict['new_cases'] covid_df['new_cases'] # Each column is represented using a data structure called `Series`, which is essentially a numpy array with some extra methods and properties. type(covid_df['new_cases']) # Like arrays, you can retrieve a specific value with a series using the indexing notation `[]`. covid_df['new_cases'][246] covid_df['new_tests'][240] # Pandas also provides the `.at` method to retrieve the element at a specific row & column directly. covid_df.at[246, 'new_cases'] covid_df.at[240, 'new_tests'] # Instead of using the indexing notation `[]`, Pandas also allows accessing columns as properties of the dataframe using the `.` notation. However, this method only works for columns whose names do not contain spaces or special characters. covid_df.new_cases # Further, you can also pass a list of columns within the indexing notation `[]` to access a subset of the data frame with just the given columns. cases_df = covid_df[['date', 'new_cases']] cases_df # The new data frame `cases_df` is simply a "view" of the original data frame `covid_df`. Both point to the same data in the computer's memory. 
Changing any values inside one of them will also change the respective values in the other. Sharing data between data frames makes data manipulation in Pandas blazing fast. You needn't worry about the overhead of copying thousands or millions of rows every time you want to create a new data frame by operating on an existing one. # # Sometimes you might need a full copy of the data frame, in which case you can use the `copy` method. covid_df_copy = covid_df.copy() # The data within `covid_df_copy` is completely separate from `covid_df`, and changing values inside one of them will not affect the other. # To access a specific row of data, Pandas provides the `.loc` method. covid_df covid_df.loc[243] # Each retrieved row is also a `Series` object. type(covid_df.loc[243]) # We can use the `.head` and `.tail` methods to view the first or last few rows of data. covid_df.head(5) covid_df.tail(4) # Notice above that while the first few values in the `new_cases` and `new_deaths` columns are `0`, the corresponding values within the `new_tests` column are `NaN`. That is because the CSV file does not contain any data for the `new_tests` column for specific dates (you can verify this by looking into the file). These values may be missing or unknown. covid_df.at[0, 'new_tests'] type(covid_df.at[0, 'new_tests']) # The distinction between `0` and `NaN` is subtle but important. In this dataset, a `NaN` value indicates that daily test numbers were not reported on that date. Italy started reporting daily tests on Apr 19, 2020. 935,310 tests had already been conducted before Apr 19. # # We can find the first index that doesn't contain a `NaN` value using a column's `first_valid_index` method. covid_df.new_tests.first_valid_index() # Let's look at a few rows before and after this index to verify that the values change from `NaN` to actual numbers. We can do this by passing a range to `loc`. 
covid_df.loc[108:113] # We can use the `.sample` method to retrieve a random sample of rows from the data frame. covid_df.sample(10) # Notice that even though we have taken a random sample, each row's original index is preserved - this is a useful property of data frames. # # # Here's a summary of the functions & methods we looked at in this section: # # - `covid_df['new_cases']` - Retrieving columns as a `Series` using the column name # - `new_cases[243]` - Retrieving values from a `Series` using an index # - `covid_df.at[243, 'new_cases']` - Retrieving a single value from a data frame # - `covid_df.copy()` - Creating a deep copy of a data frame # - `covid_df.loc[243]` - Retrieving a row or range of rows of data from the data frame # - `head`, `tail`, and `sample` - Retrieving multiple rows of data from the data frame # - `covid_df.new_tests.first_valid_index` - Finding the first non-empty index in a series # # # Let's save a snapshot of our notebook before continuing. import jovian jovian.commit() # ## Analyzing data from data frames # # Let's try to answer some questions about our data. # # **Q: What is the total number of reported cases and deaths related to Covid-19 in Italy?** # # Similar to Numpy arrays, a Pandas series supports the `sum` method to answer these questions. total_cases = covid_df.new_cases.sum() total_deaths = covid_df.new_deaths.sum() print('The number of reported cases is {} and the number of reported deaths is {}.'.format(int(total_cases), int(total_deaths))) # **Q: What is the overall death rate (ratio of reported deaths to reported cases)?** death_rate = covid_df.new_deaths.sum() / covid_df.new_cases.sum() print("The overall reported death rate in Italy is {:.2f} %.".format(death_rate*100)) # **Q: What is the overall number of tests conducted? 
A total of 935310 tests were conducted before daily test numbers were reported.** # initial_tests = 935310 total_tests = initial_tests + covid_df.new_tests.sum() total_tests # **Q: What fraction of tests returned a positive result?** positive_rate = total_cases / total_tests print('{:.2f}% of tests in Italy led to a positive diagnosis.'.format(positive_rate*100)) # Try asking and answering some more questions about the data using the empty cells below. # Let's save and commit our work before continuing. import jovian jovian.commit() # ## Querying and sorting rows # # Let's say we only want to look at the days which had more than 1000 reported cases. We can use a boolean expression to check which rows satisfy this criterion. high_new_cases = covid_df.new_cases > 1000 high_new_cases # The boolean expression returns a series containing `True` and `False` boolean values. You can use this series to select a subset of rows from the original dataframe, corresponding to the `True` values in the series. covid_df[high_new_cases] # We can write this succinctly on a single line by passing the boolean expression as an index to the data frame. high_cases_df = covid_df[covid_df.new_cases > 1000] high_cases_df # The data frame contains 72 rows, but only the first & last five rows are displayed by default with Jupyter for brevity. We can change some display options to view all the rows. from IPython.display import display with pd.option_context('display.max_rows', 100): display(covid_df[covid_df.new_cases > 1000]) # We can also formulate more complex queries that involve multiple columns. As an example, let's try to determine the days when the ratio of cases reported to tests conducted is higher than the overall `positive_rate`. positive_rate high_ratio_df = covid_df[covid_df.new_cases / covid_df.new_tests > positive_rate] high_ratio_df # The result of performing an operation on two columns is a new series. 
covid_df.new_cases / covid_df.new_tests # We can use this series to add a new column to the data frame. covid_df['positive_rate'] = covid_df.new_cases / covid_df.new_tests covid_df # However, keep in mind that sometimes it takes a few days to get the results for a test, so we can't compare the number of new cases with the number of tests conducted on the same day. Any inference based on this `positive_rate` column is likely to be incorrect. It's essential to watch out for such subtle relationships that are often not conveyed within the CSV file and require some external context. It's always a good idea to read through the documentation provided with the dataset or ask for more information. # # For now, let's remove the `positive_rate` column using the `drop` method. covid_df.drop(columns=['positive_rate'], inplace=True) # Can you figure out the purpose of the `inplace` argument? # ### Sorting rows using column values # # The rows can also be sorted by a specific column using `.sort_values`. Let's sort to identify the days with the highest number of cases, then chain it with the `head` method to list just the first ten results. covid_df.sort_values('new_cases', ascending=False).head(10) # It looks like the last two weeks of March had the highest number of daily cases. Let's compare this to the days where the highest number of deaths were recorded. covid_df.sort_values('new_deaths', ascending=False).head(10) # It appears that daily deaths hit a peak just about a week after the peak in daily new cases. # # Let's also look at the days with the least number of cases. We might expect to see the first few days of the year on this list. covid_df.sort_values('new_cases').head(10) # It seems like the count of new cases on Jun 20, 2020, was `-148`, a negative number! Not something we might have expected, but that's the nature of real-world data. It could be a data entry error, or the government may have issued a correction to account for miscounting in the past. 
Can you dig through news articles online and figure out why the number was negative? # # Let's look at some days before and after Jun 20, 2020. covid_df.loc[169:175] # For now, let's assume this was indeed a data entry error. We can use one of the following approaches for dealing with the missing or faulty value: # 1. Replace it with `0`. # 2. Replace it with the average of the entire column # 3. Replace it with the average of the values on the previous & next date # 4. Discard the row entirely # # Which approach you pick requires some context about the data and the problem. In this case, since we are dealing with data ordered by date, we can go ahead with the third approach. # # You can use the `.at` method to modify a specific value within the dataframe. covid_df.at[172, 'new_cases'] = (covid_df.at[171, 'new_cases'] + covid_df.at[173, 'new_cases'])/2 # Here's a summary of the functions & methods we looked at in this section: # # - `covid_df.new_cases.sum()` - Computing the sum of values in a column or series # - `covid_df[covid_df.new_cases > 1000]` - Querying a subset of rows satisfying the chosen criteria using boolean expressions # - `df['pos_rate'] = df.new_cases/df.new_tests` - Adding new columns by combining data from existing columns # - `covid_df.drop(columns=['positive_rate'])` - Removing one or more columns from the data frame # - `sort_values` - Sorting the rows of a data frame using column values # - `covid_df.at[172, 'new_cases'] = ...` - Replacing a value within the data frame # Let's save and commit our work before continuing. import jovian jovian.commit() # ## Working with dates # # While we've looked at overall numbers for the cases, tests, positive rate, etc., it would also be useful to study these numbers on a month-by-month basis. The `date` column might come in handy here, as Pandas provides many utilities for working with dates. covid_df.date # The data type of date is currently `object`, so Pandas does not know that this column is a date. 
We can convert it into a `datetime` column using the `pd.to_datetime` method. covid_df['date'] = pd.to_datetime(covid_df.date) covid_df['date'] # You can see that it now has the datatype `datetime64`. We can now extract different parts of the date into separate columns, using the `DatetimeIndex` class ([view docs](https://pandas.pydata.org/pandas-docs/version/0.23.4/generated/pandas.DatetimeIndex.html)). covid_df['year'] = pd.DatetimeIndex(covid_df.date).year covid_df['month'] = pd.DatetimeIndex(covid_df.date).month covid_df['day'] = pd.DatetimeIndex(covid_df.date).day covid_df['weekday'] = pd.DatetimeIndex(covid_df.date).weekday covid_df # Let's check the overall metrics for May. We can query the rows for May, choose a subset of columns, and use the `sum` method to aggregate each selected column's values. # + # Query the rows for May covid_df_may = covid_df[covid_df.month == 5] # Extract the subset of columns to be aggregated covid_df_may_metrics = covid_df_may[['new_cases', 'new_deaths', 'new_tests']] # Get the column-wise sum covid_may_totals = covid_df_may_metrics.sum() # - covid_may_totals type(covid_may_totals) # We can also combine the above operations into a single statement. covid_df[covid_df.month == 5][['new_cases', 'new_deaths', 'new_tests']].sum() # As another example, let's check if the number of cases reported on Sundays is higher than the average number of cases reported every day. This time, we might want to aggregate columns using the `.mean` method. # Overall average covid_df.new_cases.mean() # Average for Sundays covid_df[covid_df.weekday == 6].new_cases.mean() # It seems like more cases were reported on Sundays compared to other days. # # Try asking and answering some more date-related questions about the data using the cells below. # Let's save and commit our work before continuing. 
import jovian jovian.commit() # ## Grouping and aggregation # # As a next step, we might want to summarize the day-wise data and create a new dataframe with month-wise data. We can use the `groupby` function to create a group for each month, select the columns we wish to aggregate, and aggregate them using the `sum` method. covid_month_df = covid_df.groupby('month')[['new_cases', 'new_deaths', 'new_tests']].sum() covid_month_df # The result is a new data frame that uses unique values from the column passed to `groupby` as the index. Grouping and aggregation is a powerful method for progressively summarizing data into smaller data frames. # # Instead of aggregating by sum, you can also aggregate by other measures like mean. Let's compute the average number of daily new cases, deaths, and tests for each month. covid_month_mean_df = covid_df.groupby('month')[['new_cases', 'new_deaths', 'new_tests']].mean() covid_month_mean_df # Apart from grouping, another form of aggregation is the running or cumulative sum of cases, tests, or deaths up to each row's date. We can use the `cumsum` method to compute the cumulative sum of a column as a new series. Let's add three new columns: `total_cases`, `total_deaths`, and `total_tests`. covid_df['total_cases'] = covid_df.new_cases.cumsum() covid_df['total_deaths'] = covid_df.new_deaths.cumsum() covid_df['total_tests'] = covid_df.new_tests.cumsum() + initial_tests # We've also included the initial test count in `total_tests` to account for tests conducted before daily reporting was started. covid_df # Notice how the `NaN` values in the `total_tests` column remain unaffected. # ## Merging data from multiple sources # # To determine other metrics like tests per million, cases per million, etc., we require some more information about the country, viz. its population. Let's download another file `locations.csv` that contains health-related information for many countries, including Italy. 
urlretrieve('https://gist.githubusercontent.com/aakashns/8684589ef4f266116cdce023377fc9c8/raw/99ce3826b2a9d1e6d0bde7e9e559fc8b6e9ac88b/locations.csv', 'locations.csv') locations_df = pd.read_csv('locations.csv') locations_df locations_df[locations_df.location == "Italy"] # We can merge this data into our existing data frame by adding more columns. However, to merge two data frames, we need at least one common column. Let's insert a `location` column in the `covid_df` dataframe with all values set to `"Italy"`. covid_df['location'] = "Italy" covid_df # We can now add the columns from `locations_df` into `covid_df` using the `.merge` method. merged_df = covid_df.merge(locations_df, on="location") merged_df # The location data for Italy is appended to each row within `covid_df`. If the `covid_df` data frame contained data for multiple locations, then the respective country's location data would be appended for each row. # # We can now calculate metrics like cases per million, deaths per million, and tests per million. merged_df['cases_per_million'] = merged_df.total_cases * 1e6 / merged_df.population merged_df['deaths_per_million'] = merged_df.total_deaths * 1e6 / merged_df.population merged_df['tests_per_million'] = merged_df.total_tests * 1e6 / merged_df.population merged_df # Let's save and commit our work before continuing. import jovian jovian.commit() # ## Writing data back to files # # After completing your analysis and adding new columns, you should write the results back to a file. Otherwise, the data will be lost when the Jupyter notebook shuts down. Before writing to file, let us first create a data frame containing just the columns we wish to record. result_df = merged_df[['date', 'new_cases', 'total_cases', 'new_deaths', 'total_deaths', 'new_tests', 'total_tests', 'cases_per_million', 'deaths_per_million', 'tests_per_million']] result_df # To write the data from the data frame into a file, we can use the `to_csv` function. 
result_df.to_csv('results.csv', index=None) # The `to_csv` function also includes an additional column for storing the index of the dataframe by default. We pass `index=None` to turn off this behavior. You can now verify that the `results.csv` is created and contains data from the data frame in CSV format: # # ``` # date,new_cases,total_cases,new_deaths,total_deaths,new_tests,total_tests,cases_per_million,deaths_per_million,tests_per_million # 2020-02-27,78.0,400.0,1.0,12.0,,,6.61574439992122,0.1984723319976366, # 2020-02-28,250.0,650.0,5.0,17.0,,,10.750584649871982,0.28116913699665186, # 2020-02-29,238.0,888.0,4.0,21.0,,,14.686952567825108,0.34732658099586405, # 2020-03-01,240.0,1128.0,8.0,29.0,,,18.656399207777838,0.47964146899428844, # 2020-03-02,561.0,1689.0,6.0,35.0,,,27.93498072866735,0.5788776349931067, # 2020-03-03,347.0,2036.0,17.0,52.0,,,33.67413899559901,0.8600467719897585, # ... # ``` # You can attach the `results.csv` file to our notebook while uploading it to [Jovian](https://jovian.ai) using the `outputs` argument to `jovian.commit`. import jovian jovian.commit(outputs=['results.csv']) # You can find the CSV file in the "Files" tab on the project page. # ## Bonus: Basic Plotting with Pandas # # We generally use a library like `matplotlib` or `seaborn` to plot graphs within a Jupyter notebook. However, Pandas dataframes & series provide a handy `.plot` method for quick and easy plotting. # # Let's plot a line graph showing how the number of daily cases varies over time. result_df.new_cases.plot(); # While this plot shows the overall trend, it's hard to tell where the peak occurred, as there are no dates on the X-axis. We can use the `date` column as the index for the data frame to address this issue. result_df.set_index('date', inplace=True) result_df # Notice that the index of a data frame doesn't have to be numeric. Using the date as the index also allows us to get the data for a specific date using `.loc`. 
result_df.loc['2020-09-01'] # Let's plot the new cases & new deaths per day as line graphs. result_df.new_cases.plot() result_df.new_deaths.plot(); # We can also compare the total cases vs. total deaths. result_df.total_cases.plot() result_df.total_deaths.plot(); # Let's see how the death rate and positive testing rates vary over time. death_rate = result_df.total_deaths / result_df.total_cases death_rate.plot(title='Death Rate'); positive_rates = result_df.total_cases / result_df.total_tests positive_rates.plot(title='Positive Rate'); # Finally, let's plot some month-wise data using a bar chart to visualize the trend at a higher level. covid_month_df.new_cases.plot(kind='bar'); covid_month_df.new_tests.plot(kind='bar') # Let's save and commit our work to Jovian. import jovian jovian.commit() # ## Exercises # # Try the following exercises to become familiar with Pandas dataframe and practice your skills: # # * Assignment on Pandas dataframes: https://jovian.ml/aakashns/pandas-practice-assignment # * Additional exercises on Pandas: https://github.com/guipsamora/pandas_exercises # * Try downloading and analyzing some data from Kaggle: https://www.kaggle.com/datasets # # # ## Summary and Further Reading # # # We've covered the following topics in this tutorial: # # - Reading a CSV file into a Pandas data frame # - Retrieving data from Pandas data frames # - Querying, sorting, and analyzing data # - Merging, grouping, and aggregation of data # - Extracting useful information from dates # - Basic plotting using line and bar charts # - Writing data frames to CSV files # # # Check out the following resources to learn more about Pandas: # # * User guide for Pandas: https://pandas.pydata.org/docs/user_guide/index.html # * Python for Data Analysis (book by <NAME> - creator of Pandas): https://www.oreilly.com/library/view/python-for-data/9781491957653/ # # You are ready to move on to the next tutorial: [Data Visualization using Matplotlib & 
Seaborn](https://jovian.ai/aakashns/python-matplotlib-data-visualization). # ## Questions for Revision # # Try answering the following questions to test your understanding of the topics covered in this notebook: # # 1. What is Pandas? What makes it useful? # 2. How do you install the Pandas library? # 3. How do you import the `pandas` module? # 4. What is the common alias used while importing the `pandas` module? # 5. How do you read a CSV file using Pandas? Give an example. # 6. What are some other file formats you can read using Pandas? Illustrate with examples. # 7. What are Pandas dataframes? # 8. How are Pandas dataframes different from Numpy arrays? # 9. How do you find the number of rows and columns in a dataframe? # 10. How do you get the list of columns in a dataframe? # 11. What is the purpose of the `describe` method of a dataframe? # 12. How are the `info` and `describe` dataframe methods different? # 13. Is a Pandas dataframe conceptually similar to a list of dictionaries or a dictionary of lists? Explain with an example. # 14. What is a Pandas `Series`? How is it different from a Numpy array? # 15. How do you access a column from a dataframe? # 16. How do you access a row from a dataframe? # 17. How do you access an element at a specific row & column of a dataframe? # 18. How do you create a subset of a dataframe with a specific set of columns? # 19. How do you create a subset of a dataframe with a specific range of rows? # 20. Does changing a value within a dataframe affect other dataframes created using a subset of the rows or columns? Why is it so? # 21. How do you create a copy of a dataframe? # 22. Why should you avoid creating too many copies of a dataframe? # 23. How do you view the first few rows of a dataframe? # 24. How do you view the last few rows of a dataframe? # 25. How do you view a random selection of rows of a dataframe? # 26. What is the "index" in a dataframe? How is it useful? # 27. 
What does a `NaN` value in a Pandas dataframe represent? # 28. How is `NaN` different from `0`? # 29. How do you identify the first non-empty row in a Pandas series or column? # 30. What is the difference between `df.loc` and `df.at`? # 31. Where can you find a full list of methods supported by Pandas `DataFrame` and `Series` objects? # 32. How do you find the sum of numbers in a column of a dataframe? # 33. How do you find the mean of numbers in a column of a dataframe? # 34. How do you find the number of non-empty numbers in a column of a dataframe? # 35. What is the result obtained by using a Pandas column in a boolean expression? Illustrate with an example. # 36. How do you select a subset of rows where a specific column's value meets a given condition? Illustrate with an example. # 37. What is the result of the expression `df[df.new_cases > 100]` ? # 38. How do you display all the rows of a pandas dataframe in a Jupyter cell output? # 39. What is the result obtained when you perform an arithmetic operation between two columns of a dataframe? Illustrate with an example. # 40. How do you add a new column to a dataframe by combining values from two existing columns? Illustrate with an example. # 41. How do you remove a column from a dataframe? Illustrate with an example. # 42. What is the purpose of the `inplace` argument in dataframe methods? # 43. How do you sort the rows of a dataframe based on the values in a particular column? # 44. How do you sort a pandas dataframe using values from multiple columns? # 45. How do you specify whether to sort by ascending or descending order while sorting a Pandas dataframe? # 46. How do you change a specific value within a dataframe? # 47. How do you convert a dataframe column to the `datetime` data type? # 48. What are the benefits of using the `datetime` data type instead of `object`? # 49. How do you extract different parts of a date column like the year, month, day, weekday, etc., into separate columns? 
Illustrate with an example. # 50. How do you aggregate multiple columns of a dataframe together? # 51. What is the purpose of the `groupby` method of a dataframe? Illustrate with an example. # 52. What are the different ways in which you can aggregate the groups created by `groupby`? # 53. What do you mean by a running or cumulative sum? # 54. How do you create a new column containing the running or cumulative sum of another column? # 55. What are other cumulative measures supported by Pandas dataframes? # 56. What does it mean to merge two dataframes? Give an example. # 57. How do you specify the columns that should be used for merging two dataframes? # 58. How do you write data from a Pandas dataframe into a CSV file? Give an example. # 59. What are some other file formats you can write to from a Pandas dataframe? Illustrate with examples. # 60. How do you create a line plot showing the values within a column of dataframe? # 61. How do you convert a column of a dataframe into its index? # 62. Can the index of a dataframe be non-numeric? # 63. What are the benefits of using a non-numeric dataframe index? Illustrate with an example. # 64. How do you create a bar plot showing the values within a column of a dataframe? # 65. What are some other types of plots supported by Pandas dataframes and series? #
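Questions 44 and 45 above ask about sorting by multiple columns, which the tutorial itself doesn't demonstrate. Here is a minimal sketch with made-up data (the column names are chosen to mirror the tutorial's; `sort_values` accepts a list of columns and a matching list of booleans for `ascending`):

```python
import pandas as pd

df = pd.DataFrame({
    'month': [3, 4, 3, 4],
    'new_cases': [120, 450, 450, 90],
})

# Sort by month ascending, then by new_cases descending within each month
sorted_df = df.sort_values(['month', 'new_cases'], ascending=[True, False])
print(sorted_df)
```

As with single-column sorts, the original row indices are preserved in the result, which makes it easy to trace each sorted row back to its source.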
Data Analysis/python-pandas-data-analysis.ipynb