# Import libraries and packages

```
import folium
import numpy as np
import pandas as pd
import seaborn as sns
from geopy.geocoders import Nominatim
from matplotlib import pyplot as plt
from scipy import stats
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import MinMaxScaler

# Custom modules
# We will use the top20_cities dictionary from here
from openstreetmap import openstreetmap as osm
```

# Set parameters

```
color = 'goldenrod'
sns.set_style("whitegrid")
```

# Data preparation

## Data import

```
# Read the data and sort it by total population
data = pd.read_csv('tidy_data.csv', sep=';')
data = data.sort_values(by='Population on the 1st of January, total', ascending=False).reset_index(drop=True)

# Rename the total population column
data.rename(columns={'Population on the 1st of January, total': 'total_population'}, inplace=True)

# Replace unicode characters due to a rendering issue in Folium
data = data.replace(to_replace={'ü': 'u', 'ö': 'o'}, regex=True)
# print(data.shape)
# data.tail()

# Confirm that venue_id is unique
len(data['venue_id'].unique()) == data.shape[0]
```

## Add counts to venue data

```
# Add ratings count per city to tidy data
data['ratings_count'] = data.rating.notnull().groupby(data['city']).transform('sum').astype(int)

# Add likes count per city to tidy data
data['likes_count'] = data.likes_cnt.groupby(data['city']).transform('sum').astype(int)
```

## Create DataFrame to carry counts per city

```
# Count ratings into a distinct dataframe
data_counts = pd.DataFrame(data.rating.notnull().groupby(data['city'], sort=False).sum().astype(int).reset_index())
data_counts = data_counts.merge(data[['city', 'total_population']], on='city') \
    .drop_duplicates() \
    .reset_index(drop=True)
data_counts.columns = ['city', 'ratings_count', 'total_population']
# print(data_counts.shape)
# data_counts

# Count likes into a distinct dataframe
likes_counts = pd.DataFrame(data.likes_cnt.groupby(data['city'], sort=False).sum().astype(int).reset_index())
likes_counts.columns = ['city', 'likes_count']
data_counts = data_counts.merge(likes_counts, on='city')
# data_counts

# Count the number of biergartens per city
no_of_biergartens_city = pd.DataFrame(data.groupby('city', sort=False).count().venue_id).reset_index()
no_of_biergartens_city.columns = ['city', 'biergarten_count']

# Join to count data
data_counts = data_counts.merge(no_of_biergartens_city, on='city')
# data_counts

# Count the number of biergartens per 100,000 people
data_counts['biergarten_count_100k'] = data_counts['biergarten_count'] / data_counts['total_population'] * 100000
# data_counts

# Add rank variables to the dataset
data_counts['biergarten_rank'] = data_counts['biergarten_count'].rank()
data_counts['biergarten_100k_rank'] = data_counts['biergarten_count_100k'].rank()
data_counts
```

# Where can you find most biergartens?

```
g = sns.PairGrid(data_counts, y_vars=["city"],
                 x_vars=["biergarten_count", "biergarten_count_100k"],
                 height=6, corner=False, despine=True)
g.map(sns.barplot, color=color, order=data_counts['city'])
g.axes[0,0].grid(True)
g.axes[0,1].grid(True)
g.axes[0,0].set_ylabel('')
g.axes[0,0].set_xlabel('No of biergartens', fontdict={'fontsize':16})
g.axes[0,1].set_xlabel('No of biergartens per 100,000 people', fontdict={'fontsize':16})

# Plot ranks
plt.figure(figsize=(10,10))
ax = sns.scatterplot(data=data_counts,
                     x='biergarten_rank',
                     y='biergarten_100k_rank',
                     size='total_population',
                     sizes=(90,1080),  # Population/10,000*3
                     legend=False,
                     color=color)
for line in range(0, data_counts.shape[0]):
    ax.text(x=data_counts.biergarten_rank[line]-0.4,
            y=data_counts.biergarten_100k_rank[line],
            s=data_counts.city[line],
            horizontalalignment='right',
            verticalalignment='baseline',
            size='small',
            color='black')
ax.set_ylabel('Rank of number of biergartens per 100,000 people', fontdict={'fontsize':16})
ax.set_xlabel('Rank of number of biergartens', fontdict={'fontsize':16})
ax.set_xticks(range(0,22,2))
ax.set_yticks(range(0,22,2))
```

# Are biergartens equally popular in different regions?

```
# Get coordinates for Germany to center the map
geolocator = Nominatim(user_agent="germany_explorer")
address = 'Germany'
location = geolocator.geocode(address)
germany_latitude = location.latitude
germany_longitude = location.longitude
print('The geographical coordinates of Germany are {}, {}.'.format(germany_latitude, germany_longitude))

# Create an empty dataframe to store the coordinates in
germany_city_coordinates = pd.DataFrame()

# Get coordinates for the cities to be plotted
geolocator = Nominatim(user_agent="germany_explorer")
for city in osm.top20_cities.keys():
    address = city + ', Germany'
    location = geolocator.geocode(address)
    d = {
        'city': city,
        'latitude': location.latitude,
        'longitude': location.longitude,
    }
    # DataFrame.append was removed in pandas 2.0; use pd.concat instead
    germany_city_coordinates = pd.concat([germany_city_coordinates, pd.DataFrame([d])], ignore_index=True)

# Replace unicode characters due to a rendering issue in Folium and to match the rest of the data
germany_city_coordinates = germany_city_coordinates.replace(to_replace={'ü': 'u', 'ö': 'o'}, regex=True)
# germany_city_coordinates

# Join coordinates to counts data
data_counts = data_counts.merge(germany_city_coordinates, on='city')
# data_counts

# Join coordinates to venue data
data = data.merge(germany_city_coordinates, on='city')

# Initiate the map of Germany
map_germany = folium.Map(location=[germany_latitude, germany_longitude], zoom_start=6)

# Loop through data_counts
for city, lat, lng, pop, cnt, cnt_100k, rank, rank_100k in zip(data_counts['city'],
                                                               data_counts['latitude'],
                                                               data_counts['longitude'],
                                                               data_counts['total_population'],
                                                               data_counts['biergarten_count'],
                                                               data_counts['biergarten_count_100k'],
                                                               data_counts['biergarten_rank'],
                                                               data_counts['biergarten_100k_rank']):

    # Generate html to include the data in the popup
    label = (
        "{city}<br>"
        "Population: {pop}<br>"
        "No of biergartens: {cnt}<br>"
        "No of biergartens per 100,000 people: {cnt_100k}<br>"
    ).format(city=city.upper(),
             pop=str(int(pop)),
             cnt=str(int(cnt)),
             cnt_100k=str(round(cnt_100k, 1)))

    # Set the marker colour based on biergarten_count_100k
    if cnt_100k > 5:
        colour = 'darkpurple'
    elif cnt_100k > 4:
        colour = 'red'
    elif cnt_100k > 3:
        colour = 'orange'
    elif cnt_100k > 2:
        colour = 'pink'
    else:
        colour = 'lightgray'

    # Add the marker
    map_germany.add_child(folium.Marker(
        location=[lat, lng],
        popup=label,
        icon=folium.Icon(
            color=colour,
            prefix='fa',
            icon='circle')))

# Create a legend for the map
legend_html = """
<div style="position: fixed; bottom: 50px; left: 50px; width: 150px; height: 200px; \
border:2px solid grey; z-index:9999; font-size:14px;" >
&nbsp; No of biergartens <br>
&nbsp; per 100,000 people <br>
&nbsp; 5 + &nbsp; <i class="fa fa-map-marker fa-2x" style="color:darkpurple"></i><br>
&nbsp; 4-5 &nbsp; <i class="fa fa-map-marker fa-2x" style="color:red"></i><br>
&nbsp; 3-4 &nbsp; <i class="fa fa-map-marker fa-2x" style="color:orange"></i><br>
&nbsp; 2-3 &nbsp; <i class="fa fa-map-marker fa-2x" style="color:pink"></i><br>
&nbsp; 0-2 &nbsp; <i class="fa fa-map-marker fa-2x" style="color:lightgray"></i></div>
"""
map_germany.get_root().html.add_child(folium.Element(legend_html))

# Show the map
map_germany
```

# Do biergarten reviews hint where to go to?
```
# Plot likes
plt.figure(figsize=(6,8))
ax = sns.barplot(y='city', x='likes_count', data=data_counts, color=color)
ax.set_ylabel('')
ax.set_xlabel('Count of likes in Foursquare', fontdict={'fontsize':16})

# Plot ratings
plt.figure(figsize=(6,10))
g = sns.boxplot(data=data, y='city', x='rating',
                order=data_counts['city'],
                hue=None,
                color='goldenrod',
                saturation=1.0,
                fliersize=0.0)
g.axes.set_ylabel('')
g.axes.set_xlabel('Foursquare rating', fontdict={'fontsize':16})

# Calculate the number of observations per group and the medians to position the labels
medians = data.groupby(['city'], sort=False)['rating'].median().values
nobs = data_counts['ratings_count']
nobs = [str(x) for x in nobs.tolist()]
nobs = ["n: " + i for i in nobs]

# Add it to the plot
pos = range(len(nobs))
for tick, label in zip(pos, g.get_yticklabels()):
    g.text(x=4.72,
           y=pos[tick],
           s=nobs[tick],
           horizontalalignment='left',
           verticalalignment='center',
           size='small',
           color='black',
           weight='normal')
```

# Does population structure explain density of biergartens?
```
# Create modeling dataset
X_cols = [
    'latitude',
    'longitude',
    'Proportion of population aged 0-4 years',
    'Proportion of population aged 5-9 years',
    'Proportion of population aged 10-14 years',
    'Proportion of population aged 15-19 years',
    'Proportion of population aged 20-24 years',
    'Proportion of population aged 25-34 years',
    'Proportion of population aged 35-44 years',
    'Proportion of population aged 45-54 years',
    'Proportion of total population aged 55-64',
    'Proportion of population aged 65-74 years',
    'Proportion of population aged 75 years and over',
    'Women per 100 men',
    # 'Young-age dependency ratio (population aged 0-19 to population 20-64 years)',
    'Nationals as a proportion of population']

city_df = pd.DataFrame(data['city'])
X = data[X_cols].drop_duplicates().reset_index(drop=True)
# X.rename(columns={'Proportion of total population aged 55-64':'Proportion of population aged 55-64 years'}, inplace=True)

# Create the target variable
y = data_counts['biergarten_count_100k']

# Create the correlation matrix
corr_matrix = X.corr().abs()

# Select the upper triangle of the correlation matrix
# (np.bool was removed from NumPy; use the built-in bool)
upper = corr_matrix.where(np.triu(np.ones(corr_matrix.shape), k=1).astype(bool))

# Find feature columns with a correlation greater than 0.8
to_drop = [column for column in upper.columns if any(upper[column] > 0.8)]

# Drop the features
X.drop(X[to_drop], axis=1, inplace=True)

# Pipeline for linear regression
lin_reg_pipe = Pipeline([('minmax', MinMaxScaler(feature_range=(-1,1))),
                         ('lin_reg', LinearRegression(fit_intercept=True))])

# Train the regression model
lin_reg_pipe.fit(X, y)

# Plot the regression coefficients
plt.figure(figsize=(6,8))
barplot_data = pd.concat([pd.Series(X.columns.to_list()), pd.Series(lin_reg_pipe['lin_reg'].coef_)], axis=1)
barplot_data.columns = ['variable', 'coef']
ax = sns.barplot(y='variable', x='coef', data=barplot_data, color=color)
ax.axes.set_ylabel('')
ax.axes.set_xlabel('Regression coefficient', fontdict={'fontsize':16})
plt.show()

# Print the regression measures
print('Intercept: {}'.format(lin_reg_pipe['lin_reg'].intercept_))
print('R^2: {}'.format(lin_reg_pipe.score(X, y)))

# Plot predictions and actuals
slope, intercept, r_value, p_value, std_err = stats.linregress(lin_reg_pipe.predict(X), y)
plt.figure(figsize=(10,10))
g = sns.regplot(x=lin_reg_pipe.predict(X), y=y, color=color,
                scatter_kws={'s': data_counts['total_population']/5000})
g.axes.set_xlabel('Predicted number of biergartens per 100,000 people', fontdict={'fontsize':16})
g.axes.set_ylabel('Actual number of biergartens per 100,000 people', fontdict={'fontsize':16})
g.text(1.7, 5, r'$R^2:{0:.2f}$'.format(r_value**2), fontdict={'fontsize':14})
g.set_xticks(np.arange(0.5,7,1))
g.set_yticks(np.arange(0.5,7,1))
for line in range(0, data_counts.shape[0]):
    g.text(x=lin_reg_pipe.predict(X)[line]-0.1,
           y=y[line],
           s=data_counts.city[line],
           horizontalalignment='right',
           verticalalignment='baseline',
           size='small',
           color='black')
plt.show()
```

# Does local living standard explain biergarten density in region?

```
# Create modeling dataset
X_cols = [
    'latitude',
    'longitude',
    'Activity rate',
    'Employment (jobs) in agriculture, fishery (NACE Rev. 2, A)',
    'Employment (jobs) in arts, entertainment and recreation; other service activities; activities of household and extra-territorial organizations and bodies (NACE Rev. 2, R to U)',
    'Employment (jobs) in construction (NACE Rev. 2, F)',
    'Employment (jobs) in financial and insurance activities (NACE Rev. 2, K)',
    'Employment (jobs) in information and communication (NACE Rev. 2, J)',
    'Employment (jobs) in mining, manufacturing, energy (NACE Rev. 2, B-E)',
    'Employment (jobs) in professional, scientific and technical activities; administrative and support service activities (NACE Rev. 2, M and N)',
    'Employment (jobs) in public administration, defence, education, human health and social work activities (NACE Rev. 2, O to Q)',
    'Employment (jobs) in real estate activities (NACE Rev. 2, L)',
    'Employment (jobs) in trade, transport, hotels, restaurants (NACE Rev. 2, G to I)',
    'Proportion of employment in industries (NACE Rev.1.1 C-E)',
    'Unemployment rate, female',
    'Unemployment rate, male']

city_df = pd.DataFrame(data['city'])
X = data[X_cols].drop_duplicates().reset_index(drop=True)

# Shorten the long employment column names
X.rename(columns={
    'Employment (jobs) in agriculture, fishery (NACE Rev. 2, A)': 'Jobs in agriculture, fishery',
    'Employment (jobs) in arts, entertainment and recreation; other service activities; activities of household and extra-territorial organizations and bodies (NACE Rev. 2, R to U)': 'Jobs in arts, entertainment and recreation; other service',
    'Employment (jobs) in construction (NACE Rev. 2, F)': 'Jobs in construction',
    'Employment (jobs) in financial and insurance activities (NACE Rev. 2, K)': 'Jobs in financial and insurance activities',
    'Employment (jobs) in information and communication (NACE Rev. 2, J)': 'Jobs in information and communication',
    'Employment (jobs) in mining, manufacturing, energy (NACE Rev. 2, B-E)': 'Jobs in mining, manufacturing, energy',
    'Employment (jobs) in professional, scientific and technical activities; administrative and support service activities (NACE Rev. 2, M and N)': 'Jobs in professional, scientific and technical; administrative and support service',
    'Employment (jobs) in public administration, defence, education, human health and social work activities (NACE Rev. 2, O to Q)': 'Jobs in public administration, defence, education, human health and social work',
    'Employment (jobs) in real estate activities (NACE Rev. 2, L)': 'Jobs in real estate',
    'Employment (jobs) in trade, transport, hotels, restaurants (NACE Rev. 2, G to I)': 'Jobs in trade, transport, hotels, restaurants',
    'Proportion of employment in industries (NACE Rev.1.1 C-E)': 'Proportion of employment in industries'},
    inplace=True)

# Create the target variable
y = data_counts['biergarten_count_100k']

# Create the correlation matrix
corr_matrix = X.corr().abs()

# Select the upper triangle of the correlation matrix
upper = corr_matrix.where(np.triu(np.ones(corr_matrix.shape), k=1).astype(bool))

# Find feature columns with a correlation greater than 0.8
to_drop = [column for column in upper.columns if any(upper[column] > 0.8)]

# Drop the features
X.drop(X[to_drop], axis=1, inplace=True)

# Pipeline for linear regression
lin_reg_pipe = Pipeline([('minmax', MinMaxScaler(feature_range=(-1,1))),
                         ('lin_reg', LinearRegression(fit_intercept=True))])

# Train the regression model
lin_reg_pipe.fit(X, y)

# Plot the regression coefficients
plt.figure(figsize=(6,8))
barplot_data = pd.concat([pd.Series(X.columns.to_list()), pd.Series(lin_reg_pipe['lin_reg'].coef_)], axis=1)
barplot_data.columns = ['variable', 'coef']
ax = sns.barplot(y='variable', x='coef', data=barplot_data, color=color)
ax.axes.set_ylabel('')
ax.axes.set_xlabel('Regression coefficient', fontdict={'fontsize':16})
plt.show()

# Print the regression measures
print('Intercept: {}'.format(lin_reg_pipe['lin_reg'].intercept_))
print('R^2: {}'.format(lin_reg_pipe.score(X, y)))

# Plot predictions and actuals
slope, intercept, r_value, p_value, std_err = stats.linregress(lin_reg_pipe.predict(X), y)
plt.figure(figsize=(10,10))
g = sns.regplot(x=lin_reg_pipe.predict(X), y=y, color=color,
                scatter_kws={'s': data_counts['total_population']/5000})
g.axes.set_xlabel('Predicted number of biergartens per 100,000 people', fontdict={'fontsize':16})
g.axes.set_ylabel('Actual number of biergartens per 100,000 people', fontdict={'fontsize':16})
g.text(1.7, 5, r'$R^2:{0:.2f}$'.format(r_value**2), fontdict={'fontsize':14})
g.set_xticks(np.arange(0.5,7,1))
g.set_yticks(np.arange(0.5,7,1))
for line in range(0, data_counts.shape[0]):
    g.text(x=lin_reg_pipe.predict(X)[line]-0.1,
           y=y[line],
           s=data_counts.city[line],
           horizontalalignment='right',
           verticalalignment='baseline',
           size='small',
           color='black')
plt.show()

# Create modeling dataset combining both feature groups
X_cols = [
    'latitude',
    'longitude',
    'Proportion of population aged 0-4 years',
    'Proportion of population aged 5-9 years',
    'Proportion of population aged 10-14 years',
    'Proportion of population aged 15-19 years',
    'Proportion of population aged 20-24 years',
    'Proportion of population aged 25-34 years',
    'Proportion of population aged 35-44 years',
    'Proportion of population aged 45-54 years',
    'Proportion of total population aged 55-64',
    'Proportion of population aged 65-74 years',
    'Proportion of population aged 75 years and over',
    'Women per 100 men',
    'Young-age dependency ratio (population aged 0-19 to population 20-64 years)',
    'Nationals as a proportion of population',
    'Activity rate',
    'Employment (jobs) in agriculture, fishery (NACE Rev. 2, A)',
    'Employment (jobs) in arts, entertainment and recreation; other service activities; activities of household and extra-territorial organizations and bodies (NACE Rev. 2, R to U)',
    'Employment (jobs) in construction (NACE Rev. 2, F)',
    'Employment (jobs) in financial and insurance activities (NACE Rev. 2, K)',
    'Employment (jobs) in information and communication (NACE Rev. 2, J)',
    'Employment (jobs) in mining, manufacturing, energy (NACE Rev. 2, B-E)',
    'Employment (jobs) in professional, scientific and technical activities; administrative and support service activities (NACE Rev. 2, M and N)',
    'Employment (jobs) in public administration, defence, education, human health and social work activities (NACE Rev. 2, O to Q)',
    'Employment (jobs) in real estate activities (NACE Rev. 2, L)',
    'Employment (jobs) in trade, transport, hotels, restaurants (NACE Rev. 2, G to I)',
    'Proportion of employment in industries (NACE Rev.1.1 C-E)',
    'Unemployment rate, female',
    'Unemployment rate, male']

city_df = pd.DataFrame(data['city'])
X = data[X_cols].drop_duplicates().reset_index(drop=True)
X.rename(columns={
    'Proportion of total population aged 55-64': 'Proportion of population aged 55-64 years',
    'Employment (jobs) in agriculture, fishery (NACE Rev. 2, A)': 'Jobs in agriculture, fishery',
    'Employment (jobs) in arts, entertainment and recreation; other service activities; activities of household and extra-territorial organizations and bodies (NACE Rev. 2, R to U)': 'Jobs in arts, entertainment and recreation; other service',
    'Employment (jobs) in construction (NACE Rev. 2, F)': 'Jobs in construction',
    'Employment (jobs) in financial and insurance activities (NACE Rev. 2, K)': 'Jobs in financial and insurance activities',
    'Employment (jobs) in information and communication (NACE Rev. 2, J)': 'Jobs in information and communication',
    'Employment (jobs) in mining, manufacturing, energy (NACE Rev. 2, B-E)': 'Jobs in mining, manufacturing, energy',
    'Employment (jobs) in professional, scientific and technical activities; administrative and support service activities (NACE Rev. 2, M and N)': 'Jobs in professional, scientific and technical; administrative and support service',
    'Employment (jobs) in public administration, defence, education, human health and social work activities (NACE Rev. 2, O to Q)': 'Jobs in public administration, defence, education, human health and social work',
    'Employment (jobs) in real estate activities (NACE Rev. 2, L)': 'Jobs in real estate',
    'Employment (jobs) in trade, transport, hotels, restaurants (NACE Rev. 2, G to I)': 'Jobs in trade, transport, hotels, restaurants',
    'Proportion of employment in industries (NACE Rev.1.1 C-E)': 'Proportion of employment in industries'},
    inplace=True)

# Create the target variable
y = data_counts['biergarten_count_100k']

# Create the correlation matrix
corr_matrix = X.corr().abs()

# Select the upper triangle of the correlation matrix
upper = corr_matrix.where(np.triu(np.ones(corr_matrix.shape), k=1).astype(bool))

# Find feature columns with a correlation greater than 0.7
to_drop = [column for column in upper.columns if any(upper[column] > 0.7)]

# Drop the features
X.drop(X[to_drop], axis=1, inplace=True)

# Pipeline for linear regression
lin_reg_pipe = Pipeline([
    ('minmax', MinMaxScaler(feature_range=(-1,1))),
    ('lin_reg', LinearRegression(fit_intercept=True))])

# Train the regression model
lin_reg_pipe.fit(X, y)

# Plot the regression coefficients
plt.figure(figsize=(6,8))
barplot_data = pd.concat([pd.Series(X.columns.to_list()), pd.Series(lin_reg_pipe['lin_reg'].coef_)], axis=1)
barplot_data.columns = ['variable', 'coef']
ax = sns.barplot(y='variable', x='coef', data=barplot_data, color=color)
ax.axes.set_ylabel('')
ax.axes.set_xlabel('Regression coefficient', fontdict={'fontsize':16})
plt.show()

# Print the regression measures
print('Intercept: {}'.format(lin_reg_pipe['lin_reg'].intercept_))
print('R^2: {}'.format(lin_reg_pipe.score(X, y)))

# Plot predictions and actuals
slope, intercept, r_value, p_value, std_err = stats.linregress(lin_reg_pipe.predict(X), y)
plt.figure(figsize=(10,10))
g = sns.regplot(x=lin_reg_pipe.predict(X), y=y, color=color,
                scatter_kws={'s': data_counts['total_population']/5000})
g.axes.set_xlabel('Predicted number of biergartens per 100,000 people', fontdict={'fontsize':16})
g.axes.set_ylabel('Actual number of biergartens per 100,000 people', fontdict={'fontsize':16})
g.text(1.7, 5, r'$R^2:{0:.2f}$'.format(r_value**2), fontdict={'fontsize':14})
g.set_xticks(np.arange(0.5,7,1))
g.set_yticks(np.arange(0.5,7,1))
for line in range(0, data_counts.shape[0]):
    g.text(x=lin_reg_pipe.predict(X)[line]-0.1,
           y=y[line],
           s=data_counts.city[line],
           horizontalalignment='right',
           verticalalignment='baseline',
           size='small',
           color='black')
plt.show()
```
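The multicollinearity filter used in each modeling block above — keep the upper triangle of the absolute correlation matrix and drop every column that correlates above the threshold with an earlier one — can be sketched on synthetic data. The frame, seed, and threshold below are made up for illustration:

```
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
a = rng.normal(size=100)
X = pd.DataFrame({
    'a': a,
    'b': a * 2 + rng.normal(scale=0.01, size=100),  # nearly collinear with a
    'c': rng.normal(size=100),                      # independent noise
})

# Upper triangle of the absolute correlation matrix (k=1 excludes the diagonal)
corr = X.corr().abs()
upper = corr.where(np.triu(np.ones(corr.shape), k=1).astype(bool))

# Drop any column correlated above 0.8 with a column that comes before it
to_drop = [col for col in upper.columns if any(upper[col] > 0.8)]
print(to_drop)  # ['b']
```

Because only the upper triangle is inspected, each correlated pair loses its later column while the earlier one survives.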
```
# Import required modules
import pandas as pd
import numpy as np
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.ensemble import AdaBoostClassifier, GradientBoostingClassifier, BaggingClassifier, RandomForestClassifier, VotingClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import LinearSVC
import xgboost as xgb
import lightgbm as lgb
from sklearn import metrics
from catboost import CatBoostClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.linear_model import LogisticRegression

# Import the original train set and the principal components (PCs) obtained from the PCA done in another notebook
df = pd.read_csv('train.csv')
pca_train = pd.read_csv('pca_train.csv')
pca_train.head()

# Convert the categorical Y/N target variable 'Loan_Status' to binary 1/0 for classification
df['Loan_Status'] = df['Loan_Status'].map(lambda x: 1 if x == 'Y' else 0)

# Set X and y for model training and do a train-test split using sklearn
X = pca_train.values
y = df['Loan_Status']
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.15, random_state=1)
y_test.shape
X_train.shape

# Initiate a new AdaBoost (adaptive boosting) classifier, an ensemble boosting algorithm
ada = AdaBoostClassifier()
# Create a dictionary of all values we want to test for the selected model parameters
params_ada = {'n_estimators': np.arange(1, 10)}
# Use GridSearchCV to test all values for the selected model parameters
ada_gs = GridSearchCV(ada, params_ada, cv=10, verbose=1, n_jobs=-1, pre_dispatch='128*n_jobs')
# Fit the model to the training data
ada_gs.fit(X_train, y_train)
# Save the best model
ada_best = ada_gs.best_estimator_
# Check the value of the best selected model parameter(s)
print(ada_gs.best_params_)
# Print the accuracy score on the test data using the best model
print('ada: {}'.format(ada_best.score(X_test, y_test)))

# Initiate a new Gradient Boosting Classifier, an ensemble boosting algorithm
gbc = GradientBoostingClassifier(learning_rate=0.005, warm_start=True)
params_gbc = {'n_estimators': np.arange(1, 200)}
gbc_gs = GridSearchCV(gbc, params_gbc, cv=10, verbose=1, n_jobs=-1, pre_dispatch='128*n_jobs')
gbc_gs.fit(X_train, y_train)
gbc_best = gbc_gs.best_estimator_
print(gbc_gs.best_params_)
print('gbc: {}'.format(gbc_best.score(X_test, y_test)))

# Initiate a new Bagging Classifier over decision trees, an ensemble bagging algorithm
bcdt = BaggingClassifier(DecisionTreeClassifier(random_state=1))
params_bcdt = {'n_estimators': np.arange(1, 100)}
bcdt_gs = GridSearchCV(bcdt, params_bcdt, cv=10, verbose=1, n_jobs=-1, pre_dispatch='128*n_jobs')
bcdt_gs.fit(X_train, y_train)
bcdt_best = bcdt_gs.best_estimator_
print(bcdt_gs.best_params_)
print('bcdt: {}'.format(bcdt_best.score(X_test, y_test)))

# Initiate a new Decision Tree Classifier and follow the same process as above
dt = DecisionTreeClassifier(random_state=1)
params_dt = {}
dt_gs = GridSearchCV(dt, params_dt, cv=10, verbose=1, n_jobs=-1, pre_dispatch='128*n_jobs')
dt_gs.fit(X_train, y_train)
dt_best = dt_gs.best_estimator_
print(dt_gs.best_params_)
print('dt: {}'.format(dt_best.score(X_test, y_test)))

# Initiate a new Support Vector Classifier and follow the same process as above
svc = LinearSVC(random_state=1)
params_svc = {}
svc_gs = GridSearchCV(svc, params_svc, cv=10, verbose=1, n_jobs=-1, pre_dispatch='128*n_jobs')
svc_gs.fit(X_train, y_train)
svc_best = svc_gs.best_estimator_
print(svc_gs.best_params_)
print('svc: {}'.format(svc_best.score(X_test, y_test)))

# Initiate a new XGBoost Classifier, an ensemble boosting algorithm, and follow the same process as above
xg = xgb.XGBClassifier(random_state=1, learning_rate=0.005)
params_xg = {'max_depth': np.arange(2, 5), 'n_estimators': np.arange(1, 100)}
xg_gs = GridSearchCV(xg, params_xg, cv=10, verbose=1, n_jobs=-1, pre_dispatch='128*n_jobs')
xg_gs.fit(X_train, y_train)
xg_best = xg_gs.best_estimator_
print(xg_gs.best_params_)
print('xg: {}'.format(xg_best.score(X_test, y_test)))

# Initiate a new Light Gradient Boosting Machine (LightGBM), an ensemble boosting algorithm
# Set the train data and start training
train_data = lgb.Dataset(X_train, label=y_train)
params = {'learning_rate': 0.01}
lgbm = lgb.train(params, train_data, 100)
y_pred = lgbm.predict(X_test)
# Threshold the predicted probabilities at 0.5
y_pred = np.where(y_pred >= 0.5, 1, 0)
# Print the overall accuracy
print(metrics.accuracy_score(y_test, y_pred))

# Initiate a new CatBoost Classifier, an ensemble boosting algorithm, and fit on the train data
cbc = CatBoostClassifier(random_state=1, iterations=100)
cbc.fit(X_train, y_train)
print('cbc: {}'.format(cbc.score(X_test, y_test)))

# Initiate a new KNeighbors Classifier and follow the same process as above
knn = KNeighborsClassifier()
params_knn = {'n_neighbors': np.arange(1, 25)}
knn_gs = GridSearchCV(knn, params_knn, cv=10, verbose=1, n_jobs=-1, pre_dispatch='128*n_jobs')
knn_gs.fit(X_train, y_train)
knn_best = knn_gs.best_estimator_
print(knn_gs.best_params_)
print('knn: {}'.format(knn_best.score(X_test, y_test)))

# Initiate a new Random Forest Classifier, an ensemble bagging algorithm, and follow the same process as above
rf = RandomForestClassifier()
params_rf = {'n_estimators': [100, 150, 200, 250, 300, 350, 400, 450, 500]}
rf_gs = GridSearchCV(rf, params_rf, cv=10, verbose=1, n_jobs=-1, pre_dispatch='128*n_jobs')
rf_gs.fit(X_train, y_train)
rf_best = rf_gs.best_estimator_
print(rf_gs.best_params_)
print('rf: {}'.format(rf_best.score(X_test, y_test)))

# Create a new Logistic Regression model and fit on the train data
log_reg = LogisticRegression(solver='lbfgs')
log_reg.fit(X_train, y_train)
print('log_reg: {}'.format(log_reg.score(X_test, y_test)))

# Print the overall accuracy score for all 11 best classification models trained above
print('Overall Accuracy of best selected models on X_test dataset\n')
print('knn: {}'.format(knn_best.score(X_test, y_test)))
print('rf: {}'.format(rf_best.score(X_test, y_test)))
print('log_reg: {}'.format(log_reg.score(X_test, y_test)))
print('ada: {}'.format(ada_best.score(X_test, y_test)))
print('gbc: {}'.format(gbc_best.score(X_test, y_test)))
print('bcdt: {}'.format(bcdt_best.score(X_test, y_test)))
print('dt: {}'.format(dt_best.score(X_test, y_test)))
print('svc: {}'.format(svc_best.score(X_test, y_test)))
print('xg: {}'.format(xg_best.score(X_test, y_test)))
print('lgbm: {}'.format(metrics.accuracy_score(y_test, y_pred)))
print('cbc: {}'.format(cbc.score(X_test, y_test)))

# Create a list of our models
# (LinearSVC and the raw LightGBM booster are left out: soft voting needs sklearn estimators with predict_proba)
estimators = [('knn', knn_best), ('rf', rf_best), ('log_reg', log_reg), ('ada', ada_best),
              ('gbc', gbc_best), ('bcdt', bcdt_best), ('dt', dt_best), ('xg', xg_best), ('cbc', cbc)]

# Create a voting classifier, passing the list of models as estimators for the ensemble
ensemble = VotingClassifier(estimators, voting='soft', n_jobs=-1, flatten_transform=True,
                            weights=[1/9] * 9)

# Fit the final ensemble model on the train data
ensemble.fit(X_train, y_train)

# Test our final model on the test data and print the final accuracy score for the ensemble built from bagging and boosting models
ensemble.score(X_test, y_test)

# Import the PCs of the test data for the final predictions
dft = pd.read_csv('pca_test.csv')
dft.head()

# Assign the PCs dft to test_X
test_X = dft.values
print(len(test_X))

# Make the final predictions on the test data
test_predictions = ensemble.predict(test_X)
test_predictions

# Import the original test file for the Loan_IDs and assign test_predictions to a new column 'Loan_Status'
dft2 = pd.read_csv('test.csv')
dft2['Loan_Status'] = test_predictions

# Drop unnecessary columns
dft2 = dft2.drop(['Gender','Married','Dependents','Education','Self_Employed','ApplicantIncome','CoapplicantIncome','LoanAmount','Loan_Amount_Term','Credit_History','Property_Area'], axis=1)
dft2.head()

# Convert the binary 1/0 targets back to categorical Y/N labels
dft2['Loan_Status'] = dft2['Loan_Status'].map(lambda x: 'Y' if x == 1 else 'N')
dft2.head()

# Save the predictions from the final ensemble to local disk
dft2.to_csv('Ensemble.csv', index=False)
```
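Soft voting, as used by the `VotingClassifier` above, averages the per-class probability estimates of the member models (here with equal weights) and predicts the class with the highest average. A minimal sketch with made-up probability estimates for three models and three samples:

```
import numpy as np

# Per-model class-probability estimates, shape (n_models, n_samples, n_classes).
# The numbers are invented for illustration.
probas = np.array([
    [[0.9, 0.1], [0.4, 0.6], [0.2, 0.8]],   # model 1
    [[0.7, 0.3], [0.6, 0.4], [0.1, 0.9]],   # model 2
    [[0.8, 0.2], [0.3, 0.7], [0.4, 0.6]],   # model 3
])

# Soft voting: average the probabilities across models, then take the argmax per sample
avg = probas.mean(axis=0)
predictions = avg.argmax(axis=1)
print(predictions)  # [0 1 1]
```

Note that the second sample is classified 1 even though one model preferred class 0 — the averaged probability (0.57 for class 1) decides, not a majority of hard votes.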
# Sequence Models & Attention Mechanism

## 1. Various sequence to sequence architectures

### 1.1 Basic Models

![Sequence to sequence model.png](img/Sequence to sequence model.png)
![Image captioning.png](img/Image captioning.png)

### 1.2 Picking the most likely sentence

A sequence-to-sequence machine translation model and a language model have something in common: both need to pick the sentence with the highest probability. But there are also some notable differences between the two.

A sequence-to-sequence machine translation model can be viewed as a conditional language model:

![Machine translation as building a conditional language model.png](img/Machine translation as building a conditional language model.png)

Given this conditional language model, we can generate different translations and assign each one a probability. For machine translation, however, we do not want a randomly generated sentence; we want to find the sentence with the highest probability:

![Finding the most likely translation.png](img/Finding the most likely translation.png)

Greedy search, which picks the most probable next word at every step, may fail to find the global maximum, while exhaustive search is impractical because the computation is too large:

![Why not a greedy search.png](img/Why not a greedy search.png)

### 1.3 Beam Search

Beam search is not guaranteed to find the global optimum, but it usually works well in practice.

Step 1: fix the **beam width** parameter $B$ in advance, run the encoder to compute $x$, and from $x$ compute $P(y^{<1>}|x)$. Keep the $B$ words with the highest conditional probability, together with their probabilities.

![Beam search algorithm 1.png](img/Beam search algorithm 1.png)

Step 2: take each of the $B$ words chosen in the previous step as $y^{<1>}$, take every word in the vocabulary as $y^{<2>}$, and compute $P(y^{<1>},y^{<2>}|x) = P(y^{<1>}|x)P(y^{<2>}|x, y^{<1>})$. With a vocabulary of 10000 words this yields 10000×B combinations of $y^{<1>},y^{<2>}$; again keep the $B$ combinations with the highest probability.

![Beam search algorithm 2.png](img/Beam search algorithm 2.png)

Step 3: analogous to the previous step. Keep repeating until the sentence ends. Note that when $B=1$, beam search reduces to greedy search.

![Beam search algorithm 3.png](img/Beam search algorithm 3.png)

### 1.4 Refinements to Beam Search

Length normalization: multiplying many probabilities together can cause numerical underflow and a loss of precision, so we maximize the sum of log probabilities instead. We then average over the sentence length; otherwise the objective penalizes long sentences too heavily.

![Length normalization.png](img/Length normalization.png)
![Beam search discussion.png](img/Beam search discussion.png)

### 1.5 Error analysis in beam search

Error analysis: when beam search produces a poor result, we should determine whether to attribute the error to beam search or to the RNN model.

![Error analysis beam search.png](img/Error analysis beam search.png)
![Error analysis on beam search.png](img/Error analysis on beam search.png)

Take the mistranslated examples from the training set and compare them against the human translations. Determine what share of the final error is caused by beam search and what share by the RNN model. If the RNN is at fault, analyze further: add regularization, get more data, or change the network architecture.

![Error analysis process.png](img/Error analysis process.png)

### 1.6 BLEU Score

[BLEU: a Method for Automatic Evaluation of Machine Translation](http://www.aclweb.org/anthology/P02-1040)

There can be many correct translations of the same sentence; to choose among several correct translations we need a scoring mechanism.

![Bleu score on unigrams.png](img/Bleu score on unigrams.png)
![Bleu details.png](img/Bleu details.png)

### 1.7 Attention model intuition

The machine translation models presented so far use an encoder-decoder architecture: two separate RNNs are trained, the encoder RNN produces a single encoding for the whole sentence, and the decoder RNN translates from that encoding. In reality, human translators, especially for long sentences, do not read the entire sentence before starting to translate. With this traditional architecture, the BLEU score slowly degrades as sentences grow longer.

![The problem of long sequences.png](img/The problem of long sequences.png)

[NEURAL MACHINE TRANSLATION BY JOINTLY LEARNING TO ALIGN AND TRANSLATE](https://arxiv.org/pdf/1409.0473.pdf)

![Attention model intuition.png](img/Attention model intuition.png)

### 1.8 Attention model

![Attention model.png](img/Attention model.png)
![Computing attention.png](img/Computing attention.png)
![Attention examples.png](img/Attention examples.png)

## 2. Audio Data

### 2.1 Speech Recognition

![Speech recognition problem.png](img/Speech recognition problem.png)
![Attention model for speech recognition.png](img/Attention model for speech recognition.png)
![CTC cost for speech recognition.png](img/CTC cost for speech recognition.png)

### 2.2 Trigger Word Detection

![What is trigger word detection.png](img/What is trigger word detection.png)
![Trigger word detection algorithm.png](img/Trigger word detection algorithm.png)
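The beam search procedure from sections 1.3–1.4 can be sketched in plain Python; the two-step vocabulary and conditional probabilities below are made up purely for illustration:

```python
import math

def beam_search(cond_probs, steps, beam_width=2):
    """cond_probs maps a prefix tuple to {token: P(token | prefix, x)}."""
    beams = [(0.0, ())]  # (sum of log-probabilities, partial sequence)
    for _ in range(steps):
        candidates = []
        for log_p, seq in beams:
            for token, p in cond_probs.get(seq, {}).items():
                candidates.append((log_p + math.log(p), seq + (token,)))
        # keep only the B most probable partial sequences
        candidates.sort(key=lambda c: c[0], reverse=True)
        beams = candidates[:beam_width]
    return beams

# Greedy search would commit to "a" first (P = 0.6), but the most likely
# full sequence starts with "b": P("b","x") = 0.4 * 0.9 = 0.36.
cond = {
    ():     {"a": 0.6, "b": 0.4},
    ("a",): {"x": 0.3, "y": 0.3},
    ("b",): {"x": 0.9, "y": 0.1},
}
best = beam_search(cond, steps=2, beam_width=2)
```

With `beam_width=1` the same function degenerates to greedy search, which in this toy example would return the inferior sequence starting with "a".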
# Predicting movie ratings

One of the most common uses of big data is to predict what users want. This allows Google to show you relevant ads, Amazon to recommend relevant products, and Netflix to recommend movies that you might like. This lab will demonstrate how we can use Apache Spark to recommend movies to a user. We will start with some basic techniques, and then use the mllib library's Alternating Least Squares method to make more sophisticated predictions.

## 1. Data Setup

Before starting with the recommendation systems, we need to download the dataset and do a little bit of pre-processing.

### 1.1 Download

Let's begin by downloading the dataset. If you already have a copy of the dataset, you can skip this part. For this lab, we will use the [movielens 25M stable benchmark rating dataset](https://files.grouplens.org/datasets/movielens/ml-25m.zip).

```
# let's start by downloading the dataset.
import wget
wget.download(url = "https://files.grouplens.org/datasets/movielens/ml-25m.zip", out = "dataset.zip")

# let's unzip the dataset
import zipfile
with zipfile.ZipFile("dataset.zip", "r") as zfile:
    zfile.extractall()
```

### 1.2 Dataset Format

The following table highlights some data from `ratings.csv` (with comma-separated elements):

| UserID | MovieID | Rating | Timestamp |
|--------|---------|--------|------------|
|...|...|...|...|
|3022|152836|5.0|1461788770|
|3023|169|5.0|1302559971|
|3023|262|5.0|1302559918|
|...|...|...|...|

The following table highlights some data from `movies.csv` (with comma-separated elements):

| MovieID | Title | Genres |
|---------|---------|--------|
|...|...|...|
| 209133 |The Riot and the Dance (2018) | (no genres listed) |
| 209135 |Jane B. by Agnès V. (1988) | Documentary\|Fantasy |
|...|...|...|

The `Genres` field has the format `Genres1|Genres2|Genres3|...` or `(no genres listed)`.

The format of these files is uniform and simple, so we can easily parse them using python:

- For each line in the rating dataset, we create a tuple of (UserID, MovieID, Rating). We drop the timestamp because we do not need it for this exercise.
- For each line in the movies dataset, we create a tuple of (MovieID, Title). We drop the Genres because we do not need them for this exercise.

### 1.3 Preprocessing

We can begin to preprocess our data. This step includes:

1) We should drop the timestamp; we do not need it.
2) We should drop the genres; we do not need them.
3) We should parse data according to their intended type. For example, the elements of rating should be floats.
4) Each line should encode data in an easily processable format, like a tuple.
5) We should filter out the first line of both datasets (the header).

```
# let's initialize the spark session
import pyspark
from pyspark.sql import SparkSession

spark = SparkSession.builder \
    .appName("Python Spark SQL basic example") \
    .getOrCreate()
spark
```

#### 1.3.1 Load The Data

We can start by loading the dataset formatted as raw text.

```
from pprint import pprint

ratings_rdd = spark.sparkContext.textFile(name = "ml-25m/ratings.csv", minPartitions = 2)
movies_rdd  = spark.sparkContext.textFile(name = "ml-25m/movies.csv" , minPartitions = 2)

# let's have a peek at our dataset
print("ratings --->")
pprint(ratings_rdd.take(5))
print("\nmovies --->")
pprint(movies_rdd.take(5))
```

#### 1.3.2 SubSampling

Since we have limited resources in terms of computation, it is sometimes useful to work with only a fraction of the whole dataset.
```
ratings_rdd = ratings_rdd.sample(withReplacement=False, fraction=1/25, seed=14).cache()
movies_rdd  = movies_rdd .sample(withReplacement=False, fraction=1,    seed=14).cache()
print(f"ratings_rdd: {ratings_rdd.count()}, movies_rdd {movies_rdd.count()}")
```

#### 1.3.3 Parsing

Here, we do the real preprocessing: dropping columns, parsing elements, and filtering out the header.

```
def string2rating(line):
    """
    Parse a line in the ratings dataset.

    Args:
        line (str): a line in the ratings dataset in the form of UserID,MovieID,Rating,Timestamp
    Returns:
        tuple[int,int,float]: (UserID, MovieID, Rating)
    """
    userID, movieID, rating, *others = line.split(",")
    try:
        return int(userID), int(movieID), float(rating)
    except ValueError:
        return None

def string2movie(line):
    """
    Parse a line in the movies dataset.

    Args:
        line (str): a line in the movies dataset in the form of MovieID,Title,Genres.
                    Genres in the form of Genre1|Genre2|...
    Returns:
        tuple[int,str]: (MovieID, Title)
    """
    movieID, title, *others = line.split(",")
    try:
        return int(movieID), title
    except ValueError:
        return None

ratings_rdd = ratings_rdd.map(string2rating).filter(lambda x: x != None).cache()
movies_rdd  = movies_rdd .map(string2movie ).filter(lambda x: x != None).cache()

print(f"There are {ratings_rdd.count()} ratings and {movies_rdd.count()} movies in the datasets")
print(f"Ratings: ---> \n{ratings_rdd.take(3)}")
print(f"Movies: ---> \n{movies_rdd.take(3)}")
```

## 2. Basic Recommendations

### 2.1 Highest Average Rating

One way to recommend movies is to always recommend the movies with the highest average rating. In this section, we will use Spark to find the name, number of ratings, and average rating of the 20 movies with the highest average rating and more than 500 reviews. We filter out movies with 500 reviews or fewer because movies with few reviews may not have broad appeal to everyone.

```
def averageRating(ratings):
    """
    Computes the average rating.

    Args:
        ratings (tuple[int, list[float]]): a MovieID with its list of ratings
    Returns:
        tuple[int, float]: the MovieID with its average rating
    """
    return (ratings[0], sum(ratings[1]) / len(ratings[1]))

rdd = ratings_rdd.map(lambda x: (x[1], x[2])).groupByKey()  # group ratings by MovieID (x[1])
rdd = rdd.filter(lambda x: len(x[1]) > 500)                 # keep movies with more than 500 reviews
rdd = rdd.map(averageRating)                                # compute the average rating
rdd = rdd.sortBy(lambda x: x[1], ascending=False)
rdd.take(5)
```

Ok, now we have the best (according to the average) popular (according to the number of reviews) movies. However, we can only see their MovieID. Let's convert the IDs into titles.

```
rdd.join(movies_rdd)\
   .map(lambda x: (x[1][1], x[1][0]))\
   .sortBy(lambda x: x[1], ascending=False)\
   .take(20)
```

### 2.2 Collaborative Filtering

We are going to use a technique called collaborative filtering. Collaborative filtering is a method of making automatic predictions (filtering) about the interests of a user by collecting preferences or taste information from many users (collaborating). The underlying assumption of the collaborative filtering approach is that if a person A has the same opinion as a person B on an issue, A is more likely to have B's opinion on a different issue x than to have the opinion on x of a person chosen randomly.

At first, people rate different items (like videos, images, games). After that, the system makes predictions about a user's rating for an item which the user has not rated yet. These predictions are built upon the existing ratings of other users who have similar ratings to the active user.

#### 2.2.1 Creating a Training Set

Before we jump into using machine learning, we need to break up the `ratings_rdd` dataset into three pieces:

* a training set (RDD), which we will use to train models,
* a validation set (RDD), which we will use to choose the best model,
* a test set (RDD), which we will use for estimating the predictive power of the recommender system.
To randomly split the dataset into multiple groups, we can use the pyspark [randomSplit] transformation, which takes a list of splits with a seed and returns multiple RDDs. [randomSplit]:https://spark.apache.org/docs/3.1.1/api/python/reference/api/pyspark.RDD.randomSplit.html?highlight=randomsplit#pyspark.RDD.randomSplit ``` training_rdd, validation_rdd, test_rdd = ratings_rdd.randomSplit([6, 2, 2], seed=14) print(f"Training: {training_rdd.count()}, validation: {validation_rdd.count()}, test: {test_rdd .count()}") print("training samples: ", training_rdd .take(3)) print("validation samples: ", validation_rdd.take(3)) print("test samples: ", test_rdd .take(3)) ``` #### 2.2.2 Alternating Least Square Errors For movie recommendations, we start with a matrix whose entries are movie ratings by users. Each column represents a user and each row represents a particular movie. Since not all users have rated all movies, we do not know all of the entries in this matrix, which is precisely why we need collaborative filtering. For each user, we have ratings for only a subset of the movies. With collaborative filtering, the idea is to approximate the rating matrix by factorizing it as the product of two matrices: one that describes properties of each user, and one that describes properties of each movie. We want to select these two matrices such that the error for the users/movie pairs where we know the correct ratings is minimized. The *Alternating Least Squares* algorithm does this by first randomly filling the user matrix with values and then optimizing the value of the movies such that the error is minimized. Then, it holds the movies matrix constant and optimizes the value of the user's matrix. This alternation between which matrix to optimize is the reason for the "alternating" in the name. 
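The alternation described above can be illustrated with a tiny NumPy sketch. For simplicity this toy version assumes the rating matrix is fully observed (real ALS, including Spark's, only sums the error over the known entries), and the data are random placeholders:

```python
import numpy as np

rng = np.random.default_rng(14)

# Hypothetical 4-user x 5-movie rating matrix, fully observed in this toy
R = rng.integers(1, 6, size=(4, 5)).astype(float)
k = 2                             # rank of the factorization
U = rng.normal(size=(4, k))       # user factors
M = rng.normal(size=(5, k))       # movie factors

def rmse(U, M, R):
    return np.sqrt(np.mean((U @ M.T - R) ** 2))

err_before = rmse(U, M, R)
for _ in range(20):
    # hold M fixed and solve the least-squares problem for U ...
    U = np.linalg.lstsq(M, R.T, rcond=None)[0].T
    # ... then hold U fixed and solve for M: the "alternating" part
    M = np.linalg.lstsq(U, R, rcond=None)[0].T
err_after = rmse(U, M, R)
```

Each half-step is an exact least-squares solve, so the reconstruction error can only decrease from one sweep to the next.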
```
from pyspark.mllib.recommendation import ALS

# thanks to modern libraries, training an ALS model is as easy as
model = ALS.train(training_rdd, rank = 4, seed = 14, iterations = 5, lambda_ = 0.1)

# let's have a peek at a few predictions
model.predictAll(validation_rdd.map(lambda x: (x[0], x[1]))).take(5)
```

#### 2.2.3 Root Mean Square Error (RMSE)

Next, we need to evaluate our model: is it good or is it bad? To score the model, we will use RMSE (also often called Root Mean Square Deviation, RMSD). You can think of RMSE as a distance function that measures the distance between the predictions and the ground truths. It is computed as follows:

$$ RMSE(f, \mathcal{D}) = \sqrt{\frac{\sum_{(x,y) \in \mathcal{D}} (f(x) - y)^2}{|\mathcal{D}|}}$$

Where:

* $\mathcal{D}$ is our dataset; it contains samples alongside their predictions. Formally, $\mathcal{D} \subseteq \mathcal{X} \times \mathcal{Y}$, where:
    * $\mathcal{X}$ is the set of all input samples.
    * $\mathcal{Y}$ is the set of all possible predictions.
* $f : \mathcal{X} \rightarrow \mathcal{Y}$ is the model we wish to evaluate. Given an input $x$ (from $\mathcal{X}$, the set of possible inputs) it returns a value $f(x)$ (from $\mathcal{Y}$, the set of possible outputs).
* $x$ represents an input.
* $f(x)$ represents the prediction for $x$.
* $y$ represents the ground truth.

As you can imagine, $f(x)$ and $y$ can differ, i.e. our model can be wrong. With $RMSE(f, \mathcal{D})$, we want to measure the degree to which our model $f$ is wrong on the dataset $\mathcal{D}$. The higher $RMSE(f, \mathcal{D})$ is, the more wrong $f$ is; the smaller it is, the more accurate $f$ is.

To better understand the RMSE, consider the following facts:

* When $f(x)$ is close to $y$, our model is accurate. In that case $(f(x) - y)^2$ is small.
* When $f(x)$ is far from $y$, our model is inaccurate. In that case $(f(x) - y)^2$ is high.
* If our model is accurate, it will often be accurate on $\mathcal{D}$; it will mostly make small errors, which amount to a small RMSE.
* If our model is inaccurate, it will often be inaccurate on $\mathcal{D}$; it will mostly make big errors, which amount to a large RMSE.

Let's make a function to compute the RMSE so that we can use it multiple times easily.

```
def RMSE(predictions_rdd, truths_rdd):
    """
    Compute the root mean squared error between predicted and actual ratings.

    Args:
        predictions_rdd: predicted ratings for each movie and each user where
                         each entry is in the form (UserID, MovieID, Rating).
        truths_rdd: actual ratings where each entry is in the form (UserID, MovieID, Rating).
    Returns:
        RMSE (float): computed RMSE value
    """
    # Transform predictions and truths into tuples of the form ((UserID, MovieID), Rating)
    predictions = predictions_rdd.map(lambda i: ((i[0], i[1]), i[2]))
    truths      = truths_rdd     .map(lambda i: ((i[0], i[1]), i[2]))

    # Compute the squared error for each matching entry (i.e., the same (UserID, MovieID) in each
    # RDD) in the reformatted RDDs using RDD transformations - do not use collect()
    squared_errors = predictions.join(truths)\
                                .map(lambda i: (i[1][0] - i[1][1])**2)

    total_squared_error = squared_errors.sum()
    total_ratings       = squared_errors.count()
    mean_squared_error  = total_squared_error / total_ratings
    root_mean_squared_error = mean_squared_error ** (1/2)
    return root_mean_squared_error

# let's evaluate the trained model
RMSE(predictions_rdd = model.predictAll(validation_rdd.map(lambda x: (x[0], x[1]))),
     truths_rdd      = validation_rdd)
```

#### 2.2.4 HyperParameters Tuning

Can we do better? When training the ALS model there were a few parameters to set. However, we do not know which configuration is the best. On these occasions, we want to try a few combinations to obtain even better results. In this section, we will search over a few parameters using a so-called **grid search**.
We will proceed as follows:

1) We decide the parameters to tune.
2) We train with all possible configurations.
3) We evaluate each trained model on the validation set.
4) We evaluate the best model on the test set.

```
HyperParameters = {
    "rank"       : [4, 8, 12],
    "seed"       : [14],
    "iterations" : [5, 10],
    "lambda"     : [0.05, 0.1, 0.25]
}

best_model = None
best_error = float("inf")
best_conf  = dict()

# how many trainings are we doing?
for rank in HyperParameters["rank"]:                      #
    for seed in HyperParameters["seed"]:                  # I consider these nested for-loops an anti-pattern.
        for iterations in HyperParameters["iterations"]:  # However, we can leave it as it is for the sake of simplicity.
            for lambda_ in HyperParameters["lambda"]:     #
                model = ALS.train(training_rdd, rank = rank, seed = seed, iterations = iterations, lambda_ = lambda_)
                validation_error = RMSE(predictions_rdd = model.predictAll(validation_rdd.map(lambda x: (x[0], x[1]))),
                                        truths_rdd      = validation_rdd)
                if validation_error < best_error:
                    best_model, best_error = model, validation_error
                    best_conf = {"rank": rank, "seed": seed, "iterations": iterations, "lambda": lambda_}
                    print(f"current best validation error {best_error} with configuration {best_conf}")

# evaluate the *best* model (not the last one trained) on the test set
test_error = RMSE(predictions_rdd = best_model.predictAll(test_rdd.map(lambda x: (x[0], x[1]))),
                  truths_rdd      = test_rdd)
print(f"test error {test_error} with configuration {best_conf}")
```
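The nested loops above (which the code comment itself flags as an anti-pattern) can be flattened with `itertools.product`, iterating over ready-made configuration dicts in a single loop — a sketch of the refactoring, not part of the original lab:

```python
from itertools import product

HyperParameters = {
    "rank"       : [4, 8, 12],
    "seed"       : [14],
    "iterations" : [5, 10],
    "lambda"     : [0.05, 0.1, 0.25],
}

# one dict per combination, instead of one for-loop per hyperparameter
names = list(HyperParameters)
configurations = [dict(zip(names, values))
                  for values in product(*HyperParameters.values())]
```

Inside a single `for conf in configurations:` loop you would then call `ALS.train` with the values of `conf`, exactly as in the nested version.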
# Table widgets in the napari viewer

Before we talk about tables and widgets in napari, let's create a viewer, a simple test image and a labels layer:

```
import numpy as np
import napari
import pandas

from napari_skimage_regionprops import regionprops_table, add_table, get_table

viewer = napari.Viewer()
viewer.add_image(np.asarray([[1,2],[2,2]]))
viewer.add_labels(np.asarray([[1,2],[3,3]]))
```

Now, let's perform a measurement of `size` and `intensity` of the labeled objects in the given image. A table with results will be automatically added to the viewer.

```
regionprops_table(
    viewer.layers[0].data,
    viewer.layers[1].data,
    viewer,
    size=True,
    intensity=True
)
napari.utils.nbscreenshot(viewer)
```

We can also get the widget representing the table:

```
# The table is associated with a given labels layer:
labels = viewer.layers[1]
table = get_table(labels, viewer)
table
```

You can also read the content from the table as a dictionary. It is recommended to convert it into a pandas `DataFrame`:

```
content = pandas.DataFrame(table.get_content())
content
```

The content of this table can be changed programmatically. This also changes the `properties` of the associated layer.

```
new_values = {'A': [1, 2, 3], 'B': [4, 5, 6]}
table.set_content(new_values)
napari.utils.nbscreenshot(viewer)
```

You can also append data to an existing table through the `append_content()` function. Suppose you have another measurement for the labels in your image, i.e. the "double area":

```
table.set_content(content.to_dict('list'))
double_area = {'label': content['label'].to_numpy(),
               'Double area': content['area'].to_numpy() * 2.0}
```

You can now append this as a new column to the existing table:

```
table.append_content(double_area)
napari.utils.nbscreenshot(viewer)
```

*Note*: If the added data has columns in common with the existing table (for instance, the labels column), the tables will be merged on the commonly available columns.
If no common columns exist, the data will simply be added to the table and the non-intersecting rows/columns will be filled with NaN:

```
tripple_area = {'Tripple area': content['area'].to_numpy() * 3.0}
table.append_content(tripple_area)
napari.utils.nbscreenshot(viewer)
```

Note: Changing the label's `properties` does not invoke changes of the table...

```
new_values = {'C': [6, 7, 8], 'D': [9, 10, 11]}
labels.properties = new_values
napari.utils.nbscreenshot(viewer)
```

But you can refresh the content:

```
table.update_content()
napari.utils.nbscreenshot(viewer)
```

You can remove the table from the viewer like this:

```
viewer.window.remove_dock_widget(table)
napari.utils.nbscreenshot(viewer)
```

Afterwards, the `get_table` method will return None:

```
get_table(labels, viewer)
```

To add the table again, just call `add_table` again. Note that the content of the properties of the labels has not been changed.

```
add_table(labels, viewer)
napari.utils.nbscreenshot(viewer)
```
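The two append behaviors described above can be mimicked in plain pandas — a sketch of the merge semantics, not of napari-skimage-regionprops internals; the column names are made up:

```python
import pandas as pd

table = pd.DataFrame({"label": [1, 2], "area": [4.0, 6.0]})

# Case 1: a common column ("label") exists, so rows are matched on it
double = pd.DataFrame({"label": [1, 2], "double_area": [8.0, 12.0]})
merged = table.merge(double, on="label", how="outer")

# Case 2: no common column, so the new rows only fill their own columns
# and every non-intersecting cell becomes NaN
triple = pd.DataFrame({"triple_area": [12.0, 18.0]})
combined = pd.concat([merged, triple], ignore_index=True)
```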
# Regression with Neural Networks

Using different *loss functions* and *activation functions*, **neural networks** can effectively solve **regression** problems. This notebook studies the [California Housing](http://www.spatial-statistics.com/pace_manuscripts/spletters_ms_dir/statistics_prob_lets/html/ms_sp_lets1.html) example, where the goal is to predict the median value of a house from 8 attributes.

## Dataset overview

The `California Housing` dataset consists of 9 numerical variables: 8 are *features* and 1 is the target variable. The dataset was created in 1990 from the population census carried out by the US government. Its structure is simple: each line in the data file corresponds to a population **block** of between 600 and 3000 people. For each *block* we have 8 features of the houses and their median value. Using *neural networks*, we aim to predict the house value per block.
## Dataset attributes

This dataset has 8 *attributes*, described below with the labels used in the `scikit-learn` version of the dataset:

- **MedInc**, *median income per block*
- **HouseAge**, *median house age in the block*
- **AveRooms**, *average number of rooms per house in the block*
- **AveBedrms**, *average number of bedrooms per house in the block*
- **Population**, *total population of the block*
- **AveOccup**, *average occupancy per house in the block*
- **Latitude**, *latitude of the block*
- **Longitude**, *longitude of the block*

And the *response variable* is:

- **MedValue**, *median house value in the district*

```
import tensorflow as tf
from sklearn import datasets, metrics, model_selection, preprocessing
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
import pandas as pd

# Import the California Housing dataset
cali_data = datasets.fetch_california_housing()
```

## Data visualization

```
# General visualization of the relationships between the dataset's attributes
sns.pairplot(pd.DataFrame(cali_data.data, columns=cali_data.feature_names))
plt.show()
```

These figures reveal some interesting characteristics:

- *First*, all variables carry weight in the model. This means the regression model is weighted by every feature, and the model needs to be *robust* to this situation.
- *Second*, some features have a *linear* relationship with each other, such as **AveRooms** and **AveBedrms**. This can help discard features that carry little weight in the model and keep only those with the most influence. This part of *data processing* is known as **feature selection** and is a specific branch of *computational intelligence*.
- *Third*, the diagonal shows the *distribution* of each feature. This is worth studying, since some features follow known *distributions*, a fact that can be exploited with more advanced statistical techniques in **regression analysis**.

Nevertheless, throughout this notebook all 8 features are kept and weighted in the final model.

```
# Separate the data and standardize it
X = cali_data.data
y = cali_data.target

# Create the standardization transformer
std = preprocessing.StandardScaler()
X = std.fit_transform(X)
X = np.array(X).astype(np.float32)
# note: fit_transform refits the same scaler on y, which is fine here
# because the scaler is not reused afterwards
y = std.fit_transform(y.reshape(-1, 1))
y = np.array(y).astype(np.float32)
```

Since the data come in different units and scales, they should always be standardized in some way. In this notebook the data are normalized to have *mean* $\mu = 0$ and *standard deviation* $\sigma = 1$.

```
# Split into training and test sets
x_train, x_test, y_train, y_test = model_selection.train_test_split(
    X, y, test_size=0.2, random_state=49
)

# General parameters of the neural network
pasos_entrenamiento = 1000   # training steps
tam_lote = 30                # batch size
ratio_aprendizaje = 0.01     # learning rate
```

## Structure, or *topology*, of the neural network

This regression uses a *neural network* with **two hidden layers** and **ReLU** *activation functions*; the **first** hidden layer has 25 neurons and the **second** has 50. The **output layer** has *no* activation function, so the linear model takes the form

$$ \hat{y}(x) = \sum_{i=1}^{8} \alpha_i \cdot x_i + \beta_i$$

where $\alpha_i$ are the *weights* of the *output layer* and $\beta_i$ are the *biases*.
```
# Parameters for the general structure of the network
# Number of neurons per layer
n_capa_oculta_1 = 25
n_capa_oculta_2 = 50
n_entrada = X.shape[1]
n_salida = 1

# Define the inputs of the neural network
x_entrada = tf.placeholder(tf.float32, shape=[None, n_entrada])
y_entrada = tf.placeholder(tf.float32, shape=[None, n_salida])

# Dictionary of weights
pesos = {
    "o1": tf.Variable(tf.random_normal([n_entrada, n_capa_oculta_1])),
    "o2": tf.Variable(tf.random_normal([n_capa_oculta_1, n_capa_oculta_2])),
    "salida": tf.Variable(tf.random_normal([n_capa_oculta_2, n_salida])),
}

# Dictionary of biases
sesgos = {
    "b1": tf.Variable(tf.random_normal([n_capa_oculta_1])),
    "b2": tf.Variable(tf.random_normal([n_capa_oculta_2])),
    "salida": tf.Variable(tf.random_normal([n_salida])),
}

def propagacion_adelante(x):
    # Hidden layer 1
    # This is the same as Ax + b, a linear model
    capa_1 = tf.add(tf.matmul(x, pesos["o1"]), sesgos["b1"])
    # ReLU as activation function
    capa_1 = tf.nn.relu(capa_1)

    # Hidden layer 2
    # Again the same as Ax + b, a linear model
    capa_2 = tf.add(tf.matmul(capa_1, pesos["o2"]), sesgos["b2"])
    # ReLU as activation function
    capa_2 = tf.nn.relu(capa_2)

    # Output layer
    # Once more, a linear model
    capa_salida = tf.add(tf.matmul(capa_2, pesos["salida"]), sesgos["salida"])

    return capa_salida

# Build the model and its layers
y_prediccion = propagacion_adelante(x_entrada)
```

## Loss function

The [Huber loss](https://en.wikipedia.org/wiki/Huber_loss) is used, defined as

\begin{equation}
L_{\delta} \left( y, f(x) \right) =
\begin{cases}
\frac{1}{2} \left( y - f(x) \right)^2 & \text{for } \vert y - f(x) \vert \leq \delta, \\
\delta \vert y - f(x) \vert - \frac{1}{2} \delta^2 & \text{otherwise.}
\end{cases}
\end{equation}

This loss is [robust](https://en.wikipedia.org/wiki/Robust_regression): it is designed to reduce the influence of possible outliers and can find the true relationship between the features without resorting to parametric or non-parametric methodologies.

## Note

It is worth mentioning that the value of $\delta$ in the Huber loss is a **hyperparameter** that should be tuned via *cross-validation*; this is not done in this notebook because of hardware limitations and the running time of the notebook.

```
# Define the cost function
f_costo = tf.reduce_mean(tf.losses.huber_loss(y_entrada, y_prediccion, delta=2.0))
# f_costo = tf.reduce_mean(tf.square(y_entrada - y_prediccion))
optimizador = tf.train.AdamOptimizer(learning_rate=ratio_aprendizaje).minimize(f_costo)

# First, initialize the variables
init = tf.global_variables_initializer()

# Function to evaluate the regression accuracy
def precision(prediccion, real):
    return tf.sqrt(tf.losses.mean_squared_error(real, prediccion))
```

## Model accuracy

To evaluate the model's accuracy, the [RMSE](https://en.wikipedia.org/wiki/Root-mean-square_deviation) (Root Mean Squared Error) is used, defined by

$$ RMSE = \sqrt{\frac{\sum_{i=1}^{N} \left( \hat{y}_i - y_i \right)^2}{N}} $$

To build a better estimate, 5-fold cross-validation will be used.
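The RMSE formula above can be checked with a tiny NumPy example, independent of the TensorFlow graph (the four values are arbitrary):

```python
import numpy as np

y_real = np.array([3.0, -0.5, 2.0, 7.0])   # ground truth
y_hat  = np.array([2.5,  0.0, 2.0, 8.0])   # predictions

# RMSE = square root of the mean of the squared residuals
rmse = np.sqrt(np.mean((y_hat - y_real) ** 2))
```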
```
# Create the fold generator for the dataset
kf = model_selection.KFold(n_splits=5)
kf_val_score_train = []
kf_val_score_test = []

# Create a computation session
with tf.Session() as sess:
    # Initialize the variables
    sess.run(init)
    for tr_idx, ts_idx in kf.split(x_train):
        # Run the training steps using only the training portion of the fold
        for p in range(pasos_entrenamiento):
            # Minimize the cost function
            minimizacion = sess.run(
                optimizador,
                feed_dict={x_entrada: x_train[tr_idx], y_entrada: y_train[tr_idx]},
            )
            # Every batch-size steps, compute the model's accuracy
            if p % tam_lote == 0:
                prec_entrenamiento = sess.run(
                    precision(y_prediccion, y_entrada),
                    feed_dict={x_entrada: x_train[tr_idx], y_entrada: y_train[tr_idx]},
                )
                kf_val_score_train.append(prec_entrenamiento)
                prec_prueba = sess.run(
                    precision(y_prediccion, y_entrada),
                    feed_dict={x_entrada: x_train[ts_idx], y_entrada: y_train[ts_idx]},
                )
                kf_val_score_test.append(prec_prueba)

    # Final prediction, once the model is trained
    pred_final = sess.run(
        precision(y_prediccion, y_entrada),
        feed_dict={x_entrada: x_test, y_entrada: y_test},
    )
    pred_report = sess.run(y_prediccion, feed_dict={x_entrada: x_test})

print("Final RMSE (test): {0}".format(pred_final))
print("RMSE for training: {0}".format(np.mean(kf_val_score_train)))
print("RMSE for validation: {0}".format(np.mean(kf_val_score_test)))
```

Here the final *RMSE* value is shown for each part, training and test. There is very little overfitting; if one wanted to correct it, this could be done by increasing the number of neurons or layers, changing the activation functions, among many other things.
## First ML project ``` import pandas as pd housing = pd.read_csv("data.csv") housing.head() housing.info() housing['AGE'].value_counts() housing.describe() %matplotlib inline import matplotlib.pyplot as plt housing.hist(bins=50, figsize=(20,15)) import numpy as np def split_train_test(data , test_ratio): np.random.seed(42) # this fixes the shuffled value of data shuffled = np.random.permutation(len(data)) print(shuffled) test_set_size = int(len(data)*test_ratio) test_indices = shuffled[:test_set_size] train_indices = shuffled[test_set_size:] return data.iloc[train_indices],data.iloc[test_indices] train_set,test_set = split_train_test(housing , 0.2) print(f"Rows in train set:{len(train_set)}\nRows in test set:{len(test_set)}") from sklearn.model_selection import train_test_split train_set,test_set = train_test_split(housing , test_size=0.2,random_state=42) print(f"Rows in train set:{len(train_set)}\nRows in test set:{len(test_set)}") from sklearn.model_selection import StratifiedShuffleSplit split = StratifiedShuffleSplit(n_splits=1, test_size=0.2, random_state=42) for train_index, test_index in split.split(housing, housing['CHAS']): strat_train_set = housing.loc[train_index] strat_test_set = housing.loc[test_index] strat_test_set['AGE'].value_counts() strat_train_set['AGE'].value_counts() housing = strat_train_set.copy() ``` ## Finding Correlations ``` corr_matrix = housing.corr() corr_matrix['MEDV'].sort_values(ascending=False) from pandas.plotting import scatter_matrix attributes = ["MEDV","RM","ZN","LSTAT"] scatter_matrix(housing[attributes],figsize=[12,8]) housing.plot(kind="scatter",x="RM",y="MEDV",alpha=0.9) ``` ## Try out attributes ``` housing["TAXRM"]= housing["TAX"]/housing["RM"] housing.head() corr_matrix = housing.corr() corr_matrix['MEDV'].sort_values(ascending=False) housing.plot(kind="scatter",x="TAXRM",y="MEDV",alpha=0.9) housing = strat_train_set.drop("MEDV", axis=1) housing_labels = strat_train_set["MEDV"].copy() ``` ## Missing Attributes ``` # 
# To take care of missing attributes, you have three options:
# 1. Get rid of the missing data points
# 2. Get rid of the whole attribute
# 3. Set the value to some value (0, the mean, or the median)

a = housing.dropna(subset=["RM"])  # option 1: drop the rows with missing values
a.shape

housing.drop("RM", axis=1).shape  # option 2: drop the whole RM attribute column

median = housing["RM"].median()  # compute the median for option 3
housing["RM"].fillna(median)  # option 3

housing.describe()  # before imputing, RM has only 501 non-null values

from sklearn.impute import SimpleImputer
imputer = SimpleImputer(strategy="median")
imputer.fit(housing)
imputer.statistics_

X = imputer.transform(housing)
housing_tr = pd.DataFrame(X, columns=housing.columns)
housing_tr.describe()  # after imputing, RM has all 506 values
```

## Creating a pipeline

```
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

my_pipeline = Pipeline([
    ('imputer', SimpleImputer(strategy="median")),
    ('std_scaler', StandardScaler()),
])
housing_num_tr = my_pipeline.fit_transform(housing)
housing_num_tr.shape
```

## Selecting the model for Dragon Real Estate

```
from sklearn.linear_model import LinearRegression
from sklearn.tree import DecisionTreeRegressor
from sklearn.ensemble import RandomForestRegressor

model = RandomForestRegressor()
model.fit(housing_num_tr, housing_labels)

some_data = housing.iloc[:5]
some_labels = housing_labels.iloc[:5]
prepared_data = my_pipeline.transform(some_data)
model.predict(prepared_data)
list(some_labels)
```

## Evaluating the model

```
from sklearn.metrics import mean_squared_error

housing_predictions = model.predict(housing_num_tr)
mse = mean_squared_error(housing_labels, housing_predictions)
rmse = np.sqrt(mse)
rmse
```

## Using better evaluation techniques - CrossValidation

```
from sklearn.model_selection import cross_val_score

scores = cross_val_score(model, housing_num_tr, housing_labels,
                         scoring="neg_mean_squared_error", cv=10)
rmse_scores = np.sqrt(-scores)
```
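Note that `scoring="neg_mean_squared_error"` returns *negative* MSE (scikit-learn's "higher is better" convention), which is why the scores are negated before taking the square root. A small self-contained sketch of that convention, with a score-summary helper, on a synthetic dataset (the data here is a stand-in, not the housing set):

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

# Synthetic regression data standing in for the housing features
rng = np.random.RandomState(42)
X = rng.rand(100, 3)
y = X @ np.array([1.5, -2.0, 0.5]) + rng.normal(scale=0.1, size=100)

# neg_mean_squared_error yields negative values, so negate before sqrt
scores = cross_val_score(LinearRegression(), X, y,
                         scoring="neg_mean_squared_error", cv=5)
rmse_scores = np.sqrt(-scores)

def print_scores(scores):
    """Summarize cross-validation RMSE scores."""
    print("Scores:", scores)
    print("Mean:", scores.mean())
    print("Std deviation:", scores.std())

print_scores(rmse_scores)
```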
```
conda install pandas
conda install numpy
conda install matplotlib
pip install plotly

import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
import plotly.express as px
from scipy import stats
import warnings
warnings.filterwarnings("ignore")

df = pd.read_csv("insurance.csv")
df
df.info()
df.shape
df.columns
df.describe()
```

1. Import the data set, ‘insurance’. The column ‘charges’ should be considered as your target label.

```
X_df = df.drop("charges", axis=1)
X_df.shape
y_df = df["charges"]
y_df.shape
```

2. Explore the data using at least 3 data exploratory tools of your choosing in pandas and interpret your observation in a markdown cell of what form of predictive analysis can be conducted on the data.

```
# Total insurance charges
df["charges"].sum()

# Best region in terms of insurance sales
best_region = df.groupby(["region"]).sum().sort_values(by="charges")
best_region
```

3. Visualize the age distribution for the column ‘age’ and comment on the results in a markdown cell as well. (Ensure your visualization is of an appropriate size for effective analysis)

```
plt.hist(df["age"], bins=50, histtype="bar", rwidth=0.5)
plt.title("Visualisation of age")
plt.show()
```

Most of the people recorded were below the age of 20; above 20 years, the recorded ages were fairly evenly distributed.

4. Isolate all the continuous and discrete columns into their respective lists named ‘numerical_continuous’ and ‘numerical_discrete’ respectively.

```
df.nunique()

numerical_continuous = []
for column in df.columns:
    if df[column].dtypes != "object":
        if df[column].nunique() >= 10:
            numerical_continuous.append(column)
numerical_continuous.remove("charges")
numerical_continuous

numerical_discreet = []
for column in df.columns:
    if df[column].dtypes != "object":
        if df[column].nunique() < 10:
            numerical_discreet.append(column)
numerical_discreet
```

5.
Visually identify if there is presence of any outliers in the numerical_continuous columns and resolve them using a z-score test and a threshold of your choosing.

```
sns.boxplot(data=df[numerical_continuous], orient="v", palette="Oranges")

# Keep only rows whose z-score is below the threshold
# (3 is the conventional cut-off for outliers)
threshold = 3
zscore = np.abs(stats.zscore(df[["bmi"]]))
df[(zscore < threshold).all(axis=1)][numerical_continuous].plot(kind="box", figsize=(10, 5))
```

6. Validate that your analysis above was successful by visualizing the value distribution in the resulting columns using an appropriate visualization method.

```
df = df[(zscore < threshold).all(axis=1)]
df
plt.hist(df[numerical_continuous], bins=15, rwidth=0.5)
plt.show()
```

7. Isolate all the categorical column names into a list named ‘categorical’

```
categorical = []
for column in df.columns:
    if df[column].dtypes == "object":
        categorical.append(column)
categorical
```

8. Visually identify the outliers in the discrete and categorical features and resolve them using the combined rare levels method.

```
sns.boxplot(data=df[numerical_discreet], orient="v", palette="Oranges")

for column in numerical_discreet + categorical:
    (df[column].value_counts() / df.shape[0]).plot(kind="bar")
    plt.title(column)
    plt.show()

df["children"] = df["children"].replace([3, 4, 5], "Rare")
df["children"]
```

9. Encode the discrete and categorical features with one of the measures of central tendency of your choosing.

```
# Target encoding with the median (other options: mode, mean)
encoded_features = {}
for column in numerical_discreet + categorical:
    encoded_features[column] = df.groupby([column])["charges"].median().to_dict()
    df[column] = df[column].map(encoded_features[column])
```

10. Separate your features from the target appropriately. Narrow down the number of features to 5 using the most appropriate and accurate method. Which feature had to be dropped and what inference would you give as the main contributor of dropping the given feature?
```
X = df.drop("charges", axis=1)
y = df["charges"]

from sklearn.linear_model import LinearRegression
from sklearn.feature_selection import RFE
from sklearn.model_selection import train_test_split

model = LinearRegression()
rfe = RFE(model, n_features_to_select=5)
X_rfe = rfe.fit_transform(X, y)
model.fit(X_rfe, y)
print(pd.Series(rfe.support_, index=X.columns))
```

8) 1. Convert the target labels to their respective log values and give 2 reasons why this step may be useful as we train the machine learning model. (Explain in a markdown cell.)

```
from sklearn.model_selection import train_test_split, cross_val_score
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import mean_squared_error, mean_absolute_error, r2_score
from sklearn.linear_model import ElasticNet
from sklearn.tree import DecisionTreeRegressor
from sklearn.ensemble import BaggingRegressor, AdaBoostRegressor

y_log = np.log(y)
y_log
```

The log transform dampens the effect of outliers in the target column, and it makes the right-skewed charges distribution closer to normal, which suits regression models.

8) 2. Slice the selected feature columns and the labels into the training and testing set. Also ensure your features are normalized.
```
X_train, X_test, y_train, y_test = train_test_split(X_rfe, y_log, test_size=0.2, random_state=0)
```

8) 3. Use at least 4 different regression-based machine learning methods and use the training and testing cross-validation accuracy and divergence to identify the best model.

```
scaler = StandardScaler()
X_train = scaler.fit_transform(X_train)
X_test = scaler.transform(X_test)

regular_reg = ElasticNet()
dt_reg = DecisionTreeRegressor(random_state=0)
bag_reg = BaggingRegressor(random_state=0)
boost_reg = AdaBoostRegressor(random_state=0)

models = {'ElasticNet': regular_reg,
          'DecisionTreeRegressor': dt_reg,
          'BaggingRegressor': bag_reg,
          'AdaBoostRegressor': boost_reg}

def cross_valid(models, X, y, process='Training'):
    print(f'Process: {process}')
    for model_name, model in models.items():
        scores = cross_val_score(model, X, y, cv=5)
        print(f'Model: {model_name}')
        print(f'Cross validation mean score: {round(np.mean(scores), 4)}')
        print(f'Cross validation deviation: {round(np.std(scores), 4)}')
        print('\n')

cross_valid(models, X_train, y_train, process='Training')
cross_valid(models, X_test, y_test, process='Testing')
```

8) 4. After identifying the best model, train it with the training data again. Using at least 3 model evaluation metrics in regression, evaluate the model's training and testing score. Also ensure as you test the models, the predicted and actual targets have been converted back to the original values using the antilog. (Hint: the antilog function is the exponential)

```
bag_reg.fit(X_train, y_train)

def model_evaluation(model, X, y):
    y_predict = np.exp(model.predict(X))
    y = np.exp(y)
    print(f'Mean Squared Error: {mean_squared_error(y, y_predict)}')
    print(f'Mean Absolute Error: {mean_absolute_error(y, y_predict)}')
    print(f'R2 Score: {r2_score(y, y_predict)}')

model_evaluation(bag_reg, X_train, y_train)
model_evaluation(bag_reg, X_test, y_test)
```
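The log/antilog round trip above is the key pattern: train on `log(y)`, then apply `np.exp` to both predictions and targets before computing metrics so they are in the original units. A minimal self-contained sketch on synthetic data (the feature and coefficient values are made up for illustration):

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score

rng = np.random.RandomState(0)
X = rng.rand(200, 2)
# Positive, right-skewed target, standing in for insurance charges
y = np.exp(1.0 + 2.0 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.05, size=200))

# Train on log targets, then invert with the antilog (exponential)
model = LinearRegression().fit(X, np.log(y))
y_pred = np.exp(model.predict(X))

# Score in the original units, not in log space
score = r2_score(y, y_pred)
```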
```
import os
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline

data_path = '../results/results.csv'
df = pd.read_csv(data_path, delimiter='\t')

ray = df['Ray_et_al'].to_numpy()
matrixreduce = df['MatrixREDUCE'].to_numpy()
rnacontext = df['RNAcontext'].to_numpy()
deepbind = df['DeepBind'].to_numpy()
dlprb = df['DLPRB'].to_numpy()
rck = df['RCK'].to_numpy()
cdeepbind = df['cDeepbind'].to_numpy()
thermonet = df['ThermoNet'].to_numpy()
residualbind = df['ResidualBind'].to_numpy()
```

# Plot box-violin plot

```
names = ['Ray et al.', 'MatrixREDUCE', 'RNAcontext', 'DeepBind', 'DLPRB', 'RCK',
         'cDeepbind', 'ThermoNet', 'ResidualBind']
# keep the data in the same order as the labels in `names`
data = [ray, matrixreduce, rnacontext, deepbind, dlprb, rck,
        cdeepbind, thermonet, residualbind]

fig = plt.figure(figsize=(12, 5))
vplot = plt.violinplot(data, showextrema=False);

import matplotlib.cm as cm
cmap = cm.ScalarMappable(cmap='tab10')
test_mean = [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9]
for patch, color in zip(vplot['bodies'], cmap.to_rgba(test_mean)):
    patch.set_facecolor(color)
    patch.set_edgecolor('black')

medianprops = dict(color="red", linewidth=2)
bplot = plt.boxplot(data, notch=True, patch_artist=True, widths=0.2, medianprops=medianprops);
for patch, color in zip(bplot['boxes'], cmap.to_rgba(test_mean)):
    patch.set_facecolor(color)
    patch.set_edgecolor('black')

plt.xticks(range(1, len(names)+1), names, rotation=40, fontsize=14, ha='right');
ax = plt.gca();
plt.setp(ax.get_yticklabels(), fontsize=14)
plt.ylabel('Pearson correlation', fontsize=14);

plot_path = '../results/rnacompete_2013/'
outfile = os.path.join(plot_path, 'Performance_comparison.pdf')
fig.savefig(outfile, format='pdf', dpi=200, bbox_inches='tight')
```

# plot comparison between ResidualBind and ThermoNet

```
fig = plt.figure(figsize=(3,3))
ax = plt.subplot(111)
plt.hist(residualbind-thermonet,
bins=20); plt.setp(ax.get_yticklabels(),fontsize=14) plt.ylabel('Counts', fontsize=14); plt.setp(ax.get_xticklabels(),fontsize=14) plt.xlabel('$\Delta$ Pearson r', fontsize=14); plot_path = '../results/rnacompete_2013/' outfile = os.path.join(plot_path, 'Performance_comparison_hist.pdf') fig.savefig(outfile, format='pdf', dpi=200, bbox_inches='tight') from scipy import stats stats.wilcoxon(residualbind, thermonet) ``` # Compare performance based on binding score normalization and different input features ``` data_path = '../results/rnacompete_2013/clip_norm_seq_performance.tsv' df = pd.read_csv(data_path, delimiter='\t') clip_norm_seq = df['Pearson score'].to_numpy() data_path = '../results/rnacompete_2013/clip_norm_pu_performance.tsv' df = pd.read_csv(data_path, delimiter='\t') clip_norm_pu = df['Pearson score'].to_numpy() data_path = '../results/rnacompete_2013/clip_norm_struct_performance.tsv' df = pd.read_csv(data_path, delimiter='\t') clip_norm_struct = df['Pearson score'].to_numpy() data_path = '../results/rnacompete_2013/log_norm_seq_performance.tsv' df = pd.read_csv(data_path, delimiter='\t') log_norm_seq = df['Pearson score'].to_numpy() data_path = '../results/rnacompete_2013/log_norm_pu_performance.tsv' df = pd.read_csv(data_path, delimiter='\t') log_norm_pu = df['Pearson score'].to_numpy() data_path = '../results/rnacompete_2013/log_norm_struct_performance.tsv' df = pd.read_csv(data_path, delimiter='\t') log_norm_struct = df['Pearson score'].to_numpy() names = ['Clip-norm', 'Log-norm'] data = [clip_norm_seq, log_norm_seq] fig = plt.figure(figsize=(3,3)) vplot = plt.violinplot(data, showextrema=False); import matplotlib.cm as cm cmap = cm.ScalarMappable(cmap='viridis') test_mean = [0.1, 0.5, 0.9] for patch, color in zip(vplot['bodies'], cmap.to_rgba(test_mean)): patch.set_facecolor(color) patch.set_edgecolor('black') medianprops = dict(color="red",linewidth=2) bplot = plt.boxplot(data, notch=True, patch_artist=True, widths=0.2, medianprops=medianprops); 
for patch, color in zip(bplot['boxes'], cmap.to_rgba(test_mean)): patch.set_facecolor(color) patch.set_edgecolor('black') #patch.set(color=colors[i]) plt.xticks(range(1,len(names)+1), names, rotation=40, fontsize=14, ha='right'); ax = plt.gca(); plt.setp(ax.get_yticklabels(),fontsize=14) plt.ylabel('Pearson correlation', fontsize=14); plot_path = '../results/rnacompete_2013/' outfile = os.path.join(plot_path, 'Performance_comparison_clip_vs_log.pdf') fig.savefig(outfile, format='pdf', dpi=200, bbox_inches='tight') names = ['Sequence', 'Sequence + PU', 'Sequence + PHIME'] data = [clip_norm_seq, clip_norm_pu, clip_norm_struct] fig = plt.figure(figsize=(5,5)) vplot = plt.violinplot(data, showextrema=False); import matplotlib.cm as cm cmap = cm.ScalarMappable(cmap='viridis') test_mean = [0.1, 0.5, 0.9] for patch, color in zip(vplot['bodies'], cmap.to_rgba(test_mean)): patch.set_facecolor(color) patch.set_edgecolor('black') medianprops = dict(color="red",linewidth=2) bplot = plt.boxplot(data, notch=True, patch_artist=True, widths=0.2, medianprops=medianprops); for patch, color in zip(bplot['boxes'], cmap.to_rgba(test_mean)): patch.set_facecolor(color) patch.set_edgecolor('black') #patch.set(color=colors[i]) plt.xticks(range(1,len(names)+1), names, rotation=40, fontsize=14, ha='right'); ax = plt.gca(); plt.setp(ax.get_yticklabels(),fontsize=14) plt.ylabel('Pearson correlation', fontsize=14); plot_path = '../results/rnacompete_2013/' outfile = os.path.join(plot_path, 'Performance_comparison_clip_structure.pdf') fig.savefig(outfile, format='pdf', dpi=200, bbox_inches='tight') names = ['Sequence', 'Sequence + PU', 'Sequence + PHIME'] data = [log_norm_seq, log_norm_pu, log_norm_struct] fig = plt.figure(figsize=(5,3)) vplot = plt.violinplot(data, showextrema=False); import matplotlib.cm as cm cmap = cm.ScalarMappable(cmap='viridis') test_mean = [0.1, 0.5, 0.9] for patch, color in zip(vplot['bodies'], cmap.to_rgba(test_mean)): patch.set_facecolor(color) 
patch.set_edgecolor('black') medianprops = dict(color="red",linewidth=2) bplot = plt.boxplot(data, notch=True, patch_artist=True, widths=0.2, medianprops=medianprops); for patch, color in zip(bplot['boxes'], cmap.to_rgba(test_mean)): patch.set_facecolor(color) patch.set_edgecolor('black') #patch.set(color=colors[i]) plt.xticks(range(1,len(names)+1), names, rotation=40, fontsize=14, ha='right'); ax = plt.gca(); plt.setp(ax.get_yticklabels(),fontsize=14) plt.ylabel('Pearson correlation', fontsize=14); plot_path = '../results/rnacompete_2013/' outfile = os.path.join(plot_path, 'Performance_comparison_log_structure.pdf') fig.savefig(outfile, format='pdf', dpi=200, bbox_inches='tight') data = [clip_norm_seq, clip_norm_pu, clip_norm_struct, log_norm_seq, log_norm_pu, log_norm_struct] name = ['clip_norm_seq', 'clip_norm_pu', 'clip_norm_struct', 'log_norm_seq', 'log_norm_pu', 'log_norm_struct'] for n,x in zip(name, data): print(n, np.mean(x), np.std(x)) ``` # compare PHIME vs seq only ``` fig = plt.figure(figsize=(3,3)) ax = plt.subplot(111) plt.hist(clip_norm_seq-clip_norm_struct, bins=15) plt.setp(ax.get_yticklabels(),fontsize=14) plt.ylabel('Counts', fontsize=14); plt.setp(ax.get_xticklabels(),fontsize=14) plt.xlabel('$\Delta$ Pearson r', fontsize=14); plot_path = '../results/rnacompete_2013/' outfile = os.path.join(plot_path, 'Performance_comparison_hist_seq_vs_struct.pdf') fig.savefig(outfile, format='pdf', dpi=200, bbox_inches='tight') ``` # 2009 RNAcompete analysis ``` data_path = '../results/rnacompete_2009/log_norm_seq_performance.tsv' df = pd.read_csv(data_path, delimiter='\t') log_norm_seq = df['Pearson score'].to_numpy() data_path = '../results/rnacompete_2009/log_norm_pu_performance.tsv' df = pd.read_csv(data_path, delimiter='\t') log_norm_pu = df['Pearson score'].to_numpy() names = ['Sequence', 'Sequence + PU'] data = [log_norm_seq, log_norm_pu] fig = plt.figure(figsize=(5,5)) vplot = plt.violinplot(data, showextrema=False); import matplotlib.cm as cm cmap = 
cm.ScalarMappable(cmap='viridis') test_mean = [0.1, 0.5, 0.9] for patch, color in zip(vplot['bodies'], cmap.to_rgba(test_mean)): patch.set_facecolor(color) patch.set_edgecolor('black') medianprops = dict(color="red",linewidth=2) bplot = plt.boxplot(data, notch=True, patch_artist=True, widths=0.2, medianprops=medianprops); for patch, color in zip(bplot['boxes'], cmap.to_rgba(test_mean)): patch.set_facecolor(color) patch.set_edgecolor('black') #patch.set(color=colors[i]) plt.xticks(range(1,len(names)+1), names, rotation=40, fontsize=14, ha='right'); ax = plt.gca(); plt.setp(ax.get_yticklabels(),fontsize=14) plt.ylabel('Pearson correlation', fontsize=14); plot_path = '../results/rnacompete_2013/' outfile = os.path.join(plot_path, 'Performance_comparison_log_structure_2009.pdf') fig.savefig(outfile, format='pdf', dpi=200, bbox_inches='tight') ``` # Compare log vs clip as a scatter plot ``` data_path = '../data/RNAcompete_2013/rnacompete2013.h5' results_path = helper.make_directory('../results', 'rnacompete_2013') experiment = 'RNCMPT00169' rbp_index = helper.find_experiment_index(data_path, experiment) normalization = 'clip_norm' # 'log_norm' or 'clip_norm' ss_type = 'seq' # 'seq', 'pu', or 'struct' save_path = helper.make_directory(results_path, normalization+'_'+ss_type) # load rbp dataset train, valid, test = helper.load_rnacompete_data(data_path, ss_type=ss_type, normalization=normalization, rbp_index=rbp_index) # load residualbind model input_shape = list(train['inputs'].shape)[1:] weights_path = os.path.join(save_path, experiment + '_weights.hdf5') model = ResidualBind(input_shape, weights_path) # load pretrained weights model.load_weights() # get predictions for test sequences predictions_clip = model.predict(test['inputs']) y = test['targets'] fig = plt.figure(figsize=(3,3)) plt.scatter(predictions_clip, y, alpha=0.5, rasterized=True) plt.plot([-2,9],[-2,9],'--k') plt.xlabel('Predicted binding scores', fontsize=14) plt.ylabel('Experimental binding scores', 
fontsize=14) plt.xticks([-2, 0, 2, 4, 6, 8], fontsize=14) plt.yticks([-2, 0, 2, 4, 6, 8], fontsize=14) outfile = os.path.join(results_path, experiment+'_scatter_clip.pdf') fig.savefig(outfile, format='pdf', dpi=600, bbox_inches='tight') normalization = 'log_norm' # 'log_norm' or 'clip_norm' ss_type = 'seq' # 'seq', 'pu', or 'struct' save_path = helper.make_directory(results_path, normalization+'_'+ss_type) # load rbp dataset train, valid, test = helper.load_rnacompete_data(data_path, ss_type=ss_type, normalization=normalization, rbp_index=rbp_index) # load residualbind model input_shape = list(train['inputs'].shape)[1:] weights_path = os.path.join(save_path, experiment + '_weights.hdf5') model = ResidualBind(input_shape, weights_path) # load pretrained weights model.load_weights() # get predictions for test sequences predictions_log = model.predict(test['inputs']) y2 = test['targets'] fig = plt.figure(figsize=(3,3)) plt.scatter(predictions_log, y2, alpha=0.5, rasterized=True) plt.plot([-2,9],[-2,9],'--k') plt.xlabel('Predicted binding scores', fontsize=14) plt.ylabel('Experimental binding scores', fontsize=14) plt.xticks([-2, 0, 2, 4, 6, 8,], fontsize=14) plt.yticks([-2, 0, 2, 4, 6, 8], fontsize=14) outfile = os.path.join(results_path, experiment+'_scatter_log.pdf') fig.savefig(outfile, format='pdf', dpi=600, bbox_inches='tight') ```
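The `stats.wilcoxon` call used above tests whether the paired per-experiment score differences are symmetric around zero. A minimal sketch with hypothetical paired scores (not the notebook's actual results) showing the call and its return values:

```python
import numpy as np
from scipy import stats

rng = np.random.RandomState(1)
# Paired per-experiment scores for two hypothetical models,
# where model_b is consistently ~0.05 lower than model_a
model_a = rng.uniform(0.4, 0.9, size=50)
model_b = model_a - 0.05 + rng.normal(scale=0.01, size=50)

# Wilcoxon signed-rank test on the paired differences
statistic, p_value = stats.wilcoxon(model_a, model_b)
print(statistic, p_value)
```

A small p-value here indicates the paired difference between the two models is systematic rather than noise.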
## Fish classification

In this notebook the fish classification is done. We are going to classify in four classes: Tuna fish (TUNA), LAG, DOL and SHARK. The detector will save the cropped image of a fish. Here we will take this image and we will use a CNN to classify it.

In the original Kaggle competition there are six classes of fish: ALB, BET, YFT, DOL, LAG and SHARK. We started trying to classify them all, but three of them are very similar: ALB, BET and YFT. In fact, they are all different tuna species, while the other fishes come from different families. Therefore, the classification of those species was difficult and the results were not too good. We will make a small comparison of both on the presentation, but here we will only upload the classifier with four classes.

```
from PIL import Image
import tensorflow as tf
import numpy as np
import scipy
import os
import cv2
from sklearn.preprocessing import LabelEncoder
from sklearn.metrics import log_loss
from keras.utils import np_utils
from keras.models import Sequential
from keras.layers.convolutional import Convolution2D
from keras.layers.convolutional import MaxPooling2D
from keras.layers.core import Activation
from keras.layers.core import Flatten
from keras.layers.core import Dense
from keras.layers.core import Dropout
from keras import backend as K
import matplotlib.pyplot as plt

# Define some values and constants
fish_classes = ['TUNA','DOL','SHARK','LAG']
fish_classes_test = fish_classes
number_classes = len(fish_classes)
main_path_train = '../train_cut_oversample'
main_path_test = '../test'
channels = 3
ROWS_RESIZE = 100
COLS_RESIZE = 100
```

Now we read the data from the file where the fish detection part has stored the images. We also preprocess the images slightly to convert them to the same size (100x100). The aspect ratio of the images is important, so instead of just resizing the image, we have created the function resize(im). This function takes an image and resizes its longest side to 100, keeping the aspect ratio. In other words, the short side of the image will be smaller than 100 pixels. This image is pasted onto the middle of a white layer that is 100x100. So, our image will have white pixels on two of its sides. This is not optimal, but it is still better than changing the aspect ratio. We have also tried other colors, but the best results were achieved with white.

```
# Get data and preprocess it
def resize(image):
    rows = image.shape[0]
    cols = image.shape[1]
    dominant = max(rows, cols)
    ratio = ROWS_RESIZE/float(dominant)
    im_res = scipy.misc.imresize(image, ratio)
    rows = im_res.shape[0]
    cols = im_res.shape[1]
    im_res = Image.fromarray(im_res)
    layer = Image.new('RGB', [ROWS_RESIZE, COLS_RESIZE], (255, 255, 255))
    # integer division (//) keeps the paste offsets valid in Python 3
    if rows > cols:
        layer.paste(im_res, (COLS_RESIZE//2 - cols//2, 0))
    if cols > rows:
        layer.paste(im_res, (0, ROWS_RESIZE//2 - rows//2))
    if rows == cols:
        layer.paste(im_res, (0, 0))
    return np.array(layer)

X_train = []
y_labels = []
for classes in fish_classes:
    path_class = os.path.join(main_path_train, classes)
    y_class = np.tile(classes, len(os.listdir(path_class)))
    y_labels.extend(y_class)
    for image in os.listdir(path_class):
        path = os.path.join(path_class, image)
        im = scipy.misc.imread(path)
        im = resize(im)
        X_train.append(np.array(im))
X_train = np.array(X_train)

# Convert labels into one-hot vectors
y_labels = LabelEncoder().fit_transform(y_labels)
y_train = np_utils.to_categorical(y_labels)

X_test = []
y_test = []
for classes in fish_classes_test:
    path_class = os.path.join(main_path_test, classes)
    y_class = np.tile(classes, len(os.listdir(path_class)))
    y_test.extend(y_class)
    for image in os.listdir(path_class):
        path = os.path.join(path_class, image)
        im = scipy.misc.imread(path)
        im = resize(im)
        X_test.append(np.array(im))
X_test = np.array(X_test)

# Convert labels into one-hot vectors
y_test = LabelEncoder().fit_transform(y_test)
y_test = np_utils.to_categorical(y_test)

X_train =
np.reshape(X_train, (X_train.shape[0], ROWS_RESIZE, COLS_RESIZE, channels))
X_test = np.reshape(X_test, (X_test.shape[0], ROWS_RESIZE, COLS_RESIZE, channels))

print('X_train shape: ', X_train.shape)
print('y_train shape: ', y_train.shape)
print('X_test shape: ', X_test.shape)
print('y_test shape: ', y_test.shape)
```

The data is now organized in the following way:

- The training has been done with 23581 images of size 100x100x3 (RGB).
- There are 4 possible classes: LAG, SHARK, DOL and TUNA.
- The test has been done with 400 images of the same size, 100 per class.

We are now ready to build and train the classifier. The CNN has 7 convolutional layers, 4 pooling layers and three fully connected layers at the end. Dropout has been used in the fully connected layers to avoid overfitting. The loss function used is multi-class log loss because it is the one used by Kaggle in the competition. The optimizer is gradient descent.

```
def center_normalize(x):
    return (x - K.mean(x)) / K.std(x)

# Convolutional net
model = Sequential()
model.add(Activation(activation=center_normalize, input_shape=(ROWS_RESIZE, COLS_RESIZE, channels)))
model.add(Convolution2D(6, 20, 20, border_mode='same', activation='relu', dim_ordering='tf'))
model.add(MaxPooling2D(pool_size=(2,2), dim_ordering='tf'))
model.add(Convolution2D(12, 10, 10, border_mode='same', activation='relu', dim_ordering='tf'))
model.add(Convolution2D(12, 10, 10, border_mode='same', activation='relu', dim_ordering='tf'))
model.add(MaxPooling2D(pool_size=(2,2), dim_ordering='tf'))
model.add(Convolution2D(24, 5, 5, border_mode='same', activation='relu', dim_ordering='tf'))
model.add(Convolution2D(24, 5, 5, border_mode='same', activation='relu', dim_ordering='tf'))
model.add(MaxPooling2D(pool_size=(2,2), dim_ordering='tf'))
model.add(Convolution2D(24, 5, 5, border_mode='same', activation='relu', dim_ordering='tf'))
model.add(Convolution2D(24, 5, 5, border_mode='same', activation='relu', dim_ordering='tf'))
model.add(MaxPooling2D(pool_size=(2,2), dim_ordering='tf'))
model.add(Flatten())
model.add(Dense(4092, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(1024, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(number_classes))
model.add(Activation('softmax'))

print(model.summary())

model.compile(optimizer='sgd', loss='categorical_crossentropy', metrics=['accuracy'])
model.fit(X_train, y_train, nb_epoch=1, verbose=1)
```

Since there are a lot of images the training takes around one hour. Once it is done we can pass the test set to the classifier and measure its accuracy.

```
(loss, accuracy) = model.evaluate(X_test, y_test, verbose=1)
print('accuracy', accuracy)
```
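Note that `scipy.misc.imresize` used in `resize()` above was removed in SciPy 1.3. A Pillow-only sketch of the same aspect-preserving resize-onto-white-canvas idea (names and target size chosen to mirror the notebook; this is an alternative, not the notebook's exact code):

```python
import numpy as np
from PIL import Image

TARGET = 100  # target square size, as in the notebook

def resize_keep_aspect(arr, target=TARGET):
    """Resize the longest side to `target` and paste the result
    centered on a white target x target canvas."""
    im = Image.fromarray(arr)
    ratio = target / max(im.size)
    new_size = (max(1, round(im.width * ratio)), max(1, round(im.height * ratio)))
    im = im.resize(new_size)
    canvas = Image.new('RGB', (target, target), (255, 255, 255))
    # integer // keeps the paste offset an int, as Pillow requires
    canvas.paste(im, ((target - im.width) // 2, (target - im.height) // 2))
    return np.array(canvas)

# A 50x200 black image becomes a 100x100 canvas with white bands top and bottom
out = resize_keep_aspect(np.zeros((50, 200, 3), dtype=np.uint8))
```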
# Uptake of carbon, heat, and oxygen Plotting a global map of carbon, heat, and oxygen uptake ``` from dask.distributed import Client client = Client("tcp://10.32.15.112:32829") client %matplotlib inline import xarray as xr import intake import numpy as np from cmip6_preprocessing.preprocessing import read_data from cmip6_preprocessing.parse_static_metrics import parse_static_thkcello from cmip6_preprocessing.preprocessing import rename_cmip6 import warnings import matplotlib.pyplot as plt # util.py is in the local directory # it contains code that is common across project notebooks # or routines that are too extensive and might otherwise clutter # the notebook design import util def _compute_slope(y): """ Private function to compute slopes at each grid cell using polyfit. """ x = np.arange(len(y)) return np.polyfit(x, y, 1)[0] # return only the slope def compute_slope(da): """ Computes linear slope (m) at each grid cell. Args: da: xarray DataArray to compute slopes for Returns: xarray DataArray with slopes computed at each grid cell. """ # apply_ufunc can apply a raw numpy function to a grid. # # vectorize is only needed for functions that aren't already # vectorized. You don't need it for polyfit in theory, but it's # good to use when using things like np.cov. # # dask='parallelized' parallelizes this across dask chunks. It requires # an output_dtypes of the numpy array datatype coming out. # # input_core_dims should pass the dimension that is being *reduced* by this operation, # if one is being reduced. 
    slopes = xr.apply_ufunc(_compute_slope,
                            da,
                            vectorize=True,
                            dask='parallelized',
                            input_core_dims=[['time']],
                            output_dtypes=[float],
                            )
    return slopes

if util.is_ncar_host():
    col = intake.open_esm_datastore("../catalogs/glade-cmip6.json")
else:
    col = intake.open_esm_datastore("../catalogs/pangeo-cmip6_update_2019_10_18.json")

cat = col.search(experiment_id=['historical'], table_id='Omon',
                 variable_id=['dissic'], grid_label='gr')

import pprint
uni_dict = col.unique(['source_id', 'experiment_id', 'table_id'])
#pprint.pprint(uni_dict, compact=True)
models = set(uni_dict['source_id']['values'])  # all the models

for experiment_id in ['historical']:
    query = dict(experiment_id=experiment_id, table_id=['Omon','Ofx'],
                 variable_id=['dissic'], grid_label=['gn','gr'])
    cat = col.search(**query)
    models = models.intersection({model for model in cat.df.source_id.unique().tolist()})

# for oxygen, ensure the CESM2 models are not included (oxygen was erroneously submitted to the archive)
# UKESM has an issue with the attributes
models = models - {'UKESM1-0-LL','GISS-E2-1-G-CC','GISS-E2-1-G','MCM-UA-1-0'}
models = list(models)
models

# read all data with thickness and DIC for DIC storage
with warnings.catch_warnings():
    # these lines just make sure that the warnings dont clutter your notebook
    warnings.simplefilter("ignore")
    data_dict_thk = read_data(col, experiment_id=['historical'], grid_label='gn',
                              variable_id=['thkcello','dissic'],
                              table_id = ['Omon'],
                              source_id = models,
                              #member_id = 'r1i1p1f1', # so that this runs faster for testing
                              required_variable_id = ['thkcello','dissic'])
#data_dict_thk['IPSL-CM6A-LR'] = data_dict_thk['IPSL-CM6A-LR'].rename({'olevel':'lev'})

# read all DIC data (no thickness requirement)
with warnings.catch_warnings():
    # these lines just make sure that the warnings dont clutter your notebook
    warnings.simplefilter("ignore")
    data_dict_dic = read_data(col, experiment_id=['historical'], grid_label='gn',
                              variable_id=['dissic'],
                              table_id = ['Omon'],
                              source_id = models,
                              #member_id = 'r1i1p1f1', # so that this runs faster for testing
                              required_variable_id = ['dissic'])

data_dict_dic['IPSL-CM6A-LR'] = data_dict_dic['IPSL-CM6A-LR'].rename({'olevel_bounds':'lev_bounds'})
data_dict_dic['MIROC-ES2L'] = data_dict_dic['MIROC-ES2L'].rename({'zlev_bnds':'lev_bounds'})

data_dict_dic_thk = {k: parse_static_thkcello(ds) for k, ds in data_dict_dic.items()}
```

### Loading data

`intake-esm` enables loading data directly into an [xarray.Dataset](http://xarray.pydata.org/en/stable/api.html#dataset). Note that data on the cloud are in [zarr](https://zarr.readthedocs.io/en/stable/) format and data on [glade](https://www2.cisl.ucar.edu/resources/storage-and-file-systems/glade-file-spaces) are stored as [netCDF](https://www.unidata.ucar.edu/software/netcdf/) files. This is opaque to the user. `intake-esm` has rules for aggregating datasets; these rules are defined in the collection-specification file.
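The `_compute_slope` helper defined earlier reduces the time dimension with a degree-1 `np.polyfit`. A minimal NumPy-only check of the convention used throughout this notebook (monthly slope, then multiplied by 12 to get a per-year trend):

```python
import numpy as np

def _compute_slope(y):
    """Slope of y against its integer index, as in the notebook helper."""
    x = np.arange(len(y))
    return np.polyfit(x, y, 1)[0]  # return only the slope

# Ten years of monthly values rising 0.01 per month
monthly = 0.01 * np.arange(120) + 5.0
slope_per_month = _compute_slope(monthly)
slope_per_year = slope_per_month * 12  # the *12 scaling used in the plots
```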
```
#cat = col.search(experiment_id=['historical'], table_id='Omon',
#                 variable_id=['dissic'], grid_label='gn', source_id=models)
#dset_dict_dic_gn = cat.to_dataset_dict(zarr_kwargs={'consolidated': True, 'decode_times': False},
#                                       cdf_kwargs={'chunks': {'time': 20}, 'decode_times': False})
```

### Plotting DIC storage

```
data_dict_dic.keys()

fig, ax = plt.subplots(ncols=3, nrows=2, figsize=[15, 10])
A = 0
for model_key in data_dict_dic.keys():
    dsC = data_dict_dic[model_key]
    ds = dsC['dissic'].isel(lev=0).chunk({'time': -1, 'x': 110, 'y': 110, 'member_id': 10})
    #dz = dsC['thkcello'].isel(member_id=0)
    #DICstore_slope = (ds.isel(time=-np.arange(10*12)).mean('time')*dz - ds.isel(time=np.arange(10*12)).mean('time')*dz).sum('lev')
    slope = compute_slope(ds)
    slope = slope.compute()
    slope = slope.mean('member_id')*12  # surface trend in mol/m^3 per year
    A1 = int(np.floor(A/3))
    A2 = np.mod(A,3)
    slope.plot(ax=ax[A1][A2], vmax=0.001)
    ax[A1][A2].title.set_text(model_key)
    A += 1
fig.tight_layout()
fig.savefig('rate_of_change_DIC_surface_historical.png')

fig, ax = plt.subplots(ncols=3, nrows=2, figsize=[15, 10])
A = 0
for model_key in data_dict_thk.keys():
    dsC = data_dict_thk[model_key]
    ds = dsC['dissic']
    dz = dsC['thkcello']
    DICstore = (ds*dz).sum('lev').chunk({'time': -1, 'x': 110, 'y': 110, 'member_id': 10})
    slope = compute_slope(DICstore)
    slope = slope.compute()
    slope = slope.mean('member_id')*12  # depth-integrated trend in mol/m^2 per year
    A1 = int(np.floor(A/3))
    A2 = np.mod(A,3)
    slope.plot(ax=ax[A1][A2], vmax=0.8)
    ax[A1][A2].title.set_text(model_key)
    A += 1
fig.tight_layout()
fig.savefig('rate_of_change_DIC_content_historical.png')
```

# Load heat content

```
cat = col.search(experiment_id=['historical'], table_id='Omon',
                 variable_id=['thetao','thkcello'], grid_label='gn')

import pprint
uni_dict = col.unique(['source_id', 'experiment_id', 'table_id'])
#pprint.pprint(uni_dict, compact=True)
models = set(uni_dict['source_id']['values'])  # all the models

for experiment_id in ['historical']:
    query =
dict(experiment_id=experiment_id, table_id=['Omon','Ofx'], variable_id=['thetao','thkcello'], grid_label='gn')
    cat = col.search(**query)
    models = models.intersection({model for model in cat.df.source_id.unique().tolist()})

# HadGEM3 and UKESM have issues with the attributes
models = models - {'HadGEM3-GC31-LL','UKESM1-0-LL'} #{'UKESM1-0-LL','GISS-E2-1-G-CC','GISS-E2-1-G','MCM-UA-1-0'}
models = list(models)
models

# read all data with thickness and temperature for heat content
with warnings.catch_warnings():
    # these lines just make sure that the warnings dont clutter your notebook
    warnings.simplefilter("ignore")
    data_dict_heat_thk = read_data(col, experiment_id=['historical'], grid_label='gn',
                                   variable_id=['thkcello','thetao'],
                                   table_id = ['Omon'],
                                   source_id = models,
                                   #member_id = 'r1i1p1f1', # so that this runs faster for testing
                                   required_variable_id = ['thkcello','thetao'])
#data_dict_heat_thk['IPSL-CM6A-LR'] = data_dict_heat_thk['IPSL-CM6A-LR'].rename({'olevel':'lev'})
```

# Plot heat content

```
data_dict_heat_thk.keys()

fig, ax = plt.subplots(ncols=3, nrows=2, figsize=[15, 10])
A = 0
for model_key in data_dict_heat_thk.keys():
    dsC = data_dict_heat_thk[model_key]
    ds = (dsC['thetao']+273.15)*4.15*1e6/1025  # heat content (assume constant density and heat capacity)
    dz = dsC['thkcello'].isel(member_id=0)
    heat_store = (ds*dz).sum('lev').chunk({'time': -1, 'x': 110, 'y': 110, 'member_id': 10})
    slope = compute_slope(heat_store)
    slope = slope.compute()
    slope = slope.mean('member_id')*12  # depth-integrated heat trend per year
    A1 = int(np.floor(A/3))
    A2 = np.mod(A,3)
    slope.plot(ax=ax[A1][A2], vmax=80000)
    ax[A1][A2].title.set_text(model_key)
    A += 1
fig.tight_layout()
fig.savefig('rate_of_change_heat_content_historical.png')
```

# Load oxygen content

```
cat = col.search(experiment_id=['piControl'], table_id='Omon',
                 variable_id=['o2','thkcello'], grid_label='gn')

import pprint
uni_dict = col.unique(['source_id',
'experiment_id', 'table_id']) #pprint.pprint(uni_dict, compact=True) models = set(uni_dict['source_id']['values']) # all the models for experiment_id in ['historical']: query = dict(experiment_id=experiment_id, table_id=['Omon','Ofx'], variable_id=['o2','thkcello'], grid_label='gn') cat = col.search(**query) models = models.intersection({model for model in cat.df.source_id.unique().tolist()}) # for oxygen, ensure the CESM2 models are not included (oxygen was erroneously submitted to the archive) # UKESM has an issue with the attributes models = models - {'UKESM1-0-LL'} #{'UKESM1-0-LL','GISS-E2-1-G-CC','GISS-E2-1-G','MCM-UA-1-0'} models = list(models) models # read all data with thickness and o2 for o2 content with warnings.catch_warnings(): # these lines just make sure that the warnings dont clutter your notebook warnings.simplefilter("ignore") data_dict_o2_thk = read_data(col, experiment_id=['historical'], grid_label='gn', variable_id=['thkcello','o2'], table_id = ['Omon'], source_id = models, #member_id = 'r1i1p1f1', # so that this runs faster for testing required_variable_id = ['thkcello','o2'] ) #data_dict_o2_thk['IPSL-CM6A-LR'] = data_dict_o2_thk['IPSL-CM6A-LR'].rename({'olevel':'lev'}) ``` # Plot O2 content ``` data_dict_o2_thk.keys() fig, ax = plt.subplots(ncols=2, nrows=1,figsize=[10, 5]) A = 0 for model_key in data_dict_o2_thk.keys(): dsC = data_dict_o2_thk[model_key] ds = dsC['o2'] dz = dsC['thkcello'].isel(member_id=0) DICstore = (ds*dz).sum('lev').chunk({'time': -1, 'x': 110, 'y': 110, 'member_id': 10}) slope = compute_slope(DICstore) slope = slope.compute() slope = slope.mean('member_id')*12 # in mol/m^3/year slope.plot(ax = ax[A],vmax = 0.8) ax[A].title.set_text(model_key) A += 1 fig.tight_layout() fig.savefig('rate_of_change_o2_content_historical.png') ```
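Both cells above rely on a `compute_slope` helper that is not defined in this excerpt. Assuming it performs a per-gridcell least-squares fit along the time axis, a minimal NumPy sketch (without the xarray/dask chunking used in the notebook) looks like this:

```python
import numpy as np

def linear_trend(data, t):
    """Least-squares slope along the leading (time) axis.

    data: array of shape (nt, ny, nx); t: 1-D time coordinate.
    Returns an (ny, nx) array of slopes, in data units per unit of t.
    """
    nt = data.shape[0]
    flat = data.reshape(nt, -1)            # (nt, npoints)
    slope, _ = np.polyfit(t, flat, deg=1)  # fits every column at once
    return slope.reshape(data.shape[1:])

# Toy check: a field that grows by 2 units per time step everywhere
t = np.arange(10, dtype=float)
field = 2.0 * t[:, None, None] + np.ones((10, 3, 4))
trend = linear_trend(field, t)
```

The notebook then multiplies the monthly slope by 12 to convert a per-month rate into a per-year rate; and because the fields are multiplied by layer thickness `thkcello` and summed over `lev`, the trends are column-integrated, i.e. per unit area per year.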
github_jupyter
<a href="https://colab.research.google.com/github/wel51x/DS-Unit-1-Sprint-4-Statistical-Tests-and-Experiments/blob/master/Winston_Lee_DS_Unit_1_Sprint_Challenge_4.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> # Data Science Unit 1 Sprint Challenge 4 ## Exploring Data, Testing Hypotheses In this sprint challenge you will look at a dataset of people being approved or rejected for credit. https://archive.ics.uci.edu/ml/datasets/Credit+Approval Data Set Information: This file concerns credit card applications. All attribute names and values have been changed to meaningless symbols to protect confidentiality of the data. This dataset is interesting because there is a good mix of attributes -- continuous, nominal with small numbers of values, and nominal with larger numbers of values. There are also a few missing values. Attribute Information: - A1: b, a. - A2: continuous. - A3: continuous. - A4: u, y, l, t. - A5: g, p, gg. - A6: c, d, cc, i, j, k, m, r, q, w, x, e, aa, ff. - A7: v, h, bb, j, n, z, dd, ff, o. - A8: continuous. - A9: t, f. - A10: t, f. - A11: continuous. - A12: t, f. - A13: g, p, s. - A14: continuous. - A15: continuous. - A16: +,- (class attribute) Yes, most of that doesn't mean anything. A16 (the class attribute) is the most interesting, as it separates the 307 approved cases from the 383 rejected cases. The remaining variables have been obfuscated for privacy - a challenge you may have to deal with in your data science career. Sprint challenges are evaluated based on satisfactory completion of each part. It is suggested you work through it in order, getting each aspect reasonably working, before trying to deeply explore, iterate, or refine any given step. Once you get to the end, if you want to go back and improve things, go for it! ## Part 1 - Load and validate the data - Load the data as a `pandas` data frame. 
- Validate that it has the appropriate number of observations (you can check the raw file, and also read the dataset description from UCI). - UCI says there should be missing data - check, and if necessary change the data so pandas recognizes it as na - Make sure that the loaded features are of the types described above (continuous values should be treated as float), and correct as necessary This is review, but skills that you'll use at the start of any data exploration. Further, you may have to do some investigation to figure out which file to load from - that is part of the puzzle. ``` # TODO # imports & defaults import pandas as pd import numpy as np import random import matplotlib.pyplot as plt import seaborn as sns import scipy.stats as stats from scipy.stats import chisquare pd.set_option('display.width', 162) # Load data, changing ? to na headers = ["A1",'A2','A3','A4','A5','A6','A7','A8','A9','A10','A11','A12','A13','A14','A15','A16'] df = pd.read_csv("https://archive.ics.uci.edu/ml/machine-learning-databases/credit-screening/crx.data", na_values='?', names = headers) df.describe(include='all') df.isnull().sum() # Replace nulls randomly from other values in column df = df.apply(lambda x: np.where(x.isnull(), x.dropna().sample(len(x), replace=True), x)) df.dtypes # change A11, A15 to float df['A11'] = df['A11'].astype(float) df['A15'] = df['A15'].astype(float) # change A16: '-' => 0, '+' => 1 df['A16'] = df['A16'].replace("+", 1) df['A16'] = df['A16'].replace('-', 0) df.describe(include='all') ``` ## Part 2 - Exploring data, Testing hypotheses The only thing we really know about this data is that A16 is the class label. Besides that, we have 6 continuous (float) features and 9 categorical features. Explore the data: you can use whatever approach (tables, utility functions, visualizations) to get an impression of the distributions and relationships of the variables. 
In general, your goal is to understand how the features are different when grouped by the two class labels (`+` and `-`). For the 6 continuous features, how are they different when split between the two class labels? Choose two features to run t-tests (again split by class label) - specifically, select one feature that is *extremely* different between the classes, and another feature that is notably less different (though perhaps still "statistically significantly" different). You may have to explore more than two features to do this. For the categorical features, explore by creating "cross tabs" (aka [contingency tables](https://en.wikipedia.org/wiki/Contingency_table)) between them and the class label, and apply the Chi-squared test to them. [pandas.crosstab](http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.crosstab.html) can create contingency tables, and [scipy.stats.chi2_contingency](https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.chi2_contingency.html) can calculate the Chi-squared statistic for them. There are 9 categorical features - as with the t-test, try to find one where the Chi-squared test returns an extreme result (rejecting the null that the data are independent), and one where it is less extreme. **NOTE** - "less extreme" just means smaller test statistic/larger p-value. Even the least extreme differences may be strongly statistically significant. Your *main* goal is the hypothesis tests, so don't spend too much time on the exploration/visualization piece. That is just a means to an end - use simple visualizations, such as boxplots or a scatter matrix (both built in to pandas), to get a feel for the overall distribution of the variables. This is challenging, so manage your time and aim for a baseline of at least running two t-tests and two Chi-squared tests before polishing. And don't forget to answer the questions in part 3, even if your results in this part aren't what you want them to be. 
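Before the solution below, it may help to see what `stats.ttest_ind(..., equal_var=False)` actually computes. This is a minimal pure-Python sketch of Welch's t statistic, on hypothetical data:

```python
import math

def welch_t(a, b):
    """Welch's two-sample t statistic (unequal variances),
    matching scipy.stats.ttest_ind(a, b, equal_var=False)."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)  # sample variances
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    return (ma - mb) / math.sqrt(va / na + vb / nb)

# Hypothetical data: a clearly separated pair vs. an overlapping pair
t_big = welch_t([5.0, 6.0, 5.5, 6.5], [1.0, 1.5, 0.5, 1.2])
t_small = welch_t([5.0, 6.0, 5.5, 6.5], [5.2, 5.9, 5.4, 6.1])
```

A large |t| (well-separated group means relative to their spread) drives the p-value toward zero; the overlapping pair gives a t near zero.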
``` # TODO # Create lists for continuous & categorical vars #cont = list(df.select_dtypes(include=['float64'])) continuous_col_list = ['A16', 'A2', 'A3', 'A8', 'A11', 'A14', 'A15'] categorical_col_list = [obj for obj in list(df) if obj not in continuous_col_list] categorical_col_list = ['A16'] + categorical_col_list #print(continuous_col_list, categorical_col_list) # Now create cont df df_continuous = df[continuous_col_list] df_continuous.describe(include='all') # and categ df df_categorical = df[categorical_col_list] df_categorical.describe(include='all') # Subset continuous for reject/accept df_continuous_rej = df_continuous[df_continuous['A16'] == 0].drop('A16', axis = 1) df_continuous_acc = df_continuous[df_continuous['A16'] == 1].drop('A16', axis = 1) print("Rejected") print(df_continuous_rej.describe(include='all')) print("Accepted") print(df_continuous_acc.describe(include='all')) # Same for categ df_categorical_rej = df_categorical[df_categorical['A16'] == 0].drop('A16', axis = 1) df_categorical_acc = df_categorical[df_categorical['A16'] == 1].drop('A16', axis = 1) print("Rejected") print(df_categorical_rej.describe(include='all')) print("Accepted") print(df_categorical_acc.describe(include='all')) g = sns.PairGrid(data=df, hue='A16') g.map(plt.scatter) # I'm wrong...these don't produce much of interest for i in categorical_col_list: df_categorical[i].value_counts().plot(kind='hist') plt.title(i) plt.show() # Continuous tests for col in continuous_col_list[1:]: t_stat, p_val = stats.ttest_ind(df_continuous_acc[col], df_continuous_rej[col], equal_var = False) print(col, "has t-statistic =", t_stat, "and pvalue =", p_val, "when comparing accepted vs rejected") # Categorical tests df_categorical.sample(11) for col in categorical_col_list[1:]: xtab = pd.crosstab(df_categorical["A16"], df_categorical[col]) ar = np.array(xtab).T chi_stat, p_val = chisquare(ar, axis=None) print(col, "has chi statistic", chi_stat, "and p_value", p_val) ``` ## Part 3 - Analysis and 
Interpretation

Now that you've looked at the data, answer the following questions:

- Interpret and explain the two t-tests you ran - what do they tell you about the relationships between the continuous features you selected and the class labels?
- Interpret and explain the two Chi-squared tests you ran - what do they tell you about the relationships between the categorical features you selected and the class labels?
- What was the most challenging part of this sprint challenge?

Answer with text, but feel free to intersperse example code/results or refer to it from earlier.

**T-tests**

I ran stats.ttest_ind() on all the continuous variables, comparing accepted applicants (col A16 = '+') against rejected ones ('-').

- A14, with t-statistic = -2.696 and p-value = 0.007, shows the weakest relationship of the six - though with p < .01, that is still enough to reject the null hypothesis that the two group means are equal.
- A11, with t-statistic = 10.638 and p-value = 4.310e-23, differs most strongly between the classes.
- A8, with t-statistic = 8.380 and p-value = 7.425e-16, also differs strongly between accepted and rejected applicants.

Essentially all six continuous variables differ, to a greater or lesser degree, between accepted and rejected applicants. Inspecting the data seems to confirm a relationship for columns A8, A11 and A15, which have rejected means of 1.257, 0.631 and 198.605, and accepted means of 3.427, 4.605 and 2038.859, respectively.

**Chi-squared tests**

I ran scipy.stats.chisquare() on all the categorical variables.

- A12 had the lowest statistic (14.382), with p-value 0.002.
- A13 had the highest statistic (1037.269), with p-value 5.124e-222.
- A7 has chi statistic 1899.757 with a reported p-value of 0.0 - not an error, just a value so small it underflows floating-point precision.

Looking at the data, it appears that columns A9 and, to a lesser degree, A10 (both binary t/f items) make a difference as to whether one is accepted or rejected.
For A9, 306 of 383 rejects had a value of 'f', whereas 284 of 307 accepts had a value of 't'. For A10 the comparative figures are 'f': 297/383 for rejects and 't': 209/307 for accepts. **What was the most challenging part of this sprint challenge?** Realizing that data without context is a pain. For example, I suspect A1 is sex, A2 is age, A14 is level of debt and A15 level of assets or income. Also, I didn't feel I got much intelligence - no comments from the peanut gallery, please - from the ChiSq tests.
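One caveat on the Chi-squared results above: `scipy.stats.chisquare(ar, axis=None)` on a flattened crosstab tests goodness-of-fit against uniform cell counts, not independence of rows and columns; `scipy.stats.chi2_contingency` is the independence test the assignment describes. A minimal pure-Python sketch of the independence statistic, applied to the A9 counts reported above (scipy's 2x2 version additionally applies the Yates continuity correction by default, which this sketch omits):

```python
def chi2_independence(table):
    """Pearson chi-squared statistic for an r x c contingency table,
    testing independence of rows and columns."""
    row_tot = [sum(row) for row in table]
    col_tot = [sum(col) for col in zip(*table)]
    n = sum(row_tot)
    stat = 0.0
    for i, row in enumerate(table):
        for j, obs in enumerate(row):
            exp = row_tot[i] * col_tot[j] / n  # expected under independence
            stat += (obs - exp) ** 2 / exp
    return stat

# A9 counts from above: rows = class (rejected, accepted), cols = (f, t)
table = [[306, 77], [23, 284]]
stat = chi2_independence(table)
```

A statistic this large, with 1 degree of freedom, corresponds to a vanishingly small p-value - consistent with A9 being strongly associated with the class label.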
## Getting the Data from Kaggle Using the Kaggle API ``` #!kaggle competitions download -c titanic # Unzip the folder #!unzip 'titanic.zip' -d data/titanic/ ``` # Setup ``` # Load the train file to pandas import pandas as pd import numpy as np import missingno as msno from collections import Counter import re import seaborn as sns from sklearn.model_selection import train_test_split from sklearn.model_selection import cross_val_score from sklearn.model_selection import GridSearchCV from sklearn.ensemble import RandomForestClassifier from sklearn.metrics import accuracy_score from sklearn.svm import SVC from subprocess import check_output sns.set(style='white', context='notebook', palette='deep') import matplotlib.pyplot as plt %matplotlib inline sns.set() ``` # Load Data ``` data_dict = pd.read_csv("data/titanic/data_dictionary.csv") data_dict titanic_train = pd.read_csv("data/titanic/train.csv") titanic_test = pd.read_csv("data/titanic/test.csv") # Getting the passengerID for test dataset so that we can split the # dataframe later by it. titanic_test_ID = titanic_test['PassengerId'] ``` ## Descriptive Statistics of the Dataset ``` # Checking the distribution of each feature: titanic_train.hist(figsize=(15,10)); titanic_train["Age"].hist(figsize=(15,10)); # Scatter Matrix to see the correlation between some of the features from pandas.plotting import scatter_matrix attributes = [ "Pclass", "Age", "Fare"] scatter_matrix(titanic_train[attributes], figsize=(15,10)); titanic_train.plot(kind="scatter", x="Age", y="Fare", alpha=0.9, figsize=(15,10)); titanic_train.describe() # Looking for missing values titanic_train.info() ``` ## Detecting Outliers In this section, I am going to define a function that helps detect outliers in teh dataset (anything that falls out of 1.5* the IQR range). 
``` def detect_outliers(df,n,features): """ Takes a dataframe df of features and returns a list of the indices corresponding to the observations containing more than n outliers according to the Tukey method. """ outlier_indices = [] # iterate over features(columns) for col in features: # 1st quartile (25%) Q1 = np.percentile(df[col], 25) # 3rd quartile (75%) Q3 = np.percentile(df[col],75) # Interquartile range (IQR) IQR = Q3 - Q1 # outlier step outlier_step = 1.5 * IQR # Determine a list of indices of outliers for feature col outlier_list_col = df[(df[col] < Q1 - outlier_step) | (df[col] > Q3 + outlier_step )].index # append the found outlier indices for col to the list of outlier indices outlier_indices.extend(outlier_list_col) # select observations containing more than 2 outliers outlier_indices = Counter(outlier_indices) multiple_outliers = list( k for k, v in outlier_indices.items() if v > n ) return multiple_outliers # detect outliers from Age, SibSp , Parch and Fare Outliers_to_drop = detect_outliers(titanic_train,2,["Age","SibSp","Parch","Fare"]) titanic_train.loc[Outliers_to_drop] titanic_train = titanic_train.drop(Outliers_to_drop, axis = 0).reset_index(drop=True) ``` ### Joining Train and Test datasets Here, I am going to join both dataset for feature engineering and will later split them back using the titanic_test_ID. ``` # Getting the length of the train dataset len_titanic_train = len(titanic_train) # We are stacking two datasets, so it's important to remember the order df = pd.concat(objs=[titanic_train, titanic_test], axis=0).reset_index(drop=True) ``` ### Missing Values It looks like the Age, Embarked and Cabin columns having missing values. I assume that Age and Embarked columns could be more relevant than the cabin, so I am going to impute the Age column with the mean of the column grouped by PClass. 
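As a sanity check on the imputation strategy just described, here is a minimal pure-Python sketch of median-by-group filling (hypothetical rows, grouping on Pclass only, whereas the notebook code below groups on SibSp, Parch and Pclass):

```python
def impute_by_group(rows, group_key, value_key):
    """Fill missing value_key entries with the median of the same
    group_key; fall back to the overall median for empty groups."""
    def median(xs):
        xs = sorted(xs)
        n, mid = len(xs), len(xs) // 2
        return xs[mid] if n % 2 else (xs[mid - 1] + xs[mid]) / 2

    present = [r[value_key] for r in rows if r[value_key] is not None]
    overall = median(present)
    by_group = {}
    for r in rows:
        if r[value_key] is not None:
            by_group.setdefault(r[group_key], []).append(r[value_key])
    for r in rows:
        if r[value_key] is None:
            vals = by_group.get(r[group_key])
            r[value_key] = median(vals) if vals else overall
    return rows

# Hypothetical passengers: Age missing for one 3rd-class passenger
rows = [
    {"Pclass": 1, "Age": 38.0},
    {"Pclass": 3, "Age": 22.0},
    {"Pclass": 3, "Age": 26.0},
    {"Pclass": 3, "Age": None},
]
impute_by_group(rows, "Pclass", "Age")
```

Here the missing Age is filled with 24.0, the median of the other 3rd-class ages, rather than the overall median.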
``` # Inspecting some of the missing Age rows df[df['Age'].isnull()] # Visualize missing values as a matrix msno.matrix(df) msno.bar(df) ``` Let's see if there a correlation among the missing values in the data using the heatmap function of the missinno library. ``` msno.heatmap(df); ``` From the heatmap above, it looks there is not a significant correlation between the missing values. ### Imputing Missing Values ##### Impute Age For the Age column, we can impute the missing values by the mean value of each group by Sex and Pclass. ``` # Filling missing value of Age ## Fill Age with the median age of similar rows according to Pclass, Parch and SibSp # Index of NaN age rows index_NaN_age = list(df["Age"][df["Age"].isnull()].index) for i in index_NaN_age : age_med = df["Age"].median() age_pred = df["Age"][((df['SibSp'] == df.iloc[i]["SibSp"]) & (df['Parch'] == df.iloc[i]["Parch"]) & (df['Pclass'] == df.iloc[i]["Pclass"]))].median() if not np.isnan(age_pred) : df['Age'].iloc[i] = age_pred else : df['Age'].iloc[i] = age_med ``` ##### Impute Fare ``` # Let's fill the null values with the median value df['Fare'] = df['Fare'].fillna(df['Fare'].median()) ``` ##### Impute Cabin ``` df['Cabin_mapped'] = df['Cabin'].astype(str).str[0] # this transforms the letters into numbers cabin_dict = {k:i for i, k in enumerate(df.Cabin_mapped.unique())} df.loc[:, 'Cabin_mapped'] = df.loc[:, 'Cabin_mapped'].map(cabin_dict) # Let's inspect cabins and see how they are labeled df['Cabin'].unique() df['Cabin'].isnull().sum() # We can try to replace the Cabin with X for missing # Replace the Cabin number by the type of cabin 'X' if not df["Cabin"] = pd.Series([i[0] if not pd.isnull(i) else 'X' for i in df['Cabin'] ]) ``` ##### Impute Embarked ``` # Let's inspect the embarked column and see which rows have missing records df[df['Embarked'].isnull()] # Embarked # We can impute this feature with the mode which is S df['Embarked'] = df['Embarked'].fillna(df['Embarked'].mode()[0]) # Checking to 
see if the above function worked:
df.info()

def draw_heatmap(df, y_variable, no_features):
    """This function takes three arguments:
    1. The dataframe that we want to draw the heatmap for.
    2. The variable whose correlation with the other features we want
       to see, for example the y-variable.
    3. The top_n. For example, for the top 10 variables, type 10."""
    # Calculate the correlation matrix
    cor = df.corr()
    # Get the columns for the n largest features
    columns = cor.nlargest(no_features, y_variable)[y_variable].index
    cm = np.corrcoef(df[columns].values.T)
    sns.set(font_scale=1)
    fig = plt.figure(num=None, figsize=(10, 10), dpi=80, facecolor='w', edgecolor='k')
    # Define the color palette
    cmap = sns.cm.vlag_r
    heat_map = sns.heatmap(cm, cbar=True, annot=True, square=True, fmt=".2f",
                           annot_kws={'size': 12}, yticklabels=columns.values,
                           xticklabels=columns.values, linewidths=.2, vmax=1,
                           center=0, cmap=cmap)
    return plt.show()

draw_heatmap(titanic_train, 'Survived', 10)
```

We can see that Fare and Age show the strongest correlations with Survived. This suggests there might be hidden patterns within each feature; with some feature engineering, we could see a different heatmap.

# Feature Analysis

For Feature Analysis, I am going to define three helper functions that help in drawing plots, using Seaborn's factorplots and Pandas' bar charts.
``` def plot_factorplot(df, x, y='Survived', hue=None): import warnings warnings.simplefilter(action='ignore', category=Warning) plt.figure(figsize=(12,10)) g = sns.factorplot(x=x,y=y,data=df,kind="bar", size = 6 , hue=hue, palette = "muted") g.despine(left=True) g = g.set_ylabels("Survival Probability") g = g.set_xlabels("{}".format(x)) def plot_barchart(df, feature): """ This functions takes the feature that we want to plot against survivors""" survived = df[df['Survived']==1][feature].value_counts() not_survived = df[df['Survived']==0][feature].value_counts() df = pd.DataFrame([survived,not_survived]) df.index=['Survived','Not Survived'] df.plot(kind='bar',stacked=False,title="Stacked Chart for "+feature, figsize=(12,10)) def plot_distribution(df, col, **options): from scipy.stats import norm """ This function helps draw a distribution plot for the desired colum. Input args: 1. df : Dataframe that we want to pick the column from. 2. col : Column of the dataframe that we want to display. 3. options: a. kde : optional, boolian - Whether to plot a gaussian kernel density estimate. b. fit : An object with `fit` method, returning a tuple that can be passed to a `pdf` method a positional arguments following a grid of values to evaluate the pdf on. """ plt.figure(figsize=(12,10)) plt.ylabel("Frequency") plt.title("{} Distribution".format(col)) if options.get("kde")==True: sns.distplot(df[col], kde=True, color="#2b7bba"); if options.get("fit")== "norm": (mu, sigma) = norm.fit(df[col]) sns.distplot(df[col], fit=norm, color="#2b7bba"); plt.legend(["Normal dist. 
($\mu=$ {:.2f} and $\sigma=$ {:.2f} )".format(mu, sigma)], loc='best');
    if (options.get("fit") == "norm") & (options.get("kde") == True):
        sns.distplot(df[col], fit=norm, kde=True, color="#2b7bba");
    else:
        sns.distplot(df[col], color="#2b7bba");
```

##### Sex

```
survivors_data = titanic_train[titanic_train.Survived==True]
non_survivors_data = titanic_train[titanic_train.Survived==False]

Gender = pd.crosstab(titanic_train['Survived'], titanic_train['Sex'])
Gender

plot_barchart(titanic_train, "Sex")
```

We can see that females had a higher chance of survival than males. It looks like Sex might be an important factor in determining the chance of survival, so we can create some features from it.

##### Pclass

```
Pclass = pd.crosstab(titanic_train['Survived'], titanic_train['Pclass'])
Pclass

plot_barchart(titanic_train, "Pclass")
```

We can see in the chart above that passengers with tickets in class 3 had a lower chance of survival.

```
# Explore Pclass vs Survived by Sex
plot_factorplot(titanic_train, "Pclass", hue='Sex')
```

We can see that Pclass and Sex both play a role in determining survival: among females, those with class 1 and 2 tickets had a higher chance of survival.

##### Fare

Let's see the distribution of the fare.

```
# Explore Fare distribution
plot_distribution(df, "Fare", kde=True)
```

We can see that the fare is positively skewed.
We can fix this by transforming the Fare feature with a logarithmic transformation.

#### Transform Fare

```
df['Fare'] = np.log1p(df['Fare'])

# Let's display the distribution after the log transformation
plot_distribution(df, "Fare", kde=True, fit="norm")
```

##### Age

```
# Explore Age distribution
fig = plt.figure(figsize=(12,10))
g = sns.kdeplot(titanic_train["Age"][(titanic_train["Survived"] == 0) & (titanic_train["Age"].notnull())], color="Red", shade=True)
g = sns.kdeplot(titanic_train["Age"][(titanic_train["Survived"] == 1) & (titanic_train["Age"].notnull())], ax=g, color="Green", shade=True)
g.set_xlabel("Age")
g.set_ylabel("Frequency")
g = g.legend(["Did Not Survive", "Survived"])
```

Plotting survival by age, we can see that survival was high for teens; on the right tail, people above 70 also survived at a higher rate.

##### SibSp

```
plot_barchart(titanic_train, "SibSp")
```

It looks like passengers with more siblings/spouses had a higher chance of not surviving. On the other hand, single passengers were more likely to survive.

##### Parch

```
plot_barchart(titanic_train, "Parch")

# Explore Parch feature vs Survived
plot_factorplot(titanic_train, 'Parch')
```

Small families had a better chance to survive than singles (Parch 0), medium families (Parch 3, 4) and large families (Parch 5, 6).

##### Embarked

```
plot_factorplot(titanic_train, 'Embarked')
```

We can see that passengers embarking from Southampton (S) had the lowest survival rate, while passengers embarking from Cherbourg (C) had the highest chance of survival. Let's look a little deeper and see if the passengers from C had more class 1 tickets.

```
plot_factorplot(titanic_train, 'Embarked', hue='Pclass')
```

We can see that passengers from C had more 1st class tickets compared to those from S.

## Feature Engineering

#### Pclass

We can convert Pclass to categorical and then to dummy variables.
```
# Create categorical values for Pclass
df["Pclass"] = df["Pclass"].astype("category")
df = pd.get_dummies(df, columns=["Pclass"], prefix="Pc")
```

#### Sex

We can convert Sex to numeric.

```
df['Sex'] = df['Sex'].map({'male': 0, 'female': 1})
```

#### Family Size

We can calculate a family-size feature by adding Parch, SibSp and 1 for the passenger him/herself.

```
df['Fam_size'] = 1 + df['Parch'] + df['SibSp']

plot_factorplot(df, 'Fam_size')
```

We can see that family size has some effect on survival.

```
# Create new features from family size
df['Single'] = df['Fam_size'].map(lambda s: 1 if s == 1 else 0)
df['SmallF'] = df['Fam_size'].map(lambda s: 1 if s == 2 else 0)
df['MedF'] = df['Fam_size'].map(lambda s: 1 if 3 <= s <= 4 else 0)
df['LargeF'] = df['Fam_size'].map(lambda s: 1 if s >= 5 else 0)
```

#### Title

Some of the passenger names have titles in front of them. These may add predictive power for the survival rate, so let's extract the titles and convert them into dummy variables.

```
df['Name'].head()

def get_title(name):
    title_search = re.search(' ([A-Za-z]+)\.', name)
    if title_search:
        return title_search.group(1)
    return ""

df['Title'] = df['Name'].apply(get_title)
title_lev = list(df['Title'].value_counts().reset_index()['index'])
df['Title'] = pd.Categorical(df['Title'], categories=title_lev)

g = sns.countplot(x="Title", data=df)
g = plt.setp(g.get_xticklabels(), rotation=45)

df = pd.get_dummies(df, columns=['Title'], drop_first=True, prefix="Title")
df.columns

# Drop the name column
df = df.drop(['Name'], axis=1)
```

We can see that passengers with the titles Miss-Mrs had a higher chance of survival.

```
df.columns
```

#### Ticket

We can try to extract some information from the Ticket feature by extracting its prefix, using X for tickets that don't have one.

```
df['Ticket']

## Treat Ticket by extracting the ticket prefix. When there is no prefix it returns X.
Ticket = [] for i in list(df.Ticket): if not i.isdigit() : Ticket.append(i.replace(".","").replace("/","").strip().split(' ')[0]) #Take prefix else: Ticket.append("X") df["Ticket"] = Ticket df["Ticket"].head() df = pd.get_dummies(df, columns = ["Ticket"], prefix="T") df.head() ``` #### Embarked Let's convert this categorical to numerical using Pandas' get_dummies function ``` df = pd.get_dummies(df, columns=['Embarked'], prefix="Embarked") ``` #### Cabin Let's convert this categorical to numerical using Pandas' get_dummies function ``` df['HasCabin'] = df['Cabin'].apply(lambda x: 0 if x==0 else 1) df = pd.get_dummies(df, columns=['Cabin'], prefix="Cabin") df = pd.get_dummies(df, columns=['HasCabin'], prefix="CabinBol") df.info() df.columns df = df.drop(labels = ["PassengerId", "Parch", "Fam_size"],axis = 1) df.columns cols = ['Pclass', 'SibSp', 'Parch', 'Fare', 'Sex','Cabin_mapped', 'Embarked', 'Survived', 'Age'] df = df[cols] df = pd.get_dummies(df, columns=['Sex', 'Cabin_mapped', 'Embarked'],drop_first=True) ``` # Modeling ``` # Let's split the train and test data sets train = df[:len_titanic_train] test = df[len_titanic_train:] # Drop the empty Survived column from the test dataset. test.drop(labels=['Survived'], axis=1, inplace=True) ## Separate train features and label train["Survived"] = train["Survived"].astype(int) y = train["Survived"] X = train.drop(labels = ["Survived"],axis = 1) ``` #### Split Test Train Data Here, I am going to split the data into training and validation sets using Scikit-Learn. ``` X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42) ``` #### Model For the first run, I am going to try Random Forest Classifier using GridSearch. 
``` # Istentiate the model rfc=RandomForestClassifier(random_state=42, n_jobs=4) # Parameter for our classifier param_grid = { 'n_estimators': [100,150, 200, 500, 600], 'max_features': ['auto', 'sqrt', 'log2'], 'max_depth' : [2, 4,5,6,7,8, 9, 10, 11, 12, 13, 14, 18], 'criterion' :['gini', 'entropy'] } # Defining our Gridsearch cross validation CV_rfc = GridSearchCV(estimator=rfc, param_grid=param_grid, cv= 5) # Fitting the GridSearch to training and testing. CV_rfc.fit(X_train, y_train) # Looking the best parameters. CV_rfc.best_params_ # Now, we can use the parameters above to define our model. rfc1=RandomForestClassifier(random_state=42, max_features='auto', n_estimators= 500, max_depth=12, criterion='entropy', n_jobs=6) rfc1.fit(X_train, y_train) test_predictions = rfc1.predict(test) submission = pd.DataFrame() submission['PassengerId'] = titanic_test['PassengerId'] submission['Survived'] = test_predictions submission.to_csv("data/titanic/submission.csv", index=False) ``` #### XGBoost ``` import warnings warnings.filterwarnings('ignore') from datetime import datetime from sklearn.model_selection import RandomizedSearchCV, GridSearchCV from sklearn.metrics import roc_auc_score from sklearn.model_selection import StratifiedKFold from xgboost import XGBClassifier # A parameter grid for XGBoost params = { 'min_child_weight': [1, 5, 10], 'gamma': [0.5, 1, 1.5, 2, 5], 'subsample': [0.6, 0.8, 1.0], 'colsample_bytree': [0.6, 0.8, 1.0], 'max_depth': [3, 4, 5] } xgb = XGBClassifier(learning_rate=0.02, n_estimators=600, objective='binary:logistic', silent=True, nthread=1) folds = 3 param_comb = 5 skf = StratifiedKFold(n_splits=folds, shuffle = True, random_state = 42) random_search = RandomizedSearchCV(xgb, param_distributions=params, n_iter=param_comb, scoring='accuracy', n_jobs=4, cv=skf.split(X,y), verbose=3, random_state=1001 ) # Here we go random_search.fit(X, y) #roc_auc submission = pd.DataFrame() submission['PassengerId'] = titanic_test['PassengerId'] 
# Predict with the fitted randomized search (the earlier test_predictions came from the random forest)
submission['Survived'] = random_search.predict(test)
submission.to_csv("data/titanic/submission.csv", index=False)
```

### Ongoing work!

I am still trying to improve my Kaggle score. I will continue with the following models.

```
import xgboost as xgb
from sklearn.model_selection import RandomizedSearchCV

# Create the parameter grid: gbm_param_grid
gbm_param_grid = {
    'n_estimators': range(8, 20),
    'max_depth': range(6, 10),
    'learning_rate': [.4, .45, .5, .55, .6],
    'colsample_bytree': [.6, .7, .8, .9, 1]
}

# Instantiate the classifier: gbm
gbm = XGBClassifier(n_estimators=10)

# Perform random search
xgb_random = RandomizedSearchCV(param_distributions=gbm_param_grid,
                                estimator=gbm, scoring="accuracy",
                                verbose=1, n_iter=50, cv=4)

# Fit the randomized search to the data
xgb_random.fit(X, y)

# Print the best parameters and best accuracy
print("Best parameters found: ", xgb_random.best_params_)
print("Best accuracy found: ", xgb_random.best_score_)

xgb_pred = xgb_random.predict(test)

submission = pd.DataFrame()
submission['PassengerId'] = titanic_test['PassengerId']
submission['Survived'] = xgb_pred
submission.to_csv("data/titanic/submission.csv", index=False)
```
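`RandomizedSearchCV` differs from `GridSearchCV` by scoring only a random sample of parameter combinations instead of the full grid. A rough pure-Python sketch of just that sampling step (scoring and cross-validation omitted; with list-valued grids scikit-learn actually samples combinations without replacement, which this sketch ignores):

```python
import random

def sample_param_combinations(param_grid, n_iter, seed=42):
    """Draw n_iter random parameter combinations from a grid,
    mimicking the sampling step of RandomizedSearchCV."""
    rng = random.Random(seed)
    return [
        {name: rng.choice(values) for name, values in param_grid.items()}
        for _ in range(n_iter)
    ]

grid = {
    "n_estimators": list(range(8, 20)),
    "max_depth": list(range(6, 10)),
    "learning_rate": [0.4, 0.45, 0.5, 0.55, 0.6],
}
candidates = sample_param_combinations(grid, n_iter=5)
```

Each candidate would then be cross-validated, and the best-scoring one refit on the full training set.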
### Halo check Plot halos to see if halofinders work well ``` #import os #base = os.path.abspath('/home/hoseung/Work/data/05427/') #base = base + '/' # basic parameters # Directory, file names, snapshots, scale, npix base = '/home/hoseung/Work/data/05427/' cluster_name = base.split('/')[-2] frefine= 'refine_params.txt' fnml = input("type namelist file name (enter = cosmo_200.nml):") if fnml =="": fnml = 'cosmo_200.nml' nout_ini=int(input("Starting nout?")) nout_fi=int(input("ending nout?")) nouts = range(nout_ini,nout_fi+1) scale = input("Scale?: ") if scale=="": scale = 0.3 scale = float(scale) npix = input("npix (enter = 400)") if npix == "": npix = 400 npix = int(npix) # data loading parameters ptype=["star pos mass"] refine_params = True dmo=False draw=True draw_halos=True draw_part = True draw_hydro = False if draw_hydro: lmax=input("maximum level") if lmax=="": lmax=19 lmax = int(lmax) import load import utils.sampling as smp import utils.match as mtc import draw import pickle for nout in nouts: snout = str(nout).zfill(3) if refine_params: # instead of calculating zoomin region, just load it from the refine_params.txt file. 
        # region = s.part.search_zoomin(scale=0.5, load=True)
        rr = load.info.RefineParam()
        rr.loadRegion(base + frefine)
        nn = load.info.Nml(fname=base + fnml)
        aexp = nn.aout[nout-1]
        i_aexp = mtc.closest(aexp, rr.aexp)
        x_refine = rr.x_refine[i_aexp]
        y_refine = rr.y_refine[i_aexp]
        z_refine = rr.z_refine[i_aexp]
        r_refine = rr.r_refine[i_aexp] * 0.5
        region = smp.set_region(xc=x_refine, yc=y_refine, zc=z_refine, radius=r_refine * scale)
    else:
        region = smp.set_region(xc=0.5, yc=0.5, zc=0.5, radius=0.1)

    s = load.sim.Sim(nout, base, dmo=dmo, ranges=region["ranges"], setup=True)
    imgs = draw.img_obj.MapSet(info=s.info, region=region)
    imgp = draw.img_obj.MapImg(info=s.info, proj='z', npix=npix, ptype=ptype)
    imgp.set_region(region)

    #%%
    if draw_part:
        s.add_part(ptype)
        s.part.load()
        part = getattr(s.part, s.part.pt[0])
        x = part['x']
        y = part['y']
        z = part['z']  # was part['y'], a copy-paste bug
        m = part['m'] * s.info.msun
        # part must be normalized already!
        #imgp.set_data(draw.pp.den2d(x, y, z, m, npix, s.info, cic=True, norm_integer=True))
        imgp.set_data(draw.pp.den2d(x, y, z, m, npix, region, cic=True, norm_integer=True))
        imgs.ptden2d = imgp
        # imgp.show_data()

    #%%
    if draw_hydro:
        s.add_hydro()
        s.hydro.amr2cell(lmax=lmax)
        field = draw.pp.pp_cell(s.hydro.cell, npix, s.info, verbose=True)
        ptype = 'gas_den'
        imgh = draw.img_obj.MapImg(info=s.info, proj='z', npix=npix, ptype=ptype)
        imgh.set_data(field)
        imgh.set_region(region)
        # imgh.show_data()
        imgs.hydro = imgh

    #%%
    fdump = base + snout + 'map.pickle'
    with open(fdump, 'wb') as f:
        pickle.dump(imgs, f)

    if draw:
        if draw_part:
            imgs.ptden2d.plot_2d_den(save=base + cluster_name + snout + 'star.png', dpi=400, show=False)
        if draw_hydro:
            imgs.hydro.plot_2d_den(save=base + cluster_name + snout + 'hydro.png', vmax=15, vmin=10, show=False, dpi=400)

import matplotlib.pyplot as plt
fig = plt.figure()
ax1 = fig.add_subplot(111)

snout = str(nout).zfill(3)
fin = base + snout + 'map.pickle'
with open(fin, 'rb') as f:
    img = pickle.load(f)
ptimg = img.ptden2d
fout = base + snout + "dmmap_" + 
ptimg.proj + ".png" img.ptden2d.plot_2d_den(save=False, show=False, vmin=1e13, vmax=1e20, dpi=400, axes=ax1) import tree import numpy as np #s = load.sim.Sim(nout, base_dir) info = load.info.Info(nout=nout, base=base, load=True) hall = tree.halomodule.Halo(nout=nout, base=base, halofinder="HM", info=info, load=True) i_center = np.where(hall.data['np'] == max(hall.data['np'])) h = tree.halomodule.Halo() h.derive_from(hall, [i_center]) #region = smp.set_region(xc=h.data.x, yc=h.data.y, zc=h.data.z, radius = h.data.rvir * 2) #%% from draw import pp ind = np.where(hall.data.mvir > 5e10) h_sub = tree.halomodule.Halo() h_sub.derive_from(hall, ind) #x = hall.data.x#[ind] #y = hall.data.y#[ind] #r = hall.data.rvir#[ind] #pp.circle_scatter(ax1, x*npix, y*npix, r*30, facecolor='none', edgecolor='b', label='555') #ax1.set_xlim(right=npix). #ax1.set_ylim(top=npix) pp.pp_halo(h_sub, npix, region=img.ptden2d.region, axes=ax1, rscale=3, name=True) plt.show() ``` ##### Load halofinder result ##### get position and virial radius ##### load particles data (star or DM) and draw density map ##### plot halos on top of particle density map
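The mass deposition that `draw.pp.den2d` performs can be sketched independently of these custom modules. Below is a nearest-grid-point version built on `np.histogram2d` (not the CIC deposit used above); `project_density` is a hypothetical helper, not part of this pipeline, and the positions/masses are assumed to be normalized.

```python
import numpy as np

def project_density(x, y, m, npix, lo=0.0, hi=1.0):
    """Deposit particle masses on an npix x npix grid and
    return the surface density (mass per pixel area)."""
    grid, _, _ = np.histogram2d(x, y, bins=npix,
                                range=[[lo, hi], [lo, hi]], weights=m)
    pix_area = ((hi - lo) / npix) ** 2
    return grid / pix_area

# two unit-mass particles in opposite corners of a 2x2 map:
# each occupied pixel holds mass 1 over area 0.25
dmap = project_density(np.array([0.1, 0.9]), np.array([0.1, 0.9]),
                       np.array([1.0, 1.0]), npix=2)
print(dmap)
```

A CIC (cloud-in-cell) deposit differs only in splitting each particle's mass over the four nearest pixels, which smooths the map.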
github_jupyter
# Chapter 12

*Modeling and Simulation in Python*

Copyright 2021 Allen Downey

License: [Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International](https://creativecommons.org/licenses/by-nc-sa/4.0/)

```
# check if the libraries we need are installed

try:
    import pint
except ImportError:
    !pip install pint

try:
    import modsim
except ImportError:
    !pip install modsimpy
```

### Code

Here's the code from the previous notebook that we'll need.

```
from modsim import State, System, SweepSeries, decorate

def make_system(beta, gamma):
    """Make a system object for the SIR model.

    beta: contact rate in per day
    gamma: recovery rate in per day

    returns: System object
    """
    init = State(S=89, I=1, R=0)
    init /= sum(init)

    t0 = 0
    t_end = 7 * 14

    return System(init=init, t0=t0, t_end=t_end,
                  beta=beta, gamma=gamma)

def update_func(state, t, system):
    """Update the SIR model.

    state: State with variables S, I, R
    t: time step
    system: System with beta and gamma

    returns: State object
    """
    s, i, r = state

    infected = system.beta * i * s
    recovered = system.gamma * i

    s -= infected
    i += infected - recovered
    r += recovered

    return State(S=s, I=i, R=r)

from numpy import arange, linspace
from modsim import TimeFrame

def run_simulation(system, update_func):
    """Runs a simulation of the system.

    system: System object
    update_func: function that updates state

    returns: TimeFrame
    """
    frame = TimeFrame(columns=system.init.index)
    frame.loc[system.t0] = system.init

    for t in arange(system.t0, system.t_end):
        frame.loc[t+1] = update_func(frame.loc[t], t, system)

    return frame
```

In the previous chapter I presented the SIR model of infectious disease and used it to model the Freshman Plague at Olin. In this chapter we'll consider metrics intended to quantify the effects of the disease and interventions intended to reduce those effects.

## Immunization

Models like this are useful for testing "what if?" scenarios. As an example, we'll consider the effect of immunization.
Suppose there is a vaccine that causes a student to become immune to the Freshman Plague without being infected. How might you modify the model to capture this effect?

One option is to treat immunization as a shortcut from susceptible to recovered without going through infectious. We can implement this feature like this:

```
def add_immunization(system, fraction):
    system.init.S -= fraction
    system.init.R += fraction
```

`add_immunization` moves the given fraction of the population from `S` to `R`.

```
tc = 3             # time between contacts in days
tr = 4             # recovery time in days

beta = 1 / tc      # contact rate in per day
gamma = 1 / tr     # recovery rate in per day

system = make_system(beta, gamma)
results = run_simulation(system, update_func)
```

If we assume that 10% of students are vaccinated at the beginning of the semester, and the vaccine is 100% effective, we can simulate the effect like this:

```
system2 = make_system(beta, gamma)
add_immunization(system2, 0.1)
results2 = run_simulation(system2, update_func)
```

The following figure shows `S` as a function of time, with and without immunization.

```
results.S.plot(label='No immunization')
results2.S.plot(label='10% immunization')

decorate(xlabel='Time (days)',
         ylabel='Fraction of population')
```

## Metrics

When we plot a time series, we get a view of everything that happened when the model ran, but often we want to boil it down to a few numbers that summarize the outcome. These summary statistics are called **metrics**, as we saw in Section xxx.

In the SIR model, we might want to know the time until the peak of the outbreak, the number of people who are sick at the peak, the number of students who will still be sick at the end of the semester, or the total number of students who get sick at any point.

As an example, I will focus on the last one --- the total number of sick students --- and we will consider interventions intended to minimize it.
When a person gets infected, they move from `S` to `I`, so we can get the total number of infections by computing the difference in `S` at the beginning and the end:

```
def calc_total_infected(results, system):
    s_0 = results.S[system.t0]
    s_end = results.S[system.t_end]
    return s_0 - s_end

calc_total_infected(results, system)

calc_total_infected(results2, system2)
```

Without immunization, almost 47% of the population gets infected at some point. With 10% immunization, only 31% get infected. That's pretty good.

## Sweeping Immunization

Now let's see what happens if we administer more vaccines. The following function sweeps a range of immunization rates:

```
def sweep_immunity(immunize_array):
    sweep = SweepSeries()

    for fraction in immunize_array:
        sir = make_system(beta, gamma)
        add_immunization(sir, fraction)
        results = run_simulation(sir, update_func)
        sweep[fraction] = calc_total_infected(results, sir)

    return sweep
```

The parameter of `sweep_immunity` is an array of immunization rates. The result is a `SweepSeries` object that maps from each immunization rate to the resulting fraction of students ever infected.

The following figure shows a plot of the `SweepSeries`. Notice that the x-axis is the immunization rate, not time.

```
immunize_array = linspace(0, 1, 21)
infected_sweep = sweep_immunity(immunize_array)

infected_sweep.plot()

decorate(xlabel='Fraction immunized',
         ylabel='Total fraction infected',
         title='Fraction infected vs. immunization rate')
```

As the immunization rate increases, the number of infections drops steeply. If 40% of the students are immunized, fewer than 4% get sick. That's because immunization has two effects: it protects the people who get immunized (of course) but it also protects the rest of the population.

Reducing the number of "susceptibles" and increasing the number of "resistants" makes it harder for the disease to spread, because some fraction of contacts are wasted on people who cannot be infected.
This phenomenon is called **herd immunity**, and it is an important element of public health (see <http://modsimpy.com/herd>).

The steepness of the curve is a blessing and a curse. It's a blessing because it means we don't have to immunize everyone, and vaccines can protect the "herd" even if they are not 100% effective. But it's a curse because a small decrease in immunization can cause a big increase in infections. In this example, if we drop from 80% immunization to 60%, that might not be too bad. But if we drop from 40% to 20%, that would trigger a major outbreak, affecting more than 15% of the population. For a serious disease like measles, just to name one, that would be a public health catastrophe.

One use of models like this is to demonstrate phenomena like herd immunity and to predict the effect of interventions like vaccination. Another use is to evaluate alternatives and guide decision making. We'll see an example in the next section.

## Hand washing

Suppose you are the Dean of Student Life, and you have a budget of just \$1200 to combat the Freshman Plague. You have two options for spending this money:

1. You can pay for vaccinations, at a rate of \$100 per dose.
2. You can spend money on a campaign to remind students to wash hands frequently.

We have already seen how we can model the effect of vaccination. Now let's think about the hand-washing campaign. We'll have to answer two questions:

1. How should we incorporate the effect of hand washing in the model?
2. How should we quantify the effect of the money we spend on a hand-washing campaign?

For the sake of simplicity, let's assume that we have data from a similar campaign at another school showing that a well-funded campaign can change student behavior enough to reduce the infection rate by 20%.

In terms of the model, hand washing has the effect of reducing `beta`. That's not the only way we could incorporate the effect, but it seems reasonable and it's easy to implement.
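Both levers discussed so far (moving students straight to `R`, and reducing `beta`) can be checked with a self-contained version of the simulation. This sketch reimplements the update loop in plain Python, with no modsim dependency; `total_infected` is a hypothetical helper name, and the parameters and 98-day horizon match the code above.

```python
def total_infected(beta=1/3, gamma=1/4, immunized=0.0,
                   num_students=90, num_days=7 * 14):
    """Discrete-time SIR: return the fraction of students ever infected."""
    # one student starts out infected; `immunized` goes straight to R
    s = (num_students - 1) / num_students - immunized
    i = 1 / num_students
    s_0 = s
    for _ in range(num_days):
        infected = beta * i * s   # new infections this day
        recovered = gamma * i     # new recoveries this day
        s -= infected
        i += infected - recovered
    return s_0 - s

print(total_infected())                    # close to the "almost 47%" above
print(total_infected(immunized=0.1))       # close to the 31% above
print(total_infected(beta=0.8 * (1/3)))    # beta cut 20%, as hand washing will do
```

Lowering `beta` by 20% pushes the basic reproduction number close to 1, which is why the outbreak nearly fizzles.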
Now we have to model the relationship between the money we spend and the effectiveness of the campaign. Again, let's suppose we have data from another school that suggests:

- If we spend \$500 on posters, materials, and staff time, we can change student behavior in a way that decreases the effective value of `beta` by 10%.
- If we spend \$1000, the total decrease in `beta` is almost 20%.
- Above \$1000, additional spending has little additional benefit.

### Logistic function

To model the effect of a hand-washing campaign, I'll use a [generalized logistic function](https://en.wikipedia.org/wiki/Generalised_logistic_function) (GLF), which is a convenient function for modeling curves that have a generally sigmoid shape. The parameters of the GLF correspond to various features of the curve in a way that makes it easy to find a function that has the shape you want, based on data or background information about the scenario.

```
from numpy import exp

def logistic(x, A=0, B=1, C=1, M=0, K=1, Q=1, nu=1):
    """Computes the generalized logistic function.

    A: controls the lower bound
    B: controls the steepness of the transition
    C: not all that useful, AFAIK
    M: controls the location of the transition
    K: controls the upper bound
    Q: shift the transition left or right
    nu: affects the symmetry of the transition

    returns: float or array
    """
    exponent = -B * (x - M)
    denom = C + Q * exp(exponent)
    return A + (K-A) / denom ** (1/nu)
```

The following array represents the range of possible spending.

```
spending = linspace(0, 1200, 21)
```

`compute_factor` computes the reduction in `beta` for a given level of campaign spending. `M` is chosen so the transition happens around \$500. `K` is the maximum reduction in `beta`, 20%. `B` is chosen by trial and error to yield a curve that seems feasible.

```
def compute_factor(spending):
    """Reduction factor as a function of spending.

    spending: dollars from 0 to 1200

    returns: fractional reduction in beta
    """
    return logistic(spending, M=500, K=0.2, B=0.01)
```

Here's what it looks like.

```
from matplotlib.pyplot import plot

percent_reduction = compute_factor(spending) * 100

plot(spending, percent_reduction)

decorate(xlabel='Hand-washing campaign spending (USD)',
         ylabel='Percent reduction in infection rate',
         title='Effect of hand washing on infection rate')
```

So `compute_factor` takes spending as a parameter and returns `factor`, which is the factor by which `beta` is reduced.

I use `compute_factor` to write `add_hand_washing`, which takes a `System` object and a budget, and modifies `system.beta` to model the effect of hand washing:

```
def add_hand_washing(system, spending):
    factor = compute_factor(spending)
    system.beta *= (1 - factor)
```

Now we can sweep a range of values for `spending` and use the simulation to compute the effect:

```
def sweep_hand_washing(spending_array):
    sweep = SweepSeries()

    for spending in spending_array:
        system = make_system(beta, gamma)
        add_hand_washing(system, spending)
        results = run_simulation(system, update_func)
        sweep[spending] = calc_total_infected(results, system)

    return sweep
```

Here's how we run it:

```
from numpy import linspace

spending_array = linspace(0, 1200, 20)
infected_sweep2 = sweep_hand_washing(spending_array)
```

The following figure shows the result.

```
infected_sweep2.plot()

decorate(xlabel='Hand-washing campaign spending (USD)',
         ylabel='Total fraction infected',
         title='Effect of hand washing on total infections')
```

Below \$200, the campaign has little effect. At \$800 it has a substantial effect, reducing total infections from more than 45% to about 20%. Above \$800, the additional benefit is small.

## Optimization

Let's put it all together.
With a fixed budget of \$1200, we have to decide how many doses of vaccine to buy and how much to spend on the hand-washing campaign. Here are the parameters:

```
num_students = 90
budget = 1200
price_per_dose = 100
max_doses = int(budget / price_per_dose)
```

The fraction `budget/price_per_dose` might not be an integer. `int` is a built-in function that converts numbers to integers, rounding down.

We'll sweep the range of possible doses:

```
dose_array = arange(max_doses+1)
```

In this example we call `arange` with only one argument; it returns a NumPy array with the integers from 0 up to, but not including, the argument. Passing `max_doses+1` makes the result include both endpoints, 0 and `max_doses`.

Then we run the simulation for each element of `dose_array`:

```
def sweep_doses(dose_array):
    sweep = SweepSeries()

    for doses in dose_array:
        fraction = doses / num_students
        spending = budget - doses * price_per_dose

        system = make_system(beta, gamma)
        add_immunization(system, fraction)
        add_hand_washing(system, spending)

        results = run_simulation(system, update_func)
        sweep[doses] = calc_total_infected(results, system)

    return sweep
```

For each number of doses, we compute the fraction of students we can immunize, `fraction`, and the remaining budget we can spend on the campaign, `spending`. Then we run the simulation with those quantities and store the number of infections.

The following figure shows the result.

```
infected_sweep3 = sweep_doses(dose_array)

infected_sweep3.plot()

decorate(xlabel='Doses of vaccine',
         ylabel='Total fraction infected',
         title='Total infections vs. doses')
```

If we buy no doses of vaccine and spend the entire budget on the campaign, the fraction infected is around 19%. At 4 doses, we have \$800 left for the campaign, and this is the optimal point that minimizes the number of students who get sick. As we increase the number of doses, we have to cut campaign spending, which turns out to make things worse.
But interestingly, when we get above 10 doses, the effect of herd immunity starts to kick in, and the number of sick students goes down again.

## Summary

### Exercises

**Exercise:** Suppose the price of the vaccine drops to $50 per dose. How does that affect the optimal allocation of the spending?

**Exercise:** Suppose we have the option to quarantine infected students. For example, a student who feels ill might be moved to an infirmary, or a private dorm room, until they are no longer infectious.

How might you incorporate the effect of quarantine in the SIR model?

```
# Solution

"""There is no unique best answer to this question, but one
simple option is to model quarantine as an effective reduction
in gamma, on the assumption that quarantine reduces the number
of infectious contacts per infected student.

Another option would be to add a fourth compartment to the model
to track the fraction of the population in quarantine at each
point in time.  This approach would be more complex, and it is
not obvious that it is substantially better.

The following function could be used, like add_immunization and
add_hand_washing, to adjust the parameters in order to model
various interventions.

In this example, `high` is the highest duration of the infection
period, with no quarantine.  `low` is the lowest duration, on
the assumption that it takes some time to identify infectious
students.  `fraction` is the fraction of infected students who
are quarantined as soon as they are identified.
"""

def add_quarantine(system, fraction):
    """Model the effect of quarantine by adjusting gamma.

    system: System object
    fraction: fraction of students quarantined
    """
    # `low` represents the number of days a student
    # is infectious if quarantined.
    # `high` is the number of days they are infectious
    # if not quarantined
    low = 1
    high = 4
    tr = high - fraction * (high-low)
    system.gamma = 1 / tr
```
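As a quick sanity check, the rescaling in the solution above can be exercised on its own. Here `FakeSystem` is a minimal stand-in (not part of modsim) since `add_quarantine` only touches the `gamma` attribute:

```python
class FakeSystem:
    """Minimal stand-in: add_quarantine only touches .gamma."""
    gamma = 1 / 4   # no-quarantine recovery rate (tr = 4 days)

def add_quarantine(system, fraction):
    # same logic as the solution above
    low, high = 1, 4
    tr = high - fraction * (high - low)
    system.gamma = 1 / tr

sys_a = FakeSystem()
add_quarantine(sys_a, 0.5)   # quarantine half the infected students
print(sys_a.gamma)           # tr = 4 - 0.5 * 3 = 2.5 days, so gamma = 0.4
```

With `fraction=0` we recover the original `gamma = 1/4`, and with `fraction=1` every student is infectious for just one day (`gamma = 1`).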
```
import pandas as pd
import matplotlib.pyplot as plt
```

## Counting missing rows with left join

The Movie Database is supported by volunteers going out into the world, collecting data, and entering it into the database. This includes financial data, such as movie budget and revenue. If you wanted to know which movies are still missing data, you could use a left join to identify them. Practice using a left join by merging the `movies` table and the `financials` table.

The `movies` and `financials` tables have been loaded for you.

Instructions

- What column is likely the best column to merge the two tables on?
- Merge the `movies` table, as the left table, with the `financials` table using a left join, and save the result to `movies_financials`.
- Count the number of rows in `movies_financials` with a null value in the `budget` column.

```
# Import the DataFrames
movies = pd.read_pickle('movies.pkl')
financials = pd.read_pickle('financials.pkl')

display(movies.head())
display(movies.shape)
display(financials.head())
display(financials.shape)

# Answer: the best column to merge on is 'id'

# Merge movies and financials with a left join
movies_financials = movies.merge(financials, on='id', how='left')

# Count the number of rows in the budget column that are missing
number_of_missing_fin = movies_financials['budget'].isnull().sum()

# Print the number of movies missing financials
number_of_missing_fin
```

## Enriching a dataset

Setting `how='left'` with the `.merge()` method is a useful technique for enriching or enhancing a dataset with additional information from a different table. In this exercise, you will start off with a sample of movie data from the movie series Toy Story. Your goal is to enrich this data by adding the marketing tag line for each movie. You will compare the results of a left join versus an inner join.

The `toy_story` DataFrame contains the Toy Story movies. The `toy_story` and `taglines` DataFrames have been loaded for you.
Instructions

- Merge `toy_story` and `taglines` on the `id` column with a **left join**, and save the result as `toystory_tag`.
- With `toy_story` as the left table, merge to it `taglines` on the `id` column with an **inner join**, and save as `toystory_tag`.

```
# Import the DataFrames
toy_story = pd.read_csv('toy_story.csv')
taglines = pd.read_pickle('taglines.pkl')

# Merge the toy_story and taglines tables with a left join
toystory_tag = toy_story.merge(taglines, on='id', how='left')

# Print the rows and shape of toystory_tag
print(toystory_tag)
print(toystory_tag.shape)

# Merge the toy_story and taglines tables with an inner join
toystory_tag = toy_story.merge(taglines, on='id')

# Print the rows and shape of toystory_tag
print(toystory_tag)
print(toystory_tag.shape)
```

## Right join to find unique movies

Most of the recent big-budget science fiction movies can also be classified as action movies. You are given a table of science fiction movies called `scifi_movies` and another table of action movies called `action_movies`. Your goal is to find which movies are considered only science fiction movies. Once you have this table, you can merge the `movies` table in to see the movie names. Since this exercise is related to science fiction movies, use a right join as your superhero power to solve this problem.

The `movies`, `scifi_movies`, and `action_movies` tables have been loaded for you.

Instructions

- Merge `action_movies` and `scifi_movies` tables with a **right join** on `movie_id`. Save the result as `action_scifi`.
- Update the merge to add suffixes, where `'_act'` and `'_sci'` are suffixes for the left and right tables, respectively.
- From `action_scifi`, subset only the rows where the `genre_act` column is null.
- Merge `movies` and `scifi_only` using the `id` column in the left table and the `movie_id` column in the right table with an inner join.
```
# Import the DataFrames
movies = pd.read_pickle('movies.pkl')
scifi_movies = pd.read_pickle('scifi_movies.pkl')
action_movies = pd.read_pickle('action_movies.pkl')

# Merge action_movies to scifi_movies with right join
action_scifi = action_movies.merge(scifi_movies, on='movie_id', how='right')

# Merge action_movies to scifi_movies with right join
action_scifi = action_movies.merge(scifi_movies, on='movie_id', how='right',
                                   suffixes=('_act', '_sci'))

# Print the first few rows of action_scifi to see the structure
print(action_scifi.head())

# From action_scifi, select only the rows where the genre_act column is null
scifi_only = action_scifi[action_scifi['genre_act'].isnull()]

# Merge the movies and scifi_only tables with an inner join
movies_and_scifi_only = movies.merge(scifi_only, how='inner',
                                     left_on='id', right_on='movie_id')

# Print the first few rows and shape of movies_and_scifi_only
print(movies_and_scifi_only.head())
print(movies_and_scifi_only.shape)
```

## Popular genres with right join

What are the genres of the most popular movies? To answer this question, you need to merge data from the `movies` and `movie_to_genres` tables. In a table called `pop_movies`, the top 10 most popular movies in the movies table have been selected. To ensure that you are analyzing all of the popular movies, merge it with the `movie_to_genres` table using a right join. To complete your analysis, count the number of different genres. Also, the two tables can be merged by the movie ID. However, in `pop_movies` that column is called `id`, and in `movies_to_genres` it's called `movie_id`.

The `pop_movies` and `movie_to_genres` tables have been loaded for you.

Instructions

- Merge `movie_to_genres` and `pop_movies` using a right join. Save the results as `genres_movies`.
- Group `genres_movies` by `genre` and count the number of `id` values.
```
# Import the DataFrames
movies = pd.read_pickle('movies.pkl')
pop_movies = pd.read_csv('pop_movies.csv')
movie_to_genres = pd.read_pickle('movie_to_genres.pkl')

# Use right join to merge the movie_to_genres and pop_movies tables
genres_movies = movie_to_genres.merge(pop_movies, how='right',
                                      left_on='movie_id', right_on='id')

# Count the number of genres
genre_count = genres_movies.groupby('genre').agg({'id':'count'})

# Plot a bar chart of the genre_count
genre_count.plot(kind='bar')
plt.show()
```

## Using outer join to select actors

One cool aspect of using an outer join is that, because it returns all rows from both merged tables and null where they do not match, you can use it to find rows that do not have a match in the other table. To try for yourself, you have been given two tables with a list of actors from two popular movies: Iron Man 1 and Iron Man 2. Most of the actors played in both movies. Use an outer join to find actors who **did not** act in both movies.

The Iron Man 1 table is called `iron_1_actors`, and Iron Man 2 table is called `iron_2_actors`. Both tables have been loaded for you and a few rows printed so you can see the structure.

Instructions

- Save to `iron_1_and_2` the merge of `iron_1_actors` (left) with `iron_2_actors` tables with an outer join on the `id` column, and set suffixes to `('_1','_2')`.
- Create an index that returns `True` if `name_1` or `name_2` are null, and `False` otherwise.
```
# Import the DataFrames
iron_1_actors = pd.read_csv('iron_1_actors.csv', index_col=0)
iron_2_actors = pd.read_csv('iron_2_actors.csv', index_col=0)

# Merge iron_1_actors to iron_2_actors on id with outer join using suffixes
iron_1_and_2 = iron_1_actors.merge(iron_2_actors, how='outer', on='id',
                                   suffixes=('_1', '_2'))

# Create an index that returns true if name_1 or name_2 are null
m = ((iron_1_and_2['name_1'].isnull()) |
     (iron_1_and_2['name_2'].isnull()))

# Print the first few rows of iron_1_and_2
iron_1_and_2[m].head()
```

## Self join

Merging a table to itself can be useful when you want to compare values in a column to other values in the same column. In this exercise, you will practice this by creating a table that for each movie will list the movie director and a member of the crew on one row. You have been given a table called `crews`, which has columns `id`, `job`, and `name`. First, merge the table to itself using the movie ID. This merge will give you a larger table where for each movie, every job is matched against each other. Then select only those rows with a director in the left table, and avoid having a row where the director's job is listed in both the left and right tables. This filtering will remove job combinations that aren't with the director.

The `crews` table has been loaded for you.

Instructions

- To a variable called `crews_self_merged`, merge the `crews` table to itself on the `id` column using an inner join, setting the suffixes to `'_dir'` and `'_crew'` for the left and right tables respectively.
- Create a Boolean index, named `boolean_filter`, that selects rows from the left table with the job of `'Director'` and avoids rows with the job of `'Director'` in the right table.
- Use the `.head()` method to print the first few rows of `direct_crews`.
```
# Import the DataFrame
crews = pd.read_pickle('crews.pkl')
crews.head()

# Merge the crews table to itself
crews_self_merged = crews.merge(crews, on='id', how='inner',
                                suffixes=('_dir','_crew'))
crews_self_merged.head()

# Create a Boolean index to select the appropriate rows
boolean_filter = ((crews_self_merged['job_dir'] == 'Director') &
                  (crews_self_merged['job_crew'] != 'Director'))
direct_crews = crews_self_merged[boolean_filter]

# Print the first few rows of direct_crews
direct_crews.head()
```

## Index merge for movie ratings

To practice merging on indexes, you will merge `movies` and a table called `ratings` that holds info about movie ratings. Make sure your merge returns **all** of the rows from the `movies` table and not all the rows of `ratings` table need to be included in the result.

The `movies` and `ratings` tables have been loaded for you.

Instructions

- Merge `movies` and `ratings` on the index and save to a variable called `movies_ratings`, ensuring that all of the rows from the `movies` table are returned.

```
# Import the DataFrames
movies = pd.read_pickle('movies.pkl')
ratings = pd.read_pickle('ratings.pkl')

# Merge to the movies table the ratings table on the index
movies_ratings = movies.merge(ratings, how='left', on='id')

# Print the first few rows of movies_ratings
movies_ratings.head()
```

## Do sequels earn more?

It is time to put together many of the aspects that you have learned in this chapter. In this exercise, you'll find out which movie sequels earned the most compared to the original movie. To answer this question, you will merge a modified version of the `sequels` and `financials` tables where their index is the movie ID. You will need to choose a merge type that will return all of the rows from the `sequels` table and not all the rows of `financials` table need to be included in the result. From there, you will join the resulting table to itself so that you can compare the revenue values of the original movie to the sequel.
Next, you will calculate the difference between the two revenues and sort the resulting dataset.

The `sequels` and `financials` tables have been provided.

Instructions

- With the `sequels` table on the left, merge to it the `financials` table on index named `id`, ensuring that all the rows from the `sequels` are returned and some rows from the other table may not be returned. Save the results to `sequels_fin`.
- Merge the `sequels_fin` table to itself with an inner join, where the left and right tables merge on `sequel` and `id` respectively with suffixes equal to `('_org','_seq')`, saving to `orig_seq`.
- Select the `title_org`, `title_seq`, and `diff` columns of `orig_seq` and save this as `titles_diff`.
- Sort `titles_diff` by `diff` in descending order and print the first few rows.

```
# Import the DataFrames
sequels = pd.read_pickle('sequels.pkl')
financials = pd.read_pickle('financials.pkl')

# Merge sequels and financials on index id
sequels_fin = sequels.merge(financials, on='id', how='left')

# Self merge with suffixes as inner join with left on sequel and right on id
orig_seq = sequels_fin.merge(sequels_fin, how='inner', left_on='sequel',
                             right_on='id', suffixes=('_org', '_seq'))

# Add calculation to subtract revenue_org from revenue_seq
orig_seq['diff'] = orig_seq['revenue_seq'] - orig_seq['revenue_org']

# Select the title_org, title_seq, and diff
titles_diff = orig_seq[['title_org', 'title_seq', 'diff']]

# Print the first rows of the sorted titles_diff
titles_diff.sort_values('diff', ascending=False).head()
```
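Several exercises above find unmatched rows by joining and then filtering on null columns. pandas also has a direct shorthand for this anti-join pattern: passing `indicator=True` to `.merge()` adds a `_merge` column you can filter on. A small sketch with two made-up frames (not the course's movie data):

```python
import pandas as pd

left = pd.DataFrame({'id': [1, 2, 3], 'title': ['a', 'b', 'c']})
right = pd.DataFrame({'id': [2, 3, 4], 'tagline': ['x', 'y', 'z']})

# indicator=True labels each row 'left_only', 'right_only', or 'both'
both = left.merge(right, on='id', how='outer', indicator=True)

# rows of `left` with no match in `right`, without inspecting a value column
left_only = both[both['_merge'] == 'left_only']
print(left_only[['id', 'title']])
```

This is equivalent to the left-join-then-`isnull` trick, but it keeps working even when the right table has no column that is guaranteed to be non-null.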
**Chapter 19 – Training and Deploying TensorFlow Models at Scale**

_This notebook contains all the sample code in chapter 19._

<table align="left">
  <td>
    <a href="https://colab.research.google.com/github/ageron/handson-ml2/blob/master/19_training_and_deploying_at_scale.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
  </td>
  <td>
    <a target="_blank" href="https://kaggle.com/kernels/welcome?src=https://github.com/ageron/handson-ml2/blob/master/19_training_and_deploying_at_scale.ipynb"><img src="https://kaggle.com/static/images/open-in-kaggle.svg" /></a>
  </td>
</table>

# Setup

First, let's import a few common modules, ensure MatplotLib plots figures inline and prepare a function to save the figures. We also check that Python 3.5 or later is installed (although Python 2.x may work, it is deprecated so we strongly recommend you use Python 3 instead), as well as Scikit-Learn ≥0.20 and TensorFlow ≥2.0.

```
# Python ≥3.5 is required
import sys
assert sys.version_info >= (3, 5)

# Is this notebook running on Colab or Kaggle?
IS_COLAB = "google.colab" in sys.modules
IS_KAGGLE = "kaggle_secrets" in sys.modules

if IS_COLAB or IS_KAGGLE:
    !echo "deb http://storage.googleapis.com/tensorflow-serving-apt stable tensorflow-model-server tensorflow-model-server-universal" > /etc/apt/sources.list.d/tensorflow-serving.list
    !curl https://storage.googleapis.com/tensorflow-serving-apt/tensorflow-serving.release.pub.gpg | apt-key add -
    !apt update && apt-get install -y tensorflow-model-server
    %pip install -q -U tensorflow-serving-api

# Scikit-Learn ≥0.20 is required
import sklearn
assert sklearn.__version__ >= "0.20"

# TensorFlow ≥2.0 is required
import tensorflow as tf
from tensorflow import keras
assert tf.__version__ >= "2.0"

if not tf.config.list_physical_devices('GPU'):
    print("No GPU was detected. CNNs can be very slow without a GPU.")
    if IS_COLAB:
        print("Go to Runtime > Change runtime and select a GPU hardware accelerator.")
    if IS_KAGGLE:
        print("Go to Settings > Accelerator and select GPU.")

# Common imports
import numpy as np
import os

# to make this notebook's output stable across runs
np.random.seed(42)
tf.random.set_seed(42)

# To plot pretty figures
%matplotlib inline
import matplotlib as mpl
import matplotlib.pyplot as plt
mpl.rc('axes', labelsize=14)
mpl.rc('xtick', labelsize=12)
mpl.rc('ytick', labelsize=12)

# Where to save the figures
PROJECT_ROOT_DIR = "."
CHAPTER_ID = "deploy"
IMAGES_PATH = os.path.join(PROJECT_ROOT_DIR, "images", CHAPTER_ID)
os.makedirs(IMAGES_PATH, exist_ok=True)

def save_fig(fig_id, tight_layout=True, fig_extension="png", resolution=300):
    path = os.path.join(IMAGES_PATH, fig_id + "." + fig_extension)
    print("Saving figure", fig_id)
    if tight_layout:
        plt.tight_layout()
    plt.savefig(path, format=fig_extension, dpi=resolution)
```

# Deploying TensorFlow models to TensorFlow Serving (TFS)

We will use the REST API or the gRPC API.

## Save/Load a `SavedModel`

```
(X_train_full, y_train_full), (X_test, y_test) = keras.datasets.mnist.load_data()
X_train_full = X_train_full[..., np.newaxis].astype(np.float32) / 255.
X_test = X_test[..., np.newaxis].astype(np.float32) / 255.
X_valid, X_train = X_train_full[:5000], X_train_full[5000:] y_valid, y_train = y_train_full[:5000], y_train_full[5000:] X_new = X_test[:3] np.random.seed(42) tf.random.set_seed(42) model = keras.models.Sequential([ keras.layers.Flatten(input_shape=[28, 28, 1]), keras.layers.Dense(100, activation="relu"), keras.layers.Dense(10, activation="softmax") ]) model.compile(loss="sparse_categorical_crossentropy", optimizer=keras.optimizers.SGD(learning_rate=1e-2), metrics=["accuracy"]) model.fit(X_train, y_train, epochs=10, validation_data=(X_valid, y_valid)) np.round(model.predict(X_new), 2) model_version = "0001" model_name = "my_mnist_model" model_path = os.path.join(model_name, model_version) model_path !rm -rf {model_name} tf.saved_model.save(model, model_path) for root, dirs, files in os.walk(model_name): indent = ' ' * root.count(os.sep) print('{}{}/'.format(indent, os.path.basename(root))) for filename in files: print('{}{}'.format(indent + ' ', filename)) !saved_model_cli show --dir {model_path} !saved_model_cli show --dir {model_path} --tag_set serve !saved_model_cli show --dir {model_path} --tag_set serve \ --signature_def serving_default !saved_model_cli show --dir {model_path} --all ``` Let's write the new instances to a `npy` file so we can pass them easily to our model: ``` np.save("my_mnist_tests.npy", X_new) input_name = model.input_names[0] input_name ``` And now let's use `saved_model_cli` to make predictions for the instances we just saved: ``` !saved_model_cli run --dir {model_path} --tag_set serve \ --signature_def serving_default \ --inputs {input_name}=my_mnist_tests.npy np.round([[1.1347984e-04, 1.5187356e-07, 9.7032893e-04, 2.7640699e-03, 3.7826971e-06, 7.6876910e-05, 3.9140293e-08, 9.9559116e-01, 5.3502394e-05, 4.2665208e-04], [8.2443521e-04, 3.5493889e-05, 9.8826385e-01, 7.0466995e-03, 1.2957400e-07, 2.3389691e-04, 2.5639210e-03, 9.5886099e-10, 1.0314899e-03, 8.7952529e-08], [4.4693781e-05, 9.7028232e-01, 9.0526715e-03, 2.2641101e-03, 
4.8766597e-04, 2.8800720e-03, 2.2714981e-03, 8.3753867e-03, 4.0439744e-03, 2.9759688e-04]], 2) ``` ## TensorFlow Serving Install [Docker](https://docs.docker.com/install/) if you don't have it already. Then run: ```bash docker pull tensorflow/serving export ML_PATH=$HOME/ml # or wherever this project is docker run -it --rm -p 8500:8500 -p 8501:8501 \ -v "$ML_PATH/my_mnist_model:/models/my_mnist_model" \ -e MODEL_NAME=my_mnist_model \ tensorflow/serving ``` Once you are finished using it, press Ctrl-C to shut down the server. Alternatively, if `tensorflow_model_server` is installed (e.g., if you are running this notebook in Colab), then the following 3 cells will start the server: ``` os.environ["MODEL_DIR"] = os.path.split(os.path.abspath(model_path))[0] %%bash --bg nohup tensorflow_model_server \ --rest_api_port=8501 \ --model_name=my_mnist_model \ --model_base_path="${MODEL_DIR}" >server.log 2>&1 !tail server.log import json input_data_json = json.dumps({ "signature_name": "serving_default", "instances": X_new.tolist(), }) repr(input_data_json)[:1500] + "..." 
``` Now let's use TensorFlow Serving's REST API to make predictions: ``` import requests SERVER_URL = 'http://localhost:8501/v1/models/my_mnist_model:predict' response = requests.post(SERVER_URL, data=input_data_json) response.raise_for_status() # raise an exception in case of error response = response.json() response.keys() y_proba = np.array(response["predictions"]) y_proba.round(2) ``` ### Using the gRPC API ``` from tensorflow_serving.apis.predict_pb2 import PredictRequest request = PredictRequest() request.model_spec.name = model_name request.model_spec.signature_name = "serving_default" input_name = model.input_names[0] request.inputs[input_name].CopyFrom(tf.make_tensor_proto(X_new)) import grpc from tensorflow_serving.apis import prediction_service_pb2_grpc channel = grpc.insecure_channel('localhost:8500') predict_service = prediction_service_pb2_grpc.PredictionServiceStub(channel) response = predict_service.Predict(request, timeout=10.0) response ``` Convert the response to a tensor: ``` output_name = model.output_names[0] outputs_proto = response.outputs[output_name] y_proba = tf.make_ndarray(outputs_proto) y_proba.round(2) ``` Or to a NumPy array if your client does not include the TensorFlow library: ``` output_name = model.output_names[0] outputs_proto = response.outputs[output_name] shape = [dim.size for dim in outputs_proto.tensor_shape.dim] y_proba = np.array(outputs_proto.float_val).reshape(shape) y_proba.round(2) ``` ## Deploying a new model version ``` np.random.seed(42) tf.random.set_seed(42) model = keras.models.Sequential([ keras.layers.Flatten(input_shape=[28, 28, 1]), keras.layers.Dense(50, activation="relu"), keras.layers.Dense(50, activation="relu"), keras.layers.Dense(10, activation="softmax") ]) model.compile(loss="sparse_categorical_crossentropy", optimizer=keras.optimizers.SGD(learning_rate=1e-2), metrics=["accuracy"]) history = model.fit(X_train, y_train, epochs=10, validation_data=(X_valid, y_valid)) model_version = "0002" model_name 
= "my_mnist_model" model_path = os.path.join(model_name, model_version) model_path tf.saved_model.save(model, model_path) for root, dirs, files in os.walk(model_name): indent = ' ' * root.count(os.sep) print('{}{}/'.format(indent, os.path.basename(root))) for filename in files: print('{}{}'.format(indent + ' ', filename)) ``` **Warning**: You may need to wait a minute before the new model is loaded by TensorFlow Serving. ``` import requests SERVER_URL = 'http://localhost:8501/v1/models/my_mnist_model:predict' response = requests.post(SERVER_URL, data=input_data_json) response.raise_for_status() response = response.json() response.keys() y_proba = np.array(response["predictions"]) y_proba.round(2) ``` # Deploy the model to Google Cloud AI Platform Follow the instructions in the book to deploy the model to Google Cloud AI Platform, download the service account's private key and save it to the `my_service_account_private_key.json` in the project directory. Also, update the `project_id`: ``` project_id = "onyx-smoke-242003" import googleapiclient.discovery os.environ["GOOGLE_APPLICATION_CREDENTIALS"] = "my_service_account_private_key.json" model_id = "my_mnist_model" model_path = "projects/{}/models/{}".format(project_id, model_id) model_path += "/versions/v0001/" # if you want to run a specific version ml_resource = googleapiclient.discovery.build("ml", "v1").projects() def predict(X): input_data_json = {"signature_name": "serving_default", "instances": X.tolist()} request = ml_resource.predict(name=model_path, body=input_data_json) response = request.execute() if "error" in response: raise RuntimeError(response["error"]) return np.array([pred[output_name] for pred in response["predictions"]]) Y_probas = predict(X_new) np.round(Y_probas, 2) ``` # Using GPUs **Note**: `tf.test.is_gpu_available()` is deprecated. Instead, please use `tf.config.list_physical_devices('GPU')`. 
``` #tf.test.is_gpu_available() # deprecated tf.config.list_physical_devices('GPU') tf.test.gpu_device_name() tf.test.is_built_with_cuda() from tensorflow.python.client.device_lib import list_local_devices devices = list_local_devices() devices ``` # Distributed Training ``` keras.backend.clear_session() tf.random.set_seed(42) np.random.seed(42) def create_model(): return keras.models.Sequential([ keras.layers.Conv2D(filters=64, kernel_size=7, activation="relu", padding="same", input_shape=[28, 28, 1]), keras.layers.MaxPooling2D(pool_size=2), keras.layers.Conv2D(filters=128, kernel_size=3, activation="relu", padding="same"), keras.layers.Conv2D(filters=128, kernel_size=3, activation="relu", padding="same"), keras.layers.MaxPooling2D(pool_size=2), keras.layers.Flatten(), keras.layers.Dense(units=64, activation='relu'), keras.layers.Dropout(0.5), keras.layers.Dense(units=10, activation='softmax'), ]) batch_size = 100 model = create_model() model.compile(loss="sparse_categorical_crossentropy", optimizer=keras.optimizers.SGD(learning_rate=1e-2), metrics=["accuracy"]) model.fit(X_train, y_train, epochs=10, validation_data=(X_valid, y_valid), batch_size=batch_size) keras.backend.clear_session() tf.random.set_seed(42) np.random.seed(42) distribution = tf.distribute.MirroredStrategy() # Change the default all-reduce algorithm: #distribution = tf.distribute.MirroredStrategy( # cross_device_ops=tf.distribute.HierarchicalCopyAllReduce()) # Specify the list of GPUs to use: #distribution = tf.distribute.MirroredStrategy(devices=["/gpu:0", "/gpu:1"]) # Use the central storage strategy instead: #distribution = tf.distribute.experimental.CentralStorageStrategy() #if IS_COLAB and "COLAB_TPU_ADDR" in os.environ: # tpu_address = "grpc://" + os.environ["COLAB_TPU_ADDR"] #else: # tpu_address = "" #resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu_address) #tf.config.experimental_connect_to_cluster(resolver) #tf.tpu.experimental.initialize_tpu_system(resolver) 
#distribution = tf.distribute.experimental.TPUStrategy(resolver) with distribution.scope(): model = create_model() model.compile(loss="sparse_categorical_crossentropy", optimizer=keras.optimizers.SGD(learning_rate=1e-2), metrics=["accuracy"]) batch_size = 100 # must be divisible by the number of workers model.fit(X_train, y_train, epochs=10, validation_data=(X_valid, y_valid), batch_size=batch_size) model.predict(X_new) ``` Custom training loop: ``` keras.backend.clear_session() tf.random.set_seed(42) np.random.seed(42) K = keras.backend distribution = tf.distribute.MirroredStrategy() with distribution.scope(): model = create_model() optimizer = keras.optimizers.SGD() with distribution.scope(): dataset = tf.data.Dataset.from_tensor_slices((X_train, y_train)).repeat().batch(batch_size) input_iterator = distribution.make_dataset_iterator(dataset) @tf.function def train_step(): def step_fn(inputs): X, y = inputs with tf.GradientTape() as tape: Y_proba = model(X) loss = K.sum(keras.losses.sparse_categorical_crossentropy(y, Y_proba)) / batch_size grads = tape.gradient(loss, model.trainable_variables) optimizer.apply_gradients(zip(grads, model.trainable_variables)) return loss per_replica_losses = distribution.experimental_run(step_fn, input_iterator) mean_loss = distribution.reduce(tf.distribute.ReduceOp.SUM, per_replica_losses, axis=None) return mean_loss n_epochs = 10 with distribution.scope(): input_iterator.initialize() for epoch in range(n_epochs): print("Epoch {}/{}".format(epoch + 1, n_epochs)) for iteration in range(len(X_train) // batch_size): print("\rLoss: {:.3f}".format(train_step().numpy()), end="") print() ``` ## Training across multiple servers A TensorFlow cluster is a group of TensorFlow processes running in parallel, usually on different machines, and talking to each other to complete some work, for example training or executing a neural network. Each TF process in the cluster is called a "task" (or a "TF server"). 
It has an IP address, a port, and a type (also called its role or its job). The type can be `"worker"`, `"chief"`, `"ps"` (parameter server) or `"evaluator"`:

* Each **worker** performs computations, usually on a machine with one or more GPUs.
* The **chief** performs computations as well, but it also handles extra work such as writing TensorBoard logs or saving checkpoints. There is a single chief in a cluster, typically the first worker (i.e., worker #0).
* A **parameter server** (ps) only keeps track of variable values; it is usually on a CPU-only machine.
* The **evaluator** obviously takes care of evaluation. There is usually a single evaluator in a cluster.

The set of tasks that share the same type is often called a "job". For example, the "worker" job is the set of all workers.

To start a TensorFlow cluster, you must first define it. This means specifying all the tasks (IP address, TCP port, and type). For example, the following cluster specification defines a cluster with 3 tasks (2 workers and 1 parameter server). It's a dictionary with one key per job, and the values are lists of task addresses:

```
cluster_spec = {
    "worker": [
        "machine-a.example.com:2222",  # /job:worker/task:0
        "machine-b.example.com:2222"   # /job:worker/task:1
    ],
    "ps": ["machine-c.example.com:2222"]  # /job:ps/task:0
}
```

Every task in the cluster may communicate with every other task in the cluster, so make sure to configure your firewall to authorize all communications between these machines on these ports (it's usually simpler if you use the same port on every machine). When a task is started, it needs to be told which one it is: its type and index (the task index is also called the task id). A common way to specify everything at once (both the cluster spec and the current task's type and id) is to set the `TF_CONFIG` environment variable before starting the program.
It must be a JSON-encoded dictionary containing a cluster specification (under the `"cluster"` key), and the type and index of the task to start (under the `"task"` key). For example, the following `TF_CONFIG` environment variable defines the same cluster as above, with 2 workers and 1 parameter server, and specifies that the task to start is worker #1: ``` import os import json os.environ["TF_CONFIG"] = json.dumps({ "cluster": cluster_spec, "task": {"type": "worker", "index": 1} }) os.environ["TF_CONFIG"] ``` Some platforms (e.g., Google Cloud ML Engine) automatically set this environment variable for you. TensorFlow's `TFConfigClusterResolver` class reads the cluster configuration from this environment variable: ``` import tensorflow as tf resolver = tf.distribute.cluster_resolver.TFConfigClusterResolver() resolver.cluster_spec() resolver.task_type resolver.task_id ``` Now let's run a simpler cluster with just two worker tasks, both running on the local machine. We will use the `MultiWorkerMirroredStrategy` to train a model across these two tasks. The first step is to write the training code. As this code will be used to run both workers, each in its own process, we write this code to a separate Python file, `my_mnist_multiworker_task.py`. The code is relatively straightforward, but there are a couple important things to note: * We create the `MultiWorkerMirroredStrategy` before doing anything else with TensorFlow. * Only one of the workers will take care of logging to TensorBoard and saving checkpoints. As mentioned earlier, this worker is called the *chief*, and by convention it is usually worker #0. 
``` %%writefile my_mnist_multiworker_task.py import os import numpy as np import tensorflow as tf from tensorflow import keras import time # At the beginning of the program distribution = tf.distribute.MultiWorkerMirroredStrategy() resolver = tf.distribute.cluster_resolver.TFConfigClusterResolver() print("Starting task {}{}".format(resolver.task_type, resolver.task_id)) # Only worker #0 will write checkpoints and log to TensorBoard if resolver.task_id == 0: root_logdir = os.path.join(os.curdir, "my_mnist_multiworker_logs") run_id = time.strftime("run_%Y_%m_%d-%H_%M_%S") run_dir = os.path.join(root_logdir, run_id) callbacks = [ keras.callbacks.TensorBoard(run_dir), keras.callbacks.ModelCheckpoint("my_mnist_multiworker_model.h5", save_best_only=True), ] else: callbacks = [] # Load and prepare the MNIST dataset (X_train_full, y_train_full), (X_test, y_test) = keras.datasets.mnist.load_data() X_train_full = X_train_full[..., np.newaxis] / 255. X_valid, X_train = X_train_full[:5000], X_train_full[5000:] y_valid, y_train = y_train_full[:5000], y_train_full[5000:] with distribution.scope(): model = keras.models.Sequential([ keras.layers.Conv2D(filters=64, kernel_size=7, activation="relu", padding="same", input_shape=[28, 28, 1]), keras.layers.MaxPooling2D(pool_size=2), keras.layers.Conv2D(filters=128, kernel_size=3, activation="relu", padding="same"), keras.layers.Conv2D(filters=128, kernel_size=3, activation="relu", padding="same"), keras.layers.MaxPooling2D(pool_size=2), keras.layers.Flatten(), keras.layers.Dense(units=64, activation='relu'), keras.layers.Dropout(0.5), keras.layers.Dense(units=10, activation='softmax'), ]) model.compile(loss="sparse_categorical_crossentropy", optimizer=keras.optimizers.SGD(learning_rate=1e-2), metrics=["accuracy"]) model.fit(X_train, y_train, validation_data=(X_valid, y_valid), epochs=10, callbacks=callbacks) ``` In a real world application, there would typically be a single worker per machine, but in this example we're running both 
workers on the same machine, so they will both try to use all the available GPU RAM (if this machine has a GPU), and this will likely lead to an Out-Of-Memory (OOM) error. To avoid this, we could use the `CUDA_VISIBLE_DEVICES` environment variable to assign a different GPU to each worker. Alternatively, we can simply disable GPU support, like this: ``` os.environ["CUDA_VISIBLE_DEVICES"] = "-1" ``` We are now ready to start both workers, each in its own process, using Python's `subprocess` module. Before we start each process, we need to set the `TF_CONFIG` environment variable appropriately, changing only the task index: ``` import subprocess cluster_spec = {"worker": ["127.0.0.1:9901", "127.0.0.1:9902"]} for index, worker_address in enumerate(cluster_spec["worker"]): os.environ["TF_CONFIG"] = json.dumps({ "cluster": cluster_spec, "task": {"type": "worker", "index": index} }) subprocess.Popen("python my_mnist_multiworker_task.py", shell=True) ``` That's it! Our TensorFlow cluster is now running, but we can't see it in this notebook because it's running in separate processes (but if you are running this notebook in Jupyter, you can see the worker logs in Jupyter's server logs). Since the chief (worker #0) is writing to TensorBoard, we use TensorBoard to view the training progress. Run the following cell, then click on the settings button (i.e., the gear icon) in the TensorBoard interface and check the "Reload data" box to make TensorBoard automatically refresh every 30s. Once the first epoch of training is finished (which may take a few minutes), and once TensorBoard refreshes, the SCALARS tab will appear. Click on this tab to view the progress of the model's training and validation accuracy. ``` %load_ext tensorboard %tensorboard --logdir=./my_mnist_multiworker_logs --port=6006 ``` That's it! Once training is over, the best checkpoint of the model will be available in the `my_mnist_multiworker_model.h5` file. 
You can load it using `keras.models.load_model()` and use it for predictions, as usual:

```
from tensorflow import keras

model = keras.models.load_model("my_mnist_multiworker_model.h5")
Y_pred = model.predict(X_new)
np.argmax(Y_pred, axis=-1)
```

And that's all for today! Hope you found this useful. 😊

# Exercise Solutions

## 1. to 8.

See Appendix A.

## 9.

_Exercise: Train a model (any model you like) and deploy it to TF Serving or Google Cloud AI Platform. Write the client code to query it using the REST API or the gRPC API. Update the model and deploy the new version. Your client code will now query the new version. Roll back to the first version._

Please follow the steps in the <a href="#Deploying-TensorFlow-models-to-TensorFlow-Serving-(TFS)">Deploying TensorFlow models to TensorFlow Serving</a> section above.

## 10.

_Exercise: Train any model across multiple GPUs on the same machine using the `MirroredStrategy` (if you do not have access to GPUs, you can use Colaboratory with a GPU Runtime and create two virtual GPUs). Train the model again using the `CentralStorageStrategy` and compare the training time._

Please follow the steps in the [Distributed Training](#Distributed-Training) section above.

## 11.

_Exercise: Train a small model on Google Cloud AI Platform, using black box hyperparameter tuning._

Please follow the instructions on pages 716-717 of the book. You can also read [this documentation page](https://cloud.google.com/ai-platform/training/docs/hyperparameter-tuning-overview) and go through the example in this nice [blog post](https://towardsdatascience.com/how-to-do-bayesian-hyper-parameter-tuning-on-a-blackbox-model-882009552c6d) by Lak Lakshmanan.
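Black-box tuning treats a training job as an opaque function of its hyperparameters. The AI Platform service does this with Bayesian optimization; as a rough, dependency-free illustration of the idea (not the service's API), here is a plain-Python random-search sketch in which `objective` is a hypothetical stand-in for a training run that returns a validation score:

```python
import random

# Hypothetical stand-in for one training job: returns a validation score
# for a hyperparameter combination (here, best near lr=1e-2, n_units=100).
def objective(learning_rate, n_units):
    return -abs(learning_rate - 1e-2) - abs(n_units - 100) / 1000

random.seed(42)
best_score, best_params = float("-inf"), None
for trial in range(20):
    params = {
        "learning_rate": 10 ** random.uniform(-4, -1),  # sampled log-uniformly
        "n_units": random.randint(10, 300),
    }
    score = objective(**params)
    if score > best_score:
        best_score, best_params = score, params

print(best_params)
```

A real tuning job would replace `objective` with a full training run and report the metric back to the tuning service, which then picks the next trial's hyperparameters more cleverly than random search.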
# Data pre-processing for Azure Data Explorer

<img src="https://github.com/Azure/azure-kusto-spark/raw/master/kusto_spark.png" style="border: 1px solid #aaa; border-radius: 10px 10px 10px 10px; box-shadow: 5px 5px 5px #aaa"/>

We often see customer scenarios where historical data has to be migrated to Azure Data Explorer (ADX). Although ADX has very powerful data-transformation capabilities via [update policies](https://docs.microsoft.com/azure/data-explorer/kusto/management/updatepolicy), sometimes more or less complex data engineering tasks must be done upfront. This happens if the original data structure is too complex or if single data elements are too big, hitting Azure Data Explorer's limit of 1 MB for dynamic columns or the maximum ingest file size of 1 GB for uncompressed data (see also [Comparing ingestion methods and tools](https://docs.microsoft.com/azure/data-explorer/ingest-data-overview#comparing-ingestion-methods-and-tools)).

Let's think about an Industrial Internet-of-Things (IIoT) use case where you get data from several production lines. In each production line several devices read humidity, pressure, etc. The following example shows a scenario where a one-to-many relationship is implemented within an array. With this you might get very large columns (with millions of device readings per production line) that might exceed the 1 MB limit for dynamic columns in Azure Data Explorer. In this case you need to do some pre-processing.

The data has already been uploaded to Azure Storage. You will start by reading the JSON data into a data frame:

```
inputpath = "wasbs://synapsework@kustosamplefiles.blob.core.windows.net/*.json"
# optional, for the output to Azure Storage:
#outputpath = "<your-storage-path>"
df = spark.read.format("json").load(inputpath)
```

The notebook has a parameter `IngestDate`, which will be used to set the `extentsCreationTime`. You can call this notebook from Azure Data Factory for all days you want to load to Azure Data Explorer.
Alternatively you can make use of a partitioning policy.

```
dbutils.widgets.text("wIngestDate", "2021-08-06T00:00:00.000Z", "Ingestion Date")
IngestDate = dbutils.widgets.get("wIngestDate")
display(df)
```

We see that the dataframe has some complex datatypes. The only thing that we want to change here is getting rid of the array, so that the resulting dataset has a row for every entry in the measurement array. *How can we achieve this?*

PySpark SQL has some very powerful functions for transformations of complex datatypes. We will make use of the [explode function](https://spark.apache.org/docs/latest/api/python/reference/api/pyspark.sql.functions.explode.html). In this case `explode("measurement")` will give us a resulting dataframe with a single row per array element. Finally we only have to drop the original measurement column (it is the original structure):

```
from pyspark.sql.functions import *

df_explode = df.select("*", explode("measurement").alias("device")).drop("measurement")
```

With this we have already done the necessary data transformation with one line of code. Let's do some final prettifying. As we are already preprocessing the data and want to get rid of the complex data types, we select the struct elements to get a simplified table:

```
df_all_in_column = df_explode.select("header.*", "device.header.*", "device.*", "ProdLineData.*").drop("header")
display(df_all_in_column)
```

We are setting the `extentsCreationTime` to the notebook parameter *IngestDate*. For other ingestion properties see [here](https://github.com/Azure/azure-kusto-spark/blob/master/samples/src/main/python/pyKusto.py).

```
extentsCreationTime = sc._jvm.org.joda.time.DateTime.parse(IngestDate)
sp = sc._jvm.com.microsoft.kusto.spark.datasink.SparkIngestionProperties(
    False, None, None, None, None, extentsCreationTime, None, None)
```

Finally, we write the resulting dataframe back to Azure Data Explorer.
Prerequisites for doing this are:

* the target table created in the target database (`.create table measurement (ProductionLineId : string, deviceId:string, enqueuedTime:datetime, humidity:real, humidity_unit:string, temperature:real, temperature_unit:string, pressure:real, pressure_unit:string, reading : dynamic)`)
* a service principal created for the ADX access
* the service principal (AAD application) accessing ADX has sufficient permissions (add the ingestor and viewer roles)
* the latest Kusto library installed from Maven; see also the [Azure Data Explorer Connector for Apache Spark documentation](https://github.com/Azure/azure-kusto-spark#usage)

```
df_all_in_column.write. \
    format("com.microsoft.kusto.spark.datasource"). \
    option("kustoCluster", "https://<yourcluster>"). \
    option("kustoDatabase", "<your-database>"). \
    option("kustoTable", "<your-table>"). \
    option("sparkIngestionPropertiesJson", sp.toString()). \
    option("kustoAadAppId", "<app-id>"). \
    option("kustoAadAppSecret", dbutils.secrets.get(scope="<scope-name>", key="<service-credential-key-name>")). \
    option("kustoAadAuthorityID", "<tenant-id>"). \
    mode("Append"). \
    save()
```

You might also consider writing the data to Azure Storage (this might also make sense for more complex transformation pipelines as an intermediate staging step):

```
# df_all_in_column.write.mode('overwrite').json(outputpath)
```
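As a quick sanity check of what the explode step does, without needing a Spark session, here is a plain-Python sketch of the same one-to-many flattening (the field names are illustrative, not the exact schema of the sample files):

```python
# Each element of the "measurement" array becomes its own row, with the
# parent production-line fields repeated -- this is what explode() does.
rows = [
    {"ProductionLineId": "line-1",
     "measurement": [{"deviceId": "dev-a", "humidity": 40.2},
                     {"deviceId": "dev-b", "humidity": 41.7}]},
    {"ProductionLineId": "line-2",
     "measurement": [{"deviceId": "dev-c", "humidity": 39.9}]},
]

exploded = [
    {**{k: v for k, v in row.items() if k != "measurement"}, "device": m}
    for row in rows
    for m in row["measurement"]
]

for r in exploded:
    print(r["ProductionLineId"], r["device"]["deviceId"])
```

Two input rows with three array elements in total become three flat rows, which is why the exploded dataframe stays under the dynamic-column size limit even for production lines with many readings.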
# WGAN with MNIST (or Fashion MNIST) * `Wasserstein GAN`, [arXiv:1701.07875](https://arxiv.org/abs/1701.07875) * Martin Arjovsky, Soumith Chintala, and Léon Bottou * This code works with TensorFlow version 2.0 * Implemented with [`tf.keras.layers`](https://www.tensorflow.org/versions/r2.0/api_docs/python/tf/keras/layers) [`tf.losses`](https://www.tensorflow.org/versions/r2.0/api_docs/python/tf/losses) * Use `transposed_conv2d` and `conv2d` for the Generator and Discriminator, respectively. * I do not use a `dense` layer, for model-architecture consistency. (So my architecture is different from the original DCGAN structure) * Based on the DCGAN model ## Import modules ``` from __future__ import absolute_import from __future__ import division from __future__ import print_function from __future__ import unicode_literals import os import sys import time import glob import numpy as np import matplotlib.pyplot as plt %matplotlib inline import PIL import imageio from IPython import display import tensorflow as tf from tensorflow.keras import layers sys.path.append(os.path.dirname(os.path.abspath('.'))) from utils.image_utils import * from utils.ops import * os.environ["CUDA_VISIBLE_DEVICES"]="0" ``` ## Setting hyperparameters ``` # Training Flags (hyperparameter configuration) model_name = 'wgan' train_dir = os.path.join('train', model_name, 'exp1') dataset_name = 'mnist' assert dataset_name in ['mnist', 'fashion_mnist'] max_epochs = 100 save_model_epochs = 10 print_steps = 200 save_images_epochs = 1 batch_size = 64 learning_rate_D = 5e-5 learning_rate_G = 5e-5 k = 5 # the number of steps of learning D before learning G (not used in this code) num_examples_to_generate = 25 noise_dim = 100 clip_value = 0.01 # clipping value for D weights in order to implement a `1-Lipschitz function` ``` ## Load the MNIST dataset ``` # Load training and eval data from tf.keras if dataset_name == 'mnist': (train_images, train_labels), _ = \ tf.keras.datasets.mnist.load_data() else: (train_images,
train_labels), _ = \ tf.keras.datasets.fashion_mnist.load_data() train_images = train_images.reshape(-1, MNIST_SIZE, MNIST_SIZE, 1).astype('float32') #train_images = train_images / 255. # Normalize the images to [0, 1] train_images = (train_images - 127.5) / 127.5 # Normalize the images to [-1, 1] ``` ## Set up dataset with `tf.data` ### Create an input pipeline with `tf.data.Dataset` ``` #tf.random.set_seed(219) # for train N = len(train_images) train_dataset = tf.data.Dataset.from_tensor_slices(train_images) train_dataset = train_dataset.shuffle(buffer_size=N) train_dataset = train_dataset.batch(batch_size=batch_size, drop_remainder=True) print(train_dataset) ``` ## Create the generator and discriminator models ``` class Generator(tf.keras.Model): """Build a generator that maps latent space to real space. G(z): z -> x """ def __init__(self): super(Generator, self).__init__() self.conv1 = ConvTranspose(256, 3, padding='valid') self.conv2 = ConvTranspose(128, 3, padding='valid') self.conv3 = ConvTranspose(64, 4) self.conv4 = ConvTranspose(1, 4, apply_batchnorm=False, activation='tanh') def call(self, inputs, training=True): """Run the model.""" # inputs: [1, 1, 100] conv1 = self.conv1(inputs, training=training) # conv1: [3, 3, 256] conv2 = self.conv2(conv1, training=training) # conv2: [7, 7, 128] conv3 = self.conv3(conv2, training=training) # conv3: [14, 14, 64] generated_images = self.conv4(conv3, training=training) # generated_images: [28, 28, 1] return generated_images class Discriminator(tf.keras.Model): """Build a discriminator that determines whether an image x is real or fake.
D(x): x -> [0, 1] """ def __init__(self): super(Discriminator, self).__init__() self.conv1 = Conv(64, 4, 2, apply_batchnorm=False, activation='leaky_relu') self.conv2 = Conv(128, 4, 2, activation='leaky_relu') self.conv3 = Conv(256, 3, 2, padding='valid', activation='leaky_relu') self.conv4 = Conv(1, 3, 1, padding='valid', apply_batchnorm=False, activation='none') def call(self, inputs, training=True): """Run the model.""" # inputs: [28, 28, 1] conv1 = self.conv1(inputs) # conv1: [14, 14, 64] conv2 = self.conv2(conv1) # conv2: [7, 7, 128] conv3 = self.conv3(conv2) # conv3: [3, 3, 256] conv4 = self.conv4(conv3) # conv4: [1, 1, 1] discriminator_logits = tf.squeeze(conv4, axis=[1, 2]) # discriminator_logits: [1,] return discriminator_logits generator = Generator() discriminator = Discriminator() ``` ### Plot generated image via generator network ``` noise = tf.random.normal([1, 1, 1, noise_dim]) generated_image = generator(noise, training=False) plt.imshow(generated_image[0, :, :, 0], cmap='gray') ``` ### Test discriminator network * **CAUTION**: the outputs of the discriminator are **logits** (unnormalized probabilities), NOT probabilities ``` decision = discriminator(generated_image) print(decision) ``` ## Define the loss functions and the optimizer ``` # use logits for consistency with previous code I made # `tf.losses` and `tf.keras.losses` are the same API (alias) bce = tf.losses.BinaryCrossentropy(from_logits=True) mse = tf.losses.MeanSquaredError() def WGANLoss(logits, is_real=True): """Computes the Wasserstein GAN loss Args: logits (`2-rank Tensor`): logits is_real (`bool`): boolean, True means `-` sign, False means `+` sign. Returns: loss (`0-rank Tensor`): the WGAN loss value. """ loss = tf.reduce_mean(logits) if is_real: loss = -loss return loss def GANLoss(logits, is_real=True, use_lsgan=True): """Computes standard GAN or LSGAN loss between `logits` and `labels`. Args: logits (`2-rank Tensor`): logits.
is_real (`bool`): True means `1` labeling, False means `0` labeling. use_lsgan (`bool`): True means LSGAN loss, False means standard GAN loss. Returns: loss (`0-rank Tensor`): the standard GAN or LSGAN loss value. (binary_cross_entropy or mean_squared_error) """ if is_real: labels = tf.ones_like(logits) else: labels = tf.zeros_like(logits) if use_lsgan: loss = mse(labels, tf.nn.sigmoid(logits)) else: loss = bce(labels, logits) return loss def discriminator_loss(real_logits, fake_logits): # losses of real with label "1" real_loss = WGANLoss(logits=real_logits, is_real=True) # losses of fake with label "0" fake_loss = WGANLoss(logits=fake_logits, is_real=False) return real_loss + fake_loss def generator_loss(fake_logits): # losses of Generator with label "1" that used to fool the Discriminator return WGANLoss(logits=fake_logits, is_real=True) discriminator_optimizer = tf.keras.optimizers.RMSprop(learning_rate_D) generator_optimizer = tf.keras.optimizers.RMSprop(learning_rate_G) ``` ## Checkpoints (Object-based saving) ``` checkpoint_dir = train_dir if not tf.io.gfile.exists(checkpoint_dir): tf.io.gfile.makedirs(checkpoint_dir) checkpoint_prefix = os.path.join(checkpoint_dir, "ckpt") checkpoint = tf.train.Checkpoint(generator_optimizer=generator_optimizer, discriminator_optimizer=discriminator_optimizer, generator=generator, discriminator=discriminator) ``` ## Training ``` # keeping the random vector constant for generation (prediction) so # it will be easier to see the improvement of the gan. # To visualize progress in the animated GIF const_random_vector_for_saving = tf.random.uniform([num_examples_to_generate, 1, 1, noise_dim], minval=-1.0, maxval=1.0) ``` ### Define training one step function ``` # Notice the use of `tf.function` # This annotation causes the function to be "compiled". 
@tf.function def discriminator_train_step(images): # generating noise from a uniform distribution noise = tf.random.uniform([batch_size, 1, 1, noise_dim], minval=-1.0, maxval=1.0) with tf.GradientTape() as disc_tape: generated_images = generator(noise, training=True) real_logits = discriminator(images, training=True) fake_logits = discriminator(generated_images, training=True) gen_loss = generator_loss(fake_logits) disc_loss = discriminator_loss(real_logits, fake_logits) gradients_of_discriminator = disc_tape.gradient(disc_loss, discriminator.trainable_variables) discriminator_optimizer.apply_gradients(zip(gradients_of_discriminator, discriminator.trainable_variables)) # clip the discriminator weights to enforce the 1-Lipschitz constraint for var in discriminator.trainable_variables: var.assign(tf.clip_by_value(var, -clip_value, clip_value)) return gen_loss, disc_loss # Notice the use of `tf.function` # This annotation causes the function to be "compiled". @tf.function def generator_train_step(): # generating noise from a uniform distribution noise = tf.random.uniform([batch_size, 1, 1, noise_dim], minval=-1.0, maxval=1.0) with tf.GradientTape() as gen_tape: generated_images = generator(noise, training=True) fake_logits = discriminator(generated_images, training=True) gen_loss = generator_loss(fake_logits) gradients_of_generator = gen_tape.gradient(gen_loss, generator.trainable_variables) generator_optimizer.apply_gradients(zip(gradients_of_generator, generator.trainable_variables)) ``` ### Train full steps ``` print('Start Training.') num_batches_per_epoch = int(N / batch_size) global_step = tf.Variable(0, trainable=False) num_learning_critic = 0 for epoch in range(max_epochs): for step, images in enumerate(train_dataset): start_time = time.time() if num_learning_critic < k: gen_loss, disc_loss = discriminator_train_step(images) num_learning_critic += 1 global_step.assign_add(1) else: generator_train_step() num_learning_critic = 0 if global_step.numpy() %
print_steps == 0: epochs = epoch + step / float(num_batches_per_epoch) duration = time.time() - start_time examples_per_sec = batch_size / float(duration) display.clear_output(wait=True) print("Epochs: {:.2f} global_step: {} Wasserstein distance: {:.3g} loss_G: {:.3g} ({:.2f} examples/sec; {:.3f} sec/batch)".format( epochs, global_step.numpy(), -disc_loss, gen_loss, examples_per_sec, duration)) random_vector_for_sampling = tf.random.uniform([num_examples_to_generate, 1, 1, noise_dim], minval=-1.0, maxval=1.0) sample_images = generator(random_vector_for_sampling, training=False) print_or_save_sample_images(sample_images.numpy(), num_examples_to_generate) if (epoch + 1) % save_images_epochs == 0: display.clear_output(wait=True) print("These images were saved at epoch {}".format(epoch+1)) sample_images = generator(const_random_vector_for_saving, training=False) print_or_save_sample_images(sample_images.numpy(), num_examples_to_generate, is_square=True, is_save=True, epoch=epoch+1, checkpoint_dir=checkpoint_dir) # saving (checkpoint) the model every save_model_epochs if (epoch + 1) % save_model_epochs == 0: checkpoint.save(file_prefix=checkpoint_prefix) print('Training Done.') # generating after the final epoch display.clear_output(wait=True) sample_images = generator(const_random_vector_for_saving, training=False) print_or_save_sample_images(sample_images.numpy(), num_examples_to_generate, is_square=True, is_save=True, epoch=epoch+1, checkpoint_dir=checkpoint_dir) ``` ## Restore the latest checkpoint ``` # restoring the latest checkpoint in checkpoint_dir checkpoint.restore(tf.train.latest_checkpoint(checkpoint_dir)) ``` ## Display an image using the epoch number ``` display_image(max_epochs, checkpoint_dir=checkpoint_dir) ``` ## Generate a GIF of all the saved images ``` filename = model_name + '_' + dataset_name + '.gif' generate_gif(filename, checkpoint_dir) display.Image(filename=filename + '.png') ```
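Stripped of TensorFlow, the critic objective in `discriminator_loss` and the weight clipping above reduce to a few lines of NumPy; the logit and weight values below are made up for illustration:

```python
import numpy as np

# WGANLoss(real, is_real=True) = -mean(real); WGANLoss(fake, is_real=False) = +mean(fake)
real_logits = np.array([1.2, 0.8, 1.0])    # critic scores on real images (made up)
fake_logits = np.array([-0.5, -0.3, -0.7])  # critic scores on generated images (made up)

disc_loss = -real_logits.mean() + fake_logits.mean()
wasserstein_estimate = -disc_loss  # the quantity printed by the training loop

# after each critic step, every weight is clipped to [-clip_value, clip_value]
clip_value = 0.01
weights = np.array([0.5, -0.02, 0.003])
clipped = np.clip(weights, -clip_value, clip_value)
```

Here `-disc_loss` is the running estimate of the Wasserstein distance reported during training, and clipping is the crude way the original WGAN enforces the Lipschitz constraint (later variants such as WGAN-GP replace it with a gradient penalty).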
github_jupyter
# Artificial Neural Networks ## About this notebook This notebook kernel was created to help you understand more about machine learning. I intend to create tutorials with several machine learning algorithms, from basic to advanced. I hope I can help you on this data science trail. For any information, you can contact me through the link below. Contact me here: https://www.linkedin.com/in/vitorgamalemos/ ## Introduction <img src="https://media.springernature.com/original/springer-static/image/art%3A10.1007%2Fs40846-016-0191-3/MediaObjects/40846_2016_191_Fig1_HTML.gif"> <p style="text-align: justify;">Artificial Neural Networks are mathematical models inspired by the human brain, specifically its ability to learn, process, and perform tasks. Artificial Neural Networks are powerful tools that assist in solving complex problems, mainly in the areas of combinatorial optimization and machine learning. In this context, artificial neural networks have the most varied applications possible, as such models can adapt to the situations presented, ensuring a gradual increase in performance without any human interference. We can say that Artificial Neural Networks are potent methods that give computers a new possibility: a machine is no longer stuck with preprogrammed rules, and it gains various ways to learn from its own mistakes.</p> ## Biological Model <img src="https://www.neuroskills.com/images/photo-500x500-neuron.png"> <p style="text-align: justify;">Artificial neurons are designed to mimic aspects of their biological counterparts. The neuron is one of the fundamental units that make up the entire structure of the brain and central nervous system; such cells are responsible for transmitting information through the electrical potential difference across their membrane. In this context, a biological neuron can be divided as follows.</p> **Dendrites** – thin branches of the nerve cell.
They receive nerve input from other parts of the body. **Soma** – acts as a summation function. As positive and negative signals (excitatory and inhibitory, respectively) arrive in the soma from the dendrites, they are added together. **Axon** – gets its signal from the summation behavior that occurs inside the soma. It is formed by a single extended filament located along the neuron. The axon is responsible for sending nerve impulses out of the cell. ## Artificial Neuron in Mathematical Notation In general terms, an input X is multiplied by a weight W and a bias b is added, producing the net activation. <img style="max-width:60%;max-height:60%;" src="https://miro.medium.com/max/1290/1*-JtN9TWuoZMz7z9QKbT85A.png"> We can summarize an artificial neuron with the following mathematical expression: $$ \hat{y} = f\left(\text{net}\right)= f\left(\vec{w}\cdot\vec{x}+b\right) = f\left(\sum_{i=1}^{n}{w_i x_i} + b\right) $$ ## The SingleLayer Perceptron <p style="text-align: justify;">The Perceptron and its learning algorithm pioneered research in neurocomputing. The perceptron is an algorithm for supervised learning of binary classifiers [1]. A binary classifier is a function which can decide whether or not an input, represented by a vector of numbers, belongs to some specific class. It is a type of linear classifier, i.e. a classification algorithm that makes its predictions based on a linear predictor function combining a set of weights with the feature vector.</p> <img src="https://www.edureka.co/blog/wp-content/uploads/2017/12/Perceptron-Learning-Algorithm_03.gif"> #### References - Freund, Y.; Schapire, R. E. (1999). "Large margin classification using the perceptron algorithm" (PDF). Machine Learning. - Aizerman, M. A.; Braverman, E. M.; Rozonoer, L. I. (1964). "Theoretical foundations of the potential function method in pattern recognition learning". Automation and Remote Control. 25: 821–837.
- Mohri, Mehryar and Rostamizadeh, Afshin (2013). Perceptron Mistake Bounds. ## The SingleLayer Perceptron Learning Learning proceeds by computing the prediction of the perceptron: ### Basic Neuron $$ \hat{y} = f\left(\vec{w}\cdot\vec{x} + b\right) = f\left( w_{1}x_{1} + w_{2}x_{2} + \cdots + w_{n}x_{n}+b\right)\, $$ After that, we update the weights and the bias as follows: $$ \hat{w_i} = w_i + \alpha (y - \hat{y}) x_{i} \,,\ i=1,\ldots,n\,;\\ $$ $$ \hat{b} = b + \alpha (y - \hat{y})\,. $$ ``` import numpy as np import pandas as pd import matplotlib.pyplot as plt class SingleLayerPerceptron: def __init__(self, my_weights, my_bias, learningRate=0.05): self.weights = my_weights self.bias = my_bias self.learningRate = learningRate def activation(self, net): answer = 1 if net > 0 else 0 return answer def neuron(self, inputs): neuronArchitecture = np.dot(self.weights, inputs) + self.bias return neuronArchitecture def neuron_propagate(self, inputs): processing = self.neuron(inputs) return self.activation(processing) def training(self, inputs, output): output_prev = self.neuron_propagate(inputs) self.weights = [W + X * self.learningRate * (output - output_prev) for (W, X) in zip(self.weights, inputs)] self.bias += self.learningRate * (output - output_prev) error_calculation = np.abs(output_prev - output) return error_calculation data = pd.DataFrame(columns=('x1', 'x2'), data=np.random.uniform(size=(600,2))) data.head() def show_dataset(data, ax): data[data.y==1].plot(kind='scatter', ax=ax, x='x1', y='x2', color='blue') data[data.y==0].plot(kind='scatter', ax=ax, x='x1', y='x2', color='red') plt.grid() plt.title(' My Dataset') ax.set_xlim(-0.1,1.1) ax.set_ylim(-0.1,1.1) def testing(inputs): answer = int(np.sum(inputs) > 1) return answer data['y'] = data.apply(testing, axis=1) fig = plt.figure(figsize=(10,10)) show_dataset(data, fig.gca()) InitialWeights = [0.1, 0.1] InitialBias = 0.01 LearningRate = 0.1 SLperceptron = SingleLayerPerceptron(InitialWeights, InitialBias, LearningRate) import
random, itertools def showAll(perceptron, data, threshold, ax=None): if ax is None: fig = plt.figure(figsize=(5,4)) ax = fig.gca() show_dataset(data, ax) show_threshold(perceptron, ax) title = 'training={}'.format(threshold + 1) ax.set_title(title) def trainingData(SinglePerceptron, inputs): count = 0 for i, line in inputs.iterrows(): count = count + SinglePerceptron.training(line[0:2], line[2]) return count def limit(neuron, inputs): weights_0 = neuron.weights[0] weights_1 = neuron.weights[1] bias = neuron.bias threshold = -weights_0 * inputs - bias threshold = threshold / weights_1 return threshold def show_threshold(SinglePerceptron, ax): xlim = plt.gca().get_xlim() ylim = plt.gca().get_ylim() x2 = [limit(SinglePerceptron, x1) for x1 in xlim] ax.plot(xlim, x2, color="yellow") ax.set_xlim(-0.1,1.1) ax.set_ylim(-0.1,1.1) f, axarr = plt.subplots(3, 4, sharex=True, sharey=True, figsize=(12,12)) axs = list(itertools.chain.from_iterable(axarr)) until = 12 for interaction in range(until): showAll(SLperceptron, data, interaction, ax=axs[interaction]) trainingData(SLperceptron, data) ``` Example using Multilayer Perceptron (no libraries): https://www.kaggle.com/vitorgamalemos/iris-flower-using-multilayer-perceptron/
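The update rule coded in `SingleLayerPerceptron.training` can also be watched converging on a toy problem. The sketch below learns an AND gate; it uses `alpha = 1` and integer weights (an assumption made here to keep the arithmetic exact, not the values used in the notebook):

```python
def step(net):
    # same activation as in the class above: 1 if net > 0 else 0
    return 1 if net > 0 else 0

# AND gate: linearly separable, so the perceptron rule converges
X = [(0, 0), (0, 1), (1, 0), (1, 1)]
Y = [0, 0, 0, 1]

w, b, alpha = [0, 0], 0, 1
for _ in range(10):  # a handful of epochs is enough here
    for (x1, x2), y in zip(X, Y):
        y_hat = step(w[0] * x1 + w[1] * x2 + b)
        # update rule: w_i <- w_i + alpha*(y - y_hat)*x_i, b <- b + alpha*(y - y_hat)
        w[0] += alpha * (y - y_hat) * x1
        w[1] += alpha * (y - y_hat) * x2
        b += alpha * (y - y_hat)

preds = [step(w[0] * x1 + w[1] * x2 + b) for (x1, x2) in X]
# preds now matches Y: only the input (1, 1) fires
```

Because the data is linearly separable, the perceptron convergence theorem guarantees the loop stops making mistakes after finitely many updates; for a non-separable target such as XOR, the same loop would cycle forever.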
github_jupyter
``` from keras.models import Sequential from keras.layers import Dense from keras.callbacks import TensorBoard from keras.layers import * import numpy from sklearn.model_selection import train_test_split # ignoring the first row (header) # and the first column (unique experiment id, which I'm not using here) dataset = numpy.loadtxt("/results/shadow_robot_dataset.csv", skiprows=1, usecols=range(1,30), delimiter=",") ``` # Loading the data Each row of my dataset contains the following: |0 | 1|2|3|4|5|6|7|8|9|10|11|12|13|14|15|16|17|18|19|20|21|22|23|24|25|26|27|28|29| |:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:| | experiment_number | robustness| H1_F1J2_pos | H1_F1J2_vel | H1_F1J2_effort | H1_F1J3_pos | H1_F1J3_vel | H1_F1J3_effort | H1_F1J1_pos | H1_F1J1_vel | H1_F1J1_effort | H1_F3J1_pos | H1_F3J1_vel | H1_F3J1_effort | H1_F3J2_pos | H1_F3J2_vel | H1_F3J2_effort | H1_F3J3_pos | H1_F3J3_vel | H1_F3J3_effort | H1_F2J1_pos | H1_F2J1_vel | H1_F2J1_effort | H1_F2J3_pos | H1_F2J3_vel | H1_F2J3_effort | H1_F2J2_pos | H1_F2J2_vel | H1_F2J2_effort | measurement_number| My input vector contains the velocity and effort for each joint. I'm creating the vector `X` containing those below: ``` # Getting the header header = "" with open('/results/shadow_robot_dataset.csv', 'r') as f: header = f.readline() header = header.strip("\n").split(',') header = [i.strip(" ") for i in header] # only use velocity and effort, not position saved_cols = [] for index,col in enumerate(header[1:]): if ("vel" in col) or ("eff" in col): saved_cols.append(index) new_X = [] for x in dataset: new_X.append([x[i] for i in saved_cols]) X = numpy.array(new_X) ``` My output vector is the predicted grasp robustness. ``` Y = dataset[:,0] ``` We are also splitting the dataset into a training set and a test set.
This gives us 4 sets: * `X_train` associated with its `Y_train` * `X_test` associated with its `Y_test` We also discretize the output: 1 is a stable grasp and 0 is unstable. A grasp is considered stable if the robustness value is above `GOOD_GRASP_THRESHOLD` (50 in the code below). ``` # fix random seed for reproducibility # and splitting the dataset seed = 7 numpy.random.seed(seed) X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size=0.20, random_state=seed) # this is a sensible grasp threshold for stability GOOD_GRASP_THRESHOLD = 50 # we're also storing the best and worst grasps of the test set to do some sanity checks on them itemindex = numpy.where(Y_test>1.05*GOOD_GRASP_THRESHOLD) best_grasps = X_test[itemindex[0]] itemindex = numpy.where(Y_test<=0.95*GOOD_GRASP_THRESHOLD) bad_grasps = X_test[itemindex[0]] # discretizing the grasp quality for stable or unstable grasps Y_train = numpy.array([int(i>GOOD_GRASP_THRESHOLD) for i in Y_train]) Y_train = numpy.reshape(Y_train, (Y_train.shape[0],)) Y_test = numpy.array([int(i>GOOD_GRASP_THRESHOLD) for i in Y_test]) Y_test = numpy.reshape(Y_test, (Y_test.shape[0],)) ``` # Creating the model I'm now creating a model to train. It's a very simple topology. Feel free to play with it and experiment with different model shapes. ``` # create model model = Sequential() model.add(Dense(20*len(X[0]), use_bias=True, input_dim=len(X[0]), activation='relu')) model.add(Dropout(0.5)) model.add(Dense(1, activation='sigmoid')) # Compile model model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy']) ``` # Training the model The model training should be relatively quick. To speed it up you can use a GPU :) I'm using 80% of the training data for fitting and 20% for validation. ``` model.fit(X_train, Y_train, validation_split=0.20, epochs=50, batch_size=500000) ``` Now that the model is trained I'm saving it to be able to load it easily later on.
``` import h5py model.save("./model.h5") ``` # Evaluating the model First let's see how this model performs on the test set - which hasn't been used during the training phase. ``` scores = model.evaluate(X_test, Y_test) print("\n%s: %.2f%%" % (model.metrics_names[1], scores[1]*100)) ``` Now let's take a quick look at the good grasps we stored earlier. Are they correctly predicted as stable? ``` predictions = model.predict(best_grasps) %matplotlib inline import matplotlib.pyplot as plt plt.hist(predictions, color='#77D651', alpha=0.5, label='Good Grasps', bins=numpy.arange(0.0, 1.0, 0.03)) plt.title('Histogram of grasp prediction') plt.ylabel('Number of grasps') plt.xlabel('Grasp quality prediction') plt.legend(loc='upper right') plt.show() ``` Most of the grasps are correctly predicted as stable (the grasp quality prediction is more than 0.5)! Looking good. What about the unstable grasps? ``` predictions_bad_grasp = model.predict(bad_grasps) # Plot a histogram of the bad-grasp predictions plt.hist(predictions_bad_grasp, color='#D66751', alpha=0.3, label='Bad Grasps', bins=numpy.arange(0.0, 1.0, 0.03)) plt.title('Histogram of grasp prediction') plt.ylabel('Number of grasps') plt.xlabel('Grasp quality prediction') plt.legend(loc='upper right') plt.show() ``` Most of the grasps are considered unstable - below 0.5 - with a few misclassifications.
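Reading the two histograms above amounts to applying the 0.5 decision rule to each prediction; as a minimal sketch with made-up prediction values (not real model output):

```python
import numpy

# hypothetical network outputs for known-stable and known-unstable grasps
good = numpy.array([0.92, 0.81, 0.67, 0.43])
bad = numpy.array([0.08, 0.22, 0.61])

correct_good = int((good > 0.5).sum())   # stable grasps predicted stable
correct_bad = int((bad <= 0.5).sum())    # unstable grasps predicted unstable
accuracy = (correct_good + correct_bad) / (good.size + bad.size)
```

The entries 0.43 and 0.61 land on the wrong side of the threshold, which is exactly the small tail of misclassifications visible in the histograms.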
github_jupyter
# **Jupyter Bridge and RCy3** You can open this notebook in Google Colab from GitHub directly (File -> Open notebook -> Github). Also you can download this notebook and upload it to Google Colab (File -> Open notebook -> Upload). <font color='red'> You do not need to run the installation and getting started sections if you come from the basic Jupyter Bridge and RCy3 tutorial, since you have already installed the required packages and built the connection. </font> ## **Installation** ``` library(devtools) install_github("cytoscape/RCy3") library(RCy3) library(RColorBrewer) ``` ## **Getting started** ``` browserClientJs <- getBrowserClientJs() IRdisplay::display_javascript(browserClientJs) cytoscapeVersionInfo() cytoscapePing() ``` # **Differentially Expressed Genes Network Analysis** ## **Prerequisites** If you haven’t already, install the [*STRINGApp*](http://apps.cytoscape.org/apps/stringapp) and [*filetransferApp*](https://apps.cytoscape.org/apps/filetransfer). ## **Background** Ovarian serous cystadenocarcinoma is a type of epithelial ovarian cancer which accounts for ~90% of all ovarian cancers. The data used in this protocol are from [The Cancer Genome Atlas](https://www.cancer.gov/about-nci/organization/ccg/research/structural-genomics/tcga) study, in which multiple subtypes of serous cystadenocarcinoma were identified and characterized by mRNA expression. We will focus on the differential gene expression between two subtypes, Mesenchymal and Immunoreactive. For convenience, the data has already been analyzed and pre-filtered, using log fold change value and adjusted p-value. ## **Network Retrieval** Many public databases and multiple Cytoscape apps allow you to retrieve a network or pathway relevant to your data. For this workflow, we will use the STRING app.
Some other options include: * [WikiPathways](https://www.wikipathways.org/index.php/WikiPathways) * [NDEx](http://www.ndexbio.org/#/) * [GeneMANIA](https://genemania.org/) ## **Retrieve Networks from STRING** To identify a relevant network, we will query the STRING database in two different ways: query STRING protein with the list of differentially expressed genes, and query STRING disease for a keyword: ovarian cancer. The two examples are split into two separate workflows below. ## **Example 1: STRING Protein Query Up-regulated Genes** Load the file containing the data for up-regulated genes, TCGA-Ovarian-MesenvsImmuno-data-up.csv: ``` de.genes.up <- read.table("https://raw.githubusercontent.com/cytoscape/cytoscape-tutorials/gh-pages/protocols/data/TCGA-Ovarian-MesenvsImmuno-data-up.csv", header = TRUE, sep = "\t", quote="\"", stringsAsFactors = FALSE) string.cmd = paste('string protein query query="', paste(de.genes.up$Gene, collapse = '\n'), '" cutoff=0.4 species="Homo sapiens"', sep = "") commandsRun(string.cmd) ``` The resulting network will load automatically and contains the up-regulated genes recognized by STRING, and interactions between them with an evidence score of 0.4 or greater. The network consists of one large connected component, several smaller networks, and some unconnected nodes. We will select only the connected nodes to work with for the rest of this tutorial, by creating a subnetwork based on all edges: ``` createSubnetwork(edges='all', subnetwork.name='String de genes up') ``` ## **Data Integration** Next we will import log fold changes and p-values from our TCGA dataset to create a visualization. Since the STRING network is a protein-protein network, it is annotated with protein identifiers (Uniprot and Ensembl protein), as well as HGNC gene symbols. Our data from TCGA has NCBI Gene identifiers (formerly Entrez), so before importing the data we are going to use the ID Mapper functionality in Cytoscape to map the network to NCBI Gene.
``` mapped.cols <- mapTableColumn('display name', 'Human', 'HGNC', 'Entrez Gene') ``` We can now import the differential gene expression data and integrate it with the network (node) table in Cytoscape. For importing the data we will use the following mapping: * Key Column for Network should be Entrez Gene, which is the column we just added. * Gene should be the key of the data(de.genes.full). ``` de.genes.full <- read.table("https://raw.githubusercontent.com/cytoscape/cytoscape-tutorials/gh-pages/protocols/data/TCGA-Ovarian-MesenvsImmuno_data.csv", header = TRUE, sep = ",", quote="\"", stringsAsFactors = FALSE) loadTableData(de.genes.full,data.key.column="Gene",table.key.column="Entrez Gene") ``` You will notice two new columns (logFC and FDR.adjusted.Pvalue) in the Node Table. ``` tail(getTableColumnNames('node')) ``` ## **Visualization** Next, we will create a visualization of the imported data on the network. ``` setVisualStyle(style.name="default") setNodeShapeDefault(new.shape="ELLIPSE", style.name = "default") lockNodeDimensions(new.state="TRUE", style.name = "default") setNodeSizeDefault(new.size="50", style.name = "default") setNodeColorDefault(new.color="#D3D3D3", style.name = "default") setNodeBorderWidthDefault(new.width="2", style.name = "default") setNodeBorderColorDefault(new.color="#616060", style.name = "default") setNodeLabelMapping(table.column="display name",style.name = "default") setNodeFontSizeDefault(new.size="14", style.name = "default") ``` Before we create a mapping for node color representing the range of fold changes, we need the min and max of the logFC column: ``` logFC.table.up <- getTableColumns('node', 'logFC') logFC.up.min <- min(logFC.table.up, na.rm = T) logFC.up.max <- max(logFC.table.up, na.rm = T) logFC.up.center <- logFC.up.min + (logFC.up.max - logFC.up.min)/2 copyVisualStyle(from.style = "default", to.style = "de genes up") setVisualStyle(style.name="de genes up") data.values = c(logFC.up.min, logFC.up.center, 
logFC.up.max) node.colors <- c(brewer.pal(length(data.values), "YlOrRd")) setNodeColorMapping('logFC', data.values, node.colors, style.name="de genes up") ``` Applying a force-directed layout, the network will now look something like this: ``` layoutNetwork(paste('force-directed', 'defaultSpringCoefficient=0.00003', 'defaultSpringLength=50', 'defaultNodeMass=4', sep=' ')) ``` ## **Enrichment Analysis Options** Next, we are going to perform enrichment analysis using the STRING app. ## **STRING Enrichment** The STRING app has built-in enrichment analysis functionality, which includes enrichment for GO Process, GO Component, GO Function, InterPro, KEGG Pathways, and PFAM. First, we will run the enrichment on the whole network, against the genome: ``` string.cmd = 'string retrieve enrichment allNetSpecies="Homo sapiens", background=genome selectedNodesOnly="false"' commandsRun(string.cmd) string.cmd = 'string show enrichment' commandsRun(string.cmd) ``` When the enrichment analysis is complete, a new tab titled STRING Enrichment will open in the Table Panel. The STRING app includes several options for filtering and displaying the enrichment results. The features are all available at the top of the STRING Enrichment tab.
We are going to filter the table to only show GO Process: ``` string.cmd = 'string filter enrichment categories="GO Process", overlapCutoff = "0.5", removeOverlapping = "true"' commandsRun(string.cmd) ``` Next, we will add a split donut chart to the nodes representing the top terms: ``` string.cmd = 'string show charts' commandsRun(string.cmd) ``` ## **STRING Protein Query: Down-regulated Genes** We are going to repeat the network search, data integration, visualization and enrichment analysis for the set of down-regulated genes by using the first column of [TCGA-Ovarian-MesenvsImmuno-data-down.csv](https://cytoscape.github.io/cytoscape-tutorials/protocols/data/TCGA-Ovarian-MesenvsImmuno-data-down.csv): ``` de.genes.down <- read.table("https://cytoscape.github.io/cytoscape-tutorials/protocols/data/TCGA-Ovarian-MesenvsImmuno-data-down.csv", header = TRUE, sep = "\t", quote="\"", stringsAsFactors = FALSE) string.cmd = paste('string protein query query="', paste(de.genes.down$Gene, collapse = '\n'), '" cutoff=0.4 species="Homo sapiens"', sep = "") commandsRun(string.cmd) ``` ## **Subnetwork** Let’s select only the connected nodes to work with for the rest of this tutorial, by creating a subnetwork based on all edges: ``` createSubnetwork(edges='all', subnetwork.name='String de genes down') ``` ## **Data Integration** Again, the identifiers in the network need to be mapped to Entrez Gene (NCBI gene): ``` mapped.cols <- mapTableColumn('display name', 'Human', 'HGNC', 'Entrez Gene') ``` We can now import the data: ``` loadTableData(de.genes.full,data.key.column="Gene",table.key.column="Entrez Gene") ``` ## **Visualization** Next, we can create a visualization.
Note that the default style has been altered in the previous example, so we can simply switch to default to get started: ``` setVisualStyle(style.name="default") ``` The node fill color has to be redefined for down-regulated genes: ``` logFC.table.down <- getTableColumns('node', 'logFC') logFC.dn.min <- min(logFC.table.down, na.rm = T) logFC.dn.max <- max(logFC.table.down, na.rm = T) logFC.dn.center <- logFC.dn.min + (logFC.dn.max - logFC.dn.min)/2 copyVisualStyle(from.style = "default", to.style = "de genes down") setVisualStyle(style.name="de genes down") data.values = c(logFC.dn.min, logFC.dn.center, logFC.dn.max) node.colors <- c(brewer.pal(length(data.values), "Blues")) setNodeColorMapping('logFC', data.values, node.colors, style.name="de genes down") ``` Apply a force-directed layout. ``` layoutNetwork(paste('force-directed', 'defaultSpringCoefficient=0.00003', 'defaultSpringLength=50', 'defaultNodeMass=4', sep=' ')) ``` ## **STRING Enrichment** Now we can perform STRING Enrichment analysis on the resulting network: ``` string.cmd = 'string retrieve enrichment allNetSpecies="Homo sapiens", background=genome selectedNodesOnly="false"' commandsRun(string.cmd) string.cmd = 'string show enrichment' commandsRun(string.cmd) ``` Filter the analysis results for non-redundant GO Process terms only. ``` string.cmd = 'string filter enrichment categories="GO Process", overlapCutoff = "0.5", removeOverlapping = "true"' commandsRun(string.cmd) string.cmd = 'string show charts' commandsRun(string.cmd) ``` ## **STRING Disease Query** So far, we queried the STRING database with a set of genes we knew were differentially expressed. Next, we will query the STRING disease database to retrieve a network of genes associated with ovarian cancer, which will be completely independent of our dataset.
``` string.cmd = 'string disease query disease="ovarian cancer" cutoff="0.95"' commandsRun(string.cmd) ``` This will bring in the top 100 (default) ovarian cancer-associated genes connected with a confidence score greater than 0.95. Again, let's extract the connected nodes: ``` createSubnetwork(edges='all', subnetwork.name='String ovarian sub') ``` ## **Data Integration** Next we will import differential gene expression data from our TCGA dataset to create a visualization. Just like the previous example, we will need to do some identifier mapping to match the data to the network. ``` mapped.cols <- mapTableColumn("display name",'Human','HGNC','Entrez Gene') ``` Here we set Human as species, HGNC as Map from, and Entrez Gene as To. We can now import the data frame with the full data (already loaded in Example 1 above) into the node table in Cytoscape: ``` loadTableData(de.genes.full, data.key.column = "Gene", table = "node", table.key.column = "Entrez Gene") ``` ## **Visualization** Again, we can create a visualization: ``` setVisualStyle(style.name="default") ``` Next, we need the min and max of the logFC column: ``` logFC.table.ovarian <- getTableColumns('node', 'logFC') logFC.ov.min <- min(logFC.table.ovarian, na.rm = T) logFC.ov.max <- max(logFC.table.ovarian, na.rm = T) logFC.ov.center <- logFC.ov.min + (logFC.ov.max - logFC.ov.min)/2 ``` Let’s create the mapping: ``` copyVisualStyle(from.style = "default", to.style = "ovarian") setVisualStyle(style.name="ovarian") data.values = c(logFC.ov.min, logFC.ov.center, logFC.ov.max) node.colors <- c(brewer.pal(length(data.values), "RdBu")) setNodeColorMapping('logFC', data.values, node.colors, style.name="ovarian") ``` Apply a force-directed layout. ``` layoutNetwork(paste('force-directed', 'defaultSpringCoefficient=0.00003', 'defaultSpringLength=50', 'defaultNodeMass=4', sep=' ')) ``` The TCGA found several genes that were commonly mutated in ovarian cancer, so-called “cancer drivers”.
We can add information about these genes to the network visualization, by changing the visual style of these nodes. Three of the most important drivers are TP53, BRCA1 and BRCA2. We will add a thicker, colored border for these genes in the network. Select all three driver genes by: ``` selectNodes(c("TP53", "BRCA1", "BRCA2"), by.col = "display name") ``` Add a style bypass for node Border Width (5) and node Border Paint (bright pink): ``` setNodeBorderWidthBypass(getSelectedNodes(), 5) setNodeBorderColorBypass(getSelectedNodes(), '#FF007F') ``` ## **Exporting Networks** Jupyter Bridge RCy3 does not currently support importing and exporting files. Please use a local Cytoscape instance to import and export files.
github_jupyter
# Keras MNIST Model Deployment * Wrap a Tensorflow MNIST python model for use as a prediction microservice in seldon-core * Run locally on Docker to test * Deploy on seldon-core running on minikube ## Dependencies * [Helm](https://github.com/kubernetes/helm) * [Minikube](https://github.com/kubernetes/minikube) * [S2I](https://github.com/openshift/source-to-image) ```bash pip install seldon-core pip install keras ``` ## Train locally ``` import numpy as np import math import datetime #from seldon.pipeline import PipelineSaver import os import tensorflow as tf from keras import backend from keras.models import Model,load_model from keras.layers import Dense,Input from keras.layers import Dropout from keras.layers import Flatten, Reshape from keras.constraints import maxnorm from keras.layers.convolutional import Convolution2D from keras.layers.convolutional import MaxPooling2D from keras.callbacks import TensorBoard class MnistFfnn(object): def __init__(self, input_shape=(784,), nb_labels=10, optimizer='Adam', run_dir='tensorboardlogs_test'): self.model_name='MnistFfnn' self.run_dir=run_dir self.input_shape=input_shape self.nb_labels=nb_labels self.optimizer=optimizer self.build_graph() def build_graph(self): inp = Input(shape=self.input_shape,name='input_part') #keras layers with tf.name_scope('dense_1') as scope: h1 = Dense(256, activation='relu', W_constraint=maxnorm(3))(inp) drop1 = Dropout(0.2)(h1) with tf.name_scope('dense_2') as scope: h2 = Dense(128, activation='relu', W_constraint=maxnorm(3))(drop1) drop2 = Dropout(0.5)(h2) out = Dense(self.nb_labels, activation='softmax')(drop2) self.model = Model(inp,out) if self.optimizer == 'rmsprop': self.model.compile(loss='categorical_crossentropy', optimizer='rmsprop', metrics=['accuracy']) elif self.optimizer == 'Adam': self.model.compile(loss='categorical_crossentropy', optimizer='Adam', metrics=['accuracy']) print('graph builded') def fit(self,X,y=None, X_test=None,y_test=None, batch_size=128, nb_epochs=2, 
shuffle=True): now = datetime.datetime.now() tensorboard_logname = self.run_dir+'/{}_{}'.format(self.model_name, now.strftime('%Y.%m.%d_%H.%M')) tensorboard = TensorBoard(log_dir=tensorboard_logname) self.model.fit(X,y, validation_data=(X_test,y_test), callbacks=[tensorboard], batch_size=batch_size, nb_epoch=nb_epochs, shuffle = shuffle) return self def predict_proba(self,X): return self.model.predict_proba(X) def predict(self, X): probas = self.model.predict_proba(X) return([[p>0.5 for p in p1] for p1 in probas]) def score(self, X, y=None): pass def get_class_id_map(self): return ["proba"] class MnistConv(object): def __init__(self, input_shape=(784,), nb_labels=10, optimizer='Adam', run_dir='tensorboardlogs_test', saved_model_file='MnistClassifier.h5'): self.model_name='MnistConv' self.run_dir=run_dir self.input_shape=input_shape self.nb_labels=nb_labels self.optimizer=optimizer self.saved_model_file=saved_model_file self.build_graph() def build_graph(self): inp = Input(shape=self.input_shape,name='input_part') inp2 = Reshape((28,28,1))(inp) #keras layers with tf.name_scope('conv') as scope: conv = Convolution2D(32, 3, 3, input_shape=(32, 32, 3), border_mode='same', activation='relu', W_constraint=maxnorm(3))(inp2) drop_conv = Dropout(0.2)(conv) max_pool = MaxPooling2D(pool_size=(2, 2))(drop_conv) with tf.name_scope('dense') as scope: flat = Flatten()(max_pool) dense = Dense(128, activation='relu', W_constraint=maxnorm(3))(flat) drop_dense = Dropout(0.5)(dense) out = Dense(self.nb_labels, activation='softmax')(drop_dense) self.model = Model(inp,out) if self.optimizer == 'rmsprop': self.model.compile(loss='categorical_crossentropy', optimizer='rmsprop', metrics=['accuracy']) elif self.optimizer == 'Adam': self.model.compile(loss='categorical_crossentropy', optimizer='Adam', metrics=['accuracy']) print('graph builded') def fit(self,X,y=None, X_test=None,y_test=None, batch_size=128, nb_epochs=2, shuffle=True): now = datetime.datetime.now() tensorboard_logname = 
self.run_dir+'/{}_{}'.format(self.model_name, now.strftime('%Y.%m.%d_%H.%M')) tensorboard = TensorBoard(log_dir=tensorboard_logname) self.model.fit(X,y, validation_data=(X_test,y_test), callbacks=[tensorboard], batch_size=batch_size, nb_epoch=nb_epochs, shuffle = shuffle) #if not os.path.exists('saved_model'): # os.makedirs('saved_model') self.model.save(self.saved_model_file) return self def predict_proba(self,X): return self.model.predict_proba(X) def predict(self, X): probas = self.model.predict_proba(X) return([[p>0.5 for p in p1] for p1 in probas]) def score(self, X, y=None): pass def get_class_id_map(self): return ["proba"] from tensorflow.examples.tutorials.mnist import input_data mnist = input_data.read_data_sets('data/MNIST_data', one_hot=True) X_train = mnist.train.images y_train = mnist.train.labels X_test = mnist.test.images y_test = mnist.test.labels mc = MnistConv() mc.fit(X_train,y=y_train, X_test=X_test,y_test=y_test) ``` Wrap model using s2i ``` !s2i build . seldonio/seldon-core-s2i-python3:0.12 keras-mnist:0.1 !docker run --name "mnist_predictor" -d --rm -p 5000:5000 keras-mnist:0.1 ``` Send some random features that conform to the contract ``` !seldon-core-tester contract.json 0.0.0.0 5000 -p !docker rm mnist_predictor --force ``` # Test using Minikube **Due to a [minikube/s2i issue](https://github.com/SeldonIO/seldon-core/issues/253) you will need [s2i >= 1.1.13](https://github.com/openshift/source-to-image/releases/tag/v1.1.13)** ``` !minikube start --memory 4096 !kubectl create clusterrolebinding kube-system-cluster-admin --clusterrole=cluster-admin --serviceaccount=kube-system:default !helm init !kubectl rollout status deploy/tiller-deploy -n kube-system !helm install ../../../helm-charts/seldon-core-operator --name seldon-core --set usageMetrics.enabled=true --namespace seldon-system !kubectl rollout status statefulset.apps/seldon-operator-controller-manager -n seldon-system ``` ## Setup Ingress Please note: There are reported gRPC issues 
with ambassador (see https://github.com/SeldonIO/seldon-core/issues/473).

```
!helm install stable/ambassador --name ambassador --set crds.keep=false
!kubectl rollout status deployment.apps/ambassador
!eval $(minikube docker-env) && s2i build . seldonio/seldon-core-s2i-python3:0.12 keras-mnist:0.1
!kubectl create -f keras_mnist_deployment.json
!kubectl rollout status deploy/keras-mnist-deployment-keras-mnist-predictor-8baf5cc
!seldon-core-api-tester contract.json `minikube ip` `kubectl get svc ambassador -o jsonpath='{.spec.ports[0].nodePort}'` \
    seldon-deployment-example --namespace default -p
!minikube delete
```
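The `predict` methods in both `MnistFfnn` and `MnistConv` reduce each row of class probabilities to booleans with a fixed 0.5 cutoff. A minimal pure-Python sketch of that step (the helper name `threshold_predictions` is ours, not part of the notebook):

```python
def threshold_predictions(probas, cutoff=0.5):
    """Turn a batch of per-class probabilities into boolean labels,
    mirroring `[[p > 0.5 for p in p1] for p1 in probas]` above."""
    return [[p > cutoff for p in row] for row in probas]

batch = [[0.1, 0.9, 0.0], [0.6, 0.4, 0.7]]
print(threshold_predictions(batch))  # [[False, True, False], [True, False, True]]
```

Note that with a plain 0.5 cutoff a row can flag several classes, or none; for single-label MNIST an `argmax` over the row would be the more usual choice.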
``` import pandas as pd import numpy as np import datetime import string from collections import Counter from scipy.sparse import hstack, csr_matrix from nltk.tokenize import RegexpTokenizer, word_tokenize from nltk import ngrams from sklearn.svm import LinearSVC from sklearn.naive_bayes import MultinomialNB from sklearn.preprocessing import MultiLabelBinarizer from sklearn.multiclass import OneVsRestClassifier from sklearn.ensemble import GradientBoostingClassifier from sklearn.feature_extraction.text import TfidfVectorizer, CountVectorizer, HashingVectorizer from sklearn.linear_model import LogisticRegression from sklearn.model_selection import train_test_split from sklearn.metrics import roc_auc_score import xgboost as xgb df_train_initial = pd.read_csv('train.csv.zip') df_test_initial = pd.read_csv('test.csv.zip') df_sub = pd.read_csv('sample_submission.csv.zip') initialcols = list(df_train_initial.columns[df_train_initial.dtypes == 'int64']) badwords_short = pd.read_csv('badwords_short.txt',header=None) badwords_short.rename(columns={0:'badwords_short'},inplace=True) badwords_short['badwords_short'] = badwords_short['badwords_short'].str.lower() badwords_short = badwords_short.drop_duplicates().reset_index(drop=True) badwords_short_set = set(badwords_short['badwords_short'].str.replace('*','')) tokenizer = RegexpTokenizer(r'\w+') def get_ngrams(message): only_words = tokenizer.tokenize(message) filtered_message = ' '.join(only_words) filtered_message_list = list(ngrams(filtered_message.split(),2)) filtered_message_list.extend(list(ngrams(filtered_message.split(),3))) #filtered_message = [i for i in filtered_message if all(j.isnumeric()==False for j in i)] return filtered_message_list def get_words(message): only_words = tokenizer.tokenize(message) return only_words def get_puncts(message): only_puncts = [i for i in message.split() if all(j in string.punctuation for j in i)] return only_puncts def get_badwords(message): only_bad=[] for word in 
badwords_short_set: count = message.lower().count(word) if count>0: for i in range(0,count): only_bad.append('found_in_badwords_short_'+word) return only_bad model= {} y_train= {} y_test = {} preds={} preds_sub={} proc={} vec={} vec_test={} combined={} def make_model(flags,test=True): if test==True: for col in flags: X_train, X_test, y_train[col], y_test[col] = train_test_split(df_train_initial.comment_text, df_train_initial[col], test_size=0.33, random_state=42) else: X_train = df_train_initial.comment_text.copy() X_test = df_test_initial.comment_text.copy() for col in flags: y_train[col] = df_train_initial[col].copy() proc['words'] = TfidfVectorizer(analyzer=get_words,min_df=3,strip_accents='unicode',sublinear_tf=1) proc['puncts']= TfidfVectorizer(analyzer=get_puncts,min_df=2,strip_accents='unicode',sublinear_tf=1) proc['ngrams']= TfidfVectorizer(analyzer=get_ngrams,min_df=4,strip_accents='unicode',sublinear_tf=1) proc['badwords']= TfidfVectorizer(analyzer=get_badwords,min_df=1,strip_accents='unicode',sublinear_tf=1) vec['words'] = proc['words'].fit_transform(X_train) vec['puncts'] = proc['puncts'].fit_transform(X_train) vec['ngrams'] = proc['ngrams'].fit_transform(X_train) vec['badwords'] = proc['badwords'].fit_transform(X_train) vec_test['words']=proc['words'].transform(X_test) vec_test['puncts']=proc['puncts'].transform(X_test) vec_test['ngrams']=proc['ngrams'].transform(X_test) vec_test['badwords']=proc['badwords'].transform(X_test) combined['train'] = hstack([vec['words'],vec['puncts'],vec['ngrams'],vec['badwords']]) combined['test'] = hstack([vec_test['words'],vec_test['puncts'],vec_test['ngrams'],vec_test['badwords']]) for col in flags: model[col]={} model[col]['lr'] = LogisticRegression(solver='sag',C=3,max_iter=200,n_jobs=-1) model[col]['lr'].fit(combined['train'],y_train[col].tolist()) model[col]['xgb'] = xgb.XGBClassifier(n_estimators=300, max_depth=5,objective= 'binary:logistic', scale_pos_weight=1, seed=27, base_score = .2) 
model[col]['xgb'].fit(combined['train'],y_train[col].tolist(),eval_metric='auc') model[col]['gbc'] = GradientBoostingClassifier() model[col]['gbc'].fit(combined['train'],y_train[col].tolist()) if test==True: preds[col]={} for i in model[col].keys(): preds[col][i] = model[col][i].predict_proba(combined['test'])[:,1] print(col,i,'model predictions:\n',roc_auc_score(y_test[col],preds[col][i])) allpreds+=preds[col][i] allpreds/=3 print(col,'model predictions:\n',roc_auc_score(y_test[col],allpreds)) else: preds_sub[col]={} allpreds=np.zeros(combined['test'].shape[0]) for i in model[col].keys(): preds_sub[col][i] = model[col][i].predict_proba(combined['test'])[:,1] allpreds+=preds_sub[col][i] allpreds/=3 df_sub[col] = allpreds print(col,'done') make_model(initialcols,test=False) df_sub['toxic'] = preds_sub['toxic']['lr'] df_sub['severe_toxic'] = preds_sub['severe_toxic']['lr'] df_sub['obscene'] = preds_sub['obscene']['lr'] df_sub['threat'] = preds_sub['threat']['lr'] df_sub['insult'] = preds_sub['insult']['lr'] df_sub['identity_hate'] = preds_sub['identity_hate']['lr'] import pickle for i in vec.keys(): pickle.dump(vec[i], open(i+'_vector.sav', 'wb')) df_sub.to_csv('df_sub_'+datetime.datetime.now().strftime('%Y%m%d%I%M')+'.csv',index=False) # C:\Anaconda3\lib\site-packages\ipykernel\__main__.py:7: DeprecationWarning: generator 'ngrams' raised StopIteration # toxic lr model predictions: # 0.973623915807 # toxic xgb model predictions: # 0.957367570947 # toxic gbc model predictions: # 0.920677283411 # toxic model predictions: # 0.967328623644 # severe_toxic lr model predictions: # 0.988066880563 # severe_toxic xgb model predictions: # 0.981223988455 # severe_toxic gbc model predictions: # 0.946132712332 # severe_toxic model predictions: # 0.987947888331 # obscene lr model predictions: # 0.98715018023 # obscene xgb model predictions: # 0.983366581819 # obscene gbc model predictions: # 0.966495202699 # obscene model predictions: # 0.987547215406 # threat lr model predictions: 
# 0.984074679767 # threat xgb model predictions: # 0.965280067921 # threat gbc model predictions: # 0.542049593889 # threat model predictions: # 0.983671224789 # insult lr model predictions: # 0.98025063686 # insult xgb model predictions: # 0.972816999733 # insult gbc model predictions: # 0.953063786142 # insult model predictions: # 0.978857091119 # identity_hate lr model predictions: # 0.977883100898 # identity_hate xgb model predictions: # 0.970471305196 # identity_hate gbc model predictions: # 0.876133030069 # identity_hate model predictions: # 0.979015052878 ```
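Per label, the three fitted models (`lr`, `xgb`, `gbc`) are blended by summing their predicted probabilities and dividing by 3. A pure-Python sketch of that averaging step, assuming each model contributes one probability list (also note that in the `test=True` branch above, `allpreds` is accumulated before being initialized to zeros; building the mean directly avoids that):

```python
def average_predictions(per_model_preds):
    """Element-wise mean of several equal-length probability lists,
    mirroring the `allpreds += ...; allpreds /= 3` pattern above."""
    n_models = len(per_model_preds)
    return [sum(ps) / n_models for ps in zip(*per_model_preds)]

lr_p, xgb_p, gbc_p = [1.0, 0.0], [0.5, 0.5], [0.0, 1.0]
print(average_predictions([lr_p, xgb_p, gbc_p]))  # [0.5, 0.5]
```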
<table class="ee-notebook-buttons" align="left"> <td><a target="_blank" href="https://github.com/giswqs/earthengine-py-notebooks/tree/master/Image/canny_edge_detector.ipynb"><img width=32px src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" /> View source on GitHub</a></td> <td><a target="_blank" href="https://nbviewer.jupyter.org/github/giswqs/earthengine-py-notebooks/blob/master/Image/canny_edge_detector.ipynb"><img width=26px src="https://upload.wikimedia.org/wikipedia/commons/thumb/3/38/Jupyter_logo.svg/883px-Jupyter_logo.svg.png" />Notebook Viewer</a></td> <td><a target="_blank" href="https://colab.research.google.com/github/giswqs/earthengine-py-notebooks/blob/master/Image/canny_edge_detector.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" /> Run in Google Colab</a></td> </table> ## Install Earth Engine API and geemap Install the [Earth Engine Python API](https://developers.google.com/earth-engine/python_install) and [geemap](https://geemap.org). The **geemap** Python package is built upon the [ipyleaflet](https://github.com/jupyter-widgets/ipyleaflet) and [folium](https://github.com/python-visualization/folium) packages and implements several methods for interacting with Earth Engine data layers, such as `Map.addLayer()`, `Map.setCenter()`, and `Map.centerObject()`. The following script checks if the geemap package has been installed. If not, it will install geemap, which automatically installs its [dependencies](https://github.com/giswqs/geemap#dependencies), including earthengine-api, folium, and ipyleaflet. ``` # Installs geemap package import subprocess try: import geemap except ImportError: print('Installing geemap ...') subprocess.check_call(["python", '-m', 'pip', 'install', 'geemap']) import ee import geemap ``` ## Create an interactive map The default basemap is `Google Maps`. [Additional basemaps](https://github.com/giswqs/geemap/blob/master/geemap/basemaps.py) can be added using the `Map.add_basemap()` function. 
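A lighter-weight variant of the try/except install check above, sketched with the standard library only (assumption: looking the module up with `importlib` is an acceptable substitute for attempting the import, and the pip name matches the module name, as it does for geemap):

```python
import importlib.util
import subprocess
import sys

def ensure_package(name):
    """Install `name` with pip only when it is not already importable."""
    if importlib.util.find_spec(name) is None:
        subprocess.check_call([sys.executable, "-m", "pip", "install", name])

ensure_package("math")  # a stdlib module is always found, so this is a no-op
```

Using `sys.executable -m pip` rather than a bare `python` ensures the install targets the interpreter actually running the notebook.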
```
Map = geemap.Map(center=[40, -100], zoom=4)
Map
```

## Add Earth Engine Python script

```
# Add Earth Engine dataset
# Canny Edge Detector example.

# Load an image and compute NDVI from it.
image = ee.Image('LANDSAT/LT05/C01/T1_TOA/LT05_031034_20110619')
ndvi = image.normalizedDifference(['B4', 'B3'])

# Detect edges in the composite.
canny = ee.Algorithms.CannyEdgeDetector(ndvi, 0.7)

# Mask the image with itself to get rid of areas with no edges.
canny = canny.updateMask(canny)

Map.setCenter(-101.05259, 37.93418, 13)
Map.addLayer(ndvi, {'min': 0, 'max': 1}, 'Landsat NDVI')
Map.addLayer(canny, {'min': 0, 'max': 1, 'palette': 'FF0000'}, 'Canny Edges')
```

## Display Earth Engine data layers

```
Map.addLayerControl()  # This line is not needed for ipyleaflet-based Map.
Map
```
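`image.normalizedDifference(['B4', 'B3'])` computes, per pixel, `(B4 - B3) / (B4 + B3)`; for Landsat 5 bands that is (NIR - red) / (NIR + red), i.e. NDVI. A scalar sketch of the formula:

```python
def normalized_difference(nir, red):
    """NDVI for a single pixel: (NIR - red) / (NIR + red), in [-1, 1]."""
    return (nir - red) / (nir + red)

print(normalized_difference(0.5, 0.1))  # ~0.667: dense vegetation reflects NIR strongly
```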
# Gaussian Mixture Model ``` !pip install tqdm torchvision tensorboardX from __future__ import print_function import torch import torch.utils.data import numpy as np import matplotlib.pyplot as plt from matplotlib import cm from mpl_toolkits.mplot3d import Axes3D seed = 0 torch.manual_seed(seed) if torch.cuda.is_available(): device = "cuda" else: device = "cpu" ``` ### toy dataset ``` # https://angusturner.github.io/generative_models/2017/11/03/pytorch-gaussian-mixture-model.html def sample(mu, var, nb_samples=500): """ Return a tensor of (nb_samples, features), sampled from the parameterized gaussian. :param mu: torch.Tensor of the means :param var: torch.Tensor of variances (NOTE: zero covars.) """ out = [] for i in range(nb_samples): out += [ torch.normal(mu, var.sqrt()) ] return torch.stack(out, dim=0) # generate some clusters cluster1 = sample( torch.Tensor([1.5, 2.5]), torch.Tensor([1.2, .8]), nb_samples=150 ) cluster2 = sample( torch.Tensor([7.5, 7.5]), torch.Tensor([.75, .5]), nb_samples=50 ) cluster3 = sample( torch.Tensor([8, 1.5]), torch.Tensor([.6, .8]), nb_samples=100 ) def plot_2d_sample(sample_dict): x = sample_dict["x"][:,0].data.numpy() y = sample_dict["x"][:,1].data.numpy() plt.plot(x, y, 'gx') plt.show() # create the dummy dataset, by combining the clusters. samples = torch.cat([cluster1, cluster2, cluster3]) samples = (samples-samples.mean(dim=0)) / samples.std(dim=0) samples_dict = {"x": samples} plot_2d_sample(samples_dict) ``` ## GMM ``` from pixyz.distributions import Normal, Categorical from pixyz.distributions.mixture_distributions import MixtureModel from pixyz.utils import print_latex z_dim = 3 # the number of mixture x_dim = 2 distributions = [] for i in range(z_dim): loc = torch.randn(x_dim) scale = torch.empty(x_dim).fill_(0.6) distributions.append(Normal(loc=loc, scale=scale, var=["x"], name="p_%d" %i)) probs = torch.empty(z_dim).fill_(1. 
/ z_dim) prior = Categorical(probs=probs, var=["z"], name="p_{prior}") p = MixtureModel(distributions=distributions, prior=prior) print(p) print_latex(p) post = p.posterior() print(post) print_latex(post) def get_density(N=200, x_range=(-5, 5), y_range=(-5, 5)): x = np.linspace(*x_range, N) y = np.linspace(*y_range, N) x, y = np.meshgrid(x, y) # get the design matrix points = np.concatenate([x.reshape(-1, 1), y.reshape(-1, 1)], axis=1) points = torch.from_numpy(points).float() pdf = p.prob().eval({"x": points}).data.numpy().reshape([N, N]) return x, y, pdf def plot_density_3d(x, y, loglike): fig = plt.figure(figsize=(10, 10)) ax = fig.gca(projection='3d') ax.plot_surface(x, y, loglike, rstride=3, cstride=3, linewidth=1, antialiased=True, cmap=cm.inferno) cset = ax.contourf(x, y, loglike, zdir='z', offset=-0.15, cmap=cm.inferno) # adjust the limits, ticks and view angle ax.set_zlim(-0.15,0.2) ax.set_zticks(np.linspace(0,0.2,5)) ax.view_init(27, -21) plt.show() def plot_density_2d(x, y, pdf): fig = plt.figure(figsize=(5, 5)) plt.plot(samples_dict["x"][:,0].data.numpy(), samples_dict["x"][:,1].data.numpy(), 'gx') for d in distributions: plt.scatter(d.loc[0,0], d.loc[0,1], c='r', marker='o') cs = plt.contour(x, y, pdf, 10, colors='k', linewidths=2) plt.show() eps = 1e-6 min_scale = 1e-6 # plot_density_3d(*get_density()) plot_density_2d(*get_density()) print("Epoch: {}, log-likelihood: {}".format(0, p.log_prob().mean().eval(samples_dict))) for epoch in range(20): # E-step posterior = post.prob().eval(samples_dict) # M-step N_k = posterior.sum(dim=1) # (n_mix,) # update probs probs = N_k / N_k.sum() # (n_mix,) prior.probs[0] = probs # update loc & scale loc = (posterior[:, None] @ samples[None]).squeeze(1) # (n_mix, n_dim) loc /= (N_k[:, None] + eps) cov = (samples[None, :, :] - loc[:, None, :]) ** 2 # Covariances are set to 0. 
    var = (posterior[:, None, :] @ cov).squeeze(1)  # (n_mix, n_dim)
    var /= (N_k[:, None] + eps)
    scale = var.sqrt()

    for i, d in enumerate(distributions):
        d.loc[0] = loc[i]
        d.scale[0] = scale[i]

    # plot_density_3d(*get_density())
    plot_density_2d(*get_density())
    print("Epoch: {}, log-likelihood: {}".format(epoch + 1, p.log_prob().mean().eval({"x": samples}).mean()))

pseudo_sample_dict = p.sample(batch_n=200)
plot_2d_sample(pseudo_sample_dict)
```
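The M-step's weight update above can be read in isolation: `N_k` is the responsibility mass each component collects over the samples, and the new mixing weights are `N_k` normalized to sum to one. A pure-Python sketch (assumption: `posterior` is an `(n_mix, n_samples)` table of responsibilities):

```python
def update_mixing_weights(posterior):
    """M-step for the prior: normalize per-component responsibility mass."""
    n_k = [sum(row) for row in posterior]  # effective count per component
    total = sum(n_k)
    return [nk / total for nk in n_k]

resp = [[0.75, 0.25],   # component 0's responsibility for samples 1 and 2
        [0.25, 0.75]]   # component 1's
print(update_mixing_weights(resp))  # [0.5, 0.5]
```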
# Evaluate text-image search app with Flickr 8k dataset
> Create labeled data, text processor and evaluate with Vespa python API

- toc: true
- badges: true
- comments: true
- categories: [text_image_search, clip_model, vespa, flicker8k]

This post creates a labeled dataset out of the Flickr 8k image-caption dataset, builds a text processor that uses a CLIP model to map a text query into the same 512-dimensional space used to represent images, and evaluates different query models using the Vespa python API. Check the previous three posts for context:

* [Flicker 8k dataset first exploration](https://thigm85.github.io/blog/flicker8k/dataset/image/nlp/2021/10/21/flicker8k-dataset-first-exploration.html)
* [Understanding CLIP image pipeline](https://thigm85.github.io/blog/image%20processing/clip%20model/dual%20encoder/pil/2021/10/22/understanding-clip-image-pipeline.html)
* [Vespa image search with PyTorch feeder](https://thigm85.github.io/blog/image%20processing/clip%20model/vespa/pytorch/pytorch%20dataset/2021/10/25/vespa-image-search-flicker8k.html)

## Create labeled data

An (image, caption) pair will be considered relevant for our purposes if all three experts agreed on a relevance score equal to 4.
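The rule just stated, sketched on plain dicts (the row layout is illustrative, not the actual DataFrame): keep a pair only when all three experts gave the same score and that score is 4.

```python
def relevant_pairs(judgments):
    """Keep (image, caption) rows where all three experts agree on a 4."""
    return [
        j for j in judgments
        if j["expert_1"] == j["expert_2"] == j["expert_3"] == 4
    ]

rows = [
    {"image": "a.jpg", "expert_1": 4, "expert_2": 4, "expert_3": 4},
    {"image": "b.jpg", "expert_1": 4, "expert_2": 3, "expert_3": 4},
    {"image": "c.jpg", "expert_1": 3, "expert_2": 3, "expert_3": 3},
]
print([j["image"] for j in relevant_pairs(rows)])  # ['a.jpg']
```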
### Load and check the expert judgments

```
import os

from pandas import read_csv

experts = read_csv(
    os.path.join(os.environ["DATA_FOLDER"], "ExpertAnnotations.txt"),
    sep="\t",
    header=None,
    names=["image_file_name", "caption_id", "expert_1", "expert_2", "expert_3"]
)
experts.head()
```

### Check cases where all experts agree

```
experts_agreement_bool = experts.apply(
    lambda x: x["expert_1"] == x["expert_2"] and x["expert_2"] == x["expert_3"],
    axis=1
)
experts_agreement = experts[experts_agreement_bool][
    ["image_file_name", "caption_id", "expert_1"]
].rename(columns={"expert_1": "expert"})
experts_agreement.head()

experts_agreement["expert"].value_counts().sort_index()
```

### Load captions data

```
captions = read_csv(
    os.path.join(os.environ["DATA_FOLDER"], "Flickr8k.token.txt"),
    sep="\t",
    header=None,
    names=["caption_id", "caption"]
)
captions.head()

def get_caption(caption_id, captions):
    return captions[captions["caption_id"] == caption_id]["caption"].values[0]
```

### Relevant (image, text) pair

```
relevant_data = experts_agreement[experts_agreement["expert"] == 4]
relevant_data.head(3)
```

### Create labeled data

```
from ntpath import basename
from pandas import DataFrame

labeled_data = DataFrame(
    data={
        "qid": list(range(relevant_data.shape[0])),
        "query": [get_caption(
            caption_id=x, captions=captions
        ).replace(" ,", "").replace(" .", "") for x in list(relevant_data.caption_id)],
        "doc_id": [basename(x) for x in list(relevant_data.image_file_name)],
        "relevance": 1}
)
labeled_data.head()
```

## From text to embeddings

Create a text processor to map a text string into the same 512-dimensional space used to embed the images.
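Both the image and text embeddings are L2-normalized before comparison, so nearest-neighbor search amounts to cosine similarity. A minimal sketch of that normalization on plain lists (the torch version divides by `tensor.norm(dim=-1, keepdim=True)`):

```python
import math

def l2_normalize(vec):
    """Scale a vector to unit Euclidean length."""
    norm = math.sqrt(sum(x * x for x in vec))
    return [x / norm for x in vec]

print(l2_normalize([3.0, 4.0]))  # [0.6, 0.8]
```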
```
import os

import clip
import torch

class TextProcessor(object):
    def __init__(self, model_name):
        self.model, _ = clip.load(model_name)

    def embed(self, text):
        text_tokens = clip.tokenize(text)
        with torch.no_grad():
            text_features = self.model.encode_text(text_tokens).float()
            text_features /= text_features.norm(dim=-1, keepdim=True)
        return text_features.tolist()[0]
```

## Evaluate

Define search evaluation metrics:

```
from vespa.evaluation import MatchRatio, Recall, ReciprocalRank

eval_metrics = [
    MatchRatio(),
    Recall(at=5),
    Recall(at=100),
    ReciprocalRank(at=5),
    ReciprocalRank(at=100)
]
```

Instantiate `TextProcessor` with a specific CLIP model.

```
text_processor = TextProcessor(model_name="ViT-B/32")
```

Create the `QueryModel`s to be evaluated. In this case we create two query models based on the `ViT-B/32` CLIP model, one that sends the `query` as it is and another that prepends the prompt "A photo of " to the query before sending it, as suggested in the original CLIP paper.

```
from vespa.query import QueryModel

def create_vespa_query(query, prompt=False):
    if prompt:
        query = "A photo of " + query.lower()
    return {
        'yql': 'select * from sources * where ([{"targetNumHits":100}]nearestNeighbor(vit_b_32_image,vit_b_32_text));',
        'hits': 100,
        'ranking.features.query(vit_b_32_text)': text_processor.embed(query),
        'ranking.profile': 'vit-b-32-similarity',
        'timeout': 10
    }

query_model_1 = QueryModel(name="vit_b_32", body_function=create_vespa_query)
query_model_2 = QueryModel(name="vit_b_32_prompt", body_function=lambda x: create_vespa_query(x, prompt=True))
```

Create a connection to the Vespa instance:

```
from vespa.application import Vespa

app = Vespa(
    url=os.environ["VESPA_END_POINT"],
    cert=os.environ["PRIVATE_CERTIFICATE_PATH"]
)
```

Evaluate the query models using the labeled data and metrics defined earlier, identifying each document by its `image_file_name`.
```
from vespa.application import Vespa

result = app.evaluate(
    labeled_data=labeled_data,
    eval_metrics=eval_metrics,
    query_model=[query_model_1, query_model_2],
    id_field="image_file_name"
)
```

The results show that there is a lot of improvement to be made on the pre-trained `ViT-B/32` CLIP model.

```
result
```
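Of the metrics above, reciprocal rank is the least self-describing. A sketch under the standard definition (assumption: `vespa.evaluation.ReciprocalRank` follows it): 1/rank of the first relevant hit within the top N, and 0 when none appears.

```python
def reciprocal_rank(ranked_doc_ids, relevant_ids, at):
    """1/rank of the first relevant document in the top `at` results."""
    for rank, doc_id in enumerate(ranked_doc_ids[:at], start=1):
        if doc_id in relevant_ids:
            return 1.0 / rank
    return 0.0

print(reciprocal_rank(["img3.jpg", "img7.jpg", "img1.jpg"], {"img7.jpg"}, at=5))  # 0.5
```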
# Import libraries ``` import pandas as pd import numpy as np from sklearn.ensemble import RandomForestRegressor from sklearn.metrics import r2_score from sklearn.metrics import mean_absolute_error from sklearn.model_selection import train_test_split from sklearn.model_selection import GridSearchCV import matplotlib.pyplot as plt %matplotlib inline ``` # Read csv ``` data = pd.read_csv('Data/ml.csv') #Check the columns data.columns #Number of rows and columns data.shape #Transform categorical variables to object data['is_banked'] = data['is_banked'].apply(str) data['code_module'] = data['code_module'].apply(str) data['code_presentation'] = data['code_presentation'].apply(str) #Dummies to_dummies = ['is_banked','code_module', 'code_presentation', 'gender', 'region', 'highest_education', 'imd_band', 'age_band', 'disability', 'final_result',] data = pd.get_dummies(data, columns=to_dummies) #Check columns data.columns #Separate target from the rest of columns data_data = data[['date_submitted', 'num_of_prev_attempts', 'studied_credits', 'module_presentation_length', 'is_banked_0', 'is_banked_1', 'code_module_AAA', 'code_module_BBB', 'code_module_CCC', 'code_module_DDD', 'code_module_EEE', 'code_module_FFF', 'code_module_GGG', 'code_presentation_2013B', 'code_presentation_2013J', 'code_presentation_2014B', 'code_presentation_2014J', 'gender_F', 'gender_M', 'region_East Anglian Region', 'region_East Midlands Region', 'region_Ireland', 'region_London Region', 'region_North Region', 'region_North Western Region', 'region_Scotland', 'region_South East Region', 'region_South Region', 'region_South West Region', 'region_Wales', 'region_West Midlands Region', 'region_Yorkshire Region', 'highest_education_A Level or Equivalent', 'highest_education_HE Qualification', 'highest_education_Lower Than A Level', 'highest_education_No Formal quals', 'highest_education_Post Graduate Qualification', 'imd_band_0-10%', 'imd_band_10-20', 'imd_band_20-30%', 'imd_band_30-40%', 
'imd_band_40-50%', 'imd_band_50-60%', 'imd_band_60-70%', 'imd_band_70-80%',
 'imd_band_80-90%', 'imd_band_90-100%', 'imd_band_?', 'age_band_0-35',
 'age_band_35-55', 'age_band_55<=', 'disability_N', 'disability_Y',
 'final_result_Distinction', 'final_result_Fail', 'final_result_Pass',
 'final_result_Withdrawn']]
data_target = data["score"]

# Split Train and Test
X_train, X_test, y_train, y_test = train_test_split(data_data, data_target, test_size=0.3, random_state=42)

# Grid search for parameter selection for a Random Forest Regressor model
param_grid = {
    'n_estimators': [100, 1000],
    'max_features': ['auto', 'sqrt', 'log2'],
    'max_depth': [25, 15]
}
RFR = RandomForestRegressor(n_jobs=-1)
GS = GridSearchCV(RFR, param_grid, cv=5, verbose=3)
GS.fit(X_train, y_train)
GS.best_params_

RFR = RandomForestRegressor(max_depth=25, max_features='sqrt', n_estimators=1000)
RFR.fit(X_train, y_train)
y_train_pred = RFR.predict(X_train)
y_pred = RFR.predict(X_test)

r2 = r2_score(y_train, y_train_pred)
mae = mean_absolute_error(y_train, y_train_pred)
print('TRAIN MODEL METRICS:')
print('The R2 score is: ' + str(r2))
print('The MAE score is: ' + str(mae))
plt.scatter(y_train, y_train_pred)
plt.plot([0,100], [0,100], color='red')
plt.show()

r2 = r2_score(y_test, y_pred)
mae = mean_absolute_error(y_test, y_pred)
print('TEST MODEL METRICS:')
print('The R2 score is: ' + str(r2))
print('The MAE score is: ' + str(mae))
plt.scatter(y_test, y_pred)
plt.plot([0,100], [0,100], color='red')
plt.show()

data.head()
data.info()
```
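For reference, the two reported metrics under their standard definitions (assumption: this matches what sklearn's `r2_score` and `mean_absolute_error` compute), in plain Python:

```python
def mae(y_true, y_pred):
    """Mean absolute error."""
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

def r2(y_true, y_pred):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    mean = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean) ** 2 for t in y_true)
    return 1 - ss_res / ss_tot

y_true, y_pred = [1.0, 2.0, 3.0], [1.0, 2.0, 4.0]
print(r2(y_true, y_pred))   # 0.5
print(mae(y_true, y_pred))  # one wrong-by-one out of three: ~0.333
```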
``` #@title Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # Import all the necessary files! import os import tensorflow as tf from tensorflow.keras import layers from tensorflow.keras import Model # Download the inception v3 weights !wget --no-check-certificate \ https://storage.googleapis.com/mledu-datasets/inception_v3_weights_tf_dim_ordering_tf_kernels_notop.h5 \ -O /tmp/inception_v3_weights_tf_dim_ordering_tf_kernels_notop.h5 # Import the inception model from tensorflow.keras.applications.inception_v3 import InceptionV3 # Create an instance of the inception model from the local pre-trained weights local_weights_file = '/tmp/inception_v3_weights_tf_dim_ordering_tf_kernels_notop.h5' pre_trained_model = InceptionV3(input_shape = (150, 150, 3), include_top = False, weights = None) pre_trained_model.load_weights(local_weights_file) # Make all the layers in the pre-trained model non-trainable for layer in pre_trained_model.layers: layer.trainable=False # Print the model summary pre_trained_model.summary() # Expected Output is extremely large, but should end with: #batch_normalization_v1_281 (Bat (None, 3, 3, 192) 576 conv2d_281[0][0] #__________________________________________________________________________________________________ #activation_273 (Activation) (None, 3, 3, 320) 0 batch_normalization_v1_273[0][0] #__________________________________________________________________________________________________ #mixed9_1 (Concatenate) (None, 3, 3, 768) 0 activation_275[0][0] # 
activation_276[0][0] #__________________________________________________________________________________________________ #concatenate_5 (Concatenate) (None, 3, 3, 768) 0 activation_279[0][0] # activation_280[0][0] #__________________________________________________________________________________________________ #activation_281 (Activation) (None, 3, 3, 192) 0 batch_normalization_v1_281[0][0] #__________________________________________________________________________________________________ #mixed10 (Concatenate) (None, 3, 3, 2048) 0 activation_273[0][0] # mixed9_1[0][0] # concatenate_5[0][0] # activation_281[0][0] #================================================================================================== #Total params: 21,802,784 #Trainable params: 0 #Non-trainable params: 21,802,784 last_layer = pre_trained_model.get_layer('mixed7') print('last layer output shape: ', last_layer.output_shape) last_output = last_layer.output # Expected Output: # ('last layer output shape: ', (None, 7, 7, 768)) # Define a Callback class that stops training once accuracy reaches 99.9% class myCallback(tf.keras.callbacks.Callback): def on_epoch_end(self, epoch, logs={}): if(logs.get('accuracy')>0.999): print("\nReached 99.9% accuracy so cancelling training!") self.model.stop_training = True from tensorflow.keras.optimizers import RMSprop # Flatten the output layer to 1 dimension x = layers.Flatten()(last_output) # Add a fully connected layer with 1,024 hidden units and ReLU activation x = layers.Dense(1024, activation='relu')(x) # Add a dropout rate of 0.2 x = layers.Dropout(0.2)(x) # Add a final sigmoid layer for classification x = layers.Dense (1, activation='sigmoid')(x) model = Model( pre_trained_model.input, x) model.compile(optimizer = RMSprop(lr=0.0001), loss = 'binary_crossentropy', metrics = ['accuracy']) model.summary() # Expected output will be large. 
Last few lines should be: # mixed7 (Concatenate) (None, 7, 7, 768) 0 activation_248[0][0] # activation_251[0][0] # activation_256[0][0] # activation_257[0][0] # __________________________________________________________________________________________________ # flatten_4 (Flatten) (None, 37632) 0 mixed7[0][0] # __________________________________________________________________________________________________ # dense_8 (Dense) (None, 1024) 38536192 flatten_4[0][0] # __________________________________________________________________________________________________ # dropout_4 (Dropout) (None, 1024) 0 dense_8[0][0] # __________________________________________________________________________________________________ # dense_9 (Dense) (None, 1) 1025 dropout_4[0][0] # ================================================================================================== # Total params: 47,512,481 # Trainable params: 38,537,217 # Non-trainable params: 8,975,264 # Get the Horse or Human dataset !wget --no-check-certificate https://storage.googleapis.com/laurencemoroney-blog.appspot.com/horse-or-human.zip -O /tmp/horse-or-human.zip # Get the Horse or Human Validation dataset !wget --no-check-certificate https://storage.googleapis.com/laurencemoroney-blog.appspot.com/validation-horse-or-human.zip -O /tmp/validation-horse-or-human.zip from tensorflow.keras.preprocessing.image import ImageDataGenerator import os import zipfile local_zip = '//tmp/horse-or-human.zip' zip_ref = zipfile.ZipFile(local_zip, 'r') zip_ref.extractall('/tmp/training') zip_ref.close() local_zip = '//tmp/validation-horse-or-human.zip' zip_ref = zipfile.ZipFile(local_zip, 'r') zip_ref.extractall('/tmp/validation') zip_ref.close() # Define our example directories and files train_dir = '/tmp/training' validation_dir = '/tmp/validation' train_horses_dir = os.path.join(train_dir,'horses') train_humans_dir = os.path.join(train_dir,'humans') validation_horses_dir = os.path.join(validation_dir,'horses') 
validation_humans_dir = os.path.join(validation_dir,'humans') train_horses_fnames = os.listdir(train_horses_dir) train_humans_fnames = os.listdir(train_humans_dir) validation_horses_fnames = os.listdir(validation_horses_dir) validation_humans_fnames = os.listdir(validation_humans_dir) print(len(train_horses_fnames)) print(len(train_humans_fnames)) print(len(validation_horses_fnames)) print(len(validation_humans_fnames)) # Expected Output: # 500 # 527 # 128 # 128 # Add our data-augmentation parameters to ImageDataGenerator train_datagen = ImageDataGenerator(rescale = 1.0/255., rotation_range = 40, width_shift_range = 0.2, height_shift_range = 0.2, shear_range = 0.2, zoom_range = 0.2, horizontal_flip = True) # Note that the validation data should not be augmented! test_datagen = ImageDataGenerator(rescale = 1.0/255) # Flow training images in batches of 20 using train_datagen generator train_generator = train_datagen.flow_from_directory(train_dir, batch_size = 20, class_mode = 'binary', target_size = (150, 150)) # Flow validation images in batches of 20 using test_datagen generator validation_generator = test_datagen.flow_from_directory( validation_dir, batch_size = 20, class_mode = 'binary', target_size = (150, 150)) # Expected Output: # Found 1027 images belonging to 2 classes. # Found 256 images belonging to 2 classes. 
# Run this and see how many epochs it should take before the callback # fires, and stops training at 99.9% accuracy # (It should take less than 100 epochs) callbacks = myCallback() history = model.fit(train_generator, validation_data = validation_generator, steps_per_epoch = 100, epochs = 100, validation_steps = 50, verbose = 2, callbacks=[callbacks]) import matplotlib.pyplot as plt acc = history.history['accuracy'] val_acc = history.history['val_accuracy'] loss = history.history['loss'] val_loss = history.history['val_loss'] epochs = range(len(acc)) plt.plot(epochs, acc, 'r', label='Training accuracy') plt.plot(epochs, val_acc, 'b', label='Validation accuracy') plt.title('Training and validation accuracy') plt.legend(loc=0) plt.figure() plt.show() ```
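The `myCallback` used in the training cell above is never defined in this excerpt. A minimal sketch of the usual accuracy-threshold pattern is shown below as a plain Python class so it can run standalone; in the actual notebook it would subclass `tf.keras.callbacks.Callback` and set `self.model.stop_training` instead of its own flag. The constructor, `target` parameter, and simulated accuracy values are assumptions for illustration.

```python
class myCallback:
    """Sketch of an early-stop callback. The real notebook version would
    subclass tf.keras.callbacks.Callback and set self.model.stop_training."""
    def __init__(self, target=0.999):
        self.target = target
        self.stop_training = False

    def on_epoch_end(self, epoch, logs=None):
        logs = logs or {}
        if logs.get('accuracy', 0.0) >= self.target:
            print("\nReached 99.9% accuracy so cancelling training!")
            self.stop_training = True

# Simulated training loop with made-up accuracies: the callback fires
# on the first epoch whose accuracy crosses the 99.9% threshold.
cb = myCallback()
stopped_at = None
for epoch, acc in enumerate([0.90, 0.97, 0.9991, 0.9995]):
    cb.on_epoch_end(epoch, {'accuracy': acc})
    if cb.stop_training:
        stopped_at = epoch
        break
print(stopped_at)  # 2
```

Keras invokes `on_epoch_end` with the epoch's metrics in `logs`, which is why the threshold check lives there rather than in the training loop itself.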
``` import os import pandas as pd filePath = os.path.join(os.getcwd(), 'hindi_corpus_2012_12_19', 'Hi_Newspapers.txt') data = pd.read_csv(filePath, delimiter='\t', names=['source', 'date', 'unnamed_1', 'unnamed_2', 'text']) filePath2 = os.path.join(os.getcwd(), 'hindi_corpus_2012_12_19', 'Hi_Blogs.txt') data2 = pd.read_csv(filePath2, delimiter='\t', names=['source', 'date', 'unnamed_1', 'unnamed_2', 'text']) filePath3 = os.path.join(os.getcwd(), 'hindi_corpus_2012_12_19', 'x_test.txt') data3 = pd.read_csv(filePath3, delimiter='\t', names=['source', 'date', 'unnamed_1', 'unnamed_2', 'text']) data.loc[:,['source', 'date', 'text']] data2.loc[:,['source', 'date', 'text']] data3.loc[:,['source', 'date', 'text']] sample = data.text.loc[1] sample.strip() from textGenerator import TextProcess textProcessor = TextProcess() import nltk from nltk.tokenize import RegexpTokenizer from nltk.stem import WordNetLemmatizer,PorterStemmer from nltk.corpus import stopwords import re lemmatizer = WordNetLemmatizer() stemmer = PorterStemmer() def preprocess(sentence): sentence=str(sentence) sentence = sentence.lower() sentence=sentence.replace('{html}',"") cleanr = re.compile('<.*?>') cleantext = re.sub(cleanr, '', sentence) rem_url=re.sub(r'http\S+', '',cleantext) # rem_num = re.sub('[0-9]+', '', rem_url) # tokenizer = RegexpTokenizer(r'\w+') # tokens = tokenizer.tokenize(rem_num) # filtered_words = [w for w in tokens if len(w) > 2 if not w in stopwords.words('english')] # stem_words=[stemmer.stem(w) for w in filtered_words] # lemma_words=[lemmatizer.lemmatize(w) for w in stem_words] # return " ".join(filtered_words) return rem_url preprocess(sample) from textGenerator import TextGenerator textgen = TextGenerator() sample = textgen.getRandomText() # preprocess(sample) sample temp = sample sample = temp preprocess(sample) # print(f"Sample : {sample}") sample = sample.strip() # print(f"Sample : {sample}") sample = sample.splitlines() print(f"Sample : {sample}") sample = sample[0] sample 
= sample.split() print(f"Sample : {sample}") # word = sample[-5] # print(f"Word : {word}") def isEndOfLine(x): if u'\u0964' <= x <= u'\u0965' : return True return False def isMatra(x): if (u'\u0901' <= x <= u'\u0903' or u'\u093C' <= x <= u'\u094F' or u'\u0951' <= x <= u'\u0954' or u'\u0951' <= x <= u'\u0954' or u'\u0962' <= x <= u'\u0963'): return True return False def isVowel(x): if (u'\u0905' <= x <= u'\u0914' or u'\u0960' <= x <= u'\u0961'): return True return False def isConsonant(x): if (u'\u0915' <= x <= u'\u0939' or u'\u0958' <= x <= u'\u095F'): return True return False def isOM(x): if x == u'\u0950' : return True return False pd1 =data.text pd2 =data2.text new_sample = pd.concat([pd1, pd2], ignore_index=True) new_sample.head() ``` ``` vowel=0 consonant=0 matra=0 eofLine=0 OM=0 vowel_matra=0 half_vowel=0 consonant_matra=0 constant_half_matra=0; word_start_matra=0 half_consonant_vowel=0 half_consonant_matra=0 dictionary_vowel = {} dictionary_consonant = {} words_with_half_consonant_following_vowel={} words_with_half_consonant_following_matra={} words_with_vowels_following_matra={} constant_and_vowel_combination={} words_starting_with_matra = [] words_having_half_vowel=[] wordCount = 0 for para in new_sample: words = para.split() for word in words: characters = list(word) wordCount += 1 # if(isMatra(characters[0])): # word_start_matra+=1 for index, char in enumerate(characters): if(isMatra(char)): matra = matra+1 if(index==0): word_start_matra += 1 words_starting_with_matra.append(word) elif(isVowel(char)): if(index+1 < len(characters)): if(isMatra(characters[index+1])): vowel_matra=vowel_matra+1 if word in words_with_vowels_following_matra: words_with_vowels_following_matra[word] += 1 else: words_with_vowels_following_matra.update({word: 1}) if(characters[index+1] == u'\u094D'): half_vowel+=1 words_having_half_vowel.append(word) if char in dictionary_vowel: dictionary_vowel[char] += 1 else: dictionary_vowel.update({char: 1}) vowel=vowel+1 
elif(isConsonant(char)): if(index+1 < len(characters)): if(isMatra(characters[index+1])): consonant_matra=consonant_matra+1 if(index+2 < len(characters)): if(characters[index+1] == u'\u094D'): if(isVowel(characters[index+2])): half_consonant_vowel+=1 if word in words_with_half_consonant_following_vowel: words_with_half_consonant_following_vowel[word] += 1 else: words_with_half_consonant_following_vowel.update({word: 1}) if(isMatra(characters[index+2])): half_consonant_matra+=1 if word in words_with_half_consonant_following_matra: words_with_half_consonant_following_matra[word] += 1 else: words_with_half_consonant_following_matra.update({word: 1}) if char in dictionary_consonant: dictionary_consonant[char] += 1 else: dictionary_consonant.update({char: 1}) consonant = consonant+1 elif(isEndOfLine(char)): eofLine = eofLine+1 elif(isOM(char)): OM = OM+1 print(f"Matra : {matra}") print(f"Vowel : {vowel}") print(f"Consonant : {consonant}") print(f"end of line : {eofLine}") print(f"OM : {OM}") print(f"Word starting with a Matra : {word_start_matra}") print(f"Vowels followed by a matra : {vowel_matra}") print(f"Consonant followed by a matra : {consonant_matra}") print(f"half vowel : {half_vowel}") print(f"Number of words : {wordCount}") print(f"half cononants followed by a matra : {half_consonant_matra}") print(f"half cononants followed by a Vowel: {half_consonant_vowel}") print(f"Words having half vowel: {words_having_half_vowel}") print(f"Words starting with a matra : {words_starting_with_matra}") print(f"Words with vowels followed by a matra : {words_with_vowels_following_matra}") for allKeys in dictionary_vowel: print(f"Frequency of {allKeys} : {dictionary_vowel[allKeys]}") for allKeys in dictionary_consonant: print(f"Frequency of {allKeys} : {dictionary_consonant[allKeys]}") for allKeys in words_with_half_consonant_following_vowel: print(f"{allKeys} : {words_with_half_consonant_following_vowel[allKeys]}") for allKeys in words_with_half_consonant_following_matra: 
print(f" {allKeys} : {words_with_half_consonant_following_matra[allKeys]}") def isValidWord(): pass def isEndOfLine(x): if u'\u0964' <= x <= u'\u0965' : return True return False def isMatra(x): if (u'\u0901' <= x <= u'\u0903' or u'\u093C' <= x <= u'\u094F' or u'\u0951' <= x <= u'\u0954' or u'\u0951' <= x <= u'\u0954' or u'\u0962' <= x <= u'\u0963'): return True return False def isVowel(x): if (u'\u0905' <= x <= u'\u0914' or u'\u0960' <= x <= u'\u0961'): return True return False def isConsonant(x): if (u'\u0915' <= x <= u'\u0939' or u'\u0958' <= x <= u'\u095F'): return True return False def isOM(x): if x == u'\u0950' : return True return False sample = 'येदयुरप्पा, उनके बेटे और सांसद बी वाई राघवेन्द्र, बी वाई विजयेन्द्र, दामाद आर एन सोहन कुमार कोर्ट में मौजूद थे। कोर्ट ने 16 नवंबर को इन्हें मौजूद होने के लिए समन जारी किया था। अदालत ने यह भी कहा कि मामले में जांच खत्म हो चुकी है।' sample.split()[7] detectChars = {} def detectCharsFunc(word): chars = list(word) a = '' enum_iter = enumerate(chars) flag = 0 for index, char in enum_iter: if(isVowel(char) or isConsonant(char)): flag=0 if a is not '': if a in detectChars: detectChars[a] += 1 else: detectChars.update({a: 1}) a = char elif(isMatra(char)): a += char if char==u'\u094D': flag += 1 if flag>1: flag = 0 while (index+1 < len(chars) and isMatra(chars[index+1])): index, char = next(enum_iter) a += char if a in detectChars: detectChars[a] += 1 else: detectChars.update({a: 1}) a = '' continue if (index+1 >= len(chars)): continue index, char = next(enum_iter) a += char else: flag = 0 if a in detectChars: detectChars[a] += 1 else: detectChars.update({a: 1}) detectCharsFunc(word) detectChars for para in new_sample: words = para.split() for word in words: detectCharsFunc(word) detectChars detectConsonants_Matra={} def detectConsonantMatra(word): chars = list(word) # print(f'chars:{chars}') a = '' enum_iter = enumerate(chars) flag = 0 for index, char in enum_iter: a ='' if(isConsonant(char)): a+=char i=0; while (index+1 < 
len(chars) and isMatra(chars[index+1])): index, char = next(enum_iter) a += char i+=1 if a in detectConsonants_Matra: continue elif(i!=0): detectConsonants_Matra.update({a: i}) for para in new_sample: words = para.split() for word in words: detectConsonantMatra(word) detectConsonants_Matra df1=pd.DataFrame.from_dict(dictionary_vowel, orient='index',columns=[ 'Frequency']) print(df1.index) ind=[] ind=list(df1.index) indi = [ 'u' + f"'\\u" + f"{str(ord(x))}'" for x in list(df1.index)] import seaborn as sns import matplotlib.pyplot as plt import numpy as np from matplotlib import font_manager as fm, rcParams import matplotlib.pyplot as plt from matplotlib.font_manager import FontProperties sns.set(font="Meiryo") df1.to_csv('vowel_freq.csv') Data1 = pd.read_csv(r"/Users/shivanitripathi/Documents/study material/DS/OLAM/datagen/vowel_freq.csv") Data1.index Data1 sns.barplot(y='Frequency',x=Data1.index,data=Data1.iloc[0:750,:]) df2=pd.DataFrame.from_dict(dictionary_consonant, orient='index',columns=[ 'Frequency']) print(df2.index) df2.to_csv('consonant_freq.csv',index = True) Data2 = pd.read_csv(r"/Users/shivanitripathi/Documents/study material/DS/OLAM/datagen/consonant_freq.csv") Data2 sns.set(font="Meiryo",font_scale=0.5) sns.barplot(y='Frequency',x=Data2.index,data=Data2.iloc[0:1500,:]) df3=pd.DataFrame.from_dict(words_with_half_consonant_following_vowel, orient='index',columns=[ 'Frequency']) print(df3.index) df3=df3.sort_values(by=['Frequency'],ascending=False) df3.to_csv('half_cononant_vowel.csv',index = True) Data3 = pd.read_csv(r"/Users/shivanitripathi/Documents/study material/DS/OLAM/datagen/half_cononant_vowel.csv") Data3=Data3[0:20] Data3 sns.set(font="Meiryo",font_scale=0.9) sns.barplot(y='Frequency',x=Data3.index,data=Data3.iloc[0:1500,:]) df4=pd.DataFrame.from_dict(words_with_half_consonant_following_matra, orient='index',columns=[ 'Frequency']) print(df4.index) df4=df4.sort_values(by=['Frequency'],ascending=False) df4.to_csv('half_cononant_matra.csv',index 
= True) Data4 = pd.read_csv(r"/Users/shivanitripathi/Documents/study material/DS/OLAM/datagen/half_cononant_matra.csv") Data4=Data4[0:20] Data4 sns.set(font="Meiryo",font_scale=0.9) sns.barplot(y='Frequency',x=Data4.index,data=Data4.iloc[0:1500,:]) df5=pd.DataFrame.from_dict(detectChars, orient='index',columns=[ 'Frequency']) print(df5.index) df5=df5.sort_values(by=['Frequency'],ascending=False) # df5.to_csv('detectChars.csv',index = True) Data5 = pd.read_csv(r"/Users/shivanitripathi/Documents/study material/DS/OLAM/datagen/detectChars.csv") Data5=Data5[0:20] Data5 sns.set(font="Meiryo",font_scale=0.9) sns.barplot(y='Frequency',x=Data5.index,data=Data5.iloc[0:1500,:]) df6=pd.DataFrame.from_dict(detectConsonants_Matra, orient='index',columns=[ 'Frequency']) print(df6.index) df6=df6.sort_values(by=['Frequency'],ascending=False) # df6.to_csv('detectConsonants_Matra.csv',index = True) Data6 = pd.read_csv(r"/Users/shivanitripathi/Documents/study material/DS/OLAM/datagen/detectConsonants_Matra.csv") Data6=Data6[0:20] Data6 sns.set(font="Meiryo",font_scale=0.9) sns.barplot(y='Frequency',x=Data6.index,data=Data6.iloc[0:1500,:]) encoder_dict= { "ऀ":"0", "ँ":"1", "ं":"2", "ः":"3", "ऄ":"4", "अ":"5", "आ":"6", "इ":"7", "ई":"8", "उ":"9", "ऊ":"10", "ऋ":"11", "ऌ":"12", "ऍ":"13", "ऎ":"14", "ए":"15", "ऐ":"16", "ऑ":"17", "ऒ":"18", "ओ":"19", "औ":"20", "क":"21", "ख":"22", "ग":"23", "घ":"24", "ङ":"25", "च":"26", "छ":"27", "ज":"28", "झ":"29", "ञ":"30", "ट":"31", "ठ":"32", "ड":"33", "ढ":"34", "ण":"35", "त":"36", "थ":"37", "द":"38", "ध":"39", "न":"40", "ऩ":"41", "प":"42", "फ":"43", "ब":"44", "भ":"45", "म":"46", "य":"47", "र":"48", "ऱ":"49", "ल":"50", "ळ":"51", "ऴ":"52", "व":"53", "श":"54", "ष":"55", "स":"56", "ा":"57", "ि":"58", "ी":"59", "ु":"60", "ू":"61", "ृ":"62", "ॄ":"63", "ॆ":"64", "े":"65", "ै":"66", "ॉ":"67", "ॊ":"68", "ो":"69", "ौ":"70", "्":"71", "ॎ":"72", "ॐ":"73", "ॏ":"74", "।":"75", "॥":"76", "०":"77", "१":"78", "२":"79", "३":"80", "४":"81", "५":"82", "६":"83", "७":"84", 
"८":"85", "९":"86", "ॕ":"87", "ॖ":"88", "ॗ":"89", "॰":"90", "ॱ":"91", "ॲ":"92", "ॳ":"93", "ॴ":"94", "ॵ":"95", "ॶ":"96", "ॷ":"97", "ॸ":"98", "ॹ":"99", "ॺ":"100", "ॻ":"101", "ॼ":"102", "ॽ":"103", "ॾ":"104", "ॿ":"105", "क़":"106", "ख़":"107", "ग़":"108", "ज़":"109", "ड़":"110", "ढ़":"111", "फ़":"112", "य़":"113", "ॠ":"114", "ॡ":"115", "ॢ":"116", "ॣ":"117" } from textGenerator import TextGenerator # textGen = TextGenerator(filePath=os.path.join('hindiTexts.csv')) textGen = TextGenerator() data = textGen.data['text'] print(data[0]) chr(97) encoder_dict k = 118 for i in range (ord('0'), ord('9')+1): print(i, chr(i)) encoder_dict[chr(i)] = k k = k+1 k = 128 for i in range (ord('A'), ord('Z')+1): print(i, chr(i)) encoder_dict[chr(i)] = k k = k+1 chr(ord('z') - ord('a') + ord('A')) import json with open('characterClasses.json', 'w') as fp: json.dump(encoder_dict, fp) characters = list(encoder_dict.keys())[:154] classesCharacter = {} for i in range(0,154): classesCharacter[i] = characters[i] print(classesCharacter) with open ('characterToClasses.json', 'w') as fp: json.dump(classesCharacter, fp) ```
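The Unicode range checks used throughout the notebook can be exercised in isolation. This is a small self-contained recap of `isMatra`/`isVowel`/`isConsonant` with the same code-point ranges, applied to one sample word; note the matra test runs first, mirroring the `if`/`elif` order of the corpus-counting loop above.

```python
# Devanagari code-point tests, same ranges as the notebook's helpers.
def is_matra(ch):
    return ('\u0901' <= ch <= '\u0903' or '\u093C' <= ch <= '\u094F'
            or '\u0951' <= ch <= '\u0954' or '\u0962' <= ch <= '\u0963')

def is_vowel(ch):
    return '\u0905' <= ch <= '\u0914' or '\u0960' <= ch <= '\u0961'

def is_consonant(ch):
    return '\u0915' <= ch <= '\u0939' or '\u0958' <= ch <= '\u095F'

def char_counts(word):
    # Matra is tested first, mirroring the if/elif order of the corpus loop.
    counts = {'matra': 0, 'vowel': 0, 'consonant': 0}
    for ch in word:
        if is_matra(ch):
            counts['matra'] += 1
        elif is_vowel(ch):
            counts['vowel'] += 1
        elif is_consonant(ch):
            counts['consonant'] += 1
    return counts

# In 'नमस्ते': U+0928, U+092E, U+0938, U+0924 are consonants;
# U+094D (virama) and U+0947 fall in the matra ranges.
print(char_counts('नमस्ते'))  # {'matra': 2, 'vowel': 0, 'consonant': 4}
```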
# Locality Sensitive Hashing

Locality Sensitive Hashing (LSH) provides a fast, efficient approximate nearest neighbor search. The algorithm scales well with respect to the number of data points as well as dimensions. In this assignment, you will

* Implement the LSH algorithm for approximate nearest neighbor search
* Examine the accuracy for different documents by comparing against brute force search, and also contrast runtimes
* Explore the role of the algorithm’s tuning parameters in the accuracy of the method

**Note to Amazon EC2 users**: To conserve memory, make sure to stop all the other notebooks before running this notebook.

## Import necessary packages

The following code block will check if you have the correct version of GraphLab Create. Version 1.8.5 or later will do. To upgrade, read [this page](https://turi.com/download/upgrade-graphlab-create.html).

```
import numpy as np
import graphlab
from scipy.sparse import csr_matrix
from sklearn.metrics.pairwise import pairwise_distances
import time
from copy import copy
import matplotlib.pyplot as plt
%matplotlib inline

'''Check GraphLab Create version'''
from distutils.version import StrictVersion
assert (StrictVersion(graphlab.version) >= StrictVersion('1.8.5')), 'GraphLab Create must be version 1.8.5 or later.'

'''compute norm of a sparse vector
   Thanks to: Jaiyam Sharma'''
def norm(x):
    sum_sq = x.dot(x.T)
    norm = np.sqrt(sum_sq)
    return (norm)
```

## Load in the Wikipedia dataset

```
wiki = graphlab.SFrame('people_wiki.gl/')
```

For this assignment, let us assign a unique ID to each document.

```
wiki = wiki.add_row_number()
wiki
```

## Extract TF-IDF matrix

We first use GraphLab Create to compute a TF-IDF representation for each document.

```
wiki['tf_idf'] = graphlab.text_analytics.tf_idf(wiki['text'])
wiki
```

For the remainder of the assignment, we will use sparse matrices. Sparse matrices are [matrices](https://en.wikipedia.org/wiki/Matrix_(mathematics%29) that have a small number of nonzero entries.
A good data structure for sparse matrices would only store the nonzero entries to save space and speed up computation. SciPy provides a highly-optimized library for sparse matrices. Many matrix operations available for NumPy arrays are also available for SciPy sparse matrices. We first convert the TF-IDF column (in dictionary format) into the SciPy sparse matrix format. ``` def sframe_to_scipy(column): """ Convert a dict-typed SArray into a SciPy sparse matrix. Returns ------- mat : a SciPy sparse matrix where mat[i, j] is the value of word j for document i. mapping : a dictionary where mapping[j] is the word whose values are in column j. """ # Create triples of (row_id, feature_id, count). x = graphlab.SFrame({'X1':column}) # 1. Add a row number. x = x.add_row_number() # 2. Stack will transform x to have a row for each unique (row, key) pair. x = x.stack('X1', ['feature', 'value']) # Map words into integers using a OneHotEncoder feature transformation. f = graphlab.feature_engineering.OneHotEncoder(features=['feature']) # We first fit the transformer using the above data. f.fit(x) # The transform method will add a new column that is the transformed version # of the 'word' column. x = f.transform(x) # Get the feature mapping. mapping = f['feature_encoding'] # Get the actual word id. x['feature_id'] = x['encoded_features'].dict_keys().apply(lambda x: x[0]) # Create numpy arrays that contain the data for the sparse matrix. i = np.array(x['id']) j = np.array(x['feature_id']) v = np.array(x['value']) width = x['id'].max() + 1 height = x['feature_id'].max() + 1 # Create a sparse matrix. mat = csr_matrix((v, (i, j)), shape=(width, height)) return mat, mapping ``` The conversion should take a few minutes to complete. 
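Stripped of the GraphLab plumbing, `sframe_to_scipy` boils down to building `(row, column, value)` triples and handing them to `csr_matrix`. A toy illustration of just that step, using a hypothetical two-document corpus in the same dict format as `wiki['tf_idf']` (the words and values are made up):

```python
import numpy as np
from scipy.sparse import csr_matrix

# Hypothetical mini-corpus standing in for wiki['tf_idf'].
docs = [{'obama': 1.5, 'president': 0.7},
        {'biden': 1.2, 'president': 0.9}]

# Map each word to a column id, then build (row, col, value) triples.
vocab = {w: j for j, w in enumerate(sorted({w for d in docs for w in d}))}
rows, cols, vals = zip(*[(i, vocab[w], v)
                         for i, d in enumerate(docs)
                         for w, v in d.items()])

# csr_matrix assembles the triples into a compressed sparse row matrix.
mat = csr_matrix((vals, (rows, cols)), shape=(len(docs), len(vocab)))

print(mat.shape)                   # (2, 3)
print(mat[1, vocab['president']])  # 0.9
```

The real conversion does the same thing at scale: `mapping` plays the role of `vocab`, and the stacked SFrame supplies the triples.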
```
start = time.time()
corpus, mapping = sframe_to_scipy(wiki['tf_idf'])
end = time.time()
print end-start
```

**Checkpoint**: The following code block should print 'Check passed correctly!', indicating that your matrix contains TF-IDF values for 59071 documents and 547979 unique words. Otherwise, the assertion will raise an error.

```
assert corpus.shape == (59071, 547979)
print 'Check passed correctly!'
```

## Train an LSH model

LSH performs an efficient neighbor search by randomly partitioning all reference data points into different bins. Today we will build a popular variant of LSH known as random binary projection, which approximates cosine distance. There are other variants we could use for other choices of distance metrics.

The first step is to generate a collection of random vectors from the standard Gaussian distribution.

```
def generate_random_vectors(num_vector, dim):
    return np.random.randn(dim, num_vector)
```

To visualize these Gaussian random vectors, let's look at an example in low dimensions. Below, we generate 3 random vectors, each of dimension 5.

```
# Generate 3 random vectors of dimension 5, arranged into a single 5 x 3 matrix.
np.random.seed(0) # set seed=0 for consistent results
generate_random_vectors(num_vector=3, dim=5)
```

We now generate random vectors of the same dimensionality as our vocabulary size (547979). Each vector can be used to compute one bit in the bin encoding. We generate 16 vectors, leading to a 16-bit encoding of the bin index for each document.

```
# Generate 16 random vectors of dimension 547979
np.random.seed(0)
random_vectors = generate_random_vectors(num_vector=16, dim=547979)
random_vectors.shape
```

Next, we partition data points into bins. Instead of using explicit loops, we'd like to utilize matrix operations for greater efficiency. Let's walk through the construction step by step.

We'd like to decide which bin document 0 should go into.
Since 16 random vectors were generated in the previous cell, we have 16 bits to represent the bin index. The first bit is given by the sign of the dot product between the first random vector and the document's TF-IDF vector.

```
doc = corpus[0, :] # vector of tf-idf values for document 0
doc.dot(random_vectors[:, 0]) >= 0 # True if positive sign; False if negative sign
```

Similarly, the second bit is computed as the sign of the dot product between the second random vector and the document vector.

```
doc.dot(random_vectors[:, 1]) >= 0 # True if positive sign; False if negative sign
```

We can compute all of the bin index bits at once as follows. Note the absence of an explicit `for` loop over the 16 vectors. Matrix operations let us batch dot-product computation in a highly efficient manner, unlike the `for` loop construction. Given the relative inefficiency of loops in Python, the advantage of matrix operations is even greater.

```
doc.dot(random_vectors) >= 0 # should return an array of 16 True/False bits
np.array(doc.dot(random_vectors) >= 0, dtype=int) # display index bits in 0/1's
```

All documents that obtain exactly this bit vector will be assigned to the same bin. We'd like to repeat the identical operation on all documents in the Wikipedia dataset and compute the corresponding bin indices. Again, we use matrix operations so that no explicit loop is needed.

```
corpus[0:2].dot(random_vectors) >= 0 # compute bit indices of first two documents
corpus.dot(random_vectors) >= 0 # compute bit indices of ALL documents
```

We're almost done! To make it convenient to refer to individual bins, we convert each 16-bit binary bin index into a single integer:

```
Bin index                            integer
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0] => 0
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1] => 1
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0] => 2
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1] => 3
...
[1,1,1,1,1,1,1,1,1,1,1,1,1,1,0,0] => 65532
[1,1,1,1,1,1,1,1,1,1,1,1,1,1,0,1] => 65533
[1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,0] => 65534
[1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1] => 65535 (= 2^16-1)
```

By the [rules of binary number representation](https://en.wikipedia.org/wiki/Binary_number#Decimal), we just need to compute the dot product between the document vector and the vector consisting of powers of 2:

```
doc = corpus[0, :] # first document
index_bits = (doc.dot(random_vectors) >= 0)
powers_of_two = (1 << np.arange(15, -1, -1))
print index_bits
print powers_of_two
print index_bits.dot(powers_of_two)
```

Since it's the dot product again, we batch it with a matrix operation:

```
index_bits = corpus.dot(random_vectors) >= 0
index_bits.dot(powers_of_two)
```

This array gives us the integer index of the bins for all documents.

Now we are ready to complete the following function. Given the integer bin indices for the documents, you should compile a list of document IDs that belong to each bin. Since a list is to be maintained for each unique bin index, a dictionary of lists is used.

1. Compute the integer bin indices. This step is already completed.
2. For each document in the dataset, do the following:
   * Get the integer bin index for the document.
   * Fetch the list of document ids associated with the bin; if no list yet exists for this bin, assign the bin an empty list.
   * Add the document id to the end of the list.

```
def train_lsh(data, num_vector=16, seed=None):

    dim = data.shape[1]
    if seed is not None:
        np.random.seed(seed)
    random_vectors = generate_random_vectors(num_vector, dim)

    powers_of_two = 1 << np.arange(num_vector-1, -1, -1)

    table = {}

    # Partition data points into bins
    bin_index_bits = (data.dot(random_vectors) >= 0)

    # Encode bin index bits into integers
    bin_indices = bin_index_bits.dot(powers_of_two)

    # Update `table` so that `table[i]` is the list of document ids with bin index equal to i.
for data_index, bin_index in enumerate(bin_indices): if bin_index not in table: # If no list yet exists for this bin, assign the bin an empty list. table[bin_index] = [] # Fetch the list of document ids associated with the bin and add the document id to the end. table[bin_index].append(data_index) model = {'data': data, 'bin_index_bits': bin_index_bits, 'bin_indices': bin_indices, 'table': table, 'random_vectors': random_vectors, 'num_vector': num_vector} return model ``` **Checkpoint**. ``` model = train_lsh(corpus, num_vector=16, seed=143) table = model['table'] if 0 in table and table[0] == [39583] and \ 143 in table and table[143] == [19693, 28277, 29776, 30399]: print 'Passed!' else: print 'Check your code.' ``` **Note.** We will be using the model trained here in the following sections, unless otherwise indicated. ## Inspect bins Let us look at some documents and see which bins they fall into. ``` wiki[wiki['name'] == 'Barack Obama'] ``` **Quiz Question**. What is the document `id` of Barack Obama's article? **Quiz Question**. Which bin contains Barack Obama's article? Enter its integer index. ``` model['bin_indices'][wiki[wiki['name'] == 'Barack Obama']['id']] #obama bin index ``` Recall from the previous assignment that Joe Biden was a close neighbor of Barack Obama. ``` wiki[wiki['name'] == 'Joe Biden'] ``` **Quiz Question**. Examine the bit representations of the bins containing Barack Obama and Joe Biden. In how many places do they agree? 1. 16 out of 16 places (Barack Obama and Joe Biden fall into the same bin) 2. 14 out of 16 places 3. 12 out of 16 places 4. 10 out of 16 places 5. 
8 out of 16 places

```
obama_bits = np.array(model['bin_index_bits'][wiki[wiki['name'] == 'Barack Obama']['id']], dtype=int)
print obama_bits
biden_bits = np.array(model['bin_index_bits'][wiki[wiki['name'] == 'Joe Biden']['id']], dtype=int)
print biden_bits
print "# of matching bits = " + str(len(obama_bits[0]) - np.bitwise_xor(obama_bits[0], biden_bits[0]).sum())
```

Compare the result with a former British diplomat, whose bin representation agrees with Obama's in only 8 out of 16 places.

```
wiki[wiki['name']=='Wynn Normington Hugh-Jones']
print np.array(model['bin_index_bits'][22745], dtype=int) # list of 0/1's
print model['bin_indices'][22745] # integer format
model['bin_index_bits'][35817] == model['bin_index_bits'][22745]
```

How about the documents in the same bin as Barack Obama? Are they necessarily more similar to Obama than Biden is? Let's look at which documents are in the same bin as the Barack Obama article.

```
model['table'][model['bin_indices'][35817]] # all document ids in the same bin as Barack Obama
```

There are four other documents that belong to the same bin. Which documents are they?

```
doc_ids = list(model['table'][model['bin_indices'][35817]])
doc_ids.remove(35817) # display documents other than Obama

docs = wiki.filter_by(values=doc_ids, column_name='id') # filter by id column
docs
```

It turns out that Joe Biden is much closer to Barack Obama than any of the four documents, even though Biden's bin representation differs from Obama's by 2 bits.
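Counting bit agreement between two bin signatures is worth isolating as a one-liner. A standalone sketch with made-up 16-bit signatures (these are illustrative values, not the actual Obama/Biden bits):

```python
import numpy as np

# Two hypothetical 16-bit bin signatures (NOT the real document bits).
a = np.array([1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0, 1, 0, 0, 1, 1])
b = np.array([1, 0, 1, 0, 0, 0, 1, 0, 1, 1, 0, 1, 0, 1, 1, 1])

agreement = int((a == b).sum())  # number of places where the signatures match
hamming = len(a) - agreement     # equivalently, the Hamming distance
print(agreement)  # 14
print(hamming)    # 2
```

This is the same quantity the XOR-based expression in the notebook computes: `len(bits) - np.bitwise_xor(a, b).sum()`.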
``` def cosine_distance(x, y): xy = x.dot(y.T) dist = xy/(norm(x)*norm(y)) return 1-dist[0,0] obama_tf_idf = corpus[35817,:] biden_tf_idf = corpus[24478,:] print '================= Cosine distance from Barack Obama' print 'Barack Obama - {0:24s}: {1:f}'.format('Joe Biden', cosine_distance(obama_tf_idf, biden_tf_idf)) for doc_id in doc_ids: doc_tf_idf = corpus[doc_id,:] print 'Barack Obama - {0:24s}: {1:f}'.format(wiki[doc_id]['name'], cosine_distance(obama_tf_idf, doc_tf_idf)) ``` **Moral of the story**. Similar data points will in general _tend to_ fall into _nearby_ bins, but that's all we can say about LSH. In a high-dimensional space such as text features, we often get unlucky with our selection of only a few random vectors such that dissimilar data points go into the same bin while similar data points fall into different bins. **Given a query document, we must consider all documents in the nearby bins and sort them according to their actual distances from the query.** ## Query the LSH model Let us first implement the logic for searching nearby neighbors, which goes like this: ``` 1. Let L be the bit representation of the bin that contains the query documents. 2. Consider all documents in bin L. 3. Consider documents in the bins whose bit representation differs from L by 1 bit. 4. Consider documents in the bins whose bit representation differs from L by 2 bits. ... ``` To obtain candidate bins that differ from the query bin by some number of bits, we use `itertools.combinations`, which produces all possible subsets of a given list. See [this documentation](https://docs.python.org/3/library/itertools.html#itertools.combinations) for details. ``` 1. Decide on the search radius r. This will determine the number of different bits between the two vectors. 2. For each subset (n_1, n_2, ..., n_r) of the list [0, 1, 2, ..., num_vector-1], do the following: * Flip the bits (n_1, n_2, ..., n_r) of the query bin to produce a new bit vector. 
   * Fetch the list of documents belonging to the bin indexed by the new bit vector.
   * Add those documents to the candidate set.
```

Each line of output from the following cell is a 3-tuple indicating where the candidate bin would differ from the query bin. For instance,

```
(0, 1, 3)
```

indicates that the candidate bin differs from the query bin in the first, second, and fourth bits.

```
from itertools import combinations

num_vector = 16
search_radius = 3

for diff in combinations(range(num_vector), search_radius):
    print diff
```

With this output in mind, implement the logic for nearby bin search:

```
def search_nearby_bins(query_bin_bits, table, search_radius=2, initial_candidates=set()):
    """
    For a given query vector and trained LSH model, return all candidate neighbors for
    the query among all bins within the given search radius.

    Example usage
    -------------
    >>> model = train_lsh(corpus, num_vector=16, seed=143)
    >>> q = model['bin_index_bits'][0] # vector for the first document

    >>> candidates = search_nearby_bins(q, model['table'])
    """
    num_vector = len(query_bin_bits)
    powers_of_two = 1 << np.arange(num_vector-1, -1, -1)

    # Allow the user to provide an initial set of candidates.
    candidate_set = copy(initial_candidates)

    for different_bits in combinations(range(num_vector), search_radius):
        # Flip the bits (n_1,n_2,...,n_r) of the query bin to produce a new bit vector.
        ## Hint: you can iterate over a tuple like a list
        alternate_bits = copy(query_bin_bits)
        for i in different_bits:
            alternate_bits[i] = 0 if alternate_bits[i] == 1 else 1

        # Convert the new bit vector to an integer index
        nearby_bin = alternate_bits.dot(powers_of_two)

        # Fetch the list of documents belonging to the bin indexed by the new bit vector.
        # Then add those documents to candidate_set
        # Make sure that the bin exists in the table!
        # Hint: update() method for sets lets you add an entire list to the set
        if nearby_bin in table:
            # Update candidate_set with the documents in this bin.
candidate_set.update(table[nearby_bin]) return candidate_set ``` **Checkpoint**. Running the function with `search_radius=0` should yield the list of documents belonging to the same bin as the query. ``` obama_bin_index = model['bin_index_bits'][35817] # bin index of Barack Obama candidate_set = search_nearby_bins(obama_bin_index, model['table'], search_radius=0) if candidate_set == set([35817, 21426, 53937, 39426, 50261]): print 'Passed test' else: print 'Check your code' print 'List of documents in the same bin as Obama: 35817, 21426, 53937, 39426, 50261' ``` **Checkpoint**. Running the function with `search_radius=1` adds more documents to the fore. ``` candidate_set = search_nearby_bins(obama_bin_index, model['table'], search_radius=1, initial_candidates=candidate_set) if candidate_set == set([39426, 38155, 38412, 28444, 9757, 41631, 39207, 59050, 47773, 53937, 21426, 34547, 23229, 55615, 39877, 27404, 33996, 21715, 50261, 21975, 33243, 58723, 35817, 45676, 19699, 2804, 20347]): print 'Passed test' else: print 'Check your code' ``` **Note**. Don't be surprised if few of the candidates look similar to Obama. This is why we add as many candidates as our computational budget allows and sort them by their distance to the query. Now we have a function that can return all the candidates from neighboring bins. Next we write a function to collect all candidates and compute their true distance to the query. ``` def query(vec, model, k, max_search_radius): data = model['data'] table = model['table'] random_vectors = model['random_vectors'] num_vector = random_vectors.shape[1] # Compute bin index for the query vector, in bit representation. 
bin_index_bits = (vec.dot(random_vectors) >= 0).flatten() # Search nearby bins and collect candidates candidate_set = set() for search_radius in xrange(max_search_radius+1): candidate_set = search_nearby_bins(bin_index_bits, table, search_radius, initial_candidates=candidate_set) # Sort candidates by their true distances from the query nearest_neighbors = graphlab.SFrame({'id':candidate_set}) candidates = data[np.array(list(candidate_set)),:] nearest_neighbors['distance'] = pairwise_distances(candidates, vec, metric='cosine').flatten() return nearest_neighbors.topk('distance', k, reverse=True), len(candidate_set) ``` Let's try it out with Obama: ``` query(corpus[35817,:], model, k=10, max_search_radius=3) ``` To identify the documents, it's helpful to join this table with the Wikipedia table: ``` query(corpus[35817,:], model, k=10, max_search_radius=3)[0].join(wiki[['id', 'name']], on='id').sort('distance') ``` We have shown that we have a working LSH implementation! # Experimenting with your LSH implementation In the following sections we have implemented a few experiments so that you can gain intuition for how your LSH implementation behaves in different situations. This will help you understand the effect of searching nearby bins and the performance of LSH versus computing nearest neighbors using a brute force search. ## Effect of nearby bin search How does nearby bin search affect the outcome of LSH? There are three variables that are affected by the search radius: * Number of candidate documents considered * Query time * Distance of approximate neighbors from the query Let us run LSH multiple times, each with different radii for nearby bin search. We will measure the three variables as discussed above. 
``` wiki[wiki['name']=='Barack Obama'] num_candidates_history = [] query_time_history = [] max_distance_from_query_history = [] min_distance_from_query_history = [] average_distance_from_query_history = [] for max_search_radius in xrange(17): start=time.time() result, num_candidates = query(corpus[35817,:], model, k=10, max_search_radius=max_search_radius) end=time.time() query_time = end-start print 'Radius:', max_search_radius print result.join(wiki[['id', 'name']], on='id').sort('distance') average_distance_from_query = result['distance'][1:].mean() max_distance_from_query = result['distance'][1:].max() min_distance_from_query = result['distance'][1:].min() num_candidates_history.append(num_candidates) query_time_history.append(query_time) average_distance_from_query_history.append(average_distance_from_query) max_distance_from_query_history.append(max_distance_from_query) min_distance_from_query_history.append(min_distance_from_query) ``` Notice that the top 10 query results become more relevant as the search radius grows. 
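The timing pattern in the loop above (wall-clock time taken around each query call) can be factored into a small helper. This is a generic Python 3 sketch, not part of the original notebook; `timed` is a hypothetical name.

```python
import time

def timed(fn, *args, **kwargs):
    """Call fn(*args, **kwargs) and return (result, elapsed_seconds)."""
    start = time.perf_counter()  # preferable to time.time() for short intervals
    result = fn(*args, **kwargs)
    return result, time.perf_counter() - start

# Usage with any callable, e.g. a stand-in for query():
result, elapsed = timed(sorted, [3, 1, 2])
print(result, elapsed)
```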
Let's plot the three variables:

```
plt.figure(figsize=(7,4.5))
plt.plot(num_candidates_history, linewidth=4)
plt.xlabel('Search radius')
plt.ylabel('# of documents searched')
plt.rcParams.update({'font.size':16})
plt.tight_layout()

plt.figure(figsize=(7,4.5))
plt.plot(query_time_history, linewidth=4)
plt.xlabel('Search radius')
plt.ylabel('Query time (seconds)')
plt.rcParams.update({'font.size':16})
plt.tight_layout()

plt.figure(figsize=(7,4.5))
plt.plot(average_distance_from_query_history, linewidth=4, label='Average of 10 neighbors')
plt.plot(max_distance_from_query_history, linewidth=4, label='Farthest of 10 neighbors')
plt.plot(min_distance_from_query_history, linewidth=4, label='Closest of 10 neighbors')
plt.xlabel('Search radius')
plt.ylabel('Cosine distance of neighbors')
plt.legend(loc='best', prop={'size':15})
plt.rcParams.update({'font.size':16})
plt.tight_layout()
```

Some observations:

* As we increase the search radius, we find more neighbors that are a smaller distance away.
* With an increased search radius comes a greater number of documents that have to be searched. Query time is higher as a consequence.
* With a sufficiently high search radius, the results of LSH begin to resemble the results of brute-force search.

**Quiz Question**. What was the smallest search radius that yielded the correct nearest neighbor, namely Joe Biden?

**Quiz Question**. Suppose our goal was to produce 10 approximate nearest neighbors whose average distance from the query document is within 0.01 of the average for the true 10 nearest neighbors. For Barack Obama, the true 10 nearest neighbors are on average about 0.77 away. What was the smallest search radius for Barack Obama that produced an average distance of 0.78 or better?

```
print "radius = 2"
print "10"
average_distance_from_query_history
```

## Quality metrics for neighbors

The above analysis is limited by the fact that it was run with a single query, namely Barack Obama.
We should repeat the analysis for the entire dataset. Iterating over all documents would take a long time, so let us randomly choose 10 documents for our analysis.

For each document, we first compute the true 25 nearest neighbors, and then run LSH multiple times with different search radii. We look at two metrics:

* Precision@10: How many of the 10 neighbors given by LSH are among the true 25 nearest neighbors?
* Average cosine distance of the neighbors from the query

```
def brute_force_query(vec, data, k):
    num_data_points = data.shape[0]

    # Compute distances for ALL data points in training set
    nearest_neighbors = graphlab.SFrame({'id':range(num_data_points)})
    nearest_neighbors['distance'] = pairwise_distances(data, vec, metric='cosine').flatten()

    return nearest_neighbors.topk('distance', k, reverse=True)
```

The following cell will run LSH with multiple search radii and compute the quality metrics for each run. Allow a few minutes to complete.

```
max_radius = 17
precision = {i:[] for i in xrange(max_radius)}
average_distance = {i:[] for i in xrange(max_radius)}
query_time = {i:[] for i in xrange(max_radius)}

np.random.seed(0)
num_queries = 10
for i, ix in enumerate(np.random.choice(corpus.shape[0], num_queries, replace=False)):
    print('%s / %s' % (i, num_queries))
    ground_truth = set(brute_force_query(corpus[ix,:], corpus, k=25)['id'])
    # Get the set of 25 true nearest neighbors

    for r in xrange(1,max_radius):
        start = time.time()
        result, num_candidates = query(corpus[ix,:], model, k=10, max_search_radius=r)
        end = time.time()

        query_time[r].append(end-start)
        # precision = (# of neighbors both in result and ground_truth)/10.0
        precision[r].append(len(set(result['id']) & ground_truth)/10.0)
        average_distance[r].append(result['distance'][1:].mean())

plt.figure(figsize=(7,4.5))
plt.plot(range(1,17), [np.mean(average_distance[i]) for i in xrange(1,17)], linewidth=4,
         label='Average over 10 neighbors')
plt.xlabel('Search radius')
plt.ylabel('Cosine distance')
plt.legend(loc='best', prop={'size':15})
plt.rcParams.update({'font.size':16})
plt.tight_layout()

plt.figure(figsize=(7,4.5))
plt.plot(range(1,17), [np.mean(precision[i]) for i in xrange(1,17)], linewidth=4, label='Precision@10')
plt.xlabel('Search radius')
plt.ylabel('Precision')
plt.legend(loc='best', prop={'size':15})
plt.rcParams.update({'font.size':16})
plt.tight_layout()

plt.figure(figsize=(7,4.5))
plt.plot(range(1,17), [np.mean(query_time[i]) for i in xrange(1,17)], linewidth=4, label='Query time')
plt.xlabel('Search radius')
plt.ylabel('Query time (seconds)')
plt.legend(loc='best', prop={'size':15})
plt.rcParams.update({'font.size':16})
plt.tight_layout()
```

The observations for Barack Obama generalize to the entire dataset.

## Effect of number of random vectors

Let us now turn our focus to the remaining parameter: the number of random vectors. We run LSH with different numbers of random vectors, ranging from 5 to 20. We fix the search radius to 3. Allow a few minutes for the following cell to complete.
```
precision = {i:[] for i in xrange(5,20)}
average_distance = {i:[] for i in xrange(5,20)}
query_time = {i:[] for i in xrange(5,20)}
num_candidates_history = {i:[] for i in xrange(5,20)}
ground_truth = {}

np.random.seed(0)
num_queries = 10
docs = np.random.choice(corpus.shape[0], num_queries, replace=False)

for i, ix in enumerate(docs):
    ground_truth[ix] = set(brute_force_query(corpus[ix,:], corpus, k=25)['id'])
    # Get the set of 25 true nearest neighbors

for num_vector in xrange(5,20):
    print('num_vector = %s' % (num_vector))
    model = train_lsh(corpus, num_vector, seed=143)

    for i, ix in enumerate(docs):
        start = time.time()
        result, num_candidates = query(corpus[ix,:], model, k=10, max_search_radius=3)
        end = time.time()

        query_time[num_vector].append(end-start)
        precision[num_vector].append(len(set(result['id']) & ground_truth[ix])/10.0)
        average_distance[num_vector].append(result['distance'][1:].mean())
        num_candidates_history[num_vector].append(num_candidates)

plt.figure(figsize=(7,4.5))
plt.plot(range(5,20), [np.mean(average_distance[i]) for i in xrange(5,20)], linewidth=4,
         label='Average over 10 neighbors')
plt.xlabel('# of random vectors')
plt.ylabel('Cosine distance')
plt.legend(loc='best', prop={'size':15})
plt.rcParams.update({'font.size':16})
plt.tight_layout()

plt.figure(figsize=(7,4.5))
plt.plot(range(5,20), [np.mean(precision[i]) for i in xrange(5,20)], linewidth=4, label='Precision@10')
plt.xlabel('# of random vectors')
plt.ylabel('Precision')
plt.legend(loc='best', prop={'size':15})
plt.rcParams.update({'font.size':16})
plt.tight_layout()

plt.figure(figsize=(7,4.5))
plt.plot(range(5,20), [np.mean(query_time[i]) for i in xrange(5,20)], linewidth=4, label='Query time (seconds)')
plt.xlabel('# of random vectors')
plt.ylabel('Query time (seconds)')
plt.legend(loc='best', prop={'size':15})
plt.rcParams.update({'font.size':16})
plt.tight_layout()

plt.figure(figsize=(7,4.5))
plt.plot(range(5,20), [np.mean(num_candidates_history[i]) for i in xrange(5,20)],
         linewidth=4, label='# of documents searched')
plt.xlabel('# of random vectors')
plt.ylabel('# of documents searched')
plt.legend(loc='best', prop={'size':15})
plt.rcParams.update({'font.size':16})
plt.tight_layout()
```

We see a similar trade-off between quality and performance: as the number of random vectors increases, the query time goes down because each bin contains fewer documents on average, but the neighbors are on average placed farther from the query. On the other hand, with a small enough number of random vectors, LSH becomes very similar to brute-force search: many documents fall into a single bin, so searching the query bin alone already covers much of the corpus; including neighboring bins might then amount to searching all documents, just as in the brute-force approach.
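Two quantities from these experiments are easy to sanity-check in isolation. With `d` random vectors there are `2**d` bins in total, and a radius-`r` search probes `sum(C(d, i) for i <= r)` of them; Precision@10 is just a set intersection divided by 10. A Python 3 sketch, independent of the GraphLab code above (the function names are ours):

```python
from math import comb

def bins_within_radius(num_vector, radius):
    """Number of bins probed when flipping up to `radius` of the bits."""
    return sum(comb(num_vector, i) for i in range(radius + 1))

# With 16 random vectors (2**16 = 65536 bins total):
for r in range(4):
    print(r, bins_within_radius(16, r))  # -> 1, 17, 137, 697

def precision_at_k(approx_ids, true_ids, k=10):
    """Fraction of the first k approximate neighbors that are true neighbors."""
    return len(set(approx_ids[:k]) & set(true_ids)) / k

print(precision_at_k(list(range(10)), [0, 1, 2, 99]))  # -> 0.3
```

The rapid growth of `bins_within_radius` explains why query time climbs so quickly with the search radius in the plots above.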
<a href="https://colab.research.google.com/github/gdg-ml-team/ioExtended/blob/master/Lab_2.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>

```
!pip install -q tensorflow_hub

from __future__ import absolute_import, division, print_function

import matplotlib.pylab as plt
import tensorflow as tf
import tensorflow_hub as hub
from tensorflow.keras import layers

tf.VERSION

data_root = tf.keras.utils.get_file(
  'flower_photos','https://storage.googleapis.com/download.tensorflow.org/example_images/flower_photos.tgz',
   untar=True)

image_generator = tf.keras.preprocessing.image.ImageDataGenerator(rescale=1/255)
image_data = image_generator.flow_from_directory(str(data_root))

for image_batch,label_batch in image_data:
  print("Image batch shape: ", image_batch.shape)
  print("Label batch shape: ", label_batch.shape)
  break

classifier_url = "https://tfhub.dev/google/imagenet/mobilenet_v2_100_224/classification/2" #@param {type:"string"}

def classifier(x):
  classifier_module = hub.Module(classifier_url)
  return classifier_module(x)

IMAGE_SIZE = hub.get_expected_image_size(hub.Module(classifier_url))

classifier_layer = layers.Lambda(classifier, input_shape = IMAGE_SIZE+[3])
classifier_model = tf.keras.Sequential([classifier_layer])
classifier_model.summary()

image_data = image_generator.flow_from_directory(str(data_root), target_size=IMAGE_SIZE)

for image_batch,label_batch in image_data:
  print("Image batch shape: ", image_batch.shape)
  print("Label batch shape: ", label_batch.shape)
  break

import tensorflow.keras.backend as K
sess = K.get_session()
init = tf.global_variables_initializer()
sess.run(init)

import numpy as np
import PIL.Image as Image

grace_hopper = tf.keras.utils.get_file('image.jpg','https://storage.googleapis.com/download.tensorflow.org/example_images/grace_hopper.jpg')
grace_hopper = Image.open(grace_hopper).resize(IMAGE_SIZE)
grace_hopper

grace_hopper = np.array(grace_hopper)/255.0
grace_hopper.shape
result = classifier_model.predict(grace_hopper[np.newaxis, ...]) result.shape predicted_class = np.argmax(result[0], axis=-1) predicted_class labels_path = tf.keras.utils.get_file('ImageNetLabels.txt','https://storage.googleapis.com/download.tensorflow.org/data/ImageNetLabels.txt') imagenet_labels = np.array(open(labels_path).read().splitlines()) plt.imshow(grace_hopper) plt.axis('off') predicted_class_name = imagenet_labels[predicted_class] _ = plt.title("Prediction: " + predicted_class_name) ```
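The final cells follow a generic pattern: take the model's output scores, argmax over the class axis, and look the winning index up in a label list. A minimal NumPy sketch with made-up scores (the class names and values here are illustrative, not real MobileNet output):

```python
import numpy as np

labels = np.array(['daisy', 'dandelion', 'rose', 'sunflower', 'tulip'])
scores = np.array([0.1, 2.3, 0.2, 4.1, 1.0])  # hypothetical logits for one image

predicted_class = int(np.argmax(scores))       # index of the highest score
print(labels[predicted_class])                 # -> sunflower
```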
# e-magyar analysis

---

(2021-04-16) Mittelholcz Iván

## 1. Using e-magyar

The text to be analyzed:

```
!cat orkeny.txt
```

Download the latest version of e-magyar:

```
!docker pull mtaril/emtsv:latest
```

Analyze *orkeny.txt* and write the result to the *orkeny.tsv* file:

```
!docker run --rm -i mtaril/emtsv:latest tok,morph,pos,ner,conv-morph,dep <orkeny.txt >orkeny.tsv
```

Explanation:

- `!docker run --rm -i mtaril/emtsv:latest`: runs *e-magyar*
- `tok,morph,pos,ner`: the list of modules used (the command above also runs `conv-morph` and `dep`)
  - `tok`: tokenization
  - `morph`: morphological analysis
  - `pos`: part-of-speech disambiguation
  - `ner`: named-entity recognition
- `<orkeny.txt`: reads the text to be analyzed from the *orkeny.txt* file.
- `>orkeny.tsv`: writes the analysis to the *orkeny.tsv* file.

## 2. Reading the analysis into a *pandas DataFrame*

New parameters used when reading the TSV file:

- `dtype=str`: pandas usually interprets string-valued cells as strings anyway, but to be safe it doesn't hurt to request this explicitly.
- `keep_default_na=False`: if set to *False*, empty strings are kept as empty strings instead of being interpreted as *NaN*. This is needed to read the *wsafter* column correctly.
- `skip_blank_lines=False`: by default, *pandas* skips blank lines. e-magyar, however, uses blank lines to delimit sentences, so we have to tell *pandas* not to drop them.

For details, see the [documentation](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.read_csv.html) of `pd.read_csv()`.

```
import pandas as pd

df = pd.read_csv('orkeny.tsv', sep='\t', dtype=str, keep_default_na=False, skip_blank_lines=False)
df.head(50)
```

Columns:

- *form*: the output of the tokenizer module (*tok*). Contains the tokens (word forms, punctuation marks) found in the text.
- *wsafter*: also the tokenizer's output. Contains the *whitespace* characters following each token.
- *anas*: the output of the morphological analyzer (*morph*). Contains the list of possible morphological analyses within a pair of square brackets. The morphological codes used are documented [here](https://e-magyar.hu/hu/textmodules/emmorph_codelist).
- *lemma*: the output of the part-of-speech disambiguator (*pos*). Contains the lemma belonging to the most probable morphological analysis.
- *xpostag*: also the output of the part-of-speech disambiguator (*pos*). Contains the most probable morphological analysis.
- *NER-BIO*: the output of the named-entity recognizer module (*ner*). Documented [here](https://e-magyar.hu/hu/textmodules/emner).

## 3. Analyses

### 3.1. Use cases

#### Tasks for which looking at one row at a time is enough

- Filtering for certain POS tags, e.g. finding past-tense verbs.
- Past-tense occurrences of a given set of lemmas.
- Taking several morphological features into account: e.g. past-tense occurrences of a given lemma set in first person singular or first person plural.

In the end these should be counted: in what proportion do these forms occur relative to the total token count, the word count, or the total number of verbs?

#### Tasks that require looking at several rows

- Is there a personal pronoun next to the verb? E.g. "éldegéltem" vs. "én éldegéltem".
- Does a noun have a modifier?
- Does a verb have an adverb?
- Clause-level analysis: finding clauses that contain a conjunction but no past-tense verb form.

These again should be turned into ratios: of all nouns, how many have a modifier; of all verbs, how many have an adverb.

#### Tasks that require modifying the original text

- Searching for potentially multi-word expressions ([emterm](https://github.com/dlt-rilmta/emterm)!).
- Writing analysis results back into the text XML-style, e.g. <érzelmi_kifejezés>...</érzelmi_kifejezés> (`érzelmi_kifejezés` = 'emotional expression')

### 3.2.
Solving single-row tasks

```
# proportion of past-tense verbs
def is_not_punct(row):
    pos = row['xpostag']
    return not pos.startswith('[Punct]')

def is_verb(row):
    pos = row['xpostag']
    return pos.startswith('[/V]')

def is_past_verb(row):
    pos = row['xpostag']
    return pos.startswith('[/V][Pst.')

mask0 = df.apply(is_not_punct, axis=1)
mask1 = df.apply(is_verb, axis=1)
mask2 = df.apply(is_past_verb, axis=1)

count_word = len(df[mask0])
count_verb = len(df[mask1])
count_past_verb = len(df[mask2])

print('past-tense verbs / all tokens: ', count_past_verb/len(df))
print('past-tense verbs / all words: ', count_past_verb/count_word)
print('past-tense verbs / all verbs: ', count_past_verb/count_verb)

df[mask2]

# third person singular verbs
def is_3sg_verb(row):
    pos = row['xpostag']
    return pos.startswith('[/V]') and '3Sg' in pos

mask3 = df.apply(is_3sg_verb, axis=1)
count_3sg_verb = len(df[mask3])

print('3sg verbs / all tokens: ', count_3sg_verb/len(df))
print('3sg verbs / all words: ', count_3sg_verb/count_word)
print('3sg verbs / all verbs: ', count_3sg_verb/count_verb)

df[mask3]

# searching for a given set of lemmas
def is_lemma_in_set(row):
    lemma = row['lemma']
    lemmaset = {'iszik', 'van'}
    pos = row['xpostag']
    is_in_lemmaset = lemma in lemmaset
    is_3sg = '3Sg' in pos
    return is_in_lemmaset and is_3sg

mask4 = df.apply(is_lemma_in_set, axis=1)
count_lemmaset = len(df[mask4])

print('verbs in the set / all tokens: ', count_lemmaset/len(df))
print('verbs in the set / all words: ', count_lemmaset/count_word)
print('verbs in the set / all verbs: ', count_lemmaset/count_verb)

df[mask4]
```

### 3.3. Solving multi-row tasks

Algorithm: if we only need to look one element back, it is worth storing in a helper variable the value of the loop's previous element, or the value of a condition computed from it.

```
# Warm-up 1: find the fruits that start with a vowel.
# expected result: ['alma', 'eper']
l = ['alma', 'barack', 'citrom', 'dinnye', 'eper', 'füge']
result = []
for word in l:
    if word[0] in {'a', 'e', 'i', 'o', 'u'}:
        result.append(word)
print(result)

# Warm-up 2: walk through a list so that next to the current element we also print the previous one.
# In the first line of output the previous element will be missing.
l = ['alma', 'barack', 'citrom', 'dinnye', 'eper', 'füge']
previous = ''
for current in l:
    print(previous, current)
    previous = current  # at the end of the loop body, always update the previous element with the current one

# Warm-up 3: find the fruits that come after a fruit starting with a vowel.
# expected result: ['barack', 'füge']
# In the helper variable we don't store the previous element itself, only whether it started with a vowel.
l = ['alma', 'barack', 'citrom', 'dinnye', 'eper', 'füge']
result = []
previous_startswith_vowel = False
for current in l:
    if previous_startswith_vowel:
        result.append(current)
    previous_startswith_vowel = current[0] in {'a', 'e', 'i', 'o', 'u'}
print(result)
```

To apply the above to a *DataFrame* as well, we need to be able to iterate over the rows of the *DataFrame*. We can do this with the [`.iterrows()`](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.iterrows.html) method. The method returns every row as a *tuple*, whose first element is the *index* (row number) and whose second element is the row itself, as a [*Series*](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.html#pandas.Series).

```
# Is there an article before the noun?
def is_noun(row):
    return row['xpostag'].startswith('[/N')

mask5 = df.apply(is_noun, axis=1)

mask6 = []
is_prev_article = False
for index, row in df.iterrows():
    is_current_noun = row['xpostag'].startswith('[/N')
    mask6.append(is_current_noun and is_prev_article)
    is_prev_article = row['xpostag'] in {'[/Det|Art.Def]', '[/Det|Art.NDef]'}

print('nouns with article / all nouns: ', len(df[mask6])/len(df[mask5]))

#df['noun_with_article'] = mask5
#df.head(50)
df[mask6]
```

Algorithm: if we need to see not only the neighboring element but more distant elements as well, it is worth walking through the list with a *window* (*frame*).

```
# Warm-up 1: walk through the list with a window holding 3 elements.
l = ['alma', 'barack', 'citrom', 'dinnye', 'eper', 'füge']
length = 3
frame = []
for i in l:
    frame.append(i)
    if len(frame) < length:  # the frame is still too short
        continue
    if len(frame) > length:  # the frame is already too long
        frame.pop(0)
    print(frame)

# Warm-up 2: a list padded with dummy elements.
# If we inspect the first element of each frame (because we want to know whether something interesting follows it),
# then with the approach above 'eper' and 'füge' never get to be the first element.
# If we inspect the last element of each frame (because we want to know whether something interesting precedes it),
# then with the approach above 'alma' and 'barack' never get to be the last element.
# In the first case we have to pad the end of the list with dummy elements (None),
# in the second case we have to insert dummy elements at the beginning of the list.
l = ['alma', 'barack', 'citrom', 'dinnye', 'eper', 'füge']

# dummy elements at the end of the list
length = 3
frame = []
for i in l + [None]*(length-1):
    frame.append(i)
    if len(frame) < length:  # the frame is still too short
        continue
    if len(frame) > length:  # the frame is already too long
        frame.pop(0)
    print(frame)

print('--------')

# dummy elements at the beginning of the list
length = 3
frame = []
for i in [None]*(length-1) + l:
    frame.append(i)
    if len(frame) < length:  # the frame is still too short
        continue
    if len(frame) > length:  # the frame is already too long
        frame.pop(0)
    print(frame)

# Warm-up 3: find the elements after which the first or second following element starts with a vowel.
# We look for elements with a given property to the right of the current element --> we pad the list with dummy elements on the right.
l = ['alma', 'barack', 'citrom', 'dinnye', 'eper', 'füge']
length = 3
frame = []
vowels = {'a', 'e', 'i', 'o', 'u'}
result = []
for i in l + [None] * (length -1):
    frame.append(i)
    if len(frame) < length:
        continue
    if len(frame) > length:
        frame.pop(0)
    for x in frame[1:]:
        if x is None:  # if we run into a None, skip it
            continue
        if x[0] in vowels:
            result.append(frame[0])
print(result)

# Task: find the verbs that follow a preverb (verbal prefix).
# From the results only the pair ('El', 'patkoltak') will be correct. Refinement later.
length = 10
frame = []
result = []
mylist = [row for index, row in df.iterrows()] + [None] * (length - 1)
# walk through the rows padded with dummy elements
for row in mylist:
    # update the frame
    frame.append(row)
    if len(frame) < length:
        continue
    if len(frame) > length:
        frame.pop(0)
    # is the first element a preverb? If so, check whether any of the following words is a verb
    if frame[0]['xpostag'] == '[/Prev]':
        for frow in frame[1:]:  # we call the rows inside the frame frow.
            if frow is None:
                continue
            if frow['xpostag'].startswith('[/V]'):
                # we found a verb: add the preverb + verb to the results
                result.append((frame[0]['form'], frow['form']))
                break  # we have the verb, stop searching
print(result)

# Refinement: don't look for a verb past a sentence boundary, since it certainly won't belong to the previous sentence's preverb.
# The sentence boundary is marked by an empty line in the TSV. In the dataframe this is a row in which every cell is an empty string.
# (It's enough to check the "form" cell; it cannot otherwise be empty.)
# From the results the pair ('meg', 'akadt') is still wrong. That can be filtered out by making the frame shorter.
length = 10
frame = []
result = []
mylist = [row for index, row in df.iterrows()] + [None] * (length - 1)
# walk through the rows padded with dummy elements
for row in mylist:
    # update the frame
    frame.append(row)
    if len(frame) < length:
        continue
    if len(frame) > length:
        frame.pop(0)
    # is the first element a preverb? If so, check whether any of the following words is a verb
    if frame[0]['xpostag'] == '[/Prev]':
        for frow in frame[1:]:
            if frow is None:
                continue
            # Sentence-boundary check: if the form contains nothing, a new sentence follows.
            if len(frow['form']) == 0:
                break
            if frow['xpostag'].startswith('[/V]'):
                # we found a verb: add the preverb + verb to the results
                result.append((frame[0]['form'], frow['form']))
                break  # we have the verb, stop searching
print(result)

# Refinement: the task is the same as before, but now we create a new column in the dataframe.
# The new column contains a hyphen by default, but for preverbs we write in the presumed verb.
length = 3
frame = []
result = []
mylist = [row for i, row in df.iterrows()] + [None] * (length - 1)
for row in mylist:
    frame.append(row)
    if len(frame) < length:
        continue
    if len(frame) > length:
        frame.pop(0)
    res = '-'
    if frame[0]['xpostag'] == '[/Prev]':
        for frow in frame[1:]:
            if frow is None:
                continue
            if len(frow['form']) == 0:
                continue
            if frow['xpostag'].startswith('[/V]'):
                res = frow['lemma']
                break
    result.append(res)

df['preverb'] = result

# print the part in question
df.iloc[120:128, :]
```

### 3.4. Writing the analysis back into the original text

```
# printing the original text:
# - for each row, glue the 'form' and 'wsafter' cells together and append them to the result list
# - join the elements of the result list into a single string with the join method
# - replace the '\\n' sequences in the text with real line breaks
text = []
for index, row in df.iterrows():
    text.append(row['form'] + row['wsafter'])
text = ''.join(text)
text = text.replace('\\n', '\n')
print(text)

# Task: XML-ify the NER-BIO column.
# Here too we glue the form and wsafter cells together and append them to a list, but we also look at the ner cells:
# - if a ner cell starts with B (e.g. B-ORG), we open an <ORG> tag and only then write the form cell
# - if a ner cell starts with E (e.g. E-ORG), we close the tag (</ORG>) after the form cell
# - there is no example of single-element entities (e.g. 1-ORG) in the text, but we handle that case too
text = []
for index, row in df.iterrows():
    form = row['form']
    ws = row['wsafter']
    ner = row['NER-BIO']
    if ner.startswith('B'):
        # a named entity starts, open an xml tag:
        form = f'<{ner[2:]}>{form}'
    elif ner.startswith('E'):
        # a named entity ends, close the xml tag:
        form = f'{form}</{ner[2:]}>'
    elif ner.startswith('1'):
        # single-element named entity, wrap it in an xml tag:
        form = f'<{ner[2:]}>{form}</{ner[2:]}>'
    text.append(form+ws)
text = ''.join(text)
text = text.replace('\\n', '\n')
print(text)
```
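The BIO-to-XML logic of the last cell can also be packaged as a standalone function, which makes it easy to test on a hand-made example. The token triples below are made up for illustration; the function name is ours.

```python
def bio_to_xml(tokens):
    """tokens: list of (form, wsafter, ner) triples, where ner is a
    BIO-style tag such as 'B-ORG', 'E-ORG', '1-PER', or 'O'."""
    out = []
    for form, ws, ner in tokens:
        if ner.startswith('B'):       # entity starts: open a tag
            form = f'<{ner[2:]}>{form}'
        elif ner.startswith('E'):     # entity ends: close the tag
            form = f'{form}</{ner[2:]}>'
        elif ner.startswith('1'):     # single-token entity: wrap it
            form = f'<{ner[2:]}>{form}</{ner[2:]}>'
        out.append(form + ws)
    return ''.join(out)

print(bio_to_xml([('Kedves', ' ', 'O'),
                  ('Magyar', ' ', 'B-ORG'),
                  ('Posta', '', 'E-ORG'),
                  ('!', '\n', 'O')]))
# -> Kedves <ORG>Magyar Posta</ORG>!
```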
<a href="https://practicalai.me"><img src="https://raw.githubusercontent.com/practicalAI/images/master/images/rounded_logo.png" width="100" align="left" hspace="20px" vspace="20px"></a>

<img src="https://raw.githubusercontent.com/practicalAI/images/master/images/02_Numpy/numpy.png" width="200" vspace="30px" align="right">

<div align="left">
<h1>NumPy</h1>

In this lesson we will learn the basics of numerical analysis using the NumPy package.

</div>

<table align="center">
  <td>
    <img src="https://raw.githubusercontent.com/practicalAI/images/master/images/rounded_logo.png" width="25"><a target="_blank" href="https://practicalai.me"> View on practicalAI</a>
  </td>
  <td>
    <img src="https://raw.githubusercontent.com/practicalAI/images/master/images/colab_logo.png" width="25"><a target="_blank" href="https://colab.research.google.com/github/practicalAI/practicalAI/blob/master/notebooks/02_NumPy.ipynb"> Run in Google Colab</a>
  </td>
  <td>
    <img src="https://raw.githubusercontent.com/practicalAI/images/master/images/github_logo.png" width="22"><a target="_blank" href="https://github.com/practicalAI/practicalAI/blob/master/notebooks/02_NumPy.ipynb"> View code on GitHub</a>
  </td>
</table>

# Set up

```
import numpy as np

# Set seed for reproducibility
np.random.seed(seed=1234)
```

# Basics

Let's take a look at how to create tensors with NumPy.
* **Tensor**: collection of values <div align="left"> <img src="https://raw.githubusercontent.com/practicalAI/images/master/images/02_Numpy/tensors.png" width="650"> </div> ``` # Scalar x = np.array(6) # scalar print ("x: ", x) # Number of dimensions print ("x ndim: ", x.ndim) # Dimensions print ("x shape:", x.shape) # Size of elements print ("x size: ", x.size) # Data type print ("x dtype: ", x.dtype) # Vector x = np.array([1.3 , 2.2 , 1.7]) print ("x: ", x) print ("x ndim: ", x.ndim) print ("x shape:", x.shape) print ("x size: ", x.size) print ("x dtype: ", x.dtype) # notice the float datatype # Matrix x = np.array([[1,2], [3,4]]) print ("x:\n", x) print ("x ndim: ", x.ndim) print ("x shape:", x.shape) print ("x size: ", x.size) print ("x dtype: ", x.dtype) # 3-D Tensor x = np.array([[[1,2],[3,4]],[[5,6],[7,8]]]) print ("x:\n", x) print ("x ndim: ", x.ndim) print ("x shape:", x.shape) print ("x size: ", x.size) print ("x dtype: ", x.dtype) ``` NumPy also comes with several functions that allow us to create tensors quickly. ``` # Functions print ("np.zeros((2,2)):\n", np.zeros((2,2))) print ("np.ones((2,2)):\n", np.ones((2,2))) print ("np.eye((2)):\n", np.eye((2))) # identity matrix print ("np.random.random((2,2)):\n", np.random.random((2,2))) ``` # Indexing Keep in mind that when indexing the row and column, indices start at 0. And like indexing with lists, we can use negative indices as well (where -1 is the last item). 
<div align="left">
<img src="https://raw.githubusercontent.com/practicalAI/images/master/images/02_Numpy/indexing.png" width="300">
</div>

```
# Indexing
x = np.array([1, 2, 3])
print ("x: ", x)
print ("x[0]: ", x[0])
x[0] = 0
print ("x: ", x)

# Slicing
x = np.array([[1,2,3,4], [5,6,7,8], [9,10,11,12]])
print (x)
print ("x column 1: ", x[:, 1])
print ("x row 0: ", x[0, :])
print ("x rows 0,1 & cols 1,2: \n", x[0:2, 1:3])

# Integer array indexing
print (x)
rows_to_get = np.array([0, 1, 2])
print ("rows_to_get: ", rows_to_get)
cols_to_get = np.array([0, 2, 1])
print ("cols_to_get: ", cols_to_get)
# Combine sequences above to get values to get
print ("indexed values: ", x[rows_to_get, cols_to_get]) # (0, 0), (1, 2), (2, 1)

# Boolean array indexing
x = np.array([[1, 2], [3, 4], [5, 6]])
print ("x:\n", x)
print ("x > 2:\n", x > 2)
print ("x[x > 2]:\n", x[x > 2])
```

# Arithmetic

```
# Basic math
x = np.array([[1,2], [3,4]], dtype=np.float64)
y = np.array([[1,2], [3,4]], dtype=np.float64)
print ("x + y:\n", np.add(x, y)) # or x + y
print ("x - y:\n", np.subtract(x, y)) # or x - y
print ("x * y:\n", np.multiply(x, y)) # or x * y
```

### Dot product

One of the most common NumPy operations we’ll use in machine learning is matrix multiplication using the dot product. We take the rows of our first matrix (2) and the columns of our second matrix (2) to determine the dot product, giving us an output of `[2 X 2]`. The only requirement is that the inside dimensions match; in this case the first matrix has 3 columns and the second matrix has 3 rows.

<div align="left">
<img src="https://raw.githubusercontent.com/practicalAI/images/master/images/02_Numpy/dot.gif" width="450">
</div>

```
# Dot product
a = np.array([[1,2,3], [4,5,6]], dtype=np.float64) # we can specify dtype
b = np.array([[7,8], [9,10], [11, 12]], dtype=np.float64)
c = a.dot(b)
print (f"{a.shape} · {b.shape} = {c.shape}")
print (c)
```

### Axis operations

We can also do operations across a specific axis.
<div align="left">
<img src="https://raw.githubusercontent.com/practicalAI/images/master/images/02_Numpy/axis.gif" width="450">
</div>

```
# Sum across a dimension
x = np.array([[1,2],[3,4]])
print (x)
print ("sum all: ", np.sum(x)) # adds all elements
print ("sum axis=0: ", np.sum(x, axis=0)) # sum across rows
print ("sum axis=1: ", np.sum(x, axis=1)) # sum across columns

# Min/max
x = np.array([[1,2,3], [4,5,6]])
print ("min: ", x.min())
print ("max: ", x.max())
print ("min axis=0: ", x.min(axis=0))
print ("min axis=1: ", x.min(axis=1))
```

### Broadcasting

Here, we’re adding a vector with a scalar. Their dimensions aren’t compatible as is, but how does NumPy still give us the right result? This is where broadcasting comes in. The scalar is *broadcast* across the vector so that they have compatible shapes.

<div align="left">
<img src="https://raw.githubusercontent.com/practicalAI/images/master/images/02_Numpy/broadcasting.png" width="300">
</div>

```
# Broadcasting
x = np.array([1,2]) # vector
y = np.array(3) # scalar
z = x + y
print ("z:\n", z)
```

# Advanced

### Transposing

We often need to change the dimensions of our tensors for operations like the dot product. If we need to switch two dimensions, we can transpose the tensor.

<div align="left">
<img src="https://raw.githubusercontent.com/practicalAI/images/master/images/02_Numpy/transpose.png" width="400">
</div>

```
# Transposing
x = np.array([[1,2,3], [4,5,6]])
print ("x:\n", x)
print ("x.shape: ", x.shape)
y = np.transpose(x, (1,0)) # flip dimensions at index 0 and 1
print ("y:\n", y)
print ("y.shape: ", y.shape)
```

### Reshaping

Sometimes, we'll need to alter the dimensions of the matrix. Reshaping allows us to transform a tensor into different permissible shapes; our reshaped tensor has the same total number of values (1X6 = 2X3). We can also use `-1` on a dimension and NumPy will infer the dimension based on our input tensor.
The way reshape works is by looking at each dimension of the new tensor and separating our original tensor into that many units. So here the dimension at index 0 of the new tensor is 2, so we divide our original tensor into 2 units, each of which has 3 values.

<div align="left"> <img src="https://raw.githubusercontent.com/practicalAI/images/master/images/02_Numpy/reshape.png" width="450"> </div>

```
# Reshaping
x = np.array([[1,2,3,4,5,6]])
print (x)
print ("x.shape: ", x.shape)
y = np.reshape(x, (2, 3))
print ("y: \n", y)
print ("y.shape: ", y.shape)
z = np.reshape(x, (2, -1))
print ("z: \n", z)
print ("z.shape: ", z.shape)
```

### Unintended reshaping
Though reshaping is very convenient for manipulating tensors, we must be careful of its pitfalls as well. Let's look at the example below. Suppose we have `x`, which has the shape `[2 X 3 X 4]`.
```
[[[ 1  1  1  1]
  [ 2  2  2  2]
  [ 3  3  3  3]]

 [[10 10 10 10]
  [20 20 20 20]
  [30 30 30 30]]]
```
We want to reshape x so that it has shape `[3 X 8]`, which we'll get by moving the dimension at index 0 to become the dimension at index 1 and then combining the last two dimensions. But when we do this, we want our output to look like: ✅
```
[[ 1  1  1  1 10 10 10 10]
 [ 2  2  2  2 20 20 20 20]
 [ 3  3  3  3 30 30 30 30]]
```
and not like: ❌
```
[[ 1  1  1  1  2  2  2  2]
 [ 3  3  3  3 10 10 10 10]
 [20 20 20 20 30 30 30 30]]
```
even though they both have the same shape `[3 X 8]`.
```
x = np.array([[[1, 1, 1, 1],
               [2, 2, 2, 2],
               [3, 3, 3, 3]],
              [[10, 10, 10, 10],
               [20, 20, 20, 20],
               [30, 30, 30, 30]]])
print ("x:\n", x)
print ("x.shape: ", x.shape)
```
When we naively do a reshape, we get the right shape but the values are not what we're looking for.
<div align="left"> <img src="https://raw.githubusercontent.com/practicalAI/images/master/images/02_Numpy/reshape_wrong.png" width="600"> </div>

```
# Unintended reshaping
z_incorrect = np.reshape(x, (x.shape[1], -1))
print ("z_incorrect:\n", z_incorrect)
print ("z_incorrect.shape: ", z_incorrect.shape)
```

Instead, if we transpose the tensor and then do a reshape, we get our desired tensor. Transposing first places the two vectors we want to combine next to each other, and reshape then joins them. Always create a dummy example like this when you're unsure about reshaping. Blindly going by the tensor shape can lead to lots of issues downstream.

<div align="left"> <img src="https://raw.githubusercontent.com/practicalAI/images/master/images/02_Numpy/reshape_right.png" width="600"> </div>

```
# Intended reshaping
y = np.transpose(x, (1,0,2))
print ("y:\n", y)
print ("y.shape: ", y.shape)
z_correct = np.reshape(y, (y.shape[0], -1))
print ("z_correct:\n", z_correct)
print ("z_correct.shape: ", z_correct.shape)
```

### Adding/removing dimensions
We can also easily add and remove dimensions from our tensors; we'll want to do this to make tensors compatible for certain operations.

```
# Adding dimensions
x = np.array([[1,2,3],[4,5,6]])
print ("x:\n", x)
print ("x.shape: ", x.shape)
y = np.expand_dims(x, 1) # expand dim 1
print ("y: \n", y)
print ("y.shape: ", y.shape) # notice extra set of brackets are added

# Removing dimensions
x = np.array([[[1,2,3]],[[4,5,6]]])
print ("x:\n", x)
print ("x.shape: ", x.shape)
y = np.squeeze(x, 1) # squeeze dim 1
print ("y: \n", y)
print ("y.shape: ", y.shape) # notice extra set of brackets are gone
```

# Additional resources
* **NumPy reference manual**: We don't have to memorize anything here, and we will be taking a closer look at NumPy in later lessons. If you want to learn more, check out the [NumPy reference manual](https://docs.scipy.org/doc/numpy-1.15.1/reference/).
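Putting the broadcasting and dimension-manipulation ideas above together, here is a quick self-check sketch (not part of the original lesson): broadcasting isn't limited to scalars, since any pair of shapes whose trailing dimensions are equal or 1 will stretch to a common shape.

```python
import numpy as np

# A (3, 1) column (made with expand_dims) and a length-2 row
# broadcast to a common (3, 2) grid before the elementwise add.
col = np.expand_dims(np.array([0, 10, 20]), 1)  # shape (3, 1)
row = np.array([1, 2])                          # shape (2,)
grid = col + row                                # broadcasts to (3, 2)
print(grid)
# [[ 1  2]
#  [11 12]
#  [21 22]]
```

This is the same rule that handled the scalar case earlier, just applied along both axes at once.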
</div>
---
**This notebook is an exercise in the [Introduction to Machine Learning](https://www.kaggle.com/learn/intro-to-machine-learning) course. You can reference the tutorial at [this link](https://www.kaggle.com/dansbecker/underfitting-and-overfitting).** --- ## Recap You've built your first model, and now it's time to optimize the size of the tree to make better predictions. Run this cell to set up your coding environment where the previous step left off. ``` # Code you have previously used to load data import pandas as pd from sklearn.metrics import mean_absolute_error from sklearn.model_selection import train_test_split from sklearn.tree import DecisionTreeRegressor # Path of the file to read iowa_file_path = '../input/home-data-for-ml-course/train.csv' home_data = pd.read_csv(iowa_file_path) # Create target object and call it y y = home_data.SalePrice # Create X features = ['LotArea', 'YearBuilt', '1stFlrSF', '2ndFlrSF', 'FullBath', 'BedroomAbvGr', 'TotRmsAbvGrd'] X = home_data[features] # Split into validation and training data train_X, val_X, train_y, val_y = train_test_split(X, y, random_state=1) # Specify Model iowa_model = DecisionTreeRegressor(random_state=1) # Fit Model iowa_model.fit(train_X, train_y) # Make validation predictions and calculate mean absolute error val_predictions = iowa_model.predict(val_X) val_mae = mean_absolute_error(val_predictions, val_y) print("Validation MAE: {:,.0f}".format(val_mae)) # Set up code checking from learntools.core import binder binder.bind(globals()) from learntools.machine_learning.ex5 import * print("\nSetup complete") ``` # Exercises You could write the function `get_mae` yourself. For now, we'll supply it. This is the same function you read about in the previous lesson. Just run the cell below. 
```
def get_mae(max_leaf_nodes, train_X, val_X, train_y, val_y):
    model = DecisionTreeRegressor(max_leaf_nodes=max_leaf_nodes, random_state=0)
    model.fit(train_X, train_y)
    preds_val = model.predict(val_X)
    mae = mean_absolute_error(val_y, preds_val)
    return(mae)
```

## Step 1: Compare Different Tree Sizes
Write a loop that tries each candidate value below for *max_leaf_nodes*, calling the *get_mae* function on each one. Store the output in some way that allows you to select the value of `max_leaf_nodes` that gives the most accurate model on your data.

```
candidate_max_leaf_nodes = [5, 25, 50, 100, 250, 500]
# Write loop to find the ideal tree size from candidate_max_leaf_nodes
scores = {}
for max_leaf_nodes in candidate_max_leaf_nodes:
    scores[max_leaf_nodes] = get_mae(max_leaf_nodes, train_X, val_X, train_y, val_y)
    print("Max leaf nodes: %d \t\t Mean Absolute Error: %d" %(max_leaf_nodes, scores[max_leaf_nodes]))

# Store the best value of max_leaf_nodes (it will be either 5, 25, 50, 100, 250 or 500)
best_tree_size = min(scores, key=scores.get)

# Check your answer
step_1.check()

# The lines below will show you a hint or the solution.
# step_1.hint()
# step_1.solution()
```

## Step 2: Fit Model Using All Data
You know the best tree size. If you were going to deploy this model in practice, you would make it even more accurate by using all of the data and keeping that tree size. That is, you don't need to hold out the validation data now that you've made all your modeling decisions.

```
# Build the final model with the optimal tree size found above
final_model = DecisionTreeRegressor(max_leaf_nodes=best_tree_size, random_state=0)

# Fit the final model on all of the data
final_model.fit(X, y)

# Check your answer
step_2.check()

# step_2.hint()
# step_2.solution()
```

You've tuned this model and improved your results. But we are still using Decision Tree models, which are not very sophisticated by modern machine learning standards.
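The candidate sizes above trace the classic underfitting/overfitting curve: very small trees underfit, very large ones memorize the training noise, and validation MAE bottoms out somewhere in between. A small sketch of that curve on a synthetic stand-in dataset (not the Iowa housing data):

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

# Synthetic data: a smooth signal plus noise, so there is a real
# sweet spot between underfitting and overfitting.
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(400, 1))
y = np.sin(X[:, 0]) + rng.normal(scale=0.3, size=400)
train_X, val_X, train_y, val_y = train_test_split(X, y, random_state=1)

maes = {}
for n in [2, 5, 25, 100, 300]:
    model = DecisionTreeRegressor(max_leaf_nodes=n, random_state=0)
    model.fit(train_X, train_y)
    maes[n] = mean_absolute_error(val_y, model.predict(val_X))
    print(n, round(maes[n], 3))

best = min(maes, key=maes.get)
print("best max_leaf_nodes:", best)
```

On data like this, the smallest and largest tree sizes both score worse than an intermediate one, which is exactly the pattern the printed Iowa MAEs show.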
In the next step you will learn to use Random Forests to improve your models even more. # Keep Going You are ready for **[Random Forests](https://www.kaggle.com/dansbecker/random-forests).** --- *Have questions or comments? Visit the [Learn Discussion forum](https://www.kaggle.com/learn-forum/161285) to chat with other Learners.*
---
```
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
sns.set(color_codes=True)

# make inline plots render at higher quality
%config InlineBackend.figure_format = 'svg'

# Set the default style
plt.style.use("seaborn")
```

### Read the data and rename columns

```
df = pd.read_excel("texts_data.xlsx")
df.columns
len(df.columns)
df.drop(['Unnamed: 0'], axis=1, inplace=True)

column_names = ['Timestamp', 'score', 'gender', 'age', 'education', 'country', 'know_ai', 'similar_tests', 'english']
wc_columns = []
for i in range(1,9):
    column_names.append("q" + str(i))
    column_names.append("wc" + str(i))
    wc_columns.append("wc" + str(i))
df.columns = column_names

# convert the score string to an integer
def total_score(string):
    return int(string[0])

# convert a percentage string to a float
def p2f(x):
    return x.str.rstrip('%').astype('float')/100

df['score'] = df['score'].apply(total_score)
df[wc_columns] = df[wc_columns].apply(p2f, axis=1)

# q1-q4: simple questions
# q5-q8: comparison questions
df.head()
```

### Plots

#### 1. Time graph (for all data)

```
df_images = pd.read_excel("images_data.xlsx")
df_sounds = pd.read_excel("sounds_data.xlsx")

df_images['date'] = pd.to_datetime(df_images['Timestamp']).dt.date
df_sounds['date'] = pd.to_datetime(df_sounds['Timestamp']).dt.date
df['date'] = pd.to_datetime(df['Timestamp']).dt.date

df_all = pd.concat([df.date, df_sounds.date, df_images.date], axis=1)
df_all.columns = ['texts', 'sounds', 'images']

# sort by date so the lines follow the time axis
hist1 = df_all.texts.value_counts().sort_index().plot()
hist2 = df_all.sounds.value_counts().sort_index().plot()
hist3 = df_all.images.value_counts().sort_index().plot()
plt.title('Response number by date')
plt.legend(prop={'size': 12}, frameon=True, facecolor='white')
plt.show()
```

#### 2. Age distribution

```
sns.countplot(pd.cut(df['age'], bins=[0,20,30,40,50,100], labels=["<20","20-30","30-40","40-50",">50"]), color="royalblue")
plt.title('Age distribution, texts')
plt.show()
```

#### 3.
Worked on/studied AI

```
fig, ax = plt.subplots(figsize=(5,5.5))
ax.pie(df['know_ai'].value_counts(),explode=(0.05,0),labels=["didn't work on/study AI",'worked on/studied AI'],
       autopct='%1.1f%%', shadow=True, startangle=90)
plt.title("Texts")
plt.show()
```

#### 4. Passed similar tests before

```
fig, ax = plt.subplots(figsize=(5,5.5))
ax.pie(df['similar_tests'].value_counts(),explode=(0.05,0),labels=["didn't pass similar tests before","passed similar tests before"],
       autopct='%1.1f%%', shadow=True, startangle=90)
plt.title("Texts")
plt.show()
```

#### 5. Country of origin pie plot

```
temp_df = df['country'].value_counts()
temp_df2 = temp_df.head(4)
if len(temp_df) > 4:
    temp_df2['Others'] = sum(temp_df[4:])
temp_df2
list(temp_df2.index)

# country of origin pie plot
fig, ax = plt.subplots(figsize=(7,6.5))
ax.pie(temp_df2, autopct='%1.1f%%', shadow=True, startangle=90,
       textprops={'fontsize': 10}, labels = list(temp_df2.index))
plt.title("Country of origin")
plt.show()

# country of origin histogram
hist = sns.countplot(x = 'country', data = df, order = df['country'].value_counts().index, color = "royalblue")
hist.set_xticklabels(hist.get_xticklabels(), rotation=90)
plt.title("Country of origin, texts")
plt.show()
```

#### 6. English level

```
fig, ax = plt.subplots(figsize=(5,5.5))
ax.pie(df['english'].value_counts(),explode=(0.07,0.07,0.07,0.07),labels=["Proficient/Advanced", "Upper-Intermediate/Intermediate", "Elementary/Beginner", "Native"],
       autopct='%1.1f%%', shadow=True, startangle=90)
plt.title("English level, texts")
plt.show()
```

#### 7.
Scores distribution

```
hist = sns.countplot(x = 'score', data = df, color = "royalblue")
hist.set_xticklabels(hist.get_xticklabels(), rotation=90)
plt.title("Scores distribution, texts")
plt.show()
```

******************

#### Map question columns into 0/1

```
# dictionaries for mapping
# q1, q4
d1 = {"Probably AI" : 1, "Definitely AI": 1, "I don't know": 0, "Probably human": 0, "Definitely human": 0}
# q2, q3
d2 = {"Probably AI" : 0, "Definitely AI": 0, "I don't know": 0, "Probably human": 1, "Definitely human": 1}
# q5, q6, q7
d3 = {"Definitely A=AI, B=human": 0, "Probably A=AI, B=human": 0, "I don't know": 0,
      "Definitely A=human, B=AI": 1, "Probably A=human, B=AI": 1}
# q8
d4 = {"Definitely A=AI, B=human": 1, "Probably A=AI, B=human": 1, "I don't know": 0,
      "Definitely A=human, B=AI": 0, "Probably A=human, B=AI": 0}

df_mapped = df.copy()

columns1 = ["q1", "q4"]
for col in columns1:
    df_mapped[col] = df[col].map(d1)

columns2 = ["q2", "q3"]
for col in columns2:
    df_mapped[col] = df[col].map(d2)

columns3 = ["q5", "q6", "q7"]
for col in columns3:
    df_mapped[col] = df[col].map(d3)

columns4 = ["q8"]
for col in columns4:
    df_mapped[col] = df[col].map(d4)

df_mapped.head()
```

****************************

```
simple_questions = ['q1','q2','q3','q4']
comparison_questions = ['q5','q6','q7','q8']
df_mapped['simple_sum'] = df_mapped[simple_questions].sum(axis=1)
df_mapped['comparison_sum'] = df_mapped[comparison_questions].sum(axis=1)
```

#### 8.
Scores distribution, simple questions vs comparison questions

```
simple_average = np.round(np.sum(df_mapped['simple_sum'])/df_mapped.shape[0],2)
comparison_average = np.round(np.sum(df_mapped['comparison_sum'])/df_mapped.shape[0],2)

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10,5))
hist1 = sns.countplot(x = 'simple_sum', data = df_mapped, color = "royalblue", ax = ax1)
ax1.set_xlabel('score')
ax1.set_title('Simple questions, average: ' + str(simple_average))
hist2 = sns.countplot(x = 'comparison_sum', data = df_mapped, color = "royalblue", ax = ax2)
ax2.set_xlabel('score')
ax2.set_title('Comparison questions, average: ' + str(comparison_average))
fig.suptitle("Scores distribution, texts")
plt.show()
```

#### 9. Question average score vs wisdom of the crowd

```
score_columns = [column for column in df_mapped.columns if column.startswith('q')]
score_columns

questions_average = np.sum(df_mapped[score_columns], axis = 0)/df_mapped.shape[0]
questions_average = np.round(questions_average*100,2)
questions_average

wc_to_invert = ['wc1', 'wc4', 'wc8']

# invert wisdom-of-the-crowd columns when needed
df_mapped_inverted = df_mapped.copy()
df_mapped_inverted[wc_to_invert] = 1 - df_mapped[wc_to_invert]

wc_average = np.sum(df_mapped_inverted[wc_columns], axis = 0)/df_mapped.shape[0]
wc_average = np.round(wc_average*100,2)
wc_average

N = 8
ind = np.arange(N)  # the x locations for the groups
width = 0.3         # the width of the bars

fig = plt.figure(figsize=(10,5))
ax = fig.add_subplot(111)
bars_q = ax.bar(ind, questions_average, width, color='r')
bars_wc = ax.bar(ind+width, wc_average, width, color='b')
ax.set_ylabel('%')
ax.set_xticks(ind+width)
ax.set_xticklabels(score_columns)
ax.legend( (bars_q[0], bars_wc[0]), ('Question average', 'W-of-C average') , frameon=True, facecolor='white')
plt.title("Percent of correct answers vs Wisdom-of-the-Crowd score, texts")
plt.show()
```

#### 10.
Scores by English level

```
df_mapped.english.value_counts()

adv_eng_df = df_mapped[df_mapped.english.isin(['Proficient/Advanced', 'Native'])]
not_adv_eng_df = df_mapped[~df_mapped.english.isin(['Proficient/Advanced', 'Native'])]

eng_score_aver = np.round(np.sum(adv_eng_df['score'])/adv_eng_df.shape[0],2)
not_eng_score_aver = np.round(np.sum(not_adv_eng_df['score'])/not_adv_eng_df.shape[0],2)
eng_score_aver
not_eng_score_aver

adv_eng_average = np.sum(adv_eng_df[score_columns], axis = 0)/adv_eng_df.shape[0]
adv_eng_average = np.round(adv_eng_average*100,2)
adv_eng_average

not_adv_eng_average = np.sum(not_adv_eng_df[score_columns], axis = 0)/not_adv_eng_df.shape[0]
not_adv_eng_average = np.round(not_adv_eng_average*100,2)
not_adv_eng_average

N = 8
ind = np.arange(N)  # the x locations for the groups
width = 0.3         # the width of the bars

fig = plt.figure(figsize=(10,5))
ax = fig.add_subplot(111)
bars_adv = ax.bar(ind, adv_eng_average, width, color='r')
bars_not_adv = ax.bar(ind+width, not_adv_eng_average, width, color='b')
ax.set_ylabel('%')
ax.set_xticks(ind+width)
ax.set_xticklabels(score_columns)
ax.legend( (bars_adv[0], bars_not_adv[0]), ('Advanced/Native English level, average score: ' + str(eng_score_aver), 'Intermediate and less, average score: ' + str(not_eng_score_aver)) , frameon=True, facecolor='white')
plt.title("Score based on English level, texts")
plt.show()
```

#### 11.
Scores by knowledge of AI

```
df_mapped.know_ai.value_counts()

know_ai_df = df_mapped[df_mapped.know_ai=='Yes']
not_know_ai_df = df_mapped[df_mapped.know_ai=='No']

ai_score_aver = np.round(np.sum(know_ai_df['score'])/know_ai_df.shape[0],2)
not_ai_score_aver = np.round(np.sum(not_know_ai_df['score'])/not_know_ai_df.shape[0],2)
ai_score_aver
not_ai_score_aver

know_ai_average = np.sum(know_ai_df[score_columns], axis = 0)/know_ai_df.shape[0]
know_ai_average = np.round(know_ai_average*100,2)
not_know_ai_average = np.sum(not_know_ai_df[score_columns], axis = 0)/not_know_ai_df.shape[0]
not_know_ai_average = np.round(not_know_ai_average*100,2)

N = 8
ind = np.arange(N)  # the x locations for the groups
width = 0.3         # the width of the bars

fig = plt.figure(figsize=(10,5))
ax = fig.add_subplot(111)
bars_ai = ax.bar(ind, know_ai_average, width, color='r')
bars_not_ai = ax.bar(ind+width, not_know_ai_average, width, color='b')
ax.set_ylabel('%')
ax.set_xticks(ind+width)
ax.set_xticklabels(score_columns)
ax.legend( (bars_ai[0], bars_not_ai[0]), ('Worked with AI, average score: ' + str(ai_score_aver), 'Did not work with AI, average score: ' + str(not_ai_score_aver)) , frameon=True, facecolor='white')
plt.title("Score by knowledge of AI, texts")
plt.show()
```

#### 12.
Scores by education level ``` df_mapped.education.value_counts() masters_and_higher_df = df_mapped[df_mapped.education.isin(["Master's", "PhD"])] bachelor_and_lower_df = df_mapped[~df_mapped.education.isin(["Master's", "PhD"])] masters_and_higher_average = np.sum(masters_and_higher_df[score_columns], axis = 0)/masters_and_higher_df.shape[0] masters_and_higher_average = np.round(masters_and_higher_average*100,2) bachelor_and_lower_average = np.sum(bachelor_and_lower_df[score_columns], axis = 0)/bachelor_and_lower_df.shape[0] bachelor_and_lower_average = np.round(bachelor_and_lower_average*100,2) higher_score_aver = np.round(np.sum(masters_and_higher_df['score'])/masters_and_higher_df.shape[0],2) lower_score_aver = np.round(np.sum(bachelor_and_lower_df['score'])/bachelor_and_lower_df.shape[0],2) N = 8 ind = np.arange(N) # the x locations for the groups width = 0.3 # the width of the bars fig = plt.figure(figsize=(10,5)) ax = fig.add_subplot(111) bars_higher = ax.bar(ind, masters_and_higher_average, width, color='r') bars_lower = ax.bar(ind+width, bachelor_and_lower_average, width, color='b') ax.set_ylabel('%') ax.set_xticks(ind+width) ax.set_xticklabels(score_columns) ax.legend( (bars_higher[0], bars_lower[0]), ("Master's/PhD, average score: " + str(higher_score_aver), "Bachelor's/High School, average score: " + str(lower_score_aver)) , frameon=True, facecolor='white') plt.title("Score by education level, texts") plt.show() ```
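Sections 8-12 repeat the same two-group bar chart boilerplate with different inputs. One possible refactoring is a small helper function, sketched below with made-up demo values (the name `plot_grouped_bars` is my own, not from the notebook):

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # headless backend, so the sketch runs without a display
import matplotlib.pyplot as plt

def plot_grouped_bars(series_a, series_b, labels, legend_a, legend_b, title):
    """Draw the red/blue grouped bar chart used repeatedly above,
    parameterized by the two value series and their legend text."""
    ind = np.arange(len(labels))  # the x locations for the groups
    width = 0.3                   # the width of the bars
    fig, ax = plt.subplots(figsize=(10, 5))
    bars_a = ax.bar(ind, series_a, width, color='r')
    bars_b = ax.bar(ind + width, series_b, width, color='b')
    ax.set_ylabel('%')
    ax.set_xticks(ind + width)
    ax.set_xticklabels(labels)
    ax.legend((bars_a[0], bars_b[0]), (legend_a, legend_b),
              frameon=True, facecolor='white')
    ax.set_title(title)
    return fig

# Demo call with placeholder percentages
fig = plot_grouped_bars([50, 60], [40, 55], ['q1', 'q2'],
                        'group A', 'group B', 'demo')
print(len(fig.axes))
```

Each of the per-group comparisons above could then be one call, e.g. `plot_grouped_bars(adv_eng_average, not_adv_eng_average, score_columns, ..., "Score based on English level, texts")`.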
---
```
import pandas as pd
import numpy as np
import emoji
import pickle
import cv2
import matplotlib.pyplot as plt
import os

sentiment_data = pd.read_csv("../../resource/Emoji_Sentiment_Ranking/Emoji_Sentiment_Data_v1.0.csv")
sentiment_data.head()

def clean(x):
    x = x.replace(" ", "-").lower()
    return str(x)

sentiment_data['Unicode name'] = sentiment_data['Unicode name'].apply(clean)
sentiment_data.head()

score_dict = {}
for i in range(len(sentiment_data)):
    score_dict[sentiment_data.loc[i, "Unicode name"]] = [sentiment_data.loc[i, "Negative"]/sentiment_data.loc[i, "Occurrences"],
                                                         sentiment_data.loc[i, "Neutral"]/sentiment_data.loc[i, "Occurrences"],
                                                         sentiment_data.loc[i, "Positive"]/sentiment_data.loc[i, "Occurrences"]]
score_dict['angry-face']
```

### Dumping score_dict as a pickle file

```
with open('../../lib/score_dict.pickle', 'wb') as handle:
    pickle.dump(score_dict, handle, protocol=pickle.HIGHEST_PROTOCOL)

with open('../../lib/score_dict.pickle', 'rb') as handle:
    score_dict = pickle.load(handle)
```

### First transform the screenshot to a processable image
#### for that we need to import the module `ss_to_image` first

```
import os,sys,inspect
currentdir = os.path.dirname(os.path.abspath(inspect.getfile(inspect.currentframe())))
parentdir = os.path.dirname(currentdir)
sys.path.insert(0,parentdir)
print(currentdir)
print(parentdir)
sys.path

from utils.ss_to_image import final_crop
cropped_image = final_crop('../../resource/screenshots/Rohan.jpeg')
img = cropped_image
plt.imshow(cv2.cvtColor(img, cv2.COLOR_BGR2RGB))
plt.show()
```

### Image pre-processing

```
img.shape
```

#### Resizing image : dim = (560, 280) / ALREADY DONE THO..
``` dim = (560,280) resized = cv2.resize(img, dim, interpolation = cv2.INTER_AREA) plt.imshow(cv2.cvtColor(resized, cv2.COLOR_BGR2RGB)) plt.show() resized.shape n_col = resized.shape[1]//2 img_left = resized[:, :n_col] print("img_left",img_left.shape) img_right = resized[:, n_col:] print("img_right",img_right.shape) plt.imshow(cv2.cvtColor(img_left, cv2.COLOR_BGR2RGB)) plt.show() plt.imshow(cv2.cvtColor(img_right, cv2.COLOR_BGR2RGB)) plt.show() i = 1 j = 0 temp = img_right[i*70:(i+1)*70,j*70:(j+1)*70] plt.imshow(cv2.cvtColor(temp, cv2.COLOR_BGR2RGB)) plt.show() ``` ### Final code for image processing ``` # takes input the image outputs the extracted emojis as np-arrays def image_2_emoji(file_path): def to_half(image): n_col = image.shape[1]//2 img_left = image[:, :n_col] img_right = image[:, n_col:] return (img_left, img_right) def extract_from_half(image): emoji_list = [] for i in range(4): for j in range(4): temp = image[i*70:(i+1)*70,j*70:(j+1)*70] emoji_list.append(temp) return emoji_list img = cv2.imread(file_path) dim = (560,280) resized = cv2.resize(img, dim, interpolation = cv2.INTER_AREA) halfed = to_half(resized) output = extract_from_half(halfed[0]) output += extract_from_half(halfed[1]) return output template = cv2.imread('../../resource/emoji_database/smiling-face-with-sunglasses_1f60e.png') dim = (50,50) template = cv2.resize(template, dim, interpolation = cv2.INTER_AREA) plt.imshow(cv2.cvtColor(template, cv2.COLOR_BGR2RGB)) plt.show() ``` ### Each emoji after extraction has shape (70 $\times$ 70) ### Each template has size shape (50 $\times$ 50) ``` # Takes file_path of the screenshot as input and outputs the predicted list of names of the emojis def emoji_2_name(file_path, method = 'cv2.TM_SQDIFF_NORMED'): ''' available methods : 'cv2.TM_CCOEFF', 'cv2.TM_CCOEFF_NORMED', 'cv2.TM_CCORR', 'cv2.TM_CCORR_NORMED', 'cv2.TM_SQDIFF', 'cv2.TM_SQDIFF_NORMED' ''' methods = eval(method) emoji_list = image_2_emoji(file_path) emoji_name_list = [0]*len(emoji_list) 
    output = [0]*len(emoji_list)
    for i in os.listdir('../../resource/emoji_database'):
        template = cv2.imread('../../resource/emoji_database/' + str(i))
        dim = (50,50)
        template = cv2.resize(template, dim, interpolation = cv2.INTER_AREA)
        for j in range(len(emoji_list)):
            res = cv2.matchTemplate(emoji_list[j][:, :, 0], template[:, :, 0], methods)
            min_val, max_val, min_loc, max_loc = cv2.minMaxLoc(res)
            try:
                if emoji_name_list[j][0] > min_val:
                    emoji_name_list[j] = (min_val, i)
            except TypeError:
                emoji_name_list[j] = (min_val, i)
            output[j] = emoji_name_list[j][1].split('_')[0]
    #return emoji_name_list
    return output
```

#### Function to compute the sentiment score from the screenshots

```
# takes the screenshot as input and returns the normalized sentiment score
def name_2_score(file_path):
    output = None
    emoji_name_list = emoji_2_name(file_path)
    for i in emoji_name_list:
        try:
            output = np.add(output, np.array(score_dict[i]))
        except TypeError:
            output = np.array(score_dict[i])
        except KeyError:
            pass
    return output/np.sum(output)
```

## ROUGH

```
import cv2
import numpy as np
from matplotlib import pyplot as plt

img = cv2.imread('../../resource/screenshots/Arka.jpeg',0)
#img = im2_right[:, :, 2]
img2 = img.copy()
template = cv2.imread('../../resource/emoji_database/face-savouring-delicious-food_1f60b.png',0)
#template = template[:, :, 2]
w, h = template.shape[::-1]

# All the 6 methods for comparison in a list
methods = ['cv2.TM_CCOEFF', 'cv2.TM_CCOEFF_NORMED', 'cv2.TM_CCORR',
           'cv2.TM_CCORR_NORMED', 'cv2.TM_SQDIFF', 'cv2.TM_SQDIFF_NORMED']

for meth in methods:
    img = img2.copy()
    method = eval(meth)

    # Apply template Matching
    res = cv2.matchTemplate(img,template,method)
    min_val, max_val, min_loc, max_loc = cv2.minMaxLoc(res)

    # If the method is TM_SQDIFF or TM_SQDIFF_NORMED, take minimum
    if method in [cv2.TM_SQDIFF, cv2.TM_SQDIFF_NORMED]:
        top_left = min_loc
    else:
        top_left = max_loc
    bottom_right = (top_left[0] + w, top_left[1] + h)

    cv2.rectangle(img,top_left, bottom_right, 255, 2)

    plt.subplot(121),plt.imshow(res,cmap =
'gray')
    plt.title('Matching Result'), plt.xticks([]), plt.yticks([])
    plt.subplot(122),plt.imshow(img,cmap = 'gray')
    plt.title('Detected Point'), plt.xticks([]), plt.yticks([])
    plt.suptitle(meth)

    plt.show()

res.shape

from PIL import Image

plt.imshow(res, cmap = 'gray')
plt.show()

plt.imshow(cv2.cvtColor(img, cv2.COLOR_BGR2RGB))
plt.show()

plt.imshow(cv2.cvtColor(res, cv2.COLOR_BGR2RGB))
plt.show()

img

for i in template:
    for j in range(len(i)):
        if i[j]==0:
            i[j]=26

template

min_val, max_val, min_loc, max_loc
```
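The `TM_SQDIFF` score relied on throughout this notebook is just a sliding sum of squared differences, with the smallest value marking the best match. A dependency-free sketch of that idea in plain NumPy (the helper `match_sqdiff` is hypothetical, not part of the notebook):

```python
import numpy as np

def match_sqdiff(image, template):
    """Brute-force TM_SQDIFF: return (min_val, top_left) for the lowest
    sum-of-squared-differences placement of `template` inside `image`.
    top_left is (x, y), matching cv2.minMaxLoc's convention."""
    ih, iw = image.shape
    th, tw = template.shape
    best = (np.inf, (0, 0))
    for y in range(ih - th + 1):
        for x in range(iw - tw + 1):
            patch = image[y:y+th, x:x+tw]
            score = float(np.sum((patch - template) ** 2))
            if score < best[0]:
                best = (score, (x, y))
    return best

# Plant a known 2x2 patch of 5s in a 6x6 image and find it again.
img = np.zeros((6, 6))
img[2:4, 3:5] = 5.0
tmpl = np.full((2, 2), 5.0)
min_val, top_left = match_sqdiff(img, tmpl)
print(min_val, top_left)  # -> 0.0 (3, 2)
```

OpenCV computes the same surface (much faster) and `cv2.minMaxLoc` then extracts the minimum, which is why the code above keeps the smallest `min_val` per emoji.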
---
```
import pandas as pd
df=pd.read_excel("D:/DATA SCIENCE NOTE/AirQualityUCI.xlsx")
df
df.columns
df.keys()
df.shape
df.describe()
df["RH"].max()
df["RH"].min()
len(df.loc[df["RH"]==-200])
df["T"].min()
df["T"].max()
df["CO(GT)"].min()
df["CO(GT)"].max()

# Replace -200 (the sensor's missing-value marker) with NaN throughout the dataset
import numpy as np
df.replace(-200,np.nan,inplace=True)
df.head()
df.describe()
df

# find the missing values
df.isnull().sum()

# NMHC(GT) has more than 90% missing values, so drop it
df.drop("NMHC(GT)",axis=1,inplace=True)
df.head()
df.shape

# remove the date and time
df.drop(columns=["Date","Time"],inplace=True)
df.shape
df.isnull().sum()
df.mean()

# replace the missing values with the mean of each column
df.fillna(df.mean(),inplace=True)
df.head()
df

# display the correlation
df.corr()
import matplotlib.pyplot as plt
import seaborn as sns
plt.figure(figsize=(20,13))
sns.heatmap(df.corr(),annot=True,linewidth=5,linecolor="Black")
plt.show()

# divide the dataset into 2 parts
X=df.drop(columns=["RH"])
Y=df["RH"]
X
Y

# scale down the values of X by using StandardScaler
from sklearn.preprocessing import StandardScaler
sc=StandardScaler()
X=sc.fit_transform(X)
X

# split the dataset into 2 parts
from sklearn.model_selection import train_test_split
X_train,X_test,Y_train,Y_test=train_test_split(X,Y,test_size=0.2)
X_train.shape
X_test.shape
Y_train.shape
Y_test.shape
```

# create the model of LINEAR REGRESSION

```
from sklearn.linear_model import LinearRegression
L=LinearRegression()

# train the model
L.fit(X_train,Y_train)

# test the model
Y_pred_LR=L.predict(X_test)
Y_pred_LR
Y_test.values

# find the mean squared error (mse); metrics take (y_true, y_pred)
from sklearn.metrics import mean_squared_error
mse=mean_squared_error(Y_test,Y_pred_LR)
mse

from sklearn.metrics import r2_score
rse=r2_score(Y_test,Y_pred_LR)
rse
```

# implementation of KNN model

```
from sklearn.neighbors import KNeighborsRegressor
K=KNeighborsRegressor(n_neighbors=5)
K.fit(X_train,Y_train)
Y_pred_KNN=K.predict(X_test)
Y_pred_KNN
Y_test.values
# find the mse
from sklearn.metrics import mean_squared_error
mse=mean_squared_error(Y_test,Y_pred_KNN)
mse

# find the r2 score
from sklearn.metrics import r2_score
rse=r2_score(Y_test,Y_pred_KNN)
rse
```

# implementation of DECISION TREE REGRESSOR

```
from sklearn.tree import DecisionTreeRegressor
D=DecisionTreeRegressor()
D.fit(X_train,Y_train)
Y_pred_Tree=D.predict(X_test)
Y_pred_Tree
Y_test.values

# find the mse value
from sklearn.metrics import mean_squared_error
mse=mean_squared_error(Y_test,Y_pred_Tree)
mse

from sklearn.metrics import r2_score
rse=r2_score(Y_test,Y_pred_Tree)
rse

# We conclude that the decision tree regressor is the best model for this
# dataset: its MSE (about 1.39) is the lowest and its r2 score (about 0.99)
# is the highest of the three models.
```
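The three models above are fitted and scored cell by cell; the comparison drawn in the closing comment can also be done in one loop over a dict of estimators. A sketch of that pattern on synthetic linear data (since `AirQualityUCI.xlsx` is a local file and its results cannot be reproduced here):

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neighbors import KNeighborsRegressor
from sklearn.tree import DecisionTreeRegressor
from sklearn.metrics import mean_squared_error, r2_score
from sklearn.model_selection import train_test_split

# Stand-in data with a known linear relationship plus small noise
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 4))
y = X @ np.array([2.0, -1.0, 0.5, 3.0]) + rng.normal(scale=0.1, size=300)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

models = {
    "Linear Regression": LinearRegression(),
    "KNN": KNeighborsRegressor(n_neighbors=5),
    "Decision Tree": DecisionTreeRegressor(random_state=0),
}
results = {}
for name, model in models.items():
    model.fit(X_train, y_train)
    y_pred = model.predict(X_test)
    # metrics take (y_true, y_pred) in that order
    results[name] = (mean_squared_error(y_test, y_pred), r2_score(y_test, y_pred))
    print(name, results[name])

best = min(results, key=lambda k: results[k][0])
print("lowest MSE:", best)
```

On the actual air-quality data the same loop would reproduce the notebook's per-model MSE/r2 cells in one pass and make the final "which model is best" comparison explicit.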
---
# Importing the data

```
from pandas import read_csv # lets us read in the comma-separated text file
import matplotlib.pyplot as plt # our go-to module for plotting

data = read_csv('data.csv')

fig = plt.figure()
ax = fig.add_subplot(1, 1, 1)
ax.scatter(data['red_pos_X'], data['red_pos_Y'] , color = 'red', edgecolors = 'black', s = 15)
ax.scatter(data['blue_pos_X'], data['blue_pos_Y'] , color = 'blue', edgecolors = 'black', s =15)
ax.set_xlabel('x position')
ax.set_ylabel('y position')
plt.show()
```

# a) Posterior distributions of speed
Looking ahead to part (c), let's describe our velocities using (speed, angle) notation rather than (v_x, v_y). (This will make equating the speeds easier.) That is, for a given object, its position will be described by $x(t) = x_0 + (v\cos\theta)t$ and $y(t) = y_0 + (v\sin\theta)t$.

We have to choose some prior distributions for our model, but we don't really know much about these moving objects. I'll take the velocities to be 'normally' distributed about 0; the priors for speed will be PyMC3's HalfNormal distribution, <s>and the priors for the angle will be a Uniform distribution around the full circle.</s> Evidently uniform priors are not the best thing to use, and the sampling runs quickly with Normal/HalfNormal distributions, so everything will be given a (Half)Normal distribution with a sufficiently large standard deviation.

See https://docs.pymc.io/api/distributions/continuous.html for a list of available continuous distributions.

### Setting up the model

```
import pymc3 as pm # our main 'hammer', per the Bayesian Methods for Hackers notebook
import numpy as np # grab some trig functions

def modelData(data, samespeed):
    """This function takes the array of .csv data, along with a boolean value
    indicating whether or not the blue/red speeds are to be fixed equal to each other.
    All variables in the model are given (Half)Normal distributions, and the model
    is then run using the NUTS sampling algorithm.
The function then returns the trace.""" with pm.Model() as model: #HalfNormal distributions keep these distributions positive v_r = pm.HalfNormal('v_r', sd = 0.1) if samespeed == True: v_b = v_r #blue and red speeds are the same else: v_b = pm.HalfNormal('v_b', sd = 0.1) sigma = pm.HalfNormal('sigma', sd = 1.0) #Normal distributions make the NUTS algorithm run more smoothly theta_r = pm.Normal('theta_r', mu = 0, sd = 1.0) theta_b = pm.Normal('theta_b', mu = 0, sd = 1.0) x0_r = pm.Normal('x0_r', mu = 0, sd = 1.0) y0_r = pm.Normal('y0_r', mu = 0, sd = 1.0) x0_b = pm.Normal('x0_b', mu = 0, sd = 1.0) y0_b = pm.Normal('y0_b', mu = 0, sd = 1.0) #The expected (x,y) values for the red and blue objects x_r_expected = x0_r + v_r*np.cos(theta_r)*data['t'] y_r_expected = y0_r + v_r*np.sin(theta_r)*data['t'] x_b_expected = x0_b + v_b*np.cos(theta_b)*data['t'] y_b_expected = y0_b + v_b*np.sin(theta_b)*data['t'] #Likelihood distributions for the normally-distributed (x,y) positions x_r = pm.Normal('x_r_likelihood', mu = x_r_expected, sd = sigma, observed = (data['red_pos_X'])) y_r = pm.Normal('y_r_likelihood', mu = y_r_expected, sd = sigma, observed = (data['red_pos_Y'])) x_b = pm.Normal('x_b_likelihood', mu = x_b_expected, sd = sigma, observed = (data['blue_pos_X'])) y_b = pm.Normal('y_b_likelihood', mu = y_b_expected, sd = sigma, observed = (data['blue_pos_Y'])) #Running the NUTS algorithm with model: step = pm.NUTS() trace = pm.sample(10000, step = step, njobs = 4) return trace ``` ### Saving/Loading results Thanks to [this post](https://stackoverflow.com/questions/44764932/can-a-pymc3-trace-be-loaded-and-values-accessed-without-the-original-model-in-me), the traces from NUTS can be saved and retrieved later using Pickle. 
```
import pickle

def saveTrace(trace, filename):
    # modelData() returns only the trace, so we pickle just that
    with open(filename, 'wb') as buff:
        pickle.dump({'trace': trace}, buff)

def openTrace(filename):
    with open(filename, 'rb') as buff:
        temp_data = pickle.load(buff)
    return temp_data['trace']
```

### Example of running the model

First get the trace:

```
trace = modelData(data, True)  # all of the data, the objects are assumed to be moving at the same speed
```

Then save it with the appropriate file name:

```
saveTrace(trace, 'sameV_alldata.pkl')
```

The other three cases are as follows (the above case is for part c):

```
trace2 = modelData(data, False)
saveTrace(trace2, 'diffV_alldata.pkl')

trace3 = modelData(data[:100], True)
saveTrace(trace3, 'sameV_100data.pkl')

trace4 = modelData(data[:100], False)
saveTrace(trace4, 'diffV_100data.pkl')
```

To save time, we can then load them all later:

```
trace = openTrace('sameV_alldata.pkl')
trace2 = openTrace('diffV_alldata.pkl')
trace3 = openTrace('sameV_100data.pkl')
trace4 = openTrace('diffV_100data.pkl')
```

If we want to look at the results of the sampling algorithm,

```
pm.traceplot(trace2[1000:][::5]);  # burn first 1000 steps and 'prune' the result by taking every 5th step
plt.show()
pm.summary(trace2)
```

### Plotting the posterior distributions for speed

```
def plotPosterior(trace):
    """This function takes a trace and plots the distribution of red/blue
    speeds as a histogram. If the speeds were taken to be equal, just the red
    speed is plotted (this is handled by excepting a KeyError for the missing
    'v_b' value)."""
    bin_width = 1e-6
    fig = plt.figure()
    ax = fig.add_subplot(1, 1, 1)
    ax.ticklabel_format(style='sci', axis='x', scilimits=(0, 0))  # scientific notation on x-axis
    ax.set_xlabel('speed')
    ax.set_ylabel('frequency')
    v_r = trace['v_r'][1000:][::5]  # remember to 'burn and prune' the results
    try:
        v_b = trace['v_b'][1000:][::5]
        mu = "{:.3E}".format(np.mean(v_b))
        sigma = "{:.3E}".format(np.sqrt(np.var(v_b)))
        ax.hist(v_b, color='blue', alpha=0.4,
                bins=np.arange(min(v_b), max(v_b) + bin_width, bin_width),
                label=f'$\mu$, $\sigma$ = {mu}, {sigma}')
    except KeyError:
        pass
    mu = "{:.3E}".format(np.mean(v_r))
    sigma = "{:.3E}".format(np.sqrt(np.var(v_r)))
    ax.hist(v_r, color='red', alpha=0.4,
            bins=np.arange(min(v_r), max(v_r) + bin_width, bin_width),
            label=f'$\mu$, $\sigma$ = {mu}, {sigma}')
    plt.legend(loc='upper left')
    plt.show()
```

For example, using the above function, we plot our answer to part (a). As the filename used to save/load 'trace2' suggests, the full data set was used and the speeds were assumed to be different.

```
plotPosterior(trace2)
```

# b) Confidence interval

These posterior distributions can be converted to distributions for the 'zero-crossing' time $t_0$ by solving $y(t_0) = 0$. This gives $t_0 = -\frac{y_0}{v\sin\theta}$.

```
def plotCrossingTimes(trace):
    """This function takes a trace and plots the distribution of the
    'zero-crossing time' for each object. If the speeds were taken to be
    equal, the 'v_b' distribution is set equal to the 'v_r' distribution
    (this is handled by excepting a KeyError for the missing 'v_b' value)."""
    v_r = trace['v_r'][1000:][::5]
    y0_r = trace['y0_r'][1000:][::5]
    theta_r = trace['theta_r'][1000:][::5]
    try:
        v_b = trace['v_b'][1000:][::5]
    except KeyError:
        v_b = trace['v_r'][1000:][::5]
    y0_b = trace['y0_b'][1000:][::5]
    theta_b = trace['theta_b'][1000:][::5]

    # Solving for the crossing times
    t0_r = -y0_r/v_r/np.sin(theta_r)
    t0_b = -y0_b/v_b/np.sin(theta_b)

    fig = plt.figure()
    ax = fig.add_subplot(1, 1, 1)
    ax.set_xlabel('time (crossing y = 0)')
    ax.set_ylabel('frequency')
    ax.hist(t0_r, color='red', alpha=1.0)
    ax.hist(t0_b, color='blue', alpha=1.0)
    plt.show()

plotCrossingTimes(trace2)
```

As you can see, the blue object crosses $y = 0$ at a much later time than the red object. The $y$ value of the blue object is < 0 for times left of the blue curve. So, we are justified in using the distribution of 'crossing' times for the blue object to find our 90% confidence interval.

```
def findConfidenceInterval(trace):
    '''This function takes a trace and finds a 90% confidence interval for
    the time at which both objects have crossed Y = 0. It does this two ways:
    (1) fitting a Gaussian to the blue distribution and using the common
    +/- 1.645 standard deviations from the mean, and (2) using the
    np.percentile() function directly on the distribution. If the speeds
    were taken to be equal, the 'v_b' distribution is set equal to the 'v_r'
    distribution (this is handled by excepting a KeyError for the missing
    'v_b' value).'''
    try:
        v_b = trace['v_b'][1000:][::5]
    except KeyError:
        v_b = trace['v_r'][1000:][::5]
    y0_b = trace['y0_b'][1000:][::5]
    theta_b = trace['theta_b'][1000:][::5]
    t0_b = -y0_b/v_b/np.sin(theta_b)

    mean = np.mean(t0_b)
    sigma = np.sqrt(np.var(t0_b))
    print('The (5%, 95%) confidence interval is')
    print('%.3f, %.3f by fitting a Gaussian' % (mean - 1.645*sigma, mean + 1.645*sigma))
    print('%.3f, %.3f using np.percentile()' % (np.percentile(t0_b, 5), np.percentile(t0_b, 95)))

findConfidenceInterval(trace2)
```

# c) Same speed

Repeating everything is now straightforward; make sure the traces have either been generated or loaded.

```
plotPosterior(trace)
findConfidenceInterval(trace)
```

# d) Repeat with smaller N

### Different speeds

```
plotPosterior(trace4)
findConfidenceInterval(trace4)
```

### Same speed

```
plotPosterior(trace3)
findConfidenceInterval(trace3)
```

As you'd expect, the confidence intervals became a bit larger with smaller $N$. In Bayesian language, we'd say that we are less certain in the time at which both objects have crossed Y=0. This makes sense with a smaller data set. Note also that the $\sigma$ associated with our posterior speed distributions has more than doubled, so we are also less certain in the speeds of the objects.

# Summarizing everything

Let's first modify our plotting function to allow the part (c) speed distributions to be plotted on the same histogram as the 'different speed' distributions. The 'same speed' distribution will be plotted in green.

```
def plotOverlayedPosterior(trace, trace2):
    """This function takes a trace and plots the distribution of red/blue
    speeds as a histogram. If the speeds were taken to be equal, just the red
    speed is plotted (this is handled by excepting a KeyError for the missing
    'v_b' value). It then takes a second trace (where the speeds are the
    same) and also plots its speed distribution in green."""
    bin_width = 1e-6
    fig = plt.figure(figsize=(7, 6))
    ax = fig.add_subplot(1, 1, 1)
    ax.ticklabel_format(style='sci', axis='x', scilimits=(0, 0))  # scientific notation on x-axis
    ax.set_xlabel('speed')
    ax.set_ylabel('frequency')
    v_r = trace['v_r'][1000:][::5]  # remember to 'burn and prune' the results
    v_same = trace2['v_r'][1000:][::5]
    try:
        v_b = trace['v_b'][1000:][::5]
        mu = "{:.3E}".format(np.mean(v_b))
        sigma = "{:.3E}".format(np.sqrt(np.var(v_b)))
        ax.hist(v_b, color='blue', alpha=0.4,
                bins=np.arange(min(v_b), max(v_b) + bin_width, bin_width),
                label=f'$\mu$, $\sigma$ = {mu}, {sigma}')
    except KeyError:
        pass
    mu = "{:.3E}".format(np.mean(v_r))
    sigma = "{:.3E}".format(np.sqrt(np.var(v_r)))
    ax.hist(v_r, color='red', alpha=0.4,
            bins=np.arange(min(v_r), max(v_r) + bin_width, bin_width),
            label=f'$\mu$, $\sigma$ = {mu}, {sigma}')
    mu = "{:.3E}".format(np.mean(v_same))
    sigma = "{:.3E}".format(np.sqrt(np.var(v_same)))
    ax.hist(v_same, color='green', alpha=0.4,
            bins=np.arange(min(v_same), max(v_same) + bin_width, bin_width),
            label=f'$\mu$, $\sigma$ = {mu}, {sigma}')
    plt.legend(loc='upper left')
    plt.show()
```

### Using the full data set

```
plotOverlayedPosterior(trace2, trace)
print('When the speeds have different distributions')
findConfidenceInterval(trace2)
print('\nWhen the speeds are taken to be equal')
findConfidenceInterval(trace)
```

### Using the first 100 measurements

```
plotOverlayedPosterior(trace4, trace3)
print('When the speeds have different distributions')
findConfidenceInterval(trace4)
print('\nWhen the speeds are taken to be equal')
findConfidenceInterval(trace3)
```
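As a sanity check on the two interval methods used in `findConfidenceInterval`, here is a minimal self-contained sketch (plain NumPy, no PyMC3 trace required, with made-up Gaussian parameters): for normally distributed samples, the `np.percentile()` interval should agree with the mean $\pm 1.645\sigma$ rule.

```python
import numpy as np

def interval_90(samples):
    """Return the (5%, 95%) interval of a 1-D array of samples."""
    return np.percentile(samples, 5), np.percentile(samples, 95)

# For Gaussian samples the percentile interval should match mean +/- 1.645*sigma
rng = np.random.default_rng(0)
samples = rng.normal(loc=10.0, scale=2.0, size=100_000)
lo, hi = interval_90(samples)
print(lo, hi)  # close to 10 - 1.645*2 = 6.71 and 10 + 1.645*2 = 13.29
```

For a strongly skewed posterior the two methods would disagree, which is why reporting both (as `findConfidenceInterval` does) is a useful check.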
# Computing the Bayesian Hilbert Transform-DRT

In this tutorial example, we will show how the developed BHT-DRT method works using a simple ZARC model. The equivalent circuit consists of a single ZARC element, *i.e.*, a resistor in parallel with a CPE element.

```
# import the libraries
import numpy as np
from math import pi, log10
import matplotlib.pyplot as plt
import seaborn as sns

# core library
import Bayes_HT
import importlib
importlib.reload(Bayes_HT)

# plot standards
plt.rc('font', family='serif', size=15)
plt.rc('text', usetex=True)
plt.rc('xtick', labelsize=15)
plt.rc('ytick', labelsize=15)
```

## 1) Define the synthetic impedance experiment $Z_{\rm exp}(\omega)$

### 1.1) Define the frequency range

```
N_freqs = 81
freq_min = 10**-4  # Hz
freq_max = 10**4  # Hz
freq_vec = np.logspace(log10(freq_min), log10(freq_max), num=N_freqs, endpoint=True)
tau_vec = np.logspace(-log10(freq_max), -log10(freq_min), num=N_freqs, endpoint=True)
omega_vec = 2.*pi*freq_vec
```

### 1.2) Define the circuit parameters for the ZARC

```
R_ct = 50  # Ohm
R_inf = 10.  # Ohm
phi = 0.8
tau_0 = 1.  # sec
```

### 1.3) Generate exact impedance $Z_{\rm exact}(\omega)$ as well as the stochastic experiment $Z_{\rm exp}(\omega)$, here $Z_{\rm exp}(\omega)=Z_{\rm exact}(\omega)+\sigma_n(\varepsilon_{\rm re}+i\varepsilon_{\rm im})$

```
# generate exact impedance
T = tau_0**phi/R_ct
Z_exact = R_inf + 1./(1./R_ct+T*(1j*2.*pi*freq_vec)**phi)

# add random noise
np.random.seed(121295)
sigma_n_exp = 0.8  # Ohm
Z_exp = Z_exact + sigma_n_exp*(np.random.normal(0, 1, N_freqs)+1j*np.random.normal(0, 1, N_freqs))
```

### 1.4) show the impedance in Nyquist plot

```
fig, ax = plt.subplots()
plt.plot(Z_exact.real, -Z_exact.imag, linewidth=4, color='black', label='exact')
plt.plot(np.real(Z_exp), -Z_exp.imag, 'o', markersize=8, color='red', label='synth exp')
plt.plot(np.real(Z_exp[0:70:20]), -np.imag(Z_exp[0:70:20]), 's', markersize=8, color="black")
plt.plot(np.real(Z_exp[30]), -np.imag(Z_exp[30]), 's', markersize=8, color="black")
plt.annotate(r'$10^{-4}$', xy=(np.real(Z_exp[0]), -np.imag(Z_exp[0])),
             xytext=(np.real(Z_exp[0])-15, -np.imag(Z_exp[0])),
             arrowprops=dict(arrowstyle='-', connectionstyle='arc'))
plt.annotate(r'$10^{-1}$', xy=(np.real(Z_exp[20]), -np.imag(Z_exp[20])),
             xytext=(np.real(Z_exp[20])-5, 10-np.imag(Z_exp[20])),
             arrowprops=dict(arrowstyle='-', connectionstyle='arc'))
plt.annotate(r'$1$', xy=(np.real(Z_exp[30]), -np.imag(Z_exp[30])),
             xytext=(np.real(Z_exp[30]), 8-np.imag(Z_exp[30])),
             arrowprops=dict(arrowstyle='-', connectionstyle='arc'))
plt.annotate(r'$10$', xy=(np.real(Z_exp[40]), -np.imag(Z_exp[40])),
             xytext=(np.real(Z_exp[40]), 8-np.imag(Z_exp[40])),
             arrowprops=dict(arrowstyle='-', connectionstyle='arc'))
plt.annotate(r'$10^2$', xy=(np.real(Z_exp[60]), -np.imag(Z_exp[60])),
             xytext=(np.real(Z_exp[60])+5, -np.imag(Z_exp[60])),
             arrowprops=dict(arrowstyle='-', connectionstyle='arc'))
plt.legend(frameon=False, fontsize=15)
plt.axis('scaled')
plt.xlim(5, 70)
plt.ylim(-2, 32)
plt.xticks(range(5, 70, 10))
plt.yticks(range(0, 40, 10))
plt.xlabel(r'$Z_{\rm re}/\Omega$', fontsize=20)
plt.ylabel(r'$-Z_{\rm im}/\Omega$', fontsize=20)
plt.show()
```

## 2) Calculate the DRT impedance $Z_{\rm DRT}(\omega)$ and the Hilbert transformed impedance $Z_{\rm H}(\omega)$

### 2.1) optimize the hyperparameters

```
# set the initial parameters
sigma_n = 1
sigma_beta = 20
sigma_lambda = 100
theta_0 = np.array([sigma_n, sigma_beta, sigma_lambda])

data_real, data_imag, scores = Bayes_HT.HT_est(theta_0, Z_exp, freq_vec, tau_vec)
```

### 2.2) Calculate the real part of the $Z_{\rm DRT}(\omega)$ and the imaginary part of the $Z_{\rm H}(\omega)$

#### 2.2.1) Bayesian regression to obtain the real part of impedance for both mean and covariance

```
mu_Z_re = data_real.get('mu_Z')
cov_Z_re = np.diag(data_real.get('Sigma_Z'))

# the mean and covariance of $R_\infty$
mu_R_inf = data_real.get('mu_gamma')[0]
cov_R_inf = np.diag(data_real.get('Sigma_gamma'))[0]
```

#### 2.2.2) Calculate the real part of DRT impedance for both mean and covariance

```
mu_Z_DRT_re = data_real.get('mu_Z_DRT')
cov_Z_DRT_re = np.diag(data_real.get('Sigma_Z_DRT'))
```

#### 2.2.3) Calculate the imaginary part of HT impedance for both mean and covariance

```
mu_Z_H_im = data_real.get('mu_Z_H')
cov_Z_H_im = np.diag(data_real.get('Sigma_Z_H'))
```

#### 2.2.4) Estimate the $\sigma_n$

```
sigma_n_re = data_real.get('theta')[0]
```

### 2.3) Calculate the imaginary part of the $Z_{\rm DRT}(\omega)$ and the real part of the $Z_{\rm H}(\omega)$

```
# 2.3.1 Bayesian regression
mu_Z_im = data_imag.get('mu_Z')
cov_Z_im = np.diag(data_imag.get('Sigma_Z'))

# the mean and covariance of the inductance $L_0$
mu_L_0 = data_imag.get('mu_gamma')[0]
cov_L_0 = np.diag(data_imag.get('Sigma_gamma'))[0]

# 2.3.2 DRT part
mu_Z_DRT_im = data_imag.get('mu_Z_DRT')
cov_Z_DRT_im = np.diag(data_imag.get('Sigma_Z_DRT'))

# 2.3.3 HT prediction
mu_Z_H_re = data_imag.get('mu_Z_H')
cov_Z_H_re = np.diag(data_imag.get('Sigma_Z_H'))

# 2.3.4 estimated sigma_n
sigma_n_im = data_imag.get('theta')[0]
```

## 3) Plot the BHT-DRT

### 3.1) plot the real parts of impedance for both Bayesian regression and the synthetic experiment

```
band = np.sqrt(cov_Z_re)
plt.fill_between(freq_vec, mu_Z_re-3*band, mu_Z_re+3*band, facecolor='lightgrey')
plt.semilogx(freq_vec, mu_Z_re, linewidth=4, color='black', label='mean')
plt.semilogx(freq_vec, Z_exp.real, 'o', markersize=8, color='red', label='synth exp')
plt.xlim(1E-4, 1E4)
plt.ylim(5, 65)
plt.xscale('log')
plt.yticks(range(5, 70, 10))
plt.xlabel(r'$f/{\rm Hz}$', fontsize=20)
plt.ylabel(r'$Z_{\rm re}/\Omega$', fontsize=20)
plt.legend(frameon=False, fontsize=15)
plt.show()
```

### 3.2) plot the imaginary parts of impedance for both Bayesian regression and the synthetic experiment

```
band = np.sqrt(cov_Z_im)
plt.fill_between(freq_vec, -mu_Z_im-3*band, -mu_Z_im+3*band, facecolor='lightgrey')
plt.semilogx(freq_vec, -mu_Z_im, linewidth=4, color='black', label='mean')
plt.semilogx(freq_vec, -Z_exp.imag, 'o', markersize=8, color='red', label='synth exp')
plt.xlim(1E-4, 1E4)
plt.ylim(-3, 30)
plt.xscale('log')
plt.xlabel(r'$f/{\rm Hz}$', fontsize=20)
plt.ylabel(r'$-Z_{\rm im}/\Omega$', fontsize=20)
plt.legend(frameon=False, fontsize=15)
plt.show()
```

### 3.3) plot the real parts of impedance for both Hilbert transform and the synthetic experiment

```
mu_Z_H_re_agm = mu_R_inf + mu_Z_H_re
band_agm = np.sqrt(cov_R_inf + cov_Z_H_re + sigma_n_im**2)
plt.fill_between(freq_vec, mu_Z_H_re_agm-3*band_agm, mu_Z_H_re_agm+3*band_agm, facecolor='lightgrey')
plt.semilogx(freq_vec, mu_Z_H_re_agm, linewidth=4, color='black', label='mean')
plt.semilogx(freq_vec, Z_exp.real, 'o', markersize=8, color='red', label='synth exp')
plt.xlim(1E-4, 1E4)
plt.ylim(-3, 70)
plt.xscale('log')
plt.xlabel(r'$f/{\rm Hz}$', fontsize=20)
plt.ylabel(r'$\left(R_\infty + Z_{\rm H, re}\right)/\Omega$', fontsize=20)
plt.legend(frameon=False, fontsize=15)
plt.show()
```

### 3.4) plot the imaginary parts of impedance for both Hilbert transform and the synthetic experiment

```
mu_Z_H_im_agm = omega_vec*mu_L_0 + mu_Z_H_im
band_agm = np.sqrt((omega_vec**2)*cov_L_0 + cov_Z_H_im + sigma_n_re**2)
plt.fill_between(freq_vec, -mu_Z_H_im_agm-3*band_agm, -mu_Z_H_im_agm+3*band_agm, facecolor='lightgrey')
plt.semilogx(freq_vec, -mu_Z_H_im_agm, linewidth=4, color='black', label='mean')
plt.semilogx(freq_vec, -Z_exp.imag, 'o', markersize=8, color='red', label='synth exp')
plt.xlim(1E-4, 1E4)
plt.ylim(-3, 30)
plt.xscale('log')
plt.xlabel(r'$f/{\rm Hz}$', fontsize=20)
plt.ylabel(r'$-\left(\omega L_0 + Z_{\rm H, im}\right)/\Omega$', fontsize=20)
plt.legend(frameon=False, fontsize=15)
plt.show()
```

### 3.5) plot the difference between real parts of impedance for Hilbert transform and the synthetic experiment

```
difference_re = mu_R_inf + mu_Z_H_re - Z_exp.real
band = np.sqrt(cov_R_inf + cov_Z_H_re + sigma_n_im**2)
plt.fill_between(freq_vec, -3*band, 3*band, facecolor='lightgrey')
plt.plot(freq_vec, difference_re, 'o', markersize=8, color='red')
plt.xlim(1E-4, 1E4)
plt.ylim(-10, 10)
plt.xscale('log')
plt.xlabel(r'$f/{\rm Hz}$', fontsize=20)
plt.ylabel(r'$\left(R_\infty + Z_{\rm H, re} - Z_{\rm exp, re}\right)/\Omega$', fontsize=20)
plt.show()
```

### 3.6) plot the density distribution of residuals for the real part

```
fig = plt.figure(1)
a = sns.kdeplot(difference_re, shade=True, color='grey')
a = sns.rugplot(difference_re, color='black')
a.set_xlabel(r'$\left(R_\infty + Z_{\rm H, re} - Z_{\rm exp, re}\right)/\Omega$', fontsize=20)
a.set_ylabel(r'pdf', fontsize=20)
a.tick_params(labelsize=15)
plt.xlim(-5, 5)
plt.ylim(0, 0.5)
plt.show()
```

### 3.7) plot the difference between imaginary parts of impedance for Hilbert transform and the synthetic experiment

```
difference_im = omega_vec*mu_L_0 + mu_Z_H_im - Z_exp.imag
band = np.sqrt((omega_vec**2)*cov_L_0 + cov_Z_H_im + sigma_n_re**2)
plt.fill_between(freq_vec, -3*band, 3*band, facecolor='lightgrey')
plt.plot(freq_vec, difference_im, 'o', markersize=8, color='red')
plt.xlim(1E-4, 1E4)
plt.ylim(-10, 10)
plt.xscale('log')
plt.xlabel(r'$f/{\rm Hz}$', fontsize=20)
plt.ylabel(r'$\left(\omega L_0 + Z_{\rm H, im} - Z_{\rm exp, im}\right)/\Omega$', fontsize=20)
plt.show()
```

### 3.8) plot the density distribution of residuals for the imaginary part

```
fig = plt.figure(2)
a = sns.kdeplot(difference_im, shade=True, color='grey')
a = sns.rugplot(difference_im, color='black')
a.set_xlabel(r'$\left(\omega L_0 + Z_{\rm H, im} - Z_{\rm exp, im}\right)/\Omega$', fontsize=20)
a.set_ylabel(r'pdf', fontsize=20)
a.tick_params(labelsize=15)
plt.xlim(-5, 5)
plt.ylim(0, 0.5)
plt.show()
```
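As a quick self-check of the synthetic experiment in section 1.3, the ZARC impedance can be verified against its limiting behaviour: $Z \to R_\infty + R_{\rm ct}$ as $\omega \to 0$ and $Z \to R_\infty$ as $\omega \to \infty$. A minimal sketch using the same parameter values as above:

```python
import numpy as np

# ZARC impedance from section 1.3: R_inf in series with (R_ct parallel to a CPE),
# where the CPE parameter is T = tau_0**phi / R_ct
def zarc(freq, R_inf=10., R_ct=50., phi=0.8, tau_0=1.):
    T = tau_0**phi / R_ct
    return R_inf + 1. / (1. / R_ct + T * (1j * 2. * np.pi * freq)**phi)

# Limiting behaviour: Z -> R_inf + R_ct at low frequency, Z -> R_inf at high frequency
Z_low = zarc(1e-8)
Z_high = zarc(1e8)
print(Z_low.real, Z_high.real)  # approximately 60.0 and 10.0
```

Any Kramers-Kronig-consistent fit, such as the BHT result above, should reproduce these two plateaus in the real part of the impedance.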
```
from functools import reduce

import numpy as np
import pandas as pd
from pandas.tseries.offsets import DateOffset
from sklearn.preprocessing import StandardScaler
from sklearn.ensemble import RandomForestRegressor
from sklearn.ensemble import RandomForestClassifier
from xgboost import XGBClassifier
from xgboost import XGBRegressor
from ta import add_all_ta_features

pd.set_option("display.max_rows", None)
pd.set_option("display.max_columns", None)
np.seterr(divide="ignore", invalid="ignore");
```

## Model without Rebalance

```
def build_momentum(df):
    df["mom_6m"] = np.log(df.close) - np.log(df.close.shift(6))
    df["mom_1m"] = np.log(df.close) - np.log(df.close.shift(1))
    df["log_return"] = np.log(df.close.shift(-3)) - np.log(df.close)
    return df.loc[df.prccd > 5, ["mcap", "mom_6m", "mom_1m", "log_return"]].dropna()

def be_extreme(df):
    """Retain the 20% values that are the smallest and the 20% that are the largest."""
    top = df.log_return.quantile(0.8)
    low = df.log_return.quantile(0.2)
    return df[(df.log_return < low) | (df.log_return > top)]

df = pd.read_parquet("../data/merged_data_alpha.6.parquet")
df_basic = df[["mcap", "prccd", "close"]]
df_mom = df_basic.groupby("gvkey").apply(build_momentum)

df_train = df_mom.xs(slice("2002-01-01", "2012-01-01"), level="date", drop_level=False).groupby("date").apply(be_extreme)
df_test = df_mom.xs(slice("2012-01-01", "2016-01-01"), level="date", drop_level=False)

X_train = df_train.drop("log_return", axis=1).to_numpy()
y_train = df_train["log_return"].to_numpy()
X_test = df_test.drop("log_return", axis=1).to_numpy()
y_test = df_test["log_return"].to_numpy()

xgb_reg = XGBRegressor(n_estimators=100, max_depth=5, n_jobs=-1)
xgb_fit = xgb_reg.fit(X_train, y_train)
print(xgb_reg.score(X_train, y_train))
print(xgb_reg.score(X_test, y_test))

xgb_clf = XGBClassifier(n_estimators=100, max_depth=3, n_jobs=-1)
xgb_fit = xgb_clf.fit(X_train, np.sign(y_train))
print(xgb_clf.score(X_train, np.sign(y_train)))
print(xgb_clf.score(X_test, np.sign(y_test)))
```

## Model with Rebalance

```
def be_extreme(df):
    """Retain the 20% values that are the smallest and the 20% that are the largest."""
    top = df.y.quantile(0.8)
    low = df.y.quantile(0.2)
    return df[(df.y < low) | (df.y > top)]

def be_balance(df):
    """Returns minus a cross-sectional median"""
    median = df.log_return.quantile(0.5)
    df["y"] = df.log_return - median
    return df

df_train = df_mom.xs(slice("2002-01-01", "2012-01-01"), level="date", drop_level=False).groupby("date").apply(be_balance).groupby("date").apply(be_extreme)
df_test = df_mom.xs(slice("2012-01-01", "2016-01-01"), level="date", drop_level=False).groupby("date").apply(be_balance)

X_train = df_train.drop(["log_return", "y"], axis=1).to_numpy()
y_train = df_train["y"].to_numpy()
X_test = df_test.drop(["log_return", "y"], axis=1).to_numpy()
y_test = df_test["y"].to_numpy()

df_train.plot.scatter(x="mom_6m", y="y")
df_train.plot.scatter(x="mcap", y="y")

xgb_reg = XGBRegressor(n_estimators=100, max_depth=3, n_jobs=-1)
xgb_fit = xgb_reg.fit(X_train, y_train)
print(xgb_reg.score(X_train, y_train))
print(xgb_reg.score(X_test, y_test))

xgb_clf = XGBClassifier(n_estimators=100, max_depth=3, n_jobs=-1)
xgb_fit = xgb_clf.fit(X_train, np.sign(y_train))
print(xgb_clf.score(X_train, np.sign(y_train)))
print(xgb_clf.score(X_test, np.sign(y_test)))
```

The algorithm improves after rebalancing.
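To make the effect of `be_balance` concrete, here is a toy illustration on a hypothetical two-date panel (made-up returns): subtracting each date's cross-sectional median gives a median-zero target on every date, so `np.sign(y)` splits every cross-section roughly half and half instead of being dominated by the market's overall direction. This sketch uses `transform` rather than the notebook's `groupby().apply()`, but the arithmetic is the same.

```python
import pandas as pd

# Hypothetical toy panel: three stocks on each of two dates
df = pd.DataFrame({
    "date": ["2020-01", "2020-01", "2020-01", "2020-02", "2020-02", "2020-02"],
    "log_return": [0.05, 0.01, -0.02, -0.01, -0.03, 0.04],
})

# Subtract each date's cross-sectional median return (same idea as be_balance)
df["y"] = df["log_return"] - df.groupby("date")["log_return"].transform("median")

# Every date now has a median-zero target
print(df.groupby("date")["y"].median().tolist())  # [0.0, 0.0]
```

Note that in 2020-02 all three raw returns would have had the same relative ordering, but two of them are negative; after rebalancing, the best stock on each date always gets a positive target.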
<i>Recommendation Systems</i><br> -- Author by : * Nub-T * D. Johanes ``` !wget http://files.grouplens.org/datasets/movielens/ml-latest-small.zip import os import zipfile CUR_DIR = os.path.abspath(os.path.curdir) movie_zip = zipfile.ZipFile(CUR_DIR + '/ml-latest-small.zip') movie_zip.extractall() import pandas as pd import numpy as np from scipy import sparse, linalg links = pd.read_csv(CUR_DIR + '/ml-latest-small/links.csv') movies = pd.read_csv(CUR_DIR + '/ml-latest-small/movies.csv') ratings = pd.read_csv(CUR_DIR + '/ml-latest-small/ratings.csv') tags = pd.read_csv(CUR_DIR + '/ml-latest-small/tags.csv') # Content base Filtering movies_genres = pd.concat([movies.loc[:,['movieId','title']],movies.genres.str.split('|', expand=False)], axis=1) movies_genres = movies_genres.explode('genres') movies_genres = pd.get_dummies(movies_genres,columns=['genres']) movies_genres = movies_genres.groupby(['movieId'], as_index=False).sum() assert movies_genres.iloc[:,1:].max().max() == 1 movies_genres.head() ratings = pd.read_csv(CUR_DIR + '/ml-latest-small/ratings.csv') C = 3 total_mean = ratings.rating.mean() ratings['normalized_rating'] = ratings.rating - total_mean b_item = ratings.groupby('movieId').normalized_rating.sum() / (ratings.groupby('movieId').userId.count() + C) ratings = ratings.merge(pd.DataFrame(b_item, columns=['b_item']), left_on='movieId', right_index=True, how='inner') ratings['norm_item_rating'] = ratings.normalized_rating - ratings.b_item b_item b_user = ratings.groupby('userId').norm_item_rating.sum() / (ratings.groupby('userId').movieId.count() + C) ratings = ratings.merge(pd.DataFrame(b_user, columns=['b_user']), left_on='userId', right_index=True, how='inner') b_user ratings['normr_user_item_rating'] = total_mean + ratings.b_item + ratings.b_user urm = ratings.pivot(index='userId', columns='movieId', values='normr_user_item_rating').fillna(0.).values shrink_term = 3 movies_genres_mat = sparse.csr_matrix(movies_genres.iloc[:,1:].values) movie_norms = 
np.sqrt(movies_genres_mat.sum(axis=1)).reshape(-1,1) xy, yx = np.meshgrid(movie_norms, movie_norms) xy, yx = np.array(xy), np.array(yx) cbf_similarity_mat = movies_genres_mat.dot(movies_genres_mat.transpose()) cbf_similarity_mat = np.array(cbf_similarity_mat / (xy * yx + shrink_term)) np.fill_diagonal(cbf_similarity_mat, 0.) cbf_similarity_mat movies['idx'] = movies.index def get_similar_movies(k, movie_name): movie_idx = movies.set_index('title').loc[movie_name,'idx'] movie_idxs = np.argsort(cbf_similarity_mat[movie_idx,:])[-k:] return movies.loc[np.flip(movie_idxs),['title','genres']] def cbf_get_rating_given_user(u_ix, item_ix, k): movie_idxs = np.argsort(cbf_similarity_mat[item_ix,:])[-k:].squeeze() subusers_items = urm[u_ix,movie_idxs].squeeze() masked_subusers_items = np.ma.array(subusers_items, mask=subusers_items == 0.) weights = cbf_similarity_mat[item_ix, movie_idxs].squeeze() w_avg = np.ma.average(a=masked_subusers_items, weights=weights) return np.where(w_avg == np.ma.masked, 0., w_avg), masked_subusers_items, weights cbf_get_rating_given_user(0,0,100) get_similar_movies(10, 'Toy Story 2 (1999)') # Collaborative Filtering import seaborn as sns import matplotlib.pyplot as plt %matplotlib inline fig = plt.figure(figsize=(10,10)) sns.distplot(ratings.rating, bins=50) fig.show() ratings.groupby('movieId').agg({'userId':'count'}).sort_values('userId',ascending=False).loc[:500,:].plot.bar() ratings.groupby('movieId').agg({'rating':np.mean}).sort_values('rating',ascending=False).plot.hist() ratings = pd.read_csv(CUR_DIR + '/ml-latest-small/ratings.csv') C = 3 total_mean = ratings.rating.mean() ratings['normalized_rating'] = ratings.rating - total_mean b_item = ratings.groupby('movieId').normalized_rating.sum() / (ratings.groupby('movieId').userId.count() + C) ratings = ratings.merge(pd.DataFrame(b_item, columns=['b_item']), left_on='movieId', right_index=True, how='inner') ratings['norm_item_rating'] = ratings.normalized_rating - ratings.b_item b_user = 
ratings.groupby('userId').norm_item_rating.sum() / (ratings.groupby('userId').movieId.count() + C) ratings = ratings.merge(pd.DataFrame(b_user, columns=['b_user']), left_on='userId', right_index=True, how='inner') ratings['normr_user_item_rating'] = total_mean + ratings.b_item + ratings.b_user b_user urm = ratings.pivot(index='userId', columns='movieId', values='normr_user_item_rating').fillna(0.).values user_bias = urm.mean(axis=1, keepdims=True) urm_diff = ((urm - user_bias) / np.std(urm, axis=1, keepdims=True)) / np.sqrt(urm.shape[1]) # With this trick I can do dot product for pearson corr cf_user_similarity_mat = urm_diff.dot(urm_diff.T) np.fill_diagonal(cf_user_similarity_mat, 0.) def ucf_get_rating_given_user(u_ix, item_ix, k): u_ixs = np.argsort(cf_user_similarity_mat[u_ix,:])[-k:].squeeze() subusers_item = urm_diff[u_ixs,item_ix].squeeze() masked_subusers_item = np.ma.array(subusers_item, mask=subusers_item == 0) weights = cf_user_similarity_mat[u_ixs, item_ix].squeeze() w_avg = np.ma.average(a=masked_subusers_item, weights=weights) + user_bias[u_ix] return np.where(w_avg == np.ma.masked, 0., w_avg), masked_subusers_item, weights ucf_get_rating_given_user(25,15,100) urm = ratings.pivot(index='userId', columns='movieId', values='normr_user_item_rating').fillna(0.).values user_bias = urm.mean(axis=1, keepdims=True) urm_diff = urm - user_bias urm_diff = urm_diff / np.sqrt((urm_diff ** 2).sum(axis=0, keepdims=True)) cf_item_similarity_mat = urm_diff.T.dot(urm_diff) np.fill_diagonal(cf_item_similarity_mat, 0.) def icf_get_rating_given_user(u_ix, item_ix, k): i_ixs = np.argsort(cf_item_similarity_mat[item_ix,:])[-k:] user_subitems = urm[u_ix,i_ixs].squeeze() masked_user_subitems = np.ma.array(user_subitems, mask=user_subitems == 0.) 
weights = cf_item_similarity_mat[item_ix, i_ixs].squeeze() w_avg = np.ma.average(a=masked_user_subitems, weights=weights) return np.where(w_avg == np.ma.masked, 0., w_avg), masked_user_subitems, weights icf_get_rating_given_user(0,55,200) # Optimize using CF ratings = pd.read_csv(CUR_DIR + '/ml-latest-small/ratings.csv') C = 3 total_mean = ratings.rating.mean() ratings['normalized_rating'] = ratings.rating - total_mean b_item = ratings.groupby('movieId').normalized_rating.sum() / (ratings.groupby('movieId').userId.count() + C) ratings = ratings.merge(pd.DataFrame(b_item, columns=['b_item']), left_on='movieId', right_index=True, how='inner') ratings['norm_item_rating'] = ratings.normalized_rating - ratings.b_item b_user = ratings.groupby('userId').norm_item_rating.sum() / (ratings.groupby('userId').movieId.count() + C) ratings = ratings.merge(pd.DataFrame(b_user, columns=['b_user']), left_on='userId', right_index=True, how='inner') ratings['normr_user_item_rating'] = total_mean + ratings.b_item + ratings.b_user total_mean import tensorflow as tf @tf.function def masked_mse(y_pred, y_true, mask, weights, lamb): y_pred_masked = tf.gather_nd(y_pred,tf.where(mask)) y_true_masked = tf.gather_nd(y_true,tf.where(mask)) return tf.losses.mean_squared_error(y_true_masked, y_pred_masked) + lamb * tf.norm(weights) urm = ratings.pivot(index='userId', columns='movieId', values='normr_user_item_rating').fillna(0.).values urm = tf.constant(urm, dtype=tf.float32) sim_matrix = tf.Variable(tf.random.uniform(shape=[urm.shape[1], urm.shape[1]]), trainable=True) epochs = 600 opti = tf.optimizers.Adam(0.01) mask = tf.not_equal(urm, 0.) loss = masked_mse mses = [] for e in range(epochs): with tf.GradientTape() as gt: gt.watch(sim_matrix) preds = tf.matmul(urm, sim_matrix) preds = tf.clip_by_value(preds, 0., 5.) 
mse = loss(preds, urm, mask, sim_matrix, 0.9) grads = gt.gradient(mse, sim_matrix) opti.apply_gradients(grads_and_vars=zip([grads], [sim_matrix])) mses.append(loss(preds, urm, mask, sim_matrix, 0.)) print(f'Epoch:{e} - Loss: {mses[-1]}') plt.plot(mses) tf.clip_by_value(urm @ sim_matrix, 0., 5.) masked_mse(tf.clip_by_value(urm @ sim_matrix, 0., 5.), urm, mask, sim_matrix, 0.) k=tf.keras ratings = pd.read_csv(CUR_DIR + '/ml-latest-small/ratings.csv') C = 3 total_mean = ratings.rating.mean() ratings['normalized_rating'] = ratings.rating - total_mean b_item = ratings.groupby('movieId').normalized_rating.sum() / (ratings.groupby('movieId').userId.count() + C) ratings = ratings.merge(pd.DataFrame(b_item, columns=['b_item']), left_on='movieId', right_index=True, how='inner') ratings['norm_item_rating'] = ratings.normalized_rating - ratings.b_item b_user = ratings.groupby('userId').norm_item_rating.sum() / (ratings.groupby('userId').movieId.count() + C) ratings = ratings.merge(pd.DataFrame(b_user, columns=['b_user']), left_on='userId', right_index=True, how='inner') ratings['normr_user_item_rating'] = total_mean + ratings.b_item + ratings.b_user b_item urm = tf.constant(ratings.pivot(index='userId', columns='movieId', values='normr_user_item_rating').fillna(0.).values, dtype=tf.float32) mask = tf.not_equal(urm, tf.constant(0., dtype=tf.float32)) non_zero_rating_ixs = tf.where(mask) non_zero_ratings = tf.gather_nd(urm, non_zero_rating_ixs) split = 0.90 split_ix = int(split * non_zero_rating_ixs.shape[0]) non_zero_rating_ixs_shuffled = tf.random.shuffle(tf.range(non_zero_ratings.shape)) train_urm_ratings = tf.gather(non_zero_ratings, non_zero_rating_ixs_shuffled[:split_ix]) train_urm_ratings_ixs = tf.gather(non_zero_rating_ixs, non_zero_rating_ixs_shuffled[:split_ix]) test_urm_ratings = tf.gather(non_zero_ratings, non_zero_rating_ixs_shuffled[split_ix:]) test_urm_ratings_ixs = tf.gather(non_zero_rating_ixs, non_zero_rating_ixs_shuffled[split_ix:]) train_urm = 
tf.scatter_nd(train_urm_ratings_ixs, train_urm_ratings, urm.shape) test_urm = tf.scatter_nd(test_urm_ratings_ixs, test_urm_ratings, urm.shape) test_urm_ratings_ixs @tf.function def masked_mse(y_pred, y_true, mask, weights_1, lamb1, weights_2, lamb2): y_pred_masked = tf.boolean_mask(y_pred, mask) y_true_masked = tf.boolean_mask(y_true, mask) return tf.losses.mean_squared_error(y_true_masked, y_pred_masked) + lamb1 * tf.norm(weights_1) + lamb2 * tf.norm(weights_2) emb_dim = 30 user_emb = tf.Variable(tf.random.uniform(shape=(urm.shape[0],emb_dim)), trainable=True) item_emb = tf.Variable(tf.random.uniform(shape=(urm.shape[1],emb_dim)), trainable=True) mask = tf.not_equal(train_urm, tf.constant(0, dtype=tf.float32)) test_mask = tf.not_equal(test_urm, 0.) epochs = 400 opti = tf.optimizers.Adam() loss = masked_mse train_mses = [] test_mses = [] for e in range(epochs): with tf.GradientTape(watch_accessed_variables=False) as gt1: gt1.watch(user_emb) with tf.GradientTape(watch_accessed_variables=False) as gt2: gt2.watch(item_emb) preds = tf.matmul(user_emb, item_emb, transpose_b=True) mse = loss(preds, train_urm, mask, user_emb, 0.5, item_emb, 0.4) grads = gt1.gradient(mse, user_emb) opti.apply_gradients(grads_and_vars=zip([grads], [user_emb])) grads = gt2.gradient(mse, item_emb) opti.apply_gradients(grads_and_vars=zip([grads], [item_emb])) test_mses.append(masked_mse(tf.matmul(user_emb, item_emb, transpose_b=True), test_urm, test_mask, 0.,0.,0.,0.)) train_mses.append(masked_mse(tf.matmul(user_emb, item_emb, transpose_b=True), train_urm, mask, 0.,0.,0.,0.)) print(f'Epoch: {e} - Train Loss: {train_mses[-1]} - Test Loss: {test_mses[-1]}') import matplotlib.pyplot as plt plt.plot(train_mses) plt.plot(test_mses) masked_mse(tf.matmul(user_emb, item_emb, transpose_b=True), train_urm, mask,0.,0.,0.,0.) test_mask = tf.not_equal(test_urm, 0.) masked_mse(tf.matmul(user_emb, item_emb, transpose_b=True), test_urm, test_mask, 0.,0.,0.,0.) 
tf.boolean_mask(tf.matmul(user_emb, item_emb, transpose_b=True), test_mask) tf.boolean_mask(test_urm, test_mask) ratings = pd.read_csv(CUR_DIR + '/ml-latest-small/ratings.csv') C = 3 total_mean = ratings.rating.mean() ratings['normalized_rating'] = ratings.rating - total_mean b_item = ratings.groupby('movieId').normalized_rating.sum() / (ratings.groupby('movieId').userId.count() + C) ratings = ratings.merge(pd.DataFrame(b_item, columns=['b_item']), left_on='movieId', right_index=True, how='inner') ratings['norm_item_rating'] = ratings.normalized_rating - ratings.b_item b_user = ratings.groupby('userId').norm_item_rating.sum() / (ratings.groupby('userId').movieId.count() + C) ratings = ratings.merge(pd.DataFrame(b_user, columns=['b_user']), left_on='userId', right_index=True, how='inner') ratings['normr_user_item_rating'] = total_mean + ratings.b_item + ratings.b_user @tf.function def masked_mse(y_pred, y_true, mask, weights_1, lamb1, weights_2, lamb2): y_pred_masked = tf.boolean_mask(y_pred, mask) y_true_masked = tf.boolean_mask(y_true, mask) return tf.losses.mean_squared_error(y_true_masked, y_pred_masked) + lamb1 * tf.norm(weights_1) + lamb2 * tf.norm(weights_2) mask = tf.not_equal(urm, tf.constant(0., dtype=tf.float32)) non_zero_rating_ixs = tf.where(mask) non_zero_ratings = tf.gather_nd(urm, non_zero_rating_ixs) split = 0.90 split_ix = int(split * non_zero_rating_ixs.shape[0]) non_zero_rating_ixs_shuffled = tf.random.shuffle(tf.range(non_zero_ratings.shape[0])) train_urm_ratings = tf.gather(non_zero_ratings, non_zero_rating_ixs_shuffled[:split_ix]) train_urm_ratings_ixs = tf.gather(non_zero_rating_ixs, non_zero_rating_ixs_shuffled[:split_ix]) test_urm_ratings = tf.gather(non_zero_ratings, non_zero_rating_ixs_shuffled[split_ix:]) test_urm_ratings_ixs = tf.gather(non_zero_rating_ixs, non_zero_rating_ixs_shuffled[split_ix:]) train_urm = tf.scatter_nd(train_urm_ratings_ixs, train_urm_ratings, urm.shape) test_urm = tf.scatter_nd(test_urm_ratings_ixs, test_urm_ratings,
urm.shape) non_zero_rating_ixs emb_dim = 30 user_emb = tf.Variable(tf.random.uniform(shape=(urm.shape[0],emb_dim)), trainable=True) item_emb = tf.Variable(tf.random.uniform(shape=(urm.shape[1],emb_dim)), trainable=True) user_bias = tf.Variable(tf.random.uniform(shape=(urm.shape[0],1)), trainable=True) item_bias = tf.Variable(tf.random.uniform(shape=(1, urm.shape[1])), trainable=True) mean_rating = tf.Variable(tf.random.uniform(shape=(1,1)), trainable=True) mask = tf.not_equal(train_urm, tf.constant(0, dtype=tf.float32)) test_mask = tf.not_equal(test_urm, 0.) epochs = 3000 opti = tf.optimizers.Adam() loss = masked_mse train_mses = [] test_mses = [] for e in range(epochs): with tf.GradientTape(watch_accessed_variables=False) as gt1: gt1.watch(item_emb) gt1.watch(user_emb) gt1.watch(item_bias) gt1.watch(user_bias) gt1.watch(mean_rating) global_effects = user_bias + item_bias + mean_rating preds = (tf.matmul(user_emb, item_emb, transpose_b=True)) + global_effects preds = tf.clip_by_value(preds, 0., 5.) mse = loss(preds, train_urm, mask, user_emb, 0.5, item_emb, 0.6) grads = gt1.gradient([mse], [user_emb, item_emb, item_bias, user_bias, mean_rating]) opti.apply_gradients(grads_and_vars=zip(grads, [user_emb, item_emb, item_bias, user_bias, mean_rating])) test_mses.append(masked_mse(tf.matmul(user_emb, item_emb, transpose_b=True) + global_effects, test_urm, test_mask, 0.,0.,0.,0.)) train_mses.append(masked_mse(tf.matmul(user_emb, item_emb, transpose_b=True) + global_effects, train_urm, mask, 0.,0.,0.,0.)) print(f'Epoch: {e} - Train Loss: {train_mses[-1]} - Test Loss: {test_mses[-1]}') import matplotlib.pyplot as plt plt.plot(train_mses) plt.plot(test_mses) print(masked_mse(tf.matmul(user_emb, item_emb, transpose_b=True) + global_effects, train_urm, mask,0.,0.,0.,0.)) test_mask = tf.not_equal(test_urm, 0.) masked_mse(tf.matmul(user_emb, item_emb, transpose_b=True) + global_effects, test_urm, test_mask, 0.,0.,0.,0.) 
print(tf.boolean_mask(tf.matmul(user_emb, item_emb, transpose_b=True) + global_effects, test_mask)) print(tf.boolean_mask(urm, test_mask)) # Hybrid Linear Combination def get_hybrid_rating_given_user(u_ix, item_ix, k, alpha, beta): return alpha * cbf_get_rating_given_user(u_ix, item_ix, k)[0] + \ beta * ucf_get_rating_given_user(u_ix, item_ix, k)[0] ratings = pd.read_csv(CUR_DIR + '/ml-latest-small/ratings.csv') C = 3 total_mean = ratings.rating.mean() ratings['normalized_rating'] = ratings.rating - total_mean b_item = ratings.groupby('movieId').normalized_rating.sum() / (ratings.groupby('movieId').userId.count() + C) ratings = ratings.merge(pd.DataFrame(b_item, columns=['b_item']), left_on='movieId', right_index=True, how='inner') ratings['norm_item_rating'] = ratings.normalized_rating - ratings.b_item b_user = ratings.groupby('userId').norm_item_rating.sum() / (ratings.groupby('userId').movieId.count() + C) ratings = ratings.merge(pd.DataFrame(b_user, columns=['b_user']), left_on='userId', right_index=True, how='inner') ratings['normr_user_item_rating'] = total_mean + ratings.b_item + ratings.b_user urm = ratings.pivot(index='userId', columns='movieId', values='normr_user_item_rating').fillna(0.).values get_hybrid_rating_given_user(25,15,100, 0.9, 1.9) ```
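The damped-mean baseline that the cells above compute repeatedly (`total_mean + b_item + b_user`, with damping term `C`) can be checked on a toy ratings table. This is a minimal sketch: the five-row `ratings` frame below is made up for illustration and stands in for the real `ml-latest-small` data.

```python
import pandas as pd

# Toy ratings table (made-up data standing in for ml-latest-small).
ratings = pd.DataFrame({
    'userId':  [1, 1, 2, 2, 3],
    'movieId': [10, 20, 10, 30, 20],
    'rating':  [5.0, 3.0, 4.0, 2.0, 4.0],
})

C = 3  # damping term: shrinks biases computed from few ratings toward 0
total_mean = ratings.rating.mean()
ratings['normalized_rating'] = ratings.rating - total_mean

# Damped item bias: residual sum divided by (count + C) rather than count.
b_item = (ratings.groupby('movieId').normalized_rating.sum()
          / (ratings.groupby('movieId').userId.count() + C))
ratings = ratings.merge(b_item.to_frame('b_item'),
                        left_on='movieId', right_index=True, how='inner')

# Damped user bias, computed on the item-debiased residuals.
ratings['norm_item_rating'] = ratings.normalized_rating - ratings.b_item
b_user = (ratings.groupby('userId').norm_item_rating.sum()
          / (ratings.groupby('userId').movieId.count() + C))
ratings = ratings.merge(b_user.to_frame('b_user'),
                        left_on='userId', right_index=True, how='inner')

# Baseline prediction: global mean plus both biases.
ratings['baseline'] = total_mean + ratings.b_item + ratings.b_user
print(ratings[['userId', 'movieId', 'rating', 'baseline']])
```

Dividing by `count + C` instead of `count` pulls the bias of rarely rated movies and users toward zero, which is the point of the damping term.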
<a href="https://colab.research.google.com/github/KorstiaanW/masakhane-mt/blob/master/KorstiaanW_tn_en_Baseline.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> # Masakhane - Reverse Machine Translation for African Languages (Using JoeyNMT) > ## NB >### - The purpose of this Notebook is to build models that translate African languages (the target language) *into* English (the source language). This will allow us, in the future, to make translations from one African language to another. If you'd like to translate *from* English, please use [this](https://github.com/masakhane-io/masakhane-mt/blob/master/starter_notebook.ipynb) starter notebook instead. >### - We call this reverse training because normally we build models that make translations from the source language (English) to the target language. But in this case we are doing the reverse; building models that make translations from the target language to the source (English) ## Note before beginning: ### - The idea is that you should be able to make minimal changes to this in order to get SOME result for your own translation corpus. ### - The tl;dr: Go to the **"TODO"** comments which will tell you what to update to get up and running ### - If you actually want to have a clue what you're doing, read the text and peek at the links ### - With 100 epochs, it should take around 7 hours to run in Google Colab ### - Once you've gotten a result for your language, please attach and email your notebook that generated it to masakhanetranslation@gmail.com ### - If you care enough and get a chance, doing a brief background on your language would be amazing. See examples in [(Martinus, 2019)](https://arxiv.org/abs/1906.05685) ## Retrieve your data & make a parallel corpus If you want to use the JW300 data referenced on the Masakhane website or in our GitHub repo, you can use `opus-tools` to convert the data into a convenient format.
`opus_read` from that package provides a convenient tool for reading the native aligned XML files and converting them to TMX format. The tool can also be used to fetch relevant files from OPUS on the fly and to filter the data as necessary. [Read the documentation](https://pypi.org/project/opustools-pkg/) for more details. Once you have your corpus files in TMX format (an xml structure which will include the sentences in your target language and your source language in a single file), we recommend reading them into a pandas dataframe. Thankfully, Jade wrote a silly `tmx2dataframe` package which converts your tmx file to a pandas dataframe. Submitted by Tebello Lebesa 2388016 Submitted by Korstiaan Wapenaar 1492459 ``` from google.colab import drive drive.mount('/content/drive') # TODO: Set your source and target languages. Keep in mind, these traditionally use language codes as found here: # These will also become the suffixes of all vocab and corpus files used throughout import os source_language = "en" target_language = "tn" lc = False # If True, lowercase the data. seed = 42 # Random seed for shuffling. tag = "baseline" # Give a unique name to your folder - this is to ensure you don't rewrite any models you've already submitted os.environ["src"] = source_language # Sets them in bash as well, since we often use bash scripts os.environ["tgt"] = target_language os.environ["tag"] = tag # This will save it to a folder in our gdrive instead! !mkdir -p "/content/drive/My Drive/masakhane/$tgt-$src-$tag" os.environ["gdrive_path"] = "/content/drive/My Drive/masakhane/%s-%s-%s" % (target_language, source_language, tag) !echo $gdrive_path # Install opus-tools ! pip install opustools-pkg # Downloading our corpus ! opus_read -d JW300 -s $src -t $tgt -wm moses -w jw300.$src jw300.$tgt -q # extract the corpus file ! gunzip JW300_latest_xml_$src-$tgt.xml.gz # Download the global test set. !
wget https://raw.githubusercontent.com/juliakreutzer/masakhane/master/jw300_utils/test/test.en-any.en # And the specific test set for this language pair. os.environ["trg"] = target_language os.environ["src"] = source_language ! wget https://raw.githubusercontent.com/juliakreutzer/masakhane/master/jw300_utils/test/test.en-$trg.en ! mv test.en-$trg.en test.en ! wget https://raw.githubusercontent.com/juliakreutzer/masakhane/master/jw300_utils/test/test.en-$trg.$trg ! mv test.en-$trg.$trg test.$trg # Read the test data to filter from train and dev splits. # Store english portion in set for quick filtering checks. en_test_sents = set() filter_test_sents = "test.en-any.en" j = 0 with open(filter_test_sents) as f: for line in f: en_test_sents.add(line.strip()) j += 1 print('Loaded {} global test sentences to filter from the training/dev data.'.format(j)) import pandas as pd # TMX file to dataframe source_file = 'jw300.' + source_language target_file = 'jw300.' + target_language source = [] target = [] skip_lines = [] # Collect the line numbers of the source portion to skip the same lines for the target portion. with open(source_file) as f: for i, line in enumerate(f): # Skip sentences that are contained in the test set. if line.strip() not in en_test_sents: source.append(line.strip()) else: skip_lines.append(i) with open(target_file) as f: for j, line in enumerate(f): # Only add to corpus if corresponding source was not skipped. 
if j not in skip_lines: target.append(line.strip()) print('Loaded data and skipped {}/{} lines since contained in test set.'.format(len(skip_lines), i)) df = pd.DataFrame(zip(source, target), columns=['source_sentence', 'target_sentence']) # if you get TypeError: data argument can't be an iterator is because of your zip version run this below #df = pd.DataFrame(list(zip(source, target)), columns=['source_sentence', 'target_sentence']) df.head(3) ``` ## Pre-processing and export It is generally a good idea to remove duplicate translations and conflicting translations from the corpus. In practice, these public corpora include some number of these that need to be cleaned. In addition we will split our data into dev/test/train and export to the filesystem. ``` # drop duplicate translations df_pp = df.drop_duplicates() # drop conflicting translations # (this is optional and something that you might want to comment out # depending on the size of your corpus) df_pp.drop_duplicates(subset='source_sentence', inplace=True) df_pp.drop_duplicates(subset='target_sentence', inplace=True) # Shuffle the data to remove bias in dev set selection. df_pp = df_pp.sample(frac=1, random_state=seed).reset_index(drop=True) # Install fuzzy wuzzy to remove "almost duplicate" sentences in the # test and training sets. ! pip install fuzzywuzzy ! pip install python-Levenshtein import time from fuzzywuzzy import process import numpy as np from os import cpu_count from functools import partial from multiprocessing import Pool # reset the index of the training set after previous filtering df_pp.reset_index(drop=False, inplace=True) # Remove samples from the training data set if they "almost overlap" with the # samples in the test set. # Filtering function. Adjust pad to narrow down the candidate matches to # within a certain length of characters of the given sample. 
def fuzzfilter(sample, candidates, pad): candidates = [x for x in candidates if len(x) <= len(sample)+pad and len(x) >= len(sample)-pad] if len(candidates) > 0: return process.extractOne(sample, candidates)[1] else: return np.nan # start_time = time.time() # ### iterating over pandas dataframe rows is not recomended, let use multi processing to apply the function # with Pool(cpu_count()-1) as pool: # scores = pool.map(partial(fuzzfilter, candidates=list(en_test_sents), pad=5), df_pp['source_sentence']) # hours, rem = divmod(time.time() - start_time, 3600) # minutes, seconds = divmod(rem, 60) # print("done in {}h:{}min:{}seconds".format(hours, minutes, seconds)) # # Filter out "almost overlapping samples" # df_pp = df_pp.assign(scores=scores) # df_pp = df_pp[df_pp['scores'] < 95] # This section does the split between train/dev for the parallel corpora then saves them as separate files # We use 1000 dev test and the given test set. import csv # Do the split between dev/train and create parallel corpora num_dev_patterns = 1000 # Optional: lower case the corpora - this will make it easier to generalize, but without proper casing. 
if lc: # Julia: making lowercasing optional df_pp["source_sentence"] = df_pp["source_sentence"].str.lower() df_pp["target_sentence"] = df_pp["target_sentence"].str.lower() # Julia: test sets are already generated dev = df_pp.tail(num_dev_patterns) # Herman: Error in original stripped = df_pp.drop(df_pp.tail(num_dev_patterns).index) with open("train."+source_language, "w") as src_file, open("train."+target_language, "w") as trg_file: for index, row in stripped.iterrows(): src_file.write(row["source_sentence"]+"\n") trg_file.write(row["target_sentence"]+"\n") with open("dev."+source_language, "w") as src_file, open("dev."+target_language, "w") as trg_file: for index, row in dev.iterrows(): src_file.write(row["source_sentence"]+"\n") trg_file.write(row["target_sentence"]+"\n") #stripped[["source_sentence"]].to_csv("train."+source_language, header=False, index=False) # Herman: Added `header=False` everywhere #stripped[["target_sentence"]].to_csv("train."+target_language, header=False, index=False) # Julia: Problematic handling of quotation marks. #dev[["source_sentence"]].to_csv("dev."+source_language, header=False, index=False) #dev[["target_sentence"]].to_csv("dev."+target_language, header=False, index=False) # Double-check the format below. There should be no extra quotation marks or weird characters. ! head train.* ! head dev.* ``` --- ## Installation of JoeyNMT JoeyNMT is a simple, minimalist NMT package which is useful for learning and teaching. Check out the documentation for JoeyNMT [here](https://joeynmt.readthedocs.io) ``` # Install JoeyNMT ! git clone https://github.com/joeynmt/joeynmt.git ! cd joeynmt; pip3 install . # Install PyTorch with GPU support (CUDA 10.1 build). !
pip install torch==1.9.0+cu101 -f https://download.pytorch.org/whl/torch_stable.html ``` # Preprocessing the Data into Subword BPE Tokens - One of the most powerful improvements for agglutinative languages (a feature of most Bantu languages) is using BPE tokenization [ (Sennrich, 2015) ](https://arxiv.org/abs/1508.07909). - It was also shown that by optimizing the number of BPE codes we significantly improve results for low-resourced languages [(Sennrich, 2019)](https://www.aclweb.org/anthology/P19-1021) [(Martinus, 2019)](https://arxiv.org/abs/1906.05685) - Below we have the scripts for doing BPE tokenization of our data. We use 4000 tokens as recommended by [(Sennrich, 2019)](https://www.aclweb.org/anthology/P19-1021). You do not need to change anything. Simply running the below will be suitable. ``` # One of the huge boosts in NMT performance was to use a different method of tokenizing. # Usually, NMT would tokenize by words. However, using a method called BPE gave amazing boosts to performance # Do subword NMT from os import path os.environ["src"] = source_language # Sets them in bash as well, since we often use bash scripts os.environ["tgt"] = target_language # Learn BPEs on the training data. os.environ["data_path"] = path.join("joeynmt", "data",target_language + source_language ) # Herman! ! subword-nmt learn-joint-bpe-and-vocab --input train.$src train.$tgt -s 4000 -o bpe.codes.4000 --write-vocabulary vocab.$src vocab.$tgt # Apply the BPE splits to the train, dev and test data. ! subword-nmt apply-bpe -c bpe.codes.4000 --vocabulary vocab.$src < train.$src > train.bpe.$src ! subword-nmt apply-bpe -c bpe.codes.4000 --vocabulary vocab.$tgt < train.$tgt > train.bpe.$tgt ! subword-nmt apply-bpe -c bpe.codes.4000 --vocabulary vocab.$src < dev.$src > dev.bpe.$src ! subword-nmt apply-bpe -c bpe.codes.4000 --vocabulary vocab.$tgt < dev.$tgt > dev.bpe.$tgt ! subword-nmt apply-bpe -c bpe.codes.4000 --vocabulary vocab.$src < test.$src > test.bpe.$src !
subword-nmt apply-bpe -c bpe.codes.4000 --vocabulary vocab.$tgt < test.$tgt > test.bpe.$tgt # Create directory, move everything we care about to the correct location ! mkdir -p $data_path ! cp train.* $data_path ! cp test.* $data_path ! cp dev.* $data_path ! cp bpe.codes.4000 $data_path ! ls $data_path # Also move everything we care about to a mounted location in google drive (relevant if running in colab) at gdrive_path ! cp train.* "$gdrive_path" ! cp test.* "$gdrive_path" ! cp dev.* "$gdrive_path" ! cp bpe.codes.4000 "$gdrive_path" ! ls "$gdrive_path" # Create that vocab using build_vocab ! sudo chmod 777 joeynmt/scripts/build_vocab.py ! joeynmt/scripts/build_vocab.py joeynmt/data/$tgt$src/train.bpe.$src joeynmt/data/$tgt$src/train.bpe.$tgt --output_path joeynmt/data/$tgt$src/vocab.txt # Some output ! echo "BPE Setswana Sentences" ! tail -n 5 test.bpe.$tgt ! echo "Combined BPE Vocab" ! tail -n 10 joeynmt/data/$tgt$src/vocab.txt # Herman # Also move everything we care about to a mounted location in google drive (relevant if running in colab) at gdrive_path ! cp train.* "$gdrive_path" ! cp test.* "$gdrive_path" ! cp dev.* "$gdrive_path" ! cp bpe.codes.4000 "$gdrive_path" ! ls "$gdrive_path" ``` # Creating the JoeyNMT Config JoeyNMT requires a yaml config. We provide a template below. We've also set a number of defaults with it, that you may play with! - We used the Transformer architecture - We set our dropout to reasonably high: 0.3 (recommended in [(Sennrich, 2019)](https://www.aclweb.org/anthology/P19-1021)) Things worth playing with: - The batch size (also recommended to change for low-resourced languages) - The number of epochs (we've set it at 30 just so it runs in about an hour, for testing purposes) - The decoder options (beam_size, alpha) - Evaluation metrics (BLEU versus chrF) ``` # This creates the config file for our JoeyNMT system.
It might seem overwhelming so we've provided a couple of useful parameters you'll need to update # (You can of course play with all the parameters if you'd like!) name = '%s%s' % (target_language, source_language) # gdrive_path = os.environ["gdrive_path"] # Create the config config = """ name: "{target_language}{source_language}_reverse_transformer" data: src: "{target_language}" trg: "{source_language}" train: "data/{name}/train.bpe" dev: "data/{name}/dev.bpe" test: "data/{name}/test.bpe" level: "bpe" lowercase: False max_sent_length: 100 src_vocab: "data/{name}/vocab.txt" trg_vocab: "data/{name}/vocab.txt" testing: beam_size: 5 alpha: 1.0 training: #load_model: "{gdrive_path}/models/{name}_transformer/1.ckpt" # if uncommented, load a pre-trained model from this checkpoint random_seed: 42 optimizer: "adam" normalization: "tokens" adam_betas: [0.9, 0.999] scheduling: "noam" # TODO: try switching from plateau to Noam scheduling (JoeyNMT expects the literal value "noam") patience: 5 # For plateau: decrease learning rate by decrease_factor if validation score has not improved for this many validation rounds. learning_rate_factor: 0.5 # factor for Noam scheduler (used with Transformer) learning_rate_warmup: 1000 # warmup steps for Noam scheduler (used with Transformer) decrease_factor: 0.7 loss: "crossentropy" learning_rate: 0.0003 learning_rate_min: 0.00000001 weight_decay: 0.0 label_smoothing: 0.1 batch_size: 4096 batch_type: "token" eval_batch_size: 3600 eval_batch_type: "token" batch_multiplier: 1 early_stopping_metric: "ppl" epochs: 3 # TODO: Decrease when playing around and checking that it works. Around 30 is sufficient to check if it's working at all. 5 - 3 validation_freq: 1000 # TODO: Set to at least once per epoch. logging_freq: 100 eval_metric: "bleu" model_dir: "models/{name}_reverse_transformer" overwrite: True # TODO: Set to True if you want to overwrite possibly existing models.
shuffle: True use_cuda: True max_output_length: 100 print_valid_sents: [0, 1, 2, 3] keep_last_ckpts: 3 model: initializer: "xavier" bias_initializer: "zeros" init_gain: 1.0 embed_initializer: "xavier" embed_init_gain: 1.0 tied_embeddings: True tied_softmax: True encoder: type: "transformer" num_layers: 6 num_heads: 4 # TODO: Increase to 8 for larger data. 4 - 8 embeddings: embedding_dim: 256 # TODO: Increase to 512 for larger data. 256 - 512 scale: True dropout: 0.2 # typically ff_size = 4 x hidden_size hidden_size: 256 # TODO: Increase to 512 for larger data. 256 - 512 ff_size: 2048 # TODO: Increase to 2048 for larger data. 1024 - 2048 dropout: 0.3 decoder: type: "transformer" num_layers: 6 num_heads: 4 # TODO: Increase to 8 for larger data. 4 - 8 embeddings: embedding_dim: 256 # TODO: Increase to 512 for larger data. 256 - 512 scale: True dropout: 0.2 # typically ff_size = 4 x hidden_size hidden_size: 256 # TODO: Increase to 512 for larger data. 256 - 512 ff_size: 2048 # TODO: Increase to 2048 for larger data. 1024 - 2048 dropout: 0.3 """.format(name=name, gdrive_path=os.environ["gdrive_path"], source_language=source_language, target_language=target_language) with open("joeynmt/configs/transformer_reverse_{name}.yaml".format(name=name),'w') as f: f.write(config) ``` # Train the Model This single line of JoeyNMT runs the training using the config we made above ``` # Train the model # You can press Ctrl-C to stop. And then run the next cell to save your checkpoints! !cd joeynmt; python3 -m joeynmt train configs/transformer_reverse_$tgt$src.yaml # Copy the created models from the notebook storage to google drive for persistent storage !cp -r joeynmt/models/${tgt}${src}_reverse_transformer/* "$gdrive_path/models/${tgt}${src}_reverse_transformer/" # Output our validation accuracy ! cat "$gdrive_path/models/${tgt}${src}_reverse_transformer/validations.txt" # Test our model !
cd joeynmt; python3 -m joeynmt test "$gdrive_path/models/${tgt}${src}_reverse_transformer/config.yaml" while True:pass ```
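For intuition, the merge-learning step that `subword-nmt learn-joint-bpe-and-vocab` performs above — repeatedly replacing the most frequent adjacent symbol pair with a new merged symbol — can be sketched minimally. The toy vocabulary and the 10-merge budget below are made up for illustration; the real pipeline learns 4000 joint merges over the tokenized corpus:

```python
import re
from collections import Counter

def get_pair_stats(vocab):
    """Count adjacent symbol pairs, weighted by word frequency."""
    pairs = Counter()
    for word, freq in vocab.items():
        symbols = word.split()
        for pair in zip(symbols, symbols[1:]):
            pairs[pair] += freq
    return pairs

def merge_pair(pair, vocab):
    """Rewrite every word, replacing the pair with its concatenation."""
    pattern = re.compile(r'(?<!\S)' + re.escape(' '.join(pair)) + r'(?!\S)')
    return {pattern.sub(''.join(pair), word): freq for word, freq in vocab.items()}

# Toy corpus: words as space-separated symbols, with an end-of-word marker.
vocab = {'l o w </w>': 5, 'l o w e r </w>': 2,
         'n e w e s t </w>': 6, 'w i d e s t </w>': 3}
merges = []
for _ in range(10):  # the real setup learns 4000 merges
    stats = get_pair_stats(vocab)
    best = max(stats, key=stats.get)  # most frequent adjacent pair
    vocab = merge_pair(best, vocab)
    merges.append(best)
print(merges[:3])  # frequent subwords such as 'es' and 'est' emerge first
```

Applying the learned merges in order to unseen text is what `apply-bpe` then does, which is why frequent morphemes of agglutinative languages end up as single tokens.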
# Python Data Science > Dataframe Wrangling with Pandas Kuo, Yao-Jen from [DATAINPOINT](https://www.datainpoint.com/) ``` import requests import json from datetime import date from datetime import timedelta ``` ## TL; DR > In this lecture, we will talk about essential data wrangling skills in `pandas`. ## Essential Data Wrangling Skills in `pandas` ## What is `pandas`? > Flexible and powerful data analysis / manipulation library for Python, providing labeled data structures similar to R data.frame objects, statistical functions, and much more. Source: <https://github.com/pandas-dev/pandas> ## Why `pandas`? Python used to have a weak spot in its analysis capability because it did not have an appropriate structure for handling common tabular datasets. Pythonists had to switch to a more data-centric language like R or Matlab during the analysis stage until `pandas` arrived. ## Import Pandas with the `import` command Pandas is officially aliased as `pd`. ``` import pandas as pd ``` ## If Pandas is not installed, we will encounter a `ModuleNotFoundError` ``` Traceback (most recent call last): File "<stdin>", line 1, in <module> ModuleNotFoundError: No module named 'pandas' ``` ## Use `pip install` at the Terminal to install pandas ```bash pip install pandas ``` ## Check version and its installation file path - `__version__` attribute - `__file__` attribute ``` print(pd.__version__) print(pd.__file__) ``` ## What does `pandas` mean?
![](https://media.giphy.com/media/46Zj6ze2Z2t4k/giphy.gif) Source: <https://giphy.com/> ## It turns out the name has nothing to do with the animal; it refers to three primary classes created by its author [Wes McKinney](https://wesmckinney.com/) - **Pan**el (deprecated since version 0.20.0) - **Da**taFrame - **S**eries ## In order to master `pandas`, it is vital to understand the relationships between `Index`, `ndarray`, `Series`, and `DataFrame` - An `Index` and an `ndarray` assemble a `Series` - Several `Series` sharing the same `Index` can then form a `DataFrame` ## `Index` from Pandas The simplest way to create an `Index` is using `pd.Index()`. ``` prime_indices = pd.Index([2, 3, 5, 7, 11, 13, 17, 19, 23, 29]) print(type(prime_indices)) ``` ## An `Index` is like a combination of `tuple` and `set` - It is immutable. - It has the characteristics of a set. ``` # It is immutable prime_indices = pd.Index([2, 3, 5, 7, 11, 13, 17, 19, 23, 29]) #prime_indices[-1] = 31 # It has the characteristics of a set odd_indices = pd.Index(range(1, 30, 2)) print(prime_indices.intersection(odd_indices)) # prime_indices & odd_indices print(prime_indices.union(odd_indices)) # prime_indices | odd_indices print(prime_indices.symmetric_difference(odd_indices)) # prime_indices ^ odd_indices print(prime_indices.difference(odd_indices)) print(odd_indices.difference(prime_indices)) ``` ## `Series` from Pandas The simplest way to create a `Series` is using `pd.Series()`. ``` prime_series = pd.Series([2, 3, 5, 7, 11, 13, 17, 19, 23, 29]) print(type(prime_series)) ``` ## A `Series` is a combination of `Index` and `ndarray` ``` print(type(prime_series.index)) print(type(prime_series.values)) ``` ## `DataFrame` from Pandas The simplest way to create a `DataFrame` is using `pd.DataFrame()`.
``` movie_df = pd.DataFrame() movie_df["title"] = ["The Shawshank Redemption", "The Dark Knight", "Schindler's List", "Forrest Gump", "Inception"] movie_df["imdb_rating"] = [9.3, 9.0, 8.9, 8.8, 8.7] print(type(movie_df)) ``` ## A `DataFrame` is a combination of multiple `Series` sharing the same `Index` ``` print(type(movie_df.index)) print(type(movie_df["title"])) print(type(movie_df["imdb_rating"])) ``` ## Review of the definition of modern data science > Modern data science is a huge field; it involves applications and tools like importing, tidying, transformation, visualization, modeling, and communication. Surrounding all these is programming. ![Imgur](https://i.imgur.com/din6Ig6.png) Source: [R for Data Science](https://r4ds.had.co.nz/) ## The key functionalities analysts rely on `pandas` for are - Importing - Tidying - Transforming ## Tidying and transforming together is also known as WRANGLING ![](https://media.giphy.com/media/MnlZWRFHR4xruE4N2Z/giphy.gif) Source: <https://giphy.com/> ## Importing ## `pandas` has a wide range of functions for importing tabular data - Flat text file - Database table - Spreadsheet - Array of JSONs - HTML `<table></table>` tags - ...etc.
Source: <https://pandas.pydata.org/pandas-docs/stable/user_guide/io.html> ## Using `read_csv` function for flat text files ``` from datetime import date from datetime import timedelta def get_covid19_latest_daily_report(): """ Get latest daily report (world) from: https://github.com/CSSEGISandData/COVID-19/tree/master/csse_covid_19_data/csse_covid_19_daily_reports """ data_date = date.today() data_date_delta = timedelta(days=1) daily_report_url_no_date = "https://raw.githubusercontent.com/CSSEGISandData/COVID-19/master/csse_covid_19_data/csse_covid_19_daily_reports/{}.csv" while True: data_date_str = date.strftime(data_date, '%m-%d-%Y') daily_report_url = daily_report_url_no_date.format(data_date_str) try: print("Trying to load the daily report for {}".format(data_date_str)) daily_report = pd.read_csv(daily_report_url) print("File found; fetched the daily report for {}".format(data_date_str)) break except: print("The file for {} has not been uploaded yet".format(data_date_str)) data_date -= data_date_delta # data_date = data_date - data_date_delta return daily_report daily_report = get_covid19_latest_daily_report() ``` ## Using `read_sql` function for database tables ```python import sqlite3 conn = sqlite3.connect('YOUR_DATABASE.db') sql_query = """ SELECT * FROM YOUR_TABLE LIMIT 100; """ pd.read_sql(sql_query, conn) ``` ## Using `read_excel` function for spreadsheets ```python excel_file_path = "PATH/TO/YOUR/EXCEL/FILE" pd.read_excel(excel_file_path) ``` ## Using `read_json` function for array of JSONs ```python json_file_path = "PATH/TO/YOUR/JSON/FILE" pd.read_json(json_file_path) ``` ## What is JSON? > JSON (JavaScript Object Notation) is a lightweight data-interchange format. It is easy for humans to read and write. It is easy for machines to parse and generate. It is based on a subset of the JavaScript Programming Language. JSON is a text format that is completely language independent but uses conventions that are familiar to programmers of the C-family of languages, including C, C++, C#, Java, JavaScript, Perl, Python, and many others.
These properties make JSON an ideal data-interchange language. Source: <https://www.json.org/json-en.html> ## Using `read_html` function for HTML `<table></table>` tags > The `<table>` tag defines an HTML table. An HTML table consists of one `<table>` element and one or more `<tr>`, `<th>`, and `<td>` elements. The `<tr>` element defines a table row, the `<th>` element defines a table header, and the `<td>` element defines a table cell. Source: <https://www.w3schools.com/default.asp> ``` request_url = "https://www.imdb.com/chart/top" html_tables = pd.read_html(request_url) print(type(html_tables)) print(len(html_tables)) html_tables[0] ``` ## Basic attributes and methods ## Basic attributes of a `DataFrame` object - `shape` - `dtypes` - `index` - `columns` ``` print(daily_report.shape) print(daily_report.dtypes) print(daily_report.index) print(daily_report.columns) ``` ## Basic methods of a `DataFrame` object - `head(n)` - `tail(n)` - `describe` - `info` - `set_index` - `reset_index` ## `head(n)` returns the top n observations with header ``` daily_report.head() # n is default to 5 ``` ## `tail(n)` returns the bottom n observations with header ``` daily_report.tail(3) ``` ## `describe` returns the descriptive summary for numeric columns ``` daily_report.describe() ``` ## `info` returns the concise information of the dataframe ``` daily_report.info() ``` ## `set_index` replaces current `Index` with a specific variable ``` daily_report.set_index('Combined_Key') ``` ## `reset_index` resets current `Index` with default `RangeIndex` ``` daily_report.set_index('Combined_Key').reset_index() ``` ## Basic Dataframe Wrangling ## Basic wrangling is like writing SQL queries - Selecting: `SELECT FROM` - Filtering: `WHERE` - Subsetting: `SELECT FROM WHERE` - Indexing - Sorting: `ORDER BY` - Deriving - Summarizing - Summarizing and Grouping: `GROUP BY` ## Selecting a column as `Series` ``` print(daily_report['Country_Region']) print(type(daily_report['Country_Region'])) ``` ## 
Selecting a column as `DataFrame` ``` print(type(daily_report[['Country_Region']])) daily_report[['Country_Region']] ``` ## Selecting multiple columns as `DataFrame`, for sure ``` cols = ['Country_Region', 'Province_State'] daily_report[cols] ``` ## Filtering rows with conditional statements ``` is_taiwan = daily_report['Country_Region'] == 'Taiwan*' daily_report[is_taiwan] ``` ## Subsetting columns and rows simultaneously ``` cols_to_select = ['Country_Region', 'Confirmed'] rows_to_filter = daily_report['Country_Region'] == 'Taiwan*' daily_report[rows_to_filter][cols_to_select] ``` ## Indexing `DataFrame` with - `loc[]` - `iloc[]` ## `loc[]` is indexing `DataFrame` with `Index` ``` print(daily_report.loc[3388, ['Country_Region', 'Confirmed']]) # as Series daily_report.loc[[3388], ['Country_Region', 'Confirmed']] # as DataFrame ``` ## `iloc[]` is indexing `DataFrame` with absolute position ``` print(daily_report.iloc[3388, [3, 7]]) # as Series daily_report.iloc[[3388], [3, 7]] # as DataFrame ``` ## Sorting `DataFrame` with - `sort_values` - `sort_index` ## `sort_values` sorts `DataFrame` with specific columns ``` daily_report.sort_values(['Country_Region', 'Confirmed']) ``` ## `sort_index` sorts `DataFrame` with the `Index` of `DataFrame` ``` daily_report.sort_index(ascending=False) ``` ## Deriving new variables from `DataFrame` - Simple operations - `pd.cut` - `map` with a `dict` - `map` with a function(or a lambda expression) ## Deriving new variable with simple operations ``` active = daily_report['Confirmed'] - daily_report['Deaths'] - daily_report['Recovered'] print(active) ``` ## Deriving categorical from numerical with `pd.cut` ``` import numpy as np cut_bins = [0, 1000, 10000, 100000, np.Inf] cut_labels = ['Less than 1000', 'Between 1000 and 10000', 'Between 10000 and 100000', 'Above 100000'] confirmed_categorical = pd.cut(daily_report['Confirmed'], bins=cut_bins, labels=cut_labels, right=False) print(confirmed_categorical) ``` ## Deriving categorical from 
categorical with `map` - Passing a `dict` - Passing a function (or a lambda expression) ``` # Passing a dict country_name = { 'Taiwan*': 'Taiwan' } daily_report_tw = daily_report[is_taiwan] daily_report_tw['Country_Region'].map(country_name) # Passing a function def is_us(x): if x == 'US': return 'US' else: return 'Not US' daily_report['Country_Region'].map(is_us) # Passing a lambda expression daily_report['Country_Region'].map(lambda x: 'US' if x == 'US' else 'Not US') ``` ## Summarizing `DataFrame` with aggregate methods ``` daily_report['Confirmed'].sum() ``` ## Summarizing and grouping `DataFrame` with aggregate methods ``` daily_report.groupby('Country_Region')['Confirmed'].sum() ``` ## More `DataFrame` Wrangling Operations ## Other common `DataFrame` wrangling operations include - Dealing with missing values - Dealing with text values - Reshaping dataframes - Merging and joining dataframes ## Dealing with missing values - Using `isnull` or `notnull` to check whether `np.nan` exists - Using `dropna` to drop rows with `np.nan` - Using `fillna` to fill `np.nan` with specific values ``` print(daily_report['Province_State'].size) print(daily_report['Province_State'].isnull().sum()) print(daily_report['Province_State'].notnull().sum()) print(daily_report.dropna().shape) print(daily_report['FIPS'].fillna(0)) ``` ## Splitting strings with `str.split` as a `Series` ``` split_pattern = ', ' daily_report['Combined_Key'].str.split(split_pattern) ``` ## Splitting strings with `str.split` as a `DataFrame` ``` split_pattern = ', ' daily_report['Combined_Key'].str.split(split_pattern, expand=True) ``` ## Replacing strings with `str.replace` ``` daily_report['Combined_Key'].str.replace(', ', ';') ``` ## Testing for strings that match or contain a pattern with `str.contains` ``` print(daily_report['Country_Region'].str.contains('land').sum()) daily_report[daily_report['Country_Region'].str.contains('land')] ``` ## Reshaping dataframes from wide to long format with `pd.melt` A common problem is
a dataset where some of the column names are not names of variables, but values of a variable. ``` ts_confirmed_global_url = "https://raw.githubusercontent.com/CSSEGISandData/COVID-19/master/csse_covid_19_data/csse_covid_19_time_series/time_series_covid19_confirmed_global.csv" ts_confirmed_global = pd.read_csv(ts_confirmed_global_url) ts_confirmed_global ``` ## We can pivot the columns into a new pair of variables To describe that operation we need four parameters: - The set of columns whose names are not values - The set of columns whose names are values - The name of the variable to move the column names to - The name of the variable to move the column values to ## In this example, the four parameters are - `id_vars`: `['Province/State', 'Country/Region', 'Lat', 'Long']` - `value_vars`: The columns from `1/22/20` to the last column (the default when `value_vars` is omitted, as below) - `var_name`: Let's name it `Date` - `value_name`: Let's name it `Confirmed` ``` idVars = ['Province/State', 'Country/Region', 'Lat', 'Long'] ts_confirmed_global_long = pd.melt(ts_confirmed_global, id_vars=idVars, var_name='Date', value_name='Confirmed') ts_confirmed_global_long ``` ## Merging and joining dataframes - `merge` on column names - `join` on index ## Using the `merge` function to join dataframes on columns ``` left_df = daily_report[daily_report['Country_Region'].isin(['Taiwan*', 'Japan'])] right_df = ts_confirmed_global_long[ts_confirmed_global_long['Country/Region'].isin(['Taiwan*', 'Korea, South'])] # default: inner join pd.merge(left_df, right_df, left_on='Country_Region', right_on='Country/Region') # left join pd.merge(left_df, right_df, left_on='Country_Region', right_on='Country/Region', how='left') # right join pd.merge(left_df, right_df, left_on='Country_Region', right_on='Country/Region', how='right') ``` ## Using the `join` method to join dataframes on index ``` left_df = daily_report[daily_report['Country_Region'].isin(['Taiwan*', 'Japan'])] right_df =
ts_confirmed_global_long[ts_confirmed_global_long['Country/Region'].isin(['Taiwan*', 'Korea, South'])] left_df = left_df.set_index('Country_Region') right_df = right_df.set_index('Country/Region') # default: left join left_df.join(right_df, lsuffix='_x', rsuffix='_y') # inner join left_df.join(right_df, lsuffix='_x', rsuffix='_y', how='inner') # right join left_df.join(right_df, lsuffix='_x', rsuffix='_y', how='right') ```
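When joining on columns as above, it is easy to lose track of which rows actually matched. The `indicator` parameter of `pd.merge` is a handy audit tool — a minimal sketch on hypothetical toy frames (not the COVID data itself), mirroring the country-name mismatch above:

```python
import pandas as pd

# Toy frames with deliberately different key column names
left = pd.DataFrame({'Country_Region': ['Taiwan*', 'Japan'],
                     'Confirmed': [100, 200]})
right = pd.DataFrame({'Country/Region': ['Taiwan*', 'Korea, South'],
                      'Date': ['5/1/20', '5/1/20']})

# indicator=True adds a _merge column recording where each row came from
merged = pd.merge(left, right,
                  left_on='Country_Region', right_on='Country/Region',
                  how='outer', indicator=True)
print(merged[['Country_Region', 'Country/Region', '_merge']])
```

With `how='outer'` the result keeps unmatched rows from both sides, and `_merge` takes the values `'both'`, `'left_only'`, or `'right_only'` — useful for spotting keys like `Taiwan*` vs `Taiwan` that silently fail to match.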
``` import numpy as np ## now let's see how to display the shape of arrays created with numpy # create a 3D array arr_3d = np.zeros((3,3,3)) # and print it print(arr_3d) # check its shape print("the shape is: {}".format(arr_3d.shape)) # now let's see how to convert a python list into a numpy array # first we create a list lista = [1,2,3,45,5,6,7,8,8] lista_array = np.array(lista) print(lista_array) print(lista_array.shape) # create an array initialized with zeros array_ceros = np.zeros(10) print(array_ceros) array_ceros_dimensiones = np.zeros((5,2,3)) print(array_ceros_dimensiones) print("and the shape is: {}".format(array_ceros_dimensiones.shape)) # create an array from a range # the range takes the same options as a regular python range # a single number -> (number): goes from zero up to (number - 1) # two numbers -> (start, stop): from start up to (stop - 1) # three numbers -> (start, stop, step): like the previous one, but the third value is the step between elements # now the examples array_rango_1 = np.arange(100) array_rango_2 = np.arange(0,100) array_rango_3 = np.arange(0,100,2) print(array_rango_1) print(array_rango_2) print(array_rango_3) # to get evenly spaced values between two numbers, use linspace # .linspace(start, stop, number_of_points) array_espacios = np.linspace(0,10,100) print(array_espacios) # now let's see the difference between a regular array and one converted to integer cube_normal = np.zeros((3,3,3)) + 1 cube_integer = np.zeros((3,3,3)).astype(int) + 1 cube_float = np.ones((3, 3, 3)).astype(np.float16) # note the decimal point applied by default print("Regular cube: \n{}".format(cube_normal)) print("Integer cube: \n{}".format(cube_integer)) print("Float cube: \n{}".format(cube_float)) # we can also use data types from numpy itself array_entero = np.zeros(3, dtype=int) print("the integer array using a python data type: \n{}".format(array_entero)) array_flotante = np.zeros(3, dtype=np.float32) print("the float array using a numpy data type: \n{}".format(array_flotante)) # why np.float32? because python has no native float32 type # now something interesting # create an array of 1000 elements arr_1d = np.arange(1000) print(arr_1d) # we can reshape it into a 3d matrix, and numpy takes care of this automatically arr_3d_1 = arr_1d.reshape((10,10,10)) print(arr_3d_1) # it can also be reshaped this way arr_3d_2 = np.reshape(arr_1d, (10, 10, 10)) print(arr_3d_2) arr4d = np.zeros((10, 10, 10, 10)) print(arr4d) # ravel flattens an array of any shape back to 1d arr1d = arr4d.ravel() print(arr1d) print("the shape is: {}".format(arr1d.shape)) # structured arrays can hold columns of different types recarr = np.zeros((2,), dtype=('i4,f4,a10')) toadd = [(1,2.,'Hello'),(2,3.,"World")] recarr[:] = toadd print(recarr) recarr = np.zeros((2,), dtype=('i4,f4,a10')) print(recarr) # the columns can also be built separately col1 = np.arange(2) + 1 col2 = np.arange(2, dtype=np.float32) col3 = ['Hello', 'World'] print(col1) print(col2) print(col3) ```
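The record-array snippet above builds `col1`, `col2`, and `col3` but never assembles them. A named structured dtype makes that last step clearer — a small sketch, with the field names `id`/`value`/`label` chosen here purely for illustration:

```python
import numpy as np

# A structured dtype with readable field names instead of the default f0, f1, f2
dt = np.dtype([('id', 'i4'), ('value', 'f4'), ('label', 'S10')])
recarr = np.zeros(2, dtype=dt)

# Fill the record array column by column
recarr['id'] = np.arange(2) + 1                  # integer column
recarr['value'] = np.arange(2, dtype=np.float32)  # float column
recarr['label'] = ['Hello', 'World']              # stored as fixed-width byte strings
print(recarr)
```

Each field can then be read back by name (`recarr['label']`), which is much less error-prone than positional access.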
Copyright (c) Microsoft Corporation. All rights reserved. Licensed under the MIT License. # Part 1: Training a TensorFlow 2.0 Model on Azure Machine Learning Service ## Overview of Part 1 This notebook is Part 1 (Preparing Data and Model Training) of a two-part workshop that demonstrates an end-to-end workflow using TensorFlow 2.0 on Azure Machine Learning service. The different components of the workshop are as follows: - Part 1: [Model Training](https://github.com/microsoft/bert-stack-overflow/blob/master/1-Training/AzureServiceClassifier_Training.ipynb) - Part 2: [Inferencing and Deploying a Model](https://github.com/microsoft/bert-stack-overflow/blob/master/2-Inferencing/AzureServiceClassifier_Inferencing.ipynb) **This notebook will cover the following topics:** - Stackoverflow question tagging problem - Introduction to Transformer and BERT deep learning models - Registering cleaned-up training data as a Dataset - Training the model on a GPU cluster - Monitoring training progress with the built-in Tensorboard dashboard - Automated search for the best hyper-parameters of the model - Registering the trained model for future deployment ## Prerequisites This notebook is designed to be run in an Azure ML Notebook VM. See the [readme](https://github.com/microsoft/bert-stack-overflow/blob/master/README.md) file for instructions on how to create a Notebook VM and open this notebook in it. ### Check Azure Machine Learning Python SDK version This tutorial requires version 1.0.69 or higher. Let's check the version of the SDK: ``` import azureml.core print("Azure Machine Learning Python SDK version:", azureml.core.VERSION) ``` ## Stackoverflow Question Tagging Problem In this workshop we will use a powerful language understanding model to automatically route Stackoverflow questions to the appropriate support team, using Azure services as the example.
One of the key tasks to ensuring long-term success of any Azure service is actively responding to related posts in online forums such as Stackoverflow. In order to keep track of these posts, Microsoft relies on the associated tags to direct questions to the appropriate support team. While Stackoverflow has different tags for each Azure service (azure-web-app-service, azure-virtual-machine-service, etc), people often use the generic **azure** tag. This makes it hard for specific teams to track down issues related to their product and as a result, many questions get left unanswered. **In order to solve this problem, we will build a model to classify posts on Stackoverflow with the appropriate Azure service tag.** We will be using a BERT (Bidirectional Encoder Representations from Transformers) model which was published by researchers at Google AI Research. Unlike prior language representation models, BERT is designed to pre-train deep bidirectional representations from unlabeled text by jointly conditioning on both left and right context in all layers. As a result, the pre-trained BERT model can be fine-tuned with just one additional output layer to create state-of-the-art models for a wide range of natural language processing (NLP) tasks without substantial architecture modifications. ## Why use the BERT model? The [introduction of the BERT model](https://arxiv.org/pdf/1810.04805.pdf) changed the world of NLP. Many NLP problems that previously relied on specialized models to achieve state-of-the-art performance are now solved better, and with a more generic approach, by BERT.
If we look at the leaderboards on such popular NLP problems as GLUE and SQuAD, most of the top models are based on BERT: * [GLUE Benchmark Leaderboard](https://gluebenchmark.com/leaderboard/) * [SQuAD Benchmark Leaderboard](https://rajpurkar.github.io/SQuAD-explorer/) Recently, the Allen Institute for AI announced a new language understanding system called Aristo [https://allenai.org/aristo/](https://allenai.org/aristo/). The system had been developed for 20 years, but its performance was stuck at 60% on an 8th grade science test. The result jumped to 90% once researchers adopted BERT as the core language understanding component. With BERT, Aristo now solves the test with an A grade. ## Quick Overview of How the BERT Model Works The foundation of the BERT model is the Transformer model, which was introduced in the [Attention Is All You Need paper](https://arxiv.org/abs/1706.03762). Before that, the dominant way of processing language was Recurrent Neural Networks (RNNs). Let's start our overview with RNNs. ## RNNs RNNs were a powerful way of processing language due to their ability to memorize their previous state and perform sophisticated inference based on that. <img src="https://miro.medium.com/max/400/1*L38xfe59H5tAgvuIjKoWPg.png" alt="Drawing" style="width: 100px;"/> _Taken from [1](https://towardsdatascience.com/transformers-141e32e69591)_ Applied to a language translation task, the processing dynamics looked like this. ![](https://miro.medium.com/max/1200/1*8GcdjBU5TAP36itWBcZ6iA.gif) _Taken from [2](https://jalammar.github.io/visualizing-neural-machine-translation-mechanics-of-seq2seq-models-with-attention/)_ But RNNs suffered from two disadvantages: 1. Sequential computation put a limit on parallelization, which limited the effectiveness of larger models. 2. Long-term relationships between words were harder to detect. ## Transformers Transformers were designed to address these two limitations of RNNs.
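The attention operation at the heart of the Transformer can be illustrated in a few lines of NumPy. This is a bare scaled dot-product self-attention with Q = K = V = X — a minimal sketch, not BERT's actual implementation (real models apply learned projections and use multiple heads):

```python
import numpy as np

def self_attention(X):
    """Scaled dot-product self-attention over a (seq_len, dim) matrix.

    Simplified: Q = K = V = X, single head, no learned weights.
    """
    d = X.shape[-1]
    scores = X @ X.T / np.sqrt(d)                    # pairwise word-to-word relevance
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # row-wise softmax
    return weights @ X                               # mix every position into each output

X = np.random.default_rng(0).standard_normal((4, 8))  # 4 "words", embedding dim 8
out = self_attention(X)
print(out.shape)
```

Every output row is a weighted mixture of all input rows, computed in one matrix multiplication — which is exactly what removes the sequential bottleneck of RNNs and lets long-range word relationships be detected directly.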
<img src="https://miro.medium.com/max/2436/1*V2435M1u0tiSOz4nRBfl4g.png" alt="Drawing" style="width: 500px;"/> _Taken from [3](http://jalammar.github.io/illustrated-transformer/)_ In each Encoder layer the Transformer performs a Self-Attention operation, which detects relationships between all word embeddings in one matrix multiplication operation. <img src="https://miro.medium.com/max/2176/1*fL8arkEFVKA3_A7VBgapKA.gif" alt="Drawing" style="width: 500px;"/> _Taken from [4](https://towardsdatascience.com/deconstructing-bert-part-2-visualizing-the-inner-workings-of-attention-60a16d86b5c1)_ ## BERT Model BERT is a very large network with multiple layers of Transformers (12 for BERT-base, and 24 for BERT-large). The model is first pre-trained on a large corpus of text data (Wikipedia + books) using unsupervised training (predicting masked words in a sentence). During pre-training the model absorbs a significant level of language understanding. <img src="http://jalammar.github.io/images/bert-output-vector.png" alt="Drawing" style="width: 700px;"/> _Taken from [5](http://jalammar.github.io/illustrated-bert/)_ The pre-trained network can then easily be fine-tuned to solve a specific language task, like answering questions or categorizing spam emails. <img src="http://jalammar.github.io/images/bert-classifier.png" alt="Drawing" style="width: 700px;"/> _Taken from [5](http://jalammar.github.io/illustrated-bert/)_ The end-to-end training process of the stackoverflow question tagging model looks like this: ![](images/model-training-e2e.png) ## What is Azure Machine Learning Service? Azure Machine Learning service is a cloud service that you can use to develop and deploy machine learning models. Using Azure Machine Learning service, you can track your models as you build, train, deploy, and manage them, all at the broad scale that the cloud provides. ![](./images/aml-overview.png) #### How can we use it for training machine learning models?
Training machine learning models, particularly deep neural networks, is often a time- and compute-intensive task. Once you've finished writing your training script and run it on a small subset of data on your local machine, you will likely want to scale up your workload. To facilitate training, the Azure Machine Learning Python SDK provides a high-level abstraction, the estimator class, which allows users to easily train their models in the Azure ecosystem. You can create and use an Estimator object to submit any training code you want to run on remote compute, whether it's a single-node run or distributed training across a GPU cluster. ## Connect To Workspace The [workspace](https://docs.microsoft.com/en-us/python/api/azureml-core/azureml.core.workspace(class)?view=azure-ml-py) is the top-level resource for Azure Machine Learning, providing a centralized place to work with all the artifacts you create when you use Azure Machine Learning. The workspace holds all your experiments, compute targets, models, datastores, etc. You can [open ml.azure.com](https://ml.azure.com) to access your workspace resources through the graphical user interface of **Azure Machine Learning studio**. ![](./images/aml-workspace.png) **You will be asked to login in the next step. Use your Microsoft AAD credentials.** ``` from azureml.core import Workspace workspace = Workspace.from_config() print('Workspace name: ' + workspace.name, 'Azure region: ' + workspace.location, 'Subscription id: ' + workspace.subscription_id, 'Resource group: ' + workspace.resource_group, sep = '\n') ``` ## Create Compute Target A [compute target](https://docs.microsoft.com/en-us/python/api/azureml-core/azureml.core.computetarget?view=azure-ml-py) is a designated compute resource/environment where you run your training script or host your service deployment. This location may be your local machine or a cloud-based compute resource.
Compute targets can be reused across the workspace for different runs and experiments. For this tutorial, we will create an auto-scaling [Azure Machine Learning Compute](https://docs.microsoft.com/en-us/python/api/azureml-core/azureml.core.compute.amlcompute?view=azure-ml-py) cluster, which is a managed-compute infrastructure that allows the user to easily create a single or multi-node compute. To create the cluster, we need to specify the following parameters: - `vm_size`: This is the type of GPU machine that we want to use in our cluster. For this tutorial, we will use **Standard_NC12s_v2 (NVIDIA P100) GPU Machines**. - `idle_seconds_before_scaledown`: This is the number of seconds before a node will scale down in our auto-scaling cluster. We will set this to **6000** seconds. - `min_nodes`: This is the minimum number of nodes that the cluster will have. To avoid paying for compute while it is not being used, we will set this to **0** nodes. - `max_nodes`: This is the maximum number of nodes that the cluster will scale up to. We will set this to **2** nodes. **When jobs are submitted to the cluster it takes approximately 5 minutes to allocate new nodes** ``` from azureml.core.compute import AmlCompute, ComputeTarget cluster_name = 'p100cluster' compute_config = AmlCompute.provisioning_configuration(vm_size='Standard_NC12s_v2', idle_seconds_before_scaledown=6000, min_nodes=0, max_nodes=2) compute_target = ComputeTarget.create(workspace, cluster_name, compute_config) compute_target.wait_for_completion(show_output=True) ``` To ensure our compute target was created successfully, we can check its status. ``` compute_target.get_status().serialize() ``` #### If the compute target has already been created, then you (and other users in your workspace) can directly run this cell.
``` compute_target = workspace.compute_targets['p100cluster'] ``` ## Prepare Data Using Apache Spark To train our model, we used the Stackoverflow data dump from the [Stack exchange archive](https://archive.org/download/stackexchange). Since the Stackoverflow _posts_ dataset is 12GB, we prepared the data using the [Apache Spark](https://spark.apache.org/) framework on a scalable Spark compute cluster in [Azure Databricks](https://azure.microsoft.com/en-us/services/databricks/). For the purpose of this tutorial, we have processed the data ahead of time and uploaded it to an [Azure Blob Storage](https://azure.microsoft.com/en-us/services/storage/blobs/) container. The full data processing notebook can be found in the _spark_ folder. * **ACTION**: Open and explore the [data preparation notebook](spark/stackoverflow-data-prep.ipynb). ## Register Datastore A [Datastore](https://docs.microsoft.com/en-us/python/api/azureml-core/azureml.core.datastore.datastore?view=azure-ml-py) is used to store connection information to a central data storage. This allows you to access your storage without having to hard code this (potentially confidential) information into your scripts. In this tutorial, the data has been previously prepared and uploaded into a central [Blob Storage](https://azure.microsoft.com/en-us/services/storage/blobs/) container. We will register this container into our workspace as a datastore using a [shared access signature (SAS) token](https://docs.microsoft.com/en-us/azure/storage/common/storage-sas-overview).
``` from azureml.core import Datastore, Dataset datastore_name = 'tfworld' container_name = 'azureml-blobstore-7c6bdd88-21fa-453a-9c80-16998f02935f' account_name = 'tfworld6818510241' sas_token = '?sv=2019-02-02&ss=bfqt&srt=sco&sp=rl&se=2021-01-01T06:07:44Z&st=2020-01-11T22:00:44Z&spr=https&sig=geV1mc46gEv9yLBsWjnlJwij%2Blg4qN53KFyyK84tn3Q%3D' datastore = Datastore.register_azure_blob_container(workspace=workspace, datastore_name=datastore_name, container_name=container_name, account_name=account_name, sas_token=sas_token) ``` #### If the datastore has already been registered, then you (and other users in your workspace) can directly run this cell. ``` datastore = workspace.datastores['tfworld'] ``` #### What if my data wasn't already hosted remotely? All workspaces also come with a blob container which is registered as a default datastore. This allows you to easily upload your own data to a remote storage location. You can access this datastore and upload files as follows: ``` datastore = workspace.get_default_datastore() datastore.upload(src_dir='<LOCAL-PATH>', target_path='<REMOTE-PATH>') ``` ## Register Dataset Azure Machine Learning service supports a first-class notion of a Dataset. A [Dataset](https://docs.microsoft.com/en-us/python/api/azureml-core/azureml.core.dataset.dataset?view=azure-ml-py) is a resource for exploring, transforming and managing data in Azure Machine Learning. The following Dataset types are supported: * [TabularDataset](https://docs.microsoft.com/en-us/python/api/azureml-core/azureml.data.tabulardataset?view=azure-ml-py) represents data in a tabular format created by parsing the provided file or list of files. * [FileDataset](https://docs.microsoft.com/en-us/python/api/azureml-core/azureml.data.filedataset?view=azure-ml-py) references single or multiple files in datastores or from public URLs. We can use visual tools in Azure ML studio to register and explore datasets. In this workshop we will skip this step to save time.
After the workshop please explore the visual way of creating datasets as homework. Use the guide below as guiding steps. * **Homework**: After the workshop follow the [create-dataset](images/create-dataset.ipynb) guide to create a Tabular Dataset from our training data using visual tools in studio. #### Use the created dataset in code ``` from azureml.core import Dataset # Get a dataset by name tabular_ds = Dataset.get_by_name(workspace=workspace, name='Stackoverflow dataset') # Load a TabularDataset into pandas DataFrame df = tabular_ds.to_pandas_dataframe() df.head(10) ``` ## Register Dataset using SDK In addition to the UI, we can register datasets using the SDK. In this workshop we will register the second type of Dataset using code - the File Dataset. A File Dataset allows a specific folder in our datastore that contains our data files to be registered as a Dataset. There is a folder within our datastore called **azure-service-data** that contains all our training and testing data. We will register this as a dataset. ``` azure_dataset = Dataset.File.from_files(path=(datastore, 'azure-service-classifier/data')) azure_dataset = azure_dataset.register(workspace=workspace, name='Azure Services Dataset', description='Dataset containing azure related posts on Stackoverflow') ``` #### If the dataset has already been registered, then you (and other users in your workspace) can directly run this cell. ``` azure_dataset = workspace.datasets['Azure Services Dataset'] ``` ## Explore Training Code In this workshop the training code is provided in the [train.py](./train.py) and [model.py](./model.py) files. The model is based on the popular [huggingface/transformers](https://github.com/huggingface/transformers) library. The Transformers library provides a performant implementation of the BERT model with high-level, easy-to-use APIs based on TensorFlow 2.0.
![](https://raw.githubusercontent.com/huggingface/transformers/master/docs/source/imgs/transformers_logo_name.png) * **ACTION**: Explore _train.py_ and _model.py_ using [Azure ML studio > Notebooks tab](images/azuremlstudio-notebooks-explore.png) * NOTE: You can also explore the files using the Jupyter or Jupyter Lab UI. ## Test Locally Let's try running the script locally to make sure it works before scaling up to use our compute cluster. To do so, you will need to install the transformers library. ``` %pip install transformers==2.0.0 ``` We have taken a small partition of the dataset and included it in this repository. Let's take a quick look at the format of the data. ``` data_dir = './data' import os import pandas as pd data = pd.read_csv(os.path.join(data_dir, 'train.csv'), header=None) data.head(5) ``` Now that we know what the data looks like, let's test out our script! ``` import sys !{sys.executable} train.py --data_dir {data_dir} --max_seq_length 128 --batch_size 16 --learning_rate 3e-5 --steps_per_epoch 5 --num_epochs 1 --export_dir ../outputs/model ``` ## Homework: Debugging in TensorFlow 2.0 Eager Mode Eager mode is a new feature in TensorFlow 2.0 which makes understanding and debugging models easy. You can use the VS Code Remote feature to connect to the Notebook VM and perform debugging in the cloud environment. #### More info: Configuring VS Code Remote connection to Notebook VM * Homework: Install [Microsoft VS Code](https://code.visualstudio.com/) on your local machine. * Homework: Follow this [configuration guide](https://github.com/danielsc/azureml-debug-training/blob/master/Setting%20up%20VSCode%20Remote%20on%20an%20AzureML%20Notebook%20VM.md) to set up a VS Code Remote connection to the Notebook VM. On a CPU machine, training on the full dataset will take approximately 1.5 hours. Although it's a small dataset, it still takes a long time. Let's see how we can speed up the training by using the latest NVIDIA V100 GPUs in the Azure cloud.
## Perform Experiment Now that we have our compute target, dataset, and training script working locally, it is time to scale up so that the script can run faster. We will start by creating an [experiment](https://docs.microsoft.com/en-us/python/api/azureml-core/azureml.core.experiment.experiment?view=azure-ml-py). An experiment is a grouping of many runs from a specified script. All runs in this tutorial will be performed under the same experiment. ``` from azureml.core import Experiment experiment_name = 'azure-service-classifier' experiment = Experiment(workspace, name=experiment_name) ``` #### Create TensorFlow Estimator The Azure Machine Learning Python SDK Estimator classes allow you to easily construct run configurations for your experiments. They allow you to define parameters such as the training script to run, the compute target to run it on, framework versions, additional package requirements, etc. You can also use a generic [Estimator](https://docs.microsoft.com/en-us/python/api/azureml-train-core/azureml.train.estimator.estimator?view=azure-ml-py) to submit training scripts that use any learning framework you choose. For popular libraries like PyTorch and TensorFlow you can use their framework-specific estimators. We will use the [TensorFlow Estimator](https://docs.microsoft.com/en-us/python/api/azureml-train-core/azureml.train.dnn.tensorflow?view=azure-ml-py) for our experiment.
``` from azureml.train.dnn import TensorFlow estimator1 = TensorFlow(source_directory='.', entry_script='train_logging.py', compute_target=compute_target, script_params = { '--data_dir': azure_dataset.as_named_input('azureservicedata').as_mount(), '--max_seq_length': 128, '--batch_size': 32, '--learning_rate': 3e-5, '--steps_per_epoch': 150, '--num_epochs': 3, '--export_dir':'./outputs/model' }, framework_version='2.0', use_gpu=True, pip_packages=['transformers==2.0.0', 'azureml-dataprep[fuse,pandas]==1.1.29']) ``` A quick description for each of the parameters we have just defined: - `source_directory`: This specifies the root directory of our source code. - `entry_script`: This specifies the training script to run. It should be relative to the source_directory. - `compute_target`: This specifies the compute target to run the job on. We will use the one created earlier. - `script_params`: This specifies the input parameters to the training script. Please note: 1) *azure_dataset.as_named_input('azureservicedata').as_mount()* mounts the dataset to the remote compute and provides the path to the dataset on our datastore. 2) All outputs from the training script must be written to the './outputs' directory, as this is the only directory that will be saved to the run. - `framework_version`: This specifies the version of TensorFlow to use. Use TensorFlow.get_supported_versions() to see all supported versions. - `use_gpu`: This will use the GPU on the compute target for training if set to True. - `pip_packages`: This allows you to define any additional libraries to install before training. #### 1) Submit a Run We can now train our model by submitting the estimator object as a [run](https://docs.microsoft.com/en-us/python/api/azureml-core/azureml.core.run.run?view=azure-ml-py). ``` run1 = experiment.submit(estimator1) ``` We can view the current status of the run and stream the logs from within the notebook.
``` from azureml.widgets import RunDetails RunDetails(run1).show() ``` You can cancel a run at any time, which will stop the run and scale down the nodes in the compute target. ``` run1.cancel() ``` While we wait for the run to complete, let's go over how a Run is executed in Azure Machine Learning. ![](./images/aml-run.png) #### 2) Monitoring metrics with Azure ML SDK To monitor the performance of our model we log those metrics using a few lines of code in our training script: ```python # 1) Import SDK Run object from azureml.core.run import Run # 2) Get current service context run = Run.get_context() # 3) Log the metrics that we want run.log('val_accuracy', float(logs.get('val_accuracy'))) run.log('accuracy', float(logs.get('accuracy'))) ``` #### 3) Monitoring metrics with Tensorboard Tensorboard is a popular deep learning training visualization tool and it is built into the TensorFlow framework. We can easily add tracking of the metrics in Tensorboard format by adding a Tensorboard callback to the **fit** function call. ```python # Add callback to record Tensorboard events model.fit(train_dataset, epochs=FLAGS.num_epochs, steps_per_epoch=FLAGS.steps_per_epoch, validation_data=valid_dataset, callbacks=[ AmlLogger(), tf.keras.callbacks.TensorBoard(update_freq='batch')] ) ``` * **ACTION**: Explore _train_logging.py_ using [Azure ML studio > Notebooks tab](images/azuremlstudio-notebooks-explore.png) #### Launch Tensorboard Azure ML service provides built-in integration with Tensorboard through the **tensorboard** package. While the run is in progress (or after it has completed), we can start Tensorboard with the run as its target, and it will begin streaming logs. ``` from azureml.tensorboard import Tensorboard # The Tensorboard constructor takes an array of runs, so be sure and pass it in as a single-element array here tb = Tensorboard([run1]) # If successful, start() returns a string with the URI of the instance.
tb.start() ``` #### Stop Tensorboard When you're done, make sure to call the stop() method of the Tensorboard object, or it will stay running even after your job completes. ``` tb.stop() ``` ## Check the model performance The last training run produced a model of decent accuracy. Let's test it out and see what it does. First, let's check what files our latest training run produced and download the model files. #### Download model files ``` run1.get_file_names() run1.download_files(prefix='outputs/model') # If you haven't finished training the model then just download the pre-made model from the datastore datastore.download('./',prefix="azure-service-classifier/model") ``` #### Instantiate the model The next step is to import our model class and instantiate the fine-tuned model from the model file. ``` from model import TFBertForMultiClassification from transformers import BertTokenizer import tensorflow as tf def encode_example(text, max_seq_length): # Encode inputs using tokenizer inputs = tokenizer.encode_plus( text, add_special_tokens=True, max_length=max_seq_length ) input_ids, token_type_ids = inputs["input_ids"], inputs["token_type_ids"] # The mask has 1 for real tokens and 0 for padding tokens. Only real tokens are attended to. attention_mask = [1] * len(input_ids) # Zero-pad up to the sequence length.
    padding_length = max_seq_length - len(input_ids)
    input_ids = input_ids + ([0] * padding_length)
    attention_mask = attention_mask + ([0] * padding_length)
    token_type_ids = token_type_ids + ([0] * padding_length)

    return input_ids, attention_mask, token_type_ids

labels = ['azure-web-app-service', 'azure-storage', 'azure-devops', 'azure-virtual-machine', 'azure-functions']

# Load model and tokenizer
loaded_model = TFBertForMultiClassification.from_pretrained('azure-service-classifier/model', num_labels=len(labels))
tokenizer = BertTokenizer.from_pretrained('bert-base-cased')
print("Model loaded from disk.")
```

#### Define prediction function

Using the model object we can interpret new questions and predict what Azure service they talk about. To do that conveniently, we'll define a **predict** function.

```
# Prediction function
def predict(question):
    input_ids, attention_mask, token_type_ids = encode_example(question, 128)
    predictions = loaded_model.predict({
        'input_ids': tf.convert_to_tensor([input_ids], dtype=tf.int32),
        'attention_mask': tf.convert_to_tensor([attention_mask], dtype=tf.int32),
        'token_type_ids': tf.convert_to_tensor([token_type_ids], dtype=tf.int32)
    })
    prediction = labels[predictions[0].argmax().item()]
    probability = predictions[0].max()
    print('Prediction: {}'.format(prediction))
    print('Probability: {}'.format(probability))
```

#### Experiment with our new model

Now we can easily test the model's responses to new inputs.

* **ACTION**: Invent your own input for one of the 5 services our model understands: 'azure-web-app-service', 'azure-storage', 'azure-devops', 'azure-virtual-machine', 'azure-functions'.
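As an aside, the zero-padding scheme used in `encode_example` above is easy to verify in isolation. A pure-Python sketch with toy token IDs (illustrative only: the real function also pads `token_type_ids` and takes its IDs from the tokenizer):

```python
def pad_inputs(input_ids, max_seq_length, pad_id=0):
    # 1 marks real tokens, 0 marks padding; only real tokens are attended to
    attention_mask = [1] * len(input_ids)
    padding_length = max_seq_length - len(input_ids)
    input_ids = input_ids + [pad_id] * padding_length
    attention_mask = attention_mask + [0] * padding_length
    return input_ids, attention_mask

print(pad_inputs([101, 7592, 102], 5))
# ([101, 7592, 102, 0, 0], [1, 1, 1, 0, 0])
```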
```
# Route question
predict("How can I specify Service Principal in devops pipeline when deploying virtual machine")

# Now a more tricky case - the opposite
predict("How can virtual machine trigger devops pipeline")
```

## Distributed Training Across Multiple GPUs

Distributed training allows us to train across multiple nodes and GPUs if our cluster allows it. Azure Machine Learning service helps manage the infrastructure for training distributed jobs. All we have to do is add the following parameters to our estimator object in order to enable this:

- `node_count`: The number of nodes to run this job across. Our cluster has a maximum node limit of 2, so we can set this number up to 2.
- `process_count_per_node`: The number of processes to enable per node. The nodes in our cluster have 2 GPUs each. We will set this value to 2, which will allow us to distribute the load across both GPUs. Using multi-GPU nodes is beneficial, as the communication bandwidth within a single machine is higher than across the network.
- `distributed_training`: The backend to use for our distributed job. We will be using an MPI (Message Passing Interface) backend, which is used by the Horovod framework.

We use [Horovod](https://github.com/horovod/horovod), a framework that allows us to easily modify our existing training script to run across multiple nodes/GPUs. The distributed training script is saved as *train_horovod.py*.

* **ACTION**: Explore _train_horovod.py_ using [Azure ML studio > Notebooks tab](images/azuremlstudio-notebooks-explore.png)

We can submit this run in the same way that we did with the others, but with the additional parameters.
```
from azureml.train.dnn import Mpi

estimator3 = TensorFlow(source_directory='./',
                        entry_script='train_horovod.py',
                        compute_target=compute_target,
                        script_params={
                            '--data_dir': azure_dataset.as_named_input('azureservicedata').as_mount(),
                            '--max_seq_length': 128,
                            '--batch_size': 32,
                            '--learning_rate': 3e-5,
                            '--steps_per_epoch': 150,
                            '--num_epochs': 3,
                            '--export_dir': './outputs/model'
                        },
                        framework_version='2.0',
                        node_count=1,
                        distributed_training=Mpi(process_count_per_node=2),
                        use_gpu=True,
                        pip_packages=['transformers==2.0.0', 'azureml-dataprep[fuse,pandas]==1.1.29'])

run3 = experiment.submit(estimator3)
```

Once again, we can view the current details of the run.

```
from azureml.widgets import RunDetails
RunDetails(run3).show()
```

Once the run completes, note the time it took. It should be around 5 minutes. As you can see, by moving to cloud GPUs and using distributed training we managed to reduce the training time of our model from more than an hour to 5 minutes. This greatly improves the speed of experimentation and innovation.

## Tune Hyperparameters Using Hyperdrive

So far we have been using default hyperparameter values, but in practice we would need to tune these values to optimize the performance. Azure Machine Learning service provides many methods for tuning hyperparameters using different strategies. The first step is to choose the parameter space that we want to search. We have a few choices to make here:

- **Parameter Sampling Method**: This is how we select the combinations of parameters to sample.
Azure Machine Learning service offers [RandomParameterSampling](https://docs.microsoft.com/en-us/python/api/azureml-train-core/azureml.train.hyperdrive.randomparametersampling?view=azure-ml-py), [GridParameterSampling](https://docs.microsoft.com/en-us/python/api/azureml-train-core/azureml.train.hyperdrive.gridparametersampling?view=azure-ml-py), and [BayesianParameterSampling](https://docs.microsoft.com/en-us/python/api/azureml-train-core/azureml.train.hyperdrive.bayesianparametersampling?view=azure-ml-py). We will use the `GridParameterSampling` method.
- **Parameters To Search**: We will be searching for optimal combinations of `learning_rate` and `num_epochs`.
- **Parameter Expressions**: This defines the [functions that can be used to describe a hyperparameter search space](https://docs.microsoft.com/en-us/python/api/azureml-train-core/azureml.train.hyperdrive.parameter_expressions?view=azure-ml-py), which can be discrete or continuous. We will be using a `discrete set of choices`.

The following code allows us to define these options.

```
from azureml.train.hyperdrive import GridParameterSampling
from azureml.train.hyperdrive.parameter_expressions import choice

param_sampling = GridParameterSampling({
    '--learning_rate': choice(3e-5, 3e-4),
    '--num_epochs': choice(3, 4)
})
```

The next step is to define how we want to measure our performance. We do so by specifying two classes:

- **[PrimaryMetricGoal](https://docs.microsoft.com/en-us/python/api/azureml-train-core/azureml.train.hyperdrive.primarymetricgoal?view=azure-ml-py)**: We want to `MAXIMIZE` the `val_accuracy` that is logged in our training script.
- **[BanditPolicy](https://docs.microsoft.com/en-us/python/api/azureml-train-core/azureml.train.hyperdrive.banditpolicy?view=azure-ml-py)**: A policy for early termination, so that jobs which don't show promising results will stop automatically.
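To build some intuition for the bandit rule: with a maximization goal and slack factor *s*, a run is terminated when its reported metric falls below best-so-far / (1 + *s*). A plain-Python sketch of just that comparison (an illustration of the documented rule, not the SDK's implementation):

```python
def survives_bandit(metric, best_so_far, slack_factor=0.1):
    # a run survives while its metric stays within the slack band of the best run so far
    return metric >= best_so_far / (1 + slack_factor)

# best validation accuracy so far is 0.90, so the cutoff is 0.90 / 1.1 ~ 0.818
print(survives_bandit(0.85, 0.90))  # True: within slack, keep training
print(survives_bandit(0.70, 0.90))  # False: terminated early
```

The `evaluation_interval` and `delay_evaluation` arguments of `BanditPolicy` control how often and how soon this check is applied.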
```
from azureml.train.hyperdrive import BanditPolicy
from azureml.train.hyperdrive import PrimaryMetricGoal

primary_metric_name = 'val_accuracy'
primary_metric_goal = PrimaryMetricGoal.MAXIMIZE

early_termination_policy = BanditPolicy(slack_factor=0.1, evaluation_interval=1, delay_evaluation=2)
```

We define an estimator as usual, but this time without the script parameters that we are planning to search.

```
estimator4 = TensorFlow(source_directory='./',
                        entry_script='train_logging.py',
                        compute_target=compute_target,
                        script_params={
                            '--data_dir': azure_dataset.as_named_input('azureservicedata').as_mount(),
                            '--max_seq_length': 128,
                            '--batch_size': 32,
                            '--steps_per_epoch': 150,
                            '--export_dir': './outputs/model',
                        },
                        framework_version='2.0',
                        use_gpu=True,
                        pip_packages=['transformers==2.0.0', 'azureml-dataprep[fuse,pandas]==1.1.29'])
```

Finally, we add all our parameters to a [HyperDriveConfig](https://docs.microsoft.com/en-us/python/api/azureml-train-core/azureml.train.hyperdrive.hyperdriveconfig?view=azure-ml-py) class and submit it as a run.

```
from azureml.train.hyperdrive import HyperDriveConfig

hyperdrive_run_config = HyperDriveConfig(estimator=estimator4,
                                         hyperparameter_sampling=param_sampling,
                                         policy=early_termination_policy,
                                         primary_metric_name=primary_metric_name,
                                         primary_metric_goal=PrimaryMetricGoal.MAXIMIZE,
                                         max_total_runs=10,
                                         max_concurrent_runs=2)

run4 = experiment.submit(hyperdrive_run_config)
```

When we view the details of our run this time, we will see information and metrics for every run in our hyperparameter tuning.

```
from azureml.widgets import RunDetails
RunDetails(run4).show()
```

We can retrieve the best run based on our defined metric.

```
best_run = run4.get_best_run_by_primary_metric()
```

## Register Model

A registered [model](https://docs.microsoft.com/en-us/python/api/azureml-core/azureml.core.model(class)?view=azure-ml-py) is a reference to the directory or files that make up your model.
After registering a model, you and other people in your workspace can easily gain access to and deploy your model without having to run the training script again. We need to define the following parameters to register a model:

- `model_name`: The name for your model. If the model name already exists in the workspace, a new version of the model will be created.
- `model_path`: The path to where the model is stored. In our case, this was the *export_dir* defined in our estimators.
- `description`: A description for the model.

Let's register the best run from our hyperparameter tuning.

```
model = best_run.register_model(model_name='azure-service-classifier',
                                model_path='./outputs/model',
                                datasets=[('train, test, validation data', azure_dataset)],
                                description='BERT model for classifying azure services on stackoverflow posts.')
```

We have registered the model with a Dataset reference.

* **ACTION**: Check the dataset-to-model link in **Azure ML studio > Datasets tab > Azure Service Dataset**.

In the [next tutorial](), we will perform inference with this model and deploy it to a web service.
# Introduction

&copy; Harishankar Manikantan, maintained on GitHub at [hmanikantan/ECH60](https://github.com/hmanikantan/ECH60) and published under an [MIT license](https://github.com/hmanikantan/ECH60/blob/master/LICENSE). Return to [Course Home Page](https://hmanikantan.github.io/ECH60/)

**[Context and Scope](#scope) <br>**
**[Getting used to Python](#install)**
* [Installing and Using Python](#start)
* [Useful tips](#tips)

<a id='scope'></a>
## Context and Scope

This set of tutorials is written at an introductory level for an engineering or physical sciences major. It is ideal for someone who has completed college-level courses in linear algebra, calculus and differential equations. While prior experience with programming is a certain advantage, it is not expected. At UC Davis, this is aimed at sophomore-level Chemical and Biochemical Engineers and Materials Scientists: examples and the language used here might reflect this.

At the same time, this is not meant to be an exhaustive course in Python or in numerical methods. The objective of the module is to get the reader to appreciate and apply Python to basic scientific calculations. While computational efficiency and succinct programming are certainly factors that become important to advanced coders, the focus here is on learning the methods. Brevity will often be forsaken for clarity in what follows. The goal is to flatten the learning curve as much as possible for a beginner. In the same vein, most of the 'application' chapters (fitting, root finding, calculus and differential equations) introduce classic numerical methods built from first principles but then also provide the inbuilt Python routines to do the same. These 'black-box' approaches are often more efficient because they are written by experts in an optimal manner.
The hope is that the reader learns and appreciates the methods and the algorithms behind these approaches, while also learning to use the easiest and most efficient tools to get the job done.

These are casual notes based on a course taught at UC Davis and are certainly not free of errors. Typos and coding gaffes are sure to have escaped my attention, and I take full responsibility for errors. For those comfortable with GitHub, I welcome pull requests for modifications. Or just send me an [email](mailto:hmanikantan@ucdavis.edu) with any mistakes you spot, and I will be grateful. Outside of technical accuracy, I have taken an approach that favors a pedagogic development of topics, one that I hoped would least intimidate an engineer in training with no prior experience in coding. Criticism and feedback on stylistic changes in this spirit are also welcome.

I recommend the following wonderful books that have guided aspects of the course that I teach with these notes.

* [A Student's Guide to Python for Physical Modeling, Jesse M. Kinder & Philip Nelson, Princeton University Press](https://press.princeton.edu/books/hardcover/9780691180564/a-students-guide-to-python-for-physical-modeling)
* [Numerical Methods for Engineers and Scientists, Amos Gilat & Vish Subramaniam, Wiley](https://www.wiley.com/en-us/Numerical+Methods+for+Engineers+and+Scientists%2C+3rd+Edition-p-9781118554937)
* [Numerical Methods in Engineering with Python 3, Jaan Kiusalaas, Cambridge University Press](https://doi.org/10.1017/CBO9781139523899)

While any of these books provides a fantastic introduction to the topic, I believe that interactive tutorials using the Jupyter framework provide an engaging complement to learning numerical methods. Yet I was unable to find a set of pedagogic and interactive code notebooks that covered the range of topics suitable for this level of instruction. I hope to fill this gap.
If you are new to coding, the best way to learn is to download these notebooks from the GitHub repository (linked at the course [home page](https://hmanikantan.github.io/ECH60/)), and edit and execute every code cell in these chapters as you read through them. Details on installing and using Python are below. My ECH 60 students beta tested these tutorials, and their learning styles, feedback and comments crafted the structure of this series.

And finally, the world of Python is a fantastic testament to the power of open-source science and learning. I thank the countless selfless nameless strangers whose stackoverflow comments have informed me, and whose coding styles have inadvertently crept into my interpretation of the code and style in what follows. And I thank the generous online notes of [John Kitchin](https://kitchingroup.cheme.cmu.edu/pycse/pycse.html), [Patrick Walls](https://www.math.ubc.ca/~pwalls/math-python/), [Vivi Andasari](http://people.bu.edu/andasari/courses/numericalpython/python.html), [Charles Jekel](https://github.com/cjekel/Introduction-to-Python-Numerical-Analysis-for-Engineers-and-Scientist), and [Jeffrey Kantor](https://github.com/jckantor), whose works directly or indirectly inspired and influenced what follows. I am happy to contribute to this collective knowledge base, free for anyone to adapt, build on, and make their own.

<a id='install'></a>
## Getting Used to Python

Python is a popular, powerful and free programming language that is rapidly becoming one of [the most widely used computational tools](https://stackoverflow.blog/2017/09/06/incredible-growth-python/) in science and engineering. Python is notable for its minimalist syntax, clear and logical flow of code, efficient organization, readily and freely available 'plug and play' modules for every kind of advanced scientific computation, and the massive online community of support.
This makes Python easy to learn for beginners, and extremely convenient to adapt for those transitioning from other languages.

<a id='start'></a>
### Installing and Using Python

Python is free to download and use. The [Anaconda distribution](https://www.anaconda.com) is a user-friendly way to get Python on your computer. Anaconda is free and easy to install on all platforms. It installs the Python language, and related useful packages like Jupyter and Spyder.

#### Jupyter

The Jupyter environment allows interactive computations and runs on any browser. This file you are reading is written using Jupyter, and each such file is saved with a `.ipynb` extension. To open an ipynb file, first open Jupyter from the Anaconda launch screen. Once you have Jupyter up and running, navigate to the folder where you saved the file and double click to open. Alternatively, you can launch Jupyter by typing `jupyter notebook` in your terminal prompt. Note that you can only open files after you launch Jupyter and navigate to the folder containing your ipynb file. You cannot simply double click, or use a 'right click and open with' option.

Jupyter allows us to write and edit plain text (like the one you are reading) and code. This paragraph and the ones above are 'markdown' text: meaning, Jupyter treats them as plain text. From within Jupyter, double click anywhere on the text to go into 'edit' mode. When you are done changing anything, hit `shift+enter` to exit to the 'view' mode. You can toggle between markdown and code using the drop down in the menu above. Code cells look like the following

```
print('Hello')
```

Single click on a code cell to select it, edit it, and hit `shift+enter` to execute that bit of code. For example, the following code cell evaluates the sum of two numbers when you execute it. Try it, type in any two numbers, see what happens.

```
# add two numbers
2+40
```

The `#` sign is useful for writing comments in Python; comments are not executed.
Play around with all editable code cells in these tutorials so you get comfortable. The more you practice, the faster you will get comfortable with coding and Python.

Jupyter allows LaTeX as well in the markdown cells: so you can write things like $\alpha+i \sqrt{\beta}=e^{i\theta}$. You can also play around with fonts, colors, sizes, hyperlinks, and text organization. This makes Jupyter a great environment for teaching, learning, tutorials, assignments, and academic reports. This entire course is written and tested in the Jupyter environment.

#### Spyder

An integrated development environment (IDE) like Spyder is more apt for longer projects. Spyder has features like a variable explorer, script editor, live debugging, history log, and more. For those comfortable with Matlab or R, adapting to Spyder is an easy learning curve. The ipynb files will not open in a usable manner in Spyder (or any other Python IDE) as they contain markdown text in addition to code. However, every bit of code that we will learn in what follows works in Spyder or a similar IDE. When using Spyder, save the code as 'script' files with a `.py` extension. This is the traditional or standard Python format: just code, no markdown. Another big advantage of `.py` files is modularity: bits of code written in one file can be easily accessed in another. This makes the traditional `.py` Python format more suitable for large-scale and collaborative projects. Nevertheless, for pedagogic reasons, we will continue with Jupyter notebooks and the `.ipynb` files for this course: as you learn Python, you are heavily encouraged to get comfortable with and port all the code you develop into Spyder.

#### Python, more generally

Of course, Anaconda (and its inbuilt environments like Jupyter and Spyder) is not the only way to interact with Python.
You can install just the Python language directly from [Python.org](https://www.python.org/downloads/), write a bit of Python code in any text editor, save it as a `.py` file, and run it on your terminal using `python filename.py`. Python is an _interpreted_ language, meaning you do not need to compile it to execute it (unlike C, C++, Fortran, etc.) and you can run the scripts directly.

<a id='tips'></a>
### Useful Tips

Whether you are a beginner to coding or a seasoned coder transitioning from another language, the following (non-exhaustive) tips are useful to bear in mind as you learn Python:

* Blocks of code in Python are identified by indentation. We will see this when we start with loops, conditionals, and functions in Chapter 1: indents (and a preliminary colon) identify lines of code that go together. As you learn to read through and write Python code, it is good practice to ensure an 'indentation discipline'. That's the only way Python knows which bits of code belong together.

* All but the most basic Python operations will need imported _modules_. Modules are collections of code written in an efficient manner, and are easily 'loaded' into your code by using the `import` statement. For example, a common module that you will find yourself using in pretty much every engineering code is `numpy` (Chapter 1), which you would import using the line `import numpy`. This doesn't have to be at the beginning of the code, as long as it is executed before you use something that belongs to `numpy`.

* Individual code cells in Jupyter are executed independently of the rest of the notebook. So, make sure to execute the `import` line for necessary modules before you execute a code cell that needs that module, or you will see an error.

* If you need to import a data file or an image, Python looks for that file in the current folder unless you provide a full path.
The same goes when you save an image or export a data file: the exported file is saved in the current directory unless explicitly stated otherwise.

* Notebook files with an `.ipynb` extension can only be opened, edited or renamed from within the Jupyter framework. Opening these files in a text editor or another application, even when possible, does not display the markdown or code in a comprehensible manner.
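To make the first tip above concrete, here is a minimal example of indentation at work; paste it into a code cell and run it:

```python
# the indented line belongs to the for block; the unindented line does not
total = 0
for n in [1, 2, 3]:
    total = total + n    # indented: runs once for every item in the list
print(total)             # not indented: runs once, after the loop ends (prints 6)
```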
# Softmax exercise

*Complete and hand in this completed worksheet (including its outputs and any supporting code outside of the worksheet) with your assignment submission. For more details see the [assignments page](http://vision.stanford.edu/teaching/cs231n/assignments.html) on the course website.*

This exercise is analogous to the SVM exercise. You will:

- implement a fully-vectorized **loss function** for the Softmax classifier
- implement the fully-vectorized expression for its **analytic gradient**
- **check your implementation** with numerical gradient
- use a validation set to **tune the learning rate and regularization** strength
- **optimize** the loss function with **SGD**
- **visualize** the final learned weights

```
from __future__ import print_function

import random
import numpy as np
from cs231n.data_utils import load_CIFAR10
import matplotlib.pyplot as plt

%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'

# for auto-reloading external modules
# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython
%load_ext autoreload
%autoreload 2

def get_CIFAR10_data(num_training=49000, num_validation=1000, num_test=1000, num_dev=500):
    """
    Load the CIFAR-10 dataset from disk and perform preprocessing to prepare
    it for the linear classifier. These are the same steps as we used for the
    SVM, but condensed to a single function.
""" # Load the raw CIFAR-10 data cifar10_dir = 'cs231n/datasets/cifar-10-batches-py' X_train, y_train, X_test, y_test = load_CIFAR10(cifar10_dir) # subsample the data mask = list(range(num_training, num_training + num_validation)) X_val = X_train[mask] y_val = y_train[mask] mask = list(range(num_training)) X_train = X_train[mask] y_train = y_train[mask] mask = list(range(num_test)) X_test = X_test[mask] y_test = y_test[mask] mask = np.random.choice(num_training, num_dev, replace=False) X_dev = X_train[mask] y_dev = y_train[mask] # Preprocessing: reshape the image data into rows X_train = np.reshape(X_train, (X_train.shape[0], -1)) X_val = np.reshape(X_val, (X_val.shape[0], -1)) X_test = np.reshape(X_test, (X_test.shape[0], -1)) X_dev = np.reshape(X_dev, (X_dev.shape[0], -1)) # Normalize the data: subtract the mean image mean_image = np.mean(X_train, axis = 0) X_train -= mean_image X_val -= mean_image X_test -= mean_image X_dev -= mean_image # add bias dimension and transform into columns X_train = np.hstack([X_train, np.ones((X_train.shape[0], 1))]) X_val = np.hstack([X_val, np.ones((X_val.shape[0], 1))]) X_test = np.hstack([X_test, np.ones((X_test.shape[0], 1))]) X_dev = np.hstack([X_dev, np.ones((X_dev.shape[0], 1))]) return X_train, y_train, X_val, y_val, X_test, y_test, X_dev, y_dev # Invoke the above function to get our data. X_train, y_train, X_val, y_val, X_test, y_test, X_dev, y_dev = get_CIFAR10_data() print('Train data shape: ', X_train.shape) print('Train labels shape: ', y_train.shape) print('Validation data shape: ', X_val.shape) print('Validation labels shape: ', y_val.shape) print('Test data shape: ', X_test.shape) print('Test labels shape: ', y_test.shape) print('dev data shape: ', X_dev.shape) print('dev labels shape: ', y_dev.shape) ``` ## Softmax Classifier Your code for this section will all be written inside **cs231n/classifiers/softmax.py**. ``` # First implement the naive softmax loss function with nested loops. 
# Open the file cs231n/classifiers/softmax.py and implement the
# softmax_loss_naive function.

from cs231n.classifiers.softmax import softmax_loss_naive
import time

# Generate a random softmax weight matrix and use it to compute the loss.
W = np.random.randn(3073, 10) * 0.0001
loss, grad = softmax_loss_naive(W, X_dev, y_dev, 0.0)

# As a rough sanity check, our loss should be something close to -log(0.1).
print('loss: %f' % loss)
print('sanity check: %f' % (-np.log(0.1)))
```

## Inline Question 1:

**Why do we expect our loss to be close to -log(0.1)? Explain briefly.**

**Your answer:** *Fill this in*

```
# Complete the implementation of softmax_loss_naive and implement a (naive)
# version of the gradient that uses nested loops.
loss, grad = softmax_loss_naive(W, X_dev, y_dev, 0.0)

# As we did for the SVM, use numeric gradient checking as a debugging tool.
# The numeric gradient should be close to the analytic gradient.
from cs231n.gradient_check import grad_check_sparse
f = lambda w: softmax_loss_naive(w, X_dev, y_dev, 0.0)[0]
grad_numerical = grad_check_sparse(f, W, grad, 10)

# similar to the SVM case, do another gradient check with regularization
loss, grad = softmax_loss_naive(W, X_dev, y_dev, 5e1)
f = lambda w: softmax_loss_naive(w, X_dev, y_dev, 5e1)[0]
grad_numerical = grad_check_sparse(f, W, grad, 10)

# Now that we have a naive implementation of the softmax loss function and its gradient,
# implement a vectorized version in softmax_loss_vectorized.
# The two versions should compute the same results, but the vectorized version should be
# much faster.
tic = time.time()
loss_naive, grad_naive = softmax_loss_naive(W, X_dev, y_dev, 0.000005)
toc = time.time()
print('naive loss: %e computed in %fs' % (loss_naive, toc - tic))

from cs231n.classifiers.softmax import softmax_loss_vectorized
tic = time.time()
loss_vectorized, grad_vectorized = softmax_loss_vectorized(W, X_dev, y_dev, 0.000005)
toc = time.time()
print('vectorized loss: %e computed in %fs' % (loss_vectorized, toc - tic))

# As we did for the SVM, we use the Frobenius norm to compare the two versions
# of the gradient.
grad_difference = np.linalg.norm(grad_naive - grad_vectorized, ord='fro')
print('Loss difference: %f' % np.abs(loss_naive - loss_vectorized))
print('Gradient difference: %f' % grad_difference)

# Use the validation set to tune hyperparameters (regularization strength and
# learning rate). You should experiment with different ranges for the learning
# rates and regularization strengths; if you are careful you should be able to
# get a classification accuracy of over 0.35 on the validation set.
from cs231n.classifiers import Softmax
results = {}
best_val = -1
best_softmax = None
learning_rates = [1e-7, 5e-7]
regularization_strengths = [2.5e4, 5e4]

################################################################################
# TODO:                                                                        #
# Use the validation set to set the learning rate and regularization strength. #
# This should be identical to the validation that you did for the SVM; save    #
# the best trained softmax classifier in best_softmax.
# ################################################################################ #pass for lr in learning_rates: for reg in regularization_strengths: softmax = Softmax() softmax.train(X_train, y_train, lr, reg, num_iters=1500) y_train_pred = softmax.predict(X_train) train_acc = np.mean(y_train == y_train_pred) y_val_pred = softmax.predict(X_val) val_acc = np.mean(y_val == y_val_pred) if val_acc > best_val: best_val = val_acc best_softmax = softmax results[(lr, reg)] = train_acc, val_acc ################################################################################ # END OF YOUR CODE # ################################################################################ # Print out results. for lr, reg in sorted(results): train_accuracy, val_accuracy = results[(lr, reg)] print('lr %e reg %e train accuracy: %f val accuracy: %f' % ( lr, reg, train_accuracy, val_accuracy)) print('best validation accuracy achieved during cross-validation: %f' % best_val) # evaluate on test set # Evaluate the best softmax on test set y_test_pred = best_softmax.predict(X_test) test_accuracy = np.mean(y_test == y_test_pred) print('softmax on raw pixels final test set accuracy: %f' % (test_accuracy, )) # Visualize the learned weights for each class w = best_softmax.W[:-1,:] # strip out the bias w = w.reshape(32, 32, 3, 10) w_min, w_max = np.min(w), np.max(w) classes = ['plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck'] for i in range(10): plt.subplot(2, 5, i + 1) # Rescale the weights to be between 0 and 255 wimg = 255.0 * (w[:, :, :, i].squeeze() - w_min) / (w_max - w_min) plt.imshow(wimg.astype('uint8')) plt.axis('off') plt.title(classes[i]) ```
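A closing note on the implementations above: the standard numerical-stability trick for softmax is to subtract each row's maximum score before exponentiating, which leaves the probabilities unchanged but prevents overflow. A small self-contained sketch (illustrative only: your graded code belongs in **cs231n/classifiers/softmax.py**):

```python
import numpy as np

def softmax_rows(scores):
    # subtracting the per-row max keeps np.exp from overflowing;
    # the constant shift cancels in the ratio, so probabilities are unchanged
    shifted = scores - scores.max(axis=1, keepdims=True)
    exp_scores = np.exp(shifted)
    return exp_scores / exp_scores.sum(axis=1, keepdims=True)

probs = softmax_rows(np.array([[1000.0, 1001.0, 1002.0]]))
print(probs)  # finite values that sum to 1; a naive np.exp(1000.0) would overflow
```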
# __Proposal__

We are facing a serious situation with the COVID-19 pandemic, for which governments have implemented contingency plans with varying effectiveness. We are trying to create a database for further analysis of the effectiveness of the contingency plans and measures governments have chosen.

# __Finding Data__

Source 1: We picked our first data source of COVID-19 daily data from the following API, where the data is sourced from Johns Hopkins CSSE: "https://api.covid19api.com/all". The data in this dataset consists of total cases, total deaths and total recovered, reported daily.

Source 2: We pulled the contingency plans from the OXFORD COVID-19 Government Response Tracker (OxCGRT). We downloaded our second dataset as a CSV file from https://data.humdata.org/dataset/oxford-covid-19-government-response-tracker. The Oxford COVID-19 Government Response Tracker (OxCGRT) systematically collects information on several different common policy responses that governments have taken to respond to the pandemic, on 17 indicators such as school closures and travel restrictions. It now has data from more than 180 countries. The data is also used to inform a 'Lockdown rollback checklist' which looks at how closely countries meet four of the six World Health Organisation recommendations for relaxing 'lockdown'.

Source 3: We downloaded an EXCEL file from https://data.humdata.org/dataset/acaps-covid19-government-measures-dataset. The source is ACAPS, which is an independent information provider, free from the bias or vested interests of a specific enterprise, sector, or region. ACAPS consulted government, media, United Nations, and other organisations' sources. The #COVID19 Government Measures Dataset puts together all the measures implemented by governments worldwide in response to the Coronavirus pandemic. The data in this dataset is not updated daily, but the major events and restrictions imposed are described in a comments column.
Analysts could use these data to explain and assess the effect these measures had on the decline or increase in the number of cases and deaths.

# -----------------------------------------__E__xtract-----------------------------------------

```
# Dependencies
import requests
import pandas as pd
import datetime
from sqlalchemy import create_engine
```

## __Source 1: Following the API call, we created a DataFrame in pandas__

```
# Source 1: we picked our first data source of covid 19 daily data from the following API
# Source 1: Calling the API
url = "https://api.covid19api.com/all"
response1 = requests.get(url).json()
response1

country = []
date = []
total_cases = []
total_deaths = []
total_recovered = []

for row in response1:
    country.append(row['Country'])
    date.append(row["Date"])
    total_cases.append(row['Confirmed'])
    total_deaths.append(row['Deaths'])
    total_recovered.append(row['Recovered'])

# creating the dataframe
covid_df = pd.DataFrame({
    'Country': country,
    'Date': date,
    'TotalCases': total_cases,
    'TotalDeaths': total_deaths,
    'TotalRecovered': total_recovered
})

covid_df.head()
```

## __Source 2: After downloading the CSV file, we imported it into another DataFrame__

```
## we pulled the contingency plans from OXFORD COVID-19 Government Response Tracker(OxCGRT)
## the website https://data.humdata.org/dataset/oxford-covid-19-government-response-tracker

# Source 2: read from csv file
government_df = pd.read_csv('Resources/government_contingency.csv')
government_df
```

The COVID19 Government Measures Dataset puts together all the measures implemented by governments worldwide in response to the Coronavirus pandemic. Data collection includes secondary data review. The researched information available falls into five categories:

- Social distancing
- Movement restrictions
- Public health measures
- Social and economic measures
- Lockdowns

Each category is broken down into several types of measures.
ACAPS consulted government, media, United Nations, and other organisation sources. For any comments, please contact us at info@acaps.org. Please note that some measures, together with non-compliance policies, may not be recorded, and the exact date of implementation may not be accurate in some cases, due to the different reporting practices of the primary data sources we used.

## __Source 3: We imported only one sheet from the downloaded Excel file__

```
# Source 3: read from the Excel file
xlFile = pd.ExcelFile(r'Resources/acaps_covid19_government_measures_dataset.xlsx')
special_measures_df = pd.read_excel(xlFile, sheet_name='Database')
special_measures_df
```

# -----------------------------------------__T__ransform-----------------------------------------

## Source 1:
### -Change the date format
### -Clean the data
### -Drop the duplicates

```
# Source 1: changing the date format
covid_df.loc[:, 'Date'] = pd.to_datetime(covid_df.loc[:, 'Date']).apply(lambda x: x.date())
covid_df['Date'] = covid_df['Date'].apply(lambda x: pd.to_datetime(str(x)))
covid_df.head()

# Source 1: discovering the messiness
covid_df.shape
# The shape shows that the data is not consistent, so there must be redundancies
days = len(covid_df['Date'].value_counts())
days
date_percountry = covid_df['Country'].value_counts()
date_percountry

# Collapse multiple rows for the same country and date by keeping the max values.
# Note: covid_df is reassigned each iteration so earlier countries' fixes are kept.
countries = covid_df['Country'].unique()
for c in countries:
    if date_percountry[c] > days:
        country_df = covid_df.loc[covid_df['Country'] == c]
        max_df = country_df.groupby('Date').max().reset_index()
        new_df = covid_df.loc[covid_df['Country'] != c]
        covid_df = pd.concat([new_df, max_df])
clean_covid_df = covid_df.copy()
clean_covid_df
clean_covid_df.shape

# There is still some duplicate data
clean_covid_df['Date'].value_counts()
# There are more than 186 (the number of countries) rows for some dates,
# so we must delete the duplicate dates for the same country
clean_covid_df = clean_covid_df.drop_duplicates(subset=['Date', 'Country'])
# Now the shape shows that the data is fully clean,
# so we write it to a CSV to save time
clean_covid_df.to_csv('Resources/covid_from_API.CSV')
clean_covid_df
```

## __Source 2:__
### -Delete the extra header row
### -Rename the columns to replace unwanted spaces with _ and unify the format
### -Convert the date format
### -Pick useful columns
### -Drop NaN values and duplicates by country name and date

```
# Source 2:
# Delete the first row (a second header row)
clean_government_df = government_df.loc[government_df['CountryName'] != '#country'].copy()

# Determine the date data type
government_df['Date'].dtype

# Convert the date format from object to datetime
clean_government_df['Date'] = clean_government_df['Date'].apply(lambda x: pd.to_datetime(str(x), format='%Y%m%d'))

# Rename the columns
clean_government_df.columns = ['CountryName','CountryCode','Date','C1_School_closing','C1_Flag','C2_Workplace_closing','C2_Flag','C3_Cancel_public_events','C3_Flag','C4_Restrictions_on_gatherings','C4_Flag','C5_Close_public_transport','C5_Flag','C6_Stay_at_home_requirements','C6_Flag','C7_Restrictions_on_internal_movement','C7_Flag','C8_International_travel_controls','E1_Income_support','E1_Flag','E2_Debt_contract_relief','E3_Fiscal_measures','E4_International_support','H1_Public_information_campaigns','H1_Flag','H2_Testing_policy','H3_Contact_tracing','H4_Emergency_investment_in_healthcare','H5_Investment_in_vaccines','M1_Wildcard','ConfirmedCases','ConfirmedDeaths','StringencyIndex','StringencyIndexForDisplay','StringencyLegacyIndex','StringencyLegacyIndexForDisplay','GovernmentResponseIndex','GovernmentResponseIndexForDisplay','ContainmentHealthIndex','ContainmentHealthIndexForDisplay','EconomicSupportIndex','EconomicSupportIndexForDisplay']
clean_government_df.head()

# Pick 24 columns out of 42
clean_government_df = clean_government_df[['CountryName', 'Date', 'C1_School_closing', 'C2_Workplace_closing',
    'C3_Cancel_public_events', 'C4_Restrictions_on_gatherings', 'C5_Close_public_transport',
    'C6_Stay_at_home_requirements', 'C7_Restrictions_on_internal_movement', 'C8_International_travel_controls',
    'E1_Income_support', 'E2_Debt_contract_relief', 'E3_Fiscal_measures', 'E4_International_support',
    'H1_Public_information_campaigns', 'H2_Testing_policy', 'H3_Contact_tracing',
    'H4_Emergency_investment_in_healthcare', 'H5_Investment_in_vaccines', 'StringencyIndex',
    'StringencyLegacyIndex', 'GovernmentResponseIndex', 'ContainmentHealthIndex', 'EconomicSupportIndex']]
clean_government_df

# Drop the NaN values to get rid of the unfilled data, which is mostly the last week
clean_government_df = clean_government_df.dropna()
clean_government_df = clean_government_df.drop_duplicates(subset=['CountryName', 'Date'])
clean_government_df
clean_government_df['CountryName'].value_counts()
```

## __Source 3__
### -Select columns based on data completeness and analysis potential
### -Change the date format to match source 1
### -Check COUNTRY counts

```
# Source 3: choosing useful columns
clean_special_measures_df = special_measures_df[['COUNTRY','DATE_IMPLEMENTED','REGION','LOG_TYPE','CATEGORY','MEASURE','TARGETED_POP_GROUP','COMMENTS','SOURCE','SOURCE_TYPE','LINK']].copy()
clean_special_measures_df

# Source 3: changing the date format
clean_special_measures_df.loc[:, 'DATE_IMPLEMENTED'] = pd.to_datetime(clean_special_measures_df.loc[:, 'DATE_IMPLEMENTED']).apply(lambda x: x.date())
clean_special_measures_df['DATE_IMPLEMENTED'] = clean_special_measures_df['DATE_IMPLEMENTED'].apply(lambda x: pd.to_datetime(str(x)))
clean_special_measures_df.head()
clean_special_measures_df['COUNTRY'].value_counts()
```

## __Ensure source 2 and 3 foreign key values exist in source 1 primary keys__

__Note:__ The table covid_statistics was deemed to hold the primary statistical results for analysis. A cleaning method was required to eliminate countries and dates that did not exist within the covid_statistics dataframe.
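Both sources rely on `drop_duplicates(subset=...)`, which by default keeps the first row for each key. A toy frame with illustrative values makes the behavior concrete:

```python
import pandas as pd

toy = pd.DataFrame({
    "CountryName": ["Aruba", "Aruba", "Aruba"],
    "Date": ["2020-03-01", "2020-03-01", "2020-03-02"],
    "StringencyIndex": [11.1, 22.2, 33.3],
})

# Default keep='first': the 22.2 row (second occurrence of the same key) is dropped
deduped = toy.drop_duplicates(subset=["CountryName", "Date"])
print(len(deduped))
```

Passing `keep='last'` or `keep=False` would instead retain the last occurrence or drop every duplicated key entirely.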
The use of LEFT JOINs between government_measures_stringency and government_measures_keydate, independently, with the covid_statistics dataframe achieved the desired result. Keeping the dataframes independent gives the data analyst flexibility to perform various operations.

### Merge and clean source 1 to source 2, dropping all source 1 columns and null values in the source 2 foreign key columns.

```
merge1_2 = pd.merge(clean_covid_df, clean_government_df,
                    left_on=['Country', 'Date'], right_on=['CountryName', 'Date'], how='left')
merge1_2 = merge1_2.drop(columns=['Country', 'TotalCases', 'TotalDeaths', 'TotalRecovered'])
merge1_2 = merge1_2[merge1_2['CountryName'].notna()]
merge1_2 = merge1_2[merge1_2['Date'].notna()]
merge1_2.head()
```

### Merge and clean source 1 to source 3, dropping all source 1 columns and null values in the source 3 foreign key columns.

```
merge1_3 = pd.merge(clean_covid_df, clean_special_measures_df,
                    left_on=['Country', 'Date'], right_on=['COUNTRY', 'DATE_IMPLEMENTED'], how='left')
merge1_3 = merge1_3.drop(columns=['Country', 'Date', 'TotalCases', 'TotalDeaths', 'TotalRecovered'])
merge1_3 = merge1_3.dropna()
merge1_3.head()
```

# -----------------------------------------__L__oad-----------------------------------------

### Choosing PostgreSQL as the database
#### -PostgreSQL was chosen because the data is related by country and date, a relationship that maps naturally onto a relational database.
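Before building the ERD, the left-join-then-filter idea above, keeping only rows whose (country, date) key exists in the primary table, can be sanity-checked on toy frames (names and values are illustrative):

```python
import pandas as pd

primary = pd.DataFrame({"Country": ["A", "A", "B"],
                        "Date": ["d1", "d2", "d1"]})
measures = pd.DataFrame({"CountryName": ["A", "C"],
                         "Date": ["d1", "d1"],
                         "Measure": ["schools closed", "curfew"]})

# Left-join on the key, then drop rows where the right side had no match;
# the row for country C never appears because C/d1 has no primary-key match
merged = pd.merge(primary, measures,
                  left_on=["Country", "Date"], right_on=["CountryName", "Date"],
                  how="left")
merged = merged[merged["CountryName"].notna()]
print(merged["Measure"].tolist())
```

Because `Date` appears in both `left_on` and `right_on`, pandas keeps a single `Date` column, which is why the notebook only needs to drop `Country` after the merge.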
## __Creating an ERD diagram__

### ERD PostgreSQL Code

```
# Exported from QuickDBD: https://www.quickdatabasediagrams.com/

# CREATE TABLE "covid_statistics" (
#     "Country" VARCHAR NOT NULL,
#     "Date" DATE NOT NULL,
#     "TotalCases" INT NOT NULL,
#     "TotalDeaths" INT NOT NULL,
#     "TotalRecovered" INT NOT NULL,
#     CONSTRAINT "pk_covid_statistics" PRIMARY KEY (
#         "Country", "Date"
#     )
# );

# CREATE TABLE "government_measures_stringency" (
#     "CountryName" VARCHAR NOT NULL,
#     "Date" DATE NOT NULL,
#     "C1_School_closing" BIGINT NULL,
#     "C2_Workplace_closing" BIGINT NULL,
#     "C3_Cancel_public_events" BIGINT NULL,
#     "C4_Restrictions_on_gatherings" BIGINT NULL,
#     "C5_Close_public_transport" BIGINT NULL,
#     "C6_Stay_at_home_requirements" BIGINT NULL,
#     "C7_Restrictions_on_internal_movement" BIGINT NULL,
#     "C8_International_travel_controls" BIGINT NULL,
#     "E1_Income_support" BIGINT NULL,
#     "E2_Debt_contract_relief" BIGINT NULL,
#     "E3_Fiscal_measures" BIGINT NULL,
#     "E4_International_support" BIGINT NULL,
#     "H1_Public_information_campaigns" BIGINT NULL,
#     "H2_Testing_policy" BIGINT NULL,
#     "H3_Contact_tracing" BIGINT NULL,
#     "H4_Emergency_investment_in_healthcare" BIGINT NULL,
#     "H5_Investment_in_vaccines" BIGINT NULL,
#     "StringencyIndex" BIGINT NULL,
#     "StringencyLegacyIndex" BIGINT NULL,
#     "GovernmentResponseIndex" BIGINT NULL,
#     "ContainmentHealthIndex" BIGINT NULL,
#     "EconomicSupportIndex" BIGINT NULL
# );

# CREATE TABLE "government_measures_keydate" (
#     "id" SERIAL NOT NULL,
#     "COUNTRY" VARCHAR NOT NULL,
#     "DATE_IMPLEMENTED" DATE NOT NULL,
#     "REGION" VARCHAR NULL,
#     "LOG_TYPE" VARCHAR NULL,
#     "CATEGORY" VARCHAR NULL,
#     "MEASURE" VARCHAR NULL,
#     "TARGETED_POP_GROUP" VARCHAR NULL,
#     "COMMENTS" VARCHAR NULL,
#     "SOURCE" VARCHAR NULL,
#     "SOURCE_TYPE" VARCHAR NULL,
#     "LINK" VARCHAR NULL
# );

# ALTER TABLE "government_measures_stringency" ADD CONSTRAINT "fk_government_measures_stringency_CountryName_Date" FOREIGN KEY("CountryName", "Date")
# REFERENCES "covid_statistics" ("Country", "Date");

# ALTER TABLE "government_measures_keydate" ADD CONSTRAINT "fk_government_measures_keydate_COUNTRY_DATE_IMPLEMENTED" FOREIGN KEY("COUNTRY", "DATE_IMPLEMENTED")
# REFERENCES "covid_statistics" ("Country", "Date");
```

### ERD Image

![ERD%20Image.png](attachment:ERD%20Image.png)

## __Connect to the local database__

```
import ETLconfig
from ETLconfig import rds_connection_string

engine = create_engine(f'postgresql+psycopg2://{rds_connection_string}')
```

## __Check for tables__

```
engine.table_names()
```

## __Use pandas to load the dataframes into the database__

```
clean_covid_df.to_sql(name='covid_statistics', con=engine, if_exists='append', index=False)
merge1_2.to_sql(name='government_measures_stringency', con=engine, if_exists='append', index=False)
merge1_3.to_sql(name='government_measures_keydate', con=engine, if_exists='append', index=False)
```
<a href="https://colab.research.google.com/github/thecodinguru/DS-Unit-2-Linear-Models/blob/master/Copy_of_LS_DS_213_assignment.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> Lambda School Data Science *Unit 2, Sprint 1, Module 3* --- # Ridge Regression ## Assignment We're going back to our other **New York City** real estate dataset. Instead of predicting apartment rents, you'll predict property sales prices. But not just for condos in Tribeca... - [ ] Use a subset of the data where `BUILDING_CLASS_CATEGORY` == `'01 ONE FAMILY DWELLINGS'` and the sale price was more than 100 thousand and less than 2 million. - [ ] Do train/test split. Use data from January — March 2019 to train. Use data from April 2019 to test. - [ ] Do one-hot encoding of categorical features. - [ ] Do feature selection with `SelectKBest`. - [ ] Fit a ridge regression model with multiple features. Use the `normalize=True` parameter (or do [feature scaling](https://scikit-learn.org/stable/modules/preprocessing.html) beforehand — use the scaler's `fit_transform` method with the train set, and the scaler's `transform` method with the test set) - [ ] Get mean absolute error for the test set. - [ ] As always, commit your notebook to your fork of the GitHub repo. The [NYC Department of Finance](https://www1.nyc.gov/site/finance/taxes/property-rolling-sales-data.page) has a glossary of property sales terms and NYC Building Class Code Descriptions. The data comes from the [NYC OpenData](https://data.cityofnewyork.us/browse?q=NYC%20calendar%20sales) portal. ## Stretch Goals Don't worry, you aren't expected to do all these stretch goals! These are just ideas to consider and choose from. - [ ] Add your own stretch goal(s) ! - [ ] Instead of `Ridge`, try `LinearRegression`. Depending on how many features you select, your errors will probably blow up! 
💥 - [ ] Instead of `Ridge`, try [`RidgeCV`](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.RidgeCV.html). - [ ] Learn more about feature selection: - ["Permutation importance"](https://www.kaggle.com/dansbecker/permutation-importance) - [scikit-learn's User Guide for Feature Selection](https://scikit-learn.org/stable/modules/feature_selection.html) - [mlxtend](http://rasbt.github.io/mlxtend/) library - scikit-learn-contrib libraries: [boruta_py](https://github.com/scikit-learn-contrib/boruta_py) & [stability-selection](https://github.com/scikit-learn-contrib/stability-selection) - [_Feature Engineering and Selection_](http://www.feat.engineering/) by Kuhn & Johnson. - [ ] Try [statsmodels](https://www.statsmodels.org/stable/index.html) if you’re interested in more inferential statistical approach to linear regression and feature selection, looking at p values and 95% confidence intervals for the coefficients. - [ ] Read [_An Introduction to Statistical Learning_](http://faculty.marshall.usc.edu/gareth-james/ISL/ISLR%20Seventh%20Printing.pdf), Chapters 1-3, for more math & theory, but in an accessible, readable way. - [ ] Try [scikit-learn pipelines](https://scikit-learn.org/stable/modules/compose.html). ``` %%capture import sys # If you're on Colab: if 'google.colab' in sys.modules: DATA_PATH = 'https://raw.githubusercontent.com/LambdaSchool/DS-Unit-2-Applied-Modeling/master/data/' !pip install category_encoders==2.* # If you're working locally: else: DATA_PATH = '../data/' # Ignore this Numpy warning when using Plotly Express: # FutureWarning: Method .ptp is deprecated and will be removed in a future version. Use numpy.ptp instead. 
import warnings warnings.filterwarnings(action='ignore', category=FutureWarning, module='numpy') import pandas as pd import pandas_profiling # Read New York City property sales data df = pd.read_csv(DATA_PATH+'condos/NYC_Citywide_Rolling_Calendar_Sales.csv') # Change column names: replace spaces with underscores df.columns = [col.replace(' ', '_') for col in df] # SALE_PRICE was read as strings. # Remove symbols, convert to integer df['SALE_PRICE'] = ( df['SALE_PRICE'] .str.replace('$','') .str.replace('-','') .str.replace(',','') .astype(int) ) # BOROUGH is a numeric column, but arguably should be a categorical feature, # so convert it from a number to a string df['BOROUGH'] = df['BOROUGH'].astype(str) # Reduce cardinality for NEIGHBORHOOD feature # Get a list of the top 10 neighborhoods top10 = df['NEIGHBORHOOD'].value_counts()[:10].index # At locations where the neighborhood is NOT in the top 10, # replace the neighborhood with 'OTHER' df.loc[~df['NEIGHBORHOOD'].isin(top10), 'NEIGHBORHOOD'] = 'OTHER' #Let's explore our data print(df.shape) df.head() # Use a subset of the data where BUILDING_CLASS_CATEGORY == '01 ONE FAMILY DWELLINGS' # and the sale price was more than 100 thousand and less than 2 million. df_single = df[(df['BUILDING_CLASS_CATEGORY']== '01 ONE FAMILY DWELLINGS') & (df['SALE_PRICE'] < 2000000) & (df['SALE_PRICE'] > 100000)] print(df_single.info()) df_single.head() # I'll assign Sales Price as our target y = df_single['SALE_PRICE'] y.shape # I'll put the rest of the features into a dataframe. # Also, I'll need to drop our target from the dataframe X = df_single.drop('SALE_PRICE', axis = 1) X.shape #Setting training data to January — March 2019 to train. 
# Setting testing data from April 2019 onward
X_train = X[(X['SALE_DATE'] >= '01/01/2019') & (X['SALE_DATE'] < '04/01/2019')]
y_train = y[y.index.isin(X_train.index)]

X_test = X[X['SALE_DATE'] >= '04/01/2019']
y_test = y[y.index.isin(X_test.index)]

print(X_train.shape)
print(X_test.shape)
```

# Encoding

```
# Explore our categorical data
print(df_single.info())
df_single.describe(exclude='number')

# Create a subset of just the categorical data,
# and drop the NaN-heavy Apartment Number column
df_single_train = df_single.select_dtypes(exclude='number').drop('APARTMENT_NUMBER', axis=1)
df_single_train.head()

# One-hot encoding
# 1. Import the class
from sklearn.preprocessing import OneHotEncoder

# 2. Instantiate
ohe = OneHotEncoder(sparse=False)

# 3. Fit the transformer to the categorical data
ohe.fit(df_single_train)

# 4. Transform the data
df_single_trans = ohe.transform(df_single_train)

# Check that it worked
print(type(df_single_trans))
df_single_trans
```

# SelectKBest Feature Selection

```
# 1. Import the class
from sklearn.feature_selection import SelectKBest

# 2. Instantiate
selector = SelectKBest(k=5)

# 3. Fit and transform the data
# Excluding EASE-MENT because of NaNs
X_train = X_train.select_dtypes(include='number').drop('EASE-MENT', axis=1)
X_test = X_test.select_dtypes(include='number').drop('EASE-MENT', axis=1)

X_train_selected = selector.fit_transform(X_train, y_train)
X_test_selected = selector.transform(X_test)
print(X_train.shape)
print(X_train_selected.shape)

# 4. Create a mask to view the selected features
mask = selector.get_support()

# Map the mask back onto the column names
X_train.columns[mask]
```

# Ridge Regression

```
# Start with a baseline
mean_y = y_train.mean()
print(f'Baseline mean (January — March 2019): ${mean_y:.2f}')

# Let's see how much error is in our baseline train data
from sklearn.metrics import mean_absolute_error

y_pred = [y_train.mean()] * len(y_train)
mae = mean_absolute_error(y_train, y_pred)
print(f'Train Error (January — March 2019): ${mae:.2f}')

# Let's see how much error is in our baseline test data
y_pred = [y_train.mean()] * len(y_test)
mae = mean_absolute_error(y_test, y_pred)
print(f'Test Error (From April 2019): ${mae:.2f}')

# Ridge regression
# 1. Import the model
from sklearn.linear_model import Ridge

# 2. Instantiate
model = Ridge(normalize=True)

# 3. Fit the model to the training data from SelectKBest
model.fit(X_train_selected, y_train)

# 4. Apply the model to the test data
y_pred = model.predict(X_test_selected)

# 5. Print out metrics
from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score
print(f'MAE: {mean_absolute_error(y_test, y_pred):.2f}')
print(f'MSE: {mean_squared_error(y_test, y_pred):.2f}')
print(f'R2: {r2_score(y_test, y_pred):.2f}')
```
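For intuition, ridge regression just adds a penalty term to the normal equations. A minimal numpy sketch of the closed-form solution on synthetic data (the toy coefficients and alpha are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))
true_w = np.array([2.0, -1.0, 0.5])
y = X @ true_w + rng.normal(scale=0.1, size=50)

alpha = 1.0
# Closed form: w = (X^T X + alpha*I)^{-1} X^T y
# The alpha*I term shrinks the coefficients slightly toward zero
w = np.linalg.solve(X.T @ X + alpha * np.eye(X.shape[1]), X.T @ y)
print(np.round(w, 2))
```

With `alpha = 0` this reduces to ordinary least squares; larger `alpha` shrinks the coefficients more aggressively.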
### Importing the required modules/packages ``` import numpy as np import matplotlib.pyplot as plt import pandas as pd import re import nltk import string import scipy as sp from sklearn.model_selection import train_test_split from sklearn.ensemble import RandomForestClassifier from nltk.corpus import stopwords from sklearn.feature_extraction.text import CountVectorizer from sklearn.feature_extraction.text import TfidfVectorizer from sklearn.model_selection import KFold, cross_val_score from sklearn.metrics import precision_recall_fscore_support as score from sklearn.model_selection import GridSearchCV from sklearn.ensemble import GradientBoostingClassifier from sklearn import metrics from textblob import TextBlob, Word from nltk.stem.snowball import SnowballStemmer from sklearn.pipeline import Pipeline from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer # Naive Bayes from sklearn.naive_bayes import MultinomialNB # Logistic Regression from sklearn.linear_model import LogisticRegression ``` ### Loading file and looking into the dimensions of data ``` raw_data = pd.read_csv("SMSSpamCollection.tsv",sep='\t',names=['label','text']) pd.set_option('display.max_colwidth',100) raw_data.head() print(raw_data.shape) pd.crosstab(raw_data['label'],columns = 'label',normalize=True) # Create Test Train Fit # Define X and y. X = raw_data.text y = raw_data.label # Split the new DataFrame into training and testing sets. X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=99, test_size= 0.3) ``` # Create Features using Count Vectorize ``` # Use CountVectorizer to create document-term matrices from X_train and X_test. vect = CountVectorizer() X_train_dtm = vect.fit_transform(X_train) X_test_dtm = vect.transform(X_test) # Rows are documents, columns are terms (aka "tokens" or "features", individual words in this situation). X_train_dtm.shape # Last 50 features print((vect.get_feature_names()[-50:])) # Show vectorizer options. 
vect

# Don't convert to lowercase. For now we keep the original case and run an initial train/test and predict.
vect = CountVectorizer(lowercase=False)
X_train_dtm = vect.fit_transform(X_train)
X_train_dtm.shape
vect.get_feature_names()[-10:]

# Convert sparse vectorizer output to dense for use in a Pipeline
from sklearn.base import TransformerMixin

class DenseTransformer(TransformerMixin):
    def transform(self, X, y=None, **fit_params):
        return X.todense()
    def fit_transform(self, X, y=None, **fit_params):
        self.fit(X, y, **fit_params)
        return self.transform(X)
    def fit(self, X, y=None, **fit_params):
        return self
```

# Use Naive Bayes to predict the ham vs spam label.

```
# Use default options for CountVectorizer.
vect = CountVectorizer()

# Create document-term matrices.
X_train_dtm = vect.fit_transform(X_train)
X_test_dtm = vect.transform(X_test)

# Use multinomial Naive Bayes to predict the label.
nb = MultinomialNB()
nb.fit(X_train_dtm, y_train)
y_pred_class = nb.predict(X_test_dtm)

# Calculate accuracy.
print(metrics.accuracy_score(y_test, y_pred_class))

from sklearn.naive_bayes import GaussianNB
gnb = GaussianNB()

# Create a pipeline with the items needed for execution:
pipeline = Pipeline([
    ('vectorizer', CountVectorizer()),
    ('to_dense', DenseTransformer()),
    ('classifier', GaussianNB())
])

# GaussianNB needs a dense matrix, so densify the document-term matrix before fitting
GNBlearn = gnb.fit(X_train_dtm.toarray(), y_train)

prob_class = gnb.class_prior_
print("Probability of each class: ")
print(gnb.classes_)
print(prob_class)
print()

feature_mean = gnb.theta_
print("Mean of each feature per class: ")
print(pd.DataFrame(data=feature_mean, columns=vect.get_feature_names()))
print()

feature_variance = gnb.sigma_
print("Variance of each feature per class: ")
print(pd.DataFrame(data=feature_variance, columns=vect.get_feature_names()))
```
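The vectorize-then-fit flow used above can be exercised end to end on a tiny made-up corpus. A minimal sketch (the messages and labels below are invented for illustration):

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

# Tiny made-up corpus: two spammy and two ham messages
texts = ["win cash prize now", "claim your free prize",
         "see you at lunch", "meeting moved to noon"]
labels = ["spam", "spam", "ham", "ham"]

vect = CountVectorizer()
dtm = vect.fit_transform(texts)          # document-term matrix (counts)
nb = MultinomialNB().fit(dtm, labels)

# Unseen message built from spam-heavy vocabulary
pred = nb.predict(vect.transform(["free cash prize"]))
print(pred[0])
```

Note that the test message must go through `vect.transform` (not `fit_transform`), so it is projected onto the vocabulary learned from the training corpus.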
## Week 2-2 - Visualizing General Social Survey data

Your mission is to analyze a data set of social attitudes by turning it into vectors, then visualizing the result.

### 1. Choose a topic and get your data

We're going to be working with data from the General Social Survey, which asks Americans thousands of questions every year, over decades. This is an enormous data set, and very many stories have been written from its data. The first thing you need to do is decide which questions and which years you are going to try to analyze. Use their [data explorer](https://gssdataexplorer.norc.org/) to see what's available, and ultimately download an Excel file with the data.

- Click the `Search Variables` button.
- You will need at least a dozen or two related variables. Try selecting some using their `Filter by Module / Subject` interface.
- When you've made your selection, click the `+ All` button to add all listed variables, then choose `Extract Data` under the `Actions` menu.
- Then you have a multi-step process. Step 1 is just naming your extract.
- Step 2: select variables *again!* Click `Add All` in the upper right of the "Variable Cart" in the "Choose Variables" step.
- Step 3: Skip it. You could use this to filter the data in various ways.
- Step 4: Click `Select certain years` to pick one year of data, then check `Excel Workbook (data + metadata)` as the output format.
- Click `Create Extract` and wait a minute or two on the "Extracts" page until the spinner stops and turns into a download link.

You'll end up with a compressed file in tar.gz format, which you should be able to decompress by double-clicking on it. Inside is an Excel file. Open it in Excel (or your favorite spreadsheet program) and resave it as a CSV.

```
import pandas as pd
import numpy as np
from sklearn.decomposition import PCA
import matplotlib.pyplot as plt
import math

# load your data set here
gss = pd.read_csv(...)
```

### 3. Turn people into vectors

I know, it sounds cruel.
We're trying to group people, but computers can only group vectors, so there we are. Translating the spreadsheet you downloaded from GSS Explorer into vectors is a multistep process. Generally, each row of the spreadsheet is one person, and each column is one question.

- First, we need to throw away any extra rows and columns: headers, questions with no data, etc.
- Many GSS questions already have numerical answers. These usually don't require any work.
- But you'll need to turn categorical variables into numbers.

Basically, you have to remove or convert every value that isn't a number. Because this is survey data, we can turn most questions into an integer scale. The cleanup might use functions like this:

```
# drop the last two rows, which are just notes and do not contain data
gss = gss.iloc[0:-2,:]

# Here's a bunch of cleanup code. It probably won't be quite right for your data.
# The goal is to convert all values to small integers, to make them easy to plot with colors below.

# First, replace all of the "Not applicable" values with None
gss = gss.replace({'Not applicable' : None, 'No answer' : None, 'Don\'t know' : None, 'Dont know' : None})

# Manually code Likert scales
gss = gss.replace({'Strongly disagree':-2, 'Disagree':-1, 'Neither agree nor disagree':0, 'Agree':1, 'Strongly agree':2})

# yes/no -> 1/-1
gss = gss.replace({'Yes':1, 'No':-1})

# Some frequency scales should have numeric coding too
gss = gss.replace({'Not at all in the past year' : 0, 'Once in the past year' : 1, 'At least 2 or 3 times in the past year' : 2, 'Once a month' : 3, 'Once a week' : 4, 'More than once a week':5})
gss = gss.replace({'Never or almost never' : 0, 'Once in a while' : 1, 'Some days' : 2, 'Most days' : 3, 'Every day' : 4, 'Many times a day' : 5})

# Drop some columns that don't contain useful information
gss = gss.drop(['Respondent id number', 'Ballot used for interview', 'Gss year for this respondent'], axis=1)

# Turn invalid numeric entries into zeros
gss = gss.replace({np.nan:0.0})
```

### 4. Plot those vectors!

For this assignment, we'll use the PCA projection algorithm to make 2D (or 3D!) pictures of the set of vectors. Once you have the vectors, it should be easy to make a PCA plot using the steps we followed in class.

```
# make a PCA plot here
```

### 5. Add color to help interpretation

Congratulations, you have a picture of a blob of dots. Hopefully, that blob has some structure representing clusters of similar people. To understand what the plot is telling us, it really helps to take one of the original variables and use it to assign colors to the points.

So: pick one of the questions that you think will separate people into natural groups. Use it to set the color of the dots in your scatterplot. By repeating this with different questions, or combining questions (like two binary questions giving rise to a four-color scheme), you should be able to figure out what the structure of the clusters represents.

```
# map integer columns to colors
def col2colors(colvals):
    # gray for zero, then a rainbow.
    # This is set up so yes = 1 = red and no = -1 = indigo
    my_colors = ['gray', 'red','orange','yellow','lightgreen','cyan','blue','indigo']
    # We may have integers higher than len(my_colors) or less than zero,
    # so use the mod operator (%) to make values "wrap around" when they go off the end of the list
    column_ints = colvals.astype(int) % len(my_colors)
    # map each index to the corresponding color
    return column_ints.apply(lambda x: my_colors[x])

# Make a plot using colors from a particular column

# Make another plot using colors from another column

# ... repeat and see if you can figure out what each axis means
```

### 6. Tell us what it means?

What did you learn from this exercise? Did you find the standard left-right divide? Or urban-rural? Early adopters vs. luddites? People with vs. without children? What did you learn? What could end up in a story?
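The wrap-around color mapping in `col2colors` is easy to sanity-check on a toy column. The palette below mirrors the one in the cell, and the sample values are arbitrary:

```python
import pandas as pd

my_colors = ['gray', 'red', 'orange', 'yellow', 'lightgreen',
             'cyan', 'blue', 'indigo']

def col2colors(colvals):
    # mod makes out-of-range ints wrap: in Python, -1 % 8 == 7, so "no" maps to indigo
    column_ints = colvals.astype(int) % len(my_colors)
    return column_ints.apply(lambda x: my_colors[x])

# 0 -> gray, 1 -> red, -1 -> indigo, 9 wraps to 1 -> red
colors = col2colors(pd.Series([0, 1, -1, 9]))
print(colors.tolist())
```

Passing the resulting Series as the `c` argument of `plt.scatter` colors each point by its answer on that question.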
```
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn import preprocessing
from sklearn.ensemble import RandomForestClassifier
from sklearn import svm
from sklearn.metrics import precision_score, recall_score
import matplotlib.pyplot as plt

# Reading train.csv
data = pd.read_csv('train.csv')

# Show the actual data
data

# Show the first few rows
data.head(10)

# Count the null values
null_values = data.isnull().sum()
null_values

plt.plot(null_values)
plt.show()
```

## Data Processing

```
def handle_non_numerical_data(df):
    columns = df.columns.values
    for column in columns:
        text_digit_vals = {}
        def convert_to_int(val):
            return text_digit_vals[val]
        if df[column].dtype != np.int64 and df[column].dtype != np.float64:
            column_contents = df[column].values.tolist()
            # finding just the uniques
            unique_elements = set(column_contents)
            x = 0
            for unique in unique_elements:
                if unique not in text_digit_vals:
                    text_digit_vals[unique] = x
                    x += 1
            df[column] = list(map(convert_to_int, df[column]))
    return df

y_target = data['Survived']
x_train = data[['Pclass', 'Age', 'Sex', 'SibSp', 'Parch', 'Fare', 'Embarked', 'Ticket']].copy()
x_train = handle_non_numerical_data(x_train)
x_train.head()

fare = pd.DataFrame(x_train['Fare'])
# Normalizing
min_max_scaler = preprocessing.MinMaxScaler()
newfare = min_max_scaler.fit_transform(fare)
x_train['Fare'] = newfare
x_train

null_values = x_train.isnull().sum()
null_values

plt.plot(null_values)
plt.show()

# Fill the NaN values with the mean age
x_train['Age'] = x_train['Age'].fillna(x_train['Age'].mean())
print("Number of NULL values", x_train['Age'].isnull().sum())
print(x_train.head(3))

x_train['Sex'] = x_train['Sex'].replace('male', 0)
x_train['Sex'] = x_train['Sex'].replace('female', 1)

corr = x_train.corr()
corr.style.background_gradient()

def plot_corr(df, size=10):
    corr = df.corr()
    fig, ax = plt.subplots(figsize=(size, size))
    ax.matshow(corr)
    plt.xticks(range(len(corr.columns)), corr.columns)
    plt.yticks(range(len(corr.columns)), corr.columns)

# plot_corr(x_train)
x_train.corr()
corr.style.background_gradient()

# Dividing the data into train and test sets
X_train, X_test, Y_train, Y_test = train_test_split(x_train, y_target, test_size=0.4, random_state=40)

clf = RandomForestClassifier()
clf.fit(X_train, Y_train)
print(clf.predict(X_test))
print("Accuracy: ", clf.score(X_test, Y_test))

## Testing the model
test_data = pd.read_csv('test.csv')
test_data.head(3)

### Preprocessing the test data
test_data = test_data[['Pclass', 'Age', 'Sex', 'SibSp', 'Parch', 'Fare', 'Ticket', 'Embarked']]
test_data = handle_non_numerical_data(test_data)

fare = pd.DataFrame(test_data['Fare'])
min_max_scaler = preprocessing.MinMaxScaler()
newfare = min_max_scaler.fit_transform(fare)
test_data['Fare'] = newfare
test_data['Fare'] = test_data['Fare'].fillna(test_data['Fare'].median())
test_data['Age'] = test_data['Age'].fillna(test_data['Age'].median())
test_data['Sex'] = test_data['Sex'].replace('male', 0)
test_data['Sex'] = test_data['Sex'].replace('female', 1)
print(test_data.head())
print(clf.predict(test_data))

from sklearn.model_selection import cross_val_predict
predictions = cross_val_predict(clf, X_train, Y_train, cv=3)
print("Precision:", precision_score(Y_train, predictions))
print("Recall:", recall_score(Y_train, predictions))

from sklearn.metrics import precision_recall_curve
# Getting the probabilities of our predictions
y_scores = clf.predict_proba(X_train)
y_scores = y_scores[:, 1]
precision, recall, threshold = precision_recall_curve(Y_train, y_scores)

def plot_precision_and_recall(precision, recall, threshold):
    plt.plot(threshold, precision[:-1], "r-", label="precision", linewidth=5)
    plt.plot(threshold, recall[:-1], "b", label="recall", linewidth=5)
    plt.xlabel("threshold", fontsize=19)
plt.legend(loc="upper right", fontsize=19) plt.ylim([0, 1]) plt.figure(figsize=(14, 7)) plot_precision_and_recall(precision, recall, threshold) plt.axis([0.3,0.8,0.8,1]) plt.show() def plot_precision_vs_recall(precision, recall): plt.plot(recall, precision, "g--", linewidth=2.5) plt.ylabel("recall", fontsize=19) plt.xlabel("precision", fontsize=19) plt.axis([0, 1.5, 0, 1.5]) plt.figure(figsize=(14, 7)) plot_precision_vs_recall(precision, recall) plt.show() from sklearn.model_selection import cross_val_predict from sklearn.metrics import confusion_matrix predictions = cross_val_predict(clf, X_train, Y_train, cv=3) confusion_matrix(Y_train, predictions) ``` True positive: 293 (We predicted a positive result and it was positive) True negative: 143 (We predicted a negative result and it was negative) False positive: 34 (We predicted a positive result and it was negative) False negative: 64 (We predicted a negative result and it was positive) ### data v ``` import seaborn as sns survived = 'survived' not_survived = 'not survived' fig, axes = plt.subplots(nrows=1, ncols=2,figsize=(10, 4)) women = data[data['Sex']=='female'] men = data[data['Sex']=='male'] ax = sns.distplot(women[women['Survived']==1].Age.dropna(), bins=18, label = survived, ax = axes[0], kde =False) ax = sns.distplot(women[women['Survived']==0].Age.dropna(), bins=40, label = not_survived, ax = axes[0], kde =False) ax.legend() ax.set_title('Female') ax = sns.distplot(men[men['Survived']==1].Age.dropna(), bins=18, label = survived, ax = axes[1], kde = False) ax = sns.distplot(men[men['Survived']==0].Age.dropna(), bins=40, label = not_survived, ax = axes[1], kde = False) ax.legend() _ = ax.set_title('Male') FacetGrid = sns.FacetGrid(data, row='Embarked', size=4.5, aspect=1.6) FacetGrid.map(sns.pointplot, 'Pclass', 'Survived', 'Sex', palette=None, order=None, hue_order=None ) FacetGrid.add_legend() ``` #### Embarked seems to be correlated with survival, depending on the gender. 
Women on port Q and on port S have a higher chance of survival. The inverse is true, if they are at port C. Men have a high survival probability if they are on port C, but a low probability if they are on port Q or S. ``` sns.barplot('Pclass', 'Survived', data=data, color="darkturquoise") plt.show() sns.barplot('Embarked', 'Survived', data=data, color="teal") plt.show() sns.barplot('Sex', 'Survived', data=data, color="aquamarine") plt.show() print(clf.predict(X_test)) print("Accuracy: ",clf.score(X_test, Y_test)) data ```
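As an aside, the `handle_non_numerical_data` function above assigns integer codes by iterating over a Python `set`, so the mapping can differ between runs. A deterministic alternative is `pandas.factorize`, which codes categories in order of first appearance. This is a sketch of the idea, not the notebook's original method; the `demo` frame and `encode_non_numeric` helper are made up for illustration:

```python
import pandas as pd

# Sketch: a deterministic alternative to handle_non_numerical_data.
# pd.factorize assigns integer codes in order of first appearance,
# so the encoding is reproducible across runs (unlike iterating a set).
def encode_non_numeric(df):
    df = df.copy()
    for column in df.columns:
        if df[column].dtype == object:
            df[column], _ = pd.factorize(df[column])
    return df

demo = pd.DataFrame({'Sex': ['male', 'female', 'male'],
                     'Fare': [7.25, 71.28, 8.05]})
encoded = encode_non_numeric(demo)
print(encoded['Sex'].tolist())  # [0, 1, 0]
```

Numeric columns such as `Fare` pass through unchanged, so the helper can be applied to the whole feature frame in one call.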
# Extracting training data from the ODC <img align="right" src="../../Supplementary_data/dea_logo.jpg">

* [**Sign up to the DEA Sandbox**](https://docs.dea.ga.gov.au/setup/sandbox.html) to run this notebook interactively from a browser
* **Compatibility:** Notebook currently compatible with the `DEA Sandbox` environment
* **Products used:** [ls8_nbart_geomedian_annual](https://explorer.sandbox.dea.ga.gov.au/products/ls8_nbart_geomedian_annual/extents), [ls8_nbart_tmad_annual](https://explorer.sandbox.dea.ga.gov.au/products/ls8_nbart_tmad_annual/extents), [fc_percentile_albers_annual](https://explorer.sandbox.dea.ga.gov.au/products/fc_percentile_albers_annual/extents)

## Background

**Training data** is the most important part of any supervised machine learning workflow: the quality of the training data has a greater impact on the classification than the algorithm used. Large and accurate training data sets are preferable, as increasing the training sample size results in increased classification accuracy ([Maxwell et al. 2018](https://www.tandfonline.com/doi/full/10.1080/01431161.2018.1433343)). A review of training data methods in the context of Earth Observation is available [here](https://www.mdpi.com/2072-4292/12/6/1034).

When creating training labels, be sure to capture the **spectral variability** of the class, and to use imagery from the time period you want to classify (rather than relying on basemap composites).

Another common problem with training data is **class imbalance**. This can occur when one of your classes is relatively rare and therefore comprises a smaller proportion of the training set. When imbalanced data is used, the final classification commonly under-predicts less abundant classes relative to their true proportion.

There are many platforms to use for gathering training labels; the best one to use depends on your application.
GIS platforms are great for collecting training data, as they are highly flexible and mature platforms; [Geo-Wiki](https://www.geo-wiki.org/) and [Collect Earth Online](https://collect.earth/home) are two open-source websites that may also be useful depending on the reference data strategy employed. Alternatively, there are many pre-existing training datasets on the web that may be useful, e.g. [Radiant Earth](https://www.radiant.earth/) manages a growing number of reference datasets for use by anyone.

## Description

This notebook will extract training data (feature layers, in machine learning parlance) from the `open-data-cube` using labelled geometries within a geojson. The default example will use the crop/non-crop labels within the `'data/crop_training_WA.geojson'` file. This reference data was acquired and pre-processed from the USGS's Global Food Security Analysis Data portal [here](https://croplands.org/app/data/search?page=1&page_size=200) and [here](https://e4ftl01.cr.usgs.gov/MEASURES/GFSAD30VAL.001/2008.01.01/).

To do this, we rely on a custom `dea-notebooks` function called `collect_training_data`, contained within the [dea_tools.classification](../../Tools/dea_tools/classification.py) script. The principal goal of this notebook is to familiarise users with this function so they can extract the appropriate data for their use-case. The default example also highlights extracting a set of useful feature layers for generating a cropland mask for WA.

1. Preview the polygons in our training data by plotting them on a basemap
2. Extract training data from the datacube using `collect_training_data`'s inbuilt feature layer parameters
3. Extract training data from the datacube using a **custom defined feature layer function** that we can pass to `collect_training_data`
4. Export the training data to disk for use in subsequent scripts

***

## Getting started

To run this analysis, run all the cells in the notebook, starting with the "Load packages" cell.
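The class-imbalance issue raised in the Background section is easy to quantify once labels are in hand. The sketch below uses made-up integer labels purely for illustration; in this notebook the real labels come from the geojson's `class` column:

```python
import numpy as np

# Sketch: quantify class imbalance for a set of integer class labels.
# These labels are invented for illustration only.
labels = np.array([1, 1, 1, 1, 1, 1, 1, 0, 0, 1])

classes, counts = np.unique(labels, return_counts=True)
proportions = counts / counts.sum()
for c, p in zip(classes, proportions):
    print(f"class {c}: {p:.0%} of samples")

# A simple imbalance ratio: majority class count / minority class count
imbalance_ratio = counts.max() / counts.min()
print(f"imbalance ratio: {imbalance_ratio:.1f}")  # 4.0 here (8 vs 2)
```

A large ratio suggests collecting more samples of the rare class, or compensating later with class weights or resampling during model training.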
### Load packages ``` %matplotlib inline import os import sys import datacube import numpy as np import xarray as xr import subprocess as sp import geopandas as gpd from odc.io.cgroups import get_cpu_quota from datacube.utils.geometry import assign_crs sys.path.append('../../Scripts') from dea_plotting import map_shapefile from dea_bandindices import calculate_indices from dea_classificationtools import collect_training_data import warnings warnings.filterwarnings("ignore") ``` ## Analysis parameters * `path`: The path to the input vector file from which we will extract training data. A default geojson is provided. * `field`: This is the name of column in your shapefile attribute table that contains the class labels. **The class labels must be integers** ``` path = 'data/crop_training_WA.geojson' field = 'class' ``` ### Find the number of CPUs ``` ncpus = round(get_cpu_quota()) print('ncpus = ' + str(ncpus)) ``` ## Preview input data We can load and preview our input data shapefile using `geopandas`. The shapefile should contain a column with class labels (e.g. 'class'). These labels will be used to train our model. > Remember, the class labels **must** be represented by `integers`. ``` # Load input data shapefile input_data = gpd.read_file(path) # Plot first five rows input_data.head() # Plot training data in an interactive map map_shapefile(input_data, attribute=field) ``` ## Extracting training data The function `collect_training_data` takes our geojson containing class labels and extracts training data (features) from the datacube over the locations specified by the input geometries. The function will also pre-process our training data by stacking the arrays into a useful format and removing any `NaN` or `inf` values. `Collect_training_data` has the ability to generate many different types of **feature layers**. 
Relatively simple layers can be calculated using pre-defined parameters within the function, while more complex layers can be computed by passing in a `custom_func`. To begin with, let's try generating feature layers using the pre-defined methods. The in-built feature layer parameters are described below: * `product`: The name of the product to extract from the datacube. In this example we use a Landsat 8 geomedian composite from 2019, `'ls8_nbart_geomedian_annual'` * `time`: The time range from which to extract data * `calc_indices`: This parameter provides a method for calculating a number of remote sensing indices (e.g. `['NDWI', 'NDVI']`). Any of the indices found in the [dea_tools.bandindices](../../Tools/dea_tools/bandindices.py) script can be used here * `drop`: If this variable is set to `True`, and 'calc_indices' are supplied, the spectral bands will be dropped from the dataset leaving only the band indices as data variables in the dataset. * `reduce_func`: The classification models we're applying here require our training data to be in two dimensions (ie. `x` & `y`). If our data has a time-dimension (e.g. if we load in an annual time-series of satellite images) then we need to collapse the time dimension. `reduce_func` is simply the summary statistic used to collapse the temporal dimension. Options are 'mean', 'median', 'std', 'max', 'min', and 'geomedian'. In the default example we are loading a geomedian composite, so there is no time dimension to reduce. * `zonal_stats`: An optional string giving the names of zonal statistics to calculate across each polygon. Default is `None` (all pixel values are returned). Supported values are 'mean', 'median', 'max', and 'min'. * `return_coords` : If `True`, then the training data will contain two extra columns 'x_coord' and 'y_coord' corresponding to the x,y coordinate of each sample. 
This variable can be useful for handling spatial autocorrelation between samples later on in the ML workflow when we conduct k-fold cross validation.

> Note: `collect_training_data` also has a number of additional parameters for handling ODC I/O read failures, where polygons that return an excessive number of null values can be resubmitted to the multiprocessing queue. Check out the [docs](https://github.com/GeoscienceAustralia/dea-notebooks/blob/68d3526f73779f3316c5e28001c69f556c0d39ae/Tools/dea_tools/classification.py#L661) to learn more.

In addition to the parameters required for `collect_training_data`, we also need to set up a few parameters for the Open Data Cube query, such as `measurements` (the bands to load from the satellite), the `resolution` (the cell size), and the `output_crs` (the output projection).

```
# Set up our inputs to collect_training_data
products = ['ls8_nbart_geomedian_annual']
time = ('2014')
reduce_func = None
calc_indices = ['NDVI', 'MNDWI']
drop = False
zonal_stats = 'median'
return_coords = True

# Set up the inputs for the ODC query
measurements = ['blue', 'green', 'red', 'nir', 'swir1', 'swir2']
resolution = (-30, 30)
output_crs = 'epsg:3577'
```

Generate a datacube query object from the parameters above:

```
query = {
    'time': time,
    'measurements': measurements,
    'resolution': resolution,
    'output_crs': output_crs,
    'group_by': 'solar_day',
}
```

Now let's run the `collect_training_data` function. We will limit this run to only a subset of all samples (the first 100), as here we are only demonstrating the use of the function. Further on in the notebook we will rerun this function with all the polygons in the training data.

> **Note**: With supervised classification, it's common to have many, many labelled geometries in the training data. `collect_training_data` can parallelize across the geometries in order to speed up the extraction of training data. Setting `ncpus>1` will automatically trigger the parallelization.
However, it's best to set `ncpus=1` to begin with to assist with debugging before triggering the parallelization. You can also limit the number of polygons to run when checking code. For example, passing in `gdf=input_data[0:5]` will only run the code over the first 5 polygons.

```
column_names, model_input = collect_training_data(gdf=input_data[0:100],
                                                  products=products,
                                                  dc_query=query,
                                                  ncpus=ncpus,
                                                  return_coords=return_coords,
                                                  field=field,
                                                  calc_indices=calc_indices,
                                                  reduce_func=reduce_func,
                                                  drop=drop,
                                                  zonal_stats=zonal_stats)
```

The function returns two numpy arrays. The first (`column_names`) contains a list of the names of the feature layers we've computed:

```
print(column_names)
```

The second array (`model_input`) contains the data from our labelled geometries. The first item in the array is the class integer (e.g. in the default example, 1 = 'crop' or 0 = 'noncrop'); the subsequent items are the values for each feature layer we computed:

```
print(np.array_str(model_input, precision=2, suppress_small=True))
```

## Custom feature layers

The feature layers that are most relevant for discriminating the classes of your classification problem may be more complicated than those provided in the `collect_training_data` function. In this case, we can pass a custom feature layer function through the `custom_func` parameter. Below, we will use a custom function to recollect training data (overwriting the previous example above).

* `custom_func`: A custom function for generating feature layers. If this parameter is set, all other options (excluding `zonal_stats`) will be ignored. The result of the `custom_func` must be a single xarray dataset containing 2D coordinates (i.e. x and y, with no time dimension). The custom function has access to the datacube dataset extracted using the `dc_query` params. To load other datasets, you can use the `like=ds.geobox` parameter in `dc.load`.

First, let's define a custom feature layer function.
This function is fairly basic and replicates some of what the `collect_training_data` function can do, but you can make these custom functions as complex as you like. We will calculate some band indices on the Landsat 8 geomedian, append the ternary median absolute deviation dataset from the same year ([ls8_nbart_tmad_annual](https://explorer.sandbox.dea.ga.gov.au/products/ls8_nbart_tmad_annual/extents)), and append fractional cover percentiles for the photosynthetic vegetation band, also from the same year ([fc_percentile_albers_annual](https://explorer.sandbox.dea.ga.gov.au/products/fc_percentile_albers_annual/extents)).

```
def custom_reduce_function(ds):

    # Calculate some band indices
    da = calculate_indices(ds,
                           index=['NDVI', 'LAI', 'MNDWI'],
                           drop=False,
                           collection='ga_ls_2')

    # Connect to the datacube to add the TMADs product
    dc = datacube.Datacube(app='custom_feature_layers')

    # Add TMADs dataset
    tmad = dc.load(product='ls8_nbart_tmad_annual',
                   measurements=['sdev', 'edev', 'bcdev'],
                   like=ds.geobox,  # will match geomedian extent
                   time='2014')     # same as geomedian

    # Add fractional cover percentiles (only the PV band)
    fc = dc.load(product='fc_percentile_albers_annual',
                 measurements=['PV_PC_10', 'PV_PC_50', 'PV_PC_90'],
                 like=ds.geobox,  # will match geomedian extent
                 time='2014')     # same as geomedian

    # Merge results into a single dataset
    result = xr.merge([da, tmad, fc], compat='override')

    return result.squeeze()
```

Now, we can pass this function to `collect_training_data`. We will redefine our initial parameters to align with the new custom function. Remember, passing in a `custom_func` to `collect_training_data` means many of the other feature layer parameters are ignored.
``` # Set up our inputs to collect_training_data products = ['ls8_nbart_geomedian_annual'] time = ('2014') zonal_stats = 'median' return_coords = True # Set up the inputs for the ODC query measurements = ['blue', 'green', 'red', 'nir', 'swir1', 'swir2'] resolution = (-30, 30) output_crs = 'epsg:3577' # Generate a new datacube query object query = { 'time': time, 'measurements': measurements, 'resolution': resolution, 'output_crs': output_crs, 'group_by': 'solar_day', } ``` Below we collect training data from the datacube using the custom function. This will take around 5-6 minutes to run all 430 samples on the default sandbox as it only has two cpus. ``` %%time column_names, model_input = collect_training_data( gdf=input_data, products=products, dc_query=query, ncpus=ncpus, return_coords=return_coords, field=field, zonal_stats=zonal_stats, custom_func=custom_reduce_function) print(column_names) print('') print(np.array_str(model_input, precision=2, suppress_small=True)) ``` ## Separate coordinate data By setting `return_coords=True` in the `collect_training_data` function, our training data now has two extra columns called `x_coord` and `y_coord`. We need to separate these from our training dataset as they will not be used to train the machine learning model. Instead, these variables will be used to help conduct Spatial K-fold Cross validation (SKVC) in the notebook `3_Evaluate_optimize_fit_classifier`. For more information on why this is important, see this [article](https://www.tandfonline.com/doi/abs/10.1080/13658816.2017.1346255?journalCode=tgis20). 
``` # Select the variables we want to use to train our model coord_variables = ['x_coord', 'y_coord'] # Extract relevant indices from the processed shapefile model_col_indices = [column_names.index(var_name) for var_name in coord_variables] # Export to coordinates to file np.savetxt("results/training_data_coordinates.txt", model_input[:, model_col_indices]) ``` ## Export training data Once we've collected all the training data we require, we can write the data to disk. This will allow us to import the data in the next step(s) of the workflow. ``` # Set the name and location of the output file output_file = "results/test_training_data.txt" # Grab all columns except the x-y coords model_col_indices = [column_names.index(var_name) for var_name in column_names[0:-2]] # Export files to disk np.savetxt(output_file, model_input[:, model_col_indices], header=" ".join(column_names[0:-2]), fmt="%4f") ``` ## Recommended next steps To continue working through the notebooks in this `Scalable Machine Learning on the ODC` workflow, go to the next notebook `2_Inspect_training_data.ipynb`. 1. **Extracting training data from the ODC (this notebook)** 2. [Inspecting training data](2_Inspect_training_data.ipynb) 3. [Evaluate, optimize, and fit a classifier](3_Evaluate_optimize_fit_classifier.ipynb) 4. [Classifying satellite data](4_Classify_satellite_data.ipynb) 5. [Object-based filtering of pixel classifications](5_Object-based_filtering.ipynb) *** ## Additional information **License:** The code in this notebook is licensed under the [Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0). Digital Earth Australia data is licensed under the [Creative Commons by Attribution 4.0](https://creativecommons.org/licenses/by/4.0/) license. 
**Contact:** If you need assistance, please post a question on the [Open Data Cube Slack channel](http://slack.opendatacube.org/) or on the [GIS Stack Exchange](https://gis.stackexchange.com/questions/ask?tags=open-data-cube) using the `open-data-cube` tag (you can view previously asked questions [here](https://gis.stackexchange.com/questions/tagged/open-data-cube)). If you would like to report an issue with this notebook, you can file one on [Github](https://github.com/GeoscienceAustralia/dea-notebooks). **Last modified:** March 2021 **Compatible datacube version:** ``` print(datacube.__version__) ``` ## Tags Browse all available tags on the DEA User Guide's [Tags Index](https://docs.dea.ga.gov.au/genindex.html)
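As a quick sanity check outside the original workflow, the `np.savetxt`/`np.loadtxt` round trip used in the export cells above can be verified on a toy array (the file name and column names here are invented for the demonstration):

```python
import numpy as np

# Sketch: round-trip check of the export pattern used above.
# np.savetxt writes a '#'-prefixed header line, which np.loadtxt
# skips automatically on read.
column_names = ['class', 'NDVI', 'MNDWI']
model_input = np.array([[1, 0.62, -0.30],
                        [0, 0.15, 0.42]])

np.savetxt("demo_training_data.txt", model_input,
           header=" ".join(column_names), fmt="%4f")

reloaded = np.loadtxt("demo_training_data.txt")
print(reloaded.shape)  # (2, 3)
assert np.allclose(reloaded, model_input)
```

This mirrors how the next notebook in the workflow can read the exported training data back into memory.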
``` import numpy as np import pandas as pd import matplotlib from matplotlib import pyplot as plt %matplotlib inline ``` ## Read in the data *I'm using pandas* ``` data = pd.read_csv('bar.csv') data ``` ## Here is the default bar chart from python ``` f,ax = plt.subplots() ind = np.arange(len(data)) # the x locations for the bars width = 0.5 # the width of the bars rects = ax.bar(ind, data['Value'], width) ``` ## Add some labels ``` f,ax = plt.subplots() ind = np.arange(len(data)) # the x locations for the bars width = 0.5 # the width of the bars rects = ax.bar(ind, data['Value'], width) # add some text for labels, title and axes ticks ax.set_ylabel('Percent') ax.set_title('Percentage of Poor Usage') ax.set_xticks(ind) ax.set_xticklabels(data['Label']) ``` ## Rotate the plot and add gridlines ``` f,ax = plt.subplots() ind = np.arange(len(data)) # the x locations for the bars width = 0.5 # the width of the bars rects = ax.barh(ind, data['Value'], width, zorder=2) # add some text for labels, title and axes ticks ax.set_xlabel('Percent') ax.set_title('Percentage of Poor Usage') ax.set_yticks(ind) ax.set_yticklabels(data['Label']) #add a grid behind the plot ax.grid(color='gray', linestyle='-', linewidth=1, zorder = 1) ``` ## Sort the data, and add the percentage values to each bar ``` f,ax = plt.subplots() #sort the data (nice aspect of pandas dataFrames) data.sort_values('Value', inplace=True) ind = np.arange(len(data)) # the x locations for the bars width = 0.5 # the width of the bars rects = ax.barh(ind, data['Value'], width, zorder=2) # add some text for labels, title and axes ticks ax.set_xlabel('Percent') ax.set_title('Percentage of Poor Usage') ax.set_yticks(ind) ax.set_yticklabels(data['Label']) #add a grid behind the plot ax.grid(color='gray', linestyle='-', linewidth=1, zorder = 1) #I grabbed this from here : https://matplotlib.org/examples/api/barchart_demo.html #and tweaked it slightly for r in rects: h = r.get_height() w = r.get_width() y = r.get_y() if 
(w > 1): x = w - 0.5 else: x = w + 0.5 ax.text(x, y ,'%.1f%%' % w, ha='center', va='bottom', zorder = 3) ``` ## Clean this up a bit * I don't want the grid lines anymore * Make the font larger * Let's change the colors, and highlight one of them * Save the plot ``` #this will change the font globally, but you could also change the fontsize for each label independently font = {'size' : 20} matplotlib.rc('font', **font) f,ax = plt.subplots(figsize=(10,8)) #sort the data (nice aspect of pandas dataFrames) data.sort_values('Value', inplace=True) ind = np.arange(len(data)) # the x locations for the bars width = 0.5 # the width of the bars rects = ax.barh(ind, data['Value'], width, zorder=2) # add some text for labels, title and axes ticks ax.set_title('Percentage of Poor Usage', fontsize = 30) ax.set_yticks(ind) ax.set_yticklabels(data['Label']) #remove all the axes, ticks and lower x label aoff = ['right', 'left', 'top', 'bottom'] for x in aoff: ax.spines[x].set_visible(False) ax.tick_params(length=0) ax.set_xticklabels([' ']*len(data)) #I grabbed this from here : https://matplotlib.org/examples/api/barchart_demo.html #and tweaked it slightly highlight = [4] for i, r in enumerate(rects): h = r.get_height() w = r.get_width() y = r.get_y() if (w >= 10): x = w - 0.75 elif (w > 1): x = w - 0.6 else: x = w + 0.5 r.set_color('gray') if (i in highlight): r.set_color('orange') ax.text(x, y ,'%.1f%%' % w, ha='center', va='bottom', zorder = 3) f.savefig('bar.pdf',format='pdf', bbox_inches = 'tight') ```
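The manual `get_width()`/`ax.text` loop above predates matplotlib's built-in helper. Since matplotlib 3.4, `Axes.bar_label` attaches formatted labels to a bar container in one call; a minimal sketch, assuming matplotlib >= 3.4 (the data here is made up, not the notebook's CSV):

```python
import matplotlib
matplotlib.use("Agg")  # non-interactive backend for this sketch
import matplotlib.pyplot as plt
import numpy as np

values = np.array([2.3, 11.4, 6.1])  # invented demo percentages

f, ax = plt.subplots()
rects = ax.barh(np.arange(len(values)), values, 0.5)

# bar_label (matplotlib >= 3.4) replaces the manual ax.text loop:
# it formats each bar's width and places the label at the bar's edge
annotations = ax.bar_label(rects, fmt='%.1f%%', padding=3)
f.savefig('bar_labeled.png', bbox_inches='tight')
```

One trade-off: `bar_label` places every label outside (or inside) the bars uniformly, so the original loop is still needed if you want per-bar placement logic like the `w > 1` branch above.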
Advanced Lane Finding Project
===

### Run the code in the cell below to extract object points and image points for camera calibration.

```
# Code block: Import
# Import all necessary libraries
import numpy as np
import cv2
import glob
import matplotlib.pyplot as plt
import pickle
import matplotlib.image as mpimg
%matplotlib inline
%matplotlib qt

# Code block: Camera Calibration
# Prepare object points, like (0,0,0), (1,0,0), (2,0,0) ...., (8,5,0)
objp = np.zeros((6*9, 3), np.float32)
objp[:, :2] = np.mgrid[0:9, 0:6].T.reshape(-1, 2)

# Arrays to store object points and image points from all the images
objpoints = []  # 3d points in real world space
imgpoints = []  # 2d points in image plane

# Make a list of calibration images
images = glob.glob('camera_cal/calibration*.jpg')

# Step through the list and search for chessboard corners
for idx, fname in enumerate(images):
    img = cv2.imread(fname)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

    # Find the chessboard corners
    ret, corners = cv2.findChessboardCorners(gray, (9, 6), None)

    # If found, add object points, image points
    if ret == True:
        objpoints.append(objp)
        imgpoints.append(corners)
        # Draw and display the corners
        #cv2.drawChessboardCorners(img, (9, 6), corners, ret)

# Test undistortion on an image
img = cv2.imread('test_images/calibration1.jpg')
img_size = (img.shape[1], img.shape[0])

# Do camera calibration given object points and image points
ret, mtx, dist, rvecs, tvecs = cv2.calibrateCamera(objpoints, imgpoints, img_size, None, None)

# Save the camera calibration result for later use
dist_pickle = {}
dist_pickle["mtx"] = mtx
dist_pickle["dist"] = dist
pickle.dump(dist_pickle, open("output_images/wide_dist_pickle.p", "wb"))
```

### If the above cell ran successfully, you should now have `objpoints` and `imgpoints` needed for camera calibration. Run the cell below to calibrate, calculate distortion coefficients, and test undistortion on an image!

```
# Code block: Functions #1.
Color and Gradient thresholding def thresholding(img, s_thresh=(210, 255), sx_thresh=(20, 100)): img = np.copy(img) # Convert to HLS color space and separate the V channel hls = cv2.cvtColor(img, cv2.COLOR_RGB2HLS) l_channel = hls[:,:,1] s_channel = hls[:,:,2] # Sobel x sobelx = cv2.Sobel(l_channel, cv2.CV_64F, 1, 0) # Take the derivative in x abs_sobelx = np.absolute(sobelx) # Absolute x derivative to accentuate lines away from horizontal scaled_sobel = np.uint8(255*abs_sobelx/np.max(abs_sobelx)) # Threshold x gradient sxbinary = np.zeros_like(scaled_sobel) sxbinary[(scaled_sobel >= sx_thresh[0]) & (scaled_sobel <= sx_thresh[1])] = 1 # Threshold color channel s_binary = np.zeros_like(s_channel) s_binary[(s_channel >= s_thresh[0]) & (s_channel <= s_thresh[1])] = 1 s_binary = s_binary | sxbinary # Stack each channel color_binary = np.dstack((s_binary, s_binary, s_binary)) * 255 return color_binary #2. Finding lanes without prior information def find_lane_pixels(binary_warped): # Take a histogram of the bottom half of the image histogram = np.sum(binary_warped[binary_warped.shape[0]//2:,:], axis=0) # Create an output image to draw on and visualize the result out_img = binary_warped#np.dstack((binary_warped, binary_warped, binary_warped)) # Find the peak of the left and right halves of the histogram # These will be the starting point for the left and right lines midpoint = np.int(histogram.shape[0]//2) leftx_base = np.argmax(histogram[:midpoint],0)[0] rightx_base = np.argmax(histogram[midpoint:],0)[0] + midpoint #print(leftx_base) # HYPERPARAMETERS # Choose the number of sliding windows nwindows = 9 # Set the width of the windows +/- margin margin = 50 # Set minimum number of pixels found to recenter window minpix = 50 # Set height of windows - based on nwindows above and image shape window_height = np.int(binary_warped.shape[0]//nwindows) # Identify the x and y positions of all nonzero pixels in the image nonzero = binary_warped.nonzero() nonzeroy = 
np.array(nonzero[0]) nonzerox = np.array(nonzero[1]) # Current positions to be updated later for each window in nwindows leftx_current = leftx_base rightx_current = rightx_base # Create empty lists to receive left and right lane pixel indices left_lane_inds = [] right_lane_inds = [] # Step through the windows one by one for window in range(nwindows): # Identify window boundaries in x and y (and right and left) win_y_low = binary_warped.shape[0] - (window+1)*window_height win_y_high = binary_warped.shape[0] - window*window_height ### TO-DO: Find the four below boundaries of the window ### win_xleft_low = leftx_current - margin # Update this win_xleft_high = leftx_current + margin # Update this win_xright_low = rightx_current - margin # Update this win_xright_high = rightx_current + margin # Update this #print(win_xleft_low,win_xleft_high,win_y_low,win_y_high) # Draw the windows on the visualization image cv2.rectangle(out_img,(win_xleft_low,win_y_low),(win_xleft_high,win_y_high),(0,255,0), 4) cv2.rectangle(out_img,(win_xright_low,win_y_low),(win_xright_high,win_y_high),(0,255,0), 4) ### TO-DO: Identify the nonzero pixels in x and y within the window ### #good_y = nonzerox[(nonzeroy>win_y_low) & (nonzeroy<win_y_high)] #good_left_inds = good_y[(good_y>win_xleft_low) & (good_y<win_xleft_high)] #good_right_inds = good_y[(good_y>win_xright_low) & (good_y<win_xright_high)] good_y = (nonzeroy>win_y_low) & (nonzeroy<win_y_high) good_left_inds = np.flatnonzero((nonzerox>win_xleft_low) & (nonzerox<win_xleft_high) & good_y) good_right_inds = np.flatnonzero((nonzerox>win_xright_low) & (nonzerox<win_xright_high) & good_y) # Append these indices to the lists left_lane_inds.append(good_left_inds) right_lane_inds.append(good_right_inds) #print(good_left_inds) ### TO-DO: If you found > minpix pixels, recenter next window ### ### (`right` or `leftx_current`) on their mean position ### if len(good_left_inds)>minpix: leftx_current = np.int(np.mean(nonzerox[good_left_inds])) else: pass 
        if len(good_right_inds) > minpix:
            rightx_current = int(np.mean(nonzerox[good_right_inds]))

    # Concatenate the arrays of indices (previously a list of lists of pixels)
    try:
        left_lane_inds = np.concatenate(left_lane_inds)
        right_lane_inds = np.concatenate(right_lane_inds)
    except ValueError:
        # Avoids an error if the above is not implemented fully
        pass

    # Extract left and right line pixel positions
    leftx = nonzerox[left_lane_inds]
    lefty = nonzeroy[left_lane_inds]
    rightx = nonzerox[right_lane_inds]
    righty = nonzeroy[right_lane_inds]

    return leftx, lefty, rightx, righty, out_img


# 3. Fit a polynomial to the points found using the sliding windows
def fit_polynomial(binary_warped):
    # Find our lane pixels first
    leftx, lefty, rightx, righty, out_img = find_lane_pixels(binary_warped)

    # Fit a second order polynomial to each lane line
    left_fit = np.polyfit(lefty, leftx, 2)
    right_fit = np.polyfit(righty, rightx, 2)

    # Generate x and y values for plotting
    ploty = np.linspace(0, binary_warped.shape[0] - 1, binary_warped.shape[0])
    try:
        left_fitx = left_fit[0]*ploty**2 + left_fit[1]*ploty + left_fit[2]
        right_fitx = right_fit[0]*ploty**2 + right_fit[1]*ploty + right_fit[2]
    except TypeError:
        # Avoids an error if `left_fit` and `right_fit` are still None or incorrect
        print('The function failed to fit a line!')
        left_fitx = 1*ploty**2 + 1*ploty
        right_fitx = 1*ploty**2 + 1*ploty

    ## Visualization ##
    # Color in the left and right lane regions
    out_img[lefty, leftx] = [255, 0, 0]
    out_img[righty, rightx] = [0, 0, 255]

    # Optionally plot the left and right polynomials on the lane lines
    #plt.plot(left_fitx, ploty, color='yellow')
    #plt.plot(right_fitx, ploty, color='yellow')

    return out_img, left_fitx, right_fitx, ploty, left_fit, right_fit


# 4. Fit a polynomial to extracted pixel positions
def fit_poly(img_shape, leftx, lefty, rightx, righty):
    # Fit a second order polynomial to each lane line
    left_fit = np.polyfit(lefty, leftx, 2)
    right_fit = np.polyfit(righty, rightx, 2)
    # Generate x and y values for plotting
    ploty = np.linspace(0, img_shape[0] - 1, img_shape[0])
    left_fitx = left_fit[0]*ploty**2 + left_fit[1]*ploty + left_fit[2]
    right_fitx = right_fit[0]*ploty**2 + right_fit[1]*ploty + right_fit[2]
    return left_fitx, right_fitx, ploty, left_fit, right_fit


# 5. Search near the polynomials from the previous frame
def search_around_poly(binary_warped, left_fit, right_fit):
    # HYPERPARAMETER
    # Width of the margin around the previous polynomial to search
    margin = 100

    # Grab activated pixels
    nonzero = binary_warped.nonzero()
    nonzeroy = np.array(nonzero[0])
    nonzerox = np.array(nonzero[1])

    # Set the area of search based on activated x-values
    # within the +/- margin of our polynomial function
    left_lane_inds = ((nonzerox > (left_fit[0]*(nonzeroy**2) + left_fit[1]*nonzeroy + left_fit[2] - margin)) &
                      (nonzerox < (left_fit[0]*(nonzeroy**2) + left_fit[1]*nonzeroy + left_fit[2] + margin)))
    right_lane_inds = ((nonzerox > (right_fit[0]*(nonzeroy**2) + right_fit[1]*nonzeroy + right_fit[2] - margin)) &
                       (nonzerox < (right_fit[0]*(nonzeroy**2) + right_fit[1]*nonzeroy + right_fit[2] + margin)))

    leftx = nonzerox[left_lane_inds]
    lefty = nonzeroy[left_lane_inds]
    rightx = nonzerox[right_lane_inds]
    righty = nonzeroy[right_lane_inds]

    if len(leftx) < 50 or len(rightx) < 50:
        # Too few pixels found: keep the previous coefficients
        ploty = np.linspace(0, binary_warped.shape[0] - 1, binary_warped.shape[0])
        left_fitx = left_fit[0]*ploty**2 + left_fit[1]*ploty + left_fit[2]
        right_fitx = right_fit[0]*ploty**2 + right_fit[1]*ploty + right_fit[2]
    else:
        # Fit new polynomials to the extracted pixel positions
        left_fitx, right_fitx, ploty, left_fit, right_fit = fit_poly(
            binary_warped.shape, leftx, lefty, rightx, righty)

    ## Visualization ##
    # Create an image to draw on and an image to show the selection window
    out_img = np.dstack((binary_warped, binary_warped, binary_warped)) * 255
    window_img = np.zeros_like(out_img)

    # Generate a polygon to illustrate the search window area,
    # recasting the x and y points into a usable format for cv2.fillPoly()
    left_line_window1 = np.array([np.transpose(np.vstack([left_fitx - margin, ploty]))])
    left_line_window2 = np.array([np.flipud(np.transpose(np.vstack([left_fitx + margin, ploty])))])
    left_line_pts = np.hstack((left_line_window1, left_line_window2))
    right_line_window1 = np.array([np.transpose(np.vstack([right_fitx - margin, ploty]))])
    right_line_window2 = np.array([np.flipud(np.transpose(np.vstack([right_fitx + margin, ploty])))])
    right_line_pts = np.hstack((right_line_window1, right_line_window2))

    # Optionally draw the search area and polynomial lines onto the image
    #cv2.fillPoly(window_img, np.int_([left_line_pts]), (0, 255, 0))
    #cv2.fillPoly(window_img, np.int_([right_line_pts]), (0, 255, 0))
    #result = cv2.addWeighted(out_img, 1, window_img, 0.3, 0)

    return out_img, left_fitx, right_fitx, left_fit, right_fit, ploty


# 6. Get coefficient values for data in meters
def generate_data(ploty, left_fitx, right_fitx, ym_per_pix, xm_per_pix):
    left_fit_cr = np.polyfit(ploty*ym_per_pix, left_fitx*xm_per_pix, 2)
    right_fit_cr = np.polyfit(ploty*ym_per_pix, right_fitx*xm_per_pix, 2)
    return ploty*ym_per_pix, left_fit_cr, right_fit_cr


# 7. Calculate radius of curvature and center offset
def measure_curvature_real(ploty, left_fitx, right_fitx):
    '''Calculates the curvature of the fitted polynomials in meters.'''
    # Conversions in x and y from pixel space to meters
    ym_per_pix = 30/720   # meters per pixel in y dimension
    xm_per_pix = 3.7/700  # meters per pixel in x dimension

    # Offset of the lane center from the image center (640 px)
    center_offset = 640 - (left_fitx[-1] + right_fitx[-1]) / 2
    center_offset = center_offset * xm_per_pix

    # Refit the polynomials in meter units
    ploty, left_fit_cr, right_fit_cr = generate_data(ploty, left_fitx, right_fitx,
                                                     ym_per_pix, xm_per_pix)

    # Evaluate the radius of curvature at the maximum y-value,
    # corresponding to the bottom of the image
    y_eval = np.max(ploty)
    left_curverad = ((1 + (2*left_fit_cr[0]*y_eval + left_fit_cr[1])**2)**1.5) / (2*abs(left_fit_cr[0]))
    right_curverad = ((1 + (2*right_fit_cr[0]*y_eval + right_fit_cr[1])**2)**1.5) / (2*abs(right_fit_cr[0]))
    avg_curverad = (left_curverad + right_curverad) / 2

    return avg_curverad, center_offset


def unwarp_to_original(result, ploty, left_fitx, right_fitx, Minv, undist):
    # Unwarp the detected lane back to the original image
    warp_zero = np.zeros_like(result).astype(np.uint8)
    color_warp = np.dstack((warp_zero, warp_zero, warp_zero))

    # Recast the x and y points into a usable format for cv2.fillPoly()
    pts_left = np.array([np.transpose(np.vstack([left_fitx, ploty]))])
    pts_right = np.array([np.flipud(np.transpose(np.vstack([right_fitx, ploty])))])
    pts = np.hstack((pts_left, pts_right))

    # Draw the lane onto the blank 3-channel warped image
    cv2.fillPoly(color_warp, np.int_([pts]), (0, 255, 0))

    # Warp the blank back to original image space using the inverse perspective matrix (Minv)
    newwarp = cv2.warpPerspective(color_warp, Minv, (undist.shape[1], undist.shape[0]))
    return newwarp


# Code block: Pipeline

# Initialize coefficients for the lane lines
left_fit = np.array([0, 0, 0])
right_fit = np.array([0, 0, 0])

# Pipeline for video input/output
def process_image(image):
    global left_fit, right_fit
    copy_image = np.copy(image)
    img_size = (1280, 720)

    undist = cv2.undistort(copy_image, mtx, dist, None, mtx)
    my_img = np.zeros_like(undist)
    pts = np.array([[255, 650], [505, 500], [802, 500], [1100, 650]], np.int32)
    pts = pts.reshape((-1, 1, 2))
    cv2.polylines(my_img, [pts], True, (0, 255, 255), 2)
    combo = cv2.addWeighted(undist, 1, my_img, 0, 0)

    offset = 50
    src = np.float32(pts)
    # For destination points, arbitrarily choose points that are a nice fit
    # for displaying the warped result -- not exact, but close enough
    dst = np.float32([[offset, img_size[1] - offset], [offset, offset*8],
                      [img_size[0] - offset, offset*8],
                      [img_size[0] - offset, img_size[1] - offset]])

    # Given src and dst points, calculate the perspective transform matrices
    M = cv2.getPerspectiveTransform(src, dst)
    Minv = cv2.getPerspectiveTransform(dst, src)

    # Warp the image using OpenCV warpPerspective()
    warped = cv2.warpPerspective(combo, M, img_size)
    result = thresholding(warped)

    # Look for lane indices
    if not right_fit.any():
        # If the initial values are unchanged or the polynomial ceases to exist,
        # use the histogram method to find new lane indices
        opt_img, left_fitx, right_fitx, ploty, left_fit, right_fit = fit_polynomial(result)
    else:
        # If polynomial coefficients exist from the previous frame, search around them
        opt_img, left_fitx, right_fitx, left_fit, right_fit, ploty = search_around_poly(result, left_fit, right_fit)

    # Unwarp to the original perspective
    newwarp = unwarp_to_original(result, ploty, left_fitx, right_fitx, Minv, undist)

    # Combine the result with the original image
    result = cv2.addWeighted(undist, 1, newwarp, 0.3, 0)

    # Calculate radius of curvature and center position
    avg_curverad, center_offset = measure_curvature_real(ploty, left_fitx, right_fitx)

    # Print the curvature and offset onto the video frame
    font = cv2.FONT_HERSHEY_SIMPLEX
    bottomLeftCornerOfText = (100, 100)
    fontScale = 1
    fontColor = (255, 255, 255)
    lineType = 2
    text = ('Radius Of Curvature = ' + str(avg_curverad) + ' m, '
            'Center Offset = ' + str(round(center_offset, 4)) + ' m')
    cv2.putText(result, text, bottomLeftCornerOfText, font, fontScale, fontColor, lineType)

    return result

#image = 'test_images/test5.jpg'
#result = process_image(image)
#plt.imshow(result)
#plt.show()


# Code block: Process Video

# Import everything needed to edit/save/watch video clips
from moviepy.editor import VideoFileClip
from IPython.display import HTML

project_output = 'project_video_output_3.mp4'
clip1 = VideoFileClip("project_video.mp4").subclip(30, 45)
#clip1 = VideoFileClip("project_video.mp4")  # full video
white_clip = clip1.fl_image(process_image)  # NOTE: this function expects color images!
%time white_clip.write_videofile(project_output, audio=False)

# Code block: Output
HTML("""
<video width="960" height="540" controls>
  <source src="{0}">
</video>
""".format(project_output))
```
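The radius-of-curvature formula used in `measure_curvature_real` can be sanity-checked on synthetic data: points sampled from a circle of known radius should recover that radius from the fitted quadratic. This is a minimal sketch, independent of the pipeline above; the circle radius and sampling range are invented for the check:

```python
import numpy as np

# Sample the right-hand arc of a circle of radius 1000 near its widest point,
# parameterized as x(y) -- the same orientation the lane fit uses
R_true = 1000.0
y = np.linspace(-50, 50, 101)
x = np.sqrt(R_true**2 - y**2)

# Fit x = A*y^2 + B*y + C, then apply R = (1 + (2*A*y + B)^2)^1.5 / (2*|A|)
A, B, C = np.polyfit(y, x, 2)
y_eval = 0.0  # evaluate at the widest point of the arc
R_fit = (1 + (2*A*y_eval + B)**2)**1.5 / (2*abs(A))

print(R_fit)  # close to 1000
```

The small residual error comes from the quartic term of the circle that a second-order fit cannot capture; over a shallow arc it is negligible.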
```
import sys
sys.path.append('../src')

import datetime

import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import plotly.graph_objects as go
import plotly.express as px
from plotly.subplots import make_subplots

from covid19.config import covid_19_data

pd.set_option('display.max_rows', None)

data = covid_19_data
data[["Confirmed", "Deaths", "Recovered"]] = data[["Confirmed", "Deaths", "Recovered"]].astype(int)
data['Active_case'] = data['Confirmed'] - data['Deaths'] - data['Recovered']

Data_India = data[data['Country/Region'] == 'India'].reset_index(drop=True)
Data_India_op = Data_India.groupby(["ObservationDate", "Country/Region"])[
    ["Confirmed", "Deaths", "Recovered", "Active_case"]].sum().reset_index()


def add_lockdown_markers(fig, ymax, month_color, dy):
    """Annotate the lockdown date and one month after it with vertical lines.

    The original cells repeated this block for each figure; `y1` is added here
    so the vertical lines span from 0 up to the series maximum.
    """
    fig.add_annotation(x="03/24/2020", y=ymax,
                       text="COVID-19 pandemic lockdown in India",
                       font=dict(family="Courier New, monospace", size=16, color="red"))
    fig.add_shape(dict(type="line", x0="03/24/2020", y0=0, x1="03/24/2020", y1=ymax,
                       line=dict(color="red", width=3)))
    fig.add_annotation(x="04/24/2020", y=ymax - dy,
                       text="Month after lockdown",
                       font=dict(family="Courier New, monospace", size=16, color=month_color))
    fig.add_shape(dict(type="line", x0="04/24/2020", y0=0, x1="04/24/2020", y1=ymax,
                       line=dict(color=month_color, width=3)))


# Confirmed cases
fig = go.Figure()
fig.add_trace(go.Scatter(x=Data_India_op["ObservationDate"], y=Data_India_op['Confirmed'],
                         mode="lines+text", name='Confirmed cases', marker_color='orange'))
add_lockdown_markers(fig, Data_India_op['Confirmed'].max(), "#00FE58", 30000)
fig.update_layout(title='Evolution of Confirmed cases over time in India',
                  template='plotly_dark')
fig.show()

# Active cases
fig = go.Figure()
fig.add_trace(go.Scatter(x=Data_India_op["ObservationDate"], y=Data_India_op['Active_case'],
                         mode="lines+text", name='Active cases', marker_color='#00FE58'))
add_lockdown_markers(fig, Data_India_op['Active_case'].max(), "rgb(255,217,47)", 20000)
fig.update_layout(title='Evolution of Active cases over time in India',
                  template='plotly_dark')
fig.show()

# Recovered cases
fig = go.Figure()
fig.add_trace(go.Scatter(x=Data_India_op["ObservationDate"], y=Data_India_op['Recovered'],
                         mode="lines+text", name='Recovered cases', marker_color='rgb(229,151,232)'))
add_lockdown_markers(fig, Data_India_op['Recovered'].max(), "rgb(103,219,165)", 20000)
fig.update_layout(title='Evolution of Recovered cases over time in India',
                  template='plotly_dark')
fig.show()
```
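The active-case column derived above is simply confirmed cases minus deaths and recoveries, aggregated per observation date. A toy check of that derivation, with column names as in the notebook but invented data:

```python
import pandas as pd

# Two hypothetical rows for the same date and country
toy = pd.DataFrame({
    'ObservationDate': ['03/24/2020', '03/24/2020'],
    'Country/Region': ['India', 'India'],
    'Confirmed': [500, 36],
    'Deaths': [10, 1],
    'Recovered': [40, 5],
})
toy['Active_case'] = toy['Confirmed'] - toy['Deaths'] - toy['Recovered']

# Aggregate per observation date, as done before plotting
agg = toy.groupby(['ObservationDate', 'Country/Region'])[
    ['Confirmed', 'Deaths', 'Recovered', 'Active_case']].sum().reset_index()
print(agg['Active_case'].iloc[0])  # 480 = (500-10-40) + (36-1-5)
```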
# **MITRE ATT&CK PYTHON CLIENT**: Data Sources

------------------

## Goals:

* Access ATT&CK data sources in STIX format via a public TAXII server
* Learn to interact with ATT&CK data all at once
* Explore and identify patterns in the data retrieved
* Learn more about ATT&CK data sources

## 1. ATT&CK Python Client Installation

You can install it via PIP: **pip install attackcti**

## 2. Import ATT&CK API Client

```
from attackcti import attack_client
```

## 3. Import Extra Libraries

```
from pandas import *
from pandas.io.json import json_normalize
import numpy as np
import altair as alt
import itertools
```

## 4. Initialize ATT&CK Client Class

```
lift = attack_client()
```

## 5. Getting Information About Techniques

Getting ALL ATT&CK Techniques:

```
all_techniques = lift.get_all_techniques()
```

Showing the first technique in our list:

```
all_techniques[0]
```

Normalizing semi-structured JSON data into a flat table via **pandas.io.json.json_normalize**

* Reference: https://pandas.pydata.org/pandas-docs/stable/generated/pandas.io.json.json_normalize.html

```
techniques_normalized = json_normalize(all_techniques)
techniques_normalized[0:1]
```

## 6. Re-indexing Dataframe

```
techniques = techniques_normalized.reindex(['matrix','platform','tactic','technique','technique_id','data_sources'], axis=1)
techniques.head()

print('A total of ', len(techniques), ' techniques')
```

## 7. Techniques With and Without Data Sources

Using the **altair** python library, we can start showing a few charts stacking the number of techniques with or without data sources.
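The flattening that `json_normalize` performs above can be seen on a toy record; the field names here are invented stand-ins for the STIX technique objects, not the real schema:

```python
import pandas as pd

# Hypothetical nested records, standing in for the technique objects
records = [
    {'technique': 'T1', 'technique_id': 'T0001',
     'kill_chain': {'tactic': 'execution'}, 'data_sources': ['ds1', 'ds2']},
    {'technique': 'T2', 'technique_id': 'T0002',
     'kill_chain': {'tactic': 'persistence'}, 'data_sources': None},
]

# pandas >= 1.0 exposes this as pd.json_normalize;
# older versions use pandas.io.json.json_normalize
flat = pd.json_normalize(records)

print(list(flat.columns))  # nested fields become dotted names, e.g. 'kill_chain.tactic'
print(flat['data_sources'].isna().sum())  # one technique without data sources
```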
Reference: https://altair-viz.github.io/

```
data_source_distribution = pandas.DataFrame({
    'Techniques': ['Without DS', 'With DS'],
    'Count of Techniques': [techniques['data_sources'].isna().sum(),
                            techniques['data_sources'].notna().sum()]})

bars = alt.Chart(data_source_distribution).mark_bar().encode(
    x='Techniques', y='Count of Techniques', color='Techniques').properties(width=200, height=300)
text = bars.mark_text(align='center', baseline='middle', dx=0, dy=-5).encode(text='Count of Techniques')
bars + text
```

What is the distribution of techniques based on ATT&CK Matrix?

```
data = techniques
data['Num_Tech'] = 1
data['Count_DS'] = data['data_sources'].str.len()
data['Ind_DS'] = np.where(data['Count_DS'] > 0, 'With DS', 'Without DS')
data_2 = data.groupby(['matrix', 'Ind_DS'])['technique'].count()
data_3 = data_2.to_frame().reset_index()
data_3

alt.Chart(data_3).mark_bar().encode(x='technique', y='Ind_DS', color='matrix').properties(height=200)
```

What are those mitre-attack techniques without data sources?

```
data[(data['matrix'] == 'mitre-attack') & (data['Ind_DS'] == 'Without DS')]
```

### Techniques Without Data Sources

```
techniques_without_data_sources = techniques[techniques.data_sources.isnull()].reset_index(drop=True)
techniques_without_data_sources.head()

print('There are ', techniques['data_sources'].isna().sum(), ' techniques without data sources (',
      "{0:.0%}".format(techniques['data_sources'].isna().sum()/len(techniques)),
      ' of ', len(techniques), ' techniques)')
```

### Techniques With Data Sources

```
techniques_with_data_sources = techniques[techniques.data_sources.notnull()].reset_index(drop=True)
techniques_with_data_sources.head()

print('There are ', techniques['data_sources'].notna().sum(), ' techniques with data sources (',
      "{0:.0%}".format(techniques['data_sources'].notna().sum()/len(techniques)),
      ' of ', len(techniques), ' techniques)')
```

## 8. Grouping Techniques With Data Sources By Matrix

Let's create a graph to represent the number of techniques per matrix:

```
matrix_distribution = pandas.DataFrame({
    'Matrix': list(techniques_with_data_sources.groupby(['matrix'])['matrix'].count().keys()),
    'Count of Techniques': techniques_with_data_sources.groupby(['matrix'])['matrix'].count().tolist()})

bars = alt.Chart(matrix_distribution).mark_bar().encode(
    y='Matrix', x='Count of Techniques').properties(width=300, height=100)
text = bars.mark_text(align='center', baseline='middle', dx=10, dy=0).encode(text='Count of Techniques')
bars + text
```

All the techniques belong to the **mitre-attack** matrix, which is the main **Enterprise** matrix.

Reference: https://attack.mitre.org/wiki/Main_Page

## 9. Grouping Techniques With Data Sources by Platform

First, we need to split the **platform** column values, because a technique might be mapped to more than one platform:

```
techniques_platform = techniques_with_data_sources

# In attributes we indicate the names of the columns that we need to split
attributes_1 = ['platform']

for a in attributes_1:
    # "s" is a Series holding every value of the list inside each cell of column "a"
    s = techniques_platform.apply(lambda x: pandas.Series(x[a]), axis=1).stack().reset_index(level=1, drop=True)
    s.name = a  # We name "s" with the same name as "a"
    # Drop the column "a" from "techniques_platform", then join "techniques_platform" with "s"
    techniques_platform = techniques_platform.drop(a, axis=1).join(s).reset_index(drop=True)

# Re-arrange the columns from general to specific
techniques_platform_2 = techniques_platform.reindex(['matrix','platform','tactic','technique','technique_id','data_sources'], axis=1)
```

We can now show techniques with data sources mapped to one platform at a time:

```
techniques_platform_2.head()
```

Let's create a visualization to show the number of techniques grouped by platform:

```
platform_distribution = pandas.DataFrame({
    'Platform': list(techniques_platform_2.groupby(['platform'])['platform'].count().keys()),
    'Count of Techniques': techniques_platform_2.groupby(['platform'])['platform'].count().tolist()})

bars = alt.Chart(platform_distribution, height=300).mark_bar().encode(
    x='Platform', y='Count of Techniques', color='Platform').properties(width=200)
text = bars.mark_text(align='center', baseline='middle', dx=0, dy=-5).encode(text='Count of Techniques')
bars + text
```

In the bar chart above we can see that most techniques with data sources are mapped to the Windows platform.

## 10. Grouping Techniques With Data Sources by Tactic

Again, we first need to split the tactic column values, because a technique might be mapped to more than one tactic:

```
techniques_tactic = techniques_with_data_sources
attributes_2 = ['tactic']

for a in attributes_2:
    s = techniques_tactic.apply(lambda x: pandas.Series(x[a]), axis=1).stack().reset_index(level=1, drop=True)
    s.name = a
    techniques_tactic = techniques_tactic.drop(a, axis=1).join(s).reset_index(drop=True)

# Re-arrange the columns from general to specific
techniques_tactic_2 = techniques_tactic.reindex(['matrix','platform','tactic','technique','technique_id','data_sources'], axis=1)
```

We can now show techniques with data sources mapped to one tactic at a time:

```
techniques_tactic_2.head()
```

Let's create a visualization to show the number of techniques grouped by tactic:

```
tactic_distribution = pandas.DataFrame({
    'Tactic': list(techniques_tactic_2.groupby(['tactic'])['tactic'].count().keys()),
    'Count of Techniques': techniques_tactic_2.groupby(['tactic'])['tactic'].count().tolist()}).sort_values(by='Count of Techniques', ascending=True)

bars = alt.Chart(tactic_distribution, width=800, height=300).mark_bar().encode(
    x='Tactic', y='Count of Techniques', color='Tactic').properties(width=400)
text = bars.mark_text(align='center', baseline='middle', dx=0, dy=-5).encode(text='Count of Techniques')
bars + text
```

Defense Evasion and Persistence are the tactics with the highest number of techniques with data sources.

## 11. Grouping Techniques With Data Sources by Data Source

We need to split the data source column values, because a technique might be mapped to more than one data source:

```
techniques_data_source = techniques_with_data_sources
attributes_3 = ['data_sources']

for a in attributes_3:
    s = techniques_data_source.apply(lambda x: pandas.Series(x[a]), axis=1).stack().reset_index(level=1, drop=True)
    s.name = a
    techniques_data_source = techniques_data_source.drop(a, axis=1).join(s).reset_index(drop=True)

# Re-arrange the columns from general to specific
techniques_data_source_2 = techniques_data_source.reindex(['matrix','platform','tactic','technique','technique_id','data_sources'], axis=1)

# Edit some names inside the dataframe to improve consistency
techniques_data_source_3 = techniques_data_source_2.replace(
    ['Process monitoring', 'Application logs'], ['Process Monitoring', 'Application Logs'])
```

We can now show techniques with data sources mapped to one data source at a time:

```
techniques_data_source_3.head()
```

Let's create a visualization to show the number of techniques grouped by data sources:

```
data_source_distribution = pandas.DataFrame({
    'Data Source': list(techniques_data_source_3.groupby(['data_sources'])['data_sources'].count().keys()),
    'Count of Techniques': techniques_data_source_3.groupby(['data_sources'])['data_sources'].count().tolist()})

bars = alt.Chart(data_source_distribution, width=800, height=300).mark_bar().encode(
    x='Data Source', y='Count of Techniques', color='Data Source').properties(width=1200)
text = bars.mark_text(align='center', baseline='middle', dx=0, dy=-5).encode(text='Count of Techniques')
bars + text
```

A few interesting things from the bar chart above:

* Process Monitoring, File Monitoring, and Process Command-line parameters are the data sources with the highest number of techniques
* Some data source names include string references to Windows, such as PowerShell, Windows and WMI

## 12. Most Relevant Groups Of Data Sources Per Technique

### Number Of Data Sources Per Technique

Although identifying the data sources with the highest number of techniques is a good start, data sources usually do not work alone.
You might be collecting **Process Monitoring** already, but you might still be missing a lot of context from a data perspective.

```
data_source_distribution_2 = pandas.DataFrame({
    'Techniques': list(techniques_data_source_3.groupby(['technique'])['technique'].count().keys()),
    'Count of Data Sources': techniques_data_source_3.groupby(['technique'])['technique'].count().tolist()})

data_source_distribution_3 = pandas.DataFrame({
    'Number of Data Sources': list(data_source_distribution_2.groupby(['Count of Data Sources'])['Count of Data Sources'].count().keys()),
    'Count of Techniques': data_source_distribution_2.groupby(['Count of Data Sources'])['Count of Data Sources'].count().tolist()})

bars = alt.Chart(data_source_distribution_3).mark_bar().encode(
    x='Number of Data Sources', y='Count of Techniques').properties(width=500)
text = bars.mark_text(align='center', baseline='middle', dx=0, dy=-5).encode(text='Count of Techniques')
bars + text
```

The chart above shows the number of data sources needed per technique according to ATT&CK:

* There are 71 techniques for which 3 data sources provide enough context to validate their detection, according to ATT&CK
* Only one technique has 12 data sources
* One data source only applies to 19 techniques

Let's create subsets of data sources from the data source column, defining and using a python function:

```
# https://stackoverflow.com/questions/26332412/python-recursive-function-to-display-all-subsets-of-given-set
def subs(l):
    res = []
    for i in range(1, len(l) + 1):
        for combo in itertools.combinations(l, i):
            res.append(list(combo))
    return res
```

Before applying the function, we lowercase and sort the data source names to improve consistency:

```
df = techniques_with_data_sources[['data_sources']]

for index, row in df.iterrows():
    row["data_sources"] = [x.lower() for x in row["data_sources"]]
    row["data_sources"].sort()

df.head()
```

Let's apply the function and split the subsets column:

```
df['subsets'] = df['data_sources'].apply(subs)
df.head()
```

We need to split the subsets column values:

```
techniques_with_data_sources_preview = df
attributes_4 = ['subsets']

for a in attributes_4:
    s = techniques_with_data_sources_preview.apply(lambda x: pandas.Series(x[a]), axis=1).stack().reset_index(level=1, drop=True)
    s.name = a
    techniques_with_data_sources_preview = techniques_with_data_sources_preview.drop(a, axis=1).join(s).reset_index(drop=True)

techniques_with_data_sources_subsets = techniques_with_data_sources_preview.reindex(['data_sources','subsets'], axis=1)
techniques_with_data_sources_subsets.head()
```

Let's add three columns to analyse the dataframe: subsets_name (changing lists to strings), subsets_number_elements (number of data sources per subset) and number_data_sources_per_technique:

```
techniques_with_data_sources_subsets['subsets_name'] = techniques_with_data_sources_subsets['subsets'].apply(lambda x: ','.join(map(str, x)))
techniques_with_data_sources_subsets['subsets_number_elements'] = techniques_with_data_sources_subsets['subsets'].str.len()
techniques_with_data_sources_subsets['number_data_sources_per_technique'] = techniques_with_data_sources_subsets['data_sources'].str.len()
techniques_with_data_sources_subsets.head()
```

As described above, we need to find groups of data sources, so we filter out all the subsets with only one data source:

```
subsets = techniques_with_data_sources_subsets
subsets_ok = subsets[subsets.subsets_number_elements != 1]
subsets_ok.head()
```

Finally, we calculate the most relevant groups of data sources (top 15):

```
subsets_graph = subsets_ok.groupby(['subsets_name'])['subsets_name'].count().to_frame(name='subsets_count').sort_values(by='subsets_count', ascending=False)[0:15]
subsets_graph

subsets_graph_2 = pandas.DataFrame({
    'Data Sources': list(subsets_graph.index),
    'Count of Techniques': subsets_graph['subsets_count'].tolist()})

bars = alt.Chart(subsets_graph_2).mark_bar().encode(
    x='Data Sources', y='Count of Techniques', color='Data Sources').properties(width=500)
text = bars.mark_text(align='center', baseline='middle', dx=0, dy=-5).encode(text='Count of Techniques')
bars + text
```

(Process Monitoring, Process Command-line parameters) is the group of data sources with the highest number of techniques. This group of data sources is suggested for hunting 78 techniques.

## 14. Let's Split all the Information About Techniques With Data Sources Defined: Matrix, Platform, Tactic and Data Source

Let's split all the relevant columns of the dataframe:

```
techniques_data = techniques_with_data_sources

# In attributes we indicate the names of the columns that we need to split
attributes = ['platform', 'tactic', 'data_sources']

for a in attributes:
    s = techniques_data.apply(lambda x: pandas.Series(x[a]), axis=1).stack().reset_index(level=1, drop=True)
    s.name = a
    techniques_data = techniques_data.drop(a, axis=1).join(s).reset_index(drop=True)

# Re-arrange the columns from general to specific
techniques_data_2 = techniques_data.reindex(['matrix','platform','tactic','technique','technique_id','data_sources'], axis=1)

# Edit some names inside the dataframe to improve consistency
techniques_data_3 = techniques_data_2.replace(['Process monitoring', 'Application logs'],
                                              ['Process Monitoring', 'Application Logs'])
techniques_data_3.head()
```

Do you remember the data source names with a reference to Windows? After splitting the dataframe by platforms, tactics and data sources, are there any macOS or Linux techniques that consider Windows data sources?
Let's identify those rows:

```
# After splitting the rows of the dataframe, some values relate Windows data
# sources to platforms like Linux and macOS. We need to identify those rows.

# In conditions we indicate a logical test
conditions = [
    (techniques_data_3['platform'] == 'Linux') & (techniques_data_3['data_sources'].str.contains('windows', case=False) == True),
    (techniques_data_3['platform'] == 'macOS') & (techniques_data_3['data_sources'].str.contains('windows', case=False) == True),
    (techniques_data_3['platform'] == 'Linux') & (techniques_data_3['data_sources'].str.contains('powershell', case=False) == True),
    (techniques_data_3['platform'] == 'macOS') & (techniques_data_3['data_sources'].str.contains('powershell', case=False) == True),
    (techniques_data_3['platform'] == 'Linux') & (techniques_data_3['data_sources'].str.contains('wmi', case=False) == True),
    (techniques_data_3['platform'] == 'macOS') & (techniques_data_3['data_sources'].str.contains('wmi', case=False) == True)]

# In choices, we indicate the result when the logical test is true
choices = ['NO OK', 'NO OK', 'NO OK', 'NO OK', 'NO OK', 'NO OK']

# Add a "Validation" column with the result of the logical tests; the default value is "OK"
techniques_data_3['Validation'] = np.select(conditions, choices, default='OK')
```

What is the inconsistent data?

```
# Filter all the values flagged NO OK
techniques_analysis_data_no_ok = techniques_data_3[techniques_data_3.Validation == 'NO OK']
techniques_analysis_data_no_ok.head()

print('There are ', len(techniques_analysis_data_no_ok), ' rows with inconsistent data')
```

What is the impact of this inconsistent data from a platform and data sources perspective?

```
df = techniques_with_data_sources
attributes = ['platform', 'data_sources']

for a in attributes:
    s = df.apply(lambda x: pandas.Series(x[a]), axis=1).stack().reset_index(level=1, drop=True)
    s.name = a
    df = df.drop(a, axis=1).join(s).reset_index(drop=True)

df_2 = df.reindex(['matrix','platform','tactic','technique','technique_id','data_sources'], axis=1)
df_3 = df_2.replace(['Process monitoring', 'Application logs'], ['Process Monitoring', 'Application Logs'])

conditions = [(df_3['data_sources'].str.contains('windows', case=False) == True),
              (df_3['data_sources'].str.contains('powershell', case=False) == True),
              (df_3['data_sources'].str.contains('wmi', case=False) == True)]
choices = ['Windows', 'Windows', 'Windows']

df_3['Validation'] = np.select(conditions, choices, default='Other')
df_3['Num_Tech'] = 1
df_4 = df_3[df_3.Validation == 'Windows']
df_5 = df_4.groupby(['data_sources', 'platform'])['technique'].nunique()
df_6 = df_5.to_frame().reset_index()

alt.Chart(df_6).mark_bar().encode(
    x=alt.X('technique', stack="normalize"), y='data_sources', color='platform').properties(height=200)
```

There are techniques that consider Windows Error Reporting, Windows Registry, and Windows event logs as data sources while also listing platforms like Linux and macOS. We should not keep these rows, because those data sources can only be collected in a Windows environment. These are the techniques that we should not consider in our database:

```
techniques_analysis_data_no_ok[['technique','data_sources']].drop_duplicates().sort_values(by='data_sources', ascending=True)
```

Without this inconsistent data, the final dataframe is:

```
techniques_analysis_data_ok = techniques_data_3[techniques_data_3.Validation == 'OK']
techniques_analysis_data_ok.head()

print('There are ', len(techniques_analysis_data_ok), ' rows of data that you can play with')
```

## 15. Getting Techniques by Data Sources

This function gets information about techniques that include specific data sources:

```
from attackcti import attack_client

lift = attack_client()

data_source = 'PROCESS MONITORING'
results = lift.get_techniques_by_datasources(data_source)
len(results)

data_sources_list = ['pRoceSS MoniTorinG', 'process commAnd-linE parameters']
results2 = lift.get_techniques_by_datasources(data_sources_list)
len(results2)

results2[1]
```
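The `np.select` validation pattern used above for flagging Windows-only data sources on non-Windows platforms can be reduced to a small self-contained sketch (toy rows, not the real ATT&CK data):

```python
import numpy as np
import pandas as pd

toy = pd.DataFrame({
    'platform': ['Linux', 'Windows', 'macOS'],
    'data_sources': ['Windows event logs', 'Windows event logs', 'Process Monitoring'],
})

# Flag rows where a Windows-referencing data source is paired with a non-Windows platform
conditions = [
    (toy['platform'] != 'Windows') & toy['data_sources'].str.contains('windows', case=False),
]
toy['Validation'] = np.select(conditions, ['NO OK'], default='OK')

print(toy['Validation'].tolist())  # ['NO OK', 'OK', 'OK']
```

`np.select` evaluates the condition list in order and falls through to the default, which is what makes it convenient for stacking several mutually exclusive validation rules.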
# Introduction to Band Ratios & Spectral Features The BandRatios project explore properties of band ratio measures. Band ratio measures are an analysis measure in which the ratio of power between frequency bands is calculated. By 'spectral features' we mean features we can measure from the power spectra, such as periodic components (oscillations), that we can describe with their center frequency, power and bandwidth, and the aperiodic component, which we can describe with their exponent and offset value. These parameters will be further explored and explained later on. In this introductory notebook, we walk through how band ratio measures and spectral features are calculated. ``` %matplotlib inline import numpy as np import matplotlib.pyplot as plt import seaborn as sns sns.set_context('poster') from fooof import FOOOF from fooof.sim import gen_power_spectrum from fooof.analysis import get_band_peak_fm from fooof.plts import plot_spectrum, plot_spectrum_shading # Import custom project code import sys sys.path.append('../bratios') from ratios import * from paths import FIGS_PATHS as fp from paths import DATA_PATHS as dp # Settings SAVE_FIG = False ``` ## What is a Band Ratio This project explores frequency band ratios, a metric used in spectral analysis since at least the 1960's to characterize cognitive functions such as vigilance, aging, memory among other. In clinical work, band ratios have also been used as a biomarker for diagnosing and monitoring of ADHD, diseases of consciousness, and nervous system disorders such as Parkinson's disease. Given a power spectrum, a band ratio is the ratio of average power within a band between two frequency ranges. Typically, band ratio measures are calculated as: $ \frac{avg(low\ band\ power)}{avg(high\ band\ power} $ The following cell generates a power spectrum and highlights the frequency ranges used to calculate a theta/beta band ratio. 
``` # Settings theta_band = [4, 8] beta_band = [20, 30] freq_range = [1, 35] # Define default simulation values ap_def = [0, 1] theta_def = [6, 0.25, 1] alpha_def = [10, 0.4, 0.75] beta_def = [25, 0.2, 1.5] # Plot Settings line_color = 'black' shade_colors = ['#057D2E', '#0365C0'] # Generate a simulated power spectrum fs, ps = gen_power_spectrum(freq_range, ap_def, [theta_def, alpha_def, beta_def]) # Plot the power spectrum, shading the frequency bands used for the ratio plot_spectrum_shading(fs, ps, [theta_band, beta_band], color=line_color, shade_colors=shade_colors, log_powers=True, linewidth=3.5) # Plot aesthetics ax = plt.gca() for it in [ax.xaxis.label, ax.yaxis.label]: it.set_fontsize(26) ax.set_xlim([0, 35]) ax.set_ylim([-1.6, 0]) if SAVE_FIG: plt.savefig(fp.make_file_path(fp.demo, 'Ratio-example', 'pdf')) ``` # Calculate theta/beta ratios ### Average Power Ratio The typical way of calculating band ratios is to take the average power in the low band and divide it by the average power in the high band. Average power is calculated as the sum of all discrete power values divided by the number of power values in that band. ``` # Calculate the theta / beta ratio for our simulated power spectrum ratio = calc_band_ratio(fs, ps, theta_band, beta_band) print('Theta-beta ratio is: {:1.4f}'.format(ratio)) ``` And there you have it - our first computed frequency band ratio! # The FOOOF Model To measure spectral features from power spectra, which we can then compare to ratio measures, we will use the [FOOOF](https://github.com/fooof-tools/fooof) library. Briefly, the FOOOF algorithm parameterizes neural power spectra, measuring both periodic (oscillatory) and aperiodic features. Each identified oscillation is parameterized as a peak, fit as a Gaussian, which provides us with measures of the center frequency, power, and bandwidth of the peak.
The aperiodic component is measured by a function of the form $ 1/f^\chi $, in which this $ \chi $ value is referred to as the aperiodic exponent. This exponent is equivalent to the negative slope of the power spectrum, when plotted in log-log. More details on FOOOF can be found in the associated [paper](https://doi.org/10.1101/299859) and/or on the documentation [site](https://fooof-tools.github.io/fooof/). ``` # Load power spectra from an example subject psd = np.load(dp.make_file_path(dp.eeg_psds, 'A00051886_ec_psds', 'npz')) # Unpack the loaded power spectra, and select a spectrum to fit freqs = psd['arr_0'] powers = psd['arr_1'][0][50] # Initialize a FOOOF object fm = FOOOF(verbose=False) # Fit the FOOOF model fm.fit(freqs, powers) # Plot the power spectrum, with the FOOOF model fm.plot() # Plot aesthetic updates ax = plt.gca() ax.set_ylabel('log(Power)', {'fontsize':35}) ax.set_xlabel('Frequency', {'fontsize':35}) plt.legend(prop={'size': 24}) for line, width in zip(ax.get_lines(), [3, 5, 5]): line.set_linewidth(width) ax.set_xlim([0, 35]); if SAVE_FIG: plt.savefig(fp.make_file_path(fp.demo, 'FOOOF-example', 'pdf')) ``` In the plot above, the FOOOF model fit, in red, is plotted over the original data, in black. The blue dashed line is the fit of the aperiodic component of the data. The aperiodic exponent describes the steepness of this line. For all future notebooks, the aperiodic exponent reflects values that are simulated and/or measured with the FOOOF model, reflecting the blue line. Periodic spectral features are simulation values and/or model fit values from the FOOOF model that measure oscillatory peaks over and above the blue dashed line.
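As a quick intuition for the aperiodic exponent described above: for a pure $ 1/f^\chi $ spectrum, $ \chi $ can be recovered as the negative slope of a straight-line fit in log-log space. The sketch below is a deliberately simplified stand-in for FOOOF's actual fitting procedure (which jointly models peaks and the aperiodic component), using only NumPy:

```python
import numpy as np

# Simulate a pure aperiodic spectrum: power = c / f**chi
chi_true = 1.5
freqs = np.arange(1, 36, dtype=float)
powers = 10.0 / freqs ** chi_true

# In log-log space, log10(power) = log10(c) - chi * log10(freq),
# so the negative slope of a straight-line fit recovers the exponent
slope, intercept = np.polyfit(np.log10(freqs), np.log10(powers), 1)
print(-slope)  # approximately 1.5
```

On real spectra, peaks and noise bias such a plain linear fit, which is exactly why FOOOF models the periodic and aperiodic components together.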
#### Helper settings & functions for the next section ``` # Settings f_theta = 6 f_beta = 25 # Functions def style_plot(ax): """Helper function to style plots.""" ax.get_legend().remove() ax.grid(False) for line in ax.get_lines(): line.set_linewidth(3.5) ax.set_xticks([]) ax.set_yticks([]) def add_lines(ax, fs, ps, f_val): """Helper function to add vertical lines to power spectra plots.""" y_lims = ax.get_ylim() ax.plot([f_val, f_val], [y_lims[0], np.log10(ps[fs==f_val][0])], 'g--', markersize=12, alpha=0.75) ax.set_ylim(y_lims) ``` ### Comparing Ratios With and Without Periodic Activity In the next section, we will explore power spectra with and without periodic activity within specified bands. We will use simulations to explore how ratio measures relate to the presence or absence of periodic activity, and how this relates to the analyses we will be performing, comparing ratio measures to spectral features. ``` # Generate simulated power spectra, with and without theta & beta oscillations fs, ps0 = gen_power_spectrum(freq_range, ap_def, [theta_def, alpha_def, beta_def]) fs, ps1 = gen_power_spectrum(freq_range, ap_def, [alpha_def, beta_def]) fs, ps2 = gen_power_spectrum(freq_range, ap_def, [theta_def, alpha_def]) fs, ps3 = gen_power_spectrum(freq_range, ap_def, [alpha_def]) # Initialize some FOOOF models fm0 = FOOOF(verbose=False) fm1 = FOOOF(verbose=False) fm2 = FOOOF(verbose=False) fm3 = FOOOF(verbose=False) # Fit FOOOF models fm0.fit(fs, ps0) fm1.fit(fs, ps1) fm2.fit(fs, ps2) fm3.fit(fs, ps3) # Create a plot with the spectra fig, axes = plt.subplots(1, 4, figsize=(18, 4)) titles = ['Theta & Beta', 'Beta Only', 'Theta Only', 'Neither'] for cur_fm, cur_ps, cur_title, cur_ax in zip( [fm0, fm1, fm2, fm3], [ps0, ps1, ps2, ps3], titles, axes): # Create the plot for this model cur_fm.plot(ax=cur_ax) cur_ax.set_title(cur_title) style_plot(cur_ax) add_lines(cur_ax, fs, cur_ps, f_theta) add_lines(cur_ax, fs, cur_ps, f_beta) # Save out the FOOOF figure if SAVE_FIG:
plt.savefig(fp.make_file_path(fp.demo, 'PeakComparisons', 'pdf')) ``` Note that in the plots above, we have plotted the power spectra, with the aperiodic component parameterized in blue, and the potential location of peaks indicated in green. Keep in mind that under the FOOOF model, there is only evidence for an oscillation if there is band-specific power over and above the aperiodic activity. In the first power spectrum, for example, we see clear peaks in both theta and beta. However, in subsequent power spectra, we have created spectra without theta, without beta, and without either (or, alternatively put, spectra in which the FOOOF model would say there is no evidence of peaks in these bands). We can actually check our model parameterizations, to see whether theta and beta peaks were detected over and above the aperiodic component. ``` # Check if there are extracted thetas in the model parameterizations print('Detected Theta Values:') print('\tTheta & Beta: \t', get_band_peak_fm(fm0, theta_band)) print('\tBeta Only: \t', get_band_peak_fm(fm1, theta_band)) print('\tTheta Only: \t', get_band_peak_fm(fm2, theta_band)) print('\tNeither: \t', get_band_peak_fm(fm3, theta_band)) ``` Now, just because there is no evidence of, for example, theta activity specifically, does not mean there is no power in the 4-8 Hz range. We can see this in the power spectra, as the aperiodic component also contributes power across all frequencies. This means that, due to the way that band ratio measures are calculated, the theta-beta ratio in power spectra without any actual theta (or beta) activity will still measure a value.
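This point can be checked directly with a toy example: even a spectrum containing nothing but a $ 1/f $ aperiodic component yields a theta / beta ratio well above zero, simply because lower frequencies carry more aperiodic power. A minimal NumPy sketch, independent of the project's `calc_band_ratio` helper:

```python
import numpy as np

# A spectrum with nothing but a 1/f aperiodic component - no peaks at all
freqs = np.arange(1.0, 36.0)
powers = 1.0 / freqs

theta = (freqs >= 4) & (freqs <= 8)
beta = (freqs >= 20) & (freqs <= 30)

# The 'theta / beta ratio' of this peak-free spectrum is still well above 1,
# driven entirely by the aperiodic component
ratio = np.mean(powers[theta]) / np.mean(powers[beta])
print(ratio)  # about 4.35
```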
``` print('Theta / Beta Ratio of Theta & Beta: \t{:1.4f}'.format( calc_band_ratio(fm0.freqs, fm0.power_spectrum, theta_band, beta_band))) print('Theta / Beta Ratio of Beta Only: \t{:1.4f}'.format( calc_band_ratio(fm1.freqs, fm1.power_spectrum, theta_band, beta_band))) print('Theta / Beta Ratio of Theta Only: \t{:1.4f}'.format( calc_band_ratio(fm2.freqs, fm2.power_spectrum, theta_band, beta_band))) print('Theta / Beta Ratio of Neither: \t{:1.4f}'.format( calc_band_ratio(fm3.freqs, fm3.power_spectrum, theta_band, beta_band))) ``` As we can see above, as compared to the 'Theta & Beta' PSD, the theta / beta ratio of the 'Beta Only' PSD is lower (which we might interpret as reflecting less theta or more beta activity), and the theta / beta ratio of the 'Theta Only' PSD is higher (which we might interpret as reflecting more theta or less beta activity). However, we know that these are not really the best interpretations, insofar as we would like to say that these differences reflect the lack of theta and beta, and not merely a change in their power. In the extreme case, with no theta or beta peaks at all, we still measure a (quite high) value for the theta / beta ratio, though in this case it entirely reflects aperiodic activity. It is important to note that the measure is not zero (or undefined), as we might expect or want in cases in which there is no oscillatory activity over and above the aperiodic component. ### Summary In this notebook, we have explored band ratio measures and spectral features, using the FOOOF model. One thing to keep in mind for the upcoming analyses in this project is that when we compare a ratio value to periodic power, we do so to the isolated periodic power - periodic power over and above the aperiodic power - and we can only calculate this when there is actually power over and above the aperiodic component.
That is to say, revisiting the plots above, the periodic activity we are interested in is not the green line, which is total power, but rather the section of the green line above the blue line (the aperiodic-adjusted power measured by FOOOF). This means that we can compare ratio values to periodic power only when we actually measure periodic power within the specified band.
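As a toy illustration of this idea of isolated periodic power, the sketch below builds a spectrum in log-power space as an aperiodic component plus one Gaussian theta peak, and then subtracts the aperiodic component back out; the residual is non-zero only where a peak actually exists. The additive-in-log construction mirrors the FOOOF model form, but the numbers here are arbitrary illustrative values:

```python
import numpy as np

# Build a spectrum in log10-power space: aperiodic component plus one theta peak
# (arbitrary illustrative values, additive in log space as in the FOOOF model form)
freqs = np.arange(1.0, 36.0)
log_aperiodic = -np.log10(freqs)                              # exponent chi = 1
log_peak = 0.4 * np.exp(-(freqs - 6) ** 2 / (2 * 1.0 ** 2))   # theta peak at 6 Hz
log_total = log_aperiodic + log_peak

# Isolated periodic power: total power minus the aperiodic component
isolated = log_total - log_aperiodic

theta = (freqs >= 4) & (freqs <= 8)
beta = (freqs >= 20) & (freqs <= 30)
print(isolated[theta].max())  # ~0.4: power over and above the aperiodic component
print(isolated[beta].max())   # ~0.0: no beta peak, so no isolated periodic power
```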
``` %matplotlib inline import os, sys import nibabel as nb import numpy as np from nipype import Node, Workflow from nipype.interfaces.fsl import SliceTimer, MCFLIRT, Smooth, ExtractROI import pandas as pd import matplotlib.pyplot as plt from scipy import stats from sklearn.utils import shuffle import glob import shutil def writer(MyList, tgtf): MyFile=open(tgtf,'w') MyList=map(lambda x:x+'\n', MyList) MyFile.writelines(MyList) MyFile.close() def f_kendall(timeseries_matrix): """ Calculates the Kendall's coefficient of concordance for a number of time-series in the input matrix Parameters ---------- timeseries_matrix : ndarray A matrix of ranks of a subset of a subject's brain voxels Returns ------- kcc : float Kendall's coefficient of concordance on the given input matrix """ import numpy as np nk = timeseries_matrix.shape n = nk[0] k = nk[1] sr = np.sum(timeseries_matrix, 1) sr_bar = np.mean(sr) s = np.sum(np.power(sr, 2)) - n*np.power(sr_bar, 2) kcc = 12 *s/np.power(k, 2)/(np.power(n, 3) - n) return kcc def compute_reho(in_file, mask_file, cluster_size = 7, out_file = None): """ Computes the ReHo map, by computing tied ranks of the timepoints, followed by computing Kendall's coefficient of concordance (KCC) of a timeseries with its neighbours Parameters ---------- in_file : nifti file 4D EPI File mask_file : nifti file Mask of the EPI File (only compute ReHo of voxels in the mask) out_file : nifti file Where to save result cluster_size : integer for a brain voxel the number of neighbouring brain voxels to use for KCC.
Returns ------- out_file : nifti file ReHo map of the input EPI image """ res_fname = (in_file) res_mask_fname = (mask_file) CUTNUMBER = 10 if not (cluster_size == 27 or cluster_size == 19 or cluster_size == 7 or cluster_size == 18): cluster_size = 27 nvoxel = cluster_size res_img = nb.load(res_fname) res_mask_img = nb.load(res_mask_fname) res_data = res_img.get_data() res_mask_data = res_mask_img.get_data() print(res_data.shape) (n_x, n_y, n_z, n_t) = res_data.shape # "flatten" each volume of the timeseries into one big array instead of # x,y,z - produces (timepoints, N voxels) shaped data array res_data = np.reshape(res_data, (n_x*n_y*n_z, n_t), order='F').T # create a blank array of zeroes of size n_voxels, one for each time point Ranks_res_data = np.tile((np.zeros((1, (res_data.shape)[1]))), [(res_data.shape)[0], 1]) # divide the number of total voxels by the cutnumber (set to 10) # ex. end up with a number in the thousands if there are tens of thousands # of voxels segment_length = np.ceil(float((res_data.shape)[1])/float(CUTNUMBER)) for icut in range(0, CUTNUMBER): segment = None # create a Numpy array of evenly spaced values from the segment # starting point up until the segment_length integer if not (icut == (CUTNUMBER - 1)): segment = np.array(np.arange(icut * segment_length, (icut+1) * segment_length)) else: segment = np.array(np.arange(icut * segment_length, (res_data.shape[1]))) segment = np.int64(segment[np.newaxis]) # res_data_piece is a chunk of the original timeseries in_file, but # aligned with the current segment index spacing res_data_piece = res_data[:, segment[0]] nvoxels_piece = res_data_piece.shape[1] # run a merge sort across the time axis, re-ordering the flattened # volume voxel arrays res_data_sorted = np.sort(res_data_piece, 0, kind='mergesort') sort_index = np.argsort(res_data_piece, axis=0, kind='mergesort') # subtract each volume from each other db = np.diff(res_data_sorted, 1, 0) # convert any zero voxels into "True" flag db = db == 
0 # return an n_voxel (n voxels within the current segment) sized array # of values, each value being the sum total of TRUE values in "db" sumdb = np.sum(db, 0) temp_array = np.array(np.arange(0, n_t)) temp_array = temp_array[:, np.newaxis] sorted_ranks = np.tile(temp_array, [1, nvoxels_piece]) if np.any(sumdb[:]): tie_adjust_index = np.flatnonzero(sumdb) for i in range(0, len(tie_adjust_index)): ranks = sorted_ranks[:, tie_adjust_index[i]] ties = db[:, tie_adjust_index[i]] tieloc = np.append(np.flatnonzero(ties), n_t + 2) maxties = len(tieloc) tiecount = 0 while(tiecount < maxties -1): tiestart = tieloc[tiecount] ntied = 2 while(tieloc[tiecount + 1] == (tieloc[tiecount] + 1)): tiecount += 1 ntied += 1 ranks[tiestart:tiestart + ntied] = np.ceil(np.float32(np.sum(ranks[tiestart:tiestart + ntied ]))/np.float32(ntied)) tiecount += 1 sorted_ranks[:, tie_adjust_index[i]] = ranks del db, sumdb sort_index_base = np.tile(np.multiply(np.arange(0, nvoxels_piece), n_t), [n_t, 1]) sort_index += sort_index_base del sort_index_base ranks_piece = np.zeros((n_t, nvoxels_piece)) ranks_piece = ranks_piece.flatten(order='F') sort_index = sort_index.flatten(order='F') sorted_ranks = sorted_ranks.flatten(order='F') ranks_piece[sort_index] = np.array(sorted_ranks) ranks_piece = np.reshape(ranks_piece, (n_t, nvoxels_piece), order='F') del sort_index, sorted_ranks Ranks_res_data[:, segment[0]] = ranks_piece sys.stdout.write('.') Ranks_res_data = np.reshape(Ranks_res_data, (n_t, n_x, n_y, n_z), order='F') K = np.zeros((n_x, n_y, n_z)) mask_cluster = np.ones((3, 3, 3)) if nvoxel == 19: mask_cluster[0, 0, 0] = 0 mask_cluster[0, 2, 0] = 0 mask_cluster[2, 0, 0] = 0 mask_cluster[2, 2, 0] = 0 mask_cluster[0, 0, 2] = 0 mask_cluster[0, 2, 2] = 0 mask_cluster[2, 0, 2] = 0 mask_cluster[2, 2, 2] = 0 elif nvoxel == 18: # null mid disk and disky-shaped mask_cluster[0, 0, 0] = 0 mask_cluster[0, 2, 0] = 0 mask_cluster[2, 0, 0] = 0 mask_cluster[2, 2, 0] = 0 mask_cluster[0, 0, 2] = 0 mask_cluster[0, 2, 2] 
= 0 mask_cluster[2, 0, 2] = 0 mask_cluster[2, 2, 2] = 0 mask_cluster[1, 0, 0] = 0 mask_cluster[1, 0, 1] = 0 mask_cluster[1, 0, 2] = 0 mask_cluster[1, 2, 0] = 0 mask_cluster[1, 2, 1] = 0 mask_cluster[1, 2, 2] = 0 mask_cluster[1, 1, 0] = 0 mask_cluster[1, 1, 2] = 0 elif nvoxel == 7: mask_cluster[0, 0, 0] = 0 mask_cluster[0, 1, 0] = 0 mask_cluster[0, 2, 0] = 0 mask_cluster[0, 0, 1] = 0 mask_cluster[0, 2, 1] = 0 mask_cluster[0, 0, 2] = 0 mask_cluster[0, 1, 2] = 0 mask_cluster[0, 2, 2] = 0 mask_cluster[1, 0, 0] = 0 mask_cluster[1, 2, 0] = 0 mask_cluster[1, 0, 2] = 0 mask_cluster[1, 2, 2] = 0 mask_cluster[2, 0, 0] = 0 mask_cluster[2, 1, 0] = 0 mask_cluster[2, 2, 0] = 0 mask_cluster[2, 0, 1] = 0 mask_cluster[2, 2, 1] = 0 mask_cluster[2, 0, 2] = 0 mask_cluster[2, 1, 2] = 0 mask_cluster[2, 2, 2] = 0 for i in range(1, n_x - 1): for j in range(1, n_y -1): for k in range(1, n_z -1): block = Ranks_res_data[:, i-1:i+2, j-1:j+2, k-1:k+2] mask_block = res_mask_data[i-1:i+2, j-1:j+2, k-1:k+2] if not(int(mask_block[1, 1, 1]) == 0): if nvoxel == 19 or nvoxel == 7 or nvoxel == 18: mask_block = np.multiply(mask_block, mask_cluster) R_block = np.reshape(block, (block.shape[0], 27), order='F') mask_R_block = R_block[:, np.argwhere(np.reshape(mask_block, (1, 27), order='F') > 0)[:, 1]] K[i, j, k] = f_kendall(mask_R_block) img = nb.Nifti1Image(K, header=res_img.get_header(), affine=res_img.get_affine()) if out_file is not None: reho_file = out_file else: reho_file = os.path.join(os.getcwd(), 'ReHo.nii.gz') img.to_filename(reho_file) return reho_file base = "/Volumes/G_drive/Backup_06062020/ds000133/" order_path = base + "/SlTi/" sbjpatt = "" sess = "ses-pre/func" fmriname = "_ses-pre_task-rest_run-01_bold.nii.gz" TR = 1.67136 fwhm = 3 dummy = 10 n_sl = 30 rh = 18 #27 # https://en.wikibooks.org/wiki/SPM/Slice_Timing os.makedirs(order_path, mode=0o777, exist_ok=True) # seq asc 1 2 3 4 slice_order = list(np.arange(1, n_sl+1).astype(str)) writer(slice_order, order_path + 'slti_1.txt') # seq desc
4 3 2 1 slice_order = list(reversed(list(np.arange(1, n_sl+1).astype(str)))) writer(slice_order, order_path + 'slti_2.txt') # int asc 1 3 2 4 slice_order = list(np.arange(1, n_sl+1, 2).astype(str)) + list(np.arange(2, n_sl+1, 2).astype(str)) writer(slice_order, order_path + 'slti_3.txt') # int asc 4 2 3 1 slice_order = list(reversed(list(np.arange(1, n_sl+1, 2).astype(str)) + list(np.arange(2, n_sl+1, 2).astype(str)))) writer(slice_order, order_path + 'slti_4.txt') # int2 asc 2 4 1 3 slice_order = list(np.arange(2, n_sl+1, 2).astype(str)) + list(np.arange(1, n_sl+1, 2).astype(str)) writer(slice_order, order_path + 'slti_5.txt') # int2 dsc 3 1 4 2 slice_order = list(reversed(list(np.arange(2, n_sl+1, 2).astype(str)) + list(np.arange(1, n_sl+1, 2).astype(str)))) writer(slice_order, order_path + 'slti_6.txt') for rr in np.arange(7,27): slice_order = list(shuffle(np.arange(1, n_sl+1).astype(str), random_state=rr)) writer(slice_order, order_path + 'slti_{}.txt'.format(rr)) # random permutation of slices rehos = [] for sbj in sorted([sbj.split("/")[-1].replace("sub-","") for sbj in glob.glob(base + "sub-{}*".format(sbjpatt))]): fmri_nii = base + "sub-{}/{}/".format(sbj,sess) + "sub-{}{}".format(sbj,fmriname) for opt in np.arange(1, 17): #if (opt in [5,6] and n_sl%2==0): # skip Siemens interleaved even cases unless n_sl is really even proc_ref = '{}_preproc_{}'.format(sbj,opt) extract = Node(ExtractROI(t_min=dummy, t_size=-1, output_type='NIFTI_GZ'), name="extract") slicetimer = Node(SliceTimer(custom_order = order_path + "slti_{}.txt".format(opt), time_repetition=TR), name="slicetimer") mcflirt = Node(MCFLIRT(mean_vol=True, save_plots=True), name="mcflirt") smooth = Node(Smooth(fwhm=fwhm), name="smooth") preproc01 = Workflow(name=proc_ref, base_dir=base) preproc01.connect([(extract, slicetimer, [('roi_file', 'in_file')]), (slicetimer, mcflirt, [('slice_time_corrected_file', 'in_file')]), (mcflirt, smooth, [('out_file', 'in_file')])]) extract.inputs.in_file = fmri_nii 
preproc01.run('MultiProc', plugin_args={'n_procs': 1}) basepath = base + "/{}/smooth/".format(proc_ref) proc_f = basepath + fmri_nii.split("/")[-1].replace(".nii.gz","") + "_roi_st_mcf_smooth.nii.gz" in_f = basepath + "meanvol" out_f = basepath + "meanvol_bet" !fslmaths {proc_f} -Tmean {in_f} !bet {in_f} {out_f} -m rehos.append([sbj, opt, compute_reho(proc_f, in_f + "_bet" + "_mask.nii.gz", rh, out_file = base + "/" + sbj + "_" + str(opt) + "_ReHo.nii.gz")]) shutil.rmtree(base + "/{}/".format(proc_ref)) rehos = [[ff.split("/")[-1].split("_")[0], ff.split("/")[-1].split("_")[1], ff] for ff in glob.glob(base+"*_ReHo.nii.gz")] thr = 0.05 res = pd.DataFrame(columns=['sbj', 'ord', 'rehoavg', 'rehopct']) for nii in rehos: img = nb.load(nii[-1]).get_fdata() img = img.ravel() img = img[img>thr] if int(nii[1]) < 7: res = res.append({"sbj":nii[0], "ord":nii[1], "rehoavg":np.nanmean(img), "rehopct":np.percentile(img,90)}, ignore_index = True) else: res = res.append({"sbj":nii[0], "ord":"0", "rehoavg":np.nanmean(img), "rehopct":np.percentile(img,90)}, ignore_index = True) metric = "rehopct" signif = pd.DataFrame(columns=['sbj', 'ord', 'reho', 'tt']) for sbj in np.unique(res.sbj.values): rsel = res[res.sbj == sbj].sort_values(["rehopct","rehoavg"]) for oo in np.arange(0,7): oo = str(oo) t2 = (np.nanmean(rsel[rsel.ord == oo][metric].values - np.nanmean(rsel[rsel.ord == "0"][metric].values))) / \ np.nanstd(rsel[rsel.ord == "0"][metric].values) signif = signif.append({"sbj":sbj, "ord":oo, "reho":round(np.nanmean(rsel[rsel.ord == oo][metric].values),3), "tt": round(np.abs(t2), 3)}, ignore_index = True) signif = signif[(signif.ord != "3")] # exclude impossible cases lls = [] for sbj in np.unique(res.sbj.values): rsel = signif[signif.sbj == sbj].sort_values(["reho","sbj"]) lls.append(rsel[rsel.sbj==sbj].iloc[-1:].ord.values[:]) x = np.array(lls).astype(int).ravel() y = np.bincount(x) ii = np.nonzero(y)[0] print("ord_id, counts") np.vstack((ii,y[ii])).T signif.sort_values(["sbj", 
"tt"]).head(20) ```
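The `f_kendall` helper above implements the standard Kendall's W formula, $W = \frac{12S}{k^2(n^3 - n)}$, for a matrix with one ranked object per row and one rater (here, a neighbouring voxel's timeseries) per column. A small self-contained sanity check of that formula (re-implemented here rather than imported from the notebook):

```python
import numpy as np

def kendall_w(ranks):
    """Kendall's W for an (n_objects, k_raters) matrix of ranks."""
    n, k = ranks.shape
    rank_sums = ranks.sum(axis=1)
    s = np.sum(rank_sums ** 2) - n * rank_sums.mean() ** 2
    return 12 * s / (k ** 2 * (n ** 3 - n))

# Three raters in perfect agreement over four items -> W == 1
perfect = np.tile(np.arange(1, 5)[:, None], (1, 3))
print(kendall_w(perfect))  # 1.0

# Two raters ranking four items in exactly opposite order -> W == 0
opposite = np.column_stack([np.arange(1, 5), np.arange(4, 0, -1)])
print(kendall_w(opposite))  # 0.0
```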
``` # Environ import scipy as scp import tensorflow as tf from scipy.stats import gamma import numpy as np import pandas as pd import matplotlib.pyplot as plt from sklearn.neighbors import KernelDensity import random import multiprocessing as mp import psutil import pickle import os import re import time # import dataset_generator as dg # import make_data_lba as mdlba # from tqdm import tqdm # Own #import ddm_data_simulation as ds import cddm_data_simulation as cds import kde_training_utilities as kde_util import kde_class as kde import boundary_functions as bf from cdwiener import batch_fptd from cdwiener import fptd # DDM now = time.time() repeats = 1000 my_means = np.zeros(repeats) v_vec = np.random.uniform(low = -3, high = 3, size = 1000) a_vec = np.random.uniform(low = 0.5, high = 2.5, size = 1000) w_vec = np.random.uniform(low = 0.2, high = 0.8, size = 1000) for i in range(repeats): out = cds.ddm_flexbound(v = v_vec[i], a = a_vec[i], w = w_vec[i], ndt = 0.0, delta_t = 0.001, s = 1, #np.sqrt(2), max_t = 20, n_samples = 30000, boundary_fun = bf.constant, boundary_multiplicative = True, boundary_params = {}) #boundary_params = {"theta": 0.01}) if i % 100 == 0: print(i) my_means[i] = np.mean(out[0][out[1] == 1]) print(time.time() - now) np.random.uniform(low= -1, high = 2, size = 1000) plt.hist(out[0] * out[1], bins = np.linspace(-15, 15, 100), density = True) out = cds.ddm_sdv(v = -3, a = 2.5, w = 0.3, ndt = 1, sdv = 0, s = 1, boundary_fun = bf.constant, delta_t = 0.001, n_samples = 100000) out[0] * out[1] my_bins = np.arange(- 512, 513) * 20 / 1024 analy_out = batch_fptd(t = my_bins.copy(), v = 3, a = 5, w = 0.7, ndt = 1, sdv = 0, eps = 1e-50) (analy_out <= 1e-48).nonzero() analy_out[500:550] plt.plot(my_bins, analy_out) plt.hist(out[0] * out[1], bins = np.arange(-512, 513) * 20/1024 , alpha = 0.2, color = 'red', density = 1) plt.plot(my_bins, analy_out) cumsum = 0 for i in range(1, analy_out.shape[0], 1): cumsum += ((analy_out[i - 1] + analy_out[i]) / 2) * 
(my_bins[1] - my_bins[0]) cumsum np.exp(25) analy_out.shape plt.hist(out[0][out[1][:, 0] == -1, 0], bins = np.arange(512) * 20/512 , alpha = 0.2, color = 'red') plt.hist(out[0][out[1][:, 0] == 1, 0], bins = np.arange(512) * 20/512 , alpha = 0.2, color = 'green') # DDM repeats = 1 colors = ['green', 'red'] my_means = np.zeros(repeats) cnt = 0 for i in np.linspace(2, 1.01, 2): out = cds.levy_flexbound(v = 0, a = 2.5, w = 0.5, alpha_diff = i, ndt = 0.5, delta_t = 0.001, max_t = 20, n_samples = 10000, boundary_fun = bf.constant, boundary_multiplicative = True, boundary_params = {}) #boundary_params = {"theta": 0.01}) plt.hist(out[0] * out[1], bins = np.linspace(-15, 15, 100), density = True, alpha = 0.2, color = colors[cnt]) print(i) cnt += 1 #my_means[i] = np.mean(out[0][out[1] == 1]) plt.show() def bin_simulator_output(out = [0, 0], bin_dt = 0.04, n_bins = 0, eps_correction = 1e-7, # min p for a bin params = ['v', 'a', 'w', 'ndt'] ): # ['v', 'a', 'w', 'ndt', 'angle'] # Generate bins if n_bins == 0: n_bins = int(out[2]['max_t'] / bin_dt) bins = np.linspace(0, out[2]['max_t'], n_bins) else: bins = np.linspace(0, out[2]['max_t'], n_bins) bins = np.append(bins, [100]) print(bins) counts = [] cnt = 0 counts = np.zeros( (n_bins, len(out[2]['possible_choices']) ) ) counts_size = counts.shape[0] * counts.shape[1] for choice in out[2]['possible_choices']: counts[:, cnt] = np.histogram(out[0][out[1] == choice], bins = bins)[0] / out[2]['n_samples'] cnt += 1 # Apply correction for empty bins n_small = 0 n_big = 0 n_small = np.sum(counts < eps_correction) n_big = counts_size - n_small if eps_correction > 0: counts[counts <= eps_correction] = eps_correction counts[counts > eps_correction] -= (eps_correction * (n_small / n_big)) return ([out[2][param] for param in params], # features counts, # labels {'max_t': out[2]['max_t'], 'bin_dt': bin_dt, 'n_samples': out[2]['n_samples']} # meta data ) def bin_simulator_output(self, out = [0, 0], bin_dt = 0.04, nbins = 0): # ['v', 'a', 'w', 
'ndt', 'angle'] # Generate bins if nbins == 0: nbins = int(out[2]['max_t'] / bin_dt) bins = np.zeros(nbins + 1) bins[:nbins] = np.linspace(0, out[2]['max_t'], nbins) bins[nbins] = np.inf else: bins = np.zeros(nbins + 1) bins[:nbins] = np.linspace(0, out[2]['max_t'], nbins) bins[nbins] = np.inf cnt = 0 counts = np.zeros( (nbins, len(out[2]['possible_choices']) ) ) for choice in out[2]['possible_choices']: counts[:, cnt] = np.histogram(out[0][out[1] == choice], bins = bins)[0] / out[2]['n_samples'] cnt += 1 return counts #%%timeit -n 1 -r 5 a, b = bin_simulator_output(out = out) %%timeit -n 5 -r 1 out = cds.ornstein_uhlenbeck(v = 0.0, a = 1.5, w = 0.5, g = 0, ndt = 0.92, delta_t = 0.001, boundary_fun = bf.constant, n_samples = 100000) binned_sims = bin_simulator_output(out = out, n_bins = 256, eps_correction = 1e-7, params = ['v', 'a', 'w', 'g', 'ndt']) %%timeit -n 5 -r 1 out = cds.ddm_flexbound_seq2(v_h = 0, v_l_1 = 0, v_l_2 = 0, a = 1.5, w_h = 0.5, w_l_1 = 0.5, w_l_2 = 0.5, ndt = 0.5, s = 1, delta_t = 0.001, max_t = 20, n_samples = 100000, print_info = True, boundary_fun = bf.constant, # function of t (and potentially other parameters) that takes in (t, *args) boundary_multiplicative = True, boundary_params = {}) %%timeit -n 5 -r 1 out = cds.ddm_flexbound_par2(v_h = 0, v_l_1 = 0, v_l_2 = 0, a = 1.5, w_h = 0.5, w_l_1 = 0.5, w_l_2 = 0.5, ndt = 0.5, s = 1, delta_t = 0.001, max_t = 20, n_samples = 100000, print_info = True, boundary_fun = bf.constant, # function of t (and potentially other parameters) that takes in (t, *args) boundary_multiplicative = True, boundary_params = {}) %%timeit -n 5 -r 1 out = cds.ddm_flexbound_mic2(v_h = 0.0, v_l_1 = 0.0, v_l_2 = 0.0, a = 1.5, w_h = 0.5, w_l_1 = 0.5, w_l_2 = 0.5, d = 1.0, ndt = 0.5, s = 1, delta_t = 0.001, max_t = 20, n_samples = 100000, print_info = True, boundary_fun = bf.constant, # function of t (and potentially other parameters) that takes in (t, *args) boundary_multiplicative = True, boundary_params = {}) 
plt.hist(out[0][out[1][:, 0] == 0, 0], bins = np.arange(512) * 20/512 , alpha = 0.2, color = 'red') plt.hist(out[0][out[1][:, 0] == 1, 0], bins = np.arange(512) * 20/512 , alpha = 0.2, color = 'green') #plt.hist(out[0][out[1][:, 0] == 2, 0], bins = np.arange(512) * 20/512 , alpha = 0.2, color = 'red') #plt.hist(out[0][out[1][:, 0] == 3, 0], bins = np.arange(512) * 20/512 , alpha = 0.2, color = 'green') import pickle import os os.listdir('/media/data_cifs/afengler/data/kde/ddm_seq2/training_data_binned_1_nbins_512_n_100000') tt = pickle.load(open('/media/data_cifs/afengler/data/kde/ddm_mic2/training_data_binned_1_nbins_512_n_100000/ddm_mic2_nchoices_2_train_data_binned_1_nbins_512_n_100000_999.pickle', 'rb')) tt[1][0][:,0] plt.plot(tt[1][2, :,0]) plt.plot(tt[1][2, :,1]) plt.plot(tt[1][2, :,2]) plt.plot(tt[1][2, :,3]) print(np.mean(out[0][out[1][:, 0] == 0, 0])) print(np.mean(out[0][out[1][:, 0] == 1, 0])) #print(np.mean(out[0][out[1][:, 0] == 2, 0])) #print(np.mean(out[0][out[1][:, 0] == 3, 0])) print(np.shape(out[0][out[1][:, 0] == 0, 0])) print(np.shape(out[0][out[1][:, 0] == 1, 0])) #print(np.shape(out[0][out[1][:, 0] == 2, 0])) #print(np.shape(out[0][out[1][:, 0] == 3, 0])) np.sort(out[0][out[1][:,0] == 1, 0]) plt.hist(out[0][out[1][:, 0] == 0, 0], bins = 50, alpha = 0.5, color = 'green') plt.hist(out[0][out[1][:, 0] == 1, 0], bins = 50, alpha = 0.2, color = 'green') plt.hist(out[0][out[1][:, 0] == 2, 0], bins = 50, alpha = 0.2, color = 'blue') plt.hist(out[0][out[1][:, 0] == 3, 0], bins = 50, alpha = 0.2, color = 'red') print(np.max(out[0][out[1][:, 0] == 0, 0])) print(np.max(out[0][out[1][:, 0] == 1, 0])) print(np.max(out[0][out[1][:, 0] == 2, 0])) print(np.max(out[0][out[1][:, 0] == 3, 0])) binned_sims = bin_simulator_output(out = out, n_bins = 256, eps_correction = 1e-7, params = ['v', 'a', 'w', 'g', 'ndt']) plt.plot(binned_sims[1][:, 1]) plt.plot(binned_sims[1][:, 0]) binned_sims[1][255, 1] files_ = 
os.listdir('/media/data_cifs/afengler/data/kde/ddm/base_simulations_20000') labels = np.zeros((250000, 500, 2)) features = np.zeros((250000, 3)) cnt = 0 i = 0 file_dim = 100 for file_ in files_[:1000]: if file_[:8] == 'ddm_flex': out = pickle.load(open('/media/data_cifs/afengler/data/kde/ddm/base_simulations_20000/' + file_, 'rb')) features[cnt], labels[cnt] = bin_simulator_output(out = out) if cnt % file_dim == 0: print(cnt) pickle.dump((labels[(i * file_dim):((i + 1) * file_dim)], features[(i * file_dim):((i + 1) * file_dim)]), open('/media/data_cifs/afengler/data/kde/ddm/base_simulations_20000_binned/dataset_' + str(i), 'wb')) i += 1 cnt += 1 # FULL DDM repeats = 50 my_means = np.zeros(repeats) for i in range(repeats): out = cds.full_ddm(v = 0, a = 0.96, w = 0.5, ndt = 0.5, dw = 0.0, sdv = 0.0, dndt = 0.5, delta_t = 0.01, max_t = 20, n_samples = 10000, boundary_fun = bf.constant, boundary_multiplicative = True, boundary_params = {}) print(i) my_means[i] = np.mean(out[0][out[1] == 1]) plt.hist(out[0] * out[1], bins = 50) int(50 / out[2]['delta_t'] + 1) # LCA repeats = 1 my_means = np.zeros(repeats) for i in range(repeats): out = cds.lca(v = np.array([0, 0], dtype = np.float32), a = 2, w = np.array([0.5, 0.5], dtype = np.float32), ndt = np.array([1.0, 1.0], dtype = np.float32), g = -1.0, b = 1.0, delta_t = 0.01, max_t = 40, n_samples = 10000, boundary_fun = bf.constant, boundary_multiplicative = True, boundary_params = {}) print(i) my_means[i] = np.mean(out[0][out[1] == 1]) out[1][out[1] == 0] = -1 plt.hist(out[0] * out[1], bins = 50) # LCA repeats = 10 my_means = np.zeros(repeats) for i in range(repeats): out = cds.ddm_flexbound(v = 0.0, a = 1.5, w = 0.5, ndt = 0.1, delta_t = 0.01, max_t = 40, n_samples = 10000, boundary_fun = bf.constant, boundary_multiplicative = True, boundary_params = {}) print(i) my_means[i] = np.mean(out[0][out[1] == 1]) def foo(name, *args, **kwargs): print ("args: ", args) print ("Type of args: ", type(args)) if len(args)>2: args = 
args[0], args[1] #- Created Same name variable. print ("Temp args:", args) my_keys = [] for key in test_dat.keys(): if key[0] == 'v': my_keys.append(key) np.array(test_dat.loc[1, ['v_0', 'v_1']]) my_dat = mdlba.make_data_rt_choice(target_folder = my_target_folder) np.max(my_dat['log_likelihood']) data = np.concatenate([out[0], out[1]], axis = 1) ### cds.race_model(boundary_fun = bf.constant, n_samples = 100000) np.quantile(np.random.uniform(size = (10000,4)), q = [0.05, 0.10, 0.9, 0.95], axis = 0) tuple(map(tuple, a)) tuple(np.apply_along_axis(my_func, 0, a, key_vec)) dict(zip(a[0,:], ['a' ,'b', 'c'])) def my_func(x = 0, key_vec = ['a' ,'b', 'c']): return dict(zip(key_vec, x)) my_func_init = my_func(key_vec = ['d', 'e', 'f']) test = yaml.load(open('config_files/config_data_generator.yaml')) from multiprocessing import Pool def myfunc(a): return a ** 2 pbar = tqdm(total = 100) def update(): pbar.update a = tuple() for i in range(pbar.total): a += ((1, ), ) pool = Pool(4) pool.starmap(myfunc, a, callback = update) pool.close() pool.join() def my_fun(*args): print(args) help(dg.make_dataset_r_dgp) def zip_dict(x = [], key_vec = ['a', 'b', 'c']): return dict(zip(key_vec, x)) my_dg = dg.data_generator(file_id = 'TEST') out = my_dg.make_dataset_perturbation_experiment(save = False) out = my_dg.make_dataset_uniform(save = False) my_dg.param_grid_perturbation_experiment() param_grid = my_dg.param_grid_uniform() %%timeit -n 1 -r 1 tt = my_dg.generate_data_grid_parallel(param_grid = param_grid) 3**3 a = np.random.choice(10, size = (1000,1)) for i in zip([1,2,3], [1, 2, 3], [1]): print( i ) ```
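The `bin_simulator_output` helpers above turn simulator output into per-choice reaction-time histograms. A stripped-down sketch of that binning step on made-up data (the gamma-distributed RTs and random choices below are placeholders, not real simulator output):

```python
import numpy as np

# Stand-in for the simulator's (rts, choices) output: hypothetical gamma RTs
# and random binary choices (placeholders, not real simulator output)
rng = np.random.default_rng(0)
rts = rng.gamma(shape=2.0, scale=0.3, size=1000)
choices = rng.choice([-1, 1], size=1000)

max_t, n_bins = 5.0, 32
bins = np.append(np.linspace(0, max_t, n_bins), np.inf)  # final bin catches the tail

# One normalized histogram per choice, as in bin_simulator_output
counts = np.stack(
    [np.histogram(rts[choices == c], bins=bins)[0] / len(rts) for c in (-1, 1)],
    axis=1)
print(counts.shape)  # (32, 2)
print(counts.sum())  # 1.0: every trial lands in some (bin, choice) cell
```

Normalizing by the total trial count (rather than per choice) keeps the two columns jointly summing to one, so the choice probabilities are preserved alongside the RT distributions.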
# Bayesian Curve Fitting

### Overview

The predictive distribution resulting from a Bayesian treatment of polynomial curve fitting using an $M = 9$ polynomial, with the fixed parameters $\alpha = 5 \times 10^{-3}$ and $\beta = 11.1$ (corresponding to the known noise variance). The red curve denotes the mean of the predictive distribution, and the red region corresponds to $\pm 1$ standard deviation around the mean.

### Procedure

1. The predictive distribution is written in the form
\begin{equation*} p(t \mid x, {\bf x}, {\bf t}) = N(t \mid m(x), s^2(x)) \quad (1.69) \end{equation*}
2. The basis function is defined as $\phi_i(x) = x^i$ for $i = 0, \ldots, M$.
3. The mean and variance are given by
\begin{equation*} m(x) = \beta \phi(x)^{\bf T} {\bf S} \sum_{n=1}^N \phi(x_n) t_n \quad (1.70) \end{equation*}
\begin{equation*} s^2(x) = \beta^{-1} + \phi(x)^{\bf T} {\bf S} \phi(x) \quad (1.71) \end{equation*}
\begin{equation*} {\bf S}^{-1} = \alpha {\bf I} + \beta \sum_{n=1}^N \phi(x_n) \phi(x_n)^{\bf T} \quad (1.72) \end{equation*}
4. Implement these equations and visualize the predictive distribution in the range $0.0 < x < 1.0$.
```
import numpy as np
from numpy.linalg import inv
import pandas as pd
from pylab import *
import matplotlib.pyplot as plt
%matplotlib inline

# From p. 31: the authors define phi as follows
def phi(x):
    return np.array([x ** i for i in range(M + 1)]).reshape((M + 1, 1))

# (1.70) Mean of the predictive distribution, m(x)
def mean(x, x_train, y_train, S):
    total = np.zeros((M + 1, 1))
    for n in range(len(x_train)):
        total += np.dot(phi(x_train[n]), y_train[n])
    return Beta * phi(x).T.dot(S).dot(total)

# (1.71) Variance of the predictive distribution, s^2(x)
def var(x, S):
    return 1.0 / Beta + phi(x).T.dot(S).dot(phi(x))

# (1.72)
def S(x_train, y_train):
    I = np.identity(M + 1)
    Sigma = np.zeros((M + 1, M + 1))
    for n in range(len(x_train)):
        Sigma += np.dot(phi(x_train[n]), phi(x_train[n]).T)
    S_inv = alpha * I + Beta * Sigma
    return inv(S_inv)

alpha = 0.005
Beta = 11.1
M = 9

# Sine curve
x_real = np.arange(0, 1, 0.01)
y_real = np.sin(2 * np.pi * x_real)

# Training data
N = 10
x_train = np.linspace(0, 1, N)

# Add a "small level of random noise having a Gaussian distribution"
loc = 0
scale = 0.3
y_train = np.sin(2 * np.pi * x_train) + np.random.normal(loc, scale, N)

result = S(x_train, y_train)

# Compute the predictive distribution corresponding to the entire range of x
mu = np.array([mean(x, x_train, y_train, result)[0, 0] for x in x_real])
variance = np.array([var(x, result)[0, 0] for x in x_real])
SD = np.sqrt(variance)
upper = mu + SD
lower = mu - SD

plt.figure(figsize=(10, 7))
plot(x_train, y_train, 'bo')
plot(x_real, y_real, 'g-')
plot(x_real, mu, 'r-')
fill_between(x_real, upper, lower, color='pink')
xlim(0.0, 1.0)
ylim(-2, 2)
title("Figure 1.17")
```
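The per-point loops above can also be written in vectorized form with an $N \times (M+1)$ design matrix $\Phi$ whose rows are $\phi(x_n)^{\bf T}$. The following is a sketch of the same equations (1.70)–(1.72), not part of the original notebook; the function and variable names are our own.

```python
import numpy as np

alpha, Beta, M = 0.005, 11.1, 9

def design_matrix(x):
    """Rows are phi(x_n)^T with phi_i(x) = x**i, i = 0..M."""
    return np.vander(np.atleast_1d(x), M + 1, increasing=True)

def predictive(x, x_train, y_train):
    """Mean (1.70) and variance (1.71) of the predictive distribution."""
    Phi = design_matrix(x_train)                          # N x (M+1)
    S = np.linalg.inv(alpha * np.eye(M + 1)
                      + Beta * Phi.T @ Phi)               # (1.72)
    phi_x = design_matrix(x)                              # K x (M+1)
    m = Beta * phi_x @ S @ Phi.T @ y_train                # (1.70)
    s2 = 1.0 / Beta + np.sum(phi_x @ S * phi_x, axis=1)   # (1.71)
    return m, s2

rng = np.random.default_rng(0)
x_train = np.linspace(0, 1, 10)
y_train = np.sin(2 * np.pi * x_train) + rng.normal(0, 0.3, size=10)
m, s2 = predictive(np.arange(0, 1, 0.01), x_train, y_train)
```

The loop-based `S`, `mean`, and `var` functions compute exactly the same quantities; the matrix form simply replaces the sums over $n$ with `Phi.T @ Phi` and `Phi.T @ y_train`.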
``` ''' setting before run. every notebook should include this code. ''' import os os.environ["CUDA_DEVICE_ORDER"]="PCI_BUS_ID" os.environ["CUDA_VISIBLE_DEVICES"]="0" import sys _r = os.getcwd().split('/') _p = '/'.join(_r[:_r.index('gate-decorator-pruning')+1]) print('Change dir from %s to %s' % (os.getcwd(), _p)) os.chdir(_p) sys.path.append(_p) from config import parse_from_dict parse_from_dict({ "base": { "task_name": "resnet56_cifar10_ticktock", "cuda": True, "seed": 0, "checkpoint_path": "", "epoch": 0, "multi_gpus": True, "fp16": False }, "model": { "name": "cifar.resnet56", "num_class": 10, "pretrained": False }, "train": { "trainer": "normal", "max_epoch": 160, "optim": "sgd", "steplr": [ [80, 0.1], [120, 0.01], [160, 0.001] ], "weight_decay": 5e-4, "momentum": 0.9, "nesterov": False }, "data": { "type": "cifar10", "shuffle": True, "batch_size": 128, "test_batch_size": 128, "num_workers": 4 }, "loss": { "criterion": "softmax" }, "gbn": { "sparse_lambda": 1e-3, "flops_eta": 0, "lr_min": 1e-3, "lr_max": 1e-2, "tock_epoch": 10, "T": 10, "p": 0.002 } }) from config import cfg import torch import torch.nn as nn import numpy as np import torch.optim as optim from logger import logger from main import set_seeds, recover_pack, adjust_learning_rate, _step_lr, _sgdr from models import get_model from utils import dotdict from prune.universal import Meltable, GatedBatchNorm2d, Conv2dObserver, IterRecoverFramework, FinalLinearObserver from prune.utils import analyse_model, finetune set_seeds() pack = recover_pack() model_dict = torch.load('./ckps/resnet56_cifair10_baseline.ckp', map_location='cpu' if not cfg.base.cuda else 'cuda') pack.net.module.load_state_dict(model_dict) GBNs = GatedBatchNorm2d.transform(pack.net) for gbn in GBNs: gbn.extract_from_bn() pack.optimizer = optim.SGD( pack.net.parameters() , lr=2e-3, momentum=cfg.train.momentum, weight_decay=cfg.train.weight_decay, nesterov=cfg.train.nesterov ) ``` ---- ``` import uuid def bottleneck_set_group(net): 
layers = [ net.module.layer1, net.module.layer2, net.module.layer3 ] for m in layers: masks = [] if m == net.module.layer1: masks.append(pack.net.module.bn1) for mm in m.modules(): if mm.__class__.__name__ == 'BasicBlock': if len(mm.shortcut._modules) > 0: masks.append(mm.shortcut._modules['1']) masks.append(mm.bn2) group_id = uuid.uuid1() for mk in masks: mk.set_groupid(group_id) bottleneck_set_group(pack.net) def clone_model(net): model = get_model() gbns = GatedBatchNorm2d.transform(model.module) model.load_state_dict(net.state_dict()) return model, gbns cloned, _ = clone_model(pack.net) BASE_FLOPS, BASE_PARAM = analyse_model(cloned.module, torch.randn(1, 3, 32, 32).cuda()) print('%.3f MFLOPS' % (BASE_FLOPS / 1e6)) print('%.3f M' % (BASE_PARAM / 1e6)) del cloned def eval_prune(pack): cloned, _ = clone_model(pack.net) _ = Conv2dObserver.transform(cloned.module) cloned.module.linear = FinalLinearObserver(cloned.module.linear) cloned_pack = dotdict(pack.copy()) cloned_pack.net = cloned Meltable.observe(cloned_pack, 0.001) Meltable.melt_all(cloned_pack.net) flops, params = analyse_model(cloned_pack.net.module, torch.randn(1, 3, 32, 32).cuda()) del cloned del cloned_pack return flops, params ``` ---- ``` pack.trainer.test(pack) pack.tick_trainset = pack.train_loader prune_agent = IterRecoverFramework(pack, GBNs, sparse_lambda = cfg.gbn.sparse_lambda, flops_eta = cfg.gbn.flops_eta, minium_filter = 3) LOGS = [] flops_save_points = set([40, 38, 35, 32, 30]) iter_idx = 0 prune_agent.tock(lr_min=cfg.gbn.lr_min, lr_max=cfg.gbn.lr_max, tock_epoch=cfg.gbn.tock_epoch) while True: left_filter = prune_agent.total_filters - prune_agent.pruned_filters num_to_prune = int(left_filter * cfg.gbn.p) info = prune_agent.prune(num_to_prune, tick=True, lr=cfg.gbn.lr_min) flops, params = eval_prune(pack) info.update({ 'flops': '[%.2f%%] %.3f MFLOPS' % (flops/BASE_FLOPS * 100, flops / 1e6), 'param': '[%.2f%%] %.3f M' % (params/BASE_PARAM * 100, params / 1e6) }) LOGS.append(info) 
print('Iter: %d,\t FLOPS: %s,\t Param: %s,\t Left: %d,\t Pruned Ratio: %.2f %%,\t Train Loss: %.4f,\t Test Acc: %.2f' % (iter_idx, info['flops'], info['param'], info['left'], info['total_pruned_ratio'] * 100, info['train_loss'], info['after_prune_test_acc'])) iter_idx += 1 if iter_idx % cfg.gbn.T == 0: print('Tocking:') prune_agent.tock(lr_min=cfg.gbn.lr_min, lr_max=cfg.gbn.lr_max, tock_epoch=cfg.gbn.tock_epoch) flops_ratio = flops/BASE_FLOPS * 100 for point in [i for i in list(flops_save_points)]: if flops_ratio <= point: torch.save(pack.net.module.state_dict(), './logs/resnet56_cifar10_ticktock/%s.ckp' % str(point)) flops_save_points.remove(point) if len(flops_save_points) == 0: break ``` ### You can see how to fine-tune and get the pruned network in the finetune.ipynb
# When To Stop Fuzzing

In the past chapters, we have discussed several fuzzing techniques. Knowing _what_ to do is important, but it is also important to know when to _stop_ doing things. In this chapter, we will learn when to _stop fuzzing_ – and use a prominent example for this purpose: the *Enigma* machine that was used in the Second World War by the navy of Nazi Germany to encrypt communications, and how Alan Turing and I.J. Good used _fuzzing techniques_ to crack ciphers for the Naval Enigma machine.

Turing not only developed the foundations of computer science, the Turing machine. Together with his assistant I.J. Good, he also invented estimators of the probability of an event occurring that has never previously occurred. We show how the Good-Turing estimator can be used to quantify the *residual risk* of a fuzzing campaign that finds no vulnerabilities. That is, we show how it estimates the probability of discovering a vulnerability when no vulnerability has been observed throughout the fuzzing campaign. We discuss means to speed up [coverage-based fuzzers](Coverage.ipynb) and introduce a range of estimation and extrapolation methodologies to assess and extrapolate fuzzing progress and residual risk.

**Prerequisites**

* _The chapter on [Coverage](Coverage.ipynb) discusses how to use coverage information for an executed test input to guide a coverage-based mutational greybox fuzzer._
* Some knowledge of statistics is helpful.

```
import bookutils
import Fuzzer
import Coverage
```

## The Enigma Machine

It is autumn in the year of 1938. Turing has just finished his PhD at Princeton University, demonstrating the limits of computation and laying the foundation for the theory of computer science. Nazi Germany is rearming. It has reoccupied the Rhineland and annexed Austria against the Treaty of Versailles.
It has just annexed the Sudetenland in Czechoslovakia and begins preparations to take over the rest of Czechoslovakia despite an agreement just signed in Munich. Meanwhile, the British intelligence is building up their capability to break encrypted messages used by the Germans to communicate military and naval information. The Germans are using [Enigma machines](https://en.wikipedia.org/wiki/Enigma_machine) for encryption. Enigma machines use a series of electro-mechanical rotor cipher machines to protect military communication. Here is a picture of an Enigma machine: ![Enigma Machine](PICS/Bletchley_Park_Naval_Enigma_IMG_3604.JPG) By the time Turing joined the British Bletchley park, the Polish intelligence reverse engineered the logical structure of the Enigma machine and built a decryption machine called *Bomba* (perhaps because of the ticking noise they made). A bomba simulates six Enigma machines simultaneously and tries different decryption keys until the code is broken. The Polish bomba might have been the very _first fuzzer_. Turing took it upon himself to crack ciphers of the Naval Enigma machine, which were notoriously hard to crack. The Naval Enigma used, as part of its encryption key, a three letter sequence called *trigram*. These trigrams were selected from a book, called *Kenngruppenbuch*, which contained all trigrams in a random order. ### The Kenngruppenbuch Let's start with the Kenngruppenbuch (K-Book). We are going to use the following Python functions. * `random.shuffle(elements)` - shuffle *elements* and put items in random order. * `random.choices(elements, weights)` - choose an item from *elements* at random. An element with twice the *weight* is twice as likely to be chosen. * `log(a)` - returns the natural logarithm of a. * `a ** b` - means `a` to the power of `b` (a.k.a. 
[power operator](https://docs.python.org/3/reference/expressions.html#the-power-operator)) ``` import string import numpy from numpy import log import random ``` We start with creating the set of trigrams: ``` letters = list(string.ascii_letters[26:]) # upper-case characters trigrams = [str(a + b + c) for a in letters for b in letters for c in letters] random.shuffle(trigrams) trigrams[:10] ``` These now go into the Kenngruppenbuch. However, it was observed that some trigrams were more likely chosen than others. For instance, trigrams at the top-left corner of any page, or trigrams on the first or last few pages were more likely than one somewhere in the middle of the book or page. We reflect this difference in distribution by assigning a _probability_ to each trigram, using Benford's law as introduced in [Probabilistic Fuzzing](ProbabilisticGrammarFuzzer.ipynb). Recall, that Benford's law assigns the $i$-th digit the probability $\log_{10}\left(1 + \frac{1}{i}\right)$ where the base 10 is chosen because there are 10 digits $i\in [0,9]$. However, Benford's law works for an arbitrary number of "digits". Hence, we assign the $i$-th trigram the probability $\log_b\left(1 + \frac{1}{i}\right)$ where the base $b$ is the number of all possible trigrams $b=26^3$. ``` k_book = {} # Kenngruppenbuch for i in range(1, len(trigrams) + 1): trigram = trigrams[i - 1] # choose weights according to Benford's law k_book[trigram] = log(1 + 1 / i) / log(26**3 + 1) ``` Here's a random trigram from the Kenngruppenbuch: ``` random_trigram = random.choices(list(k_book.keys()), weights=list(k_book.values()))[0] random_trigram ``` And this is its probability: ``` k_book[random_trigram] ``` ### Fuzzing the Enigma In the following, we introduce an extremely simplified implementation of the Naval Enigma based on the trigrams from the K-book. Of course, the encryption mechanism of the actual Enigma machine is much more sophisticated and worthy of a much more detailed investigation. 
We encourage the interested reader to follow up with the further reading listed in the Background section. The personnel at Bletchley Park can only check whether an encoded message is encoded with a (guessed) trigram. Our implementation `naval_enigma()` takes a `message` and a `key` (i.e., the guessed trigram). If the given key matches the (previously computed) key for the message, `naval_enigma()` returns `True`.

```
from Fuzzer import RandomFuzzer
from Fuzzer import Runner

class EnigmaMachine(Runner):
    def __init__(self, k_book):
        self.k_book = k_book
        self.reset()

    def reset(self):
        """Resets the key register"""
        self.msg2key = {}

    def internal_msg2key(self, message):
        """Internal helper method.
           Returns the trigram for an encoded message."""
        if message not in self.msg2key:
            # Simulating how an officer chooses a key from the Kenngruppenbuch
            # to encode the message.
            self.msg2key[message] = \
                random.choices(list(self.k_book.keys()),
                               weights=list(self.k_book.values()))[0]
        trigram = self.msg2key[message]
        return trigram

    def naval_enigma(self, message, key):
        """Returns true if 'message' is encoded with 'key'"""
        if key == self.internal_msg2key(message):
            return True
        else:
            return False
```

To "fuzz" the `naval_enigma()`, our job will be to come up with a key that matches a given (encrypted) message. Since the keys have only three characters, we have a good chance to achieve this in much less than a second. (Of course, longer keys would be much harder to find via random fuzzing.)

```
class EnigmaMachine(EnigmaMachine):
    def run(self, tri):
        """PASS if cur_msg is encoded with trigram tri"""
        if self.naval_enigma(self.cur_msg, tri):
            outcome = self.PASS
        else:
            outcome = self.FAIL
        return (tri, outcome)
```

Now we can use the `EnigmaMachine` to check whether a certain message is encoded with a certain trigram.

```
enigma = EnigmaMachine(k_book)
enigma.cur_msg = "BrEaK mE. L0Lzz"
enigma.run("AAA")
```

The simplest way to crack an encoded message is by brute forcing.
Suppose, at Bletchley Park, they would try random trigrams until a message is broken.

```
class BletchleyPark(object):
    def __init__(self, enigma):
        self.enigma = enigma
        self.enigma.reset()
        self.enigma_fuzzer = RandomFuzzer(
            min_length=3,
            max_length=3,
            char_start=65,
            char_range=26)

    def break_message(self, message):
        """Returning the trigram for an encoded message"""
        self.enigma.cur_msg = message
        while True:
            (trigram, outcome) = self.enigma_fuzzer.run(self.enigma)
            if outcome == self.enigma.PASS:
                break
        return trigram
```

How long does it take Bletchley Park to find the key using this brute forcing approach?

```
from Timer import Timer

enigma = EnigmaMachine(k_book)
bletchley = BletchleyPark(enigma)

with Timer() as t:
    trigram = bletchley.break_message("BrEaK mE. L0Lzz")
```

Here's the key for the current message:

```
trigram
```

And no, this did not take long:

```
'%f seconds' % t.elapsed_time()

'Bletchley cracks about %d messages per second' % (1 / t.elapsed_time())
```

### Turing's Observations

Okay, let's crack a few messages and count the number of times each trigram is observed.

```
from collections import defaultdict

n = 100  # messages to crack

observed = defaultdict(int)
for msg in range(0, n):
    trigram = bletchley.break_message(msg)
    observed[trigram] += 1

# list of trigrams that have been observed
counts = [k for k, v in observed.items() if int(v) > 0]

t_trigrams = len(k_book)
o_trigrams = len(counts)

"After cracking %d messages, we observed %d out of %d trigrams." % (
    n, o_trigrams, t_trigrams)

singletons = len([k for k, v in observed.items() if int(v) == 1])

"From the %d observed trigrams, %d were observed only once." % (
    o_trigrams, singletons)
```

Given a sample of previously used entries, Turing wanted to _estimate the likelihood_ that the current unknown entry was one that had been previously used, and further, to estimate the probability distribution over the previously used entries.
This led to the development of estimators of the missing mass, and of estimates of the true probability mass of the set of items occurring in the sample. Good worked with Turing during the war and, with Turing's permission, published the analysis of the bias of these estimators in 1953.

Suppose, after finding the keys for $n=100$ messages, we have observed the trigram "ABC" exactly $X_\text{ABC}=10$ times. What is the probability $p_\text{ABC}$ that "ABC" is the key for the next message? Empirically, we would estimate $\hat p_\text{ABC}=\frac{X_\text{ABC}}{n}=0.1$. We can derive the empirical estimates for all other trigrams that we have observed. However, it quickly becomes evident that the complete probability mass is distributed over the *observed* trigrams. This leaves no mass for *unobserved* trigrams, i.e., the probability of discovering a new trigram. This is called the missing probability mass, or the discovery probability.

Turing and Good derived an estimate of the *discovery probability* $p_0$, i.e., the probability of discovering an unobserved trigram, as the number $f_1$ of trigrams observed exactly once divided by the total number $n$ of messages cracked:

$$
p_0 = \frac{f_1}{n}
$$

where $f_1$ is the number of singletons and $n$ is the number of cracked messages.

Let's explore this idea a bit. We'll extend `BletchleyPark` to crack `n` messages and record the number of trigrams observed as the number of cracked messages increases.

```
class BletchleyPark(BletchleyPark):
    def break_message(self, message):
        """Returning the trigram for an encoded message"""
        # For the following experiment, we want to make it practical
        # to break a large number of messages. So, we remove the
        # loop and just return the trigram for a message.
        #
        # enigma.cur_msg = message
        # while True:
        #     (trigram, outcome) = self.enigma_fuzzer.run(self.enigma)
        #     if outcome == self.enigma.PASS:
        #         break
        trigram = enigma.internal_msg2key(message)
        return trigram

    def break_n_messages(self, n):
        """Returns how often each trigram has been observed,
           and #trigrams discovered for each message."""
        observed = defaultdict(int)
        timeseries = [0] * n

        # Crack n messages and record #trigrams observed as #messages increases
        cur_observed = 0
        for cur_msg in range(0, n):
            trigram = self.break_message(cur_msg)

            observed[trigram] += 1
            if (observed[trigram] == 1):
                cur_observed += 1
            timeseries[cur_msg] = cur_observed

        return (observed, timeseries)
```

Let's crack 2000 messages and compute the GT-estimate.

```
n = 2000  # messages to crack

bletchley = BletchleyPark(enigma)
(observed, timeseries) = bletchley.break_n_messages(n)
```

Let us determine the Good-Turing estimate of the probability that the next trigram has not been observed before:

```
singletons = len([k for k, v in observed.items() if int(v) == 1])
gt = singletons / n
gt
```

We can verify the Good-Turing estimate empirically and compute the empirically determined probability that the next trigram has not been observed before. To do this, we repeat the following experiment `repeats=1000` times, reporting the average: If the next message is a new trigram, return 1, otherwise return 0. Note that here, we do not record the newly discovered trigrams as observed.

```
repeats = 1000  # experiment repetitions

newly_discovered = 0
for cur_msg in range(n, n + repeats):
    trigram = bletchley.break_message(cur_msg)
    if (observed[trigram] == 0):
        newly_discovered += 1

newly_discovered / repeats
```

Looks pretty accurate, huh? The difference between the estimates is reasonably small, probably below 0.03. However, the Good-Turing estimate did not require nearly as many computational resources as the empirical estimate. Unlike the empirical estimate, the Good-Turing estimate can be computed during the campaign.
Unlike the empirical estimate, the Good-Turing estimate requires no additional, redundant repetitions. In fact, the Good-Turing (GT) estimator often performs close to the best estimator for arbitrary distributions ([Try it here!](#Kenngruppenbuch)). Of course, the concept of *discovery* is not limited to trigrams. The GT estimator is also used in the study of natural languages to estimate the likelihood that we haven't ever heard or read the word we next encounter. The GT estimator is used in ecology to estimate the likelihood of discovering a new, unseen species in our quest to catalog all _species_ on earth. Later, we will see how it can be used to estimate the probability to discover a vulnerability when none has been observed, yet (i.e., residual risk). Alan Turing was interested in the _complement_ $(1-GT)$ which gives the proportion of _all_ messages for which the Brits have already observed the trigram needed for decryption. For this reason, the complement is also called sample coverage. The *sample coverage* quantifies how much we know about decryption of all messages given the few messages we have already decrypted. The probability that the next message can be decrypted with a previously discovered trigram is: ``` 1 - gt ``` The *inverse* of the GT-estimate (1/GT) is a _maximum likelihood estimate_ of the expected number of messages that we can decrypt with previously observed trigrams before having to find a new trigram to decrypt the message. In our setting, the number of messages for which we can expect to reuse previous trigrams before having to discover a new trigram is: ``` 1 / gt ``` But why is GT so accurate? Intuitively, despite a large sampling effort (i.e., cracking $n$ messages), there are still $f_1$ trigrams that have been observed only once. We could say that such "singletons" are very rare trigrams. 
Hence, the probability that the next message is encoded with such a rare but observed trigram gives a good upper bound on the probability that the next message is encoded with an evidently much rarer, unobserved trigram. Since Turing's observation 80 years ago, an entire statistical theory has been developed around the hypothesis that rare, observed "species" are good predictors of unobserved species.

Let's have a look at the distribution of rare trigrams.

```
%matplotlib inline

import matplotlib.pyplot as plt

frequencies = [v for k, v in observed.items() if int(v) > 0]
frequencies.sort(reverse=True)
# Uncomment to see how often each discovered trigram has been observed
# print(frequencies)

# frequency of rare trigrams
plt.figure(num=None, figsize=(12, 4), dpi=80, facecolor='w', edgecolor='k')
plt.subplot(1, 2, 1)
plt.hist(frequencies, range=[1, 21], bins=numpy.arange(1, 21) - 0.5)
plt.xticks(range(1, 21))
plt.xlabel('# of occurrences (e.g., 1 represents singleton trigrams)')
plt.ylabel('Frequency of occurrences')
plt.title('Figure 1. Frequency of Rare Trigrams')

# trigram discovery over time
plt.subplot(1, 2, 2)
plt.plot(timeseries)
plt.xlabel('# of messages cracked')
plt.ylabel('# of trigrams discovered')
plt.title('Figure 2. Trigram Discovery Over Time');

# Statistics for most and least often observed trigrams
singletons = len([v for k, v in observed.items() if int(v) == 1])
total = len(frequencies)

print("%3d of %3d trigrams (%.3f%%) have been observed 1 time (i.e., are singleton trigrams)."
      % (singletons, total, singletons * 100 / total))

print("%3d of %3d trigrams (%.3f%%) have been observed %d times."
      % (1, total, 100 / total, frequencies[0]))
```

The *majority of trigrams* have been observed only once, as we can see in Figure 1 (left). In other words, the majority of observed trigrams are "rare" singletons. In Figure 2 (right), we can see that discovery is in full swing. The trajectory seems almost linear.
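The near-linear trajectory cannot continue forever, though. To see the eventual saturation quantitatively, assume for simplicity that all $b = 26^3$ trigrams were equally likely (an idealization; the skewed K-book distribution slows discovery further). Then the expected number of distinct trigrams after cracking $n$ messages is $b\left(1 - (1 - 1/b)^n\right)$:

```python
# Expected number of distinct trigrams after n messages, assuming a
# uniform distribution over all b = 26**3 trigrams (a simplifying
# assumption; the skewed K-book distribution slows discovery further).
b = 26 ** 3  # 17,576 possible trigrams

def expected_distinct(n, b=b):
    return b * (1 - (1 - 1 / b) ** n)

for n in [2000, 20000, 100000, 1000000]:
    print(n, round(expected_distinct(n)))
```

At $n = 2000$, roughly 1,900 distinct trigrams are expected – still nearly linear in $n$ – while by $n = 100{,}000$ discovery has almost reached the asymptote of 17,576.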
However, since there is only a finite number of trigrams (26^3 = 17,576), trigram discovery will slow down and eventually approach an asymptote (the total number of trigrams).

### Boosting the Performance of BletchleyPark

Some trigrams have been observed very often. We call these "abundant" trigrams.

```
print("Trigram : Frequency")
for trigram in sorted(observed, key=observed.get, reverse=True):
    if observed[trigram] > 10:
        print("    %s : %d" % (trigram, observed[trigram]))
```

We'll speed up the code breaking by _trying the abundant trigrams first_. First, we'll find out how many messages can be cracked by the existing brute forcing strategy at Bletchley Park, given a maximum number of attempts. We'll also track the number of messages cracked over time (`timeseries`).

```
class BletchleyPark(BletchleyPark):
    def __init__(self, enigma):
        super().__init__(enigma)
        self.cur_attempts = 0
        self.cur_observed = 0
        self.observed = defaultdict(int)
        self.timeseries = [None] * max_attempts * 2

    def break_message(self, message):
        """Returns the trigram for an encoded message, and
           track #trigrams observed as #attempts increases."""
        self.enigma.cur_msg = message
        while True:
            self.cur_attempts += 1  # NEW
            (trigram, outcome) = self.enigma_fuzzer.run(self.enigma)
            self.timeseries[self.cur_attempts] = self.cur_observed  # NEW
            if outcome == self.enigma.PASS:
                break
        return trigram

    def break_max_attempts(self, max_attempts):
        """Returns #messages successfully cracked after a given #attempts."""
        cur_msg = 0
        n_messages = 0

        while True:
            trigram = self.break_message(cur_msg)

            # stop when reaching max_attempts
            if self.cur_attempts >= max_attempts:
                break

            # update observed trigrams
            n_messages += 1
            self.observed[trigram] += 1
            if (self.observed[trigram] == 1):
                self.cur_observed += 1
                self.timeseries[self.cur_attempts] = self.cur_observed
            cur_msg += 1

        return n_messages
```

`original` is the number of messages cracked by the brute forcing strategy, given 100k attempts. Can we beat this?
```
max_attempts = 100000

bletchley = BletchleyPark(enigma)
original = bletchley.break_max_attempts(max_attempts)
original
```

Now, we'll create a boosting strategy by trying trigrams first that we have previously observed most often.

```
class BoostedBletchleyPark(BletchleyPark):
    def break_message(self, message):
        """Returns the trigram for an encoded message, and
           track #trigrams observed as #attempts increases."""
        self.enigma.cur_msg = message

        # boost cracking by trying observed trigrams first
        for trigram in sorted(self.prior, key=self.prior.get, reverse=True):
            self.cur_attempts += 1
            (_, outcome) = self.enigma.run(trigram)
            self.timeseries[self.cur_attempts] = self.cur_observed
            if outcome == self.enigma.PASS:
                return trigram

        # else fall back to normal cracking
        return super().break_message(message)
```

`boosted` is the number of messages cracked by the boosted strategy.

```
boostedBletchley = BoostedBletchleyPark(enigma)
boostedBletchley.prior = observed
boosted = boostedBletchley.break_max_attempts(max_attempts)
boosted
```

We see that the boosted technique cracks substantially more messages. It is worthwhile to record how often each trigram is used as a key, and to try the trigrams in the order of their occurrence.

***Try it***. *For practical reasons, we use a large number of previous observations as prior (`boostedBletchley.prior = observed`). You can try to change the code such that the strategy uses the trigram frequencies (`self.observed`) observed **during** the campaign itself to boost the campaign. You will need to increase `max_attempts` and wait for a long while.*

Let's compare the number of trigrams discovered over time.
```
# print plots
line_old, = plt.plot(bletchley.timeseries, label="Bruteforce Strategy")
line_new, = plt.plot(boostedBletchley.timeseries, label="Boosted Strategy")
plt.legend(handles=[line_old, line_new])
plt.xlabel('# of cracking attempts')
plt.ylabel('# of trigrams discovered')
plt.title('Trigram Discovery Over Time');
```

We see that the boosted fuzzer is consistently superior to the random fuzzer.

## Estimating the Probability of Path Discovery

<!-- ## Residual Risk: Probability of Failure after an Unsuccessful Fuzzing Campaign -->
<!-- Residual risk is not formally defined in this section, so I made the title a bit more generic -- AZ -->

So, what does Turing's observation for the Naval Enigma have to do with fuzzing _arbitrary_ programs? Turing's assistant I.J. Good extended and published Turing's work on the estimation procedures in Biometrika, a journal for theoretical biostatistics that still exists today.

Good did not talk about trigrams. Instead, he called them "species". Hence, the GT estimator is presented as estimating how likely it is to discover a new species, given an existing sample of individuals (each of which belongs to exactly one species). Now, we can associate program inputs with species, as well. For instance, we could define the path that is exercised by an input as that input's species. This would allow us to _estimate the probability that fuzzing discovers a new path._ Later, we will see how this discovery probability estimate also estimates the likelihood of discovering a vulnerability when we have not seen one yet (residual risk).

Let's do this. We identify the species for an input by computing a hash-id over the set of statements exercised by that input. In the [Coverage](Coverage.ipynb) chapter, we have learned about the [Coverage class](Coverage.ipynb#A-Coverage-Class) which collects coverage information for an executed Python function. As an example, the function [`cgi_decode()`](Coverage.ipynb#A-CGI-Decoder) was introduced.
The function `cgi_decode()` takes a string encoded for a website URL and decodes it back to its original form. Here's what `cgi_decode()` does and how coverage is computed. ``` from Coverage import Coverage, cgi_decode encoded = "Hello%2c+world%21" with Coverage() as cov: decoded = cgi_decode(encoded) decoded print(cov.coverage()); ``` ### Trace Coverage First, we will introduce the concept of execution traces, which are a coarse abstraction of the execution path taken by an input. Compared to the definition of path, a trace ignores the sequence in which statements are exercised or how often each statement is exercised. * `pickle.dumps()` - serializes an object by producing a byte array from all the information in the object * `hashlib.md5()` - produces a 128-bit hash value from a byte array ``` import pickle import hashlib def getTraceHash(cov): pickledCov = pickle.dumps(cov.coverage()) hashedCov = hashlib.md5(pickledCov).hexdigest() return hashedCov ``` Remember our model for the Naval Enigma machine? Each message must be decrypted using exactly one trigram while multiple messages may be decrypted by the same trigram. Similarly, we need each input to yield exactly one trace hash while multiple inputs can yield the same trace hash. Let's see whether this is true for our `getTraceHash()` function. ``` inp1 = "a+b" inp2 = "a+b+c" inp3 = "abc" with Coverage() as cov1: cgi_decode(inp1) with Coverage() as cov2: cgi_decode(inp2) with Coverage() as cov3: cgi_decode(inp3) ``` The inputs `inp1` and `inp2` execute the same statements: ``` inp1, inp2 cov1.coverage() - cov2.coverage() ``` The difference between both coverage sets is empty. 
Hence, the trace hashes should be the same: ``` getTraceHash(cov1) getTraceHash(cov2) assert getTraceHash(cov1) == getTraceHash(cov2) ``` In contrast, the inputs `inp1` and `inp3` execute _different_ statements: ``` inp1, inp3 cov1.coverage() - cov3.coverage() ``` Hence, the trace hashes should be different, too: ``` getTraceHash(cov1) getTraceHash(cov3) assert getTraceHash(cov1) != getTraceHash(cov3) ``` ### Measuring Trace Coverage over Time In order to measure trace coverage for a `function` executing a `population` of fuzz inputs, we slightly adapt the `population_coverage()` function from the [Chapter on Coverage](Coverage.ipynb#Coverage-of-Basic-Fuzzing). ``` def population_trace_coverage(population, function): cumulative_coverage = [] all_coverage = set() cumulative_singletons = [] cumulative_doubletons = [] singletons = set() doubletons = set() for s in population: with Coverage() as cov: try: function(s) except BaseException: pass cur_coverage = set([getTraceHash(cov)]) # singletons and doubletons -- we will need them later doubletons -= cur_coverage doubletons |= singletons & cur_coverage singletons -= cur_coverage singletons |= cur_coverage - (cur_coverage & all_coverage) cumulative_singletons.append(len(singletons)) cumulative_doubletons.append(len(doubletons)) # all and cumulative coverage all_coverage |= cur_coverage cumulative_coverage.append(len(all_coverage)) return all_coverage, cumulative_coverage, cumulative_singletons, cumulative_doubletons ``` Let's see whether our new function really contains coverage information only for *two* traces given our three inputs for `cgi_decode`. ``` all_coverage = population_trace_coverage([inp1, inp2, inp3], cgi_decode)[0] assert len(all_coverage) == 2 ``` Unfortunately, the `cgi_decode()` function is too simple. Instead, we will use the original Python [HTMLParser](https://docs.python.org/3/library/html.parser.html) as our test subject. 
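Before scaling up, one caveat about `getTraceHash()` is worth noting (an observation about the sketch above, not a point made in the chapter): `cov.coverage()` returns a set, and `pickle.dumps()` serializes a set in its iteration order, which for strings depends on Python's per-process hash randomization. Within a single notebook session the hashes are consistent, but if trace hashes were ever compared across processes, sorting the coverage items first would make the hash deterministic:

```python
import hashlib
import pickle

def getStableTraceHash(cov):
    # Sorting removes the dependence on set iteration order,
    # which varies with PYTHONHASHSEED across processes.
    pickledCov = pickle.dumps(sorted(cov.coverage()))
    return hashlib.md5(pickledCov).hexdigest()
```

The remainder of this chapter keeps the original `getTraceHash()`, which is sufficient for the single-session experiments below.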
```
from Fuzzer import RandomFuzzer
from Coverage import population_coverage
from html.parser import HTMLParser
import matplotlib.pyplot as plt

trials = 50000  # number of random inputs generated
```

Let's run a random fuzzer $n=50000$ times and plot trace coverage over time.

```
# create wrapper function
def my_parser(inp):
    parser = HTMLParser()  # resets the HTMLParser object for every fuzz input
    parser.feed(inp)

# create random fuzzer
fuzzer = RandomFuzzer(min_length=1, max_length=100,
                      char_start=32, char_range=94)

# create population of fuzz inputs
population = []
for i in range(trials):
    population.append(fuzzer.fuzz())

# execute and measure trace coverage
trace_timeseries = population_trace_coverage(population, my_parser)[1]

# execute and measure code coverage
code_timeseries = population_coverage(population, my_parser)[1]

# plot trace coverage over time
plt.figure(num=None, figsize=(12, 4), dpi=80, facecolor='w', edgecolor='k')
plt.subplot(1, 2, 1)
plt.plot(trace_timeseries)
plt.xlabel('# of fuzz inputs')
plt.ylabel('# of traces exercised')
plt.title('Trace Coverage Over Time')

# plot code coverage over time
plt.subplot(1, 2, 2)
plt.plot(code_timeseries)
plt.xlabel('# of fuzz inputs')
plt.ylabel('# of statements covered')
plt.title('Code Coverage Over Time');
```

Above, we can see trace coverage (left) and code coverage (right) over time. Here are our observations.

1. **Trace coverage is more robust.** There are fewer sudden jumps in the graph compared to code coverage.
2. **Trace coverage is more fine-grained.** In the end, more traces than statements have been covered (y-axis).
3. **Trace coverage grows more steadily.** Already with the first input, code coverage exercises more than half of all the statements it will have exercised after 50,000 inputs. In contrast, the number of covered traces grows slowly and steadily, since each input can yield only one execution trace.
It is for this reason that one of the most prominent and successful fuzzers today, american fuzzy lop (AFL), uses a similar *measure of progress* (a hash computed over the branches exercised by the input).

### Evaluating the Discovery Probability Estimate

Let's find out how the Good-Turing estimator performs as an estimate of the discovery probability when we are fuzzing to discover execution traces rather than trigrams.

To measure the empirical probability, we execute the same population of inputs ($n=50000$) and measure at regular intervals (`measurements=100` intervals). During each measurement, we repeat the following experiment `repeats=500` times, reporting the average: If the next input yields a new trace, return 1, otherwise return 0. Note that during these repetitions, we do not record the newly discovered traces as observed.

```
repeats = 500       # experiment repetitions
measurements = 100  # experiment measurements

emp_timeseries = []
all_coverage = set()
step = int(trials / measurements)

for i in range(0, trials, step):
    # record the traces of the inputs generated since the last measurement
    if i - step >= 0:
        for j in range(step):
            inp = population[i - j]
            with Coverage() as cov:
                try:
                    my_parser(inp)
                except BaseException:
                    pass
            all_coverage |= set([getTraceHash(cov)])

    # check how many of `repeats` fresh inputs yield a new trace
    discoveries = 0
    for _ in range(repeats):
        inp = fuzzer.fuzz()
        with Coverage() as cov:
            try:
                my_parser(inp)
            except BaseException:
                pass
        if getTraceHash(cov) not in all_coverage:
            discoveries += 1

    emp_timeseries.append(discoveries / repeats)
```

Now, we compute the Good-Turing estimate over time.

```
gt_timeseries = []
singleton_timeseries = population_trace_coverage(population, my_parser)[2]
for i in range(1, trials + 1, step):
    gt_timeseries.append(singleton_timeseries[i - 1] / i)
```

Let's go ahead and plot both time series.
```
line_emp, = plt.semilogy(emp_timeseries, label="Empirical")
line_gt, = plt.semilogy(gt_timeseries, label="Good-Turing")
plt.legend(handles=[line_emp, line_gt])
plt.xticks(range(0, measurements + 1, int(measurements / 5)),
           range(0, trials + 1, int(trials / 5)))
plt.xlabel('# of fuzz inputs')
plt.ylabel('discovery probability')
plt.title('Discovery Probability Over Time');
```

Again, the Good-Turing estimate appears to be *highly accurate*. In fact, the empirical estimator has much lower precision, as indicated by the large swings. You can try increasing the number of repetitions (`repeats`) to get more precision for the empirical estimates, at the cost of much longer waiting times.

### Discovery Probability Quantifies Residual Risk

Alright. You have gotten hold of a couple of powerful machines and used them to fuzz a software system for several months without finding any vulnerabilities. Is the system vulnerable? Well, who knows? We cannot say for sure; there is always some residual risk. Testing is not verification. Maybe the next test input that is generated reveals a vulnerability.

Let's say *residual risk* is the probability that the next test input reveals a vulnerability that has not been found yet. Böhme \cite{Boehme2018stads} has shown that the Good-Turing estimate of the discovery probability is also an estimate of the maximum residual risk.

**Proof sketch (Residual Risk)**. Here is a proof sketch that shows that an estimator of the discovery probability for an arbitrary definition of species gives an upper bound on the probability to discover a vulnerability when none has been found: Suppose, for each "old" species A (here, execution trace), we derive two "new" species: Some inputs belonging to A expose a vulnerability while others belonging to A do not. We know that _only_ species that do not expose a vulnerability have been discovered.
Hence, _all_ species exposing a vulnerability and _some_ species that do not expose a vulnerability remain undiscovered. Thus, the probability of discovering a new species gives an upper bound on the probability of discovering (a species that exposes) a vulnerability. **QED**.

An estimate of the discovery probability is useful in many other ways.

1. **Discovery probability.** We can estimate, at any point during the fuzzing campaign, the probability that the next input belongs to a previously unseen species (here, that it yields a new execution trace, i.e., exercises a new set of statements).
2. **Complement of discovery probability.** We can estimate the proportion of *all* inputs the fuzzer can generate for which we have already seen the species (here, execution traces). In some sense, this allows us to quantify the *progress of the fuzzing campaign towards completion*: If the probability of discovering a new species is too low, we might as well abort the campaign.
3. **Inverse of discovery probability.** We can predict the number of test inputs needed before we can expect the discovery of a new species (here, execution trace).

## How Do We Know When to Stop Fuzzing?

In fuzzing, we have measures of progress such as [code coverage](Coverage.ipynb) or [grammar coverage](GrammarCoverageFuzzer.ipynb). Suppose we are interested in covering all statements in the program. The _percentage_ of statements that have already been covered quantifies how "far" we are from completing the fuzzing campaign. However, sometimes we know only the _number_ of species $S(n)$ (here, statements) that have been discovered after generating $n$ fuzz inputs. The percentage $S(n)/S$ can only be computed if we know the _total number_ of species $S$. Even then, not all species may be feasible.

### A Success Estimator

If we do not _know_ the total number of species, then let's at least _estimate_ it: As we have seen before, species discovery slows down over time.
In the beginning, many new species are discovered. Later, many inputs need to be generated before discovering the next species. In fact, given enough time, the fuzzing campaign approaches an _asymptote_. It is this asymptote that we can estimate.

In 1984, Anne Chao, a well-known theoretical bio-statistician, developed an estimator $\hat S$ of the asymptotic total number of species $S$:

\begin{align}
\hat S_\text{Chao1} = \begin{cases}
S(n) + \frac{f_1^2}{2f_2} & \text{if $f_2>0$}\\
S(n) + \frac{f_1(f_1-1)}{2} & \text{otherwise}
\end{cases}
\end{align}

* where $f_1$ and $f_2$ are the numbers of singleton and doubleton species, respectively (that have been observed exactly once or twice, resp.), and
* where $S(n)$ is the number of species that have been discovered after generating $n$ fuzz inputs.

So, how does Chao's estimate perform? To investigate this, we generate `trials=400000` fuzz inputs using a fuzzer setting that allows us to see an asymptote in a few seconds: We measure trace coverage. Half-way into our fuzzing campaign (after `trials`/2 = 200000 inputs), we compute Chao's estimate $\hat S$ of the asymptotic total number of species. Then, we run the remainder of the campaign to see the "empirical" asymptote.
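The case distinction in Chao's formula can be captured in a small helper function. This is a sketch for illustration only (the helper name `chao1` is ours); the experiment below inlines the same computation:

```python
def chao1(Sn, f1, f2):
    """Chao1 estimate of the asymptotic total number of species.

    Sn: number of species discovered after n inputs
    f1: number of singleton species (observed exactly once)
    f2: number of doubleton species (observed exactly twice)
    """
    if f2 > 0:
        return Sn + f1 * f1 / (2 * f2)
    return Sn + f1 * (f1 - 1) / 2

# Example: 100 species seen, 10 singletons, 5 doubletons
print(chao1(100, 10, 5))  # → 110.0
# With no doubletons, the bias-corrected variant is used
print(chao1(100, 10, 0))  # → 145.0
```

Note how the estimate depends only on the rarest species: many singletons mean many species are still likely to be undiscovered.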
```
trials = 400000
fuzzer = RandomFuzzer(min_length=2, max_length=4,
                      char_start=32, char_range=32)

population = []
for i in range(trials):
    population.append(fuzzer.fuzz())

_, trace_ts, f1_ts, f2_ts = population_trace_coverage(population, my_parser)

time = int(trials / 2)

f1 = f1_ts[time]
f2 = f2_ts[time]
Sn = trace_ts[time]
if f2 > 0:
    hat_S = Sn + f1 * f1 / (2 * f2)
else:
    hat_S = Sn + f1 * (f1 - 1) / 2
```

After executing `time` fuzz inputs (half of all), we have covered this many traces:

```
time
Sn
```

We can estimate there are this many traces in total:

```
hat_S
```

Hence, we have achieved this percentage of the estimate:

```
100 * Sn / hat_S
```

After executing `trials` fuzz inputs, we have covered this many traces:

```
trials
trace_ts[trials - 1]
```

The accuracy of Chao's estimator is quite reasonable. It isn't always accurate -- particularly at the beginning of a fuzzing campaign when the [discovery probability](WhenIsEnough.ipynb#Measuring-Trace-Coverage-over-Time) is still very high. Nevertheless, it demonstrates the main benefit of reporting a percentage to assess the progress of a fuzzing campaign towards completion.

***Try it***. *Try setting `trials` to 1 million and `time` to `int(trials / 4)`.*

### Extrapolating Fuzzing Success

Suppose you have run the fuzzer for a week, which generated $n$ fuzz inputs and discovered $S(n)$ species (here, covered $S(n)$ execution traces). Instead of running the fuzzer for another week, you would like to *predict* how many more species you would discover. In 2003, Anne Chao and her team developed an extrapolation methodology to do just that.
We are interested in the number $S(n+m^*)$ of species discovered if $m^*$ more fuzz inputs were generated: \begin{align} \hat S(n + m^*) = S(n) + \hat f_0 \left[1-\left(1-\frac{f_1}{n\hat f_0 + f_1}\right)^{m^*}\right] \end{align} * where $\hat f_0=\hat S - S(n)$ is an estimate of the number $f_0$ of undiscovered species, and * where $f_1$ is the number of singleton species, i.e., those we have observed exactly once. The number $f_1$ of singletons, we can just keep track of during the fuzzing campaign itself. The estimate of the number $\hat f_0$ of undiscovered species, we can simply derive using Chao's estimate $\hat S$ and the number of observed species $S(n)$. Let's see how Chao's extrapolator performs by comparing the predicted number of species to the empirical number of species. ``` prediction_ts = [None] * time f0 = hat_S - Sn for m in range(trials - time): assert (time * f0 + f1) != 0 , 'time:%s f0:%s f1:%s' % (time, f0,f1) prediction_ts.append(Sn + f0 * (1 - (1 - f1 / (time * f0 + f1)) ** m)) plt.figure(num=None, figsize=(12, 3), dpi=80, facecolor='w', edgecolor='k') plt.subplot(1, 3, 1) plt.plot(trace_ts, color='white') plt.plot(trace_ts[:time]) plt.xticks(range(0, trials + 1, int(time))) plt.xlabel('# of fuzz inputs') plt.ylabel('# of traces exercised') plt.subplot(1, 3, 2) line_cur, = plt.plot(trace_ts[:time], label="Ongoing fuzzing campaign") line_pred, = plt.plot(prediction_ts, linestyle='--', color='black', label="Predicted progress") plt.legend(handles=[line_cur, line_pred]) plt.xticks(range(0, trials + 1, int(time))) plt.xlabel('# of fuzz inputs') plt.ylabel('# of traces exercised') plt.subplot(1, 3, 3) line_emp, = plt.plot(trace_ts, color='grey', label="Actual progress") line_cur, = plt.plot(trace_ts[:time], label="Ongoing fuzzing campaign") line_pred, = plt.plot(prediction_ts, linestyle='--', color='black', label="Predicted progress") plt.legend(handles=[line_emp, line_cur, line_pred]) plt.xticks(range(0, trials + 1, int(time))) plt.xlabel('# of 
fuzz inputs')
plt.ylabel('# of traces exercised');
```

The prediction from Chao's extrapolator looks quite accurate. We make the prediction at `time = trials/2`. Despite extrapolating to twice as many inputs (i.e., to `trials`), the predicted value (black, dashed line) closely matches the empirical value (grey, solid line).

***Try it***. Again, try setting `trials` to 1 million and `time` to `int(trials / 4)`.

## Lessons Learned

* One can measure the _progress_ of a fuzzing campaign (as species over time, i.e., $S(n)$).
* One can measure the _effectiveness_ of a fuzzing campaign (as the asymptotic total number of species $S$).
* One can estimate the _effectiveness_ of a fuzzing campaign using the Chao1 estimator $\hat S$.
* One can extrapolate the _progress_ of a fuzzing campaign, $\hat S(n+m^*)$.
* One can estimate the _residual risk_ (i.e., the probability that a bug exists that has not been found) using the Good-Turing estimator $GT$ of the species discovery probability.

## Next Steps

This chapter is the last in the book! If you want to continue reading, have a look at the [Appendices](99_Appendices.ipynb). Otherwise, _make use of what you have learned and go and create great fuzzers and test generators!_

## Background

* A **statistical framework for fuzzing**, inspired from ecology. Marcel Böhme. [STADS: Software Testing as Species Discovery](https://mboehme.github.io/paper/TOSEM18.pdf). ACM TOSEM 27(2):1--52.
* Estimating the **discovery probability**: I.J. Good. 1953. [The population frequencies of species and the estimation of population parameters](https://www.jstor.org/stable/2333344). Biometrika 40:237–264.
* Estimating the **asymptotic total number of species** when each input can belong to exactly one species: Anne Chao. 1984. [Nonparametric estimation of the number of classes in a population](https://www.jstor.org/stable/4615964).
Scandinavian Journal of Statistics 11:265–270 * Estimating the **asymptotic total number of species** when each input can belong to one or more species: Anne Chao. 1987. [Estimating the population size for capture-recapture data with unequal catchability](https://www.jstor.org/stable/2531532). Biometrics 43:783–791 * **Extrapolating** the number of discovered species: Tsung-Jen Shen, Anne Chao, and Chih-Feng Lin. 2003. [Predicting the Number of New Species in Further Taxonomic Sampling](http://chao.stat.nthu.edu.tw/wordpress/paper/2003_Ecology_84_P798.pdf). Ecology 84, 3 (2003), 798–804. ## Exercises I.J. Good and Alan Turing developed an estimator for the case where each input belongs to exactly one species. For instance, each input yields exactly one execution trace (see function [`getTraceHash`](#Trace-Coverage)). However, this is not true in general. For instance, each input exercises multiple statements and branches in the source code. Generally, each input can belong to one *or more* species. In this extended model, the underlying statistics are quite different. Yet, all estimators that we have discussed in this chapter turn out to be almost identical to those for the simple, single-species model. For instance, the Good-Turing estimator $C$ is defined as $$C=\frac{Q_1}{n}$$ where $Q_1$ is the number of singleton species and $n$ is the number of generated test cases. Throughout the fuzzing campaign, we record for each species the *incidence frequency*, i.e., the number of inputs that belong to that species. Again, we define a species $i$ as *singleton species* if we have seen exactly one input that belongs to species $i$. ### Exercise 1: Estimate and Evaluate the Discovery Probability for Statement Coverage In this exercise, we create a Good-Turing estimator for the simple fuzzer. 
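To make the definitions above concrete, here is a minimal sketch (with hypothetical species labels and a helper name of our choosing) that derives the number of singletons $Q_1$ and the Good-Turing estimate $C = Q_1/n$ from a list of per-input species observations:

```python
from collections import Counter

def good_turing(species_per_input):
    """Estimate the discovery probability as C = Q1 / n."""
    n = len(species_per_input)
    incidence = Counter(species_per_input)  # incidence frequency per species
    Q1 = sum(1 for freq in incidence.values() if freq == 1)  # singletons
    return Q1 / n

# Four inputs; species "B" and "C" are singletons, so C = 2/4
print(good_turing(["A", "A", "B", "C"]))  # → 0.5
```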
#### Part 1: Population Coverage

Implement a function `population_stmt_coverage()` as in [the section on estimating discovery probability](#Estimating-the-Discovery-Probability) that monitors the number of singletons and doubletons over time, i.e., as the number $i$ of test inputs increases.

```
from Coverage import population_coverage, Coverage

...
```

**Solution.** Here we go:

```
def population_stmt_coverage(population, function):
    cumulative_coverage = []
    all_coverage = set()
    cumulative_singletons = []
    cumulative_doubletons = []
    singletons = set()
    doubletons = set()

    for s in population:
        with Coverage() as cov:
            try:
                function(s)
            except BaseException:
                pass
        cur_coverage = cov.coverage()

        # singletons and doubletons
        doubletons -= cur_coverage
        doubletons |= singletons & cur_coverage
        singletons -= cur_coverage
        singletons |= cur_coverage - (cur_coverage & all_coverage)
        cumulative_singletons.append(len(singletons))
        cumulative_doubletons.append(len(doubletons))

        # all and cumulative coverage
        all_coverage |= cur_coverage
        cumulative_coverage.append(len(all_coverage))

    return all_coverage, cumulative_coverage, cumulative_singletons, cumulative_doubletons
```

#### Part 2: Population

Use the random `fuzzer(min_length=1, max_length=1000, char_start=0, char_range=255)` from [the chapter on Fuzzers](Fuzzer.ipynb) to generate a population of $n=10000$ fuzz inputs.

```
from Fuzzer import RandomFuzzer
from html.parser import HTMLParser

...
```

**Solution.** This is fairly straightforward:

```
trials = 2000  # increase to 10000 for better convergence. Will take a while..
```

We create a wrapper function...

```
def my_parser(inp):
    parser = HTMLParser()  # resets the HTMLParser object for every fuzz input
    parser.feed(inp)
```

...
and a random fuzzer: ``` fuzzer = RandomFuzzer(min_length=1, max_length=1000, char_start=0, char_range=255) ``` We fill the population: ``` population = [] for i in range(trials): population.append(fuzzer.fuzz()) ``` #### Part 3: Estimating Probabilities Execute the generated inputs on the Python HTML parser (`from html.parser import HTMLParser`) and estimate the probability that the next input covers a previously uncovered statement (i.e., the discovery probability) using the Good-Turing estimator. **Solution.** Here we go: ``` measurements = 100 # experiment measurements step = int(trials / measurements) gt_timeseries = [] singleton_timeseries = population_stmt_coverage(population, my_parser)[2] for i in range(1, trials + 1, step): gt_timeseries.append(singleton_timeseries[i - 1] / i) ``` #### Part 4: Empirical Evaluation Empirically evaluate the accuracy of the Good-Turing estimator (using $10000$ repetitions) of the probability to cover new statements using the experimental procedure at the end of [the section on estimating discovery probability](#Estimating-the-Discovery-Probability). **Solution.** This is as above: ``` # increase to 10000 for better precision (less variance). Will take a while.. 
repeats = 100

emp_timeseries = []
all_coverage = set()

for i in range(0, trials, step):
    if i - step >= 0:
        for j in range(step):
            inp = population[i - j]
            with Coverage() as cov:
                try:
                    my_parser(inp)
                except BaseException:
                    pass
            all_coverage |= cov.coverage()

    discoveries = 0
    for _ in range(repeats):
        inp = fuzzer.fuzz()
        with Coverage() as cov:
            try:
                my_parser(inp)
            except BaseException:
                pass

        # If the set difference is not empty, a new stmt was (dis)covered
        if cov.coverage() - all_coverage:
            discoveries += 1

    emp_timeseries.append(discoveries / repeats)

%matplotlib inline
import matplotlib.pyplot as plt

line_emp, = plt.semilogy(emp_timeseries, label="Empirical")
line_gt, = plt.semilogy(gt_timeseries, label="Good-Turing")
plt.legend(handles=[line_emp, line_gt])
plt.xticks(range(0, measurements + 1, int(measurements / 5)),
           range(0, trials + 1, int(trials / 5)))
plt.xlabel('# of fuzz inputs')
plt.ylabel('discovery probability')
plt.title('Discovery Probability Over Time');
```

### Exercise 2: Extrapolate and Evaluate Statement Coverage

In this exercise, we use Chao's extrapolation method to estimate the success of fuzzing.

#### Part 1: Create Population

Use the random `fuzzer(min_length=1, max_length=1000, char_start=0, char_range=255)` to generate a population of $n=400000$ fuzz inputs.

**Solution.** Here we go:

```
trials = 400  # Use 400000 for the actual solution. This takes a while!
population = []
for i in range(trials):
    population.append(fuzzer.fuzz())

_, stmt_ts, Q1_ts, Q2_ts = population_stmt_coverage(population, my_parser)
```

#### Part 2: Compute Estimate

Compute an estimate of the total number of statements $\hat S$ after $n/4=100000$ fuzz inputs were generated.
In the extended model, $\hat S$ is computed as

\begin{align}
\hat S_\text{Chao1} = \begin{cases}
S(n) + \frac{Q_1^2}{2Q_2} & \text{if $Q_2>0$}\\
S(n) + \frac{Q_1(Q_1-1)}{2} & \text{otherwise}
\end{cases}
\end{align}

* where $Q_1$ and $Q_2$ are the numbers of singleton and doubleton statements, respectively (i.e., statements that have been exercised by exactly one or two fuzz inputs, resp.), and
* where $S(n)$ is the number of statements that have been (dis)covered after generating $n$ fuzz inputs.

**Solution.** Here we go:

```
time = int(trials / 4)
Q1 = Q1_ts[time]
Q2 = Q2_ts[time]
Sn = stmt_ts[time]

if Q2 > 0:
    hat_S = Sn + Q1 * Q1 / (2 * Q2)
else:
    hat_S = Sn + Q1 * (Q1 - 1) / 2

print("After executing %d fuzz inputs, we have covered %d **(%.1f %%)** statements.\n" % (time, Sn, 100 * Sn / hat_S) +
      "After executing %d fuzz inputs, we estimate there are %d statements in total.\n" % (time, hat_S) +
      "After executing %d fuzz inputs, we have covered %d statements." % (trials, stmt_ts[trials - 1]))
```

#### Part 3: Compute and Evaluate Extrapolator

Compute and evaluate Chao's extrapolator by comparing the predicted number of statements to the empirical number of statements.
**Solution.** Here's our solution: ``` prediction_ts = [None] * time Q0 = hat_S - Sn for m in range(trials - time): prediction_ts.append(Sn + Q0 * (1 - (1 - Q1 / (time * Q0 + Q1)) ** m)) plt.figure(num=None, figsize=(12, 3), dpi=80, facecolor='w', edgecolor='k') plt.subplot(1, 3, 1) plt.plot(stmt_ts, color='white') plt.plot(stmt_ts[:time]) plt.xticks(range(0, trials + 1, int(time))) plt.xlabel('# of fuzz inputs') plt.ylabel('# of statements exercised') plt.subplot(1, 3, 2) line_cur, = plt.plot(stmt_ts[:time], label="Ongoing fuzzing campaign") line_pred, = plt.plot(prediction_ts, linestyle='--', color='black', label="Predicted progress") plt.legend(handles=[line_cur, line_pred]) plt.xticks(range(0, trials + 1, int(time))) plt.xlabel('# of fuzz inputs') plt.ylabel('# of statements exercised') plt.subplot(1, 3, 3) line_emp, = plt.plot(stmt_ts, color='grey', label="Actual progress") line_cur, = plt.plot(stmt_ts[:time], label="Ongoing fuzzing campaign") line_pred, = plt.plot(prediction_ts, linestyle='--', color='black', label="Predicted progress") plt.legend(handles=[line_emp, line_cur, line_pred]) plt.xticks(range(0, trials + 1, int(time))) plt.xlabel('# of fuzz inputs') plt.ylabel('# of statements exercised'); ```
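As a quick sanity check on the extrapolation formula used above: at $m^*=0$ the bracketed term vanishes, so the prediction reduces to the number of species already observed. A small sketch (the helper name is ours, the counts are hypothetical):

```python
def chao_extrapolate(Sn, f0, f1, n, m):
    """Predicted number of species after n + m inputs (Chao et al., 2003).

    Sn: species observed after n inputs; f0: estimated number of
    undiscovered species; f1: singleton species; m: additional inputs.
    """
    return Sn + f0 * (1 - (1 - f1 / (n * f0 + f1)) ** m)

# At m = 0, no new discoveries are predicted:
print(chao_extrapolate(100, 45, 10, 1000, 0))  # → 100.0
```

As $m$ grows, the prediction approaches the asymptote $S(n) + \hat f_0 = \hat S$, consistent with Chao's estimate.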
# Cavity flow with Navier-Stokes

The final two steps will both solve the Navier–Stokes equations in two dimensions, but with different boundary conditions. The momentum equation in vector form for a velocity field $\overrightarrow{v}$ is:

$$ \frac{\partial \overrightarrow{v}}{\partial t} + (\overrightarrow{v} \cdot \nabla ) \overrightarrow{v} = -\frac{1}{\rho}\nabla p + \nu \nabla^2 \overrightarrow{v}$$

This represents three scalar equations, one for each velocity component $(u,v,w)$. But we will solve it in two dimensions, so there will be two scalar equations.

Remember the continuity equation? This is where the Poisson equation for pressure comes in! Here is the system of differential equations: two equations for the velocity components $u,v$ and one equation for pressure:

$$ \frac{\partial u}{\partial t} + u \frac{\partial u}{\partial x} + v \frac{\partial u}{\partial y}= -\frac{1}{\rho}\frac{\partial p}{\partial x} + \nu \left[ \frac{\partial^2 u}{\partial x^2} +\frac{\partial^2 u}{\partial y^2} \right] $$

$$ \frac{\partial v}{\partial t} + u \frac{\partial v}{\partial x} + v \frac{\partial v}{\partial y}= -\frac{1}{\rho}\frac{\partial p}{\partial y} + \nu \left[ \frac{\partial^2 v}{\partial x^2} +\frac{\partial^2 v}{\partial y^2} \right] $$

$$ \frac{\partial^2 p}{\partial x^2} +\frac{\partial^2 p}{\partial y^2} = \rho \left[\frac{\partial}{\partial t} \left(\frac{\partial u}{\partial x} + \frac{\partial v}{\partial y} \right) - \left(\frac{\partial u}{\partial x}\frac{\partial u}{\partial x}+2\frac{\partial u}{\partial y}\frac{\partial v}{\partial x}+\frac{\partial v}{\partial y}\frac{\partial v}{\partial y} \right) \right] $$

From the previous steps, we already know how to discretize all these terms. Only the last equation is a little unfamiliar. But with a little patience, it will not be hard!
Our stencils look like this:

First the momentum equation in the $u$ direction

$$
\begin{split}
u_{i,j}^{n+1} = u_{i,j}^{n} & - u_{i,j}^{n} \frac{\Delta t}{\Delta x} \left(u_{i,j}^{n}-u_{i-1,j}^{n}\right) - v_{i,j}^{n} \frac{\Delta t}{\Delta y} \left(u_{i,j}^{n}-u_{i,j-1}^{n}\right) \\
& - \frac{\Delta t}{\rho 2\Delta x} \left(p_{i+1,j}^{n}-p_{i-1,j}^{n}\right) \\
& + \nu \left(\frac{\Delta t}{\Delta x^2} \left(u_{i+1,j}^{n}-2u_{i,j}^{n}+u_{i-1,j}^{n}\right) + \frac{\Delta t}{\Delta y^2} \left(u_{i,j+1}^{n}-2u_{i,j}^{n}+u_{i,j-1}^{n}\right)\right)
\end{split}
$$

Second the momentum equation in the $v$ direction

$$
\begin{split}
v_{i,j}^{n+1} = v_{i,j}^{n} & - u_{i,j}^{n} \frac{\Delta t}{\Delta x} \left(v_{i,j}^{n}-v_{i-1,j}^{n}\right) - v_{i,j}^{n} \frac{\Delta t}{\Delta y} \left(v_{i,j}^{n}-v_{i,j-1}^{n}\right) \\
& - \frac{\Delta t}{\rho 2\Delta y} \left(p_{i,j+1}^{n}-p_{i,j-1}^{n}\right) \\
& + \nu \left(\frac{\Delta t}{\Delta x^2} \left(v_{i+1,j}^{n}-2v_{i,j}^{n}+v_{i-1,j}^{n}\right) + \frac{\Delta t}{\Delta y^2} \left(v_{i,j+1}^{n}-2v_{i,j}^{n}+v_{i,j-1}^{n}\right)\right)
\end{split}
$$

Finally the pressure-Poisson equation

$$\begin{split}
p_{i,j}^{n} = & \frac{\left(p_{i+1,j}^{n}+p_{i-1,j}^{n}\right) \Delta y^2 + \left(p_{i,j+1}^{n}+p_{i,j-1}^{n}\right) \Delta x^2}{2\left(\Delta x^2+\Delta y^2\right)} \\
& -\frac{\rho\Delta x^2\Delta y^2}{2\left(\Delta x^2+\Delta y^2\right)} \\
& \times \left[\frac{1}{\Delta t}\left(\frac{u_{i+1,j}-u_{i-1,j}}{2\Delta x}+\frac{v_{i,j+1}-v_{i,j-1}}{2\Delta y}\right)-\frac{u_{i+1,j}-u_{i-1,j}}{2\Delta x}\frac{u_{i+1,j}-u_{i-1,j}}{2\Delta x}\right. \\
& \left.
-2\frac{u_{i,j+1}-u_{i,j-1}}{2\Delta y}\frac{v_{i+1,j}-v_{i-1,j}}{2\Delta x}-\frac{v_{i,j+1}-v_{i,j-1}}{2\Delta y}\frac{v_{i,j+1}-v_{i,j-1}}{2\Delta y} \right] \end{split} $$ The initial condition is $u,v,p=0$ everywhere, and the boundary conditions are: $u=1$ at $y=1$ (the "lid"); $u,v=0$ on the other boundaries; $\frac{\partial p}{\partial y}=0$ at $y=0,1$; $\frac{\partial p}{\partial x}=0$ at $x=0,1$ $p=0$ at $(0,0)$ Interestingly these boundary conditions describe a well known problem in the Computational Fluid Dynamics realm, where it is known as the lid driven square cavity flow problem. ## Numpy Implementation ``` import numpy as np from matplotlib import pyplot, cm %matplotlib inline nx = 41 ny = 41 nt = 1000 nit = 50 c = 1 dx = 1. / (nx - 1) dy = 1. / (ny - 1) x = np.linspace(0, 1, nx) y = np.linspace(0, 1, ny) Y, X = np.meshgrid(x, y) rho = 1 nu = .1 dt = .001 u = np.zeros((nx, ny)) v = np.zeros((nx, ny)) p = np.zeros((nx, ny)) ``` The pressure Poisson equation that's written above can be hard to write out without typos. The function `build_up_b` below represents the contents of the square brackets, so that the entirety of the Poisson pressure equation is slightly more manageable. ``` def build_up_b(b, rho, dt, u, v, dx, dy): b[1:-1, 1:-1] = (rho * (1 / dt * ((u[2:, 1:-1] - u[0:-2, 1:-1]) / (2 * dx) + (v[1:-1, 2:] - v[1:-1, 0:-2]) / (2 * dy)) - ((u[2:, 1:-1] - u[0:-2, 1:-1]) / (2 * dx))**2 - 2 * ((u[1:-1, 2:] - u[1:-1, 0:-2]) / (2 * dy) * (v[2:, 1:-1] - v[0:-2, 1:-1]) / (2 * dx))- ((v[1:-1, 2:] - v[1:-1, 0:-2]) / (2 * dy))**2)) return b ``` The function `pressure_poisson` is also defined to help segregate the different rounds of calculations. Note the presence of the pseudo-time variable nit. This sub-iteration in the Poisson calculation helps ensure a divergence-free field. 
```
def pressure_poisson(p, dx, dy, b):
    pn = np.empty_like(p)
    pn = p.copy()

    for q in range(nit):
        pn = p.copy()
        p[1:-1, 1:-1] = (((pn[2:, 1:-1] + pn[0:-2, 1:-1]) * dy**2 +
                          (pn[1:-1, 2:] + pn[1:-1, 0:-2]) * dx**2) /
                         (2 * (dx**2 + dy**2)) -
                         dx**2 * dy**2 / (2 * (dx**2 + dy**2)) * b[1:-1, 1:-1])

        p[-1, :] = p[-2, :]  # dp/dx = 0 at x = 1
        p[:, 0] = p[:, 1]    # dp/dy = 0 at y = 0
        p[0, :] = p[1, :]    # dp/dx = 0 at x = 0
        p[:, -1] = p[:, -2]  # dp/dy = 0 at y = 1
        p[0, 0] = 0          # p = 0 at (0, 0)

    return p, pn
```

Finally, the rest of the cavity flow equations are wrapped inside the function `cavity_flow`, allowing us to easily plot the results of the cavity flow solver for different lengths of time.

```
def cavity_flow(nt, u, v, dt, dx, dy, p, rho, nu):
    un = np.empty_like(u)
    vn = np.empty_like(v)
    b = np.zeros((nx, ny))

    for n in range(nt):
        un = u.copy()
        vn = v.copy()

        b = build_up_b(b, rho, dt, u, v, dx, dy)
        p, pn = pressure_poisson(p, dx, dy, b)

        u[1:-1, 1:-1] = (un[1:-1, 1:-1] -
                         un[1:-1, 1:-1] * dt / dx * (un[1:-1, 1:-1] - un[0:-2, 1:-1]) -
                         vn[1:-1, 1:-1] * dt / dy * (un[1:-1, 1:-1] - un[1:-1, 0:-2]) -
                         dt / (2 * rho * dx) * (p[2:, 1:-1] - p[0:-2, 1:-1]) +
                         nu * (dt / dx**2 * (un[2:, 1:-1] - 2 * un[1:-1, 1:-1] + un[0:-2, 1:-1]) +
                               dt / dy**2 * (un[1:-1, 2:] - 2 * un[1:-1, 1:-1] + un[1:-1, 0:-2])))

        v[1:-1, 1:-1] = (vn[1:-1, 1:-1] -
                         un[1:-1, 1:-1] * dt / dx * (vn[1:-1, 1:-1] - vn[0:-2, 1:-1]) -
                         vn[1:-1, 1:-1] * dt / dy * (vn[1:-1, 1:-1] - vn[1:-1, 0:-2]) -
                         dt / (2 * rho * dy) * (p[1:-1, 2:] - p[1:-1, 0:-2]) +
                         nu * (dt / dx**2 * (vn[2:, 1:-1] - 2 * vn[1:-1, 1:-1] + vn[0:-2, 1:-1]) +
                               dt / dy**2 * (vn[1:-1, 2:] - 2 * vn[1:-1, 1:-1] + vn[1:-1, 0:-2])))

        u[:, 0] = 0
        u[0, :] = 0
        u[-1, :] = 0
        u[:, -1] = 1  # set velocity on cavity lid equal to 1
        v[:, 0] = 0
        v[:, -1] = 0
        v[0, :] = 0
        v[-1, :] = 0

    return u, v, p, pn

#NBVAL_IGNORE_OUTPUT
u = np.zeros((nx, ny))
v = np.zeros((nx, ny))
p = np.zeros((nx, ny))
b = np.zeros((nx, ny))
nt = 1000
# Store the output velocity and pressure fields in
the variables a, b and c. # This is so they do not clash with the devito outputs below. a, b, c, d = cavity_flow(nt, u, v, dt, dx, dy, p, rho, nu) fig = pyplot.figure(figsize=(11, 7), dpi=100) pyplot.contourf(X, Y, c, alpha=0.5, cmap=cm.viridis) pyplot.colorbar() pyplot.contour(X, Y, c, cmap=cm.viridis) pyplot.quiver(X[::2, ::2], Y[::2, ::2], a[::2, ::2], b[::2, ::2]) pyplot.xlabel('X') pyplot.ylabel('Y'); ``` ### Validation Marchi et al (2009)$^1$ compared numerical implementations of the lid driven cavity problem with their solution on a 1024 x 1024 nodes grid. We will compare a solution using both NumPy and Devito with the results of their paper below. 1. https://www.scielo.br/scielo.php?pid=S1678-58782009000300004&script=sci_arttext ``` # Import u values at x=L/2 (table 6, column 2 rows 12-26) in Marchi et al. Marchi_Re10_u = np.array([[0.0625, -3.85425800e-2], [0.125, -6.96238561e-2], [0.1875, -9.6983962e-2], [0.25, -1.22721979e-1], [0.3125, -1.47636199e-1], [0.375, -1.71260757e-1], [0.4375, -1.91677043e-1], [0.5, -2.05164738e-1], [0.5625, -2.05770198e-1], [0.625, -1.84928116e-1], [0.6875, -1.313892353e-1], [0.75, -3.1879308e-2], [0.8125, 1.26912095e-1], [0.875, 3.54430364e-1], [0.9375, 6.50529292e-1]]) # Import v values at y=L/2 (table 6, column 2 rows 27-41) in Marchi et al. Marchi_Re10_v = np.array([[0.0625, 9.2970121e-2], [0.125, 1.52547843e-1], [0.1875, 1.78781456e-1], [0.25, 1.76415100e-1], [0.3125, 1.52055820e-1], [0.375, 1.121477612e-1], [0.4375, 6.21048147e-2], [0.5, 6.3603620e-3], [0.5625,-5.10417285e-2], [0.625, -1.056157259e-1], [0.6875,-1.51622101e-1], [0.75, -1.81633561e-1], [0.8125,-1.87021651e-1], [0.875, -1.59898186e-1], [0.9375,-9.6409942e-2]]) #NBVAL_IGNORE_OUTPUT # Check results with Marchi et al 2009. 
npgrid=[nx,ny] x_coord = np.linspace(0, 1, npgrid[0]) y_coord = np.linspace(0, 1, npgrid[1]) fig = pyplot.figure(figsize=(12, 6)) ax1 = fig.add_subplot(121) ax1.plot(a[int(npgrid[0]/2),:],y_coord[:]) ax1.plot(Marchi_Re10_u[:,1],Marchi_Re10_u[:,0],'ro') ax1.set_xlabel('$u$') ax1.set_ylabel('$y$') ax1 = fig.add_subplot(122) ax1.plot(x_coord[:],b[:,int(npgrid[1]/2)]) ax1.plot(Marchi_Re10_v[:,0],Marchi_Re10_v[:,1],'ro') ax1.set_xlabel('$x$') ax1.set_ylabel('$v$') pyplot.show() ``` ## Devito Implementation ``` from devito import Grid grid = Grid(shape=(nx, ny), extent=(1., 1.)) x, y = grid.dimensions t = grid.stepping_dim ``` Reminder: here are our equations $$ \frac{\partial u}{\partial t} + u \frac{\partial u}{\partial x} + v \frac{\partial u}{\partial y}= -\frac{1}{\rho}\frac{\partial p}{\partial x} + \nu \left[ \frac{\partial^2 u}{\partial x^2} +\frac{\partial^2 u}{\partial y^2} \right] $$ $$ \frac{\partial v}{\partial t} + u \frac{\partial v}{\partial x} + v \frac{\partial v}{\partial y}= -\frac{1}{\rho}\frac{\partial p}{\partial y} + \nu \left[ \frac{\partial^2 v}{\partial x^2} +\frac{\partial^2 v}{\partial y^2} \right] $$ $$ \frac{\partial^2 p}{\partial x^2} +\frac{\partial^2 p}{\partial y^2} = \rho \left[\frac{\partial}{\partial t} \left(\frac{\partial u}{\partial x} + \frac{\partial v}{\partial y} \right) - \left(\frac{\partial u}{\partial x}\frac{\partial u}{\partial x}+2\frac{\partial u}{\partial y}\frac{\partial v}{\partial x}+\frac{\partial v}{\partial y}\frac{\partial v}{\partial y} \right) \right] $$ Note that p has no time dependence, so we are going to solve for p in pseudotime then move to the next time step and solve for u and v. This will require two operators, one for p (using p and pn) in pseudotime and one for u and v in time. As shown in the Poisson equation tutorial, a TimeFunction can be used despite the lack of a time-dependence. 
This will cause Devito to allocate two grid buffers, which we can address directly via the terms `p` and `p.forward`. The internal time loop can be controlled by supplying the number of pseudotime steps (iterations) as a `time` argument to the operator. The time steps are advanced through a Python loop, where a separate operator calculates `u` and `v`.

Also note that we need to use first-order spatial derivatives for the velocities, and these derivatives are not of the maximum spatial derivative order (2nd order) appearing in these equations. This is the first time we have seen this in this tutorial series (previously we have only used a single spatial derivative order). To take a first-order derivative of a Devito function, we use the syntax `function.dxc` or `function.dyc` for the x and y derivatives respectively.

```
from devito import TimeFunction, Function, \
    Eq, solve, Operator, configuration

# Build required functions and derivatives:
# --------------------------------------
# |Variable | Required Derivatives     |
# --------------------------------------
# | u       | dt, dx, dy, dx**2, dy**2 |
# | v       | dt, dx, dy, dx**2, dy**2 |
# | p       | dx, dy, dx**2, dy**2     |
# | pn      | dx, dy, dx**2, dy**2     |
# --------------------------------------

u = TimeFunction(name='u', grid=grid, space_order=2)
v = TimeFunction(name='v', grid=grid, space_order=2)
p = TimeFunction(name='p', grid=grid, space_order=2)
# Variables are automatically initialized at 0.

# First-order derivatives will be handled with p.dxc
eq_u = Eq(u.dt + u*u.dx + v*u.dy,
          -1./rho * p.dxc + nu*u.laplace, subdomain=grid.interior)
eq_v = Eq(v.dt + u*v.dx + v*v.dy,
          -1./rho * p.dyc + nu*v.laplace, subdomain=grid.interior)
eq_p = Eq(p.laplace,
          rho*(1./dt*(u.dxc + v.dyc)
               - (u.dxc*u.dxc + 2*u.dyc*v.dxc + v.dyc*v.dyc)),
          subdomain=grid.interior)

# NOTE: Pressure has no time dependence so we solve for the other pressure buffer.
stencil_u = solve(eq_u, u.forward)
stencil_v = solve(eq_v, v.forward)
stencil_p = solve(eq_p, p)

update_u = Eq(u.forward, stencil_u)
update_v = Eq(v.forward, stencil_v)
update_p = Eq(p.forward, stencil_p)

# Boundary conditions: u = v = 0 on all sides ...
bc_u = [Eq(u[t+1, 0, y], 0)]
bc_u += [Eq(u[t+1, nx-1, y], 0)]
bc_u += [Eq(u[t+1, x, 0], 0)]
bc_u += [Eq(u[t+1, x, ny-1], 1)]  # ... except u = 1 on the lid (y = 2)
bc_v = [Eq(v[t+1, 0, y], 0)]
bc_v += [Eq(v[t+1, nx-1, y], 0)]
bc_v += [Eq(v[t+1, x, ny-1], 0)]
bc_v += [Eq(v[t+1, x, 0], 0)]
bc_p = [Eq(p[t+1, 0, y], p[t+1, 1, y])]         # dpn/dx = 0 at x = 0
bc_p += [Eq(p[t+1, nx-1, y], p[t+1, nx-2, y])]  # dpn/dx = 0 at x = 2
bc_p += [Eq(p[t+1, x, 0], p[t+1, x, 1])]        # dpn/dy = 0 at y = 0
bc_p += [Eq(p[t+1, x, ny-1], p[t+1, x, ny-2])]  # dpn/dy = 0 at y = 2
bc_p += [Eq(p[t+1, 0, 0], 0)]                   # pin the pressure at one corner

optime = Operator([update_u, update_v] + bc_u + bc_v)
oppres = Operator([update_p] + bc_p)

# Silence non-essential outputs from the solver.
configuration['log-level'] = 'ERROR'

# This is the time loop.
for step in range(nt):
    if step > 0:
        oppres(time_M=nit)
    optime(time_m=step, time_M=step, dt=dt)

#NBVAL_IGNORE_OUTPUT
fig = pyplot.figure(figsize=(11, 7), dpi=100)
# Plotting the pressure field as a contour.
pyplot.contourf(X, Y, p.data[0], alpha=0.5, cmap=cm.viridis)
pyplot.colorbar()
# Plotting the pressure field outlines.
pyplot.contour(X, Y, p.data[0], cmap=cm.viridis)
# Plotting velocity field.
pyplot.quiver(X[::2, ::2], Y[::2, ::2], u.data[0, ::2, ::2], v.data[0, ::2, ::2])
pyplot.xlabel('X')
pyplot.ylabel('Y');
```

### Validation

```
#NBVAL_IGNORE_OUTPUT
# Again, check results with Marchi et al 2009.
fig = pyplot.figure(figsize=(12, 6))
ax1 = fig.add_subplot(121)
ax1.plot(u.data[0,int(grid.shape[0]/2),:],y_coord[:])
ax1.plot(Marchi_Re10_u[:,1],Marchi_Re10_u[:,0],'ro')
ax1.set_xlabel('$u$')
ax1.set_ylabel('$y$')
ax1 = fig.add_subplot(122)
ax1.plot(x_coord[:],v.data[0,:,int(grid.shape[0]/2)])
ax1.plot(Marchi_Re10_v[:,0],Marchi_Re10_v[:,1],'ro')
ax1.set_xlabel('$x$')
ax1.set_ylabel('$v$')
pyplot.show()
```

The Devito implementation produces results consistent with the benchmark solution. There is a small disparity in a few of the velocity values, but this is expected, as the Devito 41 x 41 node grid is much coarser than the benchmark's 1024 x 1024 node grid.

## Comparison

```
#NBVAL_IGNORE_OUTPUT
fig = pyplot.figure(figsize=(12, 6))
ax1 = fig.add_subplot(121)
ax1.plot(a[int(npgrid[0]/2),:],y_coord[:])
ax1.plot(u.data[0,int(grid.shape[0]/2),:],y_coord[:],'--')
ax1.plot(Marchi_Re10_u[:,1],Marchi_Re10_u[:,0],'ro')
ax1.set_xlabel('$u$')
ax1.set_ylabel('$y$')
ax1 = fig.add_subplot(122)
ax1.plot(x_coord[:],b[:,int(npgrid[1]/2)])
ax1.plot(x_coord[:],v.data[0,:,int(grid.shape[0]/2)],'--')
ax1.plot(Marchi_Re10_v[:,0],Marchi_Re10_v[:,1],'ro')
ax1.set_xlabel('$x$')
ax1.set_ylabel('$v$')
ax1.legend(['numpy','devito','Marchi (2009)'])
pyplot.show()

# Pressure norm check
tol = 1e-3
assert np.sum((c[:,:]-d[:,:])**2/ np.maximum(d[:,:]**2,1e-10)) < tol
assert np.sum((p.data[0]-p.data[1])**2/np.maximum(p.data[0]**2,1e-10)) < tol
```

Overlaying all the graphs shows how the Devito, NumPy and Marchi et al (2009)$^1$ solutions compare with each other. A final accuracy check asserts that the normalized squared difference between the pressure fields stays below a specified tolerance.
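Beyond the visual overlay, one could quantify agreement with a benchmark by interpolating the computed centerline profile onto the benchmark's sample coordinates and measuring the deviation. A minimal NumPy sketch; the profile and benchmark values here are invented placeholders, not the solver output or Marchi's data:

```python
import numpy as np

# Placeholder centerline profile on a 41-point grid (not a real solution).
y_pts = np.linspace(0, 1, 41)
u_profile = np.sin(np.pi * y_pts)

# Invented benchmark samples: column 0 is the coordinate, column 1 the value,
# mimicking the layout of the Marchi et al. tables above.
benchmark = np.array([[0.25, 0.70], [0.5, 1.0], [0.75, 0.70]])

# Interpolate the computed profile onto the benchmark coordinates,
# then take the worst-case absolute deviation as an error measure.
u_interp = np.interp(benchmark[:, 0], y_pts, u_profile)
max_abs_err = np.max(np.abs(u_interp - benchmark[:, 1]))
```

The same pattern applies to either the NumPy or the Devito centerline arrays plotted above.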
github_jupyter
```
import numpy as np
import pandas as pd
import os
import matplotlib.pyplot as plt
from TutorML.decomposition import LFM


def load_movielens(train_path, test_path, basedir=None):
    if basedir:
        train_path = os.path.join(basedir, train_path)
        test_path = os.path.join(basedir, test_path)
    col_names = ['user_id', 'item_id', 'score', 'timestamp']
    use_cols = ['user_id', 'item_id', 'score']
    df_train = pd.read_csv(train_path, sep='\t', header=None,
                           names=col_names, usecols=use_cols)
    df_test = pd.read_csv(test_path, sep='\t', header=None,
                          names=col_names, usecols=use_cols)
    # Shift ids from 1-based to 0-based
    df_train.user_id -= 1
    df_train.item_id -= 1
    df_test.user_id -= 1
    df_test.item_id -= 1
    return df_train, df_test


df_train, df_test = load_movielens(train_path='u1.base', test_path='u1.test',
                                   basedir='ml-100k/')
data = pd.concat([df_train, df_test]).reset_index().drop('index', axis=1)

n_users = data.user_id.nunique()
n_items = data.item_id.nunique()

train_idx = np.ravel_multi_index(df_train[['user_id', 'item_id']].values.T,
                                 dims=(n_users, n_items))
test_idx = np.ravel_multi_index(df_test[['user_id', 'item_id']].values.T,
                                dims=(n_users, n_items))

X = np.zeros(shape=(n_users*n_items,))
mask = np.zeros(shape=(n_users*n_items))
X[train_idx] = df_train['score']
mask[train_idx] = 1
y_test = df_test.score.values.ravel()

X = X.reshape((n_users, n_items))
mask = mask.reshape((n_users, n_items))

"""
If you want to increase the number of factors, you should lower the learning rate too,
otherwise nan or inf may appear """ lfm = LFM(n_factors=5,max_iter=1000,early_stopping=50,reg_lambda=2, learning_rate=1e-3,print_every=20) lfm.fit(X,mask,test_data=(test_idx,y_test)) rounded_prediction_mse = lfm.mse_history lfm = LFM(n_factors=2,max_iter=1000,early_stopping=50,reg_lambda=1, round_prediction=False, learning_rate=1e-3,print_every=20) lfm.fit(X,mask,test_data=(test_idx,y_test)) mse = lfm.mse_history def plot(xy, start_it, title): n_iters = xy.shape[0] plt.plot(range(start_it,n_iters), xy[start_it:,0],label='train mse') plt.plot(range(start_it,n_iters), xy[start_it:,1],label='test mse') plt.title(title) plt.legend() plt.xlabel('iter') plt.figure(figsize=(12,4)) plt.subplot(121) plot(rounded_prediction_mse, 20, 'Mse with prediction rounded') plt.subplot(122) plot(mse, 20, 'Mse with prediction unrounded') plt.show() ```
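The matrix-and-mask construction used above can be illustrated on a toy example (the triplets and predictions here are invented):

```python
import numpy as np

# Toy (user_id, item_id, score) triplets; ids are 0-based as in the notebook.
triplets = np.array([[0, 0, 5], [0, 2, 3], [1, 1, 4]])
n_users, n_items = 2, 3

# Flatten (user, item) pairs to positions in a 1-D array, as done with
# np.ravel_multi_index above, then reshape into the rating matrix and mask.
idx = np.ravel_multi_index(triplets[:, :2].T, dims=(n_users, n_items))
X = np.zeros(n_users * n_items)
mask = np.zeros(n_users * n_items)
X[idx] = triplets[:, 2]
mask[idx] = 1
X = X.reshape((n_users, n_items))
mask = mask.reshape((n_users, n_items))

# A masked MSE scores only the observed entries.
pred = np.full((n_users, n_items), 4.0)  # placeholder predictions
mse = np.sum(mask * (X - pred) ** 2) / mask.sum()
```

The mask is what lets the factorization ignore the unobserved (zero-filled) cells during training and evaluation.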
# Continuous Control --- You are welcome to use this coding environment to train your agent for the project. Follow the instructions below to get started! ### 1. Start the Environment Run the next code cell to install a few packages. This line will take a few minutes to run! ``` !pip -q install ./python ``` The environments corresponding to both versions of the environment are already saved in the Workspace and can be accessed at the file paths provided below. Please select one of the two options below for loading the environment. ``` from unityagents import UnityEnvironment import numpy as np # select this option to load version 1 (with a single agent) of the environment #env = UnityEnvironment(file_name='/data/Reacher_One_Linux_NoVis/Reacher_One_Linux_NoVis.x86_64') # select this option to load version 2 (with 20 agents) of the environment env = UnityEnvironment(file_name='/data/Reacher_Linux_NoVis/Reacher.x86_64') ``` Environments contain **_brains_** which are responsible for deciding the actions of their associated agents. Here we check for the first brain available, and set it as the default brain we will be controlling from Python. ``` # get the default brain brain_name = env.brain_names[0] brain = env.brains[brain_name] ``` ### 2. Examine the State and Action Spaces Run the code cell below to print some information about the environment. ``` # reset the environment env_info = env.reset(train_mode=True)[brain_name] # number of agents num_agents = len(env_info.agents) print('Number of agents:', num_agents) # size of each action action_size = brain.vector_action_space_size print('Size of each action:', action_size) # examine the state space states = env_info.vector_observations state_size = states.shape[1] print('There are {} agents. Each observes a state with length: {}'.format(states.shape[0], state_size)) print('The state for the first agent looks like:', states[0]) ``` ### 3. It's Your Turn! 
Now it's your turn to train your own agent to solve the environment!  A few **important notes**:

- When training the environment, set `train_mode=True`, so that the line for resetting the environment looks like the following:
```python
env_info = env.reset(train_mode=True)[brain_name]
```
- To structure your work, you're welcome to work directly in this Jupyter notebook, or you might like to start over with a new file!  You can see the list of files in the workspace by clicking on **_Jupyter_** in the top left corner of the notebook.
- In this coding environment, you will not be able to watch the agents while they are training.  However, **_after training the agents_**, you can download the saved model weights to watch the agents on your own machine!

```
env_info = env.reset(train_mode=True)[brain_name]
```

We need additional files to successfully create an agent, namely the model, the agent itself, and workspace utilities that prevent Udacity's workspace from going to sleep during long-running learning.

```
!curl https://raw.githubusercontent.com/VVKot/deep-reinforcement-learning/master/p2_continuous-control/ddpg_agent.py --output ddpg_agent.py
!curl https://raw.githubusercontent.com/VVKot/deep-reinforcement-learning/master/p2_continuous-control/model.py --output model.py
!curl https://raw.githubusercontent.com/VVKot/deep-reinforcement-learning/master/p2_continuous-control/workspace_utils.py --output workspace_utils.py
```

After that, we can create the actual agent. The agent is represented by two neural networks, the Actor and the Critic. Both are the same size: 4 layers with 512 neurons each. In my experience, a network with 3 layers was not sufficient to achieve the task. The inner layers of both networks use the leaky ReLU activation function; in my experiments, it helped the networks converge faster compared to ReLU/ELU.
Since the Actor has to produce values for actions, and we operate in a continuous space where each action has to be a value from -1 to 1, it has a tanh activation function at the last layer. The Critic, on the other hand, produces Q-values, which is why it has no activation function on the last layer at all. Both networks are trained using the Adam optimizer.

The choice of action is not greedy. Since we cannot use an epsilon-greedy policy in a continuous space, every action is perturbed slightly using Ornstein-Uhlenbeck noise. For learning, the agent uses a replay buffer, which samples from the experiences of all 20 agents. Learning is stabilized by using fixed Q-targets and soft updates.

Changes in hyperparameters that proved useful: decreasing the critic's learning rate (from 10^-3 to 10^-4) while increasing the soft update coefficient (from 10^-3 to 10^-2). This way the agent converged faster and in a more stable manner. Also, adjusting the theta parameter of the noise (from 0.15 to 0.7), i.e. increasing the noise, proved helpful for faster convergence.
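The Ornstein-Uhlenbeck noise mentioned above is mean-reverting, so exploration wanders around the chosen actions rather than jumping randomly at every step. A minimal NumPy sketch; the theta and sigma values are illustrative, not the tuned hyperparameters:

```python
import numpy as np

rng = np.random.default_rng(0)

def ou_step(x, mu=0.0, theta=0.15, sigma=0.2):
    """One Euler step of dx = theta*(mu - x) + sigma*dW: the first term
    pulls the noise back towards mu, the second adds a random kick."""
    return x + theta * (mu - x) + sigma * rng.standard_normal(x.shape)

noise = np.zeros(4)  # one noise value per action dimension
for _ in range(100):
    noise = ou_step(noise)
```

In DDPG this noise is added to the Actor's output before clipping the action to [-1, 1].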
```
from ddpg_agent import Agent

agent = Agent(state_size, action_size, 0)
```

Let's train an agent

```
from collections import deque
import torch
from workspace_utils import active_session

def ddpg(n_episodes=500, print_every=1):
    env_info = env.reset(train_mode=True)[brain_name]
    agent.reset()
    scores_deque = deque(maxlen=100)
    scores = []
    for i_episode in range(1, n_episodes+1):
        state = env_info.vector_observations
        score = np.zeros((num_agents,))
        num_step = 0
        while True:
            action = agent.act(state)                            # get next action
            env_info = env.step(action)[brain_name]              # perform an action
            next_state = env_info.vector_observations            # get next state
            reward = env_info.rewards                            # get received reward
            done = env_info.local_done                           # get information about episode termination
            agent.step(state, action, reward, next_state, done)  # memorise an experience and, possibly, learn
            state = next_state                                   # update the state
            score += np.array(reward)                            # record current score
            num_step += 1                                        # track number of steps
            if any(done):
                break
        scores_deque.append(np.mean(score))
        scores.append(np.mean(score))
        print('\rEpisode {}\tAverage Score: {:.2f}'.format(i_episode, np.mean(scores_deque)), end="")
        torch.save(agent.actor_local.state_dict(), 'checkpoint_actor.pth')
        torch.save(agent.critic_local.state_dict(), 'checkpoint_critic.pth')
        if i_episode % print_every == 0:
            print('\rEpisode {}\tAverage Score: {:.2f}\tmax_step:{}'.format(i_episode, np.mean(scores_deque), num_step))
        if np.mean(scores_deque) >= 30.0:
            print("Score is higher than 30.")
            break
    return scores

with active_session():  # used to avoid workspace restart due to long-running learning process
    scores = ddpg()
```

and visualize its performance:

```
import matplotlib.pyplot as plt
%matplotlib inline

plt.plot(scores, 'o-')
plt.grid()
plt.title('Reward Records')
plt.xlabel('Episode')
plt.ylabel('Reward')
plt.show()
```

Definitely, the most interesting challenge would be to train the network from the raw pixels.
As for the current network, there still seems to be potential in hyperparameter tuning that could lead to even faster convergence. Reproducibility is also not great: the initial seed significantly influences the learning process, and with seed 42 the agent is not able to converge, getting stuck around a score of 24. Additional techniques such as gradient clipping might be used to further improve the stability of the agent.
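The soft updates used for the target networks blend the local weights in a little at a time, target &larr; &tau;&middot;local + (1 &minus; &tau;)&middot;target. A toy NumPy sketch with placeholder weight arrays:

```python
import numpy as np

tau = 1e-2               # soft update coefficient, as discussed above
local = np.ones(3)       # stand-in for the local network's parameters
target = np.zeros(3)     # stand-in for the target network's parameters

for _ in range(10):
    # Each update moves the target a small step towards the local weights.
    target = tau * local + (1 - tau) * target
```

After n updates with a frozen local network the target reaches 1 - (1 - tau)**n of the way there, which is why a larger tau (10^-2 instead of 10^-3) makes the targets track the local networks more quickly.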
# Example 10.2: Non-Ideal Rankine Cycle

*John F. Maddox, Ph.D., P.E.<br> University of Kentucky - Paducah Campus<br> ME 321: Engineering Thermodynamics II<br>*

## Problem Statement

A Rankine cycle operates with water as the working fluid with a turbine inlet pressure of 3 MPa, a condenser pressure of 15 kPa, and no superheat in the boiler. For isentropic efficiencies of $\eta_t=0.8$ and $\eta_p=0.6$ and $\dot{W}_\text{Net}=1\ \mathrm{MW}$

Find:
* (a) Mass flow rate of steam (kg/s)
* (b) Boiler heat transfer (MW)
* (c) Thermal efficiency of the cycle
* (d) Sketch a $T$-$s$ diagram of the cycle

![image.png](attachment:90020869-2a7e-4630-a16e-bfdb4c7d56fe.png)

## Solution

__[Video Explanation](https://uky.yuja.com/V/Video?v=3074261&node=10465193&a=1519284077&autoplay=1)__

### Python Initialization

We'll start by importing the libraries we will use for our analysis and initializing dictionaries to hold the properties we will be using.

```
from kilojoule.templates.kSI_C import *
water = realfluid.Properties('Water')
```

### Given Parameters

We now define variables to hold our known values.
```
p[3] = Quantity(3.0,'MPa')          # Turbine inlet pressure
p[1] = p[4] = Quantity(15.0,'kPa')  # Condenser pressure
Wdot_net = Quantity(1,'MW')         # Net power
eta_t = 0.8                         # Turbine isentropic efficiency
eta_p = 0.6                         # Pump isentropic efficiency
Summary();
```

### Assumptions

- Non-ideal work devices
- No superheat: saturated vapor at boiler exit
- Single phase into pump: saturated liquid at condenser exit
- Isobaric heat exchangers
- Negligible changes in kinetic energy
- Negligible changes in potential energy

```
x[3] = 1     # No superheat
x[1] = 0     # Single phase into pump
p[2] = p[3]  # Isobaric heat exchanger
Summary();
```

#### (a) Mass flow rate

```
%%showcalc
#### State 1)
T[1] = water.T(p[1],x[1])
v[1] = water.v(p[1],x[1])
h[1] = water.h(p[1],x[1])
s[1] = water.s(p[1],x[1])

#### 1-2) Non-ideal compression
# Isentropic compression
p['2s'] = p[2]
s['2s'] = s[1]
T['2s'] = water.T(p['2s'],s['2s'])
h['2s'] = water.h(p['2s'],s['2s'])
v['2s'] = water.v(p['2s'],s['2s'])

# Actual compression
h[2] = h[1] + (h['2s']-h[1])/eta_p
T[2] = water.T(p[2],h=h[2])
v[2] = water.v(p[2],h=h[2])
s[2] = water.s(p[2],h=h[2])
w_1_to_2 = h[1]-h[2]

#### 2-3) Isobaric heat addition
T[3] = water.T(p[3],x[3])
v[3] = water.v(p[3],x[3])
h[3] = water.h(p[3],x[3])
s[3] = water.s(p[3],x[3])

#### 3-4) Non-ideal expansion
# Isentropic expansion
p['4s'] = p[4]
s['4s'] = s[3]
T['4s'] = water.T(p['4s'],s['4s'])
v['4s'] = water.v(p['4s'],s['4s'])
h['4s'] = water.h(p['4s'],s['4s'])
x['4s'] = water.x(p['4s'],s['4s'])

# Actual expansion
h[4] = h[3] - eta_t*(h[3]-h['4s'])
T[4] = water.T(p[4],h=h[4])
v[4] = water.v(p[4],h=h[4])
s[4] = water.s(p[4],h=h[4])
x[4] = water.x(p[4],h=h[4])
w_3_to_4 = h[3]-h[4]

#### Mass flow rate
w_net = w_1_to_2 + w_3_to_4
mdot = Wdot_net/w_net
mdot.ito('kg/s')
```

#### (b) Boiler heat transfer (MW)

```
%%showcalc
#### Boiler First Law
q_2_to_3 = h[3]-h[2]
Qdot_in = mdot*q_2_to_3
```

#### (c) Thermal efficiency

```
%%showcalc
eta_th = Wdot_net/Qdot_in
eta_th.ito('')
```

#### (d) Diagrams

```
pv = water.pv_diagram() for state in [1,2,3,4]: v[state] = water.v(p[state],h=h[state]) pv.plot_state(states[1],label_loc='west') pv.plot_state(states[2],label_loc='north west') pv.plot_state(states[3],label_loc='north east') pv.plot_state(states[4],label_loc='south') pv.plot_process(states[1],states[2],path='nonideal',label='pump') pv.plot_process(states[2],states[3],path='isobaric',label='boiler') pv.plot_process(states[3],states[4],path='nonideal',label='turbine') pv.plot_process(states[4],states[1],path='isobaric',label='condenser'); Ts = water.Ts_diagram() Ts.plot_isobar(p[3],label=f'{p[3]}',pos=.9) Ts.plot_isobar(p[4],label=f'{p[4]}',pos=.9) Ts.plot_state(states[1],label_loc='south east') Ts.plot_state(states[2],label_loc='north west') Ts.plot_state(states[3],label_loc='east') Ts.plot_state(states[4],label_loc='east') Ts.plot_process(states[1],states[2],path='isentropic',arrow=False) Ts.plot_process(states[2],states[3],path='isobaric',label='boiler') Ts.plot_process(states[3],states[4],path='isentropic',label='turbine') Ts.plot_process(states[4],states[1],path='isobaric',label='condenser'); Ts.plot_process(states[3],states['4s'],path='isentropic',linestyle='dashed'); ```
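The isentropic-efficiency corrections used in part (a) can be summarized in plain Python. The enthalpy values below (kJ/kg) are illustrative placeholders, not the property-table values from the worked solution:

```python
# Isentropic efficiencies from the problem statement.
eta_p, eta_t = 0.6, 0.8

# Hypothetical enthalpies, kJ/kg (for illustration only).
h1, h2s = 226.0, 229.0    # pump inlet / isentropic pump outlet
h3, h4s = 2804.0, 2000.0  # turbine inlet / isentropic turbine outlet

h2 = h1 + (h2s - h1) / eta_p  # actual pump outlet: more work in than ideal
h4 = h3 - eta_t * (h3 - h4s)  # actual turbine outlet: less work out than ideal

w_pump = h1 - h2              # negative: work input
w_turb = h3 - h4              # positive: work output
w_net = w_pump + w_turb
```

Note the direction of each correction: the pump efficiency divides (the actual pump needs more work than the isentropic one), while the turbine efficiency multiplies (the actual turbine delivers less work than the isentropic one).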
# Setup ``` from warnings import simplefilter simplefilter(action='ignore', category=FutureWarning) from tensorflow.keras import backend as K from tensorflow.keras.models import Model, load_model, clone_model from tensorflow.keras.utils import to_categorical from tensorflow.keras.layers import Activation from sklearn.model_selection import train_test_split from sklearn.metrics import f1_score, accuracy_score, confusion_matrix import itertools from random import randint from skimage.segmentation import slic, mark_boundaries, felzenszwalb, quickshift from matplotlib.colors import LinearSegmentedColormap import matplotlib.pyplot as plt %matplotlib inline import numpy as np import os import time import cv2 import numpy as np import shap from alibi.explainers import AnchorImage import lime from lime import lime_image from lime.wrappers.scikit_image import SegmentationAlgorithm import vis from vis.visualization import visualize_saliency from exmatchina import * num_classes = 10 classes = ['airplane', 'automobile', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship','truck'] class_dict = { 'airplane': 0, 'automobile':1, 'bird':2, 'cat':3, 'deer':4, 'dog':5, 'frog':6, 'horse':7, 'ship':8, 'truck':9 } inv_class_dict = {v: k for k, v in class_dict.items()} ## These are the randomly generated indices that were used in our survey # all_idx = np.array([23, 26, 390, 429, 570, 649, 732, 739, 1081, 1163, 1175, 1289, 1310, 1323 # , 1487, 1623, 1715, 1733, 1825, 1881, 1951, 2102, 2246, 2300, 2546, 2702, 2994, 3095 # , 3308, 3488, 3727, 3862, 4190, 4299, 4370, 4417, 4448, 4526, 4537, 4559, 4604, 4672 # , 4857, 5050, 5138, 5281, 5332, 5471, 5495, 5694, 5699, 5754, 5802, 5900, 6039, 6042 # , 6046, 6127, 6285, 6478, 6649, 6678, 6795, 7023, 7087, 7254, 7295, 7301, 7471, 7524 # , 7544, 7567, 7670, 7885, 7914, 7998, 8197, 8220, 8236, 8291, 8311, 8355, 8430, 8437 # , 8510, 8646, 8662, 8755, 8875, 8896, 8990, 9106, 9134, 9169, 9436, 9603, 9739, 9772 # , 9852, 9998]) all_idx = [23, 26, 390, 
429, 570] #Considering just 5 samples x_train = np.load('../data/image/X_train.npy') y_train = np.load('../data/image/y_train.npy') x_test = np.load('../data/image/X_test.npy') y_test = np.load('../data/image/y_test.npy') print(f'Number of Training samples: {x_train.shape[0]}') print(f'Number of Test samples: {x_test.shape[0]}') print(x_train.shape) print(y_train.shape) print(x_test.shape) print(y_test.shape) y_train = to_categorical(y_train,num_classes) y_test = to_categorical(y_test,num_classes) model = load_model('../trained_models/image.hdf5') model.summary() def calculate_metrics(model, X_test, y_test_binary): y_pred = np.argmax(model.predict(X_test), axis=1) y_true = np.argmax(y_test_binary, axis=1) mismatch = np.where(y_true != y_pred) cf_matrix = confusion_matrix(y_true, y_pred) accuracy = accuracy_score(y_true, y_pred) #micro_f1 = f1_score(y_true, y_pred, average='micro') macro_f1 = f1_score(y_true, y_pred, average='macro') return cf_matrix, accuracy, macro_f1, mismatch, y_pred def plot_confusion_matrix(cm, classes, normalize=False, title='Confusion matrix', cmap=plt.cm.Blues): """ This function prints and plots the confusion matrix. Normalization can be applied by setting `normalize=True`. """ if normalize: cm = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis] print("Normalized confusion matrix") # print(cm) else: print('Confusion matrix, without normalization') # print(cm) plt.figure(figsize = (10,7)) plt.imshow(cm, interpolation='nearest', cmap=cmap) plt.title(title) plt.colorbar() tick_marks = np.arange(len(classes)) plt.xticks(tick_marks, classes, rotation=45, fontsize = 15) plt.yticks(tick_marks, classes, fontsize = 15) fmt = '.2f' if normalize else 'd' thresh = cm.max() / 2. 
for i, j in itertools.product(range(cm.shape[0]), range(cm.shape[1])): plt.text(j, i, format(cm[i, j], fmt), fontsize = 15, horizontalalignment="center", color="white" if cm[i, j] > thresh else "black") plt.tight_layout() plt.ylabel('True label', fontsize = 12) plt.xlabel('Predicted label', fontsize = 12) cf_matrix, accuracy, macro_f1, mismatch, y_pred = calculate_metrics(model, x_test, y_test) print('Accuracy : {}'.format(accuracy)) print('F1-score : {}'.format(macro_f1)) plot_confusion_matrix(cf_matrix, classes, normalize=True, title='Confusion matrix', cmap=plt.cm.Blues) ``` # LIME ``` explainer = lime_image.LimeImageExplainer() segmentation_fn = SegmentationAlgorithm('felzenszwalb', scale=10, sigma=0.4, min_size=20) for i in all_idx: image = x_test[i] to_explain = np.expand_dims(image,axis=0) class_idx = np.argmax(model.predict(to_explain)) print(inv_class_dict[class_idx]) # Hide color is the color for a superpixel turned OFF. # Alternatively, if it is NONE, the superpixel will be replaced by the average of its pixels explanation = explainer.explain_instance(image, model.predict,segmentation_fn = segmentation_fn, top_labels=5, hide_color=0, num_samples=1000) temp, mask = explanation.get_image_and_mask(explanation.top_labels[0], positive_only=True, num_features=5, hide_rest=True) #Plotting fig, axes1 = plt.subplots(1,2, figsize=(10,10)) # fig.suptitle(inv_class_dict[y_test[i]]) axes1[0].set_axis_off() axes1[1].set_axis_off() axes1[0].imshow(x_test[i], interpolation='nearest') axes1[1].imshow(mark_boundaries(temp, mask), interpolation='nearest') # plt.savefig(f'./image/image-{i}-lime',bbox_inches = 'tight', pad_inches = 0.5) plt.show() ``` # Anchor Explanations ``` # Define a Prediction Function predict_fn = lambda x: model.predict(x) image_shape = (32,32,3) segmentation_fn = 'felzenszwalb' slic_kwargs = {'n_segments': 100, 'compactness': 1, 'sigma': .5, 'max_iter': 50} felzenszwalb_kwargs = {'scale': 10, 'sigma': 0.4, 'min_size': 50} explainer = 
AnchorImage(predict_fn, image_shape, segmentation_fn=segmentation_fn, segmentation_kwargs=felzenszwalb_kwargs, images_background=None) for i in all_idx: image = x_test[i] to_explain = np.expand_dims(image,axis=0) class_idx = np.argmax(model.predict(to_explain)) print(inv_class_dict[class_idx]) explanation = explainer.explain(image, threshold=.99, p_sample=0.5, tau=0.15) ## Plotting fig, axes1 = plt.subplots(1,2, figsize=(10,10)) # fig.suptitle(inv_class_dict[y_test[i]]) axes1[0].set_axis_off() axes1[1].set_axis_off() axes1[0].imshow(x_test[i], interpolation='nearest') axes1[1].imshow(explanation['anchor'], interpolation='nearest') # plt.savefig(f'./image-{i}-anchor', bbox_inches = 'tight', pad_inches = 0.5) plt.show() ``` # SHAP ``` background = x_train[np.random.choice(x_train.shape[0], 1000, replace=False)] # map input to specified layer def map2layer(x, layer): feed_dict = dict(zip([model.layers[0].input], x.reshape((1,) + x.shape))) return K.get_session().run(model.layers[layer].input, feed_dict) def get_shap_full(idx): layer = 14 to_explain = np.expand_dims(x_test[idx],axis=0) class_idx = np.argmax(model.predict(to_explain)) print(inv_class_dict[class_idx]) # get shap values e = shap.GradientExplainer((model.layers[layer].input, model.layers[-1].output), map2layer(background, layer)) shap_values,indexes = e.shap_values(map2layer(to_explain, layer), ranked_outputs=1) # use SHAP plot shap.image_plot(shap_values, to_explain, show=False) # plt.savefig('./image/image-' + str(idx) + '-shap.png', bbox_inches='tight') for i in all_idx: get_shap_full(i) ``` # Saliency Map ``` # Replace activation with linear new_model = clone_model(model) new_model.pop() new_model.add(Activation('linear', name="linear_p")) new_model.summary() def plot_map(img_index, class_idx, grads): print(inv_class_dict[class_idx]) fig, axes = plt.subplots(ncols=2,figsize=(8,6)) axes[0].imshow(x_test[img_index]) axes[0].axis('off') axes[1].imshow(x_test[img_index]) axes[1].axis('off') i = 
axes[1].imshow(grads,cmap="jet",alpha=0.6) fig.subplots_adjust(right=0.9) cbar_ax = fig.add_axes([1, 0.2, 0.04, 0.59]) fig.colorbar(i, cax=cbar_ax) # plt.savefig('./image/image-' + str(img_index) + '-saliencymap.png', bbox_inches='tight', pad_inches=0.3) # plt.close(fig) plt.show() def getSaliencyMap(img_index): to_explain = x_test[img_index].reshape(1,32,32,3) class_idx = np.argmax(model.predict(to_explain)) grads = visualize_saliency(new_model, 14, filter_indices = None, seed_input = x_test[img_index]) plot_map(img_index, class_idx , grads) for i in all_idx: getSaliencyMap(i) ``` # Grad-Cam++ ``` def get_gradcampp(idx): img = x_test[idx] cls_true = np.argmax(y_test[idx]) x = np.expand_dims(img, axis=0) # get cam cls_pred, cam = grad_cam_plus_plus(model=model, x=x, layer_name="Conv_6") print(inv_class_dict[cls_pred]) # resize to to size of image heatmap = cv2.resize(cam, (img.shape[1], img.shape[0])) fig, axes = plt.subplots(ncols=2,figsize=(8,6)) axes[0].imshow(img) axes[0].axis('off') axes[1].imshow(img) axes[1].axis('off') i = axes[1].imshow(heatmap,cmap="jet",alpha=0.6) fig.subplots_adjust(right=0.9) cbar_ax = fig.add_axes([1, 0.2, 0.04, 0.60]) fig.colorbar(i, cax=cbar_ax) # plt.savefig('./image/image-' + str(idx) + '-gradcampp.png', bbox_inches='tight', pad_inches=0.3) # plt.close(fig) plt.show() def grad_cam_plus_plus(model, x, layer_name): cls = np.argmax(model.predict(x)) y_c = model.output[0, cls] conv_output = model.get_layer(layer_name).output grads = K.gradients(y_c, conv_output)[0] first = K.exp(y_c) * grads second = K.exp(y_c) * grads * grads third = K.exp(y_c) * grads * grads * grads gradient_function = K.function([model.input], [y_c, first, second, third, conv_output, grads]) y_c, conv_first_grad, conv_second_grad, conv_third_grad, conv_output, grads_val = gradient_function([x]) global_sum = np.sum(conv_output[0].reshape((-1,conv_first_grad[0].shape[2])), axis=0) alpha_num = conv_second_grad[0] alpha_denom = conv_second_grad[0] * 2.0 + 
conv_third_grad[0] * global_sum.reshape((1, 1, conv_first_grad[0].shape[2])) alpha_denom = np.where(alpha_denom != 0.0, alpha_denom, np.ones(alpha_denom.shape)) alphas = alpha_num / alpha_denom weights = np.maximum(conv_first_grad[0], 0.0) alpha_normalization_constant = np.sum(np.sum(alphas, axis=0), axis=0) # 0 alphas /= alpha_normalization_constant.reshape((1, 1, conv_first_grad[0].shape[2])) # NAN deep_linearization_weights = np.sum((weights * alphas).reshape((-1, conv_first_grad[0].shape[2])), axis=0) cam = np.sum(deep_linearization_weights * conv_output[0], axis=2) cam = np.maximum(cam, 0) cam /= np.max(cam) return cls, cam for i in all_idx: get_gradcampp(i) ``` # ExMatchina ``` def plot_images(test, examples, label): # =======GENERATE STUDY EXAMPLES========= fig = plt.figure(figsize=(10,3)) num_display = 4 fig.add_subplot(1, num_display, 1).title.set_text(inv_class_dict[label]) plt.imshow(test, interpolation='nearest') plt.axis('off') line = fig.add_subplot(1, 1, 1) line.plot([2.39,2.39],[0,1],'--') line.set_xlim(0,10) line.axis('off') for k in range(num_display-1): if k >= len(examples): continue fig.add_subplot(1,num_display,k+2).title.set_text(inv_class_dict[label]) fig.add_subplot(1,num_display,k+2).title.set_color('#0067FF') plt.imshow(examples[k], interpolation='nearest') plt.axis('off') fig.tight_layout() plt.tight_layout() plt.show() # plt.savefig('./image-' + str(i) + '-example.png', bbox_inches='tight') selected_layer = 'Flatten_1' exm = ExMatchina(model=model, layer=selected_layer, examples=x_train) for test_idx in all_idx: test_input = x_test[test_idx] label = exm.get_label_for(test_input) (examples, indices) = exm.return_nearest_examples(test_input) plot_images(test_input, examples, label) ```
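The row normalization performed in `plot_confusion_matrix` above divides each row by its total, so each diagonal entry becomes the per-class recall. A small sketch with an invented 2-class matrix:

```python
import numpy as np

# Invented confusion matrix: rows are true classes, columns are predictions.
cm = np.array([[8, 2],
               [1, 9]])

# Same normalization as in plot_confusion_matrix: divide each row by its sum.
cm_norm = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis]
```

After normalization every row sums to 1, which makes the heatmap comparable across classes with different sample counts.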
Here is a simple example of file IO:

```
#Write a file
out_file = open("test.txt", "w")
out_file.write("This Text is going to out file\nLook at it and see\n")
out_file.close()

#Read a file
in_file = open("test.txt", "r")
text = in_file.read()
in_file.close()

print(text)
```

The output, which is also the contents of the file test.txt, is the two lines `This Text is going to out file` and `Look at it and see`. Notice that it wrote a file called test.txt in the directory that you ran the program from. The `\n` in the string tells Python to put a **n**ewline where it is.

An overview of file IO is:

1. Get a file object with the `open` function.
2. Read from or write to the file object (depending on whether you opened it with `"r"` or `"w"`).
3. Close it.

The first step is to get a file object. The way to do this is to use the `open` function. The format is `file_object = open(filename, mode)` where `file_object` is the variable to put the file object in, `filename` is a string with the filename, and `mode` is either `"r"` to **r**ead a file or `"w"` to **w**rite a file.

Next the file object's functions can be called. The two most common functions are `read` and `write`. The `write` function adds a string to the end of the file. The `read` function reads the next thing in the file and returns it as a string. If no argument is given, it will return the whole file (as done in the example).
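As a side note (not covered above), modern Python usually opens files with the `with` statement, which closes the file automatically even if an error occurs, and mode `"a"` appends instead of overwriting. A small sketch:

```python
# "with" closes the file automatically when the block ends
with open("test2.txt", "w") as out_file:
    out_file.write("first line\n")

# mode "a" appends to the end of an existing file
with open("test2.txt", "a") as out_file:
    out_file.write("second line\n")

with open("test2.txt", "r") as in_file:
    print(in_file.read())
```

Running this prints `first line` then `second line`, and no explicit `close` call is needed.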
Now here is a new version of the phone numbers program that we made earlier: ``` def print_numbers(numbers): print("Telephone Numbers:") for x in numbers: print("Name: ", x, " \tNumber: ", numbers[x]) print() def add_number(numbers, name, number): numbers[name] = number def lookup_number(numbers, name): if name in numbers: return "The number is "+numbers[name] else: return name+" was not found" def remove_number(numbers, name): if name in numbers: del numbers[name] else: print(name, " was not found") def load_numbers(numbers, filename): in_file = open(filename, "r") while True: in_line = in_file.readline() if in_line == "": break in_line = in_line[:-1] [name, number] = in_line.split(",") numbers[name] = number in_file.close() def save_numbers(numbers, filename): out_file = open(filename, "w") for x in numbers: out_file.write(x+","+numbers[x]+"\n") out_file.close() def print_menu(): print('1. Print Phone Numbers') print('2. Add a Phone Number') print('3. Remove a Phone Number') print('4. Lookup a Phone Number') print('5. Load numbers') print('6. Save numbers') print('7. Quit') print() phone_list = {} menu_choice = 0 print_menu() while menu_choice != 7: menu_choice = int(input("Type in a number (1-7):")) if menu_choice == 1: print_numbers(phone_list) elif menu_choice == 2: print("Add Name and Number") name = input("Name:") phone = input("Number:") add_number(phone_list, name, phone) elif menu_choice == 3: print("Remove Name and Number") name = input("Name:") remove_number(phone_list, name) elif menu_choice == 4: print("Lookup Number") name = input("Name:") print(lookup_number(phone_list, name)) elif menu_choice == 5: filename = input("Filename to load:") load_numbers(phone_list, filename) elif menu_choice == 6: filename = input("Filename to save:") save_numbers(phone_list, filename) elif menu_choice == 7: pass else: print_menu() print("Goodbye") ``` Notice that it now includes saving and loading files. 
Here is some output of my running it twice:

The new portions of this program are the `load_numbers` and `save_numbers` functions.

First we will look at the save portion of the program. First, it creates a file object with the command `open(filename, "w")`. Next, it goes through and creates a line for each of the phone numbers with the command `out_file.write(x+","+numbers[x]+"\n")`. This writes out a line that contains the name, a comma, the number, and follows it with a newline.

The loading portion is a little more complicated. It starts by getting a file object. Then, it uses a `while True:` loop to keep looping until a `break` statement is encountered. Next, it gets a line with `in_line = in_file.readline()`. The `readline` function will return an empty string (`len(string) == 0`) when the end of the file is reached. The `if` statement checks for this and `break`s out of the `while` loop when that happens. Of course, if the `readline` function did not return the newline at the end of the line, there would be no way to tell whether an empty string was an empty line or the end of the file, so the newline is left in what `readline` returns. Hence we have to get rid of the newline. The line `in_line = in_line[:-1]` does this for us by dropping the last character. Next, the line `[name, number] = in_line.split(",")` splits the line at the comma into a name and a number. This is then added to the `numbers` dictionary.

Exercises
=========

Now modify the grades program from notebook 10 (copied below) so that it uses file IO to keep a record of the students.

```
max_points = [25, 25, 50, 25, 100]
assignments = ['hw ch 1', 'hw ch 2', 'quiz ', 'hw ch 3', 'test']
students = {'#Max':max_points}

def print_menu():
    print("1. Add student")
    print("2. Remove student")
    print("3. Print grades")
    print("4. Record grade")
    print("5. Print Menu")
    print("6. Exit")

def print_all_grades():
    print('\t', end=' ')
    for i in range(len(assignments)):
        print(assignments[i], '\t', end=' ')
    print()
    keys = list(students.keys())
    keys.sort()
    for x in keys:
        print(x, '\t', end=' ')
        grades = students[x]
        print_grades(grades)

def print_grades(grades):
    for i in range(len(grades)):
        print(grades[i], '\t\t', end=' ')
    print()

print_menu()
menu_choice = 0
while menu_choice != 6:
    print()
    menu_choice = int(input("Menu Choice (1-6):"))
    if menu_choice == 1:
        name = input("Student to add:")
        students[name] = [0]*len(max_points)
    elif menu_choice == 2:
        name = input("Student to remove:")
        if name in students:
            del students[name]
        else:
            print("Student: ", name, " not found")
    elif menu_choice == 3:
        print_all_grades()
    elif menu_choice == 4:
        print("Record Grade")
        name = input("Student:")
        if name in students:
            grades = students[name]
            print("Type in the number of the grade to record")
            print("Type a 0 (zero) to exit")
            for i in range(len(assignments)):
                print(i+1, ' ', assignments[i], '\t', end=' ')
            print()
            print_grades(grades)
            which = 1234
            while which != -1:
                which = int(input("Change which Grade:"))
                which = which-1
                if 0 <= which < len(grades):
                    grade = int(input("Grade:"))
                    grades[which] = grade
                elif which != -1:
                    print("Invalid Grade Number")
        else:
            print("Student not found")
    elif menu_choice != 6:
        print_menu()
```
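One possible approach to the exercise (a sketch, not the only solution) mirrors the phone-number functions: write one comma-separated line per student, and read them back with `split`:

```python
def save_grades(students, filename):
    # one line per student: name, then each grade, comma-separated
    out_file = open(filename, "w")
    for name in students:
        grades = [str(g) for g in students[name]]
        out_file.write(name + "," + ",".join(grades) + "\n")
    out_file.close()

def load_grades(students, filename):
    in_file = open(filename, "r")
    for line in in_file:
        parts = line[:-1].split(",")  # drop the trailing newline, then split
        students[parts[0]] = [int(g) for g in parts[1:]]
    in_file.close()

students = {'#Max': [25, 25, 50, 25, 100]}
save_grades(students, "grades.txt")
loaded = {}
load_grades(loaded, "grades.txt")
print(loaded)  # -> {'#Max': [25, 25, 50, 25, 100]}
```

These two functions can then be wired into the menu loop the same way `load_numbers` and `save_numbers` were.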
Uses `langevin-survival.cpp` with `INPUT_DATA_FILE` flag to compute 2D MFPT from a file generated by `exp-data-diffus-analysis.ipynb`. The file contains $(x,y)$ positions, with only free diffusion phases, separated by `NaN`s when reset occurs. Parameters ($D$, $\sigma$, FPS, $T_\text{res}$...) are contained in the associated `.csv` file. Things to specify in `langevin-survival.cpp` : - `#define INPUT_DATA_FILE` obviously - `#define XTARG_ONE_VARIABLE` - `#undef ENABLE_SURVIVAL_PROBABILITIES_*` - `#define TARGET_2D_CYL` Then set `path_traj` and `path_params` below (outputs of `exp-data-diffus-analysis.ipynb`), and set desired $a$ and $b$'s in `a_and_b`. Because $T_r$ and $\sigma$ are fixed by the data, there is only one possibility for $L$ and $R_\text{tol}$, which will be set automatically. Also choose how to define $\sigma$ from $(\sigma_x,\sigma_y)$ if they are different, typically $\sigma_x$ or $\sqrt{(\sigma_x^2+\sigma_y^2)/2}$ (the target being along the $x$ axis). ``` # [1] import numpy as np import pandas as pd import csv import matplotlib.pyplot as plt import pysimul from common import * from math import * # [2] path_params = "../dati_MFPT/20-01-10/qpd_Ttrap50ms_Ttot200ms_T0.03/qpd_Ttrap50ms_Ttot200ms_diffus.csv" path_traj = "../dati_MFPT/20-01-10/qpd_Ttrap50ms_Ttot200ms_T0.03/qpd_Ttrap50ms_Ttot200ms_traj_data.bin" df = pd.read_csv(path_params, sep=',', header=None) df = df.set_index(0) params = dict(df[1]) print(params) D = params['D'] rT = params['reset_period'] σ = sqrt(params['sigma_x']**2 + params['sigma_y']**2)/sqrt(2) #params['sigma_x'] # or reset_type = 'per' path = "data-exp-2d-periodical/2D_meansigma/" # output storage directory i_beg = 30 param_i = 0 a_and_b = [ (0.3,1), (0.3,2), (0.3,3), (0.3,3.5), (0.3,4), (0.3,8),# (0.3,12), (0.5,1), (0.5,2), (0.5,3), (0.5,4), (0.5,6), (0.5,8),# (0.5,12), (0.6,2), (0.6,4), (0.6,12), (0.75,1), (0.75,2), (0.75,4), (0.75,8), (0.75,12), (0.9,2), (0.9,4), (0.9,8), (0.9,12), ] # [3] simul = pysimul.PySimul() a,b = 
a_and_b[param_i] print("doing a =",a,", b =",b) assert(a < 1) simul['first_times_xtarg'] = L = b*σ simul['2D-Rtol'] = Rtol = a * L simul['file_path'] = path_traj simul.start() if reset_type == 'poisson': th_tau_2d = fpt_2d_poisson_tau th_c = lambda L: fpt_poisson_c(α, D, L) elif reset_type == 'per': th_tau_2d = np.vectorize(lambda b,c,a: fpt_2d_periodical_tau(b,c,a, use_cache="th-cache-2d-periodical/")) th_c = lambda L: fpt_periodic_c(rT, D, L) c = th_c(L) param_i += 1 ended = False # [4] def timer_f (): global simul, ended if simul is None: return 1 if simul['pause'] == 1 and not ended: ended = True return 2 return 0 %%javascript var sfml_event_poll_timer = setInterval(function() { Jupyter.notebook.kernel.execute("print(timer_f())", { iopub : { output : function (data) { console.log(data.content.text) if (data.content.text == "1\n" || data.content.text === undefined) { clearInterval(sfml_event_poll_timer); } else if (data.content.text == "2\n") { Jupyter.notebook.execute_cells([7,8,9,3]); } }}}) }, 1000); # [6] param_i-1, simul['n_trajectories'] # [7] simul.explicit_lock() time_conversion = (1/params['fps']) / simul['Delta_t'] first_times = simul['first_times'] * time_conversion mfpt = np.mean(first_times) n_traj = len(first_times) path2 = path+str(param_i+i_beg) np.savetxt(path2+"-ft.csv", first_times, fmt='%.2e') d = { 'D': D, 'x0sigma': σ, 'x0sigma_x': params['sigma_x'], 'x0sigma_y': params['sigma_y'], 'L': L, 'b': b, 'c': c, 'Rtol': Rtol, 'a': a, 'mfpt': mfpt, 'fpt_stdev': np.std(first_times), 'n_traj': n_traj, 'Delta_t': 1/params['fps'], } if reset_type == 'poisson': d['reset_rate'] = α elif reset_type == 'per': d['reset_period'] = rT df = pd.DataFrame(list(d.items())).set_index(0) df.to_csv(path2+"-params.csv", header=False, quoting=csv.QUOTE_NONE, sep=',') simul.explicit_unlock() df.T # [8] plt.figure(figsize=(10,4)) fpt_max = 5*mfpt plt.hist(first_times, bins=100, range=(0,fpt_max), weights=100/fpt_max*np.ones(n_traj)/n_traj, label="distribution ({} 
traj.)".format(n_traj)) plt.axvline(x=mfpt, color='purple', label="MFPT = {:.3f}".format(mfpt)) # comment if not wanted : mfpt_th = L**2/(4*D)*th_tau_2d(b,c,a) plt.axvline(x=mfpt_th, color='black', label="th. MFPT = {:.3f}".format(mfpt_th)) plt.yscale('log') plt.xlabel("first passage time") if reset_type == 'poisson': plt.title(r"FPT distribution for $b={:.2f}$, $c={:.2f}$ ($D={:.3f}$, $\alpha={}$, $\sigma_{{x_0}}={:.3e}$, $L={}$)".format(b, c, D, α, σ, L)) elif reset_type == 'per': plt.title(r"FPT distribution for $b={:.2f}$, $c={:.2f}$ ($D={:.3f}$, $T_\operatorname{{res}}={}$, $\sigma_{{x_0}}={:.3e}$, $L={}$)".format(b, c, D, rT, σ, L)) plt.legend() plt.savefig(path2+"-distrib.pdf", bbox_inches='tight') # [9] simul.end() ```
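The MFPT estimate in cell `[7]` is just the sample mean of the first-passage times (`np.mean(first_times)`), and the `fpt_stdev` saved alongside it is the sample standard deviation. A natural companion number, not computed in the notebook, is the standard error of the mean. A self-contained sketch with synthetic times (exponential with true mean 2.0, standing in for the simulated data):

```python
import numpy as np

# synthetic first-passage times; the real notebook gets them from the simulation
rng = np.random.default_rng(0)
first_times = rng.exponential(scale=2.0, size=10_000)  # true MFPT = 2.0

mfpt = np.mean(first_times)
stderr = np.std(first_times) / np.sqrt(len(first_times))  # standard error of the mean
print(f"MFPT = {mfpt:.3f} +/- {stderr:.3f}")
```

With 10,000 trajectories the standard error is roughly 1% of the mean here, which gives a sense of how tightly the experimental MFPT is pinned down.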
# Identify sectors expected to perform well in near future > Find out beaten down sectors that are showing signs of reversal. - badges: true - categories: [personal-finance] Here I find out the sectors that are delivering diminishing returns i.e. returns are decreasing on lower time frames compared to higher time frames. The second criterion is to shortlist sectors that took a maximum beating recently. ``` #hide %load_ext blackcellmagic #hide from IPython.display import HTML import pandas as pd df = pd.read_csv("https://www1.nseindia.com/content/indices/mir.csv", header=None) #hide caption = df.iloc[0, 0] df.columns = ["Sector", "1m", "3m", "6m", "12m"] df = df[3:] df.set_index("Sector", inplace=True) df["1m"] = df["1m"].astype(float) / 100 df["3m"] = df["3m"].astype(float) / 100 df["6m"] = df["6m"].astype(float) / 100 df["12m"] = df["12m"].astype(float) / 100 df["diminishing_returns"] = False mask_diminishing_returns = ( (df["12m"] > df["6m"]) & (df["6m"] > df["3m"]) & (df["3m"] > df["1m"]) ) df.loc[mask_diminishing_returns, "diminishing_returns"] = True df = df.sort_values( by=["diminishing_returns", "12m", "6m", "3m", "1m"], ascending=False ) #hide def color_negative_red(val): color = "red" if val < 0 else "black" return "color: %s" % color def hover(hover_color="#f0f0f0"): return dict(selector="tr:hover", props=[("background-color", "%s" % hover_color)]) styles = [ hover(), dict(selector="th", props=[("font-size", "105%"), ("text-align", "left")]), dict(selector="caption", props=[("caption-side", "top")]), ] format_dict = { "1m": "{:.2%}", "3m": "{:.2%}", "6m": "{:.2%}", "12m": "{:.2%}", } html = ( df.style.format(format_dict) .set_table_styles(styles) .applymap(color_negative_red) .highlight_max(color="lightgreen") .set_caption(caption) ) #collapse-hide html ``` Once you identify the beaten down sectors, you can check the stocks under those sectors. Both the sector and stocks should confirm the reversal. 
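The screening logic above reduces to a single boolean condition per row. On toy data (hypothetical sector names and returns, for illustration only) it behaves like this:

```python
import pandas as pd

toy = pd.DataFrame({
    "Sector": ["A", "B"],
    "1m":  [0.01, 0.05],
    "3m":  [0.02, 0.04],
    "6m":  [0.04, 0.03],
    "12m": [0.08, 0.02],
}).set_index("Sector")

# True when returns shrink as the time frame shortens: 12m > 6m > 3m > 1m
mask = (toy["12m"] > toy["6m"]) & (toy["6m"] > toy["3m"]) & (toy["3m"] > toy["1m"])
print(mask)  # A: True, B: False
```

Sector A's returns strictly decrease on shorter frames, so it is flagged as `diminishing_returns`; sector B's improving short-term returns exclude it.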
As an investor, it is important to understand that there is a correlation between the economic cycle, the stock market cycle and the performance of various sectors of the economy.

During the early cycle, it is better to invest in interest-rate-sensitive stocks such as consumer discretionary, financials, real estate, industrials and transportation, and to avoid communications, utilities and energy sector stocks. During the middle of the cycle, you can invest in IT and capital goods stocks, whereas metals and utilities should be avoided in this phase. During the late cycle, you can invest in energy, metals, health care and utilities, and skip IT and consumer discretionary stocks. The best sectors for investment during an economic slowdown are FMCG, utilities and health care; investment in industrials, IT and real estate should be avoided during this time.

![Business cycle and relative stock performance](https://i.pinimg.com/originals/00/5c/bc/005cbc511e93c97318c4bfc95df4c38d.jpg)
```
import pandas as pd
import numpy as np
from sklearn import *
import matplotlib.pyplot as plt
# import the confusion-matrix plot up front: it is used below before the point
# where the original notebook imported it, which raised a NameError
from mlxtend.plotting import plot_confusion_matrix
%matplotlib inline

df = pd.read_csv("/data/credit-default.csv")
df.head()
df.info()
df.default.value_counts()/len(df)

target = "default"
y = np.where(df[target] == 2, 1, 0)  #outcome variable
X = df.copy()  #feature matrix
del X[target]
X = pd.get_dummies(X, drop_first=True)

X_train, X_test, y_train, y_test = model_selection.train_test_split(X, y, test_size = 0.3, random_state = 1234)
X_train.shape, X_test.shape

pipe = pipeline.Pipeline([
    ("poly", preprocessing.PolynomialFeatures(degree=1, include_bias=False)),
    ("scaler", preprocessing.StandardScaler()),
    ("est", linear_model.LogisticRegression())
])
pipe.fit(X_train, y_train)
y_test_prob = pipe.predict_proba(X_test)
y_train_pred = pipe.predict(X_train)
y_test_pred = pipe.predict(X_test)
plot_confusion_matrix(metrics.confusion_matrix(y_test, y_test_pred))

#print("training r2:", metrics.r2_score(y_train, y_train_pred),
#      "\ntesting r2:", metrics.r2_score(y_test, y_test_pred),
#      "\ntraining mse:", metrics.mean_squared_error(y_train, y_train_pred),
#      "\ntesting mse:", metrics.mean_squared_error(y_test, y_test_pred))

pd.DataFrame({"actual": y_test, "predicted": y_test_pred})

plot_confusion_matrix(metrics.confusion_matrix(y_test, y_test_pred))
(182 + 43)/len(X_test)
metrics.accuracy_score(y_test, y_test_pred)
y_test_prob

y_test_pred = np.where(y_test_prob[:, 1] > 0.5, 1, 0)
plot_confusion_matrix(metrics.confusion_matrix(y_test, y_test_pred))
y_test_pred.shape
recall = 43/(43+51)
recall
precision = 43 / (43+24)
precision

y_test_pred = np.where(y_test_prob[:, 1] > 0.5, 1, 0)
print("accuracy", metrics.accuracy_score(y_test, y_test_pred)
     ,"\nrecall", metrics.recall_score(y_test, y_test_pred)
     ,"\nprecision", metrics.precision_score(y_test, y_test_pred))
plot_confusion_matrix(metrics.confusion_matrix(y_test, y_test_pred))

y_test_pred = np.where(y_test_prob[:, 1] > 0.8, 1, 0)
print("accuracy", metrics.accuracy_score(y_test, y_test_pred) ,"\nrecall", metrics.recall_score(y_test, y_test_pred) ,"\nprecision", metrics.precision_score(y_test, y_test_pred)) plot_confusion_matrix(metrics.confusion_matrix(y_test, y_test_pred)) y_test_pred = np.where(y_test_prob[:, 1] > 0.2, 1, 0) print("accuracy", metrics.accuracy_score(y_test, y_test_pred) ,"\nrecall", metrics.recall_score(y_test, y_test_pred) ,"\nprecision", metrics.precision_score(y_test, y_test_pred)) plot_confusion_matrix(metrics.confusion_matrix(y_test, y_test_pred)) fpr, tpr, thresholds = metrics.roc_curve(y_test, y_test_prob[:, 1]) plt.plot(fpr, tpr) plt.plot([0, 1], [0, 1], ls = "--") plt.xlabel("FPR") plt.ylabel("TPR") plt.title("ROC, auc: "+ str(metrics.auc(fpr, tpr))) %%time target = "default" y = np.where(df[target] == 2, 1, 0) #outcome variable X = df.copy() #feature matrix del X[target] X = pd.get_dummies(X, drop_first=True) X_train, X_test, y_train, y_test = model_selection.train_test_split(X, y, test_size = 0.3, random_state = 1234) X_train.shape, X_test.shape pipe = pipeline.Pipeline([ ("poly", preprocessing.PolynomialFeatures(degree=1, include_bias=False)), ("scaler", preprocessing.StandardScaler()), ("est", linear_model.SGDClassifier(loss="log", penalty = "elasticnet", learning_rate = "invscaling", eta0 = 0.01, max_iter = 2000, tol = 1e-4 )) ]) param_grid = { "est__l1_ratio": np.linspace(0, 1, 10), "est__alpha": np.linspace(0.08, 0.09, 10) } gsearch = model_selection.GridSearchCV(cv=5, estimator=pipe, n_jobs=1, param_grid=param_grid) gsearch.fit(X_train, y_train) est = gsearch.best_estimator_ y_test_prob = est.predict_proba(X_test) y_train_pred = est.predict(X_train) y_test_pred = est.predict(X_test) print("test accuracy", metrics.accuracy_score(y_test, y_test_pred)) print("best params: ", gsearch.best_params_) gsearch.best_params_ ```
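Rather than probing thresholds one by one (0.2, 0.5, 0.8) as above, `metrics.precision_recall_curve` returns precision and recall at every candidate threshold in one call. A sketch on toy labels and scores (hypothetical values, for illustration only):

```python
import numpy as np
from sklearn import metrics

# toy ground truth and predicted positive-class probabilities
y_true = np.array([0, 0, 1, 1, 0, 1])
y_prob = np.array([0.1, 0.4, 0.35, 0.8, 0.2, 0.7])

precision, recall, thresholds = metrics.precision_recall_curve(y_true, y_prob)
# precision and recall have one more entry than thresholds: the final
# (precision=1, recall=0) point is appended by convention
for p, r, t in zip(precision, recall, thresholds):
    print(f"threshold={t:.2f}  precision={p:.2f}  recall={r:.2f}")
```

Scanning this table (or plotting precision against recall) makes the trade-off explicit: raising the threshold trades recall for precision, exactly as the manual 0.2/0.5/0.8 experiments showed.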
# Neural networks with PyTorch Deep learning networks tend to be massive with dozens or hundreds of layers, that's where the term "deep" comes from. You can build one of these deep networks using only weight matrices as we did in the previous notebook, but in general it's very cumbersome and difficult to implement. PyTorch has a nice module `nn` that provides a nice way to efficiently build large neural networks. ``` # Import necessary packages %matplotlib inline %config InlineBackend.figure_format = 'retina' import numpy as np import torch import helper import matplotlib.pyplot as plt ``` Now we're going to build a larger network that can solve a (formerly) difficult problem, identifying text in an image. Here we'll use the MNIST dataset which consists of greyscale handwritten digits. Each image is 28x28 pixels, you can see a sample below <img src='assets/mnist.png'> Our goal is to build a neural network that can take one of these images and predict the digit in the image. First up, we need to get our dataset. This is provided through the `torchvision` package. The code below will download the MNIST dataset, then create training and test datasets for us. Don't worry too much about the details here, you'll learn more about this later. ``` ### Run this cell from torchvision import datasets, transforms # Define a transform to normalize the data transform = transforms.Compose([transforms.ToTensor(), transforms.Normalize((0.5,), (0.5,)), ]) # Download and load the training data trainset = datasets.MNIST('~/.pytorch/MNIST_data/', download=True, train=True, transform=transform) trainloader = torch.utils.data.DataLoader(trainset, batch_size=64, shuffle=True) ``` We have the training data loaded into `trainloader` and we make that an iterator with `iter(trainloader)`. 
Later, we'll use this to loop through the dataset for training, like

```python
for image, label in trainloader:
    ## do things with images and labels
```

You'll notice I created the `trainloader` with a batch size of 64, and `shuffle=True`. The batch size is the number of images we get in one iteration from the data loader and pass through our network, often called a *batch*. And `shuffle=True` tells it to shuffle the dataset every time we start going through the data loader again. But here I'm just grabbing the first batch so we can check out the data. We can see below that `images` is just a tensor with size `(64, 1, 28, 28)`. So, 64 images per batch, 1 color channel, and 28x28 images.

```
dataiter = iter(trainloader)
images, labels = dataiter.next()
print(type(images))
print(images.shape)
print(labels.shape)
```

This is what one of the images looks like.

```
plt.imshow(images[1].numpy().squeeze(), cmap='Greys_r');
```

First, let's try to build a simple network for this dataset using weight matrices and matrix multiplications. Then, we'll see how to do it using PyTorch's `nn` module which provides a much more convenient and powerful method for defining network architectures.

The networks you've seen so far are called *fully-connected* or *dense* networks. Each unit in one layer is connected to each unit in the next layer. In fully-connected networks, the input to each layer must be a one-dimensional vector (which can be stacked into a 2D tensor as a batch of multiple examples). However, our images are 28x28 2D tensors, so we need to convert them into 1D vectors. Thinking about sizes, we need to convert the batch of images with shape `(64, 1, 28, 28)` to have a shape of `(64, 784)`; 784 is 28 times 28. This is typically called *flattening*: we flatten the 2D images into 1D vectors.

Previously you built a network with one output unit. Here we need 10 output units, one for each digit.
We want our network to predict the digit shown in an image, so what we'll do is calculate probabilities that the image is of any one digit or class. This ends up being a discrete probability distribution over the classes (digits) that tells us the most likely class for the image. That means we need 10 output units for the 10 classes (digits). We'll see how to convert the network output into a probability distribution next. > **Exercise:** Flatten the batch of images `images`. Then build a multi-layer network with 784 input units, 256 hidden units, and 10 output units using random tensors for the weights and biases. For now, use a sigmoid activation for the hidden layer. Leave the output layer without an activation, we'll add one that gives us a probability distribution next. ``` print(images.shape) print(images.shape[0]) print(images.view(images.shape[0], -1).shape) ## Your solution def activation(x): return 1/(1 + torch.exp(-x)) inputs = images.view(images.shape[0], -1) n_input = 784 n_hidden = 256 n_output = 10 W1 = torch.randn(n_input, n_hidden) W2 = torch.randn(n_hidden, n_output) B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) h = activation(torch.mm(inputs, W1) + B1) out = torch.mm(h, W2) + B2 out.shape ``` Now we have 10 outputs for our network. We want to pass in an image to our network and get out a probability distribution over the classes that tells us the likely class(es) the image belongs to. Something that looks like this: <img src='assets/image_distribution.png' width=500px> Here we see that the probability for each class is roughly the same. This is representing an untrained network, it hasn't seen any data yet so it just returns a uniform distribution with equal probabilities for each class. To calculate this probability distribution, we often use the [**softmax** function](https://en.wikipedia.org/wiki/Softmax_function). 
Mathematically this looks like

$$
\Large \sigma(x_i) = \cfrac{e^{x_i}}{\sum_{k=1}^{K} e^{x_k}}
$$

What this does is squish each input $x_i$ between 0 and 1 and normalize the values to give you a proper probability distribution, where the probabilities sum to one.

> **Exercise:** Implement a function `softmax` that performs the softmax calculation and returns probability distributions for each example in the batch. Note that you'll need to pay attention to the shapes when doing this. If you have a tensor `a` with shape `(64, 10)` and a tensor `b` with shape `(64,)`, doing `a/b` will give you an error because PyTorch will try to do the division across the columns (called broadcasting) but you'll get a size mismatch. The way to think about this is for each of the 64 examples, you only want to divide by one value, the sum in the denominator. So you need `b` to have a shape of `(64, 1)`. This way PyTorch will divide the 10 values in each row of `a` by the one value in each row of `b`. Pay attention to how you take the sum as well. You'll need to define the `dim` keyword in `torch.sum`. Setting `dim=0` takes the sum across the rows while `dim=1` takes the sum across the columns.

```
torch.sum(out, dim=1).view(-1,1).shape

def softmax(x):
    b = torch.sum(torch.exp(x), dim=1).view(-1, 1)
    return torch.exp(x) / b

## TODO: Implement the softmax function here

# Here, out should be the output of the network in the previous exercise with shape (64,10)
probabilities = softmax(out)

# Does it have the right shape? Should be (64, 10)
print(probabilities.shape)
# Does it sum to 1?
print(probabilities.sum(dim=1))
```

## Building networks with PyTorch

PyTorch provides a module `nn` that makes building networks much simpler. Here I'll show you how to build the same one as above with 784 inputs, 256 hidden units, 10 output units and a softmax output.
``` from torch import nn class Network(nn.Module): def __init__(self): super().__init__() # Inputs to hidden layer linear transformation self.hidden = nn.Linear(784, 256) # Output layer, 10 units - one for each digit self.output = nn.Linear(256, 10) # Define sigmoid activation and softmax output self.sigmoid = nn.Sigmoid() self.softmax = nn.Softmax(dim=1) def forward(self, x): # Pass the input tensor through each of our operations x = self.hidden(x) x = self.sigmoid(x) x = self.output(x) x = self.softmax(x) return x ``` Let's go through this bit by bit. ```python class Network(nn.Module): ``` Here we're inheriting from `nn.Module`. Combined with `super().__init__()` this creates a class that tracks the architecture and provides a lot of useful methods and attributes. It is mandatory to inherit from `nn.Module` when you're creating a class for your network. The name of the class itself can be anything. ```python self.hidden = nn.Linear(784, 256) ``` This line creates a module for a linear transformation, $x\mathbf{W} + b$, with 784 inputs and 256 outputs and assigns it to `self.hidden`. The module automatically creates the weight and bias tensors which we'll use in the `forward` method. You can access the weight and bias tensors once the network (`net`) is created with `net.hidden.weight` and `net.hidden.bias`. ```python self.output = nn.Linear(256, 10) ``` Similarly, this creates another linear transformation with 256 inputs and 10 outputs. ```python self.sigmoid = nn.Sigmoid() self.softmax = nn.Softmax(dim=1) ``` Here I defined operations for the sigmoid activation and softmax output. Setting `dim=1` in `nn.Softmax(dim=1)` calculates softmax across the columns. ```python def forward(self, x): ``` PyTorch networks created with `nn.Module` must have a `forward` method defined. It takes in a tensor `x` and passes it through the operations you defined in the `__init__` method. 
```python
x = self.hidden(x)
x = self.sigmoid(x)
x = self.output(x)
x = self.softmax(x)
```

Here the input tensor `x` is passed through each operation and reassigned to `x`. We can see that the input tensor goes through the hidden layer, then a sigmoid function, then the output layer, and finally the softmax function. It doesn't matter what you name the variables here, as long as the inputs and outputs of the operations match the network architecture you want to build. The order in which you define things in the `__init__` method doesn't matter, but you'll need to sequence the operations correctly in the `forward` method.

Now we can create a `Network` object.

```
# Create the network and look at its text representation
model = Network()
model
```

You can define the network somewhat more concisely and clearly using the `torch.nn.functional` module. This is the most common way you'll see networks defined, as many operations are simple element-wise functions. We normally import this module as `F`, `import torch.nn.functional as F`.

```
import torch.nn.functional as F

class Network(nn.Module):
    def __init__(self):
        super().__init__()
        # Inputs to hidden layer linear transformation
        self.hidden = nn.Linear(784, 256)
        # Output layer, 10 units - one for each digit
        self.output = nn.Linear(256, 10)
        
    def forward(self, x):
        # Hidden layer with sigmoid activation
        x = F.sigmoid(self.hidden(x))
        # Output layer with softmax activation
        x = F.softmax(self.output(x), dim=1)
        
        return x
```

### Activation functions

So far we've only been looking at the sigmoid activation function, but in general any function can be used as an activation function. The only requirement is that for a network to approximate a non-linear function, the activation functions must be non-linear. Here are a few more examples of common activation functions: Tanh (hyperbolic tangent) and ReLU (rectified linear unit).
<img src="assets/activation.png" width=700px> In practice, the ReLU function is used almost exclusively as the activation function for hidden layers. ### Your Turn to Build a Network <img src="assets/mlp_mnist.png" width=600px> > **Exercise:** Create a network with 784 input units, a hidden layer with 128 units and a ReLU activation, then a hidden layer with 64 units and a ReLU activation, and finally an output layer with a softmax activation as shown above. You can use a ReLU activation with the `nn.ReLU` module or `F.relu` function. It's good practice to name your layers by their type of network, for instance 'fc' to represent a fully-connected layer. As you code your solution, use `fc1`, `fc2`, and `fc3` as your layer names. ``` ## Your solution here class MyNetwork(nn.Module): def __init__(self): super().__init__() self.fc1 = nn.Linear(784, 128) self.fc2 = nn.Linear(128, 64) self.fc3 = nn.Linear(64,10) def forward(self, x): x = F.relu(self.fc1(x)) x = F.relu(self.fc2(x)) x = F.softmax(self.fc3(x), dim=1) return x model = MyNetwork() ``` ### Initializing weights and biases The weights and such are automatically initialized for you, but it's possible to customize how they are initialized. The weights and biases are tensors attached to the layer you defined, you can get them with `model.fc1.weight` for instance. ``` print(model.fc1.weight) print(model.fc1.bias) ``` For custom initialization, we want to modify these tensors in place. These are actually autograd *Variables*, so we need to get back the actual tensors with `model.fc1.weight.data`. Once we have the tensors, we can fill them with zeros (for biases) or random normal values. ``` # Set biases to all zeros model.fc1.bias.data.fill_(0) # sample from random normal with standard dev = 0.01 model.fc1.weight.data.normal_(std=0.01) ``` ### Forward pass Now that we have a network, let's see what happens when we pass in an image. 
```
# Grab some data 
dataiter = iter(trainloader)
images, labels = dataiter.next()

# Resize images into a 1D vector, new shape is (batch size, color channels, image pixels) 
images.resize_(64, 1, 784)
# or images.resize_(images.shape[0], 1, 784) to automatically get batch size

# Forward pass through the network
img_idx = 0
ps = model.forward(images[img_idx,:])

img = images[img_idx]
helper.view_classify(img.view(1, 28, 28), ps)
```

As you can see above, our network has basically no idea what this digit is. That's because we haven't trained it yet; all the weights are random!

### Using `nn.Sequential`

PyTorch provides a convenient way to build networks like this where a tensor is passed sequentially through operations, `nn.Sequential` ([documentation](https://pytorch.org/docs/master/nn.html#torch.nn.Sequential)). Using this to build the equivalent network:

```
# Hyperparameters for our network
input_size = 784
hidden_sizes = [128, 64]
output_size = 10

# Build a feed-forward network
model = nn.Sequential(nn.Linear(input_size, hidden_sizes[0]),
                      nn.ReLU(),
                      nn.Linear(hidden_sizes[0], hidden_sizes[1]),
                      nn.ReLU(),
                      nn.Linear(hidden_sizes[1], output_size),
                      nn.Softmax(dim=1))
print(model)

# Forward pass through the network and display output
images, labels = next(iter(trainloader))
images.resize_(images.shape[0], 1, 784)
ps = model.forward(images[0,:])
helper.view_classify(images[0].view(1, 28, 28), ps)
```

Here our model is the same as before: 784 input units, a hidden layer with 128 units, ReLU activation, 64 unit hidden layer, another ReLU, then the output layer with 10 units, and the softmax output. The operations are available by passing in the appropriate index. For example, if you want to get the first Linear operation and look at the weights, you'd use `model[0]`.

```
print(model[0])
model[0].weight
```

You can also pass in an `OrderedDict` to name the individual layers and operations, instead of using incremental integers.
Note that dictionary keys must be unique, so _each operation must have a different name_.

```
from collections import OrderedDict
model = nn.Sequential(OrderedDict([
                      ('fc1', nn.Linear(input_size, hidden_sizes[0])),
                      ('relu1', nn.ReLU()),
                      ('fc2', nn.Linear(hidden_sizes[0], hidden_sizes[1])),
                      ('relu2', nn.ReLU()),
                      ('output', nn.Linear(hidden_sizes[1], output_size)),
                      ('softmax', nn.Softmax(dim=1))]))
model
```

Now you can access layers either by integer or by name

```
print(model[0])
print(model.fc1)
```

In the next notebook, we'll see how we can train a neural network to accurately predict the numbers appearing in the MNIST images.
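To make the mechanics of that forward pass concrete, here is a dependency-free sketch of the same 784 → 128 → 64 → 10 architecture in plain Python (the helper names `relu`, `softmax`, `linear`, and `make_layer` are illustrative, not PyTorch APIs). It shows why the softmax output always sums to 1, regardless of the random weights:

```python
import math
import random

def relu(v):
    return [max(0.0, x) for x in v]

def softmax(v):
    m = max(v)                          # subtract max for numerical stability
    exps = [math.exp(x - m) for x in v]
    s = sum(exps)
    return [e / s for e in exps]

def linear(x, W, b):
    # W is (out_features x in_features), mirroring nn.Linear's weight layout
    return [sum(wi * xi for wi, xi in zip(row, x)) + bi for row, bi in zip(W, b)]

def make_layer(n_in, n_out, rng):
    # weights ~ N(0, 0.01), biases zero, like the custom init above
    W = [[rng.gauss(0, 0.01) for _ in range(n_in)] for _ in range(n_out)]
    b = [0.0] * n_out
    return W, b

rng = random.Random(0)
layers = [make_layer(784, 128, rng), make_layer(128, 64, rng), make_layer(64, 10, rng)]

x = [rng.random() for _ in range(784)]   # a fake "flattened image"
for i, (W, b) in enumerate(layers):
    x = linear(x, W, b)
    x = relu(x) if i < len(layers) - 1 else softmax(x)

print(len(x), round(sum(x), 6))          # 10 class probabilities summing to 1
```

Since the weights are random, the 10 probabilities are roughly uniform, exactly like the untrained network's output above.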
# DataJoint U24 - Workflow DeepLabCut

## Introduction

This notebook gives a brief overview and introduces some useful DataJoint tools to facilitate the exploration.

+ DataJoint needs to be configured before running this notebook. If you haven't done so, refer to the [01-Configure](./01-Configure.ipynb) notebook.
+ If you are familiar with DataJoint and the workflow structure, proceed directly to the next notebook, [03-Process](./03-Process.ipynb), to run the workflow.
+ For a more thorough introduction to DataJoint functionality, please visit our [general tutorial site](http://codebook.datajoint.io/)

To load the local configuration, we will change the directory to the package root.

```
import os
if os.path.basename(os.getcwd())=='notebooks': os.chdir('..')
assert os.path.basename(os.getcwd())=='workflow-deeplabcut', ("Please move to the " + "workflow directory")
```

## Schemas and tables

By importing from `workflow_deeplabcut`, we'll run the activation functions that declare the tables in these schemas. If these tables are already declared, we'll gain access.

```
import datajoint as dj
from workflow_deeplabcut.pipeline import lab, subject, session, train, model
```

Each module contains a schema object that enables interaction with the schema in the database. For more information about managing the upstream tables, see our [session workflow](https://github.com/datajoint/workflow-session). In this case, lab is required because the pipeline adds a `Device` table to the lab schema to keep track of camera IDs. The pipeline also adds a `VideoRecording` table to the session schema.

`dj.list_schemas()` lists all schemas a user has access to in the current database

```
dj.list_schemas()
```

`<schema>.schema.list_tables()` will provide names for each table in the format used under the hood.

```
train.schema.list_tables()
```

`dj.Diagram()` plots tables and dependencies in a schema.
To see additional upstream or downstream connections, add `- N` or `+ N`, where N is the number of additional links.

While the `model` schema is required for pose estimation, the `train` schema is optional, and can be used to manage model training within DataJoint

```
dj.Diagram(train) #- 1
dj.Diagram(model)
```

### Table tiers

- **Manual table**: green box, manually inserted table, expect new entries daily, e.g. Subject, ProbeInsertion.
- **Lookup table**: gray box, pre-inserted table, commonly used for general facts or parameters, e.g. Strain, ClusteringMethod, ClusteringParamSet.
- **Imported table**: blue oval, auto-processing table, where the processing depends on importing external files, e.g. Clustering requires output files from kilosort2.
- **Computed table**: red circle, auto-processing table, where the processing does not depend on files external to the database.
- **Part table**: plain text, an appendix to its master table; all the part entries of a given master entry represent an intact set for that master entry, e.g. Unit of a CuratedClustering.

### Dependencies

- **One-to-one primary**: thick solid line, shares the exact same primary key, meaning the child table inherits all the primary key fields from the parent table as its own primary key.
- **One-to-many primary**: thin solid line, inherits the primary key from the parent table, but has additional field(s) as part of the primary key as well
- **Secondary dependency**: dashed line, the child table inherits the primary key fields from the parent table as its own secondary attribute.
```
# plot diagram of tables in multiple schemas
dj.Diagram(subject.Subject) + dj.Diagram(session.Session) + dj.Diagram(model)
lab.schema.list_tables()
# plot diagram of selected tables and schemas
(dj.Diagram(subject.Subject) + dj.Diagram(session.Session)
 + dj.Diagram(model.VideoRecording) + dj.Diagram(model.PoseEstimationTask))
# preview columns and contents in a table
model.VideoRecording.File()
```

`describe()` shows the table definition with foreign key references

```
train.TrainingTask.describe()
```

`heading` shows attribute definitions regardless of foreign key references

```
model.Model.heading
```

## Other Elements installed with the workflow

[`lab`](https://github.com/datajoint/element-lab): lab management related information, such as Lab, User, Project, Protocol, Source.

```
dj.Diagram(lab)
```

[`subject`](https://github.com/datajoint/element-animal): general animal information, such as User, Genetic background, Death, etc.

```
dj.Diagram(subject)
subject.Subject.describe();
```

[`session`](https://github.com/datajoint/element-session): general information about experimental sessions.

```
dj.Diagram(session)
session.Session.describe()
```

## Summary and next step

- This notebook introduced the overall structure of the schemas and tables in the workflow, along with relevant tools to explore the schema structure and table definitions.
- The [next notebook](./03-Process.ipynb) will introduce the detailed steps to run through `workflow-deeplabcut`.
```
# Accessing documentation with ?
# We can use the help function to read the documentation
print(help(len))

# or we can use the ? operator
len?

# The notation works for objects too
L = [1,2,4,5]
L.append?
L?

# This also works for functions that we create ourselves; ? returns the docstring of the function
def square(n):
    '''return the square of the number'''
    return n**2

square?

# Accessing the source code with ??
square??

# Sometimes ?? might not return the source code, because the object might be implemented in another language
from collections import deque as d
d??

# Wildcard matching
# We can use the wildcard * and type the known part to retrieve the unknown command
# Example: looking at the different types of warnings
*Warning?

# We can use this on object attributes too
d.app*?

# Shortcuts in the IPython notebook
'''
Navigation shortcuts
Ctrl-a                         Move cursor to the beginning of the line
Ctrl-e                         Move cursor to the end of the line
Ctrl-b or the left arrow key   Move cursor back one character
Ctrl-f or the right arrow key  Move cursor forward one character

Text entry shortcuts
Backspace key                  Delete previous character in line
Ctrl-d                         Delete next character in line
Ctrl-k                         Cut text from cursor to end of line
Ctrl-u                         Cut text from beginning of line to cursor
Ctrl-y                         Yank (i.e.
paste) text that was previously cut
Ctrl-t                         Transpose (i.e., switch) previous two characters

Command history shortcuts
Ctrl-p (or the up arrow key)   Access previous command in history
Ctrl-n (or the down arrow key) Access next command in history
Ctrl-r                         Reverse-search through command history

Keystroke                      Action
Ctrl-l                         Clear terminal screen
Ctrl-c                         Interrupt current Python command
Ctrl-d                         Exit IPython session
'''

# MAGIC COMMANDS
# We can use %run to execute a Python (.py) file in the notebook; any functions defined in the script can then be used by the notebook
# We can use %timeit to time a single statement; to time a multiline block we can use %%timeit
%%timeit
L = []
for i in range(10000):
    L.append(i**2)

%timeit L = [n ** 2 for n in range(1000)]
# Timing the list comprehension shows that its execution is very efficient

# Input/output history commands
# We can use the In/Out objects to print the input and output history; say we start the session below
import math
math.sin(2)
math.cos(2)

# print(In) will print all the commands entered in the current notebook
print(In)  # returns a list of all the commands executed so far

# Similarly we can use 'Out' to print the outputs of these commands
print(Out)  # returns a dictionary mapping input numbers to outputs

# We can also suppress output by placing a semicolon at the end of the statement
math.sin(2) + math.cos(2);
# The ';' is used especially often with matplotlib

# For accessing a previous batch of inputs we can use the %history command
#%history?
%history -n 1-4

# Shell commands
# We can use '!' for executing OS commands
# The shell is a direct way to interact textually with the computer
!ls
!pwd
!echo "printing from the shell"
contents = !ls
print(contents)

# Errors and debugging
# Controlling exceptions: %xmode
# Using %xmode we can control the amount of content in an error message
%xmode Plain

def func2(x):
    a = x
    b = 0
    return a/b

func2(4)

# We can use '%xmode Verbose' to have additional information reported with the error
%xmode Verbose
func2(90)

# We can restore the default mode with
#%xmode Default

# Debugging
# The standard tool for interactive debugging is pdb (the Python debugger); ipdb is the IPython version
# We can also use the %debug magic command: after an exception it opens an interactive debugging shell
# The ipdb prompt lets you explore the current state of the stack, explore the available variables, and even run Python commands!
%debug

# Other debugging commands in the shell
'''
list       Show the current location in the file
h(elp)     Show a list of commands, or find help on a specific command
q(uit)     Quit the debugger and the program
c(ontinue) Quit the debugger, continue in the program
n(ext)     Go to the next step of the program
<enter>    Repeat the previous command
p(rint)    Print variables
s(tep)     Step into a subroutine
r(eturn)   Return out of a subroutine
'''

# We can use these commands to figure out the execution time and resource use of various snippets of code
'''
%time:   Time the execution of a single statement
%timeit: Time repeated execution of a single statement for more accuracy
%prun:   Run code with the profiler
%lprun:  Run code with the line-by-line profiler
%memit:  Measure the memory use of a single statement
%mprun:  Run code with the line-by-line memory profiler
'''

# Data Structures and Processing for Machine Learning
import numpy as np
np.__version__

# Data types in Python
# Python is a dynamically typed language; under the hood each object is a C structure disguised in Python
'''struct _longobject {
    long ob_refcnt;
    PyTypeObject *ob_type;
    size_t ob_size;
    long ob_digit[1];
};

ob_refcnt, a reference count that helps Python silently handle memory allocation and deallocation
ob_type, which encodes the type of the variable
ob_size, which specifies the size of the following data members
ob_digit, which contains the actual integer value that we expect the Python variable to represent.
'''
# All this additional information comes at a cost of memory and computation

# A list is also a complex structure that can accommodate multiple data types, so we make use of a NumPy array for manipulating integer data
# Although a list is flexible, a NumPy array is far more efficient for storing and manipulating data
# We can also use the built-in 'array' data structure for computationally efficient manipulations
import array
L = list(range(10))
arr = array.array('i', L)  # 'i' is a type code indicating the elements of the array are integers
arr

import numpy as np

# Creating arrays
np.array([1,2,3,4,5])

# Unlike a list, a NumPy array needs to have a single data type
np.array([1, 2, 3, 4], dtype='float32')  # We can explicitly declare the type using the 'dtype' argument

# nested lists result in multi-dimensional arrays
np.array([range(i, i + 3) for i in [2, 4, 6]])

# Create a length-10 integer array filled with zeros
np.zeros(10, dtype=int)

# Create a 3x5 floating-point array filled with ones
np.ones((3, 5), dtype=float)

# Create a 3x5 array filled with 3.14
np.full((3, 5), 3.14)

# Create an array filled with a linear sequence
# Starting at 0, ending at 20, stepping by 2
# (this is similar to the built-in range() function)
np.arange(0, 20, 2).reshape(5, 2)  # We can use reshape() to convert the shape as we want

# Create an array of 25 values evenly spaced between 0 and 1
np.linspace(0, 1, 25)

# Create a 3x3 array of uniformly distributed
# random values between 0 and 1
np.random.random((3, 3))

# Create a 3x3 array of normally distributed random values
# with mean 0 and standard deviation 1
np.random.normal(0, 1, (3,
3))

# Create a 3x3 array of random integers in the interval [0, 10000)
np.random.randint(0, 10000, (3, 3))

# Create a 3x3 identity matrix
np.eye(3)

# Create an uninitialized array of three floats
# The values will be whatever happens to already exist at that memory location
np.empty(3)

# NumPy built-in data types (NumPy is implemented in C)
'''
bool_      Boolean (True or False) stored as a byte
int_       Default integer type (same as C long; normally either int64 or int32)
intc       Identical to C int (normally int32 or int64)
intp       Integer used for indexing (same as C ssize_t; normally either int32 or int64)
int8       Byte (-128 to 127)
int16      Integer (-32768 to 32767)
int32      Integer (-2147483648 to 2147483647)
int64      Integer (-9223372036854775808 to 9223372036854775807)
uint8      Unsigned integer (0 to 255)
uint16     Unsigned integer (0 to 65535)
uint32     Unsigned integer (0 to 4294967295)
uint64     Unsigned integer (0 to 18446744073709551615)
float_     Shorthand for float64
float16    Half precision float: sign bit, 5 bits exponent, 10 bits mantissa
float32    Single precision float: sign bit, 8 bits exponent, 23 bits mantissa
float64    Double precision float: sign bit, 11 bits exponent, 52 bits mantissa
complex_   Shorthand for complex128.
complex64  Complex number, represented by two 32-bit floats
complex128 Complex number, represented by two 64-bit floats
'''

# NumPy array attributes
np.random.seed(0)  # seed for reproducibility

# 3 arrays with random integers and different dimensions
x1 = np.random.randint(10, size=6)          # One-dimensional array
x2 = np.random.randint(10, size=(3, 4))     # Two-dimensional array
x3 = np.random.randint(10, size=(3, 4, 5))  # Three-dimensional array

print("x3 ndim: ", x3.ndim)               # number of dimensions
print("x3 shape:", x3.shape)              # size of each dimension
print("x3 size: ", x3.size)               # total size of the array
print("dtype:", x3.dtype)                 # data type stored in the array
print("itemsize:", x3.itemsize, "bytes")  # size of a single item in bytes
print("nbytes:", x3.nbytes, "bytes")      # total array size; nbytes = itemsize * size

# Accessing single elements
print(x1)
print(x1[0])   # prints the first element
print(x1[4])   # prints the fifth element
print(x1[-1])  # indexing from the end (prints the last element)
print(x1[-2])  # prints the second-to-last element

# A multi-dimensional array can be accessed using a comma-separated tuple of indices
print(x2)
print(x2[0,0])   # array_name[row, column]
print(x2[2,0])   # 3rd row (rows 0,1,2), first column
print(x2[2,-1])  # 3rd row, last column
x2[0,0] = 90     # values can also be modified at any index
# but if we assign 'x1[0] = 9.9', the value gets truncated to 9 because 'x1' holds integers

# Accessing elements via slicing
# x[start:stop:step]
print(x1)
print(x1[0:2])    # returns the first 2 elements
print(x1[1:])     # returns all elements from the 2nd position
print(x1[0:3:2])  # returns elements from position 0 up to 2 with step '2' (so 5, 3)
print(x1[::2])    # every other element
print(x1[1::2])   # every other element, starting at index 1

# If step is negative, the slice is returned in reverse order; internally start and stop are swapped
print(x1[::-1])   # all elements, reversed
print(x1[3::-1])  # reversed from index 3 to the start, inclusive of 3
print(x1[4:1:-1]) # reversed from index 4 down to the 3rd element

# Multi-dimensional slicing
print(x2)
print(x2[:2,:3])  # first two rows, first three columns
print('\n')
print(x2[:3, ::2])     # all rows, every other column (step 2)
print(x2[::-1, ::-1])  # sub-array dimensions can also be reversed

# Accessing array rows and columns
print(x2)
print(x2[:, 0])  # first column of x2
print(x2[0, :])  # first row of x2
print(x2[0])     # equivalent to x2[0, :], the first row

# Slicing returns a view, not a copy: changing the sub-array changes the original array
# To actually create a copy we can use the copy() method
x2_sub_copy = x2[:2, :2].copy()
print(x2_sub_copy)

# Reshaping arrays
grid = np.arange(1, 10).reshape((3, 3))
print(grid)
# for this to work, the initial array needs to have a matching total size

# Array concatenation and splitting
# We can use np.concatenate, np.hstack, np.vstack
x = np.array([1, 2, 3])
y = np.array([3, 2, 1])
np.concatenate([x, y])

# We can also concatenate more than two arrays at once
z = [99, 99, 99]
print(np.concatenate([x, y, z]))

# Concatenating a 2D array
grid = np.array([[1, 2, 3],
                 [4, 5, 6]])
grids = np.concatenate([grid, grid])
print(grids)

# concatenate along the second axis (zero-indexed)
print(np.concatenate([grid, grid], axis=1))

# Using vstack and hstack
x = np.array([1, 2, 3])
grid = np.array([[9, 8, 7],
                 [6, 5, 4]])

print(np.vstack([x, grid]))       # stacks the arrays vertically
print(np.hstack([grids, grids]))  # concatenates horizontally, side by side
# Similarly we can use np.dstack to concatenate along the third axis

# Splitting is the opposite of concatenation
# We use np.split, np.vsplit and np.hsplit to split an array
x = [1, 2, 3, 99, 99, 3, 2, 1]
x1, x2, x3 = np.split(x, [3, 5])  # these are the points where to split; for 'n' points we get 'n+1' sub-arrays
print(x1, x2, x3)

# using np.vsplit
grid = np.arange(16).reshape((4, 4))
print('\n')
print(grid)
upper, lower = np.vsplit(grid, [2])  # like a horizontal cut at the given row
print(upper)
print(lower)

# using np.hsplit
left, right = np.hsplit(grid, [2])
print(left)
print(right)
# Similarly we can use np.dsplit to split along the third axis

# NumPy for computation
# NumPy is very fast when we use it for vectorized operations, generally implemented through NumPy universal functions (ufuncs)
# They make repeated calculations on array elements very efficient

# Slowness of Python loops: loops in CPython are slow due to the dynamic and interpreted nature of the language
# So for many types of operations NumPy is far more efficient, as each operation is a statically typed, compiled routine (a vectorized operation)
# A vectorized operation is simply applied to the whole array, and is then applied to each element internally

# Vectorized operations in NumPy are implemented via ufuncs, whose main purpose is to quickly execute repeated operations
# on values in NumPy arrays. Ufuncs are extremely flexible: above we saw an operation between a scalar and an array,
# but we can also operate between two arrays
print(np.arange(5) / np.arange(1, 6))

# Multi-dimensional arrays work too
x = np.arange(9).reshape((3, 3))
2 ** x

# Array arithmetic
# NumPy's ufuncs feel very natural to use because they make use of Python's native arithmetic operators.
# The standard addition, subtraction, multiplication, and division can all be used
x = np.arange(4)
print("x =", x)
print("x + 5 =", x + 5)
print("x - 5 =", x - 5)
print("x * 2 =", x * 2)
print("x / 2 =", x / 2)
print("x // 2 =", x // 2)  # floor division
print("-x = ", -x)
print("x ** 2 = ", x ** 2)
print("x % 2 = ", x % 2)
print(-(0.5*x + 1) ** 2)  # These can be strung together as you wish

# We can also call the equivalent functions instead
'''
+   np.add           Addition (e.g., 1 + 1 = 2)
-   np.subtract      Subtraction (e.g., 3 - 2 = 1)
-   np.negative      Unary negation (e.g., -2)
*   np.multiply      Multiplication (e.g., 2 * 3 = 6)
/   np.divide        Division (e.g., 3 / 2 = 1.5)
//  np.floor_divide  Floor division (e.g., 3 // 2 = 1)
**  np.power         Exponentiation (e.g., 2 ** 3 = 8)
%   np.mod           Modulus/remainder (e.g., 9 % 4 = 1)
'''

x = np.array([-2, -1, 0, 1, 2])
print(abs(x))

# Array trigonometry
theta = np.linspace(0, np.pi, 3)
print("theta = ", theta)
print("sin(theta) = ", np.sin(theta))
print("cos(theta) = ", np.cos(theta))
print("tan(theta) = ", np.tan(theta))

x = [-1, 0, 1]
print("x = ", x)
print("arcsin(x) = ", np.arcsin(x))
print("arccos(x) = ", np.arccos(x))
print("arctan(x) = ", np.arctan(x))

# Exponents and logarithms
x = [1, 2, 3]
print("x =", x)
print("e^x =", np.exp(x))
print("2^x =", np.exp2(x))
print("3^x =", np.power(3, x))

x = [1, 2, 4, 10]
print("x =", x)
print("ln(x) =", np.log(x))
print("log2(x) =", np.log2(x))
print("log10(x) =", np.log10(x))

# Special functions
from scipy import special

# Gamma functions (generalized factorials) and related functions
x = [1, 5, 10]
print("gamma(x) =", special.gamma(x))
print("ln|gamma(x)| =", special.gammaln(x))
print("beta(x, 2) =", special.beta(x, 2))

# Error function (integral of a Gaussian),
# its complement, and its inverse
x = np.array([0, 0.3, 0.7, 1.0])
print("erf(x) =", special.erf(x))
print("erfc(x) =", special.erfc(x))
print("erfinv(x) =", special.erfinv(x))

# Aggregation: min, max, and more
import numpy as np
L = np.random.random(100)
sum(L)

# Using np.sum()
print(np.sum(L))

# NumPy is fast because it executes the operations as compiled code
big_array = np.random.rand(1000000)
%timeit sum(big_array)
%timeit np.sum(big_array)
# The difference in execution time is a couple of orders of magnitude

# Max and min of big_array
min(big_array), max(big_array)
np.min(big_array), np.max(big_array)
%timeit min(big_array)
%timeit np.min(big_array)

# Multi-dimensional array aggregation
M = np.random.random((3, 4))
print(M)

# By default, each aggregation function aggregates over the entire array
M.sum()

# Aggregation functions take an additional argument specifying the axis along which the aggregate is computed.
# For example, we can find the minimum value within each column by specifying axis=0:
M.min(axis=0)

# Additional aggregation functions in NumPy
'''
Function Name   NaN-safe Version   Description
np.sum          np.nansum          Compute sum of elements
np.prod         np.nanprod         Compute product of elements
np.mean         np.nanmean         Compute mean of elements
np.std          np.nanstd          Compute standard deviation
np.var          np.nanvar          Compute variance
np.min          np.nanmin          Find minimum value
np.max          np.nanmax          Find maximum value
np.argmin       np.nanargmin       Find index of minimum value
np.argmax       np.nanargmax       Find index of maximum value
np.median       np.nanmedian       Compute median of elements
np.percentile   np.nanpercentile   Compute rank-based statistics of elements
np.any          N/A                Evaluate whether any elements are true
np.all          N/A                Evaluate whether all elements are true
'''

# Broadcasting for computation on NumPy arrays
# For arrays of the same size, binary operations are performed element-wise
a = np.array([0, 1, 2])
b = np.array([5, 5, 5])
print(a + b)
print(a + 5)

# Adding a one-dimensional array to a two-dimensional array
M = np.ones((3, 3))
print(M + a)  # 'a' is stretched, or broadcast, across the second dimension in order to match the shape of M.
# Masking in NumPy arrays: we use masking to extract, modify, or count the values in an array based on some criterion
# Example: counting all values greater than a certain value

# Comparison operators as ufuncs
x = np.array([1, 2, 3, 4, 5])
print(x < 3)  # less than
print((2 * x) == (x ** 2))
'''
Operator  Equivalent ufunc    Operator  Equivalent ufunc
==        np.equal            !=        np.not_equal
<         np.less             <=        np.less_equal
>         np.greater          >=        np.greater_equal
'''

# how many values are less than 3?
print(np.count_nonzero(x < 3))

# Fancy indexing
rand = np.random.RandomState(42)
x = rand.randint(100, size=10)
print(x)

# Accessing several elements at once
print([x[3], x[7], x[2]])

# Alternatively, we can access the elements with a list of indices
ind = [3, 7, 4]
print(x[ind])

# Sorting arrays
x = np.array([2, 1, 4, 3, 5])
print(np.sort(x))  # np.sort returns a sorted copy of the array

# argsort returns the indices of the elements in sorted order
x = np.array([2, 1, 4, 3, 5])
i = np.argsort(x)
print(i)

# Sorting row-wise or column-wise
rand = np.random.RandomState(42)
X = rand.randint(0, 10, (4, 6))
print(X)

# sort each column of X
print(np.sort(X, axis=0))

# Handling missing data
import pandas as pd
data = pd.Series([1, np.nan, 'hello', None])
data.isnull()  # detects missing values in a pandas Series
data.dropna()  # drops the null values present in the Series

# We can drop null values along different axes of a DataFrame:
# df.dropna(axis='columns')
# df.dropna(axis='columns', how='all')
# df.dropna(axis='rows', thresh=3)  # 'thresh' specifies the minimum number of non-null values for a row to be kept

data = pd.Series([1, np.nan, 2, None, 3], index=list('abcde'))
data.fillna(0)  # fills null values with zero

# forward-fill
data.fillna(method='ffill')  # propagates the previous value forward

# back-fill
data.fillna(method='bfill')
# data.fillna(method='ffill', axis=1)  # we can also specify the axis along which to fill

# Pivot tables in pandas
import seaborn as sns
titanic = sns.load_dataset('titanic')
titanic.pivot_table('survived', index='sex', columns='class')

# Date and time tools for handling time series data
from datetime import datetime
datetime(year=2015, month=7, day=4)

# Using the dateutil module we can parse dates in string format
from dateutil import parser
date = parser.parse("4th of July, 2015")
date

# Consecutive dates using np.arange
date = np.array('2015-07-04', dtype=np.datetime64)
print(date)
print(date + np.arange(12))

# Datetime in pandas
import pandas as pd
date = pd.to_datetime("4th of July, 2015")
print(date)
print(date.strftime('%A'))

# Vectorized operations on the same object
print(date + pd.to_timedelta(np.arange(12), 'D'))

# Visualizations
# Simple line plots
%matplotlib inline
import matplotlib.pyplot as plt
plt.style.use('seaborn-whitegrid')
import numpy as np

fig = plt.figure()
ax = plt.axes()
x = np.linspace(0, 10, 1000)
ax.plot(x, np.sin(x));
plt.plot(x, np.cos(x));

plt.plot(x, np.sin(x - 0), color='blue')        # specify color by name
plt.plot(x, np.sin(x - 1), color='g')           # short color code (rgbcmyk)
plt.plot(x, np.sin(x - 2), color='0.75')        # grayscale between 0 and 1
plt.plot(x, np.sin(x - 3), color='#FFDD44')     # hex code (RRGGBB from 00 to FF)
plt.plot(x, np.sin(x - 4), color=(1.0,0.2,0.3)) # RGB tuple, values 0 to 1
plt.plot(x, np.sin(x - 5), color='chartreuse'); # all HTML color names supported

plt.plot(x, x + 0, linestyle='solid')
plt.plot(x, x + 1, linestyle='dashed')
plt.plot(x, x + 2, linestyle='dashdot')
plt.plot(x, x + 3, linestyle='dotted');

# For short, you can use the following codes:
plt.plot(x, x + 4, linestyle='-')   # solid
plt.plot(x, x + 5, linestyle='--')  # dashed
plt.plot(x, x + 6, linestyle='-.')  # dashdot
plt.plot(x, x + 7, linestyle=':');  # dotted

plt.plot(x, x + 0, '-g')   # solid green
plt.plot(x, x + 1, '--c')  # dashed cyan
plt.plot(x, x + 2, '-.k')  # dashdot black
plt.plot(x, x + 3, ':r');  # dotted red

plt.plot(x, np.sin(x))
plt.xlim(-1, 11)
plt.ylim(-1.5, 1.5);

x = np.linspace(0, 10, 30)
y = np.sin(x)
plt.plot(x, y, 'o', color='black');

rng = np.random.RandomState(0)
for marker in ['o', '.', ',', 'x', '+', 'v', '^', '<', '>', 's', 'd']:
    plt.plot(rng.rand(5), rng.rand(5), marker,
             label="marker='{0}'".format(marker))
plt.legend(numpoints=1)
plt.xlim(0, 1.8);

plt.plot(x, y, '-p', color='gray',
         markersize=15, linewidth=4,
         markerfacecolor='white',
         markeredgecolor='gray',
         markeredgewidth=2)
plt.ylim(-1.2, 1.2);

rng = np.random.RandomState(0)
x = rng.randn(100)
y = rng.randn(100)
colors = rng.rand(100)
sizes = 1000 * rng.rand(100)
plt.scatter(x, y, c=colors, s=sizes, alpha=0.3, cmap='viridis')
plt.colorbar();  # show color scale

from sklearn.datasets import load_iris
iris = load_iris()
features = iris.data.T
plt.scatter(features[0], features[1], alpha=0.2,
            s=100*features[3], c=iris.target, cmap='viridis')
plt.xlabel(iris.feature_names[0])
plt.ylabel(iris.feature_names[1]);

# Contour plots
def f(x, y):
    return np.sin(x) ** 10 + np.cos(10 + y * x) * np.cos(x)

x = np.linspace(0, 5, 50)
y = np.linspace(0, 5, 40)
X, Y = np.meshgrid(x, y)
Z = f(X, Y)
plt.contour(X, Y, Z, colors='black');
plt.contour(X, Y, Z, 20, cmap='RdGy');
```
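As a compact recap of the NumPy techniques covered above (vectorized ufuncs, boolean masking, fancy indexing, and axis-wise aggregation), here is a small self-contained sketch combining them on one array:

```python
import numpy as np

rng = np.random.RandomState(42)
M = rng.randint(0, 10, (4, 6))  # 4x6 array of random integers in [0, 10)

# vectorized ufunc: square every element at once, no Python loop
squares = M ** 2

# boolean masking: count and extract values meeting a criterion
mask = M > 5
big_values = M[mask]            # 1D array of all entries greater than 5

# fancy indexing: pick specific rows by a list of indices
first_and_last = M[[0, -1]]     # shape (2, 6)

# axis-wise aggregation: column minima and row sums
col_min = M.min(axis=0)         # shape (6,)
row_sum = M.sum(axis=1)         # shape (4,)

print(mask.sum(), big_values.size, first_and_last.shape, col_min.shape, row_sum.shape)
```

Note that `mask.sum()` and `big_values.size` always agree: summing a boolean mask counts its `True` entries, which is exactly how many elements the mask extracts.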
``` import matplotlib.pyplot as plt from sklearn.feature_extraction.text import TfidfVectorizer from sklearn.cluster import KMeans from sklearn.decomposition import PCA from sklearn.manifold import TSNE from pathlib import Path import pandas as pd import numpy as np random_state = 100 all_txt_files =[] for file in Path("all_plus").rglob("*.txt"): all_txt_files.append(file.parent / file.name) # counts the length of the list all_docs = [] for txt_file in all_txt_files: with open(txt_file, encoding='utf-8') as f: txt_file_as_string = f.read() all_docs.append(txt_file_as_string) print(len(all_docs)) exp_list = [] for i in range(1,16): exp_list.append(0) for i in range(1,16): exp_list.append(1) for i in range(1,16): exp_list.append(2) for i in range(1,16): exp_list.append(3) for i in range(1,16): exp_list.append(4) d = {"text": all_docs, "label": exp_list} df = pd.DataFrame(d) df stop = ['маған', 'оған', 'саған', 'біздің', 'сіздің', 'оның', 'бізге', 'сізге', 'оларға', 'біздерге', 'сіздерге', 'оларға', 'менімен', 'сенімен', 'онымен', 'бізбен', 'сізбен', 'олармен', 'біздермен', 'сіздермен', 'менің', 'сенің', 'біздің', 'сіздің', 'оның', 'біздердің', 'сіздердің', 'олардың', 'маған', 'саған', 'оған', 'менен', 'сенен', 'одан', 'бізден', 'сізден', 'олардан', 'біздерден', 'сіздерден', 'олардан', 'айтпақшы', 'сонымен', 'сондықтан', 'бұл', 'осы', 'сол', 'анау', 'мынау', 'сонау', 'осынау', 'ана', 'мына', 'сона', 'әні', 'міне', 'өй', 'үйт', 'бүйт', 'біреу', 'кейбіреу', 'кейбір', 'қайсыбір', 'әрбір', 'бірнеше', 'бірдеме', 'бірнеше', 'әркім', 'әрне', 'әрқайсы', 'әрқалай', 'әлдекім','ах', 'ох', 'эх', 'ай', 'эй', 'ой', 'тағы', 'тағыда', 'әрине', 'жоқ', 'сондай', 'осындай', 'осылай', 'солай', 'мұндай', 'бұндай', 'мен', 'сен', 'ол', 'біз', 'біздер', 'олар', 'сіз', 'сіздер', 'әлдене', 'әлдеқайдан', 'әлденеше', 'әлдеқалай', 'әлдеқашан', 'алдақашан', 'еш', 'ешкім', 'ешбір', 'ештеме', 'дәнеңе', 'ешқашан', 'ешқандай', 'ешқайсы', 'емес', 'бәрі', 'барлық', 'барша', 'бар', 'күллі', 'бүкіл', 
'түгел', 'өз', 'өзім', 'өзің', 'өзінің', 'өзіме', 'өзіне', 'өзімнің', 'өзі', 'өзге', 'менде', 'сенде', 'онда', 'менен', 'сенен\tонан', 'одан', 'ау', 'па', 'ей', 'әй', 'е', 'уа', 'уау', 'уай', 'я', 'пай', 'ә', 'о', 'оһо', 'ой', 'ие', 'аһа', 'ау', 'беу', 'мәссаған', 'бәрекелді', 'әттегенай', 'жаракімалла', 'масқарай', 'астапыралла', 'япырмай', 'ойпырмай', 'кәне', 'кәнеки', 'ал', 'әйда', 'кәні', 'міне', 'әні', 'сорап', 'қош-қош', 'пфша', 'пішә', 'құрау-құрау', 'шәйт', 'шек', 'моһ', 'тәк', 'құрау', 'құр', 'кә', 'кәһ', 'күшім', 'күшім', 'мышы', 'пырс', 'әукім', 'алақай', 'паһ-паһ', 'бәрекелді', 'ура', 'әттең', 'әттеген-ай', 'қап', 'түге', 'пішту', 'шіркін', 'алатау', 'пай-пай', 'үшін', 'сайын', 'сияқты', 'туралы', 'арқылы', 'бойы', 'бойымен', 'шамалы', 'шақты', 'қаралы', 'ғұрлы', 'ғұрлым', 'шейін', 'дейін', 'қарай', 'таман', 'салым', 'тарта', 'жуық', 'таяу', 'гөрі', 'бері', 'кейін', 'соң', 'бұрын', 'бетер', 'қатар', 'бірге', 'қоса', 'арс', 'гүрс', 'дүрс', 'қорс', 'тарс', 'тырс', 'ырс', 'барқ', 'борт', 'күрт', 'кірт', 'морт', 'сарт', 'шырт', 'дүңк', 'күңк', 'қыңқ', 'мыңқ', 'маңқ', 'саңқ', 'шаңқ', 'шіңк', 'сыңқ', 'таңқ', 'тыңқ', 'ыңқ', 'болп', 'былп', 'жалп', 'желп', 'қолп', 'ірк', 'ырқ', 'сарт-сұрт', 'тарс-тұрс', 'арс-ұрс', 'жалт-жалт', 'жалт-жұлт', 'қалт-қалт', 'қалт-құлт', 'қаңқ-қаңқ', 'қаңқ-құңқ', 'шаңқ-шаңқ', 'шаңқ-шұңқ', 'арбаң-арбаң', 'бүгжең-бүгжең', 'арсалаң-арсалаң', 'ербелең-ербелең', 'батыр-бұтыр', 'далаң-далаң', 'тарбаң-тарбаң', 'қызараң-қызараң', 'қаңғыр-күңгір', 'қайқаң-құйқаң', 'митың-митың', 'салаң-сұлаң', 'ыржың-тыржың', 'бірақ', 'алайда', 'дегенмен', 'әйтпесе', 'әйткенмен', 'себебі', 'өйткені', 'сондықтан', 'үшін', 'сайын', 'сияқты', 'туралы', 'арқылы', 'бойы', 'бойымен', 'шамалы', 'шақты', 'қаралы', 'ғұрлы', 'ғұрлым', 'гөрі', 'бері', 'кейін', 'соң', 'бұрын', 'бетер', 'қатар', 'бірге', 'қоса', 'шейін', 'дейін', 'қарай', 'таман', 'салым', 'тарта', 'жуық', 'таяу', 'арнайы', 'осындай', 'ғана', 'қана', 'тек', 'әншейін', 'мен', 'да', 'бола', 'бір', 'де', 
'сен', 'мені', 'сені', 'және', 'немесе', 'оны', 'еді', 'жатыр', 'деп', 'деді', 'тұр', 'тар', 'жаты', 'болып', ' ']

vec = TfidfVectorizer(analyzer="word", stop_words=stop, use_idf=True, smooth_idf=True, ngram_range=(1, 1))
features = vec.fit_transform(df.text.values)
```

# Results

```
df_idf = pd.DataFrame(vec.idf_, index=vec.get_feature_names_out(), columns=["idf_weights"])
df_idf.sort_values(by=['idf_weights'], ascending=False)[0:20]

# Top-5 tf-idf terms for each of the 75 documents
tdf = []
for i in range(75):
    expp = pd.DataFrame(features[i].T.todense(), index=vec.get_feature_names_out(),
                        columns=["tfidf"]).sort_values(by=["tfidf"], ascending=False)[:5]
    expp['doc_id'] = i
    tdf.append(expp)
dfnw = pd.concat(tdf)
dfnw

# The N most frequent terms over the whole corpus
# (argsort is ascending, so reverse it to get the top terms)
N = 20
idx = np.ravel(features.sum(axis=0).argsort(axis=1))[::-1][:N]
top_n_words = np.array(vec.get_feature_names_out())[idx].tolist()
top_n_words
```

# Clustering

```
num_clusters = 5        # one cluster per expected class label
pca_num_components = 2  # 2-D projection for plotting

cls = KMeans(n_clusters=num_clusters, random_state=random_state)
cls.fit(features)
cls.labels_

pca = PCA(n_components=pca_num_components, random_state=random_state)
reduced_features = pca.fit_transform(features.toarray())
reduced_cluster_centers = pca.transform(cls.cluster_centers_)

centroids = cls.cluster_centers_
centoides = pd.DataFrame(centroids)
centoides

def calcul(doc, centoides):
    '''Squared Euclidean distance from every document to every centroid.'''
    alllist = []
    for i in range(len(centoides)):
        docslist = []
        for j in range(len(doc)):
            sq = np.sum((doc[j:(j+1)].values - centoides[i:(i+1)].values)**2)
            docslist.append(sq)
        alllist.append(docslist)
    return pd.DataFrame(np.array(alllist).T.tolist())

docci = pd.DataFrame(features.toarray())
calcula = calcul(docci, centoides)
df
df['Centroid1'] = calcula[0]
df['Centroid2'] = calcula[1]
df['Centroid3'] = calcula[2]
df['Centroid4'] = calcula[3]
df['Centroid5'] = calcula[4]
df
```

# Evaluating the clustering accuracy

```
plt.scatter(reduced_features[:, 0], reduced_features[:, 1], c=cls.predict(features))
plt.scatter(reduced_cluster_centers[:, 0], reduced_cluster_centers[:, 1], marker='x', s=150, c='b')

from sklearn.metrics import homogeneity_score
homogeneity_score(df.label, cls.predict(features))

from sklearn.metrics import silhouette_score
silhouette_score(features, labels=cls.predict(features))
```

# Checking the data

```
df['predicted_label'] = cls.labels_
df[:15]
df[15:30]
df[30:45]
df[45:60]
df[60:75]
df
```
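The `homogeneity_score` used above checks whether each cluster contains only members of a single class. As a sanity check, here is a small pure-Python sketch of that entropy-based definition (the function names are ours, not part of the notebook):

```python
from collections import Counter
from math import log

def entropy(labels):
    """Shannon entropy of a label assignment."""
    n = len(labels)
    return -sum((c / n) * log(c / n) for c in Counter(labels).values())

def conditional_entropy(classes, clusters):
    """H(C|K): entropy of the true classes given the cluster assignment."""
    n = len(classes)
    cluster_sizes = Counter(clusters)
    joint = Counter(zip(clusters, classes))
    return -sum((njk / n) * log(njk / cluster_sizes[k]) for (k, _), njk in joint.items())

def homogeneity(classes, clusters):
    """1 when every cluster is pure, 0 when the clustering tells us nothing."""
    h_c = entropy(classes)
    return 1.0 if h_c == 0 else 1.0 - conditional_entropy(classes, clusters) / h_c

# A relabeled-but-pure clustering is perfectly homogeneous,
# while collapsing everything into one cluster scores 0.
print(homogeneity([0, 0, 1, 1], [1, 1, 0, 0]))  # → 1.0
print(homogeneity([0, 0, 1, 1], [0, 0, 0, 0]))  # → 0.0
```

Permuting the cluster ids does not change the score, which is exactly why it suits KMeans output, whose cluster numbering is arbitrary.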
# Imports ``` from nltk.corpus import stopwords from nltk.tokenize import word_tokenize from pymongo import MongoClient import csv import matplotlib.pyplot as plt import pandas as pd import seaborn as sns import spacy import tweepy ``` # Helper functions ```
def open_csv(csv_address):
    '''Load the CSV file containing the bearer token for the Twitter API connection'''
    with open(csv_address, 'r', encoding='utf8') as file:
        reader = csv.reader(file)
        data = list(reader)
    return data[0][1]

def conection_twitter(bearer_token):
    '''Connect to the Twitter API'''
    # Access key
    client = tweepy.Client(bearer_token=bearer_token)
    return client

def mongo_connection(database_name):
    '''Connect to MongoDB and create (or reuse) the database'''
    client = MongoClient('localhost', 27017)
    # Creating database or connecting
    db = client[database_name]
    return db

def load_tweet(candidate_dict: dict, client_twitter, data_base, maximum_results: int = 100):
    '''Load tweets through the tweepy API'''
    # The dictionary maps candidate name (key) to Twitter user id (value)
    for key, value in candidate_dict.items():
        # Create a MongoDB collection named after the dictionary key
        collection = data_base[key]
        # Collect tweets using tweepy's .get_users_tweets method
        tweets = client_twitter.get_users_tweets(id=value,
                                                 tweet_fields=['created_at', 'lang', 'public_metrics',
                                                               'reply_settings', 'entities',
                                                               'referenced_tweets', 'in_reply_to_user_id'],
                                                 expansions='referenced_tweets.id.author_id',
                                                 max_results=maximum_results)
        # Unpack the fields of each tweet and insert them into the MongoDB collection.
        for tweet in tweets.data:
            tweet_id = tweet.id
            user_id = value
            texto = tweet.text
            data = tweet.created_at
            likes = tweet.public_metrics['like_count']
            retweet_count = tweet.public_metrics['retweet_count']
            reply_count = tweet.public_metrics['reply_count']
            quote_count = tweet.public_metrics['quote_count']
            retweet_origen_id = tweet.referenced_tweets
            try:
                link = tweet.entities['urls'][0]['expanded_url']
            except (KeyError, TypeError, IndexError):
                link = 'Null'
            # MongoDB rejects duplicate _id values, so skip tweets that were already inserted
            try:
                collection.insert_one({'_id': tweet_id,
                                       'user_id': user_id,
                                       'texto': texto,
                                       'data': data,
                                       'likes': likes,
                                       'retweet_count': retweet_count,
                                       'reply_count': reply_count,
                                       'quote_count': quote_count,
                                       'retweet_origen_id': retweet_origen_id,
                                       'link': link})
            except Exception:
                pass

def extract_words(data):
    '''Remove Portuguese stopwords from the tweet texts using NLTK'''
    stop_words = stopwords.words('portuguese')
    texto = data['texto'].to_list()
    # Removing stopwords
    striped_phrase = []
    for element in texto:
        words = word_tokenize(element)
        for word in words:
            if word not in stop_words:
                word = word.strip(',:.#')
                word = word.replace('https', '')
                if len(word) >= 3:
                    striped_phrase.append(word)
    # Removing empty strings from the list
    str_list = list(filter(None, striped_phrase))
    return str_list

def create_label(word_list):
    '''Use spacy's trained pipeline for Portuguese to label each word in the list'''
    nlp_lg = spacy.load("pt_core_news_lg")
    # spacy expects a single text string, so join the words with commas
    str1 = ", "
    stem2 = str1.join(word_list)
    # Instantiate the text as a spacy object
    stem2 = nlp_lg(stem2)
    # Use a list comprehension to collect (text, label) pairs for the recognized entities
    label_lg = [(X.text, X.label_) for X in stem2.ents]
    return label_lg

def create_df(word_label, term: str, max_rows: int):
    '''Create a DataFrame of labels filtered by entity term, limited to max_rows rows.'''
    upper = term.upper()
    df =
pd.DataFrame(word_label, columns=['Word', 'Entity'])
    # Entity filtering
    df_org = df.where(df['Entity'] == upper)
    # Count repeated words
    df_org_count = df_org['Word'].value_counts()
    # Select the most commonly used words
    df = df_org_count[:max_rows]
    return df

def create_plot(df, title: str, size: tuple):
    '''Create a barplot and save it to the images folder'''
    title_save = title.replace(' ', '-').lower()
    path = 'images'
    plt.figure(figsize=size)
    sns.barplot(x=df.values, y=df.index, alpha=0.8)
    plt.title(title)
    plt.ylabel('Word from Tweet', fontsize=12)
    plt.xlabel('Count of Words', fontsize=12)
    plt.savefig(path + '/' + title_save + '.png', dpi=300, transparent=True, bbox_inches='tight')
    plt.show()
```

# Dictionary with candidates

```
# Dict where key = user name, value = user id
candidate_dict = {
    'Bolsonaro': '128372940',
    'Ciro': '33374761',
    'Lula': '2670726740',
    'Sergio_Moro': '1113094855281008641'
}
```

# Creating connection to Twitter API

```
# Load the bearer token from csv
token = open_csv('C:/Users/Diego/OneDrive/Cursos e codigos/Codigos/twitter/bearertoken.csv')
# Connecting with Twitter API
client_twitter = conection_twitter(token)
```

# Creating connection to MongoDB Database

```
# Creating connection to MongoDB Database
db = mongo_connection('data_twitter')
```

# Load tweets and insert them into MongoDB collections

```
# Load tweets and insert them into MongoDB collections
load_tweet(candidate_dict=candidate_dict, client_twitter=client_twitter, data_base=db, maximum_results=5)
```

# Connecting with Collections and Loading DataFrames from MongoDB

```
# Connecting with collections from MongoDB
collection_Bolsonaro = db.Bolsonaro
collection_Ciro = db.Ciro
collection_Lula = db.Lula
collection_SergioMoro = db.Sergio_Moro

# Loading DataFrames from MongoDB
df_bol = pd.DataFrame(collection_Bolsonaro.find())
df_cir = pd.DataFrame(collection_Ciro.find())
df_lul = pd.DataFrame(collection_Lula.find())
df_ser = pd.DataFrame(collection_SergioMoro.find())
```

# Operations in DataFrame

```
# 
Extracting words from the DataFrames and removing stopwords
bol_words = extract_words(df_bol)
cir_words = extract_words(df_cir)
lul_words = extract_words(df_lul)
ser_words = extract_words(df_ser)

# Creating labels from the lists, using spacy
bol_label = create_label(bol_words)
cir_label = create_label(cir_words)
lul_label = create_label(lul_words)
ser_label = create_label(ser_words)

# Creating DataFrames with the LOC labels, limited to 30 rows each
bol_df_loc = create_df(bol_label, 'LOC', 30)
cir_df_loc = create_df(cir_label, 'LOC', 30)
lul_df_loc = create_df(lul_label, 'LOC', 30)
ser_df_loc = create_df(ser_label, 'LOC', 30)
```

# Views of the top location mentioned by each candidate

```
# Creating plots from the LOC DataFrames built in the previous cell
bol_plot = create_plot(bol_df_loc, 'Top Location Mentioned By Bolsonaro', (20,10))
cir_plot = create_plot(cir_df_loc, 'Top Location Mentioned By Ciro Gomes', (20,10))
lul_plot = create_plot(lul_df_loc, 'Top Location Mentioned By Luiz Inácio Lula da Silva', (20,10))
ser_plot = create_plot(ser_df_loc, 'Top Location Mentioned By Sergio Moro', (20,10))
```

# Views of the top persons mentioned by each candidate

```
bol_df_loc = create_df(bol_label, 'PER', 30)
cir_df_loc = create_df(cir_label, 'PER', 30)
lul_df_loc = create_df(lul_label, 'PER', 30)
ser_df_loc = create_df(ser_label, 'PER', 30)

bol_plot = create_plot(bol_df_loc, 'Top Persons Mentioned By Bolsonaro', (20,10))
cir_plot = create_plot(cir_df_loc, 'Top Persons Mentioned By Ciro Gomes', (20,10))
lul_plot = create_plot(lul_df_loc, 'Top Persons Mentioned By Luiz Inácio Lula da Silva', (20,10))
ser_plot = create_plot(ser_df_loc, 'Top Persons Mentioned By Sergio Moro', (20,10))
```

# Views of the top organizations mentioned by each
candidate

```
bol_df_loc = create_df(bol_label, 'ORG', 30)
cir_df_loc = create_df(cir_label, 'ORG', 30)
lul_df_loc = create_df(lul_label, 'ORG', 30)
ser_df_loc = create_df(ser_label, 'ORG', 30)

bol_plot = create_plot(bol_df_loc, 'Top Organization Mentioned By Bolsonaro', (20,10))
cir_plot = create_plot(cir_df_loc, 'Top Organization Mentioned By Ciro Gomes', (20,10))
lul_plot = create_plot(lul_df_loc, 'Top Organization Mentioned By Luiz Inácio Lula da Silva', (20,10))
ser_plot = create_plot(ser_df_loc, 'Top Organization Mentioned By Sergio Moro', (20,10))
```
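The `create_df` helper above filters the (word, entity) pairs by one entity label and keeps the most frequent words. The same logic can be sketched without pandas using `collections.Counter`; the sample pairs below are made up for illustration:

```python
from collections import Counter

def top_words_for_entity(word_label, term, max_rows):
    """Count words tagged with the given entity label and return the most common ones."""
    upper = term.upper()
    counts = Counter(word for word, entity in word_label if entity == upper)
    return counts.most_common(max_rows)

# Toy (word, entity) pairs, shaped like the output of create_label
pairs = [('Brasil', 'LOC'), ('Lula', 'PER'), ('Brasil', 'LOC'),
         ('Petrobras', 'ORG'), ('Ceará', 'LOC')]
print(top_words_for_entity(pairs, 'loc', 2))  # → [('Brasil', 2), ('Ceará', 1)]
```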
```
import os
import numpy as np
import itertools
import matplotlib.pyplot as plt
import pandas as pd
from sklearn.utils import shuffle
from sklearn.preprocessing import MinMaxScaler
from sklearn.model_selection import train_test_split
from tensorflow import keras
from tensorflow.keras import layers
from sklearn.metrics import confusion_matrix

df15 = pd.read_csv("../Dataset/21-02-2018.csv", low_memory=False)
df15 = df15.drop([0, 1])
df15

df16Aux = pd.read_csv("../Dataset/02-03-2018.csv", low_memory=False)
df16Aux = df16Aux.drop([0, 1])
df16Aux

# Reorder the columns of the second file to match the first
listOrd = df15.columns.tolist()
df16 = pd.DataFrame()
for colu in listOrd:
    df16[colu] = df16Aux[colu]
df16
df16Aux = None

# np.float was removed from NumPy; use the builtin float instead
input_label15 = np.array(df15.loc[:, df15.columns != "Label"]).astype(float)
output_label15 = np.array(df15["Label"])
out = []
for o in output_label15:
    out.append(0 if o == "Benign" else 1)
output_label15 = out

input_label16 = np.array(df16.loc[:, df16.columns != "Label"]).astype(float)
output_label16 = np.array(df16["Label"])
out = []
for o in output_label16:
    out.append(0 if o == "Benign" else 1)
output_label16 = out

dfAE = pd.concat([df15, df16])
input_labelAE = np.array(dfAE.loc[:, dfAE.columns != "Label"]).astype(float)
output_labelAE = np.array(dfAE["Label"])
out = []
for o in output_labelAE:
    out.append(0 if o == "Benign" else 1)
output_labelAE = out
dfAE = None
df15 = None
df16 = None

scaler = MinMaxScaler(feature_range=(0, 1))
scaler.fit(input_labelAE)
input_label15 = scaler.transform(input_label15)
input_label16 = scaler.transform(input_label16)
input_labelAE = scaler.transform(input_labelAE)

input_labelAE, output_labelAE = shuffle(input_labelAE, output_labelAE)
input_label15, output_label15 = shuffle(input_label15, output_label15)
input_label16, output_label16 = shuffle(input_label16, output_label16)
```

## AutoEncoder

```
inp_train, inp_test, out_train, out_test = train_test_split(input_labelAE, input_labelAE, test_size=0.2)
input_model =
keras.layers.Input(shape=(78,))
enc = keras.layers.Dense(units=64, activation="relu", use_bias=True)(input_model)
enc = keras.layers.Dense(units=36, activation="relu", use_bias=True)(enc)
enc = keras.layers.Dense(units=18, activation="relu")(enc)
dec = keras.layers.Dense(units=36, activation="relu", use_bias=True)(enc)
dec = keras.layers.Dense(units=64, activation="relu", use_bias=True)(dec)
dec = keras.layers.Dense(units=78, activation="relu", use_bias=True)(dec)

auto_encoder = keras.Model(input_model, dec)
encoder = keras.Model(input_model, enc)

# Rebuild the decoder from the last three layers of the autoencoder
decoder_input = keras.layers.Input(shape=(18,))
decoder_layer = auto_encoder.layers[-3](decoder_input)
decoder_layer = auto_encoder.layers[-2](decoder_layer)
decoder_layer = auto_encoder.layers[-1](decoder_layer)
decoder = keras.Model(decoder_input, decoder_layer)

auto_encoder.compile(optimizer=keras.optimizers.Adam(learning_rate=0.00025),
                     loss="mean_squared_error", metrics=['accuracy'])
train = auto_encoder.fit(x=np.array(inp_train), y=np.array(out_train),
                         validation_split=0.1, epochs=10, verbose=1, shuffle=True)

# Mean reconstruction error on the held-out split
predict = auto_encoder.predict(inp_test)
losses = keras.losses.mean_squared_error(out_test, predict).numpy()
print(losses.mean())

input_labelAE = None
input_label15 = encoder.predict(input_label15).reshape(len(input_label15), 18, 1)
input_label16 = encoder.predict(input_label16).reshape(len(input_label16), 18, 1)
```

## Classifier

```
model = keras.Sequential([
    keras.layers.Conv1D(filters=16, input_shape=(18, 1), kernel_size=3, padding="same",
                        activation="relu", use_bias=True),
    keras.layers.MaxPool1D(pool_size=3),
    keras.layers.Conv1D(filters=8, kernel_size=3, padding="same", activation="relu", use_bias=True),
    keras.layers.MaxPool1D(pool_size=3),
    keras.layers.Flatten(),
    keras.layers.Dense(units=2, activation="softmax")
])
model.compile(optimizer=
keras.optimizers.Adam(learning_rate= 0.00025), loss="sparse_categorical_crossentropy", metrics=['accuracy']) model.fit(x = np.array(input_label15), y = np.array(output_label15), validation_split= 0.1, epochs = 10, shuffle = True,verbose = 1) res = [np.argmax(resu) for resu in model.predict(input_label16)] cm = confusion_matrix(y_true = np.array(output_label16).reshape(len(output_label16)), y_pred = np.array(res)) def plot_confusion_matrix(cm, classes, normaliza = False, title = "Confusion matrix", cmap = plt.cm.Blues): plt.imshow(cm, interpolation='nearest', cmap=cmap) plt.title(title) plt.colorbar() tick_marks = np.arange(len(classes)) plt.xticks(tick_marks, classes, rotation=45) plt.yticks(tick_marks, classes) if normaliza: cm = cm.astype('float') / cm.sum(axis = 1)[:, np.newaxis] print("Normalized confusion matrix") else: print("Confusion matrix, without normalization") print(cm) thresh = cm.max() / 2 for i, j in itertools.product(range(cm.shape[0]), range(cm.shape[1])): plt.text(j, i, cm[i, j], horizontalalignment="center", color="white" if cm[i,j] > thresh else "black") plt.tight_layout() plt.ylabel('True label') plt.xlabel('Predicted label') labels = ["Benign", "Dos"] plot_confusion_matrix(cm = cm, classes = labels, title = "Dos IDS") from sklearn.metrics import roc_curve from sklearn.metrics import roc_auc_score output_label16 = np.array(output_label16).reshape(len(output_label16)) res = np.array(res) fpr, tpr, _ = roc_curve(output_label16, res) auc = roc_auc_score(output_label16, res) plt.plot(fpr, tpr, label="auc=" + str(auc)) plt.legend(loc=4) plt.show() ```
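The confusion matrix and ROC curve above summarize the classifier on the second day's traffic. For reference, the headline rates can be derived directly from the four cells of a binary confusion matrix; a small pure-Python sketch with made-up counts:

```python
def binary_rates(cm):
    """Derive TPR, FPR, precision and accuracy from a 2x2 confusion matrix
    laid out like sklearn's confusion_matrix: rows = true label, cols = predicted."""
    (tn, fp), (fn, tp) = cm
    return {
        'tpr': tp / (tp + fn),                  # recall / detection rate
        'fpr': fp / (fp + tn),                  # false-alarm rate
        'precision': tp / (tp + fp),
        'accuracy': (tp + tn) / (tn + fp + fn + tp),
    }

# Hypothetical counts: 90 benign flows correctly passed, 10 false alarms,
# 5 missed attacks, 95 attacks detected.
rates = binary_rates([[90, 10], [5, 95]])
print(rates['tpr'], rates['fpr'], rates['accuracy'])  # → 0.95 0.1 0.925
```

With hard 0/1 predictions, as used for the ROC cell above, the curve degenerates to the single (FPR, TPR) point these rates describe.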
# Data Prep of Chicago Food Inspections Data This notebook reads in the food inspections dataset containing records of food inspections in Chicago since 2010. This dataset is freely available through healthdata.gov, but must be provided with the odbl license linked below and provided within this repository. This notebook prepares the data for statistical analysis and modeling by creating features from categorical variables and enforcing a prevalence threshold for these categories. Note that in this way, rare features are not analyzed or used to create a model (to encourage generalizability), though the code is designed so that it would be easy to change or eliminate the prevalence threshold to run downstream analysis with a different feature set. ### References - Data Source: https://healthdata.gov/dataset/food-inspections - License: http://opendefinition.org/licenses/odc-odbl/ ### Set Global Seed ``` SEED = 666 ``` ### Imports ``` import pandas as pd ``` ### Read Chicago Food Inspections Data Count records and columns. ``` food_inspections_df = pd.read_csv('../data/Food_Inspections.gz', compression='gzip') food_inspections_df.shape ``` ### Rename Columns ``` food_inspections_df.columns.tolist() columns = ['inspection_id', 'dba_name', 'aka_name', 'license_number', 'facility_type', 'risk', 'address', 'city', 'state', 'zip', 'inspection_date', 'inspection_type', 'result', 'violation', 'latitude', 'longitude', 'location'] food_inspections_df.columns = columns ``` ### Convert Zip Code to String And take only the first five digits, chopping off the decimal from reading the column as a float. ``` food_inspections_df['zip'] = food_inspections_df['zip'].astype(str).apply(lambda x: x.split('.')[0]) ``` ### Normalize Casing of Chicago Accept only proper spellings of the word Chicago with mixed casing accepted. 
```
food_inspections_df['city'] = food_inspections_df['city'].apply(lambda x: 'CHICAGO' if str(x).upper() == 'CHICAGO' else x)
```

### Filter for Facilities in Chicago, Illinois

```
loc_condition = (food_inspections_df['city'] == 'CHICAGO') & (food_inspections_df['state'] == 'IL')
```

### Drop Redundant Information

- Only Chicago is considered
- Only Illinois is considered
- Location is encoded as separate latitude and longitude columns

```
food_inspections_df = food_inspections_df[loc_condition].drop(columns=['city', 'state', 'location'])
food_inspections_df.shape
```

### Create Codes Corresponding to Each Violation Type by Parsing Violation Text

```
def create_violation_code(violation_text):
    # NaN is the only value not equal to itself, so this catches missing violations
    if violation_text != violation_text:
        return -1
    else:
        return int(violation_text.split('.')[0])

food_inspections_df['violation_code'] = food_inspections_df['violation'].apply(create_violation_code)
```

### Create Attribute Dataframes with the Unique Inspection ID for Lookups if Needed

- Names
- Licenses
- Locations
- Violations
- Dates

```
names = ['inspection_id', 'dba_name', 'aka_name']
names_df = food_inspections_df[names]

licenses = ['inspection_id', 'license_number']
licenses_df = food_inspections_df[licenses]

locations = ['inspection_id', 'address', 'latitude', 'longitude']
locations_df = food_inspections_df[locations]

violations = ['inspection_id', 'violation', 'violation_code']
violations_df = food_inspections_df[violations]

dates = ['inspection_id', 'inspection_date']
dates_df = food_inspections_df[dates]
```

### Drop Features Not Used in Statistical Analysis

Features such as:

- `DBA Name`
- `AKA Name`
- `License #`
- `Address`
- `Violations`
- `Inspection Date`

May be examined following statistical analysis by joining on `Inspection ID`.
**Note:** future iterations of this work may wish to consider:

- Text from the facility name
- Street-level information from the facility address
- Prior inspections of the same facility by performing a temporal analysis of the data using `Inspection Date`

```
not_considered = ['dba_name', 'aka_name', 'license_number', 'address', 'violation', 'inspection_date']
food_inspections_df = food_inspections_df.drop(columns=not_considered)
```

### Create Dataframes of Count and Prevalence for Categorical Features

- Facility types
- Violation codes
- Zip codes
- Inspection types

```
facilities = food_inspections_df['facility_type'].value_counts()
facilities_df = pd.DataFrame({'facility_type': facilities.index, 'count': facilities.values})
facilities_df['prevalence'] = facilities_df['count'] / food_inspections_df.shape[0]
facilities_df.nlargest(10, 'count')
facilities_df.nsmallest(10, 'count')

violations = food_inspections_df['violation_code'].value_counts()
violations_df = pd.DataFrame({'violation_code': violations.index, 'count': violations.values})
violations_df['prevalence'] = violations_df['count'] / food_inspections_df.shape[0]
violations_df.nlargest(10, 'count')
violations_df.nsmallest(10, 'count')

zips = food_inspections_df['zip'].value_counts()
zips_df = pd.DataFrame({'zip': zips.index, 'count': zips.values})
zips_df['prevalence'] = zips_df['count'] / food_inspections_df.shape[0]
zips_df.nlargest(10, 'count')
zips_df.nsmallest(10, 'count')

inspections = food_inspections_df['inspection_type'].value_counts()
inspections_df = pd.DataFrame({'inspection_type': inspections.index, 'count': inspections.values})
inspections_df['prevalence'] = inspections_df['count'] / food_inspections_df.shape[0]
inspections_df.nlargest(10, 'count')
inspections_df.nsmallest(10, 'count')

results = food_inspections_df['result'].value_counts()
results_df = pd.DataFrame({'result': results.index, 'count': results.values})
results_df['prevalence'] = results_df['count'] / food_inspections_df.shape[0]
results_df.nlargest(10, 'count') ``` ### Drop Violation Code for Now We can join back using the Inspection ID to learn about types of violations, but we don't want to use any information about the violation itself to predict if a food inspection will pass or fail. ``` food_inspections_df = food_inspections_df.drop('violation_code', 1) ``` ### Create Risk Group Feature If the feature cannot be found in the middle of the text string as a value 1-3, return -1. ``` def create_risk_groups(risk_text): try: risk = int(risk_text.split(' ')[1]) return risk except: return -1 food_inspections_df['risk'] = food_inspections_df['risk'].apply(create_risk_groups) ``` ### Format Result - Encode Pass and Pass w/ Conditions as 0 - Encode Fail as 1 - Encode all others as -1 and filter out these results ``` def format_results(result): if result == 'Pass': return 0 elif result == 'Pass w/ Conditions': return 0 elif result == 'Fail': return 1 else: return -1 food_inspections_df['result'] = food_inspections_df['result'].apply(format_results) food_inspections_df = food_inspections_df[food_inspections_df['result'] != -1] food_inspections_df.shape ``` ### Filter for Categorical Features that Pass some Prevalence Threshold This way we only consider fairly common attributes of historical food establishments and inspections so that our analysis will generalize to new establishments and inspections. **Note:** the prevalence threshold is set to **0.1%**. 
``` categorical_features = ['facility_type', 'zip', 'inspection_type'] def prev_filter(df, feature, prevalence='prevalence', prevalence_threshold=0.001): return df[df[prevalence] > prevalence_threshold][feature].tolist() feature_dict = dict(zip(categorical_features, [prev_filter(facilities_df, 'facility_type'), prev_filter(zips_df, 'zip'), prev_filter(inspections_df, 'inspection_type')])) ``` ### Encode Rare Features with the 'DROP' String, to be Removed Later Note that by mapping all rare features to the 'DROP' attribute, we avoid having to one-hot-encode all rare features and then drop them after the fact. That would create an unnecessarily large feature matrix. Instead we one-hot encode features passing the prevalence threshold and then drop all rare features that were tagged with the 'DROP' string. ``` for feature in categorical_features: food_inspections_df[feature] = food_inspections_df[feature].apply(lambda x: x if x in feature_dict[feature] else 'DROP') feature_df = pd.get_dummies(food_inspections_df, prefix=['{}'.format(feature) for feature in categorical_features], columns=categorical_features) feature_df = feature_df[[col for col in feature_df.columns if 'DROP' not in col]] feature_df.shape ``` ### Drop Features with: - Risk level not recorded as 1, 2, or 3 - Result not recorded as Pass, Pass w/ Conditions, or Fail - NA values (Some latitudes and longitudes are NA) ``` feature_df = feature_df[feature_df['risk'] != -1] feature_df = feature_df[feature_df['result'] != -1] feature_df = feature_df.dropna() feature_df.shape ``` ### Write the Feature Set to a Compressed CSV File to Load for Modeling and Analysis ``` feature_df.to_csv('../data/Food_Inspection_Features.gz', compression='gzip', index=False) ``` ### Write off Zip Codes to Join with Census Data ``` zips_df.to_csv('../data/Zips.csv', index=False) ```
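The prevalence-threshold trick above (map rare categories to `'DROP'`, one-hot encode, then discard the `DROP` columns) can be illustrated on a toy column; this standalone sketch mirrors the notebook's `prev_filter` + `apply` step without pandas:

```python
from collections import Counter

def mark_rare(values, prevalence_threshold=0.001):
    """Replace category values whose prevalence does not exceed the threshold
    with 'DROP', mirroring the notebook's prevalence filter."""
    n = len(values)
    common = {v for v, c in Counter(values).items() if c / n > prevalence_threshold}
    return [v if v in common else 'DROP' for v in values]

# Toy column: 'Restaurant' is common; the other two are rare at a 15% threshold.
col = ['Restaurant'] * 8 + ['Mobile Cart'] + ['Rooftop']
marked = mark_rare(col, prevalence_threshold=0.15)
print(marked.count('DROP'))  # → 2
```

Collapsing rare values before one-hot encoding keeps the feature matrix small, which is exactly the design rationale the notebook states.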
# Grey Relational Analysis

FinCreWorld & xyfJASON

## 1 Overview

> Reference: [Mathematical-modeling notes — Grey Relational Analysis for evaluation models (Zhihu)](https://zhuanlan.zhihu.com/p/161536409)

Grey relational analysis studies a system whose development is influenced by several factors, and asks which factors have a large influence and which a small one. If we quantify the system and its factors as numeric sequences, what we are really studying is the degree of association among those sequences — in other words, **how similar the geometric shapes of their curves are**. The more similar the shapes, the stronger the association. By quantifying the similarity between each factor's curve $\{x_i\}$ and the system's curve $\{x_0\}$, we can characterize how strongly each factor influences the system.

In other words, applying grey relational analysis requires one parent (reference) sequence $\{x_0\}$ and several child (comparison) sequences $\{x_i\}$.

What does this have to do with our topic, comprehensive evaluation and decision making? In evaluation problems we construct the parent sequence ourselves: **combine the best value of each indicator** into an "ideal solution", then compute each object's grey relational degree with respect to that ideal — the higher the relational degree, the better the object.

## 2 Steps

1. Determine the comparison objects (objects to evaluate) and the reference sequence (the evaluation standard). With $m$ objects and $n$ indicators, the reference sequence is $x_0=\{x_0(k)\mid k=1,2,\cdots,n\}$ and the comparison sequences are
   $$
   x_i=\{x_i(k)\mid k=1,2,\cdots,n\},\quad i=1,2,\cdots,m
   $$
   Note that all indicators must first be made **positively oriented**.

2. Determine the weight of each indicator:
   $$
   w=[w_1,\cdots,w_n]
   $$

3. Compute the grey relational coefficients
   $$
   \xi_i(k)=\frac{\min\limits_s\min\limits_t|x_0(t)-x_s(t)|+\rho\max\limits_s\max\limits_t|x_0(t)-x_s(t)|}{|x_0(k)-x_i(k)|+\rho\max\limits_s\max\limits_t|x_0(t)-x_s(t)|}
   $$
   The 'minmin' term is the two-level minimum difference and the 'maxmax' term the two-level maximum difference; $\rho$ is the distinguishing coefficient — the larger $\rho$ is, the higher the resolution.

4. Compute the grey weighted relational degree
   $$
   r_i=\sum\limits_{k=1}^n w_k\,\xi_i(k)
   $$
   where $r_i$ is the grey weighted relational degree of the $i$-th object with respect to the ideal object.

5. Evaluate: rank the objects by their grey weighted relational degree to obtain the relational ordering — the larger the relational degree, the better the evaluation result.

## 3 Code template

See the `evaluation.py` module in the same folder.

## 4 Example

Choose one partner from 6 candidate component suppliers, whose data are as follows:

```
import pandas as pd

df = pd.DataFrame([[ 0.83, 0.9, 0.99, 0.92, 0.87, 0.95 ],
                   [ 326, 295, 340, 287, 310, 303 ],
                   [ 21, 38, 25, 19, 27, 10 ],
                   [ 3.2, 2.4, 2.2, 2, 0.9, 1.7 ],
                   [ 0.2, 0.25, 0.12, 0.33, 0.2, 0.09 ],
                   [ 0.15, 0.2, 0.14, 0.09, 0.15, 0.17 ],
                   [ 250, 180, 300, 200, 150, 175 ],
                   [ 0.23, 0.15, 0.27, 0.3, 0.18, 0.26 ],
                   [ 0.87, 0.95, 0.99, 0.89, 0.82, 0.94 ]])
df.columns = [['待选供应商']*6, list('123456')]
df.index = ['产品质量', '产品价格', '地理位置', '售后服务', '技术水平', '经济效益', '供应能力', '市场影响度', '交货情况']
df
```

Product quality, technical level, supply capacity, economic benefit, delivery, and market influence are benefit-type indicators, standardized as
$$
std = \frac{ori - \min(ori)}{\max(ori) - \min(ori)}
$$
while product price, geographic location, and after-sales service are cost-type indicators, standardized as
$$
std = \frac{\max(ori) - ori}{\max(ori) - \min(ori)}
$$

```
# Preprocess the data
import numpy as np
from evaluation import positive_scale

res = positive_scale(x=df.T.values, kind=np.array([0,1,1,1,0,0,0,0,0]))
df.iloc[:, :] = res.T
df

# Assume all indicators are equally important, i.e. equal weights
# Compute the relational degrees
from evaluation import GreyRelationalAnalysis

solver = GreyRelationalAnalysis(x=df.values.T)
R, r = solver.run()
r

# Tabulate the results
df2 = pd.DataFrame(np.concatenate((R.T, r.reshape(1, -1)), axis=0))
df2.columns = [f'供应商{i}' for i in '123456']
df2.index = [*[f'指标{i}' for i in range(1, 10)], 'r']
df2
```
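Since `evaluation.py` itself is not reproduced here, the following is a minimal pure-Python sketch of steps 3 and 4 — the grey relational coefficients $\xi_i(k)$ and the weighted degrees $r_i$. The function and variable names are ours, not the module's:

```python
def grey_relational_degree(x0, X, rho=0.5, w=None):
    """x0: reference sequence of length n; X: m comparison sequences of length n.
    Returns (xi, r): the coefficient matrix and the weighted relational degrees."""
    m, n = len(X), len(x0)
    # |x0(k) - x_i(k)| for every sequence and indicator
    diffs = [[abs(x0[k] - X[i][k]) for k in range(n)] for i in range(m)]
    flat = [d for row in diffs for d in row]
    dmin, dmax = min(flat), max(flat)          # two-level min / max differences
    xi = [[(dmin + rho * dmax) / (d + rho * dmax) for d in row] for row in diffs]
    w = w if w is not None else [1.0 / n] * n  # equal weights by default
    r = [sum(w[k] * xi[i][k] for k in range(n)) for i in range(m)]
    return xi, r

# A comparison sequence identical to the reference attains the maximal degree.
xi, r = grey_relational_degree([1, 2, 3], [[1, 2, 3], [3, 2, 1]])
print(r)  # r[0] ≈ 1.0 and r[0] > r[1]
```

Note the sketch assumes `dmax > 0`, i.e. at least one comparison sequence differs from the reference somewhere.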
##### Copyright 2019 The TensorFlow Authors. ``` #@title Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. #@title MIT License # # Copyright (c) 2017 François Chollet # IGNORE_COPYRIGHT: cleared by OSS licensing # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the "Software"), # to deal in the Software without restriction, including without limitation # the rights to use, copy, modify, merge, publish, distribute, sublicense, # and/or sell copies of the Software, and to permit persons to whom the # Software is furnished to do so, subject to the following conditions: # # The above copyright notice and this permission notice shall be included in # all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR # IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, # FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL # THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER # LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING # FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER # DEALINGS IN THE SOFTWARE. 
``` # Transfer learning and fine-tuning <table class="tfo-notebook-buttons" align="left"> <td> <a target="_blank" href="https://www.tensorflow.org/tutorials/images/transfer_learning"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a> </td> <td> <a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs/blob/master/site/en/tutorials/images/transfer_learning.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a> </td> <td> <a target="_blank" href="https://github.com/tensorflow/docs/blob/master/site/en/tutorials/images/transfer_learning.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a> </td> <td> <a href="https://storage.googleapis.com/tensorflow_docs/docs/site/en/tutorials/images/transfer_learning.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a> </td> </table> In this tutorial, you will learn how to classify images of cats and dogs by using transfer learning from a pre-trained network. A pre-trained model is a saved network that was previously trained on a large dataset, typically on a large-scale image-classification task. You either use the pretrained model as is or use transfer learning to customize this model to a given task. The intuition behind transfer learning for image classification is that if a model is trained on a large and general enough dataset, this model will effectively serve as a generic model of the visual world. You can then take advantage of these learned feature maps without having to start from scratch by training a large model on a large dataset. In this notebook, you will try two ways to customize a pretrained model: 1. Feature Extraction: Use the representations learned by a previous network to extract meaningful features from new samples. 
You simply add a new classifier, which will be trained from scratch, on top of the pretrained model so that you can repurpose the feature maps learned previously for the dataset. You do not need to (re)train the entire model. The base convolutional network already contains features that are generically useful for classifying pictures. However, the final, classification part of the pretrained model is specific to the original classification task, and subsequently specific to the set of classes on which the model was trained. 1. Fine-Tuning: Unfreeze a few of the top layers of a frozen model base and jointly train both the newly-added classifier layers and the last layers of the base model. This allows us to "fine-tune" the higher-order feature representations in the base model in order to make them more relevant for the specific task. You will follow the general machine learning workflow. 1. Examine and understand the data 1. Build an input pipeline, in this case using Keras ImageDataGenerator 1. Compose the model * Load in the pretrained base model (and pretrained weights) * Stack the classification layers on top 1. Train the model 1. Evaluate model ``` import matplotlib.pyplot as plt import numpy as np import os import tensorflow as tf ``` ## Data preprocessing ### Data download In this tutorial, you will use a dataset containing several thousand images of cats and dogs. Download and extract a zip file containing the images, then create a `tf.data.Dataset` for training and validation using the `tf.keras.utils.image_dataset_from_directory` utility. You can learn more about loading images in this [tutorial](https://www.tensorflow.org/tutorials/load_data/images). 
``` _URL = 'https://storage.googleapis.com/mledu-datasets/cats_and_dogs_filtered.zip' path_to_zip = tf.keras.utils.get_file('cats_and_dogs.zip', origin=_URL, extract=True) PATH = os.path.join(os.path.dirname(path_to_zip), 'cats_and_dogs_filtered') train_dir = os.path.join(PATH, 'train') validation_dir = os.path.join(PATH, 'validation') BATCH_SIZE = 32 IMG_SIZE = (160, 160) train_dataset = tf.keras.utils.image_dataset_from_directory(train_dir, shuffle=True, batch_size=BATCH_SIZE, image_size=IMG_SIZE) validation_dataset = tf.keras.utils.image_dataset_from_directory(validation_dir, shuffle=True, batch_size=BATCH_SIZE, image_size=IMG_SIZE) ``` Show the first nine images and labels from the training set: ``` class_names = train_dataset.class_names plt.figure(figsize=(10, 10)) for images, labels in train_dataset.take(1): for i in range(9): ax = plt.subplot(3, 3, i + 1) plt.imshow(images[i].numpy().astype("uint8")) plt.title(class_names[labels[i]]) plt.axis("off") ``` As the original dataset doesn't contain a test set, you will create one. To do so, determine how many batches of data are available in the validation set using `tf.data.experimental.cardinality`, then move 20% of them to a test set. ``` val_batches = tf.data.experimental.cardinality(validation_dataset) test_dataset = validation_dataset.take(val_batches // 5) validation_dataset = validation_dataset.skip(val_batches // 5) print('Number of validation batches: %d' % tf.data.experimental.cardinality(validation_dataset)) print('Number of test batches: %d' % tf.data.experimental.cardinality(test_dataset)) ``` ### Configure the dataset for performance Use buffered prefetching to load images from disk without having I/O become blocking. To learn more about this method see the [data performance](https://www.tensorflow.org/guide/data_performance) guide. 
```
AUTOTUNE = tf.data.AUTOTUNE

train_dataset = train_dataset.prefetch(buffer_size=AUTOTUNE)
validation_dataset = validation_dataset.prefetch(buffer_size=AUTOTUNE)
test_dataset = test_dataset.prefetch(buffer_size=AUTOTUNE)
```

### Use data augmentation

When you don't have a large image dataset, it's a good practice to artificially introduce sample diversity by applying random, yet realistic, transformations to the training images, such as rotation and horizontal flipping. This helps expose the model to different aspects of the training data and reduces [overfitting](https://www.tensorflow.org/tutorials/keras/overfit_and_underfit). You can learn more about data augmentation in this [tutorial](https://www.tensorflow.org/tutorials/images/data_augmentation).

```
data_augmentation = tf.keras.Sequential([
  tf.keras.layers.RandomFlip('horizontal'),
  tf.keras.layers.RandomRotation(0.2),
])
```

Note: These layers are active only during training, when you call `Model.fit`. They are inactive when the model is used in inference mode, in `Model.evaluate` or `Model.predict`.

Let's repeatedly apply these layers to the same image and see the result.

```
for image, _ in train_dataset.take(1):
  plt.figure(figsize=(10, 10))
  first_image = image[0]
  for i in range(9):
    ax = plt.subplot(3, 3, i + 1)
    augmented_image = data_augmentation(tf.expand_dims(first_image, 0))
    plt.imshow(augmented_image[0] / 255)
    plt.axis('off')
```

### Rescale pixel values

In a moment, you will download `tf.keras.applications.MobileNetV2` for use as your base model. This model expects pixel values in `[-1, 1]`, but at this point, the pixel values in your images are in `[0, 255]`. To rescale them, use the preprocessing method included with the model.

```
preprocess_input = tf.keras.applications.mobilenet_v2.preprocess_input
```

Note: Alternatively, you could rescale pixel values from `[0, 255]` to `[-1, 1]` using `tf.keras.layers.Rescaling`.
``` rescale = tf.keras.layers.Rescaling(1./127.5, offset=-1) ``` Note: If using other `tf.keras.applications`, be sure to check the API doc to determine if they expect pixels in `[-1, 1]` or `[0, 1]`, or use the included `preprocess_input` function. ## Create the base model from the pre-trained convnets You will create the base model from the **MobileNet V2** model developed at Google. This is pre-trained on the ImageNet dataset, a large dataset consisting of 1.4M images and 1000 classes. ImageNet is a research training dataset with a wide variety of categories like `jackfruit` and `syringe`. This base of knowledge will help us classify cats and dogs from our specific dataset. First, you need to pick which layer of MobileNet V2 you will use for feature extraction. The very last classification layer (on "top", as most diagrams of machine learning models go from bottom to top) is not very useful. Instead, you will follow the common practice to depend on the very last layer before the flatten operation. This layer is called the "bottleneck layer". The bottleneck layer features retain more generality as compared to the final/top layer. First, instantiate a MobileNet V2 model pre-loaded with weights trained on ImageNet. By specifying the **include_top=False** argument, you load a network that doesn't include the classification layers at the top, which is ideal for feature extraction. ``` # Create the base model from the pre-trained model MobileNet V2 IMG_SHAPE = IMG_SIZE + (3,) base_model = tf.keras.applications.MobileNetV2(input_shape=IMG_SHAPE, include_top=False, weights='imagenet') ``` This feature extractor converts each `160x160x3` image into a `5x5x1280` block of features. 
Let's see what it does to an example batch of images:

```
image_batch, label_batch = next(iter(train_dataset))
feature_batch = base_model(image_batch)
print(feature_batch.shape)
```

## Feature extraction

In this step, you will freeze the convolutional base created in the previous step and use it as a feature extractor. Additionally, you add a classifier on top of it and train the top-level classifier.

### Freeze the convolutional base

It is important to freeze the convolutional base before you compile and train the model. Freezing (by setting `layer.trainable = False`) prevents the weights in a given layer from being updated during training. MobileNet V2 has many layers, so setting the entire model's `trainable` flag to False will freeze all of them.

```
base_model.trainable = False
```

### Important note about BatchNormalization layers

Many models contain `tf.keras.layers.BatchNormalization` layers. This layer is a special case and precautions should be taken in the context of fine-tuning, as shown later in this tutorial.

When you set `layer.trainable = False`, the `BatchNormalization` layer will run in inference mode, and will not update its mean and variance statistics.

When you unfreeze a model that contains BatchNormalization layers in order to do fine-tuning, you should keep the BatchNormalization layers in inference mode by passing `training = False` when calling the base model. Otherwise, the updates applied to the non-trainable weights will destroy what the model has learned.

For more details, see the [Transfer learning guide](https://www.tensorflow.org/guide/keras/transfer_learning).

```
# Let's take a look at the base model architecture
base_model.summary()
```

### Add a classification head

To generate predictions from the block of features, average over the `5x5` spatial locations, using a `tf.keras.layers.GlobalAveragePooling2D` layer to convert the features to a single 1280-element vector per image.
``` global_average_layer = tf.keras.layers.GlobalAveragePooling2D() feature_batch_average = global_average_layer(feature_batch) print(feature_batch_average.shape) ``` Apply a `tf.keras.layers.Dense` layer to convert these features into a single prediction per image. You don't need an activation function here because this prediction will be treated as a `logit`, or a raw prediction value. Positive numbers predict class 1, negative numbers predict class 0. ``` prediction_layer = tf.keras.layers.Dense(1) prediction_batch = prediction_layer(feature_batch_average) print(prediction_batch.shape) ``` Build a model by chaining together the data augmentation, rescaling, `base_model` and feature extractor layers using the [Keras Functional API](https://www.tensorflow.org/guide/keras/functional). As previously mentioned, use `training=False` as our model contains a `BatchNormalization` layer. ``` inputs = tf.keras.Input(shape=(160, 160, 3)) x = data_augmentation(inputs) x = preprocess_input(x) x = base_model(x, training=False) x = global_average_layer(x) x = tf.keras.layers.Dropout(0.2)(x) outputs = prediction_layer(x) model = tf.keras.Model(inputs, outputs) ``` ### Compile the model Compile the model before training it. Since there are two classes, use the `tf.keras.losses.BinaryCrossentropy` loss with `from_logits=True` since the model provides a linear output. ``` base_learning_rate = 0.0001 model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=base_learning_rate), loss=tf.keras.losses.BinaryCrossentropy(from_logits=True), metrics=['accuracy']) model.summary() ``` The 2.5 million parameters in MobileNet are frozen, but there are 1.2 thousand _trainable_ parameters in the Dense layer. These are divided between two `tf.Variable` objects, the weights and biases. ``` len(model.trainable_variables) ``` ### Train the model After training for 10 epochs, you should see ~94% accuracy on the validation set. 
```
initial_epochs = 10

loss0, accuracy0 = model.evaluate(validation_dataset)

print("initial loss: {:.2f}".format(loss0))
print("initial accuracy: {:.2f}".format(accuracy0))

history = model.fit(train_dataset,
                    epochs=initial_epochs,
                    validation_data=validation_dataset)
```

### Learning curves

Let's take a look at the learning curves of the training and validation accuracy/loss when using the MobileNetV2 base model as a fixed feature extractor.

```
acc = history.history['accuracy']
val_acc = history.history['val_accuracy']

loss = history.history['loss']
val_loss = history.history['val_loss']

plt.figure(figsize=(8, 8))
plt.subplot(2, 1, 1)
plt.plot(acc, label='Training Accuracy')
plt.plot(val_acc, label='Validation Accuracy')
plt.legend(loc='lower right')
plt.ylabel('Accuracy')
plt.ylim([min(plt.ylim()),1])
plt.title('Training and Validation Accuracy')

plt.subplot(2, 1, 2)
plt.plot(loss, label='Training Loss')
plt.plot(val_loss, label='Validation Loss')
plt.legend(loc='upper right')
plt.ylabel('Cross Entropy')
plt.ylim([0,1.0])
plt.title('Training and Validation Loss')
plt.xlabel('epoch')
plt.show()
```

Note: If you are wondering why the validation metrics are clearly better than the training metrics, the main reason is that layers like `tf.keras.layers.BatchNormalization` and `tf.keras.layers.Dropout` affect accuracy during training. They are turned off when calculating validation loss.

To a lesser extent, it is also because training metrics report the average for an epoch, while validation metrics are evaluated after the epoch, so validation metrics see a model that has trained slightly longer.

## Fine tuning

In the feature extraction experiment, you were only training a few layers on top of a MobileNetV2 base model. The weights of the pre-trained network were **not** updated during training.
One way to increase performance even further is to train (or "fine-tune") the weights of the top layers of the pre-trained model alongside the training of the classifier you added. The training process will force the weights to be tuned from generic feature maps to features associated specifically with the dataset. Note: This should only be attempted after you have trained the top-level classifier with the pre-trained model set to non-trainable. If you add a randomly initialized classifier on top of a pre-trained model and attempt to train all layers jointly, the magnitude of the gradient updates will be too large (due to the random weights from the classifier) and your pre-trained model will forget what it has learned. Also, you should try to fine-tune a small number of top layers rather than the whole MobileNet model. In most convolutional networks, the higher up a layer is, the more specialized it is. The first few layers learn very simple and generic features that generalize to almost all types of images. As you go higher up, the features are increasingly more specific to the dataset on which the model was trained. The goal of fine-tuning is to adapt these specialized features to work with the new dataset, rather than overwrite the generic learning. ### Un-freeze the top layers of the model All you need to do is unfreeze the `base_model` and set the bottom layers to be un-trainable. Then, you should recompile the model (necessary for these changes to take effect), and resume training. 
``` base_model.trainable = True # Let's take a look to see how many layers are in the base model print("Number of layers in the base model: ", len(base_model.layers)) # Fine-tune from this layer onwards fine_tune_at = 100 # Freeze all the layers before the `fine_tune_at` layer for layer in base_model.layers[:fine_tune_at]: layer.trainable = False ``` ### Compile the model As you are training a much larger model and want to readapt the pretrained weights, it is important to use a lower learning rate at this stage. Otherwise, your model could overfit very quickly. ``` model.compile(loss=tf.keras.losses.BinaryCrossentropy(from_logits=True), optimizer = tf.keras.optimizers.RMSprop(learning_rate=base_learning_rate/10), metrics=['accuracy']) model.summary() len(model.trainable_variables) ``` ### Continue training the model If you trained to convergence earlier, this step will improve your accuracy by a few percentage points. ``` fine_tune_epochs = 10 total_epochs = initial_epochs + fine_tune_epochs history_fine = model.fit(train_dataset, epochs=total_epochs, initial_epoch=history.epoch[-1], validation_data=validation_dataset) ``` Let's take a look at the learning curves of the training and validation accuracy/loss when fine-tuning the last few layers of the MobileNetV2 base model and training the classifier on top of it. The validation loss is much higher than the training loss, so you may get some overfitting. You may also get some overfitting as the new training set is relatively small and similar to the original MobileNetV2 datasets. After fine tuning the model nearly reaches 98% accuracy on the validation set. 
```
acc += history_fine.history['accuracy']
val_acc += history_fine.history['val_accuracy']

loss += history_fine.history['loss']
val_loss += history_fine.history['val_loss']

plt.figure(figsize=(8, 8))
plt.subplot(2, 1, 1)
plt.plot(acc, label='Training Accuracy')
plt.plot(val_acc, label='Validation Accuracy')
plt.ylim([0.8, 1])
plt.plot([initial_epochs-1,initial_epochs-1],
          plt.ylim(), label='Start Fine Tuning')
plt.legend(loc='lower right')
plt.title('Training and Validation Accuracy')

plt.subplot(2, 1, 2)
plt.plot(loss, label='Training Loss')
plt.plot(val_loss, label='Validation Loss')
plt.ylim([0, 1.0])
plt.plot([initial_epochs-1,initial_epochs-1],
         plt.ylim(), label='Start Fine Tuning')
plt.legend(loc='upper right')
plt.title('Training and Validation Loss')
plt.xlabel('epoch')
plt.show()
```

### Evaluation and prediction

Finally, you can verify the performance of the model on new data using the test set.

```
loss, accuracy = model.evaluate(test_dataset)
print('Test accuracy :', accuracy)
```

And now you are all set to use this model to predict if your pet is a cat or dog.

```
# Retrieve a batch of images from the test set
image_batch, label_batch = test_dataset.as_numpy_iterator().next()
predictions = model.predict_on_batch(image_batch).flatten()

# Apply a sigmoid since our model returns logits
predictions = tf.nn.sigmoid(predictions)
predictions = tf.where(predictions < 0.5, 0, 1)

print('Predictions:\n', predictions.numpy())
print('Labels:\n', label_batch)

plt.figure(figsize=(10, 10))
for i in range(9):
  ax = plt.subplot(3, 3, i + 1)
  plt.imshow(image_batch[i].astype("uint8"))
  plt.title(class_names[predictions[i]])
  plt.axis("off")
```

## Summary

* **Using a pre-trained model for feature extraction**: When working with a small dataset, it is a common practice to take advantage of features learned by a model trained on a larger dataset in the same domain. This is done by instantiating the pre-trained model and adding a fully-connected classifier on top.
The pre-trained model is "frozen" and only the weights of the classifier get updated during training. In this case, the convolutional base extracted all the features associated with each image and you just trained a classifier that determines the image class given that set of extracted features. * **Fine-tuning a pre-trained model**: To further improve performance, one might want to repurpose the top-level layers of the pre-trained models to the new dataset via fine-tuning. In this case, you tuned your weights such that your model learned high-level features specific to the dataset. This technique is usually recommended when the training dataset is large and very similar to the original dataset that the pre-trained model was trained on. To learn more, visit the [Transfer learning guide](https://www.tensorflow.org/guide/keras/transfer_learning).
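As a closing aside on the prediction step above: the model's `Dense` output was treated as a logit, passed through `tf.nn.sigmoid`, and thresholded at 0.5. The earlier claim that "positive numbers predict class 1, negative numbers predict class 0" follows from that, and can be checked in plain Python with no TensorFlow required (a small illustrative sketch; `sigmoid` here is a hand-written stand-in for `tf.nn.sigmoid`):

```python
import math

def sigmoid(z):
    # standard logistic function, a stand-in for tf.nn.sigmoid
    return 1.0 / (1.0 + math.exp(-z))

logits = [-2.3, -0.1, 0.0, 0.7, 4.2]

# thresholding sigmoid(z) at 0.5 (as in the tutorial's tf.where call) ...
via_sigmoid = [0 if sigmoid(z) < 0.5 else 1 for z in logits]
# ... gives the same classes as thresholding the raw logit z at 0
via_sign = [0 if z < 0 else 1 for z in logits]

assert via_sigmoid == via_sign
```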
# Processing data in tables

[Companion video 2](https://vimeo.com/534400395)

In this section, we assume we have a table of named tuples (built with dictionaries), i.e. of the form:

```python
[
    {"descr1": "val1", "descr2": "val2", ...},
    {"descr1": "val'1", ...},
    ...
]
```

which represents a **table** of data of the form:

| descr1 | descr2 | ... |
| :-------------:|:-------------:|:-----:|
| val1 | val2 | ... |
| val'1 | ... | ... |
| ... | ... | ... |

**Our goal is** to learn how to perform some **essential operations** on this kind of data:

- **"pre-processing"**: adapt the **type** of certain values,
- **projection**: select certain "**columns**" (descriptors),
- **selection**: select certain "**rows**" (records),
- **sort** the rows (records),
- **merge**: produce a table from two others.

Here is the table we will use to illustrate/test these operations.

```
table_test = [
    {'n_client': '1212', 'nom': 'Lacasse'  , 'prenom': 'Aubrey'   , 'ville': 'Annecy'  , 'position': '45.900000,6.116667'},
    {'n_client': '1343', 'nom': 'Primeau'  , 'prenom': 'Angelette', 'ville': 'Tours'   , 'position': '47.383333,0.683333'},
    {'n_client': '2454', 'nom': 'Gabriaux' , 'prenom': 'Julie'    , 'ville': 'Bordeaux', 'position': '44.833333,-0.566667'},
    {'n_client': '895' , 'nom': 'Gaulin'   , 'prenom': 'Dorene'   , 'ville': 'Lyon'    , 'position': '45.750000,4.850000'},
    {'n_client': '2324', 'nom': 'Jobin'    , 'prenom': 'Aubrey'   , 'ville': 'Bourges' , 'position': '47.083333,2.400000'},
    {'n_client': '34'  , 'nom': 'Boncoeur' , 'prenom': 'Kari'     , 'ville': 'Nantes'  , 'position': '47.216667,-1.550000'},
    {'n_client': '1221', 'nom': 'Parizeau' , 'prenom': 'Olympia'  , 'ville': 'Metz'    , 'position': '49.133333,6.166667'},
    {'n_client': '1114', 'nom': 'Paiement' , 'prenom': 'Inès'     , 'ville': 'Bordeaux', 'position': '44.833333,-0.566667'},
    {'n_client': '3435', 'nom': 'Chrétien' , 'prenom':
'Adèle'    , 'ville': 'Moulin'  , 'position': '46.566667,3.333333'},
    {'n_client': '5565', 'nom': 'Neufville', 'prenom': 'Ila'      , 'ville': 'Toulouse', 'position': '43.600000,1.433333'},
    {'n_client': '2221', 'nom': 'Larivière', 'prenom': 'Alice'    , 'ville': 'Tours'   , 'position': '47.383333,0.683333'},
]
```

#### Exercise 1

What are the descriptors of this table?

**answer**: ______

```
# how many rows does it have? how many columns? answers: ____
```

What is the common type of all the values?

**answer**: ______

_____

## Python refresher: iterating over a dictionary

When you use the syntax `for <var> in <dictionnaire>`, the loop variable holds a new **key** of the dictionary at each *iteration*.

```
test = {"un": 1, "deux": 2, "3": "trois"}
for var in test:
    print(var)
```

From the key, you can easily retrieve the value associated with it in the key-value pair, with the syntax `dico[cle]`:

```
test = {"un": 1, "deux": 2, "3": "trois"}
for var in test:
    print(test[var])
```

But it is more convenient to retrieve **both the key and the value** directly in the loop variable. You can do this by using the `dict.items()` method in the loop:

```
test = {"un": 1, "deux": 2, "3": "trois"}
for var in test.items():
    print(var)
```

As you can see, at each iteration the loop variable receives a *tuple of size 2*; each component can be unpacked directly, as follows:

```
test = {"un": 1, "deux": 2, "3": "trois"}
for cle, val in test.items():
    print(f"{cle} => {val}")
```

**Remember**

> if `d` is a dictionary, `for cle, val in d.items()` retrieves a new key-value pair at each iteration of the loop.

We can use this in a comprehension to "transform" a dictionary.
```
test = {"entier": "13", "chaine": "python", "flottant": "3.14",
        "booleen": "Oui", "tuple_entiers": "4,5,6"}
```

We may want to "forget" some pairs:

```
{c: v for c, v in test.items() if c not in ["chaine", "booleen"]}
```

We may want to *adapt* the **types** of some values. For example, in the `test` dictionary the value associated with the key "entier" is of type *str* and we would like an *int*:

```
def conv(c, v):
    if c == "entier":
        return int(v)
    else:
        return v

{c: conv(c, v) for c, v in test.items()}
```

Wanting to do this is very common; for this reason (and others...) Python provides the *ternary* operator `e1 if cond else e2`, which yields the value `e1` if `cond` is true, and `e2` otherwise:

```
{ c: (float(v) if c == "flottant" else v) for c, v in test.items() }
```

However, if there are too many cases to handle, writing a function remains useful. Study this example carefully:

```
def adapter_types(c, v):
    if c == "entier":
        return int(v)
    elif c == "flottant":
        return float(v)
    elif c == "booleen":
        return (True if v == "Oui" else False)
    elif c == "tuple_entiers":
        # split
        vs = v.split(',')
        # then convert each component
        vs = [int(v) for v in vs]
        # then turn the list into a tuple
        return tuple(vs)
    else:
        return v

{c: adapter_types(c, v) for c, v in test.items()}
```

*Tip*: you can use a "tuple comprehension" `tuple(...)` to shorten the "tuple_entiers" case. More precisely, that part could be replaced by `return tuple( int(x) for x in v.split(',') )`. Try it...

> **Remember**: If `d` is a dictionary, we can use a comprehension to *transform* it; notably, "forget" some key-value pairs with `if`, or adjust the type of some values based on their key.
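Both ideas can be combined in a single comprehension. Here is a minimal sketch reusing the `test` dictionary from above (the name `resultat` is ours, chosen for illustration): it "forgets" one pair and converts another at the same time.

```python
test = {"entier": "13", "chaine": "python", "flottant": "3.14",
        "booleen": "Oui", "tuple_entiers": "4,5,6"}

# drop the "chaine" pair, and convert the value of "entier" to int
resultat = {c: (int(v) if c == "entier" else v)
            for c, v in test.items()
            if c != "chaine"}

print(resultat)  # "chaine" is gone and the value of "entier" is now an int
```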
## Pre-processing

As a reminder, our test table is:

```
table_test = [
    {'n_client': '1212', 'nom': 'Lacasse'  , 'prenom': 'Aubrey'   , 'ville': 'Annecy'  , 'position': '45.900000,6.116667'},
    {'n_client': '1343', 'nom': 'Primeau'  , 'prenom': 'Angelette', 'ville': 'Tours'   , 'position': '47.383333,0.683333'},
    {'n_client': '2454', 'nom': 'Gabriaux' , 'prenom': 'Julie'    , 'ville': 'Bordeaux', 'position': '44.833333,-0.566667'},
    {'n_client': '895' , 'nom': 'Gaulin'   , 'prenom': 'Dorene'   , 'ville': 'Lyon'    , 'position': '45.750000,4.850000'},
    {'n_client': '2324', 'nom': 'Jobin'    , 'prenom': 'Aubrey'   , 'ville': 'Bourges' , 'position': '47.083333,2.400000'},
    {'n_client': '34'  , 'nom': 'Boncoeur' , 'prenom': 'Kari'     , 'ville': 'Nantes'  , 'position': '47.216667,-1.550000'},
    {'n_client': '1221', 'nom': 'Parizeau' , 'prenom': 'Olympia'  , 'ville': 'Metz'    , 'position': '49.133333,6.166667'},
    {'n_client': '1114', 'nom': 'Paiement' , 'prenom': 'Inès'     , 'ville': 'Bordeaux', 'position': '44.833333,-0.566667'},
    {'n_client': '3435', 'nom': 'Chrétien' , 'prenom': 'Adèle'    , 'ville': 'Moulin'  , 'position': '46.566667,3.333333'},
    {'n_client': '5565', 'nom': 'Neufville', 'prenom': 'Ila'      , 'ville': 'Toulouse', 'position': '43.600000,1.433333'},
    {'n_client': '2221', 'nom': 'Larivière', 'prenom': 'Alice'    , 'ville': 'Tours'   , 'position': '47.383333,0.683333'},
]
```

We can see that some **descriptors** could have a more precise **type** than `str`; for example, the `n_client` descriptor would benefit from being an `int`. Let's improve this using a *comprehension*:

```
table_test_2 = [  # list comprehension: produces a list of ...
    {  # ... dictionaries ...
        c: int(v) if c == 'n_client' else v  # convert the value if the descriptor is 'n_client'
        for c, v in enr.items()
    }  # ... for each record of the original table.
    for enr in table_test
]

# note the value of the 'n_client' descriptor
table_test_2[:2]
```

Probably still a bit hard to read?
Is the ternary operator `e1 if cond else e2` still not perfectly clear? Nor the comprehension syntax? Then here is the equivalent as a function with a nested loop.

```
def conversion1(table):
    tc = []  # table being built
    # for each record
    for l in table:
        enr = {}  # new record
        # for each key-value pair of the current record
        for c, v in l.items():
            # should we convert to int?
            if c == 'n_client':
                enr[c] = int(v)
            else:
                enr[c] = v
        # append the new record to the table being built
        tc.append(enr)
    return tc

conversion1(table_test)
```

But there is another descriptor that causes trouble: `position`. Its type is `str` in the format `'<float>,<float>'`. We would like its type to be a "`tuple` of `float`", i.e. to go, for example, from `'45.900000,6.116667'` to `(45.900000, 6.116667)`.

#### Exercise 2

Taking inspiration from the conversion solved above, transform the `position` descriptor into a tuple of 2 floats.

```
# with a function
def conversion2(table):
    pass

# with the comprehension syntax (you may use several steps)
```

_________

## Projection

The **projection** operation consists of "forgetting" certain "columns" of the dataset.
We start again from the table below:

```
table_test = [
    {'n_client': 1212, 'nom': 'Lacasse'  , 'prenom': 'Aubrey'   , 'ville': 'Annecy'  , 'position': (45.900000,6.116667)},
    {'n_client': 1343, 'nom': 'Primeau'  , 'prenom': 'Angelette', 'ville': 'Tours'   , 'position': (47.383333,0.683333)},
    {'n_client': 2454, 'nom': 'Gabriaux' , 'prenom': 'Julie'    , 'ville': 'Bordeaux', 'position': (44.833333,-0.566667)},
    {'n_client': 895 , 'nom': 'Gaulin'   , 'prenom': 'Dorene'   , 'ville': 'Lyon'    , 'position': (45.750000,4.850000)},
    {'n_client': 2324, 'nom': 'Jobin'    , 'prenom': 'Aubrey'   , 'ville': 'Bourges' , 'position': (47.083333,2.400000)},
    {'n_client': 34  , 'nom': 'Boncoeur' , 'prenom': 'Kari'     , 'ville': 'Nantes'  , 'position': (47.216667,-1.550000)},
    {'n_client': 1221, 'nom': 'Parizeau' , 'prenom': 'Olympia'  , 'ville': 'Metz'    , 'position': (49.133333,6.166667)},
    {'n_client': 1114, 'nom': 'Paiement' , 'prenom': 'Inès'     , 'ville': 'Bordeaux', 'position': (44.833333,-0.566667)},
    {'n_client': 3435, 'nom': 'Chrétien' , 'prenom': 'Adèle'    , 'ville': 'Moulin'  , 'position': (46.566667,3.333333)},
    {'n_client': 5565, 'nom': 'Neufville', 'prenom': 'Ila'      , 'ville': 'Toulouse', 'position': (43.600000,1.433333)},
    {'n_client': 2221, 'nom': 'Larivière', 'prenom': 'Alice'    , 'ville': 'Tours'   , 'position': (47.383333,0.683333)},
]
```

The problem is the following: given a list of **descriptors** *to forget* (since each descriptor corresponds to a "column" of the dataset), produce the corresponding data table.

*Example*: if `a_oublier = ['n_client', 'prenom', 'position']`, then the record:

    {'n_client': 1212, 'nom': 'Lacasse', 'prenom': 'Aubrey', 'ville': 'Annecy', 'position': (45.900000,6.116667)}

must be transformed into

    {'nom': 'Lacasse', 'ville': 'Annecy'}

and so on for each record.
Here is a solution that uses a function:

```
def projection_par_oubli(tableau, a_oublier):
    tsel = []  # for our new table
    # for each record
    for ligne in tableau:
        enr = {}  # for our new record
        # for each key-value pair of the current record
        for c, v in ligne.items():
            if c not in a_oublier:
                # keep this pair
                enr[c] = v
        # append our new record
        tsel.append(enr)
    return tsel

projection_par_oubli(table_test, ['n_client', 'prenom', 'position'])
```

But it is much simpler to use a *comprehension* in this case...

#### Exercise 3

1. Can you do the same thing with the comprehension notation, by completing the following?

```
a_oublier = ['n_client', 'prenom', 'position']

[
    {
        ...
    }
    for enr in table_test
]

a_oublier = ['n_client', 'prenom', 'position']

[
    {
        c: v
        for c, v in enr.items()
        if c not in a_oublier
    }
    for enr in table_test
]
```

2. Write a function `projection(tableau, a_conserver)` that takes as arguments the data table and the list of descriptors **to keep**; it returns the "projected" table.

```
def projection(tableau, a_conserver):
    return [
        {
            c: v
            for c, v in enr.items()
            if c in a_conserver
        }
        for enr in tableau
    ]

# test
projection(table_test, ['nom', 'ville'])
```

_____

## Selection

We now want to transform the table by keeping only the *records* that satisfy *a certain criterion*; in other words, we want to **select certain rows** (and drop the others).
```
table_test = [
    {'n_client': 1212, 'nom': 'Lacasse'  , 'prenom': 'Aubrey'   , 'ville': 'Annecy'  , 'position': (45.900000,6.116667)},
    {'n_client': 1343, 'nom': 'Primeau'  , 'prenom': 'Angelette', 'ville': 'Tours'   , 'position': (47.383333,0.683333)},
    {'n_client': 2454, 'nom': 'Gabriaux' , 'prenom': 'Julie'    , 'ville': 'Bordeaux', 'position': (44.833333,-0.566667)},
    {'n_client': 895 , 'nom': 'Gaulin'   , 'prenom': 'Dorene'   , 'ville': 'Lyon'    , 'position': (45.750000,4.850000)},
    {'n_client': 2324, 'nom': 'Jobin'    , 'prenom': 'Aubrey'   , 'ville': 'Bourges' , 'position': (47.083333,2.400000)},
    {'n_client': 34  , 'nom': 'Boncoeur' , 'prenom': 'Kari'     , 'ville': 'Nantes'  , 'position': (47.216667,-1.550000)},
    {'n_client': 1221, 'nom': 'Parizeau' , 'prenom': 'Olympia'  , 'ville': 'Metz'    , 'position': (49.133333,6.166667)},
    {'n_client': 1114, 'nom': 'Paiement' , 'prenom': 'Inès'     , 'ville': 'Bordeaux', 'position': (44.833333,-0.566667)},
    {'n_client': 3435, 'nom': 'Chrétien' , 'prenom': 'Adèle'    , 'ville': 'Moulin'  , 'position': (46.566667,3.333333)},
    {'n_client': 5565, 'nom': 'Neufville', 'prenom': 'Ila'      , 'ville': 'Toulouse', 'position': (43.600000,1.433333)},
    {'n_client': 2221, 'nom': 'Larivière', 'prenom': 'Alice'    , 'ville': 'Tours'   , 'position': (47.383333,0.683333)},
]
```

For example, we might want to select the clients who live in Tours. With a function, this gives:

```
def selection_exemple(tableau):
    tsel = []
    for enr in tableau:
        if enr['ville'] == 'Tours':
            tsel.append(enr)
    return tsel

selection_exemple(table_test)
```

but the comprehension syntax is much simpler!

```
[ enr for enr in table_test if enr['ville'] == 'Tours' ]
```

#### Exercise 4

1. Write a function `selection2` that returns the table, keeping only the records whose client number `"n_client"` is in the interval `[1000;3000]`.
```
def selection2(tableau):
    pass

selection2(table_test)

def selection2(tableau):
    tsel = []
    for enr in tableau:
        if 1000 <= enr['n_client'] <= 3000:
            tsel.append(enr)
    return tsel

# or better!
def selection2_bis(tableau):
    return [e for e in tableau if 1000 <= e['n_client'] <= 3000]

selection2(table_test)
```

2. Write a function `selection3` that selects the records whose longitude is positive - `"position": <(lat., long.)>` - and whose `prenom` starts with an 'A'.

*Note*: the characters of a `str` are *indexed*: if `c = "Python"` then `c[0]` is "P".

```
def selection3(tableau):
    pass

selection3(table_test)

def selection3(tableau):
    tsel = []
    for enr in tableau:
        if enr['position'][1] >= 0 and enr['prenom'][0] == 'A':
            tsel.append(enr)
    return tsel

def selection3_bis(tableau):
    return [
        e for e in tableau
        if e['position'][1] >= 0 and e['prenom'][0] == 'A'
    ]

selection3(table_test)
```

____

### A function that takes another function as an argument!

It is easy to adapt the previous code to select according to a different criterion, but notice that *we always do the same thing*:

<pre>
<strong>For</strong> each record of the dataset:
    <strong>If</strong> this record satisfies the <strong>criterion</strong>:
        append it to the accumulator
</pre>

Only the **criterion** changes! We can do much better by following these steps:

1. *Define a* **filter**: a function that, *given a record*, returns a **boolean**:
   - `True` if the record satisfies a certain criterion, `False` otherwise.
2. *Adapt* the **selection** function so that it can *receive the **filter** function as an argument*.
Let's start with **step 2** :-o

```
def selection(tableau, filtre_fn):
    tsel = []
    for enr in tableau:
        # reminder: filtre_fn is a function that
        # expects a record as input
        # and returns `True` or `False`
        if filtre_fn(enr):
            tsel.append(enr)
    return tsel
```

For **step 1**, a "micro function" is very often enough:

```
a_tours = lambda enregistrement: enregistrement['ville'] == 'Tours'
```

Finally, we combine the two:

```
selection(table_test, a_tours)
```

In fact, the whole point of `lambda` "micro functions", sometimes called *anonymous functions*, is that they can be used "in place":

```
# on several lines for clarity; put back on one line.
selection(
    table_test,
    # a function is expected here, and a lambda is a function!
    lambda e: e['ville'] == 'Tours'
)
```

If the filter is more complicated, nothing prevents us from using a "normal" function:

```
def filtre_tordu(enr):
    condition1 = enr['ville'] == 'Tours'
    condition2 = enr['nom'][0] in ['P', 'B']
    return condition1 or condition2

selection(table_test, filtre_tordu)

# ... but we can still use a "micro function" in this case.
selection(
    table_test,
    lambda e: e['ville'] == 'Tours' or e['nom'][0] in ['P', 'B']
)
```

We can even simplify the code of the `selection` function by using a comprehension:

```
def selection(tableau, filtre_fn):
    return [e for e in tableau if filtre_fn(e)]

selection(
    table_test,
    lambda e: e['ville'] == 'Tours' or e['nom'][0] in ['P', 'B']
)
```

#### Exercise 5

Use the `selection` function together with "micro functions" to solve the selections of exercise 4; select the records whose:

1. client number `"n_client"` lies in the interval `[1000;3000]`,
2. longitude is positive - `"position": <(lat., long.)>` - and whose `prenom` starts with an 'A'.
```
#1
selection(table_test, lambda e: 1000 <= e["n_client"] <= 3000)

#2
selection(table_test, lambda e: e["position"][1] >= 0 and e["prenom"][0] == "A")
```

_____

Meeting "a function that takes another function as an argument" - sometimes called a **higher-order function** - for the first time is often disconcerting. To get over the hump, here is a complementary exercise.

#### Exercise 6 - my first higher-order function

Write a function `appliquer(liste, fn)` that takes as arguments:

- a list of elements of type 'a': this type is arbitrary, what matters is that all the elements of the list have the same type,
- a function `fn` that takes an element of type 'a' as argument and returns an element of type 'b'.

Finally, the function `appliquer` returns a list of elements of type 'b'.

**In short**: `appliquer` receives `liste: "[a]"` and `fn: "a -> b"` and produces `"[b]"`...

For *example*, `appliquer(["1", "2", "3"], int)` returns `[1, 2, 3]`.

Use the assertions that follow and the example of the `selection` function to solve the problem.

```
def appliquer(liste, fn):
    # your turn!
l = [1,2,3]
f = lambda x: x**2 # f: int -> int
assert appliquer(l, f) == [1,4,9]

l = ["un", "deux", "trois"]
f = lambda ch: len(ch) # f: str -> int
assert appliquer(l, f) == [2,4,5]

f = lambda ch: ch.upper() # f: str -> str
assert appliquer(l, f) == ["UN", "DEUX", "TROIS"]
```

_____

## Sorting the table by one or more descriptors

[Companion video 3](https://vimeo.com/535451191)

```
table_test = [
 {'n_client': 1212, 'nom': 'Lacasse'  , 'prenom': 'Aubrey'   , 'ville': 'Annecy'  , 'position': (45.900000,6.116667)},
 {'n_client': 1343, 'nom': 'Primeau'  , 'prenom': 'Angelette', 'ville': 'Tours'   , 'position': (47.383333,0.683333)},
 {'n_client': 2454, 'nom': 'Gabriaux' , 'prenom': 'Julie'    , 'ville': 'Bordeaux', 'position': (44.833333,-0.566667)},
 {'n_client': 895 , 'nom': 'Gaulin'   , 'prenom': 'Dorene'   , 'ville': 'Lyon'    , 'position': (45.750000,4.850000)},
 {'n_client': 2324, 'nom': 'Jobin'    , 'prenom': 'Aubrey'   , 'ville': 'Bourges' , 'position': (47.083333,2.400000)},
 {'n_client': 34  , 'nom': 'Boncoeur' , 'prenom': 'Kari'     , 'ville': 'Nantes'  , 'position': (47.216667,-1.550000)},
 {'n_client': 1221, 'nom': 'Parizeau' , 'prenom': 'Olympia'  , 'ville': 'Metz'    , 'position': (49.133333,6.166667)},
 {'n_client': 1114, 'nom': 'Paiement' , 'prenom': 'Inès'     , 'ville': 'Bordeaux', 'position': (44.833333,-0.566667)},
 {'n_client': 3435, 'nom': 'Gabriaux' , 'prenom': 'Adèle'    , 'ville': 'Moulin'  , 'position': (46.566667,3.333333)},
 {'n_client': 5565, 'nom': 'Neufville', 'prenom': 'Ila'      , 'ville': 'Toulouse', 'position': (43.600000,1.433333)},
 {'n_client': 2221, 'nom': 'Larivière', 'prenom': 'Alice'    , 'ville': 'Tours'   , 'position': (47.383333,0.683333)},
]
```

Suppose we want to **order** the records by their `n_client` descriptor, from the smallest number to the largest. To do so, we will use Python's *built-in* function `sorted`.
Applied to a list of comparable values, it does what you expect:

```
sorted([3, 6, 2, 7, 1, 8])
sorted(["un", "deux", "trois", "quatre"]) # dictionary (lexicographic) order
sorted([(1, 2), (2, 3), (2, 1), (1, 3)])
```

But how could it sort our records? It would need to know which descriptor(s) we want to sort them by. For this reason, the `sorted` function accepts an optional second *parameter* `key`. It can be used to specify **a function** that maps an "object" of the list to **the value (or values) by which we want to sort**.

For example, *to sort our records* **by client number**, we pass it the function `lambda e: e["n_client"]`:

```
sorted(table_test, key=lambda e: e['n_client'])
```

*Another example*: if we want to sort the records (clients) by their *last name*, **then** by their *first name*, our function must return a tuple with these values in the same order:

```
sorted(
    table_test,
    key=lambda e: (e['nom'], e['prenom']) # careful: the parentheses around the tuple are mandatory
)
```

Do you see the difference if we sort only on the last name? (Look closely.) For equal last names (see "Gabriaux"), the records are ordered by "prenom" (so "Adèle" comes before "Julie", unlike the initial order...).

#### Exercise 7

1. Sort the table by the first letter of the first name, then by longitude. *Reminder*: position=(lat,long)

```
sorted(table_test, key=lambda e: (e["prenom"][0], e["position"][1]))
```

2. Knowing that `sorted` has an optional third parameter named `reverse`, which is `False` by default, sort the table by client number in decreasing order (largest first).

```
sorted(table_test, key=lambda e: e["n_client"], reverse=True)
```

## Merging two tables with a shared descriptor

In practice, data is often spread across several tables.
For example, suppose you find the two following "tables" in a CSV dataset. You can probably guess that this means, for example:

> Jean-Pierre Durand, born on 23/05/1985, *lives* at 7 rue Georges Courteline in Tours (37000).

We obtain this by "matching" the records of the two tables whose values for the descriptors `id` and `id_personne` coincide. This matching operation is called a **join** or a **merge**. It produces a table from two others, relying on descriptors "shared" by both tables.

```
personnes = [
    {"id": 0, "nom": "Durand", "prenom": "Jean-Pierre", "date_naissance": "23/05/1985"},
    {"id": 1, "nom": "Dupont", "prenom": "Christophe", "date_naissance": "15/12/1967"},
    {"id": 2, "nom": "Terta", "prenom": "Henry", "date_naissance": "12/06/1978"}
]

adresses = [
    {"rue": "32 rue Général De Gaulle", "cp": "27315", "ville": "Harquency", "id_personne": 2},
    {"rue": "7 rue Georges Courteline", "cp": "37000", "ville": "Tours", "id_personne": 0}
]
```

#### Exercise 8

Write a function `fusionner(tab1, tab2, d1, d2)` which, given two tables (of named tuples), a descriptor of the first and a descriptor of the second, returns a table that "merges" the two given tables.

More precisely, each **pair of records** of the input tables *having the same value for the descriptors `d1` and `d2`* produces one record in the output table. The descriptors of the output table are the descriptors of both tables *except* `d1` and `d2`.
*For example*, `fusionner(personnes, adresses, "id", "id_personne")` produces:

    [{'nom': 'Durand', 'prenom': 'Jean-Pierre', 'date_naissance': '23/05/1985', 'rue': '7 rue Georges Courteline', 'cp': '37000', 'ville': 'Tours'},
     {'nom': 'Terta', 'prenom': 'Henry', 'date_naissance': '12/06/1978', 'rue': '32 rue Général De Gaulle', 'cp': '27315', 'ville': 'Harquency'}]

*Hint*: use two nested loops to consider all possible pairs of records; when the two records of a pair have the same value for `d1` and `d2`, build a new dictionary by copying the appropriate key-value pairs, then append it to the accumulator...

```
def fusionner(tab1, tab2, d1, d2):
    tab = [] # the accumulator
    pass

fusionner(personnes, adresses, "id", "id_personne")

def fusionner(tab1, tab2, d1, d2):
    tab = []
    for e1 in tab1:
        for e2 in tab2:
            if e1[d1] == e2[d2]:
                d = {}
                for c in e1:
                    if c != d1:
                        d[c] = e1[c]
                for c in e2:
                    if c != d2:
                        d[c] = e2[c]
                tab.append(d)
    return tab

fusionner(personnes, adresses, "id", "id_personne")
```
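The nested-loop solution can also be written more compactly (a sketch, in the spirit of the `selection2_bis` alternatives above, using a comprehension and `{**d1, **d2}` dictionary merging; same behaviour as `fusionner`):

```python
def fusionner_bis(tab1, tab2, d1, d2):
    # for each matching pair, drop the join keys and merge the two records
    return [
        {**{c: e1[c] for c in e1 if c != d1},
         **{c: e2[c] for c in e2 if c != d2}}
        for e1 in tab1
        for e2 in tab2
        if e1[d1] == e2[d2]
    ]
```

Both versions are equivalent; pick whichever reads best to you.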
# Markov Decision Process (MDP)

# Discounted Future Return

$$R_t = \sum^{T-t}_{k=0}\gamma^{k}r_{t+k+1}$$

$$R_0 = \gamma^{0} * r_{1} + \gamma^{1} * r_{2} = r_{1} + \gamma * r_{2}\quad (\text{when } T = 1)$$

$$R_1 = \gamma^{0} * r_{2} = r_{2}\quad (\text{when } T = 1)$$

$$\text{so},\ R_0 = r_{1} + \gamma R_1$$

A higher $\gamma$ discounts future rewards less, and a lower $\gamma$ discounts them more (in practice, $\gamma$ is usually set between 0.97 and 0.99).

```
def discount_rewards(rewards, gamma=0.98):
    discounted_returns = [0 for _ in rewards]
    discounted_returns[-1] = rewards[-1]
    # work backwards using the recursion R_t = r + gamma * R_{t+1}
    for t in range(len(rewards)-2, -1, -1):
        discounted_returns[t] = rewards[t] + discounted_returns[t+1]*gamma
    return discounted_returns
```

If rewards grow over time, the Discounted Future Return method is not well suited.

```
print(discount_rewards([1,2,4]))
```

If rewards stay the same or shrink over time, the Discounted Future Return method works well.

```
# R_0 is about 2.94 (vs. 3 undiscounted)
# examples are like succeeding or failing
print(discount_rewards([1,1,1]))

# R_0 is about 2.65 (vs. 2.7 undiscounted)
# examples are like time-consuming tasks
print(discount_rewards([1,0.9,0.8]))
```

# Explore and Exploit

## $\epsilon$-Greedy strategy

Each time the agent decides to take an action, it chooses one of the two: the recommended one (exploit) or a random one (explore). The value $\epsilon$ stands for the probability of taking a random action.
```
import random
import numpy as np

def epsilon_greedy_action(action_distribution, epsilon=1e-1):
    if random.random() < epsilon:
        # explore: pick a uniformly random action
        return np.argmax(np.random.random(action_distribution.shape))
    else:
        # exploit: pick the recommended action
        return np.argmax(action_distribution)
```

Here we assume there are 10 actions with given probabilities (keeping the probabilities fixed at each step makes it easier to monitor the result) for the agent to choose from.

```
action_distribution = np.random.random((1, 10))
print(action_distribution)
print(epsilon_greedy_action(action_distribution))
```

## Annealing $\epsilon$-Greedy strategy

At the beginning of training, the agent knows nothing about the environment, nor about the state or the feedback it gets for an action. We therefore want the agent to take more random actions (exploring) early in training. After a long training period, the agent knows the environment better and has learned the feedback for its actions, so we want it to act based on its own experience (exploiting). The idea is to anneal (decay) the $\epsilon$ parameter each time the agent takes an action. A classic annealing schedule decays $\epsilon$ from 0.99 to 0.01 over around 10000 steps.
```
def epsilon_greedy_annealed(action_distribution, training_percentage,
                            epsilon_start=1.0, epsilon_end=1e-2):
    # linearly interpolate epsilon from epsilon_start down to epsilon_end
    annealed_epsilon = epsilon_start * (1-training_percentage) + epsilon_end * training_percentage
    if random.random() < annealed_epsilon:
        # take random action
        return np.argmax(np.random.random(action_distribution.shape))
    else:
        # take the recommended action
        return np.argmax(action_distribution)
```

Here we assume there are 10 actions with given probabilities (fixed at each step to make the result easier to monitor) for the agent to choose from.

```
action_distribution = np.random.random((1, 10))
print(action_distribution)
for i in range(1, 99, 10):
    percentage = i / 100.0
    action = epsilon_greedy_annealed(action_distribution, percentage)
    print("percentage : {} and action is {}".format(percentage, action))
```

# Learning to Earn Max Returns

## Policy Learning

In policy learning, the agent directly learns a policy that earns the maximum returns. For instance, when riding a bicycle, if the bicycle tilts to the left, we push harder on the right side. Such a strategy is called policy learning.

### Gradient Descent in Policy Learning

$$\arg\min_\theta\ -\sum_{i}\ R_{i}\ \log{p(y_{i}|x_{i}, \theta)}$$

where $R_{i}$ is the discounted future return and $y_{i}$ is the action taken at time $i$.

## Value Learning

In value learning, the agent learns the value of taking an action in a given state; that is, it learns a value for each [state, action] pair. For example, when riding a bicycle, we assign higher or lower values to the possible combinations of [state, action]. Such a strategy is called value learning.

```
```
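To make the [state, action] value idea concrete, here is a minimal tabular Q-learning sketch (the state/action sizes, the learning rate `alpha`, and the single hand-picked transition are illustrative assumptions, not part of the text above):

```python
import numpy as np

# Q[s, a] estimates the value of taking action a in state s.
n_states, n_actions = 5, 2
Q = np.zeros((n_states, n_actions))

def q_update(Q, s, a, r, s_next, alpha=0.1, gamma=0.98):
    # move Q[s, a] toward the observed reward plus the discounted
    # value of the best action available in the next state
    target = r + gamma * np.max(Q[s_next])
    Q[s, a] += alpha * (target - Q[s, a])
    return Q

# one observed transition: state 0, action 1, reward 1.0, next state 1
Q = q_update(Q, s=0, a=1, r=1.0, s_next=1)
print(Q[0, 1])  # 0.1 after one update from an all-zero table
```

Repeated over many transitions (with actions chosen, e.g., by the $\epsilon$-greedy strategies above), the table converges toward the values the agent needs for exploitation.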
<img src="../figures/HeaDS_logo_large_withTitle.png" width="300">

<img src="../figures/tsunami_logo.PNG" width="600">

[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/Center-for-Health-Data-Science/PythonTsunami/blob/intro/Numbers_and_operators/Numbers_and_operators.ipynb)

# Numerical Operators

*Prepared by [Katarina Nastou](https://www.cpr.ku.dk/staff/?pure=en/persons/672471)*

## Objectives

- understand differences between `int`s and `float`s
- work with simple math operators
- add comments to your code

## Numbers

Two main types of numbers:

- Integers: `56, 3, -90`
- Floating Points: `5.666, 0.0, -8.9`

## Operators

- addition: `+`
- subtraction: `-`
- multiplication: `*`
- division: `/`
- exponentiation, power: `**`
- modulo: `%`
- integer division: `//` (what does it return?)

```
# playground
```

### Questions: Ints and Floats

- Question 1: Which of the following numbers is NOT a float? (a) 0 (b) 2.3 (c) 23.0 (d) -23.0 (e) 0.0
- Question 2: What type does the following expression result in?

```python
3.0 + 5
```

### Operators 1

- Question 3: How can we add parentheses to the following expression to make it equal 100?

```python
1 + 9 * 10
```

- Question 4: What is the result of the following expression?

```python
3 + 14 * 2 + 4 * 5
```

- Question 5: What is the result of the following expression?

```python
5 * 9 / 4 ** 3 - 6 * 7
```

```
```

### Comments

- Question 6: What is the result of running this code?

```python
15 / 3 * 2 # + 1
```

```
```

### Questions: Operators 2

- Question 7: Which of the following result in integers in Python? (a) 8 / 2 (b) 3 // 2 (c) 4.5 * 2
- Question 8: What is the result of `18 // 3`?
- Question 9: What is the result of `121 % 7`?

## Exercise

Ask the user for a number using the function [input()](https://www.askpython.com/python/examples/python-user-input) and then multiply that number by 2 and print out the value.
Remember to store the input value in a variable, so that you can use it afterwards in the multiplication.

Modify your previous calculator and ask for a second number (instead of x * 2 --> x * y). Now get the square of the number that the user inputs.

### Note

Check out also the [math library](https://docs.python.org/3/library/math.html) in Python. You can use this library for more complex operations with numbers. Just import the library and try it out:

```python
import math
print(math.sqrt(25))
print(math.log10(10))
```
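A possible sketch for the exercises above (the helper name `multiply_and_square` is just an illustration; the key point is that `input()` returns a string, so convert it with `float()` before doing arithmetic):

```python
def multiply_and_square(x, y):
    # multiply the two numbers, and also square the second one
    return x * y, y ** 2

# with user input it would look like this (commented out so the
# sketch runs non-interactively):
# x = float(input("First number: "))
# y = float(input("Second number: "))
x, y = 3.0, 4.0
product, square = multiply_and_square(x, y)
print(product)  # 12.0
print(square)   # 16.0
```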
This notebook accompanies the Whisper Connected Fracture Analysis note. ``` from pathlib import Path import geopandas as gpd import pandas as pd import numpy as np from shapely.geometry import LineString from pyfracman.data import clean_columns from pyfracman.fab import parse_fab_file from pyfracman.frac_geo import flatten_frac, get_mid_z, get_fracture_set_stats from pyfracman.well_geo import ( load_stage_location, load_survey_export, well_surveys_to_linestrings, stage_locs_to_gdf ) data_dir = Path(r"C:\Users\scott.mckean\Desktop\Data Exports") # load fractures frac_fpath = next(data_dir.rglob("Interpreted_Seismic_Lineaments.fab")) fracs = parse_fab_file(frac_fpath) ### Load fractures ### # load fracture properties # parse average properties per fracture # must average for tesselated fractures if len(fracs.get('t_properties')) > 0: fracs['prop_list'] = [np.mean(x, axis=1) for x in fracs['t_properties']] prop_df = pd.DataFrame(fracs['prop_list'], columns = fracs['prop_dict'].values(), index=fracs['t_fid']) else: prop_df = pd.DataFrame(fracs['prop_list'], columns = fracs['prop_dict'].values(), index=fracs['fid']) prop_df.index.set_names('fid', inplace=True) # load fracture geometry and flatten to 2D at midpoint of frac plane if len(fracs.get('t_nodes')) > 0: frac_linestrings = list(map(flatten_frac, fracs['t_nodes'])) frac_mid_z = list(map(get_mid_z, fracs['t_nodes'])) else: frac_linestrings = list(map(flatten_frac, fracs['fracs'])) frac_mid_z = list(map(get_mid_z, fracs['fracs'])) frac_gdf = gpd.GeoDataFrame(prop_df, geometry=frac_linestrings) # for stochastic connections - don't need this yet conn_out = [] for conn_fpath in data_dir.rglob("*_Connections.txt"): set_a = get_fracture_set_stats(conn_fpath, set_name='Stochastic Faults - Set A_1', set_alias='set_a') set_b = get_fracture_set_stats(conn_fpath, set_name='Stochastic Faults - Set B_1', set_alias='set_b') sets = set_a.merge(set_b, on=['stage_no','well','object']) conn_out.append(sets) stochastic_connections = 
pd.concat(conn_out) ### load stages and surveys ### # load surveys and convert to linestrings surveys = pd.concat( [load_survey_export(well_path) for well_path in data_dir.glob("*_well.txt")] ) survey_linestrings = well_surveys_to_linestrings(surveys) # load stage locations and convert to GDF with points and linestring stage_locs = pd.concat( [load_stage_location(well_path) for well_path in data_dir.glob("*_intervals.txt")] ) stage_gdf = stage_locs_to_gdf(stage_locs) # get stage start times, with clean non-flowback for geometry merge stage_times = pd.read_parquet(data_dir / "stage_times.parquet") stage_times['stage_w_f'] = stage_times['stage'].copy() stage_times['stage'] = stage_times['stage_w_f'].astype(int) # load manual connections manual_map = pd.read_excel(data_dir / 'Stage - Event Array.xlsx', skiprows=1) connections = manual_map.iloc[:,2:].apply(lambda x: x.dropna().unique().astype(int), axis=1) connections.name = 'connections' # get well and stage well_stg = manual_map.Stage.str.split("_",expand=True) well_stg.columns = ['well', 'stage'] manual_connections = pd.concat([well_stg, connections], axis=1) manual_connections['stage_w_f'] = manual_connections['stage'].copy() manual_connections['stage_w_f'] = manual_connections['stage_w_f'].str.replace('F',".5").astype(float) manual_connections['stage'] = manual_connections['stage_w_f'].astype(int) # add stage geometry and start times manual_connections = (manual_connections .merge(stage_times, on=['well','stage_w_f','stage'], how='left') .merge(stage_gdf, on=['well','stage'], how='left') .pipe(gpd.GeoDataFrame) ) # load manual f lineaments = pd.read_csv(data_dir / 'Seismic Lineament centres.csv') lineaments.columns = clean_columns(lineaments.columns) lineaments = gpd.GeoDataFrame(lineaments, geometry=gpd.points_from_xy(lineaments['x'], lineaments['y'])) lineaments.head() connections_out = [] for i, row in manual_connections.iterrows(): for conn_fid in row.connections: # make one row per connection, with a single 
geometry stg_geom = row.geometry frac_geom = lineaments.query("id == @conn_fid").geometry.iloc[0] conn_geom = LineString([stg_geom, frac_geom]) conn_out = gpd.GeoDataFrame( row.to_frame().transpose()[['well','stage','well_stage','start_date','is_count']].reset_index(), geometry=[conn_geom] ) conn_out['fid'] = conn_fid conn_out['start_date'] = conn_out.start_date.astype(str) connections_out.append(conn_out) connections_out = pd.concat(connections_out) connections_out.to_file(data_dir / 'manual_connections.shp') connections_out.to_file(data_dir / 'manual_connections.geojson', driver='GeoJSON') connections_out.query("well_stage == 'A6_28F'") import matplotlib.pyplot as plt fig, ax = plt.subplots() survey_linestrings.plot(ax = ax) frac_gdf.plot(ax = ax, color='r') stage_gdf.set_geometry('stg_line').plot(ax = ax, color='k') connections_out.plot(ax = ax, color='b') ax.set_aspect('equal') plt.xlim(-250, 2000) plt.ylim(-1250, 1000) plt.show() ```
# Generate AIBL cohort dataset and residuals files in hdf5 format ``` # Import data from Excel sheet import pandas as pd df = pd.read_excel('aibl_ptdemog_final.xlsx', sheet_name='aibl_ptdemog_final') #print(df) sid = df['RID'] grp = df['DXCURREN'] age = df['age'] sex = df['PTGENDER(1=Female)'] tiv = df['Total'] # TIV field = df['field_strength'] grpbin = (grp > 1) # 1=CN, ... # Scan for nifti file names import glob dataAIBL = sorted(glob.glob('mwp1_MNI_AIBL/*.nii.gz')) dataFiles = dataAIBL numfiles = len(dataFiles) print('Found ', str(numfiles), ' nifti files') # Match covariate information import re debug = False cov_idx = [-1] * numfiles # list; array: np.full((numfiles, 1), -1, dtype=int) print('Matching covariates for loaded files ...') for i,id in enumerate(sid): p = [j for j,x in enumerate(dataFiles) if re.search('_%d_MR_' % id, x)] # extract ID numbers from filename, translate to Excel row index if len(p)==0: if debug: print('Did not find %04d' % id) # did not find Excel sheet subject ID in loaded file selection else: if debug: print('Found %04d in %s: %s' % (id, p[0], dataFiles[p[0]])) cov_idx[p[0]] = i # store Excel index i for data file index p[0] print('Checking for scans not found in Excel sheet: ', sum(x<0 for x in cov_idx)) labels = pd.DataFrame({'Group':grpbin}).iloc[cov_idx, :] grps = pd.DataFrame({'Group':grp, 'RID':sid}).iloc[cov_idx, :] # Actually load nifti files into array import nibabel as nib import numpy as np from sklearn import linear_model # define FOV to reduce required memory size x_range_from = 10; x_range_to = 110 y_range_from = 13; y_range_to = 133 z_range_from = 5; z_range_to = 105 # 1. dimension: subject # 2. dimension: img row # 3. dimension: img col # 4. dimension: img depth # 5. 
dimension: img channel images = np.zeros((numfiles, z_range_to-z_range_from, x_range_to-x_range_from, y_range_to-y_range_from, 1), dtype=np.float32) # numfiles× z × x × y ×1; avoid 64bit types #print(images.shape) for i in range(numfiles): # for loop over files and load if (i % 50 == 0): print('Loading file %d of %d' % (i+1, numfiles)) img = nib.load(dataFiles[i]) img = img.get_fdata()[x_range_from:x_range_to, y_range_from:y_range_to, z_range_from:z_range_to] img = np.transpose(img, (2, 0, 1)) # reorder dimensions to match coronal view z*x*y in MRIcron etc. #img = np.fliplr(img) # flip upside down and #img = np.flipud(img) # left/right to match MRIcroN views when plotted directly #img = np.flip(img, 2) # flip front/back img = np.flip(img) # flip all positions #print(img.shape) images[i, :,:,:, 0] = np.nan_to_num(img) print('Successfully loaded files') print('Image array size: ', images.shape) # save original images array to disk import h5py hf = h5py.File('orig_images_AIBL_wb_mwp1_CAT12_MNI.hdf5', 'w') hf.create_dataset('images', data=images, compression='gzip') hf.close() # Display a single scan from matplotlib import pyplot as plt %matplotlib inline #import numpy as np test_img = images[0, :,:,:, 0] ma = np.max(test_img) mi = np.min(test_img) test_img = (test_img - mi) / (ma - mi) # normalising to (0-1) range #test_img = (test_img - test_img.mean())/test_img.std() # normalizing by mean and sd print('displaying image ', dataFiles[0]) for i in range(test_img.shape[2]): if (i % 10 == 0): # only display each tenth slice plt.figure() a = test_img[:,:,i] plt.imshow(a, cmap='gray') # Perform regression-based covariates cleaning from sklearn import linear_model from pandas import DataFrame covariates = DataFrame({'Age':age, 'Sex':sex, 'TIV':tiv, 'FieldStrength':field}).iloc[cov_idx, :] print("Covariates data frame size : ", covariates.shape) print(covariates.head()) #print(label.head()) covariates = covariates.to_numpy(dtype=np.float32) # convert dataframe to nparray 
with 32bit types # not run: estimate model using AIBL controls -> instead apply ADNI2 model ##covCN = covariates[labels['Group'] == 0] # only controls as reference group to estimate effect of covariates ##print("Controls covariates data frame size : ", covCN.shape) # load coefficients for linear models from hdf5 hf = h5py.File('linear_models_ADNI2.hdf5', 'r') hf.keys # read keys lmarray = np.array(hf.get('linearmodels'), dtype=np.float32) # stores 4 coefficients + 1 intercept per voxel hf.close() lm = linear_model.LinearRegression() for k in range(images.shape[3]): print('Processing depth slice ', str(k+1), ' of ', str(images.shape[3])) for j in range(images.shape[2]): for i in range(images.shape[1]): if any(images[:, i, j, k, 0] != 0): # not run: estimate model using AIBL controls -> instead apply ADNI2 model ##tmpdat = images[labels['Group'] == 0, i, j, k, 0] ##lm.fit(covCN, tmpdat) # estimate model coefficients (intercept added automatically) # take model coefficients from ADNI2 file lm.coef_ = lmarray[k, j, i, :4] lm.intercept_ = lmarray[k, j, i, 4] pred = lm.predict(covariates) # calculate prediction for all subjects images[:, i, j, k, 0] = images[:, i, j, k, 0] - pred # % subtract effect of covariates from original values (=calculate residuals) # Save residualized data to disk import h5py hf = h5py.File('residuals_AIBL_wb_mwp1_CAT12_MNI.hdf5', 'w') hf.create_dataset('images', data=images, compression='gzip') # don't store Strings or String arrays in HDF5 containers as this is problematic hf.close() # Display a single scan (residuals) from matplotlib import pyplot as plt #import numpy as np test_img = images[0, :,:,:, 0] ma = np.max(test_img) mi = np.min(test_img) test_img = (test_img - mi) / (ma - mi) # normalising to (0-1) range #test_img = (test_img - test_img.mean())/test_img.std() # normalizing by mean and sd print('displaying residual image ', dataFiles[0]) for i in range(test_img.shape[2]): if (i % 10 == 0): # only display each fifth slice plt.figure() 
a = test_img[:,:,i] plt.imshow(a, cmap='gray') ```
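The voxel-wise loop above applies a linear model of the covariates and subtracts its prediction. The same residualization step can be sketched for a single synthetic variable (illustrative data, not the AIBL scans):

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.RandomState(42)
covariates = rng.normal(size=(100, 2))             # e.g. age and TIV (toy values)
signal = 2.0 * covariates[:, 0] + rng.normal(size=100)

lm = LinearRegression().fit(covariates, signal)
residuals = signal - lm.predict(covariates)        # covariate effect removed

# the residuals are (numerically) uncorrelated with the fitted covariates
print(np.corrcoef(residuals, covariates[:, 0])[0, 1])
```

The notebook does exactly this per voxel, except that the model coefficients come from the ADNI2 file instead of being fit on the AIBL data.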
``` import numpy as np import matplotlib.pyplot as plt import pandas as pd %matplotlib inline import sklearn sklearn.set_config(print_changed_only=True) ``` ## Automatic Feature Selection ### Univariate statistics ``` from sklearn.datasets import load_breast_cancer from sklearn.feature_selection import SelectPercentile from sklearn.model_selection import train_test_split cancer = load_breast_cancer() # get deterministic random numbers rng = np.random.RandomState(42) noise = rng.normal(size=(len(cancer.data), 50)) # add noise features to the data # the first 30 features are from the dataset, the next 50 are noise X_w_noise = np.hstack([cancer.data, noise]) X_train, X_test, y_train, y_test = train_test_split( X_w_noise, cancer.target, random_state=0, test_size=.5) # use f_classif (the default) and SelectPercentile to select 10% of features: select = SelectPercentile(percentile=50) select.fit(X_train, y_train) # transform training set: X_train_selected = select.transform(X_train) print(X_train.shape) print(X_train_selected.shape) from sklearn.feature_selection import f_classif, f_regression, chi2 F, p = f_classif(X_train, y_train) plt.figure() plt.semilogy(p, 'o') mask = select.get_support() print(mask) # visualize the mask. 
black is True, white is False plt.matshow(mask.reshape(1, -1), cmap='gray_r') from sklearn.linear_model import LogisticRegression # transform test data: X_test_selected = select.transform(X_test) lr = LogisticRegression() lr.fit(X_train, y_train) print("Score with all features: %f" % lr.score(X_test, y_test)) lr.fit(X_train_selected, y_train) print("Score with only selected features: %f" % lr.score(X_test_selected, y_test)) ``` ### Model-based Feature Selection ``` from sklearn.feature_selection import SelectFromModel from sklearn.ensemble import RandomForestClassifier select = SelectFromModel(RandomForestClassifier(n_estimators=100, random_state=42), threshold="median") select.fit(X_train, y_train) X_train_rf = select.transform(X_train) print(X_train.shape) print(X_train_rf.shape) mask = select.get_support() # visualize the mask. black is True, white is False plt.matshow(mask.reshape(1, -1), cmap='gray_r') X_test_rf = select.transform(X_test) LogisticRegression().fit(X_train_rf, y_train).score(X_test_rf, y_test) ``` ### Recursive Feature Elimination ``` from sklearn.feature_selection import RFE select = RFE(RandomForestClassifier(n_estimators=100, random_state=42), n_features_to_select=40) select.fit(X_train, y_train) # visualize the selected features: mask = select.get_support() plt.matshow(mask.reshape(1, -1), cmap='gray_r') X_train_rfe = select.transform(X_train) X_test_rfe = select.transform(X_test) LogisticRegression().fit(X_train_rfe, y_train).score(X_test_rfe, y_test) select.score(X_test, y_test) ``` ### Sequential Feature Selection ``` from mlxtend.feature_selection import SequentialFeatureSelector sfs = SequentialFeatureSelector(LogisticRegression(), k_features=40, forward=False, scoring='accuracy',cv=5) sfs = sfs.fit(X_train, y_train) mask = np.zeros(80, dtype='bool') mask[np.array(sfs.k_feature_idx_)] = True plt.matshow(mask.reshape(1, -1), cmap='gray_r') LogisticRegression().fit(sfs.transform(X_train), y_train).score( sfs.transform(X_test), y_test) ``` 
# Exercises Choose either the Boston housing dataset or the adult dataset from above. Compare a linear model with interaction features against one without interaction features. Use feature selection to determine which interaction features were most important. ``` # %load solutions/feature_importance.py ```
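A possible starting point for the exercise (a sketch on synthetic regression data via `make_regression` — substitute the Boston housing or adult dataset as the exercise asks; `Ridge` is an arbitrary linear-model choice):

```python
from sklearn.datasets import make_regression
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import PolynomialFeatures

# stand-in data: 8 features, linear target plus noise
X, y = make_regression(n_samples=300, n_features=8, noise=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# baseline: linear model without interaction features
score_plain = Ridge().fit(X_train, y_train).score(X_test, y_test)

# with pairwise interaction features (original 8 + 28 products = 36 columns)
poly = PolynomialFeatures(degree=2, interaction_only=True, include_bias=False)
X_train_int = poly.fit_transform(X_train)
X_test_int = poly.transform(X_test)
score_int = Ridge().fit(X_train_int, y_train).score(X_test_int, y_test)

print(score_plain, score_int)
```

Feature selection (e.g. `SelectFromModel`, as above) can then rank the 36 columns to see which interaction features mattered most.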
### EXP: Pilote2 QC rating

- **Aim:** Test the reliability of quality control (QC) ratings of brain registration between two expert raters (PB: Pierre Bellec, YB: Yassine Benahajali), based on the first draft QC protocol on the Zooniverse platform (ref: https://www.zooniverse.org/projects/simexp/brain-match).
- **Exp:**
    - We chose 50 anatomical brain images (16 OK, 17 Maybe and 17 Fail) preprocessed with NIAK pipelines from the ADHD200 datasets.
    - Each rater (PB and YB) rated them (OK, Maybe or Fail) on the Zooniverse platform interface.

```
import os
import pandas as pd
import numpy as np
import json
import itertools
import seaborn as sns
from sklearn import metrics
from matplotlib import gridspec as gs
import matplotlib.pyplot as plt
from functools import reduce
%matplotlib inline
%load_ext rpy2.ipython
sns.set(style="white")

def CustomParser(data):
    j1 = json.loads(data)
    return j1

# Read raw table
classifications = pd.read_csv('../data/rating/brain-match-classifications-12-10-2018.csv',
                              converters={'metadata':CustomParser, 'annotations':CustomParser, 'subject_data':CustomParser},
                              header=0)

# Filter out only specific workflows
ratings = classifications.loc[classifications['workflow_name'].isin(['anat_internal_rating_pierre', 'anat_internal_rating_yassine'])]
ratings.count()

# extract tagging count
ratings.loc[:,"n_tagging"] = [ len(q[1]['value']) for q in ratings.annotations]

# extract rating count
ratings.loc[:,"rating"] = [ q[0]['value'] for q in ratings.annotations]

# extract subject ids
ratings.loc[:,"ID"] = [ row.subject_data[str(ratings.subject_ids[ind])]['subject_ID'] for ind,row in ratings.iterrows()]

# extract file names
ratings.loc[:,"imgnm"] = [ row.subject_data[str(ratings.subject_ids[ind])]['images'] for ind,row in ratings.iterrows()]

# How many ratings per user
user_count = ratings.user_name.value_counts()
user_count

# drop duplicated ratings
inc = 0
sum_dup = 0
for ind,user in enumerate(ratings.user_name.unique()):
    user_select_df =
ratings[ratings.user_name.isin([user])] mask=~user_select_df.ID.duplicated() dup = len([m for m in mask if m == False]) sum_dup = sum_dup+ dup if dup > 0 : print('{} has {} duplicated ratings'.format(user,dup)) if ind == 0 and inc == 0: classi_unique= user_select_df[mask] inc+=1 else: classi_unique = classi_unique.append(user_select_df[~user_select_df.ID.duplicated()]) inc+=1 print('Total number of duplicated ratings = {}'.format(sum_dup)) # Get the final rating numbers per subject user_count = classi_unique.user_name.value_counts() user_count # Create the users' rating dataframe list_user = user_count.index concat_rating = [classi_unique[classi_unique.user_name == user][['ID','rating']].rename(columns={'rating': user}) for user in list_user] df_ratings = reduce(lambda left,right: pd.merge(left,right,how='outer',on='ID'), concat_rating) df_ratings.rename(columns={'simexp':'PB','Yassinebha':'YB'},inplace=True) df_ratings.head() # Import ratings from Pilot 1 ratings_p1 = pd.read_csv('../data/rating/Pilot_QC_Pierre_Yassine-12-10-2018.csv').rename(index=str, columns={"status_Athena": "PB_Athena", "status_NIAK": "PB_NIAK", "status_Athena.1": "YB_Athena", "status_NIAK.1": "YB_NIAK"}) ratings_p1 = ratings_p1[['id_subject','PB_NIAK','YB_NIAK']].rename(columns={'id_subject':'ID','PB_NIAK':'PB_P1','YB_NIAK':'YB_P1'}) ratings_p1.head() # Merge Pilot 1 and Pilot 2 ratings_p1p2= pd.merge(df_ratings,ratings_p1,how='inner',on='ID').rename(columns={"PB": "PB_P2", "YB": "YB_P2"}).apply(lambda x: x.str.strip() if x.dtype == "object" else x) ratings_p1p2.head() # Save a copy on disk df_ratings.to_csv('../data/rating/Pilot2_internal_rating-PB_YB.csv',index=False) ratings_p1p2.to_csv('../data/rating/Pilot1-2_internal_rating-PB_YB.csv',index=False) ``` ### Kappa for Pilot 2 only ``` # Add a column flagging matches between the raters df_ratings.loc[:,"rating_match"] = df_ratings.loc[:,['PB','YB']].apply(lambda x: len(set(x)) == 1, axis=1) df_ratings.head() # Replace OK with 1, Maybe with 2 and Fail with 3
df_ratings.replace({'OK':1,'Maybe':2, 'Fail':3}, inplace=True) df_ratings.head() # calculate the percentage of agreement between raters agreem_ = (df_ratings.rating_match.sum()/df_ratings.ID.count())*100 print("The percentage of agreement is: {:.2f}".format(agreem_)) %%R suppressPackageStartupMessages(library(dplyr)) #install.packages("irr") library(irr) # Percentage of agreement between raters with the R package irr agree_ = df_ratings[['PB','YB']] %Rpush agree_ agree_n = %R agree(agree_) print(agree_n) # FDR correction from statsmodels.sandbox.stats import multicomp as smi def fdr_transf(mat,log10 = False): '''compute the FDR-corrected version of a given matrix''' row = mat.shape[0] col = mat.shape[1] flatt = mat.flatten() fdr_2d = smi.multipletests(flatt, alpha=0.05, method='fdr_bh')[1] if log10 == True: fdr_2d = [-np.log10(ii) if ii != 0 else 50 for ii in fdr_2d ] fdr_3d = np.reshape(fdr_2d,(row,col)) return fdr_3d # Kappa calculation def kappa_score(k_df,log10 = False): '''compute kappa between different raters organized in a dataframe''' k_store = np.zeros((len(k_df.columns), len(k_df.columns))) p_store = np.zeros((len(k_df.columns), len(k_df.columns))) %Rpush k_df for user1_id, user1 in enumerate(k_df.columns): for user2_id, user2 in enumerate(k_df.columns): weight = np.unique(k_df[[user1,user2]]) # use the function argument k_df, not the global kappa_df %Rpush user1_id user1 user2_id user2 weight kappaR = %R kappa2(k_df[,c(user1,user2)],weight) # store the kappa k_store[user1_id, user2_id] = [kappaR[x][0] for x in range(np.shape(kappaR)[0])][4] p_store[user1_id, user2_id] = [kappaR[x][0] for x in range(np.shape(kappaR)[0])][-1] # FDR correction p_store = fdr_transf(p_store,log10) return k_store, p_store # Get kappa scores for all pairwise combinations of raters kappa_df = df_ratings[['PB','YB']] kappa_store, Pval_store = kappa_score(kappa_df) mean_kap = np.mean(kappa_store[np.triu_indices(len(kappa_store),k=1)]) std_kap = np.std(kappa_store[np.triu_indices(len(kappa_store),k=1)]) print('Mean Kappa : {0:.2f} , std :
{1:.2f}\n'.format(mean_kap, std_kap)) # calculate the overall kappa across all ratings %Rpush kappa_df fleiss_kappa = %R kappam.fleiss(kappa_df,c(0,1,2)) print(fleiss_kappa) # Plot the kappa matrix kappa_out = pd.DataFrame(kappa_store, index=kappa_df.columns.get_values(), columns=kappa_df.columns.get_values()) # Set up the matplotlib figure f, axes = plt.subplots(figsize = (7,5)) f.subplots_adjust(hspace= .8) f.suptitle('Pilot2 QC',x=0.49,y=1.05, fontsize=14, fontweight='bold') # Draw the kappa heat map sns.heatmap(kappa_out,vmin=0,vmax=1,cmap="YlGnBu", square=True, annot=True, linewidths=.5, cbar_kws={"shrink": .9,"label": "Cohen's Kappa"}, ax=axes) axes.set_yticks([x+0.5 for x in range(len(kappa_df.columns))]) axes.set_yticklabels(kappa_df.columns,rotation=0) axes.set_title("Cohen's Kappa matrix for 2 raters and {} images".format(len(df_ratings)),pad=20,fontsize=12) # Caption pval = np.unique(Pval_store)[-1] txt = ''' Fig1: Kappa matrix for 2 raters PB & YB - Substantial agreement between raters.
Kappa's P-value is {:.2g} '''.format(pval) f.text(.1,-0.08,txt,fontsize=12) # Save the figure f.savefig('../reports/figures/pilot2_qc.svg') from IPython.display import Image Image(url= "https://i.stack.imgur.com/kYNd6.png" ,width=600, height=600) ``` ### Kappa between Pilot 1 and Pilot 2 ``` # Add a column flagging matches between the raters ratings_p1p2.loc[:,"rating_match"] = ratings_p1p2.loc[:,['PB_P1','YB_P1','PB_P2','YB_P2']].apply(lambda x: len(set(x)) == 1, axis=1) ratings_p1p2.head() # Replace OK with 1, Maybe with 2 and Fail with 3 ratings_p1p2.replace({'OK':1,'Maybe':2, 'Fail':3}, inplace=True) ratings_p1p2.head() # calculate the percentage of agreement between raters agreem_ = (ratings_p1p2.rating_match.sum()/ratings_p1p2.ID.count())*100 print("The percentage of agreement is: {:.2f}".format(agreem_)) %%R suppressPackageStartupMessages(library(dplyr)) #install.packages("irr") library(irr) # Percentage of agreement between raters with the R package irr agree_ = ratings_p1p2[['PB_P2','YB_P2','PB_P1','YB_P1']] %Rpush agree_ agree_n = %R agree(agree_) print(agree_n) # Get kappa scores for all pairwise combinations of raters kappa_df = ratings_p1p2[['PB_P2','YB_P2','PB_P1','YB_P1']] kappa_store, Pval_store = kappa_score(kappa_df) mean_kap = np.mean(kappa_store[np.triu_indices(len(kappa_store),k=1)]) std_kap = np.std(kappa_store[np.triu_indices(len(kappa_store),k=1)]) print('Mean Kappa : {0:.2f} , std : {1:.2f}\n'.format(mean_kap, std_kap)) # calculate the overall kappa across all ratings %Rpush kappa_df fleiss_kappa = %R kappam.fleiss(kappa_df,c(0,1,2)) print(fleiss_kappa) # Plot the kappa matrix kappa_out = pd.DataFrame(kappa_store, index=kappa_df.columns.get_values(), columns=kappa_df.columns.get_values()) # Set up the matplotlib figure f, axes = plt.subplots(figsize = (7,5)) f.subplots_adjust(hspace= .8) f.suptitle('Pilot1 & Pilot2 QC',x=0.49,y=1.05, fontsize=14, fontweight='bold') # Draw the kappa heat map sns.heatmap(kappa_out,vmin=0,vmax=1,cmap="YlGnBu", square=True,
annot=True, linewidths=.5, cbar_kws={"shrink": .9,"label": "Cohen's Kappa"}, ax=axes) axes.set_yticks([x+0.5 for x in range(len(kappa_df.columns))]) axes.set_yticklabels(kappa_df.columns,rotation=0) axes.set_title("Cohen's Kappa matrix for 2 raters and {} images".format(len(ratings_p1p2)),pad=20,fontsize=12) # Caption pval = np.unique(Pval_store)[-1] txt = ''' Fig1: Kappa matrix for 2 raters PB & YB from two pilot projects - {} images were rated twice, once in Pilot 1 and once in Pilot 2. Substantial agreement between raters and pilots. Kappa's P-values (FDR corrected) range from {:.2g} to {:.2g} '''.format(len(ratings_p1p2),Pval_store.min(), Pval_store.max()) f.text(.1,-0.17,txt,fontsize=12) # Save the figure (distinct file name so the Pilot 2 figure is not overwritten) f.savefig('../reports/figures/pilot1-2_qc.svg') ``` ### Report tagging ``` # output markings from classifications clist=[] for index, c in classi_unique.iterrows(): if c['n_tagging'] > 0: for q in c.annotations[1]['value']: clist.append({'ID':c.ID, 'workflow_name':c.workflow_name,'user_name':c.user_name, 'rating':c.rating,'imgnm':c.imgnm, 'x':q['x'], 'y':np.round(q['y']).astype(int), 'r':'1.5','n_tagging':c.n_tagging ,'frame':q['frame']}) else: clist.append({'ID':c.ID, 'workflow_name':c.workflow_name, 'user_name':c.user_name,'rating':c.rating,'imgnm':c.imgnm, 'x':float('nan'), 'y':float('nan'), 'r':float('nan'),'n_tagging':c.n_tagging ,'frame':'1'}) col_order=['ID','workflow_name','user_name','rating','x','y','r','n_tagging','imgnm','frame'] out_tag = pd.DataFrame(clist)[col_order] out_tag.user_name.replace({'simexp':'PB','Yassinebha':'YB'},inplace=True) out_tag.head() # Extract unique IDs for each image ids_imgnm = np.reshape([out_tag.ID.unique(),out_tag.imgnm.unique()],(2,np.shape(out_tag.ID.unique())[0])) df_ids_imgnm = pd.DataFrame(np.sort(ids_imgnm.T, axis=0),columns=['ID', 'imgnm']) df_ids_imgnm.head() # Create a custom color map from matplotlib.colors import LinearSegmentedColormap , ListedColormap from PIL import Image def _cmap_from_image_path(img_path): img =
Image.open(img_path) img = img.resize((256, img.height)) colours = (img.getpixel((x, 0)) for x in range(256)) colours = [(r/255, g/255, b/255, a/255) for (r, g, b, a) in colours] return colours,LinearSegmentedColormap.from_list('from_image', colours) coll,a=_cmap_from_image_path('../data/Misc/custom_ColBar.png') #invert color map coll_r = ListedColormap(coll[::-1]) # set color different for each rater list_tagger = out_tag.user_name.unique() colors_tagger = sns.color_palette("Set2", len(list_tagger)) colors_tagger ``` ### Plot tagging per rater ``` from matplotlib.collections import PatchCollection from matplotlib.patches import Circle, Arrow #Set Template image as background fig = plt.figure(figsize=(10,14)) ax = fig.add_subplot(111) im = plt.imread('../data/Misc/template_stereotaxic_v1.png') ax.set_title('All taggings') ax.imshow(im) fig.suptitle('Pilot2 QC',x=0.51,y=.87, fontsize=14, fontweight='bold') # Plot tags for ind, row in df_ids_imgnm.iterrows(): out_tmp = out_tag[out_tag['ID'] == row.ID] patches = [] labels = [] for ind,row in out_tmp.iterrows(): for idx,tagger in enumerate(list_tagger): out_tagger = out_tmp[out_tmp['user_name'] == tagger] c = [Circle((rowtag.x,rowtag.y), 7) for itag,rowtag in out_tagger.iterrows()] p = PatchCollection(c,facecolor='none', edgecolor=colors_tagger[idx], alpha=0.4, linewidth=2, linestyle='dashed') ax.add_collection(p) #Set figure Tags labels tag_ = np.zeros((len(df_ids_imgnm),len(list_tagger))) l = list() labels = list() for ind, row in df_ids_imgnm.iterrows(): out_tmp = out_tag[out_tag['ID'] == row.ID] patches = [] labels = [] tag_[ind,:]= [sum(out_tmp[out_tmp['user_name'] == rater].n_tagging.unique()) for rater in list_tagger] for rater_id, rater in enumerate(list_tagger): l.append(Circle((None,None), facecolor=colors_tagger[rater_id], alpha=0.7)) labels.append('{} : {:g} Tags'.format(rater,tag_.sum(axis=0)[rater_id])) ax.legend(handles=l,labels=labels, bbox_to_anchor=(0., 1.02, 1., .2), mode='expand', ncol=1, loc="lower 
right") ax.set_xticklabels([]) ax.set_yticklabels([]) fig.savefig('../reports/figures/pilot2_qc_tags.svg') ``` ### Plot heat map for all tagging ``` from heatmappy import Heatmapper from PIL import Image patches=list() for ind, row in df_ids_imgnm.iterrows(): out_tmp = out_tag[out_tag['ID'] == row.ID] patches.append([(row.x,row.y) for ind,row in out_tmp.iterrows()]) patches = [x for x in sum(patches,[]) if str(x[0]) != 'nan'] # plot heat map on the template f, axes = plt.subplots(1, 1,figsize = (10,14)) f.subplots_adjust(hspace= .8) f.suptitle('Pilot2 QC',x=0.49,y=.83, fontsize=14, fontweight='bold') img = Image.open('../data/Misc/template_stereotaxic_v1.png') axes.set_title('Tagging from PB & YB raters') heatmapper = Heatmapper(opacity=0.5, point_diameter=20, point_strength = 0.5, colours=a) heatmap= heatmapper.heatmap_on_img(patches, img) im = axes.imshow(heatmap,cmap=coll_r) axes.set_yticklabels([]) axes.set_xticklabels([]) cbar = plt.colorbar(im, orientation='vertical', ticks=[0, 125, 255],fraction=0.046, pad=0.04,ax=axes) cbar.ax.set_yticklabels(['0', '1', '> 2']) img.close() heatmap.close() f.savefig('../reports/figures/pilot2_qc_heatmap_tags.svg') ```
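The pairwise Cohen's kappas above are computed through R's `irr` package via `rpy2`; as a quick cross-check, the same two-rater statistic is also available in Python through `sklearn.metrics.cohen_kappa_score`. A minimal sketch on toy ratings (not the actual pilot data):

```python
from sklearn.metrics import cohen_kappa_score

# Toy ratings for two raters -- 1 = OK, 2 = Maybe, 3 = Fail (illustration only)
pb = [1, 1, 2, 3, 3, 1, 2, 2, 3, 1]
yb = [1, 2, 2, 3, 3, 1, 2, 1, 3, 1]

# Unweighted kappa; for ordinal categories like OK/Maybe/Fail,
# weights='linear' (or 'quadratic') may be more appropriate.
kappa = cohen_kappa_score(pb, yb)
print('Cohen kappa: {:.3f}'.format(kappa))  # 8/10 observed agreement, 0.34 expected -> ~0.697
```

This gives a pure-Python sanity check on the `kappa2` values; it does not replace the FDR-corrected P-values, which here still come from the R side.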
[![img/pythonista.png](img/pythonista.png)](https://www.pythonista.io)

# Areas in *D3.js*.

## Initializing *D3.js* in the *notebook*.

The following cell enables *D3.js* inside this *notebook* and must always be run before any other cell.

**Warning:** If *D3.js* is not initialized as the very first action, the code in the following cells will probably not work even if the initialization is done later. In that case, clear all cell outputs, save, and reload the *notebook*.

```
%%javascript
require.config({
    paths: {
        "d3": "https://d3js.org/d3.v7"
    }
});
```

## The ```d3.area()``` function.

The ```d3.area()``` function draws areas from a series of points.

```
<name> = d3.area()
```

Where:

* ```<name>``` is the name of the function that will be created to process the data.

```
d3.select().
    append("path").
    attr("d", <name>(<data>))
```

* The data must consist of arrays of at least 2 elements, corresponding to the *x* and *y1* coordinates of the area.
* A third element may be defined, corresponding to *y0*, i.e. the lower bound of the area.

### Properties of an area.

The functions created by ```d3.area()``` accept 3 main accessors:

* ```x```
* ```y1```
* ```y0```

https://github.com/d3/d3-shape#areas

```
%%svg
<svg width="500" height="300" id="svg-1">
</svg>

%%javascript
require(["d3"], function(d3){
    d3.json("data/poblacion.json").then(function(datos){
        /* A new data structure is needed: the object is converted
           into an array containing, besides the year and the
           population, the position on y. */
        let data = [];
        for (let dato in datos){
            let lista = [dato - 1900, datos[dato] / 1000000];
            data.push(lista);
        }
        /* The area is drawn once, after the data array is complete. */
        let area = d3.area();
        d3.select("#svg-1").
            append("g").
            append("path").
            attr("d", area(data)).
            attr("fill", "lavender").
            attr("stroke", "black");
    })
})

%%svg
<svg width="500" height="300" id="svg-2">
</svg>

%%javascript
require(["d3"], function(d3){
    /* The linear scale for the x axis is created. */
    let escalaX = d3.scaleLinear().
        domain([1900, 2020]).
        range([0, 450]);
    /* The axis function is defined from d3.axisBottom. */
    let ejeX = d3.axisBottom(escalaX);
    /* The axis is built by inserting a <g> element. */
    d3.select("#svg-2").
        append("g").
        attr("transform", "translate(30, 255)").
        call(ejeX);
    /* The linear scale for the y axis is created. */
    let escalaY = d3.scaleLinear().
        domain([10, 120]).
        range([250, 0]);
    /* The axis function is defined from d3.axisLeft. */
    let ejeY = d3.axisLeft(escalaY);
    /* The axis is built by inserting a <g> element. */
    d3.select("#svg-2").
        append("g").
        attr("transform", "translate(30, 5)").
        call(ejeY);
    d3.json("data/poblacion.json").then(function(datos){
        /* A new data structure is needed: the object is converted
           into an array containing, besides the year and the
           population, the position on y. */
        let data = [];
        for (let dato in datos){
            let lista = [escalaX(dato), escalaY(datos[dato] / 1000000)];
            data.push(lista);
        }
        /* The area is drawn once; y0 fixes the lower bound. */
        let area = d3.area().
            y0(255);
        d3.select("#svg-2").
            append("g").
            append("path").
            attr("d", area(data)).
            attr("fill", "lavender").
            attr("stroke", "black");
    })
})

%%svg
<svg width="500" height="300" id="svg-3">
</svg>

%%javascript
require(["d3"], function(d3){
    /* The linear scale for the x axis is created. */
    let escalaX = d3.scaleLinear().
        domain([1900, 2020]).
        range([0, 450]);
    /* The axis function is defined from d3.axisBottom. */
    let ejeX = d3.axisBottom(escalaX);
    /* The axis is built by inserting a <g> element. */
    d3.select("#svg-3").
        append("g").
        attr("transform", "translate(30, 255)").
        call(ejeX);
    /* The linear scale for the y axis is created. */
    let escalaY = d3.scaleLinear().
        domain([10, 120]).
        range([250, 0]);
    /* The axis function is defined from d3.axisLeft. */
    let ejeY = d3.axisLeft(escalaY);
    /* The axis is built by inserting a <g> element. */
    d3.select("#svg-3").
        append("g").
        attr("transform", "translate(30, 5)").
        call(ejeY);
    d3.json("data/poblacion.json").then(function(datos){
        /* A new data structure is needed: each element carries the
           year, the population, and the previous y position, so the
           area spans from the previous point to the current one. */
        let data = [];
        let y_previo = 240;
        for (let dato in datos){
            let lista = [escalaX(dato), escalaY(datos[dato]/1000000), y_previo];
            y_previo = escalaY(datos[dato]/1000000);
            data.push(lista);
        }
        /* Custom accessors: x from d[0], y0 from d[1], y1 from d[2]. */
        let area = d3.area().
            x(d => d[0]).
            y0(d => d[1]).
            y1(d => d[2]);
        d3.select("#svg-3").
            append("g").
            append("path").
            attr("d", area(data)).
            attr("fill", "purple").
            attr("stroke", "black");
    })
})
```

<p style="text-align: center"><a rel="license" href="http://creativecommons.org/licenses/by/4.0/"><img alt="Licencia Creative Commons" style="border-width:0" src="https://i.creativecommons.org/l/by/4.0/80x15.png" /></a><br />This work is licensed under a <a rel="license" href="http://creativecommons.org/licenses/by/4.0/">Creative Commons Attribution 4.0 International License</a>.</p> <p style="text-align: center">&copy; José Luis Chiquete Valdivieso. 2022.</p>
# Lekcja 5-6: Listy ## Spis treści <a href="#1">1. Co to jest lista?</a> - <a href="#1.1">1.1. Rzut oka z góry - kolekcje w Pythonie</a> - <a href="#1.1_b1">Kolekcje</a> - <a href="#1.1_b2">Typy</a> - <a href="#1.1_b3">Cechy charakterystyczne kolekcji</a> - <a href="#1.2">1.2. Tworzenie list</a> - <a href="#1.3">1.3. Iterowanie pętlą `for` przez listę - wprowadzenie</a> - <a href="#1.4">1.4. Operator `in`</a> <a href="#2">2. Listy są uporządkowane - indeks</a> - <a href="#2.1">2.1. Dostęp do pojedynczych elementów listy</a> - <a href="#2.2">2.2. Segmenty indeksów</a> - <a href="#2.2_b1">Od (...) do (...)</a> - <a href="#2.2_b2">Od początku do (...); od (...) do końca</a> - <a href="#2.2_b3">Ujemne indeksy</a> - <a href="#2.2_b4">Krok</a> - <a href="#2.2_b5">Ujemny krok</a> - <a href="#2.2_b6">Elegancki przykład: odwracanie listy</a> - <a href="#2.2_b7">(*) Obiekt `slice`</a> - <a href="#2.3">2.3. Metody list `index` i `count`</a> <a href="#3">3. Listy można modyfikować</a> - <a href="#3.1">3.1. Zmiany wartości elementów listy</a> - <a href="#3.2">3.2. (*) Mutowalność, aliasy, kopie</a> - <a href="#3.3">3.3. Dodawanie i usuwanie elementów z listy - metody list</a> - <a href="#3.3_b1">Dodawanie elementów do listy: `append`, `extend`, `insert`</a> - <a href="#3.3_b2">Usuwanie elementów z listy: `remove`, `pop`, `del`, `clear`</a> - <a href="#3.4">3.4. Rozszerzanie list - operatory `+`, `*`</a> - <a href="#3.4_b1">Operator dodawania `+`</a> - <a href="#3.4_b2">Operator mnożenia `*`</a> <a href="#4">4. Zadania domowe do Lekcji 5</a> <a href="#5">5. Iterowanie przez listę</a> - <a href="#5.1">5.1. Pętla `for`</a> - <a href="#5.2">5.2. Pusta lista (lub pusty string) jako "puste pudełko" w pętli `for`</a> - <a href="#5.3">5.3. (*) Stos i kolejka</a> - <a href="#5.4">5.4. Pętla `for` po indeksach</a> - <a href="#5.4_b1">Funkcja `range`</a> - <a href="#5.4_b2">Iterowanie po indeksach listy</a> - <a href="#5.5">5.5. 
Pętla `for` zarówno po indeksach, jak i elementach listy - funkcja `enumerate`</a> - <a href="#5.6">5.6. Pętla `for` po kilku listach jednocześnie złączonych "na suwak" - funkcja `zip`</a> - <a href="#5.7">5.7. List comprehension</a> - <a href="#5.8">5.8. List comprehension - `if` - `else`</a> - <a href="#5.9">5.9. List comprehension - zagnieżdżone pętle</a> <a href="#6">6. (*) Transformacje list - programowanie funkcyjne</a> - <a href="#6.1">6.1. Wstęp: funkcje lambda</a> - <a href="#6.2">6.2. Redukowanie list</a> - <a href="#6.2_b1">Redukowanie poprzez użycie pętli</a> - <a href="#6.2_b2">Wbudowane funkcje działające na listach</a> - <a href="#6.2_b3">`reduce`</a> - <a href="#6.3">6.3. Mapowanie list</a> - <a href="#6.3_b1">Mapowanie poprzez użycie pętli</a> - <a href="#6.3_b2">`map`</a> - <a href="#6.3_b3">Mapowanie poprzez "list comprehension"</a> - <a href="#6.4">6.4. Filtrowanie list</a> - <a href="#6.4_b1">Filtrowanie poprzez użycie pętli</a> - <a href="#6.4_b2">`filter`</a> - <a href="#6.4_b3">Filtrowanie poprzez "list comprehension"</a> <a href="#7">7. Zadania domowe do Lekcji 6</a> ## <a id="1"></a>1. Co to jest lista? ### <a id="1.1"></a>1.1. Rzut oka z góry - kolekcje w Pythonie #### <a id="1.1_b1"></a>Kolekcje Od tej lekcji zaczniemy zajmować się nowymi typami danych - takimi, które opisują _kolekcje_ elementów. Kolekcje, jak nazwa wskazuje, ułatwiają pracę z większą ilością danych - pozwalają _zebrać wiele różnych wartości w jeden obiekt_. Wyobraźmy sobie np. listę imion uczniów w klasie; czy też wszystkie przedmioty w koszyku zakupów online; lub spis dziennych warunków pogodowych w jakimś miejscu... Przechowywanie wielu elementów w jednym "miejscu" to bardzo powszechna sytuacja - co więcej, chcielibyśmy takie kolekcje móc przeglądać, przeszukiwać, modyfikować. O tym wszystkim nauczymy się podczas Lekcji 5-8. 
W Pythonie mamy następujące podstawowe typy danych **kolekcji** ("collections"): - **listy** ("list", typ danych `list`), - **tuple**, lub bardziej po polsku **krotki** ("tuple", typ danych `tuple`), - **zbiory** ("set", typ danych `set`), - **słowniki** ("dictionary", typ danych `dict`), a także: - poznane już **stringi** ("string", typ danych `str`) jako kolekcje pojedynczych znaków. Jedynie gwoli zapoznania się z podstawową składnią pierwszych czterech typów - **elementy** ("elements" lub "items") zawsze rozdzielamy przecinkami, a kolekcję tworzy się przez wzięcie elementów w nawiasy - oto kilka przykładów: ``` # lista - nawiasy kwadratowe lst = [ 10 , 20 , 30 , 40 ] # unikajmy nazywania zmiennych pojedynczą literą "l" - ciężko ją odróżnić od "1" # tupla (krotka) - nawiasy okrągłe t = ( 'cat' , 'dog' , 'mouse' ) # zbiór - nawiasy klamrowe s = { 'apple' , 'banana' , 'plum' , 'orange' } # słownik - również nawiasy klamrowe (zwróć uwagę, że każdy element składa się z dwóch części rozdzielonych dwukropkiem) d = { 'John' : 26 , 'Mary' : 18 , 'Theresa' : 31 } ``` Podkreślmy jeszcze raz, iż stringi - mimo odrębnej składni (cudzysłowy zamiast nawiasów; brak przecinków rozdzielających) - są także kolekcjami i wiele z popularnych operacji na kolekcjach można w ten sam sposób wykonywać na stringach. #### <a id="1.1_b2"></a>Typy Robiąc jeszcze jeden krok wstecz, przypomnijmy, że w Pythonie mamy najróżniejsze obiekty ("objects"), które możemy nazwać też **typami** ("types"). Mamy więc np. typ liczb całkowitych ("integer") `int`, typ liczb tzw. zmiennoprzecinkowych ("floating-point") `float`, typ "tekstowy", a więc tzw. stringi ("string") `str` itd. (Co więcej, każdy użytkownik może tworzyć swoje własne typy obiektów, np. obiekt "samochód" - który moglibyśmy nazwać `car` - ale to wybieganie za daleko w przyszłość!) Jak pamiętamy już z Lekcji 2, typ każdego obiektu w Pythonie możemy sprawdzić za pomocą wbudowanej funkcji `type`. 
``` type( 5577 ) # typ liczby całkowitej type( 3.1415926 ) # typ liczby zmiennoprzecinkowej type( 'I am hungry!' ) # typ stringu ``` Przekonajmy się zatem sami, jak nazywają się typy zdefiniowanych właśnie obiektów czterech podstawowych kolekcji w Pythonie: ``` type( lst ) # lista type( t ) # tupla type( s ) # zbiór type( d ) # słownik ``` Przypomnijmy też, że czasem można dokonać **konwersji typu** ("type conversion"). Wydaje się bowiem bardzo naturalne, że np. mając liczbę powinniśmy móc zmienić ją na string będący po prostu tekstowym zapisem tej liczby - i rzeczywiście, dokonujemy tego funkcją `str` (a więc funkcją o tej samej nazwie, co typ, _na który_ konwertujemy): ``` str( 5577 ) str( 3.1415926 ) ``` Konwersja odwrotna dokonana jest funkcją `int` albo `float`, czyli znów, funkcją o nazwie takiej, co typ, na który konwertujemy: ``` int( '5577' ) float( '3.1415926' ) ``` Zamiast dokładnie wypisywać przypadki, kiedy można takiej konwersji dokonać, powiedzmy po prostu tyle - "kiedy ma to sens". Jest z góry jasne, że np. konwersja tekstu `'I am hungry!'` na `int`-a nie zadziała: ``` int( 'I am hungry!' ) ``` Wracając do typów kolekcji - tutaj też nierzadko można dokonać konwersji typu, "jeśli ma to sens". Typowym przykładem dla list jest konwersja stringu na listę - string to kolekcja pojedynczych znaków, a po konwersji będziemy mieć listę z tymi kolejnymi znakami. Konwersji tej dokonujemy oczywiście funkcją o nazwie `list` - a więc typu, na który konwertujemy: ``` list( 'I am hungry!' ) ``` O konwersjach powiązanych z typami `tuple`, `set`, `dict` powiemy w Lekcji 7-8. #### <a id="1.3_b1"></a>Cechy charakterystyczne kolekcji <img style = 'float: right; margin-left: 10px; margin-bottom: 10px' src = 'Images/lotto.png' width = '600px'> Podstawowe różnice między nimi biorą się z następujących własności: 1. Czy kolekcja elementów jest **uporządkowana** ("ordered"), czy nie. 
Innymi słowy, czy kolejność elementów ma znaczenie, czy nie; jeszcze inaczej, czy można powiedzieć, który element jest pierwszy, drugi, ..., ostatni. - Listy i tuple (a także stringi) są uporządkowane. - Zbiory i słowniki zaś nie są uporządkowane (nie ma pojęcia "to jest pierwszy element" itd., są jakby "workiem" na elementy). 2. Czy po utworzeniu kolekcję można modyfikować - czy jest **mutowalna** ("mutable"), czy też nie ("immutable"). - Listy i słowniki można do woli modyfikować, zarówno skład ich elementów (np. dodawanie, usuwanie itd.), jak i wartości poszczególnych elementów. - Tupli (i stringów) nie można modyfikować w żaden sposób. - Zbiory są pomiędzy: można dodawać/usuwać elementy, ale nie można zmieniać wartości już istniejących elementów. | Typ | Nazwa | Uporządkowany? | Mutowalny? | | --- | --- | --- | --- | | <center>`str`</center> | <center>string</center> | <center>✅</center> | <center>❌</center> | | <center>`list`</center> | <center>lista</center> | <center>✅</center> | <center>✅</center> | | <center>`tuple`</center> | <center>tupla</center> | <center>✅</center> | <center>❌</center> | <center>`set`</center> | <center>zbiór</center> | <center>❌</center> | <center>✅</center> | | <center>`dict`</center> | <center>słownik</center> | <center>❌</center> | <center>✅</center> | Z pojęciem uporządkowania wiąże się pojęcie **indeksu** ("index"). - W kolekcjach uporządkowanych - jak już wiemy, są to listy, tuple, stringi - kolejność elementów ma znaczenie, np. w liście `[10, 20, 30, 40]`, `10` jest elementem pierwszym, `20` drugim itp. Ta kolejność to właśnie indeks, z tym że w Pythonie **indeksujemy zawsze od zera, nie od jedynki!**, czyli `10` ma indeks 0, `20` ma indeks 1 itd. Podobnie w stringu `'Kraków'`, symbol `'K'` ma indeks 0, symbol `'r'` indeks 1 itd. - Zbiory są nieuporządkowane, więc nie ma pojęcia indeksu. 
- Słowniki także są nieuporządkowane, natomiast słownik _sam definiuje_ swój indeks, nazywany **kluczem** ("key"), który może być dowolnym obiektem, niekoniecznie ciągiem 0, 1, 2, ... W powyższym przykładzie słownika `{'John': 26, 'Mary': 18, 'Theresa': 31}`, elementami są liczby `26`, `18`, `31`, a są one indeksowane stringami: `'John'`, `'Mary'`, `'Theresa'`. Nie możemy więc powiedzieć, że element `26` jest "pierwszy", nie możemy "dostać się do niego" poprzez indeks 0 - możemy natomiast "dostać się do niego" poprzez klucz `'John'`. Z pojęciem uporządkowania wiąże się też kwestia **duplikatów** ("duplicates"). - W kolekcjach uporządkowanych (listy, tuple, stringi) możemy mieć kilka identycznych elementów. Mamy przecież indeks, który nam precyzyjnie je rozróżnia; np. w liście `['cat', 'cat', 'dog', 'cat']`, mamy pierwszego (indeks 0), drugiego (indeks 1) i czwartego (indeks 3) kota. - W kolekcjach nieuporządkowanych (słowniki i zbiory) nie może być duplikatów. ### <a id="1.2"></a>1.2. Tworzenie list W tej lekcji zajmiemy się wyłącznie listami. Już wiemy, że lista to kolekcja elementów, która: - jest uporządkowana - a więc można stwierdzić "ten element jest pierwszy" (tj. ma indeks 0), "ten jest drugi" (tj. ma indeks 1) itd.; - jest mutowalna (modyfikowalna) - można do woli zmieniać jej kompozycję (np. dodawać/usuwać elementy), czy też modyfikować istniejące elementy. Widzieliśmy też, że listę definiuje się poprzez umieszczenie kolekcji elementów między nawiasami kwadratowymi. Do listy możemy "wrzucić" zupełnie dowolny element, co więcej, poszczególne elementy nie muszą być tego samego typu. ``` lst_1 = [ 18 , 5 , 26 , -11 , 0 , -4 ] # lista liczb całkowitych lst_2 = [ 'USD' , 'EUR' , 'PLN' , 'MYR' ] # lista stringów lst_3 = [ 3.67 , 'coffee' , [ 8 , 5 , -1 ] ] # lista mieszana: elementy mają różne typy ``` W ostatnim przykładzie widzimy, że elementami listy mogą także być inne listy - oczywiście, skoro dowolny obiekt może być elementem listy. 
Taką listę nazywamy **zagnieżdżoną** ("nested") wewnątrz innej listy. W szczególności, tu dygresja, możemy modelować w ten sposób matematyczny obiekt **macierzy** ("matrix"). ``` my_matrix = [ [ 1 , 2 , 3 ] , [ 4 , 5 , 6 ] , [ 7 , 8 , 9 ] , [ 10 , 11 , 12 ] ] # macierz 4 x 3 ``` Możemy utworzyć też **pustą listę** ("empty list"). Wyobrazić możemy sobie to jako "pusty pojemnik", do którego będziemy "wrzucać" kolejne elementy (w kolejności!). ``` lst_4 = [] ``` <img style = 'float: left; margin-right: 10px; margin-bottom: 10px' src = 'Images/question.png'> Szybkie ćwiczenie 1: Utwórz i przypisz do zmiennej listę złożoną z powyżej zdefiniowanych list `lst_1`, `lst_2`, `lst_3`, `lst_4`, a następnie wydrukuj jej zawartość. ``` # szybkie ćwiczenie 1 - rozwiązanie ``` ### <a id="1.3"></a>1.3. Iterowanie pętlą `for` przez listę - wprowadzenie Lekcję 4 poświęciliśmy pętlom `for` i `while`. W szczególności, pętla `for` opisuje iterację określoną, tj. przejście krok po kroku, element po elemencie, przez zadaną kolekcję. Teraz przypomnijmy tylko krótko, jak to robimy - zostawiając różne rozszerzenia do sekcji <a href="#5">Iterowanie przez listę</a> poniżej. Np. ta pętla `for` iteruje się przez listę `lst_2` i drukuje każdy kolejny element; iterator nazwaliśmy tu `currency`: ``` lst_2 for currency in lst_2: # zmienna currency przyjmuje wartości kolejnych elementów listy lst_2, od początku do końca print( currency ) # drukuj wartość zmiennej currency ``` <img style = 'float: left; margin-right: 10px; margin-bottom: 10px' src = 'Images/question.png'> Szybkie ćwiczenie 2: (a) Napisz pętlę `for`, którą przeiterujesz się przez listę `lst_1`. W każdym kroku dodaj instrukcję warunkową sprawdzającą, czy aktualny element jest większy od zera; jeśli tak, wydrukuj go. (b) Napisz pętlę `for`, którą przeiterujesz się przez listę `lst_3`. W każdym kroku dodaj instrukcję warunkową sprawdzającą, czy aktualny element jest typu string; jeśli tak, wydrukuj go. 
``` # szybkie ćwiczenie 2a - rozwiązanie # szybkie ćwiczenie 2b - rozwiązanie ``` ### <a id="1.4"></a>1.4. Operator `in` Przypomnijmy sobie parę list, które zdefiniowaliśmy powyżej: ``` lst_2 lst_3 ``` Aby łatwo otrzymać odpowiedź na pytanie, czy dany obiekt jest elementem naszej listy, używamy operatora `in`: ``` 'CAD' in lst_2 3.67 in lst_3 ``` Ale zwróć uwagę, że: ``` 8 in lst_3 ``` ... gdyż 8 nie jest elementem listy `lst_3`; 8 jest natomiast elementem listy, która sama jest elementem listy `lst_3`. Jest to kolejny przykład konstrukcji zdania logicznego, z którym się spotykamy (przypomnijmy sobie wyrażenia typu `2 < 5` z poprzednich lekcji). Całe takie wyrażenie logiczne jest odrębnym "bytem" - obiektem typu Boolean, `bool`. Możemy to rzecz jasna sprawdzić funkcją `type`: ``` type( 2 < 5 ) type( 8 in lst_3 ) ``` Możemy te obiekty oczywiście dowolnie przypisywać do zmiennych, np.: ``` cad_check = 'CAD' in lst_2 # lub też bez nawiasów! cad_check ``` Mamy też do dyspozycji operatory logiczne, jak poznane przez nas `and` ("i"), `or` ("lub"), `not` (zaprzeczenie), za pomocą których możemy tworzyć złożone warunki logiczne, np.: ``` ( 'USD' in lst_2 ) and ( 'CAD' in lst_2 ) # pierwszy nawias ma wartość True, a drugi False ( 'USD' in lst_2 ) or ( 'CAD' in lst_2 ) not ( 'CAD' in lst_2 ) ``` ... gdzie tę ostatnią operację możemy zapisać także krócej - przy użyciu nowego operatora `not in` - jako: ``` 'CAD' not in lst_2 ``` (Jak zwykle, wiele z tych nawiasów można pominąć...) <img style = 'float: left; margin-right: 10px; margin-bottom: 10px' src = 'Images/question.png'> Szybkie ćwiczenie 3: (a) Sprawdź, czy pusta lista jest elementem powyżej utworzonej listy złożonej z list `lst_1`, `lst_2`, `lst_3`, `lst_4`. (b) Sprawdź, czy dwu-elementowa lista złożona ze stringów `'USD'` i `'EUR'` jest elementem listy `lst_2`. (c) Jak wspominaliśmy, stringi można traktować także jako pewną kolekcję, tj. kolekcję pojedynczych znaków. 
Write an expression that checks whether the string `'ham'` is an element of the string `'Chatham'`. ``` # quick exercise 3a - solution # quick exercise 3b - solution # quick exercise 3c - solution ``` <img style = 'float: left; margin-right: 10px; margin-bottom: 10px' src = 'Images/question.png'> Quick exercise 4: Above we defined the list of lists `my_matrix`. Write a `for` loop iterating over this list so as to print only those of its elements (i.e. only those of its sub-lists) that contain the number 4 or 8. ``` # szybkie ćwiczenie 4 - rozwiązanie ``` ## <a id="2"></a>2. Lists are ordered - the index From the beginning we have mentioned the two defining properties of lists, namely that: - lists are ordered - there is an element no. 0, an element no. 1, and so on; - lists are mutable - they can be modified at will. In this chapter we take up the first of these properties, and in particular the question of how to use the **index** to access the elements of a list. In other words, we will be asking questions such as "what is the fifth element of our list?", or "what are the elements between the seventh and the tenth?" We will work on the following example list: ``` nato_code = [ 'Alfa' , 'Bravo' , 'Charlie' , 'Delta' , 'Echo' , 'Foxtrot' , 'Golf' , 'Hotel' , 'India' , 'Juliett' , 'Kilo' , 'Lima' , 'Mike' , 'November' , 'Oscar' , 'Papa' , 'Quebec' , 'Romeo' , 'Sierra' , 'Tango' , 'Uniform' , 'Victor' , 'Whiskey' , 'X-ray' , 'Yankee' , 'Zulu' ] ``` ### <a id="2.1"></a>2.1. Accessing individual list elements To access the elements of a list through their index, we use the **bracket operator**, inside which we place the index we are interested in. Note: Let us not confuse these square brackets, which hold an index, with the brackets inside which we place elements to create a list - those are two entirely different notions.
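To make the distinction above concrete, here is a tiny sketch showing both roles of square brackets side by side (the names `letters` and `first` are just for this illustration):

```python
letters = [ 'a' , 'b' , 'c' ]   # square brackets creating a list (a list literal)
first = letters[ 0 ]            # square brackets as the bracket operator (indexing)
print( first )                  # prints: a
```

The context tells Python which meaning applies: brackets after an expression index into it, brackets on their own build a new list.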
It is worth emphasizing once more: **in Python the index always starts at zero, not at one!** We thus have the first element: ``` nato_code[ 0 ] # the first element has index 0 ``` ![](Images/Segments/segment_1.png) ... the second element: ``` nato_code[ 1 ] # the second element has index 1 ``` ![](Images/Segments/segment_2.png) ... and so on. If we tried to access an element at a non-existent index, we would instead get an error, an `IndexError`. ``` nato_code[ 26 ] # this list has only 26 elements, so its last index is 25 (we index from 0!) ``` Another example: We already know that list elements can be completely arbitrary objects, e.g. other lists, strings, and so on. An example of such a list above was `lst_3`: ``` lst_3 ``` ... and we have, for instance: ``` lst_3[ 2 ] # the third element of the list lst_3 is the list [8, 5, -1] ``` <img style = 'float: left; margin-right: 10px; margin-bottom: 10px' src = 'Images/question.png'> Quick exercise 5: Let us return once more to the list of lists `my_matrix`. (a) How do we reach the value 8 through indexing? (b) Write a `for` loop iterating over the list `my_matrix` and in each step printing the first element of the list that is the current element of `my_matrix`. ``` my_matrix # quick exercise 5a - solution # quick exercise 5b - solution ``` <img style = 'float: left; margin-right: 10px; margin-bottom: 10px' src = 'Images/question.png'> Quick exercise 6: In Lesson 4 we practised the `for` loop on, among others, the list of strings `purse = ['keys', 'lipstick', 'sandwich', 'smartphone']`. We printed, for example, only the elements starting with the letter `'s'`, for which we used the built-in string method `startswith`. How can we do the same using indexing?
``` # quick exercise 6 - solution ``` <img style = 'float: left; margin-right: 10px; margin-bottom: 10px' src = 'Images/question.png'> Quick exercise 7: The list `nato_code` above has strings as its elements, and we know well by now that strings are themselves collections - indexing applies to them just as well. Write a `for` loop iterating over the list `nato_code` and printing only the first letter of each string. ``` # quick exercise 7 - solution ``` ### <a id="2.2"></a>2.2. Index segments Inside the bracket operator we can place not just a single index - we can describe a whole _range_ of indexes of interest, using the syntax of so-called **slices**. #### <a id="2.2_b1"></a>From (...) to (...) The basic syntax answers the request "take the indexes from... to..." and looks like this: `start_index : end_index` which means that we take all the indexes starting from `start_index` and ending at `end_index` - with one important convention to remember: we stop at the index _preceding_ `end_index`, so `start_index` is indeed included among the selected indexes, but `end_index` is not! For example, `3:8` selects indexes 3, 4, 5, 6, 7 (but not 8!). ``` nato_code[ 3:8 ] ``` ![](Images/Segments/segment_3.png) #### <a id="2.2_b2"></a>From the beginning to (...); from (...) to the end If one of the boundaries of our slice is either the beginning of the list (i.e. index 0) or the end of the list (i.e. the last index), we may simply _omit_ it in this syntax. ``` nato_code[ :8 ] # slice from the beginning of the list up to index 7, equivalent to nato_code[ 0:8 ] ``` ![](Images/Segments/segment_4.png) ``` nato_code[ 15: ] # slice from index 15 to the end of the list (we need not know how many elements the list has!) ``` ![](Images/Segments/segment_5.png) We may also want to omit _both_ boundaries! Then we take the slice from the beginning of the list to its end... in other words, the whole list!
``` nato_code[ : ] nato_code[ : ] == nato_code ``` This operation can be used to create a copy of a list, about which we will say more in the section <a href="#3.2">(*) Mutability, aliases, copies</a>. <img style = 'float: left; margin-right: 10px; margin-bottom: 10px' src = 'Images/question.png'> Quick exercise 8: Iterate over the eleventh through thirteenth elements of the list `nato_code`, printing its strings from the second letter to the end. ``` # quick exercise 8 - solution ``` #### <a id="2.2_b3"></a>Negative indexes We already know that indexes in Python are counted 0, 1, 2, ... A very convenient piece of syntax, however, is **negative indexes**, which simply mean that we count _backwards from the end of the list_. In this convention the last element of the list has index -1, the second-to-last -2, and so on. A single element is obtained as always: ``` nato_code[ -1 ] # the last element ``` ![](Images/Segments/segment_6.png) Slices work identically - where we must keep remembering that `start_index` is included in the slice while `end_index` is not: ``` nato_code[ -4:-1 ] # elements from the fourth-from-last to the second-to-last; the last one (index -1) is not included! ``` ![](Images/Segments/segment_7.png) ``` nato_code[ :-10 ] # elements from the beginning of the list up to and including the eleventh from the end ``` ![](Images/Segments/segment_8.png) ``` nato_code[ -10: ] # the last ten elements of the list ``` ![](Images/Segments/segment_9.png) <img style = 'float: left; margin-right: 10px; margin-bottom: 10px' src = 'Images/question.png'> Quick exercise 9: Iterate over the last five elements of the list `nato_code`, printing its strings from the second letter up to and including the second-to-last letter. ``` # quick exercise 9 - solution ``` #### <a id="2.2_b4"></a>Step The slice syntax can be extended further with a **step**: `start_index : end_index : step` So we can, for example,
select the elements from the third to the seventeenth, taking every third element: ``` nato_code[ 2:17:3 ] # slice from index 2 up to and including index 16 (but not 17!), in steps of 3 ``` ![](Images/Segments/segment_10.png) Of course we can combine this with the whole palette of behaviours described above (omitting an entry, a negative step)! <img style = 'float: left; margin-right: 10px; margin-bottom: 10px' src = 'Images/question.png'> Quick exercise 10: From the list `nato_code` above select: (a) the slice from the beginning of the list up to and including the tenth element, in steps of 3; (b) the slice from the third element to the end of the list, in steps of 3; (c) the whole list in steps of 3; (d) the slice from the third element up to and including the third element from the end, in steps of 3. ``` # quick exercise 10a - solution # quick exercise 10b - solution # quick exercise 10c - solution # quick exercise 10d - solution ``` It certainly takes some practice to write the slice syntax so that you get exactly the indexes you want! #### <a id="2.2_b5"></a>Negative step The last modification of the slice syntax is allowing a **negative step**. You can surely guess what this means - a step backwards through the list! The only thing to remember here is the accompanying change in the overall syntax - now the start and the end of the slice are written in the reverse order! `end_index : start_index : (-step)` where we must also remember which boundary lies within the slice and which does not: this time `end_index` is included in the slice (because that is the index we start from), and `start_index` is not.
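A quick sketch of this reversed convention on a small throwaway list (the list `digits` is invented just for this illustration):

```python
digits = [ 0 , 1 , 2 , 3 , 4 , 5 ]

# with step -1 we start at index 4 (included) and stop before index 1
print( digits[ 4:1:-1 ] )   # [4, 3, 2]

# the boundary we start from is always included, the one we stop at is not
print( digits[ 5:0:-2 ] )   # [5, 3, 1]
```

Here each element happens to equal its own index, which makes it easy to see exactly which indexes the slice selected.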
``` nato_code[ 10:1:-3 ] # slice from index 10 inclusive down to index 2 (not 1!), in steps of 3 backwards ``` ![](Images/Segments/segment_11.png) Going backwards, it is often more convenient to also use negative indexes (though of course this is not required) - when moving from the back it is simply easier to number the elements from the back as well: ``` nato_code[ -2:-7:-2 ] # slice from the second-to-last element inclusive down to the sixth from the end inclusive, in steps of 2 backwards ``` ![](Images/Segments/segment_12.png) ``` nato_code[ -2::-4 ] # slice from the second-to-last element inclusive down to the beginning of the list, in steps of 4 backwards ``` ![](Images/Segments/segment_13.png) ``` nato_code[ :-10:-4 ] # slice from the end of the list down to the ninth element from the end, in steps of 4 backwards ``` ![](Images/Segments/segment_14.png) #### <a id="2.2_b6"></a>An elegant example: reversing a list <img style = 'float: left; margin-right: 10px; margin-bottom: 10px' src = 'Images/question.png'> Quick exercise 11: Do you know how to use this syntax to _reverse_ the order of the elements in a list? I.e. turn e.g. `[1, 2, 3]` into `[3, 2, 1]` and so on. The solution is below, but try to think it through on your own first! ``` # quick exercise 11 - solution ``` The solution is as follows: What does it mean to reverse a list? It means going through the whole of it, but from the end to the beginning (i.e. backwards, in steps of -1). Since we traverse the whole list, we may omit `start_index` and `end_index`; for the step we must moreover choose -1. ``` nato_code[ ::-1 ] # reversing the list! ``` This is a remarkably elegant solution! It is a good moment for the following digression: the whole of Python is a language that not only allows but practically begs you to write elegantly and concisely - such elegant solutions are considered to be in keeping with the "spirit of Python" (**"Pythonic"**). ``` import this ``` <img style = 'float: left; margin-right: 10px; margin-bottom: 10px' src = 'Images/question.png'> Quick exercise 12: Reversing lists is a useful skill!
Verify, for example, that the sentence attributed to Napoleon, `'Able was I ere I saw Elba'` (ignoring the case of the letters - how do we code that?), is a palindrome, i.e. read backwards it is identical to itself. ``` # quick exercise 12 - solution ``` #### <a id="2.2_b7"></a>(*) The `slice` object One final remark: If we would like to use a given kind of slice repeatedly, we can define it with the `slice` function, which returns an object of a new type, also called `slice`. ``` # let us define a few lists some_primes = [ 11 , 13 , 17 , 19 , 23 , 29 , 31 , 37 , 41 , 43 , 47 , 53 , 59 ] more_primes = [ 61 , 67 , 71 , 73 , 79 , 83 , 89 , 97 , 101 , 103 , 107 , 109 , 113 , 127 ] even_more_primes = [ 131 , 137 , 139 , 149 , 151 , 157 , 163 , 167 , 173 , 179 , 181 , 191 , 193 ] # a slice object: from index 2 up to and including index 9, in steps of 3 seg = slice( 2 , 10 , 3 ) # we can use it repeatedly print( some_primes[ seg ] ) print( more_primes[ seg ] ) print( even_more_primes[ seg ] ) ``` Let us check the type: ``` type( seg ) ``` If we want to omit one of the arguments, we use the keyword `None`: ``` reverse = slice( None , None , -1 ) # implementing our list-reversal solution via ::-1 print( some_primes[ reverse ] ) print( more_primes[ reverse ] ) print( even_more_primes[ reverse ] ) ``` ### <a id="2.3"></a>2.3. The list methods `index` and `count` While we are at it, let us mention two rather useful methods of the `list` type. A technical note: We have used so-called "methods" more than once already, but without defining them precisely. Generally speaking, a method is an operation `method` performed on some object `object` - through the syntax `object.method(arguments)`; here `arguments` are the so-called arguments, i.e. the extra information we must pass to the method so that it knows how to act.
Imagine we have an object called `car` and a method called `paint`; we could presumably write `car.paint('blue')`, where the string `'blue'` is the method's argument - it will paint our car blue! Let us stress - a method is defined for a specific data type. E.g. the methods below are "understood" by the `list` type. It turns out, though, that the `str` type also "understands" them - and they act on strings entirely analogously to lists. (From Lesson 2 we remember a whole series of other methods of the `str` data type.) | Method | Action | | --- | --- | | <center>`index`</center> | <center>find the index of the first occurrence of an element</center> | | <center>`count`</center> | <center>count how many times a given element occurs</center> | We thus have, for example: ``` animals = [ 'cat' , 'cat' , 'dog' , 'cat' ] ``` ... and the index of the first occurrence of the element `'cat'` is 0: ``` animals.index( 'cat' ) ``` ... while it occurs three times: ``` animals.count( 'cat' ) ``` One could ask the following interesting question: How do we generalize the `index` method so that it returns _all_ the indexes at which a given element occurs in the list (a list - remember - may contain duplicates)? In the example above that would be `[0, 1, 3]`. We will solve this problem below with the so-called "list comprehension" syntax. ## <a id="3"></a>3. Lists can be modified The two fundamental properties of lists are, as you surely remember: - lists are ordered; - lists are mutable. In the previous section we talked about indexes, i.e. the integers that let us exploit the ordering of a list to select elements from it. In this section we take up the second of the defining properties, namely modifications of a list, and in particular both: - changing the values of individual elements of the list, - and changing the list itself by adding/removing elements. ### <a id="3.1"></a>3.1.
Changing the values of list elements If we want to change the value of some element of a list, we simply select it using its index (as we saw in the previous chapter) and assign (`=`) a new value to it. ``` letters = [ 'a' , 'b' , 'c' , 'd' , 'e' , 'f' ] letters[ 3 ] = 'x' # we change the value of the element at index 3, i.e. 'd', to 'x' letters ``` We can of course select a whole slice of the list to modify it in one go. ``` letters[ -2: ] = [ 'y' , 'z' ] # we change the last two elements to two different ones letters ``` We can expand or shrink the list this way, because the new number of elements may differ from the length of the slice! ``` letters[ :3 ] = [ 's' , 't' , 'u' , 'w' , 'v' ] # we change the first three elements to five different ones letters ``` Naturally, the whole combination of slice syntax is at our disposal. If we use a step other than 1, however, we must match the length of the new list exactly! ``` letters[ ::2 ] = [ 'S' , 'U' , 'V' , 'Y' ] # we change every second element letters ``` <img style = 'float: left; margin-right: 10px; margin-bottom: 10px' src = 'Images/question.png'> Quick exercise 13: Define a list of the integers from 1 to 100 and replace its "middle", i.e. the elements without the first and the last, with that same "middle" in reversed order. Hint: To define the original list use the `range` function, but convert its result to a list with the `list` function. ``` # quick exercise 13 - solution ``` ### <a id="3.2"></a>3.2. (*) Mutability, aliases, copies Mutability may seem a natural property - why shouldn't we be able to change an object!? - yet it is quite exceptional among the types of objects we have met so far! Most of them are immutable. | Type | Name | Mutable?
| | --- | --- | --- | | <center>`int`</center> | <center>integer</center> | <center>❌</center> | | <center>`float`</center> | <center>floating-point number</center> | <center>❌</center> | | <center>`bool`</center> | <center>logical expression</center> | <center>❌</center> | | <center>`str`</center> | <center>string</center> | <center>❌</center> | | <center>`list`</center> | <center>list</center> | <center>✅</center> | | <center>`tuple`</center> | <center>tuple</center> | <center>❌</center> | | <center>`set`</center> | <center>set</center> | <center>✅</center> | | <center>`dict`</center> | <center>dictionary</center> | <center>✅</center> | In this section we focus for a moment on this more technical matter of mutability, in its relation to the operation of assigning an object to a variable. Consider the following series of assignments: ``` a = 'Cheshire Cat' b = a print( a ) print( b ) ``` We have created an object of type `str` with the value `'Cheshire Cat'`, and we have attached to it two "labels", i.e. variables, `a` and `b`. We can verify that `a` and `b` point to the same object in two ways: either with the built-in operator `is`: ``` b is a ``` ... or with the built-in function `id`, which shows the "identifier" of the place in memory that the object occupies: ``` print( id( a ) ) print( id( b ) ) print( id( a ) == id( b ) ) ``` We might try to modify the object `'Cheshire Cat'`, e.g. using the `+` operator: ``` a += ' disappearing' print( a ) print( b ) ``` ... and now: ``` b is a print( id( a ) ) print( id( b ) ) print( id( a ) == id( b ) ) ``` What happened here? The object `'Cheshire Cat'` did not change - it is immutable. Only the binding of the variable `a` changed - it now points to a different object, `'Cheshire Cat disappearing'`. The variable `b` still points to the object `'Cheshire Cat'`. What, on the other hand, would happen if we did a similar thing with lists?
``` a = [ 1 , 2 , 3 ] b = a print( a ) print( b ) b is a print( id( a ) ) print( id( b ) ) print( id( a ) == id( b ) ) ``` The variables `a` and `b` again point to the same object - this time a list. This list, however, we can change - it is mutable - and then the value seen through both the variable `a` and the variable `b` changes alike! Why? We are not changing the bindings of the variables here, because _the object is the same_ (check its `id` - it is unchanged!); both variables `a` and `b` still point to it. But _its value_ has changed (because it is mutable), and therefore both `a` and `b` see it. ``` a[ 0 ] = 10 print( a ) print( b ) print( id( a ) ) print( id( b ) ) print( id( a ) == id( b ) ) ``` This agrees with the diagram: ![](Images/list_assignment_2.png) ... the consequence of which is that if we change something in the list, both the variable `a` and the variable `b` will see it. Here `a` and `b` are called **aliases**. And, as is easy to predict, this behaviour is error-prone when we change something through one alias and forget about the other! To avoid this, we could perform the assignment above as follows: ``` a = [ 1 , 2 , 3 ] b = [ 1 , 2 , 3 ] print( a ) print( b ) ``` ... and now these lists are _different_ objects, though they have the same value: ``` b is a print( id( a ) ) print( id( b ) ) print( id( a ) == id( b ) ) ``` ... so `a` and `b` point to two different lists, in accordance with the following diagram: ![](Images/list_assignment_1.png) Changing something in the first list, we will this time not touch the second: ``` a[ 0 ] = 10 print( a ) print( b ) ``` In this way we have created two separate **copies** of the list.
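While we are on the subject of copies, one caveat worth knowing, in a sketch that goes slightly beyond this lesson: copying a list that itself contains lists copies only the outer list, while the inner lists remain shared aliases. The standard library module `copy` provides `deepcopy` for a fully independent copy.

```python
import copy

a = [ [ 1 , 2 ] , [ 3 , 4 ] ]
b = a[ : ]                 # shallow copy: the outer list is new...
b[ 0 ][ 0 ] = 99           # ...but the inner lists are shared, so a sees this change
print( a )                 # [[99, 2], [3, 4]]

c = copy.deepcopy( a )     # deep copy: the inner lists are copied too
c[ 0 ][ 0 ] = 1            # this change is invisible to a
print( a )                 # still [[99, 2], [3, 4]]
```

For flat lists of numbers or strings, as in the examples above, the distinction does not matter - it shows up only when the list's elements are themselves mutable.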
Instead of retyping the list as above, however, we can do it succinctly with the list method `copy`: ``` a = [ 1 , 2 , 3 ] b = a.copy() print( a ) print( b ) b is a print( id( a ) ) print( id( b ) ) print( id( a ) == id( b ) ) ``` Moreover, instead of the `copy` method we can use the property of slice syntax observed earlier - `[:]` creates a copy of the list: ``` a = [ 1 , 2 , 3 ] b = a[ : ] print( a ) print( b ) b is a print( id( a ) ) print( id( b ) ) print( id( a ) == id( b ) ) ``` One has to be careful about these matters when assigning mutable objects to variables. ### <a id="3.3"></a>3.3. Adding and removing list elements - list methods A list can be modified not only by changing the values of its elements, but also by adding/removing elements. We have already managed to do this - indirectly - by assigning a new list to a selected slice of the old one. Here we present a few standard methods for these operations. | Method | Action | | --- | --- | | <center>`append`</center> | <center>add an element at the end of the list</center> | | <center>`extend`</center> | <center>add a list at the end of the list</center> | | <center>`insert`</center> | <center>add an element at any position in the list</center> | | <center>`remove`</center> | <center>remove the first occurrence of an element</center> | | <center>`pop`</center> | <center>remove the element at a given index</center> | | <center>`del` (operator)</center> | <center>remove the element at a given index</center> | | <center>`clear`</center> | <center>empty the whole list</center> | #### <a id="3.3_b1"></a>Adding elements to a list: `append`, `extend`, `insert` The `append` method serves to add a single new element at the end of the list (the element is given as the argument of the `append` method).
``` cars = [ 'audi' , 'vw' , 'mercedes' ] cars.append( 'porsche' ) # the append method with the argument 'porsche' (a string) acting on the list cars cars ``` If we want to add several new elements, i.e. a whole list of them, we use the `extend` method; it also takes one argument - this argument, however, must be a list! ``` newest_cars = [ 'bugatti' , 'maserati' , 'lamborghini' ] cars.extend( newest_cars ) # the extend method with the argument newest_cars (a list of strings) acting on the list cars cars ``` As a digression, do you know what would happen if instead of `extend` (whose argument is a list) we used `append` here (whose argument is a single element)? Not what we expect: ``` cars_again = [ 'audi' , 'vw' , 'mercedes' , 'porsche' ] cars_again.append( newest_cars ) cars_again ``` Of course: with `append` we added a _single_ element, and that element is the _list_ `newest_cars`. The `insert` method also inserts one new element, like `append`, but not at the end - at any position, which we specify by its index; this method therefore takes _two_ arguments. ``` cars.insert( 2 , 'mazda' ) # the new element - the string 'mazda' - will have index 2 after insertion cars ``` A more advanced remark: Note that these three methods `append`, `extend`, `insert` modify the list but return nothing, so the syntax is e.g. `cars.extend(newest_cars)`, not `cars = cars.extend(newest_cars)`. They operate **in place**. #### <a id="3.3_b2"></a>Removing elements from a list: `remove`, `pop`, `del`, `clear` If we know which element we want to remove, we use the `remove` method, which however removes only the first occurrence of that element (remember that a list may contain duplicates). ``` cars.remove( 'maserati' ) cars ``` If we had duplicates, only the first of them is removed: ``` fruit = [ 'apple' , 'banana' , 'orange' , 'apple' , 'apple' , 'pear' ] fruit.remove( 'apple' ) fruit ``` How do we - elegantly! - remove all occurrences of a given element?
We will talk about that later, when we get to filters and the so-called "list comprehension". Here it is worth noting that all these element-removal methods are not very universal - and it is precisely such more advanced, elegant techniques that are used more often in practice. For completeness, though, let us list what the remaining expressions do. If we know the index of the element to remove, we use `pop` with the index as its argument. This method is somewhat different from the others, because it not only removes the element at the given index from the list, but also _returns_ the value of that element. ``` little_too_expensive = cars.pop( 5 ) # pop not only removes the element at the given index, but returns its value cars little_too_expensive ``` The index in the `pop` method is an optional argument - omitting it means we want to remove the last element of the list. If we know the index of the element to remove, as with `pop`, but are not interested in the value of the removed element, we can use the `del` operator, whose syntax looks as follows: ``` del cars[ 4 ] cars ``` ... and which we can also use to remove a larger number of elements using slice syntax: ``` del cars[ -2: ] cars ``` (`del` is a so-called operator, not a method - we can see that the syntax is different.) We also have the `clear` method, which - true to its name - empties the whole list. (Note that it has no arguments. Of course - we don't have to "tell" this method anything, it simply clears everything. The parentheses must still be there, though - just empty inside.) ``` cars.clear() cars ``` ### <a id="3.4"></a>3.4. Extending lists - the `+`, `*` operators Good news - we can do entirely without the methods above! They are not easy to remember, and each has its own quirks and limitations. Fortunately, more general solutions exist.
One such elegant solution is a change in the behaviour of the basic arithmetic operators - addition `+` and multiplication `*` - when used in the context of lists, analogous to their behaviour on strings (so this is yet another example of polymorphism). Another solution is the very useful "list comprehension" syntax, discussed in detail in the section <a href="#5.7">List comprehension</a> and onwards. #### <a id="3.4_b1"></a>The addition operator `+` The `+` operator joins ("concatenation") two lists: ``` home = [ 'kitchen' , 'bedroom' , 'bathroom' ] garden = [ 'lawn' , 'tree' , 'path' ] home + garden ``` ... very much like joining strings: ``` 'Jan' + ' ' + 'Kowalski' # we join three strings into one ``` Note that we could achieve a similar effect with the `extend` method, which however would modify one of the lists involved, e.g. `home.extend(garden)` modifies the list `home`. Using `+`, both `home` and `garden` remain untouched. In any case, instead of the `append` and `extend` methods we can more succinctly (and elegantly!) use `+`: ``` home += [ 'terrace' ] # equivalent to home.append( 'terrace' ) home # if we did not want to modify the list home, but store the result in another list: # home_new = home + [ 'terrace' ] home += [ 'fireplace' , 'sauna' ] # equivalent to home.extend( [ 'fireplace' , 'sauna' ] ) home ``` So you need not remember what exactly `append` and `extend` do - it suffices to use the addition operator `+`, which is more concise and elegant. #### <a id="3.4_b2"></a>The multiplication operator `*` To finish this section, let us mention the `*` operator acting on a list and an integer. If this number is positive, $n > 0$, the result is a list consisting of $n$ repetitions of the original list. (If $n \leq 0$, the result is an empty list.) This is thus behaviour entirely analogous to that on strings.
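A quick check of the $n \leq 0$ case mentioned above (the list `pattern` is invented just for this illustration):

```python
pattern = [ 1 , 2 ]

print( 3 * pattern )    # [1, 2, 1, 2, 1, 2]
print( 0 * pattern )    # n = 0 gives an empty list: []
print( -5 * pattern )   # any n <= 0 gives an empty list as well: []
```

Note that, just like `+`, the `*` operator builds a new list and leaves `pattern` itself untouched.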
``` 5 * [ 'I' , 'love' , 'Python' ] 7 * [ 1 , 2 , 3 ] ``` For strings it was similar: ``` 10 * 'abc' ``` <img style = 'float: left; margin-right: 10px; margin-bottom: 10px' src = 'Images/question.png'> Quick exercise 14: Define a variable `n_dashes`, which will be an integer, e.g. 10. Write a string that looks as follows: first the character `-` repeated `n_dashes` times, then a space, the string `'The Title'`, a space, and again `n_dashes` `-` characters. ``` # quick exercise 14 - solution ``` ## <a id="4"></a>4. Homework for Lesson 5 <img style = 'float: left; margin-right: 10px; margin-bottom: 10px' src = 'Images/question.png'> Longer exercise 15: Sublists. You are given a list `lst`, say `lst = [7, 3, 2, 7, 7, 11, 4, 2]`, and a list `sublist_candidate`, say `sublist_candidate = [3, 2, 7]`. Write a program that checks whether `sublist_candidate` is a "sublist" of `lst`, i.e. some consecutive elements of the list `lst`. Here `[3, 2, 7]` is a sublist, but e.g. `[7, 7, 4, 2]` is not, because it skips the element `11`. Hint: Iterate over the list `lst` in such a way that in each step you cut out of it a slice of length equal to the length of `sublist_candidate`. More precisely, it is the _indexes_ we are interested in, so instead of iterating over the list `lst`, iterate over the appropriate `range` defined by its length. In each step of the iteration, check whether the cut-out slice is equal to the list `sublist_candidate`. If this holds in any iteration, we have answered the question affirmatively and can break the loop (with the `break` statement) and print an appropriate message. You can also define at the start a Boolean variable `is_sublist = False`. Then, if the condition above holds, instead of printing a message, change its value to `True`. ``` # longer exercise 15 - solution ``` <img style = 'float: left; margin-right: 10px; margin-bottom: 10px' src = 'Images/question.png'> Longer exercise 16 (*): Pig Latin.
[Pig Latin](https://en.wikipedia.org/wiki/Pig_Latin) is a "secret language"/"word game", with a history reaching back at least to the 19th century, in which English words are transformed according to the following rules: Let us first define the set of vowels as "a", "e", "i", "o", "u". Now: - If the first letter of the word is a consonant, do the following: take all the consonants appearing at the beginning of the word and move them to the end of the word, then at the very end of the word add "ay". E.g. for the word "think" (it begins with a consonant), we take the initial consonant cluster, i.e. "th", move it to the end getting "inkth", and finally add "ay", obtaining "inkthay". That is "think" in Pig Latin. Another example: "string" → "ingstray". (For words made up [only of consonants](https://www.wordgamehelper.com/consonant-words), e.g. "rhythms" - remember that we treat "y" as a consonant - the effect is thus only adding "ay": "rhythms" → "rhythmsay".) - If the first letter of the word is a vowel, simply add "way" at the end. E.g. "eat" → "eatway". Write a program in which you define a variable `word`, e.g. `word = 'think'`, and then translate it into Pig Latin (say, into the variable `pig_latin_word`). Hint: The harder branch of the condition is of course the first one, i.e. a consonant start. You can use a `while` loop to walk from the beginning of the word until the first vowel is encountered - and "cut out" this part from the original word using slice syntax and add it at the end with the `+` operator. As for checking whether a given letter is a vowel or not, recall how the `in` operator works on strings and use the helper string `'aeiou'`. ``` # longer exercise 16 - solution ``` <img style = 'float: left; margin-right: 10px; margin-bottom: 10px' src = 'Images/question.png'> Longer exercise 17 (*): Reversing a string in alternation. You are given a string `text` and a natural number `n` in the range from 2 to the length of `text`.
Create the variable `text_new` as follows: write the first `n` letters in reversed order; leave the second `n` letters as they are; the third `n` letters again in reversed order; leave the fourth `n` as they are; and so on. (The last segment may have fewer than `n` letters.) E.g. for `text = 'abcdefghijklmn'` and `n = 3` the answer is `'cbadefihgjklnm'`. Hint: Let `text_new = ''` initially be an empty string, an accumulator variable. Iterate over the natural numbers, jumping by `2 * n` and "dropping" (`+=`) the appropriate slices of `text` into the "box". You can also solve this exercise with a `while` loop - try it! It is similar to the `for` loop above, but in each step we must also modify the variable `text` by removing its first `2 * n` characters, until it is empty. ``` # longer exercise 17 - solution ``` <img style = 'float: left; margin-right: 10px; margin-bottom: 10px' src = 'Images/question.png'> Longer exercise 18 (**): Kangaroo words. A word is called a "kangaroo word" if it contains other words within it - where by "contains" we mean that all the letters of the smaller word appear in the larger one, in the right order, though not necessarily adjacent to one another. A good example of an English kangaroo word is "encourage", which contains within it e.g. the words "courage", "urge", "cure" and quite a few others. Let us therefore define `kangaroo_word = 'encourage'` and the following list of words: `kangaroo_word_list = ['courage', 'cog', 'cur', 'urge', 'core', 'cure', 'nag', 'rag', 'age', 'nor', 'rage', 'enrage', 'cage']`. Your task is to iterate over this list, in each step printing the result of checking that the given sub-word is indeed "contained" in `kangaroo_word`. Hint: The "meat" of the exercise is of course the algorithm checking whether a given word `sub_word`, e.g. `'cure'`, is "contained" in the word `kangaroo_word`. How to approach it? One idea is as follows: Define a temporary variable `kangaroo_word_current`, initially equal to `kangaroo_word`.
Iterate over the letters of the word `sub_word`. If a given letter is not present in `kangaroo_word_current`, you already know the answer is negative. If the letter is there, then let `kangaroo_word_current` be trimmed: from the letter following the one found, to the end. Here, e.g., in the first iteration we have the letter `'c'`, which is present in `'encourage'`; so we cut from the letter after that `'c'`, i.e. we are left with `'ourage'` as the new value of `kangaroo_word_current`. Next we have the letter `'u'`, which is present here, and so the new value of `kangaroo_word_current` is `'rage'`. And so on. For this "trimming", use slice syntax and the `index` method.

```
# longer exercise 18 - solution
```

<img style = 'float: left; margin-right: 10px; margin-bottom: 10px' src = 'Images/question.png'><img style = 'float: right; margin-left: 10px; margin-bottom: 10px' src = 'Images/josephus.jpg' width = '400px'> Longer exercise 19 (**): The Josephus problem. [The Josephus problem](https://en.wikipedia.org/wiki/Josephus_problem) is inspired by a passage from "The Jewish War", where the author describes how, during the siege of the town of Yodfat in 67 CE, he was trapped in a cave together with his 40 soldiers. Not wanting to surrender to the Romans, they decided that suicide would be the better choice. Josephus suggested a systematic way of carrying out this task: they were to stand in a circle and count off every second person, who would immediately be executed. So we have 41 people standing in a circle, numbered from 1 to 41. First person no. 2 is eliminated, then no. 4, and so on. Once person no. 40 has been eliminated, the next in line is no. 1. At this point remember that persons no. 2 and 4 are already gone, so the next in line after that is no. 5. This elimination is carried out until only one person remains alive. It is said that Josephus computed this "lucky" position and stood in the circle as person no. 19.
(In the text the author says that he survived together with one companion and that together they surrendered to the Romans. His companion supposedly stood at position no. 35.) (In the original formulation, every _third_ person is executed, first no. 3, then no. 6, and so on. In that case Josephus should stand at position no. 31, and his companion at no. 16.) Determine at which position Josephus should stand if there are `n` people and the executions proceed in steps of `k` people. Hint: Consider the list of numbers from 1 to `n`. You start by executing the person at index `i = k - 1`. Write a `while` loop in which at each step you remove the element at index `i` (use the `pop` method). The most important part of the program is writing down how to pass from the just-eliminated index `i` to the next person. One might think we should add `k` to `i`, since we move in steps of `k` people - but there are two problems with that:

- First, by removing the element at index `i`, the index of the next person has _decreased_ by 1 (the list has shrunk!). So we must move in steps of `(k - 1)`.
- Second, we must express the cyclic nature of this passage. E.g. consider the situation where we reach the end of the first circle, eliminating person no. 40. This is the 20th person to be eliminated. Before we eliminate them, our list thus has 41 - 19 = 22 elements, and person no. 40 has index 20 on the current list - they are second to last. After their elimination the list has 21 elements. Now from index 20 we want to jump to index 0 (skipping person no. 41). How to do this arithmetically? Recall the remainder operator `%`.

The `while` loop ends when the list contains only one element (or two, if we are also interested in the position of Josephus' companion).

```
# longer exercise 19 - solution
```

<img style = 'float: left; margin-right: 10px; margin-bottom: 10px' src = 'Images/question.png'> Longer exercise 20 (**): Lychrel numbers. Consider positive integers `n` with the following property: To `n` we add its mirror image (i.e. `n` written backwards).
Sometimes this sum will be a "palindromic number" - i.e. read backwards it is identical to itself. E.g. for `n = 47` we have 47 + 74 = 121, and 121 is a palindromic number. Sometimes it will not be palindromic right away, but only after several iterations of this procedure. E.g. for `n = 349` we need three iterations to reach a palindromic number:

```
349 + 943 = 1292
1292 + 2921 = 4213
4213 + 3124 = 7337
```

For many starting numbers `n` the number of iterations needed to reach a palindrome is relatively small. Among small `n`, exceptional in this regard is `n = 89`, which needs as many as 24 iterations to reach the 13-digit palindromic number 8813200023188. The current (as of April 26, 2019) world record for the largest number of iterations belongs to `n = 12000700000025339936491`, which needs 288 iterations to reach a 142-digit palindromic number. One may ask whether there exist starting numbers `n` that _never_ lead to a palindrome. It is precisely such numbers that are called [Lychrel numbers](https://en.wikipedia.org/wiki/Lychrel_number) (the name was coined by Wade Van Landingham and is a rough anagram of his girlfriend's name, Cheryl). There is no mathematical proof of the existence of any Lychrel number, i.e. we cannot say with full certainty that some `n` will never form a palindrome. There are, however, "candidates" for which long numerical computations have found no palindrome. The smallest "Lychrel candidate" is `n = 196`. The first long iteration starting from 196 was begun in 1987 and, after about three years and about 2.5 million iterations, reached a number with a million digits without finding a palindrome. The most recent result comes from 2015, when the iteration reached a number with a _billion_ digits - without finding a palindrome. In this exercise let us arbitrarily agree that a starting number `n` is "Lychrel" if it does not reach a palindrome within 300 iterations (recall that the current record iteration length is 288).
How many "Lychrel" numbers are there below 10,000? And how many below 100,000? Check your answer: There are 246 "Lychrel" numbers below 10,000, and 6,020 below 100,000. Hint: Write a `while` loop. Count the iterations and break the loop upon reaching 300 (our arbitrary limit). At each step, change `n` into the sum of `n` and its mirror image - to do this, use conversion to a string and reversing a collection with slice syntax. Write a condition checking whether the current `n` is a palindrome - likewise via strings and the reversing slice syntax. If it is, the number is not Lychrel and you can break the loop. If you reach the end of the loop without finding a palindrome, record the starting value of `n` in a previously created empty list.

```
# longer exercise 20 - solution
```

## <a id="5"></a>5. Iterating over a list

In Lesson 4 we spent quite some time discussing the `for` and `while` loops, of which especially the former serves to "walk step by step" (i.e. to **iterate**) through some collection, e.g. a list or a string. In this Lesson, too, we have used the `for` loop more than once, see <a href="#1.3">Iterating over a list with a `for` loop - an introduction</a> and onwards. Let us now gather all the interesting facts on the topic: the `for` loop and lists. This is an extremely important topic - these constructs are used all the time! It is therefore essential to practice it well, hence the many (not easy!) homework exercises. In particular, we will also talk about so-called "list comprehension" - a very general, effective and concise method of constructing new lists from existing ones. The goal of this section and of the homework is your mastery of this technique.

### <a id="5.1"></a>5.1. The `for` loop

The "workhorse" of iterating over a list is the `for` loop we already know very well:

```
words = [ 'desk' , 'chair' , 'lamp' , 'pen' , 'computer' ]

for word in words:
    print( word )
```

Notice how elegant this syntax is! We have a variable `word` (the so-called iterator), which step by step takes on the values of all the elements in the list. In each iteration we thus have an assignment, `word = 'desk'`, then `word = 'chair'`, and so on. Then we execute the block of code; here it is a simple `print`, but it can of course be arbitrarily complicated code.

### <a id="5.2"></a>5.2. An empty list (or an empty string) as an "empty box" in a `for` loop

A very common use of the `for` loop is the accumulation of values. In Lesson 4 we began discussing this topic - creating an "empty box" into which we then "toss" things via a `for` loop. Recall, e.g., how we wrote code summing the numbers in a given list - we created an "empty box" `total_sum = 0`, and then walked through the list with a `for` loop, adding its successive elements (with the `+=` operator) to the variable `total_sum`. Analogously, we could take an empty list `[]` as the "empty box" and "toss" into it (again with the `+=` operator - after all, it works on lists too!) the results of successive iterations of the `for` loop. At the end we would then obtain a list containing the elements of the given initial list, transformed in some way. As an example, let us create the list of squares of the integers from 0 to 10:

```
squares = []   # "empty box"

for number in range( 11 ):
    squares += [ number ** 2 ]   # equivalent: squares = squares + [ number ** 2 ]

squares
```

So we have an "empty box" `squares`, into which we "toss" the successive squares of the numbers. The "tossing" happens by means of the `+=` operator, where let us note that in order to add a single element to the list `squares`, we must in fact add a one-element list containing that element; the `+` operator works between lists!

<img style = 'float: left; margin-right: 10px; margin-bottom: 10px' src = 'Images/question.png'> Quick exercise 21: Define an empty list named `rooms`. This will be an "empty container" into which we will toss certain elements using the `+` operator.
Iterate with a `for` loop through the list `home = ['kitchen', 'bedroom', 'bathroom', 'terrace', 'fireplace', 'sauna']`; at each step write a conditional statement checking whether the string `'room'` is part of the current element of the list `home` - if so, add it to the list `rooms` in the manner described above. Hint: Recall the `in` operator.

```
# quick exercise 21 - solution
```

<img style = 'float: left; margin-right: 10px; margin-bottom: 10px' src = 'Images/question.png'> Quick exercise 22: Create an empty list named `my_home`. Iterate through the list of strings `home`, adding the string `'My '` to the beginning of each element, and saving the word thus changed in the list `my_home`. In other words, at the end our list should be `my_home = ['My kitchen', 'My bedroom', ...]`.

```
# quick exercise 22 - solution
```

Let us emphasize once more that in a completely analogous fashion we can create an "empty box" as an empty string `''` and "toss" into it (again via `+=`) the strings produced in the successive steps of some iteration.

### <a id="5.3"></a>5.3. (*) Stack and queue

<img style = 'float: left; margin-right: 10px; margin-bottom: 10px' src = 'Images/stack.png' width = '400px'> <img style = 'float: right; margin-left: 10px; margin-bottom: 10px' src = 'Images/queue.png' width = '500px'>

In this optional section let us speak briefly about two algorithmic constructs that can be implemented using `for` loops over lists and a "box", also using "tossing into the box" (with the `+=` operator or the `append` method) and "throwing out of the box" (with the `pop` method).

- A **stack** is a data structure described by the acronym **FILO** = "First-In Last-Out". An example is a stack of books on a desk: we put books on the stack one by one (this operation is generally called "push"); the most recently placed book will be the first to be taken from the top (that operation is "pop"); in other words, the book placed first ("First-In") will be taken off last ("Last-Out").
An algorithmic example is `Ctrl+Z`, i.e. undoing operations, e.g., in a text editor; the most recently performed operation is the first to be undone.

- A **queue** is in turn described by the acronym **FIFO** = "First-In First-Out". As in a queue at a shop: whoever arrives first ("First-In") is also served first ("First-Out"). The operations of entering and leaving the queue are called "enqueue" and "dequeue".

A simple implementation of both these structures is a list:

- Imagine some iteration in which at each step we "toss" elements into the "box" via the `append` method and "throw them out" via `pop()` (recall that `pop` without arguments removes the last element) - this is a stack, because what we most recently attached with `append` is the first thing removed by `pop()`.
- Very similarly, tossing in via `append` but throwing out via `pop(0)` (i.e. remove the first element of the list) - we have a queue, because what we attached most recently is thrown out last.

(Python has better, more optimized implementations of the stack and the queue. Here we only want to show that both of these data structures can easily be written by ourselves using the concepts learned so far.)

Let us show an example of a task in which the notion of a stack appears quite naturally. It is a harder exercise, but analyze it to become more familiar with algorithmic thinking - with how to attack such problems. The task is the following: We are given a string composed of brackets of three kinds, `'('`, `')'`, `'['`, `']'`, `'{'`, `'}'`, e.g. `'{[]{()}}'` or `'[(])'`. We must answer the question of whether it is "balanced" - i.e. whether every opening bracket has a closing partner in the right place - or not. The first example is balanced, the second is not. How can we approach this? Let us create an "empty box" (which - as we shall see - will be our stack), as an empty list. Let us iterate through our given string of brackets.
If the bracket is an opening one, toss it into the "box" (with the `+=` operator or the `append` method). Say that we have now encountered a closing bracket for the first time, e.g. `']'`. If the string is balanced, then the bracket most recently tossed into the "box" must be its opening partner, i.e. `'['`. If it is not, then we already know the string is not balanced. If it is, however, imagine that the two brackets "annihilate each other": "poof!" and both vanish! We move on and repeat the procedure - if in this way we annihilate the entire string, it is balanced. And note that this "annihilation" is exactly the removal of the most recently tossed-in element - so we have a stack! Let us see, on the first example (`'{[]{()}}'`), how our stack evolves: (1) add: `'{'`, (2) add: `'{['`, (3) remove: `'{'`, (4) add: `'{{'`, (5) add: `'{{('`, (6) remove: `'{{'`, (7) remove: `'{'`, (8) remove: `''`. We ended with an empty stack, which proves balance. Analyze the code below, implementing this stack using the `append` and `pop` methods (a flag `balanced` records a detected mismatch, so that exactly one verdict is printed at the end):

```
brackets = '[(])'
open_list = [ '(' , '[' , '{' ]
close_list = [ ')' , ']' , '}' ]
stack = []
balanced = True   # becomes False as soon as we detect a mismatch

for s in brackets:
    if s in open_list:
        stack.append( s )   # an opening bracket is simply pushed onto the stack
    elif s in close_list:
        s_match = open_list[ close_list.index( s ) ]   # the opening partner of the closing bracket s
        if len( stack ) > 0 and stack[ -1 ] == s_match:   # if the top of the stack is the opening partner of s, annihilate them!
            stack.pop()
        else:   # if not, the string is certainly unbalanced
            balanced = False
            break

if balanced and len( stack ) == 0:   # to be balanced, the whole procedure must end with an empty stack
    print( 'Balanced' )
else:
    print( 'Unbalanced' )
```

### <a id="5.4"></a>5.4. A `for` loop over indices

Let us return to a more basic discussion of the `for` loop.
We could also write it somewhat differently: iterating over indices, i.e. passing through the successive values that the index takes: from 0 to the length of the list minus 1 (e.g. for a list of length 5 the indices are 0, 1, 2, 3, 4), and for each index `i` taking the corresponding list element with the `[i]` operator.

#### <a id="5.4_b1"></a>The `range` function

Let us recall here the `range` function introduced in Lesson 4:

| Version | Behavior |
| --- | --- |
| <center>`range(n)`</center> | <center>the sequence 0, 1, ..., $(n - 1)$</center> |
| <center>`range(m, n)`</center> | <center>the sequence $m$, $(m + 1)$, ..., $(n - 1)$</center> |
| <center>`range(m, n, k)`</center> | <center>the sequence from $m$ to $(n - 1)$ in steps of $k$</center> |

So we have, e.g.:

```
for i in range( 5 ):
    print( i )

for i in range( 5 , 10 ):
    print( i )

for i in range( 10 , 20 , 2 ):
    print( i )
```

A technical remark: One might think that the `range` function returns a list of the appropriate integers. However, if we write:

```
range( 5 )

type( range( 5 ) )
```

... it is not a list, but a separate data type, the so-called `range` type. Without going into details, let us just say that - as we always do - we can convert the `range` type to the list type by using the function named the same as the type we convert to, i.e. `list`:

```
list( range( 5 ) )
```

Before we go further, let us also recall the function `len`, which returns the length of any collection. E.g.:

```
len( range( 6789 , 768231 , 131 ) )   # how many integers are there from 6789 to 768230 (not 768231!) in steps of 131?
```

#### <a id="5.4_b2"></a>Iterating over the indices of a list

Let us return to the topic of iterating over lists. We can now write a `for` loop with which we iterate through the _indices_ of a list instead of simply through its elements; we thus iterate from 0, 1, ... up to the length of the list minus 1 - this sequence is described exactly by `range(len(lst))` for a list `lst`.
```
list( range( len( words ) ) )   # the sequence of indices of the list words: from 0 to (len(words) - 1)

for i in range( len( words ) ):   # we iterate over the indices 0, 1, 2, ..., up to the length of the list minus one
    print( words[ i ] )   # the element at index i is obtained with the square-bracket operator
```

This is a correct way of iterating - though certainly less elegant than iterating directly over the elements of the list!

### <a id="5.5"></a>5.5. A `for` loop over both the indices and the elements of a list - the `enumerate` function

It happens that while iterating through a list and checking some given conditions (e.g. whether a given element is a positive number), we would like to obtain information not only about _which element_ satisfies our condition, but also about _what indices_ the indicated elements have. A useful tool for this purpose is the `enumerate` function, which transforms the list given as its argument into a sequence of the same length, in which, however, each element is a _pair_ (strictly speaking: a tuple - see the beginning of this Lesson and Lesson 7); in this pair the first position is the index, and the second position is the actual list element at that index. (More precisely, as we have already seen above, the resulting sequence of (index, element) pairs is not itself a list, but another data type, which we can nevertheless turn into a list with the type-conversion function `list`.)

```
enumerate( words )

list( enumerate( words ) )
```

Here we see, assigned to each element of the list `words`, its corresponding index - which can be very convenient in applications where the value of the index interests us. Say we want to obtain all the index values of the elements of the list `words` that contain the letter `'p'`. This is a kind of "filtering" (we will speak about it more generally in the section <a href="#6.4">Filtering lists</a>), i.e. selecting particular elements of a list based on given conditions, which we have already happened to do with a `for` loop - with the difference that now we are also interested in the indices.
```
for i , word in enumerate( words ):   # we iterate through the sequence of (index, element) pairs
    if 'p' in word:
        print( f'index: {i} corresponds to element: {word}' )   # let us print both the index and the value of the element
```

Notice how elegantly the syntax behaves - the elements of the sequence `enumerate(words)` are _pairs_ (strictly: tuples), so by iterating over this sequence we iterate over pairs: at each step we have _two_ variables, `i` and `word`, which take on the successive values of, respectively, the first position and the second position in the pair. In the syntax of the `for` loop we simply separate them with a comma. For greater readability we could enclose them in parentheses, `(i, word)`, but it is not necessary.

<img style = 'float: left; margin-right: 10px; margin-bottom: 10px' src = 'Images/question.png'> Quick exercise 23: As an example application, let us try to answer the following question: We are given a list and some value. What are the indices whose corresponding list elements take this given value? (Remember that a list can contain duplicates, i.e. several elements with the same value.) E.g., given the list `[5, -7, 5, 3, 4, -1, 5, -7]` and the value 5, the answer would be the indices 0, 2, 6, i.e. the positions where 5 occurs. Note: There is a built-in method `index`, which gives the index of a given value - however, it gives only the index of the _first occurrence_ of that value, which is not necessarily the desired behavior when duplicates are present. (Try to solve this yourself before you look at the solution below.)

```
# quick exercise 23 - solution
```

Solution: To this end, let us iterate through the given list, at each step checking whether the given element equals 5. If so, let us add its index to a (previously initialized) empty list - our "box" of results. Since we are interested in the indices, we must iterate not just over the list itself, but over the (index, element) pairs.
```
value_to_check = 5
value_list = [ 5 , -7 , 5 , 3 , 4 , -1 , 5 , -7 ]
indices_of_value = []   # an "empty container" for the indices at which an element with the value of value_to_check occurs

for i , value in enumerate( value_list ):   # we iterate through the sequence of (index, element) pairs
    if value == value_to_check:   # if the current element's value equals the value of value_to_check...
        indices_of_value += [ i ]   # ... then we add the index of that value to the "container"; alternatively: indices_of_value.append( i )

indices_of_value
```

The `index` method gives only the first occurrence:

```
value_list.index( 5 )   # 5 occurs for the first time at position 0 (but there are other fives in the list, which we do not see here!)
```

<img style = 'float: left; margin-right: 10px; margin-bottom: 10px' src = 'Images/question.png'> Quick exercise 24: Create the list of indices of those elements of a given list that lie _between_ some values, e.g. `mn = 2` and `mx = 6`. Use the examples above. (We could call such a function `index_range`.)

```
# quick exercise 24 - solution
```

### <a id="5.6"></a>5.6. A `for` loop over several lists at once, joined "zipper-style" - the `zip` function

Let us take a closer look at the structure of the `enumerate` function. It creates a sequence in which each element is a _pair_ (strictly: a tuple) of the form (index, element); the index takes the values 0, 1, 2, ... up to the length of the list minus one; the element takes the values from the given list.

```
names = [ 'Kacper' , 'Ania' , 'Bartek' , 'Kamila' , 'Basia' ]

list( enumerate( names ) )
```

<img style = 'float: right; margin-left: 10px; margin-bottom: 10px' src = 'Images/zip.png' width = '300px'> Let us notice the "zipper structure" here: Imagine that we have a second list - containing the index values of the first list, i.e. `[0, 1, 2, 3, 4]`. So we have two lists: `[0, 1, 2, 3, 4]` and `['Kacper', 'Ania', 'Bartek', 'Kamila', 'Basia']`.
The above result of the `enumerate` function arises from "pairing up" the corresponding elements of these two lists: `0` is assigned to `'Kacper'`, `1` to `'Ania'`, and so on. As if we were fastening the two lists together with a zipper. To zip lists together we use the function - of course! - `zip`. From two lists of the same length it produces a sequence of pairs, where each pair consists of the corresponding elements, first from the first list, then from the second. (Again, to turn this sequence into a list, apply the `list` function to the result.)

```
zip( [ 0 , 1 , 2 , 3 , 4 ] , names )

list( zip( [ 0 , 1 , 2 , 3 , 4 ] , names ) )
```

We can of course zip together arbitrary lists. (The lists should be of the same length - and if one of them is shorter, the other will be truncated to the smaller length!)

```
scores = [ 26.5 , 33.2 , 31.7 , 28.4 , 35.3 ]

list( zip( names , scores ) )   # we zip together two lists: names and scores
```

Over such a list of pairs we can iterate with a `for` loop exactly as in the case of `enumerate`, i.e. remembering that we iterate over several values at once.

```
for name , score in zip( names , scores ):
    print( f'{name} scored {score} points.' )
```

We can "zip together" more than two lists this way:

```
games_played = [ 2 , 4 , 4 , 3 , 6 ]

list( zip( names , scores , games_played ) )   # we zip together three lists

for name , score , games in zip( names , scores , games_played ):
    print( f'{name} scored {score} points, and played {games} games.' )
```

<img style = 'float: left; margin-right: 10px; margin-bottom: 10px' src = 'Images/question.png'> Quick exercise 25: Use the lists `names`, `scores`, `games_played` above. (a) Print all the names of the people who had more than 30 points. (b) Print the number of games played by the people who had fewer than 30 points. (c) Print all the names of the people who had fewer than 32 points but played more than 2 games.
```
# quick exercise 25a - solution

# quick exercise 25b - solution

# quick exercise 25c - solution
```

### <a id="5.7"></a>5.7. List comprehension

We will finish this section with perhaps the most beautiful 😀 and one of the most useful, most frequently used in practice, elements of Python syntax concerning lists - so-called **list comprehension**. In short, it is a very concise method of creating new lists from existing lists. And it is only a step away from the `for` loops we have written repeatedly in this Lesson - note that they often had the following general form:

```
for item in lst:
    if condition:
        expression
```

In other words, we iterate through a given list `lst`; if the current element `item` satisfies a given logical condition `condition` (e.g. "the number is divisible by 100", "the word starts with the letter `'a'`", etc.), then we execute a certain expression, a certain operation `expression` on the element `item`. (The condition `condition` is optional - sometimes we want to perform the operations on all elements of the list. In that case we omit it.) We can think of this loop as _creating a new list_. Namely, this new list will have as many elements as passed through the "sieve" of the condition `condition`. Moreover, after passing the "sieve" an element is additionally subjected to some transformations, contained in the expression `expression`. We will see how this works on a simple example of a numeric list, from which we select only the negative numbers, flip their sign and multiply them by 100:

```
numbers = [ 10 , 30 , -15 , 25 , -20 , 5 ]
numbers_new = []   # an "empty container" into which we will toss the selected and transformed elements

for number in numbers:   # we iterate over the list...
    if number < 0:   # ... if the number is negative...
        number_new = - 100 * number   # ... then flip its sign and multiply it by 100
        numbers_new += [ number_new ]   # we "toss" the result into the "container"

numbers_new
```

The above general `for` loop with a condition can be written much more concisely in the "list comprehension" syntax: `[ expression for item in lst if condition ]` The first thing is enclosing the whole in square brackets - this immediately suggests that the result will be some list! Inside the brackets, meanwhile, the following happens: "execute the expression `expression` for every element `item` from the list `lst` satisfying the logical condition `condition`". That's it! If there were no condition, we omit that last part: `[ expression for item in lst ]` In our example we thus create the new list (very concisely!) like this:

```
[ -100 * number for number in numbers if number < 0 ]
```

... since our transformation `expression` performed on the current element `number` of the list `numbers` is flipping the sign and multiplying by 100, i.e. `-100 * number`, while the condition `condition` is that the current element `number` be negative, i.e. `number < 0`. Another example, without a conditional: let us square all the elements of the list:

```
[ number ** 2 for number in numbers ]
```

From a list of words let us select only those containing the letter `'p'` and write them in reverse order (recall slice syntax and how we used it to reverse the order of the elements of a list!):

```
words

[ word[ ::-1 ] for word in words if 'p' in word ]
```

These expressions are not only concise and elegant - they also closely resemble natural language, which is why they are easy to write and to read with understanding.

<img style = 'float: left; margin-right: 10px; margin-bottom: 10px' src = 'Images/question.png'> Quick exercise 26: From the list of consecutive integers from 10 to 100, select only the numbers divisible by 17 and raise them to the third power.
```
# quick exercise 26 - solution
```

<img style = 'float: left; margin-right: 10px; margin-bottom: 10px' src = 'Images/question.png'> Quick exercise 27: Define a variable named `n` and assign it some integer, e.g. 100. Using "list comprehension", create the list of its divisors, i.e. the positive integers smaller than `n` by which `n` is divisible. So for `n = 100` it would be `[1, 2, 4, 5, 10, 20, 25, 50]`. Hint: Recall the remainder operator `%`. If we divide the number `n` by one of its divisors, we get remainder zero, e.g. `100 % 50` equals 0.

```
# quick exercise 27 - solution
```

<img style = 'float: left; margin-right: 10px; margin-bottom: 10px' src = 'Images/question.png'> Quick exercise 28: Recall the previously defined lists `names`, `scores`, `games_played`, as well as the `zip` construction "fastening lists together with a zipper". Solve the exercises given there using "list comprehension", i.e.: (a) Create a list containing all the names of the people who had more than 30 points. (b) Create a list containing the number of games played by the people who had fewer than 30 points. (c) Create a list containing all the names of the people who had fewer than 32 points but played more than 2 games.

```
# quick exercise 28a - solution

# quick exercise 28b - solution

# quick exercise 28c - solution
```

<img style = 'float: left; margin-right: 10px; margin-bottom: 10px' src = 'Images/question.png'> Quick exercise 29: Recall the list method `count`, which counts how many times some element `m` occurred in a list `lst`. E.g. for `lst = [7, 3, 2, 3, 8, 11, 2, 7, 7, 11, 4, 2, 8, 1]` and `m = 7` it would be 3 times. Write this using "list comprehension" syntax. Hint: Iterate through `lst`, selecting only the elements equal to `m` (and not transforming them afterwards in any way). At the end compute the length of that list.
```
# quick exercise 29 - solution
```

<img style = 'float: left; margin-right: 10px; margin-bottom: 10px' src = 'Images/question.png'> Quick exercise 30: Generalize the above `count` function to a function we could call `count_range`, i.e. one answering the question of how many elements lie between given values, e.g. `mn = 5` and `mx = 9`.

```
# quick exercise 30 - solution
```

<img style = 'float: left; margin-right: 10px; margin-bottom: 10px' src = 'Images/question.png'> Quick exercise 31: Recall the `remove` method, which removes from a list the _first_ occurrence of a given element `m`. Using "list comprehension", write a generalization of this method which removes _all_ occurrences of `m` from the list.

```
# quick exercise 31 - solution
```

<img style = 'float: left; margin-right: 10px; margin-bottom: 10px' src = 'Images/question.png'> Quick exercise 32: You are given a string, e.g. `jewels = 'aA'`, in which each letter is a kind of gemstone (think of it as a code: `'a'` is the encoded name of a diamond, `'A'` of an emerald, etc.). You also have a string, e.g. `stones = 'caAbbcAAbabc'`, which describes the kinds of stones you own; only some of them are gems, namely those that occur in the string `jewels` (so here you have 5 gems). Compute the number of gems you own using "list comprehension" (and the `len` function at the end).

```
# quick exercise 32 - solution
```

<img style = 'float: left; margin-right: 10px; margin-bottom: 10px' src = 'Images/question.png'> Quick exercise 33: The list of digits of a given natural number. This is a very useful exercise! You are given a natural number, e.g. `n = 5781`, and we want to obtain the list of its digits, i.e. `[5, 7, 8, 1]`. (There is a certain trick that can be used here, described in the hint below - but think on your own first: a certain type conversion will come in handy...) Hint: The following trick is useful: First, let us convert `n` to the string `'5781'` with the `str` function.
Then write a list comprehension iterating over that string - the iterator will thus take the successive values `'5'`, `'7'`, `'8'`, `'1'`. That is almost what we want: we only need to convert each of these one-character strings back to a number using the `int` function.

```
# quick exercise 33 - solution
```

### <a id="5.8"></a>5.8. List comprehension - `if` - `else`

We know that conditional statements (`if`) can also have an `else` part, i.e. what happens if the condition is not met. We can include this in list-comprehension syntax as follows:

`[ expression_1 if condition else expression_2 for item in lst ]`

Note that the `if` - `else` conditional appears here _before_ the `for` loop, not after it as in the cases without `else`!

```
numbers = [ 10 , 30 , -15 , 25 , -20 , 5 ]

[ 2 * number if number > 0 else 10 * number for number in numbers ]  # if the number is positive, multiply it by 2; otherwise by 10
```

<img style = 'float: left; margin-right: 10px; margin-bottom: 10px' src = 'Images/question.png'>

Quick exercise 34: Define the lists `foods = ['apple', 'chocolate', 'banana', 'cabbage', 'cookie', 'cauliflower']` and `forbidden_foods = ['chocolate', 'cookie']`. Turn the `foods` list into a list containing its elements, except that whenever an element also belongs to the `forbidden_foods` list, replace it with the string `'not healthy!'`; in other words, the result should be `['apple', 'not healthy!', 'banana', 'cabbage', 'not healthy!', 'cauliflower']`.

```
# quick exercise 34 - solution
```

### <a id="5.9"></a>5.9. List comprehension - nested loops

Let us also mention a few other ways of using list comprehensions. First, nothing prevents us from having multiple loops nested inside one another ("nested loops").
So, for instance:

```
for item_1 in list_1:
    for item_2 in list_2:
        expression
```

would be written as:

```
[ expression for item_1 in list_1 for item_2 in list_2 ]
```

where the order of the `for` clauses runs from left to right. The syntax is thus identical to before, except that we have several `for` loops written one after another from left to right. We can add conditions to each loop, e.g.:

```
for item_1 in list_1:
    if condition_1:
        for item_2 in list_2:
            if condition_2:
                expression
```

would be written as:

```
[ expression for item_1 in list_1 if condition_1 for item_2 in list_2 if condition_2 ]
```

As an example, let us try to find the common elements of two lists. To do so, we must iterate over both lists - a nested loop - and select only the elements equal to each other.

```
a_list = [ 7 , -1 , 12 , 3 , 5 , 14 , 21 ]
b_list = [ 4 , -1 , 3 , 8 ]

for a in a_list:
    for b in b_list:
        if a == b:
            print( a )
```

... and the same as a list comprehension:

```
[ a for a in a_list for b in b_list if a == b ]
```

An interesting application of list comprehensions with nested loops is **flattening** a list of lists. As we remember, the elements of a list can be of any type; in particular, they can themselves be lists. It sometimes happens that we would like to take such a list of lists and transform it into a list of individual elements, e.g. turn:

`[ [ 5 , 3 ] , [ 1 , -4 , -8 ] , [ -7 , 16 ] , [ 0 , 10 , 23 , 11 , -9 ] ]`

into:

`[ 5 , 3 , 1 , -4 , -8 , -7 , 16 , 0 , 10 , 23 , 11 , -9 ]`

This can be done with nested loops: first we iterate over the sub-lists of the original list, and then over the current sub-list:

```
list_of_lists = [ [ 5 , 3 ] , [ 1 , -4 , -8 ] , [ -7 , 16 ] , [ 0 , 10 , 23 , 11 , -9 ] ]
```

Let us first see what the first loop, over the sub-lists, does:

```
for sublist in list_of_lists:  # iterate over the sub-lists
    print( sublist )
```

... and indeed, the elements of the list `list_of_lists` are other lists, and we printed them here.
While iterating over the sub-lists, we have the current sub-list `sublist`. We now want to iterate over its elements and throw those elements into a (previously initialized) "empty container":

```
list_of_lists_flattened = []  # the "empty container"

for sublist in list_of_lists:  # iterate over the sub-lists
    for item in sublist:  # iterate over the elements of the current sub-list sublist
        list_of_lists_flattened += [ item ]  # throw the element into the "container"

list_of_lists_flattened
```

The same operation in list-comprehension form (note that there is no condition here) is simply:

```
[ item for sublist in list_of_lists for item in sublist ]
```

Note that, just as above, the `for` clauses keep the same left-to-right order as the nested loops: first `for sublist in list_of_lists`, then `for item in sublist`. It doesn't get more beautiful than this. 😀

## <a id="6"></a>6. (*) List transformations - functional programming

<img style = 'float: right; margin-left: 10px; margin-bottom: 10px' src = 'Images/function.png' width = '400px'>

Given a list, we usually want to do something with it! We have already learned how to modify lists by changing the values of their elements and by adding/removing elements, as well as how to create new lists via the very effective (and impressive!) list-comprehension syntax. In this optional section we will deal with three specific, though very general, classes of operations we might perform on lists. They belong to the paradigm of so-called **functional programming**; this paradigm treats a computer program as a series of evaluations of mathematical **functions** - and a function can be pictured as a "machine" into which, on one side, we throw various "ingredients"/arguments, on which the "machine"/function performs assorted operations in order to return, on the other side, a "finished product"/result. We will learn about functions in Lessons 9-10.

These three classes of operations are:

1. From a given list we would like to obtain - by combining its elements - a single value that somehow describes the list. This could be, e.g., the sum of all the elements of the list, the number of its elements, etc.
We say that we **reduce** the list to a single value; the Python keyword associated with this is `reduce`.

2. We could subject every element of a given list to all manner of functional transformations, obtaining as a result another list of the same length, where each element arises from transforming the corresponding element of the original list. E.g., we want to square every element of a numeric list; or convert every element of a list of words to uppercase; etc. Here we **map**/transform each element into another element; the keyword is `map`.

3. We want to select from a given list a sub-list of elements satisfying given criteria. E.g., pick only the even numbers from a list of integers; or the words starting with a capital letter from a list of words. Here we **filter** the elements through a given "sieve"; the related keyword is `filter`.

### <a id="6.1"></a>6.1. Introduction: `lambda` functions

We will only learn about functions in Python later (Lessons 9-10), but since we already want to apply functional transformations to lists, we must get acquainted with at least the simplest kind of function, so-called **lambda functions**. A lambda function is the simplest "machine": we throw in "ingredients"/**arguments**, on which the "machine" performs just one computation - given by some mathematical **expression** - and returns its **result**. The syntax is:

`lambda arguments : expression`

E.g., a lambda function with two numeric arguments `x` and `y` that returns their sum `x + y` looks as follows:

`lambda x , y : x + y`

```
f = lambda x , y : x + y  # we define f as a lambda function computing the sum (x + y) of its two arguments x, y
```

... whereas we call the function by supplying its arguments in round brackets, exactly as with mathematical functions:

```
f( 3 , 4 )  # arguments passed to the function in round brackets
```

Similarly, a lambda function with one numeric argument that raises it to the second power:

```
f1 = lambda x : x ** 2  # a lambda function with one argument x, raising x to the second power

f1( 9 )  # arguments passed to the function in round brackets
```

Despite its short form, the expression in a lambda function can perform all sorts of tasks. Recall, for instance, that in Python, given a numeric variable `x`, the expression `x > 0` has the logical value true/false (`True`/`False`) depending on whether `x` is positive or not.

```
x = -2.3

x > 0
```

So, wanting to write a lambda function that takes one numeric argument `x` and returns the logical true/false value answering the question "is `x > 0`?", we simply write:

```
f2 = lambda x : x > 0  # a lambda function with one argument x, returning the logical value of the expression (x > 0)

f2( -7.5 )  # arguments passed to the function in round brackets
```

We have already learned about conditional statements (Lesson 3). We can include them in the body of a lambda expression as well. E.g., a lambda function taking two numeric arguments `x` and `y` and returning the larger of the two should evaluate the expression: `x` if `x > y`, or `y` otherwise (i.e. if `x <= y`).

```
f3 = lambda x , y : x if x > y else y  # a lambda function with two arguments x, y, returning x if x > y and y otherwise

f3( 5 , 8 )  # arguments passed to the function in round brackets
```

You have surely noticed the _compactness_ of these expressions - and their elegance! This is precisely the great advantage of Python - the ability to write down very concisely the operations one wants to perform.
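Lambda functions are at their most useful when passed to another function that expects a short function as an argument. As a small illustrative aside (my example, not one from this lesson), the built-in `sorted` function accepts a `key` parameter, and a lambda fits there naturally:

```python
words = ['pen', 'Computer', 'apple', 'TV']  # a sample word list, assumed here for illustration

# sort by word length - the key lambda is applied to each element before comparing
by_length = sorted(words, key=lambda word: len(word))
print(by_length)   # ['TV', 'pen', 'apple', 'Computer']

# sort alphabetically ignoring case - the lambda lowercases each word for comparison
by_alpha = sorted(words, key=lambda word: word.lower())
print(by_alpha)    # ['apple', 'Computer', 'pen', 'TV']
```

Note that `sorted` returns a new list and leaves `words` unchanged - the lambda only influences the ordering, not the elements themselves.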
<img style = 'float: left; margin-right: 10px; margin-bottom: 10px' src = 'Images/question.png'>

Quick exercise 35: (a) Write a lambda function with one string argument that returns its first character. (b) Write a lambda function with one string argument that returns the logical value `True` if the last character of the argument is `'s'`.

```
# quick exercise 35a - solution

# quick exercise 35b - solution
```

### <a id="6.2"></a>6.2. Reducing lists

Let us return to the topic of lists. Many operations on lists take the form of a **reduction**, where we want to extract from the list a single value that somehow describes it, e.g. the length of the list, the sum of its elements, the largest value (the maximum), the median, etc.

#### <a id="6.2_b1"></a>Reducing with a loop

The simplest approach would be to walk through the given list from beginning to end, through every element - i.e. iterate, e.g., with a `for` loop - and construct the desired result element by element. E.g., to compute the sum of all elements of a list, we can go through the list step by step and add the elements up one after another: we start with the first and add the second to it; to that sum we add the third; and so on. We know this well already: define a variable `total_sum` with initial value zero. Then iterate over the list, going element by element, each time adding the current element to `total_sum`. After traversing the whole list in this way, `total_sum` will equal the sum of all its elements.

```
numbers
```

```
# sum of all elements of the list via a for loop
total_sum = 0
for number in numbers:  # iterate over the list
    total_sum += number  # short form of the assignment total_sum = total_sum + number

total_sum
```

The variable `total_sum` accumulates the sum of the elements as the loop runs - such a variable is called an **accumulator**.
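The same accumulator pattern works for other reductions; only the initial value and the update step change. A small sketch of my own (not part of the lesson's examples), computing the product of all elements - note that the accumulator must start at 1, not 0:

```python
numbers = [10, 30, -15, 25, -20, 5]  # the example list used in this lesson

# product of all elements via a for loop
total_product = 1  # starting from 0 would make every product zero
for number in numbers:
    total_product *= number  # short form of total_product = total_product * number

print(total_product)
```

The choice of the initial value is the "neutral element" of the operation: 0 for addition, 1 for multiplication.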
Another example: to compute the number of elements of a list, we can again iterate over the list with a `for` loop, adding 1 to the accumulator at each step:

```
# number of elements of the list via a for loop
total_len = 0
for number in numbers:
    total_len += 1

total_len
```

If we wanted to obtain the maximum value contained in the list, we would also need a conditional statement: initialize a variable `maximum` as the first element of the list and iterate over the list, each time checking whether the current element is greater than the present value (in which case we assign it to `maximum`) or not (in which case we do nothing in that step of the iteration).

```
# maximum via a for loop
maximum = numbers[ 0 ]
for number in numbers:  # we could start iterating from the list's second element, i.e. numbers[ 1: ]
    if number > maximum:
        maximum = number

maximum
```

#### <a id="6.2_b2"></a>Built-in functions operating on lists

Such standard operations are of course built into Python's standard library, and nobody has to write loops to compute the sum of the elements of a numeric list! (It is, however, a very good exercise for getting familiar with how `for` loops work!) So we have, e.g., these built-in functions - `sum`, `len`, `max`, `min` - which take a whole list as their argument and return a single numeric value:

```
len( numbers )  # length of the list

sum( numbers )  # sum of the elements of the list

max( numbers )  # largest element of the list

min( numbers )  # smallest element of the list
```

#### <a id="6.2_b3"></a>`reduce`

The operations above belong to the following general class: we have a function of _two_ arguments, e.g. the function `lambda x , y : x + y`. We apply this function iteratively to our list as follows:

1. At the start, we apply it to the first and second elements, which returns some value.

2. In the next step we apply it to that returned value and to the third element. And so on.
Taking the aforementioned sum of two numbers as an example, computing the sum of all elements of the list in the manner described would take the form:

`( ( ( ( ( 10 + 30 ) + ( -15 ) ) + 25 ) + ( -20 ) ) + 5 )`

We can do this with a `for` loop, and in the case of standard operations also with a built-in function. In general, however, this is what the `reduce` function is for, with the following syntax:

`reduce( function_two_args , sequence )`

which applies the two-argument function `function_two_args` to the list `sequence` in the way described above.

A technical note: Python exists in two versions: 2 and 3. Version 2 stopped being supported on [January 1, 2020](https://pythonclock.org/). All modern code is therefore written in Python 3. In Python 2, `reduce` was part of the standard library of built-ins; in Python 3 it must instead be imported from the `functools` module:

```
from functools import reduce
```

... and only after such an import can we use the `reduce` function. Computing the sum of all elements of the list now has a very short form, reusing the function `lambda x , y : x + y` we wrote earlier:

```
reduce( lambda x , y : x + y , numbers )
```

<img style = 'float: left; margin-right: 10px; margin-bottom: 10px' src = 'Images/question.png'>

Quick exercise 36: Find the minimum of a numeric list using the `reduce` technique.

```
# quick exercise 36 - solution
```

### <a id="6.3"></a>6.3. Mapping lists

This class of operations consists of going step by step through the whole list and applying a one-argument function to each of its elements. The result is a new list of the same length as the old one, but where every element has been transformed ("mapped") into a new element. We might, e.g., want to square every element of a numeric list.

#### <a id="6.3_b1"></a>Mapping with a loop

The natural approach to the problem again relies on using a `for` loop.
This time, however, our accumulator variable is itself a list - we start from an empty list, an "empty container", into which we will successively throw the results of applying our 1-argument function to the successive elements of the old list. Let us also recall how we "throw"/add new elements into an existing list: we can use either the `append` method or the `+` operator.

```
# new list obtained by squaring all elements of the given list, via a for loop
numbers_squared = []  # initialize an empty list, to which we will add the squares of the numbers from the old list
for number in numbers:  # iterate over the list
    numbers_squared.append( number ** 2 )  # or alternatively: numbers_squared += [ number ** 2 ]

numbers_squared
```

As the result we thus obtained a list of the same length as the original one, but with every element squared.

#### <a id="6.3_b2"></a>`map`

Analogously to the `reduce` function, we have the `map` function, with identical syntax:

`map( function_one_arg , sequence )`

which walks through the list `sequence` and applies the 1-argument function `function_one_arg` to each of its elements, returning the list modified in this way. We have already learned how to write simple lambda functions; in particular, we wrote one that squares its argument. So we can simply execute:

```
map( lambda x : x ** 2 , numbers )
```

Wait, wait... what happened here? There is one more detail: `map` does not return a list by itself - it returns an iterator; we saw similar behaviour with the `enumerate` and `zip` functions. To turn it into a list, we need to use the `list` function, which converts its type into the list we know well.

```
list( map( lambda x : x ** 2 , numbers ) )
```

As the next example, take a list of words:

```
words
```

Recall that a beautiful feature of Python is that many of the list operations we have learned so far can also be applied to strings - e.g. selecting segments of characters using square brackets and the slice syntax `start_index : stop_index : step`, or using built-in functions such as `len`. E.g.:

```
'computer'[ 2:5 ]

len( 'computer' )
```

Returning to our list of strings, say we want to obtain from it a list of the same length where each new element is a number giving the length of the corresponding word in the old list. In other words, we want to apply the `len` function to every word in the list - which we naturally do via `map`:

```
list( map( len , words ) )
```

<img style = 'float: left; margin-right: 10px; margin-bottom: 10px' src = 'Images/question.png'>

Quick exercise 37: And what if we wanted to obtain a new list of words from the list `words`, where each new word arises from writing the old word backwards? (Solution below - try on your own first!)

```
# quick exercise 37 - solution
```

Solution: Recall how we reversed the order of the elements of a list using slice notation: `[ ::-1 ]`, which meant walking from the end of the list to its beginning in steps of 1 backwards.

```
'computer'[ ::-1 ]
```

In other words, we would like to apply the following function to each string in our list `words` - it would have one argument of type string, call it `word`, and return the following transformation of it: `word[ ::-1 ]`, i.e. that word written backwards. We already know that such simple functions are best written in lambda notation:

`lambda word : word[ ::-1 ]`

We then place this 1-argument function inside the `map` function:

```
list( map( lambda word : word[ ::-1 ] , words ) )
```

#### <a id="6.3_b3"></a>Mapping via list comprehensions

Note that all such mapping operations can be written - even more elegantly - using list-comprehension syntax. E.g.:

```
[ number ** 2 for number in numbers ]

[ len( word ) for word in words ]

[ word[ ::-1 ] for word in words ]
```

### <a id="6.4"></a>6.4. Filtering lists

In this class of operations we go step by step through the whole list and apply to each element a 1-argument function whose result is a logical value (type `bool`), true/false (`True`/`False`). This could be, e.g., a function that returns the logical value `True` if its argument `x` is positive, and `False` otherwise. The result of the pass through the whole list is a new list containing only those elements of the original list for which the function returned `True`. The list we obtain thus has a different length than the original one (shorter than or equal to it) - and contains only and exclusively those elements of the original list that "passed through our sieve", i.e. for which our function gave the logical value "true".

#### <a id="6.4_b1"></a>Filtering with a loop

Once again, it is natural to use a `for` loop to walk through the given list. At each step we use a conditional statement to see whether the current element satisfies our condition - if so, we add it to our "accumulating box" - an initially empty list into which we "throw" the filtered elements. Let us try to filter out only the positive numbers in this way:

```
# new list containing only the positive elements of the given list, via a for loop
numbers_positive = []  # initialize an empty list, to which we will add only those numbers from the old list that are positive
for number in numbers:  # iterate over the list
    if number > 0:  # only if the positivity condition is met do we add the number to the accumulating list
        numbers_positive.append( number )  # or alternatively: numbers_positive += [ number ]

numbers_positive
```

As the result we obtained a shorter list, containing only the filtered numbers - only the positive ones.
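The loop pattern above works for any condition, not just positivity. As a small aside of my own (not from the lesson), the test can itself be packaged as a 1-argument lambda held in a variable, which foreshadows how a predicate function can be passed around:

```python
numbers = [10, 30, -15, 25, -20, 5]  # the example list used in this lesson

is_positive = lambda x: x > 0  # the condition packaged as a 1-argument lambda

filtered = []  # the "accumulating box"
for number in numbers:
    if is_positive(number):  # the loop only consults the predicate, whatever it is
        filtered.append(number)

print(filtered)   # [10, 30, 25, 5]
```

Swapping `is_positive` for any other logical lambda changes what gets through the "sieve" without touching the loop itself.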
#### <a id="6.4_b2"></a>`filter`

Analogously to `reduce` and `map`, we have the `filter` function, with exactly the same syntax:

`filter( logical_function_one_arg , sequence )`

We have also already seen how to write a lambda function testing whether its argument is positive or not:

`lambda x : x > 0`

We can therefore write, very concisely:

```
numbers

list( filter( lambda x : x > 0 , numbers ) )
```

As another example, take our list of words again:

```
words
```

Say we want to select (filter) from it only those words that contain the letter `p`. At the beginning of this lesson we mentioned the small but useful `in` operator, which we applied to check whether some variable is an element of a list:

```
3 in [ 1 , 2 , 3 ]
```

We can just as well apply it to strings:

```
'p' in 'pen'
```

We thus want to write a lambda function that would take one argument of type string, call it `word`, and return the logical value `True` if the letter `p` is in the word `word`, and `False` otherwise. This is of course:

`lambda word : 'p' in word`

We place this 1-argument logical function inside the `filter` function:

```
list( filter( lambda word : 'p' in word , words ) )
```

<img style = 'float: left; margin-right: 10px; margin-bottom: 10px' src = 'Images/question.png'>

Quick exercise 38: From the list `words`, select only those elements that contain the letter `'e'` and whose length exceeds 5.

```
# quick exercise 38 - solution
```

#### <a id="6.4_b3"></a>Filtering via list comprehensions

Just as with mapping, filtering too can be written using list-comprehension syntax. E.g.:

```
[ number for number in numbers if number > 0 ]

[ word for word in words if 'p' in word ]
```

We can also combine mapping and filtering into a single such expression, as we saw earlier. List comprehensions are thus a more universal language, one that unifies `map` and `filter` - and does so in a more readable and concise way.
```
[ number ** 2 for number in numbers if number > 0 ]

[ word[ ::-1 ] for word in words if 'p' in word ]
```

## <a id="7"></a>7. Homework for Lesson 6

<img style = 'float: left; margin-right: 10px; margin-bottom: 10px' src = 'Images/question.png'>

Longer exercise 39: You are given a list of strings - web addresses, as below. Using list-comprehension syntax (and an appropriate string method), create a list of strings with the top-level domain names, i.e. here `['com', 'net', 'com', 'org', 'pl', ...]`.

Hint: Recall the string method `split`. First split on the character `'/'`, then on `'.'`. The top-level domain is the third element in the split on `'/'` and the last one in the split on `'.'`.

```
websites = [ 'http://puzzles.bostonpython.com/' , 'https://projecteuler.net/' , 'https://stackoverflow.com/' , 'https://arxiv.org/' , 'https://znadplanszy.pl/blogi/' , 'https://www.lego.com/pl-pl' , 'https://chris.beams.io/posts/git-commit/' , 'https://www.kaggle.com/tags/finance' , 'https://skymind.ai/wiki/differentiableprogramming' , 'https://www.risk.net/cutting-edge/banking/2436867/cva-greeks-and-aad' , 'https://ml-con.nl/' , 'https://www.infoworld.com/article/3241107/julia-vs-python-which-is-best-for-data-science.html' ]

# longer exercise 39 - solution
```

<img style = 'float: left; margin-right: 10px; margin-bottom: 10px' src = 'Images/question.png'>

Longer exercise 40: Palindromes. You are given a (very long!) list of English words - load it from the attached text file. (We have not covered file operations - simply execute the cell below.) Using a list comprehension, create from it the list of palindromes, i.e. words which, read backwards, are identical to themselves. Check your answer: there are 103 palindromes here.
```
with open( 'Files/english_words.txt' ) as f:
    english_words = f.readlines()
english_words = [ word.strip() for word in english_words ]

len( english_words )

# longer exercise 40 - solution
```

<img style = 'float: left; margin-right: 10px; margin-bottom: 10px' src = 'Images/question.png'>

Longer exercise 41 (*): "Happy number". Let `n` be a positive integer. We call this number "happy" if the following property holds: starting from `n`, we perform the following operation - we replace `n` with the sum of the squares of its digits. Then we repeat the operation, and so on. Two kinds of behaviour of such a sequence are possible: either (1) it ends at the number 1 - and then we call `n` "happy" - or (2) it falls into an endless cycle oscillating between numbers different from 1 - and then `n` is not "happy". For example, 19 is "happy": in the first step we change it to $1^2 + 9^2 = 82$, in the second to $8^2 + 2^2 = 68$, in the third to $6^2 + 8^2 = 100$, in the fourth to $1^2 + 0^2 + 0^2 = 1$. On the other hand, 112 is not "happy" - the iteration falls into the endless cycle 89, 145, 42, 20, 4, 16, 37, 58 and again 89. Write an algorithm determining whether a given `n` is "happy".

Hint: First write a `while` loop (with the condition that `n` differs from 1), where at each step you change `n` into the sum of the squares of its digits. To compute it, use the trick learned in exercise 33 (the list of digits of a number). That would be all, were it not for the need to check whether we have fallen into an endless loop. To that end, declare an "empty box" (named, say, `memory`), into which you throw the successive value of `n` at every iteration. Now, for each new value of `n`, check whether it is already in `memory` - if so, we have a cycle and must escape (`break`) from the loop, printing some message.

```
# longer exercise 41 - solution
```

<img style = 'float: left; margin-right: 10px; margin-bottom: 10px' src = 'Images/question.png'>

Longer exercise 42 (**): Converting an Excel column name to a number.
You are given a string that is the name of an Excel column, e.g. `excel_col = 'AA'` or `excel_col = 'NTP'`. Using list-comprehension syntax, write an algorithm converting it to a numeric value: `'A'` is column 1, `'B'` is column 2, etc. So, e.g., `'AA'` is column no. 27, while `'NTP'` is column no. 10000.

Hint: Think of the column name as a number in the base-26 numeral system (that is how many letters the English alphabet has). Indeed, look at the example `'NTP'`: `'N'` is the 14th letter of the alphabet, `'T'` the 20th, and `'P'` the 16th. It will therefore be the column with number $14 \cdot 26^2 + 20 \cdot 26 + 16 = 10000$. Similarly, `'AA'` has number $1 \cdot 26 + 1 = 27$. You thus want to iterate over the string `excel_col`, at each step noting not only the letter but also its index - use the `enumerate` function. In fact, iterate over `excel_col` reversed - you want the last letter to have index 0, the second-to-last index 1, and so on. You will then use such an index `i` in the power `26 ** i`. Now multiply that power by the index of the given letter in the alphabet (plus 1) - use the `index` method and an auxiliary variable `alphabet = 'ABCDEFGHIJKLMNOPQRSTUVWXYZ'` - e.g. for `'P'` you want to get the number 16 here, since it is the 16th letter of the alphabet.

```
# longer exercise 42 - solution
```

<img style = 'float: left; margin-right: 10px; margin-bottom: 10px' src = 'Images/question.png'>

Longer exercise 43 (**): Let us practise the stack construction, see <a href="#5.3">Stack and queue</a>, but in a simpler setting than the example given there. We have two strings, say `text_1 = 'a##c'` and `text_2 = '#a#c'`. They contain only letters of the English alphabet and the character `'#'`. We agree that this character denotes pressing "backspace". In other words, e.g. the string `'abc##de#'`, once printed, would really be `'ad'`, because the letters "b" and "c" get deleted by the two presses of "backspace", and similarly "e" gets deleted by one press.
Write an algorithm that checks whether such strings `text_1` and `text_2` are identical after being printed in this convention.

Hint: Create an "empty box" `stack`. Iterate over the given string, e.g. `text_1`. If the current character is a letter (check this with the string method `isalpha`), push it onto the stack (with the `append` method or the `+=` operator). If it is `'#'`, "annihilate" the last letter on the stack (with the `pop()` method). At the end you will obtain the string's form "as printed".

```
# longer exercise 43 - solution
```

<img style = 'float: left; margin-right: 10px; margin-bottom: 10px' src = 'Images/question.png'>

Longer exercise 44 (**): Reversing the letters in a string, but not the special characters. Say we have a strange string like `text = 'Cha-th!am=Fin--anc+ial'`, composed of letters and special characters. Write an algorithm that changes it into `'lai-cn!an=iFm--aht+ahC'`, i.e. a string where all the letters are written in reverse order, while the special characters are left in their original positions.

Hint: First create a list `text_alpha` consisting only of the letters of this string - use a list comprehension and the string method `isalpha`. Then reverse it. So here we have the reversed letters. Now iterate over the string `text`, but iterating simultaneously over its indices - use the `enumerate` function. In this iteration add a conditional statement that checks whether the current symbol is _not_ a letter - if it is not, insert it into the list `text_alpha` at the corresponding index - recall how the `insert` method works. You will thus obtain the list of all the desired characters in the right order. As the final step, write a `for` loop in which you append these successive characters to an empty string, forming the answer.

```
# longer exercise 44 - solution
```

<img style = 'float: left; margin-right: 10px; margin-bottom: 10px' src = 'Images/question.png'>

Longer exercise 45 (**): Pascal's triangle.
We can picture [Pascal's triangle](https://en.wikipedia.org/wiki/Pascal's_triangle) as the following sequence of lists, each printed on a successive line:

- the list in the $n$-th line has length $n$;
- the first and last number in every list is always 1;
- to get the number at any position (say, with index $k$) in the list in the $n$-th line, you must add the two numbers appearing in the list in the $(n - 1)$-th line at positions $k$ and $(k - 1)$.

E.g., look at the line `[1, 5, 10, 10, 5, 1]` below. The first number 10 arose from the numbers 4 and 6 in the line above.

```
[1]
[1, 1]
[1, 2, 1]
[1, 3, 3, 1]
[1, 4, 6, 4, 1]
[1, 5, 10, 10, 5, 1]
[1, 6, 15, 20, 15, 6, 1]
[1, 7, 21, 35, 35, 21, 7, 1]
[1, 8, 28, 56, 70, 56, 28, 8, 1]
[1, 9, 36, 84, 126, 126, 84, 36, 9, 1]
...
```

Write an algorithm printing, e.g., the first 20 lines of Pascal's triangle.

Hint: Write a `for` loop over the range `range(20)` (if you want to print the first 20 lines). Before the loop, define `pascal_row = [1]`; this will be the first row of Pascal's triangle. In each iteration of the `for` loop you now want to change `pascal_row` so that it takes the form of the next row, based on the previous row (recursion). You can do this using indices, in the style of:

```
pascal_row = [ pascal_row[k] + pascal_row[k - 1] for k in range(len(pascal_row)) ]
```

This is not quite right - modify the above expression so that it computes the outer 1s correctly (e.g. first try modifying `pascal_row` by adding zeros at its beginning and end; the `range` is also not entirely correct). You can also solve this task without indices - try! Let the list comprehension take the form:

```
pascal_row = [ i + j for ... ]
```

where `i` iterates over `pascal_row` extended by a zero at the beginning, and `j` over `pascal_row` extended by a zero at the end (why?). Use the `zip` function to write this iteration.
``` # longer exercise 45 - solution ``` <img style = 'float: left; margin-right: 10px; margin-bottom: 10px' src = 'Images/question.png'> Longer exercise 46 (**): Numbers divisible by their digits. Print all numbers from 1 to `n` (e.g. `n = 100`) that are divisible by all of their digits. For example, in the range up to 100 these are the following numbers: ``` [1, 2, 3, 4, 5, 6, 7, 8, 9, 11, 12, 15, 22, 24, 33, 36, 44, 48, 55, 66, 77, 88, 99] ``` e.g. 24 is divisible by 2 and 4, 36 is divisible by 3 and 6, and so on. Hint: Write a simple list comprehension over the appropriate range: ``` [ k for k in range(1, n + 1) if ... ] ``` The non-trivial part is the condition after `if` - it has to decide whether the iterator `k` is divisible by all of its digits. How to do that? First, write this condition as a list comprehension too, iterating over the digits of the number `k`. Use the string-conversion trick: ``` [ ... for d in str(k) ] ``` In this list comprehension, produce boolean values answering whether the digit `d` is non-zero and whether `k` divided by `d` leaves remainder zero. (Be careful about the `int` and `str` types!) You now have a list of booleans - and you want to reduce it to a single boolean answering whether _all_ of its elements are true - in other words, whether for _all_ digits `d` it holds that `d` is non-zero and `k` is divisible by `d` with no remainder - recall the `all` function. ``` # longer exercise 46 - solution ``` <img style = 'float: left; margin-right: 10px; margin-bottom: 10px' src = 'Images/question.png'> Longer exercise 47 (\*\*): The largest palindromic product. A "palindromic number" is one that reads the same backwards, e.g. 1551. Consider numbers that are products of two-digit numbers. It turns out that some of these numbers are palindromic, and the largest of them is $9009 = 91 \cdot 99$.
What is the largest palindromic number that is a product of two three-digit numbers (and what are those numbers)? And what about four-digit numbers? Check your answers: $906609 = 913 \cdot 993$ and $99000099 = 9901 \cdot 9999$. Hint: Write a nested list comprehension in which two iterators `a` and `b` run over the three-digit numbers (what range is that?). Note: Do both iterators really have to run over _all_ these numbers, or can you somehow restrict the range of `b` without loss of generality? Pick only those pairs whose product is palindromic (to check this, recall string conversion and reversing strings). ``` # longer exercise 47 - solution ```
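One possible sketch of the three-digit case, using the restriction from the note above:

```python
# One possible solution sketch: the largest palindromic product of two
# three-digit numbers. Restricting b >= a halves the search space
# without loss of generality, since a * b == b * a.
# The same approach works for four-digit numbers with range(1000, 10000).
palindromes = [
    (a * b, a, b)
    for a in range(100, 1000)
    for b in range(a, 1000)            # b >= a, without loss of generality
    if str(a * b) == str(a * b)[::-1]  # the product reads the same backwards
]
print(max(palindromes))  # max over tuples compares the product first
```

Because each tuple starts with the product, `max` returns the largest palindromic product together with the factors that produced it.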
### Objective of the session * datetime objects ``` # Use Python's datetime: from datetime import datetime now = datetime.now() # check the current datetime print(now) # now let us construct a datetime of our own t1 = datetime.now() t2 = datetime(1970, 1, 1) # an arbitrary reference date diff = t1 - t2 print(diff) # now let us check the type of the difference print(type(diff)) # Convert to datetime: for this we will load our dataset; another way of loading the dataset is shown below. import pandas as pd data = 'country_timeseries.csv' ebola = pd.read_csv(data) ebola.head(3) # here the first column holds the dates # but there is a question: is this really stored as a date? ebola['Day'].dtypes # we can see that the column is just stored as integers # now we take the old column and create a new column holding actual datetime objects ebola['date_dt'] = pd.to_datetime(ebola['Date']) ebola.loc[0:3, ['Date','date_dt']] # Load data with dates # If our dataset has a dates column, we can use the 'parse_dates' parameter at import time ebola = pd.read_csv(data, parse_dates=[0]) # above we parse the dates; 0 is the column number, since Date is in the first position # check whether the conversion happened ebola.dtypes # yes, this time Date comes out as datetime64 # Extracting various elements out of the date # What do we mean by that? e.g. d = pd.to_datetime('2016-02-29') print(d) # here we have created a date in a leap year print(type(d)) # pandas represents this as a Timestamp # now we can fetch its attributes: first the year attribute d.year # and the month d.month # any guess for the day?
d.day # Now let us do this on the dataframe ebola.dtypes # basically we want to get those (day, month and year) attributes in our dataset ebola['date_dt'] = pd.to_datetime(ebola['Date']) ebola['year'] = ebola['date_dt'].dt.year # dt.year lets us fetch the year out of date_dt ebola[['Date','date_dt','year']].head(3) # Now, in the same fashion, we will access the month and day - let us try it together. # Note what is used as the container on the right-hand side. ebola['month'], ebola['day'] = (ebola['date_dt'].dt.month, ebola['date_dt'].dt.day) ebola[['Date', 'date_dt', 'year', 'month', 'day']].head(2) # What do you expect the dtypes of year, month and day to be? ebola.dtypes # Implement date calculations and timedeltas ebola.iloc[-5:, 0:5] # If we apply the minimum function to our timestamps: ebola['date_dt'].min() # this is the oldest date in our dataset. How can this knowledge be handy? # let's use it in analysing the dataset. ebola['outbreak_d'] = ebola['date_dt'] - ebola['date_dt'].min() # this tells us how many days have passed since # the outbreak of ebola began. ebola[['Date', 'Day', 'outbreak_d']].tail(2) # Question - what type of object is outbreak_d? Guess first, then have a look. ```
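The timedelta idea above can be illustrated with a tiny self-contained frame (synthetic dates standing in for the ebola data, since `country_timeseries.csv` may not be at hand): subtracting the earliest date from a datetime column yields a timedelta column, and the `.dt.days` accessor extracts the day counts.

```python
import pandas as pd

# Synthetic stand-in data; the notebook itself uses the ebola dataset.
df = pd.DataFrame({'date': pd.to_datetime(['2014-03-22', '2014-03-24', '2014-03-27'])})

# Subtracting the minimum date yields a timedelta64 column.
df['outbreak_d'] = df['date'] - df['date'].min()
print(df['outbreak_d'].dtype)             # timedelta64[ns]
print(df['outbreak_d'].dt.days.tolist())  # [0, 2, 5]
```

This is the answer to the closing question: `outbreak_d` is a `timedelta64[ns]` column, not a `datetime64` one.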
``` import os from os.path import join import numpy as np import pandas as pd from scipy.stats import linregress import matplotlib.pyplot as plt %matplotlib inline from plotting_functions import cm2inch ``` # Estimates ### Load individual estimates (in sample fits) multiplicative ``` parameters = ['v', 'gamma', 's', 'tau'] estimates_list = [] estimate_files = [file for file in os.listdir(join('results', 'estimates', 'in_sample', 'GLAM')) if file.endswith('.csv')] for file in estimate_files: _, subject, _, _ = file.split('_') subject = int(subject) estimates = pd.read_csv(join('results', 'estimates', 'in_sample', 'GLAM', file), index_col=0) subject_estimates = {parameter: estimates.loc[parameter + '__0_0', 'MAP'] for parameter in parameters} subject_estimates['subject'] = subject estimates_list.append(pd.DataFrame(subject_estimates, index=np.zeros(1))) individual_estimates = pd.concat(estimates_list).sort_values('subject').reset_index(drop=True) individual_estimates['dataset'] = np.array(39 * ['krajbich2010'] + 30 * ['krajbich2011'] + 24 * ['folke2016'] + 25 * ['tavares2017']) ``` # Gamma estimates ``` individual_estimates['gamma'].describe() individual_estimates.groupby('dataset')['gamma'].describe() multiplicative_estimates = individual_estimates.copy() ``` # Figure S1 ``` import matplotlib def figure_si1(estimates, parameters=['v', 'gamma', 's', 'tau'], labels=dict(v=r'$v\ [ms^{-1}]$', gamma=r'$\gamma$', s=r'$\sigma\ [ms^{-1}]$', tau=r'$\tau$'), bins=dict(v=np.linspace(0, 0.0002, 21), gamma=np.linspace(-1.2, 1, 21), s=np.linspace(0, 0.02, 21), tau=np.linspace(0, 5, 21)), ticks=dict(v=[0, 0.0001, 0.0002], gamma=[-1, 0, 1], s=[0, 0.01, 0.02], tau=[0, 1, 2, 3, 4, 5]), limits=dict(v=[0, 0.000225], gamma=[-1.5, 1.25], s=[0, 0.0225], tau=[0, 5.5]), figsize=cm2inch(18, 20), fontsize=8): """ Figure SI 1 - Parameter estimates Plots pairwise scatterplots of parameter estimates, color-coded by data set. 
Reference: Ratcliff & Tuerlinckx, 2002; Figure 6 parameters: ----------- estimates <pandas.DataFrame> Returns: -------- matplotlib.figure, matplotlib.axes """ datasets = ['krajbich2010', 'krajbich2011', 'folke2016', 'tavares2017'] n_datasets = len(datasets) n_parameters = len(parameters) fig = plt.figure(figsize=figsize) hist_ax_00 = plt.subplot2grid((n_datasets*n_parameters, n_parameters), (0, 0)) hist_ax_10 = plt.subplot2grid((n_datasets*n_parameters, n_parameters), (1, 0)) hist_ax_20 = plt.subplot2grid((n_datasets*n_parameters, n_parameters), (2, 0)) hist_ax_30 = plt.subplot2grid((n_datasets*n_parameters, n_parameters), (3, 0)) hist_ax_01 = plt.subplot2grid((n_datasets*n_parameters, n_parameters), (4, 1)) hist_ax_11 = plt.subplot2grid((n_datasets*n_parameters, n_parameters), (5, 1)) hist_ax_21 = plt.subplot2grid((n_datasets*n_parameters, n_parameters), (6, 1)) hist_ax_31 = plt.subplot2grid((n_datasets*n_parameters, n_parameters), (7, 1)) hist_ax_02 = plt.subplot2grid((n_datasets*n_parameters, n_parameters), (8, 2)) hist_ax_12 = plt.subplot2grid((n_datasets*n_parameters, n_parameters), (9, 2)) hist_ax_22 = plt.subplot2grid((n_datasets*n_parameters, n_parameters), (10, 2)) hist_ax_32 = plt.subplot2grid((n_datasets*n_parameters, n_parameters), (11, 2)) hist_ax_03 = plt.subplot2grid((n_datasets*n_parameters, n_parameters), (12, 3)) hist_ax_13 = plt.subplot2grid((n_datasets*n_parameters, n_parameters), (13, 3)) hist_ax_23 = plt.subplot2grid((n_datasets*n_parameters, n_parameters), (14, 3)) hist_ax_33 = plt.subplot2grid((n_datasets*n_parameters, n_parameters), (15, 3)) ax01 = plt.subplot2grid((n_datasets*n_parameters, n_parameters), (4, 0), rowspan=n_datasets) ax02 = plt.subplot2grid((n_datasets*n_parameters, n_parameters), (8, 0), rowspan=n_datasets) ax03 = plt.subplot2grid((n_datasets*n_parameters, n_parameters), (12, 0), rowspan=n_datasets) ax12 = plt.subplot2grid((n_datasets*n_parameters, n_parameters), (8, 1), rowspan=n_datasets) ax13 = 
plt.subplot2grid((n_datasets*n_parameters, n_parameters), (12, 1), rowspan=n_datasets) ax23 = plt.subplot2grid((n_datasets*n_parameters, n_parameters), (12, 2), rowspan=n_datasets) hist_axs = np.array([[hist_ax_00, hist_ax_10, hist_ax_20, hist_ax_30], [hist_ax_01, hist_ax_11, hist_ax_21, hist_ax_31], [hist_ax_02, hist_ax_12, hist_ax_22, hist_ax_32], [hist_ax_03, hist_ax_13, hist_ax_23, hist_ax_33]]) scatter_axs = np.array([[np.nan, np.nan, np.nan, np.nan], [ax01, np.nan, np.nan, np.nan], [ax02, ax12, np.nan, np.nan], [ax03, ax13, ax23, np.nan]]) colors = np.array(['C{}'.format(d) for d, dataset in enumerate(datasets)]) dataset_index = pd.Categorical(estimates['dataset'], categories=datasets).codes for p, (pary, parylabel) in enumerate(zip(parameters, labels)): for q, (parx, parxlabel) in enumerate(zip(parameters, labels)): if q > p: continue elif p == q: for d, dataset in enumerate(datasets): if d == 0: hist_axs[p, d].set_title(labels[parx], fontsize=fontsize, fontweight='black') hist_axs[p, d].hist(estimates.loc[estimates['dataset'] == dataset, parx], bins=bins[pary], color=colors[d], edgecolor='white', linewidth=1) hist_axs[p, d].set_xticks(ticks[parx]) hist_axs[p, d].set_xticklabels([]) hist_axs[p, d].set_yticks([0,10]) hist_axs[p, d].set_yticklabels([0,10], fontsize=5) hist_axs[p, d].set_xlim(limits[parx]) hist_axs[p, d].set_ylim([0,10]) hist_axs[p, d].spines['top'].set_visible(False) hist_axs[p, d].spines['right'].set_visible(False) else: scatter_axs[p, q].scatter(estimates[parx], estimates[pary], marker='o', color='none', edgecolor=colors[dataset_index], linewidth=0.5, s=30) scatter_axs[p, q].scatter(estimates[parx], estimates[pary], marker='o', color=colors[dataset_index], alpha=0.5, linewidth=0, s=30) scatter_axs[p, q].set_xticks(ticks[parx]) scatter_axs[p, q].set_yticks(ticks[pary]) scatter_axs[p, q].set_xticklabels([]) scatter_axs[p, q].set_yticklabels([]) scatter_axs[p, q].set_xlim(limits[parx]) scatter_axs[p, q].set_ylim(limits[pary]) scatter_axs[p, 
q].spines['right'].set_visible(False) scatter_axs[p, q].spines['top'].set_visible(False) if (q == 0) and (p != 0): scatter_axs[p, q].set_ylabel(labels[pary], fontsize=fontsize) scatter_axs[p, q].set_yticks(ticks[pary]) scatter_axs[p, q].set_yticklabels(ticks[pary], fontsize=fontsize) if (p == (n_parameters - 1)) and (q != p): scatter_axs[p, q].set_xticklabels(ticks[parx], fontsize=fontsize) scatter_axs[p, q].set_xlabel(labels[parx], fontsize=fontsize) hist_axs[-1, -1].set_xlabel(labels[parameters[-1]], fontsize=fontsize) hist_axs[-1, -1].set_xticks(ticks[parameters[-1]]) hist_axs[-1, -1].set_xticklabels(ticks[parameters[-1]], fontsize=fontsize) hist_axs[-1, -1].set_xlim(limits[parameters[-1]]) # add panel labeling # histograms for label, ax in zip(list('acfj'), list((hist_ax_00, hist_ax_01, hist_ax_02, hist_ax_03))): ax.text(-0.25, 2.5, label, transform=ax.transAxes, fontsize=fontsize, fontweight='bold', va='top') # add frequency labeling for ax in [hist_ax_20, hist_ax_21, hist_ax_22, hist_ax_23]: ax.set_ylabel(' Frequency', fontsize=6) # scatterplots i = 0 scatter_labels = list('bdeghi') for ax in scatter_axs.ravel(): if ax is not np.nan: ax.text(-0.25, 1.1, scatter_labels[i], transform=ax.transAxes, fontsize=fontsize, fontweight='bold', va='top') ax.spines['top'].set_visible(False) ax.spines['right'].set_visible(False) # Font sizes ax.tick_params(axis='both', labelsize=fontsize) i += 1 fig.tight_layout(h_pad=0., w_pad=1) return fig figure_si1(individual_estimates); plt.savefig(join('results', 'figures', 'si_figure_1_parameter_estimates_correlations.png'), dpi=330) plt.savefig(join('results', 'figures', 'si_figure_1_parameter_estimates_correlations.pdf'), dpi=330) ``` # Correlation table ``` parameters = ['v', 'gamma', 's', 'tau'] corrs = {} corrs_table = {} ps = {} ps_table = {} for p1 in parameters: corrs_table[p1] = [] ps_table[p1] = [] for p2 in parameters: _,_,r,p,_ = linregress(multiplicative_estimates[p1].values, multiplicative_estimates[p2].values) 
corrs[(p1, p2)] = r corrs_table[p1].append(r) ps[(p1, p2)] = p ps_table[p1].append(p) r_table = pd.DataFrame(corrs_table, index=parameters) r_table ps_table = pd.DataFrame(ps_table, index=parameters) ps_table ```
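As a cross-check on the loop above: the `r` returned by `linregress` is the Pearson correlation, so the same r table can be obtained in one call with `DataFrame.corr()`. A sketch with synthetic stand-in data (the notebook itself would pass `multiplicative_estimates[parameters]`):

```python
import numpy as np
import pandas as pd

# Synthetic stand-in for multiplicative_estimates[parameters].
rng = np.random.default_rng(0)
df = pd.DataFrame(rng.normal(size=(30, 4)), columns=['v', 'gamma', 's', 'tau'])

# All pairwise Pearson correlations in one call:
# a 4x4 symmetric matrix with ones on the diagonal.
r_table = df.corr()
print(r_table.round(2))
```

`corr()` does not return the p-values, so the `linregress` loop is still needed for `ps_table`.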
<a href="https://colab.research.google.com/github/mrdbourke/tensorflow-deep-learning/blob/main/05_transfer_learning_in_tensorflow_part_2_fine_tuning.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> # 05. Transfer Learning with TensorFlow Part 2: Fine-tuning In the previous section, we saw how we could leverage feature extraction transfer learning to get far better results on our Food Vision project than building our own models (even with less data). Now we're going to cover another type of transfer learning: fine-tuning. In **fine-tuning transfer learning** the pre-trained model weights from another model are unfrozen and tweaked during training to better suit your own data. For feature extraction transfer learning, you may only train the top 1-3 layers of a pre-trained model with your own data; in fine-tuning transfer learning, you might train 1-3+ layers of a pre-trained model (where the '+' indicates that many or all of the layers could be trained). ![](https://raw.githubusercontent.com/mrdbourke/tensorflow-deep-learning/main/images/05-transfer-learning-feature-extraction-vs-fine-tuning.png) *Feature extraction transfer learning vs. fine-tuning transfer learning. The main difference between the two is that in fine-tuning, more layers of the pre-trained model get unfrozen and tuned on custom data. This fine-tuning usually takes more data than feature extraction to be effective.* ## What we're going to cover We're going to go through the following with TensorFlow: - Introduce fine-tuning, a type of transfer learning to modify a pre-trained model to be more suited to your data - Using the Keras Functional API (a different way to build models in Keras) - Using a smaller dataset to experiment faster (e.g.
1-10% of training samples of 10 classes of food) - Data augmentation (how to make your training dataset more diverse without adding more data) - Running a series of modelling experiments on our Food Vision data - Model 0: a transfer learning model using the Keras Functional API - Model 1: a feature extraction transfer learning model on 1% of the data with data augmentation - Model 2: a feature extraction transfer learning model on 10% of the data with data augmentation - Model 3: a fine-tuned transfer learning model on 10% of the data - Model 4: a fine-tuned transfer learning model on 100% of the data - Introduce the ModelCheckpoint callback to save intermediate training results - Compare model experiment results using TensorBoard ## How you can use this notebook You can read through the descriptions and the code (it should all run, except for the cells which error on purpose), but there's a better option. Write all of the code yourself. Yes. I'm serious. Create a new notebook, and rewrite each line by yourself. Investigate it, see if you can break it, why does it break? You don't have to write the text descriptions but writing the code yourself is a great way to get hands-on experience. Don't worry if you make mistakes, we all do. The way to get better and make fewer mistakes is to **write more code**. ``` # Are we using a GPU? (if not & you're using Google Colab, go to Runtime -> Change Runtime Type -> Hardware Accelerator: GPU ) !nvidia-smi ``` ## Creating helper functions Throughout your machine learning experiments, you'll likely come across snippets of code you want to use over and over again. For example, a plotting function which plots a model's `history` object (see `plot_loss_curves()` below). You could recreate these functions over and over again. But as you might've guessed, rewriting the same functions becomes tedious.
One of the solutions is to store them in a helper script such as [`helper_functions.py`](https://github.com/mrdbourke/tensorflow-deep-learning/blob/main/extras/helper_functions.py). And then import the necessary functionality when you need it. For example, you might write: ``` from helper_functions import plot_loss_curves ... plot_loss_curves(history) ``` Let's see what this looks like. ``` # Get helper_functions.py script from course GitHub !wget https://raw.githubusercontent.com/mrdbourke/tensorflow-deep-learning/main/extras/helper_functions.py # Import helper functions we're going to use from helper_functions import create_tensorboard_callback, plot_loss_curves, unzip_data, walk_through_dir ``` Wonderful, now we've got a bunch of helper functions we can use throughout the notebook without having to rewrite them from scratch each time. > 🔑 **Note:** If you're running this notebook in Google Colab, when it times out Colab will delete the `helper_functions.py` file. So to use the functions imported above, you'll have to rerun the cell. ## 10 Food Classes: Working with less data We saw in the [previous notebook](https://github.com/mrdbourke/tensorflow-deep-learning/blob/main/04_transfer_learning_in_tensorflow_part_1_feature_extraction.ipynb) that we could get great results with only 10% of the training data using transfer learning with TensorFlow Hub. In this notebook, we're going to continue to work with smaller subsets of the data, except this time we'll have a look at how we can use the in-built pretrained models within the `tf.keras.applications` module as well as how to fine-tune them to our own custom dataset. We'll also practice using a new but similar dataloader function to what we've used before, [`image_dataset_from_directory()`](https://www.tensorflow.org/api_docs/python/tf/keras/preprocessing/image_dataset_from_directory) which is part of the [`tf.keras.preprocessing`](https://www.tensorflow.org/api_docs/python/tf/keras/preprocessing) module.
Finally, we'll also be practicing using the [Keras Functional API](https://keras.io/guides/functional_api/) for building deep learning models. The Functional API is a more flexible way to create models than the tf.keras.Sequential API. We'll explore each of these in more detail as we go. Let's start by downloading some data. ``` # Get 10% of the data of the 10 classes !wget https://storage.googleapis.com/ztm_tf_course/food_vision/10_food_classes_10_percent.zip unzip_data("10_food_classes_10_percent.zip") ``` The dataset we're downloading is the 10 food classes dataset (from Food 101) with 10% of the training images we used in the previous notebook. > 🔑 **Note:** You can see how this dataset was created in the [image data modification notebook](https://github.com/mrdbourke/tensorflow-deep-learning/blob/main/extras/image_data_modification.ipynb). ``` # Walk through 10 percent data directory and list number of files walk_through_dir("10_food_classes_10_percent") ``` We can see that each of the training directories contains 75 images and each of the testing directories contains 250 images. Let's define our training and test filepaths. ``` # Create training and test directories train_dir = "10_food_classes_10_percent/train/" test_dir = "10_food_classes_10_percent/test/" ``` Now we've got some image data, we need a way of loading it into a TensorFlow compatible format. Previously, we've used the [`ImageDataGenerator`](https://www.tensorflow.org/api_docs/python/tf/keras/preprocessing/image/ImageDataGenerator) class. And while this works well and is still very commonly used, this time we're going to use the `image_dataset_from_directory` function. It works much the same way as `ImageDataGenerator`'s `flow_from_directory` method, meaning your images need to be in the following file format: ``` Example of file structure 10_food_classes_10_percent <- top level folder └───train <- training images │ └───pizza │ │ │ 1008104.jpg │ │ │ 1638227.jpg │ │ │ ...
│ └───steak │ │ 1000205.jpg │ │ 1647351.jpg │ │ ... │ └───test <- testing images │ └───pizza │ │ │ 1001116.jpg │ │ │ 1507019.jpg │ │ │ ... │ └───steak │ │ 100274.jpg │ │ 1653815.jpg │ │ ... ``` One of the main benefits of using [`tf.keras.preprocessing.image_dataset_from_directory()`](https://www.tensorflow.org/api_docs/python/tf/keras/preprocessing/image_dataset_from_directory) rather than `ImageDataGenerator` is that it creates a [`tf.data.Dataset`](https://www.tensorflow.org/api_docs/python/tf/data/Dataset) object rather than a generator. The main advantage of this is the `tf.data.Dataset` API is much more efficient (faster) than the `ImageDataGenerator` API which is paramount for larger datasets. Let's see it in action. ``` # Create data inputs import tensorflow as tf IMG_SIZE = (224, 224) # define image size train_data_10_percent = tf.keras.preprocessing.image_dataset_from_directory(directory=train_dir, image_size=IMG_SIZE, label_mode="categorical", # what type are the labels? batch_size=32) # batch_size is 32 by default, this is generally a good number test_data_10_percent = tf.keras.preprocessing.image_dataset_from_directory(directory=test_dir, image_size=IMG_SIZE, label_mode="categorical") ``` Wonderful! Looks like our dataloaders have found the correct number of images for each dataset. For now, the main parameters we're concerned about in the `image_dataset_from_directory()` function are: * `directory` - the filepath of the target directory we're loading images in from. * `image_size` - the target size of the images we're going to load in (height, width). * `batch_size` - the batch size of the images we're going to load in. For example if the `batch_size` is 32 (the default), batches of 32 images and labels at a time will be passed to the model. There are more we could play around with if we needed to [in the `tf.keras.preprocessing` documentation](https://www.tensorflow.org/api_docs/python/tf/keras/preprocessing/image_dataset_from_directory).
If we check the training data datatype we should see it as a `BatchDataset` with shapes relating to our data. ``` # Check the training data datatype train_data_10_percent ``` In the above output: * `(None, 224, 224, 3)` refers to the tensor shape of our images where `None` is the batch size, `224` is the height (and width) and `3` is the color channels (red, green, blue). * `(None, 10)` refers to the tensor shape of the labels where `None` is the batch size and `10` is the number of possible labels (the 10 different food classes). * Both image tensors and labels are of the datatype `tf.float32`. The `batch_size` is `None` due to it only being used during model training. You can think of `None` as a placeholder waiting to be filled with the `batch_size` parameter from `image_dataset_from_directory()`. Another benefit of using the `tf.data.Dataset` API is the associated methods which come with it. For example, if we want to find the name of the classes we were working with, we could use the `class_names` attribute. ``` # Check out the class names of our dataset train_data_10_percent.class_names ``` Or if we wanted to see an example batch of data, we could use the `take()` method. ``` # See an example batch of data for images, labels in train_data_10_percent.take(1): print(images, labels) ``` Notice how the image arrays come out as tensors of pixel values whereas the labels come out as one-hot encodings (e.g. `[0. 0. 0. 0. 1. 0. 0. 0. 0. 0.]` for `hamburger`). ### Model 0: Building a transfer learning model using the Keras Functional API Alright, our data is tensor-ified, let's build a model. To do so we're going to be using the [`tf.keras.applications`](https://www.tensorflow.org/api_docs/python/tf/keras/applications) module as it contains a series of already trained (on ImageNet) computer vision models as well as the Keras Functional API to construct our model. We're going to go through the following steps: 1.
Instantiate a pre-trained base model object by choosing a target model such as [`EfficientNetB0`](https://www.tensorflow.org/api_docs/python/tf/keras/applications/EfficientNetB0) from `tf.keras.applications`, setting the `include_top` parameter to `False` (we do this because we're going to create our own top, which are the output layers for the model). 2. Set the base model's `trainable` attribute to `False` to freeze all of the weights in the pre-trained model. 3. Define an input layer for our model, for example, what shape of data should our model expect? 4. [Optional] Normalize the inputs to our model if it requires it. Some computer vision models such as `ResNet50V2` require their inputs to be between 0 & 1. > 🤔 **Note:** As of writing, the `EfficientNet` models in the `tf.keras.applications` module do not require images to be normalized (pixel values between 0 and 1) on input, whereas many of the other models do. I posted [an issue to the TensorFlow GitHub](https://github.com/tensorflow/tensorflow/issues/42506) about this and they confirmed this. 5. Pass the inputs to the base model. 6. Pool the outputs of the base model into a shape compatible with the output activation layer (turn base model output tensors into same shape as label tensors). This can be done using [`tf.keras.layers.GlobalAveragePooling2D()`](https://www.tensorflow.org/api_docs/python/tf/keras/layers/GlobalAveragePooling2D) or [`tf.keras.layers.GlobalMaxPooling2D()`](https://www.tensorflow.org/api_docs/python/tf/keras/layers/GlobalMaxPool2D?hl=en) though the former is more common in practice. 7. Create an output activation layer using `tf.keras.layers.Dense()` with the appropriate activation function and number of neurons. 8. Combine the inputs and outputs layer into a model using [`tf.keras.Model()`](https://www.tensorflow.org/api_docs/python/tf/keras/Model). 9. Compile the model using the appropriate loss function and choice of optimizer. 10.
Fit the model for desired number of epochs and with necessary callbacks (in our case, we'll start off with the TensorBoard callback). Woah... that sounds like a lot. Before we get ahead of ourselves, let's see it in practice. ``` # 1. Create base model with tf.keras.applications base_model = tf.keras.applications.EfficientNetB0(include_top=False) # 2. Freeze the base model (so the pre-learned patterns remain) base_model.trainable = False # 3. Create inputs into the base model inputs = tf.keras.layers.Input(shape=(224, 224, 3), name="input_layer") # 4. If using ResNet50V2, add this to speed up convergence, remove for EfficientNet # x = tf.keras.layers.experimental.preprocessing.Rescaling(1./255)(inputs) # 5. Pass the inputs to the base_model (note: using tf.keras.applications, EfficientNet inputs don't have to be normalized) x = base_model(inputs) # Check data shape after passing it to base_model print(f"Shape after base_model: {x.shape}") # 6. Average pool the outputs of the base model (aggregate all the most important information, reduce number of computations) x = tf.keras.layers.GlobalAveragePooling2D(name="global_average_pooling_layer")(x) print(f"After GlobalAveragePooling2D(): {x.shape}") # 7. Create the output activation layer outputs = tf.keras.layers.Dense(10, activation="softmax", name="output_layer")(x) # 8. Combine the inputs with the outputs into a model model_0 = tf.keras.Model(inputs, outputs) # 9. Compile the model model_0.compile(loss='categorical_crossentropy', optimizer=tf.keras.optimizers.Adam(), metrics=["accuracy"]) # 10. Fit the model (we use less steps for validation so it's faster) history_10_percent = model_0.fit(train_data_10_percent, epochs=5, steps_per_epoch=len(train_data_10_percent), validation_data=test_data_10_percent, # Go through less of the validation data so epochs are faster (we want faster experiments!) 
validation_steps=int(0.25 * len(test_data_10_percent)), # Track our model's training logs for visualization later callbacks=[create_tensorboard_callback("transfer_learning", "10_percent_feature_extract")]) ``` Nice! After a minute or so of training our model performs incredibly well on both the training (87%+ accuracy) and test sets (~83% accuracy). This is incredible. All thanks to the power of transfer learning. It's important to note the kind of transfer learning we used here is called feature extraction transfer learning, similar to what we did with the TensorFlow Hub models. In other words, we passed our custom data to an already pre-trained model (`EfficientNetB0`), asked it "what patterns do you see?" and then put our own output layer on top to make sure the outputs were tailored to our desired number of classes. We also used the Keras Functional API to build our model rather than the Sequential API. For now, the benefits of this may not seem clear but when you start to build more sophisticated models, you'll probably want to use the Functional API. So it's important to have exposure to this way of building models. > 📖 **Resource:** To see the benefits and use cases of the Functional API versus the Sequential API, check out the [TensorFlow Functional API documentation](https://www.tensorflow.org/guide/keras/functional). Let's inspect the layers in our model, we'll start with the base. ``` # Check layers in our base model for layer_number, layer in enumerate(base_model.layers): print(layer_number, layer.name) ``` Wow, that's a lot of layers... to handcode all of those would've taken a fairly long time to do, yet we can still take advantage of them thanks to the power of transfer learning. How about a summary of the base model? ``` base_model.summary() ``` You can see how each of the different layers has a certain number of parameters.
Since we are using a pre-trained model, you can think of all of these parameters as patterns the base model has learned on another dataset. And because we set `base_model.trainable = False`, these patterns remain as they are during training (they're frozen and don't get updated). Alright, that was the base model; let's see the summary of our overall model. ``` # Check summary of model constructed with Functional API model_0.summary() ``` Our overall model has five layers but really, one of those layers (`efficientnetb0`) has 236 layers. You can see how the output shape started out as `(None, 224, 224, 3)` for the input layer (the shape of our images) but was transformed to be `(None, 10)` by the output layer (the shape of our labels), where `None` is the placeholder for the batch size. Notice too, the only trainable parameters in the model are those in the output layer. How do our model's training curves look? ``` # Check out our model's training curves plot_loss_curves(history_10_percent) ``` ## Getting a feature vector from a trained model > 🤔 **Question:** What happens with the `tf.keras.layers.GlobalAveragePooling2D()` layer? I haven't seen it before. The [`tf.keras.layers.GlobalAveragePooling2D()`](https://www.tensorflow.org/api_docs/python/tf/keras/layers/GlobalAveragePooling2D) layer transforms a 4D tensor into a 2D tensor by averaging the values across the inner-axes. The previous sentence is a bit of a mouthful, so let's see an example.
```
# Define input tensor shape (same number of dimensions as the output of efficientnetb0)
input_shape = (1, 4, 4, 3)

# Create a random tensor
tf.random.set_seed(42)
input_tensor = tf.random.normal(input_shape)
print(f"Random input tensor:\n {input_tensor}\n")

# Pass the random tensor through a global average pooling 2D layer
global_average_pooled_tensor = tf.keras.layers.GlobalAveragePooling2D()(input_tensor)
print(f"2D global average pooled random tensor:\n {global_average_pooled_tensor}\n")

# Check the shapes of the different tensors
print(f"Shape of input tensor: {input_tensor.shape}")
print(f"Shape of 2D global averaged pooled input tensor: {global_average_pooled_tensor.shape}")
```

You can see the `tf.keras.layers.GlobalAveragePooling2D()` layer condensed the input tensor from shape `(1, 4, 4, 3)` to `(1, 3)`. It did so by averaging the `input_tensor` across the middle two axes.

We can replicate this operation using the `tf.reduce_mean()` operation and specifying the appropriate axes.

```
# This is the same as GlobalAveragePooling2D()
tf.reduce_mean(input_tensor, axis=[1, 2]) # average across the middle axes
```

Doing this not only makes the output of the base model compatible with the input shape requirement of our output layer (`tf.keras.layers.Dense()`), it also condenses the information found by the base model into a lower dimension **feature vector**.

> 🔑 **Note:** One of the reasons feature extraction transfer learning is named the way it is is because a pretrained model often outputs a **feature vector** (a long tensor of numbers, in our case, the output of the [`tf.keras.layers.GlobalAveragePooling2D()`](https://www.tensorflow.org/api_docs/python/tf/keras/layers/GlobalAveragePooling2D) layer) which downstream layers can then extract patterns from.

> 🛠 **Practice:** Do the same as the above cell but for [`tf.keras.layers.GlobalMaxPool2D()`](https://www.tensorflow.org/api_docs/python/tf/keras/layers/GlobalMaxPool2D).
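For extra intuition, the same pooling operations can be reproduced in plain NumPy, independent of TensorFlow. This is just an illustrative sketch using a small deterministic tensor (it also covers the max-pool variant from the practice exercise):

```python
import numpy as np

# A small deterministic (batch, height, width, channels) tensor
input_tensor = np.arange(48, dtype=np.float32).reshape(1, 4, 4, 3)

# Global average pooling: average over the height and width axes (axes 1 and 2)
global_average_pooled = input_tensor.mean(axis=(1, 2))

# Global max pooling: maximum over the same axes
global_max_pooled = input_tensor.max(axis=(1, 2))

print(global_average_pooled.shape)  # (1, 3) - one value per channel
print(global_average_pooled)        # [[22.5 23.5 24.5]]
print(global_max_pooled)            # [[45. 46. 47.]]
```

Either way, a `(1, 4, 4, 3)` tensor collapses to a `(1, 3)` feature vector — one number per channel.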
## Running a series of transfer learning experiments We've seen the incredible results of transfer learning on 10% of the training data, what about 1% of the training data? What kind of results do you think we can get using 100x less data than the original CNN models we built ourselves? Why don't we answer that question while running the following modelling experiments: 1. `model_1`: Use feature extraction transfer learning on 1% of the training data with data augmentation. 2. `model_2`: Use feature extraction transfer learning on 10% of the training data with data augmentation. 3. `model_3`: Use fine-tuning transfer learning on 10% of the training data with data augmentation. 4. `model_4`: Use fine-tuning transfer learning on 100% of the training data with data augmentation. While all of the experiments will be run on different versions of the training data, they will all be evaluated on the same test dataset, this ensures the results of each experiment are as comparable as possible. All experiments will be done using the `EfficientNetB0` model within the `tf.keras.applications` module. To make sure we're keeping track of our experiments, we'll use our `create_tensorboard_callback()` function to log all of the model training logs. We'll construct each model using the Keras Functional API and instead of implementing data augmentation in the `ImageDataGenerator` class as we have previously, we're going to build it right into the model using the [`tf.keras.layers.experimental.preprocessing`](https://www.tensorflow.org/api_docs/python/tf/keras/layers/experimental/preprocessing) module. Let's begin by downloading the data for experiment 1, using feature extraction transfer learning on 1% of the training data with data augmentation. 
``` # Download and unzip data !wget https://storage.googleapis.com/ztm_tf_course/food_vision/10_food_classes_1_percent.zip unzip_data("10_food_classes_1_percent.zip") # Create training and test dirs train_dir_1_percent = "10_food_classes_1_percent/train/" test_dir = "10_food_classes_1_percent/test/" ``` How many images are we working with? ``` # Walk through 1 percent data directory and list number of files walk_through_dir("10_food_classes_1_percent") ``` Alright, looks like we've only got seven images of each class, this should be a bit of a challenge for our model. > 🔑 **Note:** As with the 10% of data subset, the 1% of images were chosen at random from the original full training dataset. The test images are the same as the ones which have previously been used. If you want to see how this data was preprocessed, check out the [Food Vision Image Preprocessing notebook](https://github.com/mrdbourke/tensorflow-deep-learning/blob/main/extras/image_data_modification.ipynb). Time to load our images in as `tf.data.Dataset` objects, to do so, we'll use the [`image_dataset_from_directory()`](https://www.tensorflow.org/api_docs/python/tf/keras/preprocessing/image_dataset_from_directory) method. ``` import tensorflow as tf IMG_SIZE = (224, 224) train_data_1_percent = tf.keras.preprocessing.image_dataset_from_directory(train_dir_1_percent, label_mode="categorical", batch_size=32, # default image_size=IMG_SIZE) test_data = tf.keras.preprocessing.image_dataset_from_directory(test_dir, label_mode="categorical", image_size=IMG_SIZE) ``` Data loaded. Time to augment it. ### Adding data augmentation right into the model Previously we've used the different parameters of the `ImageDataGenerator` class to augment our training images, this time we're going to build data augmentation right into the model. How? 
Using the [`tf.keras.layers.experimental.preprocessing`](https://www.tensorflow.org/api_docs/python/tf/keras/layers/experimental/preprocessing) module and creating a dedicated data augmentation layer.

This is a relatively new feature added to TensorFlow 2.2+ but it's very powerful. Adding a data augmentation layer to the model has the following benefits:

* Preprocessing of the images (augmenting them) happens on the GPU rather than on the CPU (much faster).
* Images are best preprocessed on the GPU whereas text and structured data are more suited to being preprocessed on the CPU.
* Image data augmentation only happens during training so we can still export our whole model and use it elsewhere. And if someone else wanted to train the same model as us, including the same kind of data augmentation, they could.

![](https://raw.githubusercontent.com/mrdbourke/tensorflow-deep-learning/main/images/05-data-augmentation-inside-a-model.png)

*Example of using data augmentation as the first layer within a model (EfficientNetB0).*

> 🤔 **Note:** At the time of writing, the preprocessing layers we're using for data augmentation are in *experimental* status within the TensorFlow library. This means although the layers should be considered stable, the code may change slightly in a future version of TensorFlow. For more information on the other preprocessing layers available and the different methods of data augmentation, check out the [Keras preprocessing layers guide](https://keras.io/guides/preprocessing_layers/) and the [TensorFlow data augmentation guide](https://www.tensorflow.org/tutorials/images/data_augmentation).

To use data augmentation right within our model we'll create a Keras Sequential model consisting of only data preprocessing layers, we can then use this Sequential model within another Functional model.

If that sounds confusing, it'll make sense once we create it in code.
The data augmentation transformations we're going to use are:

* [RandomFlip](https://www.tensorflow.org/api_docs/python/tf/keras/layers/experimental/preprocessing/RandomFlip) - flips image on horizontal or vertical axis.
* [RandomRotation](https://www.tensorflow.org/api_docs/python/tf/keras/layers/experimental/preprocessing/RandomRotation) - randomly rotates image by a specified amount.
* [RandomZoom](https://www.tensorflow.org/api_docs/python/tf/keras/layers/experimental/preprocessing/RandomZoom) - randomly zooms into an image by specified amount.
* [RandomHeight](https://www.tensorflow.org/api_docs/python/tf/keras/layers/experimental/preprocessing/RandomHeight) - randomly shifts image height by a specified amount.
* [RandomWidth](https://www.tensorflow.org/api_docs/python/tf/keras/layers/experimental/preprocessing/RandomWidth) - randomly shifts image width by a specified amount.
* [Rescaling](https://www.tensorflow.org/api_docs/python/tf/keras/layers/experimental/preprocessing/Rescaling) - normalizes the image pixel values to be between 0 and 1, this is worth mentioning because it is required for some image models but since we're using the `tf.keras.applications` implementation of `EfficientNetB0`, it's not required.

There are more options but these will do for now.

```
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
from tensorflow.keras.layers.experimental import preprocessing

# Create a data augmentation stage with horizontal flipping, rotations, zooms
data_augmentation = keras.Sequential([
  preprocessing.RandomFlip("horizontal"),
  preprocessing.RandomRotation(0.2),
  preprocessing.RandomZoom(0.2),
  preprocessing.RandomHeight(0.2),
  preprocessing.RandomWidth(0.2),
  # preprocessing.Rescaling(1./255) # keep for ResNet50V2, remove for EfficientNetB0
], name="data_augmentation")
```

And that's it! Our data augmentation Sequential model is ready to go.
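To build some intuition for what these layers do under the hood, consider that a horizontal flip is just a reversal along the width axis. Here's a minimal NumPy sketch (purely illustrative, not the Keras implementation — `RandomFlip` additionally decides at random, per image, whether to apply the flip):

```python
import numpy as np

# A tiny 2x2 "image" with 3 channels, shaped (height, width, channels)
img = np.arange(12).reshape(2, 2, 3)

# Horizontal flip = reverse the width axis (axis 1)
flipped = np.flip(img, axis=1)

print(img[0, 0])      # top-left pixel: [0 1 2]
print(flipped[0, 0])  # after flipping, the old top-right pixel: [3 4 5]
```

Flipping twice recovers the original image, which is part of why augmentations like this leave the underlying labels untouched.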
As you'll see shortly, we'll be able to slot this "model" as a layer into our transfer learning model later on.

But before we do that, let's test it out by passing random images through it.

```
# View a random image
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
import os
import random
target_class = random.choice(train_data_1_percent.class_names) # choose a random class
target_dir = "10_food_classes_1_percent/train/" + target_class # create the target directory
random_image = random.choice(os.listdir(target_dir)) # choose a random image from target directory
random_image_path = target_dir + "/" + random_image # create the chosen random image path
img = mpimg.imread(random_image_path) # read in the chosen target image
plt.imshow(img) # plot the target image
plt.title(f"Original random image from class: {target_class}")
plt.axis(False); # turn off the axes

# Augment the image
augmented_img = data_augmentation(tf.expand_dims(img, axis=0)) # data augmentation model requires shape (None, height, width, 3)
plt.figure()
plt.imshow(tf.squeeze(augmented_img)/255.) # requires normalization after augmentation
plt.title(f"Augmented random image from class: {target_class}")
plt.axis(False);
```

Run the cell above a few times and you can see the different random augmentations on different classes of images.

Because we're going to add the data augmentation model as a layer in our upcoming transfer learning model, it'll apply these kinds of random augmentations to each of the training images which pass through it.

Doing this will make our training dataset a little more varied. You can think of it as if you were taking a photo of food in real life, not all of the images are going to be perfect, some of them are going to be orientated in strange ways. These are the kinds of images we want our model to be able to handle.

Speaking of models, let's build one with the Functional API.
We'll run through all of the same steps as before except for one difference, we'll add our data augmentation Sequential model as a layer immediately after the input layer.

## Model 1: Feature extraction transfer learning on 1% of the data with data augmentation

```
# Setup input shape and base model, freezing the base model layers
input_shape = (224, 224, 3)
base_model = tf.keras.applications.EfficientNetB0(include_top=False)
base_model.trainable = False

# Create input layer
inputs = layers.Input(shape=input_shape, name="input_layer")

# Add in data augmentation Sequential model as a layer
x = data_augmentation(inputs)

# Give base_model inputs (after augmentation) and don't train it
x = base_model(x, training=False)

# Pool output features of base model
x = layers.GlobalAveragePooling2D(name="global_average_pooling_layer")(x)

# Put a dense layer on as the output
outputs = layers.Dense(10, activation="softmax", name="output_layer")(x)

# Make a model with inputs and outputs
model_1 = keras.Model(inputs, outputs)

# Compile the model
model_1.compile(loss="categorical_crossentropy",
                optimizer=tf.keras.optimizers.Adam(),
                metrics=["accuracy"])

# Fit the model
history_1_percent = model_1.fit(train_data_1_percent,
                                epochs=5,
                                steps_per_epoch=len(train_data_1_percent),
                                validation_data=test_data,
                                validation_steps=int(0.25 * len(test_data)), # validate for fewer steps
                                # Track model training logs
                                callbacks=[create_tensorboard_callback("transfer_learning", "1_percent_data_aug")])
```

Wow! How cool is that? Using only 7 training images per class and transfer learning, our model was able to get ~40% accuracy on the validation set.

This result is pretty amazing since the [original Food-101 paper](https://data.vision.ee.ethz.ch/cvl/datasets_extra/food-101/static/bossard_eccv14_food-101.pdf) achieved 50.67% accuracy with all the data, namely, 750 training images per class (**note:** this metric was across 101 classes, not 10, we'll get to 101 classes soon).
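A quick back-of-the-envelope check of that comparison (750 images per class is the Food-101 training figure quoted above; 7 per class is our 1% subset):

```python
# How much of the original per-class training data are we actually using?
full_images_per_class = 750   # Food-101 training images per class
subset_images_per_class = 7   # our 1% subset

fraction_of_data = subset_images_per_class / full_images_per_class
print(f"{fraction_of_data:.2%} of the per-class training data")  # ~0.93%
```

In other words, roughly 100x less data than the full-data baseline, which makes the ~40% validation accuracy all the more impressive.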
If we check out a summary of our model, we should see the data augmentation layer just after the input layer.

```
# Check out model summary
model_1.summary()
```

There it is. We've now got data augmentation built right into our model. This means if we saved it and reloaded it somewhere else, the data augmentation layers would come with it.

The important thing to remember is **data augmentation only runs during training**. So if we were to evaluate or use our model for inference (predicting the class of an image) the data augmentation layers will be automatically turned off.

To see this in action, let's evaluate our model on the test data.

```
# Evaluate on the test data
results_1_percent_data_aug = model_1.evaluate(test_data)
results_1_percent_data_aug
```

The results here may be slightly better/worse than the log outputs of our model during training because during training we only evaluate our model on 25% of the test data using the line `validation_steps=int(0.25 * len(test_data))`. Doing this speeds up our epochs but still gives us enough of an idea of how our model is going.

Let's stay consistent and check out our model's loss curves.

```
# How does the model go with a data augmentation layer with 1% of data
plot_loss_curves(history_1_percent)
```

It looks like the metrics on both datasets would improve if we kept training for more epochs. But we'll leave that for now, we've got more experiments to do!

## Model 2: Feature extraction transfer learning with 10% of data and data augmentation

Alright, we've tested 1% of the training data with data augmentation, how about we try 10% of the data with data augmentation?

But wait...

> 🤔 **Question:** How do you know what experiments to run?

Great question.

The truth here is you often won't. Machine learning is still a very experimental practice. It's only after trying a fair few things that you'll start to develop an intuition of what to try.

My advice is to follow your curiosity as tenaciously as possible.
If you feel like you want to try something, write the code for it and run it. See how it goes. The worst thing that'll happen is you'll figure out what doesn't work, the most valuable kind of knowledge.

From a practical standpoint, as we've talked about before, you'll want to reduce the amount of time between your initial experiments as much as possible. In other words, run a plethora of smaller experiments, using less data and fewer training iterations, before you find something promising and then scale it up.

In the theme of scale, let's scale our 1% training data augmentation experiment up to 10% training data augmentation. That sentence doesn't really make sense but you get what I mean.

We're going to run through the exact same steps as the previous model, the only difference being using 10% of the training data instead of 1%.

```
# Get 10% of the data of the 10 classes (uncomment if you haven't gotten "10_food_classes_10_percent.zip" already)
# !wget https://storage.googleapis.com/ztm_tf_course/food_vision/10_food_classes_10_percent.zip
# unzip_data("10_food_classes_10_percent.zip")

train_dir_10_percent = "10_food_classes_10_percent/train/"
test_dir = "10_food_classes_10_percent/test/"
```

Data downloaded. Let's create the dataloaders.

```
# Setup data inputs
import tensorflow as tf
IMG_SIZE = (224, 224)
train_data_10_percent = tf.keras.preprocessing.image_dataset_from_directory(train_dir_10_percent,
                                                                            label_mode="categorical",
                                                                            image_size=IMG_SIZE)
# Note: the test data is the same as the previous experiment, we could
# skip creating this, but we'll leave this here to practice.
test_data = tf.keras.preprocessing.image_dataset_from_directory(test_dir,
                                                                label_mode="categorical",
                                                                image_size=IMG_SIZE)
```

Awesome! We've got 10x more images to work with, 75 per class instead of 7 per class.

Let's build a model with data augmentation built in. We could reuse the data augmentation Sequential model we created before but we'll recreate it to practice.
``` # Create a functional model with data augmentation import tensorflow as tf from tensorflow.keras import layers from tensorflow.keras.layers.experimental import preprocessing from tensorflow.keras.models import Sequential # Build data augmentation layer data_augmentation = Sequential([ preprocessing.RandomFlip('horizontal'), preprocessing.RandomHeight(0.2), preprocessing.RandomWidth(0.2), preprocessing.RandomZoom(0.2), preprocessing.RandomRotation(0.2), # preprocessing.Rescaling(1./255) # keep for ResNet50V2, remove for EfficientNet ], name="data_augmentation") # Setup the input shape to our model input_shape = (224, 224, 3) # Create a frozen base model base_model = tf.keras.applications.EfficientNetB0(include_top=False) base_model.trainable = False # Create input and output layers inputs = layers.Input(shape=input_shape, name="input_layer") # create input layer x = data_augmentation(inputs) # augment our training images x = base_model(x, training=False) # pass augmented images to base model but keep it in inference mode, so batchnorm layers don't get updated: https://keras.io/guides/transfer_learning/#build-a-model x = layers.GlobalAveragePooling2D(name="global_average_pooling_layer")(x) outputs = layers.Dense(10, activation="softmax", name="output_layer")(x) model_2 = tf.keras.Model(inputs, outputs) # Compile model_2.compile(loss="categorical_crossentropy", optimizer=tf.keras.optimizers.Adam(lr=0.001), # use Adam optimizer with base learning rate metrics=["accuracy"]) ``` ### Creating a ModelCheckpoint callback Our model is compiled and ready to be fit, so why haven't we fit it yet? Well, for this experiment we're going to introduce a new callback, the `ModelCheckpoint` callback. 
The [`ModelCheckpoint`](https://www.tensorflow.org/api_docs/python/tf/keras/callbacks/ModelCheckpoint) callback gives you the ability to save your model, either as a whole in the [`SavedModel`](https://www.tensorflow.org/tutorials/keras/save_and_load#save_the_entire_model) format or the [weights (patterns) only](https://www.tensorflow.org/tutorials/keras/save_and_load#manually_save_weights), to a specified directory as it trains.

This is helpful if you think your model is going to be training for a long time and you want to make backups of it as it trains. It also means if you think your model could benefit from being trained for longer, you can reload it from a specific checkpoint and continue training from there.

For example, say you fit a feature extraction transfer learning model for 5 epochs and you check the training curves and see it was still improving and you want to see if fine-tuning for another 5 epochs could help, you can load the checkpoint, unfreeze some (or all) of the base model layers and then continue training.

In fact, that's exactly what we're going to do.

But first, let's create a `ModelCheckpoint` callback. To do so, we have to specify a directory we'd like to save to.

```
# Setup checkpoint path
checkpoint_path = "ten_percent_model_checkpoints_weights/checkpoint.ckpt" # note: remember saving directly to Colab is temporary

# Create a ModelCheckpoint callback that saves the model's weights only
checkpoint_callback = tf.keras.callbacks.ModelCheckpoint(filepath=checkpoint_path,
                                                         save_weights_only=True, # set to False to save the entire model
                                                         save_best_only=False, # set to True to save only the best model instead of a model every epoch
                                                         save_freq="epoch", # save every epoch
                                                         verbose=1)
```

> 🤔 **Question:** What's the difference between saving the entire model (SavedModel format) and saving the weights only?
The [`SavedModel`](https://www.tensorflow.org/tutorials/keras/save_and_load#save_the_entire_model) format saves a model's architecture, weights and training configuration all in one folder. It makes it very easy to reload your model exactly how it is elsewhere. However, if you do not want to share all of these details with others, you may want to save and share the weights only (these will just be large tensors of non-human interpretable numbers). If disk space is an issue, saving the weights only is faster and takes up less space than saving the whole model. Time to fit the model. Because we're going to be fine-tuning it later, we'll create a variable `initial_epochs` and set it to 5 to use later. We'll also add in our `checkpoint_callback` in our list of `callbacks`. ``` # Fit the model saving checkpoints every epoch initial_epochs = 5 history_10_percent_data_aug = model_2.fit(train_data_10_percent, epochs=initial_epochs, validation_data=test_data, validation_steps=int(0.25 * len(test_data)), # do less steps per validation (quicker) callbacks=[create_tensorboard_callback("transfer_learning", "10_percent_data_aug"), checkpoint_callback]) ``` Would you look at that! Looks like our `ModelCheckpoint` callback worked and our model saved its weights every epoch without too much overhead (saving the whole model takes longer than just the weights). Let's evaluate our model and check its loss curves. ``` # Evaluate on the test data results_10_percent_data_aug = model_2.evaluate(test_data) results_10_percent_data_aug # Plot model loss curves plot_loss_curves(history_10_percent_data_aug) ``` Looking at these, our model's performance with 10% of the data and data augmentation isn't as good as the model with 10% of the data without data augmentation (see `model_0` results above), however the curves are trending in the right direction, meaning if we decided to train for longer, its metrics would likely improve. Since we checkpointed (is that a word?) 
our model's weights, we might as well see what it's like to load it back in. We'll be able to test if it saved correctly by evaluating it on the test data.

To load saved model weights you can use the [`load_weights()`](https://www.tensorflow.org/tutorials/keras/save_and_load#checkpoint_callback_options) method, passing it the path where your saved weights are stored.

```
# Load in saved model weights and evaluate model
model_2.load_weights(checkpoint_path)
loaded_weights_model_results = model_2.evaluate(test_data)
```

Now let's compare the results of our previously trained model and the loaded model. These results should be very close if not exactly the same. The reason for minor differences comes down to the precision level of numbers calculated.

```
# If the results from our native model and the loaded weights are the same, this should output True
results_10_percent_data_aug == loaded_weights_model_results
```

If the above cell doesn't output `True`, it's because the numbers are close but not the *exact* same (due to how computers store numbers with degrees of precision).

However, they should be *very* close...

```
import numpy as np

# Check to see if loaded model results are very close to native model results (should output True)
np.isclose(np.array(results_10_percent_data_aug), np.array(loaded_weights_model_results))

# Check the difference between the two results
print(np.array(results_10_percent_data_aug) - np.array(loaded_weights_model_results))
```

## Model 3: Fine-tuning an existing model on 10% of the data

![](https://raw.githubusercontent.com/mrdbourke/tensorflow-deep-learning/main/images/05-fine-tuning-an-efficientnet-model.png)

*High-level example of fine-tuning an EfficientNet model.
Bottom layers (layers closer to the input data) stay frozen whereas top layers (layers closer to the output data) are updated during training.*

So far our saved model has been trained using feature extraction transfer learning for 5 epochs on 10% of the training data and data augmentation. This means all of the layers in the base model (EfficientNetB0) were frozen during training.

For our next experiment we're going to switch to fine-tuning transfer learning. This means we'll be using the same base model except we'll be unfreezing some of its layers (ones closest to the top) and running the model for a few more epochs.

The idea with fine-tuning is to start customizing the pre-trained model more to our own data.

> 🔑 **Note:** Fine-tuning usually works best *after* training a feature extraction model for a few epochs and with large amounts of data. For more on this, check out [Keras' guide on Transfer learning & fine-tuning](https://keras.io/guides/transfer_learning/).

We've verified our loaded model's performance, let's check out its layers.

```
# Layers in loaded model
model_2.layers

for layer in model_2.layers:
  print(layer.trainable)
```

Looking good. We've got an input layer, a Sequential layer (the data augmentation model), a Functional layer (EfficientNetB0), a pooling layer and a Dense layer (the output layer).

How about a summary?

```
model_2.summary()
```

Alright, it looks like all of the layers in the `efficientnetb0` layer are frozen. We can confirm this using the `trainable_variables` attribute.

```
# How many layers are trainable in our base model?
print(len(model_2.layers[2].trainable_variables)) # layer at index 2 is the EfficientNetB0 layer (the base model)
```

This is the same as our base model.

```
print(len(base_model.trainable_variables))
```

We can even check layer by layer to see if they're trainable.
```
# Check which layers are tuneable (trainable)
for layer_number, layer in enumerate(base_model.layers):
  print(layer_number, layer.name, layer.trainable)
```

Beautiful. This is exactly what we're after.

Now to fine-tune the base model to our own data, we're going to unfreeze the top 10 layers and continue training our model for another 5 epochs.

This means all of the base model's layers except for the last 10 will remain frozen and untrainable. And the weights in the remaining unfrozen layers will be updated during training.

Ideally, we should see the model's performance improve.

> 🤔 **Question:** How many layers should you unfreeze when training?

There's no set rule for this. You could unfreeze every layer in the pretrained model or you could try unfreezing one layer at a time. Best to experiment with different amounts of unfreezing and fine-tuning to see what happens. Generally, the less data you have, the fewer layers you want to unfreeze and the more gradually you want to fine-tune.

> 📖 **Resource:** The [ULMFiT (Universal Language Model Fine-tuning for Text Classification) paper](https://arxiv.org/abs/1801.06146) has a great series of experiments on fine-tuning models.

To begin fine-tuning, we'll unfreeze the entire base model by setting its `trainable` attribute to `True`. Then we'll refreeze every layer in the base model except for the last 10 by looping through them and setting their `trainable` attribute to `False`. Finally, we'll recompile the model.

```
base_model.trainable = True

# Freeze all layers except for the last 10
for layer in base_model.layers[:-10]:
  layer.trainable = False

# Recompile the model (always recompile after any adjustments to a model)
model_2.compile(loss="categorical_crossentropy",
                optimizer=tf.keras.optimizers.Adam(lr=0.0001), # lr is 10x lower than before for fine-tuning
                metrics=["accuracy"])
```

Wonderful, now let's check which layers of the pretrained model are trainable.
```
# Check which layers are tuneable (trainable)
for layer_number, layer in enumerate(base_model.layers):
  print(layer_number, layer.name, layer.trainable)
```

Nice! It seems all layers except for the last 10 are frozen and untrainable. This means only the last 10 layers of the base model along with the output layer will have their weights updated during training.

> 🤔 **Question:** Why did we recompile the model?

Every time you make a change to your models, you need to recompile them.

In our case, we're using the exact same loss, optimizer and metrics as before, except this time the learning rate for our optimizer will be 10x smaller than before (0.0001 instead of Adam's default of 0.001).

We do this so the model doesn't try to overwrite the existing weights in the pretrained model too fast. In other words, we want learning to be more gradual.

> 🔑 **Note:** There's no set standard for setting the learning rate during fine-tuning, though reductions of [2.6x-10x+ seem to work well in practice](https://arxiv.org/abs/1801.06146).

How many trainable variables do we have now?

```
print(len(model_2.trainable_variables))
```

Wonderful, it looks like our model's trainable variables now come from the last 10 layers of the base model along with the weight and bias parameters of the Dense output layer.

Time to fine-tune!

We're going to continue training from where our previous model finished. Since it trained for 5 epochs, our fine-tuning will begin at epoch 5 and continue for another 5 epochs.

To do this, we can use the `initial_epoch` parameter of the [`fit()`](https://keras.rstudio.com/reference/fit.html) method. We'll pass it the last epoch of the previous model's training history (`history_10_percent_data_aug.epoch[-1]`).
```
# Fine tune for another 5 epochs
fine_tune_epochs = initial_epochs + 5

# Refit the model (same as model_2 except with more trainable layers)
history_fine_10_percent_data_aug = model_2.fit(train_data_10_percent,
                                               epochs=fine_tune_epochs,
                                               validation_data=test_data,
                                               initial_epoch=history_10_percent_data_aug.epoch[-1], # start from previous last epoch
                                               validation_steps=int(0.25 * len(test_data)),
                                               callbacks=[create_tensorboard_callback("transfer_learning", "10_percent_fine_tune_last_10")]) # name experiment appropriately
```

> 🔑 **Note:** Fine-tuning usually takes far longer per epoch than feature extraction (due to updating more weights throughout a network).

Ho ho, looks like our model has gained a few percentage points of accuracy! Let's evaluate it.

```
# Evaluate the model on the test data
results_fine_tune_10_percent = model_2.evaluate(test_data)
```

Remember, the results from evaluating the model might be slightly different to the outputs from training since during training we only evaluate on 25% of the test data.

Alright, we need a way to evaluate our model's performance before and after fine-tuning. How about we write a function to compare the before and after?

```
def compare_historys(original_history, new_history, initial_epochs=5):
    """
    Compares two model history objects.
    """
    # Get original history measurements
    acc = original_history.history["accuracy"]
    loss = original_history.history["loss"]
    val_acc = original_history.history["val_accuracy"]
    val_loss = original_history.history["val_loss"]

    # Combine original history with new history
    total_acc = acc + new_history.history["accuracy"]
    total_loss = loss + new_history.history["loss"]
    total_val_acc = val_acc + new_history.history["val_accuracy"]
    total_val_loss = val_loss + new_history.history["val_loss"]

    # Make plots
    plt.figure(figsize=(8, 8))
    plt.subplot(2, 1, 1)
    plt.plot(total_acc, label='Training Accuracy')
    plt.plot(total_val_acc, label='Validation Accuracy')
    plt.plot([initial_epochs-1, initial_epochs-1],
              plt.ylim(), label='Start Fine Tuning') # reshift plot around epochs
    plt.legend(loc='lower right')
    plt.title('Training and Validation Accuracy')

    plt.subplot(2, 1, 2)
    plt.plot(total_loss, label='Training Loss')
    plt.plot(total_val_loss, label='Validation Loss')
    plt.plot([initial_epochs-1, initial_epochs-1],
              plt.ylim(), label='Start Fine Tuning') # reshift plot around epochs
    plt.legend(loc='upper right')
    plt.title('Training and Validation Loss')
    plt.xlabel('epoch')
    plt.show()
```

This is where saving the history variables of our model training comes in handy. Let's see what happened after fine-tuning the last 10 layers of our model.

```
compare_historys(original_history=history_10_percent_data_aug,
                 new_history=history_fine_10_percent_data_aug,
                 initial_epochs=5)
```

Alright, alright, seems like the curves are heading in the right direction after fine-tuning. But remember, fine-tuning usually works best with larger amounts of data.

## Model 4: Fine-tuning an existing model on all of the data

Enough talk about how fine-tuning a model usually works with more data, let's try it out.

We'll start by downloading the full version of our 10 food classes dataset.
```
# Download and unzip 10 classes of data with all images
!wget https://storage.googleapis.com/ztm_tf_course/food_vision/10_food_classes_all_data.zip
unzip_data("10_food_classes_all_data.zip")

# Setup data directories
train_dir = "10_food_classes_all_data/train/"
test_dir = "10_food_classes_all_data/test/"

# How many images are we working with now?
walk_through_dir("10_food_classes_all_data")
```

And now we'll turn the images into tensor datasets.

```
# Setup data inputs
import tensorflow as tf
IMG_SIZE = (224, 224)
train_data_10_classes_full = tf.keras.preprocessing.image_dataset_from_directory(train_dir,
                                                                                 label_mode="categorical",
                                                                                 image_size=IMG_SIZE)

# Note: this is the same test dataset we've been using for the previous modelling experiments
test_data = tf.keras.preprocessing.image_dataset_from_directory(test_dir,
                                                                label_mode="categorical",
                                                                image_size=IMG_SIZE)
```

Oh this is looking good. We've got 10x more images in each of the training classes to work with.

The **test dataset is the same** one we've been using for our previous experiments.

As it is now, our `model_2` has been fine-tuned on 10 percent of the data, so to begin fine-tuning on all of the data and keep our experiments consistent, we need to revert it back to the weights we checkpointed after 5 epochs of feature-extraction.

To demonstrate this, we'll first evaluate the current `model_2`.

```
# Evaluate model (this is the fine-tuned 10 percent of data version)
model_2.evaluate(test_data)
```

These are the same values as `results_fine_tune_10_percent`.

```
results_fine_tune_10_percent
```

Now we'll revert the model back to the saved weights.

```
# Load model from checkpoint, that way we can fine-tune from the same stage the 10 percent data model was fine-tuned from
model_2.load_weights(checkpoint_path) # revert model back to saved weights
```

And the results should be the same as `results_10_percent_data_aug`.
``` # After loading the weights, this should have gone down (no fine-tuning) model_2.evaluate(test_data) # Check to see if the above two results are the same (they should be) results_10_percent_data_aug ``` Alright, the previous steps might seem quite confusing but all we've done is: 1. Trained a feature extraction transfer learning model for 5 epochs on 10% of the data (with all base model layers frozen) and saved the model's weights using `ModelCheckpoint`. 2. Fine-tuned the same model on the same 10% of the data for a further 5 epochs with the top 10 layers of the base model unfrozen. 3. Saved the results and training logs each time. 4. Reloaded the model from 1 to do the same steps as 2 but with all of the data. The same steps as 2? Yeah, we're going to fine-tune the last 10 layers of the base model with the full dataset for another 5 epochs but first let's remind ourselves which layers are trainable. ``` # Check which layers are tuneable in the whole model for layer_number, layer in enumerate(model_2.layers): print(layer_number, layer.name, layer.trainable) ``` Can we get a little more specific? ``` # Check which layers are tuneable in the base model for layer_number, layer in enumerate(base_model.layers): print(layer_number, layer.name, layer.trainable) ``` Looking good! The last 10 layers are trainable (unfrozen). We've got one more step to do before we can begin fine-tuning. Do you remember what it is? I'll give you a hint. We just reloaded the weights to our model and what do we need to do every time we make a change to our models? Recompile them! This will be just as before. ``` # Compile model_2.compile(loss="categorical_crossentropy", optimizer=tf.keras.optimizers.Adam(lr=0.0001), # divide learning rate by 10 for fine-tuning metrics=["accuracy"]) ``` Alright, time to fine-tune on all of the data! 
```
# Continue to train and fine-tune the model to our data
fine_tune_epochs = initial_epochs + 5

history_fine_10_classes_full = model_2.fit(train_data_10_classes_full,
                                           epochs=fine_tune_epochs,
                                           initial_epoch=history_10_percent_data_aug.epoch[-1],
                                           validation_data=test_data,
                                           validation_steps=int(0.25 * len(test_data)),
                                           callbacks=[create_tensorboard_callback("transfer_learning", "full_10_classes_fine_tune_last_10")])
```

> 🔑 **Note:** Training took longer per epoch, but that makes sense because we're using 10x more training data than before.

Let's evaluate on all of the test data.

```
results_fine_tune_full_data = model_2.evaluate(test_data)
results_fine_tune_full_data
```

Nice! It looks like fine-tuning with all of the data has given our model a boost. How do the training curves look?

```
# How did fine-tuning go with more data?
compare_historys(original_history=history_10_percent_data_aug,
                 new_history=history_fine_10_classes_full,
                 initial_epochs=5)
```

Looks like that extra data helped! Those curves are looking great. And if we trained for longer, they might even keep improving.

## Viewing our experiment data on TensorBoard

Right now our experimental results are scattered all throughout our notebook. If we want to share them with someone, they'd be getting a bunch of different graphs and metrics... not a fun time.

But guess what?

Thanks to the TensorBoard callback we made with our helper function `create_tensorboard_callback()`, we've been tracking our modelling experiments the whole time. How about we upload them to TensorBoard.dev and check them out?

We can do so with the `tensorboard dev upload` command, passing it the directory where our experiments have been logged.

> 🔑 **Note:** Remember, whatever you upload to TensorBoard.dev becomes public. If there are training logs you don't want to share, don't upload them.
```
# View tensorboard logs of transfer learning modelling experiments (should be 4 models)
# Upload TensorBoard dev records
!tensorboard dev upload --logdir ./transfer_learning \
  --name "Transfer learning experiments" \
  --description "A series of different transfer learning experiments with varying amounts of data and fine-tuning" \
  --one_shot # exits the uploader when upload has finished
```

Once we've uploaded the results to TensorBoard.dev we get a shareable link we can use to view and compare our experiments and share our results with others if needed.

You can view the original versions of the experiments we ran in this notebook here: https://tensorboard.dev/experiment/2O76kw3PQbKl0lByfg5B4w/

> 🤔 **Question:** Which model performed the best? Why do you think this is? How did fine-tuning go?

To find all of your previous TensorBoard.dev experiments, use the command `tensorboard dev list`.

```
# View previous experiments
!tensorboard dev list
```

And if you want to remove a previous experiment (and delete it from public viewing) you can use the command `tensorboard dev delete --experiment_id [INSERT_EXPERIMENT_ID_TO_DELETE]`.

```
# Remove previous experiments
# !tensorboard dev delete --experiment_id OUbW0O3pRqqQgAphVBxi8Q
```

## 🛠 Exercises

1. Write a function that takes an image from any dataset (train or test) and any class (e.g. "steak", "pizza", etc.), visualizes it and makes a prediction on it using a trained model.
2. Use feature-extraction to train a transfer learning model on 10% of the Food Vision data for 10 epochs using [`tf.keras.applications.EfficientNetB0`](https://www.tensorflow.org/api_docs/python/tf/keras/applications/EfficientNetB0) as the base model. Use the [`ModelCheckpoint`](https://www.tensorflow.org/api_docs/python/tf/keras/callbacks/ModelCheckpoint) callback to save the weights to file.
3. Fine-tune the last 20 layers of the base model you trained in 2 for another 10 epochs. How did it go?
4.
Fine-tune the last 30 layers of the base model you trained in 2 for another 10 epochs. How did it go? ## 📖 Extra-curriculum * Read the [documentation on data augmentation](https://www.tensorflow.org/tutorials/images/data_augmentation) in TensorFlow. * Read the [ULMFit paper](https://arxiv.org/abs/1801.06146) (technical) for an introduction to the concept of freezing and unfreezing different layers. * Read up on learning rate scheduling (there's a [TensorFlow callback](https://www.tensorflow.org/api_docs/python/tf/keras/callbacks/LearningRateScheduler) for this), how could this influence our model training? * If you're training for longer, you probably want to reduce the learning rate as you go... the closer you get to the bottom of the hill, the smaller steps you want to take. Imagine it like finding a coin at the bottom of your couch. In the beginning your arm movements are going to be large and the closer you get, the smaller your movements become.
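To make the learning rate scheduling idea above concrete, here's a minimal sketch of a schedule function you could hand to the `LearningRateScheduler` callback (the 5-epoch hold and ~4% per-epoch decay are arbitrary example values, not from this notebook):

```python
import math

def lr_schedule(epoch, lr):
    """Keep the learning rate steady for the first 5 epochs, then decay ~4% per epoch."""
    return lr if epoch < 5 else lr * math.exp(-0.04)

# Wiring it into training would look something like (not run here):
# model_2.fit(..., callbacks=[tf.keras.callbacks.LearningRateScheduler(lr_schedule)])
```

Keras calls the schedule at the start of every epoch with the epoch index and the current learning rate, and uses whatever value it returns — so the smaller-steps-near-the-couch-coin idea is just a function of `epoch`.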
``` # This notebook demonstrates using bankroll to load positions across brokers # and highlights some basic portfolio rebalancing opportunities based on a set of desired allocations. # # The default portfolio allocation is described (with comments) in notebooks/Rebalance.example.ini. # Copy this to Rebalance.ini in the top level folder, then edit accordingly, to provide your own # desired allocation. %cd .. import pandas as pd from bankroll.interface import * from configparser import ConfigParser from decimal import Decimal from functools import reduce from ib_insync import IB, util from itertools import * from math import * import logging import operator import re util.startLoop() accounts = AccountAggregator.fromSettings(AccountAggregator.allSettings(loadConfig()), lenient=False) stockPositions = [p for p in accounts.positions() if isinstance(p.instrument, Stock)] stockPositions.sort(key=lambda p: p.instrument) values = liveValuesForPositions(stockPositions, marketDataProvider(accounts)) config = ConfigParser(interpolation=None) try: config.read_file(open('Rebalance.ini')) except OSError: config.read_file(open('notebooks/Rebalance.example.ini')) def parsePercentage(str): match = re.match(r'([0-9\.]+)%', str) if match: return float(match[1]) / 100 else: return float(str) settings = config['Settings'] ignoredSecurities = {s.strip() for s in settings['ignored securities'].split(',')} maximumDeviation = parsePercentage(settings['maximum deviation']) baseCurrency = Currency[settings['base currency']] categoryAllocations = {category: parsePercentage(allocation) for category, allocation in config['Portfolio'].items()} totalAllocation = sum(categoryAllocations.values()) assert abs(totalAllocation - 1) < 0.0001, f'Category allocations do not total 100%, got {totalAllocation:.2%}' securityAllocations = {} for category, categoryAllocation in categoryAllocations.items(): securities = {security.upper(): parsePercentage(allocation) for security, allocation in 
config[category].items()}
    totalAllocation = sum(securities.values())
    assert abs(totalAllocation - 1) < 0.0001, f'Allocations in category {category} do not total 100%, got {totalAllocation:.2%}'

    securityAllocations.update({security: allocation * categoryAllocation for security, allocation in securities.items()})

cashBalance = accounts.balance()
portfolioBalance = reduce(operator.add, (value for p, value in values.items() if p.instrument.symbol not in ignoredSecurities), cashBalance)
portfolioValue = convertCashToCurrency(baseCurrency, portfolioBalance.cash.values(), marketDataProvider(accounts))

def color_deviations(val):
    color = 'black'
    if abs(val) > maximumDeviation:
        if val > 0:
            color = 'green'
        else:
            color = 'red'

    return f'color: {color}'

def positionPctOfPortfolio(position) -> float:
    if position not in values:
        return nan

    value = values[position]
    if value.currency != baseCurrency:
        # TODO: Cache this somehow?
        value = convertCashToCurrency(baseCurrency, [value], marketDataProvider(accounts))

    return float(value.quantity) / float(portfolioValue.quantity)

rows = {p.instrument.symbol: [
    p.quantity,
    values.get(p, nan),
    positionPctOfPortfolio(p),
    Decimal(securityAllocations.get(p.instrument.symbol)) * portfolioValue if securityAllocations.get(p.instrument.symbol) else None,
    securityAllocations.get(p.instrument.symbol),
    positionPctOfPortfolio(p) - securityAllocations.get(p.instrument.symbol, 0),
] for p in stockPositions if p.instrument.symbol not in ignoredSecurities}

# One entry per column, to line up with the six DataFrame columns below
# (the Deviation for an unheld security is simply its full desired allocation)
missing = {symbol: [
    nan,                                  # Quantity
    nan,                                  # Market value
    nan,                                  # % of portfolio
    Decimal(allocation) * portfolioValue, # Desired value
    allocation,                           # Desired %
    -allocation,                          # Deviation
] for symbol, allocation in securityAllocations.items() if symbol not in rows}

df = pd.DataFrame.from_dict(data=dict(chain(rows.items(), missing.items())), orient='index', columns=[
    'Quantity', 'Market value', '% of portfolio', 'Desired value', 'Desired %', 'Deviation'
]).sort_index()

df.style.format({
    'Quantity': '{:.2f}',
    '% of portfolio': '{:.2%}',
    'Desired %': '{:.2%}',
    'Deviation': '{:.2%}'
}).applymap(color_deviations, 'Deviation').highlight_null()
print(cashBalance) print() print('Total portfolio value:', portfolioValue) ```
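The table above surfaces deviations but stops short of computing orders. As a hypothetical extension (the helper below is not part of bankroll, and it ignores lot sizes, fees and taxes), a deviation can be turned into a suggested whole-share trade:

```python
def suggested_trade(market_value, desired_fraction, portfolio_value, share_price):
    """Positive result = shares to buy, negative = shares to sell."""
    target_value = desired_fraction * portfolio_value
    return round((target_value - market_value) / share_price)

# Holding $8,000 of a security that should be 10% of a $100,000 portfolio,
# priced at $50/share, suggests buying 40 shares.
print(suggested_trade(8_000, 0.10, 100_000, 50))
```

In practice you would only act when the deviation exceeds `maximum deviation` from the settings, which is exactly what the colour-coding above highlights.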
# 1. Event approach

## Reading the full stats file

```
import numpy
import pandas

full_stats_file = '/Users/irv033/Downloads/data/stats_example.csv'
df = pandas.read_csv(full_stats_file)

def date_only(x):
    """Chop a datetime64 down to date only"""
    x = numpy.datetime64(x)
    return numpy.datetime64(numpy.datetime_as_string(x, timezone='local')[:10])

#df.time = df.time.apply(lambda x: numpy.datetime64(x))
df.time = df.time.apply(date_only)

#print(pandas.to_datetime(df['time'].values))
#df_times = df.time.apply(lambda x: x.date())

df = df.set_index('time')
```

## Read xarray data frame

```
import xray

data_file = '/Users/irv033/Downloads/data/va_ERAInterim_500hPa_2006-030day-runmean_native.nc'
dset_in = xray.open_dataset(data_file)
print(dset_in)

darray = dset_in['va']
print(darray)

times = darray.time.values
date_only(times[5])

darray_times = list(map(date_only, times))
print(darray_times[0:5])
```

## Merge

### Re-index the event data

```
event_numbers = df['event_number']
event_numbers = event_numbers.reindex(darray_times)
```

### Broadcast the shape

```
print(darray)
print(darray.shape)
print(type(darray))
print(type(event_numbers.values))
type(darray.data)

event_data = numpy.zeros((365, 241, 480))
for i in range(0, 365):
    event_data[i,:,:] = event_numbers.values[i]
```

### Combine

```
d = {}
d['time'] = darray['time']
d['latitude'] = darray['latitude']
d['longitude'] = darray['longitude']
d['va'] = (['time', 'latitude', 'longitude'], darray.data)
d['event'] = (['time'], event_numbers.values)
ds = xray.Dataset(d)
print(ds)
```

## Get event averages

```
event_averages = ds.groupby('event').mean('time')
print(event_averages)
```

# 2. Standard autocorrelation approach

### Read data

```
tas_file = '/Users/irv033/Downloads/data/tas_ERAInterim_surface_030day-runmean-anom-wrt-all-2005-2006_native.nc'
tas_dset = xray.open_dataset(tas_file)
tas_darray = tas_dset['tas']
print(tas_darray)

tas_data = tas_darray[dict(longitude=130, latitude=-40)].values
print(tas_data.shape)
```

### Plot autocorrelation with Pandas

```
%matplotlib inline
from pandas.plotting import autocorrelation_plot

pandas_test_data = pandas.Series(tas_data)
autocorrelation_plot(pandas_test_data)
```

### Calculate autocorrelation with statsmodels

```
import statsmodels
from statsmodels.tsa.stattools import acf

n = len(tas_data)
statsmodels_test_data = acf(tas_data, nlags=n-2)

import matplotlib.pyplot as plt

k = numpy.arange(1, n - 1)
plt.plot(k, statsmodels_test_data[1:])
plt.plot(k[0:40], statsmodels_test_data[1:41])

# Formula from Zieba2010, equation 12
r_k_sum = ((n - k) / float(n)) * statsmodels_test_data[1:]
n_eff = float(n) / (1 + 2 * numpy.sum(r_k_sum))
print(n_eff)
print(numpy.sum(r_k_sum))
```

So an initial sample size of 730 has an effective sample size of 90.

### Get the p value

```
from scipy import stats

var_x = tas_data.var() / n_eff
tval = tas_data.mean() / numpy.sqrt(var_x)
pval = stats.t.sf(numpy.abs(tval), n - 1) * 2 # two-sided pvalue = Prob(abs(t)>tt)
print('t-statistic = %6.3f pvalue = %6.4f' % (tval, pval))
```

## Implementation

```
def calc_significance(data_subset, data_all, standard_name):
    """Perform significance test.

    One sample t-test, with sample size adjusted for autocorrelation.

    Reference:
    Zięba, A. (2010). Metrology and Measurement Systems, XVII(1), 3–16
    doi:10.2478/v10178-010-0001-0
    """

    # Data must be three dimensional, with time first
    assert len(data_subset.shape) == 3, "Input data must be 3 dimensional"

    # Define autocorrelation function
    n = data_subset.shape[0]
    autocorr_func = numpy.apply_along_axis(acf, 0, data_subset, nlags=n - 2)

    # Calculate effective sample size (formula from Zieba2010, eq 12),
    # summing over the lag axis only so we get a value at each grid point
    k = numpy.arange(1, n - 1)
    r_k_sum = ((n - k[:, None, None]) / float(n)) * autocorr_func[1:]
    n_eff = float(n) / (1 + 2 * numpy.sum(r_k_sum, axis=0))

    # Calculate significance
    var_x = data_subset.var(axis=0) / n_eff
    tvals = (data_subset.mean(axis=0) - data_all.mean(axis=0)) / numpy.sqrt(var_x)
    pvals = stats.t.sf(numpy.abs(tvals), n - 1) * 2 # two-sided pvalue = Prob(abs(t)>tt)

    notes = "One sample t-test, with sample size adjusted for autocorrelation (Zieba2010, eq 12)"
    pval_atts = {'standard_name': standard_name,
                 'long_name': standard_name,
                 'units': ' ',
                 'notes': notes,}

    return pvals, pval_atts

min_lon, max_lon = (130, 135)
min_lat, max_lat = (-40, -37)

subset_dict = {'time': slice('2005-03-01', '2005-05-31'),
               'latitude': slice(min_lat, max_lat),
               'longitude': slice(min_lon, max_lon)}
all_dict = {'latitude': slice(min_lat, max_lat),
            'longitude': slice(min_lon, max_lon)}

subset_data = tas_darray.sel(**subset_dict).values
all_data = tas_darray.sel(**all_dict).values

print(all_data.shape)
print(subset_data.shape)

p, atts = calc_significance(subset_data, all_data, 'p_mam')
p.shape
print(atts)
```
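As a self-contained sanity check of the Zieba (2010, eq 12) formula used above, here it is applied to a synthetic AR(1) series, with the autocorrelation computed directly in numpy (so statsmodels isn't required). For an AR(1) process with coefficient phi, theory suggests the effective sample size should land in the vicinity of n(1-phi)/(1+phi):

```python
import numpy as np

rng = np.random.default_rng(0)
n, phi = 1000, 0.7
x = np.zeros(n)
for t in range(1, n):
    x[t] = phi * x[t - 1] + rng.standard_normal()

# Sample autocorrelation at lags 1..n-2 (the same lags as acf(x, nlags=n-2))
xc = x - x.mean()
denom = np.sum(xc ** 2)
k = np.arange(1, n - 1)
r_k = np.array([np.sum(xc[:-lag] * xc[lag:]) / denom for lag in k])

# Effective sample size, Zieba (2010) eq 12: much smaller than n
# for positively autocorrelated data
n_eff = n / (1 + 2 * np.sum(((n - k) / float(n)) * r_k))
print(n_eff)
```

This mirrors the shrinkage seen above, where 730 daily values reduced to an effective sample size of about 90.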
# The effect of steel casing in AEM data

Figures 4, 5, 6 in Kang et al. (2020) are generated using this

```
# core python packages
import numpy as np
import scipy.sparse as sp
import matplotlib.pyplot as plt
from matplotlib.colors import LogNorm
from scipy.constants import mu_0, inch, foot
import ipywidgets
import properties
import time
from scipy.interpolate import interp1d
from simpegEM1D.Waveforms import piecewise_pulse_fast

# SimPEG and discretize
import discretize
from discretize import utils
from SimPEG.EM import TDEM
from SimPEG import Utils, Maps
from SimPEG.Utils import Zero
from pymatsolver import Pardiso

# casing utilities
import casingSimulations as casingSim

%matplotlib inline
```

## Model Parameters

We will run two classes of examples:

- permeable wells: one example is run for each $\mu_r$ in `casing_mur`. The conductivity of these wells is `sigma_permeable_casing`
- conductive wells ($\mu_r$=1): one example is run for each $\sigma$ value in `sigma_casing`

To add model runs to the simulation, just add to the list

```
# permeabilities to model
casing_mur = [100]
sigma_permeable_casing = 1.45*1e6

# background parameters
sigma_air = 1e-6
sigma_back = 1./340.

casing_t = 10e-3 # 10mm thick casing
casing_d = 300e-3 # 30cm diameter
casing_l = 200

def get_model(mur, sigc):
    model = casingSim.model.CasingInHalfspace(
        directory = simDir,
        sigma_air = sigma_air,
        sigma_casing = sigc, # conductivity of the casing (S/m)
        sigma_back = sigma_back, # conductivity of the background (S/m)
        sigma_inside = sigma_back, # fluid inside the well has same conductivity as the background
        casing_d = casing_d-casing_t, # outer casing diameter less one wall thickness
        casing_l = casing_l,
        casing_t = casing_t,
        mur_casing = mur,
        src_a = np.r_[0., 0., 30.],
        src_b = np.r_[0., 0., 30.]
    )
    return model
```

## store the different models

```
simDir = "./"

model_names_permeable = ["casing_{}".format(mur) for mur in casing_mur]
model_dict_permeable = {
    key: get_model(mur, sigma_permeable_casing) for key, mur in zip(model_names_permeable, casing_mur)
}

model_names = model_names_permeable
model_dict = {}
model_dict.update(model_dict_permeable)

model_dict["baseline"] = model_dict[model_names[0]].copy()
model_dict["baseline"].sigma_casing = model_dict["baseline"].sigma_back
model_names = ["baseline"] + model_names
model_names
```

## Create a mesh

```
# parameters defining the core region of the mesh
csx2 = 2.5 # cell size in the x-direction in the second uniform region of the mesh (where we measure data)
csz = 2.5 # cell size in the z-direction

domainx2 = 100 # go out 100m from the well

# padding parameters
npadx, npadz = 19, 17 # number of padding cells
pfx2 = 1.4 # expansion factor for the padding to infinity in the x-direction
pfz = 1.4

# set up a mesh generator which will build a mesh based on the provided parameters
# and casing geometry
def get_mesh(mod):
    return casingSim.CasingMeshGenerator(
        directory=simDir, # directory where we can save things
        modelParameters=mod, # casing parameters
        npadx=npadx, # number of padding cells in the x-direction
        npadz=npadz, # number of padding cells in the z-direction
        domain_x=domainx2, # extent of the second uniform region of the mesh
        # hy=hy, # cell spacings in the
        csx1=mod.casing_t/4., # use at least 4 cells across the thickness of the casing
        csx2=csx2, # second core cell size
        csz=csz, # cell size in the z-direction
        pfx2=pfx2, # padding factor to "infinity"
        pfz=pfz # padding factor to "infinity" for the z-direction
    )

mesh_generator = get_mesh(model_dict[model_names[0]])

mesh_generator.mesh.hx.sum()
mesh_generator.mesh.hx.min() * 1e3
mesh_generator.mesh.hz.sum()
# diffusion_distance(1e-2, 1./340.)
* 2 ``` ## Physical Properties ``` # Assign physical properties on the mesh physprops = { name: casingSim.model.PhysicalProperties(mesh_generator, mod) for name, mod in model_dict.items() } from matplotlib.colors import LogNorm import matplotlib matplotlib.rcParams['font.size'] = 14 pp = physprops['casing_100'] sigma = pp.sigma fig, ax = plt.subplots() out = mesh_generator.mesh.plotImage( 1./sigma, grid=True, gridOpts={'alpha':0.2, 'color':'w'}, pcolorOpts={'norm':LogNorm(), 'cmap':'jet'}, mirror=True, ax=ax ) cb= plt.colorbar(out[0], ax=ax) cb.set_label("Resistivity ($\Omega$m)") ax.set_xlabel("x (m)") ax.set_ylabel("z (m)") ax.set_xlim(-0.3, 0.3) ax.set_ylim(-30, 30) ax.set_aspect(0.008) plt.tight_layout() fig.savefig("./figures/figure-4", dpi=200) from simpegEM1D import diffusion_distance mesh_generator.mesh.plotGrid() ``` ## Set up the time domain EM problem We run a time domain EM simulation with SkyTEM geometry ``` data_dir = "./data/" waveform_hm = np.loadtxt(data_dir+"HM_butte_312.txt") time_gates_hm = np.loadtxt(data_dir+"HM_butte_312_gates")[7:,:] * 1e-6 waveform_lm = np.loadtxt(data_dir+"LM_butte_312.txt") time_gates_lm = np.loadtxt(data_dir+"LM_butte_312_gates")[8:,:] * 1e-6 time_input_currents_HM = waveform_hm[:,0] input_currents_HM = waveform_hm[:,1] time_input_currents_LM = waveform_lm[:,0] input_currents_LM = waveform_lm[:,1] time_LM = time_gates_lm[:,3] - waveform_lm[:,0].max() time_HM = time_gates_hm[:,3] - waveform_hm[:,0].max() base_frequency_HM = 30. base_frequency_LM = 210. radius = 13.25 source_area = np.pi * radius**2 pico = 1e12 def run_simulation(sigma, mu, z_src): mesh = mesh_generator.mesh dts = np.diff(np.logspace(-6, -1, 50)) timeSteps = [] for dt in dts: timeSteps.append((dt, 1)) prb = TDEM.Problem3D_e( mesh=mesh, timeSteps=timeSteps, Solver=Pardiso ) x_rx = 0. z_offset = 0. 
rxloc = np.array([x_rx, 0., z_src+z_offset]) srcloc = np.array([0., 0., z_src]) times = np.logspace(np.log10(1e-5), np.log10(1e-2), 31) rx = TDEM.Rx.Point_dbdt(locs=np.array([x_rx, 0., z_src+z_offset]), times=times, orientation="z") src = TDEM.Src.CircularLoop( [rx], loc=np.r_[0., 0., z_src], orientation="z", radius=13.25 ) area = np.pi * src.radius**2 def bdf2(sigma): # Operators C = mesh.edgeCurl Mfmui = mesh.getFaceInnerProduct(1./mu_0) MeSigma = mesh.getEdgeInnerProduct(sigma) n_steps = prb.timeSteps.size Fz = mesh.getInterpolationMat(rx.locs, locType='Fz') eps = 1e-10 def getA(dt, factor=1.): return C.T*Mfmui*C + factor/dt * MeSigma dt_0 = 0. data_test = np.zeros(prb.timeSteps.size) sol_n0 = np.zeros(mesh.nE) sol_n1 = np.zeros(mesh.nE) sol_n2 = np.zeros(mesh.nE) for ii in range(n_steps): dt = prb.timeSteps[ii] #Factor for BDF2 factor=3/2. if abs(dt_0-dt) > eps: if ii != 0: Ainv.clean() # print (ii, factor) A = getA(dt, factor=factor) Ainv = prb.Solver(A) if ii==0: b0 = src.bInitial(prb) s_e = C.T*Mfmui*b0 rhs = factor/dt*s_e elif ii==1: rhs = -factor/dt*(MeSigma*(-4/3.*sol_n1+1/3.*sol_n0) + 1./3.*s_e) else: rhs = -factor/dt*(MeSigma*(-4/3.*sol_n1+1/3.*sol_n0)) sol_n2 = Ainv*rhs data_test[ii] = Fz*(-C*sol_n2) dt_0 = dt sol_n0 = sol_n1.copy() sol_n1 = sol_n2.copy() step_response = -data_test.copy() step_func = interp1d( np.log10(prb.times[1:]), step_response ) period_HM = 1./base_frequency_HM period_LM = 1./base_frequency_LM data_hm = piecewise_pulse_fast( step_func, time_HM, time_input_currents_HM, input_currents_HM, period_HM, n_pulse=1 ) data_lm = piecewise_pulse_fast( step_func, time_LM, time_input_currents_LM, input_currents_LM, period_LM, n_pulse=1 ) return np.r_[data_hm, data_lm] / area * pico return bdf2(sigma) ``` ## Run the simulation - for each permeability model we run the simulation for 2 conductivity models (casing = $10^6$S/m and $10^{-4}$S/m - each simulation takes 15s-20s on my machine: the next cell takes ~ 4min to run ``` pp = 
physprops['baseline'] sigma_base = pp.sigma pp = physprops['casing_100'] sigma = pp.sigma mu = pp.mu inds_half_space = sigma_base != sigma_air inds_air = ~inds_half_space inds_casing = sigma == sigma_permeable_casing print (pp.mesh.hx.sum()) print (pp.mesh.hz.sum()) sigma_backgrounds = np.r_[1./1, 1./20, 1./100, 1./200, 1./340] # start = timeit.timeit() data_base = {} data_casing = {} for sigma_background in sigma_backgrounds: sigma_base = np.ones(pp.mesh.nC) * sigma_air sigma_base[inds_half_space] = sigma_background sigma = np.ones(pp.mesh.nC) * sigma_air sigma[inds_half_space] = sigma_background sigma[inds_casing] = sigma_permeable_casing for height in [20, 30, 40, 60, 80]: rho = 1/sigma_background name = str(int(rho)) + str(height) data_base[name] = run_simulation(sigma_base, mu_0, height) data_casing[name] = run_simulation(sigma, mu, height) # end = timeit.timeit() # print(("Elapsed time is %1.f")%(end - start)) rerr_max = [] for sigma_background in sigma_backgrounds: rerr_tmp = np.zeros(5) for ii, height in enumerate([20, 30, 40, 60, 80]): rho = 1/sigma_background name = str(int(rho)) + str(height) data_casing_tmp = data_casing[name] data_base_tmp = data_base[name] rerr_hm = abs(data_casing_tmp[:time_HM.size]-data_base_tmp[:time_HM.size]) / abs(data_base_tmp[:time_HM.size]) rerr_lm = abs(data_casing_tmp[time_HM.size:]-data_base_tmp[time_HM.size:]) / abs(data_base_tmp[time_HM.size:]) # rerr_tmp[ii] = np.r_[rerr_hm, rerr_lm].max() rerr_tmp[ii] = np.sqrt(((np.r_[rerr_hm, rerr_lm])**2).sum() / np.r_[rerr_hm, rerr_lm].size) rerr_max.append(rerr_tmp) import matplotlib matplotlib.rcParams['font.size'] = 14 fig_dir = "./figures/" times = np.logspace(np.log10(1e-5), np.log10(1e-2), 31) colors = ['k', 'b', 'g', 'r'] name='2040' fig, axs = plt.subplots(1,2, figsize=(10, 5)) axs[0].loglog(time_gates_hm[:,3]*1e3, data_base[name][:time_HM.size], 'k--') axs[0].loglog(time_gates_lm[:,3]*1e3, data_base[name][time_HM.size:], 'b--') axs[0].loglog(time_gates_hm[:,3]*1e3, 
data_casing[name][:time_HM.size], 'k-') axs[0].loglog(time_gates_lm[:,3]*1e3, data_casing[name][time_HM.size:], 'b-') rerr_hm = abs(data_casing[name][:time_HM.size]-data_base[name][:time_HM.size]) / abs(data_base[name][:time_HM.size]) rerr_lm = abs(data_casing[name][time_HM.size:]-data_base[name][time_HM.size:]) / abs(data_base[name][time_HM.size:]) axs[1].loglog(time_gates_hm[:,3]*1e3, rerr_hm * 100, 'k-') axs[1].loglog(time_gates_lm[:,3]*1e3, rerr_lm * 100, 'b-') axs[1].set_ylim(0, 100) axs[0].legend(('HM-background', 'LM-background', 'HM-casing', 'LM-casing')) for ax in axs: ax.set_xlabel("Time (ms)") ax.grid(True) axs[0].set_title('(a) AEM response') axs[1].set_title('(b) Percentage casing effect') axs[0].set_ylabel("Voltage (pV/A-m$^4$)") axs[1].set_ylabel("Percentage casing effect (%)") ax_1 = axs[1].twinx() xlim = axs[1].get_xlim() ax_1.loglog(xlim, (3,3), '-', color='grey', alpha=0.8) axs[1].set_ylim((1e-4, 100)) ax_1.set_ylim((1e-4, 100)) axs[1].set_xlim(xlim) ax_1.set_xlim(xlim) ax_1.set_yticks([3]) ax_1.set_yticklabels(["3%"]) plt.tight_layout() fig.savefig("./figures/figure-5", dpi=200) fig = plt.figure(figsize = (10,5)) ax = plt.gca() ax_1 = ax.twinx() markers = ['k--', 'b--', 'g--', 'r--', 'y--'] for ii, rerr in enumerate(rerr_max[::-1]): ax.plot([20, 30, 40, 60, 80], rerr*100, markers[ii], ms=10) ax.set_xlabel("Transmitter height (m)") ax.set_ylabel("Total percentage casing effect (%)") ax.legend(("340 $\Omega$m", "200 $\Omega$m", "100 $\Omega$m", "20 $\Omega$m", "1 $\Omega$m",), bbox_to_anchor=(1.4,1)) ax.set_yscale('log') ax_1.set_yscale('log') xlim = ax.get_xlim() ylim = ax.get_ylim() ax_1.plot(xlim, (3,3), '-', color='grey', alpha=0.8) ax.set_ylim(ylim) ax_1.set_ylim(ylim) ax.set_xlim(xlim) ax_1.set_yticks([3]) ax_1.set_yticklabels(["3%"]) plt.tight_layout() fig.savefig("./figures/figure-6", dpi=200) ```
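The `bdf2` routine above applies second-order backward differentiation to the full EM system, which can make the scheme itself hard to see. Here is a minimal scalar sketch of the same stepping applied to dy/dt = -y (exact solution e^{-t}); like the special-cased initial steps in `bdf2`, the first step here is bootstrapped (with backward Euler):

```python
import math

def bdf2_decay(y0=1.0, h=0.01, steps=100, lam=1.0):
    """Integrate dy/dt = -lam * y with BDF2, using a backward-Euler first step."""
    y_prev = y0
    y_curr = y_prev / (1 + h * lam)  # backward Euler bootstrap
    for _ in range(steps - 1):
        # BDF2: (3/2) y_{n+2} - 2 y_{n+1} + (1/2) y_n = h * f(y_{n+2})
        y_curr, y_prev = (2.0 * y_curr - 0.5 * y_prev) / (1.5 + h * lam), y_curr
    return y_curr

# at t = steps * h = 1.0 the result sits close to the exact value exp(-1)
```

Because each step solves an implicit equation in the newest value, the scheme stays stable for the stiff systems that arise from the EM discretization, at second-order accuracy.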
```
import numpy as np
import pandas as pd
import torch
import torchvision
from torch.utils.data import Dataset, DataLoader
from torchvision import transforms, utils
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
from matplotlib import pyplot as plt
%matplotlib inline

torch.backends.cudnn.deterministic = True
torch.backends.cudnn.benchmark = False

train_data = np.load("train_blob_data.npy", allow_pickle=True)
test_data = np.load("test_blob_data.npy", allow_pickle=True)

mosaic_list_of_images = train_data[0]["mosaic_list"]
mosaic_label = train_data[0]["mosaic_label"]
fore_idx = train_data[0]["fore_idx"]

test_mosaic_list_of_images = test_data[0]["mosaic_list"]
test_mosaic_label = test_data[0]["mosaic_label"]
test_fore_idx = test_data[0]["fore_idx"]

class MosaicDataset1(Dataset):
    """MosaicDataset dataset."""

    def __init__(self, mosaic_list, mosaic_label, fore_idx):
        """
        Args:
            mosaic_list: list of mosaic images.
            mosaic_label: label of each mosaic.
            fore_idx: index of the foreground patch within each mosaic.
""" self.mosaic = mosaic_list self.label = mosaic_label self.fore_idx = fore_idx def __len__(self): return len(self.label) def __getitem__(self, idx): return self.mosaic[idx] , self.label[idx] , self.fore_idx[idx] batch = 250 train_dataset = MosaicDataset1(mosaic_list_of_images, mosaic_label, fore_idx) train_loader = DataLoader( train_dataset,batch_size= batch ,shuffle=False) test_dataset = MosaicDataset1(test_mosaic_list_of_images, test_mosaic_label, test_fore_idx) test_loader = DataLoader(test_dataset,batch_size= batch ,shuffle=False) bg = [] for i in range(12): torch.manual_seed(i) betag = torch.randn(250,9)#torch.ones((250,9))/9 bg.append( betag.requires_grad_() ) bg class Module2(nn.Module): def __init__(self): super(Module2, self).__init__() self.linear1 = nn.Linear(5,100) self.linear2 = nn.Linear(100,3) def forward(self,x): x = F.relu(self.linear1(x)) x = self.linear2(x) return x torch.manual_seed(1234) what_net = Module2().double() #what_net.load_state_dict(torch.load("type4_what_net.pt")) what_net = what_net.to("cuda") def attn_avg(x,beta): y = torch.zeros([batch,5], dtype=torch.float64) y = y.to("cuda") alpha = F.softmax(beta,dim=1) # alphas #print(alpha[0],x[0,:]) for i in range(9): alpha1 = alpha[:,i] y = y + torch.mul(alpha1[:,None],x[:,i]) return y,alpha def calculate_attn_loss(dataloader,what,criter): what.eval() r_loss = 0 alphas = [] lbls = [] pred = [] fidices = [] correct = 0 tot = 0 with torch.no_grad(): for i, data in enumerate(dataloader, 0): inputs, labels,fidx= data lbls.append(labels) fidices.append(fidx) inputs = inputs.double() beta = bg[i] # beta for ith batch inputs, labels,beta = inputs.to("cuda"),labels.to("cuda"),beta.to("cuda") avg,alpha = attn_avg(inputs,beta) alpha = alpha.to("cuda") outputs = what(avg) _, predicted = torch.max(outputs.data, 1) correct += sum(predicted == labels) tot += len(predicted) pred.append(predicted.cpu().numpy()) alphas.append(alpha.cpu().numpy()) loss = criter(outputs, labels) r_loss += loss.item() alphas 
= np.concatenate(alphas,axis=0) pred = np.concatenate(pred,axis=0) lbls = np.concatenate(lbls,axis=0) fidices = np.concatenate(fidices,axis=0) #print(alphas.shape,pred.shape,lbls.shape,fidices.shape) analysis = analyse_data(alphas,lbls,pred,fidices) return r_loss/i,analysis,correct.item(),tot,correct.item()/tot # for param in what_net.parameters(): # param.requires_grad = False def analyse_data(alphas,lbls,predicted,f_idx): ''' analysis data is created here ''' batch = len(predicted) amth,alth,ftpt,ffpt,ftpf,ffpf = 0,0,0,0,0,0 for j in range (batch): focus = np.argmax(alphas[j]) if(alphas[j][focus] >= 0.5): amth +=1 else: alth +=1 if(focus == f_idx[j] and predicted[j] == lbls[j]): ftpt += 1 elif(focus != f_idx[j] and predicted[j] == lbls[j]): ffpt +=1 elif(focus == f_idx[j] and predicted[j] != lbls[j]): ftpf +=1 elif(focus != f_idx[j] and predicted[j] != lbls[j]): ffpf +=1 #print(sum(predicted==lbls),ftpt+ffpt) return [ftpt,ffpt,ftpf,ffpf,amth,alth] optim1 = [] for i in range(12): optim1.append(optim.RMSprop([bg[i]], lr=0.1)) # instantiate optimizer optimizer_what = optim.RMSprop(what_net.parameters(), lr=0.001)#, momentum=0.9)#,nesterov=True) criterion = nn.CrossEntropyLoss() acti = [] analysis_data_tr = [] analysis_data_tst = [] loss_curi_tr = [] loss_curi_tst = [] epochs = 200 # calculate zeroth epoch loss and FTPT values running_loss,anlys_data,correct,total,accuracy = calculate_attn_loss(train_loader,what_net,criterion) print('training epoch: [%d ] loss: %.3f correct: %.3f, total: %.3f, accuracy: %.3f' %(0,running_loss,correct,total,accuracy)) loss_curi_tr.append(running_loss) analysis_data_tr.append(anlys_data) # training starts for epoch in range(epochs): # loop over the dataset multiple times ep_lossi = [] running_loss = 0.0 what_net.train() for i, data in enumerate(train_loader, 0): # get the inputs inputs, labels,_ = data inputs = inputs.double() beta = bg[i] # alpha for ith batch #print(labels) inputs, labels,beta = 
inputs.to("cuda"),labels.to("cuda"),beta.to("cuda") # zero the parameter gradients optimizer_what.zero_grad() optim1[i].zero_grad() # forward + backward + optimize avg,alpha = attn_avg(inputs,beta) outputs = what_net(avg) loss = criterion(outputs, labels) # print statistics running_loss += loss.item() #alpha.retain_grad() loss.backward(retain_graph=False) optimizer_what.step() optim1[i].step() running_loss_tr,anls_data,correct,total,accuracy = calculate_attn_loss(train_loader,what_net,criterion) analysis_data_tr.append(anls_data) loss_curi_tr.append(running_loss_tr) #loss per epoch print('training epoch: [%d ] loss: %.3f correct: %.3f, total: %.3f, accuracy: %.3f' %(epoch+1,running_loss_tr,correct,total,accuracy)) if running_loss_tr<=0.08: break print('Finished Training run ') analysis_data_tr = np.array(analysis_data_tr) columns = ["epochs", "argmax > 0.5" ,"argmax < 0.5", "focus_true_pred_true", "focus_false_pred_true", "focus_true_pred_false", "focus_false_pred_false" ] df_train = pd.DataFrame() df_test = pd.DataFrame() df_train[columns[0]] = np.arange(0,epoch+2) df_train[columns[1]] = analysis_data_tr[:,-2] df_train[columns[2]] = analysis_data_tr[:,-1] df_train[columns[3]] = analysis_data_tr[:,0] df_train[columns[4]] = analysis_data_tr[:,1] df_train[columns[5]] = analysis_data_tr[:,2] df_train[columns[6]] = analysis_data_tr[:,3] df_train fig= plt.figure(figsize=(6,6)) plt.plot(df_train[columns[0]],df_train[columns[3]]/30, label ="focus_true_pred_true ") plt.plot(df_train[columns[0]],df_train[columns[4]]/30, label ="focus_false_pred_true ") plt.plot(df_train[columns[0]],df_train[columns[5]]/30, label ="focus_true_pred_false ") plt.plot(df_train[columns[0]],df_train[columns[6]]/30, label ="focus_false_pred_false ") plt.title("On Train set") plt.legend(loc='center left', bbox_to_anchor=(1, 0.5)) plt.xlabel("epochs") plt.ylabel("percentage of data") plt.xticks([0,5,10,15]) 
#plt.vlines(vline_list,min(min(df_train[columns[3]]/300),min(df_train[columns[4]]/300),min(df_train[columns[5]]/300),min(df_train[columns[6]]/300)), max(max(df_train[columns[3]]/300),max(df_train[columns[4]]/300),max(df_train[columns[5]]/300),max(df_train[columns[6]]/300)),linestyles='dotted') plt.show() fig.savefig("train_analysis.pdf") fig.savefig("train_analysis.png") aph = [] for i in bg: aph.append(F.softmax(i,dim=1).detach().numpy()) aph = np.concatenate(aph,axis=0) torch.save({ 'epoch': 500, 'model_state_dict': what_net.state_dict(), #'optimizer_state_dict': optimizer_what.state_dict(), "optimizer_alpha":optim1, "FTPT_analysis":analysis_data_tr, "alpha":aph }, "type4_what_net_500.pt") aph[0] ```
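The `attn_avg` step used throughout is just a softmax-weighted average of the nine patch features. As a sanity check, here is a minimal CPU-only sketch (independent of the notebook's globals; `attn_avg_cpu` is our own name) showing the same computation in vectorized form:

```python
import torch
import torch.nn.functional as F

def attn_avg_cpu(x, beta):
    # x: (batch, 9, 5) patch features; beta: (batch, 9) attention logits
    alpha = F.softmax(beta, dim=1)             # attention weights; each row sums to 1
    y = (alpha[:, :, None] * x).sum(dim=1)     # weighted average over the 9 patches
    return y, alpha

torch.manual_seed(0)
x = torch.randn(4, 9, 5, dtype=torch.float64)
beta = torch.randn(4, 9, dtype=torch.float64)
y, alpha = attn_avg_cpu(x, beta)
print(y.shape)  # one 5-dimensional averaged feature vector per example
```

The vectorized sum is equivalent to the explicit loop over the nine patches used in the notebook's `attn_avg`.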
## 2-3. Quantum Fourier Transform

In this section we study the quantum Fourier transform, one of the most important quantum algorithms. As its name suggests, it is a quantum algorithm that performs the Fourier transform, and it is frequently used as a subroutine of other quantum algorithms. (Reference: Nielsen–Chuang 5.1 `The quantum Fourier transform`)

※ As touched on briefly in the column at the end of this section, running the quantum Fourier transform on so-called NISQ devices is considered difficult, both because the circuit is somewhat complex and because preparing the input state is hard.

### Definition

First, for an array $\{x_j\}$ with $2^n$ components $(j=0,\cdots,2^n-1)$, define its [discrete Fourier transform](https://ja.wikipedia.org/wiki/離散フーリエ変換), the array $\{ y_k \}$, by

$$
y_k = \frac{1}{\sqrt{2^n}} \sum_{j=0}^{2^n-1} x_j e^{i\frac{2\pi kj}{2^n}} \tag{1}
$$

with $(k=0, \cdots, 2^n-1)$. The array $\{x_j\}$ is assumed normalized so that $\sum_{j=0}^{2^n-1} |x_j|^2 = 1$.

The quantum Fourier transform algorithm maps the input quantum state

$$
|x\rangle := \sum_{j=0}^{2^n-1} x_j |j\rangle
$$

to

$$
|y \rangle := \sum_{k=0}^{2^n-1} y_k |k\rangle. \tag{2}
$$

Here $|i \rangle$ is shorthand for the state $|i_1 \cdots i_n \rangle$ corresponding to the binary representation $i_1 \cdots i_n$ ($i_m = 0,1$) of the integer $i$. (For example, $|2 \rangle = |0\cdots0 10 \rangle$ and $|7 \rangle = |0\cdots0111 \rangle$.)

Substituting eq. (1) into (2) gives

$$
|y \rangle = \frac{1}{\sqrt{2^n}} \sum_{k=0}^{2^n-1} \sum_{j=0}^{2^n-1} x_j e^{i\frac{2\pi kj}{2^n}} |k\rangle = \sum_{j=0}^{2^n-1} x_j \left( \frac{1}{\sqrt{2^n}} \sum_{k=0}^{2^n-1} e^{i\frac{2\pi kj}{2^n}} |k\rangle \right).
$$

Hence, for the quantum Fourier transform, it suffices to find a quantum circuit (transformation) $U$ that implements

$$
|j\rangle \to \frac{1}{\sqrt{2^n}} \sum_{k=0}^{2^n-1} e^{i\frac{2\pi kj}{2^n}} |k\rangle.
$$

(Readers with time to spare should verify by direct calculation that this is indeed a unitary transformation.)

This expression can be rewritten further (the algebra is somewhat involved, so feel free to look only at the final result):

$$
\begin{eqnarray}
\sum_{k=0}^{2^n-1} e^{i\frac{2\pi kj}{2^n}} |k\rangle
&=& \sum_{k_1=0}^1 \cdots \sum_{k_n=0}^1 e^{i\frac{2\pi (k_1 2^{n-1} + \cdots k_n 2^0 )\cdot j}{2^n}} |k_1 \cdots k_n\rangle \:\:\:\: \text{(rewrote the sum over } k \text{ in binary)} \\
&=& \sum_{k_1=0}^1 \cdots \sum_{k_n=0}^1 e^{i 2\pi j (k_1 2^{-1} + \cdots k_n 2^{-n})} |k_1 \cdots k_n\rangle \\
&=& \left( \sum_{k_1=0}^1 e^{i 2\pi j k_1 2^{-1}} |k_1 \rangle \right) \otimes \cdots \otimes \left( \sum_{k_n=0}^1 e^{i 2\pi j k_n 2^{-n}} |k_n \rangle \right) \:\:\:\: \text{("factored" the whole expression into a tensor product)} \\
&=& \left( |0\rangle + e^{i 2\pi 0.j_n} |1 \rangle \right) \otimes \left( |0\rangle + e^{i 2\pi 0.j_{n-1}j_n} |1 \rangle \right) \otimes \cdots \otimes \left( |0\rangle + e^{i 2\pi 0.j_1j_2\cdots j_n} |1 \rangle \right) \:\:\:\: \text{(evaluated the sums in parentheses)}
\end{eqnarray}
$$

Here

$$
0.j_l\cdots j_n = \frac{j_l}{2} + \frac{j_{l+1}}{2^2} + \cdots + \frac{j_n}{2^{n-l+1}}
$$

is a binary fraction, and we used $e^{i 2\pi j 2^{-l}} = e^{i 2\pi j_1 \cdots j_{n-l} . j_{n-l+1}\cdots j_n} = e^{i 2\pi 0.j_{n-l+1}\cdots j_n}$ (since $e^{i2\pi}=1$, the integer part is irrelevant).

In summary, the quantum Fourier transform needs to implement the map

$$
|j\rangle = |j_1 \cdots j_n \rangle \to \frac{ \left( |0\rangle + e^{i 2\pi 0.j_n} |1 \rangle \right) \otimes \left( |0\rangle + e^{i 2\pi 0.j_{n-1}j_n} |1 \rangle \right) \otimes \cdots \otimes \left( |0\rangle + e^{i 2\pi 0.j_1j_2\cdots j_n} |1 \rangle \right) }{\sqrt{2^n}}. \tag{*}
$$

### Constructing the circuit

Let us now see how to actually build a circuit that performs the quantum Fourier transform. We will make heavy use of the following identity for the Hadamard gate $H$ (easily checked by direct calculation),

$$
H |m \rangle = \frac{|0\rangle + e^{i 2\pi 0.m}|1\rangle }{\sqrt{2}} \:\:\: (m=0,1),
$$

and of the general phase gate with angle $2\pi/2^l$,

$$
R_l =
\begin{pmatrix}
1 & 0\\
0 & e^{i \frac{2\pi}{2^l} }
\end{pmatrix}.
$$

1. First we create the factor $\left( |0\rangle + e^{i 2\pi 0.j_1j_2\cdots j_n} |1\rangle \right)$. Applying a Hadamard gate to the first qubit $|j_1\rangle$ gives
$$
|j_1 \cdots j_n \rangle \to \frac{1}{\sqrt{2}} \left( |0\rangle + e^{i2\pi 0.j_1} |1\rangle \right) |j_2 \cdots j_n \rangle.
$$
Next, apply the phase gate $R_2$ to the first qubit, controlled on the second qubit $|j_2\rangle$: it does nothing when $j_2=0$, and only when $j_2=1$ attaches the phase $2\pi/2^2$ (the binary fraction $0.01$) to the $|1\rangle$ part of the first qubit, so
$$
\frac{1}{\sqrt{2}} \left( |0\rangle + e^{i2\pi 0.j_1} |1\rangle \right) |j_2 \cdots j_n \rangle \to \frac{1}{\sqrt{2}} \left( |0\rangle + e^{i2\pi 0.j_1j_2} |1\rangle \right) |j_2 \cdots j_n \rangle.
$$
Continuing in the same way, applying the phase gate $R_l$ controlled on the $l$-th qubit $|j_l\rangle$ ($l=3,\cdots, n$) finally yields
$$
\frac{1}{\sqrt{2}} \left( |0\rangle + e^{i2\pi 0.j_1\cdots j_n} |1\rangle \right) |j_2 \cdots j_n \rangle.
$$

2. Next we create the factor $\left( |0\rangle + e^{i2\pi 0.j_2\cdots j_n} |1\rangle\right)$. As before, applying a Hadamard gate to the second qubit $|j_2\rangle$ gives
$$
\frac{1}{\sqrt{2}} \left( |0\rangle + e^{i2\pi 0.j_1\cdots j_n}|1\rangle \right) \frac{1}{\sqrt{2}} \left( |0\rangle + e^{i2\pi 0.j_2} |1\rangle \right) |j_3 \cdots j_n \rangle.
$$
Applying the phase gate $R_2$ controlled on the third qubit $|j_3\rangle$ gives
$$
\frac{1}{\sqrt{2}} \left( |0\rangle + e^{i2\pi 0.j_1\cdots j_n}|1\rangle \right) \frac{1}{\sqrt{2}} \left( |0\rangle + e^{i2\pi 0.j_2j_3}|1\rangle \right) |j_3 \cdots j_n \rangle,
$$
and repeating this we obtain
$$
\frac{1}{\sqrt{2}} \left( |0\rangle + e^{i2\pi 0.j_1\cdots j_n}|1\rangle \right) \frac{1}{\sqrt{2}} \left( |0\rangle + e^{i2\pi 0.j_2\cdots j_n}|1\rangle \right) |j_3 \cdots j_n \rangle.
$$

3. Proceeding as in steps 1 and 2, apply to the $l$-th qubit $|j_l\rangle$ a Hadamard gate followed by the controlled phase gates $R_2, \cdots, R_{n-l+1}$ (controlled on qubits $l+1, \cdots, n$), for $l=3,\cdots,n$. This finally yields
$$
|j_1 \cdots j_n \rangle \to
\left( \frac{|0\rangle + e^{i 2\pi 0.j_1\cdots j_n} |1 \rangle}{\sqrt{2}} \right) \otimes
\left( \frac{|0\rangle + e^{i 2\pi 0.j_2\cdots j_n} |1 \rangle}{\sqrt{2}} \right) \otimes \cdots \otimes
\left( \frac{|0\rangle + e^{i 2\pi 0.j_n} |1 \rangle}{\sqrt{2}} \right),
$$
so after reversing the order of the qubits with SWAP gates, we have constructed a circuit that performs the quantum Fourier transform (note that the qubit order here is reversed relative to eq. ($*$)).

The circuit, excluding the SWAP part, is drawn below.

![QFT](figs/2/QFT.png)

### Implementation with SymPy

To deepen our understanding of the quantum Fourier transform, let us implement the circuit for $n=3$ using SymPy.

```
from sympy import *
from sympy.physics.quantum import *
from sympy.physics.quantum.qubit import Qubit, QubitBra
init_printing()  # to render vectors and matrices nicely
from sympy.physics.quantum.gate import X, Y, Z, H, S, T, CNOT, SWAP, CPHASE, CGateS
```

```
# Run this cell only on Google Colaboratory
from IPython.display import HTML
def setup_mathjax():
    display(HTML('''
    <script>
      if (!window.MathJax && window.google && window.google.colab) {
        window.MathJax = {
          'tex2jax': {
            'inlineMath': [['$', '$'], ['\\(', '\\)']],
            'displayMath': [['$$', '$$'], ['\\[', '\\]']],
            'processEscapes': true,
            'processEnvironments': true,
            'skipTags': ['script', 'noscript', 'style', 'textarea', 'code'],
            'displayAlign': 'center',
          },
          'HTML-CSS': {
            'styles': {'.MathJax_Display': {'margin': 0}},
            'linebreaks': {'automatic': true},
            // Disable to prevent OTF font loading, which aren't part of our
            // distribution.
            'imageFont': null,
          },
          'messageStyle': 'none'
        };
        var script = document.createElement("script");
        script.src = "https://colab.research.google.com/static/mathjax/MathJax.js?config=TeX-AMS_HTML-full,Safe";
        document.head.appendChild(script);
      }
    </script>
    '''))
get_ipython().events.register('pre_run_cell', setup_mathjax)
```

First, as the input $|x\rangle$ to be Fourier transformed, consider the uniform superposition of all basis states,

$$
|x\rangle = \sum_{j=0}^7 \frac{1}{\sqrt{8}} |j\rangle
$$

($x_0 = \cdots = x_7 = 1/\sqrt{8}$).

```
input = 1/sqrt(8) * (Qubit("000")+Qubit("001")+Qubit("010")+Qubit("011")+Qubit("100")+Qubit("101")+Qubit("110")+Qubit("111"))
input
```

Fourier transforming the corresponding array with numpy,

```
import numpy as np
input_np_array = 1/np.sqrt(8) * np.ones(8)
print(input_np_array)  # input
# output: multiplied by sqrt(2^3) so that numpy's ifft convention
# matches our definition of the Fourier transform
print(np.fft.ifft(input_np_array) * np.sqrt(8))
```

we see that the transform gives the simple array $y_0=1, y_1=\cdots=y_7=0$. Let us confirm this with the quantum Fourier transform.

First, note that the $R_1, R_2, R_3$ gates are equal to the $Z, S, T$ gates respectively ($e^{i\pi}=-1, e^{i\pi/2}=i$).

```
represent(Z(0), nqubits=1), represent(S(0), nqubits=1), represent(T(0), nqubits=1)
```

Now we build the circuit that performs the quantum Fourier transform (abbreviated QFT below). First, apply the Hadamard operator to the first qubit (SymPy counts qubits from the right starting at 0, so in SymPy this is qubit 2), followed by the $R_2$ and $R_3$ gates controlled on the second and third qubits.

```
QFT_gate = H(2)
QFT_gate = CGateS(1, S(2)) * QFT_gate
QFT_gate = CGateS(0, T(2)) * QFT_gate
```

Apply a Hadamard gate and a controlled $R_2$ operation to the second qubit (qubit 1 in SymPy) as well.

```
QFT_gate = H(1) * QFT_gate
QFT_gate = CGateS(0, S(1)) * QFT_gate
```

For the third qubit (qubit 0 in SymPy), only a Hadamard gate is needed.

```
QFT_gate = H(0) * QFT_gate
```

Finally, apply a SWAP gate to put the qubits in the right order.

```
QFT_gate = SWAP(0, 2) * QFT_gate
```

This completes the quantum Fourier transform circuit for $n=3$. The circuit itself is somewhat involved.

```
QFT_gate
```

Applying this circuit to the input vector $|x\rangle$ yields the correctly Fourier transformed state ($y_0=1, y_1=\cdots=y_7=0$):

```
simplify(qapply(QFT_gate * input))
```

Readers are encouraged to run this circuit on a variety of inputs and check that the Fourier transform is carried out correctly.

---

### Column: on computational complexity

What does it mean to say that "a quantum computer can compute fast"? Let us think about this using the quantum Fourier transform of this section as an example.

The number of gate operations needed for the quantum Fourier transform is $n$ for the first qubit, $n-1$ for the second, ..., and 1 for the $n$-th, i.e. $n(n+1)/2$ in total, plus roughly $n/2$ final SWAP operations, so $\mathcal{O}(n^2)$ operations altogether (see the note below if you want to know more about $\mathcal{O}$ notation).

By contrast, the [fast Fourier transform](https://ja.wikipedia.org/wiki/高速フーリエ変換) on a classical computer requires $\mathcal{O}(n2^n)$ operations for the same computation. In this sense, the quantum Fourier transform can be called "fast" compared with the classical fast Fourier transform.

This looks like good news at first sight, but there is a catch. The Fourier coefficients $\{y_k\}$ are embedded as the probability amplitudes of the post-QFT state $|y\rangle$, and naively reading out these amplitudes requires **an exponential number of repeated measurements**. Moreover, preparing the input $|x\rangle$ in the first place is not simple either (done naively, it again takes exponential time).

Thus, putting quantum computers and quantum algorithms to "practical" use is not easy, and a great deal of further ingenuity and technological development is still needed.

For details on which problems quantum computers are believed to be fast for, and how this is treated theoretically, see the Qmedia article [「量子計算機が古典計算機より優れている」とはどういうことか](https://www.qmedia.jp/computational-complexity-and-quantum-computer/) (by 竹嵜智之).

#### A note on big-$\mathcal{O}$ notation

How can the performance of an algorithm be evaluated quantitatively, in the first place? Here we take the resources required to run it, chiefly time, as the criterion. Specifically, with the problem size denoted $n$, we ask how the required computational resources, such as the number of computation steps (time) and the memory consumed, behave as functions of $n$. (The problem size is, for example, the number of records to sort, or the number of binary digits of the number to be factored.)

Suppose, for example, that for a problem of size $n$ an algorithm requires computational resources given by

$$
f(n) = 2n^2 + 5n + 8.
$$

When $n$ is sufficiently large (say $n=10^{10}$), $5n$ and $8$ are negligible compared with $2n^2$. From the standpoint of evaluating this algorithm, the factor $5n+8$ is therefore unimportant. Likewise, the fact that the coefficient of $n^2$ is $2$ does not affect the behaviour for large $n$. Thus we may regard the single **"strongest"** term of the running time $f(n)$ as carrying the important information. This way of thinking is called asymptotic evaluation, and in the big-$\mathcal{O}$ notation of complexity theory it is written

$$f(n) = \mathcal{O}(n^2).$$

In general, $f(n) = \mathcal{O}(g(n))$ means that there exist positive numbers $n_0, c$ such that for all $n > n_0$,

$$|f(n)| \leq c |g(n)|.$$

In the example above, taking $n_0=7, c=3$ satisfies this definition (try plotting the graphs). As an exercise, find a pair $n_0, c$ that establishes $f(n) = \mathcal{O}(n^3)$ for $f(n) = 6n^3 + 5n$.

In evaluating the performance of an algorithm, the required computational resources are expressed as a function of the input size $n$. Asymptotic evaluation in big-$\mathcal{O}$ notation is especially convenient for grasping the behaviour as the input size grows, and computational complexity theory, built on such asymptotic evaluation, is used to classify all kinds of algorithms. See the Qmedia article above for details.
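As a numerical cross-check of the gate-by-gate construction described above (this sketch is not part of the original notebook; the helper names `kron_all`, `single`, `cphase`, `swap02`, `qft_matrix` are our own), the following numpy code builds the $n=3$ circuit as an $8\times8$ matrix, with qubit 0 as the leftmost (most significant) bit $j_1$, and compares it with the DFT matrix of eq. (1):

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)

def qft_matrix(n):
    # Direct definition: y_k = (1/sqrt(2^n)) * sum_j x_j e^{2*pi*i*jk/2^n}
    N = 2 ** n
    j, k = np.meshgrid(np.arange(N), np.arange(N))
    return np.exp(2j * np.pi * j * k / N) / np.sqrt(N)

def kron_all(ops):
    out = np.array([[1.0 + 0j]])
    for op in ops:
        out = np.kron(out, op)
    return out

def single(op, q, n):
    # op acting on qubit q (qubit 0 = leftmost factor), identity elsewhere
    return kron_all([op if i == q else I2 for i in range(n)])

def cphase(ctrl, targ, l, n):
    # Controlled R_l: phase e^{2*pi*i/2^l} on |1> of targ when ctrl is |1>
    P0 = np.diag([1, 0]).astype(complex)
    P1 = np.diag([0, 1]).astype(complex)
    Rl = np.diag([1, np.exp(2j * np.pi / 2 ** l)])
    ops0 = [P0 if i == ctrl else I2 for i in range(n)]
    ops1 = [P1 if i == ctrl else (Rl if i == targ else I2) for i in range(n)]
    return kron_all(ops0) + kron_all(ops1)

def swap02():
    # SWAP of qubits 0 and 2 on 3 qubits, as a basis permutation
    P = np.zeros((8, 8), dtype=complex)
    for b in range(8):
        b0, b1, b2 = (b >> 2) & 1, (b >> 1) & 1, b & 1
        P[(b2 << 2) | (b1 << 1) | b0, b] = 1
    return P

# Time order: H(q0), CR2(q1->q0), CR3(q2->q0), H(q1), CR2(q2->q1), H(q2), SWAP
U = (swap02()
     @ single(H, 2, 3)
     @ cphase(2, 1, 2, 3)
     @ single(H, 1, 3)
     @ cphase(2, 0, 3, 3)
     @ cphase(1, 0, 2, 3)
     @ single(H, 0, 3))

print(np.allclose(U, qft_matrix(3)))
```

If the construction is right, the circuit matrix coincides with the DFT matrix, confirming that $\mathcal{O}(n^2)$ one- and two-qubit gates reproduce the full $2^n \times 2^n$ transform.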
# Lab 2: cleaning operations practice with the Adult dataset

In this lab, we will practice what we learned in the cleaning operations lab, but now we use a larger dataset, __Adult__, which we already used in the previous lab. We start by loading the data as we have done before, as well as the necessary libraries. We will look at how to generate train/validation/test partitions, as well as how to do some cleaning of outliers on those, or balancing of training sets. We will also look at how to assess the problem of missing values and how to impute those by a couple of techniques.

## Loading the data

We begin by loading the data as we have done before and printing the `.head()` and `.tail()` to inspect the data. Also produce a `countplot` of the target variable _income_ to observe the distribution of classes.

Load the data as the _Adult_data_ data frame. We will use that throughout the lab. As in the previous lab, we specify the columns, which are: "age", "workclass", "fnlwgt", "education", "educational-num", "marital-status", "occupation", "relationship", "race", "gender", "capital-gain", "capital-loss", "hours-per-week", "native-country".

We will also specify two lists, one which contains the __categorical columns__, and one which contains the __numeric columns__ of interest. The categorical columns of interest are: "workclass", "education", "marital-status", "occupation", "relationship", "race", "gender", "native-country". The numeric columns are: "age", "educational-num", "capital-gain", "capital-loss", "hours-per-week". We will exclude _fnlwgt_ as it is not a particularly useful variable, and we will see this soon.

We can start by creating a `boxplot` of the _Adult_data_ frame (which will include the numeric variables only by default). What can we see in it? Are there any problematic variables?

We can now designate our X or input variables. Once assigned, look at the `.head()` of X.
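On a toy stand-in for _Adult_data_ (the real lab uses the full column lists above; the values here are made up purely for illustration), designating X from the categorical and numeric column lists might look like:

```python
import pandas as pd

# Toy stand-in for Adult_data; in the lab this comes from the loaded CSV
Adult_data = pd.DataFrame({
    "age": [39, 50, 38],
    "workclass": ["State-gov", "Self-emp-not-inc", "Private"],
    "educational-num": [13, 13, 9],
    "hours-per-week": [40, 13, 40],
    "income": ["<=50K", "<=50K", ">50K"],
})
CATEGORICAL_COLUMNS = ["workclass"]
CONTINUOUS_COLUMNS = ["age", "educational-num", "hours-per-week"]

# X holds the input variables: the categorical and numeric columns of interest
X = Adult_data[CATEGORICAL_COLUMNS + CONTINUOUS_COLUMNS]
print(X.head())
```

The same selection pattern applies directly to the full _Adult_data_ frame with the complete column lists.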
Now designate the outcome or target variable as _y_ and look at the `.head()` to see what we get.

## Sampling (train/validation/test)

First, let us divide the Adult dataset into train/validation/test partitions. We first designate 20% for a test partition, calling it _test_X_. The remainder we can call _part1_X_, a first partition to be subdivided later. We then subdivide the partition _part1_X_ into train/validation; for this second split we again designate 20% as the validation set. We will not use the test set until the final stage of testing the model, but we can use the validation set to test any intermediate decisions as we later build models for classification.

So we start by sampling and dividing the original X, y into part1_X/test_X and part1_y/test_y with the `train_test_split` method, and looking at each with the `describe()` method.

Now we sample by dividing the _part1_X_ partition into a train/validation partition, and we also inspect it with `.describe()`. We can compare _train_X, val_X and test_X_, the three sets we have obtained.

## Outlier detection

We now start looking at outliers. For this purpose, let us consider only the continuous columns we have already defined, so X can be equal to the CONTINUOUS_COLUMNS of the data frame. We can create a new train_X, which we can call train_OL_X, with the CONTINUOUS_COLUMNS only.

We now try to detect outliers, first with the DBSCAN algorithm. Since this file is rather large we do not print the objects with their allocation (outliers designated as -1, or not outliers), but we can print the total number of outliers found. We can also alter the parameters `min_samples` and `eps` to see the effect on the outliers detected. Once you have the code working, experiment with the algorithm parameters to get a not-too-large number of outliers. We apply this on the train data only, and specifically on the version with continuous columns, i.e. train_OL_X.
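A minimal, runnable sketch of the DBSCAN detection step, using synthetic numeric data as a stand-in for train_OL_X (the `eps` and `min_samples` values are just starting points to experiment with):

```python
import numpy as np
from sklearn.cluster import DBSCAN

# Synthetic stand-in: a 2-D Gaussian bulk plus one planted extreme point
rng = np.random.default_rng(0)
train_OL_X = np.vstack([rng.normal(0, 1, size=(200, 2)),
                        [[8.0, 8.0]]])          # planted outlier

# Points labelled -1 are noise (outliers); others get a cluster id
clusters = DBSCAN(eps=0.8, min_samples=5).fit_predict(train_OL_X)
n_outliers = (clusters == -1).sum()
print("total outliers found:", n_outliers)
```

Increasing `eps` or lowering `min_samples` makes the algorithm more permissive, so fewer points end up labelled -1.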
We can now create a mask or filter to ensure only those rows that are not outliers are retained in a new data frame that we can later use for classification. Let us create a new output variable _y1_ and input set of variables _X1_ which contain a filtered version of the original data frame. For this, we can create a mask which takes the value of `clusters != -1`. This will be a boolean array which we can then use to filter _y_ into a new version _y1_, and similarly _X_ into a new version _X1_. Check the shape of the new X and y with `.shape` to see the size of each. The number of rows should be equal to the rows in the original data frame minus the rows that were designated as outliers.

Note that we need to filter the data frame that contains all the columns (_train_X, train_y_), and not just the numeric ones, as all columns will be needed for the classification algorithms.

Let us now do similarly but using the `IsolationForest` algorithm. Again, investigate the parameters to understand how many outliers are found as we change those parameters.

Again, we can create a mask or filter to ensure only those rows that are not outliers are retained in a new data frame that we can later use for classification. Let us create a new output variable _y2_ and input set of variables _X2_ which contain a filtered version of the original data frame, this time with the isolation-forest filter. For this, we can create a mask which takes the value of `preds != -1`. This will be a boolean array which we can then use to filter _train_y_ into a new version _y2_, and similarly _train_X_ into a new version _X2_. Again check the shape of the new X and y with `.shape` to see the size of each.

Finally, we can try to run the `LocalOutlierFactor` algorithm on the Adult data.
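A minimal `LocalOutlierFactor` sketch on synthetic data standing in for the numeric columns, including the same `preds != -1` masking pattern used above (the `n_neighbors` and `contamination` values are just starting points):

```python
import numpy as np
from sklearn.neighbors import LocalOutlierFactor

# Synthetic stand-in: 2-D Gaussian bulk plus one planted extreme point
rng = np.random.default_rng(1)
X_num = np.vstack([rng.normal(0, 1, size=(200, 2)),
                   [[10.0, 10.0]]])              # planted outlier

lof = LocalOutlierFactor(n_neighbors=20, contamination=0.01)
preds = lof.fit_predict(X_num)   # -1 = outlier, 1 = inlier

mask = preds != -1               # boolean filter keeping the inliers
X_clean = X_num[mask]
print("outliers:", (preds == -1).sum(), "rows kept:", X_clean.shape[0])
```

The same mask would be applied to both the full-column training frame and the target series so that rows stay aligned.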
Once this is done, if you wish to visualise the outliers, you could produce a graph similar to the one produced in the _CleaningExamples_ lab, but this time plot, for example, _age_ versus _educational_num_ (columns 0 and 1). You may not need to use limits on the x and y axes for this plot, or you will need to adapt them to the right values.

Again, we filter to ensure only those rows that are not outliers are retained in a new data frame that we can later use for classification. Let us create a new output variable _y3_ and input set of variables _X3_ which contain a filtered version of the original data frame, this time with the LOF algorithm filter, similar to the previous two cells.

If you wish to save any of the data frames you have created to load them elsewhere, you can do that with the `.to_csv()` method. You can pass as parameters the path and file name, and `index = False` if you don't wish to save the index. Alternatively, you can repeat the code above to get the data frame in a later lab.

## Balancing of the data

Now we will practice balancing the data. We can apply balancing operations to the original training data, or we could apply them to any of the versions with outliers removed if we later decided that removing the outliers may be beneficial. Let us use the original training data, ignoring outlier removal for the time being.

We can start by producing a count of how many rows there are for each label, using the `value_counts()` method on the _train_y_ series.

Now we will try to produce a balanced sample, but instead of doing it from the whole file, as we do not want to balance the test data, we will do it from the training data only, the _train_X_ data frame. First we need to concatenate the X and y parts of the training data to apply balancing. We can separate them again later.

We can start by trying to upsample the minority class so both classes have an equal number of samples.
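The upsampling step can be sketched with `sklearn.utils.resample` on a toy stand-in for the concatenated training data (column names and class labels mimic Adult; the values are made up):

```python
import pandas as pd
from sklearn.utils import resample

# Toy stand-in for the concatenated training data (X part + y part)
train = pd.DataFrame({"age": range(10),
                      "income": ["<=50K"] * 7 + [">50K"] * 3})

majority = train[train["income"] == "<=50K"]
minority = train[train["income"] == ">50K"]

# Sample the minority class WITH replacement up to the majority size
minority_up = resample(minority, replace=True,
                       n_samples=len(majority), random_state=42)
train_up = pd.concat([majority, minority_up])
print(train_up["income"].value_counts())
```

Downsampling is the mirror image: resample the majority class without replacement down to `len(minority)`.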
We can look at the statistics of the upsampled data, together with the new value counts. We may now want to produce another `countplot` to compare the class imbalance. Additionally, we may produce a `stripplot` to understand how the data was distributed for the two classes in the original data frame, _Adult_data_, and then another one for how it is distributed in the new upsampled data, for comparison.

Now, we do similarly, but this time we downsample the majority class to produce a reduced balanced dataset. We look at value counts, and after that we can produce a `countplot` to look at the distribution of values in the classes. Again, a `stripplot` can show the distribution of points within the classes in the downsampled data frame.

Now you could choose to use either your downsampled or upsampled training data as the data to classify. For this you will need to divide the _X_ part (decision variables) from the _y_ part (target variable) before feeding it to any classification algorithms. Attempt that for the upsampled data.

## Missing data

Now we will get to deal with missing data. The first thing is to understand how missing data, if there is any, is represented in the dataset we are looking at. The distribution graphs we did in the previous lab for the Adult data frame showed that _occupation_, _workclass_ and _native_country_ appeared to have missing values represented as '?'.

We can start by replacing all values of _'?'_ in the data frame with _nan_, the representation of missing data in _numpy_. For this, the `.replace()` method can be used, with the first parameter being what we want to replace, i.e. '?', and the second being what we want to replace it with, i.e. _nan_. We need to make missing data consistently represented in the whole dataset, so we apply this to the _Adult_data_ data frame.

Now, we can count how many missing (i.e. nan) values there are in the data frame as a whole, and then how many there are in each column or variable.
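The replace-and-count steps can be sketched on a toy frame that uses '?' as the missing-value marker, as Adult does (the values are made up for illustration):

```python
import numpy as np
import pandas as pd

# Toy stand-in with '?' marking missing values, as in the Adult data
df = pd.DataFrame({"workclass": ["Private", "?", "State-gov"],
                   "occupation": ["Sales", "?", "?"],
                   "age": [39, 50, 38]})

# Make missing data consistently represented as NaN
df = df.replace("?", np.nan)

print(df.isna().sum().sum())   # total missing values in the whole frame
print(df.isna().sum())         # missing values per column
```

Applied to the full _Adult_data_ frame, the per-column counts reveal which variables (e.g. _occupation_, _workclass_, _native_country_) carry the missing values.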
We now start by removing all rows that contain a missing value. How many rows are left?

And finally, we look at how to impute the data with the `KNNImputer` from sklearn. We can then use `.describe()` on the new data frame to understand what the imputed data may look like.

Note that we could have used imputation only on the training part of the data frame, although if the classification algorithm we are going to use does not accept missing data, then we may need to apply the imputation on the whole dataset as we just did.

Now that is all for this lab!!! We have created a number of data frames we may use in later labs for classification, so make sure you save your work ready for re-use later.
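As a final reference for the imputation step above, a minimal `KNNImputer` sketch on a toy numeric frame (column names borrowed from Adult; the values and `n_neighbors` choice are made up):

```python
import numpy as np
import pandas as pd
from sklearn.impute import KNNImputer

# Toy numeric frame with a couple of NaNs to fill
num = pd.DataFrame({"age": [39, 50, np.nan, 38, 41],
                    "hours-per-week": [40, 13, 40, 40, np.nan]})

# Each NaN is replaced by the mean of that feature over the 2 nearest rows
imputer = KNNImputer(n_neighbors=2)
num_imputed = pd.DataFrame(imputer.fit_transform(num), columns=num.columns)
print(num_imputed.describe())
```

Note that `KNNImputer` works on numeric columns; categorical columns would need encoding first, or a different imputation strategy.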