# SAS ODA and Python Integration to Analyze COVID-19 Data

The purpose of this notebook is to illustrate how Python code can be integrated with calls to SAS ODA in order to solve a particular problem of interest. In the course of this document, we will load the NYT COVID-19 data set. As the NYT data set contains raw cumulative values only, we will also load a census data set that contains estimates for the US population in 2019. We will combine the information from both data sets to calculate the number of cases and deaths per 100,000 residents of each state on each day for which we have data. Afterwards, we will use a few different techniques to visualize the cases per 100,000 for the various states.

## NYT Data Acquisition

Our first step is to start a connection with the SAS servers. We use the "SASPy" Python package (installed locally) and its [`SASsession` method](https://sassoftware.github.io/saspy/api.html#saspy.SASsession) to establish this connection.

```
import saspy

sas_session = saspy.SASsession()
```

With the connection established, we can use the [`submit` method](https://sassoftware.github.io/saspy/api.html#saspy.SASsession.submit) to run SAS code from our Python interface. This method returns the SAS output and log messages as a Python dictionary, which can then be queried for either component.

```
results_dict = sas_session.submit(
    """
    filename nyt_url url 'https://raw.githubusercontent.com/nytimes/covid-19-data/master/us-counties.csv';

    data us_counties_nyt;
        length Date 8 County $ 30 statename $ 30 FIPS $ 6 Cases 8 Deaths 8 WeekOf 8;
        format Date date9. WeekOf date9.;
        infile nyt_url dlm=',' missover dsd firstobs=2;
        input date : yymmdd10. county statename FIPS cases deaths;
        /* Adding Week ending value for easier summary later */
        WeekOf = intnx('week',date,0,'E');
    run;
    """,
)
```

To view the SAS log of the previous operation, you would run the command `print(results_dict["LOG"])` in Python.
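The dictionary returned by `submit` has two keys, `LOG` and `LST`. A quick way to scan the log for problems is to pull out the `ERROR:` and `WARNING:` lines; the sketch below uses a small stand-in log string, since reproducing the real one requires a live SAS session.

```python
# Hypothetical stand-in for the dictionary that sas_session.submit() returns
results_dict = {
    "LST": "<html>...listing output...</html>",
    "LOG": ("1    data us_counties_nyt;\n"
            "NOTE: The data set WORK.US_COUNTIES_NYT has 3 observations.\n"
            "ERROR: File NYT_URL cannot be opened."),
}

def log_errors(log: str) -> list:
    """Return the ERROR/WARNING lines from a SAS log."""
    return [line for line in log.splitlines()
            if line.startswith(("ERROR", "WARNING"))]

print(log_errors(results_dict["LOG"]))
```

This is handy in notebooks because the full SAS log can run to hundreds of lines while only the flagged lines matter for debugging.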
SAS has a built-in data set with information about the US, including the 2-letter state codes. We will use an inner join in `proc sql` to attach this 2-letter code to our data rows. We specifically use an inner join because the NYT data set includes data on some US territories, which we wish to exclude from our analysis in order to focus only on the US states. In the last line, we use the [`sd2df` method](https://sassoftware.github.io/saspy/api.html#saspy.SASsession.sd2df) on our SAS session to move the data from SAS to Python for further processing.

```
results_dict = sas_session.submit(
    """
    proc sql noprint;
        create table NYT_joined as
        select nyt.Date, nyt.County, nyt.statename as State,
               usd.Statecode as StateCode, nyt.Cases, nyt.Deaths
        from work.US_COUNTIES_NYT as nyt
        inner join sashelp.us_data as usd
        on nyt.statename=usd.statename;
    quit;
    """,
)

nyt_df = sas_session.sd2df("NYT_joined")
```

Now that we have the data available in Python, we will load the various Python packages we will need.

```
# this is for type-hinting in function definitions:
from typing import List, Dict

# some standard imports from Python for this
# type of work
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

# some nice imports to make life easier
# 1) make matplotlib aware of Pandas DateTime format
from pandas.plotting import register_matplotlib_converters
register_matplotlib_converters()
# 2) create nicer date formatting in plots
import matplotlib.dates as mdates
```

Since the source data set we imported lists data from various counties separately, we first want to simplify our work by adding up all the cases and deaths in each state so that we have only one row of data per state per date. Since this is a common class of problem, we will write a small function to do this task for us. This approach is similar to writing and using a macro in SAS.
```
def make_state_summary(df: pd.DataFrame) -> pd.DataFrame:
    """
    Function to process the initial data in two ways:
    1) Filter down the columns to the important ones,
       dropping columns that we don't need for our analysis.
    2) Each state is broken down into counties in the NYT
       data set, but we want state level information.
       We sum across the counties in the state.
    Overall, this function is comparable to a "proc summary"
    with a sum statistic in SAS.
    """
    # filter out unnecessary information. Think of a SAS 'keep' statement.
    df = df.filter(['Date', 'State', 'Cases', 'Deaths', 'StateCode'])
    # sums up the data by 'Date', 'State', 'StateCode'
    # - this returns state-level 'Cases' and 'Deaths'
    short = df.groupby(['Date', 'State', 'StateCode'], as_index=False).sum()
    return short

# call our function to apply the manipulation from the
# `make_state_summary` function.
df = make_state_summary(nyt_df)
```

Let's verify the data types to make sure we have everything we need. It is important that the `Date` variable is listed as `datetime64[ns]` as opposed to `object`, which is a string format rather than the numeric date format we want. If this variable is listed as an object, we can run the line `df.Date = pd.to_datetime(df.Date)` to fix the problem. We run the conditional fix and print the data types of all columns to make sure we have the correct types for further analysis.

```
# verify that Date is not a string format,
# fix it otherwise.
if df["Date"].dtype == df["State"].dtype:
    # Date still has the string ('object') dtype, so convert it
    df.Date = pd.to_datetime(df.Date)

df.dtypes
```

## Updating our Data Set with the Census Information

Since we ultimately want to figure out the number of cases and deaths per 100,000 residents of each state, we use a data set from the census bureau which includes population estimates for 2019. We use the `filter` method (similar to a `keep` in SAS) to only load the columns we are interested in, including the actual values from the 2010 census, as well as the Census Bureau's estimates for the year 2019.
```
census_url = "http://www2.census.gov/programs-surveys/popest/datasets/2010-2019/national/totals/nst-est2019-alldata.csv?#"

pop_set = pd.read_csv(census_url).filter(['REGION', 'DIVISION', 'STATE', 'NAME',
                                          'CENSUS2010POP', 'ESTIMATESBASE2010',
                                          'POPESTIMATE2019'])
```

Now that we have both data sets available in memory, we will calculate the case-load and death-toll for each state and date given the 2019 estimate. The calculated values are appended as new columns to our data set.

```
def update_case_load(source: pd.DataFrame, census: pd.DataFrame) -> pd.DataFrame:
    """
    Function to update a dataframe to include case-load and death-toll
    per 100,000 residents using a census data set as look-up table
    for population values.
    """
    # for loop iterates over all rows in the 'source' dataframe
    for index, row in source.iterrows():
        state = row["State"]  # looks up current statename of row
        # then looks up the "POPESTIMATE2019" column value associated with
        # that state in the `census` dataframe.
        pop = census[census.NAME == state]["POPESTIMATE2019"].to_numpy()[0]
        # use the population value to calculate cases/deaths per 100,000 residents
        cases_per_100k = 1e5*row["Cases"]/pop
        deaths_per_100k = 1e5*row["Deaths"]/pop
        # update `source` dataframe with three new column values
        source.loc[index, "Population"] = pop
        source.loc[index, "CPM"] = cases_per_100k
        source.loc[index, "DPM"] = deaths_per_100k
    return source

# run the function to actually apply the calculations
# defined in the `update_case_load` function.
df = update_case_load(df, pop_set)
```

At this stage, we have two Pandas dataframes in memory: the `pop_set` dataframe, which was used as a look-up table for state population information, and the main dataframe `df`, which contains the following columns of information we want for our visualizations:

```
df.dtypes
```

## Simple Plot Visualization

Let's start with a few simple visualizations to compare different states.
To make it easier, we create a short function that subsets the necessary data, followed by a short function to do the plotting with the output data set.

```
def state_sets(df: pd.DataFrame, States: List) -> Dict:
    """
    This function is similar to a data step in SAS. It takes in
    a list of state-codes of interest together with the main
    dataframe and returns a dictionary where each statecode is
    mapped to a dataframe containing only the information from
    that state.
    """
    # use a quick dictionary comprehension to subset the data
    out_dict = {state: df[df.StateCode == state] for state in States}
    return out_dict

def line_plot_states(states_of_interest: Dict, min_date: str = "2020-03-01"):
    """
    Convenience function to do the plotting. Takes a dictionary of
    states and a start date and then makes a line plot of the
    'cases per 100,000' variable in all states listed in the dictionary.
    """
    # define plot size
    fig, ax = plt.subplots(figsize=(10, 5.625))
    # iterates over the dictionary and adds each state's
    # line to the plot
    for key, data in states_of_interest.items():
        subdata = data[data.Date >= pd.to_datetime(min_date)]
        ax.plot(subdata.Date, subdata.CPM, label=key)
    ax.legend()  # turns on the legend
    # make the axes pretty
    fig.autofmt_xdate()
    ax.yaxis.set_major_formatter(plt.FuncFormatter(lambda x, loc: "{:,}".format(int(x))))
    ax.xaxis.set_major_formatter(mdates.DateFormatter('%m-%d'))
    ax.set_ylabel('Cases per 100,000')
    plt.show(fig)  # necessary to display the plot
```

Now all we need to do is to create a list of states of interest and pass them to our function, along with an optional start date. Say we are interested in comparing the cases per 100,000 residents over time for several different states. Then our code would look as follows:

```
# list of states, sort it so that the legend is alphabetical
# Try out different states!
state_list = sorted(["AZ", "CA", "NC", "NJ", "AR"])

# get the dictionary of state data out
states_of_interest = state_sets(df, state_list)

# does the plotting
line_plot_states(states_of_interest, "2020-09-01")
```

## Making the Map

Making maps and plotting over them is hard. Luckily, SAS has a few special procedures available for this. To make our work easier, we will first collect the necessary information for the map from our Python data set and then export it to SAS for plotting. We'll pick data corresponding to a single date and upload the data set to SAS.

```
# list of dates of interest
# note that the SAS code below expects only one date,
# so if you choose to make a list of multiple dates here,
# please also update the SAS code below to pick a specific
# date for plotting.
# Use format 'YYYY-MM-DD' for the dates
dates_of_interest = ["2021-06-01"]

# uses the above list to subset the dataframe
sub_df = df[df.Date.isin(dates_of_interest)]

# uploads the dataframe to SAS under the name
# work.map_data
sas_session.df2sd(sub_df, table="map_data")
```

We first want to make a choropleth map of the situation. This allows us to use a color scheme to differentiate between different classes of states, based on the CPM value. Well, `gmap` to the rescue. We use the `midpoints=old` option so that the Nelder algorithm determines the appropriate ranges and midpoints.

```
%%SAS sas_session
proc gmap data=work.map_data map=mapsgfk.us all;
    id STATECODE;
    format CPM COMMA10.;
    choro CPM / midpoints=old;
run;
```

By changing the code slightly, we can also create a gradient map of cases.

```
%%SAS sas_session
proc sgmap mapdata=mapsgfk.us maprespdata=map_data;
    choromap cpm / mapid=statecode name='choro';
    format cpm COMMA10.;
    gradlegend 'choro' / title='Cumulative Cases per 100,000' extractscale;
run;
```
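As a closing note on the data preparation, the row-by-row `update_case_load` loop from earlier can also be written as a single vectorized merge, which is much faster on large frames. A sketch on toy data (the state names and population figures below are illustrative only, not the real census values):

```python
import pandas as pd

# Toy stand-ins for the state summary and census frames
df = pd.DataFrame({
    "State": ["Alabama", "Alaska", "Alabama"],
    "Cases": [1000, 500, 2000],
    "Deaths": [10, 5, 20],
})
pop_set = pd.DataFrame({
    "NAME": ["Alabama", "Alaska"],
    "POPESTIMATE2019": [5_000_000, 750_000],
})

# One merge replaces the per-row population look-ups ...
df = df.merge(pop_set.rename(columns={"NAME": "State",
                                      "POPESTIMATE2019": "Population"}),
              on="State", how="left")

# ... and the per-100,000 columns are computed column-wise
df["CPM"] = 1e5 * df["Cases"] / df["Population"]
df["DPM"] = 1e5 * df["Deaths"] / df["Population"]
print(df[["State", "Population", "CPM"]])
```

The result carries the same `Population`, `CPM`, and `DPM` columns as the loop version; the merge is roughly analogous to the `proc sql` join we used on the SAS side.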
# ML for Trading: How to run an ML algorithm on Quantopian

The code in this notebook is written for the Quantopian Research Platform and uses the 'Algorithms' rather than the 'Research' option we used before. To run it, you need to have a free Quantopian account, create a new algorithm, and copy the content to the online development environment.

## Imports & Settings

### Quantopian Libraries

```
from quantopian.algorithm import attach_pipeline, pipeline_output, order_optimal_portfolio
from quantopian.pipeline import Pipeline, factors, filters, classifiers
from quantopian.pipeline.data.builtin import USEquityPricing
from quantopian.pipeline.data import Fundamentals
from quantopian.pipeline.data.psychsignal import stocktwits
from quantopian.pipeline.factors import (Latest,
                                         CustomFactor,
                                         SimpleMovingAverage,
                                         AverageDollarVolume,
                                         Returns,
                                         RSI,
                                         SimpleBeta,
                                         MovingAverageConvergenceDivergenceSignal as MACD)
from quantopian.pipeline.filters import QTradableStocksUS
from quantopian.pipeline.experimental import risk_loading_pipeline, Size, Momentum, Volatility, Value, ShortTermReversal
import quantopian.optimize as opt
from quantopian.optimize.experimental import RiskModelExposure
```

### Other Python Libraries

```
from scipy.stats import spearmanr
import talib
import pandas as pd
import numpy as np
from time import time
from collections import OrderedDict
from scipy import stats
from sklearn import linear_model, preprocessing, metrics, cross_validation
from sklearn.pipeline import make_pipeline
```

### Strategy Positions

```
# strategy parameters
N_POSITIONS = 100  # will be split 50% long and 50% short
TRAINING_PERIOD = 126  # past periods for training
HOLDING_PERIOD = 5  # predict returns N days into the future

# how often to trade; for daily trading, the alternative is date_rules.every_day()
TRADE_FREQ = date_rules.week_start()
```

### Custom Universe

We define a custom universe to limit the duration of training.
``` def Q250US(): """Define custom universe""" return filters.make_us_equity_universe( target_size=250, rankby=factors.AverageDollarVolume(window_length=200), mask=filters.default_us_equity_universe_mask(), groupby=classifiers.fundamentals.Sector(), max_group_weight=0.3, smoothing_func=lambda f: f.downsample('month_start'), ) ``` ## Create Alpha Factors ``` def make_alpha_factors(): def PriceToSalesTTM(): """Last closing price divided by sales per share""" return Fundamentals.ps_ratio.latest def PriceToEarningsTTM(): """Closing price divided by earnings per share (EPS)""" return Fundamentals.pe_ratio.latest def DividendYield(): """Dividends per share divided by closing price""" return Fundamentals.trailing_dividend_yield.latest def Capex_To_Cashflows(): return (Fundamentals.capital_expenditure.latest * 4.) / \ (Fundamentals.free_cash_flow.latest * 4.) def EBITDA_Yield(): return (Fundamentals.ebitda.latest * 4.) / \ USEquityPricing.close.latest def EBIT_To_Assets(): return (Fundamentals.ebit.latest * 4.) / \ Fundamentals.total_assets.latest def Return_On_Total_Invest_Capital(): return Fundamentals.roic.latest class Mean_Reversion_1M(CustomFactor): inputs = [Returns(window_length=21)] window_length = 252 def compute(self, today, assets, out, monthly_rets): out[:] = (monthly_rets[-1] - np.nanmean(monthly_rets, axis=0)) / \ np.nanstd(monthly_rets, axis=0) def MACD_Signal(): return MACD(fast_period=12, slow_period=26, signal_period=9) def Net_Income_Margin(): return Fundamentals.net_margin.latest def Operating_Cashflows_To_Assets(): return (Fundamentals.operating_cash_flow.latest * 4.) / \ Fundamentals.total_assets.latest def Price_Momentum_3M(): return Returns(window_length=63) class Price_Oscillator(CustomFactor): inputs = [USEquityPricing.close] window_length = 252 def compute(self, today, assets, out, close): four_week_period = close[-20:] out[:] = (np.nanmean(four_week_period, axis=0) / np.nanmean(close, axis=0)) - 1. 
    def Returns_39W():
        return Returns(window_length=215)

    class Vol_3M(CustomFactor):
        inputs = [Returns(window_length=2)]
        window_length = 63

        def compute(self, today, assets, out, rets):
            out[:] = np.nanstd(rets, axis=0)

    def Working_Capital_To_Assets():
        return Fundamentals.working_capital.latest / Fundamentals.total_assets.latest

    def sentiment():
        return SimpleMovingAverage(inputs=[stocktwits.bull_minus_bear],
                                   window_length=5).rank(mask=universe)

    class AdvancedMomentum(CustomFactor):
        """ Momentum factor """
        inputs = [USEquityPricing.close, Returns(window_length=126)]
        window_length = 252

        def compute(self, today, assets, out, prices, returns):
            out[:] = ((prices[-21] - prices[-252])/prices[-252] -
                      (prices[-1] - prices[-21])/prices[-21]) / np.nanstd(returns, axis=0)

    def SPY_Beta():
        return SimpleBeta(target=sid(8554), regression_length=252)

    return {
        'Price to Sales': PriceToSalesTTM,
        'PE Ratio': PriceToEarningsTTM,
        'Dividend Yield': DividendYield,
        # 'Capex to Cashflows': Capex_To_Cashflows,
        # 'EBIT to Assets': EBIT_To_Assets,
        # 'EBITDA Yield': EBITDA_Yield,
        'MACD Signal Line': MACD_Signal,
        'Mean Reversion 1M': Mean_Reversion_1M,
        'Net Income Margin': Net_Income_Margin,
        # 'Operating Cashflows to Assets': Operating_Cashflows_To_Assets,
        'Price Momentum 3M': Price_Momentum_3M,
        'Price Oscillator': Price_Oscillator,
        # 'Return on Invested Capital': Return_On_Total_Invest_Capital,
        '39 Week Returns': Returns_39W,
        'Vol 3M': Vol_3M,
        'SPY_Beta': SPY_Beta,
        'Advanced Momentum': AdvancedMomentum,
        'Size': Size,
        'Volatility': Volatility,
        'Value': Value,
        'Short-Term Reversal': ShortTermReversal,
        'Momentum': Momentum,
        # 'Materials': materials,
        # 'Consumer Discretionary': consumer_discretionary,
        # 'Financials': financials,
        # 'Real Estate': real_estate,
        # 'Consumer Staples': consumer_staples,
        # 'Healthcare': health_care,
        # 'Utilities': utilities,
        # 'Telecom ': telecom,
        # 'Energy': energy,
        # 'Industrials': industrials,
        # 'Technology': technology
    }
```

## Custom Machine Learning Factor

Here we
define a Machine Learning factor which trains a model and predicts forward returns ``` class ML(CustomFactor): init = False def compute(self, today, assets, out, returns, *inputs): """Train the model using - shifted returns as target, and - factors in a list of inputs as features; each factor contains a 2-D array of shape [time x stocks] """ if (not self.init) or today.strftime('%A') == 'Monday': # train on first day then subsequent Mondays (memory) # get features features = pd.concat([pd.DataFrame(data, columns=assets).stack().to_frame(i) for i, data in enumerate(inputs)], axis=1) # shift returns and align features target = (pd.DataFrame(returns, columns=assets) .shift(-HOLDING_PERIOD) .dropna(how='all') .stack()) target.index.rename(['date', 'asset'], inplace=True) features = features.reindex(target.index) # finalize features features = (pd.get_dummies(features .assign(asset=features .index.get_level_values('asset')), columns=['asset'], sparse=True)) # train the model self.model_pipe = make_pipeline(preprocessing.Imputer(), preprocessing.MinMaxScaler(), linear_model.LinearRegression()) # run pipeline and train model self.model_pipe.fit(X=features, y=target) self.assets = assets # keep track of assets in model self.init = True # predict most recent factor values features = pd.DataFrame({i: d[-1] for i, d in enumerate(inputs)}, index=assets) features = features.reindex(index=self.assets).assign(asset=self.assets) features = pd.get_dummies(features, columns=['asset']) preds = self.model_pipe.predict(features) out[:] = pd.Series(preds, index=self.assets).reindex(index=assets) ``` ## Create Factor Pipeline Create pipeline with predictive factors and target returns ``` def make_ml_pipeline(alpha_factors, universe, lookback=21, lookahead=5): """Create pipeline with predictive factors and target returns""" # set up pipeline pipe = OrderedDict() # Returns over lookahead days. 
    pipe['Returns'] = Returns(inputs=[USEquityPricing.open],
                              mask=universe,
                              window_length=lookahead + 1)

    # Rank alpha factors:
    pipe.update({name: f().rank(mask=universe)
                 for name, f in alpha_factors.items()})

    # ML factor gets `lookback` datapoints on each factor
    pipe['ML'] = ML(inputs=pipe.values(),
                    window_length=lookback + 1,
                    mask=universe)

    return Pipeline(columns=pipe, screen=universe)
```

## Define Algorithm

```
def initialize(context):
    """
    Called once at the start of the algorithm.
    """
    set_slippage(slippage.FixedSlippage(spread=0.00))
    set_commission(commission.PerShare(cost=0, min_trade_cost=0))

    schedule_function(rebalance_portfolio, TRADE_FREQ,
                      time_rules.market_open(minutes=1))

    # Record tracking variables at the end of each day.
    schedule_function(log_metrics, date_rules.every_day(),
                      time_rules.market_close())

    # Set up universe
    # base_universe = AverageDollarVolume(window_length=63, mask=QTradableStocksUS()).percentile_between(80, 100)
    universe = AverageDollarVolume(window_length=63,
                                   mask=QTradableStocksUS()).percentile_between(40, 60)

    # create alpha factors and machine learning pipeline
    ml_pipeline = make_ml_pipeline(alpha_factors=make_alpha_factors(),
                                   universe=universe,
                                   lookback=TRAINING_PERIOD,
                                   lookahead=HOLDING_PERIOD)
    attach_pipeline(ml_pipeline, 'alpha_model')
    attach_pipeline(risk_loading_pipeline(), 'risk_loading_pipeline')

    context.past_predictions = {}
    context.realized_rmse = 0
    context.realized_ic = 0
    context.long_short_spread = 0
```

## Evaluate Model

Evaluate model performance using past predictions on hold-out data

```
def evaluate_past_predictions(context):
    """Evaluate model performance using past predictions on hold-out data"""
    # A day has passed, shift days and drop old ones
    context.past_predictions = {k-1: v for k, v in context.past_predictions.items()
                                if k-1 >= 0}

    if 0 in context.past_predictions:
        # Past predictions for the current day exist, so we can use
        # today's n-back returns to evaluate them
        returns =
pipeline_output('alpha_model')['Returns'].to_frame('returns') df = (context .past_predictions[0] .to_frame('predictions') .join(returns, how='inner') .dropna()) # Compute performance metrics context.realized_rmse = metrics.mean_squared_error(y_true=df['returns'], y_pred=df.predictions) context.realized_ic, _ = spearmanr(df['returns'], df.predictions) log.info('rmse {:.2%} | ic {:.2%}'.format(context.realized_rmse, context.realized_ic)) long_rets = df.loc[df.predictions >= df.predictions.median(), 'returns'].mean() short_rets = df.loc[df.predictions < df.predictions.median(), 'returns'].mean() context.long_short_spread = (long_rets - short_rets) * 100 # Store current predictions context.past_predictions[HOLDING_PERIOD] = context.predictions ``` ## Algo Execution ### Prepare Trades ``` def before_trading_start(context, data): """ Called every day before market open. """ context.predictions = pipeline_output('alpha_model')['ML'] context.predictions.index.rename(['date', 'equity'], inplace=True) context.risk_loading_pipeline = pipeline_output('risk_loading_pipeline') evaluate_past_predictions(context) ``` ### Rebalance ``` def rebalance_portfolio(context, data): """ Execute orders according to our schedule_function() timing. 
""" predictions = context.predictions predictions = predictions.loc[data.can_trade(predictions.index)] # Select long/short positions n_positions = int(min(N_POSITIONS, len(predictions)) / 2) to_trade = (predictions[predictions>0] .nlargest(n_positions) .append(predictions[predictions < 0] .nsmallest(n_positions))) # Model may produce duplicate predictions to_trade = to_trade[~to_trade.index.duplicated()] # Setup Optimization Objective objective = opt.MaximizeAlpha(to_trade) # Setup Optimization Constraints constrain_gross_leverage = opt.MaxGrossExposure(1.0) constrain_pos_size = opt.PositionConcentration.with_equal_bounds(-.02, .02) market_neutral = opt.DollarNeutral() constrain_risk = RiskModelExposure( risk_model_loadings=context.risk_loading_pipeline, version=opt.Newest) # Optimizer calculates portfolio weights and # moves portfolio toward the target. order_optimal_portfolio( objective=objective, constraints=[ constrain_gross_leverage, constrain_pos_size, market_neutral, constrain_risk ], ) ``` ### Track Performance ``` def log_metrics(context, data): """ Plot variables at the end of each day. """ record(leverage=context.account.leverage, #num_positions=len(context.portfolio.positions), realized_rmse=context.realized_rmse, realized_ic=context.realized_ic, long_short_spread=context.long_short_spread, ) ```
# STUMPY Basics

[![Binder](https://mybinder.org/badge_logo.svg)](https://mybinder.org/v2/gh/TDAmeritrade/stumpy/main?filepath=notebooks/Tutorial_STUMPY_Basics.ipynb)

## Analyzing Motifs and Anomalies with STUMP

This tutorial utilizes the main takeaways from the research papers: [Matrix Profile I](http://www.cs.ucr.edu/~eamonn/PID4481997_extend_Matrix%20Profile_I.pdf) & [Matrix Profile II](http://www.cs.ucr.edu/~eamonn/STOMP_GPU_final_submission_camera_ready.pdf).

To explore the basic concepts, we'll use the workhorse `stump` function to find interesting motifs (patterns) or discords (anomalies/novelties) and demonstrate these concepts with two different time series datasets:

1. The Steamgen dataset
2. The NYC taxi passengers dataset

`stump` is a Numba JIT-compiled version of the popular STOMP algorithm that is described in detail in the original [Matrix Profile II](http://www.cs.ucr.edu/~eamonn/STOMP_GPU_final_submission_camera_ready.pdf) paper. `stump` is capable of parallel computation, performs an ordered search for patterns and outliers within a specified time series, and takes advantage of the locality of some calculations to minimize the runtime.

## Getting Started

Let's import the packages that we'll need to load, analyze, and plot the data.

```
%matplotlib inline

import pandas as pd
import stumpy
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.dates as dates
from matplotlib.patches import Rectangle
import datetime as dt

plt.style.use('https://raw.githubusercontent.com/TDAmeritrade/stumpy/main/docs/stumpy.mplstyle')
```

## What is a Motif?

Time series motifs are approximately repeated subsequences found within a longer time series. Being able to say that a subsequence is "approximately repeated" requires that you be able to compare subsequences to each other.
In the case of STUMPY, all subsequences within a time series can be compared by computing the pairwise z-normalized Euclidean distances and then storing only the index to its nearest neighbor. This nearest neighbor distance vector is referred to as the `matrix profile` and the index to each nearest neighbor within the time series is referred to as the `matrix profile index`. Luckily, the `stump` function takes in any time series (with floating point values) and computes the matrix profile along with the matrix profile indices and, in turn, one can immediately find time series motifs. Let's look at an example: ## Loading the Steamgen Dataset This data was generated using fuzzy models applied to mimic a steam generator at the Abbott Power Plant in Champaign, IL. The data feature that we are interested in is the output steam flow telemetry that has units of kg/s and the data is "sampled" every three seconds with a total of 9,600 datapoints. ``` steam_df = pd.read_csv("https://zenodo.org/record/4273921/files/STUMPY_Basics_steamgen.csv?download=1") steam_df.head() ``` ## Visualizing the Steamgen Dataset ``` plt.suptitle('Steamgen Dataset', fontsize='30') plt.xlabel('Time', fontsize ='20') plt.ylabel('Steam Flow', fontsize='20') plt.plot(steam_df['steam flow'].values) plt.show() ``` Take a moment and carefully examine the plot above with your naked eye. If you were told that there was a pattern that was approximately repeated, can you spot it? Even for a computer, this can be very challenging. 
Here's what you should be looking for:

## Manually Finding a Motif

```
m = 640
fig, axs = plt.subplots(2)
plt.suptitle('Steamgen Dataset', fontsize='30')
axs[0].set_ylabel("Steam Flow", fontsize='20')
axs[0].plot(steam_df['steam flow'], alpha=0.5, linewidth=1)
axs[0].plot(steam_df['steam flow'].iloc[643:643+m])
axs[0].plot(steam_df['steam flow'].iloc[8724:8724+m])
rect = Rectangle((643, 0), m, 40, facecolor='lightgrey')
axs[0].add_patch(rect)
rect = Rectangle((8724, 0), m, 40, facecolor='lightgrey')
axs[0].add_patch(rect)
axs[1].set_xlabel("Time", fontsize='20')
axs[1].set_ylabel("Steam Flow", fontsize='20')
axs[1].plot(steam_df['steam flow'].values[643:643+m], color='C1')
axs[1].plot(steam_df['steam flow'].values[8724:8724+m], color='C2')
plt.show()
```

The motif (pattern) that we are looking for is highlighted above, and yet it is still very hard to be certain that the orange and green subsequences are a match (upper panel), that is, until we zoom in on them and overlay the subsequences on top of each other (lower panel). Now, we can clearly see that the motif is very similar! The fundamental value of computing the matrix profile is that it not only allows you to quickly find motifs but it also identifies the nearest neighbor for all subsequences within your time series. Note that we haven't actually done anything special here to locate the motif except that we grabbed the locations from the original paper and plotted them. Now, let's take our steamgen data and apply the `stump` function to it:

## Find a Motif Using STUMP

```
m = 640
mp = stumpy.stump(steam_df['steam flow'], m)
```

`stump` requires two parameters:

1. A time series
2. A window size, `m`

In this case, based on some domain expertise, we've chosen `m = 640`, which is roughly equivalent to half-hour windows.
And, again, the output of `stump` is an array that contains all of the matrix profile values (i.e., z-normalized Euclidean distance to your nearest neighbor) and matrix profile indices in the first and second columns, respectively (we'll ignore the third and fourth columns for now). To identify the index location of the motif we'll need to find the index location where the matrix profile, `mp[:, 0]`, has the smallest value: ``` motif_idx = np.argsort(mp[:, 0])[0] print(f"The motif is located at index {motif_idx}") ``` With this `motif_idx` information, we can also identify the location of its nearest neighbor by cross-referencing the matrix profile indices, `mp[:, 1]`: ``` nearest_neighbor_idx = mp[motif_idx, 1] print(f"The nearest neighbor is located at index {nearest_neighbor_idx}") ``` Now, let's put all of this together and plot the matrix profile next to our raw data: ``` fig, axs = plt.subplots(2, sharex=True, gridspec_kw={'hspace': 0}) plt.suptitle('Motif (Pattern) Discovery', fontsize='30') axs[0].plot(steam_df['steam flow'].values) axs[0].set_ylabel('Steam Flow', fontsize='20') rect = Rectangle((motif_idx, 0), m, 40, facecolor='lightgrey') axs[0].add_patch(rect) rect = Rectangle((nearest_neighbor_idx, 0), m, 40, facecolor='lightgrey') axs[0].add_patch(rect) axs[1].set_xlabel('Time', fontsize ='20') axs[1].set_ylabel('Matrix Profile', fontsize='20') axs[1].axvline(x=motif_idx, linestyle="dashed") axs[1].axvline(x=nearest_neighbor_idx, linestyle="dashed") axs[1].plot(mp[:, 0]) plt.show() ``` What we learn is that the global minima (vertical dashed lines) from the matrix profile correspond to the locations of the two subsequences that make up the motif pair! 
And the exact z-normalized Euclidean distance between these two subsequences is: ``` mp[motif_idx, 0] ``` So, this distance isn't zero since we saw that the two subsequences aren't an identical match but, relative to the rest of the matrix profile (i.e., compared to either the mean or median matrix profile values), we can understand that this motif is a significantly good match. ## Find Potential Anomalies (Discords) using STUMP Conversely, the index location within our matrix profile that has the largest value (computed from `stump` above) is: ``` discord_idx = np.argsort(mp[:, 0])[-1] print(f"The discord is located at index {discord_idx}") ``` And the nearest neighbor to this discord has a distance that is quite far away: ``` nearest_neighbor_distance = mp[discord_idx, 0] print(f"The nearest neighbor subsequence to this discord is {nearest_neighbor_distance} units away") ``` The subsequence located at this global maximum is also referred to as a discord, novelty, or "potential anomaly": ``` fig, axs = plt.subplots(2, sharex=True, gridspec_kw={'hspace': 0}) plt.suptitle('Discord (Anomaly/Novelty) Discovery', fontsize='30') axs[0].plot(steam_df['steam flow'].values) axs[0].set_ylabel('Steam Flow', fontsize='20') rect = Rectangle((discord_idx, 0), m, 40, facecolor='lightgrey') axs[0].add_patch(rect) axs[1].set_xlabel('Time', fontsize ='20') axs[1].set_ylabel('Matrix Profile', fontsize='20') axs[1].axvline(x=discord_idx, linestyle="dashed") axs[1].plot(mp[:, 0]) plt.show() ``` Now that you've mastered the STUMPY basics and understand how to discover motifs and anomalies from a time series, we'll leave it up to you to investigate other interesting local minima and local maxima in the steamgen dataset. To further develop/reinforce our growing intuition, let's move on and explore another dataset! 
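Before exploring the next dataset, it's worth seeing that the distance measure behind all of these matrix profile values is simple to compute by hand: z-normalize each subsequence (subtract its mean, divide by its standard deviation) and take the Euclidean distance between the results. A sketch in plain NumPy (the toy arrays are illustrative, not taken from the steamgen data):

```python
import numpy as np

def z_norm_dist(a: np.ndarray, b: np.ndarray) -> float:
    """Z-normalized Euclidean distance between two equal-length subsequences."""
    a = (a - a.mean()) / a.std()
    b = (b - b.mean()) / b.std()
    return float(np.linalg.norm(a - b))

# Two subsequences with the same shape but different offset and scale
x = np.array([0., 1., 2., 3., 4.])
y = 10. + 5. * x  # an affine copy of x

# z-normalization removes offset and scale, so the distance is ~0
print(z_norm_dist(x, y))
```

This is why the matrix profile treats two subsequences as a motif pair even when they occur at different amplitudes or baselines: only the shape matters after z-normalization.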
## Loading the NYC Taxi Passengers Dataset First, we'll download historical data that represents the half-hourly average of the number of NYC taxi passengers over 75 days in the Fall of 2014. We extract that data and insert it into a pandas dataframe, making sure the timestamps are stored as *datetime* objects and the values are of type *float64*. Note that we'll do a little more data cleaning than above just so you can see an example where the timestamp is included. But be aware that `stump` does not actually use or need the timestamp column at all when computing the matrix profile. ``` taxi_df = pd.read_csv("https://zenodo.org/record/4276428/files/STUMPY_Basics_Taxi.csv?download=1") taxi_df['value'] = taxi_df['value'].astype(np.float64) taxi_df['timestamp'] = pd.to_datetime(taxi_df['timestamp']) taxi_df.head() ``` ## Visualizing the Taxi Dataset ``` # This code is going to be utilized to control the axis labeling of the plots DAY_MULTIPLIER = 7 # Specify for the amount of days you want between each labeled x-axis tick x_axis_labels = taxi_df[(taxi_df.timestamp.dt.hour==0)]['timestamp'].dt.strftime('%b %d').values[::DAY_MULTIPLIER] x_axis_labels[1::2] = " " x_axis_labels, DAY_MULTIPLIER plt.suptitle('Taxi Passenger Raw Data', fontsize='30') plt.xlabel('Window Start Date', fontsize ='20') plt.ylabel('Half-Hourly Average\nNumber of Taxi Passengers', fontsize='20') plt.plot(taxi_df['value']) plt.xticks(np.arange(0, taxi_df['value'].shape[0], (48*DAY_MULTIPLIER)/2), x_axis_labels) plt.xticks(rotation=75) plt.minorticks_on() plt.margins(x=0) plt.show() ``` It seems as if there is a general periodicity between spans of 1-day and 7-days, which can likely be explained by the fact that more people use taxis throughout the day than through the night and that it is reasonable to say most weeks have similar taxi-rider patterns. 
Also, maybe there is an outlier just to the right of the window starting near the end of October but, other than that, there isn't anything you can conclude from just looking at the raw data. ## Generating the Matrix Profile Again, defining the window size, `m`, usually requires some level of domain knowledge but we'll demonstrate later on that `stump` is robust to changes in this parameter. Since this data was taken half-hourly, we chose a value `m = 48` to represent the span of exactly one day: ``` m = 48 mp = stumpy.stump(taxi_df['value'], m=m) ``` ## Visualizing the Matrix Profile ``` plt.suptitle('1-Day STUMP', fontsize='30') plt.xlabel('Window Start', fontsize ='20') plt.ylabel('Matrix Profile', fontsize='20') plt.plot(mp[:, 0]) plt.plot(575, 1.7, marker="v", markersize=15, color='b') plt.text(620, 1.6, 'Columbus Day', color="black", fontsize=20) plt.plot(1535, 3.7, marker="v", markersize=15, color='b') plt.text(1580, 3.6, 'Daylight Savings', color="black", fontsize=20) plt.plot(2700, 3.1, marker="v", markersize=15, color='b') plt.text(2745, 3.0, 'Thanksgiving', color="black", fontsize=20) plt.plot(30, .2, marker="^", markersize=15, color='b', fillstyle='none') plt.plot(363, .2, marker="^", markersize=15, color='b', fillstyle='none') plt.xticks(np.arange(0, 3553, (m*DAY_MULTIPLIER)/2), x_axis_labels) plt.xticks(rotation=75) plt.minorticks_on() plt.show() ``` ## Understanding the Matrix Profile Let's understand what we're looking at. ### Lowest Values The lowest values (open triangles) are considered a motif since they represent the pair of nearest neighbor subsequences with the smallest z-normalized Euclidean distance. Interestingly, the two lowest data points are *exactly* 7 days apart, which suggests that, in this dataset, there may be a periodicity of seven days in addition to the more obvious periodicity of one day. ### Highest Values So what about the highest matrix profile values (filled triangles)? 
The subsequences with the highest (local) values really emphasize their uniqueness. We found that the top three peaks happened to correspond exactly with the timing of Columbus Day, Daylight Saving Time, and Thanksgiving, respectively.

## Different Window Sizes

As we mentioned above, `stump` should be robust to the choice of the window size parameter, `m`. Below, we demonstrate how varying the window size has little impact on your resulting matrix profile by running `stump` with several window sizes.

```
days_dict ={
    "Half-Day": 24,
    "1-Day": 48,
    "2-Days": 96,
    "5-Days": 240,
    "7-Days": 336,
}

days_df = pd.DataFrame.from_dict(days_dict, orient='index', columns=['m'])
days_df.head()
```

We purposely chose spans of time that correspond to reasonably intuitive day-lengths that could be chosen by a human.

```
fig, axs = plt.subplots(5, sharex=True, gridspec_kw={'hspace': 0})
fig.text(0.5, -0.1, 'Subsequence Start Date', ha='center', fontsize='20')
fig.text(0.08, 0.5, 'Matrix Profile', va='center', rotation='vertical', fontsize='20')
for i, varying_m in enumerate(days_df['m'].values):
    mp = stumpy.stump(taxi_df['value'], varying_m)
    axs[i].plot(mp[:, 0])
    axs[i].set_ylim(0,9.5)
    axs[i].set_xlim(0,3600)
    title = f"m = {varying_m}"
    axs[i].set_title(title, fontsize=20, y=.5)
plt.xticks(np.arange(0, taxi_df.shape[0], (48*DAY_MULTIPLIER)/2), x_axis_labels)
plt.xticks(rotation=75)
plt.suptitle('STUMP with Varying Window Sizes', fontsize='30')
plt.show()
```

We can see that even with varying window sizes, our peaks stay prominent. But it looks as if all the non-peak values are converging towards each other. This is why knowledge of the data context is important prior to running `stump`: it helps you pick a window size that can capture a repeating pattern or anomaly within the dataset.
## GPU-STUMP - Faster STUMP Using GPUs

When you have significantly more than a few thousand data points in your time series, you may need a speed boost to help analyze your data. Luckily, you can try `gpu_stump`, a super fast GPU-powered alternative to `stump` that gives you the speed of a few hundred CPUs and produces the same output as `stump`:

```
import stumpy

mp = stumpy.gpu_stump(taxi_df['value'], m=m)  # Note that you'll need a properly configured NVIDIA GPU for this
```

In fact, if you aren't dealing with PII/SII data, then you can try out `gpu_stump` using [this notebook on Google Colab](https://colab.research.google.com/drive/1FIbHQoD6mJInkhinoMehBDj2E1i7i2j7).

## STUMPED - Distributed STUMP

Alternatively, if you only have access to a cluster of CPUs and your data needs to stay behind your firewall, then `stump` and `gpu_stump` may not be sufficient for your needs. Instead, you can try `stumped`, a distributed and parallel implementation of `stump` that depends on [Dask distributed](https://distributed.dask.org/en/latest/):

```
import stumpy
from dask.distributed import Client

dask_client = Client()

mp = stumpy.stumped(dask_client, taxi_df['value'], m=m)  # Note that a Dask client is needed
```

## Bonus Section

### Understanding the Matrix Profile Columnar Output

For any 1-D time series, `T`, its matrix profile, `mp`, computed from `stumpy.stump(T, m)` will contain 4 explicit columns, which we'll describe in a moment. Implicitly, the `i`th row of the `mp` array corresponds to the set of (4) nearest neighbor values computed for the specific subsequence `T[i : i + m]`.

The first column of `mp` contains the matrix profile (nearest neighbor distance) value, `P` (note that, due to zero-based indexing, the "first column" has a column index value of zero).
The second column contains the (zero-based) index location, `I`, of where the (above) nearest neighbor is located along `T` (note that any negative index values are "bad" values and indicate that a nearest neighbor could not be found). So, for the `i`th subsequence `T[i : i + m]`, its nearest neighbor (located somewhere along `T`) has a starting index location of `I = mp[i, 1]` and, assuming that `I >= 0`, this corresponds to the subsequence found at `T[I : I + m]`. And the matrix profile value for the `i`th subsequence, `P = mp[i, 0]`, is the exact (z-normalized Euclidean) distance between `T[i : i + m]` and `T[I : I + m]`.

Note that the nearest neighbor index location, `I`, can be positioned ANYWHERE. That is, depending upon the `i`th subsequence, its nearest neighbor, `I`, can be located before/to-the-"left" of `i` (i.e., `I <= i`) or come after/to-the-"right" of `i` (i.e., `I >= i`). In other words, there is no constraint on where a nearest neighbor is located. However, there may be a time when you might like to only know about a nearest neighbor that comes either before or after `i`, and this is where columns 3 and 4 of `mp` come into play.

The third column contains the (zero-based) index location, `IL`, of where the "left" nearest neighbor is located along `T`. Here, there is a constraint that `IL < i` or that `IL` must come before/to-the-left of `i`. Thus, the "left nearest neighbor" for the `i`th subsequence would be located at `IL = mp[i, 2]` and corresponds to `T[IL : IL + m]`.

The fourth column contains the (zero-based) index location, `IR`, of where the "right" nearest neighbor is located along `T`. Here, there is a constraint that `IR > i` or that `IR` must come after/to-the-right of `i`. Thus, the "right nearest neighbor" for the `i`th subsequence would be located at `IR = mp[i, 3]` and corresponds to `T[IR : IR + m]`. Again, note that any negative index values are "bad" values and indicate that a nearest neighbor could not be found.
To reinforce this more concretely, let's use the following `mp` array as an example:

```
array([[1.626257115121311, 202, -1, 202],
       [1.7138456780667977, 65, 0, 65],
       [1.880293454724256, 66, 0, 66],
       [1.796922109741226, 67, 0, 67],
       [1.4943082939628236, 11, 1, 11],
       [1.4504278114808016, 12, 2, 12],
       [1.6294354134867932, 19, 0, 19],
       [1.5349365731102185, 229, 0, 229],
       [1.3930265554289831, 186, 1, 186],
       [1.5265881687159586, 187, 2, 187],
       [1.8022253384245739, 33, 3, 33],
       [1.4943082939628236, 4, 4, 118],
       [1.4504278114808016, 5, 5, 137],
       [1.680920620705546, 201, 6, 201],
       [1.5625058007723722, 237, 8, 237],
       [1.2860008417613522, 66, 9, -1]])
```

Here, the subsequence at `i = 0` would correspond to the `T[0 : 0 + m]` subsequence and the nearest neighbor for this subsequence is located at `I = 202` (i.e., `mp[0, 1]`) and corresponds to the `T[202 : 202 + m]` subsequence. The z-normalized Euclidean distance between `T[0 : 0 + m]` and `T[202 : 202 + m]` is actually `P = 1.626257115121311` (i.e., `mp[0, 0]`). Next, notice that the location of the left nearest neighbor is `IL = -1` (i.e., `mp[0, 2]`) and, since negative indices are "bad", this tells us that the left nearest neighbor could not be found. Hopefully, this makes sense since `T[0 : 0 + m]` is the first subsequence in `T` and there are no other subsequences that can possibly exist to the left of `T[0 : 0 + m]`! Conversely, the location of the right nearest neighbor is `IR = 202` (i.e., `mp[0, 3]`) and corresponds to the `T[202 : 202 + m]` subsequence.

Additionally, the subsequence at `i = 5` would correspond to the `T[5 : 5 + m]` subsequence and the nearest neighbor for this subsequence is located at `I = 12` (i.e., `mp[5, 1]`) and corresponds to the `T[12 : 12 + m]` subsequence. The z-normalized Euclidean distance between `T[5 : 5 + m]` and `T[12 : 12 + m]` is actually `P = 1.4504278114808016` (i.e., `mp[5, 0]`). Next, the location of the left nearest neighbor is `IL = 2` (i.e., `mp[5, 2]`) and corresponds to `T[2 : 2 + m]`.
Conversely, the location of the right nearest neighbor is `IR = 12` (i.e., `mp[5, 3]`) and corresponds to the `T[12 : 12 + m]` subsequence.

Similarly, all other subsequences can be evaluated and interpreted using this approach!

### Find Top-K Motifs

Now that you've computed the matrix profile, `mp`, for your time series and identified the best global motif, you may be interested in discovering other motifs within your data. However, you'll immediately learn that doing something like `top_10_motifs_idx = np.argsort(mp[:, 0])[:10]` doesn't actually get you what you want, and that's because this only returns index locations that are likely to be close to the global motif! Instead, after identifying the best motif (i.e., the matrix profile location with the smallest value), you first need to exclude the local area (i.e., an exclusion zone) surrounding the motif pair by setting their matrix profile values to `np.inf` before searching for the next motif. Then, you'll need to repeat this "exclude-and-search" process for each subsequent motif. Luckily, STUMPY offers two additional functions, namely, `stumpy.motifs` and `stumpy.match`, that help simplify this process. While they are beyond the scope of this basic tutorial, we encourage you to check them out!

## Summary

And that's it! You have now loaded a dataset, run it through `stump` using our package, and extracted multiple conclusions about existing patterns and anomalies within two different time series. You can now import this package and use it in your own projects. Happy coding!

## Resources

[Matrix Profile I](http://www.cs.ucr.edu/~eamonn/PID4481997_extend_Matrix%20Profile_I.pdf)

[Matrix Profile II](http://www.cs.ucr.edu/~eamonn/STOMP_GPU_final_submission_camera_ready.pdf)

[STUMPY Documentation](https://stumpy.readthedocs.io/en/latest/)

[STUMPY Matrix Profile Github Code Repository](https://github.com/TDAmeritrade/stumpy)
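As a small aside, the "exclude-and-search" idea from the Top-K Motifs section above can be sketched on a plain NumPy array standing in for a matrix profile. This is a minimal illustration with a hypothetical exclusion zone of ±`m` around each pick (STUMPY's own default exclusion zone differs); in practice, prefer `stumpy.motifs`:

```python
import numpy as np

def top_k_motif_indices(P, m, k):
    """Repeatedly pick the smallest matrix profile value, then blank out
    an exclusion zone of +/- m around the pick before searching again."""
    P = P.astype(float).copy()  # don't clobber the caller's matrix profile
    picks = []
    for _ in range(k):
        idx = int(np.argmin(P))
        if not np.isfinite(P[idx]):
            break  # everything has been excluded
        picks.append(idx)
        P[max(0, idx - m): idx + m + 1] = np.inf  # exclusion zone
    return picks

# Toy "matrix profile": a naive argsort would return [4, 1, 2], where index 2
# is just a trivial neighbor of index 1 rather than a genuinely new motif.
P = np.array([5.0, 1.0, 1.1, 4.0, 0.9, 6.0, 2.0])
print(top_k_motif_indices(P, m=1, k=3))  # → [4, 1, 6]
```

Blanking with `np.inf` (rather than deleting entries) keeps the index positions aligned with the original time series, which is what makes the returned locations directly usable.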
# Autoencoders

[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/m12sl/dl-hse-2021/blob/master/12-xae/seminar.ipynb)

Time to dive into autoencoders.

<img src="https://github.com/m12sl/dl-hse-2021/raw/master/12-xae/img/encoder.png" crossorigin="anonymous"/>

Plan:

- get acquainted with the idea of an AE
- write a class for working with an AE
- look at the internal representations
- run an experiment with interpolation of representations

# What is an Autoencoder?

Usually it is a model consisting of an encoder and a decoder. The encoder maps input examples to an internal representation (a vector, or a tensor with spatial dimensions); the decoder maps the internal representation back into the space of examples.

```
%matplotlib inline
import matplotlib.pyplot as plt
import matplotlib.patches as mpatches
import numpy as np
from tqdm.auto import tqdm
from collections import defaultdict
import os
from pathlib import Path
from IPython.display import clear_output

import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.utils.data import DataLoader
from torchvision import datasets, transforms
from torch.utils.tensorboard import SummaryWriter

# MNIST or FashionMNIST
DATA_ROOT = './tmp'

class VeryMNIST:
    def __init__(self, root, train=False, transform=None):
        self.ds = datasets.MNIST(root, train=train, download=True)
        self.transform = transform

    def __len__(self):
        return len(self.ds)

    def __getitem__(self, item):
        img, label = self.ds[item]
        if self.transform is not None:
            img = self.transform(img)
        return {"image": img, "label": label}

mnist_train = VeryMNIST(DATA_ROOT, train=True, transform=transforms.ToTensor())
mnist_val = VeryMNIST(DATA_ROOT, train=False, transform=transforms.ToTensor())

plt.figure(figsize=[6, 6])
for i in range(4):
    plt.subplot(2, 2, i + 1)
    ddict = mnist_train[i]
    img, label = ddict['image'], ddict['label']
    plt.title("Label: {}".format(label))
    plt.imshow(img[0, ...], cmap='gray')
    plt.axis('off')
```

For training, we'll use our usual Trainer class. It is assumed that the model class implements a `compute_all(batch)` method that returns `loss` and a `details_dict`.

```
class Trainer:
    def __init__(self, model: nn.Module, batch_size: int = 128, run_folder: str = "./runs"):
        self.model = model
        self.batch_size = batch_size

        run_folder = Path(run_folder)
        train_log_folder = run_folder / "train_log"
        val_log_folder = run_folder / "val_log"
        print(f"Run output folder is {run_folder}")
        os.makedirs(run_folder, exist_ok=True)
        os.makedirs(train_log_folder, exist_ok=True)
        os.makedirs(val_log_folder, exist_ok=True)

        self.device = 'cpu'
        if torch.cuda.is_available():
            self.device = torch.cuda.current_device()
            self.model = self.model.to(self.device)

        self.run_folder = run_folder
        self.global_step = 0
        self.train_writer = SummaryWriter(log_dir=str(train_log_folder))
        self.val_writer = SummaryWriter(log_dir=str(val_log_folder))

    def save_checkpoint(self, path):
        torch.save(self.model.state_dict(), path)

    def train(self, num_epochs: int, lr: float):
        model = self.model
        optimizer = model.get_optimizer(lr=lr)
        train_loader = model.get_loader(train=True)
        val_loader = model.get_loader(train=False)

        best_loss = float('inf')
        for epoch in range(num_epochs):
            model.train()
            for batch in tqdm(train_loader):
                batch = {k: v.to(self.device) for k, v in batch.items()}
                loss, details = model.compute_all(batch)
                optimizer.zero_grad()
                loss.backward()
                optimizer.step()
                for k, v in details.items():
                    self.train_writer.add_scalar(k, v, global_step=self.global_step)
                self.global_step += 1

            model.eval()
            metrics = defaultdict(list)
            val_losses = []
            for batch in tqdm(val_loader):
                batch = {k: v.to(self.device) for k, v in batch.items()}
                loss, details = model.compute_all(batch)
                for k, v in details.items():
                    metrics[k].append(v)
                val_losses.append(loss.item())
            metrics = {k: np.mean(v) for k, v in metrics.items()}
            for k, v in metrics.items():
                self.val_writer.add_scalar(k, v, global_step=self.global_step)

            val_loss = np.mean(val_losses)
            if val_loss < best_loss:
                self.save_checkpoint(str(self.run_folder / "best_checkpoint.pth"))
                best_loss = val_loss
            self.val_writer.flush()

    def find_lr(self, min_lr: float = 1e-6, max_lr: float = 1e-1, num_lrs: int = 1000, smooth_beta: float = 0.8) -> dict:
        model = self.model
        optimizer = model.get_optimizer(lr=min_lr)
        train_loader = model.get_loader(train=True)
        num_lrs = min(num_lrs, len(train_loader))
        lrs = np.geomspace(start=min_lr, stop=max_lr, num=num_lrs)

        logs = {'lr': [], 'loss': [], 'avg_loss': []}
        avg_loss = None
        initial_state = model.state_dict()

        model.train()
        for lr, batch in tqdm(zip(lrs, train_loader), desc='finding LR', total=num_lrs):
            # apply new lr
            for param_group in optimizer.param_groups:
                param_group['lr'] = lr
            # train step
            batch = {k: v.to(self.device) for k, v in batch.items()}
            loss, details = model.compute_all(batch)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
            loss = loss.item()
            # calculate smoothed loss
            if avg_loss is None:
                avg_loss = loss
            else:
                avg_loss = smooth_beta * avg_loss + (1 - smooth_beta) * loss
            # store values into logs
            logs['lr'].append(lr)
            logs['avg_loss'].append(avg_loss)
            logs['loss'].append(loss)

        logs.update({key: np.array(val) for key, val in logs.items()})
        model.load_state_dict(initial_state)
        return logs
```

## A plain autoencoder

```
class AE(nn.Module):
    def __init__(self, hidden_size=2, lr=1e-2):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, hidden_size))
        self.decoder = nn.Sequential(nn.Linear(hidden_size, 256), nn.ReLU(), nn.Linear(256, 784), nn.Sigmoid())
        self.lr = lr

    def encode(self, x):
        return self.encoder(x)

    def decode(self, z):
        return self.decoder(z)

    def get_optimizer(self, lr):
        return torch.optim.SGD(self.parameters(), lr=lr)

    def get_loader(self, train=False, batch_size=128):
        if train:
            ds = mnist_train
        else:
            ds = mnist_val
        return DataLoader(ds, batch_size=batch_size, shuffle=train)

    def compute_all(self, batch):
        x = batch['image']
        x = x.view(-1, 784)
        z = self.encode(x)
        xr = self.decode(z)
        loss = F.mse_loss(xr, x)
        return loss, dict(
            loss=loss.item(),
        )

ae = AE(hidden_size=2, lr=1e-1)
trainer = Trainer(ae, run_folder="ae-2", batch_size=16)

lrrt = trainer.find_lr(min_lr=1e-3, max_lr=10.0)
plt.plot(lrrt['lr'], lrrt['loss'])
plt.xscale('log')

trainer.train(10, lr=1.0)
```

Original and reconstruction.

```
def reconstruction_plot(model, dataset):
    plt.figure(figsize=[12, 3])
    N = 10
    idx = np.random.choice(len(dataset), size=N)
    for i in range(N):
        plt.subplot(2, N, i + 1)
        ddict = dataset[idx[i]]
        img = ddict['image']
        label = ddict["label"]
        with torch.no_grad():
            z = model.encoder(img.view(-1, 784))
            rec = model.decoder(z)
        rec = rec.view(28, 28).cpu().numpy()
        plt.title("id: {}".format(idx[i]))
        plt.imshow(img[0, ...], cmap='gray')
        plt.axis('off')

        plt.subplot(2, N, i + 1 + N)
        plt.title('label: {}'.format(label))
        plt.imshow(rec, cmap='gray')
        plt.axis('off')
    plt.subplots_adjust(wspace=0.03, hspace=0)

reconstruction_plot(ae, mnist_val)
```

Two-dimensional representations

```
# let's look at the internal representations
def code_distribution(model, dataset):
    N = 1000
    idx = np.random.choice(len(dataset), size=N)
    <your code here>
    x, y, colors = ...
    # if x, y, colors hold the coordinates and labels, the following code draws a plot with a legend
    cm=plt.cm.rainbow(np.linspace(0,1,10))
    plt.figure(figsize=(12, 12))
    plt.scatter(x, y, c=cm[np.array(colors)])
    plt.grid()
    plt.legend(handles=[mpatches.Patch(color=cm[i], label='{}'.format(i)) for i in range(10)])
    plt.show()

code_distribution(ae, mnist_train)
```

Interpolation

```
def plot_interpolations(model, dataset):
    N = 10
    idx = np.random.choice(len(dataset), size=N)
    # your code here
    out = ...
    # end of your code
    # if out has shape [5, 10, 28, 28], the following code draws a grid of interpolation images
    fig, axes = plt.subplots(nrows=5, ncols=10, figsize=(14, 7),
                             subplot_kw={'xticks': [], 'yticks': []})
    for i in range(50):
        axes[i // 10, i % 10].imshow(out[i // 10, i % 10], cmap='gray')

plot_interpolations(ae, mnist_val)
```

**What peculiarities do you see in these pictures?**

## Variational autoencoder (not Bayesian inference)

Suppose we want `code` to be normally distributed (a standard 2-d distribution). We'll proceed as follows:

1. How do we make one distribution resemble another? Minimize the KL divergence! $KL(q(x)||p(x)) = \int q(x) \log \frac{q(x)}{p(x)}$

2. How do we turn `code` into a distribution rather than a single point? Let's make a distribution out of it! $z = \mu + \varepsilon \sigma$, where $code = [\mu, \sigma]$ and $\varepsilon \sim N(0, 1)$

3. We don't have to write out the whole divergence integral: between such model distributions the divergence can be written analytically. For $z \sim N(\mu, \sigma)$, $KL = \frac{1}{2}\sum\limits_{i} \left(\mu_i^2 + \sigma_i^2 - \log \sigma_i^2 - 1\right)$

<img src="https://github.com/m12sl/dl-hse-2020/raw/master/11-prob-models/img/vae.png" crossorigin="anonymous"/>

## What to check out on Bayesian inference

- [Vetrov's DeepBayes2017 lecture: Models with latent variables](https://youtu.be/7yLOF07Mv5I)
- [Vetrov's DeepBayes2017 lecture: Scalable Bayesian methods](https://youtu.be/if9bTlZOiO8) << this one covers VAE
- [Auto-Encoding Variational Bayes](https://arxiv.org/pdf/1312.6114.pdf)
- [Tutorial on Variational Autoencoders](https://arxiv.org/pdf/1606.05908.pdf)
- [blogpost](https://towardsdatascience.com/intuitively-understanding-variational-autoencoders-1bfe67eb5daf)
- [KL divergence between two univariate Gaussians](https://stats.stackexchange.com/questions/7440/kl-divergence-between-two-univariate-gaussians)

# Building a variational autoencoder with a compressed internal representation

```
class VAE(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = ...
        self.decoder = ...

    def get_optimizer(self, lr):
        return torch.optim.Adam(self.parameters(), lr=lr)

    def get_loader(self, train=False, batch_size=128):
        ds = mnist_train if train else mnist_val
        return DataLoader(ds, batch_size=batch_size, shuffle=train)

    def compute_all(self, batch):
        <your code here>
        return loss, dict(
            # any metrics you like
            kld=kld.item(),
            loss=loss.item(),
        )

vae = VAE()
trainer = Trainer(vae, run_folder="vae-2")
trainer.train(3, lr=1e-3)

reconstruction_plot(vae, mnist_val)
code_distribution(vae, mnist_train)
plot_interpolations(vae, mnist_val)
```

Useful applications of (*)autoencoders:

- pretraining on unlabeled data
- dimensionality reduction
- representations useful for other models
- interpretable internal representations
- anomaly detection
- denoising
- manipulation of internal representations
- synthesis of new examples
- ...

Solving some of these tasks doesn't require committing to an encoder-decoder architecture; take a look at i-RevNets (Invertible Reversible Networks):
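As a quick numeric sanity check of the reparameterization trick and the analytic KL term from the VAE section above, here is a minimal sketch. The tensor sizes and the `log_sigma` parameterization are illustrative assumptions, not part of the exercise solution:

```python
import torch

torch.manual_seed(0)

# Pretend encoder output: code = [mu, log_sigma] for a 2-d latent space.
# Parameterizing sigma through its log keeps sigma strictly positive.
code = torch.randn(4)
mu, log_sigma = code[:2], code[2:]
sigma = log_sigma.exp()

# Reparameterization: z = mu + eps * sigma, with eps ~ N(0, 1).
# The sampling noise is in eps, so gradients flow through mu and sigma.
eps = torch.randn_like(mu)
z = mu + eps * sigma

# Analytic KL(N(mu, sigma) || N(0, 1)), summed over latent dimensions.
kld = 0.5 * (mu ** 2 + sigma ** 2 - 2 * log_sigma - 1).sum()

print(z.shape, kld.item() >= 0)  # the KL divergence is always non-negative
```

Because `z` is a deterministic function of `mu`, `sigma`, and the externally sampled `eps`, backpropagation through the sampling step works out of the box, which is the whole point of the trick.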
# Generating Features from GeoTiff Files

From the GeoTiff files available for India over a period of more than 20 years, we want to generate features for the problem of predicting district-wise crop yield in India.

Because of the gdal package, we had to create a separate environment using conda, so install the packages for this notebook in that environment. From the Anaconda prompt you can check the names of all the available environments and activate one:

```shell
$ conda info --envs
$ activate env_name
```

```
from osgeo import ogr, osr, gdal
import fiona
from shapely.geometry import Point, shape
import numpy as np
import pandas as pd
import os
import sys
import tarfile
import timeit
```

- For Windows
``` python
base_ = "C:\Users\deepak\Desktop\Repo\Maps\Districts\Census_2011"
```
- For macOS
``` python
base_ = "/Users/macbook/Documents/BTP/Satellite/Data/Maps/Districts/Census_2011"
```

```
# Change this for Win7, macOS
bases = "C:\Users\deepak\Desktop\Repo\Maps\Districts\Census\Dist.shp"
# base_ = "/Users/macbook/Documents/BTP/Satellite/Data/Maps/Districts/Census_2011"

fc = fiona.open(bases)

def reverse_geocode(pt):
    for feature in fc:
        if shape(feature['geometry']).contains(pt):
            return feature['properties']['DISTRICT']
    return "NRI"

# base = "/Users/macbook/Documents/BTP/Satellite/Data/Sat"  # macOS
base = "G:\BTP\Satellite\Data\Test"  # Win7

def extract(filename, force=False):
    root = os.path.splitext(os.path.splitext(filename)[0])[0]  # remove .tar.gz
    if os.path.isdir(os.path.join(base,root)) and not force:
        # You may override by setting force=True.
        print('%s already present - Skipping extraction of %s' % (root, filename))
    else:
        print('Extracting data for %s' % root)
        tar = tarfile.open(os.path.join(base,filename))
        sys.stdout.flush()
        tar.extractall(os.path.join(base,root))
        tar.close()

# extracting all the tar files ...
(if not extracted)
for directory, subdirList, fileList in os.walk(base):
    for filename in fileList:
        if filename.endswith(".tar.gz"):
            d = extract(filename)

directories = [os.path.join(base, d) for d in sorted(os.listdir(base)) if os.path.isdir(os.path.join(base, d))]
# print directories

ds = gdal.Open(base + "\LE07_L1TP_146039_20101223_20161211_01_T1\LE07_L1TP_146039_20101223_20161211_01_T1_B1.TIF")
```

Prepare one `ds` variable here, for the coordinate-system transformation below.

```
# get the existing coordinate system
old_cs= osr.SpatialReference()
old_cs.ImportFromWkt(ds.GetProjectionRef())

# create the new coordinate system
wgs84_wkt = """
GEOGCS["WGS 84",
    DATUM["WGS_1984",
        SPHEROID["WGS 84",6378137,298.257223563,
            AUTHORITY["EPSG","7030"]],
        AUTHORITY["EPSG","6326"]],
    PRIMEM["Greenwich",0,
        AUTHORITY["EPSG","8901"]],
    UNIT["degree",0.01745329251994328,
        AUTHORITY["EPSG","9122"]],
    AUTHORITY["EPSG","4326"]]"""
new_cs = osr.SpatialReference()
new_cs.ImportFromWkt(wgs84_wkt)

# create a transform object to convert between coordinate systems
transform = osr.CoordinateTransformation(old_cs,new_cs)

type(ds)

def pixel2coord(x, y, xoff, a, b, yoff, d, e):
    """Returns global coordinates from coordinates x,y of the pixel"""
    xp = a * x + b * y + xoff
    yp = d * x + e * y + yoff
    return(xp, yp)

ricep = pd.read_csv("C:\Users\deepak\Desktop\Repo\BTP\Ricep.csv")
ricep = ricep.drop(["Unnamed: 0"],axis=1)
ricep["value"] = ricep["Production"]/ricep["Area"]
ricep.head()
```

## New features

----
> 12 months (Numbered 1 to 12)
>> 10 TIF files (12 for SAT_8)
>>> Mean & Variance

```
a = np.empty((ricep.shape[0],1))*np.NAN
```

The value of n is going to be the same for all 10 band files of a month, hence the same for all 20 features at a time. We could reduce the 480 columns here by exploiting this.
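Before building the feature columns, the `pixel2coord` affine mapping defined above can be sanity-checked on a hypothetical north-up GDAL GeoTransform (the 30 m pixel size and the origin below are made-up values, not taken from the real scenes):

```python
def pixel2coord(x, y, xoff, a, b, yoff, d, e):
    """Returns global coordinates from pixel coordinates x, y
    using GDAL's affine GeoTransform (xoff, a, b, yoff, d, e)."""
    return (a * x + b * y + xoff, d * x + e * y + yoff)

# Hypothetical north-up transform: 30 m pixels, no rotation (b = d = 0),
# y decreasing downward (e < 0), as is typical for Landsat scenes.
xoff, a, b, yoff, d, e = 500000.0, 30.0, 0.0, 3200000.0, 0.0, -30.0

print(pixel2coord(0, 0, xoff, a, b, yoff, d, e))   # → (500000.0, 3200000.0)
print(pixel2coord(10, 5, xoff, a, b, yoff, d, e))  # → (500300.0, 3199850.0)
```

Pixel (0, 0) maps to the top-left origin, and moving right/down moves east/south, which is the convention `GetGeoTransform` returns.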
```
""" 'features' contains the column indexes for the new features """
""" 'dictn' is the dictionary mapping a column name to its index number """

features = []
dictn = {}
k = 13
for i in range(1,13):
    for j in range(1,11):
        s = str(i) + "_B" + str(j) + "_"
        features.append(s+"M")
        features.append(s+"V")
        dictn[s+"M"] = k
        dictn[s+"V"] = k+1
        k = k+2

for i in range(1,13):
    for j in range(1,11):
        s = str(i) + "_B" + str(j) + "_"
        features.append(s+"Mn")
        features.append(s+"Vn")

len(features)

tmp = pd.DataFrame(index=range(ricep.shape[0]),columns=features)
ricex = pd.concat([ricep,tmp], axis=1)
ricex.head()

k = 10
hits = 0
times = [0,0,0,0,0,0,0,0]
nums = [0,0,0,0,0,0,0,0]
bx = False
stx = timeit.default_timer()
for directory in directories:
    if bx:
        continue
    else:
        bx = True
    dictx = {}
    """ Identifying Month, Year, Spacecraft ID """
    date = directory.split('\\')[-1].split('_')[3]  # Change for Win7
    satx = directory.split('\\')[-1][3]
    month = date[4:6]
    year = date[0:4]
    """ Visiting every GeoTIFF file """
    for _,_,files in os.walk(directory):
        for filename in files:
            # make sure not to go into the extra folders
            if filename.endswith(".TIF"):
                if filename[-5] == '8':
                    continue
                #--------------------------------------------------------------------------------------
                # Check, for a single iteration, which step takes the longest time,
                # and improve that step.
                # Do it all for B1.tif only; for the others, save the row indexes
                # found in B1's iteration so we don't have to search the dataframe
                # again and again. But keep track of the pixels which were not found,
                # so we can skip them too and the correct pixel value goes to the
                # correct row in the dataframe. We still have to traverse the tiff
                # file to get the pixel values.
                # What steps are common for all the 10 tif files:
# If nothing works, maybe we can reduce the number of features, by just visiting # First 5 TIF files for each scene. #-------------------------------------------------------------------------------------- print os.path.join(directory,filename) ds = gdal.Open(os.path.join(directory,filename)) if ds == None: continue col, row, _ = ds.RasterXSize, ds.RasterYSize, ds.RasterCount xoff, a, b, yoff, d, e = ds.GetGeoTransform() """ Now go to each pixel, find its lat,lon. Hence its district, and the pixel value """ """ Find the row with same (Year,District), in Crop Dataset. """ """ Find the feature using Month, Band, SATx """ """ For this have to find Mean & Variance """ for i in range(0,col,col/k): for j in range(0,row,row/k): st = timeit.default_timer() ########### fetching the lat and lon coordinates x,y = pixel2coord(i, j, xoff, a, b, yoff, d, e) lonx, latx, z = transform.TransformPoint(x,y) times[0] += timeit.default_timer() - st nums[0] += 1 st = timeit.default_timer() ########### fetching the name of district district = "" #---------------------------------------------------------- if filename[-5] == '1': point = Point(lonx,latx) district = reverse_geocode(point) dictx[str(lonx)+str(latx)] = district else: district = dictx[str(lonx)+str(latx)] #---------------------------------------------------------- times[1] += timeit.default_timer() - st nums[1] += 1 if district == "NRI": continue st = timeit.default_timer() ########### Locating the row in DataFrame which we want to update district = district.lower() district = district.strip() r = ricex.index[(ricex['ind_district'] == district) & (ricex['Crop_Year'] == int(year))].tolist() times[3] += timeit.default_timer() - st nums[3] += 1 if len(r) == 1: st = timeit.default_timer() ########### The pixel value for that location px,py = i,j pix = ds.ReadAsArray(px,py,1,1) pix = pix[0][0] times[2] += timeit.default_timer() - st nums[2] += 1 st = timeit.default_timer() """ Found the row, so now ..""" """ Find Collumn index 
corresponding to Month, Band """
                            hits = hits + 1
                            #print ("Hits: ", hits)
                            ####### Band Number ########
                            band = filename.split("\\")[-1].split("_")[7:][0].split(".")[0][1]
                            bnd = band
                            if band == '6':
                                if filename.split("\\")[-1].split("_")[7:][2][0] == '1':
                                    bnd = band
                                else:
                                    bnd = '9'
                            elif band == 'Q':
                                bnd = '10'
                            sm = month + "_B" + bnd +"_M"
                            cm = dictn[sm]
                            r = r[0]
                            # cm is the column index for the mean
                            # r is the row index
                            times[4] += timeit.default_timer() - st
                            nums[4] += 1

                            ##### Checking if values are null ...
                            valm = ricex.iloc[r,cm]
                            if pd.isnull(valm):
                                st = timeit.default_timer()
                                ricex.iloc[r,cm] = pix
                                ricex.iloc[r,cm+1] = pix*pix
                                ricex.iloc[r,cm+240] = 1
                                times[5] += timeit.default_timer() - st
                                nums[5] += 1
                                continue

                            st = timeit.default_timer()
                            ##### if the values are not null ...
                            valv = ricex.iloc[r,cm+1]
                            n = ricex.iloc[r,cm+240]
                            n = n+1
                            times[6] += timeit.default_timer() - st
                            nums[6] += 1

                            st = timeit.default_timer()
                            # Mean & Variance update
                            ricex.iloc[r,cm] = valm + (pix-valm)/n
                            ricex.iloc[r,cm+1] = ((n-2)/(n-1))*valv + (pix-valm)*(pix-valm)/n
                            ricex.iloc[r,cm+240] = n
                            times[7] += timeit.default_timer() - st
                            nums[7] += 1
                        #print ("No match for the district " + district + " for the year " + year)

elapsed = timeit.default_timer() - stx
print (elapsed)
print "Seconds"
```

Overflow is happening because the pixel value is returned in a narrow (small) integer type; it needs to be converted to a wider int type before squaring.
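Both the overflow and the running update can be checked in isolation. Below is a minimal Python 3 sketch (the uint8 value and the toy sample are made up for illustration, not taken from the real imagery):

```python
import numpy as np

# 1) Narrow integer types wrap around silently when squared.
a = np.array([200], dtype=np.uint8)
assert (a * a)[0] == 64                        # 200*200 = 40000, wraps mod 256 to 64
assert (a.astype(np.int64) ** 2)[0] == 40000   # widening first avoids the overflow

# 2) The incremental mean/variance update used in the loop above,
#    checked against NumPy's batch formulas.
def running_mean_var(xs):
    mean, var, n = 0.0, 0.0, 0
    for x in xs:
        n += 1
        delta = x - mean          # deviation from the *previous* mean
        mean = mean + delta / n
        if n > 1:
            var = (n - 2) / (n - 1) * var + delta * delta / n
    return mean, var

xs = [3.0, 7.0, 11.0, 5.0]
m_, v_ = running_mean_var(xs)
assert abs(m_ - np.mean(xs)) < 1e-12
assert abs(v_ - np.var(xs, ddof=1)) < 1e-12    # sample variance (n-1 denominator)
```

Note the recurrence matches the loop's `((n-2)/(n-1))*valv + (pix-valm)**2/n` form, so the stored "V" columns are sample variances; casting `pix` to a wide int (or float) before `pix*pix` is what prevents the wraparound seen above.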
```
print hits
```

Calculating the time per iteration of each code block in the huge loop above helped in recognizing the culprit.

```
print times
print nums

for i in range(8):
    x = times[i]/nums[i]
    print (str(i) + ": " + str(x))
```

Observations:

- [1] & [2] are the most time consuming
    - [1] is reverse geocoding
    - [2] is pixel value extraction
- move [2] inside the if condition that checks if the row exists - only then do we need the pixel value
- deal with [1] using a dictionary

```
ricex.describe()
ricex.to_csv("ricex_test1.csv")
fc
fc.schema
fc.crs
len(fc)
```

641 Districts in India in total

```
import timeit
a = timeit.default_timer()
for i in range(1000000):
    j = 1
b = timeit.default_timer() - a
b
```

DOUBT: Why is the LANDSAT 7 file smaller than the LANDSAT 8 file, in spite of the fact that the number of pixels is equal in the band files for both cases? Investigate this ...

To calculate the relative dimensions of all 10 band files for a scene. `Whatever the dimension, it's the same for all 10 files, except Band 8, which has 4 times the pixels of any other file`

```
for directory in directories:

    """ Identifying Month, Year, Spacecraft ID """
    date = directory.split('\\')[-1].split('_')[3] # Change for Win7
    satx = directory.split('\\')[-1][3]
    month = date[4:6]
    year = date[0:4]
    print "LANDSAT {}, MONTH: {}, YEAR: {}".format(satx,month,year)

    """ Visiting every GeoTIFF file """
    for _,_,files in os.walk(directory):
        for filename in files:
            if filename.endswith(".TIF"):
                print filename.split("\\")[-1].split("_")[7:]
                ds = gdal.Open(os.path.join(directory,filename))
                if ds is None:
                    continue
                col, row, _ = ds.RasterXSize, ds.RasterYSize, ds.RasterCount
                xoff, a, b, yoff, d, e = ds.GetGeoTransform()
                print "Col: {0:6}, Row:{1:6}".format(col,row)

                """ Now go to each pixel, find its lat,lon. Hence its district, and the pixel value """
                """ Find the row with same (Year,District), in Crop Dataset. """
                """ Find the feature using Month, Band, SATx """
                """ For this have to find Mean & Variance """
```

So for LANDSAT 7: col,row ~ **8000,7000**, with the exception of Band 8, at **16K,14K**

---- Pseudo Code ----

> Go to the base folder: extract every zip file which is unextracted:
>> For each folder present here:
>>> For each TIFF file (for each band):
>>>> Identify the following:
- Month, Year
- District Name
- Cloud Cover Percentage
- Sat 7 or 8 (maybe from the #files in the folder!)

>>>> According to SAT, the meaning of the bands changes ... (put them in the corresponding features ...)
>>>> Traverse every 100th pixel (for Sat 7, every Kth)

----

- *Month, Year, Spacecraft ID* all come from the **File Name** itself
- Regarding pixel location selection:
    - Either go for **definite points** at some gap and avg the **non-zero ones**
    - OR select the points **randomly** and avg the non-zero ones only.

----------------------------------------------------------------------------------------------------------------

#### Future work

- Take into account that not all of a district is agricultural land, let alone land for the specific crop we are working on.
- Consider changing the features or the way the models learn: our case involves time-based data, which may need to be dealt with differently.
- Given the data for, say, the past 5 years, predict the output for the 6th year.
- Figure out how to handle all the states together, for all those years.
- Try other tricks in ensemble learning.
- Also try EDA on all the variables to draw some useful inferences about the dataset.
- Talk to Prof Suban. He predicted dwarfism using a satellite image dataset.
- First of all, define the problem properly, essentially carrying out a kind of literature survey.
    - Stanford, Microsoft India ...
- What all can we do using the satellite data?
    - State of the Art: Related Work
    - Look at the ways in which it can be improved ...
- Important to include the factors for India
- Also, see if new resources can be used or not (other than satellite data)
- Locating the pixels pertaining to the crops ... IMP.
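For reference, the `pixel2coord` helper called throughout the big loop is never shown in these notes. Under the standard GDAL six-parameter geotransform (the tuple unpacked as `xoff, a, b, yoff, d, e` above) it reduces to one affine step; the geotransform values below are hypothetical (a north-up scene with 30 m pixels):

```python
def pixel2coord(col, row, xoff, a, b, yoff, d, e):
    """Affine map from pixel (col, row) to projected (x, y) using the
    GDAL geotransform convention; samples the pixel's top-left corner."""
    x = xoff + col * a + row * b
    y = yoff + col * d + row * e
    return x, y

# hypothetical north-up geotransform: origin (444000, 3751320), 30 m pixels
print(pixel2coord(10, 20, 444000.0, 30.0, 0.0, 3751320.0, 0.0, -30.0))
# (444300.0, 3750720.0)
```

The actual helper may add half-pixel offsets to sample pixel centers instead of corners; the affine form is the same either way.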
``` {-# LANGUAGE InstanceSigs #-} import Control.Applicative (Alternative(..)) import Data.Char newtype Parser s r = Parser { unParser :: [s] -> ParseResult s r } type ParseResult s r = [([s], r)] symbol :: Eq s => s -> Parser s s symbol sym = Parser p where p (s:ss) | s == sym = [(ss, sym)] p _ = [] satisfy :: (s -> Bool) -> Parser s s satisfy pred = Parser p where p (s:ss) | pred s = [(ss, s)] p _ = [] instance Functor (Parser s) where fmap :: (r -> t) -> Parser s r -> Parser s t fmap f p = pure . f =<< p instance Applicative (Parser s) where pure :: r -> Parser s r pure r = Parser $ \ss -> [(ss,r)] (<*>) :: Parser s (r -> t) -> Parser s r -> Parser s t (<*>) (Parser pf) (Parser pa) = Parser $ \ss -> [ tuple | (fs, f) <- pf ss , (as, a) <- pa fs , let tuple = (as, f a) ] instance Alternative (Parser s) where empty :: Parser s r empty = Parser $ \_ -> [] (<|>) :: Parser s r -> Parser s r -> Parser s r (<|>) (Parser p1) (Parser p2) = Parser $ \ss -> p1 ss ++ p2 ss instance Monad (Parser s) where (>>=) :: Parser s r -> (r -> Parser s t) -> Parser s t (>>=) (Parser p1) f = Parser $ \ss -> [ tuple | (ssRest,result1) <- p1 ss , tuple <- (unParser $ f result1) ssRest ] aANDbORc :: Parser Char (Char, Char) aANDbORc = do x <- symbol 'a' (\y -> (x,y)) <$> (symbol 'b' <|> symbol 'c') unParser aANDbORc "abc" unParser aANDbORc "acb" unParser aANDbORc "cba" alphaORhex :: Parser Char Char alphaORhex = alpha <|> hex alpha :: Parser Char Char alpha = satisfy (\c -> elem c (['a'..'z']++['A'..'Z'])) hex :: Parser Char Char hex = satisfy (\c -> elem c (['0'..'9']++['A'..'F'])) unParser alphaORhex "abc" unParser alphaORhex "ABC" unParser alphaORhex "123" word :: Parser Char String word = some (satisfy isAlpha) sep :: Parser Char String sep = some (satisfy isSpace <|> symbol ',') sentence :: Parser Char [String] sentence = do w <- word r <- many (sep >> word) (\_ -> (w:r)) <$> symbol '.' 
unParser word "Hello world" newtype ParserC s t r = ParserC { unParserC :: Success s t r -> NextRes s t -> Parser s t } type Success s t r = r -> NextRes s t -> Parser s t type NextRes s t = ParseResult s t symbolC :: Eq s => s -> ParserC s t s symbolC sym = ParserC $ \sc nc -> Parser $ \ss -> case ss of (s:ss) | s == sym -> unParser (sc sym nc) ss _ -> nc satisfyC :: (s -> Bool) -> ParserC s t s satisfyC pred = ParserC $ \sc nc -> Parser $ \ss -> case ss of (s:ss) | pred s -> unParser (sc s nc) ss _ -> nc instance Functor (ParserC s t) where fmap :: (r -> u) -> ParserC s t r -> ParserC s t u fmap f p = pure . f =<< p instance Applicative (ParserC s t) where pure :: r -> ParserC s t r pure r = ParserC $ \succ next -> succ r next (<*>) :: ParserC s t (u -> v) -> ParserC s t u -> ParserC s t v (<*>) (ParserC pf) (ParserC pa) = ParserC $ \sc -> pf $ \f -> pa $ \a -> sc (f a) instance Alternative (ParserC s t) where empty :: ParserC s t r empty = ParserC $ \succ next -> Parser $ \ss -> next (<|>) :: ParserC s t r -> ParserC s t r -> ParserC s t r (<|>) (ParserC p1) (ParserC p2) = ParserC $ \sc nc -> Parser $ \ss -> unParser (p1 sc (unParser (p2 sc nc) ss)) ss instance Monad (ParserC s t) where (>>=) :: ParserC s t u -> (u -> ParserC s t v) -> ParserC s t v (>>=) (ParserC p1) f = ParserC $ \sc -> p1 $ \t -> unParserC (f t) sc begin :: ParserC s t t -> Parser s t begin (ParserC p) = p (\r nc -> Parser $ \ss -> ((ss,r):nc)) [] infixr 4 <!> (<!>) :: Parser s r -> Parser s r -> Parser s r (<!>) (Parser p1) (Parser p2) = Parser $ \ss -> case p1 ss of [] -> p2 ss r -> r infixr 4 >!< (>!<) :: ParserC s t r -> ParserC s t r -> ParserC s t r (>!<) (ParserC p1) (ParserC p2) = ParserC $ \sc nc -> Parser $ \ss -> unParser (p1 (\r _ -> sc r nc) (unParser (p2 sc nc) ss)) ss (>!*<) :: ParserC s t r -> ParserC s t [r] (>!*<) p = (do r <- p (\rs -> (r:rs)) <$> (>!*<) p) >!< pure [] (>!+<) :: ParserC s t r -> ParserC s t [r] (>!+<) p = do r <- p (\rs -> (r:rs)) <$> (>!*<) p newtype 
CParser s t r = CParser { unCParser :: SucCont s t r -> XorCont s t -> AltCont s t -> Parser s t }

type SucCont s t r = r -> XorCont s t -> AltCont s t -> Parser s t
type XorCont s t = AltCont s t -> ParseResult s t
type AltCont s t = ParseResult s t

cSymbol :: Eq s => s -> CParser s t s
cSymbol sym = CParser $ \sc xc ac -> Parser $ \ss -> case ss of
    (s:ss) | s == sym -> unParser (sc sym xc ac) ss
    _ -> xc ac

cSatisfy :: (s -> Bool) -> CParser s t s
cSatisfy pred = CParser $ \sc xc ac -> Parser $ \ss -> case ss of
    (s:ss) | pred s -> unParser (sc s xc ac) ss
    _ -> xc ac

instance Functor (CParser s t) where
    fmap :: (u -> v) -> CParser s t u -> CParser s t v
    fmap f p = pure . f =<< p

instance Applicative (CParser s t) where
    pure :: r -> CParser s t r
    pure x = CParser $ \sc -> sc x

    (<*>) :: CParser s t (u -> v) -> CParser s t u -> CParser s t v
    (<*>) (CParser pf) (CParser pa) = CParser $ \sc -> pf $ \f -> pa $ \a -> sc (f a)

instance Alternative (CParser s t) where
    empty :: CParser s t r
    empty = CParser $ \sc xc ac -> Parser $ \ss -> xc ac

    (<|>) :: CParser s t r -> CParser s t r -> CParser s t r
    (<|>) (CParser p1) (CParser p2) = CParser $ \sc xc ac -> Parser $ \ss ->
        unParser (p1 sc id (unParser (p2 sc xc ac) ss)) ss

instance Monad (CParser s t) where
    (>>=) :: CParser s t u -> (u -> CParser s t v) -> CParser s t v
    (>>=) (CParser p1) f = CParser $ \sc -> p1 $ \t -> unCParser (f t) sc

infixr 4 <<!>>
(<<!>>) :: CParser s t r -> CParser s t r -> CParser s t r
(<<!>>) (CParser p1) (CParser p2) = CParser $ \sc xc ac -> Parser $ \ss ->
    unParser (p1 (\x xc2 -> sc x xc) (\ac3 -> unParser (p2 sc xc ac3) ss) ac) ss

cBegin :: CParser s t t -> Parser s t
cBegin (CParser p) = p (\x xc ac -> Parser $ \ss -> ((ss,x):(xc ac))) id []

infixr 6 !>>=
(!>>=) :: CParser s t u -> (u -> CParser s t v) -> CParser s t v
(!>>=) (CParser p1) f = CParser $ \sc -> p1 (\t ac2 -> unCParser
(f t) sc id) (<<!*>>) :: CParser s t r -> CParser s t [r] (<<!*>>) p = (p !>>= \r -> (\rs -> (r:rs)) <$> (<<!*>>) p) <<!>> pure [] (<<!!*>>) :: CParser s t r -> CParser s t [r] (<<!!*>>) p = cListP p [] (<<!!+>>) :: CParser s t r -> CParser s t [r] (<<!!+>>) p = p >>= \r -> cListP p [r] cListP :: CParser s t r -> [r] -> CParser s t [r] cListP (CParser p) l = clp l where clp l = CParser $ \sc xc ac -> Parser $ \ss -> unParser (p (\r xc2 -> unCParser (clp (r:l)) sc id) (\ac4 -> unParser (sc (reverse l) xc ac4) ss) ac) ss manParse :: String -> [(String, [String])] manParse input = word [] [] input where word w s (c:r) | isAlpha c = word (c:w) s r word w s input = sep [] ((reverse w):s) input sep l s (c:r) | isSpace c || c == ',' = sep (c:l) s r sep [] s input = dot s input sep _ s input = word [] s input dot [] input = [] dot s ('.':r) = [(r,reverse s)] dot _ _ = [] ```
#Traditional Value Factor Algorithm By Gil Wassermann Strategy taken from "130/30: The New Long-Only" by Andrew Lo and Pankaj Patel Part of the Quantopian Lecture Series: * www.quantopian.com/lectures * github.com/quantopian/research_public Notebook released under the Creative Commons Attribution 4.0 License. Please do not remove this attribution. Before the crisis of 2007, 130/30 funds were all the rage. The idea of a 130/30 fund is simple: take a long position of 130% and a short position of 30%; this combination gives a net exposure of 100% (the same as a long-only fund) as well as the added benefit of the ability to short stocks. The ability to short in a trading strategy is crucial as it allows a fund manager to capitalize on a security's poor performance, which is impossible in a traditional, long-only strategy. This notebook, using factors outlined by Andrew Lo and Pankaj Patel in "130/30: The New Long Only", will demonstrate how to create an algorithmic 130/30 strategy. It will also highlight Quantopian's Pipeline API which is a powerful tool for developing factor trading strategies. First, let us import all necessary libraries and functions for this algorithm ``` import numpy as np import pandas as pd import matplotlib.pyplot as plt from quantopian.pipeline import Pipeline from quantopian.pipeline.data.builtin import USEquityPricing from quantopian.research import run_pipeline from quantopian.pipeline.data import Fundamentals from quantopian.pipeline.factors import CustomFactor ``` #Traditional Value In this notebook, we will develop a strategy based on the "traditional value" metrics described in the Lo/Patel whitepaper. The factors employed in this strategy designate stocks as either cheap or expensive using classic fundamental analysis. 
The factors that Lo/Patel used are:

* Dividend Yield
* Price to Book Value
* Price to Trailing 12-Month Sales
* Price to Trailing 12-Month Cash Flows

##Dividend Yield

Dividend yield is calculated as:

$$Dividend\;Yield = \frac{Annual\;Dividends\;per\;share}{Price\;per\;share}$$

When a company makes a profit, it faces a choice: it can either reinvest those profits in the company with an eye to increasing efficiency, purchasing new technology, etc., or it can pay dividends to its equity holders. While reinvestment may increase a company's future share price and thereby reward investors, the most concrete way equity holders are rewarded is through dividends. An equity with a high dividend yield is particularly attractive, as the dividends paid to investors represent a larger proportion of the share price itself.

Now we shall create a Dividend Yield factor using the Pipeline API framework and Pipeline's list of fundamental values.

```
# Custom Factor 1 : Dividend Yield
class Div_Yield(CustomFactor):
    inputs = [Fundamentals.div_yield5_year]
    window_length = 1
    def compute(self, today, assets, out, d_y):
        out[:] = d_y[-1]
```

While this factor could be calculated from other fundamental metrics, Fundamentals removes the need for any calculation. It is good practice to check the list of [fundamentals](https://www.quantopian.com/help/fundamentals) before creating a custom factor from scratch.

We will initialize a temporary Pipeline to get a sense of the values.
```
# create the pipeline
temp_pipe_1 = Pipeline()

# add the factor to the pipeline
temp_pipe_1.add(Div_Yield(), 'Dividend Yield')

# run the pipeline and get data for first 5 equities
run_pipeline(temp_pipe_1, start_date='2015-11-11', end_date='2015-11-11').dropna().head()
```

##Price to Book Value

Price to Book Value (a.k.a. Price to Book Ratio) is calculated as:

$$P/B\;Ratio = \frac{Price\;per\;share}{Net\;Asset\;Value\;per\;share}$$

Net Asset Value per share can be thought of (very roughly) as a company's total assets less its total liabilities, all divided by the number of shares outstanding.

The P/B Ratio gives a sense of a stock being either over- or undervalued. A high P/B Ratio suggests that a stock's price is overvalued and should therefore be shorted, whereas a low P/B Ratio is attractive, as the stock gained by purchasing the equity is hypothetically "worth more" than the price paid for it.

We will now create a P/B Ratio custom factor and look at some of the results.

```
# Custom Factor 2 : P/B Ratio
class Price_to_Book(CustomFactor):
    inputs = [Fundamentals.pb_ratio]
    window_length = 1
    def compute(self, today, assets, out, pbr):
        out[:] = pbr[-1]

# create the Pipeline
temp_pipe_2 = Pipeline()

# add the factor to the Pipeline
temp_pipe_2.add(Price_to_Book(), 'P/B Ratio')

# run the Pipeline and get data for first 5 equities
run_pipeline(temp_pipe_2, start_date='2015-11-11', end_date='2015-11-11').head()
```

There are two points to make about this data series. Firstly, AA_PR's P/B Ratio is given as NaN by Pipeline. NaN stands for "not a number" and occurs when a value cannot be fetched by Pipeline. Eventually, we will remove these NaN values from the dataset, as they often lead to confusing errors when manipulating the data.

Secondly, a low P/B Ratio and a high Dividend Yield are attractive for investors, whereas a high P/B Ratio and a low Dividend Yield are unattractive.
Therefore, we will "invert" the P/B Ratio by making each value negative in the factor output so that, when the data is aggregated later in the algorithm, the maxima and minima have the same underlying "meaning".

##Price to Trailing 12-Month Sales

This is calculated as a simple ratio between price per share and trailing 12-month (TTM) sales.

TTM is a transformation rather than a metric: it rolls a fundamental value up over the most recent four quarters, updating the window as each new quarter is reported. For example, to calculate today's TTM Sales for company XYZ, one would take the revenue reported in the company's most recent fiscal year-end filing, add the revenue from the quarters reported since that filing, and subtract the revenue from the corresponding quarters one year earlier.

Calculating the exact TTM of a security is indeed possible using Pipeline; however, the code required is slow. Luckily, this value can be well approximated by the built-in Fundamental Morningstar ratios, which use annual sales to calculate the Price to Sales fundamental value. This slight change boosts the code's speed enormously yet has very little impact on the results of the strategy itself.

Price to TTM Sales is similar to the P/B Ratio in terms of function. The major difference between the two ratios is that the TTM window minimizes seasonal fluctuations, as previous data is used to smooth the value. In our case, annualized values accomplish this same smoothing.

Also, note that the factor output is negated; this factor requires the same inversion as the P/B Ratio.
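One common way to build a TTM figure from filings is additive: take the last fiscal-year total, add the quarters reported since that filing, and subtract the same quarters from one year earlier. A toy sketch with hypothetical revenue figures (in $M):

```python
fy_revenue = 400.0       # most recent fiscal year-end filing
ytd_this_year = 230.0    # quarters reported since that filing
ytd_prior_year = 190.0   # the same quarters, one year earlier

ttm_sales = fy_revenue + ytd_this_year - ytd_prior_year
print(ttm_sales)  # 440.0

# the factor would then hold price-per-share / TTM-sales-per-share, negated
price_per_share, ttm_sales_per_share = 50.0, 44.0
print(-price_per_share / ttm_sales_per_share)
```

All figures here are made up purely to illustrate the rolling-window construction; the notebook itself sidesteps this arithmetic by using the annualized Morningstar ratio.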
```
# Custom Factor 3 : Price to Trailing 12 Month Sales
class Price_to_TTM_Sales(CustomFactor):
    inputs = [Fundamentals.ps_ratio]
    window_length = 1
    def compute(self, today, assets, out, ps):
        out[:] = -ps[-1]

# create the pipeline
temp_pipe_3 = Pipeline()

# add the factor to the pipeline
temp_pipe_3.add(Price_to_TTM_Sales(), 'Price / TTM Sales')

# run the pipeline and get data for first 5 equities
run_pipeline(temp_pipe_3, start_date='2015-11-11', end_date='2015-11-11').head()
```

##Price to Trailing 12-Month Cashflows

This is calculated as a simple ratio between price per share and TTM free cashflow (here using the built-in Fundamental Morningstar ratio as an approximation). This ratio serves a similar function to the previous two. A future notebook will explore the subtle differences in these metrics, but they largely serve the same purpose. Once again, low values are attractive and high values are unattractive, so the metric must be inverted.

```
# Custom Factor 4 : Price to Trailing 12 Month Cashflow
class Price_to_TTM_Cashflows(CustomFactor):
    inputs = [Fundamentals.pcf_ratio]
    window_length = 1
    def compute(self, today, assets, out, pcf):
        out[:] = -pcf[-1]

# create the pipeline
temp_pipe_4 = Pipeline()

# add the factor to the pipeline
temp_pipe_4.add(Price_to_TTM_Cashflows(), 'Price / TTM Cashflows')

# run the pipeline and get data for first 5 equities
run_pipeline(temp_pipe_4, start_date='2015-11-11', end_date='2015-11-11').head()
```

##The Full Pipeline

Now that each individual factor has been defined, it is time to get all the necessary data at once. In the algorithm, this will take place once every day.

Later in the process, we will need a factor in order to create an approximate S&P500, so we will also include another factor called SPY_proxy (SPY is an ETF that tracks the S&P500). The S&P500 is a collection of 500 of the largest companies traded on the stock market.
Our interpretation of the S&P500 is a group of 500 companies with the greatest market capitalizations; however, the actual S&P500 will be slightly different, as Standard & Poor's, which creates the index, has a more nuanced methodology for its calculation. We will also alter our P/B Ratio factor in order to account for the inversion.

```
# This factor creates the synthetic S&P500
class SPY_proxy(CustomFactor):
    inputs = [Fundamentals.market_cap]
    window_length = 1
    def compute(self, today, assets, out, mc):
        out[:] = mc[-1]

# Custom Factor 2 : P/B Ratio
class Price_to_Book(CustomFactor):
    inputs = [Fundamentals.pb_ratio]
    window_length = 1
    def compute(self, today, assets, out, pbr):
        out[:] = -pbr[-1]

def Data_Pull():

    # create the pipeline for the data pull
    Data_Pipe = Pipeline()

    # create SPY proxy
    Data_Pipe.add(SPY_proxy(), 'SPY Proxy')

    # Div Yield
    Data_Pipe.add(Div_Yield(), 'Dividend Yield')

    # Price to Book
    Data_Pipe.add(Price_to_Book(), 'Price to Book')

    # Price / TTM Sales
    Data_Pipe.add(Price_to_TTM_Sales(), 'Price / TTM Sales')

    # Price / TTM Cashflows
    Data_Pipe.add(Price_to_TTM_Cashflows(), 'Price / TTM Cashflow')

    return Data_Pipe

# NB: Data_Pull is a function that returns a Pipeline object, so we need ()
results = run_pipeline(Data_Pull(), start_date='2015-11-11', end_date='2015-11-11')
results.head()
```

##Aggregation

Now that we have all our data, we need to manipulate it to create a single ranking of the securities.
Lo/Patel recommend the following algorithm:

* Extract the S&P500 from the set of equities and find the mean and standard deviation of each factor for this dataset (standard_frame_compute)
* Use these computed values to standardize each factor (standard_frame_compute)
* Replace values that are greater than 10 or less than -10 with 10 and -10 respectively, in order to limit the effect of outliers (filter_fn)
* Sum these values for each equity and divide by the number of factors in order to give a value between -10 and 10 (composite_score)

The code for this is shown below.

```
# limit effect of outliers
def filter_fn(x):
    if x <= -10:
        x = -10.0
    elif x >= 10:
        x = 10.0
    return x

# standardize using mean and sd of S&P500
def standard_frame_compute(df):

    # basic clean of dataset to remove infinite values
    df = df.replace([np.inf, -np.inf], np.nan)
    df = df.dropna()

    # need standardization params from synthetic S&P500
    df_SPY = df.sort(columns='SPY Proxy', ascending=False)

    # create separate dataframe for SPY
    # to store standardization values
    df_SPY = df_SPY.head(500)

    # get dataframes into numpy array
    df_SPY = df_SPY.as_matrix()

    # store index values
    index = df.index.values
    df = df.as_matrix()

    df_standard = np.empty(df.shape[0])

    for col_SPY, col_full in zip(df_SPY.T, df.T):

        # summary stats for S&P500
        mu = np.mean(col_SPY)
        sigma = np.std(col_SPY)
        col_standard = np.array(((col_full - mu) / sigma))

        # create vectorized function (lambda equivalent)
        fltr = np.vectorize(filter_fn)
        col_standard = (fltr(col_standard))

        # make range between -10 and 10
        col_standard = (col_standard / df.shape[1])

        # attach calculated values as new row in df_standard
        df_standard = np.vstack((df_standard, col_standard))

    # get rid of first entry (empty scores)
    df_standard = np.delete(df_standard, 0, 0)

    return (df_standard, index)

# Sum up and sort data
def composite_score(df, index):

    # sum up transformed data
    df_composite = df.sum(axis=0)

    # put into a pandas dataframe and connect numbers
    # to equities via reindexing
    df_composite = pd.Series(data=df_composite, index=index)

    # sort descending
    df_composite.sort(ascending=False)

    return df_composite

# compute the standardized values
results_standard, index = standard_frame_compute(results)

# aggregate the scores
ranked_scores = composite_score(results_standard, index)

# print the final rankings
ranked_scores
```

##Stock Choice

Now that we have ranked our securities, we need to choose a long basket and a short basket. Since we need to keep the 130/30 ratio between longs and shorts, why not have 26 longs and 6 shorts (in the algorithm we will weight each of these equally, giving us our desired leverage and exposure).

On the graph below, we plot a histogram of the securities to get a sense of the distribution of scores. The red lines represent the cutoff points for the long and short buckets. One thing to notice is that the vast majority of equities are ranked near the middle of the histogram, whereas the tails are quite thin. This would suggest that there is something special about the securities chosen for these baskets, and, hopefully, these special qualities will yield positive alpha for the strategy.

```
# create histogram of scores
ranked_scores.hist()

# make scores into list for ease of manipulation
ranked_scores_list = ranked_scores.tolist()

# add labels to axes
plt.xlabel('Standardized Scores')
plt.ylabel('Quantity in Basket')

# show long bucket
plt.axvline(x=ranked_scores_list[25], linewidth=1, color='r')

# show short bucket
plt.axvline(x=ranked_scores_list[-6], linewidth=1, color='r');
```

Please see the full algorithm for backtested returns!

NB: In the implementation of the algorithm, a series of filters is used to ensure that only tradeable stocks are included. The methodology for this filter can be found in https://www.quantopian.com/posts/pipeline-trading-universe-best-practice.
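The helper functions above target the pandas shipped with the Quantopian research environment; `DataFrame.sort(columns=...)` and `.as_matrix()` have since been removed from pandas. As a minimal sketch of the same standardize → clip → average aggregation in current pandas, on a hypothetical factor table:

```python
import pandas as pd

# Toy factor table: rows = securities, columns = factor values (all hypothetical);
# "Price to Book" is already inverted (negated), as in the notebook
df = pd.DataFrame(
    {"Dividend Yield": [0.01, 0.05, 0.03, 0.02],
     "Price to Book": [-1.2, -0.5, -3.0, -0.8]},
    index=["AAA", "BBB", "CCC", "DDD"],
)

# standardize each factor (here against the whole universe; the notebook
# instead uses the mean/std of its synthetic S&P 500 subset)
z = (df - df.mean()) / df.std(ddof=0)

# limit outliers to [-10, 10], then average across factors
score = z.clip(-10, 10).sum(axis=1) / df.shape[1]

print(score.sort_values(ascending=False))
```

The security names and factor values are invented purely to show the mechanics; the resulting `score` is bounded in [-10, 10] exactly as in `standard_frame_compute`/`composite_score`.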
*This presentation is for informational purposes only and does not constitute an offer to sell, a solicitation to buy, or a recommendation for any security; nor does it constitute an offer to provide investment advisory or other services by Quantopian, Inc. ("Quantopian"). Nothing contained herein constitutes investment advice or offers any opinion with respect to the suitability of any security, and any views expressed herein should not be taken as advice to buy, sell, or hold any security or as an endorsement of any security or company. In preparing the information contained herein, Quantopian, Inc. has not taken into account the investment needs, objectives, and financial circumstances of any particular investor. Any views expressed and data illustrated herein were prepared based upon information, believed to be reliable, available to Quantopian, Inc. at the time of publication. Quantopian makes no guarantees as to their accuracy or completeness. All information is subject to change and may quickly become unreliable for various reasons, including changes in market conditions or economic circumstances.*
# 1. File I/O Settings ``` hindcast_data_file = 'test_data/NMME_data_BD.csv' #data used for cross-validated hindcast skill analysis, and to train forecast model hindcast_has_years = True hindcast_has_header = False hindcast_has_obs = True #NOTE: This is mandatory hindcast_export_file = 'bd.csv' #'None' or the name of a file to save cross validated hindcasts forecast_data_file = 'test_data/NMME_data_BD_forecast.csv' #data fed to trained model to produce forecasts, or None forecast_has_years = True forecast_has_header = False forecast_has_obs = True #NOTE: for Forecasting, observations are optional forecast_export_file = 'bd_rtf.csv' variable = 'Precipitation (mm/day)' ``` # 2. Cross-Validated Hindcast Skill Evaluation #### 2a. Analysis Settings ``` mme_methodologies = ['EM', 'MLR', 'ELM'] #list of MME methodologies to use skill_metrics = [ 'MAE', 'IOA', 'MSE', 'RMSE', 'PearsonCoef', 'SpearmanCoef'] #list of metrics to compute - available: ['SpearmanCoef', 'SpearmanP', 'PearsonCoef', 'PearsonP', 'MSE', 'MAE', 'RMSE', 'IOA'] ``` #### 2b. Model Parameters ``` args = { #EnsembleMean settings 'em_xval_window': 1, #odd number - behavior undefined for even number #MLR Settings 'mlr_fit_intercept': True, #Whether to calculate the intercept for this model. If set to False, no intercept will be used in calculations (i.e. 
data is expected to be centered) (https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LinearRegression.html) 'mlr_xval_window': 1, #odd number - behavior undefined for even number 'mlr_standardization': None, #'std_anomaly' or None #ELM Settings 'elm_xval_window': 1, #odd number - behavior undefined for even number 'elm_hidden_layer_neurons':10, #number of hidden layer neurons - overridden if using PCA init 'elm_activation': 'sigm', #“lin” for linear, “sigm” or “tanh” for non-linear, “rbf_l1”, “rbf_l2” or “rbf_linf” for radial basis function neurons (https://hpelm.readthedocs.io/en/latest/api/elm.html) 'elm_standardization' : 'minmax', #'minmax' or 'std_anomaly' or None 'elm_minmax_range': [-1, 1] #choose [minimum, maximum] values for minmax scaling. ignored if not using minmax scaling } ``` #### 2c. Model Construction - Do Not Edit ``` from src import * reader = Reader() #Object that will handle our input data data = reader.read_txt(hindcast_data_file, has_years=hindcast_has_years, has_obs=hindcast_has_obs, has_header=hindcast_has_header) mme = MME(data) mme.train_mmes(mme_methodologies, args) mme.measure_skill(skill_metrics) ``` #### 2d. Cross-Validated Hindcast Timeline - Do Not Edit ``` ptr = Plotter(mme) ptr.timeline(methods=mme_methodologies, members=False, obs=True, var=variable) ``` #### 2e. Cross-Validated Hindcast Skill Metrics & Distributions - Do Not Edit ``` ptr.skill_matrix(methods=mme_methodologies, metrics=skill_metrics, obs=True, members=True) ptr.box_plot(methods=mme_methodologies, obs=True, members=False) ``` #### 2f. Saving MME & Exporting Cross-Validated Hindcasts - Do Not Edit ``` mme.export_csv(hindcast_export_file) ``` # 3. Real Time Forecasting #### 3a. RTF Settings ``` forecast_methodologies = ['EM', 'MLR', 'ELM' ] ``` #### 3b. 
Computation - Do Not Edit

```
fcst_data = reader.read_txt(forecast_data_file, has_years=forecast_has_years, has_obs=forecast_has_obs)
mme.add_forecast(fcst_data)
mme.train_rtf_models(forecast_methodologies, args)
mme.make_RTFs(forecast_methodologies)

ptr.bar_plot(methods=mme_methodologies, members=False, obs=forecast_has_obs)

mme.export_csv(forecast_export_file, fcst='forecasts', obs=forecast_has_obs)
```
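The `MME` and `Reader` classes come from the repo's own `src` package, so their internals are not visible here. Conceptually, a cross-validated hindcast with an odd `xval_window` holds out a window of years centred on each target year, trains on the remaining years, and predicts the held-out year. A self-contained sketch of that idea for an MLR-style member, using synthetic data and plain least squares (this is an illustration of the concept, not the project's actual implementation):

```python
import numpy as np

def xval_hindcast(X, y, window=1):
    """Leave-`window`-out cross-validated hindcast: for each year, fit a
    multiple linear regression on all other years, predict the held-out one."""
    n = len(y)
    half = window // 2
    preds = np.empty(n)
    Xb = np.column_stack([np.ones(n), X])  # intercept column
    for t in range(n):
        held = set(range(max(0, t - half), min(n, t + half + 1)))
        train = [i for i in range(n) if i not in held]
        coef, *_ = np.linalg.lstsq(Xb[train], y[train], rcond=None)
        preds[t] = Xb[t] @ coef
    return preds

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 3))  # 20 "years" x 3 member-model forecasts (synthetic)
y = X @ np.array([0.5, 0.3, 0.2]) + rng.normal(scale=0.1, size=20)
print(np.round(xval_hindcast(X, y), 2))
```

Skill metrics such as the Pearson coefficient would then be computed between these held-out predictions and the observations, which is what makes the hindcast skill estimate honest.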
``` import os.path import re import sys import numpy as np import json import time from six.moves import urllib import matplotlib as mpl from pprint import pprint import pandas as pd %matplotlib inline %load_ext autoreload %autoreload 2 ``` ## Preprocessing - BM Caption dataset - Remove duplication by image url - DONE - Remove materials tags - DONE - Remove dates tags - DONE - Remove parenthesis stuff - DONE - Remove images with short/non-descriptive captions - DONE ``` #Load the data we pre-grabbed from the SPARQL BM endpoint(http://collection.britishmuseum.org/sparql) with open('/data/captioning/bm_prints_urls_captions.json') as f: data = json.load(f) img_arr = data['results']['bindings'] len(img_arr) pprint(img_arr[0:20]) start = time.time() # Filter by unqiue uris from pandas.io.json import json_normalize df = json_normalize(img_arr) df.drop('caption.type', axis=1, inplace=True) df.drop('print.type', axis=1, inplace=True) df.drop('url.type', axis=1, inplace=True) df.head() print "Time:" + str(time.time() - start) df.head() # Drop duplicates according to caption.value and url.value start = time.time() print len(df) df.drop_duplicates(subset=['caption.value', 'url.value'], inplace=True) print len(df) print "Time:" + str(time.time() - start) df_count = df.groupby(['url.value']).size().reset_index().rename(columns={0:'count'}) # look if some objects have more than 1 captions df_count['count'].value_counts() #Drop further dup urls df.drop_duplicates(subset=['url.value'], inplace=True) print len(df) # get shortering data to work with df_tiny = df[:5].copy() df_tiny['capt_length'] = df_tiny['caption.value'].map(lambda x: len(x.split())) df_tiny ## Add caption length df['capt_length'] = df['caption.value'].map(lambda x: len(x.split())) df.tail(5) #reset index df.reset_index(drop=True) #plot occurrences of each length from collections import Counter c = Counter(df['capt_length']) plt.plot(*zip(*sorted(c.items()))) plt.xlim(1,10) #There's roughly around 1k of words less 
than 5. Remove them. df = df[df['capt_length'] > 5] df.reset_index(drop=True) list(df['caption.value'].sample(100)) from dateutil.parser import parse def is_date(string): try: parse(string) return True except (TypeError, ValueError): return False # We will extract the date + the material from the description and put into another column. is_date_ct = 0 got_material = 0 def stripExtras(capt): global is_date_ct global got_material arr = capt['caption.value'].split('\n') mat = 'n/a' orig = capt['caption.value'].strip() if len(arr) > 1: mat = arr[-1] orig = '\n'.join(arr[:-1]).strip() got_material+=1 arr = orig.split('.') date = 'n/a' if len(arr) > 1 and len(arr[-1]) > 0: if (is_date(arr[-1])): is_date_ct += 1 date = arr[-1] orig = '.'.join(arr[:-1]).strip() return pd.Series([orig, mat, date]) # def stripDates(capt): # #print capt # arr = capt['caption.value'].split('.') # res = 'n/a' # orig = capt['caption.value'].strip() # if len(arr) > 1: # print "Date: ", arr[-1] # print "\n\n" # #if (is_date(arr[-1])): # # res = arr[-1] # # orig = '.'.join(arr[:-1]).strip() # return pd.Series([orig, res]) df[['caption.value_cleaned', 'ext_dates', 'ext_materials']] = df.apply(stripExtras, axis=1) df.head(10) print float(is_date_ct)/len(df) print float(got_material)/len(df) #mistake df.rename(columns = {'ext_dates': 'materials'}, inplace = True) df.rename(columns = {'ext_materials': 'dates'}, inplace = True) list(df.sample(10)['caption.value_cleaned']) start = time.time() #use regex to extract out parentehsis stuff has_paren_ct = 0 import re def stripParen(input): global has_paren_ct s = input['caption.value_cleaned'] out = re.sub('\(.*?\)', '', s).strip() out = re.sub('\[.*?\]', '', out).strip() if s != out: has_paren_ct += 1 return pd.Series([out]) df[['caption.value_cleaned2']] = df.apply(stripParen, axis=1) print list(df.head(5)['caption.value_cleaned']) print list(df.head(5)['caption.value_cleaned2']) print "Time:" + str(time.time() - start) print "Percentage affected:" + 
str(float(has_paren_ct)/len(df)) print list(df[6000:6010]['caption.value_cleaned']) print list(df[6000:6010]['caption.value_cleaned2']) store = pd.HDFStore('/data/captioning/bm_data_clean.h5') df_to_store = df[['print.value', 'url.value', 'caption.value_cleaned2', 'materials', 'dates']] df_to_store.head(5) store['df'] = df_to_store.rename(index=str, columns={'caption.value_cleaned2': 'captions'}).reset_index(drop=True) store['df'] store.close() df2 = pd.read_hdf('/data/captioning/bm_data_clean.h5', 'df') df2 list(df2.sample(100)['url.value']) ```
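The dedup-then-strip pipeline above can be condensed into a few lines. Below is a minimal Python 3 rendition (the notebook itself is Python 2) on hypothetical toy records, using the same `(...)`/`[...]` regexes as `stripParen`:

```python
import re

# Hypothetical records mimicking the BM SPARQL results (made-up data).
records = [
    {"url": "u1", "caption": "A view of London (after Canaletto)"},
    {"url": "u1", "caption": "A view of London (after Canaletto)"},
    {"url": "u2", "caption": "Portrait of a man [cropped]"},
]

def strip_extras(text):
    # Remove (...) and [...] spans, then collapse whitespace.
    out = re.sub(r"\(.*?\)", "", text)
    out = re.sub(r"\[.*?\]", "", out)
    return re.sub(r"\s+", " ", out).strip()

# Deduplicate by URL (keep the first occurrence), then clean captions.
seen, cleaned = set(), []
for rec in records:
    if rec["url"] in seen:
        continue
    seen.add(rec["url"])
    cleaned.append(strip_extras(rec["caption"]))

print(cleaned)  # ['A view of London', 'Portrait of a man']
```

The non-greedy `.*?` matters here: a greedy `\(.*\)` would delete everything between the first `(` and the last `)` in a caption with two parenthesised spans.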
# Marching Wagon of useful Modules #### Changing your life one package at a time.... ### Retrying your functions ``` # retrying # https://pypi.python.org/pypi/retrying import time import random from retrying import retry @retry(stop_max_delay=1000) def do_something_unreliable(): if random.randint(0, 10) > 2: print "broken" raise IOError("Broken sauce, everything is hosed!!!") else: return "Awesome sauce!" print do_something_unreliable() ``` ### Waiting for sockets and files/directories ``` # python-wait # https://github.com/shawnsi/python-wait import wait # wait for a file to be created if wait.log.exists('c:/Users/ifruchte/.ssh/id_rsa.pub', timeout=0.5): print "directory/file exists" # wait until I can open a port to google if wait.tcp.open(80, host='www.google.com', timeout=5): print "port is ready" ``` ### dpath - Look/search into nested dicts ``` # dpath # https://pypi.python.org/pypi/dpath import pprint import dpath.util x = { "a": { "b": { "3": 2, "43": 30, "c": [], "d": ['red', 'buggy', 'bumpers'],} } } print "a/b/43 = %s" % dpath.util.get(x, '/a/b/43') result = dpath.util.search(x, "a/b/[cd]") pprint.pprint(result) ``` ### deepdiff - compare dicts easily ``` # deepdiff # http://deepdiff.readthedocs.io/en/latest/ from pprint import pprint from deepdiff import DeepDiff t1 = {1:1, 3:3, 4:4, 7:{"a":"hello", "b":"world", "c": "world!\nGoodbye!\n1\n2\nEnd"}} t2 = {1:1, 3:3, 5:5, 6:6, 7:{"a":"hello", "b":"world!!!", "c": "world!\n2\nEnd"}} ddiff = DeepDiff(t1, t2, verbose_level=2) pprint(ddiff, indent=2) # if it's multi line, we have unified diff print ddiff['values_changed']["root[7]['c']"]['diff'] ``` ### tqdm - progress bars for the lazy ``` # tqdm # https://pypi.python.org/pypi/tqdm from time import sleep from tqdm import tqdm, trange for i in trange(15): sleep(0.1) from tqdm import tnrange, tqdm_notebook for i in tnrange(5, desc='1st loop'): for j in tqdm_notebook(xrange(100), desc='2nd loop'): sleep(0.01) ``` #### Click - Building command-line 
applications ``` # click # http://click.pocoo.org/ import click @click.command() @click.option('--count', default=1, help='Number of greetings.') @click.option('--name', prompt='Your name', help='The person to greet.') def hello(count, name): """Simple program that greets NAME for a total of COUNT times.""" for x in range(count): click.echo('Hello %s!' % name) if __name__ == '__main__': # some boilerplate for running in IPython/Jupyter # http://click.pocoo.org/5/testing/#basic-testing from click.testing import CliRunner result = CliRunner().invoke(hello, ["--help"]) print result.output result = CliRunner().invoke(hello, ["--name", "Israel", "--count", "4"]) print result.output # this is how you usually do it # hello() ``` ### construct - parse/build binary data ``` # construct # http://construct.readthedocs.io/en/latest/ from construct import * from binascii import unhexlify, hexlify import six tcp_header = Struct("tcp_header", UBInt16("source"), UBInt16("destination"), UBInt32("seq"), UBInt32("ack"), EmbeddedBitStruct( ExprAdapter(Nibble("header_length"), encoder = lambda obj, ctx: obj / 4, decoder = lambda obj, ctx: obj * 4, ), Padding(3), Struct("flags", Flag("ns"), Flag("cwr"), Flag("ece"), Flag("urg"), Flag("ack"), Flag("psh"), Flag("rst"), Flag("syn"), Flag("fin"), ), ), UBInt16("window"), UBInt16("checksum"), UBInt16("urgent"), Field("options", lambda ctx: ctx.header_length - 20), ) if __name__ == "__main__": cap = unhexlify(six.b("0db5005062303fb21836e9e650184470c9bc0000")) obj = tcp_header.parse(cap) print (obj) built = tcp_header.build(obj) assert cap == built # check the round-trip before editing the parsed object obj.destination = 22 print (hexlify(tcp_header.build(obj))) ``` ### rpyc - seamless remote procedure calls ``` # rpyc # https://rpyc.readthedocs.io/en/latest/ # server side (let's say it's Ubuntu) # $ python rypc_classic.py # [SLAVE INFO 14:22:27 tid=2332] server started on 0.0.0.0:18812 # client code import rpyc conn = rpyc.classic.connect("rdkucs1.il.nds.com") # our server remote_sys = conn.modules.sys # access the sys 
module on the server remote_os = conn.modules["os"] remote_os.getcwd() # output: "/home/users/ifruchte/" ```
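The core of the `retrying` decorator shown at the top of this notebook can be approximated with the standard library alone. This is a minimal Python 3 sketch that retries a bounded number of attempts (rather than `retrying`'s `stop_max_delay` time budget — a deliberate simplification):

```python
import time

def retry(attempts=3, delay=0.0, exceptions=(Exception,)):
    """Minimal retry decorator: re-run fn up to `attempts` times."""
    def decorator(fn):
        def wrapper(*args, **kwargs):
            for i in range(attempts):
                try:
                    return fn(*args, **kwargs)
                except exceptions:
                    if i == attempts - 1:
                        raise          # out of attempts: re-raise
                    time.sleep(delay)  # back off before the next try
        return wrapper
    return decorator

calls = {"n": 0}

@retry(attempts=5)
def flaky():
    # Fails twice, then succeeds -- mimics do_something_unreliable().
    calls["n"] += 1
    if calls["n"] < 3:
        raise IOError("Broken sauce!")
    return "Awesome sauce!"

result = flaky()
print(result, calls["n"])  # Awesome sauce! 3
```

The real library adds exponential backoff, time-based stop conditions, and retry-on-result predicates; reach for it once a sketch like this grows options.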
``` !conda upgrade scikit-learn -y from azureml import services from azureml import Workspace from azure.servicebus import ServiceBusService import warnings; warnings.filterwarnings('ignore') import datetime from dateutil import parser import numpy as np import pandas as pd import matplotlib.pyplot as plt from sklearn import cross_validation from sklearn.linear_model import LogisticRegression from sklearn.ensemble import RandomForestClassifier from sklearn import svm #from sklearn.ensemble import VotingClassifier from sklearn.metrics import classification_report from sklearn.metrics import confusion_matrix from sklearn.metrics import f1_score import requests import urllib2 import json from pylab import rcParams rcParams['figure.figsize'] = 14, 3 %matplotlib inline pd.options.display.max_colwidth = 500 ws = Workspace() ``` ## Keys and Constants ``` Anomaly_Detection_KEY = 'put key here' ``` # Pull in and Pre-Process Data **Import data** ``` vel_raw = ws.datasets['vel_raw'].to_dataframe().set_index('time').value vel_edited = ws.datasets['vel_curated'].to_dataframe().set_index('time').value df = pd.concat({'velocity': vel_raw, 'velocity_edited': vel_edited}, axis=1, join='inner') df = df.reset_index() df['time'] = pd.to_datetime(df['time'], format='%Y-%m-%d %H:%M') df = df.set_index('time') df ``` **Generate Velocity Window of Size 4** ``` #create lagged velocity channels for i in range(1, 5): df['velocity_{}'.format(i)] = df.velocity.shift(i) df = df.dropna() df.head(4) ``` **Tag Anomalies** ``` df['diff'] = np.abs(df.velocity - df.velocity_edited) df['anomaly'] = df['diff'].apply(lambda x: 1 if x > 0 else 0) df.head(4) df[df['anomaly']>0].count() ``` # Visualize Data **Check a Few Points** ``` df['2013-04-02 02:30:00':'2013-04-02 02:50:00'] ``` **Plot Sample Daily Pattern** ``` day_df = df['2013-09-03':'2013-09-03'] day_anomalies = day_df[day_df['anomaly']==1] plt.plot(day_df['velocity']) plt.scatter(day_anomalies.index, day_anomalies.velocity, color='r') ``` **Plot Sample 
Daily Pattern With Anomalies** ``` day_df = df['2013-09-02':'2013-09-02'] day_anomalies = day_df[day_df['anomaly']==1] plt.plot(day_df['velocity']) plt.scatter(day_anomalies.index, day_anomalies.velocity, color='r') ``` # Compare Anomaly Detection and Outlier Detection Models **Define Metrics** ``` def print_report(expected, predicted): target_names = ['Anomalies', 'Regular velocity'] print("Confusion Matrix") print(confusion_matrix(expected, predicted)) print(classification_report(expected, predicted, target_names = target_names)) ``` # Todo Update to reflect the new api **Model #1 : Outlier Detection** - Send the raw data to the [Anomaly Detection API](https://docs.microsoft.com/en-us/azure/machine-learning/machine-learning-apps-anomaly-detection-api) to tag outliers - Score outlier model using [Anomaly Detection API](https://docs.microsoft.com/en-us/azure/machine-learning/machine-learning-apps-anomaly-detection-api) results against the 'manually tagged anomalies' ``` #Format data for processing velDf = vel_raw.to_frame().reset_index() velDf.value = velDf.value values = [[parser.parse(t).strftime('%m/%d/%Y %H:%M:%S'),d] for t,d in velDf.values.tolist()] values def detectAnomalies(values): data = { "Inputs": { "input1": { "ColumnNames": ["Time", "Data"], "Values": values }, }, "GlobalParameters": { "tspikedetector.sensitivity": "3", "zspikedetector.sensitivity": "3", "bileveldetector.sensitivity": "3.25", "detectors.spikesdips": "Both" } } body = str.encode(json.dumps(data)) url = 'https://europewest.services.azureml.net/subscriptions/13bbfab4b75b461c98963a55594775f2/services/eb7b355fe6534415ac983c6170756c3c/execute?api-version=2.0&details=true' headers = {'Content-Type':'application/json', 'Authorization':('Bearer '+ Anomaly_Detection_KEY)} req = urllib2.Request(url, body, headers) try: response = urllib2.urlopen(req) result = response.read() return result except urllib2.HTTPError, error: print("The request failed with status code: " + str(error.code)) # Print 
the headers - they include the request ID and the timestamp, which are useful for debugging the failure print(error.info()) print(json.loads(error.read())) raw_outliers = json.loads(detectAnomalies(values))['Results']['output1']['value'] outliers = pd.DataFrame(raw_outliers['Values'], columns=raw_outliers['ColumnNames']) outliers['detected_outliers'] = pd.to_numeric(outliers['rpscore']).apply(lambda x: 1 if x > 0 else 0) print_report(df['anomaly'], outliers[4:]['detected_outliers']) ``` **Model #2: Binary Classifier** - Create a historical window of the previous four velocity-channel readings at each time. - Create a train and test set from a random split on the historical windows. - Train a random forest classifier on the train data. - Benchmark the random forest on the test data. ``` #Define test and training set columns = ['velocity', 'velocity_1','velocity_2','velocity_3','velocity_4'] X_train, X_test, y_train, y_test = cross_validation.train_test_split(df[columns], df.anomaly, test_size=0.3) clf = RandomForestClassifier() clf.fit(X_train, y_train) xs = clf.predict(X_test) print_report(y_test, xs) ``` **Model #3: Hybrid Classifier** - Create a historical window of the previous four velocity-channel readings at each time using only the values marked as outliers. - Create a train and test set from a random split on the historical windows. - Train a random forest classifier on the train data. - Benchmark the random forest on the test data. - Benchmark the random forest on the entire velocity time series excluding the training set. 
``` detected_outliers = pd.DataFrame({'time':outliers[4:]['Time'],'detected_outliers':outliers[4:]['detected_outliers']}) detected_outliers = detected_outliers[detected_outliers['detected_outliers'] == 1] detected_outliers['time'] = pd.to_datetime(detected_outliers['time']) df_outliers = df.loc[detected_outliers['time']] X_train, X_test, y_train, y_test = cross_validation.train_test_split(df_outliers[columns], df_outliers.anomaly, test_size=0.3) clf = RandomForestClassifier() clf.fit(X_train, y_train) xs = clf.predict(X_test) print_report(y_test, xs) unseenValues = pd.DataFrame({'time':outliers[4:]['Time'],'detected_outliers':outliers[4:]['detected_outliers']}) unseenValues = unseenValues[unseenValues['detected_outliers'] == 0] unseenValues['time'] = pd.to_datetime(unseenValues['time']) unseenValues = df.loc[unseenValues['time']] oseries = clf.predict(unseenValues[columns]) print_report(unseenValues.anomaly, oseries) ``` The binary classification model greatly helped differentiate between outliers, anomalies, and regular values. While the anomalies in this data set are linearly separable, in other data sets this technique could be used to yield more accurate results across the 475 sensor errors in the dataset. ## Results Though the anomaly detection API helped differentiate outliers for anomaly classification, in Carl Data's dataset the difference between anomalies and regular flow was linearly separable enough that a random forest binary classifier provided results just as good as the anomaly detection API's, without the overhead. 
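For intuition about what `print_report` summarises, here is a hand-rolled version of the confusion-matrix bookkeeping for binary 0/1 labels — a sketch on made-up labels, not the notebook's data, with 1 treated as the anomaly (positive) class:

```python
def confusion(expected, predicted):
    # Returns (tp, fp, fn, tn) for binary 0/1 labels.
    tp = sum(1 for e, p in zip(expected, predicted) if e == 1 and p == 1)
    fp = sum(1 for e, p in zip(expected, predicted) if e == 0 and p == 1)
    fn = sum(1 for e, p in zip(expected, predicted) if e == 1 and p == 0)
    tn = sum(1 for e, p in zip(expected, predicted) if e == 0 and p == 0)
    return tp, fp, fn, tn

expected  = [1, 1, 0, 0, 1, 0]
predicted = [1, 0, 0, 0, 1, 1]

tp, fp, fn, tn = confusion(expected, predicted)
precision = tp / (tp + fp)  # of flagged anomalies, how many were real
recall = tp / (tp + fn)     # of real anomalies, how many were flagged
print(tp, fp, fn, tn, precision, recall)
```

With heavily imbalanced classes like these sensor errors, precision/recall on the anomaly class is far more informative than raw accuracy, which a model could maximise by never flagging anything.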
# Put Model Into Production ``` @services.publish(ws) @services.types(curVel = float ,vel1 = float, vel2 = float,vel3 = float, vel4 = float) @services.returns(int) def detectAnomaly(curVel,vel1,vel2,vel3,vel4): result = clf.predict([curVel,vel1,vel2,vel3,vel4]) return result[0] # show information about the web service serviceInfo = { 'service_url' : detectAnomaly.service.url, 'api_key' : detectAnomaly.service.api_key, 'help_url' : detectAnomaly.service.help_url, 'service_id' : detectAnomaly.service.service_id, } serviceInfo ``` **Test with anomaly** ``` print(detectAnomaly(0, 0, 0, 0, 10)) ``` **Test with normal flow** ``` print(detectAnomaly(1.119, 1.162, 1.058, 1.065, 1.058)) ``` # Create Event Hub For Visualization **Create an [Eventhub namespace](https://docs.microsoft.com/en-us/azure/event-hubs/event-hubs-resource-manager-namespace-event-hub) from the azure portal and fill in the following** ``` servns = 'pyconil2017sb' key_name = 'createandsend' # SharedAccessKeyName from Azure portal key_value = '0fvHx77YIng4rn/SsxROp+1kFTd5GJ76WDzo5K5e8ps=' # SharedAccessKey from Azure portal ``` **Init Service Bus Service** ``` sbs = ServiceBusService(service_namespace=servns, shared_access_key_name=key_name, shared_access_key_value=key_value) # Create a ServiceBus Service Object ``` ** Create Visualization Event Hub ** ``` anomaly_visulization = sbs.create_event_hub('anomaly_visulization') # Create a Event Hub for the ServiceBus. 
If it exists then return true, else return false print(anomaly_visulization) def sendToEventHub(time, curVel, anomaly): event_data = json.dumps({ 'time': time, 'curVel': curVel, 'anomaly': anomaly }) sbs.send_event('anomaly_visulization', event_data) ``` **Link Visualization Eventhub to [Stream Analytics and PowerBI Embedded](https://docs.microsoft.com/en-us/azure/stream-analytics/stream-analytics-power-bi-dashboard)** # Simulate a flow sensor and feed the data to our system ``` for index, row in df['2013-09-02':'2013-09-02'].iterrows(): anomaly = bool(detectAnomaly(row['velocity'], row['velocity_1'], row['velocity_2'], row['velocity_3'], row['velocity_4'])) time = (index.strftime("%Y-%m-%d %H:%M:%S")) sendToEventHub(time,row['velocity'],anomaly) ```
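The feature/label construction used throughout this notebook — lag windows via `shift` and anomaly tags from the raw-vs-curated difference — can be illustrated with plain Python on a hypothetical six-reading series:

```python
# Hypothetical raw vs. manually curated velocity readings (made-up values).
raw    = [1.0, 1.1, 9.9, 1.2, 1.1, 1.0]
edited = [1.0, 1.1, 1.15, 1.2, 1.1, 1.0]

# Tag an anomaly wherever the curated series differs from the raw one,
# mirroring df['anomaly'] = (|velocity - velocity_edited| > 0).
anomaly = [1 if abs(r - e) > 0 else 0 for r, e in zip(raw, edited)]

# Build a window of the current value plus the 4 previous readings,
# mirroring the shifted velocity_1..velocity_4 columns (and the dropna
# that discards the first 4 rows, which lack a full history).
window = 4
rows = [raw[i - window:i + 1] for i in range(window, len(raw))]

print(anomaly)  # [0, 0, 1, 0, 0, 0]
print(rows)     # [[1.0, 1.1, 9.9, 1.2, 1.1], [1.1, 9.9, 1.2, 1.1, 1.0]]
```

Each row of `rows` is one training example for the random forest: the spike at index 2 appears in every window that overlaps it, which is what lets the classifier learn local context around an anomaly.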
<table class="ee-notebook-buttons" align="left"> <td><a target="_blank" href="https://github.com/giswqs/earthengine-py-notebooks/tree/master/GetStarted/04_band_math.ipynb"><img width=32px src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" /> View source on GitHub</a></td> <td><a target="_blank" href="https://nbviewer.jupyter.org/github/giswqs/earthengine-py-notebooks/blob/master/GetStarted/04_band_math.ipynb"><img width=26px src="https://upload.wikimedia.org/wikipedia/commons/thumb/3/38/Jupyter_logo.svg/883px-Jupyter_logo.svg.png" />Notebook Viewer</a></td> <td><a target="_blank" href="https://mybinder.org/v2/gh/giswqs/earthengine-py-notebooks/master?filepath=GetStarted/04_band_math.ipynb"><img width=58px src="https://mybinder.org/static/images/logo_social.png" />Run in binder</a></td> <td><a target="_blank" href="https://colab.research.google.com/github/giswqs/earthengine-py-notebooks/blob/master/GetStarted/04_band_math.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" /> Run in Google Colab</a></td> </table> ## Install Earth Engine API and geemap Install the [Earth Engine Python API](https://developers.google.com/earth-engine/python_install) and [geemap](https://github.com/giswqs/geemap). The **geemap** Python package is built upon the [ipyleaflet](https://github.com/jupyter-widgets/ipyleaflet) and [folium](https://github.com/python-visualization/folium) packages and implements several methods for interacting with Earth Engine data layers, such as `Map.addLayer()`, `Map.setCenter()`, and `Map.centerObject()`. The following script checks if the geemap package has been installed. If not, it will install geemap, which automatically installs its [dependencies](https://github.com/giswqs/geemap#dependencies), including earthengine-api, folium, and ipyleaflet. 
**Important note**: A key difference between folium and ipyleaflet is that ipyleaflet is built upon ipywidgets and allows bidirectional communication between the front-end and the backend enabling the use of the map to capture user input, while folium is meant for displaying static data only ([source](https://blog.jupyter.org/interactive-gis-in-jupyter-with-ipyleaflet-52f9657fa7a)). Note that [Google Colab](https://colab.research.google.com/) currently does not support ipyleaflet ([source](https://github.com/googlecolab/colabtools/issues/60#issuecomment-596225619)). Therefore, if you are using geemap with Google Colab, you should use [`import geemap.eefolium`](https://github.com/giswqs/geemap/blob/master/geemap/eefolium.py). If you are using geemap with [binder](https://mybinder.org/) or a local Jupyter notebook server, you can use [`import geemap`](https://github.com/giswqs/geemap/blob/master/geemap/geemap.py), which provides more functionalities for capturing user input (e.g., mouse-clicking and moving). ``` # Installs geemap package import subprocess try: import geemap except ImportError: print('geemap package not installed. Installing ...') subprocess.check_call(["python", '-m', 'pip', 'install', 'geemap']) # Checks whether this notebook is running on Google Colab try: import google.colab import geemap.eefolium as emap except: import geemap as emap # Authenticates and initializes Earth Engine import ee try: ee.Initialize() except Exception as e: ee.Authenticate() ee.Initialize() ``` ## Create an interactive map The default basemap is `Google Satellite`. [Additional basemaps](https://github.com/giswqs/geemap/blob/master/geemap/geemap.py#L13) can be added using the `Map.add_basemap()` function. ``` Map = emap.Map(center=[40,-100], zoom=4) Map.add_basemap('ROADMAP') # Add Google Map Map ``` ## Add Earth Engine Python script ``` # Add Earth Engine dataset # This function gets NDVI from Landsat 5 imagery. 
def getNDVI(image): return image.normalizedDifference(['B4', 'B3']) # Load two Landsat 5 images, 20 years apart. image1 = ee.Image('LANDSAT/LT05/C01/T1_TOA/LT05_044034_19900604') image2 = ee.Image('LANDSAT/LT05/C01/T1_TOA/LT05_044034_20100611') # Compute NDVI from the scenes. ndvi1 = getNDVI(image1) ndvi2 = getNDVI(image2) # Compute the difference in NDVI. ndviDifference = ndvi2.subtract(ndvi1) ndviParams = {'palette': ['#d73027', '#f46d43', '#fdae61', '#fee08b', '#d9ef8b', '#a6d96a', '#66bd63', '#1a9850']} ndwiParams = {'min': -0.5, 'max': 0.5, 'palette': ['FF0000', 'FFFFFF', '0000FF']} Map.centerObject(image1, 10) Map.addLayer(ndvi1, ndviParams, 'NDVI 1') Map.addLayer(ndvi2, ndviParams, 'NDVI 2') Map.addLayer(ndviDifference, ndwiParams, 'NDVI difference') ``` ## Display Earth Engine data layers ``` Map.addLayerControl() # This line is not needed for ipyleaflet-based Map. Map ```
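`image.normalizedDifference(['B4', 'B3'])` computes `(B4 - B3) / (B4 + B3)` per pixel — for Landsat 5, B4 is near-infrared and B3 is red. A pure-Python sketch of the same formula on two hypothetical reflectance pairs (not Earth Engine data):

```python
def normalized_difference(nir, red):
    # NDVI = (NIR - Red) / (NIR + Red), in [-1, 1].
    return (nir - red) / (nir + red)

# (NIR, Red) reflectances: vegetated pixel vs. bare/neutral pixel.
pixels = [(0.5, 0.1), (0.3, 0.3)]
ndvi = [normalized_difference(n, r) for n, r in pixels]
print(ndvi)  # approximately [0.667, 0.0]
```

Healthy vegetation reflects strongly in NIR and absorbs red, pushing NDVI toward 1; the NDVI *difference* mapped in this notebook is simply this quantity computed per scene and subtracted pixel-wise.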
``` import numpy as np import pandas as pd from sklearn.base import BaseEstimator, TransformerMixin from sklearn.preprocessing import StandardScaler class MyTransformer(BaseEstimator, TransformerMixin): def __init__(self): self._mean_X = None self._std_X = None def fit(self, X: np.array, y = None): if isinstance(X, pd.DataFrame): self._mean_X = X.values.mean(axis=0) self._std_X = X.values.std(axis=0) else: self._mean_X = X.mean(axis=0) self._std_X = X.std(axis=0) return self def transform(self, X: np.array, y = None): return (X.copy() - self._mean_X) / self._std_X df = pd.DataFrame({ 'a': range(2, 12, 2), 'b': range(20, 120, 20), 'c': np.linspace(0, 1, 5), 'd': np.linspace(-1, 1, 5) }) df MyTransformer().fit_transform(df) StandardScaler().fit_transform(df) X_train = np.array([[ 1., -1., 3.], [ 3., 0., 0.], [ 0., 1., -1.]]) StandardScaler().fit_transform(X_train) MyTransformer().fit_transform(X_train) N = 100 X0 = np.random.uniform(size=N).reshape(-1,1) X1 = np.random.normal(size=N).reshape(-1,1) X2 = np.random.binomial(n=10, p=0.2, size=N).reshape(-1,1) X3 = np.random.exponential(size=N).reshape(-1,1) X4 = np.random.poisson(lam=1.0, size=N).reshape(-1,1) X5 = np.random.triangular(left=-1.0, mode=0.0, right=1.0, size=N).reshape(-1,1) X6 = np.random.weibull(a=1.0, size=N).reshape(-1,1) X7 = np.random.weibull(a=5.0, size=N).reshape(-1,1) from sklearn.preprocessing import (MaxAbsScaler, MinMaxScaler, Normalizer, PowerTransformer, QuantileTransformer, StandardScaler) def transform(X): return pd.DataFrame({ 'X': X.flatten(), 'BoxCox': PowerTransformer(method='box-cox', standardize=False).fit_transform(X).flatten(), 'BoxCox_Std': PowerTransformer(method='box-cox', standardize=True).fit_transform(X).flatten(), 'MaxAbs': MaxAbsScaler().fit_transform(X).flatten(), 'MinMax': MinMaxScaler().fit_transform(X).flatten(), 'NormalizerL1': Normalizer(norm='l1').fit_transform(X).flatten(), # Same as: preprocessing.normalize(df, norm='l1') 'NormalizerL2': 
Normalizer(norm='l2').fit_transform(X).flatten(), # Same as: preprocessing.normalize(df, norm='l2') 'Quantile10': QuantileTransformer(n_quantiles=10).fit_transform(X).flatten(), 'Standard': StandardScaler().fit_transform(X).flatten(), 'YeoJohnson': PowerTransformer(method='yeo-johnson').fit_transform(X).flatten() }, index=np.arange(len(X))) dfX = transform(X0) dfX ```
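Both `MyTransformer` and `StandardScaler` standardise each column with the *population* standard deviation (NumPy's default `ddof=0`, which is also what sklearn uses), which is why their outputs match above. The same arithmetic by hand, on a small made-up matrix:

```python
# Hand-rolled column standardization: subtract each column's mean,
# divide by its population standard deviation (ddof=0).
data = [[1.0, 10.0], [3.0, 20.0], [5.0, 30.0]]
cols = list(zip(*data))  # transpose to iterate columns

def mean_std(xs):
    m = sum(xs) / len(xs)
    var = sum((x - m) ** 2 for x in xs) / len(xs)  # population variance
    return m, var ** 0.5

stats = [mean_std(c) for c in cols]
scaled = [[(x - m) / s for x, (m, s) in zip(row, stats)] for row in data]
print(scaled)  # each column now has mean 0 and unit variance
```

Had we used the sample standard deviation (`ddof=1`) instead, a hand-rolled transformer would disagree slightly with `StandardScaler` — a common source of "almost equal" mismatches when reimplementing scalers.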
# Introduction One of the most basic questions we might ask of a model is: What features have the biggest impact on predictions? This concept is called **feature importance**. There are multiple ways to measure feature importance. Some approaches answer subtly different versions of the question above. Other approaches have documented shortcomings. In this lesson, we'll focus on **permutation importance**. Compared to most other approaches, permutation importance is: - fast to calculate, - widely used and understood, and - consistent with properties we would want a feature importance measure to have. # How It Works Permutation importance uses models differently than anything you've seen so far, and many people find it confusing at first. So we'll start with an example to make it more concrete. Consider data with the following format: ![Data](https://i.imgur.com/wjMAysV.png) We want to predict a person's height when they become 20 years old, using data that is available at age 10. Our data includes useful features (*height at age 10*), features with little predictive power (*socks owned*), as well as some other features we won't focus on in this explanation. **Permutation importance is calculated after a model has been fitted.** So we won't change the model or change what predictions we'd get for a given value of height, sock-count, etc. Instead we will ask the following question: If I randomly shuffle a single column of the validation data, leaving the target and all other columns in place, how would that affect the accuracy of predictions in that now-shuffled data? ![Shuffle](https://i.imgur.com/h17tMUU.png) Randomly re-ordering a single column should cause less accurate predictions, since the resulting data no longer corresponds to anything observed in the real world. Model accuracy especially suffers if we shuffle a column that the model relied on heavily for predictions. In this case, shuffling `height at age 10` would cause terrible predictions. 
If we shuffled `socks owned` instead, the resulting predictions wouldn't suffer nearly as much. With this insight, the process is as follows: 1. Get a trained model. 2. Shuffle the values in a single column, make predictions using the resulting dataset. Use these predictions and the true target values to calculate how much the loss function suffered from shuffling. That performance deterioration measures the importance of the variable you just shuffled. 3. Return the data to the original order (undoing the shuffle from step 2). Now repeat step 2 with the next column in the dataset, until you have calculated the importance of each column. # Code Example Our example will use a model that predicts whether a soccer/football team will have the "Man of the Game" winner based on the team's statistics. The "Man of the Game" award is given to the best player in the game. Model-building isn't our current focus, so the cell below loads the data and builds a rudimentary model. ``` import numpy as np import pandas as pd from sklearn.model_selection import train_test_split from sklearn.ensemble import RandomForestClassifier data = pd.read_csv('../input/fifa-2018-match-statistics/FIFA 2018 Statistics.csv') y = (data['Man of the Match'] == "Yes") # Convert from string "Yes"/"No" to binary feature_names = [i for i in data.columns if data[i].dtype in [np.int64]] X = data[feature_names] train_X, val_X, train_y, val_y = train_test_split(X, y, random_state=1) my_model = RandomForestClassifier(n_estimators=100, random_state=0).fit(train_X, train_y) ``` Here is how to calculate and show importances with the [eli5](https://eli5.readthedocs.io/en/latest/) library: ``` import eli5 from eli5.sklearn import PermutationImportance perm = PermutationImportance(my_model, random_state=1).fit(val_X, val_y) eli5.show_weights(perm, feature_names = val_X.columns.tolist()) ``` # Interpreting Permutation Importances The values towards the top are the most important features, and those towards the bottom 
matter least. The first number in each row shows how much model performance decreased with a random shuffling (in this case, using "accuracy" as the performance metric). Like most things in data science, there is some randomness to the exact performance change from shuffling a column. We measure the amount of randomness in our permutation importance calculation by repeating the process with multiple shuffles. The number after the **±** measures how performance varied from one reshuffling to the next. You'll occasionally see negative values for permutation importances. In those cases, the predictions on the shuffled (or noisy) data happened to be more accurate than on the real data. This happens when the feature didn't matter (it should have had an importance close to 0), but random chance caused the predictions on shuffled data to be more accurate. This is more common with small datasets, like the one in this example, because there is more room for luck/chance. In our example, the most important feature was **Goals scored**. That seems sensible. Soccer fans may have some intuition about whether the orderings of other variables are surprising or not. # Your Turn **[Get started here](#$NEXT_NOTEBOOK_URL$)** to flex your new permutation importance knowledge.
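The three-step shuffle-and-score procedure described in this lesson fits in a few lines of plain Python. This sketch uses a hypothetical stand-in "model" that only looks at feature 0 (instead of a fitted random forest), so shuffling feature 0 hurts accuracy while shuffling the noise feature changes nothing:

```python
import random

# Tiny synthetic task: label = 1 iff feature 0 is positive; feature 1 is noise.
random.seed(0)
X = [[random.uniform(-1, 1), random.uniform(-1, 1)] for _ in range(200)]
y = [1 if row[0] > 0 else 0 for row in X]

def model_predict(rows):
    # Stand-in "trained model" that relies only on feature 0.
    return [1 if row[0] > 0 else 0 for row in rows]

def accuracy(pred, truth):
    return sum(p == t for p, t in zip(pred, truth)) / len(truth)

def permutation_importance(X, y, col):
    # Step 2 of the recipe: shuffle one column, re-score, report the drop.
    base = accuracy(model_predict(X), y)
    shuffled = [row[:] for row in X]         # leave the original data intact
    vals = [row[col] for row in shuffled]
    random.shuffle(vals)
    for row, v in zip(shuffled, vals):
        row[col] = v
    return base - accuracy(model_predict(shuffled), y)

imp0 = permutation_importance(X, y, 0)
imp1 = permutation_importance(X, y, 1)
print(imp0, imp1)  # feature 0: large drop; feature 1: no drop
```

Copying the rows before shuffling is step 3 in disguise: each column's importance is measured against the untouched original data, not against data already degraded by earlier shuffles. Averaging over several reshuffles is what produces the ± spread eli5 reports.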
``` """ Load some libs """ """ python 2 lib using networkx """ import matplotlib.pyplot as plt import networkx as nx import random import math import pandas as pd import statsmodels.api as sm import glob import os import numpy as np from PIL import Image from helpers import * import pickle import time #random.seed(100) #tic = time.time() """ Let's see what WEPPS we have """ folders = sorted(glob.glob('WEPP_FILES/*')) print folders """ let's get all the WEPP dbs""" wepps = [ 'WEPP_FILES/2007-Q4-DEC', 'WEPP_FILES/2008-Q4-DEC', 'WEPP_FILES/2009-Q4-DEC', 'WEPP_FILES/2010-Q4-DEC', 'WEPP_FILES/2011-Q3-SEP', 'WEPP_FILES/2012-Q4-DEC', 'WEPP_FILES/2013-Q4-DEC', 'WEPP_FILES/2014-q4-dec', #'WEPP_FILES/2015-Q1-MAR', #'WEPP_FILES/2015-Q2-JUN', #'WEPP_FILES/2015-q3-sep', 'WEPP_FILES/2015-Q4-DEC', #'WEPP_FILES/2016-Q1-APR', #'WEPP_FILES/2016-Q2-JUL', #'WEPP_FILES/2016-Q3-SEP', 'WEPP_FILES/2016-Q4-DEC', #'WEPP_FILES/2017-Q1-MAR', #'WEPP_FILES/2017-Q2-JUL', #'WEPP_FILES/2017-Q3-SEP', 'WEPP_FILES/2017-Q4-DEC'] years = range(2007,2018) #print years wepps_dict = dict(zip(years,wepps)) print wepps_dict wepp_dfs = {} for y in years: wepp_dfs[y] = get_wepp(wepps_dict[y]) """ fix all the wepp stuff, fix categories, interpolate dates, add all the columns """ def prep_wepp(wepp_df): # merge with ISO, country budgets and load factors print '~~~~~~ GENERATING DF ~~~~~~~' print 'loading df...' df_iso = pd.read_csv('country_ISO_update.csv') fuel_class = 'fuel_classification_database.dta' df_fuel_class = pd.io.stata.read_stata(fuel_class) heat_rates_xls = 'Heat rates_v3.xls' df_heatrates = pd.read_excel(heat_rates_xls, sheet_name='CSV_output') df_load_factor = pd.io.stata.read_stata('load_factor_database.dta') print 'loaded dfs: ' print 'merging dfs and filling missing years...' 
#df_fuel_load = pd.merge(df_fuel_class, df_load_factor, on='fuel_class') #print df_iso #print df_fuel_class #print df_heatrates #print df_load_factor #print list(wepp_df) #print wepp_df['FUEL'] df_fuel_class.rename(columns = {'fuel': 'FUEL'}, inplace = True) #fix fuel classes wepp_df = wepp_df.merge(df_fuel_class, on='FUEL', how='left') df_wepp_em_fact = pd.read_csv('wepp_em_fact.csv') #merge emissions factors wepp_df = wepp_df.merge(df_wepp_em_fact, left_on='FUEL', right_on='fuel', how='left') #prepare lookup indexer wepp_df['FORMAT_HR'] = wepp_df.apply(lambda row: format_hr(row), axis=1) #standardise statuses wepp_df.loc[wepp_df.STATUS=='DEF', 'STATUS'] = 'PLN' wepp_df.loc[wepp_df.STATUS=='DEL', 'STATUS'] = 'CON' wepp_df.loc[wepp_df.STATUS=='UNK', 'STATUS'] = 'PLN' wepp_df.loc[wepp_df.STATUS=='DAC', 'STATUS'] = 'STN' #print list(df_iso) #add ISO wepp_df = wepp_df.merge(df_iso[['Caps','ISO','Region']], left_on='COUNTRY', right_on='Caps', how='left') #fill in missing years all_training = wepp_df[['YEAR','fuel_class','STATUS','Region','FORMAT_HR']] all_training['fuel_class'] = all_training['fuel_class'].astype('category') all_training['STATUS'] = all_training['STATUS'].astype('category') all_training['Region'] = all_training['Region'].astype('category') all_training['FORMAT_HR'] = all_training['FORMAT_HR'].astype('category') all_training = pd.get_dummies(all_training[['YEAR','fuel_class','STATUS','Region','FORMAT_HR']], columns = ['fuel_class','STATUS','Region','FORMAT_HR']) year_train_X = all_training[all_training.YEAR.notnull()].drop('YEAR', axis=1) year_train_Y = all_training.loc[all_training.YEAR.notnull(),'YEAR'] year_train_X = sm.add_constant(year_train_X) test_data = all_training.loc[all_training.YEAR.isnull()].drop('YEAR', axis=1) test_data = sm.add_constant(test_data) est = sm.OLS(year_train_Y, year_train_X) est = est.fit() wepp_df['YEAR_EST_FLAG'] = 0 wepp_df.loc[wepp_df.YEAR.isnull(),'YEAR_EST_FLAG'] = 1 wepp_df.loc[wepp_df.YEAR.isnull(),'YEAR'] = 
est.predict(test_data) #get heatrates wepp_df = wepp_df.merge(df_heatrates, left_on='FORMAT_HR', right_on='unique_id', how='left') wepp_df['HEATRATE'] = wepp_df.apply(lambda row: get_hr(row), axis=1) drop_cols = [col for col in list(wepp_df) if isinstance(col,int)] wepp_df.drop(drop_cols, axis=1, inplace=True) #get CO2 int, CCCE wepp_df = wepp_df.merge(df_load_factor, on='fuel_class', how='left') wepp_df['YEARS_LEFT'] = np.where(wepp_df['STATUS']=='OPR', wepp_df['YEAR']+40-2017, 0) wepp_df.YEARS_LEFT.clip(lower=0.0, inplace=True) #set min years left to 0 print 'dfs merged and interped: ' print 'calculating carbon and MWs...' wepp_df['CO2_INT'] = wepp_df['em_fact'] /2.205 * wepp_df['HEATRATE'] / 1000 wepp_df['CCCE'] = 8760 * wepp_df['MW'] * wepp_df['YEARS_LEFT'] * wepp_df['load_factor'] * wepp_df['CO2_INT'] /1000 #tonnes #wepp_df.sort_values('CCCE', inplace=True) #print wepp_df #print list(wepp_df) #print wepp_df.CCCE #print all_countries #exit() #sort WEPP wepp_df.sort_values('CCCE', inplace=True, ascending=False) wepp_df['green']=wepp_df.fuel_class.isin(['SUN','BIOGAS','WASTE','BIOOIL','WIND','BIOMASS','GEOTHERMAL']) wepp_df['green_MW'] = wepp_df.MW*wepp_df.green wepp_df['blue']=wepp_df.fuel_class.isin(['WATER','NUCLEAR']) wepp_df['blue_MW'] = wepp_df.MW*wepp_df.blue wepp_df['solar']=wepp_df.fuel_class.isin(['SUN']) wepp_df['solar_MW'] = wepp_df.MW*wepp_df.solar wepp_df['wind']=wepp_df.fuel_class.isin(['WIND']) wepp_df['wind_MW'] = wepp_df.MW*wepp_df.wind wepp_df['ff']=~wepp_df.fuel_class.isin(['SUN','BIOGAS','WASTE','BIOOIL','WIND','BIOMASS','GEOTHERMAL','WATER','NUCLEAR']) wepp_df['ff_MW'] = wepp_df.MW*wepp_df.ff return wepp_df for y in years: wepp_dfs[y] = prep_wepp(wepp_dfs[y]) print list(wepp_dfs[2007]) print wepp_dfs[2017].loc[wepp_dfs[2017].solar_MW>0.0,['MW','green_MW','blue_MW','solar_MW','wind_MW','ff_MW']] df_centroids = pd.read_csv('country_centroids.csv').set_index('country') #print df_centroids.get_value('TH','latitude') print df_centroids 
all_countries = df_centroids.index.values

def country_aggregation(wepp_df, threshold, threshold_column='mw'):
    # aggregate companies by CCCE
    company_df = pd.DataFrame(wepp_df.CCCE.groupby(wepp_df['COMPANY']).sum())
    company_df.sort_values('CCCE', inplace=True, ascending=False)
    for country in all_countries:
        # per-country MW totals (CCCE variant kept for reference):
        #company_df[country] = wepp_df.loc[wepp_df.ISO==country,'CCCE'].groupby(wepp_df['COMPANY']).sum()
        company_df[country] = wepp_df.loc[wepp_df.ISO == country, 'MW'].groupby(wepp_df['COMPANY']).sum()
        company_df[str(country) + '_green'] = wepp_df.loc[wepp_df.ISO == country, 'green_MW'].groupby(wepp_df['COMPANY']).sum()
        company_df[str(country) + '_blue'] = wepp_df.loc[wepp_df.ISO == country, 'blue_MW'].groupby(wepp_df['COMPANY']).sum()
        company_df[str(country) + '_solar'] = wepp_df.loc[wepp_df.ISO == country, 'solar_MW'].groupby(wepp_df['COMPANY']).sum()
        company_df[str(country) + '_wind'] = wepp_df.loc[wepp_df.ISO == country, 'wind_MW'].groupby(wepp_df['COMPANY']).sum()
        company_df[str(country) + '_ff'] = wepp_df.loc[wepp_df.ISO == country, 'ff_MW'].groupby(wepp_df['COMPANY']).sum()

    # checksum calculation
    company_df.drop(labels=[np.nan, 'nan_green'], axis=1, inplace=True)
    iso_col = [h for h in list(company_df) if len(h) < 3]
    company_df['checksum'] = company_df[iso_col].sum(axis=1)
    company_df.sort_values('checksum', inplace=True, ascending=False)
    company_df.fillna(0.0, inplace=True)
    if threshold_column == 'mw':
        company_subset_df = company_df[company_df.checksum > threshold]
    elif threshold_column == 'ccce':
        company_subset_df = company_df[company_df.CCCE >= threshold]
    all_ccce = company_df.CCCE.sum()
    print('all CCCE', all_ccce)
    print('por_CCCE', company_subset_df.CCCE.sum() / float(all_ccce))
    print('calculated carbon and MWs')
    return company_subset_df

df_companies = {}
for y in years:
    df_companies[y] = country_aggregation(wepp_dfs[y], 0.0, 'mw')

iso_cols = [h for h in list(df_companies[2007]) if len(h) < 3]
green_cols = [h for h in list(df_companies[2007]) if 'green' in h]
blue_cols = [h for h in list(df_companies[2007]) if 'blue' in h]
solar_cols = [h for h in list(df_companies[2007]) if 'solar' in h]
wind_cols = [h for h in list(df_companies[2007]) if 'wind' in h]
ff_cols = [h for h in list(df_companies[2007]) if 'ff' in h]

yoy_g = pd.DataFrame((df_companies[2010][green_cols].sum()
                      - df_companies[2009][green_cols].sum()) / df_companies[2009][green_cols].sum(),
                     columns=['green'])
yoy_g_names = pd.DataFrame.from_dict(dict(zip(green_cols, iso_cols)),
                                     orient='index').rename(index=str, columns={0: 'iso2'})
#print(yoy_g.join(yoy_g_names).set_index('iso2'))

from functools import reduce  # reduce is a builtin on Python 2 only

def join_dfs(ldf, rdf):
    return ldf.join(rdf)

new_portions = {}
# portion of new capacity that is green, plus YOY growth rate of all capacity
for y in range(2008, 2018):
    yoy_a = pd.DataFrame((df_companies[y][iso_cols].sum()
                          - df_companies[y-1][iso_cols].sum()) / df_companies[y-1][iso_cols].sum(),
                         columns=['yoy_all'])
    yoy_g = pd.DataFrame((df_companies[y][green_cols].sum()
                          - df_companies[y-1][green_cols].sum()) / df_companies[y-1][green_cols].sum(),
                         columns=['yoy_green'])
    yoy_g_names = pd.DataFrame.from_dict(dict(zip(green_cols, iso_cols)),
                                         orient='index').rename(index=str, columns={0: 'iso2'})
    yoy_g_f = yoy_g.join(yoy_g_names).set_index('iso2')
    yoy_b = pd.DataFrame((df_companies[y][blue_cols].sum()
                          - df_companies[y-1][blue_cols].sum()) / df_companies[y-1][blue_cols].sum(),
                         columns=['yoy_blue'])
    yoy_b_names = pd.DataFrame.from_dict(dict(zip(blue_cols, iso_cols)),
                                         orient='index').rename(index=str, columns={0: 'iso2'})
    yoy_b_f = yoy_b.join(yoy_b_names).set_index('iso2')
    yoy_s = pd.DataFrame((df_companies[y][solar_cols].sum()
                          - df_companies[y-1][solar_cols].sum()) / df_companies[y-1][solar_cols].sum(),
                         columns=['yoy_solar'])
    yoy_s_names = pd.DataFrame.from_dict(dict(zip(solar_cols, iso_cols)),
                                         orient='index').rename(index=str, columns={0: 'iso2'})
    yoy_s_f = yoy_s.join(yoy_s_names).set_index('iso2')
    yoy_w = pd.DataFrame((df_companies[y][wind_cols].sum()
                          - df_companies[y-1][wind_cols].sum()) / df_companies[y-1][wind_cols].sum(),
                         columns=['yoy_wind'])
    yoy_w_names = pd.DataFrame.from_dict(dict(zip(wind_cols, iso_cols)),
                                         orient='index').rename(index=str, columns={0: 'iso2'})
    yoy_w_f = yoy_w.join(yoy_w_names).set_index('iso2')
    yoy_ff = pd.DataFrame((df_companies[y][ff_cols].sum()
                           - df_companies[y-1][ff_cols].sum()) / df_companies[y-1][ff_cols].sum(),
                          columns=['yoy_ff'])
    yoy_ff_names = pd.DataFrame.from_dict(dict(zip(ff_cols, iso_cols)),
                                          orient='index').rename(index=str, columns={0: 'iso2'})
    yoy_ff_f = yoy_ff.join(yoy_ff_names).set_index('iso2')

    df_out = reduce(join_dfs, [yoy_a, yoy_g_f, yoy_b_f, yoy_s_f, yoy_w_f, yoy_ff_f])

    diff_a = pd.DataFrame((df_companies[y][iso_cols].sum()
                           - df_companies[y-1][iso_cols].sum()), columns=['diff_all'])
    diff_g = pd.DataFrame((df_companies[y][green_cols].sum()
                           - df_companies[y-1][green_cols].sum()),
                          columns=['diff_green']).join(yoy_g_names).set_index('iso2')
    diff_b = pd.DataFrame((df_companies[y][blue_cols].sum()
                           - df_companies[y-1][blue_cols].sum()),
                          columns=['diff_blue']).join(yoy_b_names).set_index('iso2')
    diff_s = pd.DataFrame((df_companies[y][solar_cols].sum()
                           - df_companies[y-1][solar_cols].sum()),
                          columns=['diff_solar']).join(yoy_s_names).set_index('iso2')
    diff_w = pd.DataFrame((df_companies[y][wind_cols].sum()
                           - df_companies[y-1][wind_cols].sum()),
                          columns=['diff_wind']).join(yoy_w_names).set_index('iso2')
    diff_ff = pd.DataFrame((df_companies[y][ff_cols].sum()
                            - df_companies[y-1][ff_cols].sum()),
                           columns=['diff_ff']).join(yoy_ff_names).set_index('iso2')

    df_out['green_ratio'] = diff_g['diff_green'] / diff_a['diff_all']
    df_out['blue_ratio'] = diff_b['diff_blue'] / diff_a['diff_all']
    df_out['solar_ratio'] = diff_s['diff_solar'] / diff_a['diff_all']
    df_out['wind_ratio'] = diff_w['diff_wind'] / diff_a['diff_all']
    df_out['ff_ratio'] = diff_ff['diff_ff'] / diff_a['diff_all']
    new_portions[y] = df_out

for y in range(2008, 2018):
    new_portions[y].to_csv('./matrix_csvs/' + str(y) + '_new_portions.csv', encoding='utf-8')
```
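The `CO2_INT` and `CCCE` columns computed inside `prep_wepp` above boil down to two scalar formulas. A minimal sketch with made-up plant values (all numbers below are illustrative assumptions, not taken from the WEPP data):

```python
# Illustrative inputs only -- not real WEPP values
em_fact = 205.0     # emissions factor (assumed lb CO2 per MMBtu)
heatrate = 10000.0  # Btu per kWh
mw = 500.0          # plant capacity, MW
years_left = 20.0   # remaining years of an assumed 40-year life
load_factor = 0.6   # fraction of hours the plant runs

# CO2 intensity: convert lb -> kg (2.205) and per-kWh -> per-MWh (/1000 with Btu heatrate)
co2_int = em_fact / 2.205 * heatrate / 1000

# Committed cumulative carbon emissions over the remaining life, in tonnes
ccce = 8760 * mw * years_left * load_factor * co2_int / 1000
print(round(co2_int, 1), ccce)  # ~929.7 kg CO2/MWh and roughly 4.9e7 tonnes
```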
github_jupyter
# Compare Models

This notebook compares various GFW models based on the `measure_speed` and `measure_course` features with each other and with the models from Dalhousie University. Note that the distance-to-shore cutoff was disabled in the Dalhousie models, so none of the models compared here are using distance-to-shore as a feature.

```
from __future__ import print_function, division
%matplotlib inline
import sys
sys.path.append('..')
import warnings; warnings.filterwarnings('ignore')
from IPython.core.display import display, HTML, Markdown

import vessel_scoring.models
import vessel_scoring.evaluate_model

data = vessel_scoring.models.load_data('../datasets')

GEARS = {'ps': 'Purse seiners',
         'trawl': 'Trawlers',
         'longliner': 'Long liners'}

for gear, title in GEARS.items():
    display(HTML("<h1>%s</h1>" % title))
    trained_models = [(name, vessel_scoring.models.train_model(
                           name,
                           {"model": spec["model"],
                            "data": ['kristina_' + gear] + ['slow-transits'] * vessel_scoring.models.TRANSIT_WEIGHT},
                           data))
                      for (name, spec) in vessel_scoring.models.untrained_models.items()
                      if spec.get("compare_models", False)]
    predictions = {}
    try:
        predictions["dal"] = vessel_scoring.evaluate_model.load_dal_predictions("../dal_{}_results.csv".format(gear))
    except IOError:
        pass
    testdata = data["kristina_" + gear]['test']
    vessel_scoring.evaluate_model.compare_models_at_cutoff(trained_models, testdata, predictions)
    vessel_scoring.evaluate_model.compare_models(trained_models, testdata)
    display(HTML("<hr/>"))
```

## Preparing Dalhousie Data

In the `vessel-scoring` repo:

```
python scripts/make_ps_data_ready_for_dal.py
```

I disabled distshore in dal for the models that still have it using some hackery so that (a) it ran faster and (b) the comparisons were 'fair'. Then in the `dal` repo:

```
# First, turned off distshore in the models that still have it using some hackery.
Rscript dalhouse/models/purse-seiner.R ../vessel-scoring/datasets/kristina_purse_seine.measures.from_npz.csv ../vessel-scoring/dal_purse_seine_results.csv dalhouse/models/timeofday/
Rscript dalhouse/models/trawler.R ../vessel-scoring/datasets/kristina_trawler.measures.from_npz.csv ../vessel-scoring/dal_trawler_results.csv dalhouse/models/coastline/ data/training/trawl.csv
Rscript dalhouse/models/purse-seiner.R ../vessel-scoring/datasets/kristina_longliner.measures.from_npz.csv ../vessel-scoring/dal_longliner_results.csv dalhouse/models/timeofday/
```
# Elementary greenhouse models

____________
<a id='section1'></a>
## 1. A single layer atmosphere
____________

We will make our first attempt at quantifying the greenhouse effect in the simplest possible greenhouse model: a single layer of atmosphere that is able to absorb and emit longwave radiation.

<img src="../images/1layerAtm_sketch.png">

### Assumptions

- Atmosphere is a single layer of air at temperature $T_a$
- Atmosphere is **completely transparent to shortwave** solar radiation.
- The **surface** absorbs shortwave radiation $(1-\alpha) Q$
- Atmosphere is **completely opaque to infrared** radiation
- Both surface and atmosphere emit radiation as **blackbodies** ($\sigma T_s^4, \sigma T_a^4$)
- Atmosphere radiates **equally up and down** ($\sigma T_a^4$)
- There are no other heat transfer mechanisms

We can now use the concept of energy balance to ask what the temperatures need to be in order to balance the energy budgets at the surface and the atmosphere, i.e. the **radiative equilibrium temperatures**.

### Energy balance at the surface

\begin{align}
\text{energy in} &= \text{energy out} \\
(1-\alpha) Q + \sigma T_a^4 &= \sigma T_s^4 \\
\end{align}

The presence of the atmosphere above means there is an additional source term: downwelling infrared radiation from the atmosphere. We call this the **back radiation**.

### Energy balance for the atmosphere

\begin{align}
\text{energy in} &= \text{energy out} \\
\sigma T_s^4 &= A\uparrow + A\downarrow = 2 \sigma T_a^4
\end{align}

which means that

$$ T_s = 2^\frac{1}{4} T_a \approx 1.2 T_a $$

So we have just determined that, in order to have a purely **radiative equilibrium**, we must have $T_s > T_a$.
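This relation is easy to verify numerically. A quick sketch, using 255 K as an illustrative atmospheric temperature:

```python
sigma = 5.67e-8  # Stefan-Boltzmann constant, W m-2 K-4
Ta = 255.0       # illustrative atmospheric temperature, K

# Surface temperature implied by the atmospheric energy balance
Ts = 2 ** 0.25 * Ta
print(round(Ts, 1))  # 303.2 -- the surface is warmer than the atmosphere

# Check that sigma*Ts^4 = 2*sigma*Ta^4 holds
print(abs(sigma * Ts**4 - 2 * sigma * Ta**4) < 1e-9)  # True
```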
*The surface must be warmer than the atmosphere.*

### Solve for the radiative equilibrium surface temperature

Now plug this into the surface equation to find

$$ (1-\alpha) Q + \sigma T_a^4 = 2\sigma T_a^4 $$

and use the definition of the emission temperature $T_e$ to write

$$ (1-\alpha) Q = \sigma T_e^4 $$

*In this model, $T_e$ is identical to the atmospheric temperature $T_a$, since all the OLR originates from this layer.*

Solve for the surface temperature:

$$ T_s = 2^\frac{1}{4} T_e $$

Putting in observed numbers, $T_e = 255$ K gives a surface temperature of

$$T_s = 303 ~\text{K}$$

This model is one small step closer to reality: The surface is warmer than the atmosphere, emissions to space are generated in the atmosphere, and the atmosphere is heated from below and helps to keep the surface warm.

### Why does this model overpredict the surface temperature?

Our model now overpredicts the surface temperature by about 15ºC (303 K versus the observed 288 K). Ideas about why? Basically we just need to read our **list of assumptions** above and realize that none of them are very good approximations:

- Atmosphere absorbs some solar radiation.
- Atmosphere is NOT a perfect absorber of longwave radiation
- Absorption and emission varies strongly with wavelength *(atmosphere does not behave like a blackbody)*.
- Emissions are not determined by a single temperature $T_a$ but by the detailed *vertical profile* of air temperature.
- Energy is redistributed in the vertical by a variety of dynamical transport mechanisms (e.g. convection and boundary layer turbulence).

____________
<a id='section2'></a>
## 2. Introducing the two-layer grey gas model
____________

Let's generalize the above model just a little bit to build a slightly more realistic model of longwave radiative transfer. We will address two shortcomings of our single-layer model:

1. No vertical structure
2. 100% longwave opacity

Relaxing these two assumptions gives us what turns out to be a very useful prototype model for **understanding how the greenhouse effect works**.

### Assumptions

- The atmosphere is **transparent to shortwave radiation** (still)
- Divide the atmosphere up into **two layers of equal mass** (the dividing line is thus at the 500 hPa pressure level)
- Each layer **absorbs only a fraction $\epsilon$** of whatever longwave radiation is incident upon it.
- We will call the fraction $\epsilon$ the **absorptivity** of the layer.
- Assume $\epsilon$ is the same in each layer

This is called the **grey gas** model, where **grey** here means the emission and absorption have **no spectral dependence** (same at every wavelength). We can think of this model informally as a "leaky greenhouse".

Note that the assumption that $\epsilon$ is the same in each layer is appropriate if the absorption is actually carried out by a gas that is **well-mixed** in the atmosphere. Out of our two most important absorbers:

- CO$_2$ is well mixed
- H$_2$O is not (mostly confined to the lower troposphere due to strong temperature dependence of the saturation vapor pressure).

But we will ignore this aspect of reality for now.

### Kirchhoff's Law

In order to build our model, we need to introduce one additional piece of physics known as **Kirchhoff's Law**:

$$ \text{absorptivity} = \text{emissivity} $$

So **if a layer of atmosphere at temperature $T$ absorbs a fraction $\epsilon$** of incident longwave radiation, it must **emit**

$$ \epsilon ~\sigma ~T^4 $$

both up and down.

### A sketch of the radiative fluxes in the 2-layer atmosphere

<img src='../images/2layerAtm_sketch.png'>

- Surface temperature is $T_s$
- Atmospheric temperatures are $T_0, T_1$ where $T_0$ is closest to the surface.
- Absorptivity of atmospheric layers is $\epsilon$
- Surface emission is $\sigma T_s^4$
- Atmospheric emission is $\epsilon \sigma T_0^4, \epsilon \sigma T_1^4$ (up and down)
- Absorptivity = emissivity for atmospheric layers
- A fraction $(1-\epsilon)$ of the longwave beam is **transmitted** through each layer

____________
## 3. Tracing the upwelling beam of longwave radiation
____________

Let's think about the upwelling beam of longwave radiation, which we denote $U$.

### Surface to layer 0

We start at the surface. The upward flux **from the surface to layer 0** is

$$U_0 = \sigma T_s^4$$

(just the emission from the surface).

### Layer 0 to layer 1

Now **following this beam upward**, we first recognize that a fraction $\epsilon$ of this beam is **absorbed** in layer 0. The upward flux from layer 0 to layer 1 consists of the sum of two parts:

1. The **transmitted part** of whatever is incident from below (i.e. the part that is **not absorbed**)
2. **New upward emissions** from layer 0

We can write this upward flux from layer 0 to layer 1 as:

$$U_1 = (1-\epsilon) \sigma T_s^4 + \epsilon \sigma T_0^4$$

### Beyond layer 1

Continuing to follow the same beam, we follow the same logic! A fraction $\epsilon$ of $U_1$ is absorbed in layer 1, and therefore the transmitted part is $(1-\epsilon) U_1$.
Including new emissions from layer 1, the upwelling flux above layer 1 is

$$U_2 = (1-\epsilon) U_1 + \epsilon \sigma T_1^4$$

### Outgoing Longwave Radiation

Since there is **no more atmosphere above layer 1**, this upwelling beam is our OLR for this model:

$$OLR = U_2 = (1-\epsilon) U_1 + \epsilon \sigma T_1^4$$

which, plugging in the above expression for $U_1$, works out to

$$OLR = (1-\epsilon)^2 \sigma T_s^4 + \epsilon(1-\epsilon)\sigma T_0^4 + \epsilon \sigma T_1^4$$

Here the three terms represent **contributions to the total OLR** that **originate from each of the three levels**.

### Limits of large and small absorptivity/emissivity

Think about the following two questions:

- What happens to this expression if $\epsilon=1$? *What does this represent physically?*
- What about $\epsilon=0$?

By allowing the atmosphere to partially absorb emissions from other levels, we now see that the Outgoing Longwave Radiation to space **includes emissions from every level** - and is therefore **affected by the temperature at every level**!

____________
## 4. Tuning the grey gas model to observations
____________

In building our new model we have introduced exactly **one parameter**, the absorptivity $\epsilon$. We need to choose a value for $\epsilon$. We will tune our model so that it **reproduces the observed global mean OLR** given **observed global mean temperatures**.

### Global mean air temperature observations

To get appropriate temperatures for $T_s, T_0, T_1$, let's revisit the global, annual mean lapse rate plot from NCEP Reanalysis data we first encountered in the [Radiation notes](radiation.ipynb).
```
# This code is used just to create the skew-T plot of global, annual mean air temperature
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
import xarray as xr
from metpy.plots import SkewT

ncep_url = "https://psl.noaa.gov/thredds/dodsC/Datasets/ncep.reanalysis.derived/"
ncep_air = xr.open_dataset(ncep_url + "pressure/air.mon.1981-2010.ltm.nc", use_cftime=True)

# Take the global, annual average
coslat = np.cos(np.deg2rad(ncep_air.lat))
weight = coslat / coslat.mean(dim='lat')
Tglobal = (ncep_air.air * weight).mean(dim=('lat','lon','time'))

fig = plt.figure(figsize=(9, 9))
skew = SkewT(fig, rotation=30)
skew.plot(Tglobal.level, Tglobal, color='black', linestyle='-', linewidth=2, label='Observations')
skew.ax.set_ylim(1050, 10)
skew.ax.set_xlim(-75, 45)
# Add the relevant special lines
skew.plot_dry_adiabats(linewidth=0.5)
skew.plot_moist_adiabats(linewidth=0.5)
#skew.plot_mixing_lines()
skew.ax.legend()
skew.ax.set_title('Global, annual mean sounding from NCEP Reanalysis', fontsize=16);
```

### Target temperatures for our model tuning

First, we set

$$T_s = 288 \text{ K} $$

From the lapse rate plot, an average temperature for the layer between 1000 and 500 hPa is

$$ T_0 = 275 \text{ K}$$

Defining an average temperature for the layer between 500 and 0 hPa is more ambiguous because of the lapse rate reversal at the tropopause. We will choose

$$ T_1 = 230 \text{ K}$$

From the graph, this is approximately the observed global mean temperature at 275 hPa or about 10 km.

### OLR

From the [observed global energy budget](../images/GlobalEnergyBudget.png) we set

$$OLR = 238.5 \text{ W m}^{-2}$$

### Solving for $\epsilon$

We wrote down the expression for OLR as a function of temperatures and absorptivity in our model above. All we need to do is plug the observed values into that expression and solve for $\epsilon$. It is a **quadratic equation** for the unknown $\epsilon$.
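For reference, the OLR formula translates directly into a short Python function. The crude grid scan below is one possible cross-check (the graphical exercise that follows is the intended approach):

```python
sigma = 5.67e-8  # Stefan-Boltzmann constant, W m-2 K-4

def two_layer_OLR(Ts, T0, T1, eps):
    """OLR (W m-2) for the two-layer leaky greenhouse model."""
    return ((1 - eps)**2 * sigma * Ts**4
            + eps * (1 - eps) * sigma * T0**4
            + eps * sigma * T1**4)

# Scan epsilon between 0 and 1 and find the value whose OLR is closest to 238.5
eps_grid = [i / 1000 for i in range(1001)]
best = min(eps_grid, key=lambda e: abs(two_layer_OLR(288., 275., 230., e) - 238.5))
print(best)  # 0.586
```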
We could work out the exact solution using the quadratic formula. But let's do it **graphically**, using Python!

### Exercise: graphical solution to find the best fit value of $\epsilon$

The OLR formula for the leaky greenhouse that we derived above is

$$OLR = (1-\epsilon)^2 \sigma T_s^4 + \epsilon(1-\epsilon)\sigma T_0^4 + \epsilon \sigma T_1^4$$

Do the following:

- Write a Python function that implements this formula
- The function should accept **four input parameters**:
    - The three temperatures $T_s, T_0, T_1$
    - The emissivity $\epsilon$
- Using this function, make a **graph of OLR vs. $\epsilon$** for the observed temperature values $T_s = 288, T_0 = 275, T_1 = 230$
- For the graph, $\epsilon$ should range between 0 and 1.
- From your graph, find the approximate value of $\epsilon$ that gives $OLR = 238.5$

Note if you solve the quadratic equation algebraically you will get two solutions:

- $\epsilon \approx 0.586$
- $\epsilon \approx 3.93$

(for details, see [the advanced notes here](sympy-greenhouse.ipynb))

**Why is the second solution not physically meaningful?**

Hopefully your graph shows that $\epsilon = 0.586$ gives the correct value of OLR. This is the absorptivity that guarantees that **our model reproduces the observed OLR given the observed temperatures**.

____________
## 5. Level of emission
____________

### Contributions from each level to the outgoing radiation

Now that we have tuned up our model, we can see exactly how strongly each level contributes to the OLR. The three components of the OLR are

\begin{align*}
OLR_s &= (1-\epsilon)^2 \sigma T_s^4 \\
OLR_0 &= \epsilon(1-\epsilon)\sigma T_0^4 \\
OLR_1 &= \epsilon \sigma T_1^4
\end{align*}

which of course add up to the total OLR we wrote down above.

### Exercise: calculate contributions to OLR

Write some **simple** Python code to calculate each term in the OLR using the observed temperatures and the tuned value $\epsilon = 0.586$. Fill out the list below using your calculated numbers.
**Contributions to the OLR originating from each level, in W/m2:**

- Surface:
- Level 0:
- Level 1:

```
# now sum up the numbers to verify you get something very close to 238.5
```

Notice that the largest single contribution is coming from the top layer. *This is in spite of the fact that the emissions from this layer are weak, because it is so cold.*

### Changing the level of emission by adding absorbers

Adding some **extra greenhouse absorbers** will mean that a **greater fraction** of incident longwave radiation is absorbed in each layer. Thus **$\epsilon$ must increase** as we add greenhouse gases.

Suppose we have $\epsilon$ initially, and the absorptivity increases to $\epsilon_2 = \epsilon + \Delta \epsilon$. Suppose further that this increase happens **abruptly** so that there is no time for the temperatures to respond to this change. **We hold the temperatures fixed** in the column and ask how the radiative fluxes change.

**Question: Do you expect the OLR to increase or decrease?**

### Calculating the change in level of emission

Let's use our two-layer leaky greenhouse model to investigate the answer.
The components of the OLR before the perturbation are

\begin{align*}
OLR_s &= (1-\epsilon)^2 \sigma T_s^4 \\
OLR_0 &= \epsilon(1-\epsilon)\sigma T_0^4 \\
OLR_1 &= \epsilon \sigma T_1^4
\end{align*}

and after the perturbation we have

\begin{align*}
OLR_s &= (1-\epsilon - \Delta \epsilon)^2 \sigma T_s^4 \\
OLR_0 &= (\epsilon + \Delta \epsilon)(1-\epsilon - \Delta \epsilon)\sigma T_0^4 \\
OLR_1 &= (\epsilon + \Delta \epsilon) \sigma T_1^4
\end{align*}

Let's subtract off the original components to get the contributions to the **change in OLR** from each layer:

\begin{align*}
\Delta OLR_s &= \left[(1-\epsilon - \Delta \epsilon)^2 - (1-\epsilon)^2\right]\sigma T_s^4 \\
\Delta OLR_0 &= \left[(\epsilon + \Delta \epsilon)(1-\epsilon - \Delta \epsilon) - \epsilon(1-\epsilon) \right] \sigma T_0^4 \\
\Delta OLR_1 &= \left[(\epsilon + \Delta \epsilon) - \epsilon \right] \sigma T_1^4
\end{align*}

Now expand this out, but to make things easier to deal with, neglect terms in $\Delta \epsilon^2$ (very small - we will be considering changes of less than 10% in $\epsilon$):

\begin{align*}
\Delta OLR_s &\approx (\Delta \epsilon) \left[ -2(1-\epsilon) \right] \sigma T_s^4 \\
\Delta OLR_0 &\approx (\Delta \epsilon) (1 - 2 \epsilon) \sigma T_0^4 \\
\Delta OLR_1 &\approx (\Delta \epsilon) \sigma T_1^4
\end{align*}

Now look at the **sign** of each term. Recall that $0 < \epsilon < 1$.

**Which terms in the OLR go up and which go down?**

**THIS IS VERY IMPORTANT, SO STOP AND THINK ABOUT IT.**

The contribution from the **surface** must **decrease**, while the contribution from the **top layer** must **increase**.

**When we add absorbers, the average level of emission goes up!**

____________
## 6. Radiative forcing in the 2-layer grey gas model
____________

### Definition of Radiative Forcing

We now define a very important quantity:

*Radiative forcing is the change in total radiative flux at TOA after adding absorbers.*

In this model, **only the longwave flux can change**, so we calculate the radiative forcing as

$$ R = - \Delta OLR $$

(with the minus sign so that $R$ is **positive when the climate system is gaining extra energy**).

### Connection between radiative forcing and level of emission

We just worked out that whenever we add some extra absorbers, the emissions to space (on average) will originate from higher levels in the atmosphere.

What does this mean for OLR? Will it increase or decrease?

To get the answer, we just have to sum up the three contributions we wrote above:

\begin{align*}
R &= -\Delta OLR_s - \Delta OLR_0 - \Delta OLR_1 \\
&= -\Delta \epsilon \left[ -2(1-\epsilon) \sigma T_s^4 + (1 - 2 \epsilon) \sigma T_0^4 + \sigma T_1^4 \right]
\end{align*}

Is this a positive or negative number? The key point is this: **It depends on the temperatures, i.e. on the lapse rate.**

### Greenhouse effect for an isothermal atmosphere

Stop and think about this question: If the **surface and atmosphere are all at the same temperature**, does the OLR go up or down when $\epsilon$ increases (i.e. we add more absorbers)? Understanding this question is key to understanding how the greenhouse effect works.

#### Let's solve the isothermal case

We will just set $T_s = T_0 = T_1$ in the above expression for the radiative forcing. What do you get?

#### The answer is $R=0$

For an isothermal atmosphere, there is **no change** in OLR when we add extra greenhouse absorbers. Hence, no radiative forcing and no greenhouse effect.

Why? The level of emission still must go up. But since the temperature at the upper level is the **same** as everywhere else, the emissions are exactly the same.

### The radiative forcing (change in OLR) depends on the lapse rate!
For a more realistic example of radiative forcing due to an increase in greenhouse absorbers, we use our observed temperatures and the tuned value for $\epsilon$. We'll express the answer in W m$^{-2}$ for a 2% increase in $\epsilon$:

$$ \Delta \epsilon = 0.02 \times 0.586 $$

```
epsilon = 0.586041150248834
delta_epsilon = 0.02 * epsilon
delta_epsilon
```

Calculate the three components of the radiative forcing:

```
sigma = 5.67E-8
Ts = 288.
T0 = 275.
T1 = 230.

# Component originating from the surface
Rs = -delta_epsilon * (-2*(1-epsilon)*sigma * Ts**4)
print(Rs)

# Component originating from level 0
R0 = -delta_epsilon * (1-2*epsilon) * sigma * T0**4
print(R0)

# Component originating from level 1
R1 = -delta_epsilon * sigma * T1**4
print(R1)
```

So just add them up to get the total radiative forcing:

```
R = Rs + R0 + R1
print(R)
```

So in our example, **the OLR decreases by 2.6 W m$^{-2}$**, or equivalently, the **radiative forcing is +2.6 W m$^{-2}$.**

What we have just calculated is this:

*Given the observed lapse rates, a small increase in absorbers will cause a small decrease in OLR.*

The **greenhouse effect** thus gets **stronger**, and energy will begin to accumulate in the system -- which will eventually **cause temperatures to increase** as the system adjusts to a new equilibrium.

____________
## 7. Summary
____________

### Key physical lessons

- Putting a **layer of longwave absorbers** above the surface keeps the **surface substantially warmer**, because of the back radiation from the atmosphere (greenhouse effect).
- The **grey gas** model assumes that each layer absorbs and emits a fraction $\epsilon$ of its blackbody value, independent of wavelength.
- With **incomplete absorption** ($\epsilon < 1$), there are contributions to the OLR from every level and the surface (there is no single level of emission)
- **Adding more absorbers** means that contributions to the OLR from upper levels go up, while contributions from the surface go down.
- This upward shift in the weighting of different levels is what we mean when we say the **level of emission goes up**.
- The **radiative forcing** caused by an increase in absorbers depends on the lapse rate.
- For an **isothermal atmosphere** the radiative forcing is zero and there is **no greenhouse effect**.
- The radiative forcing is positive for our atmosphere **because tropospheric temperatures tend to decrease with height**.

____________
## Credits

This notebook is part of [The Climate Laboratory](https://brian-rose.github.io/ClimateLaboratoryBook), an open-source textbook developed and maintained by [Brian E. J. Rose](http://www.atmos.albany.edu/facstaff/brose/index.html), University at Albany. It has been modified by [Nicole Feldl](http://nicolefeldl.com), UC Santa Cruz.

It is licensed for free and open consumption under the [Creative Commons Attribution 4.0 International (CC BY 4.0)](https://creativecommons.org/licenses/by/4.0/) license.

Development of these notes and the [climlab software](https://github.com/brian-rose/climlab) is partially supported by the National Science Foundation under award AGS-1455071 to Brian Rose. Any opinions, findings, conclusions or recommendations expressed here are mine and do not necessarily reflect the views of the National Science Foundation.
____________
<a href="https://colab.research.google.com/github/hendradarwin/covid-19-prediction/blob/master/series-dnn_and_rnn/Forecast_2_dnn.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>

## Prediction of New Death Cases: Global Covid-19 Cases

## Load Data and Import Libraries

```
# Use some functions from tensorflow_docs
!pip install -q git+https://github.com/tensorflow/docs

# make sure that Colab uses TensorFlow 2
%tensorflow_version 2.x

import numpy as np
import tensorflow as tf
import tensorflow_probability as tfp
from tensorflow import keras
import pandas as pd
import seaborn as sns
from pylab import rcParams
import matplotlib.pyplot as plt
from matplotlib import rc
import os
import datetime

import tensorflow_docs as tfdocs
import tensorflow_docs.plots
import tensorflow_docs.modeling

# from google.colab import drive
# drive.mount('/content/drive')

%matplotlib inline
%config InlineBackend.figure_format='retina'

sns.set(style='whitegrid', palette='muted', font_scale=1.5)
rcParams['figure.figsize'] = 16, 10

# !rm '/root/.keras/datasets/global_total.csv'
```

## Load Data

```
df_new_cases = pd.read_csv("https://raw.githubusercontent.com/virgiawan/covid-19-prediction/linear-regression/dataset/corona-virus/new_cases.csv")

def plot_series(time, series, format="-", start=0, end=None):
    plt.plot(time[start:end], series[start:end], format)
    plt.xlabel("Time")
    plt.ylabel("Value")
    plt.grid(True)

step = 0
times = []
series = []
for case in df_new_cases['World']:
    times.append(step)
    series.append(case)
    step += 1

plot_series(times, series)
print('Total data {} series'.format(len(series)))

# Series 0 - 63 indicate flat data; the counts have not yet increased significantly.
# Try to ignore that flat region first
skip = 63
used_series = series[skip:]
used_times = times[skip:]
plot_series(used_times, used_series)
print('Total data {} series'.format(len(used_series)))

split_percentage = 0.70
split_time = int(len(used_times) * split_percentage)
time_train = used_times[:split_time]
x_train = used_series[:split_time]
time_valid = used_times[split_time:]
x_valid = used_series[split_time:]

# create DNN window
def windowed_dataset_dnn(series, window_size, batch_size, shuffle_buffer):
    series = tf.expand_dims(series, axis=-1)
    dataset = tf.data.Dataset.from_tensor_slices(series)
    dataset = dataset.window(window_size + 1, shift=1, drop_remainder=True)
    dataset = dataset.flat_map(lambda window: window.batch(window_size + 1))
    dataset = dataset.shuffle(shuffle_buffer)
    dataset = dataset.map(lambda window: (window[:-1], window[-1]))
    dataset = dataset.batch(batch_size).prefetch(1)
    return dataset

# define hyperparameters
window_size = 20
batch_size = 2
shuffle_buffer_size = 10
epochs = 100

tf.keras.backend.clear_session()
tf.random.set_seed(51)
np.random.seed(51)

dataset = windowed_dataset_dnn(x_train, window_size, batch_size, shuffle_buffer_size)

l0 = tf.keras.layers.Conv1D(filters=32, kernel_size=5, strides=1, padding="causal",
                            activation="relu", input_shape=[None, 1])
l1 = tf.keras.layers.Dense(32, input_shape=[window_size], activation='relu')
l2 = tf.keras.layers.Dense(32, activation='relu')
l3 = tf.keras.layers.Dense(1)
l4 = tf.keras.layers.Lambda(lambda x: x * 10000)
model = tf.keras.models.Sequential([l0, l1, l2, l3, l4])

lr_schedule = tf.keras.callbacks.LearningRateScheduler(
    lambda epoch: 1e-8 * 10**(epoch / 20))
optimizer = tf.keras.optimizers.SGD(lr=1e-8, momentum=0.9)
model.compile(loss=tf.keras.losses.Huber(), optimizer=optimizer, metrics=['mae'])
history = model.fit(dataset, epochs=epochs, callbacks=[lr_schedule], verbose=0)

len_data = 0
for window_dataset in dataset:
    len_data += 1
print('Windows number: {}'.format(len_data))
plt.semilogx(history.history["lr"], history.history["loss"])
plt.axis([1e-8, 10, 0, 100000])

tf.keras.backend.clear_session()
tf.random.set_seed(51)
np.random.seed(51)

epochs = 10000
dataset = windowed_dataset_dnn(x_train, window_size, batch_size, shuffle_buffer_size)

l0 = tf.keras.layers.Conv1D(filters=32, kernel_size=5, strides=1, padding="causal", activation="relu", input_shape=[None, 1])
l1 = tf.keras.layers.Dense(32, input_shape=[window_size], activation='relu')
l2 = tf.keras.layers.Dense(32, activation='relu')
l3 = tf.keras.layers.Dense(1)
l4 = tf.keras.layers.Lambda(lambda x: x * 10000)
model = tf.keras.models.Sequential([l0, l1, l2, l3, l4])

optimizer = tf.keras.optimizers.SGD(learning_rate=1e-5, momentum=0.9)
model.compile(loss=tf.keras.losses.Huber(), optimizer=optimizer, metrics=['mae', 'acc'])
history = model.fit(dataset, epochs=epochs, verbose=2)

plt.semilogx(range(0, epochs), history.history["loss"])
plt.axis([0, 10000, 4000, 25000])

forecast = []
np_used_series = np.array(used_series)
np_used_series = tf.expand_dims(np_used_series, axis=-1)
for time in range(len(np_used_series) - window_size):
    forecast.append(model.predict(np_used_series[time:time + window_size][np.newaxis]))

forecast = forecast[split_time-window_size:]
results = np.array(forecast)[:, 0, 0]

plt.figure(figsize=(10, 6))
plot_series(time_valid, x_valid)
plot_series(time_valid, results)

tf.keras.metrics.mean_absolute_error(x_valid, results).numpy()
```
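The `tf.data` pipeline inside `windowed_dataset_dnn` can be hard to inspect. The underlying sliding-window idea is easy to sketch in plain Python; the helper below is an illustration of the logic only (names are ours), not the actual `tf.data` pipeline:

```python
def make_windows(series, window_size):
    """Split a sequence into (input window, next value) pairs,
    mirroring what windowed_dataset_dnn builds with tf.data."""
    pairs = []
    for i in range(len(series) - window_size):
        window = series[i:i + window_size]   # window_size past values
        target = series[i + window_size]     # the value to predict
        pairs.append((window, target))
    return pairs

pairs = make_windows([10, 20, 30, 40, 50], window_size=3)
# Two pairs: ([10, 20, 30], 40) and ([20, 30, 40], 50)
```

With `window_size=20`, a series of length `n` therefore yields `n - 20` training pairs, which is what the "Windows number" count above reflects (up to batching).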
# Optimising Returns in Portfolio Management

### Exploring Numerical Optimisation Techniques to solve Quadratic Problems in Python

##### Zac Keskin - Numerical Optimisation - UCL 2018

## Part 0: Define functions, Import Data

### Import required packages

```
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
%matplotlib notebook
```

### First, we prepare line-search functions for the optimisation routines

The backtracking algorithm provides the simplest working implementation for calculating suitable step lengths in line-search routines, so it is the function used throughout. The linesearch/zoom implementation finds step lengths satisfying the strong Wolfe conditions, and is shown only for comparison.

```
# Backtracking algorithm to find alpha satisfying the sufficient decrease condition
def backtracking(func, jac, x_k, p_k, alpha_max=1, c1=1e-4, maxiter=25):
    # Define parameters
    rho = 0.9
    alpha = alpha_max
    k = 0
    # Compute f, grad f at x_k
    f_k = func(x_k)
    df_k = jac(x_k)
    # Backtracking linesearch for computing step length
    while func(x_k + alpha*p_k) > (f_k + c1 * alpha * np.dot(df_k, p_k)) and k < maxiter:
        alpha = rho*alpha
        k += 1
    return alpha

# Algorithm 3.5 from Nocedal & Wright
def linesearch(func, jac, x_k, p_k, alpha_max=1, c1=0.0001, c2=0.1, maxIter=25):
    alphas = [0, 0.9*alpha_max]
    i = 1
    while i < maxIter:
        # Calculate required terms
        phi_i = phi(func, x_k, p_k, alphas[i])
        phi_i_1 = phi(func, x_k, p_k, alphas[i-1])
        phi_prime_0 = der_phi(jac, x_k, p_k, 0)
        phi_0 = phi(func, x_k, p_k, 0) + c1 * alphas[i] * phi_prime_0

        if phi_i > phi_0 or (phi_i >= phi_i_1 and i > 1):
            a_star = zoom(alphas[i-1], alphas[i], func, jac, x_k, p_k, c1, c2)
            #print('Linesearch found alpha*: ', a_star)
            return a_star

        phi_prime_i = der_phi(jac, x_k, p_k, alphas[i])
        if abs(phi_prime_i) <= -c2*phi_prime_0:
            a_star = alphas[i]
            #print('Linesearch found alpha*: ', a_star)
            return a_star

        if phi_prime_i >= 0:
            a_star = zoom(alphas[i], alphas[i-1], func,
                          jac, x_k, p_k, c1, c2)
            #print('Linesearch found alpha*: ', a_star)
            return a_star

        # Update alpha_i and go again
        alphas.append(min(2*alphas[i], alpha_max))
        i += 1

    print("linesearch failed to converge")
    return 0  # If convergence fails

# Algorithm 3.6 from Nocedal & Wright (Not Working)
def zoom(alpha_lo, alpha_hi, func, jac, x_k, p_k, c1, c2, maxIter=10, tol=1e-8):
    if alpha_lo > alpha_hi:
        temp = alpha_lo
        alpha_lo = alpha_hi
        alpha_hi = temp

    j = 0
    while j < maxIter:
        alpha_j = (alpha_hi + alpha_lo)/2

        # Calculate required terms
        phi_j = phi(func, x_k, p_k, alpha_j)
        phi_prime_0 = der_phi(jac, x_k, p_k, 0)
        phi_0 = phi(func, x_k, p_k, 0)
        phi_lo = phi(func, x_k, p_k, alpha_lo)

        if abs(alpha_hi - alpha_lo) < tol:
            print('Line search stopped because the interval became too small. Returning alpha_j.')
            return alpha_j

        if phi_j > (phi_0 + c1 * alpha_j * phi_prime_0) or phi_j >= phi_lo:
            alpha_hi = alpha_j
        else:
            # alpha_j satisfies the sufficient decrease condition
            phi_prime_j = der_phi(jac, x_k, p_k, alpha_j)
            if abs(phi_prime_j) <= -c2*phi_prime_0:
                # alpha_j satisfies the strong curvature condition
                return alpha_j
            if phi_prime_j * (alpha_hi - alpha_lo) >= 0:
                alpha_hi = alpha_lo
            alpha_lo = alpha_j
        j += 1

    print("zoom failed to converge")
    return alpha_j  # If convergence fails
```

### Now, we define optimisation routines.
Three different techniques are included: ``` # Steepest Gradient Descent def SGD(x0, func, jac, hess=None, alpha0=1, tol=1e-6, maxiter=1e4): x = np.array(x0).reshape(1,len(x0)) f = func(x0) k = 0 es = np.linalg.norm(jac(x0)) stop = False while stop==False and k < maxiter: p_k = -jac(x[k]) # Descent direction alpha = backtracking(func,jac,x[k],p_k) x = np.vstack((x, x[k] + alpha * p_k)) k=k+1 f = np.vstack((f, func(x[k]))) stop = (np.linalg.norm(jac(x[k])) < tol*(1 + tol * abs(func(x[k])))) es = np.vstack((es, np.linalg.norm(jac(x[k])) )) res = opt_result(x,k,f,es) return res # Newton def Newton(x0, func, jac, hess, alpha0=1, tol=1e-6, maxiter=1e3): N = len(x0) x = np.array(x0).reshape(1,N) f = func(x0) es = np.linalg.norm(jac(x0)) k = 0 stop = False while stop == False and k < maxiter: H_k = np.linalg.inv(hess(x[k])) # Descent direction p_k = - np.dot(H_k,jac(x[k])) # Force to be descent direction (in case Hess is no longer SPD due to penalty terms) if np.dot(p_k, jac(x[k])) > 0: p_k = -p_k alpha = backtracking(func, jac, x[k], p_k) x = np.vstack((x, x[k] + alpha * p_k)) k=k+1 f = np.vstack((f,func(x[k]))) stop = (np.linalg.norm(jac(x[k])) < tol*(1 + tol * abs(func(x[k])))) es = np.vstack((es, np.linalg.norm(jac(x[k])) )) res = opt_result(x,k,f,es) return res # BFGS def BFGS(x0, func, jac, hess=None, alpha0=1, tol=1e-6, maxiter=1e3): # Algorithm 6.1 from Nocedal & Wright N = len(x0) x = np.array(x0).reshape(1,N) f = func(x0) es = np.linalg.norm(jac(x0)) k = 0 I = np.eye(N, dtype=int) H_k = I stop = False while stop==False and k < maxiter: # Step direction p_k p_k = - np.matmul(H_k, jac(x[k])) # Find Step length alpha alpha = backtracking(func, jac, x[k], p_k) # Update x_k, f(x_k) x = np.vstack((x, x[k] + alpha * p_k)) k=k+1 f = np.vstack((f, func(x[k]))) # Careful to enforce 1D vector shape, or np.matmul and np.dot do not behave as expected s_k = (x[k] - x[k-1]).reshape(N,1) y_k = (jac(x[k]) - jac(x[k-1]) ).reshape(N,1) rho_k = 1/np.dot(y_k.transpose(), s_k) if k 
== 1: # Update initial guess H_0. H_k = H_k * np.matmul(s_k.transpose(),y_k) / np.matmul(y_k.transpose(),y_k) # Update H_k using 6.17 from Nocedal & Wright H_k_1 = H_k A1 = I - rho_k * np.matmul(s_k,y_k.transpose()) A2 = I - rho_k * np.matmul(y_k,s_k.transpose()) H_k = np.matmul( A1, np.matmul(H_k_1, A2)) + rho_k * np.matmul(s_k,s_k.transpose()) stop = (np.linalg.norm(jac(x[k])) < tol*(1 + tol * abs(func(x[k])))) es = np.vstack((es, np.linalg.norm(jac(x[k])) )) res = opt_result(x,k,f,es) return res # And a wrapper class to store and return iteration info in a consistent manner class opt_result(): def __init__(self, x, k, f, errors): self.xs = x self.fs = f self.k = k self.fun = self.fs[k] self.x = self.xs[k] self.es = errors ``` ### Import Prepared Data Prepared 6-Months' data on daily log-returns and variance, from twenty 'FTSE100' stocks selected at random ``` # Import data fname = 'returns.csv' DF = pd.read_csv(fname) N = 5 # Number of stocks to use DF = DF.loc[:,DF.columns.str.endswith('returns')] DF = DF.iloc[:,:N] # Calculate average returns for each asset Mus = np.array(DF.mean(axis=0)) # Get np vector for average daily log-returns per stock print(Mus) ``` ### Define Objective Function (with penalty terms) And Gradient and Hessian functions, determined analytically ``` def objective_plus_penalty(W): Q = DF.cov() # Covariance matrix var = np.matmul(W.transpose(),np.matmul(Q,W)) # Variance vector # Penalty Function method penalty1 = (np.sum(W)-1)**2 # Large for sum(x) <> 1 penalty2 = 100* (R_min - np.matmul(Mus.transpose(), W))**2 # Large for returns <> minR return var + penalty1 + penalty2 def der_objective_plus_penalty(W): Q = DF.cov() der = 2 * np.matmul(Q,W) penalty1_der = np.array([2*np.sum(W)-2 for i,Wi in enumerate(W)]) penalty2_der = 100*np.array([ 2*Mus[i] * (np.matmul(Mus.transpose(),W) - R_min) for i,Wi in enumerate(W)]) return der + penalty1_der + penalty2_der def hess_objective_plus_penalty(W): Q = DF.cov() hess = 2 * Q.values # Assemble Hessian 
terms for penalty1 function penalty1_hess = 2*np.ones_like(hess) # Assemble Hessian terms for penalty2 function bcast = np.broadcast(Mus.reshape(len(Mus),1),Mus.reshape(1,len(Mus))) penalty2_hess = np.empty(bcast.shape) penalty2_hess.flat = 100*np.array([2*a*b for (a,b) in bcast]) return hess + penalty1_hess + penalty2_hess ``` # Part 1: Calculate Efficient Frontier of Minimum-Variance Portfolios We adjust the allocation of assets in the portfolio to minimise the volatility for a given expected rate of return. By performing this minimisation routine for multiple values of R_min, we can trace out the 'Efficient Frontier' - the set of feasible optimal portfolios. ``` # Prepare lists awaiting results for plotting, using different optimisation algorithms my_means=[] my_vars=[] # Define an initial allocation of resources # We use an equal allocation, but optima are found quickly even with difficult initial conditions #(e.g. unbalanced, large and/or negative allocations) x0 = np.array([1/N for n in range(N)]) # Minimise Portfolio Variance at expected risk level for R_min in np.linspace(-0.001,0.003,12): my_result = Newton(x0, objective_plus_penalty, der_objective_plus_penalty, hess_objective_plus_penalty, tol = 1e-6, maxiter=1e4) # Console Output """ print('\n\n Return: ',R_min) print('n_iters: ', my_result.k) print('Portfolio Weight:', np.sum(my_result.x)) """ # Store results for plotting (variance minus penalty terms) my_means.append(np.dot(Mus,my_result.x)) my_vars.append(my_result.fun - (np.sum(my_result.x)-1)**2 - 100*(R_min - np.matmul(Mus.transpose(), my_result.x))**2 ) print("Optimal Portfolio Values stored") ``` ### Perform Monte Carlo Simulation of Portfolios We generate portfolios with random allocations between N assets, and calculate the expected mean returns and variance. 
This provides context and reassures us of the validity of the optimisation routines (see figures below)

```
def rand_weights(n):
    # Produces n random weights that sum to 1
    W = np.random.rand(n)
    W = np.asmatrix(W / np.sum(W)).reshape(len(Mus), 1)
    return W

def return_portfolio(W):
    Q = np.asmatrix(DF.cov())   # Covariance matrix
    R = Mus.transpose() * W     # Expected Return of the portfolio
    Var = W.transpose() * Q * W # Expected Variance of the portfolio
    return R, Var

n_portfolios = 1000 * N

mc_means = np.matrix(np.empty(1))
mc_vars = np.matrix(np.empty(1))

for portfolio in range(n_portfolios):
    W = rand_weights(len(Mus))
    m, s = return_portfolio(W)
    mc_means = np.vstack((mc_means, m))
    mc_vars = np.vstack((mc_vars, s))

######################################################
# Plot Solutions on Mean-Variance Axis
import matplotlib.pyplot as plt
plt.style.use('seaborn')

# Plot Simulation
plt.plot(mc_vars[2:], mc_means[2:], 'o', markersize=1.5)
plt.xlabel('Variance')
plt.ylabel('Daily Mean Log Return')
plt.title(str(n_portfolios) + ' Portfolios, each containing ' + str(N) + ' stocks, with weights allocated randomly')

# Plot Efficient Frontier
plt.plot(my_vars, my_means, 'r', linewidth=0.5)
plt.legend(['Monte-Carlo Simulation', 'Efficient Frontier'])
```

# Part 2: Compare Performance of Optimisation Algorithms under Penalty Methods

This is most neatly visualised using a trajectory plot over a 2D surface (corresponding to the simplified case of a two-asset portfolio)

First we reset initial conditions to the 2D case

```
# Initial condition (chosen outside the feasible set to display iteration more interestingly)

# Reset Data
fname = 'returns.csv'
DF = pd.read_csv(fname)
N = 2 # Number of stocks to use
DF = DF.loc[:, DF.columns.str.endswith('returns')]
DF = DF.iloc[:, :N]

R_min = 0.001
Mus = np.array(DF.mean(axis=0))

x0 = np.array([-0.2 for n in range(N)])
```

### Define 2D Function Spaces

We wish to interrogate the possible combinations of assets in the portfolio.
For two assets, this presents a space defined by the funds allocated to asset $x_1$ and asset $x_2$ ``` ### We define different spaces purely to better present the behaviour of the minimisation algorithms in each case # No Penalty x1 = np.linspace(-0.25,0.25,24) x2 = np.linspace(-0.25,0.25,24) f1_space = np.meshgrid(x1,x2) # Penalty x1 = np.linspace(-1.5,1.5,24) x2 = np.linspace(-1.5,1.5,24) f2_space = np.meshgrid(x1,x2) ``` ### Define 2D Objective Functions We need to provide 2D scalar functions, with and without penalty terms, to plot and compare the two surfaces ``` # Objective function in 2D to operate over np meshgrid surfaces def Var(x1,x2): W = np.array([x1,x2]) Q = DF.cov().values Var = Q[0][0] * x1*x1 + Q[0][1] * x1*x2 + Q[1][0] * x2*x1 + Q[1][1] * x2*x2 return Var def Var_plus_penalty(x1,x2): W = np.array([x1,x2]) Q = DF.cov().values Var = Q[0][0] * x1*x1 + Q[0][1] * x1*x2 + Q[1][0] * x2*x1 + Q[1][1] * x2*x2 penalty1 = (W[0] + W[1] - 1)**2 penalty2 = 100* ( R_min - Mus[0] * W[0] - Mus[1] * W[1]) **2 return Var + penalty1 + penalty2 ``` ### Calculate Trajectories We also wish to present the routes which the optimisation routines traverse in iterating towards the optimal portfolio weights. This can be plotted over the surfaces defined above. We make use of the objective functions defined initially, but now also need to define penalty-free variants in order to present the trajectory over the penalty-free surface: ``` def objective(W): Q = DF.cov() # Covariance matrix var = np.matmul(W.transpose(),np.matmul(Q,W)) # Variance vector return var def der_objective(W): Q = DF.cov() der = 2 * np.matmul(Q,W) return der def hess_objective(W): Q = DF.cov() hess = 2 * Q.values return hess # Optimise using Steepest Gradient Descent (VERY SLOW!) 
#sgd_result = SGD(x0, objective, der_objective, hess_objective, tol = 1e-6, maxiter=1e4) #penalty_sgd_result = SGD(x0, objective_plus_penalty, der_objective_plus_penalty, hess_objective_plus_penalty, tol = 1e-6, maxiter=1e4) # Optimise using BFGS bfgs_result = BFGS(x0, objective, der_objective, hess_objective, tol = 1e-6, maxiter=1e4) penalty_bfgs_result = BFGS(x0, objective_plus_penalty, der_objective_plus_penalty, hess_objective_plus_penalty, tol = 1e-6, maxiter=1e4) # Optimise using Newton newton_result = Newton(x0, objective, der_objective, hess_objective, tol = 1e-6, maxiter=1e4) penalty_newton_result = Newton(x0, objective_plus_penalty, der_objective_plus_penalty, hess_objective_plus_penalty, tol = 1e-6, maxiter=1e4) ``` ### Plot Trajectories onto Previous Surface Axes Pre-produced plots are also made available as .pdf files alongside this notebook. ``` %matplotlib notebook plt.style.use('default') ### Plot Surfaces # No Penalty Z1 = Var(f1_space[0],f1_space[1]) # Calculate the Variance over the feasible space fig1 = plt.figure() ax1 = fig1.gca(projection='3d') ax1.plot_surface(f1_space[0],f1_space[1], Z1, cmap="bone_r") ax1.set_title('Portfolio Variance Space') # Including Penalty Z2 = Var_plus_penalty(f2_space[0],f2_space[1]) # Calculate the Variance over the feasible space, with penalty terms fig2 = plt.figure() ax2 = fig2.gca(projection='3d') ax2.plot_surface(f2_space[0],f2_space[1], Z2, cmap="bone_r") ax2.set_title('Portfolio Variance Space with Penalty Function') ### Plot Trajectories # No Penalty ax1.plot(bfgs_result.xs[:,0], bfgs_result.xs[:,1], bfgs_result.fs.flatten(),'g',linewidth=0.5) ax1.plot(newton_result.xs[:,0], newton_result.xs[:,1], newton_result.fs.flatten(),'r',linewidth=0.5) #ax1.plot(sgd_result.xs[:,0], sgd_result.xs[:,1], sgd_result.fs.flatten(),'k',linewidth=0.5) # Including Penalty ax2.plot(penalty_bfgs_result.xs[:,0],penalty_bfgs_result.xs[:,1],penalty_bfgs_result.fs.flatten(),'g',linewidth=0.5) 
ax2.plot(penalty_newton_result.xs[:,0], penalty_newton_result.xs[:,1], penalty_newton_result.fs.flatten(),'r',linewidth=0.5)
#ax2.plot(penalty_sgd_result.xs[:,0], penalty_sgd_result.xs[:,1], penalty_sgd_result.fs.flatten(),'k',linewidth=0.5)

### Formatting plots
leg1 = ax1.legend(['BFGS Trajectory','Newton Trajectory'])#, 'SGD Trajectory'])
leg2 = ax2.legend(['BFGS Trajectory', 'Newton Trajectory'])#, 'SGD Trajectory'])
legends = [leg1, leg2]
axes = [ax1, ax2]
plt.draw() # Allows us to interact with the legend.

for i, ax in enumerate(axes):
    # Get the bounding box of the original legend
    bb = legends[i].get_bbox_to_anchor().inverse_transformed(ax.transAxes)
    # Adjust location of the legend.
    bb.x0 += 0.1
    bb.x1 += 0.1
    legends[i].set_bbox_to_anchor(bb, transform = ax.transAxes)
    # Size labels.
    ax.title.set_size(10)
    ax.set_xlabel('Asset 1', fontsize = 9)
    ax.set_ylabel('Asset 2', fontsize = 9)
    ax.tick_params(axis='x', labelsize=8)
    ax.tick_params(axis='y', labelsize=8)
    ax.tick_params(axis='z', labelsize=8)
```

#### Note that the scale of the z-axis is significantly greater in the penalty-function surface; the penalties impose significant modifications to the function space, ensuring that infeasible portfolio weights are not targeted

### Recover the Optimum portfolio weights for the 2D case

```
W = penalty_bfgs_result.x
print(W)
```

# Part 3: Maximise the Sharpe Ratio to Find Global Optimum

We formulate the quadratic problem to solve:

$$ \min_{\underline{x}}\;\;\; \underline{x}^\top Q\,\underline{x} $$

$$ \text{s.t.}\quad \underline{1}^\top\, \underline{x} - \kappa = 0 $$

$$ \quad (\underline{\mu}-r\,\underline{1} )^\top \;\underline{x} = 1 $$

Recalculating required terms:

```
# Reset Data
fname = 'returns.csv'
DF = pd.read_csv(fname)
N = 5 # Number of stocks to use
DF = DF.loc[:, DF.columns.str.endswith('returns')]
DF = DF.iloc[:, :N]

Mus = np.array(DF.mean(axis=0)) # Average daily log-returns
r_f = -0.005 # Risk-free rate
Q = DF.cov().values
np.set_printoptions(precision=2)
```

Forming the
Lagrangian

$$\mathcal{L}(x,\lambda_1, \lambda_2,\kappa) = \underline{x}^\top Q\underline{x} -\, \lambda_1\left(\underline{1}^\top\, \underline{x} -\kappa \right) - \lambda_2\,\left(\,(\,\underline{\mu}-r\,\underline{1} )^\top\,\underline{x} - 1\right) $$

And taking the partial derivative w.r.t. each dimension of $\underline{x}$,

$$ \nabla_{\underline{x}} \,\mathcal{L} = 2\, Q \,\underline{x} - \lambda_1 \underline{1} \, - \lambda_2 (\,\underline{\mu}-r\,\underline{1} ) = \underline{0} $$

And w.r.t. $\lambda_1$:

$$ \nabla_{\lambda_1} \mathcal{L} = -(\underline{1}^\top\, \underline{x} -\kappa) = 0 $$

And w.r.t. $\lambda_2$:

$$ \nabla_{\lambda_2} \mathcal{L} = -\left((\underline{\mu}-r\,\underline{1})^\top \underline{x} - 1\right) = 0 $$

And w.r.t. $\kappa$:

$$ \nabla_{\kappa} \mathcal{L} \,= \lambda_1 = 0 $$

This results in an $(N+3)\times (N+3)$ system of equations, which we can represent in the form $Ax=b$ (shown using block matrices below):

$$ \begin{pmatrix} [2Q] & [ -\underline{1}] & [-(\underline{\mu}-r\,\underline{1})] & [\underline{0}]\\ [-\underline{1}^\top] & 0 & 0 & 1\\ [-(\underline{\mu}-r\,\underline{1})^\top] & 0 & 0 & 0 \\ [\underline{0}^\top] & 1 & 0 & 0 \\ \end{pmatrix} \begin{pmatrix} [\underline{x}] \\ \lambda_1\\ \lambda_2\\ \kappa \end{pmatrix} = \begin{pmatrix} [\underline{0}] \\ 0 \\ 1 \\ 0 \end{pmatrix} $$

This is assembled below:

```
Mus.resize(N,1)

# Useful vectors / unit arrays
one = np.ones(1).reshape(1,1)
ones = np.ones_like(Mus)
zeros = np.zeros_like(Mus)
zero = np.zeros(1).reshape(1,1)

A = np.vstack(( np.hstack(( 2*Q, -ones, -(Mus-r_f), zeros )),
                np.hstack((-ones.T, zero, zero, one )),
                np.hstack((-(Mus-r_f).T, zero, zero, zero )),
                np.hstack((zeros.T, one, zero, zero )) ))

b = np.zeros((N+3,1))
b[N+1] = 1
```

Resulting in the $(N+3,N+3)$ matrix $A$ and vector $b$

```
print(A)
print(b)
```

### We then solve the system to recover the vector $\quad \operatorname{res} = \begin{pmatrix} [\underline{x}] \\ \lambda_1\\ \lambda_2\\ \kappa \end{pmatrix} $

```
res = np.linalg.solve(A,b)
kappa = res[N+2]
```

From which we can return the
weights of the portfolio $ \underline{w} = \underline{x}/\kappa$

```
x = res[:N] / kappa
print(x.T,'\n')

# Check the portfolio adds up to 1
print(sum(x))
```

### We can now plot this solution over the Efficient Frontier, to complete the analysis

Calculate the expected return and variance of the optimum portfolio

```
opt_R = np.dot(Mus.T, x)[0]
opt_var = np.dot(x.T, np.dot(Q, x))[0]

%matplotlib inline
plt.style.use('seaborn')

# Plot Simulation
plt.plot(mc_vars[2:], mc_means[2:], 'o', markersize=1.5)
plt.xlabel('Variance')
plt.ylabel('Daily Mean Log Return')
plt.title(str(n_portfolios) + ' Portfolios, each containing ' + str(N) + ' stocks, with weights allocated randomly')

# Plot Efficient Frontier
plt.plot(my_vars, my_means, 'r', linewidth=0.5)
plt.scatter([opt_var], [opt_R], 30, c='k')
plt.legend(['Monte-Carlo Simulation','Efficient Frontier', 'Globally Optimum Portfolio'])
```
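As a sanity check on the linear-algebra route above: for two assets the tangency (maximum Sharpe ratio) portfolio has the simple closed form $\underline{w} \propto \Sigma^{-1}(\underline{\mu}-r\,\underline{1})$, normalised to sum to one. The sketch below is purely illustrative (plain Python, hypothetical numbers), not part of the notebook's data pipeline:

```python
def tangency_weights_2assets(cov, mus, r_f):
    """Closed-form max-Sharpe weights for two assets: w ~ inv(cov) @ (mu - r_f * 1),
    normalised so the weights sum to one."""
    (a, b), (c, d) = cov
    det = a * d - b * c                      # determinant of the 2x2 covariance
    inv = [[d / det, -b / det],
           [-c / det, a / det]]              # explicit 2x2 inverse
    excess = [mus[0] - r_f, mus[1] - r_f]    # excess returns over the risk-free rate
    raw = [inv[0][0] * excess[0] + inv[0][1] * excess[1],
           inv[1][0] * excess[0] + inv[1][1] * excess[1]]
    total = raw[0] + raw[1]
    return [w / total for w in raw]

# Hypothetical inputs: uncorrelated assets with variances 0.04 and 0.01
w = tangency_weights_2assets([[0.04, 0.0], [0.0, 0.01]], [0.08, 0.05], 0.02)
# w == [1/3, 2/3], summing to one
```

Solving the full $(N+3)\times(N+3)$ KKT system with `np.linalg.solve` generalises this to any $N$, which is why the notebook takes that route.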
### Test web application locally

This notebook pulls some images and tests them against the local web app running inside the Docker container we made previously.

```
import matplotlib.pyplot as plt
import numpy as np
from testing_utilities import *
import requests
%matplotlib inline

%load_ext autoreload
%autoreload 2

image_name='masalvar/tfresnet-gpu'
```

Run the Docker container in the background and open port 80. Notice we are using nvidia-docker and not docker

```
%%bash --bg -s "$image_name"
nvidia-docker run -p 80:80 $1
```

Wait a few seconds for the application to spin up and then check that everything works

```
!curl 'http://0.0.0.0:80/version'
```

Pull an image of a Lynx to test our local web app with

```
IMAGEURL = "https://upload.wikimedia.org/wikipedia/commons/thumb/6/68/Lynx_lynx_poing.jpg/220px-Lynx_lynx_poing.jpg"

headers = {'content-type': 'application/json'}
jsonimg = img_url_to_json(IMAGEURL)
jsonimg[:100] # Example of json string

plt.imshow(to_img(IMAGEURL))

%time r = requests.post('http://0.0.0.0:80/score', data=jsonimg, headers=headers)
r.json()
```

Let's try a few more images

```
images = ('https://upload.wikimedia.org/wikipedia/commons/thumb/6/68/Lynx_lynx_poing.jpg/220px-Lynx_lynx_poing.jpg',
          'https://upload.wikimedia.org/wikipedia/commons/3/3a/Roadster_2.5_windmills_trimmed.jpg',
          'http://www.worldshipsociety.org/wp-content/themes/construct/lib/scripts/timthumb/thumb.php?src=http://www.worldshipsociety.org/wp-content/uploads/2013/04/stock-photo-5495905-cruise-ship.jpg&w=570&h=370&zc=1&q=100',
          'http://yourshot.nationalgeographic.com/u/ss/fQYSUbVfts-T7pS2VP2wnKyN8wxywmXtY0-FwsgxpiZv_E9ZfPsNV5B0ER8-bOdruvNfMD5EbP4SznWz4PYn/',
          'https://cdn.arstechnica.net/wp-content/uploads/2012/04/bohol_tarsier_wiki-4f88309-intro.jpg',
          'http://i.telegraph.co.uk/multimedia/archive/03233/BIRDS-ROBIN_3233998b.jpg')

url='http://0.0.0.0:80/score'
results = [requests.post(url, data=img_url_to_json(img), headers=headers) for img in images]

plot_predictions(images,
                 results)
```

Next let's quickly check what the request-response performance is for the locally running Docker container.

```
image_data = list(map(img_url_to_json, images)) # Retrieve the images and data

timer_results = list()
for img in image_data:
    res = %timeit -r 1 -o -q requests.post(url, data=img, headers=headers)
    timer_results.append(res.best)

timer_results

print('Average time taken: {0:4.2f} ms'.format(10**3 * np.mean(timer_results)))
```

Stop our Docker container

```
%%bash
docker stop $(docker ps -q)
```

We can move onto [deploying our web application on AKS](04_DeployOnAKS.ipynb)
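As an aside, the `%timeit` magic used above only works inside IPython; outside a notebook the same best-of-n timing can be sketched with the standard library. The `lambda` below is a stand-in workload (the real call would be `requests.post` against the running container):

```python
import time

def best_of(fn, repeats=5):
    """Run fn several times and return the best wall-clock duration in seconds,
    mimicking the 'best' figure that %timeit reports."""
    best = float("inf")
    for _ in range(repeats):
        start = time.perf_counter()
        fn()
        best = min(best, time.perf_counter() - start)
    return best

# Stand-in workload instead of requests.post(url, data=img, headers=headers)
elapsed = best_of(lambda: sum(range(10000)))
print('Best time: {0:4.2f} ms'.format(10**3 * elapsed))
```

Taking the best of several repeats, as `%timeit` does, filters out one-off scheduling noise that would inflate a single measurement.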
# Session 3: Data Structuring 2 *Nicklas Johansen* ## Agenda In this session, we will work with different types of data: - Boolean Data - Numeric Operations and Methods - String Operations - Categorical Data - Time Series Data ### Recap - Loading Packages - Pandas Series - Pandas Data Frames - Series vs DataFrames - Converting Data Types - Indices and Column Names - Viewing Series and Dataframes - Row and Column Selection - Modifying DataFrames - Changing the Index - Changing Column Values - Sorting Data - DO2021 COHORT ``` # Loading packages import matplotlib.pyplot as plt import numpy as np import pandas as pd import requests import seaborn as sns ``` # Boolean Data ## Logical Expression for Series (1:2) *Can we test an expression for all elements?* Yes: **==**, **!=** work for a single object or Series with same indices. Example: ``` print(my_series3) print() print(my_series3 == 0) ``` What datatype is returned? ## Logical Expression in Series (2:2) *Can we check if elements in a series equal some element in a container?* Yes, the `isin` method. Example: ``` my_rng = list(range(2)) print(my_rng) print() print(my_series3.isin(my_rng)) ``` ## Power of Boolean Series (1:2) *Can we combine boolean Series?* Yes, we can use: - the `&` operator (*and*) - the `|` operator (*or*) ``` titanic = sns.load_dataset('titanic') titanic.head() print(((titanic.sex == 'female') & (titanic.age >= 30)).head(3)) # selection by multiple columns ``` What datatype was returned? ## Power of Boolean Series (2:2) *Why do we care for boolean series (and arrays)?* Mainly because we can use them to select rows based on their content. ``` print(my_series3) print() print(my_series3[my_series3<3]) ``` NOTE: Boolean selection is extremely useful for dataframes!! # Numeric Operations and Methods ## Numeric Operations (1:3) *How can we make basic arithmetic operations with arrays, series and dataframes?* It really works just like with Python data, e.g. lists. 
An example with squaring:

```
2 ** 2

num_ser1 = pd.Series([2,3,2,1,1])
num_ser2 = num_ser1 ** 2
print(num_ser1)
print(num_ser2)
```

## Numeric Operations (2:3)

*Are other numeric python operators the same?*

The numeric operators `/`, `//`, `-`, `*`, `**` work as expected. So do the comparison operators (`==`, `!=`, `>`, `<`).

*Why is this useful?*
- vectorized operations are VERY fast;
- requires very little code.

```
10 / 2

num_ser1 / num_ser1
```

## Numeric Operations (3:3)

*Can we also do this with vectors of data?*

Yes, we can also do elementwise addition, multiplication, subtraction etc. of series. Example:

```
num_ser1 + num_ser2
```

## Numeric methods (1:4)

*OK, these were some quite simple operations with pandas series. Are there other numeric methods?*

Yes, pandas series and dataframes have other powerful numeric methods built-in. Consider an example series of 10 million randomly generated observations:

```
arr_rand = np.random.randn(10**7) # Draw 10^7 observations from standard normal
arr_rand = np.random.normal(size = 10**7)

s2 = pd.Series(arr_rand) # Convert to pandas series
s2
```

## Numeric methods (2:4)

Now, display the median of this distribution:

```
s2.median() # Display median
```

Other useful methods include `mean`, `quantile`, `min`, `max`, `std`, `describe` and many more.

```
np.round(s2.describe(), 2) # Display other characteristics of distribution (rounded)
```

## Numeric methods (3:4)

An important method is `value_counts`. This counts the number of occurrences of each observation. Example:

```
cuts = np.arange(-10, 10, 1) # range from -10 to 10 with intervals of unit size
cats = pd.cut(s2, cuts) # cut into categorical data
cats.value_counts()
```

What is the observation in the `value_counts` output: the index or the data?
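To answer the question above: in the `value_counts` output the observed value (here, the bin) becomes the *index*, and the count becomes the data. The same counting idea exists in the standard library as `collections.Counter`, shown here as a plain-Python analogue rather than a pandas replacement:

```python
from collections import Counter

observations = ['a', 'b', 'a', 'a', 'c', 'b']
counts = Counter(observations)      # maps each value to its frequency
ordered = counts.most_common()      # sorted by count, descending, like value_counts
# ordered == [('a', 3), ('b', 2), ('c', 1)]
```

Just as with `value_counts`, the value you counted is the lookup key (`counts['a']` gives 3), not the payload.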
## Numeric methods (4:4)

*Are there other powerful numeric methods?*

Yes: examples include
- `unique`, `nunique`: the unique elements and the count of unique elements
- `cut`, `qcut`: partition series into bins
- `diff`: difference every two consecutive observations
- `cumsum`: cumulative sum
- `nlargest`, `nsmallest`: the n largest elements
- `idxmin`, `idxmax`: index which is minimal/maximal
- `corr`: correlation matrix

Check the [series documentation](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.html) for more information.

# String Operations

## String Operations (1:3)

*Do the numeric python operators also apply to strings?*

In some cases yes, and this can be done very elegantly! Consider the following example with a series:

```
names_ser1 = pd.Series(['Nicklas', 'Jacob', 'Preben', 'Laila'])
names_ser1
```

Now add another string:

```
names_ser1 + ' works @ SAMF'
```

## String Operations (2:3)

*Can two vectors of strings also be combined like numeric vectors?*

Fortunately, yes:

```
names_ser2 = pd.Series(['python', 'something with pyramids', 'research', 'admin'])

names_ser1 + ' teaches ' + names_ser2
```

## String Operations (3:3)

*Any other types of vectorized operations with strings?*

Many. In particular, there is a large set of string-specific operations (see the `.str`-notation below). Some examples (see table 7-5 in PDA for more - we will revisit in session 5):

```
names_ser1.str.upper() # works similarly with lower()

names_ser1.str.contains('k')

names_ser1.str[0:2] # We can even do vectorized slicing of strings!
```

# Categorical Data

## The Categorical Data Type

*Are string (or object) columns attractive to work with?*

```
pd.Series(['Pandas', 'series'])
```

No, sometimes the categorical data type is better:
- Use categorical data when many characters are repeated
- Less storage and faster computations
- You can put some order (structure) on your string data
- It also allows new features:
  - Plots have bars, violins etc.
sorted according to category order ## Example of Categorical Data (1:2) Simulate data: ``` edu_list = ['BSc Political Science', 'Secondary School'] + ['High School']*2 str_ser = pd.Series(edu_list*10**5) str_ser ``` Option 1: No order ``` cat_ser = str_ser.astype('category') cat_ser ``` ## Example of Categorical Data (2:2) Option 2: Order ``` edu_cats = ['Secondary School', 'High School', 'BSc Political Science'] cats = pd.Categorical(str_ser, categories=edu_cats, ordered=True) cat_ser2 = pd.Series(cats, index=str_ser.index) cat_ser2 ``` ## Numbers as Categories It is natural to think of measures in categories, e.g. small and large. *Can we convert our numerical data to bins in a smart way?* Yes, there are two methods that are useful (and you just applied one of them earlier in this session!): - `cut` which divides data by user specified bins - `qcut` which divides data by user specified quantiles - E.g. median, $q=0.5$; lower quartile threshold, $q=0.25$; etc. ``` cat_ser3 = pd.qcut(pd.Series(np.random.normal(size = 10**6)), q = [0,0.025, 0.975, 1]) cat_ser3.cat.categories cat_ser3.cat.codes.head(5) ``` ## Converting to Numeric and Binary For regression, we often want our string / categorical variable as dummy variables: - That is, all categories have their own binary column (0 and 1) - Note: We may leave one 'reference' category out here (intro statistics) - Rest as numeric *How can we do this?* Insert dataframe, `df`, into the function as `pd.get_dummies(df)` ``` pd.get_dummies(cat_ser3).head(5) ``` # Time Series Data ## Temporal Data Type *Why is time so fundamental?* Every measurement made by a human was made at some point in time - therefore, it has a "timestamp"! ## Formats for Time *How are time stamps measured?* 1. **Datetime** (ISO 8601): Standard calendar - year, month, day (minute, second, milisecond); timezone - can come as string in raw data 2. 
**Epoch time**: Seconds since January 1, 1970 - 00:00, GMT (Greenwich time zone)
   - nanoseconds in pandas

## Time Data in Pandas

*Does Pandas store it in a smart way?*

Pandas and numpy have native support for temporal data, combining datetime and epoch time.

```
str_ser2 = pd.Series(['20210101', '20210727', '20210803', '20211224'])
dt_ser = pd.to_datetime(str_ser2)
dt_ser
```

## Example of Passing Temporal Data

*How does the input type matter for how time data is passed?*

A lot! As we will see, `to_datetime()` may assume either *datetime* or *epoch time* format:

```
pd.to_datetime(str_ser2)

pd.to_datetime(str_ser2.astype(int))
```

## Time Series Data

*Why are temporal data powerful?*

We can easily make and plot time series. Example of $\sim$40 years of Apple stock prices:
- Tip: Install in terminal using *pip install yfinance* in Anaconda Prompt

```
! pip install yfinance

import yfinance as yf

plt.plot(yf.download("AAPL", data_source='yahoo')['Adj Close'])
plt.yscale('log')
plt.xlabel('Time')
plt.ylabel('Apple Stock Price')
```

## Time Series Components

*What is within the series that we just downloaded? What is a time series?*

```
aapl = yf.download("AAPL", data_source='yahoo')['Adj Close']

aapl.head(5)

aapl.head(5).index
```

So in essence, time series in pandas are often just series of data with a time index.

## Pandas and Time Series

*Why is pandas good at handling and processing time series data?*

It has specific tools for resampling and interpolating data:
- See 11.3, 11.5 and 11.6 in the PDA textbook

## Datetime in Pandas

*What other uses might time data have?*

We can extract data from datetime columns. These columns have the `dt` accessor and its sub-methods. Example:

```
dt_ser2 = pd.Series(aapl.index)

dt_ser2.dt.month # also year, weekday, hour, second
```

Many other useful features (e.g. aggregation over time into means, medians, etc.)
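The two time formats above can be inter-converted with the standard library alone; `pd.to_datetime` wraps the same logic, but stores nanoseconds rather than seconds. A minimal sketch:

```python
from datetime import datetime, timezone

# Parse a compact ISO-style stamp (as in str_ser2) and convert to epoch seconds
dt = datetime.strptime('20210101', '%Y%m%d').replace(tzinfo=timezone.utc)
epoch_seconds = int(dt.timestamp())   # seconds since 1970-01-01 00:00 UTC

# And back again: epoch seconds -> datetime
roundtrip = datetime.fromtimestamp(epoch_seconds, tz=timezone.utc)
# epoch_seconds == 1609459200 and roundtrip == dt
```

This also explains the surprising `pd.to_datetime(str_ser2.astype(int))` result above: once the string is an integer, pandas interprets it as an epoch offset rather than a calendar date.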
## Associated Readings

PDA, section 5.3: Descriptive statistics and numerical methods

PDA, chapter 7:

- Handling missing data
- Data transformations (duplicates, dummies, binning, etc.)
- String manipulations

PDA, sections 11.1-11.2:

- Dates and time in Python
- Working with time series in pandas (time as index)

PDA, sections 12.1, 12.3:

- Working with categorical data in pandas
- Method chaining

PML, chapter 4, section 'Handling categorical data':

- Encoding class labels with `LabelEncoder`
- One-hot encoding

## session_3_exercises.ipynb

Will be available on GitHub today at 15:15.

- Method 1: sync your cloned repo
- Method 2: download from the git repo
```
from __future__ import print_function
from parser import *  # local helper module providing parse_folder
from keras.models import Sequential
from keras.layers import Dense, Activation, Dropout, LSTM
from keras.optimizers import RMSprop
import numpy as np
import random

text = parse_folder('TheVGLC-master/Super Mario Bros/Processed/')
print('corpus length:', len(text))

chars = sorted(list(set(text)))
print('total chars:', len(chars))

# Make vocabularies
char_indices = dict((c, i) for i, c in enumerate(chars))
indices_char = dict((i, c) for i, c in enumerate(chars))

maxlen = 40
step = 3
sentences = []
next_chars = []
for i in range(0, len(text) - maxlen, step):
    sentences.append(text[i: i + maxlen])
    next_chars.append(text[i + maxlen])
print('nb sequences:', len(sentences))
```

At this point we've read in the text, found out the size of our vocabulary, and split it into semi-redundant sequences. Now we encode it as a one-hot encoding. This means that if before it looked like:

-X-X

and we have the vocab `{'-':0, 'X':1, 'S':2}`, it will now look like:

[[1,0,0],[0,1,0],[1,0,0],[0,1,0]]

i.e. the index of the character in the vocab is set to 1 and everything else is set to 0.

```
X = np.zeros((len(sentences), maxlen, len(chars)), dtype=bool)
y = np.zeros((len(sentences), len(chars)), dtype=bool)
for i, sentence in enumerate(sentences):
    for t, char in enumerate(sentence):
        X[i, t, char_indices[char]] = 1
    y[i, char_indices[next_chars[i]]] = 1
```

We create two matrices, one of size (# of sequences X index in sequence X size of vocab) and one of size (# of sequences X size of vocab). The first is the sequence data in one-hot encoding, and the second is what we are predicting, i.e. the next character in the sequence after the preceding sequence.
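As a quick sanity check of the encoding just described, here is the toy example from the text worked out with plain numpy (the `vocab` dict mirrors the one above):

```python
import numpy as np

vocab = {'-': 0, 'X': 1, 'S': 2}
seq = '-X-X'

# One row per character; set the column of that character's index to 1
one_hot = np.zeros((len(seq), len(vocab)), dtype=bool)
for t, ch in enumerate(seq):
    one_hot[t, vocab[ch]] = 1

print(one_hot.astype(int).tolist())
# → [[1, 0, 0], [0, 1, 0], [1, 0, 0], [0, 1, 0]]
```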
At this point, we're going to create our Neural Network

```
size = 128
layers = 2
dropout = 0.5
```

size = # of LSTM cells per layer in the neural network

layers = # of layers of LSTM cells

dropout = % of cells to drop out at each training instance

It tends to be a bit of a black art to determine the proper tuning for these parameters. Generally, you can assume that bigger is better and deeper is better, but the balance between the two is up in the air. It's easier to go deeper than wider: while a 256 x 2 network has the same number of cells as a 128 x 4 network, it has roughly 4/3 the number of inter-layer parameters (one ~256^2 weight matrix between layers vs. three ~128^2 ones), but there are diminishing returns in both.

Dropout randomly turns off a % of cells for each training instance, which acts as a form of regularization that prevents the network from overfitting. The reason is that instead of specific cells becoming overly attuned, dropout creates exponentially many sub-networks that must all try to learn the same things in different ways. Increasing dropout increases training time, so it's best to start small; if a divergence between training and validation error appears, increase the dropout and start again.

```
model = Sequential()
# INPUT
model.add(LSTM(size, input_shape=(maxlen, len(chars)), return_sequences=True))
model.add(Dropout(dropout))
# MIDDLE LAYERS
for ii in range(layers-2):
    model.add(LSTM(size, input_shape=(maxlen, size), return_sequences=True))
    model.add(Dropout(dropout))
# OUTPUT
model.add(LSTM(size, input_shape=(maxlen, len(chars)), return_sequences=False))
model.add(Dropout(dropout))
model.add(Dense(len(chars)))
model.add(Activation('softmax'))
```

The model construction is broken into 3 sections:

#### INPUT

Since this is the first layer in our network, we have to specify the input dimensions (coming from the dimensions of our X data above).

#### MIDDLE

Here is where we construct an arbitrary number of LSTM layers.
Each of these returns a sequence of vectors (as the layers increase, we can think of them as learning a hierarchy of sequences).

#### OUTPUT

Our final LSTM layer doesn't output a sequence and instead outputs a single vector. This can be thought of as a distillation of the previous sequence into one piece of information. This is then fed into a densely connected (Dense) layer the size of our vocabulary.

The output of this Dense layer has a softmax activation, which is defined as:

$\sigma (\mathbf {z} )_{j}={\frac {e^{z_{j}}}{\sum _{k=1}^{K}e^{z_{k}}}}$ for j = 1, …, K.

i.e. we exponentiate each output of the Dense layer and then divide by the sum of those exponentiations. Why? By exponentiating we guarantee that each value > 0. By dividing by the sum, we guarantee that everything sums to 1. These are exactly the properties we need for a discrete probability distribution.

In essence we've now distilled our sequence into a probability distribution over the next character in the sequence given the preceding sequence, i.e.

$Pr(c_i | c_{i-1},c_{i-2}, ..., c_{i-N})$

Finally we need to compile our model. To do so we need 3 things:

##### learning rate

The rate at which the backpropagation of the error gradient occurs

##### optimization technique

Stochastic Gradient Descent (SGD) is the core of all the techniques, but there are a number of improved variants. RMSprop is the de facto choice for recurrent networks.

##### loss criterion

Neural networks are just big functions. SGD + backpropagation is how we update the parameters of the function, but to know how to update we need to know how wrong (or how right) we are. For categorical distributions this works out to the categorical cross entropy, which is defined as:

$H(p,q)=-\sum _{x}p(x)\,\log q(x)$

Where $p(x)$ is the probability associated with the truth (e.g. 1 for the correct character, 0 for everything else) and $q(x)$ is the predicted probability.
We could instead just say that our loss is 1 if we get it wrong and 0 if we get it right, but this doesn't reward how confident we are in our predictions, which is why we use $q(x)$ instead of just the predicted character.

NOTE: This is sometimes incorrectly labeled as Softmax loss (it is always coupled with a Softmax activation, but Softmax is the activation and cross entropy is the loss).

NOTE: I use the phrase loss criterion, but you will see it called just loss, just criterion, or objective in different places. These all mean the same thing.

```
learning_rate = 0.005
optimizer = RMSprop(learning_rate=learning_rate)
model.compile(loss='categorical_crossentropy', optimizer=optimizer, metrics=['accuracy'])
```

Now we train!

```
# helper function to sample an index from a probability array
def sample(preds, temperature=1.0):
    if temperature == 0.0:
        return np.argmax(preds)
    preds = np.asarray(preds).astype('float64')
    preds = np.log(preds) / temperature
    exp_preds = np.exp(preds)
    preds = exp_preds / np.sum(exp_preds)
    probas = np.random.multinomial(1, preds, 1)
    return np.argmax(probas)

# train the model, output generated text after each iteration
for iteration in range(1, 10):
    print()
    print('-' * 50)
    print('Iteration', iteration)
    model.fit(X, y, batch_size=256, epochs=1, validation_split=0.1)

    start_index = 0
    if iteration > 5:
        for diversity in [0, 1.0]:
            print()
            print('----- diversity:', diversity)

            generated = ''
            sentence = text[start_index: start_index + maxlen]
            generated += sentence
            print('----- Generating with seed: "' + sentence + '"')

            for i in range(300-maxlen+1):
                x = np.zeros((1, maxlen, len(chars)))
                for t, char in enumerate(sentence):
                    x[0, t, char_indices[char]] = 1.

                preds = model.predict(x, verbose=0)[0]
                next_index = sample(preds, diversity)
                next_char = indices_char[next_index]

                generated += next_char
                sentence = sentence[1:] + next_char

            columns = generated.split('(')[1:]
            level = [['' for c in columns] for r in columns[0]]
            for col_index, column in enumerate(columns):
                for row_index, tile in enumerate(column):
                    if row_index < len(level) and col_index < len(level[0]):
                        level[row_index][col_index] = tile
            print('\n'.join([''.join([tile for tile in row]) for row in level]))
```
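The softmax activation and cross-entropy loss discussed above can be checked numerically with a small, self-contained sketch (the logits below are arbitrary illustration values):

```python
import numpy as np

def softmax(z):
    # Subtract the max for numerical stability; the result sums to 1
    e = np.exp(z - np.max(z))
    return e / e.sum()

def cross_entropy(p, q):
    # p is the one-hot truth, q the predicted distribution
    return -np.sum(p * np.log(q))

logits = np.array([2.0, 1.0, 0.1])   # raw outputs of the Dense layer
q = softmax(logits)                  # a valid probability distribution
p = np.array([1.0, 0.0, 0.0])        # truth: the first character

print(q.sum())              # sums to 1, as a probability distribution must
print(cross_entropy(p, q))  # smaller the more confident q is in the truth
```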
## Dependencies

```
# explicit imports; some of these may also be provided by tweet_utility_scripts
import os
import shutil
import warnings
import numpy as np
import pandas as pd
import seaborn as sns
import tensorflow as tf

from tweet_utility_scripts import *
from transformers import TFDistilBertModel, DistilBertConfig
from tokenizers import BertWordPieceTokenizer
from tensorflow.keras.models import Model
from tensorflow.keras import optimizers, metrics, losses
from tensorflow.keras.callbacks import EarlyStopping, TensorBoard
from tensorflow.keras.layers import Dense, Input, Dropout, GlobalAveragePooling1D, GlobalMaxPooling1D, Concatenate

SEED = 0
seed_everything(SEED)
warnings.filterwarnings("ignore")
```

# Load data

```
database_base_path = '/kaggle/input/tweet-dataset-split-distilbert-uncased-128/'
hold_out = pd.read_csv(database_base_path + 'hold-out.csv')
train = hold_out[hold_out['set'] == 'train']
validation = hold_out[hold_out['set'] == 'validation']
display(hold_out.head())

# Unzip files
!tar -xvf /kaggle/input/tweet-dataset-split-distilbert-uncased-128/hold_out.tar.gz

base_data_path = 'hold_out/'
x_train = np.load(base_data_path + 'x_train.npy')
y_train = np.load(base_data_path + 'y_train.npy')
x_valid = np.load(base_data_path + 'x_valid.npy')
y_valid = np.load(base_data_path + 'y_valid.npy')

# Delete data dir
shutil.rmtree(base_data_path)
```

# Model parameters

```
MAX_LEN = 128
BATCH_SIZE = 64
EPOCHS = 20
LEARNING_RATE = 1e-5
ES_PATIENCE = 3

tokenizer_path = database_base_path + 'vocab.txt'
base_path = '/kaggle/input/qa-transformers/distilbert/'
base_model_path = base_path + 'distilbert-base-uncased-distilled-squad-tf_model.h5'
config_path = base_path + 'distilbert-base-uncased-distilled-squad-config.json'
model_path = 'model.h5'
log_path = './'
```

# Model

```
module_config = DistilBertConfig.from_pretrained(config_path, output_hidden_states=False)

def model_fn():
    input_ids = Input(shape=(MAX_LEN,), dtype=tf.int32, name='input_ids')
    attention_mask = Input(shape=(MAX_LEN,), dtype=tf.int32, name='attention_mask')
    token_type_ids = Input(shape=(MAX_LEN,), dtype=tf.int32, name='token_type_ids')

    base_model = TFDistilBertModel.from_pretrained(base_model_path,
                                                   config=module_config, name="base_model")
    sequence_output = base_model({'input_ids': input_ids, 'attention_mask': attention_mask, 'token_type_ids': token_type_ids})

    last_state = sequence_output[0]
    x = GlobalAveragePooling1D()(last_state)

    y_start = Dense(MAX_LEN, activation='softmax', name='y_start')(x)
    y_end = Dense(MAX_LEN, activation='softmax', name='y_end')(x)

    model = Model(inputs=[input_ids, attention_mask, token_type_ids], outputs=[y_start, y_end])
    model.compile(optimizers.Adam(learning_rate=LEARNING_RATE),
                  loss=losses.CategoricalCrossentropy(),
                  metrics=[metrics.CategoricalAccuracy()])

    return model

model = model_fn()
model.summary()
```

# Train

```
tb_callback = TensorBoard(log_dir=log_path)
es = EarlyStopping(monitor='val_loss', mode='min', patience=ES_PATIENCE,
                   restore_best_weights=True, verbose=1)

history = model.fit(list(x_train), list(y_train),
                    validation_data=(list(x_valid), list(y_valid)),
                    callbacks=[es, tb_callback],
                    epochs=EPOCHS,
                    verbose=2).history

model.save_weights(model_path)

# Compress logs dir
!tar -cvzf train.tar.gz train
!tar -cvzf validation.tar.gz validation

# Delete logs dir
if os.path.exists('/kaggle/working/train/'):
    shutil.rmtree('/kaggle/working/train/')
if os.path.exists('/kaggle/working/validation/'):
    shutil.rmtree('/kaggle/working/validation/')
```

# Model loss graph

```
sns.set(style="whitegrid")
plot_metrics(history, metric_list=['loss', 'y_start_loss', 'y_end_loss',
                                   'y_start_categorical_accuracy', 'y_end_categorical_accuracy'])
```

# Tokenizer

```
tokenizer = BertWordPieceTokenizer(tokenizer_path, lowercase=True)
```

# Model evaluation

```
train_preds = model.predict(list(x_train))
valid_preds = model.predict(list(x_valid))

train['start'] = train_preds[0].argmax(axis=-1)
train['end'] = train_preds[1].argmax(axis=-1)
train['prediction'] = train.apply(lambda x: decode(x['start'], x['end'], x['text'], tokenizer), axis=1)
train["prediction"] = train["prediction"].apply(lambda x: '.' if x.strip() == '' else x)

validation['start'] = valid_preds[0].argmax(axis=-1)
validation['end'] = valid_preds[1].argmax(axis=-1)
validation['prediction'] = validation.apply(lambda x: decode(x['start'], x['end'], x['text'], tokenizer), axis=1)
validation["prediction"] = validation["prediction"].apply(lambda x: '.' if x.strip() == '' else x)

display(evaluate_model(train, validation))
```

# Visualize predictions

```
print('Train set')
display(train.head(10))

print('Validation set')
display(validation.head(10))
```
# Using Interact

The `interact` function (`ipywidgets.interact`) automatically creates user interface (UI) controls for exploring code and data interactively. It is the easiest way to get started using IPython's widgets.

```
from __future__ import print_function
from ipywidgets import interact, interactive, fixed, interact_manual
import ipywidgets as widgets
```

## Basic `interact`

At the most basic level, `interact` autogenerates UI controls for function arguments, and then calls the function with those arguments when you manipulate the controls interactively. To use `interact`, you need to define a function that you want to explore. Here is a function that returns its only argument `x`.

```
def f(x):
    return x
```

When you pass this function as the first argument to `interact` along with an integer keyword argument (`x=10`), a slider is generated and bound to the function parameter.

```
interact(f, x=10);
```

When you move the slider, the function is called, and its return value is displayed.

If you pass `True` or `False`, `interact` will generate a checkbox:

```
interact(f, x=True);
```

If you pass a string, `interact` will generate a text area.

```
interact(f, x='Hi there!');
```

`interact` can also be used as a decorator. This allows you to define a function and interact with it in a single shot. As this example shows, `interact` also works with functions that have multiple arguments.

```
@interact(x=True, y=1.0)
def g(x, y):
    return (x, y)
```

## Fixing arguments using `fixed`

There are times when you may want to explore a function using `interact`, but fix one or more of its arguments to specific values. This can be accomplished by wrapping values with the `fixed` function.

```
def h(p, q):
    return (p, q)
```

When we call `interact`, we pass `fixed(20)` for `q` to hold it fixed at a value of `20`.

```
interact(h, p=5, q=fixed(20));
```

Notice that a slider is only produced for `p`, as the value of `q` is fixed.
## Widget abbreviations

When you pass an integer-valued keyword argument of `10` (`x=10`) to `interact`, it generates an integer-valued slider control with a range of `[-10,+3*10]`. In this case, `10` is an *abbreviation* for an actual slider widget:

```python
IntSlider(min=-10,max=30,step=1,value=10)
```

In fact, we can get the same result if we pass this `IntSlider` as the keyword argument for `x`:

```
interact(f, x=widgets.IntSlider(min=-10,max=30,step=1,value=10));
```

This example clarifies how `interact` processes its keyword arguments:

1. If the keyword argument is a `Widget` instance with a `value` attribute, that widget is used. Any widget with a `value` attribute can be used, even custom ones.
2. Otherwise, the value is treated as a *widget abbreviation* that is converted to a widget before it is used.

The following table gives an overview of different widget abbreviations:

<table class="table table-condensed table-bordered">
  <tr><td><strong>Keyword argument</strong></td><td><strong>Widget</strong></td></tr>
  <tr><td>`True` or `False`</td><td>Checkbox</td></tr>
  <tr><td>`'Hi there'`</td><td>Text</td></tr>
  <tr><td>`value` or `(min,max)` or `(min,max,step)` if integers are passed</td><td>IntSlider</td></tr>
  <tr><td>`value` or `(min,max)` or `(min,max,step)` if floats are passed</td><td>FloatSlider</td></tr>
  <tr><td>`['orange','apple']` or `{'one':1,'two':2}`</td><td>Dropdown</td></tr>
</table>

Note that a dropdown is used if a list or a dict is given (signifying discrete choices), and a slider is used if a tuple is given (signifying a range).

You have seen how the checkbox and text area widgets work above. Here, more details about the different abbreviations for sliders and dropdowns are given.

If a 2-tuple of integers is passed `(min,max)`, an integer-valued slider is produced with those minimum and maximum values (inclusively). In this case, the default step size of `1` is used.
``` interact(f, x=(0,4)); ``` If a 3-tuple of integers is passed `(min,max,step)`, the step size can also be set. ``` interact(f, x=(0,8,2)); ``` A float-valued slider is produced if the elements of the tuples are floats. Here the minimum is `0.0`, the maximum is `10.0` and step size is `0.1` (the default). ``` interact(f, x=(0.0,10.0)); ``` The step size can be changed by passing a third element in the tuple. ``` interact(f, x=(0.0,10.0,0.01)); ``` For both integer and float-valued sliders, you can pick the initial value of the widget by passing a default keyword argument to the underlying Python function. Here we set the initial value of a float slider to `5.5`. ``` @interact(x=(0.0,20.0,0.5)) def h(x=5.5): return x ``` Dropdown menus are constructed by passing a list of strings. In this case, the strings are both used as the names in the dropdown menu UI and passed to the underlying Python function. ``` interact(f, x=['apples','oranges']); ``` If you want a dropdown menu that passes non-string values to the Python function, you can pass a list of (label, value) pairs. ``` interact(f, x=[('one', 10), ('two', 20)]); ``` ## `interactive` In addition to `interact`, IPython provides another function, `interactive`, that is useful when you want to reuse the widgets that are produced or access the data that is bound to the UI controls. Note that unlike `interact`, the return value of the function will not be displayed automatically, but you can display a value inside the function with `IPython.display.display`. Here is a function that returns the sum of its two arguments and displays them. The display line may be omitted if you don't want to show the result of the function. ``` from IPython.display import display def f(a, b): display(a + b) return a+b ``` Unlike `interact`, `interactive` returns a `Widget` instance rather than immediately displaying the widget. 
```
w = interactive(f, a=10, b=20)
```

The widget is an `interactive`, a subclass of `VBox`, which is a container for other widgets.

```
type(w)
```

The children of the `interactive` are two integer-valued sliders and an output widget, produced by the widget abbreviations above.

```
w.children
```

To actually display the widgets, you can use IPython's `display` function.

```
display(w)
```

At this point, the UI controls work just like they would if `interact` had been used. You can manipulate them interactively and the function will be called. However, the widget instance returned by `interactive` also gives you access to the current keyword arguments and return value of the underlying Python function.

Here are the current keyword arguments. If you rerun this cell after manipulating the sliders, the values will have changed.

```
w.kwargs
```

Here is the current return value of the function.

```
w.result
```

## Disabling continuous updates

When interacting with long running functions, realtime feedback is a burden instead of being helpful. See the following example:

```
def slow_function(i):
    print(int(i), list(x for x in range(int(i)) if str(x) == str(x)[::-1] and str(x**2) == str(x**2)[::-1]))
    return
```

```
%%time
slow_function(1e6)
```

Notice that the output is updated even while dragging the mouse on the slider. This is not useful for long running functions due to lagging:

```
from ipywidgets import FloatSlider
interact(slow_function, i=FloatSlider(min=1e5, max=1e7, step=1e5));
```

There are two ways to mitigate this. You can either only execute on demand, or restrict execution to mouse release events.

### `interact_manual`

The `interact_manual` function provides a variant of interaction that allows you to restrict execution so it is only done on demand. A button is added to the interact controls that allows you to trigger an execute event.
``` interact_manual(slow_function,i=FloatSlider(min=1e5, max=1e7, step=1e5)); ``` ### `continuous_update` If you are using slider widgets, you can set the `continuous_update` kwarg to `False`. `continuous_update` is a kwarg of slider widgets that restricts executions to mouse release events. ``` interact(slow_function,i=FloatSlider(min=1e5, max=1e7, step=1e5, continuous_update=False)); ``` ### `interactive_output` `interactive_output` provides additional flexibility: you can control how the UI elements are laid out. Unlike `interact`, `interactive`, and `interact_manual`, `interactive_output` does not generate a user interface for the widgets. This is powerful, because it means you can create a widget, put it in a box, and then pass the widget to `interactive_output`, and have control over the widget and its layout. ``` a = widgets.IntSlider() b = widgets.IntSlider() c = widgets.IntSlider() ui = widgets.HBox([a, b, c]) def f(a, b, c): print((a, b, c)) out = widgets.interactive_output(f, {'a': a, 'b': b, 'c': c}) display(ui, out) ``` ## Arguments that are dependent on each other Arguments that are dependent on each other can be expressed manually using `observe`. See the following example, where one variable is used to describe the bounds of another. For more information, please see the [widget events example notebook](./Widget Events.ipynb). ``` x_widget = FloatSlider(min=0.0, max=10.0, step=0.05) y_widget = FloatSlider(min=0.5, max=10.0, step=0.05, value=5.0) def update_x_range(*args): x_widget.max = 2.0 * y_widget.value y_widget.observe(update_x_range, 'value') def printer(x, y): print(x, y) interact(printer,x=x_widget, y=y_widget); ``` ## Flickering and jumping output On occasion, you may notice interact output flickering and jumping, causing the notebook scroll position to change as the output is updated. 
The interactive control has a layout, so we can set its height to an appropriate value (currently chosen manually) so that it will not change size as it is updated. ``` %matplotlib inline from ipywidgets import interactive import matplotlib.pyplot as plt import numpy as np def f(m, b): plt.figure(2) x = np.linspace(-10, 10, num=1000) plt.plot(x, m * x + b) plt.ylim(-5, 5) plt.show() interactive_plot = interactive(f, m=(-2.0, 2.0), b=(-3, 3, 0.5)) output = interactive_plot.children[-1] output.layout.height = '350px' interactive_plot ```
<a href="https://qworld.net" target="_blank" align="left"><img src="../qworld/images/header.jpg" align="left"></a> $$ \newcommand{\set}[1]{\left\{#1\right\}} \newcommand{\abs}[1]{\left\lvert#1\right\rvert} \newcommand{\norm}[1]{\left\lVert#1\right\rVert} \newcommand{\inner}[2]{\left\langle#1,#2\right\rangle} \newcommand{\bra}[1]{\left\langle#1\right|} \newcommand{\ket}[1]{\left|#1\right\rangle} \newcommand{\braket}[2]{\left\langle#1|#2\right\rangle} \newcommand{\ketbra}[2]{\left|#1\right\rangle\left\langle#2\right|} \newcommand{\angleset}[1]{\left\langle#1\right\rangle} \newcommand{\expected}[1]{\left\langle#1\right\rangle} \newcommand{\dv}[2]{\frac{d#1}{d#2}} \newcommand{\real}[0]{\mathfrak{Re}} $$ # Quantum Entanglement _prepared by Israel Gelover_ _Quantum Entanglement_ is a property that quantum systems have and that is fundamental in the quantum world. It is a complex property to understand and explain and is the subject of many discussions about its interpretation since we do not have a specific theoretical model that allows us to model this type of interaction. One way of looking at it is as the connection between the microscopic world and the macroscopic world, and it is a possible solution or at least one argument that is given for the measurement problem in quantum mechanics. Historically, quantum entanglement began to be seen as something "weird", after an article in which two classical, apparently natural properties of physical systems are addressed, and what is demonstrated in that article is that these properties (apparently natural) are incompatible with quantum mechanics, reaching an apparent paradox, which is currently known as the _Einstein-Podolsky-Rosen Paradox_. Later John Bell devised an experiment designed to agree with Einstein, Podolsky, and Rosen. In this work, Bell derived an inequality that could be implemented experimentally in order to show that quantum mechanics was an incomplete theory. 
However, experimentally the opposite was shown: quantum mechanics is correct, and what was wrong were the assumptions used in the EPR paradox. In this section we are going to discuss this topic in that historical order: first we will address the EPR paradox, then Bell's inequality, and finally we will give a definition of quantum entanglement.

# Einstein-Podolsky-Rosen Paradox

In order to address the EPR paradox, we are going to introduce a simplified version of two concepts that are key in the construction of this paradox, since they were taken as valid properties for any physical system.

### <a name="definition_4_1">Definition 4.1</a>

- **Locality**: Given two systems, the result of any measurement made on one of them cannot influence the result of a measurement made on the other if said measurements are causally disconnected.
- **Realism**: If it is possible to measure an attribute of a system without disturbing it, then it is said that the value obtained from this measurement has a physical reality independent of our observation.

The concept of locality is basic in the development of Einstein's _Theory of Relativity_. Essentially what it asserts is that if you have two systems and a measurement is made on the first and another measurement on the second, and they are causally disconnected - for example, the systems are separated by a great distance and the measurements are made at almost the same time - then the results of such measurements do not influence each other.

On the other hand, let us bear in mind that the concept of realism is defined from the point of view of these authors, who did not like quantum mechanics very much and for whom this concept of realism was something perceived as natural in their time. In other words, if it is possible to measure some attribute of a system without disturbing it, then the result obtained from that measurement is a "real" value that the system has, regardless of whether it has been measured.
### <a name="remark_4_2">Remark 4.2</a>

In order to address the EPR paradox as closely as possible to how it was originally expressed, several physical concepts have to be used.

<img src="../img/epr.jpg" width="200"/>

An emission source of particles S is considered, which emits a pair of particles of spin $\frac{1}{2}$ in the following state

\begin{equation*}
\ket{\psi^-} = \frac{1}{\sqrt{2}}(\ket{01} - \ket{10})
\end{equation*}

where the first qubit corresponds to the particle on the left and the second qubit corresponds to the particle on the right.

Usually, when we talk about this type of _Quantum Information_ or quantum entanglement problem, that is, when we have two subsystems $\mathcal{A}$ and $\mathcal{B}$, it is common to refer to them as the characters _Alice_ and _Bob_ instead of $\mathcal{A}$ and $\mathcal{B}$. In this case, one of the particles emitted by source $S$ travels towards Alice and the other towards Bob.

With that said, suppose that Alice measures the observable $\hat{\sigma_z}$ and obtains the positive eigenvalue, that is, $\sigma_z^{(A)} = +1$; then let's see what happens to the system immediately after this measurement. We know from the postulates of quantum mechanics that immediately after the measurement, the state $\ket{\psi^-}$ is projected onto the subspace associated with the measured eigenvalue. In this way, the fact that Alice has measured $\hat{\sigma_z}$ to be in the eigenvalue $+1$ means that we are going to project the first qubit onto the eigenvector associated with the eigenvalue $+1$, that is, we are going to project onto the qubit $\ket{0}$ and renormalize. In other words, Alice found that the first qubit was in the state $\ket{0}$, that is

\begin{equation*}
\sigma_z^{(A)} = +1 \implies \ket{\psi^-} \longrightarrow \ket{01}
\end{equation*}

At this point the state $\ket{\psi^-}$ has already collapsed to the state $\ket{01}$, so if Bob now measures $\hat{\sigma_z}$ he will get the eigenvalue $-1$.
That is, Bob will find the system in the state $\ket{1}$, that is

\begin{equation*}
\sigma_z^{(B)} = -1 \implies \ket{01} \longrightarrow \ket{1}
\end{equation*}

Note that this mental exercise can be replicated in a completely classical way. Consider the following example: we send Alice and Bob a white and a black ball, but they do not know a priori which one they got. The moment Alice measures her ball and finds that she has the black ball, it is obvious that Bob got the white ball. It is this point that Einstein, Podolsky and Rosen first note.

On the other hand, we are now going to construct the two eigenvectors of $\hat{\sigma_x}$. Analogously to the eigenvectors of $\hat{\sigma_z}$, let's call the one associated with the positive eigenvalue $0$ and the one associated with the negative eigenvalue $1$, but add the subscript $x$. That is,

\begin{equation*}
\begin{split}
\ket{0_x} &= \frac{1}{\sqrt{2}}(\ket{0} + \ket{1}) \enspace (\text{eigenvalue } +1) \\
\ket{1_x} &= \frac{1}{\sqrt{2}}(\ket{0} - \ket{1}) \enspace (\text{eigenvalue } -1)
\end{split}
\end{equation*}

are the eigenvectors of $\hat{\sigma_x}$. So if we want to rewrite the state $\ket{\psi^-}$ in terms of the eigenvectors of $\hat{\sigma_x}$ we will have, up to an irrelevant global phase,

\begin{equation*}
\ket{\psi^-} = \frac{1}{\sqrt{2}}(\ket{0_x1_x} - \ket{1_x0_x})
\end{equation*}

Now if Alice measures $\hat{\sigma_x}$ and gets the eigenvalue $+1$, then

\begin{equation*}
\sigma_x^{(A)} = +1 \implies \ket{\psi^-} \longrightarrow \ket{0_x1_x}
\end{equation*}

Analogously to the previous case, if Bob now measures $\hat{\sigma_x}$ he will get $\sigma_x^{(B)} = -1$ and therefore

\begin{equation*}
\sigma_x^{(B)} = -1 \implies \ket{0_x1_x} \longrightarrow \ket{1_x}
\end{equation*}

Note that as in the previous case, these results are still reproducible in classical physics, with the exception that in this case we would have balls of colors other than black and white, say red and green.
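The basis-change claim above can be checked with a quick numerical sketch (states are represented as amplitude vectors; the two expressions for the singlet coincide up to a global phase, i.e. an overall sign, which has no physical meaning):

```python
import numpy as np

# Computational (sigma_z) basis
ket0 = np.array([1.0, 0.0])
ket1 = np.array([0.0, 1.0])

# sigma_x eigenvectors
ket0x = (ket0 + ket1) / np.sqrt(2)
ket1x = (ket0 - ket1) / np.sqrt(2)

# Singlet state written in the z basis and in the x basis
psi_z = (np.kron(ket0, ket1) - np.kron(ket1, ket0)) / np.sqrt(2)
psi_x = (np.kron(ket0x, ket1x) - np.kron(ket1x, ket0x)) / np.sqrt(2)

# Overlap of magnitude 1 means the two vectors describe the same physical state
overlap = abs(np.dot(psi_z, psi_x))
print(np.isclose(overlap, 1.0))  # → True
```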
The paradox lies in the fact that Alice's measurement determines Bob's result without disturbing him, since by the principle of **Locality**, Alice's measurement should not disturb Bob's result in his measurement and vice versa.

This means that if Alice measures $\hat{\sigma_x}$ then she determines her own result and what Bob would get when measuring the same operator $\hat{\sigma_x}$, that is, she determines $\sigma_x^{(A)}$ and $\sigma_x^{(B)}$. On the other hand, if Bob measures $\hat{\sigma_z}$ in a causally disconnected way, he also determines his and Alice's results: $\sigma_z^{(A)}$ and $\sigma_z^{(B)}$. In this way, between them they can know $\sigma_x^{(A)}$, $\sigma_x^{(B)}$, $\sigma_z^{(A)}$ and $\sigma_z^{(B)}$, which are "real" values in the sense of **Realism**, that is, they have a physical interpretation. But quantum mechanics prohibits this from happening, since these observables do not commute, that is, $[\hat{\sigma_x}, \hat{\sigma_z}] \neq 0 \implies$ they cannot have _simultaneous reality_, which is a contradiction.

Let us now see why this is a contradiction. Suppose we have the state $\ket{\psi} = \frac{1}{\sqrt{2}}(\ket{0} + \ket{1})$ and when measuring $\hat{\sigma_z}$ we obtain the eigenvalue $+1$, that is, we find that the state is in the qubit $\ket{0}$; then

\begin{equation*}
\sigma_z = +1 \implies \ket{\psi} \longrightarrow \ket{0}
\end{equation*}

But if we express $\ket{0}$ and $\ket{1}$ in terms of the basis of $\hat{\sigma_x}$ we have that

\begin{equation*}
\ket{0} = \frac{1}{\sqrt{2}}(\ket{0_x} + \ket{1_x}) \enspace \ket{1} = \frac{1}{\sqrt{2}}(\ket{0_x} - \ket{1_x})
\end{equation*}

therefore

\begin{equation*}
\sigma_z = +1 \implies \ket{\psi} \longrightarrow \ket{0} = \frac{1}{\sqrt{2}}(\ket{0_x} + \ket{1_x})
\end{equation*}

From this expression, it is clear that if we now measure $\hat{\sigma_x}$ we can obtain $\sigma_x = +1$ with probability $\frac{1}{2}$ or $\sigma_x = -1$ with probability $\frac{1}{2}$.
On the other hand, if we also express the original state $\ket{\psi}$ in terms of the same basis of $\hat{\sigma_x}$ we have \begin{equation*} \begin{split} \ket{\psi} &= \frac{1}{\sqrt{2}}(\ket{0} + \ket{1}) \\ &= \frac{1}{\sqrt{2}}(\frac{1}{\sqrt{2}}(\ket{0_x} + \ket{1_x}) + \frac{1}{\sqrt{2}}(\ket{0_x} - \ket{1_x})) \\ &= \frac{1}{2}(\ket{0_x} + \ket{1_x} + \ket{0_x} - \ket{1_x}) = \ket{0_x} \end{split} \end{equation*} That is, if we had measured $\hat{\sigma_x}$ first, we would have obtained $\sigma_x = +1$ with a probability of $100\%$. The problem lies precisely in this fact: when we carry out one measurement after another and the operators we are measuring do not commute, the second measurement no longer necessarily depends on the original state, but on the state after it has been projected onto the subspace associated with the first measurement. In this specific case, if we had measured $\hat{\sigma_x}$ first, we would have found that the original state had $\sigma_x = +1$ with complete certainty, but since we decided to measure $\hat{\sigma_z}$, when measuring $\hat{\sigma_x}$ we can obtain either of the values $+1$ or $-1$ with a $50\%$ probability. This is the reason why the operators $\hat{\sigma_x}$ and $\hat{\sigma_z}$ cannot have a simultaneous reality, that is, quantum mechanics prevents us from simultaneously knowing the values of $\sigma_x$ and $\sigma_z$ of a particle, because when measuring one, the measurement of the other is no longer valid, and vice versa. Therefore, the conclusion drawn from the EPR article is that assuming **Locality, Realism and Quantum Mechanics** at the same time leads to a contradiction. EPR's argument was that since locality and realism are natural properties of any physical system, what must be wrong is quantum mechanics.
What was later concluded is that the concept of locality had to be redefined to make it compatible with quantum mechanics, and the concept of realism ended up being set aside thanks to Bell's inequalities. We must emphasize that the essence of what we have just seen in the EPR paradox arises from the state we chose, $\ket{\psi^-} = \frac{1}{\sqrt{2}}(\ket{01} - \ket{10})$, and the peculiarity of this state is that it is an entangled state. The crux of quantum entanglement is that there are states that cannot be factored as a tensor product of two vectors; instead, the system is in a joint state that cannot be separated into its subsystems.
github_jupyter
<a href="https://githubtocolab.com/giswqs/geemap/blob/master/examples/notebooks/49_colorbar.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open in Colab"/></a> Uncomment the following line to install [geemap](https://geemap.org) if needed. ``` # !pip install geemap ``` # How to add a colorbar to the map ## For ipyleaflet maps ### Continuous colorbar ``` import ee import geemap # geemap.update_package() Map = geemap.Map() dem = ee.Image('USGS/SRTMGL1_003') vis_params = { 'min': 0, 'max': 4000, 'palette': ['006633', 'E5FFCC', '662A00', 'D8D8D8', 'F5F5F5'], } Map.addLayer(dem, vis_params, 'SRTM DEM') colors = vis_params['palette'] vmin = vis_params['min'] vmax = vis_params['max'] Map.add_colorbar_branca(colors=colors, vmin=vmin, vmax=vmax, layer_name="SRTM DEM") # nlcd_2016 = ee.Image('USGS/NLCD/NLCD2016').select('landcover') # Map.addLayer(nlcd_2016, {}, "NLCD") # Map.add_legend(legend_title="NLCD", builtin_legend="NLCD", layer_name="NLCD") Map ``` ### Categorical colorbar ``` Map = geemap.Map() dem = ee.Image('USGS/SRTMGL1_003') vis_params = { 'min': 0, 'max': 4000, 'palette': ['006633', 'E5FFCC', '662A00', 'D8D8D8', 'F5F5F5'], } Map.addLayer(dem, vis_params, 'SRTM DEM') colors = vis_params['palette'] vmin = vis_params['min'] vmax = vis_params['max'] Map.add_colorbar_branca( colors=colors, vmin=vmin, vmax=vmax, categorical=True, step=4, layer_name="SRTM DEM" ) Map ``` ## For folium maps ### Continuous colorbar ``` import ee import geemap.foliumap as geemap Map = geemap.Map() dem = ee.Image('USGS/SRTMGL1_003') vis_params = { 'min': 0, 'max': 4000, 'palette': ['006633', 'E5FFCC', '662A00', 'D8D8D8', 'F5F5F5'], } Map.addLayer(dem, vis_params, 'SRTM DEM') colors = vis_params['palette'] vmin = vis_params['min'] vmax = vis_params['max'] Map.add_colorbar(colors=colors, vmin=vmin, vmax=vmax) Map.addLayerControl() Map ``` ### Categorical colorbar ``` Map = geemap.Map() dem = ee.Image('USGS/SRTMGL1_003') vis_params = { 'min': 
0, 'max': 4000, 'palette': ['006633', 'E5FFCC', '662A00', 'D8D8D8', 'F5F5F5'], } Map.addLayer(dem, vis_params, 'SRTM DEM') colors = vis_params['palette'] vmin = vis_params['min'] vmax = vis_params['max'] Map.add_colorbar(colors=colors, vmin=vmin, vmax=vmax, categorical=True, step=4) Map.addLayerControl() Map ``` ### Draggable legend ``` Map = geemap.Map() legend_dict = { '11 Open Water': '466b9f', '12 Perennial Ice/Snow': 'd1def8', '21 Developed, Open Space': 'dec5c5', '22 Developed, Low Intensity': 'd99282', '23 Developed, Medium Intensity': 'eb0000', '24 Developed High Intensity': 'ab0000', '31 Barren Land (Rock/Sand/Clay)': 'b3ac9f', '41 Deciduous Forest': '68ab5f', '42 Evergreen Forest': '1c5f2c', '43 Mixed Forest': 'b5c58f', '51 Dwarf Scrub': 'af963c', '52 Shrub/Scrub': 'ccb879', '71 Grassland/Herbaceous': 'dfdfc2', '72 Sedge/Herbaceous': 'd1d182', '73 Lichens': 'a3cc51', '74 Moss': '82ba9e', '81 Pasture/Hay': 'dcd939', '82 Cultivated Crops': 'ab6c28', '90 Woody Wetlands': 'b8d9eb', '95 Emergent Herbaceous Wetlands': '6c9fb8', } landcover = ee.Image('USGS/NLCD/NLCD2016').select('landcover') Map.addLayer(landcover, {}, 'NLCD Land Cover') Map.add_legend(title="NLCD Land Cover Classification", legend_dict=legend_dict) Map.addLayerControl() Map ```
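As an aside, the colors in `legend_dict` above are bare six-digit hex strings without a leading `#`. Decoding one into an RGB triple needs no extra libraries; a minimal helper (illustrative only, not part of geemap's API) could look like this:

```python
def hex_to_rgb(hex_color):
    """Decode a six-digit hex string such as '466b9f' into an (r, g, b) tuple."""
    return tuple(int(hex_color[i:i + 2], 16) for i in (0, 2, 4))

# '11 Open Water' is rendered in a muted blue
print(hex_to_rgb('466b9f'))  # (70, 107, 159)
```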
<a href="https://colab.research.google.com/github/Sujangyawali/Fraud_Detection/blob/master/pyspark_for_classification.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> ``` ! pip install pyspark ! pip install -q kaggle ! mkdir ~/.kaggle ! cp kaggle.json ~/.kaggle/ ! chmod 600 ~/.kaggle/kaggle.json !pip install --upgrade --force-reinstall --no-deps kaggle ! kaggle datasets list ! kaggle competitions download -c titanic ! unzip /content/titanic.zip import numpy as np import pandas as pd import matplotlib.pyplot as plt import seaborn as sns import sklearn import random import os from pyspark.sql import SparkSession from pyspark.ml import Pipeline from pyspark.sql import SQLContext from pyspark.sql.functions import mean,col,split, col, regexp_extract, when, lit import pyspark.sql.functions as F from pyspark.ml.feature import StringIndexer, VectorAssembler from pyspark.ml.evaluation import MulticlassClassificationEvaluator from pyspark.ml.feature import QuantileDiscretizer from pyspark import SparkContext from pyspark.sql import SparkSession #entry point for data frame and SQL funtionality import pandas as pd import numpy as np import matplotlib.pyplot as plt import seaborn as sns spark=SparkSession.builder.appName('Pyspark For classification').getOrCreate() data=spark.read.csv('/content/train.csv',header=True,inferSchema=True) #"header=True, if not set to true treat header as data record infershema :automatically guess datatype of column " data.limit(3).toPandas() data.take(3) data.printSchema() #conversion form spark dataframe to pandas dataframe pandas_data=data.toPandas() pandas_data.head() f,ax=plt.subplots(figsize=(10,8)) sns.distplot(pandas_data['Age']) ax.set_title('Distribution of age') #checking null values in spark from pyspark.sql.functions import isnan,when,count,col ,isnull,column data.columns data.select(['Age','Sex']).show(5) from pyspark.sql.functions import isnan, when, count, col 
data.select([count(when(isnan(c) | col(c).isNull(), c)).alias(c) for c in data.columns]).show() data = data.drop('Cabin') nam='Braund, Mr. Owen Harris' nam.split(',')[1].split('.')[0].strip() data=data.withColumn('Initial',regexp_extract(col('Name'),"([A-Za-z]+)\.",1)) data.limit(5).toPandas() data.select('Initial').distinct().show() data = data.replace(['Mlle','Mme', 'Ms', 'Dr','Major','Lady','Countess','Jonkheer','Col','Rev','Capt','Sir','Don'], ['Miss','Miss','Miss','Mr','Mr', 'Mrs', 'Mrs', 'Other', 'Other','Other','Mr','Mr','Mr']) data.select('Initial').distinct().show() data.groupBy('Initial').count().show() ``` Imputation of missing values ``` data.groupBy('Initial').avg('Age').collect() data = data.withColumn("Age",when((data["Initial"] == "Miss") & (data["Age"].isNull()), 22).otherwise(data["Age"])) data = data.withColumn("Age",when((data["Initial"] == "Other") & (data["Age"].isNull()), 46).otherwise(data["Age"])) data = data.withColumn("Age",when((data["Initial"] == "Master") & (data["Age"].isNull()), 5).otherwise( data["Age"])) data = data.withColumn("Age",when((data["Initial"] == "Mr") & (data["Age"].isNull()), 33).otherwise(data["Age"])) data = data.withColumn("Age",when((data["Initial"] == "Mrs") & (data["Age"].isNull()), 36).otherwise(data["Age"])) data.groupBy('Embarked').count().show() data=data.na.fill({'Embarked':"S"}) ``` Feature Engineering ``` data=data.withColumn('Family_Size',col('Sibsp')+col('Parch')) data=data.withColumn('Alone',lit(0)) data = data.withColumn("Alone",when(data["Family_Size"] == 0, 1).otherwise(data["Alone"])) indexers = [StringIndexer(inputCol=column, outputCol=column+"_index").fit(data) for column in ["Sex","Embarked","Initial"]] pipeline = Pipeline(stages=indexers) data = pipeline.fit(data).transform(data) data.limit(10).toPandas() # dropping features that are not needed in modeling data = data.drop("PassengerId","Name","Ticket","Cabin","Embarked","Sex","Initial") # Before modelling in PySpark, we need to put all features into a vector 
using PySpark's VectorAssembler feature = VectorAssembler(inputCols = data.columns[1:],outputCol="features") feature_vector= feature.transform(data) feature_vector.limit(3).toPandas() # Select features column for training and 'Survived' as label to predict titanic_df = feature_vector.select(['features','Survived']) # Split the dataset into train_df and test_df train_df,test_df = titanic_df.randomSplit([0.75,0.25]) from pyspark.ml.classification import LogisticRegression from pyspark.ml.tuning import ParamGridBuilder,TrainValidationSplit from pyspark.ml.evaluation import BinaryClassificationEvaluator # DEFINE ALGORITHM lr = LogisticRegression(labelCol="Survived") # DEFINE GRID PARAMETERS paramGrid = ParamGridBuilder().addGrid(lr.regParam, (0.01, 0.1))\ .addGrid(lr.maxIter, (5, 10))\ .addGrid(lr.tol, (1e-4, 1e-5))\ .addGrid(lr.elasticNetParam, (0.25,0.75))\ .build() # DEFINE TRAIN/VALIDATION SPLIT WITH PARAMETERS tvs = TrainValidationSplit( estimator=lr ,estimatorParamMaps=paramGrid ,evaluator=MulticlassClassificationEvaluator(labelCol='Survived') ,trainRatio=0.8) model = tvs.fit(train_df) model_predictions= model.transform(test_df) print('Accuracy: ', MulticlassClassificationEvaluator(labelCol='Survived',metricName='accuracy').evaluate(model_predictions)) print('Precision: ',MulticlassClassificationEvaluator(labelCol='Survived',metricName='weightedPrecision').evaluate(model_predictions)) ```
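The title extraction earlier in this notebook leans on Spark's `regexp_extract` with the pattern `([A-Za-z]+)\.`. The same logic can be checked in plain Python with the standard `re` module (a small illustrative helper, not from the notebook itself):

```python
import re

def extract_initial(name):
    # Mirror of regexp_extract(col('Name'), "([A-Za-z]+)\.", 1):
    # capture the first alphabetic run immediately followed by a period.
    m = re.search(r"([A-Za-z]+)\.", name)
    return m.group(1) if m else ""

print(extract_initial('Braund, Mr. Owen Harris'))   # Mr
print(extract_initial('Heikkinen, Miss. Laina'))    # Miss
```

Note that "Braund" is skipped because it is followed by a comma, not a period, so the first match is the title itself.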
``` from pyspark.sql import SparkSession from pyspark.ml import Pipeline from pyspark.sql.functions import mean,col,split, col, regexp_extract, when, lit from pyspark.ml.feature import StringIndexer from pyspark.ml.feature import VectorAssembler from pyspark.ml.evaluation import MulticlassClassificationEvaluator from pyspark.ml.feature import QuantileDiscretizer from pyspark.ml.classification import LogisticRegression from pyspark.ml.classification import DecisionTreeClassifier from pyspark.ml.classification import RandomForestClassifier from pyspark.ml.classification import GBTClassifier from pyspark.ml.clustering import KMeans from pyspark.ml.evaluation import ClusteringEvaluator from pyspark.ml.regression import LinearRegression import random import numpy as np #set seed random.seed(1234) np.random.seed(1234) spark = SparkSession.builder.appName("PySparkML").getOrCreate() spark ``` ## №1 Linear regression Load the data for linear regression: <a href="https://github.com/AlexKbit/stepik-ds-course/raw/master/Week5/SparkML/spark-tasks/linear_regression.parquet">linear_regression.parquet</a> ``` lr_df = # Your code lr_df.show(5) ``` Create a linear regression estimator with the following parameters: maxIter=20 regParam=0.5 elasticNetParam=0.75<br> <a href="https://spark.apache.org/docs/latest/ml-classification-regression.html#linear-regression">LinearRegression</a> ``` lr = # Your code ``` Train it on the loaded data and save the result to a variable. ``` lrModel = # Your code ``` Compute the following metrics of the resulting model, rootMeanSquaredError (RMSE) and r2, and round them to 3 decimal places. 
``` # Your code ``` ## №2 Clustering (K-Means) Load the data from <a href="https://github.com/AlexKbit/stepik-ds-course/raw/master/Week5/SparkML/spark-tasks/wine.parquet">wine.parquet</a> ``` wine_df = # Your code wine_df.show(5) ``` Use <a href="https://spark.apache.org/docs/latest/ml-features.html#vectorassembler">VectorAssembler</a> to build the feature vector (use every column except Customer_Segment). ``` feature = # Your code ``` Create a KMeans estimator with the parameters K=3 and Seed=1.<br> Train the model and apply it to the same vector. Documentation: <a href="https://spark.apache.org/docs/latest/ml-clustering.html#k-means">KMeans</a> ``` # Your code ``` Compute the silhouette with squared Euclidean distance for the wine data (round to 4 decimal places). <br><a href="https://spark.apache.org/docs/latest/api/python/pyspark.ml.html#pyspark.ml.evaluation.ClusteringEvaluator">ClusteringEvaluator</a> ``` # Your code ``` ## №3 DecisionTreeClassifier Load the dataset from the file <a href="https://github.com/AlexKbit/stepik-ds-course/raw/master/Week5/SparkML/spark-tasks/iris.parquet">iris.parquet</a> ``` iris_df = # Your code iris_df.show(5) ``` Assemble the features into a vector column named features using <a href="https://spark.apache.org/docs/latest/ml-features.html#vectorassembler">VectorAssembler</a>. <br> Use every feature except species, since it is the target. ``` # Your code ``` Use <a href="https://spark.apache.org/docs/latest/ml-features.html#stringindexer">StringIndexer</a> to create a new feature named type from the categorical target feature species. ``` # Your code iris_df.show(5) ``` Split the data into training and test sets. ``` (training_data, test_data) = iris_df.randomSplit([0.8, 0.2],seed = 42) ``` Create and train a <a href="https://spark.apache.org/docs/latest/ml-classification-regression.html#decision-tree-classifier">DecisionTreeClassifier</a> on the training dataset. 
Apply the resulting model to the test dataset. ``` from pyspark.ml.classification import DecisionTreeClassifier # Your code ``` Use <a href="https://spark.apache.org/docs/latest/api/python/pyspark.ml.html#pyspark.ml.evaluation.MulticlassClassificationEvaluator">MulticlassClassificationEvaluator</a> (remember that the target feature is type) to evaluate the model with the accuracy metric.<br>What is the accuracy of the resulting model? ``` # Your code ``` ## №4 Random forest Create and train a <a href="https://spark.apache.org/docs/latest/ml-classification-regression.html#random-forest-classifier">RandomForestClassifier</a> with 10 trees on the training dataset. Apply the resulting model to the test dataset. ``` from pyspark.ml.classification import RandomForestClassifier # Your code ``` Use <a href="https://spark.apache.org/docs/latest/api/python/pyspark.ml.html#pyspark.ml.evaluation.MulticlassClassificationEvaluator">MulticlassClassificationEvaluator</a> (remember that the target feature is type) to evaluate the model with the accuracy metric.<br>What is the accuracy of the resulting model? ``` # Your code ``` ## №5 Hyperparameter tuning Let's tune the model's hyperparameters. Apply <a href='https://spark.apache.org/docs/latest/ml-tuning.html#train-validation-split'>TrainValidationSplit</a> to optimize the hyperparameters, using the <a href="https://github.com/AlexKbit/stepik-ds-course/raw/master/Week5/SparkML/spark-tasks/iris.parquet">iris.parquet</a> dataset you prepared above, on a <a href="https://spark.apache.org/docs/latest/ml-classification-regression.html#random-forest-classifier">RandomForestClassifier</a> model together with <a href="https://spark.apache.org/docs/latest/api/python/pyspark.ml.html#pyspark.ml.evaluation.MulticlassClassificationEvaluator">MulticlassClassificationEvaluator</a>. 
<br>Your goal is to determine the optimal parameter values from the following ranges: <br>impurity = ["entropy", "gini"] <br>maxDepth = [2, 3, 4, 5] <br>numTrees = [3, 6, 9, 12, 15, 18, 21] ``` from pyspark.ml.tuning import ParamGridBuilder, TrainValidationSplit model = # Your code print('Num Trees: {}'.format(model.bestModel._java_obj.getNumTrees())) print('Max Depth: {}'.format(model.bestModel._java_obj.getMaxDepth())) print('Impurity: {}'.format(model.bestModel._java_obj.getImpurity())) ```
# Tutorial Part 17: Training a Generative Adversarial Network on MNIST In this tutorial, we will train a Generative Adversarial Network (GAN) on the MNIST dataset. This is a large collection of 28x28 pixel images of handwritten digits. We will try to train a network to produce new images of handwritten digits. ## Colab This tutorial and the rest in this sequence are designed to be done in Google colab. If you'd like to open this notebook in colab, you can use the following link. [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/deepchem/deepchem/blob/master/examples/tutorials/17_Training_a_Generative_Adversarial_Network_on_MNIST.ipynb) ## Setup To run DeepChem within Colab, you'll need to run the following cell of installation commands. This will take about 5 minutes to run to completion and install your environment. ``` !curl -Lo conda_installer.py https://raw.githubusercontent.com/deepchem/deepchem/master/scripts/colab_install.py import conda_installer conda_installer.install() !/root/miniconda/bin/conda info -e !pip install --pre deepchem import deepchem deepchem.__version__ ``` To begin, let's import all the libraries we'll need and load the dataset (which comes bundled with Tensorflow). ``` import deepchem as dc import tensorflow as tf from deepchem.models.optimizers import ExponentialDecay from tensorflow.keras.layers import Conv2D, Conv2DTranspose, Dense, Reshape import matplotlib.pyplot as plot import matplotlib.gridspec as gridspec %matplotlib inline mnist = tf.keras.datasets.mnist.load_data(path='mnist.npz') images = mnist[0][0].reshape((-1, 28, 28, 1))/255 dataset = dc.data.NumpyDataset(images) ``` Let's view some of the images to get an idea of what they look like. 
``` def plot_digits(im): plot.figure(figsize=(3, 3)) grid = gridspec.GridSpec(4, 4, wspace=0.05, hspace=0.05) for i, g in enumerate(grid): ax = plot.subplot(g) ax.set_xticks([]) ax.set_yticks([]) ax.imshow(im[i,:,:,0], cmap='gray') plot_digits(images) ``` Now we can create our GAN. Like in the last tutorial, it consists of two parts: 1. The generator takes random noise as its input and produces output that will hopefully resemble the training data. 2. The discriminator takes a set of samples as input (possibly training data, possibly created by the generator), and tries to determine which are which. This time we will use a different style of GAN called a Wasserstein GAN (or WGAN for short). In many cases, they are found to produce better results than conventional GANs. The main difference between the two is in the discriminator (often called a "critic" in this context). Instead of outputting the probability of a sample being real training data, it tries to learn how to measure the distance between the training distribution and generated distribution. That measure can then be directly used as a loss function for training the generator. We use a very simple model. The generator uses a dense layer to transform the input noise into a 7x7 image with eight channels. That is followed by two convolutional layers that upsample it first to 14x14, and finally to 28x28. The discriminator does roughly the same thing in reverse. Two convolutional layers downsample the image first to 14x14, then to 7x7. A final dense layer produces a single number as output. In the last tutorial we used a sigmoid activation to produce a number between 0 and 1 that could be interpreted as a probability. Since this is a WGAN, we instead use a softplus activation. It produces an unbounded positive number that can be interpreted as a distance. 
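The contrast between the two output activations is easy to see numerically, using the standard definitions sigmoid(x) = 1/(1 + e^-x) and softplus(x) = ln(1 + e^x) (plain Python, independent of the model code):

```python
import math

def sigmoid(x):
    # Bounded in (0, 1): interpretable as a probability
    return 1.0 / (1.0 + math.exp(-x))

def softplus(x):
    # Unbounded above, always positive: interpretable as a distance
    return math.log1p(math.exp(x))

for x in (0.0, 4.0, 10.0):
    print(x, sigmoid(x), softplus(x))
```

As the input grows, the sigmoid saturates near 1 while the softplus keeps growing roughly linearly, which is what makes it usable as a distance-like critic output.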
``` class DigitGAN(dc.models.WGAN): def get_noise_input_shape(self): return (10,) def get_data_input_shapes(self): return [(28, 28, 1)] def create_generator(self): return tf.keras.Sequential([ Dense(7*7*8, activation=tf.nn.relu), Reshape((7, 7, 8)), Conv2DTranspose(filters=16, kernel_size=5, strides=2, activation=tf.nn.relu, padding='same'), Conv2DTranspose(filters=1, kernel_size=5, strides=2, activation=tf.sigmoid, padding='same') ]) def create_discriminator(self): return tf.keras.Sequential([ Conv2D(filters=32, kernel_size=5, strides=2, activation=tf.nn.leaky_relu, padding='same'), Conv2D(filters=64, kernel_size=5, strides=2, activation=tf.nn.leaky_relu, padding='same'), Dense(1, activation=tf.math.softplus) ]) gan = DigitGAN(learning_rate=ExponentialDecay(0.001, 0.9, 5000)) ``` Now to train it. As in the last tutorial, we write a generator to produce data. This time the data is coming from a dataset, which we loop over 100 times. One other difference is worth noting. When training a conventional GAN, it is important to keep the generator and discriminator in balance throughout training. If either one gets too far ahead, it becomes very difficult for the other one to learn. WGANs do not have this problem. In fact, the better the discriminator gets, the cleaner a signal it provides and the easier it becomes for the generator to learn. We therefore specify `generator_steps=0.2` so that it will only take one step of training the generator for every five steps of training the discriminator. This tends to produce faster training and better results. ``` def iterbatches(epochs): for i in range(epochs): for batch in dataset.iterbatches(batch_size=gan.batch_size): yield {gan.data_inputs[0]: batch[0]} gan.fit_gan(iterbatches(100), generator_steps=0.2, checkpoint_interval=5000) ``` Let's generate some data and see how the results look. ``` plot_digits(gan.predict_gan_generator(batch_size=16)) ``` Not too bad. 
Many of the generated images look plausibly like handwritten digits. A larger model trained for a longer time can do much better, of course. # Congratulations! Time to join the Community! Congratulations on completing this tutorial notebook! If you enjoyed working through the tutorial, and want to continue working with DeepChem, we encourage you to finish the rest of the tutorials in this series. You can also help the DeepChem community in the following ways: ## Star DeepChem on [GitHub](https://github.com/deepchem/deepchem) This helps build awareness of the DeepChem project and the tools for open source drug discovery that we're trying to build. ## Join the DeepChem Gitter The DeepChem [Gitter](https://gitter.im/deepchem/Lobby) hosts a number of scientists, developers, and enthusiasts interested in deep learning for the life sciences. Join the conversation!
``` #export from fastai2.basics import * from nbdev.showdoc import * #default_exp callback.schedule ``` # Hyperparam schedule > Callback and helper functions to schedule any hyper-parameter ``` from fastai2.test_utils import * ``` ## Annealing ``` #export class _Annealer: def __init__(self, f, start, end): store_attr(self, 'f,start,end') def __call__(self, pos): return self.f(self.start, self.end, pos) #export def annealer(f): "Decorator to make `f` return itself partially applied." @functools.wraps(f) def _inner(start, end): return _Annealer(f, start, end) return _inner ``` This is the decorator we will use for all of our scheduling functions, as it transforms a function taking `(start, end, pos)` into something taking `(start, end)` and returning a function that depends on `pos`. ``` #export #TODO Jeremy, make this pickle #@annealer #def SchedLin(start, end, pos): return start + pos*(end-start) #@annealer #def SchedCos(start, end, pos): return start + (1 + math.cos(math.pi*(1-pos))) * (end-start) / 2 #@annealer #def SchedNo (start, end, pos): return start #@annealer #def SchedExp(start, end, pos): return start * (end/start) ** pos # #SchedLin.__doc__ = "Linear schedule function from `start` to `end`" #SchedCos.__doc__ = "Cosine schedule function from `start` to `end`" #SchedNo .__doc__ = "Constant schedule function with `start` value" #SchedExp.__doc__ = "Exponential schedule function from `start` to `end`" #export def sched_lin(start, end, pos): return start + pos*(end-start) def sched_cos(start, end, pos): return start + (1 + math.cos(math.pi*(1-pos))) * (end-start) / 2 def sched_no (start, end, pos): return start def sched_exp(start, end, pos): return start * (end/start) ** pos def SchedLin(start, end): return _Annealer(sched_lin, start, end) def SchedCos(start, end): return _Annealer(sched_cos, start, end) def SchedNo (start, end): return _Annealer(sched_no, start, end) def SchedExp(start, end): return _Annealer(sched_exp, start, end) SchedLin.__doc__ = "Linear 
schedule function from `start` to `end`" SchedCos.__doc__ = "Cosine schedule function from `start` to `end`" SchedNo .__doc__ = "Constant schedule function with `start` value" SchedExp.__doc__ = "Exponential schedule function from `start` to `end`" #hide tst = pickle.dumps(SchedCos(0, 5)) annealings = "NO LINEAR COS EXP".split() p = torch.linspace(0.,1,100) fns = [SchedNo, SchedLin, SchedCos, SchedExp] #export def SchedPoly(start, end, power): "Polynomial schedule (of `power`) function from `start` to `end`" def _inner(pos): return start + (end - start) * pos ** power return _inner for fn, t in zip(fns, annealings): plt.plot(p, [fn(2, 1e-2)(o) for o in p], label=t) f = SchedPoly(2,1e-2,0.5) plt.plot(p, [f(o) for o in p], label="POLY(0.5)") plt.legend(); show_doc(SchedLin) sched = SchedLin(0, 2) test_eq(L(map(sched, [0., 0.25, 0.5, 0.75, 1.])), [0., 0.5, 1., 1.5, 2.]) show_doc(SchedCos) sched = SchedCos(0, 2) test_close(L(map(sched, [0., 0.25, 0.5, 0.75, 1.])), [0., 0.29289, 1., 1.70711, 2.]) show_doc(SchedNo) sched = SchedNo(0, 2) test_close(L(map(sched, [0., 0.25, 0.5, 0.75, 1.])), [0., 0., 0., 0., 0.]) show_doc(SchedExp) sched = SchedExp(1, 2) test_close(L(map(sched, [0., 0.25, 0.5, 0.75, 1.])), [1., 1.18921, 1.41421, 1.68179, 2.]) show_doc(SchedPoly) sched = SchedPoly(0, 2, 2) test_close(L(map(sched, [0., 0.25, 0.5, 0.75, 1.])), [0., 0.125, 0.5, 1.125, 2.]) p = torch.linspace(0.,1,100) pows = [0.5,1.,2.] for e in pows: f = SchedPoly(2, 0, e) plt.plot(p, [f(o) for o in p], label=f'power {e}') plt.legend(); #export def combine_scheds(pcts, scheds): "Combine `scheds` according to `pcts` in one function" assert sum(pcts) == 1. pcts = tensor([0] + L(pcts)) assert torch.all(pcts >= 0) pcts = torch.cumsum(pcts, 0) def _inner(pos): if pos == 1.: return scheds[-1](1.) 
idx = (pos >= pcts).nonzero().max() actual_pos = (pos-pcts[idx]) / (pcts[idx+1]-pcts[idx]) return scheds[idx](actual_pos.item()) return _inner ``` `pcts` must be a list of positive numbers that add up to 1 and is the same length as `scheds`. The generated function will use `scheds[0]` from 0 to `pcts[0]` then `scheds[1]` from `pcts[0]` to `pcts[0]+pcts[1]` and so forth. ``` p = torch.linspace(0.,1,100) f = combine_scheds([0.3,0.7], [SchedCos(0.3,0.6), SchedCos(0.6,0.2)]) plt.plot(p, [f(o) for o in p]); p = torch.linspace(0.,1,100) f = combine_scheds([0.3,0.2,0.5], [SchedLin(0.,1.), SchedNo(1.,1.), SchedCos(1., 0.)]) plt.plot(p, [f(o) for o in p]); #hide test_close([f(0.), f(0.15), f(0.3), f(0.4), f(0.5), f(0.7), f(1.)], [0., 0.5, 1., 1., 1., 0.65451, 0.]) #export def combined_cos(pct, start, middle, end): "Return a scheduler with cosine annealing from `start`→`middle` & `middle`→`end`" return combine_scheds([pct,1-pct], [SchedCos(start, middle), SchedCos(middle, end)]) ``` This is a useful helper function for the [1cycle policy](https://sgugger.github.io/the-1cycle-policy.html). `pct` is used for the `start` to `middle` part, `1-pct` for the `middle` to `end`. Handles floats or collection of floats. For example: ``` f = combined_cos(0.25,0.5,1.,0.) 
plt.plot(p, [f(o) for o in p]); #hide test_close([f(0.), f(0.1), f(0.25), f(0.5), f(1.)], [0.5, 0.67275, 1., 0.75, 0.]) f = combined_cos(0.25, np.array([0.25,0.5]), np.array([0.5,1.]), np.array([0.,0.])) test_close([f(0.), f(0.1), f(0.25), f(0.5), f(1.)], [[0.25,0.5], [0.33638,0.67275], [0.5,1.], [0.375,0.75], [0.,0.]]) ``` ## ParamScheduler - ``` #export @docs class ParamScheduler(Callback): "Schedule hyper-parameters according to `scheds`" run_after,run_valid = TrainEvalCallback,False def __init__(self, scheds): self.scheds = scheds def begin_fit(self): self.hps = {p:[] for p in self.scheds.keys()} def begin_batch(self): self._update_val(self.pct_train) def _update_val(self, pct): for n,f in self.scheds.items(): self.opt.set_hyper(n, f(pct)) def after_batch(self): for p in self.scheds.keys(): self.hps[p].append(self.opt.hypers[-1][p]) def after_fit(self): if hasattr(self.learn, 'recorder'): self.recorder.hps = self.hps _docs = {"begin_fit": "Initialize container for hyper-parameters", "begin_batch": "Set the proper hyper-parameters in the optimizer", "after_batch": "Record hyper-parameters of this batch", "after_fit": "Save the hyper-parameters in the recorder if there is one"} ``` `scheds` is a dictionary with one key for each hyper-parameter you want to schedule, with either a scheduler or a list of schedulers as values (in the second case, the list must have the same length as the number of parameter groups of the optimizer). 
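Stripped of the tensor plumbing, `combined_cos` simply stitches two half-cosine segments together. A dependency-free sketch (plain Python, mirroring the `sched_cos` definition earlier in this notebook) reproduces the values checked in the tests above:

```python
import math

def sched_cos(start, end, pos):
    # Half-cosine interpolation from start (pos=0) to end (pos=1)
    return start + (1 + math.cos(math.pi * (1 - pos))) * (end - start) / 2

def combined_cos(pct, start, middle, end):
    # Anneal start -> middle over the first `pct` of training, then middle -> end
    def _inner(pos):
        if pos < pct:
            return sched_cos(start, middle, pos / pct)
        return sched_cos(middle, end, (pos - pct) / (1 - pct))
    return _inner

f = combined_cos(0.25, 0.5, 1.0, 0.0)
print(f(0.0), f(0.25), f(0.5), f(1.0))  # approximately 0.5, 1.0, 0.75, 0.0
```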
``` learn = synth_learner() sched = {'lr': SchedLin(1e-3, 1e-2)} learn.fit(1, cbs=ParamScheduler(sched)) n = len(learn.dls.train) test_close(learn.recorder.hps['lr'], [1e-3 + (1e-2-1e-3) * i/n for i in range(n)]) #hide #test discriminative lrs def _splitter(m): return [[m.a], [m.b]] learn = synth_learner(splitter=_splitter) sched = {'lr': combined_cos(0.5, np.array([1e-4,1e-3]), np.array([1e-3,1e-2]), np.array([1e-5,1e-4]))} learn.fit(1, cbs=ParamScheduler(sched)) show_doc(ParamScheduler.begin_fit) show_doc(ParamScheduler.begin_batch) show_doc(ParamScheduler.after_batch) show_doc(ParamScheduler.after_fit) #export @patch @log_args(but_as=Learner.fit) def fit_one_cycle(self:Learner, n_epoch, lr_max=None, div=25., div_final=1e5, pct_start=0.25, wd=None, moms=None, cbs=None, reset_opt=False): "Fit `self.model` for `n_epoch` using the 1cycle policy." if self.opt is None: self.create_opt() self.opt.set_hyper('lr', self.lr if lr_max is None else lr_max) lr_max = np.array([h['lr'] for h in self.opt.hypers]) scheds = {'lr': combined_cos(pct_start, lr_max/div, lr_max, lr_max/div_final), 'mom': combined_cos(pct_start, *(self.moms if moms is None else moms))} self.fit(n_epoch, cbs=ParamScheduler(scheds)+L(cbs), reset_opt=reset_opt, wd=wd) ``` The 1cycle policy was introduced by Leslie N. Smith et al. in [Super-Convergence: Very Fast Training of Neural Networks Using Large Learning Rates](https://arxiv.org/abs/1708.07120). It schedules the learning rate with a cosine annealing from `lr_max/div` to `lr_max` then `lr_max/div_final` (pass an array to `lr_max` if you want to use differential learning rates) and the momentum with cosine annealing according to the values in `moms`. The first phase takes `pct_start` of the training. You can optionally pass additional `cbs` and `reset_opt`. 
``` #Integration test: training a few epochs should make the model better learn = synth_learner(lr=1e-2) xb,yb = learn.dls.one_batch() init_loss = learn.loss_func(learn.model(xb), yb) learn.fit_one_cycle(2) xb,yb = learn.dls.one_batch() final_loss = learn.loss_func(learn.model(xb), yb) assert final_loss < init_loss #Scheduler test lrs,moms = learn.recorder.hps['lr'],learn.recorder.hps['mom'] test_close(lrs, [combined_cos(0.25,1e-2/25,1e-2,1e-7)(i/20) for i in range(20)]) test_close(moms, [combined_cos(0.25,0.95,0.85,0.95)(i/20) for i in range(20)]) #export @patch def plot_sched(self:Recorder, keys=None, figsize=None): keys = self.hps.keys() if keys is None else L(keys) rows,cols = (len(keys)+1)//2, min(2, len(keys)) figsize = figsize or (6*cols,4*rows) _, axs = plt.subplots(rows, cols, figsize=figsize) axs = axs.flatten() if len(keys) > 1 else L(axs) for p,ax in zip(keys, axs): ax.plot(self.hps[p]) ax.set_ylabel(p) #hide #test discriminative lrs def _splitter(m): return [[m.a], [m.b]] learn = synth_learner(splitter=_splitter) learn.fit_one_cycle(1, lr_max=slice(1e-3,1e-2)) #n = len(learn.dls.train) #est_close(learn.recorder.hps['lr'], [1e-3 + (1e-2-1e-3) * i/n for i in range(n)]) learn = synth_learner() learn.fit_one_cycle(2) learn.recorder.plot_sched() #export @patch @log_args(but_as=Learner.fit) def fit_flat_cos(self:Learner, n_epoch, lr=None, div_final=1e5, pct_start=0.75, wd=None, cbs=None, reset_opt=False): "Fit `self.model` for `n_epoch` at flat `lr` before a cosine annealing." 
    if self.opt is None: self.create_opt()
    self.opt.set_hyper('lr', self.lr if lr is None else lr)
    lr = np.array([h['lr'] for h in self.opt.hypers])
    scheds = {'lr': combined_cos(pct_start, lr, lr, lr/div_final)}
    self.fit(n_epoch, cbs=ParamScheduler(scheds)+L(cbs), reset_opt=reset_opt, wd=wd)

learn = synth_learner()
learn.fit_flat_cos(2)
learn.recorder.plot_sched()

#export
@patch
@log_args(but_as=Learner.fit)
def fit_sgdr(self:Learner, n_cycles, cycle_len, lr_max=None, cycle_mult=2, cbs=None,
             reset_opt=False, wd=None):
    "Fit `self.model` for `n_cycles` of `cycle_len` using SGDR."
    if self.opt is None: self.create_opt()
    self.opt.set_hyper('lr', self.lr if lr_max is None else lr_max)
    lr_max = np.array([h['lr'] for h in self.opt.hypers])
    n_epoch = cycle_len * (cycle_mult**n_cycles-1)//(cycle_mult-1)
    pcts = [cycle_len * cycle_mult**i / n_epoch for i in range(n_cycles)]
    scheds = [SchedCos(lr_max, 0) for _ in range(n_cycles)]
    scheds = {'lr': combine_scheds(pcts, scheds)}
    self.fit(n_epoch, cbs=ParamScheduler(scheds)+L(cbs), reset_opt=reset_opt, wd=wd)
```

This schedule was introduced by Ilya Loshchilov et al. in [SGDR: Stochastic Gradient Descent with Warm Restarts](https://arxiv.org/abs/1608.03983). It consists of `n_cycles` cosine annealings from `lr_max` (which defaults to the `Learner` lr) to 0, where the `i`-th cycle has length `cycle_len * cycle_mult**i` (the first one is `cycle_len` long, then we multiply the length by `cycle_mult` at each cycle). You can optionally pass additional `cbs` and `reset_opt`.
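As a quick sanity check of the arithmetic above, the cycle lengths and the geometric-series total can be computed standalone. This is an illustrative sketch with a made-up helper name, not part of fastai:

```python
def sgdr_layout(n_cycles, cycle_len, cycle_mult=2):
    # The i-th cycle lasts cycle_len * cycle_mult**i epochs
    lengths = [cycle_len * cycle_mult**i for i in range(n_cycles)]
    # Total epochs: geometric series cycle_len*(cycle_mult**n_cycles - 1)/(cycle_mult - 1)
    n_epoch = sum(lengths)
    # Fraction of training spent in each cycle (the pcts handed to combine_scheds)
    pcts = [l / n_epoch for l in lengths]
    return n_epoch, pcts
```

For example, `sgdr_layout(3, 1)` yields 7 total epochs with cycle lengths 1, 2 and 4, matching the `test_eq(learn.n_epoch, 7)` check in the test cell.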
```
#slow
learn = synth_learner()
with learn.no_logging(): learn.fit_sgdr(3, 1)
test_eq(learn.n_epoch, 7)
iters = [k * len(learn.dls.train) for k in [0,1,3,7]]
for i in range(3):
    n = iters[i+1]-iters[i]
    #The start of a cycle can be mixed with the 0 of the previous cycle with rounding errors, so we test at +1
    test_close(learn.recorder.lrs[iters[i]+1:iters[i+1]], [SchedCos(learn.lr, 0)(k/n) for k in range(1,n)])

learn.recorder.plot_sched()

#export
@patch
@log_args(but_as=Learner.fit)
@delegates(Learner.fit_one_cycle)
def fine_tune(self:Learner, epochs, base_lr=2e-3, freeze_epochs=1, lr_mult=100,
              pct_start=0.3, div=5.0, **kwargs):
    "Fine tune with `freeze` for `freeze_epochs` then with `unfreeze` from `epochs` using discriminative LR"
    self.freeze()
    self.fit_one_cycle(freeze_epochs, slice(base_lr), pct_start=0.99, **kwargs)
    base_lr /= 2
    self.unfreeze()
    self.fit_one_cycle(epochs, slice(base_lr/lr_mult, base_lr), pct_start=pct_start, div=div, **kwargs)

learn.fine_tune(1)
```

## LRFind -

```
#export
@docs
class LRFinder(ParamScheduler):
    "Training with exponentially growing learning rate"
    run_after=Recorder
    def __init__(self, start_lr=1e-7, end_lr=10, num_it=100, stop_div=True):
        if is_listy(start_lr):
            self.scheds = {'lr': [SchedExp(s, e) for (s,e) in zip(start_lr,end_lr)]}
        else: self.scheds = {'lr': SchedExp(start_lr, end_lr)}
        self.num_it,self.stop_div = num_it,stop_div

    def begin_fit(self):
        super().begin_fit()
        self.learn.save('_tmp')
        self.best_loss = float('inf')

    def begin_batch(self): self._update_val(self.train_iter/self.num_it)

    def after_batch(self):
        super().after_batch()
        if self.smooth_loss < self.best_loss: self.best_loss = self.smooth_loss
        if self.smooth_loss > 4*self.best_loss and self.stop_div: raise CancelFitException()
        if self.train_iter >= self.num_it: raise CancelFitException()

    def begin_validate(self): raise CancelValidException()

    def after_fit(self):
        self.learn.opt.zero_grad() #Need to zero the gradients of the model before detaching the optimizer for future fits
        tmp_f = self.path/self.model_dir/'_tmp.pth'
        if tmp_f.exists():
            self.learn.load('_tmp')
            os.remove(tmp_f)

    _docs = {"begin_fit": "Initialize container for hyper-parameters and save the model",
             "begin_batch": "Set the proper hyper-parameters in the optimizer",
             "after_batch": "Record hyper-parameters of this batch and potentially stop training",
             "after_fit": "Save the hyper-parameters in the recorder if there is one and load the original model",
             "begin_validate": "Skip the validation part of training"}

#slow
with tempfile.TemporaryDirectory() as d:
    learn = synth_learner(path=Path(d))
    init_a,init_b = learn.model.a,learn.model.b
    with learn.no_logging(): learn.fit(20, cbs=LRFinder(num_it=100))

    assert len(learn.recorder.lrs) <= 100
    test_eq(len(learn.recorder.lrs), len(learn.recorder.losses))
    #Check stop if diverge
    if len(learn.recorder.lrs) < 100: assert learn.recorder.losses[-1] > 4 * min(learn.recorder.losses)
    #Test schedule
    test_eq(learn.recorder.lrs, [SchedExp(1e-7, 10)(i/100) for i in range_of(learn.recorder.lrs)])
    #No validation data
    test_eq([len(v) for v in learn.recorder.values], [1 for _ in range_of(learn.recorder.values)])
    #Model loaded back properly
    test_eq(learn.model.a, init_a)
    test_eq(learn.model.b, init_b)
    test_eq(learn.opt.state_dict()['state'], [{}, {}])

show_doc(LRFinder.begin_fit)
show_doc(LRFinder.begin_batch)
show_doc(LRFinder.after_batch)
show_doc(LRFinder.begin_validate)

#export
@patch
def plot_lr_find(self:Recorder, skip_end=5):
    "Plot the result of an LR Finder test (won't work if you didn't do `learn.lr_find()` before)"
    lrs    = self.lrs    if skip_end==0 else self.lrs[:-skip_end]
    losses = self.losses if skip_end==0 else self.losses[:-skip_end]
    fig, ax = plt.subplots(1,1)
    ax.plot(lrs, losses)
    ax.set_ylabel("Loss")
    ax.set_xlabel("Learning Rate")
    ax.set_xscale('log')

#export
SuggestedLRs = collections.namedtuple('SuggestedLRs', ['lr_min', 'lr_steep'])

#export
@patch
def lr_find(self:Learner, start_lr=1e-7, end_lr=10, num_it=100, stop_div=True,
            show_plot=True, suggestions=True):
    "Launch a mock training to find a good learning rate, return lr_min, lr_steep if `suggestions` is True"
    n_epoch = num_it//len(self.dls.train) + 1
    cb=LRFinder(start_lr=start_lr, end_lr=end_lr, num_it=num_it, stop_div=stop_div)
    with self.no_logging(): self.fit(n_epoch, cbs=cb)
    if show_plot: self.recorder.plot_lr_find()
    if suggestions:
        lrs,losses = tensor(self.recorder.lrs[num_it//10:-5]),tensor(self.recorder.losses[num_it//10:-5])
        if len(losses) == 0: return
        lr_min = lrs[losses.argmin()].item()
        grads = (losses[1:]-losses[:-1]) / (lrs[1:].log()-lrs[:-1].log())
        lr_steep = lrs[grads.argmin()].item()
        return SuggestedLRs(lr_min/10.,lr_steep)
```

First introduced by Leslie N. Smith in [Cyclical Learning Rates for Training Neural Networks](https://arxiv.org/pdf/1506.01186.pdf), the LR Finder trains the model with exponentially growing learning rates from `start_lr` to `end_lr` for `num_it` iterations and stops in case of divergence (unless `stop_div=False`), then plots the losses vs the learning rates on a log scale. A good value for the learning rate is then either:

- one tenth of the minimum before the divergence
- the point where the slope is the steepest

Those two values are returned by default by the Learning Rate Finder.

```
#slow
with tempfile.TemporaryDirectory() as d:
    learn = synth_learner(path=Path(d))
    lr_min,lr_steep = learn.lr_find()
print(f"Minimum/10: {lr_min:.2e}, steepest point: {lr_steep:.2e}")
```

## Export -

```
#hide
from nbdev.export import notebook2script
notebook2script()
```
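The two suggestion rules can be replicated on plain lists of recorded values. This sketch (a hypothetical helper, not the fastai implementation, which works on tensors and slices off the first tenth and last five points) mirrors the `lr_min` and `lr_steep` logic:

```python
import math

def suggest_lrs(lrs, losses):
    # lr_min rule: one tenth of the lr at the minimum recorded loss
    lr_min = lrs[losses.index(min(losses))] / 10.0
    # lr_steep rule: lr where the loss drops fastest with respect to log(lr)
    grads = [(losses[i+1] - losses[i]) / (math.log(lrs[i+1]) - math.log(lrs[i]))
             for i in range(len(lrs) - 1)]
    lr_steep = lrs[grads.index(min(grads))]
    return lr_min, lr_steep
```

With `lrs=[1e-4, 1e-3, 1e-2, 1e-1]` and `losses=[1.0, 0.5, 0.2, 2.0]`, the minimum loss sits at `1e-2` (so `lr_min` is `1e-3`) and the steepest drop is on the first segment.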
# Healthcare insurance fraud identification using PCA anomaly detection 1. [Background](#background) 1. [Setup](#setup) 1. [Data](#data) 1. [Obtain data](#datasetfiles) 1. [Feature Engineering](#feateng) 1. [Missing values](#missing) 1. [Categorical features](#catfeat) 1. [Gender](#gender) 1. [Age Group](#age) 1. [NLP for Textual features](#nlp) 1. [Diagnosis Descriptions](#diagnosis) 1. [Procedure Descriptions](#procedure) 1. [Split train & test data](#split) 1. [Standardize](#standardize) 1. [PCA](#pca) 1. [Calculate the Mahalanobis distance](#md) 1. [Unsupervised Anomaly Detection](#ad) 1. [Understanding Anomaly](#understandinganomaly) 1. [(Optional) Deploy PCA](#deployendpoint) ## 1. Background <a name="background"></a> Medicare is a federal healthcare program created in 1965 with the passage of the Social Security Amendments to ensure that citizens 65 and older as well as younger persons with certain disabilities have access to quality healthcare. Medicare is administered by the Centers for Medicare and Medicaid Services (CMS). CMS manages Medicare programs by selecting official Medicare administrative contractors (MACs) to process the Medicare claims associated with various parts of Medicare. We propose a solution to apply unsupervised outlier techniques at post-payment stage to detect fraudulent patterns of received insurance claims. Health care insurance fraud is a pressing problem, causing substantial and increasing costs in medical insurance programs. Due to large amounts of claims submitted, review of individual claims becomes a difficult task and encourages the employment of automated pre-payment controls and better post-payment decision support tools to enable subject matter expert analysis. We will demonstrate the unsupervised anomalous outlier techniques on a minimal set of metrics made available in the CMS Medicare inpatient claims from 2008. 
Once more data is available as extracts from different systems - Medicaid Management Information Systems (MMIS), Medicaid Statistical Information Systems (MSIS), Medicaid reference data such as Provider Files, Death Master Files, etc. - there is an opportunity to build a database of metrics to make the fraud detection technique more robust. The method can be used to flag claims for further investigation.

## 2. Setup <a name="setup"></a>

To begin, we'll install the Python libraries we'll need for the remainder of the exercise.

```
# Upgrade numpy to latest version. Should be numpy==1.15.0 or higher to use quantile attribute
import sys
!{sys.executable} -m pip install --upgrade numpy

#If the numpy version prints less than 1.15.0,
#go to the Jupyter notebook menu on the top, click on Kernel and click "Restart and Clear Output". Start from the beginning again.
import numpy as np
print(np.__version__)

!{sys.executable} -m pip install columnize gensim
!{sys.executable} -m pip uninstall seaborn -y
!{sys.executable} -m pip install seaborn
```

Next, we'll import the Python libraries we'll need for the remainder of the exercise.

```
import numpy as np # For matrix operations and numerical processing
import pandas as pd # For munging tabular data
import boto3 #enables Python developers to create, configure, and manage AWS services
from IPython.display import display # For displaying outputs in the notebook
import matplotlib.pyplot as plt #for interactive plots and simple cases of programmatic plot generation
%matplotlib inline
from time import gmtime, strftime # For labeling SageMaker models, endpoints, etc.
import sys #provides access to some variables used or maintained by the interpreter
import os # For manipulating filepath names
import sagemaker #open source library for training and deploying machine-learned models on Amazon SageMaker
import time #provides various time-related functions
import warnings #allows you to handle all warnings with the standard logging
import io #interface to access files and streams
import sagemaker.amazon.common as smac #provides common functions used for training and deploying machine-learned models on Amazon SageMaker
warnings.filterwarnings(action = 'ignore') #warnings filter controls whether warnings are ignored, displayed

from sklearn.model_selection import train_test_split #Quick utility to split data into train and test set
import gensim #topic modelling library for Python that provides access to Word2Vec
import columnize #format a simple (i.e. not nested) list into aligned columns.
from gensim.models import Word2Vec #topic modelling library for Python that provides access to Word2Vec
from sklearn.manifold import TSNE #contains T-SNE algorithms used to project a high dimensional space into a lower dimensional space
from numpy.linalg import inv #Compute the (multiplicative) inverse of a matrix
import scipy.stats #contains a large number of probability distributions for statistical analysis
import scipy as sp #collection of mathematical algorithms
import seaborn as sns #data visualization library based on matplotlib
import mxnet as mx #open-source deep learning software framework, used to train and deploy deep neural networks.
```

This notebook was created and tested on an ml.t2.medium instance.

Please specify a string that is unique to you; your name is fine! That way you can see your resources, in the event your AWS account is used by multiple people.
```
name = 'first-last'

import sagemaker
from sagemaker import get_execution_role
import boto3, os

s3 = boto3.resource('s3')
sess = sagemaker.Session()
role = get_execution_role()

# Assign a unique name to the bucket. S3 bucket names must be globally unique.
bucket = sess.default_bucket()
prefix = 'aim302-30-may-2019/healthcare-fraud-detection/{}'.format(name)

print('Training input/output will be stored in {}/{}'.format(bucket, prefix))
print('\nIAM Role: {}'.format(role))
```

## 3. Data<a name="data"></a>

The dataset we'll be using in this example was downloaded from the following link:

https://www.cms.gov/Research-Statistics-Data-and-Systems/Downloadable-Public-Use-Files/BSAPUFS/Inpatient_Claims.html

The data set is the publicly available Basic Stand Alone (BSA) Inpatient Public Use File (PUF) named “CMS 2008 BSA Inpatient Claims PUF”. The file contains Medicare inpatient claims from 2008. Each record is an inpatient claim incurred by a 5% sample of Medicare beneficiaries. The file contains seven (7) variables: a primary claim key indexing the records and six (6) analytic variables. One of the analytic variables, claim cost, is provided in two forms: (a) as an integer category and (b) as a dollar average.

There are some demographic and claim-related variables provided in this PUF. However, as beneficiary identities are not provided, it is not possible to link claims that belong to the same beneficiary in the CMS 2008 BSA Inpatient Claims PUF. Without linking beneficiary IDs to claims, it is not possible to create features such as 'amount reimbursed over time', 'average reimbursement per visit', etc.

### 3A. Obtain data<a name="datasetfiles"></a>

We will use the following link to download the claims dataset:

https://www.cms.gov/Research-Statistics-Data-and-Systems/Downloadable-Public-Use-Files/BSAPUFS/Downloads/2008_BSA_Inpatient_Claims_PUF.zip

The data dictionary required to interpret the codes in the dataset has been constructed from the following PDF document:
https://www.cms.gov/Research-Statistics-Data-and-Systems/Downloadable-Public-Use-Files/BSAPUFS/Downloads/2008_BSA_Inpatient_Claims_PUF_DataDic_CB.pdf

The following dictionary files are already available in the data folder of the notebook:

- `ColumnNames.csv` - column descriptions
- `DiagnosisRelatedGroupNames.csv` - dictionary of diagnosis related group (DRG) codes
- `InternationalClassificationOfDiseasesNames.csv` - dictionary of ICD-9 procedure codes
- `LengthOfStayDays.csv` - dictionary of length of stay
- `AgeGroup.csv` - dictionary of age group
- `Gender.csv` - dictionary of gender

#### Download claims data file from CMS site.

```
#!wget https://www.cms.gov/Research-Statistics-Data-and-Systems/Downloadable-Public-Use-Files/BSAPUFS/Downloads/2008_BSA_Inpatient_Claims_PUF.zip
!unzip -o ./2008_BSA_Inpatient_Claims_PUF-backup.zip -d data
```

#### The data file has been extracted into the data folder on the local SageMaker notebook volume.

- `2008_BSA_Inpatient_Claims_PUF.csv` - claims data

#### Let's begin exploring data:

## 4. Feature Engineering <a name="feateng"></a>

```
# read the ColumnNames csv file to identify meaningful names for column labels in the claim data
colnames = pd.read_csv("./data/ColumnNames.csv")
colnames[colnames.columns[-1]] = colnames[colnames.columns[-1]].map(lambda x: x.replace('"','').strip())
display(colnames)

# read claims data file
df_cms_claims_data = pd.read_csv('./data/2008_BSA_Inpatient_Claims_PUF.csv')
df_cms_claims_data.columns = colnames[colnames.columns[-1]].ravel()
pd.set_option('display.max_columns', 500)

# print the shape of the data file
print('Shape:', df_cms_claims_data.shape)

# show the top few rows
display(df_cms_claims_data.head())

# describe the data object
display(df_cms_claims_data.describe())

# check the datatype for each column
display(df_cms_claims_data.dtypes)

# check null value for each column
display(df_cms_claims_data.isnull().mean())
```

#### You might have observed some 'NaN' values and a mean value (0.469985) for the ICD9 primary procedure code in the print results above. We need to fix the 'NaN' values in the ICD9 primary procedure code.

### 4A. Missing values<a name="missing"></a>

Do I have missing values? How are they expressed in the data? Should I withhold samples with missing values? Or should I replace them? If so, which values should they be replaced with?

Based on the results of isnull().mean(), it is clear that 'ICD9 primary procedure code' has a non-zero mean, and that is because it has NaN values. The NaN values correspond to "No Procedure Performed" in the 'ICD9 primary procedure code' dictionary. Let's replace NaN values with a numeric code for "No Procedure Performed".
``` #Fill NaN with -1 for "No Procedure Performed" procedue_na = -1 df_cms_claims_data['ICD9 primary procedure code'].fillna(procedue_na, inplace = True) #convert procedure code from float to int64 df_cms_claims_data['ICD9 primary procedure code'] = df_cms_claims_data['ICD9 primary procedure code'].astype(np.int64) #check count of null values to ensure dataframe is updated display(df_cms_claims_data.isnull().mean()) ``` ### 4B. Categorical features <a name="catfeat"></a> Munging categorical data is another essential process during data preprocessing. It is necessary to convert categorical features to a numerical representation. #### a. Gender <a name="gender"></a> Since gender is already binary and coded as 1 for Male and 2 for Female, no pre-processing is required. ``` def chart_balance(f_name, column_type): if column_type == 'diagnosis': data_dict = pd.read_csv(f_name, sep=', "', skiprows=1, names=['Base DRG code','Diagnosis related group']); data_dict['Diagnosis related group'] = data_dict['Diagnosis related group'].map(lambda x: x.replace('"','')); one, two, three = 'Base DRG code', 'Base DRG code', 'Base DRG code' elif column_type == 'procedure': data_dict = pd.read_csv(f_name, sep=', "', skiprows=1, names=['ICD9 primary procedure code','International Classification of Diseases']) data_dict = data_dict.applymap(lambda x: x.replace('"','')) # replace -1 as code for 'No procedure performed'. In the dictionary the code is set as blank. 
        data_dict.iloc[0]['ICD9 primary procedure code'] = procedue_na
        # convert procedure code from float to int64
        data_dict['ICD9 primary procedure code'] = data_dict['ICD9 primary procedure code'].astype(np.int64)
        one, two, three = 'ICD9 primary procedure code', 'ICD9 primary procedure code', 'ICD9 primary procedure code'
    else:
        # read dictionary csv file
        data_dict = pd.read_csv(f_name)
        data_dict.columns = data_dict.columns.to_series().apply(lambda x: x.strip())

    if column_type == 'gender':
        one = 'bene_sex_ident_cd'
        two = 'Beneficiary gender code'
        three = 'Beneficiary gender'
    elif column_type == 'age':
        one = 'BENE_AGE_CAT_CD'
        two = 'Beneficiary Age category code'
        three = 'Age Group'
    elif column_type in ['procedure', 'diagnosis']:
        plt.figure(figsize=(100,20))
        plt.rc('xtick', labelsize=16)

    display(data_dict.head())
    display(data_dict.dtypes)

    # join the beneficiary category code with the group definition and describe the distribution amongst different groups in the claims dataset
    tmp_counts = data_dict.set_index(one).join( df_cms_claims_data[two].value_counts() )
    tmp_counts['percentage'] = tmp_counts[two]/tmp_counts[two].sum()*100

    # project the distribution in the dataset on a bar graph
    plt.bar(tmp_counts.index, tmp_counts['percentage'].tolist());
    plt.xticks(tmp_counts.index, tmp_counts[three].tolist(), rotation=45)
    plt.ylabel('Percentage claims')

    if column_type in ['diagnosis', 'procedure']:
        return data_dict

chart_balance("./data/Gender.csv", 'gender')
```

#### You may have observed a slight imbalance in the claims distribution between male and female records in the bar graph above. Nothing concerning here, but we may use this information later in the result analysis to justify our anomaly hypothesis.

#### b. Age Group <a name="age"></a>

```
chart_balance("./data/AgeGroup.csv", 'age')
```

#### You might have observed a slight imbalance in the age group distribution. Nothing concerning in the above distribution; a small imbalance is OK.

### 4C. NLP for Textual features <a name="nlp"></a>

All physician and hospital claims include one or more diagnosis codes. The ICD-9-CM diagnosis coding system is used since October, 2012. Hospital inpatient claims also include one or more procedure codes that represent the services performed, drawn from the same ICD-9-CM system.

The codes are numeric values representing the phrases that describe the diagnoses and procedures themselves. The code itself is numeric but doesn't capture the context of a word in a document, semantic and syntactic similarity, relations with other words, etc.

For diagnosis and procedure codes there is an option to treat them as categorical codes and apply one-hot encoding. Categorical data is defined as variables with a finite set of label values, and one-hot encoding binarizes such values: we create one column for each label value and mark it as 0 or 1 as applicable to the sample record. In the case of diagnosis and procedure codes this would give us a sparse matrix, and again the result would not capture the context of a word in a document, semantic and syntactic similarity, relations with other words, etc.

In order to capture those properties, we use a technique called word embedding to convert every word in a phrase into a vector of floating point numbers. We then average the vectors of all words in a phrase to derive a vector for the phrase. We will use this approach for both diagnosis and procedure descriptions to extract features.

Word2Vec is a specific method to derive word embeddings. It can be done using two methods (both involving neural networks): Skip-Gram and Continuous Bag Of Words (CBOW).

CBOW model: This method takes the context of each word as the input and tries to predict the word corresponding to the context.
Skip-Gram model: This method uses the target word (whose representation we want to generate) to predict the context, and in the process we produce the representations.

Both models have their own advantages and disadvantages. Skip-Gram works well with small amounts of data and is found to represent rare words well. On the other hand, CBOW is faster and has better representations for more frequent words.

In our use case, we will use the CBOW model to derive word vectors for the phrases used in the procedure and diagnosis code descriptions.

#### a. Diagnosis Descriptions <a name="diagnosis"></a>

```
data_diagnosis = chart_balance('./data/DiagnosisRelatedGroupNames.csv', 'diagnosis')
```

#### b. Procedure Descriptions

```
data_procedures = chart_balance('./data/InternationalClassificationOfDiseasesNames.csv', 'procedure')
```

#### Observe the distribution of the different diagnosis codes in the bar graph above, printed from the claims dataset. Next, let's do text processing on the diagnosis descriptions to make some of the acronyms more meaningful for word embeddings.

```
# function to run pre-processing on diagnosis descriptions
from nltk.tokenize import sent_tokenize, word_tokenize

def text_preprocessing(phrase):
    phrase = phrase.lower()
    phrase = phrase.replace('&', 'and')
    #phrase = phrase.replace('non-', 'non') #This is to ensure non-critical doesn't get handled as {'non', 'critical'}
    phrase = phrase.replace(',','')
    phrase = phrase.replace('w/o','without').replace(' w ',' with ').replace('/',' ')
    phrase = phrase.replace(' maj ',' major ')
    phrase = phrase.replace(' proc ', ' procedure ')
    phrase = phrase.replace('o.r.', 'operating room')
    sentence = phrase.split(' ')
    return sentence

def get_embeddings(data_dict, column_type):
    if column_type == 'procedure':
        col = 'International Classification of Diseases'
    elif column_type == 'diagnosis':
        col = 'Diagnosis related group'

    # perform tokenization
    tmp_tokenized = data_dict[col].map(lambda x: text_preprocessing(x))
    display(tmp_tokenized.head())
    phrase_lengths = tmp_tokenized.map(lambda x: len(x)).value_counts().sort_index()
    plt.bar(np.arange(1,1+len(phrase_lengths)), phrase_lengths)
    plt.xlabel('Number of Tokens');
    plt.ylabel('Phrases');

    # train word2vec model on the description tokens
    model_prc = Word2Vec(tmp_tokenized, min_count = 1, size = 72, window = 5, iter = 100)
    print(model_prc)
    words = list(model_prc.wv.vocab)
    print(columnize.columnize(words, displaywidth=80, ljust=False))
    return model_prc, words, tmp_tokenized

model_diagnosis, words_diagnosis, diagnosis_tokens = get_embeddings(data_diagnosis, 'diagnosis')
```

#### Word2Vec hyperparameters explained

**size:** The size of the dense vector that is to represent each token or word. If you have very limited data, then size should be a much smaller value. If you have lots of data, it's good to experiment with various sizes. A value of 100–150 has worked well for me for similarity lookups.

**window:** The maximum distance between the target word and its neighboring word. If a neighbor's position is greater than the maximum window width to the left or the right, then that neighbor is not considered as being related to the target word. In theory, a smaller window should give you terms that are more related. If you have lots of data, then the window size should not matter too much, as long as it's not overly narrow or overly broad. If you are not too sure about this, just use the default value.

**min_count:** Minimum frequency count of words. The model ignores words that do not satisfy the min_count. Extremely infrequent words are usually unimportant, so it's best to get rid of those. Unless your dataset is really tiny, this does not really affect the model.

**workers:** How many threads to use behind the scenes.

**iter:** How many epochs to train for. I typically use 10 or more for a small to medium dataset.
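The phrase-vector step used later (`model[v].mean(axis=0)`) is simply the element-wise mean of the per-word embeddings. A minimal sketch, where `toy_vectors` is a made-up lookup standing in for the trained Word2Vec model and the token names are invented:

```python
import numpy as np

# Hypothetical word -> embedding lookup standing in for a trained Word2Vec model
toy_vectors = {
    'cardiac': np.array([1.0, 0.0]),
    'surgery': np.array([0.0, 1.0]),
    'minor':   np.array([1.0, 1.0]),
}

def phrase_vector(tokens, vectors):
    # Phrase embedding = element-wise mean of the word embeddings
    return np.mean([vectors[t] for t in tokens], axis=0)

# e.g. phrase_vector(['cardiac', 'surgery'], toy_vectors) averages the two vectors
```

Averaging loses word order but yields a fixed-length vector per phrase, which is what lets each description become a row of numeric features later on.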
#### t-Distributed Stochastic Neighbor Embedding (t-SNE)

t-Distributed Stochastic Neighbor Embedding (t-SNE) is a non-linear technique for dimensionality reduction that is particularly well suited for the visualization of high-dimensional datasets.

```
# plot TSNE visualization
def tsne_plot(model):
    "Creates a TSNE model and plots it"
    labels = []
    tokens = []

    for word in model.wv.vocab:
        tokens.append(model[word])
        labels.append(word)

    tsne_model = TSNE(perplexity=10, n_components=2, init='pca', n_iter=2500, random_state=10)
    new_values = tsne_model.fit_transform(tokens)

    x = []
    y = []
    for value in new_values:
        x.append(value[0])
        y.append(value[1])

    plt.figure(figsize=(16, 16))
    for i in range(len(x)):
        plt.scatter(x[i],y[i])
        plt.annotate(labels[i], xy=(x[i], y[i]), xytext=(5, 2), textcoords='offset points', ha='right', va='bottom')
    plt.show()

# plot t-SNE chart for the diagnosis word vectors.
#2D visual plot of word embeddings derived from diagnosis descriptions.
tsne_plot(model_diagnosis)

# test most_similar for some word in the diagnosis vocabulary
model_diagnosis.most_similar('diagnosis')

# extract diagnosis words that contain 'non'
#display(tmp_diagnosis_tokenized.head())
series_diagnosis = pd.Series(words_diagnosis)
diagnosis_words_with_non = series_diagnosis[series_diagnosis.map(lambda x: 'non' in x)]
display(diagnosis_words_with_non)

# Check similarity between diagnosis words with opposite severity
for i in diagnosis_words_with_non:
    a, not_a = i.replace('non-','').replace('non',''), i
    if a in words_diagnosis:
        print('Cosine similarity between', a, not_a, ':', model_diagnosis.wv.similarity(a, not_a))
        print('')
```

#### b. Procedure Descriptions <a name="procedure"></a>

Apply the same process that we used for the diagnosis descriptions to the procedure descriptions to build a feature vector for procedures.

```
model_procedure, words_procedure, tokens_procedure = get_embeddings(data_procedures, 'procedure')

# test most_similar for some word in the procedure vocabulary
model_procedure.most_similar('nonoperative')

# extract procedure words that contain 'non'
#display(tmp_procedure_tokenized.head())
series_procedure = pd.Series(words_procedure)
procedure_words_with_non = series_procedure[series_procedure.map(lambda x: 'non' in x)]
display(procedure_words_with_non)

# Check similarity between procedure words with opposite severity
for i in procedure_words_with_non:
    a, not_a = i.replace('non-','').replace('non',''), i
    if a in words_procedure:
        print('Cosine similarity between', a, not_a, ':', model_procedure.wv.similarity(a, not_a))
        print('')

def generate_features_from_embeddings(tokens, column_type, model):
    if column_type == 'diagnosis':
        one = 'Base DRG code'
        two = 'DRG_VECTOR'
        three = 'DRG_F'
    elif column_type == 'procedure':
        one = 'ICD9 primary procedure code'
        two = 'PRC_VECTOR'
        three = 'PRC_F'

    values, index = [], []

    # iterate through the list of strings in each phrase
    for i, v in pd.Series(tokens).items():
        #calculate the mean of all word embeddings in each phrase
        values.append(model[v].mean(axis =0))
        index.append(i)

    tmp_phrase_vector = pd.DataFrame({one:index, two:values})
    display(tmp_phrase_vector.head())

    # expand tmp_phrase_vector into a dataframe;
    # every scalar value in a phrase vector will be considered a feature
    features = tmp_phrase_vector[two].apply(pd.Series)

    # rename each feature column using the prefix (DRG_F or PRC_F)
    features = features.rename(columns = lambda x : three + str(x + 1))

    # view the features dataframe
    display(features.head())

    return features

# get diagnosis features
diagnosis_features = generate_features_from_embeddings(diagnosis_tokens,
                                                       'diagnosis', model_diagnosis)

# get procedure features
procedure_features = generate_features_from_embeddings(tokens_procedure, 'procedure', model_procedure)

#merge diagnosis word embeddings derived using word2vec into the base claims data as new features.
tmp_join_claim_diagnosis = pd.merge(df_cms_claims_data, diagnosis_features, how='inner', left_on = 'Base DRG code', right_index = True)
display(tmp_join_claim_diagnosis.head())

#merge procedure word embeddings derived using word2vec into the base claims data as new features.
tmp_join_claim_procedure = pd.merge(tmp_join_claim_diagnosis, procedure_features, how='inner', left_on = 'ICD9 primary procedure code', right_index = True)
display(tmp_join_claim_procedure.head())

#assign the new feature set with procedure and diagnosis word embeddings to a new claims feature dataframe;
#aggregate all the features extracted so far to build a final claims feature set for training
claims_features = tmp_join_claim_procedure
```

## 5. Split train and test: train only on normal data <a name="split"></a>

We want to split our data into training and test sets. We want to ensure that in this random split we have samples that cover the distribution of payments. We perform a stratified shuffle split on the DRG quintile payment amount code, taking 30% of the data for testing and 70% for training.

```
from sklearn.model_selection import StratifiedShuffleSplit

X = claims_features.drop(['Encrypted PUF ID','ICD9 primary procedure code','Base DRG code'], axis=1)
strata = claims_features['DRG quintile payment amount code']

sss = StratifiedShuffleSplit(n_splits=1, test_size=0.3, random_state=0)
splits = sss.split(X, strata)
for train_index, test_index in splits:
    X_train, X_test = X.iloc[train_index], X.iloc[test_index]

display(X.head())
X.shape
```

### 5A. Standardize data based on training sample <a name="standardize"></a>

Because the PCA algorithm that we will use later for training maximizes the orthogonal variances of one's data, it is important to standardize the training data to have zero mean and unit variance prior to performing PCA. This way the PCA projection is invariant to such rescalings, and variables of large scale are prevented from dominating the projection.

$$ \tilde{X} = \frac{X-\mu_X}{\sigma_X} $$

```
n_obs, n_features = X_train.shape

from sklearn.preprocessing import StandardScaler
scaler = StandardScaler()
scaler.fit(X_train)
X_stndrd_train = scaler.transform(X_train)
X_stndrd_train = pd.DataFrame(X_stndrd_train, index=X_train.index, columns=X_train.columns)
```

### 5B. PCA <a name="pca"></a>

Principal Component Analysis (PCA) is an unsupervised method for taking a data set whose features have multi-collinearity and creating a decorrelated data set, by finding the linear combinations of vectors which maximize the data's variances in orthogonal dimensions.

#### PCA on Amazon SageMaker

The built-in PCA algorithm of SageMaker solves for the singular values, $s$, and for the principal components, $V$, of our data set. Here we'll perform SageMaker PCA on our standardized training dataset $\tilde{X}$, and then we'll use its outputs to project our correlated dataset into a decorrelated one.

$$ s, V = \rm{PCA}(\tilde{X})$$

```
# Convert data to binary stream.
matrx_train = X_stndrd_train.as_matrix().astype('float32')

import io
import sagemaker.amazon.common as smac
buf_train = io.BytesIO()
smac.write_numpy_to_dense_tensor(buf_train, matrx_train)
buf_train.seek(0)
```

Now we are ready to upload the file object to our Amazon S3 bucket. We specify two paths: one to where our uploaded matrix will reside, and one to where Amazon SageMaker will write the output. Amazon SageMaker will create folders within the paths that do not already exist.
```
%%time

key = 'healthcare_fraud_identification_feature_store'

boto3.resource('s3').Bucket(bucket).Object(os.path.join(prefix, 'train', key)).upload_fileobj(buf_train)
s3_train_data = 's3://{}/{}/train/{}'.format(bucket, prefix, key)
print('uploaded training data location: {}'.format(s3_train_data))

output_location = 's3://{}/{}/output/model'.format(bucket, prefix)
print('training artifacts will be uploaded to: {}'.format(output_location))

from sagemaker.amazon.amazon_estimator import get_image_uri

# select the algorithm container based on this notebook's current region
region_name = boto3.Session().region_name
container = get_image_uri(region_name, 'pca')
print('Using SageMaker PCA container: {} ({})'.format(container, region_name))
```

#### Start the Amazon SageMaker session and set training parameters for the Estimator API

The instance type should be one of the following, and the number of instances can be greater than 1. Choose the P instance-type family to use GPUs for training.

#### [ml.p2.xlarge, ml.m5.4xlarge, ml.m4.16xlarge, ml.p3.16xlarge, ml.m5.large, ml.p2.16xlarge, ml.c4.2xlarge, ml.c5.2xlarge, ml.c4.4xlarge, ml.c5.4xlarge, ml.c4.8xlarge, ml.c5.9xlarge, ml.c5.xlarge, ml.c4.xlarge, ml.c5.18xlarge, ml.p3.2xlarge, ml.m5.xlarge, ml.m4.10xlarge, ml.m5.12xlarge, ml.m4.xlarge, ml.m5.24xlarge, ml.m4.2xlarge, ml.p2.8xlarge, ml.m5.2xlarge, ml.p3.8xlarge, ml.m4.4xlarge]

```
num_obs, feature_dim = np.shape(matrx_train)
num_components = feature_dim - 1

num_instances = 2
instance_type = 'ml.c5.2xlarge'
algorithm_mode = 'regular'
platform = 'sagemaker'

start = time.time()
sess = sagemaker.Session()

pca = sagemaker.estimator.Estimator(container,
                                    role,
                                    train_instance_count=num_instances,
                                    train_instance_type=instance_type,
                                    output_path=output_location,
                                    sagemaker_session=sess)
```

#### Specify the hyperparameters for your training job and start training with the Amazon SageMaker fit API call

Training will take approximately 4-5 minutes to complete.
```
pca.set_hyperparameters(feature_dim=feature_dim,
                        num_components=num_components,
                        subtract_mean=False,
                        algorithm_mode='regular',
                        mini_batch_size=200)

print('Start timestamp of launch: ' + str(start))
pca.fit({'train': s3_train_data})

stop = time.time()
total_time = stop - start
print('%2.2f minutes' % (total_time/60))
```

When the training job is complete, SageMaker writes the model artifact to the specified S3 output location. Let's download and unpack the returned PCA model artifact.

```
job_name = pca.latest_training_job.name
os.system('aws s3 cp {}/{}/output/model.tar.gz ./'.format(output_location, job_name))
!tar xvzf model.tar.gz

pca_model = mx.ndarray.load('model_algo-1')
print('PCA model artifact:', pca_model.keys())
```

The SageMaker PCA artifact contains $V$, the eigenvector principal components in *increasing* order of $s$, their singular values. A component's singular value is equal to the standard deviation of the data along that component, i.e., the square of a singular value is equal to the variance that the component explains. Therefore, to calculate the proportion of the data's variance that each component explains, take the square of its singular value and divide it by the sum of all the singular values squared:

$$ \rm{component \,}i \% \rm{\,variance\, explained} = 100\cdot\frac{s_i^2}{\sum_{p=1}^P s_p^2} $$

First, we'll reverse the returned ordering, so that the components which explain the most variance come first, i.e., reorder the components in decreasing order of their singular values.

PCA can be further used to reduce the dimensionality of the problem. We have $P$ features and $P-1$ components, but we'll see in the plot below that many of the components don't contribute much to the explained variance of the data. We will keep only the $K$ leading components of $V$ which explain 95% of the variance in our data, and denote this reduced matrix as $V_K$.
```
singular_values = pca_model['s'].asnumpy()[::-1]
pc_reversedorder = pd.DataFrame(pca_model['v'].asnumpy())
pc = pc_reversedorder[list(pc_reversedorder.columns[::-1])]

eigenvalues = np.power(singular_values, 2)
explained_var_pct = eigenvalues/np.sum(eigenvalues) * 100
explained_var_cum = np.cumsum(explained_var_pct)

var_threshold = 95
n_components = np.min([np.where(explained_var_cum >= var_threshold)[0][0], n_features-1])
print('%i components explain %2.2f%% of the data\'s variance.' % (n_components+1, explained_var_cum[n_components]))

fig = plt.figure(figsize=[14, 8])
width = 0.5
ax1 = fig.add_subplot(111)
ax1.bar(np.arange(0, len(singular_values)), singular_values, align='edge',
        color='darkgreen', label='Singular Values', alpha=0.5, width=width);
ax1.set_ylabel('Singular Values', fontsize=17);
ax1.set_xlabel('Principal Component', fontsize=17);
ax1.legend(loc='upper right', fontsize=14)

ax2 = ax1.twinx()
ax2.plot(np.arange(0, len(explained_var_cum)), explained_var_cum, color='black', label='Cumulative');
ax2.plot([0, n_components], [var_threshold, var_threshold], 'r:')
ax2.plot([n_components, n_components], [0, var_threshold], 'r:')
ax2.set_ylabel('% Variance Explained', fontsize=17);
ax2.legend(loc='right', fontsize=14)
ax2.set_ylim([0, 100])
ax2.set_xlim([0, len(eigenvalues)])
plt.title('Dimensionality Reduction', fontsize=20);

# We will now work with the reduced matrix of components that explain 95% of the variance in the data
Vk = pc[pc.columns[:n_components+1]]
```

## 6. Calculate the Mahalanobis distance <a name="md"></a>

Above, we used the singular values returned by PCA to keep the $K$ principal component vectors that explain 95% of the data's variance, and stored them in the dataframe $V_K$. We use $V_K$ to transform the data into a decorrelated dataset by taking their matrix dot product:

$$ Z = \tilde{X} V_K $$

To detect anomalous data points, we want to measure how far a data point is from the distribution of the projected data.
The farther a point lies from the distribution, the more anomalous it is. Even though we have $K$ dimensions instead of $P$, this is still a multivariate distribution. We will use the Mahalanobis distance [Mahalanobis, 1936](https://insa.nic.in/writereaddata/UpLoadedFiles/PINSA/Vol02_1936_1_Art05.pdf), which is a scalar measure of the multivariate distance between a point $z$ and a distribution $D$. Distribution $D$ is defined by the mean and the inverse-covariance of the data in $Z$:

$$ \mu_Z = \rm{mean}(Z) $$

$$ \Sigma_Z = \rm{cov}(Z) $$

$$ \Sigma_Z^{-1} = \rm{inv}\big(\rm{cov}(Z)\big) $$

The Mahalanobis distance measures how many standard deviations away $z$ is from the mean of $D$ along each principal component axis. We'll use the Mahalanobis distance of each point as its anomaly score. We take the top $\alpha$% of these points to consider as outliers, where $\alpha$ depends on how sensitive we want our detection to be. For this problem, we will take the top 1%, i.e. $\alpha=0.01$. Therefore we calculate the $(1-\alpha)$-quantile of distribution $D$ as the threshold for considering a data point anomalous. This method of PCA anomaly detection was developed in [A Novel Anomaly Detection Scheme Based on Principal Component Classifier](https://homepages.laas.fr/owe/METROSEC/DOC/FDM03.pdf).
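As a quick, standalone illustration (a toy sketch, separate from the claims pipeline), SciPy's `mahalanobis` reduces to the ordinary Euclidean distance from the mean when the inverse-covariance is the identity, and a coordinate with larger variance contributes fewer "standard deviations" to the distance:

```python
import numpy as np
from scipy.spatial.distance import mahalanobis

z = np.array([3.0, 4.0])     # a point
mu = np.array([0.0, 0.0])    # distribution mean

# Identity covariance: Mahalanobis distance equals the Euclidean distance
d_euclid = mahalanobis(z, mu, np.linalg.inv(np.eye(2)))

# Inflating the first coordinate's variance to 4 halves its contribution
# when measured in standard-deviation units: sqrt(9/4 + 16)
d_scaled = mahalanobis(z, mu, np.linalg.inv(np.diag([4.0, 1.0])))
```

This is exactly the quantity computed for each projected claim below, with `Zmean` and `invcovZ` estimated from the training data.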
```
# (these may already be imported earlier in the notebook)
import numpy as np
import pandas as pd
import scipy as sp
import scipy.spatial.distance   # makes sp.spatial.distance available
from numpy.linalg import inv
import matplotlib.pyplot as plt

# Z is the PCA-projected standardized data
pca_projected_X_train = pd.DataFrame(np.dot(X_stndrd_train, Vk), index=X_stndrd_train.index)

# Calculate the Mahalanobis distance for multivariate deviation
Zmean = pca_projected_X_train.mean()
covZ = pca_projected_X_train.cov()
invcovZ = inv(covZ)
M = pca_projected_X_train.apply(lambda x: sp.spatial.distance.mahalanobis(x, Zmean, invcovZ), axis=1)

# Threshold at the training set's top alpha-%
alpha = 0.01
threshold = np.quantile(M, 1-alpha)
print(threshold)

# Plot the density of the anomaly score and highlight the calculated threshold
plt.figure(figsize=[15, 5]);
M.hist(bins=40, density=True);
plt.axvline(threshold, color='red', label='{}%-threshold = {}'.format(int(alpha*100), round(threshold, 4)));
plt.legend();
plt.xlabel(r'Anomaly Score [based on Mahalanobis distance]', fontsize=14);
plt.ylabel('Density', fontsize=14);
```

## 7. Unsupervised Anomaly Detection <a name="ad"></a>

The PCA-computed quantities above (the component matrix $V_K$, projected mean $\mu_Z$, inverse-covariance $\Sigma_Z^{-1}$, and threshold) give us an unsupervised anomaly detection method. We create a function below which transforms the test data according to the models fit on the training data.
The function **calcAnomalyScore**() performs the following:

* standardizes each test data point according to the training mean and training standard deviation
* projects each test data point using the principal components calculated from the training data
* measures the Mahalanobis distance of each test data point from the training distribution $D$
* returns a boolean indicating whether the test data point's anomaly score exceeds the threshold

```
def calcAnomalyScore(data, threshold, scaler=scaler, pc=Vk, Zmean=Zmean, invcovZ=invcovZ):
    data_stndrd = pd.DataFrame(scaler.transform(data), index=data.index, columns=data.columns)
    pc_projected_data = pd.DataFrame(np.dot(data_stndrd, pc), index=data_stndrd.index)
    anomaly_score = pc_projected_data.apply(lambda x: sp.spatial.distance.mahalanobis(x, Zmean, invcovZ), axis=1)
    is_anomaly = (anomaly_score > threshold)
    y = pd.concat([anomaly_score, is_anomaly], axis=1)
    y.columns = ['anomaly_score', 'is_anomaly']
    return y

y_test = calcAnomalyScore(X_test, threshold, scaler=scaler, pc=Vk, Zmean=Zmean, invcovZ=invcovZ)
print('Fraction of test data flagged as anomalous:', y_test['is_anomaly'].mean())
```

## 8. Understanding Anomaly<a name="understandinganomaly"></a>

Data points marked TRUE for "is_anomaly" can be passed on for inspection. Now that we have separated normal data from anomalous data, we can contrast the two to see whether the differentiating reasons can be identified in the original feature space. We attach the "is_anomaly" output as a label to the original claims feature data.
```
# list all claims with their anomaly score and anomaly label (True)
y_test['anomalous'] = (y_test['is_anomaly']*1.).astype(int)

test_claims = claims_features.loc[y_test.index]
test_claims = y_test.merge(test_claims, how='outer', left_index=True, right_index=True)
test_claims = test_claims.filter(["anomalous", "DRG quintile payment amount code", "DRG quintile average payment amount",
                                  "Inpatient days code", "ICD9 primary procedure code", "Base DRG code",
                                  "Beneficiary Age category code", "Beneficiary gender code"])
display(test_claims.head())

sns.pairplot(test_claims, hue="anomalous", kind='scatter', plot_kws={'alpha': 0.1})
```

#### In the above pair plot, look for the following patterns

1. Plots where orange is asymmetrical with blue.
2. Orange appearing in patches that don't overlap with the blue.

These patterns in the pairplot can be used as a starting point to target investigation of specific cases.

## 9. Deploy PCA <a name="deployendpoint"></a>

This section is optional, but follow the steps below if you are interested in learning how to compute the principal components of a given claim record using Amazon SageMaker hosting. You may find this step helpful if you want to use the principal components of claims data to predict other variables of business significance, for example, predicting the length of stay based on diagnosis code, gender, and age, or predicting the claim payment amount and quintile based on data points in the claims dataset.

Here we demonstrate how to deploy the PCA model as an endpoint on Amazon SageMaker for inference. However, to solve the example problems discussed in the paragraph above, you will need to collect more data, label it, and refactor your training for the prediction problem.
```
# serialize the test data to binary format for real-time inference to extract principal components of claim features
X_stndrd_test = scaler.transform(X_test)
X_stndrd_test = pd.DataFrame(X_stndrd_test, index=X_test.index, columns=X_test.columns)
inference_input = X_stndrd_test.to_numpy().astype('float32')

buf = io.BytesIO()
smac.write_numpy_to_dense_tensor(buf, inference_input)
buf.seek(0)

# print the shape of the inference_input matrix
inference_input.shape
```

#### Deploy the model using the Amazon SageMaker deploy API. AWS manages the highly available and reliable infrastructure for it.

```
# deploy the Amazon SageMaker PCA model trained above to create a hosted endpoint for real-time principal component extraction
pca_predictor = pca.deploy(initial_instance_count=1, instance_type='ml.t2.medium')

from sagemaker.predictor import csv_serializer, json_deserializer
pca_predictor.content_type = 'text/csv'
pca_predictor.serializer = csv_serializer
pca_predictor.deserializer = json_deserializer

# run inference on the first 500 claims; avoid running it on a large number of claims to prevent connection timeouts.
# For large datasets, use Amazon SageMaker batch inference.
result = pca_predictor.predict(inference_input[0:500])
print(result)

# normalize the above JSON result into a more readable columnar format with one principal component per column
from pandas.io.json import json_normalize

# the result is in JSON format; components are returned as a list under the 'projections' tag
result_normalized = json_normalize(result, 'projections')

# expand the 'projection' lists into their own dataframe
pca_components = result_normalized['projection'].apply(pd.Series)

# rename each column
pca_components = pca_components.rename(columns=lambda x: 'PC_' + str(x))

# view the principal components dataframe
pca_components
```

### Delete the Endpoint

If you're ready to be done with this notebook, please run the delete_endpoint line in the cell below.
This will remove the hosted endpoint you created and avoid any charges from a stray instance being left turned on.

```
import sagemaker
sagemaker.Session().delete_endpoint(pca_predictor.endpoint)
```
<small><small><i> All the IPython Notebooks in **Python Introduction** lecture series by Dr. Milaan Parmar are available @ **[GitHub](https://github.com/milaan9/01_Python_Introduction)** </i></small></small>

# Python Programming

Python is a powerful multipurpose programming language created by *Guido van Rossum*. It has a simple and easy-to-use syntax, making it a popular first-choice programming language for beginners.

This is a comprehensive guide that explores the reasons you should consider learning Python and the ways you can get started with Python. If you directly want to get started with Python, visit our **[Python Classes](https://github.com/milaan9/01_Python_Introduction/blob/main/000_Intro_to_Python.ipynb)**.

## What is Python Programming Language?

Python is an interpreted, object-oriented, high-level programming language. As it is general-purpose, it has a wide range of applications, from web development and building desktop GUIs to scientific and mathematical computing.

Python is popular for its simple and relatively straightforward syntax. Its readability increases productivity, as it allows us to focus more on the problem rather than on structuring the code.

## Features of Python Programming

### Simple and easy to learn

Python has a very simple and elegant syntax. It is much easier to read and write programs in Python compared to other languages like C, C++, or Java. For this reason, many beginners are introduced to programming with Python as their first programming language.

### Free and open-source

You can freely use and distribute Python programs, even for commercial use. As it is open-source, you can even change Python's source code to fit your use case.

### Portability

A single Python program can run on different platforms without any change in source code. It runs on almost all platforms including Windows, Mac OS X, and Linux.
### Extensible and Embeddable

You can combine Python code with other programming languages like C or Java to increase efficiency. This provides high performance and scripting capabilities that other languages do not offer out of the box.

### High-Level Interpreted Language

Python itself handles tasks like memory management and garbage collection. So unlike C or C++, you don't have to worry about system architecture or other lower-level operations.

### Rich library and large community

Python has numerous reliable built-in libraries. Python programmers have developed tons of free and open-source libraries, so you don't have to code everything by yourself. The Python community is very large and ever-growing. If you encounter errors while programming in Python, it's likely that someone in this community has already asked about and solved them.

## Reasons to Choose Python as First Language

### 1. Simple Elegant Syntax

Programming in Python is fun. It's easier to understand and write Python code. The syntax feels natural. Let's take the following example where we add two numbers:

```python
a = 2
b = 3
sum = a + b
print(sum)
```

Even if you have never programmed before, you can easily guess that this program adds two numbers and displays the result.

### 2. Not overly strict

You don't need to define the type of a variable in Python. Also, it's not necessary to add a semicolon at the end of a statement. Python enforces good practices (like proper indentation). These small things can make learning much easier for beginners.

### 3. The expressiveness of the language

Python allows you to write programs with greater functionality in fewer lines of code. Let's look at code to swap the values of two variables. It can be done in Python with the following lines of code:

```python
a = 15
b = 27
print(f'Before swapping: a, b = {a},{b}')
a, b = b, a
print(f'After swapping: a, b = {a},{b}')
```

Here, we can see that the code is much shorter and more readable.
If instead we were to use Java, the same program would have to be written in the following way:

```java
public class Swap {
    public static void main(String[] args) {
        int a, b, temp;
        a = 15;
        b = 27;
        System.out.println("Before swapping : a, b = " + a + ", " + b);
        temp = a;
        a = b;
        b = temp;
        System.out.println("After swapping : a, b = " + a + ", " + b);
    }
}
```

This is just an example. There are many more such cases where Python increases efficiency by reducing the amount of code required to program something.

## Python Applications Area

Python is known for its **general-purpose** nature, which makes it applicable in almost every domain of software development. Here are some **application areas** where Python can be applied.

1. **Web Applications** - We can use Python to develop web applications. It provides libraries to handle formats and protocols such as HTML, XML, and JSON, along with email processing, requests, BeautifulSoup, Feedparser, etc. It also provides frameworks such as Django, Pyramid, and Flask to design and develop web-based applications. Some important developments are: PythonWikiEngines, Pocoo, PythonBlogSoftware, etc.
2. **AI & Machine Learning** - Python has prebuilt libraries like NumPy for scientific computation, SciPy for advanced computing, and PyBrain for machine learning, making it one of the best languages for AI.
3. **Desktop GUI Applications** - Python provides the Tk GUI library to develop user interfaces in Python-based applications. Other useful toolkits, such as wxWidgets, Kivy, and PyQt, are usable on several platforms. Kivy is popular for writing multitouch applications.
4. **Software Development** - Python is helpful in the software development process. It works as a support language and can be used for build control and management, testing, etc.
5. **Scientific and Numeric** - Python is popular and widely used in scientific and numeric computing.
Some useful libraries and packages are SciPy, Pandas, IPython, etc. SciPy is a group of packages for engineering, science, and mathematics.
6. **Business Applications** - Python is used to build business applications like ERP and e-commerce systems. Tryton is a high-level application platform.
7. **Console-Based Applications** - We can use Python to develop console-based applications. For example: IPython.
8. **Audio or Video Based Applications** - Python is awesome for performing multiple tasks and can be used to develop multimedia applications. Some real applications are: TimPlayer, cplay, etc.
9. **3D CAD Applications** - Fandango is a real application for creating CAD applications that provides full CAD features.
10. **Enterprise Applications** - Python can be used to create applications for use within an enterprise or an organization. Some real applications are: OpenERP, Tryton, Picalo, etc.
11. **Applications for Images** - Using Python, several applications for images can be developed. Applications developed are: VPython, Gogh, imgSeek, etc.
12. **Games and 3D Graphics** - PyGame and PyKyra are two frameworks for game development with Python. Apart from these, we also get a variety of 3D-rendering libraries. If you're a game developer, you can check out PyWeek, a semi-annual game programming contest.

### 4. Great Community and Support

Python has a large supporting community. There are numerous active online forums which can come in handy if you are stuck anywhere in the learning process. Some of them are:

* **[Learn Python subreddit](https://www.reddit.com/r/learnpython)**
* **[Google Forum for Python](https://groups.google.com/forum/#!forum/comp.lang.python)**
* **[Python Questions - Stack Overflow](https://stackoverflow.com/questions/tagged/python)**

## How you can learn to code in Python?

### Learn Python from Dr. Milaan Parmar

Programiz offers dozens of tutorials and examples to help you learn Python programming from scratch.
Each tutorial is written in-depth with examples and detailed explanations.

### Learn Python from Books

It is always a good idea to learn to program from books. You will get the big picture of programming concepts in a book that you may not find elsewhere. Here are 3 books we personally recommend.

* **[Think Python: How to Think Like a Computer Scientist](http://amzn.to/2dVg5rG)** - a hands-on guide to start learning Python with lots of exercise materials
* **[Starting out With Python](http://amzn.to/2diJu8Z)** - an introductory programming book for students with limited programming experience
* **[Effective Python: 59 Specific Ways to Write Better Python](http://amzn.to/2e2EiJt)** - an excellent book for learning to write robust, efficient and maintainable code in Python

## Final Words

I personally think Python is a terrific language to learn. If you are getting started in programming, Python is an awesome choice. You will be amazed by how much you can do in Python once you know the basics. It is easy to overlook the fact that Python is a powerful language. Not only is Python good for learning programming, but it is also a good language to have in your arsenal. Python can help you with everything, whether it is turning your idea into a prototype, creating a game, or getting into Machine Learning and Artificial Intelligence.
A very wide range of physical processes lead to wave motion, where signals are propagated through a medium in space and time, normally with little or no permanent movement of the medium itself. The shape of the signals may undergo changes as they travel through matter, but usually not so much that the signals cannot be recognized at some later point in space and time. Many types of wave motion can be described by the equation $u_{tt}=\nabla\cdot (c^2\nabla u) + f$, which we will solve in the forthcoming text by finite difference methods.

# Simulation of waves on a string
<div id="wave:string"></div>

We begin our study of wave equations by simulating one-dimensional waves on a string, say on a guitar or violin. Let the string in the undeformed state coincide with the interval $[0,L]$ on the $x$ axis, and let $u(x,t)$ be the displacement at time $t$ in the $y$ direction of a point initially at $x$. The displacement function $u$ is governed by the mathematical model

<!-- Equation labels as ordinary links -->
<div id="wave:pde1"></div>

$$
\begin{equation}
\frac{\partial^2 u}{\partial t^2} = c^2 \frac{\partial^2 u}{\partial x^2}, \quad x\in (0,L),\ t\in (0,T]
\label{wave:pde1} \tag{1}
\end{equation}
$$

<!-- Equation labels as ordinary links -->
<div id="wave:pde1:ic:u"></div>

$$
\begin{equation}
u(x,0) = I(x), \quad x\in [0,L]
\label{wave:pde1:ic:u} \tag{2}
\end{equation}
$$

<!-- Equation labels as ordinary links -->
<div id="wave:pde1:ic:ut"></div>

$$
\begin{equation}
\frac{\partial}{\partial t}u(x,0) = 0, \quad x\in [0,L]
\label{wave:pde1:ic:ut} \tag{3}
\end{equation}
$$

<!-- Equation labels as ordinary links -->
<div id="wave:pde1:bc:0"></div>

$$
\begin{equation}
u(0,t) = 0, \quad t\in (0,T]
\label{wave:pde1:bc:0} \tag{4}
\end{equation}
$$

<!-- Equation labels as ordinary links -->
<div id="wave:pde1:bc:L"></div>

$$
\begin{equation}
u(L,t) = 0, \quad t\in (0,T]
\label{wave:pde1:bc:L} \tag{5}
\end{equation}
$$

The constant $c$ and the function $I(x)$ must be
prescribed. Equation ([1](#wave:pde1)) is known as the one-dimensional *wave equation*. Since this PDE contains a second-order derivative in time, we need *two initial conditions*. The condition ([2](#wave:pde1:ic:u)) specifies the initial shape of the string, $I(x)$, and ([3](#wave:pde1:ic:ut)) expresses that the initial velocity of the string is zero. In addition, PDEs need *boundary conditions*, given here as ([4](#wave:pde1:bc:0)) and ([5](#wave:pde1:bc:L)). These two conditions specify that the string is fixed at the ends, i.e., that the displacement $u$ is zero.

The solution $u(x,t)$ varies in space and time and describes waves that move with velocity $c$ to the left and right.

Sometimes we will use a more compact notation for the partial derivatives to save space:

<!-- Equation labels as ordinary links -->
<div id="_auto1"></div>

$$
\begin{equation}
u_t = \frac{\partial u}{\partial t}, \quad
u_{tt} = \frac{\partial^2 u}{\partial t^2},
\label{_auto1} \tag{6}
\end{equation}
$$

and similar expressions for derivatives with respect to other variables. Then the wave equation can be written compactly as $u_{tt} = c^2u_{xx}$.

The PDE problem ([1](#wave:pde1))-([5](#wave:pde1:bc:L)) will now be discretized in space and time by a finite difference method.

## Discretizing the domain
<div id="wave:string:mesh"></div>

The temporal domain $[0,T]$ is represented by a finite number of mesh points

<!-- Equation labels as ordinary links -->
<div id="_auto2"></div>

$$
\begin{equation}
0 = t_0 < t_1 < t_2 < \cdots < t_{N_t-1} < t_{N_t} = T
\label{_auto2} \tag{7}
\end{equation}
$$

Similarly, the spatial domain $[0,L]$ is replaced by a set of mesh points

<!-- Equation labels as ordinary links -->
<div id="_auto3"></div>

$$
\begin{equation}
0 = x_0 < x_1 < x_2 < \cdots < x_{N_x-1} < x_{N_x} = L
\label{_auto3} \tag{8}
\end{equation}
$$

One may view the mesh as two-dimensional in the $x,t$ plane, consisting of points $(x_i, t_n)$, with $i=0,\ldots,N_x$ and $n=0,\ldots,N_t$.
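As a small illustrative sketch (plain NumPy, with arbitrarily chosen values of $N_x$, $N_t$, $L$, and $T$; uniform spacing is assumed here and introduced formally in the next subsection), such a mesh can be constructed as follows:

```python
import numpy as np

# Hypothetical mesh parameters, chosen only for illustration
L, T = 1.0, 2.0   # lengths of the spatial and temporal domains
Nx, Nt = 5, 5     # number of mesh intervals in space and time

x = np.linspace(0, L, Nx + 1)   # x_0, ..., x_{Nx}
t = np.linspace(0, T, Nt + 1)   # t_0, ..., t_{Nt}

# Each pair (x[i], t[n]) is a point in the two-dimensional space-time mesh
```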
### Uniform meshes

For uniformly distributed mesh points we can introduce the constant mesh spacings $\Delta t$ and $\Delta x$. We have that

<!-- Equation labels as ordinary links -->
<div id="_auto4"></div>

$$
\begin{equation}
x_i = i\Delta x,\ i=0,\ldots,N_x,\quad
t_n = n\Delta t,\ n=0,\ldots,N_t
\label{_auto4} \tag{9}
\end{equation}
$$

We also have that $\Delta x = x_i-x_{i-1}$, $i=1,\ldots,N_x$, and $\Delta t = t_n - t_{n-1}$, $n=1,\ldots,N_t$. [Figure](#wave:pde1:fig:mesh) displays a mesh in the $x,t$ plane with $N_t=5$, $N_x=5$, and constant mesh spacings.

## The discrete solution
<div id="wave:string:numerical:sol"></div>

The solution $u(x,t)$ is sought at the mesh points. We introduce the mesh function $u_i^n$, which approximates the exact solution at the mesh point $(x_i,t_n)$ for $i=0,\ldots,N_x$ and $n=0,\ldots,N_t$. Using the finite difference method, we shall develop algebraic equations for computing the mesh function.

## Fulfilling the equation at the mesh points
<div id="wave:string:samplingPDE"></div>

In the finite difference method, we relax the condition that ([1](#wave:pde1)) holds at all points in the space-time domain $(0,L)\times (0,T]$ to the requirement that the PDE is fulfilled at the *interior* mesh points only:

<!-- Equation labels as ordinary links -->
<div id="wave:pde1:step2"></div>

$$
\begin{equation}
\frac{\partial^2}{\partial t^2} u(x_i, t_n) =
c^2\frac{\partial^2}{\partial x^2} u(x_i, t_n),
\label{wave:pde1:step2} \tag{10}
\end{equation}
$$

for $i=1,\ldots,N_x-1$ and $n=1,\ldots,N_t-1$. For $n=0$ we have the initial conditions $u=I(x)$ and $u_t=0$, and at the boundaries $i=0,N_x$ we have the boundary condition $u=0$.

## Replacing derivatives by finite differences
<div id="wave:string:fd"></div>

The second-order derivatives can be replaced by central differences.
The most widely used difference approximation of the second-order derivative is

$$
\frac{\partial^2}{\partial t^2}u(x_i,t_n)\approx
\frac{u_i^{n+1} - 2u_i^n + u^{n-1}_i}{\Delta t^2}
$$

It is convenient to introduce the finite difference operator notation

$$
[D_tD_t u]^n_i = \frac{u_i^{n+1} - 2u_i^n + u^{n-1}_i}{\Delta t^2}
$$

A similar approximation of the second-order derivative in the $x$ direction reads

$$
\frac{\partial^2}{\partial x^2}u(x_i,t_n)\approx
\frac{u_{i+1}^{n} - 2u_i^n + u^{n}_{i-1}}{\Delta x^2} = [D_xD_x u]^n_i
$$

### Algebraic version of the PDE

We can now replace the derivatives in ([10](#wave:pde1:step2)) and get

<!-- Equation labels as ordinary links -->
<div id="wave:pde1:step3b"></div>

$$
\begin{equation}
\frac{u_i^{n+1} - 2u_i^n + u^{n-1}_i}{\Delta t^2} =
c^2\frac{u_{i+1}^{n} - 2u_i^n + u^{n}_{i-1}}{\Delta x^2},
\label{wave:pde1:step3b} \tag{11}
\end{equation}
$$

or written more compactly using the operator notation:

<!-- Equation labels as ordinary links -->
<div id="wave:pde1:step3a"></div>

$$
\begin{equation}
[D_tD_t u = c^2 D_xD_x]^{n}_i
\label{wave:pde1:step3a} \tag{12}
\end{equation}
$$

### Interpretation of the equation as a stencil

A characteristic feature of ([11](#wave:pde1:step3b)) is that it involves $u$ values from neighboring points only: $u_i^{n+1}$, $u^n_{i\pm 1}$, $u^n_i$, and $u^{n-1}_i$. The circles in [Figure](#wave:pde1:fig:mesh) illustrate such neighboring mesh points that contribute to an algebraic equation. In this particular case, we have sampled the PDE at the point $(2,2)$ and constructed ([11](#wave:pde1:step3b)), which then involves a coupling of $u_1^2$, $u_2^3$, $u_2^2$, $u_2^1$, and $u_3^2$. The term *stencil* is often used about the algebraic equation at a mesh point, and the geometry of a typical stencil is illustrated in [Figure](#wave:pde1:fig:mesh). One also often refers to the algebraic equations as *discrete equations*, *(finite) difference equations* or a *finite difference scheme*.
<!-- dom:FIGURE: [mov-wave/D_stencil_gpl/stencil_n_interior.png, width=500] Mesh in space and time. The circles show points connected in a finite difference equation. <div id="wave:pde1:fig:mesh"></div> -->
<!-- begin figure -->
<div id="wave:pde1:fig:mesh"></div>

<p>Mesh in space and time. The circles show points connected in a finite difference equation.</p>
<img src="mov-wave/D_stencil_gpl/stencil_n_interior.png" width=500>

<!-- end figure -->

### Algebraic version of the initial conditions

We also need to replace the derivative in the initial condition ([3](#wave:pde1:ic:ut)) by a finite difference approximation. A centered difference of the type

$$
\frac{\partial}{\partial t} u(x_i,t_0)\approx
\frac{u^1_i - u^{-1}_i}{2\Delta t} = [D_{2t} u]^0_i,
$$

seems appropriate. Writing out this equation and ordering the terms give

<!-- Equation labels as ordinary links -->
<div id="wave:pde1:step3c"></div>

$$
\begin{equation}
u^{-1}_i=u^{1}_i,\quad i=0,\ldots,N_x
\label{wave:pde1:step3c} \tag{13}
\end{equation}
$$

The other initial condition can be computed by

$$
u_i^0 = I(x_i),\quad i=0,\ldots,N_x
$$

## Formulating a recursive algorithm
<div id="wave:string:alg"></div>

We assume that $u^n_i$ and $u^{n-1}_i$ are available for $i=0,\ldots,N_x$. The only unknown quantity in ([11](#wave:pde1:step3b)) is therefore $u^{n+1}_i$, which we now can solve for:

<!-- Equation labels as ordinary links -->
<div id="wave:pde1:step4"></div>

$$
\begin{equation}
u^{n+1}_i = -u^{n-1}_i + 2u^n_i +
C^2\left(u^{n}_{i+1}-2u^{n}_{i} + u^{n}_{i-1}\right)
\label{wave:pde1:step4} \tag{14}
\end{equation}
$$

We have here introduced the parameter

<!-- Equation labels as ordinary links -->
<div id="_auto5"></div>

$$
\begin{equation}
C = c\frac{\Delta t}{\Delta x},
\label{_auto5} \tag{15}
\end{equation}
$$

known as the *Courant number*.
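As a sketch (separate from the implementation developed later in the text), the interior-point update ([14](#wave:pde1:step4)) maps directly onto a single vectorized NumPy statement; the arrays `u_prev` and `u`, holding $u^{n-1}$ and $u^n$, and the value of `C` are illustrative assumptions here:

```python
import numpy as np

# Illustrative data: u_prev holds u^{n-1}, u holds u^n, C is the Courant number
Nx = 10
C = 0.8
x = np.linspace(0, 1, Nx + 1)
u_prev = np.sin(np.pi*x)   # u^{n-1} (example values)
u = np.sin(np.pi*x)        # u^n (example values)

# Vectorized form of scheme (14) for the interior points i = 1, ..., Nx-1
u_new = np.zeros_like(u)
u_new[1:-1] = -u_prev[1:-1] + 2*u[1:-1] + C**2*(u[2:] - 2*u[1:-1] + u[:-2])

# Boundary conditions u = 0 at i = 0 and i = Nx
u_new[0] = u_new[-1] = 0.0
```

The slices `u[2:]`, `u[1:-1]`, and `u[:-2]` correspond to the indices $i+1$, $i$, and $i-1$ of the stencil.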
**$C$ is the key parameter in the discrete wave equation.** We see that the discrete version of the PDE features only one parameter, $C$, which is therefore the key parameter, together with $N_x$, that governs the quality of the numerical solution (see the section [Analysis of the difference equations](wave_analysis.ipynb) for details). Both the primary physical parameter $c$ and the numerical parameters $\Delta x$ and $\Delta t$ are lumped together in $C$. Note that $C$ is a dimensionless parameter. Given that $u^{n-1}_i$ and $u^n_i$ are known for $i=0,\ldots,N_x$, we find new values at the next time level by applying the formula ([14](#wave:pde1:step4)) for $i=1,\ldots,N_x-1$. [Figure](#wave:pde1:fig:mesh) illustrates the points that are used to compute $u^3_2$. For the boundary points, $i=0$ and $i=N_x$, we apply the boundary conditions $u_i^{n+1}=0$. Even though sound reasoning leads up to ([14](#wave:pde1:step4)), there is still a minor challenge with it that needs to be resolved. Think of the very first computational step to be made. The scheme ([14](#wave:pde1:step4)) is supposed to start at $n=1$, which means that we compute $u^2$ from $u^1$ and $u^0$. Unfortunately, we do not know the value of $u^1$, so how to proceed? A standard procedure in such cases is to apply ([14](#wave:pde1:step4)) also for $n=0$. This immediately seems strange, since it involves $u^{-1}_i$, which is an undefined quantity outside the time mesh (and the time domain). 
However, we can use the initial condition ([13](#wave:pde1:step3c)) in combination with ([14](#wave:pde1:step4)) when $n=0$ to eliminate $u^{-1}_i$ and arrive at a special formula for $u_i^1$:

<!-- Equation labels as ordinary links -->
<div id="wave:pde1:step4:1"></div>

$$
\begin{equation}
u_i^1 = u^0_i + \frac{1}{2} C^2\left(u^{0}_{i+1}-2u^{0}_{i} + u^{0}_{i-1}\right)
\label{wave:pde1:step4:1} \tag{16}
\end{equation}
$$

[Figure](#wave:pde1:fig:stencil:u1) illustrates how ([16](#wave:pde1:step4:1)) connects four instead of five points: $u^1_2$, $u_1^0$, $u_2^0$, and $u_3^0$.

<!-- dom:FIGURE: [mov-wave/D_stencil_gpl/stencil_n0_interior.png, width=500] Modified stencil for the first time step. <div id="wave:pde1:fig:stencil:u1"></div> -->
<!-- begin figure -->
<div id="wave:pde1:fig:stencil:u1"></div>

<p>Modified stencil for the first time step.</p>
<img src="mov-wave/D_stencil_gpl/stencil_n0_interior.png" width=500>

<!-- end figure -->

We can now summarize the computational algorithm:

1. Compute $u^0_i=I(x_i)$ for $i=0,\ldots,N_x$

2. Compute $u^1_i$ by ([16](#wave:pde1:step4:1)) for $i=1,2,\ldots,N_x-1$ and set $u_i^1=0$ for the boundary points given by $i=0$ and $i=N_x$,

3. For each time level $n=1,2,\ldots,N_t-1$

a. apply ([14](#wave:pde1:step4)) to find $u^{n+1}_i$ for $i=1,\ldots,N_x-1$

b. set $u^{n+1}_i=0$ for the boundary points having $i=0$, $i=N_x$.

The algorithm essentially consists of moving a finite difference stencil through all the mesh points, which can be seen as an animation in a [web page](mov-wave/D_stencil_gpl/index.html) or a [movie file](mov-wave/D_stencil_gpl/movie.ogg).

## Sketch of an implementation
<div id="wave:string:impl"></div>

We start by defining some constants that will be used throughout our Devito code.
``` import numpy as np # Given mesh points as arrays x and t (x[i], t[n]), # constant c and function I for initial condition x = np.linspace(0, 2, 101) t = np.linspace(0, 2, 101) c = 1 I = lambda x: np.sin(x) dx = x[1] - x[0] dt = t[1] - t[0] C = c*dt/dx # Courant number Nx = len(x)-1 Nt = len(t)-1 C2 = C**2 # Help variable in the scheme L = 2. ``` Next, we define our 1D computational grid and create a function `u` as a symbolic `devito.TimeFunction`. We need to specify the `space_order` as 2 since our wave equation involves second-order derivatives with respect to $x$. Similarly, we specify the `time_order` as 2, as our equation involves second-order derivatives with respect to $t$. Setting these parameters allows us to use `u.dx2` and `u.dt2`. ``` from devito import Grid, TimeFunction # Initialise `u` for space and time order 2, using initialisation function I grid = Grid(shape=(Nx+1), extent=(L)) u = TimeFunction(name='u', grid=grid, time_order=2, space_order=2) u.data[:,:] = I(x[:]) ``` Now that we have initialised `u`, we can solve our wave equation for the unknown quantity $u^{n+1}_i$ using forward and backward differences in space and time. ``` from devito import Constant, Eq, solve # Set up wave equation and solve for forward stencil point in time pde = (1/c**2)*u.dt2-u.dx2 stencil = Eq(u.forward, solve(pde, u.forward)) print("LHS: %s" % stencil.lhs) print("RHS: %s" % stencil.rhs) ``` Great! From these print statements, we can see that Devito has taken the wave equation in ([1](#wave:pde1)) and solved it for $u^{n+1}_i$, giving us equation ([14](#wave:pde1:step4)). Note that `dx` is denoted as `h_x`, while `u(t, x)`, `u(t, x - h_x)` and `u(t, x + h_x)` denote the equivalent of $u^{n}_{i}$, $u^{n}_{i-1}$ and $u^{n}_{i+1}$ respectively. We also need to create a separate stencil for the first timestep, where we substitute $u^{1}_i$ for $u^{-1}_i$, as given in ([13](#wave:pde1:step3c)). 
``` stencil_init = stencil.subs(u.backward, u.forward) ``` Now we can create expressions for our boundary conditions and build the operator. The results are plotted below. ``` #NBVAL_IGNORE_OUTPUT from devito import Operator t_s = grid.stepping_dim # Boundary conditions bc = [Eq(u[t_s+1, 0], 0)] bc += [Eq(u[t_s+1, Nx], 0)] # Defining one Operator for initial timestep and one for the rest op_init = Operator([stencil_init]+bc) op = Operator([stencil]+bc) op_init.apply(time_M=1, dt=dt) op.apply(time_m=1,time_M=Nt, dt=dt) ``` We can plot our results using `matplotlib`: ``` import matplotlib.pyplot as plt plt.plot(x, u.data[-1]) plt.xlabel('x') plt.ylabel('u') plt.show() ``` # Verification Before implementing the algorithm, it is convenient to add a source term to the PDE ([1](#wave:pde1)), since that gives us more freedom in finding test problems for verification. Physically, a source term acts as a generator for waves in the interior of the domain. ## A slightly generalized model problem <div id="wave:pde2:fd"></div> We now address the following extended initial-boundary value problem for one-dimensional wave phenomena: <!-- Equation labels as ordinary links --> <div id="wave:pde2"></div> $$ \begin{equation} u_{tt} = c^2 u_{xx} + f(x,t), \quad x\in (0,L),\ t\in (0,T] \label{wave:pde2} \tag{17} \end{equation} $$ <!-- Equation labels as ordinary links --> <div id="wave:pde2:ic:u"></div> $$ \begin{equation} u(x,0) = I(x), \quad x\in [0,L] \label{wave:pde2:ic:u} \tag{18} \end{equation} $$ <!-- Equation labels as ordinary links --> <div id="wave:pde2:ic:ut"></div> $$ \begin{equation} u_t(x,0) = V(x), \quad x\in [0,L] \label{wave:pde2:ic:ut} \tag{19} \end{equation} $$ <!-- Equation labels as ordinary links --> <div id="wave:pde2:bc:0"></div> $$ \begin{equation} u(0,t) = 0, \quad t>0 \label{wave:pde2:bc:0} \tag{20} \end{equation} $$ <!-- Equation labels as ordinary links --> <div id="wave:pde2:bc:L"></div> $$ \begin{equation} u(L,t) = 0, \quad t>0 \label{wave:pde2:bc:L} 
\tag{21} \end{equation} $$ Sampling the PDE at $(x_i,t_n)$ and using the same finite difference approximations as above, yields <!-- Equation labels as ordinary links --> <div id="wave:pde2:fdop"></div> $$ \begin{equation} [D_tD_t u = c^2 D_xD_x u + f]^{n}_i \label{wave:pde2:fdop} \tag{22} \end{equation} $$ Writing this out and solving for the unknown $u^{n+1}_i$ results in <!-- Equation labels as ordinary links --> <div id="wave:pde2:step3b"></div> $$ \begin{equation} u^{n+1}_i = -u^{n-1}_i + 2u^n_i + C^2 (u^{n}_{i+1}-2u^{n}_{i} + u^{n}_{i-1}) + \Delta t^2 f^n_i \label{wave:pde2:step3b} \tag{23} \end{equation} $$ The equation for the first time step must be rederived. The discretization of the initial condition $u_t = V(x)$ at $t=0$ becomes $$ [D_{2t}u = V]^0_i\quad\Rightarrow\quad u^{-1}_i = u^{1}_i - 2\Delta t V_i, $$ which, when inserted in ([23](#wave:pde2:step3b)) for $n=0$, gives the special formula <!-- Equation labels as ordinary links --> <div id="wave:pde2:step3c"></div> $$ \begin{equation} u^{1}_i = u^0_i + \Delta t V_i + {\frac{1}{2}} C^2 \left(u^{0}_{i+1}-2u^{0}_{i} + u^{0}_{i-1}\right) + \frac{1}{2}\Delta t^2 f^0_i \label{wave:pde2:step3c} \tag{24} \end{equation} $$ ## Using an analytical solution of physical significance <div id="wave:pde2:fd:standing:waves"></div> Many wave problems feature sinusoidal oscillations in time and space. For example, the original PDE problem ([1](#wave:pde1))-([5](#wave:pde1:bc:L)) allows an exact solution <!-- Equation labels as ordinary links --> <div id="wave:pde2:test:ue"></div> $$ \begin{equation} u_e(x,t) = A\sin\left(\frac{\pi}{L}x\right) \cos\left(\frac{\pi}{L}ct\right) \label{wave:pde2:test:ue} \tag{25} \end{equation} $$ This $u_e$ fulfills the PDE with $f=0$, boundary conditions $u_e(0,t)=u_e(L,t)=0$, as well as initial conditions $I(x)=A\sin\left(\frac{\pi}{L}x\right)$ and $V=0$. 
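The scheme ([23](#wave:pde2:step3b)) together with the special first step ([24](#wave:pde2:step3c)) translates directly into a plain-NumPy time loop. The sketch below is a minimal reference implementation of the generalized problem with homogeneous Dirichlet boundaries; the function name `solver` and its argument list are our own choices, not part of the Devito implementation above:

```python
import numpy as np

def solver(I, V, f, c, L, dt, C, T):
    """Solve u_tt = c^2 u_xx + f on (0, L) with u(0,t)=u(L,t)=0,
    using the scheme (23) and the special first step (24)."""
    Nt = int(round(T/dt))
    t = np.linspace(0, Nt*dt, Nt + 1)
    dx = dt*c/C                       # mesh spacing from the Courant number
    Nx = int(round(L/dx))
    x = np.linspace(0, L, Nx + 1)
    C2 = (c*dt/(x[1] - x[0]))**2      # help variable C^2 in the scheme

    u_nm1 = I(x)                      # u^0
    u_n = np.zeros(Nx + 1)            # u^1 by the special formula (24)
    u_n[1:-1] = (u_nm1[1:-1] + dt*V(x[1:-1])
                 + 0.5*C2*(u_nm1[2:] - 2*u_nm1[1:-1] + u_nm1[:-2])
                 + 0.5*dt**2*f(x[1:-1], t[0]))
    u = np.zeros(Nx + 1)
    for n in range(1, Nt):            # general scheme (23) for u^{n+1}
        u[1:-1] = (-u_nm1[1:-1] + 2*u_n[1:-1]
                   + C2*(u_n[2:] - 2*u_n[1:-1] + u_n[:-2])
                   + dt**2*f(x[1:-1], t[n]))
        u[0] = u[-1] = 0              # boundary conditions
        u_nm1, u_n, u = u_n, u, u_nm1 # rotate the three time levels
    return u_n, x, t
```

A natural usage is to feed it the data of a known exact solution and compare the returned array against the exact values on the mesh.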
**How to use exact solutions for verification.** It is common to use such exact solutions of physical interest to verify implementations. However, the numerical solution $u^n_i$ will only be an approximation to $u_e(x_i,t_n)$. We have no knowledge of the precise size of the error in this approximation, and therefore we can never know if discrepancies between $u^n_i$ and $u_e(x_i,t_n)$ are caused by mathematical approximations or programming errors. In particular, if plots of the computed solution $u^n_i$ and the exact one ([25](#wave:pde2:test:ue)) look similar, many are tempted to claim that the implementation works. However, even if color plots look nice and the accuracy is "deemed good", there can still be serious programming errors present! The only way to use exact physical solutions like ([25](#wave:pde2:test:ue)) for serious and thorough verification is to run a series of simulations on finer and finer meshes, measure the integrated error in each mesh, and from this information estimate the empirical convergence rate of the method. An introduction to the computing of convergence rates is given in Section 3.1.6 in [[Langtangen_decay]](#Langtangen_decay). There is also a detailed example on computing convergence rates in the [verification section](../01_vib/vib_undamped.ipynb#vib:ode1:verify) of the Vibration ODEs chapter. In the present problem, one expects the method to have a convergence rate of 2 (see the section [Analysis of the difference equations](wave_analysis.ipynb)), so if the computed rates are close to 2 on a sufficiently fine mesh, we have good evidence that the implementation is free of programming mistakes. ## Manufactured solution and estimation of convergence rates <div id="wave:pde2:fd:MMS"></div> ### Specifying the solution and computing corresponding data One problem with the exact solution ([25](#wave:pde2:test:ue)) is that it requires a simplification (${V}=0, f=0$) of the implemented problem ([17](#wave:pde2))-([21](#wave:pde2:bc:L)). 
An advantage of using a *manufactured solution* is that we can test all terms in the PDE problem. The idea of this approach is to set up some chosen solution and fit the source term, boundary conditions, and initial conditions to be compatible with the chosen solution. Given that our boundary conditions in the implementation are $u(0,t)=u(L,t)=0$, we must choose a solution that fulfills these conditions. One example is

$$
u_e(x,t) = x(L-x)\sin t
$$

Inserted in the PDE $u_{tt}=c^2u_{xx}+f$ we get

$$
-x(L-x)\sin t = -c^2 2\sin t + f\quad\Rightarrow f = (2c^2 - x(L-x))\sin t
$$

The initial conditions become

$$
\begin{align*}
u(x,0) &= I(x) = 0,\\
u_t(x,0) &= V(x) = x(L-x)
\end{align*}
$$

### Defining a single discretization parameter

To verify the code, we compute the convergence rates in a series of simulations, letting each simulation use a finer mesh than the previous one. Such empirical estimation of convergence rates relies on an assumption that some measure $E$ of the numerical error is related to the discretization parameters through

$$
E = C_t\Delta t^r + C_x\Delta x^p,
$$

where $C_t$, $C_x$, $r$, and $p$ are constants. The constants $r$ and $p$ are known as the *convergence rates* in time and space, respectively. From the accuracy in the finite difference approximations, we expect $r=p=2$, since the error terms are of order $\Delta t^2$ and $\Delta x^2$. This is confirmed by truncation error analysis and other types of analysis. By using an exact solution of the PDE problem, we will next compute the error measure $E$ on a sequence of refined meshes and see if the rates $r=p=2$ are obtained. We will not be concerned with estimating the constants $C_t$ and $C_x$, simply because we are not interested in their values.

It is advantageous to introduce a single discretization parameter $h=\Delta t=\hat c \Delta x$ for some constant $\hat c$.
Since $\Delta t$ and $\Delta x$ are related through the Courant number, $\Delta t = C\Delta x/c$, we set $h=\Delta t$, and then $\Delta x = hc/C$. Now the expression for the error measure is greatly simplified: $$ E = C_t\Delta t^r + C_x\Delta x^r = C_t h^r + C_x\left(\frac{c}{C}\right)^r h^r = Dh^r,\quad D = C_t+C_x\left(\frac{c}{C}\right)^r $$ ### Computing errors We choose an initial discretization parameter $h_0$ and run experiments with decreasing $h$: $h_i=2^{-i}h_0$, $i=1,2,\ldots,m$. Halving $h$ in each experiment is not necessary, but it is a common choice. For each experiment we must record $E$ and $h$. Standard choices of error measure are the $\ell^2$ and $\ell^\infty$ norms of the error mesh function $e^n_i$: <!-- Equation labels as ordinary links --> <div id="wave:pde2:fd:MMS:E:l2"></div> $$ \begin{equation} E = ||e^n_i||_{\ell^2} = \left( \Delta t\Delta x \sum_{n=0}^{N_t}\sum_{i=0}^{N_x} (e^n_i)^2\right)^{\frac{1}{2}},\quad e^n_i = u_e(x_i,t_n)-u^n_i, \label{wave:pde2:fd:MMS:E:l2} \tag{26} \end{equation} $$ <!-- Equation labels as ordinary links --> <div id="wave:pde2:fd:MMS:E:linf"></div> $$ \begin{equation} E = ||e^n_i||_{\ell^\infty} = \max_{i,n} |e^n_i| \label{wave:pde2:fd:MMS:E:linf} \tag{27} \end{equation} $$ In Python, one can compute $\sum_{i}(e^{n}_i)^2$ at each time step and accumulate the value in some sum variable, say `e2_sum`. At the final time step one can do `sqrt(dt*dx*e2_sum)`. For the $\ell^\infty$ norm one must compare the maximum error at a time level (`e.max()`) with the global maximum over the time domain: `e_max = max(e_max, e.max())`. 
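In code, the accumulation just described might look as follows. This is a minimal sketch: the error mesh function `e` is synthetic here, standing in for `u_e(x, t[n]) - u` inside a real time loop:

```python
import numpy as np

dt, dx = 0.1, 0.2
x = np.linspace(0, 2, 11)

e2_sum = 0.0            # accumulates sum_i (e_i^n)^2 over all time levels
e_max = 0.0             # running l-infinity norm over space and time
for n in range(5):      # stand-in for the real time loop
    e = 0.01*np.sin(x + n*dt)          # synthetic error mesh function
    e2_sum += np.sum(e**2)
    e_max = max(e_max, np.abs(e).max())

E_l2 = np.sqrt(dt*dx*e2_sum)           # the l^2 norm (26)
```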
An alternative error measure is to use a spatial norm at one time step only, e.g., the end time $T$ ($n=N_t$): <!-- Equation labels as ordinary links --> <div id="_auto6"></div> $$ \begin{equation} E = ||e^n_i||_{\ell^2} = \left( \Delta x\sum_{i=0}^{N_x} (e^n_i)^2\right)^{\frac{1}{2}},\quad e^n_i = u_e(x_i,t_n)-u^n_i, \label{_auto6} \tag{28} \end{equation} $$ <!-- Equation labels as ordinary links --> <div id="_auto7"></div> $$ \begin{equation} E = ||e^n_i||_{\ell^\infty} = \max_{0\leq i\leq N_x} |e^{n}_i| \label{_auto7} \tag{29} \end{equation} $$ The important point is that the error measure ($E$) for the simulation is represented by a single number. ### Computing rates Let $E_i$ be the error measure in experiment (mesh) number $i$ (not to be confused with the spatial index $i$) and let $h_i$ be the corresponding discretization parameter ($h$). With the error model $E_i = Dh_i^r$, we can estimate $r$ by comparing two consecutive experiments: $$ \begin{align*} E_{i+1}& =D h_{i+1}^{r},\\ E_{i}& =D h_{i}^{r} \end{align*} $$ Dividing the two equations eliminates the (uninteresting) constant $D$. Thereafter, solving for $r$ yields $$ r = \frac{\ln E_{i+1}/E_{i}}{\ln h_{i+1}/h_{i}} $$ Since $r$ depends on $i$, i.e., which simulations we compare, we add an index to $r$: $r_i$, where $i=0,\ldots,m-2$, if we have $m$ experiments: $(h_0,E_0),\ldots,(h_{m-1}, E_{m-1})$. In our present discretization of the wave equation we expect $r=2$, and hence the $r_i$ values should converge to 2 as $i$ increases. ## Constructing an exact solution of the discrete equations <div id="wave:pde2:fd:verify:quadratic"></div> With a manufactured or known analytical solution, as outlined above, we can estimate convergence rates and see if they have the correct asymptotic behavior. Experience shows that this is a quite good verification technique in that many common bugs will destroy the convergence rates. 
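The rate estimates $r_i$ described above are straightforward to compute. The sketch below (the function name is our own) feeds the formula synthetic data that follows the error model $E=Dh^r$ with $r=2$ exactly, so all computed rates should come out as 2 up to rounding:

```python
import numpy as np

def convergence_rates(h, E):
    """r_i = ln(E_{i+1}/E_i) / ln(h_{i+1}/h_i) for consecutive experiments."""
    return [np.log(E[i+1]/E[i]) / np.log(h[i+1]/h[i])
            for i in range(len(h) - 1)]

h = [0.1*2**(-i) for i in range(5)]   # halving h in each experiment
E = [3.0*hi**2 for hi in h]           # synthetic errors obeying E = D h^2
rates = convergence_rates(h, E)       # all values close to 2
```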
A significantly better test, though, would be to check that the numerical solution is exactly what it should be. This will in general require exact knowledge of the numerical error, which we do not normally have (although we in the section [Analysis of the difference equations](wave_analysis.ipynb) establish such knowledge in simple cases). However, it is possible to look for solutions where we can show that the numerical error vanishes, i.e., the solution of the original continuous PDE problem is also a solution of the discrete equations. This property often arises if the exact solution of the PDE is a lower-order polynomial. (Truncation error analysis leads to error measures that involve derivatives of the exact solution. In the present problem, the truncation error involves 4th-order derivatives of $u$ in space and time. Choosing $u$ as a polynomial of degree three or less will therefore lead to vanishing error.)

We shall now illustrate the construction of an exact solution to both the PDE itself and the discrete equations. Our chosen manufactured solution is quadratic in space and linear in time. More specifically, we set

<!-- Equation labels as ordinary links -->
<div id="wave:pde2:fd:verify:quadratic:uex"></div>

$$
\begin{equation}
u_e (x,t) = x(L-x)(1+{\frac{1}{2}}t),
\label{wave:pde2:fd:verify:quadratic:uex} \tag{30}
\end{equation}
$$

which by insertion in the PDE leads to $f(x,t)=2(1+{\frac{1}{2}}t)c^2$. This $u_e$ fulfills the boundary conditions $u=0$ and demands $I(x)=x(L-x)$ and $V(x)={\frac{1}{2}}x(L-x)$.
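The fitted source term and initial conditions can be double-checked symbolically. This is a minimal sketch, assuming SymPy is available; the variable names are our own:

```python
import sympy as sp

x, t, c, L = sp.symbols('x t c L')
u_e = x*(L - x)*(1 + sp.Rational(1, 2)*t)           # the chosen solution (30)

# f = u_tt - c^2 u_xx, which should equal 2*(1 + t/2)*c^2
f = sp.simplify(sp.diff(u_e, t, 2) - c**2*sp.diff(u_e, x, 2))

I = u_e.subs(t, 0)                                  # initial value: x*(L - x)
V = sp.diff(u_e, t).subs(t, 0)                      # initial velocity: x*(L - x)/2
```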
To realize that the chosen $u_e$ is also an exact solution of the discrete equations, we first remind ourselves that $t_n=n\Delta t$ so that

<!-- Equation labels as ordinary links -->
<div id="_auto8"></div>

$$
\begin{equation}
\lbrack D_tD_t t^2\rbrack^n = \frac{t_{n+1}^2 - 2t_n^2 + t_{n-1}^2}{\Delta t^2} = \frac{((n+1)^2 -2n^2 + (n-1)^2)\Delta t^2}{\Delta t^2} = 2,
\label{_auto8} \tag{31}
\end{equation}
$$

<!-- Equation labels as ordinary links -->
<div id="_auto9"></div>

$$
\begin{equation}
\lbrack D_tD_t t\rbrack^n = \frac{t_{n+1} - 2t_n + t_{n-1}}{\Delta t^2} = \frac{((n+1) -2n + (n-1))\Delta t}{\Delta t^2} = 0
\label{_auto9} \tag{32}
\end{equation}
$$

Hence,

$$
[D_tD_t u_e]^n_i = x_i(L-x_i)[D_tD_t (1+{\frac{1}{2}}t)]^n = x_i(L-x_i){\frac{1}{2}}[D_tD_t t]^n = 0
$$

Similarly, we get that

$$
\begin{align*}
\lbrack D_xD_x u_e\rbrack^n_i &= (1+{\frac{1}{2}}t_n)\lbrack D_xD_x (xL-x^2)\rbrack_i\\
&= (1+{\frac{1}{2}}t_n)\lbrack LD_xD_x x - D_xD_x x^2\rbrack_i \\
&= -2(1+{\frac{1}{2}}t_n)
\end{align*}
$$

Now, $f^n_i = 2(1+{\frac{1}{2}}t_n)c^2$, which results in

$$
[D_tD_t u_e - c^2D_xD_xu_e - f]^n_i = 0 + c^2 2(1 + {\frac{1}{2}}t_{n}) - 2(1+{\frac{1}{2}}t_n)c^2 = 0
$$

Moreover, $u_e(x_i,0)=I(x_i)$, $\partial u_e/\partial t = V(x_i)$ at $t=0$, and $u_e(x_0,t)=u_e(x_{N_x},t)=0$. Also the modified scheme for the first time step is fulfilled by $u_e(x_i,t_n)$.

Therefore, the exact solution $u_e(x,t)=x(L-x)(1+t/2)$ of the PDE problem is also an exact solution of the discrete problem. This means that we know beforehand what numbers the numerical algorithm should produce. We can use this fact to check that the computed $u^n_i$ values from an implementation equal $u_e(x_i,t_n)$, within machine precision. This result is valid *regardless of the mesh spacings* $\Delta x$ and $\Delta t$!
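The identities ([31](#_auto8))–([32](#_auto9)) behind this argument hold for any time step, which is easy to confirm numerically with arbitrary values of $\Delta t$ and $t_n$ (a minimal check; the helper name is our own):

```python
import numpy as np

dt = 0.3      # arbitrary step; the identities hold for any dt
tn = 1.7      # arbitrary mesh point

def DtDt(u, t):
    """Centered second difference applied to a function of t."""
    return (u(t + dt) - 2*u(t) + u(t - dt)) / dt**2

# [D_t D_t t^2]^n = 2 and [D_t D_t t]^n = 0, exactly up to rounding
r2 = DtDt(lambda t: t**2, tn)
r1 = DtDt(lambda t: t, tn)
```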
Nevertheless, there might be stability restrictions on $\Delta x$ and $\Delta t$, so the test can only be run for a mesh that is compatible with the stability criterion (which in the present case is $C\leq 1$, to be derived later). **Notice.** A product of quadratic or linear expressions in the various independent variables, as shown above, will often fulfill both the PDE problem and the discrete equations, and can therefore be very useful solutions for verifying implementations. However, for 1D wave equations of the type $u_{tt}=c^2u_{xx}$ we shall see that there is always another much more powerful way of generating exact solutions (which consists in just setting $C=1$ (!), as shown in the section [Analysis of the difference equations](wave_analysis.ipynb)).
# Non-linear dependencies amongst the SDGs and climate change by distance correlation

We start with investigating dependencies amongst the SDGs on different levels. The method we use to investigate these dependencies should rely on as few assumptions as possible. A Pearson linear correlation coefficient or a rank correlation coefficient is therefore not our choice, since these assume linearity and monotonicity, respectively. We choose to compute the [distance correlation](https://projecteuclid.org/euclid.aos/1201012979), precisely the [partial distance correlation](https://projecteuclid.org/download/pdfview_1/euclid.aos/1413810731), because of the following properties:

1. we have an absolute measure of dependence ranging from $0$ to $1$, $0 \leq \mathcal{R}(X,Y) \leq 1$,
2. $\mathcal{R}(X,Y) = 0$ if and only if $X$ and $Y$ are independent,
3. $\mathcal{R}(X,Y) = \mathcal{R}(Y,X)$,
4. we are able to investigate non-linear and non-monotone relationships,
5. we can find dependencies between indicators with different numbers of measurements,
6. the only assumption we need to make is that the probability distributions have finite first moments.

The conditional distance correlation has the advantage that we ignore the influence of any other targets or goals when we compute the correlation between any two targets or goals. This procedure is also called controlling for confounders.

The **distance correlation** is defined as:

$$
\mathcal{R}^2(X,Y) =
\begin{cases}
\frac{\mathcal{V}^2 (X,Y)}{\sqrt{\mathcal{V}^2 (X)\mathcal{V}^2 (Y)}} &\text{, if $\mathcal{V}^2 (X)\mathcal{V}^2 (Y) > 0$} \\
0 &\text{, if $\mathcal{V}^2 (X)\mathcal{V}^2 (Y) = 0$}
\end{cases}
$$

where

$$
\mathcal{V}^2 (X,Y) = \| f_{X,Y}(t) - f_X(t)f_Y(t) \|^2
$$

is the distance covariance with **characteristic functions** $f(t)$. Bear in mind that characteristic functions include the imaginary unit $i$, $i^2 = -1$:

$$
f_X(t) = \mathbb{E}[e^{itX}]
$$

Thus, we are in the space of complex numbers $\mathbb{C}$.
Unfortunately, this means we can most likely not find exact results, but we'll get back to this later under Estimators.

The **conditional distance correlation** is defined as:

$$
\mathcal{R}^2(X,Y \ | \ Z) =
\begin{cases}
\frac{\mathcal{R}^2 (X,Y) - \mathcal{R}^2 (X,Z) \mathcal{R}^2 (Y,Z)}{\sqrt{1 - \mathcal{R}^4 (X,Z)} \sqrt{1 - \mathcal{R}^4 (Y,Z)}} &\text{, if $\mathcal{R}^4 (X,Z) \neq 1$ and $\mathcal{R}^4 (Y,Z) \neq 1$} \\
0 &\text{, if $\mathcal{R}^4 (X,Z) = 1$ and $\mathcal{R}^4 (Y,Z) = 1$}
\end{cases}
$$

# Distance covariance

Let's dismantle the distance covariance equation to know what we actually compute in the distance correlation:

$$
\mathcal{V}^2 (X,Y) = \| f_{X,Y}(t) - f_X(t) \ f_Y(t) \|^2 = \frac{1}{c_p c_q} \int_{\mathbb{R}^{p+q}} \frac{| f_{X,Y}(t) - f_X(t)f_Y(t) |^2}{| t |_p^{1+p} \ | t |_q^{1+q}} dt
$$

where

$$
c_d = \frac{\pi^{(1+d)/2}}{\Gamma \Big( (1+d)/2 \Big)}
$$

where the (complete) Gamma function $\Gamma$ is

$$
\Gamma (z) = \int_0^{\infty} x^{z-1} \ e^{-x} \ dx
$$

with $z \in \mathbb{R}^{+}$.

$p$ and $q$ are the numbers of samples of the two time-series. We can see each time-series as a random vector with multiple samples available for each time point; however, the number of samples must not vary across time points within the same time-series. We can write this as:

$$X \in \mathbb{R}^p$$

$$Y \in \mathbb{R}^q$$

A preliminary conclusion of this formulation: **we can compute dependencies between time-series with different numbers of samples**.

But we still have some terms in the distance covariance $\mathcal{V}^2 (X,Y)$ which we need to define: $| t |_p^{1+p}$ is the Euclidean norm of $t$ in $\mathbb{R}^p$ raised to the power $1+p$, and $| t |_q^{1+q}$ is the Euclidean norm of $t$ in $\mathbb{R}^q$ raised to the power $1+q$.
The numerator in the integral of $\mathcal{V}^2 (X,Y)$ is:

$$
| f_{X,Y}(t) - f_X(t) \ f_Y(t) |^2 = \Big( 1- |f_X(t) | ^2 \Big) \ \Big( 1- |f_Y(t) |^2 \Big)
$$

where $|f_X(t) |$ and $|f_Y(t) |$ are the absolute values (moduli) of the characteristic functions $f(t)$ with $p$ and $q$ samples, respectively.

## Estimators

Since the characteristic functions include the imaginary unit $i$, we cannot recover the exact solution for the distance covariance. However, we can estimate it in a quite simple form. We compute these estimators according to [Huo & Szekely, 2016](https://arxiv.org/abs/1410.1503).

We denote the pairwise distances of the $X$ observations by $a_{ij} := \|X_i - X_j \|$ and of the $Y$ observations by $b_{ij} := \|Y_i - Y_j \|$ for $i,j = 1, ..., n$, where $n$ is the number of measurements in $X$ and $Y$. The corresponding distance matrices are denoted by $(A_{ij})^n_{i,j=1}$ and $(B_{ij})^n_{i,j=1}$, where

$$
A_{ij} =
\begin{cases}
a_{ij} - \frac{1}{n} \sum_{l=1}^n a_{il} - \frac{1}{n} \sum_{k=1}^n a_{kj} + \frac{1}{n^2} \sum_{k,l=1}^n a_{kl} & i \neq j; \\
0 & i = j.
\end{cases}
$$

and

$$
B_{ij} =
\begin{cases}
b_{ij} - \frac{1}{n} \sum_{l=1}^n b_{il} - \frac{1}{n} \sum_{k=1}^n b_{kj} + \frac{1}{n^2} \sum_{k,l=1}^n b_{kl} & i \neq j; \\
0 & i = j.
\end{cases}
$$

Having computed these, we can estimate the sample distance covariance $\hat{\mathcal{V}}^2(X,Y)$ by

$$
\hat{\mathcal{V}}^2(X,Y) = \frac{1}{n^2} \sum_{i,j=1}^n A_{ij} \ B_{ij}
$$

The corresponding sample variance $\hat{\mathcal{V}}^2(X)$ is consequently:

$$
\hat{\mathcal{V}}^2(X) = \frac{1}{n^2} \sum_{i,j=1}^n A^2_{ij}
$$

Then, we can scale these covariances to finally arrive at the sample distance correlation $\hat{\mathcal{R}}^2(X,Y)$:

$$
\hat{\mathcal{R}}^2(X,Y) =
\begin{cases}
\frac{\hat{\mathcal{V}}^2 (X,Y)}{\sqrt{\hat{\mathcal{V}}^2 (X)\hat{\mathcal{V}}^2 (Y)}} &\text{, if $\hat{\mathcal{V}}^2 (X)\hat{\mathcal{V}}^2 (Y) > 0$} \\
0 &\text{, if $\hat{\mathcal{V}}^2 (X)\hat{\mathcal{V}}^2 (Y) = 0$}
\end{cases}
$$

### Unbiased estimators

These estimators are biased, but we can define unbiased estimators of the distance covariance $\hat{\mathcal{V}}^2(X,Y)$ and call them $\Omega_n(X,Y)$. We must first redefine our distance matrices $(A_{ij})^n_{i,j=1}$ and $(B_{ij})^n_{i,j=1}$, which we will call $(\tilde{A}_{ij})^n_{i,j=1}$ and $(\tilde{B}_{ij})^n_{i,j=1}$:

$$
\tilde{A}_{ij} =
\begin{cases}
a_{ij} - \frac{1}{n-2} \sum_{l=1}^n a_{il} - \frac{1}{n-2} \sum_{k=1}^n a_{kj} + \frac{1}{(n-1)(n-2)} \sum_{k,l=1}^n a_{kl} & i \neq j; \\
0 & i = j.
\end{cases}
$$

and

$$
\tilde{B}_{ij} =
\begin{cases}
b_{ij} - \frac{1}{n-2} \sum_{l=1}^n b_{il} - \frac{1}{n-2} \sum_{k=1}^n b_{kj} + \frac{1}{(n-1)(n-2)} \sum_{k,l=1}^n b_{kl} & i \neq j; \\
0 & i = j.
\end{cases}
$$

Finally, we can compute the unbiased estimator $\Omega_n(X,Y)$ for $\mathcal{V}^2(X,Y)$ as the dot product $\langle \tilde{A}, \tilde{B} \rangle$:

$$
\Omega_n(X,Y) = \langle \tilde{A}, \tilde{B} \rangle = \frac{1}{n(n-3)} \sum_{i,j=1}^n \tilde{A}_{ij} \ \tilde{B}_{ij}
$$

Interestingly, [Lyons (2013)](https://arxiv.org/abs/1106.5758) showed that not only the sample distance correlation but also the population distance correlation can be computed without characteristic functions.
Lyons' result is good to acknowledge, but it is not necessary to focus on it here.

# Conditional distance covariance

We start with computing the unbiased distance matrices $(\tilde{A}_{ij})^n_{i,j=1}$, $(\tilde{B}_{ij})^n_{i,j=1}$, and $(\tilde{C}_{ij})^n_{i,j=1}$ for $X$, $Y$, and $Z$, respectively, as we have done previously for the distance covariance. We define the dot product

$$
\Omega_n(X,Y) = \langle \tilde{A}, \tilde{B} \rangle = \frac{1}{n(n-3)} \sum_{i,j=1}^n \tilde{A}_{ij} \tilde{B}_{ij}
$$

and project the sample $x$ onto $z$ as

$$
P_z (x) = \frac{\langle \tilde{A}, \tilde{C} \rangle}{\langle \tilde{C}, \tilde{C} \rangle} \tilde{C} .
$$

The complementary projection is consequently

$$
P_{z^{\bot}} (x) = \tilde{A} - P_z (x) = \tilde{A} - \frac{\langle \tilde{A}, \tilde{C} \rangle}{\langle \tilde{C}, \tilde{C} \rangle} \tilde{C} .
$$

Hence, the sample conditional distance covariance is

$$
\hat{\mathcal{V}}^2(X,Y \ | \ Z) = \langle P_{z^{\bot}} (x), P_{z^{\bot}} (y) \rangle .
$$

Then, we can scale these covariances to finally arrive at the sample conditional distance correlation $\hat{\mathcal{R}}^2(X,Y \ | \ Z)$:

$$
\hat{\mathcal{R}}^2(X,Y \ | \ Z) =
\begin{cases}
\frac{\langle P_{z^{\bot}} (x), P_{z^{\bot}} (y) \rangle}{\| P_{z^{\bot}} (x) \| \ \| P_{z^{\bot}} (y) \|} &\text{, if} \ \| P_{z^{\bot}} (x) \| \ \| P_{z^{\bot}} (y) \| \neq 0 \\
0 &\text{, if} \ \| P_{z^{\bot}} (x) \| \ \| P_{z^{\bot}} (y) \| = 0
\end{cases}
$$

## Implementation

For our computations, we'll use the packages [`dcor`](https://dcor.readthedocs.io/en/latest/?badge=latest) for the partial distance correlation and [`community`](https://github.com/taynaud/python-louvain) for the clustering.
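Before handing the computation over to `dcor`, the U-centering and the unbiased estimator $\Omega_n$ defined above can be sanity-checked in a few lines of plain NumPy. This is a minimal sketch for 1D samples (the function names are our own; `dcor` provides equivalent, optimized routines). A useful property to verify is that every row and column of a U-centered matrix sums to zero, and that $\Omega_n(X,X)$, being a sum of squares, is non-negative:

```python
import numpy as np

def u_centered(x):
    """U-centered pairwise distance matrix (A-tilde) of a 1D sample."""
    n = len(x)
    a = np.abs(x[:, None] - x[None, :])     # a_ij = |x_i - x_j|
    row = a.sum(axis=1, keepdims=True)      # sum_l a_il
    col = a.sum(axis=0, keepdims=True)      # sum_k a_kj
    A = a - row/(n - 2) - col/(n - 2) + a.sum()/((n - 1)*(n - 2))
    np.fill_diagonal(A, 0.0)                # A_ii = 0 by definition
    return A

def omega(x, y):
    """Unbiased estimator Omega_n(X, Y) of the squared distance covariance."""
    n = len(x)
    return (u_centered(x)*u_centered(y)).sum() / (n*(n - 3))

rng = np.random.default_rng(0)
x = rng.standard_normal(20)
A = u_centered(x)
# rows of A sum to zero, and omega(x, x) >= 0
```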
```
import dcor
import numpy as np
import pickle
import itertools
import pandas as pd
import os
import math
from tqdm.notebook import tqdm
import matplotlib.pyplot as plt
import seaborn as sns
import networkx as nx
import matplotlib.image as mpimg
from matplotlib.offsetbox import OffsetImage, AnnotationBbox
from community import community_louvain as community
from scipy.spatial import distance
from dcor._dcor_internals import _u_distance_matrix, u_complementary_projection
from sklearn.manifold import MDS
import gc
import warnings
warnings.filterwarnings('ignore')
```

### Loading standardised imputed data set

First of all, we load the standardised imputed data set which we generated in the previous notebook.

```
#dict_all = pickle.load(open('utils/data/dict_all_wb.pkl', 'rb'))
dict_all_std = pickle.load(open('utils/data/dict_all_wb_std.pkl', 'rb'))
#indicators_values_i = pickle.load(open('utils/data/indicators_values_i_up_wb.pkl', 'rb'))
targets_values_i = pickle.load(open('utils/data/targets_values_i_up_arr_wb.pkl', 'rb'))
goals_values_i = pickle.load(open('utils/data/goals_values_i_up_arr_wb.pkl', 'rb'))

# check whether T appended
len(targets_values_i['Belgium'])

# read amended csv file
c = pd.read_csv('utils/countries_wb.csv', dtype=str, delimiter=';', header=None)
countries = list(c[0])

groups = pd.read_csv(r'utils/groups.csv')
groups.replace({"Democratic People's Republic of Korea": "Korea, Dem. People's Rep.",
                'Gambia': 'Gambia, The',
                'United Kingdom of Great Britain and Northern Ireland': 'United Kingdom',
                'Congo': 'Congo, Rep.',
                'Democratic Republic of the Congo': 'Congo, Dem. Rep.',
                'Czechia': 'Czech Republic',
                'Iran (Islamic Republic of)': 'Iran, Islamic Rep.',
                "Côte d'Ivoire": "Cote d'Ivoire",
                'Kyrgyzstan': 'Kyrgyz Republic',
                "Lao People's Democratic Republic": 'Lao PDR',
                'Republic of Moldova': 'Moldova',
                'Micronesia (Federated States of)': 'Micronesia, Fed. Sts.',
                'Slovakia': 'Slovak Republic',
                'Viet Nam': 'Vietnam',
                'Egypt': 'Egypt, Arab Rep.',
                'United Republic of Tanzania': 'Tanzania',
                'United States of America': 'United States',
                'Venezuela (Bolivarian Republic of)': 'Venezuela, RB',
                'Yemen': 'Yemen, Rep.',
                'Bahamas': 'Bahamas, The',
                'Bolivia (Plurinational State of)': 'Bolivia'}, inplace=True)

info = pd.read_csv(r'utils/wb_info.csv', header=None)

# removes some countries in-place
countries.remove('Micronesia, Fed. Sts.')
groups['Global South'].drop(index=1, inplace=True)
```

We later compute the correlations on an indicator level, but this is too detailed for any network visualisation and for an overarching understanding. Hence, we first group all sub-indicators on an indicator level. Then, we compute the distance correlations for the indicators, targets and goals. We work with the `info` file again, so we don't need to assign all of this by hand.

```
# check
info

# check
#targets_values_i['France'].tail()
```

We would like to have values for targets, so we must, first of all, generate a list of all unique **targets**.

```
targets = list(info[4].unique())
dict_targets = {}
for target in targets:
    t = info[0].where(info[4] == target)
    dict_targets[target] = [i for i in t if str(i) != 'nan']

# check
dict_targets['1.2']
```

Finally, we also generate a list of all unique **goals**.

```
goals = list(info[3].unique())
dict_goals = {}
for goal in goals:
    g = info[4].where(info[3] == goal)
    dict_goals[goal] = [t for t in g if str(t) != 'nan']
    dict_goals[goal] = list(set(dict_goals[goal]))

# check
print(dict_goals['13'])
```

## Distance correlations between goals

The next step is to compute the distance correlations on a goal level. We work with the **concatenated time-series** to compute the conditioned distance correlation directly on goal-level data. Visually speaking, this means that we fit one non-linear function to the data for all targets of these two goals.
Since goals often have diverse targets, this can amount to fitting a non-linear curve to very noisy data. ## Working with concatenated time-series ### Conditioning iteratively on subsets of joint distributions of all goals We condition pairs of two goals iteratively on subsets of all remaining goals. We start with conditioning on the empty set, i.e. we compute the pairwise distance correlation first. Afterwards, we enlarge the conditioning set step by step until it contains all remaining variables. These sets are represented by the joint distributions of the goals contained in them. We need to condition on all **subsets** of the remaining SDGs in order to isolate the dependence that stems solely from the two SDGs under consideration: ``` def combinations(iterable, r): # combinations('ABCD', 2) --> AB AC AD BC BD CD # combinations(range(4), 3) --> 012 013 023 123 pool = tuple(iterable) n = len(pool) if r > n: return indices = list(range(r)) yield list(pool[i] for i in indices) while True: for i in reversed(range(r)): if indices[i] != i + n - r: break else: return indices[i] += 1 for j in range(i+1, r): indices[j] = indices[j-1] + 1 yield list(pool[i] for i in indices) def combinations_tuple(iterable, r): # combinations('ABCD', 2) --> AB AC AD BC BD CD # combinations(range(4), 3) --> 012 013 023 123 pool = tuple(iterable) n = len(pool) if r > n: return indices = list(range(r)) yield tuple(pool[i] for i in indices) while True: for i in reversed(range(r)): if indices[i] != i + n - r: break else: return indices[i] += 1 for j in range(i+1, r): indices[j] = indices[j-1] + 1 yield tuple(pool[i] for i in indices) def product(pool_0, pool_1): #result = [[x, y]+[z] for x, y in pool_0 for z in pool_1 if x not in z and y not in z] # ~ 10 Mio rows result = [[x, y]+[z] for x, y in pool_0 for z in pool_1] # ~ 40 Mio rows for prod in result: yield tuple(prod) # create list out of all unique combinations of goals g_combinations =
list(combinations(goals, 2)) conditions_g = [] conditions_g_tuple = [] for i in range(1, 18): conditions_g.extend(list(combinations(goals, i))) conditions_g_tuple.extend(tuple(combinations_tuple(goals, i))) # divide conditions_g_tuple into four contiguous sub-lists to save memory # (slice ends are exclusive, so each slice must start exactly where the previous one ended; # starting at +1 would silently drop three conditioning sets) conditions_g_tuple_1 = conditions_g_tuple[:int(len(conditions_g_tuple)/4)] conditions_g_tuple_2 = conditions_g_tuple[int(len(conditions_g_tuple)/4):2*int(len(conditions_g_tuple)/4)] conditions_g_tuple_3 = conditions_g_tuple[2*int(len(conditions_g_tuple)/4):3*int(len(conditions_g_tuple)/4)] conditions_g_tuple_4 = conditions_g_tuple[3*int(len(conditions_g_tuple)/4):] pairs = list(product(g_combinations, conditions_g_tuple)) pairs_g0 = pd.DataFrame.from_records(pairs, columns=['pair_0', 'pair_1', 'condition']) pairs_1 = list(product(g_combinations, conditions_g_tuple_1)) pairs_g0_1 = pd.DataFrame.from_records(pairs_1, columns=['pair_0', 'pair_1', 'condition']) pairs_2 = list(product(g_combinations, conditions_g_tuple_2)) pairs_g0_2 = pd.DataFrame.from_records(pairs_2, columns=['pair_0', 'pair_1', 'condition']) pairs_3 = list(product(g_combinations, conditions_g_tuple_3)) pairs_g0_3 = pd.DataFrame.from_records(pairs_3, columns=['pair_0', 'pair_1', 'condition']) pairs_4 = list(product(g_combinations, conditions_g_tuple_4)) pairs_g0_4 = pd.DataFrame.from_records(pairs_4, columns=['pair_0', 'pair_1', 'condition']) # how many rows?
print(len(pairs_g0)) print(len(pairs_g0_1), len(pairs_g0_2), len(pairs_g0_3), len(pairs_g0_4)) # adding empty condition set for pairwise dcor pairs_g1 = pd.DataFrame.from_records(data=g_combinations, columns=['pair_0', 'pair_1']) pairs_g1['condition'] = '0' ``` # Groups ``` # data preparation groups_prep_g = {} for group in groups: print(group) groups_prep_g[group] = np.empty(18, dtype=object) for g, goal in enumerate(goals): g_list = [] for country in groups[group].dropna(): g_list.append(np.asarray(goals_values_i[country][g])) groups_prep_g[group][g] = np.asarray(g_list) ``` Now we call these data in our `dcor` computations. We first compute the pairwise distance covariance and correlation, then the partial ones with conditioning on all the previously defined sets in `pairs_g`. ### Preparations Filtering out the conditions that contain goals $X$ (`pair_0`) or $Y$ (`pair_1`): ``` import multiprocessing as mp print("Number of processors: ", mp.cpu_count()) # CHECKPOINT pairs_g0_left_0 = pd.read_csv('utils/pairs_g0_left_0.zip', dtype=str, compression='zip') pairs_g0_left_0_1 = pd.read_csv('utils/pairs_g0_left_0_1.zip', dtype=str, compression='zip') pairs_g0_left_0_2 = pd.read_csv('utils/pairs_g0_left_0_2.zip', dtype=str, compression='zip') pairs_g0_left_0_3 = pd.read_csv('utils/pairs_g0_left_0_3.zip', dtype=str, compression='zip') pairs_g0_left_0_4 = pd.read_csv('utils/pairs_g0_left_0_4.zip', dtype=str, compression='zip') # check pairs_g0_left_0_3.tail() pairs_g0_left_0.shape[0] / 153 len(g_combinations) ``` # With `multiprocessing` parallelisation ### Partial distance correlation ``` def partial_distance_cor(row): pair_0, pair_1, cond = row if pair_0=='T': pair_0 = 18 if pair_1=='T': pair_1 = 18 pair_0_array = groups_prep_g[group][int(pair_0)-1] pair_1_array = groups_prep_g[group][int(pair_1)-1] condition_array = conditions_dict[str(cond)].T return dcor.partial_distance_correlation(pair_0_array, pair_1_array, condition_array)**2 #groups.drop(columns=['Global 
North', 'Global South'], inplace=True) groups.columns # groups dict_cor_goals_groups_2_cond = {} for group in ['Global South']: print(group) #dict_cor_goa_c = pairs_g0_left_0.copy(deep=True) dict_cor_goa_c = pairs_g0_left_0_4.copy(deep=True) # pairs_g0_left_0 has all non-empty conditional sets # preparing conditional set conditions_dict = {} #for cond in conditions_g_tuple: for cond in conditions_g_tuple_4: condition = [] for c in cond: if c=='T': condition.extend(groups_prep_g[group][17].T) else: condition.extend(groups_prep_g[group][int(c)-1].T) conditions_dict[str(cond)] = np.asarray(condition) # partial distance correlation pool = mp.Pool(int(mp.cpu_count()/2)) dict_cor_goa_c_list = dict_cor_goa_c.values.tolist() print('start dcor...') cor_results = pool.map(partial_distance_cor, dict_cor_goa_c_list, chunksize=1000) pool.close() pool.join() dict_cor_goa_c['dcor'] = cor_results print('...dcor done') # find minimum distance correlation between any two goals dict_cor_goa_con = dict_cor_goa_c.groupby(['pair_0', 'pair_1'])['dcor'].apply(list).reset_index(name='list_dcor') for i, row_con in dict_cor_goa_con.iterrows(): dict_cor_goa_con.loc[i, 'min_dcor'] = min(dict_cor_goa_con.loc[i, 'list_dcor']) dict_cor_goa_con.drop(columns=['list_dcor'], inplace=True) # finding conditional set of minimum partial distance correlation dict_cor_goa_cond = dict_cor_goa_con.merge(dict_cor_goa_c, left_on='min_dcor', right_on='dcor').drop(['pair_0_y', 'pair_1_y', 'dcor'], axis=1).rename(columns={'pair_0_x': 'pair_0', 'pair_1_x': 'pair_1'}) dict_cor_goals_groups_2_cond[group] = dict_cor_goa_cond # save every group separately to save memory #g_cor = open('distance_cor/goals/dict_cor_goals_groups_2_cond_{}.pkl'.format(group), 'wb') g_cor = open('distance_cor/goals/dict_cor_goals_groups_2_cond_{}_4.pkl'.format(group), 'wb') pickle.dump(dict_cor_goals_groups_2_cond, g_cor) g_cor.close() gc.collect() # for Global South (disaggregated because of memory restrictions) dict_GS_1 = 
pickle.load(open('distance_cor/goals/dict_cor_goals_groups_2_cond_Global South_1.pkl', 'rb')) dict_GS_2 = pickle.load(open('distance_cor/goals/dict_cor_goals_groups_2_cond_Global South_2.pkl', 'rb')) dict_GS_3 = pickle.load(open('distance_cor/goals/dict_cor_goals_groups_2_cond_Global South_3.pkl', 'rb')) dict_GS_4 = pickle.load(open('distance_cor/goals/dict_cor_goals_groups_2_cond_Global South_4.pkl', 'rb')) cor_goals_continents_2_GS = pd.concat([dict_GS_1['Global South'], dict_GS_2['Global South'], dict_GS_3['Global South'], dict_GS_4['Global South']]) # find minimum distance correlation between any two goals dict_cor_goa_con = cor_goals_continents_2_GS.groupby(['pair_0', 'pair_1'])['min_dcor'].apply(list).reset_index(name='list_dcor') for i, row_c in dict_cor_goa_con.iterrows(): dict_cor_goa_con.loc[i, 'min_dcor'] = min(dict_cor_goa_con.loc[i, 'list_dcor']) dict_cor_goa_con.drop(columns=['list_dcor'], inplace=True) # finding conditional set of minimum partial distance correlation dict_cor_goa_cond = dict_cor_goa_con.merge(cor_goals_continents_2_GS, left_on='min_dcor', right_on='min_dcor').drop(['pair_0_y', 'pair_1_y'], axis=1).rename(columns={'pair_0_x': 'pair_0', 'pair_1_x': 'pair_1'}) # save every entry region separately to save memory g_cor = open('distance_cor/goals/dict_cor_goals_groups_2_cond_Global South.pkl', 'wb') pickle.dump(dict_cor_goa_cond, g_cor) g_cor.close() dict_GN = pickle.load(open('distance_cor/goals/dict_cor_goals_groups_2_cond_Global North.pkl', 'rb')) dict_GS = {} dict_GS['Global South'] = pickle.load(open('distance_cor/goals/dict_cor_goals_groups_2_cond_Global South.pkl', 'rb')) dict_LCD = pickle.load(open('distance_cor/goals/dict_cor_goals_groups_2_cond_Least Developed Countries (LDC).pkl', 'rb')) dict_LLDC = pickle.load(open('distance_cor/goals/dict_cor_goals_groups_2_cond_Land Locked Developing Countries (LLDC).pkl', 'rb')) dict_SIDS = pickle.load(open('distance_cor/goals/dict_cor_goals_groups_2_cond_Small Island Developing States 
(SIDS).pkl', 'rb')) dict_G20 = pickle.load(open('distance_cor/goals/dict_cor_goals_groups_2_cond_G20.pkl', 'rb')) dict_EM = pickle.load(open('distance_cor/goals/dict_cor_goals_groups_2_cond_Emerging Markets (BRICS + N-11).pkl', 'rb')) dict_OPEC = pickle.load(open('distance_cor/goals/dict_cor_goals_groups_2_cond_OPEC.pkl', 'rb')) dict_LI = pickle.load(open('distance_cor/goals/dict_cor_goals_groups_2_cond_Low Income.pkl', 'rb')) dict_LMI = pickle.load(open('distance_cor/goals/dict_cor_goals_groups_2_cond_Lower middle Income.pkl', 'rb')) dict_UMI = pickle.load(open('distance_cor/goals/dict_cor_goals_groups_2_cond_Upper middle Income.pkl', 'rb')) dict_HI = pickle.load(open('distance_cor/goals/dict_cor_goals_groups_2_cond_High Income.pkl', 'rb')) dict_cor_goals_groups_2_condition = {**dict_GN, **dict_GS, **dict_LCD, **dict_LLDC, **dict_SIDS, **dict_G20, **dict_EM, **dict_OPEC, **dict_LI, **dict_LMI, **dict_UMI, **dict_HI} # check print(dict_cor_goals_groups_2_condition.keys()) dict_cor_goals_groups_2_condition['Global South'] ``` ### Pairwise distance correlation ``` def distance_cor(row): pair_0, pair_1 = row if pair_0=='T': pair_0 = 18 if pair_1=='T': pair_1 = 18 pair_0_array = groups_prep_g[group][int(pair_0)-1] pair_1_array = groups_prep_g[group][int(pair_1)-1] return dcor.distance_correlation(pair_0_array, pair_1_array)**2 # groups dict_cor_goals_groups_2_pair = {} for group in groups: print(group) dict_cor_goa_c_pair = pairs_g1.drop(columns=['condition']).copy(deep=True) # pairs_g1 has empty conditional sets for pairwise dcor pool = mp.Pool(int(mp.cpu_count()/2)) print('start dcor...') dict_cor_goa_c_pair_list = dict_cor_goa_c_pair.values.tolist() cor_results = pool.map(distance_cor, dict_cor_goa_c_pair_list, chunksize=1000) pool.close() pool.join() dict_cor_goa_c_pair['min_dcor_pair'] = cor_results print('...dcor done') dict_cor_goals_groups_2_pair[group] = dict_cor_goa_c_pair # check dict_cor_goals_groups_2_pair['Least Developed Countries (LDC)'] # merge 
dictionaries dict_cor_goals_groups_2 = {} for group in dict_cor_goals_groups_2_condition.keys(): print(group) dict_cor_goals_groups_2[group] = pd.DataFrame(index=range(153), columns=['pair_0', 'pair_1', 'min_dcor', 'condition']) for i in dict_cor_goals_groups_2_pair[group].index: for j in dict_cor_goals_groups_2_condition[group].index: if dict_cor_goals_groups_2_pair[group].loc[i, 'pair_0']==dict_cor_goals_groups_2_condition[group].loc[j, 'pair_0'] and dict_cor_goals_groups_2_pair[group].loc[i, 'pair_1']==dict_cor_goals_groups_2_condition[group].loc[j, 'pair_1']: dict_cor_goals_groups_2[group].loc[i, 'pair_0'] = dict_cor_goals_groups_2_pair[group].loc[i, 'pair_0'] dict_cor_goals_groups_2[group].loc[i, 'pair_1'] = dict_cor_goals_groups_2_pair[group].loc[i, 'pair_1'] dict_cor_goals_groups_2[group].loc[i, 'min_dcor'] = min(dict_cor_goals_groups_2_pair[group].loc[i, 'min_dcor_pair'], dict_cor_goals_groups_2_condition[group].loc[j, 'min_dcor']) if dict_cor_goals_groups_2_pair[group].loc[i, 'min_dcor_pair'] < dict_cor_goals_groups_2_condition[group].loc[j, 'min_dcor']: dict_cor_goals_groups_2[group].loc[i, 'condition'] = 0 else: dict_cor_goals_groups_2[group].loc[i, 'condition'] = dict_cor_goals_groups_2_condition[group].loc[j, 'condition'] # CHECKPOINT dict_cor_goals_groups_2 = pickle.load(open('distance_cor/goals/dict_cor_goals_groups_2.pkl', 'rb')) ``` ### Testing for statistical significance We calculate the p-values of our partial distance correlations, i.e., the probability of observing a dependence at least this strong under the null hypothesis of (partial) independence.
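Before running the full loop below, it may help to see what such a permutation test does on toy data. The following is a naive, self-contained sketch of the unconditional case only; the actual call used below, `dcor.independence.partial_distance_covariance_test`, applies the same resampling idea after removing the conditioning variables:

```python
import numpy as np

def dcov2(x, y):
    """Naive O(n^2) squared distance covariance (V-statistic)."""
    def centred(a):
        d = np.abs(a[:, None] - a[None, :])  # pairwise distance matrix
        return d - d.mean(axis=0) - d.mean(axis=1)[:, None] + d.mean()
    return (centred(x) * centred(y)).mean()

rng = np.random.default_rng(0)
x = rng.normal(size=100)
y = x ** 2 + 0.2 * rng.normal(size=100)  # strong non-linear dependence

observed = dcov2(x, y)
# Permuting y destroys any dependence; the p-value is the share of
# permuted statistics at least as large as the observed one.
perms = np.array([dcov2(x, rng.permutation(y)) for _ in range(500)])
p_value = (perms >= observed).mean()
print(p_value)  # near 0: independence is clearly rejected
```

With a dependence this strong, essentially no permuted statistic reaches the observed value, so the p-value is (close to) zero.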
``` for group in groups: print(group) dict_cor_goals_groups_2[group]['p-value'] = -1 for r, row in dict_cor_goals_groups_2[group].iterrows(): # preparing pair_0 and pair_1 if row.pair_1=='T': row.pair_1 = 18 pair_0_array = groups_prep_g[group][int(row.pair_0)-1] pair_1_array = groups_prep_g[group][int(row.pair_1)-1] # extracting conditional variables from column 'condition' cond_list = [] for i in row.condition.split(): newstr = ''.join((ch if ch in '0123456789.-eT' else ' ') for ch in i) cond_list.extend([i for i in newstr.split()]) condition = [] for c in cond_list: if c=='T': condition.extend(groups_prep_g[group][17].T) else: condition.extend(groups_prep_g[group][int(c)-1].T) cond_array = np.asarray(condition).T dict_cor_goals_groups_2[group].iloc[r, 4] = dcor.independence.partial_distance_covariance_test(pair_0_array, pair_1_array, cond_array, num_resamples=10000).p_value # save if not os.path.exists('distance_cor'): os.mkdir('distance_cor') if not os.path.exists('distance_cor/goals'): os.mkdir('distance_cor/goals') g_cor = open('distance_cor/goals/dict_cor_goals_groups_2.pkl', 'wb') pickle.dump(dict_cor_goals_groups_2, g_cor) g_cor.close() # saving as csv's for group in groups: dict_cor_goals_groups_2[group] = dict_cor_goals_groups_2[group][['pair_0', 'pair_1', 'min_dcor', 'p-value', 'condition']] dict_cor_goals_groups_2[group]['p-value'] = dict_cor_goals_groups_2[group]['p-value'].astype(float).round(5) dict_cor_goals_groups_2[group].min_dcor = dict_cor_goals_groups_2[group].min_dcor.astype(float).round(5) dict_cor_goals_groups_2[group].to_csv('distance_cor/goals/conditions_{}.csv'.format(group)) ``` We want to keep the minimum significant distance correlation of each pair of two goals, pairwise or conditioned on any potential subset. The last step is to insert these values into the right cell in a matrix. 
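The minimum-keeping step can also be compressed into a single `groupby`; here is a small sketch on made-up numbers (hypothetical values, not from the data set — the notebook instead collects lists and merges, which recovers the same row):

```python
import pandas as pd

# Hypothetical partial distance correlations for one goal pair
# under three different conditioning sets.
df = pd.DataFrame({'pair_0': ['1', '1', '1'],
                   'pair_1': ['2', '2', '2'],
                   'condition': ["('3',)", "('4',)", "('3', '4')"],
                   'dcor': [0.30, 0.22, 0.25]})

# Keep, per pair, the row with the minimum dcor -- including its conditioning set.
best = df.loc[df.groupby(['pair_0', 'pair_1'])['dcor'].idxmin()]
print(best)  # the row with condition "('4',)" and dcor 0.22
```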
``` cor_goals_groups_2 = {} for group in dict_cor_goals_groups_2.keys(): print(group) cor_goals_groups_2[group] = pd.DataFrame(index=goals, columns=goals) for i in list(dict_cor_goals_groups_2[group].index): goal_0 = dict_cor_goals_groups_2[group].loc[i, 'pair_0'] goal_1 = dict_cor_goals_groups_2[group].loc[i, 'pair_1'] # take square root because we have previously squared the distance correlation cor_goals_groups_2[group].loc[goal_1, goal_0] = np.sqrt(dict_cor_goals_groups_2[group].loc[i, 'min_dcor']) ``` `cor_goals_groups_2` now holds the conditional distance correlations for all groups in a setting of 18 random vectors $X$, $Y$, and $Z_1, Z_2, ..., Z_{16}$, where $\boldsymbol{Z}$ is the array containing all random vectors we want to condition on. ``` # save g_cor = open('distance_cor/goals/dcor_goals_groups_2.pkl', 'wb') pickle.dump(cor_goals_groups_2, g_cor) g_cor.close() # CHECKPOINT g_cor = pickle.load(open('distance_cor/goals/dcor_goals_groups_2.pkl', 'rb')) ``` ## Visualisation on goal-level In addition to the numerical matrices, we would also like to visualise these matrices and plot these correlations as networks.
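As a warm-up for the heatmap code below: since the correlation matrix is symmetric, the redundant upper triangle is blanked out with a boolean mask, which in isolation looks like this:

```python
import numpy as np

# True on and above the diagonal -> these cells are hidden by sns.heatmap(mask=...)
mask = np.zeros((4, 4), dtype=bool)
mask[np.triu_indices_from(mask)] = True
print(mask.astype(int))
```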
``` # groups for group in dict_cor_goals_groups_2.keys(): # generate a mask for the upper triangle (np.bool was removed in newer NumPy, so use the builtin bool) mask = np.zeros_like(cor_goals_groups_2[group].fillna(0), dtype=bool) mask[np.triu_indices_from(mask)] = True # set up the matplotlib figure f, ax = plt.subplots(figsize=(25, 22)) # generate a custom diverging colormap cmap = sns.color_palette("Reds", 100) # draw the heatmap with the mask and correct aspect ratio sns.heatmap(cor_goals_groups_2[group].fillna(0), mask=mask, cmap=cmap, vmax=1, center=0.5, vmin=0, square=True, linewidths=.5, cbar_kws={"shrink": .8}) plt.title('{}'.format(group), fontdict={'fontsize': 52}) plt.savefig('distance_cor/goals/{}_cor_goals.png'.format(group)) # data preparation for networkX dcor_dict_g = {} for group in cor_goals_groups_2.keys(): dcor_dict_g[group] = {} for goalcombination in g_combinations: dcor_dict_g[group][tuple(goalcombination)] = [cor_goals_groups_2[group].loc[goalcombination[1], goalcombination[0]], float(dict_cor_goals_groups_2[group].loc[(dict_cor_goals_groups_2[group]['pair_0']=='{}'.format(goalcombination[0])) & (dict_cor_goals_groups_2[group]['pair_1']=='{}'.format(goalcombination[1]))]['p-value'])] for group in cor_goals_groups_2.keys(): for key in list(dcor_dict_g[group].keys()): # iterate over a copy, since we pop below if key[1] == 'T': dcor_dict_g[group][tuple((key[0], '18'))] = dcor_dict_g[group].pop(tuple((key[0], 'T'))) elif key[0] == 'T': dcor_dict_g[group][tuple(('18', key[1]))] = dcor_dict_g[group].pop(tuple(('T', key[1]))) # plotting networks with weighted edges layout = 'circular' centrality_G = {} # dictionary to save centralities degree_G = {} # dictionary to save degrees density_G = {} # dictionary to save weighted densities p_G = {} # auxiliary partition_G = {} # dictionary to save clusters for group in cor_goals_groups_2.keys(): G_G = nx.Graph() for key, value in dcor_dict_g[group].items(): if value[1] <= 0.01: w = value[0] s = 'solid' c = sns.color_palette('Reds', 100)[int(value[0]*100)] elif 0.01 < value[1] <= 0.05: w = value[0] s = 'dashed'
c = sns.color_palette('Reds', 100)[int(value[0]*100)] elif 0.05 < value[1] <= 0.1: w = value[0] s = 'dotted' c = sns.color_palette('Reds', 100)[int(value[0]*100)] else: w = 0 s = 'solid' c = 'white' G_G.add_edge(int(key[0]), int(key[1]), style=s, weight=w, color=c, alpha=value[0]) if layout == 'circular': pos = nx.circular_layout(G_G) elif layout == 'spring': pos = nx.spring_layout(G_G) plt.figure(figsize=(24,16)) plt.tight_layout() # nodes nx.draw_networkx_nodes(G_G, pos, node_size=1000) # labels nx.draw_networkx_labels(G_G, pos, font_size=46, font_family='sans-serif') nodes = G_G.nodes() edges = G_G.edges() colors = [G_G[u][v]['color'] for u,v in edges] weights = [G_G[u][v]['weight'] for u,v in edges] alphas = [G_G[u][v]['alpha'] for u,v in edges] styles = [G_G[u][v]['style'] for u,v in edges] nx.draw_networkx_nodes(G_G, pos, nodelist=nodes, node_color='white', node_size=1000) for i, edge in enumerate(edges): pos_edge = {edge[0]: pos[edge[0]], edge[1]: pos[edge[1]]} nx.draw_networkx_edges(G_G, pos_edge, edgelist=[edge], edge_color=colors[i], style=styles[i], width=np.multiply(weights[i],25)) #alpha=np.multiply(alphas[i],2.5)) #nx.draw_networkx(G_G, pos, with_labels=False, edges=edges, edge_color=colors, node_color='white', node_size=1000, width=np.multiply(weights,25)) ax=plt.gca() fig=plt.gcf() trans = ax.transData.transform trans_axes = fig.transFigure.inverted().transform imsize = 0.08 # this is the image size plt.title('{}'.format(group), y=1.05, fontdict={'fontsize': 52}) for node in G_G.nodes(): (x,y) = pos[node] xx,yy = trans((x,y)) # figure coordinates xa,ya = trans_axes((xx,yy)) # axes coordinates a = plt.axes([xa-imsize/2.0,ya-imsize/2.0, imsize, imsize]) a.imshow(mpimg.imread('utils/images/E_SDG goals_icons-individual-rgb-{}.png'.format(node))) a.axis('off') plt.axis('off') ax.axis('off') plt.savefig('distance_cor/goals/{}_{}_network_logos_main.png'.format(group, layout), format='png') plt.show() # weighted centrality centr = 
nx.eigenvector_centrality(G_G, weight='weight', max_iter=100000) centrality_G[group] = sorted((v, '{:0.2f}'.format(c)) for v, c in centr.items()) degree_G[group] = dict(G_G.degree(weight='weight')) # weighted density density_G[group] = 2 * np.sum(weights) / (len(nodes) * (len(nodes) - 1)) # weighted clustering with Louvain algorithm part_G = {} modularity_G = {} for i in range(100): part_G[i] = community.best_partition(G_G, random_state=i) modularity_G[i] = community.modularity(part_G[i], G_G) p_G[group] = part_G[max(modularity_G, key=modularity_G.get)] # having lists with nodes being in different clusters partition_G[group] = {} for com in set(p_G[group].values()): partition_G[group][com] = [nodes for nodes in p_G[group].keys() if p_G[group][nodes] == com] # clusters for group in cor_goals_groups_2.keys(): print(group) print(partition_G[group]) print('-------------------------') g_part = open('distance_cor/goals/partition_groups.pkl', 'wb') pickle.dump(partition_G, g_part) g_part.close() # centralities for group in cor_goals_groups_2.keys(): print(group) print(centrality_G[group]) print('-------------------------') g_cent = open('distance_cor/goals/centrality_groups.pkl', 'wb') pickle.dump(centrality_G, g_cent) g_cent.close() # degrees for group in cor_goals_groups_2.keys(): print(group) print(degree_G[group]) print('-------------------------') g_deg = open('distance_cor/goals/degree_groups.pkl', 'wb') pickle.dump(degree_G, g_deg) g_deg.close() # densities for group in cor_goals_groups_2.keys(): print(group) print(density_G[group]) print('-------------------------') g_dens = open('distance_cor/goals/density_groups.pkl', 'wb') pickle.dump(density_G, g_dens) # fix: dump the densities, not the degrees g_dens.close() ``` ### Eigenvector visualisation ``` def get_image(goal): return OffsetImage(plt.imread('utils/images/E_SDG goals_icons-individual-rgb-{}.png'.format(goal)), zoom=0.06) for group in cor_goals_groups_2.keys(): # separating goals from their centralities x = [] y = [] for cent in
centrality_G[group]: x.append(cent[0]) y.append(float(cent[1])) fig, ax = plt.subplots(figsize=(24,16)) #plt.tight_layout() plt.title('{}'.format(group), y=1.05, fontdict={'fontsize': 52}) ax.scatter(x, y) # adding images for x0, y0, goal in zip(x, y, list(nodes)): ab = AnnotationBbox(get_image(goal), (x0, y0), frameon=False) ax.add_artist(ab) ax.set_xticks([]) ax.set_yticklabels([0.0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7], fontsize=28) ax.yaxis.grid() ax.set_ylim(0, 0.75) ax.set_ylabel('Eigenvector centrality', labelpad=24, fontdict={'fontsize': 38}) ax.set_xlabel('Variables (SDGs + climate change)', labelpad=54, fontdict={'fontsize': 38}) plt.savefig('distance_cor/goals/{}_eigenvector_centrality.png'.format(group), format='png') plt.show() ``` ### Cluster visualisation ``` # plotting clusters in networks with weighted edges from matplotlib.patches import Polygon from matplotlib.collections import PatchCollection layout = 'multipartite' for group in cor_goals_groups_2.keys(): G_G = nx.Graph() for key, value in dcor_dict_g[group].items(): G_G.add_edge(int(key[0]), int(key[1]), weight=value[0], color=sns.color_palette("Reds", 100)[int(np.around(value[0]*100))], alpha=value[0]) for node in nodes: G_G.nodes[node]['subset'] = p_G[group][node] if layout == 'circular': pos = nx.circular_layout(G_G) elif layout == 'spring': pos = nx.spring_layout(G_G, iterations=100, seed=42) elif layout == 'multipartite': pos = nx.multipartite_layout(G_G) plt.figure(figsize=(24,16)) # nodes nx.draw_networkx_nodes(G_G, pos, node_size=1000) # labels nx.draw_networkx_labels(G_G, pos, font_size=46, font_family='sans-serif') nodes = G_G.nodes() edges = G_G.edges() colors = [G_G[u][v]['color'] for u,v in edges] weights = [G_G[u][v]['weight'] for u,v in edges] nx.draw_networkx(G_G, pos, with_labels=False, edgelist=edges, edge_color=colors, node_color='white', node_size=1000, width=np.multiply(weights,25)) ax=plt.gca() fig=plt.gcf() trans = ax.transData.transform trans_axes = 
fig.transFigure.inverted().transform imsize = 0.08 # this is the image size plt.title('{}'.format(group), y=1.05, fontdict={'fontsize': 52}) for node in G_G.nodes(): x,y = pos[node] xx,yy = trans((x,y)) # figure coordinates xa,ya = trans_axes((xx,yy)) # axes coordinates a = plt.axes([xa-imsize/2.0,ya-imsize/2.0, imsize, imsize]) a.imshow(mpimg.imread('utils/images/E_SDG goals_icons-individual-rgb-{}.png'.format(node))) a.axis('off') # drawing polygon around nodes of clusters with maximum modularity clusters = [] for com, cluster_goals in partition_G[group].items(): # renamed loop variable so it does not shadow the global goals list position = [] for goal in cluster_goals: x,y = pos[goal] position.append((x,y)) positions = [] for i in range(6000): np.random.shuffle(position) positions.extend(position) # polygons polygon = Polygon(positions, closed=False) clusters.append(polygon) np.random.seed(72) colors = 100*np.random.rand(len(clusters)) p = PatchCollection(clusters, alpha=0.4) p.set_array(np.array(colors)) ax.add_collection(p) plt.axis('off') ax.axis('off') plt.savefig('distance_cor/goals/{}_{}_network_logos_cluster.png'.format(group, layout), format='png') plt.show() ```
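The weighted eigenvector centralities computed above with `nx.eigenvector_centrality(..., weight='weight')` are, up to normalisation, the leading eigenvector of the weighted adjacency matrix. A minimal power-iteration sketch on a made-up 4-node graph (hypothetical weights, standing in for distance correlations):

```python
import numpy as np

# Symmetric weighted adjacency matrix of a toy undirected graph, no self-loops.
W = np.array([[0.0, 0.9, 0.1, 0.0],
              [0.9, 0.0, 0.8, 0.1],
              [0.1, 0.8, 0.0, 0.7],
              [0.0, 0.1, 0.7, 0.0]])

# Power iteration converges to the leading (Perron) eigenvector.
v = np.ones(W.shape[0])
for _ in range(1000):
    v = W @ v
    v /= np.linalg.norm(v)

print(np.round(v, 2))  # node 1 sits on the heaviest edges -> largest centrality
```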
# 7 - Functions ``` from scipy import * from matplotlib.pyplot import * %matplotlib inline ``` ## Basics ``` def subtract(x1, x2): return x1 - x2 r = subtract(5.0, 4.3) r ``` ## Parameters and Arguments ``` z = 3 e = subtract(5,z) e z = 3 e = subtract(x2 = z, x1 = 5) e ``` ### Changing Arguments ``` def subtract(x1, x2): z = x1 - x2 x2 = 50. return z a = 20. b = subtract(10, a) # returns -10. b a # still has the value 20 def subtract(x): z = x[0] - x[1] x[1] = 50. return z a = [10,20] b = subtract(a) # returns -10 b a # is now [10, 50.0] ``` ### Access to variables defined outside the local namespace ``` import numpy as np # here the variable np is defined def sqrt(x): return np.sqrt(x) # we use np inside the function a = 3 def multiply(x): return a * x # bad style: access to the variable a defined outside multiply(4) # returns 12 a=4 multiply(4) # returns 16 def multiply(x, a): return a * x ``` ### Default Arguments ``` import scipy.linalg as sl sl.norm(identity(3)) sl.norm(identity(3), ord = 'fro') sl.norm(identity(3), 'fro') def subtract(x1, x2 = 0): return x1 - x2 subtract(5) def my_list(x1, x2 = []): x2.append(x1) return x2 my_list(1) # returns [1] my_list(2) # returns [1,2] ``` ### Variable Number of Arguments ``` data = [[1,2],[3,4]] style = dict({'linewidth':3,'marker':'o','color':'green'}) plot(*data, **style) ``` ## Return Values ``` def complex_to_polar(z): r = sqrt(z.real ** 2 + z.imag ** 2) phi = arctan2(z.imag, z.real) return (r,phi) # here the return object is formed z = 3 + 5j # here we define a complex number a = complex_to_polar(z) a r = a[0] r phi = a[1] phi r,phi = complex_to_polar(z) r,phi def append_to_list(L, x): L.append(x) def function_with_dead_code(x): return 2 * x y = x ** 2 # these two lines ... return y # ... are never executed! ``` ## Recursive functions ``` def chebyshev(n, x): if n == 0: return 1. elif n == 1: return x else: return 2. 
* x * chebyshev(n - 1, x) \ - chebyshev(n - 2 ,x) chebyshev(5, 0.52) # returns 0.39616645119999994 ``` ## Function Documentation ``` def newton(f, x0): """ Newton's method for computing a zero of a function on input: f (function) given function f(x) x0 (float) initial guess on return: y (float) the approximated zero of f """ ... help(newton) ``` ## Functions are Objects ``` def square(x): """Return the square of `x`""" return x ** 2 square(4) # 16 sq = square # now sq is the same as square sq(4) # 16 print(newton(sq, .2)) # passing as argument del sq ``` ### Partial Application ``` import functools def sin_omega(t, freq): return sin(2 * pi * freq * t) def make_sine(frequency): return functools.partial(sin_omega, freq = frequency) sin1=make_sine(1) sin1(2) def make_sine(freq): "Make a sine function with frequency freq" def mysine(t): return sin_omega(t, freq) return mysine sin1=make_sine(1) sin1(2) ``` ## Anonymous Functions - the `lambda` keyword ``` import scipy.integrate as si si.quad(lambda x: x ** 2 + 5, 0, 1) parabola = lambda x: x ** 2 + 5 parabola(3) # gives 14 def parabola(x): return x ** 2 + 5 parabola(3) import scipy.integrate as si for iteration in range(3): print(si.quad(lambda x: sin_omega(x, iteration * pi), 0, pi / 2.) ) ``` ## Functions as Decorators ``` def how_sparse(A): return len(A.reshape(-1).nonzero()[0]) how_sparse([1,2,0]) # returns an error def cast2array(f): def new_function(obj): fA = f(array(obj)) return fA return new_function @cast2array def how_sparse(A): return len(A.reshape(-1).nonzero()[0]) how_sparse([1,2,0]) # returns no error any more ```
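Looping back to the recursive `chebyshev` function from earlier: it recomputes the same sub-problems exponentially often. As a side note beyond the text, since we have already met `functools` for partial application, a single `functools.lru_cache` decorator memoises the recursion so each `(n, x)` pair is evaluated only once:

```python
import functools

@functools.lru_cache(maxsize=None)
def chebyshev(n, x):
    if n == 0:
        return 1.
    elif n == 1:
        return x
    else:
        return 2. * x * chebyshev(n - 1, x) - chebyshev(n - 2, x)

print(chebyshev(5, 0.52))    # 0.39616645119999994, as before
print(chebyshev(300, 0.52))  # instant; the uncached version would never finish
```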
# Introduction to Random Forests ## Resources This notebook is designed around the theory from the fast.ai lectures (course18) with added comments and details found in the lectures and online. The entire course can be found here: http://course18.fast.ai/ml.html. ### Links - Lecture notebook: https://github.com/fastai/fastai/blob/master/courses/ml1/lesson1-rf.ipynb ## About Random Forests "**Random forests** or **random decision forests** are an ensemble learning method for *classification*, *regression* and other tasks that operates by constructing a multitude of decision trees at training time and outputting the class that is the mode of the classes (classification) or mean prediction (regression) of the individual trees. Random decision forests correct for decision trees' habit of overfitting to their training set." - https://en.wikipedia.org/wiki/Random_forest ## Imports ``` # Notebook is automatically updated if the module source code is edited %load_ext autoreload %autoreload 2 # Show plots within the notebook %matplotlib inline import re import math import numpy as np import pandas as pd from os import makedirs from dateutil.parser import parse from pandas_summary import DataFrameSummary from sklearn.ensemble import RandomForestRegressor, RandomForestClassifier from IPython.display import display from sklearn import metrics PATH = '../data/bulldozers/' ``` ## Fast.ai Methods The following methods are designed by fast.ai and added to the notebook to work according to the details found in the lecture. 
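Before the utility code, a toy numerical illustration of the ensemble-averaging idea quoted above: if each tree were a roughly unbiased estimator with independent errors (an idealisation that real, correlated trees only approximate), averaging 100 of them would shrink the prediction noise by about a factor of 10:

```python
import numpy as np

rng = np.random.default_rng(42)
truth = 10.0

# 1000 simulated trials of a "forest" of 100 noisy, unbiased tree predictions.
trees = truth + rng.normal(scale=2.0, size=(1000, 100))

single_tree = trees[:, 0]    # one tree's prediction across trials
forest = trees.mean(axis=1)  # the forest averages its trees' predictions

print(single_tree.std())  # ~2.0
print(forest.std())       # ~0.2: the error of the mean falls like 1/sqrt(n_trees)
```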
``` from sklearn_pandas import DataFrameMapper from sklearn.preprocessing import LabelEncoder, Imputer, StandardScaler from pandas.api.types import is_string_dtype, is_numeric_dtype, is_categorical_dtype from sklearn.ensemble import forest from sklearn.tree import export_graphviz from concurrent.futures import ProcessPoolExecutor # needed by parallel_trees import matplotlib.pyplot as plt # needed by set_plot_sizes import IPython import graphviz # both needed by draw_tree def set_plot_sizes(sml, med, big): plt.rc('font', size=sml) # controls default text sizes plt.rc('axes', titlesize=sml) # fontsize of the axes title plt.rc('axes', labelsize=med) # fontsize of the x and y labels plt.rc('xtick', labelsize=sml) # fontsize of the tick labels plt.rc('ytick', labelsize=sml) # fontsize of the tick labels plt.rc('legend', fontsize=sml) # legend fontsize plt.rc('figure', titlesize=big) # fontsize of the figure title def parallel_trees(m, fn, n_jobs=8): return list(ProcessPoolExecutor(n_jobs).map(fn, m.estimators_)) def draw_tree(t, df, size=10, ratio=0.6, precision=0): """ Draws a representation of a random forest in IPython. Parameters: ----------- t: The tree you wish to draw df: The data used to train the tree. This is used to get the names of the features. """ s=export_graphviz(t, out_file=None, feature_names=df.columns, filled=True, special_characters=True, rotate=True, precision=precision) IPython.display.display(graphviz.Source(re.sub('Tree {', f'Tree {{ size={size}; ratio={ratio}', s))) def combine_date(years, months=1, days=1, weeks=None, hours=None, minutes=None, seconds=None, milliseconds=None, microseconds=None, nanoseconds=None): years = np.asarray(years) - 1970 months = np.asarray(months) - 1 days = np.asarray(days) - 1 types = ('<M8[Y]', '<m8[M]', '<m8[D]', '<m8[W]', '<m8[h]', '<m8[m]', '<m8[s]', '<m8[ms]', '<m8[us]', '<m8[ns]') vals = (years, months, days, weeks, hours, minutes, seconds, milliseconds, microseconds, nanoseconds) return sum(np.asarray(v, dtype=t) for t, v in zip(types, vals) if v is not None) def get_sample(df,n): """ Gets a random sample of n rows from df, without replacement.
Parameters: ----------- df: A pandas data frame, that you wish to sample from. n: The number of rows you wish to sample. Returns: -------- return value: A random sample of n rows of df. Examples: --------- >>> df = pd.DataFrame({'col1' : [1, 2, 3], 'col2' : ['a', 'b', 'a']}) >>> df col1 col2 0 1 a 1 2 b 2 3 a >>> get_sample(df, 2) col1 col2 1 2 b 2 3 a """ idxs = sorted(np.random.permutation(len(df))[:n]) return df.iloc[idxs].copy() def add_datepart(df, fldname, drop=True, time=False, errors="raise"): """add_datepart converts a column of df from a datetime64 to many columns containing the information from the date. This applies changes inplace. Parameters: ----------- df: A pandas data frame. df gain several new columns. fldname: A string that is the name of the date column you wish to expand. If it is not a datetime64 series, it will be converted to one with pd.to_datetime. drop: If true then the original date column will be removed. time: If true time features: Hour, Minute, Second will be added. 
    Examples:
    ---------
    >>> df = pd.DataFrame({ 'A' : pd.to_datetime(['3/11/2000', '3/12/2000', '3/13/2000'], infer_datetime_format=False) })
    >>> df
        A
    0   2000-03-11
    1   2000-03-12
    2   2000-03-13

    >>> add_datepart(df, 'A')
    >>> df
        AYear AMonth AWeek ADay ADayofweek ADayofyear AIs_month_end AIs_month_start AIs_quarter_end AIs_quarter_start AIs_year_end AIs_year_start AElapsed
    0   2000  3      10    11   5          71         False         False           False           False             False        False          952732800
    1   2000  3      10    12   6          72         False         False           False           False             False        False          952819200
    2   2000  3      11    13   0          73         False         False           False           False             False        False          952905600
    """
    fld = df[fldname]
    fld_dtype = fld.dtype
    if isinstance(fld_dtype, pd.core.dtypes.dtypes.DatetimeTZDtype):
        fld_dtype = np.datetime64

    if not np.issubdtype(fld_dtype, np.datetime64):
        df[fldname] = fld = pd.to_datetime(fld, infer_datetime_format=True, errors=errors)
    targ_pre = re.sub('[Dd]ate$', '', fldname)
    attr = ['Year', 'Month', 'Week', 'Day', 'Dayofweek', 'Dayofyear',
            'Is_month_end', 'Is_month_start', 'Is_quarter_end', 'Is_quarter_start',
            'Is_year_end', 'Is_year_start']
    if time: attr = attr + ['Hour', 'Minute', 'Second']
    for n in attr: df[targ_pre + n] = getattr(fld.dt, n.lower())
    df[targ_pre + 'Elapsed'] = fld.astype(np.int64) // 10 ** 9
    if drop: df.drop(fldname, axis=1, inplace=True)

def is_date(x): return np.issubdtype(x.dtype, np.datetime64)

def train_cats(df):
    """Change any columns of strings in a pandas dataframe to a column of
    categorical values. This applies the changes inplace.

    Parameters:
    -----------
    df: A pandas dataframe. Any columns of strings will be changed to categorical values.
    Examples:
    ---------
    >>> df = pd.DataFrame({'col1' : [1, 2, 3], 'col2' : ['a', 'b', 'a']})
    >>> df
       col1 col2
    0     1    a
    1     2    b
    2     3    a

    note the type of col2 is string

    >>> train_cats(df)
    >>> df
       col1 col2
    0     1    a
    1     2    b
    2     3    a

    now the type of col2 is category
    """
    for n,c in df.items():
        if is_string_dtype(c): df[n] = c.astype('category').cat.as_ordered()

def apply_cats(df, trn):
    """Changes any columns of strings in df into categorical variables using trn as
    a template for the category codes.

    Parameters:
    -----------
    df: A pandas dataframe. Any columns of strings will be changed to categorical values.
        The category codes are determined by trn.
    trn: A pandas dataframe. When creating a category for df, it looks up what the
        category codes were in trn and makes those the category codes for df.

    Examples:
    ---------
    >>> df = pd.DataFrame({'col1' : [1, 2, 3], 'col2' : ['a', 'b', 'a']})
    >>> df
       col1 col2
    0     1    a
    1     2    b
    2     3    a

    note the type of col2 is string

    >>> train_cats(df)
    >>> df
       col1 col2
    0     1    a
    1     2    b
    2     3    a

    now the type of col2 is category {a : 1, b : 2}

    >>> df2 = pd.DataFrame({'col1' : [1, 2, 3], 'col2' : ['b', 'a', 'a']})
    >>> apply_cats(df2, df)
    >>> df2
       col1 col2
    0     1    b
    1     2    a
    2     3    a

    now the type of col2 is category {a : 1, b : 2}
    """
    for n,c in df.items():
        if (n in trn.columns) and (trn[n].dtype.name=='category'):
            df[n] = c.astype('category').cat.as_ordered()
            # set_categories no longer supports inplace=True in newer pandas,
            # so assign the result back instead
            df[n] = df[n].cat.set_categories(trn[n].cat.categories, ordered=True)

def fix_missing(df, col, name, na_dict):
    """ Fill missing data in a column of df with the median, and add a {name}_na
    column which specifies if the data was missing.

    Parameters:
    -----------
    df: The data frame that will be changed.
    col: The column of data to fix by filling in missing data.
    name: The name of the new filled column in df.
    na_dict: A dictionary of values to create na's of and the value to insert. If
        name is not a key of na_dict the median will fill any missing data.
    Also if name is not a key of na_dict and there is no missing data in col, then
    no {name}_na column is created.

    Examples:
    ---------
    >>> df = pd.DataFrame({'col1' : [1, np.NaN, 3], 'col2' : [5, 2, 2]})
    >>> df
       col1 col2
    0     1    5
    1   nan    2
    2     3    2

    >>> fix_missing(df, df['col1'], 'col1', {})
    >>> df
       col1 col2 col1_na
    0     1    5   False
    1     2    2    True
    2     3    2   False

    >>> df = pd.DataFrame({'col1' : [1, np.NaN, 3], 'col2' : [5, 2, 2]})
    >>> df
       col1 col2
    0     1    5
    1   nan    2
    2     3    2

    >>> fix_missing(df, df['col2'], 'col2', {})
    >>> df
       col1 col2
    0     1    5
    1   nan    2
    2     3    2

    >>> df = pd.DataFrame({'col1' : [1, np.NaN, 3], 'col2' : [5, 2, 2]})
    >>> df
       col1 col2
    0     1    5
    1   nan    2
    2     3    2

    >>> fix_missing(df, df['col1'], 'col1', {'col1' : 500})
    >>> df
       col1 col2 col1_na
    0     1    5   False
    1   500    2    True
    2     3    2   False
    """
    if is_numeric_dtype(col):
        if pd.isnull(col).sum() or (name in na_dict):
            df[name+'_na'] = pd.isnull(col)
            filler = na_dict[name] if name in na_dict else col.median()
            df[name] = col.fillna(filler)
            na_dict[name] = filler
    return na_dict

def numericalize(df, col, name, max_n_cat):
    """ Changes the column col from a categorical type to its integer codes.

    Parameters:
    -----------
    df: A pandas dataframe. df[name] will be filled with the integer codes from col.
    col: The column you wish to change into the categories.
    name: The column name you wish to insert into df. This column will hold the
        integer codes.
    max_n_cat: If col has no more than max_n_cat categories it will not be changed
        to its integer codes (it is left as a category for dummy encoding instead,
        matching the code below). If max_n_cat is None, then col will always be converted.
    Examples:
    ---------
    >>> df = pd.DataFrame({'col1' : [1, 2, 3], 'col2' : ['a', 'b', 'a']})
    >>> df
       col1 col2
    0     1    a
    1     2    b
    2     3    a

    note the type of col2 is string

    >>> train_cats(df)
    >>> df
       col1 col2
    0     1    a
    1     2    b
    2     3    a

    now the type of col2 is category { a : 1, b : 2}

    >>> numericalize(df, df['col2'], 'col3', None)
       col1 col2 col3
    0     1    a    1
    1     2    b    2
    2     3    a    1
    """
    if not is_numeric_dtype(col) and ( max_n_cat is None or len(col.cat.categories)>max_n_cat):
        df[name] = pd.Categorical(col).codes+1

def scale_vars(df, mapper):
    warnings.filterwarnings('ignore', category=sklearn.exceptions.DataConversionWarning)
    if mapper is None:
        map_f = [([n],StandardScaler()) for n in df.columns if is_numeric_dtype(df[n])]
        mapper = DataFrameMapper(map_f).fit(df)
    df[mapper.transformed_names_] = mapper.transform(df)
    return mapper

def proc_df(df, y_fld=None, skip_flds=None, ignore_flds=None, do_scale=False, na_dict=None,
            preproc_fn=None, max_n_cat=None, subset=None, mapper=None):
    """ proc_df takes a data frame df and splits off the response variable, and
    changes the df into an entirely numeric dataframe. For each column of df which
    is not in skip_flds nor in ignore_flds, na values are replaced by the median
    value of the column.

    Parameters:
    -----------
    df: The data frame you wish to process.
    y_fld: The name of the response variable
    skip_flds: A list of fields that are dropped from df.
    ignore_flds: A list of fields that are ignored during processing.
    do_scale: Standardizes each column in df. Takes Boolean values (True, False).
    na_dict: A dictionary of na columns to add. Na columns are also added if there
        are any missing values.
    preproc_fn: A function that gets applied to df.
    max_n_cat: The maximum number of categories to break into dummy values, instead
        of integer codes.
    subset: Takes a random subset of size subset from df.
    mapper: If do_scale is set as True, the mapper variable calculates the values
        used for scaling of variables during training time (mean and standard deviation).
    Returns:
    --------
    [x, y, nas, mapper(optional)]:
        x: x is the transformed version of df. x will not have the response variable
            and is entirely numeric.
        y: y is the response variable
        nas: returns a dictionary of which nas it created, and the associated median.
        mapper: A DataFrameMapper which stores the mean and standard deviation of the
            corresponding continuous variables which is then used for scaling during test-time.
        (Note: the simplified version in this notebook returns only [x, y], plus
        mapper when do_scale is True.)

    Examples:
    ---------
    >>> df = pd.DataFrame({'col1' : [1, 2, 3], 'col2' : ['a', 'b', 'a']})
    >>> df
       col1 col2
    0     1    a
    1     2    b
    2     3    a

    note the type of col2 is string

    >>> train_cats(df)
    >>> df
       col1 col2
    0     1    a
    1     2    b
    2     3    a

    now the type of col2 is category { a : 1, b : 2}

    >>> x, y, nas = proc_df(df, 'col1')
    >>> x
       col2
    0     1
    1     2
    2     1

    >>> data = DataFrame(pet=["cat", "dog", "dog", "fish", "cat", "dog", "cat", "fish"],
                         children=[4., 6, 3, 3, 2, 3, 5, 4],
                         salary=[90, 24, 44, 27, 32, 59, 36, 27])
    >>> mapper = DataFrameMapper([(:pet, LabelBinarizer()), ([:children], StandardScaler())])
    >>> round(fit_transform!(mapper, copy(data)), 2)
    8x4 Array{Float64,2}:
    1.0  0.0  0.0   0.21
    0.0  1.0  0.0   1.88
    0.0  1.0  0.0  -0.63
    0.0  0.0  1.0  -0.63
    1.0  0.0  0.0  -1.46
    0.0  1.0  0.0  -0.63
    1.0  0.0  0.0   1.04
    0.0  0.0  1.0   0.21
    """
    #if not ignore_flds: ignore_flds=[]
    if not skip_flds: skip_flds=[]
    if subset: df = get_sample(df,subset)
    #else: df = df.copy()
    df = df.copy()
    #ignored_flds = df.loc[:, ignore_flds]
    #df.drop(ignore_flds, axis=1, inplace=True)
    if preproc_fn: preproc_fn(df)
    # Guard restored from the original fastai helper so that y_fld=None (the
    # default) and non-numeric response columns work as the docstring describes:
    if y_fld is None: y = None
    else:
        if not is_numeric_dtype(df[y_fld]): df[y_fld] = pd.Categorical(df[y_fld]).codes
        y = df[y_fld].values
        skip_flds += [y_fld]
    df.drop(skip_flds, axis=1, inplace=True)
    if na_dict is None: na_dict = {}
    else: na_dict = na_dict.copy()
    #na_dict_initial = na_dict.copy()
    for n,c in df.items(): na_dict = fix_missing(df, c, n, na_dict)
    #if len(na_dict_initial.keys()) > 0:
    #    df.drop([a + '_na' for a in list(set(na_dict.keys()) - set(na_dict_initial.keys()))], axis=1,
    #        inplace=True)
    if do_scale: mapper = scale_vars(df, mapper)
    for n,c in df.items(): numericalize(df, c, n, max_n_cat)
    #df = pd.get_dummies(df, dummy_na=True)
    #df = pd.concat([ignored_flds, df], axis=1)
    #res = [df, y, na_dict]
    #if do_scale: res = res + [mapper]
    #return res
    res = [pd.get_dummies(df, dummy_na=True), y]
    if not do_scale: return res
    return res + [mapper]

def rf_feat_importance(m, df):
    return pd.DataFrame({'cols':df.columns, 'imp':m.feature_importances_}
                       ).sort_values('imp', ascending=False)

def set_rf_samples(n):
    """ Changes Scikit learn's random forests to give each tree a random sample of
    n random rows.
    """
    forest._generate_sample_indices = (lambda rs, n_samples:
        forest.check_random_state(rs).randint(0, n_samples, n))

def reset_rf_samples():
    """ Undoes the changes produced by set_rf_samples.
    """
    forest._generate_sample_indices = (lambda rs, n_samples:
        forest.check_random_state(rs).randint(0, n_samples, n_samples))

def get_nn_mappers(df, cat_vars, contin_vars):
    # Replace nulls with a large out-of-range value (max+100) for continuous
    # variables and '#NA#' for categorical ones
    for v in contin_vars: df[v] = df[v].fillna(df[v].max()+100,)
    for v in cat_vars: df[v].fillna('#NA#', inplace=True)

    # list of tuples, containing variable and instance of a transformer for that variable
    # for categoricals, use LabelEncoder to map to integers. For continuous, standardize
    cat_maps = [(o, LabelEncoder()) for o in cat_vars]
    contin_maps = [([o], StandardScaler()) for o in contin_vars]
    # The cell was truncated here in the original; this return mirrors the
    # fastai version of this helper, which hands back the two fitted mappers
    return DataFrameMapper(cat_maps).fit(df), DataFrameMapper(contin_maps).fit(df)
```

## Load Dataset

Load the dataset as a DataFrame by reading the .csv file using pandas.

```
df_raw = pd.read_csv(f'{PATH}Train.csv', low_memory=False, parse_dates=["saledate"])
```

## Display Data

It is important to look at the actual data in the dataset, to make sure that you understand the format, how it is stored, what types of values it holds, and so on. Even if you have read descriptions about your data, the actual data may not be what you expect.
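A quick structural pass often surfaces those surprises immediately. The sketch below is only illustrative: the tiny frame and its columns (`SalePrice`, `UsageBand`) are made-up stand-ins for the real training data, but the same dtype-plus-null-count summary applies to any DataFrame.

```python
import numpy as np
import pandas as pd

# Made-up miniature stand-in for the real training data
df = pd.DataFrame({
    'SalePrice': [66000.0, 57000.0, np.nan],
    'UsageBand': ['Low', 'High', None],
})

# One row per column: its dtype plus how many values are missing
overview = pd.DataFrame({
    'dtype': df.dtypes.astype(str),
    'missing': df.isnull().sum(),
})
print(overview)
```

A summary like this makes mismatched dtypes (e.g. a numeric column read in as strings) and unexpectedly sparse columns stand out before any modelling starts.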
```
def display_all(df):
    with pd.option_context("display.max_rows", 1000, "display.max_columns", 1000):
        display(df)

display_all(df_raw.tail().T)
display_all(df_raw.describe(include='all').T)
```

## Metric

It is important to note what metric is being used for a project. Generally, selecting the metric(s) is an important part of the project setup. However, in this case Kaggle tells us what metric to use: RMSLE (root mean squared log error) between the actual and predicted auction prices. Therefore we take the log of the prices, so that RMSE will give us what we need.

```
df_raw.SalePrice = np.log(df_raw.SalePrice)
```

## Feature Engineering

Feature engineering is an important part of all machine learning tasks. The dataset contains only a limited amount of information, so it is important to extend it with as much relevant data as possible; that is exactly what feature engineering does.

```
df_raw['saledate']
add_datepart(df_raw, 'saledate')
df_raw.saleYear.head()
```

## Continuous and Categorical Variables

The dataset contains a mix of both continuous and categorical variables, which a random forest cannot consume directly. The categorical variables are currently stored as strings, which is inefficient and does not provide the numeric coding required for a random forest. It is therefore important to convert the strings to pandas categories.

```
df_raw.head()

for col_name in df_raw.columns:
    if(df_raw[col_name].dtype == 'object'):
        df_raw[col_name] = df_raw[col_name].astype('category')
print('Process of changing data types has finished executing.')
```

The order of the categorical variables may affect performance, so it is important to set the categories in a meaningful order.

```
df_raw.UsageBand.cat.categories
# Newer versions of pandas drop the inplace argument here, so assign the result back
df_raw.UsageBand = df_raw.UsageBand.cat.set_categories(['High', 'Medium', 'Low'], ordered=True)
```

## Missing Values

Missing values cannot be passed directly to a random forest, so the dataset needs to be free of them.
```
display_all(df_raw.isnull().sum().sort_index()/len(df_raw))
```

## Store and Load DataFrames

After making changes to the dataframes in a dataset, the current state can be stored and loaded. This avoids having to re-do all the previous steps.

```
import os

os.makedirs('dfs', exist_ok=True)
df_raw.to_feather('dfs/raw_bulldozers')
df_raw = pd.read_feather('dfs/raw_bulldozers')
```

## Pre-Processing

Before everything is ready for the fitting process, it is necessary to replace categories with their numeric codes, handle missing continuous values, and split the dependent variable into a separate variable.

```
df, y = proc_df(df_raw, y_fld='SalePrice')
df.columns
```

## Fit the Random Forest

Now that the dataset has been prepared, it is ready to fit.

```
from sklearn.ensemble import RandomForestRegressor

m = RandomForestRegressor(n_jobs=-1)
m.fit(df, y)
m.score(df, y)
```

## Validation and Training Sets

An important idea in machine learning is to have separate training and validation data sets.

```
import math

def split_vals(a,n): return a[:n].copy(), a[n:].copy()

n_valid = 12000  # same as Kaggle's test set size
n_trn = len(df)-n_valid
raw_train, raw_valid = split_vals(df_raw, n_trn)
X_train, X_valid = split_vals(df, n_trn)
y_train, y_valid = split_vals(y, n_trn)
X_train.shape, y_train.shape, X_valid.shape

def rmse(x,y): return math.sqrt(((x-y)**2).mean())

def print_score(m):
    res = [rmse(m.predict(X_train), y_train), rmse(m.predict(X_valid), y_valid),
           m.score(X_train, y_train), m.score(X_valid, y_valid)]
    if hasattr(m, 'oob_score_'): res.append(m.oob_score_)
    print(res)

m = RandomForestRegressor(n_jobs=-1)
%time m.fit(X_train, y_train)
print_score(m)
```
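Tying back to the Metric section: taking the log of the prices makes RMSE penalise ratio errors rather than absolute dollar errors, which is what RMSLE measures. A small numeric illustration with made-up prices:

```python
import numpy as np

def rmse(x, y):
    return np.sqrt(((x - y) ** 2).mean())

# Two machines at very different price points, both over-predicted by 10%
y_true = np.array([10_000.0, 100_000.0])
y_pred = 1.1 * y_true

# After the log transform both rows contribute the same residual, log(1.1),
# so the cheap and the expensive machine are weighted equally by the metric
residuals = np.log(y_pred) - np.log(y_true)
print(residuals)
print(rmse(np.log(y_pred), np.log(y_true)))
```

Without the log transform, the $10,000 error on the expensive machine would dominate the $1,000 error on the cheap one, even though both predictions are equally wrong in relative terms.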
# Homework 1: Preprocessing and Text Classification

Student Name: Jun Luo

Student ID: 792597

Python version used: Python 2.7

## General info

<b>Due date</b>: 11pm, Sunday March 18th

<b>Submission method</b>: see LMS

<b>Submission materials</b>: completed copy of this iPython notebook

<b>Late submissions</b>: -20% per day

<b>Marks</b>: 5% of mark for class

<b>Overview</b>: In this homework, you'll be using a corpus of tweets to do tokenisation of hashtags and build polarity classifiers using bag-of-words (BOW) features.

<b>Materials</b>: See the main class LMS page for information on the basic setup required for this class, including an iPython notebook viewer and the python packages NLTK, Numpy, Scipy, Matplotlib, Scikit-Learn, and Gensim. In particular, if you are not using a lab computer which already has it installed, we recommend installing all the data for NLTK, since you will need various parts of it to complete this assignment. You can also use any Python built-in packages, but do not use any other 3rd party packages (the packages listed above are all fine to use); if your iPython notebook doesn't run on the marker's machine, you will lose marks.

<b>Evaluation</b>: Your iPython notebook should run end-to-end without any errors in a few minutes, and you must follow all instructions provided below, including specific implementation requirements and instructions for what needs to be printed (please avoid printing output we don't ask for). The amount each section is worth is given in parenthesis after the instructions. You will be marked not only on the correctness of your methods, but also the quality and efficiency of your code: in particular, you should be careful to use Python built-in functions and operators when appropriate and pick descriptive variable names that adhere to <a href="https://www.python.org/dev/peps/pep-0008/">Python style requirements</a>.
If you think it might be unclear what you are doing, you should comment your code to help the marker make sense of it.

<b>Extra credit</b>: Each homework has a task which is optional with respect to getting full marks on the assignment, but that can be used to offset any points lost on this or any other homework assignment (but not the final project or the exam). We recommend you skip over this step on your first pass, and come back if you have time: the amount of effort required to receive full marks (1 point) on an extra credit question will be substantially more than earning the same amount of credit on other parts of the homework.

<b>Updates</b>: Any major changes to the assignment will be announced via LMS. Minor changes and clarifications will be announced in the forum on LMS; we recommend you check the forum regularly.

<b>Academic Misconduct</b>: For most people, collaboration will form a natural part of the undertaking of this homework, and we encourage you to discuss it in general terms with other students. However, this ultimately is still an individual task, and so reuse of code or other instances of clear influence will be considered cheating. We will be checking submissions for originality and will invoke the University’s <a href="http://academichonesty.unimelb.edu.au/policy.html">Academic Misconduct policy</a> where inappropriate levels of collusion or plagiarism are deemed to have taken place.

## Preprocessing

<b>Instructions</b>: For this homework we will be using the tweets in the <i>twitter_samples</i> corpus included with NLTK. You should start by accessing these tweets. Use the <i>strings</i> method included in the NLTK corpus reader for <i>twitter_samples</i> to access the tweets (as raw strings). Iterate over the full corpus, and print out the average length, in characters, of the tweets in the corpus.
(0.5)

```
import nltk
import nltk.corpus
import numpy

corpus = nltk.corpus.twitter_samples.strings()
total_characters = 0
for tweet in corpus:
    total_characters += len(tweet)
print('Average Length:' + str(total_characters*1.0/len(corpus))+' characters')
```

<b>Instructions</b>: Hashtags (i.e. topic tags which start with #) pose an interesting tokenisation problem because they often include multiple words written without spaces or capitalization. You should use a regular expression to extract all hashtags of length 8 or longer which consist only of lower case letters (other than the # at the beginning, of course, though this should be stripped off as part of the extraction process). Do <b>not</b> tokenise the entire tweet as part of this process. The hashtag might occur at the beginning or the end of the tweet; you should double-check that you aren't missing any. After you have collected them into a list, print out the number of hashtags you have collected: for full credit, you must get the exact number that we expect. (1.0)

```
"""
Daniel's post in the discussion board:
Assume the boundaries are whitespaces. So hashtags need to have whitespaces before
and after (unless they occur in the beginning or the end of the tweet). Cases like
#thisperson's should not be captured. Yes, in the real world we would probably like
to capture this phenomenon as well. But to do this you need to assume some level of
tokenisation already (splitting the 's) and you should not tokenise the tweet in
that question (this is in the instructions).
"""
import re

hashtags = []
# Collect all the hashtags into an array
for tweet in corpus:
    array = re.findall(r"(?:^|(?<=\s))(?:#)([a-z]{8,})(?:$|(?=\s))", tweet)
    for hashtag in array:
        hashtags.append(hashtag)
print('Total Number of Hashtags:'+str(len(hashtags)))
```

<b>Instructions</b>: Now, tokenise the hashtags you've collected.
To do this, you should implement a reversed version of the MaxMatch algorithm discussed in class (and in the reading), where matching begins at the end of the hashtag and progresses backwards. NLTK has a list of words that you can use for matching, see starter code below. Be careful about efficiency with respect to doing word lookups. One extra challenge you have to deal with is that the provided list of words includes only lemmas: your MaxMatch algorithm should match inflected forms by converting them into lemmas using the NLTK lemmatiser before matching. Note that the list of words is incomplete, and, if you are unable to make any longer match, your code should default to matching a single letter. Create a new list of tokenised hashtags (this should be a list of lists of strings) and use slicing to print out the last 20 hashtags in the list. (1.0)

```
from nltk import word_tokenize
from nltk.stem import WordNetLemmatizer

lemmatizer = WordNetLemmatizer()

def reverse_max_match(sentence, dictionary):
    if len(sentence)==0:
        return []
    for i in reversed(range(1,len(sentence)+1)):
        firstword = lemmatizer.lemmatize(sentence[-i:])
        remainder = sentence[:-i]
        if firstword in dictionary:
            return reverse_max_match(remainder,dictionary)+[firstword]
    # if no word was found, then make a one-character word
    firstword = lemmatizer.lemmatize(sentence[-1:])
    remainder = sentence[:-1]
    return reverse_max_match(remainder,dictionary)+[firstword]

# The raw corpus is a Python list; a set makes the repeated membership tests O(1)
words = set(nltk.corpus.words.words())
# print(reverse_max_match('flowers',words))
# print(len(hashtags))

counter = 0
result2 = []
for hashtag in hashtags:
    counter+=1
    # if(counter%100 == 0):
    #     print(counter)
    result2.append(reverse_max_match(hashtag,words))
print(result2[-20:])
```

### Extra Credit (Optional)

<b>Instructions</b>: Implement the forward version of the MaxMatch algorithm as well, and print out all the hashtags which give different results for the two versions of MaxMatch.
Your main task is to come up with a good way to select which of the two segmentations is better for any given case, and demonstrate that it works significantly better than using a single version of the algorithm for all hashtags. (1.0)

#### Answer:

The method I use to select the better segmentation is Maximum Known Matching (MKM). (http://cs.uccs.edu/~jkalita/work/reu/REU2015/FinalPapers/05Reuter.pdf)

The score is calculated using the formula below:

$Score(s) = \sqrt[i]{\sum_{k=1}^{i} len(w_{k})^2}$

where len(w) returns the length of a word w, and s is a segmentation into i words. The higher the score is, the better a segmentation is.

Clearly, max(score_a, score_b) >= score_a and max(score_a, score_b) >= score_b.

To illustrate whether it is significantly better, two scores are calculated:

1. improvement_forward: the total improvement from using both segmentations compared with using only the forward MaxMatch
2. improvement_reverse: the total improvement from using both segmentations compared with using only the reverse MaxMatch

Then we calculate the average improvement of the score:

average_improve_reverse = improvement_reverse/(length of the corpus)

average_improve_forward = improvement_forward/(length of the corpus)

The result below shows that choosing the matching sequence with the highest score is better than using only one single matching algorithm.

##### It gets about a 6% improvement compared with using only the reversed MaxMatch, and a 4% improvement compared with using only the forward MaxMatch.

The code below demonstrates the forward max_match algorithm and the score calculation process.
```
def max_match(sentence, dictionary):
    if len(sentence)==0:
        return []
    for i in reversed(range(1,len(sentence)+1)):
        firstword = lemmatizer.lemmatize(sentence[:i])
        remainder = sentence[i:]
        if firstword in dictionary:
            return [firstword]+max_match(remainder,dictionary)
    # if no word was found, then make a one-character word
    firstword = lemmatizer.lemmatize(sentence[:1])
    remainder = sentence[1:]
    return [firstword]+max_match(remainder,dictionary)

# The raw corpus is a Python list; a set makes the repeated membership tests O(1)
words = set(nltk.corpus.words.words())
# print(len(hashtags))

counter = 0
result = []
for hashtag in hashtags:
    counter+=1
    # if(counter%100 == 0):
    #     print(counter)
    result.append(max_match(hashtag,words))
print(result)

for index,value in enumerate(result2):
    # print(result2[index])
    if not result2[index] == result[index]:
        print(result2[index])
        print(result[index])
        print('\r\n')

"""Select the best one among reverse and forward"""
improvement_forward = 0
improvement_reverse = 0

def Score(arr):
    sum_length_square = 0
    for word in arr:
        sum_length_square += len(word)**2
    return (sum_length_square*1.0)**(1/float(len(arr)))

# print(Score([u'a', u'th', u'aba', u'ca']))
for index,value in enumerate(result2):
    # print(hashtags[index])
    # if result2[index] == result[index]:
    #     print(result2[index])
    #     print('\r\n')
    # else:
    #     result2_1char = [ele for ele in result2[index] if len(ele)==1]
    #     print(result2[index])
    #     print(Score(result2[index]))
    #     print(result[index])
    #     print(Score(result[index]))
    #     print('\r\n')
    improvement_reverse += max(Score(result[index]),Score(result2[index]))/Score(result[index])-1
    improvement_forward += max(Score(result[index]),Score(result2[index]))/Score(result2[index])-1
# Score_B += max(Score(result[index]),Score(result2[index]))
# improve_reverse = Score_B*1.0/Score_R*1.0 - 1
# improve_forward = Score_B*1.0/Score_F*1.0 - 1
print('Improved Reverse:'+ str(improvement_reverse*100/len(result))+'%')
print('Improved Forward:'+ str(improvement_forward*100/len(result))+'%')
```

## Text classification (Not Optional)

<b>Instructions</b>: The <i>twitter_samples</i> corpus has two subcorpora corresponding to positive and negative tweets. You can access already tokenised versions using the <i>tokenized</i> method, as given in the code sample below. Iterate through these two corpora and build training, development, and test sets for use with Scikit-learn. You should exclude stopwords (from the built-in NLTK list) and tokens with non-alphabetic characters (this is very important because emoticons were used to build the corpus; if you don't remove them, performance will be artificially high). You should randomly split each subcorpus, using 80% of the tweets for training, 10% for development, and 10% for testing; make sure you do this <b>before</b> combining the tweets from the positive/negative subcorpora, so that the sets are <i>stratified</i>, i.e. the exact ratio of positive and negative tweets is preserved across the three sets. (1.0)

```
import numpy as np

positive_tweets = nltk.corpus.twitter_samples.tokenized("positive_tweets.json")
negative_tweets = nltk.corpus.twitter_samples.tokenized("negative_tweets.json")
np.random.shuffle(positive_tweets)
np.random.shuffle(negative_tweets)

train_positive = positive_tweets[:int(len(positive_tweets)*0.8)]
train_negative = negative_tweets[:int(len(negative_tweets)*0.8)]
dev_positive = positive_tweets[int(len(positive_tweets)*0.8):int(len(positive_tweets)*0.9)]
dev_negative = negative_tweets[int(len(negative_tweets)*0.8):int(len(negative_tweets)*0.9)]
test_positive = positive_tweets[int(len(positive_tweets)*0.9):]
test_negative = negative_tweets[int(len(negative_tweets)*0.9):]

from nltk.corpus import stopwords
stopwords = set(stopwords.words('english'))

from sklearn.feature_extraction import DictVectorizer

def get_BOW_lowered_no_stopwords(text):
    BOW = {}
    for word in text:
        word = word.lower()
        if word not in stopwords and len(re.findall(r"[^a-z]", word))== 0:
            BOW[word] = BOW.get(word,0) + 1
    return BOW

def prepare_data(datafile,feature_extractor):
    feature_matrix = []
    for tweet in datafile:
        feature_dict = feature_extractor(tweet)
        feature_matrix.append(feature_dict)
    vectorizer = DictVectorizer()
    dataset = vectorizer.fit_transform(feature_matrix)
    return dataset,vectorizer

def fit_data(datafile,feature_extractor, vectorizer):
    feature_matrix = []
    for tweet in datafile:
        feature_dict = feature_extractor(tweet)
        feature_matrix.append(feature_dict)
    dataset = vectorizer.transform(feature_matrix)
    return dataset

dataset, vectorizer = prepare_data(np.concatenate((train_positive,train_negative)), get_BOW_lowered_no_stopwords)
# print(dataset[1])
# dataset._shape
vectorized_dev = fit_data(np.concatenate((dev_positive,dev_negative)), get_BOW_lowered_no_stopwords, vectorizer)
vectorized_test = fit_data(np.concatenate((test_positive,test_negative)), get_BOW_lowered_no_stopwords, vectorizer)

train_X = dataset
train_y = np.concatenate((np.zeros(len(train_positive)),np.ones(len(train_negative))))
from scipy.sparse import coo_matrix
train_X_sparse = coo_matrix(train_X)
from sklearn.utils import shuffle
train_X, train_X_sparse, train_y = shuffle(train_X, train_X_sparse, train_y, random_state=0)
# print(vectorized_dev_positive.shape)

from sklearn.feature_extraction.text import TfidfTransformer
transformer = TfidfTransformer(smooth_idf=False,norm=None)
train_X = transformer.fit_transform(train_X)

dev_X = vectorized_dev
dev_y = np.concatenate((np.zeros(len(dev_positive)),np.ones(len(dev_negative))))
dev_X_sparse = coo_matrix(dev_X)
# (the original assigned the shuffled sparse matrix to train_X_sparse, a typo)
dev_X, dev_X_sparse, dev_y = shuffle(dev_X, dev_X_sparse, dev_y, random_state=0)
dev_X = transformer.transform(dev_X)

test_X = vectorized_test
test_y = np.concatenate((np.zeros(len(test_positive)),np.ones(len(test_negative))))
test_X_sparse = coo_matrix(test_X)
test_X, test_X_sparse, test_y = shuffle(test_X, test_X_sparse, test_y, random_state=0)
test_X = transformer.transform(test_X)
```

<b>Instructions</b>: Now, let's
build some classifiers. Here, we'll be comparing Naive Bayes and Logistic Regression. For each, you need to first find a good value for their main regularisation (hyper)parameters, which you should identify using the scikit-learn docs or other resources. Use the development set you created for this tuning process; do <b>not</b> use cross-validation in the training set, or involve the test set in any way. You don't need to show all your work, but you do need to print out the accuracy with enough different settings to strongly suggest you have found an optimal or near-optimal choice. We should not need to look at your code to interpret the output. (1.0)

```
%matplotlib inline
from sklearn.naive_bayes import MultinomialNB
from sklearn.linear_model import LogisticRegression
import matplotlib.pyplot as plt

alpha_list = []
score_list = []
for i in range(1,100):
    alpha = i*0.1
    alpha_list.append(alpha)
    nb_cls = MultinomialNB(alpha = alpha)
    # Fit on the training set and evaluate on the development set (the
    # original fit on dev_X, which only measures training accuracy on dev)
    nb_cls.fit(train_X, train_y)
    acc = nb_cls.score(dev_X, dev_y)
    score_list.append(acc)

plt.xlabel('Alpha')
plt.ylabel('Accuracy')
plt.title('MultinomialNB Parameter Tuning: Alpha')
plt.plot(alpha_list,score_list,'b-')
plt.show()

optimal_alpha = alpha_list[np.argmax(np.array(score_list))]
print('Optimal value of alpha:'+str(optimal_alpha))

C = [0.001, 0.01, 0.1, 1, 10, 100, 1000]
score_list = []
for c in C:
    lr_cls = LogisticRegression(C = c)
    lr_cls.fit(train_X, train_y)
    acc = lr_cls.score(dev_X, dev_y)
    score_list.append(acc)

plt.xlabel('C')
plt.ylabel('Accuracy')
plt.title('LogisticRegression Parameter Tuning: C, Penalty=L2')
plt.plot(C,score_list,'b-')
plt.show()

score_list_l1 = []
for c in C:
    lr_cls = LogisticRegression(C = c,penalty = 'l1')
    lr_cls.fit(train_X, train_y)
    acc = lr_cls.score(dev_X, dev_y)
    score_list_l1.append(acc)

plt.xlabel('C')
plt.ylabel('Accuracy')
plt.title('LogisticRegression Parameter Tuning: C, Penalty=L1')
plt.plot(C,score_list_l1,'b-')
plt.show()

optimal_c = C[np.argmax(np.array(score_list))]
optimal_penalty = 'l2'
if(np.max(np.array(score_list))<np.max(np.array(score_list_l1))):
    optimal_c = C[np.argmax(np.array(score_list_l1))]
    optimal_penalty = 'l1'
print('Optimal value of C and Penalty:'+str(optimal_c)+' '+str(optimal_penalty))
```

<b>Instructions</b>: Using the best settings you have found, compare the two classifiers based on performance in the test set. Print out both accuracy and macro-averaged f-score for each classifier. Be sure to label your output. (0.5)

```
from sklearn.naive_bayes import MultinomialNB
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import classification_report
from sklearn.metrics import accuracy_score

nb_cls = MultinomialNB(alpha = optimal_alpha)
nb_cls.fit(train_X, train_y)
y_pred = nb_cls.predict(test_X)
target_names = ['positive','negative']
print('MultinomialNB Classification Report:\r\n')
print(classification_report(test_y,y_pred, target_names=target_names))
print('Accuracy: '+str(accuracy_score(test_y,y_pred)))

lr_cls = LogisticRegression(C = optimal_c, penalty = optimal_penalty)
lr_cls.fit(train_X, train_y)
y_pred = lr_cls.predict(test_X)
print('-------------------------------------------------------------')
print('-------------------------------------------------------------')
print('\r\n\r\nLogisticRegression Classification Report:\r\n')
print(classification_report(test_y,y_pred, target_names=target_names))
print('Accuracy: '+str(accuracy_score(test_y,y_pred)))
```
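The reports above include the macro-averaged F-score. Macro averaging is simply the unweighted mean of the per-class F1 scores, so both classes count equally regardless of how many tweets they contain. A tiny hand-worked check against scikit-learn, using made-up labels (0 = positive, 1 = negative, mirroring the encoding above):

```python
from sklearn.metrics import f1_score

# Made-up gold labels and predictions for a tiny two-class problem
y_true = [0, 0, 0, 1, 1, 1]
y_pred = [0, 0, 1, 1, 1, 0]

def f1(tp, fp, fn):
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Counts read off the two vectors above (floats to avoid integer division)
f1_class0 = f1(tp=2.0, fp=1.0, fn=1.0)
f1_class1 = f1(tp=2.0, fp=1.0, fn=1.0)
macro_f1 = (f1_class0 + f1_class1) / 2

print(macro_f1)
print(f1_score(y_true, y_pred, average='macro'))
```

With the balanced positive/negative split used in this homework the distinction matters little, but on skewed data macro F1 can diverge sharply from accuracy, which is why the assignment asks for both.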
# Overview
- Improves on nb015
- Uses the folds from nb020
- The top-8 drugs are distributed evenly across the folds

```
# git hash
import subprocess
cmd = "git rev-parse --short HEAD"
hash = subprocess.check_output(cmd.split()).strip().decode('utf-8')
print(hash)
```

# Const

```
# basic
NB = '021'
DEBUG = False
isPI = False
isShowLog = True
PATH_TRAIN = '../data_ignore/input/train_features.csv'
PATH_TRAIN_SCORED = '../data_ignore/input/train_targets_scored.csv'
PATH_TRAIN_NONSCORED = '../data_ignore/input/train_targets_nonscored.csv'
PATH_SUB = '../data_ignore/input/sample_submission.csv'
PATH_TEST = '../data_ignore/input/test_features.csv'
SAVE_DIR = f'../data_ignore/output_nb/nb{NB}/'
PATH_DRUGID = '../data_ignore/input/train_drug.csv'
PATH_GROUP696 = './../data_ignore/output_nb/nb004/group.csv'
PATH_ESTIMATED_LOGLOSS = './../data_ignore/output_nb/nb017/estimated_logloss.csv'
TOP8_DRUG = ['87d714366', '9f80f3f77', '8b87a7a83', '5628cb3ee',
             'd08af5d4b', '292ab2c28', 'd50f18348', 'd1b47f29d']

settings_str = """
globals:
  seed: 2020
  device: cuda
  num_epochs: 45
dataset:
  name:
  params:
split:
  name: MultiStratifiedKFold
  params:
    n_splits: 5
    random_state: 42
    shuffle: True
loader:
  train:
    batch_size: 512
    shuffle: True
    num_workers: 10
    pin_memory: True
    drop_last: True
  val:
    batch_size: 512
    shuffle: False
    num_workers: 10
    pin_memory: True
    drop_last: False
model:
  name:
  params:
loss:
  name: SmoothLogitsLoss
  params: {}
optimizer:
  name: Adam
  params:
    lr: 0.005
scheduler:
  name: CosineAnnealingLR
  params:
    T_max: 10
"""
```

# Import everything I need :)

```
import os
import time
import yaml
import random
import numpy as np
import pandas as pd
from glob import glob
from pdb import set_trace as st
from fastprogress import progress_bar
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn.metrics import log_loss
from sklearn.model_selection import KFold
from iterstrat.ml_stratifiers import MultilabelStratifiedKFold
import torch
import torch.nn as nn
import torch.optim as optim
import torch.nn.functional as F
from torch.nn.modules.loss
import _WeightedLoss from torch.utils.data import Dataset, DataLoader import warnings warnings.filterwarnings('ignore') ``` # My func ``` def preprocess(df_): df = df_.copy() df.loc[:, 'cp_type'] = df.loc[:, 'cp_type'].map({'trt_cp': 0, 'ctl_vehicle': 1}) df.loc[:, 'cp_dose'] = df.loc[:, 'cp_dose'].map({'D1': 0, 'D2': 1}) # df.loc[:, 'cp_time'] = df.loc[:, 'cp_time'].map({24: 0, 48: 1, 72: 2}) del df['sig_id'] return df def remove_ctl_cp(features_, target_): features = features_.copy() target = target_.copy() # bools = features['cp_type'] != 'ctl_vehicle' bools = features['cp_type'] != 1 features = features[bools].reset_index(drop=True) features = features.drop(['cp_type'], axis=1).values target = target[bools].reset_index(drop=True).values return features, target def add_ctl_cp_oof(oof): oof_new = np.zeros_like(train_targets).astype(float) bools = train_features['cp_type'] != 'ctl_vehicle' oof_new[bools, :] = oof return oof_new def seed_everything(seed): random.seed(seed) np.random.seed(seed) torch.manual_seed(seed) torch.cuda.manual_seed(seed) torch.backends.cudnn.deterministic = True torch.backends.cudnn.benchmark = True class permutation_importance(): def __init__(self, model, metric): self.is_computed = False self.n_feat = 0 self.base_score = 0 self.model = model self.metric = metric self.df_result = [] def compute(self, _X_valid, y_valid): X_valid = pd.DataFrame(_X_valid, columns=FEAT_COLUMNS) self.n_feat = len(X_valid.columns) val_set = MoaDataset(_X_valid, y_valid, mode='train') dataloaders = {'val': DataLoader(val_set, **settings['loader']['val'])} y_valid_pred = get_epoch_pred(self.model, device, dataloaders['val']) self.base_score = self.metric(y_valid, y_valid_pred) self.df_result = pd.DataFrame({'feat': X_valid.columns, 'score': np.zeros(self.n_feat), 'score_diff': np.zeros(self.n_feat)}) # predict for i, col in enumerate(progress_bar(X_valid.columns)): df_perm = X_valid.copy() np.random.seed(1) df_perm[col] = np.random.permutation(df_perm[col]) # 
y_valid_pred = self.model.predict(df_perm) val_set = MoaDataset(df_perm.values, y_valid, mode='train') dataloaders = {'val': DataLoader(val_set, **settings['loader']['val'])} y_valid_pred = get_epoch_pred(self.model, device, dataloaders['val']) score = self.metric(y_valid, y_valid_pred) self.df_result['score'][self.df_result['feat']==col] = score self.df_result['score_diff'][self.df_result['feat']==col] = self.base_score - score self.is_computed = True def get_negative_feature(self): assert self.is_computed!=False, 'compute メソッドが実行されていません' idx = self.df_result['score_diff'] < 0 return self.df_result.loc[idx, 'feat'].values.tolist() def get_positive_feature(self): assert self.is_computed!=False, 'compute メソッドが実行されていません' idx = self.df_result['score_diff'] > 0 return self.df_result.loc[idx, 'feat'].values.tolist() def show_permutation_importance(self, score_type='loss'): '''score_type = 'loss' or 'accuracy' ''' assert self.is_computed!=False, 'compute メソッドが実行されていません' if score_type=='loss': ascending = True elif score_type=='accuracy': ascending = False else: ascending = '' plt.figure(figsize=(15, int(0.25*self.n_feat))) sns.barplot(x="score_diff", y="feat", data=self.df_result.sort_values(by="score_diff", ascending=ascending)) plt.title('base_score - permutation_score') def get_not_drug_leak_folds(n_splits, train_features, train_drug, gruoup696): ''' n_splits だけfoldを作成する。 ただし、cp_type = ctl_vehicle と、top8にはfold=-1を割り振っている。 696group のcsv: https://www.kaggle.com/fkubota/moa-nb004-696group ::example:: train_features = pd.read_csv("train_features.csv") train_drug = pd.read_csv("train_drug.csv") group696 = pd.read_csv("MoA_nb004_696group/group.csv") df_fold = get_not_drug_leak_folds(5, train_features, train_drug, group696) ''' TOP8_DRUG = ['87d714366', '9f80f3f77', '8b87a7a83', '5628cb3ee', 'd08af5d4b', '292ab2c28', 'd50f18348', 'd1b47f29d'] mask_trt = (train_features['cp_type'] == 'trt_cp').values # mask_top8 を作成 mask_top8 = [] for drug_id in train_drug.drug_id.values: if 
drug_id in TOP8_DRUG: mask_top8.append(True) else: mask_top8.append(False) mask_top8 = np.array(mask_top8) # trt かつ top8 以外を抜き出す # group = 0 は要素数が多いので一番最後にやるようにする drug_groups = group696[mask_trt & ~mask_top8].group.values groups = np.sort(group696[mask_trt & ~mask_top8].group.unique()) groups = groups[1:] groups = np.append(groups, 0) # 各グループにfoldを割り振る tile = [] train_drug_trt = train_drug[mask_trt & ~mask_top8] train_drug_trt['fold'] = -1 for i_grp, grp in enumerate(groups): if i_grp == 0: tile = np.arange(1, n_splits+1).astype(int) mask_grp = drug_groups == grp drug_rank = train_drug[mask_trt & ~mask_top8][mask_grp].drug_id.value_counts() n_repeat = np.ceil(len(drug_rank)/n_splits).astype(int) folds = np.tile(tile, n_repeat)[:len(drug_rank)] for i, drug_id in enumerate(drug_rank.index.sort_values()): mask = train_drug_trt.drug_id.values == drug_id train_drug_trt.fold[mask] = folds[i] tile = train_drug_trt.fold.value_counts()[::-1][:n_splits].index train_drug_fold = train_drug.copy() train_drug_fold['fold'] = -1 train_drug_fold['fold'][mask_trt & ~mask_top8] = train_drug_trt.fold.values return train_drug_fold class MoaModel(nn.Module): def __init__(self, n_input, n_output): super(MoaModel, self).__init__() self.batch_norm1 = nn.BatchNorm1d(n_input) self.dropout1 = nn.Dropout(0.2) self.dense1 = nn.utils.weight_norm(nn.Linear(n_input, 2048)) self.batch_norm2 = nn.BatchNorm1d(2048) self.dropout2 = nn.Dropout(0.5) self.dense2 = nn.utils.weight_norm(nn.Linear(2048, 1048)) self.batch_norm3 = nn.BatchNorm1d(1048) self.dropout3 = nn.Dropout(0.5) # self.dense3 = nn.utils.weight_norm(nn.Linear(1048, 206)) self.dense3 = nn.utils.weight_norm(nn.Linear(1048, n_output)) def forward(self, x): x = self.batch_norm1(x) x = self.dropout1(x) x = F.relu(self.dense1(x)) x = self.batch_norm2(x) x = self.dropout2(x) x = F.relu(self.dense2(x)) x = self.batch_norm3(x) x = self.dropout3(x) x_raw = self.dense3(x) x_sigmoid = F.sigmoid(x_raw) return x_sigmoid, x_raw class MoaDataset(Dataset): 
def __init__(self, df, targets, mode): self.mode = mode self.df = df # self.targets = targets if mode=='train': self.targets = targets def __len__(self): return len(self.df) def __getitem__(self, idx): if self.mode == 'train': return torch.FloatTensor(self.df[idx]), torch.FloatTensor(self.targets[idx]) elif self.mode == 'val': return torch.FloatTensor(self.df[idx]), 0 def mean_log_loss(y_true, y_pred): metrics = [] # for i in range(y_true.shape[1]): # metrics.append(log_loss(y_true[:, i], y_pred[:, i].astype(float), labels=[0,1])) # return np.mean(metrics) y_true = y_true.astype(np.float64).ravel() y_pred = y_pred.astype(np.float64).ravel() return log_loss(y_true, y_pred, labels=[0, 1]) class SmoothBCEwLogits(_WeightedLoss): def __init__(self, weight=None, reduction='mean', smoothing=0.001): super().__init__(weight=weight, reduction=reduction) self.smoothing = smoothing self.weight = weight self.reduction = reduction @staticmethod def _smooth(targets:torch.Tensor, n_labels:int, smoothing=0.0): assert 0 <= smoothing < 1 with torch.no_grad(): targets = targets * (1.0 - smoothing) + 0.5 * smoothing return targets def forward(self, inputs, targets): targets = SmoothBCEwLogits._smooth(targets, inputs.size(-1), self.smoothing) loss = F.binary_cross_entropy_with_logits(inputs, targets,self.weight) if self.reduction == 'sum': loss = loss.sum() elif self.reduction == 'mean': loss = loss.mean() return loss class EarlyStopping: """ Early stops the training if validation loss doesn't improve after a given patience. https://github.com/Bjarten/early-stopping-pytorch/blob/master/pytorchtools.py """ def __init__(self, patience=7, verbose=False, delta=0, path='checkpoint.pt', trace_func=print): """ Args: patience (int): How long to wait after last time validation loss improved. Default: 7 verbose (bool): If True, prints a message for each validation loss improvement. Default: False delta (float): Minimum change in the monitored quantity to qualify as an improvement. 
Default: 0 path (str): Path for the checkpoint to be saved to. Default: 'checkpoint.pt' trace_func (function): trace print function. Default: print """ self.patience = patience self.verbose = verbose self.counter = 0 self.best_score = None self.early_stop = False self.val_loss_min = np.Inf self.delta = delta self.path = path self.trace_func = trace_func # self.best_state_dict = {} def __call__(self, val_loss, model): score = -val_loss if self.best_score is None: self.best_score = score self.save_checkpoint(val_loss, model) elif score < self.best_score + self.delta: self.counter += 1 if self.verbose: self.trace_func(f'EarlyStopping counter: {self.counter} out of {self.patience}') if self.counter >= self.patience: self.early_stop = True else: self.best_score = score self.save_checkpoint(val_loss, model) self.counter = 0 def save_checkpoint(self, val_loss, model): '''Saves model when validation loss decrease.''' if self.verbose: self.trace_func(f'Validation loss decreased ({self.val_loss_min:.6f} --> {val_loss:.6f}). 
Saving model ...') # if not DEBUG: torch.save(model.state_dict(), self.path) # self.best_state_dict = model.state_dict() self.val_loss_min = val_loss def train_model(model, device, train_loader, optimizer, scheduler, criterion): model.train() running_loss = 0.0 for i, (x, y) in enumerate(train_loader): x, y = x.to(device), y.to(device) optimizer.zero_grad() with torch.set_grad_enabled(True): pred_sigmoid, pred_raw = model(x) loss = criterion(pred_raw, y) loss.backward() optimizer.step() running_loss += loss.item() / len(train_loader) scheduler.step() return running_loss def get_epoch_loss_score(model, device, valid_loader, criterion, optimizer): model.eval() running_loss = 0.0 targets = [] preds = [] for i, (x, y) in enumerate(valid_loader): x, y = x.to(device), y.to(device) optimizer.zero_grad() with torch.set_grad_enabled(False): pred_sigmoid, pred_raw = model(x) loss = criterion(pred_raw, y) running_loss += loss.item() / len(valid_loader) targets.append(y) preds.append(pred_sigmoid) targets = torch.cat(targets, dim=0).cpu().numpy() preds = torch.cat(preds, dim=0).cpu().numpy() _mean_log_loss = mean_log_loss(targets, preds) return running_loss, _mean_log_loss, preds def get_epoch_pred(model, device, valid_loader): model.eval() targets = [] preds = [] for i, (x, y) in enumerate(valid_loader): x, y = x.to(device), y.to(device) with torch.set_grad_enabled(False): pred_sigmoid, pred_raw = model(x) targets.append(y) preds.append(pred_sigmoid) targets = torch.cat(targets, dim=0).cpu().numpy() preds = torch.cat(preds, dim=0).cpu().numpy() return preds def run_fold(dataloaders, shape, checkpoint_path, ModelClass, show_log=True): device = torch.device("cuda") model = ModelClass(shape[0], shape[1]).to(device) # model = ModelClass(train.shape[1], ).to(device) early_stopping = EarlyStopping(patience=15, verbose=show_log, path=checkpoint_path) optimizer = optim.__getattribute__(settings['optimizer']['name'])( model.parameters(), **settings['optimizer']['params']) scheduler = 
optim.lr_scheduler.__getattribute__(settings['scheduler']['name'])( optimizer, **settings['scheduler']['params']) best_valid_loss = np.inf best_mean_log_loss = np.inf best_preds = 0 val_losses = [] trn_losses = [] for epoch in range(n_epochs): train_loss = train_model(model, device, dataloaders['train'], optimizer, scheduler, criterion) valid_loss, _mean_log_loss, preds = get_epoch_loss_score(model, device, dataloaders['val'], criterion, optimizer) trn_losses.append(train_loss) val_losses.append(valid_loss) if show_log: print(f"Epoch {str(epoch+1).zfill(2)}/{n_epochs } loss: {train_loss:5.5f} val_loss: {valid_loss:5.5f} mean_log_loss: {_mean_log_loss:5.5f}") early_stopping(valid_loss, model) if early_stopping.early_stop: print("Early stopping") break if valid_loss < best_valid_loss: best_valid_loss = valid_loss best_mean_log_loss = _mean_log_loss best_preds = preds return best_mean_log_loss, best_preds, trn_losses, val_losses def run(splitter, train, targets, ModelClass, show_log=True, pi=False): mean_log_loss_list = [] oof = np.zeros_like(targets).astype(float) df_pi = pd.DataFrame(columns=['feat', 'score_diff']) for n, (idx_trn, idx_val) in enumerate(splitter.split(train, targets)): print('-'*100) print(f':: start fold {n+1}/{n_splits} at {time.ctime()} ::') print('-'*100) X_trn, X_val = train[idx_trn], train[idx_val] y_trn, y_val = targets[idx_trn], targets[idx_val] train_set = MoaDataset(X_trn, y_trn, mode='train') val_set = MoaDataset(X_val, y_val, mode='train') dataloaders = { 'train': DataLoader(train_set, **settings['loader']['train']), 'val': DataLoader(val_set, **settings['loader']['val']), } checkpoint_path = f'{SAVE_DIR}Fold{n+1}of{n_splits}.pt' shape = (X_trn.shape[1], y_trn.shape[1]) best_mean_log_loss, best_preds, trn_losses, val_losses = run_fold(dataloaders, shape, checkpoint_path, ModelClass, show_log=show_log) # result print(f':: best mean_log_loss: {best_mean_log_loss:5.5f} ::') mean_log_loss_list.append(best_mean_log_loss) oof[idx_val, :] = 
best_preds # permutation importance if pi: device = torch.device("cuda") model = ModelClass(shape[0], shape[1]).to(device) state_dict = torch.load(checkpoint_path) model.load_state_dict(state_dict) model.to(device) model.eval() pi = permutation_importance(model, mean_log_loss) # model と metric を渡す pi.compute(X_val, y_val) pi_result = pi.df_result df_pi = pd.concat([df_pi, pi_result[['feat', 'score_diff']]]) # pi.show_permutation_importance(score_type='loss') # plot if show_log: x = np.arange(1, len(trn_losses)+1) plt.figure(figsize=(12, 7)) plt.plot(x[1:], trn_losses[1:], '--.', label='train') plt.plot(x[1:], val_losses[1:], '--.', label='valid') plt.title(f"fold{n+1}/{n_splits} {settings['loss']['name']}") plt.legend() plt.show() print('\n') if pi: # permutation score plt.figure(figsize=(15, int(0.25*len(FEAT_COLUMNS)))) order = df_pi.groupby(["feat"]).mean()['score_diff'].reset_index().sort_values('score_diff', ascending=True) sns.barplot(x="score_diff", y="feat", data=df_pi, order=order['feat']) plt.title('base_score - permutation_score') plt.show() return mean_log_loss_list, oof, df_pi def run_not_drug_leak(df_fold, train, targets, ModelClass, show_log=True, pi=False): mean_log_loss_list = [] oof = np.zeros_like(targets).astype(float) df_pi = pd.DataFrame(columns=['feat', 'score_diff']) # for n, (idx_trn, idx_val) in enumerate(splitter.split(train, targets)): for n, fold_i in enumerate(df_fold['fold'].unique()): print('-'*100) print(f':: start fold {n+1}/{n_splits} at {time.ctime()} ::') print('-'*100) mask_fold = df_fold.fold == fold_i X_trn, X_val = train[~mask_fold], train[mask_fold] y_trn, y_val = targets[~mask_fold], targets[mask_fold] train_set = MoaDataset(X_trn, y_trn, mode='train') val_set = MoaDataset(X_val, y_val, mode='train') dataloaders = { 'train': DataLoader(train_set, **settings['loader']['train']), 'val': DataLoader(val_set, **settings['loader']['val']), } checkpoint_path = f'{SAVE_DIR}Fold{n+1}of{n_splits}.pt' shape = (X_trn.shape[1], 
y_trn.shape[1]) best_mean_log_loss, best_preds, trn_losses, val_losses = run_fold(dataloaders, shape, checkpoint_path, ModelClass, show_log=show_log) # result print(f':: best mean_log_loss: {best_mean_log_loss:5.5f} ::') mean_log_loss_list.append(best_mean_log_loss) # oof[idx_val, :] = best_preds oof[mask_fold, :] = best_preds # permutation importance if pi: device = torch.device("cuda") model = ModelClass(shape[0], shape[1]).to(device) state_dict = torch.load(checkpoint_path) model.load_state_dict(state_dict) model.to(device) model.eval() pi = permutation_importance(model, mean_log_loss) # model と metric を渡す pi.compute(X_val, y_val) pi_result = pi.df_result df_pi = pd.concat([df_pi, pi_result[['feat', 'score_diff']]]) # pi.show_permutation_importance(score_type='loss') # plot if show_log: x = np.arange(1, len(trn_losses)+1) plt.figure(figsize=(12, 7)) plt.plot(x[1:], trn_losses[1:], '--.', label='train') plt.plot(x[1:], val_losses[1:], '--.', label='valid') plt.title(f"fold{n+1}/{n_splits} {settings['loss']['name']}") plt.legend() plt.show() print('\n') if pi: # permutation score plt.figure(figsize=(15, int(0.25*len(FEAT_COLUMNS)))) order = df_pi.groupby(["feat"]).mean()['score_diff'].reset_index().sort_values('score_diff', ascending=True) sns.barplot(x="score_diff", y="feat", data=df_pi, order=order['feat']) plt.title('base_score - permutation_score') plt.show() return mean_log_loss_list, oof, df_pi ``` # Preparation set ``` settings = yaml.safe_load(settings_str) seed_everything(settings['globals']['seed']) sns.set() sns.set_context('talk') if not os.path.exists(SAVE_DIR): os.makedirs(SAVE_DIR) if DEBUG: settings['split']['params']['n_splits'] = 2 settings['globals']['num_epochs'] = 3 ``` <br> load dataset ``` train_features = pd.read_csv(PATH_TRAIN) train_targets = pd.read_csv(PATH_TRAIN_SCORED) # test_features = pd.read_csv(PATH_TEST) train_drug = pd.read_csv(PATH_DRUGID) group696 = pd.read_csv(PATH_GROUP696) # ss = pd.read_csv(PATH_SUB) # mask_top8 を作成 
mask_top8 = [] for drug_id in train_drug.drug_id.values: if drug_id in TOP8_DRUG: mask_top8.append(True) else: mask_top8.append(False) mask_top8 = np.array(mask_top8) end_col = 10 step_row = 11 if DEBUG: print(':: debug mode ::') train_features = train_features.iloc[::step_row, :end_col].reset_index(drop=True) train_targets = train_targets.iloc[::step_row, :].reset_index(drop=True) mask_top8 = mask_top8[::step_row] train_drug = train_drug.iloc[::step_row, :].reset_index(drop=True) group696 = group696.iloc[::step_row, :].reset_index(drop=True) # test_features = test_features.iloc[::100, :] ``` <br> preprocess ``` mask_trt = (train_features['cp_type'] == 'trt_cp').values train = preprocess(train_features) FEAT_COLUMNS = train_features.columns[2:] # test = preprocess(test_features).values del train_targets['sig_id'] target_cols = [col for col in train_targets.columns] train, targets = remove_ctl_cp(train, train_targets) # train_targets = train_targets.loc[train['cp_type']==0].reset_index(drop=True).values # train = train.loc[train['cp_type']==0].reset_index(drop=True).values print(f'train shape: {train.shape}') # print(f'test shape: {test.shape}') print(f'train_targets shape: {targets.shape}') ``` <br> fold分割 ``` %%time df_fold = get_not_drug_leak_folds(settings['split']['params']['n_splits'], train_features, train_drug, group696) splitter = KFold(n_splits=settings['split']['params']['n_splits'], random_state=1, shuffle=True) for top8_i in range(len(TOP8_DRUG)): mask_drug = df_fold['drug_id'] == TOP8_DRUG[top8_i] for fold_i, (train_idx, valid_idx) in enumerate(splitter.split(df_fold[mask_drug])): # df_fold[['fold']][mask_drug].iloc[valid_idx, :] = fold_i + 1 # df_fold[['fold']][mask_drug] = fold_i + 1 _df_fold = df_fold[mask_drug] _df_fold.fold.values[valid_idx] = fold_i + 1 df_fold.fold[mask_drug] = _df_fold.fold.values print(df_fold.fold.unique()) print(df_fold[mask_trt].fold.unique()) ``` # Create model ``` n_splits = settings['split']['params']['n_splits'] 
n_epochs = settings['globals']['num_epochs']
splitter = MultilabelStratifiedKFold(**settings['split']['params'])
device = settings['globals']['device']
# criterion = criterion_ = nn.__getattribute__(
#     settings['loss']['name'])(**settings['loss']['params'])
criterion = SmoothBCEwLogits(**settings['loss']['params'])

%%time
# mean_log_loss_list, _oof, df_pi = run(splitter, train, targets, MoaModel, show_log=isShowLog, pi=isPI)
mean_log_loss_list, _oof, df_pi = run_not_drug_leak(df_fold[mask_trt].reset_index(),
                                                    train, targets, MoaModel,
                                                    show_log=isShowLog, pi=isPI)

# result
mean_mean_log_loss = np.mean(mean_log_loss_list)
std_mean_log_loss = np.std(mean_log_loss_list)
oof = add_ctl_cp_oof(_oof)
oof_score = mean_log_loss(train_targets.values, oof)
print('-'*100)
print(f"mean_log_loss(all fold): {mean_mean_log_loss:5.6f} +- {std_mean_log_loss:5.6f}")
print(f"mean_log_loss(oof): {oof_score:5.6f}")
print('-'*100)
```

# Save oof

```
_df_oof = pd.DataFrame(_oof, columns=target_cols)
_df_oof['fold'] = -1
for i, (idx_trn, idx_val) in enumerate(splitter.split(train, targets)):
    _df_oof.iloc[idx_val, -1] = i + 1
df_oof = pd.DataFrame(oof, columns=target_cols)
df_oof['fold'] = -1
df_oof['fold'][mask_trt] = _df_oof['fold'].values
save_path = f'{SAVE_DIR}oof.csv'
df_oof.to_csv(save_path, index=False)
```

# Save permutation importance

```
save_path = f'{SAVE_DIR}permutation_importance.csv'
save_path
if isPI:
    df_pi.to_csv(save_path, index=False)
```

# Analysis

```
n_list = np.array(train_targets.sum(axis=0).values)
logloss_cols = np.array([log_loss(train_targets.iloc[:, i], df_oof.iloc[:, i])
                         for i in range(len(target_cols))])
df_logloss = pd.read_csv(PATH_ESTIMATED_LOGLOSS)

plt.figure(figsize=(8, 8))
plt.scatter(n_list, logloss_cols, alpha=0.3)
plt.scatter(df_logloss.n_target, df_logloss.best_loss_list, s=2, color='red', label='estimated')
plt.xlabel('n target==1')
plt.ylabel('logloss per target cols')
plt.legend()
```
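As a sanity check on the label smoothing used by `SmoothBCEwLogits` above: its `_smooth` static method maps each target t to t*(1 - s) + 0.5*s, which pulls hard 0/1 labels slightly toward 0.5 before the BCE-with-logits loss is computed. A standalone check with the notebook's default smoothing of 0.001 (pure Python, no torch needed):

```python
def smooth_targets(targets, smoothing=0.001):
    """Same transform as SmoothBCEwLogits._smooth: t -> t*(1-s) + 0.5*s."""
    assert 0 <= smoothing < 1
    return [t * (1.0 - smoothing) + 0.5 * smoothing for t in targets]

# Hard labels 0 and 1 are pulled symmetrically toward 0.5
print(smooth_targets([0.0, 1.0]))  # ≈ [0.0005, 0.9995]
```

Since the targets never reach exactly 0 or 1, the loss can no longer be driven to zero by pushing logits to ±∞, which acts as a mild regulariser on this heavily multi-label problem.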
```
import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data

mnist = input_data.read_data_sets("MNIST_data/", one_hot=True)

weights = []
for alpha in [.5]:
    x = tf.placeholder(tf.float32, [None, 784])
    W = tf.Variable(tf.zeros([784, 10]))
    b = tf.Variable(tf.zeros([10]))
    y = tf.nn.softmax(tf.matmul(x, W) + b)
    y_ = tf.placeholder(tf.float32, [None, 10])
    cross_entropy = tf.reduce_mean(-tf.reduce_sum(y_ * tf.log(y), reduction_indices=[1]))
    train_step = tf.train.GradientDescentOptimizer(0.5).minimize(cross_entropy)
    init = tf.initialize_all_variables()
    correct_prediction = tf.equal(tf.argmax(y, 1), tf.argmax(y_, 1))
    accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
    sess = tf.Session()
    sess.run(init)
    for i in range(1000):
        batch_xs, batch_ys = mnist.train.next_batch(100)
        sess.run(train_step, feed_dict={x: batch_xs, y_: batch_ys})
        if i % 100 == 0:
            print(sess.run(accuracy, feed_dict={x: mnist.test.images, y_: mnist.test.labels}))
    print(W.eval(sess))
    weights.append(W.eval(sess))

print(weights[0][14])

# NOTE: FLAGS is not defined in this notebook; this line is leftover from the TF MNIST tutorial
data_sets = input_data.read_data_sets(FLAGS.train_dir, FLAGS.fake_data)

import numpy as np

# Simple network: given three integers a, b, c in [-100, 100], chooses three random x-values and evaluates
# the quadratic function a*x^2 + b*x + c at those values.
def func(x,a,b,c): return x*x*a + x*b + c def generatecandidate3(a,b,c): candidate = [np.random.random() for x in xrange(1)] candidatesolutions = [func(x,a,b,c) for x in candidate] return candidate, candidatesolutions def generatecandidate4(a,b,c,tot): candidate = [[np.random.random() for x in xrange(1)] for y in xrange(tot)] candidatesolutions = [[func(x[0],a,b,c)] for x in candidate] return (candidate, candidatesolutions) # Import MINST data from tensorflow.examples.tutorials.mnist import input_data mnist = input_data.read_data_sets("/tmp/data/", one_hot=True) import tensorflow as tf # Parameters learning_rate = 0.1 training_epochs = 15 batch_size = 1000 display_step = 1 # Network Parameters n_hidden_1 = 4 # 1st layer number of features n_hidden_2 = 4 # 2nd layer number of features n_input = 1 # Guess quadratic function n_classes = 1 # # tf Graph input x = tf.placeholder("float", [None, n_input]) y = tf.placeholder("float", [None, n_classes]) # Create model class multilayer_perceptron(): #weights = {} #biases = {} def __init__(self, w=0, b=0, ind='00'): self.index = ind #used for reading values from file #See the filesystem convention below (is this really necessary?) 
#I'm going to eschew writing to file for now because I'll be generating too many files #Currently, the last value of the parameters is stored in self.params to be read learning_rate = 0.01 training_epochs = 15 batch_size = 1000 display_step = 1 # Network Parameters n_hidden_1 = 4 # 1st layer number of features n_hidden_2 = 4 # 2nd layer number of features n_input = 1 # Guess quadratic function n_classes = 1 # self.g = tf.Graph() self.params = [] with self.g.as_default(): #Note that by default, weights and biases will be initialized to random normal dists if w==0: self.weights = { 'h1': tf.Variable(tf.random_normal([n_input, n_hidden_1])), 'h2': tf.Variable(tf.random_normal([n_hidden_1, n_hidden_2])), 'out': tf.Variable(tf.random_normal([n_hidden_2, n_classes])) } self.weightslist = [self.weights['h1'],self.weights['h2'],self.weights['out']] self.biases = { 'b1': tf.Variable(tf.random_normal([n_hidden_1])), 'b2': tf.Variable(tf.random_normal([n_hidden_2])), 'out': tf.Variable(tf.random_normal([n_classes])) } self.biaseslist = [self.biases['b1'],self.biases['b2'],self.biases['out']] else: self.weights = { 'h1': tf.Variable(w[0]), 'h2': tf.Variable(w[1]), 'out': tf.Variable(w[2]) } self.weightslist = [self.weights['h1'],self.weights['h2'],self.weights['out']] self.biases = { 'b1': tf.Variable(b[0]), 'b2': tf.Variable(b[1]), 'out': tf.Variable(b[2]) } self.biaseslist = [self.biases['b1'],self.biases['b2'],self.biases['out']] self.saver = tf.train.Saver() def UpdateWeights(self, w, b): with self.g.as_default(): self.weights = { 'h1': tf.Variable(w[0]), 'h2': tf.Variable(w[1]), 'out': tf.Variable(w[2]) } self.weightslist = [self.weights['h1'],self.weights['h2'],self.weights['out']] self.biases = { 'b1': tf.Variable(b[0]), 'b2': tf.Variable(b[1]), 'out': tf.Variable(b[2]) } self.biaseslist = [self.biases['b1'],self.biases['b2'],self.biases['out']] def predict(self, x): with self.g.as_default(): layer_1 = tf.add(tf.matmul(x, self.weights['h1']), self.biases['b1']) layer_1 
= tf.nn.relu(layer_1) # Hidden layer with RELU activation layer_2 = tf.add(tf.matmul(layer_1, self.weights['h2']), self.biases['b2']) layer_2 = tf.nn.relu(layer_2) # Output layer with linear activation out_layer = tf.matmul(layer_2, self.weights['out']) + self.biases['out'] return out_layer def ReturnParamsAsList(self): with self.g.as_default(): with tf.Session() as sess: # Restore variables from disk self.saver.restore(sess, "/home/dfreeman/PythonFun/tmp/model"+str(self.index)+".ckpt") return sess.run(self.weightslist), sess.run(self.biaseslist) '''def multilayer_perceptron(x, weights, biases): # Hidden layer with RELU activation layer_1 = tf.add(tf.matmul(x, weights['h1']), biases['b1']) layer_1 = tf.nn.relu(layer_1) # Hidden layer with RELU activation layer_2 = tf.add(tf.matmul(layer_1, weights['h2']), biases['b2']) layer_2 = tf.nn.relu(layer_2) # Output layer with linear activation out_layer = tf.matmul(layer_2, weights['out']) + biases['out'] return out_layer''' # Store layers weight & bias weights = { 'h1': tf.Variable(tf.random_normal([n_input, n_hidden_1])), 'h2': tf.Variable(tf.random_normal([n_hidden_1, n_hidden_2])), 'out': tf.Variable(tf.random_normal([n_hidden_2, n_classes])) } biases = { 'b1': tf.Variable(tf.random_normal([n_hidden_1])), 'b2': tf.Variable(tf.random_normal([n_hidden_2])), 'out': tf.Variable(tf.random_normal([n_classes])) } # Construct model test_model = multilayer_perceptron(weights, biases) pred = test_model.predict(x) # Define loss and optimizer #cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(pred, y)) cost = tf.reduce_mean(tf.square(pred-y)) optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate).minimize(cost) # Initializing the variables init = tf.initialize_all_variables() xtest, ytest = generatecandidate4(.5,.25,.1,1000) # Launch the graph with tf.Session() as sess: sess.run(init) # Training cycle for epoch in range(training_epochs): avg_cost = 0. 
total_batch = int(10000/batch_size) # Loop over all batches for i in range(total_batch): batch_x, batch_y = generatecandidate4(.5,.25,.1,batch_size) # Run optimization op (backprop) and cost op (to get loss value) _, c = sess.run([optimizer, cost], feed_dict={x: batch_x, y: batch_y}) # Compute average loss avg_cost += c / total_batch # Display logs per epoch step if epoch % display_step == 0: print "Epoch:", '%04d' % (epoch+1), "cost=", \ "{:.9f}".format(avg_cost) print "Optimization Finished!" # Test model correct_prediction = tf.equal(tf.argmax(pred, 1), tf.argmax(y, 1)) # Calculate accuracy accuracy = tf.reduce_mean(tf.cast(correct_prediction, "float")) print "Accuracy:", accuracy.eval({x: xtest, y: ytest}) x,y = mnist.train.next_batch(2) print x print y x,y = generatecandidate4(.5,.25,.1,2) print x print y #xdat,ydat = generatecandidate4(.5, .25, .1, 10) print xdat, ydat xdat = np.array(xdat) ydat = np.array(ydat) print func(xdat[0][0],.5,.25,.1) with models[0].g.as_default(): x = tf.placeholder("float", [None, n_input]) y = tf.placeholder("float", [None, n_classes]) pred = models[0].predict(x) #cost = tf.reduce_mean(tf.square(pred-y)) #optimizer = tf.train.GradientDescentOptimizer(learning_rate=learning_rate).minimize(cost) # Initializing the variables #init = tf.initialize_all_variables() #init = tf.initialize_local_variables() #init = tf.initialize_variables([x,y]) print "****" with tf.Session() as sess: #print sess.run(models[0].weights['h1']) #sess.run(init) models[0].saver.restore(sess, "/home/dfreeman/PythonFun/tmp/model0.ckpt") print sess.run(models[0].weights['h1']) print "*************" #print x.eval() correct_prediction = tf.reduce_mean(tf.square(pred-y)) #correct_prediction = tf.sub(pred,y) #print "Diff prediction:" #print correct_prediction.eval({x: xdat, y: ydat}) print "Pred:" print pred.eval({x: xdat}) #print sess.run(pred) print "Real:" print ydat #print sess.run(pred) #print sess.run(y) # Calculate accuracy accuracy = 
tf.reduce_mean(tf.cast(correct_prediction, "float")) print "Accuracy:", accuracy.eval({x: xdat, y: ydat}) print models[0].params import copy alpha,hidden_dim,hidden_dim2 = (.001,4,4) thresh = .04 # Parameters learning_rate = 0.003 training_epochs = 15 batch_size = 2000 display_step = 1 # Network Parameters n_hidden_1 = 4 # 1st layer number of features n_hidden_2 = 4 # 2nd layer number of features n_input = 1 # Guess quadratic function n_classes = 1 # #synapses = [] models = [] #Testing starting in the same place #synapse0 = 2*np.random.random((1,hidden_dim)) - 1 #synapse1 = 2*np.random.random((hidden_dim,hidden_dim2)) - 1 #synapse2 = 2*np.random.random((hidden_dim2,1)) - 1 copy_model = multilayer_perceptron(ind=0) for ii in xrange(3): '''weights = { 'h1': tf.Variable(tf.random_normal([n_input, n_hidden_1])), 'h2': tf.Variable(tf.random_normal([n_hidden_1, n_hidden_2])), 'out': tf.Variable(tf.random_normal([n_hidden_2, n_classes])) } biases = { 'b1': tf.Variable(tf.random_normal([n_hidden_1])), 'b2': tf.Variable(tf.random_normal([n_hidden_2])), 'out': tf.Variable(tf.random_normal([n_classes])) }''' # Construct model with different initial weights test_model = multilayer_perceptron(ind=ii) #Construct model with same initial weights #test_model = copy.copy(copy_model) #test_model.index = ii #print test_model.weights models.append(test_model) with test_model.g.as_default(): x = tf.placeholder("float", [None, n_input]) y = tf.placeholder("float", [None, n_classes]) pred = test_model.predict(x) # Define loss and optimizer #cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(pred, y)) cost = tf.reduce_mean(tf.square(pred-y)) optimizer = tf.train.GradientDescentOptimizer(learning_rate=learning_rate).minimize(cost) # Initializing the variables init = tf.initialize_all_variables() #remove the comment to get random initialization stopcond = True with tf.Session() as sess: sess.run(init) xtest, ytest = generatecandidate4(.5,.25,.1,1000) while stopcond: #print 
'epoch:' + str(e) #X = [] #y = [] j = 0 # Training cycle for epoch in range(training_epochs): avg_cost = 0. total_batch = int(10000/batch_size) if (avg_cost > thresh or avg_cost == 0.) and stopcond: # Loop over all batches for i in range(total_batch): batch_x, batch_y = generatecandidate4(.5,.25,.1,batch_size) # Run optimization op (backprop) and cost op (to get loss value) _, c = sess.run([optimizer, cost], feed_dict={x: batch_x, y: batch_y}) # Compute average loss avg_cost += c / total_batch # Display logs per epoch step if epoch % display_step == 0: print "Epoch:", '%04d' % (epoch+1), "cost=", \ "{:.9f}".format(avg_cost) if avg_cost < thresh: stopcond = False #test_model.params = sess.run(test_model.weightslist), sess.run(test_model.biaseslist) #save_path = test_model.saver.save(sess,"/home/dfreeman/PythonFun/tmp/model" + str(ii) + ".ckpt") print "Optimization Finished!" # Test model #correct_prediction = tf.equal(tf.argmax(pred, 1), tf.argmax(y, 1)) correct_prediction = tf.reduce_mean(tf.square(pred-y)) # Calculate accuracy accuracy = tf.reduce_mean(tf.cast(correct_prediction, "float")) print "Accuracy:", accuracy.eval({x: xtest, y: ytest}) if (j%5000) == 0: print "Error after "+str(j)+" iterations:" + str(accuracy.eval({x: xtest, y: ytest})) if accuracy.eval({x: xtest, y: ytest}) < thresh or stopcond == False: #print "Changing stopcond!" 
stopcond = False print "Final params:" test_model.params = sess.run(test_model.weightslist), sess.run(test_model.biaseslist) save_path = test_model.saver.save(sess,"/home/dfreeman/PythonFun/tmp/model" + str(ii) + ".ckpt") j+=1 #remove the comment to get random initialization #synapses.append([synapse_0,synapse_1,synapse_2 def synapse_interpolate(synapse1, synapse2, t): return (synapse2-synapse1)*t + synapse1 '''ii=0 weights = { 'h1': tf.Variable(tf.random_normal([n_input, n_hidden_1])), 'h2': tf.Variable(tf.random_normal([n_hidden_1, n_hidden_2])), 'out': tf.Variable(tf.random_normal([n_hidden_2, n_classes])) } biases = { 'b1': tf.Variable(tf.random_normal([n_hidden_1])), 'b2': tf.Variable(tf.random_normal([n_hidden_2])), 'out': tf.Variable(tf.random_normal([n_classes])) }''' #def model_interpolate(m1, m2, t): with models[0].g.as_default(): with tf.Session() as sess: # Restore variables from disk. models[0].saver.restore(sess, "/home/dfreeman/PythonFun/tmp/model0.ckpt") modelparams = sess.run(models[0].weightslist[0]) print sess.run(models[0].weights['h1']) print modelparams print("Model restored.") # Do some work with the model print models[0].ReturnParamsAsList()[0] print "**" print models[1].ReturnParamsAsList()[1] print "***" #print models[2].weightslist print models[1].params print models[1].ReturnParamsAsList() def synapse_interpolate(synapse1, synapse2, t): return (synapse2-synapse1)*t + synapse1 def model_interpolate(w1,b1,w2,b2,t): m1w = w1 m1b = b1 m2w = w2 m2b = b2 mwi = [synapse_interpolate(m1we,m2we,t) for m1we, m2we in zip(m1w,m2w)] mbi = [synapse_interpolate(m1be,m2be,t) for m1be, m2be in zip(m1b,m2b)] return mwi, mbi def synapse_interpolate(synapse1, synapse2, t): return (synapse2-synapse1)*t + synapse1 class WeightString: def __init__(self, w1, b1, w2, b2, numbeads, threshold): self.w1 = w1 self.w2 = w2 self.b1 = b1 self.b2 = b2 #self.w2, self.b2 = m2.params self.AllBeads = [] self.threshold = threshold self.AllBeads.append([w1,b1]) for n in 
xrange(numbeads): ws,bs = model_interpolate(w1,b1,w2,b2, (n + 1.)/(numbeads+1.)) self.AllBeads.append([ws,bs]) self.AllBeads.append([w2,b2]) self.ConvergedList = [False for f in xrange(len(self.AllBeads))] self.ConvergedList[0] = True self.ConvergedList[-1] = True def SpringNorm(self, order): total = 0. #Energy between mobile beads for i,b in enumerate(self.AllBeads): if i < len(self.AllBeads)-1: #print "Tallying energy between bead " + str(i) + " and bead " + str(i+1) subtotal = 0. for j in xrange(len(b)): subtotal += np.linalg.norm(np.subtract(self.AllBeads[i][0][j],self.AllBeads[i+1][0][j]),ord=order)#/len(self.beads[0][j]) for j in xrange(len(b)): subtotal += np.linalg.norm(np.subtract(self.AllBeads[i][1][j],self.AllBeads[i+1][1][j]),ord=order)#/len(self.beads[0][j]) total+=subtotal return total#/len(self.beads) def SGDBead(self, bead, thresh, maxindex): finalerror = 0. #thresh = .05 # Parameters learning_rate = 0.01 training_epochs = 15 batch_size = 1000 display_step = 1 curWeights, curBiases = self.AllBeads[bead] test_model = multilayer_perceptron(w=curWeights, b=curBiases) with test_model.g.as_default(): x = tf.placeholder("float", [None, n_input]) y = tf.placeholder("float", [None, n_classes]) pred = test_model.predict(x) cost = tf.reduce_mean(tf.square(pred-y)) optimizer = tf.train.GradientDescentOptimizer(learning_rate=learning_rate).minimize(cost) init = tf.initialize_all_variables() stopcond = True with tf.Session() as sess: sess.run(init) xtest, ytest = generatecandidate4(.5,.25,.1,1000) j = 0 while stopcond: for epoch in range(training_epochs): avg_cost = 0. total_batch = int(10000/batch_size) if (avg_cost > thresh or avg_cost == 0.) 
and stopcond: # Loop over all batches for i in range(total_batch): batch_x, batch_y = generatecandidate4(.5,.25,.1,batch_size) # Run optimization op (backprop) and cost op (to get loss value) _, c = sess.run([optimizer, cost], feed_dict={x: batch_x, y: batch_y}) # Compute average loss avg_cost += c / total_batch # Display logs per epoch step #if epoch % display_step == 0: # print "Epoch:", '%04d' % (epoch+1), "cost=", \ # "{:.9f}".format(avg_cost) if avg_cost < thresh: stopcond = False #print "Optimization Finished!" # Test model #correct_prediction = tf.equal(tf.argmax(pred, 1), tf.argmax(y, 1)) correct_prediction = tf.reduce_mean(tf.square(pred-y)) # Calculate accuracy accuracy = tf.reduce_mean(tf.cast(correct_prediction, "float")) print "Accuracy:", accuracy.eval({x: xtest, y: ytest}) #if (j%5000) == 0: # print "Error after "+str(j)+" iterations:" + str(accuracy.eval({x: xtest, y: ytest})) finalerror = accuracy.eval({x: xtest, y: ytest}) if finalerror < thresh or stopcond==False:# or j > maxindex: #print "Changing stopcond!" stopcond = False #print "Final params:" test_model.params = sess.run(test_model.weightslist), sess.run(test_model.biaseslist) self.AllBeads[bead]=test_model.params print "Final bead error: " + str(finalerror) j+=1 return finalerror i1=0 i2=1 test = WeightString(models[i1].params[0],models[i1].params[1],models[i2].params[0],models[i2].params[1],1,1) print len(test.AllBeads) print test.SGDBead(1,.01,10) def InterpBeadError(w1,b1, w2,b2, write = False, name = "00"): errors = [] xdat,ydat = generatecandidate4(.5, .25, .1, 1000) xdat = np.array(xdat) ydat = np.array(ydat) for tt in xrange(100): #print tt #accuracy = 0. t = tt/100. 
thiserror = 0 #x0 = tf.placeholder("float", [None, n_input]) #y0 = tf.placeholder("float", [None, n_classes]) weights, biases = model_interpolate(w1,b1,w2,b2, t) interp_model = multilayer_perceptron(w=weights, b=biases) with interp_model.g.as_default(): #interp_model.UpdateWeights(weights, biases) x = tf.placeholder("float", [None, n_input]) y = tf.placeholder("float", [None, n_classes]) pred = interp_model.predict(x) init = tf.initialize_all_variables() with tf.Session() as sess: sess.run(init) correct_prediction = tf.reduce_mean(tf.square(pred-y)) accuracy = tf.reduce_mean(tf.cast(correct_prediction, "float")) print "Accuracy:", accuracy.eval({x: xdat, y: ydat}),"\t",tt thiserror = accuracy.eval({x: xdat, y: ydat}) errors.append(thiserror) if write == True: with open("f" + str(name) + ".out",'w+') as f: for e in errors: f.write(str(e) + "\n") return max(errors), np.argmax(errors) InterpBeadError(models[0].params[0],models[0].params[1],models[1].params[0],models[1].params[1]) #Used for softening the training criteria. There's some fuzz required due to the difference in #training error between test and training thresh_multiplier = 1.1 results = [] connecteddict = {} for i1 in xrange(len(models)): connecteddict[i1] = 'not connected' for i1 in xrange(len(models)): print i1 for i2 in xrange(len(models)): if i2 > i1 and ((connecteddict[i1] != connecteddict[i2]) or (connecteddict[i1] == 'not connected' or connecteddict[i2] == 'not connected')) : #print "slow1?" #print i1,i2 #print models[0] #print models[1] #print models[0].params #print models[1].params test = WeightString(models[i1].params[0],models[i1].params[1],models[i2].params[0],models[i2].params[1],1,1) training_threshold = thresh depth = 0 d_max = 10 #Check error between beads #Alg: for each bead at depth i, SGD until converged. 
#For beads with max error along path too large, add another bead between them, repeat #Keeps track of which indices to check the interpbeaderror between newindices = [0,1] while (depth < d_max): print newindices #print "slow2?" #X, y = GenTest(X,y) counter = 0 for i,c in enumerate(test.ConvergedList): if c == False: #print "slow3?" error = test.SGDBead(i, .5*training_threshold, 20) #print "slow4?" #if counter%5000==0: # print counter # print error test.ConvergedList[i] = True print test.ConvergedList interperrors = [] interp_bead_indices = [] for b in xrange(len(test.AllBeads)-1): if b in newindices: e = InterpBeadError(test.AllBeads[b][0],test.AllBeads[b][1], test.AllBeads[b+1][0], test.AllBeads[b+1][1]) interperrors.append(e) interp_bead_indices.append(b) print interperrors if max([ee[0] for ee in interperrors]) < thresh_multiplier*training_threshold: depth = 2*d_max #print test.ConvergedList #print test.SpringNorm(2) #print "Done!" else: del newindices[:] #Interperrors stores the maximum error on the path between beads #shift index to account for added beads shift = 0 for i, ie in enumerate(interperrors): if ie[0] > thresh_multiplier*training_threshold: k = interp_bead_indices[i] ws,bs = model_interpolate(test.AllBeads[k+shift][0],test.AllBeads[k+shift][1],\ test.AllBeads[k+shift+1][0],test.AllBeads[k+shift+1][1],\ ie[1]/100.) 
test.AllBeads.insert(k+shift+1,[ws,bs]) test.ConvergedList.insert(k+shift+1, False) newindices.append(k+shift+1) newindices.append(k+shift) shift+=1 #print test.ConvergedList #print test.SpringNorm(2) #print d_max depth += 1 if depth == 2*d_max: results.append([i1,i2,test.SpringNorm(2),"Connected"]) if connecteddict[i1] == 'not connected' and connecteddict[i2] == 'not connected': connecteddict[i1] = i1 connecteddict[i2] = i1 if connecteddict[i1] == 'not connected': connecteddict[i1] = connecteddict[i2] else: if connecteddict[i2] == 'not connected': connecteddict[i2] = connecteddict[i1] else: if connecteddict[i1] != 'not connected' and connecteddict[i2] != 'not connected': hold = connecteddict[i2] connecteddict[i2] = connecteddict[i1] for h in xrange(len(models)): if connecteddict[h] == hold: connecteddict[h] = connecteddict[i1] else: results.append([i1,i2,test.SpringNorm(2),"Disconnected"]) #print results[-1] uniquecomps = [] totalcomps = 0 for i in xrange(len(models)): if not (connecteddict[i] in uniquecomps): uniquecomps.append(connecteddict[i]) if connecteddict[i] == 'not connected': totalcomps += 1 #print i,connecteddict[i] notconoffset = 0 if 'not connected' in uniquecomps: notconoffset = -1 print "Thresh: " + str(thresh) print "Comps: " + str(len(uniquecomps) + notconoffset + totalcomps) #for i in xrange(len(synapses)): # print connecteddict[i] connsum = [] for r in results: if r[3] == "Connected": connsum.append(r[2]) #print r[2] print "***" print np.average(connsum) print np.std(connsum) print len(models) connecteddict[2] models[0].params models[1].params len(test.AllBeads) #for b in xrange(len(test.AllBeads)-1): # e = InterpBeadError(test.AllBeads[b][0],test.AllBeads[b][1], test.AllBeads[b+1][0], test.AllBeads[b+1][1]) #for b in xrange(len(test.AllBeads)): # print test.AllBeads[b][0] xdat,ydat = generatecandidate4(.5, .25, .1, 1000) xdat = np.array(xdat) ydat = np.array(ydat) def InterpBeadError_SameSet(w1,b1, w2,b2,xdat, ydat, write = False, name = "00"): 
errors = [] for tt in xrange(100): #print tt #accuracy = 0. t = tt/100. thiserror = 0 #x0 = tf.placeholder("float", [None, n_input]) #y0 = tf.placeholder("float", [None, n_classes]) weights, biases = model_interpolate(w1,b1,w2,b2, t) interp_model = multilayer_perceptron(w=weights, b=biases) with interp_model.g.as_default(): #interp_model.UpdateWeights(weights, biases) x = tf.placeholder("float", [None, n_input]) y = tf.placeholder("float", [None, n_classes]) pred = interp_model.predict(x) init = tf.initialize_all_variables() with tf.Session() as sess: sess.run(init) correct_prediction = tf.reduce_mean(tf.square(pred-y)) accuracy = tf.reduce_mean(tf.cast(correct_prediction, "float")) print "Accuracy:", accuracy.eval({x: xdat, y: ydat}),"\t",tt thiserror = accuracy.eval({x: xdat, y: ydat}) errors.append(thiserror) if write == True: with open("f" + str(name) + ".out",'w+') as f: for e in errors: f.write(str(e) + "\n") return max(errors), np.argmax(errors) for b in xrange(len(test.AllBeads)-1): e = InterpBeadError_SameSet(test.AllBeads[b][0],test.AllBeads[b][1], test.AllBeads[b+1][0], test.AllBeads[b+1][1], xdat, ydat) ```
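Throughout the code above, paths between trained models are built by linearly interpolating their weights via `synapse_interpolate` (which the notebook redefines several times). Its behavior is easy to check in isolation; here is a minimal standalone numpy sketch, independent of the TensorFlow models, showing that it recovers each endpoint at t = 0 and t = 1 and averages them at t = 0.5:

```
import numpy as np

def synapse_interpolate(synapse1, synapse2, t):
    # Straight-line interpolation between two weight arrays:
    # t = 0 returns synapse1, t = 1 returns synapse2.
    return (synapse2 - synapse1) * t + synapse1

w_a = np.array([[0.0, 2.0], [4.0, 6.0]])
w_b = np.array([[1.0, 3.0], [5.0, 7.0]])

midpoint = synapse_interpolate(w_a, w_b, 0.5)
print(midpoint)  # element-wise average of w_a and w_b
```

`model_interpolate` simply applies this helper to every weight matrix and bias vector of a pair of models, so the same endpoint checks carry over layer by layer.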
github_jupyter
PPO Using VAE # VAE classes https://github.com/AntixK/PyTorch-VAE/blob/master/models/vanilla_vae.py ``` import torch from torch import nn from torch.nn import functional as F import torch.optim as optim class VAE(nn.Module): # Use Linear instead of convs def __init__(self, in_channels: int, latent_dim: int, hidden_dims = None, **kwargs) -> None: super(VAE, self).__init__() self.latent_dim = latent_dim out_channels = in_channels modules = [] if hidden_dims is None: hidden_dims = [32, 64, 128, 256, 512] # Build Encoder for h_dim in hidden_dims: modules.append( nn.Sequential( nn.Linear(in_channels, h_dim), nn.LeakyReLU()) ) in_channels = h_dim self.encoder = nn.Sequential(*modules) self.fc_mu = nn.Linear(hidden_dims[-1], latent_dim) self.fc_var = nn.Linear(hidden_dims[-1], latent_dim) # Build Decoder modules = [] self.decoder_input = nn.Linear(latent_dim, hidden_dims[-1]) hidden_dims.reverse() for i in range(len(hidden_dims) - 1): modules.append( nn.Sequential( nn.Linear(hidden_dims[i], hidden_dims[i+1]), nn.LeakyReLU()) ) self.decoder = nn.Sequential(*modules) self.final_layer = nn.Sequential( nn.Linear(hidden_dims[-1],hidden_dims[-1]), nn.LeakyReLU(), nn.Linear(hidden_dims[-1],out_channels), nn.Tanh()) def encode(self, input): """ Encodes the input by passing through the encoder network and returns the latent codes. :param input: (Tensor) Input tensor to encoder [N x C x H x W] :return: (Tensor) List of latent codes """ result = self.encoder(input) result = torch.flatten(result, start_dim=1) # Split the result into mu and var components # of the latent Gaussian distribution mu = self.fc_mu(result) log_var = self.fc_var(result) return [mu, log_var] def decode(self, z): """ Maps the given latent codes onto the image space. 
:param z: (Tensor) [B x D] :return: (Tensor) [B x C x H x W] """ result = self.decoder_input(z) #result = result.view(-1, 512, 2, 2) result = self.decoder(result) result = self.final_layer(result) return result def reparameterize(self, mu, logvar): """ Reparameterization trick to sample from N(mu, var) from N(0,1). :param mu: (Tensor) Mean of the latent Gaussian [B x D] :param logvar: (Tensor) Standard deviation of the latent Gaussian [B x D] :return: (Tensor) [B x D] """ std = torch.exp(0.5 * logvar) eps = torch.randn_like(std) return eps * std + mu def forward(self, input, **kwargs): mu, log_var = self.encode(input) z = self.reparameterize(mu, log_var) return self.decode(z), input, mu, log_var def state_dim_reduction(self, state): mu, log_var = self.encode(state) z = self.reparameterize(mu, log_var) return z def loss_function(self, reconstruction, input, mu, log_var) -> dict: """ Computes the VAE loss function. KL(N(\mu, \sigma), N(0, 1)) = \log \frac{1}{\sigma} + \frac{\sigma^2 + \mu^2}{2} - \frac{1}{2} :param args: :param kwargs: :return: """ recons = reconstruction input = input mu = mu log_var = log_var recons_loss =F.mse_loss(recons, input) kld_loss = torch.mean(-0.5 * torch.sum(1 + log_var - mu ** 2 - log_var.exp(), dim = 1), dim = 0) loss = recons_loss + kld_loss return {'loss': loss, 'Reconstruction_Loss':recons_loss, 'KLD':-kld_loss} import pandas as pd class VaeManager(): def __init__(self, vae_model, optimizer, obs_file, batch_size): self.vae_model = vae_model self.optimizer = optimizer self.obs_file = obs_file self.batch_size = batch_size def train_step(self, batch): reconstruction, input, mu, log_var = self.vae_model(batch) loss = self.vae_model.loss_function(reconstruction, input, mu, log_var)['loss'] self.optimizer.zero_grad() loss.backward() self.optimizer.step() return loss def train_with_file(self): #TODO df = pd.read_csv(self.fileNames[0]) for index, row in df.iterrows(): pass def state_dim_reduction(self, state): return 
self.vae_model.state_dim_reduction(state) ``` # PPO using VAE ``` # https://github.com/RPC2/PPO import torch import torch.nn as nn class MlpPolicy(nn.Module): def __init__(self, action_size, input_size=4): super(MlpPolicy, self).__init__() self.action_size = action_size self.input_size = input_size self.fc1 = nn.Linear(self.input_size, 24) self.fc2 = nn.Linear(24, 24) self.fc3_pi = nn.Linear(24, self.action_size) self.fc3_v = nn.Linear(24, 1) self.tanh = nn.Tanh() self.relu = nn.ReLU() self.softmax = nn.Softmax(dim=-1) def pi(self, x): x = self.relu(self.fc1(x)) x = self.relu(self.fc2(x)) x = self.fc3_pi(x) return self.softmax(x) def v(self, x): x = self.relu(self.fc1(x)) x = self.relu(self.fc2(x)) x = self.fc3_v(x) return x class AgentConfig: # Learning gamma = 0.99 plot_every = 10 update_freq = 1 k_epoch = 3 learning_rate = 0.02 lmbda = 0.95 eps_clip = 0.2 v_coef = 1 entropy_coef = 0.01 # Memory memory_size = 400 train_cartpole = True import torch import gym import torch.optim as optim import torch.nn as nn import matplotlib.pyplot as plt import pandas as pd device = torch.device("cuda" if torch.cuda.is_available() else "cpu") class Agent(AgentConfig): def __init__(self, env, observation_space): self.env = env self.action_size = self.env.action_space.n # 2 for cartpole if self.train_cartpole: self.policy_network = MlpPolicy(action_size=self.action_size, input_size = observation_space).to(device) self.optimizer = optim.Adam(self.policy_network.parameters(), lr=self.learning_rate) self.scheduler = optim.lr_scheduler.StepLR(self.optimizer, step_size=self.k_epoch, gamma=0.999) self.loss = 0 self.criterion = nn.MSELoss() self.memory = { 'state': [], 'action': [], 'reward': [], 'next_state': [], 'action_prob': [], 'terminal': [], 'count': 0, 'advantage': [], 'td_target': torch.tensor([], dtype=torch.float) } def new_random_game(self): self.env.reset() action = self.env.action_space.sample() screen, reward, terminal, info = self.env.step(action) return screen, reward, 
action, terminal def train(self, vae_manager, vae_fit, num_episodes): step = 0 reward_history = [] avg_reward = [] solved = False # A new episode for episode in range (1,num_episodes+1): start_step = step episode += 1 episode_length = 0 # Get initial state state, reward, action, terminal = self.new_random_game() state_mem = state state = torch.tensor(state, dtype=torch.float, device=device) if not vae_fit: with torch.no_grad(): state = state.unsqueeze(dim=0) state = vae_manager.state_dim_reduction(state).squeeze() state_mem = state.tolist() total_episode_reward = 1 # A step in an episode while True: step += 1 episode_length += 1 # Choose action prob_a = self.policy_network.pi(state) action = torch.distributions.Categorical(prob_a).sample().item() # Act new_state, reward, terminal, _ = self.env.step(action) new_state_mem = new_state new_state = torch.tensor(new_state, dtype=torch.float, device=device) if not vae_fit: print("Actual state and VAE state:") print(new_state_mem) with torch.no_grad(): new_state = new_state.unsqueeze(dim=0) new_state = vae_manager.state_dim_reduction(new_state).squeeze() new_state_mem = new_state.tolist() print(new_state_mem) reward = -1 if terminal else reward self.add_memory(state_mem, action, reward/10.0, new_state_mem, terminal, prob_a[action].item()) state = new_state state_mem = new_state_mem total_episode_reward += reward if vae_fit and episode % vae_manager.batch_size == 0: vae_manager.train_step(torch.tensor(self.memory['state'][-10:], dtype=torch.float, device=device)) if terminal: episode_length = step - start_step reward_history.append(total_episode_reward) avg_reward.append(sum(reward_history[-10:])/10.0) self.finish_path(episode_length) print('episode: %.2f, total step: %.2f, last_episode length: %.2f, last_episode_reward: %.2f, ' 'loss: %.4f, lr: %.4f' % (episode, step, episode_length, total_episode_reward, self.loss, self.scheduler.get_last_lr()[0])) # if not vae_fit: # print('episode: %.2f, total step: %.2f, last_episode 
length: %.2f, last_episode_reward: %.2f, ' # 'loss: %.4f, lr: %.4f' % (episode, step, episode_length, total_episode_reward, self.loss, # self.scheduler.get_last_lr()[0])) # else: # print(f'Fitted vae for episode {episode} of {num_episodes}.') self.env.reset() break if episode % self.update_freq == 0: for _ in range(self.k_epoch): self.update_network() if episode % self.plot_every == 0 and not vae_fit: plot_graph(reward_history, avg_reward) self.env.close() def update_network(self): # get ratio pi = self.policy_network.pi(torch.tensor(self.memory['state'], dtype=torch.float, device=device)) new_probs_a = torch.gather(pi, 1, torch.tensor(self.memory['action'], device=device)) old_probs_a = torch.tensor(self.memory['action_prob'], dtype=torch.float, device=device) ratio = torch.exp(torch.log(new_probs_a) - torch.log(old_probs_a)) # surrogate loss surr1 = ratio * torch.tensor(self.memory['advantage'], dtype=torch.float, device=device) surr2 = torch.clamp(ratio, 1 - self.eps_clip, 1 + self.eps_clip) * torch.tensor(self.memory['advantage'], dtype=torch.float, device=device) pred_v = self.policy_network.v(torch.tensor(self.memory['state'], dtype=torch.float, device=device)) v_loss = (0.5 * (pred_v - self.memory['td_target']).pow(2)).to('cpu') # Huber loss entropy = torch.distributions.Categorical(pi).entropy() entropy = torch.tensor([[e] for e in entropy]) self.loss = ((-torch.min(surr1, surr2)).to('cpu') + self.v_coef * v_loss - self.entropy_coef * entropy).mean() self.optimizer.zero_grad() self.loss.backward() self.optimizer.step() self.scheduler.step() def add_memory(self, s, a, r, next_s, t, prob): if self.memory['count'] < self.memory_size: self.memory['count'] += 1 else: self.memory['state'] = self.memory['state'][1:] self.memory['action'] = self.memory['action'][1:] self.memory['reward'] = self.memory['reward'][1:] self.memory['next_state'] = self.memory['next_state'][1:] self.memory['terminal'] = self.memory['terminal'][1:] self.memory['action_prob'] = 
self.memory['action_prob'][1:] self.memory['advantage'] = self.memory['advantage'][1:] self.memory['td_target'] = self.memory['td_target'][1:] self.memory['state'].append(s) self.memory['action'].append([a]) self.memory['reward'].append([r]) self.memory['next_state'].append(next_s) self.memory['terminal'].append([1 - t]) self.memory['action_prob'].append(prob) def finish_path(self, length): state = self.memory['state'][-length:] reward = self.memory['reward'][-length:] next_state = self.memory['next_state'][-length:] terminal = self.memory['terminal'][-length:] td_target = torch.tensor(reward, device=device) + \ self.gamma * self.policy_network.v(torch.tensor(next_state, dtype=torch.float,device=device)) * torch.tensor(terminal, device=device) delta = (td_target - self.policy_network.v(torch.tensor(state, dtype=torch.float,device=device))).to('cpu') delta = delta.detach().numpy() # get advantage advantages = [] adv = 0.0 for d in delta[::-1]: adv = self.gamma * self.lmbda * adv + d[0] advantages.append([adv]) advantages.reverse() if self.memory['td_target'].shape == torch.Size([1, 0]): self.memory['td_target'] = td_target.data else: self.memory['td_target'] = torch.cat((self.memory['td_target'].to(device), td_target.data), dim=0) self.memory['advantage'] += advantages def plot_graph(reward_history, avg_reward): df = pd.DataFrame({'x': range(len(reward_history)), 'Reward': reward_history, 'Average': avg_reward}) plt.style.use('seaborn-darkgrid') palette = plt.get_cmap('Set1') plt.plot(df['x'], df['Reward'], marker='', color=palette(1), linewidth=0.8, alpha=0.9, label='Reward') # plt.plot(df['x'], df['Average'], marker='', color='tomato', linewidth=1, alpha=0.9, label='Average') # plt.legend(loc='upper left') plt.title("CartPole", fontsize=14) plt.xlabel("episode", fontsize=12) plt.ylabel("score", fontsize=12) plt.savefig('score.png') environment = gym.make('CartPole-v0') observation_space = environment.observation_space.shape[0] #Hyperparameters latent_space = 4 # 
Feature space after VAE transform vae_lr = 0.0001 vae_batch_size = 10 existingFile = "" #"drive/MyDrive/Thesis/Code/RL_PCA/feature_data.csv" # Possible existing file name containing observations for VAE fitting vae_model = VAE(in_channels = observation_space, latent_dim = latent_space).to(device) vae_optimizer = optim.Adam(params=vae_model.parameters(), lr=vae_lr) vae_manager = VaeManager(vae_model, vae_optimizer, existingFile, vae_batch_size) #Fit PCA by getting demo trajectories if existingFile is None or existingFile == "": print("Demo Trajectories for fitting VAE") num_episodes = 300 agent = Agent(environment, observation_space) agent.train(vae_manager, vae_fit = True, num_episodes = num_episodes) else: vae_manager.train_with_file() #Run actual Episodes print("Actual trajectories") num_episodes = 250 agent = Agent(environment, latent_space) agent.train(vae_manager, vae_fit = False, num_episodes = num_episodes) ```
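In `Agent.finish_path` above, advantages are built by walking the TD residuals `delta` backwards with the recursion `adv = gamma * lmbda * adv + d` (generalized advantage estimation). A dependency-free sketch of just that recursion on scalar residuals, mirroring the reversed loop in `finish_path`:

```
def gae_advantages(deltas, gamma=0.99, lmbda=0.95):
    # Accumulate discounted TD residuals from the last step backwards,
    # then reverse so the result is in time order.
    advantages = []
    adv = 0.0
    for d in reversed(deltas):
        adv = gamma * lmbda * adv + d
        advantages.append(adv)
    advantages.reverse()
    return advantages

print(gae_advantages([1.0, 0.5, -0.2]))
```

Each entry is the residual at that step plus a `gamma * lmbda`-discounted copy of the next entry, which is why the last step's advantage equals its raw residual.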
# Split a DataFrame Using Pandas' Groupby

For this tutorial, I will assume you have a basic understanding of Python and know how to load a DataFrame using the pandas library. I will use the GL_Detail example file from the AICPA's AuditDataAnalytics GitHub.

```
import pandas as pd
import numpy as np

# Display numbers with 2 decimals and thousands separators.
pd.options.display.float_format = '{:,.2f}'.format

# Load the DataFrame into memory.
file_location = "data/GL_Detail_YYYYMMDD_YYYYMMDD.csv"
df = pd.read_csv(file_location)
df.head()
```

<p>This method splits the DataFrame into individual DataFrames by the "Entered_By" column. Groupby in pandas groups all similar elements; it can replace a pivot table in Excel, and functions similarly to SQL's GROUP BY. <b>Note:</b> if you want multiple layers of grouping (i.e. "Entered_By", then "Business_Unit_Code"), you must use a list of keys. This would look like <code>df.groupby(["Entered_By", "Business_Unit_Code"])</code>.</p>

<p>Note how Python allows "unpacking" of elements, in this case <i>split, file</i>. <code>df.groupby</code> yields two values on each pass through the loop: the split value (my terminology), which is our "Entered_By" code, and the related file. We then use the split value as a key to add the file to our dict of files.</p>

```
files = {}
for split, file in df.groupby("Entered_By"):
    files[split] = file

files.keys()  # shows the users who entered journal entries
```

<p>The next cell loops through all of the items in the Python dict <code>files</code>. When looping through a dict, make sure to add <code>.items()</code> to the end if you want both the key and the item. In our case, the key is the "Entered_By" value (which becomes the filename), and the item is the split DataFrame.</p>

<p>Please note the <code>!mkdir</code> at the top of the cell, which creates a new directory for the split files to go in. The <code>mkdir</code> command will only work on Mac/Linux systems. On a Windows OS, you would need <code>import os</code> followed by <code>os.mkdir("data/split")</code> instead.</p>

<p>All DataFrames have write methods for writing the DataFrame to various mediums; in this case, I'm using CSV. You can concatenate strings in Python with the <code>+</code> operator to build a unique file path for each DataFrame.</p>

<p>If you don't need to load the DataFrames into memory and only wish to split a file, you could perform this task in one loop: <code>for filename, file in df.groupby("Entered_By"): file.to_csv("data/split/" + filename + ".csv")</code></p>

```
!mkdir data/split

for filename, file in files.items():
    file.to_csv("data/split/" + filename + ".csv")
```
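The multi-level grouping mentioned above works the same way, except the split value becomes a tuple of keys. A small self-contained sketch with a toy frame (the column values here are made up for illustration, not taken from the GL_Detail file):

```
import pandas as pd

# Toy frame standing in for the GL detail data.
df = pd.DataFrame({
    "Entered_By": ["alice", "alice", "bob"],
    "Business_Unit_Code": ["A1", "A2", "A1"],
    "Amount": [100.0, 250.0, 75.0],
})

split_frames = {}
for split, frame in df.groupby(["Entered_By", "Business_Unit_Code"]):
    # With a list of keys, the split value is a tuple, e.g. ("alice", "A1").
    split_frames[split] = frame

print(sorted(split_frames.keys()))  # [('alice', 'A1'), ('alice', 'A2'), ('bob', 'A1')]
```

A tuple key like `("alice", "A1")` can be joined into a filename with `"_".join(split)` if you want one CSV per user/business-unit pair.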
# Model selection using hyperopt ``` %matplotlib inline import numpy as np import matplotlib.pyplot as plt from matplotlib.colors import ListedColormap from sklearn.datasets import make_moons from sklearn.metrics import accuracy_score from sklearn.svm import SVC from sklearn.neighbors import KNeighborsClassifier from sklearn.gaussian_process import GaussianProcessClassifier from sklearn.gaussian_process.kernels import RBF, Matern from sklearn.model_selection import train_test_split from sklearn.preprocessing import StandardScaler def plot_data(X_train, X_test, y_train, y_test, h=0.02): x_min, x_max = X[:, 0].min() - .5, X[:, 0].max() + .5 y_min, y_max = X[:, 1].min() - .5, X[:, 1].max() + .5 xx, yy = np.meshgrid(np.arange(x_min, x_max, h), np.arange(y_min, y_max, h)) # just plot the dataset first cm = plt.cm.RdBu cm_bright = ListedColormap(['#FF0000', '#0000FF']) fig, ax = plt.subplots() # Plot the training points ax.scatter(X_train[:, 0], X_train[:, 1], c=y_train, cmap=cm_bright) # and testing points ax.scatter(X_test[:, 0], X_test[:, 1], c=y_test, cmap=cm_bright, alpha=0.6) ax.set_xlim(xx.min(), xx.max()) ax.set_ylim(yy.min(), yy.max()) ax.set_xticks(()) ax.set_yticks(()) return fig, ax # Show the decision surface of the optimal classifier def plot_clf(X_train, X_test, y_train, y_test, clf, h=0.02): x_min, x_max = X[:, 0].min() - .5, X[:, 0].max() + .5 y_min, y_max = X[:, 1].min() - .5, X[:, 1].max() + .5 xx, yy = np.meshgrid(np.arange(x_min, x_max, h), np.arange(y_min, y_max, h)) # just plot the dataset first cm = plt.cm.RdBu cm_bright = ListedColormap(['#FF0000', '#0000FF']) fig, ax = plt.subplots() # Plot the training points ax.scatter(X_train[:, 0], X_train[:, 1], c=y_train, cmap=cm_bright) # and testing points ax.scatter(X_test[:, 0], X_test[:, 1], c=y_test, cmap=cm_bright, alpha=0.6) ax.set_xlim(xx.min(), xx.max()) ax.set_ylim(yy.min(), yy.max()) ax.set_xticks(()) ax.set_yticks(()) score = clf.score(X_test, y_test) # Plot the decision boundary. 
For that, we will assign a color to each # point in the mesh [x_min, x_max]x[y_min, y_max]. if hasattr(clf, "decision_function"): Z = clf.decision_function(np.c_[xx.ravel(), yy.ravel()]) else: Z = clf.predict_proba(np.c_[xx.ravel(), yy.ravel()])[:, 1] # Put the result into a color plot Z = Z.reshape(xx.shape) ax.contourf(xx, yy, Z, cmap=cm, alpha=.8) # Plot also the training points ax.scatter(X_train[:, 0], X_train[:, 1], c=y_train, cmap=cm_bright) # and testing points ax.scatter(X_test[:, 0], X_test[:, 1], c=y_test, cmap=cm_bright, alpha=0.6) ax.set_xlim(xx.min(), xx.max()) ax.set_ylim(yy.min(), yy.max()) ax.set_xticks(()) ax.set_yticks(()) ``` ## Create an artificial data set ``` X, y = make_moons(n_samples=1000, noise=0.3, random_state=0) X = StandardScaler().fit_transform(X) X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=.4, random_state=42) plot_data(X_train, X_test, y_train, y_test) from hyperopt import hp, fmin, rand, tpe, Trials, STATUS_FAIL, STATUS_OK from hyperopt.pyll import scope scope.define(KNeighborsClassifier) scope.define(SVC) scope.define(GaussianProcessClassifier) ``` ## Define search space ``` C = hp.loguniform('svc_c', -4, 1) search_space = hp.pchoice('estimator', [ (0.1, scope.KNeighborsClassifier(n_neighbors=1 + hp.randint('n_neighbors', 9))), (0.1, scope.SVC(kernel='linear', C=C)), (0.4, scope.SVC(kernel='rbf', C=C, gamma=hp.loguniform('svc_gamma', -4, 1))), (0.4, scope.GaussianProcessClassifier(kernel=hp.choice('gp_kernel', [RBF(), Matern(nu=1.5), Matern(nu=2.5)]))) ]) # Create logger using the Trials object supplied by hyperopt trials = Trials() def objective_function(estimator): estimator.fit(X_train, y_train) y_hat = estimator.predict(X_test) return -1 * accuracy_score(y_test, y_hat) # Call fmin best = fmin( fn=objective_function, space=search_space, algo=tpe.suggest, max_evals=50 ) print(best) clf = SVC(kernel='rbf', gamma=2.357247846608504, C=2.0908911442998437, probability=True) clf.fit(X_train, y_train) 
plot_clf(X_train, X_test, y_train, y_test, clf) ```
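One detail worth isolating in the search space above is `hp.pchoice`, which draws each branch with a fixed prior probability, so the RBF-SVC and Gaussian-process branches are proposed about four times as often as the kNN and linear-SVC branches. A plain-Python sketch of that sampling behaviour (illustrative only; hyperopt's TPE sampler also adapts these choices as trials accumulate):

```
import random

# Branch names and priors mirroring the hp.pchoice above (illustrative).
branches = ["knn", "svc_linear", "svc_rbf", "gp"]
weights = [0.1, 0.1, 0.4, 0.4]

random.seed(0)
draws = random.choices(branches, weights=weights, k=10_000)
freq = {b: draws.count(b) / len(draws) for b in branches}
# the rbf-SVC and GP branches each account for roughly 40% of the draws
```

Weighting the space this way spends most of the evaluation budget on the model families expected to do well, while still occasionally probing the others.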
github_jupyter
## Importing libraries ``` import warnings warnings.filterwarnings("ignore") # data manipulation and numeric operations import pandas as pd import numpy as np # save and load serialized objects import pickle # track progress of function execution from tqdm import tqdm import os # metrics from sklearn.metrics import accuracy_score, roc_auc_score from sklearn.metrics import confusion_matrix test_df = pd.read_csv('test_dataset.csv') test_df.shape # randomly selecting one legitimate and one phishing url test_legi_df = test_df[test_df.result == 0] url_legi = test_legi_df.sample().url.values[0] print('random legitimate url: {}'.format(url_legi)) test_phish_df = test_df[test_df.result == 1] url_phish = test_phish_df.sample().url.values[0] print('random phishing url: {}'.format(url_phish)) model = pickle.load(open('final_model.sav', 'rb')) # !pip install import_ipynb # https://newbedev.com/ipynb-import-another-ipynb-file import import_ipynb import FeatureExtraction def function1(url): '''This function contains the steps to feature extraction, preprocessing and model prediction''' # getting featured dataframe from url X_test = FeatureExtraction.extract_all_features(url) # preprocessing X_test['statistical_report'] = X_test['statistical_report'].fillna(-1) X_test['page_favicon'] = X_test['page_favicon'].fillna(-1) X_test['redirection_count'] = X_test['redirection_count'].fillna(1) # drop textual feature, dependent feature X_test.drop(columns = ['url', 'url_google_index'], inplace= True) # removing constant features X_test.drop(columns = ['url_having_IP_Address', "domain_registration_length"], inplace= True) # applying FE X_test['length_depth'] = X_test.url_length + X_test.url_depth X_test['port_redirection'] = X_test.url_standard_port + X_test.redirection_count X_test['var_median'] = X_test.median(axis = 1) X_test['var_max'] = X_test.max(axis = 1) X_test['var_std'] = X_test.std(axis = 1) X_test['var_sum'] = X_test.sum(axis = 1) return model.predict(X_test) def checker(pred): 
if pred == 0: print('Legitimate') else: print('Phishing') # checking a random legitimate url pred = function1(url_legi) checker(pred) # checking a random phishing url pred = function1(url_phish) checker(pred) def function2(X_test, y): '''This function returns the predicted value and its probability as its metric''' # preprocessing X_test['statistical_report'] = X_test['statistical_report'].fillna(-1) X_test['page_favicon'] = X_test['page_favicon'].fillna(-1) # filling na with 1 X_test['redirection_count'] = X_test['redirection_count'].fillna(1) # drop textual feature X_test.drop(columns = ['url', 'url_google_index'], inplace= True) # removing constant features X_test.drop(columns = ['url_having_IP_Address', "domain_registration_length"], inplace= True) # applying FE X_test['length_depth'] = X_test.url_length + X_test.url_depth X_test['port_redirection'] = X_test.url_standard_port + X_test.redirection_count X_test['var_median'] = X_test.median(axis = 1) X_test['var_max'] = X_test.max(axis = 1) X_test['var_std'] = X_test.std(axis = 1) X_test['var_sum'] = X_test.sum(axis = 1) return [model.predict(X_test), model.predict_proba(X_test)] test_sample = test_df.sample() X = test_sample.drop(columns = ['result']) y = test_sample.result print('Actual value: {}, predicted value and probability score: {}'.format(y, function2(X, y))) ```
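Beyond spot-checking single URLs, the metrics imported at the top (`accuracy_score`, `confusion_matrix`) can summarise performance over many predictions at once. A small sketch, with invented labels standing in for `test_df.result` and for the outputs `function1` would produce (0 = legitimate, 1 = phishing):

```
from sklearn.metrics import accuracy_score, confusion_matrix

# Dummy ground truth and predictions, for illustration only.
y_true = [0, 0, 1, 1, 1, 0]
y_pred = [0, 1, 1, 1, 0, 0]

acc = accuracy_score(y_true, y_pred)
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
# acc is 4/6; (tn, fp, fn, tp) comes out as (2, 1, 1, 2)
```

For a phishing detector, false negatives (`fn`, phishing URLs flagged as legitimate) are usually the costlier error, so reporting the full confusion matrix is more informative than accuracy alone.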
``` import json from pathlib import Path import matplotlib.pyplot as plt import numpy as np import matplotlib import pandas as pd import pyprojroot import seaborn as sns def convert_seg_error_rate_pct(df): df.avg_segment_error_rate = df.avg_segment_error_rate * 100 return df RESULTS_ROOT = pyprojroot.here() / 'results' FIG_ROOT = pyprojroot.here() / 'doc' / 'article' / 'figures' / 'mainfig_tweetynet_v_svm' FIG_ROOT.mkdir(exist_ok=True) ``` #### Munge data ``` segmentation_map = { 'ground_truth': 'segmented audio, manually cleaned', 'resegment': 'segmented audio, not cleaned', 'semi-automated-cleaning': 'segmented audio, semi-automated cleaning', 'not-cleaned': 'segmented audio, not cleaned', 'manually-cleaned': 'segmented audio, manually cleaned' } hvc_dfs = [] csv_filename = 'segment_error_across_birds.hvc.csv' for species in ('Bengalese_Finches', 'Canaries'): species_csv = RESULTS_ROOT / f'{species}/hvc/{csv_filename}' df = pd.read_csv(species_csv) df['Model'] = 'SVM' df['Input to model'] = df['segmentation'].map(segmentation_map) df['Species'] = species hvc_dfs.append(df) hvc_df = pd.concat(hvc_dfs) curve_df = [] for species in ('Bengalese_Finches', 'Canaries'): LEARNCURVE_RESULTS_ROOT = pyprojroot.here() / 'results' / species / 'learncurve' error_csv_path = LEARNCURVE_RESULTS_ROOT.joinpath('error_across_birds_with_cleanup.csv') df = pd.read_csv(error_csv_path) df = df[df.animal_id.isin(hvc_df.animal_id.unique())] df['Model'] = 'TweetyNet' df['Input to model'] = 'spectrogram' df['Species'] = species curve_df.append(df) del df curve_df = pd.concat(curve_df) CLEANUP = 'min_segment_dur_majority_vote' curve_df = curve_df[ curve_df.cleanup == CLEANUP ] all_df = pd.concat([hvc_df, curve_df]) all_df = convert_seg_error_rate_pct(all_df) all_df.head() # sanity check gb = all_df.groupby(by=['Species', 'Model', 'Input to model', 'animal_id', 'train_set_dur']) df_agg = gb.agg( mean_seg_err = pd.NamedAgg('avg_segment_error_rate', 'mean'), median_seg_err = 
pd.NamedAgg('avg_segment_error_rate', 'median'), std_seg_err = pd.NamedAgg('avg_segment_error_rate', 'std') ) data = df_agg.reset_index() # ``data`` DataFrame for use with ``seaborn`` PALETTE = sns.color_palette('colorblind') # note: defaults to 10 colors MODEL_HUE_MAP = { 'TweetyNet': PALETTE[0], 'SVM': PALETTE[1], } DASHES = { 'segmented audio, manually cleaned': (4, 1.5), 'segmented audio, not cleaned': (1, 1), 'segmented audio, semi-automated cleaning': (3, 1.25, 1.5, 1.25), 'spectrogram': '', } ``` #### actually plot figure ``` # specify the custom font to use plt.rcParams['font.family'] = 'sans-serif' plt.rcParams['font.sans-serif'] = 'arial' sns.set_context("paper", font_scale=1.5, rc={"lines.linewidth": 3}) figsize = (8, 4) dpi = 150 fig = plt.figure(constrained_layout=False, figsize=figsize, dpi=dpi) gs = fig.add_gridspec( nrows=3, ncols=2, ) ax_arr = [] for rows in ( (0, 1), (1, 3), ): start_row, stop_row = rows for col in range(2): ax_arr.append(fig.add_subplot(gs[start_row:stop_row, col])) ax_arr = np.array(ax_arr).reshape(2, 2) MIN_TRAIN_SET_DUR_CANARY = 240 for col, species in enumerate(('Bengalese_Finches', 'Canaries')): # ---- get data data_species = data[data.Species == species] if species == 'Canaries': data_species = data_species[data_species.train_set_dur >= MIN_TRAIN_SET_DUR_CANARY] train_set_durs = sorted(data_species['train_set_dur'].unique()) dur_int_map = dict(zip(train_set_durs, range(len(train_set_durs)))) data_species['train_set_dur_ind'] = data_species['train_set_dur'].map(dur_int_map) TRAIN_DUR_IND_MAP = { k:v for k, v in zip( sorted(data_species['train_set_dur'].unique()), sorted(data_species['train_set_dur_ind'].unique()) ) } svm_mean_seg_err_max_dur = data_species[ (data_species.Model=='SVM') & (data_species['Input to model'] == 'segmented audio, manually cleaned') & (data_species.train_set_dur == data_species.train_set_dur.max()) ].mean_seg_err.mean() tweetynet_mean_seg_err_max_dur = data_species[ (data_species.Model=='TweetyNet') 
& (data_species['Input to model'] == 'spectrogram') & (data_species.train_set_dur == data_species.train_set_dur.max()) ].mean_seg_err.mean() # ---- set up to plot col_ax_arr = ax_arr[:, col] col_ax_arr[0].get_shared_x_axes().join(col_ax_arr[1]) # we plot the same data on both axes, and then change the ylims below for row, ax in enumerate(col_ax_arr): if col == 1 and row == 1: # let seaborn generate legend, then we get handles + labels and place under whole figure legend = True else: legend = False g = sns.lineplot( data=data_species, x='train_set_dur_ind', y='mean_seg_err', hue='Model', palette=MODEL_HUE_MAP, style='Input to model', dashes=DASHES, ax=ax, legend=legend, ) ax.set_xlabel('') ax.set_ylabel('') if col == 1 and row == 1: handles, labels = ax.get_legend_handles_labels() g.legend_.remove() col_ax_arr[0].set_ylim([12, 105.]) col_ax_arr[1].set_ylim([0., 11.5]) col_ax_arr[0].set_xticks(list(TRAIN_DUR_IND_MAP.values())) col_ax_arr[0].set_xticklabels([]) col_ax_arr[1].set_xticks(list(TRAIN_DUR_IND_MAP.values())) col_ax_arr[1].set_xticklabels(train_set_durs, rotation=45) col_ax_arr[0].set_title(species.replace('_', ' ')) # --- despine and fix ticks for ax_ind, spines, tick_params in zip( (0, 1, 2, 3), ( ('top', 'bottom', 'right'), ('top', 'bottom', 'right'), ('top', 'right',), ('top', 'right') ), ( dict(axis='x', bottom=False), dict(axis='x', bottom=False), None, None, ) ): for spine in spines: ax_arr.flatten()[ax_ind].spines[spine].set_visible(False) if tick_params is not None: ax_arr.flatten()[ax_ind].tick_params(**tick_params) # # add a big axis, hide frame big_ax = fig.add_subplot(111, frameon=False) # hide tick and tick label of the big axis big_ax.tick_params(labelcolor='none', top=False, bottom=False, left=False, right=False) big_ax.grid(False) big_ax.set_xlabel('Training set duration (s)', labelpad=20) big_ax.set_ylabel('Syllable error rate (%)', labelpad=15) big_ax.legend(handles, labels, loc='upper left', bbox_to_anchor=[0.0, -0.25]) FIG_STEM = 
'svm-v-tweetynet-results' for ext in ('png', 'svg'): fig.savefig(FIG_ROOT / f'{FIG_STEM}.{ext}', bbox_inches='tight'); ```
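The `pd.NamedAgg` pattern used to build `df_agg` above is worth isolating: each output column is named explicitly and paired with an input column and an aggregation function inside a single `groupby(...).agg(...)` call. A toy sketch with invented numbers in place of `all_df`:

```
import pandas as pd

# Minimal stand-in for all_df (values are illustrative only).
toy = pd.DataFrame({
    "Model": ["SVM", "SVM", "TweetyNet", "TweetyNet"],
    "avg_segment_error_rate": [30.0, 50.0, 5.0, 7.0],
})

agg = toy.groupby("Model").agg(
    mean_seg_err=pd.NamedAgg("avg_segment_error_rate", "mean"),
    std_seg_err=pd.NamedAgg("avg_segment_error_rate", "std"),
).reset_index()
# one row per Model, with clearly named summary columns
```

Calling `.reset_index()` afterwards, as the notebook does, flattens the group keys back into ordinary columns, which is the shape `seaborn.lineplot` expects for its `x`, `y`, and `hue` arguments.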
# Train VAE for task2... Then what if reconstruction is lower weighted? Loss function is weighted as: $loss = 0.01 L_{Reconstruction} + L_{KLD}$ ``` # public modules from dlcliche.notebook import * from dlcliche.utils import ( sys, random, Path, np, plt, EasyDict, ensure_folder, deterministic_everything, ) from argparse import Namespace # private modules sys.path.append('..') import common as com from pytorch_common import * from model import VAE, VAE_loss_function # loading parameters -> hparams (argparse compatible) params = EasyDict(com.yaml_load('config.yaml')) # create working directory ensure_folder(params.model_directory) # test directories dirs = com.select_dirs(param=params, mode='development') # fix random seeds deterministic_everything(2020, pytorch=True) # PyTorch device device = torch.device('cuda' if torch.cuda.is_available() else 'cpu') %load_ext tensorboard %tensorboard --logdir lightning_logs/ # VAE Training class class Task2VAELightning(Task2Lightning): def training_step(self, batch, batch_nb): x, y = batch y_hat, z, mu, logvar = self.model.forward_all(x) loss = VAE_loss_function(recon_x=y_hat, x=x, mu=mu, logvar=logvar, reconst_loss='mse', a_RECONST=.01, ############# Much less reconstruction loss a_KLD=1.) tensorboard_logs = {'train_loss': loss} return {'loss': loss, 'log': tensorboard_logs} # train models for target_dir in dirs: target = str(target_dir).split('/')[-1] print(f'==== Start training [{target}] with {torch.cuda.device_count()} GPU(s). 
====') files = com.file_list_generator(target_dir) model = VAE(device, x_dim=params.VAE.x_dim, h_dim=params.VAE.h_dim, z_dim=params.VAE.z_dim).to(device) if target == 'ToyCar': summary(device, model) task2 = Task2VAELightning(device, model, params, files, normalize=True) trainer = pl.Trainer(max_epochs=10, # params.fit.epochs, ###### Simple try --> short epochs gpus=torch.cuda.device_count()) trainer.fit(task2) model_file = f'{params.model_directory}/model_{target}.pth' torch.save(task2.model.state_dict(), model_file) print(f'saved {model_file}.\n') ``` ## Visualize ``` #load_weights(task2.model, 'model/model_ToyCar.pth') show_some_predictions(task2.train_dataloader(), task2.model, 0, 3) # Validation set samples show_some_predictions(task2.val_dataloader(), task2.model, 0, 3) ``` ## Model just learned mean signal as expected ``` plt.plot(task2.train_dataloader().dataset.X.mean(axis=0)) ``` ## Check model weights Weights for bottleneck variables looks reasonable. But mean (fc21.weight) is almost zero... ``` summarize_weights(task2.model) ``` # Test the trained model ``` ! python 01_test.py -d def upto_6digits(cell): if not cell[0].isdigit(): return cell return f'{float(cell):.6f}' with open('result/result.csv') as f: for l in f.readlines(): l = l.strip() #replace('\n', '') if ',' not in l: print(l) continue ls = l.split(',') print(f'{ls[0]}\t\t{upto_6digits(ls[1])}\t\t{upto_6digits(ls[2])}') ```
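The weighting experiment above hinges on the two terms of the VAE objective. A NumPy sketch of the weighted loss, under the assumption that `VAE_loss_function` combines a mean-squared reconstruction error with the closed-form Gaussian KL divergence (the `a_RECONST=.01`, `a_KLD=1.` arguments suggest exactly this structure):

```
import numpy as np

def weighted_vae_loss(recon_x, x, mu, logvar, a_reconst=0.01, a_kld=1.0):
    reconst = np.mean((recon_x - x) ** 2)                      # MSE term
    kld = -0.5 * np.mean(1 + logvar - mu**2 - np.exp(logvar))  # KL(q || N(0, I))
    return a_reconst * reconst + a_kld * kld

x = np.zeros(4)
loss = weighted_vae_loss(x, x, mu=np.zeros(4), logvar=np.zeros(4))
# a perfect reconstruction with a standard-normal posterior gives zero loss
```

With `a_reconst=0.01` the KLD term dominates, so the encoder is pushed hard toward the prior. That is consistent with the observation above that the model essentially learns the mean signal and that `fc21.weight` (the posterior mean) is almost zero.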
``` # from google.colab import drive # drive.mount('/content/drive') import torch.nn as nn import torch.nn.functional as F import pandas as pd import numpy as np import matplotlib.pyplot as plt import torch import torchvision import torchvision.transforms as transforms from torch.utils.data import Dataset, DataLoader from torchvision import transforms, utils from matplotlib import pyplot as plt import copy import random from numpy import linalg as LA from tabulate import tabulate # Ignore warnings import warnings warnings.filterwarnings("ignore") transform = transforms.Compose( [transforms.ToTensor(), transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))]) trainset = torchvision.datasets.CIFAR10(root='./data', train=True, download=True, transform=transform) testset = torchvision.datasets.CIFAR10(root='./data', train=False, download=True, transform=transform) gamma = 0.04 gamma classes = ('plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck') foreground_classes = {'plane', 'car', 'bird'} fg_used = '012' fg1, fg2, fg3 = 0,1,2 all_classes = {'plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck'} background_classes = all_classes - foreground_classes background_classes # print(type(foreground_classes)) trainloader = torch.utils.data.DataLoader(trainset, batch_size=10, shuffle=True) testloader = torch.utils.data.DataLoader(testset, batch_size=10, shuffle=False) dataiter = iter(trainloader) true_train_background_data=[] true_train_background_label=[] true_train_foreground_data=[] true_train_foreground_label=[] batch_size=10 for i in range(5000): images, labels = dataiter.next() for j in range(batch_size): if(classes[labels[j]] in background_classes): img = images[j].tolist() true_train_background_data.append(img) true_train_background_label.append(labels[j]) else: img = images[j].tolist() true_train_foreground_data.append(img) true_train_foreground_label.append(labels[j]) true_train_foreground_data = 
torch.tensor(true_train_foreground_data) true_train_foreground_label = torch.tensor(true_train_foreground_label) true_train_background_data = torch.tensor(true_train_background_data) true_train_background_label = torch.tensor(true_train_background_label) len(true_train_foreground_data), len(true_train_foreground_label), len(true_train_background_data), len(true_train_background_label) dataiter = iter(testloader) true_test_background_data=[] true_test_background_label=[] true_test_foreground_data=[] true_test_foreground_label=[] batch_size=10 for i in range(1000): images, labels = dataiter.next() for j in range(batch_size): if(classes[labels[j]] in background_classes): img = images[j].tolist() true_test_background_data.append(img) true_test_background_label.append(labels[j]) else: img = images[j].tolist() true_test_foreground_data.append(img) true_test_foreground_label.append(labels[j]) true_test_foreground_data = torch.tensor(true_test_foreground_data) true_test_foreground_label = torch.tensor(true_test_foreground_label) true_test_background_data = torch.tensor(true_test_background_data) true_test_background_label = torch.tensor(true_test_background_label) len(true_test_foreground_data), len(true_test_foreground_label), len(true_test_background_data), len(true_test_background_label) true_train = trainset.data train_label = trainset.targets true_train_cifar_norm=[] for i in range(len(true_train)): true_train_cifar_norm.append(LA.norm(true_train[i])) len(true_train_cifar_norm) def plot_hist(values): plt.hist(values, density=True, bins=200) # `density=False` would make counts plt.ylabel('NORM') plt.xlabel('Data'); plot_hist(true_train_cifar_norm) true_train.shape train = np.reshape(true_train, (50000,3072)) train.shape, true_train.shape u, s, vh = LA.svd(train, full_matrices= False) u.shape , s.shape, vh.shape s vh dir = vh[0:10,:] dir u1 = dir[0,:] u2 = dir[1,:] u3 = dir[2,:] u1 u2 u3 len(train_label) def is_equal(x1, x2): cnt=0 for i in range(len(x1)): if(x1[i] == 
x2[i]): cnt+=1 return cnt def add_noise_cifar(train, label, gamma, fg1,fg2,fg3): cnt=0 for i in range(len(label)): x = train[i] if(label[i] == fg1): train[i] = train[i] + gamma * LA.norm(train[i]) * u1 cnt+=1 if(label[i] == fg2): train[i] = train[i] + gamma * LA.norm(train[i]) * u2 cnt+=1 if(label[i] == fg3): train[i] = train[i] + gamma * LA.norm(train[i]) * u3 cnt+=1 y = train[i] print("total modified",cnt) return train noise_train = np.reshape(true_train, (50000,3072)) noise_train = add_noise_cifar(noise_train, train_label, gamma , fg1,fg2,fg3) noise_train_cifar_norm=[] for i in range(len(noise_train)): noise_train_cifar_norm.append(LA.norm(noise_train[i])) plt.hist(noise_train_cifar_norm, density=True, bins=200,label='gamma='+str(gamma)) # `density=False` would make counts plt.hist(true_train_cifar_norm, density=True, bins=200,label='true') plt.ylabel('NORM') plt.xlabel('Data') plt.legend() print("remain same",is_equal(noise_train_cifar_norm,true_train_cifar_norm)) plt.hist(true_train_cifar_norm, density=True, bins=200,label='true') plt.ylabel('NORM') plt.xlabel('Data') plt.legend() plt.hist(noise_train_cifar_norm, density=True, bins=200,label='gamma='+str(gamma)) # `density=False` would make counts # plt.hist(true_train_cifar_norm, density=True, bins=200,label='true') plt.ylabel('NORM') plt.xlabel('Data') plt.legend() noise_train.shape, trainset.data.shape noise_train = np.reshape(noise_train, (50000,32, 32, 3)) noise_train.shape trainset.data = noise_train true_test = testset.data test_label = testset.targets true_test.shape test = np.reshape(true_test, (10000,3072)) test.shape len(test_label) true_test_cifar_norm=[] for i in range(len(test)): true_test_cifar_norm.append(LA.norm(test[i])) plt.hist(true_test_cifar_norm, density=True, bins=200,label='true') plt.ylabel('NORM') plt.xlabel('Data') plt.legend() noise_test = np.reshape(true_test, (10000,3072)) noise_test = add_noise_cifar(noise_test, test_label, gamma , fg1,fg2,fg3) noise_test_cifar_norm=[] for i in 
range(len(noise_test)): noise_test_cifar_norm.append(LA.norm(noise_test[i])) plt.hist(noise_test_cifar_norm, density=True, bins=200,label='gamma='+str(gamma)) # `density=False` would make counts plt.hist(true_test_cifar_norm, density=True, bins=200,label='true') plt.ylabel('NORM') plt.xlabel('Data') plt.legend() is_equal(noise_test_cifar_norm,true_test_cifar_norm) plt.hist(true_test_cifar_norm, density=True, bins=200,label='true') plt.ylabel('NORM') plt.xlabel('Data') plt.legend() plt.hist(noise_test_cifar_norm, density=True, bins=200,label='gamma='+str(gamma)) # `density=False` would make counts # plt.hist(true_train_cifar_norm, density=True, bins=200,label='true') plt.ylabel('NORM') plt.xlabel('Data') plt.legend() noise_test.shape, testset.data.shape noise_test = np.reshape(noise_test, (10000,32, 32, 3)) noise_test.shape testset.data = noise_test fg = [fg1,fg2,fg3] bg = list(set([0,1,2,3,4,5,6,7,8,9])-set(fg)) fg,bg trainloader = torch.utils.data.DataLoader(trainset, batch_size=10, shuffle=True) testloader = torch.utils.data.DataLoader(testset, batch_size=10, shuffle=False) dataiter = iter(trainloader) train_background_data=[] train_background_label=[] train_foreground_data=[] train_foreground_label=[] batch_size=10 for i in range(5000): images, labels = dataiter.next() for j in range(batch_size): if(classes[labels[j]] in background_classes): img = images[j].tolist() train_background_data.append(img) train_background_label.append(labels[j]) else: img = images[j].tolist() train_foreground_data.append(img) train_foreground_label.append(labels[j]) train_foreground_data = torch.tensor(train_foreground_data) train_foreground_label = torch.tensor(train_foreground_label) train_background_data = torch.tensor(train_background_data) train_background_label = torch.tensor(train_background_label) dataiter = iter(testloader) test_background_data=[] test_background_label=[] test_foreground_data=[] test_foreground_label=[] batch_size=10 for i in range(1000): images, labels = 
dataiter.next() for j in range(batch_size): if(classes[labels[j]] in background_classes): img = images[j].tolist() test_background_data.append(img) test_background_label.append(labels[j]) else: img = images[j].tolist() test_foreground_data.append(img) test_foreground_label.append(labels[j]) test_foreground_data = torch.tensor(test_foreground_data) test_foreground_label = torch.tensor(test_foreground_label) test_background_data = torch.tensor(test_background_data) test_background_label = torch.tensor(test_background_label) def imshow(img): img = img / 2 + 0.5 # unnormalize npimg = img#.numpy() plt.imshow(np.transpose(npimg, (1, 2, 0))) plt.show() img1 = torch.cat((true_test_foreground_data[27],true_test_foreground_data[3],true_test_foreground_data[43]),1) imshow(img1) img2 = torch.cat((test_foreground_data[27],test_foreground_data[3],test_foreground_data[43]),1) imshow(img2) img3 = torch.cat((img1,img2),2) imshow(img3) print(img2.size()) print(LA.norm(test_foreground_data[27]), LA.norm(true_test_foreground_data[27])) import random for i in range(10): random.seed(i) a = np.random.randint(0,10000) img1 = torch.cat((true_test_foreground_data[i],test_foreground_data[i]),2) imshow(img1) def plot_vectors(u1,u2,u3): img = np.reshape(u1,(3,32,32)) img = img / 2 + 0.5 # unnormalize npimg = img#.numpy() print("vector u1 norm",LA.norm(img)) plt.figure(1) plt.imshow(np.transpose(npimg, (1, 2, 0))) plt.title("vector u1") img = np.reshape(u2,(3,32,32)) img = img / 2 + 0.5 # unnormalize npimg = img#.numpy() print("vector u2 norm",LA.norm(img)) plt.figure(2) plt.imshow(np.transpose(npimg, (1, 2, 0))) plt.title("vector u2") img = np.reshape(u3,(3,32,32)) img = img / 2 + 0.5 # unnormalize npimg = img#.numpy() print("vector u3 norm",LA.norm(img)) plt.figure(3) plt.imshow(np.transpose(npimg, (1, 2, 0))) plt.title("vector u3") plt.show() plot_vectors(u1,u2,u3) class MosaicDataset(Dataset): """MosaicDataset dataset.""" def __init__(self, mosaic_list_of_images, mosaic_label, fore_idx): 
""" Args: csv_file (string): Path to the csv file with annotations. root_dir (string): Directory with all the images. transform (callable, optional): Optional transform to be applied on a sample. """ self.mosaic = mosaic_list_of_images self.label = mosaic_label self.fore_idx = fore_idx def __len__(self): return len(self.label) def __getitem__(self, idx): return self.mosaic[idx] , self.label[idx], self.fore_idx[idx] def create_mosaic_img(background_data, foreground_data, foreground_label, bg_idx,fg_idx,fg,fg1): """ bg_idx : list of indexes of background_data[] to be used as background images in mosaic fg_idx : index of image to be used as foreground image from foreground data fg : at what position/index foreground image has to be stored out of 0-8 """ image_list=[] j=0 for i in range(9): if i != fg: image_list.append(background_data[bg_idx[j]].type("torch.DoubleTensor")) j+=1 else: image_list.append(foreground_data[fg_idx].type("torch.DoubleTensor")) label = foreground_label[fg_idx] -fg1 #-7 # minus 7 because our fore ground classes are 7,8,9 but we have to store it as 0,1,2 #image_list = np.concatenate(image_list ,axis=0) image_list = torch.stack(image_list) return image_list,label def init_mosaic_creation(bg_size, fg_size, desired_num, background_data, foreground_data, foreground_label,fg1): # bg_size = 35000 # fg_size = 15000 # desired_num = 30000 mosaic_list_of_images =[] # list of mosaic images, each mosaic image is saved as list of 9 images fore_idx =[] # list of indexes at which foreground image is present in a mosaic image i.e from 0 to 9 mosaic_label=[] # label of mosaic image = foreground class present in that mosaic for i in range(desired_num): np.random.seed(i+ bg_size + desired_num) bg_idx = np.random.randint(0,bg_size,8) # print(bg_idx) np.random.seed(i+ fg_size + desired_num) fg_idx = np.random.randint(0,fg_size) # print(fg_idx) np.random.seed(i+ fg_size + desired_num) fg = np.random.randint(0,9) # print(fg) fore_idx.append(fg) image_list,label = 
create_mosaic_img(background_data, foreground_data, foreground_label ,bg_idx,fg_idx,fg, fg1) mosaic_list_of_images.append(image_list) mosaic_label.append(label) return mosaic_list_of_images, mosaic_label, fore_idx train_mosaic_list_of_images, train_mosaic_label, train_fore_idx = init_mosaic_creation(bg_size = 35000, fg_size = 15000, desired_num = 30000, background_data = train_background_data, foreground_data = train_foreground_data, foreground_label = train_foreground_label, fg1 = fg1 ) batch = 250 msd_1 = MosaicDataset(train_mosaic_list_of_images, train_mosaic_label , train_fore_idx) train_loader_from_noise_train_mosaic_30k = DataLoader( msd_1,batch_size= batch ,shuffle=True) test_mosaic_list_of_images, test_mosaic_label, test_fore_idx = init_mosaic_creation(bg_size = 35000, fg_size = 15000, desired_num = 10000, background_data = train_background_data, foreground_data = train_foreground_data, foreground_label = train_foreground_label, fg1 = fg1 ) batch = 250 msd_2 = MosaicDataset(test_mosaic_list_of_images, test_mosaic_label , test_fore_idx) test_loader_from_noise_train_mosaic_30k = DataLoader( msd_2, batch_size= batch ,shuffle=True) test_mosaic_list_of_images_1, test_mosaic_label_1, test_fore_idx_1 = init_mosaic_creation(bg_size = 7000, fg_size = 3000, desired_num = 10000, background_data = test_background_data, foreground_data = test_foreground_data, foreground_label = test_foreground_label, fg1 = fg1 ) batch = 250 msd_3 = MosaicDataset(test_mosaic_list_of_images_1, test_mosaic_label_1 , test_fore_idx_1) test_loader_from_noise_test_mosaic_10k = DataLoader( msd_3, batch_size= batch ,shuffle=True) test_mosaic_list_of_images_2, test_mosaic_label_2, test_fore_idx_2 = init_mosaic_creation(bg_size = 35000, fg_size = 15000, desired_num = 10000, background_data = true_train_background_data, foreground_data = true_train_foreground_data, foreground_label = true_train_foreground_label, fg1 = fg1 ) batch = 250 msd_4 = MosaicDataset(test_mosaic_list_of_images_2, 
test_mosaic_label_2, test_fore_idx_2) test_loader_from_true_train_mosaic_30k = DataLoader( msd_4, batch_size= batch , shuffle=True) test_mosaic_list_of_images_3, test_mosaic_label_3, test_fore_idx_3 = init_mosaic_creation(bg_size = 7000, fg_size = 3000, desired_num = 10000, background_data = true_test_background_data, foreground_data = true_test_foreground_data, foreground_label = true_test_foreground_label, fg1 = fg1 ) batch = 250 msd_5 = MosaicDataset(test_mosaic_list_of_images_3, test_mosaic_label_3, test_fore_idx_3) test_loader_from_true_train_mosaic_10k = DataLoader( msd_5, batch_size= batch ,shuffle=True) class Module1(nn.Module): def __init__(self): super(Module1, self).__init__() self.conv1 = nn.Conv2d(3, 6, 5) self.pool = nn.MaxPool2d(2, 2) self.conv2 = nn.Conv2d(6, 16, 5) self.fc1 = nn.Linear(16 * 5 * 5, 120) self.fc2 = nn.Linear(120, 84) self.fc3 = nn.Linear(84, 10) self.fc4 = nn.Linear(10,1) def forward(self, x): x = self.pool(F.relu(self.conv1(x))) x = self.pool(F.relu(self.conv2(x))) x = x.view(-1, 16 * 5 * 5) x = F.relu(self.fc1(x)) x = F.relu(self.fc2(x)) x = F.relu(self.fc3(x)) x = self.fc4(x) return x class Module2(nn.Module): def __init__(self): super(Module2, self).__init__() self.module1 = Module1().double() self.conv1 = nn.Conv2d(3, 6, 5) self.pool = nn.MaxPool2d(2, 2) self.conv2 = nn.Conv2d(6, 16, 5) self.fc1 = nn.Linear(16 * 5 * 5, 120) self.fc2 = nn.Linear(120, 84) self.fc3 = nn.Linear(84, 10) self.fc4 = nn.Linear(10,3) def forward(self,z): #z batch of list of 9 images y = torch.zeros([batch,3, 32,32], dtype=torch.float64) x = torch.zeros([batch,9],dtype=torch.float64) x = x.to("cuda") y = y.to("cuda") for i in range(9): x[:,i] = self.module1.forward(z[:,i])[:,0] x = F.softmax(x,dim=1) x1 = x[:,0] torch.mul(x1[:,None,None,None],z[:,0]) for i in range(9): x1 = x[:,i] y = y + torch.mul(x1[:,None,None,None],z[:,i]) y = y.contiguous() y1 = self.pool(F.relu(self.conv1(y))) y1 = self.pool(F.relu(self.conv2(y1))) y1 = y1.contiguous() y1 = 
        y1 = y1.reshape(-1, 16 * 5 * 5)
        y1 = F.relu(self.fc1(y1))
        y1 = F.relu(self.fc2(y1))
        y1 = F.relu(self.fc3(y1))
        y1 = self.fc4(y1)
        return y1, x, y


def training(trainloader, fore_net, epochs=600):
    import torch.optim as optim

    criterion = nn.CrossEntropyLoss()
    optimizer = optim.SGD(fore_net.parameters(), lr=0.01, momentum=0.9)
    nos_epochs = epochs

    for epoch in range(nos_epochs):  # loop over the dataset multiple times
        running_loss = 0.0
        cnt = 0
        mini_loss = []
        iteration = 30000 // batch

        for i, data in enumerate(trainloader):
            inputs, labels, fore_idx = data
            inputs, labels, fore_idx = inputs.to("cuda"), labels.to("cuda"), fore_idx.to("cuda")

            optimizer.zero_grad()
            outputs, alphas, avg_images = fore_net(inputs)
            _, predicted = torch.max(outputs.data, 1)

            loss = criterion(outputs, labels)
            loss.backward()
            optimizer.step()

            running_loss += loss.item()
            mini = 40
            if cnt % mini == mini - 1:  # print every 40 mini-batches
                print('[%d, %5d] loss: %.3f' % (epoch + 1, cnt + 1, running_loss / mini))
                mini_loss.append(running_loss / mini)
                running_loss = 0.0
            cnt = cnt + 1

        if np.average(mini_loss) <= 0.05:
            break

    print('Finished Training')
    return fore_net, epoch


def testing(loader, fore_net):
    correct = 0
    total = 0
    count = 0
    flag = 1
    focus_true_pred_true = 0
    focus_false_pred_true = 0
    focus_true_pred_false = 0
    focus_false_pred_false = 0
    argmax_more_than_half = 0
    argmax_less_than_half = 0

    with torch.no_grad():
        for data in loader:
            inputs, labels, fore_idx = data
            inputs, labels, fore_idx = inputs.to("cuda"), labels.to("cuda"), fore_idx.to("cuda")
            outputs, alphas, avg_images = fore_net(inputs)
            _, predicted = torch.max(outputs.data, 1)

            for j in range(labels.size(0)):
                count += 1
                focus = torch.argmax(alphas[j])
                if alphas[j][focus] >= 0.5:
                    argmax_more_than_half += 1
                else:
                    argmax_less_than_half += 1

                if focus == fore_idx[j] and predicted[j] == labels[j]:
                    focus_true_pred_true += 1
                elif focus != fore_idx[j] and predicted[j] == labels[j]:
                    focus_false_pred_true += 1
                elif focus == fore_idx[j] and predicted[j] != labels[j]:
                    focus_true_pred_false += 1
                elif focus != fore_idx[j] and predicted[j] != labels[j]:
                    focus_false_pred_false += 1

            total += labels.size(0)
            correct += (predicted == labels).sum().item()

    return correct, total, focus_true_pred_true, focus_false_pred_true, focus_true_pred_false, focus_false_pred_false, argmax_more_than_half


def enter_into(table, sno, correct, total, ftpt, ffpt, ftpf, ffpf, alpha_more_half, fg, bg, epoch="NA"):
    entry = [sno, 'fg = ' + str(fg), 'bg = ' + str(bg), epoch, total, correct]
    entry.append(100.0 * correct / total)
    entry.append(100 * ftpt / total)
    entry.append(100 * ffpt / total)
    entry.append(100 * ftpf / total)
    entry.append(100 * ffpf / total)
    entry.append(alpha_more_half)
    table.append(entry)

    print(" ")
    print("=" * 160)
    print(tabulate(table, headers=['S.No.', 'fg_class', 'bg_class', 'Epoch used', 'total_points',
                                   'correct', 'accuracy', 'FTPT', 'FFPT', 'FTPF', 'FFPF', 'avg_img > 0.5']))
    print(" ")
    print("=" * 160)
    return table


def add_average_entry(table):
    entry = ['Avg', "", "", "", "", ""]
    entry.append(np.mean(np.array(table)[:, 6].astype(float)))
    entry.append(np.mean(np.array(table)[:, 7].astype(float)))
    entry.append(np.mean(np.array(table)[:, 8].astype(float)))
    entry.append(np.mean(np.array(table)[:, 9].astype(float)))
    entry.append(np.mean(np.array(table)[:, 10].astype(float)))
    entry.append(np.mean(np.array(table)[:, 11].astype(float)))
    table.append(entry)

    print(" ")
    print("=" * 160)
    print(tabulate(table, headers=['S.No.', 'fg_class', 'bg_class', 'Epoch used', 'total_points',
                                   'correct', 'accuracy', 'FTPT', 'FFPT', 'FTPF', 'FFPF', 'avg_img > 0.5']))
    print(" ")
    print("=" * 160)
    return table


train_table = []
test_table1 = []
test_table2 = []
test_table3 = []
test_table4 = []

fg = [fg1, fg2, fg3]
bg = list(set([0, 1, 2, 3, 4, 5, 6, 7, 8, 9]) - set(fg))

number_runs = 10
for i in range(number_runs):
    fore_net = Module2().double()
    fore_net = fore_net.to("cuda")
    fore_net, epoch = training(train_loader_from_noise_train_mosaic_30k, fore_net)

    correct, total, ftpt, ffpt, ftpf, ffpf, alpha_more_half = testing(train_loader_from_noise_train_mosaic_30k, fore_net)
    train_table = enter_into(train_table, i + 1, correct, total, ftpt, ffpt, ftpf, ffpf, alpha_more_half, fg, bg, str(epoch))

    correct, total, ftpt, ffpt, ftpf, ffpf, alpha_more_half = testing(test_loader_from_noise_train_mosaic_30k, fore_net)
    test_table1 = enter_into(test_table1, i + 1, correct, total, ftpt, ffpt, ftpf, ffpf, alpha_more_half, fg, bg)

    correct, total, ftpt, ffpt, ftpf, ffpf, alpha_more_half = testing(test_loader_from_noise_test_mosaic_10k, fore_net)
    test_table2 = enter_into(test_table2, i + 1, correct, total, ftpt, ffpt, ftpf, ffpf, alpha_more_half, fg, bg)

    correct, total, ftpt, ffpt, ftpf, ffpf, alpha_more_half = testing(test_loader_from_true_train_mosaic_30k, fore_net)
    test_table3 = enter_into(test_table3, i + 1, correct, total, ftpt, ffpt, ftpf, ffpf, alpha_more_half, fg, bg)

    correct, total, ftpt, ffpt, ftpf, ffpf, alpha_more_half = testing(test_loader_from_true_train_mosaic_10k, fore_net)
    test_table4 = enter_into(test_table4, i + 1, correct, total, ftpt, ffpt, ftpf, ffpf, alpha_more_half, fg, bg)

train_table = add_average_entry(train_table)
test_table1 = add_average_entry(test_table1)
test_table2 = add_average_entry(test_table2)
test_table3 = add_average_entry(test_table3)
test_table4 = add_average_entry(test_table4)

# torch.save(fore_net.state_dict(), "/content/drive/My Drive/Research/mosaic_from_CIFAR_involving_bottop_eigen_vectors/fore_net_epoch" + str(epoch) + "_fg_used" + str(fg_used) + ".pt")
```
# Explaining Answer Set Solving

This is a short guide that shows how Answer Set Programming works. We will use [clingo](https://potassco.org/clingo/) in Python for this, and throughout this document we will use the syntax that clingo uses for answer set programming.

<!-- [guide](https://github.com/potassco/guide/releases/tag/v2.2.0) -->

```
import clingo
```

(If you want to learn in more detail about any of the features of answer set programming with clingo that are explained in this guide, please have a look at the official [Potassco guide](https://github.com/potassco/guide/releases/).)

## Printing answer sets

We start by defining a short function that uses the clingo python package to give us the answer sets of a given answer set program (and display the atoms in the answer set sorted alphabetically).

```
def print_answer_sets(program):
    # Load the answer set program, and call the grounder
    control = clingo.Control();
    control.add("base", [], program);
    control.ground([("base", [])]);

    # Define a function that will be called when an answer set is found
    # This function sorts the answer set alphabetically, and prints it
    def on_model(model):
        sorted_model = [str(atom) for atom in model.symbols(shown=True)];
        sorted_model.sort();
        print("Answer set: {{{}}}".format(", ".join(sorted_model)));

    # Ask clingo to find all models (using an upper bound of 0 gives all models)
    control.configuration.solve.models = 0;

    # Call the clingo solver, passing on the function on_model for when an answer set is found
    answer = control.solve(on_model=on_model)

    # Print a message when no answer set was found
    if answer.satisfiable == False:
        print("No answer sets");
```

We can use the function `print_answer_sets()` as follows to print all answer sets of a given answer set program `program`. (We will get to what answer set programs are.)

```
print_answer_sets("""
    a :- not b.
    b :- not a.
""");
```

## Answer set semantics

Answer set programming is based on the *answer set semantics* for logic programs. This is easiest explained if we start by looking at propositional logic programs. Such a program consists of several rules of the following form:

```
a :- b_1, ..., b_n, not c_1, ..., not c_m.
```

where each of $a$, $b_i$ and $c_i$ are propositional atoms (that start with a lowercase letter). In such a rule, `a` is called the *head* and `b_1, ..., not c_m` is called the *body*. The order of elements within the body of the rule does not matter. This rule is roughly interpreted as: "$a$ is true if $b_1$, ..., $b_n$ are true, and $c_1$, ..., $c_m$ are not true."

You are also free to choose $n=0$, $m=0$, or both. In case both $n=0$ and $m=0$, the rule is simply written as `a.` (and the rule is interpreted as "$a$ is true"). So the following are valid rules in a logic program:

```
a.
a :- b_1, ..., b_n.
a :- not c_1, ..., not c_m.
```

An *interpretation* for a logic program is typically taken to be a set $I$ of propositional atoms. All atoms in the set $I$ are true in this interpretation, and all atoms that are not in $I$ are false in the interpretation.

### Models for positive programs

Answer sets are a particular type of model for logic programs. To explain the particular property that answer sets should have, we first have a look at models for positive programs. A *positive program* is a logic program that does not contain negations (`not`). In other words, it is a set of rules that are all either of the form `a.` (interpreted as: "$a$ is true") or of the form `a :- b_1, ..., b_n.` (interpreted as "if $b_1$, ..., $b_n$ are true, then $a$ should also be true"). So, a positive program can be seen as a set of facts (`a.`) and logical implications ($(b_1 \wedge \dotsm \wedge b_n) \rightarrow a$, written as `a :- b_1, ..., b_n`).

A *model* for a positive program is an interpretation $I$ that makes all the logical statements encoded by the rules in the program true. However, some of these models make more sense than others. For example, for the program `a. 
b :- a.`, the interpretation $I = \{a,b\}$ is a model, but so is $I = \{a,b,c\}$. The first of these two looks reasonable, but the second not so much. For example, to make the rules in the program true, we need to include $a$ and $b$ in the interpretation, but what justification do we have for putting $c$ in?

For positive programs, we are interested in *minimal models*: models $M$ such that no strict subset $M' \subsetneq M$ also makes the program true. For a positive program $P$, we define the *answer sets* of $P$ to be the set of all minimal models of $P$. It turns out that if a positive program $P$ has a model, then there is always a single unique minimal model of $P$. So let's try this out:

```
print_answer_sets("""
    a :- b_1, b_2, b_3.
    b_1.
    b_2 :- c.
    b_3 :- c.
    c.
    d :- e.
    e :- d.
""");
```

Taking the unique minimal model of a positive program nicely matches our intuition of what we want from models. We include all atoms that are declared to be true as facts in the program (in our example: $b_1$ and $c$). Then we add other atoms that we must include to make all if-then rules true (for example, adding $b_2$ because we already included $c$ and because `b_2 :- c` is in the program), until we are done. In our example, we don't include $d$ and $e$, because there is no reason to add them to the model. (We could have added both to the model, and still satisfy all if-then rules.)

### Programs with negation

For logic programs where some rules contain a negation (`not`), things don't turn out to be as easy as simply iterating all if-then rules until we reach a fixpoint. For example, look at the following program:

```
a.
c :- a, not b.
d :- c.
b :- d.
```

If we were to use the strategy of applying if-then rules as long as we can, we would first add $a$, then $c$ (due to the rule `c :- a, not b.`), then $d$ (`d :- c.`), and $b$ (`b :- d.`). But then by adding $b$, we have made the application of the rule `c :- a, not b.` invalid (because it only works to justify adding $c$ if $b$ is not true).

Nevertheless, we want to select from all the models of a program with negation those models that we are interested in (the answer sets). The idea is to guess an interpretation $I$, use this interpretation to make a version $P^I$ of the program $P$ without negation ($P^I$ is called the *reduct* of $P$ w.r.t. $I$), and then check that $I$ is the unique minimal model of $P^I$—if this is the case, then $I$ is an *answer set* of $P$.

Let's see how this works with a simple example. Take the following program $P$ (lines starting with `%` are comments):

```
% the program P
c.
d :- c, not b.
b :- not d.
```

Take also the interpretation $I_1 = \{c,d\}$. We get the new program $P^{I_1}$ from $P$ by:

1. Removing all rules containing some `not a` in the body such that $a \in I_1$. So in our example, we remove `b :- not d.`, because $d \in I_1$.
2. Removing all remaining statements `not a` from the rest of the program. For all of these statements it holds that $a \not\in I_1$, otherwise we would have removed the entire rule containing the `not a` in step (1). So in our example, we change `d :- c, not b.` into `d :- c.`.

So in our example, the program $P^{I_1}$ becomes:

```
% the reduct P^{I_1}
c.
d :- c.
```

Now, we have that $I_1 = \{c,d\}$ is the unique minimal model of $P^{I_1}$, so $I_1$ is an answer set of $P$. Let's check with clingo what the answer sets of our example program $P$ are:

```
print_answer_sets("""
    c.
    d :- c, not b.
    b :- not d.
""");
```

Indeed, $\{c,d\}$ is one of the answer sets.

Let us also verify why, for example, $I_2 = \{b,c,d\}$ is not an answer set. If we apply the same rules (1) and (2) to $P$ based on $I_2$, we get the program $P^{I_2}$:

```
% the reduct P^{I_2}
c.
```

(We remove both of the rules `d :- c, not b.` and `b :- not d.` because both $b \in I_2$ and $d \in I_2$.)
And since $I_2 = \{b,c,d\}$ is not the unique minimal model of $P^{I_2}$ (which is $\{c\}$), $I_2$ is not an answer set of $P$.

### Answer set semantics for programs with negation

So, to summarize, the answer sets of a propositional logic program $P$ are all interpretations $I$ for which it holds that $I$ is the unique minimal model of $P^I$, where $P^I$ is obtained from $P$ by:

1. Removing all rules containing some `not a` in the body such that $a \in I$.
2. Removing all remaining statements `not a` from the rest of the program.

### Example 0: no answer sets

It turns out that when we allow negation in logic programs, it is not guaranteed that an answer set exists. Take the following example program $P$:

```
print_answer_sets("""
    a :- not a.
""");
```

With only one atom, there are only two relevant interpretations: $I_1 = \emptyset$ and $I_2 = \{a\}$. Neither of them is an answer set of this program $P$. We get the following two reducts $P^{I_1}$ and $P^{I_2}$:

```
% the reduct P^{I_1}
a.

% the reduct P^{I_2}
(empty)
```

Since $I_1$ is not the minimal model of $P^{I_1}$, and $I_2$ is not the minimal model of $P^{I_2}$, neither is an answer set of $P$.

## Some (more) examples

Now that we know what the definition is of answer sets for propositional logic programs, let's look at a few examples to get a bit more feeling for it.

### Example 1: binary choice

Take this example:

```
print_answer_sets("""
    a :- not b.
    b :- not a.
""");
```

The rules `a :- not b.` and `b :- not a.` allow us to make a binary choice between $a$ and $b$. If we take $a \in I$, these two rules are replaced by `a.` in $P^I$, and similarly, if we take $b \in I$, the two rules are replaced by `b.` in $P^I$. In other words, we can pick either $a$ or $b$, and $P^I$ will contain a fact justifying our choice.

If we take $I = \{a,b\}$, then $P^I$ becomes the empty program (containing no rules), and $\{a,b\}$ is not the unique minimal model of the empty program (the empty set is), so $\{a,b\}$ is not an answer set of our program.

### Example 2: more binary choice

Now take this example:

```
print_answer_sets("""
    a :- not b.
    b :- not a.
    c :- not d.
    d :- not c.
""");
```

In this case, we took two sets of rules encoding the choice between $a$ and $b$, on the one hand, and $c$ and $d$, on the other hand. The resulting program has four answer sets, corresponding to the four combinations of choices.

### Example 3: overlapping choice

```
print_answer_sets("""
    a :- not b.
    b :- not a.
    a :- not c.
    c :- not a.
""");
```

In this example, we again have two sets of rules that each encode a binary choice. One of these is between $a$ and $b$, and the other between $a$ and $c$. This yields two consistent combinations of choices: choosing $a$ in both (giving $\{a\}$), or choosing $b$ and $c$ (giving $\{b,c\}$). The answer sets of this program correspond exactly to these two combinations.

### Example 4: constraints

Suppose now that we want to encode the choices $a/b$ and $c/d$, but that we want to rule out one of the combinations of choices. To do this, we can use a *constraint*, which is a rule with an empty head. For example, the rule `:- a, c.` is a constraint that can be interpreted as "$a$ and $c$ may not both be true". Another way to look at this rule is as the logical implication $(a \wedge c) \rightarrow \bot$, where $\bot$ denotes falsity. Let's see how we can use this constraint in an example:

```
print_answer_sets("""
    a :- not b.
    b :- not a.
    c :- not d.
    d :- not c.
    :- a, c.
""");
```

### Example 5: another constraint

We can also use negations in constraints. For example, the constraint `:- not a, not c.` can be interpreted as "$a$ and $c$ may not both be false."

```
print_answer_sets("""
    a :- not b.
    b :- not a.
    c :- not d.
    d :- not c.
    :- not a, not c.
""");
```

## Variables

So far, we looked at propositional atoms only.
However, in many cases it would be very convenient to use (first-order) variables to range over a certain domain, rather than spelling out everything in the language of propositional logic. Fortunately, this is functionality that we can use.

For example, we can use predicates with constants wherever we used propositional atoms so far. These terms can be numbers, constants (starting with a lowercase letter, e.g., `charlie`), or compound terms built up using function symbols (e.g., `bestfriend(charlie)`). Note that here the line between predicates (and propositional atoms) and function symbols is blurred a bit: we may use `bestfriend(charlie)` (a) as a unary predicate `bestfriend` applied to the constant `charlie`, or (b) as a function symbol `bestfriend` applied to the constant `charlie`. Note also that terms that are syntactically different from each other are always interpreted as semantically different as well: `bestfriend(charlie)` and `bobbie` are always different (even though one can think of an interpretation where the function symbol `bestfriend` applied to `charlie` is the same as `bobbie`).

Let's phrase one of our previous examples using predicates:

```
print_answer_sets("""
    choose(a) :- not choose(b).
    choose(b) :- not choose(a).
""");
```

To really use the power of this first-order notation, we can use variables. Variables start with an uppercase letter. Let's start with an example:

```
print_answer_sets("""
    choice(1).
    choice(2).
    choose(X,a) :- not choose(X,b), choice(X).
    choose(X,b) :- not choose(X,a), choice(X).
""");
```

How does this example work, exactly? Variables are universally quantified, so the rule `a(X) :- b(X).` expresses that for each $c$ for which $b(c)$ holds, also $a(c)$ must hold. What clingo does is spell out all the different relevant instantiations of rules with variables. This is called *grounding*. (Programs without variables are called *ground programs*.) So under the hood, clingo first changed our previous example into the following:

```
print_answer_sets("""
    choice(1).
    choice(2).
    choose(1,a) :- not choose(1,b), choice(1).
    choose(1,b) :- not choose(1,a), choice(1).
    choose(2,a) :- not choose(2,b), choice(2).
    choose(2,b) :- not choose(2,a), choice(2).
""");
```

### Safe rules

In order to make sure that the grounding process works, you can only use rules that are *safe*. What this means is that every variable that appears in the rule must appear in some positive (that is, non-negated) element of the body. So for example, the following rules are *unsafe* (because the variable `Y` does not appear positively in the body):

```
a(Y) :- not b(Y).
a(X,Y) :- c(X).
```

If you ask clingo to find answer sets for a program that contains unsafe rules, it will throw an error message.

## Abbreviations and other additional features

We have seen all the basic features of answer set programming. However, clingo and the language of answer set programming make our life easier by providing some further features. Let's look at some of them by means of some further examples.

### Example 6: enumerating numbers

Suppose that we want to declare `choice(i)` for all integers between 1 and 10. Rather than spelling out ten facts, we can simply write `choice(1..10).` So we can write our previous choice example as follows:

```
print_answer_sets("""
    choice(1..2).
    choose(X,a) :- not choose(X,b), choice(X).
    choose(X,b) :- not choose(X,a), choice(X).
""");
```

### Example 7: showing only some predicates in the answer sets

In our choice example, we might be interested in only part of the answer set, namely in the binary predicate `choose`. We know that every answer set will contain `choice(1)`, etc. We can declare a statement that says we want to show the binary predicate `choose`: `#show choose/2.`.
If you issue one or more show statements, then all predicates for which no show statement is issued are automatically hidden from answer sets:

```
print_answer_sets("""
    choice(1..2).
    choose(X,a) :- not choose(X,b), choice(X).
    choose(X,b) :- not choose(X,a), choice(X).
    #show choose/2.
""");
```

### Example 8: declaring constants

Suppose that we want to use a number that we will use more often in a program, and that we want to be able to change it easily in a single place. Then we can declare this number as a constant, as follows:

```
print_answer_sets("""
    #const k=2.
    choice(1..k).
    choose(X,a) :- not choose(X,b), choice(X).
    choose(X,b) :- not choose(X,a), choice(X).
    #show choose/2.
""");

print_answer_sets("""
    #const k=3.
    choice(1..k).
    choose(X,a) :- not choose(X,b), choice(X).
    choose(X,b) :- not choose(X,a), choice(X).
    #show choose/2.
""");
```

### Example 9: abbreviating facts

Instead of using `choice(1..3).`, we could also use the statement `choice(1;2;3).`:

```
print_answer_sets("""
    choice(1;2;3).
""");
```

### Example 10: choice rules

Suppose that we want a program that encodes a choice between all subsets of a given set. For example, let $A = \{a_1,a_2\}$ and suppose that we want to write a program whose answer sets (restricted to some predicate `choose/1`) correspond exactly to the four subsets of $A$. We can do this as follows, using the technique that we saw in the examples about binary choice:

```
print_answer_sets("""
    element(a;b).
    choose(X) :- not unchoose(X), element(X).
    unchoose(X) :- not choose(X), element(X).
    #show choose/1.
""");
```

However, we can also encode this, more conveniently, using so-called choice rules. For example, the rule `{ choose(a;b) }.` represents exactly what the above example did:

```
print_answer_sets("""
    { choose(a;b) }.
""");
```

Or, equivalently, written as:

```
print_answer_sets("""
    { choose(a); choose(b) }.
""");
```

You can use these choice rules in the head of a rule, also with a non-empty body of the rule (e.g., `{ choose(a;b) } :- make_choice.`).

### Example 11: cardinality rules

A construct that is similar to choice rules is that of cardinality rules. These are rules that encode a choice between subsets of a given set that satisfy some cardinality conditions. For example, we can modify the example above so that only subsets of size at least 1 and at most 2 are given:

```
print_answer_sets("""
    1 { choose(a); choose(b) } 2.
""");
```

We can also use these cardinality expressions in the body of a rule:

```
print_answer_sets("""
    { choose(a); choose(b) }.
    exactly_one :- 1 { choose(a;b) } 1.
""");
```

### Example 12: conditional literals

Another convenient feature is the use of conditional literals. These we can use to express a set of statements for which another property is true. This works as in the following example, where we declare several items using `item/1`, and then encode a choice of exactly two of these items using a cardinality rule where we express the set using conditional literals:

```
print_answer_sets("""
    item(a;b;c).
    2 { choose(X) : item(X) } 2.
    #show choose/1.
""");
```

### Example 13: arithmetic

You can use integers as constants. Moreover, you can use arithmetic operations (e.g., comparing integers, addition, multiplication, etc.).

```
print_answer_sets("""
    number(1..10).
    four(4).
    at_most_five(N) :- number(N), four(F), N <= F+1.
    #show at_most_five/1.
""");
```

However, these arithmetic operations can only be applied to variables whose value can be determined by means of other predicates. Therefore, if arithmetic operations involve variables that do not appear in a positively occurring atom in the body of the rule, clingo will throw an error message. (Such rules are not *safe*, as we explained before.)
In the following example, the rule for `double(X,Y)` contains arithmetic operations on `X` and `Y`, and these variables do not occur in a positive atom in the body, so the rule is not safe and clingo will throw an error message.

```
try:
    print_answer_sets("""
        number(1..10).
        double(X,Y) :- not letter(X), not letter(Y), Y = 2*X.
        #show double/2.
    """);
except RuntimeError as e:
    print("RuntimeError: \"{}\"".format(str(e)));
```

We can make the rule for `double(X,Y)` safe as follows, for example.

```
print_answer_sets("""
    number(1..10).
    double(X,Y) :- number(X), number(Y), Y = 2*X.
    #show double/2.
""");
```

### Example 14: aggregates

Another useful feature that clingo offers is the use of *aggregates*: `#sum`, `#count`, `#max`, `#min`, `#even`, and `#odd`. These aggregates operate on a (multi)set of atoms. They can be used as follows, for example:

```
print_answer_sets("""
    item(1..4).
    cost(1,2).
    cost(2,2).
    cost(3,3).
    cost(4,0).
    2 { choose(I) : item(I) } 2.
    total(D) :- D = #sum { C,item(I) : item(I), choose(I), cost(I,C) }.
    #show total/1.
    #show choose/1.
""");
```

In the above example, we use `C,item(I)` in the `#sum` aggregate to make sure that whenever there are more items with the same cost, all of their costs are counted towards the total. If we were to replace `C,item(I)` by `C`, several items with the same cost `C` would (all together) only contribute `C` to the sum.

Here is another example, now using the `#count` aggregate.

```
print_answer_sets("""
    item(1..4).
    total(D) :- D = #count { item(I) : item(I) }.
""");
```

Finally, an example using `#max`:

```
print_answer_sets("""
    item(1..4).
    2 { choose(I) : item(I) } 2.
    highest(D) :- D = #max { I : item(I), choose(I) }.
    #show highest/1.
    #show choose/1.
""");
```

## Optimization

We can also use optimization statements, to select from all answer sets of a given answer set program those answer sets that minimize or maximize a particular property. To illustrate this, we will define a short function that will give us all optimized answer sets for a given answer set program.

```
def print_optimal_answer_sets(program):
    # Load the answer set program, and call the grounder
    control = clingo.Control();
    control.add("base", [], program);
    control.ground([("base", [])]);

    # Define a function that will be called when an answer set is found
    # This function sorts the answer set alphabetically, and prints it
    def on_model(model):
        if model.optimality_proven == True:
            sorted_model = [str(atom) for atom in model.symbols(shown=True)];
            sorted_model.sort();
            print("Optimal answer set: {{{}}}".format(", ".join(sorted_model)));

    # Ask clingo to find all optimal models (using an upper bound of 0 gives all models)
    control.configuration.solve.opt_mode = "optN";
    control.configuration.solve.models = 0;

    # Call the clingo solver, passing on the function on_model for when an answer set is found
    answer = control.solve(on_model=on_model)

    # Print a message when no answer set was found
    if answer.satisfiable == False:
        print("No answer sets");
```

With this function in place, we will illustrate how optimization statements work. Consider the following example, where we have facts declaring four items and a score for each of these items. We also have a cardinality rule that states that we should select between 2 and 3 of these items. This gives us 10 answer sets in total:

```
print_answer_sets("""
    item(1..4).
    score(1,5).
    score(2,4).
    score(3,4).
    score(4,2).
    2 { select(I) : item(I) } 3.
    #show select/1.
""");
```

Now let's add an optimization statement, that states that we should maximize the total score for the items that we select. This shows that only 2 of the 10 answer sets maximize this total score.

```
print_optimal_answer_sets("""
    item(1..4).
    score(1,5).
    score(2,4).
    score(3,4).
    score(4,2).
    2 { select(I) : item(I) } 3.
    #maximize { S,I : select(I), score(I,S) }.
    #show select/1.
""");
```

How this works is as follows.
The statement `{ S,I : select(I), score(I,S) }` refers to the set of all pairs `S,I` for which `select(I), score(I,S)` holds in the answer set. Then the `#maximize` statement indicates that only those answer sets should be taken for which the sum of all `S` in such pairs `S,I` is maximal. You may also use `#maximize` statements with tuples of different arity (e.g., triples, or 1-tuples). In this case, however, we need to include `I` in the tuples, because otherwise the score for item 2 and item 3 would only be counted once in the total score.

This works similarly using `#minimize` instead of `#maximize`.

```
print_optimal_answer_sets("""
    item(1..4).
    score(1,5).
    score(2,4).
    score(3,4).
    score(4,2).
    2 { select(I) : item(I) } 3.
    #minimize { S,I : select(I), score(I,S) }.
    #show select/1.
""");
```

To see why we need the tuple `S,I` in this `#minimize` statement, consider the following example, where we replace `S,I` by just `S` in the `#minimize` statement:

```
print_optimal_answer_sets("""
    item(1..4).
    score(1,5).
    score(2,4).
    score(3,4).
    score(4,2).
    2 { select(I) : item(I) } 3.
    #minimize { S : select(I), score(I,S) }.
    #show select/1.
""");
```

Now selecting items 2 and 3 yields the minimal total score, because both add score 4 to the set `{ S : select(I), score(I,S) }`, and now the score 4 gets only counted once, which is not what we had in mind.

## Disjunction

There is an extension of answer set programming that allows us to use disjunction in the head of rules, and clingo supports this. The following example illustrates how we can do this (with the operator `;` in the head of a rule):

```
print_answer_sets("""
    a.
    b ; c :- a.
""");
```

To make sure that we have a good foundation for this, we make sure that our definition of *answer sets* also works for programs with rules that have disjunction in the head. The idea is the same. We guess an interpretation $I$, use this interpretation to make a version $P^I$ of the program $P$ without negation (the *reduct* of $P$ w.r.t. $I$), and then check that $I$ is a minimal model of $P^I$—if this is the case, then $I$ is an *answer set* of $P$. We get the reduct $P^{I}$ from $P$ by:

1. Removing all rules containing some `not a` in the body such that $a \in I$.
2. Removing all remaining statements `not a` from the rest of the program.

So this is also exactly the same as for the case without disjunction in the head of rules. The only thing that we need to update is what it means for $I$ to be a (minimal) model of $P^I$. We consider rules `a ; b :- c_1, ..., c_n` as the logical implication $(c_1 \wedge \dotsm \wedge c_n) \rightarrow (a \vee b)$. That is, we interpret disjunction in the head as logical disjunction in the consequent of the implication. Using this, we can use a similar definition of what an answer set is: an interpretation $I$ that is a minimal (w.r.t. set inclusion) model of the reduct $P^I$ of the program $P$ w.r.t. $I$.

To take an example, consider the program that we used as example above:

```
% the program P
a.
b ; c :- a.
```

Consider the interpretation $I_1 = \{a,b\}$. The reduct $P^{I_1}$ is:

```
% the reduct P^{I_1}
a.
b ; c :- a.
```

And $I_1$ is in fact a minimal model of $P^{I_1}$. If we remove $a$ from $I_1$, we get the interpretation $\{b\}$, which does not satisfy `a.`. This is also the case if we remove $a$ and $b$ from $I_1$. If we instead only remove $b$ from $I_1$, we get the interpretation $\{a\}$, which does not satisfy `b ; c :- a.`. Thus, $I_1$ is a minimal model of $P^{I_1}$.

### Using default negation to represent disjunctive choices

As we have seen above, we can add disjunction to the head of rules. However, in many cases this is not needed to express disjunctive choices. For example, as we have already seen earlier, we can encode a choice between two atoms (say, `a` and `b`) using several rules as follows:

```
print_answer_sets("""
    a :- not b.
    b :- not a.
""");
```
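As a way to tie the definitions in this guide together, here is a small brute-force Python sketch (not part of the original notebook's code) that enumerates the answer sets of a tiny ground normal program by computing reducts and least models exactly as described above. The tuple representation `(head, positive_body, negative_body)` and all function names are our own choices; a constraint is encoded with head `None`. This is only workable for a handful of atoms; grounding and solving with clingo is of course vastly more efficient.

```python
from itertools import chain, combinations

def reduct(program, interpretation):
    """Gelfond-Lifschitz reduct: drop every rule whose negative body
    intersects the interpretation, then drop the remaining negations."""
    return [(head, pos) for (head, pos, neg) in program
            if not (neg & interpretation)]

def least_model(rules):
    """Least model of a positive program, by fixpoint iteration."""
    model = set()
    changed = True
    while changed:
        changed = False
        for head, pos in rules:
            if pos <= model and head not in model:
                model.add(head)
                changed = True
    return model

def is_answer_set(program, interpretation):
    red = reduct(program, interpretation)
    # A constraint (head None) whose positive body holds rules out the candidate
    if any(head is None and pos <= interpretation for head, pos in red):
        return False
    rules = [(head, pos) for head, pos in red if head is not None]
    return least_model(rules) == interpretation

def answer_sets(program, atoms):
    """Enumerate answer sets by checking every subset of the given atoms."""
    subsets = chain.from_iterable(
        combinations(sorted(atoms), r) for r in range(len(atoms) + 1))
    for subset in subsets:
        candidate = set(subset)
        if is_answer_set(program, candidate):
            yield candidate

# The binary-choice program "a :- not b.  b :- not a." from Example 1:
program = [("a", set(), {"b"}), ("b", set(), {"a"})]
print(sorted(sorted(s) for s in answer_sets(program, {"a", "b"})))  # [['a'], ['b']]
```

Running this on the program from Example 0 (`a :- not a.`) yields no answer sets, matching what clingo reports.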
```
bonus_root = '/network/group/aopp/predict/TIP016_PAXTON_RPSPEEDY/ML4L/ECMWF_files/raw/BonusClimate/'

# Wetlands
wetlands = ['COPERNICUS', 'CAMA', 'ORCHIDEE',
            'monthlyWetlandAndSeasonalWater_minusRiceAllCorrected_waterConsistent']
lakes = ['CL_ECMWFAndJRChistory', 'yearlyCL']

import glob
import re
import sys

import numpy as np
import pandas as pd
import xarray as xr


def natural_sort(l):
    """Sort strings so that embedded numbers sort numerically (file1, file2, file10, ...)."""
    convert = lambda text: int(text) if text.isdigit() else text.lower()
    alphanum_key = lambda key: [convert(c) for c in re.split('([0-9]+)', key)]
    return sorted(l, key=alphanum_key)


def get_ERA_hour(ERA_month, ERA_constant, t):
    """Extract an hour of ERA data."""
    # Filter a month of ERA data down to a single hour
    time_filter = (ERA_month.time == t)
    ERA_hour = ERA_month.where(time_filter, drop=True)

    # Join on the constant data, first setting the time coordinate for the merge
    ERA_constant = ERA_constant.assign_coords({"time": ERA_hour.time})
    ERA_hour = xr.merge([ERA_hour, ERA_constant]).load()  # explicitly load

    # Now filter to get land values only
    land_filter = (ERA_hour.lsm > 0.5)
    ERA_hour = ERA_hour.where(land_filter, drop=True)

    # And convert longitude from [0, 360) to [-180, 180)
    ERA_hour = ERA_hour.assign_coords({"longitude": (((ERA_hour.longitude + 180) % 360) - 180)})

    return ERA_hour


root = '/network/group/aopp/predict/TIP016_PAXTON_RPSPEEDY/ML4L/ECMWF_files/raw/'

# Time-constant ERA data. This is different for v15 and v20 data.
versions = ["v15", "v20"]
ERA_constant_dict = {}
for v in versions:
    f = root + f'processed_data/ERA_timeconstant/ERA_constants_{v}.nc'
    ds = xr.open_dataset(f)  # NetCDF file of features which are constant for each gridpoint
    ERA_constant_dict[v] = ds
    ds.close()

# Time-variable ERA data
ERA_folder = root + 'processed_data/ERA_timevariable/'
ERA_files = natural_sort(glob.glob(ERA_folder + '*'))

f_ERA = ERA_files[0]
print(f_ERA)
ERA_month = xr.open_dataset(f_ERA, engine='cfgrib', backend_kwargs={'indexpath': ''})

# Get all times in that month of data, hourly grain
timestamps = pd.to_datetime(ERA_month.time)

# Empty dict. We will append the resulting dfs here
dfs = {"v15": [], "v20": []}

for t in timestamps:
    print(t)

    # Get an hour of MODIS data
    date_string = select_correct_MODIS_file(t)  # For this datetime, which MODIS file should be opened?
    if date_string == '2017-12-31':
        continue  # skip: we don't have this day

    v = "v15"
    ERA_constant = ERA_constant_dict[v]

    # Get an hour of ERA data
    ERA_hour = get_ERA_hour(ERA_month, ERA_constant, t)
    # ERA_df = ERA_hour.to_dataframe().reset_index().dropna()

timestamps = pd.to_datetime(ERA_month.time)
chosen_month = np.unique(timestamps.month)

np.arange(1, 12 + 1)  # 1-12 inclusive

for w in wetlands:
    fname = bonus_root + w + '/wetlandf'
    print(fname)
    ds_wetland = xr.open_dataset(fname, engine='cfgrib', decode_times=False,
                                 backend_kwargs={'indexpath': ''})

    # Rename the parameter
    ds_wetland = ds_wetland.cldiff.rename(f'cldiff_{w}')  # this is now a DataArray
    display(ds_wetland.time)

    # Fix the time
    ds_wetland['time'] = np.arange(1, 12 + 1)  # i.e. what month is it?

    # Select only the correct time
    chosen_month =

    display(ds_wetland)
    sys.exit()

! grib_ls /network/group/aopp/predict/TIP016_PAXTON_RPSPEEDY/ML4L/ECMWF_files/raw/BonusClimate/COPERNICUS/wetlandf | head -10
! module load eccodes
! grib_ls /network/group/aopp/predict/TIP016_PAXTON_RPSPEEDY/ML4L/ECMWF_files/raw/BonusClimate/COPERNICUS/wetlandf | head -10
```

ds_wetland1 = ds = xr.open_dataset(ERA_skin, engine='cfgrib', filter_by_keys={'typeOfLevel': 'surface'}, backend_kwargs={'indexpath': ''})  # Assumes constant features are surface quantities, which is currently true, but may not always be...

```
ds_wetland1 = xr.open_dataset('/network/group/aopp/predict/TIP016_PAXTON_RPSPEEDY/ML4L/ECMWF_files/raw/BonusClimate/COPERNICUS/wetlandf',
                              engine='cfgrib', decode_times=False, backend_kwargs={'indexpath': ''})
ds_wetland1 = ds_wetland1.cldiff.rename('blob')
ds_wetland1
```
<a href="https://colab.research.google.com/github/NeuromatchAcademy/course-content/blob/master/tutorials/W1D1_ModelTypes/student/W1D1_Tutorial2.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> # Neuromatch Academy: Week 1, Day 1, Tutorial 2 # Model Types: "How" models __Content creators:__ Matt Laporte, Byron Galbraith, Konrad Kording __Content reviewers:__ Dalin Guo, Aishwarya Balwani, Madineh Sarvestani, Maryam Vaziri-Pashkam, Michael Waskom ___ # Tutorial Objectives This is tutorial 2 of a 3-part series on different flavors of models used to understand neural data. In this tutorial we will explore models that can potentially explain *how* the spiking data we have observed is produced To understand the mechanisms that give rise to the neural data we save in Tutorial 1, we will build simple neuronal models and compare their spiking response to real data. We will: - Write code to simulate a simple "leaky integrate-and-fire" neuron model - Make the model more complicated — but also more realistic — by adding more physiologically-inspired details ``` #@title Video 1: "How" models from IPython.display import IFrame class BiliVideo(IFrame): def __init__(self, id, page=1, width=400, height=300, **kwargs): self.id=id src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page) super(BiliVideo, self).__init__(src, width, height, **kwargs) video = BiliVideo(id='BV1yV41167Di', width=854, height=480, fs=1) print("Video available at https://www.bilibili.com/video/{0}".format(video.id)) video ``` # Setup ``` import numpy as np import matplotlib.pyplot as plt from scipy import stats #@title Figure Settings import ipywidgets as widgets #interactive display %matplotlib inline %config InlineBackend.figure_format = 'retina' plt.style.use("https://raw.githubusercontent.com/NeuromatchAcademy/course-content/master/nma.mplstyle") #@title Helper Functions def histogram(counts, bins, vlines=(), 
ax=None, ax_args=None, **kwargs): """Plot a step histogram given counts over bins.""" if ax is None: _, ax = plt.subplots() # duplicate the first element of `counts` to match bin edges counts = np.insert(counts, 0, counts[0]) ax.fill_between(bins, counts, step="pre", alpha=0.4, **kwargs) # area shading ax.plot(bins, counts, drawstyle="steps", **kwargs) # lines for x in vlines: ax.axvline(x, color='r', linestyle='dotted') # vertical line if ax_args is None: ax_args = {} # heuristically set max y to leave a bit of room ymin, ymax = ax_args.get('ylim', [None, None]) if ymax is None: ymax = np.max(counts) if ax_args.get('yscale', 'linear') == 'log': ymax *= 1.5 else: ymax *= 1.1 if ymin is None: ymin = 0 if ymax == ymin: ymax = None ax_args['ylim'] = [ymin, ymax] ax.set(**ax_args) ax.autoscale(enable=False, axis='x', tight=True) def plot_neuron_stats(v, spike_times): fig, (ax1, ax2) = plt.subplots(ncols=2, figsize=(12, 5)) # membrane voltage trace ax1.plot(v[0:100]) ax1.set(xlabel='Time', ylabel='Voltage') # plot spike events for x in spike_times: if x >= 100: break ax1.axvline(x, color='red') # ISI distribution isi = np.diff(spike_times) n_bins = np.arange(isi.min(), isi.max() + 2) - .5 counts, bins = np.histogram(isi, n_bins) vlines = [] if len(isi) > 0: vlines = [np.mean(isi)] xmax = max(20, int(bins[-1])+5) histogram(counts, bins, vlines=vlines, ax=ax2, ax_args={ 'xlabel': 'Inter-spike interval', 'ylabel': 'Number of intervals', 'xlim': [0, xmax] }) plt.show() ``` # Section 1: The Linear Integrate-and-Fire Neuron How does a neuron spike? A neuron charges and discharges an electric field across its cell membrane. The state of this electric field can be described by the _membrane potential_. The membrane potential rises due to excitation of the neuron, and when it reaches a threshold a spike occurs. The potential resets, and must rise to a threshold again before the next spike occurs. 
One of the simplest models of spiking neuron behavior is the linear integrate-and-fire model neuron. In this model, the neuron increases its membrane potential $V_m$ over time in response to excitatory input currents $I$ scaled by some factor $\alpha$: \begin{align} dV_m = {\alpha}I \end{align} Once $V_m$ reaches a threshold value a spike is produced, $V_m$ is reset to a starting value, and the process continues. Here, we will take the starting and threshold potentials as $0$ and $1$, respectively. So, for example, if $\alpha I=0.1$ is constant---that is, the input current is constant---then $dV_m=0.1$, and at each timestep the membrane potential $V_m$ increases by $0.1$ until after $(1-0)/0.1 = 10$ timesteps it reaches the threshold and resets to $V_m=0$, and so on. Note that we define the membrane potential $V_m$ as a scalar: a single real (or floating point) number. However, a biological neuron's membrane potential will not be exactly constant at all points on its cell membrane at a given time. We could capture this variation with a more complex model (e.g. with more numbers). Do we need to? The proposed model is a 1D simplification. There are many details we could add to it, to preserve different parts of the complex structure and dynamics of a real neuron. If we were interested in small or local changes in the membrane potential, our 1D simplification could be a problem. However, we'll assume an idealized "point" neuron model for our current purpose. #### Spiking Inputs Given our simplified model for the neuron dynamics, we still need to consider what form the input $I$ will take. How should we specify the firing behavior of the presynaptic neuron(s) providing the inputs to our model neuron? Unlike in the simple example above, where $\alpha I=0.1$, the input current is generally not constant. Physical inputs tend to vary with time. We can describe this variation with a distribution. 
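The constant-input arithmetic above ($\alpha I = 0.1$, threshold $1$) can be checked in a few lines. This is our own illustrative sketch, not part of the tutorial's code; the `constant_input_if` name is made up here, and the small tolerance guards against floating-point error when $0.1$ is added repeatedly:

```python
def constant_input_if(n_steps=30, dv=0.1, threshold=1.0):
    """Integrate a constant dV_m per timestep; spike and reset at threshold."""
    v = 0.0
    spike_times = []
    for t in range(1, n_steps + 1):
        v += dv                        # dV_m = alpha * I, constant here
        if v >= threshold - 1e-9:      # tolerance for floating-point error
            spike_times.append(t)
            v = 0.0                    # reset to the starting potential
    return spike_times

print(constant_input_if())  # spikes every (1 - 0)/0.1 = 10 timesteps
```

With these values the neuron spikes at timesteps 10, 20, and 30, matching the hand calculation.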
We'll assume the input current $I$ over a timestep is due to equal contributions from a non-negative ($\ge 0$) integer number of input spikes arriving in that timestep. Our model neuron might integrate currents from 3 input spikes in one timestep, and 7 spikes in the next timestep. We should see similar behavior when sampling from our distribution. Given no other information about the input neurons, we will also assume that the distribution has a mean (i.e. mean rate, or number of spikes received per timestep), and that the spiking events of the input neuron(s) are independent in time. Are these reasonable assumptions in the context of real neurons? A suitable distribution given these assumptions is the Poisson distribution, which we'll use to model $I$: \begin{align} I \sim \mathrm{Poisson}(\lambda) \end{align} where $\lambda$ is the mean of the distribution: the average rate of spikes received per timestep. ### Exercise 1: Compute $dV_m$ For your first exercise, you will write the code to compute the change in voltage $dV_m$ (per timestep) of the linear integrate-and-fire model neuron. The rest of the code to handle numerical integration is provided for you, so you just need to fill in a definition for `dv` in the `lif_neuron` function below. The value of $\lambda$ for the Poisson random variable is given by the function argument `rate`. The [`scipy.stats`](https://docs.scipy.org/doc/scipy/reference/stats.html) package is a great resource for working with and sampling from various probability distributions. We will use the `scipy.stats.poisson` class and its method `rvs` to produce Poisson-distributed random samples. In this tutorial, we have imported this package with the alias `stats`, so you should refer to it in your code as `stats.poisson`. ``` def lif_neuron(n_steps=1000, alpha=0.01, rate=10): """ Simulate a linear integrate-and-fire neuron. Args: n_steps (int): The number of time steps to simulate the neuron's activity. 
        alpha (float): The input scaling factor
        rate (int): The mean rate of incoming spikes
    """
    # precompute Poisson samples for speed
    exc = stats.poisson(rate).rvs(n_steps)

    v = np.zeros(n_steps)
    spike_times = []

    ################################################################################
    # Students: compute dv, then comment out or remove the next line
    raise NotImplementedError("Exercise: compute the change in membrane potential")
    ################################################################################
    for i in range(1, n_steps):
        dv = ...
        v[i] = v[i-1] + dv
        if v[i] > 1:
            spike_times.append(i)
            v[i] = 0

    return v, spike_times

# Uncomment these lines after completing the lif_neuron function
# v, spike_times = lif_neuron()
# plot_neuron_stats(v, spike_times)
```

[*Click for solution*](https://github.com/erlichlab/course-content/tree/master//tutorials/W1D1_ModelTypes/solutions/W1D1_Tutorial2_Solution_f8960ca1.py)

*Example output:*

<img alt='Solution hint' align='left' width=848 height=344 src=https://raw.githubusercontent.com/NeuromatchAcademy/course-content/master/tutorials/W1D1_ModelTypes/static/W1D1_Tutorial2_Solution_f8960ca1_0.png>

## Interactive Demo: Linear-IF neuron

Like last time, you can now explore how various parameters of the LIF model influence the ISI distribution.

```
#@title
#@markdown You don't need to worry about how the code works – but you do need to **run the cell** to enable the sliders.
def _lif_neuron(n_steps=1000, alpha=0.01, rate=10): exc = stats.poisson(rate).rvs(n_steps) v = np.zeros(n_steps) spike_times = [] for i in range(1, n_steps): dv = alpha * exc[i] v[i] = v[i-1] + dv if v[i] > 1: spike_times.append(i) v[i] = 0 return v, spike_times @widgets.interact( n_steps=widgets.FloatLogSlider(1000.0, min=2, max=4), alpha=widgets.FloatLogSlider(0.01, min=-2, max=-1), rate=widgets.IntSlider(10, min=5, max=20) ) def plot_lif_neuron(n_steps=1000, alpha=0.01, rate=10): v, spike_times = _lif_neuron(int(n_steps), alpha, rate) plot_neuron_stats(v, spike_times) #@title Video 2: Linear-IF models from IPython.display import IFrame class BiliVideo(IFrame): def __init__(self, id, page=1, width=400, height=300, **kwargs): self.id=id src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page) super(BiliVideo, self).__init__(src, width, height, **kwargs) video = BiliVideo(id='BV1iZ4y1u7en', width=854, height=480, fs=1) print("Video available at https://www.bilibili.com/video/{0}".format(video.id)) video ``` # Section 2: Inhibitory signals Our linear integrate-and-fire neuron from the previous section was indeed able to produce spikes. However, our ISI histogram doesn't look much like empirical ISI histograms seen in Tutorial 1, which had an exponential-like shape. What is our model neuron missing, given that it doesn't behave like a real neuron? In the previous model we only considered excitatory behavior -- the only way the membrane potential could decrease was upon a spike event. We know, however, that there are other factors that can drive $V_m$ down. First is the natural tendency of the neuron to return to some steady state or resting potential. We can update our previous model as follows: \begin{align} dV_m = -{\beta}V_m + {\alpha}I \end{align} where $V_m$ is the current membrane potential and $\beta$ is some leakage factor. 
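To see what the leak term alone does, here is a minimal sketch (our own illustration, not part of the exercise code): with no input ($I = 0$), each step multiplies the potential by $(1 - \beta)$, so $V_m$ decays geometrically toward the resting value of zero.

```python
beta = 0.1            # leakage factor
v = 1.0               # start above rest
trace = [v]
for _ in range(5):
    v = v + (-beta * v)   # dV_m = -beta * V_m, no input term
    trace.append(v)

print(trace)  # decays toward 0: v_t = (1 - beta)**t
```

This is why the leaky model needs ongoing excitation to reach threshold: without input, the potential drains away on its own.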
This is a basic form of the popular Leaky Integrate-and-Fire model neuron (for a more detailed discussion of the LIF Neuron, see the Appendix).

We also know that in addition to excitatory presynaptic neurons, we can have inhibitory presynaptic neurons as well. We can model these inhibitory neurons with another Poisson random variable:

\begin{align}
I = I_{exc} - I_{inh} \\
I_{exc} \sim \mathrm{Poisson}(\lambda_{exc}) \\
I_{inh} \sim \mathrm{Poisson}(\lambda_{inh})
\end{align}

where $\lambda_{exc}$ and $\lambda_{inh}$ are the average spike rates (per timestep) of the excitatory and inhibitory presynaptic neurons, respectively.

### Exercise 2: Compute $dV_m$ with inhibitory signals

For your second exercise, you will again write the code to compute the change in voltage $dV_m$, though now of the LIF model neuron described above. Like last time, the rest of the code needed to handle the neuron dynamics is provided for you, so you just need to fill in a definition for `dv` below.

```
def lif_neuron_inh(n_steps=1000, alpha=0.5, beta=0.1, exc_rate=10, inh_rate=10):
    """ Simulate a simplified leaky integrate-and-fire neuron with both excitatory
    and inhibitory inputs.

    Args:
        n_steps (int): The number of time steps to simulate the neuron's activity.
        alpha (float): The input scaling factor
        beta (float): The membrane potential leakage factor
        exc_rate (int): The mean rate of the incoming excitatory spikes
        inh_rate (int): The mean rate of the incoming inhibitory spikes
    """
    # precompute Poisson samples for speed
    exc = stats.poisson(exc_rate).rvs(n_steps)
    inh = stats.poisson(inh_rate).rvs(n_steps)

    v = np.zeros(n_steps)
    spike_times = []

    ################################################################################
    # Students: compute dv, then comment out or remove the next line
    raise NotImplementedError("Exercise: compute the change in membrane potential")
    ################################################################################
    for i in range(1, n_steps):
        dv = ...
        v[i] = v[i-1] + dv
        if v[i] > 1:
            spike_times.append(i)
            v[i] = 0

    return v, spike_times

# Uncomment these lines to make the plot once you've completed the function
# v, spike_times = lif_neuron_inh()
# plot_neuron_stats(v, spike_times)
```

[*Click for solution*](https://github.com/erlichlab/course-content/tree/master//tutorials/W1D1_ModelTypes/solutions/W1D1_Tutorial2_Solution_4d9a2677.py)

*Example output:*

<img alt='Solution hint' align='left' width=848 height=344 src=https://raw.githubusercontent.com/NeuromatchAcademy/course-content/master/tutorials/W1D1_ModelTypes/static/W1D1_Tutorial2_Solution_4d9a2677_0.png>

## Interactive Demo: LIF + inhibition neuron

```
#@title
#@markdown **Run the cell** to enable the sliders.

def _lif_neuron_inh(n_steps=1000, alpha=0.5, beta=0.1, exc_rate=10, inh_rate=10):
    """ Simulate a simplified leaky integrate-and-fire neuron with both excitatory
    and inhibitory inputs.

    Args:
        n_steps (int): The number of time steps to simulate the neuron's activity.
        alpha (float): The input scaling factor
        beta (float): The membrane potential leakage factor
        exc_rate (int): The mean rate of the incoming excitatory spikes
        inh_rate (int): The mean rate of the incoming inhibitory spikes
    """
    # precompute Poisson samples for speed
    exc = stats.poisson(exc_rate).rvs(n_steps)
    inh = stats.poisson(inh_rate).rvs(n_steps)
    v = np.zeros(n_steps)
    spike_times = []
    for i in range(1, n_steps):
        dv = -beta * v[i-1] + alpha * (exc[i] - inh[i])
        v[i] = v[i-1] + dv
        if v[i] > 1:
            spike_times.append(i)
            v[i] = 0
    return v, spike_times

@widgets.interact(n_steps=widgets.FloatLogSlider(1000.0, min=2.5, max=4),
                  alpha=widgets.FloatLogSlider(0.5, min=-1, max=1),
                  beta=widgets.FloatLogSlider(0.1, min=-1, max=0),
                  exc_rate=widgets.IntSlider(12, min=10, max=20),
                  inh_rate=widgets.IntSlider(12, min=10, max=20))
def plot_lif_neuron(n_steps=1000, alpha=0.5, beta=0.1, exc_rate=10, inh_rate=10):
    v, spike_times = _lif_neuron_inh(int(n_steps), alpha, beta, exc_rate, inh_rate)
    plot_neuron_stats(v,
spike_times)

#@title Video 3: LIF + inhibition
from IPython.display import IFrame
class BiliVideo(IFrame):
    def __init__(self, id, page=1, width=400, height=300, **kwargs):
        self.id = id
        src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
        super(BiliVideo, self).__init__(src, width, height, **kwargs)

video = BiliVideo(id='BV1nV41167mS', width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
video
```

# Summary

In this tutorial we gained some intuition for the mechanisms that produce the observed behavior in our real neural data. First, we built a simple neuron model with excitatory input and saw that its behavior, measured using the ISI distribution, did not match our real neurons. We then improved our model by adding leakiness and inhibitory input. The behavior of this balanced model was much closer to the real neural data.

# Bonus

### Why do neurons spike?

A neuron stores energy in an electric field across its cell membrane, by controlling the distribution of charges (ions) on either side of the membrane. This energy is rapidly discharged to generate a spike when the field potential (or membrane potential) crosses a threshold. The membrane potential may be driven toward or away from this threshold, depending on inputs from other neurons: excitatory or inhibitory, respectively. The membrane potential tends to revert to a resting potential, for example due to the leakage of ions across the membrane, so that reaching the spiking threshold depends not only on the amount of input ever received following the last spike, but also on the timing of the inputs.

The storage of energy by maintaining a field potential across an insulating membrane can be modeled by a capacitor. The leakage of charge across the membrane can be modeled by a resistor. This is the basis for the leaky integrate-and-fire neuron model.
### The LIF Model Neuron

The full equation for the LIF neuron is

\begin{align}
C_{m}\frac{dV_m}{dt} = -(V_m - V_{rest})/R_{m} + I
\end{align}

where $C_m$ is the membrane capacitance, $R_m$ is the membrane resistance, $V_{rest}$ is the resting potential, and $I$ is some input current (from other neurons, an electrode, ...).

In our above examples we set many of these parameters to convenient values ($C_m = R_m = dt = 1$, $V_{rest} = 0$) to focus more on the general behavior of the model. However, these too can be manipulated to achieve different dynamics, or to ensure the dimensions of the problem are preserved between simulation units and experimental units (e.g. with $V_m$ given in millivolts, $R_m$ in megaohms, $t$ in milliseconds).
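As a hedged sketch of how the full equation could be stepped numerically (forward Euler, with illustrative parameter values of our own choosing, not values from the tutorial):

```python
# Forward-Euler integration of C_m * dV/dt = -(V - V_rest)/R_m + I
C_m, R_m, V_rest = 1.0, 1.0, 0.0   # capacitance, resistance, resting potential
dt, I = 0.1, 1.5                   # timestep and constant input current
threshold, V_reset = 1.0, 0.0      # spike threshold and reset value

v = V_rest
spikes = []
for step in range(1, 201):
    dv = dt * (-(v - V_rest) / R_m + I) / C_m
    v += dv
    if v > threshold:
        spikes.append(step)
        v = V_reset

print(f"{len(spikes)} spikes in 200 steps")
```

Because the steady-state potential $I R_m = 1.5$ sits above the threshold, this neuron fires at regular intervals; lowering $I$ below $1/R_m$ would silence it.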
## Introduction to matplotlib

`matplotlib` is the Python plotting package to rule them all. Not because it's the best. Or the easiest to use. Or the fastest. Or... wait, why is it the number 1 plotting package? Nobody knows! But it's everywhere, and making basic plots is... fine. It's really fine.

```
import numpy as np
%matplotlib inline
import matplotlib.pyplot as plt
```

Let's get some well data to play with.

```
dt = np.load("../data/B-41_DT.npy")
rhob = np.load("../data/B-41_RHOB.npy")
depth = np.load("../data/B-41_DEPTH.npy")
```

## First steps

The first step is usually just a quick plot. If we have a simple 1D array of numbers, we just pass `y`, and `x` will be generated from the indices of the elements.

```
plt.plot(dt)
```

If you have another parameter, you can do `plt.plot(x, y)`.

<div class="alert alert-success">
<b>Exercise</b>:
<ul>
<li>Can you plot a smaller segment of the data?</li>
<li>Try to plot the data vertically.</li>
<li>Add `'r-o'` to your call to plot. It means 'red, line, circles'.</li>
<li>What happens if you add another line with `plt.ylim(830, 880)`?</li>
<li>Can you display the whole well with 0 at the top?</li>
<li>Try adding `plt.figure(figsize=(2,10))` at the start.</li>
</ul>
</div>

```
plt.plot(dt[3500:3600], depth[3500:3600], 'r-o')

plt.plot(dt, depth)
plt.ylim(830, 880)

plt.plot(dt, depth)
plt.fill_betweenx(depth, 0, dt)
plt.ylim(830, 880)

dtsm = np.convolve(np.ones(21)/21, dt, mode='same')
plt.plot(dt, depth, label='original')
plt.plot(dtsm, depth, label='smooth')
plt.legend()
plt.ylim(830, 880)

plt.figure(figsize=(2,10))
plt.plot(dt, depth, lw=0.5)
plt.ylim(depth[-1]+100, 0)

plt.figure(figsize=(2,10))
plt.plot(dt, depth, lw=0.5)
plt.ylim(depth[-1]+100, 0)
plt.xlabel('DT [µs/m]')
plt.ylabel('Depth [m]')
plt.title('DT log')
```

### `plt.scatter()`

It's also easy to make scatter plots:

```
plt.scatter(dt, rhob)
```

We can adjust how the points plot to make it more interesting:

```
plt.scatter(dt, rhob, c=dt*rhob, s=2,
alpha=0.2)
plt.grid(c='k', alpha=0.1)
```

### `plt.hist()` and `plt.bar()`

```
hist = np.histogram(dt, bins=20)

dt.min(), dt.max()

rng = np.nanmin(dt), np.nanmax(dt)
```

It turns out that `np.histogram` struggles with NaNs, because it can't do the gt/lt comparisons it needs to do on the data. So now that we have the 'real' min and max, we can make a new DT curve without NaNs and they will be left out of the analysis.

```
dtn = dt[~np.isnan(dt)]
```

Luckily, `matplotlib` has a histogram plotting function:

```
n, bins, _ = plt.hist(dtn, bins='auto', range=rng)
```

Let's get the data and make our own bar chart. First, we have to compute the bin centres:

```
n.size, bins.size

bins = (bins[1:] + bins[:-1]) / 2

plt.bar(bins, n, width=2, color='g')
```

## `plt.imshow()` for raster data

For image-like data, such as slices of seismic, we need a different kind of visualization.

NB There's also `plt.pcolor` but it's very slow. Use `plt.pcolormesh` instead.

Let's load some seismic data from a SEG-Y file.

```
import segyio

with segyio.open('../data/Penobscot_0-1000ms.sgy') as s:
    vol = segyio.cube(s)

vol.shape

amp = vol[:, :, 200]

plt.imshow(amp)
```

We need to change the aspect ratio:

```
plt.imshow(amp, aspect=0.5)
plt.colorbar(shrink=0.75)
```

And fix the colorbar:

```
ma = np.percentile(vol, 98)
plt.imshow(amp, aspect=0.5, vmin=-ma, vmax=ma)
plt.colorbar(shrink=0.75)
```

<div class="alert alert-success">
<b>Exercise</b>:
<ul>
<li>Try plotting a vertical section through the data. You'll need to think about indexing into `vol`.</li>
<li>Can you make a histogram of the amplitudes? Remember the NaNs!</li>
</ul>
</div>

```
plt.imshow(vol[200, :, :].T)

ampn = amp[~np.isnan(amp)]
n, bins, _ = plt.hist(ampn, bins='auto', range=(-ma, ma))
plt.yscale('log', nonposy='clip')
```

## More `imshow` options

```
plt.imshow(amp[:50, :50], interpolation='bicubic')
```

We can choose new colourmaps easily, and post the colorbar.
```
plt.imshow(amp, aspect=0.5, cmap='gray', vmin=-ma, vmax=ma)
plt.colorbar()
```

Note too that matplotlib colourmaps all have reversed versions, just add `_r` to the end of the name.

```
plt.imshow(amp, aspect=0.5, cmap='RdBu_r', vmin=-ma, vmax=ma)
plt.colorbar()
```

We can give the image real-world extents:

```
plt.imshow(amp[:50, :50], extent=[10000, 11000, 200000, 201000])
plt.colorbar()
```

Notice that `plt.imshow()` assumes your pixels are square. I find that I usually want to make this assumption.

## The other way to plot rasters: `pcolormesh()`

Sometimes you might have varying cell sizes or shapes, or want to render the edges of the cells. Then you can use `pcolormesh()`. Read these articles to help figure out when to use what:

- http://thomas-cokelaer.info/blog/2014/05/matplotlib-difference-between-pcolor-pcolormesh-and-imshow/
- https://stackoverflow.com/questions/21166679/when-to-use-imshow-over-pcolormesh

```
plt.figure(figsize=(10,10))
plt.pcolormesh(amp[:20, :20], edgecolors=['white'], lw=1)
plt.show()
```

## Adding decoration

So far we've kept most of our calls to matplotlib to one line or so. Things can get much, much more complicated... The good news is that plots are usually built up, bit by bit. So you start with the one-liner, then gradually add things:

```
hor = np.load("../data/Penobscot_Seabed.npy")

plt.imshow(vol[200, :, :].T, vmin=-ma, vmax=ma)

plt.imshow(vol[200, :, :].T, cmap="gray", vmin=-ma, vmax=ma)
plt.plot(hor[200, :], 'r', lw=2)
plt.colorbar(shrink=0.67)

inl, xl, ts = vol.shape
extent = [0, xl, ts*0.004, 0]  # left, right, bottom, top

plt.imshow(vol[200, :, :].T, cmap="gray", vmin=-ma, vmax=ma, extent=extent, aspect='auto')
plt.plot(0.004 * hor[200, :], 'r', lw=2)
plt.colorbar(shrink=0.67)
plt.title("Penobscot, inline 200")
plt.xlabel("Crossline")
plt.ylabel("Time [ms]")
```

If things get more complicated than this, we need to switch to the so-called 'object-oriented' way to use matplotlib.
```
import matplotlib.patches as patches

fig, axs = plt.subplots(figsize=(15, 6), ncols=2)

ax = axs[0]
im = ax.imshow(vol[200, :, :].T, cmap="gray", vmin=-ma, vmax=ma, extent=extent, aspect='auto')
cb = fig.colorbar(im)
ax.plot(0.004 * hor[200, :], 'r', lw=2)
rect = patches.Rectangle((100, 100*0.004), 200, 100*0.004, lw=1, ec='b', fc='none')
ax.add_patch(rect)
ax.set_title("Penobscot, inline 200")
ax.set_xlabel("Crossline")
ax.set_ylabel("Time [ms]")
ax.text(10, 0.04, "peak = AI downward increase")

ax = axs[1]
ax.imshow(vol[200, 100:300, 100:200].T, extent=[100, 300, 0.8, 0.4], aspect='auto', cmap='gray', vmin=-ma, vmax=ma)
plt.setp(ax.spines.values(), color='b', lw=2)
ax.set_title('Zoomed area')
ax.set_xlabel("Crossline")

plt.savefig("../data/my_figure.png", dpi=300)
plt.savefig("../data/my_figure.svg")

plt.show()
```

## How complicated do you want to get?

It turns out you can do almost anything in `matplotlib`. This is a `matplotlib` figure:

```
from IPython.display import Image
Image('../data/t1.jpg')
```

The key method you need to make a tiled plot like this is [`gridspec`](https://matplotlib.org/users/gridspec.html). You will also need a lot of patience.

## Interactive plots

There are a few ways to achieve interactivity. We look at some of them in [`Intro_to_interactivity.ipynb`](Intro_to_interactivity.ipynb). Here's a quick example:

```
from ipywidgets import interact

@interact(t=(0, 450, 10))
def show(t):
    plt.imshow(vol[:, :, t], vmin=-ma, vmax=ma, aspect=0.5)
    plt.colorbar(shrink=0.75)
    plt.show()
```

## Seaborn... KDE plots, better scatters, and more

Unfortunately, there's no density plot built into `matplotlib`, but the plotting library `seaborn` does have one. (So does `pandas`.) Let's look again at [distributions using `seaborn`](https://seaborn.pydata.org/tutorial/distributions.html).
``` import seaborn as sns sns.kdeplot(dtn) ``` We can change the bandwidth of the Gaussian: ``` sns.kdeplot(dtn, label="Default") sns.kdeplot(dtn, bw=1, label="bw: 1") sns.kdeplot(dtn, bw=10, label="bw: 10") plt.legend(); sns.distplot(dtn[2000:2250], rug=True) sns.jointplot(dt, rhob, s=2) sns.jointplot(dt, rhob, kind='kde') ```
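If seaborn isn't available, a comparable density estimate can be built directly from `scipy.stats.gaussian_kde`. This is a hedged sketch on synthetic data (so it doesn't depend on the well data above); `gen`, `xs`, and `density` are names of our own choosing:

```python
import numpy as np
from scipy.stats import gaussian_kde

gen = np.random.default_rng(0)
data = gen.normal(size=500)          # stand-in for a log curve like dtn

kde = gaussian_kde(data)             # Gaussian kernels, Scott's-rule bandwidth
xs = np.linspace(data.min(), data.max(), 200)
density = kde(xs)                    # evaluate the estimated density on a grid

# the density should integrate to roughly 1 over the sampled range
area = float(np.trapz(density, xs))
print(area)
```

Plotting `xs` against `density` with `plt.plot` gives essentially the curve that `sns.kdeplot` draws.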
```
import pandas as pd
from google.colab import drive
drive.mount('/content/gdrive', force_remount=True)

import numpy as np
import pickle
from sklearn.metrics import f1_score, precision_score, recall_score, accuracy_score

testData = pd.read_csv('/content/gdrive/MyDrive/ML_Project/Dataset/pre_standardization_test.csv')
trainData = pd.read_csv('/content/gdrive/MyDrive/ML_Project/Dataset/pre_standardization_train.csv')
validData = pd.read_csv('/content/gdrive/MyDrive/ML_Project/Dataset/pre_standardization_val.csv')

from sklearn.preprocessing import MinMaxScaler
scaler = MinMaxScaler()
# Fit on the training data only, then apply the same scaling to all three splits
scaler.fit(trainData)
trainData = pd.DataFrame(scaler.transform(trainData), columns=trainData.columns)
validData = pd.DataFrame(scaler.transform(validData), columns=validData.columns)
testData = pd.DataFrame(scaler.transform(testData), columns=testData.columns)
```

# Six buckets

```
train_data_target_6k = pd.read_csv('/content/gdrive/MyDrive/ML_Project/Dataset/train_6_buckets.csv')
test_data_target_6k = pd.read_csv('/content/gdrive/MyDrive/ML_Project/Dataset/test_6_buckets.csv')
val_data_target_6k = pd.read_csv('/content/gdrive/MyDrive/ML_Project/Dataset/valid_6_buckets.csv')

testData['data_IMDBscore'] = test_data_target_6k['data_IMDBscore']
trainData['data_IMDBscore'] = train_data_target_6k['data_IMDBscore']
validData['data_IMDBscore'] = val_data_target_6k['data_IMDBscore']

train_X = trainData.drop(columns=['data_IMDBscore'])
train_Y = trainData['data_IMDBscore']
test_X = testData.drop(columns=['data_IMDBscore'])
test_Y = testData['data_IMDBscore']
val_X = validData.drop(columns=['data_IMDBscore'])
val_Y = validData['data_IMDBscore']

# Import Gaussian Naive Bayes model
from sklearn.naive_bayes import GaussianNB

# Create a Gaussian Classifier
model = GaussianNB()

# Train the model using the training sets
model.fit(train_X, train_Y)

# Predict the response for the test dataset
y_pred = model.predict(test_X)

filename = '/content/gdrive/MyDrive/SavedModels/GNB_6_'
filename = filename + ".sav"
pickle.dump(model, open(filename, 'wb'))

trainpred = model.predict(train_X)
valpred = model.predict(val_X)
testpred = model.predict(test_X)

train_f1_score =
f1_score(train_Y, trainpred, average='weighted') train_precision_score = precision_score(train_Y, trainpred, average='weighted') train_recall_score = recall_score(train_Y, trainpred, average='weighted') train_accuracy_score = accuracy_score(train_Y, trainpred, normalize=True) print("train_f1_score "+str(train_f1_score) ) print("train_precision_score "+str(train_precision_score)) print("train_recall_score "+str(train_recall_score)) print("train_accuracy_score "+str(train_accuracy_score)) val_f1_score = f1_score(val_Y, valpred, average='weighted') val_precision_score = precision_score(val_Y, valpred, average='weighted') val_recall_score = recall_score(val_Y, valpred, average='weighted') val_accuracy_score = accuracy_score(val_Y, valpred, normalize=True) print("val_f1_score "+str(val_f1_score) ) print("val_precision_score "+str(val_precision_score)) print("val_recall_score "+str(val_recall_score)) print("val_accuracy_score "+str(val_accuracy_score)) test_f1_score = f1_score(test_Y, testpred, average='weighted') test_precision_score = precision_score(test_Y, testpred, average='weighted') test_recall_score = recall_score(test_Y, testpred, average='weighted') test_accuracy_score = accuracy_score(test_Y, testpred, normalize=True) print("test_f1_score "+str(test_f1_score) ) print("test_precision_score "+str(test_precision_score)) print("test_recall_score "+str(test_recall_score)) print("test_accuracy_score "+str(test_accuracy_score)) #Import Multinomial Naive Bayes model from sklearn.naive_bayes import MultinomialNB #Create a Multinomial Classifier model = MultinomialNB() # Train the model using the training sets model.fit(train_X,train_Y) #Predict the response for test dataset y_pred = model.predict(test_X) ``` Saving Model ``` filename = '/content/gdrive/MyDrive/SavedModels/MNB_6_' filename=filename+".sav" pickle.dump(model, open(filename, 'wb')) trainpred = model.predict(train_X) valpred = model.predict(val_X) testpred = model.predict(test_X) train_f1_score = 
f1_score(train_Y, trainpred, average='weighted') train_precision_score = precision_score(train_Y, trainpred, average='weighted') train_recall_score = recall_score(train_Y, trainpred, average='weighted') train_accuracy_score = accuracy_score(train_Y, trainpred, normalize=True) print("train_f1_score "+str(train_f1_score) ) print("train_precision_score "+str(train_precision_score)) print("train_recall_score "+str(train_recall_score)) print("train_accuracy_score "+str(train_accuracy_score)) val_f1_score = f1_score(val_Y, valpred, average='weighted') val_precision_score = precision_score(val_Y, valpred, average='weighted') val_recall_score = recall_score(val_Y, valpred, average='weighted') val_accuracy_score = accuracy_score(val_Y, valpred, normalize=True) print("val_f1_score "+str(val_f1_score) ) print("val_precision_score "+str(val_precision_score)) print("val_recall_score "+str(val_recall_score)) print("val_accuracy_score "+str(val_accuracy_score)) test_f1_score = f1_score(test_Y, testpred, average='weighted') test_precision_score = precision_score(test_Y, testpred, average='weighted') test_recall_score = recall_score(test_Y, testpred, average='weighted') test_accuracy_score = accuracy_score(test_Y, testpred, normalize=True) print("test_f1_score "+str(test_f1_score) ) print("test_precision_score "+str(test_precision_score)) print("test_recall_score "+str(test_recall_score)) print("test_accuracy_score "+str(test_accuracy_score)) #Import Bernoulli Naive Bayes model from sklearn.naive_bayes import BernoulliNB #Create a Bernoulli Classifier model = BernoulliNB() # Train the model using the training sets model.fit(train_X,train_Y) #Predict the response for test dataset y_pred = model.predict(test_X) filename = '/content/gdrive/MyDrive/SavedModels/BNB_6_' filename=filename+".sav" pickle.dump(model, open(filename, 'wb')) trainpred = model.predict(train_X) valpred = model.predict(val_X) testpred = model.predict(test_X) train_f1_score = f1_score(train_Y, trainpred, 
average='weighted')
train_precision_score = precision_score(train_Y, trainpred, average='weighted')
train_recall_score = recall_score(train_Y, trainpred, average='weighted')
train_accuracy_score = accuracy_score(train_Y, trainpred, normalize=True)
print("train_f1_score " + str(train_f1_score))
print("train_precision_score " + str(train_precision_score))
print("train_recall_score " + str(train_recall_score))
print("train_accuracy_score " + str(train_accuracy_score))

val_f1_score = f1_score(val_Y, valpred, average='weighted')
val_precision_score = precision_score(val_Y, valpred, average='weighted')
val_recall_score = recall_score(val_Y, valpred, average='weighted')
val_accuracy_score = accuracy_score(val_Y, valpred, normalize=True)
print("val_f1_score " + str(val_f1_score))
print("val_precision_score " + str(val_precision_score))
print("val_recall_score " + str(val_recall_score))
print("val_accuracy_score " + str(val_accuracy_score))

test_f1_score = f1_score(test_Y, testpred, average='weighted')
test_precision_score = precision_score(test_Y, testpred, average='weighted')
test_recall_score = recall_score(test_Y, testpred, average='weighted')
test_accuracy_score = accuracy_score(test_Y, testpred, normalize=True)
print("test_f1_score " + str(test_f1_score))
print("test_precision_score " + str(test_precision_score))
print("test_recall_score " + str(test_recall_score))
print("test_accuracy_score " + str(test_accuracy_score))

import pandas as pd
from google.colab import drive
drive.mount('/content/gdrive', force_remount=True)
import numpy as np

testData = pd.read_csv('/content/gdrive/MyDrive/ML_Project/Dataset/pre_standardization_test.csv')
trainData = pd.read_csv('/content/gdrive/MyDrive/ML_Project/Dataset/pre_standardization_train.csv')
validData = pd.read_csv('/content/gdrive/MyDrive/ML_Project/Dataset/pre_standardization_val.csv')

# Min-max scaling: fit on the training data only, then apply to all three splits
from sklearn.preprocessing import MinMaxScaler
scaler = MinMaxScaler()
scaler.fit(trainData)
trainData = pd.DataFrame(scaler.transform(trainData), columns=trainData.columns)
validData = pd.DataFrame(scaler.transform(validData), columns=validData.columns)
testData = pd.DataFrame(scaler.transform(testData), columns=testData.columns)
```

# Eleven Buckets

```
train_data_target_11k = pd.read_csv('/content/gdrive/MyDrive/ML_Project/Dataset/train_11_buckets.csv')
test_data_target_11k = pd.read_csv('/content/gdrive/MyDrive/ML_Project/Dataset/test_11_buckets.csv')
val_data_target_11k = pd.read_csv('/content/gdrive/MyDrive/ML_Project/Dataset/valid_11_buckets.csv')

testData['data_IMDBscore'] = test_data_target_11k['data_IMDBscore']
trainData['data_IMDBscore'] = train_data_target_11k['data_IMDBscore']
validData['data_IMDBscore'] = val_data_target_11k['data_IMDBscore']

train_X = trainData.drop(columns=['data_IMDBscore'])
train_Y = trainData['data_IMDBscore']
test_X = testData.drop(columns=['data_IMDBscore'])
test_Y = testData['data_IMDBscore']
val_X = validData.drop(columns=['data_IMDBscore'])
val_Y = validData['data_IMDBscore']

# Import Gaussian Naive Bayes model
from sklearn.naive_bayes import GaussianNB

# Create a Gaussian Classifier
model = GaussianNB()

# Train the model using the training sets
model.fit(train_X, train_Y)

# Predict the response for the test dataset
y_pred = model.predict(test_X)

filename = '/content/gdrive/MyDrive/SavedModels/GNB_11_'
filename = filename + ".sav"
pickle.dump(model, open(filename, 'wb'))

trainpred = model.predict(train_X)
valpred = model.predict(val_X)
testpred = model.predict(test_X)

train_f1_score = f1_score(train_Y, trainpred, average='weighted')
train_precision_score = precision_score(train_Y, trainpred, average='weighted')
train_recall_score = recall_score(train_Y, trainpred, average='weighted')
train_accuracy_score = accuracy_score(train_Y, trainpred, normalize=True)
print("train_f1_score " + str(train_f1_score))
print("train_precision_score " + str(train_precision_score))
print("train_recall_score " + str(train_recall_score))
print("train_accuracy_score " + str(train_accuracy_score))

val_f1_score = f1_score(val_Y, valpred, average='weighted')
val_precision_score = precision_score(val_Y, valpred, average='weighted')
val_recall_score = recall_score(val_Y, valpred,
average='weighted') val_accuracy_score = accuracy_score(val_Y, valpred, normalize=True) print("val_f1_score "+str(val_f1_score) ) print("val_precision_score "+str(val_precision_score)) print("val_recall_score "+str(val_recall_score)) print("val_accuracy_score "+str(val_accuracy_score)) test_f1_score = f1_score(test_Y, testpred, average='weighted') test_precision_score = precision_score(test_Y, testpred, average='weighted') test_recall_score = recall_score(test_Y, testpred, average='weighted') test_accuracy_score = accuracy_score(test_Y, testpred, normalize=True) print("test_f1_score "+str(test_f1_score) ) print("test_precision_score "+str(test_precision_score)) print("test_recall_score "+str(test_recall_score)) print("test_accuracy_score "+str(test_accuracy_score)) #Import Multinomial Naive Bayes model from sklearn.naive_bayes import MultinomialNB #Create a Multinomial Classifier model = MultinomialNB() # Train the model using the training sets model.fit(train_X,train_Y) #Predict the response for test dataset y_pred = model.predict(test_X) filename = '/content/gdrive/MyDrive/SavedModels/MNB_11_' filename=filename+".sav" pickle.dump(model, open(filename, 'wb')) trainpred = model.predict(train_X) valpred = model.predict(val_X) testpred = model.predict(test_X) train_f1_score = f1_score(train_Y, trainpred, average='weighted') train_precision_score = precision_score(train_Y, trainpred, average='weighted') train_recall_score = recall_score(train_Y, trainpred, average='weighted') train_accuracy_score = accuracy_score(train_Y, trainpred, normalize=True) print("train_f1_score "+str(train_f1_score) ) print("train_precision_score "+str(train_precision_score)) print("train_recall_score "+str(train_recall_score)) print("train_accuracy_score "+str(train_accuracy_score)) val_f1_score = f1_score(val_Y, valpred, average='weighted') val_precision_score = precision_score(val_Y, valpred, average='weighted') val_recall_score = recall_score(val_Y, valpred, average='weighted') 
val_accuracy_score = accuracy_score(val_Y, valpred, normalize=True) print("val_f1_score "+str(val_f1_score) ) print("val_precision_score "+str(val_precision_score)) print("val_recall_score "+str(val_recall_score)) print("val_accuracy_score "+str(val_accuracy_score)) test_f1_score = f1_score(test_Y, testpred, average='weighted') test_precision_score = precision_score(test_Y, testpred, average='weighted') test_recall_score = recall_score(test_Y, testpred, average='weighted') test_accuracy_score = accuracy_score(test_Y, testpred, normalize=True) print("test_f1_score "+str(test_f1_score) ) print("test_precision_score "+str(test_precision_score)) print("test_recall_score "+str(test_recall_score)) print("test_accuracy_score "+str(test_accuracy_score)) #Import Bernoulli Naive Bayes model from sklearn.naive_bayes import BernoulliNB #Create a Bernoulli Classifier model = BernoulliNB() # Train the model using the training sets model.fit(train_X,train_Y) #Predict the response for test dataset y_pred = model.predict(test_X) filename = '/content/gdrive/MyDrive/SavedModels/BNB_11_' filename=filename+".sav" pickle.dump(model, open(filename, 'wb')) trainpred = model.predict(train_X) valpred = model.predict(val_X) testpred = model.predict(test_X) train_f1_score = f1_score(train_Y, trainpred, average='weighted') train_precision_score = precision_score(train_Y, trainpred, average='weighted') train_recall_score = recall_score(train_Y, trainpred, average='weighted') train_accuracy_score = accuracy_score(train_Y, trainpred, normalize=True) print("train_f1_score "+str(train_f1_score) ) print("train_precision_score "+str(train_precision_score)) print("train_recall_score "+str(train_recall_score)) print("train_accuracy_score "+str(train_accuracy_score)) val_f1_score = f1_score(val_Y, valpred, average='weighted') val_precision_score = precision_score(val_Y, valpred, average='weighted') val_recall_score = recall_score(val_Y, valpred, average='weighted') val_accuracy_score = 
accuracy_score(val_Y, valpred, normalize=True) print("val_f1_score "+str(val_f1_score) ) print("val_precision_score "+str(val_precision_score)) print("val_recall_score "+str(val_recall_score)) print("val_accuracy_score "+str(val_accuracy_score)) test_f1_score = f1_score(test_Y, testpred, average='weighted') test_precision_score = precision_score(test_Y, testpred, average='weighted') test_recall_score = recall_score(test_Y, testpred, average='weighted') test_accuracy_score = accuracy_score(test_Y, testpred, normalize=True) print("test_f1_score "+str(test_f1_score) ) print("test_precision_score "+str(test_precision_score)) print("test_recall_score "+str(test_recall_score)) print("test_accuracy_score "+str(test_accuracy_score)) ``` # TwentyOne Buckets ``` train_data_target_21k=pd.read_csv('/content/gdrive/MyDrive/ML_Project/Dataset/train_21_buckets.csv') test_data_target_21k=pd.read_csv('/content/gdrive/MyDrive/ML_Project/Dataset/test_21_buckets.csv') val_data_target_21k=pd.read_csv('/content/gdrive/MyDrive/ML_Project/Dataset/valid_21_buckets.csv') testData['data_IMDBscore']=test_data_target_21k['data_IMDBscore'] trainData['data_IMDBscore']=train_data_target_21k['data_IMDBscore'] validData ['data_IMDBscore']=val_data_target_21k['data_IMDBscore'] train_X = trainData.drop(columns=['data_IMDBscore']) train_Y = trainData['data_IMDBscore']*2 test_X = testData.drop(columns=['data_IMDBscore']) test_Y = testData['data_IMDBscore']*2 val_X=validData.drop(columns=['data_IMDBscore']) val_Y=validData['data_IMDBscore']*2 #Import Gaussian Naive Bayes model from sklearn.naive_bayes import GaussianNB #Create a Gaussian Classifier model = GaussianNB() # Train the model using the training sets model.fit(train_X,train_Y) #Predict the response for test dataset y_pred = model.predict(test_X) filename = '/content/gdrive/MyDrive/SavedModels/GNB_21_' filename=filename+".sav" pickle.dump(model, open(filename, 'wb')) trainpred = model.predict(train_X) valpred = model.predict(val_X) testpred = 
model.predict(test_X) train_f1_score = f1_score(train_Y, trainpred, average='weighted') train_precision_score = precision_score(train_Y, trainpred, average='weighted') train_recall_score = recall_score(train_Y, trainpred, average='weighted') train_accuracy_score = accuracy_score(train_Y, trainpred, normalize=True) print("train_f1_score "+str(train_f1_score) ) print("train_precision_score "+str(train_precision_score)) print("train_recall_score "+str(train_recall_score)) print("train_accuracy_score "+str(train_accuracy_score)) val_f1_score = f1_score(val_Y, valpred, average='weighted') val_precision_score = precision_score(val_Y, valpred, average='weighted') val_recall_score = recall_score(val_Y, valpred, average='weighted') val_accuracy_score = accuracy_score(val_Y, valpred, normalize=True) print("val_f1_score "+str(val_f1_score) ) print("val_precision_score "+str(val_precision_score)) print("val_recall_score "+str(val_recall_score)) print("val_accuracy_score "+str(val_accuracy_score)) test_f1_score = f1_score(test_Y, testpred, average='weighted') test_precision_score = precision_score(test_Y, testpred, average='weighted') test_recall_score = recall_score(test_Y, testpred, average='weighted') test_accuracy_score = accuracy_score(test_Y, testpred, normalize=True) print("test_f1_score "+str(test_f1_score) ) print("test_precision_score "+str(test_precision_score)) print("test_recall_score "+str(test_recall_score)) print("test_accuracy_score "+str(test_accuracy_score)) #Import Multinomial Naive Bayes model from sklearn.naive_bayes import MultinomialNB #Create a Multinomial Classifier model = MultinomialNB() # Train the model using the training sets model.fit(train_X,train_Y) #Predict the response for test dataset y_pred = model.predict(test_X) #Import scikit-learn metrics module for accuracy calculation filename = '/content/gdrive/MyDrive/SavedModels/MNB_21_' filename=filename+".sav" pickle.dump(model, open(filename, 'wb')) trainpred = model.predict(train_X) valpred = 
model.predict(val_X) testpred = model.predict(test_X) train_f1_score = f1_score(train_Y, trainpred, average='weighted') train_precision_score = precision_score(train_Y, trainpred, average='weighted') train_recall_score = recall_score(train_Y, trainpred, average='weighted') train_accuracy_score = accuracy_score(train_Y, trainpred, normalize=True) print("train_f1_score "+str(train_f1_score) ) print("train_precision_score "+str(train_precision_score)) print("train_recall_score "+str(train_recall_score)) print("train_accuracy_score "+str(train_accuracy_score)) val_f1_score = f1_score(val_Y, valpred, average='weighted') val_precision_score = precision_score(val_Y, valpred, average='weighted') val_recall_score = recall_score(val_Y, valpred, average='weighted') val_accuracy_score = accuracy_score(val_Y, valpred, normalize=True) print("val_f1_score "+str(val_f1_score) ) print("val_precision_score "+str(val_precision_score)) print("val_recall_score "+str(val_recall_score)) print("val_accuracy_score "+str(val_accuracy_score)) test_f1_score = f1_score(test_Y, testpred, average='weighted') test_precision_score = precision_score(test_Y, testpred, average='weighted') test_recall_score = recall_score(test_Y, testpred, average='weighted') test_accuracy_score = accuracy_score(test_Y, testpred, normalize=True) print("test_f1_score "+str(test_f1_score) ) print("test_precision_score "+str(test_precision_score)) print("test_recall_score "+str(test_recall_score)) print("test_accuracy_score "+str(test_accuracy_score)) #Import Bernoulli Naive Bayes model from sklearn.naive_bayes import BernoulliNB #Create a Bernoulli Classifier model = BernoulliNB() # Train the model using the training sets model.fit(train_X,train_Y) #Predict the response for test set #Import scikit-learn metrics module for accuracy calculation # Model Accuracy, how often is the classifier correct? 
filename = '/content/gdrive/MyDrive/SavedModels/BNB_21_' filename=filename+".sav" pickle.dump(model, open(filename, 'wb')) trainpred = model.predict(train_X) valpred = model.predict(val_X) testpred = model.predict(test_X) train_f1_score = f1_score(train_Y, trainpred, average='weighted') train_precision_score = precision_score(train_Y, trainpred, average='weighted') train_recall_score = recall_score(train_Y, trainpred, average='weighted') train_accuracy_score = accuracy_score(train_Y, trainpred, normalize=True) print("train_f1_score "+str(train_f1_score) ) print("train_precision_score "+str(train_precision_score)) print("train_recall_score "+str(train_recall_score)) print("train_accuracy_score "+str(train_accuracy_score)) val_f1_score = f1_score(val_Y, valpred, average='weighted') val_precision_score = precision_score(val_Y, valpred, average='weighted') val_recall_score = recall_score(val_Y, valpred, average='weighted') val_accuracy_score = accuracy_score(val_Y, valpred, normalize=True) print("val_f1_score "+str(val_f1_score) ) print("val_precision_score "+str(val_precision_score)) print("val_recall_score "+str(val_recall_score)) print("val_accuracy_score "+str(val_accuracy_score)) test_f1_score = f1_score(test_Y, testpred, average='weighted') test_precision_score = precision_score(test_Y, testpred, average='weighted') test_recall_score = recall_score(test_Y, testpred, average='weighted') test_accuracy_score = accuracy_score(test_Y, testpred, normalize=True) print("test_f1_score "+str(test_f1_score) ) print("test_precision_score "+str(test_precision_score)) print("test_recall_score "+str(test_recall_score)) print("test_accuracy_score "+str(test_accuracy_score)) ```
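The models saved above with `pickle.dump` can later be reloaded for inference without retraining. A minimal round-trip sketch (toy data and a temporary file path, both illustrative; assumes scikit-learn is installed):

```python
import os
import pickle
import tempfile

from sklearn.naive_bayes import GaussianNB

# Tiny, clearly separable toy dataset
X = [[0.0], [0.1], [0.9], [1.0]]
y = [0, 0, 1, 1]

model = GaussianNB().fit(X, y)

# Save the fitted model, then load it back from disk
path = os.path.join(tempfile.mkdtemp(), "gnb_demo.sav")
with open(path, "wb") as f:
    pickle.dump(model, f)
with open(path, "rb") as f:
    loaded = pickle.load(f)

# The reloaded model makes identical predictions
same = list(loaded.predict(X)) == list(model.predict(X))
```

Note that pickled models should only be loaded from trusted sources, since `pickle.load` can execute arbitrary code.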
github_jupyter
# Introduction to Deep Learning with PyTorch

In this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks. All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks.

## Neural Networks

Deep Learning is based on artificial neural networks, which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. These weighted inputs are summed together (a linear combination) and then passed through an activation function to get the unit's output.

<img src="assets/simple_neuron.png" width=400px>

Mathematically this looks like:

$$
\begin{align}
y &= f(w_1 x_1 + w_2 x_2 + b) \\
y &= f\left(\sum_i w_i x_i + b \right)
\end{align}
$$

With vectors this is the dot/inner product of two vectors:

$$
h = \begin{bmatrix}
x_1 \, x_2 \cdots  x_n
\end{bmatrix}
\cdot
\begin{bmatrix}
w_1 \\
w_2 \\
\vdots \\
w_n
\end{bmatrix}
$$

## Tensors

It turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, and an array with three indices is a 3-dimensional tensor (RGB color images, for example). The fundamental data structure for neural networks is the tensor, and PyTorch (as well as pretty much every other deep learning framework) is built around tensors.
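Before moving to PyTorch tensors, the single-neuron formula above can be sketched in plain Python (the input, weight, and bias values below are arbitrary, chosen just to exercise the math):

```python
import math

def activation(x):
    # Sigmoid: squashes any real number into the interval (0, 1)
    return 1 / (1 + math.exp(-x))

def neuron_output(inputs, weights, bias):
    # Weighted sum of the inputs plus the bias, passed through the activation
    return activation(sum(w * x for w, x in zip(weights, inputs)) + bias)

# y = f(0.5*0.1 + (-1.0)*0.2 + 2.0*0.3 + 0.5) = f(0.95)
y = neuron_output([0.5, -1.0, 2.0], [0.1, 0.2, 0.3], 0.5)
```

The tensor versions below do exactly this computation, only vectorized.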
<img src="assets/tensor_examples.svg" width=600px>

With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network.

```
# First, import PyTorch
import torch

def activation(x):
    """ Sigmoid activation function

        Arguments
        ---------
        x: torch.Tensor
    """
    return 1/(1+torch.exp(-x))

### Generate some data
torch.manual_seed(7) # Set the random seed so things are predictable

# Features are 5 random normal variables
features = torch.randn((1, 5))
# True weights for our data, random normal variables again
weights = torch.randn_like(features)
# and a true bias term
bias = torch.randn((1, 1))
```

Above I generated data we can use to get the output of our simple network. This is all just random for now; going forward we'll start using normal data. Going through each relevant line:

`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one.

`weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution.

Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution.

PyTorch tensors can be added, multiplied, subtracted, etc., just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though, such as GPU acceleration, which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network.

> **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.html#torch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function.

```
## Calculate the output of this network using the weights and bias tensors
```

You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs.

Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.html#torch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.html#torch.matmul), which is somewhat more complicated and supports broadcasting. If we try to do it with `features` and `weights` as they are, we'll get an error

```python
>> torch.mm(features, weights)

---------------------------------------------------------------------------
RuntimeError                              Traceback (most recent call last)
<ipython-input-13-15d592eb5279> in <module>()
----> 1 torch.mm(features, weights)

RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033
```

As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal the number of rows in the second tensor. Both `features` and `weights` have the same shape, `(1, 5)`. This means we need to change the shape of `weights` to get the matrix multiplication to work.

**Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`. If you're building neural networks, you'll be using this method often.
There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.view).

* `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)` sometimes, and sometimes a clone, as in it copies the data to another part of memory.
* `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory. Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.
* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`.

I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.

> **Exercise**: Calculate the output of our little network using matrix multiplication.

```
## Calculate the output of this network using matrix multiplication
```

### Stack them up!

That's how you can calculate the output for a single neuron. The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. The output of one layer of neurons becomes the input for the next layer. With multiple input units and output units, we now need to express the weights as a matrix.
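One way to see why reshaping fixes the size mismatch: multiplying a `(1, 5)` row by a `(5, 1)` column is just the dot product plus the bias. The sketch below shows that logic framework-agnostically in plain Python (values are arbitrary); in the notebook itself the equivalent one-liner would be `activation(torch.mm(features, weights.view(5, 1)) + bias)`.

```python
import math

def sigmoid(x):
    return 1 / (1 + math.exp(-x))

# Toy stand-ins for the (1, 5) feature and weight tensors
features = [0.2, -0.5, 1.0, 0.3, -0.1]
weights = [0.4, 0.1, -0.2, 0.7, 0.05]
bias = 0.1

# "Reshaping" weights to a (5, 1) column and matrix-multiplying the
# (1, 5) feature row by it yields exactly this weighted sum
h = sum(f * w for f, w in zip(features, weights)) + bias
y = sigmoid(h)
```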
<img src='assets/multilayer_diagram_weights.png' width=450px>

The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated

$$
\vec{h} = [h_1 \, h_2] =
\begin{bmatrix}
x_1 \, x_2 \cdots \, x_n
\end{bmatrix}
\cdot
\begin{bmatrix}
w_{11} & w_{12} \\
w_{21} & w_{22} \\
\vdots & \vdots \\
w_{n1} & w_{n2}
\end{bmatrix}
$$

The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply

$$
y = f_2 \! \left(\, f_1 \! \left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)
$$

```
### Generate some data
torch.manual_seed(7) # Set the random seed so things are predictable

# Features are 3 random normal variables
features = torch.randn((1, 3))

# Define the size of each layer in our network
n_input = features.shape[1]     # Number of input units, must match number of input features
n_hidden = 2                    # Number of hidden units
n_output = 1                    # Number of output units

# Weights for inputs to hidden layer
W1 = torch.randn(n_input, n_hidden)
# Weights for hidden layer to output layer
W2 = torch.randn(n_hidden, n_output)

# and bias terms for hidden and output layers
B1 = torch.randn((1, n_hidden))
B2 = torch.randn((1, n_output))
```

> **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`.

```
## Your solution here
```

If you did this correctly, you should see the output `tensor([[ 0.3171]])`.

The number of hidden units is a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions.

## Numpy to Torch and back

Special bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method.

```
import numpy as np
a = np.random.rand(4,3)
a

b = torch.from_numpy(a)
b

b.numpy()
```

The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well.

```
# Multiply PyTorch Tensor by 2, in place
b.mul_(2)

# Numpy array matches new values from Tensor
a
```
<a href="https://colab.research.google.com/gist/HerkTG/5f255e18611170ac204fcedb3f9d81e2/algoloader_v1-1.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>

```
#User defined parameters
host = "https://covidv3.i.tgcloud.io" #@param {type:"string"}
graphname = "MyGraph" #@param {type:"string"}
username = "tigergraph" #@param {type:"string"}
password = "tigergraph" #@param {type:"string"}
version = "3.0.5" #@param ["3.0.5", "3.0.0", "2.6.2"] {allow-input: true}
useCert = True #@param {type:"boolean"}

from IPython.utils import io
import os
from tqdm import tqdm

with tqdm(total=100) as pbar:
    with io.capture_output() as captured:
        pbar.set_description_str('Installing pyTigerGraph')
        !pip install pytigergraph
        import pyTigerGraph as tg
        pbar.update(25)

        ##Grab_GSQL_Queries_BEGIN##
        pbar.set_description_str('Fetching Algorithms')
        import os
        from glob import glob
        !git clone https://github.com/tigergraph/gsql-graph-algorithms.git
        pbar.update(25)
        os.chdir('/content/gsql-graph-algorithms/algorithms/schema-free')
        script_names = glob('*.gsql')
        pbar.update(25)
        ##Grab_GSQL_Queries_END##

        ##Establish_Connection_BEGIN##
        pbar.set_description_str('Establishing Connection')
        conn = tg.TigerGraphConnection(host=host, graphname=graphname,
                                       username=username, password=password,
                                       version=version, useCert=useCert)
        secret = conn.createSecret()
        token = conn.getToken(secret, setToken=True)
        # API_Token = conn.getToken(API_Secret, setToken=True, lifetime=None)[0]
        pbar.update(25)
        pbar.set_description_str('Connected')
        ##Establish_Connection_END##

#@title
import ipywidgets as widgets
from IPython.display import display

dropdown = widgets.Dropdown(
    options=script_names,
    description='Algo',
    disabled=False,
)
display(dropdown)

import ipywidgets as widgets
from IPython.display import display

button = widgets.Button(description="Install Query")
output = widgets.Output()

#@title
import ipywidgets as widgets
# from IPython.display import clear_output
from IPython.display import display

button = widgets.Button(description="Install Query")
output = widgets.Output()

def on_button_clicked(b):
    import re
    query = []
    query_names = []
    with open(dropdown.value, 'r') as file:
        text = file.readlines()
    for line in text:
        new_line = line
        if 'CREATE QUERY' in line:
            # Record the query name and append "FOR GRAPH <graphname>"
            # right after the parameter list
            q_name = line.find('CREATE QUERY') + 13
            query_names.append(line[q_name:].split('(')[0].strip())
            index = line.find(')')
            new_line = line[:index + 1] + f" FOR GRAPH {graphname}" + line[index + 1:]
        query.append(new_line)
    Gsql_query = "".join(query)
    for name in query_names:
        Gsql_query += ' INSTALL QUERY ' + name
    print("\nQuery is now Installing... Grab a cup of coffee (approx ~wait 1-2 mins)")
    conn.gsql(Gsql_query)
    with output:
        # clear_output()
        print('''
        (^\-==-/^)
        >\\ == //<
        :== q''p ==:
        _ .__ qp __.
        .' ) / ^--^ \\
        /\\.' /_` / )
        '\\/ ( ) \\
        |-'-/ \\^^,
        |-|--' ( `'
        |_| ) \\- |-|/
        (( )^---( ))
        sk
        ''')
        print("=========================================\nCONGRATULATIONS, YOUR QUERY IS INSTALLED!\n=========================================\n")
        print(Gsql_query)

button.on_click(on_button_clicked)
display(button, output)
```
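The string surgery inside `on_button_clicked` — appending a `FOR GRAPH` clause right after the parameter list of each `CREATE QUERY` header — can be illustrated in isolation. The query header and graph name below are made up for the example:

```python
def add_graph_clause(line, graphname):
    # Insert "FOR GRAPH <name>" right after the first closing parenthesis,
    # i.e. after the query's parameter list, mirroring the notebook's logic
    index = line.find(')')
    return line[:index + 1] + f" FOR GRAPH {graphname}" + line[index + 1:]

header = "CREATE QUERY pagerank(FLOAT damping) SYNTAX V2"
rewritten = add_graph_clause(header, "MyGraph")
# rewritten == "CREATE QUERY pagerank(FLOAT damping) FOR GRAPH MyGraph SYNTAX V2"
```

Note this relies on the first `)` in the line closing the parameter list, which holds for simple headers like the one above but would misfire if a parameter's default value contained parentheses.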
```
from __future__ import print_function, division

from keras.datasets import mnist
from keras.layers import Input, Dense, Reshape, Flatten, Dropout
from keras.layers import BatchNormalization, Activation, ZeroPadding2D
from keras.layers.advanced_activations import LeakyReLU
from keras.layers.convolutional import UpSampling2D, Conv2D
from keras.models import Sequential, Model
from keras.optimizers import RMSprop
import keras.backend as K

import matplotlib.pyplot as plt
import sys
import numpy as np


class WGAN():
    def __init__(self):
        self.img_rows = 28
        self.img_cols = 28
        self.channels = 1
        self.img_shape = (self.img_rows, self.img_cols, self.channels)
        self.latent_dim = 100

        # Following parameter and optimizer set as recommended in paper
        self.n_critic = 5
        self.clip_value = 0.01
        optimizer = RMSprop(lr=0.00005)

        # Build and compile the critic
        self.critic = self.build_critic()
        self.critic.compile(loss=self.wasserstein_loss,
                            optimizer=optimizer,
                            metrics=['accuracy'])

        # Build the generator
        self.generator = self.build_generator()

        # The generator takes noise as input and generates imgs
        z = Input(shape=(100,))
        img = self.generator(z)

        # For the combined model we will only train the generator
        self.critic.trainable = False

        # The critic takes generated images as input and determines validity
        valid = self.critic(img)

        # The combined model (stacked generator and critic)
        self.combined = Model(z, valid)
        self.combined.compile(loss=self.wasserstein_loss,
                              optimizer=optimizer,
                              metrics=['accuracy'])

    def wasserstein_loss(self, y_true, y_pred):
        return K.mean(y_true * y_pred)

    def build_generator(self):
        model = Sequential()

        model.add(Dense(128 * 7 * 7, activation="relu", input_dim=self.latent_dim))
        model.add(Reshape((7, 7, 128)))
        model.add(UpSampling2D())
        model.add(Conv2D(128, kernel_size=4, padding="same"))
        model.add(BatchNormalization(momentum=0.8))
        model.add(Activation("relu"))
        model.add(UpSampling2D())
        model.add(Conv2D(64, kernel_size=4, padding="same"))
        model.add(BatchNormalization(momentum=0.8))
        model.add(Activation("relu"))
        model.add(Conv2D(self.channels, kernel_size=4, padding="same"))
        model.add(Activation("tanh"))

        model.summary()

        noise = Input(shape=(self.latent_dim,))
        img = model(noise)

        return Model(noise, img)

    def build_critic(self):
        model = Sequential()

        model.add(Conv2D(16, kernel_size=3, strides=2, input_shape=self.img_shape, padding="same"))
        model.add(LeakyReLU(alpha=0.2))
        model.add(Dropout(0.25))
        model.add(Conv2D(32, kernel_size=3, strides=2, padding="same"))
        model.add(ZeroPadding2D(padding=((0,1),(0,1))))
        model.add(BatchNormalization(momentum=0.8))
        model.add(LeakyReLU(alpha=0.2))
        model.add(Dropout(0.25))
        model.add(Conv2D(64, kernel_size=3, strides=2, padding="same"))
        model.add(BatchNormalization(momentum=0.8))
        model.add(LeakyReLU(alpha=0.2))
        model.add(Dropout(0.25))
        model.add(Conv2D(128, kernel_size=3, strides=1, padding="same"))
        model.add(BatchNormalization(momentum=0.8))
        model.add(LeakyReLU(alpha=0.2))
        model.add(Dropout(0.25))
        model.add(Flatten())
        model.add(Dense(1))

        model.summary()

        img = Input(shape=self.img_shape)
        validity = model(img)

        return Model(img, validity)

    def train(self, epochs, batch_size=128, sample_interval=50):
        # Load the dataset
        (X_train, _), (_, _) = mnist.load_data()

        # Rescale -1 to 1
        X_train = (X_train.astype(np.float32) - 127.5) / 127.5
        X_train = np.expand_dims(X_train, axis=3)

        # Adversarial ground truths
        valid = -np.ones((batch_size, 1))
        fake = np.ones((batch_size, 1))

        for epoch in range(epochs):
            for _ in range(self.n_critic):
                # ---------------------
                #  Train Discriminator
                # ---------------------

                # Select a random batch of images
                idx = np.random.randint(0, X_train.shape[0], batch_size)
                imgs = X_train[idx]

                # Sample noise as generator input
                noise = np.random.normal(0, 1, (batch_size, self.latent_dim))

                # Generate a batch of new images
                gen_imgs = self.generator.predict(noise)

                # Train the critic
                d_loss_real = self.critic.train_on_batch(imgs, valid)
                d_loss_fake = self.critic.train_on_batch(gen_imgs, fake)
                d_loss = 0.5 * np.add(d_loss_fake, d_loss_real)

                # Clip critic weights
                for l in self.critic.layers:
                    weights = l.get_weights()
                    weights = [np.clip(w, -self.clip_value, self.clip_value) for w in weights]
                    l.set_weights(weights)

            # ---------------------
            #  Train Generator
            # ---------------------

            g_loss = self.combined.train_on_batch(noise, valid)

            # Plot the progress
            print("%d [D loss: %f] [G loss: %f]" % (epoch, 1 - d_loss[0], 1 - g_loss[0]))

            # If at save interval => save generated image samples
            if epoch % sample_interval == 0:
                self.sample_images(epoch)

    def sample_images(self, epoch):
        r, c = 5, 5
        noise = np.random.normal(0, 1, (r * c, self.latent_dim))
        gen_imgs = self.generator.predict(noise)

        # Rescale images 0 - 1
        gen_imgs = 0.5 * gen_imgs + 1

        fig, axs = plt.subplots(r, c)
        cnt = 0
        for i in range(r):
            for j in range(c):
                axs[i,j].imshow(gen_imgs[cnt, :,:,0], cmap='gray')
                axs[i,j].axis('off')
                cnt += 1
        fig.savefig("images/mnist_%d.png" % epoch)
        plt.close()


wgan = WGAN()
wgan.train(epochs=4000, batch_size=32, sample_interval=50)

import matplotlib.pyplot as plt
import numpy as np
from keras.datasets import mnist

(X_train, _), (_, _) = mnist.load_data()

# Rescale -1 to 1
X_train = (X_train.astype(np.float32) - 127.5) / 127.5
X_train = np.expand_dims(X_train, axis=3)

noise = np.random.normal(0, 1, 100)
img = wgan.generator.predict(np.array([noise]))[0]
print(img.shape)
plt.imshow(img[:,:,0])

wgan.critic.predict(np.array([img]))[0]

wgan.critic.predict(np.array([X_train[85]]))

plt.imshow(X_train[85,:,:,0])
```
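The critic's weight-clipping loop is the defining constraint of the original WGAN: after each critic update, every weight is forced into a small interval so the critic stays (approximately) Lipschitz-bounded. Isolated with NumPy only, using the same bound as the notebook's `clip_value` of 0.01 (the weight arrays below are made-up stand-ins for one layer's kernel and bias):

```python
import numpy as np

clip_value = 0.01

# Stand-ins for one layer's weight arrays (kernel and bias)
weights = [np.array([0.5, -0.3, 0.004]), np.array([-2.0, 0.002])]

# Element-wise clipping into [-clip_value, clip_value], as done for
# every critic layer after each training step
clipped = [np.clip(w, -clip_value, clip_value) for w in weights]
```

Values already inside the interval (like 0.004 above) pass through unchanged; everything else is saturated to ±0.01.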
github_jupyter
# Capital Allocation Problem ## Author: Snigdhayan Mahanta In a large corporation the `capital allocation problem` is one of the biggest challenges for the corporate decision-makers. A `corporation` consists of several `business units`. From a high level perspective a corporation can choose to deploy its financial resources in the following different ways: 1. organic growth 2. M&A and portfolio diversification 3. debt reduction 4. shareholder dividends 5. share buyback In this notebook I will focus solely on the organic growth option. A `business cycle` is a strategy execution period based on a fixed capital allocation. There are two extreme ways to allocate capital for organic growth: 1. `Inertial Corporation` - at the beginning of each business cycle the business units allocate their own capital according to the growth forecasts (each business unit allocates a fraction of its own capital into organic growth; higher growth forecast would imply higher probability of capital allocation toward organic growth) 2. `Dynamic Corporation` - at the beginning of each business cycle the corporation reallocates capital between the business units solely based on the growth forecasts of the business units (the corporation can reallocate capital from one business unit to another according to the growth forecasts of the individual business units). I created an instance of an `Inertial Corporation` and an instance of a `Dynamic Corporation`. They both have identical organizational and financial structures but there is a slight variation in their business operations. The market periodically (at the beginning of each business cycle) updates the growth forecasts for their business units identically. Based on these assumptions I simulated and evaluated their financial performances across several business cycles mainly by tracking two financial metrics - `profits` and `profit margin`. 
The metric `capital held` that I estimated below can be taken as an indicator for the capital that the corporation has decided to allocate for other purposes like inorganic growth. My analysis disregards the opportunity cost of inorganic growth at this point. For a more comprehensive analysis from the growth perspective one must also take the inorganic growth option into account. ``` from typing import Sequence import copy import numpy as np import matplotlib.pyplot as plt # Global parameters n_BusinessUnits = 6 # no. of business units in the corporations Forecast_options = [-1, 0, 1] # -1 = Dispose, 0 = Maintain, 1 = Grow # Class definition of 'Business Unit' class BusinessUnit: ''' a business unit has its own P&L responsibility ''' def __init__(self, label: str, forecast: int, capital: float, revenues: float, expenses: float) -> None: self.label = label self.forecast = forecast self.capital = capital self.revenues = revenues self.expenses = expenses @property def profits(self) -> float: return self.revenues - self.expenses def update_forecast(self, forecast: int) -> None: self.forecast = forecast # Class definition of 'Corporation' class Corporation: ''' a corporation consists of multiple business units with aggregated P&L ''' def __init__(self, label: str, BusinessUnits: Sequence[BusinessUnit]) -> None: self.label = label self.BusinessUnits = BusinessUnits @property def capital(self) -> float: return sum([BusinessUnit.capital for BusinessUnit in self.BusinessUnits]) @property def revenues(self) -> float: return sum([BusinessUnit.revenues for BusinessUnit in self.BusinessUnits]) @property def expenses(self) -> float: return sum([BusinessUnit.expenses for BusinessUnit in self.BusinessUnits]) @property def profits(self) -> float: return sum([BusinessUnit.profits for BusinessUnit in self.BusinessUnits]) @property def profit_margin(self) -> float: return (self.profits/self.revenues)*100 def operate(self, cycle_length:int) -> None: # one iteration of business 
operations for _ in range(cycle_length): for BusinessUnit in self.BusinessUnits: operational_change = np.random.choice([0, 1], p=[0.2, 0.8]) if (operational_change == 1): organic1 = np.random.choice([0.3, 0.4, 0.5], p=[0.3, 0.4, 0.3]) organic2 = np.random.choice([0.3, 0.4, 0.5], p=[0.7, 0.2, 0.1]) delta1 = np.random.choice([0, 0.05, 0.1, 0.15, 0.2]) delta2 = np.random.choice([0, 0.01, 0.02]) new_revenues = BusinessUnit.revenues + delta1*BusinessUnit.forecast*organic1*BusinessUnit.capital if (new_revenues > 0): BusinessUnit.revenues = new_revenues BusinessUnit.capital += -organic1*BusinessUnit.capital new_expenses = BusinessUnit.expenses - organic2*BusinessUnit.capital + delta2*BusinessUnit.expenses if (new_expenses > 0): BusinessUnit.expenses = new_expenses BusinessUnit.capital += -organic2*BusinessUnit.capital class Inertial_Corporation(Corporation): ''' incremental capital allocation within business unit ensuring business continuity ''' def allocate_capital(self) -> float: redeployed_capital = 0 for BusinessUnit in self.BusinessUnits: if (BusinessUnit.profits > 0): if (BusinessUnit.forecast == -1): fraction = np.random.choice(range(10, 30))/100 if (BusinessUnit.forecast == 0): fraction = np.random.choice(range(30, 50))/100 if (BusinessUnit.forecast == 1): fraction = np.random.choice(range(50, 70))/100 added_capital = fraction*BusinessUnit.profits BusinessUnit.capital += added_capital redeployed_capital += added_capital return redeployed_capital class Dynamic_Corporation(Corporation): ''' cross business unit capital reallocation according to growth forecasts ''' def allocate_capital(self) -> float: BU_forecasts = [BusinessUnit.forecast for BusinessUnit in self.BusinessUnits] redeployable_capital = self.profits redeployed_capital = 0 allocation = [] for BusinessUnit in self.BusinessUnits: if (BusinessUnit.forecast == -1): reallocation = (np.random.choice(range(30, 40))/100)*BusinessUnit.capital redeployable_capital += reallocation BusinessUnit.capital += 
-reallocation if (redeployable_capital > 0): for BusinessUnit in self.BusinessUnits: if (BusinessUnit.forecast == -1): allocation.append(np.random.choice(range(10, 30))/100) if (BusinessUnit.forecast == 0): allocation.append(np.random.choice(range(30, 50))/100) if (BusinessUnit.forecast == 1): allocation.append(np.random.choice(range(50, 70))/100) allocation = (np.random.choice(range(10, 80))/100)*(np.array(allocation)/sum(allocation)) redeployed_capital = sum(allocation)*redeployable_capital i = 0 for BusinessUnit in self.BusinessUnits: BusinessUnit.capital += allocation[i]*redeployable_capital i += 1 return redeployed_capital # The market updates the growth forecasts of the business units (external factor) def update_market_forecasts(Corporations: Sequence[Corporation]) -> None: BU_pairs = zip(Corporations[0].BusinessUnits, Corporations[1].BusinessUnits) for BU_pair in BU_pairs: change_forecast = np.random.choice([0, 1], p=[0.8, 0.2]) if (change_forecast == 1): new_forecast = np.random.choice(Forecast_options) BU_pair[0].update_forecast(new_forecast) BU_pair[1].update_forecast(new_forecast) # Utility function to create a list of business units def create_BU_list(n_BusinessUnits: int) -> Sequence[BusinessUnit]: BusinessUnits = [] for i in range(1, n_BusinessUnits+1): label = "BU_"+str(i) # label is a simple enumeration of the business units forecast = np.random.choice(Forecast_options, p=[0.3, 0.4, 0.3]) capital = np.random.choice(a=range(10000, 20000)) revenues = np.random.choice(a=range(30000, 50000)) expenses = np.random.choice(a=range(30000, 40000)) BusinessUnits.append(BusinessUnit(label, forecast, capital, revenues, expenses)) return BusinessUnits # Utility function to create a pair of corporations with identical structures def create_corp_pair(n_BusinessUnits: int) -> Sequence[Corporation]: BU_list1 = create_BU_list(n_BusinessUnits) Corporation1 = Inertial_Corporation("Inertial Corporation", BU_list1) BU_list2 = copy.deepcopy(BU_list1) Corporation2 =
Dynamic_Corporation("Dynamic Corporation", BU_list2) return [Corporation1, Corporation2] # Create a pair of corporations Corporation1, Corporation2 = create_corp_pair(n_BusinessUnits) # The initial financial metrics BU_forecasts1 = [BusinessUnit.forecast for BusinessUnit in Corporation1.BusinessUnits] BU_capitals1 = [BusinessUnit.capital for BusinessUnit in Corporation1.BusinessUnits] BU_revenues1 = [BusinessUnit.revenues for BusinessUnit in Corporation1.BusinessUnits] BU_expenses1 = [BusinessUnit.expenses for BusinessUnit in Corporation1.BusinessUnits] BU_profits1 = [BusinessUnit.profits for BusinessUnit in Corporation1.BusinessUnits] corporation_profits1 = Corporation1.profits corporation_profit_margin1 = Corporation1.profit_margin BU_forecasts2 = [BusinessUnit.forecast for BusinessUnit in Corporation2.BusinessUnits] BU_capitals2 = [BusinessUnit.capital for BusinessUnit in Corporation2.BusinessUnits] BU_revenues2 = [BusinessUnit.revenues for BusinessUnit in Corporation2.BusinessUnits] BU_expenses2 = [BusinessUnit.expenses for BusinessUnit in Corporation2.BusinessUnits] BU_profits2 = [BusinessUnit.profits for BusinessUnit in Corporation2.BusinessUnits] corporation_profits2 = Corporation2.profits corporation_profit_margin2 = Corporation2.profit_margin # Simulate business cycles n_cycles = 10 # no. of business cycles cycle_length = 5 # no. 
of years in a business cycle corporation1_capital = [] corporation1_redeployed_capital = [] corporation1_profits = [] corporation1_profit_margin = [] corporation2_capital = [] corporation2_redeployed_capital = [] corporation2_profits = [] corporation2_profit_margin = [] for _ in range(n_cycles): #allocate capital capital1 = Corporation1.allocate_capital() corporation1_capital.append(Corporation1.capital) corporation1_redeployed_capital.append(capital1) capital2 = Corporation2.allocate_capital() corporation2_capital.append(Corporation2.capital) corporation2_redeployed_capital.append(capital2) # operate business Corporation1.operate(cycle_length) corporation1_profits.append(Corporation1.profits) corporation1_profit_margin.append(Corporation1.profit_margin) Corporation2.operate(cycle_length) corporation2_profits.append(Corporation2.profits) corporation2_profit_margin.append(Corporation2.profit_margin) # the market adjusts the growth forecasts for the next business cycle update_market_forecasts([Corporation1, Corporation2]) # Visualize the changes in business unit capitals of 'Dynamic Corporation' labels = [BusinessUnit.label for BusinessUnit in Corporation2.BusinessUnits] x = np.arange(len(labels)) # the label locations y1 = BU_capitals2 y2 = BU_revenues2 y3 = BU_expenses2 width = 0.35 # the width of the bars fig, ax = plt.subplots(figsize=(15, 8), dpi=80, facecolor='w', edgecolor='k') rects1 = ax.bar(x - width/2, y1, width, label='Capital') rects2 = ax.bar(x, y2, width, label='Revenues', alpha=0.8) rects3 = ax.bar(x + width/2, y3, width, label='Expenses', alpha=0.5) # Add some text for labels, title and custom x-axis tick labels, etc. 
ax.set_xlabel(F'Business Unit') ax.set_ylabel(F'Financial Structure') ax.set_title(F'Initial Capital, Revenues and Expenses of Business Units') ax.set_xticks(x) ax.set_xticklabels(labels) legends = [F'Capital', F'Revenues', F'Expenses'] ax.legend(legends, loc='upper right') ax.margins(y=0.1) plt.show() # Pie charts of growth forecasts of business units - initial vs. current fig, (ax1,ax2) = plt.subplots(1, 2, figsize=(18,12)) # Pie chart before trading period labels = ["Dispose", "Maintain", "Grow"] sizes1 = np.histogram(BU_forecasts1, bins=len(Forecast_options))[0] ax1.pie(sizes1, labels=labels, autopct='%1.1f%%', shadow=True, startangle=90, normalize=True) ax1.axis('equal') # Equal aspect ratio ensures that pie is drawn as a circle. ax1.set_title(F'Initial Forecasts of Business Units', size=20) # Pie chart after trading period Corporation_forecasts = [BusinessUnit.forecast for BusinessUnit in Corporation1.BusinessUnits] sizes2 = np.histogram(Corporation_forecasts, bins=len(Forecast_options))[0] ax2.pie(sizes2, labels=labels, autopct='%1.1f%%', shadow=True, startangle=90, normalize=True) ax2.axis('equal') # Equal aspect ratio ensures that pie is drawn as a circle. ax2.set_title(F'Current Forecasts of Business Units', size=20) plt.show() # Visualize the comparison of the two capital allocation strategies - capital held plt.figure(figsize=(15, 5), dpi=80, facecolor='w', edgecolor='k') # Plot the points # x-axis values x = ["Cycle " + str(i) for i in range(1, n_cycles+1)] # y-axis values y1 = corporation1_capital plt.plot(x, y1) # x-axis values # x = ["Cycle " + str(i) for i in range(1, n_cycles+1)] # y-axis values y2 = corporation2_capital plt.plot(x, y2) # x-axis label plt.xlabel(F'Business Cycles') # y-axis label plt.ylabel(F'Capital Held') # Title plt.title(F'Capital Held Across Business Cycles - {Corporation1.label} vs. 
{Corporation2.label}') legends = [F'{Corporation1.label} Average Capital Held = {round(sum(y1)/len(y1), 2)}', F'{Corporation2.label} Average Capital Held = {round(sum(y2)/len(y2), 2)}'] plt.legend(legends, loc='upper right') plt.margins(y=0.2) plt.show() # Visualize the comparison of the two capital allocation strategies - capital deployed plt.figure(figsize=(15, 5), dpi=80, facecolor='w', edgecolor='k') # Plot the points # x-axis values x = ["Cycle " + str(i) for i in range(1, n_cycles+1)] # y-axis values y1 = corporation1_redeployed_capital plt.plot(x, y1) # x-axis values # x = ["Cycle " + str(i) for i in range(1, n_cycles+1)] # y-axis values y2 = corporation2_redeployed_capital plt.plot(x, y2) # x-axis label plt.xlabel(F'Business Cycles') # y-axis label plt.ylabel(F'Capital Deployed') # Title plt.title(F'Capital Deployed Across Business Cycles - {Corporation1.label} vs. {Corporation2.label}') legends = [F'{Corporation1.label} Cumulative Capital Deployed = {round(sum(y1), 2)}', F'{Corporation2.label} Cumulative Capital Deployed = {round(sum(y2), 2)}'] plt.legend(legends, loc='upper right') plt.margins(y=0.2) plt.show() # Visualize the comparison of the two capital allocation strategies - profits plt.figure(figsize=(15, 5), dpi=80, facecolor='w', edgecolor='k') # Plot the points # x-axis values x = ["Cycle " + str(i) for i in range(1, n_cycles+1)] # y-axis values y1 = corporation1_profits plt.plot(x, y1) # x-axis values # x = ["Cycle " + str(i) for i in range(1, n_cycles+1)] # y-axis values y2 = corporation2_profits plt.plot(x, y2) # x-axis label plt.xlabel(F'Business Cycles') # y-axis label plt.ylabel(F'Profits') # Title plt.title(F'Profits Across Business Cycles - {Corporation1.label} vs. 
{Corporation2.label}') legends = [F'{Corporation1.label} Average Profits = {round(sum(y1)/len(y1), 2)}', F'{Corporation2.label} Average Profits = {round(sum(y2)/len(y2), 2)}'] plt.legend(legends, loc='upper right') plt.margins(y=0.2) plt.show() # Visualize the comparison of the two capital allocation strategies - profit margins plt.figure(figsize=(15, 5), dpi=80, facecolor='w', edgecolor='k') # Plot the points # x-axis values x = ["Cycle " + str(i) for i in range(1, n_cycles+1)] # y-axis values y1 = corporation1_profit_margin plt.plot(x, y1) # x-axis values # x = ["Cycle " + str(i) for i in range(1, n_cycles+1)] # y-axis values y2 = corporation2_profit_margin plt.plot(x, y2) # x-axis label plt.xlabel(F'Business Cycles') # y-axis label plt.ylabel(F'Profit Margin') # Title plt.title(F'Profit Margins Across Business Cycles - {Corporation1.label} vs. {Corporation2.label}') legends = [F'{Corporation1.label} Average Profit Margin = {round(sum(y1)/len(y1), 2)}', F'{Corporation2.label} Average Profit Margin = {round(sum(y2)/len(y2), 2)}'] plt.legend(legends, loc='upper right') plt.margins(y=0.2) plt.show() # Comparison of the initial business unit revenues and profits labels = [BusinessUnit.label for BusinessUnit in Corporation1.BusinessUnits] x = np.arange(len(labels)) # the label locations y1 = BU_revenues1 y2 = BU_profits1 width = 0.35 # the width of the bars fig, ax = plt.subplots(figsize=(15, 8), dpi=80, facecolor='w', edgecolor='k') rects1 = ax.bar(x - width/2, y1, width, label='{Corporation1.label}') rects2 = ax.bar(x + width/2, y2, width, label='{Corporation2.label}') # Add some text for labels, title and custom x-axis tick labels, etc. 
ax.set_xlabel(F'Business Unit') ax.set_ylabel(F'Business Unit Revenues and Profits') ax.set_title(F'Comparison of Initial Business Unit Revenues and Profits') ax.set_xticks(x) ax.set_xticklabels(labels) legends = [F'Cumulative Revenues = {round(sum(y1), 2)}', F'Cumulative Profits = {round(sum(y2), 2)}'] ax.legend(legends, loc='upper right') ax.margins(y=0.1) plt.show() # Compare the current business unit revenues between the two corporations labels = [BusinessUnit.label for BusinessUnit in Corporation1.BusinessUnits] x = np.arange(len(labels)) # the label locations y1 = [BusinessUnit.revenues for BusinessUnit in Corporation1.BusinessUnits] y2 = [BusinessUnit.revenues for BusinessUnit in Corporation2.BusinessUnits] width = 0.35 # the width of the bars fig, ax = plt.subplots(figsize=(15, 8), dpi=80, facecolor='w', edgecolor='k') rects1 = ax.bar(x - width/2, y1, width, label='{Corporation1.label}') rects2 = ax.bar(x + width/2, y2, width, label='{Corporation2.label}') # Add some text for labels, title and custom x-axis tick labels, etc. 
ax.set_xlabel(F'Business Unit') ax.set_ylabel(F'Business Unit Revenues') ax.set_title(F'Comparison of Current Business Unit Revenues') ax.set_xticks(x) ax.set_xticklabels(labels) legends = [F'{Corporation1.label} Revenues = {round(Corporation1.revenues, 2)}', F'{Corporation2.label} Revenues = {round(Corporation2.revenues, 2)}'] ax.legend(legends, loc='upper right') ax.margins(y=0.1) plt.show() # Compare the current business unit profits between the two corporations labels = [BusinessUnit.label for BusinessUnit in Corporation1.BusinessUnits] x = np.arange(len(labels)) # the label locations y1 = [BusinessUnit.profits for BusinessUnit in Corporation1.BusinessUnits] y2 = [BusinessUnit.profits for BusinessUnit in Corporation2.BusinessUnits] width = 0.35 # the width of the bars fig, ax = plt.subplots(figsize=(15, 8), dpi=80, facecolor='w', edgecolor='k') rects1 = ax.bar(x - width/2, y1, width, label='{Corporation1.label}') rects2 = ax.bar(x + width/2, y2, width, label='{Corporation2.label}') # Add some text for labels, title and custom x-axis tick labels, etc. ax.set_xlabel(F'Business Unit') ax.set_ylabel(F'Business Unit Profits') ax.set_title(F'Comparison of Current Business Unit Profits') ax.set_xticks(x) ax.set_xticklabels(labels) legends = [F'{Corporation1.label} Profits = {round(Corporation1.profits, 2)}', F'{Corporation2.label} Profits = {round(Corporation2.profits, 2)}'] ax.legend(legends, loc='upper right') ax.margins(y=0.1) plt.show() # How many business cycles are under consideration? n_cycles # How many business units are there in the two corporations? n_BusinessUnits # What was the initial overall profit of the two corporations? round(corporation_profits1, 2) # What is the current overall profit of 'Inertial Corporation'? round(Corporation1.profits, 2) # What is the current overall profit of 'Dynamic Corporation'? round(Corporation2.profits, 2) # What was the initial profit margin of the two corporations? 
round(corporation_profit_margin1, 2) # What is the current overall profit margin of 'Inertial Corporation'? round(Corporation1.profit_margin, 2) # What is the current overall profit margin of 'Dynamic Corporation'? round(Corporation2.profit_margin, 2) ```
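In `Dynamic_Corporation.allocate_capital`, the forecast-dependent weights are normalized so that the reallocated fractions sum to a chosen fraction of the redeployable pool; for that division to work, the weight list must be a NumPy array rather than a plain Python list. A small self-contained sketch of that normalization step — the weights and deployment ratio below are illustrative, not values from the simulation:

```python
import numpy as np

def normalized_allocation(weights, deploy_fraction):
    """Scale raw forecast weights so they sum to deploy_fraction of the pool."""
    weights = np.asarray(weights, dtype=float)  # a plain list would not support '/'
    return deploy_fraction * weights / weights.sum()

alloc = normalized_allocation([0.2, 0.4, 0.6], deploy_fraction=0.5)
print(alloc)        # per-business-unit fractions of the redeployable capital
print(alloc.sum())  # sums to exactly the chosen deployment ratio, 0.5
```

Each business unit then receives its fraction of `redeployable_capital`, so the total deployed capital is `deploy_fraction * redeployable_capital` regardless of how many units there are.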
``` import sys import numpy as np import scipy as sp import pandas as pd from scipy import ndimage import matplotlib.pyplot as plt from scipy import interpolate from scipy.interpolate import griddata from scipy.interpolate import RectBivariateSpline,bisplrep,CloughTocher2DInterpolator,interp2d N=64 M=64 L=3 vs1=3 vs2=-3 dt=0.1 g=0 vth=0.02 h=1/((N-1)**2) x=np.linspace(0,L,N) v=np.linspace(-vs1,vs1,M) xx,vv=np.meshgrid(x,v,indexing="xy") vth=0.2 f0=(1/(vth*np.sqrt(2*np.pi)))*np.exp(-0.5*(vv/vth)**2)*(1+0.3*np.cos(5*xx)) plt.imshow(f0) k=np.zeros(N) k1=np.zeros(N) while(g<14): f = interpolate.interp2d(x, v, f0, kind='cubic') for o in range(N): k[o]=x[o]-g*v[o]*dt*0.5 fnew=f(k,v) ne=np.zeros(N) dv=0.1 for i in range (0,N): ne[i]=0 for j in range (0,M-1): ne[i]+=0.5*(fnew[i][j+1]+fnew[i][j])*dv rho=ne-1 A=np.zeros((N,N)) A.fill(-1) A=(1/h)*A np.fill_diagonal(A, 2) phi=np.linalg.inv(A).dot(rho) EL=np.zeros(N) intphi=interpolate.interp1d(x,phi, kind='cubic') E=np.gradient(phi,x) for o in range(N): k1[o]=v[o]+E[o]*dt f1 = interpolate.interp2d(x, v, fnew, kind='quintic') fnew1=f1(x,k1) f2 = interpolate.interp2d(x, v, fnew1, kind='quintic') for o in range(N): k1[o]=x[o]-v[o]*dt*0.5 fnew2=f2(k1,v) f0=fnew2 g=g+1 plt.imshow(f0) plt.colorbar() plt.plot(x,E) g=0 def normalize(v): norm = np.linalg.norm(v) if norm == 0: return v return v / norm def Efield(f,x,v,N,M): ne=np.zeros(N) for i in range(N): ne[i] = np.trapz(f[i,:], v) rho=ne-1 rho=np.transpose(rho) A=np.zeros((N,N)) A.fill(-1) np.fill_diagonal(A, 2) phi=np.linalg.inv(A).dot(rho) E=np.gradient(phi,x) return E def BC(f,M): f=np.transpose(f) for j in range(0,M): f[j][0]=0.5*(f[j][0]+f[j][M-1]) f[j][M-1]= f[j][0] f=np.transpose(f) return f def f0(x,v): L=2 f=(0.5/np.sqrt(vth2*np.pi)) * np.exp(-((v-vs1)*(v-vs1))/vth2) f+=(0.5/np.sqrt(vth2*np.pi)) * np.exp(-((v-vs2)*(v-vs2))/vth2)*(1+0.02*np.cos(3*np.pi*x/L)) return f dt=0.01 vth2 = 0.125 vmax=3 L=4 N=44 M=44 dx=L/N dv=vmax/M x=np.linspace(0,L,N) v=np.linspace(-vmax,vmax,M)
vs1 = 1.6 vs2 = -1.4 f99=np.zeros((N,M)) fnew4=np.zeros((N,M)) fnew5=np.zeros((N,M)) fnew6=np.zeros((N,M)) X,V=np.meshgrid(x,v,indexing="xy") fnew4=f0(X,V) while(g<6): grid_z2 =interp2d(v,x,fnew4,kind="cubic") for i in range(0,N): for j in range (0,M): fnew4[i][j]=grid_z2(v[i],np.abs(x[j]-0.5*v[i]*dt)) fnew4=BC(fnew4,M) E=Efield(fnew4,x,v,N,M) grid_z2 =interp2d(v,x,fnew4,kind="cubic") for i in range(0,N): for j in range (0,M): fnew4[i][j]=grid_z2(v[i]-E[j]*dt,x[j]) fnew4=BC(fnew4,M) grid_z2 =interp2d(v,x,fnew4,kind="cubic") for i in range(0,N): for j in range (0,M): fnew4[i][j]=grid_z2(v[i],np.abs(x[j]-0.5*v[i]*dt)) fnew4=BC(fnew4,M) fnew4=normalize(fnew4) g=g+1 plt.contourf(v,x,fnew4) def interp(f,x,v,dx,dv,N,M): fi=(x-0)/dx fj=(v-(-vmax))/dv if(fi<0): fi+=N-1 if(fi>N-1): fi=fi-N-1 else: if fi<0 or fi>=N-1: return 0 if fj<=0 or fj>=M-1: return 0 i=int(fi) j=int(fj) di=fi-i dj=fj-j val=(1-di)*(1-dj)*f[i][j] if i<N-1: val+=(di)*(1-dj)*f[i+1][j] if j<M-1: val+=(1-di)*(dj)*f[i][j+1] if j<M-1 and i<N-1: val+=(di)*(dj)*f[i+1][j+1] return val while(g<2): for j in range (0,N-1): for i in range (0,M-1): fnew4[i][j]=interp(f99,x[i]-0.5*dt*np.abs(v[j]),v[j],dx,dv,N,M) f99=fnew4 fnew4=BC(fnew4,M) E=Efield(fnew4,x,v,N,M) for j in range (0,N-1): for i in range (0,M-1): fnew5[i][j]=interp(fnew4,x[i],v[j]-E[i]*dt,dx,dv,N,M) fnew5=BC(fnew5,M) for j in range (0,N-1): for i in range (0,M-1): fnew6[i][j]=interp(fnew5,x[i]-0.5*dt*np.abs(v[j]),v[j],dx,dv,N,M) fnew6=BC(fnew6,M) f99=fnew6 g=g+1 ```
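The charge-density step in `Efield` is the zeroth velocity moment of the distribution, n(x) = ∫ f(x, v) dv. Note that `np.trapz` takes the integrand first and the sample points second (`np.trapz(y, x)`). A minimal sketch of that moment computation, using a Maxwellian whose velocity integral should be 1 at every x (grid sizes and the thermal speed here are illustrative):

```python
import numpy as np

vth2 = 0.125                       # thermal-speed-squared parameter
v = np.linspace(-3, 3, 401)        # velocity grid, wide enough to capture the tails
x = np.linspace(0, 4, 5)           # a handful of spatial points

# Maxwellian f(x, v), independent of x, normalized so that its v-integral is 1
f = np.tile((1 / np.sqrt(vth2 * np.pi)) * np.exp(-(v**2) / vth2), (len(x), 1))

# Zeroth moment: integrate over v for each x (integrand first, grid second)
ne = np.trapz(f, v, axis=1)
print(ne)  # approximately [1, 1, 1, 1, 1]
```

The charge density then follows as `rho = ne - 1` (ions forming a fixed neutralizing background), exactly as in the solver above.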
# Integration ``` import matplotlib.pyplot as plt import numpy as np ``` ## Contents 1.[Integral Calculus](#Integral_Calculus) 2.[Fundamental Theorem of Calculus](#Fundamental_Theorem_of_Calculus) 3.[Basic Integration](#Basic_Integration) - [Integrating powers of x](#Integrating_powers_of_x) - [Integrating other basic terms](#Integrating_other_basic_terms) 4.[Definite Integrals](#Definite_Integrals) - [Area under graph](#Area_under_graph) - [Area under graph for y axis](#Area_under_graph_for_y_axis) - [Area between lines](#Area_between_lines) - [Area between lines on y axis](#Area_between_lines_on_y_axis) <a id='Integral_Calculus'></a> ## Integral Calculus How to find area under curve between a specified x $\lim_{n\to\infty}\sum_{i=1}^n f(x_i)\Delta x_i = \int^b_a f(x)dx$ - this is the area under the graph - the left side sums as many values of y in the specified x data set and weights it with the difference in x - the right side is the integral which is 1 function which takes the range of a to b ##### This is the Definite Integral $\int f(x) dx$ ##### This is the Indefinite Integral or anti-derivative ``` x = np.linspace(-10, 10, 201) def f(x): return x**2 y = f(x) fig, ax = plt.subplots(1, figsize=(8,4)) ax.plot(x,y, 'g', label='line') ax.fill_between(x,y, color='blue', alpha=0.3, label='area under graph') ax.grid(True) ax.legend() plt.show() ``` <a id='Fundamental_Theorem_of_Calculus'></a> ## Fundamental Theorem of Calculus $f(x)$ is continuous in $[a,b]$ $F(x) = \int^x_af(t)dt$ - where $x$ is in $[a,b]$ $\frac{dF}{dx} = \frac{d}{dx}\int^x_af(t)dt = f(x)$ #### Example: $F(x) = \int^x_a\frac{\cos^2t}{-\sin t^2}dt$ $F\prime(x) = \frac{d}{dx}\int^x_a\frac{\cos^2t}{-\sin t^2}dt = \frac{\cos^2x}{-\sin x^2}$ #### Example 2: $F(x) = \int^{x^2}_a\frac{\cos^2t}{-\sin t^2}dt$ $F\prime(x) = \frac{d}{dx}\int^{x^2}_a\frac{\cos^2t}{-\sin t^2}dt$ $= \frac{\cos^2x^2}{-\sin x^4}\times \frac{d}{dx}x^2$ $= 2\frac{\cos^2x^2}{-\sin x^4}$ <a id='Basic_Integration'></a> ## Basic 
Integration <a id='Integrating_powers_of_x'></a> ### Integrating powers of x $\int Ax^ndx = \frac{A}{n+1}x^{n+1} + C$ - to find the derivative we use $\frac{d}{dx}ax^n = anx^{n-1}$ - we do the opposite with $\int ax^ndx = a\frac{1}{n+1}x^{n+1}$ - we add $C$ as we cant find out the constant of the original function #### Example $\int 2x^5dx = \frac{1}{3}x^{6} + C$ <a id='Integrating_other_basic_terms'></a> ### Integrating other basic terms #### Integrating $e^{kx}$ $\int Ae^{kx + b} dx = \frac{A}{k}e^{kx + b} + C$ - the derivative is $\frac{d}{dx}e^x = e^x$ - to differentiate, we would use the chain rule on the function of x and $\therefore$ multiply by k #### Example $\int 3e^{9x + 2} dx = \frac{1}{3}e^{9x + 2} + C$ #### Integrating $\frac{1}{x}$ $\int A\frac{n}{x} dx = An\ln x + C$ $\int A\frac{f\prime(x)}{f(x)} dx = A\ln|f(x)| + C$ - in the second rule, the top is caused by the chain rule #### Example $\int 2\frac{6}{x} dx = 12\ln x + C$ #### Example 2 $\int 2\frac{10x}{5x^2 + 3} dx = 2\ln |5x^2 + 3| + C$ #### Integrating $\sin x$ $\int A\sin(kx) dx = -A\frac{1}{k}\cos(kx) + C$ #### Example $\int 4\sin(2x) dx = -2\cos(2x) + C$ #### Integrating $\cos x$ $\int A\cos(kx) dx = A\frac{1}{k}\sin(kx) + C$ #### Example $\int 11\cos(3x) dx = \frac{11}{3}\sin(3x) + C$ <a id='Definite_Integrals'></a> ## Definite Integrals This is where there are defined boundaries on the x or y axis <a id='Area_under_graph'></a> ### Area under graph $F(x) = \int f(x)dx$ $\int_a^b f(x)dx = F(b) - F(a)$ - if the graph is negative, the area can be negative - the definite integral gives the net area - to find area (not net area), split into positive and negative regions and find sum magnitudes of regions #### Example $f(x) = 6x^2$ $F(x) = 2x^3$ $\int_2^5 f(x)dx = F(5) - F(2)$ $= 2(5)^3 - 2(2)^3$ $= 234$ <a id='Area_under_graph_for_y_axis'></a> ### Area under graph for y axis $F(y) = \int f^{-1}(y)dy$ $\int_c^d f^{-1}(y)dy = F(d) - F(c)$ - do the same but in terms of y - this includes taking the 
inverse of the line function to get a function in terms of y #### Example $f(x) = 6x^2$ $f^{-1}(y) = \left(\frac{1}{6}y\right)^{\frac{1}{2}}$ $F(y) = 4\left(\frac{1}{6}y\right)^{\frac{3}{2}}$ $\int_2^5 f^{-1}(y)dy = F(5) - F(2)$ $= 4\left(\frac{5}{6}\right)^{\frac{3}{2}} - 4\left(\frac{1}{3}\right)^{\frac{3}{2}}$ $= 2.273$ <a id='Area_between_lines'></a> ### Area between lines $\int_a^b(f(x) - g(x))dx = \int_a^bf(x)dx - \int_a^bg(x)dx$ #### Example $= \int_0^1(\sqrt{x} - x^2)dx$ $= \left(\frac{2}{3}x^{\frac{3}{2}} - \frac{x^3}{3}\right)\mid^1_0$ $= \left(\frac{2}{3}1^{\frac{3}{2}} - \frac{1^3}{3}\right) - \left(\frac{2}{3}0^{\frac{3}{2}} - \frac{0^3}{3}\right)$ $= \left(\frac{2}{3} - \frac{1}{3}\right)$ $= \left(\frac{1}{3}\right)$ - if more lines, separate into sections on the x axis and sum <a id='Area_between_lines_on_y_axis'></a> ### Area between lines on y axis This works the same as area under graph on y axis but combined with the area between lines method ``` x = np.linspace(-5, 5, 201) def f(x): return 6*x**2 - 20 def F(x): return 2*x**3 - 20*x y = f(x) start = 60 end = 160 section = x[start:end+1] fig, ax = plt.subplots(1, figsize=(8,4)) ax.plot(x,y, 'g', label='y = 6x^2 - 20') ax.fill_between(section,f(section), color='blue', alpha=0.3, label='area under graph') ax.plot(x[start], 0, 'om', color='purple', label='a') ax.plot(x[end], 0, 'om', color='r', label='b') ax.grid(True) ax.legend() plt.show() print('shaded net area =', F(x[end]) - F(x[start])) ```
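The definite-integral example above, $\int_2^5 6x^2\,dx = 234$, can also be checked numerically. A quick sketch comparing the antiderivative evaluation (the fundamental theorem of calculus) against a trapezoidal approximation of the same area:

```python
import numpy as np

def f(x):
    return 6 * x**2

def F(x):  # antiderivative of f
    return 2 * x**3

a, b = 2, 5
exact = F(b) - F(a)              # fundamental theorem of calculus
xs = np.linspace(a, b, 10001)
approx = np.trapz(f(xs), xs)     # numerical cross-check

print(exact)   # 234
print(approx)  # very close to 234
```

The two values agree to within the trapezoid rule's discretization error, which shrinks quadratically as the grid is refined.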
``` import numpy as np from scipy import ndimage from scipy import spatial from scipy import io from scipy import sparse from scipy.sparse import csgraph from scipy import linalg from matplotlib import pyplot as plt import seaborn as sns from skimage import data from skimage import color from skimage import img_as_float import graph3d %matplotlib inline ``` # Load data ``` image = img_as_float(data.camera()[::2, ::2]) fig, ax = plt.subplots() plt.imshow(image, cmap='gray') plt.grid('off') ax.xaxis.set_ticks([]) ax.yaxis.set_ticks([]) ax.set_title('Original image') plt.savefig('../img/tikhonov_regularization_0.pdf', bbox_inches='tight') ``` # Crop and add noise ``` image = image[40:80, 100:140] noisy_image = image + 0.05*np.random.randn(*image.shape) fig, ax = plt.subplots(1, 2, figsize=(8, 4)) ax[0].imshow(image, cmap='gray') ax[1].imshow(noisy_image, cmap='gray') ax[0].grid('off') ax[1].grid('off') ax[0].xaxis.set_ticks([]) ax[0].yaxis.set_ticks([]) ax[1].xaxis.set_ticks([]) ax[1].yaxis.set_ticks([]) ax[0].set_title('Cropped image') ax[1].set_title('Noisy image') plt.savefig('../img/tikhonov_regularization_1.pdf', bbox_inches='tight') ``` # Perform graph filtering #### Given a signal $f_0$ corrupted by Gaussian noise $\eta$ \begin{equation} \mathbf{y} = \mathbf{f_0} + \mathbf{\eta} \end{equation} #### Solve the regularization problem \begin{equation} \underset{f}{\text{argmin}} \{ ||f - y||_2^2 + \gamma f^T L f\} \end{equation} #### Solution is given by \begin{equation} f_{*}(i) = \sum_{l=0}^{N-1} \bigg[ \frac{1}{1 + \gamma \lambda_l} \bigg] \hat{y} (\lambda_l) u_l(i) \end{equation} #### Or equivalently \begin{equation} \mathbf{f} = \hat{h}(L) \mathbf{y} \end{equation} #### Where L is the laplacian of the adjacency matrix defined by: \begin{equation} W_{i,j} = \begin{cases} \exp \bigg( - \frac{[dist(i, j)]^2}{2 \theta^2} \bigg) & \text{if $dist(i,j)$ < $\kappa$} \\ 0 & \text{otherwise} \end{cases} \end{equation} ``` # Parameters kappa = np.sqrt(2) theta = 20 gamma 
= 10 # Query neighboring pixels for each pixel yx = np.vstack(np.dstack(np.indices(noisy_image.shape))) tree = spatial.cKDTree(yx) q = tree.query_ball_point(yx, kappa) # Get pixels I, and neighbors J I = np.concatenate([np.repeat(k, len(q[k])) for k in range(len(q))]) J = np.concatenate(q) # Distance metric is difference between neighboring pixels dist_ij = np.sqrt(((noisy_image.flat[I] - noisy_image.flat[J])**2)) # Thresholded Gaussian kernel weighting function W = np.exp(- ((dist_ij)**2 / 2*(theta**2)) ) # Construct sparse adjacency matrix A = sparse.lil_matrix((noisy_image.size, noisy_image.size)) for i, j, w in zip(I, J, W): A[i, j] = w A[j, i] = w A = A.todense() # Compute Laplacian L = csgraph.laplacian(A) # Compute eigenvalues and eigenvectors of laplacian l, u = linalg.eigh(L) # Compute filtering kernel h = u @ np.diag(1 / (1 + gamma*l)) @ u.T # Filter the image using the kernel graph_filtered_image = (h @ noisy_image.ravel()).reshape(noisy_image.shape) # Filter the image using traditional gaussian filtering traditional_filtered_image = ndimage.gaussian_filter(noisy_image, 0.8) # Plot the result fig, ax = plt.subplots(2, 2, figsize=(6, 6)) ax.flat[0].imshow(image, cmap='gray') ax.flat[1].imshow(noisy_image, cmap='gray') ax.flat[2].imshow(graph_filtered_image, cmap='gray') ax.flat[3].imshow(traditional_filtered_image, cmap='gray') ax.flat[0].grid('off') ax.flat[1].grid('off') ax.flat[2].grid('off') ax.flat[3].grid('off') ax.flat[0].xaxis.set_ticks([]) ax.flat[0].yaxis.set_ticks([]) ax.flat[1].xaxis.set_ticks([]) ax.flat[1].yaxis.set_ticks([]) ax.flat[2].xaxis.set_ticks([]) ax.flat[2].yaxis.set_ticks([]) ax.flat[3].xaxis.set_ticks([]) ax.flat[3].yaxis.set_ticks([]) ax.flat[0].set_title('Cropped Image') ax.flat[1].set_title('Noisy Image') ax.flat[2].set_title('Graph Filtered') ax.flat[3].set_title('Gaussian Filtered') plt.tight_layout() plt.savefig('../img/tikhonov_regularization_2.pdf', bbox_inches='tight') ```
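The kernel $\hat{h}(L) = U\,\mathrm{diag}\!\big(\tfrac{1}{1+\gamma\lambda_l}\big)\,U^T$ built above acts as a low-pass filter on the graph: the constant signal (the $\lambda = 0$ eigenvector of the Laplacian) passes through with gain 1, while higher graph frequencies are attenuated by $1/(1+\gamma\lambda_l)$. A tiny sketch of the same construction on a 4-node path graph (the graph and $\gamma$ here are illustrative, not the image graph above):

```python
import numpy as np
from scipy.sparse import csgraph

# Adjacency matrix of a 4-node path graph: 0 - 1 - 2 - 3
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
L = csgraph.laplacian(A)

gamma = 10
l, u = np.linalg.eigh(L)                     # graph frequencies and modes
h = u @ np.diag(1 / (1 + gamma * l)) @ u.T   # Tikhonov filtering kernel

ones = np.ones(4)
print(h @ ones)  # ~[1, 1, 1, 1]: the DC component is preserved exactly
```

Any signal component aligned with a high-$\lambda$ eigenvector — the "noisy" directions — is shrunk toward zero, which is why applying `h` to the flattened noisy image denoises it.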
github_jupyter
``` ''' Import packages and modules from Python Standard Library and Third party libraries. ''' #Import from python standard library import os #Import from third party libraries import cv2 import glob import numpy as np import pandas as pd from sklearn.utils import shuffle from skimage.color import gray2rgb, rgb2gray from tensorflow import keras import timeit import tensorflow as tf import warnings warnings.filterwarnings("ignore") def read_boxes(): ''' required directory hierarchy: current_directory/output/file.txt ''' #Read files and get a dictionary with image 'name.png' as key and 'bounding box coordinates' as value #Each file contains name.png and bounding box coordinates in each row allBoxes = {} files = ['FallingDown_01.txt','FallingDown_02.txt','Standing_01.txt','Standing_02.txt'] path = './output/' for filename in files: boxes = {} with open(path+filename,'r') as file: for line in file: tokens = line.split(' ',1) name,box = tokens[0].strip(), tokens[1].strip() boxes[name] = box key = filename.split('.')[0] allBoxes[key]=boxes return allBoxes def prepare_data(): ''' required directory hierarchy: current_directory/output/DIRs ''' allBoxes = read_boxes() #define list to store the dataset dataset = [] #define list to store the data labels dataLabels = [] #list of main-directory names DIRs = ['FallingDown_01','FallingDown_02','Standing_01','Standing_02'] #iterate over each main-directory name for DIR in DIRs: #join path with each main-directory name path = './output/'+str(DIR) print('DIR',DIR) #Extract the path of all sub-directories inside each main-directory for directory in glob.glob(path): sub_path = os.path.join(directory+'/*.png') #extract path of all images in sub-directory image_paths = glob.glob(sub_path) #split the directory path and extract directory name labels = directory.split('\\')[-1] #Iterate over each image path to read the image for image_path in image_paths: name = os.path.basename(image_path) try: box = allBoxes[DIR][name] box_tokens = [ tk 
for tk in box.strip('[,]').split(' ') if tk != ''] x1 = box_tokens[0].strip(' ') y1 = box_tokens[1].strip(' ') x2 = box_tokens[2].strip(' ') y2 = box_tokens[3].strip(' ') box = x1,y1,x2,y2 x1,y1,x2,y2 = int(x1),int(y1),int(x2),int(y2) img = cv2.imread(image_path, cv2.IMREAD_UNCHANGED) image = cv2.cvtColor(img.copy(), cv2.COLOR_BGR2RGB) #print(image.shape) image = image[y1:y2,x1:x2] image = cv2.resize(image,dsize=(300,300), interpolation= cv2.INTER_AREA) output_path = path+'/cropped/'+name+'.png' cv2.imwrite(output_path,cv2.cvtColor(image, cv2.COLOR_RGB2BGR)) #print(image.shape) #convert to rgb if grayscale image if len(image.shape)==2: image = gray2rgb(image) if DIR == 'FallingDown_01' or DIR == 'FallingDown_02': #append image to dataset list dataset.append(image) dataLabels.append(int(1)) elif DIR=='Standing_01' or DIR=='Standing_02': dataset.append(image) dataLabels.append(int(0)) except KeyError: print('Key Error') #convert dataset and data_labels to numpy array dataset = np.array(dataset) dataLabels = np.array(dataLabels) return dataset, dataLabels #method to rescale images def rescaleImage(image, label): image = tf.cast(image, tf.float32) image = image / 255.0 return image, label def getTestimage(dataset, dataLabels): ''' insert batch=1 in "test_dataset.map(rescaleImage).shuffle(buffer_size=1024).batch(30)" to get one image ''' #shuffle dataset and split dataset into train, test and validation sets dataset, dataLabels = shuffle(dataset, dataLabels, random_state=1236) #train_x, train_y = dataset[:1000], dataLabels[:1000] test_x, test_y = dataset[1000:1400], dataLabels[1000:1400] #valid_x, valid_y = dataset[1400:], dataLabels[1400:] #convert train, test and validation sets to tensorflow datasets test_dataset = tf.data.Dataset.from_tensor_slices((test_x, test_y)) #map test dataset to preprocessing function, shuffle, test_dataset = test_dataset.map(rescaleImage).shuffle(buffer_size=1024).batch(30) image, label = test_dataset.as_numpy_iterator().next() return
image, label def action_detection_segmentation(image, model): #inference on single image prediction = model.predict(image) # Apply a sigmoid since our model returns logits prediction = tf.nn.sigmoid(prediction) prediction = tf.where(prediction < 0.6, 0, 1) prediction = prediction.numpy().ravel() return prediction dataset, dataLabels = prepare_data() print('Dataset Shape: {0} Labels Shape: {1}'.format(dataset.shape, dataLabels.shape)) image, label = rescaleImage(dataset[0], dataLabels[0]) image2 = tf.expand_dims(image, axis=0) import time start_time = time.time() model_path="./fall_model/model_6.h5" #load model ResNetmodel = keras.models.load_model(model_path) elapsed_time = time.time() - start_time print("Iteration time: %0.4fs" % elapsed_time) import time start_time = time.time() model_path="./fall_model/model_5.h5" #load model EffNetmodel = keras.models.load_model(model_path) elapsed_time = time.time() - start_time print("Iteration time: %0.4fs" % elapsed_time) warnings.filterwarnings("ignore") start_time = time.time() # image , label = getTestimage(dataset, dataLabels) prediction = action_detection_segmentation(image2,ResNetmodel) elapsed_time = time.time() - start_time print("Iteration time: %0.4fs" % elapsed_time) warnings.filterwarnings("ignore") start_time = time.time() # image , label = getTestimage(dataset, dataLabels) prediction = action_detection_segmentation(image2,EffNetmodel) elapsed_time = time.time() - start_time print("Iteration time: %0.4fs" % elapsed_time) times = [] for i in range(30): start_time = time.time() one_prediction = action_detection_segmentation(image2, ResNetmodel) delta = (time.time() - start_time) times.append(delta) mean_delta = np.array(times).mean() fps = 1 / mean_delta print('average(sec):{:.2f},fps:{:.2f}'.format(mean_delta, fps)) times = [] for i in range(30): start_time = time.time() one_prediction = action_detection_segmentation(image2, EffNetmodel) delta = (time.time() - start_time) times.append(delta) mean_delta = np.array(times).mean()
fps = 1 / mean_delta print('average(sec):{:.2f},fps:{:.2f}'.format(mean_delta, fps)) execution = timeit.timeit(lambda: action_detection_segmentation(image2, ResNetmodel), number=30) FPS = 30/execution print('Execution time for 30 runs: {:.2f}, FPS: {:.2f}'.format(execution, FPS)) execution = timeit.timeit(lambda: action_detection_segmentation(image2, EffNetmodel), number=30) FPS = 30/execution print('Execution time for 30 runs: {:.2f}, FPS: {:.2f}'.format(execution, FPS)) ```
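The cells above benchmark inference two ways: a manual `time.time()` loop that averages per-call latency, and `timeit.timeit`, which measures the total wall time of 30 calls. Both reduce to the same FPS estimate. A minimal sketch with a stand-in workload instead of the Keras models (the `fake_inference` sleep is hypothetical):

```python
import time
import timeit

def fake_inference():
    # Stand-in for model.predict on a single image (hypothetical workload)
    time.sleep(0.01)

n_runs = 30

# Manual loop: average per-call latency, then FPS
times = []
for _ in range(n_runs):
    start = time.time()
    fake_inference()
    times.append(time.time() - start)
mean_delta = sum(times) / len(times)
fps_manual = 1.0 / mean_delta

# timeit: total wall time for n_runs calls, then FPS
execution = timeit.timeit(fake_inference, number=n_runs)
fps_timeit = n_runs / execution

print('manual FPS: {:.1f}, timeit FPS: {:.1f}'.format(fps_manual, fps_timeit))
```

`timeit` is slightly more convenient because it handles the loop and clock bookkeeping for you; the manual loop is useful when you also want the per-call distribution, not just the mean.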
# Grove Temperature sensor module --- ## Aim * This notebook illustrates how to use available APIs for the Grove Temperature sensor module on PYNQ-Z2 PMOD and Arduino interfaces. ## References * [Grove Temperature sensor](https://www.seeedstudio.com/Grove-Temperature-Sensor.html) * [Grove I2C ADC](https://www.seeedstudio.com/Grove-I2C-ADC.html) * [PYNQ Grove Adapter](https://store.digilentinc.com/pynq-grove-system-add-on-board/) * [Grove Base Shield V2.0](https://www.seeedstudio.com/Base-Shield-V2.html) ## Last revised * 01 April 2021 + Initial version --- ## Load _base_ Overlay <div class="alert alert-box alert-info"> Note that we load the base bitstream only once to use Grove module with PYNQ Grove Adapter and SEEED Grove Base Shield V2.0<br> Please make sure you run the following cell before running either of the interfaces </div> ``` from pynq.overlays.base import BaseOverlay from pynq_peripherals import ArduinoSEEEDGroveAdapter, PmodGroveAdapter base = BaseOverlay('base.bit') ``` ## Using Grove Temperature sensor with Grove Base Shield V2.0 (Arduino) <div class="alert alert-box alert-warning"><ul> <h4 class="alert-heading">Make Physical Connections </h4> <li>Insert the Grove Base Shield into the Arduino connector on the board. Connect the Grove Temperature sensor module to A1 connector of the SEEED Grove Base Shield.</li> </ul> </div> ### Adapter configuration ``` adapter = ArduinoSEEEDGroveAdapter(base.ARDUINO, A1='grove_temperature') ``` ### Define device object ``` temp_sensor = adapter.A1 ``` ### Reading from the Grove Temperature sensor ``` print('temperature: {:.2f}'.format(temp_sensor.get_temperature())) ``` ### Taking multiple samples at a desired interval and plotting Set numberOfSamples and delayInSeconds to desired values. 
Print the samples and then plot them ``` %matplotlib inline import matplotlib.pyplot as plt from time import sleep import numpy as np numberOfSamples = 20 delayInSeconds = 1 temperature = np.zeros(numberOfSamples) for i in range(numberOfSamples): temperature[i] = temp_sensor.get_temperature() sleep(delayInSeconds) for i in range(numberOfSamples): print('temperature: {:.2f}'.format(temperature[i])) plt.plot(range(numberOfSamples), temperature, 'ro') plt.title('Grove Temperature') plt.axis([0, int(len(temperature)), min(temperature)-1, max(temperature)+1]) plt.show() ``` --- ## Using Grove Temperature sensor with Grove ADC (Arduino) <div class="alert alert-box alert-warning"><ul> <h4 class="alert-heading">Make Physical Connections </h4> <li>Insert the SEEED Grove Base Shield into the Arduino connector on the board. Connect the grove_adc module to one of the connectors labeled I2C.</li> <li>Connect the Grove Temperature sensor module to the grove_adc module.</li></ul> </div> ### Adapter configuration ``` adapter = ArduinoSEEEDGroveAdapter(base.ARDUINO, I2C='grove_temperature') ``` ### Define device object ``` temp_sensor = adapter.I2C ``` ### Reading from the Grove Temperature sensor ``` print('temperature: {:.2f}'.format(temp_sensor.get_temperature())) ``` ### Taking multiple samples at a desired interval and plotting Set numberOfSamples and delayInSeconds to desired values.
Print the samples and then plot them ``` %matplotlib inline import matplotlib.pyplot as plt from time import sleep import numpy as np numberOfSamples = 20 delayInSeconds = 1 temperature = np.zeros(numberOfSamples) for i in range(numberOfSamples): temperature[i] = temp_sensor.get_temperature() sleep(delayInSeconds) for i in range(numberOfSamples): print('temperature: {:.2f}'.format(temperature[i])) plt.plot(range(numberOfSamples), temperature, 'ro') plt.title('Grove Temperature') plt.axis([0, int(len(temperature)), min(temperature)-1, max(temperature)+1]) plt.show() ``` --- ## Using Grove Temperature sensor with PYNQ Grove Adapter (PMOD) <div class="alert alert-box alert-warning"><ul> <h4 class="alert-heading">Make Physical Connections </h4> <li>Connect the PYNQ Grove Adapter to PMODB connector. Connect the grove_adc module to the G3 connector of the Adapter.</li> <li>Connect the Grove Temperature sensor module to the grove_adc module.</li></ul> </div> ### Adapter configuration ``` adapter = PmodGroveAdapter(base.PMODB, G3='grove_temperature') ``` ### Define device object ``` temp_sensor = adapter.G3 ``` ### Reading from the Grove Temperature sensor ``` print('temperature: {:.2f}'.format(temp_sensor.get_temperature())) ``` ### Taking multiple samples at a desired interval and plotting Set numberOfSamples and delayInSeconds to desired values. Print the samples and then plot them ``` %matplotlib inline import matplotlib.pyplot as plt from time import sleep import numpy as np numberOfSamples = 20 delayInSeconds = 1 temperature = np.zeros(numberOfSamples) for i in range(numberOfSamples): temperature[i] = temp_sensor.get_temperature() sleep(delayInSeconds) for i in range(numberOfSamples): print('temperature: {:.2f}'.format(temperature[i])) plt.plot(range(numberOfSamples), temperature, 'ro') plt.title('Grove Temperature') plt.axis([0, int(len(temperature)), min(temperature)-1, max(temperature)+1]) plt.show() ``` Copyright (C) 2021 Xilinx, Inc SPDX-License-Identifier: BSD-3-Clause ---- ----
``` # Copyright 2022 Google LLC # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. ``` # Vertex AI SDK for Python: AutoML training text entity extraction model for online prediction <table align="left"> <td> <a href="https://colab.research.google.com/github/GoogleCloudPlatform/vertex-ai-samples/blob/main/notebooks/official/automl/sdk_automl_text_entity_extraction_online.ipynb"> <img src="https://cloud.google.com/ml-engine/images/colab-logo-32px.png" alt="Colab logo"> Run in Colab </a> </td> <td> <a href="https://github.com/GoogleCloudPlatform/vertex-ai-samples/blob/main/notebooks/official/automl/sdk_automl_text_entity_extraction_online.ipynb"> <img src="https://cloud.google.com/ml-engine/images/github-logo-32px.png" alt="GitHub logo"> View on GitHub </a> </td> <td> <a href="https://console.cloud.google.com/vertex-ai/workbench/deploy-notebook?download_url=https://github.com/GoogleCloudPlatform/vertex-ai-samples/tree/master/notebooks/official/automl/sdk_automl_text_entity_extraction_online.ipynb"> <img src="https://lh3.googleusercontent.com/UiNooY4LUgW_oTvpsNhPpQzsstV5W8F7rYgxgGBD85cWJoLmrOzhVs_ksK_vgx40SHs7jCqkTkCk=e14-rj-sc0xffffff-h130-w32" alt="Vertex AI logo"> Open in Vertex AI Workbench </a> </td> </table> <br/><br/><br/> ## Overview This tutorial demonstrates how to use the Vertex AI SDK for Python to create text entity extraction models and do online prediction using a Google Cloud [AutoML](https://cloud.google.com/vertex-ai/docs/start/automl-users) model. 
### Dataset The dataset used for this tutorial is the [NCBI Disease Research Abstracts dataset](https://www.ncbi.nlm.nih.gov/CBBresearch/Dogan/DISEASE/) from [National Center for Biotechnology Information](https://www.ncbi.nlm.nih.gov/). The version of the dataset you will use in this tutorial is stored in a public Cloud Storage bucket. ### Objective In this tutorial, you create an AutoML text entity extraction model and deploy for online prediction from a Python script using the Vertex SDK. You can alternatively create and deploy models using the `gcloud` command-line tool or online using the Cloud Console. The steps performed include: - Create a Vertex `Dataset` resource. - Train the model. - View the model evaluation. - Deploy the `Model` resource to a serving `Endpoint` resource. - Make a prediction. - Undeploy the `Model`. ### Costs This tutorial uses billable components of Google Cloud: * Vertex AI * Cloud Storage Learn about [Vertex AI pricing](https://cloud.google.com/vertex-ai/pricing) and [Cloud Storage pricing](https://cloud.google.com/storage/pricing), and use the [Pricing Calculator](https://cloud.google.com/products/calculator/) to generate a cost estimate based on your projected usage. ### Set up your local development environment If you are using Colab or Google Cloud Notebooks, your environment already meets all the requirements to run this notebook. You can skip this step. Otherwise, make sure your environment meets this notebook's requirements. You need the following: - The Cloud Storage SDK - Git - Python 3 - virtualenv - Jupyter notebook running in a virtual environment with Python 3 The Cloud Storage guide to [Setting up a Python development environment](https://cloud.google.com/python/setup) and the [Jupyter installation guide](https://jupyter.org/install) provide detailed instructions for meeting these requirements. The following steps provide a condensed set of instructions: 1. 
[Install and initialize the SDK](https://cloud.google.com/sdk/docs/). 2. [Install Python 3](https://cloud.google.com/python/setup#installing_python). 3. [Install virtualenv](https://cloud.google.com/python/setup#installing_and_using_virtualenv) and create a virtual environment that uses Python 3. Activate the virtual environment. 4. To install Jupyter, run `pip3 install jupyter` on the command-line in a terminal shell. 5. To launch Jupyter, run `jupyter notebook` on the command-line in a terminal shell. 6. Open this notebook in the Jupyter Notebook Dashboard. ## Installation Install the latest version of Vertex SDK for Python. ``` import os # Google Cloud Notebook if os.path.exists("/opt/deeplearning/metadata/env_version"): USER_FLAG = "--user" else: USER_FLAG = "" ! pip3 install --upgrade google-cloud-aiplatform $USER_FLAG ``` Install the latest GA version of *google-cloud-storage* library as well. ``` ! pip3 install -U google-cloud-storage $USER_FLAG ``` ### Restart the kernel Once you've installed the additional packages, you need to restart the notebook kernel so it can find the packages. ``` import os if not os.getenv("IS_TESTING"): # Automatically restart kernel after installs import IPython app = IPython.Application.instance() app.kernel.do_shutdown(True) ``` ## Before you begin ### GPU runtime This tutorial does not require a GPU runtime. ### Set up your Google Cloud project **The following steps are required, regardless of your notebook environment.** 1. [Select or create a Google Cloud project](https://console.cloud.google.com/cloud-resource-manager). When you first create an account, you get a $300 free credit towards your compute/storage costs. 2. [Make sure that billing is enabled for your project.](https://cloud.google.com/billing/docs/how-to/modify-project) 3. 
[Enable the following APIs: Vertex AI APIs, Compute Engine APIs, and Cloud Storage.](https://console.cloud.google.com/flows/enableapi?apiid=ml.googleapis.com,compute_component,storage-component.googleapis.com) 4. If you are running this notebook locally, you will need to install the [Cloud SDK]((https://cloud.google.com/sdk)). 5. Enter your project ID in the cell below. Then run the cell to make sure the Cloud SDK uses the right project for all the commands in this notebook. **Note**: Jupyter runs lines prefixed with `!` as shell commands, and it interpolates Python variables prefixed with `$`. ``` PROJECT_ID = "[your-project-id]" # @param {type:"string"} if PROJECT_ID == "" or PROJECT_ID is None or PROJECT_ID == "[your-project-id]": # Get your GCP project id from gcloud shell_output = ! gcloud config list --format 'value(core.project)' 2>/dev/null PROJECT_ID = shell_output[0] print("Project ID:", PROJECT_ID) ! gcloud config set project $PROJECT_ID ``` #### Region You can also change the `REGION` variable, which is used for operations throughout the rest of this notebook. Below are regions supported for Vertex AI. We recommend that you choose the region closest to you. - Americas: `us-central1` - Europe: `europe-west4` - Asia Pacific: `asia-east1` You may not use a multi-regional bucket for training with Vertex AI. Not all regions provide support for all Vertex AI services. Learn more about [Vertex AI regions](https://cloud.google.com/vertex-ai/docs/general/locations) ``` REGION = "[your-region]" # @param {type: "string"} if REGION == "[your-region]": REGION = "us-central1" ``` #### Timestamp If you are in a live tutorial session, you might be using a shared test account or project. To avoid name collisions between users on resources created, you create a timestamp for each instance session, and append the timestamp onto the name of resources you create in this tutorial. 
``` from datetime import datetime TIMESTAMP = datetime.now().strftime("%Y%m%d%H%M%S") ``` ### Authenticate your Google Cloud account **If you are using Google Cloud Notebooks**, your environment is already authenticated. Skip this step. **If you are using Colab**, run the cell below and follow the instructions when prompted to authenticate your account via oAuth. **Otherwise**, follow these steps: In the Cloud Console, go to the [Create service account key](https://console.cloud.google.com/apis/credentials/serviceaccountkey) page. **Click Create service account**. In the **Service account name** field, enter a name, and click **Create**. In the **Grant this service account access to project** section, click the Role drop-down list. Type "Vertex" into the filter box, and select **Vertex Administrator**. Type "Storage Object Admin" into the filter box, and select **Storage Object Admin**. Click Create. A JSON file that contains your key downloads to your local environment. Enter the path to your service account key as the GOOGLE_APPLICATION_CREDENTIALS variable in the cell below and run the cell. ``` # If you are running this notebook in Colab, run this cell and follow the # instructions to authenticate your GCP account. This provides access to your # Cloud Storage bucket and lets you submit training jobs and prediction # requests. import os import sys # If on Google Cloud Notebook, then don't execute this code if not os.path.exists("/opt/deeplearning/metadata/env_version"): if "google.colab" in sys.modules: from google.colab import auth as google_auth google_auth.authenticate_user() # If you are running this notebook locally, replace the string below with the # path to your service account key and run this cell to authenticate your GCP # account. 
elif not os.getenv("IS_TESTING"): %env GOOGLE_APPLICATION_CREDENTIALS '' ``` ### Create a Cloud Storage bucket **The following steps are required, regardless of your notebook environment.** When you initialize the Vertex SDK for Python, you specify a Cloud Storage staging bucket. The staging bucket is where all the data associated with your dataset and model resources are retained across sessions. Set the name of your Cloud Storage bucket below. Bucket names must be globally unique across all Google Cloud projects, including those outside of your organization. ``` BUCKET_NAME = "gs://[your-bucket-name]" # @param {type:"string"} if BUCKET_NAME == "" or BUCKET_NAME is None or BUCKET_NAME == "gs://[your-bucket-name]": BUCKET_NAME = "gs://" + PROJECT_ID + "aip-" + TIMESTAMP ``` **Only if your bucket doesn't already exist**: Run the following cell to create your Cloud Storage bucket. ``` ! gsutil mb -l $REGION -p $PROJECT_ID $BUCKET_NAME ``` Finally, validate access to your Cloud Storage bucket by examining its contents: ``` ! gsutil ls -al $BUCKET_NAME ``` ### Set up variables Next, set up some variables used throughout the tutorial. ### Import libraries and define constants ``` import os import google.cloud.aiplatform as aiplatform ``` ## Initialize Vertex SDK for Python Initialize the Vertex SDK for Python for your project and corresponding bucket. ``` aiplatform.init(project=PROJECT_ID, staging_bucket=BUCKET_NAME) ``` # Tutorial Now you are ready to start creating your own AutoML text entity extraction model. #### Location of Cloud Storage training data. Now set the variable `IMPORT_FILE` to the location of the JSONL index file in Cloud Storage. ``` IMPORT_FILE = "gs://cloud-samples-data/language/ucaip_ten_dataset.jsonl" ``` #### Quick peek at your data This tutorial uses a version of the NCBI Biomedical dataset that is stored in a public Cloud Storage bucket, using a JSONL index file. Start by doing a quick peek at the data. 
You count the number of examples by counting the number of objects in a JSONL index file (`wc -l`) and then peek at the first few rows. ``` FILE = IMPORT_FILE count = ! gsutil cat $FILE | wc -l print("Number of Examples", int(count[0])) print("First 10 rows") ! gsutil cat $FILE | head ``` ### Create the Dataset Next, create the `Dataset` resource using the `create` method for the `TextDataset` class, which takes the following parameters: - `display_name`: The human readable name for the `Dataset` resource. - `gcs_source`: A list of one or more dataset index files to import the data items into the `Dataset` resource. - `import_schema_uri`: The data labeling schema for the data items. This operation may take several minutes. ``` dataset = aiplatform.TextDataset.create( display_name="NCBI Biomedical" + "_" + TIMESTAMP, gcs_source=[IMPORT_FILE], import_schema_uri=aiplatform.schema.dataset.ioformat.text.extraction, ) print(dataset.resource_name) ``` ### Create and run training pipeline To train an AutoML model, you perform two steps: 1) create a training pipeline, and 2) run the pipeline. #### Create training pipeline An AutoML training pipeline is created with the `AutoMLTextTrainingJob` class, with the following parameters: - `display_name`: The human readable name for the `TrainingJob` resource. - `prediction_type`: The type task to train the model for. - `classification`: A text classification model. - `sentiment`: A text sentiment analysis model. - `extraction`: A text entity extraction model. - `multi_label`: If a classification task, whether single (False) or multi-labeled (True). - `sentiment_max`: If a sentiment analysis task, the maximum sentiment value. ``` job = aiplatform.AutoMLTextTrainingJob( display_name="biomedical_" + TIMESTAMP, prediction_type="extraction" ) print(job) ``` #### Run the training pipeline Next, you start the training job by invoking the method `run`, with the following parameters: - `dataset`: The `Dataset` resource to train the model. 
- `model_display_name`: The human readable name for the trained model. - `training_fraction_split`: The percentage of the dataset to use for training. - `test_fraction_split`: The percentage of the dataset to use for test (holdout data). - `validation_fraction_split`: The percentage of the dataset to use for validation. When complete, the `run` method returns the `Model` resource. The execution of the training pipeline can take up to 4 hours. ``` model = job.run( dataset=dataset, model_display_name="biomedical_" + TIMESTAMP, training_fraction_split=0.8, validation_fraction_split=0.1, test_fraction_split=0.1, ) ``` ## Review model evaluation scores After your model has finished training, you can review the evaluation scores for it. First, you need to get a reference to the new model. As with datasets, you can either use the reference to the model variable you created when you trained the model or you can list all of the models in your project. ``` # Get model resource ID models = aiplatform.Model.list(filter="display_name=biomedical_" + TIMESTAMP) # Get a reference to the Model Service client client_options = {"api_endpoint": f"{REGION}-aiplatform.googleapis.com"} model_service_client = aiplatform.gapic.ModelServiceClient( client_options=client_options ) model_evaluations = model_service_client.list_model_evaluations( parent=models[0].resource_name ) model_evaluation = list(model_evaluations)[0] print(model_evaluation) ``` ## Deploy the model Next, deploy your model for online prediction. To deploy the model, you invoke the `deploy` method. ``` endpoint = model.deploy() ``` ## Send an online prediction request Send an online prediction to your deployed model. ### Make test item You will use synthetic data as a test data item. Don't be concerned that we are using synthetic data -- we just want to demonstrate how to make a prediction.
``` test_item = 'Molecular basis of hexosaminidase A deficiency and pseudodeficiency in the Berks County Pennsylvania Dutch.\tFollowing the birth of two infants with Tay-Sachs disease ( TSD ) , a non-Jewish , Pennsylvania Dutch kindred was screened for TSD carriers using the biochemical assay . A high frequency of individuals who appeared to be TSD heterozygotes was detected ( Kelly et al . , 1975 ) . Clinical and biochemical evidence suggested that the increased carrier frequency was due to at least two altered alleles for the hexosaminidase A alpha-subunit . We now report two mutant alleles in this Pennsylvania Dutch kindred , and one polymorphism . One allele , reported originally in a French TSD patient ( Akli et al . , 1991 ) , is a GT-- > AT transition at the donor splice-site of intron 9 . The second , a C-- > T transition at nucleotide 739 ( Arg247Trp ) , has been shown by Triggs-Raine et al . ( 1992 ) to be a clinically benign " pseudodeficient " allele associated with reduced enzyme activity against artificial substrate . Finally , a polymorphism [ G-- > A ( 759 ) ] , which leaves valine at codon 253 unchanged , is described' ``` ### Make the prediction Now that your `Model` resource is deployed to an `Endpoint` resource, you can do online predictions by sending prediction requests to the `Endpoint` resource. #### Request The format of each instance is: { 'content': text_string } Since the predict() method can take multiple items (instances), send your single test item as a list of one test item. #### Response The response from the predict() call is a Python dictionary with the following entries: - `ids`: The internal assigned unique identifiers for each prediction request. - `displayNames`: The class names for each entity. - `confidences`: The predicted confidence, between 0 and 1, per entity. - `textSegmentStartOffsets`: The character offset in the text to the start of the entity. 
- `textSegmentEndOffsets`: The character offset in the text to the end of the entity. - `deployed_model_id`: The Vertex AI identifier for the deployed `Model` resource which did the predictions. ``` instances_list = [{"content": test_item}] prediction = endpoint.predict(instances_list) print(prediction) ``` ## Undeploy the model When you are done making predictions, you undeploy the model from the `Endpoint` resource. This deprovisions all compute resources and ends billing for the deployed model. ``` endpoint.undeploy_all() ``` # Cleaning up To clean up all Google Cloud resources used in this project, you can [delete the Google Cloud project](https://cloud.google.com/resource-manager/docs/creating-managing-projects#shutting_down_projects) you used for the tutorial. Otherwise, you can delete the individual resources you created in this tutorial: - Dataset - Pipeline - Model - Endpoint - AutoML Training Job - Batch Job - Custom Job - Hyperparameter Tuning Job - Cloud Storage Bucket ``` # Delete the dataset using the Vertex dataset object dataset.delete() # Delete the model using the Vertex model object model.delete() # Delete the endpoint using the Vertex endpoint object endpoint.delete() # Delete the AutoML or Pipeline training job job.delete() if os.getenv("IS_TESTING"): ! gsutil -m rm -r $BUCKET_NAME ```
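As an aside on the prediction response described earlier: the response packs its fields as parallel arrays, so a little post-processing is needed to recover entity spans. A minimal sketch on a hand-made payload that mirrors those fields (the offsets, labels, and confidences below are illustrative, not real model output, and end offsets are assumed to be exclusive):

```python
# Hypothetical payload mirroring the parallel arrays in the prediction response
prediction = {
    "displayNames": ["SpecificDisease", "SpecificDisease"],
    "confidences": [0.98, 0.87],
    "textSegmentStartOffsets": [19, 51],
    "textSegmentEndOffsets": [46, 67],
}
text = "Molecular basis of hexosaminidase A deficiency and pseudodeficiency in the Berks County Pennsylvania Dutch."

# Zip the parallel arrays into one record per extracted entity
entities = [
    {"label": label, "confidence": conf, "span": text[int(start):int(end)]}
    for label, conf, start, end in zip(
        prediction["displayNames"],
        prediction["confidences"],
        prediction["textSegmentStartOffsets"],
        prediction["textSegmentEndOffsets"],
    )
]
for e in entities:
    print("{label} ({confidence:.2f}): {span!r}".format(**e))
```

The same loop applies to the real `endpoint.predict` output once you index into its `predictions` list.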
# Estimating the effective reproduction number in Belgium with the RKI method > Using the Robert Koch Institute method with a serial interval of 4 days. - toc:true - branch: master - badges: true - comments: true - author: Lode Nachtergaele - categories: [cast42, covid19, Belgium] Every day [Bart Mesuere](https://twitter.com/BartMesuere) tweets a nice dashboard with current numbers about Covid-19 in Belgium. This was the tweet on Wednesday 2020/11/04: > twitter: https://twitter.com/BartMesuere/status/1323881489864548352 It's nice to see that the effective reproduction number ($Re(t)$) is again below one. That means the power of the virus is declining and the number of infections will start to decrease. This first occurred on Tuesday 2020/11/3: > twitter: https://twitter.com/BartMesuere/status/1323519613855059968 I estimated the $Re(t)$ earlier with the [rt.live](https://github.com/rtcovidlive/covid-model) model in [this notebook](https://cast42.github.io/blog/cast42/covid19/belgium/2020/11/01/rt-be-region.html). There the $Re(t)$ was still estimated to be above one. [Michael Osthege](https://twitter.com/theCake) replied with simulation results from a further improved [model](https://github.com/rtcovidlive/rtlive-global): > twitter: https://twitter.com/theCake/status/1323211910481874944 In that estimation, the $Re(t)$ was also not yet heading below one at the end of October. In this notebook, we will implement a calculation based on the method of the Robert Koch Institute. The method is described and programmed in R in this [blog post](https://staff.math.su.se/hoehle/blog/2020/04/15/effectiveR0.html). In that blog post there's a link to a website with estimations for most places in the world. [The estimation for Belgium is here](https://epiforecasts.io/covid/posts/national/belgium/) ![LSHTM](https://github.com/cast42/blog/blob/master/_notebooks/images/LSHTM.png?raw=1) According to that calculation, $Re(t)$ has already been below one for some days.
# Load libraries and data

```
import numpy as np
import pandas as pd

df_tests = pd.read_csv('https://epistat.sciensano.be/Data/COVID19BE_tests.csv', parse_dates=['DATE'])
df_cases = pd.read_csv('https://epistat.sciensano.be/Data/COVID19BE_CASES_AGESEX.csv', parse_dates=['DATE'])
df_cases
```

Reformat the data into the Rtlive format:

```
df_cases_per_day = (df_cases
    .dropna(subset=['DATE'])
    .assign(region='Belgium')
    .groupby(['region', 'DATE'], as_index=False)
    .agg(cases=('CASES', 'sum'))
    .rename(columns={'DATE':'date'})
    .set_index(["region", "date"])
)
```

What's in our basetable:

```
df_cases_per_day
```

Let's plot the number of cases as a function of time.

```
ax = df_cases_per_day.loc['Belgium'].plot(figsize=(18,6))
ax.set(ylabel='Number of cases', title='Number of cases for covid-19 and number of positives in Belgium');
```

We see that the last few days are not yet complete. Let's cut off the last two days of reporting.

```
import datetime
from dateutil.relativedelta import relativedelta
```

Calculate the date two days ago:

```
# today_minus_two = datetime.date.today() + relativedelta(days=-2)
today_minus_two = datetime.date(2020, 11, 3) # Fix the day
today_minus_two.strftime("%Y-%m-%d")
```

Replot the cases:

```
ax = df_cases_per_day.loc['Belgium'][:today_minus_two].plot(figsize=(18,6))
ax.set(ylabel='Number of cases', title='Number of cases for covid-19 and number of positives in Belgium');
```

Select the Belgium region:

```
region = 'Belgium'
df = df_cases_per_day.loc[region][:today_minus_two]
df
```

Check the types of the columns:

```
df.info()
```

# Robert Koch Institute method

A basic method to calculate the effective reproduction number is described (among others) in this [blogpost](https://staff.math.su.se/hoehle/blog/2020/04/15/effectiveR0.html). I included the relevant paragraph:

In a recent report (an der Heiden and Hamouda 2020) the RKI described their method for computing R as part of the COVID-19 outbreak as follows (p.
13): For a constant generation time of 4 days, one obtains R as the ratio of new infections in two consecutive time periods each consisting of 4 days. Mathematically, this estimation could be formulated as part of a statistical model:

$$y_{s+4} | y_{s} \sim Po(R \cdot y_{s}), \quad s = 1,2,3,4$$

where $y_{1}, \ldots, y_{4}$ are considered as fixed. From this we obtain

$$\hat{R}_{RKI} = \sum_{s=1}^{4} y_{s+4} / \sum_{s=1}^{4} y_{s}$$

Somewhat arbitrarily, we denote by $Re(t)$ the above estimate for R when $s=1$ corresponds to time $t-8$, i.e. we assign the obtained value to the last of the 8 values used in the computation.

In Python, we define a lambda function that we apply on a rolling window. Since indexes start from zero, we calculate:

$$\hat{R}_{RKI} = \sum_{s=0}^{3} y_{s+4} / \sum_{s=0}^{3} y_{s}$$

```
rt = lambda y: np.sum(y[4:])/np.sum(y[:4])
df.rolling(8).apply(rt)
```

The first values are NaN because the rolling window extends before the start of the data. If we plot the result, it looks like this:

```
ax = df.rolling(8).apply(rt).plot(figsize=(16,4), label='Re(t)')
ax.set(ylabel='Re(t)', title='Effective reproduction number estimated with RKI method')
ax.legend(['Re(t)']);
```

To avoid the spikes due to weekend reporting issues, I first applied a rolling mean on a window of 7 days:

```
ax = df.rolling(7).mean().rolling(8).apply(rt).plot(figsize=(16,4), label='Re(t)')
ax.set(ylabel='Re(t)', title='Effective reproduction number estimated with RKI method after rolling mean on window of 7 days')
ax.legend(['Re(t)']);
```

# Interactive visualisation in Altair

```
import altair as alt

alt.Chart(df.rolling(7).mean().rolling(8).apply(rt).fillna(0).reset_index()).mark_line().encode(
    x=alt.X('date:T'),
    y=alt.Y('cases', title='Re(t)'),
    tooltip=['date:T', alt.Tooltip('cases', format='.2f')]
).transform_filter(
    alt.datum.date > alt.expr.toDate('2020-03-13')
).properties(
    width=600,
    title='Effective reproduction number in Belgium based on Robert-Koch Institute method'
)
```

# Making the final visualisation
in Altair In the interactive Altair figure below, we show the $Re(t)$ for the last 14 days. We reduce the rolling mean window to three to see faster reactions. ``` #collapse df_plot = df.rolling(7).mean().rolling(8).apply(rt).fillna(0).reset_index() last_value = str(df_plot.iloc[-1]['cases'].round(2)) + ' ↓' first_value = str(df_plot[df_plot['date'] == '2020-10-21'].iloc[0]['cases'].round(2)) # + ' ↑' today_minus_15 = datetime.datetime.today() + relativedelta(days=-15) today_minus_15_str = today_minus_15.strftime("%Y-%m-%d") line = alt.Chart(df_plot).mark_line(point=True).encode( x=alt.X('date:T', axis=alt.Axis(title='Datum', grid=False)), y=alt.Y('cases', axis=alt.Axis(title='Re(t)', grid=False, labels=False, titlePadding=40)), tooltip=['date:T', alt.Tooltip('cases', title='Re(t)', format='.2f')] ).transform_filter( alt.datum.date > alt.expr.toDate(today_minus_15_str) ).properties( width=600, height=100 ) hline = alt.Chart(pd.DataFrame({'cases': [1]})).mark_rule().encode(y='cases') label_right = alt.Chart(df_plot).mark_text( align='left', dx=5, dy=-10 , size=15 ).encode( x=alt.X('max(date):T', title=None), text=alt.value(last_value), ) label_left = alt.Chart(df_plot).mark_text( align='right', dx=-5, dy=-40, size=15 ).encode( x=alt.X('min(date):T', title=None), text=alt.value(first_value), ).transform_filter( alt.datum.date > alt.expr.toDate(today_minus_15_str) ) source = alt.Chart( {"values": [{"text": "Data source: Sciensano"}]} ).mark_text(size=12, align='left', dx=-57).encode( text="text:N" ) alt.vconcat(line + label_left + label_right + hline, source).configure( background='#D9E9F0' ).configure_view( stroke=None, # Remove box around graph ).configure_axisY( ticks=False, grid=False, domain=False ).configure_axisX( grid=False, domain=False ).properties(title={ "text": ['Effective reproduction number for the last 14 days in Belgium'], "subtitle": [f'Estimation based on the number of cases until {today_minus_two.strftime("%Y-%m-%d")} after example of Robert Koch 
Institute with serial interval of 4'],
    }
)
# .configure_axisY(
#     labelPadding=50,
# )
```

To check the calculation, here are the last four values for the number of cases after applying the mean window of 7:

```
df.rolling(7).mean().iloc[-8:-4]
```

Those must be added together:

```
df.rolling(7).mean().iloc[-8:-4].sum()
```

And here are the four values, starting four days ago:

```
df.rolling(7).mean().iloc[-4:]
```

These are added together:

```
df.rolling(7).mean().iloc[-4:].sum()
```

And now we divide those two sums to get the $Re(t)$ of 2020-11-03:

```
df.rolling(7).mean().iloc[-4:].sum()/df.rolling(7).mean().iloc[-8:-4].sum()
```

This matches (as expected) the value in the graph. Let's compare with three other sources:

1. Alas, it does not match the calculation reported by [Bart Mesuere](https://twitter.com/BartMesuere) on 2020-11-03, whose RKI-based model reports 0.96:
> twitter: https://twitter.com/BartMesuere/status/1323519613855059968
2. Also, the more elaborate model from [rtliveglobal](https://github.com/rtcovidlive/rtlive-global) is not yet that optimistic. Note that the rtlive model starts estimating the $Re(t)$ from the number of tests instead of the number of cases, so other reporting delays might be involved.
3. [epiforecast.io](https://epiforecasts.io/covid/posts/national/belgium/) has already been below 1 since the beginning of November.

Another possibility is that I made a mistake somewhere. If you spot it, please let me know.
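As an extra sanity check (my own addition, not part of the original post): on synthetic case counts that grow by a constant factor $g$ per day, a serial interval of 4 days implies a true $R = g^4$, and the RKI ratio estimator recovers that value exactly:

```python
import numpy as np

# Synthetic cases growing by a constant daily factor g.
# With a serial interval of 4 days, one generation multiplies
# the case count by g**4, so the true R equals g**4.
g = 1.05
cases = 100 * g ** np.arange(20)

# The RKI estimator: sum of the last 4 days over the 4 days before.
rt = lambda y: np.sum(y[4:]) / np.sum(y[:4])

r_hat = rt(cases[-8:])
print(r_hat, g ** 4)  # both approximately 1.2155
```

Because the ratio of two consecutive 4-day sums of a geometric series is exactly $g^4$, the estimator is unbiased on this idealised input; real data only deviates through noise and reporting artifacts.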
github_jupyter
```
import sys, os
sys.version, os.getcwd()
```

# Torch

```
import torch
torch.__version__

import pandas as pd

PREFIXES = ['WP','EU','CW','TT','RF']

def clean_source_data(directory):
    data = pd.read_table(directory, header=None)
    data['prefix'] = data[0].apply(lambda x: str(x).split(']')[0][1:].strip()).str.upper()
    data['prompt'] = data[0].apply(lambda x: str(x).split(']')[-1].strip())
    data['prompt'] = data['prompt'].apply(lambda x: str(x).strip('"').replace('``','"').replace("''",'"'))
    data.drop(columns=0, inplace=True)
    # undesired tags: ['IP','OT','PM','MP','PI','CC']
    data = data[data['prefix'].isin(PREFIXES)]
    data.reset_index(inplace=True, drop=True)
    data.dropna(inplace=True)
    return data

data = clean_source_data('data/writingPrompts/train.wp_source')
data.tail()

rf = data['prompt'][data['prefix']=='RF']
rf[1171]

prompts_as_csv = []
for prefix in PREFIXES:
    prompts_as_csv.append(data['prompt'][data['prefix']==prefix])

for prefix, prompts in zip(PREFIXES, prompts_as_csv):
    prompts.to_csv('data/writingPrompts/'+prefix+'.csv', index=False)

for prefix, prompts in zip(PREFIXES, prompts_as_csv):
    prompts.to_csv('data/strings/'+prefix+'.txt', index=False)

from __future__ import unicode_literals, print_function, division
from io import open
import glob
import os
import unicodedata
import string

all_letters = string.ascii_letters + " .,;'-"
n_letters = len(all_letters) + 1 # Plus EOS marker

def findFiles(path): return glob.glob(path)

# Turn a Unicode string to plain ASCII, thanks to http://stackoverflow.com/a/518232/2809427
def unicodeToAscii(s):
    return ''.join(
        c for c in unicodedata.normalize('NFD', s)
        if unicodedata.category(c) != 'Mn'
        and c in all_letters
    )

# Read a file and split into lines
def readLines(filename):
    lines = open(filename, encoding='utf-8').read().strip().split('\n')
    return [unicodeToAscii(line) for line in lines]

# Build the category_lines dictionary, a list of lines per category
category_lines = {}
all_categories = []
for filename in
findFiles('data/writingPrompts/RF.csv'): category = os.path.splitext(os.path.basename(filename))[0] all_categories.append(category) lines = readLines(filename) category_lines[category] = lines n_categories = len(all_categories) if n_categories == 0: raise RuntimeError('Data not found. Make sure that you downloaded data ' 'from https://download.pytorch.org/tutorial/data.zip and extract it to ' 'the current directory.') print('# categories:', n_categories, all_categories) print(unicodeToAscii("O'Néàl")) category_lines import torch import torch.nn as nn class RNN(nn.Module): def __init__(self, input_size, hidden_size, output_size): super(RNN, self).__init__() self.hidden_size = hidden_size self.i2h = nn.Linear(n_categories + input_size + hidden_size, hidden_size) self.i2o = nn.Linear(n_categories + input_size + hidden_size, output_size) self.o2o = nn.Linear(hidden_size + output_size, output_size) self.dropout = nn.Dropout(0.1) self.softmax = nn.LogSoftmax(dim=1) def forward(self, category, input, hidden): input_combined = torch.cat((category, input, hidden), 1) hidden = self.i2h(input_combined) output = self.i2o(input_combined) output_combined = torch.cat((hidden, output), 1) output = self.o2o(output_combined) output = self.dropout(output) output = self.softmax(output) return output, hidden def initHidden(self): return torch.zeros(1, self.hidden_size) import random # Random item from a list def randomChoice(l): return l[random.randint(0, len(l) - 1)] # Get a random category and random line from that category def randomTrainingPair(): category = randomChoice(all_categories) line = randomChoice(category_lines[category]) return category, line def categoryTensor(category): li = all_categories.index(category) tensor = torch.zeros(1, n_categories) tensor[0][li] = 1 return tensor # One-hot matrix of first to last letters (not including EOS) for input def inputTensor(line): tensor = torch.zeros(len(line), 1, n_letters) for li in range(len(line)): letter = line[li] 
tensor[li][0][all_letters.find(letter)] = 1 return tensor # LongTensor of second letter to end (EOS) for target def targetTensor(line): letter_indexes = [all_letters.find(line[li]) for li in range(1, len(line))] letter_indexes.append(n_letters - 1) # EOS return torch.LongTensor(letter_indexes) def randomTrainingExample(): category, line = randomTrainingPair() category_tensor = categoryTensor(category) input_line_tensor = inputTensor(line) target_line_tensor = targetTensor(line) return category_tensor, input_line_tensor, target_line_tensor criterion = nn.NLLLoss() learning_rate = 0.0005 def train(category_tensor, input_line_tensor, target_line_tensor): target_line_tensor.unsqueeze_(-1) hidden = rnn.initHidden() rnn.zero_grad() loss = 0 for i in range(input_line_tensor.size(0)): output, hidden = rnn(category_tensor, input_line_tensor[i], hidden) l = criterion(output, target_line_tensor[i]) loss += l loss.backward() for p in rnn.parameters(): p.data.add_(-learning_rate, p.grad.data) return output, loss.item() / input_line_tensor.size(0) import time import math def timeSince(since): now = time.time() s = now - since m = math.floor(s / 60) s -= m * 60 return '%dm %ds' % (m, s) rnn = RNN(n_letters, 128, n_letters) n_iters = 500 print_every = 1 plot_every = n_iters all_losses = [] total_loss = 0 # Reset every plot_every iters start = time.time() for iter in range(1, n_iters + 1): output, loss = train(*randomTrainingExample()) total_loss += loss if iter % print_every == 0: print('%s (%d %d%%) %.4f' % (timeSince(start), iter, iter / n_iters * 100, loss)) if iter % plot_every == 0: all_losses.append(total_loss / plot_every) total_loss = 0 max_length = 50 # Sample from a category and starting letter def sample(category, start_letter='A'): with torch.no_grad(): # no need to track history in sampling category_tensor = categoryTensor(category) input = inputTensor(start_letter) hidden = rnn.initHidden() output_name = start_letter for i in range(max_length): output, hidden = 
rnn(category_tensor, input[0], hidden) topv, topi = output.topk(random.randint(1,1)) topi = topi[0][0] if topi == n_letters - 1: break else: letter = all_letters[topi] output_name += letter input = inputTensor(letter) return output_name # Get multiple samples from one category and multiple starting letters def samples(category, start_letters='ABC'): for start_letter in start_letters: print(sample(category, start_letter)) # samples('WP', 'RUSE') # samples('EU', 'GERM') # samples('CW', 'SPAN') # samples('TT', 'CHIN') samples('RF', 'IRLD') ``` # Keras Char-Level ``` import numpy as np import pandas as pd from keras.models import Sequential, Model, Input from keras.layers import LSTM, TimeDistributed, Activation, Dense, Lambda with open('data/writingPrompts/RF.txt') as f: lines = f.read().splitlines() documents = ''.join(lines) chars = list(set(documents)) VOCAB_SIZE = len(chars) ix_to_char = {ix:char for ix, char in enumerate(chars)} char_to_ix = {char:ix for ix, char in enumerate(chars)} D_LENGTH = int(len(documents)) SEQ_LENGTH = 200 HIDDEN_DIM = 300 LAYER_NUM = 4 BATCH_SIZE = 100 temp = 3 X = np.zeros((len(documents)//SEQ_LENGTH, SEQ_LENGTH, VOCAB_SIZE)) y = np.zeros((len(documents)//SEQ_LENGTH, SEQ_LENGTH, VOCAB_SIZE)) for i in range(0, len(documents)//SEQ_LENGTH): X_sequence = documents[i*SEQ_LENGTH:(i+1)*SEQ_LENGTH] X_sequence_ix = [char_to_ix[value] for value in X_sequence] input_sequence = np.zeros((SEQ_LENGTH, VOCAB_SIZE)) for j in range(SEQ_LENGTH): input_sequence[j][X_sequence_ix[j]] = 1. X[i] = input_sequence y_sequence = documents[i*SEQ_LENGTH+1:(i+1)*SEQ_LENGTH+1] y_sequence_ix = [char_to_ix[value] for value in y_sequence] target_sequence = np.zeros((SEQ_LENGTH, VOCAB_SIZE)) for j in range(SEQ_LENGTH): #print(j) target_sequence[j][y_sequence_ix[j]] = 1. 
y[i] = target_sequence model = Sequential() model.add(LSTM(HIDDEN_DIM, input_shape=(None, VOCAB_SIZE), return_sequences=True)) for i in range(LAYER_NUM - 1): model.add(LSTM(HIDDEN_DIM, return_sequences=True)) model.add(TimeDistributed(Dense(VOCAB_SIZE))) model.add(Lambda(lambda x: x / temp)) model.add(Activation('softmax')) model.compile(loss="categorical_crossentropy", optimizer="rmsprop") def generate_text(model, length): ix = [np.random.randint(VOCAB_SIZE)] y_char = [ix_to_char[ix[-1]]] X = np.zeros((1, length, VOCAB_SIZE)) for i in range(length): X[0, i, :][ix[-1]] = 1 print(ix_to_char[ix[-1]], end="") ix = np.argmax(model.predict(X[:, :i+1, :])[0], 1) y_char.append(ix_to_char[ix[-1]]) return ('').join(y_char) nb_epoch = 0 print('\n\n') model.fit(X, y, batch_size=BATCH_SIZE, verbose=1, nb_epoch=25) nb_epoch += 1 generate_text(model, SEQ_LENGTH) if nb_epoch % 2 == 0: model.save_weights('checkpoint_{}_epoch_{}.hdf5'.format(HIDDEN_DIM, nb_epoch)) ``` # Keras Word Level ``` republic = 'data/strings/RFs.txt' import string # load doc into memory def load_doc(filename): # open the file as read only file = open(filename, 'r') # read all text text = file.read() # close the file file.close() return text # turn a doc into clean tokens def clean_doc(doc): # replace '--' with a space ' ' doc = doc.replace('--', ' ') # split into tokens by white space tokens = doc.split() # remove punctuation from each token table = str.maketrans('', '', string.punctuation) tokens = [w.translate(table) for w in tokens] # remove remaining tokens that are not alphabetic tokens = [word for word in tokens if word.isalpha()] # make lower case tokens = [word.lower() for word in tokens] return tokens # save tokens to file, one dialog per line def save_doc(lines, filename): data = '\n'.join(lines) file = open(filename, 'w') file.write(data) file.close() # load document in_filename = republic doc = load_doc(in_filename) print(doc[:200]) # clean document tokens = clean_doc(doc) print(tokens[:200]) 
print('Total Tokens: %d' % len(tokens))
print('Unique Tokens: %d' % len(set(tokens)))

# organize into sequences of tokens
length = 50 + 1
sequences = list()
for i in range(length, len(tokens)):
    # select sequence of tokens
    seq = tokens[i-length:i]
    # convert into a line
    line = ' '.join(seq)
    # store
    sequences.append(line)
print('Total Sequences: %d' % len(sequences))

# save sequences to file
out_filename = 'data/strings/RFsequences.txt'
save_doc(sequences, out_filename)

from numpy import array
from pickle import dump
from keras.preprocessing.text import Tokenizer
from keras.utils import to_categorical
from keras.models import Sequential
from keras.layers import Dense
from keras.layers import LSTM
from keras.layers import Embedding

# load doc into memory
def load_doc(filename):
    # open the file as read only
    file = open(filename, 'r')
    # read all text
    text = file.read()
    # close the file
    file.close()
    return text

# load the sequences saved above
in_filename = 'data/strings/RFsequences.txt'
doc = load_doc(in_filename)
lines = doc.split('\n')

# integer encode sequences of words
tokenizer = Tokenizer()
tokenizer.fit_on_texts(lines)
sequences = tokenizer.texts_to_sequences(lines)
# vocabulary size
vocab_size = len(tokenizer.word_index) + 1

# separate into input and output
sequences = array(sequences)
X, y = sequences[:,:-1], sequences[:,-1]
y
y = to_categorical(y, num_classes=vocab_size)
seq_length = X.shape[1]

# define model
model = Sequential()
model.add(Embedding(vocab_size, 50, input_length=seq_length))
model.add(LSTM(100, return_sequences=True))
model.add(LSTM(100))
model.add(Dense(100, activation='relu'))
model.add(Dense(vocab_size, activation='softmax'))
print(model.summary())

# compile model
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
# fit model
model.fit(X, y, batch_size=128, epochs=5)

# save the model to file
model.save('model.h5')
# save the tokenizer
dump(tokenizer, open('tokenizer.pkl', 'wb'))

from keras.utils import plot_model
plot_model(model, 'model.png')
```
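The char-level model above divides the logits by `temp` with a `Lambda(lambda x: x / temp)` layer before the softmax. As a minimal NumPy sketch (my own addition, independent of Keras), this is what that temperature parameter does to the output distribution: values above 1 flatten it (more random sampling), values below 1 sharpen it toward the most likely character:

```python
import numpy as np

def softmax_with_temperature(logits, temp):
    # Divide the logits by the temperature before the softmax,
    # mirroring the Lambda(lambda x: x / temp) layer above.
    z = np.asarray(logits, dtype=float) / temp
    z -= z.max()                 # shift for numerical stability
    p = np.exp(z)
    return p / p.sum()

logits = [2.0, 1.0, 0.1]
for t in (0.5, 1.0, 3.0):
    print(t, softmax_with_temperature(logits, t).round(3))
```

With `temp = 3` as in the model above, the sampled characters are noticeably more diverse than with greedy (argmax) decoding, at the cost of more spelling noise.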
github_jupyter
# Chapter 1 Exercises

```
import arviz as az
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
from scipy import stats

az.style.use('arviz-darkgrid')
```

## Question 1
***

We do not know whether the brain really works in a Bayesian way, in an approximate Bayesian fashion, or maybe some evolutionary (more or less) optimized heuristics. Nevertheless, we know that we learn by exposing ourselves to data, examples and exercises - well, you may say that humans never learn, given our record as a species on subjects such as wars or economic systems that prioritize profit and not people's well-being... Anyway, I recommend you do the proposed exercises at the end of each chapter.

*From the following expressions, which one corresponds to the sentence "the probability of being sunny, given that it is the 9th of July of 1816"?*

- p(sunny)
- p(sunny | July)
- p(sunny | 9th of July of 1816)
- p(9th of July of 1816 | sunny)
- p(sunny, 9th of July of 1816) / p(9th of July of 1816)

There are two statements that correspond to the *probability of being sunny given that it is the 9th of July of 1816*:

1. p(sunny | 9th of July of 1816)
2. p(sunny, 9th of July of 1816) / p(9th of July of 1816)

For the second one, recall the product rule (Equation 1.1):

$$ p(A,B) = p(A|B)p(B) $$

A rearrangement of this formula yields

$$ p(A|B) = \frac{p(A, B)}{p(B)} $$

Replace A and B with "sunny" and "9th of July of 1816" to get the equivalent formulation.

## Question 2
***

*Show that the probability of choosing a human at random and picking the Pope is not the same as the probability of the Pope being human.*

Let's assume there are 6 billion humans in the galaxy and there is only 1 Pope, Pope Francis, at the time of this writing. If a human is randomly picked, the chances of that human being the Pope are 1 in 6 billion. In mathematical notation

$$ p(Pope | human) = \frac{1}{6,000,000,000} $$

Additionally I am very confident that the Pope is human, so much so that I make this statement.
*Given a pope, I am 100% certain they are human*. Written in math:

$$ p(human | Pope) = 1 $$

*In the animated series Futurama, the (space) Pope is a reptile. How does this change your previous calculations?*

Ok then:

$$ p(Pope | human) = 0 $$

And

$$ p(human | Pope) = 0 $$

## Question 3
***

*In the following definition of a probabilistic model, identify the prior and the likelihood:*

\begin{eqnarray}
y_i \sim Normal(\mu, \sigma) \newline
\mu \sim Normal(0,10) \newline
\sigma \sim HalfNormal(25)
\end{eqnarray}

The priors in the model are

\begin{eqnarray}
\mu \sim Normal(0,10) \newline
\sigma \sim HalfNormal(25)
\end{eqnarray}

The likelihood in our model is

$$ y_i \sim Normal(\mu, \sigma) $$

## Question 4
***

*In the previous model, how many parameters will the posterior have? Compare it with the model for the coin-flipping problem.*

In the previous question there are two parameters in the posterior, $\mu$ and $\sigma$. In our coin-flipping model we had one parameter, $\theta$. It may seem confusing that we had $\alpha$ and $\beta$ as well, but remember, these were not parameters we were trying to estimate. In other words, we don't really care about $\alpha$ and $\beta$ - they were just values for our prior distribution. What we really wanted was $\theta$, to determine the fairness of the coin.

## Question 5
***

*Write Bayes' theorem for the model in question 3.*

$$ p(\mu, \sigma | y) = \frac{p(y| \mu, \sigma)p(\mu)p(\sigma)}{p(y)} $$

## Question 6
***

*Let's suppose that we have two coins. When we toss the first coin, half the time it lands on tails and the other half on heads. The other coin is a loaded coin that always lands on heads.
If we take one of the coins at random and get heads, what is the probability that this coin is the unfair one?*

Formalizing some of the statements into mathematical notation:

The probability of picking a coin at random, and getting either the biased or fair coin, is the same:

$$p(Biased) = p(Fair) = .5$$

The probability of getting heads with the biased coin is 1:

$$p(Heads | Biased) = 1$$

The probability of getting heads with the fair coin is .5:

$$p(Heads | Fair) = .5$$

Lastly, remember that after picking a coin at *random*, we observed heads. Therefore we can use Bayes' rule to calculate the probability that we picked the biased coin:

$$ p(Biased | Heads) = \frac{p(Heads | Biased) p(Biased)}{p(Heads)} $$

To solve this by hand we need to rewrite the denominator using the *Rule of Total Probability*:

$$ p(Biased | Heads) = \frac{p(Heads | Biased) p(Biased)}{p(Heads|Fair)p(Fair) + p(Heads|Biased)p(Biased)} $$

We can use Python to do the math for us:

```
(1 * .5)/(.5 * .5 + 1 * .5)
```

## Questions 7 & 8
***

*Modify the code that generated Figure 1.5, in order to add a dotted vertical line showing the observed rate of $\frac{\text{Heads}}{\text{Number of tosses}}$.
Compare the location of this line to the mode of the posteriors in each subplot.* *Try re-running this code using other priors (`beta_params`) and other data (`n_trials` and `data`).* ``` plt.figure(figsize=(10, 8)) n_trials = [0, 1, 2, 3, 4, 8, 16, 32, 50, 150] data = [0, 1, 1, 1, 1, 4, 6, 9, 13, 48] theta_real = 0.35 beta_params = [(1, 1), (20, 20), (1, 4)] dist = stats.beta x = np.linspace(0, 1, 200) for idx, N in enumerate(n_trials): if idx == 0: plt.subplot(4, 3, 2) plt.xlabel('θ') else: plt.subplot(4, 3, idx+3) plt.xticks([]) y = data[idx] for (a_prior, b_prior) in beta_params: p_theta_given_y = dist.pdf(x, a_prior + y, b_prior + N - y) plt.fill_between(x, 0, p_theta_given_y, alpha=0.7) # Add Vertical line for Number of Heads divided by Number of Tosses try: unit_rate_per_toss = y/N except ZeroDivisionError: unit_rate_per_toss = 0 plt.axvline(unit_rate_per_toss, ymax=0.3, color='k', linestyle="--") plt.axvline(theta_real, ymax=0.3, color='k') plt.plot(0, 0, label=f'{N:4d} trials\n{y:4d} heads', alpha=0) plt.xlim(0, 1) plt.ylim(0, 12) plt.legend() plt.yticks([]) plt.tight_layout(); ``` ## Question 9 *** *Go to the chapter's notebook and explore different parameters for the Gaussian, binomial and beta plots (figures 1.1, 1.3 and 1.4 from the chapter). Alternatively, you may want to plot a single distribution instead of a grid of distributions.* ## Question 10 *** *Read about [Cromwell's rule](https://en.wikipedia.org/wiki/Cromwell%27s_rule) on Wikipedia.* ## Question 11 *** *Read about [probabilities and the Dutch book](https://en.wikipedia.org/wiki/Dutch_book) on Wikipedia.*
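As a quick numerical check of the Question 6 answer (my own addition), a short Monte Carlo simulation of the two-coin experiment should land close to the analytical value of 2/3:

```python
import random

random.seed(42)

n_heads = 0
n_biased_and_heads = 0
for _ in range(100_000):
    biased = random.random() < 0.5        # pick a coin at random
    # the biased coin always shows heads; the fair coin half the time
    heads = True if biased else random.random() < 0.5
    if heads:
        n_heads += 1
        if biased:
            n_biased_and_heads += 1

print(n_biased_and_heads / n_heads)  # close to 2/3
```

Conditioning on the observed heads is done simply by only counting trials where heads occurred, which is exactly what the denominator $p(Heads)$ does in Bayes' rule.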
github_jupyter
Copyright (c) Microsoft Corporation. All rights reserved.

Licensed under the MIT License.

# Tutorial: Azure Machine Learning Quickstart

In this tutorial, you learn how to quickly get started with Azure Machine Learning. Using a *compute instance* - a fully managed cloud-based VM that is pre-configured with the latest data science tools - you will train an image classification model using the CIFAR10 dataset.

In this tutorial you will learn how to:

* Create a compute instance and attach to a notebook
* Train an image classification model and log metrics
* Deploy the model

## Prerequisites

1. An Azure Machine Learning workspace
1. Familiarity with the Python language and machine learning workflows

## Create compute & attach to notebook

To run this notebook you will need to create an Azure Machine Learning _compute instance_. The benefits of a compute instance over a local machine (e.g. laptop) or cloud VM are as follows:

* It is pre-configured with all the latest data science libraries (e.g. pandas, scikit-learn, TensorFlow, PyTorch) and tools (Jupyter, RStudio). This tutorial makes extensive use of PyTorch, the AzureML SDK and matplotlib, and none of these components need to be installed on a compute instance.
* Notebooks are separate from the compute instance - this means that you can develop your notebook on a small VM size and then seamlessly switch to a larger (and/or GPU-enabled) machine when needed to train a model.
* You can easily turn the instance on/off to control costs.
To create compute, click on the + button at the top of the notebook viewer in Azure Machine Learning Studio:

<img src="https://dsvmamlstorage127a5f726f.blob.core.windows.net/images/ci-create.PNG" width="500"/>

This will pop up the __New compute instance__ blade; provide a valid __Compute name__ (valid characters are upper and lower case letters, digits, and the - character), then click on __Create__. It will take approximately 3 minutes for the compute to be ready. When the compute is ready you will see a green light next to the compute name at the top of the notebook viewer:

<img src="https://dsvmamlstorage127a5f726f.blob.core.windows.net/images/ci-create2.PNG" width="500"/>

You will also notice that the notebook is attached to the __Python 3.6 - AzureML__ Jupyter kernel. Other kernels can be selected, such as R. In addition, if you have other instances you can switch to them using the dropdown menu next to the Compute label.

## Import Data

For this tutorial, you will use the CIFAR10 dataset. It has the classes: airplane, automobile, bird, cat, deer, dog, frog, horse, ship, truck. The images in CIFAR-10 are three-channel color images, 32x32 pixels in size.

The code cell below uses the PyTorch API to download the data to your compute instance, which should be quick (around 15 seconds). The data is divided into training and test sets.

* **NOTE: The data is downloaded to the compute instance (in the `/tmp` directory) and not a durable cloud-based store like Azure Blob Storage or Azure Data Lake. This means if you delete the compute instance the data will be lost.
The [getting started with Azure Machine Learning tutorial series](https://docs.microsoft.com/azure/machine-learning/tutorial-1st-experiment-sdk-setup-local) shows how to create an Azure Machine Learning *dataset*, which aids durability, versioning, and collaboration.**

```
import torch
import torch.optim as optim
import torchvision
import torchvision.transforms as transforms

transform = transforms.Compose(
    [transforms.ToTensor(),
     transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))])

trainset = torchvision.datasets.CIFAR10(root='/tmp/data', train=True,
                                        download=True, transform=transform)
trainloader = torch.utils.data.DataLoader(trainset, batch_size=4,
                                          shuffle=True, num_workers=2)

testset = torchvision.datasets.CIFAR10(root='/tmp/data', train=False,
                                       download=True, transform=transform)
testloader = torch.utils.data.DataLoader(testset, batch_size=4,
                                         shuffle=False, num_workers=2)

classes = ('plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck')
```

## Take a look at the data

The following cell contains some Python code that displays the first batch of 4 CIFAR10 images:

```
import matplotlib.pyplot as plt
import numpy as np

def imshow(img):
    img = img / 2 + 0.5     # unnormalize
    npimg = img.numpy()
    plt.imshow(np.transpose(npimg, (1, 2, 0)))
    plt.show()

# get some random training images
dataiter = iter(trainloader)
images, labels = dataiter.next()

# show images
imshow(torchvision.utils.make_grid(images))
# print labels
print(' '.join('%5s' % classes[labels[j]] for j in range(4)))
```

## Train model and log metrics

In the directory `model` you will see a file called [model.py](./model/model.py) that defines the neural network architecture. The model is trained using the code below.

* **Note: The model training takes around 4 minutes to complete. The benefit of a compute instance is that the notebooks are separate from the compute - therefore you can easily switch to a different size/type of instance.
For example, you could switch to run this training on a GPU-based compute instance if you had one provisioned. In the code below you can see that we have included `torch.device("cuda:0" if torch.cuda.is_available() else "cpu")`, which detects whether you are using a CPU or GPU machine.**

```
from model.model import Net
from azureml.core import Experiment
from azureml.core import Workspace

ws = Workspace.from_config()

device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
device

exp = Experiment(workspace=ws, name="cifar10-experiment")

run = exp.start_logging(snapshot_directory=None)

# define convolutional network
net = Net()
net.to(device)

# set up pytorch loss / optimizer
criterion = torch.nn.CrossEntropyLoss()
optimizer = optim.SGD(net.parameters(), lr=0.001, momentum=0.9)
run.log("learning rate", 0.001)
run.log("momentum", 0.9)

# train the network
for epoch in range(1):
    running_loss = 0.0
    for i, data in enumerate(trainloader, 0):
        # unpack the data
        inputs, labels = data[0].to(device), data[1].to(device)

        # zero the parameter gradients
        optimizer.zero_grad()

        # forward + backward + optimize
        outputs = net(inputs)
        loss = criterion(outputs, labels)
        loss.backward()
        optimizer.step()

        # print statistics
        running_loss += loss.item()
        if i % 2000 == 1999:
            loss = running_loss / 2000
            run.log("loss", loss)
            print(f'epoch={epoch + 1}, batch={i + 1:5}: loss {loss:.2f}')
            running_loss = 0.0

print('Finished Training')
```

Once you have executed the cell above, you can view the metrics updating in real time in the Azure Machine Learning studio:

1. Select **Experiments** (left-hand menu)
1. Select **cifar10-experiment**
1. Select **Run 1**
1.
Select the **Metrics** tab The metrics tab will display the following graph: <img src="https://dsvmamlstorage127a5f726f.blob.core.windows.net/images/metrics-capture.PNG" alt="metrics graph" width="500"/> #### Understand the code The code is based on the [PyTorch 60 Minute Blitz](https://pytorch.org/tutorials/beginner/blitz/cifar10_tutorial.html#sphx-glr-beginner-blitz-cifar10-tutorial-py), to which we have added a few lines of code to track the loss metric as the neural network trains. | Code | Description | | ------------- | ---------- | | `experiment = Experiment( ... )` | [Experiment](https://docs.microsoft.com/python/api/azureml-core/azureml.core.experiment.experiment?view=azure-ml-py&preserve-view=true) provides a simple way to organize multiple runs under a single name. Later you can see how experiments make it easy to compare metrics between dozens of runs. | | `run.log()` | This will log the metrics to Azure Machine Learning. | ## Version control models with the Model Registry You can use model registration to store and version your models in your workspace. Registered models are identified by name and version. Each time you register a model with the same name as an existing one, the registry increments the version. Azure Machine Learning supports any model that can be loaded through Python 3. The code below does the following: 1. Saves the model on the compute instance 1. Uploads the model file to the run (if you look in the experiment on Azure Machine Learning studio you should see on the **Outputs + logs** tab the model has been saved in the run) 1. Registers the uploaded model file 1.
Transitions the run to a completed state ``` from azureml.core import Model PATH = 'cifar_net.pth' torch.save(net.state_dict(), PATH) run.upload_file(name=PATH, path_or_stream=PATH) model = run.register_model(model_name='cifar10-model', model_path=PATH, model_framework=Model.Framework.PYTORCH, description='cifar10 model') run.complete() ``` ### View model in the model registry You can see the stored model by navigating to **Models** in the left-hand menu bar of Azure Machine Learning studio. Click **cifar10-model** to see the model's details, such as the experiment run ID that created it. ## Deploy the model The next cell deploys the model to an Azure Container Instance so that you can score data in real-time (Azure Machine Learning also provides mechanisms to do batch scoring). A real-time endpoint allows application developers to integrate machine learning into their apps. * **Note: The deployment takes around 3 minutes to complete.** ``` from azureml.core import Environment, Model from azureml.core.model import InferenceConfig from azureml.core.webservice import AciWebservice environment = Environment.get(ws, "AzureML-PyTorch-1.6-CPU") model = Model(ws, "cifar10-model") service_name = 'cifar-service' inference_config = InferenceConfig(entry_script='score.py', environment=environment) aci_config = AciWebservice.deploy_configuration(cpu_cores=1, memory_gb=1) service = Model.deploy(workspace=ws, name=service_name, models=[model], inference_config=inference_config, deployment_config=aci_config, overwrite=True) service.wait_for_deployment(show_output=True) ``` ### Understand the code | Code | Description | | ------------- | ---------- | | `environment = Environment.get()` | [Environment](https://docs.microsoft.com/python/api/overview/azure/ml/?view=azure-ml-py#environment) specifies the Python packages, environment variables, and software settings around your training and scoring scripts.
In this case, you are using a *curated environment* that has all the packages to run PyTorch. | | `inference_config = InferenceConfig()` | This specifies the inference (scoring) configuration for the deployment, such as the script to use when scoring (see below) and on what environment. | | `service = Model.deploy()` | Deploy the model. | The [*scoring script*](score.py) file has two functions: 1. an `init` function that executes once when the service starts - in this function you normally get the model from the registry and set global variables 1. a `run(data)` function that executes each time a call is made to the service. In this function, you normally deserialize the JSON, run a prediction, and return the predicted result. ## Test the model service In the next cell, you get some unseen data from the test loader: ``` dataiter = iter(testloader) images, labels = next(dataiter) # print images imshow(torchvision.utils.make_grid(images)) print('GroundTruth: ', ' '.join('%5s' % classes[labels[j]] for j in range(4))) ``` Finally, the next cell scores the above images using the deployed model service. ``` import json input_payload = json.dumps({ 'data': images.tolist() }) output = service.run(input_payload) print(output) ``` ## Clean up resources To clean up the resources after this quickstart, first delete the model service using: ``` service.delete() ``` Next, stop the compute instance by following these steps: 1. Go to **Compute** in the left-hand menu of the Azure Machine Learning studio 1. Select your compute instance 1. Select **Stop** **Important: The resources you created can be used as prerequisites to other Azure Machine Learning tutorials and how-to articles.** If you don't plan to use the resources you created, delete them, so you don't incur any charges: 1. In the Azure portal, select **Resource groups** on the far left. 1. From the list, select the resource group you created. 1. Select **Delete resource group**. 1. Enter the resource group name.
Then select **Delete**. You can also keep the resource group but delete a single workspace. Display the workspace properties and select **Delete**. ## Next Steps In this tutorial, you have seen how to run your machine learning code on a fully managed, pre-configured cloud-based VM called a *compute instance*. Having a compute instance for your development environment removes the burden of installing data science tooling and libraries (for example, Jupyter, PyTorch, TensorFlow, scikit-learn) and allows you to easily scale up/down the compute power (RAM, cores) since the notebooks are separated from the VM. Once you have your machine learning code working in a development environment, you will often want to productionize it by running it as a **_job_** - ideally on a schedule or trigger (for example, arrival of new data). To this end, we recommend that you follow [**the day 1 getting started with Azure Machine Learning tutorial**](https://docs.microsoft.com/azure/machine-learning/tutorial-1st-experiment-sdk-setup-local). This day 1 tutorial is focused on running jobs-based machine learning code in the cloud.
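The `init`/`run` contract described for the scoring script can be sketched in a few lines. The model here is a stand-in (a real `score.py` would load the registered model via the Azure ML SDK inside `init`), so the `model` function and payload shape are illustrative only:

```python
import json

model = None

def init():
    # Runs once when the service starts. A real score.py would load the
    # registered model here; this stand-in keeps the sketch self-contained.
    global model
    model = lambda batch: [sum(row) for row in batch]

def run(raw_data):
    # Runs on every request: deserialize the JSON payload, predict,
    # and return a JSON-serializable result.
    data = json.loads(raw_data)["data"]
    return json.dumps({"result": model(data)})

init()
print(run(json.dumps({"data": [[1, 2], [3, 4]]})))
```

This mirrors the `input_payload = json.dumps({'data': ...})` / `service.run(input_payload)` round trip used in the test cell above.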
``` from IPython.core.debugger import set_trace %run 'activation.ipynb' import numpy as np import pickle %run "mnist.ipynb" import matplotlib as mpl import matplotlib.pyplot as plt from mpl_toolkits.axes_grid1 import Grid ``` ### Define Model ``` class RBM: def __init__(self, n_v, n_h, W=None, b=None, c=None, k=1): assert n_v != 0 and n_h != 0 self.n_v = n_v self.n_h = n_h shape = (n_h, n_v) self.W = W if W is not None else np.random.uniform(-1, 1, size=shape) self.b = b if b is not None else np.zeros(n_v) self.c = c if c is not None else np.zeros(n_h) assert self.W.shape==shape and n_v == len(self.b) and n_h == len(self.c) self.k = k self.training_done = True if W is not None and b is not None and c is not None else False return def forward(self, V): n_sample, n_v = V.shape hsignal = np.dot(V, self.W.T) + self.c assert hsignal.shape == (n_sample, self.n_h) Hp = sigmoid(hsignal) Hs = np.random.binomial(1, Hp, size=Hp.shape) return Hp, Hs def backward(self, H): n_sample, n_h = H.shape vsignal = np.dot(H, self.W) + self.b assert vsignal.shape == (n_sample, self.n_v) Vp = sigmoid(vsignal) s = np.random.uniform(0, 1, size=vsignal.shape) Vs = (s < Vp) * 1 return Vp, Vs def gibbs(self, V): #return (probability, samples) of visible units Vs = V for i in range(self.k): Hp, Hs = self.forward(Vs) Vp, Vs = self.backward(Hs) return Hp, Hs, Vp, Vs def contrastive_divergence(self, V, learning=0.01): n_sample, n_v = V.shape Vs = V Hp, Hs, Vp_, Vs_ = self.gibbs(Vs) # underscore _ refers to tilde for negative sample Hp_, Hs_ = self.forward(Vs_) Vs1 = np.mean(Vs, axis=0) Vs2 = np.mean(Vs_, axis=0) Hp1 = np.mean(Hp, axis=0) Hp2 = np.mean(Hp_, axis=0) Hs1 = np.mean(Hs, axis=0) Hs2 = np.mean(Hs_, axis=0) Eh_b = Vs1; Evh_b = Vs2 # Evh_b refers to the Expectation (over v and h) of -logP(v) gradient wrt b #Eh_c = Hs1; Evh_c = Hp2 # bengio Eh_c = Hp1; Evh_c = Hp2 # hugo #Eh_c = Hp1; Evh_c = Hs2 # Mine g_b = Evh_b - Eh_b # gradient of -logP(v) wrt b g_c = Evh_c - Eh_c Eh_W = np.outer(Eh_c, 
Eh_b) Evh_W = np.outer(Evh_c, Evh_b) g_W = Evh_W - Eh_W self.W -= g_W * learning self.b -= g_b * learning self.c -= g_c * learning return def reconstruct(self, V): Hp, Hs = self.forward(V) Vp, Vs = self.backward(Hp) return Vp, Vs # this is the API for app to use def train(self, X, n_epoch=1, batch_size=10, learning=0.01, save_file=None): if self.training_done: return save_file += ".rbm" self.train_model(X, n_epoch, batch_size, learning, save_file) self.training_done = True return # this is the API for more complex network to use def train_model(self, X, n_epoch=1, batch_size=10, learning=0.01, save_file=None): batch_size = batch_size if batch_size > 0 else 10 n_epoch = n_epoch if n_epoch > 0 else 1 n_sample = X.shape[0] n_batch = n_sample//batch_size for i in range(n_epoch): for j in range(n_batch): s = j*batch_size V = X[s:s+batch_size] self.contrastive_divergence(V, learning) self.save_model(save_file + ".epochs" + str(n_epoch)) return def save_model(self, save_file): dict = {'n_v':self.n_v, 'n_h':self.n_h, 'W':self.W, 'b':self.b, 'c':self.c} save_file += "."+ str(self.n_v) + "x" + str(self.n_h) with open(save_file, 'wb') as f: pickle.dump(dict, f, protocol=pickle.HIGHEST_PROTOCOL) return @classmethod def load_model(clazz, model_file): with open(model_file, 'rb') as f: m = pickle.load(f) rbm = RBM(m['n_v'], m['n_h'], m['W'], m['b'], m['c']) return rbm def show_features(self, shape, suptitle, count=-1): maxw = np.amax(self.W) minw = np.amin(self.W) count = self.n_h if count == -1 or count > self.n_h else count ncols = count if count < 14 else 14 nrows = count//ncols nrows = nrows if nrows > 2 else 3 fig = plt.figure(figsize=(ncols, nrows), dpi=100) grid = Grid(fig, rect=111, nrows_ncols=(nrows, ncols), axes_pad=0.01) for i, ax in enumerate(grid): x = self.W[i] if i<self.n_h else np.zeros(shape) x = (x.reshape(1, -1) - minw)/maxw ax.imshow(x.reshape(*shape), cmap=mpl.cm.Greys) ax.set_axis_off() #fig.suptitle(suptitle, size=20) fig.text(0.5,1, suptitle, fontsize=20, 
horizontalalignment='center') fig.tight_layout() #fig.subplots_adjust(top=0.85 + nrows*0.002) #adjust suptitle position plt.show() return # testing of the RBM code above class MNIST_RBM: def __init__(self, n_v, n_h, load_file=None, save_file="mnist", data_path=".."): if load_file is None: self.rbm = RBM(n_v, n_h) else: self.rbm = RBM.load_model(load_file) self.train_input = MnistInput("train", data_path) self.test_input = MnistInput("test", data_path) def train(self, train_size=-1, n_epoch=100, batch_size=10, learning=0.01): if self.rbm.training_done: return X = [] n_x = 0 for x, y in self.train_input.read(train_size): X.append(x) n_x += 1 X = np.array(X).reshape(n_x, -1) > 30 X = X*1 / 255 self.rbm.train(X, n_epoch, batch_size, learning, save_file="mnist") return def test_reconstruct(self, n): X=[]; i=2*n for x, y in self.test_input.read(n): x *= np.random.binomial(1, i/(n*2), size=(28,28)) x = x * 2*n/i x /= 255 X.append(x) i -=1 recon_X = [] for i in range(n): Vp, Vs = self.rbm.reconstruct(X[i].reshape(1, -1)) recon_X.append(Vp) return np.array(X).reshape(-1, 784), np.array(recon_X).reshape(-1, 784) ``` ### Training ``` mnist_rbm = None if __name__ == "__main__" and '__file__' not in globals(): np.seterr(all='raise') plt.close('all') model_file = "trained_models/mnist_rbm.784x500.epochs100" mnist_rbm = MNIST_RBM(28*28, 500, model_file) mnist_rbm.train(60000, n_epoch=200, batch_size=100) mnist_rbm.rbm.show_features((28,28), "RBM learned features from MNIST ", 56) x_sample, recon_x = mnist_rbm.test_reconstruct(100) ``` ### Image Reconstruction ``` plt.figure(figsize=(4, 6)) for i in range(5): plt.subplot(5, 2, 2*i + 1) plt.imshow(x_sample[i].reshape(28, 28), vmin=0, vmax=1, cmap="gray") plt.title("Input") plt.colorbar() plt.subplot(5, 2, 2*i + 2) plt.imshow(recon_x[i].reshape(28, 28), vmin=0, vmax=1, cmap="gray") plt.title("Reconstruction") plt.colorbar() plt.tight_layout() def gen_mnist_image(X): return np.rollaxis(np.rollaxis(X[0:200].reshape(20, -1, 28, 28), 0, 
2), 1, 3).reshape(-1, 20 * 28) plt.figure(figsize=(10,20)) plt.imshow(gen_mnist_image(recon_x)) ``` #### Compute Reconstruction Error ``` def calculate_recon_error(X, recon_X): """ Compute the reconstruction error. """ rec_loss = - np.sum(X * np.log(1e-8 + recon_X) + (1 - X) * np.log(1e-8 + 1 - recon_X), 1) return np.mean(rec_loss) calculate_recon_error(x_sample, recon_x) ```
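The heart of `contrastive_divergence` can be restated as a standalone CD-1 step. Note that this sketch averages per-sample outer products over the batch (the standard CD estimator), whereas the class above takes outer products of batch means; all sizes and variable names here are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
n_v, n_h, lr = 6, 4, 0.1
W = rng.uniform(-1, 1, size=(n_h, n_v))
b = np.zeros(n_v)  # visible bias
c = np.zeros(n_h)  # hidden bias

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

V = rng.integers(0, 2, size=(8, n_v)).astype(float)  # batch of binary visibles

# Positive phase: hidden activations driven by the data.
Hp = sigmoid(V @ W.T + c)
Hs = rng.binomial(1, Hp)

# Negative phase (k=1): reconstruct visibles, then re-infer hiddens.
Vp = sigmoid(Hs @ W + b)
Vs = rng.binomial(1, Vp)
Hp_neg = sigmoid(Vs @ W.T + c)

# Gradient estimates of -log P(v): negative-phase minus positive-phase
# statistics, matching the sign convention of the updates above.
g_W = (Hp_neg.T @ Vs - Hp.T @ V) / len(V)
g_b = (Vs - V).mean(axis=0)
g_c = (Hp_neg - Hp).mean(axis=0)

W -= lr * g_W
b -= lr * g_b
c -= lr * g_c
```

One such step per mini-batch, repeated over epochs, is exactly what `train_model` drives.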
``` # from dask.distributed import Client, LocalCluster # import logging # cluster = LocalCluster( # n_workers=28, # threads_per_worker=8, # silence_logs=logging.DEBUG # ) # client = Client(cluster, heartbeat_interval=10000) # print(client.dashboard_link) import afqinsight as afqi import joblib import matplotlib.pyplot as plt import numpy as np import os.path as op import pandas as pd import pickle import seaborn as sns from datetime import datetime from sklearn.base import clone from sklearn.model_selection import RepeatedKFold from sklearn.metrics import median_absolute_error, r2_score from sklearn.metrics import explained_variance_score, mean_squared_error from sklearn.linear_model import ElasticNetCV from skopt import BayesSearchCV from skopt.plots import plot_convergence, plot_objective, plot_evaluations print(afqi.__version__) X, y, groups, columns, subjects, classes = afqi.load_afq_data( "../data/raw/age_data", target_cols=["Age"], ) label_sets = afqi.multicol2sets(pd.MultiIndex.from_tuples(columns, names=["metric", "tractID", "nodeID"])) pyafq_bundles = [ c for c in columns if c[1] not in ["Right Cingulum Hippocampus", "Left Cingulum Hippocampus"] ] pyafq_bundles = [ [c] for c in np.unique([col[1] for col in pyafq_bundles]) ] X_pyafq_bundles = afqi.select_groups( X, pyafq_bundles, label_sets ) print(X.shape) print(X_pyafq_bundles.shape) print(len(label_sets)) columns = [ c for c in columns if c[1] not in ["Right Cingulum Hippocampus", "Left Cingulum Hippocampus"] ] label_sets = afqi.multicol2sets(pd.MultiIndex.from_tuples(columns, names=["metric", "tractID", "nodeID"])) X_md_fa = afqi.select_groups( X_pyafq_bundles, [["fa"], ["md"]], label_sets ) print(X.shape) print(X_pyafq_bundles.shape) print(X_md_fa.shape) groups_md_fa = groups[:36] def get_cv_results(n_repeats=5, n_splits=10, power_transformer=False, shuffle=False, ensembler=None, target_transform_func=None, target_transform_inverse_func=None, n_estimators=10, trim_nodes=0, square_features=False): if 
shuffle: rng = np.random.default_rng() y_fit = rng.permutation(y) else: y_fit = np.copy(y) if trim_nodes > 0: grp_mask = np.zeros_like(groups_md_fa[0], dtype=bool) grp_mask[trim_nodes:-trim_nodes] = True X_mask = np.concatenate([grp_mask] * len(groups_md_fa)) groups_trim = [] start_idx = 0 for grp in groups_md_fa: stop_idx = start_idx + len(grp) - 2 * trim_nodes groups_trim.append(np.arange(start_idx, stop_idx)) start_idx += len(grp) - 2 * trim_nodes X_trim = X_md_fa[:, X_mask] elif trim_nodes == 0: groups_trim = [grp for grp in groups_md_fa] X_trim = np.copy(X_md_fa) else: raise ValueError("trim_nodes must be non-negative.") if square_features: _n_samples, _n_features = X_trim.shape X_trim = np.hstack([X_trim, np.square(X_trim)]) groups_trim = [np.concatenate([g, g + _n_features]) for g in groups_trim] cv = RepeatedKFold( n_splits=n_splits, n_repeats=n_repeats, random_state=1729 ) cv_results = {} pipe_skopt = afqi.pipeline.make_base_afq_pipeline( imputer_kwargs={"strategy": "median"}, power_transformer=power_transformer, scaler="standard", estimator=ElasticNetCV, estimator_kwargs={ "verbose": 0, "n_alphas": 50, "l1_ratio": np.linspace(0.01, 1, 10), "cv": 3, "n_jobs": 28, "max_iter": 500, }, verbose=0, ensemble_meta_estimator=ensembler, ensemble_meta_estimator_kwargs={ "n_estimators": n_estimators, "n_jobs": 1, "oob_score": True, "random_state": 1729, }, target_transform_func=target_transform_func, target_transform_inverse_func=target_transform_inverse_func, ) for cv_idx, (train_idx, test_idx) in enumerate(cv.split(X_trim, y_fit)): start = datetime.now() X_train, X_test = X_trim[train_idx], X_trim[test_idx] y_train, y_test = y_fit[train_idx], y_fit[test_idx] pipe_skopt.fit(X_train, y_train) cv_results[cv_idx] = { "pipeline": pipe_skopt, "train_idx": train_idx, "test_idx": test_idx, "y_pred": pipe_skopt.predict(X_test), "y_true": y_test, "test_mae": median_absolute_error(y_test, pipe_skopt.predict(X_test)), "train_mae": median_absolute_error(y_train, 
pipe_skopt.predict(X_train)), "test_r2": r2_score(y_test, pipe_skopt.predict(X_test)), "train_r2": r2_score(y_train, pipe_skopt.predict(X_train)), } if ensembler is None: if ((target_transform_func is not None) or (target_transform_inverse_func is not None)): cv_results[cv_idx]["coefs"] = pipe_skopt.named_steps["estimate"].regressor_.coef_ cv_results[cv_idx]["alpha"] = pipe_skopt.named_steps["estimate"].regressor_.alpha_ cv_results[cv_idx]["l1_ratio"] = pipe_skopt.named_steps["estimate"].regressor_.l1_ratio_ else: cv_results[cv_idx]["coefs"] = pipe_skopt.named_steps["estimate"].coef_ cv_results[cv_idx]["alpha"] = pipe_skopt.named_steps["estimate"].alpha_ cv_results[cv_idx]["l1_ratio"] = pipe_skopt.named_steps["estimate"].l1_ratio_ else: if ((target_transform_func is not None) or (target_transform_inverse_func is not None)): cv_results[cv_idx]["coefs"] = [ est.coef_ for est in pipe_skopt.named_steps["estimate"].regressor_.estimators_ ] cv_results[cv_idx]["alpha"] = [ est.alpha_ for est in pipe_skopt.named_steps["estimate"].regressor_.estimators_ ] cv_results[cv_idx]["l1_ratio"] = [ est.l1_ratio_ for est in pipe_skopt.named_steps["estimate"].regressor_.estimators_ ] else: cv_results[cv_idx]["coefs"] = [ est.coef_ for est in pipe_skopt.named_steps["estimate"].estimators_ ] cv_results[cv_idx]["alpha"] = [ est.alpha_ for est in pipe_skopt.named_steps["estimate"].estimators_ ] cv_results[cv_idx]["l1_ratio"] = [ est.l1_ratio_ for est in pipe_skopt.named_steps["estimate"].estimators_ ] print(f"CV index [{cv_idx:3d}], Elapsed time: ", datetime.now() - start) return cv_results, y_fit results = {} trim_nodes = 0 results[f"bagging_target_transform_pure_lasso_trim{trim_nodes}"] = get_cv_results( n_splits=10, n_repeats=1, power_transformer=False, shuffle=False, target_transform_func=np.log, target_transform_inverse_func=np.exp, trim_nodes=trim_nodes, square_features=False, ) results.keys() for key, res in results.items(): test_mae = [cvr["test_mae"] for cvr in res[0].values()] 
train_mae = [cvr["train_mae"] for cvr in res[0].values()] test_r2 = [cvr["test_r2"] for cvr in res[0].values()] train_r2 = [cvr["train_r2"] for cvr in res[0].values()] print(key, "test MAE", np.mean(test_mae)) print(key, "train MAE", np.mean(train_mae)) print(key, "test R2 ", np.mean(test_r2)) print(key, "train R2 ", np.mean(train_r2)) with open("age_regression_elastic_net.pkl", "wb") as fp: pickle.dump(results, fp) ```
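The `trim_nodes` bookkeeping inside `get_cv_results` — masking off the first and last nodes of every tract profile while rebuilding consistent group index arrays — can be verified in isolation on toy shapes (the sizes below are made up, not the real 36-group AFQ layout):

```python
import numpy as np

n_groups, nodes_per_group, trim = 3, 10, 2
groups = [np.arange(i * nodes_per_group, (i + 1) * nodes_per_group)
          for i in range(n_groups)]
X = np.arange(5 * n_groups * nodes_per_group).reshape(5, -1)

# Boolean mask over one group's nodes, tiled across all groups.
grp_mask = np.zeros(nodes_per_group, dtype=bool)
grp_mask[trim:-trim] = True
X_mask = np.concatenate([grp_mask] * n_groups)

# Rebuild contiguous group index arrays for the trimmed feature matrix.
groups_trim, start = [], 0
for grp in groups:
    stop = start + len(grp) - 2 * trim
    groups_trim.append(np.arange(start, stop))
    start = stop

X_trim = X[:, X_mask]
```

Each trimmed group keeps `nodes_per_group - 2 * trim` features, and the rebuilt indices tile the trimmed matrix with no gaps — the invariant the pipeline relies on.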
``` import geopandas as gpd import pandas as pd import numpy as np from copy import deepcopy chirps_file = "../Data/vietnam/fluvial_defended/FD_1in5.csv" chirps_ori = pd.read_csv(chirps_file) chirps_ori.dropna(inplace=True) chirps_ori.columns = ['Lon', 'Lat', 'flood_level'] chirps_data = deepcopy(chirps_ori) chirps_data.describe() # mask permanent water bodies to reduce calculation time chirps_data = chirps_data.where(chirps_data['flood_level'] < 999) # chirps_data.dropna(inplace=True) # chirps_data.reset_index(drop=True, inplace=True) chirps_data['Lon'] = chirps_data['Lon'].apply(lambda x: round(x, 3)) chirps_data['Lat'] = chirps_data['Lat'].apply(lambda x: round(x, 3)) # rounding the coordinates creates duplicate coordinate pairs with different flood levels; # max is used to emphasize the severity of the flood chance aggregation_functions = {'flood_level': 'max'} chirps_data = chirps_data.groupby(['Lon', 'Lat']).aggregate(aggregation_functions) chirps_data.describe() facs_file = "../Data/stroke-facs.csv" stroke_data = pd.read_csv(facs_file)[['Name_English','longitude','latitude','pro_name_e','dis_name_e']] stroke_data.columns = ['Facility_Name','Lon','Lat','Province','District'] stroke_data['Lon'] = stroke_data['Lon'].apply(lambda x: round(x, 3)) stroke_data['Lat'] = stroke_data['Lat'].apply(lambda x: round(x, 3)) stroke_data[:10] # merge the two datasets to see the flood level for each facility facs_w_flood = stroke_data.merge(chirps_data, how='left', on=['Lon', 'Lat']).fillna(0) # sort in descending order and retrieve the top 10 - the worst quartile facs_w_flood.sort_values('flood_level', ascending=False).reset_index(drop=True).head(10) # Prepare the template df to merge remaining data flood_chance_facs = deepcopy(facs_w_flood) flood_chance_facs.columns = ['Facility_Name','Lon','Lat','Province','District', 'FD_5yrs_level'] # wrap the merge logic into a function for modularization def flood_stroke_facs(file_name, col_name, full_df): file_data =
pd.read_csv(file_name) file_data.dropna(inplace=True) file_data.columns = ['Lon', 'Lat', col_name] file_data = file_data.where(file_data[col_name] < 999) file_data['Lon'] = file_data['Lon'].apply(lambda x: round(x, 3)) file_data['Lat'] = file_data['Lat'].apply(lambda x: round(x, 3)) aggregation_functions = {col_name: 'max'} file_data = file_data.groupby(['Lon', 'Lat']).aggregate(aggregation_functions) new_df = full_df.merge(file_data, how='left', on=['Lon', 'Lat']).fillna(0) return new_df file_dict = {'FD_10yrs_level': "../Data/vietnam/fluvial_defended/FD_1in10.csv", 'FD_20yrs_level': "../Data/vietnam/fluvial_defended/FD_1in20.csv", 'FD_50yrs_level': "../Data/vietnam/fluvial_defended/FD_1in50.csv", 'FD_75yrs_level': "../Data/vietnam/fluvial_defended/FD_1in75.csv", 'FD_100yrs_level': "../Data/vietnam/fluvial_defended/FD_1in100.csv", 'FD_200yrs_level': "../Data/vietnam/fluvial_defended/FD_1in200.csv", 'FD_250yrs_level': "../Data/vietnam/fluvial_defended/FD_1in250.csv", 'FD_500yrs_level': "../Data/vietnam/fluvial_defended/FD_1in500.csv", 'FD_1000yrs_level': "../Data/vietnam/fluvial_defended/FD_1in1000.csv", 'FU_5yrs_level': "../Data/vietnam/fluvial_undefended/FU_1in5.csv", 'FU_10yrs_level': "../Data/vietnam/fluvial_undefended/FU_1in10.csv", 'FU_20yrs_level': "../Data/vietnam/fluvial_undefended/FU_1in20.csv", 'FU_50yrs_level': "../Data/vietnam/fluvial_undefended/FU_1in50.csv", 'FU_75yrs_level': "../Data/vietnam/fluvial_undefended/FU_1in75.csv", 'FU_100yrs_level': "../Data/vietnam/fluvial_undefended/FU_1in100.csv", 'FU_200yrs_level': "../Data/vietnam/fluvial_undefended/FU_1in200.csv", 'FU_250yrs_level': "../Data/vietnam/fluvial_undefended/FU_1in250.csv", 'FU_500yrs_level': "../Data/vietnam/fluvial_undefended/FU_1in500.csv", 'FU_1000yrs_level': "../Data/vietnam/fluvial_undefended/FU_1in1000.csv", } from tqdm import tqdm # process all data files from CHIRPS for flood_case in tqdm(file_dict.keys()): flood_chance_facs = flood_stroke_facs(file_dict[flood_case], flood_case, 
flood_chance_facs) # Facilities in the worst quartile of having flood in the next 5 yrs with no flood defense mechanism flood_chance_facs.sort_values('FU_5yrs_level', ascending=False, inplace=True) flood_chance_facs.reset_index(drop=True).head(5) # list all flood cases all_flood_cases = ['FD_5yrs_level']+ list(file_dict.keys()) # count the number of affected facilities in each group of flood cases: defended vs. undefended fd_affected_count = [] fu_affected_count = [] for flood_case in all_flood_cases: affected_facs = flood_chance_facs[flood_chance_facs[flood_case] != 0].shape[0] if 'FD' in flood_case: fd_affected_count.append(affected_facs) else: fu_affected_count.append(affected_facs) # reform the data for presentation affected_df = pd.DataFrame({'FD': fd_affected_count, 'FU': fu_affected_count}) affected_df.index=['5 years', '10 years', '20 years', '50 years', '75 years', '100 years', '200 years', '250 years', '500 years', '1000 years', ] affected_df.plot.line(title='Number of stroke facilities in future flooded area', figsize=(15, 10), xlabel='Years into the future', ylabel='Number of affected facilities country wide') # extract FB's population data fb_pop_data = pd.read_csv('../Data/population_vnm_2018-10-01(Facebook).csv') fb_pop_data.head(10) # rounding the longitude and latitude for easier calculation fb_pop_data.columns = ['Lat', 'Lon','Pop_2015','Pop_2020'] fb_pop_data['Lon'] = fb_pop_data['Lon'].apply(lambda x: round(x, 3)) fb_pop_data['Lat'] = fb_pop_data['Lat'].apply(lambda x: round(x, 3)) # aggregation to combine rounded results aggregation_functions = {'Pop_2015': 'sum','Pop_2020': 'sum'} agg_fb_pop_data = fb_pop_data.groupby(['Lat', 'Lon']).aggregate(aggregation_functions) agg_fb_pop_data.head(10) # merge two dataset to see the effect of flood onto the population within certain stroke facilities flood_facs_w_pop = flood_chance_facs.merge(agg_fb_pop_data, how='left', on=['Lon', 'Lat']).fillna(0) flood_facs_w_pop[['Facility_Name','Lon','Lat', 
'FD_5yrs_level', 'FU_5yrs_level', 'Pop_2020']] fd_pop_affected_count = [] fu_pop_affected_count = [] # will only extract top 16 - worst quintiles for flood_case in all_flood_cases: worst_quartile_df = flood_facs_w_pop.sort_values(flood_case, ascending=False)[:16] affected_pop = sum(flood_facs_w_pop[flood_facs_w_pop[flood_case] != 0]['Pop_2020'].to_numpy()) if 'FD' in flood_case: fd_pop_affected_count.append(affected_pop) else: fu_pop_affected_count.append(affected_pop) # form data for presentation affected_pop_df = pd.DataFrame({'FD': fd_pop_affected_count, 'FU': fu_pop_affected_count}) affected_pop_df.index=['5 years', '10 years', '20 years', '50 years', '75 years', '100 years', '200 years', '250 years', '500 years', '1000 years', ] affected_pop_df.plot.line(title='Population within stroke facilities in top quintiles of flood chance', figsize=(15, 10), xlabel='Years into the future', ylabel='Number of affected Pop in 2020 (in thousand)') ```
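The round-then-aggregate-then-merge pattern applied to every flood layer above can be exercised on a toy grid; the coordinates and flood levels below are invented purely to show the mechanics, including the `< 999` permanent-water filter:

```python
import pandas as pd

# Invented stand-ins for one flood raster (Lon, Lat, level).
flood = pd.DataFrame({
    "Lon": [1.0004, 0.9996, 2.0001],
    "Lat": [5.0001, 4.9999, 6.0],
    "flood_level": [0.4, 0.9, 1500.0],  # 1500 -> permanent water body
})
# Mask permanent water, round coordinates so nearby cells collapse,
# and keep the worst (max) level among collapsed duplicates.
flood = flood.where(flood["flood_level"] < 999)
flood["Lon"] = flood["Lon"].round(3)
flood["Lat"] = flood["Lat"].round(3)
flood = flood.groupby(["Lon", "Lat"]).aggregate({"flood_level": "max"})

facs = pd.DataFrame({"Facility": ["A", "B"],
                     "Lon": [1.0, 3.0], "Lat": [5.0, 6.0]})
# Left join keeps every facility; unmatched ones get flood level 0.
merged = facs.merge(flood, how="left", on=["Lon", "Lat"]).fillna(0)
print(merged)
```

Facility A picks up the worst level (0.9) of the two grid cells that collapsed onto its rounded coordinates; facility B matches nothing and reads 0.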
## Various plots ``` import numpy as np import matplotlib.pyplot as plt %matplotlib inline ``` ### Regular plot ``` n = 512 X = np.linspace(0, np.pi/2, n, endpoint=True) Y = np.cos(20*X) * np.exp(-X) plt.figure(figsize=(8,4), dpi=80) # Plot upper damped cosine plt.plot(X, Y+2, color='green', alpha=1.00) plt.fill_between(X, 2, Y+2, color='green', alpha=0.10) # Plot lower damped cosine plt.plot(X, Y-1, color='orange', alpha=1.00) plt.fill_between(X, -1, Y-1, (Y-1) > -1, color='#FFAA00', alpha=0.15) plt.fill_between(X, -1, Y-1, (Y-1) < -1, color='#AAAA00', alpha=0.15) plt.plot([0, np.pi/2], [0.5, 0.5], 'b--') # Set x, y limits plt.xlim(0, np.pi/2) plt.ylim(-2.5, 3.5) # Do not plot grids plt.xticks([]) plt.yticks([]) # plt.grid(False) ``` ### Scatter plot ``` # Create 2D random signal n = 512 np.random.seed(1) X = np.random.randn(n) Y = np.random.randn(n) T = np.arctan2(Y, X) plt.figure(figsize=(6,6), dpi=80) plt.scatter(X, Y, s=80, c=T, alpha=.60) plt.xlim(-1.5,1.5), plt.xticks([]) plt.ylim(-1.5,1.5), plt.yticks([]) T.mean() ``` ### Bar plot ``` n = 10 # Create two random vectors X = np.arange(n) np.random.seed(10) Y1 = (1-X/float(n)) * np.random.uniform(0.5, 1.2, n) Y2 = (1-X/float(n)) * np.random.uniform(0.5, 1.2, n) # Plot two bars plt.figure(figsize=(8,4), dpi=80) plt.bar(X, +Y1, facecolor='#CCCCFF', edgecolor='red') plt.bar(X, -Y2, facecolor='#FFCCCC', edgecolor='blue') # Plot text into bars for x,y in zip(X, Y1): plt.text(x+0.2, +y+0.07, '%.2f' % y, ha='center', va='bottom') for x,y in zip(X, Y2): plt.text(x+0.2, -y-0.07, '%.2f' % y, ha='center', va='top') plt.xlim(-1, n) plt.ylim(-1.35, +1.35) plt.grid() ``` ### Contour Plots ``` # Create function def f(x, y): # return (x+x**4-y**5+y**2) * np.exp(-(0.95*x**2+0.65*y**2)) return (0.5-x+x**5+y**3-y) * np.exp(-(0.85*x**2+0.75*y**2)) # Create vectors and mesh n = 200 x = np.linspace(-3, 3, n) y = np.linspace(-3, 3, n) X,Y = np.meshgrid(x,y) # Plot plt.figure(figsize=(6,6), dpi=80) plt.contourf(X, Y, f(X,Y), 9,
alpha=.75, cmap=plt.cm.hot) C = plt.contour(X, Y, f(X,Y), 9, colors='black') plt.clabel(C, inline=1, fontsize=8) plt.grid(False) ``` ### Imshow ``` # Create function def f(x, y): return (x+x**4-y**5+y**2) * np.exp(-(0.95*x**2+0.65*y**2)) # Create vectors and mesh n = 200 x = np.linspace(-3, 3, n) y = np.linspace(-3, 3, n) X,Y = np.meshgrid(x,y) Z = f(X,Y) # Plot plt.figure(figsize=(6,6), dpi=80) plt.imshow(Z, interpolation='bicubic', cmap='bone', origin='lower') plt.colorbar(shrink=.70) plt.grid(False) ```
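The contour and `imshow` cells share one pattern: build a coordinate mesh with `np.meshgrid` and evaluate the function vectorized over it. The shape conventions can be checked without plotting anything:

```python
import numpy as np

def f(x, y):
    return (x + x**4 - y**5 + y**2) * np.exp(-(0.95 * x**2 + 0.65 * y**2))

n = 200
x = np.linspace(-3, 3, n)
y = np.linspace(-3, 3, n)
X, Y = np.meshgrid(x, y)  # default 'xy' indexing: X varies along columns
Z = f(X, Y)
```

Note the row/column order: `Z[i, j]` is `f(x[j], y[i])`, which is why `imshow` needs `origin='lower'` to place `y[0]` at the bottom of the image.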
``` import os from datetime import datetime, timedelta import ipywidgets as widgets import plotly.graph_objs as go import yfinance as yf import pandas as pd from IPython.display import display interval_opts = [ "1m", "2m", "5m", "15m", "30m", "60m", "90m", "1h", "1d", "5d", "1wk", "1mo", "3mo", ] rows = [ "sector", "marketCap", "beta", "fiftyTwoWeekHigh", "fiftyTwoWeekLow", "floatShares", "sharesShort", "exDividendDate", ] views = { "Raw Data": lambda x, y: x, "Percent Change": lambda x, y: x.pct_change(), "Rolling Average": lambda x, y: x.rolling(y).mean(), "Rolling Variance": lambda x, y: x.rolling(y).var(), "Rolling Standard Deviation": lambda x, y: x.rolling(y).var() ** 0.5, "Rolling Coefficient of Variation": lambda x, y: (x.rolling(y).var() ** 0.5) / (x.rolling(y).mean()), } clean_row = { "sector": "Sector", "marketCap": "M Cap", "beta": "Beta", "fiftyTwoWeekHigh": "52W High", "fiftyTwoWeekLow": "52W Low", "floatShares": "Floats", "sharesShort": "Shorts", "exDividendDate": "Ex-Div", } clean_data = { "sector": lambda x: "N/A" if x is None else x, "marketCap": lambda x: "N/A" if x is None else big_num(x), "beta": lambda x: "N/A" if x is None else f"{round(x,2)}", "fiftyTwoWeekHigh": lambda x: "N/A" if x is None else f"${round(x,2)}", "fiftyTwoWeekLow": lambda x: "N/A" if x is None else f"${round(x,2)}", "floatShares": lambda x: "N/A" if x is None else big_num(x), "sharesShort": lambda x: "N/A" if x is None else big_num(x), "exDividendDate": lambda x: "N/A" if x is None else datetime.fromtimestamp(x).strftime("%Y/%m/%d"), } def big_num(num): if num > 1_000_000_000_000: return f"{round(num/1_000_000_000_000,2)}T" if num > 1_000_000_000: return f"{round(num/1_000_000_000,2)}B" if num > 1_000_000: return f"{round(num/1_000_000,2)}M" if num > 1_000: return f"{num/round(1_000,2)}K" return f"{round(num,2)}" def clean_str(string): new_str = "" for letter in string: if letter.isupper(): new_str += " " new_str += letter return new_str.title() def format_plotly(fig, data, 
start, end, chart, calc=None): fig.update_yaxes(title=None) fig.update_xaxes(title=None) start_t = start.strftime("%Y/%m/%d") end_t = end.strftime("%Y/%m/%d") if calc: if len(calc) == 1: fig_title = f"{calc[0]} of {data} from {start_t} to {end_t}" else: fig_title = f"{', '.join(calc)} of {data} from {start_t} to {end_t}" else: fig_title = "Volume" height = 500 if chart == "main" else 300 fig.update_layout( margin=dict(l=0, r=10, t=10, b=10), autosize=False, width=900, height=height, legend=dict(orientation="h"), title={ "text": fig_title, "y": 0.95, "x": 0.5, "xanchor": "center", "yanchor": "top", }, ) def create_line(visual, x, y, name, data, fig): if visual == "line": plot = go.Scatter(x=x, y=y[data], mode="lines", name=name, connectgaps=True) if visual == "scatter": plot = go.Scatter(x=x, y=y[data], mode="markers", name=name) if visual == "candle": plot = go.Candlestick( x=x, open=y["Open"], close=y["Close"], high=y["High"], low=y["Low"], name=name, ) fig.add_trace(plot) def show_fig(fig): config = {"showTips": False, "scrollZoom": True} if os.environ.get("SERVER_SOFTWARE", "jupyter").startswith("voila"): fig.show(config=config, renderer="notebook") else: fig.show(config=config) def table_data(infos): cols = ["Ticker"] + list(infos) data = pd.DataFrame(columns=cols) data["Ticker"] = [clean_row[x] for x in rows] for ticker in list(infos): data[ticker] = [clean_data[x](infos[ticker].get(x, None)) for x in rows] new_cols = {k: clean_str(k) for k in rows} return data class Chart: def __init__(self): self.last_tickers = "" self.last_interval = "1d" self.df = pd.DataFrame() self.infos = {} def create_stock( self, calculation, data, rolling, start, end, interval, tickers, chart ): if tickers and tickers[-1] == ",": if tickers != self.last_tickers or interval != self.last_interval: if interval in ["1d", "5d", "1wk", "1mo", "3mo"]: self.df = yf.download( tickers, period="max", interval=interval, progress=False ) else: end_date = end + timedelta(days=1) self.df = 
yf.download( tickers, start=start, end=end_date, interval=interval, progress=False, ) self.last_tickers = tickers self.last_interval = interval start_n = datetime(start.year, start.month, start.day) end_n = datetime(end.year, end.month, end.day) fig = go.Figure() for item in calculation: calcs = views[item](self.df, rolling) if interval in ["1d", "5d", "1wk", "1mo", "3mo"]: result = calcs.loc[ (calcs.index >= start_n) & (calcs.index <= end_n) ] else: result = calcs if len(result.columns) == 6: name = f"{tickers.split(',')[0]} {item}" create_line(chart, result.index, result, name, data, fig) else: for val in result.columns.levels[1]: vals = result.xs(val, axis=1, level=1, drop_level=True) name = f"{val.upper()} {item}" create_line(chart, result.index, vals, name, data, fig) format_plotly(fig, data, start, end, "main", calculation) show_fig(fig) def create_volume(self, start, end, interval, tickers): start_n = datetime(start.year, start.month, start.day) end_n = datetime(end.year, end.month, end.day) result = self.df.loc[(self.df.index >= start_n) & (self.df.index <= end_n)] fig = go.Figure() if len(result.columns) == 6: name = f"{tickers.split(',')[0]}" create_line("line", result.index, result, name, "Volume", fig) else: for val in result.columns.levels[1]: vals = result.xs(val, axis=1, level=1, drop_level=True) name = f"{val.upper()}" create_line("line", result.index, vals, name, "Volume", fig) format_plotly(fig, "Volume", start, end, "volume") show_fig(fig) def create_table(self, tickers): if tickers and tickers[-1] == ",": clean_tickers = [x for x in tickers.split(",") if x] for ticker in clean_tickers: if ticker not in self.infos: self.infos[ticker] = yf.Ticker(ticker).info for ticker in self.infos: if ticker not in tickers: self.infos.pop(ticker) result = table_data(self.infos) fig = go.Figure( data=[ go.Table( header=dict( values=result.columns, fill_color="lightgray", font=dict(color="black"), align="left", ), cells=dict( values=[result[x] for x in 
result.columns], font=dict(color="black"), align="left", ), ) ], ) fig.update_layout(margin=dict(l=0, r=20, t=0, b=0), width=350) show_fig(fig) w_auto = widgets.Layout(width="auto") calc_widget = widgets.SelectMultiple( options=list(views.keys()), value=["Raw Data"], layout=w_auto ) data_opts = ["Open", "Close", "High", "Low"] data_widget = widgets.Dropdown( options=data_opts, value="Close", layout=w_auto, description="Data" ) rolling_widget = widgets.Dropdown( options=list(range(2, 101)), value=60, layout=w_auto, description="Rolling" ) base_date = (datetime.today() - timedelta(days=365)).date() start_widget = widgets.DatePicker(value=base_date, layout=w_auto, description="Start") end_widget = widgets.DatePicker( value=datetime.today().date(), layout=w_auto, description="End" ) interval_widget = widgets.Dropdown( options=interval_opts, value="1d", layout=w_auto, description="Interval" ) tickers_widget = widgets.Textarea( value="TSLA,", layout=widgets.Layout(width="auto", height="100%") ) chart_opts = ["line", "scatter", "candle"] chart_widget = widgets.Dropdown( options=chart_opts, value="line", layout=w_auto, description="Chart" ) data_box = widgets.VBox([data_widget, rolling_widget, chart_widget]) date_box = widgets.VBox([start_widget, end_widget, interval_widget]) controls = widgets.HBox( [tickers_widget, calc_widget, date_box, data_box], layout=widgets.Layout(width="90%"), ) chart = Chart() stocks_view = widgets.interactive_output( chart.create_stock, { "calculation": calc_widget, "data": data_widget, "rolling": rolling_widget, "start": start_widget, "end": end_widget, "interval": interval_widget, "tickers": tickers_widget, "chart": chart_widget, }, ) volume_view = widgets.interactive_output( chart.create_volume, { "start": start_widget, "end": end_widget, "interval": interval_widget, "tickers": tickers_widget, }, ) table_view = widgets.interactive_output(chart.create_table, {"tickers": tickers_widget}) charts = widgets.VBox( [stocks_view, volume_view], 
    layout=widgets.Layout(width="100%", padding="0", margin="0"),
)
figures = widgets.HBox(
    [charts, table_view], layout=widgets.Layout(padding="0", margin="0")
)
title_html = "<h1>Stock Analysis Dashboard</h1>"
warning_html = '<p style="color:red">Use a comma after EVERY stock typed.</p>'
app_contents = [widgets.HTML(title_html), controls, widgets.HTML(warning_html), figures]
app = widgets.VBox(app_contents)
display(app)
```
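The `views` dictionary above maps the menu labels to rolling-window transformations. As a standalone illustration, here is a minimal sketch of those same transformations applied to a small hypothetical price series (the values and window size are made up for demonstration):

```python
import pandas as pd

# Hypothetical closing prices standing in for a yfinance download
prices = pd.Series([10.0, 11.0, 12.0, 11.5, 12.5, 13.0], name="Close")
window = 3

views = {
    "Raw Data": lambda x, y: x,
    "Percent Change": lambda x, y: x.pct_change(),
    "Rolling Average": lambda x, y: x.rolling(y).mean(),
    "Rolling Standard Deviation": lambda x, y: x.rolling(y).var() ** 0.5,
    "Rolling Coefficient of Variation": lambda x, y: (x.rolling(y).var() ** 0.5)
    / (x.rolling(y).mean()),
}

# The last rolling average covers the final three prices: (11.5 + 12.5 + 13.0) / 3
last_avg = views["Rolling Average"](prices, window).iloc[-1]
# Percent change of the final step: 13.0 / 12.5 - 1 = 0.04
last_pct = views["Percent Change"](prices, window).iloc[-1]
```

Each lambda takes the series and the window size, so the dashboard can dispatch on the selected label without any special-casing.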
# Introduction to XArray
> This tutorial introduces XArray, a Python library for working with labeled multidimensional arrays.

- toc: false
- badges: true
- comments: true
- categories: [numpy]

#### DEA uses XArray as its data model. To better understand what it is, let's first do a simple experiment on how we could pack remote sensing data using a combination of plain numpy arrays and Python dictionaries.

#### Suppose we have a satellite image with three bands: Red, NIR and SWIR. These bands are represented as 2-dimensional numpy arrays. We could also store the latitude and longitude coordinates for each dimension using 1-dimensional arrays. Finally, we could also store some metadata to help describe our images.

```
%matplotlib inline

import numpy as np
import matplotlib.pyplot as plt
from check_answer import check_answer

red = np.random.rand(250,250)
nir = np.random.rand(250,250)
swir = np.random.rand(250,250)

lats = np.linspace(-23.5, -26.0, num=red.shape[0], endpoint=False)
lons = np.linspace(110.0, 112.5, num=red.shape[1], endpoint=False)

title = "Image of the desert"
date = "2019-11-10"

image = {"red": red,
         "nir": nir,
         "swir": swir,
         "latitude": lats,
         "longitude": lons,
         "title": title,
         "date": date}
```

#### All our data is conveniently packed in a dictionary. Now we can use this dictionary to work with it:

```
image["date"], image["latitude"][:4]
```

#### We can address any variable inside this image dictionary and work directly with other functions. For example, to plot the nir band and calculate its mean:

```
plt.imshow(image['nir'])

image["nir"].mean()
```

#### Still, the variables inside our dictionary are independent and we don't know how they are linked. For example, we have the variable `latitude` but we don't know which axis of the image arrays it refers to. We also need to use positional indices to select parts of the data in the numpy arrays containing the image data.
Wouldn't it be convenient to be able to select data from the images using the coordinates of the pixels instead of their relative positions?

#### This is exactly what XArray solves! Let's see how it works:

```
import xarray as xr
from datetime import datetime
```

#### To explore XArray we have a file containing some reflectance data of Canberra that has been generated using the DEA library.

#### The object that we get back, `ds`, is an XArray `Dataset`, which in some ways is very similar to the dictionary that we created before, but with lots of convenient functionality available.

```
ds = xr.open_dataset('data/canberra_ls8.nc')
ds
```

#### A `Dataset` can be seen as a dictionary structure packing up the data, dimensions and attributes all linked together.

#### Variables in a `Dataset` object are called `DataArrays` and they share dimensions with the higher level `Dataset`.

#### So far, we have been using 3-dimensional numpy arrays in which the third dimension represented the bands of images and remote sensing data. Numpy can store data in up to 32 dimensions, so we could for example use 4-dimensional arrays to store multispectral images with a temporal dimension, to perform time series analysis.

#### To facilitate working with these data, DEA follows the convention of storing spectral bands as separate variables, each one a 3-dimensional cube containing the temporal dimension.

#### We can access a variable as if the `Dataset` were a Python dictionary, or using the `.` notation, which is more convenient.

```
ds["green"]

#or alternatively

ds.green
```

#### Dimensions are also stored as numerical arrays, each with the same size as the image axis it refers to.

```
ds['time']

#or alternatively

ds.time
```

#### Metadata is referred to as Attributes and is internally stored under `.attrs`, but the same convenient `.` notation applies to them.
```
ds.attrs['Conventions']

#or alternatively

ds.Conventions
```

#### Exercise 5.1: Can you access the `geospatial_bounds_crs` value in the attributes of this XArray Dataset?

```
answ = ?

check_answer("5.1", answ)
```

#### DataArrays store their data internally as multidimensional numpy arrays. But these arrays carry dimensions and labels that make it easier to handle the data. To access the underlying numpy array of a `DataArray` we can use the `.values` notation.

```
arr = ds.green.values

type(arr), arr.shape
```

#### Exercise 5.2: Can you store in the `answ` variable the underlying numpy array containing the longitude dimension in this Dataset?

```
answ = ?

check_answer("5.2", int(answ[0]*1e6))
```

#### Selecting data and subsetting numpy arrays is done using positional indices to specify positions or ranges of values along the different axes of an array. When we use the `[:,:]` notation, we need to know beforehand what the relative position of each axis in our arrays is.

#### XArray provides an abstraction in which we can refer to each axis by its name. We can also select subsets of the data arrays using two modes or methods:

* `isel()`: For selecting data based on its index (like numpy).
* `sel()`: For selecting data based on its dimension or label value.

#### For example, to select the first element in the temporal dimension of the `green` variable we do:

```
print("Initial time dimension values:", ds.green.time.values)

ss = ds.green.isel(time=0)
ss
```

#### On the other hand, we can use the `.sel()` method to select parts of the array by their label or content. See that in this case we do not refer to the data by its positional index but by its dimensional value.

```
ss = ds.green.sel(time=datetime(2016,1,1))
ss
```

#### Both methods `sel()` and `isel()` can receive as many arguments as the data array has dimensions. We can pass the dimensions in any order, and we can also define slices or ranges of values using the `slice()` notation.
For example:

```
ss = ds.green.sel(time=datetime(2016,1,1), latitude=slice(-35.30,-35.24))
ss
```

#### Exercise 5.3: Can you select the region of the red variable delimited by these coordinates:

* latitude [-35.30,-35.29]
* longitude [149.11,149.13]

```
answ = ds.?

check_answer("5.3", answ.shape)
```

#### When we use the selection methods on Datasets and DataArrays we get an object of the same type.

```
ss = ds.green.sel(time=datetime(2016,1,1), latitude=slice(-35.30,-35.24))

type(ss), type(ds.green)
```

#### Exercise 5.4: Use the `imshow` function to create an image of the first time of the red channel in the dataset.

> Tip: Use the `.values` method to convert the DataArray object into a numpy array, so matplotlib can work with it.

```
answ = ?

plt.imshow(answ)

check_answer("5.4", int(answ[0,0])),
```

#### XArray exposes lots of functions to perform analysis on `Datasets` and `DataArrays` with a syntax similar to numpy's. For example, to calculate some spatial statistics of the green band:

```
print("Mean of green band:", ds.green.mean())
print("Standard deviation of green band:", ds.green.std())
print("Sum of green band:", ds.green.sum())
```

#### Exercise 5.5: Can you find the difference between the means of the red and nir channels?

```
answ = ?

check_answer("5.5", int(answ.values))
```

#### Plotting is also conveniently integrated as a method on DataArrays.

> Note: For plotting you need to pass a 2-dimensional DataArray object, so normally a temporal element needs to be selected.

```
ds["green"].isel(time=0).plot()
```

#### We can still do things manually using numpy and matplotlib:

```
rgb = np.dstack((ds.red.isel(time=0).values,
                 ds.green.isel(time=0).values,
                 ds.blue.isel(time=0).values))
rgb = np.clip(rgb, 0, 2000) / 2000

plt.imshow(rgb)
```

#### The previous image is upside down, so we'd still need to flip the image vertically in numpy to represent it correctly. This has to do with how numerical arrays are stored in netCDF files.
#### But compare that to these chained operations within XArray (we'll see simpler ways of doing this in DEA, though):

```
# Selection of the bands | time sel | numpy conv| plot (params for plotting function)
ds[['red', 'green', 'blue']].isel(time=0).to_array().plot.imshow(robust=True, figsize=(6, 6))
```

#### Exercise 5.6: Similarly to the previous image, create an RGB image using the `.sel()` functionality to select the subset defined by the following dimension values:

* time -> 2017-01-01
* latitude -> [-35.29, -35.27]
* longitude -> [149.1, 149.13]

```
answ = ?

answ.to_array().plot.imshow(robust=True, figsize=(6, 6))

check_answer("5.6", answ.to_array().values.shape)
```

#### Exercise 5.7: Can you create an NDVI representation of the whole extent in `ds`?

```
answ = ?

answ.isel(time=0).plot(figsize=(6, 6), cmap='summer_r')

check_answer("5.7", int(answ.values[0,100,100]*1000))
```
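To tie this back to the numpy-and-dictionary experiment at the start of this notebook, the following sketch shows roughly what label-based selection does behind the scenes: coordinate values are translated into positional indices before the underlying array is indexed. The arrays and coordinate values here are hypothetical, and XArray's real `sel()` supports much more (slices, tolerance-based lookups, etc.):

```python
import numpy as np

# Hypothetical band with labeled coordinates, mirroring the opening experiment
lats = np.linspace(-23.5, -26.0, num=5, endpoint=False)  # step of -0.5
lons = np.linspace(110.0, 112.5, num=5, endpoint=False)  # step of +0.5
red = np.arange(25.0).reshape(5, 5)

def sel(data, lats, lons, lat, lon):
    # Translate coordinate labels into positional indices (nearest neighbour)
    i = int(np.argmin(np.abs(lats - lat)))
    j = int(np.argmin(np.abs(lons - lon)))
    return data[i, j]

# Select by coordinate value instead of positional index
value = sel(red, lats, lons, lat=-24.0, lon=110.5)  # same as red[1, 1]
```

XArray keeps this coordinate-to-index bookkeeping for every dimension of every variable, which is exactly what the plain dictionary of arrays could not do.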
## Crop Analysis for English, Arabic, and Paired English+Arabic Memes

```
import logging
import shlex
import subprocess
import sys
import io

import pandas as pd
from collections import namedtuple
from pathlib import Path

import matplotlib.image as mpimg
import matplotlib.pyplot as plt
import numpy as np
from matplotlib.collections import PatchCollection
from matplotlib.patches import Rectangle

logging.basicConfig(level=logging.ERROR)

import platform

BIN_MAPS = {"Darwin": "mac", "Linux": "linux"}

HOME_DIR = Path("../").expanduser()

try:
    import google.colab

    ! pip install pandas scikit-learn scikit-image statsmodels requests dash
    ! [[ -d image-crop-analysis ]] || git clone https://github.com/twitter-research/image-crop-analysis.git
    HOME_DIR = Path("./image-crop-analysis").expanduser()
    IN_COLAB = True
except:
    IN_COLAB = False

sys.path.append(str(HOME_DIR / "src"))

bin_dir = HOME_DIR / Path("./bin")
bin_path = bin_dir / BIN_MAPS[platform.system()] / "candidate_crops"
model_path = bin_dir / "fastgaze.vxm"
data_dir = HOME_DIR / Path("./data/")
data_dir_plot_en = HOME_DIR / Path("./data_plot/En")
data_dir_plot_ar = HOME_DIR / Path("./data_plot/Ar")
data_dir_plot_bi = HOME_DIR / Path("./data_plot/En_Ar")
data_dir_plot_enar = HOME_DIR / Path("./data_plot/En_Ar_Paired")
data_dir_plot_aren = HOME_DIR / Path("./data_plot/Ar_En_Paired")

data_dir.exists()
data_dir_plot_en.exists()
data_dir_plot_ar.exists()
data_dir_plot_bi.exists()
data_dir_plot_enar.exists()

from PIL import Image

from image_manipulation import join_images
from crop_api import ImageSaliencyModel, is_symmetric, parse_output, reservoir_sampling

model = ImageSaliencyModel(crop_binary_path=bin_path, crop_model_path=model_path)

plt.matplotlib.__version__
```

### Function for the Salient Point

```
# Getting the salient point's info by defining a function
def get_salient_info(img_path):
    if isinstance(img_path, str):
        img_path = Path(img_path)
    try:
        cmd = f"{str(bin_path)} {str(model_path)} '{img_path.absolute()}' show_all_points"
        output = subprocess.check_output(cmd, shell=True)  # Success!
        return parse_output(output)
    except:
        print("Running the model to get the salient point failed. Returning None.")
        return None
```

### Experiment 1.1: Analyzing English Memes Separately

```
for i in range(1,41):
    img_path = data_dir / Path("./"+str(i)+"en.jpeg")
    name_en = str(i)+"en.jpeg"
    model.plot_img_crops(img_path, topK=1)
    # Saving plots in the ./data_plot/En folder
    plt.savefig(data_dir_plot_en/name_en, bbox_inches="tight")
```

### Experiment 1.2: Analyzing Arabic Memes Separately

```
for i in range(1,41):
    img_path = data_dir / Path("./"+str(i)+"ar.jpeg")
    name_ar = str(i)+"ar.jpeg"
    model.plot_img_crops(img_path, topK=1)
    # Saving plots in the ./data_plot/Ar folder
    plt.savefig(data_dir_plot_ar/name_ar, bbox_inches="tight")
```

### Experiment 1.3: Analyzing English and Arabic in one image

```
for i in range(1,41):
    img_path = data_dir / Path("./"+str(i)+"bi.jpeg")
    name_bi = str(i)+"bi.jpeg"
    model.plot_img_crops(img_path, topK=1)
    # Saving plots in the ./data_plot/En_Ar folder
    plt.savefig(data_dir_plot_bi/name_bi, bbox_inches="tight")
```

### Experiment 2.1: Analyzing paired Images, English and Arabic together

```
'''This piece of code:
1) sets a counter for every time that the algorithm picks the English region or the Arabic region
2) joins the English and Arabic images horizontally, English on the left and Arabic on the right, and names the result "*enar.jpeg"
3) Run the saliency algorithm to see if the most salient point is in the English region or the Arabic region.
if the most salient point's "x" value is between 0 and (image's width)/2, then the most salient point is in the English region; otherwise it is in the Arabic region
4) increases the counter for selecting the English region or the Arabic region and creates a csv file'''

# Counters track the number of times the English or Arabic meme is selected in a joined image
counter_en = 0
counter_ar = 0
region = str("")
rows = []

for i in range(1,41):
    # Attaching English and Arabic images horizontally: English on the left and Arabic on the right
    images = [
        Image.open(data_dir / Path("./"+str(i)+"en.jpeg")),
        Image.open(data_dir / Path("./"+str(i)+"ar.jpeg")),
    ]
    img = join_images(images, col_wrap=2, img_size=(500, 500), padding=1)
    name_enar = str(i)+"enar.jpeg"
    img.save(data_dir/name_enar, "JPEG")

    # Finding if the salient point is in the English region or the Arabic region
    img_path = data_dir / Path("./"+str(i)+"enar.jpeg")
    model.plot_img_crops(img_path, topK=1)
    # Saving plots in the ./data_plot folder
    plt.savefig(data_dir_plot_enar/name_enar, bbox_inches="tight")
    plot_path = data_dir_plot_enar/name_enar
    salient_info = get_salient_info(img_path)
    all_salient_points = salient_info["salient_point"]
    print("salient info for "+name_enar+" is ", all_salient_points[0])
    if all_salient_points[0][0] <= img.width/2:
        print("For image "+name_enar, "English Meme is Selected, because the salient point's X value is less than half of the image's width= "+str(img.width)+" and it's in the English side")
        region = str("English")
        counter_en += 1
        rows.append([name_enar, str(all_salient_points[0]), region, plot_path])
    else:
        print("For image "+name_enar, "Arabic Meme is Selected, because the salient point's X value is bigger than half of the image's width= "+str(img.width)+" and it's in the Arabic side")
        region = str("Arabic")
        counter_ar += 1
        rows.append([name_enar, str(all_salient_points[0]), str(region), plot_path])

# Making a CSV file to look at the images and the selected region for the most salient point
print(counter_en)
print(counter_ar)

# Saving the result as a csv file and also a bar chart
df = pd.DataFrame(rows, columns=["Name", "Salient Point", "Selected Region", "Plot Directory"])
df.to_csv(data_dir_plot_enar/'enar_region.csv', index=False)

selected_region = ["English", "Arabic"]
number_selected = [counter_en, counter_ar]
plt.bar(selected_region, number_selected)
plt.title("Number of times that English or Arabic regions are selected")
plt.savefig(data_dir_plot_enar/"enar_bar_plot.jpeg")
plt.show()

total = (counter_en*100)/(counter_en+counter_ar)
print(str(total)+"% of the most salient points of total paired memes are in the English region")
```

### Experiment 2.2: Analyzing paired Images, English and Arabic together (Arabic: Left-side, English: Right-side)

```
'''This piece of code:
1) sets a counter for every time that the algorithm picks the English region or the Arabic region
2) joins the Arabic and English images horizontally, Arabic on the left and English on the right, and names the result "*aren.jpeg"
3) Run the saliency algorithm to see if the most salient point is in the English region or the Arabic region.
if the most salient point's "x" value is between 0 and (image's width)/2, then the most salient point is in the Arabic region; otherwise it is in the English region
4) increases the counter for selecting the English region or the Arabic region and creates a csv file'''

counter_en = 0
counter_ar = 0
region = str("")
rows = []

for i in range(1,41):
    # Attaching Arabic and English images horizontally: Arabic on the left and English on the right
    images = [
        Image.open(data_dir / Path("./"+str(i)+"ar.jpeg")),
        Image.open(data_dir / Path("./"+str(i)+"en.jpeg")),
    ]
    img = join_images(images, col_wrap=2, img_size=(500, 500), padding=1)
    name_aren = str(i)+"aren.jpeg"
    img.save(data_dir/name_aren, "JPEG")

    # Finding if the salient point is in the English region or the Arabic region
    img_path = data_dir / Path("./"+str(i)+"aren.jpeg")
    model.plot_img_crops(img_path, topK=1)
    # Saving plots in the ./data_plot folder
    plt.savefig(data_dir_plot_aren/name_aren, bbox_inches="tight")
    plot_path = data_dir_plot_aren/name_aren
    salient_info = get_salient_info(img_path)
    all_salient_points = salient_info["salient_point"]
    print("salient info for "+name_aren+" is ", all_salient_points[0])
    if all_salient_points[0][0] <= img.width/2:
        print("For image "+name_aren, "Arabic Meme is Selected, because the salient point's X value is less than half of the image's width= "+str(img.width)+" and it's in the Arabic side")
        region = str("Arabic")
        counter_ar += 1
        rows.append([name_aren, str(all_salient_points[0]), region, plot_path])
    else:
        print("For image "+name_aren, "English Meme is Selected, because the salient point's X value is bigger than half of the image's width= "+str(img.width)+" and it's in the English side")
        region = str("English")
        counter_en += 1
        rows.append([name_aren, str(all_salient_points[0]), str(region), plot_path])

# Making a CSV file to look at the images and the selected region for the most salient point
print(counter_en)
print(counter_ar)

# Saving the result as a csv file and also a bar chart
df = pd.DataFrame(rows, columns=["Name", "Salient Point", "Selected Region", "Plot Directory"])
df.to_csv(data_dir_plot_aren/'aren_region.csv', index=False)

selected_region = ["English", "Arabic"]
number_selected = [counter_en, counter_ar]
plt.bar(selected_region, number_selected)
plt.title("Number of times that English or Arabic regions are selected")
plt.savefig(data_dir_plot_aren/"aren_bar_plot.jpeg")
plt.show()

total = (counter_en*100)/(counter_en+counter_ar)
print(str(total)+"% of the most salient points of total paired memes are in the English region")
```
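The left/right test used in experiments 2.1 and 2.2 can be factored into a small standalone helper. Here is a sketch with hypothetical salient-point values (the function name and sample numbers are made up for illustration):

```python
def salient_region(x, width, left_label, right_label):
    # A point in the left half (x <= width / 2) belongs to the left image
    return left_label if x <= width / 2 else right_label

# Hypothetical salient-point x values for a 1000-px-wide joined image
points = [120, 730, 500, 980]
counts = {"English": 0, "Arabic": 0}
for x in points:
    counts[salient_region(x, 1000, "English", "Arabic")] += 1
# counts is now {"English": 2, "Arabic": 2}
```

Passing the labels as arguments lets the same helper serve both orderings (English-left in 2.1, Arabic-left in 2.2) without duplicating the branch.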
# Differential Privacy - DP

### What is DP?

Differential Privacy began with ensuring that different *'statistical analyses'* do not violate privacy, which in the early days of DP meant database queries that remained private. Now, any statistical analysis should not violate the privacy of any individual. We want to learn information about a dataset but not about specific individuals, knowledge of which could harm them. We need an enforceable definition of privacy so that we can evaluate our practices.

**Definition** <br>
*Differential Privacy* describes a promise, made by a data holder, or curator, to a data subject, and the promise is as follows: <br>
"You will not be affected, adversely or otherwise, by allowing your data to be used in any study or analysis, no matter what other studies, data sets, or information sources, are available." -- Cynthia Dwork, "*Algorithmic Foundations*"

Simply anonymizing data does not work. It would violate the promise in our definition, where we assert that we will not affect anyone in any way through the release of data. Often, data can be deduced even from an anonymized dataset by studying a separate dataset. For example, the *Netflix Prize* competition included a release of many movie ratings by many users. Even though this data was fully anonymized, [statistical analysis allowed researchers to de-anonymize users' names as well as movie titles](https://www.cs.utexas.edu/~shmat/shmat_oak08netflix.pdf). A simple Google search reveals many more instances of de-anonymization.

##### Simple Example

Consider a database that is equivalent to a binary array. We want to perform queries such that, no matter which query we make, removing a datum from the database does not change the result. This is an example of full privacy protection: that datum was not leaking any statistical information into the database.
<br>
When we want to evaluate the privacy of a function (or query), we want to compare the output of the query on the entire database against the query on each parallel database. A parallel database is a database with one datum removed. See Andrew Trask's work for this example [here](https://github.com/udacity/private-ai/blob/master/completed/Section%201%20-%20Differential%20Privacy.ipynb).

**Differencing Attack**

If we are able to perform one or more queries and gain a single datum's information or data, we are able to circumvent privacy protections and divulge specific data, for example by comparing various sum queries to each other.

## Local vs Global DP

Our main strategy for protecting data will be adding random noise.

- *Local Differential Privacy*: adds noise to function data points (function inputs) - examples of this would be adding noise directly to a database or having users add noise to their own data before it is even put in the DB. This offers the highest level of protection since users do not need to trust anyone before handing over data.
- *Global Differential Privacy*: adds noise to function outputs - the database contains private information, but we add noise to the interfaces with the data.

Assuming that the database operator is trustworthy, the largest difference between local and global differential privacy is that under the global DP scenario our results will be more accurate. This assumption relies on a *'trusted curator'* to add noise in a proper manner.

### Randomized Response

Randomized response is a great example of how we can make functions DP. Read about it [here](https://en.wikipedia.org/wiki/Randomized_response). In essence, a question posed by a researcher adds plausible deniability, teasing out the truth over a large population after adjusting for the true statistic.

## Formal Definition of Differential Privacy

There are two types of noise we can add, *Gaussian* or *Laplacian* noise (based on the respective distributions).
<br>
How much noise should we add? We can analyze a query against our definition of DP. But first, let's formally define DP.

**Definition**<br>
For ALL parallel databases, the maximum distance between a query on database (*x*) and the same query on database (*y*) will be $e^\varepsilon$, but occasionally this constraint won't hold, with probability delta. Thus, this theorem is called *'epsilon delta'* differential privacy.

In essence, the equation in the definition compares two random distributions (from randomized algorithm *M*) over all things in the DB, where one random distribution has one item missing. We want to see how different these two distributions are. To read more about the mathematics behind $\varepsilon$-differential privacy go [here](https://en.wikipedia.org/wiki/Differential_privacy#Definition_of_%CE%B5-differential_privacy).

*Epsilon*: $e^{\varepsilon}$ is the primary constraint on the maximum amount by which the two distributions may differ.
- $\varepsilon=0$ -> PERFECT PRIVACY (with $\delta=0$)
- $\varepsilon=1$ -> allows some privacy leakage. Note, our constraint can still allow for leakage of small amounts.

*Delta*: $\delta$ is the probability that we leak more information than $\varepsilon$ claims we leak. We want this to be a small number.

**Privacy Budget**

How much epsilon/delta leakage we allow for our analysis. Laplacian noise works best. But how much do we add? It depends on the type of noise we are adding, taking into account the query's sensitivity and our desired epsilon/delta budget.<br>

**Laplacian noise** <br>
$b = \frac{sensitivity(query)}{\varepsilon}$ (where $\delta=0$ always)

# Differential Privacy for Deep Learning

Differential privacy techniques form the basis of privacy guarantees in the context of deep learning. In the context of DL, we can replace our 'query' terminology with 'neural network training'. Regardless of any single datum we remove from our database, we should always get the same model back.
This becomes tricky with DL since it is common for models to train to different solutions (different outputs) even on the same data. Furthermore, we might not always know where individual people are referenced in a database. Sometimes we can have multiple people per datum, like with images of multiple people. Defining DP in this context is difficult. Generally, there are three ways of making deep learning differentially private: 1. Add *noise to gradients* through differentially private stochastic gradient descent (local differential privacy) 2. Add *noise to input data* (local differential privacy) 3. Add *noise to predictions* of a model (i.e. PATE) ### PATE Analysis [*Private Aggregation of Teacher Ensembles* (PATE) Analysis](https://arxiv.org/pdf/1802.08908.pdf) is a proposed solution for assessing a DL model's DP. In essence, it works by performing a DP query (usually a max query that we make DP with an epsilon delta constraint) on the predictions of remotely trained models on a new dataset. *** For more information on differential privacy, head to an example by Andrew Trask [here](https://github.com/udacity/private-ai/blob/master/completed/Section%201%20-%20Differential%20Privacy.ipynb), or read the most comprehensive differential privacy textbook to date, ["Algorithmic Foundations of Differential Privacy"](https://www.cis.upenn.edu/~aaroth/Papers/privacybook.pdf) by Cynthia Dwork and Aaron Roth. ##### References Subject matter and code inspired by Udacity's Private AI Scholarship Challenge - https://github.com/udacity/private-ai/blob/master/completed/Section%201%20-%20Differential%20Privacy.ipynb
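Returning to the Laplacian-noise formula from earlier, here is a minimal sketch of the Laplace mechanism applied to a count query on a toy binary database. A count query has sensitivity 1 (removing any one datum changes the result by at most 1), so $b = 1/\varepsilon$. The database, function name, and epsilon value are made up for demonstration:

```python
import numpy as np

rng = np.random.default_rng(0)

def laplace_count(db, epsilon):
    # Count queries have sensitivity 1, so the noise scale is b = 1 / epsilon
    sensitivity = 1.0
    b = sensitivity / epsilon
    return db.sum() + rng.laplace(loc=0.0, scale=b)

db = rng.integers(0, 2, size=100)  # toy binary database
true_count = db.sum()
noisy_count = laplace_count(db, epsilon=0.5)  # b = 2: noisier, more private
```

A smaller epsilon gives a larger `b` and therefore noisier (more private) answers; repeating the query spends more of the privacy budget.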
One can create particle trajectories from a `DatasetSeries` object for a specified list of particles identified by their unique indices using the `particle_trajectories` method. ``` %matplotlib inline import glob from os.path import join import yt from yt.config import ytcfg path = ytcfg.get("yt", "test_data_dir") import matplotlib.pyplot as plt from mpl_toolkits.mplot3d import Axes3D ``` First, let's start off with a FLASH dataset containing only two particles in a mutual circular orbit. We can get the list of filenames this way: ``` my_fns = glob.glob(join(path, "Orbit", "orbit_hdf5_chk_00[0-9][0-9]")) my_fns.sort() ``` And let's define a list of fields that we want to include in the trajectories. The position fields will be included by default, so let's just ask for the velocity fields: ``` fields = ["particle_velocity_x", "particle_velocity_y", "particle_velocity_z"] ``` There are only two particles, but for consistency's sake let's grab their indices from the dataset itself: ``` ds = yt.load(my_fns[0]) dd = ds.all_data() indices = dd["particle_index"].astype("int") print (indices) ``` which is what we expected them to be. Now we're ready to create a `DatasetSeries` object and use it to create particle trajectories: ``` ts = yt.DatasetSeries(my_fns) # suppress_logging=True cuts down on a lot of noise trajs = ts.particle_trajectories(indices, fields=fields, suppress_logging=True) ``` The `ParticleTrajectories` object `trajs` is essentially a dictionary-like container for the particle fields along the trajectory, and can be accessed as such: ``` print (trajs["particle_position_x"]) print (trajs["particle_position_x"].shape) ``` Note that each field is a 2D NumPy array with the different particle indices along the first dimension and the times along the second dimension. 
As such, we can access them individually by indexing the field: ``` plt.figure(figsize=(6, 6)) plt.plot(trajs["particle_position_x"][0], trajs["particle_position_y"][0]) plt.plot(trajs["particle_position_x"][1], trajs["particle_position_y"][1]) ``` And we can plot the velocity fields as well: ``` plt.figure(figsize=(6, 6)) plt.plot(trajs["particle_velocity_x"][0], trajs["particle_velocity_y"][0]) plt.plot(trajs["particle_velocity_x"][1], trajs["particle_velocity_y"][1]) ``` If we want to access the time along the trajectory, we use the key `"particle_time"`: ``` plt.figure(figsize=(6, 6)) plt.plot(trajs["particle_time"], trajs["particle_velocity_x"][1]) plt.plot(trajs["particle_time"], trajs["particle_velocity_y"][1]) ``` Alternatively, if we know the particle index we'd like to examine, we can get an individual trajectory corresponding to that index: ``` particle1 = trajs.trajectory_from_index(1) plt.figure(figsize=(6, 6)) plt.plot(particle1["particle_time"], particle1["particle_position_x"]) plt.plot(particle1["particle_time"], particle1["particle_position_y"]) ``` Now let's look at a more complicated (and fun!) example. We'll use an Enzo cosmology dataset. First, we'll find the maximum density in the domain, and obtain the indices of the particles within some radius of the center. First, let's have a look at what we're getting: ``` ds = yt.load("enzo_tiny_cosmology/DD0046/DD0046") slc = yt.SlicePlot(ds, "x", ["density","dark_matter_density"], center="max", width=(3.0, "Mpc")) slc.show() ``` So far, so good--it looks like we've centered on a galaxy cluster. 
Let's grab all of the dark matter particles within a sphere of 0.5 Mpc (identified by `"particle_type == 1"`): ``` sp = ds.sphere("max", (0.5, "Mpc")) indices = sp["particle_index"][sp["particle_type"] == 1] ``` Next we'll get the list of datasets we want, and create trajectories for these particles: ``` my_fns = glob.glob(join(path, "enzo_tiny_cosmology/DD*/*.hierarchy")) my_fns.sort() ts = yt.DatasetSeries(my_fns) trajs = ts.particle_trajectories(indices, fields=fields, suppress_logging=True) ``` Matplotlib can make 3D plots, so let's pick three particle trajectories at random and look at them in the volume: ``` fig = plt.figure(figsize=(8.0, 8.0)) ax = fig.add_subplot(111, projection='3d') ax.plot(trajs["particle_position_x"][100], trajs["particle_position_y"][100], trajs["particle_position_z"][100]) ax.plot(trajs["particle_position_x"][8], trajs["particle_position_y"][8], trajs["particle_position_z"][8]) ax.plot(trajs["particle_position_x"][25], trajs["particle_position_y"][25], trajs["particle_position_z"][25]) ``` It looks like these three different particles fell into the cluster along different filaments. We can also look at their x-positions only as a function of time: ``` plt.figure(figsize=(6,6)) plt.plot(trajs["particle_time"], trajs["particle_position_x"][100]) plt.plot(trajs["particle_time"], trajs["particle_position_x"][8]) plt.plot(trajs["particle_time"], trajs["particle_position_x"][25]) ``` Suppose we wanted to know the gas density along the particle trajectory, but there wasn't a particle field corresponding to that in our dataset. Never fear! If the field exists as a grid field, yt will interpolate this field to the particle positions and add the interpolated field to the trajectory. To add such a field (or any field, including additional particle fields) we can call the `add_fields` method: ``` trajs.add_fields(["density"]) ``` We also could have included `"density"` in our original field list. 
Now, plot up the gas density for each particle as a function of time: ``` plt.figure(figsize=(6,6)) plt.plot(trajs["particle_time"], trajs["density"][100]) plt.plot(trajs["particle_time"], trajs["density"][8]) plt.plot(trajs["particle_time"], trajs["density"][25]) plt.yscale("log") ``` Finally, the particle trajectories can be written to disk. Two options are provided: ASCII text files with a column for each field and the time, and HDF5 files: ``` trajs.write_out("halo_trajectories") # This will write a separate file for each trajectory trajs.write_out_h5("halo_trajectories.h5") # This will write all trajectories to a single file ```
# Data Carpentry Reproducible Research Workshop - Data Exploration ## Learning objectives Use the Python Pandas library in the Jupyter Notebook to: * Assess the structure and cleanliness of a dataset, including the size and shape of the data, and the number of variables of each type. * Describe findings, translate results from code to text using Markdown comments in the Jupyter Notebook, summarizing your thought process in a narrative. * Modify raw data to prepare a clean data set -- including copying data, removing or replacing missing and incoherent data, dropping columns, removing duplicates. * Assess whether data is “Tidy” and identify appropriate steps and write and execute code to arrange it into a tidy format - including merging, reshaping, subsetting, grouping, sorting, and making appropriate new columns. * Identify several relevant summary measures * Illustrate data in plots and determine the need for repeated or further analysis. * Justify these decisions in Markdown in the Jupyter Notebook. # Setting up the notebook ## About Libraries in Python A library in Python contains a set of tools (called functions) that perform tasks on our data. Importing a library is like getting a piece of lab equipment out of a storage locker and setting it up on the bench for use in a project. Once a library is imported, it can be used or called to perform many tasks. Python doesn’t load all of the libraries available to it by default. We have to add an import statement to our code in order to use library functions. To import a library, we use the syntax `import libraryName`. If we want to give the library a nickname to shorten the command, we can add `as nickNameHere`. An example of importing the Pandas library using the common nickname `pd` is below. **`import`** `pandas` **`as`** `pd` ## matplotlib and other plotting libraries matplotlib is the most widely used Python library for plotting. We can run it in the notebook using the magic command `%matplotlib inline`. 
If you do not use `%matplotlib inline`, your plots will be generated outside of the notebook and may be difficult to find. See [the IPython docs](http://ipython.readthedocs.io/en/stable/interactive/plotting.html) for other IPython magic commands. In this lesson, we will only use matplotlib and Seaborn, another package that works in tandem with matplotlib to make nice graphics. There is a whole range of graphics packages in Python, ranging from basic visualizations to fancy, interactive graphics like [Bokeh](http://bokeh.pydata.org/en/latest/) and [Plotly](https://plot.ly/python/). We encourage you to explore on your own! Chances are, if you can imagine a plot you'd like to make, somebody else has written a package to do it. ## Markdown Text can be added to Jupyter Notebooks using Markdown cells. Markdown is a popular lightweight markup language; Markdown cells can also contain raw HTML. To learn more, see [Jupyter's Markdown guide](http://jupyter-notebook.readthedocs.io/en/latest/examples/Notebook/Working%20With%20Markdown%20Cells.html) or revisit the [Reproducible Research lesson on Markdown](https://github.com/Reproducible-Science-Curriculum/introduction-RR-Jupyter/blob/master/notebooks/Navigating%20the%20notebook%20-%20instructor%20script.ipynb). ## The Pandas Library One of the best options for working with tabular data in Python is the Python Data Analysis Library (a.k.a. Pandas). The Pandas library is built on top of the NumPy package (another Python library). Pandas provides data structures, produces high-quality plots with matplotlib, and integrates nicely with other libraries that use NumPy arrays. Those familiar with spreadsheets should become comfortable with Pandas data structures. ``` import pandas as pd import numpy as np import matplotlib.pyplot as plt %matplotlib inline ``` Each time we call a function that’s in a library, we use the syntax `LibraryName.FunctionName`. Adding the library name with a `.` before the function name tells Python where to find the function.
In the example above, we have imported Pandas as `pd`. This means we don’t have to type out `pandas` each time we call a Pandas function. See this free [Pandas cheat sheet](https://www.datacamp.com/community/blog/python-pandas-cheat-sheet) from DataCamp for the most common Pandas commands. # Getting data into the notebook We will begin by locating and reading our data, which are in a table format as a tab-delimited file. We will use Pandas’ `read_table` function to pull the file directly into a `DataFrame`. ## What’s a `DataFrame`? A `DataFrame` is a 2-dimensional data structure that can store data of different types in its columns (including characters, integers, floating point values, factors and more). It is similar to a spreadsheet, a SQL table, or a data.frame in R. A `DataFrame` always has an index (0-based). An index refers to the position of an element in the data structure. Note that we use `pd.read_table`, not just `read_table` or `pandas.read_table`, because we imported Pandas as `pd`. In our original file, the columns in the data set are separated by a TAB. We need to tell the `read_table` function in Pandas that this is the delimiter with `sep='\t'`. ``` url = "https://raw.githubusercontent.com/Reproducible-Science-Curriculum/data-exploration-RR-Jupyter/master/gapminderDataFiveYear_superDirty.txt" # You can also read your table in from a file on disk gapminder = pd.read_table(url, sep = "\t") ``` The first thing to do when loading data into the notebook is to actually "look" at it. How many rows and columns are there? What types of variables are in it and what values can they take? There are usually too many rows to print to the screen. By default, when you type the name of the `DataFrame` and run a cell, Pandas knows to not print the whole thing. Instead, you will see the first and last few rows with dots in between. A neater way to see a preview of the dataset is the `head()` method. Calling `dataset.head()` will display the first 5 rows of the data.
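To make these ideas concrete, here is a tiny hand-built `DataFrame` (toy values, not the gapminder file) showing mixed column types, the default 0-based index, and a `head()` preview:

```python
import pandas as pd

# A tiny DataFrame with columns of different types.
df = pd.DataFrame({
    "country": ["norway", "kenya", "peru"],   # strings
    "year": [2007, 2007, 2007],               # integers
    "lifeexp": [80.2, 54.1, 71.4],            # floats
})

print(df.index[0])  # the default index starts at 0
print(df.head(2))   # preview just the first 2 rows
```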
You can specify how many rows you want to see as an argument, like `dataset.head(10)`. The `tail()` method does the same with the last rows of the `DataFrame`. ``` gapminder.head() #tail #gapminder ``` Sometimes the table has too many columns to print on screen. Calling `df.columns.values` will print all the column names in an array. ``` gapminder.columns.values ``` # Assess the structure and cleanliness ## How many rows and columns are in the data? We often want to know how many rows and columns are in the data -- what is the "shape" of the `DataFrame`. Shape is an attribute of the `DataFrame`. Pandas has a convenient way of getting that information: `DataFrame.shape` (using `DataFrame` here as a generic name for your `DataFrame`). This returns a tuple (an immutable sequence of values) representing the dimensions of the `DataFrame` (rows, columns). To get the shape of the gapminder `DataFrame`: ``` gapminder.shape ``` We can learn even more about our `DataFrame`. The `info()` method gives a few useful pieces of information, including the shape of the `DataFrame`, the variable type of each column, and the amount of memory stored. The output from `info()` displayed below shows that the fields ‘year’ and ‘pop’ (population) are represented as ‘float’ (that is: numbers with a decimal point). This is not appropriate: year and population should be integers or whole numbers. We can change the data type with the `astype()` method; we will do so later in this lesson, in the section on data cleaning. ``` gapminder.info() ``` The `describe()` method will take the numeric columns and provide a summary of their values. This is useful for getting a sense of the ranges of values and seeing if there are any unusual or suspicious numbers. ``` gapminder.describe() ``` The `DataFrame` method `describe()` just blindly looks at all numeric variables. We wouldn't actually want to take the mean year.
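As a quick sketch of what `astype()` will do for us later in the lesson (toy values here, not the real columns):

```python
import pandas as pd

df = pd.DataFrame({"year": [1952.0, 1957.0], "pop": [9279525.0, 10270856.0]})
print(df.dtypes)  # both columns start out as float64

# Convert to whole numbers, as befits years and population counts.
df["year"] = df["year"].astype(int)
df["pop"] = df["pop"].astype(int)
print(df.dtypes)
```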
Additionally, we obtain ‘NaN’ values for our quartiles. This suggests we might have missing data, which we can (and will) deal with shortly when we begin to clean our data. For now, let's pull out only the columns that are truly continuous numbers (i.e. ignore the description for ‘year’). This is a preview of selecting columns from the data; we'll talk more about how to do it later in the lesson. ``` gapminder[['pop', 'life Exp', 'gdpPercap']].describe() ``` We can also extract one specific variable metric at a time if we wish: ``` print (gapminder['life Exp'].min()) print (gapminder['life Exp'].max()) print (gapminder['life Exp'].mean()) print (gapminder['life Exp'].std()) print (gapminder['life Exp'].count()) ``` #### Values in columns Next, let's say you want to see all the unique values for the `region` column. One way to do this is: ``` pd.unique(gapminder.region) ``` This output is useful, but it looks like there may be some formatting issues causing the same region to be counted more than once. Let's take it a step further to find out for sure. The command `value_counts()` gives you a first global idea of your categorical data such as strings. In this case that is the column `region`. Run the code below. ``` # How many unique regions are in the data? print(len(gapminder['region'].unique())) # How many times does each unique region occur? gapminder['region'].value_counts() ``` The table reveals some problems in our data set. The data set covers 12 years, so each ‘region’ should appear 12 times, but some regions appear more than 12 times and others fewer than 12 times. We also see inconsistencies in the region names (string variables are very susceptible to those), for instance Asia_china vs. Asia_China. Another type of problem we see is the various names of 'Congo'. In order to analyze this dataset appropriately we need to take care of these issues. We will fix them in the next section on data cleaning.
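The same diagnostic can be sketched on a toy `Series` with deliberately inconsistent names, showing why `unique()` over-counts and how normalizing the strings fixes it:

```python
import pandas as pd

# Two real regions hiding behind four spelling/whitespace variants.
region = pd.Series(["asia_china", "Asia_china", "asia_china ", "africa_kenya"])

print(len(region.unique()))   # 4 apparent regions
print(region.value_counts())  # counts are split across the variants

normalized = region.str.strip().str.lower()
print(len(normalized.unique()))  # 2 actual regions
```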
#### Exercises Are there other columns in our `DataFrame` that have categorical variables? If so, run some code to list the categories below. Save your list to a variable and count the number of unique categories using `len`. What is the outcome when you run `value_counts()`? # Data cleaning ## Referencing objects vs copying objects Before we get started with cleaning our data, let's practice good data hygiene by first creating a copy of our original data set. Often, you want to leave the original data untouched. To protect your original, you can make a copy of your data (and save it to a new `DataFrame` variable) before operating on the data or a subset of the data. This will ensure that a new version of the original data is created and your original is preserved. ###### Why this is important Suppose you take a subset of your `DataFrame` and store it in a new variable, like `gapminder_early = gapminder[gapminder['year'] < 1970]`. Doing this does not actually create a new object. Instead, you have just given a name to that subset of the original data: `gapminder_early`. This subset still points to the original rows of `gapminder`. Any changes you make to the new `DataFrame` `gapminder_early` will appear in the corresponding rows of your original `gapminder` `DataFrame` too. ``` gapminder = pd.read_table(url, sep = "\t") gapminder_copy = gapminder.copy() gapminder_copy.head() ``` ## Handling Missing Data Missing data (often denoted as 'NaN'- not a number- in Pandas, or as 'null') is an important issue to handle because Pandas cannot compute on rows or columns with missing data. 'NaN' or 'null' does not mean the value at that position is zero, it means that there is no information at that position. Ignoring missing data doesn't make it go away. There are different ways of dealing with it which include: * analyzing only the available data (i.e. 
ignore the missing data) * impute the missing data with replacement values and treat these as though they were observed * impute the missing data and account for the fact that these were imputed with uncertainty (ex: create a new boolean variable so you know that these values were not actually observed) * use statistical models to allow for missing data--make assumptions about their relationships with the available data as necessary For our purposes with the dirty gapminder data set, we know our missing data is excess (and unnecessary) and we are going to choose to analyze only the available data. To do this, we will simply remove rows with missing values. This is incredibly easy to do because Pandas allows you to either remove all instances with null data or replace them with a particular value. `df = df.dropna()` drops rows with any column having NA/null data. `df = df.fillna(value)` replaces all NA/null data with the argument `value`. For more fine-grained control of which rows (or columns) to drop, you can use `how` or `thresh`. These are more advanced topics and are not covered in this lesson; you are encouraged to explore them on your own. ``` gapminder_copy = gapminder_copy.dropna() gapminder_copy.head() ``` ## Changing Data Types We can change the data type with the `astype()` method, as shown below. ``` gapminder_copy['year'] = gapminder_copy['year'].astype(int) gapminder_copy['pop'] = gapminder_copy['pop'].astype(int) gapminder_copy.info() ``` ## Handling (Unwanted) Repetitive Data You can identify which observations are duplicates. The call `df.duplicated()` will return boolean values for each row in the `DataFrame` telling you whether or not a row is repeated. In cases where you don’t want repeated values (we wouldn’t--we only want each country to be represented once for every relevant year), you can easily drop such duplicate rows with the call `df.drop_duplicates()`.
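Sketched on a toy `DataFrame` (made-up values), the calls just described behave like this:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({
    "country": ["norway", "norway", "kenya", "peru"],
    "lifeexp": [80.2, 80.2, np.nan, 71.4],
})

cleaned = df.dropna()                # drops the row with the missing value
cleaned = cleaned.drop_duplicates()  # drops the repeated norway row
print(len(df), len(cleaned))         # 4 rows before, 2 after

filled = df.fillna(0)                # alternative: replace NaN with a value
print(filled["lifeexp"].tolist())
```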
``` gapminder_copy.duplicated().head() #shows we have a repetition within the first 5 rows ``` Let's look at the first five rows of our data set again (remember we removed the NaNs): ``` gapminder_copy.head() ``` Our statement from above is correct, rows 1 & 2 are duplicated. Let's fix that: ``` gapminder_copy = gapminder_copy.drop_duplicates() gapminder_copy.head() ``` ### Reindexing with `reset_index()` Now we have 1704 rows, but our indexes are off because we removed duplicate rows. We can reset our indices easily with the call `reset_index(drop=True)`. Remember, Python is 0-indexed so our indices will be valued 0-1703. The concept of reindexing is important. When we removed some of the messier, unwanted data, we had "gaps" in our index values. By correcting this, we can improve our search functionality and our ability to perform iterative functions on our cleaned data set. ``` gapminder_copy = gapminder_copy.reset_index(drop=True) gapminder_copy.head() ``` ## Handling Inconsistent Data The `region` column is a bit too messy for what we'd like to do. The `value_counts()` operation above revealed some issues that we can solve with several different techniques. ### String manipulations Common problems with string variables are leading and trailing white space and upper case vs. lower case in the same data set. The following three commands remove all such lingering spaces (left and right) and put everything in lowercase. If you prefer, the three commands can be written in one single line (which is a concept called chaining). ``` gapminder_copy['region'] = gapminder_copy['region'].str.lstrip() # Strip white space on left gapminder_copy['region'] = gapminder_copy['region'].str.rstrip() # Strip white space on right gapminder_copy['region'] = gapminder_copy['region'].str.lower() # Convert to lowercase gapminder_copy['region'].value_counts() # How many times does each unique region occur? # We could have done this in one line! 
# gapminder_copy['region'] = gapminder_copy['region'].str.lstrip().str.rstrip().str.lower() ``` ### regex + `replace()` A regular expression, a.k.a. regex, is a sequence of characters that defines a search pattern. In a regular expression, the symbol “*” matches the preceding character 0 or more times, whereas “+” matches the preceding character 1 or more times. “.” matches any single character. Writing “x|y” means to match either ‘x’ or ‘y’. For more regex shortcuts (cheatsheet): https://www.shortcutfoo.com/app/dojos/regex/cheatsheet To play "regex golf," check out this [tutorial by Peter Norvig](https://www.oreilly.com/learning/regex-golf-with-peter-norvig) (you may need an O'Reilly or social media account to play). Pandas allows you to use regex in its `replace()` method -- when a regex term is found in an element, the element is then replaced with the specified replacement term. In order for it to appropriately correct elements, both the `regex` and `inplace` parameters need to be set to `True` (their defaults are `False`). This ensures that the initial input string is read as a regular expression and that the elements will be modified in place. For more documentation on the replace method: http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.replace.html Here's an incorrect regex example: we create a temporary `DataFrame` in which a regex pulls all values that contain the term “congo”. Unfortunately, this creates 24 instances of the Democratic Republic of the Congo -- this is an error in our cleaning! We can revert back to the non-temporary `DataFrame` and correctly modify our regex to isolate only the Democratic Republic instances (as opposed to including the Republic as well). ``` # This gives a problem -- 24 values of the congo! temp = gapminder_copy['region'].replace(".*congo.*", "africa_dem rep congo", regex=True) temp.value_counts() # What happened? This shows all the rows that have congo in the name.
gapminder_copy[gapminder_copy["region"].str.contains('congo')] ``` ### Using regex to correctly consolidate the Congo regions... As noted above, regular expressions (often simply "regex") provide a powerful tool for fixing errors that arise in strings. In order to correctly label the two different countries that include the word "congo", we need to design and use (via `pd.df.replace()`) a regex that correctly differentiates between the two countries. Recall that the "." is the wildcard (matching any single character); combining this with "*" allows us to match any number of single characters an unspecified number of times. By combining these characters with substrings corresponding to variations in the naming of the Democratic Republic of the Congo, we can correctly normalize the name. If you feel that the use of regex is not particularly straightforward, you are correct -- appropriately using these tools takes a great deal of time to master. When designing regex for these sorts of tasks, you might find the following prototyper helpful: https://regex101.com/ ``` gapminder_copy['region'].replace(".*congo, dem.*", "africa_dem rep congo", regex=True, inplace=True) gapminder_copy['region'].replace(".*_democratic republic of the congo", "africa_dem rep congo", regex=True, inplace=True) gapminder_copy['region'].value_counts() # Now it's fixed. ``` ### Exercise (regex): Now that we've taken a close look at how to properly design and use regex to clean string entries in our data, let's try to normalize the naming of a few other countries. Using the pandas code we constructed above as a template, construct similar code (using `pd.df.replace()`) to set the naming of the Ivory Coast and Canada to "africa_cote d'ivoire" and "americas_canada", respectively. 
``` gapminder_copy['region'].replace(".*ivore.*", "africa_cote d'ivoire", regex=True, inplace=True) gapminder_copy['region'].replace("^_canada", "americas_canada", regex=True, inplace=True) gapminder_copy['region'].value_counts() ``` ## Tidy data Having what is called a "_Tidy_ data set" can make cleaning, analyzing, and visualizing your data much easier. You should aim for having Tidy data when cleaning and preparing your data set for analysis. Two of the important aspects of Tidy data are: * every variable has its own column * every observation has its own row (There are other aspects of Tidy data, here is a good blog post about Tidy data in Python: http://www.jeannicholashould.com/tidy-data-in-python.html) Currently the gapminder dataset has a single column for continent and country (the ‘region’ column). We can split that column into two, by using the underscore that separates continent from country. We can create a new column in the `DataFrame` by naming it before the = sign: `gapminder['country'] = ` The following commands use the function `split()` to split the string at the underscore (the first argument), which results in a list of two elements: before and after the \_. The second argument tells `split()` that the split should take place only at the first occurrence of the underscore. ``` gapminder_copy['country']=gapminder_copy['region'].str.split('_', 1).str[1] gapminder_copy['continent']=gapminder_copy['region'].str.split('_', 1).str[0] gapminder_copy.head() ``` ### Removing and renaming columns We have now added the columns `country` and `continent`, but we still have the old `region` column as well. In order to remove that column we use the `drop()` command. The first argument of the `drop()` command is the name of the element to be dropped. The second argument is the *axis* number: *0 for row, 1 for column*. 
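On a toy frame, the split-then-drop sequence can also be written with `expand=True`, which returns both pieces as columns in one call (a sketch, equivalent to the `.str[0]`/`.str[1]` indexing used in the lesson):

```python
import pandas as pd

df = pd.DataFrame({"region": ["africa_kenya", "americas_peru"]})

# n=1 splits only at the first underscore; expand=True yields two columns.
parts = df["region"].str.split("_", n=1, expand=True)
df["continent"] = parts[0]
df["country"] = parts[1]

# Drop the old column; axis=1 means "a column" (axis=0 would mean "a row").
df = df.drop("region", axis=1)
print(df.columns.tolist())
```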
``` gapminder_copy = gapminder_copy.drop('region', 1) #1 stands for column gapminder_copy.head() ``` Finally, it is a good idea to look critically at your column names. Use lowercase for all column names to avoid confusing `gdppercap` with `gdpPercap` or `GDPpercap`. Avoid spaces in column names to simplify manipulating your data - look out for lingering white space at the beginning or end of your column names. The following code turns all column names to lowercase. ``` gapminder_copy.columns = gapminder_copy.columns.str.lower() gapminder_copy.head() ``` We also want to remove the space from the `life exp` column name. We can do that with the Pandas `rename` method. It takes a dictionary as its argument, with the old column names as keys and new column names as values. If you're unfamiliar with dictionaries, they are a very useful data structure in Python. You can read more about them [here](https://docs.python.org/3/tutorial/datastructures.html#dictionaries). ``` gapminder_copy = gapminder_copy.rename(columns={'life exp' : 'lifeexp'}) gapminder_copy.head() ``` ## Merging data Often we have more than one `DataFrame` that contains parts of our data set and we want to put them together. This is known as merging the data. Our advisor now wants us to add a new country called The People's Republic of Berkeley to the gapminder data set that we have cleaned up. Our goal is to get this new data into the same `DataFrame` in the same format as the gapminder data and, in this case, we want to concatenate (add) it onto the end of the gapminder data. Concatenating is a simple form of merging; there are many useful (and more complicated) ways to merge data. If you are interested in more information, the [Pandas Documentation](http://pandas.pydata.org/pandas-docs/stable/merging.html) is useful.
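Before running it on the real data, here is a minimal sketch of concatenation on toy frames; note that passing `ignore_index=True` renumbers the combined index in one step:

```python
import pandas as pd

a = pd.DataFrame({"country": ["norway", "kenya"], "pop": [100, 300]})
b = pd.DataFrame({"country": ["berkeley"], "pop": [500]})

# Stack b underneath a; ignore_index=True renumbers the rows 0..n-1.
combined = pd.concat([a, b], ignore_index=True)
print(combined.index.tolist())  # without ignore_index this would be [0, 1, 0]
```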
``` PRB = pd.read_table('https://raw.githubusercontent.com/Reproducible-Science-Curriculum/data-exploration-RR-Jupyter/master/PRB_data.txt', sep = "\t") PRB.head() ## bring in PRB data (no major problems) and make it conform to the gapminder at this point # clean the data to look like the current gapminder PRB['country']=PRB['region'].str.split('_', 1).str[1].str.lower() PRB['continent']=PRB['region'].str.split('_', 1).str[0].str.lower() PRB = PRB.drop('region', 1) PRB.columns = PRB.columns.str.lower() PRB = PRB.rename(columns={'life exp' : 'lifeexp'}) PRB.head() # double check that the gapminder is the same gapminder_copy.head() # combine the data sets with concat gapminder_comb = pd.concat([gapminder_copy, PRB]) gapminder_comb.tail(15) ``` Now that the `DataFrames` have been concatenated, notice that the index is off: it repeats the numbers 0 - 11 in the People's Republic of Berkeley data. #### **Exercise:** fix the index. ``` # our code for fixing index gapminder_comb = gapminder_comb.reset_index(drop=True) gapminder_comb.head() ``` ## Subsetting and sorting There are many ways in which you can manipulate a Pandas `DataFrame` - here we will discuss two approaches: subsetting and sorting. ##### Subsetting We can subset (or slice) by giving the numbers of the rows you want to see between square brackets. *REMINDER:* Python uses 0-based indexing. This means that the first element in an object is located at position 0. This is different from other tools like R and Matlab that index elements within objects starting at 1.
``` gapminder_copy[0:15] # Select the first 15 rows gapminder_copy[:15] # Also the first 15 rows gapminder_copy[-10:] # Select the last 10 rows ``` ### Exercise *What does the negative number (in the third cell) mean?* Answer: *What happens when you leave the space before or after the colon empty?* Answer: Subsetting can also be done by selecting for a particular column or for a particular value in a column; for instance, select the rows that have ‘africa’ in the column ‘continent’. Note the double equal sign: single equal signs are used in Python to assign something to a variable. The double equal sign is a comparison: the variable to the left has to be exactly equal to the string to the right. ``` # Select for a particular column gapminder_copy['year'] # this syntax, calling the column as an attribute, gives you the same output gapminder_copy.year ``` We can also create a new object that contains the data within the `continent` column: ``` gapminder_continents = gapminder_copy['continent'] gapminder_africa = gapminder_copy[gapminder_copy['continent']=='africa'] gapminder_africa.head() ``` #### Sorting Sorting may help to further organize and inspect your data. The command `sort_values()` takes a number of arguments; the most important ones are `by` and `ascending`. The following command will sort your `DataFrame` by year, beginning with the most recent. ``` gapminder_copy.sort_values(by='year', ascending = False) ``` ### Exercise Organize your data set by country, from ‘Afghanistan’ to ‘Zimbabwe’.
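One possible solution to the exercise, sketched on toy data (assuming the cleaned column is named `country`): since `ascending=True` is the default, sorting runs from A to Z.

```python
import pandas as pd

df = pd.DataFrame({"country": ["zimbabwe", "afghanistan", "peru"]})

# ascending=True is the default, so no extra argument is needed.
sorted_countries = df.sort_values(by="country")["country"].tolist()
print(sorted_countries)
```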
## Summarize and plot In this section we will sort and summarize the data and make plots suited to each variable type: plots of subsets, of single variables, and of pairs of variables, using matplotlib syntax (with Seaborn for prettier defaults; the Seaborn package is also useful for further analysis later). Exploring is often iterative - summarize, plot, summarize, plot, etc. - and sometimes it branches. # Summarizing data Remember that the `info()` method gives a few useful pieces of information, including the shape of the `DataFrame`, the variable type of each column, and the amount of memory stored. We can see many of our changes (continent and country columns instead of region, higher number of rows, etc.) reflected in the output of the `info()` method. ``` gapminder_comb.info() ``` We also saw above that the `describe()` method will take the numeric columns and give a summary of their values. We have to remember that we changed the column names, and this time it shouldn't have NaNs. ``` gapminder_comb[['pop', 'lifeexp', 'gdppercap']].describe() ``` ### More summaries What if we just want a single value, like the mean of the population? We can call mean on a single column this way: ``` gapminder_comb['pop'].mean() ``` What if we want to know the mean population by _continent_? Then we need to use the Pandas `groupby()` method and tell it which column we want to group by. ``` gapminder_comb[['continent', 'pop']].groupby(by='continent').mean() ``` What if we want to know the median population by continent? ``` gapminder_comb[['continent', 'pop']].groupby(by='continent').median() ``` Or the number of entries (rows) per continent? ``` gapminder_comb[['continent', 'country']].groupby(by='continent').count() ``` Sometimes we don't want a whole `DataFrame`. Here is another way to do this that produces a `Series` telling us the number of entries (rows), as opposed to a `DataFrame`.
``` gdpcap = gapminder_comb[['continent', 'country']].groupby(by='continent').size() gdpcap ``` We can also look at the mean GDP per capita of each country: ``` gapminder_comb[['country', 'gdppercap']].groupby(by='country').mean().head(12) ``` What if we wanted a new `DataFrame` that just contained these summaries? This could be a table in a report, for example. ``` continent_mean_pop = gapminder_comb[['continent', 'pop']].groupby(by='continent').mean() continent_mean_pop = continent_mean_pop.rename(columns = {'pop':'meanpop'}) continent_row_ct = gapminder_comb[['continent', 'country']].groupby(by='continent').count() continent_row_ct = continent_row_ct.rename(columns = {'country':'nrows'}) continent_median_pop = gapminder_comb[['continent', 'pop']].groupby(by='continent').median() continent_median_pop = continent_median_pop.rename(columns = {'pop':'medianpop'}) gapminder_summs = pd.concat([continent_row_ct,continent_mean_pop,continent_median_pop], axis=1) gapminder_summs ``` ## Visualization with `matplotlib` Recall that [matplotlib](http://matplotlib.org) is Python's main visualization library. It provides a range of tools for constructing plots, and numerous high-level plotting libraries (e.g., [Seaborn](http://seaborn.pydata.org)) are built with matplotlib in mind. When we were in the early stages of setting up our analysis, we loaded these libraries like so: ``` import matplotlib.pyplot as plt %matplotlib inline import seaborn as sns sns.set() ``` *Consider the commands above to be essential practice for plotting (as essential as **`import`** `pandas` **`as`** `pd` is for data munging).* Now, let's turn to data visualization. In order to get a feel for the properties of the data set we are working with, data visualization is key.
While we will focus only on the essentials of how to properly construct plots in univariate and bivariate settings here, it's worth noting that both matplotlib and Seaborn support a wide variety of plots: [matplotlib gallery](http://matplotlib.org/gallery.html), [Seaborn gallery](http://seaborn.pydata.org/examples/).

---

### Single variables

* __Histograms__ - provide a quick way of visualizing the distribution of numerical data, or the frequencies of observations for categorical variables.

```
plt.hist(gapminder_copy['lifeexp'])
plt.xlabel('lifeexp')
plt.ylabel('count')
```

* __Boxplots__ - provide a way of comparing summary measures (e.g., max, min, quartiles) across groups in a data set. Boxplots can be particularly useful with larger data sets.

---

```
sns.boxplot(x='year', y='lifeexp', data = gapminder_copy)
plt.xlabel('year')
plt.ylabel('lifeexp')
```

### Pairs of variables

* __Scatterplots__ - visualize the relationship between two variables. Plotting GDP per capita on a log scale makes its relationship with life expectancy much clearer.

```
plt.scatter(gapminder_copy['gdppercap'], gapminder_copy['lifeexp'])
plt.xlabel('gdppercap')
plt.ylabel('lifeexp')

plt.scatter(gapminder_copy['gdppercap'], gapminder_copy['lifeexp'])
plt.xscale('log')
plt.xlabel('gdppercap')
plt.ylabel('lifeexp')
```

---

### Why use `seaborn`?

As noted above, Seaborn is a high-level plotting library for statistical data visualization. In addition to simplifying plotting, it also provides facilities for customizing matplotlib defaults (accessible via `sns.set()`).

### Saving your plots as image files

If you'd like to save a plot as an image file, you can run `fig.savefig('my_figure.png')`, where `fig` is a matplotlib figure object (e.g., `fig = plt.gcf()`) and `'my_figure.png'` is the file name.

## Interpret plots and summaries

### Exploration is an iterative process

In this lesson, we've taken the raw data and worked through steps to prepare it for analysis, but we have not yet done any "data analysis".
This part of the data workflow can be thought of as "exploratory data analysis", or EDA. Many of the steps we've shown are aimed at uncovering interesting or problematic things in the dataset that are not immediately obvious. We want to stress that when you're doing EDA, it will not necessarily be a linear workflow like what we have shown. When you plot or summarize your data, you may uncover new issues: for example, we saw this when we made a mistake fixing the naming conventions for the Democratic Republic of Congo. You might discover outliers, unusually large values, or points that don't make sense in your plots. Clearly, the work here isn't done: you'll have to investigate these points, decide how to fix any potential problems, document the reasoning for your actions, and check that your fix actually worked. On the other hand, plots and summaries might reveal interesting questions about your data. You may return to the cleaning and prepping steps in order to dig deeper into these questions. You should continuously refine your plots to give the clearest picture of your hypotheses.

### Interesting findings

Any interesting findings will be particular to the dataset at hand, and should build on the summaries and plots from the previous section.

# Putting it all together

On your own or with a partner, using the techniques you've learned in this lesson, explore one or both of the data sets provided in Data Carpentry's Lessons [Introduction to Genomics](http://www.datacarpentry.org/introduction-genomics/02-examining-sra-runtable.html) and [Python for Ecologists](http://www.datacarpentry.org/python-ecology-lesson/01-starting-with-data/). We've provided headers to guide you through the process.

* The Genomics Lesson uses the [Lenski dataset](http://www.ncbi.nlm.nih.gov/sra?term=SRA026813). Follow section A in [these directions](http://www.datacarpentry.org/introduction-genomics/02-examining-sra-runtable.html) to download the .csv file.
* The Ecology Lesson uses [surveys.csv](https://ndownloader.figshare.com/files/2292172) from the Portal Teaching data, a subset of the data from [Ernst et al Long-term monitoring and experimental manipulation of a Chihuahuan Desert ecosystem near Portal, Arizona, USA](http://www.esapubs.org/archive/ecol/E090/118/default.htm). #### Import your data #### Describe your data set here using *Markdown* What is the general shape of your `DataFrame`? What are the datatypes? Are there missing values? What questions do you have about your data set and how will you answer those questions? Answers: #### Calculate summary statistics for your data set #### Is this a tidy data set? Why or why not? #### Create bar, box, and scatter plots of your data set. What insights do these plots provide?
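A minimal starting point for the first two steps (import and summarize) might look like the sketch below. The stand-in frame and its column names are assumptions so the sketch runs anywhere; in practice you would load the downloaded file with `pd.read_csv('surveys.csv')` instead:

```python
import pandas as pd

# In practice: surveys = pd.read_csv('surveys.csv')
# A tiny stand-in frame (column names assumed) so the sketch runs anywhere:
surveys = pd.DataFrame({
    'species_id': ['DM', 'DM', 'PF', None],
    'weight':     [40.0, 48.0, 8.0, None],
})

surveys.info()                        # shape, dtypes, memory use
missing = surveys.isna().sum()        # missing values per column
stats = surveys['weight'].describe()  # numeric summary (NaNs excluded)
print(missing)
print(stats)
```

From here, `groupby` summaries and the bar/box/scatter plots follow the same patterns shown earlier in the lesson.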
```
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn.linear_model import LinearRegression, LogisticRegression, BayesianRidge
from sklearn.model_selection import train_test_split

survey_data = pd.read_csv('data/Questionnaire_July 31, 2019_10.47.csv')
maps_data = pd.read_csv('data/map-data.csv')

nasa_scale = pd.DataFrame(columns=['Mental_Demand', 'Physical_Demand', 'Temporal_Demand',
                                   'Performance', 'Effort', 'Frustration'])

# Each pairwise-comparison question (Q9-Q23) presents two subscales; the answer
# names the subscale the participant weighted more heavily. Map each question
# to (the answer that increments its tally, the other subscale in the pair).
pairs = {
    'Q9':  ('Mental Demand',   'Physical Demand'),
    'Q10': ('Performance',     'Frustration'),
    'Q11': ('Performance',     'Mental Demand'),
    'Q12': ('Temporal Demand', 'Frustration'),
    'Q13': ('Mental Demand',   'Effort'),
    'Q14': ('Frustration',     'Physical Demand'),
    'Q15': ('Performance',     'Effort'),
    'Q16': ('Physical Demand', 'Performance'),
    'Q17': ('Temporal Demand', 'Effort'),
    'Q18': ('Physical Demand', 'Temporal Demand'),
    'Q19': ('Frustration',     'Effort'),
    'Q20': ('Performance',     'Temporal Demand'),
    'Q21': ('Effort',          'Physical Demand'),
    'Q22': ('Frustration',     'Mental Demand'),
    'Q23': ('Temporal Demand', 'Mental Demand'),
}

for x in range(2, survey_data.Q2_1.size):
    # first count the number of times each scale appeared in the row
    counts = {'Mental Demand': 0, 'Physical Demand': 0, 'Temporal Demand': 0,
              'Performance': 0, 'Effort': 0, 'Frustration': 0}
    for q, (winner, other) in pairs.items():
        if survey_data[q][x] == winner:
            counts[winner] += 1
        else:
            counts[other] += 1

    # get the subscale ratings
    mentalVal      = int(survey_data.Q2_1[x])
    physicalVal    = int(survey_data.Q3_1[x])
    temporalVal    = int(survey_data.Q4_1[x])
    performanceVal = int(survey_data.Q5_1[x])
    effortVal      = int(survey_data.Q6_1[x])
    frustrationVal = int(survey_data.Q7_1[x])

    # create that participant's nasa-tlx ratings (rating * tally per subscale)
    row = [mentalVal      * counts['Mental Demand'],
           physicalVal    * counts['Physical Demand'],
           temporalVal    * counts['Temporal Demand'],
           performanceVal * counts['Performance'],
           effortVal      * counts['Effort'],
           frustrationVal * counts['Frustration']]

    # add the participant's nasa-tlx ratings to the master df
    nasa_scale.loc[len(nasa_scale)] = row

nasa_scale
nasa_scale.loc[1]

partID = pd.DataFrame({'partID': list(survey_data['Q51_1'].iloc[2:])})
nasa_scale.insert(0, "partID", partID, True)
nasa_scale.iloc[0,0] = 1
nasa_scale

print("mental demand average:",   round(nasa_scale.Mental_Demand.mean(), 2))
print("physical demand average:", round(nasa_scale.Physical_Demand.mean(), 2))
print("temporal demand average:", round(nasa_scale.Temporal_Demand.mean(), 2))
print("performance average:",     round(nasa_scale.Performance.mean(), 2))
print("effort average:",          round(nasa_scale.Effort.mean(), 2))
print("frustration average:",     round(nasa_scale.Frustration.mean(), 2))
```
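If an overall workload score is also needed, the NASA-TLX convention is to sum the six weighted subscale values and divide by 15 (the total number of pairwise comparisons). A sketch with stand-in weighted values for one hypothetical participant:

```python
# Stand-in weighted subscale values (rating * comparison tally), as built above
weighted = {
    'Mental_Demand': 350, 'Physical_Demand': 20, 'Temporal_Demand': 120,
    'Performance': 150, 'Effort': 240, 'Frustration': 45,
}

# Overall workload = sum of weighted subscale values / 15 comparisons
overall = sum(weighted.values()) / 15
print(round(overall, 2))
```

Applied to the frame above, this would be `nasa_scale[subscale_columns].sum(axis=1) / 15` for every participant at once (where `subscale_columns` names the six subscale columns).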
``` from netCDF4 import Dataset import netCDF4 as netcdf import numpy as np import matplotlib.pyplot as plt import matplotlib.ticker as mticker import matplotlib as mpl #mapping import cartopy.crs as ccrs import cartopy.feature as cfeature from cartopy.io import shapereader from cartopy.mpl.gridliner import LONGITUDE_FORMATTER, LATITUDE_FORMATTER from cartopy.mpl.ticker import LongitudeFormatter, LatitudeFormatter import xarray as xr import xarray.ufuncs as xu from scipy.interpolate import griddata #from pyresample.geometry import SwathDefinition #from pyresample.kd_tree import resample_nearest hfr_loc = "./data/hf_radar_05_2020.nc" hfrdata = xr.open_dataset(hfr_loc) cmems_loc = "./data/CMEMS-global-analysis-forecast-phy-001-024-hourly-u-v.nc" cmemsdata = xr.open_dataset(cmems_loc) lat_hfr=hfrdata.variables['lat'][:] lon_hfr=hfrdata.variables['lon'][:] time_hfr=hfrdata.variables['time'][:] u_hfr=hfrdata.variables['u'][:,:,:] v_hfr=hfrdata.variables['v'][:,:,:] lat_cmems=cmemsdata.variables['latitude'][:] lon_cmems=cmemsdata.variables['longitude'][:] time_cmems=cmemsdata.variables['time'][:] u_cmems=cmemsdata.variables['uo'][:,:,:] v_cmems=cmemsdata.variables['vo'][:,:,:] time_hfr.values[3], time_cmems.values[3] ``` ``` x_hfr, y_hfr = np.meshgrid(lon_hfr,lat_hfr) hfr_meshx = x_hfr.ravel() hfr_meshy = y_hfr.ravel() x_cmems, y_cmems = np.meshgrid(lon_cmems,lat_cmems) #cmems_meshx = x_cmems.ravel() #cmems_meshy = y_cmems.ravel() #cmems_meshu = u_cmems.values.ravel() #cmems_meshv = v_cmems.values.ravel() #indexc = ~np.isnan(cmems_meshu) #cmems_meshu = cmems_meshu[indexc] #cmems_meshv = cmems_meshv[indexc] #cmems_meshx = cmems_meshx[indexc] #cmems_meshy = cmems_meshy[indexc] x_cmems.shape ``` Put HFR U and V components onto the model grid ``` HFRU = [] HFRV = [] for i in enumerate(u_hfr[:,0,0]): u_nonan = np.nan_to_num(u_hfr[i[0],:,:], copy=True, nan=9999.0, posinf=None, neginf=None) v_nonan = np.nan_to_num(v_hfr[i[0],:,:], copy=True, nan=9999.0, posinf=None, 
neginf=None)
    HFRU_i = griddata((hfr_meshx.ravel(), hfr_meshy.ravel()), u_nonan.ravel(),
                      (x_cmems, y_cmems), method='linear')
    HFRV_i = griddata((hfr_meshx.ravel(), hfr_meshy.ravel()), v_nonan.ravel(),
                      (x_cmems, y_cmems), method='linear')
    HFRU.append(np.array(HFRU_i))
    HFRV.append(np.array(HFRV_i))

HFRU = np.array(HFRU)
HFRV = np.array(HFRV)
HFRU.shape

HFR_U = np.ma.masked_where(HFRU > 250, HFRU)
HFR_V = np.ma.masked_where(HFRV > 250, HFRV)
```

Compute bias for U and V

```
MODEL_U = np.squeeze(u_cmems)
MODEL_V = np.squeeze(v_cmems)
MODEL_U.shape

HFR_SPEED = np.sqrt(HFR_U**2 + HFR_V**2)
MODEL_SPEED = np.sqrt(MODEL_U**2 + MODEL_V**2)

Udiff = HFR_U - MODEL_U
Vdiff = HFR_V - MODEL_V
SPEEDdiff = HFR_SPEED - MODEL_SPEED

plt.pcolormesh(x_cmems, y_cmems, SPEEDdiff[2,:,:], shading='auto')
time_hfr.values[2], time_cmems.values[2]

a = HFR_U
b = MODEL_U
a.shape, b.shape

# Per-cell mean over the time axis: masked HF-radar cells (no coverage) and
# model NaNs (land cells) are excluded. (Dividing a per-cell sum by the total
# element count of the whole 3-D array, as before, badly under-counts the mean.)
abar = np.ma.mean(a, axis=0)
bbar = np.nanmean(b, axis=0)

Ubias = abar - bbar

#fig = plt.figure(figsize=(8,12))
#proj = ccrs.PlateCarree()
#ax=fig.add_subplot(1,1,1,projection=proj)
#ax.set_extent([-76, -73, 36.5, 39.5])
plt.pcolormesh(x_cmems, y_cmems, Ubias, shading='auto')
# add colorbar
#cax,kw = mpl.colorbar()
Ubias

a = HFR_V
b = MODEL_V
a.shape

abar = np.ma.mean(a, axis=0)
bbar = np.nanmean(b, axis=0)

Vbias = abar - bbar
Vbias.shape

plt.pcolormesh(x_cmems, y_cmems, Vbias, shading='auto')
np.max(Ubias)
np.max(Vbias)
```
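Bias alone can hide compensating errors, so a per-cell RMSE map is a useful companion statistic. A sketch on stand-in `(time, lat, lon)` arrays shaped like `HFR_U` and `MODEL_U` (the random fields here are assumptions standing in for the real data):

```python
import numpy as np

rng = np.random.default_rng(0)
# Stand-in (time, lat, lon) fields in place of HFR_U and MODEL_U
obs = rng.normal(0.0, 0.1, size=(24, 5, 5))
model = obs + rng.normal(0.05, 0.02, size=obs.shape)  # model with a small offset

# RMSE over the time axis, ignoring NaNs (HF-radar coverage has gaps)
rmse = np.sqrt(np.nanmean((obs - model) ** 2, axis=0))
print(rmse.shape)  # one value per grid cell
```

The resulting 2-D field can be plotted with the same `plt.pcolormesh(x_cmems, y_cmems, ...)` call used for the bias maps.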
# RNN Evaluation From our paper on "Explainable Prediction of Acute Myocardial Infarction using Machine Learning and Shapley Values" ``` # Import libraries from keras import optimizers, losses, activations, models from keras.callbacks import ModelCheckpoint, EarlyStopping, LearningRateScheduler, ReduceLROnPlateau from keras.layers import Layer, GRU, LSTM, Dense, Input, Dropout, Convolution1D, MaxPool1D, GlobalMaxPool1D, GlobalAveragePooling1D, \ concatenate from keras.layers import LeakyReLU from keras import regularizers, backend, initializers from keras.models import Sequential from keras.utils import to_categorical from keras.initializers import Ones, Zeros import keras.backend as K from keras.models import load_model from sklearn.metrics import f1_score, accuracy_score, roc_auc_score, confusion_matrix from sklearn import preprocessing import time import gc import pandas as pd import numpy as np import pylab as plt import tensorflow as tf from numpy import loadtxt from numpy import savetxt from tensorflow.python.framework import ops print(tf.__version__) # Visualization libraries import seaborn as sns ``` # Loading Data ``` # Load data train = loadtxt('train.csv', delimiter=',') test = loadtxt('test.csv', delimiter=',') # Split array train_x = train[:,:11] test_x = test[:,:11] train_y = train[:,11] test_y = test[:,11] train_x_noageandsex = train_x[:,:9] test_x_noageandsex = test_x[:,:9] train_y_noageandsex = train_y test_y_noageandsex = test_y class LayerNormalization(Layer): def __init__(self, eps=1e-6, **kwargs): self.eps = eps super(LayerNormalization, self).__init__(**kwargs) def build(self, input_shape): self.gamma = self.add_weight(name='gamma', shape=input_shape[-1:], initializer=Ones(), trainable=True) self.beta = self.add_weight(name='beta', shape=input_shape[-1:], initializer=Zeros(), trainable=True) super(LayerNormalization, self).build(input_shape) def call(self, x): mean = K.mean(x, axis=-1, keepdims=True) std = K.std(x, axis=-1, keepdims=True) 
        return self.gamma * (x - mean) / (std + self.eps) + self.beta

    def compute_output_shape(self, input_shape):
        return input_shape

X_train_noageandsex = np.reshape(train_x_noageandsex, (train_x_noageandsex.shape[0], 1, train_x_noageandsex.shape[1]))
X_test_noageandsex = np.reshape(test_x_noageandsex, (test_x_noageandsex.shape[0], 1, test_x_noageandsex.shape[1]))

train_y_noageandsex = to_categorical(train_y_noageandsex)
```

# Model Evaluation + Confusion Matrix

```
model = load_model('model_noageandsex1_final.h5', custom_objects={'LayerNormalization': LayerNormalization})
model.summary()

# Test the model (time.perf_counter replaces time.clock, removed in Python 3.8)
start = time.perf_counter()
pred_test = model.predict(X_test_noageandsex)
end = time.perf_counter()
pred_test = np.argmax(pred_test, axis=-1)
print("Time for prediction: {} ".format(end-start))

# Get f1 score
f1 = f1_score(test_y, pred_test, average="macro")
print("Test f1 score : %s "% f1)

# Get ROC AUC score
roc = roc_auc_score(test_y_noageandsex, pred_test)
print("Test ROC AUC Score : %s "% roc)

# Get the accuracy
acc = accuracy_score(test_y_noageandsex, pred_test)
print("Test accuracy score : %s "% acc)

# Get the specificity
tn, fp, fn, tp = confusion_matrix(test_y_noageandsex, pred_test).ravel()
specificity = tn / (tn+fp)
print("Specificity : %s "% specificity)

# Get the sensitivity
sensitivity = tp / (tp+fn)
print("Sensitivity: %s "% sensitivity)

# Confusion matrix
confusion = confusion_matrix(test_y_noageandsex, pred_test)
sns.heatmap(data=confusion, annot=True, xticklabels=["MI", "Not MI"], yticklabels=["MI", "Not MI"],
            fmt = "d", annot_kws={"fontsize":16})
plt.ylabel('Actual')
plt.xlabel('Predicted')
plt.yticks(va="center")
plt.show()
```
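Beyond sensitivity and specificity, clinical classifiers are often reported with positive and negative predictive values, which come from the same four confusion-matrix cells. A sketch with stand-in counts (the numbers are illustrative, not the model's actual results):

```python
# Stand-in confusion-matrix cells, same (tn, fp, fn, tp) layout as
# confusion_matrix(...).ravel() above
tn, fp, fn, tp = 850, 50, 30, 70

ppv = tp / (tp + fp)  # precision: P(disease | positive prediction)
npv = tn / (tn + fn)  # P(no disease | negative prediction)
print(round(ppv, 3), round(npv, 3))
```

Unlike sensitivity and specificity, PPV and NPV depend on class prevalence, so they should be interpreted against the MI rate in the test set.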
```
from keras.datasets import mnist

(trainX, trainY), (testX, testY) = mnist.load_data()

from keras.models import Model
from keras.layers import Input, Reshape, Dense, Flatten, Dropout, LeakyReLU

class Autoencoder:
  def __init__(self, img_shape=(28, 28), latent_dim=2, n_layers=2, n_units=128):
    # encoder
    h = i = Input(img_shape)
    h = Flatten()(h)
    for _ in range(n_layers):
      h = Dense(n_units, activation='relu')(h)
    o = Dense(latent_dim)(h)
    self.encoder = Model(inputs=[i], outputs=[o])
    # decoder
    i = h = Input((latent_dim,))
    for _ in range(n_layers):
      h = Dense(n_units, activation='relu')(h)
    h = Dense(img_shape[0] * img_shape[1])(h)
    o = Reshape(img_shape)(h) # reshape back to a single image
    self.decoder = Model(inputs=[i], outputs=[o])
    # stacked autoencoder
    i = Input(img_shape)
    z = self.encoder(i) # push observations into latent space
    o = self.decoder(z) # project from latent space to feature space
    self.auto = Model(inputs=[i], outputs=[o])
    self.auto.compile(loss='mse', optimizer='adam')

model = Autoencoder()
model.auto.fit(trainX, trainX, validation_data=(testX[:100], testX[:100]), batch_size=100, epochs=10)

import matplotlib.pyplot as plt
%matplotlib inline

# transform each input image into the latent space
z = model.encoder.predict(trainX)

# color each point by its label
colors = trainY.tolist()

# plot the latent space
plt.scatter(z[:,0], z[:,1], marker='o', s=1, c=colors)
plt.colorbar()

import numpy as np

# decode a sample point from the latent space
y = np.array([[60, -30]])
prediction = model.decoder.predict(y)
plt.imshow(prediction.squeeze())
```

# Create JS data structures

```
import json

with open('data/trainX-sample.json', 'w') as out:
  json.dump(trainX[:50].tolist(), out)

with open('data/trainY.json', 'w') as out:
  json.dump(trainY.tolist(), out)

import matplotlib.pyplot as plt
import numpy as np
import math

px_per_cell_side = 28
cells_per_axis = math.floor(2048/px_per_cell_side)
cells_per_atlas = cells_per_axis**2
n_atlases = math.ceil(trainX.shape[0] / cells_per_atlas)

# create a series 
of columns and suture them together for i in range(n_atlases-1): # -1 to just create full atlas files (skip the remainder) start = i * cells_per_atlas end = (i+1) * cells_per_atlas x = trainX[start:end] cols = [] for j in range(cells_per_axis): col_start = j*cells_per_axis col_end = (j+1)*cells_per_axis col = x[col_start:col_end].reshape(px_per_cell_side*cells_per_axis, px_per_cell_side) cols.append(col) im = np.hstack(cols) im = 255-im # use 255- to flip black and white plt.imsave('images/atlas-images/atlas-' + str(i) + '.jpg', im, cmap='gray') # get a single row of images to render to ui row = 255-x[col_start:col_end] if False: plt.imsave('images/sample-row.jpg', np.hstack(row), cmap='gray') print(' * total cells:', n_atlases * cells_per_atlas) consumed = set() for i in range(10): for jdx, j in enumerate(trainY): if j == i: im = 255 - trainX[jdx].squeeze() plt.imsave('images/digits/digit-' + str(i) + '.png', im, cmap='gray') break # create low dimensional embeddings # from MulticoreTSNE import MulticoreTSNE as TSNE from sklearn.manifold import TSNE, MDS, SpectralEmbedding, Isomap, LocallyLinearEmbedding from umap import UMAP from copy import deepcopy import rasterfairy import json def center(arr): '''Center an array to clip space -0.5:0.5 on all axes''' arr = deepcopy(arr) for i in range(arr.shape[1]): arr[:,i] = arr[:,i] - np.min(arr[:,i]) arr[:,i] = arr[:,i] / np.max(arr[:,i]) arr[:,i] -= 0.5 return arr def curate(arr): '''Prepare an array for persistence to json''' return np.around(center(arr), 4).tolist() # prepare model inputs n = 10000 #trainX.shape[0] sampleX = trainX[:n] flat = sampleX.reshape(sampleX.shape[0], sampleX.shape[1] * sampleX.shape[2]) # create sklearn outputs for clf, label in [ #[SpectralEmbedding, 'se'], #[Isomap, 'iso'], #[LocallyLinearEmbedding, 'lle'], #[MDS, 'mds'], [TSNE, 'tsne'], [UMAP, 'umap'], ]: print(' * processing', label) positions = clf(n_components=2).fit_transform(flat) with open('data/mnist-positions/' + label + 
'_positions.json', 'w') as out: json.dump(curate(positions), out) import keras.backend as K import numpy as np import os, json # create autoencoder outputs model = Autoencoder(latent_dim=2) lr = 0.005 for i in range(10): lr *= 0.9 print(' * running step:', i, '-- lr:', lr) K.set_value(model.auto.optimizer.lr, lr) model.auto.fit(trainX, trainX, batch_size=250, epochs=10) # save the auto latent positions to disk auto_positions = model.encoder.predict(sampleX) with open('data/mnist-positions/auto_positions.json', 'w') as out: json.dump(curate(auto_positions), out) # save the decoder to disk model.decoder.save('data/model/decoder.h5') os.system('tensorflowjs_converter --input_format keras \ data/model/decoder.h5 \ data/model/decoder') # save the decoder domain to disk domains = [[ float(np.min(z[:,i])), float(np.max(z[:,i])) ] for i in range(z.shape[1])] with open('data/model/decoder-domains.json', 'w') as out: json.dump(domains, out) %matplotlib inline import matplotlib.pyplot as plt # plot the latent space z = model.encoder.predict(trainX[:n]) # project inputs into latent space colors = trainY[:n].tolist() # color points with labels plt.scatter(z[:,0], z[:,1], marker='o', s=1, c=colors) plt.colorbar() import math px_per_cell_side = 28 cells_per_axis = math.floor(2048/px_per_cell_side) cells_per_atlas = cells_per_axis**2 n_atlases = math.ceil(trainX.shape[0] / cells_per_atlas) print(' * total cells:', n_atlases * cells_per_atlas) # create a series of columns and suture them together for i in range(n_atlases-1): # -1 to just create full atlas files (skip the remainder) start = i * cells_per_atlas end = (i+1) * cells_per_atlas x = trainX[start:end] cols = [] for j in range(cells_per_axis): col_start = j*cells_per_axis col_end = (j+1)*cells_per_axis col = x[col_start:col_end].reshape(px_per_cell_side*cells_per_axis, px_per_cell_side) cols.append(col) im = np.hstack(cols) plt.imsave('atlas-' + str(i) + '.jpg', im, cmap='gray') b = np.hstack(cols) plt.imshow(b) ```
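A common way to inspect a 2-D latent space like this one is to decode a regular grid of latent points and tile the results into a single image sheet. The sketch below builds the grid and the tiling; the decoder call is stubbed with zeros so the sketch runs standalone (in the notebook it would be `model.decoder.predict(grid)`):

```python
import numpy as np

# n x n grid of latent points spanning an assumed latent range
n = 5
zs = np.linspace(-2.0, 2.0, n)
grid = np.array([[a, b] for b in zs for a in zs])  # shape (n*n, 2)

# Stand-in for imgs = model.decoder.predict(grid) -> (n*n, 28, 28)
imgs = np.zeros((n * n, 28, 28))

# Tile the decoded digits into one (n*28, n*28) sheet for plt.imshow
sheet = imgs.reshape(n, n, 28, 28).transpose(0, 2, 1, 3).reshape(n * 28, n * 28)
print(grid.shape, sheet.shape)
```

With the real decoder, `plt.imshow(sheet, cmap='gray')` shows how digits morph as you move across the latent plane.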
# BlackHoles@Home Tutorial: Creating `BOINC` native applications

## Author: Leo Werneck

## This tutorial notebook demonstrates how to write native programs for the `BOINC` infrastructure, as well as how to convert `NRPy+` code into a `BOINC` application

## <font color=red>**WARNING**:</font> this tutorial notebook is currently incompatible with Windows

## Introduction:

The [BlackHoles@Home](http://blackholesathome.net/) project allows users to volunteer CPU time so a large number of binary black hole simulations can be performed. The objective is to create a large catalog of [gravitational waveforms](https://en.wikipedia.org/wiki/Gravitational_wave), which can be used by observatories such as [LIGO](https://www.ligo.org), [VIRGO](https://www.virgo-gw.eu), and, in the future, [LISA](https://lisa.nasa.gov) in order to infer what was the source of a detected gravitational wave.

BlackHoles@Home is destined to run on the [BOINC](https://boinc.berkeley.edu) infrastructure (alongside [Einstein@Home](https://einsteinathome.org/) and [many other great projects](https://boinc.berkeley.edu/projects.php)), enabling anyone with a computer to contribute to the construction of the largest numerical relativity gravitational wave catalogs ever produced.
### Additional Reading Material:

* [BOINC's Wiki page](https://boinc.berkeley.edu/trac/wiki)
* [BOINC's Basic API Wiki page](https://boinc.berkeley.edu/trac/wiki/BasicApi)
* [Tutorial notebook on how to compile the `BOINC` libraries](Tutorial-BlackHolesAtHome-Compiling_the_BOINC_libraries.ipynb)
* [Tutorial notebook on creating a `BOINC` application using the `BOINC` WrapperApp](Tutorial-BlackHolesAtHome-BOINC_applications-Using_the_WrapperApp.ipynb)

<a id='toc'></a>

# Table of Contents
$$\label{toc}$$

This tutorial explains how to create native `BOINC` applications. The structure of this notebook is as follows:

1. [Step 1](#introduction): Introduction
1. [Step 2](#loading_python_nrpy_modules): Loading needed Python/NRPy+ modules
1. [Step 3](#creating_native_boinc_app): Creating a `BOINC` native application
    1. [Step 3.a](#simplest_boinc_app): A very simple `BOINC` native application
    1. [Step 3.b](#nrpy_to_boinc): Converting any `NRPy+` code into a `BOINC` native app
1. [Step 4](#latex_pdf_output): Output this notebook to $\LaTeX$-formatted PDF file

<a id='introduction'></a>

# Step 1: Introduction \[Back to [top](#toc)\]
$$\label{introduction}$$

A native `BOINC` application is a program which directly interfaces with the `BOINC` API. During compilation, we link the executable with the `BOINC` libraries, thus creating an executable which can run in the `BOINC` infrastructure. If you have not yet compiled the `BOINC` libraries, please read the [tutorial notebook on how to do so](Tutorial-BlackHolesAtHome-Compiling_the_BOINC_libraries.ipynb).

This tutorial notebook aims at teaching you two key concepts:

1. How to write simple `BOINC` applications by hand
1. How to convert `NRPy+` code into a `BOINC` application

We will be using the `NRPy+` code generated by the [Tutorial-Start_to_Finish-BSSNCurvilinear-Two_BHs_Collide.ipynb](../Tutorial-Start_to_Finish-BSSNCurvilinear-Two_BHs_Collide.ipynb) NRPy+ tutorial notebook as an example.
<a id='loading_python_nrpy_modules'></a> # Step 2: Loading needed Python/NRPy+ modules \[Back to [top](#toc)\] $$\label{loading_python_nrpy_modules}$$ ``` # Step 2: Load Python/NRPy+ modules and perform basic setup # Step 2.a: Load needed Python modules import os,sys # Step 2.b: Add NRPy's root directory to the sys.path() sys.path.append("..") # Step 2.c: Load NRPy+'s command line helper module import cmdline_helper as cmd # NRPy+: Multi-platform Python command-line interface # Step 2.d: Set the path to the BOINC source code path_to_boinc = "/Users/werneck/bhah/boinc" boinc_api_dir = os.path.join(path_to_boinc,"api") boinc_lib_dir = os.path.join(path_to_boinc,"lib") boinc_zip_dir = os.path.join(path_to_boinc,"zip") current_path = os.getcwd() # Step 2.e: Adjust the compiler and compilation flags based on the system # Step 2.e.i: Set the C++ compiler flags global CXX_compiler,CXXFLAGS,LDFLAGS CXXFLAGS = "-fopenmp -march=native -Ofast -funroll-loops " CXXFLAGS += "-I%s -I%s -I%s "%(boinc_api_dir,boinc_lib_dir,boinc_zip_dir) LDFLAGS = "-L%s -L%s -L%s -lboinc_api -lboinc -lboinc_zip "%(boinc_api_dir,boinc_lib_dir,boinc_zip_dir) # Step 2.e.ii: Set the C++ compiler if sys.platform == 'linux': CXX_compiler = "g++ " elif sys.platform == 'darwin': # Set path to Clang compiler installed with homebrew path_to_llvm = "/usr/local/opt/llvm/" path_to_clangpp = os.path.join(path_to_llvm,"bin","clang++") path_to_clang_include = os.path.join(path_to_llvm,"include") path_to_clang_library = os.path.join(path_to_llvm,"lib") CXX_compiler = path_to_clangpp+" " CXXFLAGS += "-I%s "%(path_to_clang_include) LDFLAGS += "-L%s "%(path_to_clang_library) else: print("Error: platform %s is currently not supported."%sys.platform) sys.exit(1) ``` <a id='creating_native_boinc_app'></a> # Step 3: Creating a `BOINC` native application \[Back to [top](#toc)\] $$\label{creating_native_boinc_app}$$ A native `BOINC` application can be created by: 1. 
Including the `BOINC` API header file by adding `#include "boinc_api.h"` to your code
1. Calling the `boinc_init()` function at the beginning of the main function
1. Using `boinc_finish(0)` instead of `return 0` at the end of the main function

The `boinc_finish(err_code)` function should also be used instead of the `exit(err_code)` function whenever the program needs to stop running and return an error code.

<a id='simplest_boinc_app'></a>

## Step 3.a: A very simple `BOINC` native application \[Back to [top](#toc)\]
$$\label{simplest_boinc_app}$$

We now provide one of the simplest possible examples of a `BOINC` application, with minimal error handling included. This application:

1. Initializes the `BOINC` environment
1. Checks that the `BOINC` environment was initialized correctly
1. Prints a message to the user
1. Finalizes the `BOINC` environment and terminates

```
%%writefile simplest_boinc_app.cpp
// Step 0: Basic includes
// Step 0.a: Basic C++ header files
#include <iostream>

// Step 0.b: BOINC api header file
#include "boinc_api.h"

// Program description: this is one of the simplest BOINC
//                      applications that can be written.
//                      We start the BOINC environment
//                      by calling the boinc_init() function,
//                      check everything is OK (erroring out
//                      if it isn't), print a message to the
//                      user, and terminate using a call to
//                      the boinc_finish() function.
int main() { // Step 1: Initialize the BOINC environment with boinc_init() int status = boinc_init(); // Step 2: Check everything is OK, error out if not if( status != 0 ) { fprintf(stderr,"ERROR: boinc_init() returned a non-zero value: %d\n",status); boinc_finish(status); } // Step 3: Print a message to the user printf("Hello BOINC!\n"); // Step 4: Terminate the program with boinc_finish() boinc_finish(0); } ``` Let us now compile and run the application: ``` compile_string = CXX_compiler+CXXFLAGS+"simplest_boinc_app.cpp -o simplest_boinc_app "+LDFLAGS !rm -rf simplest_boinc_app_test_dir cmd.mkdir("simplest_boinc_app_test_dir") !mv simplest_boinc_app.cpp simplest_boinc_app_test_dir !cd simplest_boinc_app_test_dir && $compile_string && ./simplest_boinc_app && ls ``` Note that [just like when using the `BOINC` WrapperApp](Tutorial-BlackHolesAtHome-BOINC_applications-Using_the_WrapperApp.ipynb), we have produced the output files `boinc_finish_called` and `stderr.txt`, even though we did not explicitly generate them in our program. This is because the `BOINC` api generates these files automatically for us. If we take a look at the contents of the files, we see that the `boinc_finish_called` simply contains the integer argument of the `boinc_finish()` function, while the `stderr.txt` contains some basic information stating that we are running the application outside of the `BOINC` infrastructure and that the `boinc_finish()` function was called: ``` !cd simplest_boinc_app_test_dir && cat boinc_finish_called stderr.txt ``` <a id='nrpy_to_boinc'></a> ## Step 3.b: Converting any `NRPy+` code into a `BOINC` native app \[Back to [top](#toc)\] $$\label{nrpy_to_boinc}$$ We now provide a script for converting an existing `NRPy+` code into a `BOINC` application. Note that it is relatively easy to convert an existing `C` or `C++` application into a native `BOINC` application. 
Unless you want to manually create a wrapper function that calls your `C` code, it is recommended to compile your code using a `C++` compiler instead. In the case of `NRPy+` applications, this can be achieved by simply adding: ```cpp #ifdef __cplusplus # define restrict __restrict__ #endif ``` to the very top of the main application source code file, changing the file extension from `.c` to `.cpp`/`.cc`/`.C`, and then compiling the code using the flag `-std=c++11`. We also need to replace all calls to the `exit()` function with calls to the `boinc_finish()` function. The following script takes care of that: ``` # Converting NRPy+ code into a BOINC app # Description: This function reads a NRPy+ source code # one line at a time and copies them into # a new file which is compatible with the # BOINC infrastructure. def NRPy_to_BOINC(input_file,output_file): # Step 1: Open the NRPy+ input file with open(input_file,"r") as file: # Step 2: Create the BOINC application # Step 2.a: Print a message to the user describing # some basic changes. Add the "restrict" # keyword so that it is compatible with # C++, which is required by BOINC. output_string = """ //**************************************************************** // This NRPy+ code has been converted to work with the // BOINC infrastructure. Please compile it with a C++ // compiler. Don't forget to add the -std=c++11 flag. #ifdef __cplusplus # define restrict __restrict__ #endif // .--------------------. // | BOINC HEADER FILES | // .--------------------. // Note: You can comment out (or remove) the boinc_zip.h header // if you do not plan on using the BOINC zip functions. #include \"boinc_api.h\" #include \"boinc_zip.h\" //**************************************************************** """ # Step 2.b: Loop over the file, adding calls to # the BOINC API functions as needed. 
        indent = " "
        for line in file:
            # Step 2.b.i: After the main() function, add a call to the boinc_init() function
            if "int main" in line:
                output_string += "\n"+line+"\n"+indent+"boinc_init();\n"
            # Step 2.b.ii: Replace return 0; with boinc_finish(0);
            elif "return 0" in line:
                output_string += indent+"boinc_finish(0);\n"
            # Step 2.b.iii: Replace exit(err_code) function calls with boinc_finish(err_code)
            elif "exit(" in line:
                output_string += line.replace("exit","boinc_finish")
            else:
                # Step 2.b.iv: Otherwise, just copy the original source code
                output_string += line

    # Step 3: Write the output file
    with open(output_file,"w") as file:
        file.write(output_string)
```
Now let's convert a `NRPy+` generated code into a `BOINC` code, compile it, and run it. We will take as an example the files obtained after running the [Tutorial-Start_to_Finish-BSSNCurvilinear-Two_BHs_Collide.ipynb](../Tutorial-Start_to_Finish-BSSNCurvilinear-Two_BHs_Collide.ipynb). Running the cell below will perform the following tasks:

1. Run the [Tutorial-Start_to_Finish-BSSNCurvilinear-Two_BHs_Collide.ipynb](../Tutorial-Start_to_Finish-BSSNCurvilinear-Two_BHs_Collide.ipynb) NRPy+ tutorial notebook.
1. Move the folder containing the source files into our current working directory (`nrpytutorial/BHAH`)
1. Convert the main program, which is defined in the `BSSN_Two_BHs_Collide_Ccodes/BrillLindquist_Playground.c` file, into a `BOINC` compatible application
1. Compile the source code, linking to the `BOINC` libraries
1. Execute the code

*WARNING*: because this step involves generating the source code for the BSSN equations, running the cell below will take a few minutes.
```
# Run the Tutorial-Start_to_Finish-BSSNCurvilinear-Two_BHs_Collide.ipynb tutorial notebook
!pip install runipy > /dev/null
!rm -rf BSSN_Two_BHs_Collide_Ccodes out96*.txt out96*.png
!cd .. && runipy Tutorial-Start_to_Finish-BSSNCurvilinear-Two_BHs_Collide.ipynb && mv BSSN_Two_BHs_Collide_Ccodes BHAH

# Convert the main program into a BOINC-compatible C++ source file
NRPy_to_BOINC("BSSN_Two_BHs_Collide_Ccodes/BrillLindquist_Playground.c","BSSN_Two_BHs_Collide_Ccodes/BrillLindquist_Playground.cpp")

compile_string = CXX_compiler+CXXFLAGS+"BrillLindquist_Playground.cpp -o ../BrillLindquist_Playground "+LDFLAGS
!cd BSSN_Two_BHs_Collide_Ccodes/ && $compile_string
!./BrillLindquist_Playground 96 16 2
```
We can now visualize the solution, just like with the regular NRPy+ code (the cell below contains code that was extracted from the [Tutorial-Start_to_Finish-BSSNCurvilinear-Two_BHs_Collide.ipynb](../Tutorial-Start_to_Finish-BSSNCurvilinear-Two_BHs_Collide.ipynb) NRPy+ tutorial notebook):
```
## VISUALIZATION ANIMATION, PART 1: Generate PNGs, one per frame of movie ##
import os
import numpy as np
from scipy.interpolate import griddata
import matplotlib.pyplot as plt
from matplotlib.pyplot import savefig
from IPython.display import HTML
import matplotlib.image as mgimg
import glob
import sys
from matplotlib import animation

outdir = "./"
globby = glob.glob(os.path.join(outdir,'out96-00*.txt'))
file_list = []
for x in sorted(globby):
    file_list.append(x)

bound=1.4
pl_xmin = -bound
pl_xmax = +bound
pl_ymin = -bound
pl_ymax = +bound

for filename in file_list:
    fig = plt.figure()
    x,y,cf,Ham = np.loadtxt(filename).T # Transposed for easier unpacking
    plotquantity = cf
    plotdescription = "Numerical Soln."
    plt.title("Black Hole Head-on Collision (conf factor)")
    plt.xlabel("y/M")
    plt.ylabel("z/M")

    grid_x, grid_y = np.mgrid[pl_xmin:pl_xmax:300j, pl_ymin:pl_ymax:300j]
    points = np.zeros((len(x), 2))
    for i in range(len(x)):
        # Zach says: No idea why x and y get flipped...
        points[i][0] = y[i]
        points[i][1] = x[i]

    grid = griddata(points, plotquantity, (grid_x, grid_y), method='nearest')
    gridcub = griddata(points, plotquantity, (grid_x, grid_y), method='cubic')
    im = plt.imshow(gridcub, extent=(pl_xmin,pl_xmax, pl_ymin,pl_ymax))
    ax = plt.colorbar()
    ax.set_label(plotdescription)
    savefig(os.path.join(filename+".png"),dpi=150)
    plt.close(fig)
    sys.stdout.write("%c[2K" % 27)
    sys.stdout.write("Processing file "+filename+"\r")
    sys.stdout.flush()

## VISUALIZATION ANIMATION, PART 2: Combine PNGs to generate movie ##
# https://stackoverflow.com/questions/14908576/how-to-remove-frame-from-matplotlib-pyplot-figure-vs-matplotlib-figure-frame
# https://stackoverflow.com/questions/23176161/animating-pngs-in-matplotlib-using-artistanimation
fig = plt.figure(frameon=False)
ax = fig.add_axes([0, 0, 1, 1])
ax.axis('off')

myimages = []
for i in range(len(file_list)):
    img = mgimg.imread(file_list[i]+".png")
    imgplot = plt.imshow(img)
    myimages.append([imgplot])

ani = animation.ArtistAnimation(fig, myimages, interval=100, repeat_delay=1000)
ani.save(os.path.join(outdir,'BH_Head-on_Collision.mp4'), fps=5,dpi=150)
plt.close()

# Embed video based on suggestion:
# https://stackoverflow.com/questions/39900173/jupyter-notebook-html-cell-magic-with-python-variable
HTML("""
<video width="480" height="360" controls>
  <source src=\""""+os.path.join(outdir,"BH_Head-on_Collision.mp4")+"""\" type="video/mp4">
</video>
""")
```
<a id='latex_pdf_output'></a>

# Step 4: Output this notebook to $\LaTeX$-formatted PDF file \[Back to [top](#toc)\]
$$\label{latex_pdf_output}$$

The following code cell converts this Jupyter notebook into a proper, clickable $\LaTeX$-formatted PDF file.
After the cell is successfully run, the generated PDF may be found in the root NRPy+ tutorial directory, with filename [Tutorial-BlackHolesAtHome-BOINC_applications-Native_applications.pdf](Tutorial-BlackHolesAtHome-BOINC_applications-Native_applications.pdf). (Note that clicking on this link may not work; you may need to open the PDF file through another means.)
```
!cp ../latex_nrpy_style.tplx .
cmd.output_Jupyter_notebook_to_LaTeXed_PDF("Tutorial-BlackHolesAtHome-BOINC_applications-Native_applications")
!rm -f latex_nrpy_style.tplx
```
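As a quick sanity check of the rewrite rules used by `NRPy_to_BOINC` in Step 3.b, the same three substitutions can be exercised on a tiny in-memory snippet. The sketch below is a simplified, stand-alone re-implementation for illustration (not the notebook's helper itself, and it omits the header-prepending step):

```python
# Minimal sketch of the Step 3.b rewrite rules, applied to a string
# instead of a file. SAMPLE and boincify are illustrative names.
SAMPLE = """int main(int argc, char **argv) {
  if(argc < 2) exit(1);
  return 0;
}
"""

def boincify(source, indent="  "):
    out = []
    for line in source.splitlines(keepends=True):
        if "int main" in line:
            # insert boinc_init() right after main() opens
            out.append(line)
            out.append(indent + "boinc_init();\n")
        elif "return 0" in line:
            # terminate through the BOINC API instead of returning
            out.append(indent + "boinc_finish(0);\n")
        elif "exit(" in line:
            # error paths must also go through boinc_finish()
            out.append(line.replace("exit", "boinc_finish"))
        else:
            out.append(line)
    return "".join(out)

print(boincify(SAMPLE))
```

The converted snippet calls `boinc_init()` on entry and routes both the error path and the normal return through `boinc_finish()`, which is exactly what the full script does to the NRPy+ main program.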
# Working with structured data in Python using Pandas

### What is data preprocessing?
Data preprocessing is the process of converting raw data into a useful format. In order to better understand the data, we need to gather some statistical insights into it. In this module of the course, we will use some of the libraries available with Python and Jupyter to examine our data set.

### What is pandas?
[pandas](https://pandas.pydata.org/) is a fast, powerful, flexible and easy to use open source data analysis and manipulation tool, built on top of the Python programming language.

### Data
We'll use a data set from [Kaggle](https://www.kaggle.com/) for this workshop. You'll need to download it to your local machine, then upload it to your project running in Cloud Pak for Data as a Service. The *insurance.csv* dataset acquired from *Kaggle* contains 1338 observations (rows) and 7 features (columns). The dataset contains 4 numerical features (age, bmi, children and expenses) and 3 nominal features (sex, smoker and region) that were converted into factors, with a numerical value designated for each level.

We'll continue to use the [`insurance.csv`](https://www.kaggle.com/noordeen/insurance-premium-prediction/download) file from your project assets, so if you have not already [`downloaded this file`](https://www.kaggle.com/noordeen/insurance-premium-prediction/download) to your local machine, and uploaded it to your project, do that now.

## Table of Contents
1. [Using the Jupyter notebook](#jupyter)<br>
1. [Series and DataFrames](#series)<br>
1. [Loading Data](#loading)<br>
1. [Exploring Data](#exploring)<br>
1. [Cleaning Data](#cleaning)<br>
1. [Analyzing Data](#selection)<br>

<a id="jupyter"></a>
## 1. Using the Jupyter notebook

### Jupyter cells
When you are editing a cell in a Jupyter notebook, you need to re-run the cell by pressing **`<Shift> + <Enter>`**. This makes the changes you made available to other cells. Use **`<Enter>`** to make new lines inside a cell you are editing.
#### Code cells
Re-running will execute any statements you have written. To edit an existing code cell, click on it.

#### Markdown cells
Re-running will render the markdown text. To edit an existing markdown cell, double-click on it.

<hr>

### Common Jupyter operations
Near the top of the Jupyter notebook page, Jupyter provides a row of menu options (`File`, `Edit`, `View`, `Insert`, ...) and a row of tool bar icons (disk, plus sign, scissors, 2 files, clipboard and file, up arrow, ...).

#### Inserting and removing cells
- Use the "plus sign" icon to insert a cell below the currently selected cell
- Use "Insert" -> "Insert Cell Above" from the menu to insert above

#### Clear the output of all cells
- Use "Kernel" -> "Restart" from the menu to restart the kernel
- Click on "clear all outputs & restart" to have all the output cleared

#### Save your notebook file locally
- Clear the output of all cells
- Use "File" -> "Download as" -> "IPython Notebook (.ipynb)" to download a notebook file representing your session

<hr>
<a id="series"></a>
## 2. Series and DataFrames
Before we dive into our dataset, we will first look at examples to understand the difference between two key data structures that pandas offers us - *Series* and *DataFrames*.

A `Series` is a one-dimensional labelled array that can contain data of any type (integer, string, float, Python objects, etc.).
```
import pandas as pd
import numpy as np

s = pd.Series([1, 3, 5, np.nan, 6, 8])
s
```
A `DataFrame` is a two-dimensional data structure: the data consists of rows and columns, and you can create one in many ways, for example by loading a file, or by using a NumPy array for the values and a date range for the index.
<div class="alert alert-info" style="font-size:100%">
<a href="https://numpy.org"> NumPy</a> is a Python library for working with multi-dimensional arrays and matrices with a large collection of mathematical functions to operate on these arrays.
Have a look at this <a href="https://docs.scipy.org/doc/numpy-1.15.0/user/quickstart.html"> NumPy tutorial</a> for an overview.
</div>

Create DataFrame `df1` with `dates` as the index, a 6 by 4 array of random `numbers` as values, and column names A, B, C and D (the index will be explained in the next section):
```
dates = pd.date_range('20200101', periods=6)
dates

numbers = np.random.randn(6, 4)
numbers

df1 = pd.DataFrame(numbers, index=dates, columns=['A', 'B', 'C', 'D'])
df1
```
Or create a DataFrame by combining the above in one command:
```
df2 = pd.DataFrame({'A': 1.,
                    'B': pd.Timestamp('20130102'),
                    'C': pd.Series(1, index=list(range(4)), dtype='float32'),
                    'D': np.array([3] * 4, dtype='int32'),
                    'E': pd.Categorical(["test", "train", "test", "train"]),
                    'F': 'foo'})
df2.head()
```
Use `type()` to check the data type of each variable. Below, `print` is used to display the data type of all variables used so far:
```
print('Data type of s is '+str(type(s)))
print('Data type of dates is '+str(type(dates)))
print('Data type of numbers is '+str(type(numbers)))
print('Data type of df1 is '+str(type(df1)))
```
<a id="loading"></a>
## 3. Loading Data
A lot of data is **structured data**, which is data that is organized and formatted so it is easily readable, for example a table with variables as columns and records as rows, or key-value pairs in a noSQL database. As long as the data is formatted consistently and has multiple records with numbers, text and dates, you can probably read the data with [Pandas](https://pandas.pydata.org/pandas-docs/stable/index.html), an open-source Python package providing high-performance data manipulation and analysis.

### 3.1 Load our data as a pandas data frame
**<font color='red'><< FOLLOW THE INSTRUCTIONS BELOW TO LOAD THE DATASET >></font>**
* Highlight the cell below by clicking it.
* Click the `10/01` "Find data" icon in the upper right of the notebook.
* Add the locally uploaded file `insurance.csv` by choosing the `Files` tab.
Then choose the `insurance.csv`. Click `Insert to code` and choose `Insert Pandas DataFrame`.
* The code to bring the data into the notebook environment and create a Pandas DataFrame will be added to the cell below.
* Run the cell
```
# Place cursor below and insert the Pandas DataFrame for the Insurance Expense data

```
### 3.2 Update the variable for our Pandas DataFrame
We'll use the Pandas naming convention `df` for our DataFrame. Make sure that the cell below uses the name for the DataFrame used above. For the locally uploaded file it should look like df_data_1 or df_data_2 or df_data_x.

**<font color='red'><< UPDATE THE VARIABLE ASSIGNMENT TO THE VARIABLE GENERATED ABOVE. >></font>**
```
# Replace df_data_1 with the variable name generated above.
df = df_data_1
```
**OPTIONAL: Read data from a CSV file using the `read_csv` function. Load a file by running the next cell:**

The file can also be read directly from a URL, but you can replace this with a local path when running this notebook on a local system.

<a id="exploring"></a>
## 4. Exploring Data
Now let's have a look at the data that was loaded into the notebook. What are we actually looking at?

#### `df.shape` gives the number of rows and columns
```
df.shape

df.info()
```
#### `len(df)` gives the number of rows
```
len(df)
```
#### Use `df.dtypes` to check the different variables and their datatype
```
df.dtypes
```
#### `df.columns` gives a list of all column names
```
df.columns

list(df)

all_columns = list(df)
```
#### *select_dtypes* can be used to list columns of a particular datatype. In the cell below, we list the numerical columns.
```
numerical_columns = list(df.select_dtypes(include=['float64','int64']).columns)
print('Numerical columns : ')
print(numerical_columns)
```
and in the cell below, we identify the categorical columns from the dataset.
```
categorical_columns = [x for x in all_columns if x not in numerical_columns]
print('Categorical columns : ')
print(categorical_columns)
```
#### *nunique()* is used to identify the number of unique values within each column in the dataset
```
df.nunique()

df.values
```
#### With `df.head()` or `df.tail()` you can view the first five or last five lines from the data. Add a number between the brackets `()` to specify the number of lines you want to display, e.g. `df.head(2)`
```
df.head()

df.head(2)

df.tail()
```
<a id="cleaning"></a>
## 5. Cleaning Data
When exploring data there are always transformations needed to get it in the format you need for your analysis, visualisations or models. Below are only a few examples of the endless possibilities.

First, let's make a copy of the DataFrame:
```
premium_df = df.copy()
premium_df.head()
```
### 5.1 Adding and deleting columns
Adding a column can be done by creating a new column `new`, which can be dropped using the `drop` function.
```
premium_df['new'] = 1
premium_df.head()

premium_df = premium_df.drop(columns='new')
premium_df.head()
```
### 5.2 Rename columns
```
print("Column names before rename : ", premium_df.columns)
```
<a id="Renaming"></a>
You can change names of columns using `rename`:
```
premium_df.rename(columns={'sex':'gender'}, inplace=True)
print("Column names after rename : ", premium_df.columns)
premium_df.head()
```
### 5.3 Further Data Cleaning
**Things to check:**
* Is the data tidy: each variable forms a column, each observation forms a row and each type of observational unit forms a table.
* Are all columns in the right data format?
* Are there missing values?
* Are there unrealistic outliers?

#### Check if all datatypes are as you expect with `dtypes`:
```
premium_df.dtypes
```
#### Check if there are missing values with `isna`:
```
premium_df.isna().any()
```
#### Get a quick overview of the numeric data using the `.describe()` function.
If any of the numeric columns are missing from this list, this is probably because of a wrong data type. This will include numeric data, but exclude the categorical fields.
```
premium_df_describe = premium_df.describe()
premium_df_describe
```
#### Get the list of unique values within each column using `unique()`
```
print(premium_df['region'].unique())
```
<a id="selection"></a>
## 6. Analyzing data
We will analyze the data by asking one or more hypothetical questions.

### Question: Is there a relationship between smoking and claim amount?
Let us learn a few other functionalities of pandas in trying to answer the above question. From the original DataFrame, let us now create a new DataFrame with just these 2 columns:
```
premium_smoker_df = premium_df[['smoker', 'expenses']]
premium_smoker_df.head()
```
### 6.1 Get smoker counts
#### Let us now apply some filtering to analyze information about smokers from the dataset.
Filtering - selecting rows based on a certain condition - can be done with Boolean indexing. This uses the actual values of the data in the DataFrame as opposed to the row/column labels or index positions.
```
premium_smoker_df['smoker'] == 'yes'
```
First we will print the number of entries with the value for smoker marked as 'yes'. When you want to select the rows and see all the data, add `premium_smoker_df[]` around your condition:
```
print(len(premium_smoker_df[premium_smoker_df['smoker'] == 'yes']))
```
Next we will print the number of entries with the value for smoker marked as 'no'.
```
print(len(premium_smoker_df[premium_smoker_df['smoker'] == 'no']))
```
Alternatively, we can use the `value_counts()` method to get the counts for each value.
```
df.smoker.value_counts()
```
### 6.2 Visualize smoker data
We use pandas' in-built plotting method to visualize a pie chart.
This internally uses *matplotlib*.
```
df.smoker.value_counts().plot(kind="pie")
```
<a id="grouping"></a>
### 6.3 *smoker* vs *expenses* statistics
We use the `describe()` method to analyze the relation between the *smoker* and the *expenses* features
```
df.groupby(['smoker']).expenses.describe()
```
### 6.4 *smoker* vs *age* statistics
We use the `mean()` method to analyze the relation between the *smoker* and the *age* columns
```
df.groupby("smoker").age.mean()
```
### 6.5 Correlation between features
Pandas also offers a `corr()` method to compute a correlation table between features. A score of 1.0 means perfect positive correlation, -1.0 means perfect negative correlation, and 0.0 means no correlation.
```
df[['age', 'sex','bmi', 'children', 'smoker', 'region', 'expenses']].corr(method='pearson')
```
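One caveat with the `corr()` call above: Pearson correlation is only defined for numeric data, and recent pandas versions raise an error when object-dtype columns such as *sex*, *smoker* and *region* are included (older versions silently dropped them). A common workaround is to encode the binary categoricals as 0/1 first. The sketch below uses a tiny made-up frame with the same column names as *insurance.csv* — the values are illustrative, not from the real dataset:

```python
import pandas as pd

# Tiny synthetic frame with insurance.csv-style columns
# (values are made up for illustration only)
demo = pd.DataFrame({
    'age':      [19, 33, 45, 52],
    'bmi':      [27.9, 22.7, 25.7, 28.9],
    'smoker':   ['yes', 'no', 'no', 'yes'],
    'expenses': [16884.92, 1725.55, 3866.86, 23568.27],
})

# Map the binary categorical to 0/1 so Pearson correlation is defined
demo['smoker_flag'] = demo['smoker'].map({'no': 0, 'yes': 1})

corr = demo[['age', 'bmi', 'smoker_flag', 'expenses']].corr(method='pearson')
print(corr.round(2))
```

The same encoding idea extends to multi-level categoricals via `pd.get_dummies()` before calling `corr()`.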
<center><font size="+4">Introduction to Programming and Data Processing 2020/2021</font></center>
<center><font size="+2">Sant'Anna School of Advanced Studies, Pisa, Italy</font></center><br/>
<center><font size="+2">Course responsible</font></center>
<center><font size="+2">Andrea Vandin a.vandin@santannapisa.it</font></center>
<center><font size="+2">Co-lecturer </font></center>
<center><font size="+2">Daniele Licari d.licari@santannapisa.it</font></center>

---
<center><font size="+4">Assignments for</font></center>
<center><font size="+4">Lecture 5: Functions</font></center>

---
1. We will put on GitHub a file with the required tests.
2. We will download it using this script.
3. We will load it using
```
def assertEquals(actual, expected, failure_message=""):
    if actual != None and isinstance(actual, str):
        actual = actual.strip()
    if expected != None and isinstance(expected, str):
        expected = expected.strip()
    if(expected == actual):
        print('Test passed')
        print('Expected and actual:', expected)
    else:
        print('Test FAILED')
        print('Expected:', expected)
        print('Actual:', actual)
        if(failure_message != ""):
            print(failure_message)
    print()
```
# Assignment 01 Functions: first_letter
## **Statement**
Write a function
- named `first_letter`
- that has a string argument
- that computes and returns the first letter of the argument string

## Example input to function
```
My first string
```
## Example output from function
```
M
```
## **Hint**
Search for the method of strings that does the work...

## **Note**
The keyword [`def`](https://repl.it/teacher/assignments/reference/compound_stmts.html#def) introduces a function _definition_. It must be followed by the function name and the parenthesized list of formal parameters. The statements that form the body of the function start at the next line and must be indented.

The _execution_ of a function introduces a new symbol table used for the local variables of the function.
More precisely, all variable assignments in a function store the value in the local symbol table; whereas variable references first look in the local symbol table, then in the local symbol tables of enclosing functions, then in the global symbol table, and finally in the table of built-in names. The actual parameters (arguments) to a function call are introduced in the local symbol table of the called function when it is called; thus, arguments are passed using _call by value_ (where the _value_ is always an object _reference_, not the value of the object). [1](https://repl.it/teacher/assignments/4954335/edit#id2) When a function calls another function, a new local symbol table is created for that call.
```
def first_letter(input_string):
    # to complete
    pass #You can remove this line when completing the function
    #return input_string[0]

f = first_letter('Python')
print(f)

def my_unit_test():
    assertEquals(first_letter("hello"),"h")
    assertEquals(first_letter("bill"),"b")
    assertEquals(first_letter("hello bill"),"h")
    assertEquals(first_letter("1234"),"1")

my_unit_test()
```
# Assignment 02: Functions - divisible_by
## **Statement**
Write a function
- named `divisible_by`
- that has two numeric arguments, a number `number` and a divisor `divisor`, in this order
- that returns true or false depending on whether `number` is divisible by `divisor`

## Example input to function
```
10,2
```
## Example output from function
```
True
```
## Hint
The remainder operator might come handy for this assignment...
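To see why the remainder operator is the right tool here: `a % b` gives the leftover after dividing `a` by `b`, and it is `0` exactly when `a` is divisible by `b`. For example:

```python
# a % b is the remainder of the integer division a // b
print(10 % 2)   # 10 = 5*2 + 0  -> divisible by 2
print(10 % 3)   # 10 = 3*3 + 1  -> not divisible by 3
print(10 % 2 == 0, 10 % 3 == 0)
```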
```
def divisible_by(number,divisor):
    #to complete
    pass #You can remove this after completing your function
    #return number%divisor==0

def my_unit_test():
    assertEquals(divisible_by(10,2),True)
    assertEquals(divisible_by(10,3),False)
    assertEquals(divisible_by(30,6),True)
    assertEquals(divisible_by(26,4),False)
    assertEquals(divisible_by(24,4),True)

my_unit_test()
```
# Assignment 03: Functions - count_e
## **Statement**
Write a function
- named `count_e`
- that has a string argument
- that counts the number of e in the argument (either lower- or upper-case), and returns it

## Example input to function
```
HeEelloo sSam
```
## Example output from function
```
3
```
## Hint
There is a convenient method of strings that does the job...
```
def count_e(input_string):
    pass #You can remove
    #return input_string.lower().count('e')

string = "HeEelloo sSam"
print(count_e(string))

def simple_unit_test():
    print('Simple unit test')
    assertEquals(count_e("ciao mondo!"),0)
    assertEquals(count_e("hehehe"),3)

simple_unit_test()

def more_complex_unit_test():
    print('More complex unit test')
    assertEquals(count_e("ceEe mEnde!"),5)
    assertEquals(count_e("HELLO therE HOW U DOINGEEee"),7)

more_complex_unit_test()
```
# App4
* An App with 4 functions with different types of workload to validate the performance of the optimization algorithm
* App4 contains 1 parallel structure and 1 cycle
```
import os
from io import BytesIO
import time
import zipfile
import numpy as np
import boto3
from tqdm import tqdm
from datetime import datetime, timezone
from time import gmtime, strftime
import json
import pandas as pd
import matplotlib.pyplot as plt

lambda_client = boto3.client('lambda')
function_prefix='ServerlessAppPerfOpt'
App_name = 'App4'
```
## Update all Functions in App4
### Types of workload
* Function 1: Disk IO Intensive
  * Write a 1MB file to the disk for 50 times
* Function 2: CPU Intensive
  * Compute the factorial of 28000
* Function 3: CPU Intensive
  * Compute the 28th Fibonacci number
* Function 4: Network IO Intensive
  * Download and upload a 25MB file from and to the S3 bucket in the same region

### Update Function Code
```
functions=[]
for file in os.listdir('functions'):
    path=os.path.abspath(os.path.join(os.path.dirname('__file__'), 'functions/'+file))
    if not file.startswith('.') and os.path.isdir(path):
        functions.append(file)

for function_folder in functions:
    buf = BytesIO()
    with zipfile.ZipFile(buf, 'w') as z:
        for file in os.listdir('functions/'+function_folder):
            z.write(os.path.abspath(os.path.join(os.path.dirname('__file__'), 'functions/{}/{}'.format(function_folder,file))),
                    os.path.basename(os.path.join(os.path.dirname('__file__'), 'functions/{}/{}'.format(function_folder,file))))
    buf.seek(0)
    pkg = buf.read()
    lambda_client.update_function_code(FunctionName='{}_{}'.format(function_prefix, function_folder),ZipFile=pkg)
```
### Update Function Memory and Timeout Configuration
* Available Memory Configurations: 128, 192, 256, 320, 384, 448, 512, 576, 640, 704, 768, 832, 896, 960, 1024, 1088, 1152, 1216, 1280, 1344, 1408, 1472, 1536, 1600, 1664, 1728, 1792, 1856, 1920, 1984, 2048, 2112, 2176, 2240, 2304, 2368, 2432, 2496, 2560, 2624, 2688, 2752, 2816, 2880, 2944, 3008
```
mem_config_list={
    'f1':128,
    'f2':128,
    'f3':128,
    'f4':128
}
timeout_config_list={
    'f1':60,
    'f2':60,
    'f3':60,
    'f4':60
}
for function in mem_config_list.keys():
    lambda_client.update_function_configuration(FunctionName='{}_{}'.format(function_prefix, function),
                                                MemorySize=mem_config_list[function],
                                                Timeout=timeout_config_list[function])
```
### Test Run
#### Function 1
```
lambda_client.invoke(FunctionName='{}_{}'.format(function_prefix, 'f1'), InvocationType='Event')
```
#### Function 2
```
lambda_client.invoke(FunctionName='{}_{}'.format(function_prefix, 'f2'), InvocationType='Event')
```
#### Function 3
```
lambda_client.invoke(FunctionName='{}_{}'.format(function_prefix, 'f3'), InvocationType='Event')
```
#### Function 4
```
lambda_client.invoke(FunctionName='{}_{}'.format(function_prefix, 'f4'), InvocationType='Event')
```
## Execute Functions to Get the Performance Curve
* Run each function under each memory configuration for 100 times
```
available_mem_list=[128, 192, 256, 320, 384, 448, 512, 576, 640, 704, 768, 832, 896, 960, 1024, 1088, 1152, 1216, 1280, 1344, 1408, 1472, 1536, 1600, 1664, 1728, 1792, 1856, 1920, 1984, 2048, 2112, 2176, 2240, 2304, 2368, 2432, 2496, 2560, 2624, 2688, 2752, 2816, 2880, 2944, 3008]

for mem in available_mem_list:
    print('Memory: {} Timestamp: {} UTC: {}'.format(mem,time.time(),strftime("%d %b %Y %H:%M:%S +0000", gmtime())))
    mem_config={'f1':mem, 'f2':mem, 'f3':mem, 'f4':mem}
    for function in mem_config.keys():
        lambda_client.update_function_configuration(FunctionName='{}_{}'.format(function_prefix, function),
                                                    MemorySize=mem_config[function])
    time.sleep(1)
    for i in tqdm(range(100)):
        time.sleep(10)
        lambda_client.invoke(FunctionName='{}_{}'.format(function_prefix, 'f1'),
                             InvocationType='Event')
        lambda_client.invoke(FunctionName='{}_{}'.format(function_prefix, 'f2'),
                             InvocationType='Event')
        lambda_client.invoke(FunctionName='{}_{}'.format(function_prefix, 'f3'),
                             InvocationType='Event')
        lambda_client.invoke(FunctionName='{}_{}'.format(function_prefix, 'f4'),
                             InvocationType='Event')

for mem in available_mem_list:
    print('Memory: {} Timestamp: {} UTC: {}'.format(mem,time.time(),strftime("%d %b %Y %H:%M:%S +0000", gmtime())))
    mem_config={'f5':mem, 'f6':mem}
    for function in mem_config.keys():
        lambda_client.update_function_configuration(FunctionName='{}_{}'.format(function_prefix, function),
                                                    MemorySize=mem_config[function])
    time.sleep(1)
    for i in tqdm(range(100)):
        time.sleep(10)
        lambda_client.invoke(FunctionName='{}_{}'.format(function_prefix, 'f5'),
                             InvocationType='Event')
        lambda_client.invoke(FunctionName='{}_{}'.format(function_prefix, 'f6'),
                             InvocationType='Event')
```
## CloudWatch Logs
```
logclient = boto3.client('logs')
```
### Functions for parsing Logs
```
def lambda_report_log_to_dict(log):
    res={}
    lis=[item.split(': ') for item in log.split('\t')]
    res['RequestId']=lis[0][1]
    res['Duration']=float(lis[1][1].split(' ')[0])
    res['Billed Duration']=int(lis[2][1].split(' ')[0])
    res['Memory Size']=int(lis[3][1].split(' ')[0])
    res['Max Memory Used']=int(lis[4][1].split(' ')[0])
    return res

startTime=int(datetime.timestamp(datetime(year=2020,month=1,day=14,hour=22,minute=44,second=0,tzinfo=timezone.utc)))
endTime=int(datetime.timestamp(datetime(year=2020,month=1,day=15,hour=14,minute=30,second=0,tzinfo=timezone.utc)))
```
### Query Logs
#### Function 1
```
query_f1 = logclient.start_query(
    logGroupName='/aws/lambda/{}_{}'.format(function_prefix, 'f1'),
    queryString="fields @timestamp, @message| filter @message like 'REPORT'| sort @timestamp desc",
    startTime=startTime,
    endTime=endTime,
    limit=10000
)

query_results_f1 = logclient.get_query_results(
    queryId=query_f1['queryId']
)

f1_log_list=[lambda_report_log_to_dict(item[1]['value']) for item in query_results_f1['results']]
len(f1_log_list)
```
#### Function 2
```
query_f2 = logclient.start_query(
    logGroupName='/aws/lambda/{}_{}'.format(function_prefix, 'f2'),
    queryString="fields @timestamp, @message| filter @message like 'REPORT'| sort @timestamp desc",
    startTime=startTime,
    endTime=endTime,
    limit=10000
)

query_results_f2 = logclient.get_query_results(
    queryId=query_f2['queryId']
)

f2_log_list=[lambda_report_log_to_dict(item[1]['value']) for item in query_results_f2['results']]
len(f2_log_list)
```
#### Function 3
```
query_f3 = logclient.start_query(
    logGroupName='/aws/lambda/{}_{}'.format(function_prefix, 'f3'),
    queryString="fields @timestamp, @message| filter @message like 'REPORT'| sort @timestamp desc",
    startTime=startTime,
    endTime=endTime,
    limit=10000
)

query_results_f3 = logclient.get_query_results(
    queryId=query_f3['queryId']
)

f3_log_list=[lambda_report_log_to_dict(item[1]['value']) for item in query_results_f3['results']]
len(f3_log_list)
```
#### Function 4
```
query_f4 = logclient.start_query(
    logGroupName='/aws/lambda/{}_{}'.format(function_prefix, 'f4'),
    queryString="fields @timestamp, @message| filter @message like 'REPORT'| sort @timestamp desc",
    startTime=startTime,
    endTime=endTime,
    limit=10000
)

query_results_f4 = logclient.get_query_results(
    queryId=query_f4['queryId']
)

f4_log_list=[lambda_report_log_to_dict(item[1]['value']) for item in query_results_f4['results']]
f4_log_list=[item for item in f4_log_list if item['Duration']>=300]
len(f4_log_list)
```
#### Convert Logs into DataFrame and Save as CSV
```
for item in f1_log_list:
    item['Function']='f1'
for item in f2_log_list:
    item['Function']='f2'
for item in f3_log_list:
    item['Function']='f3'
for item in f4_log_list:
    item['Function']='f4'

App4_Lambda_Logs=pd.DataFrame(f1_log_list).append(pd.DataFrame(f2_log_list)).append(pd.DataFrame(f3_log_list)).append(pd.DataFrame(f4_log_list))
App4_Lambda_Logs.index=range(App4_Lambda_Logs.shape[0])
App4_Lambda_Logs=App4_Lambda_Logs[['Function', 'Memory Size', 'Max Memory Used', 'Duration', 'Billed Duration', 'RequestId']]
App4_Lambda_Logs.to_csv('App4_Lambda_Logs.csv',index=False)

App4_Lambda_Logs = pd.read_csv('App4_Lambda_Logs.csv', error_bad_lines=False, warn_bad_lines=False,low_memory=False)
App4_Lambda_Logs.columns = ['Function', 'Memory_Size', 'Max_Memory_Used', 'Duration', 'Billed_Duration', 'RequestId']
App4_Lambda_Logs.head()
```
## Performance Curve
```
f1_duration = [App4_Lambda_Logs.query("Function == {} and Memory_Size == {}".format("'f1'",mem))['Duration'].mean() for mem in available_mem_list]
f2_duration = [App4_Lambda_Logs.query("Function == {} and Memory_Size == {}".format("'f2'",mem))['Duration'].mean() for mem in available_mem_list]
f3_duration = [App4_Lambda_Logs.query("Function == {} and Memory_Size == {}".format("'f3'",mem))['Duration'].mean() for mem in available_mem_list]
f4_duration = [App4_Lambda_Logs.query("Function == {} and Memory_Size == {}".format("'f4'",mem))['Duration'].mean() for mem in available_mem_list]

f1_perf_profile = dict(zip(available_mem_list, f1_duration))
f2_perf_profile = dict(zip(available_mem_list, f2_duration))
f3_perf_profile = dict(zip(available_mem_list, f3_duration))
f4_perf_profile = dict(zip(available_mem_list, f4_duration))

fig=plt.figure(figsize=(8,8))
ax=plt.subplot(111)
ax.grid()
ax.set_xlim(128,3008)
ax.plot(available_mem_list, f1_duration, marker='o', label='f1')
ax.plot(available_mem_list, f2_duration, marker='o', label='f2')
ax.plot(available_mem_list, f3_duration, marker='o', label='f3')
ax.plot(available_mem_list, f4_duration, marker='o', label='f4')
ax.legend()
ax.set_xlabel('Memory in MB')
ax.set_ylabel('Duration in ms')
fig.savefig('App4_Performance_Curve', dpi=300)
fig.savefig('App4_Performance_Curve.pdf')
```
## Performance Cost Table
### Import Libraries
```
import sys
sys.path.append('../../../source/ServerlessAppPerfCostMdlOpt')
import networkx as nx
import itertools
import warnings
warnings.filterwarnings("ignore")
from ServerlessAppWorkflow import ServerlessAppWorkflow
from AppGenerator import AppGenerator
from PerfOpt import PerfOpt
```
### Sample Performance Curve
```
sampled_mem_list = list(range(128, 3072, 192))
number_of_configurations = np.power(len(sampled_mem_list),4)
print('Sampled Memory List:',
sampled_mem_list) print('Length of the Sampled Memory List:', len(sampled_mem_list)) print('Number of Configurations in App4 after Sampling:', number_of_configurations) fig=plt.figure(figsize=(8,8)) ax=plt.subplot(111) ax.grid() ax.set_xlim(128,3008) ax.plot(list(range(128, 3072, 192)), f1_duration[0:46:3], marker='o', label='f1') ax.plot(list(range(128, 3072, 192)), f2_duration[0:46:3], marker='o', label='f2') ax.plot(list(range(128, 3072, 192)), f3_duration[0:46:3], marker='o', label='f3') ax.plot(list(range(128, 3072, 192)), f4_duration[0:46:3], marker='o', label='f4') ax.legend() ax.set_xlabel('Memory in MB') ax.set_ylabel('Duration in ms') ``` ### Get Performance Cost Table #### Define App Orchestration ``` App4_G = nx.DiGraph() App4_G.add_node('Start', pos=(0, 1)) App4_G.add_node(1, pos=(1, 1), perf_profile={mem:f1_perf_profile[mem] for mem in sampled_mem_list}) App4_G.add_node(2, pos=(2, 0), perf_profile={mem:f2_perf_profile[mem] for mem in sampled_mem_list}) App4_G.add_node(3, pos=(2, 2), perf_profile={mem:f3_perf_profile[mem] for mem in sampled_mem_list}) App4_G.add_node(4, pos=(3, 1), perf_profile={mem:f4_perf_profile[mem] for mem in sampled_mem_list}) App4_G.add_node('End', pos=(4, 1)) App4_G.add_weighted_edges_from([('Start', 1, 1), (1, 3, 1), (1, 2, 1), (3, 4, 1), (2, 4, 1), (4, 1, 0.1), (4, 'End', 0.9)]) pos_App4_G = nx.get_node_attributes(App4_G, 'pos') nx.draw(App4_G, pos_App4_G, with_labels=True) labels_App4_G = nx.get_edge_attributes(App4_G, 'weight') nx.draw_networkx_edge_labels(App4_G, pos_App4_G, edge_labels=labels_App4_G) pos_higher_offset_App4_G = {} for k, v in pos_App4_G.items(): pos_higher_offset_App4_G[k] = (v[0], v[1] + 0.15) plt.savefig('App4_G.png') plt.show() ``` #### Define the number of workers ``` number_of_workers = 8 ``` #### Define the data storage location ``` pct_data_folder = 'perf_cost_data' ``` #### Generate Workload ``` pct_filename_list = [pct_data_folder + '/' + App_name+'_part'+str(n) +'.csv' for n in 
range(1,number_of_workers+1)] pct_start_iterations_list = [int(number_of_configurations/number_of_workers * (n-1))+1 for n in range(1,number_of_workers+1)] pct_end_iterations_list = [n-1 for n in pct_start_iterations_list[1:]] pct_end_iterations_list.append(number_of_configurations) ``` #### Run Algorithms to get the table ``` def pct_work(App_G, filename, start_iterations, end_iterations): App = ServerlessAppWorkflow(G=App_G.copy()) optimizer = PerfOpt(App, generate_perf_profile=False) optimizer.get_perf_cost_table(file=filename, start_iterations=start_iterations, end_iterations=end_iterations) from multiprocessing import Process for i in range(number_of_workers): p = Process(target=pct_work, args=(App4_G, pct_filename_list[i], pct_start_iterations_list[i], pct_end_iterations_list[i],)) p.start() ``` ### Process Performance Cost Table ``` perf_cost_data = pd.DataFrame() for filename in pct_filename_list: data_parts = pd.read_csv(filename, error_bad_lines=False, warn_bad_lines=False,low_memory=False) perf_cost_data = perf_cost_data.append(data_parts) perf_cost_data = perf_cost_data[['1','2','3','4','Cost','RT']] perf_cost_data.columns=['f1','f2','f3','f4','Cost','RT'] perf_cost_data.index=range(1,perf_cost_data.shape[0]+1) perf_cost_data.head() minimal_cost = perf_cost_data['Cost'].min() maximal_cost = perf_cost_data['Cost'].max() minimal_rt = perf_cost_data['RT'].min() maximal_rt = perf_cost_data['RT'].max() print('Minimal Cost: ', minimal_cost, 'per 1 million executions') print('Maximal Cost: ', maximal_cost, 'per 1 million executions') print('Minimal RT: ', minimal_rt, 'ms') print('Maximal RT: ', maximal_rt, 'ms') ``` ## Optimization Curve ### Get Optimization Curve Data #### Define Parameters ``` App4_ocd_budget_num_of_points = 100 App4_ocd_performance_constraint_num_of_points = 100 App4_ocd_filenameprefix = 'opt_curve_data/App4' def ocd_work(App_G, filenameprefix, budget_num, performance_constraint_num): App = ServerlessAppWorkflow(G=App_G.copy()) optimizer = 
PerfOpt(App, generate_perf_profile=False) optimizer.get_opt_curve(filenameprefix=filenameprefix, budget_list=list(np.linspace(optimizer.minimal_cost, optimizer.maximal_cost, budget_num)), performance_constraint_list=list(np.linspace(optimizer.minimal_avg_rt, optimizer.maximal_avg_rt, performance_constraint_num))) ``` #### Run Algorithms to get the curve ``` %%capture ocd_work(App4_G, App4_ocd_filenameprefix, App4_ocd_budget_num_of_points, App4_ocd_performance_constraint_num_of_points) ``` ### Process Optimization Curve Data ``` opt_curve_data_BPBC = pd.read_csv(App4_ocd_filenameprefix+'_BPBC.csv', error_bad_lines=False, warn_bad_lines=False,low_memory=False) opt_curve_data_BPBC = opt_curve_data_BPBC.assign(Best_Answer_RT=lambda opt_curve_data_BPBC: opt_curve_data_BPBC[['BCR_disabled_RT', 'BCR_RT/M_RT', 'BCR_ERT/C_RT', 'BCR_MAX_RT']].min(1)) opt_curve_data_BCPC = pd.read_csv(App4_ocd_filenameprefix+'_BCPC.csv', error_bad_lines=False, warn_bad_lines=False,low_memory=False) opt_curve_data_BCPC = opt_curve_data_BCPC.assign(Best_Answer_Cost=lambda opt_curve_data_BCPC: opt_curve_data_BCPC[['BCR_disabled_Cost', 'BCR_M/RT_Cost', 'BCR_C/ERT_Cost', 'BCR_MAX_Cost']].min(1)) opt_curve_data_BPBC.head(2) opt_curve_data_BCPC.head(2) best_rt = [perf_cost_data.query("Cost<={}".format(item))['RT'].min() for item in opt_curve_data_BPBC['Budget']] best_cost = [perf_cost_data.query("RT<={}".format(item))['Cost'].min() for item in opt_curve_data_BCPC['Performance_Constraint']] BPBC_accuracy = 100-(opt_curve_data_BPBC['Best_Answer_RT']-best_rt)/opt_curve_data_BPBC['Best_Answer_RT']*100 BCPC_accuracy = 100-(opt_curve_data_BCPC['Best_Answer_Cost']-best_cost)/opt_curve_data_BCPC['Best_Answer_Cost']*100 with open('results.json', 'w', encoding='utf-8') as f: json.dump({ "BPBC_BCR_disabled_Iterations":opt_curve_data_BPBC["BCR_disabled_Iterations"].mean(), "BPBC_BCR_RT/M_Iterations":opt_curve_data_BPBC["BCR_RT/M_Iterations"].mean(), 
"BPBC_BCR_ERT/C_Iterations":opt_curve_data_BPBC["BCR_ERT/C_Iterations"].mean(), "BPBC_BCR_MAX_Iterations":opt_curve_data_BPBC["BCR_MAX_Iterations"].mean(), "BCPC_BCR_disabled_Iterations":opt_curve_data_BCPC["BCR_disabled_Iterations"].mean(), "BCPC_BCR_M/RT_Iterations":opt_curve_data_BCPC["BCR_M/RT_Iterations"].mean(), "BCPC_BCR_C/ERT_Iterations":opt_curve_data_BCPC["BCR_C/ERT_Iterations"].mean(), "BCPC_BCR_MAX_Iterations":opt_curve_data_BCPC["BCR_MAX_Iterations"].mean() }, f, ensure_ascii=False, indent=4) ``` ### BPBC Problem ``` fig=plt.figure(figsize=(8,8)) ax=plt.subplot(111) ax.grid() ax.plot(opt_curve_data_BPBC['Budget'], opt_curve_data_BPBC['BCR_disabled_RT'], marker='o', label='BCR Disabled') ax.plot(opt_curve_data_BPBC['Budget'], opt_curve_data_BPBC['BCR_RT/M_RT'], marker='o', label='BCR RT/M') ax.plot(opt_curve_data_BPBC['Budget'], opt_curve_data_BPBC['BCR_ERT/C_RT'], marker='o', label='BCR ERT/C') ax.plot(opt_curve_data_BPBC['Budget'], opt_curve_data_BPBC['BCR_MAX_RT'], marker='o', label='BCR MAX') ax.plot(opt_curve_data_BPBC['Budget'], opt_curve_data_BPBC['Best_Answer_RT'] , marker='o', label='Best Answer') ax.plot(opt_curve_data_BPBC['Budget'], best_rt, marker='o', label='Ideal BPBC Solution') ax.legend() ax.set_xlabel('Cost in $') ax.set_ylabel('End-to-end Response time in ms') ax2 = ax.twinx() ax2.plot(opt_curve_data_BPBC['Budget'], BPBC_accuracy, marker = 'x', label='Accuracy', color='magenta') ax2.legend(loc='center right') ax2.set_ylim(0,100) ax2.set_ylabel('Accuracy in Percentage') fig.savefig('App4_Optimization_Curve_BPBC', dpi=300) fig.savefig('App4_Optimization_Curve_BPBC.pdf') ``` ### BCPC Problem ``` fig=plt.figure(figsize=(8,8)) ax=plt.subplot(111) ax.grid() ax.plot(opt_curve_data_BCPC['Performance_Constraint'], opt_curve_data_BCPC['BCR_disabled_Cost'], marker='o', label='BCR Disabled') ax.plot(opt_curve_data_BCPC['Performance_Constraint'], opt_curve_data_BCPC['BCR_M/RT_Cost'], marker='o', label='BCR M/RT') 
ax.plot(opt_curve_data_BCPC['Performance_Constraint'], opt_curve_data_BCPC['BCR_C/ERT_Cost'], marker='o', label='BCR C/ERT') ax.plot(opt_curve_data_BCPC['Performance_Constraint'], opt_curve_data_BCPC['BCR_MAX_Cost'], marker='o', label='BCR MAX') ax.plot(opt_curve_data_BCPC['Performance_Constraint'], opt_curve_data_BCPC['Best_Answer_Cost'] , marker='o', label='Best Answer') ax.plot(opt_curve_data_BCPC['Performance_Constraint'], best_cost, marker='o', label='Ideal BCPC Solution') ax.legend() ax.set_xlabel('End-to-end Response time in ms') ax.set_ylabel('Cost in $') ax2 = ax.twinx() ax2.plot(opt_curve_data_BCPC['Performance_Constraint'], BCPC_accuracy, marker = 'x', label='Accuracy', color='magenta') ax2.legend(loc='center right') ax2.set_ylim(0,100) ax2.set_ylabel('Accuracy in Percentage') fig.savefig('App4_Optimization_Curve_BCPC', dpi=300) fig.savefig('App4_Optimization_Curve_BCPC.pdf') ```
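As a quick aside, the `BPBC_accuracy` and `BCPC_accuracy` series plotted above both follow the same pattern: the heuristic's answer (response time for BPBC, cost for BCPC) is compared against the ideal value recovered from the exhaustive performance-cost table. A minimal standalone sketch of that metric — the function name is ours, not part of the notebook:

```python
def solver_accuracy(heuristic_answer, ideal_answer):
    # 100% when the heuristic matches the ideal solution; drops as the
    # heuristic's answer exceeds the ideal one
    return 100 - (heuristic_answer - ideal_answer) / heuristic_answer * 100

print(solver_accuracy(200.0, 200.0))  # 100.0
print(solver_accuracy(250.0, 200.0))  # 80.0
```

Applied element-wise over the budget (or performance-constraint) axis, this reproduces the magenta accuracy curves in the figures above.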
``` !curl -L https://raw.githubusercontent.com/facebookresearch/habitat-sim/main/examples/colab_utils/colab_install.sh | NIGHTLY=true bash -s %cd /content/habitat-sim ## [setup] import os import random import sys import git import magnum as mn import numpy as np import habitat_sim from habitat_sim.utils import viz_utils as vut if "google.colab" in sys.modules: os.environ["IMAGEIO_FFMPEG_EXE"] = "/usr/bin/ffmpeg" repo = git.Repo(".", search_parent_directories=True) dir_path = repo.working_tree_dir %cd $dir_path data_path = os.path.join(dir_path, "data") output_path = os.path.join( dir_path, "examples/tutorials/managed_rigid_object_tutorial_output/" ) def place_agent(sim): # place our agent in the scene agent_state = habitat_sim.AgentState() agent_state.position = [-0.15, -0.7, 1.0] agent_state.rotation = np.quaternion(-0.83147, 0, 0.55557, 0) agent = sim.initialize_agent(0, agent_state) return agent.scene_node.transformation_matrix() def make_configuration(): # simulator configuration backend_cfg = habitat_sim.SimulatorConfiguration() backend_cfg.scene_id = os.path.join( data_path, "scene_datasets/habitat-test-scenes/apartment_1.glb" ) assert os.path.exists(backend_cfg.scene_id) backend_cfg.enable_physics = True # sensor configurations # Note: all sensors must have the same resolution # setup 2 rgb sensors for 1st and 3rd person views camera_resolution = [544, 720] sensor_specs = [] rgba_camera_1stperson_spec = habitat_sim.CameraSensorSpec() rgba_camera_1stperson_spec.uuid = "rgba_camera_1stperson" rgba_camera_1stperson_spec.sensor_type = habitat_sim.SensorType.COLOR rgba_camera_1stperson_spec.resolution = camera_resolution rgba_camera_1stperson_spec.position = [0.0, 0.6, 0.0] rgba_camera_1stperson_spec.orientation = [0.0, 0.0, 0.0] rgba_camera_1stperson_spec.sensor_subtype = habitat_sim.SensorSubType.PINHOLE sensor_specs.append(rgba_camera_1stperson_spec) depth_camera_1stperson_spec = habitat_sim.CameraSensorSpec() depth_camera_1stperson_spec.uuid = 
"depth_camera_1stperson" depth_camera_1stperson_spec.sensor_type = habitat_sim.SensorType.DEPTH depth_camera_1stperson_spec.resolution = camera_resolution depth_camera_1stperson_spec.position = [0.0, 0.6, 0.0] depth_camera_1stperson_spec.orientation = [0.0, 0.0, 0.0] depth_camera_1stperson_spec.sensor_subtype = habitat_sim.SensorSubType.PINHOLE sensor_specs.append(depth_camera_1stperson_spec) rgba_camera_3rdperson_spec = habitat_sim.CameraSensorSpec() rgba_camera_3rdperson_spec.uuid = "rgba_camera_3rdperson" rgba_camera_3rdperson_spec.sensor_type = habitat_sim.SensorType.COLOR rgba_camera_3rdperson_spec.resolution = camera_resolution rgba_camera_3rdperson_spec.position = [0.0, 1.0, 0.3] rgba_camera_3rdperson_spec.orientation = [-45, 0.0, 0.0] rgba_camera_3rdperson_spec.sensor_subtype = habitat_sim.SensorSubType.PINHOLE sensor_specs.append(rgba_camera_3rdperson_spec) # agent configuration agent_cfg = habitat_sim.agent.AgentConfiguration() agent_cfg.sensor_specifications = sensor_specs return habitat_sim.Configuration(backend_cfg, [agent_cfg]) def simulate(sim, dt=1.0, get_frames=True): # simulate dt seconds at 60Hz to the nearest fixed timestep print("Simulating " + str(dt) + " world seconds.") observations = [] start_time = sim.get_world_time() while sim.get_world_time() < start_time + dt: sim.step_physics(1.0 / 60.0) if get_frames: observations.append(sim.get_sensor_observations()) return observations # [/setup] if __name__ == "__main__": import argparse parser = argparse.ArgumentParser() parser.add_argument("--no-show-video", dest="show_video", action="store_false") parser.add_argument("--no-make-video", dest="make_video", action="store_false") parser.set_defaults(show_video=True, make_video=True) args, _ = parser.parse_known_args() show_video = args.show_video make_video = args.make_video if make_video and not os.path.exists(output_path): os.mkdir(output_path) # [initialize] # create the simulators AND resets the simulator cfg = make_configuration() try: # Got 
to make initialization idiot proof sim.close() except NameError: pass sim = habitat_sim.Simulator(cfg) agent_transform = place_agent(sim) # get the primitive assets attributes manager prim_templates_mgr = sim.get_asset_template_manager() # get the physics object attributes manager obj_templates_mgr = sim.get_object_template_manager() # get the rigid object manager rigid_obj_mgr = sim.get_rigid_object_manager() # [/initialize] # [basics] # load some object templates from configuration files sphere_template_id = obj_templates_mgr.load_configs( str(os.path.join(data_path, "test_assets/objects/sphere")) )[0] # add a sphere to the scene, returns the object sphere_obj = rigid_obj_mgr.add_object_by_template_id(sphere_template_id) # move sphere sphere_obj.translation = [2.50, 0.0, 0.2] # simulate observations = simulate(sim, dt=1.5, get_frames=make_video) if make_video: vut.make_video( observations, "rgba_camera_1stperson", "color", output_path + "sim_basics", open_vid=show_video, ) # [/basics] rigid_obj_mgr.remove_all_objects() # [object_user_configurations] # modify an object's user-defined configurations # load some object templates from configuration files sphere_template_id = obj_templates_mgr.load_configs( str(os.path.join(data_path, "test_assets/objects/sphere")) )[0] # add a sphere to the scene, returns the object sphere_obj = rigid_obj_mgr.add_object_by_template_id(sphere_template_id) # set user-defined configuration values user_attributes_dict = { "obj_val_0": "This is a sphere object named " + sphere_obj.handle, "obj_val_1": 17, "obj_val_2": False, "obj_val_3": 19.7, "obj_val_4": [2.50, 0.0, 0.2], "obj_val_5": mn.Quaternion.rotation(mn.Deg(90.0), [-1.0, 0.0, 0.0]), } for k, v in user_attributes_dict.items(): sphere_obj.user_attributes.set(k, v) for k, _ in user_attributes_dict.items(): print( "Sphere Object user attribute : {} : {}".format( k, sphere_obj.user_attributes.get_as_string(k) ) ) # [/object_user_configurations] rigid_obj_mgr.remove_all_objects() # 
[dynamic_control] observations = [] obj_templates_mgr.load_configs( str(os.path.join(data_path, "objects/example_objects/")) ) # search for an object template by key sub-string cheezit_template_handle = obj_templates_mgr.get_template_handles( "data/objects/example_objects/cheezit" )[0] # build multiple object initial positions box_positions = [ [2.39, -0.37, 0.0], [2.39, -0.64, 0.0], [2.39, -0.91, 0.0], [2.39, -0.64, -0.22], [2.39, -0.64, 0.22], ] box_orientation = mn.Quaternion.rotation(mn.Deg(90.0), [-1.0, 0.0, 0.0]) # instance and place the boxes boxes = [] for b in range(len(box_positions)): boxes.append( rigid_obj_mgr.add_object_by_template_handle(cheezit_template_handle) ) boxes[b].translation = box_positions[b] boxes[b].rotation = box_orientation # anti-gravity force f=m(-g) using first object's mass (all objects have the same mass) anti_grav_force = -1.0 * sim.get_gravity() * boxes[0].mass # throw a sphere at the boxes from the agent position sphere_template = obj_templates_mgr.get_template_by_id(sphere_template_id) sphere_template.scale = [0.5, 0.5, 0.5] obj_templates_mgr.register_template(sphere_template) # create sphere sphere_obj = rigid_obj_mgr.add_object_by_template_id(sphere_template_id) sphere_obj.translation = sim.agents[0].get_state().position + [0.0, 1.0, 0.0] # get the vector from the sphere to a box target_direction = boxes[0].translation - sphere_obj.translation # apply an initial velocity for one step sphere_obj.linear_velocity = target_direction * 5 sphere_obj.angular_velocity = [0.0, -1.0, 0.0] start_time = sim.get_world_time() dt = 3.0 while sim.get_world_time() < start_time + dt: # set forces/torques before stepping the world for box in boxes: box.apply_force(anti_grav_force, [0.0, 0.0, 0.0]) box.apply_torque([0.0, 0.01, 0.0]) sim.step_physics(1.0 / 60.0) observations.append(sim.get_sensor_observations()) if make_video: vut.make_video( observations, "rgba_camera_1stperson", "color", output_path + "dynamic_control", open_vid=show_video, ) 
# [/dynamic_control] rigid_obj_mgr.remove_all_objects() # [kinematic_interactions] chefcan_template_handle = obj_templates_mgr.get_template_handles( "data/objects/example_objects/chefcan" )[0] chefcan_obj = rigid_obj_mgr.add_object_by_template_handle(chefcan_template_handle) chefcan_obj.translation = [2.4, -0.64, 0.0] # set object to kinematic chefcan_obj.motion_type = habitat_sim.physics.MotionType.KINEMATIC # drop some dynamic objects chefcan_obj_2 = rigid_obj_mgr.add_object_by_template_handle(chefcan_template_handle) chefcan_obj_2.translation = [2.4, -0.64, 0.28] chefcan_obj_3 = rigid_obj_mgr.add_object_by_template_handle(chefcan_template_handle) chefcan_obj_3.translation = [2.4, -0.64, -0.28] chefcan_obj_4 = rigid_obj_mgr.add_object_by_template_handle(chefcan_template_handle) chefcan_obj_4.translation = [2.4, -0.3, 0.0] # simulate observations = simulate(sim, dt=1.5, get_frames=True) if make_video: vut.make_video( observations, "rgba_camera_1stperson", "color", output_path + "kinematic_interactions", open_vid=show_video, ) # [/kinematic_interactions] rigid_obj_mgr.remove_all_objects() # [kinematic_update] observations = [] clamp_template_handle = obj_templates_mgr.get_template_handles( "data/objects/example_objects/largeclamp" )[0] clamp_obj = rigid_obj_mgr.add_object_by_template_handle(clamp_template_handle) clamp_obj.motion_type = habitat_sim.physics.MotionType.KINEMATIC clamp_obj.translation = [0.8, 0.2, 0.5] start_time = sim.get_world_time() dt = 1.0 while sim.get_world_time() < start_time + dt: # manually control the object's kinematic state clamp_obj.translation += [0.0, 0.0, 0.01] clamp_obj.rotation = ( mn.Quaternion.rotation(mn.Rad(0.05), [-1.0, 0.0, 0.0]) * clamp_obj.rotation ) sim.step_physics(1.0 / 60.0) observations.append(sim.get_sensor_observations()) if make_video: vut.make_video( observations, "rgba_camera_1stperson", "color", output_path + "kinematic_update", open_vid=show_video, ) # [/kinematic_update] # [velocity_control] # get object 
VelocityControl structure and setup control vel_control = clamp_obj.velocity_control vel_control.linear_velocity = [0.0, 0.0, -1.0] vel_control.angular_velocity = [4.0, 0.0, 0.0] vel_control.controlling_lin_vel = True vel_control.controlling_ang_vel = True observations = simulate(sim, dt=1.0, get_frames=True) # reverse linear direction vel_control.linear_velocity = [0.0, 0.0, 1.0] observations += simulate(sim, dt=1.0, get_frames=True) if make_video: vut.make_video( observations, "rgba_camera_1stperson", "color", output_path + "velocity_control", open_vid=show_video, ) # [/velocity_control] # [local_velocity_control] vel_control.linear_velocity = [0.0, 0.0, 2.3] vel_control.angular_velocity = [-4.3, 0.0, 0.0] vel_control.lin_vel_is_local = True vel_control.ang_vel_is_local = True observations = simulate(sim, dt=1.5, get_frames=True) # video rendering if make_video: vut.make_video( observations, "rgba_camera_1stperson", "color", output_path + "local_velocity_control", open_vid=show_video, ) # [/local_velocity_control] rigid_obj_mgr.remove_all_objects() # [embodied_agent] # load the lobot_merged asset locobot_template_id = obj_templates_mgr.load_configs( str(os.path.join(data_path, "objects/locobot_merged")) )[0] # add robot object to the scene with the agent/camera SceneNode attached locobot = rigid_obj_mgr.add_object_by_template_id( locobot_template_id, sim.agents[0].scene_node ) locobot.translation = [1.75, -1.02, 0.4] vel_control = locobot.velocity_control vel_control.linear_velocity = [0.0, 0.0, -1.0] vel_control.angular_velocity = [0.0, 2.0, 0.0] # simulate robot dropping into place observations = simulate(sim, dt=1.5, get_frames=make_video) vel_control.controlling_lin_vel = True vel_control.controlling_ang_vel = True vel_control.lin_vel_is_local = True vel_control.ang_vel_is_local = True # simulate forward and turn observations += simulate(sim, dt=1.0, get_frames=make_video) vel_control.controlling_lin_vel = False vel_control.angular_velocity = [0.0, 1.0, 0.0] 
# simulate turn only observations += simulate(sim, dt=1.5, get_frames=make_video) vel_control.angular_velocity = [0.0, 0.0, 0.0] vel_control.controlling_lin_vel = True vel_control.controlling_ang_vel = True # simulate forward only with damped angular velocity (reset angular velocity to 0 after each step) observations += simulate(sim, dt=1.0, get_frames=make_video) vel_control.angular_velocity = [0.0, -1.25, 0.0] # simulate forward and turn observations += simulate(sim, dt=2.0, get_frames=make_video) vel_control.controlling_ang_vel = False vel_control.controlling_lin_vel = False # simulate settling observations += simulate(sim, dt=3.0, get_frames=make_video) # remove the agent's body while preserving the SceneNode rigid_obj_mgr.remove_object_by_id(locobot.object_id, delete_object_node=False) # demonstrate that the locobot object does not now exist' print("Locobot is still alive : {}".format(locobot.is_alive)) # video rendering with embedded 1st person view if make_video: vut.make_video( observations, "rgba_camera_1stperson", "color", output_path + "robot_control", open_vid=show_video, ) # [/embodied_agent] # [embodied_agent_navmesh] # load the lobot_merged asset locobot_template_id = obj_templates_mgr.load_configs( str(os.path.join(data_path, "objects/locobot_merged")) )[0] # add robot object to the scene with the agent/camera SceneNode attached locobot = rigid_obj_mgr.add_object_by_template_id( locobot_template_id, sim.agents[0].scene_node ) initial_rotation = locobot.rotation # set the agent's body to kinematic since we will be updating position manually locobot.motion_type = habitat_sim.physics.MotionType.KINEMATIC # create and configure a new VelocityControl structure # Note: this is NOT the object's VelocityControl, so it will not be consumed automatically in sim.step_physics vel_control = habitat_sim.physics.VelocityControl() vel_control.controlling_lin_vel = True vel_control.lin_vel_is_local = True vel_control.controlling_ang_vel = True 
vel_control.ang_vel_is_local = True vel_control.linear_velocity = [0.0, 0.0, -1.0] # try 2 variations of the control experiment for iteration in range(2): # reset observations and robot state observations = [] locobot.translation = [1.75, -1.02, 0.4] locobot.rotation = initial_rotation vel_control.angular_velocity = [0.0, 0.0, 0.0] video_prefix = "robot_control_sliding" # turn sliding off for the 2nd pass if iteration == 1: sim.config.sim_cfg.allow_sliding = False video_prefix = "robot_control_no_sliding" # manually control the object's kinematic state via velocity integration start_time = sim.get_world_time() last_velocity_set = 0 dt = 6.0 time_step = 1.0 / 60.0 while sim.get_world_time() < start_time + dt: previous_rigid_state = locobot.rigid_state # manually integrate the rigid state target_rigid_state = vel_control.integrate_transform( time_step, previous_rigid_state ) # snap rigid state to navmesh and set state to object/agent end_pos = sim.step_filter( previous_rigid_state.translation, target_rigid_state.translation ) locobot.translation = end_pos locobot.rotation = target_rigid_state.rotation # Check if a collision occurred dist_moved_before_filter = ( target_rigid_state.translation - previous_rigid_state.translation ).dot() dist_moved_after_filter = (end_pos - previous_rigid_state.translation).dot() # NB: There are some cases where ||filter_end - end_pos|| > 0 when a # collision _didn't_ happen. One such case is going up stairs.
Instead, # we check to see if the amount moved after the application of the filter # is _less_ than the amount moved before the application of the filter EPS = 1e-5 collided = (dist_moved_after_filter + EPS) < dist_moved_before_filter # run any dynamics simulation sim.step_physics(time_step) # render observation observations.append(sim.get_sensor_observations()) # randomize angular velocity last_velocity_set += time_step if last_velocity_set >= 1.0: vel_control.angular_velocity = [0.0, (random.random() - 0.5) * 2.0, 0.0] last_velocity_set = 0 # video rendering with embedded 1st person views if make_video: sensor_dims = ( sim.get_agent(0).agent_config.sensor_specifications[0].resolution ) overlay_dims = (int(sensor_dims[1] / 4), int(sensor_dims[0] / 4)) overlay_settings = [ { "obs": "rgba_camera_1stperson", "type": "color", "dims": overlay_dims, "pos": (10, 10), "border": 2, }, { "obs": "depth_camera_1stperson", "type": "depth", "dims": overlay_dims, "pos": (10, 30 + overlay_dims[1]), "border": 2, }, ] vut.make_video( observations=observations, primary_obs="rgba_camera_3rdperson", primary_obs_type="color", video_file=output_path + video_prefix, fps=60, open_vid=show_video, overlay_settings=overlay_settings, depth_clip=10.0, ) # [/embodied_agent_navmesh] ```
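The collision heuristic inside the loop above boils down to comparing the squared displacement before and after navmesh filtering. Here is a standalone sketch using NumPy vectors in place of Magnum's (whose argument-less `.dot()` returns a vector's squared length); the function name and inputs are illustrative, not part of the habitat-sim API:

```python
import numpy as np

def collided(prev_pos, target_pos, filtered_pos, eps=1e-5):
    """True if the navmesh filter shortened the intended move, i.e. a likely collision."""
    prev_pos, target_pos, filtered_pos = (
        np.asarray(prev_pos), np.asarray(target_pos), np.asarray(filtered_pos)
    )
    # squared distance moved before and after applying the filter
    before = float(np.dot(target_pos - prev_pos, target_pos - prev_pos))
    after = float(np.dot(filtered_pos - prev_pos, filtered_pos - prev_pos))
    # epsilon guards against false positives from floating-point noise
    return (after + eps) < before

# the filter stopped the robot halfway: report a collision
print(collided([0, 0, 0], [0, 0, 1.0], [0, 0, 0.5]))  # True
# the filter let the move through unchanged: no collision
print(collided([0, 0, 0], [0, 0, 1.0], [0, 0, 1.0]))  # False
```

Comparing squared lengths (rather than end-point distance) is what makes the check robust to cases like stair climbing, where the filtered end point differs from the target even though no collision happened.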
# Neural Machine Translation with Attention: German to English Here we implement a neural machine translator with attention using standard TensorFlow operations. ``` # These are all the modules we'll be using later. Make sure you can import them # before proceeding further. %matplotlib inline from __future__ import print_function import collections import math import numpy as np import os import random import tensorflow as tf import zipfile from matplotlib import pylab from six.moves import range from six.moves.urllib.request import urlretrieve from PIL import Image from collections import Counter import csv import matplotlib.gridspec as gridspec import word2vec from nltk.translate.bleu_score import corpus_bleu import nltk ``` ## Loading Data First, download the data from this [page](https://nlp.stanford.edu/projects/nmt/). The required files are: * File containing German sentences: [`train.de`](https://nlp.stanford.edu/projects/nmt/data/wmt14.en-de/train.de) * File containing English sentences: [`train.en`](https://nlp.stanford.edu/projects/nmt/data/wmt14.en-de/train.en) * File containing German vocabulary: [`vocab.50K.de`](https://nlp.stanford.edu/projects/nmt/data/wmt14.en-de/vocab.50K.de) * File containing English vocabulary: [`vocab.50K.en`](https://nlp.stanford.edu/projects/nmt/data/wmt14.en-de/vocab.50K.en) ### Loading Vocabulary First we build the vocabulary dictionaries for both the source (German) and target (English) languages. The vocabularies are found in the `vocab.50K.de` (German) and `vocab.50K.en` (English) files.
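Before running the next cell, it may help to see the vocabulary-loading pattern in isolation: each line of the file holds one word, and a word's line position becomes its integer ID. A minimal sketch with a hypothetical in-memory vocabulary standing in for `vocab.50K.de`:

```python
# Hypothetical five-word vocabulary standing in for the vocab.50K.de file
vocab_lines = ['<unk>\n', '<s>\n', '</s>\n', 'der\n', 'die\n']

dictionary = {}
for line in vocab_lines:
    # drop the trailing newline; the new word's ID is the current dictionary size
    dictionary[line[:-1]] = len(dictionary)

# reverse mapping: ID -> word string
reverse_dictionary = dict(zip(dictionary.values(), dictionary.keys()))

print(dictionary['der'])      # 3
print(reverse_dictionary[0])  # <unk>
```

The same pattern is applied twice below, once per language, giving the `src_dictionary`/`src_reverse_dictionary` and `tgt_dictionary`/`tgt_reverse_dictionary` pairs.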
``` # ========================================== # Building source language vocabulary # Contains word string -> ID mapping src_dictionary = dict() # Read the vocabulary file with open('vocab.50K.de', encoding='utf-8') as f: # Read and store every line for line in f: # we discard the last char as it is the newline char src_dictionary[line[:-1]] = len(src_dictionary) # Build a reverse dictionary with the mapping ID -> word string src_reverse_dictionary = dict(zip(src_dictionary.values(),src_dictionary.keys())) # Print some of the words in the dictionary print('Source') print('\t',list(src_dictionary.items())[:10]) print('\t',list(src_reverse_dictionary.items())[:10]) print('\t','Vocabulary size: ', len(src_dictionary)) # ========================================== # Building target language vocabulary # Contains word string -> ID mapping tgt_dictionary = dict() # Read the vocabulary file with open('vocab.50K.en', encoding='utf-8') as f: # Read and store every line for line in f: # we discard the last char as it is the newline char tgt_dictionary[line[:-1]] = len(tgt_dictionary) # Build a reverse dictionary with the mapping ID -> word string tgt_reverse_dictionary = dict(zip(tgt_dictionary.values(),tgt_dictionary.keys())) # Print some of the words in the dictionary print('Target') print('\t',list(tgt_dictionary.items())[:10]) print('\t',list(tgt_reverse_dictionary.items())[:10]) print('\t','Vocabulary size: ', len(tgt_dictionary)) # Each language has a 50,000-word vocabulary vocabulary_size = 50000 ``` ### Loading Training and Testing Data Here we load the data in the `train.de` and `train.en` files and split it into two sets: training data and testing data.
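The split in the next cell is index-based: every 500th line among the first 50,000 (after skipping the first 50 noisy lines) is held out for testing, which yields exactly 100 test sentences. In isolation:

```python
# indices of held-out test sentences: every 500th line in the first 50,000,
# starting at line 50 (the first 50 lines are skipped during loading)
test_indices = [l_i for l_i in range(50, 50001, 500)]

print(len(test_indices))  # 100
print(test_indices[:3])   # [50, 550, 1050]
```

Because source and target files are read with the same index set, each held-out German sentence keeps its aligned English translation.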
``` # Contains the training sentences source_sent = [] # Input target_sent = [] # Output # Contains the testing sentences test_source_sent = [] # Input test_target_sent = [] # Output # We grab around 100 lines of data that are interleaved # in the first 50000 sentences test_indices = [l_i for l_i in range(50,50001,500)] # Read the source data file and read the first 250,000 lines (except first 50) with open('train.de', encoding='utf-8') as f: for l_i, line in enumerate(f): # discard the first 50 translations, as some erroneous # English-to-English mappings were found in the first few lines if l_i<50: continue if len(source_sent)<250000 and l_i not in test_indices: source_sent.append(line) elif l_i in test_indices: test_source_sent.append(line) # Read the target data file and read the first 250,000 lines (except first 50) with open('train.en', encoding='utf-8') as f: for l_i, line in enumerate(f): # discard the first 50 translations, as some erroneous # English-to-English mappings were found in the first few lines if l_i<50: continue if len(target_sent)<250000 and l_i not in test_indices: target_sent.append(line) elif l_i in test_indices: test_target_sent.append(line) # Make sure we extracted the same number of source and target sentences assert len(source_sent)==len(target_sent),'Source: %d, Target: %d'%(len(source_sent),len(target_sent)) # Print some training translations print('Sample translations (%d)'%len(source_sent)) for i in range(0,250000,10000): print('(',i,') DE: ', source_sent[i]) print('(',i,') EN: ', target_sent[i]) # Print some test translations print('Sample test translations (%d)'%len(test_source_sent)) for i in range(0,100,10): print('DE: ', test_source_sent[i]) print('EN: ', test_target_sent[i]) ``` ### Preprocessing text Here we preprocess the text by replacing words not found in the dictionary with `<unk>`, splitting the punctuation marks (`.`, `,`) off as separate tokens, and removing new-line characters.
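The heart of this step can be sketched independently of the notebook's globals. Assuming a small hypothetical vocabulary set, the tokenizer pads punctuation with spaces, drops the newline, splits on spaces, and maps out-of-vocabulary words to `<unk>` (the notebook's `split_to_tokens` does the same, additionally counting the `<unk>` replacements):

```python
def tokenize(sent, vocab):
    # split ',' and '.' into separate tokens, drop the newline
    sent = sent.replace(',', ' ,').replace('.', ' .').replace('\n', ' ')
    tokens = sent.split(' ')
    # replace out-of-vocabulary words with <unk>; leave empty strings as-is
    return [t if (t in vocab or not t.strip()) else '<unk>' for t in tokens]

print(tokenize('Hallo, Welt.\n', {'Hallo', ',', '.'}))
# ['Hallo', ',', '<unk>', '.', '']
```

Note the trailing empty string: replacing `'\n'` with a space and splitting on spaces leaves it behind, which is why the real function skips tokens whose `strip()` is empty when counting unknowns.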
``` # Keep track of how many unknown words were encountered src_unk_count, tgt_unk_count = 0, 0 def split_to_tokens(sent,is_source): ''' This function takes in a sentence (source or target) and preprocesses it with various steps (e.g. separating out punctuation) ''' global src_unk_count, tgt_unk_count # Separate punctuation and remove new-line chars sent = sent.replace(',',' ,') sent = sent.replace('.',' .') sent = sent.replace('\n',' ') sent_toks = sent.split(' ') for t_i, tok in enumerate(sent_toks): if is_source: # src_dictionary contains the word -> word ID mapping for the source vocabulary if tok not in src_dictionary.keys(): if not len(tok.strip())==0: sent_toks[t_i] = '<unk>' src_unk_count += 1 else: # tgt_dictionary contains the word -> word ID mapping for the target vocabulary if tok not in tgt_dictionary.keys(): if not len(tok.strip())==0: sent_toks[t_i] = '<unk>' tgt_unk_count += 1 return sent_toks # Let us first look at some statistics of the sentences # Train - source data source_len = [] source_mean, source_std = 0,0 for sent in source_sent: source_len.append(len(split_to_tokens(sent,True))) print('(Source) Sentence mean length: ', np.mean(source_len)) print('(Source) Sentence stddev length: ', np.std(source_len)) # Let us first look at some statistics of the sentences # Train - target data target_len = [] for sent in target_sent: target_len.append(len(split_to_tokens(sent,False))) print('(Target) Sentence mean length: ', np.mean(target_len)) print('(Target) Sentence stddev length: ', np.std(target_len)) # Let us first look at some statistics of the sentences # Test - source data test_source_len = [] for sent in test_source_sent: test_source_len.append(len(split_to_tokens(sent, True))) print('(Test-Source) Sentence mean length: ', np.mean(test_source_len)) print('(Test-Source) Sentence stddev length: ', np.std(test_source_len)) # Let us first look at some statistics of the sentences # Test - target data test_target_len = [] test_tgt_mean, test_tgt_std =
0,0 for sent in test_target_sent: test_target_len.append(len(split_to_tokens(sent, False))) print('(Test-Target) Sentence mean length: ', np.mean(test_target_len)) print('(Test-Target) Sentence stddev length: ', np.std(test_target_len)) ``` ### Making training and testing data fixed length Here we get all the source sentences and target sentences to a fixed length. This is, so that we can process the sentences as batches. ``` # ================================================================================ # Processing training data src_unk_count, tgt_unk_count = 0, 0 train_inputs = [] train_outputs = [] # Chosen based on previously found statistics src_max_sent_length = 41 tgt_max_sent_length = 61 print('Processing Training Data ...\n') for s_i, (src_sent, tgt_sent) in enumerate(zip(source_sent,target_sent)): # Break source and target sentences to word lists src_sent_tokens = split_to_tokens(src_sent,True) tgt_sent_tokens = split_to_tokens(tgt_sent,False) # Append <s> token's ID to the beggining of source sentence num_src_sent = [src_dictionary['<s>']] # Add the rest of word IDs for words found in the source sentence for tok in src_sent_tokens: if tok in src_dictionary.keys(): num_src_sent.append(src_dictionary[tok]) # If the lenghth of the source sentence below the maximum allowed length # append </s> token's ID to the end if len(num_src_sent)<src_max_sent_length: num_src_sent.extend([src_dictionary['</s>'] for _ in range(src_max_sent_length - len(num_src_sent))]) # If the length exceed the maximum allowed length # truncate the sentence elif len(num_src_sent)>src_max_sent_length: num_src_sent = num_src_sent[:src_max_sent_length] # Make sure the sentence is of length src_max_sent_length assert len(num_src_sent)==src_max_sent_length,len(num_src_sent) train_inputs.append(num_src_sent) # Create the numeric target sentence with word IDs # append <s> to the beginning and append actual words later num_tgt_sent = [tgt_dictionary['<s>']] for tok in tgt_sent_tokens: if 
tok in tgt_dictionary.keys(): num_tgt_sent.append(tgt_dictionary[tok]) ## Modifying the outputs such that all the outputs have max_length elements if len(num_tgt_sent)<tgt_max_sent_length: num_tgt_sent.extend([tgt_dictionary['</s>'] for _ in range(tgt_max_sent_length - len(num_tgt_sent))]) elif len(num_tgt_sent)>tgt_max_sent_length: num_tgt_sent = num_tgt_sent[:tgt_max_sent_length] train_outputs.append(num_tgt_sent) print('Unk counts Src: %d, Tgt: %d'%(src_unk_count, tgt_unk_count)) print('Sentences ',len(train_inputs)) assert len(train_inputs) == len(source_sent),\ 'Size of total elements: %d, Total sentences: %d'\ %(len(train_inputs),len(source_sent)) # Making inputs and outputs NumPy arrays train_inputs = np.array(train_inputs, dtype=np.int32) train_outputs = np.array(train_outputs, dtype=np.int32) # Make sure number of inputs and outputs dividable by 100 train_inputs = train_inputs[:(train_inputs.shape[0]//100)*100,:] train_outputs = train_outputs[:(train_outputs.shape[0]//100)*100,:] print('\t Done processing training data \n') # Printing some data print('Samples from training data') for ti in range(10): print('\t',[src_reverse_dictionary[w] for w in train_inputs[ti,:].tolist()]) print('\t',[tgt_reverse_dictionary[w] for w in train_outputs[ti,:].tolist()]) print() print('\tSentences ',train_inputs.shape[0]) # ================================================================================ # Processing Test data src_unk_count, tgt_unk_count = 0, 0 print('Processing testing data ....\n') test_inputs = [] test_outputs = [] for s_i, (src_sent,tgt_sent) in enumerate(zip(test_source_sent,test_target_sent)): src_sent_tokens = split_to_tokens(src_sent,True) tgt_sent_tokens = split_to_tokens(tgt_sent,False) num_src_sent = [src_dictionary['<s>']] for tok in src_sent_tokens: if tok in src_dictionary.keys(): num_src_sent.append(src_dictionary[tok]) num_tgt_sent = [src_dictionary['<s>']] for tok in tgt_sent_tokens: if tok in tgt_dictionary.keys(): 
num_tgt_sent.append(tgt_dictionary[tok]) # Append </s> if the length is not src_max_sent_length if len(num_src_sent)<src_max_sent_length: num_src_sent.extend([src_dictionary['</s>'] for _ in range(src_max_sent_length - len(num_src_sent))]) # Truncate the sentence if length is over src_max_sent_length elif len(num_src_sent)>src_max_sent_length: num_src_sent = num_src_sent[:src_max_sent_length] assert len(num_src_sent)==src_max_sent_length, len(num_src_sent) test_inputs.append(num_src_sent) # Append </s> is length is not tgt_max_sent_length if len(num_tgt_sent)<tgt_max_sent_length: num_tgt_sent.extend([tgt_dictionary['</s>'] for _ in range(tgt_max_sent_length - len(num_tgt_sent))]) # Truncate the sentence if length over tgt_max_sent_length elif len(num_tgt_sent)>tgt_max_sent_length: num_tgt_sent = num_tgt_sent[:tgt_max_sent_length] assert len(num_tgt_sent)==tgt_max_sent_length, len(num_tgt_sent) test_outputs.append(num_tgt_sent) # Printing some data print('Unk counts Tgt: %d, Tgt: %d'%(src_unk_count, tgt_unk_count)) print('Done processing testing data ....\n') test_inputs = np.array(test_inputs,dtype=np.int32) test_outputs = np.array(test_outputs,dtype=np.int32) print('Samples from training data') for ti in range(10): print('\t',[src_reverse_dictionary[w] for w in test_inputs[ti,:].tolist()]) print('\t',[tgt_reverse_dictionary[w] for w in test_outputs[ti,:].tolist()]) ``` ## Learning word embeddings In this section, we learn word embeddings for both the languages using the sentences we have. After learning word embeddings, this will create two arrays (`en-embeddings-tmp.npy` and `de-embeddings-tmp.npy`) and store them on disk. To use this in the successive computations, go ahead and change the names to `en-embeddings.npy` and `de-embeddings.npy` respectively. ** You can skip this if you have run the code previously. 
**

```
# Total number of sentences
tot_sentences = train_inputs.shape[0]
print('Total number of training sentences: ',tot_sentences)

# We keep a cursor for each sentence in the training set
sentence_cursors = [0 for _ in range(tot_sentences)]

batch_size = 64
embedding_size = 128 # Dimension of the embedding vector.

# Defining various things needed by the word2vec python script
word2vec.define_data_and_hyperparameters(
    tot_sentences, src_max_sent_length, tgt_max_sent_length,
    src_dictionary, tgt_dictionary,
    src_reverse_dictionary, tgt_reverse_dictionary,
    train_inputs, train_outputs, embedding_size, vocabulary_size)

# Print some batches to make sure the data generator is correct
word2vec.print_some_batches()

# Define TensorFlow ops for learning word embeddings
word2vec.define_word2vec_tensorflow(batch_size)

# Run embedding learning for the source language
# Stores de-embeddings-tmp.npy to disk
word2vec.run_word2vec_source(batch_size)

# Run embedding learning for the target language
# Stores en-embeddings-tmp.npy to disk
word2vec.run_word2vec_target(batch_size)
```

## Flipping the Input Data

Reversing the word order of the source sentences improves the performance of NMT systems. With the source reversed, the beginning of the source sentence ends up closest to the beginning of the target sentence, which helps the NMT system establish strong correspondences between the two languages early in the decoding.
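To see concretely what the flipping operation in the next cell does, here is `np.fliplr` applied to a tiny made-up batch of padded word-ID sequences (the IDs are arbitrary):

```python
import numpy as np

# A made-up batch of two padded source sentences (rows are sentences)
toy_inputs = np.array([[11, 12, 13, 0, 0],
                       [21, 22, 23, 24, 0]], dtype=np.int32)

# fliplr reverses each row (i.e., each sentence) left to right
flipped = np.fliplr(toy_inputs)
print(flipped)
# [[ 0  0 13 12 11]
#  [ 0 24 23 22 21]]
```

Note that applying `np.fliplr` twice restores the original array, which is why the cell below must not be run multiple times.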
*DON'T RUN THIS MULTIPLE TIMES, as running it twice restores the original order.*

```
## Reverse the German sentences
# Remember reversing the source sentence gives better performance
# DON'T RUN THIS MULTIPLE TIMES as running two times gives original
train_inputs = np.fliplr(train_inputs)
test_inputs = np.fliplr(test_inputs)

print('Training and Test source data after flipping ')
print('\t',[src_reverse_dictionary[w] for w in train_inputs[0,:].tolist()])
print('\t',[src_reverse_dictionary[w] for w in test_inputs[0,:].tolist()])
print()
print('\t',[src_reverse_dictionary[w] for w in train_inputs[10,:].tolist()])
print('\t',[src_reverse_dictionary[w] for w in test_inputs[10,:].tolist()])
print()
print('\nTesting data after flipping')
print('\t',[src_reverse_dictionary[w] for w in test_inputs[0,:].tolist()])
```

## Data Generation for MT

Now we define the data generator for our NMT.

```
emb_mat = np.load('de-embeddings.npy')
embedding_size = emb_mat.shape[1]
input_size = embedding_size

class DataGeneratorMT(object):

    def __init__(self,batch_size,num_unroll,is_source, is_train):
        # Number of data points in a batch
        self._batch_size = batch_size
        # Number of unrollings
        self._num_unroll = num_unroll
        # Cursors for each element in the batch
        self._cursor = [0 for offset in range(self._batch_size)]
        # Loading the learnt word embeddings
        self._src_word_embeddings = np.load('de-embeddings.npy')
        self._tgt_word_embeddings = np.load('en-embeddings.npy')
        # The sentence IDs being currently processed to create the
        # current batch
        self._sent_ids = None
        # Do we want a batch of data from the source or the target?
        self._is_source = is_source
        # Is this training or testing data?
        self._is_train = is_train

    def next_batch(self, sent_ids):
        # Depending on whether we want source or target data,
        # change the maximum sentence length
        if self._is_source:
            max_sent_length = src_max_sent_length
        else:
            max_sent_length = tgt_max_sent_length

        # Arrays to hold input and output data
        # Word embeddings (current word)
        batch_data = np.zeros((self._batch_size,input_size),dtype=np.float32)
        # One-hot encoded label (next word)
        batch_labels = np.zeros((self._batch_size,vocabulary_size),dtype=np.float32)

        # Populate each index of the batch
        for b in range(self._batch_size):
            # Sentence IDs to get data from
            sent_id = sent_ids[b]

            # If generating data with source sentences,
            # use src_word_embeddings
            if self._is_source:
                # Depending on whether we need training data or testing data,
                # choose the previously created training or testing data
                if self._is_train:
                    sent_text = train_inputs[sent_id]
                else:
                    sent_text = test_inputs[sent_id]

                # Populate the batch data arrays
                batch_data[b] = self._src_word_embeddings[sent_text[self._cursor[b]],:]
                batch_labels[b] = np.zeros((vocabulary_size),dtype=np.float32)
                batch_labels[b,sent_text[self._cursor[b]+1]] = 1.0

            # If generating data with target sentences,
            # use tgt_word_embeddings
            else:
                # Depending on whether we need training data or testing data,
                # choose the previously created training or testing data
                if self._is_train:
                    sent_text = train_outputs[sent_id]
                else:
                    sent_text = test_outputs[sent_id]

                # We cannot avoid having two different embedding vectors for the <s> token
                # in the source and target languages.
                # Therefore, whenever the symbol appears, we always take the source embedding vector
                if sent_text[self._cursor[b]]!=tgt_dictionary['<s>']:
                    batch_data[b] = self._tgt_word_embeddings[sent_text[self._cursor[b]],:]
                else:
                    batch_data[b] = self._src_word_embeddings[sent_text[self._cursor[b]],:]

                # Populate the data arrays
                batch_labels[b] = np.zeros((vocabulary_size),dtype=np.float32)
                batch_labels[b,sent_text[self._cursor[b]+1]] = 1.0

            # Update the cursor for each batch index
            self._cursor[b] = (self._cursor[b]+1)%(max_sent_length-1)

        return batch_data,batch_labels

    def unroll_batches(self,sent_ids):
        # Only update if new sentence IDs are provided;
        # otherwise it keeps using the previously defined
        # sent_ids
        if sent_ids is not None:
            self._sent_ids = sent_ids

        # Unlike in the previous exercises, we do not process a single sequence
        # over many iterations of unrollings. We process a full source or target sentence
        # in a single go. So we reset the _cursor every time we generate a batch
        self._cursor = [0 for _ in range(self._batch_size)]

        unroll_data,unroll_labels = [],[]
        # Unrolling data over time
        for ui in range(self._num_unroll):
            data, labels = self.next_batch(self._sent_ids)
            unroll_data.append(data)
            unroll_labels.append(labels)

        # Return unrolled data and sentence IDs
        return unroll_data, unroll_labels, self._sent_ids

    def reset_indices(self):
        self._cursor = [0 for offset in range(self._batch_size)]

# Running a tiny set to see if the implementation is correct
dg = DataGeneratorMT(batch_size=5,num_unroll=20,is_source=True, is_train=True)
u_data, u_labels, _ = dg.unroll_batches([0,1,2,3,4])

print('Source data')
for _, lbl in zip(u_data,u_labels):
    # Get the string words for the returned word IDs and display the results
    print([src_reverse_dictionary[w] for w in np.argmax(lbl,axis=1).tolist()])

# Running a tiny set to see if the implementation is correct
dg = DataGeneratorMT(batch_size=5,num_unroll=30,is_source=False, is_train=True)
u_data, u_labels, _ = dg.unroll_batches([0,2,3,4,5])

print('\nTarget data batch')
for d_i,(_, lbl) in enumerate(zip(u_data,u_labels)):
    # Get the string words for the returned word IDs and display the results
    print([tgt_reverse_dictionary[w] for w in np.argmax(lbl,axis=1).tolist()])
```

## Attention-Based NMT System

Here we define the attention-based NMT system.
Unlike a standard NMT system, an attention-based NMT system can refer to any of the encoder states during any step of the decoding. This is achieved through the attention layer.

### Defining hyperparameters

Here we define the various hyperparameters used by our model.

```
num_nodes = 128
batch_size = 10

# We unroll over the full sentence length in one go,
# for both source and target sentences
enc_num_unrollings = 40
dec_num_unrollings = 60
```

### Defining Input/Output Placeholders

Here we define the placeholders used to feed in inputs and outputs. Additionally, we define a mask placeholder that can exclude certain outputs from the loss calculation.

```
tf.reset_default_graph()

tgt_word_embeddings = tf.convert_to_tensor(np.load('en-embeddings.npy'))

# Encoder training input data.
enc_train_inputs = []
# Defining unrolled training inputs
for ui in range(enc_num_unrollings):
    enc_train_inputs.append(tf.placeholder(tf.float32, shape=[batch_size,input_size],name='train_inputs_%d'%ui))

# Decoder training input/output data.
dec_train_inputs, dec_train_labels = [],[]
dec_train_masks = []
# Defining unrolled training inputs, labels and masks
for ui in range(dec_num_unrollings):
    dec_train_inputs.append(tf.placeholder(tf.float32, shape=[batch_size,input_size],name='dec_train_inputs_%d'%ui))
    dec_train_labels.append(tf.placeholder(tf.float32, shape=[batch_size,vocabulary_size], name = 'dec_train_labels_%d'%ui))
    dec_train_masks.append(tf.placeholder(tf.float32, shape=[batch_size,1],name='dec_train_masks_%d'%ui))

enc_test_input = [tf.placeholder(tf.float32, shape=[batch_size,input_size]) for _ in range(enc_num_unrollings)]
enc_test_mask = [tf.placeholder(tf.int32,shape=[batch_size]) for _ in range(enc_num_unrollings)]

dec_test_input = tf.nn.embedding_lookup(tgt_word_embeddings,[tgt_dictionary['<s>']])
```

### Defining the Encoder Model

We define the encoder model. The encoder is a single LSTM cell, with TensorFlow variables for saving its state and output across unrollings.
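As a reference for the parameter definitions below, the computations these LSTM weights implement can be written as follows (using $x_t$ for the input, $h_{t-1}$ for the previous output, and $c_{t-1}$ for the previous cell state; the `tanh` is applied inside the state update, matching the cell functions defined later):

$$
\begin{aligned}
i_t &= \sigma(x_t W_{ix} + h_{t-1} W_{im} + b_i) \\
f_t &= \sigma(x_t W_{fx} + h_{t-1} W_{fm} + b_f) \\
c_t &= f_t \odot c_{t-1} + i_t \odot \tanh(x_t W_{cx} + h_{t-1} W_{cm} + b_c) \\
o_t &= \sigma(x_t W_{ox} + h_{t-1} W_{om} + b_o) \\
h_t &= o_t \odot \tanh(c_t)
\end{aligned}
$$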
``` print('Defining Encoder Parameters') with tf.variable_scope('Encoder'): # Input gate (i_t) - How much memory to write to cell state enc_ix = tf.get_variable('ix',shape=[input_size, num_nodes], initializer = tf.contrib.layers.xavier_initializer()) enc_im = tf.get_variable('im',shape=[num_nodes, num_nodes], initializer = tf.contrib.layers.xavier_initializer()) enc_ib = tf.Variable(tf.random_uniform([1, num_nodes],-0.05, 0.05),name='ib') # Forget gate (f_t) - How much memory to discard from cell state enc_fx = tf.get_variable('fx',shape=[input_size, num_nodes], initializer = tf.contrib.layers.xavier_initializer()) enc_fm = tf.get_variable('fm',shape=[num_nodes, num_nodes], initializer = tf.contrib.layers.xavier_initializer()) enc_fb = tf.Variable(tf.random_uniform([1, num_nodes],-0.05, 0.05),name='fb') # Candidate value (c~_t) - Used to compute the current cell state enc_cx = tf.get_variable('cx',shape=[input_size, num_nodes], initializer = tf.contrib.layers.xavier_initializer()) enc_cm = tf.get_variable('cm',shape=[num_nodes, num_nodes], initializer = tf.contrib.layers.xavier_initializer()) enc_cb = tf.Variable(tf.random_uniform([1, num_nodes],-0.05,0.05),name='cb') # Output gate (o_t) - How much memory to output from the cell state enc_ox = tf.get_variable('ox',shape=[input_size, num_nodes], initializer = tf.contrib.layers.xavier_initializer()) enc_om = tf.get_variable('om',shape=[num_nodes, num_nodes], initializer = tf.contrib.layers.xavier_initializer()) enc_ob = tf.Variable(tf.random_uniform([1, num_nodes],-0.05,0.05),name='ob') # Variables saving state across unrollings. 
saved_output = tf.Variable(tf.zeros([batch_size, num_nodes]), trainable=False, name='train_output') saved_state = tf.Variable(tf.zeros([batch_size, num_nodes]), trainable=False, name = 'train_cell') # Variables for saving state for testing saved_test_output = tf.Variable(tf.zeros([batch_size, num_nodes]),trainable=False, name='test_output') saved_test_state = tf.Variable(tf.zeros([batch_size, num_nodes]),trainable=False, name='test_cell') print('\tDone') ``` ### Defining the Decoder Model Decoder is a single LSTM cell with an additional softmax layer that can predict words. ``` print('Defining Decoder Parameters') with tf.variable_scope('Decoder'): # Input gate (i_t) - How much memory to write to cell state dec_ix = tf.get_variable('ix',shape=[input_size, num_nodes], initializer = tf.contrib.layers.xavier_initializer()) dec_im = tf.get_variable('im',shape=[num_nodes, num_nodes], initializer = tf.contrib.layers.xavier_initializer()) dec_ic = tf.get_variable('ic',shape=[num_nodes, num_nodes], initializer = tf.contrib.layers.xavier_initializer()) dec_ib = tf.Variable(tf.random_uniform([1, num_nodes],-0.05, 0.05),name='ib') # Forget gate (f_t) - How much memory to discard from cell state dec_fx = tf.get_variable('fx',shape=[input_size, num_nodes], initializer = tf.contrib.layers.xavier_initializer()) dec_fm = tf.get_variable('fm',shape=[num_nodes, num_nodes], initializer = tf.contrib.layers.xavier_initializer()) dec_fc = tf.get_variable('fc',shape=[num_nodes, num_nodes], initializer = tf.contrib.layers.xavier_initializer()) dec_fb = tf.Variable(tf.random_uniform([1, num_nodes],-0.05, 0.05),name='fb') # Candidate value (c~_t) - Used to compute the current cell state dec_cx = tf.get_variable('cx',shape=[input_size, num_nodes], initializer = tf.contrib.layers.xavier_initializer()) dec_cm = tf.get_variable('cm',shape=[num_nodes, num_nodes], initializer = tf.contrib.layers.xavier_initializer()) dec_cc = tf.get_variable('cc',shape=[num_nodes, num_nodes], initializer = 
tf.contrib.layers.xavier_initializer())
    dec_cb = tf.Variable(tf.random_uniform([1, num_nodes],-0.05,0.05),name='cb')

    # Output gate (o_t) - How much memory to output from the cell state
    dec_ox = tf.get_variable('ox',shape=[input_size, num_nodes],
                             initializer = tf.contrib.layers.xavier_initializer())
    dec_om = tf.get_variable('om',shape=[num_nodes, num_nodes],
                             initializer = tf.contrib.layers.xavier_initializer())
    dec_oc = tf.get_variable('oc',shape=[num_nodes, num_nodes],
                             initializer = tf.contrib.layers.xavier_initializer())
    dec_ob = tf.Variable(tf.random_uniform([1, num_nodes],-0.05,0.05),name='ob')

    # Softmax classifier weights and biases.
    # The input dimensionality is num_nodes*2 because the decoder output
    # is concatenated with the attention context vector before the softmax
    w = tf.get_variable('softmax_weights',shape=[num_nodes*2, vocabulary_size],
                        initializer = tf.contrib.layers.xavier_initializer())
    b = tf.Variable(tf.random_uniform([vocabulary_size],-0.05,0.05),name='softmax_bias')

print('\tDone')
```

### Attention Layer Related Variables

We define the weights used to compute the energy ($e_{ij}$) in the attention layer.

```
print('Defining Attention Variables ...')

with tf.variable_scope('Attention'):
    # Used to calculate e_{ij} as
    # e_{ij} = v_a' tanh(W_a . enc_output + U_a . dec_output)
    # Then alpha_{ij} is the softmax-normalized output of e_{ij}
    W_a = tf.Variable(tf.truncated_normal([num_nodes,num_nodes],stddev=0.05),name='W_a')
    U_a = tf.Variable(tf.truncated_normal([num_nodes,num_nodes],stddev=0.05),name='U_a')
    v_a = tf.Variable(tf.truncated_normal([num_nodes,1],stddev=0.05),name='v_a')

print('\tDone')
```

### Defining Cell and Layer Computational Functions

We define several functions below:

* Encoder LSTM cell computations
* Decoder LSTM cell computations
* Attention layer computations
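In equation form, the attention computation implemented by the `attn_layer` function below is (with $h_j$ the $j$-th encoder output and $s_{i-1}$ the previous decoder output; note that in this implementation $W_a$ multiplies the encoder outputs and $U_a$ the decoder state):

$$
\begin{aligned}
e_{ij} &= v_a^\top \tanh\left(W_a h_j + U_a s_{i-1}\right) \\
\alpha_{ij} &= \frac{\exp(e_{ij})}{\sum_{k} \exp(e_{ik})} \\
c_i &= \sum_{j} \alpha_{ij} h_j
\end{aligned}
$$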
```
# Definition of the cell computation (Encoder)
def enc_lstm_cell(i, o, state):
    """Create an encoder LSTM cell"""
    input_gate = tf.sigmoid(tf.matmul(i, enc_ix) + tf.matmul(o, enc_im) + enc_ib)
    forget_gate = tf.sigmoid(tf.matmul(i, enc_fx) + tf.matmul(o, enc_fm) + enc_fb)
    update = tf.matmul(i, enc_cx) + tf.matmul(o, enc_cm) + enc_cb
    state = forget_gate * state + input_gate * tf.tanh(update)
    output_gate = tf.sigmoid(tf.matmul(i, enc_ox) + tf.matmul(o, enc_om) + enc_ob)
    return output_gate * tf.tanh(state), state

# Definition of the cell computation (Decoder)
def dec_lstm_cell(i, o, state, c):
    """Create a decoder LSTM cell, additionally conditioned on the context vector c"""
    input_gate = tf.sigmoid(tf.matmul(i, dec_ix) + tf.matmul(o, dec_im) +
                            tf.matmul(c, dec_ic) + dec_ib)
    forget_gate = tf.sigmoid(tf.matmul(i, dec_fx) + tf.matmul(o, dec_fm) +
                             tf.matmul(c, dec_fc) + dec_fb)
    update = tf.matmul(i, dec_cx) + tf.matmul(o, dec_cm) + tf.matmul(c, dec_cc) + dec_cb
    state = forget_gate * state + input_gate * tf.tanh(update)
    output_gate = tf.sigmoid(tf.matmul(i, dec_ox) + tf.matmul(o, dec_om) +
                             tf.matmul(c, dec_oc) + dec_ob)
    return output_gate * tf.tanh(state), state

def attn_layer(h_j_unrolled, s_i_minus_1):
    '''
    Computes attention values for a given decoding position
    h_j_unrolled : all the unrolled encoder outputs
                   [[batch_size, num_nodes], [batch_size, num_nodes], ...] => enc_num_unrollings-many
    s_i_minus_1 : the previous decoder output [batch_size, num_nodes]
    '''
    # For the following calculations we concatenate all enc_num_unrollings encoder outputs

    # Get the encoder logits
    enc_logits = tf.concat(axis=0,values=h_j_unrolled)

    # W_a . encoder_output
    w_a_mul_h_j = tf.matmul(enc_logits,W_a) # of size [enc_num_unroll x batch_size, num_nodes]

    # U_a . decoder_output
    u_a_mul_s_i_minus_1 = tf.matmul(tf.tile(s_i_minus_1,[enc_num_unrollings,1]), U_a) # of size [enc_num_unroll x batch_size, num_nodes]

    # Calculate the "energy"
    e_j = tf.matmul(tf.nn.tanh(w_a_mul_h_j + u_a_mul_s_i_minus_1),v_a) # of size [enc_num_unroll x batch_size, 1]

    # We split the e_j values back into enc_num_unrollings batches
    batched_e_j = tf.split(axis=0,num_or_size_splits=enc_num_unrollings,value=e_j) # list of enc_num_unroll elements, each [batch_size, 1]
    reshaped_e_j = tf.concat(axis=1,values=batched_e_j) # of size [batch_size, enc_num_unroll]

    # Now we calculate alpha_i for all the enc_num_unrollings time steps
    alpha_i = tf.nn.softmax(reshaped_e_j) # of size [batch_size, enc_num_unroll]

    # Break alpha_i into a list of enc_num_unroll elements, each of size [batch_size,1]
    alpha_i_list = tf.unstack(alpha_i,axis=1)

    # List of enc_num_unroll elements, each of size [batch_size,num_nodes]
    c_i_list = [tf.reshape(alpha_i_list[e_i],[-1,1])*h_j_unrolled[e_i] for e_i in range(enc_num_unrollings)]

    # add_n sums all of them together
    c_i = tf.add_n(c_i_list) # of size [batch_size, num_nodes]

    return c_i,alpha_i
```

### Defining LSTM Computations

Here we define the computations that produce the final state variables of the encoder, feed them into the decoder as its initial state, compute the attention values, and finally compute the LSTM outputs, the logits, and the predictions.
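As a sanity check, the attention computation performed by `attn_layer` can be reproduced in plain NumPy for a single batch element. All arrays here are random stand-ins for the encoder outputs, decoder state and weights:

```python
import numpy as np

rng = np.random.RandomState(0)
num_nodes, enc_steps = 4, 5

# Random stand-ins for encoder outputs h_j, decoder state s_{i-1}, and weights
h = rng.randn(enc_steps, num_nodes)   # one encoder output per source position
s_prev = rng.randn(num_nodes)         # previous decoder output
W_a = rng.randn(num_nodes, num_nodes)
U_a = rng.randn(num_nodes, num_nodes)
v_a = rng.randn(num_nodes)

# Energies e_j = v_a' tanh(W_a h_j + U_a s_{i-1}) for every encoder position
e = np.tanh(h @ W_a + s_prev @ U_a) @ v_a   # shape: (enc_steps,)
# Normalized attention weights
alpha = np.exp(e) / np.sum(np.exp(e))
# Context vector: attention-weighted sum of the encoder outputs
c = (alpha[:, None] * h).sum(axis=0)        # shape: (num_nodes,)

# The weights form a valid distribution over the source positions
assert np.isclose(alpha.sum(), 1.0) and c.shape == (num_nodes,)
```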
``` # ================================================ # Training related inference logic # Store encoder outputs and decoder outputs across the unrolling enc_outputs, dec_outputs = list(),list() # Context vecs are the c_i values in the attention computation context_vecs = list() # These variables are initialized with saved_output and saved_sate # values and then iteratively updated during unrollings output = saved_output state = saved_state print('Calculating Encoder Output') # update the output and state values for all the inputs we have for i in enc_train_inputs: output, state = enc_lstm_cell(i, output,state) # Accumulate all the output values in to a list enc_outputs.append(output) print('Calculating Decoder Output with Attention') # Before starting decoder computations, we make sure that # the encoder outputs are computed with tf.control_dependencies([saved_output.assign(output), saved_state.assign(state)]): # Iterate through the decoder unrollings for ii,i in enumerate(dec_train_inputs): # Compute attention value for each decode position c_i,_ = attn_layer(enc_outputs, output) # Accumulate c_i in a list context_vecs.append(c_i) output, state = dec_lstm_cell(i, output, state, c_i) # Accumulate decoder outputs in a list dec_outputs.append(output) print('Calculating Softmax output') # Compute the logit values logits = tf.matmul( tf.concat(axis=1, values=[ tf.concat(axis=0, values=dec_outputs), tf.concat(axis=0, values=context_vecs) ]), w) + b # Predictions. 
train_prediction = tf.nn.softmax(logits) # ================================================ # Testing related inference logic # Initialize iteratively updated states with # saved_test_output and saved_test_state test_output = saved_test_output test_state = saved_test_state print("Calculations for test data") test_predictions = [] test_enc_outputs = [] # Compute the encoder output iteratively for i in enc_test_input: test_output, test_state = enc_lstm_cell(i, test_output,test_state) test_enc_outputs.append(test_output) # This is used for visualization purposes # To build the attention matrix discussed in the chapter test_alpha_i_unrolled = [] # Make sure the encoder computations are done with tf.control_dependencies([saved_test_output.assign(test_output), saved_test_state.assign(test_state)]): # Compute the decoder outputs iteratively for i in range(dec_num_unrollings): test_c_i,test_alpha = attn_layer(test_enc_outputs, test_output) # Used for attention visualization purposes test_alpha_i_unrolled.append(test_alpha) test_output, test_state = dec_lstm_cell(dec_test_input, test_output, test_state, test_c_i) # Compute predictions for each decoding step test_prediction = tf.nn.softmax( tf.nn.xw_plus_b( tf.concat(axis=1,values=[test_output,test_c_i]), w, b ) ) dec_test_input = tf.nn.embedding_lookup(tgt_word_embeddings,tf.argmax(test_prediction,axis=1)) test_predictions.append(tf.argmax(test_prediction,axis=1)) print('\tDone') ``` ### Calculating the Loss Here we calculate the loss. Loss is calculated by summing all the losses obtained across the time axis and averaging over the batch axis. 
You can see how `dec_train_masks` is used to prevent irrelevant words (the `</s>` padding) from influencing the loss.

```
# Defining loss: cross-entropy loss summed across the time axis, averaged over the batch axis
loss_batch = tf.concat(axis=0,values=dec_train_masks)*tf.nn.softmax_cross_entropy_with_logits_v2(
    logits=logits, labels=tf.concat(axis=0, values=dec_train_labels))
loss = tf.reduce_mean(loss_batch)
```

### Optimizer

We define the model-optimization-specific operations. We use two optimizers here: Adam and SGD. We observed that using Adam alone causes the model to exhibit undesired behaviors in the long run. Therefore we use Adam to get a good initial estimate and switch to SGD from that point onwards.

```
print('Defining Optimizer')

# These are used to decay the learning rate over time
global_step = tf.Variable(0, trainable=False)
inc_gstep = tf.assign(global_step,global_step + 1)
# We use two optimizers; when the optimizer changes
# we reset the global step
reset_gstep = tf.assign(global_step,0)

# Calculate the decaying learning rate
learning_rate = tf.maximum(
    tf.train.exponential_decay(
        0.005, global_step, decay_steps=1, decay_rate=0.95, staircase=True
    ),
    0.0001)

sgd_learning_rate = tf.maximum(
    tf.train.exponential_decay(
        0.005, global_step, decay_steps=1, decay_rate=0.95, staircase=True
    ),
    0.0001)

# We use two optimizers: Adam and naive SGD.
# Using Adam in the long run produced undesirable results
# (e.g. sudden fluctuations in BLEU).
# Therefore we use Adam to get a good starting point for optimizing
# and then switch to SGD from that point onwards
with tf.variable_scope('Adam'):
    optimizer = tf.train.AdamOptimizer(learning_rate)

with tf.variable_scope('SGD'):
    sgd_optimizer = tf.train.GradientDescentOptimizer(sgd_learning_rate)

# Calculates gradients with clipping for Adam
gradients, v = zip(*optimizer.compute_gradients(loss))
gradients, _ = tf.clip_by_global_norm(gradients, 25.0)
optimize = optimizer.apply_gradients(zip(gradients, v))

# Calculates gradients with clipping for SGD
sgd_gradients, v = zip(*sgd_optimizer.compute_gradients(loss))
sgd_gradients, _ = tf.clip_by_global_norm(sgd_gradients, 25.0)
sgd_optimize = sgd_optimizer.apply_gradients(zip(sgd_gradients, v))

# Make sure gradients flow from the decoder to the encoder
print('Checking gradient flow from encoder-to-decoder')
for (g_i,v_i) in zip(gradients,v):
    assert g_i is not None, 'Gradient none for %s'%(v_i.name)
print('\t Ok...')

print('\tDone')
```

### Resetting Train and Test States

Here we define the operations that reset the training and testing states.

```
# Reset state
reset_train_state = tf.group(
    tf.assign(saved_output, tf.zeros([batch_size, num_nodes])),
    tf.assign(saved_state, tf.zeros([batch_size, num_nodes]))
)

reset_test_state = tf.group(
    saved_test_output.assign(tf.zeros([batch_size, num_nodes])),
    saved_test_state.assign(tf.zeros([batch_size, num_nodes]))
)
```

## Running the Neural Machine Translator with Attention

With all the relevant TensorFlow operations defined, we move on to defining several functions for executing our NMT model, and to running the model to obtain translations for previously unseen source sentences.

### Functions for Evaluating and Printing Results

Next we define two functions to print and save the prediction results for the training and testing data, and finally define a function to obtain the candidate and reference data needed to calculate the BLEU score.
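The `[b_i::batch_size]` strided indexing used in `create_bleu_ref_candidate_lists` below relies on the fact that concatenating the unrolled batches along the time axis interleaves the words of different sentences. A small made-up example of the de-interleaving trick:

```python
import numpy as np

batch_size, num_unroll = 3, 4

# Made-up word IDs: sentence b at time step t holds the value 10*b + t
labels = np.array([[10*b + t for b in range(batch_size)] for t in range(num_unroll)])

# Concatenating along the time axis interleaves the sentences
all_labels = labels.reshape(-1)  # [0, 10, 20, 1, 11, 21, 2, 12, 22, 3, 13, 23]

# Taking every batch_size-th element recovers one sentence's word sequence
print(all_labels[1::batch_size])  # [10 11 12 13] -> the words of sentence 1
```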
``` def print_and_save_train_predictions(du_labels, tr_pred, rand_idx, train_prediction_text_fname): ''' Use this to print some predicted training samples and save it to file du_labels: Decoder's unrolled labels (this is a list of dec_num_unrollings where each item is [batch_size, vocabulary_size]) tr_pred: This is an array [dec_num_unrollings*batch_size, vocabulary_size] array rand_idx: Some random index we use to pick a data point to print train_prediction_text_fname: The file we save the prediction results into ''' # This print_str will be written to the text file as well as printed here print_str = 'Actual: ' # We can get each label corresponding to some sentence by traversing the # concatenated labels array ([dec_num_unrollings*batch_size, vocabulary_size]) # with a batch_size stride for w in np.argmax(np.concatenate(du_labels,axis=0)[rand_idx::batch_size],axis=1).tolist(): # Update the print_str print_str += tgt_reverse_dictionary[w] + ' ' # When we encounter the end of sentence </s> we stop printing if tgt_reverse_dictionary[w] == '</s>': break print(print_str) # Write to file with open(os.path.join(log_dir, train_prediction_text_fname),'a',encoding='utf-8') as fa: fa.write(print_str+'\n') # Now print the predicted data by following the same procedure as above print() print_str = 'Predicted: ' for w in np.argmax(tr_pred[rand_idx::batch_size],axis=1).tolist(): print_str += tgt_reverse_dictionary[w] + ' ' # When we encounter the end of sentence </s> we stop printing if tgt_reverse_dictionary[w] == '</s>': break print(print_str) with open(os.path.join(log_dir, train_prediction_text_fname),'a',encoding='utf-8') as fa: fa.write(print_str+'\n') def print_and_save_test_predictions(test_du_labels, test_pred_unrolled, batch_id, test_rand_idx, test_prediction_text_fname): ''' Use this to print some predicted training samples and save it to file test_du_labels: Decoder's unrolled labels (this is a list of dec_num_unrollings where each item is [batch_size, 
vocabulary_size])
    test_pred_unrolled: This is an array [dec_num_unrollings*batch_size, vocabulary_size]
    batch_id: We need this to retrieve the actual sentence for the prediction
    test_rand_idx: Some random index we use to pick a data point to print
    test_prediction_text_fname: The file we save the prediction results to
    '''

    # Print the actual sentence
    print('DE: ',test_source_sent[(batch_id*batch_size)+test_rand_idx])

    # print_str is the string we display as results and write to a file
    print_str = '\t EN (TRUE):' + test_target_sent[(batch_id*batch_size)+test_rand_idx]
    print(print_str + '\n')

    # Printing predictions
    print_str = '\t EN (Predicted): '
    for test_pred in test_pred_unrolled:
        print_str += tgt_reverse_dictionary[test_pred[test_rand_idx]] + ' '
        if tgt_reverse_dictionary[test_pred[test_rand_idx]] == '</s>':
            break
    print(print_str + '\n')

    # Write the results to a text file
    with open(os.path.join(log_dir, test_prediction_text_fname),'a',encoding='utf-8') as fa:
        fa.write(print_str+'\n')

def create_bleu_ref_candidate_lists(all_preds, all_labels):
    '''
    Creates two lists (a candidate list and a reference list) for calculating BLEU
    all_preds: All the predictions
    all_labels: The corresponding actual labels
    Returns
    cand_list: List (sentences) of lists (words in a sentence)
    ref_list: List (sentences) of lists (words in a sentence)
    '''
    bleu_labels, bleu_preds = [],[]

    # To calculate the BLEU score:
    # We iterate as b_i=0,1,2,...,batch_size while grabbing the
    # b_i, b_i+batch_size, b_i+2*batch_size, ... elements from all_labels and all_preds.
    # This is because the labels/predictions belonging to the same sentence are interleaved
    # by batch_size, due to the way we concatenate labels and predictions.
    # Taking elements interleaved by batch_size gives the sequence of words belonging to the same sentence
    ref_list, cand_list = [],[]
    for b_i in range(batch_size):
        tmp_lbl = all_labels[b_i::batch_size]
        tmp_lbl = tmp_lbl[np.where(tmp_lbl != tgt_dictionary['</s>'])]
        ref_str = ' 
'.join([tgt_reverse_dictionary[lbl] for lbl in tmp_lbl]) ref_list.append([ref_str]) tmp_pred = all_preds[b_i::batch_size] tmp_pred = tmp_pred[np.where(tmp_pred != tgt_dictionary['</s>'])] cand_str = ' '.join([tgt_reverse_dictionary[pre] for pre in tmp_pred]) cand_list.append(cand_str) return cand_list, ref_list ``` ### Defining a Single Step of Training We now define a function that trains the NMT model for a single step: it takes in encoder inputs, decoder inputs, and decoder labels, and runs one optimization step. ``` def train_single_step(eu_data, du_data, du_labels): ''' Define a single training step eu_data: Unrolled encoder inputs (word embeddings) du_data: Unrolled decoder inputs (word embeddings) du_labels: Unrolled decoder outputs (one-hot encoded words) ''' # Fill the feed dict (Encoder) feed_dict = {} for ui,dat in enumerate(eu_data): feed_dict[enc_train_inputs[ui]] = dat # Fill the feed dict (Decoder) for ui,(dat,lbl) in enumerate(zip(du_data,du_labels)): feed_dict[dec_train_inputs[ui]] = dat feed_dict[dec_train_labels[ui]] = lbl # The mask excludes the </s> items from the loss d_msk = (np.logical_not(np.argmax(lbl,axis=1)==tgt_dictionary['</s>'])).astype(np.int32).reshape(-1,1) feed_dict[dec_train_masks[ui]] = d_msk # ======================= OPTIMIZATION ========================== # Adam gives unstable loss behavior in the long run, # so after 20000 iterations we switch the optimizer to SGD if (step+1)<20000: _,l,tr_pred = sess.run([optimize,loss,train_prediction], feed_dict=feed_dict) else: _,l,tr_pred = sess.run([sgd_optimize,loss,train_prediction], feed_dict=feed_dict) return l, tr_pred ``` ### Defining Data Generators and Other Related Variables Here we set up logging, the TensorFlow session, and the pretrained word embeddings, and define a function that creates the data generators. ``` # Directory where all the results will be logged log_dir = 'logs' if not os.path.exists(log_dir): os.mkdir(log_dir) # Filenames of the logs train_prediction_text_fname = 
'train_predictions_attn.txt' test_prediction_text_fname = 'test_predictions_attn.txt' # Some configuration for the TensorFlow session config = tf.ConfigProto() config.gpu_options.allow_growth = True config.allow_soft_placement=True sess = tf.InteractiveSession(config=config) # Initialize global variables tf.global_variables_initializer().run() # Load the word embeddings src_word_embeddings = np.load('de-embeddings.npy') tgt_word_embeddings = np.load('en-embeddings.npy') # Defining data generators def define_data_generators(batch_size, enc_num_unrollings, dec_num_unrollings): # Training data generators (Encoder and Decoder) enc_data_generator = DataGeneratorMT(batch_size=batch_size,num_unroll=enc_num_unrollings,is_source=True, is_train=True) dec_data_generator = DataGeneratorMT(batch_size=batch_size,num_unroll=dec_num_unrollings,is_source=False, is_train=True) # Testing data generators (Encoder and Decoder) test_enc_data_generator = DataGeneratorMT(batch_size=batch_size,num_unroll=enc_num_unrollings,is_source=True, is_train=False) test_dec_data_generator = DataGeneratorMT(batch_size=batch_size,num_unroll=dec_num_unrollings,is_source=False, is_train=False) return enc_data_generator,dec_data_generator,test_enc_data_generator,test_dec_data_generator ``` ### Running Training and Testing for NMT With all the TensorFlow operations, helper functions defined we train and test the NMT system. 
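Before the training loop runs, it may help to see in isolation the interleaved layout that `create_bleu_ref_candidate_lists` undoes. A minimal sketch (the toy `batch_size` and word tokens below are made up for illustration):

```python
import numpy as np

# Toy setup: 2 sentences (batch_size=2), 3 unrolled decoder steps.
# Concatenating the per-step batches interleaves the sentences:
# step 0 -> [s0_w0, s1_w0], step 1 -> [s0_w1, s1_w1], step 2 -> [s0_w2, s1_w2]
batch_size = 2
concatenated = np.array(['s0_w0', 's1_w0', 's0_w1', 's1_w1', 's0_w2', 's1_w2'])

# Slicing with a stride of batch_size recovers each sentence in order
sentence_0 = concatenated[0::batch_size]
sentence_1 = concatenated[1::batch_size]
print(sentence_0.tolist())  # ['s0_w0', 's0_w1', 's0_w2']
print(sentence_1.tolist())  # ['s1_w0', 's1_w1', 's1_w2']
```

This is exactly the `all_labels[b_i::batch_size]` slicing used when building the BLEU reference and candidate lists.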
``` # Training and test BLEU scores attn_train_bleu_scores_over_time,attn_test_bleu_scores_over_time = [],[] # Loss over time loss_over_time = [] # Labels and predictions required to calculate the BLEU scores # for both train and test data train_bleu_refs, train_bleu_cands = [],[] test_bleu_refs, test_bleu_cands = [],[] # Number of training steps num_steps = 100001 avg_loss = 0 # Defining data generators for encoder/decoder and training/testing enc_data_generator, dec_data_generator, \ test_enc_data_generator, test_dec_data_generator = \ define_data_generators(batch_size, enc_num_unrollings, dec_num_unrollings) print('Started Training') for step in range(num_steps): # input (encoder) unrolling length: 40 # output (decoder) unrolling length: 60 if (step+1)%10==0: print('.',end='') # Sample a random batch of IDs from training data sent_ids = np.random.randint(low=0,high=train_inputs.shape[0],size=(batch_size)) # Getting an unrolled set of data batches for the encoder eu_data, eu_labels, _ = enc_data_generator.unroll_batches(sent_ids=sent_ids) # Getting an unrolled set of data batches for the decoder du_data, du_labels, _ = dec_data_generator.unroll_batches(sent_ids=sent_ids) # Train for a single step l, tr_pred = train_single_step(eu_data, du_data, du_labels) # We don't calculate BLEU scores all the time, as this is expensive # and slows down the code if np.random.random()<0.1: # all_labels are labels obtained by concatenating all the labels in batches all_labels = np.argmax(np.concatenate(du_labels,axis=0),axis=1) # all_preds are predictions for all unrolled steps all_preds = np.argmax(tr_pred,axis=1) # Get training BLEU candidates and references batch_cands, batch_refs = create_bleu_ref_candidate_lists(all_preds, all_labels) # Accumulate training candidates/references for calculating # BLEU later train_bleu_refs.extend(batch_refs) train_bleu_cands.extend(batch_cands) if (step+1)%500==0: # Writing actual and predicted data to the train prediction file for some 
random sentence print('Step ',step+1) with open(os.path.join(log_dir, train_prediction_text_fname),'a') as fa: fa.write('============= Step ' + str(step+1) + ' =============\n') rand_idx = np.random.randint(low=1,high=batch_size) print_and_save_train_predictions(du_labels, tr_pred, rand_idx, train_prediction_text_fname) # Calculating the BLEU score for the accumulated candidates/references bscore = 0.0 bscore = corpus_bleu(train_bleu_refs,train_bleu_cands,smoothing_function=nltk.translate.bleu_score.SmoothingFunction().method4) attn_train_bleu_scores_over_time.append(bscore) print('(Train) BLEU (%d elements): '%(len(train_bleu_refs)),bscore) # Reset the candidate/reference accumulators train_bleu_refs, train_bleu_cands = [],[] # Write BLEU score to file with open(log_dir + os.sep +'blue_scores_attn.txt','a') as fa_bleu: fa_bleu.write(str(step+1) +','+str(bscore)+'\n') with open(os.path.join(log_dir, train_prediction_text_fname),'a') as fa: fa.write('(Train) BLEU: %.5f\n'%bscore) avg_loss += l # Update average loss sess.run(reset_train_state) # resetting hidden state for each batch # ============================= TEST PHASE ================================== if (step+1)%1000==0: # calculate average loss print('============= Step ', str(step+1), ' =============') print('\t Loss: ',avg_loss/1000.0) loss_over_time.append(avg_loss/1000.0) # write losses to file with open(log_dir + os.sep + 'losses_attn.txt','a') as fa_loss: fa_loss.write(str(step+1) +','+str(avg_loss/1000.0)+'\n') with open(os.path.join(log_dir, train_prediction_text_fname),'a') as fa: fa.write('============= Step ' + str(step+1) + ' =============\n') fa.write('\t Loss: %.5f\n'%(avg_loss/1000.0)) avg_loss = 0.0 # Increase gstep to decay learning rate sess.run(inc_gstep) # reset global step when we change the optimizer if (step+1)==20000: sess.run(reset_gstep) print('=====================================================') print('(Test) Translating test sentences ...') print('Processing test data ... 
') # =================================================================================== # Predictions for Test data for in_i in range(test_inputs.shape[0]//batch_size): # Generate encoder / decoder data for testing data test_eu_data, test_eu_labels, _ = test_enc_data_generator.unroll_batches(sent_ids=np.arange(in_i*batch_size,(in_i+1)*batch_size)) test_du_data, test_du_labels, _ = test_dec_data_generator.unroll_batches(sent_ids=np.arange(in_i*batch_size,(in_i+1)*batch_size)) # fill the feed dict feed_dict = {} for ui,(dat,lbl) in enumerate(zip(test_eu_data,test_eu_labels)): feed_dict[enc_test_input[ui]] = dat # Get predictions out with decoder # run prediction calculation this returns a list of prediction dec_num_unrollings long test_pred_unrolled = sess.run(test_predictions, feed_dict=feed_dict) # We print a randomly selected sample from each batch test_rand_idx = np.random.randint(0,batch_size) # used for printing test output print_and_save_test_predictions(test_du_labels, test_pred_unrolled, in_i, test_rand_idx, test_prediction_text_fname) # Things required to calculate test BLEU score all_labels = np.argmax(np.concatenate(test_du_labels,axis=0),axis=1) all_preds = np.concatenate(test_pred_unrolled, axis=0) batch_cands, batch_refs = create_bleu_ref_candidate_lists(all_preds, all_labels) test_bleu_refs.extend(batch_refs) test_bleu_cands.extend(batch_cands) # Reset the test state sess.run(reset_test_state) # Calculate test BLEU score test_bleu_score = 0.0 test_bleu_score = corpus_bleu(test_bleu_refs,test_bleu_cands, smoothing_function=nltk.translate.bleu_score.SmoothingFunction().method4) attn_test_bleu_scores_over_time.append(test_bleu_score) print('(Test) BLEU (%d elements): '%(len(test_bleu_refs)),test_bleu_score) test_bleu_refs, test_bleu_cands = [],[] print('=====================================================') ``` ## Visualizing the Attention Model Here we visualize the attention matrix for various translations the NMT system produced. 
The attention matrix is a `dec_num_unrollings x enc_num_unrollings` matrix. Where each cell denotes the $\alpha$ values obtained during attention calculation. ``` source_labels = [] target_labels = [] print('=====================================================') print('(Test) Translating test sentences ...') print('Processing test data ... ') # Process each test input by batches for in_i in range(test_inputs.shape[0]//batch_size): # Generate test data test_eu_data, test_eu_labels, _ = test_enc_data_generator.unroll_batches(sent_ids=np.arange(in_i*batch_size,(in_i+1)*batch_size)) test_du_data, test_du_labels, _ = test_dec_data_generator.unroll_batches(sent_ids=np.arange(in_i*batch_size,(in_i+1)*batch_size)) # Choose a random data point in the batch test_rand_idx = np.random.randint(0,batch_size) # used for printing test output # fill the feed dict feed_dict = {} source_labels = [] # This contains the source words of the test point considered for ui,(dat,lbl) in enumerate(zip(test_eu_data,test_eu_labels)): feed_dict[enc_test_input[ui]] = dat source_labels.append(src_reverse_dictionary[test_inputs[(in_i*batch_size)+test_rand_idx,ui]]) # Print the true source sentence print('DE: ',test_source_sent[(in_i*batch_size)+test_rand_idx]) print_str = '\t EN (TRUE):' + test_target_sent[(in_i*batch_size)+test_rand_idx] print(print_str + '\n') print_str = '\t EN (Predicted): ' # run prediction calculation this returns a list of prediction dec_num_unrollings long # alpha_dec_unrolled is a list of dec_num_unrollings elements, # where each element (another list) is num_enc_unrollings long test_pred_unrolled, alpha_dec_unrolled = sess.run([test_predictions,test_alpha_i_unrolled], feed_dict=feed_dict) target_labels = [] # Building the attention matrix attention_matrix = [] r_i,c_i = 0, 0 # We build the attention matrix column by column for u_i, (test_pred, alpha_enc_unrolled) in enumerate(zip(test_pred_unrolled, alpha_dec_unrolled)): # Column index c_i = 0 # Current target word 
current_tgt = tgt_reverse_dictionary[test_pred[test_rand_idx]] # Only add if the word is not <s> or </s> or <unk> if current_tgt != '<s>' and current_tgt != '</s>' and current_tgt != '<unk>': attention_matrix.append([]) target_labels.append(tgt_reverse_dictionary[test_pred[test_rand_idx]]) print_str += tgt_reverse_dictionary[test_pred[test_rand_idx]] + ' ' filtered_src_labels = [] # Fill each row position in that column for u_ii in range(enc_num_unrollings): # Only add if the word is not <s> or </s> or <unk> if source_labels[u_ii] != '<s>' and source_labels[u_ii] != '</s>' and source_labels[u_ii] != '<unk>': filtered_src_labels.append(source_labels[u_ii]) attention_matrix[r_i].append(alpha_enc_unrolled[test_rand_idx,u_ii]) c_i += 1 r_i += 1 assert r_i == len(target_labels) # Make the above to a matrix attention_matrix = np.array(attention_matrix) if attention_matrix.ndim == 1: attention_matrix = attention_matrix.reshape(1,-1) # Reset test state after each batch sess.run(reset_test_state) # Plot f,ax = pylab.subplots(1,1,figsize=(5.0 + 0.5*attention_matrix.shape[0], 5.0 + 0.5*attention_matrix.shape[1])) # Repetitions are used to make the attention value to a set of image pixels rep_attn = np.repeat(attention_matrix,5,axis=0) rep_attn = np.repeat(rep_attn,5,axis=1) # Correcting for source reversing rep_attn = np.fliplr(rep_attn) # Rendering image ax.imshow(rep_attn,vmin=0.0,vmax=1.0,cmap='jet') # Labels for columns for s_i,src_text in enumerate(reversed(filtered_src_labels)): ax.text(s_i*5+1,-2,src_text,rotation=90, verticalalignment='bottom',fontsize=18) # Labels for rows for t_i,tgt_text in enumerate(target_labels): ax.text(-2, t_i*5+0.5,tgt_text, horizontalalignment = 'right', fontsize=18) ax.axis('off') f.savefig('attention_%d.png'%in_i) pylab.close(f) print('=====================================================') ```
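The $\alpha$ values visualized above are attention weights: for each decoder step, a softmax over alignment scores against the encoder positions, so every row of the matrix sums to 1. A minimal standalone sketch (the scores below are hypothetical, not taken from the trained model):

```python
import numpy as np

def attention_weights(scores):
    # Numerically stable softmax over encoder positions
    e = np.exp(scores - np.max(scores))
    return e / e.sum()

# Hypothetical alignment scores of one decoder step against 4 encoder positions
scores = np.array([2.0, 0.5, 0.1, -1.0])
alphas = attention_weights(scores)
print(alphas.sum())     # ~1.0: the weights form a distribution
print(alphas.argmax())  # 0: the highest score gets the largest weight
```

Each column of the plotted attention matrix is one such softmax output, which is why every decoder step distributes a total weight of 1 across the source words.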
# AlexNet in Keras In this notebook, we leverage an [AlexNet](https://papers.nips.cc/paper/4824-imagenet-classification-with-deep-convolutional-neural-networks)-like deep convolutional neural network to classify flowers into the 17 categories of the [Oxford Flowers](http://www.robots.ox.ac.uk/~vgg/data/flowers/17/) data set. Derived from [this earlier notebook](https://github.com/the-deep-learners/TensorFlow-LiveLessons/blob/master/notebooks/old/L3-3b__TFLearn_AlexNet.ipynb). ``` #load watermark %load_ext watermark %watermark -a 'Gopala KR' -u -d -v -p watermark,numpy,pandas,matplotlib,nltk,sklearn,tensorflow,theano,mxnet,chainer,seaborn,keras,tflearn ``` #### Set seed for reproducibility ``` import numpy as np np.random.seed(42) ``` #### Load dependencies ``` import keras from keras.models import Sequential from keras.layers import Dense, Dropout, Flatten, Conv2D, MaxPooling2D from keras.layers.normalization import BatchNormalization from keras.callbacks import TensorBoard # for part 3.5 on TensorBoard ``` #### Load *and preprocess* data ``` import tflearn.datasets.oxflower17 as oxflower17 X, Y = oxflower17.load_data(one_hot=True) ``` #### Design neural network architecture ``` model = Sequential() model.add(Conv2D(96, kernel_size=(11, 11), strides=(4, 4), activation='relu', input_shape=(224, 224, 3))) model.add(MaxPooling2D(pool_size=(3, 3), strides=(2, 2))) 
model.add(BatchNormalization()) model.add(Flatten()) model.add(Dense(4096, activation='tanh')) model.add(Dropout(0.5)) model.add(Dense(4096, activation='tanh')) model.add(Dropout(0.5)) model.add(Dense(17, activation='softmax')) model.summary() ``` #### Configure model ``` model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy']) ``` #### Configure TensorBoard (for part 5 of lesson 3) ``` tensorbrd = TensorBoard('logs/alexnet') ``` #### Train! ``` model.fit(X, Y, batch_size=64, epochs=100, verbose=1, validation_split=0.1, shuffle=True, callbacks=[tensorbrd]) ```
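As a sanity check on the feature-map sizes reported by `model.summary()`: with Keras's default `padding='valid'`, a convolution or pooling layer maps spatial size n to floor((n - k) / s) + 1. A small sketch tracing the stack above (the helper is illustrative, not part of Keras):

```python
def out_size(n, k, s=1):
    # 'valid' padding: floor((n - k) / s) + 1
    return (n - k) // s + 1

n = out_size(224, 11, 4)  # 11x11 conv, stride 4: 224 -> 54
n = out_size(n, 3, 2)     # 3x3 max-pool, stride 2: 54 -> 26
n = out_size(n, 5)        # 5x5 conv: 26 -> 22
n = out_size(n, 3, 2)     # 3x3 max-pool, stride 2: 22 -> 10
for _ in range(3):        # three 3x3 convs: 10 -> 8 -> 6 -> 4
    n = out_size(n, 3)
n = out_size(n, 3, 2)     # final 3x3 max-pool, stride 2: 4 -> 1
print(n)  # 1, so Flatten() yields a 1*1*384 = 384-element vector
```

Tracing sizes by hand like this is a quick way to catch shape mismatches before training starts.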
# T81-558: Applications of Deep Neural Networks * Instructor: [Jeff Heaton](https://sites.wustl.edu/jeffheaton/), School of Engineering and Applied Science, [Washington University in St. Louis](https://engineering.wustl.edu/Programs/Pages/default.aspx) * For more information visit the [class website](https://sites.wustl.edu/jeffheaton/t81-558/). **Module 10 Assignment: Time Series Neural Network** **Student Name: Your Name** # Assignment Instructions For this assignment you will use a LSTM to predict a time series contained in the data file **[series-31-spring-2019.csv](http://data.heatonresearch.com/data/t81-558/datasets/series-31-spring-2019.csv)**. The code that you will use to complete this will be similar to the sunspots example from the course module. This data set contains two columns: *time* and *value*. Create a LSTM network and train it with a sequence size of 5 and a prediction window of 1. If you use a different sequence size, you will not have the correct number of submission rows. Train the neural network, the data set is fairly simple and you should easily be able to get a RMSE below 1.0. FYI, I generate this datasets by fitting a cubic spline to a series of random points. This is a time series data set, do not randomize the order of the rows! For your training data use all *time* values less than 3000 and for test, use the remaining values greater than or equal to 3000. For the submit file, send me the results of your test evaluation. You should have two columns: *time* and *value*. The column *time* should be the time at the beginning of each predicted sequence. The *value* should be the next value that was predicted for each of your sequences. Your submission file will look similar to: # Helpful Functions You will see these at the top of every module and assignment. These are simply a set of reusable functions that we will make use of. Each of them will be explained as the semester progresses. 
They are explained in greater detail as the course progresses. Class 4 contains a complete overview of these functions. ``` import base64 import os import matplotlib.pyplot as plt import numpy as np import pandas as pd import requests from sklearn import preprocessing # Encode text values to dummy variables(i.e. [1,0,0],[0,1,0],[0,0,1] for red,green,blue) def encode_text_dummy(df, name): dummies = pd.get_dummies(df[name]) for x in dummies.columns: dummy_name = f"{name}-{x}" df[dummy_name] = dummies[x] df.drop(name, axis=1, inplace=True) # Encode text values to a single dummy variable. The new columns (which do not replace the old) will have a 1 # at every location where the original column (name) matches each of the target_values. One column is added for # each target value. def encode_text_single_dummy(df, name, target_values): for tv in target_values: l = list(df[name].astype(str)) l = [1 if str(x) == str(tv) else 0 for x in l] name2 = f"{name}-{tv}" df[name2] = l # Encode text values to indexes(i.e. [1],[2],[3] for red,green,blue). def encode_text_index(df, name): le = preprocessing.LabelEncoder() df[name] = le.fit_transform(df[name]) return le.classes_ # Encode a numeric column as zscores def encode_numeric_zscore(df, name, mean=None, sd=None): if mean is None: mean = df[name].mean() if sd is None: sd = df[name].std() df[name] = (df[name] - mean) / sd # Convert all missing values in the specified column to the median def missing_median(df, name): med = df[name].median() df[name] = df[name].fillna(med) # Convert all missing values in the specified column to the default def missing_default(df, name, default_value): df[name] = df[name].fillna(default_value) # Convert a Pandas dataframe to the x,y inputs that TensorFlow needs def to_xy(df, target): result = [] for x in df.columns: if x != target: result.append(x) # find out the type of the target column. Is it really this hard? 
:( target_type = df[target].dtypes target_type = target_type[0] if hasattr( target_type, '__iter__') else target_type # Encode to int for classification, float otherwise. TensorFlow likes 32 bits. if target_type in (np.int64, np.int32): # Classification dummies = pd.get_dummies(df[target]) return df[result].values.astype(np.float32), dummies.values.astype(np.float32) # Regression return df[result].values.astype(np.float32), df[[target]].values.astype(np.float32) # Nicely formatted time string def hms_string(sec_elapsed): h = int(sec_elapsed / (60 * 60)) m = int((sec_elapsed % (60 * 60)) / 60) s = sec_elapsed % 60 return f"{h}:{m:>02}:{s:>05.2f}" # Regression chart. def chart_regression(pred, y, sort=True): t = pd.DataFrame({'pred': pred, 'y': y.flatten()}) if sort: t.sort_values(by=['y'], inplace=True) plt.plot(t['y'].tolist(), label='expected') plt.plot(t['pred'].tolist(), label='prediction') plt.ylabel('output') plt.legend() plt.show() # Remove all rows where the specified column is +/- sd standard deviations def remove_outliers(df, name, sd): drop_rows = df.index[(np.abs(df[name] - df[name].mean()) >= (sd * df[name].std()))] df.drop(drop_rows, axis=0, inplace=True) # Encode a column to a range between normalized_low and normalized_high. def encode_numeric_range(df, name, normalized_low=-1, normalized_high=1, data_low=None, data_high=None): if data_low is None: data_low = min(df[name]) data_high = max(df[name]) df[name] = ((df[name] - data_low) / (data_high - data_low)) \ * (normalized_high - normalized_low) + normalized_low # This function submits an assignment. You can submit an assignment as much as you like, only the final # submission counts. The paramaters are as follows: # data - Pandas dataframe output. # key - Your student key that was emailed to you. # no - The assignment class number, should be 1 through 1. # source_file - The full path to your Python or IPYNB file. This must have "_class1" as part of its name. # . 
The number must match your assignment number. For example "_class2" for class assignment #2. def submit(data,key,no,source_file=None): if source_file is None and '__file__' not in globals(): raise Exception('Must specify a filename when a Jupyter notebook.') if source_file is None: source_file = __file__ suffix = '_class{}'.format(no) if suffix not in source_file: raise Exception('{} must be part of the filename.'.format(suffix)) with open(source_file, "rb") as image_file: encoded_python = base64.b64encode(image_file.read()).decode('ascii') ext = os.path.splitext(source_file)[-1].lower() if ext not in ['.ipynb','.py']: raise Exception("Source file is {} must be .py or .ipynb".format(ext)) r = requests.post("https://api.heatonresearch.com/assignment-submit", headers={'x-api-key':key}, json={'csv':base64.b64encode(data.to_csv(index=False).encode('ascii')).decode("ascii"), 'assignment': no, 'ext':ext, 'py':encoded_python}) if r.status_code == 200: print("Success: {}".format(r.text)) else: print("Failure: {}".format(r.text)) ``` # Assignment #10 Sample Code The following code provides a starting point for this assignment. ``` import numpy as np def to_sequences(seq_size, obs): x = [] y = [] for i in range(len(obs)-seq_size): #print(i) window = obs[i:(i+seq_size)] after_window = obs[i+seq_size] window = [[x] for x in window] #print("{} - {}".format(window,after_window)) x.append(window) y.append(after_window) return np.array(x),np.array(y) # This is your student key that I emailed to you at the beginnning of the semester. key = "ivYj3b2yJY2dvQ9MEQMLe5ECGenGc82p4dywJxtQ" # This is an example key and will not work. # You must also identify your source file. 
(modify for your local setup) # file='/resources/t81_558_deep_learning/assignment_yourname_class1.ipynb' # IBM Data Science Workbench # file='C:\\Users\\jeffh\\projects\\t81_558_deep_learning\\t81_558_class1_intro_python.ipynb' # Windows file='/Users/jheaton/projects/t81_558_deep_learning/assignments/assignment_yourname_class10.ipynb' # Mac/Linux # Read from time series file path = "./data/" filename = os.path.join(path,"series-31-spring-2019.csv") df = pd.read_csv(filename) # , index_col=False print("Starting file:") print(df[0:10]) print("Ending file:") print(df[-10:]) df_train = df[df['time']<3000] df_test = df[df['time']>=3000] spots_train = df_train['value'].tolist() spots_test = df_test['value'].tolist() print("Training set has {} observations.".format(len(spots_train))) print("Test set has {} observations.".format(len(spots_test))) SEQUENCE_SIZE = 5 x_train,y_train = to_sequences(SEQUENCE_SIZE,spots_train) x_test,y_test = to_sequences(SEQUENCE_SIZE,spots_test) print("Shape of training set: {}".format(x_train.shape)) print("Shape of test set: {}".format(x_test.shape)) #submit(source_file=file,data=df,key=key,no=1) from keras.preprocessing import sequence from keras.models import Sequential from keras.layers import Dense, Embedding from keras.layers import LSTM from keras.datasets import imdb from keras.callbacks import EarlyStopping import numpy as np print('Build model...') # Add assignment code here submit(source_file=file,data=submit_df,key=key,no=10) ```
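To see why a sequence size of 5 gives the shapes printed above, here is the same windowing logic as `to_sequences` applied to a tiny synthetic series:

```python
import numpy as np

def to_sequences(seq_size, obs):
    # Same logic as to_sequences above: sliding windows of length seq_size,
    # each paired with the single value that follows it
    x, y = [], []
    for i in range(len(obs) - seq_size):
        x.append([[v] for v in obs[i:i + seq_size]])
        y.append(obs[i + seq_size])
    return np.array(x), np.array(y)

x, y = to_sequences(5, list(range(1, 11)))  # synthetic series 1..10
print(x.shape)     # (5, 5, 1): 5 windows, 5 time steps, 1 feature
print(y.tolist())  # [6, 7, 8, 9, 10]: the value after each window
```

In general, a series of length N yields N - seq_size windows, which is why your submission file has fewer rows than the raw test set.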
``` import pyvisa from pylabnet.utils.logging.logger import LogClient from pylabnet.network.core.generic_server import GenericServer from pylabnet.hardware.power_meter.thorlabs_pm320e import Driver from pylabnet.hardware.polarization.polarization_control import Driver as MPC320 , paddle1, paddle2, paddle3 import time import numpy as np import matplotlib.pyplot as plt %matplotlib inline gpib_addres = 'USB0::0x1313::0x8022::M00579698::INSTR' # Instantiate logger = LogClient( host='139.180.129.96', port=6924, module_tag='Polarization Optimizer' ) power_meter = Driver( gpib_address=gpib_addres, logger=logger ) power_meter.set_range(2,'R100NW') pol_paddle = MPC320(b'38154354') #device = '38154354' #dummy setup # Output of pol. paddle is connected to output 2 of powermeter channel = 2 init_power = power_meter.get_power(channel) print(f"Initial power is {init_power} W.") paddles = [paddle1, paddle3, paddle2] pol_paddle.open() velocity = 100 #percentage from 0 to 100 pol_paddle.set_velocity(velocity) for paddle in paddles: home = pol_paddle.home(paddle) move = pol_paddle.move(paddle, 85) stepnum = 25 #number of dtep angles within range count = 0 itercount = 0 ang = [] angle = [] power = [] pos = [] iterationnum = 40 ang_paddles = [] power_paddles = [] for paddle in paddles: deviate = 170 #range of angle to scan stepsize = deviate/stepnum move_in = pol_paddle.move_rel(paddle, -deviate/2) while itercount < iterationnum: if itercount >= 1: move = pol_paddle.move(paddle, ang[itercount-1]-deviate/2) while count < stepnum: mover = pol_paddle.move_rel(paddle, stepsize) PosF = pol_paddle.get_angle(paddle) print(f"itercount = {itercount} count = {count}") print(f"Position after move relative is {PosF}") current_power = power_meter.get_power(channel) print(f"Current power is {current_power} W.") power.extend([current_power]) angle.extend([PosF]) count += 1 plt.title(f"paddle # {paddle.value} , iteration # {itercount}.") plt.plot(angle, power, "or") plt.show() maxindex = 
np.argmax(power) ang.extend([angle[maxindex]]) if itercount >= 1: if abs(ang[itercount] - ang[itercount-1]) < 0.05: print(f"converged to max power.") move = pol_paddle.move(paddle, angle[maxindex]) count = 0 itercount = 0 power = [] angle = [] break deviate = deviate/2 stepsize = deviate/stepnum itercount += 1 count = 0 ang_paddles.extend(ang) power_paddles.extend(power) ang = [] itercount = 0 PosF1 = pol_paddle.get_angle(paddles[0]) print(f"paddle = {paddles[0]} final_angle = {PosF1}") PosF2 = pol_paddle.get_angle(paddles[1]) print(f"paddle = {paddles[1]} final_angle = {PosF2}") PosF3 = pol_paddle.get_angle(paddles[2]) print(f"paddle = {paddles[2]} final_angle = {PosF3}") pol_paddle.close() ```
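The loop above is essentially a coarse-to-fine search per paddle: scan a range of angles, jump to the best reading, halve the range, and repeat until the best angle stops moving. A minimal standalone sketch against a synthetic power curve (the cos² response and all names below are made up; the real loop moves the paddle and reads the power meter instead):

```python
import numpy as np

def coarse_to_fine_max(power_fn, center, span, steps=25, iters=10, tol=0.05):
    # Scan `steps` angles across `span` around the current best angle,
    # recenter on the best reading, halve the span, stop on convergence
    best = center
    for _ in range(iters):
        angles = np.linspace(best - span / 2, best + span / 2, steps)
        new_best = angles[np.argmax([power_fn(a) for a in angles])]
        if abs(new_best - best) < tol:
            return new_best
        best, span = new_best, span / 2
    return best

# Hypothetical Malus-law-like response peaking at 40 degrees
power = lambda a: np.cos(np.radians(a - 40)) ** 2
best_angle = coarse_to_fine_max(power, center=85, span=170)
print(best_angle)  # an angle close to 40
```

Halving the span each iteration shrinks the step size geometrically, which is why the paddle angles converge after only a few scans.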
# Face Detection in OpenCV Object detection using Haar feature-based cascade classifiers is an effective detection method proposed by Paul Viola and Michael Jones in their 2001 paper, "Rapid Object Detection using a Boosted Cascade of Simple Features". It is a machine-learning-based approach in which a cascade function is trained from many positive and negative images and then used to detect objects in other images. Here we will work with face detection. Initially, the algorithm needs many positive images (images of faces) and negative images (images without faces) to train the classifier. We then need to extract features from them. For this, the Haar features shown in the image below are used. They are just like convolutional kernels: each feature is a single value obtained by subtracting the sum of the pixels under the white rectangle from the sum of the pixels under the black rectangle. ![title](haar_features.jpg) All possible sizes and locations of each kernel are then used to calculate a large number of features. (Just imagine how much computation this needs: even a 24x24 window results in over 160,000 features.) Each feature calculation requires the sum of the pixels under the white and black rectangles. To solve this, the authors introduced the integral image, which reduces the calculation of any pixel sum, however large the rectangle, to an operation involving just four values. This makes things very fast. But most of the features we calculate are irrelevant. For example, consider the image below. The top row shows two good features. The first feature selected seems to focus on the property that the region of the eyes is often darker than the region of the nose and cheeks. The second relies on the property that the eyes are darker than the bridge of the nose. But the same windows applied to the cheeks or anywhere else are irrelevant. So how do we select the best features out of 160,000+? This is achieved with AdaBoost. 
![title](haar.png) For this, we apply every feature to all the training images. For each feature, we find the best threshold that separates the faces from the non-faces. Obviously, there will be misclassifications, so we select the features with the minimum error rate, i.e., the features that best classify the face and non-face images. (The process is not quite this simple. Each image is given an equal weight at the beginning. After each classification, the weights of the misclassified images are increased, the same process is repeated, and new error rates and weights are calculated. This continues until the required accuracy or error rate is achieved, or the required number of features is found.) The final classifier is a weighted sum of these weak classifiers. They are called weak because each alone cannot classify the image, but together they form a strong classifier. The paper reports that even 200 features provide detection with 95% accuracy; the final setup had around 6,000 features. (Imagine a reduction from 160,000+ features to 6,000 — a big gain.) So now we take an image and, for each 24x24 window, apply the 6,000 features and check whether it is a face. Isn't that inefficient and time-consuming? Yes, but the authors have a good solution. In an image, most of the area is non-face region, so it is better to have a simple method to check whether a window is not a face region and, if it is not, discard it in a single shot and never process it again, focusing instead on regions where there may be a face. For this, the authors introduced the concept of a cascade of classifiers: instead of applying all 6,000 features to a window, the features are grouped into stages of classifiers that are applied one by one (the first few stages normally contain very few features). If a window fails the first stage, discard it. 
We don't consider the remaining features on it. If it passes, we apply the second stage of features and continue the process; a window that passes all stages is a face region. The authors' detector had 6,000+ features distributed over 38 stages, with 1, 10, 25, 25 and 50 features in the first five stages. (The two features in the image above were actually obtained as the best two features from AdaBoost.) According to the authors, on average only 10 of the 6,000+ features are evaluated per sub-window. ``` import numpy as np import cv2 import matplotlib.pyplot as plt %matplotlib inline # Load the pretrained frontal-face cascade and a test image face_cascade = cv2.CascadeClassifier('haarcascade_frontalface_alt.xml') img = cv2.imread('george-w-bush.jpg') gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY) plt.imshow(cv2.cvtColor(img, cv2.COLOR_BGR2RGB)) # Detect faces: scale factor 1.3, at least 5 neighboring detections per face faces = face_cascade.detectMultiScale(gray, 1.3, 5) for (x,y,w,h) in faces: cv2.rectangle(img,(x,y),(x+w,y+h),(255,0,0),2) roi_gray = gray[y:y+h, x:x+w] roi_color = img[y:y+h, x:x+w] plt.imshow(cv2.cvtColor(img, cv2.COLOR_BGR2RGB)) ```
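The integral image mentioned earlier is easy to sketch with plain NumPy: precompute cumulative sums once, and any rectangle sum thereafter costs just four lookups (OpenCV exposes the same idea as `cv2.integral`; the code below is only an illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(24, 24))

# Integral image with a zero top row/left column: ii[r, c] = img[:r, :c].sum()
ii = np.zeros((25, 25), dtype=np.int64)
ii[1:, 1:] = img.cumsum(axis=0).cumsum(axis=1)

def rect_sum(ii, r0, c0, r1, c1):
    # Sum over img[r0:r1, c0:c1] using only four integral-image lookups
    return ii[r1, c1] - ii[r0, c1] - ii[r1, c0] + ii[r0, c0]

print(rect_sum(ii, 5, 3, 12, 9) == img[5:12, 3:9].sum())  # True
```

A Haar feature value is then just two (or three) such `rect_sum` calls subtracted from each other, independent of rectangle size.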
# CS5340 Lecture 8: HMMs # Lecturer: Harold Soh (harold@comp.nus.edu.sg) Graduate TAs: Abdul Fatir Ansari and Chen Kaiqi (AY19/20) This notebook is a supplement to Lecture 8 of CS5340: Uncertainty Modeling in AI The material uses the hmmlearn package and is based on the tutorial provided by the hmmlearn package (https://hmmlearn.readthedocs.io/en/latest/tutorial.html) To install hmmlearn, please refer to: https://github.com/hmmlearn/hmmlearn. Typically, to install: ```pip install --upgrade --user hmmlearn``` ``` %matplotlib inline import numpy as np import matplotlib.pyplot as plt from hmmlearn import hmm from scipy.optimize import linear_sum_assignment from sklearn.metrics.pairwise import euclidean_distances # for printing np.set_printoptions(formatter={'float': lambda x: "{0:0.2f}".format(x)}) ``` ## Creating our HMM ## Let us first create a Hidden Markov Model (we call HMM_A) where we know all the parameters ``` # the start probabilities (pi) startprob = np.array([0.6, 0.3, 0.1, 0.0]) # The transition matrix (A) # each row represents the transition probability from one component to the others transmat = np.array([[0.7, 0.3, 0.0, 0.0], [0.4, 0.1, 0.3, 0.2], [0.1, 0.1, 0.7, 0.1], [0.4, 0.0, 0.1, 0.5]]) # Next comes the emission probabilities (\phi) # The means of each component means = np.array([[0.0, 5.0], [5.0, 5.0], [0.0, 0.0], [5.0, 0.0]]) # The covariance of each component var_param = 1.0 # you can play with this parameter to increase/decrease the spread of the observations covars = var_param * np.tile(np.identity(2), (4, 1, 1)) # Build our HMM with the parameters above HMM_A = hmm.GaussianHMM(n_components=4, covariance_type="full") # Instead of fitting it from the data, we directly set the estimated # parameters, the means and covariance of the components HMM_A.startprob_ = startprob HMM_A.transmat_ = transmat HMM_A.means_ = means HMM_A.covars_ = covars ``` ## Sample from our HMM ## We can then sample trajectories from HMM. 
``` # Generate one long sequence X, Z = HMM_A.sample(20) # Plot the sampled data plt.plot(X[:, 0], X[:, 1], ".-", label="observations", ms=6, mfc="orange", alpha=0.7) # Indicate the component numbers rooms = ["bedroom", "toilet", "living room", "kitchen"] for i, m in enumerate(means): plt.text(m[0], m[1], '%s' % rooms[i], size=17, horizontalalignment='center', bbox=dict(alpha=.7, facecolor='w')) plt.legend(loc='best') plt.show() ``` ## Learn a new HMM from data ## Here, we will learn a new HMM model (HMM_B) using data sampled from our known HMM model above. ``` # generate multiple sequences M = 100 # number of sequences N = 10 # each sequence length X, Z = HMM_A.sample(N) L = len(X) for i in range(M-1): Xtemp, Ztemp = HMM_A.sample(N) X = np.concatenate([X, Xtemp]) Z = np.concatenate([Z, Ztemp]) L = np.append(L, len(Xtemp)) HMM_B = hmm.GaussianHMM(n_components=4, covariance_type="full", n_iter=100, verbose=True) HMM_B.fit(X,L) ``` ### After Learning ### Let's check if the model has learnt the correct parameters. *Note*: the component indices may not match; you want to verify that you can find a matching component for each of the means. We will use the Hungarian algorithm to try to find best matches. 
``` print("Component Means") print("Learnt") print(HMM_B.means_) print("True") print(HMM_A.means_) # we can try to match the components using the Hungarian algorithm cost = euclidean_distances( HMM_A.means_, HMM_B.means_) row_ind, col_ind = linear_sum_assignment(cost) # print(row_ind) # print(col_ind) def remapMeans(A, ind): B = np.array(A) for i in range(B.shape[0]): B[i,:] = A[ind[i], :] return B def remapMat(A, ind): B = np.array(A) for i in range(B.shape[0]): B[i,:] = A[ind[i], ind] return B means_remap = remapMeans(HMM_B.means_, col_ind) print("Learnt Means") print(means_remap) print("True Means") print(HMM_A.means_) plt.scatter(means_remap[:,0], means_remap[:,1]) plt.scatter(HMM_A.means_[:,0], HMM_A.means_[:,1], marker='+' ) plt.legend(["Learnt", "True"]) print("Transition Probabilities") print("Learnt A") trans_remap = remapMat(HMM_B.transmat_, col_ind) print(trans_remap) print("True A") print(HMM_A.transmat_) plt.subplot(121) plt.imshow(trans_remap, vmin=0.0, vmax=1.0) plt.title("Learnt Transitions") plt.colorbar() plt.subplot(122) plt.imshow(HMM_A.transmat_, vmin=0.0, vmax=1.0) plt.title("True Transitions") plt.colorbar() # predict the latent components using the relearned model Zpred = HMM_B.predict(X) print(Zpred) print(Z) ```
# Fairness Metrics This notebook implements the statistical fairness metrics from: *Towards the Right Kind of Fairness in AI* by Boris Ruf and Marcin Detyniecki (2021) https://arxiv.org/abs/2102.08453 Example with the `german-risk-scoring.csv` dataset. Contributors: Xavier Lioneton & Francis Wolinski ## Imports ``` # imports import numpy as np import pandas as pd from pandas.api.types import is_numeric_dtype from sklearn.linear_model import LogisticRegression from sklearn.metrics import confusion_matrix from sklearn.model_selection import train_test_split from sklearn.preprocessing import OneHotEncoder from IPython.display import display, Markdown ``` ## Data Load ``` # dataset data = pd.read_csv('german-risk-scoring.csv') data.info() # target data['Cost Matrix(Risk)'].value_counts() # Personal status and sex data["Personal status and sex"].value_counts() ``` ## Data Prep ``` # create sex column data["sex"] = data["Personal status and sex"].apply(lambda x : x.split(":")[0]) # create X=features, y=target X = data.drop(columns = 'Cost Matrix(Risk)') y = data['Cost Matrix(Risk)'].map({"Good Risk": 1, "Bad Risk": 0}) # type modifications cols_cat = [ 'Status of existing checking account', 'Credit history', 'Purpose', 'Savings account/bonds', 'Present employment since', 'Personal status and sex', 'Other debtors / guarantors', 'Property', 'Other installment plans', 'Housing', 'Job', 'Telephone', 'foreign worker', 'sex' ] cols_num = [ 'Duration in month', 'Credit amount', 'Installment rate in percentage of disposable income', 'Present residence since', 'Age in years', 'Number of existing credits at this bank', 'Number of people being liable to provide maintenance for', ] for col in cols_cat: data[col] = data[col].astype(str) for col in cols_num: data[col] = data[col].astype(float) cols = cols_cat + cols_num # unique values of categorical columns X[cols_cat].nunique() # all to numbers encoder = OneHotEncoder() X_cat = encoder.fit_transform(X[cols_cat]).toarray() X_num =
X[cols_num] X_prep = np.concatenate((X_num, X_cat), axis=1) X_prep.shape # data prepared cols = data[cols_num].columns.tolist() + encoder.get_feature_names(input_features=X[cols_cat].columns).tolist() data_prep = pd.DataFrame(X_prep, columns=cols) data_prep.shape # data prepared data_prep.head() ``` ## Machine Learning ``` # split train test X_train, X_test, y_train, y_test = train_test_split(data_prep, y, test_size=0.2, random_state=42) X_train = X_train.copy() X_test = X_test.copy() print(X_train.shape, X_test.shape, y_train.shape, y_test.shape) ``` ### Train Model ``` # train model clf = LogisticRegression(random_state=0, n_jobs=8, max_iter=500) clf.fit(X_train, y_train) ``` ### Confusion matrix ``` # Schema of confusion matrix df = pd.DataFrame([['True negatives (TN)', 'False positives (FP)'], ['False negatives (FN)', 'True positives (TP)']], index=['Y = 0', 'Y = 1'], columns=['Ŷ = 0', 'Ŷ = 1']) df = df.reindex(['Y = 1', 'Y = 0']) df = df[['Ŷ = 1', 'Ŷ = 0']] display(Markdown('**Schema of confusion matrix**')) display(df) # function pretty_confusion_mattrix() def pretty_confusion_mattrix(y_label, y_pred, title=None): """Pretty print the confusion matrix computed by scikit-learn""" _TN, _FP, _FN, _TP = confusion_matrix(y_label, y_pred).flatten() array = [[_TP, _FN], [_FP, _TN]] df = pd.DataFrame(array, index=['Y = 1', 'Y = 0'], columns=['Ŷ = 1', 'Ŷ = 0']) if title is not None: display(Markdown(title)) display(df) # test dataset y_pred = clf.predict(X_test) pretty_confusion_mattrix(y_test, y_pred, title='**Confusion matrix for the test dataset**') # function pretty_confusion_mattrix_by_subgroup() def pretty_confusion_mattrix_by_subgroup(X, col, X_test, y_label, y_pred, q=4): """Pretty print the confusion matrices by subgroup X: dataset col: column used for splitting in subgroups X_test: test dataset y_label: target for test dataset y_pred: predictions for test dataset q: number of quantile bins used for a numerical column""" # if col is numeric, use quantile cat = pd.qcut(X[col], q) if
is_numeric_dtype(X[col]) else X[col] # select test data cat = cat.loc[X_test.index] # switch y_pred to Series so as to be able to select by subgroup y_pred = pd.Series(y_pred, index=y_label.index) # loop on subgroups for value in sorted(cat.unique()): X_select = X_test.loc[cat == value] pretty_confusion_mattrix(y_label.loc[X_select.index], y_pred.loc[X_select.index], title=f'**Subgroup**: {col} = {value}') pretty_confusion_mattrix_by_subgroup(X, 'sex', X_test, y_test, y_pred) pretty_confusion_mattrix_by_subgroup(X, 'Age in years', X_test, y_test, y_pred) ``` ### Metrics derived from confusion matrix **Actual positives** This number is the sum of the true positives and the false negatives, which can be viewed as missed true positives. $P = TP + FN$ **Actual negatives** This number is the sum of the true negatives and the false positives, which again can be viewed as missed true negatives. $N = TN + FP$ **Base rate** This number, sometimes also called the prevalence rate, represents the proportion of actual positives with respect to the entire data set. $BR = \frac{P}{P + N}$ **Positive rate** This number is the overall rate of positively classified instances, including both correct and incorrect decisions. $PR = \frac{TP + FP}{P + N}$ **Negative rate** This number is the ratio of negative classifications, again irrespective of whether the decisions were correct or incorrect. $NR = \frac{TN + FN}{P + N}$ **Accuracy** This number is the ratio of the correctly classified instances (positive and negative) over all decisions. $ACC = \frac{TP + TN}{P + N}$ **Misclassification rate** This number is the ratio of the misclassified instances over all decisions. $MR = \frac{FN + FP}{P + N}$ **True positive rate (recall)** This number describes the proportion of correctly classified positive instances. $TPR = \frac{TP}{P}$ **True negative rate** This number describes the proportion of correctly classified negative instances.
$TNR = \frac{TN}{N}$ **False positive rate** This number denotes the proportion of actual negatives which was falsely classified as positive. $FPR = \frac{FP}{N}$ **False negative rate (silence)** This number describes the proportion of actual positives which was misclassified as negative. $FNR = \frac{FN}{P}$ **False discovery rate (noise)** This number describes the share of misclassified positive classifications of all positive predictions. $FDR = \frac{FP}{TP + FP}$ **Positive predicted value (precision)** This number describes the ratio of samples which were correctly classified as positive from all the positive predictions. $PPV = \frac{TP}{TP + FP}$ **False omission rate** This number describes the proportion of false negative predictions of all negative predictions. $FOR = \frac{FN}{TN + FN}$ **Negative predicted value** This number describes the ratio of samples which were correctly classified as negative from all the negative predictions. $NPV = \frac{TN}{TN + FN}$ ``` # function pretty_fairness_confusion_mattrix() def pretty_fairness_confusion_mattrix(y_label, y_pred, title=None): """Pretty print fairness confusion matrix y_label: target for test dataset y_pred: predictions for test dataset title: string to display in Markdown""" # compute fairness metrics _TN, _FP, _FN, _TP = confusion_matrix(y_label, y_pred).flatten() _P = _TP + _FN _N = _FP + _TN _BR = _P / (_P + _N) _PR = (_TP + _FP) / (_P + _N) _NR = (_TN + _FN) / (_P + _N) _TPR = _TP / _P _TNR = _TN / _N _FDR = _FP / (_TP + _FP) _FOR = _FN / (_TN + _FN) # build the output dataframe array = [[_TP, _FN, f'TPR = {_TPR:.2f}'], [_FP, _TN, f'TNR = {_TNR:.2f}'], [f'FDR = {_FDR:.2f}', f'FOR = {_FOR:.2f}', f'BR = {_BR:.2f}'], [f'PR = {_PR:.2f}', f'NR = {_NR:.2f}', ''], ] df = pd.DataFrame(array, index=['Y = 1', 'Y = 0', '', ' '], columns=['Ŷ = 1', 'Ŷ = 0', '']) if title is not None: display(Markdown(title)) display(df.style.set_table_styles([{'selector': 'td', 'props':[('text-align', 'center')]}, {'selector': 'th',
'props': [('text-align', 'center')]}], overwrite=False)) pretty_fairness_confusion_mattrix(y_test, y_pred, title='**Fairness confusion matrix**') # function pretty_fairness_confusion_mattrix_by_subgroup() def pretty_fairness_confusion_mattrix_by_subgroup(X, col, X_test, y_label, y_pred, q=4): """Pretty print fairness confusion matrix by subgroup X: dataset col: column used for splitting in subgroups X_test: test dataset y_label: target for test dataset y_pred: predictions for test dataset q: number of quantile bins used for a numerical column""" # if col is numeric, use quantile cat = pd.qcut(X[col], q) if is_numeric_dtype(X[col]) else X[col] # select test data cat = cat.loc[X_test.index] # switch y_pred to Series so as to be able to select by subgroup y_pred = pd.Series(y_pred, index=y_label.index) # loop on subgroups for value in sorted(cat.unique()): X_select = X_test.loc[cat == value] pretty_fairness_confusion_mattrix(y_label.loc[X_select.index], y_pred.loc[X_select.index], title=f'**Subgroup**: {col} = {value}') pretty_fairness_confusion_mattrix_by_subgroup(X, 'sex', X_test, y_test, y_pred) pretty_fairness_confusion_mattrix_by_subgroup(X, 'Age in years', X_test, y_test, y_pred) ```
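As a quick sanity check of the rate definitions above, the metrics can be computed by hand from a small made-up confusion matrix. The numbers below are illustrative only, not taken from the credit dataset:

```
# Hand-built confusion matrix: TP=40, FN=10, FP=5, TN=45,
# so P=50 actual positives and N=50 actual negatives.
TP, FN, FP, TN = 40, 10, 5, 45
P, N = TP + FN, TN + FP

BR  = P / (P + N)             # base rate
ACC = (TP + TN) / (P + N)     # accuracy
MR  = (FN + FP) / (P + N)     # misclassification rate (complements ACC)
TPR = TP / P                  # true positive rate (recall)
FPR = FP / N                  # false positive rate
PPV = TP / (TP + FP)          # positive predicted value (precision)

print(BR, ACC, TPR, FPR)  # 0.5 0.85 0.8 0.1
```

Note that accuracy and the misclassification rate always sum to 1, since every decision is either correct or incorrect.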
# Demonstrate the path of high probability and the orthogonal path on the pyloric rhythm for experimental data ``` # Note: this application requires a more recent version of dill. # Other applications in this repository will require 0.2.7.1 # You might have to switch between versions to run all applications. !pip install --upgrade dill import numpy as np import matplotlib.pylab as plt import delfi.distribution as dd import time from copy import deepcopy import sys sys.path.append("model/setup") sys.path.append("model/simulator") sys.path.append("model/inference") sys.path.append("model/visualization") sys.path.append("model/utils") import sys; sys.path.append('../') from common import col, svg, plot_pdf, samples_nd import netio import viz import importlib import viz_samples import train_utils as tu import matplotlib as mpl %load_ext autoreload %autoreload 2 PANEL_A = 'illustration/panel_a.svg' PANEL_B = 'svg/31D_panel_b.svg' PANEL_C = 'svg/31D_panel_c.svg' PANEL_C2 = 'svg/31D_panel_c2.svg' PANEL_D = 'svg/31D_panel_d.svg' PANEL_X1params = 'svg/31D_panel_App1_params.svg' PANEL_X2params = 'svg/31D_panel_App2_params.svg' PANEL_X1ss = 'svg/31D_panel_App1_ss.svg' PANEL_X2ss = 'svg/31D_panel_App2_ss.svg' PANEL_X = 'svg/31D_panel_x.svg' ``` ### Load samples ``` params = netio.load_setup('train_31D_R1_BigPaper') filedir = "results/31D_samples/pyloricsamples_31D_noNaN_3.npz" pilot_data, trn_data, params_mean, params_std = tu.load_trn_data_normalize(filedir, params) print('We use', len(trn_data[0]), 'training samples.') stats = trn_data[1] stats_mean = np.mean(stats, axis=0) stats_std = np.std(stats, axis=0) ``` ### Load network ``` date_today = '1908208' import dill as pickle with open('results/31D_nets/191001_seed1_Exper11deg.pkl', 'rb') as file: inf_SNPE_MAF, log, params = pickle.load(file) params = netio.load_setup('train_31D_R1_BigPaper') prior = netio.create_prior(params, log=True) dimensions = np.sum(params.use_membrane) + 7 lims =
np.asarray([-np.sqrt(3)*np.ones(dimensions), np.sqrt(3)*np.ones(dimensions)]).T prior = netio.create_prior(params, log=True) params_mean = prior.mean params_std = prior.std from find_pyloric import merge_samples, params_are_bounded labels_ = viz.get_labels(params) prior_normalized = dd.Uniform(-np.sqrt(3)*np.ones(dimensions), np.sqrt(3)*np.ones(dimensions), seed=params.seed) ``` ### Load experimental data ``` summstats_experimental = np.load('results/31D_experimental/190807_summstats_prep845_082_0044.npz')['summ_stats'] ``` ### Calculate posterior ``` from find_pyloric import merge_samples, params_are_bounded all_paths = [] all_posteriors = [] labels_ = viz.get_labels(params) posterior_MAF = inf_SNPE_MAF.predict([summstats_experimental]) # given the current sample, we now predict the posterior given our simulation outcome. Note that this could just be overfitted. ``` ### Load samples ``` samples_MAF = merge_samples("results/31D_samples/02_cond_vals", name='conductance_params') samples_MAF = np.reshape(samples_MAF, (1000*2520, 31)) print(np.shape(samples_MAF)) ``` ### Load start and end point ``` num_to_watch = 3 infile = 'results/31D_pairs/similar_and_good/sample_pair_{}.npz'.format(num_to_watch) # 0 is shitty npz = np.load(infile) start_point = npz['params1'] end_point = npz['params2'] start_point_unnorm = start_point * params_std + params_mean end_point_unnorm = end_point * params_std + params_mean ratio = end_point_unnorm / start_point_unnorm run_true = (ratio > np.ones_like(ratio) * 2.0) | (ratio < np.ones_like(ratio) / 2.0) print(run_true) ``` ### Calculate the high-probability path ``` from HighProbabilityPath import HighProbabilityPath # number of basis functions used num_basis_functions = 2 # number of timesteps num_path_steps = 80 high_p_path = HighProbabilityPath(num_basis_functions, num_path_steps, use_sine_square=True) #print('Starting to calculate path') #high_p_path.set_start_end(start_point, end_point) #high_p_path.set_pdf(posterior_MAF, dimensions) 
#high_p_path.find_path(posterior_MAF, prior=prior_normalized, multiply_posterior=1, # non_linearity=None, non_lin_param=3.0) #high_p_path.get_travelled_distance() #print('Finished calculating path') #np.savez('results/31D_paths/high_p_path.npz', high_p_path=high_p_path) high_p_path = np.load('results/31D_paths/high_p_path.npz', allow_pickle=True)['high_p_path'].tolist() lims = np.asarray([-np.sqrt(3)*np.ones(dimensions), np.sqrt(3)*np.ones(dimensions)]).T ``` # Panel B: experimental data Note: the full data is not contained in the repo. Therefore, this figure can not be created. ``` npz = np.load('results/31D_experimental/trace_data_845_082_0044.npz') t = npz['t'] PD_spikes = npz['PD_spikes'] LP_spikes = npz['LP_spikes'] PY_spikes = npz['PY_spikes'] pdn = npz['pdn'] lpn = npz['lpn'] pyn = npz['pyn'] start_index = 219500 + 2100 end_index = 246500 + 2100 # 32000 height_offset = 200 shown_t = t[end_index] - t[start_index] time_len = shown_t / 0.025 * 1000 dt = t[1] - t[0] import matplotlib.patches as mp with mpl.rc_context(fname='../.matplotlibrc'): fig, ax = plt.subplots(1,1,figsize=(2.87, 2.08*3/4)) # (2.87, 2.08*3/4) ax.plot(t[start_index:end_index], 2.5+pdn[start_index:end_index]*0.007, c=col['GT'], lw=0.8) ax.plot(t[start_index:end_index], 1.2+lpn[start_index:end_index]*0.25, c=col['GT'], lw=0.8) ax.plot(t[start_index:end_index], -0.1+pyn[start_index:end_index]*0.013, c=col['GT'], lw=0.8) linew = 0.4 headl = 0.06 headw = 0.16 linelen = 0.17 circlefact = 0.8 # period arrow height1 = 3.2 plt.arrow(t[start_index]+0.6, height1, 1.15, 0, shape='full', head_width=headw, head_length=headl, length_includes_head=True, color='k', lw=linew) plt.arrow(t[start_index]+1.75, height1, -1.15, 0, shape='full', head_width=headw, head_length=headl, length_includes_head=True, color='k', lw=linew) plt.plot([t[start_index]+0.6, t[start_index]+0.6], [height1-linelen,height1+linelen], c='k', lw=linew*1.5) plt.plot([t[start_index]+1.75, t[start_index]+1.75], 
[height1-linelen,height1+linelen], c='k', lw=linew*1.5) #patch =mp.Ellipse((t[start_index]+1.2, 3.65), 0.2*circlefact,0.6*circlefact, color='lightgray') #ax.add_patch(patch) # delay arrow height2 = 1.64 plt.arrow(t[start_index]+0.6, height2, 0.48, 0, shape='full', head_width=headw, head_length=headl, length_includes_head=True, color='k', lw=linew) plt.arrow(t[start_index]+1.08, height2, -0.48, 0, shape='full', head_width=headw, head_length=headl, length_includes_head=True, color='k', lw=linew) plt.plot([t[start_index]+0.6, t[start_index]+0.6], [height2-linelen,height2+linelen], c='k', lw=linew*1.5) plt.plot([t[start_index]+1.08, t[start_index]+1.08], [height2-linelen,height2+linelen], c='k', lw=linew*1.5) #patch =mp.Ellipse((t[start_index]+0.94, 2.1), 0.2*circlefact,0.6*circlefact, color='lightgray') #ax.add_patch(patch) # gap arrow plt.arrow(t[start_index]+1.98, height2, 0.27, 0, shape='full', head_width=headw, head_length=headl, length_includes_head=True, color='k', lw=linew) plt.arrow(t[start_index]+2.25, height2, -0.27, 0, shape='full', head_width=headw, head_length=headl, length_includes_head=True, color='k', lw=linew) plt.plot([t[start_index]+1.98, t[start_index]+1.98], [height2-linelen,height2+linelen], c='k', lw=linew*1.5) plt.plot([t[start_index]+2.25, t[start_index]+2.25], [height2-linelen,height2+linelen], c='k', lw=linew*1.5) #patch =mp.Ellipse((t[start_index]+2.1, 2.1), 0.2*circlefact,0.6*circlefact, color='lightgray') #ax.add_patch(patch) # duration arrow height4 = 0.44 plt.arrow(t[start_index]+1.33, height4, 0.43, 0, shape='full', head_width=headw, head_length=headl, length_includes_head=True, color='k', lw=linew) plt.arrow(t[start_index]+1.76, height4, -0.43, 0, shape='full', head_width=headw, head_length=headl, length_includes_head=True, color='k', lw=linew) plt.plot([t[start_index]+1.33, t[start_index]+1.33], [height4-linelen,height4+linelen], c='k', lw=linew*1.5) plt.plot([t[start_index]+1.76, t[start_index]+1.76], 
[height4-linelen,height4+linelen], c='k', lw=linew*1.5) #patch =mp.Ellipse((t[start_index]+1.55, 0.9), radius=0.2, color='lightgray') #ax.add_patch(patch) ax.spines['right'].set_visible(False) ax.spines['top'].set_visible(False) ax.spines['bottom'].set_visible(False) ax.spines['left'].set_visible(False) ax.axes.get_yaxis().set_ticks([]) ax.axes.get_xaxis().set_ticks([]) ax.get_yaxis().set_visible(False) ax.set_ylim([-0.95, 4.0]) duration = 0.5 number_of_timesteps = int(duration / dt) t_scale = np.linspace(t[start_index], t[start_index + number_of_timesteps], 2) ax.plot(t_scale, -0.8 * np.ones_like(t_scale), c='k', lw=1.0) #plt.savefig(PANEL_B, facecolor='None', transparent=True) plt.show() ``` # Panel C: posterior ``` from decimal import Decimal all_labels = [] for dim_i in range(31): if dim_i > len(params_mean) - 7.5: # synapses if dim_i == 24: all_labels.append([r'$\mathdefault{0.01}$ ', r'$\mathdefault{10000}\;\;\;\;$ ']) else: all_labels.append([r'$\;\;\mathdefault{0.01}$', r'$\mathdefault{1000}\;\;\;\;$ ']) else: # membrane conductances num_after_digits = -int(np.log10(lims[dim_i, 1] * params_std[dim_i] + params_mean[dim_i])) if num_after_digits > 2: num_after_digits=2 labels = [round(Decimal((lims[dim_i, num_tmp] * params_std[dim_i] + params_mean[dim_i]) / 0.628e-3), num_after_digits) for num_tmp in range(2)] new_labels = [] counter=0 for l in labels: if counter == 0: new_labels.append(r'$\mathdefault{'+str(l)+'}$') else: new_labels.append(r'$\mathdefault{'+str(l)+'}\;\;\;$ ') counter+=1 all_labels.append(new_labels) import matplotlib.patheffects as pe with mpl.rc_context(fname='../.matplotlibrc'): labels_ = viz.get_labels_8pt(params) labels_[9] += '' fig, axes = samples_nd(samples=[samples_MAF[:1260000], high_p_path.path_coords], subset=[2,4,10,19,24,25,26,28], limits=lims, ticks=lims, tick_labels=all_labels, fig_size=(17.0*0.2435,17.0*0.2435), labels=labels_, points=[start_point, end_point], scatter_offdiag={'rasterized':True, 'alpha':1.0}, 
points_offdiag={'marker':'o', 'markeredgecolor':'w', 'markersize':3.6, 'markeredgewidth':0.5, 'path_effects':[pe.Stroke(linewidth=1.2, foreground='k'), pe.Normal()]}, points_colors=[col['CONSISTENT1'], col['CONSISTENT2']], samples_colors=[col['SNPE'], 'white'], diag=['kde', 'None'], upper=['hist', 'plot'], hist_offdiag={'bins':50}, plot_offdiag={'linewidth': 1.6, 'path_effects':[pe.Stroke(linewidth=2.4, foreground='k'), pe.Normal()]}) # plt.savefig(PANEL_C, facecolor='None', transparent=True) plt.show() ``` ### Evaluate whether samples along path are identical according to Prinz ``` pyloric_sim = netio.create_simulators(params) summ_stats = netio.create_summstats(params) from viz import plot_posterior_over_path high_p_path_mod = deepcopy(high_p_path) # plots for the samples num_cols = 2 num_rows = 5 scale = 'dist' # set this to 'dist' if you want to x-axis to be scale according to the travelled distance num_steps = num_cols*num_rows if scale == 'dist': steps = np.linspace(0, high_p_path_mod.dists[-1], num_steps) else: steps = np.linspace(0, 1.0, num_steps) ``` # Inlet for Panel C ``` dimensions_to_use = [24,25] high_p_path_mod = deepcopy(high_p_path) num_paths = 10 path_start_positions = np.linspace(0, high_p_path_mod.dists[-1], num_paths) high_p_indizes = high_p_path_mod.find_closest_index_to_dist(path_start_positions) use_high_p_index = 45 high_p_indizes = [use_high_p_index] from OrthogonalPath import OrthogonalPath dimensions_to_use = [24,25] high_p_path_mod = deepcopy(high_p_path) start_point_ind = 23# 10 # ortho_path = OrthogonalPath(high_p_path_mod.path_coords, start_point_ind) # ortho_path.find_orthogonal_path(posterior_MAF, max_distance=high_p_path_mod.dists[-1]/27, dim=dimensions, prior=prior_normalized) # ortho_path.get_travelled_distance() # print(len(ortho_path.path_coords)) #np.savez('results/31D_paths/ortho_path.npz', ortho_path=ortho_path) ortho_path = np.load('results/31D_paths/ortho_path.npz', allow_pickle=True)['ortho_path'].tolist() 
ortho_path_mod = deepcopy(ortho_path) num_path_pos = 2 path_start_positions = np.linspace(0, ortho_path_mod.dists[-1], num_path_pos) ortho_p_indizes = ortho_path_mod.find_closest_index_to_dist(path_start_positions) ortho_p_indizes = [ortho_p_indizes[-1]] labels_ = viz.get_labels_8pt(params) labels_[9] += '' color_mixture = 0.5 * (np.asarray(list(col['CONSISTENT1'])) + np.asarray(list(col['CONSISTENT2']))) p1g = high_p_path.path_coords[int(high_p_indizes[0])] p1b = ortho_path.path_coords[int(ortho_p_indizes[0])] with mpl.rc_context(fname='../.matplotlibrc'): _ = viz.plot_single_marginal_pdf(pdf1=posterior_MAF, prior=prior, resolution=200, lims=lims, samples=np.transpose(samples_MAF), figsize=(1.5, 1.5), ticks=False, no_contours=True, labels_params=labels_, start_point=high_p_path.start_point, end_point=high_p_path.end_point, path1=high_p_path.path_coords, display_axis_lims=True, path2=ortho_path.path_coords, pointscale=0.5, p1g=p1g, start_col=col['CONSISTENT1'], end_col=col['CONSISTENT2'], p1b=p1b, current_col1=color_mixture,current_col=col['CONSISTENT2'], current_col2=col['INCONSISTENT'], path_steps1=1, path_steps2=1, dimensions=dimensions_to_use) #plt.savefig(PANEL_C2, facecolor='None', transparent=True, dpi=300, bbox_inches='tight') plt.show() ``` # Panel D ``` dimensions_to_use = [6,7] high_p_path_mod = deepcopy(high_p_path) num_paths = 5 path_start_positions = np.linspace(0, high_p_path_mod.dists[-1], num_paths) high_p_indizes = high_p_path_mod.find_closest_index_to_dist(path_start_positions) indizes_show = high_p_indizes high_p_indizes.pop(2) high_p_indizes.pop(1) current_point = high_p_path_mod.path_coords[high_p_indizes] high_p_indizes = np.flip(high_p_indizes) print(high_p_indizes) high_p_indizes = [79, 0, use_high_p_index] prior.mean prior.std labels_ = viz.get_labels_8pt(params) high_p_path_mod = deepcopy(high_p_path) seeds = [8, 8, 8, 8, 8] offsets = 39000 * np.ones_like(seeds) #offsets[0] = 47000 offsets[1] = 83500 # 75500 offsets[2] = 29000 # 21000 
offsets[3] = 40500 # 40500 dimensions_to_use2D = [6,7] with mpl.rc_context(fname='../.matplotlibrc'): fig = viz.viz_path_and_samples_abstract_twoRows(posterior_MoG=posterior_MAF, high_p_path=high_p_path_mod, ortho_path=ortho_path_mod, prior=prior, lims=lims, samples=samples_MAF, figsize=(5.87, 3.0), offsets=offsets, linescale=1.5, ticks=False, no_contours=True, labels_params=labels_, start_point=high_p_path.start_point, end_point=high_p_path.end_point, ortho_p_indizes=ortho_p_indizes, high_p_indizes=high_p_indizes, mycols=col, time_len=int(time_len), path1=high_p_path_mod.path_coords, path_steps1=1, path2=ortho_path_mod.path_coords, path_steps2=1, dimensions_to_use=dimensions_to_use2D, #ax=ax, seeds=seeds, indizes=[0], hyperparams=params, date_today='190910_80start', case='ortho_p', save_fig=False) #plt.savefig(PANEL_D, facecolor='None', transparent=True, dpi=300, bbox_inches='tight') plt.show() ``` # Assemble figure ``` color_mixture = 0.5 * (np.asarray(list(col['CONSISTENT1'])) + np.asarray(list(col['CONSISTENT2']))) import time import IPython.display as IPd def svg(img): IPd.display(IPd.HTML('<img src="{}" / >'.format(img, time.time()))) from svgutils.compose import * # > Inkscape pixel is 1/90 of an inch, other software usually uses 1/72. 
# > http://www.inkscapeforum.com/viewtopic.php?f=6&t=5964 svg_scale = 1.25 # set this to 1.25 for Inkscape, 1.0 otherwise factor_svg=5.5 # Panel letters in Helvetica Neue, 12pt, Medium kwargs_text = {'size': '12pt', 'font': 'Arial', 'weight': '800'} kwargs_consistent = {'size': '10pt', 'font': 'Arial', 'weight': '500', 'color': '#AF99EF'} kwargs_consistent1 = {'size': '10pt', 'font': 'Arial', 'weight': '500', 'color': '#9E7DD5'} kwargs_inconsistent = {'size': '10pt', 'font': 'Arial', 'weight': '500', 'color': '#D73789'} kwargs_text8pt = {'size': '7.7pt', 'font': 'Arial'} startx1 = 492 startx2 = 594 starty1 = 204 starty2 = 307 endx1 = 642 endx2 = 673 endy1 = 159 endy2 = 191 deltax1 = endx1-startx1 deltax2 = endx2-startx2 deltay1 = endy1-starty1 deltay2 = endy2-starty2 sizefactor = 1.0 dshift = 0.5*factor_svg f = Figure("20.3cm", "9.1cm", Line(((startx1,starty1+dshift),(startx1+deltax1*sizefactor,starty1+dshift+deltay1*sizefactor)), width=1.5, color='grey'), Line(((startx2,starty2+dshift),(startx2+deltax2*sizefactor,starty2+dshift+deltay2*sizefactor)), width=1.5, color='grey'), Panel( SVG(PANEL_A).scale(svg_scale).scale(0.9).move(0, 15*factor_svg), Text("a", -2.7*factor_svg, 16.9*factor_svg-dshift, **kwargs_text), ).move(2.7*factor_svg, -14.4*factor_svg+dshift), Panel( SVG(PANEL_B).scale(svg_scale).move(0*factor_svg, 0*factor_svg), Text("b", -6.0*factor_svg, 5*factor_svg-dshift, **kwargs_text), Text("PD", -1.*factor_svg+0.0, 8.2*factor_svg, **kwargs_text8pt), Text("LP", -1.*factor_svg+0.0, 13.4*factor_svg, **kwargs_text8pt), Text("PY", -1.*factor_svg+0.0, 18.6*factor_svg, **kwargs_text8pt), #Text("Period", 15.5*factor_svg+0.0, 2.8*factor_svg, **kwargs_text8pt), #Text("Delay", 11.3*factor_svg+0.0, 9.6*factor_svg, **kwargs_text8pt), #Text("Gap", 27.5*factor_svg+0.0, 9.6*factor_svg, **kwargs_text8pt), #Text("Duration", 19.2*factor_svg+0.0, 13.8*factor_svg, **kwargs_text8pt), Text("1", 17.45*factor_svg+0.0, 4.5*factor_svg, **kwargs_text8pt), Text("2", 
13.1*factor_svg+0.0, 10.6*factor_svg, **kwargs_text8pt), Text("3", 28.75*factor_svg+0.0, 10.6*factor_svg, **kwargs_text8pt), Text("4", 21.7*factor_svg+0.0, 15.4*factor_svg, **kwargs_text8pt), #Text("50 mV", 39.4*factor_svg, 25*factor_svg, **kwargs_text8pt), #Text("50 mV", 32.0*factor_svg, 4.8*factor_svg, **kwargs_text8pt), Text("500 ms", 3.2*factor_svg, 22.5*factor_svg, **kwargs_text8pt), ).move(37.8*factor_svg, -2.5*factor_svg+dshift), Panel( SVG(PANEL_C).scale(svg_scale).move(-10*factor_svg,0*factor_svg), Text("c", -11.5*factor_svg, 2.7*factor_svg-dshift, **kwargs_text), ).move(90.1*factor_svg, -0.2*factor_svg+dshift), Panel( SVG(PANEL_C2).scale(svg_scale).move(-10*factor_svg,0*factor_svg), #Text("1", 3.1*factor_svg, 5.2*factor_svg, **kwargs_consistent1), Text("1", 11.2*factor_svg, 11.3*factor_svg, **kwargs_consistent1), Text("2", 7.5*factor_svg, 6.7*factor_svg, **kwargs_inconsistent), ).move(90*factor_svg, 35.2*factor_svg+dshift), Panel( SVG(PANEL_D).scale(svg_scale).move(0*factor_svg, 0*factor_svg), Text("d", 0*factor_svg, 3.5*factor_svg-dshift, **kwargs_text), #Text("1", 41.5*factor_svg, 4*factor_svg, **kwargs_consistent), Text("1", 4*factor_svg, 23.5*factor_svg, **kwargs_consistent1), Text("2", 41.5*factor_svg, 23.5*factor_svg, **kwargs_inconsistent), Text("50 mV", 68.4*factor_svg, 4*factor_svg, **kwargs_text8pt), ).move(0*factor_svg, 23.2*factor_svg+dshift) ) !mkdir -p fig f.save("fig/fig8_stg_31D.svg") svg('fig/fig8_stg_31D.svg') ```
``` #export from fastai.basics import * from fastai.text.core import * from fastai.text.data import * from fastai.text.models.core import * from fastai.text.models.awdlstm import * from fastai.callback.rnn import * from fastai.callback.progress import * #hide from nbdev.showdoc import * #default_exp text.learner ``` # Learner for the text application > All the functions necessary to build `Learner` suitable for transfer learning in NLP The most important functions of this module are `language_model_learner` and `text_classifier_learner`. They will help you define a `Learner` using a pretrained model. See the [text tutorial](http://docs.fast.ai/tutorial.text) for examples of use. ## Loading a pretrained model In text, to load a pretrained model, we need to adapt the embeddings of the vocabulary used for the pre-training to the vocabulary of our current corpus. ``` #export def match_embeds(old_wgts, old_vocab, new_vocab): "Convert the embedding in `old_wgts` to go from `old_vocab` to `new_vocab`." bias, wgts = old_wgts.get('1.decoder.bias', None), old_wgts['0.encoder.weight'] wgts_m = wgts.mean(0) new_wgts = wgts.new_zeros((len(new_vocab),wgts.size(1))) if bias is not None: bias_m = bias.mean(0) new_bias = bias.new_zeros((len(new_vocab),)) old_o2i = old_vocab.o2i if hasattr(old_vocab, 'o2i') else {w:i for i,w in enumerate(old_vocab)} for i,w in enumerate(new_vocab): idx = old_o2i.get(w, -1) new_wgts[i] = wgts[idx] if idx>=0 else wgts_m if bias is not None: new_bias[i] = bias[idx] if idx>=0 else bias_m old_wgts['0.encoder.weight'] = new_wgts if '0.encoder_dp.emb.weight' in old_wgts: old_wgts['0.encoder_dp.emb.weight'] = new_wgts.clone() old_wgts['1.decoder.weight'] = new_wgts.clone() if bias is not None: old_wgts['1.decoder.bias'] = new_bias return old_wgts ``` For words in `new_vocab` that don't have a corresponding match in `old_vocab`, we use the mean of all pretrained embeddings.
``` wgts = {'0.encoder.weight': torch.randn(5,3)} new_wgts = match_embeds(wgts.copy(), ['a', 'b', 'c'], ['a', 'c', 'd', 'b']) old,new = wgts['0.encoder.weight'],new_wgts['0.encoder.weight'] test_eq(new[0], old[0]) test_eq(new[1], old[2]) test_eq(new[2], old.mean(0)) test_eq(new[3], old[1]) #hide #With bias wgts = {'0.encoder.weight': torch.randn(5,3), '1.decoder.bias': torch.randn(5)} new_wgts = match_embeds(wgts.copy(), ['a', 'b', 'c'], ['a', 'c', 'd', 'b']) old_w,new_w = wgts['0.encoder.weight'],new_wgts['0.encoder.weight'] old_b,new_b = wgts['1.decoder.bias'], new_wgts['1.decoder.bias'] test_eq(new_w[0], old_w[0]) test_eq(new_w[1], old_w[2]) test_eq(new_w[2], old_w.mean(0)) test_eq(new_w[3], old_w[1]) test_eq(new_b[0], old_b[0]) test_eq(new_b[1], old_b[2]) test_eq(new_b[2], old_b.mean(0)) test_eq(new_b[3], old_b[1]) #export def _get_text_vocab(dls): vocab = dls.vocab if isinstance(vocab, L): vocab = vocab[0] return vocab #export def load_ignore_keys(model, wgts): "Load `wgts` in `model` ignoring the names of the keys, just taking parameters in order" sd = model.state_dict() for k1,k2 in zip(sd.keys(), wgts.keys()): sd[k1].data = wgts[k2].data.clone() return model.load_state_dict(sd) #export def _rm_module(n): t = n.split('.') for i in range(len(t)-1, -1, -1): if t[i] == 'module': t.pop(i) break return '.'.join(t) #export #For previous versions compatibility, remove for release def clean_raw_keys(wgts): keys = list(wgts.keys()) for k in keys: t = k.split('.module') if f'{_rm_module(k)}_raw' in keys: del wgts[k] return wgts #export #For previous versions compatibility, remove for release def load_model_text(file, model, opt, with_opt=None, device=None, strict=True): "Load `model` from `file` along with `opt` (if available, and if `with_opt`)" distrib_barrier() if isinstance(device, int): device = torch.device('cuda', device) elif device is None: device = 'cpu' state = torch.load(file, map_location=device) hasopt = set(state)=={'model', 'opt'} model_state = 
state['model'] if hasopt else state get_model(model).load_state_dict(clean_raw_keys(model_state), strict=strict) if hasopt and ifnone(with_opt,True): try: opt.load_state_dict(state['opt']) except: if with_opt: warn("Could not load the optimizer state.") elif with_opt: warn("Saved file doesn't contain an optimizer state.") #export @log_args(but_as=Learner.__init__) @delegates(Learner.__init__) class TextLearner(Learner): "Basic class for a `Learner` in NLP." def __init__(self, dls, model, alpha=2., beta=1., moms=(0.8,0.7,0.8), **kwargs): super().__init__(dls, model, moms=moms, **kwargs) self.add_cbs([ModelResetter(), RNNRegularizer(alpha=alpha, beta=beta)]) def save_encoder(self, file): "Save the encoder to `file` in the model directory" if rank_distrib(): return # don't save if child proc encoder = get_model(self.model)[0] if hasattr(encoder, 'module'): encoder = encoder.module torch.save(encoder.state_dict(), join_path_file(file, self.path/self.model_dir, ext='.pth')) def load_encoder(self, file, device=None): "Load the encoder `file` from the model directory, optionally ensuring it's on `device`" encoder = get_model(self.model)[0] if device is None: device = self.dls.device if hasattr(encoder, 'module'): encoder = encoder.module distrib_barrier() wgts = torch.load(join_path_file(file,self.path/self.model_dir, ext='.pth'), map_location=device) encoder.load_state_dict(clean_raw_keys(wgts)) self.freeze() return self def load_pretrained(self, wgts_fname, vocab_fname, model=None): "Load a pretrained model and adapt it to the data vocabulary."
old_vocab = Path(vocab_fname).load() new_vocab = _get_text_vocab(self.dls) distrib_barrier() wgts = torch.load(wgts_fname, map_location = lambda storage,loc: storage) if 'model' in wgts: wgts = wgts['model'] #Just in case the pretrained model was saved with an optimizer wgts = match_embeds(wgts, old_vocab, new_vocab) load_ignore_keys(self.model if model is None else model, clean_raw_keys(wgts)) self.freeze() return self #For previous versions compatibility. Remove at release @delegates(load_model_text) def load(self, file, with_opt=None, device=None, **kwargs): if device is None: device = self.dls.device if self.opt is None: self.create_opt() file = join_path_file(file, self.path/self.model_dir, ext='.pth') load_model_text(file, self.model, self.opt, device=device, **kwargs) return self ``` Adds a `ModelResetter` and an `RNNRegularizer` with `alpha` and `beta` to the callbacks, the rest is the same as `Learner` init. This `Learner` adds functionality to the base class: ``` show_doc(TextLearner.load_pretrained) ``` `wgts_fname` should point to the weights of the pretrained model and `vocab_fname` to the vocabulary used to pretrain it. ``` show_doc(TextLearner.save_encoder) ``` The model directory is `Learner.path/Learner.model_dir`. ``` show_doc(TextLearner.load_encoder) ``` ## Language modeling predictions For language modeling, the predict method is quite different from the other applications, which is why it needs its own subclass.
``` #export def decode_spec_tokens(tokens): "Decode the special tokens in `tokens`" new_toks,rule,arg = [],None,None for t in tokens: if t in [TK_MAJ, TK_UP, TK_REP, TK_WREP]: rule = t elif rule is None: new_toks.append(t) elif rule == TK_MAJ: new_toks.append(t[:1].upper() + t[1:].lower()) rule = None elif rule == TK_UP: new_toks.append(t.upper()) rule = None elif arg is None: try: arg = int(t) except: rule = None else: if rule == TK_REP: new_toks.append(t * arg) else: new_toks += [t] * arg return new_toks test_eq(decode_spec_tokens(['xxmaj', 'text']), ['Text']) test_eq(decode_spec_tokens(['xxup', 'text']), ['TEXT']) test_eq(decode_spec_tokens(['xxrep', '3', 'a']), ['aaa']) test_eq(decode_spec_tokens(['xxwrep', '3', 'word']), ['word', 'word', 'word']) #export @log_args(but_as=TextLearner.__init__) class LMLearner(TextLearner): "Add functionality to `TextLearner` when dealing with a language model" def predict(self, text, n_words=1, no_unk=True, temperature=1., min_p=None, no_bar=False, decoder=decode_spec_tokens, only_last_word=False): "Return `text` and the `n_words` that come after" self.model.reset() idxs = idxs_all = self.dls.test_dl([text]).items[0].to(self.dls.device) if no_unk: unk_idx = self.dls.vocab.index(UNK) for _ in (range(n_words) if no_bar else progress_bar(range(n_words), leave=False)): with self.no_bar(): preds,_ = self.get_preds(dl=[(idxs[None],)]) res = preds[0][-1] if no_unk: res[unk_idx] = 0. if min_p is not None: if (res >= min_p).float().sum() == 0: warn(f"There is no item with probability >= {min_p}, try a lower value.") else: res[res < min_p] = 0.
if temperature != 1.: res.pow_(1 / temperature) idx = torch.multinomial(res, 1).item() idxs = idxs_all = torch.cat([idxs_all, idxs.new([idx])]) if only_last_word: idxs = idxs[-1][None] num = self.dls.train_ds.numericalize tokens = [num.vocab[i] for i in idxs_all if num.vocab[i] not in [BOS, PAD]] sep = self.dls.train_ds.tokenizer.sep return sep.join(decoder(tokens)) @delegates(Learner.get_preds) def get_preds(self, concat_dim=1, **kwargs): return super().get_preds(concat_dim=1, **kwargs) show_doc(LMLearner, title_level=3) show_doc(LMLearner.predict) ``` The words are picked randomly among the predictions, depending on the probability of each index. `no_unk` means we never pick the `UNK` token, `temperature` is applied to the predictions, and if `min_p` is passed, we don't consider the indices with a probability lower than it. Set `no_bar` to `True` if you don't want any progress bar, and you can pass along a custom `decoder` to process the predicted tokens. ## `Learner` convenience functions ``` #export from fastai.text.models.core import _model_meta #export def _get_text_vocab(dls): vocab = dls.vocab if isinstance(vocab, L): vocab = vocab[0] return vocab #export @log_args(to_return=True, but_as=Learner.__init__) @delegates(Learner.__init__) def language_model_learner(dls, arch, config=None, drop_mult=1., backwards=False, pretrained=True, pretrained_fnames=None, **kwargs): "Create a `Learner` with a language model from `dls` and `arch`."
vocab = _get_text_vocab(dls) model = get_language_model(arch, len(vocab), config=config, drop_mult=drop_mult) meta = _model_meta[arch] learn = LMLearner(dls, model, loss_func=CrossEntropyLossFlat(), splitter=meta['split_lm'], **kwargs) url = 'url_bwd' if backwards else 'url' if pretrained or pretrained_fnames: if pretrained_fnames is not None: fnames = [learn.path/learn.model_dir/f'{fn}.{ext}' for fn,ext in zip(pretrained_fnames, ['pth', 'pkl'])] else: if url not in meta: warn("There are no pretrained weights for that architecture yet!") return learn model_path = untar_data(meta[url] , c_key='model') fnames = [list(model_path.glob(f'*.{ext}'))[0] for ext in ['pth', 'pkl']] learn = learn.load_pretrained(*fnames) return learn ``` You can use the `config` to customize the architecture used (change the values from `awd_lstm_lm_config` for this), `pretrained` will use fastai's pretrained model for this `arch` (if available) or you can pass specific `pretrained_fnames` containing your own pretrained model and the corresponding vocabulary. All other arguments are passed to `Learner`. ``` path = untar_data(URLs.IMDB_SAMPLE) df = pd.read_csv(path/'texts.csv') dls = TextDataLoaders.from_df(df, path=path, text_col='text', is_lm=True, valid_col='is_valid') learn = language_model_learner(dls, AWD_LSTM) ``` You can then use the `.predict` method to generate new text. ``` learn.predict('This movie is about', n_words=20) ``` By default the entire sentence is fed again to the model after each predicted word; this little trick improves the quality of the generated text. If you want to feed only the last word, specify the `only_last_word` argument.
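The sampling step inside `predict` — zeroing out entries below `min_p`, then sharpening or flattening the distribution with `temperature` before drawing an index — can be sketched in plain Python. This is only an illustration of the logic, not fastai's tensor code:

```python
import random

def sample_next(probs, temperature=1.0, min_p=None):
    """Mimic LMLearner.predict's sampling: drop low-probability entries
    (unless that would drop everything), apply temperature, then draw."""
    if min_p is not None and any(p >= min_p for p in probs):
        probs = [p if p >= min_p else 0.0 for p in probs]
    if temperature != 1.0:
        probs = [p ** (1 / temperature) for p in probs]
    # draw an index proportionally to the (unnormalized) weights
    r = random.uniform(0, sum(probs))
    acc = 0.0
    for i, p in enumerate(probs):
        acc += p
        if p > 0 and r <= acc:
            return i
    return max(range(len(probs)), key=lambda i: probs[i])

# with min_p=0.1 only the middle entry survives, so it is always chosen
idx = sample_next([0.05, 0.9, 0.05], temperature=0.5, min_p=0.1)  # → 1
```

A `temperature` below 1 raises each probability to a power greater than 1, concentrating mass on likely tokens; above 1 it flattens the distribution and makes the output more varied.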
``` learn.predict('This movie is about', n_words=20, only_last_word=True) #export @log_args(to_return=True, but_as=Learner.__init__) @delegates(Learner.__init__) def text_classifier_learner(dls, arch, seq_len=72, config=None, backwards=False, pretrained=True, drop_mult=0.5, n_out=None, lin_ftrs=None, ps=None, max_len=72*20, y_range=None, **kwargs): "Create a `Learner` with a text classifier from `dls` and `arch`." vocab = _get_text_vocab(dls) if n_out is None: n_out = get_c(dls) assert n_out, "`n_out` is not defined, and could not be inferred from data, set `dls.c` or pass `n_out`" model = get_text_classifier(arch, len(vocab), n_out, seq_len=seq_len, config=config, y_range=y_range, drop_mult=drop_mult, lin_ftrs=lin_ftrs, ps=ps, max_len=max_len) meta = _model_meta[arch] learn = TextLearner(dls, model, splitter=meta['split_clas'], **kwargs) url = 'url_bwd' if backwards else 'url' if pretrained: if url not in meta: warn("There are no pretrained weights for that architecture yet!") return learn model_path = untar_data(meta[url], c_key='model') fnames = [list(model_path.glob(f'*.{ext}'))[0] for ext in ['pth', 'pkl']] learn = learn.load_pretrained(*fnames, model=learn.model[0]) learn.freeze() return learn ``` You can use the `config` to customize the architecture used (change the values from `awd_lstm_clas_config` for this), `pretrained` will use fastai's pretrained model for this `arch` (if available). `drop_mult` is a global multiplier applied to control all dropouts. `n_out` is usually inferred from the `dls` but you may pass it. The model uses a `SentenceEncoder`, which means the texts are passed `seq_len` tokens at a time, and will only compute the gradients on the last `max_len` steps. `lin_ftrs` and `ps` are passed to `get_text_classifier`. All other arguments are passed to `Learner`.
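The `SentenceEncoder` behaviour described above — feeding the text `seq_len` tokens at a time — amounts to sliding fixed-size windows over the token sequence. A minimal sketch of that chunking (illustrative only; the real encoder also handles padding and carries the LSTM state between windows):

```python
def windows(tokens, seq_len):
    """Split a token sequence into consecutive chunks of at most seq_len items."""
    return [tokens[i:i + seq_len] for i in range(0, len(tokens), seq_len)]

chunks = windows(list(range(10)), seq_len=4)
# → [[0, 1, 2, 3], [4, 5, 6, 7], [8, 9]]
```

With the defaults (`seq_len=72`, `max_len=72*20`), a long review is processed in many such windows, but gradients only flow through roughly the last `max_len` tokens of the sequence.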
``` path = untar_data(URLs.IMDB_SAMPLE) df = pd.read_csv(path/'texts.csv') dls = TextDataLoaders.from_df(df, path=path, text_col='text', label_col='label', valid_col='is_valid') learn = text_classifier_learner(dls, AWD_LSTM) ``` ## Show methods - ``` #export @typedispatch def show_results(x: LMTensorText, y, samples, outs, ctxs=None, max_n=10, **kwargs): if ctxs is None: ctxs = get_empty_df(min(len(samples), max_n)) for i,l in enumerate(['input', 'target']): ctxs = [b.show(ctx=c, label=l, **kwargs) for b,c,_ in zip(samples.itemgot(i),ctxs,range(max_n))] ctxs = [b.show(ctx=c, label='pred', **kwargs) for b,c,_ in zip(outs.itemgot(0),ctxs,range(max_n))] display_df(pd.DataFrame(ctxs)) return ctxs #export @typedispatch def show_results(x: TensorText, y, samples, outs, ctxs=None, max_n=10, trunc_at=150, **kwargs): if ctxs is None: ctxs = get_empty_df(min(len(samples), max_n)) samples = L((s[0].truncate(trunc_at),*s[1:]) for s in samples) ctxs = show_results[object](x, y, samples, outs, ctxs=ctxs, max_n=max_n, **kwargs) display_df(pd.DataFrame(ctxs)) return ctxs #export @typedispatch def plot_top_losses(x: TensorText, y:TensorCategory, samples, outs, raws, losses, trunc_at=150, **kwargs): rows = get_empty_df(len(samples)) samples = L((s[0].truncate(trunc_at),*s[1:]) for s in samples) for i,l in enumerate(['input', 'target']): rows = [b.show(ctx=c, label=l, **kwargs) for b,c in zip(samples.itemgot(i),rows)] outs = L(o + (TitledFloat(r.max().item()), TitledFloat(l.item())) for o,r,l in zip(outs, raws, losses)) for i,l in enumerate(['predicted', 'probability', 'loss']): rows = [b.show(ctx=c, label=l, **kwargs) for b,c in zip(outs.itemgot(i),rows)] display_df(pd.DataFrame(rows)) ``` ## Export - ``` #hide from nbdev.export import notebook2script notebook2script() ```
## Stable Model Training #### NOTES: * This is "NoGAN" based training, described in the DeOldify readme. * This model prioritizes stable and reliable renderings. It does particularly well on portraits and landscapes. It's not as colorful as the artistic model. ``` import os os.environ['CUDA_VISIBLE_DEVICES']='0' import fastai from fastai import * from fastai.vision import * from fastai.callbacks.tensorboard import * from fastai.vision.gan import * from fasterai.generators import * from fasterai.critics import * from fasterai.dataset import * from fasterai.loss import * from fasterai.save import * from PIL import Image, ImageDraw, ImageFont from PIL import ImageFile ``` ## Setup ``` path = Path('data/imagenet/ILSVRC/Data/CLS-LOC') path_hr = path path_lr = path/'bandw' proj_id = 'StableModel' gen_name = proj_id + '_gen' pre_gen_name = gen_name + '_0' crit_name = proj_id + '_crit' name_gen = proj_id + '_image_gen' path_gen = path/name_gen TENSORBOARD_PATH = Path('data/tensorboard/' + proj_id) nf_factor = 2 pct_start = 1e-8 def get_data(bs:int, sz:int, keep_pct:float): return get_colorize_data(sz=sz, bs=bs, crappy_path=path_lr, good_path=path_hr, random_seed=None, keep_pct=keep_pct) def get_crit_data(classes, bs, sz): src = ImageList.from_folder(path, include=classes, recurse=True).random_split_by_pct(0.1, seed=42) ll = src.label_from_folder(classes=classes) data = (ll.transform(get_transforms(max_zoom=2.), size=sz) .databunch(bs=bs).normalize(imagenet_stats)) return data def create_training_images(fn,i): dest = path_lr/fn.relative_to(path_hr) dest.parent.mkdir(parents=True, exist_ok=True) img = PIL.Image.open(fn).convert('LA').convert('RGB') img.save(dest) def save_preds(dl): i=0 names = dl.dataset.items for b in dl: preds = learn_gen.pred_batch(batch=b, reconstruct=True) for o in preds: o.save(path_gen/names[i].name) i += 1 def save_gen_images(learn_gen): if path_gen.exists(): shutil.rmtree(path_gen) path_gen.mkdir(exist_ok=True) data_gen = get_data(bs=bs, sz=sz, 
keep_pct=0.085) save_preds(data_gen.fix_dl) PIL.Image.open(path_gen.ls()[0]) ``` ## Create black and white training images Only runs if the directory isn't already created. ``` if not path_lr.exists(): il = ImageList.from_folder(path_hr) parallel(create_training_images, il.items) ``` ## Pre-train generator #### NOTE Most of the training takes place here in pretraining for NoGAN. The goal here is to take the generator as far as possible with conventional training, as that is much easier to control and obtain glitch-free results compared to GAN training. ### 64px ``` bs=88 sz=64 keep_pct=1.0 data_gen = get_data(bs=bs, sz=sz, keep_pct=keep_pct) learn_gen = gen_learner_wide(data=data_gen, gen_loss=FeatureLoss(), nf_factor=nf_factor) learn_gen.callback_fns.append(partial(ImageGenTensorboardWriter, base_dir=TENSORBOARD_PATH, name='GenPre')) learn_gen.fit_one_cycle(1, pct_start=0.8, max_lr=slice(1e-3)) learn_gen.save(pre_gen_name) learn_gen.unfreeze() learn_gen.fit_one_cycle(1, pct_start=pct_start, max_lr=slice(3e-7, 3e-4)) learn_gen.save(pre_gen_name) ``` ### 128px ``` bs=20 sz=128 keep_pct=1.0 learn_gen.data = get_data(sz=sz, bs=bs, keep_pct=keep_pct) learn_gen.unfreeze() learn_gen.fit_one_cycle(1, pct_start=pct_start, max_lr=slice(1e-7,1e-4)) learn_gen.save(pre_gen_name) ``` ### 192px ``` bs=8 sz=192 keep_pct=0.50 learn_gen.data = get_data(sz=sz, bs=bs, keep_pct=keep_pct) learn_gen.unfreeze() learn_gen.fit_one_cycle(1, pct_start=pct_start, max_lr=slice(5e-8,5e-5)) learn_gen.save(pre_gen_name) ``` ## Repeatable GAN Cycle #### NOTE Best results so far have been based on repeating the cycle below a few times (about 5-8?), until diminishing returns are hit (no improvement in image quality). Each time you repeat the cycle, you want to increment that old_checkpoint_num by 1 so that new check points don't overwrite the old. 
``` old_checkpoint_num = 0 checkpoint_num = old_checkpoint_num + 1 gen_old_checkpoint_name = gen_name + '_' + str(old_checkpoint_num) gen_new_checkpoint_name = gen_name + '_' + str(checkpoint_num) crit_old_checkpoint_name = crit_name + '_' + str(old_checkpoint_num) crit_new_checkpoint_name = crit_name + '_' + str(checkpoint_num) ``` ### Save Generated Images ``` bs=8 sz=192 learn_gen = gen_learner_wide(data=data_gen, gen_loss=FeatureLoss(), nf_factor=nf_factor).load(gen_old_checkpoint_name, with_opt=False) save_gen_images(learn_gen) ``` ### Pretrain Critic ##### Only need full pretraining of critic when starting from scratch. Otherwise, just finetune! ``` if old_checkpoint_num == 0: bs=64 sz=128 learn_gen=None gc.collect() data_crit = get_crit_data([name_gen, 'test'], bs=bs, sz=sz) data_crit.show_batch(rows=3, ds_type=DatasetType.Train, imgsize=3) learn_critic = colorize_crit_learner(data=data_crit, nf=256) learn_critic.callback_fns.append(partial(LearnerTensorboardWriter, base_dir=TENSORBOARD_PATH, name='CriticPre')) learn_critic.fit_one_cycle(6, 1e-3) learn_critic.save(crit_old_checkpoint_name) bs=16 sz=192 data_crit = get_crit_data([name_gen, 'test'], bs=bs, sz=sz) data_crit.show_batch(rows=3, ds_type=DatasetType.Train, imgsize=3) learn_critic = colorize_crit_learner(data=data_crit, nf=256).load(crit_old_checkpoint_name, with_opt=False) learn_critic.callback_fns.append(partial(LearnerTensorboardWriter, base_dir=TENSORBOARD_PATH, name='CriticPre')) learn_critic.fit_one_cycle(4, 1e-4) learn_critic.save(crit_new_checkpoint_name) ``` ### GAN ``` learn_crit=None learn_gen=None gc.collect() lr=2e-5 sz=192 bs=5 data_crit = get_crit_data([name_gen, 'test'], bs=bs, sz=sz) learn_crit = colorize_crit_learner(data=data_crit, nf=256).load(crit_new_checkpoint_name, with_opt=False) learn_gen = gen_learner_wide(data=data_gen, gen_loss=FeatureLoss(), nf_factor=nf_factor).load(gen_old_checkpoint_name, with_opt=False) switcher = partial(AdaptiveGANSwitcher, critic_thresh=0.65) learn
= GANLearner.from_learners(learn_gen, learn_crit, weights_gen=(1.0,1.5), show_img=False, switcher=switcher, opt_func=partial(optim.Adam, betas=(0.,0.9)), wd=1e-3) learn.callback_fns.append(partial(GANDiscriminativeLR, mult_lr=5.)) learn.callback_fns.append(partial(GANTensorboardWriter, base_dir=TENSORBOARD_PATH, name='GanLearner', visual_iters=100)) learn.callback_fns.append(partial(GANSaveCallback, learn_gen=learn_gen, filename=gen_new_checkpoint_name, save_iters=100)) ``` #### Instructions: Find the checkpoint just before where glitches start to be introduced. This is all very new so you may need to play around with just how far you go here with keep_pct. ``` learn.data = get_data(sz=sz, bs=bs, keep_pct=0.03) learn_gen.freeze_to(-1) learn.fit(1,lr) ```
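The checkpoint bookkeeping above — incrementing `old_checkpoint_num` on each cycle so new checkpoints never overwrite old ones — can be wrapped in a small helper. This is a convenience sketch, not part of the original notebook:

```python
def cycle_checkpoint_names(gen_name, crit_name, old_checkpoint_num):
    """Return the four checkpoint names used by one GAN cycle iteration."""
    new_num = old_checkpoint_num + 1
    return {
        'gen_old': f'{gen_name}_{old_checkpoint_num}',
        'gen_new': f'{gen_name}_{new_num}',
        'crit_old': f'{crit_name}_{old_checkpoint_num}',
        'crit_new': f'{crit_name}_{new_num}',
    }

# mirrors the notebook's naming: proj_id + '_gen', proj_id + '_crit'
names = cycle_checkpoint_names('StableModel_gen', 'StableModel_crit', 0)
```

Calling it again with `old_checkpoint_num=1` for the next cycle yields `..._1`/`..._2` names, so each repetition of the cycle keeps the previous weights on disk.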
Air Quality Index 1) To identify the most polluted city 2) Create a model to predict the quality of air ``` import numpy as np import pandas as pd import matplotlib.pyplot as plt import seaborn as sns df=pd.read_csv('https://raw.githubusercontent.com/tulseebisen/ML_Projects/main/AirQualityIndex/city_day.csv',parse_dates = ["Date"]) df df.head() df.tail() sns.heatmap(df.isnull(),yticklabels=False,cbar=False,cmap='viridis') print(df.isnull().sum()) (df.isnull().sum()/df.shape[0]*100).sort_values(ascending=False) df.describe() # but it gives the information about all the cities altogether #converting dtype of date column to datetime df['Date']=df['Date'].apply(pd.to_datetime) #setting date column as index df.set_index('Date',inplace=True) df.columns ``` Filling the NaN values present in the pollutants with the mean (city wise) ``` df.iloc[:, 1:13] = df.groupby("City").transform(lambda x: x.fillna(x.mean())) df sns.heatmap(df.isnull(),yticklabels=False,cbar=False,cmap='viridis') df.iloc[:, 1:13]=df.fillna(df.mean()) df sns.heatmap(df.isnull(),yticklabels=False,cbar=False,cmap='viridis') ``` The AQI calculation uses 7 measures: PM2.5, PM10, SO2, NOx, NH3, CO and O3. -->For PM2.5, PM10, SO2, NOx and NH3 the average value in the last 24 hrs is used, with the condition of having at least 16 values. -->For CO and O3 the maximum value in the last 8 hrs is used. -->Each measure is converted into a Sub-Index based on pre-defined groups. -->Sometimes measures are not available due to lack of measuring or lack of required data points. -->Final AQI is the maximum Sub-Index with the condition that at least one of PM2.5 and PM10 should be available and at least three out of the seven should be available.
## calculating Sub-Index ``` # PM10 Sub-Index calculation def get_PM10_subindex(x): if x <= 50: return x elif x > 50 and x <= 100: return x elif x > 100 and x <= 250: return 100 + (x - 100) * 100 / 150 elif x > 250 and x <= 350: return 200 + (x - 250) elif x > 350 and x <= 430: return 300 + (x - 350) * 100 / 80 elif x > 430: return 400 + (x - 430) * 100 / 80 else: return 0 df["PM10_SubIndex"] = df["PM10"].astype(int).apply(lambda x: get_PM10_subindex(x)) # PM2.5 Sub-Index calculation def get_PM25_subindex(x): if x <= 30: return x * 50 / 30 elif x > 30 and x <= 60: return 50 + (x - 30) * 50 / 30 elif x > 60 and x <= 90: return 100 + (x - 60) * 100 / 30 elif x > 90 and x <= 120: return 200 + (x - 90) * 100 / 30 elif x > 120 and x <= 250: return 300 + (x - 120) * 100 / 130 elif x > 250: return 400 + (x - 250) * 100 / 130 else: return 0 df["PM2.5_SubIndex"] = df["PM2.5"].astype(int).apply(lambda x: get_PM25_subindex(x)) # SO2 Sub-Index calculation def get_SO2_subindex(x): if x <= 40: return x * 50 / 40 elif x > 40 and x <= 80: return 50 + (x - 40) * 50 / 40 elif x > 80 and x <= 380: return 100 + (x - 80) * 100 / 300 elif x > 380 and x <= 800: return 200 + (x - 380) * 100 / 420 elif x > 800 and x <= 1600: return 300 + (x - 800) * 100 / 800 elif x > 1600: return 400 + (x - 1600) * 100 / 800 else: return 0 df["SO2_SubIndex"] = df["SO2"].astype(int).apply(lambda x: get_SO2_subindex(x)) # NOx Sub-Index calculation def get_NOx_subindex(x): if x <= 40: return x * 50 / 40 elif x > 40 and x <= 80: return 50 + (x - 40) * 50 / 40 elif x > 80 and x <= 180: return 100 + (x - 80) * 100 / 100 elif x > 180 and x <= 280: return 200 + (x - 180) * 100 / 100 elif x > 280 and x <= 400: return 300 + (x - 280) * 100 / 120 elif x > 400: return 400 + (x - 400) * 100 / 120 else: return 0 df["NOx_SubIndex"] = df["NOx"].astype(int).apply(lambda x: get_NOx_subindex(x)) # NH3 Sub-Index calculation def get_NH3_subindex(x): if x <= 200: return x * 50 / 200 elif x > 200 and x <= 400: return 50 + (x - 
200) * 50 / 200 elif x > 400 and x <= 800: return 100 + (x - 400) * 100 / 400 elif x > 800 and x <= 1200: return 200 + (x - 800) * 100 / 400 elif x > 1200 and x <= 1800: return 300 + (x - 1200) * 100 / 600 elif x > 1800: return 400 + (x - 1800) * 100 / 600 else: return 0 df["NH3_SubIndex"] = df["NH3"].astype(int).apply(lambda x: get_NH3_subindex(x)) # CO Sub-Index calculation def get_CO_subindex(x): if x <= 1: return x * 50 / 1 elif x > 1 and x <= 2: return 50 + (x - 1) * 50 / 1 elif x > 2 and x <= 10: return 100 + (x - 2) * 100 / 8 elif x > 10 and x <= 17: return 200 + (x - 10) * 100 / 7 elif x > 17 and x <= 34: return 300 + (x - 17) * 100 / 17 elif x > 34: return 400 + (x - 34) * 100 / 17 else: return 0 df["CO_SubIndex"] = df["CO"].astype(int).apply(lambda x: get_CO_subindex(x)) # O3 Sub-Index calculation def get_O3_subindex(x): if x <= 50: return x * 50 / 50 elif x > 50 and x <= 100: return 50 + (x - 50) * 50 / 50 elif x > 100 and x <= 168: return 100 + (x - 100) * 100 / 68 elif x > 168 and x <= 208: return 200 + (x - 168) * 100 / 40 elif x > 208 and x <= 748: return 300 + (x - 208) * 100 / 539 elif x > 748: return 400 + (x - 748) * 100 / 539 else: return 0 df["O3_SubIndex"] = df["O3"].astype(int).apply(lambda x: get_O3_subindex(x)) ``` ## Filling the NaN values of the AQI column by taking the maximum value of the sub-indices ``` df["AQI"] = df["AQI"].fillna(round(df[["PM2.5_SubIndex", "PM10_SubIndex", "SO2_SubIndex", "NOx_SubIndex","NH3_SubIndex", "CO_SubIndex", "O3_SubIndex"]].max(axis = 1))) df sns.heatmap(df.isnull(),yticklabels=False,cbar=False,cmap='viridis') ``` # AQI Bucket ``` from IPython import display display.Image("/home/manikanta/Pictures/Screenshot from 2021-05-21 11-59-24.png",width = 400, height = 200) ``` ### Calculating the AQI bucket and filling the NaN values present ``` ## AQI bucketing def get_AQI_bucket(x): if x <= 50: return "Good" elif x > 50 and x <= 100: return "Satisfactory" elif x > 100 and x <= 200: return "Moderate" elif x > 200 and x <= 300:
return "Poor" elif x > 300 and x <= 400: return "Very Poor" elif x > 400: return "Severe" else: return '0' df["AQI_Bucket"] = df["AQI_Bucket"].fillna(df["AQI"].apply(lambda x: get_AQI_bucket(x))) df sns.heatmap(df.isnull(),yticklabels=False,cbar=False,cmap='viridis') df.columns df_city_day = df.copy() df_city_day.columns plt.figure(figsize=(12,10)) sns.heatmap(df.corr(),cmap='coolwarm',annot=True); pollutants = ['PM2.5', 'PM10', 'NO', 'NO2', 'NOx', 'NH3', 'CO', 'SO2','O3', 'Benzene', 'Toluene', 'Xylene'] df_city_day = df_city_day[pollutants] print('Distribution of different pollutants in last 5 years') df_city_day.plot(kind='line',figsize=(18,18),cmap='coolwarm',subplots=True,fontsize=10); df[['City','AQI']].groupby('City').mean().sort_values('AQI').plot(kind='bar',cmap='Blues_r',figsize=(8,8)) plt.title('Average AQI in last 5 years'); ``` ### From the graph above we can conclude that Ahmedabad is the most polluted city, followed by Delhi and Gurugram ## Creating a model for predicting the output ``` final_df= df[['AQI', 'AQI_Bucket']].copy() final_df final_df['AQI_Bucket'].unique() #final_df = pd.get_dummies(final_df) final_df['AQI_Bucket'] = final_df['AQI_Bucket'].map({'Good' :0, 'Satisfactory' :1, 'Moderate' :2, 'Poor' :3, 'Very Poor' :4, 'Severe' :5}).astype(int) #mapping buckets to numbers final_df.head() ``` # Predicting the values of AQI_Bucket w.r.t. the values of AQI using a Random Forest Classifier ``` X = final_df[['AQI']] y = final_df[['AQI_Bucket']] from sklearn.ensemble import RandomForestClassifier from sklearn.model_selection import train_test_split X_train, X_test, y_train, y_test = train_test_split(X, y, random_state = 0) clf = RandomForestClassifier(random_state = 0).fit(X_train, y_train) y_pred = clf.predict(X_test) print("Enter the value of AQI:") AQI = float(input("AQI : ")) output = clf.predict([[AQI]]) output #0-->Good #1-->Satisfactory #2-->moderate #3-->poor #4-->Very poor #5-->Severe from sklearn.metrics import
accuracy_score,classification_report,confusion_matrix print(accuracy_score(y_test, y_pred)) print(classification_report(y_test, y_pred)) print(confusion_matrix(y_test, y_pred)) ```
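To recap the pipeline in one self-contained snippet: each pollutant reading maps to a sub-index through piecewise-linear breakpoints, the final AQI is the maximum of the available sub-indices, and the bucket follows from that number. A sketch using two of the formulas from above (illustrative, with only two pollutants instead of seven):

```python
def pm25_subindex(x):
    # breakpoints from get_PM25_subindex above (CPCB scale)
    if x <= 30: return x * 50 / 30
    if x <= 60: return 50 + (x - 30) * 50 / 30
    if x <= 90: return 100 + (x - 60) * 100 / 30
    if x <= 120: return 200 + (x - 90) * 100 / 30
    if x <= 250: return 300 + (x - 120) * 100 / 130
    return 400 + (x - 250) * 100 / 130

def so2_subindex(x):
    if x <= 40: return x * 50 / 40
    if x <= 80: return 50 + (x - 40) * 50 / 40
    if x <= 380: return 100 + (x - 80) * 100 / 300
    if x <= 800: return 200 + (x - 380) * 100 / 420
    if x <= 1600: return 300 + (x - 800) * 100 / 800
    return 400 + (x - 1600) * 100 / 800

def bucket(aqi):
    if aqi <= 50: return "Good"
    if aqi <= 100: return "Satisfactory"
    if aqi <= 200: return "Moderate"
    if aqi <= 300: return "Poor"
    if aqi <= 400: return "Very Poor"
    return "Severe"

# PM2.5 of 75 → sub-index 150; SO2 of 20 → 25; AQI = max = 150 → "Moderate"
aqi = round(max(pm25_subindex(75), so2_subindex(20)))
```

This is exactly what the `max(axis = 1)` call and `get_AQI_bucket` do row by row on the DataFrame.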
``` # !pip install mediapipe opencv-python import mediapipe as mp import cv2 import numpy as np import uuid import os mp_drawing = mp.solutions.drawing_utils mp_hands = mp.solutions.hands ``` # getting realtime webcam feed ``` cap = cv2.VideoCapture(0) while cap.isOpened(): ret, frame = cap.read() cv2.imshow("Raw webcam feed", frame) if cv2.waitKey(10) & 0xFF == ord('q'): break cap.release() cv2.destroyAllWindows() ``` # closing webcam feed ``` cap.release() cv2.destroyAllWindows() cap = cv2.VideoCapture(0) with mp_hands.Hands(min_detection_confidence=0.8, min_tracking_confidence=0.5) as hands: while cap.isOpened(): ret, frame = cap.read() # BGR 2 RGB image = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB) # Flip on horizontal image = cv2.flip(image, 1) # Set flag image.flags.writeable = False # Detections results = hands.process(image) # Set flag to true image.flags.writeable = True # RGB 2 BGR image = cv2.cvtColor(image, cv2.COLOR_RGB2BGR) # Detections print(results) # Rendering results if results.multi_hand_landmarks: for num, hand in enumerate(results.multi_hand_landmarks): mp_drawing.draw_landmarks(image, hand, mp_hands.HAND_CONNECTIONS, mp_drawing.DrawingSpec(color=(121, 22, 76), thickness=2, circle_radius=4), mp_drawing.DrawingSpec(color=(250, 44, 250), thickness=2, circle_radius=2), ) cv2.imshow('Hand Tracking', image) if cv2.waitKey(10) & 0xFF == ord('q'): break cap.release() cv2.destroyAllWindows() cap.release() cv2.destroyAllWindows() mp_drawing.DrawingSpec?? 
cap = cv2.VideoCapture(0) with mp_hands.Hands(min_detection_confidence=0.8, min_tracking_confidence=0.5) as hands: while cap.isOpened(): ret, frame = cap.read() # BGR 2 RGB image = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB) # Flip on horizontal image = cv2.flip(image, 1) # Set flag image.flags.writeable = False # Detections results = hands.process(image) # Set flag to true image.flags.writeable = True # RGB 2 BGR image = cv2.cvtColor(image, cv2.COLOR_RGB2BGR) # Detections print(results) # Rendering results if results.multi_hand_landmarks: for num, hand in enumerate(results.multi_hand_landmarks): mp_drawing.draw_landmarks(image, hand, mp_hands.HAND_CONNECTIONS, mp_drawing.DrawingSpec(color=(121, 22, 76), thickness=2, circle_radius=4), mp_drawing.DrawingSpec(color=(250, 44, 250), thickness=2, circle_radius=2), ) # Save our image cv2.imwrite(os.path.join('Output Images', '{}.jpg'.format(uuid.uuid1())), image) cv2.imshow('Hand Tracking', image) if cv2.waitKey(10) & 0xFF == ord('q'): break cap.release() cv2.destroyAllWindows() cap.release() cv2.destroyAllWindows() ```
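The two image transforms in the loop are simple array operations: `cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)` reverses the channel axis, and `cv2.flip(image, 1)` reverses the column axis. A numpy-only sketch makes this explicit (the cv2 calls remain the idiomatic way inside the loop above):

```python
import numpy as np

# a single pure-blue pixel in OpenCV's BGR channel order
bgr = np.array([[[255, 0, 0]]], dtype=np.uint8)

rgb = bgr[..., ::-1]        # BGR -> RGB: reverse the last (channel) axis
flipped = rgb[:, ::-1, :]   # horizontal mirror, same effect as cv2.flip(img, 1)
```

Both slices are views, so no pixel data is copied; MediaPipe expects RGB input, which is why the conversion happens before `hands.process` and is undone before `cv2.imshow`.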
# Model Trainer This script contains the code to train a linear regression model from historical sales data. The model itself is very simple and probably not very good, but the goal of the exercise is to show the architecture of the complete system. Once the `.fit` command runs, the model is ready to predict. At that moment the model is a Python object in memory. What happens if the process that trained it goes down because of a hang, a datacenter failure, etc.? How can we use the model in several processes in order to parallelize predictions? It is common to save the model to disk with `pickle` (although the scikit-learn library offers [alternatives](https://scikit-learn.org/stable/modules/model_persistence.html)). This process is called serialization and is common to every programming language. Aside from the simplicity of the model, it is worth noting that there is not a single reference to Kafka or its topics: the input is the `historic.csv` file and the output is a `.pickle` file. This code can run regardless of the state of the topics. ``` import pandas as pd import numpy as np from ejercicios.houses import SEED, MODEL_STORE, HISTORIC np.random.seed(SEED) ds = pd.read_csv(HISTORIC) ds.head() features = list(ds.columns) target = 'price' features.remove('price') features.remove('id') from sklearn.model_selection import train_test_split from sklearn.linear_model import LinearRegression x_train, x_test, y_train, y_test = train_test_split(ds[features], ds[target], random_state=SEED, test_size=0.3) # Create a model model = LinearRegression() model.fit(x_train, y_train) predict_train = model.predict(x_train) predict_test = model.predict(x_test) ``` The previous cell is meant to be a very simplified version of a machine learning pipeline.
We could add stages such as different algorithms, feature generation, hyperparameter optimization, etc., complicating the model as much as we need. The goal, in any case, is to end up with a `model` object that lets us run `model.predict`. That said, we are not going to predict here, only save the model. ``` # R^2 evaluation # We print it simply to check that successive trainings will have different performance print('R^2 on the training set: ', model.score(x_train, y_train)) print('R^2 on the validation set: ', model.score(x_test, y_test)) import pickle with open(MODEL_STORE, 'wb') as f: pickle.dump(model, f) !ls -l *.pickle ```
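The counterpart of this trainer is a consumer process that deserializes the `.pickle` and calls `predict`. A minimal sketch of the pickle round trip, using a stand-in object instead of the trained `LinearRegression` (`DummyModel` is ours, purely illustrative — a real consumer would `pickle.load` the file written above):

```python
import pickle

class DummyModel:
    """Stand-in for the trained model: predicts a constant for every row."""
    def predict(self, rows):
        return [42.0 for _ in rows]

# Serialize to bytes; a file would use pickle.dump / pickle.load instead.
blob = pickle.dumps(DummyModel())

# Deserialization can happen in another process, as long as the class is importable there.
model = pickle.loads(blob)
print(model.predict([[1, 2], [3, 4]]))
```

The key constraint of this design is that the unpickling process must be able to import the model's class — which is exactly why scikit-learn models, whose classes live in the library, serialize so conveniently.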
``` %load_ext autoreload %autoreload 2 import torch from UnarySim.sw.kernel.div import CORDIV_kernel from UnarySim.sw.stream.gen import RNG, SourceGen, BSGen from UnarySim.sw.metric.metric import ProgressiveError import matplotlib.pyplot as plt from mpl_toolkits.mplot3d import Axes3D from matplotlib import ticker, cm from matplotlib.ticker import LinearLocator, FormatStrFormatter import time import math import numpy as np import seaborn as sns device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu") modes = ["unipolar"] depth_abs=4 depth_kernel=4 depth_sync=2 shiftreg=False rng="Sobol" rng_dim=4 bitwidth = 8 stype = torch.float btype = torch.float rtype = torch.float for mode in modes: print("========================================================") print(mode) print("========================================================") if mode == "unipolar": # all values in unipolar are non-negative # dividend is always no greater than divisor # divisor is non-zero low_bound = 0 up_bound = 2**bitwidth elif mode == "bipolar": # values in bipolar are arbitrarily positive or negative # abs of dividend is always no greater than abs of divisor # abs of divisor is non-zero low_bound = -2**(bitwidth-1) up_bound = 2**(bitwidth-1) divisor_list = [] dividend_list = [] for divisor_val in range(up_bound, low_bound-1, -1): divisor_list.append([]) dividend_list.append([]) for dividend_val in range(low_bound, up_bound+1, 1): divisor_list[up_bound-divisor_val].append(divisor_val) dividend_list[up_bound-divisor_val].append(dividend_val) dividend = torch.tensor(dividend_list).type(torch.float).div(up_bound).to(device) divisor = torch.tensor(divisor_list).type(torch.float).div(up_bound).to(device) quotient = dividend.div(divisor) # find the invalid positions in quotient quotient_nan = torch.isnan(quotient) quotient_inf = torch.isinf(quotient) quotient_mask = quotient_nan + quotient_inf quotient[quotient_mask] = 0 quotient = quotient.clamp(-1, 1) dut_div =
CORDIV_kernel(depth=depth_kernel, rng=rng, rng_dim=rng_dim, stype=stype).to(device) quotientPE = ProgressiveError(quotient, mode=mode).to(device) dividendPE = ProgressiveError(dividend, mode=mode).to(device) dividendSRC = SourceGen(dividend, bitwidth, mode=mode, rtype=rtype)().to(device) dividendRNG = RNG(bitwidth, 1, rng, rtype)().to(device) dividendBS = BSGen(dividendSRC, dividendRNG, stype).to(device) divisorPE = ProgressiveError(divisor, mode=mode).to(device) divisorSRC = SourceGen(divisor, bitwidth, mode=mode, rtype=rtype)().to(device) divisorRNG = RNG(bitwidth, 1, rng, rtype)().to(device) divisorBS = BSGen(divisorSRC, divisorRNG, stype).to(device) with torch.no_grad(): start_time = time.time() for i in range(2**bitwidth): dividend_bs = dividendBS(torch.tensor([i])) dividendPE.Monitor(dividend_bs) divisor_bs = divisorBS(torch.tensor([i])) divisorPE.Monitor(divisor_bs) quotient_bs = dut_div(dividend_bs, divisor_bs) quotientPE.Monitor(quotient_bs) print("--- %s seconds ---" % (time.time() - start_time)) print("dividend error: ", "min:", torch.min(dividendPE()[1]).item(), ", max:", torch.max(dividendPE()[1]).item()) print("divisor error: ", "min:", torch.min(divisorPE()[1]).item(), ", max:", torch.max(divisorPE()[1]).item()) # set invalid output statistics to special values print("quotient error: ", "min:", torch.min(quotientPE()[1]).item(), ", max:", torch.max(quotientPE()[1]).item()) ####################################################################### # check the error distribution using histogram ####################################################################### result_pe = quotientPE()[1].cpu().numpy() result_pe[quotient_mask.cpu().numpy()] = np.nan result_pe = result_pe.flatten() result_pe = result_pe[~np.isnan(result_pe)] print("RMSE:", math.sqrt(np.mean(result_pe**2))) print("MAE: ", np.mean(np.abs(result_pe))) print("bias:", np.mean(result_pe)) fig = plt.hist(result_pe, bins='auto', log=False) # arguments are passed to np.histogram 
plt.title("Histogram for final output error") plt.show() ####################################################################### # check the 3D plot contourf ####################################################################### result_pe = quotientPE()[1].cpu().numpy() result_pe[quotient_mask.cpu().numpy()] = 0 fig = plt.figure() axis_len = quotientPE()[1].size()[0] divisor_y_axis = [] dividend_x_axis = [] for axis_index in range(axis_len): divisor_y_axis.append((up_bound-axis_index/(axis_len-1)*(up_bound-low_bound))/up_bound) dividend_x_axis.append((axis_index/(axis_len-1)*(up_bound-low_bound)+low_bound)/up_bound) X, Y = np.meshgrid(dividend_x_axis, divisor_y_axis) Z = result_pe.clip(-0.1, 0.1) cs = plt.contourf(X, Y, Z, cmap=cm.bwr) cbar = fig.colorbar(cs) plt.show() ```
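The RMSE, MAE, and bias printed above follow directly from their definitions; a dependency-free sketch of the same statistics (`error_stats` is our illustrative helper — the notebook computes these with NumPy over the flattened error array):

```python
import math

def error_stats(errors):
    """Root-mean-square error, mean absolute error, and mean (bias) of a list of errors."""
    n = len(errors)
    rmse = math.sqrt(sum(e * e for e in errors) / n)  # penalizes large errors quadratically
    mae = sum(abs(e) for e in errors) / n             # average error magnitude
    bias = sum(errors) / n                            # systematic over/under-estimation
    return rmse, mae, bias

print(error_stats([0.1, -0.1, 0.2, -0.2]))
```

A zero bias with nonzero RMSE/MAE, as in this toy input, is the signature of symmetric noise around the true quotient.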
``` %matplotlib inline %reload_ext autoreload %autoreload 2 # display multiple outputs per cell from IPython.core.interactiveshell import InteractiveShell InteractiveShell.ast_node_interactivity = "all" import numpy as np from kalman_estimation import Kalman4FROLS, Selector, get_mat_data from tqdm import tqdm, trange from utils import get_term_dict # !nonlinear model # *nonlinear data terms_path = '../data/linear_terms5D_0.50trial1.mat' term = Selector(terms_path) _ = term.make_terms() # # *save the candidate term set # # fname = './data/nonlinear_candidate_terms.txt' # fname = './data/longlag_nonlinear_candidate_terms.txt' # np.savetxt(fname, terms_repr, fmt='%s') # *selection normalized_signals, Kalman_H, candidate_terms, Kalman_S_No = term.make_selection() # *build the Kalman filter kf = Kalman4FROLS(normalized_signals, Kalman_H=Kalman_H, uc=0.01) y_coef = kf.estimate_coef() print(y_coef) con_terms_linear5 = ['x1(t-1)', 'x1(t-2)', 'x1(t-2)', 'x1(t-3)', 'x1(t-2)', 'x4(t-1)', 'x5(t-1)', 'x4(t-1)', 'x5(t-1)'] # 9 con_terms_nonlinear5 = ['x1(t-1)', 'x1(t-2)', 'x1(t-2)*x1(t-2)', 'x1(t-3)', 'x1(t-2)*x1(t-2)', 'x4(t-1)', 'x5(t-1)', 'x4(t-1)', 'x5(t-1)'] # 9 true_coefs5 = [0.95*np.sqrt(2), -0.9025, 0.5, -0.4, -0.5, 0.25*np.sqrt(2), 0.25*np.sqrt(2), -0.25*np.sqrt(2), 0.25*np.sqrt(2)] # 9 con_terms_linear10 = ['x1(t-1)', 'x1(t-2)', 'x1(t-2)', 'x2(t-3)', 'x1(t-2)', 'x4(t-4)', 'x9(t-2)', 'x4(t-4)', 'x1(t-1)', 'x1(t-2)', 'x7(t-2)', 'x8(t-3)', 'x9(t-3)', 'x8(t-3)', 'x9(t-3)', 'x7(t-4)'] # 16 con_terms_nonlinear10 = ['x1(t-1)', 'x1(t-2)', 'x1(t-2)*x1(t-2)', 'x2(t-3)', 'x1(t-2)', 'x4(t-4)', 'x9(t-2)', 'x4(t-4)', 'x1(t-1)*x1(t-2)', 'x1(t-2)', 'x7(t-2)', 'x8(t-3)', 'x9(t-3)', 'x8(t-3)', 'x9(t-3)', 'x7(t-4)'] # 16 true_coefs10 = [0.95*np.sqrt(2), -0.9025, 0.5, 0.9, -0.5, 0.8, -0.4, -0.8, 0.4, -0.4, -0.9, 0.4, 0.3, -0.3, 0.4, -0.75] # 16 noises = np.linspace(0.5, 4, 8) con_terms5 = [2, 1, 1, 3, 2] con_terms10 = [2, 1, 1, 1, 2, 1, 2, 3, 2, 1] root = '../data/' term_dict1, term_dict2 = get_term_dict('nonlinear', 5) ``` - Formal parameters - noise_var - trial - ndim - 
type - uc ``` def corr_term(y_coef, terms_set, Kalman_S_No, var_name: str = 'x', step_name: str = 't'): n_dim, n_term = y_coef.shape func_repr = [] for var in range(n_dim): y = {} for term in range(n_term): y[terms_set[Kalman_S_No[var, term]]] = y_coef[var, term] func_repr.append(y) return func_repr ``` - frokf involves a random initialization of the state - so results should be averaged over multiple runs ntest sets the number of FROKF runs per estimate, while trials sets the overall number of experiments ``` def frokf(noise_var, ndim, dtype, terms, length, root='../data/', trials=100, uc=0.01, ntest=50): assert dtype in ['linear', 'nonlinear'], 'type not supported!' ax = [] for trial in range(1, trials + 1): terms_path = root + f'{dtype}_terms{ndim}D_{noise_var:2.2f}trial{trial}.mat' term = Selector(terms_path) _ = term.make_terms() normalized_signals, Kalman_H, candidate_terms, Kalman_S_No = term.make_selection() # Kalman_S_No = np.sort(Kalman_S_No) y_coef = 0 # average FROKF over multiple runs for _ in trange(ntest): kf = Kalman4FROLS(normalized_signals, Kalman_H=Kalman_H, uc=uc) y_coef += kf.estimate_coef() y_coef /= ntest terms_set = corr_term(y_coef, candidate_terms, Kalman_S_No) flatten_coef, t = [], 0 for i in range(ndim): tmp = [] for k in terms[t:t+length[i]]: tmp.append(terms_set[i][k] if k in terms_set[i] else np.nan) flatten_coef.extend(tmp) t += length[i] ax.append(flatten_coef) return np.stack(ax) np.zeros((5, 5)) candidate_terms term_dict1['x1(t-2)*x1(t-2)'] term_dict2[100] frokf(4, 10, 'nonlinear', con_terms_linear5, con_terms5, uc=1e-6, trials=50) np.array(con_terms_linear10) np.empty((5,5), dtype='<U9') uc_map = np.linspace(1e-3, 1e-6, num=8) uc_map ``` - parameter tuning
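The bookkeeping done by `corr_term` — pairing each estimated coefficient with the candidate term that the selection stage picked for it — can be illustrated with plain lists (toy data; `map_coefs_to_terms` is our illustrative stand-in, no Kalman filter involved):

```python
def map_coefs_to_terms(y_coef, candidate_terms, selected):
    """For each output dimension, build {term name: estimated coefficient}."""
    mapped = []
    for coefs, idx in zip(y_coef, selected):
        # idx holds, per coefficient, the index of the selected candidate term
        mapped.append({candidate_terms[j]: c for j, c in zip(idx, coefs)})
    return mapped

candidate_terms = ['x1(t-1)', 'x1(t-2)', 'x2(t-1)']
y_coef = [[0.9, -0.5], [0.3, 0.1]]  # 2 outputs, 2 selected terms each
selected = [[0, 2], [1, 0]]         # indices into candidate_terms
print(map_coefs_to_terms(y_coef, candidate_terms, selected))
```

This is why a true term missed by the selection stage shows up as `np.nan` in `frokf`: the term name is simply absent from the corresponding dictionary.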
# Systems Identification Model Fitting Fit a Systems Identification model based on this [specification](https://hackmd.io/w-vfdZIMTDKwdEupeS3qxQ) and [spec](https://hackmd.io/XVaejEw-QaCghV1Tkv3eVQ) with data obtained in [data_acquisition.ipynb](data/data_acquisition.ipynb). #### Process changes and decision points * Create differenced linear regressor model for refining data formatting * Fit a VAR model on differenced states with Yeo-Johnson power transformation * Implemented coordinate transformations * Created inverse transformations * Fit one step forward VAR model that takes the difference between local arbitrageur values and observed values and forecasts the errors within the coordinate transformation state. * Fit VARMAX model with exogenous signal - error between redemption price and RAI market price - retrain after every timestep * Compare VARMAX vs VAR model (we chose VARMAX with an exogenous signal) * VARMAX is too slow to retrain at each time step (25x slower than VAR). To determine which model performs better, we created a [validation notebook](VAR_vs_VARMAX_evaluation.ipynb) * Refactor to functions for deployment * Add back Yeo-Johnson power transformation * Move from arbitrageur to exponentially weighted moving average of actual data * Swept alpha of exponentially weighted moving average and found that a VAR(15) with an alpha of 0.8 performed best. 
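The exponentially weighted moving average that replaced the arbitrageur model (applied later via pandas `ewm(alpha=0.8).mean()`) weights recent observations by `alpha`. A dependency-free sketch of the recursion (this is the `adjust=False` form; pandas' default `adjust=True` normalizes the weights, so early values differ slightly):

```python
def ewma(values, alpha):
    """Exponentially weighted moving average (the adjust=False recursion)."""
    out = []
    avg = None
    for v in values:
        # new average = alpha * new observation + (1 - alpha) * previous average
        avg = v if avg is None else alpha * v + (1 - alpha) * avg
        out.append(avg)
    return out

print(ewma([1.0, 2.0, 3.0], 0.8))
```

With `alpha=0.8` the smoother tracks the raw series closely: each step retains only 20% of the previous average, which matches the sweep's finding that a fast-reacting baseline worked best here.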
## Analyze and Prepare Data ``` # import libraries import pandas as pd import numpy as np from scipy import stats import math import statsmodels.api as sm from statsmodels.tsa.api import VAR, VARMAX from sklearn.preprocessing import PowerTransformer import matplotlib.pyplot as plt import warnings import os warnings.filterwarnings("ignore") os.chdir('..') states = pd.read_csv('data/states.csv') del states['Unnamed: 0'] states.head() # add additional state variables states['RedemptionPriceinEth'] = states['RedemptionPrice'] / states['ETH Price (OSM)'] states['RedemptionPriceError'] = states['RedemptionPrice'] - states['marketPriceUsd'] ``` ### Systems identification steps: 1. Calculate optimal state from APT model (updated to exponential weighted moving average of the real data) 2. Perform a coordinate transformation of data 3. Difference the local coordinate from the observed to get error 4. Perform a Yeo-Johnson power transformation <!-- 4. Train VARMAX the errors + exogenous signal[s] --> 5. Train a VAR(15) model 6. One step forecast 7. Invert the Yeo-Johnson power transformation 8. Convert forecasted values back from coordinate system 9. Add forecasted values to previous state to get new state ### Mapping of specification states to data #### Initial vector The quantity state variables of the system are listed as value, mathematical notation, and Graph and Big Query field names from [data_acquisition.ipynb](data/data_acquisition.ipynb). 
* ETH in collateral = $Q$ = collateral * ETH in Uniswap = $R_{ETH}$ = EthInUniswap * RAI in Uniswap = $R_{RAI}$ = RaiInUniswap * RAI drawn from SAFEs = $D$ = RaiDrawnFromSAFEs <!-- (GlobalDebt won't equal total supply (create graphics around?)) --> The metric state variables of the system are: * Market Price of RAI in ETH = $p_{E/R} > 0$ = marketPriceEth * Market Price of RAI in USD = $p_{U/R} > 0$ = marketPriceUsd * Market Price of ETH in USD = $p_{U/E} > 0$ = ETH Price (OSM) The metric control variables of the system are: * Redemption Price of RAI in USD = $p^r_{U/R} > 0$ = RedemptionPrice * Redemption Price of RAI in ETH = $p^r_{E/R} > 0$ = RedemptionPriceinEth The system parameters are: * Liquidation Ratio = $\bar{L} > 0$ = 1.45 * SAFE Debt Ceiling = $\bar{D} > 0$ = globalDebtCeiling * Uniswap Fee = $\phi_U \in (0,1)$ = 0.003 * Gas Costs = $\bar{C}_{gas} \geq 0$ = 100e-9, # 100 gwei The aggregate flow variables are: * Collateral added or removed = $q \in \mathbb{R}$ (ETH) * SAFE Debt drawn or repaid = $d \in \mathbb{R}$ (RAI) * Uniswap RAI bought or sold = $r \in \mathbb{R}$ (RAI) * Uniswap ETH bought or sold = $z \in \mathbb{R}$ (ETH) ### Model Formulation There is an admissible action set of vectors: (Graph values) * ETH in collateral = $Q$ = collateral * ETH in Uniswap = $R_{ETH}$ = reserve1 * RAI in Uniswap = $R_{RAI}$ = reserve0 * RAI drawn from SAFEs = $D$ = erc20CoinTotalSupply Action vector: $\vec{u} = (\Delta Q, \Delta R_{ETH}, \Delta R_{RAI}, \Delta D)$ Admissible action set: $\vec{u} \in \mathcal{U}$ Optimal Action Vector: $\vec{u}^* = (\Delta Q^*, \Delta R_{ETH}^*, \Delta R_{RAI}^*, \Delta D^*)$ ``` # define constants (will come from cadCAD model but added here for calculations) params = { 'liquidation_ratio': 1.45, 'debt_ceiling': 1e9, 'uniswap_fee': 0.003, 'arbitrageur_considers_liquidation_ratio': True, } ``` ## Create Arbitrageur data vector $u^*$ ``` def get_aggregated_arbitrageur_decision(params, state): # This Boolean indicates whether or not the arbitrageur
is rationally considering # borrowing to the liquidation ratio limit. If TRUE, arbitrage opportunities are less # frequent when RAI is expensive and more frequent when RAI is cheap. If FALSE, only # the difference in market and redemption prices (net of Uniswap fee) matters for trading, # which may conform more to individual trader expectations and behavior. consider_liquidation_ratio = params['arbitrageur_considers_liquidation_ratio'] # These are the states of the SAFE balances in aggregate & its fixed parameters total_borrowed = state['SAFE_Debt'] # D total_collateral = state['SAFE_Collateral'] # Q liquidation_ratio = params['liquidation_ratio'] debt_ceiling = params['debt_ceiling'] # These are the states of the Uniswap secondary market balances and its fee RAI_balance = state['RAI_balance'] # R_Rai ETH_balance = state['ETH_balance'] # R_Eth uniswap_fee = params['uniswap_fee'] # These are the prices of RAI in USD/RAI for SAFE redemption and the market price oracle, resp. redemption_price = state['target_price'] # $p^r_{U/R} market_price = state['market_price'] # p_{U/R} > 0 # This is the price of ETH in USD/ETH eth_price = state['eth_price'] # p_{U/E} # These functions define the optimal borrowing/repayment decisions of the aggregated arbitrageur def g1(RAI_balance, ETH_balance, uniswap_fee, liquidation_ratio, redemption_price): return ((eth_price * RAI_balance * ETH_balance * (1 - uniswap_fee)) / (liquidation_ratio * redemption_price)) ** 0.5 def g2(RAI_balance, ETH_balance, uniswap_fee, liquidation_ratio, redemption_price): return (RAI_balance * ETH_balance * (1 - uniswap_fee) * liquidation_ratio * (redemption_price / eth_price)) ** 0.5 # This Boolean resolves to TRUE if the agg. arb. 
acts this timestep when RAI is expensive # on the secondary market expensive_RAI_on_secondary_market = \ redemption_price < ((1 - uniswap_fee) / liquidation_ratio) * market_price \ if consider_liquidation_ratio \ else redemption_price < (1 - uniswap_fee) * market_price # This Boolean resolves to TRUE if the agg. arb. acts this timestep when RAI is cheap # on the secondary market cheap_RAI_on_secondary_market = \ redemption_price > (1 / ((1 - uniswap_fee) * liquidation_ratio)) * market_price \ if consider_liquidation_ratio \ else redemption_price > (1 / (1 - uniswap_fee)) * market_price if expensive_RAI_on_secondary_market: ''' Expensive RAI on Uni: (put ETH from pocket into additional collateral in SAFE) draw RAI from SAFE -> Uni ETH from Uni -> into pocket ''' _g1 = g1(RAI_balance, ETH_balance, uniswap_fee, liquidation_ratio, redemption_price) d = (_g1 - RAI_balance) / (1 - uniswap_fee) # should be \geq 0 q = ((liquidation_ratio * redemption_price) / eth_price) * (total_borrowed + d) - total_collateral # should be \geq 0 z = -(ETH_balance * d * (1 - uniswap_fee)) / \ (RAI_balance + d * (1 - uniswap_fee)) # should be \leq 0 r = d # should be \geq 0 elif cheap_RAI_on_secondary_market: ''' Cheap RAI on Uni: ETH out of pocket -> Uni RAI from UNI -> SAFE to wipe debt (and collect collateral ETH from SAFE into pocket) ''' _g2 = g2(RAI_balance, ETH_balance, uniswap_fee, liquidation_ratio, redemption_price) z = (_g2 - ETH_balance) / (1 - uniswap_fee) # should be \geq 0 r = -(RAI_balance * z * (1 - uniswap_fee)) / \ (ETH_balance + z * (1 - uniswap_fee)) # should be \leq 0 d = r # should be \leq 0 q = ((liquidation_ratio * redemption_price / eth_price) * (total_borrowed + d) - total_collateral) # should be \leq 0 else: # no arbitrage opportunity: zero flows, so the return below is always defined q = d = r = z = 0.0 return { 'q' : q, 'd' : d, 'r' : r, 'z' : z } # UPDATED: We will use an exponentially weighted moving average instead of this arbitrageur logic # # subset state variables for arbitrageur vector # state_subset =
states[['marketPriceUsd','RedemptionPrice','ETH Price (OSM)','collateral', # 'EthInUniswap','RaiInUniswap','RaiDrawnFromSAFEs']] # # map state data to arbitrageur vector fields # state_subset.columns = ['market_price','target_price','eth_price','SAFE_Collateral', # 'ETH_balance','RAI_balance','SAFE_Debt'] # # create list of u^* vectors # values = [] # # iterate through real data to create u^* and save to values # for i in range(0,len(state_subset)): # values.append(get_aggregated_arbitrageur_decision(params,state_subset.loc[i])) # # create historic u^* dataframe # local = pd.DataFrame(values) # local.columns = ['Q','D','Rrai','Reth'] # local.head() states # subset state variables for arbitrageur vector state_subset = states[['collateral','RaiDrawnFromSAFEs','RaiInUniswap','EthInUniswap']] # map state data to vector fields state_subset.columns = ['Q','D','Rrai','Reth'] # alpha is the smoothing factor local = state_subset.ewm(alpha=0.8).mean() local ``` ## Coordinate Transformations 1. $\alpha := \frac{d}{\bar{D}}$ Constraint: $\bar{D} \geq D + d$ $ C_0 := \frac{p^r_{U/R}}{p_{U/E}}\bar{L} > 0$ $ C_0 D - Q =: C_1.$ 2. $\beta := \frac{q - C_0 d}{C_1}$ 3. $\gamma := \frac{r}{R_{RAI}}$ 4. $\delta := \frac{z}{R_{ETH}}$ ## Inverse Transformations 1. $d^* = \alpha * \bar{D}$. 2. $q^* = C_0 * \bar{D} * \alpha + C_1 * \beta$ 3. $r^* = \gamma * {R_{RAI}}$ 4. 
$z^* = \delta * {R_{ETH}}$ ``` # function to create coordinate transformations def coordinate_transformations(params,df,Q,R_eth,R_rai,D,RedemptionPrice,EthPrice): ''' Description: Function that takes in pandas dataframe and the names of columns Parameters: df: pandas dataframe containing states information Q: dataframe column name R_eth: dataframe column name R_rai: dataframe column name D: dataframe column name RedemptionPrice: dataframe column name EthPrice: dataframe column name Returns: Pandas dataframe with alpha, beta, gamma, delta transformed values Example: coordinate_transformations(params,states,'collateral','EthInUniswap','RaiInUniswap', 'RaiDrawnFromSAFEs','RedemptionPrice','ETH Price (OSM)')[['alpha','beta','gamma','delta']] ''' # Calculate alpha d = df[D].diff() d.fillna(0,inplace=True) df['d'] = d df['alpha'] = df['d'] / params['debt_ceiling'] # alpha constraint check for i, row in df.iterrows(): #constraint constraint = params['debt_ceiling'] >= row[D] + row['d'] if constraint == False: print('For row index {}'.format(i)) print('Alpha constraint is not passed') # calculate beta df['C_o'] = (df[RedemptionPrice]/df[EthPrice]) * params['liquidation_ratio'] # C_0 constraint check for i, row in df.iterrows(): #constraint constraint = row['C_o'] > 0 if constraint == False: print('For row index {}'.format(i)) print('C_0 constraint is not passed') q = df[Q].diff() q.fillna(0,inplace=True) df['q'] = q df['C_1'] = (df['C_o'] * df[D]) - df[Q] df['beta'] = (df['q'] - (df['C_o']*df['d']))/ df['C_1'] # calculate gamma r = df[R_rai].diff() r.fillna(0,inplace=True) df['r'] = r df['gamma'] = df['r']/df[R_rai] # calculate delta z = df[R_eth].diff() z.fillna(0,inplace=True) df['z'] = z df['delta'] = df['z']/df[R_eth] return df # transform historical data transformed = coordinate_transformations(params,states,'collateral','EthInUniswap','RaiInUniswap', 'RaiDrawnFromSAFEs','RedemptionPrice','ETH Price (OSM)')[['alpha','beta','gamma','delta']] transformed # add
additional signals to arbitrageur state local['RedemptionPrice'] = states['RedemptionPrice'] local['ETH Price (OSM)'] = states['ETH Price (OSM)'] local # transform u* transformed_arbitrageur = coordinate_transformations(params,local,'Q','Reth','Rrai', 'D','RedemptionPrice','ETH Price (OSM)')[['alpha','beta','gamma','delta']] transformed_arbitrageur def create_transformed_errors(transformed_states,transformed_arbitrageur): ''' Description: Function for taking two pandas dataframes of transformed states and taking the difference to produce an error dataframe. Parameters: transformed_states: pandas dataframe with alpha, beta, gamma, and delta features transformed_arbitrageur: pandas dataframe with alpha, beta, gamma, and delta features Returns: error pandas dataframe ''' alpha_diff = transformed_states['alpha'] - transformed_arbitrageur['alpha'] beta_diff = transformed_states['beta'] - transformed_arbitrageur['beta'] gamma_diff = transformed_states['gamma'] - transformed_arbitrageur['gamma'] delta_diff = transformed_states['delta'] - transformed_arbitrageur['delta'] e_u = pd.DataFrame(alpha_diff) e_u['beta'] = beta_diff e_u['gamma'] = gamma_diff e_u['delta'] = delta_diff e_u = e_u.astype(float) return e_u e_u = create_transformed_errors(transformed,transformed_arbitrageur) e_u.head() e_u.describe() e_u.hist() ``` When data isn't normal (as is shown above), it is best practice to apply a transformation. For our initial transformation, we will use the Yeo-Johnson power transformation. The Yeo-Johnson power transformation is used to stabilize variance and make data more Gaussian. Yeo-Johnson is an extension of Box-Cox that allows for both zero and negative values (https://en.wikipedia.org/wiki/Power_transform). You could use any other type of normalization transformation as well, whichever fits the data the best. Scikit-learn has a great implementation of the transformer, which we will use below. 
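For reference, the Yeo-Johnson transform of a single value for a fixed `λ` is a simple piecewise formula; a dependency-free sketch (`yeo_johnson` is our illustrative helper — scikit-learn's `PowerTransformer` additionally estimates `λ` by maximum likelihood and standardizes the output):

```python
import math

def yeo_johnson(x, lam):
    """Yeo-Johnson power transform of one value for a fixed lambda."""
    if x >= 0:
        # log1p handles the lam == 0 limit of ((x+1)**lam - 1) / lam
        return math.log1p(x) if lam == 0 else ((x + 1) ** lam - 1) / lam
    # negative branch mirrors Box-Cox with exponent 2 - lam
    return -math.log1p(-x) if lam == 2 else -(((-x + 1) ** (2 - lam) - 1) / (2 - lam))

# lambda = 1 leaves values unchanged; other lambdas compress or stretch the tails
print(yeo_johnson(3.0, 1.0), yeo_johnson(-3.0, 1.0))
```

Unlike Box-Cox, both branches are defined on the whole real line, which is what makes it suitable for the signed errors in `e_u`.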
``` pt = PowerTransformer() yeo= pd.DataFrame(pt.fit_transform(e_u),columns=e_u.columns) yeo.hist() # transform back into coordinate system pt.inverse_transform(yeo) ``` The data looks a little better, but we can always experiment with additional techniques ``` def power_transformation(e_u): ''' Definition: Function to perform a power transformation on the coordinate transformed differenced data Parameters: e_u: Dataframe of coordinated transformed differenced data Required: import pandas as pd from sklearn.preprocessing import PowerTransformer Returns: Transformed dataframe and transformation object Example: transformed_df, pt = power_transformation(e_u) ''' pt = PowerTransformer() yeo= pd.DataFrame(pt.fit_transform(e_u),columns=e_u.columns) return yeo, pt e_u,pt = power_transformation(e_u) ``` ## Create model ``` # split data between train and test (in production deployment, can remove) split_point = int(len(e_u) * .8) train = e_u.iloc[0:split_point] test = e_u.iloc[split_point:] states_train = states.iloc[0:split_point] states_test = states.iloc[split_point:] ``` <!-- Potential alternative transformations are as follows: * sin * log of the Yeo-Johnson Both of which provide a better fit than the Yeo-Johnson (as seen below). For the rest of this notebook, we will implement the model training, forecasting, and evaluation process which will allow us to iterate over different transformations until we find one that fits our use case the best. 
--> <!-- ### Autoregressive lag selection --> ``` aic = [] for i in range(1,25): model = VAR(train) results = model.fit(i,ic='aic') aic.append(results.aic) plt.figure(figsize=(10, 8)) plt.plot(aic, 'r+') plt.legend(['AIC']) plt.xlabel('Autocorrelation Lag') plt.ylabel('AIC') plt.title('Plot of sweeps over lag depths over AIC Loss functions') plt.show() # aic = [] # for i in range(1,16): # model = VARMAX(endog=train.values,exog=states_train['RedemptionPriceError'].values,initialization='approximate_diffuse') # results = model.fit(order=(i,0)) # aic.append(results.aic) # plt.figure(figsize=(10, 8)) # plt.plot(aic, 'r+') # plt.legend(['AIC']) # plt.xlabel('Autocorrelation Lag') # plt.ylabel('AIC') # plt.title('Plot of sweeps over lag depths over AIC Loss functions') # plt.show() ``` Given a set of candidate models for the data, **the preferred model is the one with the minimum AIC value; the sign of the value does not matter**. AIC optimizes for goodness of fit but also includes a penalty for each additional parameter, which discourages overfitting. In our case, it appears that a lag of ***15*** is optimal. For a VARMAX model, which we have decided to use, an order of 1 is selected. To determine which model performs better overall for predictions, given the computational constraint that VARMAX is too slow to be retrained at each timestep, a [validation notebook](VAR_vs_VARMAX_evaluation.ipynb) was created to compare a VAR retrained every timestep against a VARMAX retrained every 20 predictions. The result over 20 predictions was that VAR performed best for alpha, gamma, and delta, but VARMAX performed better for beta, and by a larger margin. 
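Picking the lag by minimum AIC, as the sweep above does, is just an argmin over the fitted models' scores; a dependency-free sketch with toy AIC values (`best_lag` is our illustrative helper, and these are not the values from the plot):

```python
def best_lag(aic_by_lag):
    """Return the lag (1-based, matching the sweep above) with the smallest AIC."""
    best = min(range(len(aic_by_lag)), key=lambda i: aic_by_lag[i])
    return best + 1  # list index 0 corresponds to lag 1

# toy AIC values for lags 1..5; the minimum (most negative) wins
print(best_lag([-10.1, -10.4, -11.0, -10.9, -10.2]))  # 3
```

Because the comparison is a plain minimum, negative AIC values need no special handling: -11.0 beats -10.4 just as 2.0 would beat 3.0.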
``` def VARMAX_prediction(e_u,RedemptionPriceError,newRedemptionPriceError,steps=1,lag=1): ''' Description: Function to train and forecast a VARMAX model one step into the future Parameters: e_u: errors pandas dataframe RedemptionPriceErrorPrevious: 1d Numpy array of RedemptionPriceError values newRedemptionPriceError: exogenous latest redemption price error signal - float steps: Number of forecast steps. Default is 1 lag: number of autoregressive lags. Default is 1 Returns: Numpy array of transformed state changes Example Y_pred = VARMAX_prediction(train,states_train['RedemptionPriceError'], states_test['RedemptionPriceError'][0:5],steps=5,lag=1) ''' # instantiate the VARMAX model object from statsmodels model = VARMAX(endog=e_u.values,exog=RedemptionPriceError, initialization='approximate_diffuse',measurement_error=True) # fit model with determined lag values results = model.fit(order=(lag,0)) Y_pred = results.forecast(steps = steps, exog=newRedemptionPriceError) return Y_pred.values def VAR_prediction(e_u,lag=1): ''' Description: Function to train and forecast a VAR model one step into the future Parameters: e_u: errors pandas dataframe lag: number of autoregressive lags. 
Default is 1 Returns: Numpy array of transformed state changes Example VAR_prediction(e_u,6) ''' # instantiate the VAR model object from statsmodels model = VAR(e_u.values) # fit model with determined lag values results = model.fit(lag) lag_order = results.k_ar Y_pred = results.forecast(e_u.values[-lag_order:],1) return Y_pred[0] Y_pred = VAR_prediction(e_u,15) Y_pred def invert_power_transformation(pt,prediction): ''' Definition: Function to invert power transformation Parameters: pt: transformation object prediction: Numpy array of model state coordinate transformed percentage changes Required: import pandas as pd from sklearn.preprocessing import PowerTransformer Returns: inverted transformation numpy array Example: inverted_array = invert_power_transformation(pt,prediction) ''' # transform back into coordinate system inverted = pt.inverse_transform(prediction.reshape(1,-1)) return inverted Y_pred = invert_power_transformation(pt,Y_pred) Y_pred ``` # New states ## Inverse Transformations 1. $d^* = \alpha * \bar{D}$ 2. $q^* = C_0 * \bar{D} * \alpha + C_1 * \beta$. 3. $r^* = \gamma * {R_{RAI}}$ 4. 
$z^* = \delta * {R_{ETH}}$ ``` Y_pred[0][0]*params['debt_ceiling'] def inverse_transformation_and_state_update(Y_pred,previous_state,params): ''' Description: Function to take the system identification model prediction, invert the transform, and create the new state Parameters: y_pred: numpy array of transformed state changes previous_state: pandas dataframe of previous state or 'current' state params: dictionary of system parameters Returns: pandas dataframe of new states Example: inverse_transformation_and_state_update(Y_pred,previous_state,params) ''' d_star = Y_pred[0] * params['debt_ceiling'] q_star = previous_state['C_o'] * params['debt_ceiling'] * Y_pred[0] + previous_state['C_1'] * Y_pred[1] r_star = Y_pred[2] * previous_state['gamma'] * previous_state['RaiInUniswap'] z_star = Y_pred[3] * previous_state['delta'] * previous_state['EthInUniswap'] new_state = pd.DataFrame(previous_state[['collateral','EthInUniswap','RaiInUniswap','RaiDrawnFromSAFEs']].to_dict(),index=[0]) new_state['Q'] = new_state['collateral'] + q_star new_state['D'] = new_state['RaiDrawnFromSAFEs'] + d_star new_state['R_Rai'] = new_state['RaiInUniswap'] + r_star new_state['R_Eth'] = new_state['EthInUniswap'] + z_star return new_state[['Q','D','R_Rai','R_Eth']] previous_state = states.iloc[train.index[-1]] print('Previous state:') print(previous_state[['collateral','RaiDrawnFromSAFEs','RaiInUniswap','EthInUniswap']].to_dict()) print('\n New state:') inverse_transformation_and_state_update(Y_pred[0],previous_state,params) ``` ## Conclusion In this notebook, we have iterated through several different models and decided on a VAR(15) model for use in the Rai Digital Twin.