# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
## While Loop
# -
i = 0
while i < 5:
    print(i)
    i += 1
# +
i = 5
while True:
    print(i)
    if i >= 5:
        break
    print("something")
# +
min_length = 2
name = input("Please enter your name: ")
while not (len(name) >= min_length and name.isprintable() and name.isalpha()):
    name = input("Please enter your name: ")
print("Hello, {0}".format(name))
# -
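# A related pattern worth knowing (an added sketch, not part of the original examples): a `while` loop can carry an `else` clause, which runs only when the loop ends without hitting `break`.

```python
i = 0
while i < 3:
    print(i)
    i += 1
else:
    # runs because the loop ended normally (no break was executed)
    print("loop finished without break")
```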
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Covid-19: From model prediction to model predictive control
#
# ## A demo of the stochastic modeling framework
#
# *Original code by <NAME>. Modified by <NAME> in consultation with the BIOMATH research unit headed by prof. <NAME>.*
#
# Copyright (c) 2020 by <NAME>, BIOMATH, Ghent University. All Rights Reserved.
#
# Our code implements a SEIRS infectious disease dynamics model with extensions to model the effect of quarantining detected cases. Using the concept of 'classes' in Python 3, the code was integrated with our previous work and allows us to quickly perform Monte Carlo simulations, calibrate model parameters and calculate *optimal* government policies using a model predictive controller (MPC). A white paper and the source code of our previous work can be found on the Biomath website.
#
# https://biomath.ugent.be/covid-19-outbreak-modelling-and-control
import numpy as np
import matplotlib.pyplot as plt
from IPython.display import Image
from ipywidgets import interact,fixed,FloatSlider,IntSlider,ToggleButtons
import pandas as pd
import datetime
import scipy
from scipy.integrate import odeint
import matplotlib.dates as mdates
import matplotlib
import scipy.stats as st
import networkx
import models
# #### General
#
# The SEIR model was first proposed in 1929 by two Scottish scientists. It is a compartmental model that subdivides the human population into four groups: 1) healthy individuals susceptible to the infectious disease, 2) exposed individuals in a latent phase (partially the incubation period), 3) infectious individuals able to transmit the disease and 4) individuals removed from the population either through immunisation or death. Despite being a simplified and idealised representation of reality, the SEIR model is used extensively to predict outbreaks of infectious diseases, and this was no different during the outbreak in China earlier this year. In this work, we extended the SEIR model to incorporate more expert knowledge on SARS-Cov-2. The infectious pool is split into four parts. The first is a period of pre-symptomatic infectiousness; several studies have shown that pre-symptomatic transmission is a dominant transmission mechanism of SARS-Cov-2. After the period of pre-symptomatic transmission, three possible infectious outcomes are modelled: 1) an asymptomatic outcome, for patients who show no symptoms at all, 2) a mild outcome, for patients with mild symptoms who recover at home, and 3) a mild infection that deteriorates to the point where hospitalisation is needed. The pool of *recovered* individuals from the classical SEIR model is split into a recovered pool and a dead pool. People from the susceptible, exposed, pre-symptomatic infectious, asymptomatic infectious, mild infectious and recovered pools can be quarantined after having tested positive for Covid-19. Note that for individuals in the susceptible and recovered pools, this corresponds to a *false positive* test. The dynamics of our extended SEIR model are presented in the flowchart below.
#
# <img src="../figs/flowchartAll.jpg" alt="drawing" width="700"/>
#
# We make the following assumptions with regard to the SEIRS dynamics,
#
# 1. There is no connection between the severity of the disease and the infectiousness of an individual. Only the duration of infectiousness can differ.
# 2. All patients experience a brief pre-symptomatic, infectious period.
# 3. All deaths come from intensive care units in hospitals, meaning no patients die outside a hospital. Of the 7703 deceased (01/05/2020), 46\% died in a hospital while 53\% died in an elderly home. All hospital deaths are confirmed Covid-19 cases, while only 16\% of elderly home deaths were confirmed. When taking the elderly homes out of the model scope, the assumption that deaths only arise in hospitals holds, as only 0.3\% died at home and 0.4\% died someplace else. Asymptomatic and mild cases automatically lead to recovery and in no case to death (https://www.info-coronavirus.be/nl/news/trends-laatste-dagen-zetten-zich-door/).
# 4. We implement no testing and quarantining in the hospital. Hospitalised persons are assumed to be incapable of infecting susceptibles, so the implementation of a quarantine would not change the dynamics but slow down calculations.
# 5. Recovered patients are assumed to be immune; seasonality is deemed out of scope of this work.
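# As a point of reference for the compartmental structure described above, the classical (unextended) SEIR core can be written down in a few lines. The sketch below is only an illustration using scipy's odeint: the latent period matches the $\sigma = 4$ days used later, but the transmission rate and infectious period are illustrative assumptions, and none of the extensions (pre-symptomatic pool, hospital subsystem, quarantining) are included.

```python
import numpy as np
from scipy.integrate import odeint

def seir(y, t, beta, sigma, gamma, N):
    # Classical SEIR: susceptible -> exposed -> infectious -> removed
    S, E, I, R = y
    dS = -beta * S * I / N
    dE = beta * S * I / N - E / sigma
    dI = E / sigma - I / gamma
    dR = I / gamma
    return [dS, dE, dI, dR]

N = 11.43e6                   # total population (Belgium), as used below
y0 = [N - 10, 10, 0, 0]       # start with 10 exposed individuals
t = np.linspace(0, 100, 101)  # simulate 100 days
# beta (transmission rate) and gamma (infectious period) are assumed values
sol = odeint(seir, y0, t, args=(0.4, 4.0, 8.0, N))
```

Because every outflow of one pool is an inflow of another, the four compartments always sum to the total population N.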
#
# #### Hospital subsystem (preliminary)
#
# The hospital subsystem is a simplification of actual hospital dynamics. The dynamics and estimated parameters were obtained by interviewing Ghent University Hospital staff and presenting the resulting meeting notes to the remaining three Ghent hospitals for verification.
#
# At the time of writing (30/04/2020), every admitted patient is tested for Covid-19. Roughly 10% of all Covid-19 patients at UZ Ghent originally came to the hospital for some other medical condition. The remaining 90% of all Covid-19 patients arrive in the emergency room or come from hospitals in heavily struck regions. Patients admitted to the hospital with a Covid-19 infection are reported to the authorities as 'new hospitalisations'. There are three hospital wards for Covid-19 patients: 1) Cohort, which should be seen as a regular hospital ward with Covid-19 patients; patients are not monitored permanently in this ward. 2) Midcare, a ward where more severe cases are monitored more closely than in Cohort; Midcare is more closely related to ICU than to Cohort and is usually lumped with the number of ICU patients when reporting to the officials. 3) Intensive care, for patients with the most severe symptoms; intensive care needs can include the use of a ventilator to supply the patient with oxygen. It was noted that the fraction of Cohort vs. Midcare and ICU is roughly 50-50%.
#
# <img src="../figs/hospitalRealLife.jpg" alt="drawing" width="400"/>
#
# Generally, patients can switch between any of the wards depending on how the disease progresses. However, some dominant *flows* exist. Usually, it is apparent upon a patient's arrival to which ward he or she will be assigned. On average, patients who don't deteriorate stay in Cohort for 6 days, with values spread between 3 and 8 days. The average ICU stay is 14 days when a patient doesn't need ventilation; if the patient needs ventilation, the stay is slightly longer. After being in ICU, patients return to Cohort for another 6 to 7 days of recovery. Based on these dominant *flows*, the hospital subsystem was simplified by making the following assumptions,
# 1. Assume people arriving at the hospital are instantly distributed between Cohort, Midcare or ICU.
# 2. Merge ventilator and non-ventilator ICU.
# 3. Assume deaths can only arise in ICU.
# 4. Assume all patients in midcare and ICU pass via Cohort on their way to recovery.
# 5. Assume that the 10% of the patients that come from hospital actually come from the population.
# ### Deterministic vs. Stochastic framework
# The extended SEIR model is implemented using two frameworks: a deterministic and a stochastic (network-based) framework. **This Jupyter Notebook is a demo of the deterministic model;** a demo of the stochastic network simulator is available in *SEIRSNetworkModel_Demo*. A deterministic implementation of the extended SEIRS model captures important features of infectious disease dynamics, but it assumes uniform mixing of the population (i.e. every individual in the population is equally likely to interact with every other individual). The deterministic approach results in a set of N ordinary differential equations, one for each of the N 'population pools' considered. The main advantage of a deterministic model is that it requires few computational resources while still maintaining acceptable accuracy. The deterministic framework allows us to rapidly explore scenarios and perform optimisations which require thousands of function evaluations.
#
# However, it is often important to consider the structure of contact networks when studying disease transmission and the effect of interventions such as social distancing and contact tracing. The main drawback of the deterministic approach is the inability to simulate contact tracing, which is one of the most promising measures against the spread of SARS-Cov-2. For this reason, the SEIRS dynamics depicted in the above flowchart can be simulated on a Barabasi-Albert network. Its advantages include a more detailed analysis of the relationship between social network structure and effective transmission rates, including the effect of network-based interventions such as social distancing, quarantining, and contact tracing. The added value comes at a high price in terms of computational resources: it is not possible to perform optimisations of parameters in the stochastic network model on a personal computer. Instead, high-performance computing infrastructure is needed. The second drawback is the need for more data and/or assumptions on social interactions and how government measures affect these social interactions.
# ### Model parameters
# In the above equations, S stands for susceptible, E for exposed, A for asymptomatic, M for mild, H for hospitalised, C for cohort, Mi for midcare, ICU for intensive care unit, D for dead and R for recovered. The quarantined states are denoted with a Q suffix; for instance, AQ stands for asymptomatic and quarantined. The states S, E, A, M and R can be quarantined. The disease dynamics when quarantined are identical to the non-quarantined dynamics. For instance, EQ will evolve into AQ or MQ with the same probability as E evolves into A or M. Individuals from the MQ pool can end up in the hospital. N stands for the total population. The clinical parameters are: a, m: the chance of having an asymptomatic or mild infection; h: the fraction of the mildly infected who require hospitalisation; c: the fraction of the hospitalised who remain in Cohort; mi: the fraction of the hospitalised who end up in midcare. Based on reported cases in China and travel data, Li et al. (2020b) estimated that 86% of coronavirus infections in the country were "undocumented" in the weeks before officials instituted stringent quarantines. This figure includes the asymptomatic cases and an unknown number of mildly symptomatic cases and is thus an overestimation of the asymptomatic fraction. In Iceland, citizens were invited for testing regardless of symptoms. Of all people with positive test results, 43% were asymptomatic (Gudbjartsson et al., 2020). The actual number of asymptomatic infections might be even higher since it seemed that symptomatic persons were more likely to respond to the invitation (Sciensano, 2020). In this work it is assumed that 43% of all infected cases are asymptomatic. This figure can later be corrected in light of large-scale immunity testing in the Belgian population. Hence,
#
# $$ a = 0.43 .$$
#
# Wu and McGoogan (2020) estimated that the distribution between mild, severe and critical cases is 81%, 15% and 4% respectively. As a rule of thumb, one can assume that one third of all hospitalised patients end up in an ICU. Based on interviews with Ghent University hospital staff, midcare is merged with ICU in the official numbers. For now, it is assumed that the distribution between midcare and ICU is 50-50%. The sum of both pools is one third of the hospitalisations. Since the average time a patient spends in midcare is equal to that in ICU, this corresponds to treating midcare and ICU together as 'ICU'.
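# The ward fractions used when the model is initialised below follow directly from these assumptions: one third of hospitalised patients need midcare or ICU, split 50-50, leaving two thirds in Cohort. A quick sanity check:

```python
icu_and_midcare = 1 / 3        # one third of hospitalised patients need midcare or ICU
c = 1 - icu_and_midcare        # fraction remaining in Cohort
mi = icu_and_midcare / 2       # 50-50 split between midcare and ICU
icu = icu_and_midcare / 2
print(c, mi, icu)              # c = 2/3, mi = 1/6, icu = 1/6
```

These are exactly the values c = 2/3 and mi = 1/6 passed to the model constructor later in this notebook.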
#
# $\sigma$: length of the latent period. Assumed four days based on a modeling study by Davies et al. (2020) .
#
# $\omega$: length of the pre-symptomatic infectious period, assumed 1.5 days (Davies et al. 2020). The sum of $\omega$ and $\sigma$ is the total incubation period, and is equal to 5.5 days. Several estimates of the incubation period have been published and range from 3.6 to 6.4 days, with the majority of estimates around 5 days (Park et. al 2020).
#
# $d_{a}$, $d_{m}$, $d_{h}$: the duration of infection in case of an asymptomatic or mild infection. Assumed to be 6.5 days. Together with the length of the pre-symptomatic infectious period, this amounts to a total of 8 days of infectiousness.
#
# $d_{c}$ , $d_{\text{mi}}$ , $d_{\text{ICU}}$: average length of a Cohort, Midcare and ICU stay. Equal to one week, two weeks and two weeks respectively.
#
# $d_{\text{mi,recovery}}$ , $d_{\text{ICU,recovery}}$: lengths of recovery stays in Cohort after being in Midcare or IC. Equal to one week.
#
# Zhou et al. (2020) performed a retrospective study on 191 Chinese hospital patients and determined that the time from illness onset to discharge or death was 22.0 days (18.0-25.0, IQR) and 18.5 days (15.0-22.0, IQR) for survivors and victims respectively. Using available preliminary data, the World Health Organisation estimated the median time from onset to clinical recovery for mild cases to be approximately 2 weeks and to be 3-6 weeks for patients with severe or critical disease (WHO, 2020). Based on this report, we assume a recovery time of three weeks for heavy infections.
#
# $d_{hospital}$ : the time before heavily or critically infected patients reach the hospital. Assumed 5-9 days (Linton et al. 2020). Still waiting on hospital input here.
#
# $m_0$ : the mortality in ICU, which is roughly 50\% (Wu and McGoogan, 2020).
#
# $\zeta$: can be used to model the effect of re-susceptibility and seasonality of a disease. Throughout this demo, we assume $\zeta = 0$ because data on seasonality is not yet available at the moment. We thus assume permanent immunity after recovering from the infection.
# ## Performing simulations
#
# ### Without age-structuring
# #### The 'SEIRSNetworkModel' class
#
# The *SEIRSNetworkModel* class uses the exact same class structure and function names as the *SEIRSAgeModel* deterministic class, which is detailed below.
#
# <img src="../figs/SEIRSNetworkModel.jpg"
# alt="class"
# height="600" width="700"
# style="float: left; margin-right: 500px;" />
#
# As of now (20/04/2020), the SEIRSNetworkModel contains 5 functions which can be grouped in two parts: 1) functions to run and visualise simulations and 2) functions to perform parameter estimations and visualise the results. Implementing the model predictive controller is straightforward and can easily be done. However, the optimisation problem is very hard and requires thousands of function evaluations. Given the large amount of computational resources required to run just one stochastic simulation, it is highly unlikely that the model predictive controller will ever be used to optimise government policy. The MPC functions will be implemented and their usefulness assessed after a calibration is performed. Also, scenario-specific functions will be added over the course of next week.
#
# #### Creating a SEIRSNetworkModel object
#
# Before a stochastic simulation can be performed, the interaction network G, which determines the connectivity of the network model, must be defined. In the example below, an interaction network under normal circumstances and an interaction network under distancing measures is initiated. Switching between networks is possible during a simulation.
# +
# Construct the network G
numNodes = 60000
baseGraph = networkx.barabasi_albert_graph(n=numNodes, m=3)
# Baseline normal interactions:
G_norm = models.custom_exponential_graph(baseGraph, scale=500)
models.plot_degree_distn(G_norm, max_degree=40)
# Construct the network G under social distancing
numNodes = 60000
baseGraph = networkx.barabasi_albert_graph(n=numNodes, m=1)
# Baseline normal interactions:
G_dist = models.custom_exponential_graph(baseGraph, scale=200000)
models.plot_degree_distn(G_dist, max_degree=40)
# -
model = models.SEIRSNetworkModel(
# network connectivity
G = G_norm,
p = 0.51,
# clinical parameters
beta = 0.03,
sigma = 4.0,
omega = 1.5,
zeta = 0,
a = 0.43, # probability of an asymptotic (supermild) infection
m = 1-0.43, # probability of a mild infection
h = 0.20, # probability of hospitalisation for a mild infection
c = 2/3, # probability of hospitalisation in cohort
mi = 1/6, # probability of hospitalisation in midcare
da = 6.5, # days of infection when asymptomatic (supermild)
dm = 6.5, # days of infection when mild
dc = 7,
dmi = 14,
dICU = 14,
dICUrec = 6,
dmirec = 6,
dhospital = 5, # days before reaching the hospital when heavy or critical
m0 = 0.49, # mortality in ICU
maxICU = 2000,
# testing
theta_S = 0,
theta_E = 0,
theta_A = 0,
theta_M = 0,
theta_R = 0,
psi_FP = 0,
psi_PP = 1,
dq = 14,
# back-tracking
phi_S = 0,
phi_E = 0,
phi_I = 0,
phi_A = 0,
phi_R = 0,
# initial condition
initN = 11.43e6, #results are extrapolated to entire population
initE = 10,
initI = 0,
initA = 0,
initM = 0,
initC = 0,
initCmirec=0,
initCicurec=0,
initR = 0,
initD = 0,
initSQ = 0,
initEQ = 0,
initIQ = 0,
initAQ = 0,
initMQ = 0,
initRQ = 0,
# monte-carlo sampling
monteCarlo = False,
repeats = 20
)
# #### Extract Sciensano data
[index,data] = model.obtainData()
ICUvect = np.transpose(data[0])
hospital = np.transpose(data[1])
print(ICUvect.shape)
# #### Altering an object variable after intialisation
#
# After initialising our 'model' it is still possible to change variables using the following syntax.
model.beta = 0.40
# #### Running your first simulation
#
# A simulation is run by using the attribute function *sim*, which takes a single argument, the simulation time T, as its input.
y = model.sim(50)
# For advanced users: the numerical results of the simulation can be accessed directly by calling *object.X* or *object.sumX*, where X is the name of the desired population pool. Both are numpy arrays. *Object.X* is a 3D array with the following dimensions:
# - x-dimension: number of age categories,
# - y-dimension: tN: total number of timesteps taken (one per day),
# - z-dimension: n_samples: total number of monte-carlo simulations performed.
#
# *Object.sumX* is a 2D array containing only the results summed over all age categories and has the following dimensions,
# - x-dimension: tN: total number of timesteps taken (one per day),
# - y-dimension: n_samples: total number of monte-carlo simulations performed.
#
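# The relationship between *object.X* and *object.sumX* can be illustrated with a small stand-in array (the shapes below, 3 age categories, 50 timesteps and 20 Monte Carlo samples, are hypothetical): summing over the age axis reduces the 3D array to the 2D layout described above.

```python
import numpy as np

n_ages, tN, n_samples = 3, 50, 20
X = np.random.rand(n_ages, tN, n_samples)  # stand-in for e.g. object.M
sumX = X.sum(axis=0)                       # sum over all age categories
print(sumX.shape)                          # (50, 20), i.e. (tN, n_samples)
```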
# #### Visualising the results
#
# To quickly visualise simulation results, two attribute functions were created. The first function, *plotPopulationStatus*, visualises the number of susceptible, exposed, infected and recovered individuals in the population. The second function, *plotInfected*, by default visualises the number of heavily and critically infected individuals. Both functions require no user input to work but both have some optional arguments,
#
# > plotPopulationStatus(filename),
# > - filename: string with a filename + extension to save the figure. The figure is not saved per default.
#
# > plotInfected(asymptotic, mild, filename),
# > - asymptotic: set to *True* to include the supermild pool in the visualisation.
# > - mild: set to *True* to include the mild pool in the visualisation.
# > - filename: string with a filename + extension to save the figure. The figure is not saved per default.
model.plotPopulationStatus()
model.plotInfected()
# #### The use of checkpoints to change parameters on the fly
#
# A cool feature of the original SEIRSplus package by <NAME> was the use of a so-called *checkpoints* dictionary to change simulation parameters on the fly. In our modification, this feature is preserved. Below you can find an example of a *checkpoints* dictionary. The simulation is started with the previously initialised parameters. After 60 days, social interaction is limited by switching to the social distancing network *G_dist* and the chance of random encounters is lowered to 3%. *checkpoints* is the only optional argument of the *sim* function and is set to *None* per default.
# Create checkpoints dictionary
chk = {'t': [60],
'G': [G_dist],
'p': [0.03],
}
# Run simulation
y = model.sim(80,checkpoints=chk)
# Visualise
model.plotPopulationStatus()
model.plotInfected()
# ## Calibrating $\beta$ in a *business-as-usual* scenario ($N_c = 11.2$)
#
# ### Performing a least-squares fit
#
# The 'SEIRSNetworkModel' class contains a function to fit the model to selected data (*fit*) and one function to visualise the result (*plotFit*). Our code uses the **genetic algorithm** from scipy to perform the optimisation. The *fit* function has the following basic syntax,
#
# > fit(data, parNames, positions, bounds, weights)
# > - data: a list containing the dataseries (dtype=np array) to fit the model to. For now, dataseries must be of equal length.
# > - parNames: a list containing the names (dtype=string) of the model parameters to be fitted.
# > - positions: a list containing the model states (dtype=np.array) used to calculate the sum of least squares. Each dataseries must be matched to a certain (sum of) model state(s). If multiple entries are provided, these are added together. The order of the states is given according to the following vector, where S has index 0: (S, E, A, M, C, Mi, ICU, R, F, SQ, EQ, AQ, MQ, RQ).
#
#
# The following arguments are optional,
# > - checkpoints: checkpoint dictionary can be used to calibrate under specific scenarios such as policy changes (default: None).
# > - setvar: True to replace fitted values in model object after fit is performed (default: False).
# > - disp: Show sum-of-least-squares after each optimisation iteration (default: True).
# > - polish: True to use a Nelder–Mead simplex to polish the final result (default: True).
# > - maxiter: Maximum number of iterations (default: 30).
# > - popsize: Population size of genetic algorithm (default: 10).
#
# The genetic algorithm will by default use all cores available for the optimisation. Using the *fit* attribute, it is possible to calibrate any number of model parameters to any sets of data. We do note that fitting the parameters sm, m, h and c requires modification of the source code. In the example below, the transmission parameter $\beta$ is sought after using two dataseries. The first is the number of patients in need of intensive care and the second is the total number of people in the hospital. The number of patients in ICU is matched with the CH population pool while the number of hospitalisations is matched with the sum of the HH and CH population pools.
# vector with dates
index=pd.date_range('2020-03-15', freq='D', periods=ICUvect.size)
# data series used to calibrate model must be given to function 'plotFit' as a list
idx = -42
index = index[0:idx]
data=[np.transpose(ICUvect[:,0:idx]),np.transpose(hospital[:,0:idx])]
# set optimisation settings
parNames = ['beta','p'] # must be a list!
positions = [np.array([6]),np.array([4,5,6])] # must be a list!
bounds=[(1,100),(0.1,1),(0.1,1)] # must be a list!
weights = np.array([1,0])
# run optimisation
theta = model.fit(data,parNames,positions,bounds,weights,setvar=True,maxiter=1,popsize=1)
# ### Visualising the fit
#
# Visualising the resulting fit is easy and can be done using the *plotFit* function. The function uses the following basic syntax,
#
# > plotFit(index,data,positions)
# > - index: vector with timestamps corresponding to data.
# > - data: list containing the dataseries (dtype=np array) to fit the model to. For now, dataseries must be of equal length.
# > - positions: list containing the model states (dtype=np.array) used to calculate the sum of least squares.
#
# The following arguments are optional,
# > - dataMkr: list containing the markers (dtype=str) to be used to visualise the data. Default value works up to five dataseries and is equal to: ['o','v','s','*','^'].
# > - modelClr: list containing the colors (dtype=str) to be used to visualise the model fit. Default value works up to five dataseries and is equal to: ['green','orange','red','black','blue'].
# > - legendText: tuple containing the legend entries. Disabled per default.
# > - titleText: string containing the fit title. Disabled per default.
# > - filename: string with a filename + extension to save the figure. The figure is not saved per default.
# plot result
model.plotFit(index,data,positions,modelClr=['red','orange'],legendText=('ICU (model)','ICU (data)','Hospital (model)','Hospital (data)'),titleText='Belgium')
# # Code works until here
#
# To be continued...
# ## Model Predictive control (MPC)
#
# ### Optimising government policy
#
# #### Process control for the layman
#
# As we have the impression that the control part, which we see as our main addition to the problem, is more difficult to grasp for the layman, here is a short intro to process control. Experts in control are welcome to skip this section.
#
# A predictive model consists of a set of equations and aims to predict how the system will behave in the future given a certain input. Process control flips this around and aims at determining what input is needed to achieve a desired system behavior (= goal). It is a tool that helps us in “controlling” how we want a system to behave. It is commonly applied in many industries, but also in our homes (e.g. central heating, washing machine). It's basically everywhere. Here's how it works. An algorithm monitors the deviation between the goal and the true system value and then computes the necessary action to "drive" the system to its goal by means of an actuator (in industry this is typically a pump or a valve). Applying this to Covid-19, the government wants to "control" the spread of the virus in the population by imposing measures (necessary control actions) on the public (which is the actuator here) and achieve the goal that the number of severely sick people does not become larger than can be handled by the health care system. However, the way the population behaves is a lot more complex compared to the heating control in our homes since not only epidemiology (virus spread) but also different aspects of human behavior on both the individual and the societal level (sociology, psychology, economy) are involved. This leads to multiple criteria we want to ideally control simultaneously and we want to use the "smartest" algorithm we can get our hands on.
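# The feedback idea described above can be made concrete with a deliberately simple, hypothetical example: a proportional controller that measures the deviation between a setpoint and the state of a toy first-order system and computes a corrective action every step. (Real MPC, as used below, instead optimises a whole sequence of future actions against a prediction model.)

```python
# Toy first-order system: x[k+1] = 0.9*x[k] + u[k]  (all values hypothetical)
setpoint = 100.0
x, Kp = 0.0, 0.5               # initial state, proportional gain
for _ in range(50):
    error = setpoint - x       # deviation between goal and measured value
    u = Kp * error             # control action proportional to the error
    x = 0.9 * x + u            # system responds to the action
print(round(x, 1))             # -> 83.3: converged, with the steady-state
                               #    offset typical of pure P-control
```

Note how the simple controller settles short of the setpoint; more elaborate algorithms (integral action, or the predictive optimisation used in this work) are designed to remove exactly this kind of shortfall.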
#
# #### The optimizePolicy function
#
# The 'SEIRSNetworkModel' class contains an implementation of the MPC controller in the function *optimizePolicy*. For now, the controller minimises a weighted squared sum-of-errors between multiple setpoints and model predictions. The algorithm can use any variable to control the virus outbreak, but we recommend sticking with the number of random daily contacts $N_c$ and the total number of random tests ('totalTests'), as only these have been tested. We also recommend disabling age-structuring in the model before running the MPC, as this feature requires a discretisation of the interaction matrix which is not yet implemented. Future work will extend the MPC controller to work with the age-structuring feature inherent to the model. Future work is also aimed at including an economic cost function to discriminate between control handles. Our MPC uses the **genetic algorithm** from scipy.optimize to perform the optimisation; we recommend using a population size of at least 20 and at least 100 iterations to ensure that the trajectory is 'optimal'. The *optimizePolicy* function has the following basic syntax,
#
# > optimizePolicy(parNames, bounds, setpoints, positions, weights)
# > - parNames: a list containing the names (dtype=string) of the model parameters to be used as a control handle.
# > - bounds: A list containing the lower- and upper boundaries of each parameter to be used as a control handle. Each entry in the list should be a 1D numpy array containing the lower- and upper bound for the respective control handle.
# > - setpoints: A list with the numerical values of the desired model output.
# > - positions: a list containing the model states (dtype=np.array) used to calculate the sum of least squares. Each model output in the given position is matched with a provided setpoint. If multiple position entries are provided, the output in these positions is added together. The order of the states is given according to the following vector, where S has index 0: (S, E, A, M, C, Mi, ICU, R, F, SQ, EQ, AQ, MQ, RQ).
#
#
# The following arguments are optional,
# > - policy_period: length of one policy interval (default: 7 days).
# > - N: number of future policy intervals to be optimised, also called 'control horizon' (default: 6).
# > - P: number of policy intervals over which the sum of squared errors is calculated, also called 'prediction horizon' (default:12).
# > - disp: Show sum-of-least-squares after each optimisation iteration (default: True).
# > - polish: True to use a Nelder–Mead simplex to polish the final result (default: True).
# > - maxiter: Maximum number of iterations (default: 100).
# > - popsize: Population size of genetic algorithm (default: 20).
#
# The function returns a one-dimensional list containing the optimal values of the control handles. The length of this list is equal to the length of the control horizon (N) times the number of control handles. The list thus lists all control handles and their optimal values in their respective order. **The optimal policy is assigned to the SEIRSNetworkModel object and is only overwritten when a new optimisation is performed. Future work could include the creation of a new object for every optimal policy.** The genetic algorithm will by default use all cores available for the optimisation.
parNames = ['Nc','totalTests']
bounds = [np.array([0,11.2]),np.array([0,1e6])]
setpoints = [1200,5000]
positions = [np.array([7]),np.array([6,7])]
weights = [1,0]
model.optimizePolicy(parNames,bounds,setpoints,positions,weights,policy_period=30,N=6,P=12,polish=False,maxiter=1,popsize=10)
# ### Visualising the effect of government policy
# Visualising the resulting optimal policy is easy and can be done using the *plotOptimalPolicy* function. We note that the functionality of *plotOptimalPolicy* is, for now, very basic and will be extended in the future. The function is heavily based on the *plotInfected* visualisation. The function uses the following basic syntax,
#
# > plotOptimalPolicy(parNames,setpoints,policy_period)
# > - parNames: a list containing the names (dtype=string) of the model parameters to be used as a control handle.
# > - setpoints: A list with the numerical values of the desired model output.
# > - policy_period: length of one policy interval (default: 7 days).
#
# The following arguments are optional,
# > - asymptotic: set to *True* to include the supermild pool in the visualisation.
# > - mild: set to *True* to include the mild pool in the visualisation.
# > - filename: string with a filename + extension to save the figure. The figure is not saved per default.
model.plotOptimalPolicy(parNames,setpoints,policy_period=14)
# ## Scenario-specific extensions
#
# ### *realTimeScenario*
#
# The 'SEIRSNetworkModel' class contains one function to quickly perform and visualise scenario analyses for a given country. The user is required to supply the function with: 1) a set of dataseries, 2) the date at which the data starts, 3) the positions in the model output that correspond with the dataseries and 4) a checkpoints dictionary containing the past government actions, from here on referred to as the *pastPolicy* dictionary. If no additional arguments are provided, the data and the corresponding model fit are visualised from the user-supplied start date up until the end date of the data plus 14 days. The end date of the visualisation can be altered by defining the optional keyword argument *T_extra* (default: 14 days). Optionally, a dictionary of future policies can be used to simulate scenarios starting on the first day after the end date of the dataseries. The function *realTimeScenario* accomplishes this by merging both the *pastPolicy* and *futurePolicy* dictionaries using the backend function *mergeDict()*. The syntax without optional arguments is as follows,
#
# > realTimeScenario(startDate, data, positions, pastPolicy)
# > - startDate: a string with the date corresponding to the first entry of the dataseries (format: 'YYYY-MM-DD').
# > - data: a list containing the dataseries (dtype=np array) to fit the model to. For now, dataseries must be of equal length and start on the same day.
# > - positions: a list containing the model states (dtype=np.array) used to calculate the sum of least squares. Each dataseries must be matched to a certain (sum of) model state(s). If multiple entries are provided, these are added together. The order of the states is given according to the following vector, where S has index 0: (S, E, A, M, C, Mi, ICU, R, F, SQ, EQ, AQ, MQ, RQ).
# > - pastPolicy: a checkpoints dictionary containing past government actions.
#
# The following (simulation) arguments are optional,
# > - futurePolicy: a checkpoint dictionary used to simulate scenarios in the future (default: None). By default, time '1' in this dictionary is the date of the first day after the end of the data.
# > - T_extra: extra simulation time after the last date of the data if no futurePolicy dictionary is provided; otherwise, extra simulation time after the last time in the futurePolicy dictionary.
#
# The following arguments are for visualisation,
# > - dataMkr: list containing the markers (dtype=str) to be used to visualise the data. Default value works up to five dataseries and is equal to: ['o','v','s','*','^'].
# > - modelClr: list containing the colors (dtype=str) to be used to visualise the model fit. Default value works up to five dataseries and is equal to: ['green','orange','red','black','blue'].
# > - legendText: tuple containing the legend entries. Disabled by default.
# > - titleText: string containing the plot title. Disabled by default.
# > - filename: string with a filename + extension to save the figure. The figure is not saved by default.
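The merging of past and future policy dictionaries described above can be sketched as follows. This is a minimal illustration of what a helper like *mergeDict()* might do, not the actual backend implementation; the name `merge_checkpoints` and the argument `t_end` are hypothetical.

```python
# Hypothetical sketch of checkpoint merging (the real mergeDict() may differ):
# times in the future dictionary are interpreted relative to the first day
# after the end of the data, so time 1 becomes t_end + 1 in the merged result.
def merge_checkpoints(past, future, t_end):
    merged = {}
    for key in past:
        merged[key] = list(past[key])
        if key == 't':
            merged[key] += [t_end + t for t in future['t']]
        else:
            merged[key] += list(future[key])
    return merged

past = {'t': [11], 'p': [0.03]}
future = {'t': [1], 'p': [0.6]}
print(merge_checkpoints(past, future, t_end=40))
# {'t': [11, 41], 'p': [0.03, 0.6]}
```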
# Define data as a list containing data timeseries
data=[np.transpose(ICUvect),np.transpose(hospital)]
# Create a dictionary of past policies
pastPolicy = {'t': [11],
'G': [G_dist],
'p': [0.03]
}
# Create a dictionary of future policies
futurePolicy = {'t': [1],
'G': [G_norm],
'p': [0.6]
}
# Define the date corresponding to the first data entry
startDate='2020-03-13'
# Run realTimeScenario
model.realTimeScenario(startDate,data,positions,pastPolicy,futurePolicy=futurePolicy,T_extra=7,
modelClr=['red','orange'],legendText=('ICU (model)','Hospital (model)','ICU (data)','Hospital (data)'),
titleText='Belgium')
src/SEIRSNetworkModel_demo.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
mname = 'preresnet_h67'
seed = 723
fold = 0
gpu_id = 0
nfold = 2
# initialize weights from this model
mname0 = 'preresnet_g67'
# +
import socket
import timeit
import time
from datetime import datetime
import os
from os import listdir
from os.path import isfile, join
import glob
from collections import OrderedDict
import numpy as np
import pandas as pd
import pickle
import gc
import cv2
import matplotlib.pyplot as plt
plt.style.use('seaborn-white')
import seaborn as sns
sns.set_style("white")
import random
import PIL
import pathlib
import math
import torch
from torch.autograd import Variable
import torch.optim as optim
from torch.utils import data
from torch.utils.data import Dataset, DataLoader
from torchvision import transforms
from torchvision.utils import make_grid
from torch import nn
from torch.nn import functional as F
from torch.optim.lr_scheduler import LambdaLR, ReduceLROnPlateau, StepLR
from torch.utils.data.sampler import WeightedRandomSampler
import torchvision
import albumentations as A
from skimage.exposure import histogram, equalize_hist, equalize_adapthist
from skimage.morphology import binary_dilation
import pretrainedmodels
from xception import xception
from tensorboardX import SummaryWriter
from scipy.special import logit
from scipy.ndimage.filters import gaussian_filter
from sklearn.metrics import jaccard_similarity_score, f1_score
from sklearn.preprocessing import MultiLabelBinarizer
import imgaug as ia
from imgaug import augmenters as iaa
import multiprocessing
import threading
from dataloaders import utils
from dataloaders import custom_transforms as tr
# from losses import CombinedLoss, BCELoss2d
from losses import FocalLoss, ThreeWayLoss, L1_LossW, Smooth_L1_LossW
import lovasz_losses as L
# + _uuid="7114b9f3da03d4688ecfdecd7c7008a0be0c8004"
ori_size = 1024
up_size = 1024
image_size = 1024
final_size = 1024
interp = cv2.INTER_AREA
# methods=[("area", cv2.INTER_AREA),
# ("nearest", cv2.INTER_NEAREST),
# ("linear", cv2.INTER_LINEAR),
# ("cubic", cv2.INTER_CUBIC),
# ("lanczos4", cv2.INTER_LANCZOS4)]
y_pad = image_size - up_size
y_min_pad = int(y_pad / 2)
y_max_pad = y_pad - y_min_pad
x_pad = image_size - up_size
x_min_pad = int(x_pad / 2)
x_max_pad = x_pad - x_min_pad
print(ori_size, up_size, image_size, final_size)
# +
PATH = './'
PATH_TO_TRAIN = PATH + 'train_1024/'
PATH_TO_TEST = PATH + 'test_1024/'
PATH_TO_EXTERNAL = PATH + 'external_data/'
PATH_TO_EXTERNAL2 = './external_data2/'
PATH_TO_EXTERNAL3 = './external_data3/'
PATH_TO_TARGET = PATH + 'train.csv'
PATH_TO_TARGETX = PATH + 'subcellular_location.tsv'
PATH_TO_TARGETXX = './HPAv18Y.csv'
PATH_TO_SUB = PATH + 'sample_submission.csv'
PATH_TO_PSEUDO = PATH + 'sub/enstw36_1shai562_3russ616_3dieter580_2shai593_3dmytro617_2kevin602_1l2615_7.15_clswt.csv'
clusters = pd.read_csv('cluster2emb.csv')
folds = dict(zip(clusters.Id,clusters.fold))
LABEL_MAP = {
0: "Nucleoplasm" ,
1: "Nuclear membrane" ,
2: "Nucleoli" ,
3: "Nucleoli fibrillar center",
4: "Nuclear speckles" ,
5: "Nuclear bodies" ,
6: "Endoplasmic reticulum" ,
7: "Golgi apparatus" ,
8: "Peroxisomes" ,
9: "Endosomes" ,
10: "Lysosomes" ,
11: "Intermediate filaments" ,
12: "Actin filaments" ,
13: "Focal adhesion sites" ,
14: "Microtubules" ,
15: "Microtubule ends" ,
16: "Cytokinetic bridge" ,
17: "Mitotic spindle" ,
18: "Microtubule organizing center",
19: "Centrosome",
20: "Lipid droplets" ,
21: "Plasma membrane" ,
22: "Cell junctions" ,
23: "Mitochondria" ,
24: "Aggresome" ,
25: "Cytosol" ,
26: "Cytoplasmic bodies",
27: "Rods & rings"}
LOC_MAP = {}
for k in LABEL_MAP.keys(): LOC_MAP[LABEL_MAP[k]] = k
# -
print(pretrainedmodels.model_names)
print(pretrainedmodels.pretrained_settings['resnet34'])
# + _uuid="95e82b2a7155377310f1d743dd8b077f99cba657"
df = pd.read_csv(PATH_TO_TARGET)
df.set_index('Id',inplace=True)
print(df.head())
print(df.shape)
# +
# # external data
# # https://www.proteinatlas.org/download/subcellular_location.tsv.zip
# dg = pd.read_csv(PATH_TO_TARGETX, sep="\t",index_col = None)
# dg.set_index('Gene',inplace=True)
# print(dg.head())
# print(dg.shape)
# file_list_x = [f for f in listdir(PATH_TO_EXTERNAL) if isfile(join(PATH_TO_EXTERNAL,
# f))]
# print(file_list_x[:15],len(file_list_x))
# fid = [f[:-4] for f in file_list_x]
# gene = [i[:15] for i in fid]
# rel = [dg.loc[g]['Reliability'] for g in gene]
# s0 = [str(dg.loc[g]['Enhanced']).split(';') for g in gene]
# t0 = [' '.join([str(LOC_MAP[j]) for j in i if j in LOC_MAP]).strip() for i in s0]
# s1 = [str(dg.loc[g]['Supported']).split(';') for g in gene]
# t1 = [' '.join([str(LOC_MAP[j]) for j in i if j in LOC_MAP]).strip() for i in s1]
# s2 = [str(dg.loc[g]['Approved']).split(';') for g in gene]
# t2 = [' '.join([str(LOC_MAP[j]) for j in i if j in LOC_MAP]).strip() for i in s2]
# s3 = [str(dg.loc[g]['Uncertain']).split(';') for g in gene]
# t3 = [' '.join([str(LOC_MAP[j]) for j in i if j in LOC_MAP]).strip() for i in s3]
# t = [[y for y in z if len(y) > 0] for z in zip(t0,t1,t2,t3)]
# targ = [' '.join(y).strip() for y in t]
# print(s0[:20],t0[:20],s1[:20],t1[:20],s2[:20],t2[:20],s3[:20],t3[:20])
# dfx = pd.DataFrame({'Id':fid,'Gene':gene,'Reliability':rel,'Target':targ})
# print(dfx.shape)
# dfx = dfx[dfx['Target'] != '']
# print(dfx.head())
# print(dfx.shape)
# -
# from Tomomi
dfxx = pd.read_csv(PATH_TO_TARGETXX, index_col = None)
dfxx.set_index('Id',inplace=True)
dfxx = dfxx[dfxx.GotYellow==1]
print(dfxx.head())
print(dfxx.shape)
# +
file_list_xx = list(dfxx.index.values)
# drop Ids with incomplete data
# file_list_xx0 = list(dfxx.index.values)
# file_list_xx = []
# bands = ['_red.jpg','_green.jpg','_blue.jpg']
# for f in file_list_xx0:
# ok = True
# for b in bands:
# if not os.path.exists(PATH_TO_EXTERNAL2+f+b): ok = False
# if ok: file_list_xx.append(f)
# print(len(file_list_xx0),len(file_list_xx))
# + _uuid="95e82b2a7155377310f1d743dd8b077f99cba657"
file_list = list(df.index.values)
ss = pd.read_csv(PATH_TO_SUB)
ss.set_index('Id',inplace=True)
print(ss.head())
print(ss.shape)
# + _uuid="95e82b2a7155377310f1d743dd8b077f99cba657"
ssp = pd.read_csv(PATH_TO_PSEUDO)
ssp.set_index('Id',inplace=True)
ssp.columns = ['Target']
print(ssp.head())
print(ssp.shape)
# + _uuid="95e82b2a7155377310f1d743dd8b077f99cba657"
test_file_list = list(ss.index.values)
print(file_list[:3], PATH_TO_TRAIN, len(file_list))
print(test_file_list[:3], PATH_TO_TEST, len(test_file_list))
# +
def image_histogram_equalization(image, number_bins=256):
# from http://www.janeriksolem.net/2009/06/histogram-equalization-with-python-and.html
# get image histogram
image_histogram, bins = np.histogram(image.flatten(), number_bins, density=True)
cdf = image_histogram.cumsum() # cumulative distribution function
cdf = 255 * cdf / cdf[-1] # normalize
# use linear interpolation of cdf to find new pixel values
image_equalized = np.interp(image.flatten(), bins[:-1], cdf)
# return image_equalized.reshape(image.shape), cdf
return image_equalized.reshape(image.shape)
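As a quick sanity check, the mapping implemented by `image_histogram_equalization` can be reproduced step by step on a synthetic low-contrast image (the array and seed below are illustrative only):

```python
# Step-by-step illustration of the histogram-equalisation mapping used by
# image_histogram_equalization above, on a synthetic low-contrast image.
import numpy as np

rng = np.random.default_rng(0)
img = rng.normal(100, 5, size=(32, 32))           # narrow intensity range

hist, bins = np.histogram(img.flatten(), 256, density=True)
cdf = hist.cumsum()                               # cumulative distribution
cdf = 255 * cdf / cdf[-1]                         # normalise CDF to 0..255
eq = np.interp(img.flatten(), bins[:-1], cdf).reshape(img.shape)

# equalisation stretches the dynamic range towards the full 0..255 scale
print(round(float(eq.min()), 1), round(float(eq.max()), 1))
```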
def equalize(arr):
arr = arr.astype('float')
# usually do not touch the alpha channel
# but here we do since it is yellow
for i in range(arr.shape[-1]):
# arr[...,i] = 255 * equalize_hist(arr[...,i])
arr[...,i] = image_histogram_equalization(arr[...,i])
return arr
def normalize(arr, q=0.01):
arr = arr.astype('float')
# usually do not touch the alpha channel
# but here we do since it is yellow
# print('arr before',arr.shape,arr.min(),arr.mean(),arr.max())
for i in range(arr.shape[-1]):
# arr[...,i] = 255 * equalize_hist(arr[...,i])
ai = arr[...,i]
# print('ai ' + str(i) + ' before',i,ai.shape,ai.min(),ai.mean(),ai.max())
qlow = np.percentile(ai,100*q)
qhigh = np.percentile(ai,100*(1.0-q))
if qlow == qhigh:
arr[...,i] = 0.
else:
arr[...,i] = 255.*(np.clip(ai,qlow,qhigh) - qlow)/(qhigh - qlow)
# print('ai ' + str(i) + ' after',i,ai.shape,ai.min(),ai.mean(),ai.max())
# print('arr after',arr.shape,arr.min(),arr.mean(),arr.max())
return arr
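The per-channel percentile normalisation performed by `normalize` can be shown on a single channel: values are clipped to the [q, 1-q] quantiles and rescaled to 0..255, so isolated outliers no longer dominate the dynamic range. The data below is synthetic.

```python
# One channel of the percentile normalisation above: clip to the 1st/99th
# percentiles, then rescale the clipped range to 0..255.
import numpy as np

rng = np.random.default_rng(1)
ch = rng.normal(50, 10, size=(64, 64))
ch[0, 0] = 10_000                                  # a hot pixel / outlier

q = 0.01
qlow = np.percentile(ch, 100 * q)
qhigh = np.percentile(ch, 100 * (1.0 - q))
out = 255.0 * (np.clip(ch, qlow, qhigh) - qlow) / (qhigh - qlow)

# the outlier is clipped to the 99th percentile and maps to exactly 255
print(round(float(out.max()), 1), round(float(out[0, 0]), 1))
```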
def standardize(arr):
    arr = arr.astype('float')
    # usually do not touch the alpha channel
    # but here we do since it is yellow
    for i in range(arr.shape[-1]):
        # standardize each channel to zero mean and unit variance;
        # the result must be assigned back into arr, using per-channel stats
        arr[...,i] = (arr[...,i] - arr[...,i].mean())/(arr[...,i].std() + 1e-6)
    return arr
class MultiBandMultiLabelDataset(Dataset):
# BANDS_NAMES = ['_red.png','_green.png','_blue.png','_yellow.png']
# BANDS_NAMES = ['_red','_green','_blue','_yellow']
BANDS_NAMES = ['_red','_green','_blue']
def __len__(self):
return len(self.images_df)
def __init__(self, images_df,
base_path,
image_transform=None,
augmentator=None,
train_mode=True,
external=0
):
if not isinstance(base_path, pathlib.Path):
base_path = pathlib.Path(base_path)
self.images_df = images_df.reset_index()
self.image_transform = image_transform
self.augmentator = augmentator
self.images_df.Id = self.images_df.Id.apply(lambda x: base_path / x)
self.mlb = MultiLabelBinarizer(classes=list(LABEL_MAP.keys()))
self.train_mode = train_mode
self.external = external
if self.external == 2: self.suffix = '.jpg'
else: self.suffix = '.png'
self.cache = {}
def __getitem__(self, index):
# print('index class',index.__class__)
if isinstance(index, torch.Tensor): index = index.item()
# if index in self.cache:
# X, y = self.cache[index]
# else:
# y = None
# X = self._load_multiband_image(index)
# if self.train_mode:
# y = self._load_multilabel_target(index)
# self.cache[index] = (X,y)
y = None
X = self._load_multiband_image(index)
if self.train_mode:
y = self._load_multilabel_target(index)
# augmentator can be for instance imgaug augmentation object
if self.augmentator is not None:
# print('getitem before aug',X.shape,np.min(X),np.mean(X),np.max(X))
# X = self.augmentator(np.array(X))
X = self.augmentator(image=X)['image']
# print('getitem after aug',X.shape,np.min(X),np.mean(X),np.max(X))
if self.image_transform is not None:
X = self.image_transform(X)
return X, y
def _load_multiband_image(self, index):
row = self.images_df.iloc[index]
if self.external in [1,3]:
p = str(row.Id.absolute()) + self.suffix
band3image = PIL.Image.open(p)
elif self.external in [4,5]:
p = str(row.Id.absolute()) + self.suffix
band4image = PIL.Image.open(p)
else:
image_bands = []
for i,band_name in enumerate(self.BANDS_NAMES):
p = str(row.Id.absolute()) + band_name + self.suffix
pil_channel = PIL.Image.open(p)
if self.external == 2:
pa = np.array(pil_channel)[...,i]
# pa = np.array(pil_channel)
# print(i,band_name,pil_channel.mode,pa.shape,pa.min(),pa.mean(),pa.max())
if pa.max() > 0:
pil_channel = PIL.Image.fromarray(pa.astype('uint8'),'L')
pil_channel = pil_channel.convert("L")
image_bands.append(pil_channel)
# pretend its a RBGA image to support 4 channels
# band4image = PIL.Image.merge('RGBA', bands=image_bands)
band3image = PIL.Image.merge('RGB', bands=image_bands)
# band4image = band4image.resize((image_size,image_size), PIL.Image.ANTIALIAS)
band3image = band3image.resize((image_size,image_size), PIL.Image.ANTIALIAS)
# # normalize each channel
# arr = np.array(band4image)
# # arr = np.array(band3image)
# # # average red and yellow channels, orange
# # arr[...,0] = (arr[...,0] + arr[...,3])/2.0
# # arr = arr[...,:3]
# # arr = np.array(band3image)
# # print('arr shape',arr.shape)
# # if index==0: print(index,'hist before',histogram(arr))
# arr = normalize(arr)
# # arr = standardize(arr)
# # arr = equalize(arr)
# # # average red and yellow channels, orange
# # arr[...,0] = (arr[...,0] + arr[...,3])/2.0
# # arr = arr[...,:3]
# # if index==0: print(index,'hist after',histogram(arr))
# # band3image = PIL.Image.fromarray(arr.astype('uint8'),'RGB')
# band4image = PIL.Image.fromarray(arr.astype('uint8'),'RGBA')
# histogram equalize each channel
# arr = np.array(band4image)
# # print('arr',arr.shape)
# # if index==0: print(index,'hist before',histogram(arr))
# arr = equalize(arr)
# # if index==0: print(index,'hist after',histogram(arr))
# band4image = PIL.Image.fromarray(arr.astype('uint8'),'RGBA')
# return band4image
return band3image
# return arr
# band3image = PIL.Image.new("RGB", band4image.size, (255, 255, 255))
# band3image.paste(band4image, mask=band4image.split()[3])
# band3image = band3image.resize((image_size,image_size), PIL.Image.ANTIALIAS)
# return band3image
def _load_multilabel_target(self, index):
y = self.images_df.iloc[index].Target.split(' ')
# print(y)
        try:
            yl = list(map(int, y))
        except ValueError:
            # unlabelled rows (e.g. an empty Target) yield an empty label list
            yl = []
return yl
def collate_func(self, batch):
labels = None
images = [x[0] for x in batch]
if self.train_mode:
labels = [x[1] for x in batch]
labels_one_hot = self.mlb.fit_transform(labels)
labels = torch.FloatTensor(labels_one_hot)
# return torch.stack(images)[:,:4,:,:], labels
return torch.stack(images), labels
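The `MultiLabelBinarizer` step in `collate_func` turns each sample's list of class indices into a 28-dimensional 0/1 target vector. The same transform, sketched in plain Python (the helper `one_hot` below is illustrative, not part of the training code):

```python
# What collate_func's MultiLabelBinarizer step produces, sketched in plain
# Python: each sample's list of class indices becomes a 28-dim 0/1 vector.
NUM_CLASSES = 28

def one_hot(labels, n=NUM_CLASSES):
    vec = [0.0] * n
    for c in labels:
        vec[c] = 1.0
    return vec

batch_labels = [[0, 25], [7], []]   # e.g. Nucleoplasm+Cytosol, Golgi, empty
targets = [one_hot(l) for l in batch_labels]
print(targets[0][0], targets[0][25], sum(targets[2]))
# 1.0 1.0 0.0
```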
# +
imean = (0.08069, 0.05258, 0.05487, 0.08069)
istd = (0.13704, 0.10145, 0.15313, 0.13704)
train_aug = A.Compose([
# A.Rotate((0,30),p=0.75),
A.RandomRotate90(p=1),
A.HorizontalFlip(p=0.5),
A.ShiftScaleRotate(p=0.9),
# A.RandomBrightness(0.05),
# A.RandomContrast(0.05),
A.Normalize(mean=imean, std=istd,max_pixel_value=255.)
])
test_aug = A.Compose([
A.Normalize(mean=imean, std=istd,max_pixel_value=255.)
])
# +
composed_transforms_train = transforms.Compose([
# transforms.Resize(size=final_size),
# transforms.RandomResizedCrop(size=512,scale=0.5),
transforms.RandomCrop(size=512),
transforms.RandomHorizontalFlip(p=0.5),
transforms.RandomVerticalFlip(p=0.5),
# transforms.RandomRotation(degrees=45),
transforms.RandomAffine(degrees=45, translate=(0.1,0.1), shear=10, scale=(0.9,1.1)),
transforms.ToTensor(),
transforms.Normalize(mean=imean, std=istd)
])
composed_transforms_test = transforms.Compose([
# transforms.Resize(size=final_size),
transforms.FiveCrop(512),
transforms.Lambda(lambda crops: torch.stack([ \
transforms.ToTensor()(crop) for crop in crops])),
transforms.Lambda(lambda crops: torch.stack([ \
transforms.Normalize(mean=imean, std=istd)(crop) for crop in crops]))
# transforms.ToTensor(),
# transforms.Normalize(mean=imean, std=istd)
])
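The `FiveCrop` test transform produces five crops per image, which the validation loop later fuses into the batch dimension before the forward pass and averages back per sample afterwards. The shape bookkeeping, sketched with numpy (tiny sizes and dummy logits for illustration):

```python
# Shape bookkeeping for five-crop test-time augmentation: fuse batch size
# and ncrops for the forward pass, then average the outputs over crops.
import numpy as np

bs, ncrops, c, h, w = 2, 5, 3, 8, 8          # tiny sizes for illustration
crops = np.zeros((bs, ncrops, c, h, w))

fused = crops.reshape(-1, c, h, w)           # fuse batch size and ncrops
assert fused.shape == (bs * ncrops, c, h, w)

# pretend these are the 28-class logits returned by the network
logits = np.arange(bs * ncrops * 28, dtype=float).reshape(bs * ncrops, 28)
avg = logits.reshape(bs, ncrops, -1).mean(axis=1)   # average over crops
print(avg.shape)
# (2, 28)
```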
# +
#####################################
# model and main parameter settings #
#####################################
# %run 'preresnet67u.ipynb'
device = "cuda:"+str(gpu_id)
# device = "cpu"
p = OrderedDict() # Parameters to include in report
p['trainBatch'] = 16 # Training batch size
p['testBatch'] = 16 # Testing batch size
nEpochs = 20 # Number of epochs for training
resume_epoch = 0 # Default is 0, change if want to resume
p['lr'] = 1e-5 # Learning rate
p['step_size'] = 5
p['gamma'] = 0.5
p['wd'] = 1e-4 # Weight decay
p['momentum'] = 0.9 # Momentum
p['epoch_size'] = 15 # How many epochs to change learning rate
p['patience'] = 30 # epochs to wait for early stopping
# +
num_classes = 28
gsize = 1
gpct = 95.
gstd = 0.1
gthresh = 0.1
eps = 1e-5
# save_dir_root = os.path.join(os.path.dirname(os.path.abspath(__file__)))
# exp_name = os.path.dirname(os.path.abspath(__file__)).split('/')[-1]
save_dir_root = './'
# save_dir = os.path.join(save_dir_root, 'run', 'run_' + str(run_id))
save_dir = save_dir_root + mname + '/'
os.makedirs(save_dir,exist_ok=True)
print(save_dir)
def count_parameters(model):
return sum(p.numel() for p in model.parameters() if p.requires_grad)
gc.collect()
# +
gc.collect()
clip = 20.
for f in range(nfold):
if f != fold: continue
print('')
print('*'*50)
print(mname + ' fold ' + str(fold))
print('*'*50)
bname = mname+'/'+'best_'+str(fold)+'.pth'
# Network definition
net = Resnet(num_classes=28)
print("Number of parameters:","{:,}".format(count_parameters(net)))
# print(p.status())
# classification loss
# criterion = utils.cross_entropy2d
# criterion = torch.nn.BCELoss()
# criterion = dice_loss
# criterion = BCELoss2d()
# criterion = CombinedLoss(is_weight=False).cuda()
# criterion = L.lovasz_hinge
# criterion = L.lovasz2_bce1
# criterion = L.lovasz_hinge
# criterion = nn.BCEWithLogitsLoss()
# criterion = FocalLoss()
# criterion = ThreeWayLoss()
# this gets overridden in loop below
pw = torch.tensor([10.]).float().to(device)
criterion = nn.BCEWithLogitsLoss(pos_weight=pw)
# criterion = Smooth_L1_LossW(pos_weight=pw)
# criterion = F.smooth_l1_loss
# starting values for inverse positive weights
ipw = np.array([0.3305, 0.043, 0.1031, 0.0472, 0.0525,
0.0852, 0.0579, 0.0508, 0.0413, 0.0569,
0.0406, 0.0439, 0.0432, 0.0405, 0.0549,
0.0424, 0.0749, 0.0428, 0.0517, 0.0512,
0.04, 0.0812, 0.0437, 0.0678, 0.0414,
0.181, 0.0422, 0.0427])
if resume_epoch == 0:
if len(mname0):
bname0 = mname0+'/'+'best_'+str(fold)+'.pth'
print(f'Initializing weights from {bname0}')
# load best model
best = torch.load(bname0, map_location='cpu')
# print(best.keys())
net.load_state_dict(best, strict=False)
else:
print(f'Initializing weights from {bname}')
# load best model
best = torch.load(bname, map_location='cpu')
# print(best.keys())
net.load_state_dict(best, strict=False)
if gpu_id >= 0:
print('Using GPU: {} '.format(gpu_id))
torch.cuda.set_device(device=gpu_id)
# net.cuda()
net.train()
net.to(device)
gc.collect()
# Logging into Tensorboard
# log_dir = os.path.join(save_dir, 'models', datetime.now().strftime('%b%d_%H-%M-%S') + '_' + socket.gethostname())
log_dir = os.path.join('tensorboard', mname + '_' + str(fold))
writer = SummaryWriter(log_dir=log_dir)
# Use the following optimizer
optimizer = torch.optim.Adam(net.parameters(), lr=p['lr'])
# optimizer = optim.SGD(net.parameters(), lr=p['lr'], momentum=p['momentum'],
# weight_decay=p['wd'])
# optimizer = torch.optim.Adadelta(net.parameters(), lr=1.0, rho=0.9, eps=1e-06,
# weight_decay=1e-6)
p['optimizer'] = str(optimizer)
# scheduler = LambdaLR(optimizer, lr_lambda=cyclic_lr)
# scheduler.base_lrs = list(map(lambda group: 1.0, optimizer.param_groups))
# scheduler = ReduceLROnPlateau(optimizer, factor=0.2, patience=5, verbose=True,
# threshold=0.0, threshold_mode='abs')
scheduler = StepLR(optimizer, step_size=p['step_size'], gamma=p['gamma'])
torch.cuda.empty_cache()
file_list_val = [f for f in file_list if folds[f]==fold]
file_list_train = [f for f in file_list if f not in file_list_val]
print('Training on ' + str(len(file_list_train)) + \
' and validating on ' + str(len(file_list_val)))
db_train = MultiBandMultiLabelDataset(df.loc[file_list_train],
base_path=PATH_TO_TRAIN,
# augmentator=train_aug,
image_transform=composed_transforms_train)
db_val = MultiBandMultiLabelDataset(df.loc[file_list_val],
base_path=PATH_TO_TRAIN,
# augmentator=test_aug,
image_transform=composed_transforms_test)
# db_x = MultiBandMultiLabelDataset(dfx,
# base_path=PATH_TO_EXTERNAL,
# # augmentator=train_aug,
# image_transform=composed_transforms_train,
# external=1)
db_xx = MultiBandMultiLabelDataset(dfxx,
base_path=PATH_TO_EXTERNAL2,
# augmentator=train_aug,
image_transform=composed_transforms_train,
external=2)
db_pseudo = MultiBandMultiLabelDataset(ssp,
base_path=PATH_TO_TEST,
# augmentator=test_aug,
image_transform=composed_transforms_train)
db_test = MultiBandMultiLabelDataset(ss, train_mode=False,
base_path=PATH_TO_TEST,
# augmentator=test_aug,
image_transform=composed_transforms_test)
# construct sampling weights as max of reciprocal class frequencies
ylist = [t.split(' ') for t in db_train.images_df.Target]
# print(ylist[:5])
# build one-hot matrix
y = np.zeros((db_train.images_df.shape[0],28))
for i,l in enumerate(ylist):
for j in range(len(l)): y[i,int(l[j])] = 1.
# print(y[:20])
# sampling weights
w = 1.0/np.mean(y,axis=0)
# w = np.clip(w, 0., 1000.)
np.set_printoptions(precision=4,linewidth=80,suppress=True)
print('Sampling weights:')
print(w)
# replace 1s with weights in the one-hot matrix
for i,l in enumerate(ylist):
for j in range(len(l)): y[i,int(l[j])] = w[int(l[j])]
# print(y[:10])
# use maximum weight when there are multiple targets
samples_weight = np.amax(y,axis=1)
samples_weight = torch.from_numpy(samples_weight)
sampler = WeightedRandomSampler(samples_weight.type('torch.DoubleTensor'),
len(samples_weight))
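The oversampling scheme above in miniature: per-class weights are the reciprocal class frequencies, 1s in the one-hot matrix are replaced by those weights, and each sample is weighted by the rarest class it contains (the maximum across its labels). The toy target matrix below is illustrative.

```python
# Reciprocal-frequency sample weighting, as built for the
# WeightedRandomSampler above, on a toy one-hot matrix.
import numpy as np

# 4 samples, 3 classes; class 0 is common, classes 1 and 2 are rare
y = np.array([[1, 0, 0],
              [1, 0, 0],
              [1, 1, 0],
              [0, 0, 1]], dtype=float)

w = 1.0 / np.mean(y, axis=0)           # class weights: [4/3, 4, 4]
weighted = y * w                        # replace 1s with class weights
samples_weight = np.amax(weighted, axis=1)
print(samples_weight)                   # rare-class samples get the largest weight
```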
# # construct similar sampler for external data
# # construct sampling weights as max of reciprocal class frequencies
# ylistx = [t.split(' ') for t in db_x.images_df.Target]
# # print(ylist[:5])
# # build one-hot matrix
# yx = np.zeros((db_x.images_df.shape[0],28))
# for i,l in enumerate(ylistx):
# for j in range(len(l)): yx[i,int(l[j])] = 1.
# # sampling weights
# wx = 1.0/np.mean(yx,axis=0)
# wx = np.clip(wx, 0., 3000.)
# np.set_printoptions(precision=4,linewidth=80,suppress=True)
# print('Sampling weights external:')
# print(wx)
# # replace 1s with weights in the one-hot matrix
# for i,l in enumerate(ylistx):
# for j in range(len(l)): yx[i,int(l[j])] = wx[int(l[j])]
# # print(y[:10])
# # use maximum weight when there are multiple targets
# samples_weightx = np.amax(yx,axis=1)
# samples_weightx = torch.from_numpy(samples_weightx)
# samplerx = WeightedRandomSampler(samples_weightx.type('torch.DoubleTensor'),
# len(samples_weightx))
# construct similar sampler for external data 2
# construct sampling weights as max of reciprocal class frequencies
ylistxx = [t.split(' ') for t in db_xx.images_df.Target]
# print(ylist[:5])
# build one-hot matrix
yxx = np.zeros((db_xx.images_df.shape[0],28))
for i,l in enumerate(ylistxx):
for j in range(len(l)): yxx[i,int(l[j])] = 1.
# sampling weights
wxx = 1.0/np.mean(yxx,axis=0)
wxx = np.clip(wxx, 0., 3000.)
np.set_printoptions(precision=4,linewidth=80,suppress=True)
print('Sampling weights external2:')
print(wxx)
# replace 1s with weights in the one-hot matrix
for i,l in enumerate(ylistxx):
for j in range(len(l)): yxx[i,int(l[j])] = wxx[int(l[j])]
# print(y[:10])
# use maximum weight when there are multiple targets
samples_weightxx = np.amax(yxx,axis=1)
samples_weightxx = torch.from_numpy(samples_weightxx)
samplerxx = WeightedRandomSampler(samples_weightxx.type('torch.DoubleTensor'),
len(samples_weightxx))
# construct similar sampler for pseudo-labelling
# construct sampling weights as max of reciprocal class frequencies
ylistp = [[] if isinstance(t,float) else t.split(' ') for t in db_pseudo.images_df.Target]
# print(ylist[:5])
# build one-hot matrix
yp = np.zeros((db_pseudo.images_df.shape[0],28))
for i,l in enumerate(ylistp):
for j in range(len(l)): yp[i,int(l[j])] = 1.
# sampling weights
wp = 1.0/np.mean(yp,axis=0)
wp = np.clip(wp, 0., 3000.)
np.set_printoptions(precision=4,linewidth=80,suppress=True)
print('Sampling weights pseudo:')
print(wp)
# replace 1s with weights in the one-hot matrix
for i,l in enumerate(ylistp):
for j in range(len(l)): yp[i,int(l[j])] = wp[int(l[j])]
# print(y[:10])
# use maximum weight when there are multiple targets
samples_weightp = np.amax(yp,axis=1)
samples_weightp = torch.from_numpy(samples_weightp)
samplerp = WeightedRandomSampler(samples_weightp.type('torch.DoubleTensor'),
len(samples_weightp))
trainloader = DataLoader(db_train, collate_fn=db_train.collate_func,
batch_size=3*p['trainBatch']//8, sampler=sampler,
num_workers=6)
# xloader = DataLoader(db_x, collate_fn=db_x.collate_func,
# batch_size=p['trainBatch']//8, sampler=samplerx,
# num_workers=2)
xxloader = DataLoader(db_xx, collate_fn=db_xx.collate_func,
batch_size=3*p['trainBatch']//8, sampler=samplerxx,
num_workers=6)
pseudoloader = DataLoader(db_pseudo, collate_fn=db_pseudo.collate_func,
batch_size=p['trainBatch']//4, sampler=samplerp,
num_workers=4)
valloader = DataLoader(db_val, collate_fn=db_train.collate_func,
batch_size=p['testBatch'], shuffle=False,
num_workers=16)
testloader = DataLoader(db_test, collate_fn=db_test.collate_func,
batch_size=p['testBatch'], shuffle=False,
num_workers=16)
# xloader_enum = enumerate(xloader)
xxloader_enum = enumerate(xxloader)
pseudoloader_enum = enumerate(pseudoloader)
# # function to generate batches within ImageLoader with no arguments
# def load_training_batch():
# examples_batch = random.sample(list(db_train.images_df.Id.values), p['trainBatch'])
# blist = [db_train[ex] for ex in examples_batch]
# images = [b[0] for b in blist]
# targets = [b[1] for b in blist]
# return Batch(identifiers=None, images=images, targets=targets)
# img_loader = ImageLoader(load_training_batch, nb_workers=6)
# bg_augmenter = BackgroundAugmenter(seq, img_loader.queue, nb_workers=8)
utils.generate_param_report(os.path.join(save_dir, mname + '.txt'), p)
# number of batches
num_img_tr = len(trainloader)
num_img_ts = len(valloader)
print('Image size:', final_size)
print('Batch size:', p['trainBatch'])
print('Batches per epoch:', num_img_tr)
print('Epochs:', nEpochs)
print('Loss:', criterion)
# print('Learning rate: ', p['lr'])
print('')
running_loss_tr = 0.0
running_loss_ts = 0.0
aveGrad = 0
bname = mname+'/'+'best_'+str(fold)+'.pth'
# print("Training Network")
history = {}
history['epoch'] = []
history['train'] = []
history['val'] = []
history['delta'] = []
history['f1'] = []
history['time'] = []
best_val = -999
bad_epochs = 0
start_time = timeit.default_timer()
total_time = 0
prev_lr = 999
# Main Training and Testing Loop
for epoch in range(resume_epoch, nEpochs):
# if (epoch > 0) and (epoch % p['epoch_size'] == 0):
# lr_ = utils.lr_poly(p['lr'], epoch, nEpochs, 0.9)
# print('(poly lr policy) learning rate', lr_)
# print('')
# optimizer = optim.SGD(net.parameters(), lr=lr_, momentum=p['momentum'],
# weight_decay=p['wd'])
scheduler.step()
lr = optimizer.param_groups[0]['lr']
if lr != prev_lr:
print('learning rate = %.6f' % lr)
prev_lr = lr
net.train()
train_loss = []
ns = 0
# for ii in range(num_img_tr):
for ii, sample_batched in enumerate(trainloader):
inputs, gts = sample_batched[0], sample_batched[1]
# # external data
# try:
# _, xbatch = next(xloader_enum)
# except:
# xloader_enum = enumerate(xloader)
# _, xbatch = next(xloader_enum)
# inputsx, gtsx = xbatch[0], xbatch[1]
# external data 2
try:
_, xxbatch = next(xxloader_enum)
except:
xxloader_enum = enumerate(xxloader)
_, xxbatch = next(xxloader_enum)
inputsxx, gtsxx = xxbatch[0], xxbatch[1]
# pseudo-labelling
try:
_, pbatch = next(pseudoloader_enum)
except:
pseudoloader_enum = enumerate(pseudoloader)
_, pbatch = next(pseudoloader_enum)
inputsp, gtsp = pbatch[0], pbatch[1]
# inputs = torch.cat([inputs,inputsx,inputsxx],0)
# gts = torch.cat([gts,gtsx,gtsxx],0)
inputs = torch.cat([inputs,inputsxx,inputsp],0)
gts = torch.cat([gts,gtsxx,gtsp],0)
# use green channel as ground truth mask for current classes
# gi = inputs.numpy()[:,1].copy()
# print('gi stats', gi.shape, gi.min(), gi.mean(), gi.max())
bsize = inputs.shape[0]
gmask = np.zeros((bsize, num_classes, gsize, gsize)).astype(float)
for jj in range(bsize):
# print('gij before denorm', gi[jj].shape, gi[jj].min(),gi[jj].mean(), gi[jj].max())
# gij = gi[jj]*istd[1] + imean[1]
# print('gij after denorm', gij.shape, gij.min(), gij.mean(), gij.max())
# print('gij before filter', gij.shape, gij.min(), gij.mean(), gij.max())
# gij = gaussian_filter(gij,gstd)
# print('gij after filter', gij.shape, gij.min(), gij.mean(), gij.max())
# gij = (gij > gthresh).astype(float)
# print('gij after thresh', gij.shape, gij.min(), gij.mean(), gij.max())
gr = 1.0
# gr = cv2.resize(gij, (gsize,gsize), interpolation=interp)
# grmin = gr.min()
# grmax = gr.max()
# # print('gr before rescale', gr.shape, grmin, gr.mean(), grmax)
# gr = (gr - grmin)/(grmax - grmin + 1e-6)
# print('gr after rescale', gr.shape, gr.min(), gr.mean(), gr.max())
# gr = (gr > gthresh).astype(int)
# print('gr after thresh', gr.shape, gr.min(), gr.mean(), gr.max())
# gr = binary_dilation(gr).astype(int)
# print('gr after dilation', gr.shape, gr.min(), gr.mean(), gr.max())
# gin = gi[jj]
# gin = (gin - gin.min())/(gin.max()-gin.min()+1e-6)
# grn = cv2.resize(gin, (gsize,gsize), interpolation=interp)
# print('grn stats', grn.shape, grn.min(), grn.mean(), grn.max())
# gr = (gr > gthresh).astype(bool).astype(int)
# print('gr mean batch', jj, np.mean(gr))
                # np.nonzero returns a 1-tuple here, so kk is the index array
                # of all positive classes and the assignment uses fancy indexing
                for kk in np.nonzero(gts[jj]):
                    gmask[jj,kk] = gr
# print(jj, 'y', gts[jj])
# print(jj, 'gmask mean', np.average(gmask[jj], axis=(1,2)))
# print('gmask',gmask.shape,gmask.min(),gmask.mean(),gmask.max())
gmask = torch.from_numpy(gmask).float()
# keep track of sampling proportions
gt = gts.cpu().detach().numpy()
gs = np.sum(gt,axis=0)
if ii==0: gtsum = gs
else: gtsum += gs
ns += bsize
inputs = inputs.type(torch.float).to(device)
gts = gts.to(device)
gmask = gmask.to(device)
# # use inverse positive weights from previous iteration
# pwb = np.zeros((bsize, num_classes, gsize, gsize))
# for kk in range(num_classes):
# pwb[:,kk] = 1.0/(ipw[kk] + 1e-5)
# pw = torch.tensor(pwb).float().to(device)
# criterion = Smooth_L1_LossW(pos_weight=pw)
# predictions are heat maps on a probability scale
logits = net(inputs)
logits = torch.clamp(logits, min=-clip, max=clip)
# class_loss = criterion(logits, gts)
# first = True
# for kk in range(num_classes):
# lossk = criterion2(seg[:,kk], gmask[:,kk])
# # print('seg_loss batch', jj, ' class', kk, lossjk.item())
# if first:
# seg_loss = lossk
# first = False
# else: seg_loss = seg_loss + lossk
# seg_loss = seg_loss / num_classes
# print('class_loss', class_loss.item())
# print('seg_loss', seg_loss.item())
# loss = class_loss + 0.5 * seg_loss
loss = criterion(logits, gmask)
# print(ii, loss.item())
optimizer.zero_grad()
loss.backward()
# adamw
for group in optimizer.param_groups:
for param in group['params']:
param.data = param.data.add(-p['wd'] * group['lr'], param.data)
optimizer.step()
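The three lines tagged "adamw" above implement decoupled weight decay: each parameter is shrunk toward zero by lr * wd separately from the gradient-based update, instead of folding the decay into the gradient as an L2 penalty. A minimal numpy sketch of that decomposition (plain SGD stands in for the Adam step here, purely for illustration):

```python
# Decoupled weight decay (AdamW-style): decay the parameter directly,
# then apply the optimizer's own gradient update.
import numpy as np

lr, wd = 1e-5, 1e-4
param = np.array([1.0, -2.0])
grad = np.array([0.1, 0.1])

# decoupled decay, as in the training loop: param <- param - lr*wd*param
param_decayed = param - lr * wd * param
# then the gradient step (plain SGD here instead of Adam, for illustration)
param_new = param_decayed - lr * grad
print(param_new)
```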
train_loss.append(loss.item())
running_loss_tr += loss.item()
print('epoch ' + str(epoch) + ' training class proportions:')
print(gtsum/ns)
# validation
net.eval()
with torch.no_grad():
val_loss = []
val_predictions = []
val_targets = []
for ii, sample_batched in enumerate(valloader):
# >>> #In your test loop you can do the following:
# >>> input, target = batch # input is a 5d tensor, target is 2d
# >>> bs, ncrops, c, h, w = input.size()
# >>> result = model(input.view(-1, c, h, w)) # fuse batch size and ncrops
# >>> result_avg = result.view(bs, ncrops, -1).mean(1) # avg over crops
# inputs, gts = sample_batched['image'], sample_batched['gt']
inputs, gts = sample_batched[0], sample_batched[1]
# fuse batch size and ncrops
bsize, ncrops, c, h, w = inputs.size()
# print(bsize, ncrops, c, h, w)
inputs = inputs.view(-1, c, h, w)
# use thresholded green channel as ground truth mask for current classes
# gi = inputs.numpy()[:,1].copy()
# bsize = inputs.shape[0]
gmask = np.zeros((bsize, num_classes, gsize, gsize)).astype(float)
for jj in range(bsize):
# print('gij before denorm', gi[jj].shape, gi[jj].min(),
# gi[jj].mean(), gi[jj].max())
# gij = gi[jj]*istd[1] + imean[1]
# print('gij after denorm', gij.shape, gij.min(), gij.mean(), gij.max())
# print('gij before filter', gij.shape, gij.min(), gij.mean(), gij.max())
# gij = gaussian_filter(gij,gstd)
# print('gij after filter', gij.shape, gij.min(), gij.mean(), gij.max())
# gij = (gij > gthresh).astype(float)
# print('gij after thresh', gij.shape, gij.min(), gij.mean(), gij.max())
gr = 1.0
# gr = cv2.resize(gij, (gsize,gsize), interpolation=interp)
# grmin = gr.min()
# grmax = gr.max()
# # print('gr before rescale', gr.shape, grmin, gr.mean(), grmax)
# gr = (gr - grmin)/(grmax - grmin + 1e-6)
# print('gr after rescale', gr.shape, gr.min(), gr.mean(), gr.max())
# gr = (gr > gthresh).astype(int)
# print('gr after thresh', gr.shape, gr.min(), gr.mean(), gr.max())
# gr = binary_dilation(gr).astype(int)
# print('gr after dilation', gr.shape, gr.min(), gr.mean(), gr.max())
# gin = gi[jj]
# gin = (gin - gin.min())/(gin.max()-gin.min()+1e-6)
# grn = cv2.resize(gin, (gsize,gsize), interpolation=interp)
# print('grn stats', grn.shape, grn.min(), grn.mean(), grn.max())
# gr = (gr > gthresh).astype(bool).astype(int)
# print('gr mean batch', jj, np.mean(gr))
for kk in np.nonzero(gts[jj]):
gmask[jj,kk] = gr
# print(jj, 'y', gts[jj])
# print(jj, 'gmask mean', np.average(gmask[jj], axis=(1,2)))
gmask = torch.from_numpy(gmask).float()
# tta horizontal flip
inputs2 = inputs.numpy()[:,:,:,::-1].copy()
inputs2 = torch.from_numpy(inputs2)
inputs = inputs.type(torch.float).to(device)
inputs2 = inputs2.type(torch.float).to(device)
# predictions are on a logit scale
logits = net(inputs)
# average over crops
logits = logits.view(bsize, ncrops, num_classes, gsize, gsize).mean(1)
logits2 = net(inputs2)
# average over crops
logits2 = logits2.view(bsize, ncrops, num_classes, gsize, gsize).mean(1)
logits2 = logits2.cpu().detach().numpy()[:,:,:,::-1].copy()
logits2 = torch.from_numpy(logits2).to(device)
logits = (logits + logits2)/2.0
logits = torch.clamp(logits, min=-clip, max=clip)
# # use inverse positive weights from this iteration
# pwb = np.zeros((bsize, num_classes, gsize, gsize))
# for kk in range(num_classes):
# pwb[:,kk] = 1.0/(ipw[kk] + 1e-5)
# pw = torch.tensor(pwb).float().to(device)
# criterion = Smooth_L1_LossW(pos_weight=pw)
loss = criterion(logits, gmask.to(device))
running_loss_ts += loss.item()
val_loss.append(loss.item())
# save results to compute F1 on validation set
preds = logits.cpu().detach().numpy()
gt = gts.cpu().detach().numpy()
val_predictions.append(preds)
val_targets.append(gt)
vps = np.vstack(val_predictions)
vts = np.vstack(val_targets)
# competition metric
# use a percentile as the single prediction for f1
vpsp = np.percentile(vps, gpct, axis=(2,3))
thresholds = np.linspace(-5, 5, 101)
scores = np.array([f1_score(vts, np.int32(vpsp > t),
average='macro') for t in thresholds])
threshold_best_index = np.argmax(scores)
vf1 = scores[threshold_best_index]
tbest = thresholds[threshold_best_index]
# vf1 = f1_score(vts,(vps > 0).astype(int), average='macro')
if vf1 > best_val:
star = '*'
best_val = vf1
torch.save(net.state_dict(), bname)
bad_epochs = 0
else:
star = ' '
bad_epochs += 1
# print progress
# running_loss_ts = running_loss_ts / num_img_ts
tl = np.mean(train_loss)
vl = np.mean(val_loss)
stop_time = timeit.default_timer()
diff_time = (stop_time - start_time)/60.
total_time += diff_time
start_time = timeit.default_timer()
print('epoch %d train %6.4f val %6.4f delta %6.4f f1 %6.4f%s thresh %3.1f time %4.1f%s\n' % \
(epoch, tl, vl, vl-tl, vf1, star, tbest, diff_time, 'm'))
writer.add_scalar('loss', tl, epoch)
writer.add_scalar('val_loss', vl, epoch)
writer.add_scalar('delta', vl-tl, epoch)
writer.add_scalar('val_f1', vf1, epoch)
writer.add_scalar('thresh', tbest, epoch)
writer.add_scalar('time', diff_time, epoch)
# print('Running Loss: %f\n' % running_loss_ts)
# print('Mean Loss: %f\n' % np.mean(val_loss))
running_loss_tr = 0
running_loss_ts = 0
history['epoch'].append(epoch)
history['train'].append(tl)
history['val'].append(vl)
history['f1'].append(vf1)
history['time'].append(diff_time)
if bad_epochs > p['patience']:
print('early stopping, best validation loss %6.4f, total time %4.1f minutes \n' % \
(best_val, total_time))
break
writer.close()
# plot history
fig, (ax_loss) = plt.subplots(1, 1, figsize=(8,4))
ax_loss.plot(history['epoch'], history['train'], label="Train loss")
ax_loss.plot(history['epoch'], history['val'], label="Validation loss")
plt.show()
plt.gcf().clear()
# -
db_pseudo.images_df.Target[2].split(' ')
# load best model
best = torch.load(bname, map_location='cpu')
# print(best.keys())
net.load_state_dict(best)
net = net.eval()
with torch.no_grad():
# predict validation set
val_logits = []
val_y = []
# for image, mask in tqdm.tqdm(data.DataLoader(dataset_val, batch_size = 30)):
batch = 0
for image, y in valloader:
# fuse batch size and ncrops
bsize, ncrops, c, h, w = image.size()
image = image.view(-1, c, h, w)
# test-time augmentation with horizontal flipping
image2 = image.numpy()[:,:,:,::-1].copy()
image2 = torch.from_numpy(image2)
image = image.type(torch.float).to(device)
image2 = image2.type(torch.float).to(device)
logits = net(image)
# average over crops
logits = logits.view(bsize, ncrops, num_classes, gsize, gsize).mean(1)
logits = logits.cpu().detach().numpy()
logits2 = net(image2)
# average over crops
logits2 = logits2.view(bsize, ncrops, num_classes, gsize, gsize).mean(1)
logits2 = logits2.cpu().detach().numpy()
logits2 = logits2[:,:,:,::-1]
logits = (logits + logits2)/2.0
val_logits.append(logits)
y = y.cpu().detach().numpy()
val_y.append(y)
batch += 1
vls = np.vstack(val_logits)
vys = np.vstack(val_y)
print(vls.shape, vys.shape)
print(logits.shape,logits.min(),logits.mean(),logits.max())
print(logits2.shape,logits2.min(),logits2.mean(),logits2.max())
# +
clip = 15
vpc = np.array([np.clip(logits.flatten(),0.,clip), np.clip(logits2.flatten(),0.,clip)])
# tpsf = np.hstack([c.reshape((-1,1)) for c in tps])
print(vpc.shape)
# -
np.corrcoef(vpc)
# +
# save out-of-fold predictions
oof_ids = file_list_val
poof = vls.copy()
yoof = vys.copy()
oof = [oof_ids, poof, yoof]
fname = 'oof/'+mname+'_'+str(fold)+'.pkl'
pickle.dump(oof,open(fname,'wb'))
print(fname)
# +
# grid search for best threshold
# note predictions and thresholds are on logit scale
vlsp = np.percentile(vls, gpct, axis=(2,3))
# vlsp = np.average(vls, axis=(2,3))
thresholds = np.linspace(-5, 10, 151)
scores = np.array([f1_score(vys, (vlsp > t).astype(int), average='macro') \
for t in thresholds])
threshold_best_index = np.argmax(scores)
score_best = scores[threshold_best_index]
threshold_best = thresholds[threshold_best_index]
print('')
print('f1_best',score_best)
print('threshold_best',threshold_best)
print('')
plt.plot(thresholds, scores)
plt.plot(threshold_best, score_best, "xr", label="Best threshold")
plt.xlabel("Threshold")
plt.ylabel("F1")
plt.title("Threshold vs F1 ({}, {})".format(threshold_best, score_best))
plt.legend()
plt.show()
plt.gcf().clear()
# -
vf = vlsp.flatten()
print(vf.min(),vf.mean(),vf.max(),vf.shape)
sns.distplot(vf)
plt.title("Distribution of Predictions (Logit Scale) for Fold " + str(fold+1))
plt.show()
plt.gcf().clear()
np.mean(vys,axis=0)
# +
# error analysis
from sklearn.metrics import confusion_matrix
cm = [confusion_matrix(vys[:,i], (vlsp[:,i] > threshold_best).astype(int)) \
for i in range(vys.shape[1])]
fm = [f1_score(vys[:,i], (vlsp[:,i] > threshold_best).astype(int)) \
for i in range(vys.shape[1])]
for i in range(vys.shape[1]):
print(LABEL_MAP[i])
print(cm[i], '%4.2f' % fm[i])
print('')
# -
np.mean(fm)
# +
# fm1 = [f for f in fm if f > 0]
# print(len(fm1))
# print(np.mean(fm1))
# -
f1b = np.array([f1_score(y, (l > threshold_best).astype(int)) \
for y,l in zip(vys,vlsp)])
print(f1b.min(),f1b.mean(),f1b.max())
sns.distplot(f1b)
plt.title("Distribution of Sample F1 Scores for Fold " + str(fold))
plt.show()
plt.gcf().clear()
len(f1b)
# +
# plot validation images with scores
# sort from worst to best
order = f1b.argsort()
max_images = 90
# max_images = len(file_list_val)
start = 0
# start = 200
grid_width = 10
grid_height = int(max_images / grid_width)
# print(max_images,grid_height,grid_width)
file_list_val_reordered = [file_list_val[j] for j in order]
for i, idx in enumerate([file_list_val_reordered[i] for i in range(start,(start+max_images))]):
imod = i % 30
if imod == 0:
fig, axs = plt.subplots(3, 10, figsize=(30, 10))
img, y = db_val[order[i]]
img = img[0].data.numpy()[1]
img = img[y_min_pad:(image_size - y_max_pad), x_min_pad:(image_size - x_max_pad)]
true = np.nonzero(vys[order][start+i])
true_str = ' '.join(map(str, true))
pred = np.nonzero((vlsp[order][start+i] > threshold_best).astype(int))
pred_str = ' '.join(map(str, pred))
ax = axs[int(imod / grid_width), imod % grid_width]
ax.imshow(img, cmap='Greens')
ax.set_title(str(i) + ' ' + idx[:13] + '\n' + true_str + ' ' + pred_str)
# ax.set_xlabel(str(round(ioub[i], 3)))
ax.set_xlabel('%4.2f' % (f1b[order][start+i]))
ax.set_yticklabels([])
ax.set_xticklabels([])
if imod == 29:
# plt.suptitle("Green: salt, Red: prediction. Top-left: coverage class, Top-right: salt coverage, Bottom-left: depth, Bottom-right: IOU")
plt.show()
plt.gcf().clear()
gc.collect()
# -
print(ss.head())
print(ss.shape)
# +
clip = 20
with torch.no_grad():
print('predicting test set for bagging')
tp = {}
for i in range(8): tp[i] = []
# 8-way TTA
# for image in tqdm.tqdm(data.DataLoader(test_dataset, batch_size = 30)):
for image in testloader:
# fuse batch size and ncrops
bsize, ncrops, c, h, w = image[0].size()
image = image[0].view(-1, c, h, w)
i = 0
image1 = image.numpy().copy()
# move channels last for augmentation
image1 = np.transpose(image1, (0, 2, 3, 1))
image = image.type(torch.float).to(device)
logits = net(image)
# average over crops
logits = logits.view(bsize, ncrops, num_classes, gsize, gsize).mean(1)
logits = logits.cpu().detach().numpy()
logits = np.clip(logits,-clip,clip)
tp[i].append(logits)
i += 1
for degrees in [90, 180, 270]:
IAA = iaa.Affine(rotate=degrees)
image2 = np.array([IAA.augment_image(imi) for imi in image1])
# move channels first for pytorch
image2 = np.transpose(image2, (0, 3, 1, 2))
image2 = torch.from_numpy(image2)
image2 = image2.type(torch.float).to(device)
logits2 = net(image2)
# average over crops
logits2 = logits2.view(bsize, ncrops, num_classes, gsize, gsize).mean(1)
logits2 = logits2.cpu().detach().numpy()
logits2 = np.clip(logits2,-clip,clip)
IAA = iaa.Affine(rotate=-degrees)
logits2 = np.transpose(logits2, (0, 2, 3, 1))
logits2 = np.array([IAA.augment_image(imi) for imi in logits2])
logits2 = np.transpose(logits2, (0, 3, 1, 2))
tp[i].append(logits2)
i += 1
# horizontally flip image1
IAA = iaa.Fliplr(1.0)
image1 = np.array([IAA.augment_image(imi) for imi in image1])
image2 = np.transpose(image1, (0, 3, 1, 2))
image2 = torch.from_numpy(image2)
image2 = image2.type(torch.float).to(device)
logits2 = net(image2)
# average over crops
logits2 = logits2.view(bsize, ncrops, num_classes, gsize, gsize).mean(1)
logits2 = logits2.cpu().detach().numpy()
logits2 = np.clip(logits2,-clip,clip)
logits2 = np.transpose(logits2, (0, 2, 3, 1))
logits2 = np.array([IAA.augment_image(imi) for imi in logits2])
logits2 = np.transpose(logits2, (0, 3, 1, 2))
tp[i].append(logits2)
i += 1
# rotations again on flipped image
for degrees in [90, 180, 270]:
IAA = iaa.Affine(rotate=degrees)
image2 = np.array([IAA.augment_image(imi) for imi in image1])
image2 = np.transpose(image2, (0, 3, 1, 2))
image2 = torch.from_numpy(image2)
image2 = image2.type(torch.float).to(device)
logits2 = net(image2)
# average over crops
logits2 = logits2.view(bsize, ncrops, num_classes, gsize, gsize).mean(1)
logits2 = logits2.cpu().detach().numpy()
logits2 = np.clip(logits2,-clip,clip)
IAA = iaa.Affine(rotate=-degrees)
logits2 = np.transpose(logits2, (0, 2, 3, 1))
logits2 = np.array([IAA.augment_image(imi) for imi in logits2])
IAA = iaa.Fliplr(1.0)
logits2 = np.array([IAA.augment_image(imi) for imi in logits2])
logits2 = np.transpose(logits2, (0, 3, 1, 2))
tp[i].append(logits2)
i += 1
tps = np.array([np.vstack(tp[i]) for i in range(8)])
print(tps.shape)
tpsf = np.hstack([c.reshape((-1,1)) for c in tps])
print(tpsf.shape)
np.set_printoptions(precision=3,linewidth=100)
print(np.corrcoef(tpsf, rowvar=False))
ptest = np.median(tps,axis=0)
ptesta = np.amax(tps,axis=0)
print(ptest.shape)
# +
# show some test images
nshow = 50
start = np.random.randint(len(test_file_list)-nshow)
stop = start + nshow
grid_width = 10
grid_height = int(max_images / grid_width)
# print(max_images,grid_height,grid_width)
ni = 10
for j in range(int(start/10),int(stop/10)):
jj = j*10
fig, axs = plt.subplots(3, ni, figsize=(20,8))
for i in range(ni):
img = db_test[jj+i][0]
img = img[0].data.numpy()
img = img[:,y_min_pad:(image_size - y_max_pad),
x_min_pad:(image_size - x_max_pad)]
# img = cv2.resize(img,(ori_size,ori_size),interpolation=interp)
pred = np.nonzero((ptest[jj+i] > threshold_best).astype(int))
# pred_str = list(pred)
# pred_str = np.char.mod('%d', pred)
# pred_str = " ".join(pred_str)
pred_str = ' '.join(map(str, pred))
axs[0][i].imshow(img[0], cmap="Reds")
axs[1][i].imshow(img[1], cmap="Greens")
axs[2][i].imshow(img[2], cmap="Blues")
# axs[3][i].imshow(img[3], cmap="Oranges")
# axs[0][i].set_title(pred_str)
# fig.suptitle("Top row: original, bottom row: green channel")
plt.show()
plt.gcf().clear()
# # clean up to save on memory accumulation across folds
# del net
# del inputs, gts
# del image, image2
# del writer, scheduler, optimizer
# del y_pred, y_pred2
# torch.cuda.empty_cache()
# gc.collect()
# -
sub = [test_file_list, ptest, ptesta]
fname = 'sub/'+mname+'_'+str(fold)+'_mm.pkl'
pickle.dump(sub,open(fname,'wb'))
print(fname)
|
wienerschnitzelgemeinschaft/src/Russ/preresnet_h67_0a.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.gridspec as gridspec
'''
# data preprocessing step
'''
pd.set_option('display.width', 200) # for display width
# 1. read data config(*.csv) file
# [note] *.xlsx must be converted to csv format because *.xlsx files are not supported here
data_config = pd.read_csv('./data/train/data_config.csv', header=0, index_col=0)
print("***** data configurations *****")
print("- config data shape : ", data_config.shape)
# 2. read all data logs (FSR matrix, Seat Sensor Data)
fsr_matrix_data = {}
seat_data = {}
for idx in data_config.index:
fsr_filepath = './data/train/'+data_config.loc[idx, "fsr_matrix_1d_datafile"]
seat_filepath = './data/train/'+data_config.loc[idx, "seat_datafile"]
print(idx, ") read data files : ", fsr_filepath, ",", seat_filepath)
tmp_fsr_data = pd.read_csv(fsr_filepath, header=0, index_col=False)
tmp_seat_data = pd.read_csv(seat_filepath, header=0, index_col=False)
fsr_matrix_data[idx] = tmp_fsr_data.iloc[:,0:] # slice by the end of column
seat_data[idx] = tmp_seat_data
# +
# hampel filtering
from hampel import hampel
from scipy.signal import medfilt
idx = 1
seat_loadcell = seat_data[idx].loc[:,["Seat L1", "Seat L2", "Seat L3", "Seat L4"]]
# outlier detection
#outlier_indices = hampel(seat_loadcell["Seat L1"], window_size=5, n=3)
#print("Outlier Indices: ", outlier_indices)
# Outlier Imputation with rolling median
ts_imputation = hampel(seat_loadcell["Seat L1"], window_size=21, n=1, imputation=True)
ts_median = medfilt(seat_loadcell["Seat L1"].values, 21)
plt.figure(figsize=(25,5), constrained_layout=True)
plt.plot(seat_loadcell["Seat L1"], 'b', label='original L1 data')
plt.plot(ts_imputation, 'r', label='Hampel(21)')
#plt.plot(ts_median, 'g', label='Median(21)')
plt.legend()
plt.grid()
plt.show()
print(seat_loadcell["Seat L1"])
print(ts_imputation)
print(ts_median)
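# The Hampel filtering above is just a rolling median combined with a MAD-based outlier
# test. As a minimal, dependency-free sketch of the same idea (the window size, threshold,
# and synthetic spiked signal below are illustrative assumptions, not values taken from
# the seat data):

```python
import numpy as np

def hampel_filter(x, window_size=11, n_sigmas=3.0):
    """Replace points more than n_sigmas MAD-scaled deviations from the rolling median."""
    x = np.asarray(x, dtype=float)
    y = x.copy()
    k = 1.4826  # scales MAD to a standard-deviation estimate for Gaussian data
    half = window_size // 2
    for i in range(half, len(x) - half):
        window = x[i - half:i + half + 1]
        med = np.median(window)
        mad = k * np.median(np.abs(window - med))
        if np.abs(x[i] - med) > n_sigmas * mad:
            y[i] = med  # impute the outlier with the rolling median
    return y

# synthetic load-cell-like signal with one injected spike
t = np.linspace(0.0, 2.0 * np.pi, 200)
signal = np.sin(t)
signal[50] += 5.0
clean = hampel_filter(signal, window_size=11, n_sigmas=3.0)
```

# Variants of this rolling median/MAD scheme are what the `hampel` package implements.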
|
2021_legrest/hampel_filter.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] slideshow={"slide_type": "skip"}
# <table>
# <tr align=left><td><img align=left src="https://i.creativecommons.org/l/by/4.0/88x31.png">
# <td>Text provided under a Creative Commons Attribution license, CC-BY. All code is made available under the FSF-approved MIT license. (c) <NAME></td>
# </table>
# + slideshow={"slide_type": "skip"}
from __future__ import print_function
# %matplotlib inline
import numpy
import matplotlib.pyplot as plt
# + [markdown] slideshow={"slide_type": "slide"}
# # Iterative Methods
# + [markdown] slideshow={"slide_type": "slide"}
# In this lecture we will consider a number of classical and more modern methods for solving sparse linear systems like those we found from our consideration of boundary value problems.
# + [markdown] slideshow={"slide_type": "slide"}
# ## Ways to Solve $A u = f$
# + [markdown] slideshow={"slide_type": "subslide"}
# We have proposed solving the linear system $A u = f$ which we have implemented naively above with the `numpy.linalg.solve` command but perhaps given the special structure of $A$ here that we can do better.
# + [markdown] slideshow={"slide_type": "subslide"}
# ### Direct Methods (Gaussian Elimination)
#
# We could use Gaussian elimination (or some factorization) to solve the system, which leads to a solution in a finite number of steps. For large, sparse systems, however, direct solvers are in general much more expensive than iterative solvers. As was discussed for eigenproblems, iterative solvers start with an initial guess and try to improve on it.
# + [markdown] slideshow={"slide_type": "subslide"}
# #### Computational Load
#
# Now consider using Gaussian elimination on the above matrix. For good measure let us consider a 3D problem and discretize each dimension with $N = 100$ points, leading to $m = 100 \times 100 \times 100 = 10^6$ unknowns.
#
# Gaussian Elimination - $\mathcal{O}(m^3)$ operations to solve, $(10^6)^3 = 10^{18}$ operations.
#
# Suppose you have a machine that can perform 100 gigaflops (floating point operations per second):
# $$
# \frac{10^{18}~ [\text{flop}]}{10^{11}~ [\text{flop / s}]} = 10^7~\text{s} \approx 115~\text{days}.
# $$
# + [markdown] slideshow={"slide_type": "subslide"}
# #### Memory Load
#
# What about memory?
#
# We require $m^2$ entries to store the entire dense matrix. In double precision floating point we would require 8 bytes per entry, leading to
# $$
# (10^6)^2 ~[\text{entries}] \times 8 ~[\text{bytes / entry}] = 8 \times 10^{12} ~[\text{bytes}] = 8 ~[\text{terabytes}].
# $$
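# A quick sanity check of the back-of-the-envelope numbers above:

```python
# Gaussian elimination cost for m = 10^6 unknowns (a 100^3 grid)
m = 10**6
flops = float(m)**3            # O(m^3) operations for a dense solve
rate = 1e11                    # a 100 gigaflop machine
days = flops / rate / 86400.0  # 86400 seconds per day
memory_tb = m**2 * 8 / 1e12    # dense storage at 8 bytes per entry, in terabytes
print(days, memory_tb)
```

# This reproduces the roughly 115 days and 8 terabytes quoted above.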
# + [markdown] slideshow={"slide_type": "subslide"}
# The situation really is not as bad as we are making it out to be as long as we take advantage of the sparse nature of the matrices. In fact for 1-dimensional problems direct methods can be reduced to $\mathcal{O}(N)$ in the case of a tridiagonal system. The situation is not so great for higher-dimensional problems however unless more structure can be leveraged. Examples of these types of solvers include fast Fourier methods such as fast Poisson solvers.
# + [markdown] slideshow={"slide_type": "subslide"}
# ### Iterative Methods
#
# Iterative methods take a different tack than direct methods. If we have the system $A x = b$ we form an iterative procedure that applies a function, say $L$, such that
# $$
# \hat{x}^{(k)} = L(\hat{x}^{(k-1)})
# $$
# where we want the error between the true solution $x$ and $\hat{x}^{(k)}$ to go to zero as $k \rightarrow \infty$. We explore these methods below.
# + [markdown] slideshow={"slide_type": "slide"}
# ## Jacobi and Gauss-Seidel
#
# The Jacobi and Gauss-Seidel methods are simple approaches to introducing an iterative means for solving the problem $Ax = b$ when $A$ is sparse. Consider again the Poisson problem $u_{xx} = f(x)$ and the finite difference approximation at the point $x_i$
# $$
# \frac{U_{i-1} - 2 U_i + U_{i+1}}{\Delta x^2} = f(x_i).
# $$
# + [markdown] slideshow={"slide_type": "subslide"}
# If we rearrange this expression to solve for $U_i$ we have
# $$
# U_i = \frac{1}{2} (U_{i+1} + U_{i-1}) - f(x_i) \frac{\Delta x^2}{2}.
# $$
# + [markdown] slideshow={"slide_type": "subslide"}
# For a direct method we would simultaneously find the values of $U_i$, $U_{i+1}$ and $U_{i-1}$ but instead consider the iterative scheme computes an update to the equation above by using the past iterate (values we already know)
# $$
# U_i^{(k+1)} = \frac{1}{2} (U_{i+1}^{(k)} + U_{i-1}^{(k)}) - f(x_i) \frac{\Delta x^2}{2}.
# $$
# + [markdown] slideshow={"slide_type": "subslide"}
# Since this allows us to evaluate $U_i^{(k + 1)}$ knowing only the previous iterate's values $U_{i+1}^{(k)}$ and $U_{i-1}^{(k)}$, we can evaluate this expression directly! This process is called **Jacobi iteration**. It can be shown that for this particular problem Jacobi iteration will converge from any initial guess $U^{(0)}$, although slowly.
# + [markdown] slideshow={"slide_type": "subslide"}
# Advantages
# - Matrix $A$ is never stored or created
# - Storage is optimal
# - $\mathcal{O}(m)$ are required per iteration where $m$ is the number of unknowns
# + [markdown] slideshow={"slide_type": "subslide"}
# ### Example
# Let's try to solve the problem before in the BVP section but use Jacobi iterations to replace the direct solve
# $$
# u_{xx} = e^x, \quad x \in [0, 1] \quad \text{with} \quad u(0) = 0.0, \text{ and } u(1) = 3.
# $$
# + slideshow={"slide_type": "skip"}
# Problem setup
a = 0.0
b = 1.0
u_a = 0.0
u_b = 3.0
f = lambda x: numpy.exp(x)
u_true = lambda x: (4.0 - numpy.exp(1.0)) * x - 1.0 + numpy.exp(x)
def U_true(a, b, u_a, u_b, f, m):
"""Compute the solution to the given linear system"""
x_bc = numpy.linspace(a, b, m + 2)
x = x_bc[1:-1]
delta_x = (b - a) / (m + 1)
# Construct matrix A
A = numpy.zeros((m, m))
diagonal = numpy.ones(m) / delta_x**2
A += numpy.diag(diagonal * -2.0, 0)
A += numpy.diag(diagonal[:-1], 1)
A += numpy.diag(diagonal[:-1], -1)
# Construct RHS
b = f(x)
b[0] -= u_a / delta_x**2
b[-1] -= u_b / delta_x**2
# Solve system
U = numpy.empty(m + 2)
U[0] = u_a
U[-1] = u_b
U[1:-1] = numpy.linalg.solve(A, b)
return U
# Discretization
m = 100
x_bc = numpy.linspace(a, b, m + 2)
x = x_bc[1:-1]
delta_x = (b - a) / (m + 1)
# Expected iterations needed
iterations_J = int(2.0 * numpy.log(delta_x) / numpy.log(1.0 - 0.5 * numpy.pi**2 * delta_x**2))
# Solve system
U_system = U_true(a, b, u_a, u_b, f, m)
epsilon = numpy.linalg.norm(u_true(x_bc) - U_system, ord=2)
# Initial guess for iterations
U_new = numpy.zeros(m + 2)
U_new[0] = u_a
U_new[-1] = u_b
convergence_J = numpy.zeros((iterations_J, 2))
step_size_J = numpy.zeros(iterations_J)
for k in range(iterations_J):
U = U_new.copy()
for i in range(1, m + 1):
U_new[i] = 0.5 * (U[i+1] + U[i-1]) - f(x_bc[i]) * delta_x**2 / 2.0
step_size_J[k] = numpy.linalg.norm(U - U_new, ord=2)
convergence_J[k, 0] = numpy.linalg.norm(U_system - U_new, ord=2)
convergence_J[k, 1] = numpy.linalg.norm(u_true(x_bc) - U_new, ord=2)
# Plot result
fig = plt.figure()
axes = fig.add_subplot(1, 1, 1)
axes.plot(x_bc, U, 'o', label="Computed")
axes.plot(x_bc, u_true(x_bc), 'k', label="True")
axes.set_title("Solution to $u_{xx} = e^x$")
axes.set_xlabel("x")
axes.set_ylabel("u(x)")
axes.legend()
fig = plt.figure()
fig.set_figwidth(fig.get_figwidth() * 3)
axes = fig.add_subplot(1, 3, 1)
axes.semilogy(list(range(iterations_J)), step_size_J, 'o')
axes.semilogy(list(range(iterations_J)), numpy.ones(iterations_J) * delta_x**2, 'r--')
axes.set_title("Subsequent Step Size - J")
axes.set_xlabel("Iteration")
axes.set_ylabel("$||U^{(k)} - U^{(k-1)}||_2$")
axes = fig.add_subplot(1, 3, 2)
axes.semilogy(list(range(iterations_J)), convergence_J[:, 0], 'o')
axes.semilogy(list(range(iterations_J)), numpy.ones(iterations_J) * epsilon, 'r--')
axes.set_title("Convergence to Solution of System - J")
axes.set_xlabel("Iteration")
axes.set_ylabel("$||U^* - U^{(k-1)}||_2$")
axes = fig.add_subplot(1, 3, 3)
axes.semilogy(list(range(iterations_J)), convergence_J[:, 1], 'o')
axes.semilogy(list(range(iterations_J)), numpy.ones(iterations_J) * delta_x**2, 'r--')
axes.set_title("Convergence to True Solution - J")
axes.set_xlabel("Iteration")
axes.set_ylabel("$||u(x) - U^{(k-1)}||_2$")
plt.show()
# + [markdown] slideshow={"slide_type": "subslide"}
# A slight modification to the above also leads to the Gauss-Seidel method. Programmatically the change is easy to see; in the iteration above we now will have
# $$
# U_i^{(k+1)} = \frac{1}{2} (U_{i+1}^{(k)} + U_{i-1}^{(k+1)}) - f(x_i) \frac{\Delta x^2}{2}.
# $$
# + slideshow={"slide_type": "skip"}
# Problem setup
a = 0.0
b = 1.0
u_a = 0.0
u_b = 3.0
f = lambda x: numpy.exp(x)
u_true = lambda x: (4.0 - numpy.exp(1.0)) * x - 1.0 + numpy.exp(x)
# Discretization
m = 100
x_bc = numpy.linspace(a, b, m + 2)
x = x_bc[1:-1]
delta_x = (b - a) / (m + 1)
# Expected iterations needed
iterations_GS = int(2.0 * numpy.log(delta_x) / numpy.log(1.0 - numpy.pi**2 * delta_x**2))
# Solve system
U_system = U_true(a, b, u_a, u_b, f, m)
epsilon = numpy.linalg.norm(u_true(x_bc) - U_system, ord=2)
# Initial guess for iterations
U = numpy.zeros(m + 2)
U[0] = u_a
U[-1] = u_b
convergence_GS = numpy.zeros((iterations_GS, 2))
step_size_GS = numpy.zeros(iterations_GS)
success = False
for k in range(iterations_GS):
U_old = U.copy()
for i in range(1, m + 1):
U[i] = 0.5 * (U[i+1] + U[i-1]) - f(x_bc[i]) * delta_x**2 / 2.0
convergence_GS[k, 0] = numpy.linalg.norm(U_system - U, ord=2)
convergence_GS[k, 1] = numpy.linalg.norm(u_true(x_bc) - U, ord=2)
step_size_GS[k] = numpy.linalg.norm(U_old - U, ord=2)
# Plot result
fig = plt.figure()
axes = fig.add_subplot(1, 1, 1)
axes.plot(x_bc, U, 'o', label="Computed")
axes.plot(x_bc, u_true(x_bc), 'k', label="True")
axes.set_title("Solution to $u_{xx} = e^x$")
axes.set_xlabel("x")
axes.set_ylabel("u(x)")
axes.legend()
fig = plt.figure()
fig.set_figwidth(fig.get_figwidth() * 3)
axes = fig.add_subplot(1, 3, 1)
axes.semilogy(list(range(iterations_GS)), step_size_GS, 'o')
axes.semilogy(list(range(iterations_GS)), numpy.ones(iterations_GS) * delta_x**2, 'r--')
axes.set_title("Subsequent Step Size - GS")
axes.set_xlabel("Iteration")
axes.set_ylabel("$||U^{(k)} - U^{(k-1)}||_2$")
axes = fig.add_subplot(1, 3, 2)
axes.semilogy(list(range(iterations_GS)), convergence_GS[:, 0], 'o')
axes.semilogy(list(range(iterations_GS)), numpy.ones(iterations_GS) * epsilon, 'r--')
axes.set_title("Convergence to Solution of System - GS")
axes.set_xlabel("Iteration")
axes.set_ylabel("$||U^* - U^{(k-1)}||_2$")
axes = fig.add_subplot(1, 3, 3)
axes.semilogy(list(range(iterations_GS)), convergence_GS[:, 1], 'o')
axes.semilogy(list(range(iterations_GS)), numpy.ones(iterations_GS) * delta_x**2, 'r--')
axes.set_title("Convergence to True Solution - GS")
axes.set_xlabel("Iteration")
axes.set_ylabel("$||u(x) - U^{(k-1)}||_2$")
plt.show()
# + [markdown] slideshow={"slide_type": "slide"}
# ### Matrix Splitting Methods
#
# One way to view Jacobi and Gauss-Seidel is as a splitting of the matrix $A$ so that
# $$
# A = M - N.
# $$
# + [markdown] slideshow={"slide_type": "subslide"}
# Then the system $A U = b$ can be viewed as
# $$
# M U - N U = b \Rightarrow MU = NU + b.
# $$
# + [markdown] slideshow={"slide_type": "subslide"}
# Viewing this instead as an iteration we have then
# $$
# M U^{(k+1)} = N U^{(k)} + b.
# $$
# The goal then would be to pick $M$ and $N$ such that $M$ contains as much of $A$ as possible while remaining easier to solve than the original system.
# + [markdown] slideshow={"slide_type": "subslide"}
# The resulting update for each of these then becomes
# $$
# U^{(k+1)} = M^{-1} N U^{(k)} + M^{-1} b = G U^{(k)} + c
# $$
# where $G$ is called the **iteration matrix** and $c = M^{-1} b$. We also want
# $$
# u = G u + c
# $$
# where $u$ is the true solution of the original $A u = b$, in other words $u$ is the fixed point of the iteration. Is this fixed point stable though? If the spectral radius $\rho(G) < 1$ we can show that in fact the iteration is stable.
# -
# Note the similarity between our earlier stability analysis, which dealt with $||A^{-1}||$, and the role of $G = M^{-1} N$ here; the two analyses are related but not identical.
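# As a concrete check of this stability condition, we can form the Jacobi iteration matrix
# for the 1D Poisson matrix and compute its spectral radius numerically; for this model
# problem the spectral radius works out to $\cos(\pi \Delta x) < 1$ (the grid size below
# is an arbitrary illustrative choice):

```python
import numpy as np

# Jacobi iteration matrix for the 1D Poisson matrix: G = I - D^{-1} A
m = 20
delta_x = 1.0 / (m + 1)
A = (np.diag(-2.0 * np.ones(m)) + np.diag(np.ones(m - 1), 1)
     + np.diag(np.ones(m - 1), -1)) / delta_x**2
D = np.diag(np.diag(A))                    # Jacobi splitting: M = D
G = np.eye(m) - np.linalg.solve(D, A)      # G = M^{-1} N = I - D^{-1} A
rho = np.max(np.abs(np.linalg.eigvals(G)))
print(rho, np.cos(np.pi * delta_x))        # rho = cos(pi * delta_x) < 1
```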
# + [markdown] slideshow={"slide_type": "subslide"}
# For Jacobi the splitting is
# $$
# M = -\frac{2}{\Delta x^2} I, \quad \text{and}\quad N = -\frac{1}{\Delta x^2} \begin{bmatrix}
# 0 & 1 & \\
# 1 & 0 & 1 \\
# & \ddots & \ddots & \ddots \\
# & & 1 & 0 & 1 \\
# & & & 1 & 0
# \end{bmatrix}
# $$
# (sticking to the Poisson problem). $M$ is now a diagonal matrix and easy to solve.
# + [markdown] slideshow={"slide_type": "subslide"}
# For Gauss-Seidel we have
# $$
# M = \frac{1}{\Delta x^2} \begin{bmatrix}
# -2 & & \\
# 1 & -2 & \\
# & \ddots & \ddots \\
# & & 1 & -2 & \\
# & & & 1 & -2
# \end{bmatrix} \quad \text{and} \quad
# N = -\frac{1}{\Delta x^2} \begin{bmatrix}
# 0 & 1 & \\
# & 0 & 1 \\
# & & \ddots & \ddots \\
# & & & 0 & 1\\
# & & & & 0
# \end{bmatrix}
# $$
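# Both splittings are easy to verify numerically; a small sketch (the matrix size is an
# arbitrary choice) confirming $A = M - N$ for each, with $M$ and $N$ built exactly as
# written above:

```python
import numpy as np

# the 1D Poisson matrix
m = 10
dx = 1.0 / (m + 1)
off = np.diag(np.ones(m - 1), 1) + np.diag(np.ones(m - 1), -1)
A = (-2.0 * np.eye(m) + off) / dx**2

# Jacobi: M is the diagonal part of A, N the negated off-diagonal part
M_J = -2.0 / dx**2 * np.eye(m)
N_J = -off / dx**2
assert np.allclose(A, M_J - N_J)

# Gauss-Seidel: M is the lower triangle of A, N the negated strict upper triangle
M_GS = (-2.0 * np.eye(m) + np.diag(np.ones(m - 1), -1)) / dx**2
N_GS = -np.diag(np.ones(m - 1), 1) / dx**2
assert np.allclose(A, M_GS - N_GS)
print("both splittings satisfy A = M - N")
```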
# + [markdown] slideshow={"slide_type": "subslide"}
# ### Stopping Criteria
#
# How many iterations should we perform? Let $E^{(k)}$ represent the error present at step $k$. If we want to reduce the error at the first step $E^{(0)}$ to order $\epsilon$ then we have
# $$
# ||E^{(k)}|| \approx \epsilon ||E^{(0)}||.
# $$
# + [markdown] slideshow={"slide_type": "subslide"}
# Under suitable assumption we can bound the error in the 2-norm as
# $$
# ||E^{(k)}||_2 \leq \rho(G)^k ||E^{(0)}||_2.
# $$
# where $\rho(G)$ is the spectral radius of the iteration matrix.
# + [markdown] slideshow={"slide_type": "subslide"}
# Moving back to our estimate of the number of iterations, combining our two expressions involving the error $E$ gives $\rho(G)^k \approx \epsilon$, so that
# $$
#     k \approx \frac{\log \epsilon}{\log \rho(G)}
# $$
# iterations are required to reduce the error by a factor of $\epsilon$.
# + [markdown] slideshow={"slide_type": "subslide"}
# Picking $\epsilon$ is a bit tricky but one natural criteria to use would be $\epsilon = \mathcal{O}(\Delta x^2)$ since our original discretization was 2nd-order accurate. This leads to
# $$
# k = \frac{2 \log \Delta x}{\log \rho}.
# $$
# This also allows us to estimate the total number of operations that need to be used.
# + [markdown] slideshow={"slide_type": "subslide"}
# For Jacobi we have the spectral radius of $G$ as
# $$
# \rho_J \approx 1 - \frac{1}{2} \pi^2 \Delta x^2.
# $$
# so that
# $$
# k = \mathcal{O}(m^2 \log m) \quad \text{as} \quad m \rightarrow \infty.
# $$
# + [markdown] slideshow={"slide_type": "subslide"}
# Combining this with the previous operation count per iteration we find that Jacobi would lead to $\mathcal{O}(m^3 \log m)$ work which is not very promising.
# + [markdown] slideshow={"slide_type": "subslide"}
# For two dimensions we have $\mathcal{O}(m^4 \log m)$ so even compared to Gaussian elimination this approach is not ideal.
# + [markdown] slideshow={"slide_type": "subslide"}
# What about Gauss-Seidel? Here the spectral radius is approximately
# $$
# \rho_{GS} \approx 1 - \pi^2 \Delta x^2
# $$
# so that
# $$
# k = \frac{2 \log \Delta x}{\log (1 - \pi^2 \Delta x^2)} = \mathcal{O}(m^2 \log m)
# $$
# which still does not lead to an advantage over direct solvers. It does show that Gauss-Seidel converges roughly twice as fast as Jacobi, due to the factor of 2 difference between $\rho_J$ and $\rho_{GS}$.
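# + [markdown] slideshow={"slide_type": "skip"}
# We can verify these spectral radii numerically for a small Poisson matrix (a sketch using the same tridiagonal discretization as the code below); for this matrix $\rho_J = \cos(\pi \Delta x)$ and $\rho_{GS} = \rho_J^2$.

```python
import numpy

# Tridiagonal Poisson matrix for a small m
m = 20
delta_x = 1.0 / (m + 1)
A = (numpy.diag(2.0 * numpy.ones(m))
     + numpy.diag(-numpy.ones(m - 1), 1)
     + numpy.diag(-numpy.ones(m - 1), -1)) / delta_x**2

# Splitting A = D - L - U
D = numpy.diag(numpy.diag(A))
L = -numpy.tril(A, -1)
U = -numpy.triu(A, 1)

# Iteration matrices and their spectral radii
G_J = numpy.linalg.solve(D, L + U)
G_GS = numpy.linalg.solve(D - L, U)
rho_J = numpy.max(numpy.abs(numpy.linalg.eigvals(G_J)))
rho_GS = numpy.max(numpy.abs(numpy.linalg.eigvals(G_GS)))
```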
# + [markdown] slideshow={"slide_type": "slide"}
# ## Successive Overrelaxation (SOR)
#
# Well that's a bit disappointing isn't it? These iterative schemes do not seem to be worth much, but it turns out we can do better with a slight modification to Gauss-Seidel.
# + [markdown] slideshow={"slide_type": "subslide"}
# If you look at the Gauss-Seidel iteration it turns out it moves $U$ in the correct direction toward $u$ but is very conservative in the step size. If instead we compute
# $$\begin{aligned}
# U^{GS}_i &= \frac{1}{2} \left(U^{(k+1)}_{i-1} + U^{(k)}_{i+1} - \Delta x^2 f_i\right) \\
# U^{(k+1)}_i &= U_i^{(k)} + \omega \left( U_i^{GS} - U_i^{(k)}\right )
# \end{aligned}$$
# where we get to pick $\omega$ we can do much better.
# + [markdown] slideshow={"slide_type": "subslide"}
# If $\omega = 1$ then we recover Gauss-Seidel.
#
# If $\omega < 1$ we move even less and the method converges even more slowly (although this is sometimes used for multigrid under the name **underrelaxation**).
#
# If $\omega > 1$ then we move further than Gauss-Seidel suggests; any method with $\omega > 1$ is known as **successive overrelaxation** (SOR).
# + [markdown] slideshow={"slide_type": "subslide"}
# We can write this as a matrix splitting method as well. We can combine the two-step formula above to find
# $$
# U^{(k+1)}_i = \frac{\omega}{2} \left( U^{(k+1)}_{i-1} + U^{(k)}_{i+1} - \Delta x^2 f_i \right ) + (1 - \omega) U_i^{(k)}
# $$
# corresponding to a matrix splitting of
# $$
# M = \frac{1}{\omega} (D - \omega L) \quad \text{and} \quad N = \frac{1}{\omega} ((1-\omega) D + \omega U)
# $$
# where $D$ is the diagonal of the matrix $A$, and $L$ and $U$ are the lower and upper triangular parts without the diagonal of $A$.
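# + [markdown] slideshow={"slide_type": "skip"}
# As a sanity check on this splitting (a sketch using the sign convention $A = D - L - U$), we can verify numerically that $M - N = A$:

```python
import numpy

# Small tridiagonal test matrix
m = 5
A = (numpy.diag(2.0 * numpy.ones(m))
     + numpy.diag(-numpy.ones(m - 1), 1)
     + numpy.diag(-numpy.ones(m - 1), -1))

# A = D - L - U with L, U the (negated) strict triangular parts
D = numpy.diag(numpy.diag(A))
L = -numpy.tril(A, -1)
U = -numpy.triu(A, 1)

omega = 1.5
M = (D - omega * L) / omega
N = ((1.0 - omega) * D + omega * U) / omega

splitting_ok = numpy.allclose(M - N, A)
```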
# + [markdown] slideshow={"slide_type": "subslide"}
# It can be shown that the SOR method converges for any $\omega$ with $0 < \omega < 2$ (for symmetric positive definite $A$); outside this range it cannot converge.
# + [markdown] slideshow={"slide_type": "subslide"}
# It turns out we can also find an optimal $\omega$ for a wide class of problems. For Poisson problems in any number of space dimensions, for instance, it can be shown that the SOR method converges optimally if
# $$
# \omega_{opt} = \frac{2}{1 + \sin(\pi \Delta x)} \approx 2 - 2 \pi \Delta x.
# $$
# + [markdown] slideshow={"slide_type": "subslide"}
# What about the number of iterations? We can follow the same tactic as before with the spectral radius of $G_{SOR}$ now
# $$
# \rho = \omega_{opt} - 1 \approx 1 - 2 \pi \Delta x.
# $$
# + [markdown] slideshow={"slide_type": "subslide"}
# This leads to an iteration count of
# $$
# k = \mathcal{O}(m \log m)
# $$
# an order of magnitude better than Gauss-Seidel alone!
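# + [markdown] slideshow={"slide_type": "skip"}
# A quick numerical comparison (a sketch): evaluate $\omega_{opt}$ and compare the predicted SOR and Gauss-Seidel iteration counts for a modest grid.

```python
import numpy

m = 50
delta_x = 1.0 / (m + 1)

omega_opt = 2.0 / (1.0 + numpy.sin(numpy.pi * delta_x))
rho_SOR = omega_opt - 1.0
rho_GS = 1.0 - numpy.pi**2 * delta_x**2   # approximate

# Predicted iteration counts for epsilon = delta_x**2
k_SOR = 2.0 * numpy.log(delta_x) / numpy.log(rho_SOR)
k_GS = 2.0 * numpy.log(delta_x) / numpy.log(rho_GS)
```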
# + slideshow={"slide_type": "skip"}
# Problem setup
a = 0.0
b = 1.0
u_a = 0.0
u_b = 3.0
f = lambda x: numpy.exp(x)
u_true = lambda x: (4.0 - numpy.exp(1.0)) * x - 1.0 + numpy.exp(x)
# Discretization
m = 50
x_bc = numpy.linspace(a, b, m + 2)
x = x_bc[1:-1]
delta_x = (b - a) / (m + 1)
# SOR parameter
omega = 2.0 / (1.0 + numpy.sin(numpy.pi * delta_x))
# Expected iterations needed (doubled as a safety margin)
iterations_SOR = int(2.0 * numpy.log(delta_x) / numpy.log(1.0 - 2.0 * numpy.pi * delta_x)) * 2
# Solve system
U_system = U_true(a, b, u_a, u_b, f, m)
epsilon = numpy.linalg.norm(u_true(x_bc) - U_system, ord=2)
# Initial guess for iterations
U = numpy.zeros(m + 2)
U[0] = u_a
U[-1] = u_b
step_size_SOR = numpy.zeros(iterations_SOR)
convergence_SOR = numpy.zeros((iterations_SOR, 2))
for k in range(iterations_SOR):
U_old = U.copy()
for i in range(1, m + 1):
U_gs = 0.5 * (U[i-1] + U[i+1] - delta_x**2 * f(x_bc[i]))
U[i] += omega * (U_gs - U[i])
step_size_SOR[k] = numpy.linalg.norm(U_old - U, ord=2)
convergence_SOR[k, 0] = numpy.linalg.norm(U_system - U, ord=2)
convergence_SOR[k, 1] = numpy.linalg.norm(u_true(x_bc) - U, ord=2)
# Plot result
fig = plt.figure()
axes = fig.add_subplot(1, 1, 1)
axes.plot(x_bc, U, 'o', label="Computed")
axes.plot(x_bc, u_true(x_bc), 'k', label="True")
axes.set_title("Solution to $u_{xx} = e^x$")
axes.set_xlabel("x")
axes.set_ylabel("u(x)")
fig = plt.figure()
fig.set_figwidth(fig.get_figwidth() * 3)
axes = fig.add_subplot(1, 3, 1)
axes.semilogy(list(range(iterations_SOR)), step_size_SOR, 'o')
axes.semilogy(list(range(iterations_SOR)), numpy.ones(iterations_SOR) * delta_x**2, 'r--')
axes.set_title("Subsequent Step Size - SOR")
axes.set_xlabel("Iteration")
axes.set_ylabel("$||U^{(k)} - U^{(k-1)}||_2$")
axes = fig.add_subplot(1, 3, 2)
axes.semilogy(list(range(iterations_SOR)), convergence_SOR[:, 0], 'o')
axes.semilogy(list(range(iterations_SOR)), numpy.ones(iterations_SOR) * epsilon, 'r--')
axes.set_title("Convergence to Solution of System - SOR")
axes.set_xlabel("Iteration")
axes.set_ylabel("$||U^* - U^{(k-1)}||_2$")
axes = fig.add_subplot(1, 3, 3)
axes.semilogy(list(range(iterations_SOR)), convergence_SOR[:, 1], 'o')
axes.semilogy(list(range(iterations_SOR)), numpy.ones(iterations_SOR) * delta_x**2, 'r--')
axes.set_title("Convergence to True Solution - SOR")
axes.set_xlabel("Iteration")
axes.set_ylabel("$||u(x) - U^{(k-1)}||_2$")
plt.show()
# + slideshow={"slide_type": "skip"}
# Plotting all the convergence rates
for i in range(2):
fig = plt.figure()
fig.set_figwidth(fig.get_figwidth() * 2)
axes = fig.add_subplot(1, 2, 1)
axes.semilogy(range(iterations_J), step_size_J, 'r', label="Jacobi")
axes.semilogy(range(iterations_GS), step_size_GS, 'b', label="Gauss-Seidel")
axes.semilogy(range(iterations_SOR), step_size_SOR, 'k', label="SOR")
axes.semilogy(range(iterations_J), numpy.ones(iterations_J) * delta_x**2, 'r--')
axes.legend(loc=1)
axes.set_title("Comparison of Step Sizes")
axes.set_xlabel("Iteration")
axes.set_ylabel("$||U^{(k)} - U^{(k-1)}||_2$")
axes = fig.add_subplot(1, 2, 2)
axes.semilogy(range(iterations_J), convergence_J[:, i], 'r', label="Jacobi")
axes.semilogy(range(iterations_GS), convergence_GS[:, i], 'b', label="Gauss-Seidel")
axes.semilogy(range(iterations_SOR), convergence_SOR[:, i], 'k', label="SOR")
axes.semilogy(range(iterations_J), numpy.ones(iterations_J) * delta_x**2, 'r--')
axes.legend(loc=1)
axes.set_title("Comparison of Convergence Rates")
axes.set_xlabel("Iteration")
axes.set_ylabel("$||u(x) - U^{(k-1)}||_2$")
fig = plt.figure()
fig.set_figwidth(fig.get_figwidth() * 2)
axes = fig.add_subplot(1, 2, 1)
axes.semilogy(range(iterations_SOR), step_size_J[:iterations_SOR], 'r', label="Jacobi")
axes.semilogy(range(iterations_SOR), step_size_GS[:iterations_SOR], 'b', label="Gauss-Seidel")
axes.semilogy(range(iterations_SOR), step_size_SOR, 'k', label="SOR")
axes.semilogy(range(iterations_SOR), numpy.ones(iterations_SOR) * delta_x**2, 'r--')
axes.legend(loc=1)
axes.set_title("Comparison of Step Sizes")
axes.set_xlabel("Iteration")
axes.set_ylabel("$||U^{(k)} - U^{(k-1)}||_2$")
axes = fig.add_subplot(1, 2, 2)
axes.semilogy(range(iterations_SOR), convergence_J[:iterations_SOR, i], 'r', label="Jacobi")
axes.semilogy(range(iterations_SOR), convergence_GS[:iterations_SOR, i], 'b', label="Gauss-Seidel")
axes.semilogy(range(iterations_SOR), convergence_SOR[:, i], 'k', label="SOR")
axes.semilogy(range(iterations_SOR), numpy.ones(iterations_SOR) * delta_x**2, 'r--')
axes.legend(loc=3)
axes.set_title("Comparison of Convergence Rates")
axes.set_xlabel("Iteration")
axes.set_ylabel("$||u(x) - U^{(k-1)}||_2$")
plt.show()
# + [markdown] slideshow={"slide_type": "slide"}
# ## Descent Methods
#
# One special class of matrices is amenable to another powerful iterative approach. A matrix $A$ is said to be **symmetric positive definite** (SPD) if $A = A^T$ and
# $$
# x^T A x > 0 \quad \forall \quad x \neq 0.
# $$
# + [markdown] slideshow={"slide_type": "subslide"}
# Check to see if
# $$
# A = \begin{bmatrix}
# 2 &-1 &0 &0 \\
# -1 & 2 & -1 & 0 \\
# 0 & -1 & 2 & -1 \\
# 0 & 0 & -1 & 2
# \end{bmatrix}
# $$
# is symmetric positive definite.
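# + [markdown] slideshow={"slide_type": "skip"}
# One way to check this numerically (a sketch): confirm symmetry, inspect the eigenvalues, or attempt a Cholesky factorization, which succeeds exactly when the matrix is SPD.

```python
import numpy

A = numpy.array([[ 2.0, -1.0,  0.0,  0.0],
                 [-1.0,  2.0, -1.0,  0.0],
                 [ 0.0, -1.0,  2.0, -1.0],
                 [ 0.0,  0.0, -1.0,  2.0]])

is_symmetric = numpy.allclose(A, A.T)
eigenvalues = numpy.linalg.eigvalsh(A)     # all positive for an SPD matrix
numpy.linalg.cholesky(A)                   # raises LinAlgError if not SPD
```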
# + [markdown] slideshow={"slide_type": "subslide"}
# Now define a function $\phi: \mathbb R^m \rightarrow \mathbb R$ such that
# $$
# \phi(u) = \frac{1}{2} u^T A u - u^T f.
# $$
# This is a quadratic function in the variables $u_i$ and in the case where $m = 2$ forms a parabolic bowl. Since $A$ is symmetric positive definite there is a unique minimum, $u^\ast$.
# + [markdown] slideshow={"slide_type": "subslide"}
# Let's see how approaching the problem like this helps us:
#
# For the $m = 2$ case write the function $\phi(u)$ out.
# + [markdown] slideshow={"slide_type": "subslide"}
# $$
# \phi(u) = \frac{1}{2} (A_{11} u_1^2 + A_{12} u_1 u_2 + A_{21} u_1 u_2 + A_{22} u^2_2) - u_1 f_1 - u_2 f_2
# $$
# + [markdown] slideshow={"slide_type": "subslide"}
# What property of the matrix $A$ simplifies the expression above?
# + [markdown] slideshow={"slide_type": "subslide"}
# Symmetry! This implies that $A_{21} = A_{12}$ and the expression above simplifies to
# $$
# \phi(u) = \frac{1}{2} (A_{11} u_1^2 + 2 A_{12} u_1 u_2 + A_{22} u^2_2) - u_1 f_1 - u_2 f_2
# $$
# + [markdown] slideshow={"slide_type": "subslide"}
# Now write two expressions that are identically zero at $u^\ast$ and that express the fact that $u^\ast$ minimizes $\phi(u)$.
#
#
# + [markdown] slideshow={"slide_type": "subslide"}
# Since $u^\ast$ minimizes $\phi(u)$ we know that the first derivatives should be zero at the minimum:
# $$\begin{aligned}
# \frac{\partial \phi}{\partial u_1} &= A_{11} u_1 + A_{12} u_2 - f_1 = 0 \\
# \frac{\partial \phi}{\partial u_2} &= A_{21} u_1 + A_{22} u_2 - f_2 = 0
# \end{aligned}$$
# Note that these equations can be rewritten as
# $$
# A u = f.
# $$
# + [markdown] slideshow={"slide_type": "subslide"}
# Therefore $\min \phi$ is equivalent to solving $A u = f$!
#
# This is a common type of reformulation for many problems where it may be easier to treat a given equation as a minimization problem rather than directly solve it.
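# + [markdown] slideshow={"slide_type": "skip"}
# A small numerical illustration of this equivalence (a sketch with an arbitrary SPD matrix): the solution of $A u = f$ is exactly the minimizer of $\phi$.

```python
import numpy

A = numpy.array([[2.0, -1.0], [-1.0, 2.0]])
f = numpy.array([1.0, 2.0])
phi = lambda u: 0.5 * u @ A @ u - u @ f

u_star = numpy.linalg.solve(A, f)

# The gradient A u - f vanishes at the minimizer
gradient = A @ u_star - f

# phi increases in every coordinate direction away from u_star
increases = all(phi(u_star + 1e-3 * e) > phi(u_star) for e in numpy.eye(2))
```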
# + [markdown] slideshow={"slide_type": "subslide"}
# Note that the matrix we have been using for our Poisson problem so far is actually symmetric negative definite, although these same methods work there as well. In that case we actually want to find the maximum of $\phi$ instead; other than that everything is the same.
# + [markdown] slideshow={"slide_type": "subslide"}
# Also note that if $A$ is indefinite then the eigenvalues of $A$ change sign and instead of a stable minimum or maximum we have a saddle point, which is much more difficult to handle (although methods such as GMRES can).
# + [markdown] slideshow={"slide_type": "slide"}
# ### Method of Steepest Descent
#
# So now we turn to finding the $u^\ast$ that minimizes the function $\phi(u)$. The simplest approach is called the **method of steepest descent**, which moves in the direction of the negative gradient of $\phi(u)$, the direction of fastest decrease.
# + [markdown] slideshow={"slide_type": "subslide"}
# Mathematically we then have
# $$
# u^{(k+1)} = u^{(k)} - \alpha^{(k)} \nabla \phi(u^{(k)})
# $$
# where $\alpha^{(k)}$ will be the step size chosen in the direction we want to go.
# + [markdown] slideshow={"slide_type": "subslide"}
# We can find $\alpha$ by
# $$
# \alpha^{(k)} = \min_{\alpha \in \mathbb R} \phi\left(u^{(k)} - \alpha \nabla \phi(u^{(k)})\right),
# $$
# i.e. the $\alpha$ that takes us just far enough so that if we went any further $\phi$ would increase.
# + [markdown] slideshow={"slide_type": "subslide"}
# This implies that $\alpha^{(k)} \geq 0$ and $\alpha^{(k)} = 0$ only if we are at the minimum of $\phi$. We can compute the gradient of $\phi$ as
# $$
# \nabla \phi(u^{(k)}) = A u^{(k)} - f \equiv -r^{(k)}
# $$
# where $r^{(k)}$ is the residual defined as
# $$
# r^{(k)} = f - A u^{(k)}.
# $$
# + [markdown] slideshow={"slide_type": "subslide"}
# Looking back at the definition of $\alpha^{(k)}$, the minimizing $\alpha$ is the one that satisfies
# $$
# \frac{\text{d} \phi(\alpha)}{\text{d} \alpha} = 0.
# $$
#
# To find this note that
# $$
# \phi(u + \alpha r) = \left(\frac{1}{2} u^T A u - u^T f \right) + \alpha(r^T A u - r^T f) + \frac{1}{2} \alpha^2 r^T A r
# $$
# so that the derivative becomes
# $$
# \frac{\text{d} \phi(\alpha)}{\text{d} \alpha} = r^T A u - r^T f + \alpha r^T A r
# $$
#
# Setting this to zero then leads to
# $$
# \alpha = \frac{r^T r}{r^T A r}.
# $$
# + slideshow={"slide_type": "skip"}
# Problem setup
a = 0.0
b = 1.0
u_a = 0.0
u_b = 3.0
f = lambda x: numpy.exp(x)
u_true = lambda x: (4.0 - numpy.exp(1.0)) * x - 1.0 + numpy.exp(x)
# Discretization
m = 50
x_bc = numpy.linspace(a, b, m + 2)
x = x_bc[1:-1]
delta_x = (b - a) / (m + 1)
# Construct matrix A
A = numpy.zeros((m, m))
diagonal = numpy.ones(m) / delta_x**2
A += numpy.diag(diagonal * 2.0, 0)
A += numpy.diag(-diagonal[:-1], 1)
A += numpy.diag(-diagonal[:-1], -1)
# Construct right hand side
b = -f(x)
b[0] += u_a / delta_x**2
b[-1] += u_b / delta_x**2
# Algorithm parameters
MAX_ITERATIONS = 10000
tolerance = 1e-3
# Solve system
U = numpy.zeros(m)
convergence_SD = numpy.zeros(MAX_ITERATIONS)
step_size_SD = numpy.zeros(MAX_ITERATIONS)
success = False
for k in range(MAX_ITERATIONS):
r = b - numpy.dot(A, U)
if numpy.linalg.norm(r, ord=2) < tolerance:
success = True
break
alpha = numpy.dot(r, r) / numpy.dot(r, numpy.dot(A, r))
U = U + alpha * r
step_size_SD[k] = numpy.linalg.norm(alpha * r, ord=2)
convergence_SD[k] = numpy.linalg.norm(u_true(x) - U, ord=2)
if not success:
print("Iteration failed to converge!")
print(convergence_SD[-1])
else:
# Plot result
fig = plt.figure()
axes = fig.add_subplot(1, 1, 1)
axes.plot(x, U, 'o', label="Computed")
axes.plot(x_bc, u_true(x_bc), 'k', label="True")
axes.set_title("Solution to $u_{xx} = e^x$")
axes.set_xlabel("x")
axes.set_ylabel("u(x)")
fig = plt.figure()
fig.set_figwidth(fig.get_figwidth() * 2)
axes = fig.add_subplot(1, 2, 1)
axes.semilogy(list(range(k)), step_size_SD[:k], 'o')
axes.semilogy(list(range(k)), numpy.ones(k) * delta_x**2, 'r--')
axes.set_title("Step Size")
axes.set_xlabel("Iteration")
axes.set_ylabel("$||U^{(k)} - U^{(k-1)}||_2$")
axes = fig.add_subplot(1, 2, 2)
axes.semilogy(list(range(k)), convergence_SD[:k], 'o')
axes.semilogy(list(range(k)), numpy.ones(k) * delta_x**2, 'r--')
axes.set_title("Convergence")
axes.set_xlabel("Iteration")
axes.set_ylabel("$||u(x) - U^{(k-1)}||_2$")
plt.show()
# + [markdown] slideshow={"slide_type": "slide"}
# #### Convergence of Steepest Descent
#
# What controls the convergence of steepest descent? It turns out that the shape of the parabolic bowl formed by $\phi$ is the major factor.
#
# For example, if $A$ is a scalar multiple of the identity then the level sets are actually circles, the gradient points straight at the minimum, and steepest descent converges in a single step. Otherwise the convergence depends on the ratio between the semi-major and semi-minor axes of the resulting $m$-dimensional ellipsoids.
#
# This ratio is controlled by the smallest and largest eigenvalues of the matrix $A$, which is why steepest descent slows down as $m$ increases for the Poisson problem. Note that this also relates to the condition number of the matrix in the $\ell_2$ norm.
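# + [markdown] slideshow={"slide_type": "skip"}
# We can see this dependence directly (a sketch; the helper `sd_iterations` is hypothetical): steepest descent on a well-conditioned $2 \times 2$ system versus an ill-conditioned one.

```python
import numpy

def sd_iterations(A, b, tol=1e-8, max_iter=100000):
    """Hypothetical helper: count steepest descent iterations to tolerance."""
    x = numpy.zeros(len(b))
    for k in range(max_iter):
        r = b - A @ x
        if numpy.linalg.norm(r) < tol:
            return k
        x = x + (r @ r) / (r @ A @ r) * r
    return max_iter

b = numpy.array([1.0, 2.0])
A_round = numpy.array([[2.0, -1.0], [-1.0, 2.0]])    # condition number 3
A_skewed = numpy.array([[2.0, -1.8], [-1.8, 2.0]])   # condition number 19

k_round = sd_iterations(A_round, b)
k_skewed = sd_iterations(A_skewed, b)
```

The more eccentric the level-set ellipses (larger condition number), the more iterations steepest descent needs.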
# + [markdown] slideshow={"slide_type": "subslide"}
# 
# + slideshow={"slide_type": "skip"}
def steepest_descent(A, U, b, axes):
MAX_ITERATIONS = 10000
tolerance = 1e-3
success = False
iteration_locations = []
for k in range(MAX_ITERATIONS):
axes.text(U[0] + 0.1, U[1] + 0.1, str(k), fontsize=12)
axes.plot(U[0], U[1], 'ro')
iteration_locations.append(U)
r = b - numpy.dot(A, U)
if numpy.linalg.norm(r, ord=2) < tolerance:
success = True
break
alpha = numpy.dot(r, r) / numpy.dot(r, numpy.dot(A, r))
U = U + alpha * r
if success:
return k, iteration_locations
else:
raise Exception("Iteration did not converge.")
phi = lambda X, Y, A: 0.5 * (A[0, 0] * X**2 + A[0, 1] * X * Y + A[1, 0] * X * Y + A[1, 1] * Y**2) - X * f[0] - Y * f[1]
x = numpy.linspace(-15, 15, 100)
y = numpy.linspace(-15, 15, 100)
X, Y = numpy.meshgrid(x, y)
fig = plt.figure()
fig.set_figwidth(fig.get_figwidth() * 2)
fig.set_figheight(fig.get_figheight() * 2)
axes = fig.add_subplot(1, 1, 1, aspect='equal')
A = numpy.identity(2)
f = numpy.array([0.0, 0.0])
k, iteration_locations = steepest_descent(A, numpy.array([-10.0, -12.0]), f, axes)
phi_levels = numpy.array([phi(U[0], U[1], A) for U in iteration_locations])
phi_levels.sort()
axes.contour(X, Y, phi(X, Y, A), levels=phi_levels, colors='r')
axes.contour(X, Y, phi(X, Y, A), 25)
print("Iteration count: %s" % k)
fig = plt.figure()
fig.set_figwidth(fig.get_figwidth() * 2)
fig.set_figheight(fig.get_figheight() * 2)
axes = fig.add_subplot(1, 1, 1, aspect='equal')
A = numpy.array([[2, -1], [-1, 2]])
f = numpy.array([1.0, 2.0])
k, iteration_locations = steepest_descent(A, numpy.array([-10.0, -12.0]), f, axes)
phi_levels = numpy.array([phi(U[0], U[1], A) for U in iteration_locations])
phi_levels.sort()
axes.contour(X, Y, phi(X, Y, A), levels=phi_levels, colors='r')
axes.contour(X, Y, phi(X, Y, A), 25)
print("Iteration count: %s" % k)
fig = plt.figure()
fig.set_figwidth(fig.get_figwidth() * 2)
fig.set_figheight(fig.get_figheight() * 2)
axes = fig.add_subplot(1, 1, 1, aspect='equal')
A = numpy.array([[2, -1.8], [-1.8, 2]])
f = numpy.array([1.0, 2.0])
k, iteration_locations = steepest_descent(A, numpy.array([-10.0, -12.0]), f, axes)
phi_levels = numpy.array([phi(U[0], U[1], A) for U in iteration_locations])
phi_levels.sort()
axes.contour(X, Y, phi(X, Y, A), levels=phi_levels, colors='r')
axes.contour(X, Y, phi(X, Y, A), 25)
print("Iteration count: %s" % k)
# + [markdown] slideshow={"slide_type": "subslide"}
# Each of these ellipses is related to the eigenstructure of $A$ such that
# $$
# A v_j - f = \lambda_j (v_j - u^\ast)
# $$
# for some $\lambda_j$. Knowing that $A u^\ast = f$ leads to
# $$
# A (v_j - u^\ast) = \lambda_j (v_j - u^\ast)
# $$
# therefore $v_j - u^\ast$ form the eigenvectors of the matrix $A$ with corresponding eigenvalues $\lambda_j$.
# + [markdown] slideshow={"slide_type": "subslide"}
# If a particular set of $\lambda_j$ are not distinct then the corresponding cross-section is in fact a circle, and any direction in that subspace pointing toward $u^\ast$ is an eigenvector (the eigenvectors are non-unique in this subspace).
# + [markdown] slideshow={"slide_type": "subslide"}
# We can also relate this eigenstructure and these geometric arguments to the matrix's condition number $\kappa$. Let $v_1$ and $v_2$ be vectors that lie on the level set $\phi(u) = 1$; we then have
# $$
# \frac{1}{2} v^T_j A v_j - v_j^T A u^\ast = 1.
# $$
# Combining this expression with our previous eigenvector expression and taking the inner-product with the eigenvector $v_j - u^\ast$ leads to
# $$
# ||v_j - u^\ast||^2_2 = \frac{2 + (u^\ast)^T A u^\ast}{\lambda_j}.
# $$
# Now turning back to $v_1$ and $v_2$ we have their ratios as
# $$
# \frac{||v_1 - u^\ast||_2}{||v_2 - u^\ast||_2} = \sqrt{\frac{\lambda_2}{\lambda_1}} = \sqrt{\kappa_2(A)}.
# $$
# This last expression tells us that the more eccentric these level sets are, the more difficult it will be to solve $A u^\ast = f$.
# + [markdown] slideshow={"slide_type": "slide"}
# ### Projection Interpretation
#
# One way to interpret the method of steepest descent is as an iterative projection method.
#
# Say we want to solve $A x = b$, $A \in \mathbb R^{m \times m}$ and $b \in \mathbb R^m$.
#
# Let us suppose we have two subspaces in $\mathbb R^m$, $\mathcal K$ the search subspace, and $\mathcal L$ the subspace of constraints. The condition
# $$
# b - A x \perp \mathcal{L}
# $$
# states that the residual vector $b - A x$ is orthogonal to the subspace $\mathcal{L}$.
# + [markdown] slideshow={"slide_type": "subslide"}
# But how does $\mathcal K$ fit into this? Modify the original statement so that
# $$
# \tilde{x} \in \mathcal{K}
# $$
# and then
# $$
# b - A \tilde{x} \perp \mathcal{L}.
# $$
# These are known as *Petrov-Galerkin* conditions.
#
# If $\mathcal{K} = \mathcal{L}$ then this is an orthogonal projection.
# + [markdown] slideshow={"slide_type": "subslide"}
# Turning this into an iterative method let our initial guess be $x^{(0)}$, then
# $$
# \tilde{x} \in x^{(0)} + \mathcal{K}
# $$
# such that
# $$
# b - A \tilde{x} \perp \mathcal{L}.
# $$
# Note that this implies that we are expanding the search space by the part of $\text{span}(x^{(0)})$ that is not included in $\mathcal{K}$.
# + [markdown] slideshow={"slide_type": "subslide"}
# We can rewrite this in a more suggestive form by letting
# $$
# \tilde{x} = x^{(0)} + \delta
# $$
# where $\delta \in \mathcal{K}$ is some vector that will be our step. Also assign the residual vector an index so that we have
# $$
# r^{(k)} = b - A x^{(k)}
# $$
# at the $k$th step.
#
# We then formulate a new problem such that
# $$
# r^{(0)} - A \delta \perp \mathcal{L}
# $$
# leading to an iteration-like statement:
# $$\begin{aligned}
# \tilde{x} &= x^{(0)} + \delta, & \quad \quad & \delta \in \mathcal{K} \\
# (r^{(0)} - A \delta) \cdot w &= 0 & \quad \quad & \forall w \in \mathcal{L}
# \end{aligned}$$
# + [markdown] slideshow={"slide_type": "subslide"}
# #### Example: One-Dimensional Projection
#
# One-dimensional projection methods simply construct $\mathcal{K}$ and $\mathcal{L}$ such that they are
# $$
# \mathcal{K} = \text{span}(v) \quad\quad \mathcal{L} = \text{span}(w)
# $$
# for two vectors $v$ and $w$. In this case we can write the projection update as
# $$
# x^{(k+1)} = x^{(k)} + \alpha v
# $$
# where $r$ is the residual and
# $$
# \alpha = \frac{r \cdot w}{A v \cdot w}.
# $$
#
# In the case of steepest descent $v = w = r$; note that this leads to an orthogonal projection since $\mathcal{K} = \mathcal{L}$.
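# + [markdown] slideshow={"slide_type": "skip"}
# A single one-dimensional projection step can be checked directly (a sketch with a small SPD system): after the update, the new residual satisfies the Petrov-Galerkin condition.

```python
import numpy

A = numpy.array([[2.0, -1.0], [-1.0, 2.0]])
b = numpy.array([1.0, 2.0])
x = numpy.zeros(2)

r = b - A @ x
v = r      # search direction (steepest descent choice)
w = r      # constraint direction

alpha = (r @ w) / ((A @ v) @ w)
x_new = x + alpha * v

# Petrov-Galerkin condition: the new residual is orthogonal to w
r_new = b - A @ x_new
orthogonal = abs(r_new @ w) < 1e-12
```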
# + [markdown] slideshow={"slide_type": "subslide"}
# Generalizing this a bit we can think of the steepest descent method in terms of projections. Let $\mathcal{K} = \mathcal{L}$ and again assume that $A$ is symmetric positive-definite. Define the error at the $k$th step as
# $$
# E^{(k)} = x^\ast - x^{(k)}.
# $$
#
# In this case consider a single step from $k=0$ to $k=1$. Then we would have
# $$
# r^{(1)} = b - A (x^{(0)} + \delta) = r^{(0)} - A \delta
# $$
# and
# $$
# A E^{(1)} = r^{(1)} = A(E^{(0)} - \delta)
# $$
# where $\delta$ is a result of the projection so that
# $$
# (r^{(0)} - A \delta) \cdot w = 0 \quad \quad \forall w \in \mathcal{K}.
# $$
# + [markdown] slideshow={"slide_type": "slide"}
# ### $A$-Conjugate Search Directions and Conjugate Gradient
#
# An alternative to steepest descent is to choose a slightly different direction to descend down. Generalizing our step from above let the iterative scheme be
# $$
# u^{(k+1)} = u^{(k)} - \alpha^{(k)} p^{(k)}
# $$
# and as before we want to pick an $\alpha$ such that
# $$
# \min_{\alpha} \phi(u^{(k)} - \alpha p^{(k)})
# $$
# leading again to the minimizing choice of $\alpha$
# $$
# \alpha^{(k)} = -\frac{(p^{(k)})^T r^{(k)}}{(p^{(k)})^T A p^{(k)}}
# $$
# except now we are also allowed to pick the search direction $p^{(k)}$. Note that for $p^{(k)} = -r^{(k)}$ this reduces to the steepest descent formula.
# + [markdown] slideshow={"slide_type": "subslide"}
# Ways to choose $p^{(k)}$:
# - One bad choice for $p$ would be one orthogonal to $r$, since this direction is tangent to the level set (ellipse) of $\phi$ at $u^{(k)}$ and $\phi$ would only increase along it; so we require that $p^T r \neq 0$ (the inner product).
# - We also want to still move downwards so require that $\phi(u^{(k+1)}) < \phi(u^{(k)})$.
# + [markdown] slideshow={"slide_type": "subslide"}
# We know that $r^{(k)}$ is not always the best direction to go in but what might be better?
#
# We could head directly for the minimum but how do we do that without first knowing $u^\ast$?
# + [markdown] slideshow={"slide_type": "subslide"}
# It turns out when $m=2$ we can do this: from any initial guess $u^{(0)}$ and initial direction $p^{(0)}$ we will arrive at the minimum in 2 steps if we choose the second search direction so that
# $$
# (p^{(1)})^T A p^{(0)} = 0.
# $$
# In general if two vectors satisfy this property they are said to be $A$-conjugate.
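# + [markdown] slideshow={"slide_type": "skip"}
# Given any direction, an $A$-conjugate one can be constructed by Gram-Schmidt in the $A$-inner product (a sketch, not part of the notes):

```python
import numpy

A = numpy.array([[2.0, -1.0], [-1.0, 2.0]])

p0 = numpy.array([1.0, 0.0])
v = numpy.array([0.0, 1.0])

# A-orthogonalize v against p0 (Gram-Schmidt in the A-inner product)
p1 = v - ((v @ A @ p0) / (p0 @ A @ p0)) * p0

conjugate = abs(p1 @ A @ p0) < 1e-12
```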
# + [markdown] slideshow={"slide_type": "subslide"}
# Note that if $A = I$ then these two vectors would be orthogonal to each other and in this sense $A$-conjugacy is a natural extension from orthogonality and the simple case from before where the ellipses are circles to the case where we can have very distorted ellipses.
# + [markdown] slideshow={"slide_type": "subslide"}
# In fact the vector $p^{(0)}$ is tangent to the level set that $u^{(1)}$ lies on and therefore choosing $p^{(1)}$ so that it is $A$-conjugate to $p^{(0)}$ always heads to the center of the ellipse.
#
# To show this, take the initial direction $p^{(0)}$ and note that it is tangent to the level set of $\phi$ at $u^{(1)}$, so $p^{(0)}$ is orthogonal to the residual there. Looking at the residual after one step we have
# $$
# r^{(1)} = f - A u^{(1)} = A u^\ast - A u^{(1)} = A(u^\ast - u^{(1)}) \Rightarrow (p^{(0)})^T A (u^\ast - u^{(1)}) = 0.
# $$
#
# Knowing then that $u^\ast - u^{(1)} = \alpha p^{(1)}$ we can conclude
# $$
# (p^{(0)})^T A p^{(1)} = 0
# $$
# given that $\alpha \neq 0$.
#
# The fact that each step minimizes the residual in its direction is an important idea we will see again.
# + [markdown] slideshow={"slide_type": "subslide"}
# In other words, once we know a tangent to one of the ellipses we can always choose a direction that minimizes in one of the dimensions of the search space. Choosing the $p^{(k)}$ iteratively this way forms the basis of the **conjugate gradient** method.
# + [markdown] slideshow={"slide_type": "subslide"}
# 
# + [markdown] slideshow={"slide_type": "subslide"}
# Now to generalize beyond $m = 2$ consider the $m=3$ case. As stated before we are now in a three-dimensional space where the level-sets are concentric ellipsoids. Taking a slice through this space will lead to an ellipse on the slice.
#
# 1. Start with an initial guess $u^{(0)}$ and choose a search direction $p^{(0)}$.
# 1. Minimize $\phi(u)$ along the line $u^{(0)} + \alpha p^{(0)}$, resulting in the choice
# $$
# \alpha^{(0)} = \frac{(p^{(0)})^T r^{(0)}}{(p^{(0)})^T A p^{(0)}}
# $$
# analogous to before. Now set $u^{(1)} = u^{(0)} + \alpha^{(0)} p^{(0)}$.
# 1. Choose $p^{(1)}$ to be $A$-conjugate to $p^{(0)}$. In this case there are an infinite set of vectors that are possible that satisfy $(p^{(1)})^T A p^{(0)} = 0$. Beyond requiring $p^{(1)}$ to be $A$-conjugate we also want it to be linearly-independent to $p^{(0)}$.
# 1. Again choose an $\alpha^{(1)}$ that minimizes the residual (again tangent to the level sets of $\phi$) in the direction $p^{(1)}$ and repeat the process.
# + [markdown] slideshow={"slide_type": "subslide"}
# So why does this work so much better than steepest descent?
#
# Since we have chosen $p^{(0)}$ and $p^{(1)}$ to be linearly independent they span a two-dimensional subspace of $\mathbb R^m$. We then choose to travel through this subspace so as to minimize the residual within it. Geometrically, we have cut our space with this plane, leading to a two-dimensional cross-section with concentric ellipses representing the level sets of $\phi$. We then choose an $\alpha$ so that we arrive at the center of these ellipses.
# + slideshow={"slide_type": "skip"}
# Problem setup
a = 0.0
b = 1.0
alpha = 0.0
beta = 3.0
f = lambda x: numpy.exp(x)
u_true = lambda x: (4.0 - numpy.exp(1.0)) * x - 1.0 + numpy.exp(x)
# Discretization
m = 50
x_bc = numpy.linspace(a, b, m + 2)
x = x_bc[1:-1]
delta_x = (b - a) / (m + 1)
# Construct matrix A
A = numpy.zeros((m, m))
diagonal = numpy.ones(m) / delta_x**2
A += numpy.diag(diagonal * 2.0, 0)
A += numpy.diag(-diagonal[:-1], 1)
A += numpy.diag(-diagonal[:-1], -1)
# Construct right hand side
b = -f(x)
b[0] += alpha / delta_x**2
b[-1] += beta / delta_x**2
# Exact solution to system
U_true = numpy.linalg.solve(A, b)
# Algorithm parameters
MAX_ITERATIONS = 10000
tolerance = delta_x**2
# Solve system
U = numpy.zeros(m)
convergence_CG = numpy.zeros(MAX_ITERATIONS)
step_size_CG = numpy.zeros(MAX_ITERATIONS)
residual_norm = numpy.zeros(MAX_ITERATIONS)
system_convergence = numpy.zeros(MAX_ITERATIONS)
success = False
r = numpy.dot(A, U) - b   # note: r here is the negative of the residual
p = -r
r_dot_r = numpy.dot(r, r)
for k in range(MAX_ITERATIONS):
# Convergence measures (not needed)
U_old = U.copy()
r_old = r.copy()
if numpy.linalg.norm(r, ord=2) < tolerance:
success = True
break
A_dot_p = numpy.dot(A, p)
# Step length
alpha = r_dot_r / numpy.dot(p, A_dot_p)
# Update solution and residual
U += alpha * p
r += alpha * A_dot_p
    # Conjugacy coefficient for the new search direction
    beta = numpy.dot(r, r) / r_dot_r
r_dot_r = numpy.dot(r, r)
# Pick new direction for next iteration (if needed)
p = beta * p - r
# Convergence measures
convergence_CG[k] = numpy.linalg.norm(u_true(x) - U, ord=2)
residual_norm[k] = numpy.linalg.norm(r, ord=2)
step_size_CG[k] = numpy.linalg.norm(U - U_old, ord=2)
system_convergence[k] = numpy.linalg.norm(U_true - U, ord=2)
if not success:
print("Iteration failed to converge!")
print(residual_norm[-10:])
print(convergence_CG[-10:])
else:
# Plot result
fig = plt.figure()
axes = fig.add_subplot(1, 1, 1)
axes.plot(x, U, 'o', label="Computed")
axes.plot(x_bc, u_true(x_bc), 'k', label="True")
axes.set_title("Solution to $u_{xx} = e^x$")
axes.set_xlabel("x")
axes.set_ylabel("u(x)")
fig = plt.figure()
fig.set_figwidth(fig.get_figwidth() * 2.1)
fig.set_figheight(fig.get_figheight() * 2.2)
axes = fig.add_subplot(2, 2, 1)
axes.semilogy(numpy.arange(1, k + 1), step_size_CG[:k], 'o')
axes.semilogy(numpy.arange(1, k + 1), numpy.ones(k) * delta_x**2, 'r--')
axes.set_title("Step Size")
axes.set_xlabel("Iteration")
axes.set_ylabel("$||U^{(k)} - U^{(k-1)}||_2$")
axes = fig.add_subplot(2, 2, 2)
axes.semilogy(numpy.arange(1, k + 1), convergence_CG[:k], 'o')
axes.semilogy(numpy.arange(1, k + 1), numpy.ones(k) * delta_x**2, 'r--')
axes.set_title("Convergence to True Solution")
axes.set_xlabel("Iteration")
axes.set_ylabel("$||u(x) - U^{(k)}||_2$")
axes = fig.add_subplot(2, 2, 3)
axes.semilogy(numpy.arange(1, k + 1), residual_norm[:k], 'o')
axes.semilogy(numpy.arange(1, k + 1), numpy.ones(k) * delta_x**2, 'r--')
axes.set_title("Residual")
axes.set_xlabel("Iteration")
axes.set_ylabel("$||r||_2$")
axes = fig.add_subplot(2, 2, 4)
axes.semilogy(numpy.arange(1, k + 1), system_convergence[:k], 'o')
axes.semilogy(numpy.arange(1, k + 1), numpy.ones(k) * delta_x**2, 'r--')
axes.set_title("Convergence to Solution of System")
axes.set_xlabel("Iteration")
axes.set_ylabel("$||U^* - U||_2$")
print("Matrix condition number = %s" % numpy.linalg.cond(A, 2))
plt.show()
# + slideshow={"slide_type": "skip"}
def CG(A, U, b, axes):
MAX_ITERATIONS = A.shape[0] * 2
tolerance = 1e-8
success = False
iteration_locations = []
    r = numpy.dot(A, U) - b   # note: r here is the negative of the residual
p = -r
r_dot_r = numpy.dot(r, r)
for k in range(MAX_ITERATIONS):
axes.text(U[0] + 0.1, U[1] + 0.1, str(k), fontsize=12)
axes.plot(U[0], U[1], 'ro')
iteration_locations.append(U)
if numpy.linalg.norm(r, ord=2) < tolerance:
success = True
break
A_dot_p = numpy.dot(A, p)
# Step length
alpha = r_dot_r / numpy.dot(p, A_dot_p)
# Update solution and residual
U += alpha * p
r += alpha * A_dot_p
        # Conjugacy coefficient for the new search direction
        beta = numpy.dot(r, r) / r_dot_r
r_dot_r = numpy.dot(r, r)
# Pick new direction for next iteration (if needed)
p = beta * p - r
if success:
return k, iteration_locations
else:
raise Exception("Iteration did not converge.")
phi = lambda X, Y, A: 0.5 * (A[0, 0] * X**2 + A[0, 1] * X * Y + A[1, 0] * X * Y + A[1, 1] * Y**2) - X * f[0] - Y * f[1]
x = numpy.linspace(-15, 15, 100)
y = numpy.linspace(-15, 15, 100)
X, Y = numpy.meshgrid(x, y)
fig = plt.figure()
fig.set_figwidth(fig.get_figwidth() * 2)
fig.set_figheight(fig.get_figheight() * 2)
axes = fig.add_subplot(1, 1, 1, aspect='equal')
A = numpy.identity(2)
f = numpy.array([0.0, 0.0])
k, iteration_locations = CG(A, numpy.array([-10.0, -12.0]), f, axes)
axes.contour(X, Y, phi(X, Y, A), 25)
print("Iteration count: %s" % k)
fig = plt.figure()
fig.set_figwidth(fig.get_figwidth() * 2)
fig.set_figheight(fig.get_figheight() * 2)
axes = fig.add_subplot(1, 1, 1, aspect='equal')
A = numpy.array([[2, -1], [-1, 2]])
f = numpy.array([1.0, 2.0])
k, iteration_locations = CG(A, numpy.array([-10.0, -12.0]), f, axes)
axes.contour(X, Y, phi(X, Y, A), 25)
print("Iteration count: %s" % k)
fig = plt.figure()
fig.set_figwidth(fig.get_figwidth() * 2)
fig.set_figheight(fig.get_figheight() * 2)
axes = fig.add_subplot(1, 1, 1, aspect='equal')
A = numpy.array([[2, -1.8], [-1.8, 2]])
f = numpy.array([1.0, 2.0])
k, iteration_locations = CG(A, numpy.array([-10.0, -12.0]), f, axes)
axes.contour(X, Y, phi(X, Y, A), 25)
print("Iteration count: %s" % k)
# + [markdown] slideshow={"slide_type": "subslide"}
# The conjugate gradient algorithm requires $\mathcal{O}(m^2)$ operations and in theory $m$ iterates to achieve the exact solution. In practice however it generally can converge much faster, especially when using preconditioning. This is especially true for the two-dimensional, 5-point Laplacian stencil we have seen already.
# + [markdown] slideshow={"slide_type": "slide"}
# ### Krylov Subspaces
#
# We can say a lot about the set of vectors $p^{(k)}$ that span the space our level set $\phi$ lies in starting with the following theorem.
#
# **Theorem** The vectors generated in the CG algorithm have the following properties, provided $r^{(k)} \neq 0$:
# 1. $p^{(k)}$ is $A$-conjugate to all the previous search directions
# 1. The residual $r^{(k)}$ is orthogonal to all the previous residuals, $(r^{(k)})^T r^{(j)} = 0 \quad \forall j = 0, 1, \ldots, k - 1$
# 1. The following three subspaces of $\mathbb R^m$ are identical
# $$\begin{aligned}
# &\text{span} (p^{(0)}, p^{(1)}, \ldots, p^{(k-1)}) \\
# &\text{span} (r^{(0)}, A r^{(0)}, A^2 r^{(0)}, \ldots, A^{k-1} r^{(0)}) \\
# &\text{span} (A e^{(0)}, A^2 e^{(0)}, A^3 e^{(0)}, \ldots, A^{k} e^{(0)})
# \end{aligned}$$
#
# The subspace $\mathcal{K}_k = \text{span} (r^{(0)}, A r^{(0)}, A^2 r^{(0)}, \ldots, A^{k-1} r^{(0)})$ spanned by the vector $r^{(0)}$ and the first $k-1$ powers of $A$ to $r^{(0)}$ is called a *Krylov space* of dimension $k$ associated with $r^{(0)}$.
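# + [markdown] slideshow={"slide_type": "subslide"}
# Properties 1 and 2 above are easy to check numerically. The following is a small self-contained sketch (our own check, using the same CG update rules as the `CG` function earlier) that records the residuals and search directions on a random SPD system and verifies their orthogonality and $A$-conjugacy.
# + slideshow={"slide_type": "skip"}

```python
import numpy

def cg_history(A, b, iterations):
    """Bare-bones CG recording the residuals r^(k) and directions p^(k)."""
    u = numpy.zeros_like(b)
    r = b - numpy.dot(A, u)
    p = r.copy()
    residuals, directions = [r.copy()], [p.copy()]
    for _ in range(iterations):
        A_dot_p = numpy.dot(A, p)
        alpha = numpy.dot(r, r) / numpy.dot(p, A_dot_p)
        u += alpha * p
        r_new = r - alpha * A_dot_p
        beta = numpy.dot(r_new, r_new) / numpy.dot(r, r)
        p = r_new + beta * p
        r = r_new
        residuals.append(r.copy())
        directions.append(p.copy())
    return residuals, directions

rng = numpy.random.default_rng(42)
M = rng.standard_normal((8, 8))
A = numpy.dot(M, M.T) + 8.0 * numpy.eye(8)   # SPD by construction
b = rng.standard_normal(8)
rs, ps = cg_history(A, b, 5)

# r^(3) should be orthogonal to all earlier residuals and p^(3) should be
# A-conjugate to all earlier directions, both up to roundoff
print(max(abs(numpy.dot(rs[3], rs[j])) for j in range(3)))
print(max(abs(numpy.dot(ps[3], numpy.dot(A, ps[j]))) for j in range(3)))
```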
# + [markdown] slideshow={"slide_type": "subslide"}
# ### Projections
#
# The new update $u^{(k)}$ is formed by adding multiples of the $p^{(j)}$ to the initial guess $u^{(0)}$ and therefore lies in the subspace $u_0 + \mathcal{K}_k$. Another way to write this would be to say $u^{(k)} - u^{(0)} \in \mathcal{K}_k$. Note the similarity between this and our projection discussion.
# + [markdown] slideshow={"slide_type": "slide"}
# ### Convergence of Conjugate Gradient
#
# We now turn to the convergence of CG and to deriving estimates about the size of the error at a given $k$ and the rate of convergence.
# + [markdown] slideshow={"slide_type": "subslide"}
# First define the $A$-norm s.t.
# $$
# ||e||_A = \sqrt{e^T A e}
# $$
# relying on the fact that $A$ is SPD.
# + [markdown] slideshow={"slide_type": "subslide"}
# This leads to
# $$\begin{aligned}
# ||e||^2_A &= (u - u^\ast)^T A (u - u^\ast) \\
# &=u^T A u - 2 u^T A u^\ast + (u^\ast)^T A u^\ast \\
# &= 2 \phi(u) + (u^\ast)^T A u^\ast
# \end{aligned}$$
#
# Given that $(u^\ast)^T A u^\ast$ is a constant minimizing the error $||e||_A$ is equivalent to minimizing $\phi(u)$.
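# + [markdown] slideshow={"slide_type": "subslide"}
# This identity can be verified directly; a quick numerical check on an arbitrary small SPD system (a sketch, not part of the algorithm itself):
# + slideshow={"slide_type": "skip"}

```python
import numpy

# Verify ||e||_A^2 = 2 phi(u) + (u*)^T A u* on a random SPD system
rng = numpy.random.default_rng(0)
M = rng.standard_normal((5, 5))
A = numpy.dot(M, M.T) + 5.0 * numpy.eye(5)       # SPD by construction
f = rng.standard_normal(5)
u_star = numpy.linalg.solve(A, f)                # exact minimizer

phi = lambda u: 0.5 * numpy.dot(u, numpy.dot(A, u)) - numpy.dot(u, f)

u = rng.standard_normal(5)                       # arbitrary trial point
e = u - u_star
lhs = numpy.dot(e, numpy.dot(A, e))              # ||e||_A^2
rhs = 2.0 * phi(u) + numpy.dot(u_star, numpy.dot(A, u_star))
print(abs(lhs - rhs))                            # roundoff-sized
```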
# + [markdown] slideshow={"slide_type": "subslide"}
# We know we can expand $u^{(k)}$ in the subspace $u_0 + \mathcal{K}_k$ so that
# $$
# u^{(k)} = u^{(0)} + \alpha^{(0)} p^{(0)} + \alpha^{(1)} p^{(1)} + \cdots + \alpha^{(k-1)} p^{(k-1)}
# $$
# and subtracting $u^\ast$ we find
# $$
# u^{(k)} - u^\ast = e^{(k)} = e^{(0)} + \alpha^{(0)} p^{(0)} + \alpha^{(1)} p^{(1)} + \cdots + \alpha^{(k-1)} p^{(k-1)}
# $$
# which relates $e^{(k)}$ and $e^{(0)}$ and implies that $e^{(k)} - e^{(0)} \in \mathcal{K}_k$.
# + [markdown] slideshow={"slide_type": "subslide"}
# By our theorem above we also know that
# $$
# e^{(k)} - e^{(0)} \in \text{span} (A e^{(0)}, A^2 e^{(0)}, A^3 e^{(0)}, \ldots, A^{k} e^{(0)})
# $$
# so that
# $$
# e^{(k)} = e^{(0)} + c_1 A e^{(0)} + c_2 A^2 e^{(0)} + \cdots + c_k A^k e^{(0)}
# $$
# where $c_j$ are some coefficients (found by taking the projection of $e^{(k)}$ onto the span).
# + [markdown] slideshow={"slide_type": "subslide"}
# This then leads to
# $$
# e^{(k)} = P_k(A) e^{(0)}
# $$
# where
# $$
# P_k(A) = I + c_1 A + c_2 A^2 + \cdots + c_k A^k.
# $$
# Note that $P_k(A)$ is a polynomial in $A$. We can relate this to a scalar polynomial by letting
# $$
# P_k(x) = 1 + c_1 x + c_2 x^2 + \cdots +c_k x^k
# $$
# so that $P_k \in \mathcal{P}_k$ where $\mathcal{P}_k$ are polynomials of degree at most $k$ and $P(0) = 1$.
# + [markdown] slideshow={"slide_type": "subslide"}
# CG implicitly constructs the polynomial $P_k$ and solves the related minimization problem
# $$
# \min_{P \in \mathcal{P}_k} ||P(A) e^{(0)}||_A.
# $$
# + [markdown] slideshow={"slide_type": "subslide"}
# #### What is a polynomial of matrices?
#
# If we know our matrix is diagonalizable we can write this diagonalization as
# $$
# A = V \Lambda V^{-1}.
# $$
# + [markdown] slideshow={"slide_type": "subslide"}
# If we take powers of $A$ then we find
# $$
# A^k = (V \Lambda V^{-1})^k = V \Lambda V^{-1} V \Lambda V^{-1} \cdots V \Lambda V^{-1} = V \Lambda^k V^{-1}.
# $$
# + [markdown] slideshow={"slide_type": "subslide"}
# Polynomials in $A$ then can be written as
# $$
# P_k(A) = V P_k(\Lambda) V^{-1}
# $$
# so that
# $$
# P_k(\Lambda) = \begin{bmatrix}
# P_k(\lambda_1) \\
# & P_k(\lambda_2) \\
# & & P_k(\lambda_3) \\
# & & & \ddots \\
# & & & & P_k(\lambda_m)
# \end{bmatrix}.
# $$
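# + [markdown] slideshow={"slide_type": "subslide"}
# The identity $P_k(A) = V P_k(\Lambda) V^{-1}$ is easy to confirm numerically for a small symmetric matrix and an arbitrary polynomial with $P(0) = 1$:
# + slideshow={"slide_type": "skip"}

```python
import numpy

# Check P(A) = V P(Lambda) V^{-1} for P(x) = 1 + c_1 x + c_2 x^2
A = numpy.array([[2.0, -1.0, 0.0],
                 [-1.0, 2.0, -1.0],
                 [0.0, -1.0, 2.0]])
c_1, c_2 = -0.3, 0.05

P_A = numpy.eye(3) + c_1 * A + c_2 * numpy.dot(A, A)
lam, V = numpy.linalg.eig(A)
P_Lambda = numpy.diag(1.0 + c_1 * lam + c_2 * lam**2)
diff = numpy.linalg.norm(P_A - numpy.dot(V, numpy.dot(P_Lambda, numpy.linalg.inv(V))))
print(diff)   # roundoff-sized
```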
# + [markdown] slideshow={"slide_type": "subslide"}
# Now going back to our expression for the error we have
# $$
# e^{(k)} = P_k(A) e^{(0)}.
# $$
# If the polynomial $P_k(A)$ has a root at each $\lambda_k$ then $P_k(\Lambda)$ is the zero matrix and the above expression for the error leads us to the conclusion that $e^{(k)} = 0$. If the eigenvalues are repeated (say we have $n$ unique eigenvalues, this is known as geometric multiplicity) then we can say that there is again a polynomial $P_n \in \mathcal{P}_n$ that also has these roots and we would expect $e^{(n)} = 0$, we converge faster than the dimension of $A$!
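# + [markdown] slideshow={"slide_type": "subslide"}
# We can observe this directly: construct an SPD matrix with only three distinct eigenvalues, and a minimal inline CG (the same update rules as before) drives the residual to roundoff in three iterations, far fewer than $m = 50$.
# + slideshow={"slide_type": "skip"}

```python
import numpy

m = 50
rng = numpy.random.default_rng(1)
Q, _ = numpy.linalg.qr(rng.standard_normal((m, m)))   # random orthogonal basis
lam = numpy.repeat([1.0, 4.0, 9.0], [20, 20, 10])     # only 3 distinct eigenvalues
A = numpy.dot(Q, numpy.dot(numpy.diag(lam), Q.T))
b = rng.standard_normal(m)

u = numpy.zeros(m)
r = b.copy()
p = r.copy()
for k in range(3):                                    # exactly 3 CG steps
    A_dot_p = numpy.dot(A, p)
    alpha = numpy.dot(r, r) / numpy.dot(p, A_dot_p)
    u += alpha * p
    r_new = r - alpha * A_dot_p
    p = r_new + numpy.dot(r_new, r_new) / numpy.dot(r, r) * p
    r = r_new
final_residual = numpy.linalg.norm(b - numpy.dot(A, u))
print(final_residual)   # roundoff-sized after only 3 iterations
```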
# + [markdown] slideshow={"slide_type": "subslide"}
# Suppose we want to know how large the error is at iteration $k$, before the iteration count that guarantees convergence, i.e. how $||e^{(k)}||_A$ behaves relative to $||e^{(0)}||_A$. It turns out for any $P \in \mathcal{P}_k$ we have
# $$
# \frac{||P(A)e^{(0)}||_A}{||e^{(0)}||_A} \leq \max_{1 \leq j \leq m} |P(\lambda_j)|.
# $$
# We then need to find one polynomial $\hat{P_k} \in \mathcal{P}_k$ for which we can obtain a useful bound on $||e^{(k)}||_A / ||e^{(0)}||_A$.
# + [markdown] slideshow={"slide_type": "subslide"}
# Surprisingly enough, a shifted and scaled version of the Chebyshev polynomials $T_k(x)$ will work for this! Setting
# $$
# \hat{P_k}(x) = \frac{T_k\left(\frac{\lambda_m + \lambda_1 - 2x}{\lambda_m - \lambda_1} \right )}{T_k\left(\frac{\lambda_m + \lambda_1}{\lambda_m - \lambda_1}\right ) }
# $$
#
# and compute
# $$
# \max_{1\leq j \leq m} \left |\hat{P_k}(\lambda_j) \right|.
# $$
# + [markdown] slideshow={"slide_type": "subslide"}
# With some analysis (refer to LeVeque for a full derivation) we can show that
# $$
# T_k\left(\frac{\lambda_m + \lambda_1}{\lambda_m - \lambda_1}\right) = \frac{1}{2} \left[ \left ( \frac{\sqrt{\kappa} + 1}{\sqrt{\kappa} - 1} \right)^k + \left ( \frac{\sqrt{\kappa} - 1}{\sqrt{\kappa} + 1} \right)^k \right ]
# $$
# and
# $$
# \frac{||P(A) e^{(0)}||_A}{||e^{(0)}||_A} \leq 2 \left[ \left ( \frac{\sqrt{\kappa} + 1}{\sqrt{\kappa} - 1} \right)^k + \left ( \frac{\sqrt{\kappa} - 1}{\sqrt{\kappa} + 1} \right)^k \right ]^{-1} \leq 2 \left ( \frac{\sqrt{\kappa} - 1}{\sqrt{\kappa} + 1} \right)^k
# $$
# where $\kappa$ is the matrix condition number.
# + [markdown] slideshow={"slide_type": "subslide"}
# If the condition number of the matrix $\kappa$ is large we can also simplify our bound to
# $$
# 2 \left ( \frac{\sqrt{\kappa} - 1}{\sqrt{\kappa} + 1} \right)^k \approx 2 \left ( 1 - \frac{2}{\sqrt{\kappa}} \right)^k \approx 2 e^{-2 k / \sqrt{\kappa}}.
# $$
# This implies that the expected number of iterations $k$ to reach a desired tolerance will be $k = \mathcal{O}\left(\sqrt{\kappa}\right)$.
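# + [markdown] slideshow={"slide_type": "subslide"}
# A quick computation of this bound (a sketch; `cg_iteration_bound` is our own helper) shows the $\mathcal{O}\left(\sqrt{\kappa}\right)$ growth: multiplying $\kappa$ by $100$ multiplies the predicted iteration count by roughly $10$.
# + slideshow={"slide_type": "skip"}

```python
import numpy

def cg_iteration_bound(kappa, tol):
    """Smallest k with 2 ((sqrt(kappa) - 1) / (sqrt(kappa) + 1))^k <= tol."""
    rho = (numpy.sqrt(kappa) - 1.0) / (numpy.sqrt(kappa) + 1.0)
    return int(numpy.ceil(numpy.log(tol / 2.0) / numpy.log(rho)))

# Predicted iterations to reach 1e-6 grow like sqrt(kappa)
for kappa in [1.0e2, 1.0e4, 1.0e6]:
    print(kappa, cg_iteration_bound(kappa, 1.0e-6), numpy.sqrt(kappa))
```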
# + [markdown] slideshow={"slide_type": "slide"}
# ### Preconditioning
#
# One way to get around the difficulties with these types of methods due to the distortion of the ellipses (and consequently the conditioning of the matrix) is to precondition the matrix. The basic idea is that we take our original problem $A u = f$ and instead solve
# $$
# M^{-1} A u = M^{-1} f.
# $$
# + [markdown] slideshow={"slide_type": "subslide"}
# Note that since we need to find the inverse of $M$, this matrix should be nice. A couple of examples may help illustrate why this might be a good idea:
#
# - If $M = A$ then we essentially have solved our problem already although that does not help us much
# - If $M = \text{diag}(A)$, then $M^{-1}$ is easily computed and it turns out for some problems this can decrease the condition number of $M^{-1} A$ significantly. Note though that this is not actually helpful in the case of the Poisson problem.
# - If $M$ is based on another iterative method used on $A$, for instance Gauss-Seidel, these can be effective general use preconditioners for many problems.
#
# + [markdown] slideshow={"slide_type": "subslide"}
# So the next question then becomes how to choose a preconditioner. This is usually very problem specific and a number of papers suggest strategies for particular problems. In general we want to move the eigenvalues around so that the matrix is not as ill-conditioned. Combining a preconditioner with CG leads to the popular PCG method.
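# + [markdown] slideshow={"slide_type": "subslide"}
# As an illustration, here is a minimal sketch of PCG with the simplest choice above, the diagonal (Jacobi) preconditioner $M = \text{diag}(A)$. The function `pcg` and the test system are our own constructions for demonstration, not a library routine.
# + slideshow={"slide_type": "skip"}

```python
import numpy

def pcg(A, b, M_inv_diag, tolerance=1e-10, max_iterations=500):
    """Minimal preconditioned CG sketch with a diagonal preconditioner."""
    u = numpy.zeros_like(b)
    r = b - numpy.dot(A, u)
    z = M_inv_diag * r                     # apply M^{-1}
    p = z.copy()
    r_dot_z = numpy.dot(r, z)
    for k in range(max_iterations):
        if numpy.linalg.norm(r, ord=2) < tolerance:
            return u, k
        A_dot_p = numpy.dot(A, p)
        alpha = r_dot_z / numpy.dot(p, A_dot_p)
        u += alpha * p
        r -= alpha * A_dot_p
        z = M_inv_diag * r
        r_dot_z_new = numpy.dot(r, z)
        p = z + (r_dot_z_new / r_dot_z) * p
        r_dot_z = r_dot_z_new
    return u, max_iterations

# SPD test system with wildly varying diagonal scales, where M = diag(A) helps
rng = numpy.random.default_rng(3)
m = 100
d = 10.0 ** rng.uniform(0.0, 4.0, m) + 1.0
A = numpy.diag(d) + 0.5 * numpy.eye(m, k=1) + 0.5 * numpy.eye(m, k=-1)
b = rng.standard_normal(m)
u, iterations = pcg(A, b, 1.0 / numpy.diag(A))
print(iterations, numpy.linalg.norm(b - numpy.dot(A, u)))
```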
# + [markdown] slideshow={"slide_type": "slide"}
# ## Generalized Minimum Residual (GMRES) Algorithm
# + [markdown] slideshow={"slide_type": "subslide"}
# What if our system is not symmetric positive (negative) definite?
#
# Panic!
#
# (or don't, if you have a towel and a certain guide handy)
# + [markdown] slideshow={"slide_type": "subslide"}
# Not really. GMRES is one approach to solving this problem for us. Since we can no longer depend on the structure of the related scalar minimization problem we instead will minimize the residual in the same subspaces we had before using a least-squares approach.
# + [markdown] slideshow={"slide_type": "subslide"}
# In the $k$th iteration GMRES solves a least squares problem in a particular subspace of the full problem. In this case we will use the spaces we saw before, $u_0 + \mathcal{K}_k$, where $\mathcal{K}_k$ is defined as
# $$
# \mathcal{K}_k = \text{span} (r^{(0)}, A r^{(0)}, A^2 r^{(0)}, \ldots, A^{k-1} r^{(0)}).
# $$
# + [markdown] slideshow={"slide_type": "subslide"}
# To do this we construct a matrix $Q$ that contains in its columns a set of orthonormal vectors that span the space $\mathcal{K}_k$. Since this is iterative, each time we increment $k$ we only need to find one additional vector for $Q$ to span the next space.
# + [markdown] slideshow={"slide_type": "subslide"}
# Starting with the $k$th iterate we have
# $$
# Q = [q_1 q_2 q_3 \cdots~q_k] \in \mathbb R^{m \times k}.
# $$
# Pick a vector $v_j \notin \mathcal{K}_k$ and use Gram-Schmidt (or something similar) to orthogonalize this vector against all the rest of the $q_i$.
# + [markdown] slideshow={"slide_type": "subslide"}
# What should we choose for $v_j$?
#
# There are some bad choices such as $v_k = A^k r^{(0)}$ (this works in exact arithmetic, but the vectors quickly align with the dominant eigenvector, leading to a nearly singular basis).
#
# Instead we will choose $v_k = A q_k$. In effect we are building up a factorization of the matrix $A$. This process is called the *Arnoldi process*.
# + [markdown] slideshow={"slide_type": "subslide"}
# ### Arnoldi Process
#
# The Arnoldi process builds an orthogonal basis of the Krylov subspace $\mathcal{K}_m$. We can use any of the algorithms to find the orthogonalization (Gram-Schmidt for instance). In essence then we have the following procedure
#
# Initialize the process with a vector $v_0$ with $||v_0|| = 1$. Now iterate on the following steps:
# 1. $w_k = A v_{k-1}$
# 2. Orthogonalize the vector $w_k$ against all the previous $v_j, \quad j < k$
# 3. Stop when $w_k$ vanishes.
# + [markdown] slideshow={"slide_type": "subslide"}
# This process has the practical effect of creating the following transformation
# $$
# A V_m = V_m H_m
# $$
# where $V_m$ is the $m \times m$ matrix whose columns are the computed vectors $v_j$, and $H_m$ is the $(m + 1) \times m$ Hessenberg matrix, with its last row deleted, whose nonzero entries are $h_{ij} = A v_j \cdot v_i$ and $h_{j+1,j} = ||w_j||_2$. This describes exactly an orthogonalization process.
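# + [markdown] slideshow={"slide_type": "subslide"}
# The relation $A V_n = V_{n+1} \tilde{H}_n$ (keeping the last row of the Hessenberg matrix) and the orthonormality of the basis can be sanity-checked numerically. This is a self-contained sketch, independent of the `arnoldi` code used later in this notebook, and assumes no breakdown occurs.
# + slideshow={"slide_type": "skip"}

```python
import numpy

def arnoldi_steps(A, v0, n):
    """n Arnoldi steps: V is m x (n + 1), H_tilde is (n + 1) x n."""
    m = v0.shape[0]
    V = numpy.zeros((m, n + 1))
    H = numpy.zeros((n + 1, n))
    V[:, 0] = v0 / numpy.linalg.norm(v0)
    for k in range(n):
        w = numpy.dot(A, V[:, k])
        for j in range(k + 1):             # modified Gram-Schmidt
            H[j, k] = numpy.dot(V[:, j], w)
            w = w - H[j, k] * V[:, j]
        H[k + 1, k] = numpy.linalg.norm(w)
        V[:, k + 1] = w / H[k + 1, k]      # assumes no breakdown
    return V, H

rng = numpy.random.default_rng(7)
A = rng.standard_normal((30, 30))          # non-symmetric is fine here
v0 = rng.standard_normal(30)
V, H = arnoldi_steps(A, v0, 10)
# A V_n = V_{n+1} H_tilde and the basis is orthonormal (up to roundoff)
print(numpy.linalg.norm(numpy.dot(A, V[:, :10]) - numpy.dot(V, H)))
print(numpy.linalg.norm(numpy.dot(V.T, V) - numpy.eye(11)))
```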
# + [markdown] slideshow={"slide_type": "subslide"}
# Now how do we apply this? Given an initial guess $x^{(0)}$ consider the orthogonal projection method where
# $$
# \mathcal{L} = \mathcal{K} = \mathcal{K}_m(A, r^{(0)})
# $$
# where the Krylov subspace is
# $$
# \mathcal{K}_m(A, r^{(0)}) = \text{span}\{r^{(0)}, A r^{(0)}, A^2 r^{(0)}, \ldots,A^{m-1} r^{(0)} \}.
# $$
# + [markdown] slideshow={"slide_type": "subslide"}
# Let $v_1 = r^{(0)} / || r^{(0)} ||_2$ and apply Arnoldi's method; we then have
# $$
# V^T_m A V_m = H_m
# $$
# which also implies that
# $$
# V^T_m r^{(0)} = V^T_m (\beta v_1) = \beta e_1
# $$
# where $\beta = ||r^{(0)}||_2$. We can then find
# $$
# x_m = x^{(0)} + V_m y_m \\
# y_m = H^{-1}_m (\beta e_1)
# $$
# + [markdown] slideshow={"slide_type": "subslide"}
# This algorithm is equivalent to Conjugate Gradient when the matrix is SPD.
#
# Also note that in the case the matrix is symmetric the Hessenberg matrix $H$ is also symmetric implying that it is tri-diagonal greatly simplifying our general result.
# + slideshow={"slide_type": "skip"}
def arnoldi(A, b):
    """Arnoldi process on K_n(A, b): returns the orthonormal basis Q
    (n x n) and the (n + 1) x n Hessenberg matrix H with A Q = Q_{n+1} H."""
    n = b.shape[0]
    Q = numpy.zeros((n, n))
    Q[:, 0] = b / numpy.linalg.norm(b)
    H = numpy.zeros((n + 1, n))
    for k in range(n):
        v = numpy.dot(A, Q[:, k])
        # Modified Gram-Schmidt against all previous basis vectors
        for j in range(k + 1):
            H[j, k] = numpy.dot(Q[:, j], v)
            v = v - H[j, k] * Q[:, j]
        H[k + 1, k] = numpy.linalg.norm(v)
        if H[k + 1, k] != 0 and k != n - 1:
            Q[:, k + 1] = v / H[k + 1, k]
    return Q, H


# Problem setup
a = 0.0
b = 1.0
alpha = 0.0
beta = 3.0
f = lambda x: numpy.exp(x)
u_true = lambda x: (4.0 - numpy.exp(1.0)) * x - 1.0 + numpy.exp(x)

# Discretization
m = 100
x_bc = numpy.linspace(a, b, m + 2)
x = x_bc[1:-1]
delta_x = (b - a) / (m + 1)

# Construct matrix A
A = numpy.zeros((m, m))
diagonal = numpy.ones(m) / delta_x**2
A += numpy.diag(diagonal * 2.0, 0)
A += numpy.diag(-diagonal[:-1], 1)
A += numpy.diag(-diagonal[:-1], -1)

# Construct right hand side (named rhs so the domain endpoint b and the
# boundary value beta above are not clobbered)
rhs = -f(x)
rhs[0] += alpha / delta_x**2
rhs[-1] += beta / delta_x**2

U = numpy.zeros(m + 2)
U[0] = alpha
U[-1] = beta

# GMRES: with initial guess zero, r^(0) = rhs.  Build the Krylov basis with
# Arnoldi, then minimize ||beta_r e_1 - H y||_2 by least squares.
Q, H = arnoldi(A, rhs)
beta_r = numpy.linalg.norm(rhs)
e_1 = numpy.zeros(m + 1)
e_1[0] = beta_r
y = numpy.linalg.lstsq(H, e_1, rcond=None)[0]
U[1:-1] = numpy.dot(Q, y)
# Plot result
fig = plt.figure()
axes = fig.add_subplot(1, 1, 1)
axes.plot(x_bc, U, 'o', label="Computed")
axes.plot(x_bc, u_true(x_bc), 'k', label="True")
axes.set_title("Solution to $u_{xx} = e^x$")
axes.set_xlabel("x")
axes.set_ylabel("u(x)")
plt.show()
# + [markdown] slideshow={"slide_type": "slide"}
# ## Multigrid
# + [markdown] slideshow={"slide_type": "subslide"}
# ### Jacobi Revisited
#
# Consider the Poisson problem with
# $$
# f(x) = -20 + \frac{1}{2}\left ( \phi''(x) \cos \phi(x) - (\phi'(x))^2 \sin \phi(x) \right )
# $$
# with
# $$
# \phi = 20 \pi x^3,
# $$
# and boundary conditions $u(0) = 1$ and $u(1) = 3$.
#
# Integrating twice we can find that the solution of this problem is
# $$
# u(x) = 1 + 12 x - 10 x^2 + \frac{1}{2} \sin \phi(x).
# $$
#
# Discretizing this problem in the standard way with second order, centered finite differences leads to the following code
# + slideshow={"slide_type": "skip"}
def jacobi_update(x, U, f, delta_x):
"""Update U with a single Jacobi iteration"""
U_new = U.copy()
for i in range(1, x.shape[0] - 1):
U_new[i] = 0.5 * (U[i+1] + U[i-1]) - f(x[i]) * delta_x**2 / 2.0
step_size = numpy.linalg.norm(U_new - U, ord=2)
del(U)
return U_new, step_size
# Problem setup
a = 0.0
b = 1.0
alpha = 1.0
beta = 3.0
phi = lambda x: 20.0 * numpy.pi * x**3
phi_prime = lambda x: 60.0 * numpy.pi * x**2
phi_dbl_prime = lambda x: 120.0 * numpy.pi * x
f = lambda x: -20.0 + 0.5 * (phi_dbl_prime(x) * numpy.cos(phi(x)) - (phi_prime(x))**2 * numpy.sin(phi(x)))
u_true = lambda x: 1.0 + 12.0 * x - 10.0 * x**2 + 0.5 * numpy.sin(phi(x))
# Descretization
m = 100
x_bc = numpy.linspace(a, b, m + 2)
x = x_bc[1:-1]
delta_x = (b - a) / (m + 1)
# Expected iterations needed
iterations_J = int(2.0 * numpy.log(delta_x) / numpy.log(1.0 - 0.5 * numpy.pi**2 * delta_x**2))
# iterations_J = 100
# Solve system
# Initial guess for iterations
U = 1.0 + 2.0 * x_bc
U[0] = alpha
U[-1] = beta
convergence_J = numpy.zeros(iterations_J)
step_size = numpy.zeros(iterations_J)
for k in range(iterations_J):
U, step_size[k] = jacobi_update(x_bc, U, f, delta_x)
convergence_J[k] = numpy.linalg.norm(delta_x * (u_true(x_bc) - U), ord=2)
# Plot result
fig = plt.figure()
fig.set_figwidth(fig.get_figwidth() * 3)
axes = fig.add_subplot(1, 3, 1)
axes.plot(x_bc, U, 'o', label="Computed")
axes.plot(x_bc, u_true(x_bc), 'k', label="True")
axes.set_title("Solution to $u_{xx} = f(x)$")
axes.set_xlabel("x")
axes.set_ylabel("u(x)")
axes = fig.add_subplot(1, 3, 2)
axes.semilogy(list(range(iterations_J)), convergence_J, 'o')
axes.set_title("Error")
axes.set_xlabel("Iteration")
axes.set_ylabel("$||U^{(k)} - u(x)||_2$")
axes = fig.add_subplot(1, 3, 3)
axes.semilogy(list(range(iterations_J)), step_size, 'o')
axes.set_title("Change Each Step")
axes.set_xlabel("Iteration")
axes.set_ylabel("$||U^{(k)} - U^{(k-1)}||_2$")
plt.show()
# + [markdown] slideshow={"slide_type": "subslide"}
# We eventually converge although we see that there is a lower limit to the effectiveness of the Jacobi iterations. We also again observe the extremely slow convergence we expect. What if we could still take advantage of Jacobi though?
# + slideshow={"slide_type": "skip"}
# Problem setup
a = 0.0
b = 1.0
alpha = 1.0
beta = 3.0
phi = lambda x: 20.0 * numpy.pi * x**3
phi_prime = lambda x: 60.0 * numpy.pi * x**2
phi_dbl_prime = lambda x: 120.0 * numpy.pi * x
f = lambda x: -20.0 + 0.5 * (phi_dbl_prime(x) * numpy.cos(phi(x)) - (phi_prime(x))**2 * numpy.sin(phi(x)))
u_true = lambda x: 1.0 + 12.0 * x - 10.0 * x**2 + 0.5 * numpy.sin(phi(x))
# Descretization
m = 255
x_bc = numpy.linspace(a, b, m + 2)
x = x_bc[1:-1]
delta_x = (b - a) / (m + 1)
U = 1.0 + 2.0 * x_bc
U[0] = alpha
U[-1] = beta
num_steps = 1000
plot_frequency = 200
# Plot initial error
fig = plt.figure()
fig.set_figwidth(fig.get_figwidth() * 2)
axes = fig.add_subplot(1, 2, 1)
axes.plot(x_bc, U, 'o', label="Computed")
axes.plot(x_bc, u_true(x_bc), 'k', label="True")
axes.set_title("Solution to $u_{xx} = f(x)$, iterations = %s" % 0)
axes.set_xlabel("x")
axes.set_ylabel("u(x)")
axes = fig.add_subplot(1, 2, 2)
axes.plot(x_bc, U - u_true(x_bc), 'r-o')
axes.set_title("Error, iterations = %s" % 0)
axes.set_xlabel("x")
axes.set_ylabel("U - u")
# Start Jacobi iterations
for k in range(num_steps):
U, step_size = jacobi_update(x_bc, U, f, delta_x)
if (k+1)%plot_frequency == 0:
print(step_size)
fig = plt.figure()
fig.set_figwidth(fig.get_figwidth() * 2)
axes = fig.add_subplot(1, 2, 1)
axes.plot(x_bc, U, 'o', label="Computed")
axes.plot(x_bc, u_true(x_bc), 'k', label="True")
axes.set_title("Solution to $u_{xx} = f(x)$, iterations = %s" % (k + 1))
axes.set_xlabel("x")
axes.set_ylabel("u(x)")
axes = fig.add_subplot(1, 2, 2)
axes.plot(x_bc, U - u_true(x_bc), 'r-o')
axes.set_title("Error, iterations = %s" % (k + 1))
axes.set_xlabel("x")
axes.set_ylabel("U - u")
plt.show()
# + [markdown] slideshow={"slide_type": "subslide"}
# Note that higher frequency components of the error are removed first! Why might this be?
# + [markdown] slideshow={"slide_type": "subslide"}
# Recall that we found in general that the error $e^{(k)}$ from a matrix splitting iterative approach involves the matrix $G$ where
# $$
# U^{(k+1)} = M^{-1} N U^{(k)} + M^{-1} b = G U^{(k)} + c.
# $$
# We then know that
# $$
# e^{(k)} = G e^{(k-1)}.
# $$
# + [markdown] slideshow={"slide_type": "subslide"}
# In the case for Jacobi the matrix $G$ can be written as
# $$
# G = I + \frac{\Delta x^2}{2} A = \begin{bmatrix}
# 0 & 1/2 & \\
# 1/2 & 0 & 1/2 \\
# & 1/2 & 0 & 1/2 \\
# & & \ddots & \ddots & \ddots \\
# & & & 1/2 & 0 & 1/2 \\
# & & & & 1/2 & 0
# \end{bmatrix}
# $$
# Note that this amounts to averaging the off diagonal terms $U_{i+1}$ and $U_{i-1}$. Averaging has the effect of smoothing, i.e. it damps out higher frequencies more quickly.
# + [markdown] slideshow={"slide_type": "subslide"}
# Recall that the eigenvectors of $A$ and $G$ are the same, if those eigenvectors are
# $$
# u^p_j = \sin(\pi p x_j) \quad \text{with} \quad x_j = j \Delta x, \quad j = 1, 2, 3, \ldots, m.
# $$
# with eigenvalues
# $$
# \lambda_p = \cos(p \pi \Delta x).
# $$
# + [markdown] slideshow={"slide_type": "subslide"}
# We can project the initial error $e^{(0)}$ onto the eigenspace such that
# $$
# e^{(0)} = c_1 u^1 + c_2 u^2 + \cdots + c_m u^m
# $$
# and therefore
# $$
# e^{(k)} = c_1 (\lambda_1)^k u^1 + c_2 (\lambda_2)^k u^2 + \cdots + c_m (\lambda_m)^k u^m.
# $$
# This implies that the $p$th component of the vector $e^{(k)}$ decays as the corresponding eigenvalue.
# + [markdown] slideshow={"slide_type": "subslide"}
# Examining the eigenvalues we know that the 1st and $m$th eigenvalues will be closest to 1 so the terms $c_1 (\lambda_1)^k u^1$ and $c_m (\lambda_m)^k u^m$ will dominate the error as
# $$
# \lambda_1 = -\lambda_m \approx 1- \frac{1}{2} \pi^2 \Delta x^2.
# $$
# We saw this before as this determined the overall convergence rate for Jacobi.
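# + [markdown] slideshow={"slide_type": "subslide"}
# A quick check of this approximation, and of the iteration count it implies: damping the slowest mode by a factor of $10^{-3}$ takes tens of thousands of Jacobi sweeps at this resolution.
# + slideshow={"slide_type": "skip"}

```python
import numpy

m = 255
delta_x = 1.0 / (m + 1)
lam_1 = numpy.cos(numpy.pi * delta_x)
approx = 1.0 - 0.5 * numpy.pi**2 * delta_x**2
print(lam_1, approx)                         # agree to several digits
# Jacobi iterations needed to damp the slowest mode by a factor 1e-3:
iterations_needed = numpy.log(1.0e-3) / numpy.log(lam_1)
print(iterations_needed)
```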
# + [markdown] slideshow={"slide_type": "subslide"}
# For other components of the error we can approximately see how fast they will decay. If $m / 4 \leq p \leq 3m / 4$ then
# $$
# |\lambda_p| \leq \frac{1}{\sqrt{2}} \approx 0.7
# $$
# implying that after 20 iterations we would have $|\lambda_p|^{20} < 10^{-3}$.
# + [markdown] slideshow={"slide_type": "subslide"}
# Connecting this back to our original observation that higher frequency components of the error are damped more quickly, look at the form of the eigenvector components. The original error was projected onto the eigenvectors so that
# $$
# e^{(0)} = c_1 u^1 + c_2 u^2 + \cdots + c_m u^m,
# $$
# plugging in the eigenvectors themselves we find
# $$
# e^{(0)} = c_1 \sin(\pi x_j) + c_2 \sin(\pi 2 x_j) + \cdots + c_m \sin(\pi m x_j)
# $$
# so that we have effectively broken down the original error in terms of a Fourier sine series.
# + [markdown] slideshow={"slide_type": "subslide"}
# Now considering our analysis on the eigenvalues we see that it is in fact the middle range of frequencies that decay the most quickly. The reason we did not see this in our example is that the solution did not contain high-order frequencies relative to our choice of $m$. If we picked an $m$ that was too small we would have been in trouble (for multiple reasons). Try this out and see what you observe.
# + [markdown] slideshow={"slide_type": "subslide"}
# It turns out that having only the middle ranges decay quickly is suboptimal in the context we are considering. Instead we will use *underrelaxed Jacobi*, similar to SOR from before, where
# $$
# U^{(k+1)} = (1 - \omega) U^{(k)} + \omega G U^{(k)}
# $$
# with $\omega = 2/3$ (where $G$ is Jacobi's iteration matrix). The new iteration matrix is
# $$
# G_\omega = (1 - \omega) I + \omega G
# $$
# and has eigenvalues
# $$
# \lambda_p = (1-\omega)+\omega \cos (p \pi \Delta x).
# $$
# + [markdown] slideshow={"slide_type": "subslide"}
# This choice of $\omega$ in fact then minimizes the eigenvalues in the range $m/2 < p < m$. In fact
# $$
# |\lambda_p| \leq 1/3
# $$
# for this range. As a standalone method this is actually worse than Jacobi as the lower frequency components of the error decay even more slowly than before but this behavior is perfect for a multigrid approach.
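# + [markdown] slideshow={"slide_type": "subslide"}
# We can confirm the $|\lambda_p| \leq 1/3$ claim numerically by evaluating the eigenvalues of $G_\omega$ over the upper half of the frequency range:
# + slideshow={"slide_type": "skip"}

```python
import numpy

m = 255
delta_x = 1.0 / (m + 1)
p = numpy.arange(1, m + 1)
omega = 2.0 / 3.0
lam = (1.0 - omega) + omega * numpy.cos(p * numpy.pi * delta_x)

# Upper half of the frequency range is damped by at least a factor 3 per sweep
upper_half = numpy.abs(lam[m // 2:])
print(numpy.max(upper_half))
```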
# + [markdown] slideshow={"slide_type": "subslide"}
# ### Multigrid Approach
# + [markdown] slideshow={"slide_type": "subslide"}
# The basic approach for multigrid is to do the following:
#
# 1. Use a few iterations of underrelaxed Jacobi to damp high-frequency components of the error.
# 1. Since the error is now smoother than before we can represent the solution (and error) as a coarser resolution grid. We then switch to a coarser resolution.
# 1. On this coarser grid we then again apply a few iterations of underrelaxed Jacobi which now quickly removes a set of lower-frequency components of the error since we are on a coarser grid.
# + [markdown] slideshow={"slide_type": "subslide"}
# Consider the $p = m/4$ component that on the original grid will not be damped much since it is in the lower-frequency range. If we transfer the problem now to a grid with half as many points we then find that the previous frequency is now at the midpoint of the frequency range and therefore will be damped much more quickly!
# + [markdown] slideshow={"slide_type": "subslide"}
# One important point here is that instead of transferring the solution to the coarser grid we only transfer the error.
#
# If we have taken $n$ iterations on the original grid we would now have
# $$
# e^{(n)} = U^{(n)} - u
# $$
# which has a residual vector of
# $$
# r^{(n)} = f - A U^{(n)}
# $$
# since
# $$
# A e^{(n)} = -r^{(n)}.
# $$
# If we can solve this system for $e^{(n)}$ then we could go back and simply subtract this from the equation relating our numerical solution $U^{(n)}$ to $u$. The system of equations relating the error and residual is the one we are interested in coarsening.
# + [markdown] slideshow={"slide_type": "subslide"}
# Basic Algorithm:
# 1. Take $n$ iterations (where $n$ is fixed) of a simple iterative method on the original problem $A u = f$. This gives the approximate solution $U^{(n)} \in \mathbb R^m$.
# 1. Compute the residual $r^{(n)} = f - A U^{(n)} \in \mathbb R^m$.
# 1. Coarsen the residual problem, take $r^{(n)} \in \mathbb R^m$ to $\hat{r~} \in \mathbb R^{m_c}$ where $m_c = (m - 1) / 2$.
# 1. Approximately solve the new problem $\hat{A~} \hat{e~} = -\hat{r~}$ where $\hat{A~}$ is the appropriately scaled matrix $A$.
# 1. We now have an approximation to the error $e^{(k)}$ in $\hat{e~}$. To get back to $e^{(k)}$ we use an appropriate interpolation method to go back to $\mathbb R^m$. Now subtract this interpolated approximate error to get a new approximation $U$ to $u$.
# 1. Using this new value of $U$ as an initial guess repeat the process.
# + [markdown] slideshow={"slide_type": "subslide"}
# There are many variations on this scheme, most notably that we do not need to stop at a single coarsening, we can continue to coarsen to dampen additional lower frequencies depending mostly on the number of grid points $m$ we started out with. The specification of levels of coarsening combined with the interpolation back up to the original problem is called a *V-cycle*.
# + [markdown] slideshow={"slide_type": "subslide"}
# Also note that although we are solving multiple linear systems, each coarsened system decreases the number of points used by half (this is also adjustable).
# + [markdown] slideshow={"slide_type": "subslide"}
# 
# 
# +
def smoother(v, N, A, f):
    """Apply N Gauss-Seidel sweeps to A v = f starting from v."""
    for i in range(N):
        for k in range(A.shape[0]):
            v[k] = (f[k] - numpy.dot(A[k, 0:k], v[0:k])
                         - numpy.dot(A[k, k+1:], v[k+1:])) / A[k, k]
    return v


def get_mat(A):
    """Construct the linear-interpolation prolongation matrix P (fine x coarse)."""
    f = A.shape[0]
    c = int((f - 1) / 2 + 1)
    P = numpy.zeros((f, c))
    for k in range(c):
        P[2 * k, k] = 1.0
    for k in range(c - 1):
        P[2 * k + 1, k] = 0.5
        P[2 * k + 1, k + 1] = 0.5
    return P


def vcycle(A, f):
    """One multigrid V-cycle for A v = f, recursing until the grid is small."""
    if A.shape[0] < 15:
        return numpy.linalg.solve(A, f)
    N = 5
    v = numpy.zeros(A.shape[0])
    v = smoother(v, N, A, f)                        # pre-smoothing
    P = get_mat(A)
    residual = f - numpy.dot(A, v)
    residual_coarse = numpy.dot(P.T, residual)      # restrict residual
    A_coarse = numpy.dot(P.T, numpy.dot(A, P))      # Galerkin coarse operator
    # Add (not overwrite) the interpolated coarse-grid correction
    v += numpy.dot(P, vcycle(A_coarse, residual_coarse))
    v = smoother(v, N, A, f)                        # post-smoothing
    return v
# +
# Problem setup
a = 0.0
b = 1.0
alpha = 0.0
beta = 3.0
f = lambda x: numpy.exp(x)
u_true = lambda x: (4.0 - numpy.exp(1.0)) * x - 1.0 + numpy.exp(x)
# Descretization
m = 50
x_bc = numpy.linspace(a, b, m + 2)
x = x_bc[1:-1]
delta_x = (b - a) / (m + 1)
# Construct matrix A
A = numpy.zeros((m, m))
diagonal = numpy.ones(m) / delta_x**2
A += numpy.diag(diagonal * 2.0, 0)
A += numpy.diag(-diagonal[:-1], 1)
A += numpy.diag(-diagonal[:-1], -1)
# Construct right hand side
b = -f(x)
b[0] += alpha / delta_x**2
b[-1] += beta / delta_x**2
# True solution to system
U_true = numpy.linalg.solve(A, b)
# Algorithm parameters
MAX_CYCLES = 50
tolerance = delta_x**2
# Perform multi-grid
U = numpy.zeros(m)
convergence_MG = numpy.zeros(MAX_CYCLES)
step_size_MG = numpy.zeros(MAX_CYCLES)
residual_norm = numpy.zeros(MAX_CYCLES)
system_convergence = numpy.zeros(MAX_CYCLES)
success = False
for k in range(MAX_CYCLES):
U_old = U.copy()
# Compute residual
r = b - numpy.dot(A, U)
if numpy.linalg.norm(r, ord=2) < tolerance:
success = True
break
U += vcycle(A, r)
# Convergence measures
convergence_MG[k] = numpy.linalg.norm(u_true(x) - U, ord=2)
residual_norm[k] = numpy.linalg.norm(r, ord=2)
step_size_MG[k] = numpy.linalg.norm(U - U_old, ord=2)
system_convergence[k] = numpy.linalg.norm(U_true - U, ord=2)
if not success:
print("Iteration failed to converge!")
print(residual_norm[-10:])
    print(convergence_MG[-10:])
else:
# Plot result
fig = plt.figure()
axes = fig.add_subplot(1, 1, 1)
axes.plot(x, U, 'o', label="Computed")
axes.plot(x_bc, u_true(x_bc), 'k', label="True")
axes.set_title("Solution to $u_{xx} = e^x$")
axes.set_xlabel("x")
axes.set_ylabel("u(x)")
fig = plt.figure()
fig.set_figwidth(fig.get_figwidth() * 2.1)
fig.set_figheight(fig.get_figheight() * 2.2)
axes = fig.add_subplot(2, 2, 1)
axes.semilogy(numpy.arange(1, k + 1), step_size_MG[:k], 'o')
axes.semilogy(numpy.arange(1, k + 1), numpy.ones(k) * delta_x**2, 'r--')
axes.set_title("Step Size")
axes.set_xlabel("Iteration")
axes.set_ylabel("$||U^{(k)} - U^{(k-1)}||_2$")
axes = fig.add_subplot(2, 2, 2)
axes.semilogy(numpy.arange(1, k + 1), convergence_MG[:k], 'o')
axes.semilogy(numpy.arange(1, k + 1), numpy.ones(k) * delta_x**2, 'r--')
axes.set_title("Convergence to True Solution")
axes.set_xlabel("Iteration")
axes.set_ylabel("$||U^{(k)} - U^{(k-1)}||_2$")
axes = fig.add_subplot(2, 2, 3)
axes.semilogy(numpy.arange(1, k + 1), residual_norm[:k], 'o')
axes.semilogy(numpy.arange(1, k + 1), numpy.ones(k) * delta_x**2, 'r--')
axes.set_title("Residual")
axes.set_xlabel("Iteration")
axes.set_ylabel("$||r||_2$")
axes = fig.add_subplot(2, 2, 4)
axes.semilogy(numpy.arange(1, k + 1), system_convergence[:k], 'o')
axes.semilogy(numpy.arange(1, k + 1), numpy.ones(k) * delta_x**2, 'r--')
axes.set_title("True Residual")
axes.set_xlabel("Iteration")
axes.set_ylabel("$||U^* - U||_2$")
    print("Matrix condition number = %s" % numpy.linalg.cond(A))
plt.show()
# +
# Slightly harder problem
L = 1.0
m = 512
x_bc = numpy.linspace(0, L, m + 2)
x = x_bc[1:-1]
delta_x = L / (m + 1)
# Symmetric 1D Laplacian from second-order centered differences
A = (numpy.diag(2.0 * numpy.ones(m))
     - numpy.diag(numpy.ones(m - 1), 1)
     - numpy.diag(numpy.ones(m - 1), -1)) / delta_x**2
b = numpy.tan(numpy.arange(m) / (2 * numpy.pi) * 10)
U_true = numpy.linalg.solve(A, b)
U = numpy.random.random(m)
MAX_CYCLES = 50
tolerance = 1e-10
for k in range(MAX_CYCLES):
r = b - numpy.dot(A, U)
fig = plt.figure()
fig.set_figwidth(fig.get_figwidth() * 2)
axes = fig.add_subplot(1, 2, 1)
axes.plot(U)
axes.set_title("Solution at iteration %s" % k)
axes = fig.add_subplot(1, 2, 2)
axes.plot(r)
axes.set_title("Residual at iteration %s" % k)
if numpy.linalg.norm(r) < tolerance:
break
U += vcycle(A, r)
# + [markdown] slideshow={"slide_type": "subslide"}
# ### Work Estimates
#
# So how much better is multigrid?
# + [markdown] slideshow={"slide_type": "subslide"}
# As a concrete example consider again the similar example from before where $m = 2^8 - 1 = 255$ and using $n = 3$. If we allow recursive coarsening down to 3 grid points (7 levels of grids). On each level we apply 3 iterations of Jacobi. If we apply these Jacobi iterations on the way "down" the V-cycle and on the way up (not necessary in theory) we would do 6 iterations of Jacobi per level. This leads to a total of 42 Jacobi iterations on variously coarsened grids. The total number of updated values then would be
# $$
# 6 \sum^8_{j=2} 2^j \approx 3072.
# $$
# This is about the same amount of work as 12 iterations on the original fine grid would take. The big difference here is that, due to the sweeping, we have in fact damped out error frequencies in a much larger range than would have been accomplished simply by Jacobi iterations.
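A quick sanity check of this tally (a sketch only; the grid sizes are approximated by $2^j$ for $j = 2, \dots, 8$ and 6 Jacobi sweeps per level, as in the text):

```python
# Total values updated by 6 Jacobi sweeps on each grid of the V-cycle,
# with grid sizes approximated by 2^j for j = 2..8.
sweeps_per_level = 6
total_updates = sweeps_per_level * sum(2**j for j in range(2, 9))
fine_grid_size = 2**8 - 1  # m = 255

print(total_updates)                   # roughly 3072
print(total_updates / fine_grid_size)  # about 12 fine-grid sweeps
```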
# + [markdown] slideshow={"slide_type": "subslide"}
# Now consider the more general case with $m + 1 = 2^J$ points recursing all the way down to one point (about) and taking $n$ iterations of Jacobi at each level. The total work would be
# $$
# 2 n \sum^J_{j=2}2^j \approx 4 n 2^J \approx 4 n m = \mathcal{O}(m)
# $$
# assuming $n \ll m$. The work for one V-cycle, even though the number of grids grows as $\log_2 m$, is therefore still $\mathcal{O}(m)$. It can also be shown for the Poisson problem that with this simple approach the number of V-cycles needed to reach an error level determined by the original $\Delta x$ grows like $\log m$, so the total work is $\mathcal{O}(m \log m)$.
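The linear scaling of a single V-cycle can be checked numerically; the ratio of the geometric-sum work estimate to $4 n m$ approaches 1 as $J$ grows (same modeling assumptions as the formula above):

```python
# Work for one V-cycle with n Jacobi sweeps per level on grids of size
# ~2^j, j = 2..J, compared against the 4*n*m estimate with m = 2^J.
n = 3
for J in (6, 10, 14, 18):
    work = 2 * n * sum(2**j for j in range(2, J + 1))
    m = 2**J
    print(J, work / (4 * n * m))  # ratio tends to 1 from below
```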
# + [markdown] slideshow={"slide_type": "subslide"}
# We can of course play with all sorts of types of V-cycles and iteration counts and these variations lead to a multitude of approaches.
# + [markdown] slideshow={"slide_type": "subslide"}
# ### Full Multigrid
#
# Instead of starting and solving the original PDE at the finest level we can also start at the coarsest. To do this we do a few iterations on the coarsest level or solve the problem directly since the cost should be low at the coarsest level. We then interpolate to the next finer level and solve the problem there. We can then cycle back down to the coarsest level or continue upwards until we get to the finest level where we wanted to be in the first place. We then switch back to solving for the error rather than the original problem and proceed as before. This approach is usually labeled as *full multigrid* (FMG).
# + [markdown] slideshow={"slide_type": "subslide"}
# 
# + [markdown] slideshow={"slide_type": "subslide"}
# It turns out that this "startup" phase is mostly negligible and greatly reduces the error by the time we reach the finest level. FMG takes about $\mathcal{O}(m)$ work, which is optimal given that we have $m$ unknowns to solve for in the one-dimensional problem.
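The FMG traversal sketched in the figure can be written down as a schedule of grid levels (level 0 = coarsest). This is a structural sketch only; using exactly one V-cycle after interpolating to each new finer level is an assumption, not the only choice:

```python
def fmg_schedule(n_levels):
    """Return the sequence of grid levels visited by full multigrid.

    Starts on the coarsest level (0); after interpolating to each finer
    level, performs one V-cycle that dips back down to the coarsest grid.
    """
    schedule = [0]  # solve (or iterate) on the coarsest grid first
    for top in range(1, n_levels):
        schedule.extend(range(top, -1, -1))  # down-leg of the V-cycle
        schedule.extend(range(1, top + 1))   # up-leg of the V-cycle
    return schedule

print(fmg_schedule(3))
```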
# + [markdown] slideshow={"slide_type": "subslide"}
# ### Take-Away
# + [markdown] slideshow={"slide_type": "subslide"}
# We have been discussing one-dimensional implementations of multigrid but these methods extend to higher-dimensional problems and continue to be optimal. For instance a two-dimensional Poisson problem can be solved in $\mathcal{O}(m^2)$ work, again optimal due to the number of unknowns. A Fourier transform approach would require $\mathcal{O}(m^2 \log m)$ and a direct method $\mathcal{O}(m^3)$.
# + [markdown] slideshow={"slide_type": "subslide"}
# This all being said, multigrid is hard. There are at least as many ways to do it as there are problems to be solved. Luckily, research has determined methods that are optimal (in some sense) for a number of problems and discretizations. There are also variations that include more complex ways to "coarsen" the error and residual; in general these are called *algebraic multigrid* (AMG) methods. They are especially useful when it is not obvious how to coarsen the residual problem.
# + [markdown] slideshow={"slide_type": "subslide"}
# ### Further Reading
# 1. <NAME>, <NAME>, and <NAME>. *A Multigrid Tutorial, 2nd ed.* SIAM, Philadelphia, 2000.
# 1. <NAME>. Multigrid methods for partial differential equations. In *Studies in Numerical Analysis*, <NAME>, ed. MAA Studies in Mathematics, Vol. 24, 1984, pages 270-317.
# 1. <NAME>. *Multigrid Methods and Applications.* Springer-Verlag, Berlin, 1985.
# 1. <NAME>. *An Introduction to Multigrid Methods*. John Wiley, New York, 1992.
|
06_iterative.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="fXTUvpnu1s7M"
# # Mount Drive
# + colab={"base_uri": "https://localhost:8080/"} id="4sZBzFjuvBIy" executionInfo={"status": "ok", "timestamp": 1605946093101, "user_tz": -480, "elapsed": 49887, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GgKwZnZ2dHsOvtWnGFQksMtsfZc2QK_Fv86edRayg=s64", "userId": "07556015887030206568"}} outputId="b6ef506b-7e91-41bb-adea-00bdb8b9303c"
from google.colab import drive
drive.mount('/content/drive')
# + [markdown] id="79yT5i6O1u6G"
# # Install Weights and Biases
# + id="OCL7yPnewQeu" colab={"base_uri": "https://localhost:8080/"} executionInfo={"status": "ok", "timestamp": 1605946149018, "user_tz": -480, "elapsed": 9323, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GgKwZnZ2dHsOvtWnGFQksMtsfZc2QK_Fv86edRayg=s64", "userId": "07556015887030206568"}} outputId="2285c092-2826-48e8-b606-a4981b2ad7d9"
# !pip install wandb -qqq
# + [markdown] id="VNl9SD_O1Kw0"
# # Configure wandb sweep
# + id="FD_3XJB30kUs" executionInfo={"status": "ok", "timestamp": 1605946158615, "user_tz": -480, "elapsed": 2186, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GgKwZnZ2dHsOvtWnGFQksMtsfZc2QK_Fv86edRayg=s64", "userId": "07556015887030206568"}}
from wandb.keras import WandbCallback
import wandb
# + id="ikEXqD_80cbR" executionInfo={"status": "ok", "timestamp": 1605946184122, "user_tz": -480, "elapsed": 637, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GgKwZnZ2dHsOvtWnGFQksMtsfZc2QK_Fv86edRayg=s64", "userId": "07556015887030206568"}}
# Configure the sweep
sweep_config = {
'method': 'grid',
'name' : 'Neuron & LR Sweep - Batch Size 64',
'parameters': {
'learning_rate': {
'values': [5e-4, 1e-5]
},
'fc1_num_neurons': {
'values': [512, 1024]
},
'fc3_num_neurons': {
'values': [256, 512]
},
}
}
# + id="4Qsu8cjewUiw" colab={"base_uri": "https://localhost:8080/", "height": 124} executionInfo={"status": "ok", "timestamp": 1605946202558, "user_tz": -480, "elapsed": 15193, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GgKwZnZ2dHsOvtWnGFQksMtsfZc2QK_Fv86edRayg=s64", "userId": "07556015887030206568"}} outputId="bec47a4d-35f1-43bc-afc5-0440fc99c94a"
WANDB_API_KEY='<KEY>'
sweep_id = wandb.sweep(sweep_config, project="nn_project_tuning")
# + [markdown] id="zhX0YFHo1pSY"
# # Model
# + id="GcmgY4WXyiu1" executionInfo={"status": "ok", "timestamp": 1605946654028, "user_tz": -480, "elapsed": 647, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GgKwZnZ2dHsOvtWnGFQksMtsfZc2QK_Fv86edRayg=s64", "userId": "07556015887030206568"}}
from sklearn.model_selection import train_test_split
from tensorflow.keras.preprocessing.image import ImageDataGenerator
import numpy as np
import pandas as pd
import tensorflow as tf
# Load dataset as dataframe
base_path = '/content/drive/My Drive/Y4 Sem 1/NN Project/'
df = pd.read_csv('/content/drive/My Drive/Y4 Sem 1/NN Project/aligned_gender.txt', sep='\t')
df['datadir'] = base_path + df['datadir'].astype(str)
# Train test split
train_val_df, test_df = train_test_split(df, test_size=0.2)
train_df, val_df = train_test_split(train_val_df, test_size=0.2)
# function to call for hyperparameter sweep
def train():
# Fix seed
seed = 7
np.random.seed(seed)
tf.random.set_seed(seed)
# default hyperparameters
defaults = {
'epochs': 20,
'batch_size': 64,
'learning_rate': 1e-5,
'weight_decay': 0.01,
'fc1_num_neurons': 1024,
'fc2_num_neurons': 512,
'fc3_num_neurons': 256,
'dropout1_rate': 0,
'dropout2_rate': 0,
'dropout3_rate': 0,
'seed': seed,
'optimizer': 'adam',
'hidden_activation': 'relu',
'output_activation': 'sigmoid',
'loss_function': 'binary_crossentropy',
'metrics': ['accuracy'],
}
wandb.init(
config=defaults,
name='Hyperparameter Tuning',
project='nn_project',
)
config = wandb.config
# Define model
mobile_net_v2 = tf.keras.applications.MobileNetV2(
include_top=False,
pooling='avg',
weights='imagenet',
input_shape=(224,224,3),
)
fc1 = tf.keras.layers.Dense(
config.fc1_num_neurons,
activation=config.hidden_activation,
kernel_regularizer=tf.keras.regularizers.L2(l2=config.weight_decay),
)
fc2 = tf.keras.layers.Dense(
config.fc2_num_neurons,
activation=config.hidden_activation,
kernel_regularizer=tf.keras.regularizers.L2(l2=config.weight_decay),
)
fc3 = tf.keras.layers.Dense(
config.fc3_num_neurons,
activation=config.hidden_activation,
kernel_regularizer=tf.keras.regularizers.L2(l2=config.weight_decay),
)
bn1 = tf.keras.layers.BatchNormalization()
bn2 = tf.keras.layers.BatchNormalization()
bn3 = tf.keras.layers.BatchNormalization()
bn4 = tf.keras.layers.BatchNormalization()
dropout1 = tf.keras.layers.Dropout(rate=config.dropout1_rate)
dropout2 = tf.keras.layers.Dropout(rate=config.dropout2_rate)
dropout3 = tf.keras.layers.Dropout(rate=config.dropout3_rate)
model = tf.keras.models.Sequential([
mobile_net_v2,
tf.keras.layers.Flatten(),
bn1,
fc1,
dropout1,
bn2,
fc2,
dropout2,
bn3,
fc3,
dropout3,
bn4,
tf.keras.layers.Dense(1, activation=config.output_activation),
])
# Compile model
model.compile(
optimizer=tf.keras.optimizers.Adam(learning_rate=config.learning_rate),
loss=config.loss_function,
metrics=config.metrics,
)
# Load images into keras image generator
datagen_train = ImageDataGenerator(
preprocessing_function=tf.keras.applications.mobilenet_v2.preprocess_input,
)
datagen_val = ImageDataGenerator(
preprocessing_function=tf.keras.applications.mobilenet_v2.preprocess_input,
)
datagen_test = ImageDataGenerator(
preprocessing_function=tf.keras.applications.mobilenet_v2.preprocess_input,
)
train_generator = datagen_train.flow_from_dataframe(
dataframe=train_df,
x_col='datadir',
y_col='gender',
batch_size=config.batch_size,
seed=seed,
shuffle=True,
class_mode='raw',
target_size=(224,224),
)
    val_generator = datagen_val.flow_from_dataframe(
dataframe=val_df,
x_col='datadir',
y_col='gender',
batch_size=config.batch_size,
seed=seed,
shuffle=True,
class_mode='raw',
target_size=(224,224),
)
test_generator = datagen_test.flow_from_dataframe(
dataframe=test_df,
x_col='datadir',
y_col='gender',
batch_size=config.batch_size,
seed=seed,
shuffle=True,
class_mode='raw',
target_size=(224,224),
)
model.fit(
train_generator,
validation_data=val_generator,
epochs=config.epochs,
callbacks=[WandbCallback()],
)
# + [markdown] id="QhGQHVTK1nK4"
# # Run agent
# + id="xL5HtKpqbjTi" colab={"base_uri": "https://localhost:8080/", "height": 1000, "referenced_widgets": ["d21d26f3fe4a467c8a35bda2d134aa9a", "3047e427b4c244bfb7be2d1029a06d67", "9271e306517143d8a67b51a9facf9a7b", "5b2a896a780b487897271b8559fb4f7c", "<KEY>", "3e0db7555df54a439d6ff834e618459f", "3b978cb635664206a5bc989404cf8468", "2e20fe98ec1a49829b512630794f9865", "<KEY>", "<KEY>", "<KEY>", "<KEY>", "e897b072a4114df2a7e8f0f1b3fff3f5", "d0757315fe2e4fa681badb50cf02081f", "8a68be6c89c94206a8098349a30b95f5", "87aa137d2b31409f9ef9a6b95b31e61a", "<KEY>", "d14b4875400e487e8b83594f6cec1baf", "<KEY>", "47851e20ea0240d296c7eec862d098c6", "<KEY>", "b99ab1e7051d4a0287b24fc07d2774b4", "<KEY>", "15fae409ffc647ceab6d412740d96405", "72d00236aa5d4f998f1800446c500a25", "<KEY>", "<KEY>", "bee14eddeab241d39fa1f0c951f159fa", "<KEY>", "<KEY>", "<KEY>", "<KEY>", "cee15145af834ce3b6ebefd748a59af1", "<KEY>", "<KEY>", "29b2e16e5212454f8889316b858eb88c", "<KEY>", "<KEY>", "64cda7bb89314d31b89a2d31966e862e", "ffa621fe184b4b45a038c716670de249"]} executionInfo={"status": "ok", "timestamp": 1605962182541, "user_tz": -480, "elapsed": 15525674, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GgKwZnZ2dHsOvtWnGFQksMtsfZc2QK_Fv86edRayg=s64", "userId": "07556015887030206568"}} outputId="0927048d-27bb-478b-b8a0-617aa4f8eaf7"
wandb.agent(sweep_id, train)
# + id="z5q0b5pe35mu"
|
project/7_hyperparameter_tuning/hyperparams_tuning_batch64.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # HDF5 + Python
#
# Let's say you have a giant array that you need to work with. So giant, in fact, that it is bigger than your computer's temporary memory. What can we do?
#
# ## HDF5 Background
#
# HDF5 (Hierarchical Data Format) is a file format that can handle your data, no matter how big your data are. There are packages for working with HDF5 files in most languages. HDF5 [has been suggested](https://arxiv.org/abs/1411.0507) as a possible replacement for FITS files in the astronomy community, but there's enough inertia that it's probable that this is the first time you'll hear about HDF5.
#
# The Python package for working with HDF5 files is `h5py` ([docs](http://docs.h5py.org/en/latest/index.html)).
#
# ## First example
#
# Let's say you have a 3D numpy array that you want to store. Let's make an array with shape `(10, 10, 10)` to start:
# + jupyter={"outputs_hidden": false}
import numpy as np
np.random.seed(2019)
data = np.random.randn(1000).reshape((10, 10, 10))
# -
# You could save this in a FITS file, you could save this as a numpy pickle file. We're going to save it with HDF5. Starting a new file with HDF5 is a little cumbersome, but let's go for it:
# + jupyter={"outputs_hidden": false}
import h5py # This is the python interface to HDF5
f = h5py.File('data.hdf5', 'w') # 'w' indicates we are creating this file
# -
# An HDF5 file contains _datasets_. A dataset is an array with one data type. Let's create one for `data`.
# + jupyter={"outputs_hidden": false}
dataset = f.create_dataset('dataset', shape=data.shape, dtype=data.dtype)
dataset
# -
# To fill a dataset with data, you need to take a slice of the dataset:
dataset[:] = data
# Now `dataset` can be sliced like a numpy array (with some caveats), and the arrays returned will be numpy arrays. The trick going on behind the scenes is that `h5py` only reads the parts of the data cube that your slice asks for. This is powerful because your data cube could contain 10 GB of information on a machine with 2 GB of RAM, but if you access only `dataset[0, 0, :]`, just that slice is read from disk into memory.
#
# Let's do a quick computation on a slice of the dataset:
# + jupyter={"outputs_hidden": false}
image = dataset[0, :, :]
image.mean()
# + jupyter={"outputs_hidden": false}
type(image)
# -
# Note: for datasets with high dimensions, you can use the ellipsis (`...`) syntax for specifying "give me all of the values from all of the remaining dimensions", like this:
# +
# This:
image = dataset[0, :, :]
# is equivalent to this:
image = dataset[0, ...]
# -
# When we're done with a file, we should close the file stream:
f.close()
# # Big(ger) Data Example
#
# The above example used a small dataset to demonstrate the functionality of `h5py` without taking up all of your memory. If you have plenty of disk space (0.6 GB), you may proceed with caution to the example below.
#
# **Warning**: you should only do the example below if you have sufficient disk space to store a 0.6 GB file.
#
# We will create a really big dataset, and use one of the [HDF5 compression types](http://docs.h5py.org/en/latest/high/dataset.html#lossless-compression-filters) to store it on your machine succinctly.
# + jupyter={"outputs_hidden": false}
sufficient_disk_space = False
if sufficient_disk_space:
f = h5py.File('big_data.hdf5', 'w')
# Set image dimensions, number of images
image_dimensions = (2048, 2048)
number_images = 20
    # Create a dataset, and specify that it will be compressed with the LZF algorithm
    images = f.create_dataset('images', dtype=np.float64, compression='lzf',
                              shape=(number_images, image_dimensions[0], image_dimensions[1]))
# Create random images, and add them to the HDF5 archive
for i in range(number_images):
n_pixels = image_dimensions[0] * image_dimensions[1]
random_image = np.random.randn(n_pixels).reshape(image_dimensions)
images[i, ...] = random_image
# Close the file stream
f.close()
# -
# If we do the math, this is how large we'd expect that file to be without compression:
# + jupyter={"outputs_hidden": false}
if sufficient_disk_space:
print(number_images * image_dimensions[0] * image_dimensions[1] * 64 / 8 / 1e9, "GB")
# -
# But the actual size of the file is:
# + jupyter={"outputs_hidden": false} language="bash"
# ls -lh big_data.hdf5
# -
# The `LZF` compression algorithm is very fast, but not the most compact (try `gzip` for an alternative which is slower but compresses the file a bit more).
#
# Now let's access the data in that HDF5 archive:
# + jupyter={"outputs_hidden": false}
# %matplotlib inline
import matplotlib.pyplot as plt
if sufficient_disk_space:
f = h5py.File('big_data.hdf5', 'r')
# access the images dataset
images = f['images']
# Get the first image from the dataset:
first_image = images[0, ...]
plt.imshow(first_image)
plt.show()
# + jupyter={"outputs_hidden": false}
images
# -
# Now notice - we didn't have to wait for the whole file to load into memory, and we didn't have to specify the compression to decompress it - we just asked for the slice of data that we wanted, and we got it! That's because the `images` variable is not holding the whole numpy array, it's holding the `h5py` HDF5 dataset object, which behaves like a numpy array without actually being one.
# + jupyter={"outputs_hidden": false}
if sufficient_disk_space:
print(images)
f.close()
# -
# ### Exercise:
#
# Open your operating system's activity monitor, and look at how much memory is being used currently.
#
# Now in the cell below:
# 1. Open the HDF5 archive `big_data` that we created earlier.
# 2. Write a loop that measures the standard deviation of the values in each image
# 3. Check that the whole data cube isn't being accessed using your system's activity monitor
|
2019-bern/11-Extras/intro_to_hdf5.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import glob
import numpy as np
from cartesian import *
from lsq_transform import *
from pivot_calibration import *
import copy
runtype = 0
run = 0
data_dir = "DATA/"
data_type = "debug"
letter = "g"
output = 1
# +
optpivot = glob.glob(data_dir + 'pa1-' + data_type + '-' + letter + '-optpivot.txt')
print(optpivot)
optpivot_file = open(optpivot[0], "r")
optpivot_lines = optpivot_file.read().splitlines()
optpivot_data = []
for num in range(len(optpivot_lines)):
optpivot_data.append( optpivot_lines[num].split(',') )
for i in range(len(optpivot_data[-1])):
optpivot_data[-1][i] = optpivot_data[-1][i].strip()
Nd = int(optpivot_data[0][0])
Nh = int(optpivot_data[0][1])
Nframes = int(optpivot_data[0][2])
optpivot_data = np.asarray(optpivot_data[1:]).astype(float)
# Use the first “frame” of pivot calibration data to define a local “probe” coordinate system
D = optpivot_data[ : Nd , : ]
D0 = np.sum(D, axis=0)/Nd
d = D - D0
frame_size = Nd + Nh
Nd_frame = np.zeros((1, 3))
Nh_frame = np.zeros((1, 3))
for i in range(Nframes):
start = i * frame_size
end = start + frame_size
Nd_frame = np.concatenate((Nd_frame, copy.deepcopy(optpivot_data[start: start + Nd, :])), 0)
Nh_frame = np.concatenate((Nh_frame, copy.deepcopy(optpivot_data[start + Nd: end, :])), 0)
Nd_frame = np.asarray(Nd_frame[1:]).astype(float)
Nh_frame = np.asarray(Nh_frame[1:]).astype(float)
calbody = glob.glob(data_dir + 'pa1-' + data_type + '-' + letter + '-calbody.txt')
calbody_file = open(calbody[0], "r")
calbody_lines = calbody_file.read().splitlines()
calbody_data = []
for num in range(len(calbody_lines)):
calbody_data.append( calbody_lines[num].split(',') )
for i in range(len(calbody_data[-1])):
calbody_data[-1][i] = calbody_data[-1][i].strip() # to remove space and tabs
Nd = int(calbody_data[0][0])
Na = int(calbody_data[0][1])
Nc = int(calbody_data[0][2])
calbody_data = np.asarray(calbody_data[1:]).astype(float)
c_expected = np.zeros(1)
d_i = calbody_data[0: Nd, :]
a_i = calbody_data[Nd: Nd + Na, :]
c_i = calbody_data[Nd + Na:, :]
D_i = Nd_frame[0: Nd, :]
# print(optpivot_data[0:Nd, :])
F_d = np.zeros((1, 4, 4))
# print(D_i)
# print(d_i)
for i in range(Nframes):
D_i = Nd_frame[i * Nd : (i + 1) * Nd, :]
F_d_i = get_lsq_transform(D_i, d_i)
F_d = np.concatenate((F_d, np.expand_dims(F_d_i, 0)), 0)
# print(F_d_i)
F_d = np.asarray(F_d[1:]).astype(float)
data = np.zeros((1,3))
for i in range(Nframes):
data = np.vstack( ( data , points_transform(F_d[i] , Nh_frame[i*Nh : (i+1)*Nh ,:] ) ) )
data = data[1:]
H = data[ : Nh , : ]
H0 = np.sum(H, axis=0)/Nh
h = H - H0
Rotation = np.zeros((3,3))
Translation = np.zeros((3,1))
Transform = []
for i in range(Nframes):
H = data[ i*Nh : (i+1)*Nh , : ]
F = get_lsq_transform( h , H )
Transform.append(F)
rotation = F[0:3,0:3]
#print( rotation.shape )
Rotation = np.vstack( ( Rotation , rotation ) )
#Translation .append()
# need to save all those data and applied hw3 Q1C
translation = F[ 0:3 , 3 ]
Translation = np.vstack( ( Translation , translation.reshape(3,1) ) )
Transform = np.asarray( Transform ).astype(float)
Rotation = Rotation[3:]
Translation = Translation[3:]
ans = pivot_calibration( Rotation , Translation )
if(output == 1):
print(data_dir + 'pa1-' + letter + '-' + data_type)
print("optical_pivot: " , ans[3][0]," ", ans[4][0]," ",ans[5][0] )
# -
|
CIS1_PA1/6.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: 'Python 3.8.8 64-bit (''base'': conda)'
# name: python3
# ---
import torch
from PIL import Image
import torchvision
import os
import wandb
# +
cwd = str(os.getcwd())
print(cwd)
# +
train_set = torchvision.datasets.CIFAR10(root=cwd, train=True, download=True, transform=torchvision.transforms.ToTensor())
test_set = torchvision.datasets.CIFAR10(root=cwd, train=False, download=True, transform=torchvision.transforms.ToTensor())
train_loader = torch.utils.data.DataLoader(train_set, batch_size=64, shuffle=True)
test_loader = torch.utils.data.DataLoader(test_set, batch_size=64, shuffle=True)
# -
def calculate_accuracy(y_pred, y):
top_pred = y_pred.argmax(1, keepdim = True)
correct = top_pred.eq(y.view_as(top_pred)).sum()
acc = correct.float() / y.shape[0]
return acc
a = torch.tensor([[0.8,0.1,0.1],
[0.1,0.8,0.1],
[0.1,0.8,0.1]])
b = torch.tensor([0,
2,
2])
calculate_accuracy(a,b)
import matplotlib.pyplot as plt
import numpy as np
# %matplotlib
for i in range(10, 20):
    plt.figure()
    plt.imshow(np.asarray(train_set[i][0].permute(1, 2, 0)))
    plt.show()
|
train.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ### Exercise 1
#
# The gas fraction, the gas-liquid mixture rate, and the viscosity and density of the gas, oil, water, and mixture at the wellhead conditions Pбуф, Tбуф were calculated with the McCain and Standing correlations. The gas fraction obtained with the McCain correlation turned out to be almost twice the one obtained with the Standing correlation, and the mixture rate is slightly higher with McCain as well. Gas viscosity, gas density, water viscosity, and water density are identical for the two correlations. Oil viscosity and mixture viscosity are almost twice as low with the McCain correlation, and oil density is also slightly lower, while the mixture density with McCain turned out to be slightly higher than with Standing.
#
# These results can be explained by the fact that the McCain correlation was originally built on data from gas and gas-condensate fields, so it should predict a higher gas fraction than the Standing correlation, which was derived from California light oils containing almost no non-hydrocarbon components. The same reasoning applies to oil viscosity and mixture viscosity: they should be lower with McCain, since that correlation is aimed at the PVT properties of lighter gas-condensate fractions. The mixture rate should also be higher with McCain because of the larger gas fraction. Water viscosity, water density, gas viscosity, and gas density should coincide, because the water and gas components of the hydrocarbon mixtures underlying the Standing and McCain correlations should not differ much from each other, so both correlations should give close or even identical results for them. Finally, the mixture density with McCain is slightly higher because the mixture rate is larger while the oil density is lower, so the water in the mixture (whose density equals the Standing value) has a stronger influence on the density of the whole mixture.
# ### Exercise 2
#
# The computed bottomhole pressure is 145.43 atm. The well does not flow naturally: as the mutual position of the IPR and VLP curves shows, at the rate from the operating regime the VLP curve lies above the IPR curve, i.e. there is a pressure deficit, so the well cannot flow at that rate.
#
# A VLP curve was also built over the range 0-500 m3/day. It first decreases and then, after passing a local minimum, changes monotonicity and starts to increase. The decreasing branch is caused by gas liberated from the oil in the well doing work as it expands (natural gas lift). At sufficiently large rates, friction losses begin to dominate over the natural gas lift and the curve switches to increasing, i.e. the difference between the computed bottomhole pressure and the line pressure grows: a larger pressure drop is required to sustain the given liquid rate.
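The local minimum that separates the two branches of a VLP curve can be located numerically from tabulated (q, p_wf) pairs. The values below are illustrative placeholders, not the report's data:

```python
import numpy as np

# Illustrative VLP samples: liquid rate (m3/day) vs required bottomhole
# pressure (atm): a decreasing branch (natural gas lift), then an
# increasing branch (friction-dominated).
q = np.array([0.0, 40.0, 80.0, 120.0, 160.0, 200.0, 240.0])
p_wf = np.array([292.4, 270.4, 264.7, 268.0, 279.0, 294.8, 309.4])

i_min = int(np.argmin(p_wf))
print(f"VLP minimum near q = {q[i_min]:.0f} m3/day, p_wf = {p_wf[i_min]:.1f} atm")
```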
# +
import json
Q =[0,
10,
19.15,
20,
30,
40,
50,
60,
70,
80,
90,
100,
110,
120,
130,
140,
150,
160,
170,
180,
190,
200,
210,
220,
230,
240,
250,
260,
270,
280,
290,
300,
310,
320,
330,
340,
350,
360,
370,
380,
390,
400,
410,
420,
430,
440,
450,
460,
470,
480,
490,
500,
]
P_wh =[292.3868359,
284.1794192,
278.5972694,
278.1874021,
273.703115,
270.3635581,
267.8974919,
266.1876661,
265.1255231,
264.6532441,
264.7189031,
265.3014599,
266.3853935,
267.9528739,
269.9987753,
272.5255129,
275.5225511,
278.9878899,
282.909783,
287.2375105,
291.0122297,
294.7634183,
298.4705548,
302.1516748,
305.7724686,
309.3716521,
312.9503318,
316.4824087,
319.9835835,
323.4716853,
326.9121497,
330.325925,
333.7222656,
337.0727292,
340.3812829,
343.6685374,
346.9391684,
350.1934198,
353.4316647,
356.6558045,
359.8681073,
363.070288,
366.2608373,
369.4432624,
372.61947,
375.7876183,
378.9493519,
382.107758,
385.2622799,
388.4125492,
391.5605596,
394.7053954,
]
data = {
"ex1": {"gas_fraction_standing":0.306,
"q_mix_standing":25.559,
"gas_viscosity_standing":0.012,
"gas_density_standing":59.579,
"water_viscosity_standing":1.052,
"water_density_standing":1008.080,
"oil_viscosity_standing":2.747,
"oil_density_standing":746.293,
"mixture_viscosity_standing":1.129,
"mixture_density_standing":490.339,
"gas_fraction_mccain":0.502,
"q_mix_mccain":25.979,
"gas_viscosity_mccain":0.012,
"gas_density_mccain":59.580,
"water_viscosity_mccain":1.052,
"water_density_mccain":1008.080,
"oil_viscosity_mccain":1.392,
"oil_density_mccain":716.612,
"mixture_viscosity_mccain":0.751,
"mixture_density_mccain":495.630
},
"ex2": {
"p_wf":145.43,"q_liq_vlp":Q,"p_wf_vlp":P_wh
}
}
with open(r"C:\Users\Компьютер\course_sppu_2021\task_2_Kalyuzhin_AS\json_answer.json", "w") as write_file:
json.dump(data, write_file)
# -
|
task 2/Kalyuzhin_AS/Exercises_Kalyuzhin_AS-checkpoint.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # 1.2.0 Heatmap Papers
# Want to prioritize words that are genes
import pandas as pd
import requests
from glob import glob
import json
from copy import deepcopy
from clustergrammer2 import net
all_files = glob('../markdown_files/*.md')
len(all_files)
# ### Load Altmetric Data
dict_altmetric = net.load_json_to_dict('../altmetric_data/altmetric_scores.json')
# ### Load Google Sheet Data
google_sheet_url = 'https://docs.google.com/spreadsheets/d/e/2PACX-1vRngfhDKqZUEhHuQY60n3Bh76gkMQKeOq6D7UYkSgt0KPP7rcCTE-PjMeWO1g1YlGVhBTAMJS6rn-pc/pub?gid=0&single=true&output=tsv'
r = requests.get(google_sheet_url)
import sys
if sys.version_info[0] < 3:
from StringIO import StringIO
else:
from io import StringIO
TESTDATA = StringIO(r.text)
df = pd.read_csv(TESTDATA, sep="\t", index_col=0)
df.index.name = None
# ### Download Latest Preprints
url = 'https://connect.biorxiv.org/relate/collection_json.php?grp=181'
r = requests.get(url)
req_dict = json.loads(r.text)
stop_words = ["i","me","my","myself","we","us","our","ours","ourselves","you","your","yours","yourself","yourselves","he","him","his","himself","she","her","hers","herself","it","its","itself","they","them","their","theirs","themselves","what","which","who","whom","whose","this","that","these","those","am","is","are","was","were","be","been","being","have","has","had","having","do","does","did","doing","will","would","should","can","could","ought","i'm","you're","he's","she's","it's","we're","they're","i've","you've","we've","they've","i'd","you'd","he'd","she'd","we'd","they'd","i'll","you'll","he'll","she'll","we'll","they'll","isn't","aren't","wasn't","weren't","hasn't","haven't","hadn't","doesn't","don't","didn't","won't","wouldn't","shan't","shouldn't","can't","cannot","couldn't","mustn't","let's","that's","who's","what's","here's","there's","when's","where's","why's","how's","a","an","the","and","but","if","or","because","as","until","while","of","at","by","for","with","about","against","between","into","through","during","before","after","above","below","to","from","up","upon","down","in","out","on","off","over","under","again","further","then","once","here","there","when","where","why","how","all","any","both","each","few","more","most","other","some","such","no","nor","not","only","own","same","so","than","too","very","say","says","said","shall","2019","novel","patients","using","may","2019-ncov","2020"]
stop_words.extend(['2020,', 'conclusions', 'characteristics'])
stop_words.extend(['=', '1', '2', '3', '4', '5', '6', '7', '8', '9'])
doi_words = {}
all_words = []
doi_titles = {}
doi_site = {}
arr_papers = req_dict['rels']
for inst_paper in arr_papers:
# get words from abstract
inst_words = [x.lower().replace(':','').replace(',','').replace('.','')
.replace('(', '').replace(')', '')
.replace('\n','').replace('\t','')
for x in inst_paper['rel_abs'].split()]
# explicit drop words
inst_words = [x for x in inst_words if x not in stop_words]
# drop words that do not contain letters
inst_words = [x for x in inst_words if x.islower()]
# save words to dict
doi_words[inst_paper['rel_doi']] = sorted(list(set(inst_words)))
doi_titles[inst_paper['rel_doi']] = inst_paper['rel_title']
doi_site[inst_paper['rel_doi']] = inst_paper['rel_site']
all_words.extend(inst_words)
ser_titles = pd.Series(doi_titles)
ser_titles.head()
df_meta = pd.DataFrame(ser_titles, columns=['Title'])
df_meta.shape
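The cleaning steps in the abstract-word loop above can be exercised on a toy abstract (a minimal sketch; `toy_stop_words` is a shortened stand-in for the full `stop_words` list defined earlier):

```python
toy_stop_words = {"the", "a", "novel", "2019-ncov"}  # shortened stand-in for stop_words
abstract = "The novel coronavirus (2019-nCoV): a pilot study."
# same punctuation stripping as in the loop above
toy_words = [w.lower().replace(':', '').replace(',', '').replace('.', '')
              .replace('(', '').replace(')', '')
             for w in abstract.split()]
toy_words = [w for w in toy_words if w not in toy_stop_words]  # explicit drop words
toy_words = [w for w in toy_words if w.islower()]              # drop words without letters
# toy_words is now ['coronavirus', 'pilot', 'study']
```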
# ### Add Paper Metadata
inst_paper.keys()
for inst_paper in arr_papers:
inst_doi = inst_paper['rel_doi']
# date
inst_date = inst_paper['rel_date'].split('-')
df_meta.loc[inst_doi, 'date'] = float( inst_date[1] + '.' + inst_date[2])
# altmetric score
if inst_doi in dict_altmetric:
df_meta.loc[inst_doi, 'altmetric'] = dict_altmetric[inst_doi]
else:
        print('altmetric not found for', inst_doi)
        df_meta.loc[inst_doi, 'altmetric'] = 0
ser_count = pd.Series(all_words).value_counts()
ser_count = ser_count[ser_count < len(arr_papers) * 0.75 ][ser_count > 5]
ser_count.shape
ser_count.plot()
top_words = ser_count.index.tolist()[:1000]
all_dois = sorted(list(doi_words.keys()))
len(all_dois)
df_words = pd.DataFrame(0, index=top_words, columns=all_dois)
for inst_doi in all_dois:
inst_words = list(set(doi_words[inst_doi]).intersection(top_words))
df_words.loc[inst_words, inst_doi] = 1
# ### Add Column Categories
cols = df_words.columns.tolist()
grade_dict = {}
for inst_col in cols:
if inst_col in df.index.tolist():
grade_dict[inst_col] = str(df.loc[inst_col, 'Grade'])\
.replace('2-3', '3')\
.replace('1-2', '2').replace('1/2', '2')
else:
grade_dict[inst_col] = 'N.A.'
# +
new_cols = [(df_meta.loc[x, 'Title'][:50],
'Site: ' + doi_site[x],
'Grade: ' + str(grade_dict[x]),
'Date: ' + str(df_meta.loc[x, 'date']),
'Altmetric: ' + str(df_meta.loc[x, 'altmetric']) ) for x in cols]
df_cat = deepcopy(df_words)
df_cat.columns = new_cols
# -
cat_colors = {}
cat_colors['biorxiv'] = 'blue'
cat_colors['red'] = 'red'
cat_colors['N.A.'] = 'white'
cat_colors['1'] = '#FFD700'
cat_colors['2'] = '#FF6347'
cat_colors['3'] = '#add8e6'
net.load_df(df_cat)
net.set_cat_colors(axis='col', cat_index=1, cat_title='Site', cat_colors=cat_colors)
net.set_cat_colors(axis='col', cat_index=2, cat_title='Grade', cat_colors=cat_colors)
net.filter_N_top(inst_rc='row', rank_type='sum', N_top=500)
net.cluster(dist_type='jaccard')
net.widget()
net.save_dict_to_json(net.viz, '../json_files/heatmap_2020-04-05.json')
# ### Words and Reviews
# +
# words_list = []
# for inst_file in all_files:
# f = open(inst_file, 'r')
# lines = f.readlines()
# f.close()
# for inst_line in lines:
# inst_line = inst_line.lower()
# inst_words = inst_line.split(' ')
# inst_words = [x for x in inst_words if '*' not in x]
# words_list.extend(inst_words)
# +
# pd.Series(words_list).value_counts().head(50)
# -
|
notebooks/1.2.0_Heatmap_Papers.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
from sys import path
path.append(r"D:\\SmartCloudFuture")
from SCTK import scada
# -
scada.algo_module
get_pe_ttm('600570', '20170606')  # presumably provided by the SCTK scada module imported above
# +
from pymongo import MongoClient
conn = MongoClient('127.0.0.1', 27017)
db = conn.stockdata
incomedb = db['income']
# -
result = incomedb.find({"code":"600570.XSHG"})
# +
import pandas as pd
df = pd.DataFrame(list(result))
# -
df
df['pubDate'] = pd.to_datetime(df['pubDate'])
incomedf = df[df.pubDate <= '2017-06-06'][-4:]
column_sum = incomedf['net_profit'].sum()
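With the trailing four quarters of net profit summed, PE (TTM) follows from dividing market value by it. A sketch with hypothetical numbers (the real TTM profit is `column_sum` above; the real share count comes from the guben CSV loaded below):

```python
total_shares = 1.21e9    # hypothetical total share count
close_price = 45.0       # hypothetical closing price on the valuation date
net_profit_ttm = 2.3e9   # hypothetical trailing-twelve-month net profit

# PE (TTM) = market cap / trailing-twelve-month net profit
pe_ttm = close_price * total_shares / net_profit_ttm
```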
df = pd.read_csv("c:\\stockdata\\guben\\600570.csv")
df = df.rename(columns={'Unnamed: 0': 'date'})  # assign back: rename is not in-place
|
tushare/compute_pe.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# 
# <center><em>Copyright! This material is protected, please do not copy or distribute. by:<NAME></em></center>
# ***
# <h1 align="center">Udemy course : Python Bootcamp for Data Science 2021 Numpy Pandas & Seaborn</h1>
#
# ***
# ## 7.9 Descriptive Statistics with Dataframe
# First we import pandas library
# + hide_input=false
import pandas as pd
# -
# Here we read a dataframe from a csv file using the function **pd.read_csv()**:
# + hide_input=false
wage = pd.read_csv('data/wage.csv', index_col ='year')
# -
# We can display the head of this dataframe using the function **head()**:
# + hide_input=false
wage.head()
# -
# We can apply descriptive statistics for columns such as the **sum** like this:
# + hide_input=false
wage.sum()
# -
# If we want to calculate row-wise statistics, we use the argument **axis = 'columns'**:
# + hide_input=false
wage.sum(axis ='columns')
# -
# The minimum and the maximum can be calculated like this:
# + hide_input=false
wage.min()
# + hide_input=false
wage.max()
# -
# To display the index label of the maximum and the minimum we use the functions **idxmax()** and **idxmin()**:
# + hide_input=false
wage.idxmax()
# + hide_input=false
wage.idxmin()
# -
# Cumulative sum can be calculated using the function **cumsum()**:
# + hide_input=false
wage.cumsum()
# -
# We can display a group of statistics using the function **describe()**:
# + hide_input=false
wage.describe()
# -
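The same statistics can be checked on a small hand-made frame (a self-contained sketch, not using the course's `wage.csv`):

```python
import pandas as pd

toy = pd.DataFrame({'a': [1, 2, 3], 'b': [4, 5, 6]})
toy.sum()                # column sums: a -> 6, b -> 15
toy.sum(axis='columns')  # row sums: 5, 7, 9
toy.idxmax()             # index label of each column's maximum: 2 for both
toy.cumsum()             # running totals down each column
```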
# ***
#
# <h1 align="center">Thank You</h1>
#
# ***
|
7.9 Descriptive Statistics with Dataframe.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Evaluating Classification Models
# * Learning curve
# * Validation curve
# + colab={} colab_type="code" id="fZItMOZY4jmg"
import numpy as np
import matplotlib.pyplot as plt
# %matplotlib inline
# + colab={} colab_type="code" id="-OQDRwol4jml"
from sklearn.datasets import load_breast_cancer
cancer = load_breast_cancer()
X = cancer.data
y = cancer.target
# + colab={} colab_type="code" id="L8ls7J6e4jmo"
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(
X, y, test_size=0.3, random_state=1, stratify=y)
# + colab={} colab_type="code" id="a-odaTMy4jmv" outputId="4dc61e5c-8194-486e-b434-171444032b41"
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
pipe_lr = make_pipeline(StandardScaler(),
LogisticRegression(solver='liblinear', random_state=1))
pipe_lr.fit(X_train, y_train)
y_pred = pipe_lr.predict(X_test)
print('Test accuracy: %.3f' % pipe_lr.score(X_test, y_test))
# + [markdown] colab_type="text" id="4lDtkV4c4jm7"
# # Evaluating a Model with a Learning Curve
#
# * Uses the learning_curve function.
# * We want to assess how the number of training samples affects performance.
# + colab={} colab_type="code" id="mZ8OoWNl4jm8"
from sklearn.model_selection import learning_curve
pipe_lr = make_pipeline(StandardScaler(),
LogisticRegression(solver='liblinear',
penalty='l2',
random_state=1))
train_sizes, train_scores, test_scores =\
learning_curve(estimator=pipe_lr,
X=X_train,
y=y_train,
train_sizes=np.linspace(0.1, 1.0, 10),
cv=10,
n_jobs=1)
# + colab={} colab_type="code" id="lAsu9bHp4jnA"
train_mean = np.mean(train_scores, axis=1)
test_mean = np.mean(test_scores, axis=1)
# + colab={} colab_type="code" id="CnpneBUM4jnE" outputId="604cf99d-ef90-4033-e0c7-0760190934b6"
plt.plot(train_sizes, train_mean,
color='blue', marker='o',
markersize=5, label='training accuracy')
plt.plot(train_sizes, test_mean,
color='green', linestyle='--',
marker='s', markersize=5,
label='validation accuracy')
plt.grid()
plt.xlabel('Number of training samples')
plt.ylabel('Accuracy')
plt.legend(loc='lower right')
# + [markdown] colab_type="text" id="fZQIZHly4jnI"
# * The train_sizes parameter generates 10 training-set fractions between 0.1 and 1.0.
# * The cv parameter sets up 10-fold CV.
# * Training accuracy and validation accuracy are visualized.
# * The model performs well once the training fraction reaches 0.7 (about 250 samples) or more.
# + [markdown] colab_type="text" id="qPcFGhZa4jnJ"
# # Evaluating a Model with a Validation Curve
#
#
# * We want to evaluate how the model's performance changes when a hyperparameter value is varied.
# * Uses the validation_curve function.
#     * param_name: the name of the hyperparameter to vary
#     * param_range: the range of hyperparameter values to use
#
# + colab={} colab_type="code" id="5fyQ9Cr64jnJ"
from sklearn.model_selection import validation_curve
param_range = [0.001, 0.01, 0.1, 1.0, 10.0, 100.0]
train_scores, test_scores = validation_curve(
estimator=pipe_lr,
X=X_train,
y=y_train,
                param_name='logisticregression__C', # regularization in logistic regression
param_range=param_range,
cv=10)
# + colab={} colab_type="code" id="qYOmHLQH4jnN"
train_mean = np.mean(train_scores, axis=1)
test_mean = np.mean(test_scores, axis=1)
# + colab={} colab_type="code" id="nEtb_sNq4jnQ" outputId="d48f0611-9bf2-4216-db2a-0c5ea183ef85"
plt.plot(param_range, train_mean,
color='blue', marker='o',
markersize=5, label='training accuracy')
plt.plot(param_range, test_mean,
color='green', linestyle='--',
marker='s', markersize=5,
label='validation accuracy')
plt.grid()
plt.xscale('log')
plt.legend(loc='lower right')
plt.xlabel('Parameter C')
plt.ylabel('Accuracy')
plt.ylim([0.9, 1.00])
plt.tight_layout()
plt.show()
# + [markdown] colab_type="text" id="ttz9ucQJ4jnU"
# * The validation_curve function estimates performance via CV by default.
# * The parameter to evaluate is specified inside the validation_curve call.
#     * param_name='logisticregression__C' --> here this means the C parameter of the logisticregression step.
# * As C grows, regularization weakens and the model shows a slight tendency to overfit.
# * In this case a suitable value appears to be 0.01 or 0.1.
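The choice of C can be made explicit by taking the argmax of the mean validation accuracy (a sketch with hypothetical accuracy values standing in for `test_mean`):

```python
import numpy as np

param_range = [0.001, 0.01, 0.1, 1.0, 10.0, 100.0]
mean_val_acc = np.array([0.94, 0.97, 0.97, 0.96, 0.95, 0.94])  # hypothetical accuracies
best_C = param_range[int(np.argmax(mean_val_acc))]  # argmax returns the first maximum
# best_C is 0.01 here
```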
|
05Evaluation_Selection/03LearningValidationCurve.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# +
import torch
import sys
import numpy as np
sys.path.append('../operators/')
from grad_operators import GradOperators
#stack real and imaginary part
x = torch.rand(1,2,160,160,16)
dim=3
GOps = GradOperators(dim)
#apply G to x, then G^H
Gx = GOps.apply_G(x)
GHGx = GOps.apply_GH(Gx)
# -
print(Gx.shape)
print(GHGx.shape)
# +
#check whether G^H is the adjoint of G
u = torch.rand(x.shape)
v = torch.rand(Gx.shape)
Gu = GOps.apply_G(u)
GHv = GOps.apply_GH(v)
print((Gu * v).sum())
print((u * GHv).sum())
# -
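The same inner-product identity ⟨Gu, v⟩ = ⟨u, Gᵀv⟩ can be verified on a toy 1-D forward-difference operator, independently of the GradOperators class (a NumPy sketch):

```python
import numpy as np

def fwd_diff(x):
    # forward difference with a zero appended: (Gx)_i = x_{i+1} - x_i, last entry -x_{n-1}
    return np.diff(x, append=0.0)

def fwd_diff_adjoint(v):
    # adjoint of fwd_diff: (G^T v)_j = v_{j-1} - v_j, with v_{-1} = 0
    return np.concatenate(([0.0], v[:-1])) - v

rng = np.random.default_rng(0)
u, v = rng.random(8), rng.random(8)
assert np.isclose(np.dot(fwd_diff(u), v), np.dot(u, fwd_diff_adjoint(v)))
```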
|
nns_based_approach/tests/test_grad_operators.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
# +
race15 = pd.read_csv('Boston_Neighborhood_Demographics_Based_On_ACS_5Year_Estimates_2011-15.csv')
race15 = race15.drop(25)
race19 = pd.read_csv('Boston_Neighborhood_Demographics_Based_On_ACS_5Year_Estimates_2015-19.csv')
race19 = race19.drop(25)
age15 = pd.read_csv('Boston_Neighborhood_Estimated_Age_Distribution_ACS_2011-2015.csv')
age15 = age15.drop(25)
age19 = pd.read_csv('Boston_Neighborhood_Estimated_Age_Distribution_ACS_2015-2019.csv')
age19 = age19.drop(25)
voting = pd.read_csv('2011-2017_Estimated_Voter_Data_Per_Neighborhood.csv')
voting = voting.reset_index(drop=True)
voting = voting.drop('Unnamed: 0', axis=1)
# -
voting
plt.bar(race15['NEIGHBORHOOD'], race15['WHITE'])
plt.xticks(rotation=90)
plt.show()
# +
neighbs = voting['NEIGHBORHOOD']
for n in neighbs:
filt = voting['NEIGHBORHOOD'] == n
plt.plot(voting[filt]['YEAR'], voting[filt]['Voter Turnout'])
# -
plt.bar(race15['NEIGHBORHOOD'], race15['AFRICAN_AMERICAN'])
plt.xticks(rotation=90)
# +
neighbs = ['Roslindale', 'Roxbury', '<NAME>', 'Mattapan', 'Dorchester']
for n in neighbs:
filt = voting['NEIGHBORHOOD'] == n
plt.plot(voting[filt]['YEAR'], voting[filt]['Voter Turnout'])
# +
bar_width = 0.35
index = np.arange(25)
fig, ax = plt.subplots()
j = ax.bar(index, age15['Total 55+'], bar_width,
label="55+ Age Population 2011-2015")
a = ax.bar(index+bar_width, age19['Total 55+'],
bar_width, label="55+ Age Population 2015-2019")
ax.set_xlabel('Neighborhoods')
ax.set_ylabel('Percentage of Total Population')
ax.set_title('Old Population Change Over Time')
ax.set_xticks(index + bar_width / 2)
ax.set_xticklabels(age15['NEIGHBORHOOD'])
ax.legend(bbox_to_anchor=(1.7, 1), loc='upper right')
plt.xticks(rotation=90)
plt.show()
# +
greater = []
for i in range(len(age15['NEIGHBORHOOD'])):
if age15.loc[i, 'Total 55+'] < age19.loc[i, 'Total 55+']:
greater.append(age15.loc[i, 'NEIGHBORHOOD'])
len(greater)/len(age15['NEIGHBORHOOD'])
# +
bar_width = 0.35
index = np.arange(25)
fig, ax = plt.subplots()
j = ax.bar(index, age15['Total 18-29'], bar_width,
label="18-29 Age Population 2011-2015")
a = ax.bar(index+bar_width, age19['Total 18-29'],
bar_width, label="18-29 Age Population 2015-2019")
ax.set_xlabel('Neighborhoods')
ax.set_ylabel('Percentage of Total Population')
ax.set_title('Young Population Change Over Time')
ax.set_xticks(index + bar_width / 2)
ax.set_xticklabels(age15['NEIGHBORHOOD'])
ax.legend(bbox_to_anchor=(1.7, 1), loc='upper right')
plt.xticks(rotation=90)
plt.show()
# +
greater2 = []
for i in range(len(age15['NEIGHBORHOOD'])):
if age15.loc[i, 'Total 18-29'] < age19.loc[i, 'Total 18-29']:
greater2.append(age15.loc[i, 'NEIGHBORHOOD'])
len(greater2)/len(age15['NEIGHBORHOOD'])
# +
df1 = pd.read_csv('Old_White_Pop_VS_Young_POC_Pop_11-15.csv')
df1 = df1.drop(25)
df2 = pd.read_csv('Old_White_Pop_VS_Young_POC_Pop_15-19.csv')
df2 = df2.drop(25)
bar_width = 0.35
index = np.arange(25)
fig, ax = plt.subplots()
j = ax.bar(index, df1['Total White 55+'], bar_width,
label="2011-2015 White Population Aged 55+")
a = ax.bar(index+bar_width, df2['Total White 55+'],
bar_width, label="2015-2019 White Population Aged 55+")
ax.set_xlabel('Neighborhoods')
ax.set_ylabel('Percentage of Total Population')
ax.set_title('Old White Population Change Over Time')
ax.set_xticks(index + bar_width / 2)
ax.set_xticklabels(age15['NEIGHBORHOOD'])
ax.legend(bbox_to_anchor=(1.7, 1), loc='upper right')
plt.xticks(rotation=90)
plt.show()
# +
greater = []
for i in range(len(df1['NEIGHBORHOOD'])):
if df1.loc[i, 'Total White 55+'] < df2.loc[i, 'Total White 55+']:
greater.append(df1.loc[i, 'NEIGHBORHOOD'])
len(greater)/len(df1['NEIGHBORHOOD'])
# +
bar_width = 0.35
index = np.arange(25)
fig, ax = plt.subplots()
j = ax.bar(index, df1['Total Non-White 18-29'], bar_width,
label="2011-2015 POC Population Aged 18-29")
a = ax.bar(index+bar_width, df2['Total Non-White 18-29'],
bar_width, label="2015-2019 POC Population Aged 18-29")
ax.set_xlabel('Neighborhoods')
ax.set_ylabel('Percentage of Total Population')
ax.set_title('Young POC Population Change Over Time')
ax.set_xticks(index + bar_width / 2)
ax.set_xticklabels(age15['NEIGHBORHOOD'])
ax.legend(bbox_to_anchor=(1.7, 1), loc='upper right')
plt.xticks(rotation=90)
plt.show()
# +
greater2 = []
for i in range(len(df1['NEIGHBORHOOD'])):
if df1.loc[i, 'Total Non-White 18-29'] < df2.loc[i, 'Total Non-White 18-29']:
greater2.append(df1.loc[i, 'NEIGHBORHOOD'])
len(greater2)/len(df1['NEIGHBORHOOD'])
# -
print('Neighborhoods That Had Old White Pop. Increase:\n' + str(greater) + '\n')
print('Neighborhoods That Had Young POC Pop. Increase:\n' + str(greater2) + '\n')
income15 = pd.read_csv('Boston_Neighborhood_Income_Measures_ACS_Data_2011-15.csv')
income19 = pd.read_csv('Boston_Neighborhood_Income_Measures_ACS_Data_2015-19.csv')
income15
# +
bar_width = 0.35
index = np.arange(25)
fig, ax = plt.subplots()
j = ax.bar(index, income15['Median Household Income'], bar_width,
label="2011-2015 Median Household Income")
a = ax.bar(index+bar_width, income19['Median Household Income'], bar_width,
label="2015-2019 Median Household Income")
ax.set_xlabel('Neighborhoods')
ax.set_ylabel('Income')
ax.set_title('Median Household Income Change Over Time')
ax.set_xticks(index + bar_width / 2)
ax.set_xticklabels(age15['NEIGHBORHOOD'])
ax.legend(bbox_to_anchor=(1.7, 1), loc='upper right')
plt.xticks(rotation=90)
plt.show()
# +
bar_width = 0.35
index = np.arange(25)
fig, ax = plt.subplots()
j = ax.bar(index, income15['Per Capita Income'], bar_width,
label="2011-2015 Per Capita Income")
a = ax.bar(index+bar_width, income19['Per Capita Income'], bar_width,
label="2015-2019 Per Capita Income")
ax.set_xlabel('Neighborhoods')
ax.set_ylabel('Income')
ax.set_title('Per Capita Income Change Over Time')
ax.set_xticks(index + bar_width / 2)
ax.set_xticklabels(age15['NEIGHBORHOOD'])
ax.legend(bbox_to_anchor=(1.7, 1), loc='upper right')
plt.xticks(rotation=90)
plt.show()
# -
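The grouped-bar cells above all repeat the same boilerplate; the pattern could be factored into a helper (a sketch; `grouped_bar` is a hypothetical name, not part of the notebook):

```python
import numpy as np
import matplotlib.pyplot as plt

def grouped_bar(labels, series_a, series_b, label_a, label_b, title, ylabel):
    # two side-by-side bars per category, mirroring the cells above
    width = 0.35
    idx = np.arange(len(labels))
    fig, ax = plt.subplots()
    ax.bar(idx, series_a, width, label=label_a)
    ax.bar(idx + width, series_b, width, label=label_b)
    ax.set_xticks(idx + width / 2)
    ax.set_xticklabels(labels, rotation=90)
    ax.set_title(title)
    ax.set_ylabel(ylabel)
    ax.legend(bbox_to_anchor=(1.7, 1), loc='upper right')
    return fig, ax

fig, ax = grouped_bar(['A', 'B'], [1, 2], [3, 4],
                      '2011-2015', '2015-2019', 'Demo', 'Percent')
```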
|
Bay State Banner Voting Patterns/Age_Analysis.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] id="qM_XHcPVzUei"
# # TSNE embeddings for the monthly data comments from subreddits
# + [markdown] id="4FFq3DIYzgak"
# # IMPORT MODULES
# + executionInfo={"elapsed": 1642, "status": "ok", "timestamp": 1617608556018, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "06886034211416817939"}, "user_tz": -120} id="J3vC9YwjzTi4"
#import json
import os
#from google.colab import drive
from tqdm.notebook import tqdm
import pickle
from collections import Counter
from datetime import datetime, timedelta
import pandas as pd
import numpy as np
#import scipy
from scipy import spatial
# import torch
# from sentence_transformers import SentenceTransformer, util
#from sklearn.metrics.pairwise import cosine_similarity
# import umap
# from sklearn.decomposition import PCA
from sklearn.manifold import TSNE
# from sklearn.cluster import KMeans
# from sklearn.cluster import OPTICS
import matplotlib.pyplot as plt
# import seaborn as sns
# + [markdown] id="3ZT3Z71WzpsJ"
# # TECHNICAL FUNCTIONS
# + executionInfo={"elapsed": 299, "status": "ok", "timestamp": 1617608556019, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "06886034211416817939"}, "user_tz": -120} id="lQnpzkhQzTlF"
def get_date_range(month_start, year_start, month_end, year_end):
from itertools import cycle
month_range = list(range(1,13))
cycle_month_range = cycle(month_range)
while True:
current_month = next(cycle_month_range)
if current_month == month_start:
break
date_tuples = []
year = year_start
while True:
date_tuples.append((current_month, year))
if year == year_end and current_month == month_end:
break
current_month = next(cycle_month_range)
if current_month == 1:
year += 1
return date_tuples
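The tuples returned by `get_date_range` are `(month, year)`, inclusive on both ends. The same walk can be written with plain arithmetic, which makes the expected output easy to check (a sketch; `month_year_range` is a hypothetical rewrite, not part of the notebook):

```python
def month_year_range(month_start, year_start, month_end, year_end):
    # plain-arithmetic equivalent of get_date_range above
    out, m, y = [], month_start, year_start
    while True:
        out.append((m, y))
        if (m, y) == (month_end, year_end):
            return out
        m += 1
        if m == 13:
            m, y = 1, y + 1

assert month_year_range(11, 2020, 1, 2021) == [(11, 2020), (12, 2020), (1, 2021)]
```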
# + [markdown] id="Loyy5mT1zxWx"
# # UPLOAD DATA
# + executionInfo={"elapsed": 297, "status": "ok", "timestamp": 1617608556761, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "06886034211416817939"}, "user_tz": -120} id="LYD2LqiwzTnQ"
# google_drive_path = "/content/drive/MyDrive/"
comptech_opinion_analizer_path = "./"
# + colab={"base_uri": "https://localhost:8080/", "height": 66, "referenced_widgets": ["1bf2b2a59f95473c890cc668d6e640e7", "<KEY>", "0841c7ec12524d3eac2ff30712a8c8ba", "<KEY>", "d90c2559a1314e13a465608657eef8cf", "9666dddc0ee24dc4ae060969ff0929ba", "cf4918aed78743d481865f41b9abe725", "<KEY>"]} executionInfo={"elapsed": 24185, "status": "ok", "timestamp": 1617608581093, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "06886034211416817939"}, "user_tz": -120} id="hHQtO4_dzTpg" outputId="d02fe4e7-bbb5-45be-8a9c-e0f08edb5fe6"
# UPLOAD THE DATA
data_dir = os.path.join(comptech_opinion_analizer_path, "embeddings_bert/")
data_files = [f for f in os.listdir(data_dir) if "pickle" in f]
entity = "JoeBiden"
entity_data_files = sorted([f for f in data_files if entity in f])
df_vecs = pd.DataFrame()
for f in tqdm(entity_data_files):
data_path = os.path.join(data_dir, f)
df_vecs = df_vecs.append(pickle.load(open(data_path, "rb")))
# + [markdown] id="nwWI5u0oz7I2"
# # Show the timeline of comment counts
#
#
# + colab={"base_uri": "https://localhost:8080/", "height": 159} executionInfo={"elapsed": 22233, "status": "ok", "timestamp": 1617608581399, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "06886034211416817939"}, "user_tz": -120} id="R2ogWUfpzTrb" outputId="7801869a-3a25-46e0-a732-03d4a71dbdcc"
created_list = sorted(df_vecs.created_utc.to_list())
b_width = 3600*24*3 # 3-day bins
bins = np.arange(min(created_list), max(created_list) + 1, b_width)
hist, bins = np.histogram(created_list, bins = bins)
dt_bins = [datetime.fromtimestamp(t) for t in bins[:-1]]
plt.figure(figsize=(15,1.5))
plt.title(f"/r/{entity} :: Number of comments per 3-day bin")
plt.plot(dt_bins, hist, marker = "x")
plt.xlabel("Time")
plt.ylabel("Count")
plt.show()
# + [markdown] id="aq1AxNmb0Dg2"
# # TSNE EMBEDDING OF COMMENTS
# + executionInfo={"elapsed": 20472, "status": "ok", "timestamp": 1617608581682, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "06886034211416817939"}, "user_tz": -120} id="AWdnoZWQ0FtF"
# ADD FOLDER
# colab_notebooks_path = os.path.join(google_drive_path, "Colab Notebooks/opinion_analyzer/")
tsne_embedding_dir = os.path.join(comptech_opinion_analizer_path, "tsne_embeddings")
os.makedirs(tsne_embedding_dir, exist_ok = True)
# + colab={"base_uri": "https://localhost:8080/", "height": 66, "referenced_widgets": ["3dd43ed0a6da418d83c7072935c18c3a", "16c94c80d5be48559583dffc68adf8ee", "7473c56435174fe7911ff6e25f2426aa", "a8cf4404fa6f4cb9bd7498d457efafd2", "da0da526e8824b9792f72a7087fba404", "918db2e82ec9416fa88074cfe64978a9", "60d5d893f3bd4544a0311d170781ad2c", "84056c6700874a128461f5c0a4be0f1c"]} id="mk3x6vfszTtq" outputId="3fc4dd00-5883-4df2-ed23-38dc434ab1a5"
# DIMENSIONALITY REDUCTION FOR ALL MONTHLY DATA
date_range = get_date_range(11, 2020, 1, 2021)
for my_start, my_end in tqdm(list(zip(date_range, date_range[1:]))):
# PREPARATIONS
dt_start = datetime(my_start[1], my_start[0], 1)
dt_end = datetime(my_end[1], my_end[0], 1)
month_str = dt_start.strftime("%b %Y")
t_start, t_end = dt_start.timestamp(), dt_end.timestamp()
month_vecs_df = df_vecs[(t_start < df_vecs.created_utc ) & (df_vecs.created_utc < t_end)]
month_embeddings = month_vecs_df.embedding.to_list()
month_labels = month_vecs_df.body.to_list()
month_ids = month_vecs_df.link_id.to_list()
print(f"Month labels {len(month_labels)}")
# TSNE
tsne = TSNE(n_components = 2)
month_embeddings_2d = tsne.fit_transform(month_embeddings)
# OUTPUT
out_file = f"tsne_embedding_2d_{entity}_{my_start[0]}_{my_start[1]}.pickle"
out_path = os.path.join(tsne_embedding_dir, out_file)
out_pack = (month_ids, month_labels, month_embeddings_2d)
pickle.dump(out_pack, open(out_path, "wb"))
# + [markdown] id="jSo7b_9Q1U4B"
# # Visualisation of comments each month
# + id="K_r3pzNhzTvo"
dt_start = datetime(2020, 1, 1)
dt_end = datetime(2020, 2, 1)
month_str = dt_start.strftime("%b %Y")
t_start, t_end = dt_start.timestamp(), dt_end.timestamp()
month_vecs_df = df_vecs[(t_start < df_vecs.created_utc ) & (df_vecs.created_utc < t_end)]
# + id="Jv4ryTEG1XUZ"
month_embeddings = month_vecs_df.embedding.to_list()
month_labels = [s[:60]+"..." if len(s)>60 else s for s in month_vecs_df.body.to_list()]
# re-run TSNE for this month: the loop above only kept the last month's 2-D embedding
month_embeddings_2d = TSNE(n_components = 2).fit_transform(month_embeddings)
len(month_labels)
# + id="37on4pv31YU0"
# VISUALISATION
import plotly.graph_objects as go
marker_style = dict(color='lightblue', size=6, line=dict(color='black', width = 0.5))
X, Y = zip(*month_embeddings_2d)
scatter_gl = go.Scattergl(x = X, y = Y, hovertext = month_labels, mode='markers', marker= marker_style)
fig = go.Figure(data = scatter_gl)
fig.update_layout(width=1000, height=700, plot_bgcolor = "white", margin=dict(l=10, r=10, t=30, b=10),
title=f"TSNE comments /r/{entity} :: period {month_str}")
fig.show()
|
data_2d_projection/tsne embeddings monthly.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] colab_type="text" id="70meWpLVSMGt"
# # Digit recognition (MNIST) using fastai
# > A tutorial on using fastai for MNIST classification.
#
# - toc: false
# - badges: false
# - comments: true
# - categories: [MNIST, fastai]
# - image:
# + colab={"base_uri": "https://localhost:8080/", "height": 309} colab_type="code" id="Ndf1IwKjmgpN" outputId="0a10f92f-a693-4edb-c611-9d6a005df730"
#hide
# !pip install -Uqq fastbook
import fastbook
fastbook.setup_book()
# + colab={} colab_type="code" id="ZRWXWp11mm8i"
#hide
from fastbook import *
# + [markdown] colab_type="text" id="wMswMxyuSXca"
# This tutorial uses the `fastai` machine learning framework to tackle one of the most traditional image-classification challenges. It is one of the first challenges for learners of artificial neural networks working with image processing; the dataset used for this project can be found at two different links:
#
# - Kaggle: https://www.kaggle.com/c/digit-recognizer/data;
# - The MNIST database: http://yann.lecun.com/exdb/mnist/.
#
# + [markdown] colab_type="text" id="GC5iRjwVYLhI"
# ## Loading the packages used
# + colab={} colab_type="code" id="1HrNSQTnmqGR"
from fastai.vision.all import *
import numpy as np
import pandas as pd
# + colab={} colab_type="code" id="8k_fh_dZYnWR"
#hide
import fastai
# + [markdown] colab_type="text" id="mKxNikvqYwjN"
# This project uses fastai version 2.0.8.
# + colab={} colab_type="code" id="_rdoqRkHYscn"
fastai.__version__
# + colab={"base_uri": "https://localhost:8080/", "height": 72, "resources": {"http://localhost:8080/nbextensions/google.colab/files.js": {"data": "<KEY>", "headers": [["content-type", "application/javascript"]], "ok": true, "status": 200, "status_text": ""}}} colab_type="code" id="5FcMoz_Ln5qz" outputId="e857f5d5-90f3-456e-a0e5-2d67e6e26e19"
#hide
from google.colab import files
uploaded = files.upload()
# + colab={} colab_type="code" id="3PIsLfpiobhx"
#hide
import zipfile
with zipfile.ZipFile('/content/images.zip', 'r') as images:
images.extractall('/content')
# + [markdown] colab_type="text" id="u5-2_AefZEnY"
# ## Defining the file paths
# + [markdown] colab_type="text" id="vmT_-P_DZKD5"
# A starting point for the project is defining an object that holds the directory path of the images, using fastai's `Path` method. It returns not just a string with the directory but a class from the Python 3 standard library, which makes accessing files and directories easier.
#
# + colab={} colab_type="code" id="N1L5FwsottSc"
path = Path(r"/content/images")
# + [markdown] colab_type="text" id="LqAs8aelc8Kb"
# ## Loading the images into the model
# + [markdown] colab_type="text" id="WeqlQTcVbs2p"
# To load the images for training the model we need a function that specifies the dataset type and how it is structured. For this, the `ImageDataLoaders` function is used.
# + colab={} colab_type="code" id="n5_YQgu7tvnv"
dls = ImageDataLoaders.from_folder(path, train = 'train', valid = 'valid', shuffle_train = True, bs=16)
# + [markdown] colab_type="text" id="CtsmJzLedBol"
# ## Defining and training the model
# + [markdown] colab_type="text" id="1DiqAHKBdIns"
# To define the convolutional neural network, the `cnn_learner` function is used.
#
# The arguments passed to it are:
# - `dls` - the dataloader defined earlier;
# - `resnet34` - the network architecture, in this case a pretrained model widely used for this task. To learn more about resnet34: https://www.kaggle.com/pytorch/resnet34;
# - `error_rate` - the metric used to evaluate the model.
#
# To speed up training, the `to_fp16` (half-precision floating point) method is used, which works with less precise numbers where possible.
#
# After that the network can be trained with the `fine_tune` method. Since we are using a pretrained network, we first run 4 epochs training only the randomly initialized head on top of the pretrained weights, then "unfreeze" all layers and train the model updating all weights.
# + colab={"base_uri": "https://localhost:8080/", "height": 663, "referenced_widgets": ["d927545a5ce144f399b63cd8527103ae", "11f79a54ea2441a1b8bea57113d48e0e", "d3823cab9fdf4705a2f8c84c5b825640", "d2323b3b6dd44ee0be338c0aec413616", "57fe6f0aea054ed9b100d271ace3ed59", "9defeba064a0406b94145d86049a0cdd", "c5b5d69b111a4fb4be0bbdac2640b4f2", "535c48ee237a411c9624ee56a78d33fa"]} colab_type="code" id="5turC6sZuVyw" outputId="c8bb7172-d20e-4078-bf50-33bb49e00568"
from fastai.callback.fp16 import *
learn = cnn_learner(dls, resnet34, metrics=error_rate).to_fp16()
learn.fine_tune(12, freeze_epochs=4)
# + [markdown] colab_type="text" id="Yqvee0RCkDpy"
# As the plot below shows, the training loss decreased to `5.27e-4` on the training set, which looks very good.
# + colab={"base_uri": "https://localhost:8080/", "height": 272} colab_type="code" id="zl_hpUEp20hh" outputId="0458823d-6e67-4c70-e29f-9b87f52e60d0"
learn.recorder.plot_loss()
# + [markdown] colab_type="text" id="diYaq7-Gk8um"
# ## Saving the trained model
# + [markdown] colab_type="text" id="XjWHbicylBqR"
# To save the model you can use the `export()` method, which saves an `export.pkl` file in the default directory. To load the model back, just use the `load_learner()` function.
# + colab={} colab_type="code" id="i5hdU9p6270a"
learn.export()
# -
# ## Making predictions
# To make predictions, the `predict()` method is used; its argument is the `.jpg` image file.
# + colab={"base_uri": "https://localhost:8080/", "height": 17} colab_type="code" id="4ODrTyhoWjPH" outputId="938b2d15-0a52-45fe-a017-157216b53245"
#hide_output
pred = []
for i in range(28000):  # the test set has 28000 images named 0.jpg ... 27999.jpg (test_images was undefined)
image_path = "/content/images/test/" + str(i) + ".jpg"
pred.append(int(learn.predict(image_path)[0]))
# -
# To save the file in `.csv` format, a `pandas` dataframe was created and the necessary transformations applied so it could be submitted to Kaggle.
# + colab={} colab_type="code" id="KvjhJAD_2dQs"
prediction = {'ImageId': list(range(1,28001)),
'Label': pred}
df = pd.DataFrame(prediction)
df.to_csv('predicitons.csv', index = False)
# -
# This model correctly classified 99.421% of the 28000 test images.
# 
|
_notebooks/2020-09-05-fastai_MNIST.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: PyCharm (python-examples)
# language: python
# name: pycharm-aa33d817
# ---
# # Lesson 1: Data Analysis Process
# + [markdown] pycharm={"name": "#%% md\n"}
# ## Load Data from CSV
# + pycharm={"name": "#%%\n"}
import unicodecsv
def read_csv(csv_file_name: str, print_basic_info: bool = False) -> list:
with open(csv_file_name, 'rb') as f:
reader = unicodecsv.DictReader(f)
res_list = list(reader)
if print_basic_info:
print("CSV file read: ", csv_file_name)
print("Row count: ", len(res_list))
print("First row: ", res_list[0])
return res_list
# -
# ### Load data from CSV files
#
# Load data from files `enrollments.csv`, `daily_engagement.csv`, and `project_submissions.csv`.
# + pycharm={"name": "#%%\n"}
enrollments = read_csv('../resources/lesson1/enrollments.csv', print_basic_info=True)
daily_engagement = read_csv('../resources/lesson1/daily_engagement.csv', print_basic_info=True)
project_submissions = read_csv('../resources/lesson1/project_submissions.csv', print_basic_info=True)
# -
# ## Fixing Data Types
#
# ### Intro code
#
# Some intro code was provided by Udacity in the `L1_Starter_Code.ipynb` Jupyter notebook.
# I've partially refactored it.
# + pycharm={"name": "#%%\n"}
from datetime import datetime as dt
def parse_date(date_str):
"""
:param date_str: date as string of the form year-month-day.
:return: parsed date object, or None if date_str is empty
"""
return None if date_str == '' else dt.strptime(date_str, '%Y-%m-%d')
# Takes a string which is either an empty string or represents an integer,
# and returns an int or None.
def parse_int(int_str):
"""
:param int_str: string representation of an integer
:return: parsed integer, or None if int_str is empty
"""
return None if int_str == '' else int(int_str)
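# As a quick sanity check, the two parsers can be exercised on typical inputs (a small illustrative example, not part of the original lesson code):

```python
from datetime import datetime as dt

def parse_date(date_str):
    # returns a datetime for 'YYYY-MM-DD' strings, or None for empty input
    return None if date_str == '' else dt.strptime(date_str, '%Y-%m-%d')

def parse_int(int_str):
    # returns an int, or None for empty input
    return None if int_str == '' else int(int_str)

print(parse_date('2015-01-14'))  # 2015-01-14 00:00:00
print(parse_date(''))            # None
print(parse_int('65'))           # 65
print(parse_int(''))             # None
```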
# + [markdown] pycharm={"name": "#%% md\n"}
# Other than in `L1_Starter_Code.ipynb`, we will use pre-defined column names.
# + pycharm={"name": "#%%\n"}
class ColEnr:
"""
Column names for enrollments.
"""
cancel_date = 'cancel_date'
days_to_cancel = 'days_to_cancel'
is_canceled = 'is_canceled'
is_udacity = 'is_udacity'
join_date = 'join_date'
account_key = 'account_key'
status = 'status'
class ColDailyEng:
"""
Column names for daily engagement.
"""
lessons_completed = 'lessons_completed'
num_courses_visited = 'num_courses_visited'
projects_completed = 'projects_completed'
total_minutes_visited = 'total_minutes_visited'
utc_date = 'utc_date'
acct = 'acct' # column name for account key in the csv
account_key = 'account_key' # new column name for account key
class ColProjSub:
"""
Column names for project submissions.
"""
completion_date = 'completion_date'
creation_date = 'creation_date'
assigned_rating = 'assigned_rating'
account_key = 'account_key'
lesson_key = 'lesson_key'
processing_state = 'processing_state'
# + [markdown] pycharm={"name": "#%% md\n"}
# Clean up the data types in enrollments, daily engagement, and project submission.
# + pycharm={"name": "#%%\n"}
# Clean up the data types in the enrollments table
for enrollment in enrollments:
enrollment[ColEnr.cancel_date] = parse_date(enrollment[ColEnr.cancel_date])
enrollment[ColEnr.days_to_cancel] = parse_int(enrollment[ColEnr.days_to_cancel])
enrollment[ColEnr.is_canceled] = enrollment[ColEnr.is_canceled] == 'True'
enrollment[ColEnr.is_udacity] = enrollment[ColEnr.is_udacity] == 'True'
enrollment[ColEnr.join_date] = parse_date(enrollment[ColEnr.join_date])
enrollments[0]
# + pycharm={"name": "#%%\n"}
# Clean up the data types in the engagement table
for engagement_record in daily_engagement:
engagement_record[ColDailyEng.lessons_completed] = int(float(engagement_record[ColDailyEng.lessons_completed]))
engagement_record[ColDailyEng.num_courses_visited] = int(float(engagement_record[ColDailyEng.num_courses_visited]))
engagement_record[ColDailyEng.projects_completed] = int(float(engagement_record[ColDailyEng.projects_completed]))
engagement_record[ColDailyEng.total_minutes_visited] = float(engagement_record[ColDailyEng.total_minutes_visited])
engagement_record[ColDailyEng.utc_date] = parse_date(engagement_record[ColDailyEng.utc_date])
daily_engagement[0]
# + pycharm={"name": "#%%\n"}
# Clean up the data types in the submissions table
for submission in project_submissions:
submission[ColProjSub.completion_date] = parse_date(submission[ColProjSub.completion_date])
submission[ColProjSub.creation_date] = parse_date(submission[ColProjSub.creation_date])
project_submissions[0]
# + [markdown] pycharm={"name": "#%% md\n"}
# ## Investigating the Data
# + [markdown] pycharm={"name": "#%% md\n"}
# For each of the three tables, find the number of rows and the number of unique students (account keys).
# + pycharm={"name": "#%%\n"}
enrollment_num_rows = len(enrollments)
engagement_num_rows = len(daily_engagement)
submission_num_rows = len(project_submissions)
enrollments_account_key_set = {e[ColEnr.account_key] for e in enrollments}
engagement_account_key_set = {e[ColDailyEng.acct] for e in daily_engagement}
submission_account_key_set = {e[ColProjSub.account_key] for e in project_submissions}
enrollment_num_unique_students = len(enrollments_account_key_set)
engagement_num_unique_students = len(engagement_account_key_set)
submission_num_unique_students = len(submission_account_key_set)
print('enrollment_num_rows=', enrollment_num_rows)
print('engagement_num_rows=', engagement_num_rows)
print('submission_num_rows=', submission_num_rows)
print('enrollment_num_unique_students=', enrollment_num_unique_students)
print('engagement_num_unique_students=', engagement_num_unique_students)
print('submission_num_unique_students=', submission_num_unique_students)
# -
# ## Problems in the Data
# Rename the `acct` column in the `daily_engagement` table to `account_key`.
# + pycharm={"name": "#%%\n"}
for e in daily_engagement:
e[ColDailyEng.account_key] = e[ColDailyEng.acct]
del e[ColDailyEng.acct]
print(daily_engagement[0][ColDailyEng.account_key])
# -
# ## Missing Engagement Records
#
# Find student enrollments where there is no corresponding student in the daily engagement table.
# First, we find the account keys present in enrollments but missing in engagements.
# + pycharm={"name": "#%%\n"}
missing_engagement_account_key_set = enrollments_account_key_set - engagement_account_key_set
print("There are {} account keys in enrollments table that are missing in the engagement table."
.format(len(missing_engagement_account_key_set)))
print(sorted(missing_engagement_account_key_set, key=lambda e: int(e)))
# + [markdown] pycharm={"name": "#%% md\n"}
# Now we find the corresponding enrollment records.
# + pycharm={"name": "#%%\n"}
enrollments_without_engagement = [e for e in enrollments if e[ColEnr.account_key] in missing_engagement_account_key_set]
# + [markdown] pycharm={"name": "#%% md\n"}
# Print some of the records.
# + pycharm={"name": "#%%\n"}
for e in enrollments_without_engagement[:5]:
print(e)
# -
# ## Checking for More Problem Records
#
# Now, find the number of surprising records from the enrollment table
# that are missing in the engagement table
# but have _not_ been canceled on the same day.
# + pycharm={"name": "#%%\n"}
enrollments_without_engagement_not_canceled_on_same_day = [
e for e in enrollments_without_engagement
if e[ColEnr.cancel_date] is None or e[ColEnr.cancel_date] > e[ColEnr.join_date] ]
print("There are {} enrollment records without corresponding engagement records "
      "that have not been canceled on the same day:".format(len(enrollments_without_engagement_not_canceled_on_same_day)))
for e in enrollments_without_engagement_not_canceled_on_same_day:
print(e)
# + [markdown] pycharm={"name": "#%% md\n"}
# These surprising records are Udacity test accounts: this information has been given in the lesson video.
# Now we want to find all test accounts and exclude them from the data.
# + pycharm={"name": "#%%\n"}
udacity_test_account_keys = {e[ColEnr.account_key] for e in enrollments if e[ColEnr.is_udacity]}
print("There are {} test accounts in the enrollments table:".format(len(udacity_test_account_keys)))
print(udacity_test_account_keys)
# + [markdown] pycharm={"name": "#%% md\n"}
# We remove the udacity test accounts from the data.
# + pycharm={"name": "#%%\n"}
def remove_test_accounts(records, account_key_col):
    return [e for e in records if e[account_key_col] not in udacity_test_account_keys]

non_test_enrollments = remove_test_accounts(enrollments, ColEnr.account_key)
non_test_daily_engagement = remove_test_accounts(daily_engagement, ColDailyEng.account_key)
non_test_project_submissions = remove_test_accounts(project_submissions, ColProjSub.account_key)
print("Number of non-test accounts in")
print("- enrollments table:", len(non_test_enrollments))
print("- daily engagement table:", len(non_test_daily_engagement))
print("- project submissions table:", len(non_test_project_submissions))
# -
# ## Refining the Question
#
# + [markdown] pycharm={"name": "#%% md\n"}
# Find students that haven't canceled yet or remained enrolled for more than 7 days.
# + pycharm={"name": "#%%\n"}
paid_students = { e[ColEnr.account_key]:e[ColEnr.join_date] for e in non_test_enrollments
if not e[ColEnr.is_canceled] or e[ColEnr.days_to_cancel] > 7 }
len(paid_students)
# -
# Now we also include an additional requirement to store the latest enrollment date for each student.
# + pycharm={"name": "#%%\n"}
old_paid_students = paid_students
paid_students = dict()
for e in non_test_enrollments:
if not e[ColEnr.is_canceled] or e[ColEnr.days_to_cancel] > 7:
acc_key = e[ColEnr.account_key]
join_date = e[ColEnr.join_date]
curr_value = paid_students.get(acc_key, None)
if curr_value is None or join_date > curr_value:
paid_students[acc_key] = join_date
len(paid_students)
# + pycharm={"name": "#%%\n"}
result_has_changed = old_paid_students != paid_students
print("The resulting dictionary is", "different." if result_has_changed else "the same.")
# -
# ## Getting Data from the First Week
# <a id='calculation-of-paid_daily_engagement_first_week'></a>
#
# We filter the engagement records in the first week.
# The calculation has been fixed as explained in the _Debugging Data Analysis Code_ section
# [here](#fixing-calculation-of-paid_daily_engagement_first_week).
# + pycharm={"name": "#%%\n"}
paid_daily_engagement_first_week = []
for e in non_test_daily_engagement:
acc_key = e[ColDailyEng.account_key]
eng_date = e[ColDailyEng.utc_date]
join_date = paid_students.get(acc_key, None)
    # if join_date is not None and (eng_date - join_date).days < 7:  # original (buggy) condition
    if join_date is not None and join_date <= eng_date and (eng_date - join_date).days < 7:  # fixed after debugging
paid_daily_engagement_first_week.append(e)
len(paid_daily_engagement_first_week)
# -
# ## Exploring Student Engagement
# ### Intro code
#
# Some intro code was provided by Udacity in the `L1_Starter_Code.ipynb` Jupyter notebook.
# I've partially refactored it and added some util functions.
# + pycharm={"name": "#%%\n"}
from collections import defaultdict
def group_by(values, key_fun):
"""
    :param values: iterable of values to group
    :param key_fun: function mapping a value to its group key
    :return: a new dictionary mapping each key to the list of values with that key
"""
res = defaultdict(list)
for e in values:
key = key_fun(e)
res[key].append(e)
return res
def sum_values_by(groups, value_fun=None):
    """
    :param groups: dict mapping keys to collections of values
    :param value_fun: if specified, applied to each value before summing
    :return: a new dict with each key mapped to the sum of its corresponding values
    """
    return {key : sum(values if value_fun is None else map(value_fun, values))
            for (key, values) in groups.items()}
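# A tiny worked example of these two helpers (illustrative only; the records below are made up):

```python
from collections import defaultdict

def group_by(values, key_fun):
    # group values into lists keyed by key_fun(value)
    res = defaultdict(list)
    for e in values:
        res[key_fun(e)].append(e)
    return res

def sum_values_by(groups, value_fun=None):
    # sum the values (optionally transformed by value_fun) under each key
    return {key: sum(values if value_fun is None else map(value_fun, values))
            for (key, values) in groups.items()}

records = [{'acct': 'a', 'minutes': 10}, {'acct': 'b', 'minutes': 5}, {'acct': 'a', 'minutes': 20}]
grouped = group_by(records, lambda r: r['acct'])
totals = sum_values_by(grouped, lambda r: r['minutes'])
print(totals)  # {'a': 30, 'b': 5}
```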
# + [markdown] pycharm={"name": "#%% md\n"}
# Compute a dict mapping each account key to a list of its engagement records.
# + pycharm={"name": "#%%\n"}
engagement_by_account = group_by(paid_daily_engagement_first_week, lambda e: e[ColDailyEng.account_key])
# + [markdown] pycharm={"name": "#%% md\n"}
# Define functions to calculate sums by account.
# + pycharm={"name": "#%%\n"}
import numpy as np
def sum_engagement_by_account(col_name: str) -> dict:
return {acc_key : sum([e[col_name] for e in eng_rec_list])
for (acc_key, eng_rec_list) in engagement_by_account.items()}
def print_values_stats(data, heading:str = None):
if heading is not None: print(heading)
values = list(data.values()) if isinstance(data, dict) else data
print("Number of values:", len(values))
print("Mean:", np.mean(values))
print("Standard deviation:", np.std(values))
print("Min:", np.min(values))
print("Max:", np.max(values))
# + [markdown] pycharm={"name": "#%% md\n"}
# Calculate total minutes of engagement by account and print some statistics.
# + pycharm={"name": "#%%\n"}
total_minutes_by_account = sum_engagement_by_account(ColDailyEng.total_minutes_visited)
print_values_stats(total_minutes_by_account)
# + [markdown] pycharm={"name": "#%% md\n"}
# ## Debugging Data Analysis Code
# -
# Find student with maximum total minutes.
# + pycharm={"name": "#%%\n"}
acc_key_with_max_minutes = max(total_minutes_by_account.keys(), key=lambda acc_key: total_minutes_by_account[acc_key])
print("Account key having max total minutes:", acc_key_with_max_minutes)
print("Engagement records in the account with max total minutes:")
engagement_by_account[acc_key_with_max_minutes]
# + [markdown] pycharm={"name": "#%% md\n"}
# <a id='fixing-calculation-of-paid_daily_engagement_first_week'></a>
#
# The account with maximum sum of engagement minutes has more than 10000 minutes, which is impossible for a week.
# Having a look at its engagements we notice they were not limited to only one week, as they should have been.
# As explained in the lesson video this can happen, when a student joined, canceled, and then joined again,
# thus possibly having engagements _before_ this rejoining date.
# To fix that we go back to the
# [calculation of paid_daily_engagement_first_week](#calculation-of-paid_daily_engagement_first_week),
# add the condition that the engagement date must be _after_ the join date, and re-run the cells.
#
# After the fix and re-calculation, we get only 7 engagement records in the first week, which is alright.
# The total engagement time of 3564 minutes = 59.4 hours in the week is quite a lot, but possible.
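# The fixed condition can also be checked in isolation: an engagement dated before the (re)join date must be excluded. A minimal sketch with made-up dates:

```python
from datetime import datetime as dt

def within_first_week(join_date, eng_date):
    # fixed condition: the engagement must fall on or after the join date,
    # and strictly within the following 7 days
    return join_date <= eng_date and (eng_date - join_date).days < 7

join = dt(2015, 3, 10)
print(within_first_week(join, dt(2015, 3, 12)))  # True: 2 days in
print(within_first_week(join, dt(2015, 3, 17)))  # False: day 7 is outside the window
print(within_first_week(join, dt(2015, 3, 1)))   # False: before rejoining -- the old bug let these through
```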
# + [markdown] pycharm={"name": "#%% md\n"}
# ## Lessons Completed in First Week
#
# Count completed lessons by account and print some statistics.
# + pycharm={"name": "#%%\n"}
lessons_completed_by_account = sum_engagement_by_account(ColDailyEng.lessons_completed)
print_values_stats(lessons_completed_by_account)
# + [markdown] pycharm={"name": "#%% md\n"}
# ## Number of Visits in First Week
#
# Count the days on which a student visited at least one course.
# + pycharm={"name": "#%%\n"}
for engagements in engagement_by_account.values():
for e in engagements:
e['has_visited'] = 1 if e[ColDailyEng.num_courses_visited] > 0 else 0
visits_by_account = sum_engagement_by_account('has_visited')
print_values_stats(visits_by_account)
# -
# ## Splitting out Passing Students
# First, find the project submissions that have any of the given lesson keys.
# + pycharm={"name": "#%%\n"}
subway_project_lesson_keys = ['746169184', '3176718735']
subway_project_submissions = list(filter(lambda x: x[ColProjSub.lesson_key] in subway_project_lesson_keys, project_submissions))
print("{} project submissions for given lesson keys.".format(len(subway_project_submissions)))
# -
# Next, find the account keys for projects successfully passed at some time.
# + pycharm={"name": "#%%\n"}
passed_ratings = ['PASSED', 'DISTINCTION']
account_keys_passed = {e[ColProjSub.account_key] for e in subway_project_submissions
if e[ColProjSub.assigned_rating] in passed_ratings}
len(account_keys_passed)
# + [markdown] pycharm={"name": "#%% md\n"}
# Now, we count assignment entries for students, which have successfully passed the subway project at some time.
# + pycharm={"name": "#%%\n"}
passing_engagement = list(filter(lambda e: e[ColDailyEng.account_key] in account_keys_passed, paid_daily_engagement_first_week))
non_passing_engagement = list(filter(lambda e: e[ColDailyEng.account_key] not in account_keys_passed, paid_daily_engagement_first_week))
print("{} passing engagements".format(len(passing_engagement)))
print("{} non-passing engagements".format(len(non_passing_engagement)))
# + [markdown] pycharm={"name": "#%% md\n"}
# ## Comparing the Two Student Groups
# -
# First, we group the passing and non-passing engagement records by account key.
# + pycharm={"name": "#%%\n"}
passing_engagement_by_account = group_by(passing_engagement, lambda e: e[ColDailyEng.account_key])
non_passing_engagement_by_account = group_by(non_passing_engagement, lambda e: e[ColDailyEng.account_key])
# + [markdown] pycharm={"name": "#%% md\n"}
# Calculate the total minutes per account.
# + pycharm={"name": "#%%\n"}
passing_engagement_by_account_total_minutes_visited = sum_values_by(
passing_engagement_by_account, lambda x: x[ColDailyEng.total_minutes_visited])
non_passing_engagement_by_account_total_minutes_visited = sum_values_by(
non_passing_engagement_by_account, lambda x: x[ColDailyEng.total_minutes_visited])
print_values_stats(list(passing_engagement_by_account_total_minutes_visited.values()),
heading="Statistics for total minutes visited in passing engagement records:")
print_values_stats(list(non_passing_engagement_by_account_total_minutes_visited.values()),
heading="Statistics for total minutes visited in non-passing engagement records:")
# + [markdown] pycharm={"name": "#%% md\n"}
# Calculate the lessons completed per account.
# + pycharm={"name": "#%%\n"}
passing_engagement_by_account_lessons_completed = sum_values_by(
passing_engagement_by_account, lambda x: x[ColDailyEng.lessons_completed])
non_passing_engagement_by_account_lessons_completed = sum_values_by(
non_passing_engagement_by_account, lambda x: x[ColDailyEng.lessons_completed])
print_values_stats(list(passing_engagement_by_account_lessons_completed.values()),
heading="Statistics for lessons completed in passing engagement records")
print_values_stats(list(non_passing_engagement_by_account_lessons_completed.values()),
heading="Statistics for lessons completed in non-passing engagement records")
# -
# Calculate the days visited per account.
# + pycharm={"name": "#%%\n"}
passing_engagement_by_account_days_visited = sum_values_by(
passing_engagement_by_account, lambda x: 1 if x[ColDailyEng.total_minutes_visited] > 0 else 0)
non_passing_engagement_by_account_days_visited = sum_values_by(
non_passing_engagement_by_account, lambda x: 1 if x[ColDailyEng.total_minutes_visited] > 0 else 0)
print_values_stats(list(passing_engagement_by_account_days_visited.values()),
heading="Statistics for days visited in passing engagement records:")
print_values_stats(list(non_passing_engagement_by_account_days_visited.values()),
heading="Statistics for days visited in non-passing engagement records:")
# + [markdown] pycharm={"name": "#%% md\n"}
# ## Making Histograms
# + pycharm={"name": "#%%\n"}
import matplotlib.pyplot as plt
import seaborn as sns
sns.set()
# -
# Make histograms for the three metrics (total minutes visited, lessons completed, days visited) for
# passing and non-passing engagement records.
# + pycharm={"name": "#%%\n"}
# %matplotlib inline
plt.title('Time spent by students in classroom')
plt.xlabel("Students")
plt.ylabel("Total minutes visited")
plt.hist(list(passing_engagement_by_account_total_minutes_visited.values()), bins=20)
plt.hist(list(non_passing_engagement_by_account_total_minutes_visited.values()), bins=20)
# + pycharm={"name": "#%%\n"}
# %matplotlib inline
plt.title('Lessons completed by students')
plt.xlabel("Students")
plt.ylabel("Lessons completed")
plt.hist(list(passing_engagement_by_account_lessons_completed.values()), bins=20)
plt.hist(list(non_passing_engagement_by_account_lessons_completed.values()), bins=20)
# + pycharm={"name": "#%%\n"}
# %matplotlib inline
plt.xlabel("Students")
plt.ylabel("Days visited")
plt.title('Days visited in first week by students')
plt.hist(list(passing_engagement_by_account_days_visited.values()), bins=8)
plt.hist(list(non_passing_engagement_by_account_days_visited.values()), bins=8)
|
lessons/lesson1.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: TF 2.1 (GPU)/Py3.7.5
# language: python
# name: tf21g
# ---
# +
import numpy as np
import pandas as pd
import glob
import re
import tensorflow as tf
from tensorflow.keras.preprocessing import sequence
from tensorflow.keras.layers import Embedding, LSTM, \
Bidirectional, Dense,\
Dropout
import tensorflow_datasets as tfds
from bs4 import BeautifulSoup
tf.keras.backend.clear_session() #- for easy reset of notebook state
# check if the GPU can be seen by TF
tf.config.list_physical_devices('GPU')
#tf.debugging.set_log_device_placement(True)
gpus = tf.config.experimental.list_physical_devices('GPU')
if gpus:
# Restrict TensorFlow to only use the first GPU
try:
tf.config.experimental.set_memory_growth(gpus[0], True)
tf.config.experimental.set_visible_devices(gpus[0], 'GPU')
logical_gpus = tf.config.experimental.list_logical_devices('GPU')
        print(len(gpus), "Physical GPUs,", len(logical_gpus), "Logical GPUs")
except RuntimeError as e:
# Visible devices must be set before GPUs have been initialized
print(e)
###############################################
# -
# # Training Data Vectorization
# +
def load_reviews(path, columns=["filename", 'review']):
assert len(columns) == 2
l = list()
for filename in glob.glob(path):
# print(filename)
with open(filename, 'r') as f:
review = f.read()
l.append((filename, review))
return pd.DataFrame(l, columns=columns)
#unsup_df = load_reviews("./aclImdb/train/unsup/*.txt")
def load_labelled_data(path, neg='/neg/',
pos='/pos/', shuffle=True):
neg_df = load_reviews(path + neg + "*.txt")
pos_df = load_reviews(path + pos + "*.txt")
neg_df['sentiment'] = 0
pos_df['sentiment'] = 1
df = pd.concat([neg_df, pos_df], axis=0)
if shuffle:
df = df.sample(frac=1, random_state=42)
return df
train_df = load_labelled_data("./aclImdb/train/")
def fn_to_score(f):
scr = f.split("/")[-1] # get file name
scr = scr.split(".")[0] # remove extension
scr = int(scr.split("_")[-1]) #the score
return scr
train_df['score'] = train_df.filename.apply(fn_to_score)
train_df.head()
test_df = load_labelled_data("./aclImdb/test/")
# load encoder
imdb_encoder = tfds.features.text.SubwordTextEncoder.\
load_from_file("imdb")
# +
# we need a sample of 2000 reviews for training
num_recs = 2000
train_small = pd.read_pickle("train_2k.df")
# we don't need the snorkel column
train_small = train_small.drop(columns=['snorkel'])
# remove markup
cleaned_reviews = train_small.review.apply(lambda x: BeautifulSoup(x, 'html.parser').text)
# convert the pandas DF into a tf.data.Dataset
train = tf.data.Dataset.from_tensor_slices((cleaned_reviews.values,
train_small.sentiment.values))
# +
# transformation functions to be used with the dataset
def encode_pad_transform(sample):
encoded = imdb_encoder.encode(sample.numpy())
pad = sequence.pad_sequences([encoded], padding='post',
maxlen=150)
return np.array(pad[0], dtype=np.int64)
def encode_tf_fn(sample, label):
encoded = tf.py_function(encode_pad_transform,
inp=[sample],
Tout=(tf.int64))
encoded.set_shape([None])
label.set_shape([])
return encoded, label
encoded_train = train.map(encode_tf_fn,
num_parallel_calls=tf.data.experimental.AUTOTUNE)
# -
# # Test Data Vectorization
# remove markup
cleaned_test_reviews = test_df.review.apply(lambda x: BeautifulSoup(x, 'html.parser').text)
# convert the pandas DF into a tf.data.Dataset
test = tf.data.Dataset.from_tensor_slices((cleaned_test_reviews.values,
test_df.sentiment.values))
encoded_test = test.map(encode_tf_fn,
num_parallel_calls=tf.data.experimental.AUTOTUNE)
# # Baseline Model
# +
# Length of the vocabulary
vocab_size = imdb_encoder.vocab_size
# Number of RNN units
rnn_units = 64
# Embedding size
embedding_dim = 64
#batch size
BATCH_SIZE=100
dropout=0.3
def build_model_bilstm(vocab_size, embedding_dim, rnn_units, batch_size, dropout=0.):
model = tf.keras.Sequential([
Embedding(vocab_size, embedding_dim, mask_zero=True,
batch_input_shape=[batch_size, None]),
Bidirectional(LSTM(rnn_units, return_sequences=True,
dropout=dropout)),
Dropout(dropout),
Bidirectional(tf.keras.layers.LSTM(rnn_units, dropout=dropout)),
Dropout(dropout),
Dense(1, activation='sigmoid')
])
return model
# +
bilstm = build_model_bilstm(vocab_size = vocab_size,
embedding_dim=embedding_dim,
rnn_units=rnn_units, batch_size=BATCH_SIZE,
dropout=dropout)
bilstm.compile(loss='binary_crossentropy',
optimizer='adam',
metrics=['accuracy', 'Precision', 'Recall'])
encoded_train_batched = encoded_train.shuffle(num_recs, seed=42).\
batch(BATCH_SIZE)
bilstm.fit(encoded_train_batched, epochs=15)
bilstm.evaluate(encoded_test.batch(BATCH_SIZE))
print("BASELINE TRAINED")
# -
# # With Snorkel Labeled Data
# +
# labelled version of training data split
p1 = pd.read_pickle("snorkel_train_labeled.df")
p2 = pd.read_pickle("snorkel-unsup-nbs-v2.df")
p2 = p2.drop(columns=['snorkel']) # so that everything aligns
# now concatenate the three DFs
p2 = pd.concat([train_small, p1, p2]) # training plus snorkel labelled data
print("showing hist of additional data")
# now balance the labels
pos = p2[p2.sentiment == 1]
neg = p2[p2.sentiment == 0]
recs = min(pos.shape[0], neg.shape[0])
pos = pos.sample(n=recs, random_state=42)
neg = neg.sample(n=recs, random_state=42)
p3 = pd.concat((pos,neg))
p3.sentiment.hist()
# -
p3.info()
# +
# remove markup
cleaned_unsup_reviews = p3.review.apply(lambda x: BeautifulSoup(x, 'html.parser').text)
snorkel_reviews = pd.concat((cleaned_reviews, cleaned_unsup_reviews))
snorkel_labels = pd.concat((train_small.sentiment, p3.sentiment))
# convert the pandas DF into a tf.data.Dataset
snorkel_train = tf.data.Dataset.from_tensor_slices((snorkel_reviews.values,
snorkel_labels.values))
encoded_snorkel_train = snorkel_train.map(encode_tf_fn,
num_parallel_calls=tf.data.experimental.AUTOTUNE)
# +
# Length of the vocabulary
vocab_size = imdb_encoder.vocab_size
# Number of RNN units
rnn_units = 64
# Embedding size
embedding_dim = 64
#batch size
BATCH_SIZE = 100
dropout = 0.3
bilstm2 = build_model_bilstm(
vocab_size = vocab_size,
embedding_dim=embedding_dim,
rnn_units=rnn_units,
batch_size=BATCH_SIZE,
dropout=dropout)
bilstm2.summary()
bilstm2.compile(loss='binary_crossentropy',
optimizer='adam',
metrics=['accuracy', 'Precision', 'Recall'])
shuffle_size = snorkel_reviews.shape[0] // BATCH_SIZE * BATCH_SIZE
encoded_snorkel_batched = encoded_snorkel_train.shuffle(buffer_size=shuffle_size,
seed=42).batch(BATCH_SIZE,
drop_remainder=True)
bilstm2.fit(encoded_snorkel_batched, epochs=20)
print("Checking on Test Set:")
bilstm2.evaluate(encoded_test.batch(BATCH_SIZE))
# -
#
|
chapter-8-weak-supervision-snorkel/imdb-with-snorkel-labels.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/LuisCastillo920809/daa_2021_1/blob/master/28septiembre.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + [markdown] id="CFcpnNfQCsxo"
# # Section 1
# + [markdown] id="3w902LA8C2mP"
# In this file we will learn to program in Python using Google's Colab tool.
# We will also learn how to save it to our GitHub repository.
#
# + [markdown] id="bw7lrdXyEFSZ"
# **Hello**
#
# _italic_
#
# `
# edad=10
# print(edad)
#
# `
# + id="vslpxemdGdMh" outputId="389b3982-cbdd-44e5-8bca-7e42cee32757" colab={"base_uri": "https://localhost:8080/", "height": 34}
frutas = []
frutas.append('apple')
frutas.append('pineapple')
frutas.append('kiwi')
print(frutas)
# + id="KJAzDLgcHspg"
archivo = open('prueba_diseño_analisis_algoritmos.txt','wt')
archivo.write("Hello world Jupyter")
archivo.close()
|
28septiembre.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## Load libraries
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
# !pip install imbalanced-learn
# ## Load data
from sklearn.datasets import make_classification
# +
X, y = make_classification(
n_samples=10000,
n_features=2,
n_redundant=0,
n_clusters_per_class=1,
weights=[0.99],
flip_y=0,
random_state=1)
X[:5]
# -
y[:5]
df = pd.DataFrame(data=X)
df['y'] = y
df.head()
df.y.value_counts()
# +
df0 = df[df.y == 0]
df1 = df[df.y == 1]
plt.scatter(df0[0], df0[1], color='blue')
plt.scatter(df1[0], df1[1], color='orange')
# -
# ## Up sampling
from sklearn.utils import resample
# +
df0 = df[df.y == 0]
df1 = df[df.y == 1]
df1_up = resample(
df1,
replace=True,
n_samples=len(df0),
random_state=0
)
df_up = pd.concat([df0, df1_up])
df_up.y.value_counts()
# +
df0 = df[df.y == 0]
df1 = df[df.y == 1]
plt.scatter(df0[0], df0[1], color='blue')
plt.scatter(df1[0], df1[1], color='orange')
# -
# ## SMOTE
from imblearn.over_sampling import SMOTE
smote = SMOTE()
X_up, y_up = smote.fit_resample(df.drop('y', axis=1), df.y)
df_up = pd.DataFrame(data=X_up)
df_up['y'] = y_up
df_up.y.value_counts()
# +
df0 = df_up[df_up.y == 0]
df1 = df_up[df_up.y == 1]
plt.scatter(df0[0], df0[1], color='blue')
plt.scatter(df1[0], df1[1], color='orange')
# -
# ## Down sampling
# +
df0 = df[df.y == 0]
df1 = df[df.y == 1]
df0_down = resample(
df0,
replace=False,
n_samples=len(df1),
random_state=0
)
df_down = pd.concat([df0_down, df1])
df_down.y.value_counts()
# +
df0 = df_down[df_down.y == 0]
df1 = df_down[df_down.y == 1]
plt.scatter(df0[0], df0[1], color='blue')
plt.scatter(df1[0], df1[1], color='orange')
# -
# ## Near Miss
from imblearn.under_sampling import NearMiss
nm = NearMiss(n_neighbors=5)
X_down, y_down = nm.fit_resample(df.drop('y', axis=1), df.y)
df_down = pd.DataFrame(data=X_down)
df_down['y'] = y_down
df_down.y.value_counts()
# +
df0 = df_down[df_down.y == 0]
df1 = df_down[df_down.y == 1]
plt.scatter(df0[0], df0[1], color='blue')
plt.scatter(df1[0], df1[1], color='orange')
plt.ylim((-0.81, 3.27))
plt.xlim((-3.47, 4.82))
# +
df0 = df[df.y == 0]
df1 = df[df.y == 1]
plt.scatter(df0[0], df0[1], color='blue')
plt.scatter(df1[0], df1[1], color='orange')
# -
# ## ClusterCentroids
from imblearn.under_sampling import ClusterCentroids
# +
X = df.drop('y', axis=1)
y = df.y
_object = ClusterCentroids()
_X, _y = _object.fit_resample(X, y)
# Plot
X0 = _X[_y == 0]
X1 = _X[_y == 1]
plt.scatter(X0[0], X0[1], color='blue')
plt.scatter(X1[0], X1[1], color='orange')
plt.ylim((-0.81, 3.27))
plt.xlim((-3.47, 4.82))
plt.title('{:s} ({:d}/{:d})'.format(str(_object), len(X0), len(X1)))
plt.show()
# -
# ## Combining over & under sampling
from imblearn.pipeline import Pipeline
from imblearn.under_sampling import RandomUnderSampler
from imblearn.over_sampling import SVMSMOTE
# +
pipeline = Pipeline(steps=[
    ('o', SVMSMOTE(sampling_strategy=0.1)),  # over-sample the minority class to 10% of the majority
    ('u', RandomUnderSampler())  # then under-sample the majority down to the minority size
])
_X, _y = pipeline.fit_resample(X,y)
# Plot
X0 = _X[_y == 0]
X1 = _X[_y == 1]
plt.scatter(X0[0], X0[1], color='blue')
plt.scatter(X1[0], X1[1], color='orange')
plt.ylim((-0.81, 3.27))
plt.xlim((-3.47, 4.82))
plt.title('{:s} ({:d}/{:d})'.format(str(pipeline), len(X0), len(X1)))
plt.show()
|
docs/!ml/notebooks/Under-over sampling.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python (moneyball)
# language: python
# name: moneyball
# ---
# # NLP Basics: Implementing a pipeline to clean text
# ### Pre-processing text data
#
# Cleaning up the text data is necessary to highlight attributes that you're going to want your machine learning system to pick up on. Cleaning (or pre-processing) the data typically consists of a number of steps:
# 1. **Remove punctuation**
# 2. **Tokenization**
# 3. **Remove stopwords**
# 4. Lemmatize/Stem
#
# The first three steps are covered in this chapter as they're implemented in pretty much any text cleaning pipeline. Lemmatizing and stemming are covered in the next chapter as they're helpful but not critical.
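# The three steps above can be chained into a single function. A minimal sketch, using a tiny hard-coded stopword set so it runs without any NLTK downloads:

```python
import re
import string

STOPWORDS = {'i', 'the', 'a', 'is', 'and'}  # tiny stand-in for nltk's English stopword list

def clean_text(text):
    # 1. remove punctuation
    no_punct = ''.join(ch for ch in text if ch not in string.punctuation)
    # 2. tokenize on runs of non-word characters, lowercased
    tokens = re.split(r'\W+', no_punct.lower())
    # 3. drop stopwords (and empty strings left by the split)
    return [t for t in tokens if t and t not in STOPWORDS]

print(clean_text("I like NLP, and the course is great!"))  # ['like', 'nlp', 'course', 'great']
```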
# +
import pandas as pd
pd.set_option('display.max_colwidth', 100)
data = pd.read_csv("SMSSpamCollection.tsv", sep='\t', header=None)
data.columns = ['label', 'body_text']
data.head()
# -
# What does the cleaned version look like?
data_cleaned = pd.read_csv("SMSSpamCollection_cleaned.tsv", sep='\t')
data_cleaned.head()
# ### Remove punctuation
import string
string.punctuation
"I like NLP." == "I like NLP"
# +
def remove_punct(text):
text_nopunct = "".join([char for char in text if char not in string.punctuation])
return text_nopunct
data['body_text_clean'] = data['body_text'].apply(lambda x: remove_punct(x))
data.head()
# -
# ### Tokenization
# +
import re
def tokenize(text):
    tokens = re.split(r'\W+', text)  # split on any run of non-word characters
    return tokens
data['body_text_tokenized'] = data['body_text_clean'].apply(lambda x: tokenize(x.lower()))
data.head()
# -
'NLP' == 'nlp'
# ### Remove stopwords
# +
import nltk
stopword = nltk.corpus.stopwords.words('english')
# +
def remove_stopwords(tokenized_list):
text = [word for word in tokenized_list if word not in stopword]
return text
data['body_text_nostop'] = data['body_text_tokenized'].apply(lambda x: remove_stopwords(x))
data.head()
# -
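# The three steps above can be chained into a single cleaning function. A minimal
# sketch, using a small hand-picked stopword set instead of nltk's full English
# list so it runs standalone (the example sentence and stopwords are made up):

```python
import re
import string

# Small illustrative stopword set; the notebook uses nltk's full English list.
STOPWORDS = {'i', 'a', 'the', 'is', 'in', 'it', 'to', 'and'}

def clean_text(text):
    """Remove punctuation, lowercase, tokenize, and drop stopwords."""
    no_punct = "".join(ch for ch in text if ch not in string.punctuation)
    tokens = re.split(r'\W+', no_punct.lower())
    return [tok for tok in tokens if tok and tok not in STOPWORDS]

print(clean_text("I like NLP, and it is fun!"))  # -> ['like', 'nlp', 'fun']
```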
|
nlp/Ex_Files_NLP_Python_ML_EssT/Exercise Files/Ch01/01_11/End/01_11.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Linear Regression
# %matplotlib inline
import matplotlib.pyplot as plt
import pandas as pd
import numpy as np
df = pd.read_csv('../data/weight-height.csv')
df.head()
df.plot(kind='scatter',
x='Height',
y='Weight',
title='Weight and Height in adults')
# +
df.plot(kind='scatter',
x='Height',
y='Weight',
title='Weight and Height in adults')
# Here we're plotting the red line 'by hand' with fixed values
# We'll try to learn this line with an algorithm below
plt.plot([55, 78], [75, 250], color='red', linewidth=3)
# -
def line(x, w=0, b=0):
return x * w + b
x = np.linspace(55, 80, 100)
x
yhat = line(x, w=0, b=0)
yhat
df.plot(kind='scatter',
x='Height',
y='Weight',
title='Weight and Height in adults')
plt.plot(x, yhat, color='red', linewidth=3)
# ### Cost Function
def mean_squared_error(y_true, y_pred):
s = (y_true - y_pred)**2
return s.mean()
X = df[['Height']].values
y_true = df['Weight'].values
y_true
y_pred = line(X)
y_pred
mean_squared_error(y_true, y_pred.ravel())
# ### you do it!
#
# Try changing the values of the parameters b and w in the line above and plot it again to see how the plot and the cost change.
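# A sketch of that experiment, repeating the `line` and `mean_squared_error`
# helpers from above on synthetic height/weight data (the slope 7.7 and
# intercept -350 are made-up stand-ins for the CSV) so it runs standalone:

```python
import numpy as np

def line(x, w=0, b=0):
    return x * w + b

def mean_squared_error(y_true, y_pred):
    return ((y_true - y_pred) ** 2).mean()

# Synthetic stand-in for the dataset: weight ~ 7.7 * height - 350, plus noise.
rng = np.random.default_rng(0)
heights = rng.uniform(55, 80, size=200)
weights = 7.7 * heights - 350 + rng.normal(0, 10, size=200)

# Cost for a few (w, b) candidates; the closer the line, the lower the MSE.
for w, b in [(0, 0), (2, 0), (7.7, -350)]:
    mse = mean_squared_error(weights, line(heights, w=w, b=b))
    print(f"w={w}, b={b}: MSE={mse:.1f}")
```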
# +
plt.figure(figsize=(10, 5))
# we are going to draw 2 plots in the same figure
# first plot, data and a few lines
ax1 = plt.subplot(121)
df.plot(kind='scatter',
x='Height',
y='Weight',
title='Weight and Height in adults', ax=ax1)
# let's explore the cost function for a few values of b between -100 and +150
bbs = np.array([-100, -50, 0, 50, 100, 150])
mses = [] # we will append the values of the cost here, for each line
for b in bbs:
y_pred = line(X, w=2, b=b)
mse = mean_squared_error(y_true, y_pred)
mses.append(mse)
plt.plot(X, y_pred)
# second plot: Cost function
ax2 = plt.subplot(122)
plt.plot(bbs, mses, 'o-')
plt.title('Cost as a function of b')
plt.xlabel('b');
# -
# ## Linear Regression with Keras
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
from tensorflow.keras.optimizers import Adam, SGD
model = Sequential()
model.add(Dense(1, input_shape=(1,)))
model.summary()
model.compile(Adam(learning_rate=0.8), 'mean_squared_error')
model.fit(X, y_true, epochs=40)
y_pred = model.predict(X)
df.plot(kind='scatter',
x='Height',
y='Weight',
title='Weight and Height in adults')
plt.plot(X, y_pred, color='red')
W, B = model.get_weights()
W
B
# ## Evaluating Model Performance
from sklearn.metrics import r2_score
print("The R2 score is {:0.3f}".format(r2_score(y_true, y_pred)))
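# R2 can also be computed directly from its definition, 1 - SS_res / SS_tot,
# which is what `r2_score` does under the hood — a small self-contained check:

```python
import numpy as np

def r2(y_true, y_pred):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    return 1 - ss_res / ss_tot

y_obs = np.array([1.0, 2.0, 3.0, 4.0])
print(r2(y_obs, y_obs))                          # perfect predictions score 1.0
print(r2(y_obs, np.full(4, y_obs.mean())))       # predicting the mean scores 0.0
```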
# ### Train Test Split
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y_true,
test_size=0.2)
len(X_train)
len(X_test)
W[0, 0] = 0.0
B[0] = 0.0
model.set_weights((W, B))
model.fit(X_train, y_train, epochs=50, verbose=0)
y_train_pred = model.predict(X_train).ravel()
y_test_pred = model.predict(X_test).ravel()
from sklearn.metrics import mean_squared_error as mse
print("The Mean Squared Error on the Train set is:\t{:0.1f}".format(mse(y_train, y_train_pred)))
print("The Mean Squared Error on the Test set is:\t{:0.1f}".format(mse(y_test, y_test_pred)))
print("The R2 score on the Train set is:\t{:0.3f}".format(r2_score(y_train, y_train_pred)))
print("The R2 score on the Test set is:\t{:0.3f}".format(r2_score(y_test, y_test_pred)))
# # Classification
df = pd.read_csv('../data/user_visit_duration.csv')
df.head()
df.plot(kind='scatter', x='Time (min)', y='Buy');
model = Sequential()
model.add(Dense(1, input_shape=(1,), activation='sigmoid'))
model.compile(SGD(learning_rate=0.5), 'binary_crossentropy', metrics=['accuracy'])
model.summary()
# +
X = df[['Time (min)']].values
y = df['Buy'].values
model.fit(X, y, epochs=25)
# +
ax = df.plot(kind='scatter', x='Time (min)', y='Buy',
title='Purchase behavior VS time spent on site')
temp = np.linspace(0, 4)
ax.plot(temp, model.predict(temp), color='orange')
plt.legend(['model', 'data'])
# -
temp_class = model.predict(temp) > 0.5
# +
ax = df.plot(kind='scatter', x='Time (min)', y='Buy',
title='Purchase behavior VS time spent on site')
temp = np.linspace(0, 4)
ax.plot(temp, temp_class, color='orange')
plt.legend(['model', 'data'])
# -
y_pred = model.predict(X)
y_class_pred = y_pred > 0.5
from sklearn.metrics import accuracy_score
print("The accuracy score is {:0.3f}".format(accuracy_score(y, y_class_pred)))
# ### Train/Test split
#
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)
params = model.get_weights()
params = [np.zeros(w.shape) for w in params]
model.set_weights(params)
print("The accuracy score is {:0.3f}".format(accuracy_score(y, model.predict(X) > 0.5)))
model.fit(X_train, y_train, epochs=25, verbose=0)
print("The train accuracy score is {:0.3f}".format(accuracy_score(y_train, model.predict(X_train) > 0.5)))
print("The test accuracy score is {:0.3f}".format(accuracy_score(y_test, model.predict(X_test) > 0.5)))
# ## Cross Validation
from tensorflow.keras.wrappers.scikit_learn import KerasClassifier
def build_logistic_regression_model():
model = Sequential()
model.add(Dense(1, input_shape=(1,), activation='sigmoid'))
model.compile(SGD(learning_rate=0.5),
'binary_crossentropy',
metrics=['accuracy'])
return model
model = KerasClassifier(build_fn=build_logistic_regression_model,
epochs=25,
verbose=0)
from sklearn.model_selection import cross_val_score, KFold
cv = KFold(3, shuffle=True)
scores = cross_val_score(model, X, y, cv=cv)
scores
print("The cross validation accuracy is {:0.4f} ± {:0.4f}".format(scores.mean(), scores.std()))
# ## Confusion Matrix
from sklearn.metrics import confusion_matrix
confusion_matrix(y, y_class_pred)
def pretty_confusion_matrix(y_true, y_pred, labels=["False", "True"]):
cm = confusion_matrix(y_true, y_pred)
pred_labels = ['Predicted '+ l for l in labels]
df = pd.DataFrame(cm, index=labels, columns=pred_labels)
return df
pretty_confusion_matrix(y, y_class_pred, ['Not Buy', 'Buy'])
from sklearn.metrics import precision_score, recall_score, f1_score
print("Precision:\t{:0.3f}".format(precision_score(y, y_class_pred)))
print("Recall: \t{:0.3f}".format(recall_score(y, y_class_pred)))
print("F1 Score:\t{:0.3f}".format(f1_score(y, y_class_pred)))
from sklearn.metrics import classification_report
print(classification_report(y, y_class_pred))
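# These scores follow directly from the confusion-matrix counts; a sketch
# computing them by hand (the counts below are hypothetical, not taken from
# the model above):

```python
def prf_from_counts(tp, fp, fn):
    """Precision, recall and F1 straight from confusion-matrix counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

# Hypothetical counts: 8 true positives, 2 false positives, 4 false negatives.
p, r, f = prf_from_counts(tp=8, fp=2, fn=4)
print(f"Precision: {p:.3f}  Recall: {r:.3f}  F1: {f:.3f}")
```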
# ## Feature Preprocessing
# ### Categorical Features
df = pd.read_csv('../data/weight-height.csv')
df.head()
df['Gender'].unique()
pd.get_dummies(df['Gender'], prefix='Gender').head()
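# The dummy columns then need to be joined back onto the numeric features,
# e.g. with `pd.concat` — a sketch on a tiny made-up frame (not the full
# weight-height dataset):

```python
import pandas as pd

# Tiny made-up frame standing in for the full dataset.
demo = pd.DataFrame({'Gender': ['Male', 'Female', 'Male'],
                     'Height': [68.0, 63.5, 70.2]})

dummies = pd.get_dummies(demo['Gender'], prefix='Gender')
features = pd.concat([demo[['Height']], dummies], axis=1)
print(features.columns.tolist())
```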
# ## Feature Transformations
# #### 1) Rescale with fixed factor
df['Height (feet)'] = df['Height']/12.0
df['Weight (100 lbs)'] = df['Weight']/100.0
df.describe().round(2)
# #### MinMax normalization
# +
from sklearn.preprocessing import MinMaxScaler
mms = MinMaxScaler()
df['Weight_mms'] = mms.fit_transform(df[['Weight']])
df['Height_mms'] = mms.fit_transform(df[['Height']])
df.describe().round(2)
# -
# #### 3) Standard normalization
# +
from sklearn.preprocessing import StandardScaler
ss = StandardScaler()
df['Weight_ss'] = ss.fit_transform(df[['Weight']])
df['Height_ss'] = ss.fit_transform(df[['Height']])
df.describe().round(2)
# +
plt.figure(figsize=(15, 5))
for i, feature in enumerate(['Height', 'Height (feet)', 'Height_mms', 'Height_ss']):
plt.subplot(1, 4, i+1)
df[feature].plot(kind='hist', title=feature)
plt.xlabel(feature);
# -
# # Machine Learning Exercises
# ## Exercise 1
#
# You've just been hired at a real estate investment firm and they would like you to build a model for pricing houses. You are given a dataset that contains data for house prices and a few features like number of bedrooms, size in square feet and age of the house. Let's see if you can build a model that is able to predict the price. In this exercise we extend what we have learned about linear regression to a dataset with more than one feature. Here are the steps to complete it:
#
# 1. Load the dataset ../data/housing-data.csv
# - plot the histograms for each feature
# - create 2 variables called X and y: X shall be a matrix with 3 columns (sqft,bdrms,age) and y shall be a vector with 1 column (price)
# - create a linear regression model in Keras with the appropriate number of inputs and output
# - split the data into train and test with a 20% test size
# - train the model on the training set and check its accuracy on training and test set
# - how's your model doing? Is the loss growing smaller?
# - try to improve your model with these experiments:
# - normalize the input features with one of the rescaling techniques mentioned above
# - use a different value for the learning rate of your model
# - use a different optimizer
# - once you're satisfied with training, check the R2 score on the test set
# ## Exercise 2
#
# Your boss was extremely happy with your work on the housing price prediction model and decided to entrust you with a more challenging task. They've seen a lot of people leave the company recently and they would like to understand why that's happening. They have collected historical data on employees and they would like you to build a model that is able to predict which employee will leave next. They would like a model that is better than random guessing. They also prefer false negatives than false positives, in this first phase. Fields in the dataset include:
#
# - Employee satisfaction level
# - Last evaluation
# - Number of projects
# - Average monthly hours
# - Time spent at the company
# - Whether they have had a work accident
# - Whether they have had a promotion in the last 5 years
# - Department
# - Salary
# - Whether the employee has left
#
# Your goal is to predict the binary outcome variable `left` using the rest of the data. Since the outcome is binary, this is a classification problem. Here are some things you may want to try out:
#
# 1. load the dataset at ../data/HR_comma_sep.csv, inspect it with `.head()`, `.info()` and `.describe()`.
# - Establish a benchmark: what would be your accuracy score if you predicted that everyone stays?
# - Check if any feature needs rescaling. You may plot a histogram of the feature to decide which rescaling method is more appropriate.
# - convert the categorical features into binary dummy columns. You will then have to combine them with the numerical features using `pd.concat`.
# - do the usual train/test split with a 20% test size
# - play around with learning rate and optimizer
# - check the confusion matrix, precision and recall
# - check if you still get the same results if you use a 5-Fold cross validation on all the data
# - Is the model good enough for your boss?
#
# As you will see in this exercise, a logistic regression model is not good enough to help your boss. In the next chapter we will learn how to go beyond linear models.
#
# This dataset comes from https://www.kaggle.com/ludobenistant/hr-analytics/ and is released under [CC BY-SA 4.0 License](https://creativecommons.org/licenses/by-sa/4.0/).
|
course/3 Machine Learning.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="-MYot7DJh9kk"
# See code at https://github.com/google-research/vision_transformer/
#
# See paper at https://arxiv.org/abs/2010.11929
#
# This Colab allows you to run the [JAX](https://jax.readthedocs.org) implementation of the Vision Transformer.
# + [markdown] id="sXhZm0kpPpH6"
# ##### Copyright 2020 Google LLC.
# + id="KfmzfvFxPuk7" cellView="both"
#@title Licensed under the Apache License, Version 2.0 (the "License");
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# + [markdown] id="iOVCm4CnP1Do"
# <a href="https://colab.research.google.com/github/google-research/vision_transformer/blob/master/vit_jax.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + [markdown] id="cyD76dm5JaeW"
# ### Setup
#
# Needs to be executed once in every VM.
#
# The cell below downloads the code from Github and install necessary dependencies.
# + id="zZvI8OXt78sj" cellView="form" outputId="3244d805-9cb6-4c91-aa9e-a5bd3d4518da" colab={"base_uri": "https://localhost:8080/"}
#@markdown Select whether you would like to store data in your personal drive.
#@markdown
#@markdown If you select **yes**, you will need to authorize Colab to access
#@markdown your personal drive
#@markdown
#@markdown If you select **no**, then any changes you make will diappear when
#@markdown this Colab's VM restarts after some time of inactivity...
use_gdrive = 'no' #@param ["yes", "no"]
if use_gdrive == 'yes':
from google.colab import drive
drive.mount('/gdrive')
root = '/gdrive/My Drive/vision_transformer_colab'
import os
if not os.path.isdir(root):
os.mkdir(root)
os.chdir(root)
print(f'\nChanged CWD to "{root}"')
else:
from IPython import display
display.display(display.HTML(
'<h1 style="color:red">CHANGES NOT PERSISTED</h1>'))
# + id="GeEy6gN71CDa" outputId="5a879ed4-0a2b-4e10-c6c3-7e45c8a4d1d7" colab={"base_uri": "https://localhost:8080/"}
# Clone repository and pull latest changes.
![ -d vision_transformer ] || git clone --depth=1 https://github.com/google-research/vision_transformer
# !cd vision_transformer && git pull
# + id="sCN4d-GQJdU4"
# !pip install -qr vision_transformer/vit_jax/requirements.txt
# + [markdown] id="bcLBTSXuNjK6"
# ### Imports
# + id="BcnlJF7FKTfD" outputId="e5f20679-444e-4b64-e0f2-cb3077639422" colab={"base_uri": "https://localhost:8080/"}
# Shows all available pre-trained models.
# !gsutil ls -lh gs://vit_models/*
# + id="6ztOhq_fzZyO"
# Download a pre-trained model.
model = 'ViT-B_16'
![ -e "$model".npz ] || gsutil cp gs://vit_models/imagenet21k/"$model".npz .
# + id="4EzOChfJeVrU" cellView="form" outputId="3390bcfc-1b71-4ba3-e66c-ec313c6b3ea1" colab={"base_uri": "https://localhost:8080/"}
#@markdown TPU setup : Boilerplate for connecting JAX to TPU.
import os
if 'google.colab' in str(get_ipython()) and 'COLAB_TPU_ADDR' in os.environ:
# Make sure the Colab Runtime is set to Accelerator: TPU.
import requests
if 'TPU_DRIVER_MODE' not in globals():
url = 'http://' + os.environ['COLAB_TPU_ADDR'].split(':')[0] + ':8475/requestversion/tpu_driver0.1-dev20191206'
resp = requests.post(url)
TPU_DRIVER_MODE = 1
# The following is required to use TPU Driver as JAX's backend.
from jax.config import config
config.FLAGS.jax_xla_backend = "tpu_driver"
config.FLAGS.jax_backend_target = "grpc://" + os.environ['COLAB_TPU_ADDR']
print('Registered TPU:', config.FLAGS.jax_backend_target)
else:
print('No TPU detected. Can be changed under "Runtime/Change runtime type".')
# + id="igqZ6qYNeHWo" outputId="5cc488f1-553b-4054-9ad9-0069d34c21ac" colab={"base_uri": "https://localhost:8080/"}
import flax
import jax
from matplotlib import pyplot as plt
import numpy as np
import tqdm
# Shows the number of available devices.
# In a CPU/GPU runtime this will be a single device.
# In a TPU runtime this will be 8 cores.
jax.local_devices()
# + id="9TuMn31fNj0T" outputId="d458ac73-c110-47d7-cb51-1ba82ebf2c82" colab={"base_uri": "https://localhost:8080/", "height": 17}
# Open some code files in a split editor on the right.
# You can open more files in the file tab on the left.
from google.colab import files
files.view('vision_transformer/vit_jax/checkpoint.py')
files.view('vision_transformer/vit_jax/input_pipeline.py')
files.view('vision_transformer/vit_jax/models.py')
files.view('vision_transformer/vit_jax/momentum_clip.py')
files.view('vision_transformer/vit_jax/train.py')
files.view('vision_transformer/vit_jax/hyper.py')
# + id="sjN0_b-YbaHu"
# Import files from repository.
# Updating the files in the editor on the right will immediately update the
# modules by re-importing them.
import sys
if './vision_transformer' not in sys.path:
sys.path.append('./vision_transformer')
# %load_ext autoreload
# %autoreload 2
from vit_jax import checkpoint
from vit_jax import hyper
from vit_jax import input_pipeline
from vit_jax import logging
from vit_jax import models
from vit_jax import momentum_clip
from vit_jax import train
logger = logging.setup_logger('./logs')
# + id="GojydzsXgknd"
# Helper functions for images.
labelnames = dict(
# https://www.cs.toronto.edu/~kriz/cifar.html
cifar10=('airplane', 'automobile', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck'),
# https://www.cs.toronto.edu/~kriz/cifar.html
cifar100=('apple', 'aquarium_fish', 'baby', 'bear', 'beaver', 'bed', 'bee', 'beetle', 'bicycle', 'bottle', 'bowl', 'boy', 'bridge', 'bus', 'butterfly', 'camel', 'can', 'castle', 'caterpillar', 'cattle', 'chair', 'chimpanzee', 'clock', 'cloud', 'cockroach', 'couch', 'crab', 'crocodile', 'cup', 'dinosaur', 'dolphin', 'elephant', 'flatfish', 'forest', 'fox', 'girl', 'hamster', 'house', 'kangaroo', 'computer_keyboard', 'lamp', 'lawn_mower', 'leopard', 'lion', 'lizard', 'lobster', 'man', 'maple_tree', 'motorcycle', 'mountain', 'mouse', 'mushroom', 'oak_tree', 'orange', 'orchid', 'otter', 'palm_tree', 'pear', 'pickup_truck', 'pine_tree', 'plain', 'plate', 'poppy', 'porcupine', 'possum', 'rabbit', 'raccoon', 'ray', 'road', 'rocket', 'rose', 'sea', 'seal', 'shark', 'shrew', 'skunk', 'skyscraper', 'snail', 'snake', 'spider', 'squirrel', 'streetcar', 'sunflower', 'sweet_pepper', 'table', 'tank', 'telephone', 'television', 'tiger', 'tractor', 'train', 'trout', 'tulip', 'turtle', 'wardrobe', 'whale', 'willow_tree', 'wolf', 'woman', 'worm')
)
def make_label_getter(dataset):
"""Returns a function converting label indices to names."""
def getter(label):
if dataset in labelnames:
return labelnames[dataset][label]
return f'label={label}'
return getter
def show_img(img, ax=None, title=None):
"""Shows a single image."""
if ax is None:
ax = plt.gca()
ax.imshow(img[...])
ax.set_xticks([])
ax.set_yticks([])
if title:
ax.set_title(title)
def show_img_grid(imgs, titles):
"""Shows a grid of images."""
n = int(np.ceil(len(imgs)**.5))
_, axs = plt.subplots(n, n, figsize=(3 * n, 3 * n))
for i, (img, title) in enumerate(zip(imgs, titles)):
img = (img + 1) / 2 # Denormalize
show_img(img, axs[i // n][i % n], title)
# + [markdown] id="QZfK1vIIMmFz"
# ### Load dataset
# + id="TSAVpYtP5VaE"
dataset = 'cifar10'
batch_size = 512 # Reduce to 256 if running on a single GPU.
# + id="ruzdzpsMNhGm" outputId="4c13e7b1-e6f4-4f24-efa8-9f1f9ae72160" colab={"base_uri": "https://localhost:8080/"}
# Note the datasets are configured in input_pipeline.DATASET_PRESETS
# Have a look in the editor at the right.
num_classes = input_pipeline.get_dataset_info(dataset, 'train')['num_classes']
# tf.data.Dataset for training, infinite repeats.
ds_train = input_pipeline.get_data(
dataset=dataset, mode='train', repeats=None, batch_size=batch_size,
)
# tf.data.Dataset for evaluation, single repeat.
ds_test = input_pipeline.get_data(
dataset=dataset, mode='test', repeats=1, batch_size=batch_size,
)
# + id="7c-LfxOJdj8_" outputId="9f91069e-8231-4cc3-bccd-a708b5743048" colab={"base_uri": "https://localhost:8080/"}
# Fetch a batch of test images for illustration purposes.
batch = next(iter(ds_test.as_numpy_iterator()))
# Note the shape : [num_local_devices, local_batch_size, h, w, c]
batch['image'].shape
# + id="rL0jQRBCgeJA" outputId="8f30c5fd-8e82-4a93-e6ef-01d920693344" colab={"base_uri": "https://localhost:8080/", "height": 540}
# Show some images with their labels.
images, labels = batch['image'][0][:9], batch['label'][0][:9]
titles = map(make_label_getter(dataset), labels.argmax(axis=1))
show_img_grid(images, titles)
# + id="jFqi3h7yMEsB" outputId="94b5f12b-e2b9-456e-f51e-e5618a976b27" colab={"base_uri": "https://localhost:8080/", "height": 540}
# Same as above, but with train images.
# Do you spot a difference?
# Check out input_pipeline.get_data() in the editor at your right to see how the
# images are preprocessed differently.
batch = next(iter(ds_train.as_numpy_iterator()))
images, labels = batch['image'][0][:9], batch['label'][0][:9]
titles = map(make_label_getter(dataset), labels.argmax(axis=1))
show_img_grid(images, titles)
# + [markdown] id="ehzbRTSN20E5"
# ### Load pre-trained
# + id="DMKr-4nK3DlT"
# Load model definition & initialize random parameters.
VisionTransformer = models.KNOWN_MODELS[model].partial(num_classes=num_classes)
_, params = VisionTransformer.init_by_shape(
jax.random.PRNGKey(0),
# Discard the "num_local_devices" dimension of the batch for initialization.
[(batch['image'].shape[1:], batch['image'].dtype.name)])
# + id="zIXjOEDkvAWM" outputId="3a3d3ee5-1789-4dd7-a770-e4f63ed48a68" colab={"base_uri": "https://localhost:8080/"}
# Load and convert pretrained checkpoint.
# This involves loading the actual pre-trained model results, but then also
# modifying the parameters a bit, e.g. changing the final layers, and resizing
# the positional embeddings.
# For details, refer to the code and to the methods of the paper.
params = checkpoint.load_pretrained(
pretrained_path=f'{model}.npz',
init_params=params,
model_config=models.CONFIGS[model],
logger=logger,
)
# + [markdown] id="aQVKzhaR8o-J"
# ### Evaluate
# + id="WB6ywRTY-LOa" outputId="407d8e33-1099-4158-80d1-ceb08ffb1e50" colab={"base_uri": "https://localhost:8080/"}
# So far, all our data is in the host memory. Let's now replicate the arrays
# into the devices.
# This will make every array in the pytree params become a ShardedDeviceArray
# that has the same data replicated across all local devices.
# For TPU it replicates the params in every core.
# For a single GPU this simply moves the data onto the device.
# For CPU it simply creates a copy.
params_repl = flax.jax_utils.replicate(params)
print('params.cls:', type(params['cls']).__name__, params['cls'].shape)
print('params_repl.cls:', type(params_repl['cls']).__name__, params_repl['cls'].shape)
# + id="_unNxEZAK0Cu"
# Then map the call to our model's forward pass onto all available devices.
vit_apply_repl = jax.pmap(VisionTransformer.call)
# + id="ZgjFBUQ88p4z"
def get_accuracy(params_repl):
"""Returns accuracy evaluated on the test set."""
good = total = 0
steps = input_pipeline.get_dataset_info(dataset, 'test')['num_examples'] // batch_size
for _, batch in zip(tqdm.notebook.trange(steps), ds_test.as_numpy_iterator()):
predicted = vit_apply_repl(params_repl, batch['image'])
is_same = predicted.argmax(axis=-1) == batch['label'].argmax(axis=-1)
good += is_same.sum()
total += len(is_same.flatten())
return good / total
# + id="3qc7j0lv-F6-" outputId="137620ce-39be-4c07-b6b3-59fd1fe5400e" colab={"base_uri": "https://localhost:8080/", "height": 100, "referenced_widgets": ["8931f2c301b64e5aabb8bb5948709f04", "de36a46956da431cae0295fdcd79c830", "0a87a22aad4d434f9af36c3128605585", "679d81864d9e48698ee517c89fd94d93", "522f004c7c304816b4d7989f15ab7245", "e67551f7c7064d13add2affec66f5d0e", "3dc06ffb43a94462ba177a9ee6bdcb39", "f8fce7c448154cd294fee824204ba7d7"]}
# Random performance without fine-tuning.
get_accuracy(params_repl)
# + [markdown] id="HxMdU_e5NeoT"
# ### Fine-tune
# + id="MI62dexw8mGo"
# 100 Steps take approximately 15 minutes in the TPU runtime.
total_steps = 100
warmup_steps = 5
decay_type = 'cosine'
grad_norm_clip = 1
# This controls in how many forward passes the batch is split. 8 works well with
# a TPU runtime that has 8 devices. 64 should work on a GPU. You can of course
# also adjust the batch_size above, but that would require you to adjust the
# learning rate accordingly.
accum_steps = 8
base_lr = 0.03
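# The gradient-accumulation idea behind `accum_steps` — averaging gradients over
# several micro-batches before applying a single update — can be sketched in
# plain NumPy (a toy squared-error loss, not the actual `train.make_update_fn`):

```python
import numpy as np

def grad(w, x, y):
    """Gradient of the mean squared error 0.5*(w*x - y)**2 w.r.t. w."""
    return np.mean((w * x - y) * x)

rng = np.random.default_rng(0)
x = rng.normal(size=64)
y = 3.0 * x  # true slope is 3

w, accum_steps = 0.0, 8
micro_batches = np.array_split(np.arange(len(x)), accum_steps)

# Average the micro-batch gradients, then apply one update. With equal-sized
# micro-batches this matches the full-batch gradient, using less memory per pass.
g = np.mean([grad(w, x[idx], y[idx]) for idx in micro_batches])
w -= 0.1 * g
print(w)
```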
# + id="vzlfREb1ZHBY"
# Check out train.make_update_fn in the editor on the right side for details.
update_fn_repl = train.make_update_fn(VisionTransformer.call, accum_steps)
# We use a momentum optimizer that uses half precision for state to save
# memory. It also implements the gradient clipping.
opt = momentum_clip.Optimizer(grad_norm_clip=grad_norm_clip).create(params)
opt_repl = flax.jax_utils.replicate(opt)
# + id="RTU7OmgjHb-G"
lr_fn = hyper.create_learning_rate_schedule(total_steps, base_lr, decay_type, warmup_steps)
# Prefetch entire learning rate schedule onto devices. Otherwise we would have
# a slow transfer from host to devices in every step.
lr_iter = hyper.lr_prefetch_iter(lr_fn, 0, total_steps)
# Initialize PRNGs for dropout.
update_rngs = jax.random.split(jax.random.PRNGKey(0), jax.local_device_count())
# + id="zKn4IfUWHWPk" outputId="42296f8b-cfb8-488a-b97b-9d7074af3458" colab={"base_uri": "https://localhost:8080/", "height": 66, "referenced_widgets": ["1845e36bf64143338625917c1027e0ca", "cda9b330730d41fcaacc4321efd6215a", "4ef5419eaa604e4ba6aa037ada95dbc1", "72c08a7e23294e8fbc332f47625de6a1", "da7cad8346854c54baae49bfc6e9487d", "753c29354bdc4c4cbdd7218d2eacb939", "9b21c5a0596d4ca9992168f83bc47225", "f298dc80c6724fa2a052929735bed092"]}
# The world's simplest training loop.
# Completes in ~20 min on the TPU runtime.
for step, batch, lr_repl in zip(
tqdm.notebook.trange(1, total_steps + 1),
ds_train.as_numpy_iterator(),
lr_iter
):
opt_repl, loss_repl, update_rngs = update_fn_repl(
opt_repl, lr_repl, batch, update_rngs)
# + id="jJhKAMhMI2D6" outputId="b2111597-4b67-426d-ebec-ddf98c94515e" colab={"base_uri": "https://localhost:8080/", "height": 100, "referenced_widgets": ["fd944cc7181b41a996afa28bdae13e4d", "f7200fa3f17347dfa1abd61ce0db427d", "c5d48246c0044a99863a03301d13cae9", "<KEY>", "<KEY>", "da3842a0b3c44919adff4d86ce3b296b", "458d4ab91d32411ea85d870c7e06f82d", "<KEY>"]}
# Should be ~97.2% for CIFAR10
# Should be ~71.2% for CIFAR100
get_accuracy(opt_repl.target)
# + [markdown] id="N-wIdj_qnbIM"
# ### Inference
# + id="Djl2U4Xgnhqi" outputId="a464c5e3-fbdd-4269-b697-fc80fe47489c" colab={"base_uri": "https://localhost:8080/"}
# Download model pre-trained on imagenet21k and fine-tuned on imagenet2012.
![ -e "$model"_imagenet2012.npz ] || gsutil cp gs://vit_models/imagenet21k+imagenet2012/"$model".npz "$model"_imagenet2012.npz
# + id="TE7BoGkyoY7X"
VisionTransformer = models.KNOWN_MODELS[model].partial(num_classes=1000)
# Load and convert pretrained checkpoint.
params = checkpoint.load(f'{model}_imagenet2012.npz')
params['pre_logits'] = {} # Need to restore empty leaf for Flax.
# + id="p3TTypN0gkwO" outputId="83335b65-648b-4e31-ef5f-b96fdcd3c0d3" colab={"base_uri": "https://localhost:8080/"}
# Get imagenet labels.
# !wget https://storage.googleapis.com/bit_models/ilsvrc2012_wordnet_lemmas.txt
imagenet_labels = dict(enumerate(open('ilsvrc2012_wordnet_lemmas.txt')))
# + id="IQwomPOopDlr" outputId="30c0330c-187a-427f-a1e3-ec403361bd48" colab={"base_uri": "https://localhost:8080/", "height": 673}
# Get a random picture with the correct dimensions.
# !wget https://picsum.photos/384 -O picsum.jpg
import PIL
img = PIL.Image.open('picsum.jpg')
img
# + id="wrvwNAGJshzb"
# Predict on a batch with a single item (note: not very efficient TPU usage...)
logits, = VisionTransformer.call(params, (np.array(img) / 128 - 1)[None, ...])
# + id="64hwCdaehs42" outputId="be813a54-7646-4944-8552-ad496fe7c8b8" colab={"base_uri": "https://localhost:8080/"}
preds = flax.nn.softmax(logits)
for idx in preds.argsort()[:-11:-1]:
print(f'{preds[idx]:.5f} : {imagenet_labels[idx]}', end='')
|
vit_jax.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## tFBA with metabolomic constraints
#
# This notebook illustrates how the functions from the Thermo module can be used to create a tFBA-ready model, constrain it with metabolomic data, relax its bounds, and obtain a solution.
# Import Thermo module.
# +
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
from BFAIR import thermo, thermo_models
# -
# Load thermodynamic information and create model.
tmodel = thermo_models.iJO1366
# Load and apply reaction and log concentration bounds.
# +
bounds = pd.read_csv('../tests/test_data/test_bounds.csv')
rxn_bounds = bounds[bounds['bound_type'] == 'flux']
lc_bounds = bounds[bounds['bound_type'] == 'log_concentration']
thermo.adjust_model(tmodel, rxn_bounds, lc_bounds)
bounds.head()
# -
# Attempt optimization.
tmodel.slim_optimize()
if tmodel.solver.status == 'infeasible':
print('MILP problem requires relaxation.')
# Relax log concentration bounds.
relax_table = thermo.relax_lc(tmodel)
relax_table
# Obtain calculated Gibbs free energy.
delta_g = thermo.get_delta_g(tmodel)
delta_g.head()
# Plot Gibbs free energy.
# +
fig, ax = plt.subplots(figsize=(10, 4))
plot_mask = delta_g['subsystem'].isin(['Citric Acid Cycle', 'Pentose Phosphate Pathway'])
sns.barplot(
    ax=ax,
data=delta_g[plot_mask].reset_index(),
y='delta_g', x='reaction', hue='subsystem', palette='Dark2'
)
ax.set_ylabel('ΔG° (kcal/mol)')
ax.set_xlabel('')
ax.tick_params(axis='x', rotation=45)
ax.axhline(y=0, c='.75')
# -
|
examples/EColi TFA with metabolomic data.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
from sklearn import metrics
import numpy as np
import tensorflow as tf
y_true = np.array([1,1,1,1,0,0], dtype=np.int32)
y_pred = np.array([1,1,1,0,1,1], dtype=np.int32)
print(metrics.accuracy_score(y_true, y_pred))
print(metrics.f1_score(y_true, y_pred))
metrics.precision_score(y_true, y_pred)
# +
a = np.array(2.748574587485748)
print("{:.5f}".format(float(a)))
# -
with tf.Graph().as_default():
with tf.Session() as sess:
tf.set_random_seed(3)
a = tf.get_variable("a",shape=[2], initializer=tf.contrib.layers.xavier_initializer(), dtype=tf.float32)
b = tf.get_variable("b",shape=[2], initializer=tf.contrib.layers.xavier_initializer(), dtype=tf.float32)
sess.run(tf.global_variables_initializer())
print(sess.run(a))
print(sess.run(b))
# +
# Stacking: learn a weight for each base model's predictions on the local_test set
best_wb = {}
best_f1 = 0.0
best_steps = 0
with tf.Graph().as_default():
sess_config = tf.ConfigProto(allow_soft_placement=True)
sess_config.gpu_options.allow_growth = True
with tf.Session(config=sess_config) as sess:
inputs = tf.placeholder(shape=[len(local_test_X), len(splits)], dtype=tf.float32, name="input")
labels = tf.placeholder(shape=[len(local_test_X)], dtype=tf.float32, name="label")
trans = tf.log(inputs / (1 - inputs))
W = tf.get_variable("w", shape=[len(splits), 1], dtype=tf.float32, initializer=tf.constant_initializer(0.2))
b = tf.get_variable("b", shape=[1], dtype=tf.float32, initializer=tf.constant_initializer(.0))
logits = tf.squeeze(tf.nn.bias_add(tf.matmul(trans, W), b), axis=1)
sigmoid = tf.nn.sigmoid(logits)
loss = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=logits, labels=labels))
train_op = tf.train.GradientDescentOptimizer(0.7).minimize(loss)
sess.run(tf.global_variables_initializer())
for steps in range(1500):
output, op = sess.run([sigmoid, train_op], feed_dict={inputs:test_preds_local, \
labels:np.array(local_test_Y, dtype=np.float32)})
this_f1 = metrics.f1_score(local_test_Y, (output>best_threshold).astype(int))
if this_f1 > best_f1:
best_steps=steps
best_f1 = this_f1
best_wb["W"] = sess.run(W)
best_wb["b"] = sess.run(b)
if (steps+1) % 50==0 or steps==0:
print("steps:{}, f1:{:.5f}".format(steps+1, this_f1))
print(best_wb)
print(best_f1)
print(best_steps)
# -
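The TensorFlow cell above blends the models' probabilities in logit space. A minimal NumPy sketch of the same transform (the probabilities and weights below are made-up illustrations, not values from the trained stacker):

```python
import numpy as np

# convert each model's probability to a logit, take a weighted sum,
# then squash back through a sigmoid -- the same math as the TF cell above
probs = np.array([[0.8, 0.6, 0.7]])   # one sample, three models (made up)
w = np.full(3, 0.2)                   # per-model weights (made up)
b = 0.0
logits = np.log(probs / (1 - probs))  # inverse of the sigmoid
blended = 1.0 / (1.0 + np.exp(-(logits @ w + b)))
print(blended)
```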
a = np.zeros((100, 5), dtype=np.float32)
b = np.ones((100))
a[:,1] = b
a
|
demo.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Refine BiGG IDs
# +
from pprint import pprint
import pandas
import json, re
# define the reaction translation
# reactions = pandas.read_table(open('BiGG_reactions.txt', 'r'), sep = '\t')
# reactions_dict = {}
# for index, met in reactions.iterrows():
# reactions_dict[met['bigg_id']] = {
# 'name':met['name'],
# 'reaction_string':met['reaction_string']
# }
# with open('BiGG_reactions, parsed.json', 'w') as out:
# json.dump(reactions_dict, out, indent = 2)
# define the metabolite translation: BiGG -> SABIO
# metabolites = pandas.read_table(open('BiGG_metabolites.txt', 'r'), sep = '\t')
# metabolites_dict = {}
# for index, met in metabolites.iterrows():
# bigg_name = str(met['name'])
# bigg_name = bigg_name.strip()
# bigg_id = met['universal_bigg_id']
# name = re.sub('\s[A-Z0-9]{3,}$', '', str(met['name']))
# name = name.strip()
# metabolites_dict[bigg_id] = {
# 'name':name
# }
# if bigg_name != name:
# metabolites_dict[bigg_id]['bigg_name'] = bigg_name
# with open('BiGG_metabolites, parsed.json', 'w') as out:
# json.dump(metabolites_dict, out, indent = 2)
# define the metabolite translation: SABIO -> BiGG
metabolites = pandas.read_table(open('BiGG_metabolites.txt', 'r'), sep = '\t')
metabolites_dict = {}
for index, met in metabolites.iterrows():
bigg_name = str(met['name'])
bigg_name = bigg_name.strip()
bigg_id = met['universal_bigg_id']
name = re.sub('\s[A-Z0-9]{3,}$', '', str(met['name']))
name = name.strip()
metabolites_dict[name] = {
'id':bigg_id
}
if bigg_name != name:
metabolites_dict[name]['bigg_name'] = bigg_name
with open('BiGG_metabolite_names, parsed.json', 'w') as out:
json.dump(metabolites_dict, out, indent = 2)
# pprint(metabolites_dict)
# pprint(reactions_dict)
# -
# # Refine the BiGG model
# +
def split_reaction(reaction_string, bigg_metabolites):
    def _parse_stoich(met):
        stoich = ''
        ch_number = 0
        denom = False
        numerator = denominator = ''
        while ch_number < len(met) and re.search('[0-9\./]', met[ch_number]):
            stoich += met[ch_number]
            if met[ch_number] == '/':
                # the digits before the slash form the numerator
                numerator = stoich[:-1]
                denom = True
            elif denom:
                denominator += met[ch_number]
            ch_number += 1
        if denom:
            stoich = f'{numerator}/{denominator}'
        return stoich
def met_parsing(met):
# print(met)
met = met.strip()
if re.search('(\d\s\w|\d\.\d\s|\d/\d\s)', met):
coefficient = _parse_stoich(met)
coefficient = '{} '.format(coefficient)
else:
coefficient = ''
met = re.sub(coefficient, '', met)
# print(met, coefficient)
return met, coefficient
def reformat_met_name(met_name, sabio = False):
met_name = re.sub(' - ', '-', met_name)
if not sabio:
met_name = re.sub(' ', '_', met_name)
return met_name
# parse the reactants and products for the specified reaction string
reaction_split = reaction_string.split('<->')
reactants_list = reaction_split[0].split(' + ')
products_list = reaction_split[1].split(' + ')
# parse the reactants
reactants = []
sabio_reactants = []
for met in reactants_list:
# print(met)
met = met.strip()
met = re.sub('_\w$', '', met)
met, coefficient = met_parsing(met)
reactants.append(coefficient + reformat_met_name(bigg_metabolites[met]['name']))
sabio_reactants.append(coefficient + reformat_met_name(bigg_metabolites[met]['name'], True))
# parse the products
products = []
sabio_products = []
for met in products_list:
if not re.search('[a-z]', met, flags = re.IGNORECASE):
continue
met = met.strip()
met = re.sub('_\w$', '', met)
met, coefficient = met_parsing(met)
products.append(coefficient + reformat_met_name(bigg_metabolites[met]['name']))
sabio_products.append(coefficient + reformat_met_name(bigg_metabolites[met]['name'], True))
# compounds = reactants + products
reactant_string = ' + '.join(reactants)
product_string = ' + '.join(products)
reaction_string = ' <-> '.join([reactant_string, product_string])
# construct the set of compounds in the SABIO format
sabio_compounds = sabio_reactants + sabio_products
return reaction_string, sabio_compounds
# +
from pprint import pprint
import pandas
import json
# %run ../dfbapy/dfba.py
bigg_reactions = json.load(open('BiGG_reactions, parsed.json'))
bigg_metabolites = json.load(open('BiGG_metabolites, parsed.json'))
# substitute the reaction and metabolite names
model = json.load(open('Ecoli core, BiGG, indented.json'))
model_contents = {}
for reaction in model['reactions']:
# define the reaction identification
reaction_id = reaction['id']
reaction_name = bigg_reactions[reaction_id]['name']
# substitute the reaction string
og_reaction_string = bigg_reactions[reaction_id]['reaction_string']
# print('\n\n', og_reaction_string)
reaction_string, compounds = split_reaction(og_reaction_string, bigg_metabolites)
# print(reaction_string)
model_contents[reaction_name] = {
'reaction': {
'original': og_reaction_string,
'substituted': reaction_string,
},
'chemicals': compounds,
'annotations': reaction['annotation']
}
# pprint(model_contents)
with open('processed_Ecoli_model.json', 'w') as out:
json.dump(model_contents, out, indent = 3)
# -
# # Refine the data combination function
# +
import pandas
import json, re
import numpy as np

# minimal JSON encoder for numpy types; json.dump(..., cls=NumpyEncoder)
# below raises a NameError without a definition like this one
class NumpyEncoder(json.JSONEncoder):
    def default(self, obj):
        if isinstance(obj, np.ndarray):
            return obj.tolist()
        if isinstance(obj, (np.integer, np.floating)):
            return obj.item()
        return super().default(obj)
class CaseInsensitiveDict(dict):
@classmethod
def _k(cls, key):
return key.lower() if isinstance(key, str) else key
def __init__(self, *args, **kwargs):
super(CaseInsensitiveDict, self).__init__(*args, **kwargs)
self._convert_keys()
def __getitem__(self, key):
return super(CaseInsensitiveDict, self).__getitem__(self.__class__._k(key))
def __setitem__(self, key, value):
super(CaseInsensitiveDict, self).__setitem__(self.__class__._k(key), value)
def __delitem__(self, key):
return super(CaseInsensitiveDict, self).__delitem__(self.__class__._k(key))
def __contains__(self, key):
return super(CaseInsensitiveDict, self).__contains__(self.__class__._k(key))
def has_key(self, key):
return super(CaseInsensitiveDict, self).has_key(self.__class__._k(key))
def pop(self, key, *args, **kwargs):
return super(CaseInsensitiveDict, self).pop(self.__class__._k(key), *args, **kwargs)
def get(self, key, *args, **kwargs):
return super(CaseInsensitiveDict, self).get(self.__class__._k(key), *args, **kwargs)
def setdefault(self, key, *args, **kwargs):
return super(CaseInsensitiveDict, self).setdefault(self.__class__._k(key), *args, **kwargs)
def update(self, E=None, **F):
super(CaseInsensitiveDict, self).update(self.__class__(E))
super(CaseInsensitiveDict, self).update(self.__class__(**F))
def _convert_keys(self):
for k in list(self.keys()):
v = super(CaseInsensitiveDict, self).pop(k)
self.__setitem__(k, v)
class test():
def __init__(self):
self.sabio_df = pandas.read_csv(open('proccessed-xls.csv', encoding="utf-8"))
self.bigg_to_sabio_metabolites = json.load(open('BiGG_metabolites, parsed.json'))
self.sabio_to_bigg_metabolites = json.load(open('BiGG_metabolite_names, parsed.json'))
self.bigg_reactions = json.load(open('BiGG_reactions, parsed.json'))
self.paths = {}
self.paths['entryids_path'] = 'entryids_progress.json'
self.paths["scraped_model_path"] = 'scraped_model.json'
self.printing = True
def _split_reaction(self,
reaction_string, # the sabio or bigg reaction string
sabio = False # specifies how the reaction string will be split
):
        def _parse_stoich(met):
            stoich = ''
            ch_number = 0
            denom = False
            numerator = denominator = ''
            while ch_number < len(met) and re.search('[0-9\./]', met[ch_number]):
                stoich += met[ch_number]
                if met[ch_number] == '/':
                    # the digits before the slash form the numerator
                    numerator = stoich[:-1]
                    denom = True
                elif denom:
                    denominator += met[ch_number]
                ch_number += 1
            if denom:
                stoich = f'{numerator}/{denominator}'
            return stoich
def met_parsing(met):
# print(met)
met = met.strip()
met = re.sub('_\w$', '', met)
if re.search('(\d\s\w|\d\.\d\s|\d/\d\s)', met):
coefficient = _parse_stoich(met)
coefficient = '{} '.format(coefficient)
else:
coefficient = ''
met = re.sub(coefficient, '', met)
# print(met, coefficient)
return met, coefficient
def reformat_met_name(met_name, sabio = False):
met_name = re.sub(' - ', '-', met_name)
return met_name
def parsing_chemical_list(chemical_list):
bigg_chemicals = []
sabio_chemicals = []
for met in chemical_list:
# print('metabolite', met, type(met))
if not re.search('[A-Za-z]', met):
continue
met, coefficient = met_parsing(met)
# assign the proper chemical names
if not sabio:
sabio_chemicals.append(coefficient + reformat_met_name(self.bigg_to_sabio_metabolites[met]['name'], True))
if 'bigg_name' in self.bigg_to_sabio_metabolites[met]:
bigg_chemicals.append(coefficient + reformat_met_name(self.bigg_to_sabio_metabolites[met]['bigg_name']))
else:
bigg_chemicals.append(coefficient + reformat_met_name(self.bigg_to_sabio_metabolites[met]['name']))
elif sabio:
sabio_chemicals.append(coefficient + reformat_met_name(met, True))
# if met in list(self.sabio_to_bigg_metabolites.keys()):
# print('yes')
# try:
# print(self.sabio_to_bigg_metabolites[met])
# except:
# # print(met, '\n\n\n', self.sabio_to_bigg_metabolites.keys())
# for ch in met:
# print(ch, '\t', ord(ch))
dic = CaseInsensitiveDict(self.sabio_to_bigg_metabolites)
if 'bigg_name' in dic.get(met):
bigg_chemicals.append(coefficient + reformat_met_name(dic.get(met)['bigg_name']))
else:
bigg_chemicals.append(coefficient + reformat_met_name(met))
# bigg_chemicals.append(coefficient + reformat_met_name(self.bigg_metabolites[met]['name']))
# sabio_chemicals.append(coefficient + reformat_met_name(self.bigg_metabolites[met]['name'], True))
# if 'bigg_name' in self.bigg_metabolites[met]:
# bigg_chemicals[-1] = coefficient + reformat_met_name(self.bigg_metabolites[met]['name'])
return bigg_chemicals, sabio_chemicals
# parse the reactants and products for the specified reaction string
if not sabio:
reaction_split = reaction_string.split(' <-> ')
else:
reaction_split = reaction_string.split(' = ')
reactants_list = reaction_split[0].split(' + ')
products_list = reaction_split[1].split(' + ')
# parse the reactants and products
bigg_reactants, sabio_reactants = parsing_chemical_list(reactants_list)
bigg_products, sabio_products = parsing_chemical_list(products_list)
# assemble the chemicals list and reaction string
bigg_compounds = bigg_reactants + bigg_products
sabio_chemicals = sabio_reactants + sabio_products
reactant_string = ' + '.join(bigg_reactants)
product_string = ' + '.join(bigg_products)
reaction_string = ' <-> '.join([reactant_string, product_string])
# if sabio:
# reaction_string = ' = '.join([reactant_string, product_string])
return reaction_string, sabio_chemicals, bigg_compounds
def combine_data(self,):
# import previously parsed content
self.model_contents = json.load(open('processed_Ecoli_model.json'))
with open(self.paths['entryids_path']) as json_file:
entry_id_data = json.load(json_file)
# combine the scraped data into a programmable JSON
enzyme_dict = {}
missing_entry_ids = []
enzymes = self.sabio_df["Enzymename"].unique().tolist()
for enzyme in enzymes:
print('enzyme', enzyme)
enzyme_df = self.sabio_df.loc[self.sabio_df["Enzymename"] == enzyme]
enzyme_dict[enzyme] = {}
reactions = enzyme_df["Reaction"].unique().tolist()
for reaction in reactions:
enzyme_dict[enzyme][reaction] = {}
# ensure that the reaction chemicals match before accepting kinetic data
print('reaction', reaction)
rxn_string, sabio_chemicals, expected_bigg_chemicals= self._split_reaction(reaction, sabio = True)
bigg_chemicals = self.model_contents[enzyme]['bigg_compounds']
extra_bigg = set(bigg_chemicals) - set(expected_bigg_chemicals)
extra_bigg = set(re.sub('(H\+|H2O)', '', chem) for chem in extra_bigg)
if len(extra_bigg) != 1:
missed_reaction = f'The || {rxn_string} || reaction with {expected_bigg_chemicals} chemicals does not match the BiGG reaction of {bigg_chemicals} chemicals.'
if self.printing:
print(missed_reaction)
enzyme_dict[enzyme][reaction] = missed_reaction
continue
# parse each entryid of each reaction
enzyme_reactions_df = enzyme_df.loc[enzyme_df["Reaction"] == reaction]
entryids = enzyme_reactions_df["EntryID"].unique().tolist()
for entryid in entryids:
enzyme_reaction_entryids_df = enzyme_reactions_df.loc[enzyme_reactions_df["EntryID"] == entryid]
entryid_string = f'condition_{entryid}'
enzyme_dict[enzyme][reaction][entryid_string] = {}
head_of_df = enzyme_reaction_entryids_df.head(1).squeeze()
entry_id_flag = True
parameter_info = {}
try:
parameter_info = entry_id_data[str(entryid)]
enzyme_dict[enzyme][reaction][entryid_string]["Parameters"] = parameter_info
except:
missing_entry_ids.append(str(entryid))
entry_id_flag = False
enzyme_dict[enzyme][reaction][entryid_string]["Parameters"] = "NaN"
rate_law = head_of_df["Rate Equation"]
bad_rate_laws = ["unknown", "", "-"]
if not rate_law in bad_rate_laws:
enzyme_dict[enzyme][reaction][entryid_string]["RateLaw"] = rate_law
enzyme_dict[enzyme][reaction][entryid_string]["SubstitutedRateLaw"] = rate_law
else:
enzyme_dict[enzyme][reaction][entryid_string]["RateLaw"] = "NaN"
enzyme_dict[enzyme][reaction][entryid_string]["SubstitutedRateLaw"] = "NaN"
if entry_id_flag:
fields_to_copy = ["Buffer", "Product", "PubMedID", "Publication", "pH", "Temperature", "Enzyme Variant", "UniProtKB_AC", "Organism", "KineticMechanismType", "SabioReactionID"]
for field in fields_to_copy:
enzyme_dict[enzyme][reaction][entryid_string][field] = head_of_df[field]
enzyme_dict[enzyme][reaction][entryid_string]["Substrates"] = head_of_df["Substrate"].split(";")
out_rate_law = rate_law
if not rate_law in bad_rate_laws:
substrates = head_of_df["Substrate"].split(";")
stripped_string = re.sub('[0-9]', '', rate_law)
variables = re.split("\^|\*|\+|\-|\/|\(|\)| ", stripped_string)
variables = ' '.join(variables).split()
start_value_permutations = ["start value", "start val."]
substrates_key = {}
for var in variables:
if var in parameter_info:
for permutation in start_value_permutations:
try:
if var == "A" or var == "B":
substrates_key[var] = parameter_info[var]["species"]
else:
value = parameter_info[var][permutation]
if value != "-" and value != "" and value != " ": # The quantities must be converted to base units
out_rate_law = out_rate_law.replace(var, parameter_info[var][permutation])
except:
pass
enzyme_dict[enzyme][reaction][entryid_string]["RateLawSubstrates"] = substrates_key
enzyme_dict[enzyme][reaction][entryid_string]["SubstitutedRateLaw"] = out_rate_law
with open(self.paths["scraped_model_path"], 'w', encoding="utf-8") as f:
json.dump(enzyme_dict, f, indent=4, sort_keys=True, separators=(', ', ': '), ensure_ascii=False, cls=NumpyEncoder)
# -
tes = test()
tes.combine_data()
|
examples/E_coli_core.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3.8.2 64-bit
# name: python3
# ---
# # <strong>Recurrent Neural Networks Using Pytorch</strong>
# ### In this notebook you will learn the basics of a Recurrent Neural Network using the python library Pytorch.
# ---
# ### <strong>Table of Contents</strong>
# 1. [Introduction to Recurrent Neural Networks](#intro)
# 2. [Long-Short Term Memory](#LSTM)
# 3. [Time Series Data](#time)
# 4. [Understanding the Dataset](#data)
# 5. [Using Pytorch](#pytorch)
# 6. [Code](#code)
# ---
# ### By the end of this notebook, you should be able to implement a basic RNN using Pytorch with the provided data set.
#
# ---
# ## <a name="intro"></a> <strong>Introduction</strong>
# ---
# *<strong>Recurrent neural networks</strong>*, or RNNs, are widely used in a variety of mediums. RNNs leverage sequential data to make predictions. **Sequential memory** makes it easier for the neural network to recognize patterns and replicate the input. In order to achieve learning through sequential memory, a **feedforward neural network** with looping mechanisms is implemented.
#
# As the image below outlines, there are *three* layers: **input, hidden and output**. Loops pass previous information forward, allowing the model to store and learn the data *sequentially*. The complexity of a hidden state depends on how much “historic” information is being stored; it is a representation of all previous steps. When training a model, once a prediction is produced for a given input, a **loss function** is used to determine the error between the predicted output and the real output. The model is then trained through **back propagation**: the weight of each node in the neural network is adjusted using its corresponding gradient.
# <br>
# <br>
# <p align="center">
# <img src="../images/rnnImg.png">
# </p>
# <br>
# <br>
#
# The advantage of using sequential data to successfully predict certain outcomes is especially relevant when analyzing **time series data**.
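The recurrence described above fits in a few lines. The following sketch shows one step of a vanilla RNN in NumPy, with made-up sizes and random weights: the new hidden state mixes the current input with the previous hidden state.

```python
import numpy as np

# one recurrent step: h_t = tanh(W_x @ x_t + W_h @ h_prev + b)
rng = np.random.default_rng(0)
n_in, n_hid = 4, 8                        # illustrative sizes
W_x = rng.standard_normal((n_hid, n_in))  # input-to-hidden weights
W_h = rng.standard_normal((n_hid, n_hid)) # hidden-to-hidden weights (the "loop")
b = np.zeros(n_hid)
x_t = rng.standard_normal(n_in)           # input at this time step
h_prev = np.zeros(n_hid)                  # hidden state from the previous step
h_t = np.tanh(W_x @ x_t + W_h @ h_prev + b)
print(h_t.shape)
```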
# ---
# ## <a name="LSTM"></a><strong>Long-Short Term Memory</strong>
# ---
# ["Learning to store information over extended time intervals by recurrent backpropagation takes a very long time, mostly because of insufficient, decaying error backflow".*](https://ieeexplore.ieee.org/abstract/document/6795963)
#
# To solve this issue a type of RNN called **long short term memory** is used. [**LSTMs**](https://developer.ibm.com/tutorials/iot-deep-learning-anomaly-detection-1/) are able to keep track of long-term dependencies by using gating mechanisms that preserve the error gradient over long sequences instead of letting it decay. These attributes are especially advantageous when examining **time series data**. The more historical data there is, the better the model can train, producing the most accurate outputs.
#
# An LSTM has an internal state variable that is modified based on weights and biases through operation gates. Traditionally, an LSTM is comprised of three operation gates: the forget gate, input gate, and output gate. The architecture of long short term memory is dependent on $tanh$ and $sigmoid$ functions implemented in the network. The $tanh$ function ensures that the values in the network remain between -1 and 1, while the $sigmoid$ function regulates whether data should be remembered or forgotten.
#
# The mathematical representations of each gate are as follows:
#
# <strong>Forget Gate</strong>: $$f_t = \sigma(w_f*[h_{t-1},x_t] + b_f)$$
#
# <strong>Input Gate</strong>: $$i_t = \sigma(w_i*[h_{t-1},x_t] + b_i)$$
#
# <strong>Output Gate</strong>: $$O_t = \sigma(w_o*[h_{t-1},x_t] + b_o)$$
#
# Where:
# * $w_f$, $w_i$, $w_o$ = weight matrices of the forget, input, and output gates
# * $h_{t-1}$ = previous hidden state
# * $x_t$ = input
# * $b_f$ = connection bias at forget gate
# * $b_i$ = connection bias at input gate
# * $b_o$ = connection bias at output gate
#
#
# Each gate modifies the input a different way. The forget gate determines what data is relevant to keep and what information can be "forgotten". The input gate analyzes what information needs to be added to the current step, and the output gate finalizes the proceeding hidden state. Each of these gates allows for sequential data to be efficiently stored and analyzed, allowing for an accurate predictive model to be developed.
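The gate equations above can be sketched directly in NumPy. This is an illustrative single step with random weights and made-up sizes; a fourth weight matrix `W['c']` for the candidate cell value (part of the standard cell-state update, not one of the three gates listed above) is included so the step is complete.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# one LSTM step following the gate equations above (sizes/weights illustrative)
rng = np.random.default_rng(0)
n_in, n_hid = 4, 8
W = {g: rng.standard_normal((n_hid, n_hid + n_in)) for g in 'fioc'}
b = {g: np.zeros(n_hid) for g in 'fioc'}

h_prev = np.zeros(n_hid)            # previous hidden state h_{t-1}
c_prev = np.zeros(n_hid)            # internal (cell) state
x_t = rng.standard_normal(n_in)     # current input
z = np.concatenate([h_prev, x_t])   # [h_{t-1}, x_t]

f_t = sigmoid(W['f'] @ z + b['f'])  # forget gate
i_t = sigmoid(W['i'] @ z + b['i'])  # input gate
o_t = sigmoid(W['o'] @ z + b['o'])  # output gate
c_t = f_t * c_prev + i_t * np.tanh(W['c'] @ z + b['c'])  # cell-state update
h_t = o_t * np.tanh(c_t)            # next hidden state
print(h_t.shape)
```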
#
# ***Source:** *<NAME> and <NAME>, "Long Short-Term Memory," in Neural Computation, vol. 9, no. 8, pp. 1735-1780, 15 Nov. 1997, doi: 10.1162/neco.1997.9.8.1735.*
# ---
# ## <a name="time"></a><strong>Time Series Data</strong>
# ---
# Prior to training a model, it is important to understand the type of data you are working with. There are many different types of data; this notebook uses time series data. In essence, time series data is a collection of chronologically ordered observations made over a period of time, sometimes at specific intervals. Time series data can be grouped as either *<strong>metrics</strong>* or *<strong>events</strong>*.
#
# * **Metrics**: measurements taken at regular intervals.
#
# * **Events**: measurements taken at irregular intervals.
#
# Distinguishing whether the data is comprised of metrics or events is critical. Events are not conducive to creating predictive models: the irregular intervals between data points prevent sequential logic from finding patterns in past behavior. In contrast, the characteristic regularity of metrics allows machine learning models to learn from previous data and construct possible outcomes for the future. Creating an RNN using time series data, specifically metrics, is a great way to take advantage of the sequential learning pattern they leverage.
#
# Furthermore, time series data can also be categorized as *<strong>linear</strong>* or *<strong>non-linear</strong>*. Based on the mathematical relationship created by the model, the data is classified as one or the other.
#
# Popular examples of time series data include weather, stock, and health care data. In this notebook, we will be using stock data to create an RNN model to predict the value of the given stock.
#
#
# ---
# ## <a name="data"></a><strong>Understanding the Dataset</strong>
# ---
#
# The data set we are examining has been collected by IBM Watson. It is comprised of the Open, High, Low and Close values of a particular stock from the years 1980-2018. Later in the tutorial, we will extract information about the data's characteristics using certain Pandas methods, however, let us focus on identifying what we want to do with the data first.
#
# When describing a stock's value on any given day, the close value is used. Financial institutions also examine close values to analyze a stock's market performance, as it represents how the stock performed through market hours. The overall importance of the Close value indicates that this is the value we should train the model to predict.
#
# In this tutorial, we will be using all four data fields to create a multivariate model that can predict the Close Value for this stock.
#
# ---
# ## <a name="pytorch"></a><strong>Using Pytorch</strong>
# ---
# Pytorch is a python library that uses the specialized data structure Tensors to encode model parameters and inputs. The following is a brief tutorial on imports that we will be using to evaluate the stock data.
#
# In order to use Pytorch, we must first import the library into our workspace. To do this type the following code...
import torch
# In order to work with data and create a neural network, we can use the Pytorch class nn and the primitives Dataset and DataLoader. A Dataset stores the samples, while a DataLoader wraps an iterable around a Dataset to load the desired data. The matplotlib import allows us to create, modify and plot figures in a plotting area. This is useful for the model we are trying to create in this exercise.
from torch import nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms
import matplotlib.pyplot as plt
# Now that we know what imports to use, we are ready to begin creating our model for our stock data set!
# ---
# ## <a name="code"></a><strong>Code</strong>
# ---
# +
import torch
from torch import nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms
import matplotlib.pyplot as plt
# !pip install pandas
import pandas as pd
# -
# In the following empty code cell, import the Stock Data CSV file to your notebook.
# +
#insert stock data import here, follow the directions outlined in the article.
# -
# First, let us examine the data.
stock_data = df_data_1
print("Information about the dataset", end = "\n")
print(stock_data.info())
# We can see that there are a total of 6 columns, the columns we want to input into the model are the last four columns: index 2-5.
print("First five elements in the dataset", end = "\n")
print(stock_data.head(5))
print("Last five elements in the dataset", end = "\n")
print(stock_data.tail(5))
# By printing the first and last 5 elements, we can see that the data is in inverse order: the most recent data is stored at the beginning of the set and the oldest data is at the end. For our model, we need to input the data from oldest to most recent. The Pandas method "sort_values" can be used to sort the data in chronological order.
stock_data = stock_data.sort_values(by="Date")
print(stock_data.head())
# After applying the "sort_values" method, we can use the "head" method to print the first five data points. Our data is now in the correct order.
from sklearn.preprocessing import MinMaxScaler
price = stock_data[['High','Low','Open','Close']]
print(price[:5])
# In order to create the multivariate model we can create an array containing only the variables we want to input. We no longer need the date or index column as the data has already been ordered.
scaler = MinMaxScaler(feature_range=(-1, 1))
price = scaler.fit_transform(price.values)
print(price[:5])
# After we have isolated the target variables, the next step is to normalize the values, i.e. represent each one as a value between -1 and 1. This makes the data uniform: irrespective of the range a feature falls under, each data point will have a comparable impact on the model.
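What `MinMaxScaler(feature_range=(-1, 1))` computes can be written out by hand for a single column (the numbers below are a toy illustration, not values from the stock data):

```python
import numpy as np

demo = np.array([10.0, 20.0, 30.0])           # toy column of prices
lo, hi = demo.min(), demo.max()
scaled = (demo - lo) / (hi - lo) * 2 - 1      # map [min, max] -> [-1, 1]
restored = (scaled + 1) / 2 * (hi - lo) + lo  # the inverse transform
print(scaled)    # [-1.  0.  1.]
print(restored)  # [10. 20. 30.]
```

The same inverse mapping is what `scaler.inverse_transform` performs near the end of this notebook to recover dollar values.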
# Below, we have defined our train window as 7 to represent the length of a week.
# +
train_window = 7
import numpy as np
def create_inout_sequences(input, tw):
inout_seq = []
L = len(input)
print('Length = ',L)
for i in range(L-tw):
data_seq = input[i:i+tw]
data_label = input[i+tw:i+tw+1][0][3]
inout_seq.append((data_seq ,data_label))
data = inout_seq;
print('size of data : ', len(data))
test_set_size = 20
train_set_size = len(data) - (test_set_size);
print('size of test : ', test_set_size)
print('size of train : ', train_set_size)
train = data[:train_set_size]
test = data[train_set_size:]
return train,test
train,test = create_inout_sequences(price, train_window )
# -
# The "create_intout_sequences" method creates labels for the dataset and isolates the datapoints we are inputting into the model, taking into account the training and test size.
# Let us print the first five elements of the train data to confirm its formatting.
print(train[:5])
# We can see the data type of the train data is an array, and the data is all normalized.
# This is the LSTM method. The number of inputs included in the model is 4 to represent the Open, High, Low and Close values we are feeding into the model.
class LSTM(nn.Module):
def __init__(self, input_size=4, hidden_layer_size=100, output_size=1):
super().__init__()
self.hidden_layer_size = hidden_layer_size
self.lstm = nn.LSTM(input_size, hidden_layer_size)
self.linear = nn.Linear(hidden_layer_size, output_size)
self.hidden_cell = (torch.zeros(1,1,self.hidden_layer_size),
torch.zeros(1,1,self.hidden_layer_size))
def forward(self, input_seq):
lstm_out, self.hidden_cell = self.lstm(input_seq.view(len(input_seq), 1, -1), self.hidden_cell)
predictions = self.linear(lstm_out.view(len(input_seq), -1))
return predictions[-1]
model = LSTM()
loss_function = nn.MSELoss()
optimizer = torch.optim.Adam(model.parameters(), lr=0.001)
# We must define an LSTM() method object and the loss function and optimizer so that we can use it when we train the model.
# Now that all the necessary parameters have been defined, we can use "LSTM()" and "create_inout_sequences()" to train the model. We are training this model for 5 epochs.
epochs = 5
for i in range(epochs):
for seq, labels in train:
seq = torch.from_numpy(np.array(seq)).type(torch.Tensor)
#print(seq)
labels = torch.from_numpy(np.array(labels)).type(torch.Tensor)
#print(labels)
optimizer.zero_grad()
model.hidden_cell = (torch.zeros(1, 1, model.hidden_layer_size),
torch.zeros(1, 1, model.hidden_layer_size))
y_pred = model(seq)
#print('y_pred : ',y_pred)
labels = labels.view(1)
#print('label : ', labels)
single_loss = loss_function(y_pred, labels)
single_loss.backward()
optimizer.step()
print(f'epoch: {i:3} loss: {single_loss.item():10.10f}')
# The following method allows us to predict the values using the trained model.
model.eval()
actual=[]
pred = []
#for i in range(fut_pred):
if True:
#seq = test_inout_seq
for seq, labels in test:
seq = torch.from_numpy(np.array(seq)).type(torch.Tensor)
#print(seq)
labels = torch.from_numpy(np.array(labels)).type(torch.Tensor)
actual.append(labels)
with torch.no_grad():
model.hidden = (torch.zeros(1, 1, model.hidden_layer_size),
torch.zeros(1, 1, model.hidden_layer_size))
pred.append(model(seq).item())
# Next, our aim is to convert the predicted and actual data into tensors.
pred = torch.from_numpy(np.array(pred)).type(torch.Tensor)
actual = torch.from_numpy(np.array(actual)).type(torch.Tensor)
# We can print the data to confirm that this it is formatted with the correct data type.
print(pred)
print(actual)
# There are 20 individual tensors in both the predicted and actual sets. As we recall, we previously defined the variable "test_set_size" as 20 in the "create_inout_sequences" method. These values represent the predicted and actual Close Values for the last (most recent) 20 days.
# Finally, in order to better understand and analyze the accuracy of the model's predictions, we need to convert the normalized values back to their original scale.
import numpy as np
pred_new = scaler.inverse_transform( np.c_[ np.zeros(20),np.zeros(20),np.zeros(20),np.array(pred)])
print(pred_new[:,3])
actual_new = scaler.inverse_transform( np.c_[ np.zeros(20),np.zeros(20),np.zeros(20),np.array(actual)])
print(actual_new[:,3])
# It is clear that the model did a good job of predicting the Close Value of the stock. We can represent these values visually to see how close they are.
fig = plt.figure()
fig2 = plt.figure()
fig3 = plt.figure()
a = fig.add_axes([0,0,1,1])
b = fig2.add_axes([0,0,1,1])
c = fig3.add_axes([0,0,1,1])
a.plot(actual_new[:,3], 'ro')
a.set_ylabel('Stock Value (dollars)')
b.plot(pred_new[:,3],'c')
b.set_ylabel('Stock Value (dollars)')
#fig.legend(labels = ('Actual','Predicted'),loc='upper left')
c.plot(actual_new[:,3], 'ro')
c.plot(pred_new[:,3],'c')
c.set_ylabel('Stock Value (dollars)')
plt.show()
# The red dots represent the Actual Value of the stock over the last 20 days, and the cyan line represents the Predicted Value of the stock over the last 20 days.
# We can also plot the difference between the Actual and Predicted Value.
# +
difference = actual_new - pred_new
fig = plt.figure()
diffGraph = fig.add_axes([0,0,1,1])
diffGraph.plot(difference[:, 3], 'b')
diffGraph.set_ylabel('Difference between Actual and Predicted Stock Value (dollars)')
plt.show()
# -
# We have successfully completed creating our RNN model to predict the stock value.
#
# ---
# #### We hope this tutorial gave you a glimpse into creating a Recurrent Neural Network using PyTorch.
# #### Thanks for completing this lesson!
# ### <strong>Want to Learn More?</strong>
#
# Running deep learning programs usually needs a high performance platform. **PowerAI** speeds up deep learning and AI. Built on IBM’s Power Systems, **PowerAI** is a scalable software platform that accelerates deep learning and AI with blazing performance for individual users or enterprises. The **PowerAI** platform supports popular machine learning libraries and dependencies including TensorFlow, Caffe, Torch, and Theano. You can use [PowerAI on IMB Cloud](https://cocl.us/ML0120EN_PAI).
#
# Also, you can use **Watson Studio** to run these notebooks faster with bigger datasets. **Watson Studio** is IBM’s leading cloud solution for data scientists, built by data scientists. With Jupyter notebooks, RStudio, Apache Spark and popular libraries pre-packaged in the cloud, **Watson Studio** enables data scientists to collaborate on their projects without having to install anything. Join the fast-growing community of **Watson Studio** users today with a free account at [Watson Studio](https://cocl.us/ML0120EN_DSX). This is the end of this lesson. Thank you for reading this notebook, and good luck!
# Content and model created by: [<NAME>](https://www.linkedin.com/in/dhivya-lak/), [<NAME>](https://www.linkedin.com/in/samaya-madhavan)
#
# Added to IBM Developer by: [<NAME>](https://www.linkedin.com/in/dhivya-lak/)
# ### <strong>References</strong>
#
# * https://pytorch.org/tutorials/beginner/basics/intro.html
# * https://www.influxdata.com/what-is-time-series-data/
# * https://ieeexplore.ieee.org/abstract/document/6795963
# * https://www.youtube.com/watch?v=LHXXI4-IEns
# * https://www.pluralsight.com/guides/introduction-to-lstm-units-in-rnn
# * https://stackabuse.com/time-series-prediction-using-lstm-with-pytorch-in-python
supervised-deeplearning/notebooks/RNNPyTorch.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/edmanft/Drug_Synergy/blob/main/Commutative%2C%20Pearson%20and%20Callbacks.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + [markdown] id="RgzUTT93a4-h"
#
#
#
#
# # Loading packages
# + colab={"base_uri": "https://localhost:8080/", "height": 0} id="Dl-h3Pk5X0v7" outputId="20ad7c6e-dc07-48a8-eb95-bc349cc2b3df"
# !pip install fastai --upgrade
# !pip install dtreeviz
# !pip install fastbook
# + colab={"base_uri": "https://localhost:8080/"} id="hbEoSe-AXiTd" outputId="36b5fa8d-69cc-489d-a783-ec85b8a61bc0"
import fastbook
fastbook.setup_book()
# + id="zBZBhbi7X5Zk"
from fastbook import *
from pandas.api.types import is_string_dtype, is_numeric_dtype, is_categorical_dtype
from fastai.tabular.all import *
from sklearn.ensemble import RandomForestRegressor
from sklearn.tree import DecisionTreeRegressor
from sklearn.svm import SVR
from dtreeviz.trees import *
from IPython.display import Image, display_svg, SVG
from collections import Counter
import seaborn as sns
import xgboost
from xgboost import XGBRegressor
pd.options.display.max_rows = 20
pd.options.display.max_columns = 8
# + [markdown] id="vSlhUWpYbBRx"
# # Loading Data 1
# + id="dbcVPp4oY5xW"
path = "/content/drive/MyDrive/archivos_tfm/drug_comb_commutative.csv"
df_drug_comb = pd.read_csv(path, index_col = 0)
# + colab={"base_uri": "https://localhost:8080/", "height": 392} id="eXDnZtsmF3qF" outputId="2258d5b3-58d3-4a8a-80a2-3f632b536546"
df_drug_comb.head(10)
# + colab={"base_uri": "https://localhost:8080/"} id="fPGJFbpyZLqR" outputId="f64cf054-9497-438f-8766-d16fbf2335e7"
df_drug_comb.columns
# + colab={"base_uri": "https://localhost:8080/"} id="MP0jkB_RaX4r" outputId="6b168d54-8b56-46b1-f19b-3489481d41a3"
print("Splits: ",df_drug_comb['Dataset'].unique())
print("Challenge: ",df_drug_comb['Challenge'].unique())
# + [markdown] id="nlZgoV5wGV0t"
# In this dataset we have a train/test/Leaderboard split. Also, all of our data refers to challenge 1, so we can just drop that column later.
# + colab={"base_uri": "https://localhost:8080/", "height": 391} id="FQ9zqo0Xae-N" outputId="26c75c2a-9db4-426b-8f0c-57e3bbcfe204"
# A little bit of data analysis
df_drug_comb.describe()
# + [markdown] id="g9EWAuUkG6Qp"
# We will choose to train only on high-quality data, meaning QA=1.
# + colab={"base_uri": "https://localhost:8080/"} id="uYKILf-bGoOY" outputId="8ed5e315-f416-40ac-dad8-bfb5538441b0"
df_drug_comb = df_drug_comb[df_drug_comb["QA"] == 1]
print("Values of QA:", df_drug_comb["QA"].unique())
# + [markdown] id="7NQy1QElH-kd"
# We drop the QA and Challenge columns; the Combination ID is kept for now.
# + id="cmzA0PWv3LdN" outputId="bc02143d-fd93-4228-ecd9-342046ad8c79" colab={"base_uri": "https://localhost:8080/"}
df_drug_comb.columns
# + id="EPgTGfks4giw" outputId="eb530a9b-ac11-43a2-9ea2-03f51b49043c" colab={"base_uri": "https://localhost:8080/", "height": 398}
df_drug_comb.drop(['QA','Challenge'], axis = 1, inplace = True)
# + [markdown] id="qH5bqtj949Nd"
# It gives a warning, but it has successfully dropped the QA and Challenge columns from the DataFrame.
# + [markdown] id="8NCwHPhKltKw"
# # Loading Data 2
# + id="EjNzp1dz3SwW" outputId="4f1e2815-0920-42b6-ea38-e17e35871d93" colab={"base_uri": "https://localhost:8080/", "height": 0}
df_drug_comb.head()
# + id="fbWHe4Md45Cc" outputId="bb51c2d8-21e9-4c1e-972f-42849b7e948c" colab={"base_uri": "https://localhost:8080/"}
df_drug_comb.columns
# + id="xID4hKn8aqjB"
dep_var = 'Synergy score'
procs = [Categorify, FillMissing]
# + colab={"base_uri": "https://localhost:8080/", "height": 0} id="iaib4P60aqls" outputId="778a02a0-fe1c-4026-8a84-ff0f0b4ccc7a"
# We shuffle the data
df_drug_comb = df_drug_comb.sample(frac=1).reset_index(drop=True)
df_drug_comb.head(10)
# + [markdown] id="yT3AeMoEMF2L"
# We create the train/test/LB splits and then create a copy of our dataset without the Combination ID.
# We want to retain the Combination ID because we use it for our Weighted Pearson metric.
# + id="QXWl_Q8gaqr9"
dataset_size = df_drug_comb.shape[0]
complete_list = np.arange(dataset_size, dtype = int)
train_idx = complete_list[df_drug_comb["Dataset"]=="train"]
test_idx = complete_list[df_drug_comb["Dataset"]=="test"]
LB_idx = complete_list[df_drug_comb["Dataset"]=="LB"]
# For now we ignore the LB split
splits = (list(train_idx),list(test_idx))
df_drug_comb.drop(['Dataset'], axis = 1, inplace = True)
df_nocomb = df_drug_comb.drop(['Combination ID'], axis = 1)
# + colab={"base_uri": "https://localhost:8080/"} id="ZCcJ4PwpMgxl" outputId="82e6cb43-fa8e-4d84-d846-632f315d43a8"
print("With Combination ID: ", df_drug_comb.columns)
print("Without Combination ID:", df_nocomb.columns )
# + id="klLWUj6NaqwE"
cont,cat = cont_cat_split(df_nocomb, 1, dep_var=dep_var)
to = TabularPandas(df_nocomb, procs, cat, cont, y_names=dep_var, splits=splits)
# + colab={"base_uri": "https://localhost:8080/"} id="xX_7MKfocAPd" outputId="5d0bab33-fab0-432e-ef9c-42431ce4219c"
len(to.train),len(to.valid)
# + id="MKbl2kp9jNoB"
xs,y = to.train.xs,to.train.y
valid_xs,valid_y = to.valid.xs,to.valid.y
# + [markdown] id="OPbQBeyUQLpO"
# ### Understanding xs, y, valid_xs, valid_y
# + colab={"base_uri": "https://localhost:8080/"} id="jdsTUQD-QVus" outputId="f1642c7a-0162-44b5-816b-18b4d2a7d5c6"
xs.columns
# + colab={"base_uri": "https://localhost:8080/", "height": 0} id="0AIW1xP5QeU4" outputId="286afc89-6b09-4ab1-df2a-add17ff0064f"
xs.head(10)
# + [markdown] id="utFj5mGCQhCu"
# It converts categorical variables into numerical ones.
# + colab={"base_uri": "https://localhost:8080/"} id="81yDrJF2Q5jz" outputId="2a02de3b-57ae-400d-f463-e4c00d790eb2"
to.classes["Compound A"]
# + id="UleG8u1o5W20" outputId="e161fb08-3f17-405d-8ad6-e5c5a8ef1b48" colab={"base_uri": "https://localhost:8080/"}
df_nocomb["Compound A"].unique()
# + [markdown] id="TWYg0z3n5dve"
# When we inspect to.classes, a "#na#" item appears that is not in the original DataFrame. Ask Pablo about that.
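# + [markdown]
# A likely explanation (an assumption on my part, sketched below without calling fastai):
# Categorify reserves an extra "#na#" category at index 0 so that missing or previously
# unseen values can still be encoded at inference time. A minimal pure-Python sketch of
# that convention, using hypothetical drug names:

```python
import numpy as np

def categorify(train_values):
    # Build a category-to-code map with a reserved "#na#" slot at index 0,
    # mimicking (not calling) fastai's Categorify convention.
    classes = ["#na#"] + sorted(set(train_values))
    code = {c: i for i, c in enumerate(classes)}
    return classes, code

def encode(values, code):
    # Unseen or missing values fall back to the "#na#" code (0).
    return np.array([code.get(v, 0) for v in values])

classes, code = categorify(["ADAM17", "AKT", "BCL2"])
print(classes)                             # ['#na#', 'ADAM17', 'AKT', 'BCL2']
print(encode(["AKT", "NEW_DRUG"], code))   # [2 0]
```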
# + colab={"base_uri": "https://localhost:8080/"} id="Lwd9IVhoSMci" outputId="ebecd0b1-bd42-40a6-eb2e-06b695fe6b2c"
y
# + [markdown] id="ws7OIW92jwMc"
# We create a permuted valid_xs to make the evaluation commutative.
# + id="k3CFtlCCjn1K"
valid_xs_perm = pd.DataFrame()
valid_xs_perm[['Cell line name', 'Compound A',
'Compound B', 'Max. conc. A',
'Max. conc. B', 'IC50 A',
'H A', 'Einf A',
'IC50 B', 'H B', 'Einf B']] = valid_xs[['Cell line name', 'Compound B',
'Compound A', 'Max. conc. B',
'Max. conc. A', 'IC50 B',
'H B', 'Einf B',
'IC50 A', 'H A', 'Einf A']]
# + [markdown] id="eWyEElmulBj6"
# Sanity check: let's verify that valid_xs_perm is the permuted version of valid_xs.
# + id="_VvFhcUlkRq-" outputId="9d0d9b36-2849-49d4-d336-6893cccb56db" colab={"base_uri": "https://localhost:8080/"}
print(valid_xs_perm.head(5))
print(valid_xs.head(5))
# + [markdown] id="y6ktNAlyhbnK"
# # Baseline model: mean and median
# + [markdown] id="bUXc3HsciFpi"
# Let's see how well a model performs when its only information is the mean or the median of the train set.
# + colab={"base_uri": "https://localhost:8080/"} id="4TGVdEKPhmGl" outputId="411bd7ac-f596-4c31-c12c-d355c3591e33"
mean = np.mean(y)
median = np.median(y)
print(f" Median = {median} \n Mean = {mean}")
# + [markdown] id="rXq3_BXXi_DE"
# We create our metrics
# + id="JUOxhFTbi-BK"
def r_mse(pred,y): return round(math.sqrt(((pred-y)**2).mean()), 6)
def m_rmse(m, xs, y): return r_mse(m.predict(xs), y)
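# + [markdown]
# As a quick worked example with toy numbers (not dataset values): for predictions
# [1, 2, 3] against targets [1, 2, 5] the squared errors are 0, 0 and 4, so the RMSE
# is sqrt(4/3) ≈ 1.1547. The metric is restated here so the example is self-contained:

```python
import math
import numpy as np

def r_mse(pred, y):
    # Same definition as the cell above: root mean squared error, rounded to 6 digits.
    return round(math.sqrt(((pred - y) ** 2).mean()), 6)

pred = np.array([1.0, 2.0, 3.0])
targ = np.array([1.0, 2.0, 5.0])
print(r_mse(pred, targ))  # 1.154701
```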
# + colab={"base_uri": "https://localhost:8080/"} id="XQL79-SFi3_u" outputId="b61708e1-6778-465d-c07e-35cf706fcd76"
error_mean = r_mse(mean, valid_y)
error_median = r_mse(median, valid_y)
print(f" Error Median = {error_median} \n Error Mean = {error_mean}")
# + [markdown] id="7VjlAOOhShYj"
# However, we now know that the correct metric is going to be a weighted Pearson correlation coefficient, which takes into account the number of cell lines used per drug pair experiment. It will give 0 for uncorrelated data.
#
# What we want to measure is if the experimental and predicted synergy scores are correlated for every drug pair.
# + id="oJfJXt4xKAJ0"
def weighted_pearson(df_dc_test, y_pred):
"""
Computes the weighted Pearson correlation coefficient and a DataFrame of
the individual Pearson coefficients of each combination
INPUTS:
df_dc_test: dataset used for inference. We only use the combination
id and the observed synergy.
y_pred: predictions of the model"""
y_obs = np.asarray(df_dc_test["Synergy score"])
cl_count = Counter(df_dc_test["Combination ID"])
id_list = list(cl_count.keys())
rho_list = list()
numerator = 0
denominator = 0
for ids in id_list:
"We compute the appropiate mask, since combinations are not"
"grouped in the dataset"
condition = df_dc_test["Combination ID"] == ids
pearson = np.corrcoef(y_pred[condition], y_obs[condition])
"pearson is a correlation matrix, we want an off-diagonal term"
rho_list.append(pearson[0,1])
"Number of times each drug combination appears"
rep = np.sum(condition)
"np.sqrt(rep-1) is the relative weight of the drug pair"
numerator = numerator + np.sqrt(rep-1)*pearson[0,1]
denominator = denominator + np.sqrt(rep-1)
weighted_pear = numerator / denominator
pear_weights_df = pd.DataFrame({"Combination ID" : id_list ,
"n_cl" : list(cl_count.values()),
"Pearson coefficient" : rho_list })
return weighted_pear, pear_weights_df
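# + [markdown]
# The weighting scheme is easy to check on synthetic data, away from the DataFrame
# plumbing (toy numbers, assumed for illustration): a combination observed on n cell
# lines gets weight sqrt(n-1), so a perfectly correlated pair seen on 3 cell lines and
# a perfectly anti-correlated pair seen on 5 combine to (sqrt(2)·1 + 2·(−1)) / (sqrt(2) + 2) ≈ −0.17:

```python
import numpy as np

def toy_weighted_pearson(groups):
    # groups: list of (y_pred, y_obs) arrays, one per drug combination.
    num = den = 0.0
    for y_pred, y_obs in groups:
        rho = np.corrcoef(y_pred, y_obs)[0, 1]
        w = np.sqrt(len(y_obs) - 1)   # weight grows with the number of cell lines
        num += w * rho
        den += w
    return num / den

combo_a = (np.array([1.0, 2.0, 3.0]),
           np.array([10.0, 20.0, 30.0]))       # rho = +1, n = 3
combo_b = (np.array([1.0, 2.0, 3.0, 4.0, 5.0]),
           np.array([5.0, 4.0, 3.0, 2.0, 1.0]))  # rho = -1, n = 5

print(round(toy_weighted_pearson([combo_a, combo_b]), 4))  # -0.1716
```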
# + [markdown] id="iZzBjwyAiMXE"
# We make a function that accounts for possible NaN values in the dataset and sets them to 0.
#
# + id="2T4kogQ_iaQc"
def nan_weighted_pearson(df_dc_test, y_pred):
"""
Computes the weighted Pearson correlation coefficient and a DataFrame of
the individual Pearson coefficients of each combination,
setting nan's to 0.
INPUTS:
df_dc_test: dataset used for inference. We only use the combination
id and the observed synergy.
y_pred: predictions of the model"""
y_obs = np.asarray(df_dc_test["Synergy score"])
cl_count = Counter(df_dc_test["Combination ID"])
id_list = list(cl_count.keys())
rho_list = list()
numerator = 0
denominator = 0
for ids in id_list:
"We compute the appropiate mask, since combinations are not"
"grouped in the dataset"
condition = df_dc_test["Combination ID"] == ids
pearson = np.corrcoef(y_pred[condition], y_obs[condition])
"pearson is a correlation matrix, we want an off-diagonal term"
if np.isnan(pearson[0,1]):
rho_i = 0
else:
rho_i = pearson[0,1]
rho_list.append(rho_i)
"Number of times each drug combination appears"
rep = np.sum(condition)
"np.sqrt(rep-1) is the relative weight of the drug pair"
numerator = numerator + np.sqrt(rep-1)*rho_i
denominator = denominator + np.sqrt(rep-1)
weighted_pear = numerator / denominator
pear_weights_df = pd.DataFrame({"Combination ID" : id_list ,
"n_cl" : list(cl_count.values()),
"Pearson coefficient" : rho_list })
return weighted_pear, pear_weights_df
# + [markdown] id="KO592Uhtq1PV"
# And one that returns only the metric, for fast training:
# + id="islIw6uaq0l-"
def wpc_score(df_dc_test, y_pred):
"""
Computes the weighted Pearson correlation coefficient and a DataFrame of
the individual Pearson coefficients of each combination,
setting nan's to 0.
INPUTS:
df_dc_test: dataset used for inference. We only use the combination
id and the observed synergy.
y_pred: predictions of the model"""
y_obs = np.asarray(df_dc_test["Synergy score"])
cl_count = Counter(df_dc_test["Combination ID"])
id_list = list(cl_count.keys())
rho_list = list()
numerator = 0
denominator = 0
for ids in id_list:
"We compute the appropiate mask, since combinations are not"
"grouped in the dataset"
condition = df_dc_test["Combination ID"] == ids
pearson = np.corrcoef(y_pred[condition], y_obs[condition])
"pearson is a correlation matrix, we want an off-diagonal term"
if np.isnan(pearson[0,1]):
rho_i = 0
else:
rho_i = pearson[0,1]
"Number of times each drug combination appears"
rep = np.sum(condition)
"np.sqrt(rep-1) is the relative weight of the drug pair"
numerator = numerator + np.sqrt(rep-1)*rho_i
denominator = denominator + np.sqrt(rep-1)
weighted_pear = numerator / denominator
return weighted_pear
# + colab={"base_uri": "https://localhost:8080/", "height": 0} id="uPbu8whvKufF" outputId="94dbb8f4-d9b4-4d40-afb2-5a506d7896f4"
test_df = df_drug_comb.iloc[test_idx]
test_df
# + [markdown] id="ylLB1aNsKYM0"
# **Sanity check:** we try the metric by predicting random values around the mean synergy score of the train split. As these values are random, they will be statistically uncorrelated with the experimental values, so we should obtain a weighted Pearson score close to 0.
# + colab={"base_uri": "https://localhost:8080/"} id="Iivs-KUfLKdP" outputId="9e6a2998-c742-4f4e-a0a1-a7b822a4efe0"
n_exp = len(test_idx)
y_pred = np.ones(n_exp)* mean + np.random.random(n_exp)*1000
weighted_pear, pear_weights_df = weighted_pearson(test_df, y_pred)
print("Weighted Pearson: ", weighted_pear)
# + [markdown] id="hHllP7g1N-is"
# It's very small and oscillates around 0.
# + [markdown] id="7QyARb9HbHrh"
#
#
#
# # Decision Trees
#
# + id="TR2cdnjfcASV"
# Now that we have preprocessed our dataset, we build the tree
Tree = DecisionTreeRegressor(max_leaf_nodes=4)
Tree.fit(xs, y);
# + colab={"base_uri": "https://localhost:8080/", "height": 603} id="cr-4OZjzcAWQ" outputId="feefe2e9-0fe1-4d9f-d391-d4cff9f6ffb7"
draw_tree(Tree, xs, size=10, leaves_parallel=True, precision=2)
# + colab={"base_uri": "https://localhost:8080/", "height": 407} id="BNZrcNgkcAby" outputId="355d4a3a-9373-496e-8897-ea66d7bbdfd3"
samp_idx = np.random.permutation(len(y))[:500]
dtreeviz(Tree, xs.iloc[samp_idx], y.iloc[samp_idx], xs.columns, dep_var,
fontname='DejaVu Sans', scale=1.6, label_fontsize=10,
orientation='LR')
# + [markdown] id="rC--Kb_ufxUX"
# Let's now have the decision tree algorithm build a bigger tree. Here, we are not passing in any stopping criteria such as max_leaf_nodes:
#
# + id="eHN2f2X1cAfC"
m = DecisionTreeRegressor()
m.fit(xs, y);
# + colab={"base_uri": "https://localhost:8080/"} id="QIHx4KFqcAiT" outputId="eefe72ff-aae9-469f-d5e4-b756771886c9"
# In the training set
m_rmse(m, xs, y)
# + [markdown] id="MVw3CJtSgBvH"
# This just means that the model fits well on the training dataset, but we have to check how well it generalizes to unseen data:
# + colab={"base_uri": "https://localhost:8080/"} id="pnZboV44cAoM" outputId="ead8f6eb-97fa-4a60-c23c-23a57534cc57"
m_rmse(m, valid_xs, valid_y)
# + [markdown] id="hG5fJ9sogiSL"
# Now we will check for overfitting:
# + colab={"base_uri": "https://localhost:8080/"} id="OW1lrAwXcAuZ" outputId="c6d18ec5-d06f-4d07-e7d2-f14b980c54ca"
m.get_n_leaves(), len(xs)
# + [markdown] id="CZPZh1WBguJU"
# We see that it has as many leaves as data points; let's see what happens if we restrict the model.
# + colab={"base_uri": "https://localhost:8080/"} id="SF5qLstWcAw_" outputId="abd2e4b6-b197-478b-efab-69df027448fd"
m = DecisionTreeRegressor(min_samples_leaf=25)
m.fit(to.train.xs, to.train.y)
m_rmse(m, xs, y), m_rmse(m, valid_xs, valid_y)
# + colab={"base_uri": "https://localhost:8080/"} id="0QXfnHkbhHwU" outputId="8320ff43-082b-42f6-da3a-b30453ee0946"
m.get_n_leaves()
# + [markdown] id="TFMbvDPUg_Fa"
# **The RMSE is almost the same as the baseline model. That's not good, let's try some hyperparameter tuning.**
# + id="gEVR96chkHCj"
leafs = np.arange(500)+1
error_list = list()
for n_leafs in leafs:
m = DecisionTreeRegressor(min_samples_leaf=n_leafs)
m.fit(to.train.xs, to.train.y)
error_list.append( m_rmse(m, valid_xs, valid_y) )
# + colab={"base_uri": "https://localhost:8080/"} id="7i8sS2cdlIa2" outputId="98842fe2-2fa6-490e-8938-11767947b333"
error_list = np.asarray(error_list)
best_error = min(error_list)
best_leaf = leafs[error_list== min(error_list)][0]
print(f"Best number of leafs = {best_leaf} \n Error = {best_error}")
# + [markdown] id="jjR0Rqz_mRWZ"
# Let's check our metric
# + colab={"base_uri": "https://localhost:8080/"} id="1jLesdi6PLNT" outputId="0ac84925-d9ea-476f-a7df-658cf26fe1be"
m = DecisionTreeRegressor(min_samples_leaf = best_leaf)
m.fit(to.train.xs, to.train.y)
y_pred = m.predict(valid_xs)
print(test_df.shape)
weighted_pear, pear_weights_df = weighted_pearson(test_df, y_pred)
print("Weighted Pearson: ", weighted_pear)
print(pear_weights_df)
# + [markdown] id="3L-HgaoBR3Hc"
# For some drug combinations we obtain NaNs. My hypothesis is that for some drug pairs the predicted value is the same for all cell lines.
# + colab={"base_uri": "https://localhost:8080/", "height": 238} id="EwWXYZ4vP8mA" outputId="7f6fe659-1e03-4dd4-bed1-d554cc09bde1"
check_nan = pear_weights_df["Pearson coefficient"].isnull()
combination_index = np.arange(pear_weights_df.shape[0], dtype = int)
nan_combinations = pear_weights_df.iloc[combination_index[check_nan]]
nan_combinations
# + [markdown] id="u3afK6hneB6i"
# These are the drug combinations that have a NaN Pearson correlation. We pick the first one and check for the predicted values.
# + colab={"base_uri": "https://localhost:8080/", "height": 267} id="0CHXopsddEvN" outputId="9ff87b07-4409-4ca2-9f33-fa0f74d7cb6d"
nan_mask = test_df["Combination ID"] == "ALK.CSNK2A1_2"
ALK_CSNK2A1_2_df = test_df[nan_mask]
ALK_CSNK2A1_2_df
# + colab={"base_uri": "https://localhost:8080/"} id="31UqvKYkfc-q" outputId="e9af6e77-fe63-4395-fb97-1be4f5c97007"
y_pred_ALK_CSNK2A1_2 = y_pred[nan_mask]
y_pred_ALK_CSNK2A1_2
# + [markdown] id="6i2ZTkuLfoTO"
# Just as I imagined. We are predicting the same values over and over. That's why our metric fails.
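# + [markdown]
# The mechanism is easy to reproduce in isolation (made-up numbers below):
# np.corrcoef divides by the standard deviation of each input, and a constant
# prediction vector has zero standard deviation, so the coefficient comes out as NaN:

```python
import numpy as np

y_obs = np.array([12.3, -4.1, 30.7, 8.9])   # made-up observed synergies
y_pred_const = np.full(4, 17.05)            # identical prediction everywhere

with np.errstate(invalid="ignore", divide="ignore"):  # zero variance -> 0/0 division
    rho = np.corrcoef(y_pred_const, y_obs)[0, 1]

print(rho)            # nan
print(np.isnan(rho))  # True
```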
# + colab={"base_uri": "https://localhost:8080/", "height": 287} id="aC5PITsxfvvT" outputId="fc8697cf-7ea0-47e3-bcc3-7f35039cf2b8"
plt.plot(y_pred_ALK_CSNK2A1_2 , ALK_CSNK2A1_2_df["Synergy score"] , "bo")
plt.ylabel("Experimental Synergy", fontsize = 15)
plt.xlabel("Predicted Synergy", fontsize = 15)
plt.savefig("predicted_experimental_synergy_decision_tree_nan.eps", format = 'eps', dpi=300)
# + [markdown] id="Wx1SnAoqgO5u"
# Now with our metric function that takes nan's into account:
#
# + id="35Pt-Inej7lt" outputId="888e71d6-ac10-46ac-cdf6-08edbcb45dc6" colab={"base_uri": "https://localhost:8080/"}
weighted_pear, pear_weights_df = nan_weighted_pearson(test_df, y_pred)
print("Weighted Pearson: ", weighted_pear)
print(pear_weights_df)
# + [markdown] id="95Xyrl99kRs8"
# Just to make sure we check for the previous rows of pear_weights_df where there were nan's.
# + id="08KuIvskkaJ9" outputId="ac22275b-0a30-485e-cd17-d964c0baa4e6" colab={"base_uri": "https://localhost:8080/", "height": 238}
nan_combinations = pear_weights_df.iloc[combination_index[check_nan]]
nan_combinations
# + [markdown] id="dPaAEGt7kgG7"
# The function works.
# + [markdown] id="wAj2SSjumfVE"
# # Random Forest
# + id="sQBJ2ilNmW7R"
def rf(xs, y, n_estimators=100, max_samples=300,
max_features=0.5, min_samples_leaf=5, **kwargs):
return RandomForestRegressor(n_jobs=-1, n_estimators=n_estimators,
max_samples=max_samples, max_features=max_features,
min_samples_leaf=min_samples_leaf, random_state = 42 ,oob_score=True).fit(xs, y)
# + colab={"base_uri": "https://localhost:8080/"} id="hYtae3lHmkqG" outputId="14b4185f-d6ac-4fc6-ce06-67b832266bf9"
m = rf(xs, y);
m_rmse(m, xs, y), m_rmse(m, valid_xs, valid_y)
# + [markdown] id="tp8zQ3qimuk5"
# A little better than the Tree regressor, but not that great.
# + id="LT36jmUWOIue" outputId="1c216970-e990-4f4b-c59a-13afd5d4becd" colab={"base_uri": "https://localhost:8080/"}
y_pred = m.predict(valid_xs)
weighted_pear, pear_weights_df = weighted_pearson(test_df, y_pred)
print("Weighted Pearson: ", weighted_pear)
print(pear_weights_df)
# + [markdown] id="pODZ5XD7Q6lF"
# We see that Random Forest gives better prediction than Decision Trees.
#
# Let's try to obtain meaningful info. For example, let's order drug combinations from highest to lowest correlation
# + id="TKvr1J5oQ5sL" outputId="f25fe45a-c26f-418f-b8cb-e97638b074ca" colab={"base_uri": "https://localhost:8080/"}
ordered_df = pear_weights_df.sort_values(by = "Pearson coefficient",
ascending = False)
print(ordered_df)
# + [markdown] id="PFKEZaW2SfS6"
# Now the most uncorrelated:
# + id="WRMCLSt7ShRt" outputId="0803211e-e8bb-46b0-c999-24cacd943e2b" colab={"base_uri": "https://localhost:8080/", "height": 363}
ordered_df["abs Pearson"] = np.abs(ordered_df["Pearson coefficient"])
ordered_df = ordered_df.sort_values(by ="abs Pearson",
ascending = True)
ordered_df.head(10)
# + id="msWcS-3CT66x" outputId="af0c382a-bf17-41aa-b853-e5760f5c2dfd" colab={"base_uri": "https://localhost:8080/", "height": 458}
sns.histplot(data = ordered_df["Pearson coefficient"], bins = 10)
plt.savefig("pearson_random_forest.eps", format = 'eps', dpi=300)
# + [markdown] id="3e70Tq81k2pF"
# Let's do a bit of hyperparameter tuning to find the best settings for the random forest.
# + id="4aQWHQbIVxiJ"
def rf_training(xs, y, test_df, n_estimators=100, max_samples=300,
max_features=0.5, min_samples_leaf=5, random_state = 42, **kwargs):
m = RandomForestRegressor(n_jobs=-1, n_estimators=n_estimators,
max_samples=max_samples, max_features=max_features,
min_samples_leaf=min_samples_leaf,random_state = random_state ,
oob_score=True)
m.fit(xs, y)
y_pred = m.predict(valid_xs)
weighted_pear = wpc_score(test_df, y_pred)
return weighted_pear
# + [markdown] id="5hrT19Z9mQdL"
# Let's try our function and see if it reproduces previous results.
# + id="4xFq4WjQmYqf" outputId="4e84bf4b-f175-4677-e424-03d0b915bd2b" colab={"base_uri": "https://localhost:8080/"}
rf_training(xs, y, test_df)
# + [markdown] id="micGOe7ympVn"
# Close enough, the small difference can be explained by the fact that we didn't fix the random seed. Let's try some hyperparameter tuning.
# + [markdown] id="Ka6jDNBWuGLn"
# # Tabular learner fastai
# As our next model we will try a Tabular Learner from fastai.
# The first thing we have to do is normalize the data. We create another TabularPandas object:
#
# + id="bjOxq4hJwG4n"
procs_nn = [Categorify, FillMissing, Normalize]
to_nn = TabularPandas(df_nocomb, procs, cat, cont, y_names=dep_var,
splits=splits)
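# + [markdown]
# As a side note on what the Normalize proc does (a z-scoring sketch on hypothetical
# values, not fastai's actual implementation): continuous columns are standardized with
# statistics computed on the train split only, and those same statistics are reused on
# the validation split:

```python
import numpy as np

# Hypothetical continuous column, split into train and validation.
train_col = np.array([0.5, 1.5, 2.5, 3.5])
valid_col = np.array([2.0, 4.0])

# z-score with statistics from the *train* split only...
mu, sigma = train_col.mean(), train_col.std()
train_norm = (train_col - mu) / sigma
# ...and reuse the same statistics on the validation split (no peeking).
valid_norm = (valid_col - mu) / sigma

print(abs(round(train_norm.mean(), 6)), round(train_norm.std(), 6))  # 0.0 1.0
```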
# + [markdown] id="aXiIhTp-wdcJ"
# Now we create our dataloader and check for the maximum and minimum of our data. This is useful as we will limit the output interval of our model
# + id="UA1Hz-q7wduJ" outputId="584174e2-d7bc-450f-8177-2ee66696dc22" colab={"base_uri": "https://localhost:8080/"}
dls = to_nn.dataloaders(1024)
y = to_nn.train.y
y.min(),y.max()
# + [markdown] id="V2ZX93Nvw6Wo"
# We create our learner and look for an optimal learning rate with lr_find.
# + id="YjrM0YvhwvAS" outputId="b39f9789-2caf-4c1a-d391-4b72fc9e5098" colab={"base_uri": "https://localhost:8080/", "height": 306}
learn = tabular_learner(dls, y_range=(y.min(),y.max()), layers=[200,50, 20],
n_out=1, loss_func=F.mse_loss)
learn.lr_find()
# + [markdown] id="8aYR-o7kxBpl"
# Now we train our learner with one-cycle scheduling for 50 epochs.
#
#
# + id="S-GNRx1zxCXl" outputId="0a2b8b0a-fe1f-491f-c42b-21b40d149c36" colab={"base_uri": "https://localhost:8080/", "height": 1000}
learn.fit_one_cycle(50, 1e-3)
# + [markdown] id="dbixBTvgxP3U"
# Lastly, we obtain our predictions and compute the WPC metric. Since the fastai learner outputs a tensor, we have to flatten it.
#
# + id="HSax3LJBxVoX" outputId="a6d16743-ac2c-46a5-f6ef-322b0c3530de" colab={"base_uri": "https://localhost:8080/", "height": 34}
y_pred, targs = learn.get_preds()
y_pred = np.asarray(y_pred[:,0])
wpc_score(test_df, y_pred)
# + id="QgL2xzGo2I2U" outputId="6880ff41-c861-4d32-a79e-f396907d6972" colab={"base_uri": "https://localhost:8080/"}
weighted_pear, pear_weights_df = nan_weighted_pearson(test_df, y_pred)
print("Weighted Pearson: ", weighted_pear)
print(pear_weights_df)
# + id="lYQZQUVN3QYz" outputId="2a23065a-e321-499b-9caa-57b846ee3fa6" colab={"base_uri": "https://localhost:8080/", "height": 458}
sns.histplot(data = pear_weights_df["Pearson coefficient"], bins = 10)
plt.savefig("pearson_tabular_learner.eps", format = 'eps', dpi=300)
# + [markdown] id="q40KDbpj4gOp"
# # Ensemble: Tabular Learner + Random Forest.
#
# We will average predictions between Tabular Learner and Random Forest to create a superior model.
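# + [markdown]
# The intuition behind averaging, shown on toy numbers rather than the actual models:
# when the two predictors make errors that partially cancel, their mean lands closer to
# the truth than either one. In the extreme case of perfectly anti-correlated errors
# the average recovers the target exactly:

```python
import numpy as np

y_true = np.array([3.0, -1.0, 7.5, 0.2])
noise = np.array([1.0, -2.0, 0.5, 3.0])

pred_model_1 = y_true + noise   # one model overshoots...
pred_model_2 = y_true - noise   # ...the other undershoots
pred_avg = (pred_model_1 + pred_model_2) / 2

print(np.allclose(pred_avg, y_true))  # True
```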
# + colab={"base_uri": "https://localhost:8080/"} id="aHQPJUo9-X9i" outputId="e8fee027-3fd1-4deb-88a0-58b8e5fa2ab6"
"Random Forest"
m_rf = rf(xs, y);
"We take the two permutations and average"
y_pred_rf_1 = m_rf.predict(valid_xs)
y_pred_rf_2 = m_rf.predict(valid_xs_perm)
y_pred_rf = (y_pred_rf_1 + y_pred_rf_2)/2
wpc_score(test_df, y_pred_rf)
# + [markdown] id="77C3qHBWL_hv"
# Let's see if performance improves when we use the Pearson correlation coefficient as a metric together with the EarlyStoppingCallback and SaveModelCallback callbacks.
# + id="NPoY4i3p-5M5"
"Tabular learner"
procs_nn = [Categorify, FillMissing, Normalize]
"For some reason the prediction improves when I don't normalize"
"so I will use procs instead of procs_nn"
to_nn = TabularPandas(df_nocomb, procs, cat, cont, y_names=dep_var,
splits=splits)
dls = to_nn.dataloaders(1024)
y = to_nn.train.y
y.min(),y.max()
callbacks = [EarlyStoppingCallback(
monitor='pearsonr',
min_delta=0.01, patience=20), SaveModelCallback(monitor='pearsonr')]
learn = tabular_learner(dls, y_range=(y.min(),y.max()), layers=[30,20, 10],
n_out=1, loss_func=F.mse_loss,
metrics = PearsonCorrCoef(), cbs = callbacks)
#learn.lr_find()
# + id="J-zSJQ65CyCD"
# + colab={"base_uri": "https://localhost:8080/", "height": 1000} id="LDMgtYcP_JHc" outputId="3e1cb43f-fbb6-4580-c879-e26adfc40c6b"
learn.fit(n_epoch = 300, lr = 0.001)
learn.recorder.plot_loss()
# + colab={"base_uri": "https://localhost:8080/", "height": 34} id="u-6_UF2KAbh5" outputId="843edbda-7eea-4181-e3aa-285659bce093"
y_pred_nn_1, targs = learn.get_preds()
y_pred_nn_1 = np.asarray(y_pred_nn_1[:,0])
wpc_score(test_df, y_pred_nn_1)
# + [markdown] id="Q-wFo8BlVJ8T"
# Let's save this learner and try to optimise it further.
# + colab={"base_uri": "https://localhost:8080/"} id="gUix4ge0VJT3" outputId="40f8898b-3fff-4cf3-e81b-b87e66067562"
learn.save("/content/drive/MyDrive/archivos_tfm/tabular_model_commutative.pth")
# + id="fD0qjK8Op178" outputId="4765d841-5314-47e0-e693-3959244c3f9e" colab={"base_uri": "https://localhost:8080/", "height": 34}
"And load it"
learn.load("/content/drive/MyDrive/archivos_tfm/tabular_model_commutative.pth")
y_pred_nn_1, targs = learn.get_preds()
y_pred_nn_1 = np.asarray(y_pred_nn_1[:,0])
wpc_score(test_df, y_pred_nn_1)
# + id="Hhmh42psqv2b" outputId="8bb9dc0b-8b7e-4266-9d4e-2b88096c6469" colab={"base_uri": "https://localhost:8080/", "height": 467}
valid_xs_perm_nn = to_nn.valid.xs.copy()
valid_xs_perm_nn[[ 'Compound A', 'Compound B',
'Max. conc. A',
'Max. conc. B',
'IC50 A', 'H A',
'Einf A', 'IC50 B',
'H B', 'Einf B']] = to_nn.valid.xs[[ 'Compound B', 'Compound A',
'Max. conc. B',
'Max. conc. A',
'IC50 B', 'H B',
'Einf B', 'IC50 A',
'H A', 'Einf A']]
valid_xs_perm_nn
# + id="XIh18nmCr1JT" outputId="bd949fb8-e440-4c4e-ff97-ac91ba0a7061" colab={"base_uri": "https://localhost:8080/", "height": 380}
y_pred_nn_2, targs = learn.predict(valid_xs_perm_nn)
y_pred_nn_2 = np.asarray(y_pred_nn_2[:,0])
wpc_score(test_df, y_pred_nn_2)
# + [markdown] id="gu9rqspPtwRh"
# I have not been able to use valid_xs_perm_nn for prediction with the tabular learner.
# + [markdown] id="EtCn2gjPBaUZ"
# All together:
# + colab={"base_uri": "https://localhost:8080/"} id="VxUhWS1mBcvq" outputId="8976270d-e454-48a7-87ca-24e3e95d72ad"
y_pred_avg = (y_pred_rf + y_pred_nn_1 )/ 2
wpc_score(test_df, y_pred_avg)
# + [markdown] id="D23J3qatHjzi"
# # XGBOOST
#
# Out of curiosity, let's try XGBoost and ensemble it.
# + colab={"base_uri": "https://localhost:8080/"} id="MVPmiO6dHsrs" outputId="0869d41c-c94f-414f-b2cf-90f7934e41f5"
m_xgb = XGBRegressor(n_estimators=200, max_depth=7, eta=0.1,
subsample=0.7, colsample_bytree=0.8, random_state = 42)
m_xgb.fit(xs, y)
y_pred_xgb_1 = m_xgb.predict(valid_xs)
y_pred_xgb_2 = m_xgb.predict(valid_xs_perm)
y_pred_xgb = (y_pred_xgb_1 + y_pred_xgb_2) / 2  # average the two permutations
wpc_score(test_df, y_pred_xgb)
# + [markdown] id="j7Sz0PR2EdXr"
# Now we ensemble RF, XGBoost and Tabular Learner.
# + colab={"base_uri": "https://localhost:8080/"} id="8TF45oemIsm8" outputId="bbf00478-3b04-4d66-eafc-d603e5e904a0"
y_pred_avg = (y_pred_rf + y_pred_nn_1 + y_pred_xgb)/ 3
wpc_score(test_df, y_pred_avg)
# + [markdown] id="mqY_UQK1yCam"
# And only Random Forest and XGBoost
# + id="q8qptDBUyBv_" outputId="2d793a30-8c00-428e-8de7-8d819c644d2d" colab={"base_uri": "https://localhost:8080/"}
y_pred_avg = (y_pred_rf + y_pred_xgb)/ 2
wpc_score(test_df, y_pred_avg)
# + [markdown] id="mn2UXjkKYRtk"
#
# + [markdown] id="t2MFnBUFKM3M"
# # Support Vector Regressor
# + id="wAqfFr0_KNQM"
procs_nn = [Categorify, FillMissing, Normalize]
to_nn = TabularPandas(df_nocomb, procs_nn, cat, cont, y_names=dep_var,
splits=splits)
# + colab={"base_uri": "https://localhost:8080/"} id="m2dG4R9kL4wR" outputId="57e2ca1a-2fdc-4d37-b27a-f22db1cd0275"
m_SVR = SVR(kernel="linear")
m_SVR.fit(to_nn.train.xs, to_nn.train.y)
y_pred_SVR = m_SVR.predict(to_nn.valid.xs)
wpc_score(test_df, y_pred_SVR)
# + [markdown] id="7msa9vC9uyFT"
# With both permutations
# + id="H3kkBSGguxPi" outputId="7f7269fd-f1dd-4865-cc5f-afeaca64a262" colab={"base_uri": "https://localhost:8080/"}
m_SVR = SVR(kernel="linear")
m_SVR.fit(xs, y)
y_pred_SVR_1 = m_SVR.predict(valid_xs)
y_pred_SVR_2 = m_SVR.predict(valid_xs_perm)
y_pred_SVR = (y_pred_SVR_1 + y_pred_SVR_2)/2
wpc_score(test_df, y_pred_SVR)
# + [markdown] id="xAAge-wFNxwy"
# Now we ensemble the 4 of them:
# Random Forest + XGBoost + Neural Networks + SVM
# + colab={"base_uri": "https://localhost:8080/"} id="pPTMw5XeNzvZ" outputId="09e2f846-7843-45cb-ee74-a80095b6a721"
y_pred_avg = (y_pred_rf + y_pred_nn_1 + y_pred_xgb + y_pred_SVR)/ 4
wpc_score(test_df, y_pred_avg)
# + [markdown] id="0JpIBVmknM23"
# ## Feature importance
# + colab={"base_uri": "https://localhost:8080/", "height": 363} id="iZUNH_iwnPwR" outputId="920ed9b9-934b-4d16-f09a-5f2ab8d73182"
def rf_feat_importance(m, df):
return pd.DataFrame({'cols':df.columns, 'imp':m.feature_importances_}
).sort_values('imp', ascending=False)
fi = rf_feat_importance(m_rf, xs)
fi[:10]
# + colab={"base_uri": "https://localhost:8080/", "height": 431} id="-irGjzODnXij" outputId="afd15c3d-2a74-46e8-fdba-7c0654bacbbd"
def plot_fi(fi):
return fi.plot('cols', 'imp', 'barh', figsize=(12,7), legend=False)
plot_fi(fi[:30]);
# + [markdown] id="VErRdPP3n8kf"
# Here we don't have a lot of columns, so we don't need
# to drop any of them.
# + [markdown] id="C8VuEmBoP_wS"
# # Creating new features:
#
# We will try to do some feature engineering.
#
# First we are going to create features independent of the order of the drugs, since we want our input to be commutative with respect to the drug order.
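# + [markdown]
# Concretely, any feature built from a per-drug quantity must be a symmetric function
# of the (A, B) pair; the mean and the absolute difference used below are the simplest
# choices. A quick sketch of the invariance on toy values:

```python
def symmetric_features(value_a, value_b):
    # Mean and absolute difference are invariant under swapping the drugs.
    return (value_a + value_b) / 2, abs(value_a - value_b)

# Hypothetical per-drug values (e.g. max. concentrations) for one pair.
print(symmetric_features(1, 4))                              # (2.5, 3)
print(symmetric_features(1, 4) == symmetric_features(4, 1))  # True
```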
# + id="WQZZEx2lQf2D" outputId="57ad709c-109f-4a40-e757-35b56fa029f3" colab={"base_uri": "https://localhost:8080/"}
df_nocomb.columns
# + id="mQca-ln_RL5U" outputId="a6ff2d40-b952-46be-914e-d52df44eafd2" colab={"base_uri": "https://localhost:8080/", "height": 423}
df_new_f = pd.DataFrame()
df_new_f[['Cell line name',
'Compound A', 'Compound B',
'Synergy score']] = df_nocomb[['Cell line name', 'Compound A',
'Compound B', 'Synergy score']]
df_new_f["Av_max_conc"] = (df_nocomb['Max. conc. A'] + df_nocomb['Max. conc. B']) / 2
df_new_f["abs_dif_max_conc"] = np.abs(df_nocomb['Max. conc. A'] - df_nocomb['Max. conc. B'])
df_new_f["Av_IC50"] = (df_nocomb['IC50 A'] + df_nocomb['IC50 B']) / 2
df_new_f["abs_dif_IC50"] = np.abs(df_nocomb['IC50 A'] - df_nocomb['IC50 B'])
df_new_f["Av_H"] = (df_nocomb['H A'] + df_nocomb['H B']) / 2
df_new_f["abs_dif_H"] = np.abs(df_nocomb['H A'] - df_nocomb['H B'])
df_new_f["Av_Einf"] = (df_nocomb['Einf A'] + df_nocomb['Einf B']) / 2
df_new_f["abs_dif_Einf"] = np.abs(df_nocomb['Einf A'] - df_nocomb['Einf B'])
df_new_f
# + id="OpOgxqRIR5T_"
cont,cat = cont_cat_split(df_new_f, 1, dep_var=dep_var)
to_new_f = TabularPandas(df_new_f, procs, cat, cont, y_names=dep_var,
splits=splits)
xs_new_f, y_new_f = to_new_f.train.xs, to_new_f.train.y
valid_xs_new_f, valid_y_new_f = to_new_f.valid.xs, to_new_f.valid.y
# + id="Q-KSzVFlRfvi" outputId="e47eedad-2c86-49b8-d136-5d658ae5f4bb" colab={"base_uri": "https://localhost:8080/", "height": 380}
"Random Forest"
m_rf = rf(xs_new_f, y_new_f, n_estimators=100, max_samples=500,
max_features=0.7, min_samples_leaf=1 , random_state = 42);
y_pred_rf_new_f = m_rf.predict(valid_xs_new_f)
wpc_score(test_df, y_pred_rf_new_f)
# + id="Q6Fp0WOPTx1h"
"XGBoost"
m_xgb = XGBRegressor(n_estimators=200, max_depth=7, eta=0.1,
subsample=0.7, colsample_bytree=0.8, random_state = 42)
m_xgb.fit(xs_new_f, y_new_f)
y_pred_xgb_new_f = m_xgb.predict(valid_xs_new_f)
wpc_score(test_df, y_pred_xgb_new_f)
# + [markdown] id="WnX3wARqTmlq"
# It doesn't improve the predictions. We have to look in another direction.
|
Commutative, Pearson and Callbacks.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + id="wkm0Add8vPyK"
import numpy as np
import time
from agglio_lib import AG_GD, AG_SGD
from agglio_lib import sigmoid, getData
# + id="bAuBfJOKvPyP"
n = 1000
d=50
w_radius = 10
wAst = np.random.randn(d,1)
X = getData(0, 1, n, d)/np.sqrt(d)
w0 =w_radius*np.random.randn(d,1)/np.sqrt(d)
y = sigmoid(np.matmul(X, wAst))
cross_validation=True
B_init_list=[0.0001, 0.001, 0.01, 0.1]
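# `sigmoid` and `getData` come from the local `agglio_lib`; here is a minimal NumPy sketch of what this setup assumes (i.i.d. Gaussian features and a noiseless single-index sigmoid model — the real library's implementation may differ):

```python
import numpy as np

def sigmoid_ref(z):
    """Logistic link: maps any real z into (0, 1)."""
    return 1.0 / (1.0 + np.exp(-z))

def get_data_ref(mu, sigma, n, d, seed=0):
    """Assumed behaviour of getData: an n x d matrix of i.i.d. N(mu, sigma^2) draws."""
    rng = np.random.default_rng(seed)
    return mu + sigma * rng.standard_normal((n, d))

# Labels follow y_i = sigmoid(<x_i, w*>), so every label lies strictly in (0, 1):
d = 5
X = get_data_ref(0, 1, n=100, d=d) / np.sqrt(d)
w_star = np.random.randn(d, 1)
y = sigmoid_ref(X @ w_star)
assert y.shape == (100, 1) and ((y > 0) & (y < 1)).all()
```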
# + id="wraKWRtSJjOI"
#AGGLIO-GD
l2_agd=[]
for B_init in B_init_list:
agd = AG_GD(alpha= 350, B_init=B_init, B_step=1.01 )
agd.fit( X, y.ravel(), w_init = w0.ravel(), w_star = wAst.ravel(), max_iter=600 )
l2_agd.append(agd.distVals[-1])
# + id="vnW07y7W1f-K"
#AGGLIO-SGD
l2_agsgd=[]
for B_init in B_init_list:
agsgd = AG_SGD(alpha= 350, B_init=B_init, B_step=1.01)
agsgd.fit( X, y.ravel(), w_init = w0.ravel(), w_star = wAst.ravel(), max_iter=600, minibatch_size=200 )
l2_agsgd.append(agsgd.distVals[-1])
# + id="Msgb3jzRoUvB" colab={"base_uri": "https://localhost:8080/", "height": 308} executionInfo={"status": "ok", "timestamp": 1641473578336, "user_tz": -330, "elapsed": 1550, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GglpIyRPRShihvW4kAdbmrUFsdl66kS95_t21NR=s64", "userId": "10648551671244950419"}} outputId="687f06bd-f397-4996-ab6e-68f3ee3d89b3"
import matplotlib.ticker as ticker
import matplotlib.pyplot as plt
y_fmt = ticker.FormatStrFormatter('%2.0e')
fig, ax = plt.subplots()
ax.yaxis.set_major_formatter(y_fmt)
plt.plot(B_init_list, l2_agd, label='AGGLIO-GD', color='#1b9e77', linewidth=3)
plt.plot(B_init_list, l2_agsgd, label='AGGLIO-SGD', color='#5e3c99', linewidth=3)
plt.legend()
plt.ylabel("$||w^t-w^*||_2$",fontsize=12)
plt.xlabel(r"temperature init, $\tau_0$",fontsize=12)
plt.grid()
plt.xscale('log')
plt.title(r"n=1000, d=50, $\beta$=1.01, $\eta$=350")
plt.savefig('temperature_init_ablation.pdf', dpi=300, bbox_inches = 'tight')
plt.show()
|
codes/temperature_init_sensitivity_5b.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Day 4: The Ideal Stocking Stuffer
# Santa needs help mining some AdventCoins (very similar to bitcoins) to use as gifts for all the economically forward-thinking little girls and boys.
#
# To do this, he needs to find MD5 hashes which, in hexadecimal, start with at least five zeroes. The input to the MD5 hash is some secret key (your puzzle input, given below) followed by a number in decimal. To mine AdventCoins, you must find Santa the lowest positive number (no leading zeroes: 1, 2, 3, ...) that produces such a hash.
#
# For example:
#
# * If your secret key is `abcdef`, the answer is `609043`, because the MD5 hash of `abcdef609043` starts with five zeroes (`000001dbbfa...`), and it is the lowest such number to do so.
# * If your secret key is `pqrstuv`, the lowest number it combines with to make an MD5 hash starting with five zeroes is `1048970`; that is, the MD5 hash of `pqrstuv1048970` looks like `000006136ef....`
input_data = "ckczppom"
import hashlib
def min_int(input_data, match_str="00000"):
    """Return the lowest positive integer n such that the MD5 hex digest
    of input_data + str(n) starts with match_str."""
    result = 1
    while True:
        key = input_data + str(result)
        hashed = hashlib.md5(key.encode()).hexdigest()
        if hashed.startswith(match_str):
            return result
        result += 1
min_int("ckczppom", "00000")
# # Part Two
# Now find one that starts with six zeroes.
min_int("ckczppom", "000000")
|
2015/Day04/Day4.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## M.A.R.C.H. incident analyses by location
#
# In this analysis, we isolate MARCH raids based on Census tract. Once we join the latitude and longitude points to their respective Census tracts, we'll be able to determine how many other businesses were targeted in the immediate neighborhood surrounding "Friends and Lovers" — its Census tract and all adjacent Census tracts.
#
# The data was downloaded here: https://github.com/gltd/march/tree/master/data
import pandas as pd
import matplotlib.pyplot as plt
import geopandas as gp
from shapely.geometry import Point
march_data_lat_long = pd.read_csv(
'../data/march_raids_with_lat.csv',
parse_dates=["inspection_date"]
)
len(march_data_lat_long)
march_data_lat_long["inspection_date"].describe(datetime_is_numeric = True)
march_data_lat_long.head()
# Chart what MARCH raids looked like over time.
plt.plot(
march_data_lat_long.set_index(
"inspection_date"
)["address"].resample(
"MS"
).count(
)
)
# ## Merge MARCH data with Census tracts
# This part of the analysis:
# * turns data into geodataframe
# * merges MARCH data with Census tract data
# * produces a count per Census tract
march_data_lat_long["geometry"]=(
march_data_lat_long.apply(lambda z: Point(z["longitude"], z["latitude"]), axis = 1)
)
march_data_lat_long_geodf = gp.GeoDataFrame(march_data_lat_long, crs = "EPSG:4269")
march_data_lat_long_geodf.crs
march_data_lat_long_geodf.head()
# ### Read in the polygon data of New York City Census tracts
# - read in the NY Census tracts
# - filter it down to NYC Census tracts
# +
ny_state_tracts = (
gp.read_file(
"../data/censusTracts/states/tl_2019_36_NY_tract/tl_2019_36_tract.shp"
)
)
ny_state_tracts.head()
# -
NYC_COUNTIES = [
"005", # Bronx
"047", # Kings (Brooklyn)
"061", # New York County (Manhattan)
"081", # Queens
"085", # Richmond (Staten Island)
]
nyc_census_tracts = ny_state_tracts[
ny_state_tracts['COUNTYFP'].isin(NYC_COUNTIES)
]
len(nyc_census_tracts)
# ### Find number of raids per tract and other Census data
# * Merge geodataframes and a Census tract to each MARCH incident
# * import data from a previous analysis of NYC Census tracts and merge with MARCH data (more is [here](https://github.com/BuzzFeedNews/2020-02-gentrification)). Data includes historical demographic information, median home value, median income and calculations on whether a tract had gentrified between 2000 and 2017.
merged_gdf = gp.sjoin(
nyc_census_tracts,
march_data_lat_long_geodf,
how = "inner"
)
merged_gdf.head()
# +
march_raids_per_tract = (
merged_gdf
.groupby([
"GEOID",
])
.size()
.to_frame("raids")
.reset_index()
)
print(len(march_raids_per_tract))
march_raids_per_tract.head(20)
# -
march_raids_per_tract["GEOID"].value_counts().sort_values(ascending=False)
# ### Merge demographic data with MARCH raid counts
# +
gentrification = pd.read_csv(
"../data/gentrification.csv",
dtype={"GEOID":str}
)
gentrification.head()
# -
gentrification.columns
# +
gentrification_targets = pd.merge(
gentrification,
march_raids_per_tract,
on="GEOID"
)
print(len(gentrification), len(gentrification_targets))
# -
gentrification_targets.head()
gentrification_targets.columns
merged_gdf.head()
# ### Find raids near Friends and Lovers
#
# In this part of the analysis, we identify raids near Friends and Lovers. The geographical scope of this analysis includes raids in the Census tract where the bar Friends and Lovers is located as well as every adjacent Census tract. This list was then used to do on-the-ground reporting.
#
# Census tract IDs were found here: https://popfactfinder.planning.nyc.gov/#14/40.67871/-73.95851
# GEOIDs were then constructed based on Census Bureau conventions: https://www.census.gov/programs-surveys/geography/guidance/geo-identifiers.html
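# An 11-digit tract GEOID is the concatenation of three zero-padded FIPS codes: 2-digit state + 3-digit county + 6-digit tract. A small sketch (the `tract_geoid` helper is illustrative and not used below):

```python
def tract_geoid(state_fips, county_fips, tract_code):
    """Build an 11-digit Census tract GEOID: SSCCCTTTTTT (state, county, tract)."""
    return f"{int(state_fips):02d}{int(county_fips):03d}{int(tract_code):06d}"

# The Friends and Lovers tract: New York state (36), Kings County (047), tract 030500:
print(tract_geoid(36, 47, 30500))  # 36047030500
```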
# +
census_tracts = [
"36047020100",
"36047022700",
"36047020300", # ode to babel
"36047020500",
"36047020700",
"36047021500",
"36047021700",
"36047021900",
"36047022100",
"36047024700",
"36047030500" # friends and lovers
]
pared_down_raids = merged_gdf[merged_gdf["GEOID"].isin(census_tracts)]
print(len(pared_down_raids))
pared_down_raids.head()
# -
pared_down_raids["GEOID"].value_counts()
# +
neighborhoodraids=(
pd.merge(
pared_down_raids,
gentrification_targets,
on="GEOID"
)
[[
'GEOID',
'NAME',
'address',
'inspection_date',
'access_1',
'ecb_violation_number',
'dob_violation_number',
'longitude',
'latitude',
'total_population_17',
'median_income_17',
'median_home_value_17',
'educational_attainment_17',
'white_alone_17',
'black_alone_17',
'native_alone_17',
'asian_alone_17',
'native_hawaiian_pacific_islander_17',
'some_other_race_alone',
'two_or_more',
'hispanic_or_latino_17',
'educational_attainment_pct_17',
'educational_attainment_change',
'home_pct_change',
'pct_white_alone_17', 'pct_white_alone_change',
'pct_black_alone_17', 'pct_black_alone_change',
'pct_asian_alone_17', 'pct_asian_alone_change',
'pct_native_alone_17', 'pct_native_alone_change',
'pct_native_hawaiian_pacific_islander_17',
'pct_native_hawaiian_pacific_islander_change',
'pct_hispanic_or_latino_17',
'pct_hispanic_or_latino_change', 'raids'
]]
)
len(neighborhoodraids)
# -
neighborhoodraids.to_csv("../output/neighborhood_raids.csv", index=False)
neighborhoodraids.columns
neighborhoodraids.head(5)
# ### Find a list of the most raided bars in the Census tracts surrounding "Friends and Lovers"
neighborhoodraids["address"].value_counts()
neighborhoodraids["address"].value_counts().to_csv("../output/neighborhood_target_list.csv")
# ### Some statistics for the neighborhood
# - demographic changes for Census tract in which Ode to Babel and Friends and Lovers are
# - numbers are adjusted for inflation using a rate of 1.44, taken from the [Bureau of Labor Statistics Inflation Calculator](https://www.bls.gov/data/inflation_calculator.htm)
#
# +
pd.set_option('display.max_rows', None)
gentrification_targets[gentrification_targets["GEOID"] == "36047020300"].T
# -
# * demographic changes for Census tract where Friends and Lovers is
gentrification_targets[gentrification_targets["GEOID"] == "36047030500"].T
# +
friends_lovers_tract = gentrification_targets[gentrification_targets["GEOID"] == "36047030500"]
median_home_value_increase = (
(friends_lovers_tract["median_home_value_17"] - (friends_lovers_tract["median_home_value_00"]* 1.44))/
(friends_lovers_tract["median_home_value_00"]* 1.44)
)
print(median_home_value_increase)
# -
print(
"median_home_value_00: ", friends_lovers_tract["median_home_value_00"]* 1.44,
"median_home_value_17: ", friends_lovers_tract["median_home_value_17"],
"median_income_00: ", friends_lovers_tract["median_income_00"]* 1.44,
"median_income_17: ", friends_lovers_tract["median_income_17"],
)
gentrification_targets.columns
# ### Noise complaints analysis
#
# This part of the analysis looks at noise complaints affiliated with Ode To Babel's address.
#
# In the cells we will:
# - identify types of complaints that are relevant (noise)
#
# The data was downloaded from this portal with `772 Dean Street`, the bar's address, being a filter:
# https://nycopendata.socrata.com/Social-Services/311-Service-Requests-from-2010-to-Present/erm2-nwe9/data
# +
complaints = pd.read_csv(
"../data/311-complaints_772_deanstreet.csv",
parse_dates = ["Created Date"]
)
len(complaints)
# -
complaints.dtypes
complaints.head()
complaints["Complaint Type"].value_counts()
noise_complaints = complaints[ complaints["Complaint Type"].str.contains("Noise")]
len(noise_complaints)
noise_complaints["Created Date"].describe(datetime_is_numeric = True)
(
noise_complaints
.groupby(pd.Grouper(
key = "Created Date",
freq = "MS"
))
.size()
.plot.bar()
)
noise_complaints.set_index(
"Created Date"
)["Incident Address"].resample(
"MS"
).count(
).to_csv(
"../output/noise_complaints_ode_to_babel.csv"
)
|
notebooks/01-march-and-311-analysis.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Remote Data Science
#
# The purpose of this notebook is to describe the end-goal for how Syft and Grid will facilitate privacy preserving data science. The hope is that this notebook will drive interesting conversation around features and use cases of the tools we build. However, at the time of writing none of this functionality exists yet.
#
# Scenario: you are seeking to perform research to predict what it takes to get a good night's sleep. Specifically, the project is split into three pieces:
#
# - Federated Learning: train a classifier to predict whether someone will get a good night sleep based on various input factors
#
# - Federated Analytics: use the classifier to estimate the amount of sleep that the population of the USA is getting (importantly including folks who do not record their own sleep data).
#
# - Self Classification: leveraging the model from FL, predict what decisions I should make to get a better night sleep.
# # Step 1: Imports
#
# To begin, we must import syft.
#
# <!-- #### Development Notes:
# _To start, from a client perspective, we want to maximize for convenience and minimize the number of dependencies one needs to install to work with PyGrid. Thus, in an ideal world, users only have to install one python package in order to work with all of pygrid. I like the current design in syft 0.2.x where we have grid clients in a grid package inside of Syft. The thing we definitely want to avoid here is the need for users of PyGrid to have to install all of the dependencies needed to run grid nodes (flask, databases, etc.) just to be able to interact with the grid. Putting grid inside of syft solves this as well._ -->
import syft as sy
from syft import grid as gr
# # Step 2: View our Available Networks
#
# In this step, we need to see if we are connected to some number of data networks which we can use to search for data relating to sleep. Conveniently, the PySyft library remembers the networks we've previously used in other experiments. The list of "known networks" can be displayed by simply executing `gr.networks` as below.
#
# <!-- ### Development Notes
#
# _By default, it would be really great if we could support a combination of two lists of networks:_
#
# - networks which all users of PySyft have by default (OpenGrid)
# - a history of all networks previously accessed (stored in some local config file)
#
# _We should be able to view these available networks by just calling `gr.networks` which should pretty-print information about them. Below we show one way to do pretty-print using just a Pandas table as shown below._ -->
# +
# possible food for thought for how to implement this feature: https://hyperledger-indy.readthedocs.io/projects/hipe/en/latest/text/0036-mediators-and-relays/README.html
gr.networks
# -
# This table displays several useful pieces of information:
#
# - Id: the unique id of the network
# - Name: the name of the network
# - Models: the number of models currently hosted on the network (public and private)
# - Domains: the number of domains registered to this network (i.e., the number of individual hospitals)
# - Online: the number of domains which are currently online
# - Registered: the number of domains for which this user already has an account
# - Server-domains: the percentage of domains which are server based (cloud/on-prem compute cluster based grid nodes)
# - Mobile-domains: the percentage of domains which are mobile based (smartphone based grid nodes)
#
# Already from this list we can get some idea as to what kinds of datasets which may be available to us.
# - FitByte seems like a good source of gold-standard "sleep data"
# - Netflax seems like it might have some information on what people are doing before bed (watching tv)
# - TrackMiRun seems like it might have good information on exercise, which could affect sleep.
# - MiFitnessPal is a place where people journal what they eat - this should be very informative!
# # Step 3: Local Wallet
#
# On the right side of the table, we see the column "Registered". This indicates the number of domains within the network with which we already have user accounts which let us leverage the data. The fact that many of these values are >0 means that we've already been doing some data science on sources within these networks. To confirm, we can see that we have public and private keys corresponding to each account.
#
# <!-- ### Development Notes:
#
# _Somewhere in the local filesystem, we need to save the set of all keys/logins which this user has for various domains around the world. We should be able to see them here. Note this list is what creates the "Registered" number for each network._ -->
gr.wallet.domain_keys
# # Step 4: Adding Another Network
#
# Before we continue, we're also aware of an interesting sleep study done within the NHS. We'd like to see if we can access data relating to that sleep study which could be useful to our project. To add the new network to our list of "known networks", we just call the `gr.save_network()` method.
gr.save_network('http://localhost:8015/') # it's a network
gr.networks
# Before we continue, let's talk about what these columns represent. A "network", as you may have guessed, is a hosted service which exists to help you find data. Actually, it exists to help you find just about any kind of object within data science you might be looking for (data, compute, models, etc.), as long as that object exists within the network.
#
# The various members of the network are called "domains". The difference between "network" and "domain" is that a "domain" represents someone who actually *owns* a dataset whereas a "network" exists to help you *find* a dataset. So, for example, even though the "FitByte" network might help you *find* data relating to people's fitness, each individual person running their own *domain* (on their phone) is considered the owner. Importantly, the domain owner is the one which accepts/denies your request to perform data science (not the network owner).
#
# So, the next logical step is to start training on the data, right? Let's ask some domain owners for permission!
#
# Not so fast!
#
# First, we need to find which datasets we want to leverage - asking domain owners to approve us for data science can take some time - and we can do a bit of the work ahead of time by performing a search over the datasets they are hosting first.
# # Step 5: Search
#
# So, we're notionally connected to some networks which have domains with data. This allows us to do searches like so. We'll try the search term "diabetes".
# string search anywhere in an object's public metadata
result = gr.search(anywhere="diabetes")
result() #TODO: add distributed_models
# Looks like a few of these datasets relate to diabetes. We can view a few of them here!
pd_table = result.datasets.pandas()
pd_table
# ### Understanding the Results
#
# Before we go and look into more relevant (sleep related) data, let's take a moment to consider what this table is all about. The far left "name" column is the name of the dataset. This name isn't designed to be a unique name, it's just whatever the Domain owner chose to call their dataset. The network and domain are what you'd expect, domain is the owner of the data and the network is the network the domain is hosted in (note a domain can connect to multiple networks).
#
# #### Pricing
#
# The next two columns are perhaps a bit perplexing. The goal here is to get a general sense for how expensive the dataset is as well as how expensive the co-located compute is.
#
# ##### Compute Prices
#
# "USD/gpu_flop_hour" is perhaps the most intuitive. It's the average price (over the GPUs available) of an hour of GPU time normalized by flop (since some GPUs are more powerful than others). This is not the metric you would actually pay for (you'd pick your GPU and rent by the hour like most providers). However, this is to give a sense for how expensive compute is on each domain provider. (Maybe someday we'll "pay per flop" but not yet.)
#
# ##### Epsilon Prices
#
# USD/eps is perhaps a bit less intuitive. Epsilon refers to the notion of epsilon from the Differential Privacy literature. It is a measure of the "statistical uniqueness" of results, and it's the most important pricing metric for a variety of reasons:
#
# - If epsilon is sufficiently high, the buyer could reverse engineer the dataset and then (in theory) sell it (or derivatives) as a competitor to you. Economically, this is the "hard limit" creating resource scarcity for how much a dataset can be used. While a dataset can always be worked with (in theory), giving too much epsilon to one player (or to players who will collude) can accidentally create a competitor.
# - Given that epsilon is a measure of resource scarcity, it's a great way to price the difference between two queries. Intuitively, if a seller of data has 10 queries against their data to consider, and one of those queries could be used to restore the entire dataset, then the price of that query must exceed the price of the other 9 queries combined in order for that query to be answered first. Of course, if the seller of data has reason to believe that no further queries will come, then selling to the 10th buyer becomes more reasonable.
# - The epsilon also can be used to measure the risk that an individual's identity will be revealed as being in the data. Risk should be priced, which is why this measure matters.
#
# This is the metric which creates a liquid market for insights. If you can answer the question using less sensitive/scarce/valuable data, you'll be financially incentivised to do so because of the scarcity constraints mentioned above.
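# To make the pricing intuition concrete, here is a toy calculation with made-up numbers (pure arithmetic, not any actual Syft API): a dataset-revealing query must be priced above the combined price of the cheaper queries it would obsolete.

```python
price_per_eps = 10.0      # assumed USD per unit of epsilon
ordinary_eps = [0.5] * 9  # nine low-sensitivity queries
reconstructing_eps = 10.0 # one query that could restore the dataset

ordinary_total = sum(e * price_per_eps for e in ordinary_eps)  # 45.0 USD
reconstructing_price = reconstructing_eps * price_per_eps      # 100.0 USD
assert reconstructing_price > ordinary_total
```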
#
# #### Certifications (certs)
#
# A perennial problem with working with data you cannot see - how do you know it is real? The next set of columns seeks to address this.
#
#
# ##### Validity Certifications
#
# These certifications are certifications of entities who claim to have "seen the data". Registered users of the domain are able to receive signed hashes of the dataset, signed using the private keys of well-known brands (such as major hospitals, etc.). The primary utility here is that data can be hosted by domains/networks on behalf of others, but still retain the "stamp" from the original creators of the data that "this is something I endorse".
#
# Note that the dataset from "Doctor's Direct" doesn't have any good certifications - this dataset should be considered suspect.
#
# ##### User Certifications
#
# This type of certification is basically a form of dataset reviews. Individuals have the ability to report the ranking of datasets within their experiment, meaning a ranking of "which dataset was most predictive for my task, second most predictive, etc.". The metric "avg_user_cert_ranking" is a measure of the average "top percentage" that a dataset took across all tasks it's been used for. So a score of "5%" would be quite good, it would mean that a dataset ranked in the top 5% of all datasets for that task. (This can be measured using cross validation.) Note that each ranking is signed by the user's private key along with some cryptographic material which only users of the dataset receive (issued by the domain owner before experimentation begins).
#
# Note that the dataset from "Doctor's Direct" has a pretty bad user score. I'll bet this dataset is (probably) fake.
#
# #### Schema
#
# Farther to the right in the table, you'll see the "schema" column. This is an extremely useful column wherein datasets can put forward the name of the schema the dataset is formatted in. This allows multiple datasets to subscribe to the same schema which makes federated learning easier. Note further that if you go through the trouble to format N datasets into a single schema for training, you can sell this transformation back to the data owners as a new dataset. It makes their data more valuable to be transformed into uniform/popular schemas.
#
# #### Other Fields
#
# Take a second to peruse the other fields in the table as well. Most of the names are quite intuitive (description, date uploaded, etc.). You'll notice that every dataset has a train, test, and validation piece. Each train/test/validation section is a single tensor. (People can choose for themselves which columns are input data and which ones should be the target.)
#
# #### Comprehensive Field Descriptions
#
# Note that not all columns are printed by default (because there are so many). Default columns are just an intuitive perspective on the table. If you'd like the full description of all the columns (you don't need to know this now), it's commented out below. Just double click this cell and you can read it:
# <!--
# - Dataset: This is a dataset object existing within a single Domain. It consists of
#
# - name (public - value - required): the name of the dataset
# - network (public - value - required): the network in which this dataset is hosted. Note that you might find the same dataset in different networks. They will be different rows in this table for now.
# - domain (public - value - required): the owner of the data who is running the compute within which the data is located.
#
# - data_hash: a hash of the .data attribute
# - metadata_hash: a hash of these attributes on the dataset not including .data
# - hash: a hash of the dataset including all attributes and the .data attribute
#
# - per-user-dataset USD/eps stages (public - value - required) - the list of price stages for varying amounts of eps. This is concerned with information we expect the user to not share with others.
# - per-lifetime-dataset USD/eps stages (public - value - required) - the list of price stages for varying amounts of eps. This is concerned with information we expect the user to make public (such as via a model or finding)
# - per-user-entity USD/eps stages (public - value - required) - the list of price stages for varying amounts of eps. This is concerned with information we expect the user to not share with others.
# - per-lifetime-entity USD/eps stages (public - value - required) - the list of price stages for varying amounts of eps. This is concerned with information we expect the user to make public (such as via a model or finding)
#
# - USD/eps (public - func - required) - a measure of the current price of purchasing a query from a dataset - measured by the epsilon privacy leakage of that query
# - USD/.1 eps (public - func - required) - the cost of 1 epsilon
# - USD/1 eps (public - func - required) - the cost of 1 epsilon
# - USD/2 eps (public - func - required) - the cost of 2 epsilon
# - USD/5 eps (public - func - required) - the cost of 5 epsilon
# - USD/10 eps (public - func - required) - the cost of 10 epsilon
#
# - available_compute: a list of "Device" objects (see below) which are available within this domain. See device spec for schema of objects in this list. (https://github.com/OpenMined/PySyft/issues/3867)
#
# - Fixed Device Prices:
# - USD/gpu mean flop hour - (public - func - required) a measure of the average price of flops across the GPU compute available within this domain
# - USD/gpu min flop hour - (public - func - required) a measure of the minimum price of flops across the GPU compute available within this domain
# - USD/gpu max flop hour - (public - func - required) a measure of the maximum price of flops across the GPU compute available within this domain
# - USD/gpu mean RAM MB hour - (public - func - required) a measure of the average price of a MB of RAM across the GPU compute available within this domain
# - USD/gpu min RAM MB hour - (public - func - required) a measure of the minimum price of a MB of RAM across the GPU compute available within this domain
# - USD/gpu max RAM MB hour - (public - func - required) a measure of the maximum price of a MB of RAM across the GPU compute available within this domain
# - USD/cpu mean flop hour - (public - func - required) a measure of the average price of flops across the CPU compute available within this domain
# - USD/cpu min flop hour - (public - func - required) a measure of the minimum price of flops across the CPU compute available within this domain
# - USD/cpu max flop hour - (public - func - required) a measure of the maximum price of flops across the CPU compute available within this domain
# - USD/cpu mean RAM MB hour - (public - func - required) a measure of the average price of a MB of RAM across the CPU compute available within this domain
# - USD/cpu min RAM MB hour - (public - func - required) a measure of the minimum price of a MB of RAM across the CPU compute available within this domain
# - USD/cpu max RAM MB hour - (public - func - required) a measure of the maximum price of a MB of RAM across the CPU compute available within this domain
#
# - Spot Device Prices:
# - USD/spot gpu mean flop hour - (public - func - required) a measure of the average price of flops across the spot GPU compute available within this domain
# - USD/spot gpu min flop hour - (public - func - required) a measure of the minimum price of flops across the spot GPU compute available within this domain
# - USD/spot gpu max flop hour - (public - func - required) a measure of the maximum price of flops across the spot GPU compute available within this domain
# - USD/spot gpu mean RAM MB hour - (public - func - required) a measure of the average price of a MB of RAM across the spot GPU compute available within this domain
# - USD/spot gpu min RAM MB hour - (public - func - required) a measure of the minimum price of a MB of RAM across the spot GPU compute available within this domain
# - USD/spot gpu max RAM MB hour - (public - func - required) a measure of the maximum price of a MB of RAM across the spot GPU compute available within this domain
# - USD/spot cpu mean flop hour - (public - func - required) a measure of the average price of flops across the spot CPU compute available within this domain
# - USD/spot cpu min flop hour - (public - func - required) a measure of the minimum price of flops across the spot CPU compute available within this domain
# - USD/spot cpu max flop hour - (public - func - required) a measure of the maximum price of flops across the spot CPU compute available within this domain
# - USD/spot cpu mean RAM MB hour - (public - func - required) a measure of the average price of a MB of RAM across the spot CPU compute available within this domain
# - USD/spot cpu min RAM MB hour - (public - func - required) a measure of the minimum price of a MB of RAM across the spot CPU compute available within this domain
# - USD/spot cpu max RAM MB hour - (public - func - required) a measure of the maximum price of a MB of RAM across the spot CPU compute available within this domain
#
# - validity certs - (public - required) the number of entities which have "seen the data" and verify that it is genuine (can include an actual statement from each certifier available elsewhere)
# - top_validity_certs - (public - required) a short list of the primary entities (brands) who signed this data
# - user_certs - (public - required) the number of users who have certified as using this data for an experiment
# - avg_user_cert_rank - (public - required) the average ranking this dataset had relative to other datasets in a federated learning pool
# - arbitrary_certs - (public - required) anyone can sign this dataset and say something about it. This is the number of people who have done so. These are available through the API.
# - provenance_claim_certs - (public - optional) if this dataset was created using other hosted datasets/models/etc., a certificate can be issued which claims as such
# - provenance_verified_computation_certs - (public - optional) if this dataset was created using other hosted datasets/models/etc. by performing verified computation, then these certificates can verify that the computation was indeed genuine.
#
# - id (public - required): the uid of the dataset
# - upload-date (public - required): the date the dataset was uploaded
# - version (public - required): is this dataset a new version of previous datasets? If so, what version is it?
# - previous_version_id (public - required): if the version of this dataset >0, what is the id of the previous version?
# - frameworks (public - required): the available frameworks for this dataset (derived from supported frameworks for the worker). Grouped into train, dev, and test.
#
# - data (public - required): the tensor object containing all data (including train, dev, and test)
# - train_n_rows (public - required): the number of rows in the training dataset
# - train_indices (public - required): a list of the indices in the "data" object which correspond to the training data
# - dev_n_rows (public - required): the number of rows in the dev dataset
# - dev_indices (public - required): a list of the indices in the "data" object which correspond to the dev data
# - test_n_rows (public - required): the number of rows in the test dataset
# - test_indices (public - required): a list of the indices in the "data" object which correspond to the test data
#
# - schema (public - required): the DatasetSchema of the dataset - which is the name->schema mapping for each TensorSchema. Identical across train, dev, and test
#
# - tags (public - required): a list of tags affiliated with this dataset
# - description (public - required): a free text description of the dataset
# - metadata (public - optional): additional metadata someone wants to use for this dataset. We assume all of this data is public.
#
# - raw (private - optional): the raw version of the dataset (such as a CSV file, free text file, etc.)
#
# - private: is the dataset's "data" tensor private or can it be downloaded?
# - worst_case_user_budget: inferred values based on the worst case user-budget parameter within the dataset's tensors (see tensor user-budget)
# - dataset_budget_params (public - required): the per-user epsilon privacy budget parameters for this dataset:
#
# - entity_centric_lifetime_train
# - entity_centric_lifetime_dev
# - entity_centric_lifetime_test
#
# - lifetime_train: the total epsilon which can be published to the greater public (i.e., when a data scientist intends to release a number openly)
# - lifetime_dev: the total epsilon which can be published to the greater public (i.e., when a data scientist intends to release a number openly)
# - lifetime_test: the total epsilon which can be published to the greater public (i.e., when a data scientist intends to release a number openly)
#
# - user_lifetime_train: the total epsilon each data scientist gets when interacting with the training dataset
# - user_lifetime_dev: the total epsilon each data scientist gets when interacting with the dev dataset
# - user_lifetime_test: the total epsilon each data scientist gets when interacting with the test dataset
#
# - daily_auto_train: the amount of epsilon each data scientist gets per day for sample statistics which doesn't require compliance officer review
# - daily_auto_dev: the amount of epsilon each data scientist gets per day for sample statistics which doesn't require compliance officer review
# - daily_auto_test: the amount of epsilon each data scientist gets per day for sample statistics which doesn't require compliance officer review
#
# - query_auto_train: the maximum amount of epsilon one query can return which doesn't require officer review when interacting with the training dataset
# - query_auto_dev: the maximum amount of epsilon one query can return which doesn't require officer review when interacting with the dev dataset
# - query_auto_test: the maximum amount of epsilon one query can return which doesn't require officer review when interacting with the test dataset
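# As a mental model, these three tiers act as nested gates: a query runs automatically only if it fits under the per-query cap, the day's remaining budget, and the lifetime budget. Below is a minimal sketch of such a gate. Names like `BudgetGate` and `request` are hypothetical, not the real accountant API:

```python
class BudgetGate:
    """Toy three-tier epsilon gate (per-query, daily, lifetime).

    Illustrative only -- the real privacy accountant is more involved."""

    def __init__(self, query_auto, daily_auto, user_lifetime):
        self.query_auto = query_auto        # max epsilon per query without review
        self.daily_auto = daily_auto        # max epsilon per day without review
        self.user_lifetime = user_lifetime  # max epsilon this user may ever spend
        self.spent_today = 0.0
        self.spent_total = 0.0

    def request(self, epsilon):
        """Return 'auto', 'review', or 'denied' for a query costing `epsilon`."""
        if self.spent_total + epsilon > self.user_lifetime:
            return "denied"                 # lifetime budget would be exhausted
        if epsilon > self.query_auto or self.spent_today + epsilon > self.daily_auto:
            return "review"                 # needs a compliance officer
        self.spent_today += epsilon         # small enough: spend automatically
        self.spent_total += epsilon
        return "auto"
```

# In this sketch a tiny query passes unreviewed, a query above the per-query cap is routed to a compliance officer, and anything that would blow the lifetime budget is denied outright (nothing is spent on 'review' or 'denied' outcomes).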
#
#
# ### Types of Dataset Search
#
# Given all these fields, you can also perform a wide variety of queries. However, because you're searching over a distributed/decentralized database (which could even include devices like mobile phones!) we split the ability to search into two types:
#
# - Distant Search: the first step in searching is to request a bulk of information from the network to be copied to your local device. Searching can take a really long time, especially if devices are offline a lot (such as mobile devices) and the information hasn't been properly cached on the network node. Additionally, this is to reduce the burden on the network nodes themselves so that they're cheaper to run (and thus more likely to be run as a gift to the community).
#
# - Local Search: searching, sorting, and filtering the metadata returned to you from the Distant Search.
#
#
#
# #### Distant Search - Query 1: Basic Keyword Search
#
# This is the type of search you've already seen. It searches for your term over all attributes and returns every dataset on the network which has the term mentioned somewhere.
# string search anywhere in an object's public metadata
result = gr.search(anywhere="diabetes")
result()
# However, what you can't see under the hood is that this "result" object is still populating. It will continue attempting to populate for the next 24 hours if you let it. You can also set a timeout if you like.
# string search anywhere in an object's public metadata
result = gr.search(anywhere="diabetes", timeout_hours=48, percent_response_threshold=0.8)
# Now it will keep listening for the answer to the query for the next 48 hours.
#
# You can view your list of ongoing searches by checking `gr.searches`
gr.searches
# You can also pass in a list of strings, which will be processed separately
result = gr.search(anywhere=["diabetes", "california"], timeout_hours=48)
# This will return the union of a search for "diabetes" and a search for "california". However, if you only want queries which have BOTH terms as a requirement, you can add the require_all flag.
result = gr.search(anywhere=["diabetes", "california"], timeout_hours=48, require_all=True)
# #### Distant Search - Query 2: Specific Text Attribute Search:
#
# In this kind of search, you can query any attribute by name using "kwargs". timeout_hours, multiple strings, and require_all flags all still apply. For example, all of these are valid queries:
# +
# can also search for tags
result = gr.search(tags="diabetes")
result = gr.search(tags=["diabetes"], require_all=True)
# can also search on the names of objects
result = gr.search(name_includes="MNIST")
result = gr.search(name_exact="MNIST")
# can also search on the description of objects
result = gr.search(description="diabetes COVID mortality")
result = gr.search(description=["diabetes", "COVID", "mortality"])
# -
# Additionally, if custom metadata was added to the metadata file, you can also simply search for it using kwargs. For example, if "rocket_booster='NASA'" was a piece of metadata in a file, you could search on it.
# +
# can also search for arbitrary metadata
result = gr.search(rocket_booster='NASA')
# Note: if you wanted to just search for anything that mentioned "rocket_booster", use the "anywhere"
# -
# If no such metadata including "rocket_booster" exists, the query will simply return empty.
# #### Distant Search - Query 3: Numerical Search
#
# For all of the numerical parameters, you can limit your search to specific ranges if you like. However, remember that for this first search bandwidth is really high but latency is really long. So it's probably better for you to not execute these kinds of searches until you get to "Local Search". However, if you want to use them, they're available.
result = gr.search(usd_spot_gpu_min_ram_mb_hour="<0.001")
# The trick is to pass these in as strings and use "<" or ">" in the beginning of the string. You can even do multiple by using a "-".
result = gr.search(usd_spot_gpu_min_ram_mb_hour="0.0001-0.001")
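# Under some assumptions about the grammar (a leading `<` or `>`, or a `low-high` pair), such filter strings could be parsed into numeric bounds like this. `parse_range` is a hypothetical helper, not part of the API:

```python
def parse_range(spec):
    """Parse a numeric-filter string like '<0.001', '>5', or '0.0001-0.001'
    into an inclusive (low, high) pair. Hypothetical helper, not the real API."""
    spec = spec.strip()
    if spec.startswith("<"):
        return (float("-inf"), float(spec[1:]))
    if spec.startswith(">"):
        return (float(spec[1:]), float("inf"))
    # 'low-high' range; rsplit keeps a possible leading minus sign on 'low'
    low, high = spec.rsplit("-", 1)
    return (float(low), float(high))
```

# A search node could then keep only datasets whose attribute falls inside the returned bounds.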
# #### Distant Search - Query 4: Credential Search (TODO)
#
# Perhaps the most elusive attributes on the dataset object refer to credentials.
#
# - validity certs - (public - required) the number of entities which have "seen the data" and verify that it is genuine (can include an actual statement from each certifier available elsewhere)
# - top_validity_certs - (public - required) a short list of the primary entities (brands) who signed this data
# - user_certs - (public - required) the number of users who have certified as using this data for an experiment
# - avg_user_cert_rank - (public - required) the average ranking this dataset had relative to other datasets in a federated learning pool
# - arbitrary_certs - (public - required) anyone can sign this dataset and say something about it. This is the number of people who have done so. These are available through the API.
# - provenance_claim_certs - (public - optional) if this dataset was created using other hosted datasets/models/etc., a certificate can be issued which claims as such
# - provenance_verified_computation_certs - (public - optional) if this dataset was created using other hosted datasets/models/etc. by performing verified computation, then these certificates can verify that the computation was indeed genuine.
#
# Consider "validity certs". "Certs" is short for "certificates". What a certificate is is a statement by a specific person. So, unlike all of the other attributes on the dataset object, which have no explicit author (and thus are implied to be authored by the domain owner), these certificates were authored by people other than the domain owner. They exist explicitly because the domain owner might be dishonest. In particular, it is possible that the domain owner might lie about the kinds of data he/she has. Thus, each certificate is a "signed statement" by a third party claiming something about the dataset. What kinds of claims are made?
#
# - Validity Certs: these certificates exist to all say the same thing, namely that, "I, (certificate author) have seen the dataset and testify that it is genuine, namely that the rest of the metadata on the dataset object is true.". Validity certificates may also be accompanied by contracts which are binding in one or more legal jurisdictions, further adding weight to the claims. Note that the domain owner may host these certificates themselves because they are exclusively endorsements of the dataset, thus the domain owner has incentive to broadcast them. An important question to ask at this point is, "can the domain owner forge certificates?". The answer is yes and no. Yes, they can add as many certificates as they want to the list. However, each certificate is only as good as its "signature". Much like in the real world, a signature is a symbol which only a specific individual could make. On paper, we have people use a pen and ink. In a data structure, we have people use a "cryptographic signature". In this way, a certificate is only useful if it is signed by someone who is known to be credible. "Credibility" in this case is usually associated with whether or not the individual has something to lose if they lose their reputation - aka, they're probably in the business of disseminating data and if word got out that they were disseminating fake data they would lose their shirt. The final important question you should ask is, "how do I know if a signature came from a specific person?". We will cover this later.
#
# - User Certs: these certificates are like Validity certificates except that:
# - They aren't just endorsements of the data. They can also be negative (claims that the data is not useful).
# - Because they can be negative, they must have alternative sources to "host" them, typically the network (by default), but they can also be other domains (who have a competitive reason to hold their competitors accountable) or other network nodes. The preferred storage is a mix between the dataset's domain holder and other domain holders because they are maximally incentivized to hold positive/negative user certs respectively.
# - These certificates come in several types:
# - Cross val rank certificates: these certificates basically say, "I trained on this dataset using federated learning, but before averaging all the models together, I created a set of cross validation scores, one score for each dataset which was withheld from the model average. This allowed me to rank the datasets by their relative contribution to the overall accuracy. On average, this dataset was in the top X% of datasets for my task on dataset schema Y".
# - Holdout accuracy certificates: I trained on this dataset exclusively and got a holdout score using X test dataset of Y.
#
# - Provenance Certificates: these are claims or cryptographic proofs about where a particular dataset comes from.
# - Provenance Claim Certs: structured similarly to the user and validity certificates, this is merely a claim that a dataset originates from a particular transformation (such as a model) on another dataset.
# - Provenance Verified Computation Certs: a different kind of certificate which provides cryptographic proof that a dataset was created using one or more others passed through a specific function. This is computationally more intensive than Provenance Claim Certs, but in the right context they can lend very strong credibility to the claim.
#
# - Arbitrary Certs: these are simply signed strings attached to the dataset. They can say anything, although they do have a "type" field which can allow for standards to emerge if a specific kind of certificate is deemed useful.
#
# So, now that we have an intuition for what these certificates exist to do, we can consider how we might use them in a search query to find high-quality datasets.
result = gr.search(anywhere="diabetes", latest_version_only = True)
result.datasets()
# result.validity_identities()
validity_cert_ids=result.validity_identities(only_entities_with...) #TODO: change to "validity_identities" - FUTURE TODO: when Opus schema matures, be able to search on the schema.
validity_cert_ids
# Above we have printed all of the valid certificates matching this query. Namely, these are all the certificates attached to objects which were returned from the query. We can use them in a search by first searching for identities.
entities = gr.search_entities(signer_name="UCSF") #TODO - OPUS Project: what would we search for on an identity?
entities
result = gr.search(anywhere="diabetes", validity_cert_ids=entities)
result.datasets()
# And as you can see, the new search only returned datasets which have been verified by the group of entities returned from the identity search.
#
result = gr.search(anywhere="diabetes")
result.datasets()
# #### Raw User Certification Data Structure (for Dataset)
#
# - Dataset ID: the id of the dataset which was used for an experiment
# - Certifier Public Key: the id of the individual who is making the claim (positive/negative endorsement)
# - Dataset Schema: COVID-MORT-2
# - Input Columns: ["Blood pressure", "BMI", "Age"]
# - Target Columns: ["Mortality"]
# - Dev Dataset: the dataset we're using for evaluation to create the FL Crossval Metric. (options: local dataset, <dataset_id>)
# - FL Crossval (list of objects with the following schema):
# - Pool Size: (Example: pool_size=30) the number of participants training models in a single round of federated learning.
# - Evaluation Metric: (precision, recall, F1, accuracy, BLEU, Entropy, Mean Squared Error, ROC, etc.)
# - Crossval Rank: (Example: crossval_rank=11) (11 / 30 = 36%)
# - Experiment Notes (optional): the way the experiment was run - details about modeling choices, etc.
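# The percentile arithmetic implied above (e.g. crossval_rank=11 in a pool of 30 is about 36.7%) and the `avg_user_cert_rank` aggregate could be computed as follows. Both helpers are illustrative, not part of the API:

```python
def crossval_percentile(crossval_rank: int, pool_size: int) -> float:
    """Relative ranking of a dataset within one federated-learning pool,
    e.g. rank 11 of 30 -> ~36.7 (lower is better)."""
    return round(crossval_rank / pool_size * 100, 1)

def avg_user_cert_rank(certs) -> float:
    """Average percentile rank across a list of FL Crossval entries
    (each a dict with 'crossval_rank' and 'pool_size')."""
    ranks = [c["crossval_rank"] / c["pool_size"] * 100 for c in certs]
    return round(sum(ranks) / len(ranks), 1)
```

# This average across many experiments is what lets a searcher separate datasets that consistently help from datasets that consistently hurt a pool's accuracy.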
result = gr.search(anywhere="diabetes")
result.datasets()
# result.validity_certificates()
user_cert_ids = result.user_cert_ids(filter="exclude people who on average overestimated the percentile ranking by 50% or more")
user_cert_ids
# returns the list of datasets we trust to be high quality.
result = result.search(anywhere="diabetes", user_cert_ids=user_cert_ids, user_cert_id_rank="85%", validity_cert_ids=validity_cert_ids)
# #### Provenance Search - Query 6: Looking for datasets of a certain origin
#
# ##### Object Provenance Certificate
#
# - Object ID: the id of the object being signed
# - Current Owner Public Key: identifying who the current owner is
# - Provenance Link Type: {Creation, Transfer}
# - Transfer Public Key: the public key of the target of the provenance link (the person the dataset is being transferred to)
# - Current Validity Certificate: a signed certificate of the dataset + metadata (includes all previous certificates) in its current state.
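# Given a list of such provenance links, checking that the chain of ownership is unbroken reduces to verifying that each transfer was issued by the current effective owner. A toy sketch (field names simplified from the certificate structure above; in the real system each link would also carry a verified signature):

```python
def chain_is_unbroken(links) -> bool:
    """Check a provenance chain: it must start with a 'Creation' link and
    every 'Transfer' must be issued by whoever currently owns the object.
    Each link is a dict with 'link_type', 'owner', and (for transfers)
    'transfer_to' fields."""
    if not links or links[0]["link_type"] != "Creation":
        return False
    effective_owner = links[0]["owner"]
    for link in links[1:]:
        # a transfer issued by anyone other than the current owner breaks the chain
        if link["link_type"] != "Transfer" or link["owner"] != effective_owner:
            return False
        effective_owner = link["transfer_to"]
    return True
```

# A link issued by someone who never held the object fails this check, which is what a provenance-filtered search relies on.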
result = gr.search(anywhere="diabetes")
provenance_cert_ids = result.provenance_cert_ids()
provenance_cert_ids # result.validity_certificates()
provenance_cert_ids = result.provenance_cert_ids(filter="IDs corresponding to UCSF employed doctors")
result = result.search(provenance_cert_ids_must_include=provenance_cert_ids, provenance_cert_ids_must_exclude=result.provenance_cert_ids(filter="IDs corresponding to remote-worker UCSF doctors"))
# #### Dataset Schema (Distributed Dataset) Search - Query 7: Looking for dataset schemas verified to be good for a specific problem
# #### Distant Search - Query 5: REGEX Search
#
# You can also do all of these queries using regex parameters like so
result = gr.search_regex(anywhere="[a-z]{4}-[0-9]{4}-[a-z]{4}-[0-9]{4}")
# #### Local Search - Queries: All
#
# And you can actually perform all of the same searches locally as well using the same parameters. However, it'll return almost instantly since it's running on your own device.
#
# Just run your queries against the "result" variable. If you lose this variable you can find it again in `gr.searches`
local_result = result.search(anywhere="trial")
local_result()
# And even this local result can be further refined in the same way! (It's recursive)
local_result2 = local_result.search(anywhere="trial")
local_result2()
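# The reason this recursion works is that a local result is just a wrapper around its own metadata table, and filtering that table returns another wrapper of the same type. A toy sketch with pandas (class and method names hypothetical, not the real API):

```python
import pandas as pd

class LocalResult:
    """Toy recursive search result: each refinement returns another
    LocalResult wrapping the filtered rows. Illustrative only."""

    def __init__(self, metadata: pd.DataFrame):
        self.metadata = metadata

    def search(self, anywhere: str) -> "LocalResult":
        # keep rows where any column's string form contains the term
        mask = self.metadata.apply(
            lambda row: row.astype(str).str.contains(anywhere, case=False).any(),
            axis=1,
        )
        return LocalResult(self.metadata[mask])

    def __call__(self) -> pd.DataFrame:
        return self.metadata
```

# Because every refinement yields the same type, chains like `result.search(...).search(...)` compose for free, and each step is instant since it only touches the locally cached table.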
# # Step 5: Search (again)
#
# Ok, enough about search in general! Let's continue with our experiment to find the secret to a good night's sleep. As a review, we're looking to accomplish three things:
#
# - Federated Learning: train a classifier to predict whether someone will get a good night sleep based on various input factors
#
# - Federated Analytics: use the classifier to estimate the amount of sleep that the population of the USA is getting (importantly including folks who do not record their own sleep data).
#
# - Self Classification: leveraging the model from FL, predict what decisions I should make to get a better night sleep.
#
# We have several networks at our disposal:
gr.networks
# Let's start simple and search for "sleep"
result = gr.search(anywhere="sleep")
result()
result.status() # return the status of the query (what percentage of known devices have replied)
result.datasets()
#
# show the individual dataset results
result.datasets(latest_version_only=True, ignore_duplicates=False, require_gpu=False)
# +
# latest_version_only=True by default
result.distributed()
# -
# # Bulk Domain Registration
#
# Let's say I found 10,000 phones with data I want to train on - I need to ask 10,000 people for permission to do so.
# # Select and Allocate Compute
#
# Now that we have found an interesting distributed dataset, we need to set up some compute to do our analysis. However, the important thing to consider is that we can't use just any compute: we have to use compute which is co-located with each dataset we want to analyze. For example, part of the COVID-MORT-2 distributed dataset we found above is in UCSF's datacenters, so we need to spin up some compute within UCSF's "Domain". A "Domain" is the official word we use when referring to "all the data and compute within the ownership and jurisdiction of a single entity, known as the domain owner".
#
# So, since the dataset we're most interested in ("COVID-MORT-2") is actually distributed across multiple domain owners, we need to get set up with some compute within each one.
#
#
#
#
# - Tensor:
# - name: the name of a tensor
# - schema (required - public - TensorSchema object): the schema of the tensor (type, name, and description for each column)
# - mock (generated): a mock tensor generated from the TensorSchema
# - id: the uid of the tensor
# - data: the tensor's values
# - tags (optional):
# - description (optional):
# - shape (required - public): the shape of the tensor
# - value: the tensor itself
# - private: is the tensor a private tensor?
# - sensitivity (optional): the sensitivity metadata for a tensor
# - h (public - derived from schema) - the max values a tensor can take on, derived from the schema
# - l (public - derived from schema)- the minimum values a tensor can take on, derived from the schema
# - e^h (private) - the max contributions from entities, initialized with the tensor
# - e^l (private) - the min contributions from entities, initialized with the tensor
# - accountant (private reference to global privacy accountant)
# - worst_case_user_budget: inferred values based on the worst case user-budget parameter across the entities in the tensor (see Entity.user_budget)
#
# - Entity:
# - uid (required, randomly generated, public)
# - metadata (optional)
# - user_budget (public - required): the per-user privacy budget parameters for this dataset:
# - lifetime_train: the total epsilon which can be published to the greater public (i.e., when a data scientist intends to release a number openly)
# - lifetime_dev: the total epsilon which can be published to the greater public (i.e., when a data scientist intends to release a number openly)
# - lifetime_test: the total epsilon which can be published to the greater public (i.e., when a data scientist intends to release a number openly)
# - user_lifetime_train: the total epsilon each data scientist gets when interacting with the training dataset
# - user_lifetime_dev: the total epsilon each data scientist gets when interacting with the dev dataset
# - user_lifetime_test: the total epsilon each data scientist gets when interacting with the test dataset
# - daily_auto_train: the amount of epsilon each data scientist gets per day for sample statistics which doesn't require compliance officer review
# - daily_auto_dev: the amount of epsilon each data scientist gets per day for sample statistics which doesn't require compliance officer review
# - daily_auto_test: the amount of epsilon each data scientist gets per day for sample statistics which doesn't require compliance officer review
# - query_auto_train: the maximum amount of epsilon one query can return which doesn't require officer review when interacting with the training dataset
# - query_auto_dev: the maximum amount of epsilon one query can return which doesn't require officer review when interacting with the dev dataset
# - query_auto_test: the maximum amount of epsilon one query can return which doesn't require officer review when interacting with the test dataset
#
#
# - TensorSchema:
# - name: the name of the schema
# - columns: each column has a type, name, and description for the column
#
# - SchemaColumn:
# - type
# - name
# - description
# - vocabulary (optional - for text datasets)
#
# - DatasetSchema: this is the schema of a dataset. Importantly, we try to encourage datasets in multiple locations to intentionally subscribe to the same schema so as to best facilitate Federated Learning.
#
# - DistributedDataset: this is a virtual object which refers to a collection of Dataset objects which all subscribe to the same Dataset Schema. It is a convenient object because it gives you fast access to datasets at multiple institutions which are appropriate to train on together.
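# The schema objects above could be sketched as plain dataclasses, including the "mock" generation that lets a data scientist write code against private data they cannot see. Names, fields, and types here are illustrative, not the real implementation:

```python
import random
from dataclasses import dataclass
from typing import List

@dataclass
class SchemaColumn:
    """One column of a TensorSchema: type, name, and description."""
    name: str
    type: str          # e.g. 'float' or 'int' in this toy version
    description: str = ""

@dataclass
class TensorSchema:
    """Type, name, and description for each column of a tensor."""
    name: str
    columns: List[SchemaColumn]

    def mock(self, n_rows: int = 3):
        """Generate fake rows matching the schema, so code can be
        developed remotely without ever seeing the private values."""
        def fake(col):
            return random.random() if col.type == "float" else random.randint(0, 9)
        return [[fake(c) for c in self.columns] for _ in range(n_rows)]
```

# Two datasets at different institutions that share the same DatasetSchema (and thus the same TensorSchemas) can then be grouped into one DistributedDataset and trained on together.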
#
# +
import pandas as pd
from time import time
import random
import os
from binascii import hexlify
def get_key():
key = hexlify(os.urandom(32)).decode()
return key
class Grid:
    """Mock client object standing in for the real Grid API."""

class Wallet:
    """Mock wallet holding domain keys."""
gr = Grid()
gr.wallet = Wallet()
#ignore this...it's just to support the mock API
columns=["network", "domain", "pubkey", "prikey"]
data = [["OpenGrid", "PatrickCason", get_key(), get_key()],
["OpenGrid", "AndrewTrask", get_key(), get_key()],
["OpenGrid", "TudorCebere", get_key(), get_key()],
["OpenGrid", "JasonMancuso", get_key(), get_key()],
["OpenGrid", "BobbyWagner", get_key(), get_key()],
["AMA", "UCSF", get_key(), get_key()],
["AMA", "Vanderbilt", get_key(), get_key()],
["AMA", "MDAnderson", get_key(), get_key()],
["AMA", "BostonGeneral", get_key(), get_key()],
["AMA", "HCA", get_key(), get_key()],
["CDC", "Atlanta", get_key(), get_key()],
["CDC", "New York", get_key(), get_key()],
]
domain_keys = pd.DataFrame(columns=columns, data=data)
gr.wallet.domain_keys = domain_keys
#ignore this...it's just to support the mock API
columns=["id", "name", "datasets", "models", "domains", "online", "registered", "server-domains", "mobile-domains"]
data = [[random.randint(0,1e10), "OpenGrid", 235262, 2352, 2532, 2352, random.randint(0,100), "100%", "0%"],
[random.randint(0,1e10), "FitByte", 34734, 2352, 2532, 2352, random.randint(0,100), "2%", "98%"],
[random.randint(0,1e10), "DeepMined", 34734, 2352, 2532, 2352, random.randint(0,100), "100%", "0%"],
[random.randint(0,1e10), "OpanAI", 0, 2352, 2532, 2352, random.randint(0,100), "2%", "98%"],
[random.randint(0,1e10), "<NAME>", 0, 2352, 2532, 2352, random.randint(0,100), "99%", "1%"],
[random.randint(0,1e10), "MyFitnessPal", 0, 2352, 2532, 2352, random.randint(0,100), "50%", "50%"],
[random.randint(0,1e10), "TrackMyRun", 0, 2352, 2532, 2352, random.randint(0,100), "0%", "100%"],
[random.randint(0,1e10), "Netflax", 935685, 7473, 346, 216, 0, "54%", "46%"],
[random.randint(0,1e10), "AMA", 2352, 236622, 53, 52, random.randint(0,100), "100%", "0%"],
[random.randint(0,1e10), "CDC", 35, 0, 5, 5, 5, "100%", "0%"]]
networks = pd.DataFrame(columns=columns, data=data)
networks = networks.set_index("name")
gr.networks = networks
def save_network(network):
columns=["id", "name", "datasets", "models", "domains", "online", "registered"]
data = [[2352, "NHS", 86585, 6585, 5, 5, 0]]
network = pd.DataFrame(columns=columns, data=data)
network = network.set_index("name")
gr.networks = pd.concat([gr.networks, network])
print("Connecting... SUCCESS!")
return network
gr.save_network = save_network
def search_diabetes(*args, **kwargs):
if('anywhere' in kwargs and kwargs['anywhere'] == "diabetes"):
if("validity_cert_ids" in kwargs):
columns=["distributed_datasets", "datasets", "tensors", "dataset_schemas", "tensor_schemas", "models", "model_schemas"]
data = [[23, 75474, 947467, 532, 23, 235, 62]]
nets = pd.DataFrame(columns=columns, data=data)
key2 = get_key()
columns=["name", "network", "domain", "id", "upload-date", "version", "frameworks", "train_rows", "dev_rows", "test_rows", "schema", "tags", "description", "private", "metadata", "gpu_available"]
data = [["COVID Mortality", "UCSF", "UCSF", get_key(), "12/18/2019", "1", "PT/TF/PD/NP/JX", 2626, 353, 366, "COVID-MORT-2", "#covid #or...", "This is the official statistics for COVID deaths within...", "True", "{'collected':2019}", "True"]]
datasets = pd.DataFrame(columns=columns, data=data)
datasets = datasets.set_index("name")
columns=["schema-name", "networks", "domains", "popular_dataset_description", "n_datasets", "n_datasets_dedup", "train_rows", "dev_rows", "test_rows", "max_train_rows_per_domain", "max_dev_rows_per_domain", "max_test_rows_per_domain"]
data = [["COVID-MORT-2", "UCSF,CDC,AMA", "UCSF,Atlanta,Chicago,Boston General", "Nationally reported on a daily basis, this dataset includes", 4,3, 39610, 1063, 366, 34632, 355, 366],
["AMA-DIABETES-TRIAL-252", "AMA", "Boston General", "In 2018, the American Medical Association...", 4,3, 23267, 335, 3463, 23267, 335, 3463]]
distributed = pd.DataFrame(columns=columns, data=data)
distributed = distributed.set_index("schema-name")
class Networks():
def __call__(self):
return nets
def datasets(self, *args, **kwargs):
return datasets
def distributed(self, *args, **kwargs):
return distributed
def search(self, *args, **kwargs):
return self
networks = Networks()
return networks
columns=["distributed_datasets", "datasets", "tensors", "dataset_schemas", "tensor_schemas", "models", "model_schemas"]
data = [[23, 75474, 947467, 532, 23, 235, 62]]
nets = pd.DataFrame(columns=columns, data=data)
key2 = get_key()
columns=["name", "network", "domain", "$/eps", "$/gpu_flop_hour", "validity_certs", "top_validity_certs", "user_certs", "avg_user_cert_rank", "id", "upload-date", "version", "frameworks", "train_rows", "dev_rows", "test_rows", "schema", "tags", "description", "private", "metadata", "gpu_available"]
data = [["COVID Mortality", "UCSF", "UCSF", "$364.32", "$5.35", 25, "UCSF, FDA, NSF", 6432, "8.3%", get_key(), "12/18/2019", "1", "PT/TF/PD/NP/JX", 2626, 353, 366, "COVID-MORT-2", "#covid #or...", "This is the official statistics for COVID deaths within...", "True", "{'collected':2019}", "True"],
["US COVID Deaths", "CDC", "Atlanta", "$253.12", "$4.22", 23, "CDC, FDA, NSF", 3523, "12.5%", key2, "1/18/2020", "23", "PT/TF/PD/NP/JX", 34632, 355, 0, "COVID-MORT-2", "#covid #or...", "Nationally reported on a daily basis, this dataset includes", "True", "{'collected':2020}", "False"],
["US COVID Deaths", "CDC", "Chicago", "$263.56", "$3.12", 15, "UCSF, FDA, NSF", 1532, "3.3%", key2, "1/18/2020", "23", "PT/TF/PD/NP/JX", 34632, 355, 0, "COVID-MORT-2", "#covid #or...", "Nationally reported on a daily basis, this dataset includes", "True", "{'collected':2020}", "True"],
["COVID Deaths", "AMA", "Boston General", "$135.37", "$5.32", 1, "AMA", 3523, "20.3%", get_key(), "2/20/2020", "2", "PT/TF/PD/NP/JX", 2352, 335, 0, "COVID-MORT-2", "#covid #or...", "With attributes including risk factors like diabetes...", "True", "{'collected':2020}", "True"],
        ["COVID Deaths", "AMA", "Doctor's Direct", "$135.37", "$5.32", 1, "<NAME>", 412, "93.5%", get_key(), "2/20/2020", "2", "PT/TF/PD/NP/JX", 2352, 335, 0, "COVID-MORT-2", "#covid #or...", "With attributes including risk factors like diabetes...", "True", "{'collected':2020}", "True"],
["Diabetes Pump Trial Data", "AMA", "Boston General", "$5,364.32", "$1.32", 1, "AMA", 235, "32.1%", get_key(), "1/3/2018", "26", "PT/TF/PD/NP/JX", 23267, 335, 3463, "AMA-DIABETES-TRIAL-252", "#diabetes #or...", "In 2018, the American Medical Association...", "True", "{'collected':2018}", "True"]]
datasets = pd.DataFrame(columns=columns, data=data)
datasets = datasets.set_index("name")
columns=["schema-name", "networks", "domains", "popular_dataset_description", "n_datasets", "n_datasets_dedup", "train_rows", "dev_rows", "test_rows", "max_train_rows_per_domain", "max_dev_rows_per_domain", "max_test_rows_per_domain"]
data = [["COVID-MORT-2", "UCSF,CDC,AMA", "UCSF,Atlanta,Chicago,Boston General", "Nationally reported on a daily basis, this dataset includes", 4,3, 39610, 1063, 366, 34632, 355, 366],
["AMA-DIABETES-TRIAL-252", "AMA", "Boston General", "In 2018, the American Medical Association...", 4,3, 23267, 335, 3463, 23267, 335, 3463]]
distributed = pd.DataFrame(columns=columns, data=data)
distributed = distributed.set_index("schema-name")
class Networks():
def __call__(self):
return nets
def datasets(self, *args, **kwargs):
return datasets
def distributed(self, *args, **kwargs):
return distributed
def search(self, *args, **kwargs):
return self
def validity_certificates(self, *args, **kwargs):
columns=["cert_id", "signer_pubkey", "signer_name", "signer url", "dataset_id", "dataset_name", "certificate_authority_type", "certificate_authority_url", "certificate_metadata"]
cdc = get_key()
data = [[get_key()[0:10], get_key(), "UCSF", "http://opus-id.org/ucsf", get_key()[0:10], "Covid Mortality", "https", "https://letsencrypt.org/", "{...}"],
[get_key()[0:10], get_key(), "FDA", "http://opus-id.org/fda", get_key()[0:10], "Covid Mortality", "https", "https://letsencrypt.org/", "{...}"],
[get_key()[0:10], get_key(), "NSF", "http://opus-id.org/nsf", get_key()[0:10], "Covid Mortality", "https", "https://letsencrypt.org/", "{...}"],
[get_key()[0:10], get_key(), "AMA", "http://opus-id.org/ama", get_key()[0:10], "COVID Deaths", "https", "https://letsencrypt.org/", "{...}"],
[get_key()[0:10], cdc, "CDC", "http://opus-id.org/cdc", get_key()[0:10], "US COVID Deaths", "https", "https://letsencrypt.org/", "{...}"],
[get_key()[0:10], cdc, "CDC", "http://opus-id.org/cdc", get_key()[0:10], "Covid Mortality", "https", "https://letsencrypt.org/", "{...}"]]
certs = pd.DataFrame(columns=columns, data=data)
certs = certs.set_index("cert_id")
return certs
networks = Networks()
return networks
else:
columns=["distributed_datasets", "datasets", "tensors", "dataset_schemas", "tensor_schemas", "models", "model_schemas"]
data = [[23, 75474, 947467, 532, 23, 235, 62]]
nets = pd.DataFrame(columns=columns, data=data)
key2 = get_key()
columns=["name", "network", "domain", "id", "upload-date", "version", "frameworks", "train_rows", "dev_rows", "test_rows", "schema", "tags", "description", "private", "metadata", "gpu_available"]
data = [["COVID Mortality", "UCSF", "UCSF", get_key(), "12/18/2019", "1", "PT/TF/PD/NP/JX", 2626, 353, 366, "COVID-MORT-2", "#covid #or...", "This is the official statistics for COVID deaths within...", "True", "{'collected':2019}", "True"]]
datasets = pd.DataFrame(columns=columns, data=data)
datasets = datasets.set_index("name")
columns=["schema-name", "networks", "domains", "popular_dataset_description", "n_datasets", "n_datasets_dedup", "train_rows", "dev_rows", "test_rows", "max_train_rows_per_domain", "max_dev_rows_per_domain", "max_test_rows_per_domain"]
data = [["COVID-MORT-2", "UCSF,CDC,AMA", "UCSF,Atlanta,Chicago,Boston General", "Nationally reported on a daily basis, this dataset includes", 4,3, 39610, 1063, 366, 34632, 355, 366],
["AMA-DIABETES-TRIAL-252", "AMA", "Boston General", "In 2018, the American Medical Association...", 4,3, 23267, 335, 3463, 23267, 335, 3463]]
distributed = pd.DataFrame(columns=columns, data=data)
distributed = distributed.set_index("schema-name")
class Networks():
def __call__(self):
return nets
def datasets(self, *args, **kwargs):
return datasets
def distributed(self, *args, **kwargs):
return distributed
def search(self, *args, **kwargs):
return self
def status(self):
columns=["query", "timestamp", "duration (hours)", "time remaining", "% of all known domains", "% of domains alive in last 24 hours"]
data = [["anywhere=sleep", time(), 24, 23.9, "35%", "98%"]]
datasets = pd.DataFrame(columns=columns, data=data)
return datasets
networks = Networks()
return networks
def search_entities(*args, **kwargs):
if("ucsf" in kwargs['anywhere'] or "UCSF" in kwargs['anywhere']):
columns=["pubkey", "name", "aliases", "opus_url", "verified_secondary_url"]
        data = [[get_key(), "University of California San Francisco", ["UCSF", "ucsf"], "http://opus-id.org/ucsf", "https://www.ucsf.edu/"]]
entities = pd.DataFrame(columns=columns, data=data)
entities = entities.set_index("pubkey")
return entities
gr.search = search_diabetes
gr.search_regex = search_diabetes
gr.searches = [{"anywhere":"diabetes", "hours_remaining":48}]
gr.search_entities = search_entities
# -
result = gr.search(anywhere="diabetes")
result.validity_certificates()
# +
# diabetesSearch = network.search('diabetes') # search dataset name, description, and tags for 'diabetes'
diabetesSearch = network.search({'tag': 'diabetes'}) # specifically search for datasets with a tag of 'diabetes'
print(diabetesSearch)
"""
[
{
id: 1,
name: 'Diabetes is terrible',
description: '',
node: 'ws://ucsf.com/pygrid',
tags: ['diabetes', 'california', 'ucsf'],
tensors: [
{
id: '1a',
name: 'data',
schema: []
},
{
id: '1b',
name: 'target',
schema: []
}
]
},
...
]
"""
network.disconnect()
client = grid.connect(diabetesSearch[0].node) # 'ws://ucsf.com/pygrid'
user = client.signup('<EMAIL>', 'password')
# user = client.login('<EMAIL>', 'password') # or, if you're already signed up
computeTypes = client.getComputeTypes()
"""
[
{
id: 1,
name: 'EC2 P3',
provider: 'AWS',
cpu: {
type: 'Intel Xeon 3.4GHz',
cores: 32
},
gpu: {
type: 'Tesla V100',
min: 0,
max: 8
},
ram: {
value: 64,
ordinal: 'gb'
}
},
...
]
"""
# env = user.createEnvironment() # creates the basic "default" environment for exploring
env = user.createEnvironment(computeTypes[0].id, {
    'ram': Grid.RAM(32, 'gb'),
    'gpu': 3
})
# Do stuff with "env"
# user.getEnvironments();
# -
# # Schema toolset
#
# Below are some schema-related API calls that could help with searching for datasets, starting new ones, and aggregating data from different datasets.
#
#
# ### Describe Schema and Search Schema API calls
#
# Deciding which fields belong in a dataset is hard to get right without revisiting the schema multiple times.
#
# Extending the search API to cover schemas, and adding a call to describe each schema, would help institutions reuse existing schemas or draw inspiration from them. In the long term, this would make it easier to train on different datasets that share the same or similar schemas.
#
# Given the name of a Schema, the describe call provides information about the fields in the dataset:
# - name: the name of the schema
# - id: the id of the schema (not covered in the rest of this document, but it would be good to establish one)
# - columns: each column has a type, name, and description for the column
#
# The existing search API could be extended to search across schemas, and the describe call could then be used to get specific information about them. Note that this would already give us a 'popularity' indicator for each schema, discouraging the invention of yet more schemas:
#
# ```
# result = gr.search(anywhere="sleep")
# pd_table = result.schemas.pandas()
# pd_table
# ```
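# A minimal sketch of what the hypothetical `describe` call could return, in the same
# mock-DataFrame style used elsewhere in this notebook (the schema name and the field
# list below are illustrative assumptions, not part of any real API):

```python
import pandas as pd

# Hypothetical mock of a schema "describe" call: one row per column,
# with the column's name, type, and a human-readable description.
def describe_schema(schema_name):
    columns = ["schema", "column", "type", "description"]
    data = [[schema_name, "date", "datetime", "Reporting date"],
            [schema_name, "region", "string", "Reporting region or hospital"],
            [schema_name, "deaths", "int", "Confirmed COVID-19 deaths"]]
    return pd.DataFrame(columns=columns, data=data).set_index("column")

describe_schema("COVID-MORT-2")
```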
#
#
# ### Schema transformers
#
# Schema transformers are entities that can map classes from one or more input schemas to an output schema. The goals of those transformers are:
# * Be able to combine data from different datasets that do not share the same schema
# * Compare inference results of different machine learning models that do not share the same schema
# * Optionally provide data relationships between the input schema and the output schema. For example: class A and class B from the same input dataset are both mapped to class X of the output dataset, but class A's scope is a subclass of B's. This type of information might be useful in some training techniques.
#
#
# ##### Certificates
# The schema transformers are more than just some 'fuzzy matching' executed on top of the labels of each input dataset. The transformers are curated by subject matter experts that validate the relationships of classes between datasets.
#
# The schema transformers adopt the user certificates and validity certificates applied to the datasets, giving the user an idea of the quality of the schema transformation provided.
#
#
# ##### Data transformation operations
#
# In some circumstances a mapping from one schema to another might leave some important fields empty. In some cases those fields will remain empty unless the entity that created the original dataset fills them in, but in other cases it is possible to derive a new output field from the input fields, for example RGB-to-YUV conversions.
#
# Optionally by the transformer, it is possible to pull a description of how
#
#
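# As a rough, self-contained sketch of how such a transformer could map columns and
# class values from an input schema to an output schema. All names here (the class,
# the column maps, the schemas) are illustrative assumptions, not part of any real API:

```python
import pandas as pd

# Hypothetical schema transformer: a curated mapping from input-schema columns
# (and, optionally, per-column class values) to an output schema.
class SchemaTransformer:
    def __init__(self, column_map, class_map=None):
        self.column_map = column_map      # input column -> output column
        self.class_map = class_map or {}  # output column -> {input value: output value}

    def transform(self, df):
        # Keep only the mapped columns, renamed to the output schema
        out = df.rename(columns=self.column_map)[list(self.column_map.values())].copy()
        for col, mapping in self.class_map.items():
            # Remap class values; leave values without a curated mapping untouched
            out[col] = out[col].map(mapping).fillna(out[col])
        return out

# Map (mortality_date, n_dead, loc) onto a hypothetical output schema (date, deaths, region)
t = SchemaTransformer(
    {"mortality_date": "date", "n_dead": "deaths", "loc": "region"},
    class_map={"region": {"ATL": "Atlanta"}})
src = pd.DataFrame({"mortality_date": ["1/18/2020"], "n_dead": [23], "loc": ["ATL"]})
t.transform(src)
```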
|
examples/mock_apis/Remote Medical Data Science Grid Mock API.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/jair226/daa_2021_1/blob/master/13Enero.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + id="n8QiFp7YA-3Y"
class NodoArbol:
    def __init__(self, dato, hijo_izq=None, hijo_der=None):
        self.dato = dato
        self.left = hijo_izq
        self.right = hijo_der
# + id="pR8MqhrHeenn"
class BinarySearchTree:
    def __init__(self):
        self.__root = None

    def insert(self, value):
        if self.__root is None:
            self.__root = NodoArbol(value, None, None)
        else:
            # Ask whether value is less than the root; if so, insert on the
            # left. The left subtree may already contain many elements, so
            # the helper keeps descending until it finds a free spot.
            self._insert_nodo_(self.__root, value)

    def _insert_nodo_(self, nodo, value):
        if nodo.dato == value:
            pass
        elif value < nodo.dato:  # True -> go to the left
            if nodo.left is None:  # if there is room on the left, it goes there
                nodo.left = NodoArbol(value, None, None)  # insert the node
            else:
                self._insert_nodo_(nodo.left, value)  # search in the left subtree
        else:
            if nodo.right is None:
                nodo.right = NodoArbol(value, None, None)
            else:
                self._insert_nodo_(nodo.right, value)  # search in the right subtree
# + id="tj6pfKvoei1h"
bst=BinarySearchTree()
bst.insert(50)
bst.insert(30)
bst.insert(20)
#bst.search(50)  # True or False
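# The commented-out `bst.search(50)` call above has no implementation in the class.
# Below is a self-contained sketch of how a `search` method could look (node and
# tree are redefined here so this cell runs on its own):

```python
class NodoArbol:
    def __init__(self, dato, hijo_izq=None, hijo_der=None):
        self.dato = dato
        self.left = hijo_izq
        self.right = hijo_der

class BinarySearchTree:
    def __init__(self):
        self.__root = None

    def insert(self, value):
        if self.__root is None:
            self.__root = NodoArbol(value)
        else:
            self.__insert_nodo(self.__root, value)

    def __insert_nodo(self, nodo, value):
        if value == nodo.dato:
            return  # duplicates are ignored
        if value < nodo.dato:
            if nodo.left is None:
                nodo.left = NodoArbol(value)
            else:
                self.__insert_nodo(nodo.left, value)
        else:
            if nodo.right is None:
                nodo.right = NodoArbol(value)
            else:
                self.__insert_nodo(nodo.right, value)

    def search(self, value):
        # Walk down from the root, going left or right, until the value
        # is found or a leaf is passed.
        nodo = self.__root
        while nodo is not None:
            if value == nodo.dato:
                return True
            nodo = nodo.left if value < nodo.dato else nodo.right
        return False

bst = BinarySearchTree()
for v in (50, 30, 20):
    bst.insert(v)
print(bst.search(50), bst.search(99))  # True False
```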
|
13Enero.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] toc="true"
# # Table of Contents
# <p><div class="lev1 toc-item"><a href="#Building-a-CNN" data-toc-modified-id="Building-a-CNN-1"><span class="toc-item-num">1 </span>Building a CNN</a></div><div class="lev2 toc-item"><a href="#Prepare-the-dataset" data-toc-modified-id="Prepare-the-dataset-11"><span class="toc-item-num">1.1 </span>Prepare the dataset</a></div><div class="lev2 toc-item"><a href="#Building-the-CNN" data-toc-modified-id="Building-the-CNN-12"><span class="toc-item-num">1.2 </span>Building the CNN</a></div><div class="lev2 toc-item"><a href="#Fitting-the-CNN-to-the-images" data-toc-modified-id="Fitting-the-CNN-to-the-images-13"><span class="toc-item-num">1.3 </span>Fitting the CNN to the images</a></div><div class="lev2 toc-item"><a href="#Making-new-predictions" data-toc-modified-id="Making-new-predictions-14"><span class="toc-item-num">1.4 </span>Making new predictions</a></div><div class="lev2 toc-item"><a href="#Challenge" data-toc-modified-id="Challenge-15"><span class="toc-item-num">1.5 </span>Challenge</a></div><div class="lev2 toc-item"><a href="#Better-solution" data-toc-modified-id="Better-solution-16"><span class="toc-item-num">1.6 </span>Better solution</a></div>
# -
# # Building a CNN
#
# Credit: [Deep Learning A-Z™: Hands-On Artificial Neural Networks](https://www.udemy.com/deeplearning/learn/v4/content)
#
# - [Getting the dataset](https://www.superdatascience.com/deep-learning/)
# ## Prepare the dataset
#
# - keras has an efficient support for images, but we need to prepare all the images in a special structure:
# - Separate images in training and test folder.
# - In order to differentiate the class labels, the simple trick is to make two different folders: one for dog and one for cat. Keras will understand how to differentiate the labels of our independent variables.
# List all directories and sub-directories
# !find ./Convolutional_Neural_Networks/dataset -maxdepth 5 -type d
# ## Building the CNN
# Importing the Keras libraries and packages
from keras.models import Sequential
from keras.layers import Conv2D
from keras.layers import MaxPooling2D
from keras.layers import Flatten
from keras.layers import Dense
# +
# Initialising the CNN
classifier = Sequential()
# Step 1 - Convolution
classifier.add(Conv2D(32, (3, 3), input_shape = (64, 64, 3), activation = 'relu'))
# Step 2 - Pooling
classifier.add(MaxPooling2D(pool_size = (2, 2)))
# Adding a second convolutional layer
classifier.add(Conv2D(32, (3, 3), activation = 'relu'))
classifier.add(MaxPooling2D(pool_size = (2, 2)))
# Step 3 - Flattening
classifier.add(Flatten())
# Step 4 - Full connection
classifier.add(Dense(units = 128, activation = 'relu'))
classifier.add(Dense(units = 1, activation = 'sigmoid'))
# Compiling the CNN
classifier.compile(optimizer = 'adam', loss = 'binary_crossentropy', metrics = ['accuracy'])
# -
# ## Fitting the CNN to the images
# - We only have 10,000 images, which is not enough for great performance. We can either collect more images or use the trick of data augmentation:
#     - Image augmentation creates many batches of our images and then applies some random transformations (rotating, flipping, shifting, shearing) to a random selection of images in each batch. As a result, the training process sees more diverse images within these batches.
#     - The image augmentation trick can help reduce overfitting.
# - [ImageDataGenerator](https://keras.io/preprocessing/image/)
# +
from keras.preprocessing.image import ImageDataGenerator
train_datagen = ImageDataGenerator(rescale = 1./255,
shear_range = 0.2,
zoom_range = 0.2,
horizontal_flip = True)
test_datagen = ImageDataGenerator(rescale = 1./255)
training_set = train_datagen.flow_from_directory('Convolutional_Neural_Networks/dataset/training_set',
target_size = (64, 64),
batch_size = 32,
class_mode = 'binary')
test_set = test_datagen.flow_from_directory('Convolutional_Neural_Networks/dataset/test_set',
target_size = (64, 64),
batch_size = 32,
class_mode = 'binary')
classifier.fit_generator(training_set,
steps_per_epoch = 8000,
epochs = 25,
validation_data = test_set,
validation_steps = 2000)
# -
# ## Making new predictions
# +
import numpy as np
from keras.preprocessing import image
test_image = image.load_img('Convolutional_Neural_Networks/dataset/single_prediction/cat_or_dog_1.jpg',
target_size = (64, 64))
test_image = image.img_to_array(test_image)
test_image = np.expand_dims(test_image, axis = 0)
result = classifier.predict(test_image)
training_set.class_indices
if result[0][0] == 1:
prediction = 'dog'
else:
prediction = 'cat'
print (prediction)
# -
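# The hardcoded if/else above works for two classes, but the label can also be
# derived from `class_indices` directly. A minimal sketch, using an example
# `class_indices` dict that stands in for `training_set.class_indices` so the
# cell runs without the dataset:

```python
# class_indices maps each class name to its integer index, as produced by
# flow_from_directory; this example dict is a stand-in for training_set.class_indices.
class_indices = {'cats': 0, 'dogs': 1}

def label_from_prediction(result_value, class_indices):
    # Invert the mapping (index -> name) and look up the predicted index
    index_to_label = {v: k for k, v in class_indices.items()}
    return index_to_label[int(round(result_value))]

print(label_from_prediction(1.0, class_indices))  # dogs
print(label_from_prediction(0.0, class_indices))  # cats
```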
# ## Challenge
#
# Evaluation was already performed during training via the validation set, so k-Fold Cross-Validation is not needed.
#
# Then the techniques to improve and tune a CNN model are the same as for ANNs. So here is the challenge:
#
# Apply the techniques you've learnt and use your architectural skills to build a CNN that will win you the gold medal:
# - Bronze medal: Test set accuracy between 80% and 85%
# - Silver medal: Test set accuracy between 85% and 90%
# - Gold medal: Test set accuracy over 90%!!
#
# Rules:
# - Use the same training set
# - Dropout allowed
# - Customized SGD allowed
# - Specific seeds not allowed
#
#
#
# ## Better solution
# - [Prediction Challenge Solution - Result: 90%](https://www.udemy.com/deeplearning/learn/v4/questions/2276518)
# +
from tensorflow.contrib.keras.api.keras.layers import Dropout
from tensorflow.contrib.keras.api.keras.models import Sequential
from tensorflow.contrib.keras.api.keras.layers import Conv2D
from tensorflow.contrib.keras.api.keras.layers import MaxPooling2D
from tensorflow.contrib.keras.api.keras.layers import Flatten
from tensorflow.contrib.keras.api.keras.layers import Dense
from tensorflow.contrib.keras.api.keras.callbacks import Callback
from tensorflow.contrib.keras.api.keras.preprocessing.image import ImageDataGenerator
from tensorflow.contrib.keras import backend
import os
class LossHistory(Callback):
def __init__(self):
super().__init__()
self.epoch_id = 0
self.losses = ''
def on_epoch_end(self, epoch, logs={}):
self.losses += "Epoch {}: accuracy -> {:.4f}, val_accuracy -> {:.4f}\n"\
.format(str(self.epoch_id), logs.get('acc'), logs.get('val_acc'))
self.epoch_id += 1
def on_train_begin(self, logs={}):
self.losses += 'Training begins...\n'
script_dir = os.path.dirname(__file__)
training_set_path = os.path.join(script_dir, '../dataset/training_set')
test_set_path = os.path.join(script_dir, '../dataset/test_set')
# Initialising the CNN
classifier = Sequential()
# Step 1 - Convolution
input_size = (128, 128)
classifier.add(Conv2D(32, (3, 3), input_shape=(*input_size, 3), activation='relu'))
# Step 2 - Pooling
classifier.add(MaxPooling2D(pool_size=(2, 2))) # 2x2 is optimal
# Adding a second convolutional layer
classifier.add(Conv2D(32, (3, 3), activation='relu'))
classifier.add(MaxPooling2D(pool_size=(2, 2)))
# Adding a third convolutional layer
classifier.add(Conv2D(64, (3, 3), activation='relu'))
classifier.add(MaxPooling2D(pool_size=(2, 2)))
# Step 3 - Flattening
classifier.add(Flatten())
# Step 4 - Full connection
classifier.add(Dense(units=64, activation='relu'))
classifier.add(Dropout(0.5))
classifier.add(Dense(units=1, activation='sigmoid'))
# Compiling the CNN
classifier.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
# Part 2 - Fitting the CNN to the images
batch_size = 32
train_datagen = ImageDataGenerator(rescale=1. / 255,
shear_range=0.2,
zoom_range=0.2,
horizontal_flip=True)
test_datagen = ImageDataGenerator(rescale=1. / 255)
training_set = train_datagen.flow_from_directory(training_set_path,
target_size=input_size,
batch_size=batch_size,
class_mode='binary')
test_set = test_datagen.flow_from_directory(test_set_path,
target_size=input_size,
batch_size=batch_size,
class_mode='binary')
# Create a loss history
history = LossHistory()
classifier.fit_generator(training_set,
steps_per_epoch=8000/batch_size,
epochs=90,
validation_data=test_set,
validation_steps=2000/batch_size,
workers=12,
max_q_size=100,
callbacks=[history])
# Save model
model_backup_path = os.path.join(script_dir, '../dataset/cat_or_dogs_model.h5')
classifier.save(model_backup_path)
print("Model saved to", model_backup_path)
# Save loss history to file
loss_history_path = os.path.join(script_dir, '../loss_history.log')
myFile = open(loss_history_path, 'w+')
myFile.write(history.losses)
myFile.close()
backend.clear_session()
print("The model class indices are:", training_set.class_indices)
|
DeepLearningA-Z/02-supervised-deep-learning/02-Convolutional-Neural-Networks-CNN/.ipynb_checkpoints/02-Building-a-CNN-checkpoint.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: ML_venv
# language: python
# name: ml_venv
# ---
# # KNN Test
#
# Running 30 rounds of the KNN algorithm on the **Iris dataset** using the `scikit-learn` library.
# ## Importing Libraries
# +
import pandas as pd
from random import randint
from sklearn.datasets import load_iris
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report, confusion_matrix, accuracy_score, recall_score
# -
# ## Loading the _Iris Dataset_
iris_ds = load_iris()
iris_data, iris_target = load_iris(return_X_y=True)
# ## Test Rounds
# +
k_vizinhos = int(len(iris_data) ** 0.5)
iris_accuracy = []
iris_recall = []
for rodada in range(30):
    # Splitting the data
semente = randint(0, 8001)
data_train, data_test, target_train, target_test = train_test_split(iris_data, iris_target, test_size=0.3, random_state=semente)
    # Instantiating the classifier
classifier = KNeighborsClassifier(n_neighbors=k_vizinhos)
    # Training the model
classifier.fit(data_train, target_train)
    # Running the classification test
prediction = classifier.predict(data_test)
    # Results
iris_accuracy.append(accuracy_score(target_test, prediction))
iris_recall.append(recall_score(target_test, prediction, average=None))
    print("\n- ROUND {turn} - Seed {seed} -\n".format(turn=rodada, seed=semente))
print(classification_report(target_test, prediction, target_names=iris_ds.target_names))
print(confusion_matrix(target_test, prediction))
# -
# ## Accuracy per Round
# +
# Accuracy formatted to 3 decimal places
accuracy_formatada = [round(acc, 3) for acc in iris_accuracy]
accuracy_df = pd.DataFrame(data=iris_accuracy, columns=['Accuracy'])
accuracy_df['Formatted accuracy'] = accuracy_formatada
accuracy_df
# -
# ## Recall per Round
pd.DataFrame(data=iris_recall, columns=iris_ds.target_names)
# Recall formatted to 3 decimal places
recall_formatado = [[round(setosa, 3), round(versicolor, 3), round(virginica, 3)] for setosa, versicolor, virginica in iris_recall]
pd.DataFrame(data=recall_formatado, columns=iris_ds.target_names)
# ## Computing the Mean (Accuracy)
# +
accuracy_media = 0
for acc in accuracy_formatada:
accuracy_media += acc
accuracy_media /= 30
# Rounding to 5 decimal places
print("Mean accuracy: {media}".format(media=round(accuracy_media, 5)))
# -
# ## Computing the Standard Deviation (Accuracy)
# +
accuracy_distancia = 0
for amostra in accuracy_formatada:
accuracy_distancia += (amostra - accuracy_media) ** 2
accuracy_DP = (accuracy_distancia / len(accuracy_formatada)) ** 0.5
print("Accuracy standard deviation: {dp}".format(dp=round(accuracy_DP, 5)))
# -
# ## Computing the Mean (Recall)
# +
recall_media = [0, 0, 0]
for setosa, versicolor, virginica in recall_formatado:
recall_media[0] += setosa
recall_media[1] += versicolor
recall_media[2] += virginica
recall_media = [media/30 for media in recall_media]
# Rounding to 5 decimal places
print("Mean recall")
print("setosa: {setosa}".format(setosa=round(recall_media[0], 5)))
print("versicolor: {versicolor}".format(versicolor=round(recall_media[1], 5)))
print("virginica: {virginica}".format(virginica=round(recall_media[2], 5)))
# -
# ## Computing the Standard Deviation (Recall)
# +
recall_distancia = [0, 0, 0]
for Asetosa, Aversicolor, Avirginica in recall_formatado:
recall_distancia[0] += (Asetosa - recall_media[0]) ** 2
recall_distancia[1] += (Aversicolor - recall_media[1]) ** 2
recall_distancia[2] += (Avirginica - recall_media[2]) ** 2
recall_DP = [(distancia / len(recall_formatado)) ** 0.5 for distancia in recall_distancia]
# Rounding to 5 decimal places
print("Recall standard deviation")
print("Setosa: {setosa}".format(setosa=round(recall_DP[0], 5)))
print("Versicolor: {versicolor}".format(versicolor=round(recall_DP[1], 5)))
print("Virginica: {virginica}".format(virginica=round(recall_DP[2], 5)))
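# The hand-rolled loops above can be cross-checked with NumPy, which computes the
# same population mean and standard deviation in one call each. A small sketch on
# stand-in scores (the real per-round values depend on the random seeds):

```python
import numpy as np

# Stand-in accuracy scores for a few rounds (illustrative values only)
scores = [0.956, 0.933, 0.978, 0.956]

media = np.mean(scores)
# ddof=0 gives the population standard deviation, matching the manual formula
# above that divides by len(...) rather than len(...) - 1
desvio = np.std(scores, ddof=0)

print(round(media, 5), round(desvio, 5))
```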
|
ML/f00-sklearn_explore/01-knn/01-test.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
import os
import warnings
warnings.filterwarnings('ignore')
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'
os.environ["CUDA_DEVICE_ORDER"] = "PCI_BUS_ID"
os.environ["CUDA_VISIBLE_DEVICES"]="1"
import pandas as pd
import numpy as np
from gtda.time_series import SlidingWindow
import matplotlib.pyplot as plt
from math import atan2, pi, sqrt, cos, sin, floor
from data_utils import *
import tensorflow as tf
from tensorflow.python.keras.backend import set_session
config = tf.compat.v1.ConfigProto()
config.gpu_options.allow_growth = True
config.log_device_placement = True
sess2 = tf.compat.v1.Session(config=config)
set_session(sess2)
from tensorflow.keras.layers import Dense, MaxPooling1D, Flatten
from tensorflow.keras import Input, Model
from tensorflow.keras.callbacks import ModelCheckpoint
import tensorflow.compat.v1.keras.backend as K
from tensorflow.keras.models import load_model
from tcn import TCN, tcn_full_summary
from sklearn.metrics import mean_squared_error
from mango.tuner import Tuner
from scipy.stats import uniform
from keras_flops import get_flops
import pickle
import csv
import random
import itertools
import quaternion
import math
from hardware_utils import *
import time
from scipy import signal
# ## Import Training, Validation and Test Set
sampling_rate = 40
window_size = 10
stride = 10
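# Before loading the real data, the windowing itself can be illustrated: with
# `window_size=10` and `stride=10`, a stream is cut into non-overlapping 10-sample
# windows. A minimal NumPy sketch (the real pipeline relies on the dataset import
# helpers, not on this function):

```python
import numpy as np

def sliding_windows(x, window_size, stride):
    # Cut a (time, channels) stream into (n_windows, window_size, channels)
    starts = range(0, len(x) - window_size + 1, stride)
    return np.stack([x[s:s + window_size] for s in starts])

stream = np.arange(100 * 3).reshape(100, 3)  # 100 samples, 3 IMU channels
windows = sliding_windows(stream, window_size=10, stride=10)
print(windows.shape)  # (10, 10, 3)
```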
f = '/home/nesl/TinyOdom/Animals/dataset/Gundog/' #dataset directory
#Training Set
X, Y_disp, Y_head, Y_pos, x0_list, y0_list, size_of_each, x_vel, y_vel, head_s, head_c, X_orig = import_gundog_dataset(type_flag = 1,
useStepCounter = True, dataset_folder = f, AugmentationCopies = 0,
sampling_rate = sampling_rate, window_size = window_size, stride = stride, verbose=False)
#Test Set
X_test, Y_disp_test, Y_head_test, Y_pos_test, x0_list_test, y0_list_test, size_of_each_test, x_vel_test, y_vel_test, head_s_test, head_c_test, X_orig_test = X, Y_disp, Y_head, Y_pos, x0_list, y0_list, size_of_each, x_vel, y_vel, head_s, head_c, X_orig =import_ronin_dataset(type_flag = 4,
useMagnetometer = True, useStepCounter = True, AugmentationCopies = 0,
dataset_folder = f, sampling_rate = sampling_rate, window_size = window_size, stride = stride,verbose=False)
# ## Training and NAS
device = "NUCLEO_F746ZG" #hardware name
model_name = 'TD_GunDog_'+device+'.hdf5'
dirpath="/home/nesl/Mbed Programs/tinyodom_tcn/" #hardware program directory
HIL = True #use real hardware or proxy?
quantization = False #use quantization or not?
model_epochs = 300 #epochs to train each model for
NAS_epochs = 30 #epochs for hyperparameter tuning
output_name = 'g_model.tflite'
log_file_name = 'log_NAS_GunDog_'+device+'.csv'
if os.path.exists(log_file_name):
os.remove(log_file_name)
row_write = ['score', 'rmse_vel_x','rmse_vel_y','RAM','Flash','Flops','Latency',
'nb_filters','kernel_size','dilations','dropout_rate','use_skip_connections','norm_flag']
with open(log_file_name, 'a', newline='') as csvfile:
csvwriter = csv.writer(csvfile)
csvwriter.writerow(row_write)
if os.path.exists(log_file_name[0:-4]+'.p'):
os.remove(log_file_name[0:-4]+'.p')
def objective_NN(epochs=500,nb_filters=32,kernel_size=7,dilations=[1, 2, 4, 8, 16, 32, 64, 128],dropout_rate=0,
use_skip_connections=False,norm_flag=0):
inval = 0
rmse_vel_x = 'inf'
rmse_vel_y = 'inf'
batch_size, timesteps, input_dim = 256, window_size, X.shape[2]
i = Input(shape=(timesteps, input_dim))
if(norm_flag==1):
m = TCN(nb_filters=nb_filters,kernel_size=kernel_size,dilations=dilations,dropout_rate=dropout_rate,
use_skip_connections=use_skip_connections,use_batch_norm=True)(i)
else:
m = TCN(nb_filters=nb_filters,kernel_size=kernel_size,dilations=dilations,dropout_rate=dropout_rate,
use_skip_connections=use_skip_connections)(i)
m = tf.reshape(m, [-1, nb_filters, 1])
m = MaxPooling1D(pool_size=(2))(m)
m = Flatten()(m)
m = Dense(32, activation='linear', name='pre')(m)
output1 = Dense(1, activation='linear', name='velx')(m)
output2 = Dense(1, activation='linear', name='vely')(m)
model = Model(inputs=[i], outputs=[output1, output2])
opt = tf.keras.optimizers.Adam()
model.compile(loss={'velx': 'mse','vely':'mse'},optimizer=opt)
Flops = get_flops(model, batch_size=1)
convert_to_tflite_model(model=model,training_data=X,quantization=quantization,output_name=output_name)
maxRAM, maxFlash = return_hardware_specs(device)
if(HIL==True):
convert_to_cpp_model(dirpath)
RAM, Flash, Latency, idealArenaSize, errorCode = HIL_controller(dirpath=dirpath,
chosen_device=device,
window_size=window_size,
number_of_channels = input_dim,
quantization=quantization)
score = -5.0
if(Flash==-1):
row_write = [score, rmse_vel_x,rmse_vel_y,RAM,Flash,Flops,Latency,
nb_filters,kernel_size,dilations,dropout_rate,use_skip_connections,norm_flag]
print('Design choice:',row_write)
with open(log_file_name, 'a', newline='') as csvfile:
csvwriter = csv.writer(csvfile)
csvwriter.writerow(row_write)
return score
elif(Flash!=-1):
checkpoint = ModelCheckpoint(model_name, monitor='loss', verbose=1, save_best_only=True)
model.fit(x=X, y=[x_vel, y_vel],epochs=epochs, shuffle=True,callbacks=[checkpoint],batch_size=batch_size)
model = load_model(model_name,custom_objects={'TCN': TCN})
model_acc = -(checkpoint.best)
resource_usage = (RAM/maxRAM) + (Flash/maxFlash)
score = model_acc + 0.01*resource_usage - 0.05*Latency #weigh each component as you like
row_write = [score, rmse_vel_x,rmse_vel_y,RAM,Flash,Flops,Latency,
nb_filters,kernel_size,dilations,dropout_rate,use_skip_connections,norm_flag]
print('Design choice:',row_write)
with open(log_file_name, 'a', newline='') as csvfile:
csvwriter = csv.writer(csvfile)
csvwriter.writerow(row_write)
else:
score = -5.0
Flash = os.path.getsize(output_name)
RAM = get_model_memory_usage(batch_size=1,model=model)
Latency=-1
max_flops = (30e6)
if(RAM < maxRAM and Flash<maxFlash):
checkpoint = ModelCheckpoint(model_name, monitor='loss', verbose=1, save_best_only=True)
model.fit(x=X, y=[x_vel, y_vel],epochs=epochs, shuffle=True,callbacks=[checkpoint],batch_size=batch_size)
model = load_model(model_name,custom_objects={'TCN': TCN})
model_acc = -(checkpoint.best)
resource_usage = (RAM/maxRAM) + (Flash/maxFlash)
score = model_acc + 0.01*resource_usage - 0.05*(Flops/max_flops) #weigh each component as you like
row_write = [score, rmse_vel_x,rmse_vel_y,RAM,Flash,Flops,Latency,
nb_filters,kernel_size,dilations,dropout_rate,use_skip_connections,norm_flag]
print('Design choice:',row_write)
with open(log_file_name, 'a', newline='') as csvfile:
csvwriter = csv.writer(csvfile)
csvwriter.writerow(row_write)
return score
# +
import pickle
def save_res(data, file_name):
pickle.dump( data, open( file_name, "wb" ) )
min_layer = 3
max_layer = 8
a_list = [1,2,4,8,16,32,64,128,256]
all_combinations = []
dil_list = []
for r in range(len(a_list) + 1):
combinations_object = itertools.combinations(a_list, r)
combinations_list = list(combinations_object)
all_combinations += combinations_list
all_combinations = all_combinations[1:]
for item in all_combinations:
if(len(item) >= min_layer and len(item) <= max_layer):
dil_list.append(list(item))
param_dict = {
'nb_filters': range(2,64),
'kernel_size': range(2,16),
'dropout_rate': np.arange(0.0,0.5,0.1),
'use_skip_connections': [True, False],
'norm_flag': np.arange(0,1),
'dil_list': dil_list
}
def objfunc(args_list):
objective_evaluated = []
start_time = time.time()
for hyper_par in args_list:
nb_filters = hyper_par['nb_filters']
kernel_size = hyper_par['kernel_size']
dropout_rate = hyper_par['dropout_rate']
use_skip_connections = hyper_par['use_skip_connections']
norm_flag=hyper_par['norm_flag']
dil_list = hyper_par['dil_list']
objective = objective_NN(epochs=model_epochs,nb_filters=nb_filters,kernel_size=kernel_size,
dilations=dil_list,
dropout_rate=dropout_rate,use_skip_connections=use_skip_connections,
norm_flag=norm_flag)
objective_evaluated.append(objective)
end_time = time.time()
print('objective:', objective, ' time:',end_time-start_time)
return objective_evaluated
conf_Dict = dict()
conf_Dict['batch_size'] = 1
conf_Dict['num_iteration'] = NAS_epochs
conf_Dict['initial_random']= 5
tuner = Tuner(param_dict, objfunc,conf_Dict)
all_runs = []
results = tuner.maximize()
all_runs.append(results)
save_res(all_runs,log_file_name[0:-4]+'.p')
# -
# ## Train the Best Model
# +
nb_filters = results['best_params']['nb_filters']
kernel_size = results['best_params']['kernel_size']
dilations = results['best_params']['dilations']
dropout_rate = results['best_params']['dropout_rate']
use_skip_connections = results['best_params']['use_skip_connections']
norm_flag = results['best_params']['norm_flag']
batch_size, timesteps, input_dim = 256, window_size, X.shape[2]
i = Input(shape=(timesteps, input_dim))
if(norm_flag==1):
m = TCN(nb_filters=nb_filters,kernel_size=kernel_size,dilations=dilations,dropout_rate=dropout_rate,
use_skip_connections=use_skip_connections,use_batch_norm=True)(i)
else:
m = TCN(nb_filters=nb_filters,kernel_size=kernel_size,dilations=dilations,dropout_rate=dropout_rate,
use_skip_connections=use_skip_connections)(i)
m = tf.reshape(m, [-1, nb_filters, 1])
m = MaxPooling1D(pool_size=(2))(m)
m = Flatten()(m)
m = Dense(32, activation='linear', name='pre')(m)
output1 = Dense(1, activation='linear', name='velx')(m)
output2 = Dense(1, activation='linear', name='vely')(m)
model = Model(inputs=[i], outputs=[output1, output2])
opt = tf.keras.optimizers.Adam()
model.compile(loss={'velx': 'mse','vely':'mse'},optimizer=opt)
checkpoint = ModelCheckpoint(model_name, monitor='loss', verbose=1, save_best_only=True)
model.fit(x=X, y=[x_vel, y_vel],epochs=model_epochs, shuffle=True,callbacks=[checkpoint],batch_size=batch_size)
# -
# ## Evaluate the Best Model
# #### Velocity Prediction RMSE
model = load_model(model_name,custom_objects={'TCN': TCN})
y_pred = model.predict(X_test)
rmse_vel_x = mean_squared_error(x_vel_test, y_pred[0], squared=False)
rmse_vel_y = mean_squared_error(y_vel_test, y_pred[1], squared=False)
print('Vel_X RMSE, Vel_Y RMSE:',rmse_vel_x,rmse_vel_y)
# #### ATE and RTE Metrics
# +
a = 0
b = size_of_each_test[0]
ATE = []
RTE = []
ATE_dist = []
RTE_dist = []
for i in range(len(size_of_each_test)):
X_test_sel = X_test[a:b,:,:]
x_vel_test_sel = x_vel_test[a:b]
y_vel_test_sel = y_vel_test[a:b]
Y_head_test_sel = Y_head_test[a:b]
Y_disp_test_sel = Y_disp_test[a:b]
if(i!=len(size_of_each_test)-1):
a += size_of_each_test[i]
b += size_of_each_test[i]
y_pred = model.predict(X_test_sel)
pointx = []
pointy = []
Lx = x0_list_test[i]
Ly = y0_list_test[i]
for j in range(len(x_vel_test_sel)):
Lx = Lx + x_vel_test_sel[j]
Ly = Ly + y_vel_test_sel[j]
pointx.append(Lx)
pointy.append(Ly)
Gvx = pointx
Gvy = pointy
pointx = []
pointy = []
Lx = x0_list_test[i]
Ly = y0_list_test[i]
for j in range(len(x_vel_test_sel)):
Lx = Lx + y_pred[0][j]
Ly = Ly + y_pred[1][j]
pointx.append(Lx)
pointy.append(Ly)
Pvx = pointx
Pvy = pointy
at, rt, at_all, rt_all = Cal_TE(Gvx, Gvy, Pvx, Pvy,
sampling_rate=sampling_rate,window_size=window_size,stride=stride)
ATE.append(at)
RTE.append(rt)
ATE_dist.append(Cal_len_meters(Gvx, Gvy))
RTE_dist.append(Cal_len_meters(Gvx, Gvy, 600))
print('ATE, RTE, Trajectory Length, Trajectory Length (60 seconds)',ATE[i],RTE[i],ATE_dist[i],RTE_dist[i])
print('Median ATE and RTE', np.median(ATE),np.median(RTE))
# -
# #### Sample Trajectory Plotting
gundog_algo = pd.read_csv('gundog_res.csv',header=None).to_numpy()
x = signal.resample(gundog_algo[:,0],143648)
y = signal.resample(gundog_algo[:,1],143648)
print('Plotting Trajectory of length (meters): ',Cal_len_meters(Gvx, Gvy))
plt.plot(Gvx,Gvy,label='Ground Truth',color='salmon')
plt.plot(x[3000:],y[3000:],label='GunDog',color='blue')
plt.plot(Pvx,Pvy,label='TinyOdom',color='green')
plt.legend(loc='best')
plt.xlabel('East (m)')
plt.ylabel('North (m)')
plt.title('Animal Tracking - GunDog Dataset')
plt.grid()
plt.show()
# #### Error Evolution
resapGvx = Gvx[22000:]
resapGvy = Gvy[22000:]
resapPvx = signal.resample(x[3000:],21648)
resapPvy = signal.resample(y[3000:],21648)
at, rt, all_te_2, all_rte = Cal_TE(resapGvx, resapGvy, resapPvx, resapPvy,
sampling_rate=sampling_rate,window_size=window_size,stride=stride)
x_ax = np.linspace(0,21635/4,21635)
plt.plot(x_ax[13:20000],all_te_2[13:20000],label='GunDog',color='blue',linestyle='-')
plt.plot(x_ax[10:20000],signal.resample(at_all,21645)[10:20000],label='TinyOdom',color='green',linestyle='-')
plt.legend()
plt.xlabel('Time (seconds)')
plt.ylabel('Position Error (m)')
plt.title('Animal Tracking - GunDog Dataset')
plt.grid()
# ## Deployment
# #### Conversion to TFLite
convert_to_tflite_model(model=model,training_data=X_tr,quantization=quantization,output_name='g_model.tflite')
# #### Conversion to C++
convert_to_cpp_model(dirpath)
Gundog/TinyOdom_GunDog.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [conda env:python2]
# language: python
# name: conda-env-python2-py
# ---
# # CHAPTER 7 Semantic and Sentiment Analysis
#
# ## Semantic Analysis
#
# ## Exploring WordNet
#
# ### Understanding Synsets
#
# +
from nltk.corpus import wordnet as wn
import pandas as pd
term = 'fruit'
synsets = wn.synsets(term)
print 'Total Synsets:', len(synsets)
# +
# synsets for fruit
for synset in synsets:
print 'Synset:', synset
print 'Part of speech:', synset.lexname()
print 'Definition:', synset.definition()
print 'Lemmas:', synset.lemma_names()
print 'Examples:', synset.examples()
print
# -
# ### Analyzing Lexical Semantic Relations
#
# entailments
for action in ['walk', 'eat', 'digest']:
action_syn = wn.synsets(action, pos='v')[0]
print action_syn, '-- entails -->', action_syn.entailments()
# homonyms\homographs
for synset in wn.synsets('bank'):
print synset.name(),'-',synset.definition()
# +
# synonyms and antonyms
term = 'large'
synsets = wn.synsets(term)
adj_large = synsets[1]
adj_large = adj_large.lemmas()[0]
adj_large_synonym = adj_large.synset()
adj_large_antonym = adj_large.antonyms()[0].synset()
print 'Synonym:', adj_large_synonym.name()
print 'Definition:', adj_large_synonym.definition()
print 'Antonym:', adj_large_antonym.name()
print 'Definition:', adj_large_antonym.definition()
print
# +
term = 'rich'
synsets = wn.synsets(term)[:3]
for synset in synsets:
rich = synset.lemmas()[0]
rich_synonym = rich.synset()
rich_antonym = rich.antonyms()[0].synset()
print 'Synonym:', rich_synonym.name()
print 'Definition:', rich_synonym.definition()
print 'Antonym:', rich_antonym.name()
print 'Definition:', rich_antonym.definition()
print
# +
# hyponyms and hypernyms
term = 'tree'
synsets = wn.synsets(term)
tree = synsets[0]
print 'Name:', tree.name()
print 'Definition:', tree.definition()
# -
hyponyms = tree.hyponyms()
print 'Total Hyponyms:', len(hyponyms)
print 'Sample Hyponyms'
for hyponym in hyponyms[:10]:
print hyponym.name(), '-', hyponym.definition()
print
hypernyms = tree.hypernyms()
print hypernyms
# +
hypernym_paths = tree.hypernym_paths()
print 'Total Hypernym paths:', len(hypernym_paths)
print 'Hypernym Hierarchy'
print ' -> '.join(synset.name() for synset in hypernym_paths[0])
# +
# holonyms and meronyms
# member holonyms
member_holonyms = tree.member_holonyms()
print 'Total Member Holonyms:', len(member_holonyms)
print 'Member Holonyms for [tree]:-'
for holonym in member_holonyms:
print holonym.name(), '-', holonym.definition()
print
# -
# part meronyms
part_meronyms = tree.part_meronyms()
print 'Total Part Meronyms:', len(part_meronyms)
print 'Part Meronyms for [tree]:-'
for meronym in part_meronyms:
print meronym.name(), '-', meronym.definition()
print
# substance meronyms
substance_meronyms = tree.substance_meronyms()
print 'Total Substance Meronyms:', len(substance_meronyms)
print 'Substance Meronyms for [tree]:-'
for meronym in substance_meronyms:
print meronym.name(), '-', meronym.definition()
print
# ### Semantic Relationships and Similarity
#
# +
# semantic relationships and similarities
tree = wn.synset('tree.n.01')
lion = wn.synset('lion.n.01')
tiger = wn.synset('tiger.n.02')
cat = wn.synset('cat.n.01')
dog = wn.synset('dog.n.01')
entities = [tree, lion, tiger, cat, dog]
entity_names = [entity.name().split('.')[0] for entity in entities]
entity_definitions = [entity.definition() for entity in entities]
for entity, definition in zip(entity_names, entity_definitions):
print entity, '-', definition
print
# +
common_hypernyms = []
for entity in entities:
# get pairwise lowest common hypernyms
common_hypernyms.append([entity.lowest_common_hypernyms(compared_entity)[0]
.name().split('.')[0]
for compared_entity in entities])
# build pairwise lower common hypernym matrix
common_hypernym_frame = pd.DataFrame(common_hypernyms,
index=entity_names,
columns=entity_names)
print common_hypernym_frame
# +
similarities = []
for entity in entities:
# get pairwise similarities
similarities.append([round(entity.path_similarity(compared_entity), 2)
for compared_entity in entities])
# build pairwise similarity matrix
similarity_frame = pd.DataFrame(similarities,
index=entity_names,
columns=entity_names)
print similarity_frame
# -
# ## Word Sense Disambiguation
#
# +
from nltk.wsd import lesk
from nltk import word_tokenize
samples = [('The fruits on that plant have ripened', 'n'),
('He finally reaped the fruit of his hard work as he won the race', 'n')]
word = 'fruit'
for sentence, pos_tag in samples:
word_syn = lesk(word_tokenize(sentence.lower()), word, pos_tag)
print 'Sentence:', sentence
print 'Word synset:', word_syn
    print 'Corresponding definition:', word_syn.definition()
print
# +
samples = [('Lead is a very soft, malleable metal', 'n'),
('John is the actor who plays the lead in that movie', 'n'),
('This road leads to nowhere', 'v')]
word = 'lead'
for sentence, pos_tag in samples:
word_syn = lesk(word_tokenize(sentence.lower()), word, pos_tag)
print 'Sentence:', sentence
print 'Word synset:', word_syn
    print 'Corresponding definition:', word_syn.definition()
print
# -
# ## Named Entity Recognition
#
text = """
Bayern Munich, or FC Bayern, is a German sports club based in Munich,
Bavaria, Germany. It is best known for its professional football team,
which plays in the Bundesliga, the top tier of the German football
league system, and is the most successful club in German football
history, having won a record 26 national titles and 18 national cups.
FC Bayern was founded in 1900 by eleven football players led by <NAME>.
Although Bayern won its first national championship in 1932, the club
was not selected for the Bundesliga at its inception in 1963. The club
had its period of greatest success in the middle of the 1970s when,
under the captaincy of <NAME>, it won the European Cup three
times in a row (1974-76). Overall, Bayern has reached ten UEFA Champions
League finals, most recently winning their fifth title in 2013 as part
of a continental treble.
"""
import nltk
from normalization import parse_document
import pandas as pd
sentences = parse_document(text)
tokenized_sentences = [nltk.word_tokenize(sentence) for sentence in sentences]
# +
# nltk NER
tagged_sentences = [nltk.pos_tag(sentence) for sentence in tokenized_sentences]
ne_chunked_sents = [nltk.ne_chunk(tagged) for tagged in tagged_sentences]
named_entities = []
for ne_tagged_sentence in ne_chunked_sents:
for tagged_tree in ne_tagged_sentence:
if hasattr(tagged_tree, 'label'):
entity_name = ' '.join(c[0] for c in tagged_tree.leaves())
entity_type = tagged_tree.label()
named_entities.append((entity_name, entity_type))
named_entities = list(set(named_entities))
entity_frame = pd.DataFrame(named_entities,
columns=['Entity Name', 'Entity Type'])
print entity_frame
# -
# set java path
import os
java_path = r'/usr/bin/java'
os.environ['JAVAHOME'] = java_path
from nltk.tag import StanfordNERTagger
sn = StanfordNERTagger('/pub/nltk-parser/stanford-ner/classifiers/english.all.3class.distsim.crf.ser.gz',
path_to_jar='/pub/nltk-parser/stanford-ner/stanford-ner.jar')
ne_annotated_sentences = [sn.tag(sent) for sent in tokenized_sentences]
# +
named_entities = []
for sentence in ne_annotated_sentences:
temp_entity_name = ''
temp_named_entity = None
for term, tag in sentence:
if tag != 'O':
temp_entity_name = ' '.join([temp_entity_name, term]).strip()
temp_named_entity = (temp_entity_name, tag)
else:
if temp_named_entity:
named_entities.append(temp_named_entity)
temp_entity_name = ''
temp_named_entity = None
named_entities = list(set(named_entities))
entity_frame = pd.DataFrame(named_entities,
columns=['Entity Name', 'Entity Type'])
print entity_frame
# -
# ## Analyzing Semantic Representations
#
# +
import nltk
import pandas as pd
import os
# assign symbols and propositions
symbol_P = 'P'
symbol_Q = 'Q'
proposition_P = 'He is hungry'
propositon_Q = 'He will eat a sandwich'
# assign various truth values to the propositions
p_statuses = [False, False, True, True]
q_statuses = [False, True, False, True]
# assign the various expressions combining the logical operators
conjunction = '(P & Q)'
disjunction = '(P | Q)'
implication = '(P -> Q)'
equivalence = '(P <-> Q)'
expressions = [conjunction, disjunction, implication, equivalence]
results = []
for status_p, status_q in zip(p_statuses, q_statuses):
dom = set([])
val = nltk.Valuation([(symbol_P, status_p),
(symbol_Q, status_q)])
assignments = nltk.Assignment(dom)
model = nltk.Model(dom, val)
row = [status_p, status_q]
for expression in expressions:
result = model.evaluate(expression, assignments)
row.append(result)
results.append(row)
columns = [symbol_P, symbol_Q, conjunction,
disjunction, implication, equivalence]
result_frame = pd.DataFrame(results, columns=columns)
print 'P:', proposition_P
print 'Q:', propositon_Q
print
print 'Expression Outcomes:-'
print result_frame
# +
# first order logic
read_expr = nltk.sem.Expression.fromstring
os.environ['PROVER9'] = r'/usr/local/bin/prover9'
prover = nltk.Prover9()
#prover = nltk.ResolutionProver()
# +
# set the rule expression
rule = read_expr('all x. all y. (jumps_over(x, y) -> -jumps_over(y, x))')
# set the event occured
event = read_expr('jumps_over(fox, dog)')
# set the outcome we want to evaluate -- the goal
test_outcome = read_expr('jumps_over(dog, fox)')
# get the result
prover.prove(goal=test_outcome,
assumptions=[event, rule],
verbose=True)
# +
# set the rule expression
rule = read_expr('all x. (studies(x, exam) -> pass(x, exam))')
# set the events and outcomes we want to determine
event1 = read_expr('-studies(John, exam)')
test_outcome1 = read_expr('pass(John, exam)')
event2 = read_expr('studies(Pierre, exam)')
test_outcome2 = read_expr('pass(Pierre, exam)')
prover.prove(goal=test_outcome1,
assumptions=[event1, rule],
verbose=True)
# -
prover.prove(goal=test_outcome2,
assumptions=[event2, rule],
verbose=True)
# +
# define symbols (entities\functions) and their values
rules = """
rover => r
felix => f
garfield => g
alex => a
dog => {r, a}
cat => {g}
fox => {f}
runs => {a, f}
sleeps => {r, g}
jumps_over => {(f, g), (a, g), (f, r), (a, r)}
"""
val = nltk.Valuation.fromstring(rules)
print val
# +
dom = {'r', 'f', 'g', 'a'}
m = nltk.Model(dom, val)
print m.evaluate('jumps_over(felix, rover) & dog(rover) & runs(rover)', None)
print m.evaluate('jumps_over(felix, rover) & dog(rover) & -runs(rover)', None)
print m.evaluate('jumps_over(alex, garfield) & dog(alex) & cat(garfield) & sleeps(garfield)', None)
# +
g = nltk.Assignment(dom, [('x', 'r'), ('y', 'f')])
print m.evaluate('runs(y) & jumps_over(y, x) & sleeps(x)', g)
print m.evaluate('exists y. (fox(y) & runs(y))', g)
# -
formula = read_expr('runs(x)')
print m.satisfiers(formula, 'x', g)
formula = read_expr('runs(x) & fox(x)')
print m.satisfiers(formula, 'x', g)
# ## Sentiment Analysis
#
# ### Feature Extraction
#
# +
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer
def build_feature_matrix(documents, feature_type='frequency',
ngram_range=(1, 1), min_df=0.0, max_df=1.0):
feature_type = feature_type.lower().strip()
if feature_type == 'binary':
vectorizer = CountVectorizer(binary=True, min_df=min_df,
max_df=max_df, ngram_range=ngram_range)
elif feature_type == 'frequency':
vectorizer = CountVectorizer(binary=False, min_df=min_df,
max_df=max_df, ngram_range=ngram_range)
elif feature_type == 'tfidf':
vectorizer = TfidfVectorizer(min_df=min_df, max_df=max_df,
ngram_range=ngram_range)
else:
raise Exception("Wrong feature type entered. Possible values: 'binary', 'frequency', 'tfidf'")
feature_matrix = vectorizer.fit_transform(documents).astype(float)
return vectorizer, feature_matrix
# -
# ### Model Performance Evaluation
#
# +
from sklearn import metrics
import numpy as np
import pandas as pd
def display_evaluation_metrics(true_labels, predicted_labels, positive_class=1):
print 'Accuracy:', np.round(metrics.accuracy_score(true_labels, predicted_labels), 2)
print 'Precision:', np.round(
metrics.precision_score(
true_labels, predicted_labels, pos_label=positive_class, average='binary'), 2)
print 'Recall:', np.round(
metrics.recall_score(
true_labels, predicted_labels, pos_label=positive_class, average='binary'), 2)
print 'F1 Score:', np.round(
metrics.f1_score(
true_labels, predicted_labels, pos_label=positive_class, average='binary'), 2)
def display_confusion_matrix(true_labels, predicted_labels, classes=[1,0]):
cm = metrics.confusion_matrix(y_true=true_labels,
y_pred=predicted_labels,
labels=classes)
cm_frame = pd.DataFrame(data=cm,
columns=pd.MultiIndex(levels=[['Predicted:'], classes],
labels=[[0,0],[0,1]]),
index=pd.MultiIndex(levels=[['Actual:'], classes],
labels=[[0,0],[0,1]]))
print cm_frame
def display_classification_report(true_labels, predicted_labels, classes=[1,0]):
report = metrics.classification_report(y_true=true_labels,
y_pred=predicted_labels,
labels=classes)
print report
# -
# ### Extract Review Data
import pandas as pd
import numpy as np
import os
# +
labels = {'pos': 'positive', 'neg': 'negative'}
dataset = pd.DataFrame()
for directory in ('test', 'train'):
for sentiment in ('pos', 'neg'):
path =r'/pub/data/aclImdb/{}/{}'.format(directory, sentiment)
for review_file in os.listdir(path):
with open(os.path.join(path, review_file), 'r') as input_file:
review = input_file.read()
dataset = dataset.append([[review, labels[sentiment]]],
ignore_index=True)
# -
dataset.columns = ['review', 'sentiment']
indices = dataset.index.tolist()
np.random.shuffle(indices)
indices = np.array(indices)
# +
dataset = dataset.reindex(index=indices)
dataset.to_csv('movie_reviews.csv', index=False)
# -
# ### Preparing Datasets
#
# +
import pandas as pd
import numpy as np
from normalization import normalize_corpus
#from utils import build_feature_matrix
dataset = pd.read_csv(r'movie_reviews.csv')
print dataset.head()
# +
train_data = dataset[:35000]
test_data = dataset[35000:]
train_reviews = np.array(train_data['review'])
train_sentiments = np.array(train_data['sentiment'])
test_reviews = np.array(test_data['review'])
test_sentiments = np.array(test_data['sentiment'])
sample_docs = [100, 5817, 7626, 7356, 1008, 7155, 3533, 13010]
sample_data = [(test_reviews[index],
test_sentiments[index])
for index in sample_docs]
sample_data
# -
# ## Supervised Machine Learning Technique
#
# normalization
norm_train_reviews = normalize_corpus(train_reviews,
lemmatize=True,
only_text_chars=True)
# feature extraction
vectorizer, train_features = build_feature_matrix(documents=norm_train_reviews,
feature_type='tfidf',
ngram_range=(1, 1),
min_df=0.0, max_df=1.0)
from sklearn.linear_model import SGDClassifier
# build the model
svm = SGDClassifier(loss='hinge', n_iter=500)
svm.fit(train_features, train_sentiments)
# normalize reviews
norm_test_reviews = normalize_corpus(test_reviews,
lemmatize=True,
only_text_chars=True)
# +
# extract features
test_features = vectorizer.transform(norm_test_reviews)
for doc_index in sample_docs:
print 'Review:-'
print test_reviews[doc_index]
print 'Actual Labeled Sentiment:', test_sentiments[doc_index]
doc_features = test_features[doc_index]
predicted_sentiment = svm.predict(doc_features)[0]
print 'Predicted Sentiment:', predicted_sentiment
print
# +
predicted_sentiments = svm.predict(test_features)
#from utils import display_evaluation_metrics, display_confusion_matrix, display_classification_report
display_evaluation_metrics(true_labels=test_sentiments,
predicted_labels=predicted_sentiments,
positive_class='positive')
# -
display_confusion_matrix(true_labels=test_sentiments,
predicted_labels=predicted_sentiments,
classes=['positive', 'negative'])
display_classification_report(true_labels=test_sentiments,
predicted_labels=predicted_sentiments,
classes=['positive', 'negative'])
# ## Unsupervised Lexicon-based Techniques
#
# ### AFINN Lexicon
#
# pip install afinn
# +
from afinn import Afinn
afn = Afinn(emoticons=True)
print afn.score('I really hated the plot of this movie')
print afn.score('I really hated the plot of this movie :(')
# -
# ### SentiWordNet
#
# +
import nltk
from nltk.corpus import sentiwordnet as swn
good = swn.senti_synsets('good', 'n')[0]
print 'Positive Polarity Score:', good.pos_score()
print 'Negative Polarity Score:', good.neg_score()
print 'Objective Score:', good.obj_score()
# +
from normalization import normalize_accented_characters, html_parser, strip_html
def analyze_sentiment_sentiwordnet_lexicon(review,
verbose=False):
# pre-process text
review = normalize_accented_characters(review)
review = html_parser.unescape(review)
review = strip_html(review)
# tokenize and POS tag text tokens
text_tokens = nltk.word_tokenize(review)
tagged_text = nltk.pos_tag(text_tokens)
pos_score = neg_score = token_count = obj_score = 0
# get wordnet synsets based on POS tags
# get sentiment scores if synsets are found
for word, tag in tagged_text:
ss_set = None
if 'NN' in tag and swn.senti_synsets(word, 'n'):
ss_set = swn.senti_synsets(word, 'n')[0]
elif 'VB' in tag and swn.senti_synsets(word, 'v'):
ss_set = swn.senti_synsets(word, 'v')[0]
elif 'JJ' in tag and swn.senti_synsets(word, 'a'):
ss_set = swn.senti_synsets(word, 'a')[0]
elif 'RB' in tag and swn.senti_synsets(word, 'r'):
ss_set = swn.senti_synsets(word, 'r')[0]
# if senti-synset is found
if ss_set:
# add scores for all found synsets
pos_score += ss_set.pos_score()
neg_score += ss_set.neg_score()
obj_score += ss_set.obj_score()
token_count += 1
# aggregate final scores
final_score = pos_score - neg_score
norm_final_score = round(float(final_score) / token_count, 2)
final_sentiment = 'positive' if norm_final_score >= 0 else 'negative'
if verbose:
norm_obj_score = round(float(obj_score) / token_count, 2)
norm_pos_score = round(float(pos_score) / token_count, 2)
norm_neg_score = round(float(neg_score) / token_count, 2)
# to display results in a nice table
sentiment_frame = pd.DataFrame([[final_sentiment, norm_obj_score,
norm_pos_score, norm_neg_score,
norm_final_score]],
columns=pd.MultiIndex(levels=[['SENTIMENT STATS:'],
['Predicted Sentiment', 'Objectivity',
'Positive', 'Negative', 'Overall']],
labels=[[0,0,0,0,0],[0,1,2,3,4]]))
print sentiment_frame
return final_sentiment
# -
for review, review_sentiment in sample_data:
print 'Review:'
print review
print
print 'Labeled Sentiment:', review_sentiment
print
final_sentiment = analyze_sentiment_sentiwordnet_lexicon(review, verbose=True)
print '-'*60
# +
sentiwordnet_predictions = [analyze_sentiment_sentiwordnet_lexicon(review)
for review in test_reviews]
#from utils import display_evaluation_metrics, display_confusion_matrix, display_classification_report
print 'Performance metrics:'
display_evaluation_metrics(true_labels=test_sentiments,
predicted_labels=sentiwordnet_predictions,
positive_class='positive')
# -
print '\nConfusion Matrix:'
display_confusion_matrix(true_labels=test_sentiments,
predicted_labels=sentiwordnet_predictions,
classes=['positive', 'negative'])
print '\nClassification report:'
display_classification_report(true_labels=test_sentiments,
predicted_labels=sentiwordnet_predictions,
classes=['positive', 'negative'])
# ### VADER Lexicon
#
# +
from nltk.sentiment.vader import SentimentIntensityAnalyzer
def analyze_sentiment_vader_lexicon(review,
threshold=0.1,
verbose=False):
# pre-process text
review = normalize_accented_characters(review)
review = html_parser.unescape(review)
review = strip_html(review)
# analyze the sentiment for review
analyzer = SentimentIntensityAnalyzer()
scores = analyzer.polarity_scores(review)
# get aggregate scores and final sentiment
agg_score = scores['compound']
final_sentiment = 'positive' if agg_score >= threshold\
else 'negative'
if verbose:
# display detailed sentiment statistics
positive = str(round(scores['pos'], 2)*100)+'%'
final = round(agg_score, 2)
negative = str(round(scores['neg'], 2)*100)+'%'
neutral = str(round(scores['neu'], 2)*100)+'%'
sentiment_frame = pd.DataFrame([[final_sentiment, final, positive,
negative, neutral]],
columns=pd.MultiIndex(levels=[['SENTIMENT STATS:'],
['Predicted Sentiment', 'Polarity Score',
'Positive', 'Negative',
'Neutral']],
labels=[[0,0,0,0,0],[0,1,2,3,4]]))
print sentiment_frame
return final_sentiment
# -
for review, review_sentiment in sample_data:
print 'Review:'
print review
print
print 'Labeled Sentiment:', review_sentiment
print
final_sentiment = analyze_sentiment_vader_lexicon(review, threshold=0.1, verbose=True)
print '-'*60
# +
vader_predictions = [analyze_sentiment_vader_lexicon(review, threshold=0.1)
for review in test_reviews]
print 'Performance metrics:'
display_evaluation_metrics(true_labels=test_sentiments,
predicted_labels=vader_predictions,
positive_class='positive')
# -
print '\nConfusion Matrix:'
display_confusion_matrix(true_labels=test_sentiments,
predicted_labels=vader_predictions,
classes=['positive', 'negative'])
print '\nClassification report:'
display_classification_report(true_labels=test_sentiments,
predicted_labels=vader_predictions,
classes=['positive', 'negative'])
# ### Pattern Lexicon
#
# +
from pattern.en import sentiment, mood, modality
def analyze_sentiment_pattern_lexicon(review, threshold=0.1,
verbose=False):
# pre-process text
review = normalize_accented_characters(review)
review = html_parser.unescape(review)
review = strip_html(review)
# analyze sentiment for the text document
analysis = sentiment(review)
sentiment_score = round(analysis[0], 2)
sentiment_subjectivity = round(analysis[1], 2)
# get final sentiment
final_sentiment = 'positive' if sentiment_score >= threshold\
else 'negative'
if verbose:
# display detailed sentiment statistics
sentiment_frame = pd.DataFrame([[final_sentiment, sentiment_score,
sentiment_subjectivity]],
columns=pd.MultiIndex(levels=[['SENTIMENT STATS:'],
['Predicted Sentiment', 'Polarity Score',
'Subjectivity Score']],
labels=[[0,0,0],[0,1,2]]))
print sentiment_frame
assessment = analysis.assessments
assessment_frame = pd.DataFrame(assessment,
columns=pd.MultiIndex(levels=[['DETAILED ASSESSMENT STATS:'],
['Key Terms', 'Polarity Score',
'Subjectivity Score', 'Type']],
labels=[[0,0,0,0],[0,1,2,3]]))
print assessment_frame
print
return final_sentiment
# -
for review, review_sentiment in sample_data:
print 'Review:'
print review
print
print 'Labeled Sentiment:', review_sentiment
print
final_sentiment = analyze_sentiment_pattern_lexicon(review,
threshold=0.1,
verbose=True)
print '-'*60
for review, review_sentiment in sample_data:
print 'Review:'
print review
print 'Labeled Sentiment:', review_sentiment
print 'Mood:', mood(review)
mod_score = modality(review)
print 'Modality Score:', round(mod_score, 2)
print 'Certainty:', 'Strong' if mod_score > 0.5 \
else 'Medium' if mod_score > 0.35 \
else 'Low'
print '-'*60
Chapter-7.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3.6 (tensorflow-2.0)
# language: python
# name: tensorflow-2.0
# ---
# # T81-558: Applications of Deep Neural Networks
# **Module 4: Training for Tabular Data**
# * Instructor: [<NAME>](https://sites.wustl.edu/jeffheaton/), McKelvey School of Engineering, [Washington University in St. Louis](https://engineering.wustl.edu/Programs/Pages/default.aspx)
# * For more information visit the [class website](https://sites.wustl.edu/jeffheaton/t81-558/).
# # Module 4 Material
#
# * Part 4.1: Encoding a Feature Vector for Keras Deep Learning [[Video]](https://www.youtube.com/watch?v=Vxz-gfs9nMQ&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_04_1_feature_encode.ipynb)
# * Part 4.2: Keras Multiclass Classification for Deep Neural Networks with ROC and AUC [[Video]](https://www.youtube.com/watch?v=-f3bg9dLMks&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_04_2_multi_class.ipynb)
# * Part 4.3: Keras Regression for Deep Neural Networks with RMSE [[Video]](https://www.youtube.com/watch?v=wNhBUC6X5-E&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_04_3_regression.ipynb)
# * **Part 4.4: Backpropagation, Nesterov Momentum, and ADAM Neural Network Training** [[Video]](https://www.youtube.com/watch?v=VbDg8aBgpck&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_04_4_backprop.ipynb)
# * Part 4.5: Neural Network RMSE and Log Loss Error Calculation from Scratch [[Video]](https://www.youtube.com/watch?v=wmQX1t2PHJc&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_04_5_rmse_logloss.ipynb)
# # Part 4.4: Training Neural Networks
# # Classic Backpropagation
# Backpropagation is the primary means by which a neural network's weights are determined during training. Backpropagation works by calculating a weight change amount ($v_t$) for every weight($\theta$, theata) in the neural network. This value is subtracted from every weight by the following equation:
#
# $ \theta_t = \theta_{t-1} - v_t $
#
# This process is repeated for every iteration ($t$). How the weight change is calculated depends on the training algorithm. Classic backpropagation simply calculates the gradient ($\nabla$, nabla) of the neural network's error function ($J$) with respect to every weight. The gradient is scaled by a learning rate ($\eta$, eta).
#
# $ v_t = \eta \nabla_{\theta_{t-1}} J(\theta_{t-1}) $
#
# The learning rate is an important concept for backpropagation training. Setting the learning rate can be complex:
#
# * Too low of a learning rate will usually converge to a good solution; however, the process will be very slow.
# * Too high of a learning rate will either fail outright or converge to a higher error than a lower learning rate would.
#
# Common values for learning rate are: 0.1, 0.01, 0.001, etc.
#
# Gradients:
#
# 
#
# The following link, from the book, shows how a simple [neural network is trained with backpropagation](http://www.heatonresearch.com/aifh/vol3/).
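# The update rule above can be sketched in a few lines of plain Python. The quadratic error function $J(\theta) = (\theta - 3)^2$ below is a made-up stand-in (not part of the course code), chosen so the correct answer, $\theta = 3$, is known in advance:

```python
# Classic gradient-descent update for a single weight (illustrative only).
# J(theta) = (theta - 3)^2, so grad_J(theta) = 2 * (theta - 3) and the
# minimum is at theta = 3.

def grad_J(theta):
    return 2.0 * (theta - 3.0)

def train_classic(theta, eta, iterations):
    for _ in range(iterations):
        v = eta * grad_J(theta)  # v_t = eta * gradient
        theta = theta - v        # theta_t = theta_{t-1} - v_t
    return theta

print(round(train_classic(theta=0.0, eta=0.1, iterations=100), 4))  # converges near 3.0
```

# With a much smaller learning rate (e.g. $\eta = 0.001$) the same loop still heads toward 3, but far more slowly, which is exactly the trade-off described above.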
# ### Momentum Backpropagation
#
# Momentum adds another term to the calculation of $v_t$:
#
# $ v_t = \eta \nabla_{\theta_{t-1}} J(\theta_{t-1}) + \lambda v_{t-1} $
#
# Like the learning rate, momentum adds another training parameter that scales the effect of momentum. Momentum backpropagation has two training parameters: learning rate ($\eta$, eta) and momentum ($\lambda$, lambda). Momentum simply adds the scaled value of the previous weight change amount ($v_{t-1}$) to the current weight change amount($v_t$).
#
# This has the effect of adding additional force behind the direction a weight was moving. This might allow the weight to escape a local minimum:
#
# 
#
# A very common value for momentum is 0.9.
#
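# Extending the same one-weight sketch with the momentum term $\lambda v_{t-1}$ (again on the made-up quadratic $J(\theta) = (\theta - 3)^2$, not course code):

```python
# Momentum backpropagation for a single weight (illustrative only).
# v_t = eta * gradient + lambda * v_{t-1}

def grad_J(theta):
    return 2.0 * (theta - 3.0)

def train_momentum(theta, eta, lam, iterations):
    v = 0.0
    for _ in range(iterations):
        v = eta * grad_J(theta) + lam * v  # add the scaled previous update
        theta = theta - v
    return theta

theta = train_momentum(theta=0.0, eta=0.1, lam=0.9, iterations=300)
```

# With $\lambda = 0.9$ the weight overshoots and oscillates around the minimum before settling; that extra "force" is what lets momentum carry a weight through a local minimum.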
# ### Batch and Online Backpropagation
#
# How often should the weights of a neural network be updated? Gradients can be calculated for a training set element. These gradients can also be summed together into batches and the weights updated once per batch.
#
# * **Online Training** - Update the weights based on gradients calculated from a single training set element.
# * **Batch Training** - Update the weights based on the sum of the gradients over all training set elements.
# * **Batch Size** - Update the weights based on the sum of the gradients over some fixed number (the batch size) of training set elements.
# * **Mini-Batch Training** - The same as batch size, but with a very small batch size. Mini-batches are very popular and they are often in the 32-64 element range.
#
# Because the batch size is smaller than the complete training set size, it may take several batches to make it completely through the training set.
#
# * **Step/Iteration** - The number of batches that were processed.
# * **Epoch** - The number of times the complete training set was processed.
#
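# These definitions reduce to simple arithmetic. For a hypothetical training set of 50,000 elements and a batch size of 64 (the numbers here are made up for illustration):

```python
import math

training_set_size = 50000  # hypothetical numbers, for illustration only
batch_size = 64

# One epoch is one complete pass over the training set; the final,
# partial batch still counts as a step.
steps_per_epoch = math.ceil(training_set_size / batch_size)
print(steps_per_epoch)  # 782 steps (batches) per epoch
```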
# # Stochastic Gradient Descent
#
# Stochastic gradient descent (SGD) is currently one of the most popular neural network training algorithms. It works very similarly to Batch/Mini-Batch training, except that the batches are made up of a random set of training elements.
#
# This leads to a very irregular convergence in error during training:
#
# 
# [Image from Wikipedia](https://en.wikipedia.org/wiki/Stochastic_gradient_descent)
#
# Because the neural network is trained on a random sample of the complete training set each time, the error does not make a smooth transition downward. However, the error usually does go down.
#
# Advantages to SGD include:
#
# * Computationally efficient. Even with a very large training set, each training step can be relatively fast.
# * Decreases overfitting by focusing on only a portion of the training set each step.
#
# ### Other Techniques
#
# One problem with simple backpropagation training algorithms is that they are highly sensitive to learning rate and momentum. This is difficult because:
#
# * Learning rate must be adjusted to a small enough level to train an accurate neural network.
# * Momentum must be large enough to overcome local minima, yet small enough to not destabilize the training.
# * A single learning rate/momentum is often not good enough for the entire training process. It is often useful to automatically decrease learning rate as the training progresses.
# * All weights share a single learning rate/momentum.
#
# Other training techniques:
#
# * **Resilient Propagation** - Use only the magnitude of the gradient and allow each neuron to learn at its own rate. No need for learning rate/momentum; however, only works in full batch mode.
# * **Nesterov accelerated gradient** - Helps mitigate the risk of choosing a bad mini-batch.
# * **Adagrad** - Allows an automatically decaying per-weight learning rate and momentum concept.
# * **Adadelta** - Extension of Adagrad that seeks to reduce its aggressive, monotonically decreasing learning rate.
# * **Non-Gradient Methods** - Non-gradient methods can *sometimes* be useful, though rarely outperform gradient-based backpropagation methods. These include: [simulated annealing](https://en.wikipedia.org/wiki/Simulated_annealing), [genetic algorithms](https://en.wikipedia.org/wiki/Genetic_algorithm), [particle swarm optimization](https://en.wikipedia.org/wiki/Particle_swarm_optimization), [Nelder Mead](https://en.wikipedia.org/wiki/Nelder%E2%80%93Mead_method), and [many more](https://en.wikipedia.org/wiki/Category:Optimization_algorithms_and_methods).
#
# ### ADAM Update
#
# ADAM is the first training algorithm you should try. It is very effective. Kingma and Ba (2014) introduced the Adam update rule that derives its name from the adaptive moment estimates that it uses. Adam estimates the first (mean) and second (variance) moments to determine the weight corrections. Adam begins with an exponentially decaying average of past gradients (m):
#
# $ m_t = \beta_1 m_{t-1} + (1-\beta_1) g_t $
#
# This average accomplishes a similar goal as classic momentum update; however, its value is calculated automatically based on the current gradient ($g_t$). The update rule then calculates the second moment ($v_t$):
#
# $ v_t = \beta_2 v_{t-1} + (1-\beta_2) g_t^2 $
#
# The values $m_t$ and $v_t$ are estimates of the first moment (the mean) and the second moment (the uncentered variance) of the gradients respectively. However, they will have a strong bias towards zero in the initial training cycles. The first moment’s bias is corrected as follows.
#
# $ \hat{m}_t = \frac{m_t}{1-\beta^t_1} $
#
# Similarly, the second moment is also corrected:
#
# $ \hat{v}_t = \frac{v_t}{1-\beta_2^t} $
#
# These bias-corrected first and second moment estimates are applied to the ultimate Adam update rule, as follows:
#
# $ \theta_t = \theta_{t-1} - \frac{\alpha \cdot \hat{m}_t}{\sqrt{\hat{v}_t}+\eta} $
#
# Adam is very tolerant of the initial learning rate ($\alpha$) and other training parameters. Kingma and Ba (2014) propose default values of 0.9 for $\beta_1$, 0.999 for $\beta_2$, and $10^{-8}$ for $\eta$.
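#
# The four equations above translate almost line for line into NumPy (a minimal sketch of a single Adam step; the function and argument names are assumptions, and the $\eta$ of the text appears here as `eps`):

```python
import numpy as np

def adam_step(theta, g, m, v, t, alpha=0.001, beta1=0.9, beta2=0.999, eps=1e-8):
    """Apply one Adam update to parameters theta given gradient g.

    t is the 1-based step count used for bias correction.
    """
    m = beta1 * m + (1 - beta1) * g        # decaying mean of gradients (m_t)
    v = beta2 * v + (1 - beta2) * g ** 2   # decaying uncentered variance (v_t)
    m_hat = m / (1 - beta1 ** t)           # bias-corrected first moment
    v_hat = v / (1 - beta2 ** t)           # bias-corrected second moment
    theta = theta - alpha * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v
```

Keras applies the same style of update internally when `optimizer='adam'` is passed to `model.compile`, as in the example below.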
#
# ### Methods Compared
#
# The following image shows how each of these algorithms train (image credits: [author](<NAME>), [where I found it](http://sebastianruder.com/optimizing-gradient-descent/index.html#visualizationofalgorithms) ):
#
# 
#
#
# ### Specifying the Update Rule in Tensorflow
#
# TensorFlow allows the update rule to be set to one of:
#
# * Adagrad
# * **Adam**
# * Ftrl
# * Momentum
# * RMSProp
# * **SGD**
#
#
# +
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Activation
from tensorflow.keras.callbacks import EarlyStopping
from scipy.stats import zscore
from sklearn.model_selection import train_test_split
import pandas as pd
# Read the data set
df = pd.read_csv(
"https://data.heatonresearch.com/data/t81-558/jh-simple-dataset.csv",
na_values=['NA','?'])
# Generate dummies for job
df = pd.concat([df,pd.get_dummies(df['job'],prefix="job")],axis=1)
df.drop('job', axis=1, inplace=True)
# Generate dummies for area
df = pd.concat([df,pd.get_dummies(df['area'],prefix="area")],axis=1)
df.drop('area', axis=1, inplace=True)
# Generate dummies for product
df = pd.concat([df,pd.get_dummies(df['product'],prefix="product")],axis=1)
df.drop('product', axis=1, inplace=True)
# Missing values for income
med = df['income'].median()
df['income'] = df['income'].fillna(med)
# Standardize ranges
df['income'] = zscore(df['income'])
df['aspect'] = zscore(df['aspect'])
df['save_rate'] = zscore(df['save_rate'])
df['subscriptions'] = zscore(df['subscriptions'])
# Convert to numpy - Classification
x_columns = df.columns.drop('age').drop('id')
x = df[x_columns].values
y = df['age'].values
# Create train/test
x_train, x_test, y_train, y_test = train_test_split(
x, y, test_size=0.25, random_state=42)
# Build the neural network
model = Sequential()
model.add(Dense(25, input_dim=x.shape[1], activation='relu')) # Hidden 1
model.add(Dense(10, activation='relu')) # Hidden 2
model.add(Dense(1)) # Output
model.compile(loss='mean_squared_error', optimizer='adam') # Modify here
monitor = EarlyStopping(monitor='val_loss', min_delta=1e-3, patience=5,
verbose=1, mode='auto', restore_best_weights=True)
model.fit(x_train, y_train, validation_data=(x_test, y_test),
          callbacks=[monitor], verbose=0, epochs=1000)
# Predict on the test set and plot the chart
# (chart_regression is a helper function defined earlier in the course notebooks)
pred = model.predict(x_test)
chart_regression(pred.flatten(), y_test)
# -
|
t81_558_class_04_4_backprop.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import json
import math
import numpy as np
import matplotlib.pyplot as plt
networks = ['constant_softmax', 'constant_edl_gen']
decks = ['batman_joker', 'captain_america', 'adversarial_standard',
'adversarial_batman_joker', 'adversarial_captain_america']
# +
def plot_higher_rank_results(noise_pct, constant=True):
labels = decks
# build results
softmax_rank_res = []
edl_gen_rank_res = []
if constant:
n_s = 'constant_softmax'
l_s = 'FFNSL Softmax (with const. pens.)'
l_e = 'FFNSL EDL-GEN (with const. pens.)'
n_e = 'constant_edl_gen'
else:
n_s = 'softmax'
n_e = 'edl_gen'
l_s = 'FFNSL Softmax (with NN pens.)'
l_e = 'FFNSL EDL-GEN (with NN pens.)'
for d in decks:
softmax_d_res = json.loads(open('../nsl/learned_rule_breakdown/{0}/{1}_more_repeats.json'.format(n_s,d),'r').read())
edl_gen_d_res = json.loads(open('../nsl/learned_rule_breakdown/{0}/{1}_more_repeats.json'.format(n_e,d),'r').read())
softmax_rank_res.append(softmax_d_res['noise_pct_{0}'.format(noise_pct)]['correct_rank_higher']*100)
edl_gen_rank_res.append(edl_gen_d_res['noise_pct_{0}'.format(noise_pct)]['correct_rank_higher']*100)
x = np.arange(len(labels)) # the label locations
width = 0.35 # the width of the bars
fig = plt.figure(figsize=(4,4))
ax = plt.gca()
rects1 = ax.bar(x - width/2, softmax_rank_res, width, label=l_s)
rects2 = ax.bar(x + width/2, edl_gen_rank_res, width, label=l_e)
# Add some text for labels, title and custom x-axis tick labels, etc.
ax.set_ylabel('Percentage of repeats with correct rank_higher rule')
ax.set_xlabel('Deck')
ax.set_xticks(x)
ax.set_xticklabels(['BJ','CA','AS','ABJ','ACA'])
# plt.xticks(rotation=90)
ax.set_ylim([0,125])
ax.legend()
# ax.bar_label(rects1, padding=3)
# ax.bar_label(rects2, padding=3)
fig.tight_layout()
plt.savefig('learned_rule_dist/higher_rank_noise_pct_{0}_95_100.pdf'.format(noise_pct), format='pdf', bbox_inches='tight')
# +
def plot_suit_results(noise_pct, constant=True):
labels = decks
# build results
softmax_rank_res = []
edl_gen_rank_res = []
if constant:
n_s = 'constant_softmax'
l_s = 'FFNSL Softmax (with const. pens.)'
l_e = 'FFNSL EDL-GEN (with const. pens.)'
n_e = 'constant_edl_gen'
else:
n_s = 'softmax'
n_e = 'edl_gen'
l_s = 'FFNSL Softmax (with NN pens.)'
l_e = 'FFNSL EDL-GEN (with NN pens.)'
for d in decks:
softmax_d_res = json.loads(open('../nsl/learned_rule_breakdown/{0}/{1}_more_repeats.json'.format(n_s,d),'r').read())
edl_gen_d_res = json.loads(open('../nsl/learned_rule_breakdown/{0}/{1}_more_repeats.json'.format(n_e,d),'r').read())
softmax_rank_res.append(softmax_d_res['noise_pct_{0}'.format(noise_pct)]['correct_suit']*100)
edl_gen_rank_res.append(edl_gen_d_res['noise_pct_{0}'.format(noise_pct)]['correct_suit']*100)
x = np.arange(len(labels)) # the label locations
width = 0.35 # the width of the bars
fig = plt.figure(figsize=(4,4))
ax = plt.gca()
rects1 = ax.bar(x - width/2, softmax_rank_res, width, label=l_s)
rects2 = ax.bar(x + width/2, edl_gen_rank_res, width, label=l_e)
# Add some text for labels, title and custom x-axis tick labels, etc.
ax.set_ylabel('Percentage of repeats with correct suit rule')
ax.set_xlabel('Deck')
ax.set_xticks(x)
ax.set_xticklabels(['BJ','CA','AS','ABJ','ACA'])
ax.set_ylim([0,125])
# ax.bar_label(rects1, padding=3)
# ax.bar_label(rects2, padding=3)
ax.legend()
fig.tight_layout()
plt.savefig('learned_rule_dist/suit_noise_pct_{0}_95_100.pdf'.format(noise_pct), format='pdf', bbox_inches='tight')
plt.show()
# -
noise_pct = 95
plot_higher_rank_results(noise_pct)
plot_suit_results(noise_pct)
noise_pct = 95
plot_higher_rank_results(noise_pct, constant=False)
noise_pct = 95
plot_suit_results(noise_pct, constant=True)
noise_pct = 95
plot_higher_rank_results(noise_pct, constant=False)
noise_pct = 95
plot_suit_results(noise_pct, constant=False)
|
examples/follow_suit/paper_results/graphs/Plot learned rules distribution.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: 'Python 3.9.6 64-bit (''pedrec'': venv)'
# name: python3
# ---
# +
import sys
sys.path.append("../")
import os
import math
import numpy as np
import cv2
import pandas as pd
import tikzplotlib
from torch.utils.data import DataLoader, ConcatDataset
from pedrec.models.constants.action_mappings import ACTION
from pedrec.configs.dataset_configs import get_sim_dataset_cfg_default
from pedrec.datasets.pedrec_dataset import PedRecDataset
from pedrec.configs.pedrec_net_config import PedRecNet50Config
from pedrec.models.constants.dataset_constants import DatasetType
from pedrec.visualizers.skeleton_visualizer import draw_skeleton
from pedrec.visualizers.skeleton_3d_visualizer import add_skeleton_3d_to_axes
from pedrec.visualizers.visualization_helper_3d import draw_grid_3d, draw_origin_3d
from random import randint
import matplotlib.pyplot as plt
# %matplotlib widget
# + pycharm={"name": "#%%\n"}
cfg = PedRecNet50Config()
# ROM Train
dataset_cfg = get_sim_dataset_cfg_default()
dataset_root = "data/datasets/Conti01/"
dataset_df_train_filename = "rt_conti_01_train_FIN.pkl"
dataset_df_val_filename = "rt_conti_01_val_FIN.pkl"
train_dataset = PedRecDataset(dataset_root, dataset_df_train_filename, DatasetType.VALIDATE, dataset_cfg, cfg.model.input_size, None)
val_dataset = PedRecDataset(dataset_root, dataset_df_val_filename, DatasetType.VALIDATE, dataset_cfg, cfg.model.input_size, None)
dataset = ConcatDataset([train_dataset, val_dataset])
dataset_length = len(dataset)
print(len(train_dataset))
print(train_dataset.info.full_length)
print(train_dataset.info.used_length)
print(len(val_dataset))
print(val_dataset.info.full_length)
print(val_dataset.info.used_length)
# dataset = val_dataset
# dataset_length = len(dataset)
# +
# dataset_df_val_path = os.path.join(dataset_root, dataset_df_val_filename)
# df_val = pd.read_pickle(dataset_df_val_path)
# pd.options.display.max_columns = None
# pd.options.display.float_format= '{:.2f}'.format
# skeleton2d_visibles = [col for col in df_val if col.startswith('skeleton2d') and col.endswith('_visible')]
# df_val["visible_joints"] = df_val[skeleton2d_visibles].sum(axis=1)
# df_val[df_val["visible_joints"] < 3].head(5)
# + pycharm={"name": "#%%\n"}
fig, ax = plt.subplots(3,3, figsize=(10,10))
fig_3d = plt.figure(figsize=(10,10))
count = 0
for i in range(0, 3):
for j in range(0, 3):
        index = randint(0, len(val_dataset) - 1)  # randint is inclusive on both ends
entry = val_dataset[index]
# entry = dataset[count + 606]
model_input, labels = entry
skeleton = labels["skeleton"]
skeleton_3d = labels["skeleton_3d"]
scale_factor = 3
skeleton_3d[:, :3] *= scale_factor
skeleton_3d[:, :3] -= (scale_factor / 2)
center = labels["center"]
scale = labels["scale"]
rotation = labels["rotation"]
is_real_img = labels["is_real_img"]
img_path = labels["img_path"]
skeleton[:, 0] *= model_input.shape[1]
skeleton[:, 1] *= model_input.shape[0]
visible_joints = np.sum(skeleton[:, 2])
body_orientation = labels["orientation"][0][1]
body_orientation *= 2*math.pi
body_orientation = math.degrees(body_orientation)
head_orientation = labels["orientation"][1][1]
head_orientation *= 2*math.pi
head_orientation = math.degrees(head_orientation)
draw_skeleton(model_input, skeleton)
img = model_input
ax[i, j].imshow(img)
ax[i, j].set_title(f"{index}: {count}: {visible_joints} | Bθ: {body_orientation:.1f}° | Hθ: {head_orientation:.1f}°")
        # ax[i, j].set_title(f"{index}")  # debug override; keep the informative title above
ax_3d = fig_3d.add_subplot(3, 3, count+1, projection='3d')
draw_grid_3d(ax_3d, lim=1)
draw_origin_3d(ax_3d)
add_skeleton_3d_to_axes(ax_3d, skeleton_3d, size=4)
ax_3d.set_title(f"{index}: {count}: {visible_joints} | Bθ: {body_orientation:.1f}° | Hθ: {head_orientation:.1f}°")
# print(f"{count}: Model input shape: {model_input.shape}, Min value: {model_input.min()}, max value: {model_input.max()}")
# print(f"{count}: center: {center}, scale: {scale}, rotation: {rotation}")
# print(f"{count}: is_real_img: {is_real_img}")
# print(f"{count}: visible joints: {visible_joints}")
# print(f"{count}: path: {img_path}")
# print("------------")
count += 1
plt.show()
# + pycharm={"name": "#%%\n"}
dataset_df_train_path = os.path.join(dataset_root, dataset_df_train_filename)
df_train = pd.read_pickle(dataset_df_train_path)
dataset_df_val_path = os.path.join(dataset_root, dataset_df_val_filename)
df_val = pd.read_pickle(dataset_df_val_path)
df = pd.concat([df_train, df_val], ignore_index=True)
pd.options.display.max_columns = None
pd.options.display.float_format= '{:.2f}'.format
print(f"Number of entrys: {df.shape[0]}")
# + pycharm={"name": "#%%\n"}
filter_skeleton2d = [col for col in df if col.startswith('skeleton2d')]
filter_skeleton3d = [col for col in df if col.startswith('skeleton3d')]
filter_bb = [col for col in df if col.startswith('bb')]
filter_body_orientation = [col for col in df if col.startswith('body_orientation')]
filter_head_orientation = [col for col in df if col.startswith('head_orientation')]
filter_env = [col for col in df if col.startswith('env')]
# + pycharm={"name": "#%%\n"}
# skeleton 2d calculated columns
skeleton2d_xs = [col for col in df if col.startswith('skeleton2d') and col.endswith('_x')]
skeleton2d_ys = [col for col in df if col.startswith('skeleton2d') and col.endswith('_y')]
skeleton2d_visibles = [col for col in df if col.startswith('skeleton2d') and col.endswith('_visible')]
df["skeleton2d_width"] = df[skeleton2d_xs].max(axis=1) - df[skeleton2d_xs].min(axis=1)
df["skeleton2d_height"] = df[skeleton2d_ys].max(axis=1) - df[skeleton2d_ys].min(axis=1)
df["skeleton2d_size"] = np.sqrt(df["skeleton2d_width"]**2 + df["skeleton2d_height"]**2)
df["skeleton2d_visible_joints"] = df[skeleton2d_visibles].sum(axis=1)
df_train["skeleton2d_width"] = df_train[skeleton2d_xs].max(axis=1) - df_train[skeleton2d_xs].min(axis=1)
df_train["skeleton2d_height"] = df_train[skeleton2d_ys].max(axis=1) - df_train[skeleton2d_ys].min(axis=1)
df_train["skeleton2d_size"] = np.sqrt(df_train["skeleton2d_width"]**2 + df_train["skeleton2d_height"]**2)
df_train["skeleton2d_visible_joints"] = df_train[skeleton2d_visibles].sum(axis=1)
df_val["skeleton2d_width"] = df_val[skeleton2d_xs].max(axis=1) - df_val[skeleton2d_xs].min(axis=1)
df_val["skeleton2d_height"] = df_val[skeleton2d_ys].max(axis=1) - df_val[skeleton2d_ys].min(axis=1)
df_val["skeleton2d_size"] = np.sqrt(df_val["skeleton2d_width"]**2 + df_val["skeleton2d_height"]**2)
df_val["skeleton2d_visible_joints"] = df_val[skeleton2d_visibles].sum(axis=1)
# + pycharm={"name": "#%%\n"}
# skeleton 3d calculated columns
skeleton3d_xs = [col for col in df if col.startswith('skeleton3d') and col.endswith('_x')]
skeleton3d_ys = [col for col in df if col.startswith('skeleton3d') and col.endswith('_y')]
skeleton3d_zs = [col for col in df if col.startswith('skeleton3d') and col.endswith('_z')]
df["skeleton3d_width"] = df[skeleton3d_xs].max(axis=1) - df[skeleton3d_xs].min(axis=1)
df["skeleton3d_height"] = df[skeleton3d_ys].max(axis=1) - df[skeleton3d_ys].min(axis=1)
df["skeleton3d_depth"] = df[skeleton3d_zs].max(axis=1) - df[skeleton3d_zs].min(axis=1)
df["skeleton3d_size"] = np.sqrt(df["skeleton3d_width"]**2 + df["skeleton3d_height"]**2 + df["skeleton3d_depth"]**2)
df_train["skeleton3d_width"] = df_train[skeleton3d_xs].max(axis=1) - df_train[skeleton3d_xs].min(axis=1)
df_train["skeleton3d_height"] = df_train[skeleton3d_ys].max(axis=1) - df_train[skeleton3d_ys].min(axis=1)
df_train["skeleton3d_depth"] = df_train[skeleton3d_zs].max(axis=1) - df_train[skeleton3d_zs].min(axis=1)
df_train["skeleton3d_size"] = np.sqrt(df_train["skeleton3d_width"]**2 + df_train["skeleton3d_height"]**2 + df_train["skeleton3d_depth"]**2)
df_val["skeleton3d_width"] = df_val[skeleton3d_xs].max(axis=1) - df_val[skeleton3d_xs].min(axis=1)
df_val["skeleton3d_height"] = df_val[skeleton3d_ys].max(axis=1) - df_val[skeleton3d_ys].min(axis=1)
df_val["skeleton3d_depth"] = df_val[skeleton3d_zs].max(axis=1) - df_val[skeleton3d_zs].min(axis=1)
df_val["skeleton3d_size"] = np.sqrt(df_val["skeleton3d_width"]**2 + df_val["skeleton3d_height"]**2 + df_val["skeleton3d_depth"]**2)
# +
# remove empty frames
df = df[df["skeleton2d_size"] > 5]
df = df[df["skeleton3d_size"] > 10]
df_train = df_train[df_train["skeleton2d_size"] > 5]
df_train = df_train[df_train["skeleton3d_size"] > 10]
df_val = df_val[df_val["skeleton2d_size"] > 5]
df_val = df_val[df_val["skeleton3d_size"] > 10]
# +
# distance calculated column
env_position_xs = [col for col in df if col == "env_position_x"]
env_position_ys = [col for col in df if col == "env_position_y"]
env_position_zs = [col for col in df if col == "env_position_z"]
df["distance_xz"] = np.sqrt(np.abs(df[env_position_xs].sum(axis=1)**2) + np.abs(df[env_position_zs].sum(axis=1)**2))
df_train["distance_xz"] = np.sqrt(np.abs(df_train[env_position_xs].sum(axis=1)**2) + np.abs(df_train[env_position_zs].sum(axis=1)**2))
df_val["distance_xz"] = np.sqrt(np.abs(df_val[env_position_xs].sum(axis=1)**2) + np.abs(df_val[env_position_zs].sum(axis=1)**2))
df["body_orientation_phi"] = df["body_orientation_phi"] * math.pi * 2
df["body_orientation_theta"] = df["body_orientation_theta"] * math.pi
df["head_orientation_phi"] = df["head_orientation_phi"] * math.pi * 2
df["head_orientation_theta"] = df["head_orientation_theta"] * math.pi
df_val["body_orientation_phi"] = df_val["body_orientation_phi"] * math.pi * 2
df_val["body_orientation_theta"] = df_val["body_orientation_theta"] * math.pi
df_val["head_orientation_phi"] = df_val["head_orientation_phi"] * math.pi * 2
df_val["head_orientation_theta"] = df_val["head_orientation_theta"] * math.pi
df_train["body_orientation_phi"] = df_train["body_orientation_phi"] * math.pi * 2
df_train["body_orientation_theta"] = df_train["body_orientation_theta"] * math.pi
df_train["head_orientation_phi"] = df_train["head_orientation_phi"] * math.pi * 2
df_train["head_orientation_theta"] = df_train["head_orientation_theta"] * math.pi
# + pycharm={"name": "#%%\n"}
df.head(5)
# + pycharm={"name": "#%%\n"}
df.describe().apply(lambda s: s.apply(lambda x: format(x, 'g')))
# -
df_description = df.describe()
num_3d_models = 1
num_animations = 1
print("General Report")
print(f"Number of frames: {len(df)}")
print(f"FPS: 30")
print(f"Resolution: 1920x1080")
print(f"Number of 3D models: {num_3d_models}")
print(f"Number of animations {num_animations}")
print("Data & Mean & Std & Min & Max \\\\")
print(f"Skeleton 2D Diameter & ${df_description['skeleton2d_size']['mean']:.2f}px$ & ${df_description['skeleton2d_size']['std']:.2f}px$ & ${df_description['skeleton2d_size']['min']:.2f}px$ & ${df_description['skeleton2d_size']['max']:.2f}px$ \\\\")
print(f"Skeleton 3D Diameter & ${df_description['skeleton3d_size']['mean']:.2f}mm$ & ${df_description['skeleton3d_size']['std']:.2f}mm$ & ${df_description['skeleton3d_size']['min']:.2f}mm$ & ${df_description['skeleton3d_size']['max']:.2f}mm$ \\\\")
print(f"Camera Distance (XZ) & ${df_description['distance_xz']['mean']:.2f}mm$ & ${df_description['distance_xz']['std']:.2f}mm$ & ${df_description['distance_xz']['min']:.2f}mm$ & ${df_description['distance_xz']['max']:.2f}mm$ \\\\")
# +
action_count = {}
for actions in df['actions']:
for action in actions:
if action not in action_count:
action_count[action] = 0
action_count[action] += 1
# TODO: Actions
# Which Actions contained
# How many frames per action
# -
print(action_count)
for action_val, count in action_count.items():
print(f"{ACTION(action_val).name}: {count}")
fig = plt.figure()
body_phi_deg = df['body_orientation_phi'] * (180 / math.pi)
body_phi_deg.hist(bins=50)
xmin, xmax, ymin, ymax = plt.axis()
tikzplotlib.save("/home/dennis/Downloads/conti_01_body_orientation_phi.tex", extra_axis_parameters={
'width=0.7\\textwidth',
'font=\\footnotesize',
    'title={Distribution of body $\\phi$ orientations}',
'xlabel={degrees ($°$)}',
'ylabel={Number of samples}',
'enlarge x limits=0.001',
'enlarge y limits=0.001',
'xmin=0',
'xmax=360',
})
fig = plt.figure()
head_phi_deg = df['head_orientation_phi'] * (180 / math.pi)
head_phi_deg.hist(bins=50)
xmin, xmax, ymin, ymax = plt.axis()
tikzplotlib.save("/home/dennis/Downloads/conti_01_head_orientation_phi.tex", extra_axis_parameters={
'width=0.7\\textwidth',
'font=\\footnotesize',
    'title={Distribution of head $\\phi$ orientations}',
'xlabel={degrees ($°$)}',
'ylabel={Number of samples}',
'enlarge x limits=0.001',
'enlarge y limits=0.001',
'xmin=0',
'xmax=360',
})
fig = plt.figure()
df['skeleton2d_width'].hist(bins=50)
# + pycharm={"name": "#%%\n"}
fig = plt.figure()
df['skeleton2d_height'].hist(bins=50)
# -
fig = plt.figure()
df['skeleton2d_size'].hist(bins=50)
# + pycharm={"name": "#%%\n"}
fig = plt.figure()
df['skeleton3d_width'].hist(bins=50)
# + pycharm={"name": "#%%\n"}
fig = plt.figure()
df['skeleton3d_height'].hist(bins=50)
# + pycharm={"name": "#%%\n"}
fig = plt.figure()
df['skeleton3d_depth'].hist(bins=50)
# -
fig = plt.figure()
df['skeleton3d_size'].hist(bins=50)
fig = plt.figure()
df['body_orientation_phi'].hist(bins=50)
fig = plt.figure()
df['head_orientation_theta'].hist(bins=50)
fig = plt.figure()
df['head_orientation_phi'].hist(bins=50)
df_description = df_train.describe()
num_3d_models = 1
num_animations = 1
print("General Report (TRAIN)")
print(f"Number of frames: {len(df_train)}")
print(f"FPS: 30")
print(f"Resolution: 1920x1080")
print(f"Number of 3D models: {num_3d_models}")
print(f"Number of animations {num_animations}")
print("Data & Mean & Std & Min & Max \\\\")
print(f"Skeleton 2D Diameter & ${df_description['skeleton2d_size']['mean']:.2f}px$ & ${df_description['skeleton2d_size']['std']:.2f}px$ & ${df_description['skeleton2d_size']['min']:.2f}px$ & ${df_description['skeleton2d_size']['max']:.2f}px$ \\\\")
print(f"Skeleton 3D Diameter & ${df_description['skeleton3d_size']['mean']:.2f}mm$ & ${df_description['skeleton3d_size']['std']:.2f}mm$ & ${df_description['skeleton3d_size']['min']:.2f}mm$ & ${df_description['skeleton3d_size']['max']:.2f}mm$ \\\\")
print(f"Camera Distance (XZ) & ${df_description['distance_xz']['mean']:.2f}mm$ & ${df_description['distance_xz']['std']:.2f}mm$ & ${df_description['distance_xz']['min']:.2f}mm$ & ${df_description['distance_xz']['max']:.2f}mm$ \\\\")
df_description = df_val.describe()
num_3d_models = 1
num_animations = 1
print("General Report (VAL)")
print(f"Number of frames: {len(df_val)}")
print(f"FPS: 30")
print(f"Resolution: 1920x1080")
print(f"Number of 3D models: {num_3d_models}")
print(f"Number of animations {num_animations}")
print("Data & Mean & Std & Min & Max \\\\")
print(f"Skeleton 2D Diameter & ${df_description['skeleton2d_size']['mean']:.2f}px$ & ${df_description['skeleton2d_size']['std']:.2f}px$ & ${df_description['skeleton2d_size']['min']:.2f}px$ & ${df_description['skeleton2d_size']['max']:.2f}px$ \\\\")
print(f"Skeleton 3D Diameter & ${df_description['skeleton3d_size']['mean']:.2f}mm$ & ${df_description['skeleton3d_size']['std']:.2f}mm$ & ${df_description['skeleton3d_size']['min']:.2f}mm$ & ${df_description['skeleton3d_size']['max']:.2f}mm$ \\\\")
print(f"Camera Distance (XZ) & ${df_description['distance_xz']['mean']:.2f}mm$ & ${df_description['distance_xz']['std']:.2f}mm$ & ${df_description['distance_xz']['min']:.2f}mm$ & ${df_description['distance_xz']['max']:.2f}mm$ \\\\")
def circular_hist(ax, x, bins=16, density=True, offset=0, gaps=True):
"""
Produce a circular histogram of angles on ax.
Parameters
----------
ax : matplotlib.axes._subplots.PolarAxesSubplot
axis instance created with subplot_kw=dict(projection='polar').
x : array
Angles to plot, expected in units of radians.
bins : int, optional
Defines the number of equal-width bins in the range. The default is 16.
density : bool, optional
If True plot frequency proportional to area. If False plot frequency
proportional to radius. The default is True.
offset : float, optional
Sets the offset for the location of the 0 direction in units of
radians. The default is 0.
gaps : bool, optional
Whether to allow gaps between bins. When gaps = False the bins are
forced to partition the entire [-pi, pi] range. The default is True.
Returns
-------
n : array or list of arrays
The number of values in each bin.
bins : array
The edges of the bins.
patches : `.BarContainer` or list of a single `.Polygon`
Container of individual artists used to create the histogram
or list of such containers if there are multiple input datasets.
"""
# Wrap angles to [-pi, pi)
x = (x+np.pi) % (2*np.pi) - np.pi
# Force bins to partition entire circle
if not gaps:
bins = np.linspace(-np.pi, np.pi, num=bins+1)
# Bin data and record counts
n, bins = np.histogram(x, bins=bins)
# Compute width of each bin
widths = np.diff(bins)
# By default plot frequency proportional to area
if density:
# Area to assign each bin
area = n / x.size
# Calculate corresponding bin radius
radius = (area/np.pi) ** .5
# Otherwise plot frequency proportional to radius
else:
radius = n
# Plot data on ax
patches = ax.bar(bins[:-1], radius, zorder=1, align='edge', width=widths,
edgecolor='C0', color='orange', fill=True, linewidth=1)
# Set the direction of the zero angle
#ax.set_theta_offset(offset)
# Remove ylabels for area plots (they are mostly obstructive)
if density:
ax.set_yticks([])
return n, bins, patches
# Construct figure and axis to plot on
# fig, ax = plt.subplots(1, 2)
fig = plt.figure()
# ax = fig.add_subplot(projection='polar')
ax = fig.add_subplot()
body_phi_deg = df_train['body_orientation_phi'] * (180 / math.pi)
# body_phi_deg.hist(bins=360)
# fig, ax = plt.subplots(1, 2, subplot_kw=dict(projection='polar'))
# Visualise by area of bins
# circular_hist(ax[0], body_phi_deg)
# Visualise by radius of bins
circular_hist(ax, df_train['body_orientation_phi'], bins=16, offset=0, density=False)  # circular_hist expects radians
xmin, xmax, ymin, ymax = plt.axis()
tikzplotlib.save("/home/dennis/Downloads/c01_train_body_orientation_phi.tex", extra_axis_parameters={
'width=0.5\\textwidth',
    'title={Distribution of body $\\phi$ orientations}',
'xlabel={degrees [$°$]}',
'ylabel={Number of samples}',
# 'enlarge x limits=0.001',
# 'enlarge y limits=0.001',
# 'xmin=0',
# 'xmax=360',
})
body_phi_deg = df['body_orientation_phi'] * (180 / math.pi)
max_val = 0
for i in range(0, 370, 10):
elements = body_phi_deg[body_phi_deg.between(i, i+10, "both")]
num_els = len(elements)
max_val = max_val if max_val > num_els else num_els
print(f"{i},{num_els}")
print(max_val)
body_phi_deg = df_val['body_orientation_phi'] * (180 / math.pi)
max_val = 0
for i in range(0, 370, 10):
elements = body_phi_deg[body_phi_deg.between(i, i+10, "both")]
num_els = len(elements)
max_val = max_val if max_val > num_els else num_els
print(f"{i},{num_els}")
fig = plt.figure()
body_phi_deg = df_train['body_orientation_phi'] * (180 / math.pi)
body_phi_deg.hist(bins=50)
xmin, xmax, ymin, ymax = plt.axis()
tikzplotlib.save("/home/dennis/Downloads/c01_train_body_orientation_phi.tex", extra_axis_parameters={
'width=0.5\\textwidth',
    'title={Distribution of body $\\phi$ orientations}',
'xlabel={degrees ($°$)}',
'ylabel={Number of samples}',
'enlarge x limits=0.001',
'enlarge y limits=0.001',
'xmin=0',
'xmax=360',
})
fig = plt.figure()
head_phi_deg = df_train['head_orientation_phi'] * (180 / math.pi)
head_phi_deg.hist(bins=50)
xmin, xmax, ymin, ymax = plt.axis()
tikzplotlib.save("/home/dennis/Downloads/c01_train_head_orientation_phi.tex", extra_axis_parameters={
'width=0.5\\textwidth',
    'title={Distribution of head $\\phi$ orientations}',
'xlabel={degrees ($°$)}',
'ylabel={Number of samples}',
'enlarge x limits=0.001',
'enlarge y limits=0.001',
'xmin=0',
'xmax=360',
})
fig = plt.figure()
body_phi_deg = df_val['body_orientation_phi'] * (180 / math.pi)
body_phi_deg.hist(bins=50)
xmin, xmax, ymin, ymax = plt.axis()
tikzplotlib.save("/home/dennis/Downloads/c01_val_body_orientation_phi.tex", extra_axis_parameters={
'width=0.5\\textwidth',
    'title={Distribution of body $\\phi$ orientations}',
'xlabel={degrees ($°$)}',
'ylabel={Number of samples}',
'enlarge x limits=0.001',
'enlarge y limits=0.001',
'xmin=0',
'xmax=360',
})
fig = plt.figure()
head_phi_deg = df_val['head_orientation_phi'] * (180 / math.pi)
head_phi_deg.hist(bins=50)
xmin, xmax, ymin, ymax = plt.axis()
tikzplotlib.save("/home/dennis/Downloads/c01_val_head_orientation_phi.tex", extra_axis_parameters={
'width=0.5\\textwidth',
    'title={Distribution of head $\\phi$ orientations}',
'xlabel={degrees ($°$)}',
'ylabel={Number of samples}',
'enlarge x limits=0.001',
'enlarge y limits=0.001',
'xmin=0',
'xmax=360',
})
print("3D Result Evaluation")
# +
df_results = pd.read_pickle("data/datasets/Conti01/results/C01_pred_df_experiment_pedrec_p2d3d_c_o_h36m_sim_mebow_0.pkl")
dataset_df_val_path = os.path.join(dataset_root, "rt_conti_01_val.pkl")
df = pd.read_pickle(dataset_df_val_path)
print(len(df_results))
print(len(df_val))
# +
env_position_xs = [col for col in df if col == "env_position_x"]
env_position_ys = [col for col in df if col == "env_position_y"]
env_position_zs = [col for col in df if col == "env_position_z"]
df["distance_xz"] = np.sqrt(np.abs(df[env_position_xs].sum(axis=1)**2) + np.abs(df[env_position_zs].sum(axis=1)**2))
df_results["distance_xz"] = df["distance_xz"]
# -
# get_2d_pose_pck_results, skeleton_2d_gt and skeleton_2d_pred are expected to be
# defined earlier in the evaluation pipeline (not shown in this excerpt)
pck_results = get_2d_pose_pck_results(skeleton_2d_gt, skeleton_2d_pred)
|
notebooks/dataset_rtsim_conti01.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# %matplotlib inline
#
# # Density Estimation for a Gaussian mixture
#
#
# Plot the density estimation of a mixture of two Gaussians. Data is
# generated from two Gaussians with different centers and covariance
# matrices.
#
# +
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.colors import LogNorm
from sklearn import mixture
n_samples = 300
# generate random sample, two components
np.random.seed(0)
# generate spherical data centered on (20, 20)
shifted_gaussian = np.random.randn(n_samples, 2) + np.array([20, 20])
# generate zero centered stretched Gaussian data
C = np.array([[0., -0.7], [3.5, .7]])
stretched_gaussian = np.dot(np.random.randn(n_samples, 2), C)
# concatenate the two datasets into the final training set
X_train = np.vstack([shifted_gaussian, stretched_gaussian])
# fit a Gaussian Mixture Model with two components
clf = mixture.GaussianMixture(n_components=2, covariance_type='full')
clf.fit(X_train)
# display predicted scores by the model as a contour plot
x = np.linspace(-20., 30.)
y = np.linspace(-20., 40.)
X, Y = np.meshgrid(x, y)
XX = np.array([X.ravel(), Y.ravel()]).T
Z = -clf.score_samples(XX)
Z = Z.reshape(X.shape)
CS = plt.contour(X, Y, Z, norm=LogNorm(vmin=1.0, vmax=1000.0),
levels=np.logspace(0, 3, 10))
CB = plt.colorbar(CS, shrink=0.8, extend='both')
plt.scatter(X_train[:, 0], X_train[:, 1], .8)
plt.title('Negative log-likelihood predicted by a GMM')
plt.axis('tight')
plt.show()
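# A fitted `GaussianMixture` is also a generative model: its `sample` method draws new points from the learned density. A minimal, self-contained sketch (fitting a fresh mixture on toy data rather than reusing the `clf` above):

```python
import numpy as np
from sklearn import mixture

# Toy data: two well-separated spherical clusters
rng = np.random.RandomState(0)
X = np.vstack([rng.randn(100, 2) + 5, rng.randn(100, 2) - 5])

gmm = mixture.GaussianMixture(n_components=2, random_state=0).fit(X)

# Draw 50 new points from the fitted mixture; sample() also
# returns the component each point was drawn from
X_new, labels = gmm.sample(50)
print(X_new.shape, labels.shape)  # (50, 2) (50,)
```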
|
sklearn/sklearn learning/demonstration/auto_examples_jupyter/mixture/plot_gmm_pdf.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import warnings
warnings.simplefilter("ignore")
warnings.filterwarnings("ignore")
import joblib
import missingno
import pandas_profiling
import pandas as pd
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
# %matplotlib inline
from imblearn.over_sampling import SMOTE
from sklearn.preprocessing import LabelEncoder
from sklearn.preprocessing import OrdinalEncoder
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.ensemble import ExtraTreesClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
import xgboost as xgb
import lightgbm as lgb
from sklearn import metrics
from sklearn.metrics import f1_score
from sklearn.metrics import accuracy_score
from sklearn.metrics import classification_report
from sklearn.model_selection import cross_val_score
from sklearn.model_selection import GridSearchCV
# -
df_train = pd.read_csv("train.csv")
df_test = pd.read_csv("test.csv")
df_train
df_test
df = df_train.drop("ID", axis=1)
df
df = pd.get_dummies(df, drop_first=True)
df
# +
X = df.drop('Is_Churn', axis=1)
Y = df['Is_Churn']
# adding samples to make all the categorical label values same
oversample = SMOTE()
X, Y = oversample.fit_resample(X, Y)
Y.value_counts()
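# SMOTE synthesises new minority samples by interpolating between neighbours. As a hedged alternative sketch, plain random over-sampling with `sklearn.utils.resample` duplicates minority rows until the classes balance (toy data below, not this competition's columns):

```python
import pandas as pd
from sklearn.utils import resample

# Toy imbalanced frame: 6 "stay" rows vs 2 "churn" rows
df_toy = pd.DataFrame({'x': range(8), 'Is_Churn': [0] * 6 + [1] * 2})
majority = df_toy[df_toy['Is_Churn'] == 0]
minority = df_toy[df_toy['Is_Churn'] == 1]

# Sample the minority class with replacement up to the majority count
upsampled = resample(minority, replace=True,
                     n_samples=len(majority), random_state=0)
balanced = pd.concat([majority, upsampled])
print(balanced['Is_Churn'].value_counts().to_dict())  # {0: 6, 1: 6}
```

Unlike SMOTE, this only repeats existing rows, so it is simpler but adds no new information.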
# +
# Feature Scaling on training data
scaler = StandardScaler()
X = pd.DataFrame(scaler.fit_transform(X), columns=X.columns)
X
# +
# Classification Model Function
def classify(model, X, Y):
X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size=0.25, random_state=604)
# Training the model
model.fit(X_train, Y_train)
    # Predict labels for X_test
pred = model.predict(X_test)
# Classification Report
class_report = classification_report(Y_test, pred)
print("\nClassification Report:\n", class_report)
# Accuracy Score
acc_score = (accuracy_score(Y_test, pred))*100
print("Accuracy Score:", acc_score)
# F1 Score
f_one_score = (f1_score(Y_test, pred, average='macro'))*100
print("F1 Score:", f_one_score)
# Cross Validation Score
cv_score = (cross_val_score(model, X, Y, cv=5).mean())*100
print("Cross Validation Score:", cv_score)
# Result of accuracy minus cv scores
result = acc_score - cv_score
print("\nAccuracy Score - Cross Validation Score is", result)
# +
# K Neighbors Classifier
model=KNeighborsClassifier(n_neighbors=15)
classify(model, X, Y)
# +
# Data Preprocessing and Feature Engineering on Testing data
X = df_test.drop("ID", axis=1)
X = pd.get_dummies(X, drop_first=True)
# Feature Scaling: reuse the scaler fitted on the training data (transform only),
# so the test set is scaled with the training statistics rather than re-fitted
X = pd.DataFrame(scaler.transform(X), columns=X.columns)
Predicted_Churn = model.predict(X)
# Checking the predicted churn details and storing in dataframe format
predicted_output = pd.DataFrame()
predicted_output['ID'] = df_test["ID"]
predicted_output['Is_Churn'] = Predicted_Churn
predicted_output
# -
predicted_output.to_csv("sample_submission5.csv", index=False)
|
Jobathon CL (KNN Model).ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
#
# # Python Tools for Data Analysis
#
# <div class="alert alert-block alert-warning">
# <b>Learning outcomes:</b>
# <br>
# <ul>
# <li>Identify concepts in ethical reasoning which may influence our analysis and results from data.</li>
# <li>Learn and apply a basic set of methods from the core data analysis libraries of Numpy, Pandas and Matplotlib.</li>
# </ul>
# </div>
#
# Data has become the most important language of our era, informing everything from intelligence in automated machines, to predictive analytics in medical diagnostics. The plunging cost and easy accessibility of the raw requirements for such systems - data, software, distributed computing, and sensors - are driving the adoption and growth of data-driven decision-making.
#
# As it becomes ever-easier to collect data about individuals and systems, a diverse range of professionals - who have never been trained for such requirements - grapple with inadequate analytic and data management skills, as well as the ethical risks arising from the possession and consequences of such data and tools.
#
# Before we go on with the technical training, consider the following on the ethics of the data we use.
#
# ## Ethics
#
# Computers cannot make decisions. Their output is an absolute function of the data provided as input, and the algorithms applied to analyse that input. The aid of computers in decision-making does not override human responsibility and accountability.
#
# It should be expected that both data and algorithms should stand up to scrutiny so as to justify any and all decisions made as a result of their output. "Computer says no," is not an unquestionable statement.
#
# Our actions - as data scientists - are intended to persuade people to act or think other than the way they currently do based on nothing more than the strength of our analysis, and informed by data.
#
# The process by which we examine and explain why what we consider to be right or wrong, is considered right or wrong in matters of human conduct, belongs to the study of ethics.
#
# <br>
# <div class="well">
# <b><i>Case-study</i></b>: _Polygal_ was a gel made from beet and apple pectin. Administered to a severely wounded patient, it was supposed to reduce bleeding. To test this hypothesis, [Sigmund Rascher](https://en.wikipedia.org/wiki/Sigmund_Rascher) administered a tablet to human subjects who were then shot or - without anesthesia - had their limbs amputated.
# <br><br>
# During the Second World War, and under the direction of senior Nazi officers, medical experiments of quite unusual violence were conducted on prisoners of war and civilians regarded by the Nazi regime as sub-human. After the war, twenty medical doctors were tried for war crimes and crimes against humanity at the [Doctor's Trial](https://en.wikipedia.org/wiki/Doctors%27_trial) held in Nuremberg from 1946 to 1949.
# <br><br>
# In 1947 <a href="https://en.wikipedia.org/wiki/Kurt_Blome"><NAME></a> - Deputy Reich Health Leader, a high-ranking Nazi scientist - was acquitted of war crimes on the strength of intervention by the United States. Within two months, he was being debriefed by the US military who wished to learn everything he knew about biological warfare.
# <br><br>
# <i>Do you feel the US was "right" or "wrong" to offer Blome immunity from prosecution in exchange for what he knew?</i>
# <br><br>
# There were numerous experiments conducted by the Nazis that raise ethical dilemmas, including: immersing prisoners in freezing water to observe the result and test hypothermia revival techniques; high altitude pressure and decompression experiments; sulfanilamide tests for treating gangrene and other bacterial infections.
# <br><br>
# <i>Do you feel it would be "right" or "wrong" to use these data in your research and analysis?</i>
# <br><br>
# <a href="http://www.loyno.edu/~folse/ethics.html">Further reading</a>.
# </div>
#
# Whereas everything else we do can describe human behaviour as it _is_, ethics provides a theoretical framework to describe the world as it _should_, or _should not_ be. It gives us full range to describe an ideal outcome, and to consider all that we know and do not know which may impede or confound our desired result.
#
# <br>
# <div class="well">
# <b><i>Case-study</i></b>: A Nigerian man travels to a conference in the United States. After one of the sessions, he goes to the bathroom to wash his hands. The electronic automated soap dispenser does not recognise his hands beneath the sensor. A white American sees his confusion and places his hands beneath the device. Soap is dispensed. The Nigerian man tries again. It still does not recognise him.
# <br><br>
# <i>How would something like this happen? Is it an ethical concern?</i>
# <br><br>
# <a href="http://www.iflscience.com/technology/this-racist-soap-dispenser-reveals-why-diversity-in-tech-is-muchneeded/">Further reading</a>.
# </div>
#
# When we consider ethical outcomes, we use the terms _good_ or _bad_ to describe judgements about people or things, and we use _right_ or _wrong_ to refer to the outcome of specific actions. Understand, though, that - while right and wrong may sometimes be obvious - we are often stuck in ethical dilemmas.
#
# How we consider whether an action is right or wrong comes down to the tension between what was intended by an action, and what the consequences of that action were. Are only intentions important? Or should we only consider outcomes? And how absolutely do you want to judge this chain: the _right_ motivation, leading to the _right_ intention, performing the _right_ action, resulting in only _good_ consequences. How do we evaluate this against what it may be impossible to know at the time, even if that information will become available after a decision is made?
#
# We also need to consider competing interests in good and bad outcomes. A good outcome for the individual making the decision may be a bad decision for numerous others. Conversely, an altruistic person may act only for the benefit of others even to their own detriment.
#
# Ethical problems do not always require a call to facts to justify a particular decision, but they do have a number of characteristics:
#
# - _Public_: the process by which we arrive at an ethical choice is known to all participants;
# - _Informal_: the process cannot always be codified into law like a legal system;
# - _Rational_: despite the informality, the logic used must be accessible and defensible;
# - _Impartial_: any decision must not favour any group or person;
#
# Rather than imposing a specific set of rules to be obeyed, ethics provides a framework in which we may consider whether what we are setting out to achieve conforms to our values, and whether the process by which we arrive at our decision can be validated and inspected by others.
#
# No matter how sophisticated our automated machines become, unless our intention is to construct a society "of machines, for machines", people will always be needed to decide on what ethical considerations must be taken into account.
#
# There are limits to what analysis can achieve, and it is up to the individuals producing that analysis to ensure that any assumptions, doubts, and requirements are documented along with their results. Critically, it is also each individual's personal responsibility to raise any concerns with the source data used in the analysis, including whether personal data are being used legitimately, or whether the source data are at all trustworthy, as well as the algorithms used to process those data and produce a result.
#
# ## Data analysis
#
# This will be a very brief introduction to some tools used in data analysis in Python. This will not provide insight into the approaches to performing analysis, which is left to self-study, or to modules elsewhere in this series.
#
# ### Numpy arrays
#
# Data analysis often involves performing operations on large lists of data. Numpy is a powerful suite of tools permitting you to work quickly and easily with complete data lists. We refer to these lists as arrays, and - if you are familiar with the term from mathematics - you can think of these as matrix methods.
#
# By convention, we import Numpy as np; `import numpy as np`.
#
# We're also going to want to be generating a lot of lists of random floats for these exercises, and that's tedious to write. Let's get Python to do this for us using the `random` module.
# +
import numpy as np
import random
def generate_float_list(lwr, upr, num):
"""
Return a list of num random decimal floats ranged between lwr and upr.
Range(lwr, upr) creates a list of every integer between lwr and upr.
random.sample takes num integers from the range list, chosen randomly.
"""
int_list = random.sample(range(lwr, upr), num)
return [x/100 for x in int_list]
# Create two lists
height = generate_float_list(100, 220, 10)
weight = generate_float_list(5000, 20000, 10)
# Convert these to Numpy arrays
np_height = np.array(height)
np_weight = np.array(weight)
print(np_height)
print(np_weight)
# -
# There is a useful timer function built in to Jupyter Notebook. Start any line of code with `%time` and you'll get output on how long the code took to run.
#
# This is important when working with data-intensive operations where you want to squeeze out every drop of efficiency by optimising your code.
#
# We can now perform operations directly on all the values in these Numpy arrays. Here are two simple methods to use.
#
# <div class="alert alert-block alert-info">
# <b>Syntax</b>
# <br>
# <ul>
# <li><i>Element-wise calculations:</i> you can treat Numpy arrays as you would individual floats or integers. Note, they must either have the same shape (i.e. number or elements), or you can perform bitwise operations (operate on each item in the array) with a single float or int</li>
# <li><i>Filtering:</i> You can quickly filter Numpy arrays by performing boolean operations, e.g. `np_array[np_array > num]`, or, for a purely boolean response, `np_array > num`</li>
# </ul>
# </div>
# +
# Calculate body-mass index based on the heights and weights in our arrays
# Time the calculation ... it won't be long
# %time bmi = np_weight / np_height ** 2
print(bmi)
# Any BMI > 35 is considered severely obese. Let's see who in our sample is at risk.
# Get a boolean response
print(bmi > 35)
# Or print only BMI values above 35
print(bmi[bmi > 35])
# -
# ### Pandas
#
# We briefly experimented with Pandas back in [Built-in modules](03 - Python intermediate.ipynb#Built-in-modules).
#
# The description given for Pandas there was:
#
# _**pandas** is a Python package providing fast, flexible, and expressive data structures designed to make working with "relational" or "labeled" data both easy and intuitive. It aims to be the fundamental high-level building block for doing practical, **real world** data analysis in Python. Additionally, it has the broader goal of becoming **the most powerful and flexible open source data analysis / manipulation tool available in any language**._
#
# Pandas is developed by [<NAME>](http://wesmckinney.com/) and has a marvelous and active development community. Wes prefers pandas to be written in the lower-case (I'll alternate).
#
# Underneath Pandas is Numpy, so they are closely related and tightly integrated. Pandas allows you to manipulate data either as a `Series` (similarly to Numpy, but with added features), or in a tabular form with rows of values and named columns (similar to the way you may think of an Excel spreadsheet).
#
# This tabular form is known as a `DataFrame`. Pandas works well with Jupyter Notebook and you can output nicely formated dataframes (just make sure the last line of your code block is the name of the dataframe).
#
# The convention is to import pandas as pd, `import pandas as pd`.
#
# The following tutorial is taken directly from the '10 minutes to pandas' section of the [Pandas documentation](https://pandas.pydata.org/pandas-docs/stable/10min.html). Note, this isn't the complete tutorial, and you can continue there.
#
# #### Object creation
#
# Create a `Series` by passing a list of values, and letting pandas create a default integer index.
# +
import pandas as pd
import numpy as np
s = pd.Series([1,3,5,np.nan,6,8])
s
# -
# Note that `np.nan` is Numpy's default way of presenting a value as "not-a-number". For instance, divide-by-zero returns `np.nan`. This means you can perform complex operations relatively safely and sort out the damage afterwards.
#
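# Because `np.nan` propagates through arithmetic, a common pattern is exactly this: compute first, then "sort out the damage" with `fillna` or `dropna`. A minimal illustrative sketch (the series below is made up for the example):
#
```python
import numpy as np
import pandas as pd

s = pd.Series([1.0, np.nan, 3.0])
ratio = s / s.sum()          # nan-safe: sum() skips the NaN (1 + 3 = 4)
cleaned = ratio.fillna(0.0)  # replace the remaining NaN afterwards
print(cleaned.tolist())      # [0.25, 0.0, 0.75]
```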
# Create a DataFrame by passing a numpy array, with a datetime index and labeled columns.
# Create a date range starting at an ISO-formatted date (YYYYMMDD)
dates = pd.date_range('20130101', periods=6)
dates
# Create a dataframe using the date range we created above as the index
df = pd.DataFrame(np.random.randn(6,4), index=dates, columns=list('ABCD'))
df
# We can also mix text and numeric data with an automatically-generated index.
# +
dict = {"country": ["Brazil", "Russia", "India", "China", "South Africa"],
"capital": ["Brasilia", "Moscow", "New Dehli", "Beijing", "Pretoria"],
"area": [8.516, 17.10, 3.286, 9.597, 1.221],
"population": [200.4, 143.5, 1252, 1357, 52.98] }
brics = pd.DataFrame(dict)
brics
# -
# The numbers down the left-hand side of the table are called the index. This permits you to reference a specific row. However, Pandas permits you to set your own index, as we did where we set a date range index. You could set one of the existing columns as an index (as long as it consists of unique values) or you could set a new custom index.
# +
# Set the ISO two-letter country codes as the index
brics.index = ["BR", "RU", "IN", "CH", "SA"]
brics
# -
# #### Viewing data
#
# Pandas can work with exceptionally large datasets, including millions of rows. Presenting that takes up space and, if you only want to see what your data looks like (since, most of the time, you can work with it symbolically), then that can be painful. Fortunately, pandas comes with a number of ways of viewing and reviewing your data.
#
# <div class="alert alert-block alert-info">
# <b>Syntax</b>
# <br>
# <ul>
# <li>See the top and bottom rows of your dataframe with `df.head()` or `df.tail(num)` where `num` is an integer number of rows</li>
# <li>See the index, columns and underlying numpy data with `df.index`, `df.columns` and `df.values`</li>
# <li>Get a quick statistical summary of your data with `df.describe()`</li>
# <li>Transpose your data with `df.T`</li>
# <li>Sort by an axis with `df.sort_index(axis=1, ascending=False)` where `axis=1` refers to columns, and `axis=0` refers to rows</li>
# <li>Sort by values with `df.sort_values(by=column)`</li>
# </ul>
# </div>
# Head
df.head()
# Tail
df.tail(3)
# Index
df.index
# Values
df.values
# Statistical summary
df.describe()
# Transpose
df.T
# Sort by an axis
df.sort_index(axis=1, ascending=False)
# Sort by values
df.sort_values(by="B")
# #### Selections
#
# One of the first steps in data analysis is simply to filter your data and get at slices you're most interested in. Pandas has numerous approaches to quickly get only what you want.
# <div class="alert alert-block alert-info">
# <b>Syntax</b>
# <br>
# <ul>
# <li>Select a single column by addressing the dataframe as you would a dictionary, with `df[column]` or, if the column name is a single word, with `df.column`. This returns a series</li>
# <li>Select a slice in the way you would a Python list, with `df[]`, e.g. `df[:3]`, or by slicing the indices, `df["20130102":"20130104"]`</li>
# <li>Use `.loc` to select by specific labels, such as:</li>
# <ul>
# <li>Get a cross-section based on a label, with e.g. `df.loc[index[0]]`</li>
# <li>Get on multi-axis by a label, with `df.loc[:, ["A", "B"]]` where the first `:` indicates the slice of rows, and the second list `["A", "B"]` indicates the list of columns</li>
# </ul>
# <li>As you would with Numpy, you can get a boolean-based selection, with e.g. `df[df.A > num]`</li>
# </ul>
# </div>
#
# There are a _lot_ more ways to filter and access data, as well as methods to set data in your dataframes, but this will be enough for now.
# By column
df.A
# By slice
df["20130102":"20130104"]
# Cross-section
df.loc[dates[0]]
# Multi-axis
df.loc[:, ["A", "B"]]
# Boolean indexing
df[df.A > 0]
# ### Matplotlib
#
# In this last section, you get to meet _Matplotlib_, a fairly ubiquitous and powerful Python plotting library. Jupyter Notebook has some "magic" we can use in the line `%matplotlib inline` which permits us to draw charts directly in this notebook.
#
# Matplotlib, Numpy and Pandas form the three most important and ubiquitous tools in data analysis.
#
# Note that this is the merest slither of an introduction to what you can do with these libraries.
# +
import matplotlib.pyplot as plt
# This bit of magic code will allow your Matplotlib plots to be shown directly in your Jupyter Notebook.
# %matplotlib inline
# Produce a random timeseries
ts = pd.Series(np.random.randn(1000), index=pd.date_range('1/1/2000', periods=1000))
# Get the cumulative sum of the random numbers generated to mimic a historic data series
ts = ts.cumsum()
# And magically plot
ts.plot()
# +
# And do the same thing with a dataframe
df = pd.DataFrame(np.random.randn(1000, 4), index=ts.index,
columns=['A', 'B', 'C', 'D'])
df = df.cumsum()
# And plot, this time creating a figure and adding a plot and legend to it
plt.figure()
df.plot()
plt.legend(loc='best')
# -
# And that's it for this quick introduction to Python and its use in data analysis.
#
# <div class="alert alert-block alert-warning">
# <b>Module documentation:</b>
# <br>
# <ul>
# <li><a href="https://docs.scipy.org/doc/numpy-dev/user/quickstart.html">Numpy tutorial</a></li>
# <li><a href="https://pandas.pydata.org/pandas-docs/stable/10min.html">Pandas tutorial]</a></li>
# <li><a href="https://matplotlib.org/users/index.html">Matplotlib tutorial</a></li>
# <li><a href="https://stackoverflow.com/">Stack Overflow</a>: for when the documentation isn't enough, or is too complex, there's Stack Overflow. This is a community of about 50 million developers from around the world, some professional, some amateur, all passionate about what they do. Search the archives for questions that you're trying to answer that others have already solved, or ask your own.</li>
# </ul>
# </div>
#
# [(previous)](03%20-%20Python%20intermediate.ipynb) | [(index)](00%20-%20Introduction%20to%20Python.ipynb) | [(next)](05%20-%20Introduction%20to%20data%20as%20a%20science.ipynb)
|
6.Python-Tools-for-Data-Analysis.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
from dateutil import parser
date = parser.parse("4th of July,2015")
date
import numpy as np
date = np.array('2015-07-04',dtype=np.datetime64)
date
date + np.arange(13)
from pandas import Series
import pandas as pd
# Series.from_csv was removed in modern pandas; read_csv(...).squeeze gives a Series
series = pd.read_csv('female_birth.csv', header=0, index_col=0, parse_dates=True).squeeze("columns")
print(series.head())
import pandas as pd
series = pd.read_csv('female_birth.csv',header=0)
series.head()
import numpy as np
np.size(series,0)
len(series)
series.describe()
series.isnull().sum()
# # Explore date and time
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
# %matplotlib inline
from matplotlib.pyplot import rcParams
rcParams['figure.figsize']=15, 6
df = pd.read_csv('Airpassengers_Online.csv')
df.head()
df.dtypes
dateparse = lambda dates: pd.to_datetime(dates, format='%Y-%m')
df = pd.read_csv('Airpassengers_Online.csv',parse_dates=['Month'],index_col='Month',date_parser=dateparse)
df.head()
df.dtypes
df.index
# Count the Number of obeservation per time stamp
df.groupby(level=0).count()
df['1960']
df['1960-05']
from datetime import datetime
df[datetime(1960, 5, 1):]  # all the entries from this date onwards
df.resample('Y').mean()
df.resample('M').mean()
df.resample('Y').sum()
df.resample('Y').sum().plot()
plt.show()
# # Pre-Processing Time Series
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
df = pd.read_csv('italy_earthquakes_from_2016-08-24_to_2016-11-30.csv').set_index('Time')
df.head()
df.dtypes
df.index = pd.to_datetime(df.index)
df.head()
df.describe()
df['Magnitude'].resample('D').apply([np.mean]).plot() #average over every day
plt.title('Magnitude average every day')
plt.ylabel('Magnitude')
df['Magnitude'].resample('2D').apply([np.mean]).plot() #average over every two days
plt.title('Magnitude average every two days')
plt.ylabel('Magnitude')
# # Aggregating and Visualizing the Data Summary
from sklearn.model_selection import TimeSeriesSplit
data = pd.read_csv('avocado.csv',parse_dates=['Date'])
data.head()
summr = data.groupby('Date')['Total Volume'].mean().reset_index()
summr.head()
fig, ax = plt.subplots(1,1,figsize=(12,8))
summr.set_index('Date').plot(ax=ax,marker='o',linestyle='-',color='blue')
plt.show()
fig, ax = plt.subplots(1,1,figsize=(12,10))
(summr.set_index('Date')
.assign(month=lambda df:df.index.month)
.groupby('month')['Total Volume'].agg(["mean","std","median","min","max"])
.plot(ax=ax,marker="o"))
ax.set_xlabel('Month')
# # Temporal variations in two entities
df = pd.read_csv('italy_earthquakes_from_2016-08-24_to_2016-11-30.csv').set_index('Time')
df.head()
df.dtypes
df.index = pd.to_datetime(df.index) #Convert Time to date time
df.head()
df.isnull().sum()
df.isnull()
df.isnull().values.any()
depth_magn = df.where((df["Magnitude"]>=3.0)).dropna()[["Magnitude","Depth/Km"]]
depth_magn
dm = depth_magn.groupby(depth_magn.index.hour).mean()
# +
fig = plt.figure()
ax = fig.add_subplot(111)
ax.set_ylim([2.5,4.0])
ax.set_ylabel("Magnitude")
ax.set_xlabel("Hour of the day")
ax.yaxis.label.set_color("green")
ax2 = ax.twinx()
ax2.set_ylim([4.0,12])
ax2.set_ylabel("Depth/Km")
ax2.set_xlabel("Hour of the day")
ax2.yaxis.label.set_color("red")
width = 0.5
position = 0.5
dm["Depth/Km"].plot(kind="bar",color="red",ax=ax2)
dm["Magnitude"].plot(kind="bar",color='green',ax=ax)
plt.grid(False)
plt.title("Magnitude and Depth during the day")
#This graph is not right.
# +
fig = plt.figure()
ax = fig.add_subplot(111)
ax.set_ylim([2.5,4.0])
ax.set_ylabel("Magnitude")
ax.set_xlabel("Hour of the day")
ax.yaxis.label.set_color("green")
dm["Magnitude"].plot(kind="bar",color="green",ax=ax)
plt.grid(False)
plt.title("Magnitude during the day")
# +
fig = plt.figure()
ax = fig.add_subplot(111)
ax.set_ylim([4.0,12])
ax.set_ylabel("Depth/Km")
ax.set_xlabel("Hour of the day")
ax.yaxis.label.set_color("red")
dm["Depth/Km"].plot(kind="bar",color="red",ax=ax)
plt.grid(False)
plt.title("Depth/Km during the day")
# -
from statsmodels.tsa.api import ExponentialSmoothing
# # Moving Average
df = pd.read_csv('Airpassengers_Online.csv')
dateparse = lambda dates: pd.to_datetime(dates, format='%Y-%m')
df = pd.read_csv('Airpassengers_Online.csv',parse_dates=['Month'],index_col='Month',date_parser=dateparse)
df.index
plt.plot(df)
# # Rolling Average
#
# Rolling average means that, for each point, you take the average over a fixed window ending at that point (the current observation and the preceding window-1 observations)
p = df[['#Passengers']]
p.rolling(12).mean().plot(figsize=(20,10),linewidth=5,fontsize=20)
#Monthly Data
plt.xlabel('Year',fontsize=20)
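# To see exactly what `rolling(...).mean()` computes, here is a small sketch on a toy series: each output value is the mean of a trailing window, so the first window-1 entries are NaN.
#
```python
import pandas as pd

s = pd.Series([1, 2, 3, 4, 5])
r = s.rolling(3).mean()
# First two entries are NaN; then mean(1,2,3)=2.0, mean(2,3,4)=3.0, mean(3,4,5)=4.0
print(r.tolist())
```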
# # Seasonal Part of Time Series
data = pd.read_csv('Airpassengers_Online.csv', header=0, index_col=0, parse_dates=True).squeeze("columns")
data.head()
data.plot()
from statsmodels.tsa.seasonal import seasonal_decompose
result = seasonal_decompose(data,model='multiplicative')
result.plot()
plt.show()
# # ADF (Dickey Fuller test)
# Null Hypothesis (H0): If failed to be rejected(p>0.05), it means the time series has the unit root and the series is non stationary.
from statsmodels.tsa.stattools import adfuller
series = pd.read_csv('Airpassengers_Online.csv', header=0, index_col=0, parse_dates=True).squeeze("columns")
X = series.values
result = adfuller(X)
result
print('ADF Statistic: %f' % result[0])
print('p value: %f' % result[1]) #Hence Series is non-stationary. Value is greater than 0.05
print('Critical Values:')
for key, value in result[4].items():
print('\t%s: %.3f'%(key,value))
# # Make Time Series Stationary
# Reduce Trend
# Take log (or sqrt)
df = pd.read_csv('Airpassengers_Online.csv')
dateparse = lambda dates: pd.to_datetime(dates, format='%Y-%m')
df = pd.read_csv('Airpassengers_Online.csv',parse_dates=['Month'],index_col='Month',date_parser=dateparse)
df.head()
d_log = np.log(df) #Take log
plt.plot(d_log) #Taking the log will help to smooth out the data upto some level
# # Make Time Series Stationary : Simple Differencing
y = df['#Passengers']
X = y.values
result = adfuller(X)
result
# First Order Difference
# Y_t - Y_t-1
y_diff = np.diff(y)
y_diff.shape
y_diff = pd.Series(y_diff)
X = y_diff.values
result = adfuller(X)
result #Here p values decreases to 0.05 close to the Null Hypothesis
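# As a hedged aside, pandas' `Series.diff()` computes the same first-order difference as `np.diff`, but keeps the original (e.g. datetime) index, which is convenient for plotting or re-testing with `adfuller`. A small sketch on toy values:
#
```python
import numpy as np
import pandas as pd

y = pd.Series([112, 118, 132, 129],
              index=pd.date_range('1949-01', periods=4, freq='MS'))

d_np = np.diff(y.values)   # plain array, one element shorter, index lost
d_pd = y.diff().dropna()   # same values, original index preserved
print(list(d_np), d_pd.tolist())  # [6, 14, -3] [6.0, 14.0, -3.0]
```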
rolmean = df.rolling(window=12).mean()
rolstd = df.rolling(window=12).std()
print(rolmean,rolstd)
indexedDataset_logScale = np.log(df)
plt.plot(indexedDataset_logScale)
movingAverage = indexedDataset_logScale.rolling(window=12).mean()
movingStd = indexedDataset_logScale.rolling(window=12).std()
plt.plot(indexedDataset_logScale)
plt.plot(movingAverage,color='red')
#Moving Averege shows the upward trend
datasetLogScaleMinusMovingAverage = indexedDataset_logScale - movingAverage
datasetLogScaleMinusMovingAverage.head()
datasetLogScaleMinusMovingAverage.dropna(inplace=True)
datasetLogScaleMinusMovingAverage.head(12)
X = datasetLogScaleMinusMovingAverage.values
X.shape
def test_stationary(timeseries):
    dftest = adfuller(timeseries['#Passengers'], autolag='AIC')
    print('ADF Statistic: %f' % dftest[0])
    print('p value: %f' % dftest[1])
test_stationary(datasetLogScaleMinusMovingAverage) #Here p values drops to 0.22
|
Practice_Notebooks/Practice (Online + Class).ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] colab_type="text" id="iQjHqsmTAVLU"
# ## Exercise 3
# In the videos you looked at how you would improve Fashion MNIST using Convolutions. For your exercise see if you can improve MNIST to 99.8% accuracy or more using only a single convolutional layer and a single MaxPooling 2D. You should stop training once the accuracy goes above this amount. It should happen in less than 20 epochs, so it's ok to hard code the number of epochs for training, but your training must end once it hits the above metric. If it doesn't, then you'll need to redesign your layers.
#
# I've started the code for you -- you need to finish it!
#
# When 99.8% accuracy has been hit, you should print out the string "Reached 99.8% accuracy so cancelling training!"
#
# +
import tensorflow as tf
from os import path, getcwd, chdir
# DO NOT CHANGE THE LINE BELOW. If you are developing in a local
# environment, then grab mnist.npz from the Coursera Jupyter Notebook
# and place it inside a local folder and edit the path to that location
path = f"{getcwd()}/../tmp2/mnist.npz"
# -
config = tf.ConfigProto()
config.gpu_options.allow_growth = True
sess = tf.Session(config=config)
# +
# GRADED FUNCTION: train_mnist_conv
class myCallback(tf.keras.callbacks.Callback):
def on_epoch_end(self, epoch, logs={}):
if (logs.get('acc')>0.998):
print("\nAchieved accuracy 99.8% so cancelling training.")
self.model.stop_training = True
def train_mnist_conv():
# Please write your code only where you are indicated.
# please do not remove model fitting inline comments.
# YOUR CODE STARTS HERE
callbacks = myCallback()
# YOUR CODE ENDS HERE
mnist = tf.keras.datasets.mnist
(training_images, training_labels), (test_images, test_labels) = mnist.load_data(path=path)
# YOUR CODE STARTS HERE
training_images = training_images / 255.0
test_images = test_images / 255.0
training_images = training_images.reshape(60000, 28, 28, 1)
test_images = test_images.reshape(10000, 28, 28, 1)
# YOUR CODE ENDS HERE
model = tf.keras.models.Sequential([
# YOUR CODE STARTS HERE
tf.keras.layers.Conv2D(64, (3, 3), activation='relu', input_shape=(28, 28, 1)),
tf.keras.layers.MaxPooling2D(2, 2),
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(128, activation='relu'),
tf.keras.layers.Dense(10, activation='softmax')
# YOUR CODE ENDS HERE
])
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'])
model.summary()
# model fitting
history = model.fit(
# YOUR CODE STARTS HERE
training_images, training_labels, epochs=20, callbacks=[callbacks]
# YOUR CODE ENDS HERE
)
test_loss = model.evaluate(test_images, test_labels)
# model fitting
return history.epoch, history.history['acc'][-1]
# -
_, _ = train_mnist_conv()
# +
# Now click the 'Submit Assignment' button above.
# Once that is complete, please run the following two cells to save your work and close the notebook
# + language="javascript"
# <!-- Save the notebook -->
# IPython.notebook.save_checkpoint();
# + language="javascript"
# IPython.notebook.session.delete();
# window.onbeforeunload = null
# setTimeout(function() { window.close(); }, 1000);
|
1. Introduction to TensorFlow for Artificial Intelligence, Machine Learning, and Deep Learning/Week 3/mnist_above_99_accuracy.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] tags=["remove_cell"]
# # Quantum Counting
# -
# To understand this algorithm, it is important that you first understand both Grover’s algorithm and the quantum phase estimation algorithm. Whereas Grover’s algorithm attempts to find a solution to the Oracle, the quantum counting algorithm tells us how many of these solutions there are. This algorithm is interesting as it combines both quantum search and quantum phase estimation.
#
# ## Contents
#
# 1. [Overview](#overview)
# 1.1 [Intuition](#intuition)
# 1.2 [A Closer Look](#closer_look)
# 2. [The Code](#code)
# 2.1 [Initialising our Code](#init_code)
# 2.2 [The Controlled-Grover Iteration](#cont_grover)
# 2.3 [The Inverse QFT](#inv_qft)
# 2.4 [Putting it Together](#putting_together)
# 3. [Simulating](#simulating)
# 4. [Finding the Number of Solutions](#finding_m)
# 5. [Exercises](#exercises)
# 6. [References](#references)
# ## 1. Overview <a id='overview'></a>
# ### 1.1 Intuition <a id='intuition'></a>
# In quantum counting, we simply use the quantum phase estimation algorithm to find an eigenvalue of a Grover search iteration. You will remember that an iteration of Grover’s algorithm, $G$, rotates the state vector by $\theta$ in the $|\omega\rangle$, $|s’\rangle$ basis:
# 
#
# The percentage number of solutions in our search space affects the difference between $|s\rangle$ and $|s’\rangle$. For example, if there are not many solutions, $|s\rangle$ will be very close to $|s’\rangle$ and $\theta$ will be very small. It turns out that the eigenvalues of the Grover iterator are $e^{\pm i\theta}$, and we can extract this using quantum phase estimation (QPE) to estimate the number of solutions ($M$).
# ### 1.2 A Closer Look <a id='closer_look'></a>
# In the $|\omega\rangle$,$|s’\rangle$ basis we can write the Grover iterator as the matrix:
#
# $$
# G =
# \begin{pmatrix}
# \cos{\theta} && -\sin{\theta}\\
# \sin{\theta} && \cos{\theta}
# \end{pmatrix}
# $$
#
# The matrix $G$ has eigenvectors:
#
# $$
# \begin{pmatrix}
# -i\\
# 1
# \end{pmatrix}
# ,
# \begin{pmatrix}
# i\\
# 1
# \end{pmatrix}
# $$
#
# With the aforementioned eigenvalues $e^{\pm i\theta}$. Fortunately, we do not need to prepare our register in either of these states, the state $|s\rangle$ is in the space spanned by $|\omega\rangle$, $|s’\rangle$, and thus is a superposition of the two vectors.
# $$
# |s\rangle = \alpha |\omega\rangle + \beta|s'\rangle
# $$
#
# As a result, the output of the QPE algorithm will be a superposition of the two phases, and when we measure the register we will obtain one of these two values! We can then use some simple maths to get our estimate of $M$.
#
# 
#
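As a quick numerical sanity check, independent of any quantum circuit, the 2×2 rotation matrix $G$ above really does have eigenvalues $e^{\pm i\theta}$ (θ = 0.7 here is an arbitrary illustrative angle):

```python
import numpy as np

theta = 0.7  # arbitrary illustrative angle
G = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
eigvals = np.linalg.eigvals(G)
expected = np.array([np.exp(1j * theta), np.exp(-1j * theta)])
print(np.allclose(np.sort_complex(eigvals), np.sort_complex(expected)))  # True
```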
# ## 2. The Code <a id='code'></a>
# ### 2.1 Initialising our Code <a id='init_code'></a>
#
# First, let’s import everything we’re going to need:
# + tags=["thebelab-init"]
import matplotlib.pyplot as plt
import numpy as np
import math
# importing Qiskit
import qiskit
from qiskit import IBMQ, Aer
from qiskit import QuantumCircuit, transpile, assemble
# import basic plot tools
from qiskit.visualization import plot_histogram
# -
# In this guide we will choose to ‘count’ on the first 4 qubits of our circuit (we call the number of counting qubits $t$, so $t = 4$), and to 'search' through the last 4 qubits ($n = 4$). With this in mind, we can start creating the building blocks of our circuit.
# ### 2.2 The Controlled-Grover Iteration <a id='cont_grover'></a>
# We have already covered Grover iterations in the Grover’s algorithm section. Here is an example with an Oracle we know has 5 solutions ($M = 5$) of 16 states ($N = 2^n = 16$), combined with a diffusion operator:
# + tags=["thebelab-init"]
def example_grover_iteration():
"""Small circuit with 5/16 solutions"""
# Do circuit
qc = QuantumCircuit(4)
# Oracle
qc.h([2,3])
qc.ccx(0,1,2)
qc.h(2)
qc.x(2)
qc.ccx(0,2,3)
qc.x(2)
qc.h(3)
qc.x([1,3])
qc.h(2)
qc.mct([0,1,3],2)
qc.x([1,3])
qc.h(2)
# Diffuser
qc.h(range(3))
qc.x(range(3))
qc.z(3)
qc.mct([0,1,2],3)
qc.x(range(3))
qc.h(range(3))
qc.z(3)
return qc
# -
# Notice the python function takes no input and returns a `QuantumCircuit` object with 4 qubits. In the past the functions you created might have modified an existing circuit, but a function like this allows us to turn the `QuantumCircuit` object into a single gate we can then control.
#
# We can use `.to_gate()` and `.control()` to create a controlled gate from a circuit. We will call our Grover iterator `grit` and the controlled Grover iterator `cgrit`:
# + tags=["thebelab-init"]
# Create controlled-Grover
grit = example_grover_iteration().to_gate()
cgrit = grit.control()
cgrit.label = "Grover"
# -
# ### 2.3 The Inverse QFT <a id='inv_qft'></a>
# We now need to create an inverse QFT. This code implements the QFT on n qubits:
# + tags=["thebelab-init"]
def qft(n):
"""Creates an n-qubit QFT circuit"""
    circuit = QuantumCircuit(n)
def swap_registers(circuit, n):
for qubit in range(n//2):
circuit.swap(qubit, n-qubit-1)
return circuit
def qft_rotations(circuit, n):
"""Performs qft on the first n qubits in circuit (without swaps)"""
if n == 0:
return circuit
n -= 1
circuit.h(n)
for qubit in range(n):
circuit.cp(np.pi/2**(n-qubit), qubit, n)
qft_rotations(circuit, n)
qft_rotations(circuit, n)
swap_registers(circuit, n)
return circuit
# -
# Again, note we have chosen to return another `QuantumCircuit` object, this is so we can easily invert the gate. We create the gate with t = 4 qubits as this is the number of counting qubits we have chosen in this guide:
# + tags=["thebelab-init"]
qft_dagger = qft(4).to_gate().inverse()
qft_dagger.label = "QFT†"
# -
# ### 2.4 Putting it Together <a id='putting_together'></a>
#
# We now have everything we need to complete our circuit! Let’s put it together.
#
# First we need to put all qubits in the $|+\rangle$ state:
# +
# Create QuantumCircuit
t = 4 # no. of counting qubits
n = 4 # no. of searching qubits
qc = QuantumCircuit(n+t, t) # Circuit with n+t qubits and t classical bits
# Initialise all qubits to |+>
for qubit in range(t+n):
qc.h(qubit)
# Begin controlled Grover iterations
iterations = 1
for qubit in range(t):
for i in range(iterations):
qc.append(cgrit, [qubit] + [*range(t, n+t)])
iterations *= 2
# Do inverse QFT on counting qubits
qc.append(qft_dagger, range(t))
# Measure counting qubits
qc.measure(range(t), range(t))
# Display the circuit
qc.draw()
# -
# Great! Now let’s see some results.
# ## 3. Simulating <a id='simulating'></a>
# Execute and see results
qasm_sim = Aer.get_backend('qasm_simulator')
transpiled_qc = transpile(qc, qasm_sim)
qobj = assemble(transpiled_qc)
job = qasm_sim.run(qobj)
hist = job.result().get_counts()
plot_histogram(hist)
# We can see two values stand out, having a much higher probability of measurement than the rest. These two values correspond to $e^{i\theta}$ and $e^{-i\theta}$, but we can’t see the number of solutions yet. We need a little more processing to get this information, so first let us get our output into something we can work with (an `int`).
#
# We will get the string of the most probable result from our output data:
measured_str = max(hist, key=hist.get)
# Let us now store this as an integer:
measured_int = int(measured_str,2)
print("Register Output = %i" % measured_int)
# ## 4. Finding the Number of Solutions (M) <a id='finding_m'></a>
#
# We will create a function, `calculate_M()` that takes as input the decimal integer output of our register, the number of counting qubits ($t$) and the number of searching qubits ($n$).
#
# First we want to get $\theta$ from `measured_int`. You will remember that QPE gives us a measured $\text{value} = 2^n \phi$ from the eigenvalue $e^{2\pi i\phi}$, so to get $\theta$ we need to do:
#
# $$
# \theta = \text{value}\times\frac{2\pi}{2^t}
# $$
#
# Or, in code:
theta = (measured_int/(2**t))*math.pi*2
print("Theta = %.5f" % theta)
# You may remember that we can get the angle $\theta/2$ from the inner product of $|s\rangle$ and $|s’\rangle$:
#
# 
#
# $$
# \langle s'|s\rangle = \cos{\tfrac{\theta}{2}}
# $$
#
# And that $|s\rangle$ (a uniform superposition of computational basis states) can be written in terms of $|\omega\rangle$ and $|s'\rangle$ as:
#
# $$
# |s\rangle = \sqrt{\tfrac{M}{N}}|\omega\rangle + \sqrt{\tfrac{N-M}{N}}|s'\rangle
# $$
#
# The inner product of $|s\rangle$ and $|s'\rangle$ is:
#
# $$
# \langle s'|s\rangle = \sqrt{\frac{N-M}{N}} = \cos{\tfrac{\theta}{2}}
# $$
#
# From this, we can use some trigonometry and algebra to show:
#
# $$
# N\sin^2{\frac{\theta}{2}} = M
# $$
#
# From the [Grover's algorithm](https://qiskit.org/textbook/ch-algorithms/grover.html) chapter, you will remember that a common way to create a diffusion operator, $U_s$, is actually to implement $-U_s$. This implementation is used in the Grover iteration provided in this chapter. In a normal Grover search, this phase is global and can be ignored, but now we are controlling our Grover iterations, this phase does have an effect. The result is that we have effectively searched for the states that are _not_ solutions, and our quantum counting algorithm will tell us how many states are _not_ solutions. To fix this, we simply calculate $N-M$.
#
# And in code:
N = 2**n
M = N * (math.sin(theta/2)**2)
print("No. of Solutions = %.1f" % (N-M))
# And we can see we have (approximately) the correct answer! We can approximately calculate the error in this answer using:
m = t - 1 # Upper bound: Will be less than this
err = (math.sqrt(2*M*N) + N/(2**(m-1)))*(2**(-m))
print("Error < %.2f" % err)
# Explaining the error calculation is outside the scope of this article, but an explanation can be found in [1].
#
# Finally, here is the finished function `calculate_M()`:
def calculate_M(measured_int, t, n):
"""For Processing Output of Quantum Counting"""
# Calculate Theta
theta = (measured_int/(2**t))*math.pi*2
print("Theta = %.5f" % theta)
# Calculate No. of Solutions
N = 2**n
M = N * (math.sin(theta/2)**2)
print("No. of Solutions = %.1f" % (N-M))
# Calculate Upper Error Bound
m = t - 1 #Will be less than this (out of scope)
err = (math.sqrt(2*M*N) + N/(2**(m-1)))*(2**(-m))
print("Error < %.2f" % err)
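The relations above can also be sanity-checked with plain math, no circuit required. Ignoring the $N-M$ sign issue, and using hypothetical helper names for illustration, round-tripping a known $M$ through an idealised register value recovers it to within the register's precision:

```python
import math

def ideal_register_value(M, t, n):
    """Counting-register value QPE would ideally output for M solutions
    among N = 2**n states (hypothetical helper, for illustration only)."""
    N = 2 ** n
    theta = 2 * math.asin(math.sqrt(M / N))   # from sin^2(theta/2) = M/N
    return round(theta * (2 ** t) / (2 * math.pi))

def recover_M(measured_int, t, n):
    theta = (measured_int / (2 ** t)) * 2 * math.pi
    return (2 ** n) * math.sin(theta / 2) ** 2

print(recover_M(ideal_register_value(5, 4, 4), 4, 4))  # ~4.94, close to M = 5
```

The residual error comes from rounding θ to one of the $2^t$ register values, which is exactly the precision limit discussed above.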
# ## 5. Exercises <a id='exercises'></a>
#
# 1. Can you create an oracle with a different number of solutions? How does the accuracy of the quantum counting algorithm change?
# 2. Can you adapt the circuit to use more or less counting qubits to get a different precision in your result?
#
# ## 6. References <a id='references'></a>
#
# [1] <NAME> and <NAME>. 2011. Quantum Computation and Quantum Information: 10th Anniversary Edition (10th ed.). Cambridge University Press, New York, NY, USA.
import qiskit
qiskit.__qiskit_version__
|
content/ch-algorithms/quantum-counting.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernel_info:
# name: python3-azureml
# kernelspec:
# display_name: Python 3.6 - AzureML
# language: python
# name: python3-azureml
# ---
# +
import numpy as np
import torch
import time
import pandas as pd
from tqdm.notebook import tqdm
import iisignature
import sigkernel
import matplotlib.pyplot as plt
import matplotlib.pylab as pylab
params = {'legend.fontsize': 14,
'figure.figsize': (16, 16),
'axes.labelsize': 14,
'axes.titlesize': 14,
'xtick.labelsize': 14,
'ytick.labelsize': 14}
pylab.rcParams.update(params)
# -
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
# # Dependence on length of paths
# +
t_pde_cpu = []
t_pde_gpu = []
lengths = range(100, 500, 10)
N, dim = 5, 2
tol = 1e-3
for l in tqdm(lengths):
# generate paths
x = np.array([sigkernel.brownian(l-1, dim).astype(np.float64) for n in range(N)])
y = np.array([sigkernel.brownian(l-1, dim).astype(np.float64) for n in range(N)])
x_cpu = torch.tensor(x, dtype=torch.float64, device='cpu')
y_cpu = torch.tensor(y, dtype=torch.float64, device='cpu')
x_gpu = torch.tensor(x, dtype=torch.float64, device='cuda')
y_gpu = torch.tensor(y, dtype=torch.float64, device='cuda')
# true answer
# sig_x = iisignature.sig(x, 12)
# sig_y = iisignature.sig(y, 12)
# true_answer = np.array([1. + np.dot(s_x,s_y) for s_x,s_y in zip(sig_x,sig_y)])
static_kernel = sigkernel.LinearKernel()
signature_kernel = sigkernel.SigKernel(static_kernel, dyadic_order=5)
true_answer = signature_kernel.compute_kernel(x_cpu, y_cpu).numpy()
# sig PDE CPU
k = 1e8*np.ones(N)
o = -1
while (np.abs(k - true_answer) > tol).all():
o += 1
signature_kernel = sigkernel.SigKernel(static_kernel, dyadic_order=o)
k = signature_kernel.compute_kernel(x_cpu, y_cpu).numpy()
t = time.time()
signature_kernel = sigkernel.SigKernel(static_kernel, dyadic_order=o)
_ = signature_kernel.compute_kernel(x_cpu, y_cpu).numpy()
t_pde_cpu.append(time.time()-t)
# sig PDE GPU
k = 1e8*np.ones(N)
o = -1
while (np.abs(k - true_answer) > tol).all():
o += 1
signature_kernel = sigkernel.SigKernel(static_kernel, dyadic_order=o)
k = signature_kernel.compute_kernel(x_gpu, y_gpu).cpu().numpy()
t = time.time()
signature_kernel = sigkernel.SigKernel(static_kernel, dyadic_order=o)
_ = signature_kernel.compute_kernel(x_gpu, y_gpu).cpu().numpy()
t_pde_gpu.append(time.time()-t)
# +
t_pde_cpu_ = []
t_pde_gpu_ = []
dims = range(10, 3000, 50)
N, l = 5, 10
tol = 1e-3
for dim in tqdm(dims):
# generate paths
x = np.array([sigkernel.brownian(l-1, dim).astype(np.float64) for n in range(N)])
y = np.array([sigkernel.brownian(l-1, dim).astype(np.float64) for n in range(N)])
x_cpu = torch.tensor(x, dtype=torch.float64, device='cpu')
y_cpu = torch.tensor(y, dtype=torch.float64, device='cpu')
x_gpu = torch.tensor(x, dtype=torch.float64, device='cuda')
y_gpu = torch.tensor(y, dtype=torch.float64, device='cuda')
# true answer
static_kernel = sigkernel.LinearKernel()
signature_kernel = sigkernel.SigKernel(static_kernel, dyadic_order=5)
true_answer = signature_kernel.compute_kernel(x_cpu, y_cpu).numpy()
# sig PDE CPU
k = 1e8*np.ones(N)
o = -1
while (np.abs(k - true_answer) > tol).all():
o += 1
signature_kernel = sigkernel.SigKernel(static_kernel, dyadic_order=o)
k = signature_kernel.compute_kernel(x_cpu, y_cpu).numpy()
t = time.time()
signature_kernel = sigkernel.SigKernel(static_kernel, dyadic_order=o)
_ = signature_kernel.compute_kernel(x_cpu, y_cpu).numpy()
t_pde_cpu_.append(time.time()-t)
# sig PDE GPU
k = 1e8*np.ones(N)
o = -1
while (np.abs(k - true_answer) > tol).all():
o += 1
signature_kernel = sigkernel.SigKernel(static_kernel, dyadic_order=o)
k = signature_kernel.compute_kernel(x_gpu, y_gpu).cpu().numpy()
t = time.time()
signature_kernel = sigkernel.SigKernel(static_kernel, dyadic_order=o)
_ = signature_kernel.compute_kernel(x_gpu, y_gpu).cpu().numpy()
t_pde_gpu_.append(time.time()-t)
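One subtlety when timing GPU work with `time.time()` is that CUDA kernels launch asynchronously; here the `.cpu().numpy()` call forces a device synchronisation, so the measured windows are meaningful. A reusable pattern makes the synchronisation explicit — this is a sketch with an injectable `sync` hook (pass `torch.cuda.synchronize` when timing CUDA code), not part of the sigkernel API:

```python
import time

def timed(fn, *args, sync=None):
    """Run fn(*args) and return (result, elapsed seconds); call sync()
    before and after the work so queued async kernels don't escape the
    measured window."""
    if sync is not None:
        sync()
    t0 = time.time()
    out = fn(*args)
    if sync is not None:
        sync()
    return out, time.time() - t0

result, elapsed = timed(sum, range(1_000_000))
print(result, elapsed >= 0.0)
```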
# +
fig, ax = plt.subplots(1,2,figsize=(14,5))
ax=plt.subplot(1, 2, 1)
ax.plot(lengths, t_pde_cpu, label='signature PDE kernel on CPU', ls='--', marker='v', markersize=6, c='b')
ax.plot(lengths, t_pde_gpu, label='signature PDE kernel on GPU', ls='--', marker='s', markersize=6, c='r')
ax.set_title('Dependence of time complexity on length')
ax.set_xlabel('Number of time steps')
ax.set_ylabel('Elapsed time (s)')
ax.legend()
# ax.set_ylim(0,2.5)
ax.legend(loc="upper left")
# ax.grid(True)
ax=plt.subplot(1, 2, 2)
ax.plot(dims, t_pde_cpu_, label='signature PDE kernel on CPU', ls='--', marker='v', markersize=5, c='b')
ax.plot(dims, t_pde_gpu_, label='signature PDE kernel on GPU', ls='--', marker='s', markersize=5, c='r')
ax.set_title('Dependence of time complexity on dimension')
ax.set_xlabel('Number of channels')
ax.set_ylabel('Elapsed time (s)')
ax.legend()
# ax.set_ylim(0,2.5)
ax.legend(loc="upper left")
# ax.grid(True)
plt.subplots_adjust(wspace=0.2, hspace=0.4)
plt.tight_layout()
plt.savefig('../pictures/perfomance.png')
plt.show()
# -
|
examples/performance_analysis.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# # Module 3 Practice 1
# ## Conditionals
# <font size="5" color="#00A0B2" face="verdana"> <B>Student will be able to</B></font>
# - **control code flow with `if`... `else` conditional logic**
# - using Boolean string methods (`.isupper(), .isalpha(), .startswith()...`)
# - using comparision (`>, <, >=, <=, ==, !=`)
# - using Strings in comparisons
#
# ## `if else`
#
# +
# [ ] input a variable: age as digit and cast to int
age = input('what is your age? ')
# if age greater than or equal to 12 then print message on age in 10 years
if int(age) >= 12:
print('your age in 10 years will be ', int(age) + 10)
# or else print message "It is good to be" age
else:
print('It is good to be ', age)
# +
# [ ] input a number
number = input('enter a number ')
# if number IS a digit string then cast to int
# print number "greater than 100 is" True/False
# if number is NOT a digit string then message the user that "only int is accepted"
if number.isdigit():
    print(number, 'greater than 100 is', int(number) > 100)
else:
    print('only int is accepted')
# -
# ### Guessing a letter A-Z
# **check_guess()** takes 2 string arguments: **letter and guess** (both expect single alphabetical character)
# - if guess is not an alpha character print invalid and return False
# - test and print if guess is "high" or "low" and return False
# - test and print if guess is "correct" and return True
# +
# [ ] create check_guess()
def check_guess(letter='a', guess=None):
    if guess is None:  # prompt only when no guess is supplied
        guess = input('guess a single alphabetical character ')
    if not guess.isalpha():
        return 'invalid: enter an alphabetical character'
    elif guess == letter:
        return 'that guess is correct'
    elif guess > letter:
        return 'that guess is high'
    else:
        return 'that guess is low'
# call with test
check_guess()
# -
# [ ] call check_guess with user input
check_guess('b')
# ### Letter Guess
# **create letter_guess() function that gives user 3 guesses**
# - takes a letter character argument for the answer letter
# - gets user input for letter guess
# - calls check_guess() with answer and guess
# - End letter_guess if
# - check_guess() equals True, return True
# - or after 3 failed attempts, return False
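One possible non-interactive sketch of the 3-attempt logic described above — the `ask` parameter is injectable so the loop can be exercised without a real user; the names here are illustrative, not the graded solution:

```python
def three_guesses(answer, ask=input):
    """Give the user up to 3 guesses; return True on a correct guess,
    False after 3 failed attempts."""
    for attempt in range(1, 4):
        guess = ask('guess %d of 3: ' % attempt).lower()
        if guess == answer:
            print('correct!')
            return True
        print('high' if guess > answer else 'low')
    return False

# Scripted "user" for demonstration:
answers = iter(['x', 'c', 'b'])
print(three_guesses('b', ask=lambda prompt: next(answers)))  # True on the 3rd try
```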
# +
# [ ] create letter_guess() function, call the function to test
def letter_guess(guess_num=1):
    guess = input('guess the letter (attempt %d of 3) ' % guess_num).lower()
    if check_guess(guess=guess) == 'that guess is correct':
        print('good job, you got it right!')
        return True
    elif guess_num >= 3:
        print('sorry, you got it wrong three times')
        return False
    else:
        print(check_guess(guess=guess))
        return letter_guess(guess_num + 1)
letter_guess()
# -
# ### Pet Conversation
# **ask the user for a sentence about a pet and then reply**
# - get user input in variable: about_pet
# - using a series of **if** statements respond with appropriate conversation
# - check if "dog" is in the string about_pet (sample reply "Ah, a dog")
# - check if "cat" is in the string about_pet
# - check if 1 or more animal is in string about_pet
# - no need for **else**'s
# - finish with thanking for the story
# [ ] complete pet conversation
about_pet = input('tell me a sentence about your pet: ').lower()
if 'dog' in about_pet:
    print('Ah, a dog')
if 'cat' in about_pet:
    print('Ah, a cat')
if 'bird' in about_pet:
    print('Ah, a bird')
print('thanks for the story!')
# # Module 3 Practice 2
# ## conditionals, type, and mathematics extended
#
# <font size="5" color="#00A0B2" face="verdana"> <B>Student will be able to</B></font>
# - code more than two choices using **`elif`**
# - gather numeric input using type casting
# - perform subtraction, multiplication and division operations in code
#
# #
# <font size="6" color="#B24C00" face="verdana"> <B>Tasks</B></font>
# ### Rainbow colors
# ask for input of a favorite rainbow color first letter: ROYGBIV
#
# Using `if`, `elif`, and `else`:
# - print the color matching the letter
# - R = Red
# - O = Orange
# - Y = Yellow
# - G = Green
# - B = Blue
# - I = Indigo
# - V = Violet
# - else print "no match"
#
# +
# [ ] complete rainbow colors
color = input('what is your favorite rainbow color ').lower()
if color == "r":
print('Red')
elif color == "o":
print('Orange')
elif color == "y":
print('Yellow')
elif color == "g":
print('Green')
elif color == 'b':
    print('Blue')
elif color == 'i':
    print('Indigo')
elif color == 'v':
print('Violet')
else:
print('no match')
# +
# [ ] make the code above into a function rainbow_color() that has a string parameter,
def rainbow_color(color = 'r'):
if color == "r":
print('Red')
elif color == "o":
print('Orange')
elif color == "y":
print('Yellow')
elif color == "g":
print('Green')
    elif color == 'b':
        print('Blue')
    elif color == 'i':
        print('Indigo')
elif color == 'v':
print('Violet')
else:
return('no match')
# get input and call the function and return the matching color as a string or "no match" message.
color_input = input('what is your favorite rainbow color ').lower()
# Call the function and print the return string.
rainbow_color(color_input)
# -
# #
# **Create function age_20() that adds or subtracts 20 from your age for a return value based on current age** (use `if`)
# - call the funtion with user input and then use the return value in a sentence
# example `age_20(25)` returns **5**:
# > "5 years old, 20 years difference from now"
# [ ] complete age_20()
def age_20(age='25'):
    if int(age) >= 20:
        result = int(age) - 20
    else:
        result = int(age) + 20
    print('%d years old, 20 years difference from now' % result)
    return result
age_input = input('enter your age ')
age_20(age_input)
# **create a function rainbow_or_age that takes a string argument**
# - if argument is a digit return the value of calling age_20() with the str value cast as **`int`**
# - if argument is an alphabetical character return the value of calling rainbow_color() with the str
# - if neither return FALSE
# [ ] create rainbow_or_age()
def rainbow_or_age(value):
    if value.isdigit():
        return age_20(int(value))
    elif value.isalpha():
        return rainbow_color(value)
    else:
        return False
rainbow_input = input('Enter your age or the first letter of your favorite color ')
rainbow_or_age(rainbow_input)
# [ ] add 2 numbers from input using a cast to integer and display the answer
first_number = input('enter a number')
second_number = input('enter a second number')
print(int(first_number) + int(second_number))
# [ ] Multiply 2 numbers from input using cast and save the answer as part of a string "the answer is..."
# display the string using print
first_number = input('enter a number')
second_number = input('enter a second number')
print('the answer is', int(first_number) * int(second_number))
# [ ] get input of 2 numbers and display the average: (num1 + num2) divided by 2
first_number = input('enter a number')
second_number = input('enter a second number')
print((int(first_number) + int(second_number))/2)
# [ ] get input of 2 numbers and subtract the largest from the smallest (use an if statement to see which is larger)
# show the answer
first_number = input('enter a number')
second_number = input('enter a second number')
if int(first_number) > int(second_number):
print(int(first_number) - int(second_number))
elif int(first_number) < int(second_number):
print(int(second_number) - int(first_number))
else:
print('please reenter the numbers')
# +
# [ ] Divide a larger number by a smaller number and print the integer part of the result
# don't divide by zero! if a zero is input make the result zero
first_number = int(input('enter a number '))
second_number = int(input('enter a second number '))
# [ ] cast the answer to an integer to cut off the decimals and print the result
if first_number == 0 or second_number == 0:
    print(0)
elif first_number >= second_number:
    print(int(first_number / second_number))
else:
    print(int(second_number / first_number))
# -
# [Terms of use](http://go.microsoft.com/fwlink/?LinkID=206977) [Privacy & cookies](https://go.microsoft.com/fwlink/?LinkId=521839) © 2017 Microsoft
|
Python Absolute Beginner/Module_3.1_Practice.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
from intercode import AutoencoderLinearDecoder, train_autoencoder, add_annotations
import torch
import scanpy as sc
import numpy as np
import slalom as sl
# preprocessed subset of Macosko15 retina dataset (log transformed, highly variable genes)
# with reactome annotations in adata.varm['I']
# download dataset - https://drive.google.com/open?id=1LToCFkHytmryebra2cRVVBcKTKvlTkCI
adata = sc.read('mouse_retina_sbs.h5ad')
add_annotations(adata, files=['c2.cp.reactome.v4.0.symbols.gmt', 'astrocytes.csv'], min_genes=13)
# remove unannotated genes, center
select_genes = adata.varm['I'].sum(1)>0
adata._inplace_subset_var(select_genes)
adata.X-=adata.X.mean(0)
# +
LR = 0.001
BATCH_SIZE = 62
N_EPOCHS = 40
# regularization hyperparameters
# lambda0 - page 19 of presentation
# lambdas 1-3 - last term on page 20
LAMBDA0 = 0.1
LAMBDA1 = 0.95*LR
LAMBDA3 = 0.55*LR
# -
autoencoder = AutoencoderLinearDecoder(adata.n_vars, n_ann=len(adata.uns['terms']))
# n_vars - number of genes in the dataset
# n_ann - number of annotated terms, corresponding l1 regularization hyperparameter - lambda1
# n_sparse - number of sparse terms, corresponding l1 regularization hyperparameter - lambda2
# n_dense - number of dense terms
train_autoencoder(adata, autoencoder, LR, BATCH_SIZE, N_EPOCHS,
l2_reg_lambda0=LAMBDA0, lambda1=LAMBDA1, lambda3=LAMBDA3)
autoencoder.decoder.weight_dict['annotated'].data.norm(p=2, dim=0)
select_terms = lambda t1, t2: np.where(np.logical_or(adata.uns['terms']==t1, adata.uns['terms']==t2))[0]
terms = select_terms('REGULATION_OF_INSULIN_SECRETIO', 'ASTROCYTES')
W = autoencoder.decoder.weight_dict['annotated'].data.numpy()
(np.abs(W[:, terms])>0).sum(0)
autoencoder.decoder.weight_dict['annotated'].data.norm(p=2, dim=0)[terms]
adata.varm['I'][:, terms].sum(0)
encoded, decoded = autoencoder(torch.from_numpy(adata.X))
vars_latent = encoded[:, terms].data.numpy()
fg = sl.plotFactors(terms=['REGULATION_OF_INSULIN_SECRETIO', 'ASTROCYTES'], X=vars_latent, lab=adata.obs['cell_type'], isCont=False)
|
notebooks/example-Macosko15.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
# ## Base imports
# +
# %matplotlib inline
from __future__ import print_function, division
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
sns.set()
import torch
from torch import nn, optim
from torch.autograd import Variable
from torch.optim import Optimizer
import collections
import h5py, sys
import gzip
import os
import math
try:
import cPickle as pickle
except:
import pickle
# -
# ## Some utility functions
# +
def mkdir(paths):
if not isinstance(paths, (list, tuple)):
paths = [paths]
for path in paths:
if not os.path.isdir(path):
os.makedirs(path)
suffixes = ['B', 'KB', 'MB', 'GB', 'TB', 'PB']
def humansize(nbytes):
i = 0
while nbytes >= 1024 and i < len(suffixes)-1:
nbytes /= 1024.
i += 1
f = ('%.2f' % nbytes)
return '%s%s' % (f, suffixes[i])
def get_num_batches(nb_samples, batch_size, roundup=True):
    if roundup:
        return (nb_samples + (-nb_samples % batch_size)) // batch_size  # roundup (ceil) division
    else:
        return nb_samples // batch_size
def generate_ind_batch(nb_samples, batch_size, random=True, roundup=True):
if random:
ind = np.random.permutation(nb_samples)
else:
ind = range(int(nb_samples))
for i in range(int(get_num_batches(nb_samples, batch_size, roundup))):
yield ind[i * batch_size: (i + 1) * batch_size]
def to_variable(var=(), cuda=True, volatile=False):
out = []
for v in var:
if isinstance(v, np.ndarray):
v = torch.from_numpy(v).type(torch.FloatTensor)
if not v.is_cuda and cuda:
v = v.cuda()
if not isinstance(v, Variable):
v = Variable(v, volatile=volatile)
out.append(v)
return out
def cprint(color, text, **kwargs):
if color[0] == '*':
pre_code = '1;'
color = color[1:]
else:
pre_code = ''
code = {
'a': '30',
'r': '31',
'g': '32',
'y': '33',
'b': '34',
'p': '35',
'c': '36',
'w': '37'
}
print("\x1b[%s%sm%s\x1b[0m" % (pre_code, code[color], text), **kwargs)
sys.stdout.flush()
def shuffle_in_unison_scary(a, b):
rng_state = np.random.get_state()
np.random.shuffle(a)
np.random.set_state(rng_state)
np.random.shuffle(b)
import torch.utils.data as data
from PIL import Image
import numpy as np
import h5py
# -
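The roundup division in `get_num_batches` above relies on Python's negative-modulo behaviour: `-n % batch_size` yields exactly the padding needed to round up. A self-contained check of that arithmetic (note the integer `//` is required here because `from __future__ import division` makes `/` return a float):

```python
def ceil_batches(n, batch_size):
    # -n % batch_size adds exactly the padding needed to round up
    return (n + (-n % batch_size)) // batch_size

print(ceil_batches(10, 4), ceil_batches(12, 4), ceil_batches(1, 4))  # 3 3 1
```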
# ## Dataloader functions
# +
class Datafeed(data.Dataset):
def __init__(self, x_train, y_train, transform=None):
self.x_train = x_train
self.y_train = y_train
self.transform = transform
def __getitem__(self, index):
img = self.x_train[index]
if self.transform is not None:
img = self.transform(img)
return img, self.y_train[index]
def __len__(self):
return len(self.x_train)
class DatafeedImage(data.Dataset):
def __init__(self, x_train, y_train, transform=None):
self.x_train = x_train
self.y_train = y_train
self.transform = transform
def __getitem__(self, index):
img = self.x_train[index]
img = Image.fromarray(np.uint8(img))
if self.transform is not None:
img = self.transform(img)
return img, self.y_train[index]
def __len__(self):
return len(self.x_train)
# -
# ## Base network wrapper
import torch.nn.functional as F
class BaseNet(object):
def __init__(self):
cprint('c', '\nNet:')
    def get_nb_parameters(self):
        return sum(p.numel() for p in self.model.parameters())
def set_mode_train(self, train=True):
if train:
self.model.train()
else:
self.model.eval()
def update_lr(self, epoch, gamma=0.99):
self.epoch += 1
if self.schedule is not None:
if len(self.schedule) == 0 or epoch in self.schedule:
self.lr *= gamma
                print('learning rate: %f (%d)\n' % (self.lr, epoch))
for param_group in self.optimizer.param_groups:
param_group['lr'] = self.lr
def save(self, filename):
        cprint('c', 'Writing %s\n' % filename)
torch.save({
'epoch': self.epoch,
'lr': self.lr,
'model': self.model,
'optimizer': self.optimizer}, filename)
def load(self, filename):
cprint('c', 'Reading %s\n' % filename)
state_dict = torch.load(filename)
self.epoch = state_dict['epoch']
self.lr = state_dict['lr']
self.model = state_dict['model']
self.optimizer = state_dict['optimizer']
print(' restoring epoch: %d, lr: %f' % (self.epoch, self.lr))
return self.epoch
# ## MC dropout layer
def MC_dropout(act_vec, p=0.5, mask=True):
    return F.dropout(act_vec, p=p, training=mask, inplace=True)
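The layer above keeps dropout stochastic at test time whenever `mask=True`. A NumPy-only sketch (not the model code) of inverted dropout shows that averaging many stochastic passes recovers the deterministic activation:

```python
import numpy as np

# Inverted dropout: drop each unit with prob p, rescale survivors by 1/(1-p)
rng = np.random.default_rng(0)
p = 0.5
act = np.ones(4)

def mc_dropout_np(a, p, rng):
    mask = rng.random(a.shape) >= p  # keep each unit with prob 1 - p
    return a * mask / (1.0 - p)

# Averaging many stochastic forward passes approaches the original activation
samples = np.stack([mc_dropout_np(act, p, rng) for _ in range(20000)])
print(samples.mean(axis=0))  # each entry close to 1.0
```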
# ## Our models
class Linear_2L(nn.Module):
def __init__(self, input_dim, output_dim):
super(Linear_2L, self).__init__()
n_hid = 1200
self.pdrop = 0.5
self.input_dim = input_dim
self.output_dim = output_dim
self.fc1 = nn.Linear(input_dim, n_hid)
self.fc2 = nn.Linear(n_hid, n_hid)
self.fc3 = nn.Linear(n_hid, output_dim)
# choose your non linearity
#self.act = nn.Tanh()
#self.act = nn.Sigmoid()
self.act = nn.ReLU(inplace=True)
#self.act = nn.ELU(inplace=True)
#self.act = nn.SELU(inplace=True)
def forward(self, x, sample=True):
mask = self.training or sample # if training or sampling, mc dropout will apply random binary mask
# Otherwise, for regular test set evaluation, we can just scale activations
x = x.view(-1, self.input_dim) # view(batch_size, input_dim)
# -----------------
x = self.fc1(x)
x = MC_dropout(x, p=self.pdrop, mask=mask)
# -----------------
x = self.act(x)
# -----------------
x = self.fc2(x)
x = MC_dropout(x, p=self.pdrop, mask=mask)
# -----------------
x = self.act(x)
# -----------------
y = self.fc3(x)
return y
def sample_predict(self, x, Nsamples):
# Just copies type from x, initializes new vector
predictions = x.data.new(Nsamples, x.shape[0], self.output_dim)
for i in range(Nsamples):
y = self.forward(x, sample=True)
predictions[i] = y
return predictions
# ## Network wrapper
# +
from __future__ import division
class Net(BaseNet):
eps = 1e-6
def __init__(self, lr=1e-3, channels_in=3, side_in=28, cuda=True, classes=10, batch_size=128, weight_decay=0):
super(Net, self).__init__()
cprint('y', ' Creating Net!! ')
self.lr = lr
self.schedule = None # [] #[50,200,400,600]
self.cuda = cuda
self.channels_in = channels_in
self.weight_decay = weight_decay
self.classes = classes
self.batch_size = batch_size
self.side_in=side_in
self.create_net()
self.create_opt()
self.epoch = 0
self.test=False
def create_net(self):
torch.manual_seed(42)
if self.cuda:
torch.cuda.manual_seed(42)
self.model = Linear_2L(input_dim=self.channels_in*self.side_in*self.side_in, output_dim=self.classes)
if self.cuda:
self.model.cuda()
# cudnn.benchmark = True
print(' Total params: %.2fM' % (self.get_nb_parameters() / 1000000.0))
def create_opt(self):
# self.optimizer = torch.optim.Adam(self.model.parameters(), lr=self.lr, betas=(0.9, 0.999), eps=1e-08,
# weight_decay=0)
self.optimizer = torch.optim.SGD(self.model.parameters(), lr=self.lr, momentum=0.5, weight_decay=self.weight_decay)
# self.optimizer = torch.optim.SGD(self.model.parameters(), lr=self.lr, momentum=0.9)
# self.sched = torch.optim.lr_scheduler.StepLR(self.optimizer, step_size=1, gamma=10, last_epoch=-1)
def fit(self, x, y):
x, y = to_variable(var=(x, y.long()), cuda=self.cuda)
self.optimizer.zero_grad()
out = self.model(x)
loss = F.cross_entropy(out, y, reduction='sum')
loss.backward()
self.optimizer.step()
# out: (batch_size, out_channels, out_caps_dims)
pred = out.data.max(dim=1, keepdim=False)[1] # get the index of the max log-probability
err = pred.ne(y.data).sum()
return loss.data, err
def eval(self, x, y, train=False):
x, y = to_variable(var=(x, y.long()), cuda=self.cuda)
out = self.model(x)
loss = F.cross_entropy(out, y, reduction='sum')
probs = F.softmax(out, dim=1).data.cpu()
pred = out.data.max(dim=1, keepdim=False)[1] # get the index of the max log-probability
err = pred.ne(y.data).sum()
return loss.data, err, probs
def sample_eval(self, x, y, Nsamples, logits=True, train=False):
x, y = to_variable(var=(x, y.long()), cuda=self.cuda)
out = self.model.sample_predict(x, Nsamples)
if logits:
mean_out = out.mean(dim=0, keepdim=False)
loss = F.cross_entropy(mean_out, y, reduction='sum')
probs = F.softmax(mean_out, dim=1).data.cpu()
else:
mean_out = F.softmax(out, dim=2).mean(dim=0, keepdim=False)
probs = mean_out.data.cpu()
log_mean_probs_out = torch.log(mean_out)
loss = F.nll_loss(log_mean_probs_out, y, reduction='sum')
pred = mean_out.data.max(dim=1, keepdim=False)[1] # get the index of the max log-probability
err = pred.ne(y.data).sum()
return loss.data, err, probs
def all_sample_eval(self, x, y, Nsamples):
x, y = to_variable(var=(x, y.long()), cuda=self.cuda)
out = self.model.sample_predict(x, Nsamples)
prob_out = F.softmax(out, dim=2)
prob_out = prob_out.data
return prob_out
def get_weight_samples(self):
weight_vec = []
state_dict = self.model.state_dict()
for key in state_dict.keys():
if 'weight' in key:
weight_mtx = state_dict[key].cpu().data
for weight in weight_mtx.view(-1):
weight_vec.append(weight)
return np.array(weight_vec)
# -
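`Net.sample_eval` above combines MC samples either by averaging logits (`logits=True`) or by averaging per-sample softmax probabilities (`logits=False`). A minimal NumPy sketch with made-up logits shows the two choices generally give different predictive distributions:

```python
import numpy as np

def softmax(z, axis=-1):
    e = np.exp(z - z.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

# Two hypothetical MC-dropout logit samples for one input over 3 classes
logits = np.array([[2.0, 0.0, 0.0],
                   [0.0, 2.0, 0.0]])

# Option 1 (logits=True): average logits, then softmax
probs_from_mean_logits = softmax(logits.mean(axis=0))

# Option 2 (logits=False): softmax each sample, then average probabilities
mean_probs = softmax(logits).mean(axis=0)

print(probs_from_mean_logits)
print(mean_probs)
```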
# +
from __future__ import print_function
from __future__ import division
import time
import copy
import torch.utils.data
from torchvision import transforms, datasets
import matplotlib
models_dir = 'models_MC_dropout_MNIST'
results_dir = 'results_MC_dropout_MNIST'
mkdir(models_dir)
mkdir(results_dir)
# ------------------------------------------------------------------------------------------------------
# train config
NTrainPointsMNIST = 60000
batch_size = 128
nb_epochs = 60  # We can do fewer iterations as this method has faster convergence
log_interval = 1
savemodel_its = [5, 10, 20, 30]
save_dicts = []
# ------------------------------------------------------------------------------------------------------
# dataset
cprint('c', '\nData:')
# load data
# data augmentation
transform_train = transforms.Compose([
transforms.ToTensor(),
transforms.Normalize(mean=(0.1307,), std=(0.3081,))
])
transform_test = transforms.Compose([
transforms.ToTensor(),
transforms.Normalize(mean=(0.1307,), std=(0.3081,))
])
use_cuda = torch.cuda.is_available()
trainset = datasets.MNIST(root='../data', train=True, download=True, transform=transform_train)
valset = datasets.MNIST(root='../data', train=False, download=True, transform=transform_test)
if use_cuda:
trainloader = torch.utils.data.DataLoader(trainset, batch_size=batch_size, shuffle=True, pin_memory=True, num_workers=3)
valloader = torch.utils.data.DataLoader(valset, batch_size=batch_size, shuffle=False, pin_memory=True, num_workers=3)
else:
trainloader = torch.utils.data.DataLoader(trainset, batch_size=batch_size, shuffle=True, pin_memory=False,
num_workers=3)
valloader = torch.utils.data.DataLoader(valset, batch_size=batch_size, shuffle=False, pin_memory=False,
num_workers=3)
## ---------------------------------------------------------------------------------------------------------------------
# net dims
cprint('c', '\nNetwork:')
lr = 1e-3
prior_sigma = 10
weight_decay = 1/(prior_sigma**2)
########################################################################################
net = Net(lr=lr, channels_in=1, side_in=28, cuda=use_cuda, classes=10, batch_size=batch_size, weight_decay=weight_decay)
epoch = 0
## ---------------------------------------------------------------------------------------------------------------------
# train
cprint('c', '\nTrain:')
print(' init cost variables:')
pred_cost_train = np.zeros(nb_epochs)
err_train = np.zeros(nb_epochs)
cost_dev = np.zeros(nb_epochs)
err_dev = np.zeros(nb_epochs)
# best_cost = np.inf
best_err = np.inf
nb_its_dev = 1
tic0 = time.time()
for i in range(epoch, nb_epochs):
# if i in [1]:
# print('updating lr')
# net.sched.step()
net.set_mode_train(True)
tic = time.time()
nb_samples = 0
for x, y in trainloader:
cost_pred, err = net.fit(x, y)
err_train[i] += err
pred_cost_train[i] += cost_pred
nb_samples += len(x)
pred_cost_train[i] /= nb_samples
err_train[i] /= nb_samples
toc = time.time()
net.epoch = i
# ---- print
print("it %d/%d, Jtr_pred = %f, err = %f, " % (i, nb_epochs, pred_cost_train[i], err_train[i]), end="")
cprint('r', ' time: %f seconds\n' % (toc - tic))
# Save state dict
if i in savemodel_its:
save_dicts.append(copy.deepcopy(net.model.state_dict()))
# ---- dev
if i % nb_its_dev == 0:
net.set_mode_train(False)
nb_samples = 0
for j, (x, y) in enumerate(valloader):
cost, err, probs = net.eval(x, y)
cost_dev[i] += cost
err_dev[i] += err
nb_samples += len(x)
cost_dev[i] /= nb_samples
err_dev[i] /= nb_samples
cprint('g', ' Jdev = %f, err = %f\n' % (cost_dev[i], err_dev[i]))
if err_dev[i] < best_err:
best_err = err_dev[i]
cprint('b', 'best test error')
net.save(models_dir+'/theta_best.dat')
toc0 = time.time()
runtime_per_it = (toc0 - tic0) / float(nb_epochs)
cprint('r', ' average time: %f seconds\n' % runtime_per_it)
net.save(models_dir+'/theta_last.dat')
## ---------------------------------------------------------------------------------------------------------------------
# results
cprint('c', '\nRESULTS:')
nb_parameters = net.get_nb_parameters()
best_cost_dev = np.min(cost_dev)
best_cost_train = np.min(pred_cost_train)
err_dev_min = err_dev[::nb_its_dev].min()
print(' cost_dev: %f (cost_train %f)' % (best_cost_dev, best_cost_train))
print(' err_dev: %f' % (err_dev_min))
print(' nb_parameters: %d (%s)' % (nb_parameters, humansize(nb_parameters)))
print(' time_per_it: %fs\n' % (runtime_per_it))
## Save results for plots
# np.save('results/test_predictions.npy', test_predictions)
np.save(results_dir + '/cost_train.npy', pred_cost_train)
np.save(results_dir + '/cost_dev.npy', cost_dev)
np.save(results_dir + '/err_train.npy', err_train)
np.save(results_dir + '/err_dev.npy', err_dev)
## ---------------------------------------------------------------------------------------------------------------------
# fig cost vs its
textsize = 15
marker=5
plt.figure(dpi=100)
fig, ax1 = plt.subplots()
ax1.plot(pred_cost_train, 'r--')
ax1.plot(range(0, nb_epochs, nb_its_dev), cost_dev[::nb_its_dev], 'b-')
ax1.set_ylabel('Cross Entropy')
plt.xlabel('epoch')
plt.grid(b=True, which='major', color='k', linestyle='-')
plt.grid(b=True, which='minor', color='k', linestyle='--')
lgd = plt.legend(['test error', 'train error'], markerscale=marker, prop={'size': textsize, 'weight': 'normal'})
ax = plt.gca()
plt.title('classification costs')
for item in ([ax.title, ax.xaxis.label, ax.yaxis.label] +
ax.get_xticklabels() + ax.get_yticklabels()):
item.set_fontsize(textsize)
item.set_weight('normal')
plt.savefig(results_dir + '/cost.png', bbox_extra_artists=(lgd,), bbox_inches='tight')
plt.figure(dpi=100)
fig2, ax2 = plt.subplots()
ax2.set_ylabel('% error')
ax2.semilogy(range(0, nb_epochs, nb_its_dev), 100 * err_dev[::nb_its_dev], 'b-')
ax2.semilogy(100 * err_train, 'r--')
plt.xlabel('epoch')
plt.grid(b=True, which='major', color='k', linestyle='-')
plt.grid(b=True, which='minor', color='k', linestyle='--')
ax2.get_yaxis().set_minor_formatter(matplotlib.ticker.ScalarFormatter())
ax2.get_yaxis().set_major_formatter(matplotlib.ticker.ScalarFormatter())
lgd = plt.legend(['test error', 'train error'], markerscale=marker, prop={'size': textsize, 'weight': 'normal'})
ax = plt.gca()
for item in ([ax.title, ax.xaxis.label, ax.yaxis.label] +
ax.get_xticklabels() + ax.get_yticklabels()):
item.set_fontsize(textsize)
item.set_weight('normal')
plt.savefig(results_dir + '/err.png', bbox_extra_artists=(lgd,), bbox_inches='tight')
# -
# ## load model
# +
from __future__ import print_function
from __future__ import division
import time
import copy
import torch.utils.data
from torchvision import transforms, datasets
import matplotlib
models_dir = 'models_MC_dropout_MNIST'
results_dir = 'results_MC_dropout_MNIST'
mkdir(models_dir)
mkdir(results_dir)
# ------------------------------------------------------------------------------------------------------
# train config
NTrainPointsMNIST = 60000
batch_size = 128
nb_epochs = 60  # We can do fewer iterations as this method has faster convergence
log_interval = 1
savemodel_its = [5, 10, 20, 30]
save_dicts = []
# ------------------------------------------------------------------------------------------------------
# dataset
cprint('c', '\nData:')
# load data
# data augmentation
transform_train = transforms.Compose([
transforms.ToTensor(),
transforms.Normalize(mean=(0.1307,), std=(0.3081,))
])
transform_test = transforms.Compose([
transforms.ToTensor(),
transforms.Normalize(mean=(0.1307,), std=(0.3081,))
])
use_cuda = torch.cuda.is_available()
trainset = datasets.MNIST(root='../data', train=True, download=True, transform=transform_train)
valset = datasets.MNIST(root='../data', train=False, download=True, transform=transform_test)
if use_cuda:
trainloader = torch.utils.data.DataLoader(trainset, batch_size=batch_size, shuffle=True, pin_memory=True, num_workers=3)
valloader = torch.utils.data.DataLoader(valset, batch_size=batch_size, shuffle=False, pin_memory=True, num_workers=3)
else:
trainloader = torch.utils.data.DataLoader(trainset, batch_size=batch_size, shuffle=True, pin_memory=False,
num_workers=3)
valloader = torch.utils.data.DataLoader(valset, batch_size=batch_size, shuffle=False, pin_memory=False,
num_workers=3)
## ---------------------------------------------------------------------------------------------------------------------
# net dims
cprint('c', '\nNetwork:')
lr = 1e-3
prior_sigma = 10
weight_decay = 1/(prior_sigma**2)
########################################################################################
net = Net(lr=lr, channels_in=1, side_in=28, cuda=use_cuda, classes=10, batch_size=batch_size, weight_decay=weight_decay)
net.load(models_dir+'/theta_last.dat')
# -
# ## inference with sampling on test set
# +
batch_size = 200
if use_cuda:
valloader = torch.utils.data.DataLoader(valset, batch_size=batch_size, shuffle=False, pin_memory=True, num_workers=4)
else:
valloader = torch.utils.data.DataLoader(valset, batch_size=batch_size, shuffle=False, pin_memory=False,
num_workers=4)
test_cost = 0 # Note that these are per sample
test_err = 0
nb_samples = 0
test_predictions = np.zeros((10000, 10))
Nsamples = 100
net.set_mode_train(False)
for j, (x, y) in enumerate(valloader):
cost, err, probs = net.sample_eval(x, y, Nsamples, logits=False) # , logits=True
test_cost += cost
test_err += err.cpu().numpy()
test_predictions[nb_samples:nb_samples+len(x), :] = probs.numpy()
nb_samples += len(x)
# test_cost /= nb_samples
test_err /= nb_samples
cprint('b', ' Loglike = %5.6f, err = %1.6f\n' % (-test_cost, test_err))
# -
# ## rotations, Bayesian [IGNORE THIS FOR NOW]
# #### First load data into numpy format
# +
x_dev = []
y_dev = []
for x, y in valloader:
x_dev.append(x.cpu().numpy())
y_dev.append(y.cpu().numpy())
x_dev = np.concatenate(x_dev)
y_dev = np.concatenate(y_dev)
print(x_dev.shape)
print(y_dev.shape)
# +
## ROTATIONS marginloss percentile distance
import matplotlib
from torch.autograd import Variable
def softmax(x):
"""Compute softmax values for each sets of scores in x."""
e_x = np.exp(x - np.max(x))
return e_x / e_x.sum()
###########################################
import matplotlib.pyplot as plt
import scipy.ndimage as ndim
import matplotlib.colors as mcolors
conv = mcolors.ColorConverter().to_rgb
#############
im_ind = 90
Nsamples = 100
#############
angle = 0
plt.figure()
plt.imshow( ndim.interpolation.rotate(x_dev[im_ind,0,:,:], 0, reshape=False))
plt.title('original image')
# plt.savefig('original_digit.png')
s_rot = 0
end_rot = 179
steps = 10
rotations = (np.linspace(s_rot, end_rot, steps)).astype(int)
ims = []
predictions = []
# percentile_dist_confidence = []
x, y = x_dev[im_ind], y_dev[im_ind]
fig = plt.figure(figsize=(steps, 8), dpi=80)
# DO ROTATIONS ON OUR IMAGE
for i in range(len(rotations)):
angle = rotations[i]
x_rot = np.expand_dims(ndim.interpolation.rotate(x[0, :, :], angle, reshape=False, cval=-0.42421296), 0)
ax = fig.add_subplot(3, (steps-1), 2*(steps-1)+i)
ax.imshow(x_rot[0,:,:])
ax.axis('off')
ax.set_xticklabels([])
ax.set_yticklabels([])
ims.append(x_rot[:,:,:])
ims = np.concatenate(ims)
net.set_mode_train(False)
y = np.ones(ims.shape[0])*y
ims = np.expand_dims(ims, axis=1)
cost, err, probs = net.sample_eval(torch.from_numpy(ims), torch.from_numpy(y), Nsamples=Nsamples, logits=False) # , logits=True
predictions = probs.numpy()
textsize = 20
lw = 5
print(ims.shape)
ims = ims[:,0,:,:]
# predictions = np.concatenate(predictions)
#print(percentile_dist_confidence)
c = ['#1f77b4', '#ff7f0e', '#2ca02c', '#d62728', '#9467bd',
'#8c564b', '#e377c2', '#7f7f7f', '#bcbd22', '#17becf']
# c = ['#ff0000', '#ffff00', '#00ff00', '#00ffff', '#0000ff',
# '#ff00ff', '#990000', '#999900', '#009900', '#009999']
ax0 = plt.subplot2grid((3, steps-1), (0, 0), rowspan=2, colspan=steps-1)
#ax0 = fig.add_subplot(2, 1, 1)
plt.gca().set_prop_cycle(color=c)
ax0.plot(rotations, predictions, linewidth=lw)
##########################
# Dots at max
for i in range(predictions.shape[1]):
selections = (predictions[:,i] == predictions.max(axis=1))
for n in range(len(selections)):
if selections[n]:
ax0.plot(rotations[n], predictions[n, i], 'o', c=c[i], markersize=15.0)
##########################
lgd = ax0.legend(['prob 0', 'prob 1', 'prob 2',
'prob 3', 'prob 4', 'prob 5',
'prob 6', 'prob 7', 'prob 8',
'prob 9'], loc='upper right', prop={'size': textsize, 'weight': 'normal'}, bbox_to_anchor=(1.4,1))
plt.xlabel('rotation angle')
# plt.ylabel('probability')
plt.title('True class: %d, Nsamples %d' % (y[0], Nsamples))
# ax0.axis('tight')
plt.tight_layout()
plt.autoscale(enable=True, axis='x', tight=True)
plt.subplots_adjust(wspace=0, hspace=0)
for item in ([ax0.title, ax0.xaxis.label, ax0.yaxis.label] +
ax0.get_xticklabels() + ax0.get_yticklabels()):
item.set_fontsize(textsize)
item.set_weight('normal')
# plt.savefig('percentile_label_probabilities.png', bbox_extra_artists=(lgd,), bbox_inches='tight')
# files.download('percentile_label_probabilities.png')
# -
# ### All dataset with entropy
# +
## ROTATIONS marginloss percentile distance
import matplotlib
from torch.autograd import Variable
def softmax(x):
"""Compute softmax values for each sets of scores in x."""
e_x = np.exp(x - np.max(x))
return e_x / e_x.sum()
###########################################
import matplotlib.pyplot as plt
import scipy.ndimage as ndim
import matplotlib.colors as mcolors
conv = mcolors.ColorConverter().to_rgb
Nsamples = 100
s_rot = 0
end_rot = 179
steps = 16
rotations = (np.linspace(s_rot, end_rot, steps)).astype(int)
all_preds = np.zeros((len(x_dev), steps, 10))
all_sample_preds = np.zeros((len(x_dev), Nsamples, steps, 10))
# DO ROTATIONS ON OUR IMAGE
for im_ind in range(len(x_dev)):
x, y = x_dev[im_ind], y_dev[im_ind]
print(im_ind)
ims = []
predictions = []
for i in range(len(rotations)):
angle = rotations[i]
x_rot = np.expand_dims(ndim.interpolation.rotate(x[0, :, :], angle, reshape=False, cval=-0.42421296), 0)
ims.append(x_rot[:,:,:])
ims = np.concatenate(ims)
net.set_mode_train(False)
y = np.ones(ims.shape[0])*y
ims = np.expand_dims(ims, axis=1)
# cost, err, probs = net.sample_eval(torch.from_numpy(ims), torch.from_numpy(y), Nsamples=Nsamples, logits=False)
sample_probs = net.all_sample_eval(torch.from_numpy(ims), torch.from_numpy(y), Nsamples=Nsamples)
probs = sample_probs.mean(dim=0)
all_sample_preds[im_ind, :, :, :] = sample_probs.cpu().numpy()
predictions = probs.cpu().numpy()
all_preds[im_ind, :, :] = predictions
all_preds_entropy = -(all_preds * np.log(all_preds)).sum(axis=2)
mean_angle_entropy = all_preds_entropy.mean(axis=0)
std_angle_entropy = all_preds_entropy.std(axis=0)
correct_preds = np.zeros((len(x_dev), steps))
for i in range(len(x_dev)):
correct_preds[i,:] = all_preds[i,:,y_dev[i]]
correct_mean = correct_preds.mean(axis=0)
correct_std = correct_preds.std(axis=0)
# -
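The predictive entropy computed above, `-(p * log p).sum()`, is largest for a uniform predictive distribution and small for a confident one; a minimal NumPy check:

```python
import numpy as np

# Predictive entropy: high for uncertain (uniform) predictions, low for confident ones
def pred_entropy(p):
    return -(p * np.log(p)).sum()

uniform = np.full(10, 0.1)
confident = np.array([0.91] + [0.01] * 9)
print(pred_entropy(uniform))    # log(10) ~ 2.303
print(pred_entropy(confident))  # much smaller
```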
np.save(results_dir+'/correct_preds.npy', correct_preds)
np.save(results_dir+'/all_preds.npy', all_preds)
np.save(results_dir+'/all_sample_preds.npy', all_sample_preds) #all_sample_preds
def errorfill(x, y, yerr, color=None, alpha_fill=0.3, ax=None):
ax = ax if ax is not None else plt.gca()
if color is None:
        color = next(ax._get_lines.prop_cycler)['color']
if np.isscalar(yerr) or len(yerr) == len(y):
ymin = y - yerr
ymax = y + yerr
elif len(yerr) == 2:
ymin, ymax = yerr
line_ax = ax.plot(x, y, color=color)
ax.fill_between(x, ymax, ymin, color=color, alpha=alpha_fill)
return line_ax
# +
plt.figure(dpi=100)
line_ax0 = errorfill(rotations, correct_mean, yerr=correct_std, color=c[2])
ax = plt.gca()
ax2 = ax.twinx()
line_ax1 = errorfill(rotations, mean_angle_entropy, yerr=std_angle_entropy, color=c[3], ax=ax2)
plt.xlabel('rotation angle')
lns = line_ax0+line_ax1
lgd = plt.legend(lns, ['correct class', 'predictive entropy'], loc='upper right',
prop={'size': 15, 'weight': 'normal'}, bbox_to_anchor=(1.75,1))
for item in ([ax.title, ax.xaxis.label, ax.yaxis.label] + [ax2.title, ax2.xaxis.label, ax2.yaxis.label] +
ax.get_xticklabels() + ax.get_yticklabels() + ax2.get_xticklabels() + ax2.get_yticklabels()):
item.set_fontsize(15)
item.set_weight('normal')
plt.autoscale(enable=True, axis='x', tight=True)
# -
# ## Weight histogram
#
# +
name = 'MC Dropout'
# mkdir('weight_samples')
weight_vector = net.get_weight_samples()
np.save(results_dir+'/weight_samples_'+name+'.npy', weight_vector)
print(weight_vector.shape)
fig = plt.figure(dpi=120)
ax = fig.add_subplot(111)
sns.distplot(weight_vector, norm_hist=False, label=name, ax=ax)
# ax.hist(weight_vector, bins=70, density=True);
ax.set_ylabel('Density')
ax.legend()
plt.title('Total parameters: %d' % len(weight_vector))
# -
# ### Evolution over iterations
# +
last_state_dict = copy.deepcopy(net.model.state_dict())
Nsamples = 10
for idx, iteration in enumerate(savemodel_its):
net.model.load_state_dict(save_dicts[idx])
weight_vector = net.get_weight_samples()
fig = plt.figure(dpi=120)
ax = fig.add_subplot(111)
symlim = 1
    lim_idxs = np.where(np.logical_and(weight_vector >= -symlim, weight_vector <= symlim))
sns.distplot(weight_vector, norm_hist=False, label='it %d' % (iteration), ax=ax)
ax.legend()
# ax.set_xlim((-symlim, symlim))
net.model.load_state_dict(last_state_dict)
# -
|
notebooks/classification/MC_dropout_MNIST.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Practice Assessment
# This is your first assessment. Although it is for practice only, you must go through each question and ensure you can run your Python code in the notebook.
#
# It is important that you are able to save your completed notebook into a repo in your GitHub account. And you must be able to email a copy of your uploaded file to <EMAIL>
# ### Q1
# Using an f-string, display this Einstein quote exactly as you see it.
#
# Anyone who has never made a mistake <br>
# has never tried anything new.
print(f"Anyone who has never made a mistake\nhas never tried anything new.")
# ### Q2
# Using an f-string and the variables provided, display this message
#
# `Letterkenny Institute of Technology can be abbreviated to (LYIT).`
# +
LYIT_name = "Letterkenny Institute of Technology"
acronym = "LYIT"
# Enter your code here
print(f"{LYIT_name} can be abbreviated to ({acronym}).")
# -
# ### Q3
# Write a Python program to display the current date and time.
#
# Sample output:
# `2020-01-10 18:31:21.076466`
# +
from datetime import datetime
# datetime object containing current date and time
now = datetime.now()
print(now)
# -
# ### Q4
# Write a Python program to ask the user to input their first name and DOB. Then calculate the number of days they have been alive.
#
# Display a message to the user with an f-string using this format `James, you are 40 years old`.
#
# Use the following code and then add your code.
# +
import datetime as dt
first_name = input("Please enter your first name : ")
print("Please enter your DOB as integer values")
year = int(input("Enter year : "))
month = int(input("Enter month : "))
day = int(input("Enter day : "))
# Enter your code here
birthdate = dt.date(year, month, day)
today = dt.date.today()
age = today - birthdate
days_old = age.days
years_old = days_old // 365  # approximation: ignores leap years
print(f"{first_name}, you are {years_old} years old")
# -
# ### Q5
# Enter 2 numbers. If both numbers add up to a number between 50 to 60 then print a message stating `not accepted`, otherwise show a message `accepted`.
#
# Hint: Use the `sum in range` command to check whether your number is within a specific range or not.
# +
number1 = int(input("Please enter number 1 : "))
number2 = int(input("Please enter number 2 : "))
total = number1 + number2  # avoid shadowing the built-in sum()
# Enter your code here
if total in range(50, 61):  # 50 to 60 inclusive
    print("not accepted")
else:
    print("accepted")
# -
# ### Q6
# Write a Python program to calculate the hypotenuse of a right angled triangle.
#
# Hint: hypotenuse = square root of the sum of the squares of the two shorter sides
# +
from math import sqrt
print("Input lengths of shorter triangle sides : ")
a = float(input(" side a: "))
b = float(input(" side b: "))
hypotenuse = sqrt(a ** 2 + b ** 2)
print(f"The hypotenuse is {hypotenuse}")
# -
# ### Q7
# Use PyPDF2 to open the file **A_Midsummer_Night**. Extract the text of page 22 from this pdf. Print the contents of it.
# +
import PyPDF2
my_pdf_file = open("A_Midsummer_Night.pdf", mode="rb")
# get the first page
pdf_reader = PyPDF2.PdfFileReader(my_pdf_file)
pdf_page = pdf_reader.getPage(21).extractText()
print(pdf_page)
#print('Page type: {}'.format(str(type(pdf_page))))
# -
# ### Q8
# Open a new file called **OnePage.txt** and add the text you extracted from page 22 from the pdf file **A Midsummer Night** into the file **OnePage.txt**.
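One possible solution sketch (a placeholder string stands in for the text extracted in Q7, since the PDF is not available here):

```python
# Placeholder for the text extracted from page 22 in Q7
pdf_page = "text extracted from page 22"

# Write the extracted text into OnePage.txt
with open("OnePage.txt", mode="w") as my_file:
    my_file.write(pdf_page)
```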
# ### Q9
# Now print the contents of **OnePage.txt**. Store the contents of the file in a variable called **file_contents**.
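One possible solution sketch (self-contained: it first writes placeholder text to `OnePage.txt`, which in the assessment would already exist from Q8):

```python
# Create OnePage.txt with placeholder text (already done in Q8 in the assessment)
with open("OnePage.txt", mode="w") as my_file:
    my_file.write("text extracted from page 22")

# Read the file back into a variable and print it
with open("OnePage.txt", mode="r") as my_file:
    file_contents = my_file.read()
print(file_contents)
```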
# ### Q10
# Using the **file_contents** variable above, extract any text that is 5 characters long followed by an exclamation mark.
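One possible solution sketch using the `re` module (a placeholder string stands in for the real `file_contents` from Q9):

```python
import re

# Placeholder standing in for the Q9 file contents
file_contents = "Alarm! Hello there! stand by the bells! hi!"

# Whole words of exactly 5 characters followed by an exclamation mark
matches = re.findall(r"\b\w{5}!", file_contents)
print(matches)  # ['Alarm!', 'there!', 'bells!']
```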
# ### Save your file to GitHub
#
# Create a repo called **AI assessments** and save this assessment file to it.
#
# Email a link to your GitHub page to <EMAIL>
|
Program/.ipynb_checkpoints/Test 1-checkpoint.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [conda env:airpol]
# language: python
# name: conda-env-airpol-py
# ---
import torch
import torch.utils.data as utils
import torch.nn.functional as F
import torch.nn as nn
from torch.nn.parameter import Parameter
import torch.optim as optim
import random
from torch.autograd import Variable, grad
from torch.optim.lr_scheduler import ReduceLROnPlateau
import math
class FilterLinear(nn.Module):
def __init__(self, in_features, out_features, filter_square_matrix, bias=True):
'''
        filter_square_matrix : filter square matrix, each element of which is 0 or 1; it masks the weights.
'''
super(FilterLinear, self).__init__()
self.in_features = in_features
self.out_features = out_features
use_gpu = torch.cuda.is_available()
self.filter_square_matrix = None
if use_gpu:
self.filter_square_matrix = Variable(filter_square_matrix.cuda(), requires_grad=False)
else:
self.filter_square_matrix = Variable(filter_square_matrix, requires_grad=False)
self.weight = Parameter(torch.Tensor(out_features, in_features))
if bias:
self.bias = Parameter(torch.Tensor(out_features))
else:
self.register_parameter('bias', None)
self.reset_parameters()
def reset_parameters(self):
stdv = 1. / math.sqrt(self.weight.size(1))
self.weight.data.uniform_(-stdv, stdv)
if self.bias is not None:
self.bias.data.uniform_(-stdv, stdv)
# print(self.weight.data)
# print(self.bias.data)
def forward(self, input):
# print(self.filter_square_matrix.mul(self.weight))
return F.linear(input, self.filter_square_matrix.mul(self.weight), self.bias)
def __repr__(self):
return self.__class__.__name__ + '(' \
+ 'in_features=' + str(self.in_features) \
+ ', out_features=' + str(self.out_features) \
+ ', bias=' + str(self.bias is not None) + ')'
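As a NumPy sketch of what `FilterLinear` computes, with a hypothetical 3x3 filter that keeps only the diagonal connections:

```python
import numpy as np

# The 0/1 filter matrix zeroes out selected connections before y = (M * W) x
filt = np.eye(3)                   # hypothetical: keep only diagonal weights
W = np.arange(9.0).reshape(3, 3)   # dense weight matrix
x = np.array([1.0, 2.0, 3.0])
y = (filt * W) @ x                 # equivalent of F.linear with the masked weight
print(y)  # [ 0.  8. 24.]
```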
# +
import os
os.chdir(os.path.pardir)
# load data from file
import numpy as np
save_file_name = ['fea_seq.npy', 'last_observation_seq.npy', 'label_seq.npy', 'masking_seq.npy',
'delta_seq.npy', 'train_valid_test_split.npy']
save_folder = 'data/raw/met-search'
saved_arrays = []
for file_name in save_file_name:
saved_arrays.append(np.load(os.path.join(save_folder, file_name)))
[fea_seq, last_observation_seq, label_seq, masking_seq, delta_seq, train_valid_test_split] = saved_arrays
# train-test-split
train_index = [k for k in range(train_valid_test_split[0])]
dev_index = [k for k in range(train_valid_test_split[0],
train_valid_test_split[0] + train_valid_test_split[1])]
test_index = [k for k in range(train_valid_test_split[0] + train_valid_test_split[1],
train_valid_test_split[0] + train_valid_test_split[1] + train_valid_test_split[2])]
def get_array_by_index_range(nparray_list, label_array, index_range):
'''
nparray_list: list of nparrays to select according to index range
label_array: select the labels from label array
'''
# get non-na index
non_na_index = []
for index in index_range:
if not np.isnan(label_array[index]):
non_na_index.append(index)
return [k[non_na_index] for k in nparray_list], label_array[non_na_index].reshape(-1)
# +
# normalize delta
# we can normalize this because these are known features
delta_seq = (delta_seq - np.mean(delta_seq)) / np.std(delta_seq)
# split set to train, test and dev sets
# train set
[fea_train, last_train, masking_train, delta_train], label_train = get_array_by_index_range([fea_seq,last_observation_seq, masking_seq, delta_seq
], label_seq, train_index)
# dev set
[fea_dev, last_dev, masking_dev, delta_dev], label_dev = get_array_by_index_range([fea_seq, last_observation_seq, masking_seq, delta_seq
], label_seq, dev_index)
# test set
[fea_test, last_test, masking_test, delta_test], label_test = get_array_by_index_range([fea_seq, last_observation_seq, masking_seq, delta_seq
], label_seq, test_index)
def normalize_feature(fea_train, array_list):
"""
array_list: [fea_dev, fea_test, last_train, last_dev, last_test] to normalize
"""
train_mean = np.nanmean(fea_train, axis=0)
train_std = np.nanstd(fea_train, axis=0)
def norm_arr(nparr):
        return (nparr - train_mean) / train_std
return (norm_arr(fea_train), [norm_arr(k) for k in array_list])
fea_train, [fea_dev, fea_test, last_train, last_dev, last_test] = normalize_feature(fea_train,
[fea_dev, fea_test,
last_train, last_dev,
last_test])
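# A quick sanity check (with made-up numbers) of the rule above: only the *train* statistics are used, so after normalization the train set is zero-mean/unit-std while the dev set generally is not:

```python
import numpy as np

fea_tr = np.array([[1.0], [3.0]])
fea_dv = np.array([[2.0], [4.0]])
# train statistics only, as in normalize_feature
mean, std = np.nanmean(fea_tr, axis=0), np.nanstd(fea_tr, axis=0)
tr_norm, dv_norm = (fea_tr - mean) / std, (fea_dv - mean) / std
```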
# +
# record mean after normalization
x_mean_aft_nor = np.nanmean(fea_train, axis=0)
# get dataset for grud
def dataset_aggregation(feature_array, last_obsv, mask, delta):
# expand dimension of array
def expd(arr):
return np.expand_dims(arr, axis=1)
return np.concatenate((expd(feature_array), expd(last_obsv), expd(mask), expd(delta)), axis = 1)
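# The aggregation stacks the four arrays along a new channel axis; a shape sketch with hypothetical sizes (8 samples, 5 time steps, 3 features):

```python
import numpy as np

n, t, d = 8, 5, 3
fea, last, mask, delta = (np.zeros((n, t, d)) for _ in range(4))
# same recipe as dataset_aggregation: expand a channel dim, then concatenate
stacked = np.concatenate([np.expand_dims(a, axis=1) for a in (fea, last, mask, delta)], axis=1)
```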
# dataset_aggregation for train, dev, test
# train_aggr = dataset_aggregation(fea_train, last_train, masking_train, delta_train)
train_aggr = dataset_aggregation(last_train, last_train, masking_train, delta_train)
# dev_aggr = dataset_aggregation(fea_dev, last_dev, masking_dev, delta_dev)
# test_aggr = dataset_aggregation(fea_test, last_test, masking_test, delta_test)
dev_aggr = dataset_aggregation(last_dev, last_dev, masking_dev, delta_dev)
test_aggr = dataset_aggregation(last_test, last_test, masking_test, delta_test)
# -
train_aggr.shape
# +
def train_mfn(X_train, y_train, X_valid, y_valid, X_test, y_test, configs, X_mean):
# p = np.random.permutation(X_train.shape[0])
# no shuffle, keep original order
# swap axes for back propagation
# def swap_axes(nparr):
# return nparr.swapaxes(0,1)
# X_train = swap_axes(X_train)
# X_valid = swap_axes(X_valid)
# X_test = swap_axes(X_test)
# model parameters
input_size = X_train.shape[3]
h = 32
t = X_train.shape[2]
output_dim = 1
dropout = 0.5
# d = X_train.shape[2]
# h = 128
# t = X_train.shape[0]
# output_dim = 1
# dropout = 0.5
[config,NN1Config,NN2Config,gamma1Config,gamma2Config,outConfig] = configs
#model = EFLSTM(d,h,output_dim,dropout)
model = MFN(config,NN1Config,NN2Config,gamma1Config,gamma2Config,outConfig, X_mean)
optimizer = optim.Adam(model.parameters(),lr=config["lr"])
#optimizer = optim.SGD(model.parameters(),lr=config["lr"],momentum=config["momentum"])
# optimizer = optim.SGD([
# {'params':model.lstm_l.parameters(), 'lr':config["lr"]},
# {'params':model.classifier.parameters(), 'lr':config["lr"]}
# ], momentum=0.9)
criterion = nn.MSELoss()
device = torch.device('cpu')
model = model.to(device)
criterion = criterion.to(device)
scheduler = ReduceLROnPlateau(optimizer, mode="min", patience=10, factor=0.5, verbose=True)
# criterion = nn.L1Loss()
# device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
# model = model.to(device)
# criterion = criterion.to(device)
# scheduler = ReduceLROnPlateau(optimizer,mode='min',patience=100,factor=0.5,verbose=True)
def train(model, batchsize, X_train, y_train, optimizer, criterion):
epoch_loss = 0
model.train()
total_n = X_train.shape[0]
num_batches = math.ceil(total_n / batchsize)
for batch in range(num_batches):
start = batch*batchsize
end = (batch+1)*batchsize
optimizer.zero_grad()
batch_X = torch.Tensor(X_train[start:end,:])
batch_y = torch.Tensor(y_train[start:end])
predictions = model.forward(batch_X).squeeze(1)
loss = criterion(predictions, batch_y)
loss.backward()
optimizer.step()
epoch_loss += loss.item()
return epoch_loss / num_batches
def evaluate(model, X_valid, y_valid, criterion):
epoch_loss = 0
model.eval()
with torch.no_grad():
batch_X = torch.Tensor(X_valid)
batch_y = torch.Tensor(y_valid)
predictions = model.forward(batch_X).squeeze(1)
epoch_loss = criterion(predictions, batch_y).item()
return epoch_loss
def predict(model, X_test):
epoch_loss = 0
model.eval()
with torch.no_grad():
batch_X = torch.Tensor(X_test)
predictions = model.forward(batch_X).squeeze(1)
predictions = predictions.cpu().data.numpy()
return predictions
best_valid = 999999.0
rand = random.randint(0,100000)
print('epoch train_loss valid_loss')
for epoch in range(config["num_epochs"]):
train_loss = train(model, config["batchsize"], X_train, y_train, optimizer, criterion)
valid_loss = evaluate(model, X_valid, y_valid, criterion)
scheduler.step(valid_loss)
if valid_loss <= best_valid:
# save model
best_valid = valid_loss
print(epoch, train_loss, valid_loss, 'saving model')
torch.save(model, 'models/temp_models/mfn_%d.pt' %rand)
else:
print(epoch, train_loss, valid_loss)
# print 'model number is:', rand
model = torch.load('models/temp_models/mfn_%d.pt' %rand)
predictions = predict(model, X_test)
mae = np.mean(np.absolute(predictions-y_test))
print("mae: ", mae)
mse = np.mean((predictions - y_test)**2)
print("mse: ", mse)
# +
class GRUD(nn.Module):
def __init__(self, input_size, hidden_size, X_mean, output_last = True, dropout=0):
"""
Recurrent Neural Networks for Multivariate Times Series with Missing Values
GRU-D: GRU exploit two representations of informative missingness patterns, i.e., masking and time interval.
cell_size is the size of cell_state.
GRU-D:
input_size: variable dimension of each time
hidden_size: dimension of hidden_state
mask_size: dimension of masking vector
X_mean: the mean of the historical input data
"""
super(GRUD, self).__init__()
self.hidden_size = hidden_size
self.delta_size = input_size
self.mask_size = input_size
self.identity = torch.eye(input_size)
self.zeros = Variable(torch.zeros(input_size))
self.zl = nn.Linear(input_size + hidden_size + self.mask_size, hidden_size)
self.rl = nn.Linear(input_size + hidden_size + self.mask_size, hidden_size)
self.hl = nn.Linear(input_size + hidden_size + self.mask_size, hidden_size)
self.gamma_x_l = FilterLinear(self.delta_size, self.delta_size, self.identity)
# self.gamma_h_l = nn.Linear(self.delta_size, self.delta_size)
self.gamma_h_l = nn.Linear(self.delta_size, hidden_size)
self.map_hidden_gamma = nn.Linear(hidden_size, 1)
self.output_last = output_last
self.fc1 = nn.Linear(hidden_size, hidden_size)
self.fc2 = nn.Linear(hidden_size, 1)
self.dropout = nn.Dropout(dropout)
# def forward(self, x, x_last_obsv, x_mean, h, mask, delta):
    # a single input tensor packs the four channels (x, last_obsv, mask, delta)
    def forward(self, dataset, X_mean, h):
        """
        dataset: tensor of shape (n, c, d)
        n: batch size; c: channel (x, last_obsv, mask, delta); d: number of feature dims
        """
x_mean = Variable(torch.Tensor(X_mean))
# print("dataset size")
# print(dataset.size())
batch_size = dataset.size(0)
dim_size = dataset.size(2)
x = dataset[:,0,:]
x_last_obsv = dataset[:,1,:]
mask = dataset[:,2,:]
delta = dataset[:,3,:]
delta_x = torch.exp(-torch.max(self.zeros, self.gamma_x_l(delta)))
        delta_h = torch.sigmoid(self.map_hidden_gamma(F.relu(self.gamma_h_l(delta))))
x = mask * x + (1 - mask) * (delta_x * x_last_obsv + (1 - delta_x) * x_mean)
h = delta_h * h
# print("combined size")
# print(x.size())
# print(h.size())
# print(mask.size())
combined = torch.cat((x, h, mask), 1)
z = torch.sigmoid(self.zl(combined))
r = torch.sigmoid(self.rl(combined))
combined_r = torch.cat((x, r * h, mask), 1)
h_tilde = torch.tanh(self.hl(combined_r))
h = (1 - z) * h + z * h_tilde
return h
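# GRU-D's trainable input decay can be illustrated with plain numpy (toy numbers of our own): an observed feature passes through unchanged, while a missing feature is pulled from its last observation toward the empirical mean as the time gap (and hence the decay) grows:

```python
import numpy as np

m = np.array([1.0, 0.0, 0.0])          # feature 0 observed; 1 and 2 missing
x_raw = np.array([0.5, 0.0, 0.0])      # current input (missing slots arbitrary)
x_last = np.array([0.5, 0.2, -0.3])    # last observed values
x_mean = np.array([0.0, 0.1, 0.0])     # empirical feature means
decay_in = np.array([0.0, 0.7, 2.0])   # stands in for gamma_x_l(delta)
# gamma_x = exp(-max(0, W_gamma * delta)), as in the forward pass above
gamma_x = np.exp(-np.maximum(0.0, decay_in))
x_hat = m * x_raw + (1 - m) * (gamma_x * x_last + (1 - gamma_x) * x_mean)
```

The larger the decay input (a proxy for a long gap since the last observation), the closer the imputed value sits to the feature mean.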
# +
class MFN(nn.Module):
def __init__(self,config,NN1Config,NN2Config,gamma1Config,gamma2Config,outConfig, X_mean):
super(MFN, self).__init__()
[self.d_l,self.d_a] = config["input_dims"]
[self.dh_l,self.dh_a] = config["h_dims"]
total_h_dim = self.dh_l+self.dh_a
self.mem_dim = config["memsize"]
window_dim = config["windowsize"]
output_dim = 1
attInShape = total_h_dim*window_dim
gammaInShape = attInShape+self.mem_dim
final_out = total_h_dim+self.mem_dim
h_att1 = NN1Config["shapes"]
h_att2 = NN2Config["shapes"]
h_gamma1 = gamma1Config["shapes"]
h_gamma2 = gamma2Config["shapes"]
h_out = outConfig["shapes"]
att1_dropout = NN1Config["drop"]
att2_dropout = NN2Config["drop"]
gamma1_dropout = gamma1Config["drop"]
gamma2_dropout = gamma2Config["drop"]
out_dropout = outConfig["drop"]
# use grud cell instead of LSTM Cell
# (self, input_size, hidden_size, X_mean, output_last = True, dropout=0):
self.gru_l = GRUD(self.d_l, self.dh_l, X_mean)
self.gru_a = GRUD(self.d_a, self.dh_a, X_mean)
self.att1_fc1 = nn.Linear(attInShape, h_att1)
self.att1_fc2 = nn.Linear(h_att1, attInShape)
self.att1_dropout = nn.Dropout(att1_dropout)
self.att2_fc1 = nn.Linear(attInShape, h_att2)
self.att2_fc2 = nn.Linear(h_att2, self.mem_dim)
self.att2_dropout = nn.Dropout(att2_dropout)
self.gamma1_fc1 = nn.Linear(gammaInShape, h_gamma1)
self.gamma1_fc2 = nn.Linear(h_gamma1, self.mem_dim)
self.gamma1_dropout = nn.Dropout(gamma1_dropout)
# self.gamma2_fc1 = nn.Linear(gammaInShape, h_gamma2)
# self.gamma2_fc2 = nn.Linear(h_gamma2, self.mem_dim)
# self.gamma2_dropout = nn.Dropout(gamma2_dropout)
self.out_fc1 = nn.Linear(final_out, h_out)
self.out_fc2 = nn.Linear(h_out, output_dim)
self.out_dropout = nn.Dropout(out_dropout)
## add the part for grud missing imputation
self.x_mean = X_mean
self.hidden_size = self.dh_l
##
self.gamma_decay = nn.Linear(self.d_l+self.d_a, self.hidden_size)
self.map_gamma_decay = nn.Linear(self.hidden_size, 1)
def squeeze_d2(self, matrix):
return torch.squeeze(matrix, dim=2)
def forward(self,x):
x_l = x[:,:,:,:self.d_l]
x_a = x[:,:,:,self.d_l:self.d_l+self.d_a]
# x is n x c x t x d
        # separate x_mean_l and x_mean_a
x_mean_l = self.x_mean[:,:self.d_l]
x_mean_a = self.x_mean[:,self.d_l:self.d_l+self.d_a]
n = x.shape[0]
t = x.shape[1]
self.h_l = self.initHidden(n)
self.h_a = self.initHidden(n)
self.mem = torch.zeros(n, self.mem_dim)
all_h_ls = []
all_h_as = []
all_mems = []
for i in range(t):
# prev time step
prev_h_l = self.h_l
prev_h_a = self.h_a
# print("x_l size")
# print(x_l.size())
# curr time step
new_h_l = self.gru_l(self.squeeze_d2(x_l[:,:,i:i+1,:]), x_mean_l[i:i+1,:], prev_h_l)
new_h_a = self.gru_a(self.squeeze_d2(x_a[:,:,i:i+1,:]), x_mean_a[i:i+1,:], prev_h_a)
# curr time step
# new_h_l, new_c_l = self.lstm_l(x_l[i], (self.h_l, self.c_l))
# new_h_a, new_c_a = self.lstm_a(x_a[i], (self.h_a, self.c_a))
# concatenate
prev_hs = torch.cat([prev_h_l,prev_h_a], dim=1)
new_hs = torch.cat([new_h_l,new_h_a], dim=1)
cStar = torch.cat([prev_hs,new_hs], dim=1)
attention = F.softmax(self.att1_fc2(self.att1_dropout(F.relu(self.att1_fc1(cStar)))),dim=1)
attended = attention*cStar
            cHat = torch.tanh(self.att2_fc2(self.att2_dropout(F.relu(self.att2_fc1(attended)))))
both = torch.cat([attended,self.mem], dim=1)
            gamma1 = torch.sigmoid(self.gamma1_fc2(self.gamma1_dropout(F.relu(self.gamma1_fc1(both)))))
# gamma2 = F.sigmoid(self.gamma2_fc2(self.gamma2_dropout(F.relu(self.gamma2_fc1(both)))))
self.mem = gamma1*self.mem + (1-gamma1)*cHat
            # decay the memory according to delta
            gamma_decay = torch.sigmoid(self.map_gamma_decay(F.relu(self.gamma_decay(torch.squeeze(x[:,3,i:i+1,:], dim=1)))))
self.mem = self.mem * gamma_decay
# record delta condition for memory
all_mems.append(self.mem)
# update
self.h_l = new_h_l
self.h_a = new_h_a
all_h_ls.append(self.h_l)
all_h_as.append(self.h_a)
# last hidden layer last_hs is n x h
last_h_l = all_h_ls[-1]
last_h_a = all_h_as[-1]
last_mem = all_mems[-1]
last_hs = torch.cat([last_h_l,last_h_a,last_mem], dim=1)
output = self.out_fc2(self.out_dropout(F.relu(self.out_fc1(last_hs))))
return output
def initHidden(self, batch_size):
use_gpu = torch.cuda.is_available()
if use_gpu:
Hidden_State = Variable(torch.zeros(batch_size, self.hidden_size).cuda())
return Hidden_State
else:
Hidden_State = Variable(torch.zeros(batch_size, self.hidden_size))
return Hidden_State
# +
config = dict()
config["input_dims"] = [5, 47]
hl = 128
ha = 128
config["h_dims"] = [hl, ha]
config["memsize"] = 128
config["windowsize"] = 2
config["batchsize"] = 32
config["num_epochs"] = 50
config["lr"] = 0.005
NN1Config = dict()
NN1Config["shapes"] = 32
NN1Config["drop"] = 0.5
NN2Config = dict()
NN2Config["shapes"] = 32
NN2Config["drop"] = 0.5
gamma1Config = dict()
gamma1Config["shapes"] = 32
gamma1Config["drop"] = 0.5
gamma2Config = dict()
gamma2Config["shapes"] = 32
gamma2Config["drop"] = 0.5
outConfig = dict()
outConfig["shapes"] = 32
outConfig["drop"] = 0.5
configs = [config,NN1Config,NN2Config,gamma1Config,gamma2Config,outConfig]
seed = 123
np.random.seed(seed)
torch.manual_seed(seed)
torch.cuda.manual_seed(seed)
train_mfn(train_aggr, label_train, dev_aggr, label_dev, test_aggr, label_test, configs, x_mean_aft_nor)
# train_grud(train_aggr, label_train, dev_aggr, label_dev, test_aggr, label_test, config, x_mean_aft_nor)
# -
# notebooks/13-mfn-grud-met-search.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
import os, sys, time, copy
import numpy as np
import matplotlib.pyplot as plt
import pickle
import myokit
sys.path.append('../')
sys.path.append('../Protocols')
sys.path.append('../Models')
sys.path.append('../Lib')
import protocol_lib, vc_protocols
import mod_trace
from ord2011 import ORD2011
# +
'''
O'Hara-Rudy CiPA v1.0 (2011)
'''
cells = {
'Endocardial' : 0,
'Epicardial' : 1,
'Mid-myocardial' : 2,
}
current_li = ['I_Na', 'I_NaL', 'I_to', 'I_CaL', 'I_Kr', 'I_Ks', 'I_K1' ]
# protocol = vc_protocols.hERG_CiPA()
protocol = pickle.load(open("./trial_steps_ramps_Kernik_200_50_4_-120_60_paper/shortened_trial_steps_ramps_Kernik_200_50_4_-120_60_500_artefact_True_short.pkl", 'rb'))
# protocol = pickle.load(open("./trial_steps_ramps_ORD2011_288_51_4_-121_61/shortened_trial_steps_ramps_ORD2011_288_51_4_-121_61_500_artefact_False_short.pkl", 'rb'))
end_time = protocol.get_voltage_change_endpoints()[-1]
t_span = (0, end_time)
t_eval = np.linspace(0, end_time, 20000)
print(end_time)
# +
start_time = time.time()
import simulator_myokit
'''
Simulation with Myokit
'''
model_path = "../mmt-model-files/ord-2011_VC.mmt"
model_myokit, protocol_myokit, script = myokit.load(model_path)
sim_myokit = simulator_myokit.Simulator(model_myokit, protocol, max_step=1.0, abs_tol=1e-8, rel_tol=1e-8, vhold=-80.0) # 1e-12, 1e-14 # 1e-08, 1e-10
sim_myokit.name = "ORD2011"
sim_myokit.simulation.set_constant('cell.mode', cells['Epicardial'])
# y0_myokit = sim_myokit.pre_simulate(5000, sim_type=1)
d_myokit = sim_myokit.simulate(end_time, log_times=None, extra_log=['ina.INa', 'inal.INaL', 'ito.Ito', 'ical.ICaL', 'ikr.IKr', 'iks.IKs', 'ik1.IK1'])
myokit_current_response_info = mod_trace.CurrentResponseInfo(protocol)
for i, t in enumerate(d_myokit['engine.time']):
current_timestep = [
mod_trace.Current(name='I_Na', value=d_myokit['ina.INa'][i]),
mod_trace.Current(name='I_NaL', value=d_myokit['inal.INaL'][i]),
mod_trace.Current(name='I_to', value=d_myokit['ito.Ito'][i]),
mod_trace.Current(name='I_CaL', value=d_myokit['ical.ICaL'][i]),
mod_trace.Current(name='I_Kr', value=d_myokit['ikr.IKr'][i]),
mod_trace.Current(name='I_Ks', value=d_myokit['iks.IKs'][i]),
mod_trace.Current(name='I_K1', value=d_myokit['ik1.IK1'][i]),
]
myokit_current_response_info.currents.append(current_timestep)
print("--- %s seconds ---"%(time.time()-start_time))
# +
'''
Plot
'''
fig, axes = plt.subplots(8,1, figsize=(20,20))
# fig.suptitle(model_scipy.name, fontsize=14)
plot_li = ['Voltage'] + current_li
for i, name in enumerate(plot_li):
# ax.set_title('Simulation %d'%(simulationNo))
# axes[i].set_xlim(model_scipy.times.min(), model_scipy.times.max())
# ax.set_ylim(ylim[0], ylim[1])
axes[i].set_xlabel('Time (ms)')
    axes[i].set_ylabel(f'{name} (mV)' if i == 0 else name)  # only the voltage trace is in mV
if i==0:
axes[i].plot( t_eval, [protocol.get_voltage_at_time(t) for t in t_eval], label=name, color='k')
else:
axes[i].plot( d_myokit['engine.time'], myokit_current_response_info.get_current([name]), label=name, color='r', linewidth=5)
# axes[i].plot( sol_scipy.t, model_scipy.current_response_info.get_current([name]), label=name, color='k')
axes[i].legend()
axes[i].grid()
plt.subplots_adjust(left=0.0, bottom=0.3, right=1.0, top=0.95, wspace=0.5, hspace=0.2)
plt.show()
fig.savefig(os.path.join('Results', "ORD2011-VC"), dpi=100)
# +
start_time = time.time()
protocol = vc_protocols.hERG_CiPA()
sim_myokit.reset_simulation_with_new_protocol( protocol)
sim_myokit.simulation.set_constant('cell.mode', cells['Epicardial'])
y0_myokit = sim_myokit.pre_simulate(5000, sim_type=1)
d_myokit = sim_myokit.simulate(1000, log_times=None, extra_log=['ina.INa', 'inal.INaL', 'ito.Ito', 'ical.ICaL', 'ikr.IKr', 'iks.IKs', 'ik1.IK1'])
myokit_current_response_info = mod_trace.CurrentResponseInfo(protocol)
for i, t in enumerate(d_myokit['engine.time']):
current_timestep = [
mod_trace.Current(name='I_Na', value=d_myokit['ina.INa'][i]),
mod_trace.Current(name='I_NaL', value=d_myokit['inal.INaL'][i]),
mod_trace.Current(name='I_to', value=d_myokit['ito.Ito'][i]),
mod_trace.Current(name='I_CaL', value=d_myokit['ical.ICaL'][i]),
mod_trace.Current(name='I_Kr', value=d_myokit['ikr.IKr'][i]),
mod_trace.Current(name='I_Ks', value=d_myokit['iks.IKs'][i]),
mod_trace.Current(name='I_K1', value=d_myokit['ik1.IK1'][i]),
]
myokit_current_response_info.currents.append(current_timestep)
print("--- %s seconds ---"%(time.time()-start_time))
# +
'''
Plot
'''
fig, axes = plt.subplots(8,1, figsize=(20,20))
# fig.suptitle(model_scipy.name, fontsize=14)
plot_li = ['Voltage'] + current_li
for i, name in enumerate(plot_li):
# ax.set_title('Simulation %d'%(simulationNo))
# axes[i].set_xlim(model_scipy.times.min(), model_scipy.times.max())
# ax.set_ylim(ylim[0], ylim[1])
axes[i].set_xlabel('Time (ms)')
    axes[i].set_ylabel(f'{name} (mV)' if i == 0 else name)  # only the voltage trace is in mV
if i==0:
axes[i].plot( np.linspace(0,1000,2000), [protocol.get_voltage_at_time(t) for t in np.linspace(0,1000,2000)], label=name, color='k')
else:
axes[i].plot( d_myokit['engine.time'], myokit_current_response_info.get_current([name]), label=name, color='r', linewidth=5)
# axes[i].plot( sol_scipy.t, model_scipy.current_response_info.get_current([name]), label=name, color='k')
axes[i].legend()
axes[i].grid()
plt.subplots_adjust(left=0.0, bottom=0.3, right=1.0, top=0.95, wspace=0.5, hspace=0.2)
plt.show()
fig.savefig(os.path.join('Results', "ORD2011-VC"), dpi=100)
# +
import simulator_myokit
'''
Simulation with Myokit
'''
start_time = time.time()
model_path = "../mmt-model-files/ord-2011_VC.mmt"
model_myokit, protocol_myokit, script = myokit.load(model_path)
protocol = vc_protocols.hERG_CiPA()
sim_myokit = simulator_myokit.Simulator(model_myokit, protocol, max_step=1.0, abs_tol=1e-8, rel_tol=1e-10, vhold=-80.0) # 1e-12, 1e-14 # 1e-08, 1e-10
sim_myokit.simulation.set_constant('cell.mode', cells['Epicardial'])
y0_myokit = sim_myokit.pre_simulate(5000, sim_type=1)
d_myokit = sim_myokit.simulate(1000, log_times=None, extra_log=['ina.INa', 'inal.INaL', 'ito.Ito', 'ical.ICaL', 'ikr.IKr', 'iks.IKs', 'ik1.IK1'])
myokit_current_response_info = mod_trace.CurrentResponseInfo(protocol)
for i, t in enumerate(d_myokit['engine.time']):
current_timestep = [
mod_trace.Current(name='I_Na', value=d_myokit['ina.INa'][i]),
mod_trace.Current(name='I_NaL', value=d_myokit['inal.INaL'][i]),
mod_trace.Current(name='I_to', value=d_myokit['ito.Ito'][i]),
mod_trace.Current(name='I_CaL', value=d_myokit['ical.ICaL'][i]),
mod_trace.Current(name='I_Kr', value=d_myokit['ikr.IKr'][i]),
mod_trace.Current(name='I_Ks', value=d_myokit['iks.IKs'][i]),
mod_trace.Current(name='I_K1', value=d_myokit['ik1.IK1'][i]),
]
myokit_current_response_info.currents.append(current_timestep)
print("--- %s seconds ---"%(time.time()-start_time))
'''
Plot
'''
fig, axes = plt.subplots(8,1, figsize=(20,20))
# fig.suptitle(model_scipy.name, fontsize=14)
plot_li = ['Voltage'] + current_li
for i, name in enumerate(plot_li):
# ax.set_title('Simulation %d'%(simulationNo))
# axes[i].set_xlim(model_scipy.times.min(), model_scipy.times.max())
# ax.set_ylim(ylim[0], ylim[1])
axes[i].set_xlabel('Time (ms)')
    axes[i].set_ylabel(f'{name} (mV)' if i == 0 else name)  # only the voltage trace is in mV
if i==0:
axes[i].plot( np.linspace(0,1000,2000), [protocol.get_voltage_at_time(t) for t in np.linspace(0,1000,2000)], label=name, color='k')
else:
axes[i].plot( d_myokit['engine.time'], myokit_current_response_info.get_current([name]), label=name, color='r', linewidth=5)
# axes[i].plot( sol_scipy.t, model_scipy.current_response_info.get_current([name]), label=name, color='k')
axes[i].legend()
axes[i].grid()
plt.subplots_adjust(left=0.0, bottom=0.3, right=1.0, top=0.95, wspace=0.5, hspace=0.2)
plt.show()
fig.savefig(os.path.join('Results', "ORD2011-VC"), dpi=100)
# -
# Examples/Myokit_protocol_change_test.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
import os
import pandas as pd
cities = pd.read_csv('cities.csv', index_col=['CityId'], nrows=None)
cities = cities * 1000
cities.head()
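# Why multiply the coordinates by 1000? TSPLIB's `EUC_2D` metric rounds each edge length to the nearest integer, so scaling preserves roughly three decimal places of precision (our reading of the step above, not stated in the original):

```python
import math

def euc_2d(x1, y1, x2, y2):
    # TSPLIB EUC_2D: Euclidean distance rounded to the nearest integer
    return int(round(math.hypot(x2 - x1, y2 - y1)))

# without scaling, sub-unit coordinate differences collapse to zero...
assert euc_2d(0.0, 0.0, 0.4, 0.0) == 0
# ...but survive once coordinates are multiplied by 1000
assert euc_2d(0.0, 0.0, 400.0, 0.0) == 400
```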
# + magic_args="-e" language="bash"
# wget http://akira.ruc.dk/~keld/research/LKH/LKH-2.0.9.tgz
# tar xvfz LKH-2.0.9.tgz
# cd LKH-2.0.9
# make
# +
def write_tsp(nodes, filename, name='traveling-santa-2018-prime-paths'):
    # From https://www.kaggle.com/blacksix/concorde-for-5-hours.
    with open(filename, 'w') as f:
        f.write('NAME : %s\n' % name)
        f.write('COMMENT : %s\n' % name)
        f.write('TYPE : TSP\n')
        f.write('DIMENSION : %d\n' % len(nodes))
        f.write('EDGE_WEIGHT_TYPE : EUC_2D\n')
        f.write('NODE_COORD_SECTION\n')
        for row in nodes.itertuples():
            f.write('%d %.11f %.11f\n' % (row.Index + 1, row.X, row.Y))
        f.write('EOF\n')
write_tsp(cities, 'cities.tsp')
# +
def write_parameters(parameters, filename='LKH-2.0.9/params.par'):
with open(filename, 'w') as f:
for param, value in parameters:
f.write("{} = {}\n".format(param, value))
print("Parameters saved as", filename)
parameters = [
("PROBLEM_FILE", "cities.tsp"),
("OUTPUT_TOUR_FILE", "tsp_solution.csv"),
("SEED", 2018),
('CANDIDATE_SET_TYPE', 'POPMUSIC'), #'NEAREST-NEIGHBOR', 'ALPHA'),
('INITIAL_PERIOD', 10000),
('MAX_TRIALS', 1000),
]
write_parameters(parameters)
# + magic_args="-e" language="bash"
# timeout 100s ./LKH-2.0.9/LKH ./LKH-2.0.9/params.par
# +
def read_tour(filename):
tour = []
for line in open(filename).readlines():
line = line.replace('\n', '')
try:
tour.append(int(line) - 1)
        except ValueError:
            pass  # skip if not a city id (int)
return tour[:-1]
tour = read_tour('tsp_solution.csv')
print("Tour length", len(tour))
# -
# LKH.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Lecture 3.14: Introduction To Pytorch
#
# [**Lecture Slides**](https://docs.google.com/presentation/d/10G4hNPwtIq0urT--yN3VFznhqaS8kalBrlLXfcd9bvE/edit?usp=sharing)
#
# This lecture, we are going to train a neural network classifier in pytorch both on local CPU and cloud GPU.
#
# **Learning goals:**
# - understand the difference between `ndarray` and `Tensor`
# - carry out basic operations on `Tensor`s
# - backpropagate through a computational graph with `autograd`
# - build and train a neural network banknote classifier
# - setup a google colab notebook
# - train a neural network on a GPU
# - compare the pros and cons of pytorch and keras
# ## 0. Setup
#
# To use pytorch, we have to install the `torch` package. It was added to this project's `Pipfile`, so please run:
#
# pipenv install
#
# in the repo's root directory to install the latest dependencies.
#
#
# Remember your first `import numpy as np`? Time has flown by! This is the start of another exciting chapter, your first pytorch import 🎊
import torch
# ## 1. Tensors
#
# ### 1.1 Tensor Creation
#
# Welcome to the world of pytorch! 🔥
#
# The main class here is the `Tensor`. It's a scary mathematical name, but according to [pytorch](https://pytorch.org/tutorials/beginner/blitz/tensor_tutorial.html#sphx-glr-beginner-blitz-tensor-tutorial-py) themselves:
#
# > Tensors are similar to NumPy’s ndarrays, with the addition being that Tensors can also be used on a GPU to accelerate computing.
#
# This means that we can expect their interface to be similar to `ndarray`. 😌
#
# The preferred way to build `Tensor`s, is with the `torch.tensor()` constructor. This is similar to `np.array()`:
torch.tensor([42, 666])
# pytorch also offers a bunch of other useful constructors:
torch.randn(4, 2)
torch.ones(3, 5)
torch.zeros(1)
torch.arange(0, 1337, 55)
# ### 1.2 Basic Operations
#
# Just like NumPy, pytorch integrates closely with python by overloading many common operators. We can use python list slicing 🔪:
a = torch.tensor([0, 1, 1, 2, 3, 5, 8, 13])
print(a[3])
print(a[3:6])
print(a[1:7:2])
# We can use arithmetic operators:
a = torch.tensor([1, 2, 3])
b = torch.tensor([4, 5, 6])
print(a + b)
print(a*b)
# Element-wise operations are the default in pytorch, just like for `ndarray`s.
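# A small contrast of our own making: `*` multiplies element-wise, while `@` gives the actual dot product:

```python
import torch

a = torch.tensor([1, 2, 3])
print(a * 2)   # scalar broadcasting, element-wise
print(a @ a)   # dot product: 1*1 + 2*2 + 3*3
```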
# ### 1.3 Tensor Data Types
#
# Remember that NumPy is super fast because it uses its own data types. Maybe pytorch uses the same trick! Let's check the type of our tensors elements:
a = torch.tensor([1, 2, 3])
type(a[0])
# 😑 Ok we already knew this was a `Tensor`, but what data type does it contain? We can check through its `.dtype` field:
a[0].dtype
# If we want to convert this torch value to a python scalar, we can use the `.item()` method:
a[0].item()
# If we want to convert any `Tensor` to a numpy array, we can use the `.numpy()` method:
a = torch.tensor([1, 2, 3])
b = a.numpy()
b
# This is a _bridge_ meaning that `a` and `b` share their underlying memory location. Careful, as changing one will change the other!
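# A quick demonstration of that shared memory (our own example):

```python
import torch

a = torch.tensor([1, 2, 3])
b = a.numpy()
a[0] = 99          # mutate the tensor in place...
print(b)           # ...and the numpy view reflects it
```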
#
# Finally we can convert `ndarray`s to `Tensor`s with the `torch.from_numpy()` constructor:
# +
import numpy as np
a = np.array([1, 2, 3])
b = torch.from_numpy(a)
b
# -
# ## 2. Autograd
#
# ### 2.1 Backpropagation
#
# Pretty familiar grounds so far! But we haven't really shown what makes `Tensor`s special. 🌈 The official documentation mentions GPU acceleration, but we'll keep that for the end of the lecture. Until then, let's focus on another pytorch killer feature: the `grad_fn`.
#
# By providing the `requires_grad=True` argument to our `Tensor` constructors, we tell pytorch to track all operations on it. 📡 Each operation is recorded in the resulting `Tensor`'s `.grad_fn` field:
x = torch.ones(2, 2, requires_grad=True)
y = x + 2
y.grad_fn
# y is a `Tensor` "born" from the addition $x + 2$ on the `Tensor` `x`, which had `requires_grad=True`. Therefore pytorch kept track of this operation and saved it under `.grad_fn`.
#
# This `requires_grad` property is passed on to the children `Tensors`, meaning we can create a _chain_ of `.grad_fn`.
y.requires_grad
z = y**2
z.grad_fn
# Pytorch therefore keeps track of the _computational graph_ created from root tensors with `requires_grad=True`.
#
# But why are we going through this hassle? Well, tensors secretly work with `torch.autograd`, the [automatic differentiation](https://en.wikipedia.org/wiki/Automatic_differentiation) package 🕵️♀️. So by saving the operations linking `Tensor`s, `autograd` is able to numerically evaluate the derivatives of all nodes in the computational graph.
#
# When we call `.backward()` on a `Tensor`, we ask `autograd` to use the chain rule and evaluate the gradients through the nodes of the computational graph. The gradients are then saved on each `Tensor` under `.grad`. These are the partial derivatives of the operation on which we called `.backward()` with respect to that `Tensor`.
#
# This all sounds familiar.... Propagating gradients through a computation graph with the chain rule? Caching intermediate derivative values? `.backward()` is therefore an implementation of _backpropagation_. Of course, pytorch's end goal here is to use these gradients as part of _gradient descent_ to optimize neural networks.
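# A minimal end-to-end check of the chain above. We sum `z` down to a scalar so that `.backward()` needs no argument — a small liberty with the lecture's matrix-valued `z`:

```python
import torch

x = torch.ones(2, 2, requires_grad=True)
y = x + 2
z = (y ** 2).sum()
z.backward()
# by the chain rule dz/dx = 2 * (x + 2) = 6 for every element
print(x.grad)
```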
# ### 2.2 Linear Regression
#
# To understand exactly what gradient descent with pytorch looks like, let's implement a simple two layer linear neural network, less glamorously known as a ✨linear regression model✨ (check out lecture 3.5 for a refresher!) We'll use two features and a single _example_ to update the model parameters.
#
# First let's initialize our model parameters:
theta0 = torch.tensor(-2.0, requires_grad=True)
theta1 = torch.tensor(3.0, requires_grad=True)
theta2 = torch.tensor(-1.0, requires_grad=True)
# We then define our only training example:
x = torch.tensor([0.3, -0.1])
y = torch.tensor(0.4)
# We recall that the linear regression hypothesis is:
#
# $$ y = \sum_{i=0}^{n}\theta_{i}x_{i} $$
y_predict = theta0 + x[0] * theta1 + x[1] * theta2
y_predict
# So our initial prediction is $-1$ ... pretty far from the label $0.4$! Let's calculate the associated MSE loss:
loss = (y_predict - y)**2
loss
# This loss is the final node in our computation graph. By calling `.backward()` on it, we tell `autograd` to backpropagate through the edges, and calculate the gradients.
loss.backward()
# ... that was it? Backpropagation felt much more complicated in the lecture 3.12 slides 😅Let's check the gradient $\frac{dJ}{d\theta_{0}}$:
theta0.grad
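# We can verify this gradient by hand: $J = (\hat{y} - y)^2$, so $\frac{dJ}{d\theta_0} = 2(\hat{y} - y) \cdot 1 = 2(-1 - 0.4) = -2.8$ — matching what `autograd` computed. A self-contained check (restating the values from above):

```python
import torch

theta0 = torch.tensor(-2.0, requires_grad=True)
theta1 = torch.tensor(3.0, requires_grad=True)
theta2 = torch.tensor(-1.0, requires_grad=True)
x = torch.tensor([0.3, -0.1])
y = torch.tensor(0.4)
y_predict = theta0 + x[0] * theta1 + x[1] * theta2
loss = (y_predict - y) ** 2
loss.backward()
manual = 2 * (y_predict.item() - y.item())   # chain rule by hand
print(theta0.grad, manual)
```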
# That's pretty cool, but this value of $-2.8$ doesn't get us anywhere on its own. We need to use it as part of a gradient descent update:
# +
print(f"value of theta0 before gradient descent update: {theta0}")
alpha = 0.01
theta0 = theta0 - alpha * theta0.grad
theta1 = theta1 - alpha * theta1.grad
theta2 = theta2 - alpha * theta2.grad
print(f"value of theta0 after gradient descent update: {theta0}")
# -
# The value of $\theta_{0}$ was updated by taking a step in the opposite direction to the gradient. Let's check how this affected our loss:
y_predict = theta0 + x[0] * theta1 + x[1] * theta2
new_loss = (y_predict - y)**2
print(f"Loss before gradient descent update: {loss}")
print(f"Loss after gradient descent update: {new_loss}")
# The loss was minimized, and if we repeated this process many times with many examples, we might obtain $\theta$s which correctly predict the labels `y`.
#
# Note that almost _all_ of the code above is mathematical operations. Pytorch is appreciated by data scientists because it abstracts away the machine learning heavy lifting (e.g. backpropagation), but still lets them control the low level operations.
#
# Sometimes we don't want to write all the low level maths, and would rather get on with training our neural networks. 😅 Let's use pytorch to train the exact same fake banknote detection classifier as last lecture.
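# Two details matter when repeating the update step that a single step hides: parameter updates should happen inside `torch.no_grad()` (so they aren't tracked by autograd), and gradients must be zeroed between steps because `.backward()` *accumulates* them. A sketch of a full loop on the same toy example (our own code, not from the lecture):

```python
import torch

x = torch.tensor([0.3, -0.1])
y = torch.tensor(0.4)
theta = torch.tensor([-2.0, 3.0, -1.0], requires_grad=True)
alpha = 0.01

for _ in range(100):
    y_pred = theta[0] + x[0] * theta[1] + x[1] * theta[2]
    loss = (y_pred - y) ** 2
    loss.backward()
    with torch.no_grad():            # don't track the update itself
        theta -= alpha * theta.grad
    theta.grad.zero_()               # .backward() accumulates, so reset
print(loss.item())
```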
# ## 3. Data Munging
# We load our dataset in a `DataFrame`:
# +
import pandas as pd
df = pd.read_csv('bank_note.csv')
df.head()
# -
# Our features are scaled and ready to go! 🏋️♀️We'll use all 4 features and put them in a feature matrix:
# +
import torch
X = df[['feature_1', 'feature_2', 'feature_3', 'feature_4']].values
y = df['is_fake'].values
# -
# Notice that in the linear regression example above, we don't have access to a `.fit(X, y)` method which takes care of feeding the dataset to the neural network. Therefore, we are going to want to manually create _examples_, which are pairs of (features, label), i.e. pairs of rows from `X` and `y`. We can create these tuple pairs with the handy [`.zip()`](https://realpython.com/python-zip-function/) function:
dataset = list(zip(X, y))
dataset[0]
# Our first example here has:
#
# $$
# x = \begin{bmatrix}1.12\\1.15\\-0.98\\0.35\end{bmatrix}; y = 0
# $$
# ## 4. Training
# ### 4.1 Neural Network Structure
#
# Our dataset is ready, so now we want to train a neural network! We saw in the previous section that pytorch allows to backpropagate through any mathematical operation. However, passing around raw `x + y ` functions and `Tensors` can lead to a lot of copy-pasting and isn't very bug-safe. 😬 Instead we'd like to abstract away the computational graph in a dedicated class. Pytorch makes this easy with the `nn.Module`.
#
# We extend the `nn.Module` abstract class, and create a `Net` class. All we have to do is:
# - define our _layers_ in the `__init__` constructor
# - define the computational graph in the `.forward()` method. For neural networks, this often includes applying activation functions, and feeding the output of one layer into the next. `.forward()` is simply the chained function defined by the neural network, which takes features as input and returns predictions as output.
#
# Since we are lazy, we'll use the ready-made pytorch `nn.Linear` layer. Just like keras, this will infer the required number of model parameters from its inputs and outputs. `nn.Linear` also takes care of setting `requires_grad` on all the right `Tensor`s, so we know backpropagation will work.
#
# Let's put all of this together in a 2 hidden layer neural network with ReLU activations:
#
# +
import torch
import torch.nn as nn
import torch.nn.functional as F
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
# create the layers
self.dense1 = nn.Linear(4, 6)
self.dense2 = nn.Linear(6, 6)
self.dense3 = nn.Linear(6, 1)
def forward(self, x):
# first hidden layer
x = F.relu(self.dense1(x))
# second hidden layer
x = F.relu(self.dense2(x))
# output layer
x = torch.sigmoid(self.dense3(x))
return x
net = Net()
print(net)
# -
# We made a neural network in pytorch 🎊
#
# 🧠 What do the arguments to `nn.Linear()` represent? Why are we using these particular values here?
#
# Let's test its predictions:
x_predict = torch.tensor([0.1, -0.2, 0.3, 0.4])
net(x_predict)
# Yes, `nn.Module` does some background python magic for us so we can call our network directly as a function. 😮
# That prediction doesn't look great though, because we haven't trained the model yet!
#
# ### 4.2 Dataset Iteration
# For this, we want to loop through our `dataset`. But we also don't really want to implement batches or shuffling of examples... Luckily, pytorch takes care of that for us too.
#
# We use a `DataLoader` to iterate through our dataset. The `torch.utils.data` module can load many dataset types, including our list of tuples of `ndarrays`. Check out the [documentation](https://pytorch.org/docs/stable/data.html) for more details! All we have to do, is pass it in the constructor:
#
# +
from torch.utils.data import DataLoader
ds_loader = DataLoader(dataset, batch_size=32, shuffle=True)
# -
# `ds_loader` is a python _iterable_ , meaning we can use it in `for` loops, or peek into the first batch of examples like this:
list(ds_loader)[0]
# That's a lot of values! The shape of the features should always be `(batch_size, dims)`:
list(ds_loader)[0][0].shape
# ### 4.3 Loss and Optimization
#
# There are two missing pieces to our training. Just like the `nn.Linear` layers, we don't really want to implement the _loss_ and the _optimizer_ for our neural network ourselves. Recall that the loss is the mathematical function defining the average model error for given predictions and labels, $J$. The optimizer defines how each model parameter $\theta$ is updated, given the gradient of the loss with respect to that parameter. These can get complicated! Instead, we'll use the `torch.nn` and the `torch.optim` modules:
criterion = nn.BCELoss()
optimizer = torch.optim.Adam(net.parameters(), lr=0.001)
# - `criterion` is a simple function which calculates the loss:
# ```
# loss = criterion(predictions, labels)
# ```
# We choose binary cross-entropy loss (`BCELoss`) since we are dealing with a binary classification problem.
#
# - `optimizer` will fetch the `.grad` fields directly on the `net.parameters()`, and adapts the learning rate according to its algorithm. The update is done in place by calling the `.step()` method:
# ```
# optimizer.step()
# ```
# We pick `Adam` because adam rocks 🤘
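# To make the loss concrete, here is a hand-rolled sketch of mean binary cross-entropy in plain Python (`bce_loss` below is our own helper for illustration, not a pytorch API — `nn.BCELoss` computes the same quantity on `Tensor`s):

```python
import math

def bce_loss(predictions, labels):
    """Mean binary cross-entropy over a batch of probabilities and 0/1 labels."""
    per_example = [
        -(y * math.log(p) + (1 - y) * math.log(1 - p))
        for p, y in zip(predictions, labels)
    ]
    return sum(per_example) / len(per_example)

# confident & correct predictions -> small loss
print(bce_loss([0.9, 0.1], [1, 0]))  # ≈ 0.105
# confident & wrong predictions -> large loss
print(bce_loss([0.1, 0.9], [1, 0]))  # ≈ 2.303
```

# Notice how the loss punishes confident mistakes much harder than it rewards confident correct answers — that asymmetry is what drives the probabilities towards the labels during training.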
# ### 4.4 Optimization
#
# We're now ready to optimize! Let's start easy, just like our pytorch linear regression. We'll carry out _one_ gradient descent update, with only the first batch of data.
#
# First, we load the batch from our `DataLoader`. We have to convert its `dtype` and reshape the labels, because like `sklearn`, pytorch likes its input a certain way:
# fetch the first batch
inputs, labels = list(ds_loader)[0]
# pytorch likes floats
inputs = inputs.float()
# view = np.reshape(), need a matrix here
labels = labels.float().view(-1, 1)
# Then we can predict the outputs using our neural network and calculate the associated cross-entropy loss:
# predict
outputs = net(inputs)
# loss from predictions & labels
loss = criterion(outputs, labels)
print(f"Loss before gradient descent update: {loss}")
# Finally we can use gradient descent to update the model parameters:
# +
# always zero the parameter gradients before calling .backward()
optimizer.zero_grad()
# backpropagation
loss.backward()
# gradient descent update
optimizer.step()
# calculate the new loss
outputs = net(inputs)
loss = criterion(outputs, labels)
print(f"Loss after gradient descent update: {loss}")
# -
# 🧠 Take your time to understand the above code, and step through the different stages. What happens in the `.net()` call? what about the `.step()` method?
#
# We reduced the loss! This is great progress, but we'd like to find the global minimum, not just go "down" one step... For this, we have to repeat the updates for all batches and several epochs. Let's go! 🤠
#
# Pytorch uses a dynamic computation graph, so it starts from scratch every time we make a forward pass through the neural network: `net(inputs)`. This means we can just loop the gradient descent step without worrying about global state or breaking things. We'll also set the pytorch and NumPy random seeds to improve [reproducibility](https://pytorch.org/docs/stable/notes/randomness.html).
# +
# reproducibility
torch.manual_seed(1337)
np.random.seed(666)
# initialization
net = Net()
criterion = nn.BCELoss()
optimizer = torch.optim.Adam(net.parameters(), lr=0.001)
losses = []
# loop over epochs
for epoch in range(100):
print(f'epoch {epoch} ')
# loop over batches
for i, data in enumerate(ds_loader):
# data loading
inputs, labels = data
inputs = inputs.float()
labels = labels.float().view(-1, 1)
# prediction
outputs = net(inputs)
loss = criterion(outputs, labels)
# optimization
optimizer.zero_grad()
loss.backward()
optimizer.step()
# print statistics
losses.append(loss.item())
print('finished training')
# -
# Wow that was fast! Pytorch's custom data types and `autograd` package make for swift gradient calculations. And we haven't even used GPUs yet 😏
#
# 🧠 How many epochs did we train our neural network for?
#
# We stored the mean loss of each batch in the `losses` list. Let's visualize our loss curve:
# +
import matplotlib.pyplot as plt
import seaborn as sns
sns.set()
fig = plt.figure(dpi=100)
ax = fig.add_subplot(111)
ax.plot(losses, lw=1)
ax.set_xlabel('batch')
ax.set_ylabel('loss')
ax.set_title('Loss Curve');
# -
# The loss curve looks just the same as last lecture with keras, which means we have successfully trained our pytorch neural network. 🎊
#
#
# ## 5. Prediction
#
# Just like we did with our untrained model, we can call the `Net` directly as a function to predict the label a given input `Tensor`:
banknote = [0.0, -0.1, 0.3, 0.2]
x_predict = torch.tensor(banknote)
net(x_predict)
# Our neural network is very confident that this banknote is genuine 💸
# ## 6. Exercises
#
# ### 6.1 BCEWithLogits
#
# For binary classification, pytorch recommends the use of [`BCEWithLogitsLoss`](https://pytorch.org/docs/stable/nn.html#bcewithlogitsloss). According to the documentation:
#
# > This loss combines a Sigmoid layer and the BCELoss in one single class. This version is more numerically stable than using a plain Sigmoid followed by a BCELoss as, by combining the operations into one layer, we take advantage of the log-sum-exp trick for numerical stability.
#
# Sounds like a good idea for our fake banknote detector. 👌
#
# 💪 Implement and train the same pytorch neural network model for banknote classification, but this time, use `BCEWithLogitsLoss`.
# - since `BCEWithLogitsLoss` already incorporates the output layer sigmoid activation, you'll have to rewrite the `Net` class and remove it
# - use the same hyperparameters as above: `batch_size=32`, `epochs=100`, `Adam` optimizer, ...
# - store your losses per batch in `losses`, the loss curve will be automatically plotted when running the unit test
# - the loss curve should look exactly the same as above
# INSERT YOUR CODE HERE
# +
def test_bce_with_logits():
assert losses, "Can't find losses. Did you use the correct variable name?"
assert np.array(losses[-10:]).mean() < 0.005, "It doesn't look like your loss converged"
print('Success! 🎉')
fig = plt.figure(dpi=100)
ax = fig.add_subplot(111)
ax.plot(losses, lw=1)
ax.set_xlabel('batch')
ax.set_ylabel('loss')
ax.set_title('Loss Curve')
test_bce_with_logits()
# -
# ### 6.2 Batch size analysis
#
# 💪💪 Analyse the effect of batch size on neural network optimization, by plotting the loss curves of models with different batch sizes.
# - look at lecture 3.13 for an example... but implement this with pytorch!
# - plot the loss curves side by side, or the graphs will be unreadable
# - wrap the neural network training in a function to easily iterate through different batch sizes
# - either reduce the number of epochs to ~ 20, or find a way to set individual epochs for each batch size. Otherwise, the small batch sizes will take a long time to train
# - the graph is the unit test 🙃
#
# +
# INSERT YOUR CODE HERE
# -
for batch_size, losses in losses_dict.items():
fig = plt.figure(dpi=100)
ax = fig.add_subplot(111)
ax.plot(losses, lw=1)
ax.set_xlabel('batch')
ax.set_ylabel('loss')
ax.set_title(f'batch_size={batch_size}');
# 🧠 Think about your results. Do they agree with the results from lecture 3.13?
# ## 7. GPUs
#
# We've seen how pytorch adds backpropagation support to Tensors, and provides some useful methods to iterate through datasets, calculate common losses, and perform gradient updates on model parameters.
#
# Tensors have another trick up their sleeve: they can be used on a _GPU_. As mentioned in the slides, GPUs can parallelize matrix operations, and have a larger memory bandwidth. Training a neural network on a GPU can be orders of magnitude faster than a CPU.
#
# Let's move to google colab and play with some fancy hardware! 🤖 Jupyter and google colab are compatible, so we can open the second notebook in this directory [directly in colab](https://colab.research.google.com/github/camille-vanhoffelen/introduction-to-machine-learning/blob/master/data_analysis/lecture3.14/introduction_to_pytorch_GPU.ipynb).
#
# This downloads the `.ipynb` file straight from github, and is therefore not connected to your local file. If you'd like to modify and save that notebook, either make a copy in your google drive, or open your local file in google colab directly.
# ## 8. Pytorch vs Keras vs Tensorflow
#
# We have learned about _two_ deep learning frameworks, keras, and pytorch. By now, it should be obvious that these libraries take a different approach to neural networks. Keras' api is more abstracted and OOP focused, whilst pytorch augments mathematical functions. There is no "better" choice, as each will shine for different usecases:
#
# **keras**
# - concise code
# - little theoretical understanding required
# - good for simple projects and fast POCs
#
# **pytorch**
# - lower level control
# - more mathematical
# - good for growing projects and advanced NNs
# - popular nowadays 😎 great community support
#
# Tensorflow, which we have been running as keras backend, is another popular open source deep learning library. It is on par with pytorch in terms of customization and low level control, and sells itself as being production and deployment focused. However, more and more machine learning engineers are switching to pytorch, which is considered easier to develop with. This might change in the near future, so it's good to keep an eye on the evolution of the ML open source ecosystem 👀.
#
# All in all, coders are allowed to have personal preferences 💁♂️, so you should pick the tools that best solve your problem _and_ that you are most comfortable with. Now you have two to choose from ✌️
#
# ## 9. Summary
#
# Today, we discovered a new deep learning library, **pytorch**. We first explained how **GPUs** can accelerate neural networks with parallel matrix operations, and large memory bandwidth. Computing time is a bottleneck in neural network optimization, so **faster** training leads to **more powerful** models. We listed tensorflow & pytorch as the most popular DL frameworks which can interface with GPUs. We then learned how to use pytorch: first by creating **`Tensor`s**, then by backpropagating through a computation graph with the **`autograd`** package. We applied this to neural networks by extending the **`nn.Module`**, where we defined **layers** and the **`.forward()`** method. We **looped** this code by iterating through a **`DataLoader`** , and recreated the banknote authentication classifier from last lecture. Finally, we ported the pytorch neural network to **Google colab** , and trained it on a **GPU runtime environment**.
#
# # Resources
#
# ## Core Resources
#
# - [**Slides**](https://docs.google.com/presentation/d/10G4hNPwtIq0urT--yN3VFznhqaS8kalBrlLXfcd9bvE/edit?usp=sharing)
# - [Pytorch tutorial](https://pytorch.org/tutorials/beginner/deep_learning_60min_blitz.html)
# Fast official pytorch tutorial, great for a refresher on basics
# - [Collection of DL models implemented in tf and pytorch](https://github.com/rasbt/deeplearning-models)
# Great to have bookmarked for inspiration / debugging when creating neural networks with pytorch or tensorflow
#
# ### Additional Resources
#
# - [Pytorch ecosystem](https://pytorch.org/ecosystem/)
# List of the many libraries extending pytorch
# - [ignite](https://github.com/pytorch/ignite)
# Pytorch wrapper similar to the keras api
# - [Pytorch for fast.ai](https://www.fast.ai/2017/09/08/introducing-pytorch-for-fastai/)
# Blogpost explaining fast.ai's move from keras+tensorflow to pytorch
# - [Pytorch reproducibility](https://pytorch.org/docs/stable/notes/randomness.html)
# Notes about randomness in pytorch
# - [Why are GPUs well suited to deep learning](https://www.quora.com/Why-are-GPUs-well-suited-to-deep-learning)
# Quora thread with a detailed explanation of GPUs advantages over CPUs
# - [Carbon footprint of large language models](https://arxiv.org/pdf/1906.02243.pdf)
# Famous paper investigating the environmental impact of ML training
|
data_analysis/3.14_introduction_to_pytorch/introduction_to_pytorch.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## 1 - Packages
# - [numpy](https://www.numpy.org)
# - [pandas](https://pandas.pydata.org)
#
#
import numpy as np
import pandas as pd
# ## 2 - Load the dataset
# Load the [dataset](https://www.kaggle.com/c/titanic/data), the dataset already split for you.
#
#
# +
train_data = pd.read_csv("train.csv", delimiter = ',')
test_data = pd.read_csv("test.csv", delimiter = ',')
test_result = pd.read_csv("gender_submission.csv", delimiter = ',')
test_data["Survived"] = test_result["Survived"]
# -
# ## 3 - prepare the dataset
#
# +
def prepare_data(table):
X = table.drop(["PassengerId", "Name", "Ticket", "Cabin", "Survived"], axis = 1)
Y = table["Survived"]
# fix "Age"
age_avg = round(X["Age"].sum() / (len(X) - X["Age"].isna().sum()), 1)
X["Age"].fillna(age_avg, inplace = True)
#normalize Age
X["Age"] = (X["Age"] - X["Age"].mean()) / X["Age"].std()
# fix "Fare"
Fare_avg = round(X["Fare"].sum() / (len(X) - X["Fare"].isna().sum()), 1)
X["Fare"].fillna(Fare_avg, inplace = True)
X["Fare"] = (X["Fare"] - X["Fare"].mean()) / X["Fare"].std()
# fix Embarked
X["Embarked"] = X["Embarked"].map({'C' : 0.0, 'S' : 1.0, 'Q' : 2.0})
X['Embarked'].fillna(-1, inplace = True)
#fix sex
X['Sex'] = X['Sex'].map({'female' : 0.0, 'male' : 1.0})
return X.values.T, Y.values.reshape(1, len(Y))
# -
X_train, Y_train = prepare_data(train_data)
X_test, Y_test = prepare_data(test_data)
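# The normalization applied to `Age` and `Fare` in `prepare_data` is a standard z-score. A standalone numpy sketch of the idea, with made-up ages (note that pandas' `.std()` uses the sample standard deviation, while numpy defaults to the population one — either way the result is centered and rescaled):

```python
import numpy as np

# toy feature column with made-up values
ages = np.array([22.0, 38.0, 26.0, 35.0, 29.0])

# z-score: subtract the mean, divide by the standard deviation
standardized = (ages - ages.mean()) / ages.std()

print(standardized.mean())  # ~0
print(standardized.std())   # ~1
```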
# ## 4 - Activation functions
#
# - [sigmoid](https://en.wikipedia.org/wiki/Sigmoid_function)
# - [relu](https://en.wikipedia.org/wiki/Rectifier_(neural_networks))
def sigmoid(Z):
"""
Argument:
Z -- the input for the activation function
Returns:
A -- the output of the activation function
    cache -- Z, stored for the backward pass
"""
A = 1.0 / (1.0 + np.exp(-Z))
cache = (Z)
return A, cache
def sigmoid_backward(dA, cache):
"""
    Arguments:
    dA -- post-activation gradient
    cache -- Z stored during the forward pass
Returns:
dZ -- gradients of the activations
"""
Z = cache
s = 1.0 / (1.0 + np.exp(-Z))
dZ = dA * s * (1 - s)
return dZ
def relu(Z):
"""
Argument:
Z -- the input for the activation function
Returns:
A -- the output of the activation function
    cache -- Z, stored for the backward pass
"""
A = Z * (Z > 0)
cache = (Z)
return A, cache
def relu_backward(dA, cache):
"""
    Arguments:
    dA -- post-activation gradient
    cache -- Z stored during the forward pass
Returns:
dZ -- gradients of the activations
"""
Z = cache
dZ = dA * (Z > 0)
return dZ
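# A quick way to build trust in these backward functions is to compare the analytic derivative $s(1-s)$ against a finite-difference estimate. A minimal standalone numpy sketch (redefining the forward pass inline rather than reusing the cells above):

```python
import numpy as np

def sigmoid(Z):
    return 1.0 / (1.0 + np.exp(-Z))

def sigmoid_grad(Z):
    # analytic derivative: s * (1 - s)
    s = sigmoid(Z)
    return s * (1 - s)

# central finite differences approximate the derivative numerically
Z = np.array([-2.0, -0.5, 0.0, 1.5])
eps = 1e-6
numeric = (sigmoid(Z + eps) - sigmoid(Z - eps)) / (2 * eps)
analytic = sigmoid_grad(Z)

# the two should agree to many decimal places
print(np.max(np.abs(numeric - analytic)))
```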
# ## 5 - Initialization
# Initialize the weights matrices and biases vectors.
def initialize_parameters(n_x, n_h, n_y):
"""
Argument:
n_x -- number of units in the input layer
n_h -- number of units in the hidden layer
    n_y -- number of units in the output layer
Returns:
parameters -- python dictionary containing your parameters:
W1 -- weight matrix of shape (n_h, n_x)
b1 -- bias vector of shape (n_h, 1)
W2 -- weight matrix of shape (n_y, n_h)
b2 -- bias vector of shape (n_y, 1)
"""
np.random.seed(1)
W1 = np.random.randn(n_h, n_x) * 0.01
b1 = np.zeros((n_h, 1))
W2 = np.random.randn(n_y, n_h) * 0.01
b2 = np.zeros((n_y, 1))
assert(W1.shape == (n_h, n_x))
assert(b1.shape == (n_h, 1))
assert(W2.shape == (n_y, n_h))
assert(b2.shape == (n_y, 1))
parameters = {"W1": W1,
"b1": b1,
"W2": W2,
"b2": b2}
return parameters
# ## 6 - Forward propagation
def linear_forward(A, W, b):
"""
Implement the linear part of a layer's forward propagation.
Arguments:
A -- activations from previous layer
W -- weights matrix
b -- bias vector
Returns:
Z -- the input of the activation function, also called pre-activation parameter
cache -- dictionary containing A, W and b; stored to compute backward propagation step
"""
Z = np.dot(W, A) + b
cache = (A, W, b)
return Z, cache
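# The shape bookkeeping here is worth a sanity check: with examples stored as columns, `W` of shape `(n_units, n_features)` maps `(n_features, m)` activations to `(n_units, m)` pre-activations, and the `(n_units, 1)` bias broadcasts across the `m` columns. A small standalone sketch:

```python
import numpy as np

# 4 features, 3 examples (columns are examples in this convention)
A = np.random.randn(4, 3)
# a layer with 5 units: W is (n_units, n_features), b broadcasts over columns
W = np.random.randn(5, 4)
b = np.zeros((5, 1))

Z = np.dot(W, A) + b
print(Z.shape)  # (5, 3): one pre-activation per unit per example
```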
def linear_activation_forward(A_prev, W, b, activation):
"""
Implement the forward propagation for the LINEAR->ACTIVATION layer
Arguments:
A_prev -- activations from previous layer
W -- weights matrix
b -- bias vector
activation -- the activation to be used in this layer
Returns:
A -- output of the activation function
    cache -- tuple storing linear_cache and activation_cache to compute the backward propagation step
"""
Z, linear_cache = linear_forward(A_prev, W, b)
if activation == "sigmoid":
A, activation_cache = sigmoid(Z)
if activation == "relu":
A, activation_cache = relu(Z)
cache = (linear_cache, activation_cache)
return A, cache
# ## 7 - Cost function
def compute_cost(Yhat, Y):
"""
Arguments:
Yhat -- probabilities vector
Y -- labels vector
Returns:
cost -- cross-entropy cost
"""
m = Y.shape[1]
cost = (-1 / m) * np.sum(np.multiply(Y, np.log(Yhat)) + np.multiply(1 - Y, np.log(1 - Yhat)))
cost = np.squeeze(cost)
return cost
# ## 8 - Backward propagation
def linear_backward(dZ, cache):
"""
Arguments:
dZ -- gradients of activations
cache -- tuple of values (A_prev, W, b)
Returns:
dA_prev -- gradients of activations
dW -- gradients of weights
db -- gradients of biases
"""
A_prev, W, b = cache
m = A_prev.shape[1]
    dW = np.dot(dZ, A_prev.T) / m
db = np.sum(dZ, axis= 1, keepdims= True) / m
dA_prev = np.dot(W.T, dZ)
return dA_prev, dW, db
def linear_activation_backward(dA, cache, activation):
"""
Arguments:
dA -- gradients of activations
cache -- tuple of values (linear_cache, activation_cache)
activation -- activation function to use
Returns:
dA_prev -- gradients of activations
dW -- gradients of weights
db -- gradients of biases
"""
linear_cache, activation_cache = cache
if activation == "sigmoid":
dZ = sigmoid_backward(dA, activation_cache)
if activation == "relu":
dZ = relu_backward(dA, activation_cache)
dA_prev, dW, db = linear_backward(dZ, linear_cache)
return dA_prev, dW, db
# ## 9 - Update parameters
def update_parameters(parameters, grads, learning_rate = 1.2):
"""
Arguments:
parameters -- dictionary containing the parameters
grads -- dictionary contaning the gradients
learning_rate -- the learning rate
Returns:
parameters -- dictionary containing updated parameters
"""
W1 = parameters["W1"]
b1 = parameters["b1"]
W2 = parameters["W2"]
b2 = parameters["b2"]
dW1 = grads['dW1']
db1 = grads["db1"]
dW2 = grads["dW2"]
db2 = grads["db2"]
W1 = W1 - learning_rate * dW1
b1 = b1 - learning_rate * db1
W2 = W2 - learning_rate * dW2
b2 = b2 - learning_rate * db2
parameters = {"W1": W1,
"b1": b1,
"W2": W2,
"b2": b2}
return parameters
# ## 10 - The model
# Everything comes together here
# +
def model(X, Y, layers_dims, learning_rate = 0.0075, num_iterations = 3000):
"""
    Arguments:
    X -- training examples
Y -- training labels
layers_dims -- layers dimensions
learning_rate -- the learning rate
num_iterations -- number of iterations for gradient descent
Returns:
parameters -- the parameters for the final iteration
"""
grads = {}
m = X.shape[1]
(n_x, n_h, n_y) = layers_dims
parameters = initialize_parameters(n_x, n_h, n_y)
W1 = parameters["W1"]
b1 = parameters["b1"]
W2 = parameters["W2"]
b2 = parameters["b2"]
for i in range(0, num_iterations):
A1, cache1 = linear_activation_forward(X, W1, b1, "relu")
A2, cache2 = linear_activation_forward(A1, W2, b2, "sigmoid")
cost = compute_cost(A2, Y)
dA2 = -(np.divide(Y, A2) - np.divide(1 - Y, 1 - A2))
dA1, dW2, db2 = linear_activation_backward(dA2, cache2, "sigmoid")
        dA0, dW1, db1 = linear_activation_backward(dA1, cache1, "relu")
grads['dW1'] = dW1
grads['db1'] = db1
grads['dW2'] = dW2
grads['db2'] = db2
parameters = update_parameters(parameters, grads, learning_rate)
W1 = parameters["W1"]
b1 = parameters["b1"]
W2 = parameters["W2"]
b2 = parameters["b2"]
if (i + 1) % 100 == 0:
print("Cost after iteration {}: {}".format(i + 1, np.squeeze(cost)))
return parameters
# -
parameters = model(X_train, Y_train, layers_dims = (7, 8, 1))
# ## 11 - Prediction
#
# Use `X_test` and `Y_test` to make predictions
# +
def predict(X, Y, parameters):
"""
X -- test examples
Y -- test labels
    parameters -- trained model parameters
Returns:
    percent -- the classification accuracy in percent
"""
W1 = parameters["W1"]
b1 = parameters["b1"]
W2 = parameters["W2"]
b2 = parameters["b2"]
A1, cache = linear_activation_forward(X, W1, b1, "relu")
A2, cache = linear_activation_forward(A1, W2, b2, "sigmoid")
m = Y.shape[1]
predictions = (Y == (A2 > 0.5))
percent = np.sum(predictions) / m * 100
return percent
# -
print("{0:.2f}%".format(predict(X_test, Y_test, parameters)))
|
2-Layer-NN-without-tensorflow.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Numbers
#
# Numbers are classified by their properties into sets, called "number sets". Each number set is contained within another, with the exception of the *complex numbers*. Below is a diagram showing the sets and how they relate:
#
# 
#
# Next we will look at the properties of these sets and how to represent them in Python.
# ## Natural numbers
#
# The natural numbers arise from the need to count. This set is defined by:
#
# $$\mathbb{N} = \{1, 2, 3, 4, 5, ...\}$$
#
# Their representation in Python is as follows:
2
# ## Whole numbers and integers
#
# The whole numbers $\mathbb{N}^*$ include the neutral element 0, while the integers $\mathbb{Z}$ also include the negative elements. They can be defined as:
#
# $$\mathbb{N}^* = \{0, 1, 2, 3, 4, 5, ...\}$$
# $$\mathbb{Z} = \{..., -3, -2, -1, 0, 1, 2, 3, 4, 5, ...\}$$
#
# The set of integers $\mathbb{Z}$ contains ($\subset$) the naturals $\mathbb{N}$:
#
# $$\mathbb{N} \subset \mathbb{Z}$$
#
# This reads *the set of naturals is a subset of the integers*.
#
# Their representation in Python is as follows:
-1, 0, 3
# ### Operations with integers
#
# Some of the operations that can be performed with integers:
1 + 1
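# Beyond addition, the usual integer operations are built in. A small illustrative sketch:

```python
# integer arithmetic in Python
print(7 + 3)    # addition: 10
print(7 - 3)    # subtraction: 4
print(7 * 3)    # multiplication: 21
print(7 // 3)   # floor division: 2 (result stays an integer)
print(7 % 3)    # remainder: 1
print(2 ** 10)  # exponentiation: 1024
```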
|
notebooks/matematica/preuniversitaria/numeros.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Efficiently Donating Excess COVID Vaccine Supplies
# ### Exploring logistics optimizations in the context of COVID vaccination donations
# by <NAME>; <NAME>; <NAME>; <NAME>; <NAME> (<EMAIL>)
# We've all heard of the Traveling Salesperson Problem<sup>(1)</sup>, it is one of the most common Graph problems and has numerous implementations. In practice, however, problems are much more complicated. In the world of logistics, a more realistic setup is typically the Multiple Supply Demand Chain Optimization (MSDO), where there are multiple sources and sinks and we're looking for the most optimal delivery routes. A generic overview can be seen here: https://www.kinetica.com/blog/kinetica-graph-analytics-multiple-supply-demand-chain-optimization-msdo-graph-solver/
#
# To demonstrate Multiple Supply Demand Chain Optimization (MSDO) with an immediate challenge we face globally, we've modeled the challenge of vaccine donations. The US has excess vaccine supply which expires over time. If supply is expiring, it is better to donate it abroad before expiration, as quickly as possible. The White House has been doing this<sup>(2)</sup>. But can it be done more efficiently?
#
# This is a complex problem:
# * We have multiple supply sites (each state or region) which can feed into major international airports
# * We have multiple demand sites (many nations), many of which desperately require more vaccine supply
# * We have time constraints since accumulation, transport, and distribution need to be faster than expiration timelines
# * Everything above is dynamic -- the supply and demand constantly changes with broad usage and infection trends
#
# We have a Multiple Supply Demand Chain Optimization (MSDO) problem! We've modeled the supply, demand, routes, and have everything ready to run on a database (to respond to daily changes in global supply and demand.) The setup documentation can be seen at https://docs.kinetica.com/7.1/guides/match_graph_dc_multi_supply_demand/ but it will be more instructive to run it yourself below. Everything below will run on the Developer Edition (https://www.kinetica.com/try/) or on Kinetica Cloud (https://www.kinetica.com/kinetica-as-a-service-on-azure/)
# The CDC publishes extensive daily data on vaccine delivery, availability, and administration here: https://covid.cdc.gov/covid-data-tracker/#vaccinations_vacc-total-admin-rate-total (see button "View Historic Vaccination Data".) From this, an approximate "excess" supply can be calculated at the state level for each state in the US. Further, from standard data on vaccine expirations, we can calculate excess supply which is at risk of expiring.
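# The core of that excess-supply calculation is simple bookkeeping: doses delivered minus doses administered, per jurisdiction. A toy sketch with made-up state figures (the real CDC columns and values differ):

```python
# hypothetical per-state figures: doses distributed vs. administered
distributed  = {"CA": 1_000_000, "TX": 800_000, "NY": 600_000}
administered = {"CA":   900_000, "TX": 650_000, "NY": 600_000}

# excess = what was delivered but not yet used
excess = {state: distributed[state] - administered[state] for state in distributed}
print(excess)  # {'CA': 100000, 'TX': 150000, 'NY': 0}
```

# In the full model, this excess would then be filtered down to doses at risk of expiring before they can be used domestically.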
#
# How would we efficiently get the vaccines to the destinations given tight vaccine expiration timeframes? We can use commercial airline networks! All data on airline networks and routes are public and we can create a graph from this.
# #### Required Data
# To achieve this logistical optimization, we'll need several datasets. The eight key datasets are listed below:
# * Global Airports and Air Routes Network Data
#
# * Airlines, IATA Codes, and Callsigns
# https://kinetica-community.s3.amazonaws.com/vaccine-distro/airlines.csv
#
#
#
# * Airport Metadata for Supply and Demand Aggregations
#
# * Airport IATA Codes
# https://kinetica-community.s3.amazonaws.com/vaccine-distro/airports.csv
#
# * US Vaccine Statistics -- Availability, Usage (to calculate excess)
# https://kinetica-community.s3.amazonaws.com/vaccine-distro/vaccine-us.csv
#
# * Map of Country ISO Alpha-2 to Alpha-3 for Cross-Dataset Mapping
# https://kinetica-community.s3.amazonaws.com/vaccine-distro/map_iso_alpha2_alpha3.csv
#
# * Airports, Countries serviced, and Geo-coordinates
# https://kinetica-community.s3.amazonaws.com/vaccine-distro/airport-to-country-map.csv
# * COVID Incidence and Vaccine Utilization by Jurisdiction
#
# * Global COVID Statistics (to calculate need/demand for vaccine)
# https://kinetica-community.s3.amazonaws.com/vaccine-distro/owid-covid-data.csv
#
# * Airport-to-Airport Routes and Flight Map
# https://kinetica-community.s3.amazonaws.com/vaccine-distro/routes.csv
#
# * List of US Airports (to segment "excess supply" locations)
# https://kinetica-community.s3.amazonaws.com/vaccine-distro/airport_metadata_usa.csv
# We are now going to load a number of files into the database, but first you'll need to get them onto the database machine. Grab all the files referenced above and either put them into the /mnt/persist folder of Kinetica Dev Edition, or load them into a directory of your choice (e.g., /vaccine) on Kinetica Cloud via the KIFS upload screen. You can find more information here: https://docs.kinetica.com/7.1/tools/kifs/
# We took all the files above and loaded them onto the Kinetica file system under the /vaccine folder:
# * kifs://vaccine/airlines.csv
# * kifs://vaccine/airport-to-country-map.csv
# * kifs://vaccine/vaccine-us.csv
# * kifs://vaccine/map_iso_alpha2_alpha3.csv
# * kifs://vaccine/owid-covid-data.csv
# * kifs://vaccine/airport_metadata_usa.csv
# Both the supply and demand datasets are highly dynamic, changing at least once a day. The above dataset will provide you a year's worth of data, but if you wish to run an as-of-now optimization, it is best to ingest current vaccine Supply and Demand from Kinetica Confluent Cloud topics:
# * Broker: pkc-ep9mm.us-east-2.aws.confluent.cloud:9092
# * Topics: vaccine-distro-us-states; vaccine-distro-countries-global
import os
import csv
import gpudb
# We'll be interacting with Kinetica along the way, loading data, setting up graph optimizations, and viewing results. Be sure to export these environment variables (or override them below): **KINETICA_HOST, KINETICA_USER, KINETICA_PASS**. All the code below is Python, but many cells are SQL commands executed via Python directly against the database you connect to below.
KINETICA_HOST = os.getenv('KINETICA_HOST', "localhost:9191")
KINETICA_USER = os.getenv('KINETICA_USER', "admin")
KINETICA_PASS = os.getenv('KINETICA_PASS')
db = gpudb.GPUdb(host=KINETICA_HOST, username=KINETICA_USER, password=<PASSWORD>)
# First, we'll take the airports and airline routes data above and create a graph from it. More information about creating graphs is available at https://docs.kinetica.com/7.1/graph_solver/network_graph_solver/. Below we work through it step-by-step. Start by downloading the airports and routes file referenced above (we put it into a "./data" directory.) Our goal will be to create nodes and edges files so we can create a graph which will drive logistics.
# +
INPUT_FILE_AIRPORTS = "data_originals/airports.csv"
INPUT_FILE_ROUTES = "data_originals/routes.csv"
OUTPUT_FILE_NODES = "out_nodes.csv"
OUTPUT_FILE_EDGES = "out_edges.csv"
FIELDS_NODES = ["NODE_ID",
"NODE_X",
"NODE_Y",
"NODE_NAME",
"NODE_WKTPOINT",
"NODE_LABEL",
"IATA",
"ICAO",
"CITY",
"COUNTRY"]
FIELDS_EDGES = ["EDGE_ID",
"EDGE_NODE1_ID",
"EDGE_NODE2_ID",
"EDGE_WKTLINE",
"EDGE_NODE1_X",
"EDGE_NODE1_Y",
"EDGE_NODE2_X",
"EDGE_NODE2_Y",
"EDGE_NODE1_WKTPOINT",
"EDGE_NODE2_WKTPOINT",
"EDGE_NODE1_NAME",
"EDGE_NODE2_NAME",
"EDGE_DIRECTION",
"EDGE_LABEL",
"EDGE_WEIGHT_VALUESPECIFIED"]
# +
# Helper function to simplify strings
def cleanse(in_str):
    # str.replace returns a new string (strings are immutable),
    # so the result must be captured rather than discarded
    return in_str.replace(",", "").replace("'", "")
# TODO: this is just a rough measure, a stand-in for now
# https://stackoverflow.com/questions/19412462/getting-distance-between-two-points-based-on-latitude-longitude
def rough_distance(slon, slat, dlon, dlat):
from math import sin, cos, sqrt, atan2, radians
# approximate radius of earth in km
R = 6373.0
lat1 = radians(float(slat))
lon1 = radians(float(slon))
lat2 = radians(float(dlat))
lon2 = radians(float(dlon))
dlon = lon2 - lon1
dlat = lat2 - lat1
a = sin(dlat / 2)**2 + cos(lat1) * cos(lat2) * sin(dlon / 2)**2
c = 2 * atan2(sqrt(a), sqrt(1 - a))
distance = R * c
return distance
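# To sanity-check the haversine formula above, here is a standalone copy compared against a well-known route — the JFK–LHR great-circle distance is roughly 5,550 km:

```python
from math import sin, cos, sqrt, atan2, radians

def haversine_km(slon, slat, dlon, dlat):
    # same formula as rough_distance above, repeated here so the sketch is self-contained
    R = 6373.0  # approximate radius of earth in km
    lat1, lon1 = radians(slat), radians(slon)
    lat2, lon2 = radians(dlat), radians(dlon)
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return R * 2 * atan2(sqrt(a), sqrt(1 - a))

# JFK (New York) to LHR (London): great-circle distance is roughly 5,550 km
d = haversine_km(-73.7781, 40.6413, -0.4543, 51.4700)
print(round(d))
```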
# +
lookup_airport = {}
airport_nodes = []
## ------------------------------------------------------------------------
## Create Lookup table for Edge Enrichment
input_file = csv.DictReader(open(INPUT_FILE_AIRPORTS))
for row in input_file:
lookup_airport[row['AIRPORT_ID']] = {
'AIRPORT_ID': row['AIRPORT_ID'],
'NAME': cleanse(row['NAME']),
'CITY': cleanse(row['CITY']),
'COUNTRY': cleanse(row['COUNTRY']),
'IATA': row['IATA'],
'ICAO': row['ICAO'],
'LATITUDE': row['LATITUDE'],
'LONGITUDE': row['LONGITUDE']
}
## ------------------------------------------------------------------------
## Create Nodes
if row['AIRPORT_ID'] == "\\N":
continue
nlon = row['LONGITUDE']
nlat = row['LATITUDE']
if row['IATA']=="\\N":
row['IATA']=None
nodename = f"{row['ICAO']}: {cleanse(row['NAME'])}"
nodelabel = f"{row['ICAO']}: {cleanse(row['NAME'])}; {cleanse(row['CITY'])}, {cleanse(row['COUNTRY'])}"
else:
nodename = f"{row['IATA']}: {cleanse(row['NAME'])}"
nodelabel = f"{row['IATA']}: {cleanse(row['NAME'])}; {cleanse(row['CITY'])}, {cleanse(row['COUNTRY'])}"
persistable = {
"NODE_ID": row['AIRPORT_ID'],
"NODE_X": nlon,
"NODE_Y": nlat,
"NODE_NAME": nodename,
"NODE_WKTPOINT": f"POINT({nlon} {nlat})",
"NODE_LABEL": nodelabel,
"IATA": row['IATA'],
"ICAO": row['ICAO'],
"CITY": cleanse(row['CITY']),
"COUNTRY": cleanse(row['COUNTRY'])
}
airport_nodes.append(persistable)
#print(f"Adding node {persistable['NODE_LABEL']}")
#print(f"Writing {len(airport_nodes)} rows")
with open(OUTPUT_FILE_NODES, 'w', newline='\n') as csvfile:
writer = csv.DictWriter(csvfile, fieldnames=FIELDS_NODES)
writer.writeheader()
for n in airport_nodes:
writer.writerow(n)
## ------------------------------------------------------------------------
## Create Edges
inter_airport_network_edges = []
edge_id = 10000
input_file = csv.DictReader(open(INPUT_FILE_ROUTES))
for row in input_file:
edge_id = edge_id + 1
if row['SOURCE_AIRPORT_ID'] == "\\N":
#print(f"Warn source airport {row['SOURCE_AIRPORT_ID']} is Null, skipping...")
continue
if row['DEST_AIRPORT_ID'] == "\\N":
#print(f"Warn source airport {row['DEST_AIRPORT_ID']} is Null, skipping...")
continue
if str(row['SOURCE_AIRPORT_ID']) not in lookup_airport:
#print(f"Warn source airport {row['SOURCE_AIRPORT_ID']} not found in Airports lookup table, skipping...")
continue
if str(row['DEST_AIRPORT_ID']) not in lookup_airport:
        #print(f"Warn destination airport {row['DEST_AIRPORT_ID']} not found in Airports lookup table, skipping...")
continue
if lookup_airport[row['SOURCE_AIRPORT_ID']]['COUNTRY'] == lookup_airport[row['DEST_AIRPORT_ID']]['COUNTRY']:
# skipping domestic flight
continue
slon = lookup_airport[row['SOURCE_AIRPORT_ID']]['LONGITUDE']
slat = lookup_airport[row['SOURCE_AIRPORT_ID']]['LATITUDE']
dlon = lookup_airport[row['DEST_AIRPORT_ID']]['LONGITUDE']
dlat = lookup_airport[row['DEST_AIRPORT_ID']]['LATITUDE']
persistable = {
"EDGE_ID": edge_id,
"EDGE_NODE1_ID": row['SOURCE_AIRPORT_ID'],
"EDGE_NODE2_ID": row['DEST_AIRPORT_ID'],
"EDGE_WKTLINE": f"LINESTRING({slon} {slat}, {dlon} {dlat})",
"EDGE_NODE1_X": slon,
"EDGE_NODE1_Y": slat,
"EDGE_NODE2_X": dlon,
"EDGE_NODE2_Y": dlat,
"EDGE_NODE1_WKTPOINT": f"POINT({slon} {slat})",
"EDGE_NODE2_WKTPOINT": f"POINT({dlon} {dlat})",
"EDGE_NODE1_NAME": f"{lookup_airport[row['SOURCE_AIRPORT_ID']]['IATA']}: {lookup_airport[row['SOURCE_AIRPORT_ID']]['NAME']}",
"EDGE_NODE2_NAME": f"{lookup_airport[row['DEST_AIRPORT_ID']]['IATA']}: {lookup_airport[row['DEST_AIRPORT_ID']]['NAME']}",
"EDGE_DIRECTION": "0",
"EDGE_LABEL": f"'{row['AIRLINE']} {row['AIRLINE_ID']} from {lookup_airport[row['SOURCE_AIRPORT_ID']]['IATA']} --> {lookup_airport[row['DEST_AIRPORT_ID']]['IATA']}'",
"EDGE_WEIGHT_VALUESPECIFIED": rough_distance(slon, slat, dlon, dlat)
}
inter_airport_network_edges.append(persistable)
#print(f"Adding edge {persistable['EDGE_LABEL']}")
print(f"Writing {len(inter_airport_network_edges)} rows")
with open(OUTPUT_FILE_EDGES, 'w', newline='\n') as csvfile:
writer = csv.DictWriter(csvfile, fieldnames=FIELDS_EDGES)
writer.writeheader()
for i in inter_airport_network_edges:
writer.writerow(i)
# -
# In real life, data is always dirty, especially when joining data from different sources. While the code above looks extensive, it is mostly just discarding bad records (e.g., routes that reference airports missing from the airport list, or airports and routes with null fields). At the end of the process, we have graph nodes and edges ready to load into Kinetica:
# -rw-r--r-- 1 saif staff 1397234 Aug 2 10:30 out_nodes.csv
# -rw-r--r-- 1 saif staff 11438747 Aug 2 10:30 out_edges.csv
#
# Now that we have the graph set up, we'll also need supply and demand figures. Of course, "supply" and "demand" are subjective figures -- we derive them from high-level public data and judgment calls about which health indicators should drive demand (positivity rate, age distribution, vaccination rate, deaths, etc.). We present a simple approach, but encourage the community to consider this deeply and propose better or fairer mechanisms to calculate supply and demand. Let's start by loading the raw data:
# Let's create a schema to keep all our work in one place:
exec_result = db.execute_sql("CREATE SCHEMA DEMO_Vaccine_Distro;")
exec_result['status_info']
# Let's load the nodes and edges files we just created.
exec_result = db.execute_sql("""
CREATE OR REPLACE MATERIALIZED EXTERNAL TABLE "DEMO_Vaccine_Distro"."airport_nodes"
(
"NODE_ID" SMALLINT NOT NULL,
"NODE_X" DOUBLE NOT NULL,
"NODE_Y" DOUBLE NOT NULL,
"NODE_NAME" VARCHAR (128) NOT NULL,
"NODE_WKTPOINT" GEOMETRY NOT NULL,
"NODE_LABEL" VARCHAR (128) NOT NULL,
"IATA" VARCHAR (4) NOT NULL,
"ICAO" VARCHAR (4) NOT NULL,
"CITY" VARCHAR (64),
"COUNTRY" VARCHAR (32, dict) NOT NULL
)
FILE PATHS 'kifs://vaccine/out_nodes.csv'
FORMAT DELIMITED TEXT;
""")
exec_result['status_info']
exec_result = db.execute_sql("""
CREATE or REPLACE MATERIALIZED EXTERNAL TABLE "DEMO_Vaccine_Distro"."airport_routes"
(
"EDGE_ID" INTEGER NOT NULL,
"EDGE_NODE1_ID" INTEGER NOT NULL,
"EDGE_NODE2_ID" INTEGER NOT NULL,
"EDGE_WKTLINE" GEOMETRY NOT NULL,
"EDGE_NODE1_X" DOUBLE NOT NULL,
"EDGE_NODE1_Y" DOUBLE NOT NULL,
"EDGE_NODE2_X" DOUBLE NOT NULL,
"EDGE_NODE2_Y" DOUBLE NOT NULL,
"EDGE_NODE1_WKTPOINT" GEOMETRY NOT NULL,
"EDGE_NODE2_WKTPOINT" GEOMETRY NOT NULL,
"EDGE_NODE1_NAME" VARCHAR (128, dict) NOT NULL,
"EDGE_NODE2_NAME" VARCHAR (128, dict) NOT NULL,
"EDGE_DIRECTION" INTEGER NOT NULL,
"EDGE_LABEL" VARCHAR (32) NOT NULL,
"EDGE_WEIGHT_VALUESPECIFIED" DOUBLE NOT NULL
)
FILE PATHS 'kifs://vaccine/out_edges.csv'
FORMAT DELIMITED TEXT;
""")
exec_result['status_info']
# ### Load Demand Data
#
# Supply and demand data need to be loaded to drive the optimization. The data changes daily, so you can re-run the optimization daily for the most up-to-date logistical recommendations. To load the data, we recommend Kinetica External Tables: https://docs.kinetica.com/7.1/concepts/external_tables/
#
# The External Tables functionality allows easy loads, schema inference, and re-loads. Depending on the data source (which can sometimes be 'rough'), inferred schemas occasionally require tweaks. Below we load the data using an inferred schema, tweaked slightly. We encourage you to explore fully automated inference as well, if curious. The original data can be sourced from Our World in Data (https://ourworldindata.org/coronavirus) if desired.
exec_result = db.execute_sql("""
CREATE or REPLACE MATERIALIZED EXTERNAL TABLE "DEMO_Vaccine_Distro"."global_covid_statistics"
(
"iso_code" VARCHAR (8, dict) NOT NULL,
"continent" VARCHAR (16, dict),
"location" VARCHAR (32, dict) NOT NULL,
"date" DATE (dict) NOT NULL,
"total_cases" DECIMAL(18,4),
"new_cases" DECIMAL(18,4),
"new_cases_smoothed" DECIMAL(18,4),
"total_deaths" DECIMAL(18,4),
"new_deaths" DECIMAL(18,4),
"new_deaths_smoothed" DECIMAL(18,4),
"total_cases_per_million" DECIMAL(18,4),
"new_cases_per_million" DECIMAL(18,4),
"new_cases_smoothed_per_million" DECIMAL(18,4),
"total_deaths_per_million" DECIMAL(18,4),
"new_deaths_per_million" DECIMAL(18,4),
"new_deaths_smoothed_per_million" DECIMAL(18,4),
"reproduction_rate" DECIMAL(18,4),
"icu_patients" DECIMAL(18,4),
"icu_patients_per_million" DECIMAL(18,4),
"hosp_patients" DECIMAL(18,4),
"hosp_patients_per_million" DECIMAL(18,4),
"weekly_icu_admissions" DECIMAL(18,4),
"weekly_icu_admissions_per_million" DECIMAL(18,4),
"weekly_hosp_admissions" DECIMAL(18,4),
"weekly_hosp_admissions_per_million" DECIMAL(18,4),
"new_tests" DECIMAL(18,4),
"total_tests" DECIMAL(18,4),
"total_tests_per_thousand" DECIMAL(18,4),
"new_tests_per_thousand" DECIMAL(18,4),
"new_tests_smoothed" DECIMAL(18,4),
"new_tests_smoothed_per_thousand" DECIMAL(18,4),
"positive_rate" DECIMAL(18,4),
"tests_per_case" DECIMAL(18,4),
"tests_units" VARCHAR (16, dict),
"total_vaccinations" DECIMAL(18,4),
"people_vaccinated" DECIMAL(18,4),
"people_fully_vaccinated" DECIMAL(18,4),
"new_vaccinations" DECIMAL(18,4),
"new_vaccinations_smoothed" DECIMAL(18,4),
"total_vaccinations_per_hundred" DECIMAL(18,4),
"people_vaccinated_per_hundred" DECIMAL(18,4),
"people_fully_vaccinated_per_hundred" DECIMAL(18,4),
"new_vaccinations_smoothed_per_million" DECIMAL(18,4),
"stringency_index" DECIMAL(18,4),
"population" DECIMAL(18,4),
"population_density" DECIMAL(18,4),
"median_age" DECIMAL(18,4),
"aged_65_older" DECIMAL(18,4),
"aged_70_older" DECIMAL(18,4),
"gdp_per_capita" DECIMAL(18,4),
"extreme_poverty" DECIMAL(18,4),
"cardiovasc_death_rate" DECIMAL(18,4),
"diabetes_prevalence" DECIMAL(18,4),
"female_smokers" DECIMAL(18,4),
"male_smokers" DECIMAL(18,4),
"handwashing_facilities" DECIMAL(18,4),
"hospital_beds_per_thousand" DECIMAL(18,4),
"life_expectancy" DECIMAL(18,4),
"human_development_index" DECIMAL(18,4),
"excess_mortality" DECIMAL(18,4)
)
FILE PATHS 'kifs://vaccine/owid-covid-data.csv'
FORMAT DELIMITED TEXT;
""")
exec_result['status_info']
# ### Load Supply Data
#
# Just like with demand data, we'll load supply data using External Tables. Fresh data can be sourced daily from https://covid.cdc.gov/covid-data-tracker/#vaccinations
#
exec_result = db.execute_sql("""
CREATE OR REPLACE MATERIALIZED EXTERNAL TABLE "DEMO_Vaccine_Distro"."vaccine_utilization_usa"
(
"State_Territory_Federal_Entity" VARCHAR (32),
"Total_Doses_Delivered" INTEGER,
"Doses_Delivered_per_100K" INTEGER,
"18plus_Doses_Delivered_per_100K" INTEGER,
"Total_Doses_Administered_by_State_where_Administered" INTEGER,
"Doses_Administered_per_100k_by_State_where_Administered" INTEGER,
"18plus_Doses_Administered_by_State_where_Administered" INTEGER,
"18plus_Doses_Administered_per_100K_by_State_where_Administered" INTEGER,
"People_with_at_least_One_Dose_by_State_of_Residence" INTEGER,
"Percent_of_Total_Pop_with_at_least_One_Dose_by_State_of_Residence" DECIMAL(18,4),
"People_18plus_with_at_least_One_Dose_by_State_of_Residence" INTEGER,
"Percent_of_18plus_Pop_with_at_least_One_Dose_by_State_of_Residence" DECIMAL(18,4),
"People_Fully_Vaccinated_by_State_of_Residence" INTEGER,
"Percent_of_Total_Pop_Fully_Vaccinated_by_State_of_Residence" DECIMAL(18,4),
"People_18plus_Fully_Vaccinated_by_State_of_Residence" INTEGER,
"Percent_of_18plus_Pop_Fully_Vaccinated_by_State_of_Residence" DECIMAL(18,4),
"Total_Number_of_Pfizer_doses_delivered" INTEGER,
"Total_Number_of_Moderna_doses_delivered" INTEGER,
"Total_Number_of_Janssen_doses_delivered" INTEGER,
"Total_Number_of_doses_from_unknown_manufacturer_delivered" TINYINT,
"Total_Number_of_Janssen_doses_administered" INTEGER,
"Total_Number_of_Moderna_doses_administered" INTEGER,
"Total_Number_of_Pfizer_doses_adminstered" INTEGER,
"Total_Number_of_doses_from_unknown_manufacturer_administered" INTEGER,
"People_Fully_Vaccinated_Moderna_Resident" INTEGER,
"People_Fully_Vaccinated_Pfizer_Resident" INTEGER,
"People_Fully_Vaccinated_Janssen_Resident" INTEGER,
"People_Fully_Vaccinated_Unknown_2_dose_manufacturer_Resident" SMALLINT,
"People_18plus_Fully_Vaccinated_Moderna_Resident" INTEGER,
"People_18plus_Fully_Vaccinated_Pfizer_Resident" INTEGER,
"People_18plus_Fully_Vaccinated_Janssen_Resident" INTEGER,
"People_18plus_Fully_Vaccinated_Unknown_2_dose_manufacturer_Resident" SMALLINT,
"People_with_2_Doses_by_State_of_Residence" INTEGER,
"Percent_of_Total_Pop_with_1plus_Doses_by_State_of_Residence" DECIMAL(18,4),
"People_18plus_with_1plus_Doses_by_State_of_Residence" INTEGER,
"Percent_of_18plus_Pop_with_1plus_Doses_by_State_of_Residence" DECIMAL(18,4),
"Percent_of_Total_Pop_with_2_Doses_by_State_of_Residence" DECIMAL(18,4),
"People_18plus_with_2_Doses_by_State_of_Residence" INTEGER,
"Percent_of_18plus_Pop_with_2_Doses_by_State_of_Residence" DECIMAL(18,4),
"People_with_1plus_Doses_by_State_of_Residence" INTEGER,
"People_65plus_with_at_least_One_Dose_by_State_of_Residence" INTEGER,
"Percent_of_65plus_Pop_with_at_least_One_Dose_by_State_of_Residence" DECIMAL(18,4),
"People_65plus_Fully_Vaccinated_by_State_of_Residence" INTEGER,
"Percent_of_65plus_Pop_Fully_Vaccinated_by_State_of_Residence" DECIMAL(18,4),
"People_65plus_Fully_Vaccinated_Moderna_Resident" INTEGER,
"People_65plus_Fully_Vaccinated_Pfizer_Resident" INTEGER,
"People_65plus_Fully_Vaccinated_Janssen_Resident" INTEGER,
"People_65plus_Fully_Vaccinated_Unknown_2_dose_Manuf_Resident" SMALLINT,
"65plus_Doses_Administered_by_State_where_Administered" INTEGER,
"Doses_Administered_per_100k_of_65plus_pop_by_State_where_Administered" INTEGER,
"Doses_Delivered_per_100k_of_65plus_pop" INTEGER,
"People_12plus_with_at_least_One_Dose_by_State_of_Residence" INTEGER,
"Percent_of_12plus_Pop_with_at_least_One_Dose_by_State_of_Residence" DECIMAL(18,4),
"People_12plus_Fully_Vaccinated_by_State_of_Residence" INTEGER,
"Percent_of_12plus_Pop_Fully_Vaccinated_by_State_of_Residence" DECIMAL(18,4),
"People_12plus_Fully_Vaccinated_Moderna_Resident" INTEGER,
"People_12plus_Fully_Vaccinated_Pfizer_Resident" INTEGER,
"People_12plus_Fully_Vaccinated_Janssen_Resident" INTEGER,
"People_12plus_Fully_Vaccinated_Unknown_2_dose_Manuf_Resident" SMALLINT,
"12plus_Doses_Administered_by_State_where_Administered" INTEGER,
"Doses_Administered_per_100k_of_12plus_pop_by_State_where_Administered" INTEGER,
"Doses_Delivered_per_100k_of_12plus_pop" INTEGER
)
FILE PATHS 'kifs://vaccine/vaccine-us.csv'
FORMAT DELIMITED TEXT;
""")
exec_result['status_info']
# Once again, we present a simple approach to supply and demand -- but encourage the community to consider this deeply and propose better or fairer mechanisms. Your supply/demand mechanism just needs to output a supply figure per source and a demand figure per destination. We do this via two *Materialized Views.* We encourage you to read about these here: https://docs.kinetica.com/7.1/concepts/materialized_views/. They can be set up to refresh automatically or periodically as the underlying data changes, which is great for our use case.
# ### Supply
# +
exec_result = db.execute_sql("""
CREATE OR REPLACE materialized VIEW DEMO_Vaccine_Distro.vaccine_supply_usa refresh ON change AS
(
select
Total_Doses_Delivered,
Total_Doses_Administered_by_State_where_Administered,
People_Fully_Vaccinated_by_State_of_Residence,
People_18plus_Fully_Vaccinated_by_State_of_Residence,
Total_Doses_Delivered - Total_Doses_Administered_by_State_where_Administered as CURRENT_VACCINE_INVENTORY,
int(0.23 * (Total_Doses_Delivered - Total_Doses_Administered_by_State_where_Administered)) as "Donateable" /* PER BIDEN DONATION ESTIMATES */
from DEMO_Vaccine_Distro.vaccine_utilization_usa
)
""")
exec_result['status_info']
# -
# ### Demand
# +
exec_result = db.execute_sql("""
CREATE OR REPLACE materialized VIEW DEMO_Vaccine_Distro.vaccine_demand_intl refresh ON change AS
(
SELECT iso_code,
location,
countrymap.ISO_ALPHA_2 as country_iso2,
countrymap.ISO_ALPHA_3 as country_iso3,
int(population) as "total_population",
int(people_fully_vaccinated) as "fully_vaccinated_population",
int(new_cases_per_million) as "new_cases_per_1mm",
int(total_deaths_per_million) as "total_deaths_per_1mm",
int(population-people_fully_vaccinated) AS "unvaccinated",
int(100*(population-people_fully_vaccinated)/population) AS "unvaccinated_pct",
(select int(avg(100*(population-people_fully_vaccinated)/population)) from DEMO_Vaccine_Distro.global_covid_statistics) as "avg_global_unvaccinated_pct",
if (people_fully_vaccinated , int(population-people_fully_vaccinated), int(0.92 * population) ) AS "vaccinate_doses_required"
FROM DEMO_Vaccine_Distro.global_covid_statistics gstat,
DEMO_Vaccine_Distro.map_iso_alpha23 countrymap
WHERE date='2021-06-01'
AND trim(countrymap.ISO_ALPHA_3) = trim(iso_code)
)
""")
exec_result['status_info']
# -
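# The two views boil down to simple arithmetic. A plain-Python sketch of the same heuristics (the 23% donateable share and 92% coverage target mirror the SQL above; the dose counts below are made-up examples):

```python
def donateable_doses(delivered, administered, share=0.23):
    # Donateable supply = a fixed share of unused inventory,
    # as in the vaccine_supply_usa view above.
    return int(share * (delivered - administered))

def doses_required(population, fully_vaccinated=None, target=0.92):
    # Demand = the unvaccinated population, falling back to a 92%
    # coverage target when vaccination counts are missing,
    # as in the vaccine_demand_intl view above.
    if fully_vaccinated is None:
        return int(target * population)
    return int(population - fully_vaccinated)

print(donateable_doses(1_000_000, 600_000))   # ~23% of 400k unused doses
print(doses_required(10_000_000, 4_000_000))  # 6M unvaccinated people
```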
exec_result = db.execute_sql("""
CREATE MATERIALIZED EXTERNAL TABLE "DEMO_Vaccine_Distro"."airport_metadata_usa"
(
"id" INTEGER,
"ident" VARCHAR (8),
"type" VARCHAR (16, dict),
"name" VARCHAR (128),
"latitude_deg" DOUBLE,
"longitude_deg" DOUBLE,
"elevation_ft" SMALLINT,
"continent" VARCHAR (2),
"country_name" VARCHAR (16, dict),
"iso_country" VARCHAR (2),
"region_name" VARCHAR (32, dict),
"iso_region" VARCHAR (8, dict),
"local_region" VARCHAR (2),
"municipality" VARCHAR (64),
"scheduled_service" TINYINT,
"gps_code" VARCHAR (4),
"iata_code" VARCHAR (4),
"local_code" VARCHAR (8),
"home_link" VARCHAR (128),
"wikipedia_link" VARCHAR (128),
"keywords" VARCHAR (256),
"score" INTEGER,
"last_updated" VARCHAR (32)
)
FILE PATHS 'kifs://vaccine/airport_metadata_usa.csv'
FORMAT DELIMITED TEXT;
""")
exec_result['status_info']
exec_result = db.execute_sql("""
CREATE MATERIALIZED EXTERNAL TABLE "DEMO_Vaccine_Distro"."airport-to-country-map"
(
"ident" VARCHAR (8) NOT NULL,
"type" VARCHAR (16, dict) NOT NULL,
"name" VARCHAR (128) NOT NULL,
"elevation_ft" SMALLINT,
"continent" VARCHAR (2) NOT NULL,
"iso_country" VARCHAR (2) NOT NULL,
"iso_region" VARCHAR (8, dict) NOT NULL,
"municipality" VARCHAR (64),
"gps_code" VARCHAR (4),
"iata_code" VARCHAR (4),
"local_code" VARCHAR (8),
"coordinates" VARCHAR (64) NOT NULL
)
FILE PATHS 'kifs://vaccine/airport-to-country-map.csv'
FORMAT DELIMITED TEXT;
""")
exec_result['status_info']
# After all the above loads, we should have all the setup we need for graph optimizations. When viewing GAdmin, it should look somewhat like this:
#
# 
# It would be too complicated to route from every single US airport to overseas airports, so it makes sense to consolidate US supply into a few major regional hubs. We start with a simple model where Washington Dulles International Airport (IAD) serves as an East Coast logistics hub (don't worry, we'll iteratively expand this below to make it more realistic). First, we inspect a time-based edge weight for one sample route, then create the graph:
# +
exec_result = db.execute_sql("""
SELECT
(dist)/(800*1000/3600) AS edge_weight,
geo AS edge_wktline
FROM
(
SELECT length/(ST_nPoints(geo)-1) AS dist, geo
FROM
(
SELECT
ST_length(EDGE_WKTLINE,1) AS length,
ST_segmentize(EDGE_WKTLINE, 1000000, 1) AS geo
FROM DEMO_Vaccine_Distro.airport_routes
WHERE EDGE_NODE1_ID = 3714 and EDGE_NODE2_ID = 1382
)
) """)
exec_result['status_info']
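# The weight expression `(dist)/(800*1000/3600)` above converts a segment length in metres into a flight time in seconds at an assumed cruise speed of 800 km/h; in plain Python:

```python
def flight_time_seconds(distance_m, cruise_kmh=800):
    # time = distance / speed, with the cruise speed converted to m/s
    speed_ms = cruise_kmh * 1000 / 3600
    return distance_m / speed_ms

# An 800 km segment at 800 km/h takes about one hour.
print(flight_time_seconds(800_000))
```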
# +
exec_result = db.execute_sql("""
CREATE OR REPLACE DIRECTED GRAPH airport
(
NODES => INPUT_TABLES
(
(
SELECT NODE_NAME, NODE_WKTPOINT
FROM DEMO_Vaccine_Distro.airport_nodes
),
(
SELECT NODE_ID, NODE_WKTPOINT
FROM DEMO_Vaccine_Distro.airport_nodes
)
),
EDGES => INPUT_TABLE
(
SELECT EDGE_ID, EDGE_WKTLINE, EDGE_DIRECTION, EDGE_WEIGHT_VALUESPECIFIED
FROM DEMO_Vaccine_Distro.airport_routes
),
OPTIONS => KV_PAIRS
(
'enable_graph_draw' = 'true',
'graph_table' = 'DEMO_Vaccine_Distro.airport_graph'
)
) """)
exec_result['status_info']
# -
# And we then solve the graph with:
# +
exec_result = db.execute_sql("""
EXECUTE FUNCTION SOLVE_GRAPH
(
GRAPH => 'airport',
SOLVER_TYPE => 'SHORTEST_PATH',
SOURCE_NODES => INPUT_TABLE
(
SELECT 'IAD: Washington Dulles International Airport' AS NODE_NAME
),
SOLUTION_TABLE => 'DEMO_Vaccine_Distro.airport_sssp',
OPTIONS => KV_PAIRS
(
'min_solution_radius' = '10000',
'max_solution_radius' = '12000',
'max_solution_targets' = '10',
'accurate_snaps' = 'false',
'output_edge_path' = 'true',
'output_wkt_path' = 'true'
)
)
""")
exec_result['status_info']
# -
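# Conceptually, the SHORTEST_PATH solver performs a single-source shortest-path search over the airport graph. A minimal sketch of the same idea with Dijkstra's algorithm on a toy network (the airport codes and hour weights below are made up, not taken from our graph):

```python
import heapq

def dijkstra(edges, source):
    # edges: {node: [(neighbor, weight), ...]}; returns the shortest
    # distance from source to every reachable node.
    dist = {source: 0.0}
    heap = [(0.0, source)]
    while heap:
        d, node = heapq.heappop(heap)
        if d > dist.get(node, float('inf')):
            continue  # stale heap entry
        for nbr, w in edges.get(node, []):
            nd = d + w
            if nd < dist.get(nbr, float('inf')):
                dist[nbr] = nd
                heapq.heappush(heap, (nd, nbr))
    return dist

# Toy network: IAD -> LHR direct vs. IAD -> CDG -> LHR.
toy = {
    'IAD': [('LHR', 7.0), ('CDG', 7.5)],
    'CDG': [('LHR', 1.0)],
}
print(dijkstra(toy, 'IAD'))
```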
# 
# 
# Finally, we plot out the full route we would use for the round-trip:
#
# 
#
# ### Refining the Setup - Multiple US Supply Sites
# That was good, but let's continue to refine it. We can add a second departure hub in the US to route vaccine supply from the south: we'll set up both IAD (Washington Dulles International Airport) and DFW (Dallas Fort Worth International Airport).
# As before, we'll have a graph setup, and a graph solve. We set up as follows:
# +
exec_result = db.execute_sql("""
CREATE OR REPLACE DIRECTED GRAPH airport
(
NODES => INPUT_TABLES
(
(
SELECT NODE_NAME, NODE_WKTPOINT
FROM DEMO_Vaccine_Distro.airport_nodes
),
(
SELECT NODE_ID, NODE_WKTPOINT
FROM DEMO_Vaccine_Distro.airport_nodes
)
),
EDGES => INPUT_TABLE
(
SELECT EDGE_ID, EDGE_WKTLINE, EDGE_DIRECTION, EDGE_WEIGHT_VALUESPECIFIED
FROM DEMO_Vaccine_Distro.airport_routes
),
OPTIONS => KV_PAIRS
(
'merge_tolerance' = '0.0001',
'use_rtree' = 'false',
'min_x' = '-180',
'max_x' = '180',
'min_y' = '-90',
'max_y' = '90',
'export_create_results' = 'false',
'enable_graph_draw' = 'true',
'save_persist' = 'false',
'sync_db' = 'false',
'add_table_monitor' = 'false',
'graph_table' = 'DEMO_Vaccine_Distro.airport_graph',
'add_turns' = 'false',
'turn_angle' = '60.0',
'is_partitioned' = 'false'
)
)
""")
exec_result['status_info']
# -
# And we solve the graph as follows:
# +
exec_result = db.execute_sql("""
EXECUTE FUNCTION MATCH_GRAPH
(
GRAPH => 'airport',
SOLVE_METHOD => 'MATCH_SUPPLY_DEMAND',
SAMPLE_POINTS => INPUT_TABLES
(
(
SELECT
demand_id AS SAMPLE_DEMAND_ID,
demand_wkt AS SAMPLE_DEMAND_WKTPOINT,
demand_size AS SAMPLE_DEMAND_SIZE,
supply_id AS SAMPLE_DEMAND_DEPOT_ID
FROM DEMO_Vaccine_Distro.airport_demand
),
(
SELECT
supply_id AS SAMPLE_SUPPLY_DEPOT_ID,
supply_wkt AS SAMPLE_SUPPLY_WKTPOINT,
supply_craft AS SAMPLE_SUPPLY_TRUCK_ID,
supply_size AS SAMPLE_SUPPLY_TRUCK_SIZE
FROM DEMO_Vaccine_Distro.airport_supply
)
),
SOLUTION_TABLE => 'DEMO_Vaccine_Distro.airport_msdo_multi1',
OPTIONS => KV_PAIRS
(
'partial_loading' = 'true',
'max_combinations' = '10000',
'aggregated_output' = 'true',
'output_tracks' = 'false',
'max_trip_cost' = '0.0',
'unit_unloading_cost' = '0.0',
'truck_service_limit' = '0.0',
'max_truck_stops' = '0',
'enable_truck_reuse' = 'false',
'left_turn_penalty' = '0.0',
'right_turn_penalty' = '0.0',
'intersection_penalty' = '0.0',
'sharp_turn_penalty' = '0.0'
)
)
""")
exec_result['status_info']
# -
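# The MATCH_SUPPLY_DEMAND solver weighs route costs, load sizes, and the other options above; a heavily simplified greedy sketch (with made-up dose counts) illustrates just the core matching idea of draining depot supply against destination demand:

```python
def greedy_match(supplies, demands):
    # supplies: {depot: available doses}
    # demands:  [(destination, doses needed)], served in the given order
    # Returns shipment triples (depot, destination, doses).
    shipments = []
    for dest, need in demands:
        for depot in supplies:
            if need == 0:
                break
            send = min(supplies[depot], need)
            if send > 0:
                supplies[depot] -= send
                need -= send
                shipments.append((depot, dest, send))
    return shipments

plan = greedy_match({'IAD': 100, 'DFW': 50},
                    [('NBO', 80), ('DEL', 60)])
print(plan)
```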
# ### Taking this further
#
# Real life is complex. To make predictions or decisions, we build models that represent reality as best we can, balancing correctness against complexity. So far we've made a number of simplifying assumptions, but we can continue to refine the model to introduce more features and address nuances. Some reasonable next steps would be:
#
# * Come up with a more advanced supply model using CDC data, but with more nuance around vaccine expirations, expected usage before expiration, etc.
# * Come up with a more advanced demand model using OWID data, but with your thoughts on prioritization schemes -- do we prioritize the elderly (more at risk), the young (more likely to be out and spreading disease), the countries with the most deaths, or the countries with the least access to vaccination?
# * Add several more regional US hubs, perhaps one for the Midwest (Chicago ORD) and one for the West (San Francisco SFO)
# * Consider that multi-hop drop-offs themselves take time, so factor in how many days to expiration a shipment needs before transporting it is worthwhile
# ### References
# 1. Travelling Salesperson Problem - https://en.wikipedia.org/wiki/Travelling_salesman_problem
# 2. White House Fact Sheet: President Biden Announces Major Milestone in Administration’s Global Vaccination Efforts: More Than 100 Million U.S. COVID-19 Vaccine Doses Donated and Shipped Abroad - https://www.whitehouse.gov/briefing-room/statements-releases/2021/08/03/fact-sheet-president-biden-announces-major-milestone-in-administrations-global-vaccination-efforts-more-than-100-million-u-s-covid-19-vaccine-doses-donated-and-shipped-abroad/
|
start-here.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# %matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
from matplotlib import cm
# Before moving to Burgers' equation, the 2D diffusion equation is:
#
# $$
# \dfrac{\partial u}{\partial t} = \nu \dfrac{\partial^2 u}{\partial x^2} + \nu \dfrac{\partial^2 u}{\partial y^2}
# $$
#
# Following the same discretization scheme as for Step 3, the equation ends up being:
#
# $$
# \dfrac{u_{i,j}^{n+1}-u_{i,j}^n}{\Delta t} = \nu \dfrac{u_{i+1,j}^n-2u^n_{i,j}+u^n_{i-1,j}}{\Delta x^2} + \nu \dfrac{u_{i,j+1}^n-2u^n_{i,j}+u^n_{i,j-1}}{\Delta y^2}
# $$
#
# Solving for the unknown $u_{i,j}^{n+1}$:
#
# $$
# u^{n+1}_{i,j} = u_{i,j}^n + \nu \dfrac{\Delta t}{\Delta x^2} \left( u_{i+1,j}^n-2u^n_{i,j}+u^n_{i-1,j} \right) + \nu \dfrac{\Delta t}{\Delta y^2} \left( u_{i,j+1}^n-2u^n_{i,j}+u^n_{i,j-1} \right)
# $$
#
# Both initial and boundary conditions are the same as for previous cases.
# +
nx = 31
ny = 31
nu = 0.05
dx = 2 / (nx-1)
dy = 2 / (ny-1)
CFL = 0.25
dt = CFL *dx * dy / nu
x = np.linspace(0,2,nx)
y = np.linspace(0,2,ny)
X, Y = np.meshgrid(x,y)
u = np.ones((ny,nx))
u[int(0.5/dy):int(1/dy + 1), int(0.5/dx):int(1/dx+1)] = 2
# -
#simple code to 3D plotting
fig = plt.figure(figsize=(11,7), dpi=100)
ax = fig.add_subplot(projection='3d')  # fig.gca(projection=...) was removed in newer Matplotlib
ax.plot_surface(X, Y, u, cmap=cm.viridis)
ax.set_xlabel('$x$')
ax.set_ylabel('$y$')
ax.set_zlabel('$z$')
ax.set_title('U velocity field map')
ax.grid(True)
def diffuse(nt):
u[int(0.5/dy):int(1/dy + 1), int(0.5/dx):int(1/dx+1)] = 2
for n in range(nt+1):
un = u.copy()
        u[1:-1, 1:-1] = (un[1:-1, 1:-1]
                         + nu * dt / dx**2 * (un[1:-1, 2:] - 2 * un[1:-1, 1:-1] + un[1:-1, :-2])
                         + nu * dt / dy**2 * (un[2:, 1:-1] - 2 * un[1:-1, 1:-1] + un[:-2, 1:-1]))
u[0, :] = 1
u[-1, :] = 1
u[:, 0] = 1
u[:, -1] = 1
#simple code to 3D plotting
fig = plt.figure(figsize=(11,7), dpi=100)
    ax = fig.add_subplot(projection='3d')  # fig.gca(projection=...) was removed in newer Matplotlib
ax.plot_surface(X, Y, u, cmap=cm.viridis)
ax.set_xlabel('$x$')
ax.set_ylabel('$y$')
ax.set_zlabel('$z$')
ax.set_title('U velocity field map')
ax.set_zlim([1.0,2.0])
ax.grid(True)
diffuse(10)
diffuse(14)
diffuse(50)
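# A quick numerical check of the scheme: with CFL = 0.25 in each direction, the FTCS update is a convex combination of neighbouring values, so the hot spot's peak should decay monotonically toward the boundary value of 1. This sketch duplicates the grid setup so it runs standalone:

```python
import numpy as np

nx = ny = 31
nu, CFL = 0.05, 0.25
dx = dy = 2 / (nx - 1)
dt = CFL * dx * dy / nu

u = np.ones((ny, nx))
u[int(0.5/dy):int(1/dy + 1), int(0.5/dx):int(1/dx + 1)] = 2

peaks = [u.max()]  # track the maximum temperature each step
for _ in range(50):
    un = u.copy()
    u[1:-1, 1:-1] = (un[1:-1, 1:-1]
                     + nu * dt / dx**2 * (un[1:-1, 2:] - 2 * un[1:-1, 1:-1] + un[1:-1, :-2])
                     + nu * dt / dy**2 * (un[2:, 1:-1] - 2 * un[1:-1, 1:-1] + un[:-2, 1:-1]))
    u[0, :] = u[-1, :] = u[:, 0] = u[:, -1] = 1  # Dirichlet boundaries
    peaks.append(u.max())

print(peaks[0], peaks[-1])
```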
|
12-steps-CFD/step07_2DdiffusionEq.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/satyajitghana/ProjektDepth/blob/master/notebooks/03_DepthModel_MeanStd.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + id="KhGyFGuLiOpv" colab_type="code" outputId="44831ce3-c171-490e-8f3a-2226cdd8438d" colab={"base_uri": "https://localhost:8080/", "height": 51}
# ! nvidia-smi
# + id="FPLhcqUFiU4w" colab_type="code" outputId="9862d191-1194-42f6-ac4a-2f08a2edfd10" colab={"base_uri": "https://localhost:8080/", "height": 122}
from google.colab import drive
drive.mount('/content/gdrive')
# + id="L3IK0lPTip2r" colab_type="code" outputId="10fc2060-5ee2-4fee-d74b-28a48467ebb1" colab={"base_uri": "https://localhost:8080/", "height": 34}
# cd gdrive/My\ Drive/DepthProject
# + id="1TEVzco0iwfM" colab_type="code" outputId="e21cb9ba-0302-4315-a2d1-71ca60474949" colab={"base_uri": "https://localhost:8080/", "height": 119}
# !ls -l
# + id="d0uMPr4Ui_u-" colab_type="code" outputId="41fb923f-8746-464a-9e86-9338dc63608f" colab={"base_uri": "https://localhost:8080/", "height": 51}
from zipfile import ZipFile
import zipfile
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
from PIL import Image
import io
from itertools import groupby
import cv2
from tqdm.auto import tqdm
from pathlib import Path
from time import time
from torchvision import datasets
from torch.utils.data import DataLoader
from torch.utils.data import Dataset
from torch.utils.data import random_split
from PIL import Image
import torchvision.transforms as T
import os
import torch
sns.set()
# + [markdown] id="4TSbKbgajwZ7" colab_type="text"
# Let's see if the dataset works
# + id="1XIuvrsRix1w" colab_type="code" colab={}
depth_fg_bg_zip = ZipFile('depth_fg_bg.zip', 'r')
# + id="nO3B59AujCxq" colab_type="code" colab={}
all_files = [info.filename for info in depth_fg_bg_zip.infolist() if not info.is_dir()]
# + id="eB8r6mZ8jJXd" colab_type="code" outputId="af8a2eb1-7d93-4c8c-8f32-3b67418e99cc" colab={"base_uri": "https://localhost:8080/", "height": 34}
len(all_files)
# + id="k-dyf90IyaU4" colab_type="code" colab={}
fg_bg_zip = ZipFile('depth_dataset_zipped/fg_bg.zip', 'r')
# + id="FS4DP7G3yno-" colab_type="code" colab={}
all_files = [info.filename for info in fg_bg_zip.infolist() if not info.is_dir()]
# + id="KHJImqfKzFr_" colab_type="code" colab={}
for idx, file in enumerate(all_files):
imgdata = fg_bg_zip.read(file)
img = Image.open(io.BytesIO(imgdata))
img = img.convert("RGB")
if idx % 1000 == 0:
print(f'processed {file} to {img.size}')
# + id="tpHR0gGsjLnv" colab_type="code" colab={}
class DenseDepth(Dataset):
'''
DenseDepth Dataset
Input is fg_bg image AND bg image
Target is fg_bg_mask AND depth_fg_bg image
'''
def __init__(self, root, data='fg_bg', train=True, transform=None, target_transform=None, download=False):
self.root = root
self.transform = transform
self.target_transform = target_transform
# open the respective data and target
bg_zip = ZipFile(os.path.join(self.root,'depth_dataset_zipped/bg.zip'), 'r') # bg
fg_bg_zip = ZipFile(os.path.join(self.root,'depth_dataset_zipped/fg_bg.zip'), 'r') # fg_bg
fg_bg_mask_zip = ZipFile(os.path.join(self.root,'depth_dataset_zipped/fg_bg_mask.zip'), 'r') # fg_bg_mask
depth_fg_bg_zip = ZipFile(os.path.join(self.root,'depth_dataset_zipped/depth_fg_bg.zip'), 'r') # target
bg_paths = [info.filename for info in bg_zip.infolist() if not info.is_dir()]
fg_bg_paths = [info.filename for info in fg_bg_zip.infolist() if not info.is_dir()]
fg_bg_mask_paths = [info.filename for info in fg_bg_mask_zip.infolist() if not info.is_dir()]
depth_fg_bg_paths = [info.filename for info in depth_fg_bg_zip.infolist() if not info.is_dir()]
# fg_bg_masks_w_depth = zip(fg_bg_paths, fg_bg_mask_paths)
assert(len(bg_paths) == 100)
assert(len(fg_bg_paths) == 400000)
assert(len(fg_bg_mask_paths) == 400000)
assert(len(depth_fg_bg_paths) == 400000)
print(f'found {len(bg_paths)} bg images, {len(fg_bg_paths)} fg_bg images, {len(fg_bg_mask_paths)} fg_bg_mask images, {len(depth_fg_bg_paths)} depth_fg_bg images')
if data == 'fg_bg':
self.image_zip = fg_bg_zip
self.image_paths = fg_bg_paths
elif data == 'bg':
self.image_zip = bg_zip
self.image_paths = bg_paths
elif data == 'fg_bg_mask':
self.image_zip = fg_bg_mask_zip
self.image_paths = fg_bg_mask_paths
elif data == 'depth_fg_bg':
self.image_zip = depth_fg_bg_zip
self.image_paths = depth_fg_bg_paths
else:
            raise ValueError(f'{data} is not a valid option')  # raising a bare string is a TypeError in Python 3
self.targets = []
# train_size = int(0.8 * len(self.image_paths))
# test_size = len(self.image_paths) - train_size
# train_paths, test_paths = torch.utils.data.random_split(, [train_size, test_size])
# if train:
def __getitem__(self, index):
imgdata = self.image_zip.read(self.image_paths[index])
img = Image.open(io.BytesIO(imgdata))
img = img.convert("RGB")
img = np.array(img)
if self.transform is not None:
img = self.transform(img)
return img
# filepath = self.image_paths[index]
# img = Image.open(filepath)
# img = img.convert("RGB")
# target = self.target[index]
# if self.transform is not None:
# img = self.transform(img)
# if self.target_transform is not None:
# target = self.target_transform(target)
# return img, target
def __len__(self):
return len(self.image_paths)
# + id="3ik8Mk1N1Xka" colab_type="code" colab={}
del depth_dataset, depth_dataloader
# + id="dzTcpIQ0oxDN" colab_type="code" colab={}
depth_dataset = DenseDepth(root='./', transform=T.Compose([T.ToTensor()]))
# + id="u_hx827wntRw" colab_type="code" colab={}
depth_dataloader = DataLoader(depth_dataset, batch_size=4, shuffle=True, num_workers=1)
# + id="dbeQ_ksso-nl" colab_type="code" colab={}
def get_mean_and_std(dataset):
'''Compute the mean and std value of dataset.'''
dataloader = torch.utils.data.DataLoader(
dataset, batch_size=1, shuffle=True, num_workers=1)
mean = torch.zeros(3)
std = torch.zeros(3)
print('==> Computing mean and std..')
for images in tqdm(dataloader):
for i in range(3):
mean[i] += images[:, i, :, :].mean()
std[i] += images[:, i, :, :].std()
mean.div_(len(dataset))
std.div_(len(dataset))
return mean, std
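# As a minimal, self-contained sketch of the same strategy in plain numpy (the
# dataset here is a made-up random tensor; note that averaging per-image
# statistics, as `get_mean_and_std` does for the std, only approximates the true
# pixel-level dataset std):

```python
import numpy as np

# Hypothetical stand-in for an image dataset: 10 random RGB images shaped
# (N, C, H, W), mirroring torchvision's tensor layout.
rng = np.random.default_rng(0)
images = rng.random((10, 3, 4, 4))

# Per-channel mean over all pixels, and the average of per-image stds
# (the same approximation the function above uses).
mean = images.mean(axis=(0, 2, 3))
std = images.std(axis=(2, 3)).mean(axis=0)
print(mean, std)
```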
# + [markdown] id="13vrN4quCSwE" colab_type="text"
# ## Compute Mean and Std dev. for BG images
# + id="w3gSRX7GCFBA" colab_type="code" outputId="c153cb5b-3eb1-4a74-c418-bc5f3c0b0c7f" colab={"base_uri": "https://localhost:8080/", "height": 100, "referenced_widgets": ["151304f4210a42d78c5e4f6f7454717d", "1a265da511c044b08d9511f6a2658692", "9d338d78c2c84ca6adfae8885011d421", "1b9aeb0782a04b6d91f465b78b88ae5c", "0956f5de9a444116965101dddf4e6809", "50c3a292391248ee94d14b58f8ee98ec", "42955e3ed3d1418d9633fb9cb7e8ac55", "c87ca87fc5984e929ac2ea69e1bc0595"]}
bg_dataset = DenseDepth(root='./', data='bg', transform=T.Compose([T.ToTensor()]))
mean, std = get_mean_and_std(bg_dataset)
# + id="dXprJOr0Ddrw" colab_type="code" outputId="2151cb58-8328-4ebe-f84b-436dd29add8f" colab={"base_uri": "https://localhost:8080/", "height": 51}
[f'{m:.15f}' for m in mean], [f'{s:.15f}' for s in std]
# + [markdown] id="Vk4lc55QCYGd" colab_type="text"
# ## Compute Mean and Std dev. for fg_bg images
# + id="iM-82bAVwKcW" colab_type="code" outputId="3ab37d54-149b-4a3f-e760-7060ed7a84c6" colab={"base_uri": "https://localhost:8080/", "height": 100, "referenced_widgets": ["ecd6fa4de139495aac0ced94082d8547", "9f9627abe7f347dc8cbb21d770e854f5", "5a4829ab6f794157a56f84396d18892a", "02250d96d0784383a1d0a31ea2d7ada9", "901278b0867d43fb94ea646077206d1e", "<KEY>", "<KEY>", "587cc107d89940fca19d156b1d1e122c"]}
fg_bg_dataset = DenseDepth(root='./', data='fg_bg', transform=T.Compose([T.ToTensor()]))
mean, std = get_mean_and_std(fg_bg_dataset)
# + id="xs1IUw63DxcJ" colab_type="code" outputId="e4fbf589-97fb-4954-ec59-0dd4777d32d2" colab={"base_uri": "https://localhost:8080/", "height": 51}
[f'{m:.15f}' for m in mean], [f'{s:.15f}' for s in std]
# + [markdown] id="l-E1p3g_Cc4J" colab_type="text"
# ## Compute Mean and Std dev. for fg_bg_mask
# + id="F9nAZP0QCX5X" colab_type="code" outputId="a825dc50-8b08-4b89-fe5e-e0d8dab1ba2c" colab={"base_uri": "https://localhost:8080/", "height": 100, "referenced_widgets": ["d54d1fd649b549b98aff27fcfd9e05a2", "0d8fb8c08fe24412ad59032044866676", "21c13e47886f4dd3b120c65c661d6286", "1090638386e64804a1e8df0dc59387d1", "c683394642714610a0bb2db1897ba602", "26388c01d0c2431994845c3ea77520e6", "d7f3e656a948432a955d0c587745075f", "55ec1f65ea6e4abebda93c6c5b423e06"]}
fg_bg_mask_dataset = DenseDepth(root='./', data='fg_bg_mask', transform=T.Compose([T.ToTensor()]))
mean, std = get_mean_and_std(fg_bg_mask_dataset)
# + id="0gXpcAH-D3Jh" colab_type="code" outputId="abfe21ad-faae-41fd-e23c-6e29d1742993" colab={"base_uri": "https://localhost:8080/", "height": 51}
[f'{m:.15f}' for m in mean], [f'{s:.15f}' for s in std]
# + [markdown] id="zkPHJnh8DEnR" colab_type="text"
# ## Compute Mean and Std dev. for depth_fg_bg
# + id="VDB-EvsuDEbS" colab_type="code" outputId="275db6b4-525d-499c-9da5-a7d58dfd17a6" colab={"base_uri": "https://localhost:8080/", "height": 100, "referenced_widgets": ["05baadd618b747e2912d6f4c251fcc07", "7880759419d8498ca034593f51f99e4b", "7c39934d943b4e86a99055a60538dcbe", "68e5d1055b5945c19aead20d8a76e9b8", "<KEY>", "f01f3d2e407e4871afd7babbd2198c2b", "e438e55455b246629a46cf967871a6d4", "29a547f9f8f342eaad406b48c3508a09"]}
depth_fg_bg_dataset = DenseDepth(root='./', data='depth_fg_bg', transform=T.Compose([T.ToTensor()]))
mean, std = get_mean_and_std(depth_fg_bg_dataset)
# + id="F-mq43fcD6J2" colab_type="code" outputId="24ccd0f9-cf3e-4ed1-f898-d5db3357f409" colab={"base_uri": "https://localhost:8080/", "height": 51}
[f'{m:.15f}' for m in mean], [f'{s:.15f}' for s in std]
# + id="YveT6mABw7kX" colab_type="code" colab={}
f = open(root_path, 'rb')
self.zip_content = f.read()
f.close()
self.zip_file = zipfile.ZipFile(io.BytesIO(self.zip_content), 'r')
|
notebooks/03_DepthModel_MeanStd.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
from sympy import Sum, oo, Function, S, symbols, solve, summation, Add, Eq, IndexedBase
n = symbols('n')
class fn(Function):
@classmethod
def eval(cls, n):
if n.is_Number:
if n == 1:
return 4
elif n == 2:
return 5
else:
return fn(n - 2) + fn(n - 1)  # go through fn(...) so sympy caches intermediate values
s = Sum(1 / ((fn(n)**2) + (fn(n) * fn(n+1))), (n, 1, 100))  # avoid shadowing sympy's S imported above
Eq(s, s.doit())
sum_ = summation(1 / ((fn(n)**2) + (fn(n) * fn(n+1))), (n, 1, 100))
sum_
# +
seq_sum = 0
for i in range(50, 61):
seq_sum += (i - 1) * 2
print(seq_sum)
# -
|
Sequence sum.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# %matplotlib inline
#
# # Digits Classification Exercise
#
#
# A tutorial exercise regarding the use of classification techniques on
# the Digits dataset.
#
# This exercise is used in the `clf_tut` part of the
# `supervised_learning_tut` section of the
# `stat_learn_tut_index`.
#
# +
print(__doc__)
from sklearn import datasets, neighbors, linear_model
X_digits, y_digits = datasets.load_digits(return_X_y=True)
X_digits = X_digits / X_digits.max()
n_samples = len(X_digits)
X_train = X_digits[:int(.9 * n_samples)]
y_train = y_digits[:int(.9 * n_samples)]
X_test = X_digits[int(.9 * n_samples):]
y_test = y_digits[int(.9 * n_samples):]
knn = neighbors.KNeighborsClassifier()
logistic = linear_model.LogisticRegression(max_iter=1000)
print('KNN score: %f' % knn.fit(X_train, y_train).score(X_test, y_test))
print('LogisticRegression score: %f'
% logistic.fit(X_train, y_train).score(X_test, y_test))
|
sklearn/sklearn learning/demonstration/auto_examples_jupyter/exercises/plot_digits_classification_exercise.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [conda env:env_gcp_dl]
# language: python
# name: conda-env-env_gcp_dl-py
# ---
# # MNIST classification example with TensorFlow
# ## Install packages on Google Cloud Datalab (locally use conda env)
# ### Select in the Python3 Kernel:
# In the menu bar, under 'Kernel', select
# **python3**
# ### Install needed packages
# copy the command below in a Google Cloud Datalab cell
# **!pip install tensorflow==1.12**
# ### Restart the Kernel
# this is to take into account the newly installed packages. Click in the menu bar on:
# **Reset Session**
# ## Needed librairies
import tensorflow as tf
import tensorflow.contrib.eager as tfe
import numpy as np
import matplotlib.pyplot as plt
tf.__version__
tf.enable_eager_execution()
tf.executing_eagerly()
# ## Import the Data
# get mnist data, split between train and test sets
# on GCP
#(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
# with AXA network
import gzip
import sys
import _pickle as cPickle
def load_data(path):
f = gzip.open(path, 'rb')
if sys.version_info < (3,):
data = cPickle.load(f)
else:
data = cPickle.load(f, encoding='bytes')
f.close()
return data
(x_train, y_train), (x_test, y_test) = load_data(path='../data/mnist.pkl.gz')
# check data shape (training)
x_train.shape
# check data shape (train)
x_test.shape
x_train.dtype, x_test.dtype
np.max(x_train), np.min(x_train), np.max(x_test), np.min(x_test)
# ## Normalize and reorganize the data
# cast uint8 -> float32
x_train = x_train.astype('float32')
x_test = x_test.astype('float32')
# renormalize the data 255 grey variation
x_train /= 255
x_test /= 255
# reshape the data 28 x 28 -> 784
x_train = x_train.reshape(len(x_train), x_train.shape[1]*x_train.shape[2])
x_test = x_test.reshape(len(x_test), x_test.shape[1]*x_test.shape[2])
x_train.shape
x_test.shape
# ## Reshape the labels
y_train.shape
y_test.shape
np.unique(y_train), np.unique(y_test)
num_classes = len(np.unique(y_train))
num_classes
# convert class vectors to binary class matrices
y_train = tf.keras.utils.to_categorical(y_train, num_classes)
y_test = tf.keras.utils.to_categorical(y_test, num_classes)
y_train.shape
y_test.shape
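# As a numpy-only sketch of what `to_categorical` does (one-hot encoding of
# integer class labels), independent of TensorFlow; the toy labels below are
# made up:

```python
import numpy as np

# One-hot encode integer class labels by indexing rows of an identity matrix.
labels = np.array([0, 2, 1])
n_classes = 3  # named n_classes to avoid shadowing num_classes above
one_hot = np.eye(n_classes)[labels]
print(one_hot)
```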
dim_input=x_train.shape[1]
dim_input
# ## Visualize the data
plt.figure(figsize=(20,4))
for index, (image, label) in enumerate(zip(x_train[0:5], y_train[0:5])):
plt.subplot(1, 5, index + 1)
plt.imshow(np.reshape(image, (28,28)), cmap=plt.cm.gray)
plt.title('Training: %i\n' % np.argmax(label), fontsize = 20)
# ## Defined some hyperparameters
# +
# print info every n_print
n_print = 1
# hidden layer 1
n1=300
#learning_r=0.01
# -
# ## Defined our model
# +
# x [60000, 784]
# y [60000, 10]
# 1 layer n1 with 200 neurones
# n1 = 300
# . . . . . . input data (flattened pixels) x [batch , dim_input]
# \x/\x/x/ -- fully connected layer (relu) W1[dim_input, n1] B1[n1 ]
# . . . Y1[batch , n1]
# \x/ -- fully connected layer (softmax) W2[n1 , num_classes] B2[num_classes]
# . Y2[batch , num_classes]
# reset graph before starting
tf.reset_default_graph()
# tensor (placeholder) for the learning rate: learning_rate -> not needed with Eager execution
# tensor (placeholder) for the momentum: momentum -> not needed with Eager execution
# tensor (placeholder) for the input data [60000, 784]: x -> not needed with Eager execution
# tensor (placeholder) for the output data [60000, 10]: y -> not needed with Eager execution
# now declare the weights connecting the input to the hidden layer
W1 = tf.Variable(tf.random_normal([dim_input, n1], stddev=0.03), name='W1')
b1 = tf.Variable(tf.random_normal([n1]), name='b1')
# and the weights connecting the hidden layer to the output layer
W2 = tf.Variable(tf.random_normal([n1, num_classes], stddev=0.03), name='W2')
b2 = tf.Variable(tf.random_normal([num_classes]), name='b2')
# now let's define the cost function which we are going to train the model on
# cross entropy defined manually -> need to be a function in Eager mode
# -sum(y * log(y_) + (1-y) * log(1-y_))
# need to be a function -> Eager execution
def cost(x,y):
# calculate the output of the hidden layer
Y1 = tf.nn.relu(tf.matmul(x, W1) + b1)
# last layer
Ylogits= tf.matmul(Y1, W2) + b2
# output layer
y_ = tf.nn.softmax(Ylogits)
y_clipped = tf.clip_by_value(y_, 1e-10, 0.9999999) # clip to [1e-10, 0.9999999] so the logs below never see 0 or 1
cross_entropy = tf.reduce_sum(y * tf.log(y_clipped)+ (1 - y) * tf.log(1 - y_clipped), axis=1)
cost = -tf.reduce_mean(cross_entropy)
return cost, y_
# define an accuracy assessment operation
# need to be a function -> Eager execution
def acc(y_, y):
# define an accuracy assessment operation
correct_prediction = tf.equal(tf.argmax(y, 1), tf.argmax(y_, 1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
return accuracy.numpy()
# -
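# A quick plain-numpy sketch of the clipped cross-entropy computed inside
# cost() above, for one softmax-style prediction against a one-hot target
# (the numbers here are made up):

```python
import numpy as np

# One one-hot target row and one softmax-like prediction row.
y_true = np.array([[0.0, 1.0, 0.0]])
y_pred = np.array([[0.1, 0.8, 0.1]])

# Clip exactly as cost() does to avoid log(0).
y_clip = np.clip(y_pred, 1e-10, 0.9999999)
ce = np.sum(y_true * np.log(y_clip)
            + (1 - y_true) * np.log(1 - y_clip), axis=1)
cost_value = -np.mean(ce)
print(cost_value)
```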
def run_logistic_model(learning_r, training_epochs, train_obs, train_labels, test_obs, test_labels, debug = False):
cost_history_train = np.empty(shape=[0], dtype = float)
cost_history_test = np.empty(shape=[0], dtype = float)
# add an optimiser
optimiser = tf.train.GradientDescentOptimizer(learning_rate=learning_r)
# to add for Eager optimisation
grad=tfe.implicit_gradients(cost)
for epoch in range(training_epochs+1):
optimiser.apply_gradients(grad(train_obs, train_labels))
cost_train, y_train_ = cost(train_obs, train_labels)
cost_history_train = np.append(cost_history_train, cost_train.numpy())
cost_test, y_test_ = cost(test_obs, test_labels)
cost_history_test = np.append(cost_history_test, cost_test.numpy())
if (epoch % n_print == 0) & debug:
print("Reached epoch", epoch, "cost J =", str.format('{0:.6f}', cost_train))
acc_train=acc(y_train_.numpy(), train_labels)
acc_test=acc(y_test_.numpy(), test_labels)
print(" accurary on the training set", str.format('{0:.2f}', acc_train))
print(" accurary on the testing set", str.format('{0:.2f}', acc_test))
print ("Accuracy on training data:", acc(y_train_.numpy(), y_train))
print ("Accuracy on testing data:", acc(y_test_.numpy(), y_test))
return cost_history_train, cost_history_test
cost_history_train, cost_history_test = run_logistic_model(learning_r = 0.2,
training_epochs = 50,
train_obs = x_train,
train_labels = y_train,
test_obs = x_test,
test_labels = y_test,
debug = True)
# +
plt.rc('xtick', labelsize='x-small')
plt.rc('ytick', labelsize='x-small')
fig = plt.figure(figsize=(10,8))
ax = fig.add_subplot(1, 1, 1)
ax.plot(cost_history_train, ls='-', color = 'black', lw = 3, label = r'Training $\gamma = 0.01$')
ax.plot(cost_history_test, ls='--', color = 'blue', lw = 3, label = r'Testing $\gamma = 0.01$')
ax.set_xlabel('epochs', fontsize = 16)
ax.set_ylabel('Cost function $J$', fontsize = 16)
plt.legend(bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0., fontsize = 16)
plt.tick_params(labelsize=16);
|
notebook/TF_2.0/10_Example_mnist_tensorflow_eager.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # LS11 with Tunable Periodic Components
# Loehle and Scafetta (2011) (hereafter LS11) propose a model for global mean surface temperatures (GMST) whose natural component is composed of a linear term and two cyclic terms with periods of 20 and 60 years; to this a second linear term, active from approximately 1942 onwards, is added to represent anthropogenic influences:
#
# \begin{equation*}
# y(t) = A \cos\left[2\pi(t - T_1)/H \right] + B \cos\left[2\pi(t - T_2)/G \right] + C(t-1900) + D + \max\left[E + F\times(t-1950), 0\right].
# \end{equation*}
#
# The parameters of this model, taken from LS11 are as follows:
#
# \begin{equation*}
# A = 0.121,~~
# B = 0.041,~~
# C = 0.0016,~~
# D = -0.317,~~
# E = 0.054,~~
# F = 0.0066,~~
# G = 20, ~~
# H = 60, ~~
# T_1 = 1998.58,~~
# T_2 = 1999.65.
# \end{equation*}
#
# In this notebook, we will also tune the periodicities of the sinusoidal components.
#
# First we import libraries for maths functions etc., plotting and downloading data:
import numpy as np;
import matplotlib.pyplot as plt
import urllib.request
from scipy.optimize import least_squares
# Next, we download the HadCRUT3-gl dataset from CRU webserver and extract the data, which are on alternate lines of the file. The year is in the first column and the annual global mean surface temperature anomaly is in the last column.
response = urllib.request.urlopen('https://crudata.uea.ac.uk/cru/data/temperature/HadCRUT3-gl.dat')
year, temp = [], [];
for count, line in enumerate(response, start=1):
if count % 2 == 1:
line = line.split();
year.append(int(line[0]))
temp.append(float(line[-1]))
# We then define a function that implements the LS11 model. This time, rather than the parameters being hard-coded as in the previous notebook, they are provided as an argument. The arguments are expected to be numpy arrays.
def ls11(year, theta) :
return ( theta[3]
+ theta[2]*(year - 1900)
+ theta[1]*np.cos(2*np.pi*(year - theta[9])/theta[6])
+ theta[0]*np.cos(2*np.pi*(year - theta[8])/theta[7])
+ np.maximum(theta[4] + theta[5]*(year-1950), 0))
# Just as a sanity check, reproduce the graph using the array of model parameters given by LS11:
theta = np.array([0.121, 0.041, 0.0016, -0.317, 0.054, 0.0066, 20, 60, 1998.58, 1999.65]);
year = np.array(year);
temp = np.array(temp);
pred = ls11(year, theta);
plt.plot(year, temp, year, pred);
plt.ylabel('GMST anomaly ($^\circ$C)');
plt.xlabel('Year');
plt.legend(['HadCRUT3-gl', 'LS11']);
# Attempt to fit the natural component of the model via non-linear least-squares optimisation, using the same calibration period (1850-1950)
# +
def ls11natural(year, theta) :
return ( theta[3]
+ theta[2]*(year - 1900)
+ theta[1]*np.cos(2*np.pi*(year - theta[9])/theta[6])
+ theta[0]*np.cos(2*np.pi*(year - theta[8])/theta[7]))
def fun(theta, year, temp):
return ls11natural(year, theta) - temp
idx = year <= 1950;
result = least_squares(fun, theta.copy(), loss='linear', args=(year[idx], temp[idx]));
phi = result.x;
# -
# Plot the natural component:
pred = ls11natural(year, result.x);
plt.plot(year, temp, year, pred);
plt.ylabel('GMST anomaly ($^\circ$C)');
plt.xlabel('Year');
plt.legend(['HadCRUT3-gl', 'LS11 (natural)']);
# Fit the anthropogenic component of the model to the residuals for 1950-2010. Note that with tunable periodicities for the cyclic components, the residuals from 1950 can once more be reasonably modelled as a linear function.
idx = (year >= 1950) & (year <= 2010);
X = np.array([np.ones(year[idx].shape), year[idx]-1950]).T;
y = temp[idx] - pred[idx];
phi[4:6] = (np.linalg.pinv(X.T@X)@(X.T@y));
plt.plot(year, temp-pred, year, np.maximum(phi[5]*(year-1950) + phi[4], 0));
plt.ylabel('GMST anomaly ($^\circ$C)');
plt.xlabel('Year');
plt.legend(['residuals', 'LS11 (anthro)']);
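# The fit above uses the normal equations, pinv(X.T @ X) @ (X.T @ y); a minimal
# synthetic check of that pattern on made-up noiseless data:

```python
import numpy as np

# Recover a known intercept (1.0) and slope (2.0) with the same
# normal-equation pattern used in the cell above.
t_syn = np.arange(10, dtype=float)
X_syn = np.array([np.ones_like(t_syn), t_syn]).T
y_syn = 1.0 + 2.0 * t_syn
coef = np.linalg.pinv(X_syn.T @ X_syn) @ (X_syn.T @ y_syn)
print(coef)
```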
# Lastly plot the natural and anthropogenic components
pred = ls11(year, phi);
plt.plot(year, temp, year, pred);
plt.ylabel('GMST anomaly ($^\circ$C)');
plt.xlabel('Year');
plt.legend(['HadCRUT3-gl', 'LS11']);
# With the periodicities made tunable, the fitted periods are approximately 22 and 69 years instead of 20 and 60. Note that the slope of the anthropogenic component has also increased, from 0.0066 to 0.0074.
phi
# ## References:
# [LS11] <NAME> and <NAME>, "Climate Change Attribution Using Empirical Decomposition of Climate Data", <i>The Open Atmospheric Science Journal</i>, volume 5, pages 74-86, 2011.
|
ls11_notebook_004.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernel_info:
# name: python3
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] deletable=true editable=true outputExpanded=false
# # Playing with new 2017 tf.contrib.seq2seq
#
# TF now has new `tf.contrib.seq2seq`. Let's make small example of using it.
# + deletable=true editable=true outputExpanded=false
# %matplotlib inline
import numpy as np
import tensorflow as tf
from tensorflow.contrib.rnn import LSTMCell, GRUCell
from model_new import Seq2SeqModel, train_on_copy_task
import pandas as pd
import helpers
import warnings
warnings.filterwarnings("ignore")
# + deletable=true editable=true outputExpanded=false
tf.__version__
# + [markdown] deletable=true editable=true
# By this point implementations are quite long, so I put them in [model_new.py](model_new.py), while notebook will illustrate the application.
# + deletable=true editable=true
tf.reset_default_graph()
tf.set_random_seed(1)
with tf.Session() as session:
# with bidirectional encoder, decoder state size should be
# 2x encoder state size
model = Seq2SeqModel(encoder_cell=LSTMCell(10),
decoder_cell=LSTMCell(20),
vocab_size=10,
embedding_size=10,
attention=True,
bidirectional=True,
debug=False)
session.run(tf.global_variables_initializer())
train_on_copy_task(session, model,
length_from=3, length_to=8,
vocab_lower=2, vocab_upper=10,
batch_size=100,
max_batches=3000,
batches_in_epoch=1000,
verbose=True)
# + [markdown] deletable=true editable=true
# ## Fun exercise, compare performance of different seq2seq variants.
#
# Comparison will be done using train loss tracks: since the task is algorithmic and the data is generated directly from the true distribution, out-of-sample testing doesn't really make sense.
# + deletable=true editable=true
loss_tracks = dict()
def do_train(session, model):
return train_on_copy_task(session, model,
length_from=3, length_to=8,
vocab_lower=2, vocab_upper=10,
batch_size=100,
max_batches=5000,
batches_in_epoch=1000,
verbose=False)
def make_model(**kwa):
args = dict(cell_class=LSTMCell,
num_units_encoder=10,
vocab_size=10,
embedding_size=10,
attention=False,
bidirectional=False,
debug=False)
args.update(kwa)
cell_class = args.pop('cell_class')
num_units_encoder = args.pop('num_units_encoder')
num_units_decoder = num_units_encoder
if args['bidirectional']:
num_units_decoder *= 2
args['encoder_cell'] = cell_class(num_units_encoder)
args['decoder_cell'] = cell_class(num_units_decoder)
return Seq2SeqModel(**args)
# + [markdown] deletable=true editable=true
# ### Test bidirectional/forward encoder, attention/no attention, in all combinations
# + deletable=true editable=true
tf.reset_default_graph()
tf.set_random_seed(1)
with tf.Session() as session:
model = make_model(bidirectional=False, attention=False)
session.run(tf.global_variables_initializer())
loss_tracks['forward encoder, no attention'] = do_train(session, model)
tf.reset_default_graph()
tf.set_random_seed(1)
with tf.Session() as session:
model = make_model(bidirectional=True, attention=False)
session.run(tf.global_variables_initializer())
loss_tracks['bidirectional encoder, no attention'] = do_train(session, model)
tf.reset_default_graph()
tf.set_random_seed(1)
with tf.Session() as session:
model = make_model(bidirectional=False, attention=True)
session.run(tf.global_variables_initializer())
loss_tracks['forward encoder, with attention'] = do_train(session, model)
tf.reset_default_graph()
tf.set_random_seed(1)
with tf.Session() as session:
model = make_model(bidirectional=True, attention=True)
session.run(tf.global_variables_initializer())
loss_tracks['bidirectional encoder, with attention'] = do_train(session, model)
pd.DataFrame(loss_tracks).plot(figsize=(13, 8))
# + [markdown] deletable=true editable=true
# Naturally, attention helps a lot when the task is simply copying from inputs to outputs.
# + [markdown] deletable=true editable=true
# ### Test GRU vs LSTM
# + deletable=true editable=true
import time
tf.reset_default_graph()
tf.set_random_seed(1)
with tf.Session() as session:
model = make_model(bidirectional=True, attention=True, cell_class=LSTMCell)
session.run(tf.global_variables_initializer())
t0 = time.time()
lstm_track = do_train(session, model)
lstm_took = time.time() - t0
tf.reset_default_graph()
tf.set_random_seed(1)
with tf.Session() as session:
model = make_model(bidirectional=True, attention=True, cell_class=GRUCell)
session.run(tf.global_variables_initializer())
t0 = time.time()
gru_track = do_train(session, model)
gru_took = time.time() - t0
gru = pd.Series(gru_track, name='gru')
lstm = pd.Series(lstm_track, name='lstm')
tracks_batch = pd.DataFrame(dict(lstm=lstm, gru=gru))
tracks_batch.index.name = 'batch'
gru.index = gru.index / gru_took
lstm.index = lstm.index / lstm_took
tracks_time = pd.DataFrame(dict(lstm=lstm, gru=gru)).ffill()
tracks_time.index.name = 'time (seconds)'
# + deletable=true editable=true
tracks_batch.plot(figsize=(8, 5), title='GRU vs LSTM loss, batch-time')
# + deletable=true editable=true
tracks_time.plot(figsize=(8, 5), title='GRU vs LSTM loss, compute-time')
# + [markdown] deletable=true editable=true
# GRU has fewer parameters, so training is supposed to be faster, but this test doesn't show it.
|
3-seq2seq-native-new.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import pandas as pd
to2019 = pd.read_csv('/Users/philippmetzger/OneDrive/PHILIPP/NOVA IMS/2nd Semester/02 Data Visualisation 4 ECTS/00 Projects/Project 2/Potential datasets/LA crime/Crime_Data_from_2010_to_2019.csv')
to2019.head(5)
toPresent = pd.read_csv('/Users/philippmetzger/OneDrive/PHILIPP/NOVA IMS/2nd Semester/02 Data Visualisation 4 ECTS/00 Projects/Project 2/Potential datasets/LA crime/Crime_Data_from_2020_to_Present.csv')
toPresent.head(5)
list(to2019)
i=27
print(list(to2019)[i])
print()
to2019.iloc[0:5,i]
pd.unique(to2019['AREA NAME'])
to2019['DATE OCC'].max()
to2019['Date Rptd'].max()
len(pd.unique(to2019['Rpt Dist No']))
to2019['LOCATION']
to2019.shape
toPresent.shape
# +
# https://www.geeksforgeeks.org/get-zip-code-with-given-location-using-geopy-in-python/
from datetime import datetime
from geopy.geocoders import Nominatim
# initialize Nominatim API
geolocator = Nominatim(user_agent="geoapiExercises")
def address_to_zipcode(address):
print_times = True
t0 = datetime.now()
place = str.split(address)
#print(place)
place = ' '.join(place)
#print(place)
place = place + ' Los Angeles'
# print(place)
t1 = datetime.now()
location = geolocator.geocode(place)
t2 = datetime.now()
#print(type(location))
# NOTE: NonesList is recreated on every call, so failed lookups are not
# accumulated across calls; it is kept here as in the original.
NonesList = []
if location is None:
NonesList.append(address)
t3 = datetime.now()
if print_times:
print('Total time:', t3-t0)
print('Geopy time:', t2-t1)
print('Share of Geopy time:', (t2-t1)/(t3-t0))
return None
else:
location = location[0]
location = str.split(location, ',')
#print(location)
if len(location) < 6:
NonesList.append(address)
t3 = datetime.now()
if print_times:
print('Total time:', t3-t0)
print('Geopy time:', t2-t1)
print('Share of Geopy time:', (t2-t1)/(t3-t0))
return None
else:
zipcode = location[5]
#print(zipcode)
t3 = datetime.now()
if print_times:
print('Total time:', t3-t0)
print('Geopy time:', t2-t1)
print('Share of Geopy time:', (t2-t1)/(t3-t0))
return zipcode
# -
zipcodes_toPresent = toPresent['LOCATION'].map(lambda x: address_to_zipcode(x))
zipcodes_toPresent
0.148133 * 231587 / 60 / 60
0.148133 * 2116954 / 60 / 60
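# Since the geocoder call dominates each lookup (per the timings above), repeated
# LOCATION strings could be served from a cache. A sketch with
# functools.lru_cache, where slow_geocode is a hypothetical stand-in for
# geolocator.geocode and "90012" a made-up result:

```python
from functools import lru_cache

calls = {"n": 0}  # counts underlying "network" calls

@lru_cache(maxsize=None)
def slow_geocode(address):
    # Hypothetical stand-in for geolocator.geocode(address).
    calls["n"] += 1
    return "90012"

for addr in ["1st St", "Main St", "1st St"]:
    slow_geocode(addr)
print(calls["n"])  # 2 unique addresses -> 2 underlying calls
```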
# # LA census dataset
census_by_zipcode_2010 = pd.read_csv('/Users/philippmetzger/OneDrive/PHILIPP/NOVA IMS/2nd Semester/02 Data Visualisation 4 ECTS/00 Projects/Project 2/Potential datasets/census/2010-census-populations-by-zip-code.csv')
census_by_council_district = pd.read_csv('/Users/philippmetzger/OneDrive/PHILIPP/NOVA IMS/2nd Semester/02 Data Visualisation 4 ECTS/00 Projects/Project 2/Potential datasets/census/census-data-by-council-district.csv')
census_by_neighborhood_council = pd.read_csv('/Users/philippmetzger/OneDrive/PHILIPP/NOVA IMS/2nd Semester/02 Data Visualisation 4 ECTS/00 Projects/Project 2/Potential datasets/census/census-data-by-neighborhood-council.csv')
census_by_zipcode_2010.head(3)
census_by_council_district.head(3)
pd.unique(census_by_council_district['Council District'])
list(census_by_council_district)
census_by_neighborhood_council.head(3)
pd.unique(to2019['AREA NAME'])
pd.unique(census_by_neighborhood_council['NC_Name'])
# # LA income data
income = pd.read_csv('/Users/philippmetzger/OneDrive/PHILIPP/NOVA IMS/2nd Semester/02 Data Visualisation 4 ECTS/00 Projects/Project 2/Potential datasets/Median Household Income/Median_household_income_2012_2015.csv')
pd.unique(income['calendar_year'])
income[income['calendar_year']==2014].sort_values(by='council_district')
|
Dashboard/2021-03-24_trying_to_link_crime_and_census.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## ipyleaflet and mapquest
#
# Get your free API key here: https://developer.mapquest.com/
#
# The API documentation is available here:
# https://developer.mapquest.com/documentation/open/directions-api/
#
# The following wrapper is suggested. Feel free to adapt!
# +
from functools import lru_cache
import requests
from ipyleaflet import Map, Marker, Polyline
from typing import Any, Dict, List
JSONType = Dict[str, Any]
class Point:
def __init__(self, json: JSONType) -> None:
self.json = json
self.latlng = json["results"][0]["locations"][0]["latLng"]
def __repr__(self) -> str:
x = self.json["results"][0]["locations"][0]
return ", ".join(
x[f"adminArea{i}"]
for i in reversed(range(1, 7))
if f"adminArea{i}" in x and x[f"adminArea{i}"] != ""
)
def add_to(self, m: Map, **kwargs) -> None:
m.add_layer(
Marker(location=[self.latlng["lat"], self.latlng["lng"]], **kwargs)
)
class Route:
def __init__(self, json: JSONType) -> None:
self.json = json
@property
def distance(self) -> float:
return self.json["route"]["distance"]
# lru_cache provides the most basic form of cache
# think of better strategies if you need to save requests
@lru_cache(None)
def get_shape(self) -> List[float]:
return MapQuest.routeshape(self.json["route"]["sessionId"])
def add_to(self, m: Map, **kwargs) -> None:
kwargs = {"fill_opacity": 0, **kwargs}
it = iter(self.get_shape())
def getTuple(it):
while True:
try:
yield next(it), next(it)
except StopIteration:
return
m.add_layer(Polyline(locations=list(getTuple(it)), **kwargs))
class MapQuest:
# fill here your API key
api_key = ""
@staticmethod
def geolocate(name: str) -> Point:
base_url = "http://www.mapquestapi.com/geocoding/v1/address"
req = requests.post(
base_url, params={"key": MapQuest.api_key}, json={"location": name}
)
return Point(req.json())
@staticmethod
def route(from_: str, to_: str) -> Route:
base_url = "http://www.mapquestapi.com/directions/v2/route"
req = requests.post(
base_url,
params={"key": MapQuest.api_key},
json={"locations": [from_, to_]},
)
return Route(req.json())
@staticmethod
def routeshape(sessionId: str) -> List[float]:
base_url = "http://open.mapquestapi.com/directions/v2/routeshape"
req = requests.get(
base_url,
params={
"key": MapQuest.api_key,
"sessionId": sessionId,
"fullShape": True,
},
)
json = req.json()
return json["route"]["shape"]["shapePoints"]
# -
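# The getTuple helper above walks a flat [lat0, lng0, lat1, lng1, ...] shape
# list two items at a time; the same pairing idiom stands alone (the
# coordinates below are made up):

```python
# zip() over one shared iterator consumes two items per output tuple,
# turning a flat alternating list into (lat, lng) pairs.
flat = [44.0, 2.0, 43.6, 1.4]
it = iter(flat)
pairs = list(zip(it, it))
print(pairs)
```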
m = Map(
center=(44, 2), zoom=7,
)
m
# +
MapQuest.geolocate('Toulouse').add_to(m)
MapQuest.geolocate('Bordeaux').add_to(m)
route = MapQuest.route('Toulouse', 'Perpignan')
route.add_to(m)
route.distance
# -
MapQuest.route('Montpellier', 'Rodez').add_to(m, color='red')
|
notebooks/.ipynb_checkpoints/ipyleaflet-checkpoint.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] papermill={"duration": 0.012406, "end_time": "2021-11-09T00:05:18.997085", "exception": false, "start_time": "2021-11-09T00:05:18.984679", "status": "completed"} tags=[]
# Now that you are familiar with the coding environment, it's time to learn how to make your own charts!
#
# In this tutorial, you'll learn just enough Python to create professional looking **line charts**. Then, in the following exercise, you'll put your new skills to work with a real-world dataset.
#
# # Set up the notebook
#
# We begin by setting up the coding environment. (_This code is hidden, but you can un-hide it by clicking on the "Code" button immediately below this text, on the right._)
# + _kg_hide-input=true _kg_hide-output=true papermill={"duration": 1.028483, "end_time": "2021-11-09T00:05:20.037524", "exception": false, "start_time": "2021-11-09T00:05:19.009041", "status": "completed"} tags=[]
import pandas as pd
pd.plotting.register_matplotlib_converters()
import matplotlib.pyplot as plt
# %matplotlib inline
import seaborn as sns
print("Setup Complete")
# + [markdown] papermill={"duration": 0.013811, "end_time": "2021-11-09T00:05:20.063802", "exception": false, "start_time": "2021-11-09T00:05:20.049991", "status": "completed"} tags=[]
# # Select a dataset
#
# The dataset for this tutorial tracks global daily streams on the music streaming service [Spotify](https://en.wikipedia.org/wiki/Spotify). We focus on five popular songs from 2017 and 2018:
# 1. "Shape of You", by <NAME> [(link)](https://bit.ly/2tmbfXp)
# 2. "Despacito", by <NAME> [(link)](https://bit.ly/2vh7Uy6)
# 3. "Something Just Like This", by The Chainsmokers and Coldplay [(link)](https://bit.ly/2OfSsKk)
# 4. "HUMBLE.", by <NAME> [(link)](https://bit.ly/2YlhPw4)
# 5. "Unforgettable", by French Montana [(link)](https://bit.ly/2oL7w8b)
#
# 
#
# Notice that the first date that appears is January 6, 2017, corresponding to the release date of "Shape of You", by <NAME>. Using the table, you can see that "Shape of You" was streamed 12,287,078 times globally on the day of its release. The other songs have missing values in the first row, because they weren't released until later!
#
# # Load the data
#
# As you learned in the previous tutorial, we load the dataset using the `pd.read_csv` command.
# + papermill={"duration": 0.043296, "end_time": "2021-11-09T00:05:20.119084", "exception": false, "start_time": "2021-11-09T00:05:20.075788", "status": "completed"} tags=[]
# Path of the file to read
spotify_filepath = "../input/spotify.csv"
# Read the file into a variable spotify_data
spotify_data = pd.read_csv(spotify_filepath, index_col="Date", parse_dates=True)
# + [markdown] papermill={"duration": 0.011524, "end_time": "2021-11-09T00:05:20.142632", "exception": false, "start_time": "2021-11-09T00:05:20.131108", "status": "completed"} tags=[]
# The end result of running both lines of code above is that we can now access the dataset by using `spotify_data`.
#
# # Examine the data
#
# We can print the _first_ five rows of the dataset by using the `head` command that you learned about in the previous tutorial.
# + papermill={"duration": 0.034188, "end_time": "2021-11-09T00:05:20.188638", "exception": false, "start_time": "2021-11-09T00:05:20.154450", "status": "completed"} tags=[]
# Print the first 5 rows of the data
spotify_data.head()
# + [markdown] papermill={"duration": 0.012186, "end_time": "2021-11-09T00:05:20.213448", "exception": false, "start_time": "2021-11-09T00:05:20.201262", "status": "completed"} tags=[]
# Now check that the first five rows agree with the image of the dataset (_from when we saw what it would look like in Excel_) above.
#
# > Empty entries will appear as `NaN`, which is short for "Not a Number".
#
# We can also take a look at the _last_ five rows of the data by making only one small change (where `.head()` becomes `.tail()`):
# + papermill={"duration": 0.028585, "end_time": "2021-11-09T00:05:20.254418", "exception": false, "start_time": "2021-11-09T00:05:20.225833", "status": "completed"} tags=[]
# Print the last five rows of the data
spotify_data.tail()
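# If you'd like to count the empty entries rather than just eyeball them, pandas can tally the `NaN`s per column with `isnull().sum()`. The snippet below builds a tiny stand-in DataFrame so it runs on its own (only the first-day stream count comes from the table above; the other values are made up) — with the real data you would call `spotify_data.isnull().sum()`.

```python
import numpy as np
import pandas as pd

# A tiny stand-in for spotify_data: "Despacito" is missing on the first date.
toy_data = pd.DataFrame({
    "Shape of You": [12287078, 13190270],  # second value is invented
    "Despacito": [np.nan, 3526141],        # values are invented
})

# Count the missing entries in each column.
print(toy_data.isnull().sum())
```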
# + [markdown] papermill={"duration": 0.012798, "end_time": "2021-11-09T00:05:20.280375", "exception": false, "start_time": "2021-11-09T00:05:20.267577", "status": "completed"} tags=[]
# Thankfully, everything looks about right, with millions of daily global streams for each song, and we can proceed to plotting the data!
#
# # Plot the data
#
# Now that the dataset is loaded into the notebook, we need only one line of code to make a line chart!
# + papermill={"duration": 0.654345, "end_time": "2021-11-09T00:05:20.947714", "exception": false, "start_time": "2021-11-09T00:05:20.293369", "status": "completed"} tags=[]
# Line chart showing daily global streams of each song
sns.lineplot(data=spotify_data)
# + [markdown] papermill={"duration": 0.015386, "end_time": "2021-11-09T00:05:20.979442", "exception": false, "start_time": "2021-11-09T00:05:20.964056", "status": "completed"} tags=[]
# As you can see above, the line of code is relatively short and has two main components:
# - `sns.lineplot` tells the notebook that we want to create a line chart.
# - _Every command that you learn about in this course will start with `sns`, which indicates that the command comes from the [seaborn](https://seaborn.pydata.org/) package. For instance, we use `sns.lineplot` to make line charts. Soon, you'll learn that we use `sns.barplot` and `sns.heatmap` to make bar charts and heatmaps, respectively._
# - `data=spotify_data` selects the data that will be used to create the chart.
#
# Note that you will always use this same format when you create a line chart, and **_the only thing that changes with a new dataset is the name of the dataset_**. So, if you were working with a different dataset named `financial_data`, for instance, the line of code would appear as follows:
# ```
# sns.lineplot(data=financial_data)
# ```
#
# Sometimes there are additional details we'd like to modify, like the size of the figure and the title of the chart. Each of these options can easily be set with a single line of code.
# + papermill={"duration": 0.5183, "end_time": "2021-11-09T00:05:21.513343", "exception": false, "start_time": "2021-11-09T00:05:20.995043", "status": "completed"} tags=[]
# Set the width and height of the figure
plt.figure(figsize=(14,6))
# Add title
plt.title("Daily Global Streams of Popular Songs in 2017-2018")
# Line chart showing daily global streams of each song
sns.lineplot(data=spotify_data)
# + [markdown] papermill={"duration": 0.018484, "end_time": "2021-11-09T00:05:21.551048", "exception": false, "start_time": "2021-11-09T00:05:21.532564", "status": "completed"} tags=[]
# The first line of code sets the size of the figure to `14` inches (in width) by `6` inches (in height). To set the size of _any figure_, you need only copy the same line of code as it appears. Then, if you'd like to use a custom size, change the provided values of `14` and `6` to the desired width and height.
#
# The second line of code sets the title of the figure. Note that the title must *always* be enclosed in quotation marks (`"..."`)!
#
# # Plot a subset of the data
#
# So far, you've learned how to plot a line for _every_ column in the dataset. In this section, you'll learn how to plot a _subset_ of the columns.
#
# We'll begin by printing the names of all columns. This is done with one line of code and can be adapted for any dataset by just swapping out the name of the dataset (in this case, `spotify_data`).
# + papermill={"duration": 0.027981, "end_time": "2021-11-09T00:05:21.597816", "exception": false, "start_time": "2021-11-09T00:05:21.569835", "status": "completed"} tags=[]
list(spotify_data.columns)
# + [markdown] papermill={"duration": 0.019071, "end_time": "2021-11-09T00:05:21.636207", "exception": false, "start_time": "2021-11-09T00:05:21.617136", "status": "completed"} tags=[]
# In the next code cell, we plot the lines corresponding to the first two columns in the dataset.
# + papermill={"duration": 0.413224, "end_time": "2021-11-09T00:05:22.068883", "exception": false, "start_time": "2021-11-09T00:05:21.655659", "status": "completed"} tags=[]
# Set the width and height of the figure
plt.figure(figsize=(14,6))
# Add title
plt.title("Daily Global Streams of Popular Songs in 2017-2018")
# Line chart showing daily global streams of 'Shape of You'
sns.lineplot(data=spotify_data['Shape of You'], label="Shape of You")
# Line chart showing daily global streams of 'Despacito'
sns.lineplot(data=spotify_data['Despacito'], label="Despacito")
# Add label for horizontal axis
plt.xlabel("Date")
# + [markdown] papermill={"duration": 0.021572, "end_time": "2021-11-09T00:05:22.113096", "exception": false, "start_time": "2021-11-09T00:05:22.091524", "status": "completed"} tags=[]
# The first two lines of code set the title and size of the figure (_and should look very familiar!_).
#
# The next two lines each add a line to the line chart. For instance, consider the first one, which adds the line for "Shape of You":
#
# ```python
# # Line chart showing daily global streams of 'Shape of You'
# sns.lineplot(data=spotify_data['Shape of You'], label="Shape of You")
# ```
# This line looks really similar to the code we used when we plotted every line in the dataset, but it has a few key differences:
# - Instead of setting `data=spotify_data`, we set `data=spotify_data['Shape of You']`. In general, to plot only a single column, we use this format, putting the name of the column in single quotes and enclosing it in square brackets. (_To make sure that you correctly specify the name of the column, you can print the list of all column names using the command you learned above._)
# - We also add `label="Shape of You"` to make the line appear in the legend and set its corresponding label.
#
# The final line of code modifies the label for the horizontal axis (or x-axis), where the desired label is placed in quotation marks (`"..."`).
#
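# Because we loaded the data with `parse_dates=True`, the index holds real dates, so you can also subset _rows_ by a date range with `.loc` — handy for zooming a chart in on a few months. A toy DataFrame is used below so the snippet is self-contained; the stream values are made up.

```python
import pandas as pd

# Toy frame with a date index, standing in for spotify_data.
dates = pd.date_range("2017-01-06", periods=5, freq="D")
toy_data = pd.DataFrame({"Shape of You": [12.3, 10.1, 9.8, 9.5, 9.9]},
                        index=dates)

# A date slice with .loc keeps only the rows in that range (endpoints included).
subset = toy_data.loc["2017-01-07":"2017-01-09"]
print(len(subset))  # 3
```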
# # What's next?
#
# Put your new skills to work in a **[coding exercise](https://www.kaggle.com/kernels/fork/3303716)**!
# + [markdown] papermill={"duration": 0.022201, "end_time": "2021-11-09T00:05:22.157119", "exception": false, "start_time": "2021-11-09T00:05:22.134918", "status": "completed"} tags=[]
# ---
#
#
#
#
# *Have questions or comments? Visit the [course discussion forum](https://www.kaggle.com/learn/data-visualization/discussion) to chat with other learners.*
course/Data Visualization/line-charts.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Reconstruct the Original Meeting Transcripts
import re
from bs4 import BeautifulSoup as bsoup
import os
# It can be observed that the file names come in two forms of different lengths, so the data is handled by two separate functions depending on the file-name length, as below.
# This function handles the files with the shorter name form.
def parse1(xmlsoup):
# make extractions in the topic files to build a topic list that contains the word numbers
alist = []
for i,topic in enumerate(xmlsoup.findAll("topic")):
#alist.append([topic])
#print(alist)
a = re.findall(r'words.xml#id(.+)\"',str(topic))
alist.append([a])
#print(alist)
for child in alist[i][0]:
#print(child)
if len(child) >25:
#print(child)
group1 = re.match(r'^(\(\w+.\w.\w+\))(..id)?(\(\w+.\w.\w+\))?$',str(child)).group(1)
group2 = re.match(r'^(\(\w+.\w.\w+\))(..id)?(\(\w+.\w.\w+\))?$',str(child)).group(3)
#print(group1[1:-1],group2[1:-1])
alist[i].append((group1[1:-1],group2[1:-1]))
else:
group1 = re.match(r'^(\(\w+.\w.\w+\))(..id)?(\(\w+.\w.\w+\))?$',str(child)).group(0)
alist[i].append((group1[1:-1]))
del alist[i][0]
#print(alist)
#return alist
# make extraction of the subtopics, to remove the subtopic list that are repeated in list
tt = []
tt = alist
temp=[]
for i in range(len(tt)):
temp.append([])
for j in range(len(tt[i])):
if len(tt[i][j]) == 2:
temp[i].append(tt[i][j][0])
else:
temp[i].append(tt[i][j])
strlist=[]
record_index=[]
for k in range(len(temp)):
if k ==0:
for h in temp[k]:
strlist.append(h)
else:
if temp[k][0] in strlist:
record_index.append(k)
else:
for h in temp[k]:
strlist.append(h)
tt=[element for a,element in enumerate(tt) if a not in record_index]
#deal with the exception of the topic list
tlist1 = []
tlist2 = []
tlist3 = []
tlist4 = []
dlist = []
if tt != []:
dlist = tt
else:
dlist = alist
for s,top in enumerate (dlist):
for u, top2 in enumerate (top):
if len(top2) > 2:
if top2[7] == 'A':
tlist1.append(top2[14:])
if top2[7] == 'B':
tlist2.append(top2[14:])
if top2[7] == 'C':
tlist3.append(top2[14:])
if top2[7] == 'D':
tlist4.append(top2[14:])
else:
if top2[1][7] == 'A':
tlist1.append(top2[1][14:])
if top2[1][7] == 'B':
tlist2.append(top2[1][14:])
if top2[1][7] == 'C':
tlist3.append(top2[1][14:])
if top2[1][7] == 'D':
tlist4.append(top2[1][14:])
#set up word lists that contain the word numbers from the word files
#handle the files that have 'A,B,C,D' in their names
wlist1 = []
wlist2 = []
wlist3 = []
wlist4 = []
wsoup1 = bsoup(open("./words/%s.A.words.xml"%file[:6]),"lxml")
wsub=re.sub('</nite:root>\n</body>','</nite:root></body>',str(wsoup1))
wsoup1=bsoup(wsub,'lxml')
word1 = wsoup1.body.children
for data in word1:
#print(data)
for data1 in data.children:
#print(data1)
data2 = str(data1)
#print(data2)
if re.search(r'words(\d+)', str(data2)) is not None:
wid = re.search(r'words(\d+)', str(data2)).group(1)
#print(wid)
if re.search(r'>(.+)<', str(data2)) is not None:
word = re.search(r'>(.+)<', str(data2)).group(1)
#print(word)
wlist1.append((wid,word))
else:
wlist1.append((wid,""))
#print(wlist1)
wsoup2 = bsoup(open("./words/%s.B.words.xml"%file[:6]),"lxml")
wsub2=re.sub('</nite:root>\n</body>','</nite:root></body>',str(wsoup2))
wsoup2=bsoup(wsub2,'lxml')
word2 = wsoup2.body.children
for data3 in word2:
for data4 in data3.children:
data5 = str(data4)
if re.search(r'words(\d+)', str(data5)) is not None:
wid = re.search(r'words(\d+)', str(data5)).group(1)
if re.search(r'>(.+)<', str(data5)) is not None:
word = re.search(r'>(.+)<', str(data5)).group(1)
wlist2.append((wid,word))
else:
wlist2.append((wid,""))
#print(wlist2)
wsoup3 = bsoup(open("./words/%s.C.words.xml"%file[:6]),"lxml")
wsub3=re.sub('</nite:root>\n</body>','</nite:root></body>',str(wsoup3))
wsoup3=bsoup(wsub3,'lxml')
word3 = wsoup3.body.children
for data6 in word3:
for data7 in data6.children:
data8 = str(data7)
if re.search(r'words(\d+)', str(data8)) is not None:
wid = re.search(r'words(\d+)', str(data8)).group(1)
if re.search(r'>(.+)<', str(data8)) is not None:
word = re.search(r'>(.+)<', str(data8)).group(1)
wlist3.append((wid,word))
else:
wlist3.append((wid,""))
#print(wlist3)
wsoup4 = bsoup(open("./words/%s.D.words.xml"%file[:6]),"lxml")
wsub4=re.sub('</nite:root>\n</body>','</nite:root></body>',str(wsoup4))
wsoup4=bsoup(wsub4,'lxml')
word4 = wsoup4.body.children
for data9 in word4:
for data10 in data9.children:
data11 = str(data10)
if re.search(r'words(\d+)', str(data11)) is not None:
wid = re.search(r'words(\d+)', str(data11)).group(1)
if re.search(r'>(.+)<', str(data11)) is not None:
word = re.search(r'>(.+)<', str(data11)).group(1)
wlist4.append((wid,word))
else:
wlist4.append((wid,""))
#print(wlist4)
#set up segment lists that contain the word numbers used as row-change markers
#handle the files that have 'A,B,C,D' in their names
slist1 = []
slist2 = []
slist3 = []
slist4 = []
ssoup1 = bsoup(open("./segments/%s.A.segments.xml"%file[:6]),"lxml")
#ssoup1
seg1 = ssoup1.body.children
for segdata in seg1:
for segdata1 in segdata.children:
#print(segdata1)
if re.search(r'words(\d+)\)"',str(segdata1)) is not None:
snum = re.search(r'(words)(\d+)(\)")',str(segdata1)).group(2)
#print(snum)
slist1.append(snum)
#print(slist1)
ssoup2 = bsoup(open("./segments/%s.B.segments.xml"%file[:6]),"lxml")
#ssoup1
seg2 = ssoup2.body.children
for segdata2 in seg2:
for segdata3 in segdata2.children:
if re.search(r'words(\d+)\)"',str(segdata3)) is not None:
snum2 = re.search(r'(words)(\d+)(\)")',str(segdata3)).group(2)
slist2.append(snum2)
#print(slist2)
ssoup3 = bsoup(open("./segments/%s.C.segments.xml"%file[:6]),"lxml")
#ssoup1
seg3 = ssoup3.body.children
for segdata4 in seg3:
for segdata5 in segdata4.children:
if re.search(r'words(\d+)\)"',str(segdata5)) is not None:
snum3 = re.search(r'(words)(\d+)(\)")',str(segdata5)).group(2)
slist3.append(snum3)
#print(slist3)
ssoup4 = bsoup(open("./segments/%s.D.segments.xml"%file[:6]),"lxml")
#ssoup1
seg4 = ssoup4.body.children
for segdata6 in seg4:
for segdata7 in segdata6.children:
if re.search(r'words(\d+)\)"',str(segdata7)) is not None:
snum4 = re.search(r'(words)(\d+)(\)")',str(segdata7)).group(2)
slist4.append(snum4)
#print(slist4)
#deal with topic-list entries that are not in the segment list
for v in tlist1:
if v in slist1:
continue
else:
ti = tlist1.index(v)
slist1.insert(slist1.index(tlist1[ti-1])+1,v)
for v in tlist2:
if v in slist2:
continue
else:
ti = tlist2.index(v)
slist2.insert(slist2.index(tlist2[ti-1])+1,v)
for v in tlist3:
if v in slist3:
continue
else:
ti = tlist3.index(v)
slist3.insert(slist3.index(tlist3[ti-1])+1,v)
for v in tlist4:
if v in slist4:
continue
else:
ti = tlist4.index(v)
slist4.insert(slist4.index(tlist4[ti-1])+1,v)
#put changing row marker into the word lists
for h,sele in enumerate(slist1):
for i, wele in enumerate(wlist1):
if int(wele[0]) != i:
wlist1.insert(i,(i,''))
if int(sele) == i:
#print(wele)
wlist1[i] = (wele[0],wele[1] +'\n')
#print(wlist1[i])
for j,sele2 in enumerate(slist2):
for k, wele2 in enumerate(wlist2):
if int(wele2[0]) != k:
wlist2.insert(k,(k,''))
if int(sele2) == k:
wlist2[k] = (wele2[0],wele2[1] +'\n')
#print(wele2)
for l,sele3 in enumerate(slist3):
for m, wele3 in enumerate(wlist3):
if int(wele3[0]) != m:
wlist3.insert(m,(m,''))
if int(sele3) == m:
wlist3[m] = (wele3[0],wele3[1] +'\n')
#print(wele3)
for n,sele4 in enumerate(slist4):
for p, wele4 in enumerate(wlist4):
if int(wele4[0]) != p:
wlist4.insert(p,(p,''))
if int(sele4) == p:
wlist4[p] = (wele4[0],wele4[1] +'\n')
#print(wele4)
#find out the word text and combine them, output to txt files
wlist=[]
text=''
t = []
if tt != []:
t = tt
else:
t = alist
for x in range(len(t)):
for y in range(len(t[x])):
#print(t[x][y])
if len(t[x][y]) == 2:
if t[x][y][0][7] == 'A':
wlist=wlist1
if t[x][y][0][7] == 'B':
wlist=wlist2
if t[x][y][0][7] == 'C':
wlist=wlist3
if t[x][y][0][7] == 'D':
wlist=wlist4
#print(re.search(r'(words)(\d+)',str(t[x][y][0])).group(2))
start = re.search(r'(words)(\d+)',str(t[x][y][0])).group(2)
end = re.search(r'(words)(\d+)',str(t[x][y][1])).group(2)
#print(start,end)
for i in range(int(start),int(end)+1):
text = text + ' ' + wlist[i][1]
if len(t[x][y]) > 2:
if re.search(r'(\w+\.)(\w+)(\.)(\w+\d+)',str(t[x][y])).group(2) == 'A':
wlist=wlist1
if re.search(r'(\w+\.)(\w+)(\.)(\w+\d+)',str(t[x][y])).group(2) == 'B':
wlist=wlist2
if re.search(r'(\w+\.)(\w+)(\.)(\w+\d+)',str(t[x][y])).group(2) == 'C':
wlist=wlist3
if re.search(r'(\w+\.)(\w+)(\.)(\w+\d+)',str(t[x][y])).group(2) == 'D':
wlist=wlist4
start = re.search(r'(words)(\d+)',str(t[x][y][0:])).group(2)
text = text + wlist[int(start)][1]+ ' '
#print(t[x][y][0:])
#print(start)
text = text +'\n'+'**********'+'\n'
content = re.sub('\s+\\n','\n',text)
content1 = content.replace(" "," ")
content2 = content1.lstrip('\n')
return content2
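# The id-extraction regexes used above can be exercised on a toy href to see the three stages: pull the id range out of the attribute, split it into start and end ids, then read off the numeric word indices. (The meeting id `ES2002a` below is made up for illustration; real file names follow the same shape.)

```python
import re

# A hypothetical <topic> child reference shaped like the hrefs parsed above.
sample = '<nite:child href="ES2002a.A.words.xml#id(ES2002a.A.words0)..id(ES2002a.A.words5)"/>'

# Step 1: pull the id range out of the href attribute.
ids = re.findall(r'words.xml#id(.+)"', sample)[0]

# Step 2: split a long reference into its start and end ids.
m = re.match(r'^(\(\w+.\w.\w+\))(..id)?(\(\w+.\w.\w+\))?$', ids)
start_id, end_id = m.group(1)[1:-1], m.group(3)[1:-1]

# Step 3: the numeric word index is the digits after "words".
start = re.search(r'words(\d+)', start_id).group(1)
end = re.search(r'words(\d+)', end_id).group(1)
print(start, end)  # 0 5
```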
# This function handles the files with the longer name form.
def parse2(xmlsoup):
# make extractions in the topic files to build a topic list that contains the word numbers
alist = []
for i,topic in enumerate(xmlsoup.findAll("topic")):
#alist.append([topic])
#print(alist)
a = re.findall(r'words.xml#id(.+)\"',str(topic))
alist.append([a])
#print(alist)
for child in alist[i][0]:
#print(child)
if len(child) >25:
#print(child)
group1 = re.match(r'^(\(\w+.\w.\w+\))(..id)?(\(\w+.\w.\w+\))?$',str(child)).group(1)
group2 = re.match(r'^(\(\w+.\w.\w+\))(..id)?(\(\w+.\w.\w+\))?$',str(child)).group(3)
#print(group1[1:-1],group2[1:-1])
alist[i].append((group1[1:-1],group2[1:-1]))
else:
group1 = re.match(r'^(\(\w+.\w.\w+\))(..id)?(\(\w+.\w.\w+\))?$',str(child)).group(0)
alist[i].append((group1[1:-1]))
del alist[i][0]
#print(alist)
#return alist
# make extraction of the subtopics, to remove the subtopic list that are repeated in list
tt = []
tt = alist
temp=[]
for i in range(len(tt)):
temp.append([])
for j in range(len(tt[i])):
if len(tt[i][j]) == 2:
temp[i].append(tt[i][j][0])
else:
temp[i].append(tt[i][j])
strlist=[]
record_index=[]
for k in range(len(temp)):
if k ==0:
for h in temp[k]:
strlist.append(h)
else:
if temp[k][0] in strlist:
record_index.append(k)
else:
for h in temp[k]:
strlist.append(h)
tt=[element for a,element in enumerate(tt) if a not in record_index]
#deal with the exception of the topic list
tlist1 = []
tlist2 = []
tlist3 = []
tlist4 = []
dlist = []
if tt != []:
dlist = tt
else:
dlist = alist
for s,top in enumerate (dlist):
for u, top2 in enumerate (top):
if len(top2) > 2:
if top2[8] == 'A':
tlist1.append(top2[15:])
if top2[8] == 'B':
tlist2.append(top2[15:])
if top2[8] == 'C':
tlist3.append(top2[15:])
if top2[8] == 'D':
tlist4.append(top2[15:])
else:
if top2[1][8] == 'A':
tlist1.append(top2[1][15:])
if top2[1][8] == 'B':
tlist2.append(top2[1][15:])
if top2[1][8] == 'C':
tlist3.append(top2[1][15:])
if top2[1][8] == 'D':
tlist4.append(top2[1][15:])
#set up word lists that contain the word numbers from the word files
#handle the files that have 'A,B,C,D' in their names
wlist1 = []
wlist2 = []
wlist3 = []
wlist4 = []
wsoup1 = bsoup(open("./words/%s.A.words.xml"%file[:7]),"lxml")
wsub=re.sub('</nite:root>\n</body>','</nite:root></body>',str(wsoup1))
wsoup1=bsoup(wsub,'lxml')
word1 = wsoup1.body.children
for data in word1:
#print(data)
for data1 in data.children:
#print(data1)
data2 = str(data1)
#print(data2)
if re.search(r'words(\d+)', str(data2)) is not None:
wid = re.search(r'words(\d+)', str(data2)).group(1)
#print(wid)
if re.search(r'>(.+)<', str(data2)) is not None:
word = re.search(r'>(.+)<', str(data2)).group(1)
#print(word)
wlist1.append((wid,word))
else:
wlist1.append((wid,""))
#print(wlist1)
wsoup2 = bsoup(open("./words/%s.B.words.xml"%file[:7]),"lxml")
wsub2=re.sub('</nite:root>\n</body>','</nite:root></body>',str(wsoup2))
wsoup2=bsoup(wsub2,'lxml')
word2 = wsoup2.body.children
for data3 in word2:
for data4 in data3.children:
data5 = str(data4)
if re.search(r'words(\d+)', str(data5)) is not None:
wid = re.search(r'words(\d+)', str(data5)).group(1)
if re.search(r'>(.+)<', str(data5)) is not None:
word = re.search(r'>(.+)<', str(data5)).group(1)
wlist2.append((wid,word))
else:
wlist2.append((wid,""))
#print(wlist2)
wsoup3 = bsoup(open("./words/%s.C.words.xml"%file[:7]),"lxml")
wsub3=re.sub('</nite:root>\n</body>','</nite:root></body>',str(wsoup3))
wsoup3=bsoup(wsub3,'lxml')
word3 = wsoup3.body.children
for data6 in word3:
for data7 in data6.children:
data8 = str(data7)
if re.search(r'words(\d+)', str(data8)) is not None:
wid = re.search(r'words(\d+)', str(data8)).group(1)
if re.search(r'>(.+)<', str(data8)) is not None:
word = re.search(r'>(.+)<', str(data8)).group(1)
wlist3.append((wid,word))
else:
wlist3.append((wid,""))
#print(wlist3)
wsoup4 = bsoup(open("./words/%s.D.words.xml"%file[:7]),"lxml")
wsub4=re.sub('</nite:root>\n</body>','</nite:root></body>',str(wsoup4))
wsoup4=bsoup(wsub4,'lxml')
word4 = wsoup4.body.children
for data9 in word4:
for data10 in data9.children:
data11 = str(data10)
if re.search(r'words(\d+)', str(data11)) is not None:
wid = re.search(r'words(\d+)', str(data11)).group(1)
if re.search(r'>(.+)<', str(data11)) is not None:
word = re.search(r'>(.+)<', str(data11)).group(1)
wlist4.append((wid,word))
else:
wlist4.append((wid,""))
#print(wlist4)
#set up segment lists that contain the word numbers used as row-change markers
#handle the files that have 'A,B,C,D' in their names
slist1 = []
slist2 = []
slist3 = []
slist4 = []
ssoup1 = bsoup(open("./segments/%s.A.segments.xml"%file[:7]),"lxml")
#ssoup1
seg1 = ssoup1.body.children
for segdata in seg1:
for segdata1 in segdata.children:
#print(segdata1)
if re.search(r'words(\d+)\)"',str(segdata1)) is not None:
snum = re.search(r'(words)(\d+)(\)")',str(segdata1)).group(2)
#print(snum)
slist1.append(snum)
#print(slist1)
ssoup2 = bsoup(open("./segments/%s.B.segments.xml"%file[:7]),"lxml")
#ssoup1
seg2 = ssoup2.body.children
for segdata2 in seg2:
for segdata3 in segdata2.children:
if re.search(r'words(\d+)\)"',str(segdata3)) is not None:
snum2 = re.search(r'(words)(\d+)(\)")',str(segdata3)).group(2)
slist2.append(snum2)
#print(slist2)
ssoup3 = bsoup(open("./segments/%s.C.segments.xml"%file[:7]),"lxml")
#ssoup1
seg3 = ssoup3.body.children
for segdata4 in seg3:
for segdata5 in segdata4.children:
if re.search(r'words(\d+)\)"',str(segdata5)) is not None:
snum3 = re.search(r'(words)(\d+)(\)")',str(segdata5)).group(2)
slist3.append(snum3)
#print(slist3)
ssoup4 = bsoup(open("./segments/%s.D.segments.xml"%file[:7]),"lxml")
#ssoup1
seg4 = ssoup4.body.children
for segdata6 in seg4:
for segdata7 in segdata6.children:
if re.search(r'words(\d+)\)"',str(segdata7)) is not None:
snum4 = re.search(r'(words)(\d+)(\)")',str(segdata7)).group(2)
slist4.append(snum4)
#print(slist4)
#deal with topic-list entries that are not in the segment list
for v in tlist1:
if v in slist1:
continue
else:
ti = tlist1.index(v)
slist1.insert(slist1.index(tlist1[ti-1])+1,v)
for v in tlist2:
if v in slist2:
continue
else:
ti = tlist2.index(v)
slist2.insert(slist2.index(tlist2[ti-1])+1,v)
for v in tlist3:
if v in slist3:
continue
else:
ti = tlist3.index(v)
slist3.insert(slist3.index(tlist3[ti-1])+1,v)
for v in tlist4:
if v in slist4:
continue
else:
ti = tlist4.index(v)
slist4.insert(slist4.index(tlist4[ti-1])+1,v)
#put changing row marker into the word lists
for h,sele in enumerate(slist1):
for i, wele in enumerate(wlist1):
if int(wele[0]) != i:
wlist1.insert(i,(i,''))
if int(sele) == i:
#print(wele)
wlist1[i] = (wele[0],wele[1] +'\n')
#print(wlist1[i])
for j,sele2 in enumerate(slist2):
for k, wele2 in enumerate(wlist2):
if int(wele2[0]) != k:
wlist2.insert(k,(k,''))
if int(sele2) == k:
wlist2[k] = (wele2[0],wele2[1] +'\n')
#print(wele2)
for l,sele3 in enumerate(slist3):
for m, wele3 in enumerate(wlist3):
if int(wele3[0]) != m:
wlist3.insert(m,(m,''))
if int(sele3) == m:
wlist3[m] = (wele3[0],wele3[1] +'\n')
#print(wele3)
for n,sele4 in enumerate(slist4):
for p, wele4 in enumerate(wlist4):
if int(wele4[0]) != p:
wlist4.insert(p,(p,''))
if int(sele4) == p:
wlist4[p] = (wele4[0],wele4[1] +'\n')
#print(wele4)
#find out the word text and combine them, output to txt files
wlist=[]
text=''
t = []
if tt != []:
t = tt
else:
t = alist
for x in range(len(t)):
for y in range(len(t[x])):
#print(t[x][y])
if len(t[x][y]) == 2:
if t[x][y][0][8] == 'A':
wlist=wlist1
if t[x][y][0][8] == 'B':
wlist=wlist2
if t[x][y][0][8] == 'C':
wlist=wlist3
if t[x][y][0][8] == 'D':
wlist=wlist4
#print(re.search(r'(words)(\d+)',str(t[x][y][0])).group(2))
start = re.search(r'(words)(\d+)',str(t[x][y][0])).group(2)
end = re.search(r'(words)(\d+)',str(t[x][y][1])).group(2)
#print(start,end)
for i in range(int(start),int(end)+1):
text = text + ' ' + wlist[i][1]
if len(t[x][y]) > 2:
if re.search(r'(\w+\.)(\w+)(\.)(\w+\d+)',str(t[x][y])).group(2) == 'A':
wlist=wlist1
if re.search(r'(\w+\.)(\w+)(\.)(\w+\d+)',str(t[x][y])).group(2) == 'B':
wlist=wlist2
if re.search(r'(\w+\.)(\w+)(\.)(\w+\d+)',str(t[x][y])).group(2) == 'C':
wlist=wlist3
if re.search(r'(\w+\.)(\w+)(\.)(\w+\d+)',str(t[x][y])).group(2) == 'D':
wlist=wlist4
start = re.search(r'(words)(\d+)',str(t[x][y][0:])).group(2)
text = text + wlist[int(start)][1]+ ' '
#print(t[x][y][0:])
#print(start)
text = text +'\n'+'**********'+'\n'
content = re.sub('\s+\\n','\n',text)
content1 = content.replace(" "," ")
content2 = content1.lstrip('\n')
return content2
# ### start the whole program
# open the file folders and make use of the two functions above
xml_file_path = "./topics"
files= os.listdir(xml_file_path)
for file in files:
if len(file) == 16:
xmlsoup = bsoup(open(xml_file_path+"/"+file),"lxml")
contents = parse1(xmlsoup)
f = open("./txt_files/%s.txt"%file[:6], "w+")
f.write(contents)
f.close()
else:
xmlsoup = bsoup(open(xml_file_path+"/"+file),"lxml")
contents = parse2(xmlsoup)
f = open("./txt_files/%s.txt"%file[:7], "w+")
f.write(contents)
f.close()
# ## Summary
# 1. Make careful observations of the files and work out how the files and their naming conventions relate to each other.
# 2. The processing steps are: extract the word-id lists from the topic files, look up the corresponding words, find the row-change positions (segments and topics), then combine everything into the output text.
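# As a small illustration of the row-change step: given a word list of `(id, token)` pairs and the segment-start indices, a newline is appended to each segment-opening word before the tokens are joined, mirroring the `wlist`/`slist` loops above. (The tokens below are invented.)

```python
# Toy word list and segment-start indices, mirroring wlist/slist above.
wlist = [(0, 'Okay'), (1, 'thank'), (2, 'you'), (3, 'Right'), (4, 'so')]
slist = [0, 3]  # words 0 and 3 each open a new segment

# Append a newline to every segment-opening word, as the loops above do.
for s in slist:
    wid, token = wlist[s]
    wlist[s] = (wid, token + '\n')

text = ' '.join(token for _, token in wlist)
print(repr(text))
```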
Reconstructing.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="DkA0Fobtb9dM"
# ##### Copyright 2020 The Cirq Developers
# + cellView="form" id="tUshu7YfcAAW"
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# + [markdown] id="igOQCrBOcF5d"
# # Protocols
# + [markdown] id="LHRAvc9TcHOH"
# <table class="tfo-notebook-buttons" align="left">
# <td>
# <a target="_blank" href="https://quantumai.google/cirq/protocols"><img src="https://quantumai.google/site-assets/images/buttons/quantumai_logo_1x.png" />View on QuantumAI</a>
# </td>
# <td>
# <a target="_blank" href="https://colab.research.google.com/github/quantumlib/Cirq/blob/master/docs/protocols.ipynb"><img src="https://quantumai.google/site-assets/images/buttons/colab_logo_1x.png" />Run in Google Colab</a>
# </td>
# <td>
# <a target="_blank" href="https://github.com/quantumlib/Cirq/blob/master/docs/protocols.ipynb"><img src="https://quantumai.google/site-assets/images/buttons/github_logo_1x.png" />View source on GitHub</a>
# </td>
# <td>
# <a href="https://storage.googleapis.com/tensorflow_docs/Cirq/docs/protocols.ipynb"><img src="https://quantumai.google/site-assets/images/buttons/download_icon_1x.png" />Download notebook</a>
# </td>
# </table>
# + id="bd9529db1c0b"
try:
import cirq
except ImportError:
print("installing cirq...")
# !pip install --quiet cirq
print("installed cirq.")
# + [markdown] id="lB__WndjHWGa"
# # Introduction
#
# Cirq's protocols are a concept very similar to Python's built-in protocols, which were introduced in [PEP 544](https://www.python.org/dev/peps/pep-0544/).
# Python's built-in protocols are extremely convenient: for example, behind every for loop and list comprehension you can find the iterator protocol.
# As long as an object has an `__iter__()` magic method that returns an iterator object, it supports iteration.
# An iterator object, in turn, has to define the `__iter__()` and `__next__()` magic methods, which together define the iterator protocol.
# The `iter(val)` builtin function returns an iterator for `val` if it defines the above methods, and otherwise raises a `TypeError`. Cirq protocols work similarly.
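# To make that concrete, here is a toy class that satisfies the iterator protocol by defining just those two magic methods:

```python
class Countdown:
    """A minimal iterator: only __iter__ and __next__ are needed."""

    def __init__(self, n):
        self.n = n

    def __iter__(self):
        return self  # an iterator is its own iterator

    def __next__(self):
        if self.n <= 0:
            raise StopIteration  # signals that iteration is finished
        self.n -= 1
        return self.n + 1

print(list(Countdown(3)))  # [3, 2, 1]
```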
#
# A canonical Cirq protocol example is the `unitary` protocol, which lets you obtain the unitary matrix of any value that supports it by calling `cirq.unitary(val)`.
# + id="4a6bcd71ae5f"
import cirq
print(cirq.X)
print("cirq.X unitary:\n", cirq.unitary(cirq.X))
a, b = cirq.LineQubit.range(2)
circuit = cirq.Circuit(cirq.X(a), cirq.Y(b))
print(circuit)
print("circuit unitary:\n", cirq.unitary(circuit))
# + [markdown] id="6b3b43b2141b"
# When an object does not support a given protocol, an error is raised.
# + id="a988c0efc9b7"
try:
print(cirq.unitary(a)) ## error!
except Exception as e:
print("As expected, a qubit does not have a unitary. The error: ")
print(e)
# + [markdown] id="b4d4bc702a5e"
# ## What is a protocol?
#
# A protocol is a combination of the following two items:
# - a `SupportsXYZ` class, which defines and documents all the magic functions that need to be implemented in order to support that given protocol
# - the entrypoint function(s), which are exposed to the main cirq namespace as `cirq.xyz()`
#
# Note: While the protocol is technically both of these things, we refer to the public utility functions interchangeably as protocols. See the list of them below.
#
#
# ## Cirq's protocols
#
# For a complete list of Cirq protocols, refer to the `cirq.protocols` package.
# Here we provide a list of frequently used protocols for debugging, simulation and testing.
#
#
# | Protocol | Description |
# |----------|-------|
# |`cirq.apply_channel`| High performance evolution under a channel evolution. |
# |`cirq.apply_mixture`| High performance evolution under a mixture of unitaries evolution. |
# |`cirq.apply_unitaries`| Apply a series of unitaries onto a state tensor. |
# |`cirq.apply_unitary`| High performance left-multiplication of a unitary effect onto a tensor. |
# |`cirq.approx_eq`| Approximately compares two objects. |
# |`cirq.commutes`| Determines whether two values commute. |
# |`cirq.definitely_commutes`| Determines whether two values definitely commute. |
# |`cirq.decompose`| Recursively decomposes a value into `cirq.Operation`s meeting a criteria. |
# |`cirq.decompose_once`| Decomposes a value into operations, if possible. |
# |`cirq.decompose_once_with_qubits`| Decomposes a value into operations on the given qubits. |
# |`cirq.equal_up_to_global_phase`| Determine whether two objects are equal up to global phase. |
# |`cirq.has_kraus`| Returns whether the value has a Kraus representation. |
# |`cirq.has_mixture`| Returns whether the value has a mixture representation. |
# |`cirq.has_unitary`| Determines whether the value has a unitary effect. |
# |`cirq.inverse`| Returns the inverse `val**-1` of the given value, if defined. |
# |`cirq.is_measurement`| Determines whether or not the given value is a measurement. |
# |`cirq.is_parameterized`| Returns whether the object is parameterized with any Symbols. |
# |`cirq.kraus`| Returns a Kraus representation of the given channel. |
# |`cirq.measurement_key`| Get the single measurement key for the given value. |
# |`cirq.measurement_keys`| Gets the measurement keys of measurements within the given value. |
# |`cirq.mixture`| Return a sequence of tuples representing a probabilistic unitary. |
# |`cirq.num_qubits`| Returns the number of qubits, qudits, or qids `val` operates on. |
# |`cirq.parameter_names`| Returns parameter names for this object. |
# |`cirq.parameter_symbols`| Returns parameter symbols for this object. |
# |`cirq.pauli_expansion`| Returns coefficients of the expansion of val in the Pauli basis. |
# |`cirq.phase_by`| Returns a phased version of the effect. |
# |`cirq.pow`| Returns `val**factor` of the given value, if defined. |
# |`cirq.qasm`| Returns QASM code for the given value, if possible. |
# |`cirq.qid_shape`| Returns a tuple describing the number of quantum levels of each qubit, qudit, or qid `val` operates on. |
# |`cirq.quil`| Returns the QUIL code for the given value. |
# |`cirq.read_json`| Read a JSON file that optionally contains cirq objects. |
# |`cirq.resolve_parameters`| Resolves symbol parameters in the effect using the param resolver. |
# |`cirq.to_json`| Write a JSON file containing a representation of obj. |
# |`cirq.trace_distance_bound`| Returns a maximum on the trace distance between this effect's input and output. |
# |`cirq.trace_distance_from_angle_list`| Given a list of arguments of the eigenvalues of a unitary matrix, computes a trace distance bound. |
# |`cirq.unitary`| Returns a unitary matrix describing the given value. |
# |`cirq.validate_mixture`| Validates that the mixture's tuple are valid probabilities. |
#
#
# ### Quantum operator representation protocols
#
# The following family of protocols is an important and frequently used set of features of Cirq, and it is worthwhile mentioning them and how they interact with each other. They are, in order of increasing generality:
#
# * `*unitary`
# * `*mixture`
# * `*kraus`
#
# All these protocols make it easier to work with different representations of quantum operators, namely:
# - finding that representation (`unitary`, `kraus`, `mixture`),
# - determining whether the operator has that representation (`has_*`)
# - and applying them (`apply_*`) on a state vector.
#
# #### Unitary
#
# The `*unitary` protocol is the least generic, as only unitary operators should implement it. The `cirq.unitary` function returns the matrix representation of the operator in the computational basis. We saw an example of the unitary protocol above, but let's see the unitary matrix of the Pauli-Y operator as well:
#
# + id="d2ae567abe99"
print(cirq.unitary(cirq.Y))
# + [markdown] id="2c8e107b45da"
# #### Mixture
#
# The `*mixture` protocol should be implemented by operators that are _unitary-mixtures_. These probabilistic operators are represented by a list of tuples ($p_i$, $U_i$), where each unitary effect $U_i$ occurs with a certain probability $p_i$, and $\sum p_i = 1$. Probabilities are a Python float between 0.0 and 1.0, and the unitary matrices are numpy arrays.
#
# Constructing simple probabilistic gates in Cirq is easiest with the `with_probability` method.
# + id="5f9ec1e69ba6"
probabilistic_x = cirq.X.with_probability(.3)
for p, op in cirq.mixture(probabilistic_x):
print(f"probability: {p}")
print("operator:")
print(op)
# + [markdown] id="51addeffe113"
# In case an operator does not implement `SupportsMixture`, but does implement `SupportsUnitary`, `*mixture` functions fall back to the `*unitary` methods. It is easy to see that a unitary operator $U$ is just a "mixture" of a single unitary with probability $p=1$.
# + id="4a8a43eb4cbc"
# cirq.Y has a unitary effect but does not implement SupportsMixture
# thus mixture protocols will return ((1, cirq.unitary(Y)))
print(cirq.mixture(cirq.Y))
print(cirq.has_mixture(cirq.Y))
# + [markdown] id="7385326f4a37"
# #### Channel
#
#
# The `kraus` representation is the operator sum representation of a quantum operator (a channel):
#
# $$
# \rho \rightarrow \sum_{k=0}^{r-1} A_k \rho A_k^\dagger
# $$
#
# These matrices are required to satisfy the trace preserving condition
#
# $$
# \sum_{k=0}^{r-1} A_k^\dagger A_k = I
# $$
#
# where $I$ is the identity matrix. The matrices $A_k$ are sometimes called Kraus or noise operators.
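# As a quick sanity check, the completeness relation can be verified numerically for the depolarizing channel. The operators below are one standard parameterization of that channel, written out by hand rather than taken from Cirq:

```python
import numpy as np

# Kraus operators of a single-qubit depolarizing channel with p = 0.3:
# sqrt(1-p) * I, plus sqrt(p/3) times each Pauli matrix.
p = 0.3
I = np.eye(2)
X = np.array([[0, 1], [1, 0]])
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]])
kraus_ops = [np.sqrt(1 - p) * I] + [np.sqrt(p / 3) * P for P in (X, Y, Z)]

# Verify the trace-preserving condition: sum_k A_k^dagger A_k = I
total = sum(A.conj().T @ A for A in kraus_ops)
print(np.allclose(total, I))  # True
```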
#
# The `cirq.kraus` returns a tuple of numpy arrays, one for each of the Kraus operators:
# + id="dbfd41797730"
cirq.kraus(cirq.DepolarizingChannel(p=0.3))
# + [markdown] id="6a0d509482eb"
# In case the operator does not implement `SupportsChannel`, but it does implement `SupportsMixture`, the `*kraus` protocol will generate the Kraus operators based on the `*mixture` representation.
#
# $$
# ((p_0, U_0),(p_1, U_1),\ldots,(p_n, U_n)) \rightarrow (\sqrt{p_0}U_0, \sqrt{p_1}U_1, \ldots, \sqrt{p_n}U_n)
# $$
#
# Thus for example `((0.25, X), (0.75, I)) -> (0.5 X, sqrt(0.75) I)`:
# + id="259cadda9d9d"
cirq.kraus(cirq.X.with_probability(0.25))
# + [markdown] id="288ad97dcd90"
# In the simplest case of a unitary operator, `cirq.kraus` returns a one-element tuple with the same unitary as returned by `cirq.unitary`:
# + id="d424899d3af2"
print(cirq.kraus(cirq.Y))
print(cirq.unitary(cirq.Y))
print(cirq.has_kraus(cirq.Y))
|
docs/protocols.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Simulating GLasso with Stock portfolio
# - [Sparse inverse covariance estimation](#Sparse-inverse-covariance-estimation)
# - [Visualization](#Visualization)
# # Collecting data
# +
from __future__ import print_function
# Author: <NAME> <EMAIL>
# License: BSD 3 clause
import sys
from datetime import datetime
from scipy import linalg
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.collections import LineCollection
import pandas as pd
from sklearn.covariance import GraphicalLassoCV, ledoit_wolf
from sklearn import cluster, covariance, manifold
print(__doc__)
# +
symbol_dict = {
'sgn': 'SAGS CO',
'hvn': 'VINA AIRLINE',
'vjc': 'VIETJET AIR',
'acv': 'ACV CO',
'pvs': 'PTSC',
'pvi': 'PVI CO',
'pvd': 'PV DRILLING',
'gas': 'PV-GAS',
'vcg': 'VINACONFX JSC',
'dxg': 'GREEN LAND',
'ctd': 'COTECOIN',
'ros': 'FLC-FAROS',
'mch': 'MASAN RETAIL',
'msn': 'MASAN GROUP',
'sab': 'SABECO',
'vnm': 'VINAMILK',
'ctg': 'CTG BANK',
'bvh': 'BVH GROUP',
'bid':'BIDV',
'vcb':'VIETCOMBANK',
'nvl':'CTHH No Va ',
'ree':'REE CO',
'kdh': 'KHANGDIEN HOMES',
'vic': 'VINGROUP',
}
symbols, names = np.array(sorted(symbol_dict.items())).T
quotes = []
for symbol in symbols:
url = ('./Data-stock/excel_{}.csv')
df = pd.read_csv(url.format(symbol))
    if df.shape[0] <= 395:  # previously 730
        print('Not enough data for %r' % symbol, file=sys.stderr)
else:
df = df[['<Ticker>','<DTYYYYMMDD>','<Open>','<Close>']][0:395]
print('Fetching quote history for %r' % symbol, file=sys.stderr)
quotes.append(df)
# -
len(quotes[1])
# +
close_prices = np.vstack([q['<Close>'] for q in quotes])
open_prices = np.vstack([q['<Open>'] for q in quotes])
# The daily variations of the quotes are what carry most information
#if close_prices - open_prices <= 0:
# variation = close_prices - open_prices[1:395]
variation = close_prices - open_prices
# -
# # Sparse inverse covariance estimation
#
# Using the GraphicalLasso estimator to learn a covariance and sparse precision from a small number of samples.
#
# To estimate a probabilistic model (e.g. a Gaussian model), estimating the precision matrix, that is the inverse covariance matrix, is as important as estimating the covariance matrix. Indeed a Gaussian model is parametrized by the precision matrix.
#
# To be in favorable recovery conditions, we sample the data from a model with a sparse inverse covariance matrix. In addition, we ensure that the data is not too correlated (limiting the largest coefficient of the precision matrix) and that there are no small coefficients in the precision matrix that cannot be recovered. In addition, with a small number of observations, it is easier to recover a correlation matrix than a covariance matrix, so we scale the time series.
#
# Here, the number of samples is slightly larger than the number of dimensions, thus the empirical covariance is still invertible. However, as the observations are strongly correlated, the empirical covariance matrix is ill-conditioned and as a result its inverse --the empirical precision matrix-- is very far from the ground truth.
#
# If we use l2 shrinkage, as with the Ledoit-Wolf estimator, as the number of samples is small, we need to shrink a lot. As a result, the Ledoit-Wolf precision is fairly close to the ground truth precision, that is not far from being diagonal, but the off-diagonal structure is lost.
#
# The l1-penalized estimator can recover part of this off-diagonal structure. It learns a sparse precision. It is not able to recover the exact sparsity pattern: it detects too many non-zero coefficients. However, the highest non-zero coefficients of the l1 estimate correspond to the non-zero coefficients in the ground truth. Finally, the coefficients of the l1 precision estimate are biased toward zero: because of the penalty, they are all smaller than the corresponding ground truth value, as can be seen on the figure.
#
# Note that, the color range of the precision matrices is tweaked to improve readability of the figure. The full range of values of the empirical precision is not displayed.
#
# The alpha parameter of the GraphicalLasso, which sets the sparsity of the model, is chosen by internal cross-validation in the GraphicalLassoCV. As can be seen in the figure titled "Model selection", the grid used to compute the cross-validation score is iteratively refined in the neighborhood of the maximum.
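# A tiny synthetic illustration (unrelated to the stock data) of why the precision matrix is the natural object here: a zero entry in the precision matrix means the two variables are conditionally independent given the rest, even though the corresponding covariance entry is generally non-zero.

```python
import numpy as np

# A sparse (tridiagonal) precision matrix: variables 0 and 2 are
# conditionally independent given variable 1.
precision = np.array([[2.0, 0.6, 0.0],
                      [0.6, 2.0, 0.6],
                      [0.0, 0.6, 2.0]])
covariance = np.linalg.inv(precision)

print(precision[0, 2])   # 0.0: conditional independence encoded directly
print(covariance[0, 2])  # non-zero: variables 0 and 2 are still marginally correlated
```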
#
# +
# #############################################################################
# Learn a graphical structure from the correlations
edge_model = covariance.GraphicalLassoCV(cv=5)
# standardize the time series: using correlations rather than covariance
# is more efficient for structure recovery
X = variation.copy().T
X /= X.std(axis=0)
edge_model.fit(X)
# +
# plot the model selection metric
plt.figure(figsize=(4, 3))
plt.axes([.2, .15, .75, .7])
plt.plot(edge_model.cv_alphas_, np.mean(edge_model.grid_scores_, axis=1), 'o-')
plt.axvline(edge_model.alpha_, color='.5')
plt.title('Model selection')
plt.ylabel('Cross-validation score')
plt.xlabel('alpha')
plt.show()
# +
# #############################################################################
# Estimate the covariance
emp_cov = np.dot(X.T, X) / len(X.T[0])
cov_ = edge_model.covariance_
prec_ = edge_model.precision_
lw_cov_, _ = ledoit_wolf(X)
lw_prec_ = linalg.inv(lw_cov_)
# +
# Plot the results
plt.figure(figsize=(8, 5))
plt.subplots_adjust(left=0.02, right=0.98)
# plot the covariances
covs = [('Empirical', emp_cov), ('Ledoit-Wolf', lw_cov_),
('GraphicalLassoCV', cov_)]
vmax = cov_.max()
for i, (name, this_cov) in enumerate(covs):
plt.subplot(2, 3, i + 1)
plt.imshow(this_cov, interpolation='nearest', vmin=-vmax, vmax=vmax,
cmap=plt.cm.RdBu_r)
plt.xticks(())
plt.yticks(())
plt.title('%s covariance' % name)
# plot the precisions
precs = [('Empirical', linalg.inv(emp_cov)), ('Ledoit-Wolf', lw_prec_),
('GraphicalLasso', prec_)]
vmax = .9 * prec_.max()
for i, (name, this_prec) in enumerate(precs):
ax = plt.subplot(2, 3, i + 4)
plt.imshow(np.ma.masked_equal(this_prec, 0),
interpolation='nearest', vmin=-vmax, vmax=vmax,
cmap=plt.cm.RdBu_r)
plt.xticks(())
plt.yticks(())
plt.title('%s precision' % name)
if hasattr(ax, 'set_facecolor'):
ax.set_facecolor('.7')
else:
ax.set_axis_bgcolor('.7')
# -
#
# # Visualization
# Visualizing the stock market structure
#
# This example employs several unsupervised learning techniques to extract the stock market structure from variations in historical quotes.
#
# The quantity that we use is the daily variation in quote price: quotes that are linked tend to cofluctuate during a day.
# Learning a graph structure
#
# We use sparse inverse covariance estimation to find which quotes are correlated conditionally on the others. Specifically, sparse inverse covariance gives us a graph, that is, a list of connections. For each symbol, the symbols it is connected to are those useful for explaining its fluctuations.
#
# ## Embedding in 2D space
# For visualization purposes, we need to lay out the different symbols on a 2D canvas. For this we use manifold techniques to retrieve 2D embedding.
# <br>
# The outputs of the 3 models are combined in a 2D graph of the stocks:
# - the cluster labels define the color of the nodes
# - the sparse covariance model determines the strength of the edges
# - the 2D embedding positions the nodes in the plane
#
# This example has a fair amount of visualization-related code, as visualization is crucial here to display the graph. One of the challenges is to position the labels so as to minimize overlap. For this we use a heuristic based on the direction of the nearest neighbor along each axis.
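# The edge strengths in the code below come from an entrywise normalization of the precision matrix. For reference, the textbook partial-correlation formula (sign included) is rho_ij = -P_ij / sqrt(P_ii * P_jj) for i != j; the plotting code keeps only magnitudes, so dropping the sign there is harmless. A small numpy sketch with a made-up 2x2 precision matrix:

```python
import numpy as np

def partial_corr(P, i, j):
    # Partial correlation between variables i and j from precision matrix P:
    # rho_ij = -P_ij / sqrt(P_ii * P_jj)
    return -P[i, j] / np.sqrt(P[i, i] * P[j, j])

P = np.array([[2.0, -0.6],
              [-0.6, 1.0]])
print(partial_corr(P, 0, 1))  # positive: a negative precision entry means positive partial correlation
```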
# +
# #############################################################################
# Cluster using affinity propagation
_, labels = cluster.affinity_propagation(edge_model.covariance_)
n_labels = labels.max()
for i in range(n_labels + 1):
print('Cluster %i: %s' % ((i + 1), ', '.join(names[labels == i])))
# #############################################################################
# Find a low-dimension embedding for visualization: find the best position of
# the nodes (the stocks) on a 2D plane
# We use a dense eigen_solver to achieve reproducibility (arpack is
# initiated with random vectors that we don't control). In addition, we
# use a large number of neighbors to capture the large-scale structure.
node_position_model = manifold.LocallyLinearEmbedding(
n_components=2, eigen_solver='dense', n_neighbors=6)
embedding = node_position_model.fit_transform(X.T).T
# #############################################################################
# Visualization
plt.figure(1, facecolor='gray', figsize=(10, 8))
plt.clf()
ax = plt.axes([0., 0., 1., 1.])
plt.axis('off')
# Display a graph of the partial correlations
partial_correlations = edge_model.precision_.copy()
d = 1 / np.sqrt(np.diag(partial_correlations))
partial_correlations *= d
partial_correlations *= d[:, np.newaxis]
non_zero = (np.abs(np.triu(partial_correlations, k=1)) > 0.02)
# Plot the nodes using the coordinates of our embedding
plt.scatter(embedding[0], embedding[1], s=100 * d ** 2, c=labels,
cmap=plt.cm.nipy_spectral)
# Plot the edges
start_idx, end_idx = np.where(non_zero)
# a sequence of (*line0*, *line1*, *line2*), where::
# linen = (x0, y0), (x1, y1), ... (xm, ym)
segments = [[embedding[:, start], embedding[:, stop]]
for start, stop in zip(start_idx, end_idx)]
values = np.abs(partial_correlations[non_zero])
lc = LineCollection(segments,
zorder=0, cmap=plt.cm.hot_r,
norm=plt.Normalize(0, .7 * values.max()))
lc.set_array(values)
lc.set_linewidths(15 * values)
ax.add_collection(lc)
# Add a label to each node. The challenge here is that we want to
# position the labels to avoid overlap with other labels
for index, (name, label, (x, y)) in enumerate(
zip(names, labels, embedding.T)):
dx = x - embedding[0]
dx[index] = 1
dy = y - embedding[1]
dy[index] = 1
this_dx = dx[np.argmin(np.abs(dy))]
this_dy = dy[np.argmin(np.abs(dx))]
if this_dx > 0:
horizontalalignment = 'left'
x = x + .002
else:
horizontalalignment = 'right'
x = x - .002
if this_dy > 0:
verticalalignment = 'bottom'
y = y + .002
else:
verticalalignment = 'top'
y = y - .002
plt.text(x, y, name, size=10,
horizontalalignment=horizontalalignment,
verticalalignment=verticalalignment,
bbox=dict(facecolor='gray',
edgecolor=plt.cm.nipy_spectral(label / float(n_labels)),
alpha=.6))
plt.xlim(embedding[0].min() - .15 * embedding[0].ptp(),
embedding[0].max() + .10 * embedding[0].ptp(),)
plt.ylim(embedding[1].min() - .03 * embedding[1].ptp(),
embedding[1].max() + .03 * embedding[1].ptp())
plt.show()
# -
|
Statistical-Learning-StandfordNB/Notebook/Portfolio Optimization with GLasso.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + id="abjPZ50TtPxx"
def HammingDistance(string1, string2):
counter = 0
if len(string1) > len(string2):
for i in range(len(string2)):
if string1[i] != string2[i]:
counter = counter + 1
counter = counter + (len(string1) - len(string2))
else:
for i in range(len(string1)):
if string1[i] != string2[i]:
counter = counter + 1
counter = counter + (len(string2) - len(string1))
return counter
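# The loop above is equivalent to this compact formulation (a sketch mirroring the original's behavior, including its extension of the classic equal-length Hamming distance to unequal lengths):

```python
def hamming_distance(s1, s2):
    # mismatches over the shared prefix, plus the difference in length
    return sum(a != b for a, b in zip(s1, s2)) + abs(len(s1) - len(s2))

print(hamming_distance('GGGCCGTTGGT', 'GGACCGTTGAC'))  # 3
```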
# + colab={"base_uri": "https://localhost:8080/"} id="WyULJVOitq_e" outputId="2a1b0dd4-70eb-4246-bf9c-8cdee43a61fa"
HammingDistance('GGGCCGTTGGT', 'GGACCGTTGAC')
# + colab={"base_uri": "https://localhost:8080/"} id="5fFVNW6duua3" outputId="891294af-748d-4f65-e6f9-139ac8e236d7"
HammingDistance('ATATTGGTTAGGTTCTCTCTGGGTCGCGTTCTTATCGACATAGGATGTATAGAATGCCGGATAAGACGACATAACCCATAGGTTTCTGGAACGGGCGGTGGTGCGATGAGCATAGTAAGTTCGTCTTCACCTGTTCCCCCGCACAACCGAAGGTAATCGACGGTCCCTAAGCTACGGCTAAGCGAATTCGTTTAGCCCACTCGGGATAAGAAATGGGACAATGATGCGCTATCGAAGTCTTAAGGGCGTGTACTCTCCTGTCTTACTGCAATAATTCTGTCCGGATGACGCTCACCAGTCCAGAGAACGCATCGATGCCTTCGTAGTCCAGCGACTAGACATCCCCGAGGCTTATCATACGTTTGCCTGAGTCCTAATAAACTTCTAAATAATCACATGTGGCCTAGTGGCCAAACGTGAACTGGGACTTCGCCGTAGAGCTATGACTTGCCCGCATGTCCCAAAAAACCGACCGGTCAATCAAAACATGTCGGGGCTTCGTGCCAGCACAAGGCCCTATACAAGTACCGTACTCCTGTCCGTGTAAGAAAGTTTACGGCTATCATTCGGTCAATCACCGCTGATTCCGTTTACGCACATCATAGCGTCGAGCTGGGTACCACGCTTCACCTCCATGCCCTATTTTACGAGGTCTGTTGGGCTATTACTATATGTACCATTCTGGGTGCCAATGTAATGAACACGATGACGGAGTTATCATAGGGGACGCCACCGAACTATGGCGTAGTATATCCGTGCCGCGATGAAATTGCAAGCGCCTCCCATAAACCGCGTAAACGACGGGCCCAACAGGCAGCTGGAAAGACTCTGCGCCTTTAGAATGATCAGCGTTTGGTCCCTACAATGAAAGTGACGATCGACGGTGTAGCCAGCGGACGTATCCTGTCAAGATCCGCCGCGTGCACCCAGAGCTTTCACCTTCCATGTAGGGATTCTACATTTTAACGGAGCCCCCAACGCGAAGCCCCATTCGCCAATTGGGCGGGTTGTCATACGAAAAAGTTCTTTAACTCTCAGTAGAAGTGAAAAGATATAGGACGCCCCAGGTCCCTCGGATT', 
'ATTCAGCTGAATGTCCGAACAGTTTGTAACAGACAAAGCCAGGTCTTAGGCGGAGAAGTGCAGCCCGTACCCGTACATTGCCCCGGCCAAGGCACGCGTTAGGAGCTTAACCTGGGAAGCTCGCCCGCATCATGGGACAACATGGAGCTTGGGAGCGACAACGACTGTCCAGGACACCGATCGCCTGCCCCTGCTGCAAACATGAGTAGTAACCGCCTAGTGCAGACGCTGTGGACTCGCGTTGAAGGGCTTCATACAACGATAAGTCTTCATAAACGAGTCCTGTCTGACGAACCTTATGTTCCGCGCGTCAATTTATGCAGCGTCATTTTTCGTTTAATACAAATAGTAACATTTTTATCAGGCCCTTGCATACCTCGCAATTTCCTATTATTATCTGATGTACTCCAAAGGCACGCAATCGTGGCGGTGATTAAATATGGTGGCCATGGCTCCGCTGCAGATAGACAAGTATTGGCCGGAACTTAGCCACAGAGGGAAACTTGGGCTCGAACTGCCGAAGACGATATTCCTTACCAGAAGTATCTTAGATCTTTGAGCGTCAAAGTACGTCGCCATAGCGCTGATCATTAAGCTAGAAGGCTAGCATTAAAAAGGACCCCTTGAAGAGGAGTCTTGAAACACAACGACTATTCTTGGTAGGTATAAACGTGAGGTGGAAGCGGGGAAAGCCGACCTGTTCCACCGCTTTTTCCCGGTCCCTGGCCTTGAATATCCAACAGTAGGTTGCTCGTCCTGGCTAGTTTAGCCAACCAAAACTGGACACAACATGTCCAGAACGACACACAATTTATAAAATATTCAACGGCGTCATTGTGTGTCCTTTAATCTTGTACCTACTGTGTACCGTAAAACGGTACTTGTTCTTCAACACTGATGGCTCTTGAAGGGATGAGAAATATAGCGCGGGCCGGAAGCCGCCACCAAGCGCTCTCGTTTACTCCGCTGTCCGGGAAACCGCTGATGTTTAAATTTCGCCTCTAAGGCACTGGGGTCTCGCACAGTGAGCGCGTCTCACGCTGCAGAATCCCCGAGCCAGACGGTAGGCTCTGGCCCGT')
|
BA1G.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import matplotlib.dates as mdates
import seaborn as sns
import warnings
from datetime import datetime, date,timedelta
from matplotlib import gridspec
from matplotlib import cm
from yellowbrick.regressor import PredictionError, ResidualsPlot
warnings.filterwarnings('ignore')
# %matplotlib inline
# -
df_SF = pd.read_pickle('df_SF_HH.pkl')
df_SF.head()
# +
# df_SF.dtypes
# -
FILE_MET = 'k34_Met_Soil_v0.csv'
df_met = pd.read_csv(FILE_MET, header = 0, na_values = '-9999')
df_met['TIMESTAMP'] = [datetime.strptime(str(i),'%Y%m%d%H%M') for i in df_met['TIMESTAMP_START']]
df_sv_t1 = df_SF[['TIMESTAMP','SV_OUT_1A']].copy()
# df_sv_t1.head()
# Combine micro-met data and sap flow data
df = pd.merge(df_met,df_sv_t1,how = 'left',on = 'TIMESTAMP')
# df.head()
# +
# df.dtypes
# -
# # Time Period
# Drought
# +
# all drought
# time_start = datetime(2015,8,1,0,0,0)
# time_end = datetime(2016,3,1,0,0,0)
# very dry
time_start = datetime(2015,9,1,0,0,0)
time_end = datetime(2015,10,1,0,0,0)
# -
df_drought = df.loc[(df['TIMESTAMP'] > time_start) & (df['TIMESTAMP'] < time_end)]
cols_drop = ['TIMESTAMP_START','TIMESTAMP_END','TIMESTAMP','BATTERY_VOLTAGE',
'H2O_1','H2O_2','H2O_3','H2O_4','H2O_5','H2O_6',
'RH_2','RH_3','RH_4','RH_5','P_2','RH_6',
'WS_4','Rho_PAR','Rho_OIR','NDVI',
'PPFD_IN','PPFD_OUT','T_LW_IN','T_LW_OUT','LW_IN','LW_OUT',
'PPFD_IN_2','PA']
# cols = ['G', 'SWC_1', 'SWC_2', 'SWC_3', 'SWC_4', 'SWC_5', 'SWC_6',
# 'TS_1', 'TS_2', 'TS_3', 'TS_4', 'TS_5',
# 'SW_IN', 'SW_OUT', 'RH_1','NETRAD', 'T_CANOPY',
# 'TA_1', 'TA_2', 'TA_3', 'TA_4', 'TA_5', 'TA_6',
# 'WS_1', 'WD_1',
# 'CO2_1', 'CO2_2', 'CO2_3', 'CO2_4', 'CO2_5', 'CO2_6',
# 'LW_IN_CORR', 'LW_OUT_CORR',
# 'SV_OUT_1A']
# cols = ['G', 'SWC_1', 'SWC_2', 'SWC_3', 'SWC_4', 'SWC_5', 'SWC_6',
# 'TS_1', 'TS_2', 'TS_3', 'TS_4', 'TS_5',
# 'SW_IN', 'SW_OUT', 'RH_1','NETRAD', 'T_CANOPY',
# 'TA_1', 'TA_2', 'TA_3', 'TA_4', 'TA_5', 'TA_6',
# 'WS_1',
# 'CO2_1', 'CO2_2', 'CO2_3', 'CO2_4', 'CO2_5', 'CO2_6',
# 'SV_OUT_1A']
cols = ['G', 'SWC_1', 'SWC_2', 'SWC_3', 'SWC_4', 'SWC_5', 'SWC_6',
'TS_1', 'TS_2', 'TS_3', 'TS_4', 'TS_5',
'RH_1','NETRAD', 'T_CANOPY',
'TA_1', 'TA_2', 'TA_3', 'TA_4', 'TA_5', 'TA_6',
'CO2_1', 'CO2_2', 'CO2_3', 'CO2_4', 'CO2_5', 'CO2_6',
'SV_OUT_1A']
df_fs = df_drought[cols].copy()
# df_fs.head()
df_fs.head()
df_fs.isna().sum()
y_list = ['SV_OUT_1A']
cols_x = cols[:-1]  # feature columns: everything except the target 'SV_OUT_1A'
print(cols_x)
print(list(df_fs))
df_ml = df_fs.dropna(axis = 0,how = 'any')
df_ml.shape
# +
# df_ml.to_pickle('SapVelocity_ML_Drought.pkl')
# -
print(list(df_ml))
# # Machine Learning Models
X_org = df_ml.drop(['SV_OUT_1A'],axis = 1)
y_org = df_ml['SV_OUT_1A']
from sklearn import preprocessing
lab_enc = preprocessing.LabelEncoder()
y_encoded = lab_enc.fit_transform(y_org)  # encode the continuous target as discrete labels for the classifiers below
# # Feature Importance
# +
from sklearn.ensemble import ExtraTreesClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.ensemble import GradientBoostingClassifier
forest = RandomForestClassifier()
forest.fit(X_org,y_encoded)
print(forest.feature_importances_)
feat_importances = pd.Series(forest.feature_importances_, index=X_org.columns)
feat_importances.nlargest(10).plot(kind='barh')
plt.show()
importances = forest.feature_importances_
std = np.std([tree.feature_importances_ for tree in forest.estimators_],
axis=0)
indices = np.argsort(importances)[::-1]
# Print the feature ranking
print("Feature ranking:")
for f in range(X_org.shape[1]):
print("%d. feature %d (%f)" % (f + 1, indices[f], importances[indices[f]]))
# Plot the feature importances of the forest
plt.figure()
# plt.title("Feature importances")
plt.bar(range(X_org.shape[1]), importances[indices],
color="r", yerr=std[indices], align="center")
plt.xticks(range(X_org.shape[1]), indices)
plt.xlim([-1, X_org.shape[1]])
plt.ylabel("Importances")
plt.show()
# -
# Plot the feature importances of the forest
plt.figure(figsize = (20,6))
# plt.title("ET Feature importances")
barlist = plt.bar(range(X_org.shape[1]), importances[indices],
color="g", yerr=std[indices], align="center")
for i in range(5):
barlist[i].set_color('r')
for i in range(6):
barlist[-(i+1)].set_color('b')
plt.xticks(range(X_org.shape[1]), X_org.columns[indices])
plt.xlim([-1, X_org.shape[1]])
plt.ylabel("Importances")
plt.show()
feat_importances
# # Feature Selection
# Model-based Feature Selection
# +
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.feature_selection import SelectFromModel
# X.shape
clf = GradientBoostingRegressor(n_estimators=100, learning_rate=0.1,max_depth=1, random_state=0, loss='ls')
clf = clf.fit(X_org, y_org)
clf.feature_importances_
model = SelectFromModel(clf, prefit=True)
X_new = model.transform(X_org)
y_new = y_org
# X_new.shape
cols = model.get_support(indices=True)
# Create new dataframe with only desired columns, or overwrite existing
new_features = X_org.columns[cols]
print(list(new_features))
# -
# +
# # Feature Selection
# # Automatic Feature Selection
# from sklearn.feature_selection import SelectPercentile
# rng = np.random.RandomState(42)
# noise_train = rng.normal(size=(len(X_train_prefs), 50))
# X_train_noise = np.hstack([X_train_prefs, noise_train])
# noise_test = rng.normal(size=(len(X_test_prefs), 50))
# X_test_noise = np.hstack([X_test_prefs, noise_test])
# select = SelectPercentile(percentile=10)
# select.fit(X_train_noise, y_train)
# # transform training set
# X_train1 = select.transform(X_train_noise)
# # transform test data
# X_test1 = select.transform(X_test_noise)
# print("X_train.shape: {}".format(X_train_noise.shape))
# print("X_train_selected.shape: {}".format(X_train1.shape))
# -
# Split train and test data
# 1. assign datasets for ML models
X = X_new
y = y_new
# Split train and test data
from sklearn.model_selection import train_test_split
X_train_pre, X_test_pre, y_train, y_test = train_test_split(X, y)
# Preprocessing & Normalization
from sklearn.preprocessing import MinMaxScaler, StandardScaler
scaler = StandardScaler()
scaler.fit(X_train_pre)
# Train and Test data before feature selection
X_train_prefs = scaler.transform(X_train_pre)
X_test_prefs = scaler.transform(X_test_pre)
# Train and Test Data
X_train = X_train_prefs
X_test = X_test_prefs
# # Simple ML Regressors
# k-neighbors regression
# +
# k-neighbors regression
from sklearn.neighbors import KNeighborsRegressor
knr = KNeighborsRegressor(n_neighbors=6)
visualizer1 = PredictionError(knr)
visualizer1.fit(X_train, y_train) # Fit the training data to the visualizer
visualizer1.score(X_test, y_test) # Evaluate the model on the test data
visualizer1.show()
# -
visualizer2 = ResidualsPlot(knr)
visualizer2.fit(X_train, y_train) # Fit the training data to the visualizer
visualizer2.score(X_test, y_test) # Evaluate the model on the test data
visualizer2.show()
# Linear Regression
# Linear Regression
from sklearn.linear_model import LinearRegression
lr = LinearRegression()
lr.fit(X_train,y_train)
coefficients = lr.coef_  # LinearRegression exposes coef_, not feature_importances_
print(coefficients)
visualizer1 = PredictionError(lr)
visualizer1.fit(X_train, y_train) # Fit the training data to the visualizer
visualizer1.score(X_test, y_test) # Evaluate the model on the test data
visualizer1.show()
visualizer2 = ResidualsPlot(lr)
visualizer2.fit(X_train, y_train) # Fit the training data to the visualizer
visualizer2.score(X_test, y_test) # Evaluate the model on the test data
visualizer2.show()
# Ridge Regression
# +
from sklearn.linear_model import Ridge
rr = Ridge(alpha = 5)
visualizer1 = PredictionError(rr)
visualizer1.fit(X_train, y_train) # Fit the training data to the visualizer
visualizer1.score(X_test, y_test) # Evaluate the model on the test data
visualizer1.show()
# -
visualizer2 = ResidualsPlot(rr)
visualizer2.fit(X_train, y_train) # Fit the training data to the visualizer
visualizer2.score(X_test, y_test) # Evaluate the model on the test data
visualizer2.show()
# Lasso
from sklearn.linear_model import Lasso
lsr = Lasso(alpha = 0.01, max_iter=10000)
visualizer1 = PredictionError(lsr)
visualizer1.fit(X_train, y_train) # Fit the training data to the visualizer
visualizer1.score(X_test, y_test) # Evaluate the model on the test data
visualizer1.show()
visualizer2 = ResidualsPlot(lsr)
visualizer2.fit(X_train, y_train) # Fit the training data to the visualizer
visualizer2.score(X_test, y_test) # Evaluate the model on the test data
visualizer2.show()
# Decision Tree
# +
# Decision Tree Regressor
from sklearn.tree import DecisionTreeRegressor
dtr = DecisionTreeRegressor(max_depth = 5)
visualizer1 = PredictionError(dtr)
visualizer1.fit(X_train, y_train) # Fit the training data to the visualizer
visualizer1.score(X_test, y_test) # Evaluate the model on the test data
visualizer1.show()
# -
visualizer2 = ResidualsPlot(dtr)
visualizer2.fit(X_train, y_train) # Fit the training data to the visualizer
visualizer2.score(X_test, y_test) # Evaluate the model on the test data
visualizer2.show()
# Random Forest
# +
# Random Forest Regressor
from sklearn.ensemble import RandomForestRegressor
rfr = RandomForestRegressor(max_depth=3, random_state=0,n_estimators=1000)
visualizer1 = PredictionError(rfr)
visualizer1.fit(X_train, y_train) # Fit the training data to the visualizer
visualizer1.score(X_test, y_test) # Evaluate the model on the test data
visualizer1.show()
# -
visualizer2 = ResidualsPlot(rfr)
visualizer2.fit(X_train, y_train) # Fit the training data to the visualizer
visualizer2.score(X_test, y_test) # Evaluate the model on the test data
visualizer2.show()
# Support Vector Machines
# +
from sklearn.svm import SVR
# svmr = SVR(gamma = 'scale',C = 1.0, epsilon = 0.2)
svmr = SVR(kernel='linear', gamma='auto')
visualizer1 = PredictionError(svmr)
visualizer1.fit(X_train, y_train) # Fit the training data to the visualizer
visualizer1.score(X_test, y_test) # Evaluate the model on the test data
visualizer1.show()
# -
visualizer2 = ResidualsPlot(svmr)
visualizer2.fit(X_train, y_train) # Fit the training data to the visualizer
visualizer2.score(X_test, y_test) # Evaluate the model on the test data
visualizer2.show()
# Stochastic gradient descent regressor
# +
from sklearn import linear_model
sgdr = linear_model.SGDRegressor(max_iter=1000, tol=1e-3)
visualizer1 = PredictionError(sgdr)
visualizer1.fit(X_train, y_train) # Fit the training data to the visualizer
visualizer1.score(X_test, y_test) # Evaluate the model on the test data
visualizer1.show()
# -
visualizer2 = ResidualsPlot(sgdr)
visualizer2.fit(X_train, y_train) # Fit the training data to the visualizer
visualizer2.score(X_test, y_test) # Evaluate the model on the test data
visualizer2.show()
# Shallow Neural Networks
# +
# Shallow Neural Networks
from sklearn.neural_network import MLPRegressor
mlpr = MLPRegressor()
visualizer1 = PredictionError(mlpr)
visualizer1.fit(X_train, y_train) # Fit the training data to the visualizer
visualizer1.score(X_test, y_test) # Evaluate the model on the test data
visualizer1.show()
# -
visualizer2 = ResidualsPlot(mlpr)
visualizer2.fit(X_train, y_train) # Fit the training data to the visualizer
visualizer2.score(X_test, y_test) # Evaluate the model on the test data
visualizer2.show()
|
FeatureSelection_Drought.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# To connect SQL and Jupyter Notebook, use psycopg2 (a PostgreSQL driver) together with SQLAlchemy
# !pip install psycopg2 sqlalchemy
import csv
import datetime as dt
import json as json
import os
import pandas as pd
import time
import matplotlib.pyplot as plt
# %matplotlib inline
from matplotlib.ticker import StrMethodFormatter
import numpy as np
import scipy.stats as st
import sqlalchemy
# Python SQL toolkit and Object Relational Mapper
from sqlalchemy import create_engine, func, inspect
from sqlalchemy.orm import Session
from sqlalchemy.ext.automap import automap_base
from sqlalchemy.ext.declarative import declarative_base
Base = declarative_base() #have to bring pandas manually b/c of ext
# +
#Step 1 - Climate Analysis and Exploration
# -
#database access (the PostgreSQL engine below is superseded by the SQLite engine actually used)
# engine = create_engine('postgresql://postgres:Ben&LizzyA2@localhost:5433/SQLAlchemy_db')
engine = create_engine("sqlite:///Resources/hawaii.sqlite")
conn = engine.connect()
# reflect an existing database into a new model
Base = automap_base()
# reflect the tables
Base.prepare(engine, reflect=True)
# We can view all of the classes that automap found
Base.classes.keys()
# Save references to each table
Measurement = Base.classes.measurement
Station = Base.classes.station
M = Measurement
S = Station
inspector = inspect(engine)
inspector.get_table_names()
# Get a list of column names and types
columns = inspector.get_columns('Measurement')
for c in columns:
print(c['name'], c["type"])
# Get a list of column names and types
columns = inspector.get_columns('Station')
for c in columns:
print(c['name'], c["type"])
from sqlalchemy.orm import Session
session = Session(engine)
#Creating joined SQL database
# select * SQL
# This JOINs the data in the two tables together into a single dataset (here in the form of a tuple).
# Note: We are going to limit the results to 10 for printing
Hawaii_station = [M.station, M.date, M.prcp, M.tobs, S.station, S.name]
Hawaii = session.query(*Hawaii_station).filter(M.station == S.station).limit(10).all()
print(Hawaii)
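The `filter(M.station == S.station)` above expresses an inner equi-join on the station id. The matching logic can be illustrated in plain Python with hypothetical in-memory rows (a sketch of the join semantics, not the SQLAlchemy internals; the station name string is invented):

```python
# Inner join on 'station', sketched with hypothetical in-memory rows.
measurements = [
    {"station": "USC00519281", "date": "2017-08-23", "prcp": 0.45},
    {"station": "USC00519999", "date": "2017-08-23", "prcp": 0.00},
]
stations = [
    {"station": "USC00519281", "name": "hypothetical station name"},
]

# Index the right-hand table by the join key, then probe it per left row.
by_id = {s["station"]: s for s in stations}
joined = [
    {**m, "name": by_id[m["station"]]["name"]}
    for m in measurements
    if m["station"] in by_id  # inner join: unmatched rows are dropped
]
print(joined)
```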
# Get a list of column names and types
columns = inspector.get_columns('Hawaii')
for c in columns:
print(c['name'], c["type"])
# !pip install pyodbc
from sqlalchemy.orm import Session
session = Session(engine)
# query measurement database to store for the later use
measurements = session.query(Measurement.station, Measurement.date, Measurement.prcp, Measurement.tobs).all()
measurements_df = pd.DataFrame(measurements)
measurements_df.columns =['station', 'date', 'prcp', 'tobs']
measurements_df.head()
# query station database to store for the later use
stations = session.query(Station.station, Station.name).all()
stations_df = pd.DataFrame(stations)
stations_df.columns =['station', 'name']
stations_df
# +
#Combine the data into a single dataset
Hawaii_df = pd.merge(measurements_df, stations_df, how="left", on=["station", "station"])
#Display the data table for preview
Hawaii_df.head()
# +
#Precipitation Analysis
# -
#Start by finding the most recent date in the data set.
mostrecent_date = session.query(Measurement.date).order_by(Measurement.date.desc()).first()
print(mostrecent_date)
#Using this date, retrieve the last 12 months of precipitation data by querying the 12 preceding months of data.
#**Note** you do not pass in the date as a variable to your query.
year_to_date = dt.date(2017, 8, 23) - dt.timedelta(days=365)
print(year_to_date)
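The one-year cutoff is plain `datetime` arithmetic, and the computation can be checked in isolation with the standard library only:

```python
import datetime as dt

# Most recent date in the dataset, per the query above.
most_recent = dt.date(2017, 8, 23)

# 365 days before it: the start of the 12-month window.
year_to_date = most_recent - dt.timedelta(days=365)
print(year_to_date)  # 2016-08-23
```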
#Select only the `date` and `prcp` values.
precipitation = session.query(Measurement.date, Measurement.prcp).\
filter(Measurement.date > year_to_date).\
order_by(Measurement.date).all()
#Load the query results into a Pandas DataFrame and set the index to the date column.
prcp_df = pd.DataFrame(precipitation)
prcp_df.columns =['date', 'precipitations']
prcp_df.set_index('date').head()
prcp_df.head()
#Sort the DataFrame values by `date`.
prcp_df.set_index('date').sort_values(by='date', ascending=False).head()
#print(prcp_df)
prcp_df.head()
# +
#Plot the results using the DataFrame `plot` method.
plt.rcParams["figure.figsize"] = [12,4]
prcp_df.plot('date','precipitations', color="cornflowerblue", markersize=10, linewidth=4)
plt.title('Precipitation Analysis, 7/23/2016-7/23/2017', fontsize = 16, fontweight = "bold", color = "c")
plt.xlabel('date', fontsize = 14)
plt.ylabel('Precipitation', fontsize = 14)
plt.xticks(color = "c", rotation = 45)
plt.grid()
plt.savefig("Figures/precipitations.png")
# -
#Use Pandas to print the summary statistics for the precipitation data.
prcp_df.describe()
#Close out your session.
session.close()
#Filter by the station with the highest number of observations.
#Query the last 12 months of temperature observation data for this station.
WAIHEE_data = session.query(Measurement.date, Measurement.tobs).\
filter(Measurement.date > year_to_date).filter(Measurement.station == 'USC00519281').all()
#print(WAIHEE_data)
WAIHEE_df = pd.DataFrame(WAIHEE_data)
WAIHEE_df.columns =['date', 'tobs']
WAIHEE_df.head()
WAIHEE_df.set_index('date').sort_values(by='date',ascending=False)
WAIHEE_df.head()
#Plot the results as a histogram with `bins=12`.
WAIHEE_df.set_index('date')
WAIHEE_df.hist(bins = 12, column='tobs', color="#2ab0ff")
plt.title('Temperature Analysis, 7/23/2016-7/23/2017', fontsize = 16, fontweight = "bold", color = "c")
plt.xlabel("Temperature")
plt.ylabel("Observations")
plt.savefig("Figures/temperatures.png")
# +
#
# -
#Close out your session.
session.close()
|
HW_partI_climate_analysis.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# This file contains the data pre-processing for the model. It provides parameterized methods for removing invalid values, removing outliers, scaling values to a logarithmic scale, removing the least-used compounds, and so on. The code is also copied into utils.py for easier importing in other notebooks.
import pandas as pd
import numpy as np
def remove_wrong_values(df):
df['IC50'] = pd.to_numeric(df['IC50'], errors='coerce')
df = df.dropna()
return df
def remove_least_used(df, min_perc_used=0):
occur = pd.DataFrame(df.drop('IC50', axis=1).sum())
occur.columns = ['number_of_feature_occurrences']
min_occurrs = int(df.shape[0] * min_perc_used)
not_qualified = occur[occur['number_of_feature_occurrences']<min_occurrs]
return df.drop(not_qualified.index, axis=1)
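`remove_least_used` drops fingerprint columns whose occurrence count falls below a fraction of the row count. The selection rule can be sketched without pandas, using hypothetical toy columns:

```python
# Keep only binary feature columns occurring in at least min_perc_used of rows.
rows = 10
columns = {"f1": 9, "f2": 1, "f3": 0}  # column name -> number of 1s (hypothetical)
min_perc_used = 0.2

min_occurs = int(rows * min_perc_used)  # same truncating rounding as the pandas version
kept = [name for name, count in columns.items() if count >= min_occurs]
print(kept)  # ['f1']
```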
def remove_target_outliers(df):
return df[(df['IC50']>1) & (df['IC50']<=100_000)]
def make_log_scale(df):
df['IC50'] = np.log10(df['IC50'])
return df
def prepare_df(file, min_perc_used=0, remove_outliers=True, log_scale=True):
print(f'Preparing ({file}) file.')
df = pd.read_csv(file, low_memory=False)
print(f'DataFrame base shape: {df.shape}')
df = remove_wrong_values(df)
print(f'Shape after removing wrong values: {df.shape}')
if min_perc_used != 0:
df = remove_least_used(df, min_perc_used=min_perc_used)
print(f'Shape after removing least used features: {df.shape}')
if remove_outliers:
df = remove_target_outliers(df)
print(f'Shape after removing outliers: {df.shape}')
if log_scale:
df = make_log_scale(df)
print()
return df
def classify_on_IC50(df, IC50_threshold, log_scale=True):
if log_scale:
IC50_threshold = np.log10(IC50_threshold)
df['IC50'] = np.where(df['IC50']<IC50_threshold, 1, 0)
return df
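When the targets are already on a log10 scale, the IC50 threshold must be transformed the same way before comparison, which is what `classify_on_IC50` does. A stdlib check of that step (the values are hypothetical):

```python
import math

IC50_threshold = 1000.0                      # threshold on the original scale
log_threshold = math.log10(IC50_threshold)   # 3.0 on the log10 scale

# Classify log-scaled IC50 values: 1 if strictly below the threshold, else 0,
# mirroring np.where(df['IC50'] < threshold, 1, 0).
log_values = [2.5, 3.0, 3.7]
labels = [1 if v < log_threshold else 0 for v in log_values]
print(labels)  # [1, 0, 0]
```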
def get_MACCSFP_fingerprints(min_perc_used=0, remove_outliers=True, log_scale=True):
file = 'ready_sets/cardiotoxicity_hERG_MACCSFP.csv'
df = prepare_df(file, min_perc_used=min_perc_used, remove_outliers=remove_outliers, log_scale=log_scale)
return df
def get_KlekotaRoth_fingerprints(min_perc_used=0, remove_outliers=True, log_scale=True):
file = 'ready_sets/cardiotoxicity_hERG_KlekFP.csv'
df = prepare_df(file, min_perc_used=min_perc_used, remove_outliers=remove_outliers, log_scale=log_scale)
return df
def get_hashed_fingerprints(min_perc_used=0, remove_outliers=True, log_scale=True):
file = 'ready_sets/cardiotoxicity_hERG_ExtFP.csv'
df = prepare_df(file, min_perc_used=min_perc_used, remove_outliers=remove_outliers, log_scale=log_scale)
return df
def get_mixed_fingerprints(min_perc_used=0, remove_outliers=True, log_scale=True):
print('Preparing files for mixed fingerprints.\n')
df1 = get_MACCSFP_fingerprints(min_perc_used=min_perc_used, remove_outliers=remove_outliers, log_scale=log_scale)
df2 = get_KlekotaRoth_fingerprints(min_perc_used=min_perc_used, remove_outliers=remove_outliers, log_scale=log_scale).drop('IC50',axis=1)
df3 = get_hashed_fingerprints(min_perc_used=min_perc_used, remove_outliers=remove_outliers, log_scale=log_scale).drop('IC50',axis=1)
return df1.join(df2).join(df3)
df = get_mixed_fingerprints(min_perc_used=0.01)
df
|
data_preparation.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + id="zX4Kg8DUTKWO"
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# + [markdown] id="view-in-github"
# <a href="https://colab.research.google.com/github/lmoroney/dlaicourse/blob/master/TensorFlow%20In%20Practice/Course%204%20-%20S%2BP/S%2BP%20Week%201%20-%20Lesson%203%20-%20Notebook.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + id="y7QztBIVR1tb"
try:
    # # %tensorflow_version only exists in Colab.
    # %tensorflow_version 2.x
    pass  # the magic above is commented out in script form; keep the try body valid
except Exception:
    pass
# + id="t9HrvPfrSlzS"
import tensorflow as tf
print(tf.__version__)
# + [markdown] id="I9x4GjlEVTQN"
# The next code block will set up the time series with seasonality, trend and a bit of noise.
# + id="gqWabzlJ63nL"
import numpy as np
import matplotlib.pyplot as plt
import tensorflow as tf
from tensorflow import keras
def plot_series(time, series, format="-", start=0, end=None):
plt.plot(time[start:end], series[start:end], format)
plt.xlabel("Time")
plt.ylabel("Value")
plt.grid(True)
def trend(time, slope=0):
return slope * time
def seasonal_pattern(season_time):
"""Just an arbitrary pattern, you can change it if you wish"""
return np.where(season_time < 0.4,
np.cos(season_time * 2 * np.pi),
1 / np.exp(3 * season_time))
def seasonality(time, period, amplitude=1, phase=0):
"""Repeats the same pattern at each period"""
season_time = ((time + phase) % period) / period
return amplitude * seasonal_pattern(season_time)
def noise(time, noise_level=1, seed=None):
rnd = np.random.RandomState(seed)
return rnd.randn(len(time)) * noise_level
time = np.arange(4 * 365 + 1, dtype="float32")
baseline = 10
amplitude = 40
slope = 0.05
noise_level = 5
# Create the series
series = baseline + trend(time, slope) + seasonality(time, period=365, amplitude=amplitude)
# Update with noise
series += noise(time, noise_level, seed=42)
plt.figure(figsize=(10, 6))
plot_series(time, series)
plt.show()
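The synthetic series combines three parts: a linear trend, a pattern that repeats every `period` steps, and Gaussian noise. The periodicity of the seasonal term can be verified with a stdlib mirror of `seasonal_pattern` (same formulas, scalar arithmetic instead of numpy):

```python
import math

def seasonal_pattern_scalar(season_time):
    # Scalar version of the numpy seasonal_pattern above.
    if season_time < 0.4:
        return math.cos(season_time * 2 * math.pi)
    return 1 / math.exp(3 * season_time)

def seasonality_scalar(t, period, amplitude=1, phase=0):
    season_time = ((t + phase) % period) / period
    return amplitude * seasonal_pattern_scalar(season_time)

# The seasonal component repeats exactly every `period` steps.
period = 365
for t in (0, 100, 250):
    assert seasonality_scalar(t, period) == seasonality_scalar(t + period, period)
print(seasonality_scalar(0, period))  # 1.0 (cos of 0)
```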
# + [markdown] id="UfdyqJJ1VZVu"
# Now that we have the time series, let's split it so we can start forecasting
# + id="_w0eKap5uFNP"
split_time = 1000
time_train = time[:split_time]
x_train = series[:split_time]
time_valid = time[split_time:]
x_valid = series[split_time:]
plt.figure(figsize=(10, 6))
plot_series(time_train, x_train)
plt.show()
plt.figure(figsize=(10, 6))
plot_series(time_valid, x_valid)
plt.show()
# + [markdown] id="bjD8ncEZbjEW"
# # Naive Forecast
# + id="Pj_-uCeYxcAb"
naive_forecast = series[split_time - 1:-1]
# + id="JtxwHj9Ig0jT"
plt.figure(figsize=(10, 6))
plot_series(time_valid, x_valid)
plot_series(time_valid, naive_forecast)
# + [markdown] id="fw1SP5WeuixH"
# Let's zoom in on the start of the validation period:
# + id="D0MKg7FNug9V"
plt.figure(figsize=(10, 6))
plot_series(time_valid, x_valid, start=0, end=150)
plot_series(time_valid, naive_forecast, start=1, end=151)
# + [markdown] id="35gIlQLfu0TT"
# You can see that the naive forecast lags 1 step behind the time series.
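The slicing `series[split_time - 1:-1]` shifts the series one step forward, so each forecast is simply the previous observation. On a hypothetical toy series:

```python
# Naive forecast: predict each value as the previous observation.
series = [10, 20, 30, 40, 50]
split_time = 2

x_valid = series[split_time:]               # [30, 40, 50]
naive_forecast = series[split_time - 1:-1]  # [20, 30, 40]

# Each forecast equals the previous true value: a one-step lag.
print(naive_forecast)  # [20, 30, 40]
```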
# + [markdown] id="Uh_7244Gsxfx"
# Now let's compute the mean squared error and the mean absolute error between the forecasts and the predictions in the validation period:
# + id="byNnC7IbsnMZ"
print(keras.metrics.mean_squared_error(x_valid, naive_forecast).numpy())
print(keras.metrics.mean_absolute_error(x_valid, naive_forecast).numpy())
# + [markdown] id="WGPBC9QttI1u"
# That's our baseline, now let's try a moving average:
# + id="YGz5UsUdf2tV"
def moving_average_forecast(series, window_size):
"""Forecasts the mean of the last few values.
If window_size=1, then this is equivalent to naive forecast"""
forecast = []
for time in range(len(series) - window_size):
forecast.append(series[time:time + window_size].mean())
return np.array(forecast)
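The loop above costs O(n·window); the same averages can be computed in O(n) with a running cumulative sum. A stdlib sketch of that alternative (equivalent output for list inputs, not the notebook's implementation):

```python
from itertools import accumulate

def moving_average_fast(series, window_size):
    """Same output as moving_average_forecast, via a cumulative sum."""
    csum = [0] + list(accumulate(series))  # csum[i] = sum(series[:i])
    return [
        (csum[t + window_size] - csum[t]) / window_size
        for t in range(len(series) - window_size)
    ]

print(moving_average_fast([1, 2, 3, 4, 5], 2))  # [1.5, 2.5, 3.5]
```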
# + id="HHFhGXQji7_r"
moving_avg = moving_average_forecast(series, 30)[split_time - 30:]
plt.figure(figsize=(10, 6))
plot_series(time_valid, x_valid)
plot_series(time_valid, moving_avg)
# + id="wG7pTAd7z0e8"
print(keras.metrics.mean_squared_error(x_valid, moving_avg).numpy())
print(keras.metrics.mean_absolute_error(x_valid, moving_avg).numpy())
# + [markdown] id="JMYPnJqwz8nS"
# That's worse than naive forecast! The moving average does not anticipate trend or seasonality, so let's try to remove them by using differencing. Since the seasonality period is 365 days, we will subtract the value at time *t* – 365 from the value at time *t*.
# + id="5pqySF7-rJR4"
diff_series = (series[365:] - series[:-365])
diff_time = time[365:]
plt.figure(figsize=(10, 6))
plot_series(diff_time, diff_series)
plt.show()
# + [markdown] id="xPlPlS7DskWg"
# Great, the trend and seasonality seem to be gone, so now we can use the moving average:
# + id="QmZpz7arsjbb"
diff_moving_avg = moving_average_forecast(diff_series, 50)[split_time - 365 - 50:]
plt.figure(figsize=(10, 6))
plot_series(time_valid, diff_series[split_time - 365:])
plot_series(time_valid, diff_moving_avg)
plt.show()
# + [markdown] id="Gno9S2lyecnc"
# Now let's bring back the trend and seasonality by adding the past values from t – 365:
# + id="Dv6RWFq7TFGB"
diff_moving_avg_plus_past = series[split_time - 365:-365] + diff_moving_avg
plt.figure(figsize=(10, 6))
plot_series(time_valid, x_valid)
plot_series(time_valid, diff_moving_avg_plus_past)
plt.show()
# + id="59jmBrwcTFCx"
print(keras.metrics.mean_squared_error(x_valid, diff_moving_avg_plus_past).numpy())
print(keras.metrics.mean_absolute_error(x_valid, diff_moving_avg_plus_past).numpy())
# + [markdown] id="vx9Et1Hkeusl"
# Better than naive forecast, good. However the forecasts look a bit too random, because we're just adding past values, which were noisy. Let's use a moving averaging on past values to remove some of the noise:
# + id="K81dtROoTE_r"
diff_moving_avg_plus_smooth_past = moving_average_forecast(series[split_time - 370:-360], 10) + diff_moving_avg
plt.figure(figsize=(10, 6))
plot_series(time_valid, x_valid)
plot_series(time_valid, diff_moving_avg_plus_smooth_past)
plt.show()
# + id="iN2MsBxWTE3m"
print(keras.metrics.mean_squared_error(x_valid, diff_moving_avg_plus_smooth_past).numpy())
print(keras.metrics.mean_absolute_error(x_valid, diff_moving_avg_plus_smooth_past).numpy())
|
sequences-time_series-predictions/S+P_Week_1_Lesson_3_Notebook.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Microbenchmarks on CPU
# This is a notebook for microbenchmarks running on CPU.
# +
from pyspark.sql import SparkSession
from pyspark.conf import SparkConf
from time import time
import os
# Change to your cluster ip:port
SPARK_MASTER_URL = os.getenv("SPARK_MASTER_URL", "spark://your-ip:port")
# -
# Run the microbenchmark with retry times
def runMicroBenchmark(spark, appName, query, retryTimes):
count = 0
total_time = 0
# You can print the physical plan of each query
# spark.sql(query).explain()
while count < retryTimes:
start = time()
spark.sql(query).show(5)
end = time()
total_time += round(end - start, 2)
count = count + 1
print("Retry times : {}, ".format(count) + appName + " Microbenchmark takes {} seconds".format(round(end - start, 2)))
    print(appName + " Microbenchmark takes average {} seconds after {} retries".format(round(total_time/retryTimes, 2), retryTimes))
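The retry-and-average timing logic can be factored out and exercised without a Spark session. A hypothetical stdlib helper with the same structure (not part of the benchmark code):

```python
from time import time

def average_runtime(fn, retries):
    """Run fn() `retries` times and return the mean wall-clock seconds."""
    total = 0.0
    for _ in range(retries):
        start = time()
        fn()
        total += time() - start
    return total / retries

# Example: time a trivial workload three times.
avg = average_runtime(lambda: sum(range(100_000)), 3)
print(f"average: {avg:.4f} seconds")
```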
# +
# You need to update data path with your real path and hardware resource!
dataRoot = os.getenv("DATA_ROOT", "/data")
driverMem = os.getenv("DRIVER_MEM", "50g")
executorMem = os.getenv("EXECUTOR_MEM", "100g")
maxPartionBytes = os.getenv("MAX_PARTITION_BYTES", "1g")
executorCores = int(os.getenv("EXECUTOR_CORES", "32"))
# common spark settings
conf = SparkConf()
conf.setMaster(SPARK_MASTER_URL)
conf.setAppName("Microbenchmark on CPU")
conf.set("spark.driver.memory", driverMem)
conf.set("spark.executor.memory", executorMem)
conf.set("spark.executor.cores", executorCores)
conf.set("spark.locality.wait", "0")
conf.set("spark.sql.files.maxPartitionBytes", maxPartionBytes)
conf.set("spark.dynamicAllocation.enabled", "false")
conf.set("spark.sql.adaptive.enabled", "true")
# create spark session
spark = SparkSession.builder.config(conf=conf).getOrCreate()
# Load dataframe and create tempView
spark.read.parquet(dataRoot + "/tpcds/store_sales").createOrReplaceTempView("store_sales")
spark.read.parquet(dataRoot + "/tpcds/catalog_sales").createOrReplaceTempView("catalog_sales")
spark.read.parquet(dataRoot + "/tpcds/web_sales").createOrReplaceTempView("web_sales")
spark.read.parquet(dataRoot + "/tpcds/item").createOrReplaceTempView("item")
spark.read.parquet(dataRoot + "/tpcds/date_dim").createOrReplaceTempView("date_dim")
# -
# ### Expand&HashAggregate
# This is a microbenchmark about Expand&HashAggregate expressions running on the CPU. The query calculates the distinct value of some dimension columns and average birth year by different c_salutation of customers after grouping by c_current_hdemo_sk.
# +
# As a part of this query the size of the data in each task grows a lot.
# By default, Spark will try to distribute the data among all the tasks in the cluster,
# but on large clusters with large parquet files the splittable portions of the parquet files end up not being distributed evenly
# and it is faster to re-partition the data to redistribute it than to deal with skew.
spark.read.parquet(dataRoot + "/tpcds/customer").repartition(512).createOrReplaceTempView("customer")
print("-"*50)
# -
query = '''
select c_current_hdemo_sk,
count(DISTINCT if(c_salutation=="Ms.",c_salutation,null)) as c1,
count(DISTINCT if(c_salutation=="Mr.",c_salutation,null)) as c12,
count(DISTINCT if(c_salutation=="Dr.",c_salutation,null)) as c13,
count(DISTINCT if(c_salutation=="Ms.",c_first_name,null)) as c2,
count(DISTINCT if(c_salutation=="Mr.",c_first_name,null)) as c22,
count(DISTINCT if(c_salutation=="Dr.",c_first_name,null)) as c23,
count(DISTINCT if(c_salutation=="Ms.",c_last_name,null)) as c3,
count(DISTINCT if(c_salutation=="Mr.",c_last_name,null)) as c32,
count(DISTINCT if(c_salutation=="Dr.",c_last_name,null)) as c33,
count(DISTINCT if(c_salutation=="Ms.",c_birth_country,null)) as c4,
count(DISTINCT if(c_salutation=="Mr.",c_birth_country,null)) as c42,
count(DISTINCT if(c_salutation=="Dr.",c_birth_country,null)) as c43,
count(DISTINCT if(c_salutation=="Ms.",c_email_address,null)) as c5,
count(DISTINCT if(c_salutation=="Mr.",c_email_address,null)) as c52,
count(DISTINCT if(c_salutation=="Dr.",c_email_address,null)) as c53,
count(DISTINCT if(c_salutation=="Ms.",c_login,null)) as c6,
count(DISTINCT if(c_salutation=="Mr.",c_login,null)) as c62,
count(DISTINCT if(c_salutation=="Dr.",c_login,null)) as c63,
count(DISTINCT if(c_salutation=="Ms.",c_preferred_cust_flag,null)) as c7,
count(DISTINCT if(c_salutation=="Mr.",c_preferred_cust_flag,null)) as c72,
count(DISTINCT if(c_salutation=="Dr.",c_preferred_cust_flag,null)) as c73,
count(DISTINCT if(c_salutation=="Ms.",c_birth_month,null)) as c8,
count(DISTINCT if(c_salutation=="Mr.",c_birth_month,null)) as c82,
count(DISTINCT if(c_salutation=="Dr.",c_birth_month,null)) as c83,
avg(if(c_salutation=="Ms.",c_birth_year,null)) as avg1,
avg(if(c_salutation=="Mr.",c_birth_year,null)) as avg2,
avg(if(c_salutation=="Dr.",c_birth_year,null)) as avg3,
avg(if(c_salutation=="Miss.",c_birth_year,null)) as avg4,
avg(if(c_salutation=="Mrs.",c_birth_year,null)) as avg5,
avg(if(c_salutation=="Sir.",c_birth_year,null)) as avg6,
avg(if(c_salutation=="Professor.",c_birth_year,null)) as avg7,
avg(if(c_salutation=="Teacher.",c_birth_year,null)) as avg8,
avg(if(c_salutation=="Agent.",c_birth_year,null)) as avg9,
avg(if(c_salutation=="Director.",c_birth_year,null)) as avg10
from customer group by c_current_hdemo_sk
'''
print("-"*50)
# Run microbenchmark with n retry time
runMicroBenchmark(spark,"Expand&HashAggregate",query ,2)
# ### Windowing (without data skew)
# This is a microbenchmark about windowing expressions running on CPU mode. The sub-query calculates the average ss_sales_price of a fixed window function partition by ss_customer_sk, and the parent query calculates the average price of the sub-query grouping by each customer.
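The `ROWS BETWEEN 50 PRECEDING AND 50 FOLLOWING` frame averages a centered window that is clipped at the partition boundaries. A plain-Python sketch of those frame semantics on a single hypothetical partition (smaller frame for readability):

```python
def centered_window_avg(values, preceding, following):
    """Average over ROWS BETWEEN n PRECEDING AND m FOLLOWING, clipped at edges."""
    out = []
    for i in range(len(values)):
        lo = max(0, i - preceding)
        hi = min(len(values), i + following + 1)
        window = values[lo:hi]
        out.append(sum(window) / len(window))
    return out

# With a +/-1 frame, each row averages itself and its immediate neighbours.
print(centered_window_avg([1, 2, 3, 4], 1, 1))  # [1.5, 2.0, 3.0, 3.5]
```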
query = '''
select ss_customer_sk,avg(avg_price) as avg_price
from
(
SELECT ss_customer_sk ,avg(ss_sales_price) OVER (PARTITION BY ss_customer_sk order by ss_sold_date_sk ROWS BETWEEN 50 PRECEDING AND 50 FOLLOWING ) as avg_price
FROM store_sales
where ss_customer_sk is not null
) group by ss_customer_sk order by 2 desc
'''
print("-"*50)
# Run microbenchmark with n retry time
runMicroBenchmark(spark,"Windowing without skew",query , 2)
# ### Windowing(with data skew)
# Data skew is caused by many null values in the ss_customer_sk column.
query = '''
select ss_customer_sk,avg(avg_price) as avg_price
from
(
SELECT ss_customer_sk ,avg(ss_sales_price) OVER (PARTITION BY ss_customer_sk order by ss_sold_date_sk ROWS BETWEEN 50 PRECEDING AND 50 FOLLOWING ) as avg_price
FROM store_sales
) group by ss_customer_sk order by 2 desc
'''
print("-"*50)
# Run microbenchmark with n retry time
runMicroBenchmark(spark,"Windowing with skew",query ,2)
# ### Intersection
# This is a microbenchmark about intersection operation running on CPU mode. The query calculates items in the same brand, class, and category that are sold in all three sales channels in two consecutive years.
query = '''
select i_item_sk ss_item_sk
from item,
(select iss.i_brand_id brand_id, iss.i_class_id class_id, iss.i_category_id category_id
from store_sales, item iss, date_dim d1
where ss_item_sk = iss.i_item_sk
and ss_sold_date_sk = d1.d_date_sk
and d1.d_year between 1999 AND 1999 + 2
intersect
select ics.i_brand_id, ics.i_class_id, ics.i_category_id
from catalog_sales, item ics, date_dim d2
where cs_item_sk = ics.i_item_sk
and cs_sold_date_sk = d2.d_date_sk
and d2.d_year between 1999 AND 1999 + 2
intersect
select iws.i_brand_id, iws.i_class_id, iws.i_category_id
from web_sales, item iws, date_dim d3
where ws_item_sk = iws.i_item_sk
and ws_sold_date_sk = d3.d_date_sk
and d3.d_year between 1999 AND 1999 + 2) x
where i_brand_id = brand_id
and i_class_id = class_id
and i_category_id = category_id
'''
# Run microbenchmark with n retry time
runMicroBenchmark(spark,"TPC-DS Q14a subquery",query ,2)
# ### Crossjoin
# This is a microbenchmark for a 1-million rows crossjoin with itself.
# +
# You have to stop the sparksession and create a new one
# because in this query we need to create more executors with less cores to get the best performance
spark.stop()
conf = SparkConf()
# Common spark settings
conf.setMaster(SPARK_MASTER_URL)
conf.setAppName("Crossjoin Microbenchmark on CPU")
conf.set("spark.driver.memory", driverMem)
conf.set("spark.executor.memory", executorMem)
conf.set("spark.executor.cores", executorCores)
conf.set("spark.locality.wait", "0")
conf.set("spark.sql.files.maxPartitionBytes", maxPartionBytes)
conf.set("spark.dynamicAllocation.enabled", "false")
conf.set("spark.sql.adaptive.enabled", "true")
# We can get a better performance by broadcast one table to change CartesianJoin to BroadCastNestLoopJoin
conf.set("spark.sql.autoBroadcastJoinThreshold",1000000000)
# Get or create spark session
spark = SparkSession.builder.config(conf=conf).getOrCreate()
print("-"*50)
# -
# Load dataframe and create tempView
start = time()
spark.read.parquet(dataRoot + "/tpcds/customer").limit(1000000).write.format("parquet").mode("overwrite").save("/data/tmp/customer1m")
end = time()
print("scanning and writing parquet cost : {} seconds".format(round(end - start, 2)))
# We need to tune the partition number to get the best performance.
spark.read.parquet("/data/tmp/customer1m").repartition(16000).createOrReplaceTempView("costomer_df_1_million")
query = '''
select count(*) from costomer_df_1_million c1 inner join costomer_df_1_million c2 on c1.c_customer_sk>c2.c_customer_sk
'''
print("-"*50)
# Run microbenchmark with n retry time
runMicroBenchmark(spark,"Crossjoin",query ,2)
|
examples/micro-benchmarks/notebooks/micro-benchmarks-cpu.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # ACS2 in Multiplexer
# +
# %matplotlib inline
# General
from __future__ import unicode_literals
import matplotlib
import matplotlib.pyplot as plt
import matplotlib.ticker as tkr
import numpy as np
import pandas as pd
# Logger
import logging
logging.basicConfig(level=logging.WARN)
# ALCS + custom environments
import sys, os
sys.path.append(os.path.abspath('../'))
# Enable automatic module reload
# %load_ext autoreload
# %autoreload 2
# Load PyALCS module
from lcs.agents import EnvironmentAdapter
from lcs.agents.acs2 import ACS2, Configuration, ClassifiersList
from lcs.metrics import population_metrics
# Load environments
import gym
import gym_multiplexer
# -
# ## Multiplexer
# +
mp = gym.make('boolean-multiplexer-20bit-v0')
situation = mp.reset()
# render phenotype
mp.render()
# -
# perform random action
state, reward, done, _ = mp.step(mp.action_space.sample())
print(f"New state: {state}, reward: {reward}, is done: {done}")
# ## Environment adapter
class MultiplexerAdapter(EnvironmentAdapter):
@classmethod
def to_genotype(cls, phenotype):
return [str(x) for x in phenotype]
genotype = MultiplexerAdapter().to_genotype(state)
''.join(genotype)
# ## Go agent, go...
# Perform experiment for a couple of explore/exploit trials.
# +
def get_6bit_mp_actors():
mp = gym.make('boolean-multiplexer-6bit-v0')
cfg = Configuration(
mp.env.observation_space.n, 2,
environment_adapter=MultiplexerAdapter(),
user_metrics_collector_fcn=population_metrics,
do_ga=True)
return ACS2(cfg), mp
def get_11bit_mp_actors():
mp = gym.make('boolean-multiplexer-11bit-v0')
cfg = Configuration(
mp.env.observation_space.n, 2,
environment_adapter=MultiplexerAdapter(),
user_metrics_collector_fcn=population_metrics,
do_ga=True)
return ACS2(cfg), mp
def get_20bit_mp_actors():
mp = gym.make('boolean-multiplexer-20bit-v0')
cfg = Configuration(
mp.env.observation_space.n, 2,
environment_adapter=MultiplexerAdapter(),
user_metrics_collector_fcn=population_metrics,
do_ga=True)
return ACS2(cfg), mp
# -
def perform_experiment(agent, env, trials=250_000):
population, metrics = agent.explore_exploit(env, trials)
print("Population size: {}".format(metrics[-1]['population']))
print("Reliable size: {}".format(metrics[-1]['reliable']))
print(metrics[-1])
reliable_classifiers = [c for c in population if c.is_reliable()]
reliable_classifiers = sorted(reliable_classifiers, key=lambda cl: -cl.fitness)
# Print top 10 reliable classifiers
for cl in reliable_classifiers[:10]:
print(f"{cl}, q: {cl.q:.2f}, fit: {cl.fitness:.2f}, exp: {cl.exp:.2f}")
return population, metrics
# Here you will probably want to run these experiments for about 250k trials.
TRIALS = 5_000
# ### 6-bit MPX
# %%time
p6, m6 = perform_experiment(*get_6bit_mp_actors(), trials=TRIALS)
# ### 11-bit MPX
# %%time
p11, m11 = perform_experiment(*get_11bit_mp_actors(), trials=TRIALS)
# ### 20-bit MPX
# %%time
p20, m20 = perform_experiment(*get_20bit_mp_actors(), trials=TRIALS)
def parse_metrics(metrics):
lst = [[
m['trial'],
m['numerosity'],
m['reliable'],
m['reward'],
] for m in metrics]
df = pd.DataFrame(lst, columns=['trial', 'numerosity', 'reliable', 'reward'])
df = df.set_index('trial')
return df
# parse metrics to df
df6bit = parse_metrics(m6)
df11bit = parse_metrics(m11)
df20bit = parse_metrics(m20)
# ## Number of reliable classifiers
# +
window=50
fig, ax = plt.subplots()
df6bit['reliable'].rolling(window=window).mean().plot(label='6-bit', linewidth=1.0, ax=ax)
df11bit['reliable'].rolling(window=window).mean().plot(label='11-bit', linewidth=1.0, ax=ax)
df20bit['reliable'].rolling(window=window).mean().plot(label='20-bit', linewidth=1.0, ax=ax)
ax.set_xlabel('Trial')
ax.set_ylabel('Reliable classifiers')
ax.set_title(f'Number of reliable classifiers for boolean MPX.\nResults averaged over {window} trials')
plt.legend()
plt.show()
# -
# ## Average reward
# +
window=250
fig, ax = plt.subplots()
df6bit['reward'].rolling(window=window).mean().plot(label='6-bit', linewidth=1.0, ax=ax)
df11bit['reward'].rolling(window=window).mean().plot(label='11-bit', linewidth=1.0, ax=ax)
df20bit['reward'].rolling(window=window).mean().plot(label='20-bit', linewidth=1.0, ax=ax)
plt.axhline(1000, c='black', linewidth=1.0, linestyle=':')
ax.set_xlabel('Trial')
ax.set_ylabel('Reward')
ax.set_title(f'Reward obtained.\nResults averaged over {window} trials')
ax.set_ylim([500, 1050])
plt.legend()
plt.show()
|
notebooks/ACS2 in Multiplexer.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Recommendations with IBM
#
# In this notebook, you will be putting your recommendation skills to use on real data from the IBM Watson Studio platform.
#
#
# You may either submit your notebook through the workspace here, or you may work from your local machine and submit through the next page. Either way assure that your code passes the project [RUBRIC](https://review.udacity.com/#!/rubrics/2322/view). **Please save regularly.**
#
# By following the table of contents, you will build out a number of different methods for making recommendations that can be used for different situations.
#
#
# ## Table of Contents
#
# I. [Exploratory Data Analysis](#Exploratory-Data-Analysis)<br>
# II. [Rank Based Recommendations](#Rank)<br>
# III. [User-User Based Collaborative Filtering](#User-User)<br>
# IV. [Content Based Recommendations (EXTRA - NOT REQUIRED)](#Content-Recs)<br>
# V. [Matrix Factorization](#Matrix-Fact)<br>
# VI. [Extras & Concluding](#conclusions)
#
# At the end of the notebook, you will find directions for how to submit your work. Let's get started by importing the necessary libraries and reading in the data.
# +
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import project_tests as t
import pickle
# %matplotlib inline
df = pd.read_csv('data/user-item-interactions.csv')
df_content = pd.read_csv('data/articles_community.csv')
del df['Unnamed: 0']
del df_content['Unnamed: 0']
# Show df to get an idea of the data
df.head()
# +
import plotly.offline as py
py.init_notebook_mode(connected=True)
import plotly.graph_objs as go
import plotly.figure_factory as ff
# -
# Show df_content to get an idea of the data
df_content.head()
# ### <a class="anchor" id="Exploratory-Data-Analysis">Part I : Exploratory Data Analysis</a>
#
# `1.` What is the distribution of how many articles a user interacts with in the dataset?
# Distribution: number of article interactions per user (email)
user_count= df.groupby("email")['article_id'].count()
user_count.head()
# +
# fig=go.Figure(data=[go.Histogram(x=user_count)])
# fig.show()
# +
# Fill in the median and maximum number of user-article interactions below
median_val = user_count.median()# 50% of individuals interact with ____ number of articles or fewer.
max_views_by_user = user_count.max()# The maximum number of user-article interactions by any 1 user is ______.
# -
# `2.` Explore and remove duplicate articles from the **df_content** dataframe.
# Find and explore duplicate articles
num_duplicate_records=df_content.duplicated(subset='article_id').sum()
if(num_duplicate_records>0):
df_content.drop_duplicates(subset=['article_id'], keep='first', inplace=True)
# `3.` Use the cells below to find:
#
# **a.** The number of unique articles that have an interaction with a user.
# **b.** The number of unique articles in the dataset (whether they have any interactions or not).<br>
# **c.** The number of unique users in the dataset. (excluding null values) <br>
# **d.** The number of user-article interactions in the dataset.
# +
unique_articles = df.article_id.nunique()# The number of unique articles that have at least one interaction
total_articles = df_content.shape[0] #The number of unique articles on the IBM platform
unique_users = df.email.nunique() # The number of unique users
user_article_interactions = df.shape[0]# The number of user-article interactions
print("unique_articles:{} \ntotal_articles:{} \nunique_users:{} \nuser_article_interactions:{}".format(unique_articles,total_articles,unique_users,user_article_interactions))
# -
# `4.` Use the cells below to find the most viewed **article_id**, as well as how often it was viewed. After talking to the company leaders, the `email_mapper` function was deemed a reasonable way to map users to ids. There were a small number of null values, and it was found that all of these null values likely belonged to a single user (which is how they are stored using the function below).
ans=df['article_id'].value_counts().head()
most_viewed_article_id = str(ans.index[0]) # The most viewed article in the dataset as a string with one value following the decimal
max_views =ans.values[0] # The most viewed article in the dataset was viewed how many times?
# +
# Email mapper: encode each email address as an integer user id
def email_mapper():
coded_dict = dict()
cter = 1
email_encoded = []
for val in df['email']:
if val not in coded_dict:
coded_dict[val] = cter
cter+=1
email_encoded.append(coded_dict[val])
return email_encoded
email_encoded = email_mapper()
del df['email']
df['user_id'] = email_encoded
# show header
df.head()
# +
## If you stored all your results in the variable names above,
## you shouldn't need to change anything in this cell
sol_1_dict = {
'`50% of individuals have _____ or fewer interactions.`': median_val,
'`The total number of user-article interactions in the dataset is ______.`': user_article_interactions,
'`The maximum number of user-article interactions by any 1 user is ______.`': max_views_by_user,
'`The most viewed article in the dataset was viewed _____ times.`': max_views,
'`The article_id of the most viewed article is ______.`': most_viewed_article_id,
'`The number of unique articles that have at least 1 rating ______.`': unique_articles,
'`The number of unique users in the dataset is ______`': unique_users,
'`The number of unique articles on the IBM platform`': total_articles
}
# Test your dictionary against the solution
t.sol_1_test(sol_1_dict)
# -
# ### <a class="anchor" id="Rank">Part II: Rank-Based Recommendations</a>
#
# Unlike in the earlier lessons, we don't actually have ratings for whether a user liked an article or not. We only know that a user has interacted with an article. In these cases, the popularity of an article can really only be based on how often an article was interacted with.
#
# `1.` Fill in the function below to return the **n** top articles ordered with most interactions as the top. Test your function using the tests below.
# +
def get_top_articles(n, df=df):
'''
INPUT:
n - (int) the number of top articles to return
df - (pandas dataframe) df as defined at the top of the notebook
OUTPUT:
top_articles - (list) A list of the top 'n' article titles
'''
    # rank ids by interaction count, then map each id to its title in that order
    top_article_ids = df['article_id'].value_counts().index[:n]
    id_to_title = df.drop_duplicates(subset='article_id').set_index('article_id')['title']
    top_articles = id_to_title.loc[top_article_ids].tolist()
    return top_articles # Return the top article titles from df (not df_content)
def get_top_article_ids(n, df=df):
'''
INPUT:
n - (int) the number of top articles to return
df - (pandas dataframe) df as defined at the top of the notebook
OUTPUT:
top_articles - (list) A list of the top 'n' article titles
'''
    # take the article ids (the index of value_counts), not the counts themselves
    top_articles = df['article_id'].value_counts().index[:n].tolist()
    return top_articles # Return the top article ids
# -
print(get_top_articles(10))
print(get_top_article_ids(10))
# +
# Test your function by returning the top 5, 10, and 20 articles
top_5 = get_top_articles(5)
top_10 = get_top_articles(10)
top_20 = get_top_articles(20)
# Test each of your three lists from above
t.sol_2_test(get_top_articles)
# -
# ### <a class="anchor" id="User-User">Part III: User-User Based Collaborative Filtering</a>
#
#
# `1.` Use the function below to reformat the **df** dataframe to be shaped with users as the rows and articles as the columns.
#
# * Each **user** should only appear in each **row** once.
#
#
# * Each **article** should only show up in one **column**.
#
#
# * **If a user has interacted with an article, then place a 1 where the user-row meets for that article-column**. It does not matter how many times a user has interacted with the article, all entries where a user has interacted with an article should be a 1.
#
#
# * **If a user has not interacted with an item, then place a zero where the user-row meets for that article-column**.
#
# Use the tests to make sure the basic structure of your matrix matches what is expected by the solution.
# +
# create the user-article matrix with 1's and 0's
def create_user_item_matrix(df):
'''
INPUT:
df - pandas dataframe with article_id, title, user_id columns
OUTPUT:
user_item - user item matrix = user_movie
Description:
Return a matrix with user ids as rows and article ids on the columns with 1 values where a user interacted with
an article and a 0 otherwise
'''
# Fill in the function here
df_copy= df.copy()
df_copy=df_copy.drop_duplicates(subset= ['article_id','user_id'])
    user_item=df_copy.groupby(["user_id","article_id"])['title'].count().unstack() # unstack pivots the long format into a wide user x article matrix
user_item.fillna(0,inplace=True)
return user_item
user_item = create_user_item_matrix(df)
user_item.shape
# -
## Tests: You should just need to run this cell. Don't change the code.
assert user_item.shape[0] == 5149, "Oops! The number of users in the user-article matrix doesn't look right."
assert user_item.shape[1] == 714, "Oops! The number of articles in the user-article matrix doesn't look right."
assert user_item.sum(axis=1)[1] == 36, "Oops! The number of articles seen by user 1 doesn't look right."
print("You have passed our quick tests! Please proceed!")
# `2.` Function: provide an ordered list of the most similar users to that user (from most similar to least similar). The returned result should not contain the provided user_id, as we know that each user is similar to him/herself. Because the results for each user here are binary, it (perhaps) makes sense to compute similarity as the dot product of two users.
#
# Use the tests to test your function.
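# As a quick illustration of why the dot product works as a similarity measure here: with binary interaction vectors, the dot product simply counts the articles two users have in common. A minimal sketch with made-up toy vectors (not the real matrix):

```python
import numpy as np

# toy binary interaction vectors for three hypothetical users
u1 = np.array([1, 0, 1, 1, 0])
u2 = np.array([1, 1, 1, 0, 0])
u3 = np.array([0, 1, 0, 0, 1])

# dot product counts positions where both vectors are 1
print(int(np.dot(u1, u2)))  # 2 -> two articles in common
print(int(np.dot(u1, u3)))  # 0 -> no overlap
```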
def find_similar_users(user_id, user_item=user_item):
'''
INPUT:
user_id - (int) a user_id
user_item - (pandas dataframe) matrix of users by articles:
1's when a user has interacted with an article, 0 otherwise
OUTPUT:
similar_users - (list) an ordered list where the closest users (largest dot product users)
are listed first
Description:
Computes the similarity of every pair of users based on the dot product
Returns an ordered
'''
# compute similarity of each user to the provided user
user_idx = list(user_item.index.values)
    # dot product of this user's row with every user's row (U times V transpose)
    u_vec = np.array(user_item[user_item.index == user_id])
    v_mat = np.array(user_item).T
    dot_product = np.dot(u_vec, v_mat)[0]
    dot_product = pd.Series(dot_product, index=user_idx)
# sort by similarity
dot_product = dot_product.sort_values(ascending=False)
dot_product.drop(labels=[user_id],inplace=True)
# create list of just the ids
most_similar_users = dot_product.index.values.tolist()
# remove the own user's id
return most_similar_users # return a list of the users in order from most to least similar
# Do a spot check of your function
print("The 10 most similar users to user 1 are: {}".format(find_similar_users(1)[:10]))
print("The 5 most similar users to user 3933 are: {}".format(find_similar_users(3933)[:5]))
print("The 3 most similar users to user 46 are: {}".format(find_similar_users(46)[:3]))
# `3.` Now that you have a function that provides the most similar users to each user, you will want to use these users to find articles you can recommend. Complete the functions below to return the articles you would recommend to each user.
# +
def get_article_names(article_ids, df=df):
'''
INPUT:
article_ids - (list) a list of article ids
df - (pandas dataframe) df as defined at the top of the notebook
OUTPUT:
article_names - (list) a list of article names associated with the list of article ids
(this is identified by the title column)
'''
article_names=df[df['article_id'].isin(article_ids)]['title'].drop_duplicates().values.tolist()
return article_names # Return the article names associated with list of article ids
def get_user_articles(user_id, user_item=user_item):
'''
INPUT:
user_id - (int) a user id
user_item - (pandas dataframe) matrix of users by articles:
1's when a user has interacted with an article, 0 otherwise
OUTPUT:
article_ids - (list) a list of the article ids seen by the user
article_names - (list) a list of article names associated with the list of article ids
(this is identified by the doc_full_name column in df_content)
Description:
Provides a list of the article_ids and article titles that have been seen by a user
'''
# Your code here
# get row of user
user_row = user_item.loc[user_id]
# find indices of user_row where there are interactionns
ind = np.where(user_row == 1)
# get article ids where user has interactions
article_ids = user_row.index[ind].values.tolist()
article_ids=list(map(lambda x:str(x),article_ids))
# get article names where users have interactions
article_names = get_article_names(article_ids)
return article_ids, article_names # return the ids and names
def user_user_recs(user_id, m=10):
'''
INPUT:
user_id - (int) a user id
m - (int) the number of recommendations you want for the user
OUTPUT:
recs - (list) a list of recommendations for the user
Description:
Loops through the users based on closeness to the input user_id
For each user - finds articles the user hasn't seen before and provides them as recs
Does this until m recommendations are found
Notes:
Users who are the same closeness are chosen arbitrarily as the 'next' user
For the user where the number of recommended articles starts below m
and ends exceeding m, the last items are chosen arbitrarily
'''
    recommendations = []
    similar_users = find_similar_users(user_id)
    # unpack the tuple: we only need the ids the user has already seen
    seen_ids, _ = get_user_articles(user_id)
    for user in similar_users:
        article_ids, article_names = get_user_articles(user)
        recommendations += article_ids
    recommendations = list(set([x for x in recommendations if x not in seen_ids]))[:m]
    return recommendations # return your recommendations for this user_id
# -
set(get_user_articles(20)[0])
# Test your functions here - No need to change this code - just run this cell
assert set(get_article_names(['1024.0', '1176.0', '1305.0', '1314.0', '1422.0', '1427.0'])) == set(['using deep learning to reconstruct high-resolution audio', 'build a python app on the streaming analytics service', 'gosales transactions for naive bayes model', 'healthcare python streaming application demo', 'use r dataframes & ibm watson natural language understanding', 'use xgboost, scikit-learn & ibm watson machine learning apis']), "Oops! Your the get_article_names function doesn't work quite how we expect."
assert set(get_article_names(['1320.0', '232.0', '844.0'])) == set(['housing (2015): united states demographic measures','self-service data preparation with ibm data refinery','use the cloudant-spark connector in python notebook']), "Oops! Your the get_article_names function doesn't work quite how we expect."
assert set(get_user_articles(20)[0]) == set(['1320.0', '232.0', '844.0'])
assert set(get_user_articles(20)[1]) == set(['housing (2015): united states demographic measures', 'self-service data preparation with ibm data refinery','use the cloudant-spark connector in python notebook'])
assert set(get_user_articles(2)[0]) == set(['1024.0', '1176.0', '1305.0', '1314.0', '1422.0', '1427.0'])
assert set(get_user_articles(2)[1]) == set(['using deep learning to reconstruct high-resolution audio', 'build a python app on the streaming analytics service', 'gosales transactions for naive bayes model', 'healthcare python streaming application demo', 'use r dataframes & ibm watson natural language understanding', 'use xgboost, scikit-learn & ibm watson machine learning apis'])
print("If this is all you see, you passed all of our tests! Nice job!")
# `4.` Now we are going to improve the consistency of the **user_user_recs** function from above.
#
# * Instead of arbitrarily choosing when we obtain users who are all the same closeness to a given user - choose the users that have the most total article interactions before choosing those with fewer article interactions.
#
#
# * Instead of arbitrarily choosing articles from the user where the number of recommended articles starts below m and ends exceeding m, choose the articles with the most total interactions before choosing those with fewer total interactions. This ranking should be what would be obtained from the **top_articles** function you wrote earlier.
# +
def get_top_sorted_users(user_id, df=df, user_item=user_item):
    '''
    INPUT:
    user_id - (int)
    df - (pandas dataframe) df as defined at the top of the notebook
    user_item - (pandas dataframe) matrix of users by articles:
            1's when a user has interacted with an article, 0 otherwise
    OUTPUT:
    neighbors_df - (pandas dataframe) a dataframe with:
                    neighbor_id - is a neighbor user_id
                    similarity - measure of the similarity of each user to the provided user_id
                    num_interactions - the number of articles viewed by the user
    Other Details - sort the neighbors_df by the similarity and then by number of interactions where
                    highest of each is higher in the dataframe
    '''
    # similarity of every user to the provided user (dot product of binary rows)
    similarity = user_item.dot(user_item.loc[user_id])
    # total interactions per user, taken from the raw interaction log
    num_interactions = df.groupby('user_id')['article_id'].count()
    neighbors_df = pd.DataFrame({'neighbor_id': similarity.index,
                                 'similarity': similarity.values,
                                 'num_interactions': num_interactions.reindex(similarity.index).values})
    # drop the user itself, then sort by similarity and activity
    neighbors_df = neighbors_df[neighbors_df['neighbor_id'] != user_id]
    neighbors_df = neighbors_df.sort_values(['similarity', 'num_interactions'],
                                            ascending=False).reset_index(drop=True)
    return neighbors_df # Return the dataframe specified in the doc_string


def user_user_recs_part2(user_id, m=10):
    '''
    INPUT:
    user_id - (int) a user id
    m - (int) the number of recommendations you want for the user
    OUTPUT:
    recs - (list) a list of recommendations for the user by article id
    rec_names - (list) a list of recommendations for the user by article title
    Description:
    Loops through the users based on closeness to the input user_id
    For each user - finds articles the user hasn't seen before and provides them as recs
    Does this until m recommendations are found
    Notes:
    * Choose the users that have the most total article interactions
    before choosing those with fewer article interactions.
    * Choose the articles with the most total interactions
    before choosing those with fewer total interactions.
    '''
    seen_ids, _ = get_user_articles(user_id)
    neighbors_df = get_top_sorted_users(user_id)
    # global popularity ranking of articles, as string ids to match get_user_articles
    popularity = [str(i) for i in df['article_id'].value_counts().index]
    recs = []
    for neighbor in neighbors_df['neighbor_id']:
        neighbor_ids, _ = get_user_articles(neighbor)
        candidates = [aid for aid in neighbor_ids
                      if aid not in seen_ids and aid not in recs]
        # most-interacted-with candidates first
        candidates.sort(key=popularity.index)
        recs.extend(candidates)
        if len(recs) >= m:
            break
    recs = recs[:m]
    rec_names = get_article_names(recs)
    return recs, rec_names
# -
# Quick spot check - don't change this code - just use it to test your functions
rec_ids, rec_names = user_user_recs_part2(20, 10)
print("The top 10 recommendations for user 20 are the following article ids:")
print(rec_ids)
print()
print("The top 10 recommendations for user 20 are the following article names:")
print(rec_names)
# `5.` Use your functions from above to correctly fill in the solutions to the dictionary below. Then test your dictionary against the solution. Provide the code you need to answer each following the comments below.
# +
### Tests with a dictionary of results
user1_most_sim = get_top_sorted_users(1)['neighbor_id'].iloc[0] # Find the user that is most similar to user 1
user131_10th_sim = get_top_sorted_users(131)['neighbor_id'].iloc[9] # Find the 10th most similar user to user 131
# +
## Dictionary Test Here
sol_5_dict = {
'The user that is most similar to user 1.': user1_most_sim,
'The user that is the 10th most similar to user 131': user131_10th_sim,
}
t.sol_5_test(sol_5_dict)
# -
# `6.` If we were given a new user, which of the above functions would you be able to use to make recommendations? Explain. Can you think of a better way we might make recommendations? Use the cell below to explain a better method for new users.
# **Response:** A brand-new user has no interaction history, so the user-user collaborative filtering functions above cannot compute a similarity for them (the cold start problem). The only functions we can use as-is are the rank-based `get_top_articles`/`get_top_article_ids`, which recommend the most popular articles to everyone. A better method for new users would be to collect some information up front (e.g. topics of interest) and use content-based recommendations, or to blend popularity with content similarity until the user builds up enough interactions for collaborative filtering.
# `7.` Using your existing functions, provide the top 10 recommended articles you would provide for the a new user below. You can test your function against our thoughts to make sure we are all on the same page with how we might make a recommendation.
# +
new_user = '0.0'
# What would your recommendations be for this new user '0.0'? As a new user, they have no observed articles.
# Provide a list of the top 10 article ids you would give to
# Since the user has no history, fall back on the most popular articles (rank-based recs)
new_user_recs = [str(i) for i in df['article_id'].value_counts().index[:10]]
# +
assert set(new_user_recs) == set(['1314.0','1429.0','1293.0','1427.0','1162.0','1364.0','1304.0','1170.0','1431.0','1330.0']), "Oops! It makes sense that in this case we would want to recommend the most popular articles, because we don't know anything about these users."
print("That's right! Nice job!")
# -
# ### <a class="anchor" id="Content-Recs">Part IV: Content Based Recommendations (EXTRA - NOT REQUIRED)</a>
#
# Another method we might use to make recommendations is to perform a ranking of the highest ranked articles associated with some term. You might consider content to be the **doc_body**, **doc_description**, or **doc_full_name**. There isn't one way to create a content based recommendation, especially considering that each of these columns hold content related information.
#
# `1.` Use the function body below to create a content based recommender. Since there isn't one right answer for this recommendation tactic, no test functions are provided. Feel free to change the function inputs if you decide you want to try a method that requires more input values. The input values are currently set with one idea in mind that you may use to make content based recommendations. One additional idea is that you might want to choose the most popular recommendations that meet your 'content criteria', but again, there is a lot of flexibility in how you might make these recommendations.
#
# ### This part is NOT REQUIRED to pass this project. However, you may choose to take this on as an extra way to show off your skills.
def make_content_recs():
'''
INPUT:
OUTPUT:
'''
# `2.` Now that you have put together your content-based recommendation system, use the cell below to write a summary explaining how your content based recommender works. Do you see any possible improvements that could be made to your function? Is there anything novel about your content based recommender?
#
# ### This part is NOT REQUIRED to pass this project. However, you may choose to take this on as an extra way to show off your skills.
# **Write an explanation of your content based recommendation system here.**
# `3.` Use your content-recommendation system to make recommendations for the below scenarios based on the comments. Again no tests are provided here, because there isn't one right answer that could be used to find these content based recommendations.
#
# ### This part is NOT REQUIRED to pass this project. However, you may choose to take this on as an extra way to show off your skills.
# +
# make recommendations for a brand new user
# make a recommendations for a user who only has interacted with article id '1427.0'
# -
# ### <a class="anchor" id="Matrix-Fact">Part V: Matrix Factorization</a>
#
# In this part of the notebook, you will use matrix factorization to make article recommendations to the users on the IBM Watson Studio platform.
#
# `1.` You should have already created a **user_item** matrix above in **question 1** of **Part III** above. This first question here will just require that you run the cells to get things set up for the rest of **Part V** of the notebook.
# Load the matrix here
user_item_matrix = pd.read_pickle('user_item_matrix.p')
# quick look at the matrix
user_item_matrix.head()
# `2.` In this situation, you can use Singular Value Decomposition from [numpy](https://docs.scipy.org/doc/numpy-1.14.0/reference/generated/numpy.linalg.svd.html) on the user-item matrix. Use the cell to perform SVD, and explain why this is different than in the lesson.
# +
# Perform SVD on the User-Item Matrix Here
u, s, vt = np.linalg.svd(user_item_matrix) # use the built-in to get the three matrices
# -
# **Response:** In the lesson the user-movie matrix contained missing ratings, so traditional SVD could not be applied and FunkSVD was used instead. Here every cell of the user-item matrix is a 1 or a 0 (interaction or no interaction), with no missing values, so numpy's built-in `np.linalg.svd` can be applied directly.
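# To see why a complete (no-NaN) matrix makes plain SVD applicable, here is a tiny sketch with a made-up 0/1 matrix: the full-rank factors reconstruct it exactly.

```python
import numpy as np

# hypothetical complete 0/1 interaction matrix (3 users x 3 articles)
m = np.array([[1., 0., 1.],
              [0., 1., 1.],
              [1., 1., 0.]])

u, s, vt = np.linalg.svd(m)
recon = u @ np.diag(s) @ vt  # full-rank reconstruction

print(np.allclose(recon, m))  # True
```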
# `3.` Now for the tricky part, how do we choose the number of latent features to use? Running the below cell, you can see that as the number of latent features increases, we obtain a lower error rate on making predictions for the 1 and 0 values in the user-item matrix. Run the cell below to get an idea of how the accuracy improves as we increase the number of latent features.
# +
num_latent_feats = np.arange(10,700+10,20)
sum_errs = []
for k in num_latent_feats:
# restructure with k latent features
s_new, u_new, vt_new = np.diag(s[:k]), u[:, :k], vt[:k, :]
# take dot product
user_item_est = np.around(np.dot(np.dot(u_new, s_new), vt_new))
# compute error for each prediction to actual value
diffs = np.subtract(user_item_matrix, user_item_est)
# total errors and keep track of them
err = np.sum(np.sum(np.abs(diffs)))
sum_errs.append(err)
plt.plot(num_latent_feats, 1 - np.array(sum_errs)/df.shape[0]);
plt.xlabel('Number of Latent Features');
plt.ylabel('Accuracy');
plt.title('Accuracy vs. Number of Latent Features');
# -
# `4.` From the above, we can't really be sure how many features to use, because simply having a better way to predict the 1's and 0's of the matrix doesn't exactly give us an indication of if we are able to make good recommendations. Instead, we might split our dataset into a training and test set of data, as shown in the cell below.
#
# Use the code from question 3 to understand the impact on accuracy of the training and test sets of data with different numbers of latent features. Using the split below:
#
# * How many users can we make predictions for in the test set?
# * How many users are we not able to make predictions for because of the cold start problem?
# * How many articles can we make predictions for in the test set?
# * How many articles are we not able to make predictions for because of the cold start problem?
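# The cold-start counts above come down to simple set arithmetic between the train and test splits. A toy sketch with made-up ids (not the real split):

```python
# hypothetical user ids in each split
train_users = {1, 2, 3, 4}
test_users = {3, 4, 5, 6}

predictable = test_users & train_users   # users we can make predictions for
cold_start = test_users - train_users    # users with no training history

print(len(predictable), len(cold_start))  # 2 2
```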
# +
df_train = df.head(40000)
df_test = df.tail(5993)
def create_test_and_train_user_item(df_train, df_test):
'''
INPUT:
df_train - training dataframe
df_test - test dataframe
OUTPUT:
user_item_train - a user-item matrix of the training dataframe
(unique users for each row and unique articles for each column)
user_item_test - a user-item matrix of the testing dataframe
(unique users for each row and unique articles for each column)
test_idx - all of the test user ids
test_arts - all of the test article ids
'''
    # reuse the matrix builder from Part III on each split
    user_item_train = create_user_item_matrix(df_train)
    user_item_test = create_user_item_matrix(df_test)
    test_idx = user_item_test.index.values
    test_arts = user_item_test.columns.values

    return user_item_train, user_item_test, test_idx, test_arts
user_item_train, user_item_test, test_idx, test_arts = create_test_and_train_user_item(df_train, df_test)
# +
# Replace the values in the dictionary below
a = 662
b = 574
c = 20
d = 0
sol_4_dict = {
    'How many users can we make predictions for in the test set?': c,  # 20 test users also appear in training
    'How many users in the test set are we not able to make predictions for because of the cold start problem?': a,  # the other 662 test users are new
    'How many movies can we make predictions for in the test set?': b,  # all 574 test articles appear in training
    'How many movies in the test set are we not able to make predictions for because of the cold start problem?': d  # no test article is new
}
t.sol_4_test(sol_4_dict)
# -
# `5.` Now use the **user_item_train** dataset from above to find U, S, and V transpose using SVD. Then find the subset of rows in the **user_item_test** dataset that you can predict using this matrix decomposition with different numbers of latent features to see how many features makes sense to keep based on the accuracy on the test data. This will require combining what was done in questions `2` - `4`.
#
# Use the cells below to explore how well SVD works towards making predictions for recommendations on the test data.
# fit SVD on the user_item_train matrix
u_train, s_train, vt_train = np.linalg.svd(user_item_train) # fit svd similar to above then use the cells below
# +
# Use these cells to see how well you can use the training
# decomposition to predict on test data
# -
# `6.` Use the cell below to comment on the results you found in the previous question. Given the circumstances of your results, discuss what you might do to determine if the recommendations you make with any of the above recommendation systems are an improvement to how users currently find articles?
# **Response:** Offline accuracy is hard to interpret here: only a small number of test users overlap with training, the matrix is heavily imbalanced toward 0s, and predicting the 1/0 interactions well does not guarantee good recommendations. To determine whether any of these recommenders actually improves how users find articles, we could run an online A/B experiment: serve the new recommendations to a randomly assigned group, keep the current experience for a control group, and compare engagement metrics such as click-through rate and articles read per session.
# <a id='conclusions'></a>
# ### Extras
# Using your workbook, you could now save your recommendations for each user, develop a class to make new predictions and update your results, and make a flask app to deploy your results. These tasks are beyond what is required for this project. However, from what you learned in the lessons, you are certainly capable of taking these tasks on to improve upon your work here!
#
#
# ## Conclusion
#
# > Congratulations! You have reached the end of the Recommendations with IBM project!
#
# > **Tip**: Once you are satisfied with your work here, check over your report to make sure that it is satisfies all the areas of the [rubric](https://review.udacity.com/#!/rubrics/2322/view). You should also probably remove all of the "Tips" like this one so that the presentation is as polished as possible.
#
#
# ## Directions to Submit
#
# > Before you submit your project, you need to create a .html or .pdf version of this notebook in the workspace here. To do that, run the code cell below. If it worked correctly, you should get a return code of 0, and you should see the generated .html file in the workspace directory (click on the orange Jupyter icon in the upper left).
#
# > Alternatively, you can download this report as .html via the **File** > **Download as** submenu, and then manually upload it into the workspace directory by clicking on the orange Jupyter icon in the upper left, then using the Upload button.
#
# > Once you've done this, you can submit your project by clicking on the "Submit Project" button in the lower right here. This will create and submit a zip file with this .ipynb doc and the .html or .pdf version you created. Congratulations!
from subprocess import call
call(['python', '-m', 'nbconvert', 'Recommendations_with_IBM.ipynb'])
|
Recommendations_with_IBM.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="hLEyhfMqmnrt" colab_type="text"
# ## Colab JAX TPU Setup
# + id="5CTEVmyKmkfp" colab_type="code" colab={}
# Make sure the Colab Runtime is set to Accelerator: TPU.
import requests
import os
if 'TPU_DRIVER_MODE' not in globals():
url = 'http://' + os.environ['COLAB_TPU_ADDR'].split(':')[0] + ':8475/requestversion/tpu_driver_nightly'
resp = requests.post(url)
TPU_DRIVER_MODE = 1
# The following is required to use TPU Driver as JAX's backend.
from jax.config import config
config.FLAGS.jax_xla_backend = "tpu_driver"
config.FLAGS.jax_backend_target = "grpc://" + os.environ['COLAB_TPU_ADDR']
print(config.FLAGS.jax_backend_target)
# + [markdown] id="ebUMqK9mGIDm" colab_type="text"
# ## The basics: interactive NumPy on GPU and TPU
#
# ---
#
#
# + id="27TqNtiQF97X" colab_type="code" colab={}
import jax
import jax.numpy as np
from jax import random
key = random.PRNGKey(0)
# + id="cRWoxSCNGU4o" colab_type="code" colab={}
key, subkey = random.split(key)
x = random.normal(key, (5000, 5000))
print(x.shape)
print(x.dtype)
# + id="diPllsvgGfSA" colab_type="code" colab={}
y = np.dot(x, x)
print(y[0, 0])
# + id="8-psauxnGiRk" colab_type="code" colab={}
x
# + id="-2FMQ8UeoTJ8" colab_type="code" colab={}
import matplotlib.pyplot as plt
plt.plot(x[0])
# + id="DRnwCKFuGk8P" colab_type="code" colab={}
np.dot(x, x.T)
# + id="z4VX5PkMHJIu" colab_type="code" colab={}
print(np.dot(x, 2 * x)[[0, 2, 1, 0], ..., None, ::-1])
# + id="ORZ9Odu85BCJ" colab_type="code" colab={}
import numpy as onp
x_cpu = onp.array(x)
# %timeit -n 1 -r 1 onp.dot(x_cpu, x_cpu)
# + id="5BKh0eeAGvO5" colab_type="code" colab={}
# %timeit -n 5 -r 5 np.dot(x, x).block_until_ready()
# + [markdown] id="fm4Q2zpFHUAu" colab_type="text"
# ## Automatic differentiation
# + id="MCIQbyUYHWn1" colab_type="code" colab={}
from jax import grad
# + id="kfqZpKYsHo4j" colab_type="code" colab={}
def f(x):
if x > 0:
return 2 * x ** 3
else:
return 3 * x
# + id="K_26_odPHqLJ" colab_type="code" colab={}
key = random.PRNGKey(0)
x = random.normal(key, ())
print(grad(f)(x))
print(grad(f)(-x))
# + id="q5V3A6loHrhS" colab_type="code" colab={}
print(grad(grad(f))(-x))
print(grad(grad(grad(f)))(-x))
# + id="ba4WY4ArHv8I" colab_type="code" colab={}
def predict(params, inputs):
for W, b in params:
outputs = np.dot(inputs, W) + b
inputs = np.tanh(outputs)
return outputs
def loss(params, batch):
inputs, targets = batch
predictions = predict(params, inputs)
return np.sum((predictions - targets)**2)
def init_layer(key, n_in, n_out):
k1, k2 = random.split(key)
W = random.normal(k1, (n_in, n_out))
b = random.normal(k2, (n_out,))
return W, b
layer_sizes = [5, 2, 3]
key = random.PRNGKey(0)
key, *keys = random.split(key, len(layer_sizes))
params = list(map(init_layer, keys, layer_sizes[:-1], layer_sizes[1:]))
key, *keys = random.split(key, 3)
inputs = random.normal(keys[0], (8, 5))
targets = random.normal(keys[1], (8, 3))
batch = (inputs, targets)
# + id="LiTBibJdHz4K" colab_type="code" colab={}
print(loss(params, batch))
# + id="a3KFpwH3H4Cl" colab_type="code" colab={}
step_size = 1e-2
for _ in range(20):
grads = grad(loss)(params, batch)
params = [(W - step_size * dW, b - step_size * db)
for (W, b), (dW, db) in zip(params, grads)]
# + id="YLltDr0GH7LX" colab_type="code" colab={}
print(loss(params, batch))
# + [markdown] id="bmxAPFC0I8b0" colab_type="text"
# Other JAX autodiff highlights:
#
# * Forward- and reverse-mode, totally composable
# * Fast Jacobians and Hessians
# * Complex number support (holomorphic and non-holomorphic)
# * Jacobian pre-accumulation for elementwise operations (like `gelu`)
#
#
# For much more, see the [JAX Autodiff Cookbook (Part 1)](https://jax.readthedocs.io/en/latest/notebooks/autodiff_cookbook.html).
# + [markdown] id="TRkxaVLJKNre" colab_type="text"
# ## End-to-end compilation with XLA using `jit`
# + id="bKo4rX9-KSW7" colab_type="code" colab={}
from jax import jit
# + id="94iIgZSfKWh8" colab_type="code" colab={}
key = random.PRNGKey(0)
x = random.normal(key, (5000, 5000))
# + id="Ybuz8Ag9KXMd" colab_type="code" colab={}
def f(x):
y = x
for _ in range(10):
y = y - 0.1 * y + 3.
return y[:100, :100]
f(x)
# + id="Y9dx5ifSKaGJ" colab_type="code" colab={}
g = jit(f)
g(x)
# + id="UtsS67BvKYkC" colab_type="code" colab={}
# %timeit f(x).block_until_ready()
# + id="-vfcaSo9KbvR" colab_type="code" colab={}
# %timeit g(x).block_until_ready()
# + id="E3BQF1_AKeLn" colab_type="code" colab={}
grad(jit(grad(jit(grad(np.tanh)))))(1.0)
# + [markdown] id="AvXl1WDPKjmV" colab_type="text"
# ### Constraints that come with using `jit`
# + id="mCtwRF18KnsE" colab_type="code" colab={}
def f(x):
if x > 0:
return 2 * x ** 2
else:
return 3 * x
g = jit(f)
# + id="_82tY-ZSKqv4" colab_type="code" colab={}
f(2)
# + id="TjSAFc-iKrcB" colab_type="code" colab={}
g(2)
# + id="RhizP9pjKsug" colab_type="code" colab={}
def f(x, n):
i = 0
while i < n:
x = x * x
i += 1
return x
g = jit(f)
# + id="Wn6haTmUK-Q8" colab_type="code" colab={}
f(np.array([1., 2., 3.]), 5)
# + id="HwBy1I04K-81" colab_type="code" colab={}
g(np.array([1., 2., 3.]), 5)
# + id="XmaTryZaK_3M" colab_type="code" colab={}
g = jit(f, static_argnums=(1,))
# + id="HcWjxVktV4fa" colab_type="code" colab={}
g(np.array([1., 2., 3.]), 5)
# + [markdown] id="0M_-pJe7LOcO" colab_type="text"
# ## Vectorization with `vmap`
# + id="8XIot_ndLRH1" colab_type="code" colab={}
from jax import vmap
# + id="tRvCZn2wBkXP" colab_type="code" colab={}
print(vmap(lambda x: x**2)(np.arange(8)))
# + id="icfsXizI_rkD" colab_type="code" colab={}
from jax import make_jaxpr
make_jaxpr(np.dot)(np.ones(8), np.ones(8))
# + id="uQm4cvAbA6M3" colab_type="code" colab={}
make_jaxpr(vmap(np.dot))(np.ones((10, 8)), np.ones((10, 8)))
# + id="NeiFfCHEBLsU" colab_type="code" colab={}
make_jaxpr(vmap(vmap(np.dot)))(np.ones((10, 10, 8)), np.ones((10, 10, 8)))
# + id="csX71fkSCZrp" colab_type="code" colab={}
perex_grads = vmap(grad(loss), in_axes=(None, 0))
make_jaxpr(perex_grads)(params, batch)
# + [markdown] id="Tmf1NT2Wqv5p" colab_type="text"
# ## Parallel accelerators with `pmap`
# + id="t6RRAFn1CEln" colab_type="code" colab={}
jax.devices()
# + id="tEK1I6Duqunw" colab_type="code" colab={}
from jax import pmap
# + id="S-iCNfeGqzkY" colab_type="code" colab={}
y = pmap(lambda x: x ** 2)(np.arange(8))
print(y)
# + id="xgutf5JPP3wi" colab_type="code" colab={}
y
# + id="xxShG3Tdq4Gj" colab_type="code" colab={}
z = y / 2
print(z)
# + id="uvDL2_bCq7kq" colab_type="code" colab={}
import matplotlib.pyplot as plt
plt.plot(y)
# + id="Xg76CmLYq_Q6" colab_type="code" colab={}
keys = random.split(random.PRNGKey(0), 8)
mats = pmap(lambda key: random.normal(key, (5000, 5000)))(keys)
result = pmap(np.dot)(mats, mats)
print(pmap(np.mean)(result))
# + id="jbw_hRx7rDzX" colab_type="code" colab={}
# %timeit -n 5 -r 5 pmap(np.dot)(mats, mats).block_until_ready()
# + [markdown] id="xf5N9ZRirJhL" colab_type="text"
# ### Collective communication operations
# + id="9i1PfxUvrThh" colab_type="code" colab={}
from functools import partial
from jax.lax import psum
@partial(pmap, axis_name='i')
def normalize(x):
return x / psum(x, 'i')
print(normalize(np.arange(8.)))
# + id="lnvwnlOFrVa-" colab_type="code" colab={}
@partial(pmap, axis_name='rows')
@partial(pmap, axis_name='cols')
def f(x):
row_sum = psum(x, 'rows')
col_sum = psum(x, 'cols')
total_sum = psum(x, ('rows', 'cols'))
return row_sum, col_sum, total_sum
x = np.arange(8.).reshape((4, 2))
a, b, c = f(x)
print("input:\n", x)
print("row sum:\n", a)
print("col sum:\n", b)
print("total sum:\n", c)
# + [markdown] id="f-FBsWeo1AXE" colab_type="text"
# <img src="https://raw.githubusercontent.com/google/jax/master/cloud_tpu_colabs/images/nested_pmap.png" width="70%"/>
# + [markdown] id="jC-KIMQ1q-lK" colab_type="text"
# For more, see the [`pmap` cookbook](https://colab.sandbox.google.com/github/google/jax/blob/master/cloud_tpu_colabs/Pmap_Cookbook.ipynb).
# + [markdown] id="-A-oVDo6rdWA" colab_type="text"
# ### Compose pmap with other transforms!
# + id="WC_dMIN2rgTZ" colab_type="code" colab={}
@pmap
def f(x):
y = np.sin(x)
@pmap
def g(z):
return np.cos(z) * np.tan(y.sum()) * np.tanh(x).sum()
return grad(lambda w: np.sum(g(w)))(x)
f(x)
# + id="apuACjPWrixV" colab_type="code" colab={}
grad(lambda x: np.sum(f(x)))(x)
# + [markdown] id="WD9xtROsYX4i" colab_type="text"
# ### Compose everything
# + id="h65c9AQCWAyn" colab_type="code" colab={}
from jax import jvp, vjp # forward and reverse-mode
curry = lambda f: partial(partial, f)
@curry
def jacfwd(fun, x):
pushfwd = partial(jvp, fun, (x,)) # jvp!
  std_basis = np.eye(onp.size(x)).reshape((-1,) + np.shape(x))
y, jac_flat = vmap(pushfwd, out_axes=(None, -1))(std_basis) # vmap!
return jac_flat.reshape(np.shape(y) + np.shape(x))
@curry
def jacrev(fun, x):
y, pullback = vjp(fun, x) # vjp!
std_basis = np.eye(onp.size(y)).reshape((-1,) + np.shape(y))
jac_flat, = vmap(pullback)(std_basis) # vmap!
return jac_flat.reshape(np.shape(y) + np.shape(x))
def hessian(fun):
return jit(jacfwd(jacrev(fun))) # jit!
# + id="G9qDX84RWhW7" colab_type="code" colab={}
input_hess = hessian(lambda inputs: loss(params, (inputs, targets)))
per_example_hess = pmap(input_hess) # pmap!
per_example_hess(inputs)
# + id="u3ggM_WYZ8QC" colab_type="code" colab={}
# Source: cloud_tpu_colabs/JAX_demo.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .jl
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Julia 0.5.0
# language: julia
# name: julia-0.5
# ---
# 12 steps to Navier-Stokes
# =====
# ***
# The final two steps in this interactive module teaching beginning CFD with Julia will both solve the Navier-Stokes equations in two dimensions, but with different boundary conditions.
#
# The momentum equation in vector form for a velocity field $\vec{v}$ is:
#
# $$\frac{\partial \vec{v}}{\partial t}+(\vec{v}\cdot\nabla)\vec{v}=-\frac{1}{\rho}\nabla p + \nu \nabla^2\vec{v}$$
#
# This represents three scalar equations, one for each velocity component $(u,v,w)$. But we will solve it in two dimensions, so there will be two scalar equations.
#
# Remember the continuity equation? This is where the Poisson equation for pressure comes in!
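# To see where it comes from, take the divergence of the momentum equation. With the incompressibility constraint $\nabla\cdot\vec{v}=0$, the time-derivative and viscous terms drop out and (in 2D) we are left with an equation for the pressure alone:
#
# $$\nabla^2 p = -\rho\,\nabla\cdot\left[(\vec{v}\cdot\nabla)\vec{v}\right] = -\rho\left(\frac{\partial u}{\partial x}\frac{\partial u}{\partial x}+2\frac{\partial u}{\partial y}\frac{\partial v}{\partial x}+\frac{\partial v}{\partial y}\frac{\partial v}{\partial y} \right)$$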
# Step 11: Cavity Flow with Navier-Stokes
# ----
# ***
# Here is the system of differential equations: two equations for the velocity components $u,v$ and one equation for pressure:
# $$\frac{\partial u}{\partial t}+u\frac{\partial u}{\partial x}+v\frac{\partial u}{\partial y} = -\frac{1}{\rho}\frac{\partial p}{\partial x}+\nu \left(\frac{\partial^2 u}{\partial x^2}+\frac{\partial^2 u}{\partial y^2} \right) $$
#
#
# $$\frac{\partial v}{\partial t}+u\frac{\partial v}{\partial x}+v\frac{\partial v}{\partial y} = -\frac{1}{\rho}\frac{\partial p}{\partial y}+\nu\left(\frac{\partial^2 v}{\partial x^2}+\frac{\partial^2 v}{\partial y^2}\right) $$
#
# $$\frac{\partial^2 p}{\partial x^2}+\frac{\partial^2 p}{\partial y^2} = -\rho\left(\frac{\partial u}{\partial x}\frac{\partial u}{\partial x}+2\frac{\partial u}{\partial y}\frac{\partial v}{\partial x}+\frac{\partial v}{\partial y}\frac{\partial v}{\partial y} \right)$$
# From the previous steps, we already know how to discretize all these terms. Only the last equation is a little unfamiliar. But with a little patience, it will not be hard!
# ### Discretized equations
# First, let's discretize the $u$-momentum equation, as follows:
# \begin{eqnarray}
# &&\frac{u_{i,j}^{n+1}-u_{i,j}^{n}}{\Delta t}+u_{i,j}^{n}\frac{u_{i,j}^{n}-u_{i-1,j}^{n}}{\Delta x}+v_{i,j}^{n}\frac{u_{i,j}^{n}-u_{i,j-1}^{n}}{\Delta y}\\\
# &&=-\frac{1}{\rho}\frac{p_{i+1,j}^{n}-p_{i-1,j}^{n}}{2\Delta x}+\nu\left(\frac{u_{i+1,j}^{n}-2u_{i,j}^{n}+u_{i-1,j}^{n}}{\Delta x^2}+\frac{u_{i,j+1}^{n}-2u_{i,j}^{n}+u_{i,j-1}^{n}}{\Delta y^2}\right)\end{eqnarray}
# Similarly for the $v$-momentum equation:
# \begin{eqnarray}
# &&\frac{v_{i,j}^{n+1}-v_{i,j}^{n}}{\Delta t}+u_{i,j}^{n}\frac{v_{i,j}^{n}-v_{i-1,j}^{n}}{\Delta x}+v_{i,j}^{n}\frac{v_{i,j}^{n}-v_{i,j-1}^{n}}{\Delta y}\\\
# &&=-\frac{1}{\rho}\frac{p_{i,j+1}^{n}-p_{i,j-1}^{n}}{2\Delta y}
# +\nu\left(\frac{v_{i+1,j}^{n}-2v_{i,j}^{n}+v_{i-1,j}^{n}}{\Delta x^2}+\frac{v_{i,j+1}^{n}-2v_{i,j}^{n}+v_{i,j-1}^{n}}{\Delta y^2}\right)\end{eqnarray}
# Finally, the discretized pressure-Poisson equation can be written thus:
# $$ \frac{p_{i+1,j}^{n}-2p_{i,j}^{n}+p_{i-1,j}^{n}}{\Delta x^2}+\frac{p_{i,j+1}^{n}-2p_{i,j}^{n}+p_{i,j-1}^{n}}{\Delta y^2}
# =\rho\left[\frac{1}{\Delta t}\left(\frac{u_{i+1,j}-u_{i-1,j}}{2\Delta x}+\frac{v_{i,j+1}-v_{i,j-1}}{2\Delta y}\right)\right.$$
#
# $$-\frac{u_{i+1,j}-u_{i-1,j}}{2\Delta x}\frac{u_{i+1,j}-u_{i-1,j}}{2\Delta x}
# - \ 2\frac{u_{i,j+1}-u_{i,j-1}}{2\Delta y}\frac{v_{i+1,j}-v_{i-1,j}}{2\Delta x}
# -\left.\frac{v_{i,j+1}-v_{i,j-1}}{2\Delta y}\frac{v_{i,j+1}-v_{i,j-1}}{2\Delta y}\right]
# $$
# You should write these equations down in your own notes, by hand, following each term mentally as you write it.
#
# As before, let's rearrange the equations in the way that the iterations need to proceed in the code. First, the momentum equations for the velocity at the next time step.
#
# The momentum equation in the $u$ direction:
#
# $$
# u_{i,j}^{n+1} = u_{i,j}^{n} - u_{i,j}^{n}\frac{\Delta t}{\Delta x}(u_{i,j}^{n}-u_{i-1,j}^{n})
# - v_{i,j}^{n}\frac{\Delta t}{\Delta y}(u_{i,j}^{n}-u_{i,j-1}^{n})$$
# $$-\frac{\Delta t}{\rho 2\Delta x}(p_{i+1,j}^{n}-p_{i-1,j}^{n})
# +\nu\left(\frac{\Delta t}{\Delta x^2}(u_{i+1,j}^{n}-2u_{i,j}^{n}+u_{i-1,j}^{n})\right.
# +\left.\frac{\Delta t}{\Delta y^2}(u_{i,j+1}^{n}-2u_{i,j}^{n}+u_{i,j-1}^{n})\right)
# $$
#
# The momentum equation in the $v$ direction:
#
# $$v_{i,j}^{n+1} = v_{i,j}^{n}-u_{i,j}^{n}\frac{\Delta t}{\Delta x}(v_{i,j}^{n}-v_{i-1,j}^{n})
# - v_{i,j}^{n}\frac{\Delta t}{\Delta y}(v_{i,j}^{n}-v_{i,j-1}^{n})$$
# $$
# -\frac{\Delta t}{\rho 2\Delta y}(p_{i,j+1}^{n}-p_{i,j-1}^{n})
# +\nu\left(\frac{\Delta t}{\Delta x^2}(v_{i+1,j}^{n}-2v_{i,j}^{n}+v_{i-1,j}^{n})\right.
# +\left.\frac{\Delta t}{\Delta y^2}(v_{i,j+1}^{n}-2v_{i,j}^{n}+v_{i,j-1}^{n})\right)$$
# Almost there! Now, we rearrange the pressure-Poisson equation:
#
# $$
# p_{i,j}^{n}=\frac{(p_{i+1,j}^{n}+p_{i-1,j}^{n})\Delta y^2+(p_{i,j+1}^{n}+p_{i,j-1}^{n})\Delta x^2}{2(\Delta x^2+\Delta y^2)}-\frac{\rho\Delta x^2\Delta y^2}{2(\Delta x^2+\Delta y^2)} \times$$
#
# $$\left[\frac{1}{\Delta t}\left(\frac{u_{i+1,j}-u_{i-1,j}}{2\Delta x}+\frac{v_{i,j+1}-v_{i,j-1}}{2\Delta y}\right)-\frac{u_{i+1,j}-u_{i-1,j}}{2\Delta x}\frac{u_{i+1,j}-u_{i-1,j}}{2\Delta x}\right. $$
#
# $$ -2\frac{u_{i,j+1}-u_{i,j-1}}{2\Delta y}\frac{v_{i+1,j}-v_{i-1,j}}{2\Delta x}-\left.\frac{v_{i,j+1}-v_{i,j-1}}{2\Delta y}\frac{v_{i,j+1}-v_{i,j-1}}{2\Delta y}\right]$$
# The initial condition is $u, v, p = 0$ everywhere, and the boundary conditions are:
#
# $u=1$ at $y=2$ (the "lid");
#
# $u, v=0$ on the other boundaries;
#
# $\frac{\partial p}{\partial y}=0$ at $y=0$;
#
# $p=0$ at $y=2$
#
# $\frac{\partial p}{\partial x}=0$ at $x=0,2$
#
# Implementing Cavity Flow
# ----
#
# +
using PyPlot
nx = 41;
ny = 41;
nt = 500;
nit=50;
c = 1;
dx = 2/(nx-1);
dy = 2/(ny-1);
x = linspace(0,2,nx);
y = linspace(0,2,ny);
rho = 1;
nu = .1;
dt = .001;
u = zeros((ny, nx));
v = zeros((ny, nx));
p = zeros((ny, nx));
b = zeros((ny, nx));
# -
# The pressure Poisson equation that's written above can be hard to write out without typos. The function `buildUpB` below represents the contents of the square brackets, so that the entirety of the PPE is slightly more manageable.
function buildUpB(b, rho, dt, u, v, dx, dy)
b[2:end-1,2:end-1]=rho*(1/dt*((u[2:end-1,3:end]-u[2:end-1,1:end-2])/(2*dx)+(v[3:end,2:end-1]-v[1:end-2,2:end-1])/(2*dy))-
((u[2:end-1,3:end]-u[2:end-1,1:end-2])/(2*dx)).^2-
2*((u[3:end,2:end-1]-u[1:end-2,2:end-1])/(2*dy)*(v[2:end-1,3:end]-v[2:end-1,1:end-2])/(2*dx))-
((v[3:end,2:end-1]-v[1:end-2,2:end-1])/(2*dy)).^2);
return b;
end
# The function `presPoisson` is also defined to help segregate the different rounds of calculations. Note the presence of the pseudo-time variable `nit`. This sub-iteration in the Poisson calculation helps ensure a divergence-free field.
function presPoisson(p, dx, dy, b)
pn = zeros(size(p));
pn = copy(p);
for q in 1:nit
pn = copy(p);
p[2:end-1,2:end-1] = ((pn[2:end-1,3:end]+pn[2:end-1,1:end-2])*dy^2+(pn[3:end,2:end-1]+pn[1:end-2,2:end-1])*dx^2)/
(2*(dx^2+dy^2)) -
dx^2*dy^2/(2*(dx^2+dy^2))*b[2:end-1,2:end-1];
        p[:,end] = p[:,end-1]; ##dp/dx = 0 at x = 2
p[1,:] = p[2,:]; ##dp/dy = 0 at y = 0
p[:,1]=p[:,2]; ##dp/dx = 0 at x = 0
p[end,:]=0 ; ##p = 0 at y = 2
end
return p;
end
# Finally, the rest of the cavity flow equations are wrapped inside the function `cavityFlow`, allowing us to easily plot the results of the cavity flow solver for different lengths of time.
function cavityFlow(nt, u, v, dt, dx, dy, p, rho, nu)
un = zeros(size(u));
vn = zeros(size(v));
b = zeros((ny, nx))
for n in 1:nt
un = copy(u)
vn = copy(v)
b = buildUpB(b, rho, dt, u, v, dx, dy);
p = presPoisson(p, dx, dy, b);
u[2:end-1,2:end-1] = un[2:end-1,2:end-1]-
un[2:end-1,2:end-1].*dt/dx.*(un[2:end-1,2:end-1]-un[2:end-1,1:end-2])-
vn[2:end-1,2:end-1].*dt/dy.*(un[2:end-1,2:end-1]-un[1:end-2,2:end-1])-
dt/(2*rho*dx)*(p[2:end-1,3:end]-p[2:end-1,1:end-2])+
nu*(dt/dx^2*(un[2:end-1,3:end]-2*un[2:end-1,2:end-1]+un[2:end-1,1:end-2])+
dt/dy^2*(un[3:end,2:end-1]-2*un[2:end-1,2:end-1]+un[1:end-2,2:end-1]))
v[2:end-1,2:end-1] = vn[2:end-1,2:end-1]-
un[2:end-1,2:end-1].*dt/dx.*(vn[2:end-1,2:end-1]-vn[2:end-1,1:end-2])-
vn[2:end-1,2:end-1].*dt/dy.*(vn[2:end-1,2:end-1]-vn[1:end-2,2:end-1])-
dt/(2*rho*dy)*(p[3:end,2:end-1]-p[1:end-2,2:end-1])+
nu*(dt/dx^2*(vn[2:end-1,3:end]-2*vn[2:end-1,2:end-1]+vn[2:end-1,1:end-2])+
(dt/dy^2*(vn[3:end,2:end-1]-2*vn[2:end-1,2:end-1]+vn[1:end-2,2:end-1])))
u[1,:] = 0
u[:,1] = 0
u[:,end] = 0
u[end,:] = 1 #set velocity on cavity lid equal to 1
v[1,:] = 0
v[end,:]=0
v[:,1] = 0
v[:,end] = 0
end
return u, v, p
end
# Let's start with `nt = 100` and see what the solver gives us:
u = zeros((ny, nx));
v = zeros((ny, nx));
p = zeros((ny, nx));
b = zeros((ny, nx));
nt = 100;
u, v, p = cavityFlow(nt, u, v, dt, dx, dy, p, rho, nu);
fig = figure(figsize=(11,7), dpi=100)
contourf(x,y,p,alpha=0.5) ###plotting the pressure field as a contour
colorbar()
contour(x,y,p) ###plotting the pressure field outlines
xgrid = repmat(x', ny, 1)
ygrid = repmat(y, 1, nx)
quiver(xgrid[1:2:end,1:2:end],ygrid[1:2:end,1:2:end],u[1:2:end,1:2:end],v[1:2:end,1:2:end]) ##plotting velocity
xlabel("X")
ylabel("Y");
u[1,1:5]
# You can see that two distinct pressure zones are forming and that the spiral pattern expected from lid-driven cavity flow is beginning to form. Experiment with different values of `nt` to see how long the system takes to stabilize.
u = zeros((ny, nx));
v = zeros((ny, nx));
p = zeros((ny, nx));
b = zeros((ny, nx));
nt = 700;
u, v, p = cavityFlow(nt, u, v, dt, dx, dy, p, rho, nu);
fig = figure(figsize=(11,7), dpi=100);
contourf(x,y,p,alpha=0.5);
colorbar();
contour(x,y,p);
quiver(xgrid[1:2:end,1:2:end],ygrid[1:2:end,1:2:end],u[1:2:end,1:2:end],v[1:2:end,1:2:end]); ##plotting velocity
xlabel("X");
ylabel("Y");
# Source: lessons/15_Step_11-Copy1.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Getting started with the practicals
#
# ***These notebooks are best viewed in Jupyter. GitHub might not display all content of the notebook properly.***
#
# ## Goal of the practical exercises
#
# The exercises have two goals:
#
# 1. Give you the opportunity to obtain 'hands-on' experience in implementing, training and evaluating machine learning models in Python. This experience will also help you better understand the theory covered during the lectures.
#
# 2. Occasionally demonstrate some 'exam-style' questions that you can use as a reference when studying for the exam. Note however that the example questions are (as the name suggests) only examples and do not constitute a complete and sufficient list of 'things that you have to learn for the exam'. You can recognize example questions as (parts of) exercises by <font color="#770a0a">this font color</font>.
#
# For each set of exercises (one Python notebook such as this one $==$ one set of exercises) you have to submit deliverables that will then be graded and constitute 25% of the final grade. Thus, the work that you do during the practicals has a double contribution towards the final grade: as a 25% direct contribution and as preparation for the exam that defines the other 75% of the grade.
#
# ## Deliverables
#
# For each set of exercises, you have to submit:
# 1. Python functions and/or classes (`.py` files) that implement basic functionalities (e.g. a $k$-NN classifier) and
# 2. A *single* Python notebook that contains the experiments, visualization and answers to the questions and math problems. *Do not submit your answers as Word or PDF documents (they will not be graded)*. The submitted code and notebook should run without errors and be able to fully reproduce the reported results.
#
# We recommend that you clone the provided notebooks (such as this one) and write your code in them. The following rubric will be used when grading the practical work:
#
# Component | Insufficient | Satisfactory | Excellent
# --- | --- | --- | ---
# **Code** | Missing or incomplete code structure, runs with errors, lacks documentation | Self-contained, does not result in errors, contains some documentation, can be easily used to reproduce the reported results | User-friendly, well-structured (good separation of general functionality and experiments, i.e. between `.py` files and the Python notebook), detailed documentation, optimized for speed, use of a version control system (such as GitHub)
# **Answers to questions** | Incorrect, does not convey understanding of the material, appears to be copied from another source | Correct, conveys good understanding of the material, description in own words | Correct, conveys excellent level of understanding, makes connections between topics
#
# ## A word on notation
#
# When we refer to Python variables, we will use a monospace font. For example, `X` is a Python variable that contains the data matrix. When we refer to mathematical variables, we will use the de-facto standard notation: $a$ or $\lambda$ is a scalar variable, $\boldsymbol{\mathrm{w}}$ is a vector and $\boldsymbol{\mathrm{X}}$ is a matrix (e.g. a data matrix from the example above). You should use the same notation when writing your answers and solutions.
#
# # Two simple machine learning models
#
# ## Preliminaries
#
# Throughout the practical curriculum of this course, we will use the Python programming language and its ecosystem of libraries for scientific computing (such as `numpy`, `scipy`, `matplotlib`, `scikit-learn` etc). The practicals for the deep learning part of the course will use the `keras` deep learning framework. If you are not sufficiently familiar with this programming language and/or the listed libraries and packages, you are strongly advised to go over the corresponding tutorials from the ['Essential skills'](https://github.com/tueimage/essential-skills) module (the `scikit-learn` library is not covered by the tutorial; however, extensive documentation is available [here](https://scikit-learn.org/stable/documentation.html)).
#
# In this first set of exercises, we will use two toy datasets that ship together with `scikit-learn`.
#
# The first dataset is named `diabetes` and contains 442 patients described with 10 features: age, sex, body mass index, average blood pressure, and six blood serum measurements. The target variable is a continuous quantitative measure of the disease (diabetes) progression one year after the baseline measurements were recorded. More information is available [here](https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/datasets/descr/diabetes.rst) and [here](https://www4.stat.ncsu.edu/~boos/var.select/diabetes.html).
#
# The second dataset is named `breast_cancer` and is a copy of the UCI ML Breast Cancer Wisconsin (Diagnostic) dataset (more information is available [here](https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/datasets/descr/breast_cancer.rst) and [here](https://archive.ics.uci.edu/ml/datasets/Breast+Cancer+Wisconsin+(Diagnostic))). The dataset consists of 569 instances represented with 30 features that are computed from images of a fine needle aspirate of a breast mass. The features describe characteristics of the cell nuclei present in the image. Each instance is associated with a binary target variable ('malignant' or 'benign').
#
# You can load the two datasets in the following way:
# +
import numpy as np
from sklearn.datasets import load_diabetes, load_breast_cancer
diabetes = load_diabetes()
breast_cancer = load_breast_cancer()
# -
# In the majority of the exercises in this course, we will use higher-level libraries and packages such as `scikit-learn` and `keras` to implement, train and evaluate machine learning models. However, the goal of this first set of exercises is to illustrate basic mathematical tools and machine learning concepts. Because of this, we will impose the restriction of only using basic `numpy` functionality. Furthermore, you should restrict the use of for-loops as much as possible (e.g. use a vector-to-matrix product instead of a for-loop where appropriate).
#
# If `X` is a 2D data matrix, we will use the convention that the rows of the matrix contain the samples (or instances) and the columns contain the features (inputs to the model). That means that a data matrix with a shape `(122, 13)` represents a dataset with 122 samples, each represented with 13 features. Similarly, if `Y` is a 2D matrix containing the targets, the rows correspond to the samples and the columns to the different targets (outputs of the model). Thus, if the shape of `Y` is `(122, 3)` that means that there are 122 samples and each sample has 3 targets (note that in the majority of the examples we will only have a single target and thus the number of columns of `Y` will be 1).
#
# You can obtain the data and target matrices from the two datasets in the following way:
# +
X = diabetes.data
Y = diabetes.target[:, np.newaxis]
print(X.shape)
print(Y.shape)
# -
# If you want to only use a subset of the available features, you can obtain a reduced data matrix in the following way:
# +
# use only the fourth feature
X = diabetes.data[:, np.newaxis, 3]
print(X.shape)
# use the fourth and tenth features
X = diabetes.data[:, (3,9)]
print(X.shape)
# -
# ***Question***: Why do we need to use the `np.newaxis` expression in the examples above?
#
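# A quick way to see the effect of `np.newaxis` is to compare array shapes with and without it (a generic NumPy illustration, independent of the datasets above):

```python
import numpy as np

a = np.arange(5)          # shape (5,): a 1D array
b = a[:, np.newaxis]      # shape (5, 1): a column vector (2D)

print(a.shape)  # (5,)
print(b.shape)  # (5, 1)
```

# Many matrix operations (transposition, matrix products, row/column broadcasting) require an explicit 2D shape, which is why the 1D target vector is promoted to an `(n, 1)` matrix.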
# Note that in all your experiments in the exercises, you should use independent training and testing sets. You can split the dataset into training and testing subsets in the following way:
# use the fourth feature
# use the first 300 training samples for training, and the rest for testing
X_train = diabetes.data[:300, np.newaxis, 3]
y_train = diabetes.target[:300, np.newaxis]
X_test = diabetes.data[300:, np.newaxis, 3]
y_test = diabetes.target[300:, np.newaxis]
print(X_train.shape)
print(y_train.shape)
print(X_test.shape)
print(y_test.shape)
# ## Exercises
#
# ### Linear regression
#
# Implement training and evaluation of a linear regression model on the diabetes dataset using only matrix multiplication, inversion and transpose operations. Report the mean squared error of the model.
#
# To get you started we have implemented the first part of this exercise (fitting of the model) as an example.
# +
# add subfolder that contains all the function implementations
# to the system path so we can import them
import sys
sys.path.append('code/')
# the actual implementation is in linear_regression.py,
# here we will just use it to fit a model
from linear_regression import *
from sklearn.datasets import load_diabetes, load_breast_cancer
diabetes = load_diabetes()
breast_cancer = load_breast_cancer()
# load the dataset
# same as before, but now we use all features
X_train = diabetes.data[:300, :]
y_train = diabetes.target[:300, np.newaxis]
X_test = diabetes.data[300:, :]
y_test = diabetes.target[300:, np.newaxis]
beta = lsq(X_train, y_train)
X_test2 = np.c_[np.ones(len(X_test)), X_test]  # add column of ones for the intercept
Y_pred = np.dot(X_test2, beta)
Err = y_test - Y_pred
MSE = np.dot(Err.T, Err)/len(X_test)
print("MSE of test data =", MSE)
# -
# ### Weighted linear regression
#
# Assume that in the dataset that you use to train a linear regression model, there are identical versions of some samples. This problem can be reformulated to a weighted linear regression problem where the matrices $\boldsymbol{\mathrm{X}}$ and $\boldsymbol{\mathrm{Y}}$ (or the vector $\boldsymbol{\mathrm{y}}$ if there is only a single target/output variable) contain only the unique data samples, and a vector $\boldsymbol{\mathrm{d}}$ is introduced that gives more weight to samples that appear multiple times in the original dataset (for example, the sample that appears 3 times has a corresponding weight of 3).
#
# <p><font color='#770a0a'>Derive the expression for the least-squares solution of a weighted linear regression model (note that in addition to the matrices $\boldsymbol{\mathrm{X}}$ and $\boldsymbol{\mathrm{Y}}$, the solution should include a vector of weights $\boldsymbol{\mathrm{d}}$).</font></p>
# \begin{align}
# WRSS(\mathbf{w}) & = \sum_{i=1}^N d_i(y_i - \mathbf{x}_i\mathbf{w})^2 \\
# \end{align}
#
# In matrix notation, with $\mathbf{D} = \mathrm{diag}(\mathbf{d})$:
# \begin{align}
# WRSS(\mathbf{w}) & = (\mathbf{y} - \mathbf{Xw})^T\mathbf{D}(\mathbf{y}-\mathbf{Xw}) \\
# \end{align}
#
# Differentiating with respect to $\mathbf{w}$ gives
# \begin{align}
# -2\mathbf{X}^T\mathbf{D}(\mathbf{y}-\mathbf{Xw}) \\
# \end{align}
#
# At the minimum the derivative must be 0, hence:
# $$
# \begin{align}
# \ -2\mathbf{X}^T\mathbf{D}(\mathbf{y}-\mathbf{Xw}) = 0 \\
# \ \mathbf{X}^T\mathbf{D}\mathbf{y}-\mathbf{X}^T\mathbf{D}\mathbf{Xw} = 0 \\
# \ \mathbf{X}^T\mathbf{D}\mathbf{y} = \mathbf{X}^T\mathbf{D}\mathbf{Xw} \\
# \ \mathbf{w} = (\mathbf{X}^T\mathbf{D}\mathbf{X})^{-1}\mathbf{X}^T\mathbf{D}\mathbf{y}
# \end{align}
# $$
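#
# The closed-form solution $\mathbf{w} = (\mathbf{X}^T\mathbf{D}\mathbf{X})^{-1}\mathbf{X}^T\mathbf{D}\mathbf{y}$, with $\mathbf{D}=\mathrm{diag}(\mathbf{d})$, can be checked numerically: fitting the unique samples with integer weights must give the same coefficients as an ordinary least-squares fit on the dataset with the duplicates restored. A minimal sketch on toy data (all names here are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 2))            # unique samples
y = rng.normal(size=(5, 1))
d = np.array([1, 3, 2, 1, 2])          # how often each sample occurs

# Weighted least squares on the unique samples.
D = np.diag(d)
w_weighted = np.linalg.inv(X.T @ D @ X) @ X.T @ D @ y

# Ordinary least squares on the expanded dataset (duplicates restored).
X_rep = np.repeat(X, d, axis=0)
y_rep = np.repeat(y, d, axis=0)
w_ols = np.linalg.inv(X_rep.T @ X_rep) @ X_rep.T @ y_rep

print(np.allclose(w_weighted, w_ols))  # True
```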
#
# ### $k$-NN classification
#
# Implement a $k$-Nearest neighbors classifier from scratch in Python using only basic matrix operations with `numpy` and `scipy`. Train and evaluate the classifier on the breast cancer dataset, using all features. Show the performance of the classifier for different values of $k$ (plot the results in a graph). Note that for optimal results, you should normalize the features (e.g. to the $[0, 1]$ range or to have a zero mean and unit standard deviation).
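# As a reference for the structure of such a classifier, here is a minimal self-contained sketch of the distance/vote logic (not the graded implementation; it uses a loop over test points for clarity, whereas your version should be vectorized):

```python
import numpy as np

def knn_predict(X_train, y_train, X_test, k):
    """Predict a class label for every row of X_test by majority vote
    among its k nearest training samples (Euclidean distance)."""
    preds = []
    for x in X_test:
        dists = np.sqrt(np.sum((X_train - x) ** 2, axis=1))
        nearest = np.argsort(dists)[:k]
        labels = y_train[nearest]
        values, counts = np.unique(labels, return_counts=True)  # majority vote
        preds.append(values[np.argmax(counts)])
    return np.array(preds)

# Tiny example: two well-separated clusters.
X_train = np.array([[0.0, 0.0], [0.1, 0.0], [1.0, 1.0], [0.9, 1.0]])
y_train = np.array([0, 0, 1, 1])
print(knn_predict(X_train, y_train, np.array([[0.05, 0.05], [0.95, 0.9]]), k=3))  # → [0 1]
```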
# +
import numpy as np
from sklearn.datasets import load_diabetes, load_breast_cancer
import operator
import matplotlib.pyplot as plt
import sys
sys.path.append('code/')
from def_of_week_1_v2 import *
diabetes = load_diabetes()
breast_cancer = load_breast_cancer()
x = breast_cancer.data[:]
# normalize the data
max_ft = np.max(x, axis=0)  # vector with the maximum of every feature
x_norm = x*(1/max_ft)  # normalize each feature to the [0, 1] range
X_train = x_norm[:350]
y_train = breast_cancer.target[:350, np.newaxis]
X_test = x_norm[350:]
y_test = breast_cancer.target[350:, np.newaxis]
dict_of_acc ={}
# the value of k should always be an odd number (to avoid ties in the vote)
for k in range(1, X_train.shape[0]+1, 2):
predicted_labels = kNN_test(X_train, X_test, y_train, y_test, k)
#calculates the error of every k
#knn_error = error_squared(y_test,predicted_labels)
    predictionmeasure = predicted_labels == y_test.T  # which predictions were correct
    right = sum(sum(predictionmeasure))  # number of correct predictions
    accuracy = right/len(y_test)
dict_of_acc[k]=accuracy
# Plot the accuracy (y-axis) against the value of k (x-axis)
plt.plot(list(dict_of_acc.keys()), list(dict_of_acc.values()))
plt.xlabel('value of k')
plt.ylabel('accuracy')
plt.title('Accuracy dependence on k')
plt.show()
# Print the value of k with the best accuracy
whichK = sorted(dict_of_acc.items(), key=operator.itemgetter(1), reverse=True)
bestKvalue = whichK[0][0]
print("the value of k with the best accuracy =", bestKvalue)
# -
# It is important to choose an odd value of k, so that the majority vote
# cannot end in a tie. As you can see in the figure, when the value of k
# becomes too high the accuracy drops: the decision boundary becomes too
# smooth and no longer captures the structure of the data. This is called
# underfitting (very small values of k have the opposite problem, overfitting).
# The best value of k in this example is 31.
# ### $k$-NN regression
#
# Modify the $k$-NN implementation to do regression instead of classification. Compare the performance of the linear regression model and the $k$-NN regression model on the diabetes dataset for different values of $k$.
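# The change from classification to regression is small: instead of a majority vote over class labels, average the targets of the $k$ nearest neighbours. A minimal illustrative sketch (again looped for clarity, not the graded implementation):

```python
import numpy as np

def knn_regress(X_train, y_train, X_test, k):
    """Predict a continuous target for every row of X_test as the mean
    target of its k nearest training samples (Euclidean distance)."""
    preds = []
    for x in X_test:
        dists = np.sqrt(np.sum((X_train - x) ** 2, axis=1))
        nearest = np.argsort(dists)[:k]
        preds.append(np.mean(y_train[nearest]))
    return np.array(preds)

X_train = np.array([[0.0], [1.0], [2.0], [3.0]])
y_train = np.array([0.0, 10.0, 20.0, 30.0])
print(knn_regress(X_train, y_train, np.array([[0.1]]), k=2))  # → [5.]
```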
# +
import sys
sys.path.append('code/')
from knn_classifier1 import *
import numpy as np
from sklearn.datasets import load_diabetes
import operator
from scipy.special import expit
import matplotlib.pyplot as plt
diabetes = load_diabetes()
X_train_d = diabetes.data[:300, np.newaxis, 3]
y_train_d = diabetes.target[:300, np.newaxis]
X_test_d = diabetes.data[300:, np.newaxis, 3]
y_test_d = diabetes.target[300:, np.newaxis]
dict_of_reg_errors ={}
all_errors = 0
for k in range(1, len(X_train_d)+1, 2): # the value of k should always be an odd number
prediction = kNN_test_reg(X_train_d, X_test_d, y_train_d, y_test_d, k)
knn_error = error_squared(y_test_d,prediction) #calculates the error of every k
all_errors = all_errors + knn_error
    # store the error in a dictionary with k as key and the error as value
    dict_of_reg_errors[k] = knn_error
# Plot all the errors (y-axis) against the values of k (x-axis)
plt.figure()
plt.plot(list(dict_of_reg_errors.keys()), list(dict_of_reg_errors.values()))
plt.xlabel('value of k')
plt.ylabel('error')
plt.title('predicting which k is optimal for the lowest error')
# Print the value of k with the lowest error
whichK = sorted(dict_of_reg_errors.items(), key=operator.itemgetter(1))
bestKvalue = whichK[0][0]
print ("the value of k with the lowest error =", bestKvalue)
#calculate the mean squared error for the best value of k
MSE = dict_of_reg_errors[bestKvalue]/len(y_test_d)
print("the value of the mean squared error =", MSE)
# -
# The MSE of the $k$-NN regression model is of the same order of magnitude as the MSE calculated with linear regression. The data doesn't show a linear relationship, thus the sums of squared errors are very high.
# ### Class-conditional probability
#
# Compute and visualize the class-conditional probability (conditional probability where the class label is the conditional variable, i.e. $P(X = x \mid Y = y)$) for all features in the breast cancer dataset. Assume a Gaussian distribution.
#
# <p><font color='#770a0a'>Based on visual analysis of the plots, which individual feature can best discriminate between the two classes? Motivate your answer.</font></p>
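# Under the Gaussian assumption, estimating $P(X = x \mid Y = y)$ for one feature amounts to computing the per-class mean and standard deviation and plugging them into the normal probability density. A minimal sketch in plain NumPy (toy feature values, purely illustrative):

```python
import numpy as np

def gaussian_pdf(x, mu, sigma):
    """Normal probability density with mean mu and standard deviation sigma."""
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

# Toy feature values for two classes.
feature_class0 = np.array([1.0, 1.2, 0.8, 1.1])
feature_class1 = np.array([2.9, 3.1, 3.0, 3.2])

for name, vals in [("class 0", feature_class0), ("class 1", feature_class1)]:
    mu, sigma = vals.mean(), vals.std()
    # class-conditional density evaluated at x = 1.0
    print(name, gaussian_pdf(1.0, mu, sigma))
```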
#
#
# +
import scipy.stats as stats
from scipy.stats import norm
import matplotlib.pyplot as plt
#1 is benign (healthy), 0 is malignant
B_patients = np.where(breast_cancer.target==1) #which patients have benign breast cancer
B_data = breast_cancer.data[B_patients] #data of these patients
M_patients=np.where(breast_cancer.target==0)
M_data = breast_cancer.data[M_patients]
B_mean = np.average(B_data, axis=0) #calcalate the mean
B_std = np.std(B_data, axis=0) #calculate standard deviation
M_mean = np.average(M_data, axis=0)
M_std = np.std(M_data, axis=0)
#for ii in range(30):
ii=27
xB = np.linspace(B_mean[ii]-3*B_std[ii],B_mean[ii]+3*B_std[ii],100)
xM = np.linspace(M_mean[ii]-3*M_std[ii], M_mean[ii]+3*M_std[ii], 100)
plt.figure()
plt.plot(xB, stats.norm.pdf(xB,B_mean[ii],B_std[ii]),xM,stats.norm.pdf(xM,M_mean[ii],M_std[ii]))
plt.title(ii)
# -
# When the class-conditional probabilities for all 30 features were plotted, the 28th feature 'worst concave points' gave the most distinguishable classes.
# Source: practicals/week_1_new.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] colab_type="text" id="view-in-github"
# <a href="https://colab.research.google.com/github/naufalhisyam/TurbidityPrediction-thesis/blob/main/train_model_resnet.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + id="DpSYbrzpM90P"
import os
import datetime
import numpy as np
import pandas as pd
import tensorflow as tf
import matplotlib.pyplot as plt
# !pip install tensorflow-addons
import tensorflow_addons as tfa
from sklearn.model_selection import train_test_split
# %load_ext tensorboard
# + id="mKCAZxGrzo8m"
# !git clone https://github.com/naufalhisyam/TurbidityPrediction-thesis.git
os.chdir('/content/TurbidityPrediction-thesis')
# + [markdown] id="1CrL24L20ir7"
# **PREPARING DATASET**
# + id="zNwDTsrAzo8o"
images = pd.read_csv(r'./Datasets/0degree/0degInfo.csv') #load dataset info
train_df, test_df = train_test_split(images, train_size=0.9, shuffle=True, random_state=1) #Split into train and test set
# + id="FoQoBig2zo8o"
train_generator = tf.keras.preprocessing.image.ImageDataGenerator(
horizontal_flip=True,
validation_split=0.2
)
test_generator = tf.keras.preprocessing.image.ImageDataGenerator(
horizontal_flip=True
)
# + id="BA-9Gp_Tzo8p"
train_images = train_generator.flow_from_dataframe(
dataframe=train_df,
x_col='Filepath',
y_col='Turbidity',
target_size=(224, 224),
color_mode='rgb',
class_mode='raw',
batch_size=32,
shuffle=True,
seed=42,
subset='training'
)
val_images = train_generator.flow_from_dataframe(
dataframe=train_df,
x_col='Filepath',
y_col='Turbidity',
target_size=(224, 224),
color_mode='rgb',
class_mode='raw',
batch_size=32,
shuffle=True,
seed=42,
subset='validation'
)
test_images = test_generator.flow_from_dataframe(
dataframe=test_df,
x_col='Filepath',
y_col='Turbidity',
target_size=(224, 224),
color_mode='rgb',
class_mode='raw',
batch_size=32,
shuffle=False
)
# + [markdown] id="woSoOSOwzo8q"
# **CREATING THE MODEL**
# + [markdown] id="eyGUAtAJdnFQ"
# Model Architecture
# + id="rQToQ6MENCTt"
def get_model():
#Create model
inputs = tf.keras.Input(shape=(224, 224, 3))
x = tf.keras.layers.Conv2D(filters=16, kernel_size=(3, 3), activation='relu')(inputs)
x = tf.keras.layers.MaxPool2D()(x)
x = tf.keras.layers.Conv2D(filters=32, kernel_size=(3, 3), activation='relu')(x)
x = tf.keras.layers.MaxPool2D()(x)
x = tf.keras.layers.GlobalAveragePooling2D()(x)
x = tf.keras.layers.Dense(64, activation='relu')(x)
x = tf.keras.layers.Dense(64, activation='relu')(x)
prediction = tf.keras.layers.Dense(1, activation="linear")(x)
model = tf.keras.Model(inputs = inputs, outputs = prediction)
#Compile the model
opt = tf.keras.optimizers.Adam(learning_rate=1e-4)
model.compile(loss=tf.keras.losses.Huber(), optimizer=opt,
metrics=['mae','mse', tfa.metrics.RSquare(name="R2")])
return model
model = get_model()
tf.test.gpu_device_name()
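# The model above is compiled with the Huber loss, which is quadratic for small residuals and linear for large ones, making it less sensitive to outliers than plain MSE. A quick standalone sketch of that behavior (using `delta=1.0`, the Keras default threshold):

```python
import numpy as np

# Huber loss: quadratic for |r| <= delta, linear beyond
def huber(residual, delta=1.0):
    r = np.abs(residual)
    return np.where(r <= delta, 0.5 * r ** 2, delta * (r - 0.5 * delta))

# A small residual is penalized quadratically, a large one only linearly
print(huber(np.array([0.5, 3.0])))
```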
# + [markdown] id="Z1rINXUigLTK"
# Training Callbacks
# + id="Hp8dP9o0zo8s"
logdir = os.path.join("logs", datetime.datetime.now().strftime("%Y%m%d-%H%M%S"))
tensorboard_callback = tf.keras.callbacks.TensorBoard(logdir, histogram_freq=1)
earlyStop = tf.keras.callbacks.EarlyStopping(
monitor='val_loss', patience=5,
restore_best_weights=True)
class CustomModelCheckpointCallback(tf.keras.callbacks.ModelCheckpoint):
def __init__(self, ignore_first, *args, **kwargs):
super(CustomModelCheckpointCallback, self).__init__(*args, **kwargs)
self.ignore_first = ignore_first
def on_epoch_end(self, epoch, logs):
if epoch+1> self.ignore_first:
super().on_epoch_end(epoch, logs)
pathname = 'saved_model/proposedNet-epoch{epoch:02d}-loss{val_loss:.2f}'
checkpoint = CustomModelCheckpointCallback(
ignore_first=80, filepath = pathname,
monitor='val_loss', mode='min',
save_best_only=True, save_freq='epoch')
# + [markdown] id="tAogg36tzo8t"
# Training Model
# + id="zV2nFqfLf5yB"
num_epoch = 60
history = model.fit(train_images, validation_data=val_images,
epochs=num_epoch, batch_size=8, callbacks=[tensorboard_callback, checkpoint], verbose=1)
# -
# Save Model Manually
last_val_loss = history.history['val_loss'][-1]
name = f'proposedNet-epoch{num_epoch}-loss{last_val_loss}'
model.save(f"saved_model/{name}")
hist_df = pd.DataFrame(history.history)
hist_csv_file = f'saved_model/{name}/history.csv'
with open(hist_csv_file, mode='w') as f:
hist_df.to_csv(f)
#tf.keras.utils.plot_model(model, f"saved_model/{name}/densenet_model_arch.png", show_shapes=False)
# + [markdown] id="z7FnC4Q2gXGL"
# Model Evaluation
# -
# %tensorboard --logdir logs
# + id="tFCSQyKH46W4"
pred_turbid = np.squeeze(model.predict(test_images))
true_turbid = test_images.labels
residuals = true_turbid - pred_turbid
# + id="OIQTkGf8SEUC"
f, axs = plt.subplots(1, 2, figsize=(8,5), gridspec_kw={'width_ratios': [4, 1]})
axs[0].scatter(pred_turbid,residuals)
axs[0].set_title('Residual Plot of the ProposedNet Model', fontsize=13, fontweight='bold')
axs[0].set_ylabel('Residual')
axs[0].set_xlabel('Predicted Turbidity')
axs[0].axhline(0, color='black')
axs[0].grid()
axs[1].hist(residuals, bins=40, orientation="horizontal", density=True)
axs[1].axhline(0, color='black')
axs[1].set_xlabel('Distribution')
axs[1].yaxis.tick_right()
axs[1].grid(axis='y')
plt.subplots_adjust(wspace=0.05)
plt.savefig(f'saved_model/{name}/residualPlot_{name}.png', dpi=150)
plt.show()
# + id="OjeQEjhyUom-"
ms_error = history.history['loss']
val_ms_error = history.history['val_loss']
ma_error = history.history['mae']
val_ma_error = history.history['val_mae']
r2 = history.history['R2']
val_r2 = history.history['val_R2']
epochs = range(1, len(ms_error) + 1)
f, axs = plt.subplots(3, 1, figsize=(6,14))
axs[0].plot(epochs, ms_error, 'tab:orange', label='train_loss (mse)')
axs[0].plot(epochs, val_ms_error, 'tab:blue', label='val_loss (mse)')
axs[0].set_title('MSE During Training', fontsize=13, fontweight='bold')
axs[0].set_xlabel('Epoch')
axs[0].set_ylabel('MSE')
axs[0].legend(facecolor='white')
axs[0].grid()
axs[1].plot(epochs, ma_error, 'tab:orange', label='train_mae')
axs[1].plot(epochs, val_ma_error, 'tab:blue', label='val_mae')
axs[1].set_title('MAE During Training', fontsize=13, fontweight='bold')
axs[1].set_xlabel('Epoch')
axs[1].set_ylabel('MAE')
axs[1].legend(facecolor='white')
axs[1].grid()
axs[2].plot(epochs, r2, 'tab:orange', label='train_R2')
axs[2].plot(epochs, val_r2, 'tab:blue', label='val_R2')
axs[2].set_title('$R^2$ During Training', fontsize=13, fontweight='bold')
axs[2].set_xlabel('Epoch')
axs[2].set_ylabel('$R^2$')
axs[2].legend(facecolor='white')
axs[2].grid()
plt.tight_layout()
plt.savefig(f'saved_model/{name}/trainPlot_{name}.png', dpi=150)
plt.show()
# -
# Copy to Drive
# + id="dDRWzVhPZ9qj"
from google.colab import drive
drive.mount('/content/gdrive')
model_name = 'proposedNet_90deg_lr1e-4_decay1e-3_bs8_huber'
save_path = f"/content/gdrive/MyDrive/Hasil_Training/proposedNet/{model_name}"
if not os.path.exists(save_path):
os.makedirs(save_path)
oripath = "saved_model/."
# !cp -a "{oripath}" "{save_path}" # copies files to google drive
|
train_model_proposedNet.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Flame Speed with Sensitivity Analysis
# In this example we simulate a freely-propagating, adiabatic, 1-D flame and
# * Calculate its laminar burning velocity
# * Perform a sensitivity analysis of its kinetics
#
# The figure below illustrates the setup, in a flame-fixed co-ordinate system. The reactants enter with density $\rho_{u}$, temperature $T_{u}$ and speed $S_{u}$. The products exit the flame at speed $S_{b}$, density $\rho_{b}$ and temperature $T_{b}$.
# <img src="images/flameSpeed.png" alt="Freely Propagating Flame" style="width: 300px;"/>
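# The inlet and outlet states are linked by mass conservation across the steady 1-D flame, $\rho_{u} S_{u} = \rho_{b} S_{b}$. A small numeric sketch (the densities and flame speed below are illustrative values, not results of the simulation) shows how strongly the gas accelerates through the flame:

```python
# Mass conservation across a steady 1-D flame: rho_u * S_u = rho_b * S_b
rho_u = 1.13   # unburnt-gas density, kg/m^3 (illustrative)
rho_b = 0.15   # burnt-gas density, kg/m^3 (illustrative)
S_u = 0.38     # laminar flame speed, m/s (illustrative)

# Burnt-gas speed follows from continuity
S_b = rho_u * S_u / rho_b
print(f"S_b = {S_b:.2f} m/s")
```

# Because the burnt gas is much less dense than the reactants, the products leave the flame several times faster than the reactants enter.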
# ### Import Modules
# +
from __future__ import print_function
from __future__ import division
import cantera as ct
import numpy as np
print("Running Cantera Version: " + str(ct.__version__))
# +
# Import plotting modules and define plotting preference
# %matplotlib notebook
import matplotlib.pylab as plt
plt.rcParams['axes.labelsize'] = 14
plt.rcParams['xtick.labelsize'] = 12
plt.rcParams['ytick.labelsize'] = 12
plt.rcParams['legend.fontsize'] = 10
plt.rcParams['figure.figsize'] = (8,6)
# Get the best of both ggplot and seaborn
plt.style.use('ggplot')
plt.style.use('seaborn-deep')
plt.rcParams['figure.autolayout'] = True
# Import Pandas for DataFrames
import pandas as pd
# -
# ### Define the reactant conditions, gas mixture and kinetic mechanism associated with the gas
# +
#Inlet Temperature in Kelvin and Inlet Pressure in Pascals
#In this case we are setting the inlet T and P to room temperature conditions
To = 300
Po = 101325
#Define the gas mixture and kinetics
#In this case, we are choosing a GRI3.0 gas
gas = ct.Solution('gri30.cti')
# Create a stoichiometric CH4/Air premixed mixture
gas.set_equivalence_ratio(1.0, 'CH4', {'O2':1.0, 'N2':3.76})
gas.TP = To, Po
# -
# ### Define flame simulation conditions
# +
# Domain width in metres
width = 0.014
# Create the flame object
flame = ct.FreeFlame(gas, width=width)
# Define tolerances for the solver
flame.set_refine_criteria(ratio=3, slope=0.1, curve=0.1)
# Define logging level
loglevel = 1
# -
# ### Solve
# +
flame.solve(loglevel=loglevel, auto=True)
Su0 = flame.u[0]
print("Flame Speed is: {:.2f} cm/s".format(Su0*100))
# Note that the variable Su0 will also be used downstream in the sensitivity analysis
# -
# ### Plot figures
#
# Check and see if all has gone well. Plot temperature and species fractions to see
#
# #### Temperature Plot
# +
plt.figure()
plt.plot(flame.grid*100, flame.T, '-o')
plt.xlabel('Distance (cm)')
plt.ylabel('Temperature (K)');
# -
# #### Major species' plot
# To plot species, we first have to identify the index of the species in the array
# For this, cut & paste the following lines and run in a new cell to get the index
#
# for i, specie in enumerate(gas.species()):
# print(str(i) + '. ' + str(specie))
# +
# Extract concentration data
X_CH4 = flame.X[13]
X_CO2 = flame.X[15]
X_H2O = flame.X[5]
plt.figure()
plt.plot(flame.grid*100, X_CH4, '-o', label=r'$CH_{4}$')
plt.plot(flame.grid*100, X_CO2, '-s', label=r'$CO_{2}$')
plt.plot(flame.grid*100, X_H2O, '-<', label=r'$H_{2}O$')
plt.legend(loc=2)
plt.xlabel('Distance (cm)')
plt.ylabel('Mole Fractions');
# -
# ## Sensitivity Analysis
#
# See which reactions affect the flame speed the most
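# The sensitivity computed below is the normalized (logarithmic) derivative $s_{m} = \frac{\partial\:\ln{S_{u}}}{\partial\:\ln{k_{m}}}$, estimated with a forward finite difference. A toy sketch of the same estimator on a known power law (where the exact answer is 0.5):

```python
# Forward-difference estimate of s = d ln(S) / d ln(k),
# illustrated on S(k) = k**0.5, whose exact logarithmic sensitivity is 0.5
def S(k):
    return k ** 0.5

dk = 1e-2   # relative perturbation, same value used for the flame below
k0 = 2.0

S0 = S(k0)
s = (S(k0 * (1 + dk)) - S0) / (S0 * dk)
print(s)
```

# Keeping `dk` small matters: a large perturbation would no longer approximate the derivative, which is also why the grid is not refined during the perturbed flame solves below.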
# Create a dataframe to store sensitivity-analysis data
sensitivities = pd.DataFrame(data=[], index=gas.reaction_equations(range(gas.n_reactions)))
# ### Compute sensitivities
# +
# Set the value of the perturbation
dk = 1e-2
# Create an empty column to store the sensitivities data
sensitivities["baseCase"] = ""
# +
for m in range(gas.n_reactions):
gas.set_multiplier(1.0) # reset all multipliers
gas.set_multiplier(1+dk, m) # perturb reaction m
# Always force loglevel=0 for this
# Make sure the grid is not refined, otherwise it won't strictly
# be a small perturbation analysis
flame.solve(loglevel=0, refine_grid=False)
# The new flame speed
Su = flame.u[0]
sensitivities["baseCase"][m] = (Su-Su0)/(Su0*dk)
# This step is essential, otherwise the mechanism will have been altered
gas.set_multiplier(1.0)
# -
sensitivities.head()
# ### Make plots
# +
# Reaction mechanisms can contain thousands of elementary steps. Choose a threshold
# to see only the top few
threshold = 0.03
firstColumn = sensitivities.columns[0]
# For plotting, collect only those steps that are above the threshold
# Otherwise, the y-axis gets crowded and illegible
sensitivitiesSubset = sensitivities[sensitivities[firstColumn].abs() > threshold]
indicesMeetingThreshold = sensitivitiesSubset[firstColumn].abs().sort_values(ascending=False).index
sensitivitiesSubset.loc[indicesMeetingThreshold].plot.barh(title="Sensitivities for GRI 3.0",
legend=None)
plt.gca().invert_yaxis()
plt.rcParams.update({'axes.labelsize': 20})
plt.xlabel(r'Sensitivity: $\frac{\partial\:\ln{S_{u}}}{\partial\:\ln{k}}$');
# Uncomment the following to save the plot. A higher than usual resolution (dpi) helps
# plt.savefig('sensitivityPlot', dpi=300)
|
Examen/.ipynb_checkpoints/flame_speed_with_sensitivity_analysis-checkpoint.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
# Copyright 2021 NVIDIA Corporation. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
# -
# <img src="http://developer.download.nvidia.com/compute/machine-learning/frameworks/nvidia_logo.png" style="width: 90px; float: right;">
#
# # Scaling Criteo: Download and Convert
#
# ## Criteo 1TB Click Logs dataset
#
# The [Criteo 1TB Click Logs dataset](https://ailab.criteo.com/download-criteo-1tb-click-logs-dataset/) is the largest publicly available dataset for recommender systems. It contains ~1.3 TB of uncompressed click logs with over four billion samples spanning 24 days. Each record contains 40 columns: one label indicating a click or no click, 13 numerical features, and 26 categorical features. The dataset is provided by CriteoLabs. A subset of 7 days was used in this [Kaggle Competition](https://www.kaggle.com/c/criteo-display-ad-challenge/overview). We will use the dataset as an example of how to scale ETL, training and inference.
# First, we will download the data and extract it. We define the base directory for the dataset and the number of days. Criteo provides 24 days. We will use the last day as the validation dataset and the remaining days for training.
#
# **Each day has a size of ~15GB compressed (`.gz`) and ~XXXGB uncompressed. You can define a smaller subset of days, if you like. Each day takes ~20-30 min to download and extract.**
# We create the folder structure and download and extract the files. If a file already exists, it will be skipped.
# The original dataset is in text format. We will convert it into `.parquet` format. Parquet is a compressed, column-oriented file format that requires less disk space.
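# Column-oriented storage groups the values of one column together, which usually compresses better than interleaved row-wise text. A rough stdlib-only illustration of that idea (this is not Parquet itself, just a compression comparison on made-up columns):

```python
import zlib

# Two columns: an increasing id column and a highly repetitive categorical column
ids = [str(i) for i in range(1000)]
cats = ["click" if i % 2 else "no_click" for i in range(1000)]

# Row-wise layout interleaves the columns; column-wise keeps each column contiguous
row_wise = "\n".join(f"{i}\t{c}" for i, c in zip(ids, cats)).encode()
col_wise = ("\n".join(ids) + "\n" + "\n".join(cats)).encode()

print(len(zlib.compress(row_wise)), len(zlib.compress(col_wise)))
```

# The contiguous categorical column is almost pure repetition, so the column-wise layout compresses to a smaller size here.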
# ### Conversion Script for Criteo Dataset (CSV-to-Parquet)
#
# __Step 1__: Import libraries
# +
import os
import glob
import numpy as np
from dask.distributed import Client
from dask_cuda import LocalCUDACluster
import nvtabular as nvt
from nvtabular.utils import device_mem_size, get_rmm_size
# -
# __Step 2__: Specify options
#
# Specify the input and output paths, unless the `INPUT_DATA_DIR` and `OUTPUT_DATA_DIR` environment variables are already set. For multi-GPU systems, check that the `CUDA_VISIBLE_DEVICES` environment variable includes all desired device IDs.
INPUT_PATH = '/home/jupyter/criteo/criteo_orig'
OUTPUT_PATH = '/home/jupyter/criteo/criteo_2'
CUDA_VISIBLE_DEVICES = "0,1,2,3"
frac_size = 0.12
# __Step 3__: (Optionally) Start a Dask cluster
cluster = None # Connect to existing cluster if desired
if cluster is None:
cluster = LocalCUDACluster(
CUDA_VISIBLE_DEVICES=CUDA_VISIBLE_DEVICES,
rmm_pool_size=get_rmm_size(0.8 * device_mem_size()),
local_directory=os.path.join(OUTPUT_PATH, "dask-space"),
)
client = Client(cluster)
# __Step 4__: Convert original data to an NVTabular Dataset
# +
# Specify column names
cont_names = ["I" + str(x) for x in range(1, 14)]
cat_names = ["C" + str(x) for x in range(1, 27)]
cols = ["label"] + cont_names + cat_names
# Specify column dtypes. Note that "hex" means that
# the values will be hexadecimal strings that should
# be converted to int32
dtypes = {}
dtypes["label"] = np.int32
for x in cont_names:
dtypes[x] = np.int32
for x in cat_names:
dtypes[x] = "hex"
# Create an NVTabular Dataset from a CSV-file glob
file_list = glob.glob(os.path.join(INPUT_PATH, "day_*"))
dataset = nvt.Dataset(
file_list,
engine="csv",
names=cols,
part_mem_fraction=frac_size,
sep="\t",
dtypes=dtypes,
client=client,
)
# -
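# The `hex` dtype above tells NVTabular that the categorical values are hexadecimal strings to be parsed as integers. A minimal sketch of that parsing with plain Python (illustrative strings; NVTabular performs the conversion internally):

```python
# Hexadecimal category strings -> integers
raw = ["00ff", "a1b2"]          # illustrative hex-encoded category values
as_int = [int(v, 16) for v in raw]
print(as_int)
```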
# __Step 5__: Write Dataset to Parquet
help(dataset.to_parquet)
dataset.npartitions
dataset.to_parquet(
'/home/jupyter/criteo/criteo_2',
preserve_files=True
)
# You can delete the original criteo files as they require a lot of disk space.
|
cudf-sandbox/01-Download-Convert.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import psycopg2
import nltk
import unicodedata
import pandas as pd
import pickle
import re
import os
from nltk.corpus import wordnet
import time
from nltk.tokenize import RegexpTokenizer
wnl = nltk.WordNetLemmatizer()
nltk.download('averaged_perceptron_tagger')
from nltk.util import ngrams
from sklearn.feature_extraction.text import TfidfVectorizer
from yellowbrick.text import FreqDistVisualizer
from sklearn.feature_extraction.text import CountVectorizer
from collections import Counter
import seaborn as sns
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import MultinomialNB
# +
#Input your PostGres credentials to connect
dbname = ''
username = ''
host = ''
password = ''
conn = psycopg2.connect('dbname={} user={} host={} password={}'.format(dbname, username, host, password))
cur = conn.cursor()
# +
#Adjust the sample size by changing the number of instances you request following LIMIT
cur = conn.cursor()
cur.execute("""
SELECT review_id, user_id, business_id, review_text FROM review LIMIT 100
""")
cols = ['review_id', 'user_id', 'business_id', 'review_text']
review_sample = pd.DataFrame(cur.fetchall(), columns=cols)
# -
#make sure you got the sample
review_sample
#View specific instance
print(review_sample.loc[4, 'review_text'])
"""
#Function to create a customized stopword list that retains words with negative connotation and removes common, non-negative contractions
def _create_stop_words():
stops = nltk.corpus.stopwords.words('english')
neg_stops = ['no',
'nor',
'not',
'don',
"don't",
'ain',
'aren',
"aren't",
'couldn',
"couldn't",
'didn',
"didn't",
'doesn',
"doesn't",
'hadn',
"hadn't",
'hasn',
"hasn't",
'haven',
"haven't",
'isn',
"isn't",
'mightn',
"mightn't",
'mustn',
"mustn't",
'needn',
"needn't",
'shan',
"shan't",
'shouldn',
"shouldn't",
'wasn',
"wasn't",
'weren',
"weren't",
"won'",
"won't",
'wouldn',
"wouldn't",
'but',
"don'",
"ain't"]
common_nonneg_contr = ["could've",
"he'd",
"he'd've",
"he'll",
"he's",
"how'd",
"how'll",
"how's",
"i'd",
"i'd've",
"i'll",
"i'm",
"i've",
"it'd",
"it'd've",
"it'll",
"it's",
"let's",
"ma'am",
"might've",
"must've",
"o'clock",
"'ow's'at",
"she'd",
"she'd've",
"she'll",
"she's",
"should've",
"somebody'd",
"somebody'd've",
"somebody'll",
"somebody's",
"someone'd",
"someone'd've",
"someone'll",
"someone's",
"something'd",
"something'd've",
"something'll",
"something's",
"that'll",
"that's",
"there'd",
"there'd've",
"there're",
"there's",
"they'd",
"they'd've",
"they'll",
"they're",
"they've",
"'twas",
"we'd",
"we'd've",
"we'll",
"we're",
"we've",
"what'll",
"what're",
"what's",
"what've",
"when's",
"where'd",
"where's",
"where've",
"who'd",
"who'd've",
"who'll",
"who're",
"who's",
"who've",
"why'll",
"why're",
"why's",
"would've",
"y'all",
"y'all'll",
"y'all'd've",
"you'd",
"you'd've",
"you'll",
"you're",
"you've"]
letters = ['a', 'b', 'c', 'd', 'e', 'f', 'g', 'h', 'i', 'j', 'k', 'l', 'm', 'n', 'o', 'p', 'q', 'r', 's', 't',
'u', 'v', 'w', 'x', 'y', 'z']
ranks = ['st', 'nd', 'rd', 'th']
for x in neg_stops:
if x in stops:
stops.remove(x)
new_stops = stops + common_nonneg_contr + letters + ranks + [""] + ['us'] + ['']
stops = list(set(new_stops))
return stops
"""
"""
#The if len(word) > 0 check is still not sufficient, as it will leave in '' tokens
def get_wordnet_pos(word):
#Added this line because the function originally broke when trying to pass through '', which occurred when there was
#a token like '2's' that got reduced to "'s" and then '' before being passed through the lemmatizer
if len(word) > 0:
tag = nltk.pos_tag([word])[0][1][0].lower()
tag_dict = {"a": wordnet.ADJ,
"n": wordnet.NOUN,
"v": wordnet.VERB,
"r": wordnet.ADV}
return tag_dict.get(tag, wordnet.NOUN)
else:
return wordnet.NOUN
def _clean_review(text):
text = text.lower()
text = unicodedata.normalize('NFKD', text).encode('ascii', 'ignore').decode('utf8', 'ignore')
tokenizer = nltk.RegexpTokenizer('\w+\'?\w+')
filtered_tokens = [(re.sub(r"[^A-Za-z\s']", '', token)) for token in tokenizer.tokenize(text)]
stops = _create_stop_words()
tokens = [token for token in filtered_tokens if token not in stops]
for i, token in enumerate(tokens):
filtered_token = re.sub("'s", '', token)
tokens[i] = wnl.lemmatize(filtered_token, pos= get_wordnet_pos(filtered_token))
return tokens
"""
"""
def get_wordnet_pos2(word):
tag = nltk.pos_tag([word])[0][1][0].lower()
tag_dict = {"a": wordnet.ADJ,
"n": wordnet.NOUN,
"v": wordnet.VERB,
"r": wordnet.ADV}
return tag_dict.get(tag, wordnet.NOUN)
def _clean_review2(text):
text = text.lower()
text = unicodedata.normalize('NFKD', text).encode('ascii', 'ignore').decode('utf8', 'ignore')
tokenizer = nltk.RegexpTokenizer('\w+\'?\w+')
filtered_tokens = [(re.sub(r"[^A-Za-z\s']", '', token)) for token in tokenizer.tokenize(text)]
stops = _create_stop_words()
tokens = [token for token in filtered_tokens if token not in stops]
tokens = [re.sub("'s", '', token) for token in tokens if re.sub("'s", '', token) != '']
for i, token in enumerate(tokens):
tokens[i] = wnl.lemmatize(token, pos= get_wordnet_pos2(token))
tokens = [token for token in tokens if token != '']
return tokens
"""
def _process_review(text):
def _create_stop_words():
stops = nltk.corpus.stopwords.words('english')
neg_stops = ['no',
'nor',
'not',
'don',
"don't",
'ain',
'aren',
"aren't",
'couldn',
"couldn't",
'didn',
"didn't",
'doesn',
"doesn't",
'hadn',
"hadn't",
'hasn',
"hasn't",
'haven',
"haven't",
'isn',
"isn't",
'mightn',
"mightn't",
'mustn',
"mustn't",
'needn',
"needn't",
'shan',
"shan't",
'shouldn',
"shouldn't",
'wasn',
"wasn't",
'weren',
"weren't",
"won'",
"won't",
'wouldn',
"wouldn't",
'but',
"don'",
"ain't"]
common_nonneg_contr = ["could've",
"he'd",
"he'd've",
"he'll",
"he's",
"how'd",
"how'll",
"how's",
"i'd",
"i'd've",
"i'll",
"i'm",
"i've",
"it'd",
"it'd've",
"it'll",
"it's",
"let's",
"ma'am",
"might've",
"must've",
"o'clock",
"'ow's'at",
"she'd",
"she'd've",
"she'll",
"she's",
"should've",
"somebody'd",
"somebody'd've",
"somebody'll",
"somebody's",
"someone'd",
"someone'd've",
"someone'll",
"someone's",
"something'd",
"something'd've",
"something'll",
"something's",
"that'll",
"that's",
"there'd",
"there'd've",
"there're",
"there's",
"they'd",
"they'd've",
"they'll",
"they're",
"they've",
"'twas",
"we'd",
"we'd've",
"we'll",
"we're",
"we've",
"what'll",
"what're",
"what's",
"what've",
"when's",
"where'd",
"where's",
"where've",
"who'd",
"who'd've",
"who'll",
"who're",
"who's",
"who've",
"why'll",
"why're",
"why's",
"would've",
"y'all",
"y'all'll",
"y'all'd've",
"you'd",
"you'd've",
"you'll",
"you're",
"you've"]
letters = ['a', 'b', 'c', 'd', 'e', 'f', 'g', 'h', 'i', 'j', 'k', 'l', 'm', 'n', 'o', 'p', 'q', 'r', 's', 't',
'u', 'v', 'w', 'x', 'y', 'z']
ranks = ['st', 'nd', 'rd', 'th']
for x in neg_stops:
if x in stops:
stops.remove(x)
new_stops = stops + common_nonneg_contr + letters + ranks + [""] + ['us'] + [''] + ["'"]
stops = list(set(new_stops))
return stops
def get_wordnet_pos(word):
tag = nltk.pos_tag([word])[0][1][0].lower()
tag_dict = {"a": wordnet.ADJ,
"n": wordnet.NOUN,
"v": wordnet.VERB,
"r": wordnet.ADV}
return tag_dict.get(tag, wordnet.NOUN)
def _clean_review(text):
text = text.lower()
text = unicodedata.normalize('NFKD', text).encode('ascii', 'ignore').decode('utf8', 'ignore')
tokenizer = nltk.RegexpTokenizer('\w+\'?\w+')
filtered_tokens = [(re.sub(r"[^A-Za-z\s']", '', token)) for token in tokenizer.tokenize(text)]
stops = _create_stop_words()
tokens = [token for token in filtered_tokens if token not in stops]
tokens = [re.sub("'s", '', token) for token in tokens if re.sub("'s", '', token) != '']
for i, token in enumerate(tokens):
tokens[i] = wnl.lemmatize(token, pos= get_wordnet_pos(token))
tokens = [token for token in tokens if token not in stops]
return tokens
return _clean_review(text)
"""
#Code to apply _clean_review function on all review_text column and put tokens in new column titled 'review_tokens'
def apply_on_column(data):
data['review_tokens'] = data['review_text'].apply(lambda x: _clean_review(x))
return data
"""
#Code to apply _process_review function on all review_text column and put tokens in new column titled 'review_tokens'
def apply_on_column(data):
data['review_tokens'] = data['review_text'].apply(lambda x: _process_review(x))
return data
"""
#Get times for how long it takes to run apply_on_column function on review sample
start = time.time()
apply_on_column(review_sample)
end = time.time()
dur = end - start
# Verify that the function is working
print('Processed {} instances in {} minutes {} seconds.\n'.format(review_sample.shape[0], dur//60, dur%60))
"""
#Get times for how long it takes to run apply_on_column function on review sample
start = time.time()
apply_on_column(review_sample)
end = time.time()
dur = end - start
# Verify that the function is working
print('Processed {} instances in {} minutes {} seconds.\n'.format(review_sample.shape[0], dur//60, dur%60))
#Check to see that the 'review_tokens' column was properly created
review_sample
#Print out example full review and its associated tokens after running _process_review()
print('Full review:\n\n{}'.format(review_sample.loc[4, 'review_text']))
print('\n\nTokenized review: \n\n{}'.format(review_sample.loc[4, 'review_tokens']))
data_dir = '/Users/alice.naghshineh/pickled_data/threshold_analysis'
with open(os.path.join(data_dir, 'half_reviews_clean_threshold_analysis.pkl'), 'wb') as f:
pickle.dump([review_sample], f)
f = open(os.path.join(data_dir, 'half_reviews_clean_threshold_analysis.pkl'), 'rb')
test = pickle.load(f)
test[0].shape[0]
# ## Ngram codes for visualization
def get_ngrams(tokens, n):
n_grams = ngrams(tokens, n)
return [ ' '.join(grams) for grams in n_grams]
#make sure ngrams code is working for bi-grams
get_ngrams(review_sample.loc[9, 'review_tokens'], 2)
#what about tri-grams?
get_ngrams(review_sample.loc[9, 'review_tokens'], 3)
def apply_ngrams_on_column(data):
for n in range(2,6):
data['{}_grams'.format(n)] = data['review_tokens'].apply(lambda x: get_ngrams(x, n))
print('Done creating {}-grams...'.format(n))
return data
apply_ngrams_on_column(review_sample)
# +
#Make sure the ngrams function worked as you thought by viewing a few examples from the dataframe
print('Full review:\n\n{}'.format(review_sample.loc[9, 'review_text']))
print('\n\nTokenized review:\n\n{}'.format(review_sample.loc[9, 'review_tokens']))
print('\n\n2-grams review: \n\n{}'.format(review_sample.loc[9, '2_grams']))
print('\n\n3-grams review: \n\n{}'.format(review_sample.loc[9, '3_grams']))
print('\n\n4-grams review: \n\n{}'.format(review_sample.loc[9, '4_grams']))
print('\n\n5-grams review: \n\n{}'.format(review_sample.loc[9, '5_grams']))
# -
# ## Counts of Tokens in Corpus
# +
#Creates function to show us a visualization of our top 50 token counts
#Adjust n to show n top tokens. Default is 50
def _get_top_tokens(tokens, n = 50):
def dummy_fun(text):
return text
vectorizer = CountVectorizer(
tokenizer = dummy_fun,
preprocessor= dummy_fun,
token_pattern=None)
docs = vectorizer.fit_transform(tokens)
features = vectorizer.get_feature_names()
visualizer = FreqDistVisualizer(features=features, size=(1080, 720), n = n)
visualizer.fit(docs)
visualizer.poof()
#I'm going to update this function to spit out a histogram
def _get_least_tokens(tokens, n = 50):
def dummy_fun(text):
return text
vectorizer = CountVectorizer(
tokenizer = dummy_fun,
preprocessor= dummy_fun,
token_pattern=None)
docs = vectorizer.fit_transform(tokens)
counts = docs.sum(axis=0).A1
features = vectorizer.get_feature_names()
freq_distribution = Counter(dict(zip(features, counts)))
return list(reversed(freq_distribution.most_common()[-n:]))
# -
#For this function, n represents the cut-off frequency threshold. The default is 3.
#In other words, if n = 3, the function will return all tokens that appear at most 3 times in the corpus
def _infrequent_tokens(tokens, n = 3):
def dummy_fun(text):
return text
vectorizer = CountVectorizer(
tokenizer = dummy_fun,
preprocessor= dummy_fun,
token_pattern=None)
docs = vectorizer.fit_transform(tokens)
counts = docs.sum(axis=0).A1
features = vectorizer.get_feature_names()
freq_distribution = Counter(dict(zip(features, counts)))
infrequent_tokens = [[token, count] for token, count in freq_distribution.items() if count <= n]
df = pd.DataFrame(infrequent_tokens, columns = ['token', 'count']).sort_values(by='count', ascending=False)
return df
_infrequent_tokens(review_sample['review_tokens'], 2)
#Let's see what our top 50 tokens are!
_get_top_tokens(review_sample['review_tokens'])
_get_least_tokens(review_sample['review_tokens'], 100)
# ## TF-IDF Vectorization
# +
def dummy_fun(text):
return text
#This first TF-IDF function creates a vectorizer that takes the review text in string format (review_text column)
#So if our dataframe review_sample, it would take: review_sample['review_text']
#That's because it calls our pre-made preprocessor, _process_review
tfidf1 = TfidfVectorizer(
tokenizer=_process_review,
preprocessor=dummy_fun,
token_pattern=None)
#This second TF-IDF function takes our already tokenized reviews, so the column review_sample['review_tokens']
#This essentially means that we need to run our custom preprocessor _process_review on our review text in raw form
tfidf2 = TfidfVectorizer(
tokenizer=dummy_fun,
preprocessor=dummy_fun,
token_pattern=None)
# -
#Run tfidf1
Y = tfidf1.fit_transform(review_sample['review_text'])
tfidf1.vocabulary_
idf_df1 = pd.DataFrame(Y.toarray(), columns=tfidf1.get_feature_names())
idf_df1
print(review_sample.loc[11, 'review_text'])
print(idf_df1.loc[11])
#Run tfidf2
Z = tfidf2.fit_transform(review_sample['review_tokens'])
idf_df2 = pd.DataFrame(Z.toarray(), columns=tfidf2.get_feature_names())
idf_df2
print(review_sample.loc[11, 'review_text'])
print(idf_df2.loc[11])
|
notebooks/ann-text-processing2.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/scsanjay/ml_from_scratch/blob/main/02.%20K%20Nearest%20Neighbor%20(KNN)/Knn.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + [markdown] id="kd7KTuj57Zde"
# ##Implementation of KNN
# + id="uiyhaXDMDVWB"
import numpy as np
from collections import Counter
# + id="Lvmj7PxgDZ7W"
class Knn:
"""
k-nearest neighbors, aka kNN, is a classification technique which looks
at a query point's nearest neighbors to predict its label.
Parameters
----------
n_neighbors : int, default=5
Number of nearest neighbors to look at for predicting.
weights : {'uniform', 'distance'}, default='uniform'
uniform gives equal importance to each neighbor.
distance gives more importance to closer neighbors, importance = 1/distance.
p : int, default=2
It is the parameter for the minkowski distance (lp distance). It is used only
when metric=minkowski.
metric : {'minkowski', 'manhattan', 'euclidean'}, default='minkowski'
The distance metric used to check for neighbors. The default is minkowski
with p=2 which is equivalent to euclidean.
Attributes
----------
classes_ : array of shape (n_classes,)
It returns the class labels based on fitted data.
n_classes_ : int
It returns the number of distinct class labels based on fitted data.
n_samples_fit_ : int
It gives number of train data fitted.
"""
def __init__(self, n_neighbors=5, weights='uniform', p=2, metric='minkowski'):
self.n_neighbors = n_neighbors
self.weights = weights
self.metric = metric
self.p = p
def fit(self, X, y):
"""
Fit the training data to the model.
Parameters
----------
X : array-like of shape (n_samples, n_features)
The training data.
y : array-like of shape (n_samples,)
The target labels.
Returns
-------
self
"""
# save train data
self.X_train = np.array(X)
self.y_train = np.array(y)
# set distinct class labels
self.classes_ = np.sort(np.unique(self.y_train))
# set distinct class labels count
self.n_classes_ = len(self.classes_)
# set no. of train data
self.n_samples_fit_ = len(self.X_train)
return self
def predict(self, X):
"""
Predict the labels of test data.
Parameters
----------
X : array-like of shape (n_queries, n_features)
The testing data.
Returns
-------
y : array-like of shape (n_queries,)
The target labels of test data.
"""
X_test = np.array(X)
y_pred = []
# loop over each test data
for test_data in X_test:
# calculate distances between test data and all the train data
distances = self._getDistances(test_data)
# get k sorted indices based on distance
distance_indices = np.argsort(distances)[:self.n_neighbors]
# get label based on weight and distance
if self.weights == 'uniform':
# get labels of k nearest points
labels = self.y_train[distance_indices]
# perform majority vote to get the label
label = Counter(labels).most_common(1)[0][0]
else:
# calculate weights for each labels
label_weight = dict()
for distance_index in distance_indices:
distance = distances[distance_index]
weight = 1/distance
label = self.y_train[distance_index]
if not label_weight.get(label):
label_weight[label] = weight
else:
label_weight[label] += weight
# get the label based on max distance
label = max(label_weight, key=label_weight.get)
#save the label
y_pred.append(label)
# return predicted labels
return np.array(y_pred)
def kneighbors(self, X):
"""
Return the distances and the k nearest neighbors indices.
Parameters
----------
X : array-like of shape (n_queries, n_features)
The testing data.
Returns
-------
(distance, neighbors) : tuple of array-like of shape (n_queries, k) each
distance is the distance of each point from the query points.
neighbors contains the indices of the neighbors in the train data.
"""
X_test = np.array(X)
neighbor_indices = []
neighbor_distances = []
# loop over each test data
for test_data in X_test:
# calculate distances between test data and all the train data
distances = self._getDistances(test_data)
# get k sorted indices based on distance
distance_indices = np.argsort(distances)[:self.n_neighbors]
# append indices of the neighbors
neighbor_indices.append(distance_indices)
# append distance of the neighbors
neighbor_distances.append(distances[distance_indices])
# return (distance, neighbors)
return (np.array(neighbor_distances), np.array(neighbor_indices))
def _getDistances(self, test_data):
if self.metric == 'euclidean':
p = 2
elif self.metric == 'manhattan':
p = 1
else:
p = self.p
# implement the distance calculation
distances = np.power(np.sum(np.power(np.abs(test_data-self.X_train), p), axis=1), 1/p)
return distances
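# The vectorized Minkowski computation in `_getDistances` can be sanity-checked
# against a naive per-row loop (a quick sketch on a tiny synthetic array; the
# names below are illustrative only):

```python
import numpy as np

# Tiny synthetic "train" set and one query point
X_train = np.array([[0.0, 0.0], [3.0, 4.0], [1.0, 1.0]])
query = np.array([0.0, 0.0])
p = 2  # euclidean

# Vectorized form, as used in _getDistances
vectorized = np.power(np.sum(np.power(np.abs(query - X_train), p), axis=1), 1 / p)

# Naive per-row equivalent
naive = np.array([sum(abs(q - x) ** p for q, x in zip(query, row)) ** (1 / p)
                  for row in X_train])

# distances: 0, 5, sqrt(2)
assert np.allclose(vectorized, naive)
```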
# + [markdown] id="ecv8Jat3wku7"
# ##Testing the validity of implementation by comparing with **sklearn.neighbors.KNeighborsClassifier**
# + id="RS72_d0y_uXs"
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.datasets import load_iris
from sklearn.metrics import accuracy_score
# + colab={"base_uri": "https://localhost:8080/"} id="56R95IvP6EJL" outputId="781c1f4f-3bef-4853-aa10-195cdbe1e43a"
# See doc
help(Knn)
# + id="QoLa0dLnxDpL"
data = load_iris()
# + id="5hNLi7uZ_y4k"
#increased test size to get some inaccuracy
X_train, X_test, y_train, y_test = train_test_split(data.data, data.target, test_size=0.40, random_state=42)
# + colab={"base_uri": "https://localhost:8080/"} id="p8IoeoCuMRq5" outputId="7e26709c-39ba-4f2a-9c36-3c44909375fb"
neigh = KNeighborsClassifier()
neigh.fit(X_train, y_train)
print('classes : ',neigh.classes_)
y_pred = neigh.predict(X_test)
print('y_pred : ',y_pred)
print('score : ',accuracy_score(y_test, y_pred))
# + colab={"base_uri": "https://localhost:8080/"} id="mu_JKqAVzGzg" outputId="49292c25-00a6-4ab6-e57c-c0b32607d288"
neigh = Knn()
neigh.fit(X_train, y_train)
print('classes : ',neigh.classes_)
y_pred = neigh.predict(X_test)
print('y_pred : ',y_pred)
print('score : ',accuracy_score(y_test, y_pred))
# + [markdown] id="GYQhbu6E5LSB"
# We get the same predicted values and accuracy score from both models.
# + colab={"base_uri": "https://localhost:8080/"} id="xuo37qcp2pGq" outputId="9f416729-2831-442e-a73e-3124ce9e6b66"
neigh = KNeighborsClassifier(n_neighbors=15, metric='manhattan')
neigh.fit(X_train, y_train)
y_pred = neigh.predict(X_test)
print('score : ',accuracy_score(y_test, y_pred))
# + colab={"base_uri": "https://localhost:8080/"} id="dXwi0P3c7B4B" outputId="4d87286c-c70c-497a-9c47-722740b654e3"
neigh = Knn(n_neighbors=15, metric='manhattan')
neigh.fit(X_train, y_train)
y_pred = neigh.predict(X_test)
print('score : ',accuracy_score(y_test, y_pred))
# + [markdown] id="OZ7tp7fl783X"
# Other params such as n_neighbors and metric also seem to be working fine.
# + colab={"base_uri": "https://localhost:8080/"} id="-9tAPxHy7EXF" outputId="a4994026-639e-411e-9c29-45b591d35a88"
neigh = KNeighborsClassifier(n_neighbors=3, metric='manhattan')
neigh.fit(X_train, y_train)
print(neigh.kneighbors(X_test[:2]))
# + colab={"base_uri": "https://localhost:8080/"} id="YAesevsf8PNw" outputId="f4b252e0-a001-45fc-9aee-2c0911f1b05b"
neigh = Knn(n_neighbors=3, metric='manhattan')
neigh.fit(X_train, y_train)
print(neigh.kneighbors(X_test[:2]))
# + [markdown] id="pujwSYyy9avI"
# The `kneighbors` method is also working as expected.
# + id="--2bJoLV9bFz"
|
02. K Nearest Neighbor (KNN)/Knn.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/MavrellousG/Azure_ML_DFE/blob/main/designer-predict-diabetes.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + colab={"base_uri": "https://localhost:8080/"} id="eVd9aac60Hqu" outputId="11ecc1ea-c31b-42a5-cadf-eaa1d1cfb80b"
import urllib.request
import json
import os
import ssl
def allowSelfSignedHttps(allowed):
# bypass the server certificate verification on client side
if allowed and not os.environ.get('PYTHONHTTPSVERIFY', '') and getattr(ssl, '_create_unverified_context', None):
ssl._create_default_https_context = ssl._create_unverified_context
allowSelfSignedHttps(True) # this line is needed if you use self-signed certificate in your scoring service.
# Request data goes here
data = {
"Inputs": {
"WebServiceInput0":
[
{
'PatientID': "1882185",
'Pregnancies': "9",
'PlasmaGlucose': "104",
'DiastolicBloodPressure': "51",
'TricepsThickness': "7",
'SerumInsulin': "24",
'BMI': "27.36983156",
'DiabetesPedigree': "1.3504720469999998",
'Age': "43",
},
],
},
"GlobalParameters": {
}
}
body = str.encode(json.dumps(data))
url = 'http://a9c5017c-b4fd-48b0-b443-a3497c6a53f9.eastus2.azurecontainer.io/score'
api_key = '<KEY>' # Replace this with the API key for the web service
headers = {'Content-Type':'application/json', 'Authorization':('Bearer '+ api_key)}
req = urllib.request.Request(url, body, headers)
try:
response = urllib.request.urlopen(req)
result = response.read()
print(result)
except urllib.error.HTTPError as error:
print("The request failed with status code: " + str(error.code))
# Print the headers - they include the request ID and the timestamp, which are useful for debugging the failure
print(error.info())
print(json.loads(error.read().decode("utf8", 'ignore')))
|
designer-predict-diabetes.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## GLIMS region files download: Level 0 files
# This script creates "Level 0" RGI files. These files are fetched directly from the GLIMS database, subsetted with a spatial bounding box around each region.
#
# Level 0 files need to be updated to reflect new entries into the GLIMS database.
from glims_database_dump import *
import geopandas as gpd
import shutil
from utils import mkdir
import numpy as np
servers = {
'production': 'www.glims.org/services',
'blue': 'blue.glims-services.apps.int.nsidc.org',
'integration': 'integration.glims-services.apps.int.nsidc.org',
'qa': 'qa.glims-services.apps.int.nsidc.org',
'staging': 'staging.glims-services.apps.int.nsidc.org',
}
# ## RGI Region files
# go down from rgi7_scripts/workflow/preprocessing
data_dir = '../../../rgi7_data/'
reg_file = os.path.join(data_dir, 'l0_regions', '00_rgi70_regions', '00_rgi70_O1Regions.shp')
reg_f = gpd.read_file(reg_file)
reg_f.plot(alpha=0.5, edgecolor='k');
# These are the two regions with more than one box: Region 01 and Region 10:
reg_f.loc[[0, 1]].plot(alpha=0.5, edgecolor='k');
# We drop the second Alaska box, which is not needed
reg_f = reg_f.drop(1)
reg_f.loc[[0]].plot(alpha=0.5, edgecolor='k');
# The Alaska box 1 has no glaciers in GLIMS to date (07.06.2021); it needs to be removed, otherwise the download will stall.
#
# Region 10 has two boxes with glaciers in it:
reg_f.loc[[10, 11]].plot(alpha=0.5, edgecolor='k');
# ## Download loop
# If you want to (re-)download only selected regions
reg_f_sel = reg_f.loc[[10, 11]]
reg_f_sel
# +
buffer = 0.5 # in degrees, buffer around the box
from_glims = mkdir(os.path.join(data_dir, 'l0_from_glims'))
for i, reg in reg_f_sel.iterrows():
# Prepare bounds
x0, y0, x1, y1 = reg.geometry.bounds
x0 = np.clip(x0 - buffer, -180., None)
y0 = np.clip(y0 - buffer, -90., None)
x1 = np.clip(x1 + buffer, None, 180.)
y1 = np.clip(y1 + buffer, None, 90.)
bounds = f' {x0:.2f},{y0:.2f},{x1:.2f},{y1:.2f}'
print('')
print('{}, {}. Bounds: {}'.format(reg.RGI_CODE, reg.FULL_NAME, bounds))
# GLIMS request arguments
p = setup_argument_parser()
args = p.parse_args(['--mode', 'glims',
'--nunataks', 'GLIMS', # We can't use GLIMS because it is buggy
'--archive_type', 'tar',
'--download_type', 'extent',
'--clause', bounds])
server = servers[args.env]
filebasename = issue_order(server, args)
poll_readiness(server, filebasename, period=2, tries=args.tries, protocol=args.protocol) # returns when file is ready
do_download(server, filebasename, args)
shutil.move(filebasename, os.path.join(from_glims, '{:02d}_RGI{:02d}.tgz'.format(i, int(reg['RGI_CODE']))))
# -
|
workflow/preprocessing/02_l0_download_from_glims.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: pyKIS
# language: python
# name: pykis
# ---
# ## 1. Writing your first script
# Remember the function `commission()`? In this exercise you will learn how to port it to a script and make it run, just like it was running in a jupyter notebook cell (you can either use your own function, or the one that is pasted here).
# ```python
# cargo = ['bananas', 'apples', 'milk']
# actual = [24, 24, 12]
# nominal = [4, 52, 0]
#
# def commission(cargo, actual, nominal, warn=False):
# diff = abs(actual - nominal)
# print('Cargo is ', cargo, ', diff = ', diff, sep='')
#
# if (diff > 25) and (warn == True):
# print('Warning: Low supply.')
#
# return diff
#
#
# for c, a, n in zip(cargo, actual, nominal):
# diff = commission(c, a, n, warn=True)
# ```
# 1. In your jupyter overview tab, click New -> Other -> Text File to create a new file within the current directory. Then at the top of the page, click the file's name (which is "untitled.txt" by default). Rename it to "commission_cargo.py".
# 2. Copy your code into the file.
# 3. Run the script in the cell below.
# %run commission_cargo_sol_3_1.py
# ## 2. A few DataCamp exercises
# Have a look at the exercises on [DataCamp](https://learn.datacamp.com/):
#
# **1.1.** [Import a package 1](https://campus.datacamp.com/courses/intro-to-python-for-data-science/chapter-3-functions-and-packages?ex=10)
#
# **1.2.** [Import a package 2](https://campus.datacamp.com/courses/intro-to-python-for-data-science/chapter-3-functions-and-packages?ex=11)
#
# **1.3.** [Splitting a function](https://campus.datacamp.com/courses/writing-functions-in-python/best-practices?ex=7)
#
# There are many more exercises over at [python-data-science-toolbox-part-1](https://learn.datacamp.com/courses/python-data-science-toolbox-part-1): if you're motivated, go through the whole thing.
# ## 3. Writing your first module
# Reuse the code for your function `commission` and make a module out of it.
#
# 1. Reset your namespace by using `%reset -f`. To make each step clearer, keep resetting every time you run a cell
# - 1.1. run `commission_cargo.py` again
# - 1.2. import the module `commission_cargo`, without changing the file `commission_cargo.py`
#    - 1.3. From `commission_cargo` import explicitly only the function `commission()`
# - 1.4. Do a *-import from `commission_cargo`
# - 1.5. Looking at your namespace, what is the difference between all the different ways to import the file?
#
# 2. Use the notation `if __name__ == '__main__':` within the file `commission_cargo.py`, and write all declarations and calls in the `if`-block.
# - 2.1. With the now new file, do all of the above again
# - 2.2. In general, which way of importing is the best?
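# A minimal sketch of what `commission_cargo.py` could look like after step 2,
# assuming the function body from exercise 1: all declarations and calls sit
# inside the `__main__` guard, so importing the module only defines `commission()`.

```python
def commission(cargo, actual, nominal, warn=False):
    # Report the absolute difference between actual and nominal stock
    diff = abs(actual - nominal)
    print('Cargo is ', cargo, ', diff = ', diff, sep='')

    if (diff > 25) and (warn == True):
        print('Warning: Low supply.')

    return diff


if __name__ == '__main__':
    # Only runs when executed as a script, not on import
    cargo = ['bananas', 'apples', 'milk']
    actual = [24, 24, 12]
    nominal = [4, 52, 0]

    for c, a, n in zip(cargo, actual, nominal):
        commission(c, a, n, warn=True)
```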
# +
# %reset -f
# %run commission_cargo_sol_3_1.py
print(dir()[7:])
# +
# %reset -f
import commission_cargo_sol_3_1
print(dir()[7:])
# +
# %reset -f
from commission_cargo_sol_3_1 import commission
print(dir()[7:])
# +
# %reset -f
from commission_cargo_sol_3_1 import *
print(dir()[7:])
# +
# %reset -f
# %run commission_cargo_sol_3_2.py
print(dir()[7:])
# +
# %reset -f
import commission_cargo_sol_3_2
print(dir()[7:])
# +
# %reset -f
from commission_cargo_sol_3_2 import commission
print(dir()[7:])
# +
# %reset -f
from commission_cargo_sol_3_2 import *
print(dir()[7:])
# -
# ## 4. Good practises
# Have a look at the following code:
# ```python
# a='melons'
# b=0
# c=1
# def f(x):
# x=x + 10
# return x
# from commission_cargo import *
# d=commission(a,f(b),c,False)
# ```
# It defines an actual value of `0` melons and a nominal value of `1`. Then a function is defined that adds `10` to, in this case, the actual value. The function `commission()` is called after importing it.
# 1. Port the code into a script called melons.py (similar to exercise 1.) and run it
# 2. In the cell below, `pycodestyle` is used to list some of the issues with the code. Go through them and try to adapt the script and execute the cell again until 0 issues are found
# 3. Replace all variable and function names with meaningful ones and comment the code. Afterwards, compare your new code with the old version above and convince yourself how much more readable the new version is
# +
from pycodestyle import Checker as pch
c = pch('melons.py')
E = c.check_all()
print("Total of", E, "issues found.")
# -
# ```python
# import commission_cargo as cc
#
#
# def increase_stock(stock):
# """
# Take some value, add 10 and return it.
# """
# stock = stock + 10
#
# return stock
#
#
# # Define data
# cargo, actual, nominal = 'melons', 0, 1
#
# # Define new actual value
# actual = increase_stock(actual)
#
# diff = cc.commission(cargo, actual, nominal, warn=False)
# ```
|
exercises/sol_1_4.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [conda env:riboraptor]
# language: python
# name: conda-env-riboraptor-py
# ---
# +
# %pylab inline
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn.decomposition import PCA
sns.set_context('talk', font_scale=2)
sns.set_style('white')
# -
np.random.seed(42)
X = np.dot(np.random.rand(2, 2), np.random.randn(2, 100)).T
fig, ax = plt.subplots(figsize=(8,8))
ax.scatter(X[:, 0], X[:, 1])
ax.axis('equal')
fig.tight_layout()
def draw_vector(v0, v1, ax=None, linestyle='-'):
ax = ax or plt.gca()
arrowprops=dict(arrowstyle='->',
linewidth=2,
linestyle=linestyle,
shrinkA=0, shrinkB=0, color='black')
ax.annotate('', v1, v0, arrowprops=arrowprops, clip_on=False)
pca = PCA(n_components=2)
pca.fit(X)
pca.explained_variance_
pca.mean_
fig, ax = plt.subplots(figsize=(8, 8))
ax.scatter(X[:, 0], X[:, 1], alpha=0.4)
original_vectors = []
for length, vector in zip(pca.explained_variance_, pca.components_):
v = vector * 2.5 * np.sqrt(length)
original_vectors.append((pca.mean_, pca.mean_ + v))
draw_vector(pca.mean_, pca.mean_ + v, ax, '-')
plt.axis('equal');
# # Add outliers
np.random.seed(42)
X_normal = np.dot(np.random.rand(2, 2), np.random.randn(2, 100)).T
X_outlier1 = np.random.normal(loc=-0.5, scale=1, size=(1,2))*2.5 + np.array([2, 9])
X_outlier2 = np.random.normal(loc=0.5, scale=1, size=(1,2))*2.5 + np.array([-2, -9])
X_outlier3 = np.random.normal(loc=0.5, scale=1, size=(1,2))*5 + np.array([-2, -9])
X = np.vstack([X_normal, X_outlier1, X_outlier2, X_outlier3])
fig, ax = plt.subplots(figsize=(8,8))
ax.scatter(X[:, 0], X[:, 1])
ax.axis('equal')
fig.tight_layout()
# +
pca = PCA(n_components=2)
pca.fit(X)
fig, ax = plt.subplots(figsize=(8, 8))
ax.scatter(X[:, 0], X[:, 1], alpha=0.4)
for index, (length, vector) in enumerate(zip(pca.explained_variance_, pca.components_)):
v = vector * 2.5 * np.sqrt(length)
draw_vector(pca.mean_, pca.mean_ + v, ax)
draw_vector(original_vectors[index][0], original_vectors[index][1], ax, '--')
plt.axis('equal');
# -
# # Robust PCA
class RPCA:
def __init__(self, D, mu=None, lmbda=None):
self.D = D
self.S = np.zeros(self.D.shape)
self.Y = np.zeros(self.D.shape)
if mu:
self.mu = mu
else:
self.mu = np.prod(self.D.shape) / (4 * self.norm_p(self.D, 2))
self.mu_inv = 1 / self.mu
if lmbda:
self.lmbda = lmbda
else:
self.lmbda = 1 / np.sqrt(np.max(self.D.shape))
@staticmethod
def norm_p(M, p):
return np.sum(np.power(M, p))
@staticmethod
def shrink(M, tau):
return np.sign(M) * np.maximum((np.abs(M) - tau), np.zeros(M.shape))
def svd_threshold(self, M, tau):
U, S, V = np.linalg.svd(M, full_matrices=False)
return np.dot(U, np.dot(np.diag(self.shrink(S, tau)), V))
def fit(self, tol=None, max_iter=1000, iter_print=100):
iter = 0
err = np.Inf
Sk = self.S
Yk = self.Y
Lk = np.zeros(self.D.shape)
if tol:
_tol = tol
else:
_tol = 1E-7 * self.norm_p(np.abs(self.D), 2)
while (err > _tol) and iter < max_iter:
Lk = self.svd_threshold(
self.D - Sk + self.mu_inv * Yk, self.mu_inv)
Sk = self.shrink(
self.D - Lk + (self.mu_inv * Yk), self.mu_inv * self.lmbda)
Yk = Yk + self.mu * (self.D - Lk - Sk)
err = self.norm_p(np.abs(self.D - Lk - Sk), 2)
iter += 1
if (iter % iter_print) == 0 or iter == 1 or iter > max_iter or err <= _tol:
print('iteration: {0}, error: {1}'.format(iter, err))
self.L = Lk
self.S = Sk
return Lk, Sk
def plot_fit(self, size=None, tol=0.1, axis_on=True):
n, d = self.D.shape
if size:
nrows, ncols = size
else:
sq = np.ceil(np.sqrt(n))
nrows = int(sq)
ncols = int(sq)
ymin = np.nanmin(self.D)
ymax = np.nanmax(self.D)
print('ymin: {0}, ymax: {1}'.format(ymin, ymax))
numplots = np.min([n, nrows * ncols])
plt.figure()
for n in range(numplots):
plt.subplot(nrows, ncols, n + 1)
plt.ylim((ymin - tol, ymax + tol))
plt.plot(self.L[n, :] + self.S[n, :], 'r')
plt.plot(self.L[n, :], 'b')
if not axis_on:
plt.axis('off')
rpca = RPCA(X)
L, S = rpca.fit(max_iter=10000, iter_print=100)
# +
fig, ax = plt.subplots(figsize=(8, 8))
ax.scatter(S[:, 0], S[:, 1], alpha=0.4, clip_on=False)
for index in range(2):
print(index)
draw_vector(original_vectors[index][0], original_vectors[index][1], ax, '--')
draw_vector(L[0], L[0]*5, ax, '-')
draw_vector(L[1], L[1]*9, ax, '-')
ax.axis('equal')
# -
S.shape
np.dot(L[0], L[0].T)
fig, ax = plt.subplots(figsize=(8, 8))
ax.scatter(L[:, 0], L[:, 1], alpha=0.4, clip_on=False)
# # PCA using SVD
# Let the data matrix $\mathbf X$ be of $n \times p$ size, where $n$ is the number of samples and $p$ is the number of variables. Let us assume that it is *centered*, i.e. column means have been subtracted and are now equal to zero.
#
# Then the $p \times p$ covariance matrix $\mathbf C$ is given by $\mathbf C = \mathbf X^\top \mathbf X/(n-1)$. It is a symmetric matrix and so it can be diagonalized: $$\mathbf C = \mathbf V \mathbf L \mathbf V^\top,$$ where $\mathbf V$ is a matrix of eigenvectors (each column is an eigenvector) and $\mathbf L$ is a diagonal matrix with eigenvalues $\lambda_i$ in the decreasing order on the diagonal. The eigenvectors are called *principal axes* or *principal directions* of the data. Projections of the data on the principal axes are called *principal components*, also known as *PC scores*; these can be seen as new, transformed, variables. The $j$-th principal component is given by $j$-th column of $\mathbf {XV}$. The coordinates of the $i$-th data point in the new PC space are given by the $i$-th row of $\mathbf{XV}$.
#
# If we now perform singular value decomposition of $\mathbf X$, we obtain a decomposition $$\mathbf X = \mathbf U \mathbf S \mathbf V^\top,$$ where $\mathbf U$ is a unitary matrix and $\mathbf S$ is the diagonal matrix of singular values $s_i$. From here one can easily see that $$\mathbf C = \mathbf V \mathbf S \mathbf U^\top \mathbf U \mathbf S \mathbf V^\top /(n-1) = \mathbf V \frac{\mathbf S^2}{n-1}\mathbf V^\top,$$ meaning that right singular vectors $\mathbf V$ are principal directions and that singular values are related to the eigenvalues of covariance matrix via $\lambda_i = s_i^2/(n-1)$. Principal components are given by $\mathbf X \mathbf V = \mathbf U \mathbf S \mathbf V^\top \mathbf V = \mathbf U \mathbf S$.
#
# To summarize:
#
# 1. If $\mathbf X = \mathbf U \mathbf S \mathbf V^\top$, then columns of $\mathbf V$ are principal directions/axes.
# 2. Columns of $\mathbf {US}$ are principal components ("scores").
# 3. Singular values are related to the eigenvalues of covariance matrix via $\lambda_i = s_i^2/(n-1)$. Eigenvalues $\lambda_i$ show variances of the respective PCs.
# 4. Standardized scores are given by columns of $\sqrt{n-1}\mathbf U$ and loadings are given by columns of $\mathbf V \mathbf S/\sqrt{n-1}$. See e.g. [here](https://stats.stackexchange.com/questions/125684) and [here](https://stats.stackexchange.com/questions/143905) for why "loadings" should not be confused with principal directions.
# 5. **The above is correct only if $\mathbf X$ is centered.** Only then is covariance matrix equal to $\mathbf X^\top \mathbf X/(n-1)$.
# 6. The above is correct only for $\mathbf X$ having samples in rows and variables in columns. If variables are in rows and samples in columns, then $\mathbf U$ and $\mathbf V$ exchange interpretations.
# 7. If one wants to perform PCA on a correlation matrix (instead of a covariance matrix), then columns of $\mathbf X$ should not only be centered, but standardized as well, i.e. divided by their standard deviations.
# 8. To reduce the dimensionality of the data from $p$ to $k<p$, select $k$ first columns of $\mathbf U$, and $k\times k$ upper-left part of $\mathbf S$. Their product $\mathbf U_k \mathbf S_k$ is the required $n \times k$ matrix containing first $k$ PCs.
# 9. Further multiplying the first $k$ PCs by the corresponding principal axes $\mathbf V_k^\top$ yields $\mathbf X_k = \mathbf U_k^\vphantom \top \mathbf S_k^\vphantom \top \mathbf V_k^\top$ matrix that has the original $n \times p$ size but is *of lower rank* (of rank $k$). This matrix $\mathbf X_k$ provides a *reconstruction* of the original data from the first $k$ PCs. It has the lowest possible reconstruction error, [see my answer here](https://stats.stackexchange.com/questions/130721).
# 10. Strictly speaking, $\mathbf U$ is of $n\times n$ size and $\mathbf V$ is of $p \times p$ size. However, if $n>p$ then the last $n-p$ columns of $\mathbf U$ are arbitrary (and corresponding rows of $\mathbf S$ are constant zero); one should therefore use an *economy size* (or *thin*) SVD that returns $\mathbf U$ of $n\times p$ size, dropping the useless columns. For large $n\gg p$ the matrix $\mathbf U$ would otherwise be unnecessarily huge. The same applies for an opposite situation of $n\ll p$.
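#
# The summary above can be verified numerically on centered random data (a
# small sketch with $n > p$; the variable names below are illustrative):
#
# ```python
# import numpy as np
#
# rng = np.random.default_rng(0)
# n, p = 200, 4
# X = rng.normal(size=(n, p))
# X = X - X.mean(axis=0)                  # center: required (point 5)
#
# # Economy-size SVD (point 10)
# U, s, Vt = np.linalg.svd(X, full_matrices=False)
#
# # Point 3: eigenvalues of the covariance matrix are s_i^2 / (n - 1)
# C = X.T @ X / (n - 1)
# eigvals = np.sort(np.linalg.eigvalsh(C))[::-1]
# assert np.allclose(eigvals, s**2 / (n - 1))
#
# # Point 2: principal components are X V = U S
# assert np.allclose(X @ Vt.T, U * s)
# ```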
#
# ---------------
#
# ## Further links
#
# * [What is the intuitive relationship between SVD and PCA](https://math.stackexchange.com/questions/3869) -- a very popular and very similar thread on math.SE.
#
# * https://stats.stackexchange.com/questions/79043 -- a discussion of what are the benefits of performing PCA via SVD [short answer: numerical stability].
#
# * [PCA and Correspondence analysis in their relation to Biplot](https://stats.stackexchange.com/q/141754/3277) -- PCA in the context of some congeneric techniques, all based on SVD.
#
# * https://stats.stackexchange.com/questions/121162 -- a question asking if there any benefits in using SVD *instead* of PCA [short answer: ill-posed question].
#
# * [Making sense of principal component analysis, eigenvectors & eigenvalues](https://stats.stackexchange.com/a/140579/28666) -- my answer giving a non-technical explanation of PCA. To draw attention, I reproduce one figure here:
#
# [![Rotating PCA animation][1]][1]
#
#
# [1]: http://i.stack.imgur.com/Q7HIP.gif
# +
import numpy as np
from sklearn.preprocessing import StandardScaler
# Rows are samples, columns are features
X_std = StandardScaler().fit_transform(X)
X_mean_centered = X - X.mean(axis=0)
X_std_mean_centered = X_std - X_std.mean(axis=0)
# SVD of the original matrix
U, s, V = np.linalg.svd(X)
# SVD of the standardized matrix
U, s, V = np.linalg.svd(X_std)
mean_vec = np.mean(X_std, axis=0)
cov_mat = (X_std - mean_vec).T.dot((X_std - mean_vec)) / (X_std.shape[0]-1)
print('Covariance matrix \n%s' %cov_mat)
# -
U
s
V
# +
cov_mat = np.cov(X_std.T)
eig_vals, eig_vecs = np.linalg.eig(cov_mat)
eig_vecs
# -
eig_vals
s
|
notebooks/02.PCA-vanilla.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # TIC TAC TOE GAMES
import random as rd
from IPython.display import clear_output
from tabulate import tabulate
player = [1,2]
O = 'O'
X = 'X'
K = ' '
# ## Function
def gantian(iPLayer):
if iPLayer == 1:
iPLayer = 2
elif iPLayer == 2:
iPLayer = 1
return iPLayer
def isi(valcon,iPlayer):
games[valcon] = gaco(iPlayer)
def checkFull(games):
full = True
for i in range(len(games)):
if((games[i]) == K):
full = False
return full
def allTrue(x):
final = True
for i in range(len(x)):
if(x[i] == False):
final = False
return final
def equal(a,b):
if(a is b):
return True
else:
return False
def printTable(games):
for i in range(0,8,3):
for j in range(i,i+3):
print(games[j], end = K)
print('')
def gaco(i):
if(i == 1):
return O
elif(i == 2):
return X
# ### Direction Function
def up(i):
if(equal(i,0)):
return False
elif(equal(i,1)):
return False
elif(equal(i,2)):
return False
else:
return i-3
def down(i):
if(equal(i,6)):
return False
elif(equal(i,7)):
return False
elif(equal(i,8)):
return False
else:
return i+3
def left(i):
if(equal(i,0)):
return False
elif(equal(i,3)):
return False
elif(equal(i,6)):
return False
else:
return i-1
def right(i):
if(equal(i,2)):
return False
elif(equal(i,5)):
return False
elif(equal(i,8)):
return False
else:
return i+1
def lup(i):
if(equal(i,0)):
return False
elif(equal(i,1)):
return False
elif(equal(i,2)):
return False
elif(equal(i,3)):
return False
elif(equal(i,6)):
return False
else:
return i-4
def rup(i):
if(equal(i,0)):
return False
elif(equal(i,1)):
return False
elif(equal(i,2)):
return False
elif(equal(i,5)):
return False
elif(equal(i,8)):
return False
else:
return i-2
def lown(i):
if(equal(i,0)):
return False
elif(equal(i,3)):
return False
elif(equal(i,6)):
return False
elif(equal(i,7)):
return False
elif(equal(i,8)):
return False
else:
return i+2
def rown(i):
if(equal(i,2)):
return False
elif(equal(i,5)):
return False
elif(equal(i,6)):
return False
elif(equal(i,7)):
return False
elif(equal(i,8)):
return False
else:
return i+4
contoh2 = range(9)
printTable(contoh2)
for i in range(9):
print(i,up(i),end=' | ')
# ## Check Win
def lineHorizon(games,iWin):
for j in range(0,9,3):
if(equal(games[j],O)):
temp = O
elif(equal(games[j],X)):
temp = X
else:
break
for i in range(j,j+3):
if(equal(games[i],temp)):
iWin += 1
else:
iWin = 0
break
if(iWin == 3):
return True
def lineVertical(games,iWin):
for j in range(0,3):
if(equal(games[j],O)):
temp = O
elif(equal(games[j],X)):
temp = X
else:
break
for i in range(j,j+9,3):
if(equal(games[i],temp)):
iWin += 1
else:
iWin = 0
break
if(iWin == 3):
return True
def diagonal(games,iWin):
for j in range(0,3,2):
if(equal(games[j],O)):
temp = O
elif(equal(games[j],X)):
temp = X
else:
break
if(j == 0):
for i in range(j,9,4):
if(equal(games[i],temp)):
iWin += 1
else:
iWin = 0
break
if(iWin == 3):
return True
elif(j == 2):
for i in range(j,7,2):
if(equal(games[i],temp)):
iWin += 1
else:
iWin = 0
break
if(iWin == 3):
return True
def checkWin(games):
iWin = 0
if(lineHorizon(games,iWin)):
return True
if(lineVertical(games,iWin)):
return True
if(diagonal(games,iWin)):
return True
return False
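# The three line checks above can also be written as one compact function (a
# sketch, not wired into the game loop, intended to give the same answer):
# enumerate the eight winning index triples directly.

```python
# The eight winning lines of a 3x3 board, as index triples
WIN_LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
             (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
             (0, 4, 8), (2, 4, 6)]              # diagonals

def check_win_simple(board, blank=' '):
    # A line wins when all three cells are equal and not blank
    return any(board[a] != blank and board[a] == board[b] == board[c]
               for a, b, c in WIN_LINES)
```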
# ### Win Diagonal
games = [O,X,O,X,O,X,X,O,O]
printTable(games)
checkWin(games)
# ### Win Vertical
games = [O,X,X,O,O,X,X,O,X]
printTable(games)
checkWin(games)
# ### Win Horizon
games = [X,O,X,X,X,X,O,X,O]
printTable(games)
checkWin(games)
# ## BOT
def bot(i):
# games = [K,K,K,K,K,K,K,K,K]
# contoh = range(9)
# printTable(contoh)
# printTable(games)
nomination = []
if(not equal(up(i),False) and equal(games[up(i)],K)):
nomination.append(up(i))
if(not equal(down(i),False) and equal(games[down(i)],K)):
nomination.append(down(i))
if(not equal(right(i),False) and equal(games[right(i)],K)):
nomination.append(right(i))
if(not equal(left(i),False) and equal(games[left(i)],K)):
nomination.append(left(i))
if(not equal(lup(i),False) and equal(games[lup(i)],K)):
nomination.append(lup(i))
if(not equal(rup(i),False) and equal(games[rup(i)],K)):
nomination.append(rup(i))
if(not equal(lown(i),False) and equal(games[lown(i)],K)):
nomination.append(lown(i))
if(not equal(rown(i),False) and equal(games[rown(i)],K)):
nomination.append(rown(i))
print(nomination)
r = rd.choice(nomination)
print('random',r)
return r
# ## Main
games = [K,K,K,K,K,K,K,K,K]
# contoh = [1,2,3,4,5,6,7,8,9]
contoh = range(9)
iPLayer = rd.choice(player)
step = 1
first = rd.randrange(0,9)
valcon2 = rd.randrange(0,9)
while(not checkWin(games) and not checkFull(games)):
print('step',step)
print('player',iPLayer,'(',gaco(iPLayer),')','moves')
while True:
# val = input('input 1-9:')
# valcon = int(val)
# valcon =
# valcon = valcon-1
if(step == 1):
valcon1 = first
elif(step % 2 == 0): #even
valcon2 = bot(valcon1)
elif(step % 2 == 1): #odd
valcon1 = bot(valcon1)
if(games[valcon1] == K or games[valcon2] == K):
break
if(step == 1):
isi(valcon1,iPLayer)
elif(step % 2 == 0): #even
isi(valcon2,iPLayer)
elif(step % 2 == 1): #odd
isi(valcon1,iPLayer)
iPLayer = gantian(iPLayer)
# clear_output()
printTable(contoh)
printTable(games)
step+=1
print('----')
print(checkWin(games))
|
TicTacToe.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
#Import libraries
# %matplotlib inline
import pandas as pd
import numpy as np
odi = pd.read_csv('C:/Users/Raghavendra N/OneDrive/Official/Datasets/ODI_Analytics.csv')
print(odi.shape)
# +
# ODI Data set
odi.head()
# Domain - sports
# Player's performance in individual matches - runs scored on a particular date in a match
# Granularity => Country, Player, Match Date
# -
# # ODI DATA SET
#
#find the data types of the columns
odi.dtypes
#Classify the columns
"""
Dimensions/Categorical - Country, Player, Versus, Ground
Metrics - Runs , ScoreRate
Location - Country, Ground, Versus
Dates - MatchDate
Text - None
Misc - None
"""
# numeric columns analysis
odi.describe()
#any function starting with an underscore is an internal function, not meant for end users
# get numeric columns
odi._get_numeric_data().columns
# describe all string (object) columns or factors
odi.describe(include='object')
odi.info()
odi.head()
# ### Derived Metrics
# +
#extract multiple columns from existing columns
# Date -day, month, year, weekday, Quarter, weekday/weekend, hour details - morning ,noon, night
#Convert the object type to datetime object
odi['MatchDate']=pd.to_datetime(odi['MatchDate'],format='%m-%d-%Y')
# -
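# The `%m-%d-%Y` string above uses the standard `strftime`/`strptime` format
# directives; a quick stdlib check of what it parses (the sample date below is
# purely illustrative):

```python
from datetime import datetime

# %m = zero-padded month, %d = zero-padded day, %Y = four-digit year
d = datetime.strptime("04-02-2011", "%m-%d-%Y")
print(d.year, d.month, d.day, d.strftime("%A"))  # 2011 4 2 Saturday
```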
odi.MatchDate.head()
# +
#Extract the columns
odi['year']=odi['MatchDate'].dt.year
odi['month']=odi['MatchDate'].dt.month
odi['day']=odi['MatchDate'].dt.day
odi['weekday'] = odi['MatchDate'].dt.day_name()
print(odi[['MatchDate', 'year','month','day','weekday']].head())
# +
# Numerical columns can be categorized
# by binning them into different ranges
#Runs -> Century, Fifty, Duckouts, Missed_Century, type_of_run
odi['Runs'].head()
#odi['Century']=[1 if run>=100 else 0 for run in odi['Runs']]
odi['Century']=odi['Runs'].apply(lambda x: 1 if x>=100 else 0)
#odi['Fifty']=[1 if run>=50 and run<=99 else 0 for run in odi['Runs']]
odi['Fifty']=odi['Runs'].apply(lambda x: 1 if x>=50 and x<=99 else 0)
#odi['Duckouts']=[1 if run==0 else 0 for run in odi['Runs']]
odi['Duckouts']=odi['Runs'].apply(lambda x: 1 if x==0 else 0)
#odi['Missed_Century']=[1 if run>=90 and run<=99 else 0 for run in odi['Runs']]
odi['Missed_Century']=odi['Runs'].apply(lambda x: 1 if x>=90 and x<=99 else 0)
# -
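# The three `apply(lambda ...)` indicator columns above all bin the same `Runs`
# value; collapsed into a single pure-Python classifier it looks like this (a
# sketch only — in the notebook a 90-99 score sets both `Fifty` and
# `Missed_Century` to 1, while here `Missed_Century` takes precedence, and the
# label strings are assumptions):

```python
def classify_runs(runs):
    """Map a runs score to one of the derived categories used above."""
    if runs >= 100:
        return 'Century'
    if 90 <= runs <= 99:
        return 'Missed_Century'  # takes precedence over Fifty in this sketch
    if 50 <= runs <= 99:
        return 'Fifty'
    if runs == 0:
        return 'Duckout'
    return 'Other'

print(classify_runs(95))  # Missed_Century
```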
#Group by players and find count
pl_per=odi[['Player','year','Runs','Century','Fifty','Duckouts','Missed_Century']].groupby(['Player','year']).\
agg('sum')
#Top 10 player-years by total runs
pl_per.sort_values(by='Runs',ascending=False).head(10)
#player with max Fifties
pl_per.sort_values(by='Fifty',ascending=False).head(10)
#player with max Duckouts
pl_per.sort_values(by='Duckouts',ascending=False).head(10)
pl_per.sort_values(by='Missed_Century',ascending=False).head(10)
|
notebooks/2.ODI_CaseStudy.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Ch `06`: Concept `01`
# ## Hidden Markov model forward algorithm
# Oof this code's a bit complicated if you don't already know how HMMs work. Please see the book chapter for step-by-step explanations. I'll try to improve the documentation, or feel free to send a pull request with your own documentation!
#
# First, let's import TensorFlow and NumPy:
import numpy as np
import tensorflow as tf
# Define the HMM model:
class HMM(object):
def __init__(self, initial_prob, trans_prob, obs_prob):
self.N = np.size(initial_prob)
self.initial_prob = initial_prob
self.trans_prob = trans_prob
self.emission = tf.constant(obs_prob)
assert self.initial_prob.shape == (self.N, 1)
assert self.trans_prob.shape == (self.N, self.N)
assert obs_prob.shape[0] == self.N
self.obs_idx = tf.placeholder(tf.int32)
self.fwd = tf.placeholder(tf.float64)
def get_emission(self, obs_idx):
slice_location = [0, obs_idx]
num_rows = tf.shape(self.emission)[0]
slice_shape = [num_rows, 1]
return tf.slice(self.emission, slice_location, slice_shape)
def forward_init_op(self):
obs_prob = self.get_emission(self.obs_idx)
fwd = tf.multiply(self.initial_prob, obs_prob)
return fwd
def forward_op(self):
transitions = tf.matmul(self.fwd, tf.transpose(self.get_emission(self.obs_idx)))
weighted_transitions = transitions * self.trans_prob
fwd = tf.reduce_sum(weighted_transitions, 0)
return tf.reshape(fwd, tf.shape(self.fwd))
# Define the forward algorithm:
def forward_algorithm(sess, hmm, observations):
fwd = sess.run(hmm.forward_init_op(), feed_dict={hmm.obs_idx: observations[0]})
for t in range(1, len(observations)):
fwd = sess.run(hmm.forward_op(), feed_dict={hmm.obs_idx: observations[t], hmm.fwd: fwd})
prob = sess.run(tf.reduce_sum(fwd))
return prob
# Let's try it out:
if __name__ == '__main__':
initial_prob = np.array([[0.6], [0.4]])
trans_prob = np.array([[0.7, 0.3], [0.4, 0.6]])
obs_prob = np.array([[0.1, 0.4, 0.5], [0.6, 0.3, 0.1]])
hmm = HMM(initial_prob=initial_prob, trans_prob=trans_prob, obs_prob=obs_prob)
observations = [0, 1, 1, 2, 1]
with tf.Session() as sess:
prob = forward_algorithm(sess, hmm, observations)
print('Probability of observing {} is {}'.format(observations, prob))
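# As a quick sanity check on the TensorFlow graph, the same forward recursion
# can be written in plain Python; for the example parameters above it should
# agree with the session output:

```python
def forward(initial, trans, emit, observations):
    """Plain-Python HMM forward algorithm: returns P(observations | model)."""
    n = len(initial)
    # Initialize: prior probability times emission of the first observation.
    fwd = [initial[i] * emit[i][observations[0]] for i in range(n)]
    for obs in observations[1:]:
        # fwd'[j] = emit[j][obs] * sum_i fwd[i] * trans[i][j]
        fwd = [emit[j][obs] * sum(fwd[i] * trans[i][j] for i in range(n))
               for j in range(n)]
    return sum(fwd)

p = forward([0.6, 0.4],
            [[0.7, 0.3], [0.4, 0.6]],
            [[0.1, 0.4, 0.5], [0.6, 0.3, 0.1]],
            [0, 1, 1, 2, 1])
print(p)  # ~0.00454
```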
|
ch06_hmm/Concept01_forward.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # NBB
# ## Data munging
#
# Data munging (sometimes referred to as data wrangling) is the process of transforming and mapping data from one "raw" data form into another format with the intent of making it more appropriate and valuable for a variety of downstream purposes such as analytics.
# +
# %matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
from scipy import stats
import seaborn as sns
from sklearn.datasets import load_boston
# Make plots larger
plt.rcParams['figure.figsize'] = (15, 9)
# -
tips = sns.load_dataset("tips")
tips.head(n=7)
tips.describe()
tips['time'].value_counts()
sns.boxplot(x="time", y="total_bill", data=tips);
sns.boxplot(x=tips["total_bill"])
tips['tip'].hist(bins=33)
sns.distplot(tips['tip'])
dataset = load_boston()
boston = pd.DataFrame(dataset.data, columns=dataset.feature_names)
boston.head()
# ### Check missing values in the dataset
sum(boston['NOX'].isnull())
boston.apply(lambda x: sum(x.isnull()),axis=0)
# ### Check the range of values in the dataset
boston.describe()
# ## Pima Indians Diabetes Dataset
#
# [Pima Indians Diabetes Dataset](https://archive.ics.uci.edu/ml/datasets/pima+indians+diabetes) is known to have missing values.
#
# Several constraints were placed on the selection of these instances from a larger database. In particular, all patients here are females at least 21 years old of Pima Indian heritage. ADAP is an adaptive learning routine that generates and executes digital analogs of perceptron-like devices. It is a unique algorithm; see the paper for details.
#
#
# Attribute Information:
#
# 1. Number of times pregnant
# 2. Plasma glucose concentration at 2 hours in an oral glucose tolerance test
# 3. Diastolic blood pressure (mm Hg)
# 4. Triceps skin fold thickness (mm)
# 5. 2-Hour serum insulin (mu U/ml)
# 6. Body mass index (weight in kg/(height in m)^2)
# 7. Diabetes pedigree function
# 8. Age (years)
# 9. Class variable (0 or 1)
#
#
# Relevant Papers:
#
# <NAME>., <NAME>., <NAME>., <NAME>., & <NAME>. (1988). Using the ADAP learning algorithm to forecast the onset of diabetes mellitus. In Proceedings of the Symposium on Computer Applications and Medical Care} (pp. 261--265). IEEE Computer Society Press.
#
pima = pd.read_csv("http://nikbearbrown.com/YouTube/MachineLearning/DATA/pima-indians-diabetes.csv", sep=',')
pima.head()
pima.describe()
pima.apply(lambda x: sum(x.isnull()),axis=0)
pima.isnull().sum()
pima.info()
pima.isnull().sum()
(pima['serum_insulin'] == 0).sum()
(pima['serum_insulin'] < 50).sum()
pima_bak=pima.copy()
# ## How to fill missing values?
pima_bak.isnull().sum()
pima_bak.info()
# drop rows with missing values
pima_bak.dropna(inplace=True)
pima_bak.isnull().sum()
pima_bak.info()
(pima_bak['serum_insulin'] < 50).sum()
pima_bak.loc[pima_bak['serum_insulin'] < 50]
pima_bak['serum_insulin'].loc[pima_bak['serum_insulin'] < 50]
pima_bak['serum_insulin'].mean()
pima_bak.head()
pima_bak.loc[pima_bak['serum_insulin'] < 50, 'serum_insulin'] = pima_bak['serum_insulin'].mean()
(pima_bak['serum_insulin'] < 50).sum()
pima_bak.head()
pima_bak['serum_insulin'].fillna(pima_bak['serum_insulin'].mean(), inplace=True)
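# The pattern above — treat implausibly low readings as missing, then fill with
# the mean of the plausible ones — can be sketched without pandas; the threshold
# of 50 mirrors the cell above, and `impute_mean` is an illustrative helper,
# not a pandas API:

```python
from statistics import mean

def impute_mean(values, low=50):
    """Replace None and implausibly low readings with the mean of the
    plausible values (a sketch of simple mean imputation)."""
    ok = [v for v in values if v is not None and v >= low]
    fill = mean(ok)
    return [fill if (v is None or v < low) else v for v in values]

# None and 20 are both replaced by the mean of the plausible values (130)
print(impute_mean([100, None, 20, 160]))
```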
# ## Data cleaning checklist
#
# * Save original data
# * Identify missing data
# * Identify placeholder data (e.g. 0's for NA's)
# * Identify outliers
# * Check for overall plausibility and errors (e.g., typos, unreasonable ranges)
# * Identify highly correlated variables
# * Identify variables with (nearly) no variance
# * Identify variables with strange names or values
# * Check variable classes (e.g., characters vs. factors)
# * Remove/transform some variables (maybe your model does not like categorical variables)
# * Rename some variables or values (if not all data is useful)
# * Check some overall pattern (statistical/ numerical summaries)
# * Possibly center/scale variables
#
# Last update September 5, 2017
|
Week_1/NBB_Data_Munging_Wrangling.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
# # Breaking simple things with `battle_tested`
#
# While many will insist that, since they kept their code "simple" enough, nothing has room to go wrong, I'd like to show you some interesting findings from poking different libraries with the fuzzer.
from battle_tested import fuzz, battle_tested
def add_to_hello(a):
return 'hello '+a
add_to_hello('world')
fuzz(add_to_hello, seconds=1, keep_testing=True)
def add_to_hello_with_format(a):
return 'hello {}'.format(a)
add_to_hello_with_format('world')
fuzz(add_to_hello_with_format, seconds=1, keep_testing=True)
def count(a):
return len(a)
count('hello')
fuzz(count, seconds=1, keep_testing=True)
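# Under the hood, a fuzzer boils down to throwing varied inputs at a function
# and recording which ones raise; a toy version of that loop (my own sketch,
# not `battle_tested`'s actual machinery) might look like:

```python
import random
import string

def tiny_fuzz(fn, trials=200, seed=0):
    """Call fn with assorted argument types; collect (input, exception) pairs."""
    rng = random.Random(seed)
    samples = [None, 0, -1, 3.14, True, [], {}, set(),
               '', 'hello', b'bytes',
               ''.join(rng.choice(string.printable) for _ in range(8))]
    crashes = []
    for _ in range(trials):
        arg = rng.choice(samples)
        try:
            fn(arg)
        except Exception as exc:
            crashes.append((arg, type(exc).__name__))
    return crashes

# len() rejects None, numbers, and booleans, so those inputs raise TypeError
crashes = tiny_fuzz(len)
print({name for _, name in crashes})  # e.g. {'TypeError'}
```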
|
tutorials/breaking_simple_things.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# <h1 align="center"> PR#06 Dinamika Sistem dan Simulasi Nomor 2 Part II</h1>
# <h3>Group Members:</h3>
# <body>
# <ul>
# <li><NAME>. (13317007)</li>
# <li><NAME>. (13317039)</li>
# <li><NAME> (13317041)</li>
# </ul>
# Main work was done by <NAME>
# </body>
import numpy as np
import matplotlib.pyplot as plt
import scipy.signal as signal
from ipywidgets import interact, interactive, fixed, interact_manual , HBox, VBox, Label, Layout
import ipywidgets as widgets
# ### System Description
# <img src="./1.jpg" style="width:30%">
# <img src="./3.jpg" style="width:50%">
#
# #### 1. Input
# effort $e_v$; the input is assumed to be a step signal
#
# #### 2. Output
# $\omega_a$ and $\omega_b$
#
# #### 3. Parameters
# $R_a,L_a,K_g,K_1,K_m,N_1,N_2,J_2,c,b_r$
#PARAMETER WIDGET DEFINITIONS
Ra_slider = widgets.FloatSlider(
value=19.90,
min=0.1,
max=20.0,
step=0.1,
description='$R_a (\\Omega)$',
layout=Layout(width='80%', height='50px'),
style={'description_width': '200px'},
)
La_slider = widgets.FloatSlider(
value=19.90,
min=0.1,
max=20.0,
step=0.1,
description='$L_a (H)$',
layout=Layout(width='80%', height='50px'),
style={'description_width': '200px'},
)
Kg_slider = widgets.FloatSlider(
value=19.90,
min=0.1,
max=20.0,
step=0.1,
description='$K_g (\\frac {V}{rad.s^-1})$',
layout=Layout(width='80%', height='50px'),
style={'description_width': '200px'},
)
K1_slider = widgets.FloatSlider(
value=19.90,
min=0.1,
max=20.0,
step=0.1,
description='$K_1 (\\frac {V}{V})$',
layout=Layout(width='80%', height='50px'),
style={'description_width': '200px'},
)
Km_slider = widgets.FloatSlider(
value=19.90,
min=0.1,
max=20.0,
step=0.1,
description='$K_m (\\frac {N.m}{A})$',
layout=Layout(width='80%', height='50px'),
style={'description_width': '200px'},
)
N1_slider = widgets.FloatSlider(
value=19.90,
min=0.1,
max=20.0,
step=0.1,
description='$N1 Gear Ratio$',
layout=Layout(width='80%', height='50px'),
style={'description_width': '200px'},
)
N2_slider = widgets.FloatSlider(
value=19.90,
min=0.1,
max=20.0,
step=0.1,
description='$N2 Gear Ratio$',
layout=Layout(width='80%', height='50px'),
style={'description_width': '200px'},
)
J2_slider = widgets.FloatSlider(
value=19.90,
min=0.1,
max=20.0,
step=0.1,
description='$J_2 (\\frac {Nm}{rad.s^-2})$',
layout=Layout(width='80%', height='50px'),
style={'description_width': '200px'},
)
c_slider = widgets.FloatSlider(
value=19.90,
min=0.1,
max=20.0,
step=0.1,
description='$c (\\frac {Nm}{rad.s^-1})$',
layout=Layout(width='80%', height='50px'),
style={'description_width': '200px'},
)
br_slider = widgets.FloatSlider(
value=19.90,
min=0.1,
max=20.0,
step=0.1,
description='$b_r (\\frac {Nm}{rad.s^-1})$',
layout=Layout(width='80%', height='50px'),
style={'description_width': '200px'},
)
grid_button = widgets.ToggleButton(
value=True,
description='Grid',
icon='check',
layout=Layout(width='20%', height='50px',margin='10px 10px 10px 350px'),
style={'description_width': '200px'},
)
def plot_w1w2(Ra,La,Kg,K1,Km,N1,N2,J2,c,br,grid):
    # Build the state-space model of the system
A= [[(-Ra/La),(-Kg/La)],[Km/((N2/N1)**2 * J2),(-c+br)]]
B= [[(K1/La)],[0]]
C= [[0,1],[0,(N2/N1)]]
D= [[0],[0]]
sys1=signal.StateSpace(A,B,C,D)
t1,y1=signal.step(sys1)
    plt.title("Plot of $\\omega_1$ and $\\omega_2$")
plt.plot(t1,y1)
plt.grid(grid)
ui_em = widgets.VBox([Ra_slider,La_slider,Kg_slider,K1_slider,Km_slider,N1_slider,N2_slider,J2_slider,c_slider,br_slider,grid_button])
out_em = widgets.interactive_output(plot_w1w2, {'Ra':Ra_slider,'La':La_slider,'Kg':Kg_slider,'K1':K1_slider,'Km':Km_slider,'N1':N1_slider,'N2':N2_slider,'J2':J2_slider,'c':c_slider,'br':br_slider, 'grid':grid_button})
display(ui_em,out_em)
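# `scipy.signal.step` integrates the state-space model internally; as a
# cross-check, a step response can also be approximated with a hand-rolled
# forward-Euler loop. A sketch on a simple first-order test system (the
# matrices below are illustrative, not the motor model above):

```python
def euler_step_response(A, B, C, t_end=10.0, dt=1e-3):
    """Forward-Euler simulation of x' = A x + B u, y = C x for a unit step u."""
    n = len(A)
    x = [0.0] * n
    t = 0.0
    while t < t_end:
        # dx/dt = A x + B u with u = 1 (unit step)
        dx = [sum(A[i][j] * x[j] for j in range(n)) + B[i][0] for i in range(n)]
        x = [x[i] + dt * dx[i] for i in range(n)]
        t += dt
    return [sum(C[k][j] * x[j] for j in range(n)) for k in range(len(C))]

# First-order lag x' = -x + u: the unit-step output settles near 1.
y = euler_step_response([[-1.0]], [[1.0]], [[1.0]])
print(y)  # ≈ [1.0]
```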
|
Final_Task_2019/ExtraElectroMechanicalGearedMotor/PR06 Nomor 2 Part 2.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="FzZVTkFT5RBu"
# # PID Control Simulation
#
# This is a simulation of solving a simple one-dimensional control problem using a PID controller.
#
# For an overview of PID controllers, see the [PID controller](https://en.wikipedia.org/wiki/PID_controller) Wikipedia page.
#
# + [markdown] id="ozukbhNS6Iuz"
# ## Import libraries
#
# For this simulation, all we'll need is numpy and pyplot.
# + id="6JdpcNkjx6aF"
import numpy as np
import matplotlib.pyplot as plt
# + [markdown] id="4VJJ5Dz96RDa"
# ## Problem
#
# We set up a one dimensional control problem.
# + id="kAekwTsH87-U"
T = np.linspace(0, 10, num=1001) #Time array
SP = 100 # Target Set Point constant
np.random.seed(42) # Random Number Generator Seed
EF = np.random.normal(loc=0, scale=1, size=1001).cumsum() #External Force array
UPV = np.ones(1001)*SP + EF # Uncontrolled Process Variable array
# + [markdown] id="OQM8llPT6m4x"
# And we visualize the problem.
# + colab={"base_uri": "https://localhost:8080/", "height": 312} id="H7QV4B0f_iBB" outputId="724d9d50-1b46-4c9e-c22f-32922e21cbca"
fig, ax = plt.subplots()
ax.plot(T, UPV, linewidth=0.75, color="k", label='Uncontrolled Process Variable')
ax.axhline(y=SP, linewidth=0.75, color='r', ls=":", label='Target Set Point')
ax.set_ylim(bottom=80, top=130)
ax.set_title("Example PID Control Problem 1")
ax.set_xlabel("Time Scale")
ax.set_ylabel("Process Measurement Scale")
ax.legend()
# + [markdown] id="F4BvKesO6u_J"
# ## Solution
#
# Next we solve the problem using a PID controller.
# + id="_YBZwRJnJk-F"
Kp = 0.01 # Proportional gain constant
Ki = 0.01 # Integral gain constant
Kd = 0.01 # Derivative gain constant
dt = T[1]-T[0] # Time step size constant
E = SP - UPV # Error array
PE = np.concatenate((E[:1], E[:-1])) # Previous Errors array (error shifted one step back)
P = E # Proportional array
I = (E*dt).cumsum() # Integral array
D = (E-PE)/dt # Derivative array
Ut = Kp*P + Ki*I + Kd*D # Control Signal array
CPV = UPV + Ut # Controlled Process Variable array
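# The vectorized arrays above compute all three terms over the whole horizon at
# once; in a real controller the same P/I/D update runs one sample at a time.
# A minimal stateful version (a sketch, independent of the arrays above):

```python
class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measurement):
        """One control step: returns the control signal for this sample."""
        error = setpoint - measurement
        self.integral += error * self.dt            # accumulate the I term
        derivative = (error - self.prev_error) / self.dt  # finite-difference D
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

pid = PID(kp=0.5, ki=0.1, kd=0.0, dt=0.01)
print(pid.update(setpoint=100, measurement=90))  # ≈ 5.01 (0.5*10 + 0.1*0.1)
```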
# + [markdown] id="XR7tWbSX7FqJ"
# And we visualize the result.
# + colab={"base_uri": "https://localhost:8080/", "height": 312} id="HdCbDgFPZsYr" outputId="03e43d1f-7aa4-4f8e-8aab-366af9db15e6"
fig, ax = plt.subplots()
ax.plot(T, CPV, linewidth=0.75, color="g", label='Controlled Process Variable')
ax.plot(T, UPV, linewidth=0.75, color="k", label='Uncontrolled Process Variable')
ax.axhline(y=SP, linewidth=2, color='r', ls=":", label='Target Set Point')
ax.set_ylim(bottom=80, top=130)
ax.set_title("Example PID Control Solution 1")
ax.set_xlabel("Time Scale")
ax.set_ylabel("Process Measurement Scale")
ax.legend()
|
notebooks/PID_Control_Simulation.ipynb
|