# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Dimensionality Reduction & Data Visualization
# ### To-Dos:
#
# - [ ] PCA
# - [ ] ICE
# - [ ] UMAP
# - [ ] LIME
# - [ ] Interpreted RF
# - [ ] Confusion Matrix
# - [ ] Kmeans
# - [ ] Gaussian Process Cluster
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # STEP 2: From Categorical to One-Hot Vectors, Segmentation, and Fold Training
# We convert the categorical columns into one-hot vectors and segment the dataset into different training "folds". In this section, we compute the fold training directly instead of exporting the files.
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
# %matplotlib inline
from utils.trainFold import train, saveColumnsCategorical
# # Data Transformation
# ## Data Loading
df = pd.read_csv('datasets/train_cleaned.csv', index_col='MachineIdentifier')
# ## Categorical Data Analysis
columns_categorical = df.select_dtypes(include=['object']).columns
# Threshold on the number of distinct values a categorical column may have
h_threshold = 350
total = 0
columns_to_partition = []
columns_to_onehot = []
for c in columns_categorical:
values = df[c].nunique()
suf = ""
if (values > h_threshold):
columns_to_partition.append(c)
suf = ', PARTITION'
else:
columns_to_onehot.append(c)
total += values
print(c,': ',values,suf)
print('Total new vars: ' + str(total))
# ## Segmentation and train
# We are going to split the dataset based on the categorical variables with the most distinct values
columns_to_partition
asvGB = df.groupby(columns_to_partition)['ProductName'].count()
# +
# ... Maybe it is not a good idea to order it this way
#asvGB.sort_values(ascending=False, inplace=True)
# -
asvGBdf=asvGB.to_frame()
asvGBdf.columns=['count']
asvGBdf.sort_index(level=[0,1,2],inplace=True)
plt.plot(asvGBdf['count'].values)
# Now, let's select indicators for each group and create a dictionary. Each group will have approximately `NE` elements:
NE = 364000
asvGBDict = asvGBdf.to_dict()
# +
acc = 0
f=1 # fold
foldDict = {}
for key, value in asvGBDict['count'].items():
acc += value
if (acc > NE):
acc = 0
f += 1
foldDict[key]=f
# -
def setFold(row):
real_row = row.values
return foldDict[(real_row[0],real_row[1],real_row[2])]
df['fold'] = df[columns_to_partition].apply(axis=1, func=setFold)
sns.countplot(data=df, x='fold')
# We have all these folds to train. For each fold, let's build a split with 50% of the data to train the ensemble block:
nFolds = df['fold'].nunique()
scores = []
lcurves = []
for i in range(nFolds):
print('processing fold ',(i+1),' ... ')
fold_df = df[df['fold']==(i+1)]
columns_categorical = fold_df.select_dtypes(include=['object']).columns
fold_df_num=pd.get_dummies(data=fold_df,columns=columns_categorical)
m=fold_df.shape[0]
tm = int(m/2)
ensemble_df = fold_df_num[0:tm]
stack_df = fold_df[tm:]
# Save columns to partition values
saveColumnsCategorical(i+1, fold_df, columns_categorical)
# ensemble process
ensemble_df.drop(labels=['fold'],axis=1,inplace=True)
(score, lcurve) = train(i+1,ensemble_df)
scores.append(score)
lcurves.append(lcurve)
# stacking process
if (i == 0):
stack_complete_df = stack_df.copy()
else:
stack_complete_df = pd.concat([stack_complete_df, stack_df])
stack_complete_df.to_csv('datasets/train_stack.csv')
# ## Show the results of the training of each fold
def showFoldModelInfo(fold_id):
    print('scoring: ', str(scores[fold_id-1]))
    plt.plot(lcurves[fold_id-1][0,:], 'b', lcurves[fold_id-1][1,:], 'r')
showFoldModelInfo(1)
showFoldModelInfo(2)
showFoldModelInfo(3)
showFoldModelInfo(4)
showFoldModelInfo(5)
showFoldModelInfo(6)
showFoldModelInfo(7)
showFoldModelInfo(8)
showFoldModelInfo(9)
# # End of Analysis!
# After this execution we have the "folded" models and the "stacking" dataset!
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="5bUv9ftkhM_n" colab_type="text"
# This is Part 2 of the Deep Reinforcement Learning Notebook series. ***In this Notebook I introduce the first Policy Gradient method, known as REINFORCE***.
#
#
# The Notebook series is about Deep RL algorithms, so it excludes other techniques that can be used to learn functions in reinforcement learning. The series is also not exhaustive, i.e. it covers only the most widely used Deep RL algorithms.
#
# + [markdown] id="SeRaQFKfhivP" colab_type="text"
# The REINFORCE algorithm involves learning a parametrized policy that produces action probabilities given states. Agents use this policy directly to act in an environment. The key idea is that during learning, actions that resulted in good outcomes should become more probable (these actions are positively reinforced). Conversely, actions that resulted in bad outcomes should become less probable. If learning is successful, over the course of many iterations the action probabilities produced by the policy shift to a distribution that results in good performance in an environment. Action probabilities are changed by following the policy gradient, hence REINFORCE
# is known as a policy gradient algorithm.
# The algorithm needs three components:
# 1. a parametrized policy
# 2. an objective to be maximized
# 3. a method for updating the policy parameters
# + [markdown] id="jgHI0PI1wLB_" colab_type="text"
# # A Parametrized Policy
# A policy π is a function which maps states to action probabilities, which is used to sample an action a ∼ π(s). In REINFORCE, an agent learns a policy and uses this to act in an environment. A good policy is one that maximizes the cumulative discounted rewards. The key idea in the algorithm is to learn a good policy, and this means doing function approximation. Neural networks are powerful and flexible function approximators, so we can represent a policy using a deep neural network consisting of learnable parameters θ. This is often referred to as a policy network $π_θ$. We say that the policy is parametrized by θ. Each specific set of values of the parameters of the policy network represents a particular policy. Formulated in this way, the process of learning a good policy corresponds to searching for a good set of values for θ. For this reason, the policy network must be differentiable. We will see later in this notebook that the mechanism by which the policy is improved is through gradient ascent in parameter space.
#
# + [markdown] id="A0tHgqeWxHSf" colab_type="text"
# # THE OBJECTIVE FUNCTION
# An objective can be understood as an agent's goal, such as winning a game or getting the highest score possible. Recall that an agent acting in an environment generates a trajectory, which contains a sequence of rewards along with the states and actions. A trajectory is denoted τ = $s_0, a_0, r_0, ..., s_T, a_T, r_T$. The return of a trajectory, $R_t(τ)$, is defined as the discounted sum of rewards from time step t to the end of the trajectory, as shown below (Equation 2.1).
#
# $$R_t(\tau) = \sum_{t'=t}^{T} \gamma^{t'-t}\, r_{t'}$$
# Equation 2.1
#
# Here the sum starts from time step t, but the power that the discount factor γ is raised to starts from 0, hence we offset the exponent by the starting time step t using t′ – t.
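#
# A minimal sketch (not part of the original notebook; the helper name is illustrative) of Equation 2.1, making the t′ – t offset explicit:
def discounted_return(rewards, t, gamma=0.99):
    """R_t(tau): sum of gamma**(t_prime - t) * r[t_prime] for t_prime = t..T."""
    return sum(gamma ** (t_prime - t) * r
               for t_prime, r in enumerate(rewards[t:], start=t))

# e.g. discounted_return([1.0, 1.0, 1.0], t=0, gamma=0.9) == 1 + 0.9 + 0.81 == 2.71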
#
# A natural question is: why do we need the discounted return? Why not simply add up all the rewards? The answer is that the weighting of rewards is task-dependent: in some tasks every reward should count equally, while in others near-term rewards matter more than distant ones. Different values of gamma (γ) have different effects on the agent. If gamma is close to zero, the agent is short-sighted, i.e. it gives more priority to immediate rewards than to rewards obtained later in time. Conversely, if gamma is close to one, the agent is far-sighted, i.e. it also values rewards obtained far in the future. The appropriate value of gamma depends on the agent's task; setting gamma equal to one gives equal priority to all rewards.
#
# $$J(\pi_\theta) = \mathbb{E}_{\tau \sim \pi_\theta}\left[R(\tau)\right] = \mathbb{E}_{\tau \sim \pi_\theta}\left[\sum_{t=0}^{T} \gamma^{t} r_{t}\right]$$
# Equation 2.2
#
# Here the objective is the expected return over all complete trajectories generated by an agent. The above equation (Equation 2.2) says that the expectation is calculated over many trajectories sampled from a policy, that is, τ ∼ $π_θ$. This expectation approaches the true value as more samples are gathered, and it is tied to the specific policy $π_θ$ used.
#
# + [markdown] id="k9AR2md12eCc" colab_type="text"
# # THE POLICY GRADIENT
#
# The policy gradient is the mechanism by which action probabilities produced by the policy are changed. If the return $R_t(τ)$ > 0, then the probability of the action $π_θ (a_t|s_t)$ is increased; conversely, if the return $R_t$(τ) < 0, then the probability of the action $π_θ (a_t|s_t)$ is decreased. Over the course of many updates, the policy will learn to produce actions that result in high $R_t(τ)$.
#
# To maximize the objective, we perform gradient ascent on the policy parameters θ. To improve the objective, compute the policy gradient and use it to update the parameters as shown below:
#
# 1.Compute the policy gradient
#
# $$\nabla_\theta J(\pi_\theta) = \mathbb{E}_{\tau \sim \pi_\theta}\left[\sum_{t=0}^{T} R_t(\tau)\, \nabla_\theta \log \pi_\theta(a_t \mid s_t)\right]$$
# Equation 2.3
#
# The term $▽_θJ(π_θ)$ is known as the policy gradient. The term $π_θ(a_t|s_t)$ is the probability of the action taken by the agent at time step t. The action is sampled from the policy, $a_t$ ∼ $π_θ(s_t)$. The right-hand side of the equation states that the gradient of the log probability of the action with respect to θ is multiplied by the return $R_t(τ)$. Here I have skipped the proof of the above equation, but you can look at https://danieltakeshi.github.io/2017/03/28/going-deeper-into-reinforcement-learning-fundamentals-of-policy-gradients/
#
# 2.Update the policy parameters θ
#
# $$\theta \leftarrow \theta + \alpha\, \nabla_\theta J(\pi_\theta)$$
# Equation 2.4
#
# α is a scalar known as the learning rate; it controls the size of the parameter update
# + [markdown] id="tZW6MwSx5mYm" colab_type="text"
# The REINFORCE algorithm numerically estimates the policy gradient using Monte Carlo sampling.
# Monte Carlo sampling refers to any method which uses random sampling to generate data which is used to approximate a function. In essence, it is just “approximation with random sampling”
#
# The expectation $E_{τ∼π_θ}$ implies that as more trajectories τ are sampled using a policy $π_θ$ and averaged, the average approaches the actual policy gradient $▽_θJ(π_θ)$. Instead of sampling many trajectories per policy, we can sample just one, as shown in Equation 2.3. This is how the policy gradient is implemented in practice: as a Monte Carlo estimate over sampled trajectories.
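#
# A minimal sketch (not part of the original notebook; names are illustrative) of this single-trajectory Monte Carlo estimate, using a linear-softmax policy so that the gradient of the log-probability has a simple closed form:
import numpy as np

def softmax(z):
    z = z - np.max(z)
    e = np.exp(z)
    return e / e.sum()

def reinforce_gradient_estimate(theta, states, actions, returns):
    """Single-trajectory estimate of Equation 2.3 for pi(a|s) = softmax(theta @ s),
    where grad_theta log pi(a|s) = outer(onehot(a) - pi(s), s)."""
    grad = np.zeros_like(theta)
    for s, a, R_t in zip(states, actions, returns):
        probs = softmax(theta @ s)
        onehot = np.zeros_like(probs)
        onehot[a] = 1.0
        grad += R_t * np.outer(onehot - probs, s)   # R_t * grad log pi(a_t|s_t)
    return grad

# Gradient ascent step (Equation 2.4): theta = theta + alpha * reinforce_gradient_estimate(...)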
# + [markdown] id="ZpJd8Rhd-vEI" colab_type="text"
# **Disadvantages of the REINFORCE Algorithm and a few remedies**
#
# Disadvantage=>
#
# When using Monte Carlo sampling, the policy gradient estimate may have high variance because the returns can vary significantly from trajectory to trajectory. This is due to three factors. First, actions have some randomness because they are sampled from a probability distribution. Second, the starting state may vary per episode. Third, the environment transition function may be stochastic.
#
# Remedy=>
#
# One way to reduce the variance of the estimate is to modify the returns by subtracting a suitable action-independent baseline as shown below(Equation 2.5)
#
# $$\nabla_\theta J(\pi_\theta) = \mathbb{E}_{\tau \sim \pi_\theta}\left[\sum_{t=0}^{T} \left(R_t(\tau) - b\right) \nabla_\theta \log \pi_\theta(a_t \mid s_t)\right]$$
#
# Equation 2.5
#
# One option for the baseline is the value function V. This choice of baseline motivates the Actor-Critic algorithm (which we discuss later in this notebook series).
#
# An alternative is to use the mean return over the trajectory. Let $b=\frac{1}{T}\sum_{t=0}^T R_t(\tau)$.
#
# To see why this is useful, consider the case where all the rewards for an environment are negative. Without a baseline, even when an agent produces a very good action, it gets discouraged because the returns are always negative. Over time, this can still result in good policies since worse actions will get discouraged even more and thus indirectly increase the probabilities of better actions. However, it can lead to slower learning because probability adjustments can only be made in a single direction. Likewise, the converse happens in environments where all the rewards are positive. A more effective way is to both increase and decrease the action probabilities directly. This requires having both positive and negative returns.
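#
# A small sketch (not part of the original notebook) of baseline subtraction, using the mean return of the trajectory as the baseline b from Equation 2.5:
import numpy as np
returns = np.array([3.0, -1.0, 2.0])  # example R_t values for one trajectory
baseline = returns.mean()             # b = mean return over the trajectory
advantages = returns - baseline       # used in place of R_t in the gradient estimate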
# + [markdown] id="BM5OAz0t36rE" colab_type="text"
# # Here is the implementation of Reinforce Algorithm
# + [markdown] id="RsIrUXHeBiwu" colab_type="text"
# *(Figure: REINFORCE algorithm pseudocode; the steps are described in prose below.)*
#
# First, initialize the learning rate α and construct a policy network πθ with randomly initialized weights.
# Next, iterate for multiple episodes as follows: use the policy network πθ to generate a trajectory $τ = s_0, a_0, r_0, . . . , s_T, a_T, r_T$ for an episode. Then, for each time step t in the trajectory, compute the return $R_t(τ)$. Next, use $R_t(τ)$ to estimate the policy gradient. Sum the policy gradients for all time steps, then use the result to update the policy network parameters θ
# + [markdown] id="Pq9KIniXBnYZ" colab_type="text"
# # Below is the implementation of this algorithm in code on the simple Cart-Pole example, where the agent tries to balance a pole on a cart
# + [markdown] id="11IylqnLlfTd" colab_type="text"
# **The code below sets up the environment required to run the game.**
# + id="hmhPprax4UrV" colab_type="code" colab={}
#Installing the dependencies
# !pip install gym
# !apt-get install python-opengl -y
# !apt install xvfb -y
# + id="tqh1bvYz4YVB" colab_type="code" colab={}
# !pip install gym[atari]
# !pip install pyvirtualdisplay
# !pip install pyglet
# + id="RRc3yd6e4q80" colab_type="code" colab={}
from pyvirtualdisplay import Display
display = Display(visible=0, size=(1400, 900))
display.start()
# + id="8gyhug2U4rb7" colab_type="code" colab={}
# This code creates a virtual display to draw game images on.
# If you are running locally, just ignore it
import os
if type(os.environ.get("DISPLAY")) is not str or len(os.environ.get("DISPLAY"))==0:
# !bash ../xvfb start
# %env DISPLAY=:1
# + id="7wJ1Lye24tYJ" colab_type="code" colab={}
import gym
import tensorflow as tf
from keras.layers import Dense
from keras.models import Sequential
from keras.optimizers import Adam
import numpy as np
import matplotlib.pyplot as plt
# %matplotlib inline
from IPython import display
from gym import logger as gymlogger
from gym.wrappers import Monitor
gymlogger.set_level(40) # error only
import tensorflow as tf
import numpy as np
import random
import matplotlib
import matplotlib.pyplot as plt
# %matplotlib inline
import math
import glob
import io
import base64
from IPython.display import HTML
from IPython import display as ipythondisplay
# + [markdown] id="Nz0mpYHflr6M" colab_type="text"
# """
# Utility functions to enable video recording of gym environment and displaying it
# To enable video, just do "env = wrap_env(env)""
# """
#
# + id="V-zUBrYJ4u8x" colab_type="code" colab={}
def show_video():
mp4list = glob.glob('video/*.mp4')
if len(mp4list) > 0:
mp4 = mp4list[0]
video = io.open(mp4, 'r+b').read()
encoded = base64.b64encode(video)
ipythondisplay.display(HTML(data='''<video alt="test" autoplay
loop controls style="height: 400px;">
<source src="data:video/mp4;base64,{0}" type="video/mp4" />
</video>'''.format(encoded.decode('ascii'))))
else:
print("Could not find video")
def wrap_env(env):
env = Monitor(env, './video', force=True)
return env
# + [markdown] id="y9N8b_w7ltXw" colab_type="text"
# This part ensures the reproducibility of the code below by using a random seed.
#
# + id="RK55zKBE44I5" colab_type="code" colab={}
RANDOM_SEED=1
N_EPISODES=500
# random seed (reproducibility)
np.random.seed(RANDOM_SEED)
tf.random.set_seed(RANDOM_SEED)
# set the env
env=gym.make("CartPole-v1") # env to import
#env=wrap_env(env) #To enable video, just do "env = wrap_env(env)""
env.seed(RANDOM_SEED)
env.reset() # reset to env
# + [markdown] id="VSHb7-IKo77W" colab_type="text"
# # Defining the REINFORCE Class
# At initialization, the REINFORCE object sets a few parameters. First is the environment in which the model learns, along with its properties. Second are the two parameters of the REINFORCE algorithm: gamma (𝛾) and alpha (𝛼). Gamma, as discussed above, is the decay rate of past observations, and alpha is the learning rate by which the gradients affect the policy update. Lastly, it sets the learning rate for the neural network. In addition, this snippet reserves space for recording the observations during the game.
#
# + id="jrqQ4aWEo9Qv" colab_type="code" colab={}
class REINFORCE:
def __init__(self, env, path=None):
self.env=env #import env
self.state_shape=env.observation_space.shape # the state space
self.action_shape=env.action_space.n # the action space
self.gamma=0.99 # decay rate of past observations
self.alpha=1e-4 # learning rate in the policy gradient
self.learning_rate=0.01 # learning rate in deep learning
if not path:
self.model=self._create_model() #build model
else:
self.model=self.load_model(path) #import model
# record observations
self.states=[]
self.gradients=[]
self.rewards=[]
self.probs=[]
self.discounted_rewards=[]
self.total_rewards=[]
# + [markdown] id="NgGI99zVnMvA" colab_type="text"
# ## Defining useful utility methods
# The REINFORCE agent object uses a couple of utility methods. The first, hot_encode_action, encodes the actions into a one-hot format (read more about what one-hot encoding is and why it is a good idea here: https://medium.com/@michaeldelsole/what-is-one-hot-encoding-and-how-to-do-it-f0ae272f1179). The second, remember, records the observations of each step.
# + id="pDJ87pkXolTY" colab_type="code" colab={}
def remember(self, state, action, action_prob, reward):
'''stores observations'''
encoded_action=self.hot_encode_action(action)
self.gradients.append(encoded_action-action_prob)
self.states.append(state)
self.rewards.append(reward)
self.probs.append(action_prob)
def hot_encode_action(self, action):
'''encoding the actions into a binary list'''
action_encoded=np.zeros(self.action_shape, np.float32)
action_encoded[action]=1
return action_encoded
# + [markdown] id="ZVoY-MWZnOfs" colab_type="text"
# ## Creating a Neural Network Model
# I chose to use a neural network to implement this agent. The network is a simple feedforward network with a few hidden layers. The output layer incorporates a softmax activation. The softmax function takes in logit scores and outputs a vector that represents a probability distribution over the potential outcomes.
# + id="w9r_dv2lnt59" colab_type="code" colab={}
def _create_model(self):
''' builds the model using keras'''
model=Sequential()
# input shape is of observations
model.add(Dense(24, input_shape=self.state_shape, activation="relu"))
# add a relu layer
model.add(Dense(12, activation="relu"))
# output shape is according to the number of action
# The softmax function outputs a probability distribution over the actions
model.add(Dense(self.action_shape, activation="softmax"))
model.compile(loss="categorical_crossentropy",
optimizer=Adam(lr=self.learning_rate))
return model
# + [markdown] id="Ut6scLzGnTrR" colab_type="text"
# ## Action Selection
# The get_action method guides its action choice. It uses the neural network to generate a normalized probability distribution for a given state. Then, it samples its next action from this distribution.
# + id="3MaAw88rnzJw" colab_type="code" colab={}
def get_action(self, state):
'''samples the next action based on the policy probability distribution
of the actions'''
# transform state
state=state.reshape([1, state.shape[0]])
# get action probability
action_probability_distribution=self.model.predict(state).flatten()
# norm action probability distribution
action_probability_distribution/=np.sum(action_probability_distribution)
# sample action
action=np.random.choice(self.action_shape,1,
p=action_probability_distribution)[0]
return action, action_probability_distribution
# + [markdown] id="sw6EFyJynXcJ" colab_type="text"
# ## Constructing the Reward
# The REINFORCE model includes a discounting parameter, 𝛾, that governs the long-term reward calculation. Using gamma, rewards that arrive further in the future are discounted more heavily when they are credited back to earlier steps. This calculation ensures that state-action pairs in longer episodes accumulate larger returns than those in shorter ones. This function returns the normalized vector of discounted rewards.
# + id="TLlCX1UJn1vo" colab_type="code" colab={}
def get_discounted_rewards(self, rewards):
'''Use gamma to calculate the total reward discounting for rewards
Following - \gamma ^ t * Gt'''
discounted_rewards=[]
cumulative_total_return=0
# iterate the rewards backwards and and calc the total return
for reward in rewards[::-1]:
cumulative_total_return=(cumulative_total_return*self.gamma)+reward
discounted_rewards.insert(0, cumulative_total_return)
# normalize discounted rewards
mean_rewards=np.mean(discounted_rewards)
std_rewards=np.std(discounted_rewards)
norm_discounted_rewards=(discounted_rewards-
mean_rewards)/(std_rewards+1e-7) # avoiding zero div
return norm_discounted_rewards
# + [markdown] id="ip5ud197ndpA" colab_type="text"
# ## Updating the Policy
# Following each Monte-Carlo episode, the model uses the data collected to update the policy parameters. Recall the last step shown in the pseudo-code above. Here, training the neural network updates the policy. The network fits a vector of states to a vector of the gradients multiplied by the discounted rewards and the learning rate. This step facilitates the stochastic gradient ascent optimization.
#
# + id="u77cTqYcn5ml" colab_type="code" colab={}
def update_policy(self):
'''Updates the network.'''
# get X
states=np.vstack(self.states)
# get Y
gradients=np.vstack(self.gradients)
rewards=np.vstack(self.rewards)
discounted_rewards=self.get_discounted_rewards(rewards)
gradients*=discounted_rewards
gradients=self.alpha*np.vstack([gradients])+self.probs
history=self.model.train_on_batch(states, gradients)
self.states, self.probs, self.gradients, self.rewards=[], [], [], []
return history
# + [markdown] id="hgk3CMaanhYI" colab_type="text"
# ## Training the model
# This method creates a training loop for the model. Iterating through a set number of episodes, it uses the model to sample actions and play them. When an episode ends, the model uses the recorded observations to update the policy.
# + id="ohfFPk-34_g6" colab_type="code" colab={}
def train(self, episodes, rollout_n=1, render_n=50):
'''train the model
episodes - number of training iterations
rollout_n- number of episodes between policy update
render_n - number of episodes between env rendering '''
env=self.env
total_rewards=np.zeros(episodes)
for episode in range(episodes):
# each episode is a new game env
state=env.reset()
done=False
episode_reward=0 #record episode reward
while not done:
# play an action and record the game state & reward per episode
action, prob=self.get_action(state)
next_state, reward, done, _=env.step(action)
self.remember(state, action, prob, reward)
state=next_state
episode_reward+=reward
#print("Episode_reward{}".format(episode_reward))
'''
if episode%render_n==0: ## render env to visualize.
plt.imshow(env.render('rgb_array'))
display.clear_output(wait=True)
display.display(plt.gcf())'''
if done:
# update policy
if episode%rollout_n==0:
history=self.update_policy()
total_rewards[episode]=episode_reward
model=self.model
model.save('model.h5')
self.total_rewards=total_rewards
plt.figure(figsize=(4, 3))
display.clear_output(wait=True)
Agent=REINFORCE(env)
Agent.train(episodes=500)
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + colab={"base_uri": "https://localhost:8080/"} id="e1kbTu0hTTXh" executionInfo={"status": "ok", "timestamp": 1635758835389, "user_tz": -330, "elapsed": 5760, "user": {"displayName": "Sparsh Agarwal", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "13037694610922482904"}} outputId="e404b5ad-67aa-4b64-ec96-282faca8cc4d"
# !git clone https://github.com/rootlu/MetaHIN.git
# + colab={"base_uri": "https://localhost:8080/"} id="cVHali9JULmq" executionInfo={"status": "ok", "timestamp": 1635758835390, "user_tz": -330, "elapsed": 8, "user": {"displayName": "Sparsh Agarwal", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "13037694610922482904"}} outputId="357b9fe7-e2f6-4049-c00c-e6a805642083"
# %cd MetaHIN
# + id="HEVPQhC5UND-" executionInfo={"status": "ok", "timestamp": 1635758840047, "user_tz": -330, "elapsed": 4, "user": {"displayName": "Sparsh Agarwal", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "13037694610922482904"}}
# # !python code/main.py
# + colab={"base_uri": "https://localhost:8080/"} id="YAehd92jz_Sd" executionInfo={"status": "ok", "timestamp": 1635758854761, "user_tz": -330, "elapsed": 5, "user": {"displayName": "Sparsh Agarwal", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "13037694610922482904"}} outputId="25f76d1e-b0af-408a-9ce1-e69c52ee6d7c"
# %cd code
# + colab={"base_uri": "https://localhost:8080/"} id="J7NSG-BD03qx" executionInfo={"status": "ok", "timestamp": 1635758889623, "user_tz": -330, "elapsed": 826, "user": {"displayName": "Sparsh Agarwal", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "13037694610922482904"}} outputId="3e1ecef4-e74a-425f-e722-bf359aeb0abd"
# %%writefile Config.py
# coding: utf-8
# author: lu yf
# create date: 2019-11-20 19:46
config_db = {
'dataset': 'dbook',
# 'mp': ['ub'],
# 'mp': ['ub','ubab'],
'mp': ['ub','ubab','ubub'],
'use_cuda': False,
'file_num': 10, # each task contains 10 files
# user
'num_location': 453,
'num_fea_item': 2,
# item
'num_publisher': 1698,
'num_fea_user': 1,
'item_fea_len': 1,
# model setting
# 'embedding_dim': 32,
# 'user_embedding_dim': 32*1, # 1 features
# 'item_embedding_dim': 32*1, # 1 features
'embedding_dim': 32,
'user_embedding_dim': 32*1, # 1 features
'item_embedding_dim': 32*1, # 1 features
'first_fc_hidden_dim': 64,
'second_fc_hidden_dim': 64,
'mp_update': 1,
'local_update': 1,
'lr': 5e-4,
'mp_lr': 5e-3,
'local_lr': 5e-3,
'batch_size': 32, # for each batch, the number of tasks
'num_epoch': 50,
'neigh_agg': 'mean',
# 'neigh_agg': 'attention',
'mp_agg': 'mean',
# 'mp_agg': 'attention',
}
config_ml = {
'dataset': 'movielens',
# 'mp': ['um'],
# 'mp': ['um','umdm'],
# 'mp': ['um','umam','umdm'],
'mp': ['um','umum','umam','umdm'],
'use_cuda': False,
'file_num': 12, # each task contains 12 files for movielens
# item
'num_rate': 6,
'num_genre': 25,
'num_fea_item': 2,
'item_fea_len': 26,
# user
'num_gender': 2,
'num_age': 7,
'num_occupation': 21,
'num_zipcode': 3402,
'num_fea_user': 4,
# model setting
'embedding_dim': 32,
'user_embedding_dim': 32*4, # 4 features
'item_embedding_dim': 32*2, # 2 features
'first_fc_hidden_dim': 64,
'second_fc_hidden_dim': 64,
'mp_update': 1,
'local_update': 1,
'lr': 5e-4,
'mp_lr': 5e-3,
'local_lr': 5e-3,
'batch_size': 32, # for each batch, the number of tasks
'num_epoch': 100,
'neigh_agg': 'mean',
# 'neigh_agg': 'max',
'mp_agg': 'mean',
# 'mp_agg': 'attention',
}
config_yelp = {
'dataset': 'yelp',
# 'mp': ['ubub'],
'mp': ['ub','ubcb','ubtb','ubub'],
'use_cuda': False,
'file_num': 12, # each task contains 12 files
# item
'num_stars': 9,
'num_postalcode': 6127,
'num_fea_item': 2,
'item_fea_len': 2,
# user
'num_fans': 412,
'num_avgrating': 359,
'num_fea_user': 2,
# model setting
'embedding_dim': 32,
    'user_embedding_dim': 32*2, # 2 features
    'item_embedding_dim': 32*2, # 2 features
'first_fc_hidden_dim': 64,
'second_fc_hidden_dim': 64,
'mp_update': 1,
'local_update': 1,
'lr': 5e-4,
'mp_lr': 1e-3,
'local_lr': 1e-3,
'batch_size': 32, # for each batch, the number of tasks
'num_epoch': 50,
'neigh_agg': 'mean',
# 'neigh_agg': 'attention',
'mp_agg': 'mean',
# 'mp_agg': 'attention',
}
states = ["meta_training","warm_up", "user_cold_testing", "item_cold_testing", "user_and_item_cold_testing"]
# + colab={"base_uri": "https://localhost:8080/"} id="6UKX-8MU1Jl0" executionInfo={"status": "ok", "timestamp": 1635759066902, "user_tz": -330, "elapsed": 7722, "user": {"displayName": "Sparsh Agarwal", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "13037694610922482904"}} outputId="b28c30a8-dc34-4064-9525-baccfa773df6"
# # !cd ../data/movielens && gdown --id 1bTpwctr6ZgdosU8FU4sLqlC_Gt1-z77G
# + colab={"base_uri": "https://localhost:8080/"} id="XJRl4gr61pKW" outputId="2e4863b2-00a4-4af8-dac6-de883850d48e"
# !cd ../data/movielens && tar -xvf movielens.tar.bz2
# + id="Qo0a9QTnUShm" colab={"base_uri": "https://localhost:8080/", "height": 523} executionInfo={"status": "error", "timestamp": 1635758743571, "user_tz": -330, "elapsed": 39955, "user": {"displayName": "Sparsh Agarwal", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "13037694610922482904"}} outputId="dcb57f52-71a8-4c86-fa73-e68ba27fb028"
import gc
import glob
import random
import time
import numpy as np
import torch
from HeteML_new import HML
from DataHelper import DataHelper
from tqdm.notebook import tqdm
# from Config import states
# random.seed(13)
np.random.seed(13)
torch.manual_seed(13)
def training(model, model_save=True, model_file=None, device='cpu'):
print('training model...')
if config['use_cuda']:
model.cuda()
model.train()
batch_size = config['batch_size']
num_epoch = config['num_epoch']
for _ in range(num_epoch): # 20
loss, mae, rmse = [], [], []
ndcg_at_5 = []
start = time.time()
random.shuffle(train_data)
num_batch = int(len(train_data) / batch_size) # ~80
supp_xs_s, supp_ys_s, supp_mps_s, query_xs_s, query_ys_s, query_mps_s = zip(*train_data) # supp_um_s:(list,list,...,2553)
for i in range(num_batch): # each batch contains some tasks (each task contains a support set and a query set)
support_xs = list(supp_xs_s[batch_size * i:batch_size * (i + 1)])
support_ys = list(supp_ys_s[batch_size * i:batch_size * (i + 1)])
support_mps = list(supp_mps_s[batch_size * i:batch_size * (i + 1)])
query_xs = list(query_xs_s[batch_size * i:batch_size * (i + 1)])
query_ys = list(query_ys_s[batch_size * i:batch_size * (i + 1)])
query_mps = list(query_mps_s[batch_size * i:batch_size * (i + 1)])
_loss, _mae, _rmse, _ndcg_5 = model.global_update(support_xs,support_ys,support_mps,
query_xs,query_ys,query_mps,device)
loss.append(_loss)
mae.append(_mae)
rmse.append(_rmse)
ndcg_at_5.append(_ndcg_5)
print('epoch: {}, loss: {:.6f}, cost time: {:.1f}s, mae: {:.5f}, rmse: {:.5f}, ndcg@5: {:.5f}'.
format(_, np.mean(loss), time.time() - start,
np.mean(mae), np.mean(rmse), np.mean(ndcg_at_5)))
if _ % 10 == 0 and _ != 0:
testing(model, device)
model.train()
if model_save:
print('saving model...')
torch.save(model.state_dict(), model_file)
def testing(model, device='cpu'):
# testing
print('evaluating model...')
if config['use_cuda']:
model.cuda()
model.eval()
for state in states:
if state == 'meta_training':
continue
print(state + '...')
evaluate(model, state, device)
def evaluate(model, state, device='cpu'):
test_data = data_helper.load_data(data_set=data_set, state=state,
load_from_file=True)
supp_xs_s, supp_ys_s, supp_mps_s, query_xs_s, query_ys_s, query_mps_s = zip(*test_data) # supp_um_s:(list,list,...,2553)
loss, mae, rmse = [], [], []
ndcg_at_5 = []
for i in range(len(test_data)): # each task
_mae, _rmse, _ndcg_5 = model.evaluation(supp_xs_s[i], supp_ys_s[i], supp_mps_s[i],
query_xs_s[i], query_ys_s[i], query_mps_s[i],device)
mae.append(_mae)
rmse.append(_rmse)
ndcg_at_5.append(_ndcg_5)
print('mae: {:.5f}, rmse: {:.5f}, ndcg@5: {:.5f}'.
format(np.mean(mae), np.mean(rmse),np.mean(ndcg_at_5)))
# print('fine tuning...')
# model.train()
# for i in range(len(test_data)):
# model.fine_tune(supp_xs_s[i], supp_ys_s[i], supp_mps_s[i])
# model.eval()
# for i in range(len(test_data)): # each task
# _mae, _rmse, _ndcg_5 = model.evaluation(supp_xs_s[i], supp_ys_s[i], supp_mps_s[i],
# query_xs_s[i], query_ys_s[i], query_mps_s[i],device)
# mae.append(_mae)
# rmse.append(_rmse)
# ndcg_at_5.append(_ndcg_5)
# print('mae: {:.5f}, rmse: {:.5f}, ndcg@5: {:.5f}'.
# format(np.mean(mae), np.mean(rmse), np.mean(ndcg_at_5)))
if __name__ == "__main__":
# data_set = 'dbook'
data_set = 'movielens'
# data_set = 'yelp'
input_dir = '../data/'
output_dir = '../data/'
res_dir = '../res/'+data_set
load_model = False
if data_set == 'movielens':
from Config import config_ml as config
elif data_set == 'yelp':
from Config import config_yelp as config
elif data_set == 'dbook':
from Config import config_db as config
cuda_or_cpu = torch.device("cuda" if config['use_cuda'] else "cpu")
print (time.strftime("%Y-%m-%d %H:%M:%S", time.localtime()))
print(config)
model_filename = "{}/hml.pkl".format(res_dir)
data_helper = DataHelper(input_dir, output_dir, config)
# training model.
model_name = 'mp_update'
# model_name = 'mp_MAML'
# model_name = 'mp_update_multi_MAML'
# model_name = 'mp_update_no_f'
# model_name = 'no_MAML'
# model_name = 'no_MAML_with_finetuning'
hml = HML(config, model_name)
print('--------------- {} ---------------'.format(model_name))
if not load_model:
# Load training dataset
print('loading train data...')
train_data = data_helper.load_data(data_set=data_set,state='meta_training',load_from_file=True)
# print('loading warm data...')
# warm_data = data_helper.load_data(data_set=data_set, state='warm_up',load_from_file=True)
training(hml, model_save=True, model_file=model_filename,device=cuda_or_cpu)
else:
trained_state_dict = torch.load(model_filename)
hml.load_state_dict(trained_state_dict)
# testing
testing(hml, device=cuda_or_cpu)
print('--------------- {} ---------------'.format(model_name))
# + id="Ci9axlli0PWA"
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
# Note: Use standard account rather than admin account
# -
import os
import re
import pandas as pd
from selenium import webdriver
# +
### PARAMETERS ###
# Number of pages in course list
# Check query string of last page e.g. /application/en/courses-solr?page=39 and add 1 e.g. LAST_PAGE_EN = 40
LAST_PAGE_EN = 40
LAST_PAGE_FR = 40
# Bool for scraping FR descriptions
FRENCH = True
# -
### CREDENTIALS ###
USERNAME = os.environ.get('GCCAMPUS_USERNAME')
PASSWORD = os.environ.get('GCCAMPUS_PASSWORD')
assert USERNAME is not None, 'Missing USERNAME'
assert PASSWORD is not None, 'Missing PASSWORD'
browser = webdriver.Chrome()
# +
# Navigate to GCcampus and login
if FRENCH:
main_url = 'https://idp.csps-efpc.gc.ca/idp/login-fr.jsp'
else:
main_url = 'https://idp.csps-efpc.gc.ca/idp/Authn/UserPassword'
browser.get(main_url)
browser.find_element_by_id('j_username').send_keys(USERNAME)
browser.find_element_by_id('j_password').send_keys(PASSWORD)
browser.find_element_by_id('cbPrivacy').click()
browser.find_element_by_xpath("//button[@type='submit']").click()
# -
# Loop through course list and get all links
if FRENCH:
list_url = 'https://learn-apprendre.csps-efpc.gc.ca/application/fr/courses-solr?page='
else:
list_url = 'https://learn-apprendre.csps-efpc.gc.ca/application/en/courses-solr?page='
last_page = LAST_PAGE_FR if FRENCH else LAST_PAGE_EN
course_links = []
for i in range(last_page):
browser.get(list_url + str(i))
mars = browser.find_elements_by_css_selector('.field-items a')
for elem in mars:
course_links.append(elem.get_attribute('href'))
# Compile regex to extract course codes
# Optional dash-digit suffix (non-capturing): e.g. 'C451-2'
# Optional 'MODULE' suffix (non-capturing): e.g. 'G110 – MODULE 3'
regex = re.compile(pattern=r'[a-zA-Z]{1}\d{3}(?:[–-]{1}\d{1})?(?:\s{1}[–-]{1}\s{1}MODULE\s{1}\d{1})?')
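# Quick illustrative check of the regex above on made-up course titles (not scraped data):
print(regex.findall('INTRODUCTION TO DATA (C451-2)'))       # ['C451-2']
print(regex.findall('SECURITY AWARENESS G110 – MODULE 3'))  # ['G110 – MODULE 3']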
# Dict to map problematic / exceptional course codes
EXCEPTION_DICT = {
'H200': 'H200_MODULE 1'
}
# For each link in 'course_links', navigate to page, grab course description (HTML
# tags included), search for course code, and save to 'desc_dict'
desc_dict = {}
for i, link in enumerate(course_links):
print('{0} / {1}'.format(i + 1, len(course_links)))
browser.get(link)
# Grab description
desc = browser.find_elements_by_css_selector('.field-item[property="content:encoded"]')[0].get_attribute('innerHTML')
# Grab title and extract course code
title = browser.find_elements_by_css_selector('.page-title')[0].get_attribute('innerHTML')
title = title.upper()
title_search = regex.findall(title)
pkey = title_search[0] if title_search else link
pkey = pkey.replace('–', '-')
if pkey in EXCEPTION_DICT:
pkey = EXCEPTION_DICT[pkey]
desc_dict[pkey] = desc
# Store 'desc_dict' in DataFrame for processing
df = pd.DataFrame.from_dict(desc_dict, orient='index')
df.reset_index(level=0, inplace=True)
df.columns = ['course_code', 'desc']
# Transform relative links to absolute links
df['desc'] = df['desc'].astype(str).str.replace('href="/application/en/',
'href="https://learn-apprendre.csps-efpc.gc.ca/application/en/',
regex=False)
df['desc'] = df['desc'].astype(str).str.replace('href="/application/fr/',
'href="https://learn-apprendre.csps-efpc.gc.ca/application/fr/',
regex=False)
# Remove junk info
df['desc'] = df['desc'].astype(str).str.replace(' This link will open in a new window', ' ', regex=False)
df['desc'] = df['desc'].astype(str).str.replace(' Ce lien va ouvrir dans une nouvelle fenêtre', ' ', regex=False)
# Replace new line characters '\r' and '\n' with a space
df['desc'] = df['desc'].astype(str).str.replace('\r', ' ', regex=False)
df['desc'] = df['desc'].astype(str).str.replace('\n', ' ', regex=False)
# Remove superfluous spacing
df['desc'] = df['desc'].astype(str).str.replace(r' +', ' ', regex=True)
# Export to CSV
df.to_csv('scraped_{0}.csv'.format('fr' if FRENCH else 'en'), sep=',', encoding='utf-8')
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## Descargar letras de canciones
#
# Utilizando beautiful soup descargar todas las canciones de [Spinetta](https://es.wikipedia.org/wiki/Luis_Alberto_Spinetta) que hay en [letras.com](https://www.letras.com/spinetta/)
# +
from bs4 import BeautifulSoup
import requests
import os
letras_url = "https://www.letras.com"
def descargar_letras(artista):
# COMPLETAR
return
artista = "luis-alberto-spinetta"
descargar_letras(artista)
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# argv:
# - python
# - -m
# - ipykernel_launcher
# - -f
# - '{connection_file}'
# display_name: Python 3 (ipykernel)
# env: null
# interrupt_mode: signal
# language: python
# metadata:
# debugger: true
# name: python3
# ---
# # Multiprocessing Relay
#
# This notebook explores different relay configurations.
# We test pipelines with 1-1, 1-n, n-1 and n-m connections.
#
# +
# %load_ext autoreload
# %autoreload 2
# setting logger name manually, __name___ is '__main__' in ipynb
LOG = 'ktz.ipynb'
# +
import yaml
import logging
import logging.config
from pprint import pprint
def setup_logging():
with open('../conf/logging.yaml', mode='r') as fd:
conf = yaml.safe_load(fd)
conf['handlers']['console']['formatter'] = 'plain'
conf['loggers']['ktz'] = {'handlers': ['console']}
conf['loggers']['root']['level'] = 'DEBUG'
logging.config.dictConfig(conf)
return logging.getLogger(LOG)
log = setup_logging()
log.info('hello!')
# +
import time
from ktz.multiprocessing import Actor
from ktz.multiprocessing import Relay
class Producer(Actor):
def loop(self):
for x in range(3):
self.send(x)
class Consumer(Actor):
def recv(self, x):
time.sleep(1)
class Worker(Actor):
def recv(self, x):
time.sleep(1)
y = x + 10
self.send(y)
# -
# 1 - 1
relay = Relay(log=LOG)
relay.connect(Producer(), Consumer())
relay.start()
# +
# 1 - n
relay = Relay(log=LOG)
relay.connect(Producer(), [Consumer() for _ in range(5)])
relay.start()
# -
# n - 1
relay = Relay(log=LOG)
relay.connect([Producer() for _ in range(3)], Consumer())
relay.start()
# +
# 1 - n - 1
relay = Relay(maxsize=2, log=LOG)
relay.connect(
Producer(),
[Worker() for _ in range(2)],
Consumer(),
)
relay.start()
# +
# 1 - n - m - 1
relay = Relay(maxsize=2, log=LOG)
relay.connect(
Producer(),
[Worker() for _ in range(2)],
[Worker() for _ in range(3)],
Consumer(),
)
relay.start()
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# <img src="https://www.microsoft.com/en-us/research/uploads/prod/2020/05/Segmentation.png" width="400"/>
# # Customer Segmentation: Estimate Individualized Responses to Incentives
#
# Nowadays, business decision makers rely on estimating the causal effect of interventions to answer what-if questions about shifts in strategy, such as promoting a specific product with a discount, adding new features to a website, or increasing investment from a sales team. However, rather than learning whether to take a given action for all users, people are increasingly interested in understanding how different users respond to the alternatives. Identifying the characteristics of users with the strongest response to the intervention helps define rules for segmenting future users into groups. This can help optimize the policy to use the fewest resources and get the most profit.
#
# In this case study, we will use a personalized pricing example to explain how the [EconML](https://aka.ms/econml) library could fit into this problem and provide robust and reliable causal solutions.
#
# ### Summary
#
# 1. [Background](#background)
# 2. [Data](#data)
# 3. [Get Causal Effects with EconML](#estimate)
# 4. [Understand Treatment Effects with EconML](#interpret)
# 5. [Make Policy Decisions with EconML](#policy)
# 6. [Conclusions](#conclusion)
#
# # Background <a id="background"></a>
#
# <img src="https://cdn.pixabay.com/photo/2018/08/16/11/59/radio-3610287_960_720.png" width="400" />
#
# The global online media market has been growing fast over the years. Media companies are always interested in attracting more users into the market and encouraging them to buy more songs or become members. In this example, we'll consider a scenario where a media company runs an experiment that gives a small discount (10%, 20%, or 0%) to its current users based on their income level, in order to boost the likelihood of a purchase. The goal is to understand the **heterogeneous price elasticity of demand** for people with different income levels, learning which users would respond most strongly to a small discount. Furthermore, the end goal is to make sure that despite decreasing the price for some consumers, demand rises enough to boost the overall revenue.
#
# EconML’s `DML` based estimators can be used to take the discount variation in existing data, along with a rich set of user features, to estimate heterogeneous price sensitivities that vary with multiple customer features. Then, the `SingleTreeCateInterpreter` provides a presentation-ready summary of the key features that explain the biggest differences in responsiveness to a discount, and the `SingleTreePolicyInterpreter` recommends a policy on who should receive a discount in order to increase revenue (not only demand), which could help the company to set an optimal price for those users in the future.
# +
# Some imports to get us started
# Utilities
import os
import urllib.request
import numpy as np
import pandas as pd
# Generic ML imports
from sklearn.preprocessing import PolynomialFeatures
from sklearn.ensemble import GradientBoostingRegressor
# EconML imports
from econml.dml import LinearDML, ForestDML
from econml.cate_interpreter import SingleTreeCateInterpreter, SingleTreePolicyInterpreter
import matplotlib.pyplot as plt
# %matplotlib inline
# -
# # Data <a id="data"></a>
#
#
# The dataset* has ~10,000 observations and includes 9 continuous and categorical variables that represent user's characteristics and online behaviour history such as age, log income, previous purchase, previous online time per week, etc.
#
# We define the following variables:
#
# Feature Name|Type|Details
# :--- |:---|:---
# **account_age** |W| user's account age
# **age** |W|user's age
# **avg_hours** |W| the average hours user was online per week in the past
# **days_visited** |W| the average number of days user visited the website per week in the past
# **friend_count** |W| number of friends user connected in the account
# **has_membership** |W| whether the user had membership
# **is_US** |W| whether the user accesses the website from the US
# **songs_purchased** |W| the average songs user purchased per week in the past
# **income** |X| user's income
# **price** |T| the price user was exposed during the discount season (baseline price * small discount)
# **demand** |Y| songs user purchased during the discount season
#
# **To protect the privacy of the company, we use the simulated data as an example here. The data is synthetically generated and the feature distributions don't correspond to real distributions. However, the feature names have preserved their names and meaning.*
#
#
# The treatment and outcome are generated using the following functions:
# $$
# T =
# \begin{cases}
# 1 & \text{with } p=0.2, \\
# 0.9 & \text{with }p=0.3, & \text{if income}<1 \\
# 0.8 & \text{with }p=0.5, \\
# \\
# 1 & \text{with }p=0.7, \\
# 0.9 & \text{with }p=0.2, & \text{if income}\ge1 \\
# 0.8 & \text{with }p=0.1, \\
# \end{cases}
# $$
#
#
# \begin{align}
# \gamma(X) & = -3 - 14 \cdot \{\text{income}<1\} \\
# \beta(X,W) & = 20 + 0.5 \cdot \text{avg_hours} + 5 \cdot \{\text{days_visited}>4\} \\
# Y &= \gamma(X) \cdot T + \beta(X,W)
# \end{align}
#
#
# Import the sample pricing data
file_url = "https://msalicedatapublic.blob.core.windows.net/datasets/Pricing/pricing_sample.csv"
train_data = pd.read_csv(file_url)
# Data sample
train_data.head()
# Define estimator inputs
Y = train_data["demand"] # outcome of interest
T = train_data["price"] # intervention, or treatment
X = train_data[["income"]] # features
W = train_data.drop(columns=["demand", "price", "income"]) # confounders
# Get test data
X_test = np.linspace(0, 5, 100).reshape(-1, 1)
X_test_data = pd.DataFrame(X_test, columns=["income"])
# # Get Causal Effects with EconML <a id="estimate"></a>
# To learn the price elasticity on demand as a function of income, we fit the model as follows:
#
#
# \begin{align}
# log(Y) & = \theta(X) \cdot log(T) + f(X,W) + \epsilon \\
# log(T) & = g(X,W) + \eta
# \end{align}
#
#
# where $\epsilon, \eta$ are uncorrelated error terms.
#
# The models we fit here aren't an exact match for the data generation function above, but if they are a good approximation, they will allow us to create a good discount policy. Although the model is misspecified, we hope to see that our `DML` based estimators can still capture the right trend of $\theta(X)$ and that the recommended policy beats other baseline policies (such as always giving a discount) on revenue. Because of the mismatch between the data generating process and the model we're fitting, there isn't a single true $\theta(X)$ (the true elasticity varies with not only X but also T and W), but given how we generate the data above, we can still calculate the range of true $\theta(X)$ to compare against.
# +
# Define underlying treatment effect function given DGP
def gamma_fn(X):
return -3 - 14 * (X["income"] < 1)
def beta_fn(X):
return 20 + 0.5 * (X["avg_hours"]) + 5 * (X["days_visited"] > 4)
def demand_fn(data, T):
Y = gamma_fn(data) * T + beta_fn(data)
return Y
def true_te(x, n, stats):
if x < 1:
subdata = train_data[train_data["income"] < 1].sample(n=n, replace=True)
else:
subdata = train_data[train_data["income"] >= 1].sample(n=n, replace=True)
te_array = subdata["price"] * gamma_fn(subdata) / (subdata["demand"])
if stats == "mean":
return np.mean(te_array)
elif stats == "median":
return np.median(te_array)
elif isinstance(stats, int):
return np.percentile(te_array, stats)
# -
# Get the estimate and range of true treatment effect
truth_te_estimate = np.apply_along_axis(true_te, 1, X_test, 1000, "mean") # estimate
truth_te_upper = np.apply_along_axis(true_te, 1, X_test, 1000, 95) # upper level
truth_te_lower = np.apply_along_axis(true_te, 1, X_test, 1000, 5) # lower level
# ## Parametric heterogeneity
# First of all, we can try to learn a **linear projection of the treatment effect** assuming a polynomial form of $\theta(X)$. We use the `LinearDML` estimator. Since we don't have any priors on these models, we use generic gradient boosting tree estimators to learn the expected price and demand from the data.
# Get log_T and log_Y
log_T = np.log(T)
log_Y = np.log(Y)
# Train EconML model
est = LinearDML(
model_y=GradientBoostingRegressor(),
model_t=GradientBoostingRegressor(),
featurizer=PolynomialFeatures(degree=2, include_bias=False),
)
est.fit(log_Y, log_T, X=X, W=W, inference="statsmodels")
# Get treatment effect and its confidence interval
te_pred = est.effect(X_test)
te_pred_interval = est.effect_interval(X_test)
# Compare the estimate and the truth
plt.figure(figsize=(10, 6))
plt.plot(X_test.flatten(), te_pred, label="Sales Elasticity Prediction")
plt.plot(X_test.flatten(), truth_te_estimate, "--", label="True Elasticity")
plt.fill_between(
X_test.flatten(),
te_pred_interval[0],
te_pred_interval[1],
alpha=0.2,
label="90% Confidence Interval",
)
plt.fill_between(
X_test.flatten(),
truth_te_lower,
truth_te_upper,
alpha=0.2,
label="True Elasticity Range",
)
plt.xlabel("Income")
plt.ylabel("Songs Sales Elasticity")
plt.title("Songs Sales Elasticity vs Income")
plt.legend(loc="lower right")
# From the plot above, it's clear to see that the true treatment effect is a **nonlinear** function of income, with elasticity around -1.75 when income is smaller than 1 and a small negative value when income is larger than 1. The model fits a quadratic treatment effect, which is not a great fit. But it still captures the overall trend: the elasticity is negative and people are less sensitive to the price change if they have higher income.
# Get the final coefficient and intercept summary
est.summary(feat_name=X.columns)
# The `LinearDML` estimator can also return a summary of the coefficients and intercept of the final model, including point estimates, p-values and confidence intervals. From the table above, we notice that $income$ has a positive effect and ${income}^2$ has a negative effect, and both are statistically significant.
# ## Nonparametric Heterogeneity
# Since we already know the true treatment effect function is nonlinear, let us fit another model using `ForestDML`, which assumes a fully **nonparametric estimation of the treatment effect**.
# Train EconML model
est = ForestDML(
model_y=GradientBoostingRegressor(), model_t=GradientBoostingRegressor()
)
est.fit(log_Y, log_T, X=X, W=W, inference="blb")
# Get treatment effect and its confidence interval
te_pred = est.effect(X_test)
te_pred_interval = est.effect_interval(X_test)
# Compare the estimate and the truth
plt.figure(figsize=(10, 6))
plt.plot(X_test.flatten(), te_pred, label="Sales Elasticity Prediction")
plt.plot(X_test.flatten(), truth_te_estimate, "--", label="True Elasticity")
plt.fill_between(
X_test.flatten(),
te_pred_interval[0],
te_pred_interval[1],
alpha=0.2,
label="90% Confidence Interval",
)
plt.fill_between(
X_test.flatten(),
truth_te_lower,
truth_te_upper,
alpha=0.2,
label="True Elasticity Range",
)
plt.xlabel("Income")
plt.ylabel("Songs Sales Elasticity")
plt.title("Songs Sales Elasticity vs Income")
plt.legend(loc="lower right")
# We notice that this model fits much better than `LinearDML`: the 90% confidence interval correctly covers the true treatment effect estimate and captures the variation when income is around 1. Overall, the model shows that people with low income are much more sensitive to price changes than higher-income people.
# # Understand Treatment Effects with EconML <a id="interpret"></a>
# EconML includes interpretability tools to better understand treatment effects. Treatment effects can be complex, but oftentimes we are interested in simple rules that can differentiate between users who respond positively, users who remain neutral and users who respond negatively to the proposed changes.
#
# The EconML `SingleTreeCateInterpreter` provides interpretability by training a single decision tree on the treatment effects output by any of the EconML estimators. In the figure below, users shown in dark red respond strongly to the discount, while users shown in white respond only weakly to it.
intrp = SingleTreeCateInterpreter(include_model_uncertainty=True, max_depth=2, min_samples_leaf=10)
intrp.interpret(est, X_test)
plt.figure(figsize=(25, 5))
intrp.plot(feature_names=X.columns, fontsize=12)
# # Make Policy Decision with EconML <a id="policy"></a>
# We want to make policy decisions that maximize the **revenue** rather than the demand. In this scenario,
#
#
# \begin{align}
# Rev & = Y \cdot T \\
# & = \exp^{log(Y)} \cdot T\\
# & = \exp^{(\theta(X) \cdot log(T) + f(X,W) + \epsilon)} \cdot T \\
# & = \exp^{(f(X,W) + \epsilon)} \cdot T^{(\theta(X)+1)}
# \end{align}
#
#
# With a decrease in price, revenue will increase only if $\theta(X)+1<0$. Thus, we set `sample_treatment_costs=-1` here to learn **what kinds of customers we should give a small discount to in order to maximize the revenue**.
#
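# A quick numeric check with the illustrative elasticities seen above (≈ -1.75 for low income, a small negative value for high income; these are not re-estimated here): revenue scales with $T^{\theta(X)+1}$, so a 10% discount ($T$: 1 → 0.9) raises revenue only when $\theta(X)+1<0$.
for theta in [-1.75, -0.1]:
    print(theta, 0.9 ** (theta + 1))  # > 1 means the discount raises revenue, < 1 means it lowers it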
# The EconML library includes policy interpretability tools such as `SingleTreePolicyInterpreter` that take in a treatment cost and the treatment effects to learn simple rules about which customers to target profitably. In the figure below we can see that the model recommends giving a discount to people with income less than $0.985$ and keeping the original price for the others.
intrp = SingleTreePolicyInterpreter(risk_level=0.05, max_depth=2, min_samples_leaf=1, min_impurity_decrease=0.001)
intrp.interpret(est, X_test, sample_treatment_costs=-1, treatment_names=["Discount", "No-Discount"])
plt.figure(figsize=(25, 5))
intrp.plot(feature_names=X.columns, fontsize=12)
# Now, let us compare our policy with other baseline policies! Our model says which customers to give a small discount to, and for this experiment, we will set a discount level of 10% for those users. Because the model is misspecified we would not expect good results with large discounts. Here, because we know the ground truth, we can evaluate the value of this policy.
# define function to compute revenue
def revenue_fn(data, discount_level1, discount_level2, baseline_T, policy):
policy_price = baseline_T * (1 - discount_level1) * policy + baseline_T * (1 - discount_level2) * (1 - policy)
demand = demand_fn(data, policy_price)
rev = demand * policy_price
return rev
# +
policy_dic = {}
# our policy above
policy = intrp.treat(X)
policy_dic["Our Policy"] = np.mean(revenue_fn(train_data, 0, 0.1, 1, policy))
## previous strategy
policy_dic["Previous Strategy"] = np.mean(train_data["price"] * train_data["demand"])
## give everyone discount
policy_dic["Give Everyone Discount"] = np.mean(revenue_fn(train_data, 0.1, 0, 1, np.ones(len(X))))
## don't give discount
policy_dic["Give No One Discount"] = np.mean(revenue_fn(train_data, 0, 0.1, 1, np.ones(len(X))))
## follow our policy, but give a -10% discount to the group we don't recommend a discount for
policy_dic["Our Policy + Give Negative Discount for No-Discount Group"] = np.mean(revenue_fn(train_data, -0.1, 0.1, 1, policy))
## give everyone -10% discount
policy_dic["Give Everyone Negative Discount"] = np.mean(revenue_fn(train_data, -0.1, 0, 1, np.ones(len(X))))
# -
# get policy summary table
res = pd.DataFrame.from_dict(policy_dic, orient="index", columns=["Revenue"])
res["Rank"] = res["Revenue"].rank(ascending=False)
res
# **We beat the baseline policies!** Our policy gets the highest revenue except for the one that raises the price for the No-Discount group. That means our current baseline price is low, but the way we segment the users does help increase the revenue!
# # Conclusions <a id="conclusion"></a>
#
# In this notebook, we have demonstrated the power of using EconML to:
#
# * Estimate the treatment effect correctly even when the model is misspecified
# * Interpret the resulting individual-level treatment effects
# * Make policy decisions that beat the previous and baseline policies
#
# To learn more about what EconML can do for you, visit our [website](https://aka.ms/econml), our [GitHub page](https://github.com/microsoft/EconML) or our [documentation](https://econml.azurewebsites.net/).
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
# # Head Pose Image Database
#
# http://www-prima.inrialpes.fr/perso/Gourier/Faces/HPDatabase.html
# ## Purpose of using this database:
#
# Evaluate whether faces can be detected across a variety of head orientations.
#
# There are 30 images for each combination of pitch and yaw,
# so the detection rate can be evaluated per head orientation.
#
# ## Caveats for the evaluation:
#
# - The backgrounds are flat, so the detection rate against cluttered backgrounds cannot be evaluated.
# - The subjects are predominantly Western.
# - There is little variety in the lighting conditions.
# - There is little variation in facial expression (no open mouths, for example).
#
# We also evaluate how robust the face detection is to in-plane rotation.
#
# Some databases have already been normalized to fixed eye positions,
# so only by evaluating on data with added in-plane rotation can we assess face-detection performance in a realistic setting.
#
# Therefore, this script creates images with in-plane rotation added to the data
# and evaluates the detection rate on them.
#
# %matplotlib inline
import pandas as pd
import os
import glob
dataset = "headPose"
names = glob.glob("headPose/Person*/*.jpg")
names.sort()
degs=(-45, -40, -35, -30, -25, -20, -15, -10, -5, 0, 5, 10, 15, 20, 25, 30, 35, 40, 45)
import dlibCnnFace as faceDetector
for deg in degs:
faceDetector.processDatabase(dataset, names, deg)
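# For reference, a minimal sketch of the kind of in-plane rotation applied to each image (illustrative only; the actual rotation and detection are presumably handled inside dlibCnnFace.processDatabase):
# +
import cv2

def rotate_image(img, deg):
    """Rotate an image about its centre by deg degrees (in-plane rotation)."""
    h, w = img.shape[:2]
    M = cv2.getRotationMatrix2D((w / 2.0, h / 2.0), deg, 1.0)
    return cv2.warpAffine(img, M, (w, h))
# -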
# # Analysis of the headPose dataset after running the detection
dfs={}
for deg in degs:
dfs[deg] = pd.read_csv("log_headPose_%d.csv" % deg)
print deg, dfs[deg]["truePositives"].mean()
rates = [dfs[deg]["truePositives"].mean() for deg in degs]
falseRates = [dfs[deg]["falsePositives"].mean() for deg in degs]
data = {"degs":degs, "rates":rates, "falseRates":falseRates}
df = pd.DataFrame(data, columns=["degs", "rates", "falseRates"])
df.plot(x="degs", y="rates", grid=True)
df.plot(x="degs", y="falseRates", grid=True)
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
# warning: pdfminer uses python 2
from __future__ import division
# The UK government regularly releases information about the meetings that various ministers have with external organisations. You can find the releases by searching [here](https://www.gov.uk/government/publications). The hope is that by releasing information like this the public, journalists and other organisations can have some level of scrutiny over who members of parliament are meeting.
#
# Unfortunately, the information is released in a number of different formats and styles, making any sort of attempt to automatically catalogue it difficult. On the suggestion of [Transparency International](http://www.transparency.org.uk/), I have been attempting to automate the procedure. If you are interested in the output dataset, you can find it [here](https://github.com/ijmbarr/uk_minister_meetings), the full code I used to parse the documents can be found [here](https://github.com/ijmbarr/ti_intergrity_watch), (warning: it is a mess, currently undocumented and still in progress).
#
# To produce the output, I had to extract tabular information from a number of different formats: .csv, .doc, .pdf, .xlsx, .odt and .opd. Of these, by far the most difficult was the PDF file. While there are a number of different tools for extracting tabular information from pdf documents, such as [tabula](https://www.gov.uk/government/publications) and [pdftables](https://pdftables.readthedocs.io/en/latest/), neither of them quite worked on the documents I was looking at, so I decided to create my own.
#
# Reading around, you find that the best advice about parsing PDFs is [don't do it unless you have to](https://www.binpress.com/tutorial/manipulating-pdfs-with-python/167). The reason is that, as we will see, unlike formats like HTML or other markup languages, where tables and their internal structure are well defined, in PDFs we only have low-level information about the location of individual characters and lines on the page. We are going to have to use this information to infer how the table is structured.
#
# The results presented here aren't that polished yet, and I don't know how readily they will apply to other pdf formats. However, I hope this method might be of some use to others.
#
# ## The Problem
#
# Examples of the documents we would like to parse can be found [here](https://www.gov.uk/government/publications/hmt-ministers-meetings-hospitality-gifts-and-overseas-travel-1-april-to-30-june-2014), [here](https://www.gov.uk/government/uploads/system/uploads/attachment_data/file/362728/Ministerial_Quarterly_Transparency_information_-_January_to_March_2014.pdf) and [here](https://www.gov.uk/government/publications/department-of-culture-media-and-sport-ministerial-gifts-hospitality-travel-and-meetings-july-2014-to-march-2015). Let's start with the first one as an example. In order to access the content of the PDFs, I'm going to use [pdfminer](http://euske.github.io/pdfminer/index.html).
#
# The first job is to find out what sorts of objects exist within the PDF. pdfminer returns a list of LTPage objects describing each page. Each page can contain other objects: text, rectangles, lines, figures, etc. (the full hierarchy of objects returned by pdfminer is detailed [here](https://euske.github.io/pdfminer/programming.html#layout)). We can pull out the pages of the document using the following code:
# +
from pdfminer.pdfparser import PDFParser
from pdfminer.pdfdocument import PDFDocument
from pdfminer.pdfpage import PDFPage
from pdfminer.pdfpage import PDFTextExtractionNotAllowed
from pdfminer.pdfinterp import PDFResourceManager
from pdfminer.pdfinterp import PDFPageInterpreter
from pdfminer.layout import LAParams
from pdfminer.converter import PDFPageAggregator
def extract_layout_by_page(pdf_path):
"""
Extracts LTPage objects from a pdf file.
slightly modified from
https://euske.github.io/pdfminer/programming.html
"""
laparams = LAParams()
fp = open(pdf_path, 'rb')
parser = PDFParser(fp)
document = PDFDocument(parser)
if not document.is_extractable:
raise PDFTextExtractionNotAllowed
rsrcmgr = PDFResourceManager()
device = PDFPageAggregator(rsrcmgr, laparams=laparams)
interpreter = PDFPageInterpreter(rsrcmgr, device)
layouts = []
for page in PDFPage.create_pages(document):
interpreter.process_page(page)
layouts.append(device.get_result())
return layouts
example_file = "data/DH_Ministerial_gifts_hospitality_travel_and_external_meetings_Jan_to_Mar_2015.pdf"
page_layouts = extract_layout_by_page(example_file)
# -
len(page_layouts)
# We can now ask what types of objects are present on a page of the document:
objects_on_page = set(type(o) for o in page_layouts[3])
objects_on_page
# So it looks like we are only dealing with text and rectangles. The text exists as text boxes; unfortunately they don't always line up with the table columns in the way we would like, so we recursively extract each character from the text objects:
# +
import pdfminer
TEXT_ELEMENTS = [
pdfminer.layout.LTTextBox,
pdfminer.layout.LTTextBoxHorizontal,
pdfminer.layout.LTTextLine,
pdfminer.layout.LTTextLineHorizontal
]
def flatten(lst):
"""Flattens a list of lists"""
return [subelem for elem in lst for subelem in elem]
def extract_characters(element):
"""
Recursively extracts individual characters from
text elements.
"""
if isinstance(element, pdfminer.layout.LTChar):
return [element]
if any(isinstance(element, i) for i in TEXT_ELEMENTS):
return flatten([extract_characters(e) for e in element])
if isinstance(element, list):
return flatten([extract_characters(l) for l in element])
return []
# +
current_page = page_layouts[4]
texts = []
rects = []
# separate text and rectangle elements
for e in current_page:
if isinstance(e, pdfminer.layout.LTTextBoxHorizontal):
texts.append(e)
elif isinstance(e, pdfminer.layout.LTRect):
rects.append(e)
# extract the individual characters from the text elements
characters = extract_characters(texts)
# -
# Each element of the pdf is described by its bounding box. We can use this to visualise how the page is arranged:
# +
import matplotlib.pyplot as plt
from matplotlib import patches
# %matplotlib inline
def draw_rect_bbox((x0,y0,x1,y1), ax, color):
"""
    Draws an unfilled rectangle onto ax.
"""
ax.add_patch(
patches.Rectangle(
(x0, y0),
x1 - x0,
y1 - y0,
fill=False,
color=color
)
)
def draw_rect(rect, ax, color="black"):
draw_rect_bbox(rect.bbox, ax, color)
# +
xmin, ymin, xmax, ymax = current_page.bbox
size = 6
fig, ax = plt.subplots(figsize = (size, size * (ymax/xmax)))
for rect in rects:
draw_rect(rect, ax)
for c in characters:
draw_rect(c, ax, "red")
plt.xlim(xmin, xmax)
plt.ylim(ymin, ymax)
plt.show()
# -
# To pull out the information in table format, we need a way to identify the cells of the table, and to identify which cell each character belongs to.
#
# It would be nice to think that the rectangles here match the edges of the cells of the tables; however, exploring the layout you find that while some rectangles refer to cells, others seem to be used as line segments, and others as pixels. Some investigation suggests that the table can be defined by looking only at the rectangles which are "line-like", which we define as any rectangle narrower than two pixels, with an area greater than one pixel:
# +
def width(rect):
x0, y0, x1, y1 = rect.bbox
return min(x1 - x0, y1 - y0)
def area(rect):
x0, y0, x1, y1 = rect.bbox
return (x1 - x0) * (y1 - y0)
def cast_as_line(rect):
"""
    Replaces a rectangle with a line based on its longest dimension.
"""
x0, y0, x1, y1 = rect.bbox
if x1 - x0 > y1 - y0:
return (x0, y0, x1, y0, "H")
else:
return (x0, y0, x0, y1, "V")
lines = [cast_as_line(r) for r in rects
if width(r) < 2 and
area(r) > 1]
# -
# Plotting the page again, but only with the lines gives:
# +
xmin, ymin, xmax, ymax = current_page.bbox
size = 6
fig, ax = plt.subplots(figsize = (size, size * (ymax/xmax)))
for l in lines:
x0,y0,x1,y1,_ = l
plt.plot([x0, x1], [y0, y1], 'k-')
for c in characters:
draw_rect(c, ax, "red")
plt.xlim(xmin, xmax)
plt.ylim(ymin, ymax)
plt.show()
# -
# Now onto the main question: how do we assign characters to cells? We use a simple idea: for each character, find the lines that bound it from above, below, left and right, and define these four lines as its cell. We do this using the following function:
# +
def does_it_intersect(x, (xmin, xmax)):
return (x <= xmax and x >= xmin)
def find_bounding_rectangle((x, y), lines):
"""
Given a collection of lines, and a point, try to find the rectangle
made from the lines that bounds the point. If the point is not
bounded, return None.
"""
v_intersects = [l for l in lines
if l[4] == "V"
and does_it_intersect(y, (l[1], l[3]))]
h_intersects = [l for l in lines
if l[4] == "H"
and does_it_intersect(x, (l[0], l[2]))]
if len(v_intersects) < 2 or len(h_intersects) < 2:
return None
v_left = [v[0] for v in v_intersects
if v[0] < x]
v_right = [v[0] for v in v_intersects
if v[0] > x]
if len(v_left) == 0 or len(v_right) == 0:
return None
x0, x1 = max(v_left), min(v_right)
h_down = [h[1] for h in h_intersects
if h[1] < y]
h_up = [h[1] for h in h_intersects
if h[1] > y]
if len(h_down) == 0 or len(h_up) == 0:
return None
y0, y1 = max(h_down), min(h_up)
return (x0, y0, x1, y1)
# -
# The line segments that make up the cell boundaries aren't always complete - those small pixel-sized rectangles we threw away earlier leave gaps in them. Combined with the bounding boxes of characters sometimes lying outside their cell, we have to be careful about which point we use to find a character's cell. To make things robust I use three points: the bottom-left corner, the top-right corner and the centre. The box that bounds the majority of these points is the one chosen.
#
# We can now run this code over every character.
# +
from collections import defaultdict
import math
box_char_dict = {}
for c in characters:
# choose the bounding box that occurs the majority of times for each of these:
bboxes = defaultdict(int)
l_x, l_y = c.bbox[0], c.bbox[1]
bbox_l = find_bounding_rectangle((l_x, l_y), lines)
bboxes[bbox_l] += 1
c_x, c_y = math.floor((c.bbox[0] + c.bbox[2]) / 2), math.floor((c.bbox[1] + c.bbox[3]) / 2)
bbox_c = find_bounding_rectangle((c_x, c_y), lines)
bboxes[bbox_c] += 1
u_x, u_y = c.bbox[2], c.bbox[3]
bbox_u = find_bounding_rectangle((u_x, u_y), lines)
bboxes[bbox_u] += 1
# if all values are in different boxes, default to character center.
# otherwise choose the majority.
if max(bboxes.values()) == 1:
bbox = bbox_c
else:
bbox = max(bboxes.items(), key=lambda x: x[1])[0]
if bbox is None:
continue
if bbox in box_char_dict.keys():
box_char_dict[bbox].append(c)
continue
box_char_dict[bbox] = [c]
# -
# To check that this has worked, we can plot the page again, focusing on one particular cell:
# +
import random
xmin, ymin, xmax, ymax = current_page.bbox
size = 6
fig, ax = plt.subplots(figsize = (size, size * (ymax/xmax)))
for l in lines:
x0,y0,x1,y1,_ = l
plt.plot([x0, x1], [y0, y1], 'k-')
for c in characters:
draw_rect(c, ax, "red")
# plot the characters of a random cell as green
for c in random.choice(box_char_dict.values()):
draw_rect(c, ax, "green")
plt.xlim(xmin, xmax)
plt.ylim(ymin, ymax)
plt.show()
# -
# To capture empty cells, I choose a grid of points across the page and try to assign each of them to a cell. If the cell isn't already present in box_char_dict, it is created and left empty.
# +
xmin, ymin, xmax, ymax = current_page.bbox
for x in range(int(xmin), int(xmax), 10):
for y in range(int(ymin), int(ymax), 10):
bbox = find_bounding_rectangle((x, y), lines)
if bbox is None:
continue
if bbox in box_char_dict.keys():
continue
box_char_dict[bbox] = []
# -
# All that remains is to map the ordering of cells on the page onto a python data structure, and the ordering of characters within each cell onto a string. The two functions below carry this out:
# +
def chars_to_string(chars):
"""
Converts a collection of characters into a string, by ordering them left to right,
then top to bottom.
"""
if not chars:
return ""
rows = sorted(list(set(c.bbox[1] for c in chars)), reverse=True)
text = ""
for row in rows:
sorted_row = sorted([c for c in chars if c.bbox[1] == row], key=lambda c: c.bbox[0])
text += "".join(c.get_text() for c in sorted_row)
return text
def boxes_to_table(box_record_dict):
"""
Converts a dictionary of cell:characters mapping into a python list
of lists of strings. Tries to split cells into rows, then for each row
breaks it down into columns.
"""
boxes = box_record_dict.keys()
rows = sorted(list(set(b[1] for b in boxes)), reverse=True)
table = []
for row in rows:
sorted_row = sorted([b for b in boxes if b[1] == row], key=lambda b: b[0])
table.append([chars_to_string(box_record_dict[b]) for b in sorted_row])
return table
# -
# The results aren't bad:
boxes_to_table(box_char_dict)
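# If a pandas DataFrame is more convenient downstream, the list of lists can be wrapped directly (a sketch, assuming the first extracted row contains the column headers):
# +
import pandas as pd

table = boxes_to_table(box_char_dict)
table_df = pd.DataFrame(table[1:], columns=table[0])
table_df.head()
# -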
# ## Conclusions
# We have achieved what we set out to do: extract tabular information from a PDF into a data structure that we can use.
#
# Certain things in this approach get missed, such as distinctions between tables, and distinctions between headers and rows, but depending on the document these things can often be inferred from the structure.
#
# Hopefully this will be of use to someone. I had considered packaging the whole thing up into a module, but I am not sure how well this approach generalises to other documents. If there is sufficient interest I'm happy to consider spending the time on it.
#
# ## How it was Made
# This post was created as a jupyter notebook. You can download a copy [here](https://github.com/ijmbarr/parsing-pdfs).
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
import lightgbm as lgb
import xgboost as xgb
from sklearn.tree import DecisionTreeRegressor
from sklearn.model_selection import cross_val_score
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
from sklearn.experimental import enable_iterative_imputer
from sklearn.impute import IterativeImputer
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from statsmodels.tsa.stattools import adfuller
from statsmodels.tsa.seasonal import seasonal_decompose
from statsmodels.tsa.arima_model import ARIMA
owid_data = pd.read_csv("./1_owid/owid-covid-data.csv")
owid_data.info()
owid_data.columns
data = owid_data.dropna(subset=["total_cases"])
data = data.loc[data['location'] == "World"]
# to ensure that we didn't use future data in prediction
data_sort = data.sort_values(by=["date"])
# drop columns that are all NaNs
data_sort = data_sort.dropna(axis=1, how='all')
data_sort.shape
data_sort.info()
plt.figure(figsize=(10,5))
sns.violinplot(x = owid_data.continent, y=owid_data['total_cases'])
plt.xlabel('continent',fontsize=10)
plt.ylabel('total_cases',fontsize=10)
plt.show()
# ### ARIMA Model
def get_stationarity(timeseries):
# rolling statistics
rolling_mean = timeseries.rolling(window=12).mean()
rolling_std = timeseries.rolling(window=12).std()
# rolling statistics plot
original = plt.plot(timeseries, color='blue', label='Original')
mean = plt.plot(rolling_mean, color='red', label='Rolling Mean')
std = plt.plot(rolling_std, color='black', label='Rolling Std')
plt.legend(loc='best')
plt.title('Rolling Mean & Standard Deviation')
plt.show(block=False)
# Dickey–Fuller test:
result = adfuller(timeseries['total_cases'])
print('ADF Statistic: {}'.format(result[0]))
print('p-value: {}'.format(result[1]))
print('Critical Values:')
for key, value in result[4].items():
print('\t{}: {}'.format(key, value))
df_log = pd.DataFrame(np.log(data_sort["total_cases"].values), columns=["total_cases"])
rolling_mean = df_log.rolling(window=10).mean()
df_log_minus_mean = df_log - rolling_mean
df_log_minus_mean.dropna(inplace=True)
get_stationarity(df_log_minus_mean)
# > Now use the series in its original (non-log) scale
train_size = int(data_sort.shape[0]*0.75)
df = data_sort[["total_cases"]].reset_index(drop=True).iloc[:train_size]
df_rolling_mean = df.rolling(window=10).mean()
df_minus_mean = df - df_rolling_mean
df_minus_mean.dropna(inplace=True)
get_stationarity(df_minus_mean)
df_shift = df - df.shift()
df_shift.dropna(inplace=True)
get_stationarity(df_shift)
model = ARIMA(df, order=(2,1,2))
#model = ARIMA(df, order=(3,1,3))
results = model.fit(disp=-1)
plt.plot(df_shift[1:])
plt.plot(results.fittedvalues[1:], color='red')
predictions_ARIMA_diff = pd.Series(results.fittedvalues, copy=True)
predictions_ARIMA_diff_cumsum = predictions_ARIMA_diff.cumsum()
predictions_ARIMA = pd.Series(df["total_cases"].iloc[0], index=df.index)
predictions_ARIMA = predictions_ARIMA.add(predictions_ARIMA_diff_cumsum, fill_value=0)
plt.plot(df)
plt.plot(predictions_ARIMA)
print(mean_squared_error(df, predictions_ARIMA, squared=False))
pred = results.predict(start=train_size, end = len(data_sort)-1)
predictions_ARIMA_diff = pd.Series(pred, copy=True)
predictions_ARIMA_diff_cumsum = predictions_ARIMA_diff.cumsum()
predictions_ARIMA = pd.Series(df["total_cases"].iloc[-1], index=predictions_ARIMA_diff_cumsum.index)
predictions_ARIMA = predictions_ARIMA.add(predictions_ARIMA_diff_cumsum, fill_value=0)
print(mean_squared_error(data_sort[["total_cases"]].reset_index(drop=True).iloc[train_size:], predictions_ARIMA, squared=False))
results.plot_predict(1, data_sort.shape[0])
# ### Models Using Multiple Columns - Regression and GBDT Model
# > Notes: "iso_code": because it's same as location. Fillna: "total_vaccinations", "total_vaccinations_per_hundred" with 0.
data_model = data_sort.copy()
y_values = data_model['total_cases'].values[1:]
drop_cols = ['iso_code', 'location', 'date']
data_model[["total_vaccinations", "total_vaccinations_per_hundred", "reproduction_rate",
"new_cases_smoothed", "new_cases_smoothed_per_million", "new_deaths_smoothed",
"new_deaths_smoothed_per_million"]].describe()
data_model.fillna(0, inplace=True)
data_model.info()
x_values = data_model.drop(columns=drop_cols).iloc[:-1]
print(data_model.columns)
print(y_values.shape)
print(x_values.shape)
corr = data_model[[i for i in data_model.columns]].corr()
ax = sns.heatmap(
corr,
vmin=-1, vmax=1, center=0,
cmap=sns.diverging_palette(20, 220, n=200),
square=True
)
ax.set_xticklabels(
ax.get_xticklabels(),
rotation=45,
horizontalalignment='right'
);
#ax.set_title('Correlation matrix',fontsize=15)
plt.savefig('corr_worldwide.png')
import pandas as pd
import numpy as np
# %matplotlib inline
import matplotlib.pyplot as plt
import matplotlib.cm as cm
import scipy.cluster.hierarchy as hac
from scipy.spatial.distance import pdist
from scipy.cluster.hierarchy import fcluster
from sklearn.preprocessing import StandardScaler
from sklearn.preprocessing import MinMaxScaler
from sklearn.cluster import KMeans, DBSCAN
from sklearn.decomposition import PCA
from sklearn.metrics import silhouette_samples, silhouette_score
from sklearn.datasets import make_blobs
from sklearn.neighbors import NearestNeighbors
from gap_statistic import OptimalK
def plot_clusters(full_data, group_col):
# separate the feature columns from the cluster label columns
feature_columns = [colname for colname in list(full_data.columns) if 'Cluster' not in colname]
features_only = full_data[feature_columns]
# fit PCA to the whole data
fitted_pca = PCA().fit(features_only)
# take a sample of the whole data
df_sample = features_only.sample(full_data.shape[0], random_state=109)
# df_sample = full_data
# apply the PCA transform on the sample
pca_sample = pd.DataFrame(fitted_pca.transform(df_sample), columns = ["PCA{}".format(i) for i in range(len(df_sample.columns.values))])
pca_sample.index = df_sample.index
# re-include a cluster label for the pca data
pca_sample[group_col] = full_data.loc[pca_sample.index, group_col]
plt.figure(figsize=(11,8.5))
marker_types = [".", "v", "1", "^", "s", "p", "P", "3", "H", "<", "|", "_", "x", "*","d","X"]
marker_colors = np.concatenate([np.array(plt.cm.tab10.colors),np.array(plt.cm.Pastel1.colors)])
for i, (cluster_id, cur_df) in enumerate(pca_sample.groupby([group_col])):
pca1_scores = cur_df.iloc[:,0]
pca2_scores = cur_df.iloc[:,1]
plt.scatter(pca1_scores, pca2_scores, label=cluster_id, c=marker_colors[i].reshape(1,-1), marker=marker_types[i])
plt.xlabel("PC1 ({}%)".format(np.round(100*fitted_pca.explained_variance_ratio_[0],1)))
plt.ylabel("PC2 ({}%)".format(np.round(100*fitted_pca.explained_variance_ratio_[1],1)))
plt.legend()
plt.show()
return pca_sample
# +
scaler = MinMaxScaler()
data_mod = data_model.drop(columns=drop_cols)
scaler.fit(data_mod)
df_process = scaler.transform(data_mod)
df_process = pd.DataFrame(df_process, columns = data_mod.columns)
km6 = KMeans(n_clusters=6, n_init=46, random_state=109).fit(df_process)
df_process['Cluster6'] = km6.labels_
plot_clusters(df_process, 'Cluster6')
# -
train_size = int(x_values.shape[0]*0.75)
x_train = x_values.iloc[:train_size]
x_test = x_values.iloc[train_size:]
y_train = y_values[:train_size]
y_test = y_values[train_size:]
# +
from sklearn.linear_model import LinearRegression
from sklearn.metrics import roc_auc_score
from sklearn.metrics import roc_curve
result_df = pd.DataFrame()
lr = LinearRegression().fit(x_train, y_train)
lr_train_acc = mean_squared_error(y_train, lr.predict(x_train), squared=False)
lr_test_acc = mean_squared_error(y_test, lr.predict(x_test), squared=False)
print('Baseline regression: RMSE on train set is ', str(round(lr_train_acc,4)))
print('Baseline regression: RMSE on test set is ', str(round(lr_test_acc,4)))
# -
from sklearn.metrics import r2_score
print(r2_score(y_train, lr.predict(x_train)))
print(r2_score(y_test, lr.predict(x_test)))
lr.coef_
sub_col = ["total_cases", 'reproduction_rate',
'total_vaccinations', 'total_vaccinations_per_hundred', 'population',
'population_density', 'median_age', 'aged_65_older', 'aged_70_older',
'gdp_per_capita', 'extreme_poverty', 'cardiovasc_death_rate',
'diabetes_prevalence', 'female_smokers', 'male_smokers',
'handwashing_facilities', 'hospital_beds_per_thousand',
'life_expectancy']
# +
from sklearn.preprocessing import PolynomialFeatures
inters= PolynomialFeatures(degree=2,interaction_only=True,include_bias=False)
x_interaction_train = inters.fit_transform(x_train)
x_interaction_train_df = pd.DataFrame(x_interaction_train,columns = inters.get_feature_names(x_train.columns))
x_interaction_test = inters.fit_transform(x_test.copy())
x_interaction_test_df = pd.DataFrame(x_interaction_test,columns= inters.get_feature_names(x_test.columns))
# +
lr_inter = LinearRegression().fit(x_interaction_train_df,y_train)
inter_train_acc = mean_squared_error(y_train, lr_inter.predict(x_interaction_train_df), squared=False)
inter_test_acc = mean_squared_error(y_test, lr_inter.predict(x_interaction_test_df), squared=False)
print('Linear regression with interaction term: RMSE on train set is ', str(round(inter_train_acc,4)))
print('Linear regression with interaction term: RMSE on test set is ', str(round(inter_test_acc,4)))
# -
# > Simple Decision Tree with max depth = 3
# +
from sklearn.tree import DecisionTreeClassifier
tree0 = DecisionTreeClassifier(max_depth=3)
tree0.fit(x_train, y_train)
y_pred_test = tree0.predict(x_test)
y_pred_train = tree0.predict(x_train)
acc_test_tree = mean_squared_error(y_test, y_pred_test, squared=False)
acc_train_tree = mean_squared_error(y_train,y_pred_train, squared=False)
print("Decision Tree RMSE in train set is: " + str(round(acc_train_tree, 4)))
print("Decision Tree RMSE in test set is: " + str(round(acc_test_tree, 4)))
# +
import pydotplus
import sklearn.tree as tree
from IPython.display import Image
dt_feature_names = list(x_train.columns)
dt_target_names = [str(s) for s in y_train]
tree.export_graphviz(tree0, out_file='tree.dot',
feature_names=dt_feature_names, class_names=dt_target_names,
filled=True)
graph = pydotplus.graph_from_dot_file('tree.dot')
Image(graph.create_png())
# +
from sklearn.metrics import roc_auc_score
from sklearn.metrics import roc_curve
x_train_gbm, x_val_gbm, y_train_gbm, y_val_gbm = train_test_split(x_train, y_train, train_size=0.85, random_state=1)
params = {
'boosting_type': 'gbdt',
'objective': 'regression',
'learning_rate': 0.01,
'feature_fraction': 0.9,
'bagging_fraction': 0.9,
'bagging_freq': 5,
'max_depth': 100,
'seed': 1
}
lgb_train = lgb.Dataset(x_train_gbm, y_train_gbm, feature_name=list(x_train.columns))
lgb_eval = lgb.Dataset(x_val_gbm, y_val_gbm, feature_name=list(x_train.columns))
model2 = lgb.train(params, lgb_train, num_boost_round=10000, valid_sets=[lgb_train, lgb_eval],
valid_names=["train", "eval"], early_stopping_rounds=1500, verbose_eval=False)
y_pred_train = model2.predict(x_train, num_iteration=model2.best_iteration)
acc_lgb_train = mean_squared_error(y_train, y_pred_train, squared=False)
y_pred_test = model2.predict(x_test, num_iteration=model2.best_iteration)
acc_lgb_test = mean_squared_error(y_test, y_pred_test, squared=False)
print("RMSE in train set is: " + str(round(acc_lgb_train, 4)))
print("RMSE in test set is: " + str(round(acc_lgb_test, 4)))
# -
import shap
explainer = shap.TreeExplainer(model2)
shap_values = explainer.shap_values(x_train)
shap.summary_plot(shap_values, x_train, plot_type="bar", feature_names=x_train.columns)
shap_interaction_values = shap.TreeExplainer(model2).shap_interaction_values(x_train)
shap.summary_plot(shap_interaction_values,x_train,feature_names=x_train.columns)
shap.dependence_plot("total_cases", shap_values, x_train, interaction_index='new_cases_smoothed',feature_names=x_train.columns)
# +
variable_drop = []
accs_train_gbm = []
accs_test_gbm = []
accs_train_logis = []
accs_test_logis = []
accs_train_logis_l2 = []
accs_test_logis_l2 = []
for i in range(29):
print(i)
df_temp_train = x_train.copy()
df_temp_test = x_test.copy()
if i > 0:
x_train_feature = df_temp_train.drop(variable_drop, axis=1)
x_test_feature = df_temp_test.drop(variable_drop, axis=1)
else:
x_train_feature = df_temp_train.copy()
x_test_feature = df_temp_test.copy()
x_train_gbm, x_val_gbm, y_train_gbm, y_val_gbm = train_test_split(x_train_feature, y_train, train_size=0.85, random_state=1)
params = {
'boosting_type': 'gbdt',
'objective': 'regression',
'learning_rate': 0.01,
'feature_fraction': 0.9,
'bagging_fraction': 0.9,
'bagging_freq': 5,
'max_depth': 100,
'seed': 1
}
lgb_train = lgb.Dataset(x_train_gbm, y_train_gbm, feature_name=list(x_train_feature.columns))
lgb_eval = lgb.Dataset(x_val_gbm, y_val_gbm, feature_name=list(x_train_feature.columns))
model_fea = lgb.train(params, lgb_train, num_boost_round=10000, valid_sets=[lgb_train, lgb_eval],
valid_names=["train", "eval"], early_stopping_rounds=1500, verbose_eval=False)
y_pred_lgb_train = model_fea.predict(x_train_feature, num_iteration=model_fea.best_iteration)
acc_lgb_train = mean_squared_error(y_train, y_pred_lgb_train, squared=False)
y_pred_lgb_test = model_fea.predict(x_test_feature, num_iteration=model_fea.best_iteration)
acc_lgb_test = mean_squared_error(y_test, y_pred_lgb_test,squared=False)
accs_train_gbm.append(acc_lgb_train)
accs_test_gbm.append(acc_lgb_test)
lr = LinearRegression().fit(x_train_feature, y_train)
accs_train_logis_l2.append(mean_squared_error(y_train, lr.predict(x_train_feature), squared=False))
accs_test_logis_l2.append(mean_squared_error(y_test, lr.predict(x_test_feature), squared=False))
# drop
explainer = shap.TreeExplainer(model_fea)
shap_values = explainer.shap_values(x_train_feature)
mean_shap_feature_values = pd.DataFrame(shap_values,
columns=x_train_feature.columns).abs().mean(axis=0).sort_values(ascending=False)
variable_drop.append(list(mean_shap_feature_values.index)[-1])
# -
fig, ax = plt.subplots(figsize=(10, 6))
#ax.plot(range(len(accs_train_gbm)), accs_train_gbm, label="Training RMSE of LightGBM")
# ax.plot(range(len(accs_train_gbm)), accs_test_gbm, label="Test RMSE of LightGBM")
ax.plot(range(len(accs_train_gbm)), accs_train_logis_l2, label="Training RMSE of Linear Regression")
ax.plot(range(len(accs_train_gbm)), accs_test_logis_l2, label="Test RMSE of Linear Regression")
plt.legend(fontsize=12)
plt.xlabel("Number of variable dropped")
plt.ylabel("RMSE")
plt.grid(':', alpha=0.4)
plt.tight_layout()
plt.show()
fig, ax = plt.subplots(figsize=(10, 6))
ax.plot(range(len(accs_train_gbm)), accs_train_gbm, label="Training RMSE of LightGBM")
ax.plot(range(len(accs_train_gbm)), accs_test_gbm, label="Test RMSE of LightGBM")
# ax.plot(range(len(accs_train_gbm)), accs_train_logis_l2, label="Training RMSE of Linear Regression")
# ax.plot(range(len(accs_train_gbm)), accs_test_logis_l2, label="Test RMSE of Linear Regression")
plt.legend(fontsize=12)
plt.xlabel("Number of variable dropped")
plt.ylabel("RMSE")
plt.grid(':', alpha=0.4)
plt.tight_layout()
plt.show()
print(variable_drop)
print(len(variable_drop))
print(accs_train_gbm)
# +
val_list = ['new_deaths_smoothed_per_million', 'total_deaths', 'new_cases_smoothed_per_million', 'total_deaths_per_million', 'new_cases_smoothed', 'total_cases']
from sklearn.metrics import roc_auc_score
from sklearn.metrics import roc_curve
x_train_gbm, x_val_gbm, y_train_gbm, y_val_gbm = train_test_split(x_train[val_list], y_train, train_size=0.85, random_state=1)
params = {
'boosting_type': 'gbdt',
'objective': 'regression',
'learning_rate': 0.01,
'feature_fraction': 0.9,
'bagging_fraction': 0.9,
'bagging_freq': 5,
'max_depth': 100,
'seed': 1
}
lgb_train = lgb.Dataset(x_train_gbm, y_train_gbm, feature_name=list(x_train[val_list].columns))
lgb_eval = lgb.Dataset(x_val_gbm, y_val_gbm, feature_name=list(x_train[val_list].columns))
model2 = lgb.train(params, lgb_train, num_boost_round=10000, valid_sets=[lgb_train, lgb_eval],
valid_names=["train", "eval"], early_stopping_rounds=1500, verbose_eval=False)
y_pred_train = model2.predict(x_train[val_list], num_iteration=model2.best_iteration)
acc_lgb_train = mean_squared_error(y_train, y_pred_train, squared=False)
y_pred_test = model2.predict(x_test[val_list], num_iteration=model2.best_iteration)
acc_lgb_test = mean_squared_error(y_test, y_pred_test, squared=False)
print("RMSE in train set is: " + str(round(acc_lgb_train, 4)))
print("RMSE in test set is: " + str(round(acc_lgb_test, 4)))
# +
val_list = ['new_cases','new_deaths_smoothed_per_million', 'total_deaths', 'new_cases_smoothed_per_million', 'total_deaths_per_million', 'new_cases_smoothed', 'total_cases']
lr = LinearRegression().fit(x_train[val_list], y_train)
lr_train_acc = mean_squared_error(y_train, lr.predict(x_train[val_list]), squared=False)
lr_test_acc = mean_squared_error(y_test, lr.predict(x_test[val_list]), squared=False)
print('Baseline regression: RMSE on train set is ', str(round(lr_train_acc,4)))
print('Baseline regression: RMSE on test set is ', str(round(lr_test_acc,4)))
# +
from sklearn.linear_model import Lasso
lr = Lasso().fit(x_train[val_list], y_train)
lr_train_acc = mean_squared_error(y_train, lr.predict(x_train[val_list]), squared=False)
lr_test_acc = mean_squared_error(y_test, lr.predict(x_test[val_list]), squared=False)
print('Baseline regression: RMSE on train set is ', str(round(lr_train_acc,4)))
print('Baseline regression: RMSE on test set is ', str(round(lr_test_acc,4)))
# +
from sklearn.linear_model import Ridge
lr = Ridge().fit(x_train[val_list], y_train)
lr_train_acc = mean_squared_error(y_train, lr.predict(x_train[val_list]), squared=False)
lr_test_acc = mean_squared_error(y_test, lr.predict(x_test[val_list]), squared=False)
print('Baseline regression: RMSE on train set is ', str(round(lr_train_acc,4)))
print('Baseline regression: RMSE on test set is ', str(round(lr_test_acc,4)))
# -
# > The ML models do not perform very well here; there are few data points. Grouping by continent might work better, as sketched below.
owid_data.groupby(by=["continent"]).count()
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# <img src="http://hilpisch.com/tpq_logo.png" alt="The Python Quants" width="35%" align="right" border="0"><br>
# # Python for Finance (2nd ed.)
#
# **Mastering Data-Driven Finance**
#
# © Dr. Yves J. Hilpisch | The Python Quants GmbH
#
# <img src="http://hilpisch.com/images/py4fi_2nd_shadow.png" width="300px" align="left">
# # Trading Strategies (b)
import numpy as np
import pandas as pd
import datetime as dt
from pylab import mpl, plt
import warnings
warnings.simplefilter('ignore')
plt.style.use('seaborn')
mpl.rcParams['font.family'] = 'serif'
np.random.seed(1000)
# %matplotlib inline
# ## Linear OLS Regression
# ### The Data
raw = pd.read_csv('http://hilpisch.com/tr_eikon_eod_data.csv',
index_col=0, parse_dates=True).dropna()
raw.columns
symbol = 'EUR='
data = pd.DataFrame(raw[symbol])
data['returns'] = np.log(data / data.shift(1))
data.dropna(inplace=True)
data['direction'] = np.sign(data['returns']).astype(int)
data.head()
data['returns'].hist(bins=35, figsize=(10, 6));
# plt.savefig('../../images/ch15/strat_ml_01.png')
lags = 2
def create_lags(data):
global cols
cols = []
for lag in range(1, lags + 1):
col = 'lag_{}'.format(lag)
data[col] = data['returns'].shift(lag)
cols.append(col)
create_lags(data)
data.head()
data.dropna(inplace=True)
data.plot.scatter(x='lag_1', y='lag_2', c='returns',
cmap='coolwarm', figsize=(10, 6), colorbar=True)
plt.axvline(0, c='r', ls='--')
plt.axhline(0, c='r', ls='--');
# plt.savefig('../../images/ch15/strat_ml_02.png');
# ### Regression
from sklearn.linear_model import LinearRegression
model = LinearRegression()
data['pos_ols_1'] = model.fit(data[cols], data['returns']).predict(data[cols])
data['pos_ols_2'] = model.fit(data[cols], data['direction']).predict(data[cols])
data[['pos_ols_1', 'pos_ols_2']].head()
data[['pos_ols_1', 'pos_ols_2']] = np.where(
data[['pos_ols_1', 'pos_ols_2']] > 0, 1, -1)
data['pos_ols_1'].value_counts()
data['pos_ols_2'].value_counts()
data['pos_ols_1']
data['pos_ols_1'].diff()
(data['pos_ols_1'].diff() != 0).sum()
(data['pos_ols_2'].diff() != 0).sum()
data['strat_ols_1'] = data['pos_ols_1'] * data['returns']
data['strat_ols_2'] = data['pos_ols_2'] * data['returns']
data[['returns', 'strat_ols_1', 'strat_ols_2']].sum().apply(np.exp)
(data['direction'] == data['pos_ols_1']).value_counts()
(data['direction'] == data['pos_ols_2']).value_counts()
data[['returns', 'strat_ols_1', 'strat_ols_2']].cumsum(
).apply(np.exp).plot(figsize=(10, 6));
# plt.savefig('../../images/ch15/strat_ml_03.png');
# ## Clustering
from sklearn.cluster import KMeans
model = KMeans(n_clusters=2, random_state=0) # <1>
model.fit(data[cols])
data['pos_clus'] = model.predict(data[cols])
data['pos_clus']
data['pos_clus'] = np.where(data['pos_clus'] == 1, -1, 1)
data['pos_clus'].values
data[cols].iloc[:, 1]
plt.figure(figsize=(10, 6))
# x-axis: data[cols].iloc[:, 0] (lag_1); y-axis: data[cols].iloc[:, 1] (lag_2)
# color the points according to the clustering result
plt.scatter(data[cols].iloc[:, 0], data[cols].iloc[:, 1],
c=data['pos_clus'], cmap='coolwarm');
# plt.savefig('../../images/ch15/strat_ml_04.png');
data['strat_clus'] = data['pos_clus'] * data['returns']
data[['returns', 'strat_clus']].sum().apply(np.exp)
(data['direction'] == data['pos_clus']).value_counts()
data[['returns', 'strat_clus']].cumsum().apply(np.exp).plot(figsize=(10, 6));
# plt.savefig('../../images/ch15/strat_ml_05.png');
# ## Frequency Approach
def create_bins(data, bins=[0]):
global cols_bin
cols_bin = []
for col in cols:
col_bin = col + '_bin'
data[col_bin] = np.digitize(data[col], bins=bins)
cols_bin.append(col_bin)
data[cols].head()
create_bins(data)
# the lag_x_bin columns are 1 where lag_x is positive and 0 where it is negative
data[cols_bin + ['direction']].head()
cols_bin + ['direction']
grouped = data.groupby(cols_bin + ['direction'])
grouped.size()
# store the frequencies of each direction per bin combination in res
res = grouped['direction'].size().unstack(fill_value=0)
def highlight_max(s):
is_max = s == s.max()
return ['background-color: yellow' if v else '' for v in is_max]
res.style.apply(highlight_max, axis=1)
# go short (-1) when lag_1_bin + lag_2_bin equals 2, otherwise go long (1)
data['pos_freq'] = np.where(data[cols_bin].sum(axis=1) == 2, -1, 1)
(data['direction'] == data['pos_freq']).value_counts()
data['strat_freq'] = data['pos_freq'] * data['returns']
data[['returns', 'strat_freq']].sum().apply(np.exp)
data[['returns', 'strat_freq']].cumsum().apply(np.exp).plot(figsize=(10, 6));
# plt.savefig('../../images/ch15/strat_ml_06.png');
# ## Classification Algorithms
from sklearn import linear_model
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC
C = 1
models = {
'log_reg': linear_model.LogisticRegression(C=C),
'gauss_nb': GaussianNB(),
'svm': SVC(C=C)
}
def fit_models(data):
    # fit each model with data[cols_bin] as features and data['direction'] as the target
mfit = {model: models[model].fit(data[cols_bin], data['direction'])
for model in models.keys()}
fit_models(data)
def derive_positions(data):
for model in models.keys():
data['pos_' + model] = models[model].predict(data[cols_bin])
# predict the direction using the fitted models
# the result for each model is stored in:
# data['pos_log_reg']
# data['pos_gauss_nb']
# data['pos_svm']
derive_positions(data)
def evaluate_strats(data):
global sel
sel = []
for model in models.keys():
col = 'strat_' + model
data[col] = data['pos_' + model] * data['returns']
sel.append(col)
sel.insert(0, 'returns')
evaluate_strats(data)
sel.insert(1, 'strat_freq')
data[sel].sum().apply(np.exp)
data[sel].cumsum().apply(np.exp).plot(figsize=(10, 6));
# plt.savefig('../../images/ch15/strat_ml_07.png')
data = pd.DataFrame(raw[symbol])
data['returns'] = np.log(data / data.shift(1))
data['direction'] = np.sign(data['returns'])
lags = 5
create_lags(data)
data.dropna(inplace=True)
create_bins(data)
cols_bin
data[cols_bin].head()
data.dropna(inplace=True)
fit_models(data)
derive_positions(data)
evaluate_strats(data)
data[sel].sum().apply(np.exp)
data[sel].cumsum().apply(np.exp).plot(figsize=(10, 6));
# plt.savefig('../../images/ch15/strat_ml_08.png');
mu = data['returns'].mean()
v = data['returns'].std()
bins = [mu - v, mu, mu + v]
bins
create_bins(data, bins)
data[cols_bin].head()
fit_models(data)
derive_positions(data)
evaluate_strats(data)
data[sel].sum().apply(np.exp)
data[sel].cumsum().apply(np.exp).plot(figsize=(10, 6));
# plt.savefig('../../images/ch15/strat_ml_09.png')
# ### Sequential Train-Test Split
split = int(len(data) * 0.5)
train = data.iloc[:split].copy()
fit_models(train)
test = data.iloc[split:].copy()
derive_positions(test)
evaluate_strats(test)
test[sel].sum().apply(np.exp)
test[sel].cumsum().apply(np.exp).plot(figsize=(10, 6));
# plt.savefig('../../images/ch15/strat_ml_10.png');
# ### Randomized Train-Test Split
from sklearn.model_selection import train_test_split
train, test = train_test_split(data, test_size=0.5,
shuffle=True, random_state=100)
train = train.copy().sort_index()
train[cols_bin].head()
test = test.copy().sort_index()
fit_models(train)
derive_positions(test)
evaluate_strats(test)
test[sel].sum().apply(np.exp)
test[sel].cumsum().apply(np.exp).plot(figsize=(10, 6));
# plt.savefig('../../images/ch15/strat_ml_11.png');
# ## Deep Neural Network
# ### DNN with scikit-learn
from sklearn.neural_network import MLPClassifier
model = MLPClassifier(solver='lbfgs', alpha=1e-5,
hidden_layer_sizes=2 * [250], random_state=1)
# %time model.fit(data[cols_bin], data['direction'])
data['pos_dnn_sk'] = model.predict(data[cols_bin])
data['strat_dnn_sk'] = data['pos_dnn_sk'] * data['returns']
data[['returns', 'strat_dnn_sk']].sum().apply(np.exp)
data[['returns', 'strat_dnn_sk']].cumsum().apply(np.exp).plot(figsize=(10, 6));
# plt.savefig('../../images/ch15/strat_ml_12.png');
train, test = train_test_split(data, test_size=0.5, random_state=100)
train = train.copy().sort_index()
test = test.copy().sort_index()
# increased the number of hidden layers and hidden units
model = MLPClassifier(solver='lbfgs', alpha=1e-5, max_iter=500,
hidden_layer_sizes=3 * [500], random_state=1)
# use the randomized training data
# %time model.fit(train[cols_bin], train['direction'])
test['pos_dnn_sk'] = model.predict(test[cols_bin])
test['strat_dnn_sk'] = test['pos_dnn_sk'] * test['returns']
test[['returns', 'strat_dnn_sk']].sum().apply(np.exp)
test[['returns', 'strat_dnn_sk']].cumsum().apply(np.exp).plot(figsize=(10, 6));
# plt.savefig('../../images/ch15/strat_ml_13.png');
# ### DNN with Keras & TensorFlow Backend
import tensorflow as tf
from keras.layers import Dense
from keras.models import Sequential
def create_model():
np.random.seed(100)
tf.random.set_seed(100)
model = Sequential()
model.add(Dense(16, activation='relu', input_dim=lags))
model.add(Dense(16, activation='relu'))
model.add(Dense(1, activation='sigmoid'))
model.compile(loss='binary_crossentropy', optimizer='rmsprop',
metrics=['accuracy'])
return model
data_ = (data - data.mean()) / data.std()
data['direction_'] = np.where(data['direction'] == 1, 1, 0)
model = create_model()
# %%time
model.fit(data_[cols], data['direction_'],
epochs=50, verbose=False)
model.evaluate(data_[cols], data['direction_'])
pred = np.where(model.predict(data_[cols]) > 0.5, 1, 0)
pred[:10].flatten()
data['pos_dnn_ke'] = np.where(pred > 0, 1, -1)
data['strat_dnn_ke'] = data['pos_dnn_ke'] * data['returns']
data[['returns', 'strat_dnn_ke']].sum().apply(np.exp)
data[['returns', 'strat_dnn_ke']].cumsum(
).apply(np.exp).plot(figsize=(10, 6));
# plt.savefig('../../images/ch15/strat_ml_14.png');
mu, std = train.mean(), train.std()
train_ = (train - mu) / std
model = create_model()
train['direction_'] = np.where(train['direction'] > 0, 1, 0)
# %%time
model.fit(train_[cols], train['direction_'],
epochs=50, verbose=False)
test_ = (test - mu) / std
test['direction_'] = np.where(test['direction'] > 0, 1, 0)
model.evaluate(test_[cols], test['direction_'])
pred = np.where(model.predict(test_[cols]) > 0.5, 1, 0)
pred[:10].flatten()
test['pos_dnn_ke'] = np.where(pred > 0, 1, -1)
test['strat_dnn_ke'] = test['pos_dnn_ke'] * test['returns']
test[['returns', 'strat_dnn_sk', 'strat_dnn_ke']].sum().apply(np.exp)
test[['returns', 'strat_dnn_sk', 'strat_dnn_ke']].cumsum(
).apply(np.exp).plot(figsize=(10, 6));
# plt.savefig('../../images/ch15/strat_ml_15.png');
# <img src="http://hilpisch.com/tpq_logo.png" alt="The Python Quants" width="35%" align="right" border="0"><br>
#
# <a href="http://tpq.io" target="_blank">http://tpq.io</a> | <a href="http://twitter.com/dyjh" target="_blank">@dyjh</a> | <a href="mailto:training@tpq.io">training@tpq.io</a>
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # developer notes
#
# the documentation is built with jupyter book and sphinx
# ## run in an isolated environment
#
# this option requires nox for session management.
# ### build the html docs
# # !nox -s docs
# ### build the pdf docs
# # !nox -s docs -- pdf
# ## running tasks in your environment
#
# `nox` runs `doit`; `doit` manages the file-based tasks, and you can use it directly.
# List the available tasks.
# # !doit list
# Build the html content
# # !doit html
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import pandas as pd
# +
# CREATE WHOLE DF
# -
data = pd.read_csv('data/monday.csv', delimiter = ';', parse_dates=True)
data.head()
data.shape
data.dtypes
# Encode the customer nr and day into one variable
...
data["timestamp"] = pd.to_datetime(data["timestamp"])
# +
#ENTRANCE
f_datetime = data.groupby("customer_no")["timestamp"].first().reset_index()
# GroupBy.first returns the first recorded timestamp for each customer,
# i.e. the time at which they first appear in the data.
# -
one_min = pd.Timedelta(minutes=1)
for i in range(len(f_datetime)):
data = data.append({"timestamp": f_datetime['timestamp'].iloc[i] - one_min,
"customer_no": f_datetime['customer_no'].iloc[i],
"location": "entrance"},
ignore_index=True)
data.head()
data[data['customer_no'] == 1]
# Extract the hour from the timestamp
data["hour"] = data["timestamp"].dt.hour
data["time"] = data["timestamp"].dt.time
# FIND CUSTOMERS WHO NEVER CHECKED OUT (difference between all customers and checked-out customers):
check_c = set(data[data["location"]=="checkout"]["customer_no"].unique()) # number of checked out customers
all_c = set(data["customer_no"].unique()) # number of all customers
diff = all_c.difference(check_c) # difference between all & checked out
diff
# +
#FILL IN 'CHECKOUTS'
for cust in diff:
data = data.append({"timestamp":"2019-09-02 22:00:00","customer_no":cust,
"location":"checkout"}, ignore_index=True)
#check again for dtype of 'timestamp'
data["timestamp"] = pd.to_datetime(data["timestamp"])
# -
#NEW COL FOR TIME_SPEND IN LOC
data['time_spend'] = data.sort_values(['customer_no','timestamp']).groupby('customer_no')['timestamp'].diff().shift(-1)
data.head(10)
data = data.set_index('timestamp').groupby('customer_no').resample('1min').fillna('ffill').drop(columns='customer_no')
data.head(10)
# NEW COL FOR NEXT LOC
data['next_loc'] = data['location'].shift(-1)
data.head(10)
#next_loc for 'checkout'
data.loc[(data.location == 'checkout'), 'next_loc'] = 'checkout'
# + active=""
# #DROP ROWS WITH CHECKOUT FOR CROSSTAB
# data = data.drop(data[data['location'] == 'checkout'].index)
# -
# calculate transition matrix
transition_matrix = pd.crosstab(data['location'], #or 'before'
data['next_loc'],
normalize=0)
transition_matrix
import seaborn as sns
heat_map = sns.heatmap(transition_matrix)
# check that the matrix is a valid transition matrix (each row sums to 1)
transition_matrix.sum(axis=1)
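# As a quick illustration of what the matrix encodes, one possible (hypothetical) use is to sample a customer's next location from the row of their current location:
# +
import numpy as np

np.random.seed(42)
current_loc = 'fruit'  # any location present in the matrix index
next_location = np.random.choice(transition_matrix.columns,
                                 p=transition_matrix.loc[current_loc].values)
next_location
# -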
data.head(20)
# ### Data Exploration
# Calculate the time each customer spent in the market
data.reset_index(inplace=True)
data['entrance_time']=data.groupby('customer_no')['timestamp'].transform(min)
data['checkout_time']=data.groupby('customer_no')['timestamp'].transform(max)
data['time_spent'] = data['checkout_time'] - data['entrance_time']
data.head()
# ### Data Exploration
# Calculate the total number of customers in the supermarket over time.
data.groupby('location')['customer_no'].nunique().plot(kind='bar')
data_sns = data.loc[(data['location'] != 'entrance') & (data['location'] != 'checkout'),:].copy()
data_sns=data_sns.groupby(['hour','location']).agg({"customer_no": pd.Series.nunique})
data_sns.head()
data_sns.reset_index(inplace=True)
import seaborn as sns
sns.lineplot(
data=data_sns, x="hour", y="customer_no", legend="full",
hue="location", palette="mako_r")
# ### Revenue Estimate
# Estimate the total revenue for a customer
data['price']=data['location']
data.price = data.price.map( {'entrance':0 ,'checkout':0,'fruit':4,'spices':3,'dairy':5,'drinks':6} )
data.head()
pd.DataFrame(data.groupby(by=['customer_no'])["price"].sum())
pd.DataFrame(data.groupby(by=['location'])["price"].sum())
pd.DataFrame(data.groupby(by=['location'])["price"].sum().plot(kind='bar',x='location',y='price'))
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# # Replication of results for the HIV model - scalar QoI
#
# This is a notebook to recreate the results of Section 7 of
#
# W.N. Edeling, "On the deep active subspace method", (submitted), 2021.
#
# Here we will apply the deep active subspace method [1] to an HIV model consisting of 7 coupled ordinary differential equations [2], with 27 uncertain input parameters, see the article above for more information.
#
# This notebook contains the results for the scalar QoI case. For the vector-valued QoI, see `HIV_vector.ipynb`.
#
# ### Requirements
#
# The Deep Active Subspace method is implemented in [EasySurrogate](https://github.com/wedeling/EasySurrogate). To install, simply uncomment the `!pip install` line below. Furthermore, `scipy`, `seaborn` and `pandas` are also required.
#
# [1] Tripathy, R., & Bilionis, I. (2019, August). Deep active subspaces: A scalable method for high-dimensional uncertainty propagation. In International Design Engineering Technical Conferences and Computers and Information in Engineering Conference (Vol. 59179, p. V001T02A074). American Society of Mechanical Engineers.
#
# [2] Loudon, T., & Pankavich, S. (2017). Mathematical analysis and dynamic active subspaces for a long term model of HIV. Mathematical Biosciences and Engineering, 14(3), 709-733.
# +
# #!pip install easysurrogate==0.18
# -
import numpy as np
import matplotlib.pyplot as plt
import easysurrogate as es
from scipy import linalg
import pandas as pd
import seaborn as sns
# select the seismic color scheme
plt.rcParams['image.cmap'] = 'seismic'
# ### EasySurrogate campaign
#
# EasySurrogate's basic object is called a `campaign`, which handles the data.
# Create EasySurrogate campaign
campaign = es.Campaign()
# ### Load training data
#
# Here we use the campaign to load the training data, which is precomputed and stored in the `my_samples.hdf5` file. We also load the model gradients from https://github.com/paulcon/as-data-sets/tree/master/HIV to compute the reference (original) active subspace.
# +
##########################
# Generate training data #
##########################
# number of inputs
D = 27
# the times (in days) at which the HIV model was sampled
times = np.array([5, 15, 24, 38, 40, 45, 50, 55, 65, 90, 140, 500, 750,
1000, 1600, 1800, 2000, 2200, 2400, 2800, 3400])
T = times.size
# Use derivative data from https://github.com/paulcon/as-data-sets/tree/master/HIV
derivatives = pd.read_csv('./gradients.csv').values[:, 1:]
derivatives = derivatives.reshape([-1, T, D])
# Load HDF5 training data using the Campaign
data = campaign.load_hdf5_data(file_path='my_samples.hdf5')
# input parameters in [-1, 1]
params = data['inputs']
# output (T-cell counts at times)
samples = data['outputs']
# time index at which to construct an active subspace
I = 5
samples = samples[:, I].reshape([-1, 1])
derivatives = derivatives[:, I, :]
# scale the derivatives for consistency with the standardized ANN and DAS results
derivatives /= np.std(samples)
# -
# ### Select $d$
#
# We select $d=1$, i.e. we are constructing a 1D active subspace.
########################################
# choose the active subspace dimension #
########################################
d = 1
# ### Train an (unconstrained) artificial neural network
#
# We train a standard artificial neural network. The inputs are already normalized to lie within $[-1, 1]$, and we standardize the output.
ann_uc_surrogate = es.methods.ANN_Surrogate()
# train vanilla ANN. The input parameters are already scaled to [-1, 1], so no need to
# standardize these
ann_uc_surrogate.train(params, samples,
n_iter=10000, n_layers=4, n_neurons=100, test_frac = 0.1,
batch_size = 64, standardize_X=False, standardize_y=True)
# ### Compute the original active subspace of the unconstrained ANN
# +
# Number of Monte Carlo samples
n_mc = params.shape[0]
# gradient matrix for the ANN
C_ann_uc = 0.0
ann_uc_samples = np.zeros(n_mc)
# compute the derivative of the neural net output for every input
for i, param in enumerate(params):
# construct the C matrix
df_dx = ann_uc_surrogate.derivative(param, norm=False)
C_ann_uc += np.dot(df_dx, df_dx.T) / n_mc
# store predictions for later
ann_uc_samples[i] = ann_uc_surrogate.predict(param)
# Solve eigenproblem
eigvals_C_ann_uc, eigvecs_C_ann_uc = linalg.eigh(C_ann_uc)
# Sort the eigensolutions in the descending order of eigenvalues
order_ann_uc = eigvals_C_ann_uc.argsort()[::-1]
eigvals_C_ann_uc = eigvals_C_ann_uc[order_ann_uc]
eigvecs_C_ann_uc = eigvecs_C_ann_uc[:, order_ann_uc]
R_1 = eigvecs_C_ann_uc[:, 0:d]
y_ann_uc = np.dot(R_1.T, params.T).T
# -
# ### Train a (constrained) artificial neural network
#
# We train a constrained artificial neural network without enforced orthonormality, but with $d$ neurons in the first hidden layer. The inputs are already normalized to lie within $[-1, 1]$, and we standardize the output.
# +
##########################
# Train an ANN surrogate #
##########################
ann_surrogate = es.methods.ANN_Surrogate()
# train constrained ANN
ann_surrogate.train(params, samples,
n_iter=10000, n_layers=4,
# use just d neurons in the first hidden layer
n_neurons=[d, 100, 100], test_frac = 0.1,
                    # turn off bias in the first layer (optional, brings it closer to the DAS network)
bias=[False, True, True, True],
batch_size = 64, standardize_X=False, standardize_y=True)
# -
# ### Compute the original active subspace of the constrained ANN
# +
# the (non-orthonormal) weight matrix of the ANN
M_1 = ann_surrogate.neural_net.layers[1].W
# gradient matrix for the ANN
C_ann = 0.0
ann_samples = np.zeros(n_mc)
# compute the derivative of the neural net output for every input
for i, param in enumerate(params):
# construct the C matrix
df_dx = ann_surrogate.derivative(param, norm=False)
C_ann += np.dot(df_dx, df_dx.T) / n_mc
# store predictions for later
ann_samples[i] = ann_surrogate.predict(param)
# Solve eigenproblem
eigvals_C_ann, eigvecs_C_ann = linalg.eigh(C_ann)
# Sort the eigensolutions in the descending order of eigenvalues
order_ann = eigvals_C_ann.argsort()[::-1]
eigvals_C_ann = eigvals_C_ann[order_ann]
eigvecs_C_ann = eigvecs_C_ann[:, order_ann]
# orthonormal projection matrix extracted from the (constrained) ANN
V_1 = eigvecs_C_ann[:, 0:d]
y_ann = np.dot(V_1.T, params.T).T
# -
# ### Compute the reference active subspace
#
# Here we compute the reference active subspace, by using the derivative data from https://github.com/paulcon/as-data-sets/tree/master/HIV
# +
C_ref = 0.0
for i in range(derivatives.shape[0]):
C_ref += np.dot(derivatives[i].reshape([-1,1]), derivatives[i].reshape([1, -1])) / n_mc
eigvals_ref, eigvecs_ref = linalg.eigh(C_ref)
# Sort the eigensolutions in the descending order of eigenvalues
order_ref = eigvals_ref.argsort()[::-1]
eigvals_ref = eigvals_ref[order_ref]
eigvecs_ref = eigvecs_ref[:, order_ref]
# -
# ### Train a deep active subspace network
#
# Below we train a deep active subspace network, using $d=1$ in the DAS layer.
# +
#####################
# train DAS network #
#####################
das_surrogate = es.methods.DAS_Surrogate()
das_surrogate.train(params, samples, d, n_iter=10000, n_layers=4, n_neurons=100, test_frac = 0.1,
batch_size = 64, standardize_X=False, standardize_y=True)
# -
# ### Compute the original active subspace of the DAS network
# +
# the gradient matrix of the DAS network, computed using the classical AS method
C_das = 0.0
# the MC approximation of C_1 = (df/dh)(df/dh)^T
C_1 = 0.0
# Compute C1 and C_das
das_samples = np.zeros(n_mc)
for i, param in enumerate(params):
# compute the derivative of f at the input layer (needed for C_das)
df_dx = das_surrogate.derivative(param, norm=False)
# store predictions for later
das_samples[i] = das_surrogate.predict(param)
# derivative of f in the DAS layer (needed for C_1)
df_dh = das_surrogate.neural_net.layers[1].delta_hy.reshape([-1,1])
# update C_1 and C_das
C_1 += np.dot(df_dh, df_dh.T) / n_mc
C_das += np.dot(df_dx, df_dx.T) / n_mc
# solve eigenvalue problem for C_das
eigvals_C_das, eigvecs_C_das = linalg.eigh(C_das)
# Sort the eigensolutions in the descending order of eigenvalues
order = eigvals_C_das.argsort()[::-1]
eigvals_C_das = eigvals_C_das[order]
eigvecs_C_das = eigvecs_C_das[:, order]
# the DAS weight matrix of the first hidden layer
W_1 = das_surrogate.neural_net.layers[1].W
y_das = np.dot(W_1.T, params.T).T
# Alternatively, in the DAS case we can solve only the eigendecomposition of C_1 to obtain the same result
eigvals_C_1, eigvecs_C_1 = linalg.eigh(C_1)
# Sort the eigensolutions in the descending order of eigenvalues
order = eigvals_C_1.argsort()[::-1]
eigvals_C_1 = eigvals_C_1[order]
eigvecs_C_1 = eigvecs_C_1[:, order]
print('=====================')
print("Eigenvalues C_das:\n %s" % eigvals_C_das)
print('=====================')
print("Eigenvalues C_1:\n %s" % eigvals_C_1)
print('=====================')
print('Difference eigenvectors:\n %s' % (eigvecs_C_das[:, 0:d] - np.dot(W_1, eigvecs_C_1)))
print('=====================')
# -
# ### Recreate the eigenvalue plots
# +
####################
# plot eigenvalues #
####################
fig = plt.figure(figsize=[12, 4])
ax = fig.add_subplot(131, yscale='log', title='reference eigenvalues', ylim=[1e-16, 20])
ax.set_ylabel(r'$\lambda_i$', fontsize=12)
ax.set_xlabel(r'$i$', fontsize=12)
ax.plot(range(1, D + 1), eigvals_ref, 's', color='dodgerblue', markersize=3,)
ax.set_xticks(np.arange(1, D + 1, 2))
#
ax2 = fig.add_subplot(132, yscale='log', title=r'%s eigenvalues, $d=%d$' % (r'$C_{DAS}$', d), ylim=[1e-16, 20])
ax2.set_ylabel(r'$\lambda_i$', fontsize=12)
ax2.set_xlabel(r'$i$', fontsize=12)
# ax2.plot(range(1, d + 1), eigvals_C_1, 'o', color='salmon', markersize=8,
# label = '%s of %s' % (r'$\lambda_i$', r'$\overline{C}_1$'))
ax2.plot(range(1, D + 1), eigvals_C_das, 's', color='dodgerblue', markersize=3,
label='%s of %s' % (r'$\lambda_i$', r'$\overline{C}_{DAS}$'))
ax2.set_xticks(np.arange(1, D + 1, 2))
sns.despine(top=True)
#
ax3 = fig.add_subplot(133, yscale='log', title=r'%s eigenvalues, $d=%d$' % (r'$C_{ANN}$', d), ylim=[1e-16, 20])
ax3.set_ylabel(r'$\lambda_i$', fontsize=12)
ax3.set_xlabel(r'$i$', fontsize=12)
# ax2.plot(range(1, d + 1), eigvals_C_1, 'o', color='salmon', markersize=8,
# label = '%s of %s' % (r'$\lambda_i$', r'$\overline{C}_1$'))
ax3.plot(range(1, D + 1), eigvals_C_ann, 's', color='dodgerblue', markersize=3,
         label='%s of %s' % (r'$\lambda_i$', r'$\overline{C}_{ANN}$'))
ax3.set_xticks(np.arange(1, D + 1, 2))
sns.despine(top=True)
#
plt.legend(loc=0, frameon=False)
plt.tight_layout()
# -
# ### Recreate the active subspace plot
# +
#########################
# plot active subspaces #
#########################
# Generate new code validation samples
from HIV_model import *
n_val = 100
x_val = np.random.rand(n_val, D) * 2 - 1
val_samples = Tcells(x_val, np.linspace(1, times[I], times[I]))[:, -1]
y_val = np.dot(W_1.T, x_val.T).T
y_val_ann = np.dot(V_1.T, x_val.T).T
y_val_ann_uc = np.dot(R_1.T, x_val.T).T
# plot DAS surrogate in y coordinate
fig = plt.figure(figsize=[12, 4])
ax = fig.add_subplot(131)
ax.set_xlabel(r'$y_1$', fontsize=12)
ax.set_ylabel(r'$\widetilde{G}\left(y_1\right)$', fontsize=12)
ax.plot(y_val, val_samples, 's', color='dodgerblue', label='validation samples')
ax.plot(y_das, das_samples, '+', color='salmon', label='DAS', alpha=0.5)
leg = ax.legend(loc=0, frameon=False)
leg.set_draggable(True)
sns.despine(top=True)
plt.tight_layout()
# plot ANN surrogate in y coordinate
ax = fig.add_subplot(132)
ax.set_xlabel(r'$y_1$', fontsize=12)
ax.set_ylabel(r'$\widetilde{G}\left(y_1\right)$', fontsize=12)
ax.plot(y_val_ann, val_samples, 's', color='dodgerblue', label='validation samples')
ax.plot(y_ann, ann_samples, '+', color='salmon', label='ANN (d=1)', alpha=0.5)
leg = ax.legend(loc=0, frameon=False)
leg.set_draggable(True)
sns.despine(top=True)
plt.tight_layout()
ax = fig.add_subplot(133)
ax.set_xlabel(r'$y_1$', fontsize=12)
ax.set_ylabel(r'$\widetilde{G}\left(y_1\right)$', fontsize=12)
ax.plot(y_val_ann_uc, val_samples, 's', color='dodgerblue', label='validation samples')
ax.plot(y_ann_uc, ann_uc_samples, '+', color='salmon', label='unconstrained ANN', alpha=0.5)
leg = ax.legend(loc=0, frameon=False)
leg.set_draggable(True)
sns.despine(top=True)
plt.tight_layout()
# -
# ### Recreate the C heat maps
# +
#####################################
# plot a heat map of the C matrices #
#####################################
fig = plt.figure(figsize=[12,4])
ax1 = fig.add_subplot(131, title=r'$\overline{C}_{REF}$', xlabel='$i$', ylabel='$j$')
im = ax1.imshow(C_ref)
plt.colorbar(im)
ax2 = fig.add_subplot(132, title=r'$\overline{C}_{DAS},\; d=%d$' % d, xlabel='$i$', ylabel='$j$')
im = ax2.imshow(C_das)
plt.colorbar(im)
ax3 = fig.add_subplot(133, title=r'$\overline{C}_{ANN},\; d=%d$' % d, xlabel='$i$', ylabel='$j$')
im = ax3.imshow(C_ann)
plt.colorbar(im)
plt.tight_layout()
# -
# ### Recreate the global-derivative based sensitivity plots
def sensitivity(idx, V_i, **kwargs):
# Parameter names
param_names = np.array([r'$s_1$', r'$s_2$', r'$s_3$', r'$p_1$', r'$C_1$', r'$K_1$', r'$K_2$', r'$K_3$',
r'$K_4$', r'$K_5$', r'$K_6$', r'$K_7$', r'$K_8$', r'$K_9$', r'$K_{10}$',
r'$K_{11}$', r'$K_{12}$', r'$K_{13}$', r'$\delta_1$', r'$\delta_2$',
r'$\delta_3$', r'$\delta_4$', r'$\delta_5$', r'$\delta_6$', r'$\delta_7$', r'$\alpha_1$',
r'$\psi$'])
fig = plt.figure(figsize=[4, 8])
ax = fig.add_subplot(111, title=kwargs.get('title', ''))
# ax.set_ylabel(r'$\int\left(\frac{\partial f}{\partial x_i}\right)^2 p({\bf x})d{\bf x}$', fontsize=14)
ax.set_xlabel(r'$\nu_i$', fontsize=14)
ax.barh(range(V_i.size), width = V_i[idx].flatten(), color = 'dodgerblue')
ax.set_yticks(range(V_i.size))
ax.set_yticklabels(param_names[idx[0]], fontsize=14)
# plt.xticks(rotation=90)
ax.invert_yaxis()
sns.despine(top=True)
plt.tight_layout()
# +
#####################################
# global gradient-based sensitivity #
#####################################
das_analysis = es.analysis.DAS_analysis(das_surrogate)
ann_analysis = es.analysis.ANN_analysis(ann_surrogate)
idx, V_i = das_analysis.sensitivity_measures(params, norm=False)
sensitivity(idx, V_i, title = 'DAS')
idx, V_i = ann_analysis.sensitivity_measures(params, norm=False)
sensitivity(idx, V_i, title = 'ANN (d=1)')
print('Parameters ordered according to the reference activity score')
V_i_ref = np.diag(C_ref)
idx_ref = np.flipud(np.argsort(np.diag(C_ref))).reshape([1, -1])
print(idx_ref)
sensitivity(idx_ref, V_i_ref, title='reference')
# -
# ### Error analysis
#
# The errors were computed using 100 replica networks, which takes a long time (several hours). The errors are therefore loaded from disk using the files `errors_n_neurons100.hdf5` or `errors_n_neurons10.hdf5`, corresponding to the case of 100 or 10 neurons per hidden layer, respectively.
#
# If you still wish to recompute the errors, execute `recompute_HIV_errors.py`. Note that one or two outliers may be present, where one of the replica neural networks did not converge. We removed these from the HDF5 files.
def get_error_CI(err):
mean_err = np.mean(err, axis=0)
lower, upper = analysis.get_confidence_intervals(err, conf=conf)
err = np.array([mean_err - lower, upper - mean_err])
return mean_err, err
# +
# test fractions (and hence training-data sizes) used in the error study
n_test_fracs = 10
test_fracs = np.linspace(0.5, 0.1, n_test_fracs)
file = 'errors_n_neurons10.hdf5'
errors = campaign.load_hdf5_data(file_path=file)
err_ANN_unconstrained = errors['err_ANN_unconstrained']
err_ANN = errors['err_ANN']
err_DAS = errors['err_DAS']
# turn into percentages
err_ANN *= 100
err_DAS *= 100
err_ANN_unconstrained *= 100
# select confidence
conf = 0.95
# size of training data used
data_size = (1 - test_fracs) * samples.shape[0]
# mean and CI of ANN training error
analysis = es.analysis.BaseAnalysis()
mean_ANN_err_training, err_ANN_training = get_error_CI(err_ANN[:,:,0])
mean_DAS_err_training, err_DAS_training = get_error_CI(err_DAS[:,:,0])
mean_ANN_unconstrained_training, err_ANN_unconstrained_training = get_error_CI(err_ANN_unconstrained[:,:,0])
mean_ANN_err_test, err_ANN_test = get_error_CI(err_ANN[:,:,1])
mean_DAS_err_test, err_DAS_test = get_error_CI(err_DAS[:,:,1])
mean_ANN_unconstrained_test, err_ANN_unconstrained_test = get_error_CI(err_ANN_unconstrained[:,:,1])
# plot results
import seaborn as sns
fig = plt.figure(figsize=[8, 4])
ax = fig.add_subplot(121)
ax.set_xlabel('training data size')
ax.set_ylabel('relative error e [%]')
ax.set_title('training error')
sns.despine(top=True)
offset=5
ax.errorbar(data_size-offset, mean_ANN_unconstrained_training,
yerr=err_ANN_unconstrained_training, fmt='o',
color='dodgerblue', label='unconstrained ANN, 95% CI')
ax.errorbar(data_size, mean_ANN_err_training, yerr=err_ANN_training, fmt='^',
color='mediumaquamarine', label='ANN (d=1), 95% CI')
ax.errorbar(data_size+offset, mean_DAS_err_training, yerr=err_DAS_training, fmt='s',
color='salmon', label='DAS, 95% CI')
leg = ax.legend(loc=0, frameon=False)
#
ax2 = fig.add_subplot(122, sharey=ax)
ax2.set_xlabel('training data size')
ax2.set_title('test error')
ax2.errorbar(data_size, mean_ANN_err_test, yerr=err_ANN_test, fmt='^', color='mediumaquamarine')
ax2.errorbar(data_size+offset, mean_DAS_err_test, yerr=err_DAS_test, fmt='s', color='salmon')
ax2.errorbar(data_size-offset, mean_ANN_unconstrained_test, yerr=err_ANN_unconstrained_test,
fmt='o', color='dodgerblue')
sns.despine(left=True, ax=ax2)
ax2.get_yaxis().set_visible(False)
plt.tight_layout()
# -
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ### Data is cheap, but knowledge is harder to come by
# ## Descriptive Statistics
#
# A first step toward understanding our data.
# ### Mean
#
# Given a sample of $n$ values $x_i$,
# the mean $\mu$ is the sum of the values divided by the number of values:
#
# $$ \mu = \frac{1}{n} \sum_{i}^{n} x_i $$
# +
import pandas as pd
import numpy as np
data = pd.read_csv("train.csv")
media_edad = np.mean(data['Age'])
media_edad
# -
data.describe()
# The mean describes the central tendency of our data.
# Important: this mean $\mu$ is used to describe an entire population.
# ### Variance
#
# Another statistic that helps us understand our data is the variance. Whereas the mean describes where our data are centered, the variance describes how far the data lie from the mean.
#
# $$ \sigma^2 = \frac{1}{n} \sum_{i}^{n} (x_i - \mu)^2 $$
varianza_edad = np.var(data['Age'])
varianza_edad
# Years squared?
# The variance is hard to interpret because of its units.
#
# Fortunately, the standard deviation is a more meaningful statistic.
#
# ### Standard deviation
#
# $$ \sigma = \sqrt{\sigma^2} $$
desviacion_edad = np.std(data['Age'])
desviacion_edad
data.Age.std()
# Important: these formulas for $\sigma^2$ and $\sigma$ are used to describe an entire population.
#
# When we work with a sample of $N$ values we use the estimators $\bar{x}$ and $S^2$:
#
# $$ \bar{x} = \frac{1}{N} \sum_{i}^{N} x_i $$
#
# $$ S^2 = \frac{1}{N-1} \sum_{i}^{N} (x_i - \bar{x})^2 $$
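# As a short sketch using the same `Age` column, the sample estimators can be computed with pandas by passing `ddof=1`, which applies the $N-1$ denominator (Bessel's correction):
# +
# draw a sample and compute the sample estimators (ddof=1 gives the N-1 denominator)
sample_ages = data['Age'].dropna().sample(n=100, random_state=0)
print('sample mean:', sample_ages.mean())
print('sample variance (ddof=1):', sample_ages.var(ddof=1))
print('sample standard deviation (ddof=1):', sample_ages.std(ddof=1))
# -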
# ## Distributions
#
# The mean, variance, and standard deviation are concise statistics, but they are also dangerous, because they can hide the information the data actually provide.
#
# One aid to understanding them better is to look at the distribution of the data.
#
# The most common representation of a distribution is a histogram, which describes how often each value appears.
# +
# %matplotlib inline
import matplotlib.pyplot as plt
edades = data[data.Age.notnull()]['Age']
edades.hist()
plt.xlabel('Edad', fontsize=18)
plt.ylabel('Frecuencia', fontsize=16)
# -
len(edades.unique())
edades = data[data.Age.notnull()]['Age']
edades.hist(bins=88)
plt.xlabel('Edad', fontsize=18)
plt.ylabel('Frecuencia', fontsize=16)
# Histograms are useful because they let us quickly check the following characteristics:
#
# - Mode: the most common (most frequently repeated) value in a distribution.
#
# - Shape: around the mode we can see that the distribution is asymmetric.
#
# - Outliers (atypical values).
# ### Probability function
#
# If we want to turn the frequencies into a probability function, we must divide the series by the number of elements.
edades.hist(bins=88, density=True)
plt.xlabel('Edad', fontsize=18)
plt.ylabel('Probabilidad', fontsize=16)
# $$ P(X = x) = f(x) $$
# The probability function works well when the number of distinct values is small.
#
# But as the number of values grows, the probability associated with each value becomes smaller and smaller.
#
# An alternative is to use the cumulative distribution function.
edades.describe()
# ### Cumulative distribution function
edades.hist(cumulative=True, bins=88, density=True)
# $$ P(X \leq x) = f(x) $$
# ### Continuous distributions
#
# These are used when we deal with continuous random variables.
#
# The normal (Gaussian) distribution is one of the probability distributions that most frequently appears, at least approximately, in real-world phenomena.
# $$ P(x, \sigma, \mu) = \frac{1}{\sigma\sqrt{2 \pi}}e^{-(x -\mu)^2/2\sigma^2}$$
# +
import numpy as np
x = np.random.randn(5000)
# Plot a normalized (density) histogram of 5000 standard-normal samples.
plt.hist(x, bins=50, density=True)
# -
# <img src="https://upload.wikimedia.org/wikipedia/commons/thumb/8/8c/Standard_deviation_diagram.svg/2000px-Standard_deviation_diagram.svg.png">
# ### Why use continuous distributions?
#
# Like all models, continuous distributions are abstractions, which means they simplify and discard details considered irrelevant (measurement errors, outliers).
#
# They are also a way of compressing the data: if we manage to fit a model to a dataset, a small set of parameters can summarize a large amount of data.
# ### Why is the normal distribution so important?
#
# The central limit theorem states that the sample mean $\bar{X}$ follows a normal distribution (for large $n$)
# with mean $\mu$ and standard deviation $\frac{\sigma}{\sqrt{n}}$.
#
# The central limit theorem explains why the normal distribution appears so frequently in the natural world.
#
# Most characteristics of animals and other life forms are affected by a large number of genetic and environmental variables whose effect is additive.
#
# The characteristics we measure are the sum of a large number of small effects, so their distribution tends to be normal.
# +
# Test of the Central Limit Theorem using 50 sample means
media_muestra = []  # start with an empty list
for x in range(0, 50):
media_muestra.append(np.mean(edades.sample(n=300)))
media_muestra = pd.Series(media_muestra)
media_muestra.hist(bins=50)
# +
# Test of the Central Limit Theorem using 500 sample means
media_muestra = []
for x in range(0, 500):
media_muestra.append(np.mean(edades.sample(n=300)))
media_muestra = pd.Series(media_muestra)
media_muestra.hist(bins = 50)
# +
# Test of the Central Limit Theorem using 5000 sample means
media_muestra = []
for x in range(0, 5000):
media_muestra.append(np.mean(edades.sample(n=300)))
media_muestra = pd.Series(media_muestra)
media_muestra.hist(bins = 50)
# -
# ### Probability
#
# Earlier we mentioned that probability is a frequency expressed as a fraction of the sample size.
#
# That is one definition of probability, but it is not the only one; in fact, the meaning of probability is a controversial topic.
#
# There is general consensus that probability is a real value between 0 and 1. This value is meant to give a quantitative measure corresponding to the notion that some things are more likely than others.
#
# $$ P(E) \in [0,1] $$
# ### Rules of probability (remembering Kolmogorov)
#
# - The probability of an event is a value between 0 and 1. Every event has a probability
# $$ 0 \leq P(E) \leq 1 $$
# - The probability that nothing happens is 0
# $$ P(\emptyset) = 0 $$
# - The probability that something happens is 1
# $$ P(\Omega) = 1 $$
# - The probability of an event is 1 minus the probability of its complement
# $$ P(A^c) = 1 - P(A) $$
# <img src="conditional_risk.png">
# ### Conditional probability
#
# $$ P(A | B) = \frac{P(A \cap B)}{P(B)} $$
#
# If A and B are independent events, then:
#
# $$ P(A | B) = \frac{P(A) P(B)}{P(B)} = P(A) $$
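# A quick numeric illustration of the definition above on the same dataset; this sketch assumes the usual Kaggle Titanic columns `Sex` and `Survived` (adjust if your `train.csv` differs).
# +
# P(B): the passenger is female
p_b = (data['Sex'] == 'female').mean()
# P(A and B): the passenger survived and is female
p_a_and_b = ((data['Survived'] == 1) & (data['Sex'] == 'female')).mean()
# P(A | B) = P(A and B) / P(B)
print('P(survived | female) =', p_a_and_b / p_b)
# -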
# ##### Monty Hall
from IPython.display import HTML
HTML('<iframe width="560" height="315" src="https://www.youtube.com/embed/mhlc7peGlGg?rel=0&controls=0&showinfo=0" frameborder="0" allowfullscreen></iframe>')
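# A short simulation of the Monty Hall problem (a sketch, not taken from the video): switching wins roughly 2/3 of the time, staying roughly 1/3.
# +
n_games = 100_000
prize = np.random.randint(0, 3, n_games)       # door hiding the prize
first_pick = np.random.randint(0, 3, n_games)  # contestant's initial choice
# switching wins exactly when the first pick was wrong, because the host
# always opens a non-chosen door that does not hide the prize
stay_wins = np.mean(first_pick == prize)
switch_wins = np.mean(first_pick != prize)
print('P(win | stay)   ~', stay_wins)
print('P(win | switch) ~', switch_wins)
# -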
# ### Bayes' rule
#
# Bayes' theorem is often interpreted as a statement about how evidence, E, affects the probability of a hypothesis, H:
#
# $$P(H | E) = P(H) \frac{P(E|H)}{P(E)}$$
#
# In words, this equation says that the probability of H after seeing E is the product of $P(H)$, the probability of H before seeing the evidence E, and the ratio of $P(E|H)$, the probability of seeing the evidence assuming H is true, to $P(E)$, the probability of seeing the evidence under any circumstances.
#
# Example: spam filter
#
# $$ P(S|W) = \frac{P(W|S) \cdot P(S)}{P(W|S) \cdot P(S) + P(W|H) \cdot P(H)} $$
#
# where:
#
# - $P(S|W)$ is the probability that our message is SPAM, given that we found the word "Money"
# - $P(S)$ is the probability that any message is SPAM
# - $P(W|S)$ is the probability that our word appears in SPAM messages
# - $P(H)$ is the probability that our message is HAM
# - $P(W|H)$ is the probability that our word appears in HAM messages
#
#
#
#
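# A small worked example of the spam-filter formula with made-up probabilities (purely illustrative):
# +
# Illustrative (made-up) probabilities for the word "Money"
P_S, P_H = 0.4, 0.6        # priors: P(spam), P(ham)
P_W_S, P_W_H = 0.15, 0.01  # P(word | spam), P(word | ham)
P_S_W = (P_W_S * P_S) / (P_W_S * P_S + P_W_H * P_H)
print('P(spam | "Money") =', round(P_S_W, 3))
# -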
HTML('<iframe width="560" height="315" src="https://www.youtube.com/embed/R13BD8qKeTg?rel=0&controls=0&showinfo=0" frameborder="0" allowfullscreen></iframe>')
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# # Displaying Messages with Flask
# | [Previous](12.Operacoes-de-Login.ipynb)| [Next](14.Cookies-e-Session.ipynb) |
# | :------------- | :----------:|
# To do this, simply implement it as described in the documentation: https://flask.palletsprojects.com/en/2.0.x/patterns/flashing/
# Flash presents standardized messages to the user.
# ### 1. Basic example with Flash
# ```
# from flask import Flask, flash, redirect, render_template, request, url_for
#
# app = Flask(__name__)
# app.secret_key = b'_5#y2L"F4Q8z\n\xec]/'
#
# @app.route('/login', methods=['GET', 'POST'])
# def login():
# error_message = None
#
# if request.method == 'POST':
# if request.form['username'] != 'admin' or \
# request.form['password'] != 'secret':
# error_message = 'Invalid credentials'
# else:
#             username = request.form['username']
#             success_message = 'User {} logged in successfully!'.format(username)
# flash(success_message)
#
# return redirect(url_for('index'))
#
# return render_template('login.html', error=error_message)
# ```
# * HTML page (e.g., base.html)
#
# ```
# <!doctype html>
# <title>My Application</title>
# {% with messages = get_flashed_messages() %}
# {% if messages %}
# <ul class=flashes>
# {% for message in messages %}
# <li>{{ message }}</li>
# {% endfor %}
# </ul>
# {% endif %}
# {% endwith %}
# {% block body %}{% endblock %}
# ```
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import mglearn
import pandas as pd
from sklearn.cluster import KMeans
import matplotlib.pyplot as plt
import matplotlib
from matplotlib import font_manager, rc
# %matplotlib inline
font_location = 'c:/Windows/fonts/malgun.ttf'
font_name = font_manager.FontProperties(fname=font_location).get_name()
matplotlib.rc('font', family=font_name)
data = pd.read_csv('data/Ex14_academy.csv')
print(data.iloc[:, 1:])
# Build the clustering model
kmeans = KMeans(n_clusters=3)
kmeans.fit(data.iloc[:, 1:])
print('클러스터 레이블', kmeans.labels_)
mglearn.discrete_scatter(data.iloc[:,1], data.iloc[:,2], kmeans.labels_)
plt.legend(['cluster 0', 'cluster 1', 'cluster 2'], loc='best')
plt.xlabel('국어점수')
plt.ylabel('영어점수')
plt.show()
# A new student has enrolled with a Korean score of 100 and an English score of 80.
# Which cluster should this student be assigned to?
print(kmeans.predict([[100, 80]]))
plt.show()
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: conda_tensorflow_py3
# language: python
# name: conda_tensorflow_py3
# ---
# + jupyter={"outputs_hidden": true}
# !pip install keras==2.2.4
# + jupyter={"outputs_hidden": true}
# !pip install h5py
# + jupyter={"outputs_hidden": true}
# !pip install gensim
# +
import pandas as pd
import numpy as np
import tqdm
import time
from gensim.models import Word2Vec
np.random.seed(2020)
print('loading train data ... \n\n')
train_click = pd.read_csv('../../train_preliminary/click_log.csv').sort_values(by = ['user_id','time'])
train_ad = pd.read_csv('../../train_preliminary/ad.csv')
train_click = train_click.merge(train_ad,on = 'creative_id',how = 'left')
train_click = train_click[['user_id','time','creative_id']].sort_values(by = ['user_id','time'])
train_click = train_click.set_index('user_id')
print('loading train data over ... \n\n')
print('loading test data ... \n\n')
test_click = pd.read_csv('../../test/click_log.csv').sort_values(by = ['user_id','time'])
test_ad = pd.read_csv('../../test/ad.csv')
test_click = test_click.merge(test_ad,on = 'creative_id',how = 'left')
test_click = test_click[['user_id','time','creative_id']].sort_values(by = ['user_id','time'])
test_click = test_click.set_index('user_id')
print('loading test data over ... \n\n')
def get_str_data():
text_data = []
for i in tqdm.tqdm(range(1, 900001)):
# for i in tqdm.tqdm(range(1, 9001)):
train_user_ad = train_click.loc[i, 'creative_id'].values
text = [str(x) for x in train_user_ad]
text_data.append(text)
for i in tqdm.tqdm(range(3000001, 4000001)):
# for i in tqdm.tqdm(range(3000001, 3001001)):
test_user_ad = test_click.loc[i, 'creative_id'].values
text = [str(x) for x in test_user_ad]
text_data.append(text)
return text_data
print('get list data ...\n\n')
data = get_str_data()
# +
# del train_click
# del test_click
# del train_ad
# del test_ad
model = Word2Vec(data, size=32, window=100, min_count=0, workers=8, sg=1, iter = 10 )
print('saving model ... \n\n')
model.save('word2vec_creative_id_pre_embed_size_32_w100_count_0')
print('model saved ... \n\n')
# +
# model = Word2Vec.load("word2vec_ad_id_pre_embed_size_32_w50_count_0")
# 1. Load the w2v model
# w2v_model = Word2Vec.load("../../get_w2v_feat/word2vec.model")
## 2. Build a list containing all words, and initialize the word-to-index dictionary and the word-vector matrix
vocab_list = [word for word, Vocab in model.wv.vocab.items()]  # store all words
word_index = {" ": 0}  # initialize [word : token]; this dictionary is later used to tokenize the corpus
word_vector = {}  # initialize the [word : vector] dictionary
# Initialize the big matrix holding all vectors; note the extra first row of zeros, used for padding.
# Rows = vocabulary size + 1 (e.g. 10000 + 1); columns = embedding dimension (e.g. 100).
embeddings_matrix = np.zeros((len(vocab_list) + 1, model.vector_size))
## 3. Fill the dictionaries and the big matrix defined above
for i in tqdm.tqdm(range(len(vocab_list))):
# print(i)
    word = vocab_list[i]  # each word
    word_index[word] = i + 1  # word : index
    word_vector[word] = model.wv[word]  # word : word vector
    embeddings_matrix[i + 1] = model.wv[word]  # word-vector matrix
import h5py
with h5py.File('embeddings_matrix_creative_size_32_w100_count_0.h5', 'w') as hf:
hf.create_dataset("embeddings_matrix", data=embeddings_matrix)
# +
# Convert texts to index sequences: tokenize each sentence and return the word indices for every sentence
from keras.preprocessing import sequence
texts = data
data_pad = []
for sentence in tqdm.tqdm(texts):
new_txt = []
for word in sentence:
try:
            new_txt.append(word_index[word])  # convert each word in the sentence to its index
except:
new_txt.append(0)
data_pad.append(new_txt)
MAX_SEQUENCE_LENGTH = 200
texts_pad = sequence.pad_sequences(data_pad, maxlen = MAX_SEQUENCE_LENGTH)  # use Keras' built-in padding to align sentences; it conveniently returns a numpy array
train_data = texts_pad[:900000,:]
test_data = texts_pad[900000:,:]
with h5py.File('word_train_creative_w2v_w100.h5', 'w') as hf:
hf.create_dataset("word_data", data=train_data)
with h5py.File('word_test_creative_w2v_w100.h5', 'w') as hf:
hf.create_dataset("word_data", data=test_data)
# -
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## Value at Risk, Expected Shortfall
# ## Reading project − T.Á.
#
# The RiskMetrics(TM) method
#
# based on Chapter 2.2.4 of
#
# Allen, Boudoukh, Saunders - Understanding Market, Credit and Operational Risk: The
# Value at Risk Approach.
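#
# Below is a minimal, illustrative sketch (not part of the original reading project) of the RiskMetrics-style exponentially weighted volatility estimate and the corresponding one-day 95% VaR, assuming a series of daily returns; the decay factor 0.94 is the standard RiskMetrics choice for daily data.
# +
import numpy as np

def riskmetrics_var(returns, lam=0.94, z=1.65):
    """One-day Value at Risk from an EWMA volatility estimate (RiskMetrics-style sketch)."""
    sigma2 = np.var(returns[:20])                  # seed the variance with the first observations
    for r in returns[20:]:
        sigma2 = lam * sigma2 + (1 - lam) * r**2   # exponentially weighted update
    return z * np.sqrt(sigma2)                     # z = 1.65 ~ one-sided 95% normal quantile

returns = np.random.normal(0, 0.01, 1000)          # simulated daily returns with 1% volatility
print('1-day 95% VaR: {:.4%}'.format(riskmetrics_var(returns)))
# -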
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
from simulationTools import *
tList = np.load('./strong_data/tList.npy')
LHSneu = np.load('./strong_data/LHSneu.npy')
LHSmm = np.load('./strong_data/LHSmm.npy')
f = np.load('./strong_data/f.npy')
# Plot both sides of the inequality
ax = plt.subplot(111)
ax.plot(tList, LHSmm, label='Min-max', ls='dashdot', lw=2)
ax.plot(tList, LHSneu, label='von Neumann', ls='dashed', lw=2)
ax.plot(tList, f, label='RHS', lw=2)
ax.spines['right'].set_visible(False)
ax.spines['top'].set_visible(False)
plt.ylim(0)
plt.xlabel("Time (units of 1/J)")
plt.ylabel("Bits")
plt.legend(frameon=False)
plt.savefig('Larger_g.pdf')
plt.show()
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] id="icVPttAPFurS"
# # Traditional transfer learning tutorial
# This is a tutorial notebook for traditional transfer learning (i.e., non-deep learning).
#
# We'll implement two algorithms:
# - **TCA** (Transfer Component Analysis) [1]
# - **BDA** (Balanced Distribution Adaptation) [2]
#
# Then, we test the algorithms using **Office-Caltech10** SURF dataset. This dataset will be downloaded automatically in this tutorial.
#
# References:
#
# [1] Pan S J, Tsang I W, Kwok J T, et al. Domain adaptation via transfer component analysis[J]. IEEE Transactions on Neural Networks, 2010, 22(2): 199-210.
#
# [2] Wang J, Chen Y, Hao S, et al. Balanced distribution adaptation for transfer learning[C]//2017 IEEE international conference on data mining (ICDM). IEEE, 2017: 1129-1134.
# + [markdown] id="Fs_zONUnFurW"
# ## Download and unzip dataset
# You can also download the dataset from here: https://github.com/jindongwang/transferlearning/tree/master/data#office-caltech10
# + colab={"base_uri": "https://localhost:8080/"} id="UItzY-rN-C6Z" outputId="fe3bb35e-292f-48bb-db11-385fc1e53191"
# !wget https://transferlearningdrive.blob.core.windows.net/teamdrive/dataset/office-caltech-surf.zip
# !unzip office-caltech-surf.zip
# + [markdown] id="4wSXVFUZFurX"
# ## Import libraries
# + id="fRoy6hvY-zKM"
import numpy as np
import scipy.io
import scipy.linalg
import sklearn.metrics
from sklearn.neighbors import KNeighborsClassifier
# + [markdown] id="Z3fuq1sqFurY"
# ## Define a kernel function
# + id="FAmo6hc_-5Pu"
def kernel(ker, X1, X2, gamma):
K = None
if not ker or ker == 'primal':
K = X1
elif ker == 'linear':
if X2 is not None:
K = sklearn.metrics.pairwise.linear_kernel(np.asarray(X1).T, np.asarray(X2).T)
else:
K = sklearn.metrics.pairwise.linear_kernel(np.asarray(X1).T)
elif ker == 'rbf':
if X2 is not None:
K = sklearn.metrics.pairwise.rbf_kernel(np.asarray(X1).T, np.asarray(X2).T, gamma)
else:
K = sklearn.metrics.pairwise.rbf_kernel(np.asarray(X1).T, None, gamma)
return K
# + [markdown] id="GhngbY7DFurY"
# # Implement TCA
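# In the code below, `fit` builds the kernel matrix $K$ from the stacked source and target data, the MMD matrix $M = ee^{\top}$ (with $e_i = 1/n_s$ for source samples and $-1/n_t$ for target samples) and the centering matrix $H = I - \frac{1}{n}\mathbf{1}\mathbf{1}^{\top}$, and then solves the regularized generalized eigenproblem
#
# $$ \left(K M K^{\top} + \lambda I\right) A = \left(K H K^{\top}\right) A \Lambda, $$
#
# keeping the eigenvectors associated with the $d$ smallest generalized eigenvalues as the transformation matrix $A$. This summary is our reading of the implementation; see [1] for the full derivation.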
# + id="yjaBHq7P-8qB"
class TCA:
def __init__(self, kernel_type='primal', dim=30, lamb=1, gamma=1):
'''
Init func
:param kernel_type: kernel, values: 'primal' | 'linear' | 'rbf'
:param dim: dimension after transfer
:param lamb: lambda value in equation
:param gamma: kernel bandwidth for rbf kernel
'''
self.kernel_type = kernel_type
self.dim = dim
self.lamb = lamb
self.gamma = gamma
def fit(self, Xs, Xt):
'''
Transform Xs and Xt
:param Xs: ns * n_feature, source feature
:param Xt: nt * n_feature, target feature
:return: Xs_new and Xt_new after TCA
'''
X = np.hstack((Xs.T, Xt.T))
X /= np.linalg.norm(X, axis=0)
m, n = X.shape
ns, nt = len(Xs), len(Xt)
e = np.vstack((1 / ns * np.ones((ns, 1)), -1 / nt * np.ones((nt, 1))))
M = e * e.T
M = M / np.linalg.norm(M, 'fro')
H = np.eye(n) - 1 / n * np.ones((n, n))
K = kernel(self.kernel_type, X, None, gamma=self.gamma)
n_eye = m if self.kernel_type == 'primal' else n
a, b = np.linalg.multi_dot([K, M, K.T]) + self.lamb * np.eye(n_eye), np.linalg.multi_dot([K, H, K.T])
w, V = scipy.linalg.eig(a, b)
ind = np.argsort(w)
A = V[:, ind[:self.dim]]
Z = np.dot(A.T, K)
Z /= np.linalg.norm(Z, axis=0)
Xs_new, Xt_new = Z[:, :ns].T, Z[:, ns:].T
return Xs_new, Xt_new
def fit_predict(self, Xs, Ys, Xt, Yt):
'''
Transform Xs and Xt, then make predictions on target using 1NN
:param Xs: ns * n_feature, source feature
:param Ys: ns * 1, source label
:param Xt: nt * n_feature, target feature
:param Yt: nt * 1, target label
:return: Accuracy and predicted_labels on the target domain
'''
Xs_new, Xt_new = self.fit(Xs, Xt)
clf = KNeighborsClassifier(n_neighbors=1)
clf.fit(Xs_new, Ys.ravel())
y_pred = clf.predict(Xt_new)
acc = sklearn.metrics.accuracy_score(Yt, y_pred)
return acc, y_pred
# + [markdown] id="yuKeKRCUFurZ"
# ## Implement BDA
# + id="wVb8DLYA_U03"
class BDA:
def __init__(self, kernel_type='primal', dim=30, lamb=1, mu=0.5, gamma=1, T=10, mode='BDA', estimate_mu=False):
'''
Init func
:param kernel_type: kernel, values: 'primal' | 'linear' | 'rbf'
:param dim: dimension after transfer
:param lamb: lambda value in equation
        :param mu: balance factor mu (default 0.5); if estimate_mu is True, it is estimated using the A-distance
:param gamma: kernel bandwidth for rbf kernel
:param T: iteration number
:param mode: 'BDA' | 'WBDA'
        :param estimate_mu: True | False, if you want to automatically estimate mu instead of manually setting it
'''
self.kernel_type = kernel_type
self.dim = dim
self.lamb = lamb
self.mu = mu
self.gamma = gamma
self.T = T
self.mode = mode
self.estimate_mu = estimate_mu
def fit_predict(self, Xs, Ys, Xt, Yt):
'''
Transform and Predict using 1NN as JDA paper did
:param Xs: ns * n_feature, source feature
:param Ys: ns * 1, source label
:param Xt: nt * n_feature, target feature
:param Yt: nt * 1, target label
:return: acc, y_pred, list_acc
'''
list_acc = []
X = np.hstack((Xs.T, Xt.T))
X /= np.linalg.norm(X, axis=0)
m, n = X.shape
ns, nt = len(Xs), len(Xt)
e = np.vstack((1 / ns * np.ones((ns, 1)), -1 / nt * np.ones((nt, 1))))
C = len(np.unique(Ys))
H = np.eye(n) - 1 / n * np.ones((n, n))
mu = self.mu
M = 0
Y_tar_pseudo = None
Xs_new = None
for t in range(self.T):
N = 0
M0 = e * e.T * C
if Y_tar_pseudo is not None and len(Y_tar_pseudo) == nt:
for c in range(1, C + 1):
e = np.zeros((n, 1))
Ns = len(Ys[np.where(Ys == c)])
Nt = len(Y_tar_pseudo[np.where(Y_tar_pseudo == c)])
if self.mode == 'WBDA':
Ps = Ns / len(Ys)
Pt = Nt / len(Y_tar_pseudo)
alpha = Pt / Ps
mu = 1
else:
alpha = 1
tt = Ys == c
e[np.where(tt == True)] = 1 / Ns
yy = Y_tar_pseudo == c
ind = np.where(yy == True)
inds = [item + ns for item in ind]
e[tuple(inds)] = -alpha / Nt
e[np.isinf(e)] = 0
N = N + np.dot(e, e.T)
# In BDA, mu can be set or automatically estimated using A-distance
# In WBDA, we find that setting mu=1 is enough
if self.estimate_mu and self.mode == 'BDA':
if Xs_new is not None:
mu = estimate_mu(Xs_new, Ys, Xt_new, Y_tar_pseudo)
else:
mu = 0
M = (1 - mu) * M0 + mu * N
M /= np.linalg.norm(M, 'fro')
K = kernel(self.kernel_type, X, None, gamma=self.gamma)
n_eye = m if self.kernel_type == 'primal' else n
a, b = np.linalg.multi_dot(
[K, M, K.T]) + self.lamb * np.eye(n_eye), np.linalg.multi_dot([K, H, K.T])
w, V = scipy.linalg.eig(a, b)
ind = np.argsort(w)
A = V[:, ind[:self.dim]]
Z = np.dot(A.T, K)
Z /= np.linalg.norm(Z, axis=0)
Xs_new, Xt_new = Z[:, :ns].T, Z[:, ns:].T
clf = sklearn.neighbors.KNeighborsClassifier(n_neighbors=1)
clf.fit(Xs_new, Ys.ravel())
Y_tar_pseudo = clf.predict(Xt_new)
acc = sklearn.metrics.accuracy_score(Yt, Y_tar_pseudo)
list_acc.append(acc)
print('{} iteration [{}/{}]: Acc: {:.4f}'.format(self.mode, t + 1, self.T, acc))
return acc, Y_tar_pseudo, list_acc
# + [markdown] id="5IuuTWGlFurc"
# ## Load data
# We'll load data. For demonstration, we use *Caltech* as the source and *amazon* as the target domain.
# + id="aRBJ87xg-_H7"
src, tar = 'caltech_surf_10.mat', 'amazon_surf_10.mat'
src_domain, tar_domain = scipy.io.loadmat(src), scipy.io.loadmat(tar)
Xs, Ys, Xt, Yt = src_domain['feas'], src_domain['label'], tar_domain['feas'], tar_domain['label']
# + [markdown] id="p1qkzPZxFure"
# ## Test TCA
# + colab={"base_uri": "https://localhost:8080/"} id="7U6Ez1B2Furf" outputId="47398327-4c65-41e9-fee0-1783fe562325"
tca = TCA(kernel_type='linear', dim=30, lamb=1, gamma=1)
acc, ypre = tca.fit_predict(Xs, Ys, Xt, Yt)
print(f'The accuracy of TCA is: {acc:.4f}')
# + [markdown] id="68GhdRh4Furf"
# ## Test BDA
# + colab={"base_uri": "https://localhost:8080/"} id="5m609OVBFurf" outputId="8165ebd1-4ebb-49f2-f893-f1e9b24afe10"
bda = BDA(kernel_type='primal', dim=30, lamb=1, mu=0.5, mode='BDA', gamma=1, estimate_mu=False)
acc, ypre, list_acc = bda.fit_predict(Xs, Ys, Xt, Yt)
print(f'The accuracy of BDA is: {acc:.4f}')
# + [markdown] id="sp2axlhhILUF"
# ## Test WBDA
# + colab={"base_uri": "https://localhost:8080/"} id="somL-0WnFurf" outputId="a5ba906f-ee1b-416f-9e7f-b2fbbb4de3de"
wbda = BDA(kernel_type='primal', dim=30, lamb=1, mode='WBDA', gamma=1, estimate_mu=False)
acc, ypre, list_acc = wbda.fit_predict(Xs, Ys, Xt, Yt)
print(f'The accuracy of WBDA is: {acc:.4f}')
# + id="h5B5VO8yKBQC"
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [conda env:probml_py3912]
# language: python
# name: conda-env-probml_py3912-py
# ---
# +
import numpy as np
import matplotlib.pyplot as plt
try:
from cycler import cycler
except ModuleNotFoundError:
# %pip install cycler
from cycler import cycler
from scipy.spatial.distance import cdist
try:
import probml_utils as pml
except ModuleNotFoundError:
# %pip install git+https://github.com/probml/probml-utils.git
import probml_utils as pml
np.random.seed(0)
CB_color = ["#377eb8", "#ff7f00"]
cb_cycler = cycler(linestyle=["-", "--", "-."]) * cycler(color=CB_color)
plt.rc("axes", prop_cycle=cb_cycler)
def fun(x, w):
return w[0] * x + w[1] * np.square(x)
# Data as in the original MATLAB code
def polydatemake():
n = 21
sigma = 2
xtrain = np.linspace(0, 20, n)
xtest = np.arange(0, 20.1, 0.1)
w = np.array([-1.5, 1 / 9])
ytrain = fun(xtrain, w).reshape(-1, 1) + np.random.randn(xtrain.shape[0], 1)
ytestNoisefree = fun(xtest, w)
ytestNoisy = ytestNoisefree + sigma * np.random.randn(xtest.shape[0], 1) * sigma
return xtrain, ytrain, xtest, ytestNoisefree, ytestNoisy
[xtrain, ytrain, xtest, ytestNoisefree, ytestNoisy] = polydatemake()
sigmas = [0.5, 10, 50]
K = 10
centers = np.linspace(np.min(xtrain), np.max(xtrain), K)
def addones(x):
# x is of shape (s,)
return np.insert(x[:, np.newaxis], 0, [[1]], axis=1)
def rbf_features(X, centers, sigma):
dist_mat = cdist(X, centers, "minkowski", p=2.0)
return np.exp((-0.5 / (sigma**2)) * (dist_mat**2))
# using matrix inversion for ridge regression
def ridgeReg(X, y, lambd): # returns weight vectors.
D = X.shape[1]
w = np.linalg.inv(X.T @ X + lambd * np.eye(D, D)) @ X.T @ y
return w
fig, ax = plt.subplots(3, 3, figsize=(10, 10))
plt.tight_layout()
for (i, s) in enumerate(sigmas):
rbf_train = rbf_features(addones(xtrain), addones(centers), s)
rbf_test = rbf_features(addones(xtest), addones(centers), s)
reg_w = ridgeReg(rbf_train, ytrain, 0.3)
ypred = rbf_test @ reg_w
ax[i, 0].plot(xtrain, ytrain, ".", markersize=8)
ax[i, 0].plot(xtest, ypred)
ax[i, 0].set_ylim([-10, 20])
ax[i, 0].set_xticks(np.arange(0, 21, 5))
for j in range(K):
ax[i, 1].plot(xtest, rbf_test[:, j], "b-")
ax[i, 1].set_xticks(np.arange(0, 21, 5))
ax[i, 1].ticklabel_format(style="sci", scilimits=(-2, 2))
ax[i, 2].imshow(rbf_train, interpolation="nearest", aspect="auto", cmap=plt.get_cmap("viridis"))
ax[i, 2].set_yticks(np.arange(20, 4, -5))
ax[i, 2].set_xticks(np.arange(2, 10, 2))
pml.savefig("rbfDemoALL.pdf", dpi=300)
plt.show()
# -
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## Least square fit
from jupyterthemes import jtplot
jtplot.style(theme='onedork', context='notebook', ticks=True, grid=False)
import sympy as sp
import pandas as pd
import numpy as np
# +
c_0,c_1 = sp.symbols('c_0 c_1')
epsilon = sp.IndexedBase("epsilon")
y = sp.IndexedBase("y")
x = sp.IndexedBase("x")
i, N = sp.var("i,N", integer=True)
linear_equation = sp.Eq(y[i], c_0+c_1*x[i]+epsilon[i])
# -
linear_equation
epsilon_equation = sp.Eq(epsilon[i],sp.solve(linear_equation, epsilon[i])[0])
epsilon_equation
# ## Criteria 1
# Mean error should be 0.
#
# $mean(\epsilon_i)=0 $
criteria_1_equation = sp.Eq(1/N*sp.Sum(epsilon[i], (i, 1, N)),0)
criteria_1_equation
criteria_1_equation = sp.Eq(sp.Sum(sp.solve(epsilon_equation,epsilon[i])[0], (i, 1, N)),0)
criteria_1_equation
criteria_1_equation = sp.Eq(sp.expand(criteria_1_equation.lhs).doit(),0)
criteria_1_equation
criteria_1_equation = sp.Eq(c_0,sp.solve(criteria_1_equation,c_0)[0])
criteria_1_equation
criteria_1_equation = criteria_1_equation.expand().doit()
criteria_1_equation
S_y,S_x,x_i,y_i = sp.symbols('S_y S_x x_i y_i')
criteria_1_equation = criteria_1_equation.subs([(y[i],y_i),
(x[i],x_i),
])
criteria_1_equation
S_y,S_x = sp.symbols('S_y S_x')
S_y_equation = sp.Eq(S_y, sp.Sum(y_i,(i, 1, N)))
S_y_equation
S_x_equation = sp.Eq(S_x, sp.Sum(x_i,(i, 1, N)))
S_x_equation
eqs=(S_y_equation,
S_x_equation,
criteria_1_equation
)
sp.solve(eqs, c_0, S_y,S_x,)
criteria_1_equation.subs(S_y_equation.rhs,S_y_equation.lhs)
# ## Square error should be minimized
square_error_equation = sp.Eq(epsilon[i]**2,
sp.Sum((sp.solve(epsilon_equation,epsilon[i])[0])**2, (i, 1, N)))
square_error_equation
square_error_zero_derivative_equation = sp.Eq(square_error_equation.rhs.diff(c_1),0)
square_error_zero_derivative_equation
square_error_zero_derivative_equation = square_error_zero_derivative_equation.expand().doit()
square_error_zero_derivative_equation
square_error_zero_derivative_equation = square_error_zero_derivative_equation.subs(c_0,
sp.solve(criteria_1_equation, c_0)[0]).doit().expand()
square_error_zero_derivative_equation
sp.solve(square_error_zero_derivative_equation,c_1)
solution = sp.solve(eqs,c_0,c_1)[0]
c0_equation = sp.Eq(c_0,solution[0].simplify())
c0_equation
c1_equation = sp.Eq(c_1,solution[1].simplify())
c1_equation
solution[1]
s = sp.Sum(c_0+c_1*x[i], (i, 1, N))
sp.solve(epsilon_equation,epsilon[i])[0]
sp.Vector()
linear_equation
# +
from sympy import IndexedBase
from sympy.tensor.array import Array
a,b,c,d,e,f = sp.symbols("a b c d e f")
X = Array([[a, b, c], [d, e, f]])
X
# -
Array([[1]])
# +
X = IndexedBase("X")
W = IndexedBase("W")
i, j, M, K = sp.var("i,j,M,K", integer=True)
s = sp.Sum(X[i, j]*W[j], (i, 1, M), (j, 1, K))
# -
s
s
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import tkinter
import tkinter as tk
import tkinter.ttk as ttk
import argparse # new in Python2.7
import atexit
import logging
import string
import sys
import threading
import time
import matplotlib
import matplotlib.pyplot as plt
# %matplotlib inline
import os
import pandas as pd
import numpy as np
from scipy import signal
import serial
import queue
import threading
from yapsy.PluginManager import PluginManager
from tkinter import Canvas
from guiLoop import guiLoop
from ttkthemes import themed_tk as ttk
root = ttk.ThemedTk()
root.title("OpenBCI")
def process():
while 1:
df = pd.read_csv('/Applications/SavedData/OpenBCI-RAW-2019-04-29_09-57-10.txt',delim_whitespace=True, names=('Tiempo', '1', '2','3','4','5','6','7','8'))
#df2 = pd.read_csv(filelist[-1], delimiter = ',',names=['Tiempo', '1', '2','3','4','5','6','7','8','9'])
dt = df.loc[: , "7"]
        dt1=dt.tail(8000)  # most recent 8000 samples
data = dt1.str.replace(",","")
data = pd.to_numeric(data,downcast='signed')
np.nan_to_num(data,[])
fs= 250
t = np.arange(1,len(data)+1,1)
tsample = 1/fs
        # In[] Filters
        f_low = 50
        f_high = 1
        # 50 Hz low-pass filter
        b, a = signal.butter(2, 2*f_low/fs, btype='low')
        filt= signal.filtfilt(b, a, data)
        # 1 Hz high-pass filter
        b1, a1 = signal.butter(2, 2*f_high/fs, btype='high')
        filt1= signal.filtfilt(b1, a1, filt)
        # Notch filter
fstart=(58)/fs*2;# Cutoff frequencies
fstop=(62)/fs*2;# Cutoff frequencies
b2, a2 = signal.butter(2,[fstart,fstop],'bandstop'); # Calculate filter coefficients
filt2= signal.filtfilt(b2, a2, filt1)
filt2=filt2/24
        # In[] Fourier transform
        ft = np.abs(np.fft.fft(filt2))  # magnitude
        ft = ft[0:int(len(ft)/2)]
        f = np.linspace(0,fs/2,len(ft))  # frequency vector
#f= [8,2,9,10,9,11,12]
#ft=[18,18,12,23,40,50,16]
plt.figure()
ax1 = plt.subplot(2,1,1)
plt.plot(t,filt2)
plt.title('EEG'),plt.xlabel('Tiempo (s)'),plt.ylabel('Amplitud')
plt.grid()
ax1 = plt.subplot(2,1,2)
plt.plot(f,ft)
plt.title('Magnitud de la transformada de Fourier'),plt.xlabel('Frecuencia (Hz)'),plt.ylabel('Amplitud')
plt.grid()
plt.show()
for i in range(len(f)):
#f1= f[i,]
f1= f[i]
if f1 >= 8 and f1<= 14:
#amp=ft[i,]
amp=ft[i]
print (amp)
if amp >= 15 and amp <=30:
label2 = tk.Label(text="_____", fg="#1959B3", bg="#1959B3")
label2.pack( fill="x", pady=50)
label2.place(x=124,y=172,width=100,height=85)
label3 = tk.Label(text="_____", fg="#1959B3", bg="#1959B3")
label3.pack( fill="x", pady=50)
label3.place(x=124,y=258,width=100,height=85)
label4 = tk.Label(text="_____", fg="#1959B3", bg="#1959B3")
label4.pack( fill="x", pady=50)
label4.place(x=124,y=344,width=100,height=85)
label5 = tk.Label(text="_____", fg="#b2ebf2", bg="#b2ebf2")
label5.pack( fill="x", pady=50)
label5.place(x=124,y=430,width=100,height=85)
elif amp >=31 and amp <=40:
label2 = tk.Label(text="_____", fg="#1959B3", bg="#1959B3")
label2.pack( fill="x", pady=50)
label2.place(x=124,y=172,width=100,height=85)
label3 = tk.Label(text="_____", fg="#1959B3", bg="#1959B3")
label3.pack( fill="x", pady=50)
label3.place(x=124,y=258,width=100,height=85)
label4 = tk.Label(text="_____", fg="#b2ebf2", bg="#b2ebf2")
label4.pack( fill="x", pady=50)
label4.place(x=124,y=344,width=100,height=85)
label5 = tk.Label(text="_____", fg="#b2ebf2", bg="#b2ebf2")
label5.pack( fill="x", pady=50)
label5.place(x=124,y=430,width=100,height=85)
elif amp >=41 and amp <=50:
label2 = tk.Label(text="_____", fg="#1959B3", bg="#1959B3")
label2.pack( fill="x", pady=50)
label2.place(x=124,y=172,width=100,height=85)
label3 = tk.Label(text="_____", fg="#b2ebf2", bg="#b2ebf2")
label3.pack( fill="x", pady=50)
label3.place(x=124,y=258,width=100,height=85)
label4 = tk.Label(text="_____", fg="#b2ebf2", bg="#b2ebf2")
label4.pack( fill="x", pady=50)
label4.place(x=124,y=344,width=100,height=85)
label5 = tk.Label(text="_____", fg="#b2ebf2", bg="#b2ebf2")
label5.pack( fill="x", pady=50)
label5.place(x=124,y=430,width=100,height=85)
elif amp >=51 and amp <=60:
label2 = tk.Label(text="_____", fg="#b2ebf2", bg="#b2ebf2")
label2.pack( fill="x", pady=50)
label2.place(x=124,y=172,width=100,height=85)
label3 = tk.Label(text="_____", fg="#b2ebf2", bg="#b2ebf2")
label3.pack( fill="x", pady=50)
label3.place(x=124,y=258,width=100,height=85)
label4 = tk.Label(text="_____", fg="#b2ebf2", bg="#b2ebf2")
label4.pack( fill="x", pady=50)
label4.place(x=124,y=344,width=100,height=85)
label5 = tk.Label(text="_____", fg="#b2ebf2", bg="#b2ebf2")
label5.pack( fill="x", pady=50)
label5.place(x=124,y=430,width=100,height=85)
else:
label2 = tk.Label(text="_____", fg="#1959B3", bg="#1959B3")
label2.pack( fill="x", pady=50)
label2.place(x=124,y=172,width=100,height=85)
label3 = tk.Label(text="_____", fg="#1959B3", bg="#1959B3")
label3.pack( fill="x", pady=50)
label3.place(x=124,y=258,width=100,height=85)
label4 = tk.Label(text="_____", fg="#1959B3", bg="#1959B3")
label4.pack( fill="x", pady=50)
label4.place(x=124,y=344,width=100,height=85)
label5 = tk.Label(text="_____", fg="#1959B3", bg="#1959B3")
label5.pack( fill="x", pady=50)
label5.place(x=124,y=430,width=100,height=85)
canvas = Canvas(bg="#1959B3",height=550, width=340)
canvas.pack()
canvas.place(x=3,y=3)
label = tk.Label(text="OpenBCI", font=("Times",50),fg="white", bg="#1959B3")
label.pack(side="top", fill="x", pady=10)
label.place(relx=0.2,rely=0.07)
label2 = tk.Label(text="Camila Andrea Navarrete Cataño", fg="white", bg="#1959B3", font=("Verdana"))
label2.pack(side="bottom", fill="x", pady=10)
label3 = tk.Label(text="_____", fg="#1959B3", bg="#1959B3")
label3.pack( fill="x", pady=50)
label3.place(x=124,y=172,width=100,height=85)
label4 = tk.Label(text="_____", fg="#1959B3", bg="#1959B3")
label4.pack( fill="x", pady=50)
label4.place(x=124,y=258,width=100,height=85)
label5 = tk.Label(text="_____", fg="#1959B3", bg="#1959B3")
label5.pack( fill="x", pady=50)
label5.place(x=124,y=344,width=100,height=85)
label6 = tk.Label(text="_____", fg="#1959B3", bg="#1959B3")
label6.pack( fill="x", pady=50)
label6.place(x=124,y=430,width=100,height=85)
t1=threading.Thread(target=process)
t1.start()
root.geometry("350x600+440+20")
root.mainloop()
# -
# +
#def process(self):
# while 1:
# df = pd.read_csv('/Applications/SavedData/OpenBCI-RAW-2019-04-20_18-42-47.txt',delim_whitespace=True, names=('Tiempo', '1', '2','3','4','5','6','7','8'))
# #df2 = pd.read_csv(filelist[-1], delimiter = ',',names=['Tiempo', '1', '2','3','4','5','6','7','8','9'])
# dt = df.loc[: , "7"]
# dt1=dt.tail(500) #2 seg
# data = dt1.str.replace(",","")
# data = pd.to_numeric(data,downcast='signed')
# np.nan_to_num(data,[])
#
# fs= 250
# t = np.arange(1,len(data)+1,1)
#
# tsample = 1/fs
# In[] Filtros
# f_low = 50
# f_high = 1
# Filtro pasa bajas de 50 hz
# b, a = signal.butter(2, 2*f_low/fs, btype='low')
# filt= signal.filtfilt(b, a, data)
# #Filtro pasa altas de 1 hz
# b1, a1 = signal.butter(2, 2*f_high/fs, btype='high')
# filt1= signal.filtfilt(b1, a1, filt)
# #Filtro Notch
# fstart=(58)/fs*2;# Cutoff frequencies
# fstop=(62)/fs*2;# Cutoff frequencies
# b2, a2 = signal.butter(2,[fstart,fstop],'bandstop'); # Calculate filter coefficients
# filt2= signal.filtfilt(b2, a2, filt1)
# filt2=filt2/24
# In[] Transformada de Fourier
# ft = np.abs(np.fft.fft(filt2)) #Magnitud
# ft = ft[0:int(len(ft)/2)]
# f = np.linspace(0,fs/2,len(ft))# Vector de frecuencias
#f= [8,2,9,10,9,11,12]
#ft=[18,18,12,23,40,50,16]
# for i in range(len(f)):
# #f1= f[i,]
# f1= f[i]
# if f1 >= 8 and f1<= 14:
# #amp=ft[i,]
# amp=ft[i]
# print (amp)
# if amp >= 15 and amp <=30:
# label2 = tk.Label(self,text="_____", fg="#1959B3", bg="#1959B3")
# label2.pack( fill="x", pady=50)
# label2.place(x=124,y=172,width=100,height=85)
# label3 = tk.Label(self,text="_____", fg="#1959B3", bg="#1959B3")
# label3.pack( fill="x", pady=50)
# label3.place(x=124,y=258,width=100,height=85)
# label4 = tk.Label(self,text="_____", fg="#1959B3", bg="#1959B3")
# label4.pack( fill="x", pady=50)
# label4.place(x=124,y=344,width=100,height=85)
# label5 = tk.Label(self,text="_____", fg="#b2ebf2", bg="#b2ebf2")
# label5.pack( fill="x", pady=50)
# label5.place(x=124,y=430,width=100,height=85)
# elif amp >=31 and amp <=40:
# label2 = tk.Label(self,text="_____", fg="#1959B3", bg="#1959B3")
# label2.pack( fill="x", pady=50)
# label2.place(x=124,y=172,width=100,height=85)
# label3 = tk.Label(self,text="_____", fg="#1959B3", bg="#1959B3")
# label3.pack( fill="x", pady=50)
# label3.place(x=124,y=258,width=100,height=85)
# label4 = tk.Label(self,text="_____", fg="#b2ebf2", bg="#b2ebf2")
# label4.pack( fill="x", pady=50)
# label4.place(x=124,y=344,width=100,height=85)
# label5 = tk.Label(self,text="_____", fg="#b2ebf2", bg="#b2ebf2")
# label5.pack( fill="x", pady=50)
# label5.place(x=124,y=430,width=100,height=85)
#
# elif amp >=41 and amp <=50:
# label2 = tk.Label(self,text="_____", fg="#1959B3", bg="#1959B3")
# label2.pack( fill="x", pady=50)
# label2.place(x=124,y=172,width=100,height=85)
# label3 = tk.Label(self,text="_____", fg="#b2ebf2", bg="#b2ebf2")
# label3.pack( fill="x", pady=50)
# label3.place(x=124,y=258,width=100,height=85)
# label4 = tk.Label(self,text="_____", fg="#b2ebf2", bg="#b2ebf2")
# label4.pack( fill="x", pady=50)
# label4.place(x=124,y=344,width=100,height=85)
# label5 = tk.Label(self,text="_____", fg="#b2ebf2", bg="#b2ebf2")
# label5.pack( fill="x", pady=50)
# label5.place(x=124,y=430,width=100,height=85)
# elif amp >=51 and amp <=60:
# label2 = tk.Label(self,text="_____", fg="#b2ebf2", bg="#b2ebf2")
# label2.pack( fill="x", pady=50)
# label2.place(x=124,y=172,width=100,height=85)
# label3 = tk.Label(self,text="_____", fg="#b2ebf2", bg="#b2ebf2")
# label3.pack( fill="x", pady=50)
# label3.place(x=124,y=258,width=100,height=85)
# label4 = tk.Label(self,text="_____", fg="#b2ebf2", bg="#b2ebf2")
# label4.pack( fill="x", pady=50)
# label4.place(x=124,y=344,width=100,height=85)
# label5 = tk.Label(self,text="_____", fg="#b2ebf2", bg="#b2ebf2")
# label5.pack( fill="x", pady=50)
# label5.place(x=124,y=430,width=100,height=85)
# else:
# label2 = tk.Label(self,text="_____", fg="#1959B3", bg="#1959B3")
# label2.pack( fill="x", pady=50)
# label2.place(x=124,y=172,width=100,height=85)
# label3 = tk.Label(self,text="_____", fg="#1959B3", bg="#1959B3")
# label3.pack( fill="x", pady=50)
# label3.place(x=124,y=258,width=100,height=85)
# label4 = tk.Label(self,text="_____", fg="#1959B3", bg="#1959B3")
# label4.pack( fill="x", pady=50)
# label4.place(x=124,y=344,width=100,height=85)
# label5 = tk.Label(self,text="_____", fg="#1959B3", bg="#1959B3")
# label5.pack( fill="x", pady=50)
# label5.place(x=124,y=430,width=100,height=85)
#class SampleApp(tk.Tk):
# def __init__(self, *args, **kwargs):
# tk.Tk.__init__(self, *args, **kwargs)
# self.title("OpenBCI")
# canvas = Canvas(self,bg="#1959B3",height=550, width=340)
# canvas.pack()
# canvas.place(x=3,y=3)
# label = tk.Label(self, text="OpenBCI", font=("Times",50),fg="white", bg="#1959B3")
# label.pack(side="top", fill="x", pady=10)
# label.place(relx=0.2,rely=0.07)
# label2 = tk.Label(self, text="Camila Andrea Navarrete Cataño", fg="white", bg="#1959B3", font=("Verdana"))
# label2.pack(side="bottom", fill="x", pady=10)
# t1=threading.Thread(target=process)
# t1.start()
#logging.basicConfig(level=logging.ERROR)
# Load the plugins from the plugin directory.
#manager = PluginManager()
# #%run user.py -p /dev/tty.usbserial-DM00Q8QL --add csv_collect record.csv
#time.sleep(6)
#filelist=os.listdir('./')
#for fichier in filelist[:]: # filelist[:] makes a copy of filelist.
# if not(fichier.endswith(".csv")):
# filelist.remove(fichier)
#filelist.sort(key=lambda x: os.path.getmtime(x))
#ps= serial.Serial('/dev/cu.usbmodem14201',9600)
#cont=0
#time.sleep(0.5)
# filelist=os.listdir('./')
#for fichier in filelist[:]: # filelist[:] makes a copy of filelist.
# if not(fichier.endswith(".csv")):
# filelist.remove(fichier)
# print(filelist)
#filelist.sort(key=lambda x: os.path.getmtime(x))
#filelist[-1]
#app = SampleApp()
#app.geometry("350x600+440+20")
#app.mainloop()
# -
filelist=os.listdir('/Applications/SavedData')
for fichier in filelist[:]: # filelist[:] makes a copy of filelist.
if not(fichier.endswith(".txt")):
filelist.remove(fichier)
print(filelist)
# +
filelist.sort(key=lambda x: os.path.getmtime(x))
# -
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3.9.5 64-bit
# language: python
# name: python3
# ---
# # Structure manipulation
#
# In these labs, we will revisit manipulation of molecular 3D structures from the first labs. Then we will show how to visualize a protein (or nucleic acid) structure in the notebook, and then we will proceed to 3D structure superposition.
#
# ### Bio.PDB
#
# As we went through this package in the first labs, we will revisit it here only very briefly. Structures, irrespective of whether they are obtained from a PDB or mmCIF file, are represented by the [Bio.PDB.Entity](https://biopython.org/docs/1.75/api/Bio.PDB.Entity.html) module from the [Bio.PDB](https://biopython.org/docs/1.75/api/Bio.PDB.html) package. A structure object has the structure->model->chain->residue architecture.
#
# 
# +
from Bio.PDB.MMCIFParser import MMCIFParser
from Bio.PDB.PDBList import PDBList
pdbl = PDBList()
file_name=pdbl.retrieve_pdb_file("7lkr", file_format="mmCif", pdir=".")
parser = MMCIFParser()
structure = parser.get_structure("7lkr", file_name)
model = structure[0]
print(model.get_list())
chain = model['A']
print(chain.get_list()[0])
res = chain[(' ',5,' ')]
atom=res['CA']
print(atom.get_vector())
# -
# ### ---- Begin Exercise ----
# - Implement a function for protein-ligand binding site detection. The function should find residues that have an atom within a certain distance (a parameter of the function) from any atom of any ligand (heteroatoms).
# ### ---- End Exercise ----
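# One possible sketch of such a function (one of many reasonable implementations), using `Bio.PDB.NeighborSearch` and treating every HETATM residue except water as a ligand:
# +
from Bio.PDB import NeighborSearch

def find_binding_sites(model, cutoff=4.0):
    """Return residues with at least one atom within `cutoff` angstroms of any ligand atom."""
    ligand_atoms = []
    protein_residues = []
    for res in model.get_residues():
        hetflag = res.get_id()[0]
        if hetflag == ' ':
            protein_residues.append(res)        # standard residue
        elif hetflag != 'W':                     # heteroatom residue that is not water
            ligand_atoms.extend(res.get_atoms())
    protein_atoms = [a for r in protein_residues for a in r.get_atoms()]
    ns = NeighborSearch(protein_atoms)
    binding_sites = set()
    for atom in ligand_atoms:
        for near_atom in ns.search(atom.coord, cutoff):
            binding_sites.add(near_atom.get_parent())
    return sorted(binding_sites, key=lambda r: (r.parent.id, r.id[1]))

binding_sites = find_binding_sites(model, cutoff=4.0)
print(len(binding_sites), 'binding-site residues found')
# -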
# ### Structure visualization
# There exist advanced tools for structure visualization, such as PyMOL or Chimera for offline viewing, or web-based plugins such as [Mol*](https://molstar.org/). As stated on the Mol* GitHub, Mol* development was jointly initiated by PDBe and RCSB PDB to combine and build on the strengths of the [LiteMol](https://litemol.org/) (developed by PDBe) and [NGL](https://nglviewer.org/) (developed by RCSB PDB) viewers. We mention this here because NGL is (perhaps soon "was", as it is no longer actively developed) provided as a Jupyter widget and can therefore be used to visualize structures in a notebook.
import nglview as nv
view = nv.show_pdbid("3pqr") # load "3pqr" from RCSB PDB and display viewer widget
view
view.remove_surface()
view.add_surface(selection="protein", opacity=0.4)
# Let's now visualize the ligand binding sites we obtained from the exercise above. That can be done using NGL's [selection language](https://nglviewer.org/ngl/api/manual/usage/selection-language.html).
#selection = " ".join(["{}:{} or".format(r.get_id()[1], r.parent.id) for r in binding_sites])[:-3]
selection = '25:A or 26:A or 27:A or 28:A or 39:A or 40:A or 41:A or 42:A or 44:A or 45:A or 46:A or 48:A or 49:A or 50:A or 52:A or 54:A or 117:A or 118:A or 138:A or 139:A or 140:A or 141:A or 142:A or 143:A or 144:A or 145:A or 146:A or 147:A or 161:A or 163:A or 164:A or 165:A or 166:A or 167:A or 168:A or 170:A or 171:A or 172:A or 173:A or 181:A or 186:A or 187:A or 188:A or 189:A or 190:A or 191:A or 192:A or 25:B or 26:B or 27:B or 28:B or 39:B or 40:B or 41:B or 42:B or 44:B or 45:B or 46:B or 48:B or 49:B or 50:B or 51:B or 52:B or 54:B or 117:B or 118:B or 138:B or 139:B or 140:B or 141:B or 142:B or 143:B or 144:B or 145:B or 146:B or 147:B or 161:B or 163:B or 164:B or 165:B or 166:B or 167:B or 168:B or 169:B or 170:B or 171:B or 172:B or 173:B or 181:B or 186:B or 187:B or 188:B or 189:B or 190:B or 191:B or 192:B or 193:B or 213:B or 252:B or 253:B or 254:B or 255:B or 256:B or 257:B or 258:B or 259:B'
view_bp = nv.show_biopython(structure[0])
view_bp.representations = [
{"type": "cartoon", "params": {
"sele": "protein", "color": "residueindex"
}},
{"type": "ball+stick", "params": {
"sele": "hetero"
}},
{"type": "surface", "params": {
"sele": selection,
"opacity": 0.3,
"color": "pink"
}}
]
view_bp
view_bp.remove_surface()
#view_bp.clear_representations()
#view_bp.add_representation('surface', selection=selection, color='green')
view_bp.add_surface(selection=selection, color='green', opacity=0.3)
# The widget has many other features, such as adding, updating, and removing additional structures and representations (see the [API](http://nglviewer.org/nglview/latest/api.html)).
# ### Structure superposition
#
# To superpose a pair of structures, we need a mapping between the atoms in the two structures; based on this mapping, a procedure that minimizes RMSD (such as the [Kabsch algorithm](https://en.wikipedia.org/wiki/Kabsch_algorithm)) can be executed. In BioPython, we can use the [Bio.PDB.Superimposer](https://biopython.org/docs/1.75/api/Bio.PDB.Superimposer.html) module. To use it, we specify the mapping as two sets of atoms ([set_atoms method](https://biopython.org/docs/1.75/api/Bio.PDB.Superimposer.html#Bio.PDB.Superimposer.Superimposer.set_atoms)) based on which the transformation is computed and can be applied ([apply method](https://biopython.org/docs/1.75/api/Bio.PDB.Superimposer.html#Bio.PDB.Superimposer.Superimposer.apply)) to one of the structures, to map it optimally (in terms of RMSD) onto the other.
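#
# For intuition, here is a minimal NumPy sketch (the standard Kabsch/SVD formulation, not Biopython's own implementation) of the rotation step such a superposition performs, given two already-mapped coordinate arrays:
# +
import numpy as np

def kabsch_rmsd(P, Q):
    """Optimal-rotation RMSD between two (N, 3) coordinate arrays with a known 1:1 mapping."""
    P_c = P - P.mean(axis=0)                 # center both point sets
    Q_c = Q - Q.mean(axis=0)
    U, S, Vt = np.linalg.svd(P_c.T @ Q_c)    # SVD of the 3x3 covariance matrix
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # correct for a possible reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T  # optimal rotation mapping P onto Q
    return np.sqrt(np.mean(np.sum((P_c @ R.T - Q_c) ** 2, axis=1)))
# -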
#
# Let's try this procedure on the *7lkr* and *7ar5* structures which are both structures of the SARS-CoV-2 3CL protease.
# +
from Bio.PDB.MMCIFParser import MMCIFParser
from Bio.PDB.PDBList import PDBList
pdbl = PDBList()
file_name=pdbl.retrieve_pdb_file("7ar5", file_format="mmCif", pdir=".")
# -
parser = MMCIFParser()
structure_7ar5 = parser.get_structure("7ar5", '7ar5.cif')
structure_7lkr = parser.get_structure("7lkr", '7lkr.cif')
# Let's visualize the two structures in NGL.
# +
import nglview as nv
v = nv.NGLWidget()
c1 = v.add_component(nv.BiopythonStructure(structure_7ar5[0]))
c2 = v.add_component(nv.BiopythonStructure(structure_7lkr[0]))
c1.clear_representations()
c1.add_representation('cartoon', color="blue")
c2.clear_representations()
c2.add_representation('cartoon', color="red")
v
# -
# The structures are clearly not superimposed. Moreover, one of them is a (homo)dimer. We will need to pick one of its chains and use it to find a mapping against the only chain in the other structure. To do that, we will simply use sequence alignment. But first, the sequences of the chosen chains need to be extracted from the structures.
# +
from Bio.PDB.Polypeptide import three_to_one
def get_sequence(residues):
return ''.join(three_to_one(r.get_resname()) for r in residues)
def get_residues(structure, chain):
return [r for r in structure[0][chain].get_residues() if r.id[0] == ' ']
# -
res1 = get_residues(structure_7lkr, 'A')
seq1 = get_sequence(res1)
res2 = get_residues(structure_7ar5, 'A')
seq2 = get_sequence(res2)
# Once we have the sequences, we can carry out sequence alignment (see the second lab for details).
# +
from Bio import Align
from Bio.Align import substitution_matrices
aligner = Align.PairwiseAligner()
aligner.substitution_matrix = substitution_matrices.load("BLOSUM62")
aligner.open_gap_score = -11
aligner.end_open_gap_score = -11
aligner.extend_gap_score = -1
aligner.end_extend_gap_score = -1
# -
alns = aligner.align(seq1, seq2)
print(len(alns))
# We got one alignment which looks as follows.
alignment = alns[0]
print(alignment)
# Now, we can use the [aligned](https://biopython.org/docs/1.75/api/Bio.Align.html?highlight=alignment#Bio.Align.PairwiseAlignment.aligned) property of the alignment that gives us ranges of residues in the respective sequences which are mapped onto each other.
print(alignment.aligned)
for range1, range2 in zip(alignment.aligned[0], alignment.aligned[1]):
print(range1, range2)
# From the ranges, we can enumerate indices of the residues which should be mapped.
mapping = [[],[]]
for range1, range2 in zip(alns[0].aligned[0], alns[0].aligned[1]):
mapping[0] += list(range(range1[0], range1[1]))
mapping[1] += list(range(range2[0], range2[1]))
# However, the `set_atoms` method of `Bio.PDB.Superimposer` expects a mapping between atoms, not residues. To build one, we extract the C-alpha atoms of the mapped residues.
atoms1 = []
atoms2 = []
for r, m, a in [(res1, mapping[0], atoms1), (res2, mapping[1], atoms2)]:
for i in m:
a.append([a for a in r[i].get_atoms() if a.get_name() == 'CA'][0])
# Now we have a mapping which can be passed to the `Superimposer`, and we can apply the resulting transformation.
import Bio
super_imposer = Bio.PDB.Superimposer()
super_imposer.set_atoms(atoms1, atoms2)
super_imposer.apply(structure_7ar5.get_atoms())
# And finally, let's visualize the superposed structures (we are using a new NGL instance).
# +
import nglview as nv
v_super = nv.NGLWidget()
c1 = v_super.add_component(nv.BiopythonStructure(structure_7ar5[0]))
c2 = v_super.add_component(nv.BiopythonStructure(structure_7lkr[0]))
c1.clear_representations()
c1.add_representation('cartoon', color="blue")
c2.clear_representations()
c2.add_representation('cartoon', color="red")
v_super
# -
# ### ---- Begin Exercise ----
# - Implement a function which, given two protein structures, aligns all pairs of their chains, finds the pair with the best alignment, and superposes the structures based on that pair of chains. An example where matching the 'A' chains won't work are the GPCR structures 7E2X and 6A94: there you need to align chain R of 7E2X with chain A of 6A94 to get a reasonable superposition. One possible outline is sketched below.
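# One possible outline (a hedged sketch, not a reference solution); it reuses the `get_residues`, `get_sequence`, and `aligner` helpers defined above and simply tries every pair of chains, keeping the pair with the highest alignment score.
# +
from Bio.PDB import Superimposer

def superpose_best_chains(structure_fixed, structure_moving, aligner):
    best = None
    # Score every pair of chains by sequence alignment
    for chain_f in structure_fixed[0]:
        res_f = get_residues(structure_fixed, chain_f.id)
        seq_f = get_sequence(res_f)
        for chain_m in structure_moving[0]:
            res_m = get_residues(structure_moving, chain_m.id)
            seq_m = get_sequence(res_m)
            aln = aligner.align(seq_f, seq_m)[0]
            if best is None or aln.score > best[0]:
                best = (aln.score, aln, res_f, res_m)
    _, aln, res_f, res_m = best
    # Collect C-alpha atoms of the aligned residues of the best chain pair
    atoms_f, atoms_m = [], []
    for range_f, range_m in zip(aln.aligned[0], aln.aligned[1]):
        for i, j in zip(range(*range_f), range(*range_m)):
            atoms_f.append(res_f[i]['CA'])
            atoms_m.append(res_m[j]['CA'])
    # Superpose the moving structure onto the fixed one and return the RMSD
    sup = Superimposer()
    sup.set_atoms(atoms_f, atoms_m)
    sup.apply(structure_moving.get_atoms())
    return sup.rms
# -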
# ### ---- End Exercise ----
|
# ---
# jupyter:
# jupytext:
# formats: ipynb,md
# split_at_heading: true
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] papermill={"duration": 0.012804, "end_time": "2021-11-09T00:11:49.460537", "exception": false, "start_time": "2021-11-09T00:11:49.447733", "status": "completed"} tags=[]
# # Welcome to Computer Vision! #
#
# Have you ever wanted to teach a computer to see? In this course, that's exactly what you'll do!
#
# In this course, you'll:
# - Use modern deep-learning networks to build an **image classifier** with Keras
# - Design your own **custom convnet** with reusable blocks
# - Learn the fundamental ideas behind visual **feature extraction**
# - Master the art of **transfer learning** to boost your models
# - Utilize **data augmentation** to extend your dataset
#
# If you've taken the *Introduction to Deep Learning* course, you'll know everything you need to be successful.
#
# Now let's get started!
# + [markdown] papermill={"duration": 0.009492, "end_time": "2021-11-09T00:11:49.480261", "exception": false, "start_time": "2021-11-09T00:11:49.470769", "status": "completed"} tags=[]
# # Introduction #
#
# This course will introduce you to the fundamental ideas of computer vision. Our goal is to learn how a neural network can "understand" a natural image well enough to solve the same kinds of problems the human visual system can solve.
#
# The neural networks that are best at this task are called **convolutional neural networks**. (Sometimes we say **convnet** or **CNN** instead.) Convolution is the mathematical operation that gives the layers of a convnet their unique structure. In future lessons, you'll learn why this structure is so effective at solving computer vision problems.
#
# We will apply these ideas to the problem of **image classification**: given a picture, can we train a computer to tell us what it's a picture *of*? You may have seen [apps](https://identify.plantnet.org/) that can identify a species of plant from a photograph. That's an image classifier! In this course, you'll learn how to build image classifiers just as powerful as those used in professional applications.
#
# While our focus will be on image classification, what you'll learn in this course is relevant to every kind of computer vision problem. At the end, you'll be ready to move on to more advanced applications like generative adversarial networks and image segmentation.
# + [markdown] papermill={"duration": 0.009597, "end_time": "2021-11-09T00:11:49.499535", "exception": false, "start_time": "2021-11-09T00:11:49.489938", "status": "completed"} tags=[]
# # The Convolutional Classifier #
#
# A convnet used for image classification consists of two parts: a **convolutional base** and a **dense head**.
#
# <center>
# <!-- <img src="./images/1-parts-of-a-convnet.png" width="600" alt="The parts of a convnet: image, base, head, class; input, extract, classify, output.">-->
# <img src="https://i.imgur.com/U0n5xjU.png" width="600" alt="The parts of a convnet: image, base, head, class; input, extract, classify, output.">
# </center>
#
# The base is used to **extract the features** from an image. It is formed primarily of layers performing the convolution operation, but often includes other kinds of layers as well. (You'll learn about these in the next lesson.)
#
# The head is used to **determine the class** of the image. It is formed primarily of dense layers, but might include other layers like dropout.
#
# What do we mean by visual feature? A feature could be a line, a color, a texture, a shape, a pattern -- or some complicated combination.
#
# The whole process goes something like this:
#
# <center>
# <!-- <img src="./images/1-extract-classify.png" width="600" alt="The idea of feature extraction."> -->
# <img src="https://i.imgur.com/UUAafkn.png" width="600" alt="The idea of feature extraction.">
# </center>
#
# The features actually extracted look a bit different, but this gives the idea.
# + [markdown] papermill={"duration": 0.009437, "end_time": "2021-11-09T00:11:49.518831", "exception": false, "start_time": "2021-11-09T00:11:49.509394", "status": "completed"} tags=[]
# # Training the Classifier #
#
# The goal of the network during training is to learn two things:
# 1. which features to extract from an image (base),
# 2. which class goes with what features (head).
#
# These days, convnets are rarely trained from scratch. More often, we **reuse the base of a pretrained model**. To the pretrained base we then **attach an untrained head**. In other words, we reuse the part of a network that has already learned to do *1. Extract features*, and attach to it some fresh layers to learn *2. Classify*.
#
# <center>
# <!-- <img src="./images/1-attach-head-to-base.png" width="400" alt="Attaching a new head to a trained base."> -->
# <img src="https://imgur.com/E49fsmV.png" width="400" alt="Attaching a new head to a trained base.">
# </center>
#
# Because the head usually consists of only a few dense layers, very accurate classifiers can be created from relatively little data.
#
# Reusing a pretrained model is a technique known as **transfer learning**. It is so effective that almost every image classifier these days makes use of it.
# + [markdown] papermill={"duration": 0.009615, "end_time": "2021-11-09T00:11:49.538766", "exception": false, "start_time": "2021-11-09T00:11:49.529151", "status": "completed"} tags=[]
# # Example - Train a Convnet Classifier #
#
# Throughout this course, we're going to be creating classifiers that attempt to solve the following problem: is this a picture of a *Car* or of a *Truck*? Our dataset is about 10,000 pictures of various automobiles, around half cars and half trucks.
# + [markdown] papermill={"duration": 0.009396, "end_time": "2021-11-09T00:11:49.557751", "exception": false, "start_time": "2021-11-09T00:11:49.548355", "status": "completed"} tags=[]
# ## Step 1 - Load Data ##
#
# This next hidden cell will import some libraries and set up our data pipeline. We have a training split called `ds_train` and a validation split called `ds_valid`.
# + _kg_hide-input=true papermill={"duration": 10.609353, "end_time": "2021-11-09T00:12:00.176745", "exception": false, "start_time": "2021-11-09T00:11:49.567392", "status": "completed"} tags=[]
# Imports
import os, warnings
import matplotlib.pyplot as plt
from matplotlib import gridspec
import numpy as np
import tensorflow as tf
from tensorflow.keras.preprocessing import image_dataset_from_directory
# Reproducibility
def set_seed(seed=31415):
np.random.seed(seed)
tf.random.set_seed(seed)
os.environ['PYTHONHASHSEED'] = str(seed)
os.environ['TF_DETERMINISTIC_OPS'] = '1'
set_seed(31415)
# Set Matplotlib defaults
plt.rc('figure', autolayout=True)
plt.rc('axes', labelweight='bold', labelsize='large',
titleweight='bold', titlesize=18, titlepad=10)
plt.rc('image', cmap='magma')
warnings.filterwarnings("ignore") # to clean up output cells
# Load training and validation sets
ds_train_ = image_dataset_from_directory(
'../input/car-or-truck/train',
labels='inferred',
label_mode='binary',
image_size=[128, 128],
interpolation='nearest',
batch_size=64,
shuffle=True,
)
ds_valid_ = image_dataset_from_directory(
'../input/car-or-truck/valid',
labels='inferred',
label_mode='binary',
image_size=[128, 128],
interpolation='nearest',
batch_size=64,
shuffle=False,
)
# Data Pipeline
def convert_to_float(image, label):
image = tf.image.convert_image_dtype(image, dtype=tf.float32)
return image, label
AUTOTUNE = tf.data.experimental.AUTOTUNE
ds_train = (
ds_train_
.map(convert_to_float)
.cache()
.prefetch(buffer_size=AUTOTUNE)
)
ds_valid = (
ds_valid_
.map(convert_to_float)
.cache()
.prefetch(buffer_size=AUTOTUNE)
)
# + [markdown] papermill={"duration": 0.011317, "end_time": "2021-11-09T00:12:00.199929", "exception": false, "start_time": "2021-11-09T00:12:00.188612", "status": "completed"} tags=[]
# Let's take a look at a few examples from the training set.
# + _kg_hide-input=true papermill={"duration": 0.016523, "end_time": "2021-11-09T00:12:00.227459", "exception": false, "start_time": "2021-11-09T00:12:00.210936", "status": "completed"} tags=[]
import matplotlib.pyplot as plt
# + [markdown] papermill={"duration": 0.011122, "end_time": "2021-11-09T00:12:00.249578", "exception": false, "start_time": "2021-11-09T00:12:00.238456", "status": "completed"} tags=[]
# ## Step 2 - Define Pretrained Base ##
#
# The most commonly used dataset for pretraining is [*ImageNet*](http://image-net.org/about-overview), a large dataset of many kinds of natural images. Keras includes a variety of models pretrained on ImageNet in its [`applications` module](https://www.tensorflow.org/api_docs/python/tf/keras/applications). The pretrained model we'll use is called **VGG16**.
# + papermill={"duration": 2.941773, "end_time": "2021-11-09T00:12:03.202276", "exception": false, "start_time": "2021-11-09T00:12:00.260503", "status": "completed"} tags=[]
pretrained_base = tf.keras.models.load_model(
'../input/cv-course-models/cv-course-models/vgg16-pretrained-base',
)
pretrained_base.trainable = False
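# If the saved model above is not available, an equivalent pretrained base can be
# created directly from `tf.keras.applications` (a sketch, assuming internet access
# to download the ImageNet weights; the course's saved base may differ slightly and
# this alternative is not used below):
pretrained_base_alt = tf.keras.applications.VGG16(
    include_top=False,          # drop ImageNet's original classification head
    weights='imagenet',         # load the pretrained ImageNet weights
    input_shape=(128, 128, 3),  # match the dataset's image size
)
pretrained_base_alt.trainable = False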
# + [markdown] papermill={"duration": 0.017807, "end_time": "2021-11-09T00:12:03.238334", "exception": false, "start_time": "2021-11-09T00:12:03.220527", "status": "completed"} tags=[]
# ## Step 3 - Attach Head ##
#
# Next, we attach the classifier head. For this example, we'll use a layer of hidden units (the first `Dense` layer) followed by a layer to transform the outputs into a probability score for class 1, `Truck`. The `Flatten` layer transforms the two-dimensional outputs of the base into the one-dimensional inputs needed by the head.
# + papermill={"duration": 0.159583, "end_time": "2021-11-09T00:12:03.416002", "exception": false, "start_time": "2021-11-09T00:12:03.256419", "status": "completed"} tags=[]
from tensorflow import keras
from tensorflow.keras import layers
model = keras.Sequential([
pretrained_base,
layers.Flatten(),
layers.Dense(6, activation='relu'),
layers.Dense(1, activation='sigmoid'),
])
# + [markdown] papermill={"duration": 0.023124, "end_time": "2021-11-09T00:12:03.459264", "exception": false, "start_time": "2021-11-09T00:12:03.436140", "status": "completed"} tags=[]
# ## Step 4 - Train ##
#
# Finally, let's train the model. Since this is a two-class problem, we'll use the binary versions of `crossentropy` and `accuracy`. The `adam` optimizer generally performs well, so we'll use it here.
# + papermill={"duration": 298.154824, "end_time": "2021-11-09T00:17:01.639361", "exception": false, "start_time": "2021-11-09T00:12:03.484537", "status": "completed"} tags=[]
model.compile(
optimizer='adam',
loss='binary_crossentropy',
metrics=['binary_accuracy'],
)
history = model.fit(
ds_train,
validation_data=ds_valid,
epochs=30,
verbose=0,
)
# + [markdown] papermill={"duration": 0.024503, "end_time": "2021-11-09T00:17:01.690257", "exception": false, "start_time": "2021-11-09T00:17:01.665754", "status": "completed"} tags=[]
# When training a neural network, it's always a good idea to examine the loss and metric plots. The `history` object contains this information in a dictionary `history.history`. We can use Pandas to convert this dictionary to a dataframe and plot it with a built-in method.
# + papermill={"duration": 0.778829, "end_time": "2021-11-09T00:17:02.493039", "exception": false, "start_time": "2021-11-09T00:17:01.714210", "status": "completed"} tags=[]
import pandas as pd
history_frame = pd.DataFrame(history.history)
history_frame.loc[:, ['loss', 'val_loss']].plot()
history_frame.loc[:, ['binary_accuracy', 'val_binary_accuracy']].plot();
# + [markdown] papermill={"duration": 0.015094, "end_time": "2021-11-09T00:17:02.525296", "exception": false, "start_time": "2021-11-09T00:17:02.510202", "status": "completed"} tags=[]
# # Conclusion #
#
# In this lesson, we learned about the structure of a convnet classifier: a **head** to act as a classifier atop a **base** which performs the feature extraction.
#
# The head, essentially, is an ordinary classifier like the ones you learned about in the introductory course. It uses the features extracted by the base as its inputs. This is the basic idea behind convolutional classifiers: we attach a unit that performs feature engineering to the classifier itself.
#
# This is one of the big advantages deep neural networks have over traditional machine learning models: given the right network structure, the deep neural net can learn how to engineer the features it needs to solve its problem.
#
# For the next few lessons, we'll take a look at how the convolutional base accomplishes the feature extraction. Then, you'll learn how to apply these ideas and design some classifiers of your own.
# + [markdown] papermill={"duration": 0.016343, "end_time": "2021-11-09T00:17:02.558196", "exception": false, "start_time": "2021-11-09T00:17:02.541853", "status": "completed"} tags=[]
# # Your Turn #
#
# For now, move on to the [**Exercise**](https://www.kaggle.com/kernels/fork/10781907) and build your own image classifier!
# + [markdown] papermill={"duration": 0.016967, "end_time": "2021-11-09T00:17:02.592170", "exception": false, "start_time": "2021-11-09T00:17:02.575203", "status": "completed"} tags=[]
# ---
#
#
#
#
# *Have questions or comments? Visit the [course discussion forum](https://www.kaggle.com/learn/computer-vision/discussion) to chat with other learners.*
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import sklearn
from sklearn import datasets
from sklearn import linear_model
from sklearn import metrics
from sklearn import tree
from sklearn import neighbors
from sklearn import svm
from sklearn import ensemble
from sklearn import cluster
from sklearn import model_selection
import numpy as np
# %matplotlib inline
import matplotlib.pyplot as plt
import seaborn as sns #for graphics and figure styling
import pandas as pd
from sklearn import preprocessing
beer=pd.read_csv('recipeData.csv', index_col='BeerID', encoding='latin1')
beer.head()
beer=beer.drop(['URL','Name','BoilGravity','MashThickness','PitchRate','PrimaryTemp','PrimingMethod','PrimingAmount'], axis=1)
beer.head()
beer=beer.drop(['Style'], axis=1)
beer.head()
beer.SugarScale.describe()
beer.SugarScale.unique()
beer.info()
beer.SugarScale = beer.SugarScale.astype('category')
beer.info()
beer.BrewMethod = beer.BrewMethod.astype('category')
beer.info()
beer.BrewMethod
beer.SugarScale
from pandas.api.types import CategoricalDtype
beer.BrewMethod
beer.info()
beer.BrewMethod.cat.categories
beer.BrewMethod = beer.BrewMethod.cat.rename_categories([0,1,2,3])
print(np.bincount(beer.StyleID).argmax())
from collections import Counter
words=beer.StyleID
most_common_numbers= [word for word, word_count in Counter(words).most_common(5)]
most_common_numbers_2= [word for word, word_count in Counter(words).most_common(5)]
print(most_common_numbers)
words=beer.StyleID
word_count_total = 0
most_common_numbers= [word_count for word, word_count in Counter(words).most_common(5)]
word_count_total=np.array(most_common_numbers)
print(most_common_numbers)
print(word_count_total.sum())
beer4 = beer.set_index("StyleID")
print((beer.loc[beer['StyleID'].isin([7, 10, 134, 9, 4])]).head(50))
beer5 = beer.loc[beer['StyleID'].isin([7, 10, 134, 9, 4])]
beerStyleColumn = beer.StyleID
beerStyleColumn
beer = beer.drop('StyleID', axis=1)
beer5StyleColumn = beer5.StyleID
beer5 = beer5.drop('StyleID', axis=1)
beer5.SugarScale = beer5.SugarScale.cat.rename_categories([0,1])
beer5Small = beer5
beer5StyleColumnSmall = beer5StyleColumn
beerInfo_train, beerInfo_test, beerStyle_train, beerStyle_test = model_selection.train_test_split(beer5Small, beer5StyleColumnSmall, test_size=0.3, random_state=0)
from sklearn.ensemble import RandomForestClassifier
from sklearn.datasets import make_classification
beerIDM2 = RandomForestClassifier(max_depth=5, random_state=0)
from sklearn.feature_selection import RFE
rfe3 = RFE(beerIDM2, n_features_to_select=6)
rfe3.fit(beerInfo_train, beerStyle_train)
from sklearn.linear_model import LinearRegression
beerIDM = linear_model.LogisticRegression()
rfe2 = RFE(beerIDM, n_features_to_select=4)
rfe2.fit(beerInfo_train, beerStyle_train)
from sklearn.svm import SVR
clf = SVR(kernel="linear")
rfe4 = RFE(clf, n_features_to_select=5)
rfe4.fit(beerInfo_train, beerStyle_train)
rfe2.ranking_
predictOutput=rfe2.predict(beerInfo_test)
(predictOutput==beerStyle_test).sum()
(predictOutput!=beerStyle_test).sum()
beerStyle_train
beerIDM.fit(beerInfo_train, beerStyle_train)
beerTestPredict=beerIDM.predict(beerInfo_test)
print(beerTestPredict==beerStyle_test)
beerInfo_test.index
# +
#Running The Beer OOB Error Test
import matplotlib.pyplot as plt
from collections import OrderedDict
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, ExtraTreesClassifier
RANDOM_STATE = 123
ensemble_clfs = [
("RandomForestClassifier, max_features='sqrt'",
RandomForestClassifier(warm_start=True, max_features='sqrt',
oob_score=True,
random_state=RANDOM_STATE)),
("RandomForestClassifier, max_features='log2'",
RandomForestClassifier(warm_start=True, max_features='log2',
oob_score=True,
random_state=RANDOM_STATE)),
("RandomForestClassifier, max_features=None",
RandomForestClassifier(warm_start=True, max_features=None,
oob_score=True,
random_state=RANDOM_STATE))
]
# Map a classifier name to a list of (<n_estimators>, <error rate>) pairs.
error_rate = OrderedDict((label, []) for label, _ in ensemble_clfs)
# Range of `n_estimators` values to explore.
min_estimators = 5
max_estimators = 300
for label, clf in ensemble_clfs:
for i in range(min_estimators, max_estimators + 1):
clf.set_params(n_estimators=i)
clf.fit(beerInfo_train, beerStyle_train)
# Record the OOB error for each `n_estimators=i` setting.
oob_error = 1 - clf.oob_score_
error_rate[label].append((i, oob_error))
# Generate the "OOB error rate" vs. "n_estimators" plot.
xss=[0]*3
yss=[0]*3
i=0
for label, clf_err in error_rate.items():
xs, ys = zip(*clf_err)
xss[i]=xs
yss[i]=ys
i=i+1
plt.plot(xs, ys, label=label)
plt.xlim(min_estimators, max_estimators)
plt.xlabel("n_estimators")
plt.ylabel("OOB error rate")
plt.legend(loc="upper right")
plt.show()
# +
#Running The Extra Trees OOB Error Rate Chart
import matplotlib.pyplot as plt
from collections import OrderedDict
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, ExtraTreesClassifier
RANDOM_STATE = 123
ensemble_clfs = [
("ExtraTreesClassifier, max_features='sqrt'",
ExtraTreesClassifier(warm_start=True, max_features='sqrt',
oob_score=True, bootstrap=True,
random_state=RANDOM_STATE)),
("ExtraTreesClassifier, max_features='log2'",
ExtraTreesClassifier(warm_start=True, max_features='log2',
oob_score=True, bootstrap=True,
random_state=RANDOM_STATE)),
("ExtraTreesClassifier, max_features=None",
ExtraTreesClassifier(warm_start=True, max_features=None,
oob_score=True, bootstrap=True,
random_state=RANDOM_STATE))
]
# Map a classifier name to a list of (<n_estimators>, <error rate>) pairs.
error_rate = OrderedDict((label, []) for label, _ in ensemble_clfs)
# Range of `n_estimators` values to explore.
min_estimators = 5
max_estimators = 300
for label, clf in ensemble_clfs:
for i in range(min_estimators, max_estimators + 1):
clf.set_params(n_estimators=i)
clf.fit(beerInfo_train, beerStyle_train)
# Record the OOB error for each `n_estimators=i` setting.
oob_error = 1 - clf.oob_score_
error_rate[label].append((i, oob_error))
# Generate the "OOB error rate" vs. "n_estimators" plot.
xss=[0]*3
yss=[0]*3
i=0
for label, clf_err in error_rate.items():
xs, ys = zip(*clf_err)
xss[i]=xs
yss[i]=ys
i=i+1
plt.plot(xs, ys, label=label)
plt.xlim(min_estimators, max_estimators)
plt.xlabel("n_estimators")
plt.ylabel("OOB error rate")
plt.legend(loc="upper right")
plt.show()
# +
#Running The Random Forest Test Error Rate Chart
import matplotlib.pyplot as plt
from collections import OrderedDict
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, ExtraTreesClassifier
RANDOM_STATE = 123
ensemble_clfs = [
("RandomForestClassifier, max_features='sqrt'",
RandomForestClassifier(warm_start=True, max_features='sqrt',
oob_score=True,
random_state=RANDOM_STATE)),
("RandomForestClassifier, max_features='log2'",
RandomForestClassifier(warm_start=True, max_features='log2',
oob_score=True,
random_state=RANDOM_STATE)),
("RandomForestClassifier, max_features=None",
RandomForestClassifier(warm_start=True, max_features=None,
oob_score=True,
random_state=RANDOM_STATE))
]
# Map a classifier name to a list of (<n_estimators>, <error rate>) pairs.
error_rate = OrderedDict((label, []) for label, _ in ensemble_clfs)
# Range of `n_estimators` values to explore.
min_estimators = 5
max_estimators = 300
for label, clf in ensemble_clfs:
for i in range(min_estimators, max_estimators + 1):
clf.set_params(n_estimators=i)
clf.fit(beerInfo_train, beerStyle_train)
# Record the OOB error for each `n_estimators=i` setting.
y_pred = clf.predict(beerInfo_test)
test_errorCLF = (1 - sum(y_pred == beerStyle_test)/len(y_pred))
error_rate[label].append((i, test_errorCLF))
# Generate the "OOB error rate" vs. "n_estimators" plot.
for label, clf_err in error_rate.items():
xs, ys = zip(*clf_err)
plt.plot(xs, ys, label=label)
plt.xlim(min_estimators, max_estimators)
plt.xlabel("n_estimators")
plt.ylabel("OOB error rate")
plt.legend(loc="upper right")
plt.show()
# +
#Running The Extra Trees Test Error Rate Chart
import matplotlib.pyplot as plt
from collections import OrderedDict
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, ExtraTreesClassifier
RANDOM_STATE = 123
ensemble_clfs = [
("ExtraTreesClassifier, max_features='sqrt'",
ExtraTreesClassifier(warm_start=True, max_features='sqrt',
oob_score=True, bootstrap=True,
random_state=RANDOM_STATE)),
("ExtraTreesClassifier, max_features='log2'",
ExtraTreesClassifier(warm_start=True, max_features='log2',
oob_score=True, bootstrap=True,
random_state=RANDOM_STATE)),
("ExtraTreesClassifier, max_features=None",
ExtraTreesClassifier(warm_start=True, max_features=None,
oob_score=True, bootstrap=True,
random_state=RANDOM_STATE))
]
# Map a classifier name to a list of (<n_estimators>, <error rate>) pairs.
error_rate = OrderedDict((label, []) for label, _ in ensemble_clfs)
# Range of `n_estimators` values to explore.
min_estimators = 5
max_estimators = 300
for label, clf in ensemble_clfs:
for i in range(min_estimators, max_estimators + 1):
clf.set_params(n_estimators=i)
clf.fit(beerInfo_train, beerStyle_train)
# Record the OOB error for each `n_estimators=i` setting.
y_pred = clf.predict(beerInfo_test)
test_errorCLF = (1 - sum(y_pred == beerStyle_test)/len(y_pred))
error_rate[label].append((i, test_errorCLF))
# Generate the "OOB error rate" vs. "n_estimators" plot.
for label, clf_err in error_rate.items():
xs, ys = zip(*clf_err)
plt.plot(xs, ys, label=label)
plt.xlim(min_estimators, max_estimators)
plt.xlabel("n_estimators")
plt.ylabel("OOB error rate")
plt.legend(loc="upper right")
plt.show()
# +
import numpy as np
import matplotlib.pyplot as plt
from sklearn import datasets
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import zero_one_loss
from sklearn.ensemble import AdaBoostClassifier
n_estimators = 400
# A learning rate of 1. may not be optimal for both SAMME and SAMME.R
learning_rate = 1.
dt = DecisionTreeClassifier(max_depth=9, min_samples_leaf=1)
dt.fit(beerInfo_train, beerStyle_train)
dt_err = 1.0 - dt.score(beerInfo_test, beerStyle_test)
dt_stump = DecisionTreeClassifier(max_depth=1, min_samples_leaf=1)
dt_stump.fit(beerInfo_train, beerStyle_train)
dt_stump_err = 1.0 - dt_stump.score(beerInfo_test, beerStyle_test)
ada_real = AdaBoostClassifier(
base_estimator=dt_stump,
learning_rate=learning_rate,
n_estimators=n_estimators,
algorithm="SAMME.R")
ada_discrete = AdaBoostClassifier(
base_estimator=dt_stump,
learning_rate=learning_rate,
n_estimators=n_estimators,
algorithm="SAMME")
ada_discrete.fit(beerInfo_train, beerStyle_train)
ada_real.fit(beerInfo_train, beerStyle_train)
fig = plt.figure(figsize=[15,10])
ax = fig.add_subplot(111)
ax.plot([1, n_estimators], [dt_stump_err] * 2, 'k-',
label='Decision Stump Error')
ax.plot([1, n_estimators], [dt_err] * 2, 'k--',
label='Decision Tree Error')
ada_discrete_err = np.zeros((n_estimators,))
for i, y_pred in enumerate(ada_discrete.staged_predict(beerInfo_test)):
ada_discrete_err[i] = zero_one_loss(y_pred, beerStyle_test)
ada_discrete_err_train = np.zeros((n_estimators,))
for i, y_pred in enumerate(ada_discrete.staged_predict(beerInfo_train)):
ada_discrete_err_train[i] = zero_one_loss(y_pred, beerStyle_train)
ada_real_err = np.zeros((n_estimators,))
for i, y_pred in enumerate(ada_real.staged_predict(beerInfo_test)):
ada_real_err[i] = zero_one_loss(y_pred, beerStyle_test)
ada_real_err_train = np.zeros((n_estimators,))
for i, y_pred in enumerate(ada_real.staged_predict(beerInfo_train)):
ada_real_err_train[i] = zero_one_loss(y_pred, beerStyle_train)
ax.plot(np.arange(n_estimators) + 1, ada_discrete_err,
label='Discrete AdaBoost Test Error',
color='red')
ax.plot(np.arange(n_estimators) + 1, ada_discrete_err_train,
label='Discrete AdaBoost Train Error',
color='blue')
ax.plot(np.arange(n_estimators) + 1, ada_real_err,
label='Real AdaBoost Test Error',
color='orange')
ax.plot(np.arange(n_estimators) + 1, ada_real_err_train,
label='Real AdaBoost Train Error',
color='green')
ax.set_ylim((0.3, 0.5))
ax.set_xlabel('n_estimators')
ax.set_ylabel('error rate')
leg = ax.legend(loc='upper right', fancybox=True)
leg.get_frame().set_alpha(0.7)
plt.show()
fig.savefig("adaboostBeer.pdf", bbox_inches='tight')
# -
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# **Connect With Me in Linkedin :-** https://www.linkedin.com/in/dheerajkumar1997/
# ## Ordinal numbering encoding
#
# **Ordinal categorical variables**
#
# Categorical variable which categories can be meaningfully ordered are called ordinal. For example:
#
# - Student's grade in an exam (A, B, C or Fail).
# - Days of the week can be ordinal with Monday = 1, and Sunday = 7.
# - Educational level, with the categories: Elementary school, High school, College graduate, PhD ranked from 1 to 4.
#
# When the categorical variable is ordinal, the most straightforward approach is to replace the labels by some ordinal number.
#
# ### Advantages
#
# - Keeps the semantical information of the variable (human readable content)
# - Straightforward
#
# ### Disadvantage
#
# - Does not add machine learning valuable information
#
# I will simulate some data below to demonstrate this exercise
import pandas as pd
import datetime
# +
# create a variable with dates, and from that extract the weekday
# I create a list of dates with 30 days difference from today
# and then transform it into a datafame
base = datetime.datetime.today()
date_list = [base - datetime.timedelta(days=x) for x in range(0, 30)]
df = pd.DataFrame(date_list)
df.columns = ['day']
df
# +
# extract the week day name
df['day_of_week'] = df['day'].dt.day_name()
df.head()
# +
# Engineer categorical variable by ordinal number replacement
weekday_map = {'Monday':1,
'Tuesday':2,
'Wednesday':3,
'Thursday':4,
'Friday':5,
'Saturday':6,
'Sunday':7
}
df['day_ordinal'] = df.day_of_week.map(weekday_map)
df.head(10)
# -
# We can now use the variable day_ordinal in sklearn to build machine learning models.
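# Below is a minimal sketch of how the encoded column could feed a scikit-learn estimator; the target `y` here is made up purely to illustrate the API and has no real meaning.
# +
import numpy as np
from sklearn.linear_model import LinearRegression

X = df[['day_ordinal']]          # the ordinally encoded feature
y = np.arange(len(df))           # hypothetical target, for illustration only
reg = LinearRegression().fit(X, y)
reg.predict(X.head())
# -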
# **Connect With Me in Linkedin :-** https://www.linkedin.com/in/dheerajkumar1997/
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## Challenge 1-2
# +
from azure.cognitiveservices.vision.customvision.training import training_api
from azure.cognitiveservices.vision.customvision.training.models import ImageUrlCreateEntry
# Replace with a valid key
training_key = ""
prediction_key = ""
trainer = training_api.TrainingApi(training_key)
# Create a new project
print ("Creating project...")
project = trainer.create_project("aiworkshop")
# -
# Make two tags in the new project
hardshell_tag = trainer.create_tag(project.id, "Hardshell jackets")
insulated_tag = trainer.create_tag(project.id, "Insulated jackets")
# +
# Upload images to the project
import os
import time
hardshell_dir = "gear_images\\hardshell_jackets"
for image in os.listdir(os.fsencode(hardshell_dir)):
with open(hardshell_dir + "\\" + os.fsdecode(image), mode="rb") as img_data:
trainer.create_images_from_data(project.id, img_data.read(), [ hardshell_tag.id ])
insulated_dir = "gear_images\\insulated_jackets"
for image in os.listdir(os.fsencode(insulated_dir)):
with open(insulated_dir + "\\" + os.fsdecode(image), mode="rb") as img_data:
trainer.create_images_from_data(project.id, img_data.read(), [ insulated_tag.id ])
# +
# Train the project
print ("Training...")
iteration = trainer.train_project(project.id)
while (iteration.status == "Training"):
iteration = trainer.get_iteration(project.id, iteration.id)
print ("Training status: " + iteration.status)
time.sleep(1)
# The iteration is now trained. Make it the default project endpoint
trainer.update_iteration(project.id, iteration.id, is_default=True)
print ("Done!")
# +
from azure.cognitiveservices.vision.customvision.prediction import prediction_endpoint
from azure.cognitiveservices.vision.customvision.prediction.prediction_endpoint import models
# Now there is a trained endpoint that can be used to make a prediction
predictor = prediction_endpoint.PredictionEndpoint(prediction_key)
# Open the sample image and get back the prediction results.
with open("gear_images\\hardshell_jackets\\112128.jpeg", mode="rb") as test_data:
results = predictor.predict_image(project.id, test_data.read(), iteration.id)
# Display the results.
for prediction in results.predictions:
print ("\t" + prediction.tag + ": {0:.2f}%".format(prediction.probability * 100))
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/Erica97/Winter-2022-Data-Science-Intern-Challenge/blob/main/Q1.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + [markdown] id="Ld8oGhzLXjyd"
# ###Question 1: Given some sample data, write a program to answer the following:
# + [markdown] id="aP0Lb3DPZlG-"
# On Shopify, we have exactly 100 sneaker shops, and each of these shops sells only one model of shoe. We want to do some analysis of the average order value (AOV). When we look at orders data over a 30 day window, we naively calculate an AOV of $3145.13. Given that we know these shops are selling sneakers, a relatively affordable item, something seems wrong with our analysis.
# + [markdown] id="UBjeHzHzZo6R"
# ####a) Think about what could be going wrong with our calculation. Think about a better way to evaluate this data.
# + colab={"base_uri": "https://localhost:8080/", "height": 221} id="e2bj0nfggL7V" outputId="d0958c69-9884-4a15-8042-5daf542d8fa6"
from google.colab import drive
drive.mount('/content/gdrive')
import pandas as pd
df=pd.read_csv('gdrive/My Drive/Shopify/2019 Winter Data Science Intern Challenge Data Set.csv')
df.head()
# + colab={"base_uri": "https://localhost:8080/"} id="tiq0VsDyEnnA" outputId="cc0269ce-c8ed-4812-9dbb-35778dddfd15"
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
print("Firstly validate the 30-day window, from earliest:", min(df['created_at'])[:10] + ' to latest: ' + max(df['created_at'])[:10] + '.\n')
inaccurate_AOV=df['order_amount'].sum()/len(df['order_id'])
print("The inaccurate AOV is simply calculated as dividing the sum of order_amount by the number of orders.\n\nThe result is therefore ${0:.2f}.".format(inaccurate_AOV))
# + colab={"base_uri": "https://localhost:8080/", "height": 320} id="W8QOEsWTRL72" outputId="2f54590c-cddb-4c72-dcf0-84a90aca5c7e"
print("Secondly, inspect the distributions of order_amount by each type of payment_method.\n")
sns.set(style="darkgrid")
ax = sns.boxplot(x=df["payment_method"], y=df["order_amount"])
# + [markdown] id="Lejx4GCBxDNR"
# This problem was caused by outliers that were not removed from the dataset. Next, I will use the IQR method from statistics to find outliers in each payment category.
# + colab={"base_uri": "https://localhost:8080/", "height": 354} id="ZUIKowiBx324" outputId="b2a8505d-df7d-45d4-e916-9e19f874a62a"
IQR = df['order_amount'].quantile(.75) - df['order_amount'].quantile(.25)
filtered_df = df[(df['order_amount'] < df['order_amount'].quantile(.5) + IQR*1.75) & (df['order_amount'] > df['order_amount'].quantile(.5) - IQR*1.75)]
filtered_df.boxplot(column='order_amount', by='payment_method')
plt.suptitle('')
plt.show()
# + [markdown] id="lFuRDb2B3I8X"
# Now I want to know the min and max order_amount for each type of payment method after removing outliers.
# + colab={"base_uri": "https://localhost:8080/", "height": 173} id="HxukmaS5wg5i" outputId="62b262f9-0e4e-4b49-84e0-2cbeed5e74e6"
filtered_df.groupby('payment_method').describe()['order_amount']
# + [markdown] id="wcEq47o44r-I"
# Since the ranges for each payment method are close, I will keep all orders with order_amount between 90 and 676.
# + id="9KbMXjZ45D20"
filtered_df = filtered_df[filtered_df['order_amount'] >= 90]
filtered_df = filtered_df[filtered_df['order_amount'] <= 676]
# + [markdown] id="2Nnp3bjyDEec"
# To better evaluate AOV, I will use the filtered data in parts b) and c).
# + [markdown] id="HPBYSSfuZuWX"
# ####b) What metric would you report for this dataset? c) What is its value?
# + [markdown] id="f9Fa0AUlUGYe"
# **Alternative metric 1**: Remove the outliers we found in part a) and perform the same calculation.
#
# This is the **mean revenue per order.**
# + colab={"base_uri": "https://localhost:8080/"} id="DUiReGpBUHy4" outputId="7086d751-e256-45fc-d414-5d6c3ec4144f"
df_1 = filtered_df.copy()
AOV_1=df_1['order_amount'].sum()/len(df_1['order_id'])
print('alternative AOV1: ${0:.2f}'.format(AOV_1))
# + [markdown] id="A17pGKL3bGe-"
# **Alternative metric 2**: Suppose I want to find the **average unit price** for sneakers.
# + colab={"base_uri": "https://localhost:8080/"} id="zFdLNHGieiT8" outputId="f6aeafd1-b2ab-4bef-c362-2ed491728574"
AOV_2=df_1['order_amount'].sum()/df_1['total_items'].sum()
print('alternative AOV2: ${0:.2f}'.format(AOV_2))
# + [markdown] id="k8hADrxD7qvK"
# **Alternative 3**: Use median instead of mean, because the variance in order_amount is high. This is the **median revenue per order.**
# + colab={"base_uri": "https://localhost:8080/", "height": 297} id="hoiX8q3A8XuO" outputId="9924bf49-f81f-4cc5-970f-373387c09e3a"
df_1.describe()
# + [markdown] id="_DSw_Rxx8zHQ"
# alternative AOV3: $276.00
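#
# The median itself can also be read off directly from the filtered data (a quick check of the value quoted above):
# +
df_1['order_amount'].median()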
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# $\newcommand{\mb}[1]{\mathbf{ #1 }}$
# $\newcommand{\bs}[1]{\boldsymbol{ #1 }}$
# $\newcommand{\bb}[1]{\mathbb{ #1 }}$
#
# $\newcommand{\R}{\bb{R}}$
#
# $\newcommand{\ip}[2]{\left\langle #1, #2 \right\rangle}$
# $\newcommand{\norm}[1]{\left\Vert #1 \right\Vert}$
#
# $\newcommand{\der}[2]{\frac{\mathrm{d} #1 }{\mathrm{d} #2 }}$
# $\newcommand{\derp}[2]{\frac{\partial #1 }{\partial #2 }}$
#
# # Cart Pole
# Consider a cart on a frictionless track. Suppose a pendulum is attached to the cart by a frictionless joint. The cart is modeled as a point mass $m_c$ and the pendulum is modeled as a massless rigid link with point mass $m_p$ a distance $l$ away from the cart.
# Let $\mathcal{I} = (\mb{i}^1, \mb{i}^2, \mb{i}^3)$ denote an inertial frame. Suppose the position of the cart is resolved in the inertial frame as $\mb{r}_{co}^{\mathcal{I}} = (x, 0, 0)$. Additionally, suppose the gravitational force acting on the pendulum is resolved in the inertial frame as $\mb{f}_g^{\mathcal{I}} = (0, 0, -m_p g)$.
# Let $\mathcal{B} = (\mb{b}^1, \mb{b}^2, \mb{b}^3)$ denote a body reference frame, with $\mb{b}^2 = \mb{i}^2$. The position of the pendulum mass relative to the cart is resolved in the body frame as $\mb{r}_{pc}^\mathcal{B} = (0, 0, l)$.
# The kinetic energy of the system is:
#
# \begin{equation}
# \frac{1}{2} m_c \norm{\dot{\mb{r}}_{co}^\mathcal{I}}_2^2 + \frac{1}{2} m_p \norm{\dot{\mb{r}}_{po}^\mathcal{I}}_2^2
# \end{equation}
# First, note that $\dot{\mb{r}}_{co}^{\mathcal{I}} = (\dot{x}, 0, 0)$.
# Next, note that $\mb{r}_{po}^\mathcal{I} = \mb{r}_{pc}^\mathcal{I} + \mb{r}_{co}^\mathcal{I} = \mb{C}_{\mathcal{I}\mathcal{B}}\mb{r}_{pc}^\mathcal{B} + \mb{r}_{co}^\mathcal{I}$, where $\mb{C}_{\mathcal{I}\mathcal{B}}$ is the direction cosine matrix (DCM) satisfying:
#
# \begin{equation}
# \mb{C}_{\mathcal{I}\mathcal{B}} = \begin{bmatrix} \ip{\mb{i}_1}{\mb{b}_1} & \ip{\mb{i}_1}{\mb{b}_2} & \ip{\mb{i}_1}{\mb{b}_3} \\ \ip{\mb{i}_2}{\mb{b}_1} & \ip{\mb{i}_2}{\mb{b}_2} & \ip{\mb{i}_2}{\mb{b}_3} \\ \ip{\mb{i}_3}{\mb{b}_1} & \ip{\mb{i}_3}{\mb{b}_2} & \ip{\mb{i}_3}{\mb{b}_3} \end{bmatrix}.
# \end{equation}
# We parameterize the DCM using $\theta$, measuring the clockwise angle of the pendulum from upright in radians. In this case, the DCM is:
#
# \begin{equation}
# \mb{C}_{\mathcal{I}\mathcal{B}} = \begin{bmatrix} \cos{\theta} & 0 & \sin{\theta} \\ 0 & 1 & 0 \\ -\sin{\theta} & 0 & \cos{\theta} \end{bmatrix},
# \end{equation}
#
# following from $\cos{\left( \frac{\pi}{2} - \theta \right)} = \sin{\theta}$. Therefore:
#
# \begin{equation}
# \mb{r}_{po}^\mathcal{I} = \begin{bmatrix} x + l\sin{\theta} \\ 0 \\ l\cos{\theta} \end{bmatrix}
# \end{equation}
# We have $\dot{\mb{r}}_{po}^\mathcal{I} = \dot{\mb{C}}_{\mathcal{I}\mathcal{B}} \mb{r}_{pc}^\mathcal{B} + \dot{\mb{r}}_{co}^\mathcal{I}$, following from $\dot{\mb{r}}_{pc}^\mathcal{B} = \mb{0}_3$ since the pendulum is rigid. The derivative of the DCM is:
#
# \begin{equation}
# \der{{\mb{C}}_{\mathcal{I}\mathcal{B}}}{\theta} = \begin{bmatrix} -\sin{\theta} & 0 & \cos{\theta} \\ 0 & 0 & 0 \\ -\cos{\theta} & 0 & -\sin{\theta} \end{bmatrix},
# \end{equation}
#
# finally yielding:
#
# \begin{equation}
# \dot{\mb{r}}_{po}^\mathcal{I} = \dot{\theta} \der{\mb{C}_{\mathcal{I}\mathcal{B}}}{\theta} \mb{r}^{\mathcal{B}}_{pc} + \dot{\mb{r}}_{co}^\mathcal{I} = \begin{bmatrix} l\dot{\theta}\cos{\theta} + \dot{x} \\ 0 \\ -l\dot{\theta}\sin{\theta} \end{bmatrix}
# \end{equation}
# Define generalized coordinates $\mb{q} = (x, \theta)$ with configuration space $\mathcal{Q} = \R \times \bb{S}^1$, where $\bb{S}^1$ denotes the $1$-sphere. The kinetic energy can then be expressed as:
#
# \begin{align}
# T(\mb{q}, \dot{\mb{q}}) &= \frac{1}{2} m_c \begin{bmatrix} \dot{x} \\ \dot{\theta} \end{bmatrix}^\top \begin{bmatrix} 1 & 0 \\ 0 & 0 \end{bmatrix} \begin{bmatrix} \dot{x} \\ \dot{\theta} \end{bmatrix} + \frac{1}{2} m_p \begin{bmatrix} \dot{x} \\ \dot{\theta} \end{bmatrix}^\top \begin{bmatrix} 1 & l \cos{\theta} \\ l \cos{\theta} & l^2 \end{bmatrix} \begin{bmatrix} \dot{x} \\ \dot{\theta} \end{bmatrix}\\
# &= \frac{1}{2} \dot{\mb{q}}^\top\mb{D}(\mb{q})\dot{\mb{q}},
# \end{align}
#
# where inertia matrix function $\mb{D}: \mathcal{Q} \to \bb{S}^2_{++}$ is defined as:
#
# \begin{equation}
# \mb{D}(\mb{q}) = \begin{bmatrix} m_c + m_p & m_p l \cos{\theta} \\ m_p l \cos{\theta} & m_pl^2 \end{bmatrix}.
# \end{equation}
# Note that:
#
# \begin{equation}
# \derp{\mb{D}}{x} = \mb{0}_{2 \times 2},
# \end{equation}
#
# and:
#
# \begin{equation}
# \derp{\mb{D}}{\theta} = \begin{bmatrix} 0 & -m_p l \sin{\theta} \\ -m_p l \sin{\theta} & 0 \end{bmatrix},
# \end{equation}
#
# so we can express:
#
# \begin{equation}
# \derp{\mb{D}}{\mb{q}} = -m_p l \sin{\theta} (\mb{e}_1 \otimes \mb{e}_2 \otimes \mb{e}_2 + \mb{e}_2 \otimes \mb{e}_1 \otimes \mb{e}_2).
# \end{equation}
# The potential energy of the system is $U: \mathcal{Q} \to \R$ defined as:
#
# \begin{equation}
# U(\mb{q}) = -\ip{\mb{f}_g^\mathcal{I}}{\mb{r}^{\mathcal{I}}_{po}} = m_p g l \cos{\theta}.
# \end{equation}
# Define $\mb{G}: \mathcal{Q} \to \R^2$ as:
#
# \begin{equation}
# \mb{G}(\mb{q}) = \left(\derp{U}{\mb{q}}\right)^\top = \begin{bmatrix} 0 \\ -m_p g l \sin{\theta} \end{bmatrix}.
# \end{equation}
# Assume a force $(F, 0, 0)$ (resolved in the inertial frame) can be applied to the cart. The Euler-Lagrange equation yields:
#
# \begin{align}
# \der{}{t} \left( \derp{T}{\dot{\mb{q}}} \right)^\top - \left( \derp{T}{\mb{q}} - \derp{U}{\mb{q}} \right)^\top &= \der{}{t} \left( \mb{D}(\mb{q})\dot{\mb{q}} \right) - \frac{1}{2}\derp{\mb{D}}{\mb{q}}(\dot{\mb{q}}, \dot{\mb{q}}, \cdot) + \mb{G}(\mb{q})\\
# &= \mb{D}(\mb{q})\ddot{\mb{q}} + \derp{\mb{D}}{\mb{q}}(\cdot, \dot{\mb{q}}, \dot{\mb{q}}) - \frac{1}{2}\derp{\mb{D}}{\mb{q}}(\dot{\mb{q}}, \dot{\mb{q}}, \cdot) + \mb{G}(\mb{q})\\
# &= \mb{B} F,
# \end{align}
#
# with static actuation matrix:
#
# \begin{equation}
# \mb{B} = \begin{bmatrix} 1 \\ 0 \end{bmatrix}.
# \end{equation}
# Note that:
#
# \begin{align}
# \derp{\mb{D}}{\mb{q}}(\cdot, \dot{\mb{q}}, \dot{\mb{q}}) - \frac{1}{2}\derp{\mb{D}}{\mb{q}}(\dot{\mb{q}}, \dot{\mb{q}}, \cdot) &= -m_p l \sin{\theta} (\mb{e}_1 \dot{\theta}\dot{\theta} + \mb{e}_2\dot{x}\dot{\theta}) + \frac{1}{2} m_p l \sin{\theta} (\dot{x}\dot{\theta} \mb{e}_2 + \dot{\theta}\dot{x} \mb{e}_2)\\
# &= \begin{bmatrix} -m_p l \dot{\theta}^2 \sin{\theta} \\ 0 \end{bmatrix}\\
# &= \mb{C}(\mb{q}, \dot{\mb{q}})\dot{\mb{q}},
# \end{align}
#
# with Coriolis terms defined as:
#
# \begin{equation}
# \mb{C}(\mb{q}, \dot{\mb{q}}) = \begin{bmatrix} 0 & -m_p l \sin{\theta} \\ 0 & 0 \end{bmatrix}.
# \end{equation}
# Finally, we have:
#
# \begin{equation}
# \mb{D}(\mb{q})\ddot{\mb{q}} + \mb{C}(\mb{q}, \dot{\mb{q}})\dot{\mb{q}} + \mb{G}(\mb{q}) = \mb{B}F
# \end{equation}
# +
from numpy import array, concatenate, cos, dot, reshape, sin, zeros
from core.dynamics import RoboticDynamics
class CartPole(RoboticDynamics):
def __init__(self, m_c, m_p, l, g=9.81):
RoboticDynamics.__init__(self, 2, 1)
self.params = m_c, m_p, l, g
def D(self, q):
m_c, m_p, l, _ = self.params
_, theta = q
return array([[m_c + m_p, m_p * l * cos(theta)], [m_p * l * cos(theta), m_p * (l ** 2)]])
def C(self, q, q_dot):
_, m_p, l, _ = self.params
_, theta = q
_, theta_dot = q_dot
return array([[0, -m_p * l * theta_dot * sin(theta)], [0, 0]])
def U(self, q):
_, m_p, l, g = self.params
_, theta = q
return m_p * g * l * cos(theta)
def G(self, q):
_, m_p, l, g = self.params
_, theta = q
return array([0, -m_p * g * l * sin(theta)])
def B(self, q):
return array([[1], [0]])
m_c = 0.5
m_p = 0.25
l = 0.5
cart_pole = CartPole(m_c, m_p, l)
# -
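# As a quick sanity check (illustrative, not part of the original derivation), the inertia matrix returned by `D` should be symmetric and positive definite at any configuration:
# +
from numpy import array, pi
from numpy.linalg import eigvalsh

q_test = array([0.1, pi / 6])
D_test = cart_pole.D(q_test)
print(D_test)
print(eigvalsh(D_test))  # both eigenvalues should be strictly positive
# -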
# We attempt to stabilize the pendulum upright, that is, drive $\theta$ to $0$. We'll use the normal form transformation:
#
# \begin{equation}
# \bs{\Phi}(\mb{q}, \dot{\mb{q}}) = \begin{bmatrix} \bs{\eta}(\mb{q}, \dot{\mb{q}}) \\ \mb{z}(\mb{q}, \dot{\mb{q}}) \end{bmatrix} = \begin{bmatrix} \theta \\ \dot{\theta} \\ x \\ m_p l \dot{x} \cos{\theta} + m_p l^2 \dot{\theta} \end{bmatrix}.
# \end{equation}
# +
from core.dynamics import ConfigurationDynamics
class CartPoleOutput(ConfigurationDynamics):
def __init__(self, cart_pole):
ConfigurationDynamics.__init__(self, cart_pole, 1)
self.cart_pole = cart_pole
def y(self, q):
return q[1:]
def dydq(self, q):
return array([[0, 1]])
def d2ydq2(self, q):
return zeros((1, 2, 2))
output = CartPoleOutput(cart_pole)
# +
from numpy import identity
from core.controllers import FBLinController, LQRController
Q = 10 * identity(2)
R = identity(1)
lqr = LQRController.build(output, Q, R)
fb_lin = FBLinController(output, lqr)
# +
from numpy import linspace, pi
x_0 = array([0, pi / 4, 0, 0])
ts = linspace(0, 10, 1000 + 1)
xs, us = cart_pole.simulate(x_0, fb_lin, ts)
# -
from matplotlib.pyplot import subplots, show, tight_layout
# +
_, axs = subplots(2, 2, figsize=(8, 8))
ylabels = ['$x$ (m)', '$\\theta$ (rad)', '$\\dot{x}$ (m / sec)', '$\\dot{\\theta}$ (rad / sec)']
for ax, data, ylabel in zip(axs.flatten(), xs.T, ylabels):
ax.plot(ts, data, linewidth=3)
ax.set_ylabel(ylabel, fontsize=16)
ax.grid()
for ax in axs[-1]:
ax.set_xlabel('$t$ (sec)', fontsize=16)
tight_layout()
show()
# +
_, ax = subplots(figsize=(4, 4))
ax.plot(ts[:-1], us, linewidth=3)
ax.grid()
ax.set_xlabel('$t$ (sec)', fontsize=16)
ax.set_ylabel('$F$ (N)', fontsize=16)
show()
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Methods
#
# Methods are essentially functions built into objects.
#
# Methods perform specific actions on an object and can also take arguments, just like a function.
#
# Methods are in the form:
#
# object.method(arg1,arg2,etc...)
# Create a simple list
lst = [1,2,3,4,5]
# Fortunately, with IPython and the Jupyter Notebook we can quickly see all the possible methods using the tab key. The methods for a list are:
#
# * append
# * count
# * extend
# * insert
# * pop
# * remove
# * reverse
# * sort
#
# Let's try out a few of them:
# append() allows us to add elements to the end of a list:
lst.append(6)
lst
# Great! Now how about count()? The count() method will count the number of occurrences of an element in a list.
# Check how many times 2 shows up in the list
lst.count(2)
# You can always use Shift+Tab in the Jupyter Notebook to get more help about the method. In general Python you can use the help() function:
help(lst.count)
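# Let's also try a few of the other methods listed above: pop() removes and returns the last element, while reverse() and sort() modify the list in place.
lst.pop()      # removes and returns the last element we appended (6)
lst.reverse()  # reverses the list in place
lst.sort()     # sorts the list back into ascending order
lst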
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .ps1
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: .NET (PowerShell)
# language: PowerShell
# name: .net-powershell
# ---
# # T1484 - Group Policy Modification
# Adversaries may modify Group Policy Objects (GPOs) to subvert the intended discretionary access controls for a domain, usually with the intention of escalating privileges on the domain. Group policy allows for centralized management of user and computer settings in Active Directory (AD). GPOs are containers for group policy settings made up of files stored within a predictable network path <code>\\<DOMAIN>\SYSVOL\<DOMAIN>\Policies\</code>.(Citation: TechNet Group Policy Basics)(Citation: ADSecurity GPO Persistence 2016)
#
# Like other objects in AD, GPOs have access controls associated with them. By default all user accounts in the domain have permission to read GPOs. It is possible to delegate GPO access control permissions, e.g. write access, to specific users or groups in the domain.
#
# Malicious GPO modifications can be used to implement many other malicious behaviors such as [Scheduled Task/Job](https://attack.mitre.org/techniques/T1053), [Disable or Modify Tools](https://attack.mitre.org/techniques/T1562/001), [Ingress Tool Transfer](https://attack.mitre.org/techniques/T1105), [Create Account](https://attack.mitre.org/techniques/T1136), [Service Execution](https://attack.mitre.org/techniques/T1035), and more.(Citation: ADSecurity GPO Persistence 2016)(Citation: Wald0 Guide to GPOs)(Citation: Harmj0y Abusing GPO Permissions)(Citation: Mandiant M Trends 2016)(Citation: Microsoft Hacking Team Breach) Since GPOs can control so many user and machine settings in the AD environment, there are a great number of potential attacks that can stem from this GPO abuse.(Citation: Wald0 Guide to GPOs)
#
# For example, publicly available scripts such as <code>New-GPOImmediateTask</code> can be leveraged to automate the creation of a malicious [Scheduled Task/Job](https://attack.mitre.org/techniques/T1053) by modifying GPO settings, in this case modifying <code><GPO_PATH>\Machine\Preferences\ScheduledTasks\ScheduledTasks.xml</code>.(Citation: Wald0 Guide to GPOs)(Citation: Harmj0y Abusing GPO Permissions) In some cases an adversary might modify specific user rights like SeEnableDelegationPrivilege, set in <code><GPO_PATH>\MACHINE\Microsoft\Windows NT\SecEdit\GptTmpl.inf</code>, to achieve a subtle AD backdoor with complete control of the domain because the user account under the adversary's control would then be able to modify GPOs.(Citation: Harmj0y SeEnableDelegationPrivilege Right)
#
# ## Atomic Tests:
# Currently, no tests are available for this technique.
# ## Detection
# It is possible to detect GPO modifications by monitoring directory service changes using Windows event logs. Several events may be logged for such GPO modifications, including:
#
# * Event ID 5136 - A directory service object was modified
# * Event ID 5137 - A directory service object was created
# * Event ID 5138 - A directory service object was undeleted
# * Event ID 5139 - A directory service object was moved
# * Event ID 5141 - A directory service object was deleted
#
#
# GPO abuse will often be accompanied by some other behavior such as [Scheduled Task/Job](https://attack.mitre.org/techniques/T1053), which will have events associated with it to detect. Subsequent permission value modifications, like those to SeEnableDelegationPrivilege, can also be searched for in events associated with privileges assigned to new logons (Event ID 4672) and assignment of user rights (Event ID 4704).
# ## Shield Active Defense
# ### System Activity Monitoring
# Collect system activity logs which can reveal adversary activity.
#
# Capturing system logs can show logins, user and system events, etc. Collecting this data and potentially sending it to a centralized location can help reveal the presence of an adversary and the actions they perform on a compromised system.
# #### Opportunity
# There is an opportunity to deploy a tripwire that triggers an alert when an adversary touches a network resource or uses a specific technique.
# #### Use Case
# A defender could monitor for directory service changes using Windows event logs. This can alert to the presence of an adversary in the network.
# #### Procedures
# Ensure that systems capture and retain common system level activity artifacts that might be produced.
# Monitor Windows systems for event codes that reflect an adversary changing passwords, adding accounts to groups, etc.
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# # Unsupervised Learning
# Unlike supervised learning, unsupervised learning is the setting in which the algorithm has to operate on unannotated examples. In this case, the machine learns in an entirely independent way: data are fed to the machine without providing it with examples of the expected results.
#
# In this learning setting, the answers we want to find are not present in the provided data: the algorithm works with unlabeled data. The machine is therefore expected to produce the answers itself through various analyses and by grouping the data.
#
# In other words, the machine analyses the structure of the data X (without any targets) in order to then learn to carry out certain tasks on its own...
# ### 1. CLUSTERING
# *Unsupervised classification: we let the machine group the data according to their similarity*
# ### - KMeans Clustering:
# - **K-Means looks for the positions of the centers that minimize the distance between the points of a cluster (a group or class of points) and that cluster's center:**
#
# *This is equivalent to minimizing the within-cluster variance*
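#
# Formally, K-Means minimizes the within-cluster sum of squared distances (the *inertia* reported by scikit-learn): $\sum_{i=1}^{n} \min_{j} \lVert x_i - \mu_j \rVert_2^2$, where the $\mu_j$ are the cluster centers.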
import numpy as np
import matplotlib.pyplot as plt
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
# +
# Number of samples in the population
n_samples = 2000
random_state = 130
# Create the dataset
X, y = make_blobs(n_samples=n_samples, random_state=random_state)
# -
# Build a KMeans model with 3 centroids (clusters)
model = KMeans(n_clusters=3, random_state=random_state)
# Entrainer le modèle sur les données X
model.fit(X)
# Faire des prédictions
y_pred = model.predict(X)
# Visualisation des résultats des 2 premières variables en fonction des prédictions
plt.figure(figsize=(10, 8))
plt.scatter(X[:, 0], X[:, 1], c=y_pred)
# Display the final positions of all our centroids
plt.title("K-MEANS ALGORITHM")
plt.show()
# +
# Cost function of our model
# First form, negative
model.score(X)
# Second form, positive
model.inertia_
# -
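# As a quick sanity check on the claim that the cost is the sum of squared distances to the nearest centroid, we can recompute inertia_ by hand (a minimal sketch reusing the fitted model and data X from above):
# +
# Squared distance from every point to its assigned centroid, summed over the dataset
assigned_centers = model.cluster_centers_[model.predict(X)]
manual_inertia = np.sum((X - assigned_centers) ** 2)
print(manual_inertia, model.inertia_)  # the two values should match up to floating-point error
# -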
# Repeat the previous example with less data
X, y = make_blobs(n_samples=100, centers=3, cluster_std=0.4, random_state=0)
plt.figure(figsize=(10, 6))
plt.scatter(X[:, 0], X[:, 1])
plt.show()
# Create a model with K clusters
model = KMeans(n_clusters=3)
# Train the model on the data X
model.fit(X)
# We can inspect how the samples were assigned to clusters
#model.labels_
# Or call predict(), which gives the same result
y_pred = model.predict(X)
# Visualize how the data are clustered
plt.figure(figsize=(10, 6))
plt.scatter(X[:, 0], X[:, 1], c=y_pred)
# Display the final positions of the centroids
#plt.scatter(model.cluster_centers_[:, 0], model.cluster_centers_[:, 1], c="r")
plt.show()
# ### Elbow Method: *finding the right number of clusters*
# **Look for an "elbow" zone in the curve of the model cost (inertia_) as the number of clusters grows**
# List of the costs of the different models
inertia = []
# Range of K values to test
K_range = range(1, 30)
for k in K_range:
    # Create a model and train it on the data X
model = KMeans(n_clusters=k).fit(X)
    # Compute the model cost
inertia.append(model.inertia_)
# Plot the result
plt.figure(figsize=(8, 6))
plt.plot(K_range, inertia)
plt.xlabel("nombre de clusters")
plt.ylabel("coût du modèle (Inertia)")
plt.show()
# # 2. Anomaly Detection
# **This technique consists of detecting the samples in the dataset whose features X are very far from those of the other samples**
# ### Isolation Forest:
# - **This algorithm performs a series of random splits of the dataset and then counts the number of splits (cuts) needed to isolate each sample. The fewer the splits => the higher the probability of an anomaly**
# Generate data
X, y = make_blobs(n_samples=50, centers=1, cluster_std=0.1, random_state=0)
X[-1:] = np.array([2.25, 5])
# Visualization
plt.figure(figsize=(8, 6))
# The first two variables
plt.scatter(X[:, 0], X[:, 1])
plt.show()
# Isolation Forest model
from sklearn.ensemble import IsolationForest
# +
# Build the model, specifying the fraction of data to flag (contamination)
model = IsolationForest(contamination=0.02)
# Train the model on the data X
model.fit(X)
# -
model.predict(X)
# +
# # +1 = Normal
# -1 = Anomaly
# Looking at the array above, we can see there is a single anomaly in the dataset
# +
y_pred = model.predict(X)
# Visualization
plt.figure(figsize=(8, 6))
# The first two variables
plt.scatter(X[:, 0], X[:, 1], c=y_pred)
plt.show()
# -
# ## Application - Digits decomposition
from sklearn.datasets import load_digits
# +
# Create an instance
digits = load_digits()
images = digits.images
# Create the features (data)
X = digits.data
# Create the targets (classes)
y = digits.target
# Check the shape of the data
X.shape
# -
# Create the model; keeping the contamination rate between 0.01 (1%) and 0.05 (5%) works best
model = IsolationForest(random_state=0, contamination=0.03)
# Train the model on the data X
model.fit(X)
# Training and prediction can also be done in a single step
model.fit_predict(X)
# Filter the model predictions using boolean indexing
# outliers is a boolean mask of the predictions equal to -1 (anomalies)
outliers = model.predict(X) == -1
outliers
# Apply the outliers mask to the images
images[outliers]
plt.figure(figsize=(8, 6))
# Display the first outlier image of the dataset
plt.imshow(images[outliers][0])
# Use the label of the first outlier to see which digit it represents
plt.title(y[outliers][0])
plt.show()
# ### - Local Outlier Factor
# **This algorithm relies on the nearest-neighbours method and can be used for** *novelty detection (detecting anomalies in future data)*
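# A minimal sketch of how it might be used with scikit-learn (the parameters and the toy data below are illustrative; novelty=True enables scoring of new, unseen samples):
# +
from sklearn.neighbors import LocalOutlierFactor
from sklearn.datasets import make_blobs
# Fit on data considered "normal", centered on the origin
X_lof, _ = make_blobs(n_samples=100, centers=[[0, 0]], cluster_std=0.3, random_state=0)
lof = LocalOutlierFactor(n_neighbors=20, novelty=True)
lof.fit(X_lof)
# Score new observations: one plausible point, one far away
X_new = np.array([[0.0, 0.0], [10.0, 10.0]])
print(lof.predict(X_new))  # +1 = normal, -1 = anomaly
# -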
# # 3. PCA (Principal Component Analysis):
# The idea is to project the data onto axes called **principal components**, seeking to minimize the distance between the points and their projections.
# This way, we reduce the dimension of the **dataset** while **preserving as much of the variance** of our data as possible.
#
# To find the projection axes (a minimal numpy sketch of these steps follows the list):
# - **Compute the covariance matrix of the data**
# - **Find the eigenvectors of this matrix: these are the *principal components***
# - **Project the data onto these axes**
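# +
Xc = X - X.mean(axis=0)                      # center the data
cov = np.cov(Xc, rowvar=False)               # 1. covariance matrix of the data
eigvals, eigvecs = np.linalg.eigh(cov)       # 2. eigenvectors = principal components
order = np.argsort(eigvals)[::-1]            # sort components by explained variance
n_comp = 2                                   # illustrative choice of dimension
components = eigvecs[:, order[:n_comp]]
X_proj = Xc @ components                     # 3. project the data onto these axes
print(X_proj.shape)
# -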
# ## - Dimension Reduction
# **It consists of reducing the superfluous complexity of a dataset by projecting its data into a lower-dimensional space (a space with fewer variables), in order to speed up learning and to fight the so-called** *curse of dimensionality* **(the risk of overfitting caused by an excess of dimensions)**
from sklearn.decomposition import PCA
# ### How to choose the number of components?
# 1. Data visualization:
# *Project the dataset into a* **2D or 3D space (n_components=2 or 3)**
# 2. Data compression:
# *Reduce the size of the dataset as much as possible while keeping **95-99% of the variance** of the data*
# +
# Create the model, specifying the number of dimensions (components) onto which to project the data
model = PCA(n_components=10)
# Train the model on the data X and transform them
model.fit_transform(X)
# -
# ### 1. Data visualization
X.shape
# Project these 64 variables into a 2D space and visualize the result in a plot
model = PCA(n_components=2)
# Train the model and use it to transform the data X
X_reduit = model.fit_transform(X)
X_reduit.shape
# Look at the two components of X_reduit in a plot
plt.figure(figsize=(10, 8))
# Display the first 2 components of X_reduit, colored by the labels (y)
plt.scatter(X_reduit[:, 0], X_reduit[:, 1], c=y)
plt.colorbar()
plt.show()
# What the axes correspond to
correspondance = model.components_
correspondance
# We can see that each component contains 64 values
correspondance.shape
# ### 2. Data compression (dimensionality reduction): *keep 95-99% of the variance of our data*
X.shape
# Train the model with as many components as we have features (64)
model = PCA(n_components=64)
X_reduit = model.fit_transform(X)
# Examine the fraction of variance preserved by each component
model.explained_variance_ratio_
# Cumulative sum of all these variance ratios
np.cumsum(model.explained_variance_ratio_)
# Plot this
plt.plot(np.cumsum(model.explained_variance_ratio_))
plt.show()
# Find the index from which the cumulative percentage exceeds 99%
np.argmax(np.cumsum(model.explained_variance_ratio_) > 0.99 )
# Once we know this index, we can retrain the model with that number of components
model = PCA(n_components=40)
X_reduit = model.fit_transform(X)
# To visualize these images after compression, we first need to
# decompress them back to 64 pixels with the inverse_transform() method
X_recovered = model.inverse_transform(X_reduit)
plt.figure(figsize=(10, 8))
plt.imshow(X_recovered[0].reshape((8, 8)))
plt.show()
# How many components were needed to reach the target variance?
model.n_components_
# ### NB:
# - **The data must be standardized before applying PCA (StandardScaler)**
# - **PCA is normally designed for *continuous variables***
# - **PCA is not *effective* on *non-linear* datasets**
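# A minimal sketch of the first point, assuming a scikit-learn Pipeline (the number of components is illustrative):
# +
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
# Standardize the features, then project onto 40 principal components
pca_pipeline = make_pipeline(StandardScaler(), PCA(n_components=40))
X_scaled_reduced = pca_pipeline.fit_transform(X)
print(X_scaled_reduced.shape)
# -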
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
# Copyright 2016 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Provides utilities to preprocess images for the MobileNet networks."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import tensorflow as tf
from tensorflow.python.ops import control_flow_ops
def apply_with_random_selector(x, func, num_cases):
"""Computes func(x, sel), with sel sampled from [0...num_cases-1].
Args:
x: input Tensor.
func: Python function to apply.
num_cases: Python int32, number of cases to sample sel from.
Returns:
The result of func(x, sel), where func receives the value of the
selector as a python integer, but sel is sampled dynamically.
"""
sel = tf.random_uniform([], maxval=num_cases, dtype=tf.int32)
# Pass the real x only to one of the func calls.
return control_flow_ops.merge([
func(control_flow_ops.switch(x, tf.equal(sel, case))[1], case)
for case in range(num_cases)])[0]
def distort_color(image, color_ordering=0, fast_mode=True, scope=None):
"""Distort the color of a Tensor image.
Each color distortion is non-commutative and thus ordering of the color ops
matters. Ideally we would randomly permute the ordering of the color ops.
  Rather than adding that level of complication, we select a distinct ordering
of color ops for each preprocessing thread.
Args:
image: 3-D Tensor containing single image in [0, 1].
color_ordering: Python int, a type of distortion (valid values: 0-3).
fast_mode: Avoids slower ops (random_hue and random_contrast)
scope: Optional scope for name_scope.
Returns:
3-D Tensor color-distorted image on range [0, 1]
Raises:
ValueError: if color_ordering not in [0, 3]
"""
with tf.name_scope(scope, 'distort_color', [image]):
if fast_mode:
if color_ordering == 0:
image = tf.image.random_brightness(image, max_delta=32. / 255.)
image = tf.image.random_saturation(image, lower=0.5, upper=1.5)
else:
image = tf.image.random_saturation(image, lower=0.5, upper=1.5)
image = tf.image.random_brightness(image, max_delta=32. / 255.)
else:
if color_ordering == 0:
image = tf.image.random_brightness(image, max_delta=32. / 255.)
image = tf.image.random_saturation(image, lower=0.5, upper=1.5)
image = tf.image.random_hue(image, max_delta=0.2)
image = tf.image.random_contrast(image, lower=0.5, upper=1.5)
elif color_ordering == 1:
image = tf.image.random_saturation(image, lower=0.5, upper=1.5)
image = tf.image.random_brightness(image, max_delta=32. / 255.)
image = tf.image.random_contrast(image, lower=0.5, upper=1.5)
image = tf.image.random_hue(image, max_delta=0.2)
elif color_ordering == 2:
image = tf.image.random_contrast(image, lower=0.5, upper=1.5)
image = tf.image.random_hue(image, max_delta=0.2)
image = tf.image.random_brightness(image, max_delta=32. / 255.)
image = tf.image.random_saturation(image, lower=0.5, upper=1.5)
elif color_ordering == 3:
image = tf.image.random_hue(image, max_delta=0.2)
image = tf.image.random_saturation(image, lower=0.5, upper=1.5)
image = tf.image.random_contrast(image, lower=0.5, upper=1.5)
image = tf.image.random_brightness(image, max_delta=32. / 255.)
else:
raise ValueError('color_ordering must be in [0, 3]')
# The random_* ops do not necessarily clamp.
return tf.clip_by_value(image, 0.0, 1.0)
def distorted_bounding_box_crop(image,
bbox,
min_object_covered=0.1,
aspect_ratio_range=(0.75, 1.33),
area_range=(0.05, 1.0),
max_attempts=100,
scope=None):
"""Generates cropped_image using a one of the bboxes randomly distorted.
See `tf.image.sample_distorted_bounding_box` for more documentation.
Args:
image: 3-D Tensor of image (it will be converted to floats in [0, 1]).
bbox: 3-D float Tensor of bounding boxes arranged [1, num_boxes, coords]
where each coordinate is [0, 1) and the coordinates are arranged
as [ymin, xmin, ymax, xmax]. If num_boxes is 0 then it would use the whole
image.
min_object_covered: An optional `float`. Defaults to `0.1`. The cropped
area of the image must contain at least this fraction of any bounding box
supplied.
aspect_ratio_range: An optional list of `floats`. The cropped area of the
image must have an aspect ratio = width / height within this range.
area_range: An optional list of `floats`. The cropped area of the image
      must contain a fraction of the supplied image within this range.
max_attempts: An optional `int`. Number of attempts at generating a cropped
region of the image of the specified constraints. After `max_attempts`
failures, return the entire image.
scope: Optional scope for name_scope.
Returns:
A tuple, a 3-D Tensor cropped_image and the distorted bbox
"""
with tf.name_scope(scope, 'distorted_bounding_box_crop', [image, bbox]):
# Each bounding box has shape [1, num_boxes, box coords] and
# the coordinates are ordered [ymin, xmin, ymax, xmax].
# A large fraction of image datasets contain a human-annotated bounding
# box delineating the region of the image containing the object of interest.
# We choose to create a new bounding box for the object which is a randomly
# distorted version of the human-annotated bounding box that obeys an
# allowed range of aspect ratios, sizes and overlap with the human-annotated
# bounding box. If no box is supplied, then we assume the bounding box is
# the entire image.
sample_distorted_bounding_box = tf.image.sample_distorted_bounding_box(
tf.shape(image),
bounding_boxes=bbox,
min_object_covered=min_object_covered,
aspect_ratio_range=aspect_ratio_range,
area_range=area_range,
max_attempts=max_attempts,
use_image_if_no_bounding_boxes=True)
bbox_begin, bbox_size, distort_bbox = sample_distorted_bounding_box
# Crop the image to the specified bounding box.
cropped_image = tf.slice(image, bbox_begin, bbox_size)
return cropped_image, distort_bbox
def preprocess_for_train(image, height, width, bbox,
fast_mode=True,
scope=None):
"""Distort one image for training a network.
Distorting images provides a useful technique for augmenting the data
set during training in order to make the network invariant to aspects
  of the image that do not affect the label.
Additionally it would create image_summaries to display the different
transformations applied to the image.
Args:
image: 3-D Tensor of image. If dtype is tf.float32 then the range should be
      [0, 1], otherwise it would be converted to tf.float32 assuming that the range
is [0, MAX], where MAX is largest positive representable number for
int(8/16/32) data type (see `tf.image.convert_image_dtype` for details).
height: integer
width: integer
bbox: 3-D float Tensor of bounding boxes arranged [1, num_boxes, coords]
where each coordinate is [0, 1) and the coordinates are arranged
as [ymin, xmin, ymax, xmax].
fast_mode: Optional boolean, if True avoids slower transformations (i.e.
bi-cubic resizing, random_hue or random_contrast).
scope: Optional scope for name_scope.
Returns:
3-D float Tensor of distorted image used for training with range [-1, 1].
"""
with tf.name_scope(scope, 'distort_image', [image, height, width, bbox]):
if bbox is None:
bbox = tf.constant([0.0, 0.0, 1.0, 1.0],
dtype=tf.float32,
shape=[1, 1, 4])
if image.dtype != tf.float32:
image = tf.image.convert_image_dtype(image, dtype=tf.float32)
# Each bounding box has shape [1, num_boxes, box coords] and
# the coordinates are ordered [ymin, xmin, ymax, xmax].
image_with_box = tf.image.draw_bounding_boxes(tf.expand_dims(image, 0),
bbox)
tf.summary.image('image_with_bounding_boxes', image_with_box)
distorted_image, distorted_bbox = distorted_bounding_box_crop(image, bbox)
# Restore the shape since the dynamic slice based upon the bbox_size loses
# the third dimension.
distorted_image.set_shape([None, None, 3])
image_with_distorted_box = tf.image.draw_bounding_boxes(
tf.expand_dims(image, 0), distorted_bbox)
tf.summary.image('images_with_distorted_bounding_box',
image_with_distorted_box)
# This resizing operation may distort the images because the aspect
# ratio is not respected. We select a resize method in a round robin
# fashion based on the thread number.
# Note that ResizeMethod contains 4 enumerated resizing methods.
# We select only 1 case for fast_mode bilinear.
num_resize_cases = 1 if fast_mode else 4
distorted_image = apply_with_random_selector(
distorted_image,
lambda x, method: tf.image.resize_images(x, [height, width], method=method),
num_cases=num_resize_cases)
tf.summary.image('cropped_resized_image',
tf.expand_dims(distorted_image, 0))
# Randomly flip the image horizontally.
distorted_image = tf.image.random_flip_left_right(distorted_image)
# Randomly distort the colors. There are 4 ways to do it.
distorted_image = apply_with_random_selector(
distorted_image,
lambda x, ordering: distort_color(x, ordering, fast_mode),
num_cases=4)
tf.summary.image('final_distorted_image',
tf.expand_dims(distorted_image, 0))
distorted_image = tf.subtract(distorted_image, 0.5)
distorted_image = tf.multiply(distorted_image, 2.0)
return distorted_image
def preprocess_for_eval(image, height, width,
central_fraction=0.875, scope=None):
"""Prepare one image for evaluation.
If height and width are specified it would output an image with that size by
applying resize_bilinear.
  If central_fraction is specified it would crop the central fraction of the
input image.
Args:
image: 3-D Tensor of image. If dtype is tf.float32 then the range should be
      [0, 1], otherwise it would be converted to tf.float32 assuming that the range
is [0, MAX], where MAX is largest positive representable number for
int(8/16/32) data type (see `tf.image.convert_image_dtype` for details)
height: integer
width: integer
central_fraction: Optional Float, fraction of the image to crop.
scope: Optional scope for name_scope.
Returns:
3-D float Tensor of prepared image.
"""
with tf.name_scope(scope, 'eval_image', [image, height, width]):
if image.dtype != tf.float32:
image = tf.image.convert_image_dtype(image, dtype=tf.float32)
# Crop the central region of the image with an area containing 87.5% of
# the original image.
if central_fraction:
image = tf.image.central_crop(image, central_fraction=central_fraction)
if height and width:
# Resize the image to the specified height and width.
image = tf.expand_dims(image, 0)
image = tf.image.resize_bilinear(image, [height, width],
align_corners=False)
image = tf.squeeze(image, [0])
image = tf.subtract(image, 0.5)
image = tf.multiply(image, 2.0)
return image
def preprocess_image(image, height, width,
is_training=False,
bbox=None,
fast_mode=True):
"""Pre-process one image for training or evaluation.
Args:
image: 3-D Tensor [height, width, channels] with the image.
height: integer, image expected height.
width: integer, image expected width.
is_training: Boolean. If true it would transform an image for train,
otherwise it would transform it for evaluation.
bbox: 3-D float Tensor of bounding boxes arranged [1, num_boxes, coords]
where each coordinate is [0, 1) and the coordinates are arranged as
[ymin, xmin, ymax, xmax].
fast_mode: Optional boolean, if True avoids slower transformations.
Returns:
3-D float Tensor containing an appropriately scaled image
Raises:
ValueError: if user does not provide bounding box
"""
if is_training:
return preprocess_for_train(image, height, width, bbox, fast_mode)
else:
return preprocess_for_eval(image, height, width)
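# A minimal usage sketch (an assumption, not part of the original module): with the
# TF 1.x graph-mode API used above, the preprocessing can be wired into a graph
# roughly as follows; the 224x224 output size is illustrative.
def _example_usage():
  raw_image = tf.placeholder(tf.uint8, shape=[None, None, 3], name='raw_image')
  train_image = preprocess_image(raw_image, 224, 224, is_training=True)
  eval_image = preprocess_image(raw_image, 224, 224, is_training=False)
  return train_image, eval_image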
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import numpy as np
import matplotlib as mt
maas = np.random.normal(4000,500,1000)
print(np.mean(maas))
# -
len(maas)
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import warnings
warnings.filterwarnings('ignore')
import iris
import iris.plot as iplt
import numpy
import matplotlib.pyplot as plt
import glob
# -
model = 'BCC-CSM2-MR'
# ## Salinity profiles (in temperature space)
wm_file_pattern = '/g/data/r87/dbi599/zika/water-mass_Omon_%s_historical_*.nc' %(model)
wm_file = glob.glob(wm_file_pattern)[0]
vcube_tbin = iris.load_cube(wm_file, 'Ocean Grid-Cell Volume binned by temperature')
print(vcube_tbin.summary(shorten=True))
vscube_tbin = iris.load_cube(wm_file, "Sea Water Salinity times Ocean Grid-Cell Volume binned by temperature")
print(vscube_tbin.summary(shorten=True))
so_profiles = vscube_tbin / vcube_tbin
print(so_profiles.summary(shorten=True))
plt.plot(so_profiles.data[0, :, 0], so_profiles.coord('sea_water_potential_temperature').points,
color='red', label='North Atlantic')
plt.plot(so_profiles.data[0, :, 1], so_profiles.coord('sea_water_potential_temperature').points,
color='gold', label='South Atlantic')
plt.plot(so_profiles.data[0, :, 2], so_profiles.coord('sea_water_potential_temperature').points,
color='blue', label='North Pacific')
plt.plot(so_profiles.data[0, :, 3], so_profiles.coord('sea_water_potential_temperature').points,
         color='cyan', label='South Pacific')
plt.plot(so_profiles.data[0, :, 4], so_profiles.coord('sea_water_potential_temperature').points,
color='green', label='Indian')
plt.xlabel('salinity (g/kg)')
plt.ylabel('temperature (C)')
plt.legend()
plt.show()
# ## Temperature profiles (in salinity space)
vcube_sbin = iris.load_cube(wm_file, 'Ocean Grid-Cell Volume binned by salinity')
print(vcube_sbin.summary(shorten=True))
vtcube_sbin = iris.load_cube(wm_file, "Sea Water Potential Temperature times Ocean Grid-Cell Volume binned by salinity")
print(vtcube_sbin.summary(shorten=True))
thetao_profiles = vtcube_sbin / vcube_sbin
print(thetao_profiles.summary(shorten=True))
thetao_profiles[0, :, 0]
thetao_profiles.coord('sea_water_salinity').points[3: -3]
iplt.plot(thetao_profiles[0, 3:-3, 0], color='red', label='North Atlantic')
iplt.plot(thetao_profiles[0, 3:-3, 1], color='gold', label='South Atlantic')
iplt.plot(thetao_profiles[0, 3:-3, 2], color='blue', label='North Pacific')
iplt.plot(thetao_profiles[0, 3:-3, 3], color='cyan', label='South Pacific')
iplt.plot(thetao_profiles[0, 3:-3, 4], color='green', label='Indian')
plt.xlabel('salinity (g/kg)')
plt.ylabel('temperature (C)')
plt.legend()
plt.show()
iplt.plot(vcube_sbin[0, 3:-3, 0], color='red', label='North Atlantic')
iplt.plot(vcube_sbin[0, 3:-3, 1], color='gold', label='South Atlantic')
iplt.plot(vcube_sbin[0, 3:-3, 2], color='blue', label='North Pacific')
iplt.plot(vcube_sbin[0, 3:-3, 3], color='cyan', label='South Pacific')
iplt.plot(vcube_sbin[0, 3:-3, 4], color='green', label='Indian')
plt.xlabel('salinity (g/kg)')
plt.ylabel('volume (m3)')
plt.legend()
plt.show()
# ## Surface water flux (in temperature space)
wfo_hist_pattern = '/g/data/r87/dbi599/zika/wfo-tos-binned_Omon_%s_historical_*.nc' %(model)
#wfo_hist_pattern = '/g/data/r87/dbi599/zika/wfo-thetao-binned_Omon_%s_historical_*.nc' %(model)
wfo_hist_file = glob.glob(wfo_hist_pattern)[0]
wfo_hist_cube = iris.load_cube(wfo_hist_file, 'water_flux_into_sea_water')
print(wfo_hist_cube.summary(shorten=True))
# +
fig = plt.figure(figsize=[10, 7])
plt.axhline(y=0, color='0.5')
iplt.plot(wfo_hist_cube[0, :, 0], label='North Atlantic')
iplt.plot(wfo_hist_cube[0, :, 1], label='South Atlantic')
iplt.plot(wfo_hist_cube[0, :, 2], label='North Pacific')
iplt.plot(wfo_hist_cube[0, :, 3], label='South Pacific')
iplt.plot(wfo_hist_cube[0, :, 4], label='Indian')
plt.legend()
plt.xlabel('sea surface temperature')
plt.ylabel('water flux into sea water (kg s-1)')
plt.title('%s (year 1850)' %(model))
plt.show()
# -
# ## Volume distribution
vol_pattern = '/g/data/r87/dbi599/zika/volo-tsdist_Omon_%s_historical_*.nc' %(model)
vol_file = glob.glob(vol_pattern)[0]
vol_cube = iris.load_cube(vol_file)
print(vol_cube.summary(shorten=True))
vol_cube.data.sum()
def plot_basin(vol_cube, salinity_cube, basin_name):
"""Plot at volume distribution."""
x_values = vol_cube.coord('sea_water_salinity').points
y_values = vol_cube.coord('sea_water_potential_temperature').points
extents = [x_values[0], x_values[-1], y_values[0], y_values[-1]]
basin_dict = {'north_atlantic': 0, 'south_atlantic': 1,
'north_pacific': 2, 'south_pacific': 3,
'indian': 4, 'arctic': 5, 'marginal_seas_and_land': 6}
log_hist = numpy.log(vol_cube.data[:, :, basin_dict[basin_name]]).T
plt.figure(figsize=(9, 8))
plt.imshow(log_hist, origin='lower', extent=extents, aspect='auto', cmap='hot_r')
cb = plt.colorbar()
    cb.set_label(r'log(volume), $m^3 (^\circ C \; g/kg)^{-1}$')
sprofile = salinity_cube.data[-1, :, basin_dict[basin_name]]
plt.plot(sprofile, salinity_cube.coord('sea_water_potential_temperature').points)
plt.xlim(x_values[0], x_values[-1])
plt.title(basin_name)
plt.xlabel('salinity (g/kg)')
plt.ylabel('temperature (C)')
plt.show()
plot_basin(vol_cube, so_profiles, 'north_atlantic')
plot_basin(vol_cube, so_profiles, 'south_atlantic')
plot_basin(vol_cube, so_profiles, 'north_pacific')
plot_basin(vol_cube, so_profiles, 'south_pacific')
plot_basin(vol_cube, so_profiles, 'indian')
# # Surface layer area distribution
area_file = '/g/data/r87/dbi599/CMIP6/CMIP/CCCma/CanESM5/historical/r1i1p1f1/Omon/areao/gn/v20190429/areao-tsdist_Omon_CanESM5_historical_r1i1p1f1_gn_2005-2014-monthly-clim.nc'
area_cube = iris.load_cube(area_file)
print(area_cube.summary(shorten=True))
awm_files = glob.glob('/g/data/r87/dbi599/CMIP6/CMIP/CCCma/CanESM5/historical/r1i1p1f1/Omon/water-mass/gn/v20190429/surface-water-mass_Omon_CanESM5_historical_r1i1p1f1_gn_*.nc')
awm_files.sort()
acube = iris.load_cube(awm_files[-1], 'cell_area')
print(acube.summary(shorten=True))
#ascube = iris.load_cube(awm_files[-1], "Sea Water Salinity times Grid-Cell Area")
ascube = iris.load_cube(awm_files[-1], "Sea Water Salinity times Grid-Cell Area for Ocean Variables")
print(ascube.summary(shorten=True))
sos_profiles = ascube / acube
print(sos_profiles.summary(shorten=True))
def plot_surface_basin(area_cube, salinity_cube, basin_name):
"""Plot the surface layer area distribution."""
x_values = area_cube.coord('sea_water_salinity').points
y_values = area_cube.coord('sea_water_potential_temperature').points
extents = [x_values[0], x_values[-1], y_values[0], y_values[-1]]
basin_dict = {'north_atlantic': 0, 'south_atlantic': 1,
'north_pacific': 2, 'south_pacific': 3,
'indian': 4, 'arctic': 5, 'marginal_seas_and_land': 6}
log_hist = numpy.log(area_cube.data[:, :, basin_dict[basin_name]]).T
#hist = area_cube.data[:, :, basin_dict[basin_name]].T
plt.figure(figsize=(9, 8))
plt.imshow(log_hist, origin='lower', extent=extents, aspect='auto', cmap='hot_r')
cb = plt.colorbar()
    cb.set_label(r'log(area), $m^2 (^\circ C \; g/kg)^{-1}$')
sprofile = salinity_cube.data[-1, :, basin_dict[basin_name]]
plt.plot(sprofile, salinity_cube.coord('sea_water_potential_temperature').points)
plt.xlim(x_values[0], x_values[-1])
plt.title(basin_name)
plt.xlabel('salinity (g/kg)')
plt.ylabel('temperature (C)')
plt.show()
plot_surface_basin(area_cube, sos_profiles, 'north_atlantic')
plot_surface_basin(area_cube, sos_profiles, 'south_atlantic')
plot_surface_basin(area_cube, sos_profiles, 'north_pacific')
plot_surface_basin(area_cube, sos_profiles, 'south_pacific')
plot_surface_basin(area_cube, sos_profiles, 'indian')
#
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
# The magic commands below allow reflecting the changes in an imported module without restarting the kernel.
# %load_ext autoreload
# %autoreload 2
# We need to add balsam and the modules it depends on to the Python search paths.
import sys
sys.path.insert(0,'/soft/datascience/Balsam/0.3.5.1/env/lib/python3.6/site-packages/')
sys.path.insert(0,'/soft/datascience/Balsam/0.3.5.1/')
# We also need balsam and postgresql to be in the path. (Misha suggests this may not be necessary)
import os
os.environ['PATH'] ='/soft/datascience/Balsam/0.3.5.1/env/bin/:' + os.environ['PATH']
os.environ['PATH'] +=':/soft/datascience/PostgreSQL/9.6.12/bin/'
try:
import balsam
except:
print('Cannot find balsam, make sure balsam is installed or it is available in Python search paths')
# Import widgets
from ipywidgets import interact, interactive
from ipywidgets import fixed, interact_manual
from ipywidgets import Textarea, widgets, Layout, Accordion
from ipywidgets import VBox, HBox, Box, Text, BoundedIntText
# +
from balsam.django_config.db_index import refresh_db_index
databasepaths = []
databasepaths.extend(refresh_db_index())
print(f'There are {len(databasepaths)} Balsam databases available.')
for i,db in enumerate(databasepaths):
print(f'{i}: {db}')
@interact(db=[(i,db) for i,db in enumerate(databasepaths)])
def activate_database(db=''):
"""
Activates Balsam database by setting the BALSAM_DB_PATH environment variable.
Note: Once BALSAM_DB_PATH is set, you need to restart Jupyter kernel to change it again.
"""
os.environ["BALSAM_DB_PATH"] = db
print(f'Selected database: {os.environ["BALSAM_DB_PATH"]}')
# +
# If the balsam server is not running (happens after theta maintenance) you get:
# "OperationalError: could not connect to server: Connection refused"
# This exception is caught here and it tries to restart the server.
from balsam.core.models import ApplicationDefinition as App
from balsam.scripts import postgres_control
try:
apps = App.objects.all()
print(f'Found {len(apps)} apps in {os.environ["BALSAM_DB_PATH"]}:')
for i,app in enumerate(apps):
print(f'{i}: {app.name}')
except Exception as e:
    if 'could not connect to server' in str(e):
print('Exception caught. Could not connect to server.')
print(f'Trying to restart the Balsam server {os.environ["BALSAM_DB_PATH"]} ...')
try:
postgres_control.start_main(os.environ["BALSAM_DB_PATH"])
except Exception as e:
print('Exception caught:')
            print(e)
else:
print('Exception caught:')
        print(e)
# -
#apps = App.objects.all()
@interact(name='',executable='',checkexe=False,description='',preprocess='',postprocess='',saveapp=False)
def add_app(name, executable, description='', envscript='', preprocess='', postprocess='', checkexe=False,saveapp=False):
"""
Adds a new app to the balsam database.
Parameters
----------
name: str, name of the app
executable: str, path to the executable
checkexe: boolean, True: check if executable is available
description: str, info about the app
preprocess: str, path to the preprocessing script
postprocess: str, path to the postprocessing script
saveapp: boolean, True: save app to the database
"""
from balsam.core.models import ApplicationDefinition as App
import shutil
newapp = App()
if checkexe:
if shutil.which(executable):
print('{} is found'.format(executable))
else:
print('{} is not found'.format(executable))
return newapp
if App.objects.filter(name=name).exists():
print("An application named {} already exists".format(name))
else:
newapp.name = name
newapp.executable = executable
newapp.description = description
newapp.envscript = envscript
newapp.preprocess = preprocess
newapp.postprocess = postprocess
if saveapp:
newapp.save()
print(f'{newapp.name} added to the balsam database.')
return newapp
# Not ready, find how to add dictionaries
apps = App.objects.all()
appnames = [app.name for app in apps]
@interact(name='', workflow='', application=appnames, description='', args='', num_nodes=range(1,4394), ranks_per_node=range(1,256),cpu_affinity=['depth','none'],data={},environ_vars={})
def add_job(name, workflow, application, description='', args='', num_nodes=1, ranks_per_node=1,cpu_affinity='depth',data={},environ_vars={}):
from balsam.launcher.dag import BalsamJob
job = BalsamJob()
job.name = name
job.workflow = workflow
job.application = application
job.description = description
job.args = args
job.num_nodes = num_nodes
job.ranks_per_node = ranks_per_node
job.cpu_affinity = cpu_affinity
job.environ_vars = environ_vars
job.data = {}
job.save()
#def print_job_info(id=''):
@interact(job_id='',show_output=False)
def get_job_info(job_id='',show_output=False):
"""
Prints verbose job info for a given job id.
Parameters
----------
job_id: str, Partial or full Balsam job id.
"""
from balsam.launcher.dag import BalsamJob as Job
jobs = Job.objects.all().filter(job_id__contains=job_id)
if len(jobs) == 1:
thejob = jobs[0]
print(jobs[0])
if show_output:
output = f'{thejob.working_directory}/{thejob.name}.out'
with open(output) as f:
out = f.read()
print(f'Output file {output} content:')
print(out)
elif len(jobs) == 0:
print('No matching jobs')
else:
print(f'{len(jobs)} jobs matched, enter full id.')
from balsam.launcher.dag import BalsamJob as Job
#for job in Job.objects.filter(state='JOB_FINISHED',workflow='wf_test_valence190705').all():
from balsam.core.models import ApplicationDefinition as App
allstates = ['ALL',
'CREATED',
'AWAITING_PARENTS',
'READY',
'STAGED_IN',
'PREPROCESSED',
'RUNNING',
'RUN_DONE',
'POSTPROCESSED',
'JOB_FINISHED',
'RUN_TIMEOUT',
'RUN_ERROR',
'RESTART_READY',
'FAILED',
'USER_KILLED']
allworkflows = [wf['workflow'] for wf in Job.objects.order_by().values('workflow').distinct()]
allworkflows.append('ALL')
allapps = [app.name for app in App.objects.all()]
allapps.append('ALL')
@interact(state=allstates,workflow=allworkflows,app=allapps,name='')
def list_jobs(state='ALL',workflow='ALL',app='ALL',name=''):
jobs = Job.objects.all()
print(f'Total number of jobs: {len(jobs)}')
if state != 'ALL':
jobs = jobs.filter(state=state)
if workflow != 'ALL':
jobs = jobs.filter(workflow=workflow)
if app != 'ALL':
jobs = jobs.filter(application=app)
if name:
jobs = jobs.filter(name__icontains=name)
print(f'Selected number of jobs: {len(jobs)}')
if len(jobs) > 0:
t = '{:<20}'.format('Name')
t += ' {:>8}'.format('Nodes')
t += ' {:>12}'.format('Ranks')
t += ' {:^8}'.format('ID')
if state =='JOB_FINISHED':
t += '{:>12}'.format('Runtime')
elif state =='ALL':
t += '{:>15}'.format('State')
print(t)
for job in jobs:
s = '{:<20.15}'.format(job.name)
s += ' {:>8}'.format(job.num_nodes)
s += ' {:>12}'.format(job.num_ranks)
s += ' {:>8}'.format(str(job.job_id).split('-')[0])
if state =='JOB_FINISHED':
s += '{:>12.3f}'.format(job.runtime_seconds)
elif state =='ALL':
s += '{:>15}'.format(job.state)
print(s)
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
pd.set_option("display.precision", 2)
sns.set()
# %matplotlib inline
# -
data = pd.read_csv('./survery_data.csv')
data.head()
# +
data.columns = ["timestamp",
"is_tech_major",
"avg_internet_use",
"tech_savvy_level",
"will_click_checkbox",
"will_click_donation",
"download_option",
"options_permitted_to_select",
"know_about_dark_pattern",
"learnt_guidelines",
"brief_explanation"
]
data.drop(columns=["timestamp", "brief_explanation"], inplace=True)
# -
data.info()
data.head()
data['is_tech_major'] = data['is_tech_major'].map({'No': 0, 'Yes': 1})
data['avg_internet_use'] = (data['avg_internet_use'] >= 10).map({False: 0, True: 1})
data['tech_savvy_level'] = data['tech_savvy_level'].map({1: 0, 2: 0, 3: 0, 4: 1, 5: 1})
data['will_click_checkbox'] = data['will_click_checkbox'].map({'No': 0, 'Yes': 1})
data['will_click_donation'] = data['will_click_donation'].map({'The left button': 0, 'The right button': 1})
data['download_option'] = data['download_option'].map({'Option 1': 1, 'Option 2': 2, 'Option 3': 3})
data['know_about_dark_pattern'] = data['know_about_dark_pattern'].map({'No': 0, 'Yes': 1})
data['learnt_guidelines'] = data['learnt_guidelines'].map({'No': 0, 'Yes': 1})
data['count'] = np.ones(data.shape[0])
data.head()
data['is_tech_major'].value_counts(normalize=True)
data['avg_internet_use'].value_counts(normalize=True)
data['tech_savvy_level'].value_counts(normalize=True)
pd.crosstab(data['is_tech_major'], data['will_click_checkbox'], normalize='index')
pd.crosstab(data['is_tech_major'], data['will_click_donation'], normalize='index')
pd.crosstab(data['is_tech_major'], data['download_option'], normalize='index')
data['options_permitted_to_select'].value_counts(normalize=True)
data['options_permitted_to_select'].value_counts().sort_values().plot(kind = 'barh');
plt.xlabel('count')
plt.ylabel('number of options')
plt.title('Plot showing the responses for Q7.')
pd.crosstab(data['is_tech_major'], data['options_permitted_to_select'], normalize='index')
pd.crosstab(data['is_tech_major'], data['know_about_dark_pattern'], normalize='index')
pd.crosstab(data['is_tech_major'], data['learnt_guidelines'], normalize='index')
pd.crosstab([data['is_tech_major'], data['know_about_dark_pattern']], data['will_click_checkbox'])
int(data[((data['is_tech_major'] == 1)
& (data['know_about_dark_pattern'] == 1)
& (data['will_click_checkbox'] == 0)
& (data['download_option'] == 2)
& (data['will_click_donation'] == 0))]['count'].sum())
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + colab={"base_uri": "https://localhost:8080/", "height": 52} colab_type="code" id="WtLdBlwAUcwF" outputId="36928f27-48e6-4013-bd84-350f9eef5724"
# Importing necessary libraries
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import PolynomialFeatures
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import r2_score, mean_squared_error
from scipy.stats import pearsonr
# -
# Loading the input data
filepath = ('C:/Users/USER/Desktop/coursera/python/data for dl project/bootstrapped data.csv')
data =pd.read_csv(filepath)
X = data.iloc[0:1400,:-1]
y = data.iloc[0:1400,-1]
X.shape
print(X.shape)
print(y.shape)
# + colab={"base_uri": "https://localhost:8080/", "height": 383} colab_type="code" id="HMz-pOLuV0qT" outputId="4fb0dffa-f865-4d48-e7ef-8e54f9f51fc8"
# Calculating correlation between input features
corr = X.corr()
import seaborn as sns
sns.heatmap(corr)
corr.shape
# + colab={"base_uri": "https://localhost:8080/", "height": 309} colab_type="code" id="Y5f0tTnXWT6u" outputId="de237f0d-256d-4f09-db31-be9e9c643ddb"
col = np.full((15),True, dtype="bool")
print(col)
flag = 0
if flag==0:
for i in range(12):
for j in range(i+1,12):
if abs(corr.iloc[i,j])>0.9:
col[j] = False
X = X.iloc[:,col]
flag = 1
print(col)
X.head()
# + colab={"base_uri": "https://localhost:8080/", "height": 35} colab_type="code" id="Fqa7y9wLUcwR" outputId="b2636bc1-64f6-4728-f9c4-e6cbb7847c61"
# Splitting data into train and test sets
y = np.array(y).reshape(-1,1)
scale = StandardScaler()
X = scale.fit_transform(X)
X_train,X_test,y_train,y_test = train_test_split(X,y,test_size=.2,random_state=0)
print(X_train.shape,X_test.shape,y_train.shape,y_test.shape)
# + colab={} colab_type="code" id="-LPCbCVkUcwb"
# Creating model
linreg = LinearRegression()
linreg.fit(X_train,y_train)
y_Pred = linreg.predict(X_test)
print(y_test.shape)
print(y_Pred.shape)
# + colab={"base_uri": "https://localhost:8080/", "height": 297} colab_type="code" id="7lnhvO8DUcwl" outputId="51510818-6754-4728-8126-d2a2fe6d071a"
plt.scatter(y_test,y_Pred,color="black")
plt.plot(y_test,y_Pred,color="yellow",label = "Linear reg Model")
plt.xlabel("Actual Reaction time")
plt.ylabel("Predicted Reaction time ")
plt.legend()
# + colab={"base_uri": "https://localhost:8080/", "height": 35} colab_type="code" id="6HUE9HedUcwt" outputId="12e48ee1-0557-4cd0-ebf8-f073acbc42e7"
# Validating model
score = linreg.score(X_test,y_test)
r2score = r2_score(y_test,y_Pred)
MSE = mean_squared_error(y_test,y_Pred)
MAE = mean_absolute_error(y_test,y_Pred)
print('R2 score:',r2score)
print('MSE:',MSE)
print('MAE:',MAE)
r = pd.DataFrame(np.concatenate((y_test,y_Pred), axis = 1)).corr()
pear_coff = r.iloc[0,1]
print('Pearson Correlation coefficient:',pear_coff)
index = pear_coff/MSE
print('index:',index)
# -
# Visualizing model
maxi = max(max(y_Pred), max(y_test))
mini = min(min(y_Pred), min(y_test))
fig = plt.figure(figsize=(8,6))
plt.style.use('ggplot')
plt.scatter(y_test, y_Pred, label='Linear model', c = 'b', marker='o')
plt.plot(range(int(mini), int(maxi+1)), range(int(mini), int(maxi+1)),'-.r')
plt.title('Linear regression model for mental fatigue estimation')
plt.xlabel("Actual Reaction time")
plt.ylabel("Predicted Reaction time ")
plt.legend(loc='best')
plt.show()
# Calculating DTW
from dtw import dtw
from scipy.spatial.distance import sqeuclidean
d, cost_matrix, acc_cost_matrix, path = dtw(y_test,y_Pred, dist=sqeuclidean)
print('DTW: ',d)
# +
# Calculating FastDTW
from fastdtw import fastdtw
from scipy.spatial.distance import sqeuclidean
distance, path = fastdtw(y_test,y_Pred, dist=sqeuclidean)
print('FastDTW: ',distance)
# -
# Calculating cDTW
from cdtw import pydtw
d = pydtw.dtw(y_test,y_Pred,pydtw.Settings(step = 'p0sym',
window = 'palival',
param = 2.0,
norm = False,
compute_path = True))
d.get_dist()
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# The difference between the last frame time and the current frame time is known as:
#
# $$
# \Delta t
# $$
#
# $$
# \Delta t = t' - t
# $$
#
# This can either be measured in milliseconds (size_t) or seconds (float)
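# For instance, a minimal sketch of measuring it in seconds with Python's perf_counter (the loop body is a stand-in for a real frame):
# +
import time
previous = time.perf_counter()
for _ in range(3):
    current = time.perf_counter()
    dt = current - previous   # delta t = t' - t, in seconds (multiply by 1000 for ms)
    previous = current
    print(f"dt = {dt:.6f} s")
# -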
# In one frame, the software:
# * Updates every parameter
# * Renders the frame
# * Skips back to step 1.
#
# $$
# 30 \textrm{fps} = \frac{1}{30} = 0.0333...
# $$
# $$
# 60 \textrm{fps} = \frac{1}{60} = 0.0166...
# $$
# In early games, physics simulation and manipulation were tied to this framerate. This led to the game running faster or slower depending on your computer's hardware. To make an object at position $P$ with velocity $V$ move to position $P'$ independently of the framerate, you compute:
#
# $$
# P' = P + V \Delta t
# $$
# As an example, if this object wanted to jump, we can define a vector $J$ with a specific magnitude and simply add it to the velocity vector:
#
# $$
# V' = V + J
# $$
#
# To bring the object back to the ground, we need to define another vector $G$ and add it to the equation:
#
# $$
# G = \begin{pmatrix} 0, -9.8 \end{pmatrix} \\
# V' = V + G \Delta t
# $$
# This creates a parabolic arc for the jump.
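# Putting the pieces together, a minimal sketch of a frame-rate-independent update (the names, the 2D list vectors, and the fixed three-iteration loop are illustrative):
# +
import time

position = [0.0, 0.0]
velocity = [2.0, 0.0]
GRAVITY = (0.0, -9.8)
JUMP = (0.0, 5.0)

def update(dt, jumping=False):
    """Advance the simulation by dt seconds, independently of the framerate."""
    global position, velocity
    if jumping:
        velocity = [velocity[0] + JUMP[0], velocity[1] + JUMP[1]]        # V' = V + J
    velocity = [velocity[0] + GRAVITY[0] * dt,
                velocity[1] + GRAVITY[1] * dt]                           # V' = V + G * dt
    position = [position[0] + velocity[0] * dt,
                position[1] + velocity[1] * dt]                          # P' = P + V * dt

previous = time.perf_counter()
for frame in range(3):            # stand-in for the main game loop
    now = time.perf_counter()
    dt = now - previous
    previous = now
    update(dt, jumping=(frame == 0))
    print(position)
# -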
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Alternate Class Instantiation
#
# Python 3 has added type annotations / hinting to the standard library and core language syntax. The implementation of this extended `__getitem__` to [the class level](https://www.python.org/dev/peps/pep-0560/#class-getitem), exposing an interesting mechanism to _spawn variants_. Having written "alternate universe" template engines which "abuse" Python language semantics, including `__getitem__`, to perform work they were never intended to… I see an opportunity.
#
# In [Marrow Tags](https://github.com/marrow/tags#readme) dict-like dereferencing is used to denote child object containment, not member retrieval. For example, a simple "template":
#
# ```python
# html [
# head [ title [ "Welcome!", " — ", SITE_NAME ] ],
# flush,
# body ( class_ = 'nav-welcome' ) [
# site_header(),
# p [ "Lorem ipsum dolor sit amet…" ],
# site_footer()
# ]
# ]
# ```
#
# This somewhat demonstrates that Python is not, in fact, a "whitespace sensitive" language. Indentation within a dereference (or any structural literal) is ignored. However, this poses a problem. Where is `html`, or `head`, or `title` coming from? Can their behaviors be customized or adapted?
#
# Marrow Tags populates a default set from the HTML specification of the time but _custom tags_ are becoming a more popular feature to utilize. The web has come a long way, and there are [clearly documented API specifications available](https://developer.mozilla.org/en-US/docs/Web/API/Element) outlining how to manipulate, query, and interrogate these types of objects. Some tags may desire additional functionality: a `<head>` element, for example, need not be emitted if the first child is a `<title>`. Similar with the `<body>` element if the first child is a content element, or `<html>` itself if no attributes are present. [Really!](https://gist.github.com/amcgregor/71c62ea2984839a9063232ed2c0adf27)
#
# As there are several components to this problem and its solution, we'll start by defining a basic "HTML tag" representation which will largely follow the [published specification](https://dom.spec.whatwg.org/#interface-element) on attribute naming and represent our overall data model.
# ## Front Matter
#
# The initial collection of imports provide the primitives used to form [variable annotations](https://www.python.org/dev/peps/pep-0526/) and [type hinting](https://docs.python.org/3/library/typing.html) which can be optionally enforced statically using tools like [mypy](http://mypy-lang.org), but also at runtime using libraries such as [typeguard](https://typeguard.readthedocs.io/en/latest/).
#
# We also define one utility function, useful later.
# +
from collections.abc import Collection
from textwrap import indent
from typing import Any, Dict, Iterable, List, Optional, Set, Type, TypeVar, Union
from re import compile as re
camel = lambda s: (m.group(0).lower() for m in camel.pattern.finditer(s))
camel.pattern = re('.+?(?:(?<=[a-z])(?=[A-Z])|(?<=[A-Z])(?=[A-Z][a-z])|$)')
# -
'-'.join(camel('myAwesomeTag'))
# ## Basic Tag
#
# With the basics needed for self-documentation out of the way, we'll form the basic definition of a Tag. We won't worry about specialized behavior using this new functionality yet.
# +
from xml.sax.saxutils import quoteattr # Used when rendering element attributes.
class T:
children: List
classList: Set[str]
localName: str
attributes: dict
_inline = {'a', 'abbr', 'acronym', 'audio', 'b', 'bdi', 'bdo', 'big', 'button', 'canvas', 'cite', 'code', 'data', 'datalist', 'del', 'dfn', 'em', 'embed', 'i', 'iframe', 'img', 'input', 'ins', 'kbd', 'label', 'map', 'mark', 'meter', 'object', 'output', 'picture', 'progress', 'q', 'ruby', 's', 'samp', 'script', 'select', 'slot', 'small', 'span', 'strong', 'sub', 'sup', 'svg', 'textarea', 'time', 'title', 'u', 'tt', 'var', 'wbr'}
def __init__(self, name:str, children:Optional[List]=None, **kw) -> None:
self.children = children or [] # Populate empty defaults.
self.classList = set()
self.attributes = {'class': 'classList'}
self.localName = name
for name, value in kw.items():
setattr(self, name, value)
def __repr__(self) -> str:
return f"<tag '{self.localName}' children={len(self)}{' ' if self.attributes else ''}{self.attributeMapping}>"
def __len__(self):
"""Our length is that of the number of our child elements."""
return len(self.children)
def __iter__(self):
"""Act as if we are our collection of children when iterated."""
return iter(self.children)
def __str__(self) -> str:
parts = [] # These are the string fragments that will ultimately be returned as one.
parts.extend(('<', self.localName))
block = self.localName not in self._inline
for key, value in sorted(self.attributeMapping.items()):
if key[0] == '_': continue # Armour against protected attribute access.
# Skip values that are non-zero falsy, working around the fact that False == 0.
if not value and (value is False or value != 0):
continue
name = str(key).rstrip('_').replace('__', ':').replace('_', '-')
# Add spacer if needed.
if len(parts) == 2:
parts.append(' ')
if value is True: # For explicitly True values, don't emit a value for the attribute.
parts.append(name)
continue
# Non-string iterables (such as lists, sets, tuples, etc.) are treated as space-separated strings.
if isinstance(value, Iterable) and not isinstance(value, str):
value = " ".join(str(i) for i in value)
value = quoteattr(str(value))
if " " not in value:
value = value.strip('"')
parts.extend((name, "=", value))
parts.append('>')
if self.children:
if __debug__ and block:
# Prettier "linted" output when optimizations aren't enabled.
parts.append("\n")
parts.append(indent("".join(str(child) for child in self), "\t"))
parts.append("\n")
else:
parts.extend(str(child) for child in self)
parts.extend(('</', self.localName, '>\n' if __debug__ and block else '>'))
return ''.join(parts)
        # Three different possible "null" / "empty" scenarios, kept as unreachable alternatives for reference:
        # return f'<{self.localName}></{self.localName}>' + ("\n" if __debug__ else "")  # Missing contents.
        # return f'<{self.localName} />'  # XML-like explicit NULL element.
        # return f'<{self.localName}>'  # HTML5-like self-closing tag.
@property
def attributeMapping(self):
return {k: v for k, v in {name: getattr(self, origin, None) for name, origin in self.attributes.items()}.items() if v}
# API-conformant aliases for "localName".
tagName = \
nodeName = \
property(lambda self: self.localName)
# +
print(T)
print(repr(T('html')), T('html'), sep="\t")
print(repr(T('p')), T('p'), sep="\t")
ex = T('p', ['Lorem ipsum dolor...'], classList={'example'})
print(repr(ex), ex, sep="\n")
print(ex.attributeMapping)
page = T('html', [
T('head', [
T('title', ["Welcome"])
]),
T('body', [
T('p', ["Lorem ipsum dolor sit amet…"])
]),
])
print("", repr(page), "", page, sep="\n")
# -
# It may be noticeable that there are some optimizations we can apply to the HTML generation, and by gum this is an ugly syntax to try to work with. We can improve both of these aspects. Users with a Perl background may be more comfortable with this syntax than others.
#
# We'll deal with the HTML serialization later.
#
# ## The Tag Factory
#
# A key point to these elements is that _instantiation_ creates a new element _instance_, and that there isn't a distinct class per element. Writing your HTML this way would be highly cumbersome. Having to write out not just that it is a tag, but which one, its exact children, etc., up-front is sub-optimal. We could provide dedicated subclasses for every possible element, using the module scope to contain them, but there is a better, more dynamic way.
#
# Instances have the ability to participate in the attribute lookup protocol using `__getattr__` and `__getattribute__` methods. So too do _metaclasses_ have the ability to interpose attribute lookup on _classes_.
#
# Step one would then be to define a base metaclass that our "magical" classes will inherit from to provide compile–time and run–time behavior.
# +
class TagMeta(type):
def __new__(meta, name, bases, attrs):
cls = type.__new__(meta, str(name), bases, attrs)
return cls
def __getattr__(Tag, name:str):
return Tag('-'.join(camel(name)))
class Tag(T, metaclass=TagMeta): ...
# -
print(repr(Tag))
print(repr(Tag.title))
print(Tag.title)
# Congratulations, we now have the class itself as a factory for its own instances. Which sounds absolutely unimpressive, as that is the purpose of classes, however the factory is _attribute access_ (to otherwise unknown attributes) **itself**, not invocation. The name of the tag is inferred from the name of the attribute. This even supports "custom tags" using `camelCase` notation.
print(Tag.fileUpload)
# All tags are treated the same, in this rough draft. There are differences, especially in making the serialization attractive to humans, such as the distinction between _block_ and _inline_ elements, which also offer points for minor optimization.
#
# Notice, however, that absolutely nothing is currently HTML-specific!
# ## Mutation
#
# What if we now wish to provide new values for attributes? Attribute access of the class is returning an already-instantiated value, making it too late to provide values to `__init__`. Instances can be made _callable_ by defining a `__call__` method, however, mimicking use of the instance as if it were a class. In the "assigning children" case, a fresh-off-the-press newly minted instance will never have children, so mutation of the existing instance and returning the same (`self`) is a highly practical solution.
#
# Note in the `__call__` case, returning what is technically a new instance, with all existing attributes transferred and optionally mutated, is preferable. This has the side-effect that one can create "template" objects, but it is critical to be aware that _mutable_ objects, such as lists, sets, etc., will have their contents shared amongst the derived copies; mutation of one will appear to alter them all.
class Tag(T, metaclass=TagMeta):
def __call__(self, **attributes) -> T:
"""Produce a new, cloned and mutated instance of this tag incorporating attribute changes."""
instance = self.__class__(self.localName, self.children) # Mutant pools children!
instance.__dict__ = self.__dict__.copy() # It pools all mutable attributes!
for name, value in attributes.items():
setattr(instance, name, value)
return instance
def __getitem__(self, children) -> T:
"""Mutate this instance to add these children, returning this instance for chained manipulation."""
if isinstance(children, (tuple, list)):
self.children.extend(children)
else:
self.children.append(children)
return self
bdole = Tag.p(classList={'name'})["Bob Dole"]
print(bdole, bdole.__dict__, "", sep="\n")
print(Tag.p(classList={'fancy', 'wau'})["Much content, super elide."])
# You might not have noticed it earlier, but see how the `class` attribute of the first paragraph has no quotes.
#
# We can now replicate an earlier example using a much nicer syntax.
# +
page = Tag.html [
Tag.head [
Tag.title ["Welcome"]
],
Tag.body [
Tag.p ["Lorem ipsum dolor sit amet…"]
],
]
print(repr(page), page, sep="\n\n")
# -
# Returning to a point touched on earlier, there's nothing HTML-specific about any of this. Let's see what happens attempting to generate another type of document structured using tags…
# +
feed = Tag.rss[Tag.channel[
Tag.title["My awesome RSS feed!"],
Tag.link["https://example.com/"],
Tag.item["..."]
]]
print(feed)
# -
# Technically what we're doing here is constructing a _hierarchical tagged string_ representation. Like a word processor, just with complete freedom as to the tags in use.
# ## Customization
#
# In order to permit subclassing of Tag to implement specific customizations to tag behavior, the metaclass will require some changes to how it determines which class to instantiate when an unknown class-level attribute is accessed. Lucky for us, a class method is provided to identify these to us. Unfortunately, it is not a mapping (how could it be?) but we can reasonably mandate that a given tag have only one implementation.
# +
class HTML(Tag):
...
Tag.__subclasses__()
# +
class TagMeta(type):
def __new__(meta, name, bases, attrs):
cls = type.__new__(meta, str(name), bases, attrs)
return cls
@property
def __subclass_map__(Tag):
return {subclass.__name__: subclass for subclass in Tag.__subclasses__()}
def __getattr__(Tag, name:str):
localName = '-'.join(camel(name))
name = name[0].upper() + name[1:]
Tag = Tag.__subclass_map__.get(name, Tag)
return Tag(localName)
class Tag(T, metaclass=TagMeta):
def __call__(self, **attributes) -> T:
"""Produce a new, cloned and mutated instance of this tag incorporating attribute changes."""
instance = self.__class__(self.localName, self.children) # Mutant pools children!
instance.__dict__ = self.__dict__.copy() # It pools all mutable attributes!
for name, value in attributes.items():
setattr(instance, name, value)
return instance
def __getitem__(self, children) -> T:
"""Mutate this instance to add these children, returning this instance for chained manipulation."""
if isinstance(children, (tuple, list)):
self.children.extend(children)
elif children is not None:
self.children.append(children)
return self
Tag.__subclass_map__ # No specializations of Tag exist yet.
# +
class _Prefixed:
"""A generic mix-in to automatically prefix an element with given text."""
prefix: str
def __str__(self):
return self.prefix + ("\n" if __debug__ else "") + super().__str__()
class _Elidable:
    """A generic mix-in to override serialization.

    If the element would have no attributes, do not render the element at all.
    """

    def __str__(self):
        if not self.classList:  # No attributes worth keeping: render only the children.
            return "".join(str(child) for child in self)
        return super().__str__()  # Otherwise defer to the regular tag serialization.
# +
class Html(_Prefixed, _Elidable, Tag):
prefix = "<!DOCTYPE html>"
class Head(_Elidable, Tag): pass
class Body(_Elidable, Tag): pass
Tag.__subclass_map__ # Now we have specific subclasses.
# -
Tag.html.__class__
Tag.title.__class__
# ## The Punchline
#
# With several key element behaviors overridden, we can now create the smallest valid output I think we can generate. We don't include quotes around attributes that don't require them, and we no longer include elements when they aren't needed. Additionally, an element's prefix will remain even if the element itself is elided.
#
# Testing this out on a very simple case:
print(Tag.html[Tag.title["Hello world!"]])
# Now with the original "page template" example:
# +
page = Tag.html [
Tag.head [
Tag.title [ "I'm really not kidding you." ]
],
Tag.body [
Tag.p [ "This is a complete and fully-formed HTML document." ]
],
]
print(page)
# -
# But how performant is this serialization?
# %timeit str(page)
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # DETR Model Training
#
#
# %load_ext autoreload
# %autoreload 2
# +
import os
import sys
from pathlib import Path, PurePath
import matplotlib.pyplot as plt
import glob
sys.path.insert(0, "/home/haridas/projects/opensource/detr")
sys.path.insert(0, "../")
os.environ["CUDA_VISIBLE_DEVICES"] = ""
# %matplotlib inline
# +
import pandas as pd
import numpy as np
import seaborn as sns
from mystique.utils import plot_results
# +
import torch
from torch.utils.data import DataLoader, SequentialSampler
import torchvision.transforms as T
import torchvision.transforms.functional as F
from PIL import Image
import datasets
from datasets import build_dataset, get_coco_api_from_dataset
from datasets.coco_eval import CocoEvaluator
from datasets.coco import make_coco_transforms
from models.detr import DETR, SetCriterion, PostProcess
from models.transformer import build_transformer
from models.backbone import build_backbone
from models.matcher import build_matcher
from engine import evaluate
from util.misc import collate_fn, NestedTensor
from util.plot_utils import plot_logs, plot_precision_recall
from datasets.coco import make_coco_transforms
# -
# img_transform = make_coco_transforms("val")
transform = T.Compose([
T.Resize(800),
T.ToTensor(),
T.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
])
T.Resize(800)
# ## Strip Trained model for transfer learning
# +
# checkpoint = torch.load(f"{basedir}/detr-r50-e632da11.pth", map_location='cpu')
# checkpoint = torch.load(f"{basedir}/detr-r101-dc5-a2e86def.pth", map_location='cpu')
# +
# checkpoint["model"].keys()
# +
# # Param sets that need to be custom-learned.
# del checkpoint["model"]["class_embed.weight"]
# del checkpoint["model"]["class_embed.bias"]
# del checkpoint["model"]["query_embed.weight"]
# +
# torch.save(checkpoint, f"{basedir}/detr-r101-dc5-a2e86def-class-head.pth")
# -
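# A self-contained version of the idea sketched in the commented cells above. This is only a hedged sketch: the checkpoint file names are placeholders, not the exact files used elsewhere in this notebook.
# +
import torch  # already imported above; repeated so the sketch stands on its own

def strip_class_head(src_path, dst_path):
    """Drop the class- and query-specific heads from a pretrained DETR checkpoint.

    The remaining backbone/transformer weights can then be loaded into a DETR
    instance configured for a different number of classes or queries.
    """
    checkpoint = torch.load(src_path, map_location="cpu")
    for key in ("class_embed.weight", "class_embed.bias", "query_embed.weight"):
        checkpoint["model"].pop(key, None)  # these must be re-learned for the new dataset
    torch.save(checkpoint, dst_path)

# strip_class_head("detr-r50-e632da11.pth", "detr-r50-class-head-stripped.pth")
# -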
# ## Dataset
# +
class Args:
coco_path = "/home/haridas/projects/mystique/data/train_and_test-2020-Jun-05-coco/"
dataset_file = "pic2card"
masks = False
train_ds = datasets.custom_coco_build("train", Args)
# -
image, target = super(datasets.coco.CocoDetection, train_ds).__getitem__(10)
target = {'image_id': train_ds.ids[0], 'annotations': target}
image, target = train_ds.prepare(image, target)
target
_image, _target = datasets.transforms.RandomResize([800], max_size=1333)(image, target)
_target
# +
# torch.rand()
# region = T.RandomCrop.get_params(image, (799, 1000))
# +
# F.crop(image, *T.RandomCrop.get_params(image, (400, 300)))
# image.shape
# +
# datasets.transforms.crop(image, target, )
# +
# datasets.transforms.crop
# -
# +
# np.asarray(img)
# +
# image.permute(1, 2, 0).numpy()
# image.permute(1, 2, 0).numpy()
# +
# Image.fromarray(image)
# -
# The indices come directly from the COCO dataset index.
CLASSES = {
    0: 'background', # A default class learned by the model, acting as a catch-all.
1: 'textbox',
2: 'radiobutton',
3: 'checkbox',
4: 'actionset',
5: 'image',
6: 'rating'
}
# ## Generate Coco Metrics
# +
class DefaultConf:
# Basic network
backbone = "resnet50"
position_embedding = "sine"
hidden_dim = 256
dropout = 0.1
nheads = 8
dim_feedforward = 2048
enc_layers = 6
dec_layers = 6
pre_norm = False
num_queries = 100
aux_loss = False
# Force to eval model
lr_backbone = 0
masks = False
dilation = False
device = "cuda"
# Loss tuning params.
set_cost_class = 1
set_cost_bbox = 5
set_cost_giou = 2
bbox_loss_coef = 5
giou_loss_coef = 2
eos_coef = 0.1
losses = ["labels", "boxes", "cardinality"]
# Configuration fitting the pic2card specific
# class configuration.
coco_path = "/home/haridas/projects/mystique/data/train_and_test-2020-Jun-05-coco/"
dataset_file = "pic2card"
weight_dict = {
'loss_ce': 1,
'loss_bbox': DefaultConf.bbox_loss_coef,
'loss_giou': DefaultConf.giou_loss_coef
}
backbone = build_backbone(DefaultConf)
transformer_network = build_transformer(DefaultConf)
matcher = build_matcher(DefaultConf)
criterion = SetCriterion(num_classes=len(CLASSES),
matcher=matcher,
weight_dict=weight_dict,
eos_coef=DefaultConf.eos_coef,
losses=DefaultConf.losses
)
postprocessors = {"bbox": PostProcess()}
dataset_test = build_dataset(image_set="test", args=DefaultConf)
sample_test = SequentialSampler(dataset_test)
base_ds = get_coco_api_from_dataset(dataset_test)
# -
# +
basedir = Path("/home/haridas/projects/opensource/detr/")
model_path = basedir / "outputs-2020-06-30-1593500748" / "checkpoint.pth"
state_dict = torch.load(model_path, map_location="cpu")
detr = DETR(backbone=backbone,
transformer=transformer_network,
num_queries=100, num_classes=6, aux_loss=False)
detr.load_state_dict(state_dict["model"])
detr.eval();
# -
data_loader_test = DataLoader(dataset_test,
batch_size=2,
sampler=sample_test,
drop_last=False,
collate_fn=collate_fn,
num_workers=1)
# +
# for samples, targets in data_loader_test:
# print([i['image_id'] for i in targets])
# import pdb; pdb.set_trace()
# +
# test_stats, coco_evaluator = evaluate(
# detr, criterion, postprocessors,
# data_loader_test,
# base_ds,
# device="cpu",
# output_dir="./out")
# -
# # Model Inference
def load_detr_model(model_path, num_queries=60, num_classes=6):
basedir = Path(model_path)
model_path = basedir / "checkpoint.pth"
state_dict = torch.load(model_path, map_location="cpu")
detr = DETR(backbone=backbone,
transformer=transformer_network,
num_queries=num_queries,
num_classes=num_classes, aux_loss=False)
detr.load_state_dict(state_dict["model"])
detr.eval();
return detr
model_path = "/home/haridas/projects/opensource/detr/best_model/checkpoint.pth"
_detr = torch.load(model_path, map_location="cpu")
_detr['model'].get("transformer.encoder.layers.0.self_attn.in_proj_weight").shape
# ## Single image Inference
detr = load_detr_model("/home/haridas/projects/opensource/detr/best_model")
transform_test = make_coco_transforms("test")
# output['pred_boxes'][-1, keep]
img = Image.open("/home/haridas/projects/AdaptiveCards-ro/source/pic2card/app/assets/samples/3.png").convert("RGB")
img = Image.open("/home/haridas/projects/mystique/data/templates_test_data/1.png").convert("RGB")
probs, boxes = detect(img, detr_trace_module, transform, threshold=0.8)
scores = probs.max(-1).values.detach().numpy()
classes = probs.max(-1).indices.detach().numpy()
plot_results(img, classes, scores, boxes, label_map=CLASSES, score_threshold=0.8)
scores.max(-1).values.detach().numpy()
# ## Using libraries
from mystique.models.pth.detr.predict import detect as detect_
img.size
scores_, boxes_ = detect_(img, detr_trace_module, transform_, threshold=0.8)
plot_results(img, scores_, boxes_, label_map=CLASSES)
boxes_
# ## Plotting train vs eval performance
detr_experiments = [Path(i) for i in glob.glob("/home/haridas/projects/opensource/detr/outputs-2020-07-07*")]
p = Path("/home/haridas/projects/opensource/detr/best_model/")
log_df = pd.read_json(p / "log.txt", lines=True)
# +
# log_df.head().test_coco_eval_bbox[0]
# -
state_dict = torch.load(p / "checkpoint.pth", map_location="cpu")
torch.save(state_dict["model"], p / "checkpoint_model.pth")
# +
# state_dict["model"]
# -
score = torch.load(p / 'eval.pth')
score.keys()
# glob.glob(p / 'eval/*')
plot_precision_recall(
[Path(p) for p in glob.glob("/home/haridas/projects/opensource/detr/best_model/eval/*.pth")]
)
detr_experiments.sort()
plot_logs(detr_experiments[-1])
# # TorchScript
#
# See how the model can be serialized efficiently for production purposes.
img = Image.open("/home/haridas/projects/AdaptiveCards-ro/source/pic2card/app/assets/samples/5.png").convert("RGB")
# im = transform(img).unsqueeze(0)
img_np = np.asarray(img)
im = transform(img).unsqueeze(0)
# ### Torch Jit Trace
detr_trace_module = torch.jit.trace(detr, im, strict=False)
detr_trace_module.save("/home/haridas/projects/pic2card-models/pytorch/detr_trace.pt")
detr_trace_module = torch.jit.load("/home/haridas/projects/pic2card-models/pytorch/detr_trace.pt")
t = detr_trace_module(im)
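# A hedged sketch of what the traced module returns: assuming the usual DETR output dict with 'pred_logits' and 'pred_boxes', per-query class scores can be recovered with a softmax over the class dimension, dropping the trailing "no object" class. This mirrors the standard DETR demo post-processing rather than code from this notebook.
# +
probs_sketch = t['pred_logits'].softmax(-1)[0, :, :-1]  # [num_queries, num_classes]
keep_sketch = probs_sketch.max(-1).values > 0.8         # queries confident enough to keep
print(keep_sketch.sum().item(), "query boxes above the 0.8 threshold")
# -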
# +
# print(detr_trace_module.graph)
# +
from typing import List
@torch.jit.script
def an_error(x):
#r = torch.rand(1)
return x
@torch.jit.script
def foo(x, y):
if x.max() > y.max():
r = x
else:
r = y
return r
# +
# print(type(foo))
# print(torch.jit.trace(foo, (torch.ones(2,3), torch.ones(1,2))).code)
# print(foo.code)
# +
# torch.jit.trace(foo, (torch.ones(2,3), torch.ones(1,2)))
# +
# print(foo.graph)
# -
# ### Torch Jit Script
detr_tscript = torch.jit.script(detr)
# +
# print(detr_tscript.code)
# -
detr_tscript.save("/home/haridas/projects/pic2card-models/pytorch/detr.pt")
# +
# print(detr_tscript.code)
# +
# # !du -sh /home/haridas/projects/pic2card-models/pytorch/detr.pt
# -
detr_tscript = torch.jit.load("/home/haridas/projects/pic2card-models/pytorch/detr.pt")
# +
# nested_tensor = NestedTensor(im, None)
# detr_tscript(img)
# +
# detr_tscript(nested_tensor)
# print(detr_tscript.graph)
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import kfp
from datetime import datetime
from kfp.v2 import compiler
from kfp.v2.google.client import AIPlatformClient
from google.cloud import aiplatform
from google_cloud_pipeline_components import aiplatform as gcc_aip
project_id = "feature-store-mars21"
region = "us-central1"
pipeline_root_path = "gs://feature-store-mars21/pl-root/telco"
# -
TIMESTAMP = datetime.now().strftime("%Y%m%d%H%M%S")
BUCKET_NAME = pipeline_root_path
gcs_csv_path = "gs://feature-store-mars21/data/telco/Telco-Customer-Churn.csv"
# +
@kfp.dsl.pipeline(name="automl-tab-training-v2",
pipeline_root=pipeline_root_path)
def pipeline(project_id: str):
dataset_create_op = gcc_aip.TabularDatasetCreateOp(
project=project_id, display_name="churn-pred", gcs_source=gcs_csv_path,
)
training_op = gcc_aip.AutoMLTabularTrainingJobRunOp(
project=project_id,
display_name="train_churn_prediction_1",
optimization_prediction_type="classification",
optimization_objective="minimize-log-loss",
dataset=dataset_create_op.outputs["dataset"],
target_column="Churn",
budget_milli_node_hours=1000,
)
deploy_op = gcc_aip.ModelDeployOp(
model=training_op.outputs["model"],
project=project_id
)
compiler.Compiler().compile(pipeline_func=pipeline,
package_path='churn_classif_pipeline.json')
api_client = AIPlatformClient(project_id=project_id, region=region)
response = api_client.create_run_from_job_spec(
'churn_classif_pipeline.json',
pipeline_root=pipeline_root_path,
parameter_values={
'project_id': project_id
})
# -
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] solution="hidden"
# # Contents
# -
# - Fundamental Python Concepts:
# - `List` Containers
# - `Tuple` Containers
# - Functions
# - The `for` Statement
# - Additional Notions
# ## Introduction
# Let's try to do something useful with what we already know. For example, let's compute the average of the daily trading results over one week. Suppose the results are the following (in some monetary unit): 100, 110, -150, 200, 250. To compute the average, we have to add up the 5 results and divide them by 5. Let's do that calculation in the next cell.
(100 + 110 - 150 + 200 + 250) / 5
# We got the result, but we did nothing very clever; we simply used Python as if it were a calculator. We will improve little by little: the goal is to build a small tool (which will be a function) that lets us compute the average of a set of data as conveniently as possible.
# ## Each Result Is a Variable
# We assign each result to a variable:
res1 = 100
res2 = 110
res3 = -150
res4 = 200
res5 = 250
# Only then do we compute the average:
(res1 + res2 + res3 + res4 + res5) / 5
# What did we gain with this? A bit more generality: if the value of a result changes, we only have to change the value assigned to the corresponding variable. For example, if `res1` goes up to 120 we do the following:
res1 = 120
(res1 + res2 + res3 + res4 + res5) / 5
# What happens if there are now more results? For example, we want to compute the average over 2 weeks. In that case we could do the following:
res6 = 115
res7 = -200
res8 = -120
res9 = 200
res10 = 50
(res1 + res2 + res3 + res4 + res5 + res6 + res7 + res8 + res9 + res10) / 10
# It works, but it is not very general: if we later wanted the average result for the month, we would have to add about 10 more variables and divide by 20. Moreover, all of that would be done by hand (one way to think about the generality of the tool we want to design is to consider the extra work required to handle a new case).
# ## Variables of Type `List`
# A `List` is simply a list of values; they can be numbers, strings or other types. A `List` can even contain another `List`. Let's look at some examples:
edades = [22, 36, 41, 52]
edades
type(edades)
nombres_apellidos = ["Pedro", "Pablo", "Perez", "Pereira"]
nombres_apellidos
nombres_edades = ["Juan", 75, "Claudia", 42] # a heterogeneous container
nombres_edades
# The elements of a list are counted from 0 onwards. For example, in `edades` element number 0 is 22, number 1 is 36, number 2 is 41 and number 3 is 52.
#
# An element of a list can be retrieved as follows:
edades[0] # Counting starts at 0. The first element is number 0.
edades[3]
# The value of an element of a `List` can be changed.
edades[3] = 66 # Lists are mutable: after being defined, they can be changed.
edades
# If I try to access an element that does not exist, I get an error:
edades[4]
# Look carefully at the error message `IndexError: list index out of range`; it says that the index we used (4 in this case) is out of range, i.e. the list does **not** have an element with index 4.
# We can extend a list:
edades
edades.append(76) # append = add to the end
edades
# This is the second time we see the "." notation (`edades.append`). For now we will not explain its full meaning (it belongs to object-oriented programming), but we can think of `append` as a function that can be applied to a variable of type `List` and that appends a value to that variable.
# What do we gain by keeping the values in a `List`? There are a couple of functions that help make the average calculation more general:
sum(edades) # Only works with a List that contains nothing but numbers.
# With the `sum` function we can get the sum of the elements of the list, provided the `List` contains only numbers; with any other element type an error is raised.
sum(nombres_apellidos)
# We also have the `len` function, which computes the number of elements of a `List`.
len(edades) # Works with any List
# With these two functions, the average calculation can be written as:
sum(edades) / len(edades)
# Let's verify it:
edades
(22 + 36 + 41 + 66 + 76) / 5
# Perfect! It works exactly as we wanted. One piece is still missing: can we store the **average calculation** in a variable, so that we can reuse it without writing it again? The answer is yes: we have to define a function.
# ## Variables of Type `Tuple`
# A `Tuple` is also a list of values; they can be numbers, strings or other types. A `Tuple` can even contain another `Tuple`. The difference with a `List` is that variables of type `Tuple` are immutable: once defined, they cannot change.
#
#
# Let's look at some examples:
edades = (22, 36, 41, 52) # <--- Note that () are used, not []
edades
nombres_apellidos = ("Pedro", "Pablo", "Perez", "Pereira")
nombres_apellidos
nombres_edades = ("Juan", 75, "Claudia", 42) # a heterogeneous container
nombres_edades
# `Tuple`s are immutable:
edades[0] = 23
# If I want to change a value, I have to redefine the content of the whole variable.
edades = (23, 36, 41, 52)
# Nor can a value be appended.
edades.append(60)
# ## Functions
# In mathematics we say that a function $f:X \rightarrow Y$ is a *computation rule* that associates to each $x \in X$ an element $y \in Y$. The first type of function we learn about is a linear function $L: \mathbb{R} \rightarrow \mathbb{R}$, for example:
#
# $$L(x) = 2x + 3$$
# The first function we will create is the function $promedio: List[Number] \rightarrow Number$, i.e. a function that associates a number to every `List` whose elements are all numbers.
def promedio(numeros):
return sum(numeros) / len(numeros)
type(promedio)
# Look carefully at the 4 spaces before the `return` line; they are essential, and without them an error is raised. Having **declared** the function `promedio` on the previous line, Python expects the **definition** of the function to start on the following line, and that definition has to consist of lines **indented** by 4 spaces. After the `return` you can go back to writing flush left, without indentation.
def promedio_malo(numeros):
return sum(numeros) / len(numeros)
# To define a function:
#
# - we start with `def`,
# - then a space and the name we want to give the function
# - then, between parentheses, the name of the function's argument
# - at the end of that line, a ":"
# - the lines that follow have to start with 4 spaces (Jupyter inserts them automatically)
# - the last line of the function starts with `return`; that is where the result of applying the function to its argument is defined
# Let's try it out,
promedio(edades)
# Finally, we can make the output look tidier by using the `format` instruction or an `fstring`.
mensaje = "The average age is: {0:.2f}" # What appears between {} indicates that a variable goes there;
                                        # the 0 says it is the first one, the ":" separates the variable
                                        # number from its print format, and the expression .2f means it
                                        # will be printed with 2 decimal places (f for floating point).
# If we simply do `print(mensaje)` we get:
print(mensaje)
# Applying `format` we get:
print(mensaje.format(promedio(edades)))
# Let's do the same, but with an `fstring`.
mensaje2 = f"The average age is: {promedio(edades):.2f}"
print(mensaje2)
# ### Exercises
# Do not look at the solutions before having tried to solve the exercises. If after trying for a while you still have not solved one, then you may look at the solution; but if you look before trying, the answer will go in one ear and out the other.
# #### Exercise
# + [markdown] solution2="hidden" solution2_first=true
# Write a Python function that implements the following mathematical function $f: \mathbb{R} \rightarrow \mathbb{R}$, $f(x) = 2x+3$.
# + solution2="hidden"
def f(x):
return 2*x + 3
# + solution2="hidden"
x = [2, 2.7]
print("El valor de f calculada en {} es: {}".format(x[0], f(x[0])))
print("El valor de f calculada en {} es: {}".format(x[1], f(x[1])))
# -
# #### Exercise
# + [markdown] solution2="hidden" solution2_first=true
# Write a Python function that implements the following mathematical function $g: \mathbb{R^2} \rightarrow \mathbb{R}$, $g(x,y) = 2x+3y+1$.
# + solution2="hidden"
def g(x, y):
return 2*x +3*y + 1
# -
# ## The `for` Statement
# Consider the following situation: we hold a position in a CLP time deposit maturing in 365 days, and we want to stress the value of the deposit under several discount rates.
monto = 1000000000
plazo = 365
tasas = [.01, .0125, .015, .0175, .02] # Linear Act/360 rates.
# The `for` statement lets us walk through the `List` tasas sequentially and compute the present value of the deposit for each of the rates stored there.
for tasa in tasas: # <--- Note the indentation on the next line.
print(f"El valor del depósito para una tasa del {tasa:,.2%} es: {monto / (1 + tasa * plazo / 360):,.0f}")
# We can define a function that stores these results in a `List` and returns that `List`.
def sens_depo(monto, plazo, tasas):
resultado = []
for tasa in tasas:
resultado.append(monto / (1 + tasa * plazo / 360))
return resultado
sens_depo(monto, plazo, tasas)
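# The same function can also be written more compactly with a *list comprehension*, a Python construct for building a list in a single expression. This is just an equivalent sketch of `sens_depo`:
def sens_depo_lc(monto, plazo, tasas):
    # Same calculation as sens_depo, without the explicit append loop.
    return [monto / (1 + tasa * plazo / 360) for tasa in tasas]
sens_depo_lc(monto, plazo, tasas)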
# ## Additional Notions
# In Python a string (type `str`) behaves like a `Tuple` of text characters. For example,
str1 = "Python"
# It is immutable.
str1[0] = 'p'
# I can get the character at position `n` just as with a `Tuple` (or a `List`).
x = [2, 2.7]
y = (1.3, 3.14)
print("El valor de g calculada en ({}, {}) es: {}".format(x[0], y[0], g(x[0], y[0])))
print("El valor de g calculada en ({}, {}) es: {}".format(x[1], y[1], g(x[1], y[1])))
str1[1]
# I can also compute the number of characters with the `len` function.
len(str1)
# However, there are also functions specific to a `str`, such as the `format` function we already saw and the two functions shown below:
str1.upper()
str1.lower()
# There is also a feature called *slicing* that applies to `List`s as well as to `str`s and `Tuple`s.
lista = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
print(lista[1:]) # From element 1 onwards
print(lista[0:3]) # From 0 inclusive up to 3 exclusive
print(lista[:-1]) # From the first up to (but not including) the last
print(lista[-1:]) # The last one
print(lista[-2:]) # From the second-to-last onwards
# It also applies to a `str`.
str2 = "esdrújula"
print(str2[1:]) # From element 1 onwards
print(str2[0:3]) # From 0 inclusive up to 3 exclusive
print(str2[:-1]) # From the first up to (but not including) the last
print(str2[-1:]) # The last one
print(str2[-2:]) # From the second-to-last onwards
# And to a `Tuple`. Note that, in this case, a slice of a `Tuple` is a `Tuple`.
tupla = (0, 1, 2, 3, 4, 5, 6, 7, 8, 9)
print(tupla[1:]) # From element 1 onwards
print(tupla[0:3]) # From 0 inclusive up to 3 exclusive
print(tupla[:-1]) # From the first up to (but not including) the last
print(tupla[-1:]) # The last one
print(tupla[-2:]) # From the second-to-last onwards
# ### Exercise
# + [markdown] solution2="hidden" solution2_first=true
# Write a Python function that takes a `str` (a word) as its argument and returns the initial `str` with its first character (and only the first character) in upper case.
# + solution2="hidden"
def primer_en_mayus(palabra):
primera_letra_mayus = palabra[0].upper()
resto_palabra = palabra[1:]
resultado = primera_letra_mayus + resto_palabra
return resultado
# + solution2="hidden"
palabras = [str2, "onomatopeya", "español"]
print("{} ---> {}".format(palabras[0], primer_en_mayus(palabras[0])))
print("{} ---> {}".format(palabras[1], primer_en_mayus(palabras[1])))
print("{} ---> {}".format(palabras[2], primer_en_mayus(palabras[2])))
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/lmcanavals/pcd/blob/master/0603_dinning_philosophers.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + id="XCRm24rrdxmp" language="bash"
# sudo apt install spin
# spin --version
# + colab={"base_uri": "https://localhost:8080/"} id="IzZrWrJAdz0u" outputId="1a63f410-8961-4f9c-e0e9-bb799b13257d"
# %%file phil1.pml
#define wait(s) atomic { s > 0 -> s-- }
#define signal(s) s++
byte forks[5] = { 1 }
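// Naive protocol: every philosopher takes fork i and then fork (i+1)%5; if all five grab
// their first fork at the same time, none can take the second one and the system deadlocks,
// which the Spin verification below reports as an invalid end state.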
active[5] proctype Philosopher() {
byte i = _pid
do
::
// think
wait(forks[i])
wait(forks[(i+1) % 5])
// eat
signal(forks[i])
signal(forks[(i+1) % 5])
od
}
# + colab={"base_uri": "https://localhost:8080/"} id="TMwB3kv0e2A3" outputId="b65f62da-0223-4fce-cda6-39cd1c67eee0"
# !spin phil1.pml
# + colab={"base_uri": "https://localhost:8080/"} id="ZAdlzAhpeh7o" outputId="fc381b36-30b0-4def-e1e6-bc485fee44f2" language="bash"
# rm *trail
# F=phil1.pml
# spin -a $F && gcc pan.c && ./a.out
# [ -f $F.trail ] && spin -t $F
# + colab={"base_uri": "https://localhost:8080/"} id="0HfcmGKZe8Vd" outputId="3fd25c01-fa4c-4cc4-b593-188fdfca27f4"
# %%file phil2.pml
#define wait(s) atomic { s > 0 -> s-- }
#define signal(s) s++
byte forks[5] = { 1 }
byte room = 4
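// The "room" semaphore admits at most four philosophers to the table at once, so at least
// one seated philosopher can always obtain both forks, eat, and release them: no circular
// wait can form.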
active[5] proctype Philosopher() {
byte i = _pid
do
::
// think
wait(room)
wait(forks[i])
wait(forks[(i+1) % 5])
// eat
signal(forks[i])
signal(forks[(i+1) % 5])
signal(room)
od
}
# + colab={"base_uri": "https://localhost:8080/"} id="cCG-iHqff9Ji" outputId="f00bf6ea-db47-43b6-abb4-6c58b2900509" language="bash"
# rm *trail
# F=phil2.pml
# spin -a $F && gcc pan.c && ./a.out
# [ -f $F.trail ] && spin -t $F
# + colab={"base_uri": "https://localhost:8080/"} id="d1FUHp-MgBSm" outputId="9e6f08dc-ebd1-4e3c-9e11-d9fc9146cbd1"
# %%file phil3.pml
#define wait(s) atomic { s > 0 -> s-- }
#define signal(s) s++
byte forks[5] = { 1 }
active[4] proctype Philosopher() {
byte i = _pid
do
::
// think
wait(forks[i])
wait(forks[i+1])
// eat
signal(forks[i])
signal(forks[i+1])
od
}
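// Lefty breaks the symmetry: it takes fork 0 before fork 4, so every process now acquires
// its lower-numbered fork first and a circular wait is impossible.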
active proctype Lefty() {
byte i = _pid
do
::
// think
wait(forks[0])
wait(forks[4])
// eat
signal(forks[0])
signal(forks[4])
od
}
# + colab={"base_uri": "https://localhost:8080/"} id="lXqhF_0Vg_Pp" outputId="90976417-ba98-4283-ed9a-c4dc61a6dbc7" language="bash"
# rm *trail
# F=phil3.pml
# spin -a $F && gcc pan.c && ./a.out
# [ -f $F.trail ] && spin -t $F
# + id="UW5PFeo0hB7o"
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # [cknowledge.org/ai](http://cknowledge.org/ai): Crowdsourcing benchmarking and optimization of AI
# * [Reproducible Quality-Efficient Systems Tournaments](http://cknowledge.org/request) ([ReQuEST initiative](http://cknowledge.org/request.html#organizers))
# * [AI artifacts](http://cknowledge.org/ai-artifacts) (cTuning foundation)
# * [Android app](https://play.google.com/store/apps/details?id=openscience.crowdsource.video.experiments) (dividiti)
# * [Desktop app](https://github.com/dividiti/ck-crowdsource-dnn-optimization) (dividiti)
# * [CK-Caffe](https://github.com/dividiti/ck-caffe) (Berkeley)
# * [CK-Caffe2](https://github.com/ctuning/ck-caffe2) (Facebook)
# * [CK-CNTK](https://github.com/ctuning/ck-cntk) (Microsoft)
# * [CK-KaNN](https://github.com/ctuning/ck-kann) (Kalray)
# * [CK-MVNC](https://github.com/ctuning/ck-mvnc) (Movidius / Intel)
# * [CK-MXNet](https://github.com/ctuning/ck-mxnet) (Apache)
# * [CK-NNTest](https://github.com/ctuning/ck-nntest) (cTuning foundation)
# * [CK-TensorFlow](https://github.com/ctuning/ck-tensorflow) (Google)
# * [CK-TensorRT](https://github.com/ctuning/ck-tensorrt) (NVIDIA)
# * etc.
# # [dividiti](http://dividiti.com)'s submission to [ReQuEST @ ASPLOS'18](http://cknowledge.org/request-cfp-asplos2018.html)
# ## Table of Contents
# 1. [Overview](#overview)
# 1. [Platforms](#platforms)
# 1. [Linaro HiKey960](#platforms_hikey) (**"HiKey"**)
# 1. [Firefly RK3399](#platforms_firefly) (**"Firefly"**)
# 1. [Experimental data](#data) [for developers]
# 1. [Data wrangling code](#code) [for developers]
# 1. [Experiments on HiKey](#experiments_hikey)
# 1. [TensorFlow](#experiments_tensorflow_hikey)
# 1. [ArmCL](#experiments_armcl_hikey)
# 1. [ArmCL vs. TensorFlow](#experiments_armcl_tensorflow_hikey)
# 1. [Experiments on Firefly](#experiments_firefly)
# 1. [TensorFlow](#experiments_tensorflow_firefly)
# 1. [ArmCL](#experiments_armcl_firefly)
# 1. [ArmCL vs. TensorFlow](#experiments_armcl_tensorflow_firefly)
# <a id="overview"></a>
# ## Overview
# This Jupyter Notebook studies the performance (execution time) vs. accuracy (top1 / top5) trade-offs of MobileNets using the [Arm Compute Library](https://github.com/ARM-software/ComputeLibrary) and TensorFlow on two development platforms:
# - [Linaro HiKey960](https://www.96boards.org/product/hikey960/);
# - [Firefly RK3399](http://en.t-firefly.com/index.php/product/rk3399.html).
# <a id="platforms"></a>
# ## Platforms
# <a id="platforms_hikey"></a>
# ### Linaro HiKey960
# - Chip:
# - [HiSilicon Kirin 960](http://www.hisilicon.com/en/Solutions/Kirin)
# - CPU ("performance" / "big"):
# - ARM® Cortex®-A73;
# - Max clock 2362 MHz;
# - 4 cores;
# - CPU ("efficiency" / "LITTLE"):
# - ARM® Cortex®-A53;
# - Max clock 1844 MHz;
# - 4 cores;
# - GPU:
# - ARM® Mali™ G71 architecture;
# - Max clock 1037 MHz;
# - 8 cores;
# - OpenCL driver (`hikey962`: `instr=1,clexperimental=1,softjobpatch`):
# ```
# $ ck run program:tool-print-opencl-devices | grep "version:"
# OpenCL 2.0 v1.r6p0-01rel0.24c5f5e966f2b7f1f19b91d6f32ff53e
# ```
#
# - RAM:
# - LPDDR4 SDRAM;
# - 3 GB;
#
# - BSP:
# - Debian Stretch (9) Linux
# ```
# $ uname -a
# Linux hikey962 4.4.74-00216-g10816f6 #3 SMP PREEMPT Thu Jul 6 14:38:42 BST 2017 aarch64 GNU/Linux
# ```
hikey_model = 'HiKey960\x00'
hikey_name = 'Linaro HiKey960'
hikey_id = 'hikey-960'
hikey_gpu = 'Mali-G71 MP8'
hikey_gpu_mhz = 807
# <a id="platforms_firefly"></a>
# ### Firefly RK3399
# - Chip:
# - [Rockchip RK3399](http://rockchip.wikidot.com/rk3399)
# - CPU ("big"):
# - ARM® Cortex®-A72 architecture
# - Max clock 1800 MHz;
# - 2 cores;
# - CPU ("LITTLE"):
# - ARM® Cortex®-A53 architecture;
# - Max clock 1416 MHz;
# - 4 cores;
# - GPU:
# - ARM® Mali™-T860 architecture;
# - Max clock 800 MHz;
# - 4 cores;
# - OpenCL driver:
# ```
# $ ck run program:tool-print-opencl-devices | grep "version:"
# v1.r13p0-00rel0-git(a4271c9).31ba04af2d3c01618138bef3aed66c2c
# ```
#
# - RAM:
# - Samsung dual-channel DDR3;
# - 4 GB (8 GB swap);
# - BSP:
# - [Firefly-rk3399_xubuntu1604_201711301130.7z](https://drive.google.com/drive/u/0/folders/1lbaR7XVyHT4SnXkJ2ybj5YXAzAjDBWfT)
# ```
# $ cat /etc/lsb-release
# DISTRIB_ID=Ubuntu
# DISTRIB_RELEASE=16.04
# DISTRIB_CODENAME=xenial
# DISTRIB_DESCRIPTION="Ubuntu 16.04.4 LTS"
# $ uname -a
# Linux firefly 4.4.77 #554 SMP Thu Nov 30 11:30:11 HKT 2017 aarch64 aarch64 aarch64 GNU/Linux
# ```
firefly_model = 'Rockchip RK3399 Firefly Board (Linux Opensource)\x00'
firefly_name = 'Firefly RK3399'
firefly_id = 'firefly'
firefly_gpu = 'Mali-T860 MP4'
firefly_gpu_mhz = 800
# ### Platform mappings
model_to_id = {
firefly_model : firefly_id,
hikey_model : hikey_id
}
id_to_name = {
firefly_id : firefly_name,
hikey_id : hikey_name
}
id_to_gpu = {
firefly_id : firefly_gpu,
hikey_id : hikey_gpu
}
id_to_gpu_mhz = {
firefly_id : firefly_gpu_mhz,
hikey_id : hikey_gpu_mhz
}
# ### Convolution method mapping
convolution_method_to_name = [
'gemm',
'direct',
'winograd'
]
# <a id="data"></a>
# ## Get the experimental data
# The experimental data can be downloaded and registered with CK as described below.
# ### ArmCL experiments on HiKey
# #### ArmCL accuracy experiments on 50,000 images
# ```
# $ wget https://www.dropbox.com/s/tm1qlom7ehfbe0w/ck-request-asplos18-mobilenets-armcl-opencl-accuracy-50000.zip
# $ ck add repo --zip=ck-request-asplos18-mobilenets-armcl-opencl-accuracy-50000.zip
# ```
armcl_accuracy_50000_repo_uoa = 'ck-request-asplos18-mobilenets-armcl-opencl-accuracy-50000'
# !ck list $armcl_accuracy_50000_repo_uoa:experiment:* | sort
# #### ArmCL accuracy experiments on 500 images
# ```
# $ wget https://www.dropbox.com/s/wqqchrhr36skm9y/ck-request-asplos18-mobilenets-armcl-opencl-accuracy-500.zip
# $ ck add repo --zip=ck-request-asplos18-mobilenets-armcl-opencl-accuracy-500.zip
# ```
armcl_accuracy_500_repo_uoa = 'ck-request-asplos18-mobilenets-armcl-opencl-accuracy-500'
# !ck list $armcl_accuracy_500_repo_uoa:experiment:* | sort
# #### ArmCL performance (latency) experiments
# ```
# $ wget https://www.dropbox.com/s/wm3ahhm20y7g04k/ck-request-asplos18-mobilenets-armcl-opencl-performance.zip
# $ ck add repo --zip=ck-request-asplos18-mobilenets-armcl-opencl-performance.zip
# ```
armcl_performance_repo_uoa = 'ck-request-asplos18-mobilenets-armcl-opencl-performance'
# !ck list $armcl_performance_repo_uoa:experiment:* | sort
# ### TensorFlow experiments on HiKey
# #### TensorFlow accuracy experiments on 50,000 images
# ```
# $ wget https://www.dropbox.com/s/ro5txjz9n396s0t/ck-request-asplos18-mobilenets-tensorflow-accuracy-50000.zip
# $ ck add repo --zip=ck-request-asplos18-mobilenets-tensorflow-accuracy-50000.zip
# ```
tensorflow_accuracy_50000_repo_uoa = 'ck-request-asplos18-mobilenets-tensorflow-accuracy-50000'
# !ck list $tensorflow_accuracy_50000_repo_uoa:experiment:* | sort
# #### TensorFlow accuracy experiments on 500 images
# ```
# $ wget https://www.dropbox.com/s/k0xhhb7owwvyfgu/ck-request-asplos18-mobilenets-tensorflow-accuracy-500.zip
# $ ck add repo --zip=ck-request-asplos18-mobilenets-tensorflow-accuracy-500.zip
# ```
tensorflow_accuracy_500_repo_uoa = 'ck-request-asplos18-mobilenets-tensorflow-accuracy-500'
# !ck list $tensorflow_accuracy_500_repo_uoa:experiment:* | sort
# #### TensorFlow performance (latency) experiments
# ```
# $ wget https://www.dropbox.com/s/1fagdonfaqsdfou/ck-request-asplos18-mobilenets-tensorflow-performance.zip
# $ ck add repo --zip=ck-request-asplos18-mobilenets-tensorflow-performance.zip
# ```
tensorflow_performance_repo_uoa = 'ck-request-asplos18-mobilenets-tensorflow-performance'
# !ck list $tensorflow_performance_repo_uoa:experiment:* | sort
# ### TensorFlow experiments on Firefly
# #### TensorFlow accuracy experiments on 500 images
firefly_tensorflow_accuracy_500_repo_uoa = 'ck-request-asplos18-mobilenets-tensorflow-accuracy-500-firefly'
# !ck list $firefly_tensorflow_accuracy_500_repo_uoa:experiment:* | sort
# #### TensorFlow performance (latency) experiments
firefly_tensorflow_performance_repo_uoa = 'ck-request-asplos18-mobilenets-tensorflow-performance-firefly'
# !ck list $firefly_tensorflow_performance_repo_uoa:experiment:* | sort
# ### ArmCL experiments on Firefly
# #### ArmCL performance (latency) experiments
firefly_armcl_performance_repo_uoa = 'ck-request-asplos18-mobilenets-armcl-opencl-performance-firefly'
# !ck list $firefly_armcl_performance_repo_uoa:experiment:* | sort
# #### ArmCL accuracy experiments on 500 images
firefly_armcl_accuracy_500_repo_uoa = 'ck-request-asplos18-mobilenets-armcl-opencl-accuracy-500-firefly'
# !ck list $armcl_accuracy_500_repo_uoa:experiment:* | sort
# <a id="code"></a>
# ## Data wrangling code
# **NB:** Please ignore this section if you are not interested in re-running or modifying this notebook.
# ### Includes
# #### Standard
import os
import sys
import json
import re
# #### Scientific
# If some of the scientific packages are missing, please install them using:
# ```
# # pip install jupyter pandas numpy matplotlib
# ```
import IPython as ip
import pandas as pd
import numpy as np
import matplotlib as mp
import seaborn as sb
print ('IPython version: %s' % ip.__version__)
print ('Pandas version: %s' % pd.__version__)
print ('NumPy version: %s' % np.__version__)
print ('Matplotlib version: %s' % mp.__version__)
print ('Seaborn version: %s' % sb.__version__)
from IPython.display import Image, display
def display_in_full(df):
pd.options.display.max_columns = len(df.columns)
pd.options.display.max_rows = len(df.index)
display(df)
import matplotlib.pyplot as plt
from matplotlib import cm
# %matplotlib inline
default_colormap = cm.autumn
default_fontsize = 16
default_barwidth = 0.8
default_figwidth = 24
default_figheight = 3
default_figdpi = 200
default_figsize = [default_figwidth, default_figheight]
if mp.__version__[0]=='2': mp.style.use('classic')
mp.rcParams['figure.max_open_warning'] = 200
mp.rcParams['figure.dpi'] = default_figdpi
mp.rcParams['font.size'] = default_fontsize
mp.rcParams['legend.fontsize'] = 'medium'
# #### Collective Knowledge
# If CK is not installed, please install it using:
# ```
# # pip install ck
# ```
import ck.kernel as ck
print ('CK version: %s' % ck.__version__)
# ### Access experimental data
def get_experimental_results(repo_uoa, tags='explore-mobilenets-performance', accuracy=False,
module_uoa='experiment', _library=None, _platform=None):
r = ck.access({'action':'search', 'repo_uoa':repo_uoa, 'module_uoa':module_uoa, 'tags':tags})
if r['return']>0:
print('Error: %s' % r['error'])
exit(1)
experiments = r['lst']
dfs = []
for experiment in experiments:
data_uoa = experiment['data_uoa']
r = ck.access({'action':'list_points', 'repo_uoa':repo_uoa, 'module_uoa':module_uoa, 'data_uoa':data_uoa})
if r['return']>0:
print('Error: %s' % r['error'])
exit(1)
# Mapping of expected library tags to reader-friendly names.
tag_to_name = {
# ArmCL tags on HiKey.
'17.12-48bc34ea' : 'armcl-17.12',
'18.01-f45d5a9b' : 'armcl-18.01',
'18.03-e40997bb' : 'armcl-18.03',
'request-d8f69c13' : 'armcl-dv/dt', # armcl-18.03+
'18.05-b3a371bc' : 'armcl-18.05',
'18.05-0acd60ed-request' : 'armcl-dv/dt', # armcl-18.05+
# ArmCL tags on Firefly.
'17.12-48bc34e' : 'armcl-17.12',
'18.01-f45d5a9' : 'armcl-18.01',
'18.03-e40997b' : 'armcl-18.03',
'18.05-b3a371b' : 'armcl-18.05',
# TensorFlow tags.
'tensorflow-1.7' : 'tensorflow-1.7',
'tensorflow-1.8' : 'tensorflow-1.8',
}
# Library.
library_tags = [ tag for tag in r['dict']['tags'] if tag in tag_to_name.keys() ]
if len(library_tags)==1:
library = tag_to_name[library_tags[0]]
else:
print('[Warning] Bad library tags. Skipping experiment with tags:')
print(r['dict']['tags'])
continue
if _library and _library!=library: continue
# For each point.
for point in r['points']:
point_file_path = os.path.join(r['path'], 'ckp-%s.0001.json' % point)
with open(point_file_path) as point_file:
point_data_raw = json.load(point_file)
characteristics_list = point_data_raw['characteristics_list']
num_repetitions = len(characteristics_list)
platform = model_to_id[point_data_raw['features']['platform']['platform']['model']]
if _platform and _platform!=platform: continue
env = point_data_raw['choices']['env']
batch_size = np.int64(env.get('CK_BATCH_SIZE',-1))
batch_count = np.int64(env.get('CK_BATCH_COUNT',-1))
convolution_method_default = 1 if platform==hikey_id else 0
convolution_method = convolution_method_to_name[np.int64(env.get('CK_CONVOLUTION_METHOD_HINT',env.get('CK_CONVOLUTION_METHOD', convolution_method_default)))]
if library.startswith('tensorflow-'):
multiplier = np.float64(env.get('CK_ENV_TENSORFLOW_MODEL_MOBILENET_MULTIPLIER',-1))
resolution = np.int64(env.get('CK_ENV_TENSORFLOW_MODEL_MOBILENET_RESOLUTION',-1))
else:
multiplier = np.float64(env.get('CK_ENV_MOBILENET_WIDTH_MULTIPLIER',-1))
resolution = np.int64(env.get('CK_ENV_MOBILENET_RESOLUTION',-1))
model = 'v1-%.2f-%d' % (multiplier, resolution)
if accuracy:
data = [
{
# features
'platform': platform,
'library': library,
# choices
'model': model,
'batch_size': batch_size,
'batch_count': batch_count,
'convolution_method': convolution_method,
'resolution': resolution,
'multiplier': multiplier,
# statistical repetition
'repetition_id': repetition_id,
# runtime characteristics
'success?': characteristics['run'].get('run_success', 'n/a'),
'accuracy_top1': characteristics['run'].get('accuracy_top1', 0),
'accuracy_top5': characteristics['run'].get('accuracy_top5', 0),
'frame_predictions': characteristics['run'].get('frame_predictions', []),
# # recompute accuracy from frame_predictions (was incorrectly recorded in early experiments)
# 'accuracy_top1_': len([
# prediction for prediction in characteristics['run'].get('frame_predictions', [])
# if prediction['accuracy_top1']=='yes'
# ]) / np.float64(batch_count),
# 'accuracy_top5_': len([
# prediction for prediction in characteristics['run'].get('frame_predictions', [])
# if prediction['accuracy_top5']=='yes'
# ]) / np.float64(batch_count)
}
for (repetition_id, characteristics) in zip(range(num_repetitions), characteristics_list)
]
else: # performance
data = [
{
# features
'platform': platform,
'library': library,
# choices
'model': model,
'batch_size': batch_size,
'batch_count': batch_count,
'convolution_method': convolution_method,
'resolution': resolution,
'multiplier': multiplier,
# statistical repetition
'repetition_id': repetition_id,
# runtime characteristics
'success?': characteristics['run'].get('run_success', 'n/a'),
'time_avg_ms': characteristics['run']['prediction_time_avg_s']*1e+3,
'time_total_ms': characteristics['run']['prediction_time_total_s']*1e+3,
}
for (repetition_id, characteristics) in zip(range(num_repetitions), characteristics_list)
]
index = [
'platform', 'library', 'model', 'multiplier', 'resolution', 'batch_size', 'convolution_method', 'repetition_id'
]
# Construct a DataFrame.
df = pd.DataFrame(data)
df = df.set_index(index)
# Append to the list of similarly constructed DataFrames.
dfs.append(df)
if dfs:
# Concatenate all thus constructed DataFrames (i.e. stack on top of each other).
result = pd.concat(dfs)
result.sort_index(ascending=True, inplace=True)
else:
# Construct a dummy DataFrame the success status of which can be safely checked.
result = pd.DataFrame(columns=['success?'])
return result
# ### Merge performance and accuracy data
# Return a new DataFrame with only the performance and accuracy metrics.
def merge_performance_accuracy(df_performance, df_accuracy,
reference_platform=None, reference_lib=None, reference_convolution_method='direct',
performance_metric='time_avg_ms', accuracy_metric='accuracy_top1'):
df = df_performance[[performance_metric]]
accuracy_list = []
for index, row in df.iterrows():
(platform, lib, model, multiplier, resolution, batch_size, convolution_method) = index
if reference_platform: platform = reference_platform
try:
accuracy = df_accuracy.loc[(platform, lib, model, multiplier, resolution, batch_size, convolution_method)][accuracy_metric]
except:
if reference_lib: lib = reference_lib
convolution_method = reference_convolution_method
accuracy = df_accuracy.loc[(platform, lib, model, multiplier, resolution, batch_size, convolution_method)][accuracy_metric]
accuracy_list.append(accuracy)
df = df.assign(accuracy_top1=accuracy_list) # FIXME: assign to the value of accuracy_metric
return df
# ### Plot experimental data
# #### Plot performance (bar plot)
def plot_performance(df_raw, groupby_level='convolution_method', platform_id=hikey_id, gpu_mhz=id_to_gpu_mhz[hikey_id],
performance_metric='time_avg_ms', title=None, figsize=None, rot=90):
df_bar = pd.DataFrame(
data=df_raw[performance_metric].values, columns=['ms'],
index=pd.MultiIndex.from_tuples(
tuples=[ (l,m[3:],c,r) for (p,l,m,_,_,_,c,r) in df_raw.index.values ],
names=[ 'library', 'model', 'convolution_method', 'repetition_id' ]
)
)
df_bar.columns.names = ['time']
if groupby_level=='convolution_method':
unstack_level = 'library'
xlabel='(Model [channel multiplier - input resolution], Convolution Method)'
colormap = cm.autumn
elif groupby_level=='library':
unstack_level = 'convolution_method'
xlabel='(Library, Model [channel multiplier - input resolution])'
colormap = cm.summer
# Set default style.
ylabel='Image recognition time (ms)'
if not title: title = '%s (GPU: %s @ %d MHz)' % (id_to_name[platform_id], id_to_gpu[platform_id], gpu_mhz)
if not figsize: figsize = [default_figwidth, 8]
# Plot
mean = df_bar.groupby(level=df_bar.index.names[:-1]).mean().unstack(unstack_level)
std = df_bar.groupby(level=df_bar.index.names[:-1]).std().unstack(unstack_level)
axes = mean.groupby(level=groupby_level) \
.plot(yerr=std, kind='bar', grid=True, width=0.8, rot=rot, figsize=figsize,
fontsize=default_fontsize, colormap=colormap)
for ax in axes:
# Title.
ax.set_title(title)
# X label.
ax.set_xlabel(xlabel)
# Y axis.
ax.set_ylabel(ylabel)
# #### Plot performance (violin plot)
def plot_performance_violin(df_raw, groupby_level='convolution_method', platform_id=hikey_id, gpu_mhz=id_to_gpu_mhz[hikey_id],
performance_metric='time_avg_ms', title=None, figsize=None, fontscale=1.75):
df_violin = pd.DataFrame(
data=df_raw[performance_metric].values, columns=['ms'],
index=pd.MultiIndex.from_tuples(
tuples=[ (l,m[3:],c,r) for (p,l,m,_,_,_,c,r) in df_raw.index.values ],
names=[ 'library', 'model', 'convolution_method', 'repetition_id' ]
)
)
if groupby_level=='convolution_method':
df_violin = df_violin.swaplevel('convolution_method', 'library')
hue_level = 'library'
palette = 'autumn'
elif groupby_level=='library':
hue_level = 'convolution_method'
palette = 'summer'
num_model_values = len(df_violin.index.get_level_values(level='model').unique())
# Set default style.
xlabel='Model [channel multiplier - input resolution]'
ylabel='Image recognition time (ms)'
if not title: title = '%s (GPU: %s @ %d MHz)' % (id_to_name[platform_id], id_to_gpu[platform_id], gpu_mhz)
if not figsize: figsize = (num_model_values*1.5, 12)
sb.set_style('whitegrid')
sb.set_palette(palette)
# For each unique groupby value.
groupby_values = df_violin.index.get_level_values(level=groupby_level).unique()
for groupby_value in groupby_values:
fig = plt.figure(figsize=figsize, dpi=default_figdpi)
ax = fig.gca()
df_violin_loc = df_violin.loc[groupby_value].reset_index()
sb.violinplot(ax=ax, data=df_violin_loc, x='model', y='ms', hue=hue_level,
fontscale=fontscale, inner='point', split=False, saturation=0.8)
# Title.
groupby_title = '%s: %s=%s' % (title, groupby_level, groupby_value)
ax.set_title(groupby_title, fontsize=default_fontsize*fontscale)
# X axis.
ax.set_xlabel(xlabel, fontsize=default_fontsize*fontscale)
# Y axis.
ystep = 10
ymin = np.int64(df_violin_loc['ms'].min())
ymax = np.int64(df_violin_loc['ms'].max()) // ystep * ystep + ystep + 1
ax.set_ylim([ymin, ymax])
ax.set_yticks(range(0, ymax, ystep))
ax.set_ylabel(ylabel, fontsize=default_fontsize*fontscale)
# Vertical lines between groups of violins.
for x in ax.get_xticks():
ax.vlines(x=x+0.5, ymin=0, ymax=ymax, linestyles='dotted', colors='purple')
# #### Plot performance vs. accuracy
def plot(df_performance_accuracy, libs=None, platform_id=hikey_id, gpu_mhz=id_to_gpu_mhz[hikey_id],
performance_metric='time_avg_ms', accuracy_metric='accuracy_top1',
xmin=0.0, xmax=75.1, xstep=5.0, ymin=0.4, ymax=0.751, ystep=0.05,
title=None, save_fig=False, save_fig_name='mobilenets-default'):
fig = plt.figure(figsize=(8,4), dpi=default_figdpi)
ax = fig.gca()
lib_to_color = {
'armcl-17.12' : 'red',
'armcl-18.01' : 'yellow',
'armcl-18.03' : 'orange',
'armcl-dv/dt' : 'green',
'armcl-18.05' : 'purple',
'tensorflow-1.7' : 'cyan',
'tensorflow-1.8' : 'blue',
}
multiplier_to_marker = {
'gemm' : { 1.00 : '*', 0.75 : 'D', 0.50: 'v', 0.25 : '8' },
'direct' : { 1.00 : 'p', 0.75 : 's', 0.50: '^', 0.25 : 'o' },
'winograd' : { 1.00 : 'P', 0.75 : 'X', 0.50: '<', 0.25 : '.' },
}
if libs==None: libs = df_performance_accuracy.index.levels[1].tolist()
df = df_performance_accuracy.loc[platform_id].loc[libs]
for index, row in df.iterrows():
(lib, model, multiplier, resolution, batch_size, convolution_method) = index
performance = row[performance_metric]
accuracy = row[accuracy_metric]
# Mark Pareto-optimal points.
is_on_pareto = True
for index1, row1 in df.iterrows():
is_faster = row1[performance_metric] < row[performance_metric]
is_no_less_accurate = row1[accuracy_metric] >= row[accuracy_metric]
if is_faster and is_no_less_accurate:
is_on_pareto = False
break
# GEMM-based convolution should be exactly the same in '18.03' and 'dv/dt', so plot
# the minimum execution time of '18.03' and 'dv/dt' as '18.03'.
if 'armcl-dv/dt' in libs and convolution_method=='gemm' and (lib=='armcl-dv/dt' or lib=='armcl-18.03'):
performance_dv_dt = df.loc[('armcl-dv/dt', model, multiplier, resolution, batch_size, convolution_method)][performance_metric]
performance_18_03 = df.loc[('armcl-18.03', model, multiplier, resolution, batch_size, convolution_method)][performance_metric]
if lib=='armcl-18.03':
if (performance_dv_dt < performance_18_03):
continue
if lib=='armcl-dv/dt':
if (performance_dv_dt < performance_18_03):
lib = 'armcl-18.03' # change color
else:
continue
# Select size, color and marker.
size = resolution / 16
color = lib_to_color[lib]
marker = multiplier_to_marker[convolution_method][multiplier]
# Plot.
ax.plot(performance, accuracy, marker, markerfacecolor=color, markersize=size)
# Mark Pareto-optimal points with scaled black pluses.
if is_on_pareto:
ax.plot(performance, accuracy, 'k+', markersize=size)
# Title.
if not title: title = '%s (GPU: %s @ %d MHz)' % (id_to_name[platform_id], id_to_gpu[platform_id], gpu_mhz)
ax.set_title(title)
# X axis.
xlabel='Image recognition time (ms)' if performance_metric=='time_avg_ms' else ''
ax.set_xlabel(xlabel)
ax.set_xlim(xmin, xmax)
ax.set_xticks(np.arange(xmin, xmax, xstep))
for xtick in ax.xaxis.get_major_ticks(): xtick.label.set_fontsize(12)
# Y axis.
ylabel='Image recognition accuracy (top %s)' % accuracy_metric[-1]
ax.set_ylabel(ylabel)
ax.set_ylim(ymin, ymax)
ax.set_yticks(np.arange(ymin, ymax, ystep))
for ytick in ax.yaxis.get_major_ticks(): ytick.label.set_fontsize(12)
# Legend.
handles = [
mp.patches.Patch(color=color, label=lib)
for (lib, color) in sorted(lib_to_color.items())
if lib in libs
]
plt.legend(title='Library', handles=handles[::-1], loc='lower right')
# Show with grid on.
plt.grid(True)
plt.show()
# Save figure.
if save_fig:
save_fig_path = os.path.join(save_fig_dir, '%s.%s' % (save_fig_name, save_fig_ext))
plt.savefig(save_fig_path, dpi=default_figdpi, bbox_inches='tight')
# ### Set options for saving figures/tables
def get_paper_dir(module_uoa='dissemination.publication', data_uoa='08da9685582866a0'):
r = ck.access({'action':'find','module_uoa':module_uoa,'data_uoa':data_uoa})
if r['return']>0:
print('Warning: %s' % r['error'])
paper_dir = os.path.curdir
else:
paper_dir = r['path']
return paper_dir
save_fig_ext = 'pdf'
save_fig_dir = os.path.join(get_paper_dir(), 'figures')
if not os.path.exists(save_fig_dir):
os.makedirs(save_fig_dir)
save_tab = False
save_tab_ext = 'tex'
save_tab_dir = os.path.join(get_paper_dir(), 'tables')
if not os.path.exists(save_tab_dir):
os.makedirs(save_tab_dir)
# <a id="experiments_tensorflow_hikey"></a>
# ## TensorFlow experiments on HiKey
# ### TensorFlow performance (latency)
df_tensorflow_performance_raw = get_experimental_results(repo_uoa=tensorflow_performance_repo_uoa,
tags='explore-mobilenets-performance', accuracy=False)
# Take the minimum execution time out of several repetitions.
df_tensorflow_performance = \
df_tensorflow_performance_raw.groupby(level=df_tensorflow_performance_raw.index.names[:-1]).min()
# Display all rows and columns.
display_in_full(df_tensorflow_performance)
# #### Plot by convolution method
plot_performance(df_tensorflow_performance_raw, platform_id=hikey_id, groupby_level='convolution_method')
plot_performance_violin(df_tensorflow_performance_raw, platform_id=hikey_id, groupby_level='convolution_method')
# #### Plot by library
plot_performance(df_tensorflow_performance_raw, platform_id=hikey_id, groupby_level='library')
plot_performance_violin(df_tensorflow_performance_raw, platform_id=hikey_id, groupby_level='library')
# ### TensorFlow accuracy on 500 images
df_tensorflow_accuracy_500_raw = get_experimental_results(repo_uoa=tensorflow_accuracy_500_repo_uoa,
tags='explore-mobilenets-accuracy', accuracy=True)
# Extract frame predictions.
df_tensorflow_predictions_500 = df_tensorflow_accuracy_500_raw[['frame_predictions']]
# Reduce the repetition_id index dimension (only 1 repetition anyway).
df_tensorflow_accuracy_500 = \
df_tensorflow_accuracy_500_raw[['accuracy_top1', 'accuracy_top5']] \
.groupby(level=df_tensorflow_accuracy_500_raw.index.names[:-1]).min()
# Display all rows and columns.
display_in_full(df_tensorflow_accuracy_500)
# ### TensorFlow accuracy on 50,000 images (measured)
df_tensorflow_accuracy_50000_raw = get_experimental_results(repo_uoa=tensorflow_accuracy_50000_repo_uoa,
tags='explore-mobilenets-accuracy', accuracy=True)
# Extract frame predictions.
df_tensorflow_predictions_50000 = df_tensorflow_accuracy_50000_raw[['frame_predictions']]
# Reduce the repetition_id index dimension (only 1 repetition anyway).
df_tensorflow_accuracy_50000 = \
df_tensorflow_accuracy_50000_raw[['accuracy_top1', 'accuracy_top5']] \
.groupby(level=df_tensorflow_accuracy_50000_raw.index.names[:-1]).min()
# Display all rows and columns.
display_in_full(df_tensorflow_accuracy_50000)
# ### TensorFlow accuracy on 50,000 images (claimed)
# TensorFlow accuracy reported with the MobileNets pretrained weights shared on 2017_06_14. Copied from:
# https://github.com/tensorflow/models/blob/1630da3434974e9ad5a0b6d887ac716a97ce03d3/research/slim/nets/mobilenet_v1.md#pre-trained-models
tensorflow_accuracy_50000_table = {
'v1-1.00-224':[569, 4.24, 70.7, 89.5],
'v1-1.00-192':[418, 4.24, 69.3, 88.9],
'v1-1.00-160':[291, 4.24, 67.2, 87.5],
'v1-1.00-128':[186, 4.24, 64.1, 85.3],
'v1-0.75-224':[317, 2.59, 68.4, 88.2],
'v1-0.75-192':[233, 2.59, 67.4, 87.3],
'v1-0.75-160':[162, 2.59, 65.2, 86.1],
'v1-0.75-128':[104, 2.59, 61.8, 83.6],
'v1-0.50-224':[150, 1.34, 64.0, 85.4],
'v1-0.50-192':[110, 1.34, 62.1, 84.0],
'v1-0.50-160':[77, 1.34, 59.9, 82.5],
'v1-0.50-128':[49, 1.34, 56.2, 79.6],
'v1-0.25-224':[41, 0.47, 50.6, 75.0],
'v1-0.25-192':[34, 0.47, 49.0, 73.6],
'v1-0.25-160':[21, 0.47, 46.0, 70.7],
'v1-0.25-128':[14, 0.47, 41.3, 66.2],
}
df_tensorflow_accuracy_50000_claimed = pd.DataFrame(
index=['MACs (million)', 'Parameters (million)', 'accuracy_top1 (%)', 'accuracy_top5 (%)'],
data=tensorflow_accuracy_50000_table,
).T.sort_index()
accuracy_top1 = df_tensorflow_accuracy_50000_claimed['accuracy_top1 (%)']/100
accuracy_top5 = df_tensorflow_accuracy_50000_claimed['accuracy_top5 (%)']/100
df_tensorflow_accuracy_50000_claimed = df_tensorflow_accuracy_50000_claimed.assign(accuracy_top1=accuracy_top1)
df_tensorflow_accuracy_50000_claimed = df_tensorflow_accuracy_50000_claimed.assign(accuracy_top5=accuracy_top5)
df_tensorflow_accuracy_50000_claimed.index = df_tensorflow_accuracy_50000.index
display_in_full(df_tensorflow_accuracy_50000_claimed)
# Diff measured as the fraction of correctly predicted images.
df_tensorflow_accuracy_50000_diff = \
df_tensorflow_accuracy_50000_claimed[['accuracy_top1', 'accuracy_top5']] - \
df_tensorflow_accuracy_50000[['accuracy_top1', 'accuracy_top5']]
display_in_full(df_tensorflow_accuracy_50000_diff)
# Diff measured as the number of mispredicted images.
df_tensorflow_accuracy_50000_diff_mispredicted = (df_tensorflow_accuracy_50000_diff) * 50000
df_tensorflow_accuracy_50000_diff_mispredicted.columns = ['mispredicted_top1', 'mispredicted_top5']
display_in_full(df_tensorflow_accuracy_50000_diff_mispredicted)
# <a id="experiments_armcl_hikey"></a>
# ## ArmCL experiments on HiKey
# ### ArmCL performance (latency)
df_armcl_performance_raw = get_experimental_results(repo_uoa=armcl_performance_repo_uoa,
tags='explore-mobilenets-performance', accuracy=False)
# Take the minimum execution time out of several repetitions.
df_armcl_performance = df_armcl_performance_raw.groupby(level=df_armcl_performance_raw.index.names[:-1]).min()
# Display all rows and columns.
display_in_full(df_armcl_performance)
# #### Plot by convolution method
plot_performance(df_armcl_performance_raw, platform_id=hikey_id, groupby_level='convolution_method')
plot_performance_violin(df_armcl_performance_raw, platform_id=hikey_id, groupby_level='convolution_method')
# #### Plot by library
plot_performance(df_armcl_performance_raw, platform_id=hikey_id, groupby_level='library')
plot_performance_violin(df_armcl_performance_raw, platform_id=hikey_id, groupby_level='library')
# ### ArmCL accuracy on 500 images
df_armcl_accuracy_500_raw = get_experimental_results(repo_uoa=armcl_accuracy_500_repo_uoa,
tags='explore-mobilenets-accuracy', accuracy=True)
# Extract frame predictions.
df_armcl_predictions_500 = df_armcl_accuracy_500_raw[['frame_predictions']]
# Reduce the repetition_id index dimension (only 1 repetition anyway).
df_armcl_accuracy_500 = \
df_armcl_accuracy_500_raw[['accuracy_top1', 'accuracy_top5']] \
.groupby(level=df_armcl_accuracy_500_raw.index.names[:-1]).min()
# Display all rows and columns.
display_in_full(df_armcl_accuracy_500)
# Identical accuracy for "18.03" and "dv/dt".
(df_armcl_accuracy_500.loc[hikey_id,'armcl-18.03'] - df_armcl_accuracy_500.loc[hikey_id,'armcl-dv/dt'] == 0).all()
# Identical accuracy for "18.03" and "18.01".
(df_armcl_accuracy_500.loc[hikey_id,'armcl-18.03'] - df_armcl_accuracy_500.loc[hikey_id,'armcl-18.01'] == 0).all()
df_armcl_accuracy_500.loc[hikey_id,'armcl-18.03'] - df_armcl_accuracy_500.loc[hikey_id,'armcl-17.12']
# +
# TODO: Outline into a function for comparing ArmCL and TensorFlow predictions.
df_armcl_predictions = df_armcl_predictions_500
df_tensorflow_predictions = df_tensorflow_predictions_500
tensorflow_lib = 'tensorflow-1.7'
tensorflow_convolution_method = 'direct'
for index, row in df_armcl_predictions.iterrows():
(platform, lib, model, multiplier, resolution, batch_size, convolution_method, repetition_id) = index
# For now, only check mispredictions for '18.03' and 'v1-1.00-224'.
    if lib != 'armcl-18.03' or model != 'v1-1.00-224': continue
tensorflow_index = (platform, tensorflow_lib, model, multiplier, resolution, batch_size, tensorflow_convolution_method, repetition_id)
# Extract frame predictions.
armcl_predictions = row['frame_predictions']
tensorflow_predictions = df_tensorflow_predictions.loc[tensorflow_index]['frame_predictions']
# At the very minimum, the frame predictions should be of the same length.
if len(armcl_predictions) != len(tensorflow_predictions):
print('[Warning] ArmCL and TensorFlow predictions have different length! Skipping...')
continue
# Iterate over the frame predictions.
for (armcl_prediction, tensorflow_prediction) in zip(armcl_predictions, tensorflow_predictions):
if(armcl_prediction['accuracy_top1'] != tensorflow_prediction['accuracy_top1']):
print(index)
print('ArmCL: '+str(armcl_prediction))
print('TensorFlow: '+str(tensorflow_prediction))
print('')
# -
# ### ArmCL accuracy on 50,000 images
df_armcl_accuracy_50000_raw = get_experimental_results(repo_uoa=armcl_accuracy_50000_repo_uoa,
tags='explore-mobilenets-accuracy', accuracy=True)
# Extract frame predictions.
df_armcl_predictions_50000 = df_armcl_accuracy_50000_raw[['frame_predictions']]
# Reduce the repetition_id index dimension (only 1 repetition anyway).
df_armcl_accuracy_50000 = \
df_armcl_accuracy_50000_raw[['accuracy_top1', 'accuracy_top5']] \
.groupby(level=df_armcl_accuracy_50000_raw.index.names[:-1]).min()
# Display all rows and columns.
display_in_full(df_armcl_accuracy_50000)
# ### Plot top 1 accuracy on 50,000 images (using the 'dv/dt' fork as reference lib) vs. performance
accuracy_metric = 'accuracy_top1'
df_armcl_performance_accuracy_50000 = merge_performance_accuracy(df_armcl_performance, df_armcl_accuracy_50000,
reference_lib='armcl-dv/dt',
reference_convolution_method='direct')
display_in_full(df_armcl_performance_accuracy_50000)
# #### Only "18.03"
plot(df_armcl_performance_accuracy_50000, libs=['armcl-18.03'], accuracy_metric=accuracy_metric,
save_fig_name='%s-%s-50000-18_03' % (hikey_id, accuracy_metric+'_'))
# #### "dv/dt" vs. "18.03"
plot(df_armcl_performance_accuracy_50000, libs=['armcl-18.03','armcl-dv/dt'], accuracy_metric=accuracy_metric,
save_fig_name='%s-%s-50000-dv_dt__18_03' % (hikey_id, accuracy_metric+'_'))
# #### "dv/dt" vs. "18.03" vs. "18.01" vs. "17.12"
plot(df_armcl_performance_accuracy_50000, accuracy_metric=accuracy_metric,
save_fig_name='%s-%s-50000-dv_dt__18_03__18_01__17_12' % (hikey_id, accuracy_metric+'_'))
# ### Plot top 1 accuracy on 500 images (using the 'dv/dt' fork as the reference lib) vs. performance
df_armcl_performance_accuracy_500 = merge_performance_accuracy(df_armcl_performance, df_armcl_accuracy_500,
reference_lib='armcl-dv/dt',
reference_convolution_method='direct')
display_in_full(df_armcl_performance_accuracy_500)
# #### "dv/dt" vs. "18.03" vs. "18.01" vs. "17.12"
plot(df_armcl_performance_accuracy_500, accuracy_metric=accuracy_metric,
save_fig_name='%s-%s-500-dv_dt__18_03__18_01__17_12' % (hikey_id, accuracy_metric+'_'))
# ### Plot top 1 accuracy on 500 images vs. performance
df_armcl_performance_accuracy_500 = merge_performance_accuracy(df_armcl_performance, df_armcl_accuracy_500)
display_in_full(df_armcl_performance_accuracy_500)
# #### "dv/dt" vs. "18.03" vs. "18.01" vs. "17.12"
plot(df_armcl_performance_accuracy_500, accuracy_metric=accuracy_metric,
save_fig_name='%s-%s-500-dv_dt__18_03__18_01__17_12' % (hikey_id, accuracy_metric))
# <a id="experiments_armcl_tensorflow_hikey"></a>
# ## ArmCL vs. TensorFlow on HiKey
df_accuracy_50000 = pd.DataFrame(
data=[
df_armcl_accuracy_50000['accuracy_top1'].values,
df_tensorflow_accuracy_50000['accuracy_top1'].values,
df_tensorflow_accuracy_50000_claimed['accuracy_top1'].values,
],
index=[
'ArmCL 18.03 (measured)',
'TensorFlow 1.7 (measured)',
'TensorFlow 1.x (claimed)',
],
columns=df_tensorflow_accuracy_50000_claimed.index.get_level_values(level='model').values
).T.sort_index(ascending=False)
# df_accuracy_50000.index.name = 'model'
if save_tab:
save_tab_name = 'accuracy_top1-50000'
save_tab_path = os.path.join(save_tab_dir, '%s.%s' % (save_tab_name, save_tab_ext))
with open(save_tab_path, 'w') as f: f.write(df_accuracy_50000.to_latex())
display_in_full(df_accuracy_50000)
df_performance = pd.concat([df_armcl_performance, df_tensorflow_performance])
# ### Plot top 1 accuracy on 500 images vs. performance
df_accuracy_500 = pd.concat([df_armcl_accuracy_500, df_tensorflow_accuracy_500])
df_performance_accuracy_500 = merge_performance_accuracy(df_performance, df_accuracy_500)
plot(df_performance_accuracy_500, accuracy_metric=accuracy_metric, save_fig=True,
save_fig_name='%s-%s-500-dv_dt__18_03__18_01__17_12__tf' % (hikey_id, accuracy_metric))
# ### Plot top 1 accuracy on 50,000 images vs. performance
df_accuracy_50000 = pd.concat([df_armcl_accuracy_50000, df_tensorflow_accuracy_50000])
df_performance_accuracy_50000 = merge_performance_accuracy(df_performance, df_accuracy_50000,
reference_lib='armcl-dv/dt',
reference_convolution_method='direct')
# #### "dv/dt" vs. "18.03" vs. "18.01" vs. "17.12" vs. "tf"
plot(df_performance_accuracy_50000, accuracy_metric=accuracy_metric, save_fig=True,
save_fig_name='%s-%s-50000-dv_dt__18_03__18_01__17_12__tf' % (hikey_id, accuracy_metric+'_'))
# #### "dv/dt" vs. "18.03" vs. "17.12" vs. "tf" (no "18.01")
plot(df_performance_accuracy_50000, libs=['armcl-17.12','armcl-18.03','armcl-dv/dt','tensorflow-1.7'],
accuracy_metric=accuracy_metric,
save_fig_name='%s-%s-50000-dv_dt__18_03__17_12__tf' % (hikey_id, accuracy_metric))
# <a id="experiments_tensorflow_firefly"></a>
# ## TensorFlow experiments on Firefly
# ### TensorFlow accuracy on 500 images
df_firefly_tensorflow_accuracy_500_raw = get_experimental_results(repo_uoa=firefly_tensorflow_accuracy_500_repo_uoa,
tags='explore-mobilenets-accuracy', accuracy=True)
# Extract frame predictions.
df_firefly_tensorflow_predictions_500 = df_firefly_tensorflow_accuracy_500_raw[['frame_predictions']]
# Reduce the repetition_id index dimension (only 1 repetition anyway).
df_firefly_tensorflow_accuracy_500 = \
df_firefly_tensorflow_accuracy_500_raw[['accuracy_top1', 'accuracy_top5']] \
.groupby(level=df_firefly_tensorflow_accuracy_500_raw.index.names[:-1]).min()
# Display all rows and columns.
display_in_full(df_firefly_tensorflow_accuracy_500)
# Check whether TensorFlow accuracy on Firefly is the same as on HiKey. (It's not!)
df_firefly_tensorflow_accuracy_500.loc[firefly_id].loc['tensorflow-1.7'] - \
df_tensorflow_accuracy_500.loc[hikey_id].loc['tensorflow-1.7']
# ### TensorFlow performance (latency)
df_firefly_tensorflow_performance_raw = get_experimental_results(repo_uoa=firefly_tensorflow_performance_repo_uoa,
tags='explore-mobilenets-performance', accuracy=False)
# Take the minimum execution time out of several repetitions.
df_firefly_tensorflow_performance_min = \
df_firefly_tensorflow_performance_raw.groupby(level=df_firefly_tensorflow_performance_raw.index.names[:-1]).min()
# Display all rows and columns.
# display_in_full(df_firefly_tensorflow_performance_min)
# Take the maximum execution time out of several repetitions.
df_firefly_tensorflow_performance_max = \
df_firefly_tensorflow_performance_raw.groupby(level=df_firefly_tensorflow_performance_raw.index.names[:-1]).max()
# Set 'convolution_method' to 'gemm' for all rows to reuse the available plotting functionality.
df_firefly_tensorflow_performance_max.index = \
df_firefly_tensorflow_performance_max.index \
.set_levels(levels=['gemm']*df_firefly_tensorflow_performance_max.index.size, level='convolution_method')
# Display all rows and columns.
# display_in_full(df_firefly_tensorflow_performance_max)
# #### "tf-1.7" vs "tf-1.8" (min/max)
df_firefly_tensorflow_performance = pd.concat([df_firefly_tensorflow_performance_min, df_firefly_tensorflow_performance_max])
# TODO: Use df_firefly_armcl_accuracy_500 and df_firefly_tensorflow_accuracy_500.
df_accuracy_500 = pd.concat([df_armcl_accuracy_500, df_tensorflow_accuracy_500])
df_firefly_performance_accuracy_500 = merge_performance_accuracy(df_firefly_tensorflow_performance, df_accuracy_500,
reference_platform=hikey_id,
reference_lib='tensorflow-1.7')
plot(df_firefly_performance_accuracy_500, accuracy_metric=accuracy_metric, platform_id=firefly_id,
xmin=0, xmax=150.1, xstep=10, save_fig_name='%s-%s-500-tf-min_max' % (firefly_id, accuracy_metric))
# +
# plot(df_firefly_performance_accuracy_500, platform_id=firefly_id, title=firefly_name,
# xmin=10., xmax=190.1, xstep=10, ymin=0.35, ymax=.801,
# accuracy_metric=accuracy_metric, save_fig_name='%s-%s-500-tf-min_max-complete' % (firefly_id, accuracy_metric))
# -
# #### "tf-1.7" vs "tf-1.8"
df_firefly_performance = df_firefly_tensorflow_performance_min
df_firefly_accuracy_500 = df_firefly_tensorflow_accuracy_500
df_firefly_performance_accuracy_500 = merge_performance_accuracy(df_firefly_performance, df_firefly_accuracy_500)
plot(df_firefly_performance_accuracy_500, platform_id=firefly_id, title=firefly_name, xmin=0, xmax=150.1, xstep=10,
accuracy_metric=accuracy_metric, save_fig_name='%s-%s-500-tf' % (firefly_id, accuracy_metric))
# #### Plot by convolution method
plot_performance(df_firefly_tensorflow_performance_raw, platform_id=firefly_id, groupby_level='convolution_method')
plot_performance_violin(df_firefly_tensorflow_performance_raw, platform_id=firefly_id, groupby_level='convolution_method')
# #### Plot by library
plot_performance(df_firefly_tensorflow_performance_raw, platform_id=firefly_id, groupby_level='library')
plot_performance_violin(df_firefly_tensorflow_performance_raw, platform_id=firefly_id, groupby_level='library')
# <a id="experiments_armcl_firefly"></a>
# ## ArmCL experiments on Firefly
# ### ArmCL performance (latency)
df_firefly_armcl_performance_raw = get_experimental_results(repo_uoa=firefly_armcl_performance_repo_uoa,
tags='explore-mobilenets-performance', accuracy=False)
# Take the minimum execution time out of several repetitions.
df_firefly_armcl_performance = \
    df_firefly_armcl_performance_raw.groupby(level=df_firefly_armcl_performance_raw.index.names[:-1]).min()
# Display all rows and columns.
display_in_full(df_firefly_armcl_performance)
# #### Plot by convolution method
plot_performance(df_firefly_armcl_performance_raw, platform_id=firefly_id, groupby_level='convolution_method')
plot_performance_violin(df_firefly_armcl_performance_raw, platform_id=firefly_id, groupby_level='convolution_method')
# #### Plot by library
plot_performance(df_firefly_armcl_performance_raw, platform_id=firefly_id, groupby_level='library')
plot_performance_violin(df_firefly_armcl_performance_raw, platform_id=firefly_id, groupby_level='library')
# ### ArmCL accuracy on 500 images
df_firefly_armcl_accuracy_500_raw = get_experimental_results(repo_uoa=firefly_armcl_accuracy_500_repo_uoa,
tags='explore-mobilenets-accuracy', accuracy=True)
# Extract frame predictions.
# df_firefly_armcl_predictions_500 = df_firefly_armcl_accuracy_500_raw[['frame_predictions']]
# Reduce the repetition_id index dimension (only 1 repetition anyway).
df_firefly_armcl_accuracy_500 = \
df_firefly_armcl_accuracy_500_raw[['accuracy_top1', 'accuracy_top5']] \
.groupby(level=df_firefly_armcl_accuracy_500_raw.index.names[:-1]).min()
# Display all rows and columns.
display_in_full(df_firefly_armcl_accuracy_500)
# Identical accuracy for "18.03" and "18.01".
(df_firefly_armcl_accuracy_500.loc[firefly_id,'armcl-18.03'] - df_firefly_armcl_accuracy_500.loc[firefly_id,'armcl-18.01'] == 0).all()
# Non-identical accuracy for "18.03" and "17.12".
df_firefly_armcl_accuracy_500.loc[firefly_id,'armcl-18.03'] - df_firefly_armcl_accuracy_500.loc[firefly_id,'armcl-17.12']
# Non-identical accuracy for "18.03" on HiKey and "18.03" on Firefly.
df_armcl_accuracy_500.loc[hikey_id,'armcl-18.03'] - df_firefly_armcl_accuracy_500.loc[firefly_id,'armcl-18.03']
df_firefly_armcl_performance_accuracy_500 = merge_performance_accuracy(df_firefly_armcl_performance, df_firefly_armcl_accuracy_500)
display_in_full(df_firefly_armcl_performance_accuracy_500)
# ### Plot top 1 accuracy on 500 images vs. performance
plot(df_firefly_armcl_performance_accuracy_500, accuracy_metric=accuracy_metric, platform_id=firefly_id,
xmin=0, xmax=150.1, xstep=10, save_fig_name='%s-%s-500-18_03__18_01__17_12' % (firefly_id, accuracy_metric))
# <a id="experiments_armcl_tensorflow_firefly"></a>
# ## ArmCL vs. TensorFlow on Firefly
df_firefly_tensorflow_performance = df_firefly_tensorflow_performance_min.loc[firefly_id,['tensorflow-1.7'],:]
df_firefly_accuracy_500 = pd.concat([df_firefly_armcl_accuracy_500, df_firefly_tensorflow_accuracy_500])
df_firefly_performance = pd.concat([df_firefly_armcl_performance, df_firefly_tensorflow_performance])
df_firefly_performance_accuracy_500 = merge_performance_accuracy(df_firefly_performance, df_firefly_accuracy_500)
plot(df_firefly_performance_accuracy_500, accuracy_metric=accuracy_metric, platform_id=firefly_id, save_fig=True,
xmin=0, xmax=150.1, xstep=10, save_fig_name='%s-%s-500-18_03__18_01__17_12__tf' % (firefly_id, accuracy_metric))
# ## New experiments
# #### 20180709 - 178 MHz
df_armcl_performance_178mhz = get_experimental_results(repo_uoa='ck-request-asplos18-mobilenets-armcl-opencl-20180709-178mhz',
tags='explore-mobilenets-performance', accuracy=False)
plot_performance_violin(df_armcl_performance_178mhz, platform_id=hikey_id, gpu_mhz=178, groupby_level='convolution_method')
plot_performance_violin(df_armcl_performance_178mhz, platform_id=hikey_id, gpu_mhz=178, groupby_level='library')
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Model Training
# +
import sys
import warnings
import pandas as pd
import numpy as np
import joblib
from tpot import TPOTRegressor
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.base import clone
sys.path.append("../../")
warnings.filterwarnings('ignore')
pd.set_option("display.max_columns", None)
# -
# ## Load Data
df = pd.read_csv("../../data/insurance.csv")
df.head()
# ## Create Training and Test Sets
# +
mask = np.random.rand(len(df)) < 0.8
training_set = df[mask]
testing_set = df[~mask]
print(training_set.shape)
print(testing_set.shape)
# -
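# An alternative split (a sketch using the already-imported `train_test_split`): fixing
# `random_state` makes an 80/20 split reproducible across runs, unlike the random mask above.
# +
alt_training_set, alt_testing_set = train_test_split(df, test_size=0.2, random_state=42)
print(alt_training_set.shape)
print(alt_testing_set.shape)
# -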
# save training and test sets to be used later
training_set.to_csv("../../data/training_set.csv")
testing_set.to_csv("../../data/testing_set.csv")
# +
# separating the feature columns from the target column
feature_columns = ["age", "sex", "bmi", "children", "smoker", "region"]
target_column = "charges"
X_train = training_set[feature_columns]
y_train = training_set[target_column]
X_test = testing_set[feature_columns]
y_test = testing_set[target_column]
# -
# ## Apply the Preprocessing
#
# loading the preprocessing pipeline we built in the previous notebook
transformer = joblib.load("transformer.joblib")
# +
# applying the column transformer
features = transformer.fit_transform(X_train)
features
# -
# ## Find an Optimal Pipeline
tpot_regressor = TPOTRegressor(generations=50,
population_size=50,
random_state=42,
cv=5,
n_jobs=8,
verbosity=2,
early_stop=10)
tpot_regressor = tpot_regressor.fit(features, y_train)
# ## Create Pipeline
#
# Now that we have an optimal pipeline created by TPOT we will be adding our own preprocessors to it. To do this we'll need to have an unfitted pipeline object.
#
# To create an unfitted pipeline from the fitted pipeline that we already have, we'll clone the pipeline object:
# +
unfitted_tpot_regressor = clone(tpot_regressor.fitted_pipeline_)
unfitted_tpot_regressor
# -
# Now that we can build the same pipeline that was found by the TPOT package, we'll add our own preprocessors to the pipeline. This will ensure that the final pipeline will accept the features in the original dataset and will process the features correctly.
#
# We'll compose the preprocessing pipeline and the tpot pipeline into one pipeline:
model = Pipeline([
("transformer", transformer),
("tpot_pipeline", unfitted_tpot_regressor)
])
# ## Train Model
model.fit(X_train, y_train)
# ## Test Model With Single Sample
# +
# testing the ColumnTransformer
test_df = pd.DataFrame([[65, "male", 12.5, 0, "yes", "southwest"]],
columns=["age", "sex", "bmi", "children", "smoker", "region"])
result = model.predict(test_df)
result
# -
# ## Save Model
joblib.dump(model, "model.joblib")
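# As a quick sanity check (a small addition), the saved pipeline can be reloaded and should
# reproduce the prediction for the single-sample test frame above.
# +
reloaded_model = joblib.load("model.joblib")
print(reloaded_model.predict(test_df))
# -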
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## Import libraries
import numpy as np
import cv2 as cv
import csv
import time
import tensorflow as tf
from tensorflow.python.keras.models import load_model
# ## Setting capture window parameters and loading labels
# +
#Window Parameters.
frameWidth=640
frameHeight=480
brightness= 180
#Setting threshold limit(tunable).
threshold= 0.95
font= cv.FONT_HERSHEY_SIMPLEX
# Loading class labels into dictionary.
with open('labels.csv', mode='r') as infile:
reader = csv.reader(infile)
mydict = {rows[0]:rows[1] for rows in reader}
# -
# ## Loading our model
model=load_model("tsc_model.h5")
# ## Preprocessing of frame
# +
def preprocess(img):
img = cv.resize(img,(32,32))
#Converting to grayscale.
img = cv.cvtColor(img,cv.COLOR_BGR2GRAY)
img = cv.equalizeHist(img)
#Normalize image.
img=img/255
# Reshaping image to (1,32,32,1).
img=img[np.newaxis,:,:,np.newaxis]
return img
# -
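# Quick shape check (using a synthetic frame rather than camera input): the preprocessed
# output should have shape (1, 32, 32, 1), which is what the model expects.
# +
dummy_frame = np.random.randint(0, 256, size=(frameHeight, frameWidth, 3), dtype=np.uint8)
print(preprocess(dummy_frame).shape)
# -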
# ## Importing OpenVINO libraries for optimization of model
import keras2onnx
import onnx
import onnxruntime
# ## Converting model to ONNX format
# +
# Convert to onnx model.
onnx_model = keras2onnx.convert_keras(model, model.name)
temp_model_file = 'tsc.onnx'
#Saving the model.
keras2onnx.save_model(onnx_model, temp_model_file)
sess = onnxruntime.InferenceSession(temp_model_file)
# -
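# Optional check (a sketch): run one synthetic frame through the ONNX Runtime session to
# confirm the exported graph produces an output of the expected shape.
# +
onnx_input_name = sess.get_inputs()[0].name
sample = preprocess(np.random.randint(0, 256, size=(frameHeight, frameWidth, 3), dtype=np.uint8)).astype(np.float32)
print(sess.run(None, {onnx_input_name: sample})[0].shape)
# -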
# ## Importing Inference Engine libraries
from openvino.inference_engine import IECore
from openvino.inference_engine import IENetwork
# ## Using optimized model from OpenVINO to generate output
# +
#Loading model for generating inference
def load_to_IE(model):
ie = IECore()
net = ie.read_network(model=r"C:\Program Files (x86)\Intel\openvino_2021.2.185\deployment_tools\model_optimizer\tsc.xml", weights=r"C:\Program Files (x86)\Intel\openvino_2021.2.185\deployment_tools\model_optimizer\tsc.bin")
exec_net = ie.load_network(network=net, device_name="CPU")
return exec_net
def do_inference(exec_net, image):
input_blob = next(iter(exec_net.inputs))
return exec_net.infer({input_blob: image})
# load our model
tsc_net = load_to_IE("tsc_model.h5")
# We need dynamically generated key for fetching output tensor
tsc_outputs = list(tsc_net.outputs.keys())
# Taking video input from camera.
cap= cv.VideoCapture(0)
prev_frame_time = 0
new_frame_time = 0
cap.set(3,frameWidth)
cap.set(4,frameHeight)
cap.set(10,brightness)
while True:
# If videocapture fails, exit else continue.
ret, image = cap.read()
if not ret:
break
#Taking video frame and converting into numpy array.
img = np.asarray(image)
# Preprocessing frame by calling method.
img = preprocess(img)
new_frame_time = time.time()
fps = 1/(new_frame_time-prev_frame_time)
prev_frame_time = new_frame_time
# converting the fps into integer
fps = int(fps)
# Formatting for output display text.
cv.putText(image, "CLASS: ",(20,35),font,0.75,(0,0,255),2,cv.LINE_AA)
cv.putText(image, "PROBABILITY: ",(20,75),font,0.75,(255,0,0),2,cv.LINE_AA)
cv.putText(image, "FPS: ", (20,115), font, 0.75, (0, 255, 0), 2, cv.LINE_AA)
# Inference
output = do_inference(tsc_net, image=img)
# Storing label with maximum probability and probability score.
classIndex = np.argmax(output[tsc_outputs[0]],axis=1)
confidence = np.amax(output[tsc_outputs[0]])
print(classIndex, confidence)
# If probability score satisfies threshold limit, show output.
if confidence>threshold:
cv.putText(image, str(classIndex)+" "+mydict[str(max(classIndex))],(120,35),font,0.75,(0,0,255),2,cv.LINE_AA)
cv.putText(image, str(round(confidence,2)),(180,75),font,0.75,(255,0,0),2,cv.LINE_AA)
cv.putText(image, str(fps),(180,115),font,0.75,(0,255,0),2,cv.LINE_AA)
cv.imshow("Result", image)
    # Poll the keyboard; exit the loop when 'q' is pressed.
    k = cv.waitKey(1) & 0xFF
    if k == ord('q'):
        break
cap.release()
cv.destroyAllWindows()
# -
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [conda env:cs528p2]
# language: python
# name: conda-env-cs528p2-py
# ---
import numpy as np
import numpy.linalg as la
# Generate random data
data = np.arange(3000)
np.random.shuffle(data)
data = data.reshape(-1, 4)
print(data.shape)
# Set number of seeds, `k`
k = 4
# Pick `k` initial seeds
# +
seed_ind = np.random.choice(np.arange(data.shape[0]), k, replace=False)
print(f"Seeds: {seed_ind}")
print("Seeds:")
seeds = data[seed_ind, :]
print(seeds)
# -
# Calculate distance from each sample to each seed
def distances(data, seeds):
ds = [[la.norm(s - d) for s in seeds] for d in data]
return np.array(ds)
# For each datum in `data` find the seed to which it was the closest
def make_labels(data, seeds):
ds = distances(data, seeds)
labels = np.argmin(ds, axis=1)
return labels
# Calculate the new center of each cluster by averaging the seeds
def new_seeds(data, labels, k):
inds = [np.argwhere(labels == _) for _ in range(k)]
new_seeds = [np.mean(data[i, :], axis=0) for i in inds]
return np.array(new_seeds)
# Calculate new seeds until convergence
labels = make_labels(data, seeds)
seeds = new_seeds(data, labels, k)
conv_count = 0
old_seeds = np.zeros_like(seeds)
while not(np.allclose(seeds, old_seeds)):
conv_count += 1
old_seeds = np.copy(seeds)
labels = make_labels(data, seeds)
seeds = new_seeds(data, labels, k)
# I will define goodness of clustering as the sum of the squared distance of each sample to its seed
def error(data, seeds, labels):
e = 0
for _ in range(labels.shape[0]):
l = labels[_]
e += la.norm(data[_] - seeds[l])**2
return e
error(data, seeds, labels)
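# Cross-check (assuming scikit-learn is available in this environment): `KMeans.inertia_` is the
# same sum-of-squared-distances criterion as `error()` above, up to the random initialization.
# +
from sklearn.cluster import KMeans
km = KMeans(n_clusters=k, n_init=10).fit(data)
print(km.inertia_)
# -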
a = np.array([3, 3])
b = np.array([1, 1])
la.norm(a-b)
np.sqrt(np.sum((a - b) ** 2))
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Final Project Required Coding Activity
# Introduction to Python (Unit 2) Fundamentals
#
# All course .ipynb Jupyter Notebooks are available from the project files download topic in Module 1, Section 1.
#
# This activity is based on modules 1 - 4 and is similar to exercises in the Jupyter Notebooks **`Practice_MOD03_IntroPy.ipynb`** and **`Practice_MOD04_IntroPy.ipynb`** which you may have completed as practice.
#
# | **Assignment Requirements** |
# |:-------------------------------|
# |This program requires the use of **`print`** output and use of **`input`**, **`for`**/**`in`** loop, **`if`**, file **`open`**, **`.readline`**, **`.append`**, **`.strip`**, **`len`**. and function **`def`** and **`return`**. The code should also consider using most of the following (`.upper()` or `.lower()`, `.title()`, `print("hello",end="")` `else`, `elif`, `range()`, `while`, `.close()`) |
#
#
# ## Program: Element_Quiz
# In this program the user enters the name of any 5 of the first 20 Atomic Elements and is given a grade and test report for items correct and incorrect.
#
#
# ### Sample input and output:
# ```
# list any 5 of the first 20 elements in the Period table
# Enter the name of an element: argon
# Enter the name of an element: chlorine
# Enter the name of an element: sodium
# Enter the name of an element: argon
# argon was already entered <--no duplicates allowed
# Enter the name of an element: helium
# Enter the name of an element: gold
#
# 80 % correct
# Found: Argon Chlorine Sodium Helium
# Not Found: Gold
# ```
#
#
# ### Create get_names() Function to collect input of 5 unique element names
#
# - The function accepts no arguments and returns a list of 5 input strings (element names)
# - define a list to hold the input
# - collect input of a element name
# - if input it is **not** already in the list add the input to the list
# - don't allow empty strings as input
# - once 5 unique inputs **return** the list
#
#
# ### Create the Program flow
#
# #### import the file into the Jupyter Notebook environment
#
# - use `!curl` to download https://raw.githubusercontent.com/MicrosoftLearning/intropython/master/elements1_20.txt as `elements1_20.txt`
# - open the file with the first 20 elements
# - read one line at a time to get element names, remove any whitespace (spaces, newlines) and save each element name, as lowercase, into a list
#
#
# #### Call the get_names() function
#
# - the return value will be the quiz responses list
#
# #### check if responses are in the list of elements
# Iterate through 5 responses
# - compare each response to the list of 20 elements
# - any response that is in the list of 20 elements is correct and should be added to a list of correct responses
# - if not in the list of 20 elements then add to a list of incorrect responses
#
# #### calculate the % correct
#
# - find the number of items in the correct responses and divide by 5; this will result in answers like 1.0, .8, .6, ...
# - to get the %, multiply the calculated answer above by 100; this will result in answers like 100, 80, 60, ...
# - *hint: instead of dividing by 5 and then multiplying by 100, the number of correct responses can be multiplied by 20*
#
# #### Print output
#
# - print the Score % right
# - print each of the correct responses
# - print each of the incorrect responses
#
#
# ### create Element_Quiz
#
#
# +
# [] create Element_Quiz
# [] copy and paste in edX assignment page
# -
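# A minimal sketch of the `get_names()` function described above (one possible approach,
# not the only acceptable solution):
# +
def get_names():
    """Collect 5 unique, non-empty element names from user input."""
    names = []
    while len(names) < 5:
        entry = input("Enter the name of an element: ").strip().lower()
        if not entry:
            continue
        if entry in names:
            print(entry, "was already entered")
        else:
            names.append(entry)
    return names
# -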
# Submit this by creating a Python file (.py) and submitting it in D2L. Be sure to test that it works. Note that for this to work correctly in plain Python rather than Jupyter, you would need to replace `!curl`: for example, import the `os` module and call `os.system(cmd)` with your shell command in the `cmd` variable.
#
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## Positional PRC
#
# Window size = length of PWM
#
# Window overlap with motif threshold for positive = 0.8
#
# Window stride = 1
#
# Making sure our results are reproducible
from numpy.random import seed
seed(1234)
from tensorflow import set_random_seed
set_random_seed(1234)
# +
#load dragonn tutorial utilities
# %reload_ext autoreload
# %autoreload 2
# %matplotlib inline
import warnings
warnings.filterwarnings('ignore')
from dragonn.tutorial_utils import *
from dragonn.utils import *
from dragonn.positional_prc import *
from dragonn.callbacks import *
#To prepare for model training, we import the necessary functions and submodules from keras
from keras.models import Sequential
from keras.layers.core import Dropout, Reshape, Dense, Activation, Flatten
from keras.layers.convolutional import Conv2D, MaxPooling2D
from keras.optimizers import Adadelta, SGD, RMSprop;
import keras.losses;
from keras.constraints import maxnorm;
from keras.layers.normalization import BatchNormalization
from keras.regularizers import l1, l2
from keras.callbacks import EarlyStopping, History
from keras import backend as K
K.set_image_data_format('channels_last')
# +
## Simulate data
motif_density_localization_simulation_parameters = {
"motif_name": "TAL1_known4",
"seq_length": 1500,
"center_size": 150,
"min_motif_counts": 2,
"max_motif_counts": 4,
"num_pos": 3000,
"num_neg": 3000,
"GC_fraction": 0.4}
simulation_data = get_simulation_data("simulate_motif_density_localization",
motif_density_localization_simulation_parameters,
validation_set_size=1000, test_set_size=1000)
# +
#We define a custom callback to print training and validation metrics while training.
metrics_callback=MetricsCallback(train_data=(simulation_data.X_train,simulation_data.y_train),
validation_data=(simulation_data.X_valid,simulation_data.y_valid))
#Define the model architecture in keras
regularized_keras_model=Sequential()
regularized_keras_model.add(Conv2D(filters=15,kernel_size=(1,10),input_shape=simulation_data.X_train.shape[1::]))
regularized_keras_model.add(Activation('relu'))
regularized_keras_model.add(Dropout(0.2))
regularized_keras_model.add(MaxPooling2D(pool_size=(1,35)))
regularized_keras_model.add(Conv2D(filters=15,kernel_size=(1,10),input_shape=simulation_data.X_train.shape[1::]))
regularized_keras_model.add(Activation('relu'))
regularized_keras_model.add(Dropout(0.2))
regularized_keras_model.add(Conv2D(filters=15,kernel_size=(1,10),input_shape=simulation_data.X_train.shape[1::]))
regularized_keras_model.add(Activation('relu'))
regularized_keras_model.add(Dropout(0.2))
regularized_keras_model.add(Flatten())
regularized_keras_model.add(Dense(1))
regularized_keras_model.add(Activation("sigmoid"))
##compile the model, specifying the Adam optimizer, and binary cross-entropy loss.
regularized_keras_model.compile(optimizer='adam',
loss='binary_crossentropy')
# -
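# Print a layer-by-layer summary of the network (a small addition for quick inspection).
regularized_keras_model.summary()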
## use the keras fit function to train the model for 150 epochs with early stopping after 3 epochs
history=regularized_keras_model.fit(x=simulation_data.X_train,
y=simulation_data.y_train,
batch_size=128,
epochs=150,
verbose=0,
callbacks=[EarlyStopping(patience=3),
History(),
metrics_callback],
validation_data=(simulation_data.X_valid,
simulation_data.y_valid))
## Use the keras predict function to get model predictions on held-out test set.
test_predictions=regularized_keras_model.predict(simulation_data.X_test)
## Generate a ClassificationResult object to print performance metrics on held-out test set
print(ClassificationResult(simulation_data.y_test,test_predictions))
## Visualize the model's performance
plot_learning_curve(history)
#get the indices of the positive examples -- we want to do a separate interpretation for positive examples only
pos_indx=np.flatnonzero(simulation_data.y_valid==1)
pos_X=simulation_data.X_valid[pos_indx]
# ### Motif scores (all)
motif_scores=get_motif_scores(simulation_data.X_valid,simulation_data.motif_names)
motif_score_posPRC=positionalPRC(simulation_data.valid_embeddings,motif_scores)
plot_positionalPRC(motif_score_posPRC)
# ### Motif scores (positives)
motif_score_posPRC=positionalPRC([simulation_data.valid_embeddings[i] for i in pos_indx],motif_scores[pos_indx])
plot_positionalPRC(motif_score_posPRC)
# ### In silico mutagenesis (all)
from dragonn.tutorial_utils import in_silico_mutagenesis, plot_ism
ism_vals=in_silico_mutagenesis(regularized_keras_model,simulation_data.X_valid)
#take the maximum along the third axis of the absolute values -- we care only about magnitude of change
ism_collapsed=np.max(abs(ism_vals),axis=2)
print(ism_collapsed.shape)
# +
ism_posPRC=positionalPRC(simulation_data.valid_embeddings,ism_collapsed)
plot_positionalPRC(ism_posPRC)
# -
# ### In silico mutagenesis (positives)
ism_posPRC=positionalPRC([simulation_data.valid_embeddings[i] for i in pos_indx],ism_collapsed[pos_indx])
plot_positionalPRC(ism_posPRC)
# ### Gradient x Input (all)
from dragonn.tutorial_utils import input_grad,plot_seq_importance
gradxinput=input_grad(regularized_keras_model,simulation_data.X_valid)*simulation_data.X_valid
gradxinput=np.squeeze(gradxinput)
gradxinput_collapsed=np.max(gradxinput,axis=2)
print(gradxinput_collapsed.shape)
gradxinput_PRC=positionalPRC(simulation_data.valid_embeddings,gradxinput_collapsed)
plot_positionalPRC(gradxinput_PRC)
# ### Gradient x Input (positives)
gradxinput_PRC=positionalPRC([simulation_data.valid_embeddings[i] for i in pos_indx],gradxinput_collapsed[pos_indx])
plot_positionalPRC(gradxinput_PRC)
# ### DeepLIFT (all)
from dragonn.tutorial_utils import deeplift
dl=deeplift(regularized_keras_model,simulation_data.X_valid)
dl=dl.squeeze()
dl_collapsed=np.max(dl,axis=2)
print(dl_collapsed.shape)
dl_PRC=positionalPRC(simulation_data.valid_embeddings,dl_collapsed)
plot_positionalPRC(dl_PRC)
# ### DeepLIFT (positives)
dl_PRC=positionalPRC([simulation_data.valid_embeddings[i] for i in pos_indx],dl_collapsed[pos_indx])
plot_positionalPRC(dl_PRC)
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ### Using Documentation
#
# Let's start by using the available documentation to pick a hardware injection we want to recover. From the S5 CBC hardware injection page, follow the link to the table of injections for H1. We'll pick an injection with a relatively high SNR for the tutorial. Scroll down until you see GPS time 817064899. You should see a line in the table that looks like this:
#
# 817064899 H1 10 10 25 Successful 28.16 26.55
#
# ### Getting Data
#
# If you do not already know how to download and read a LIGO data file, you may want to start with the Introduction to LIGO Data Files. As a reminder, to download this data file, follow the menu link to Data & Catalogs to find the S5 Data Archive.
# - Navigate to the [S5 Data Archive](https://losc.ligo.org/archive/S5)
# - Select the H1 instrument
# - Input the injection time as both the start and end of your query.
# - Click submit
#
# This should return a result with only the data file containing the injection.
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.mlab as mlab
import readligo as rl
# %matplotlib inline
strain, time, dq = rl.loaddata('H-H1_LOSC_4_V1-817061888-4096.hdf5')
dt = time[1] - time[0]
fs = 1.0 / dt
print(dq.keys())
plt.figure(figsize=(16,12))
plt.plot(dq['HW_CBC'] + 2, label='HW_CBC')
plt.plot(dq['DEFAULT'], label='Good Data')
plt.xlabel('Time since ' + str(time[0]) + ' (s)')
plt.axis([0, 4096, -1, 6])
plt.legend()
plt.show()
# Notice that all the time around the injection passes the default data quality flags (DEFAULT is 1), so we should have no problems analyzing this time.
#
# Finally, let's grab the segment of data containing the injection. We'll also identify a segment of data just before the injection, to use for a PSD estimation. We'll call this the "noise" segment, and it will be 8 times as long as the segment containing the injection.
# +
# -- Get the injection segment
inj_slice = rl.dq_channel_to_seglist(dq['HW_CBC'])[0]
inj_data = strain[inj_slice]
inj_time = time[inj_slice]
# -- Get the noise segment
noise_slice = slice(inj_slice.start-8*len(inj_data), inj_slice.start)
noise_data = strain[noise_slice]
# -- How long is the segment?
seg_time = len(inj_data) / fs
print("The injection segment is {0} s long".format(seg_time))
# -
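# A minimal sketch of the PSD estimation mentioned above (the 4-second FFT length is an
# illustrative choice, not prescribed by the tutorial):
# +
NFFT = int(4 * fs)
psd, psd_freqs = mlab.psd(noise_data, Fs=fs, NFFT=NFFT)
plt.figure(figsize=(16, 8))
plt.loglog(psd_freqs, np.sqrt(psd))
plt.xlabel('Frequency (Hz)')
plt.ylabel('ASD (strain/rtHz)')
plt.grid()
plt.show()
# -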
# -- Make a frequency domain template
import template
temp, temp_freq = template.createTemplate(4096, seg_time, 10, 10)
# -- LIGO noise is very high below 25 Hz, so we won't search there
temp[temp_freq < 25] = 0
# Plot the template
plt.figure(figsize=(16,12))
plt.loglog(temp_freq, abs(temp))
plt.axis([10, 1000, 1e-22, 1e-19])
plt.xlabel("Frequency (Hz)")
plt.ylabel("Template value (Strain/Hz)")
plt.grid()
# -*- coding: utf-8 -*-
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .jl
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Julia 0.6.0
# language: julia
# name: julia-0.6
# ---
# +
using SymPy;
@vars xi1 xi2;
∇(f) = [diff(f, xi1); diff(f, xi2)];
# xi_2
# N3 ^ N2
# # +----|----+
# | + | + |
# | Q2| Q1|
# -------------> xi_1
# | Q3| Q4|
# | + | + |
# # +----|----+
# N4 N1
# quad points at -1/√3 and 1/√3
# shape function
𝐍 = [
0.25(1+xi1)*(1-xi2);
0.25(1+xi1)*(1+xi2);
0.25(1-xi1)*(1+xi2);
0.25(1-xi1)*(1-xi2)];
# quadrature points for numerical integration
Q = Array{Array{Float64}}(4)
Q[1] = [ 1/√3 1/√3];
Q[2] = [-1/√3 1/√3];
Q[3] = [-1/√3 -1/√3];
Q[4] = [ 1/√3 -1/√3];
# evaluate the shape functions at quadrature points
N(i,q) = 𝐍[i] |> replace(xi1, q[1]) |> replace(xi2, q[2]) |> float;
# evaluate the derivative of shape functions at quadrature points
dN(i,q) = 𝐍[i] |> ∇ |>
x -> [replace(x[1], xi1, q[1]) replace(x[2], xi1, q[1])] |>
x -> [replace(x[1], xi2, q[2]) replace(x[2], xi2, q[2])] |> float;
# shape functions in quad points
𝐍Q = [N(i,q) for i in 1:4, q in Q];
# derivative of shape functions at quad points
∇𝐍Q = [dN(i,q) for i in 1:4, q in Q];
# dyadic product
function ⊗(x::Array{Float64,2}, y::Array{Float64,2})
return [x[1]*y[1] x[1]*y[2];
x[2]*y[1] x[2]*y[2]]
end
# dot product
function ⊙(x1::Array{Float64,2}, x2::Array{Float64,2})
return x1[1]*x2[1] + x1[2]*x2[2]
end
# -
# solves Δu = ρ with Dirichlet boundary conditions using finite element method
function solve()
K = zeros(nDoF,nDoF);
b = zeros(nDoF);
    DET = zeros(4);                           # determinant of the jacobian at each quad point
    ∂x∂ξ = Array{Array{Float64}}(4);          # jacobian wrt parent coords
∂ξ∂x = Array{Array{Float64}}(4);
∂𝐍∂x = Array{Array{Float64}}(4,4); # ∂𝐍∂x[node, quad]
RANGE4 = range(1,4)
for e in elements
K_e = zeros(4,4);
b_e = zeros(4);
X = [nodes[n] for n in e] # coordinates of the element
for q in RANGE4
∂x∂ξ[q] = X[1] ⊗ ∇𝐍Q[1,q] +
X[2] ⊗ ∇𝐍Q[2,q] +
X[3] ⊗ ∇𝐍Q[3,q] +
X[4] ⊗ ∇𝐍Q[4,q]
∂ξ∂x[q] = inv(∂x∂ξ[q])
DET[q] = det(∂x∂ξ[q])
for I in RANGE4
∂𝐍∂x[I, q] = ∇𝐍Q[I,q]*∂ξ∂x[q]
end
end
# load vector and element stiffness
for I in RANGE4
for q in RANGE4
b_e[I] += 𝐍Q[I,q]*ρ*DET[q]
end
for J in RANGE4
for q in RANGE4
K_e[I,J] += ∂𝐍∂x[I,q]⊙∂𝐍∂x[J,q]*DET[q]
end
end
end
# global stiffness
for I in RANGE4
i = e[I]
b[i] += b_e[I]
for J in RANGE4
j = e[J]
K[i,j] += K_e[I,J]
end
end
end
nDBC = 0
for nodeBC in keys(DBC_nodes)
nDBC += length(DBC_nodes[nodeBC])
end
L = zeros(nDBC, length(nodes));
u = zeros(nDBC)
iCount = 1
for key in keys(DBC_nodes)
val = DBC_values[key]
for node in DBC_nodes[key]
u[iCount] = val
L[iCount, node] = 1.0
iCount += 1
end
end
#b = L'*(L*K^-1*L')^-1*u;
#x = K^-1*b
x = [K L'; L zeros(nDBC,nDBC)]\[b;u]; return x;
end
using PyCall
using PyPlot
@pyimport matplotlib.patches as patches
@pyimport matplotlib.path as Path
# Plt instead of plt so it doesn't collide with plt from PyPlot
@pyimport matplotlib.pyplot as Plt
@pyimport matplotlib as mpl
@pyimport matplotlib.cm as CM
@pyimport matplotlib.collections as mplCollections
function draw_mesh(elements, nodes, V)
PatchCollection = mplCollections.PatchCollection
fig, ax = Plt.subplots()
ax[:set_aspect]("equal")
PATCHES = []
FACE_COLOR = []
# v_center
center = [0. 0.];
n1 = N(1, center);
n2 = N(2, center);
n3 = N(3, center);
n4 = N(4, center);
v_center = [V[e[1]]*n1 + V[e[2]]*n2 + V[e[3]]*n3 + V[e[4]]*n4
for e in elements]
# color map
norm = mpl.colors[:Normalize](vmin=minimum(v_center),
vmax=maximum(v_center))
cmap = CM.rainbow
scMap = CM.ScalarMappable(norm=norm, cmap=cmap)
scMap[:set_array]([])
cbar = Plt.colorbar(scMap)
for (e,v) in zip(elements,v_center)
p = patches.Polygon([[nodes[e[1]][1] nodes[e[1]][2]];
[nodes[e[2]][1] nodes[e[2]][2]];
[nodes[e[3]][1] nodes[e[3]][2]];
[nodes[e[4]][1] nodes[e[4]][2]]], closed=true)
c = scMap[:to_rgba](v)
append!(PATCHES, [p])
append!(FACE_COLOR, [c])
end
collection = PatchCollection(PATCHES,facecolor=FACE_COLOR,lw=.1,color="gray")
ax[:set_xlim](0,6); ax[:set_ylim](0,6);
ax[:add_collection](collection)
Plt.show()
end
# +
# small mesh
nodes= Any[ [0. 5.],[1. 5.],[2. 5.],[3. 5.],[4. 5.],
[0. 4.],[1. 4.],[2. 4.],[3. 4.],[4. 4.],
[0. 3.],[1. 3.],[2. 3.],[3. 3.],[4. 3.],
[0. 2.],[1. 2.],[2. 2.],[3. 2.],[4. 2.],
[0. 1.],[1. 1.],[2. 1.],[3. 1.],[4. 1.] ];
elements= Any[ [1 6 7 2],[2 7 8 3],[3 8 9 4],[4 9 10 5],
[6 11 12 7],[7 12 13 8],[8 13 14 9],[9 14 15 10],
[11 16 17 12],[12 17 18 13],[13 18 19 14],[14 19 20 15],
[16 21 22 17],[17 22 23 18],[18 23 24 19],[19 24 25 20] ];
# BCs nodes
DBC_nodes = Dict("1" => [1],
"2" => [25]);
# node set "1" is set to 2 and node set "2" is set to 3
DBC_values = Dict("1" => 2.0, "2" => 3.0);
DBC = Dict("nodes" => DBC_nodes,
"values" => DBC_values);
ρ = 0.0
nDoF = length(nodes);
x = solve()
draw_mesh(elements, nodes,x[1:nDoF])
# -
# A fine mesh (60x60)
N_x = 61
L_x = 4
nodes = [ [i j] for i in 0:(L_x/(N_x-1)):4, j in 1:(L_x/(N_x-1)):5]
elements = [[i+N_x*(j-1) , i+1+N_x*(j-1),
i+1+N_x+N_x*(j-1), i+N_x+N_x*(j-1)] for i in 1:(N_x-1),
j in 1:(N_x-1)]
nDoF = length(nodes);
# +
# BCs 2 parallel edges are prescribed, plus zero heating (ρ=0)
DBC_nodes = Dict("1" => 1:N_x,
"2" => N_x*(N_x-1):N_x*N_x);
DBC_values = Dict("1" => 1.0, "2" => 10.0);
DBC = Dict("nodes" => DBC_nodes,
"values" => DBC_values);
ρ = 0.0
x = solve()
draw_mesh(elements, nodes,x[1:nDoF])
# +
# BCs 2 parallel edges are prescribed, plus nonzero heating (ρ=1)
DBC_nodes = Dict("1" => 1:N_x,
"2" => N_x*(N_x-1):N_x*N_x);
DBC_values = Dict("1" => 1.0, "2" => 10.0);
DBC = Dict("nodes" => DBC_nodes,
"values" => DBC_values);
ρ = 1.0
x = solve()
draw_mesh(elements, nodes,x[1:nDoF])
# +
# BCs: 2 nodes on a diagonal are prescribed and zero heating (ρ=0)
DBC_nodes = Dict("1" => [1],
"2" => [N_x*N_x]);
DBC_values = Dict("1" => 1.0, "2" => 10.0);
DBC = Dict("nodes" => DBC_nodes,
"values" => DBC_values);
ρ = 0.0
x = solve()
draw_mesh(elements, nodes,x[1:nDoF])
# +
# BCs: 2 nodes at the center of two parallel edges are prescribed, plus small heating
DBC_nodes = Dict("1" => [31],
"2" => [31+(N_x-1)*N_x],
"3" => [1+N_x*31],
"4" => [N_x*32]);
DBC_values = Dict("1" => 1.0, "2" => 10.0, "3" => 1., "4" => 8.);
DBC = Dict("nodes" => DBC_nodes,
"values" => DBC_values);
ρ = 0.5
x = solve()
draw_mesh(elements, nodes,x[1:nDoF])
# -*- coding: utf-8 -*-
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .jl
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Julia 1.1.1
# language: julia
# name: julia-1.1
# ---
# # Introduction to Lotka-Volterra Competition
using Parameters
using DifferentialEquations
using ForwardDiff
using Plots
pyplot();
# # Introduction/Overview
# This notebook looks at the two-species L-V competition model. Here, we explore
# the geometry and dynamics of this model.
#
# # Model and Equilibria
# The model we will use is the classical LV competition model:
# +
# Inplace definition (the `du` array is passed to the function and changed), the default way to
# define this in Julia.
function lv_comp(du, u, p, t)
@unpack r1, r2, α12, α21, K1, K2 = p
du[1] = u[1] * r1 * (1 - (u[1] + α12 * u[2]) / K1)
    du[2] = u[2] * r2 * (1 - (u[2] + α21 * u[1]) / K2)
return
end
# Make a version that allocates the output `du`, useful for symbolic calculations
function lv_comp(u, p, t)
du = similar(u)
lv_comp(du, u, p, t)
return du
end
# -
# # Model Parameters
@with_kw mutable struct LVPar
α12 = 0.8
α21 = 0.3
r1 = 1.0
r2 = 2.0
K1 = 1.5
K2 = 1.0
end
# # Evaluating the model -- time series
let
u0 = [0.5, 0.5]
t_span = (0.0, 100.0)
p = LVPar()
prob = ODEProblem(lv_comp, u0, t_span, p)
sol = solve(prob, reltol = 1e-8)
plot(sol)
xlabel!("time")
ylabel!("Density")
end
# # Equilibria and Isoclines
# From the above equations we can generate the Jacobian:
# Make a numerical (not symbolic) version using the ForwardDiff.jl library.
# (This is what you will want to do most of the time.)
jac(u, p) = ForwardDiff.jacobian(u -> lv_comp(u, p, NaN), u)
# Here we have the jacobian evaluated at the point `u = [0.5, 0.5]`, with the parameter set `par`
jac([0.5, 0.5], LVPar())
# First solve for $f_1$ and $f_2$ to determine when functions are equal to 0 (i.e., when $du_1/dt$
# and $du_2/dt = 0$). These are known as the isoclines, or nullclines, and describe the set of
# solutions when $u_1$ and $u_2$ do not change.
# We need to use a library for Symbolic calculations, a very common one is SymPy from the python
# language. Luckily Julia has excellent support for it.
using SymPy
# We want to deal with all of these as symbolic variables, which are not the default type in Julia,
# unlike Mathematica.
@vars u1 u2
@vars r1 r2 a12 a21 K1 K2
# We need to make a symbolic parameter list, as `LVPar` is numeric
spar = Dict(
:α12 => a12,
:α21 => a21,
:r1 => r1,
:r2 => r2,
:K1 => K1,
:K2 => K2);
# We need to make symbolic versions of the model equations. We do this by calling the function with
# the symbolic parameters. The last argument could be anything, since the time (`t`) argument is not
# used; I have set it to `NaN`, which is a name for "not a number".
f1, f2 = lv_comp([u1, u2], spar, NaN)
sympy.solve(f1, u1)
sympy.solve(f1, u2)
sympy.solve(f2, u1)
sympy.solve(f2, u2)
# We want to solve our equations for $u_1$ so that we can plot the isoclines as a function of $u_1$
# ($u_1$ on the y-axis), but here I am keeping all possible solutions, with respect to both $u_1$
# and $u_2$. You should be able to see that the first two solutions ($u_1 = K_1 - \alpha_{12}u_2$
# and $u_2 = (K_1 - u_1)/\alpha_{12}$) are equivalent.
#
# The solutions $u_1 = K_1 - \alpha_{12}u_2$ and $u_1 = (K_2 - u_2)/\alpha_{21}$ are the equations
# for our isoclines for $du_1/dt = 0$ and $du_2/dt = 0$, respectively. We can find the interior
# equilibrium where these lines intersect (i.e., both $u_1$ and $u_2$ do not change). In this case
# we get one interior equilibrium, but note that in more complicated models we can get more interior
# equilibria.
# # Now plot the isoclines
# We will again use the manipulate function to see how our parameters change the isoclines. This can
# immediately tell us a lot about our equilibria and stability.
function iso1(u2, p)
@unpack α12, K1 = p
    return K1 - α12 * u2
end
function iso2(u2, p)
@unpack α21, K2 = p
return (K2 - u2) / α21
end
let
p = LVPar()
u2s = range(0, 5, length = 100)
fig = plot(u2s, [iso1(u2, p) for u2 in u2s], label = "u1 = 0")
plot!(u2s, [iso2(u2, p) for u2 in u2s], label = "u2 = 0")
xlims!(0, 3)
ylims!(0, 5)
xlabel!("u2")
ylabel!("u1")
end
# *This notebook was generated using [Literate.jl](https://github.com/fredrikekre/Literate.jl).*
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
# basic imports
import sys
sys.path.append("/home/raygoza/anaconda3/envs/humann/lib/python3.8/site-packages/")
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from tqdm import tqdm
# models and metrics
import xgboost as xgb
from sklearn.model_selection import train_test_split
from xgbse.metrics import concordance_index
from xgbse.non_parametric import get_time_bins
import xgbse
from xgbse import (
XGBSEKaplanNeighbors,
XGBSEKaplanTree,
XGBSEDebiasedBCE,
XGBSEBootstrapEstimator
)
from xgbse.converters import (
convert_data_to_xgb_format,
convert_to_structured
)
# better plots
from IPython.display import set_matplotlib_formats
set_matplotlib_formats('retina')
plt.style.use('bmh')
# setting seed
np.random.seed(42)
from sklearn import tree
from sksurv.datasets import get_x_y
from sksurv.io.arffread import loadarff
import pandas as pd
print(xgbse.__version__)
# +
# to easily plot confidence intervals
def plot_ci(mean, upper_ci, lower_ci, i=42, title='Probability of survival $P(T \geq t)$'):
# plotting mean and confidence intervals
plt.figure(figsize=(12, 4), dpi=120)
plt.plot(mean.columns,mean.iloc[i])
plt.fill_between(mean.columns, lower_ci.iloc[i], upper_ci.iloc[i], alpha=0.2)
plt.title(title)
plt.xlabel('Time [days]')
plt.ylabel('Probability')
plt.tight_layout()
# +
# to write data as markdown for publication
def df_to_markdown(df, float_format='%.2g'):
"""
Export a pandas.DataFrame to markdown-formatted text.
DataFrame should not contain any `|` characters.
"""
from os import linesep
df.columns = df.columns.astype(str)
return linesep.join([
'|'.join(df.columns),
'|'.join(4 * '-' for i in df.columns),
df.to_csv(sep='|', index=False, header=False, float_format=float_format)
]).replace('|', ' | ')
# +
## pre selected params for models ##
PARAMS_XGB_AFT = {
'objective': 'survival:aft',
'eval_metric': 'aft-nloglik',
'aft_loss_distribution': 'normal',
'aft_loss_distribution_scale': 1.0,
'tree_method': 'hist',
'learning_rate': 5e-2,
'max_depth': 5,
'booster':'dart',
'subsample':0.5,
'min_child_weight': 58,
'colsample_bynode':0.5
}
N_NEIGHBORS = 50
TIME_BINS = np.arange(15, 315, 15)
# +
gem_train = loadarff('training_rsf_fin.arff')
gem_test = loadarff('testing_rsf_fin.arff')
X_train, y_train=get_x_y(gem_train,attr_labels=['events','TimeInStudy'],pos_label='TRUE')
X_valid, y_valid=get_x_y(gem_test,attr_labels=['events','TimeInStudy'],pos_label='TRUE')
feature_names=list(X_train.columns)
random_state=20
#random_state=30
# +
# converting to xgboost format
dtrain = convert_data_to_xgb_format(X_train, y_train, 'survival:aft')
dval = convert_data_to_xgb_format(X_valid, y_valid, 'survival:aft')
# training model
bst = xgb.train(
PARAMS_XGB_AFT,
dtrain,
num_boost_round=100,
early_stopping_rounds=100,
evals=[(dval, 'val')],
verbose_eval=0
)
# predicting and evaluating
preds = bst.predict(dval)
cind = concordance_index(y_valid, -preds, risk_strategy='precomputed')
print(f"C-index: {cind:.3f}")
def pred(df):
    # Convert the supplied feature frame (paired with the validation labels, as in the
    # original call) to XGBoost format and return the trained model's predictions.
    dmat = convert_data_to_xgb_format(df, y_valid, 'survival:aft')
    return bst.predict(dmat)
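# A follow-up sketch (the estimator choice here is an assumption, not from the original notebook):
# the pre-defined `plot_ci` helper, `TIME_BINS`, and `N_NEIGHBORS` can be exercised with an XGBSE
# model that returns survival curves with confidence intervals.
# +
xgbse_model = XGBSEKaplanNeighbors(PARAMS_XGB_AFT, n_neighbors=N_NEIGHBORS)
xgbse_model.fit(X_train, y_train, time_bins=TIME_BINS)
surv_mean, surv_upper, surv_lower = xgbse_model.predict(X_valid, return_ci=True)
print(f"C-index (Kaplan neighbors): {concordance_index(y_valid, surv_mean):.3f}")
plot_ci(surv_mean, surv_upper, surv_lower, i=0)
# -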
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## Loading Required Packages
import pandas as pd
import numpy as np
from collections import Counter
import matplotlib.pyplot as plt
# ## Loading the datasets
mpi_national = pd.read_csv('MPI_national.csv')
mpi_subnation = pd.read_csv('MPI_subnational.csv')
netflix = pd.read_csv('netflix_titles.csv')
# ## Determining the Dimensions of Poverty
# ### Which Country has the Highest Poverty in Urban Areas (Assuming MPI is used to measure the Poverty Index)
mpi_national = mpi_national.sort_values(by=['MPI Urban'],ascending=False)
mpi_national.head(1)
# #### South Sudan has the highest poverty in urban areas
# ### Which Countries have the Highest and Lowest Population Suffering from Poverty
# #### Population details are not present in the dataset provided. Had population data been available, the countries with the highest and lowest populations suffering from poverty could have been extracted easily
# ### What is the Intensity of Deprivation in Asian Countries.
# +
asian_countries = mpi_subnation[mpi_subnation['World region'].isin(['East Asia and the Pacific','South Asia','Europe and Central Asia'])]
asian_countries = asian_countries[['Country','Intensity of deprivation Regional']]
asian_countries = asian_countries.groupby('Country').mean()
asian_countries = asian_countries.reset_index()
asian_countries.sort_values(by=['Intensity of deprivation Regional'], inplace=True,ascending=False)
asian_countries.head() # Showing data for only top 6 countries. But extracted for all
# -
# ### Which 10 Countries are the Poorest Countries
# +
mpi_subnation = mpi_subnation.sort_values(by=['MPI Regional'],ascending=False)
poorest_countries = mpi_subnation[['Country','MPI National']]
poorest_countries = poorest_countries.groupby('Country').mean()
poorest_countries = poorest_countries.reset_index()
poorest_countries.sort_values(by=['MPI National'], inplace=True,ascending=False)
poorest_countries.head(10) # Showing data for only top 10 countries. But extracted for all
# -
# ### Which Country's Rural Population has the Highest Poverty Index in the World
mpi_national = mpi_national.sort_values(by=['MPI Rural'],ascending=False)
mpi_national.head(1)
# #### Niger has the highest rural poverty index (MPI Rural = 0.669)
# ### Analyzing Poverty in Afghanistan
mpi_subnation = mpi_subnation.reset_index()
del mpi_subnation['index']
afg = mpi_subnation[mpi_subnation['ISO country code']=='AFG']
#MPI Regional of Afganistan
afg_mpi_reg = afg[['Sub-national region','MPI Regional']]
afg_mpi_reg.head()
afg_mpi_reg.plot(x="Sub-national region", y='MPI Regional', kind="bar",figsize=(10,10))
#Head Count Ratio Regional of Afganistan
afg_head_ratio = afg[['Sub-national region','Headcount Ratio Regional']]
afg_head_ratio.head()
afg_head_ratio.plot(x="Sub-national region", y='Headcount Ratio Regional', kind="bar",figsize=(10,10))
# ### What is the Difference between India and Afghanistan in terms of Poverty Index.
afg = mpi_national[mpi_national['ISO']=='AFG']
ind = mpi_national[mpi_national['ISO']=='IND']
# +
#### MPI Urban of India - 0.064
#### MPI Urban of Afganistan - 0.132
#### Headcount Ratio Urban of India - 14.8
#### Headcount Ratio Urban of Afganistan - 25.8
#### Intensity of Deprivation Urban of India - 43.3
#### Intensity of Deprivation Urban of Afganistan - 45.8
#### MPI Rural of India - 0.25
#### MPI Rural of Afganistan - 0.347
#### Headcount Ratio Rural of India - 53.49
#### Headcount Ratio Rural of Afganistan - 64.66
#### Intensity of Deprivation Rural of India - 46.7
#### Intensity of Deprivation Rural of Afganistan - 53.6
# -
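# A small sketch (column names assumed to match MPI_national.csv as used above): put the two
# countries side by side instead of listing the numbers by hand.
# +
compare_cols = ['MPI Urban', 'Headcount Ratio Urban', 'MPI Rural', 'Headcount Ratio Rural']
comparison = pd.concat([ind, afg]).set_index('Country')[compare_cols]
comparison.plot(kind='bar', figsize=(10, 6))
plt.ylabel('Value')
plt.show()
# -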
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import lppls
import data_loader
import numpy as np
import pandas as pd
# %matplotlib inline
data = data_loader.sp500_2017()
data
time = np.linspace(0, len(data)-1, len(data))
price = np.log(data['Adj Close'].values)
observations = np.array([time, price])
lppls_model = lppls.LPPLS(observations=observations)
tc, m, w, a, b, c, c1, c2 = lppls_model.fit(observations, 25, minimizer='Nelder-Mead')
lppls_model.plot_fit()
filter_conditions_config = [
{'condition_1':[
(0.0, 0.1), # tc_range
(-1,1), # m_range
(4,25), # w_range
2.5, # O_min
0.5, # D_min
]},
]
res = lppls_model.mp_compute_indicator(
workers=1,
window_size=120,
smallest_window_size=30,
increment=5,
max_searches=25,
filter_conditions_config=filter_conditions_config
)
# +
lppls_model.plot_confidence_indicators(res, condition_name='condition_1', title='Short Term Indicator 120-30')
# -
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## Day 17
#
# https://adventofcode.com/2020/day/17
import collections
import itertools
import typing
import aocd
ACTIVE = '#'
INACTIVE = '.'
data = aocd.get_data(day=17, year=2020).splitlines()
len(data)
test_data = [
'.#.',
'..#',
'###',
]
class Point(typing.NamedTuple):
x: int
y: int
z: int
def __repr__(self):
return f'({self.x}, {self.y}, {self.z})'
def get_all_neighbors(self):
result = [
Point(self.x + i, self.y + j, self.z + k)
for i, j, k in itertools.product([-1, 0, 1], repeat=3)
if (i, j, k) != (0, 0, 0)
]
assert len(result) == 26
return set(result)
class Grid(collections.abc.Set):
def __init__(self, active_cells):
self._active = set(active_cells)
self.x_min = min(c.x for c in self._active)
self.x_max = max(c.x for c in self._active)
self.y_min = min(c.y for c in self._active)
self.y_max = max(c.y for c in self._active)
self.z_min = min(c.z for c in self._active)
self.z_max = max(c.z for c in self._active)
@classmethod
def from_lines(cls, lines):
values = {
Point(x=i, y=j, z=0): value
for j, row in enumerate(lines)
for i, value in enumerate(row)
}
active_cells = [
cell for cell in values
if values[cell] == ACTIVE
]
return cls(active_cells)
def __contains__(self, cell: Point):
return cell in self._active
def __len__(self):
return len(self._active)
def __iter__(self):
return iter(self._active)
def is_active(self, cell: Point) -> bool:
return cell in self
def is_inactive(self, cell: Point) -> bool:
        return cell not in self
@property
def active_cells(self):
return set(self._active)
@property
def potential_next_cells(self):
return set([
neighbor for cell in self
for neighbor in cell.get_all_neighbors()
])
@property
def domain(self):
return (
(self.x_min, self.x_max),
(self.y_min, self.y_max),
(self.z_min, self.z_max),
)
def __repr__(self):
x_range, y_range, z_range = self.domain
return (
f'<{self.__class__.__name__}('
f'x_range={x_range}'
f', y_range={y_range}'
f', z_range={z_range})>'
)
def all_cells_for_layer(self, layer: int):
return [
[
Point(x, y, layer)
for x in range(self.x_min, self.x_max + 1)
]
for y in range(self.y_min, self.y_max + 1)
]
def print_layer(self, layer: int):
lines = [
''.join(
ACTIVE if self.is_active(cell) else INACTIVE
for cell in row
)
for row in self.all_cells_for_layer(layer)
]
print(*lines, sep='\n')
def count_active_neighbors(self, cell) -> int:
neighbors = set([
n for n in cell.get_all_neighbors()
if n in self
])
return len([n for n in neighbors if self.is_active(n)])
grid = Grid.from_lines(data)
grid
grid.print_layer(0)
# ### Solution to Part 1
def compute_new_cell(cell, *, grid) -> str:
n = grid.count_active_neighbors(cell)
if grid.is_active(cell):
return ACTIVE if n in (2, 3) else INACTIVE
elif grid.is_inactive(cell):
return ACTIVE if n == 3 else INACTIVE
else:
raise RuntimeError('cannot happen')
def run_cycle(grid, *, grid_type=Grid):
return grid_type([
cell for cell in grid.potential_next_cells
if compute_new_cell(cell, grid=grid) == ACTIVE
])
def simulate(grid, *, n: int = 6, grid_type=Grid):
old_grid = grid_type(grid)
for _ in range(n):
new_grid = run_cycle(old_grid, grid_type=grid_type)
old_grid = grid_type(new_grid)
return new_grid
final_grid = simulate(grid)
len(final_grid.active_cells)
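# Quick sanity check with the worked example from the puzzle statement,
# which should settle at 112 active cubes after six cycles.
test_grid = Grid.from_lines(test_data)
len(simulate(test_grid).active_cells)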
# ### Solution to Part 2
class Point4D(typing.NamedTuple):
x: int
y: int
z: int
w: int
def __repr__(self):
return f'({self.x}, {self.y}, {self.z}, {self.w})'
def get_all_neighbors(self):
result = [
Point4D(self.x + i, self.y + j, self.z + k, self.w + m)
for i, j, k, m in itertools.product([-1, 0, 1], repeat=4)
if (i, j, k, m) != (0, 0, 0, 0)
]
assert len(result) == 80
return set(result)
class Grid4D(collections.abc.Set):
def __init__(self, active_cells):
self._active = set(active_cells)
self.x_min = min(c.x for c in self._active)
self.x_max = max(c.x for c in self._active)
self.y_min = min(c.y for c in self._active)
self.y_max = max(c.y for c in self._active)
self.z_min = min(c.z for c in self._active)
self.z_max = max(c.z for c in self._active)
self.w_min = min(c.w for c in self._active)
self.w_max = max(c.w for c in self._active)
@classmethod
def from_lines(cls, lines):
values = {
Point4D(x=i, y=j, z=0, w=0): value
for j, row in enumerate(lines)
for i, value in enumerate(row)
}
active_cells = [
cell for cell in values
if values[cell] == ACTIVE
]
return cls(active_cells)
def __contains__(self, cell: Point):
return cell in self._active
def __len__(self):
return len(self._active)
def __iter__(self):
return iter(self._active)
def is_active(self, cell: Point) -> bool:
return cell in self
def is_inactive(self, cell: Point) -> bool:
        return cell not in self
@property
def active_cells(self):
return set(self._active)
@property
def potential_next_cells(self):
return set([
neighbor for cell in self
for neighbor in cell.get_all_neighbors()
])
@property
def domain(self):
return (
(self.x_min, self.x_max),
(self.y_min, self.y_max),
(self.z_min, self.z_max),
(self.w_min, self.w_max),
)
def __repr__(self):
x_range, y_range, z_range, w_range = self.domain
return (
f'<{self.__class__.__name__}('
f'x_range={x_range}'
f', y_range={y_range}'
f', z_range={z_range}'
f', w_range={w_range})>'
)
def all_cells_for_layer(self, *, z: int, w: int):
return [
[
Point4D(x, y, z, w)
for x in range(self.x_min, self.x_max + 1)
]
for y in range(self.y_min, self.y_max + 1)
]
def print_layer(self, *, z: int, w: int):
lines = [
''.join(
ACTIVE if self.is_active(cell) else INACTIVE
for cell in row
)
for row in self.all_cells_for_layer(z=z, w=w)
]
print(*lines, sep='\n')
def count_active_neighbors(self, cell) -> int:
neighbors = set([
n for n in cell.get_all_neighbors()
if n in self
])
return len([n for n in neighbors if self.is_active(n)])
grid4D = Grid4D.from_lines(data)
grid4D
final_grid = simulate(grid4D, grid_type=Grid4D)
len(final_grid.active_cells)
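# The same sanity check in four dimensions: the puzzle's example should settle at 848 active cubes.
test_grid4D = Grid4D.from_lines(test_data)
len(simulate(test_grid4D, grid_type=Grid4D).active_cells)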
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .jl
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Julia 0.3.7
# language: julia
# name: julia-0.3
# ---
# +
using JSON
#load OD pair-route incidence
odPairRoute = readall("od_pair_route_incidence_MA.json");
odPairRoute = JSON.parse(odPairRoute);
#load link-route incidence
linkRoute = readall("link_route_incidence_MA.json");
linkRoute = JSON.parse(linkRoute);
#load OD pair labels
odPairLabel = readall("od_pair_label_dict_MA_refined.json");
odPairLabel = JSON.parse(odPairLabel);
odPairLabel_ = readall("od_pair_label_dict__MA_refined.json");
odPairLabel_ = JSON.parse(odPairLabel_);
#load link labels
linkLabel = readall("link_label_dict_MA.json");
linkLabel = JSON.parse(linkLabel);
linkLabel_ = readall("link_label_dict_MA_.json");
linkLabel_ = JSON.parse(linkLabel_);
#load node-link incidence
nodeLink = readall("node_link_incidence_MA.json");
nodeLink = JSON.parse(nodeLink);
# -
include("load_network_uni-class.jl")
ta_data = load_ta_network("East_Massachusetts_Apr_weekend");
capacity = ta_data.capacity;
free_flow_time = ta_data.free_flow_time;
############
#Read in the demand file
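# The trips file follows the standard TNTP format: an "Origin <id>" header line,
# followed by "destination : demand ;" entries, which the loop below parses.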
file = open("../data_traffic_assignment_uni-class/East_Massachusetts_trips_Apr_weekend.txt")
demands = Dict{(Int64,Int64), Float64}()
s = 0
for line in eachline(file)
if contains(line, "Origin")
s = int(split(line)[2])
else
pairs = split(line, ";")
for pair in pairs
if !contains(pair, "\n")
pair_vals = split(pair, ":")
t, demand = int(pair_vals[1]), float(pair_vals[2])
demands[(s,t)] = demand
end
end
end
end
close(file)
demands
odPairLabel_
# +
demandsVec = zeros(56)
for i = 1:length(demandsVec)
demandsVec[i] = demands[(odPairLabel_["$i"][1], odPairLabel_["$i"][2])]
end
# -
demandsVec
for key=keys(odPairRoute)
if contains(key, "56-")
println(key)
end
end
linkRoute
coeffs_dict = readall("../temp_files/coeffs_dict_Apr_weekend.json")
coeffs_dict = JSON.parse(coeffs_dict)
fcoeffs = coeffs_dict["(8,0.5,10000.0,1)"]
# +
using JuMP
# m = Model(solver=GurobiSolver(OutputFlag=false))
m = Model()
numLinks = 24
numRoute = 314
numOD = 56
@defVar(m, linkFlow[1:numLinks])
@defVar(m, pathFlow[1:numRoute])
pathFlowSum = Dict()
for i=1:numOD
pathFlowSum[i] = 0
for j=1:numRoute
if "$(i)-$(j)" in keys(odPairRoute)
pathFlowSum[i] += pathFlow[j]
end
end
@addConstraint(m, pathFlowSum[i] == demandsVec[i])
end
pathFlowLinkSum = Dict()
for a=1:numLinks
pathFlowLinkSum[a] = 0
for j=1:numRoute
if "$(a)-$(j)" in keys(linkRoute)
pathFlowLinkSum[a] += pathFlow[j]
end
end
@addConstraint(m, pathFlowLinkSum[a] == linkFlow[a])
end
for j=1:numRoute
@addConstraint(m, pathFlow[j] >= 0)
end
# @defNLExpr(f, sum{free_flow_time[a]*linkFlow[a] + .03*free_flow_time[a]*((linkFlow[a])^5)/((capacity[a])^4), a = 1:numLinks})
@defNLExpr(f, sum{free_flow_time[a] * fcoeffs[1] * linkFlow[a] +
free_flow_time[a] * fcoeffs[2] * linkFlow[a]^2 / capacity[a] +
free_flow_time[a] * fcoeffs[3] * linkFlow[a]^3 / capacity[a]^2 +
free_flow_time[a] * fcoeffs[4] * linkFlow[a]^4 / capacity[a]^3 +
free_flow_time[a] * fcoeffs[5] * linkFlow[a]^5 / capacity[a]^4 +
free_flow_time[a] * fcoeffs[6] * linkFlow[a]^6 / capacity[a]^5 +
free_flow_time[a] * fcoeffs[7] * linkFlow[a]^7 / capacity[a]^6 +
free_flow_time[a] * fcoeffs[8] * linkFlow[a]^8 / capacity[a]^7 +
free_flow_time[a] * fcoeffs[9] * linkFlow[a]^9 / capacity[a]^8, a = 1:numLinks})
@setNLObjective(m, Min, f)
solve(m)
# -
getValue(linkFlow)
getObjectiveValue(m)
# +
flows = Dict{(Int64,Int64),Float64}()
for i = 1:length(ta_data.start_node)
key = (ta_data.start_node[i], ta_data.end_node[i])
flows[key] = getValue(linkFlow)[i]
end
flows
# -
using PyCall
unshift!(PyVector(pyimport("sys")["path"]), "");
@pyimport GLS_Apr_weekend
flow_user = GLS_Apr_weekend.x_
function socialObj(linkFlowVec)
objVal = sum([free_flow_time[a] * fcoeffs[1] * linkFlowVec[a] + free_flow_time[a] * fcoeffs[2] * linkFlowVec[a]^2 / capacity[a] + free_flow_time[a] * fcoeffs[3] * linkFlowVec[a]^3 / capacity[a]^2 + free_flow_time[a] * fcoeffs[4] * linkFlowVec[a]^4 / capacity[a]^3 + free_flow_time[a] * fcoeffs[5] * linkFlowVec[a]^5 / capacity[a]^4 + free_flow_time[a] * fcoeffs[6] * linkFlowVec[a]^6 / capacity[a]^5 + free_flow_time[a] * fcoeffs[7] * linkFlowVec[a]^7 / capacity[a]^6 + free_flow_time[a] * fcoeffs[8] * linkFlowVec[a]^8 / capacity[a]^7 + free_flow_time[a] * fcoeffs[9] * linkFlowVec[a]^9 / capacity[a]^8 for a = 1:numLinks])
return objVal
end
# +
weekend_Apr_list = [1, 7, 8, 14, 15, 21, 22, 28, 29]
for i in weekend_Apr_list
println(socialObj(flow_user[:, i])/getObjectiveValue(m))
end
# -
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## CSV Logger
# + hide_input=true
from fastai.vision import *
from fastai.gen_doc.nbdoc import *
from fastai.callbacks import *
# + hide_input=true
show_doc(CSVLogger)
# -
# First, let's show an example of use, with training on the usual MNIST dataset.
path = untar_data(URLs.MNIST_TINY)
data = ImageDataBunch.from_folder(path)
learn = Learner(data, simple_cnn((3, 16, 16, 2)), metrics=[accuracy, error_rate], callback_fns=[CSVLogger])
learn.fit(3)
# Training details have been saved in 'history.csv'.
# Note that it only saves float/int metrics, so the elapsed time is currently not saved. Saving it would require changing how values are recorded - you can submit a PR [fixing that](https://forums.fast.ai/t/expand-recorder-to-deal-with-non-int-float-data/41534).
learn.path.ls()
# Note that, as with all [`LearnerCallback`](/basic_train.html#LearnerCallback), you can access the object as an attribute of `learn` after it has been created. Here it's `learn.csv_logger`.
# + hide_input=true
show_doc(CSVLogger.read_logged_file)
# -
learn.csv_logger.read_logged_file()
# Optionally, you can set `append=True` to log the results of subsequent stages of training.
# don't forget to remove the old file
if learn.csv_logger.path.exists(): os.remove(learn.csv_logger.path)
learn = Learner(data, simple_cnn((3, 16, 16, 2)), metrics=[accuracy, error_rate],
callback_fns=[partial(CSVLogger, append=True)])
# stage-1
learn.fit(3)
# stage-2
learn.fit(3)
learn.csv_logger.read_logged_file()
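# `read_logged_file` returns a regular `DataFrame`, so you can also work with the
# history directly - for instance, to inspect the losses (the column names below
# assume the default header written by `CSVLogger`).
df_log = learn.csv_logger.read_logged_file()
df_log[['epoch', 'train_loss', 'valid_loss']].tail()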
# ### Callback methods
# You don't call these yourself - they're called by fastai's [`Callback`](/callback.html#Callback) system automatically to enable the class's functionality.
# + hide_input=true
show_doc(CSVLogger.on_train_begin)
# + hide_input=true
show_doc(CSVLogger.on_epoch_end)
# + hide_input=true
show_doc(CSVLogger.on_train_end)
# -
# ## Undocumented Methods - Methods moved below this line will intentionally be hidden
# ## New Methods - Please document or move to the undocumented section
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
apurv = 2785.75
kartik= 2268.64
salina = 1000
total = apurv + kartik + salina
rent = 4974.09
rent_diff = rent - (0)
k = rent_diff * kartik/total
a = rent_diff * apurv/total
s = rent_diff * salina / total
k + a + s
k, a, s
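# Rounded to cents - these are the figures summed in the next cell.
print(f"Kartik: {k:.2f}, Apurv: {a:.2f}, Salina: {s:.2f}")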
1863.84 + 2288.68 + 821.57
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import cv2
import os, sys
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import PIL
from PIL import Image
import skimage.io
from skimage.transform import resize
from imgaug import augmenters as iaa
from tqdm import tqdm
from sklearn.utils import class_weight, shuffle
import tensorflow as tf
import warnings
warnings.filterwarnings("ignore")
WINDOW_SIZE = 331
IMAGE_SIZE = 512
IMAGE_CHANNELS=3
NUM_CLASSES=28
from keras.preprocessing.image import ImageDataGenerator
from keras.models import Sequential, load_model
from keras.layers import Activation, Dropout, Flatten, Reshape, Dense, Concatenate, GlobalMaxPooling2D
from keras.layers import BatchNormalization, Input, Conv2D, Lambda, Average
from keras.applications.nasnet import NASNetLarge
from keras.callbacks import ModelCheckpoint
from keras import metrics
from keras.optimizers import Adam
from keras import backend as K
import keras
from keras.models import Model
from keras.utils import multi_gpu_model, multi_gpu_utils
# +
path_to_train = '../../Human_Protein_Atlas/input/train/'
data = pd.read_csv('../../Human_Protein_Atlas/input/train.csv')
train_dataset_info = []
for name, labels in zip(data['Id'], data['Target'].str.split(' ')):
train_dataset_info.append({
'path':os.path.join(path_to_train, name),
'labels':np.array([int(label) for label in labels])})
train_dataset_info = np.array(train_dataset_info)
class data_generator:
def __init__(self, it):
self.it = it
def __call__(self):
return self.it
def get_dataset(dataset_info, batch_size, shape, augument=True):
gen = data_generator.create_train(dataset_info, batch_size, shape, augument)
gen = data_generator(gen)
types = (tf.float32, tf.float32)
shapes=(tf.TensorShape((WINDOW_SIZE, WINDOW_SIZE, IMAGE_CHANNELS)), tf.TensorShape([NUM_CLASSES]))
dataset = tf.data.Dataset.from_generator(
gen, types, shapes
)
#dataset = dataset.repeat()
dataset = dataset.batch(batch_size, drop_remainder=True).prefetch(batch_size*8)
return dataset
def create_train(dataset_info, batch_size, shape, augument=True):
assert shape[2] == 3
dataset_info = shuffle(dataset_info)
while True:
for xs, xe, ys, ye in data_generator.slice_images():
for idx in range(len(dataset_info)):
#X_train_batch = dataset_info[start:end]
batch_labels = np.zeros((NUM_CLASSES))
image = data_generator.load_image(
dataset_info[idx]['path'], shape)
if augument:
image = data_generator.augment(image)
#print(image)
image=image/255.
#print(image)
batch_labels[dataset_info[idx]['labels']] = 1
yield image[xs:xe, ys:ye, :], batch_labels
def load_image(path, shape):
image_red_ch = Image.open(path+'_red.png')
image_yellow_ch = Image.open(path+'_yellow.png')
image_green_ch = Image.open(path+'_green.png')
image_blue_ch = Image.open(path+'_blue.png')
image = np.stack((
np.array(image_red_ch),
np.array(image_green_ch),
np.array(image_blue_ch)), -1)
#image = cv2.resize(image, (shape[0], shape[1]))
return image
def augment(image):
augment_img = iaa.Sequential([
iaa.OneOf([
iaa.Affine(rotate=0),
iaa.Affine(rotate=90),
iaa.Affine(rotate=180),
iaa.Affine(rotate=270),
iaa.Fliplr(0.5),
iaa.Flipud(0.5),
])], random_order=True)
image_aug = augment_img.augment_image(image)
return image_aug
def slice_images():
offset = int(IMAGE_SIZE%WINDOW_SIZE)
for i in range(2):
for j in range(2):
x_start=i*offset
x_end=x_start+WINDOW_SIZE
y_start=j*offset
y_end=y_start+WINDOW_SIZE
yield x_start, x_end, y_start, y_end
# -
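# Each 512x512 input image is served to the network as four overlapping 331x331 crops
# (see `slice_images` above), which is why the `steps_per_epoch` values further down are multiplied by 4.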
with tf.device('/cpu:0'):
input_shape=(WINDOW_SIZE,WINDOW_SIZE, IMAGE_CHANNELS)
input_tensor = Input(shape=(WINDOW_SIZE, WINDOW_SIZE, IMAGE_CHANNELS))
base_model = NASNetLarge(include_top=False,
weights='imagenet',
input_shape=input_shape
#input_shape=(WINDOW_SIZE, WINDOW_SIZE, IMAGE_CHANNELS)
)
bn = BatchNormalization()(input_tensor)
x = base_model(bn)
x = Conv2D(32, kernel_size=(1,1), activation='relu')(x)
x = Flatten()(x)
x = Dropout(0.5)(x)
x = Dense(1024, activation='relu')(x)
x = Dropout(0.5)(x)
output = Dense(NUM_CLASSES, activation='sigmoid')(x)
model = Model(input_tensor, output)
model = multi_gpu_model(model, gpus=2)
# +
# create callbacks list
from keras.callbacks import ModelCheckpoint, LearningRateScheduler, EarlyStopping, ReduceLROnPlateau
from sklearn.model_selection import train_test_split
epochs = 50; batch_size = 16
checkpoint = ModelCheckpoint('../../Human_Protein_Atlas/working/NASNetLarge.h5', monitor='val_loss', verbose=1,
save_best_only=True, mode='min', save_weights_only = True)
reduceLROnPlat = ReduceLROnPlateau(monitor='val_loss', factor=0.1, patience=3,
verbose=1, mode='auto', epsilon=0.0001)
early = EarlyStopping(monitor="val_loss",
mode="min",
patience=6)
callbacks_list = [checkpoint, early, reduceLROnPlat]
# split data into train, valid
indexes = np.arange(train_dataset_info.shape[0])
np.random.shuffle(indexes)
train_indexes, valid_indexes = train_test_split(indexes, test_size=0.05, random_state=74)
# create train and valid datagens
train_generator = data_generator.get_dataset(
train_dataset_info[train_indexes], batch_size, (IMAGE_SIZE,IMAGE_SIZE,IMAGE_CHANNELS), augument=True)
validation_generator = data_generator.get_dataset(
train_dataset_info[valid_indexes], 32, (IMAGE_SIZE,IMAGE_SIZE,IMAGE_CHANNELS), augument=False)
# +
for layer in model.layers:
layer.trainable = False
model.layers[-1].trainable = True
model.layers[-2].trainable = True
model.layers[-3].trainable = True
model.layers[-4].trainable = True
model.layers[-5].trainable = True
model.compile(
loss='binary_crossentropy',
optimizer=Adam(1e-03),
metrics=['acc'])
# model.summary()
train_images, train_labels = train_generator.make_one_shot_iterator().get_next()
val_images, val_labels = validation_generator.make_one_shot_iterator().get_next()
# -
model.fit(
x=train_images, y=train_labels,
steps_per_epoch=int(np.ceil(float(len(train_indexes)) / float(batch_size))*4),
validation_data=(val_images, val_labels),
validation_steps=int(np.ceil(float(len(valid_indexes)) / float(batch_size))*4),
epochs=2,
verbose=1)
# train all layers
for layer in model.layers:
layer.trainable = True
model.compile(loss='binary_crossentropy',
optimizer=Adam(lr=1e-4),
metrics=['accuracy'])
model.fit(
x=train_images, y=train_labels,
steps_per_epoch=int(np.ceil(float(len(train_indexes)) / float(batch_size))*4),
validation_data=(val_images, val_labels),
validation_steps=int(np.ceil(float(len(valid_indexes)) / float(batch_size))*4),
epochs=epochs,
verbose=1,
callbacks=callbacks_list)
model.save_weights('../../Human_Protein_Atlas/working/NASNetLarge.h5')
model.fit(
x=train_images, y=train_labels,
steps_per_epoch=int(np.ceil(float(len(train_indexes)) / float(batch_size))*4),
validation_data=(val_images, val_labels),
validation_steps=int(np.ceil(float(len(valid_indexes)) / float(batch_size))*4),
epochs=epochs,
verbose=1,
callbacks=callbacks_list)
# +
with tf.device('/cpu:0'):
input_shape=(WINDOW_SIZE,WINDOW_SIZE, IMAGE_CHANNELS)
input_tensor = Input(shape=(WINDOW_SIZE, WINDOW_SIZE, IMAGE_CHANNELS))
base_model = NASNetLarge(include_top=False,
weights='imagenet',
input_shape=input_shape
#input_shape=(WINDOW_SIZE, WINDOW_SIZE, IMAGE_CHANNELS)
)
bn = BatchNormalization()(input_tensor)
x = base_model(bn)
x = Conv2D(32, kernel_size=(1,1), activation='relu')(x)
x = Flatten()(x)
x = Dropout(0.5)(x)
x = Dense(1024, activation='relu')(x)
x = Dropout(0.5)(x)
output = Dense(NUM_CLASSES, activation='sigmoid')(x)
model = Model(input_tensor, output)
model = multi_gpu_model(model, gpus=2)
model.load_weights('../../Human_Protein_Atlas/working/NASNetLarge.h5')
# -
submit = pd.read_csv('../input/sample_submission.csv')
predicted = []
draw_predict = []
for name in tqdm(submit['Id']):
path = os.path.join('../input/test/', name)
    image = data_generator.load_image(path, (IMAGE_SIZE,IMAGE_SIZE,3))/255.
score_predict = model.predict(image[np.newaxis])[0]
draw_predict.append(score_predict)
label_predict = np.arange(28)[score_predict>=0.2]
str_predict_label = ' '.join(str(l) for l in label_predict)
predicted.append(str_predict_label)
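# A minimal sketch for writing out the predictions (assuming the sample file uses the
# usual `Id`/`Predicted` columns of this competition):
submit['Predicted'] = predicted
submit.to_csv('submission.csv', index=False)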
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
# New ipynb for us to do ML
# what modules do we need?
# +
# read in the hdf files
# +
# read in the ESA schemes for CryoSat
# +
# data editing and formatting
# +
# run the machine learning
# which framework to use... TensorFlow, maybe?
# +
# plot something
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="tqrD7Yzlmlsk"
# ##### Copyright 2019 The TensorFlow Authors.
# + cellView="form" id="2k8X1C1nmpKv"
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# + [markdown] id="32xflLc4NTx-"
# # Custom Federated Algorithms, Part 2: Implementing Federated Averaging
# + [markdown] id="jtATV6DlqPs0"
# <table class="tfo-notebook-buttons" align="left">
# <td>
# <a target="_blank" href="https://www.tensorflow.org/federated/tutorials/custom_federated_algorithms_2"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
# </td>
# <td>
# <a target="_blank" href="https://colab.research.google.com/github/tensorflow/federated/blob/main/docs/tutorials/custom_federated_algorithms_2.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
# </td>
# <td>
# <a target="_blank" href="https://github.com/tensorflow/federated/blob/main/docs/tutorials/custom_federated_algorithms_2.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
# </td>
# <td>
# <a href="https://storage.googleapis.com/tensorflow_docs/federated/docs/tutorials/custom_federated_algorithms_2.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
# </td>
# </table>
# + [markdown] id="_igJ2sfaNWS8"
# This tutorial is the second part of a two-part series that demonstrates how to
# implement custom types of federated algorithms in TFF using the
# [Federated Core (FC)](../federated_core.md), which serves as a foundation for
# the [Federated Learning (FL)](../federated_learning.md) layer (`tff.learning`).
#
# We encourage you to first read the
# [first part of this series](custom_federated_algorithms_1.ipynb), which
# introduces some of the key concepts and programming abstractions used here.
#
# This second part of the series uses the mechanisms introduced in the first part
# to implement a simple version of federated training and evaluation algorithms.
#
# We encourage you to review the
# [image classification](federated_learning_for_image_classification.ipynb) and
# [text generation](federated_learning_for_text_generation.ipynb) tutorials for a
# higher-level and more gentle introduction to TFF's Federated Learning APIs, as
# they will help you put the concepts we describe here in context.
# + [markdown] id="cuJuLEh2TfZG"
# ## Before we start
#
# Before we start, try to run the following "Hello World" example to make sure
# your environment is correctly set up. If it doesn't work, please refer to the
# [Installation](../install.md) guide for instructions.
# + id="rB1ovcX1mBxQ"
#@test {"skip": true}
# !pip install --quiet --upgrade tensorflow-federated
# !pip install --quiet --upgrade nest-asyncio
import nest_asyncio
nest_asyncio.apply()
# + id="-skNC6aovM46"
import collections
import numpy as np
import tensorflow as tf
import tensorflow_federated as tff
# Must use the Python context because it
# supports tff.sequence_* intrinsics.
executor_factory = tff.framework.local_executor_factory(
support_sequence_ops=True)
execution_context = tff.framework.ExecutionContext(
executor_fn=executor_factory)
tff.framework.set_default_context(execution_context)
# + id="zzXwGnZamIMM"
@tff.federated_computation
def hello_world():
return 'Hello, World!'
hello_world()
# + [markdown] id="iu5Gd8D6W33s"
# ## Implementing Federated Averaging
#
# As in
# [Federated Learning for Image Classification](federated_learning_for_image_classification.ipynb),
# we are going to use the MNIST example, but since this is intended as a low-level
# tutorial, we are going to bypass the Keras API and `tff.simulation`, write raw
# model code, and construct a federated data set from scratch.
#
# + [markdown] id="b6qCjef350c_"
# ### Preparing federated data sets
#
# For the sake of a demonstration, we're going to simulate a scenario in which we
# have data from 10 users, and each of the users contributes knowledge of how to
# recognize a different digit. This is about as
# non-[i.i.d.](https://en.wikipedia.org/wiki/Independent_and_identically_distributed_random_variables)
# as it gets.
#
# First, let's load the standard MNIST data:
# + id="uThZM4Ds-KDQ"
mnist_train, mnist_test = tf.keras.datasets.mnist.load_data()
# + id="PkJc5rHA2no_"
[(x.dtype, x.shape) for x in mnist_train]
# + [markdown] id="mFET4BKJFbkP"
# The data comes as Numpy arrays, one with images and another with digit labels, both
# with the first dimension going over the individual examples. Let's write a
# helper function that formats it in a way compatible with how we feed federated
# sequences into TFF computations, i.e., as a list of lists - the outer list
# ranging over the users (digits), the inner ones ranging over batches of data in
# each client's sequence. As is customary, we will structure each batch as a pair
# of tensors named `x` and `y`, each with the leading batch dimension. While at
# it, we'll also flatten each image into a 784-element vector and rescale the
# pixels in it into the `0..1` range, so that we don't have to clutter the model
# logic with data conversions.
# + id="XTaTLiq5GNqy"
NUM_EXAMPLES_PER_USER = 1000
BATCH_SIZE = 100
def get_data_for_digit(source, digit):
output_sequence = []
all_samples = [i for i, d in enumerate(source[1]) if d == digit]
for i in range(0, min(len(all_samples), NUM_EXAMPLES_PER_USER), BATCH_SIZE):
batch_samples = all_samples[i:i + BATCH_SIZE]
output_sequence.append({
'x':
np.array([source[0][i].flatten() / 255.0 for i in batch_samples],
dtype=np.float32),
'y':
np.array([source[1][i] for i in batch_samples], dtype=np.int32)
})
return output_sequence
federated_train_data = [get_data_for_digit(mnist_train, d) for d in range(10)]
federated_test_data = [get_data_for_digit(mnist_test, d) for d in range(10)]
# + [markdown] id="xpNdBimWaMHD"
# As a quick sanity check, let's look at the `y` tensor in the last batch of data
# contributed by the fifth client (the one corresponding to the digit `5`).
# + id="bTNuL1W4bcuc"
federated_train_data[5][-1]['y']
# + [markdown] id="Xgvcwv7Obhat"
# Just to be sure, let's also look at the image corresponding to the last element of that batch.
# + id="cI4aat1za525"
from matplotlib import pyplot as plt
plt.imshow(federated_train_data[5][-1]['x'][-1].reshape(28, 28), cmap='gray')
plt.grid(False)
plt.show()
# + [markdown] id="J-ox58PA56f8"
# ### On combining TensorFlow and TFF
#
# In this tutorial, for compactness we immediately decorate functions that
# introduce TensorFlow logic with `tff.tf_computation`. However, for more complex
# logic, this is not the pattern we recommend. Debugging TensorFlow can already be
# a challenge, and debugging TensorFlow after it has been fully serialized and
# then re-imported necessarily loses some metadata and limits interactivity,
# making debugging even more of a challenge.
#
# Therefore, **we strongly recommend writing complex TF logic as stand-alone
# Python functions** (that is, without `tff.tf_computation` decoration). This way
# the TensorFlow logic can be developed and tested using TF best practices and
# tools (like eager mode), before serializing the computation for TFF (e.g., by invoking `tff.tf_computation` with a Python function as the argument).
# + [markdown] id="RSd6UatXbzw-"
# ### Defining a loss function
#
# Now that we have the data, let's define a loss function that we can use for
# training. First, let's define the type of input as a TFF named tuple. Since the
# size of data batches may vary, we set the batch dimension to `None` to indicate
# that the size of this dimension is unknown.
# + id="653xv5NXd4fy"
BATCH_SPEC = collections.OrderedDict(
x=tf.TensorSpec(shape=[None, 784], dtype=tf.float32),
y=tf.TensorSpec(shape=[None], dtype=tf.int32))
BATCH_TYPE = tff.to_type(BATCH_SPEC)
str(BATCH_TYPE)
# + [markdown] id="pb6qPUvyh5A1"
# You may be wondering why we can't just define an ordinary Python type. Recall
# the discussion in [part 1](custom_federated_algorithms_1.ipynb), where we
# explained that while we can express the logic of TFF computations using Python,
# under the hood TFF computations *are not* Python. The symbol `BATCH_TYPE`
# defined above represents an abstract TFF type specification. It is important to
# distinguish this *abstract* TFF type from concrete Python *representation*
# types, e.g., containers such as `dict` or `collections.namedtuple` that may be
# used to represent the TFF type in the body of a Python function. Unlike Python,
# TFF has a single abstract type constructor `tff.StructType` for tuple-like
# containers, with elements that can be individually named or left unnamed. This
# type is also used to model formal parameters of computations, as TFF
# computations can formally only declare one parameter and one result - you will
# see examples of this shortly.
#
# Let's now define the TFF type of model parameters, again as a TFF named tuple of
# *weights* and *bias*.
# + id="Og7VViafh-30"
MODEL_SPEC = collections.OrderedDict(
weights=tf.TensorSpec(shape=[784, 10], dtype=tf.float32),
bias=tf.TensorSpec(shape=[10], dtype=tf.float32))
MODEL_TYPE = tff.to_type(MODEL_SPEC)
print(MODEL_TYPE)
# + [markdown] id="iHhdaWSpfQxo"
# With those definitions in place, we can now define the loss for a given model over a single batch. Note the use of the `@tf.function` decorator inside the `@tff.tf_computation` decorator. This allows us to write TF using Python-like semantics even though we are inside a `tf.Graph` context created by the `tff.tf_computation` decorator.
# + id="4EObiz_Ke0uK"
# NOTE: `forward_pass` is defined separately from `batch_loss` so that it can
# be later called from within another tf.function. Necessary because a
# @tf.function decorated method cannot invoke a @tff.tf_computation.
@tf.function
def forward_pass(model, batch):
predicted_y = tf.nn.softmax(
tf.matmul(batch['x'], model['weights']) + model['bias'])
return -tf.reduce_mean(
tf.reduce_sum(
tf.one_hot(batch['y'], 10) * tf.math.log(predicted_y), axis=[1]))
@tff.tf_computation(MODEL_TYPE, BATCH_TYPE)
def batch_loss(model, batch):
return forward_pass(model, batch)
# + [markdown] id="8K0UZHGnr8SB"
# As expected, computation `batch_loss` returns `float32` loss given the model and
# a single data batch. Note how the `MODEL_TYPE` and `BATCH_TYPE` have been lumped
# together into a 2-tuple of formal parameters; you can recognize the type of
# `batch_loss` as `(<MODEL_TYPE,BATCH_TYPE> -> float32)`.
# + id="4WXEAY8Nr89V"
str(batch_loss.type_signature)
# + [markdown] id="pAnt_UcdnvGa"
# As a sanity check, let's construct an initial model filled with zeros and
# compute the loss over the batch of data we visualized above.
# + id="U8Ne8igan3os"
initial_model = collections.OrderedDict(
weights=np.zeros([784, 10], dtype=np.float32),
bias=np.zeros([10], dtype=np.float32))
sample_batch = federated_train_data[5][-1]
batch_loss(initial_model, sample_batch)
# + [markdown] id="ckigEAyDAWFV"
# Note that we feed the TFF computation with the initial model defined as a
# `dict`, even though the body of the Python function that defines it consumes
# model parameters as `model['weights']` and `model['bias']`. The arguments of the call
# to `batch_loss` aren't simply passed to the body of that function.
#
#
# What happens when we invoke `batch_loss`?
# The Python body of `batch_loss` has already been traced and serialized in the above cell where it was defined. TFF acts as the caller to `batch_loss`
# at the computation definition time, and as the target of invocation at the time
# `batch_loss` is invoked. In both roles, TFF serves as the bridge between TFF's
# abstract type system and Python representation types. At the invocation time,
# TFF will accept most standard Python container types (`dict`, `list`, `tuple`,
# `collections.namedtuple`, etc.) as concrete representations of abstract TFF
# tuples. Also, although as noted above, TFF computations formally only accept a
# single parameter, you can use the familiar Python call syntax with positional
# and/or keyword arguments in case where the type of the parameter is a tuple - it
# works as expected.
# + [markdown] id="eB510nILYbId"
# ### Gradient descent on a single batch
#
# Now, let's define a computation that uses this loss function to perform a single
# step of gradient descent. Note how in defining this function, we use
# `batch_loss` as a subcomponent. You can invoke a computation constructed with
# `tff.tf_computation` inside the body of another computation, though typically
# this is not necessary - as noted above, because serialization loses some
# debugging information, it is often preferable for more complex computations to
# write and test all the TensorFlow code without the `tff.tf_computation` decorator.
# + id="O4uaVxw3AyYS"
@tff.tf_computation(MODEL_TYPE, BATCH_TYPE, tf.float32)
def batch_train(initial_model, batch, learning_rate):
# Define a group of model variables and set them to `initial_model`. Must
# be defined outside the @tf.function.
model_vars = collections.OrderedDict([
(name, tf.Variable(name=name, initial_value=value))
for name, value in initial_model.items()
])
optimizer = tf.keras.optimizers.SGD(learning_rate)
@tf.function
def _train_on_batch(model_vars, batch):
# Perform one step of gradient descent using loss from `batch_loss`.
with tf.GradientTape() as tape:
loss = forward_pass(model_vars, batch)
grads = tape.gradient(loss, model_vars)
optimizer.apply_gradients(
zip(tf.nest.flatten(grads), tf.nest.flatten(model_vars)))
return model_vars
return _train_on_batch(model_vars, batch)
# + id="Y84gQsaohC38"
str(batch_train.type_signature)
# + [markdown] id="ID8xg9FCUL2A"
# When you invoke a Python function decorated with `tff.tf_computation` within the
# body of another such function, the logic of the inner TFF computation is
# embedded (essentially, inlined) in the logic of the outer one. As noted above,
# if you are writing both computations, it is likely preferable to make the inner
# function (`batch_loss` in this case) a regular Python or `tf.function` rather
# than a `tff.tf_computation`. However, here we illustrate that calling one
# `tff.tf_computation` inside another basically works as expected. This may be
# necessary if, for example, you do not have the Python code defining
# `batch_loss`, but only its serialized TFF representation.
#
# Now, let's apply this function a few times to the initial model to see whether
# the loss decreases.
# + id="8edcJTlXUULm"
model = initial_model
losses = []
for _ in range(5):
model = batch_train(model, sample_batch, 0.1)
losses.append(batch_loss(model, sample_batch))
# + id="3n1onojT1zHv"
losses
# + [markdown] id="EQk4Ha8PU-3P"
# ### Gradient descent on a sequence of local data
#
# Now, since `batch_train` appears to work, let's write a similar training
# function `local_train` that consumes the entire sequence of all batches from one
# user instead of just a single batch. The new computation will need to now
# consume `tff.SequenceType(BATCH_TYPE)` instead of `BATCH_TYPE`.
# + id="EfPD5a6QVNXM"
LOCAL_DATA_TYPE = tff.SequenceType(BATCH_TYPE)
@tff.federated_computation(MODEL_TYPE, tf.float32, LOCAL_DATA_TYPE)
def local_train(initial_model, learning_rate, all_batches):
@tff.tf_computation(LOCAL_DATA_TYPE, tf.float32)
def _insert_learning_rate_to_sequence(dataset, learning_rate):
return dataset.map(lambda x: (x, learning_rate))
batches_with_learning_rate = _insert_learning_rate_to_sequence(all_batches, learning_rate)
# Mapping function to apply to each batch.
@tff.federated_computation(MODEL_TYPE, batches_with_learning_rate.type_signature.element)
def batch_fn(model, batch_with_lr):
batch, lr = batch_with_lr
return batch_train(model, batch, lr)
return tff.sequence_reduce(batches_with_learning_rate, initial_model, batch_fn)
# + id="sAhkS5yKUgjC"
str(local_train.type_signature)
# + [markdown] id="EYT-SiopYBtH"
# There are quite a few details buried in this short section of code, let's go
# over them one by one.
#
# First, while we could have implemented this logic entirely in TensorFlow,
# relying on `tf.data.Dataset.reduce` to process the sequence similarly to how
# we've done it earlier, we've opted this time to express the logic in the glue
# language, as a `tff.federated_computation`. We've used the federated operator
# `tff.sequence_reduce` to perform the reduction.
#
# The operator `tff.sequence_reduce` is used similarly to
# `tf.data.Dataset.reduce`. You can think of it as essentially the same as
# `tf.data.Dataset.reduce`, but for use inside federated computations, which as
# you may remember, cannot contain TensorFlow code. It is a template operator with
# a formal parameter 3-tuple that consists of a *sequence* of `T`-typed elements,
# the initial state of the reduction (we'll refer to it abstractly as *zero*) of
# some type `U`, and the *reduction operator* of type `(<U,T> -> U)` that alters the
# state of the reduction by processing a single element. The result is the final
# state of the reduction, after processing all elements in a sequential order. In
# our example, the state of the reduction is the model trained on a prefix of the
# data, and the elements are data batches.
#
# Second, note that we have again used one computation (`batch_train`) as a
# component within another (`local_train`), but not directly. We can't use it as a
# reduction operator because it takes an additional parameter - the learning rate.
# To resolve this, we define an embedded federated computation `batch_fn` that
# binds to the `local_train`'s parameter `learning_rate` in its body. It is
# allowed for a child computation defined this way to capture a formal parameter
# of its parent as long as the child computation is not invoked outside the body
# of its parent. You can think of this pattern as an equivalent of
# `functools.partial` in Python.
#
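# As a rough plain-Python illustration of that analogy (the `train_step` function
# below is a hypothetical stand-in, not TFF code):

import functools

def train_step(model, batch, learning_rate):
    # Stand-in for `batch_train`, only to show the shape of the pattern.
    return model

# Fixing the learning rate up front, the way `batch_fn` captures `learning_rate` above.
step_with_fixed_lr = functools.partial(train_step, learning_rate=0.1)
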
# The practical implication of capturing `learning_rate` this way is, of course,
# that the same learning rate value is used across all batches.
#
# Now, let's try the newly defined local training function on the entire sequence
# of data from the same user who contributed the sample batch (digit `5`).
# + id="EnWFLoZGcSby"
locally_trained_model = local_train(initial_model, 0.1, federated_train_data[5])
# + [markdown] id="y0UXUqGk9zoF"
# Did it work? To answer this question, we need to implement evaluation.
# + [markdown] id="a8WDKu6WYy__"
# ### Local evaluation
#
# Here's one way to implement local evaluation by adding up the losses across all data
# batches (we could have just as well computed the average; we'll leave it as an
# exercise for the reader).
# + id="0RiODuc6z7Ln"
@tff.federated_computation(MODEL_TYPE, LOCAL_DATA_TYPE)
def local_eval(model, all_batches):
@tff.tf_computation(MODEL_TYPE, LOCAL_DATA_TYPE)
def _insert_model_to_sequence(model, dataset):
return dataset.map(lambda x: (model, x))
model_plus_data = _insert_model_to_sequence(model, all_batches)
@tff.tf_computation(tf.float32, batch_loss.type_signature.result)
def tff_add(accumulator, arg):
return accumulator + arg
return tff.sequence_reduce(
tff.sequence_map(
batch_loss,
model_plus_data), 0., tff_add)
# + id="pH2XPEAKa4Dg"
str(local_eval.type_signature)
# + [markdown] id="efX81HuE-BcO"
# Again, there are a few new elements illustrated by this code, let's go over them
# one by one.
#
# First, we have used a new federated operator for processing sequences:
# `tff.sequence_map`, which takes a *mapping function* `T->U` and a *sequence* of
# `T`, and emits a sequence of `U` obtained by applying the mapping function
# pointwise. Here, we map each data batch to a loss value, and then add up the
# resulting loss values to compute the total loss - in the code above, via
# `tff.sequence_reduce` with the small `tff_add` computation.
#
# Note that TFF also offers `tff.sequence_sum`, which simply adds all the elements
# of a sequence, and which would arguably be a better fit here: reduction is, by
# definition, sequential, whereas a mapping followed by a sum can be computed in
# parallel. When given a choice, it's best to stick with operators that don't
# constrain implementation choices, so that when our TFF computation is compiled in
# the future to be deployed to a specific environment, one can take full advantage
# of all potential opportunities for a faster, more scalable, more
# resource-efficient execution.
#
# Second, note that just as in `local_train`, the component function we need
# (`batch_loss`) takes more parameters than what the federated operator
# (`tff.sequence_map`) expects. Here we resolve this by first pairing the model
# with every batch in the sequence (the `_insert_model_to_sequence` helper above),
# so that `batch_loss` can then be mapped over the combined sequence directly.
# Wrapping such glue logic in a `tff.tf_computation` is the recommended way to
# embed TensorFlow logic in TFF.
#
# Now, let's see whether our training worked.
# + id="vPw6JSVf5q_x"
print('initial_model loss =', local_eval(initial_model,
federated_train_data[5]))
print('locally_trained_model loss =',
local_eval(locally_trained_model, federated_train_data[5]))
# + [markdown] id="6Tvu70cnBsUf"
# Indeed, the loss decreased. But what happens if we evaluated it on another
# user's data?
# + id="gjF0NYAj5wls"
print('initial_model loss =', local_eval(initial_model,
federated_train_data[0]))
print('locally_trained_model loss =',
local_eval(locally_trained_model, federated_train_data[0]))
# + [markdown] id="7WPumnRTBzUs"
# As expected, things got worse. The model was trained to recognize `5`, and has
# never seen a `0`. This brings the question - how did the local training impact
# the quality of the model from the global perspective?
# + [markdown] id="QJnL2mQRZKTO"
# ### Federated evaluation
#
# This is the point in our journey where we finally circle back to federated types
# and federated computations - the topic that we started with. Here's a pair of
# TFF types definitions for the model that originates at the server, and the data
# that remains on the clients.
# + id="LjGGhpoEBh_6"
SERVER_MODEL_TYPE = tff.type_at_server(MODEL_TYPE)
CLIENT_DATA_TYPE = tff.type_at_clients(LOCAL_DATA_TYPE)
# + [markdown] id="4gTXV2-jZtE3"
# With all the definitions introduced so far, expressing federated evaluation in
# TFF is a one-liner - we distribute the model to clients, let each client invoke
# local evaluation on its local portion of data, and then average out the loss.
# Here's one way to write this.
# + id="2zChEPzEBx4T"
@tff.federated_computation(SERVER_MODEL_TYPE, CLIENT_DATA_TYPE)
def federated_eval(model, data):
return tff.federated_mean(
tff.federated_map(local_eval, [tff.federated_broadcast(model), data]))
# + [markdown] id="IWcNONNWaE0N"
# We've already seen examples of `tff.federated_mean` and `tff.federated_map`
# in simpler scenarios, and at the intuitive level, they work as expected, but
# there's more in this section of code than meets the eye, so let's go over it
# carefully.
#
# First, let's break down the *let each client invoke local evaluation on its
# local portion of data* part. As you may recall from the preceding sections,
# `local_eval` has a type signature of the form `(<MODEL_TYPE, LOCAL_DATA_TYPE> ->
# float32)`.
#
# The federated operator `tff.federated_map` is a template that accepts as a
# parameter a 2-tuple that consists of the *mapping function* of some type `T->U`
# and a federated value of type `{T}@CLIENTS` (i.e., with member constituents of
# the same type as the parameter of the mapping function), and returns a result of
# type `{U}@CLIENTS`.
#
# Since we're feeding `local_eval` as a mapping function to apply on a per-client
# basis, the second argument should be of a federated type `{<MODEL_TYPE,
# LOCAL_DATA_TYPE>}@CLIENTS`, i.e., in the nomenclature of the preceding sections,
# it should be a federated tuple. Each client should hold a full set of arguments
# for `local_eval` as a member constituent. Instead, we're feeding it a 2-element
# Python `list`. What's happening here?
#
# Indeed, this is an example of an *implicit type cast* in TFF, similar to
# implicit type casts you may have encountered elsewhere, e.g., when you feed an
# `int` to a function that accepts a `float`. Implicit casting is used sparingly at
# this point, but we plan to make it more pervasive in TFF as a way to minimize
# boilerplate.
#
# The implicit cast that's applied in this case is the equivalence between
# federated tuples of the form `{<X,Y>}@Z`, and tuples of federated values
# `<{X}@Z,{Y}@Z>`. While formally, these two are different type signatures,
# looking at it from the programmer's perspective, each device in `Z` holds two
# units of data `X` and `Y`. What happens here is not unlike `zip` in Python, and
# indeed, we offer an operator `tff.federated_zip` that allows you to perform such
# conversions explicitly. When the `tff.federated_map` encounters a tuple as a
# second argument, it simply invokes `tff.federated_zip` for you.
#
# Given the above, you should now be able to recognize the expression
# `tff.federated_broadcast(model)` as representing a value of TFF type
# `{MODEL_TYPE}@CLIENTS`, and `data` as a value of TFF type
# `{LOCAL_DATA_TYPE}@CLIENTS` (or simply `CLIENT_DATA_TYPE`), the two getting
# zipped together through an implicit `tff.federated_zip` to form the second
# argument to `tff.federated_map`.
#
# The operator `tff.federated_broadcast`, as you'd expect, simply transfers data
# from the server to the clients.
#
# Now, let's see how our local training affected the average loss in the system.
# + id="tbmtJItcn94j"
print('initial_model loss =', federated_eval(initial_model,
federated_train_data))
print('locally_trained_model loss =',
federated_eval(locally_trained_model, federated_train_data))
# + [markdown] id="LQi2rGX_fK7i"
# Indeed, as expected, the loss has increased. In order to improve the model for
# all users, we'll need to train it on everyone's data.
# + [markdown] id="vkw9f59qfS7o"
# ### Federated training
#
# The simplest way to implement federated training is to locally train, and then
# average the models. This uses the same building blocks and patterns we've already
# discussed, as you can see below.
# + id="mBOC4uoG6dd-"
SERVER_FLOAT_TYPE = tff.type_at_server(tf.float32)
@tff.federated_computation(SERVER_MODEL_TYPE, SERVER_FLOAT_TYPE,
CLIENT_DATA_TYPE)
def federated_train(model, learning_rate, data):
return tff.federated_mean(
tff.federated_map(local_train, [
tff.federated_broadcast(model),
tff.federated_broadcast(learning_rate), data
]))
# + [markdown] id="z2vACMsQjzO1"
# Note that in the full-featured implementation of Federated Averaging provided by
# `tff.learning`, rather than averaging the models, we prefer to average model
# deltas, for a number of reasons, e.g., the ability to clip the update norms,
# for compression, etc.
#
# Let's see whether the training works by running a few rounds of training and
# comparing the average loss before and after.
# + id="NLx-3rLs9jGY"
model = initial_model
learning_rate = 0.1
for round_num in range(5):
model = federated_train(model, learning_rate, federated_train_data)
learning_rate = learning_rate * 0.9
loss = federated_eval(model, federated_train_data)
print('round {}, loss={}'.format(round_num, loss))
# + [markdown] id="Z0VjSLQzlUIp"
# For completeness, let's now also run on the test data to confirm that our model
# generalizes well.
# + id="ZaZT45yFMOaM"
print('initial_model test loss =',
federated_eval(initial_model, federated_test_data))
print('trained_model test loss =', federated_eval(model, federated_test_data))
# + [markdown] id="pxlHHwLGlgFB"
# This concludes our tutorial.
#
# Of course, our simplified example doesn't reflect a number of things you'd need
# to do in a more realistic scenario - for example, we haven't computed metrics
# other than loss. We encourage you to study
# [the implementation](https://github.com/tensorflow/federated/blob/main/tensorflow_federated/python/learning/federated_averaging.py)
# of federated averaging in `tff.learning` as a more complete example, and as a
# way to demonstrate some of the coding practices we'd like to encourage.
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
import numpy as np
import pandas as pd
mask = pd.read_csv('/Users/architverma/Documents/manifold-alignment-paper/gilad-data/gse118723/GSE118723_quality-single-cells.txt.gz', delimiter = '\t', header = None, names = ['Cell','Mask'])
counts = pd.read_csv('/Users/architverma/Documents/manifold-alignment-paper/gilad-data/gse118723/GSE118723_scqtl-counts.txt.gz', delimiter = '\t')
meta_data = pd.read_csv('/Users/architverma/Documents/manifold-alignment-paper/gilad-data/gse118723/GSE118723_scqtl-annotation.txt.gz', delimiter = '\t')
counts.shape
mm = mask['Mask'].values
counts_mat = np.array(counts.values[:,1:]).astype(np.int)
counts_mat_post = counts_mat[:,mm]
print(counts_mat_post.shape)
ids = list(counts)[1:]
ids_clip = [ids[i][0:7] for i in range(len(ids))]
ids_array = (np.array(ids_clip)[mm])
print(len(np.unique(ids_array)))
meta_data.head()
meta1000 = pd.read_csv('1000genomes_meta.csv')
# +
#meta1000
# -
dat_ids = [np.char.split(ids,sep = '.')[i][0] for i in range(len(ids))]
dat_meta_map = [meta1000[meta1000['Catalog ID'] == dat_ids[i]].index[0] for i in range(len(ids))]
dat_genders = meta1000['Gender'][dat_meta_map]
import h5py
#fit = h5py.File('./rand-500-fullM/model-output-final.hdf5')
fit = h5py.File('./gilad-pca-matern-q10-noz-log2/model-output-final.hdf5')
fit2 = h5py.File('./gender-results/only_gender-pca-actually-1000-fullM/model-output-final.hdf5')
fit3 = h5py.File('./gender-results/only_gender-rand-1000-fullM/model-output-final.hdf5')
import matplotlib.pyplot as plt
plt.style.use('seaborn-poster')
for i in dat_genders.unique():
mask = dat_genders == i
mask = mask[mm]
mask = np.array(mask.values, dtype = bool)
plt.scatter(fit['x_mean'][mask,1], fit['x_mean'][mask,2], s = 5.0, label = i)
plt.legend()
plt.xlabel('Latent Dimension 1')
plt.ylabel('Latent Dimension 2')
plt.title('Uncorrected')
plt.show()
for i in dat_genders.unique():
mask = dat_genders == i
mask = mask[mm]
mask = np.array(mask.values, dtype = bool)
plt.scatter(fit2['x_mean'][mask,1], fit2['x_mean'][mask,2], s = 5.0, label = i)
plt.legend()
plt.title('Corrected for Chip and Gender')
plt.xlabel('Latent Dimension 1')
plt.ylabel('Latent Dimension 2')
plt.show()
for i in dat_genders.unique():
mask = dat_genders == i
mask = mask[mm]
mask = np.array(mask.values, dtype = bool)
plt.scatter(fit3['x_mean'][mask,1], fit3['x_mean'][mask,2], s = 5.0, label = i)
plt.legend()
plt.title('Corrected for Chip and Gender')
plt.xlabel('Latent Dimension 1')
plt.ylabel('Latent Dimension 2')
plt.show()
from sklearn import metrics
# +
# for j in range(0,10):
# for k in range(j,10):
# for i in dat_genders.unique():
# mask = dat_genders == i
# mask = mask[mm]
# mask = np.array(mask.values, dtype = bool)
# plt.scatter(fit['x_mean'][mask,j], fit['x_mean'][mask,k], s = 3.0, label = i)
# plt.legend()
# plt.show()
# -
from sklearn.linear_model import LogisticRegression
y = np.array(dat_genders == 'Male')
lr_counts = LogisticRegression()
lr_counts.fit(counts_mat[:,mm].T, y[mm])
lr_counts.score(counts_mat[:,mm].T, y[mm])
predicted = lr_counts.predict_proba(counts_mat[:,mm].T)
metrics.roc_auc_score(y[mm], predicted[:,1])
lr_fit = LogisticRegression()
lr_fit.fit(fit['x_mean'], y[mm])
lr_fit.score(fit['x_mean'], y[mm])
predicted = lr_fit.predict_proba(fit['x_mean'])
metrics.roc_auc_score(y[mm], predicted[:,1])
lr_fit = LogisticRegression()
lr_fit.fit(fit['x_mean'][:,[0,7,8]], y[mm])
lr_fit.score(fit['x_mean'][:,[0,7,8]], y[mm])
predicted = lr_fit.predict_proba(fit['x_mean'][:,[0,7,8]])
metrics.roc_auc_score(y[mm], predicted[:,1])
lr_fit2 = LogisticRegression()
lr_fit2.fit(fit2['x_mean'], y[mm])
lr_fit2.score(fit2['x_mean'], y[mm])
predicted = lr_fit2.predict_proba(fit2['x_mean'])
metrics.roc_auc_score(y[mm], predicted[:,1])
lr_fit2 = LogisticRegression()
lr_fit2.fit(fit3['x_mean'], y[mm])
lr_fit2.score(fit3['x_mean'], y[mm])
predicted = lr_fit2.predict_proba(fit3['x_mean'])
metrics.roc_auc_score(y[mm], predicted[:,1])
dims = np.sort([8, 7, 0])
lr_fit2 = LogisticRegression()
lr_fit2.fit(fit2['x_mean'][:,dims], y[mm])
lr_fit2.score(fit2['x_mean'][:,dims], y[mm])
predicted = lr_fit2.predict_proba(fit2['x_mean'][:,dims])
metrics.roc_auc_score(y[mm], predicted[:,1])
plt.hist(predicted[:,1], bins = 100)
plt.show()
cmales = counts_mat_post.T[predicted[:,1] > 0.55]
cfemale = counts_mat_post.T[predicted[:,1] < 0.30]
from scipy.stats import ttest_ind  # not imported anywhere above this point
gender_diff = ttest_ind(cmales, cfemale, equal_var = False)
np.sum(gender_diff[1] < 1e-9)
# NOTE: `genes` (gene symbols queried via mygene) is only built further down in this
# notebook; run that cell first when executing top to bottom.
for i in np.where(gender_diff[1] < 1e-9)[0]:
    print(genes[i]['symbol'])
    print(gender_diff[0][i])
not_clear = counts_mat_post.T[predicted[:,1] < 0.55]
gender_diff = ttest_ind(cmales, not_clear, equal_var = False)
for i in np.where(gender_diff[1] < 1e-200)[0]:
print(genes[i]['symbol'])
print(gender_diff[0][i])
plt.scatter(fit2['x_mean'][:,0], fit2['x_mean'][:,7], s = 5.0, c = predicted[:,0])
plt.colorbar()
plt.show()
for i in dat_genders.unique():
mask = dat_genders == i
mask = mask[mm]
mask = np.array(mask.values, dtype = bool)
plt.scatter(fit2['x_mean'][mask,0], fit2['x_mean'][mask,7], s = 5.0, label = i)
plt.legend()
plt.show()
np.argsort(np.sum(fit3['lengthscales'][0:10,0:2], axis = 1))
np.argsort(np.sum(fit2['lengthscales'][0:10,0:2], axis = 1))
np.argsort(np.sum(fit['lengthscales'][0:10,0:2], axis = 1))
# +
# plt.style.use('seaborn-poster')
# for j in range(0,10):
# for k in range(j,10):
# for i in dat_genders.unique():
# mask = dat_genders == i
# mask = mask[mm]
# mask = np.array(mask.values, dtype = bool)
# plt.subplot(121)
# plt.scatter(fit['x_mean'][mask,j], fit['x_mean'][mask,k], s = 3.0, label = i)
# plt.subplot(122)
# plt.scatter(fit2['x_mean'][mask,j], fit2['x_mean'][mask,k], s = 3.0, label = i)
# plt.legend()
# plt.show()
# +
# plt.style.use('seaborn-poster')
# j = 0
# k = 8
# for i in dat_genders.unique():
# mask = dat_genders == i
# mask = mask[mm]
# mask = np.array(mask.values, dtype = bool)
# plt.subplot(121)
# plt.scatter(fit['x_mean'][mask,j], fit['x_mean'][mask,k], s = 3.0, label = i)
# plt.subplot(122)
# plt.scatter(fit2['x_mean'][mask,j], fit2['x_mean'][mask,k], s = 3.0, label = i)
# plt.legend()
# plt.show()
# -
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
process = h5py.File('gilad_processed.hdf5','r')
z = np.hstack((process['z_c1'],np.expand_dims(y[mm],axis = 1)))
ols = LinearRegression()
ols.fit(z, counts_mat_post.T)
yres = counts_mat_post.T - ols.predict(z)
pca = PCA(n_components = 10)
zres = pca.fit_transform(yres)
np.sum(pca.explained_variance_ratio_)
lr_yres = LogisticRegression()
lr_yres.fit(zres, y[mm])
lr_yres.score(zres, y[mm])
predicted = lr_yres.predict_proba(zres)
metrics.roc_auc_score(y[mm], predicted[:,1])
pca = PCA(n_components = 200)
zres = pca.fit_transform(yres)
print(np.sum(pca.explained_variance_ratio_))
lr_yres = LogisticRegression()
lr_yres.fit(zres, y[mm])
lr_yres.score(zres, y[mm])
predicted = lr_yres.predict_proba(zres)
metrics.roc_auc_score(y[mm], predicted[:,1])
pca = PCA(n_components = 1000)
zres = pca.fit_transform(yres)
print(np.sum(pca.explained_variance_ratio_))
lr_yres = LogisticRegression()
lr_yres.fit(zres, y[mm])
lr_yres.score(zres, y[mm])
predicted = lr_yres.predict_proba(zres)
metrics.roc_auc_score(y[mm], predicted[:,1])
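# The three cells above repeat the same residual-PCA + logistic-regression check for
# 10, 200 and 1000 components; a small helper (a sketch, not part of the original runs)
# makes the pattern explicit.
def auc_after_pca(residuals, labels, n_components):
    # Project the residual expression matrix onto principal components and report how
    # well a logistic regression on them can still recover the label, together with
    # the variance explained by the retained components.
    pca_local = PCA(n_components = n_components)
    z_local = pca_local.fit_transform(residuals)
    clf = LogisticRegression().fit(z_local, labels)
    proba = clf.predict_proba(z_local)[:, 1]
    return metrics.roc_auc_score(labels, proba), np.sum(pca_local.explained_variance_ratio_)
# e.g. auc_after_pca(yres, y[mm], 200)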
import GPy
from IPython.display import display
kernel = GPy.kern.RBF(input_dim=z.shape[1])
# +
# m = GPy.models.GPRegression(z,counts_mat_post.T,kernel)
# m.optimize()
# +
# corrected_zero = m.predict(z)
# +
# gpres = counts_mat_post.T - corrected_zero[0]
# +
# gpres.shape
# +
# pca = PCA(n_components = 1000)
# zres = pca.fit_transform(gpres)
# # np.sum(pca.explained_variance_ratio_)
# +
# lr_gpres = LogisticRegression()
# lr_gpres.fit(zres, y[mm])
# lr_gpres.score(zres, y[mm])
# -
# Marker Genes
# +
# https://www.abcam.com/primary-antibodies/b-cells-basic-immunophenotyping
# -
import mygene
gene_var = np.var(counts_mat_post, axis = 1)
gene_mean = np.mean(counts_mat_post, axis = 1)
argsort_var = np.argsort(gene_var)
argsort_mean = np.argsort(gene_mean)
mg = mygene.MyGeneInfo()
genes = mg.querymany(counts['gene'].values, scopes = 'ensembl.gene', fields = 'symbol', species = 'human')
from scipy.stats import variation
genes_cv = variation(counts_mat_post, axis = 1)
argsort_cv = np.argsort(genes_cv)
argsort_is_nan = True
lastix = 1;
while argsort_is_nan:
argsort_is_nan = np.isnan(genes_cv[argsort_cv[-lastix]])
lastix += 1
mask = np.zeros(len(genes))
for i in range(len(genes)):
try:
mask[i] = 'CD' in genes[i]['symbol']
except:
mask[i] = False
# +
#for i in np.where(mask)[0]:
# print(str(i) + ': ' + genes[i]['symbol'])
# -
markers = ['CD19', 'CD27', 'CD38', 'CD40', 'CD80', 'CD1D', 'CD1A', 'CD1C', 'CD1B', 'CD1E', 'CD22', 'CD5L']
ixs = [14077, 7739, 49, 2419, 10158, 10159, 10161, 10163, 10165, 312, 1240]
celltype = ['Activated B Cell', 'Plasma Cell', 'Plasma Cell', 'Memory Cell', 'Memory Cell',
'Marginal zone B cells', 'Marginal zone B cells', 'Marginal zone B cells', 'Marginal zone B cells', 'Marginal zone B cells',
'Follicular B Cells', 'Regulatory B Cells']
import umap
reducer = umap.UMAP()
embedding = reducer.fit_transform(fit['x_mean'])
for i in dat_genders.unique():
mask = dat_genders == i
mask = mask[mm]
mask = np.array(mask.values, dtype = bool)
plt.scatter(embedding[mask,0], embedding[mask,1], s = 5.0, label = i)
plt.legend()
plt.show()
embedding2 = reducer.fit_transform(fit2['x_mean'])
for i in dat_genders.unique():
mask = dat_genders == i
mask = mask[mm]
mask = np.array(mask.values, dtype = bool)
plt.scatter(embedding2[mask,0], embedding2[mask,1], s = 5.0, label = i)
plt.legend()
plt.show()
embedding3 = reducer.fit_transform(fit3['x_mean'])
plt.style.use('seaborn-poster')
for i in range(len(ixs)):
plt.scatter(embedding2[:,0], embedding2[:, 1], c = counts_mat_post[i,:], cmap = 'Reds', s= 5.0)
plt.title(markers[i])
plt.colorbar()
plt.show()
plt.style.use('seaborn-poster')
for i in range(len(ixs)):
plt.scatter(embedding3[:,0], embedding3[:, 1], c = counts_mat_post[i,:], cmap = 'Reds', s= 5.0)
plt.title(markers[i])
plt.colorbar()
plt.show()
for i in range(1,6):
plt.scatter(embedding2[:,0], embedding2[:, 1], c = counts_mat_post[argsort_var[-i],:], cmap = 'Reds', s= 5.0)
plt.title(genes[argsort_var[-i]]['symbol'])
plt.colorbar()
plt.show()
from sklearn.cluster import KMeans
kmeans = KMeans(n_clusters = 2)
c2 = kmeans.fit_predict(fit2['x_mean'])
for i in range(np.max(c2) + 1):
mask = c2 == i
plt.scatter(embedding3[mask,0], embedding3[mask, 1], s= 5.0)
plt.show()
mask1 = c2 == 1
counts1 = counts_mat_post[:, mask1]
counts0 = counts_mat_post[:, np.logical_not(mask1)]
diff = np.mean(counts0, axis = 1) - np.mean(counts1, axis = 1)
sort_diff = np.argsort(np.abs(diff))
for i in np.where(np.abs(diff) > 100)[0]:
plt.scatter(embedding2[:,0], embedding2[:, 1], c = counts_mat_post[i,:], cmap = 'Reds', s= 5.0)
plt.title(genes[i]['symbol'])
plt.colorbar()
plt.show()
for i in range(1,11):
cix = argsort_cv[-(lastix + i)]
plt.scatter(embedding2[:,0], embedding2[:, 1], c = counts_mat_post[cix,:], cmap = 'Reds', s= 5.0)
plt.title(genes[cix]['symbol'])
plt.colorbar()
plt.show()
genes_cv[argsort_cv[-448]]
genes[argsort_cv[-448]]['symbol']
plt.scatter(gene_mean, np.sqrt(gene_var), s = 5.0)
plt.show()
counts[counts['gene'] == 'ENSG00000229807']
for i in range(len(genes)):
try:
found = genes[i]['symbol'] == 'RPS4Y1'
if found:
print(i)
except:
found = False
cix = 6141
plt.scatter(embedding2[:,0], embedding2[:, 1], c = counts_mat_post[cix,:], cmap = 'Reds', s= 5.0)
plt.title(genes[cix]['symbol'])
plt.colorbar()
plt.show()
cix = ixs[0]
c = np.log2(1 + counts_mat_post[cix,:])
for j in range(0,10):
for k in range(j,10):
plt.subplot(131)
plt.scatter(fit['x_mean'][:,j], fit['x_mean'][:,k], c = c, cmap = 'Reds', s = 5.0)
plt.subplot(132)
plt.scatter(fit2['x_mean'][:,j], fit2['x_mean'][:,k], c = c, cmap = 'Reds', s = 5.0)
plt.subplot(133)
plt.scatter(fit3['x_mean'][:,j], fit3['x_mean'][:,k], c = c, cmap = 'Reds', s = 5.0)
plt.colorbar()
plt.show()
from scipy.stats import spearmanr
spearmans = spearmanr(fit['x_mean'][:,[7,8]], counts_mat_post.T)
sub = spearmans[0][0:2,2:]
test = np.argsort(sub[0,:])
counter = 0
ix = 1
while counter < 5:
if np.isnan(sub[0,test[-ix]]):
pass
else:
print(genes[test[-ix]]['symbol'])
print(test[-ix])
counter += 1
ix += 1
test = np.argsort(sub[1,:])
counter = 0
ix = 1
while counter < 5:
if np.isnan(sub[0,test[-ix]]):
pass
else:
print(genes[test[-ix]]['symbol'])
print(test[-ix])
counter += 1
ix += 1
np.argsort(sub[0,])
test[-1]
ix
# +
## posterior gene expression, compare manifold of genes in male vs female posterior
# -
from sklearn.gaussian_process.kernels import Matern, RBF
def kernel(x1,x2, dims, z):
m12 = Matern(length_scale = z['lengthscales'][dims,1], nu = 0.5)
rbf = RBF(length_scale = z['lengthscales'][dims,0])
m12eval = m12(x1[:,dims],x2[:,dims])
rbfeval = rbf(x1[:,dims],x2[:,dims])
total = z['variances'][0]*np.array(rbfeval) + z['variances'][1]*m12eval
return total
def impute(y, z, dims, fixed):
N = y.shape[0]
xu = z['xu']
zpost = np.concatenate((z['x_mean'],fixed),axis = 1)
#zpost.shape
qkfu = kernel(zpost,xu, dims, z)
qkff = kernel(zpost,zpost, dims, z)
kuupsi = np.linalg.inv(np.matmul(qkfu.T,qkfu))
kuu = kernel(xu,xu, dims,z)
kuukuupsi = np.matmul(kuu,kuupsi)
psiy = np.matmul(qkfu.T,1.+y)
qu = np.matmul(kuukuupsi,psiy).T
kuuinv = np.linalg.inv(kuu)
qkfukuuinv = np.matmul(qkfu,kuuinv)
#qkfukuuinv.shape
qkfukuuinvu = np.matmul(qkfukuuinv,qu.T)
return np.array(qkfukuuinvu)
y[mm]
male = counts_mat_post[:,y[mm]]
female = counts_mat_post[:, np.logical_not(y[mm])]
from scipy.stats import ttest_ind
t, prob = ttest_ind(male,female,axis = 1, equal_var = False)
np.where(prob < 1e-100)[0]
genes[2567]['symbol']
genes[6141]['symbol']
genes[17171]['symbol']
plt.style.use('seaborn-poster')
ix1 = 991
ix2 = 2567
plt.scatter(male[ix1], male[ix2], c = 'b', s = 10.0, alpha = 0.5)
plt.scatter(female[ix1], female[ix2], c = 'r', s = 10.0, alpha = 0.5)
plt.xlabel(genes[ix1]['symbol'])
plt.ylabel(genes[ix2]['symbol'])
plt.show()
fixed_female = np.zeros((5597,1))
fixed_male = np.ones((5597,1))
print(fit['lengthscales'][-1,:])
print(np.mean(fit['lengthscales'][10:-1,:], axis = 0))
print(fit['lengthscales'][0:10,:])
dims = range(10)
t_impute_male = impute(counts_mat_post.T, fit2, dims, fixed_male)
t_impute_female = impute(counts_mat_post.T, fit2, dims, fixed_female)
ix1 = 991
ix2 = 2567
plt.scatter(t_impute_male[:,ix1], t_impute_male[:,ix2], c = 'b', s = 10.0, alpha = 0.5)
plt.scatter(t_impute_female[:,ix1], t_impute_female[:,ix2], c = 'r', s = 10.0, alpha = 0.5)
plt.xlabel(genes[ix1]['symbol'])
plt.ylabel(genes[ix2]['symbol'])
plt.show()
np.max(t_impute_male - counts_mat_post.T)
np.max(t_impute_female - counts_mat_post.T)
np.max(t_impute_female - t_impute_male)
t_impute_male[:,ix1]
fixed_test = 10*np.ones((5597,79))
t_impute_test = impute(counts_mat_post.T, fit2, dims, fixed_test)
diff_test = t_impute_test - t_impute_male
# qx_mean = np.vstack(fit['x_mean'],  # incomplete line in the notebook; left commented out
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # 8. Support Vector Machines and linear regression
#
# In this case we use a supervised learning algorithm; instead of predicting a single concrete value, we perform a classification and a regression.
#
# We start from the data that was analysed, normalised and bounded
# in step 0, for training.
#
# SVM basically consists of training a model to label classes in the data and then, given a sample, deciding which class it belongs to.
#
#
# Starting from an initial construction of the model, we will iterate through validation and tuning (changing parameters and variables) until we obtain the model that best predicts our target, without under- or over-fitting.
#
# ## Data import and variable selection
#
# +
# Libraries to use
import numpy as np
import pandas as pd
from scipy.stats import skew
from sklearn.linear_model import ElasticNet
from sklearn.preprocessing import RobustScaler
from sklearn.model_selection import cross_val_score, GridSearchCV
from sklearn.svm import SVR
import matplotlib.pyplot as plt
from sklearn.metrics import mean_absolute_error
# +
# Data import
df_train = pd.read_csv("data/PreciosCasas/train_final.csv", sep='\t', encoding='utf-8')
df_test = pd.read_csv("data/PreciosCasas/test.csv")
# print a summary of the training data
df_train.describe()
# -
df_test.describe()
# +
# Select the variables: every column as a feature and SalePrice as the target
X_train = df_train.loc[:, df_train.columns != 'Unnamed: 0']
X_train= X_train.loc[:, X_train.columns != 'SalePrice']
X_train= X_train.loc[:, X_train.columns != 'Id']
print (X_train.head())
y = df_train['SalePrice']
# -
# Build the linear regression model
alphas = [0.0005, 0.00075, 0.001, 0.00125, 0.0015]
scores = [
np.sqrt(-cross_val_score(ElasticNet(alpha), X_train, y, scoring="neg_mean_squared_error", cv=5)).mean()
for alpha in alphas
]
scores = pd.Series(scores, index=alphas)
scores.plot(title = "Alphas vs error (Lowest error is best)")
# ## SVM model
#
# This is the interesting part. We are going to use sklearn's GridSearchCV function.
#
# To be able to validate the model, we split the data into two groups, predictors and target. We do this with a split based on a randomly generated number. Since we want every run of the model to produce the same result, we set the random_state argument.
#
# +
from sklearn.model_selection import train_test_split
# Split the data into two groups
train_X, val_X, train_y, val_y = train_test_split(X_train, y,random_state = 0)
# +
gsc = GridSearchCV(
estimator=SVR(kernel='rbf'),
param_grid={
'C': range(1, 4),
'epsilon': (0.03, 0.04, 0.05, 0.06, 0.07),
},
cv=5
)
grid_result = gsc.fit(train_X, train_y)
print("Best: %f using %s" % (grid_result.best_score_, grid_result.best_params_))
# +
# Let's look at the relationship between the SVM parameters
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
ax.scatter([row['C'] for row in grid_result.cv_results_['params']],
[row['epsilon'] for row in grid_result.cv_results_['params']],
grid_result.cv_results_['mean_test_score'],
c='b', marker='^')
ax.set_xlabel('C')
ax.set_ylabel('Epsilon')
ax.set_zlabel('Score')
# -
# training the models with the parameters found above
# +
linear_model = ElasticNet(alpha=0.001)
linear_model.fit(train_X, train_y)
svr_model = SVR(kernel='rbf', C=1, epsilon=0.03)
svr_model.fit(train_X, train_y)
# -
# ## Prediction
#
#
# +
# MAE error for this measurement
prediccion = svr_model.predict(val_X)
print("and in this case the error is: ")
print(mean_absolute_error(val_y, prediccion))
# Let's look at it in a scatter plot
plt.scatter(prediccion, val_y );
plt.title('Validation');
plt.ylabel('Model');
plt.xlabel('Prediction');
plt.show()
# +
# MAE error for this measurement
prediccion = linear_model.predict(val_X)
print("and in this case the error is: ")
print(mean_absolute_error(val_y, prediccion))
# Let's look at it in a scatter plot
plt.scatter(prediccion, val_y );
plt.title('Validation');
plt.ylabel('Model');
plt.xlabel('Prediction');
plt.show()
# -
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] papermill={} tags=["awesome-notebooks/HubSpot/HubSpot_Create_deal.ipynb", "awesome-notebooks/HubSpot/HubSpot_Create_deal.ipynb"]
# <img width="10%" alt="Naas" src="https://landen.imgix.net/jtci2pxwjczr/assets/5ice39g4.png?w=160"/>
# + [markdown] papermill={} tags=["awesome-notebooks/HubSpot/HubSpot_Create_deal.ipynb", "awesome-notebooks/HubSpot/HubSpot_Create_deal.ipynb"]
# # HubSpot - Create deal
# <a href="https://app.naas.ai/user-redirect/naas/downloader?url=https://raw.githubusercontent.com/jupyter-naas/awesome-notebooks/master/HubSpot/HubSpot_Create_deal.ipynb" target="_parent"><img src="https://naasai-public.s3.eu-west-3.amazonaws.com/open_in_naas.svg"/></a>
# + [markdown] papermill={} tags=["awesome-notebooks/HubSpot/HubSpot_Create_deal.ipynb", "awesome-notebooks/HubSpot/HubSpot_Create_deal.ipynb"]
# **Tags:** #hubspot #crm #sales #deal #naas_drivers #snippet
# + [markdown] papermill={} tags=["awesome-notebooks/HubSpot/HubSpot_Create_deal.ipynb", "awesome-notebooks/HubSpot/HubSpot_Create_deal.ipynb"]
# **Author:** [Florent Ravenel](https://www.linkedin.com/in/florent-ravenel/)
# + [markdown] papermill={} tags=["awesome-notebooks/HubSpot/HubSpot_Create_deal.ipynb", "awesome-notebooks/HubSpot/HubSpot_Create_deal.ipynb"]
# ## Input
# + [markdown] papermill={} tags=["awesome-notebooks/HubSpot/HubSpot_Create_deal.ipynb", "awesome-notebooks/HubSpot/HubSpot_Create_deal.ipynb"]
# ### Import library
# + papermill={} tags=["awesome-notebooks/HubSpot/HubSpot_Create_deal.ipynb", "awesome-notebooks/HubSpot/HubSpot_Create_deal.ipynb"]
from naas_drivers import hubspot
# + [markdown] papermill={} tags=["awesome-notebooks/HubSpot/HubSpot_Create_deal.ipynb", "awesome-notebooks/HubSpot/HubSpot_Create_deal.ipynb"]
# ### Setup your HubSpot
# 👉 Access your [HubSpot API key](https://knowledge.hubspot.com/integrations/how-do-i-get-my-hubspot-api-key)
# + papermill={} tags=["awesome-notebooks/HubSpot/HubSpot_Create_deal.ipynb", "awesome-notebooks/HubSpot/HubSpot_Create_deal.ipynb"]
HS_API_KEY = 'YOUR_HUBSPOT_API_KEY'
# + [markdown] papermill={} tags=["awesome-notebooks/HubSpot/HubSpot_Create_deal.ipynb", "awesome-notebooks/HubSpot/HubSpot_Create_deal.ipynb"]
# ### Enter deal parameters
# + papermill={} tags=["awesome-notebooks/HubSpot/HubSpot_Create_deal.ipynb", "awesome-notebooks/HubSpot/HubSpot_Create_deal.ipynb"]
dealname = "TEST"
closedate = None #must be in format %Y-%m-%d
amount = None
hubspot_owner_id = None
# + [markdown] papermill={} tags=["awesome-notebooks/HubSpot/HubSpot_Create_deal.ipynb", "awesome-notebooks/HubSpot/HubSpot_Create_deal.ipynb"]
# ### Enter deal stage ID
# + papermill={} tags=["awesome-notebooks/HubSpot/HubSpot_Create_deal.ipynb", "awesome-notebooks/HubSpot/HubSpot_Create_deal.ipynb"]
df_pipelines = hubspot.connect(HS_API_KEY).pipelines.get_all()
df_pipelines
# + papermill={} tags=["awesome-notebooks/HubSpot/HubSpot_Create_deal.ipynb", "awesome-notebooks/HubSpot/HubSpot_Create_deal.ipynb"]
dealstage = '5102584'
# + [markdown] papermill={} tags=["awesome-notebooks/HubSpot/HubSpot_Create_deal.ipynb", "awesome-notebooks/HubSpot/HubSpot_Create_deal.ipynb"]
# ## Model
# + [markdown] papermill={} tags=["awesome-notebooks/HubSpot/HubSpot_Create_deal.ipynb", "awesome-notebooks/HubSpot/HubSpot_Create_deal.ipynb"]
# ### Create deal
# + [markdown] papermill={} tags=["awesome-notebooks/HubSpot/HubSpot_Create_deal.ipynb", "awesome-notebooks/HubSpot/HubSpot_Create_deal.ipynb"]
# ### Using send method
# + papermill={} tags=["awesome-notebooks/HubSpot/HubSpot_Create_deal.ipynb", "awesome-notebooks/HubSpot/HubSpot_Create_deal.ipynb"]
send_deal = {"properties":
{
"dealstage": dealstage,
"dealname": dealname,
"amount": amount,
"closedate": closedate,
"hubspot_owner_id": hubspot_owner_id,
}
}
deal1 = hubspot.connect(HS_API_KEY).deals.send(send_deal)
# + [markdown] papermill={} tags=["awesome-notebooks/HubSpot/HubSpot_Create_deal.ipynb", "awesome-notebooks/HubSpot/HubSpot_Create_deal.ipynb"]
# ### Using create method
# + papermill={} tags=["awesome-notebooks/HubSpot/HubSpot_Create_deal.ipynb", "awesome-notebooks/HubSpot/HubSpot_Create_deal.ipynb"]
deal2 = hubspot.connect(HS_API_KEY).deals.create(
dealname,
dealstage,
closedate
)
# + [markdown] papermill={} tags=["awesome-notebooks/HubSpot/HubSpot_Create_deal.ipynb", "awesome-notebooks/HubSpot/HubSpot_Create_deal.ipynb"]
# ## Output
# + [markdown] papermill={} tags=["awesome-notebooks/HubSpot/HubSpot_Create_deal.ipynb", "awesome-notebooks/HubSpot/HubSpot_Create_deal.ipynb"]
# ### Display results
# + papermill={} tags=["awesome-notebooks/HubSpot/HubSpot_Create_deal.ipynb", "awesome-notebooks/HubSpot/HubSpot_Create_deal.ipynb"]
deal1
# + papermill={} tags=["awesome-notebooks/HubSpot/HubSpot_Create_deal.ipynb", "awesome-notebooks/HubSpot/HubSpot_Create_deal.ipynb"]
deal2
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
"""
Stream tubes originating from a probing grid of points.
Data is from CFD analysis of airflow in an office with
ventilation and a burning cigarette.
"""
# see original script at:
# https://github.com/Kitware/VTK/blob/master/Examples/
# VisualizationAlgorithms/Python/officeTube.py
from vtkplotter import *
from office_furniture import furniture
# We read a data file that is a CFD analysis of airflow in an office
# (with ventilation and a burning cigarette).
sgrid = loadStructuredGrid(datadir + "office.binary.vtk")
# Now we will generate multiple streamlines in the data. We create a
# grid of points and then use those as integration seeds.
seeds = Grid(pos=[2,2,1], normal=[1,0,0], resx=2, resy=3, c="gray")
# We select the integration order to use (RungeKutta order 4) and
# associate it with the streamer. We integrate in the forward direction.
slines = streamLines(
sgrid, seeds,
integrator="rk4",
direction="forward",
initialStepSize=0.01,
maxPropagation=15,
tubes={"radius":0.004, "varyRadius":2, "ratio":1},
)
show(slines, seeds)
# -
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # segmentation with SVM
import os
import cv2
import numpy as np
from numpy.lib import stride_tricks  # used by harlick_features below
import imageio
import matplotlib.pyplot as plt
import progressbar
from skimage import feature  # local binary pattern features
from sklearn.model_selection import train_test_split
from sklearn import metrics
# import mahotas as mt
# +
Image_dir = '/home/zhangj41/HW/group_proj/Immune-Cells_2D/190718_Tcells/Th0'
Mask_dir = os.path.join(Image_dir,'Masks')
Image_name = 'Tcells_Th0_1f_photons.tiff'
I = imageio.imread(os.path.join(Image_dir,Image_name)).astype("uint8")
if len(I.shape) > 2:
I = I[:,:,0]
print(I.dtype)
print(I.shape)
print(I.min())
print(I.mean())
print(I.std())
# print images
plt.figure(figsize=(10,10))
plt.imshow(I, cmap='gray', vmin=0, vmax=255)
# +
hist, bins = np.histogram(I, bins=256, range=[0,256])
cum_hist = np.cumsum(hist)
height, width = I.shape
norm_cum_hist = cum_hist / (height * width)
norm_hist = hist / hist.max()
#width = (bins[1] - bins[0])
center = (bins[:-1] + bins[1:]) / 2
plt.bar(center, norm_hist, align='center')
plt.plot(norm_cum_hist, color='r')
plt.show()
# +
hists_cdf = (norm_cum_hist * 255).astype("uint8")
# mapping
img_eq = hists_cdf[I]
# img_eq[img_eq<100] = 0
plt.figure(figsize=(10,10))
plt.imshow(img_eq, cmap='gray')
# +
hist, bins = np.histogram(img_eq, bins=256, range=[0,256])
cum_hist = np.cumsum(hist)
height, width = img_eq.shape
norm_cum_hist = cum_hist / (height * width)
norm_hist = hist / hist.max()
#width = (bins[1] - bins[0])
center = (bins[:-1] + bins[1:]) / 2
plt.bar(center, norm_hist, align='center')
plt.plot(norm_cum_hist, color='r')
plt.show()
# -
M_cell = 'Tcells_Th0_1n_photons_cells.tiff'
Mask_cell = imageio.imread(os.path.join(Mask_dir,M_cell))
plt.imshow(Mask_cell, cmap='gray', vmin=0, vmax=255)
print(Mask_cell.shape)
Mask_cell[Mask_cell>=1] = 1
plt.imshow(Mask_cell, cmap='gray', vmin=0, vmax=1)
print(Mask_cell.max(), Mask_cell.shape)
x = img_eq.reshape(-1,1)
x.shape
# ## cell segmentation
def img_preprocess(img_dir):
image = imageio.imread(img_dir).astype("uint8")
hist, bins = np.histogram(image, bins=256, range=[0,256])
cum_hist = np.cumsum(hist)
height, width = image.shape
norm_cum_hist = cum_hist / (height * width)
hists_cdf = (norm_cum_hist * 255).astype("uint8")
# mapping
img_eq = hists_cdf[image]
plt.imshow(img_eq, cmap='gray', vmin=0, vmax=255)
return img_eq
# vec_img_eq = img_eq.reshape(-1,1)
# return vec_img_eq
def label_preprocess(mask_dir):
mask = imageio.imread(mask_dir).astype('uint8')
mask[mask>=1] = 1
plt.imshow(mask, cmap='gray', vmin=0, vmax=1)
return mask
# vec_mask = mask.reshape(-1,1)
# return vec_mask
img_dir = '/home/zhangj41/HW/group_proj/Immune-Cells_2D/190718_Tcells/Th0/Tcells_Th0_1n_photons.tiff'
lab_dir = '/home/zhangj41/HW/group_proj/Immune-Cells_2D/190718_Tcells/Th0/Masks/Tcells_Th0_1n_photons_cells.tiff'
img_pre = img_preprocess(img_dir)
mask_pre = label_preprocess(lab_dir)
def harlick_features(img, h_neigh, ss_idx):
print ('[INFO] Computing haralick features.')
size = h_neigh
shape = (img.shape[0] - size + 1, img.shape[1] - size + 1, size, size)
strides = 2 * img.strides
patches = stride_tricks.as_strided(img, shape=shape, strides=strides)
patches = patches.reshape(-1, size, size)
if len(ss_idx) == 0 :
bar = progressbar.ProgressBar(maxval=len(patches), \
widgets=[progressbar.Bar('=', '[', ']'), ' ', progressbar.Percentage()])
else:
bar = progressbar.ProgressBar(maxval=len(ss_idx), \
widgets=[progressbar.Bar('=', '[', ']'), ' ', progressbar.Percentage()])
bar.start()
h_features = []
if len(ss_idx) == 0:
for i, p in enumerate(patches):
bar.update(i+1)
h_features.append(calc_haralick(p))
else:
for i, p in enumerate(patches[ss_idx]):
bar.update(i+1)
h_features.append(calc_haralick(p))
#h_features = [calc_haralick(p) for p in patches[ss_idx]]
return np.array(h_features)
def create_binary_pattern(img, p, r):
print ('[INFO] Computing local binary pattern features.')
lbp = feature.local_binary_pattern(img, p, r)
return (lbp-np.min(lbp))/(np.max(lbp)-np.min(lbp)) * 255
def create_features(img, img_gray, label, train=True):
lbp_radius = 24 # local binary pattern neighbourhood
h_neigh = 11 # haralick neighbourhood
num_examples = 1000 # number of examples per image to use for training model
lbp_points = lbp_radius*8
h_ind = int((h_neigh - 1)/ 2)
feature_img = np.zeros((img.shape[0],img.shape[1],4))
feature_img[:,:,:3] = img
img = None
feature_img[:,:,3] = create_binary_pattern(img_gray, lbp_points, lbp_radius)
feature_img = feature_img[h_ind:-h_ind, h_ind:-h_ind]
features = feature_img.reshape(feature_img.shape[0]*feature_img.shape[1], feature_img.shape[2])
if train == True:
ss_idx = subsample_idx(0, features.shape[0], num_examples)
features = features[ss_idx]
else:
ss_idx = []
h_features = harlick_features(img_gray, h_neigh, ss_idx)
features = np.hstack((features, h_features))
if train == True:
label = label[h_ind:-h_ind, h_ind:-h_ind]
labels = label.reshape(label.shape[0]*label.shape[1], 1)
labels = labels[ss_idx]
else:
labels = None
return features, labels
def create_training_dataset(image_list, label_list):
print ('[INFO] Creating training dataset on %d image(s).' %len(image_list))
X = []
y = []
    for i, img_dir in enumerate(image_list):
        img = cv2.imread(img_dir)  # the lists hold file paths, so read the image first
        img_gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
        label = imageio.imread(label_list[i])  # read the matching mask
        features, labels = create_features(img, img_gray, label)
X.append(features)
y.append(labels)
X = np.array(X)
X = X.reshape(X.shape[0]*X.shape[1], X.shape[2])
y = np.array(y)
y = y.reshape(y.shape[0]*y.shape[1], y.shape[2]).ravel()
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
print ('[INFO] Feature vector size:', X_train.shape)
return X_train, X_test, y_train, y_test
def train_model(X, y, classifier):
if classifier == "SVM":
from sklearn.svm import SVC
print ('[INFO] Training Support Vector Machine model.')
model = SVC()
model.fit(X, y)
elif classifier == "RF":
from sklearn.ensemble import RandomForestClassifier
print ('[INFO] Training Random Forest model.')
model = RandomForestClassifier(n_estimators=250, max_depth=12, random_state=42)
model.fit(X, y)
elif classifier == "GBC":
from sklearn.ensemble import GradientBoostingClassifier
model = GradientBoostingClassifier(n_estimators=100, learning_rate=1.0, max_depth=1, random_state=0)
        model.fit(X, y)
    return model  # return the fitted model so it can be passed to test_model
def test_model(X, y, model):
pred = model.predict(X)
precision = metrics.precision_score(y, pred, average='weighted', labels=np.unique(pred))
recall = metrics.recall_score(y, pred, average='weighted', labels=np.unique(pred))
f1 = metrics.f1_score(y, pred, average='weighted', labels=np.unique(pred))
accuracy = metrics.accuracy_score(y, pred)
print ('--------------------------------')
print ('[RESULTS] Accuracy: %.2f' %accuracy)
print ('[RESULTS] Precision: %.2f' %precision)
print ('[RESULTS] Recall: %.2f' %recall)
print ('[RESULTS] F1: %.2f' %f1)
print ('--------------------------------')
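
# The next cell calls `read_data` and uses a `classifier` variable, but neither is defined anywhere in this notebook. The sketch below is an assumption about what was intended: it reuses the directory-walking logic from the cell further down to build the image/mask path lists, and picks "SVM" to match the notebook title.

# +
classifier = "SVM"  # assumption: matches the notebook title


def read_data(base_dir):
    """Sketch: collect image paths and the matching '*cells.tiff' mask paths."""
    image_list = []
    label_list = []
    for folder in os.listdir(base_dir):
        current_folder = os.path.join(base_dir, folder)
        if not os.path.isdir(current_folder):
            continue
        for file_name in os.listdir(current_folder):
            file_name_front, file_name_end = os.path.splitext(file_name)
            parts = file_name_front.split('_')
            # keep only the '*n_photons.tiff' images, mirroring the cell below
            if file_name_end == '.tiff' and len(parts) > 2 and len(parts[2]) > 1 and parts[2][1] == 'n':
                image_list.append(os.path.join(current_folder, file_name))
                label_list.append(os.path.join(current_folder, 'Masks', file_name_front + 'cells.tiff'))
    return image_list, label_list
# -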
# +
all_dir = '/home/zhangj41/HW/group_proj/Immune-Cells_2D/190718_Tcells'
image_list, label_list = read_data(all_dir)
X_train, X_test, y_train, y_test = create_training_dataset(image_list, label_list)
model = train_model(X_train, y_train, classifier)
test_model(X_test, y_test, model)
# +
image_list = []
label_list = []
all_dir = '/home/zhangj41/HW/group_proj/Immune-Cells_2D/190718_Tcells'
all_folder = os.listdir(all_dir)
for folder in all_folder:
# folder = all_folder[3]
current_folder = os.path.join(all_dir, folder)
all_images = os.listdir(current_folder)
for file_name in all_images:
# file_name = all_images[3]
file_name_front, file_name_end = os.path.splitext(file_name)
        if file_name_end != '':
fn = file_name_front.split('_')[2][1]
if file_name_end=='.tiff' and fn=='n':
image_dir = os.path.join(current_folder,file_name)
image_list.append(image_dir)
mask_dir = os.path.join(current_folder,'Masks',file_name_front+'cells.tiff')
label_list.append(mask_dir)
# -
image_list
label_list
# ## cyto and nuclei segmentation
# +
hist, bins = np.histogram(Mask_cell, bins=256, range=[0,256])
cum_hist = np.cumsum(hist)
height, width = Mask_cell.shape
norm_cum_hist = cum_hist / (height * width)
norm_hist = hist / hist.max()
#width = (bins[1] - bins[0])
center = (bins[:-1] + bins[1:]) / 2
plt.bar(center, norm_hist, align='center')
plt.plot(norm_cum_hist, color='r')
plt.show()
# +
M_cyto = 'Tcells_Th0_1n_photons_cyto.tiff'
Mask_cyto = imageio.imread(os.path.join(Mask_dir,M_cyto))
# plt.imshow(Mask_cyto, cmap='gray', vmin=0, vmax=255)
# print(Mask_cyto.shape)
Mask_cyto[Mask_cyto>=1] = 1
plt.imshow(Mask_cyto, cmap='gray', vmin=0, vmax=1)
# +
M_nuclei = 'Tcells_Th0_1n_photons_nuclei.tiff'
Mask_nuclei = imageio.imread(os.path.join(Mask_dir,M_nuclei))
# plt.imshow(Mask_nuclei, cmap='gray', vmin=0, vmax=255)
# print(Mask_nuclei.shape)
Mask_nuclei[Mask_nuclei>=1] = 1
plt.imshow(Mask_nuclei, cmap='gray', vmin=0, vmax=1)
# -
image_list, label_list = read_data(image_dir, label_dir)
X_train, X_test, y_train, y_test = create_training_dataset(image_list, label_list)
model = train_model(X_train, y_train, classifier)
test_model(X_test, y_test, model)
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + id="dBGntMqB-Up_" colab_type="code" outputId="3eb8b898-3edd-4b74-c252-1d4d18112c36" executionInfo={"status": "ok", "timestamp": 1572803491735, "user_tz": -480, "elapsed": 4562, "user": {"displayName": "\u8499\u53e4\u56fd\u6d77\u519b\u53f8\u4ee4", "photoUrl": "https://lh3.googleusercontent.com/a-/AAuE7mAz7KWB8tOXuc5KP6AlK0pNtFm-RDgU7LR1VgQR=s64", "userId": "17406439168111607696"}} colab={"base_uri": "https://localhost:8080/", "height": 145}
# !pip install baostock
import baostock as bs
import pandas as pd
from datetime import datetime, timedelta
import numpy as np
# + id="dAzqO2euMjHj" colab_type="code" outputId="dfc626d3-7a33-43cf-cbba-8c81a9c086f2" executionInfo={"status": "ok", "timestamp": 1572803492888, "user_tz": -480, "elapsed": 5697, "user": {"displayName": "\u8499\u53e4\u56fd\u6d77\u519b\u53f8\u4ee4", "photoUrl": "https://lh3.googleusercontent.com/a-/AAuE7mAz7KWB8tOXuc5KP6AlK0pNtFm-RDgU7LR1VgQR=s64", "userId": "17406439168111607696"}} colab={"base_uri": "https://localhost:8080/", "height": 71}
lg = bs.login()
print(f'login respond code: {lg.error_code}')
print(f'login respond msg: {lg.error_msg}')
params = ','.join(['date', 'open', 'high', 'low', 'close', 'preclose','volume','amount','turn', 'tradestatus', 'pctChg'])
# + id="bCi8GbOQMrar" colab_type="code" colab={}
def query_history_k_data_plus_with_df(**kwargs) -> pd.DataFrame:
rs = bs.query_history_k_data_plus(**kwargs)
data_list = []
if rs.error_code!='0':
raise Exception(f'error in fetch message: {rs.error_msg}')
while rs.error_code == '0' and rs.next():
data_list.append(rs.get_row_data())
return pd.DataFrame(data_list, columns=rs.fields)
# + id="9bF8kgR2Mt0r" colab_type="code" colab={}
def fill_suspension(raw_df: pd.DataFrame, start_date: str, end_date: str) -> pd.DataFrame:
start = datetime.strptime(start_date,'%Y-%m-%d')
end = datetime.strptime(end_date, '%Y-%m-%d')
date_counter = dict()
columns = raw_df.columns.tolist()
date_index = columns.index('date')
close_index = columns.index('close')
for r in raw_df.values:
date_counter[r[date_index]] = list(r)
first_record = raw_df.iloc[0]
first_date = datetime.strptime(first_record['date'],'%Y-%m-%d')
current = start
last_close = first_record['close']
while current < first_date:
current_str = current.strftime('%Y-%m-%d')
date_counter[current_str] = [current_str,last_close, last_close, last_close,last_close,last_close,0,0.0,0.0,0,0.0]
current = current + timedelta(days=1)
while current <= end:
current_str = current.strftime('%Y-%m-%d')
if date_counter.get(current_str) is None:
last_day_str = (current + timedelta(days=-1)).strftime('%Y-%m-%d')
last = date_counter.get(last_day_str)
last_close = last[close_index]
date_counter[current_str] = [current_str,last_close, last_close, last_close,last_close,last_close,0,0.0,0.0,0,0.0]
current = current + timedelta(days=1)
new_data = sorted(date_counter.values(),key=lambda x: x[date_index])
return pd.DataFrame(new_data,columns=columns)
# + id="VKqiprndMwf2" colab_type="code" colab={}
import os
import csv
def load_history_k_data_plus_with_df(**kwargs) -> pd.DataFrame:
code = kwargs.get('code')
frequency = kwargs.get('frequency')
adjust = kwargs.get('adjustflag')
path = os.path.join('.','resources',f'{code}-{frequency}-{adjust}.csv')
if not os.path.exists(path):
rs = query_history_k_data_plus_with_df(**kwargs)
# rs = fill_suspension(rs, kwargs.get('start_date'), kwargs.get('end_date'))
rs.to_csv(path, index=False, encoding='utf-8', quoting=csv.QUOTE_NONNUMERIC)
rs = pd.read_csv(path, quoting=csv.QUOTE_NONNUMERIC)
return rs
# + id="ZzBTxSRRM25R" colab_type="code" colab={}
start_date = '2006-01-01'
end_date = '2019-10-20'
# + id="8t38sKtCNGMx" colab_type="code" outputId="e61098a5-3382-4d71-ee70-f7e97497bb2c" executionInfo={"status": "ok", "timestamp": 1572803492894, "user_tz": -480, "elapsed": 5664, "user": {"displayName": "\u8499\u53e4\u56fd\u6d77\u519b\u53f8\u4ee4", "photoUrl": "https://lh3.googleusercontent.com/a-/AAuE7mAz7KWB8tOXuc5KP6AlK0pNtFm-RDgU7LR1VgQR=s64", "userId": "17406439168111607696"}} colab={"base_uri": "https://localhost:8080/", "height": 55}
from google.colab import drive
drive.mount('/content/drive')
# + id="3gF_m5u7NII7" colab_type="code" colab={}
os.chdir('/content/drive/My Drive/workspace/comp3035/')
# + id="qdi3UMVZM3w3" colab_type="code" outputId="161ad59e-cda8-4a47-d89c-62ef9b8e92a2" executionInfo={"status": "ok", "timestamp": 1572803492897, "user_tz": -480, "elapsed": 5643, "user": {"displayName": "\u8499\u53e4\u56fd\u6d77\u519b\u53f8\u4ee4", "photoUrl": "https://lh3.googleusercontent.com/a-/AAuE7mAz7KWB8tOXuc5KP6AlK0pNtFm-RDgU7LR1VgQR=s64", "userId": "17406439168111607696"}} colab={"base_uri": "https://localhost:8080/", "height": 309}
rs_no = load_history_k_data_plus_with_df(code="sh.000001",start_date=start_date, end_date=end_date,fields=params,frequency='d', adjustflag = '3')
rs_all = rs_no
rs_all.head()
# + id="mB8z-8P5NEM6" colab_type="code" outputId="3554e527-9b23-4ff5-f752-d37f4e86c49e" executionInfo={"status": "ok", "timestamp": 1572803492899, "user_tz": -480, "elapsed": 5634, "user": {"displayName": "\u8499\u53e4\u56fd\u6d77\u519b\u53f8\u4ee4", "photoUrl": "https://lh3.googleusercontent.com/a-/AAuE7mAz7KWB8tOXuc5KP6AlK0pNtFm-RDgU7LR1VgQR=s64", "userId": "17406439168111607696"}} colab={"base_uri": "https://localhost:8080/", "height": 340}
dataset = rs_all.set_index('date')
dataset['range'] = dataset['high'] - dataset['low']
dataset['chg'] = dataset['close'] - dataset['open']
dataset['predict'] = dataset['close'].shift(-1)
dataset['trend'] = (dataset['predict']-dataset['close']).apply(lambda x: 1 if x>=0 else 0)
dataset = dataset.dropna(axis=0)
dataset.head()
# + id="YTY4d9mHQzR9" colab_type="code" outputId="a6aaf6d4-df59-409b-fe95-57f11e606050" executionInfo={"status": "ok", "timestamp": 1572803492900, "user_tz": -480, "elapsed": 5624, "user": {"displayName": "\u8499\u53e4\u56fd\u6d77\u519b\u53f8\u4ee4", "photoUrl": "https://lh3.googleusercontent.com/a-/AAuE7mAz7KWB8tOXuc5KP6AlK0pNtFm-RDgU7LR1VgQR=s64", "userId": "17406439168111607696"}} colab={"base_uri": "https://localhost:8080/", "height": 35}
print(len(dataset))
# + id="jDuUvxngRjqE" colab_type="code" outputId="42fbe462-9080-4be8-c790-8c3d2fa7eb2a" executionInfo={"status": "ok", "timestamp": 1572803953776, "user_tz": -480, "elapsed": 603, "user": {"displayName": "\u8499\u53e4\u56fd\u6d77\u519b\u53f8\u4ee4", "photoUrl": "https://lh3.googleusercontent.com/a-/AAuE7mAz7KWB8tOXuc5KP6AlK0pNtFm-RDgU7LR1VgQR=s64", "userId": "17406439168111607696"}} colab={"base_uri": "https://localhost:8080/", "height": 269}
from sklearn.preprocessing import MinMaxScaler
scaler = MinMaxScaler(feature_range=(0, 1))
float_data = dataset.iloc[:,:-1].values
float_data = scaler.fit_transform(float_data)
print(float_data.shape)
float_data
# + id="LYNW7HcqUDXS" colab_type="code" colab={}
delay_close = float_data[:,-1]
float_data = float_data[:,:-1]
delay_trend = dataset.values[:,-1]
# + id="8ydhjDV6Uxgu" colab_type="code" outputId="f5c57903-0d3a-4b2c-96be-5636b57e9497" executionInfo={"status": "ok", "timestamp": 1572803494867, "user_tz": -480, "elapsed": 7553, "user": {"displayName": "\u8499\u53e4\u56fd\u6d77\u519b\u53f8\u4ee4", "photoUrl": "https://lh3.googleusercontent.com/a-/AAuE7mAz7KWB8tOXuc5KP6AlK0pNtFm-RDgU7LR1VgQR=s64", "userId": "17406439168111607696"}} colab={"base_uri": "https://localhost:8080/", "height": 35}
delay_close
# + id="rZuYbPDiUyjN" colab_type="code" outputId="228b20c4-fae9-418d-f946-9d445963b4b6" executionInfo={"status": "ok", "timestamp": 1572803494867, "user_tz": -480, "elapsed": 7530, "user": {"displayName": "\u8499\u53e4\u56fd\u6d77\u519b\u53f8\u4ee4", "photoUrl": "https://lh3.googleusercontent.com/a-/AAuE7mAz7KWB8tOXuc5KP6AlK0pNtFm-RDgU7LR1VgQR=s64", "userId": "17406439168111607696"}} colab={"base_uri": "https://localhost:8080/", "height": 35}
delay_trend
# + id="k55K-0w1e4LQ" colab_type="code" colab={}
import random
from typing import Union
def generator(x, y, lookback: int, min_index: Union[int,None], max_index: Union[int,None],
              shuffle: bool = True, shuffle_seed: int = 0):
indices = list(range(lookback, len(x)))
if shuffle:
random.Random(shuffle_seed).shuffle(indices)
if max_index is not None and min_index is not None:
indices = indices[min_index:max_index+1]
elif max_index is None and min_index is None:
pass
elif max_index is None:
indices = indices[min_index:]
elif min_index is None:
indices = indices[:max_index]
res_x = []
res_y = []
for i in indices:
res_x.append(x[i-lookback:i+1])
res_y.append(y[i])
res_x = np.asarray(res_x, dtype=np.float32)
res_y = np.asarray(res_y)
return (res_x,res_y)
# + id="7LjhbuiehX1l" colab_type="code" outputId="930c0487-e22b-4aec-e5b7-b9b54b0ca9e4" executionInfo={"status": "ok", "timestamp": 1572803494869, "user_tz": -480, "elapsed": 7505, "user": {"displayName": "\u8499\u53e4\u56fd\u6d77\u519b\u53f8\u4ee4", "photoUrl": "https://lh3.googleusercontent.com/a-/AAuE7mAz7KWB8tOXuc5KP6AlK0pNtFm-RDgU7LR1VgQR=s64", "userId": "17406439168111607696"}} colab={"base_uri": "https://localhost:8080/", "height": 71}
shuffle=True
shuffle_seed = 0
lookback = 31
target = delay_trend
train_data = generator(float_data, target, lookback=lookback,
min_index=None, max_index=2850,
shuffle=shuffle, shuffle_seed=shuffle_seed)
valid_data = generator(float_data, target, lookback=lookback,
min_index=2851, max_index=None,
shuffle=shuffle, shuffle_seed=shuffle_seed)
print(train_data[0].shape)
print(train_data[1].shape)
train_data[1]
# + id="szEUXHbKS8VL" colab_type="code" colab={}
from tensorflow.keras import models, layers, optimizers, losses, activations, metrics
# + colab_type="code" id="VayLElJG_INZ" colab={}
model = models.Sequential()
model.add(layers.Conv1D(64, 5, activation=activations.relu,
input_shape=(None, train_data[0].shape[-1])))
model.add(layers.MaxPooling1D(3))
model.add(layers.Conv1D(64, 5, activation=activations.relu))
model.add(layers.GRU(64,
dropout=0.1,
recurrent_dropout=0.5,
return_sequences=False,
))
model.add(layers.Dense(1, activation=activations.sigmoid))
model.compile(optimizer=optimizers.RMSprop(lr=1e-5),
loss=losses.binary_crossentropy,
metrics=[metrics.binary_accuracy]
)
# + id="OMsSIloHwL_O" colab_type="code" outputId="1fe8069f-8cca-4e50-978e-4cb01ac2c2aa" executionInfo={"status": "ok", "timestamp": 1572803619534, "user_tz": -480, "elapsed": 12473, "user": {"displayName": "\u8499\u53e4\u56fd\u6d77\u519b\u53f8\u4ee4", "photoUrl": "https://lh3.googleusercontent.com/a-/AAuE7mAz7KWB8tOXuc5KP6AlK0pNtFm-RDgU7LR1VgQR=s64", "userId": "17406439168111607696"}} colab={"base_uri": "https://localhost:8080/", "height": 415}
history = model.fit(train_data[0], train_data[1],
batch_size=64,
epochs = 10,
validation_data=(valid_data[0],valid_data[1]))
# + id="zvZOqpwqZr_p" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 53} outputId="bf79e544-4ead-4011-9e31-6031f99f890b" executionInfo={"status": "ok", "timestamp": 1572804008074, "user_tz": -480, "elapsed": 839, "user": {"displayName": "\u8499\u53e4\u56fd\u6d77\u519b\u53f8\u4ee4", "photoUrl": "https://lh3.googleusercontent.com/a-/AAuE7mAz7KWB8tOXuc5KP6AlK0pNtFm-RDgU7LR1VgQR=s64", "userId": "17406439168111607696"}}
shuffle=True
shuffle_seed = 0
lookback = 31
target = delay_close
train_data = generator(float_data, target, lookback=lookback,
min_index=None, max_index=2850,
shuffle=shuffle, shuffle_seed=shuffle_seed)
valid_data = generator(float_data, target, lookback=lookback,
min_index=2851, max_index=None,
shuffle=shuffle, shuffle_seed=shuffle_seed)
print(train_data[0].shape)
print(train_data[1].shape)
# + id="hHBgUv65Z2-b" colab_type="code" colab={}
model = models.Sequential()
model.add(layers.Conv1D(64, 5, activation=activations.relu,
input_shape=(None, train_data[0].shape[-1])))
model.add(layers.MaxPooling1D(3))
model.add(layers.Conv1D(64, 5, activation=activations.relu))
model.add(layers.GRU(64,
dropout=0.1,
recurrent_dropout=0.5,
return_sequences=False,
))
model.add(layers.Dense(1))
model.compile(optimizer=optimizers.RMSprop(lr=1e-5),
loss=losses.mse,
metrics=[metrics.mse]
)
# + id="3BQcNx8MaIp4" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 775} outputId="c3f7d830-7905-499e-90cc-e376de81af58" executionInfo={"status": "ok", "timestamp": 1572804194086, "user_tz": -480, "elapsed": 43631, "user": {"displayName": "\u8499\u53e4\u56fd\u6d77\u519b\u53f8\u4ee4", "photoUrl": "https://lh3.googleusercontent.com/a-/AAuE7mAz7KWB8tOXuc5KP6AlK0pNtFm-RDgU7LR1VgQR=s64", "userId": "17406439168111607696"}}
history = model.fit(train_data[0], train_data[1],
batch_size=16,
epochs = 20,
validation_data=(valid_data[0],valid_data[1]))
# + id="u4N-nlR1cL8d" colab_type="code" colab={}
test_data = generator(float_data, target, lookback=lookback,
min_index=-365, max_index=None,
shuffle=False, shuffle_seed=None)
pred = model.predict(test_data[0])
# + id="6Jc2jbRqdMPU" colab_type="code" colab={}
inversed = scaler.inverse_transform(np.concatenate(
(float_data[-365:], pred.reshape((pred.shape[0],1))), axis=1))
true = inversed[:,3]
pred = inversed[:,-1]
# + id="DyIUGsfDd51Z" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 285} outputId="d8431f67-eecd-47a3-c01a-013c224093ea" executionInfo={"status": "ok", "timestamp": 1572805196326, "user_tz": -480, "elapsed": 1386, "user": {"displayName": "\u8499\u53e4\u56fd\u6d77\u519b\u53f8\u4ee4", "photoUrl": "https://lh3.googleusercontent.com/a-/AAuE7mAz7KWB8tOXuc5KP6AlK0pNtFm-RDgU7LR1VgQR=s64", "userId": "17406439168111607696"}}
from matplotlib import pyplot as plt
points = len(true)
plt.figure(figsize=(8,4))
plt.plot(range(points),pred[-points:],'b-',label='pred',linewidth=1)
plt.plot(range(points),true[-points:],'g-',label='true', linewidth=1)
plt.legend()
# + id="kMs_xFzwfrQX" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 35} outputId="50c09045-c74e-4a35-eacb-ffa87b319e31" executionInfo={"status": "ok", "timestamp": 1572805206792, "user_tz": -480, "elapsed": 315, "user": {"displayName": "\u8499\u53e4\u56fd\u6d77\u519b\u53f8\u4ee4", "photoUrl": "https://lh3.googleusercontent.com/a-/AAuE7mAz7KWB8tOXuc5KP6AlK0pNtFm-RDgU7LR1VgQR=s64", "userId": "17406439168111607696"}}
from sklearn.metrics import mean_squared_error
print(f"mean squared error: {mean_squared_error(pred, true)}")
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Preliminaries just for the first week
# ## Numpy
# ## Matplotlib
# ## Pandas
# ## scipy.stats
# ## PyMC3
# ## ARVIZ
# ## patsy
#
# https://patsy.readthedocs.io/en/latest/quickstart.html
#
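# A quick environment check (not part of the original outline): the cell below simply imports the packages named above and prints their versions. The import names `pymc3`, `arviz` and `patsy` are assumptions based on the headings; adjust them if your installation differs (e.g. `pymc` instead of `pymc3`).

# +
import numpy as np
import pandas as pd
import matplotlib
import scipy
import scipy.stats
import pymc3 as pm
import arviz as az
import patsy

for module in (np, pd, matplotlib, scipy, pm, az, patsy):
    print(module.__name__, module.__version__)
# -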
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Generate Regex JSON from the Alphabet and Punctuation JSON
# +
from unicodedata import normalize
from pathlib import Path
import json
import collections
import re
from pprint import pprint
nena_dir = Path.home().joinpath('github/CambridgeSemiticsLab/nena_corpus')
standards = nena_dir.joinpath('standards')
alpha_file = standards.joinpath('alphabet/alphabet.json')
punct_file = standards.joinpath('punctuation/punctuation.json')
alpha_data = json.loads(alpha_file.read_text())
punct_data = json.loads(punct_file.read_text())
regex_file = standards.joinpath('NFD_regexes.json')
# +
alpha_patts = []
for letter in alpha_data:
lower, upper = letter['decomposed_string'], letter['decomposed_upper_string']
if upper == lower:
upper = ''
combo = '|'.join(c for c in [lower, upper] if c)
alpha_patts.append(combo)
punct_patts = []
for punct in punct_data:
punct_patts.append(re.escape(punct["decomposed_string"]))
alpha_join = '|'.join(alpha_patts)
alpha_re = f'({alpha_join})(?![\u0300-\u036F])'
re_dict = {
'alphabet': alpha_re,
'punctuation': '|'.join(punct_patts),
}
# -
punct_patts
help(re.escape)
re_dict['alphabet']
re_dict['punctuation']
# ## Test
test_file = normalize('NFD', nena_dir.joinpath('texts/alpha/Barwar/A Hundred Gold Coins.nena').read_text())
test_text = test_file[146:]
test_alpha = re.compile(re_dict['alphabet'])
test_punct = re.compile(re_dict['punctuation'])
test_text[:100]
# +
found_alphas = set(test_alpha.findall(test_text))
found_puncts = set(test_punct.findall(test_text))
print(found_alphas)
# -
found_puncts
# ## Export
with open(regex_file, 'w') as outfile:
json.dump(re_dict, outfile, ensure_ascii=False, indent=4)
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + id="Zc7tX_oEeCCv" colab_type="code" colab={}
from keras.datasets import boston_housing
import numpy as np
(train_data, train_targets), (test_data, test_targets) = boston_housing.load_data()
# + id="YIR0pTBUeqYg" colab_type="code" colab={}
max_target = np.max(train_targets)
# + id="4s5HdPlNeEao" colab_type="code" colab={}
train_data2 = train_data/np.max(train_data,axis=0)
test_data2 = test_data/np.max(train_data,axis=0)
train_targets = train_targets/max_target
test_targets = test_targets/max_target
# + id="aaNkioaGeF9n" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 204} outputId="f2f7a09d-0c5f-49f6-8b0c-135fb31a46e6"
from keras.models import Sequential
from keras.layers import Dense
from keras.layers import Dropout
from keras.utils import np_utils
from keras.regularizers import l1
model = Sequential()
model.add(Dense(64, input_dim=13, activation='relu', kernel_regularizer = l1(0.1)))
model.add(Dense(1, activation='relu', kernel_regularizer = l1(0.1)))
model.summary()
# + id="O0W5WGYieNRK" colab_type="code" colab={}
model.compile(loss='mean_absolute_error', optimizer='adam')
# + id="XZXT5rl7eX7M" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 3434} outputId="fb2454ee-a9c3-47c6-b2ec-a625b1c5758c"
history = model.fit(train_data2, train_targets, validation_data=(test_data2, test_targets), epochs=100, batch_size=32, verbose=1)
# + id="p2NausoueZpw" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="a73e4cf9-52d6-4cee-f754-1d50afbae9fa"
np.mean(np.abs(model.predict(test_data2) - test_targets))*max_target
# + id="8vyV433aewp0" colab_type="code" colab={}
# + id="Z1Ab96uwe7ax" colab_type="code" colab={}
# + [markdown] id="2mzJC5nqe7sq" colab_type="text"
# # Custom loss function
# + id="y0Gn38N2e81P" colab_type="code" colab={}
import keras.backend as K
def loss_function(y_true, y_pred):
return K.square(K.sqrt(y_pred)-K.sqrt(y_true))
# + id="2YbUaMPde9dj" colab_type="code" colab={}
model.compile(loss=loss_function, optimizer='adam')
# + id="O2vIYn09e-9Q" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 3434} outputId="fbc25f04-ff74-466e-b78e-cfb52a341be8"
history = model.fit(train_data2, train_targets, validation_data=(test_data2, test_targets), epochs=100, batch_size=32, verbose=1)
# + id="7TRcV__efF60" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="09e2b0a7-d59b-4581-e341-5b3d2e8fd574"
np.mean(np.abs(model.predict(test_data2) - test_targets))*max_target
# + id="54pVIf-TfIGQ" colab_type="code" colab={}
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3.7 (tensorflow)
# language: python
# name: tensorflow
# ---
# # T81-558: Applications of Deep Neural Networks
# **Module 13: Advanced/Other Topics**
# * Instructor: [Jeff Heaton](https://sites.wustl.edu/jeffheaton/), McKelvey School of Engineering, [Washington University in St. Louis](https://engineering.wustl.edu/Programs/Pages/default.aspx)
# * For more information visit the [class website](https://sites.wustl.edu/jeffheaton/t81-558/).
# # Module 13 Video Material
#
# * Part 13.1: Flask and Deep Learning Web Services [[Video]](https://www.youtube.com/watch?v=H73m9XvKHug&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_13_01_flask.ipynb)
# * Part 13.2: Deploying a Model to AWS [[Video]](https://www.youtube.com/watch?v=8ygCyvRZ074&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_13_02_cloud.ipynb)
# * **Part 13.3: Using a Keras Deep Neural Network with a Web Application** [[Video]](https://www.youtube.com/watch?v=OBbw0e-UroI&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_13_03_web.ipynb)
# * Part 13.4: When to Retrain Your Neural Network [[Video]](https://www.youtube.com/watch?v=K2Tjdx_1v9g&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_13_04_retrain.ipynb)
# * Part 13.5: AI at the Edge: Using Keras on a Mobile Device [[Video]]() [[Notebook]](t81_558_class_13_05_edge.ipynb)
#
# # Part 13.3: Using a Keras Deep Neural Network with a Web Application
#
# In this part, we will extend the image API developed in Part 13.1 to work with a web application. This technique allows you to use a simple website to upload/predict images, such as Figure 13.WEB.
#
# **Figure 13.WEB: AI Web Application**
# 
#
# To do this, we will use the same API developed in Module 13.1. However, we will now add a [ReactJS](https://reactjs.org/) website around it. This application is a single page web application that allows you to upload images for classification by the neural network. If you would like to read more about ReactJS and image uploading, you can refer to the [blog post](http://www.hartzis.me/react-image-upload/) that provided some inspiration for this example. I added neural network functionality to a simple ReactJS image upload and preview example.
#
# I built this example from the following components:
#
# * [GitHub Location for Web App](./py/)
# * [image_web_server_1.py](./py/image_web_server_1.py) - The code both to start Flask and serve the HTML/JavaScript/CSS needed to provide the web interface (a minimal sketch of this idea follows the component list below).
# * Directory WWW - Contains web assets.
# * [index.html](./py/www/index.html) - The main page for the web application.
# * [style.css](./py/www/style.css) - The stylesheet for the web application.
# * [script.js](./py/www/script.js) - The JavaScript code for the web application.
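#
# The sketch below is not the course's actual `image_web_server_1.py`; it is a minimal, hedged illustration of the idea: Flask serves the static assets in the `www` directory and exposes a JSON endpoint that the ReactJS page can POST an image to. The endpoint name `/api/image` and the folder layout are assumptions made for illustration only.

# +
from flask import Flask, request, jsonify, send_from_directory

app = Flask(__name__)


@app.route('/')
def index():
    # serve the single-page ReactJS application
    return send_from_directory('www', 'index.html')


@app.route('/<path:asset>')
def assets(asset):
    # serve style.css, script.js and any other static assets
    return send_from_directory('www', asset)


@app.route('/api/image', methods=['POST'])
def classify():
    # a real implementation would decode the uploaded image and run the
    # neural network; here we only echo the filename back as JSON
    uploaded = request.files.get('image')
    name = uploaded.filename if uploaded is not None else None
    return jsonify({'filename': name, 'predictions': []})


if __name__ == '__main__':
    app.run(host='0.0.0.0', port=5000)
# -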
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# # Robot localization
# ### How to connect to Zethus
#
# - First try `p.connect(p.GUI)`. The simulation should render in a new window.
# - Run the entire script
# - Open Zethus and select the following topics:
#     1. Under the Marker category we have `/cubes`
#     2. Under the Pose category we have `/car`
#     3. Under the Pose category we have `/filter`
import time
import numpy as np
import ipywidgets as ipw
import pybullet_data
import pybullet as p
p.connect(p.GUI) # p.DIRECT if this does not work
p.setAdditionalSearchPath(pybullet_data.getDataPath())
# ## ROS communication
# +
import rospy
import jupyros
from geometry_msgs.msg import PoseStamped, Point, Quaternion
from visualization_msgs.msg import Marker
import tf.transformations as tft
rospy.init_node('CarNode')
def get_msg_pose(target):
""" Returns PoseStamped message with target pose """
ps = PoseStamped()
ps.header.stamp = rospy.Time.now()
ps.header.frame_id = 'Car'
position, orientation = target
ps.pose.position = Point(*position)
ps.pose.orientation = Quaternion(*orientation)
return ps
def get_msg_marker(marker_id,target):
""" Returns Marker message with target pose """
marker = Marker()
marker.header.stamp = rospy.Time.now();
marker.ns = "my_namespace";
marker.id = marker_id;
marker.type = Marker.CUBE;
marker.action = Marker.ADD;
position, orientation = target
marker.pose.position = Point(*position)
marker.pose.orientation = Quaternion(*orientation)
marker.scale.x = 1;
marker.scale.y = 1;
marker.scale.z = 1;
marker.color.a = 1.0;
marker.color.r = 0.4117;
marker.color.g = 0.4117;
marker.color.b = 0.4117;
return marker
# -
# ## Car class in pybullet
class Car:
def __init__(self, **kwargs):
self.bid = p.loadURDF('racecar/racecar.urdf', **kwargs)
def set_velocity(self, velocity, force=10):
""" Set target velocity and force """
for wheel in [2, 3, 5, 7]:
p.setJointMotorControl2(
self.bid, wheel, p.VELOCITY_CONTROL,
targetVelocity=velocity, force=force)
def set_steering(self, angle):
""" Set steering angle """
for hinge in [4, 6]:
p.setJointMotorControl2(
self.bid, hinge, p.POSITION_CONTROL, targetPosition=angle)
def get_bt_pose(self):
""" Return car pose """
return p.getBasePositionAndOrientation(self.bid)
def get_bt_heading(self):
""" Return heading """
_,q = p.getBasePositionAndOrientation(self.bid)
return np.arctan2(2.0*(q[0]*q[1] + q[3]*q[2]), q[3]*q[3] + q[0]*q[0] - q[1]*q[1] - q[2]*q[2])
def get_bt_velocity(self):
""" Return linear velocity """
        linear, _ = p.getBaseVelocity(self.bid)
return np.sqrt(linear[0]**2 + linear[1]**2)
# ## World initialization in pybullet
# +
def init_world():
p.resetSimulation()
p.setGravity(0, 0, -10)
p.setRealTimeSimulation(False)
plane = p.loadURDF('plane.urdf')
car = Car(basePosition=[1, 1, 0])
cubes_pos = [(2,2),(4,-3),(-2,-4)] #set cube positions
cubes = []
for i,cube in enumerate(cubes_pos):
cubes.append(p.loadURDF('cube.urdf', basePosition=[cube[0],cube[1],0.5]))
return car,cubes
car,cubes = init_world()
# -
# ## Car motion model
# The car is steered by turning the front wheels while it moves. The front of the car moves in the direction the wheels are pointing while rotating around the rear wheels. This simple description is complicated by issues such as slipping due to friction, the different behaviour of the tyres at different speeds, and the need for the outer wheel to travel along a different radius than the inner wheel. Precise modelling of the steering requires a complicated set of differential equations.
#
# 
#
# Here we see that the front wheel is pointed at an angle $\alpha$ relative to the wheelbase. Over a short time interval the car moves forward and the rear wheel ends up slightly further ahead and turned slightly inward, as shown by the blue wheel. Over such a short time frame we can approximate this as a rotation around the radius $R$. We can compute the turning angle $\beta$ with
#
# $$\beta = \frac{d}{w} \tan{(\alpha)}$$
#
# and the turning radius R is given by
#
# $$R = \frac{d}{\beta}$$
#
# where the distance travelled by the rear wheel, given the velocity $v$, is $d=v\Delta t$.
#
# With $\theta$ being the orientation of the car, we compute the position $C$ before the turn begins as
#
#
# $$\begin{aligned}
# C_x &= x - R\sin(\theta) \\
# C_y &= y + R\cos(\theta)
# \end{aligned}$$
#
# After moving forward for a time $\Delta t$, the new position and orientation of the car are
#
# $$\begin{aligned} \bar x &= C_x + R\sin(\theta + \beta) \\
# \bar y &= C_y - R\cos(\theta + \beta) \\
# \bar \theta &= \theta + \beta
# \end{aligned}
# $$
#
# After substituting for $C$ we obtain
#
# $$\begin{aligned} \bar x &= x - R\sin(\theta) + R\sin(\theta + \beta) \\
# \bar y &= y + R\cos(\theta) - R\cos(\theta + \beta) \\
# \bar \theta &= \theta + \beta
# \end{aligned}
# $$
#
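# For example (a quick numeric check, not in the original text): with $v = 1$ m/s, $\Delta t = 1/60$ s, wheelbase $w = 0.3$ m and $\alpha = 0.2$ rad, we get $d = v\Delta t \approx 0.0167$ m, $\beta = \frac{d}{w}\tan(\alpha) \approx 0.0113$ rad and $R = d/\beta \approx 1.48$ m.
#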
# ### Designing the state variables
#
# For our robot we will keep the position and the orientation:
#
# $$\mathbf x = \begin{bmatrix}x & y & \theta\end{bmatrix}^\mathsf{T}$$
#
# The control input $\mathbf{u}$ is the commanded velocity and the steering angle
#
# $$\mathbf{u} = \begin{bmatrix}v & \alpha\end{bmatrix}^\mathsf{T}$$
def Fx(x, dt, u):
""" State trainsition model that returns x,y and heading, u is the velocity and steering angle from the car """
heading = x[2]
velocity = u[0]
steering_angle = u[1]
dist = velocity * dt
if abs(steering_angle) > 0.001: # is robot turning?
beta = (dist / wheelbase) * np.tan(steering_angle)
r = wheelbase / np.tan(steering_angle) # radius
sinh, sinhb = np.sin(heading), np.sin(heading + beta)
cosh, coshb = np.cos(heading), np.cos(heading + beta)
return x + np.array([-r*sinh + r*sinhb,
r*cosh - r*coshb, beta])
else: # moving in straight line
return x + np.array([dist*np.cos(heading), dist*np.sin(heading), 0])
# ### Designing the measurement model
#
# The sensor provides a bearing and a range to several known locations in the world. The measurement model converts the state $\begin{bmatrix}x & y & \theta\end{bmatrix}^\mathsf{T}$ into a bearing and a range to each cube. If $p$ is the position of a landmark, the range $r$ is
#
# $$r = \sqrt{(p_x - x)^2 + (p_y - y)^2}$$
#
# We assume that the sensor provides the bearing relative to the robot's orientation, so we must subtract the robot's orientation from the bearing to obtain the sensor reading:
#
# $$\phi = \tan^{-1}(\frac{p_y - y}{p_x - x}) - \theta$$
#
# Thus our measurement function is
#
# $$\begin{aligned}
# \mathbf{z}& = h(\mathbf x, \mathbf P) &+ \mathcal{N}(0, R)\\
# &= \begin{bmatrix}
# \sqrt{(p_x - x)^2 + (p_y - y)^2} \\
# \tan^{-1}(\frac{p_y - y}{p_x - x}) - \theta
# \end{bmatrix} &+ \mathcal{N}(0, R)
# \end{aligned}$$
def Hx(x, cubes):
""" Measurment model that computes distance and bering to an array of cubes """
hx = []
for cube in cubes:
px, py ,_= p.getBasePositionAndOrientation(cube)[0]
dist = np.sqrt((px - x[0])**2 + (py - x[1])**2)
angle = np.arctan2(py - x[1], px - x[0])
hx.extend([dist, normalize_angle(angle - x[2])])
return np.array(hx)
# ## Helper functions for implementing the model in the UKF
# +
def normalize_angle(x):
""" Returns normalized angle """
x = x % (2 * np.pi) # force in range [0, 2 pi)
if x > np.pi: # move to [-pi, pi)
x -= 2 * np.pi
return x
def residual_h(a, b):
""" Rewriting residual_h to handle many cubes """
y = a - b
# data in format [dist_1, bearing_1, dist_2, bearing_2,...]
for i in range(0, len(y), 2):
y[i + 1] = normalize_angle(y[i + 1])
return y
def residual_x(a, b):
""" Rewriting residual_x to normalize heading """
y = a - b
y[2] = normalize_angle(y[2])
return y
def state_mean(sigmas, Wm):
""" Rewriting state_mean to be faster with numpy and compute average of a set of angles """
x = np.zeros(3)
sum_sin = np.sum(np.dot(np.sin(sigmas[:, 2]), Wm))
sum_cos = np.sum(np.dot(np.cos(sigmas[:, 2]), Wm))
x[0] = np.sum(np.dot(sigmas[:, 0], Wm))
x[1] = np.sum(np.dot(sigmas[:, 1], Wm))
x[2] = np.arctan2(sum_sin, sum_cos)
return x
def z_mean(sigmas, Wm):
""" Rewriting z_mean to be faster with numpy and compute average of a set of angles """
z_count = sigmas.shape[1]
x = np.zeros(z_count)
for z in range(0, z_count, 2):
sum_sin = np.sum(np.dot(np.sin(sigmas[:, z+1]), Wm))
sum_cos = np.sum(np.dot(np.cos(sigmas[:, z+1]), Wm))
x[z] = np.sum(np.dot(sigmas[:,z], Wm))
x[z+1] = np.arctan2(sum_sin, sum_cos)
return x
# -
# ## Implementing the UKF
# +
from filterpy.kalman import MerweScaledSigmaPoints
from filterpy.kalman import UnscentedKalmanFilter as UKF
dt = 1/60 # framerate of the simulation
wheelbase = 0.3 # distance between the cars wheels
sigma_range, sigma_bearing = 0.3, 0.1  # measurement noise
points = MerweScaledSigmaPoints(n=3, alpha=.00001, beta=2, kappa=0,
subtract=residual_x)
ukf = UKF(dim_x=3, dim_z=2*len(cubes), fx=Fx, hx=Hx,
dt=dt, points=points, x_mean_fn=state_mean,
z_mean_fn=z_mean, residual_x=residual_x,
residual_z=residual_h)
ukf.x = np.array([1, 1, .1]) # initial car pose
ukf.P = np.diag([.1, .1, .05])
ukf.R = np.diag([sigma_range**2,
sigma_bearing**2]*len(cubes))
ukf.Q = np.eye(3)*0.0001
# -
# ## Widget for controlling the car
# +
def cb_steering(change):
car.set_steering(-change.new)
wd_steering = ipw.FloatSlider(
value=0, min=-0.5, max=0.5, step=0.05, continuous_update=True,
description='Волан:')
wd_steering.observe(cb_steering, names='value')
def cb_velocity(change):
car.set_velocity(change.new)
wd_velocity = ipw.IntSlider(
value=0, min=-10, max=30, step=1, continuous_update=True,
description='Брзина:')
wd_velocity.observe(cb_velocity, names='value')
ipw.VBox([wd_steering, wd_velocity])
# -
# ## Thread for the pybullet simulation and filter updates
# +
# %%thread_cell
rate = rospy.Rate(60)
# create rospy publishers so send data to zethus
pub_car = rospy.Publisher('/car', PoseStamped, queue_size=10)
pub_filter = rospy.Publisher('/filter', PoseStamped, queue_size=10)
vis_pub = rospy.Publisher('/cubes', Marker, queue_size=10)
while True:
p.stepSimulation()
ukf.predict(u=(car.get_bt_velocity(),wd_steering.value))
x,y,_ = car.get_bt_pose()[0] # get current pose of the car
z = []
for cube in cubes: # simulate reading from sensor for each cube
px, py ,_= p.getBasePositionAndOrientation(cube)[0]
distance = np.sqrt((px - x)**2 + (py - y)**2) + np.random.randn()*sigma_range # add noise to the distance
bearing = np.arctan2(py - y, px - x)
        angle = (normalize_angle(bearing - car.get_bt_heading() + np.random.randn()*sigma_bearing)) # add noise to the bearing
z.extend([distance, angle])
        vis_pub.publish(get_msg_marker(cube,p.getBasePositionAndOrientation(cube))) # send current pose of each cube
ukf.update(z,cubes=cubes)
# send real and filtered pose
pub_car.publish(get_msg_pose(car.get_bt_pose()))
euler = p.getEulerFromQuaternion(car.get_bt_pose()[1])
pub_filter.publish(get_msg_pose([(ukf.x[0], ukf.x[1], 0), p.getQuaternionFromEuler([euler[0], euler[1], ukf.x[2]])] ))
rate.sleep()
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## Future Letter (Carta Futura)
from datetime import date
from datetime import datetime
from datetime import timedelta
fecha= date.today()
nueva_fecha= str(fecha+timedelta(days=365))
txt=input('Ingrese su nombre: ')
age=input('ingrese su edad: ')
age=int(age)
print('Bogotá DC, Colombia'.upper())
print('\nEstimada {nombre},'.format(nombre=txt))
print('Ya tienes {edad} años, espero muchas cosas de ti, que hayas enfrentado a tus miedos internos, aplastado tus inseguridades, dejado ir tus dudas internas; que hayas comprometido tu voluntad y florecido en tu capacidad de impactar positivamente en los demás. Que hayas dado todo lo que podías a tu familia, a tus amigos, a aquellos a los que cuidas mientras te cuidas a ti mismo.'.format(edad=age+1))
print('Espero que tus sueños se hayan hecho realidad, que hayas vivido la vida al máximo y que hayas aprovechado cada día como intentabas hacerlo cuando eras más joven.Espero que sigas teniendo el deseo de aprender y crecer, de no dejar nunca de contemplar cómo puedes ser una versión mejor de ti misma, y de ayudar a los demás a ver lo que ellos no pueden.Espero que estés abrazando el sentimiento de vulnerabilidad, haciendo conexiones, cuidando las relaciones y buscando aventuras.Espero que estés considerando las cosas que desearías haberle dicho a tu yo más joven, y que compartas esos consejos con tu familia y amigos. Sólo puedo esperar, porque no sé lo que depara el futuro y como siempre, nos ponemos metas muy optimistas.'.format(edad=age+1))
print('\nCon todo el amor, {nombre} del futuro.'.format(nombre=txt))
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/jhnnxyz/computer-blue/blob/master/_notebooks/2021_08_01_math_harmony.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + [markdown] id="IRXr-aGQCQu4"
# # The Mathematics of Harmony
# > On the ideas of Harmony and the Golden Section
#
# - toc: true
# - branch: master
# - badges: true
# - comments: true
# - categories: [mathematics]
# - image: images/some_folder/your_image.png
# - hide: true
# - search_exclude: true
# - metadata_key1: metadata_value1
# - metadata_key2: metadata_value2
# + [markdown] id="md2oYkaQC_Re"
# ##
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Create your own Module
#
# ## 1) Introduction
#
# The main benefit of creating your own module is to reuse useful functions, since a module is a collection of functions and constants. In general, modules are organized around a theme, e.g. algebra, statistics, ...
#
# ## 2) Create a module
#
# In Python, creating a module is very simple: one just needs to write a list of functions in a file and save it with the extension .py. Here is an example that we will call messages.py and save in the modules directory:
# !more ../modules/messages.py
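# In case the file is not at hand, below is a minimal sketch of what messages.py could contain. The exact contents are an assumption, but the names hello, ciao and DATE match how the module is used later in this notebook.
# +
# Hypothetical content of ../modules/messages.py (illustrative sketch only)
"""A small example module with greeting functions and a constant."""

DATE = "2019-04-02"  # assumed value; by convention, constants are written in caps


def hello(name):
    """Return an English greeting for name."""
    return "Hello {}!".format(name)


def ciao(name):
    """Return an Italian greeting for name."""
    return "Ciao {}!".format(name)
# -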
# The strings enclosed in triple quotes (""") are not strictly necessary, but they are useful to document the software. By convention, constants are written in capital letters in Python, for instance DATE.
# ## 3) Using your module
#
# In order to call one of the module's functions, the module must be loaded first. This works if the file is in the working directory or in a directory pointed to by the PYTHONPATH environment variable. It is then necessary to import the module as usual.
#
# For Linux and macOS, using a bash-type shell, the PYTHONPATH environment variable is modified by typing this instruction in the shell:
# export PYTHONPATH=$PYTHONPATH:/path/to/my/module/
# This modification is only valid in the current shell. If one wants to make it permanent, one needs to modify the .bashrc file.
#
# On Windows, one needs a PowerShell and types the following command:
# $env:PYTHONPATH += ";C:\path\to\my\module"
#
# In a Jupyter notebook, while it is not completely clear to me why, the following appears to work:
# - import os
# - os.environ['PYTHONPATH'] = "$PYTHONPATH:/path/to/module"
# - import sys
# - sys.path.append('./module')
#
# One can then check that it was done properly by typing:
# - for Linux/Mac os bash: echo \$PYTHONPATH
# - for Windows: echo $env:PYTHONPATH
# +
import os
os.environ['PYTHONPATH'] = "$PYTHONPATH:/Users/lhelary/Heidelberg_Python_Lecture_2019/modules/"
import sys
sys.path.append('../modules/')
# -
# !echo $PYTHONPATH
import messages
print(messages.hello("Joe"))
print(messages.ciao("Bill"))
messages.DATE
# Note that the first time a module is imported, Python creates a directory named __pycache__ that contains a file with the extension .pyc, which holds a precompiled (bytecode) version of the module.
#
# ## 4) Docstrings:
#
# When writing a module, it is important to write some documentation that explains what the module does and how to use it. For this purpose one uses docstrings; they are visible with the help() function, for instance.
help(messages)
# Python generates these help messages by default. One way to format docstrings is as follows:
# +
def multiply_numbers(num1, num2):
    """Multiply together two integers.

    Parameters
    ----------
    num1 : int
        The first integer.
    num2 : int
        The second integer.
        One can add more information,
        on many lines.

    Returns
    -------
    int
        The product of the two numbers.
    """
    return num1 * num2
print(multiply_numbers(1,2))
help(multiply_numbers)
# -
# After the function definition, one finds a general explanation of what the function does, then a Parameters section, and a Returns section. In each of these sections the types used are described.
# ## 5) Exercises:
#
# - Create a custom_statistics.py module in the ../modules/ directory that will take lists as inputs and will contain functions to compute the min value, the max value, the average, and the standard deviation.
#
# - Create a physical_constants.py module in the ../modules/ directory that will contain a list of the principal physics constants (https://en.wikipedia.org/wiki/Physical_constant).
#
# - Create a trigonometric_functions.py module that will contain the principal trigonometric functions (https://en.wikipedia.org/wiki/Trigonometric_functions).
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# # Report V
#
# ## Handling missing data
import pandas as pd
dados = pd.read_csv("aluguel_resid.csv", sep=';')
dados
dados.isnull()
dados.notnull()
dados.info()
dados[dados['Valor'].isnull()]
a = dados.shape[0]
dados.dropna(subset = ['Valor'], inplace = True)
b = dados.shape[0]
a - b
# ## Continuation - IPTU and Condominio
dados[dados['Condominio'].isnull()].shape[0] #OK
selecao = (dados['Tipo'] == 'Apartamento') & (dados['Condominio'].isnull())
selecao
A = dados.shape[0]
dados = dados[~selecao]
B = dados.shape[0]
A - B
# ## But the DataFrame above contains the data I do not want. I could replace *isnull()* with *notnull()*
# ## Another way is to use the ~ [negation] operator
# +
dados2 = dados[~selecao]
dados2.info()
dados.shape[0] - dados2.shape[0]
# +
# dados.fillna(0, inplace=True)
# -
# ## Besides .fillna(), I can fill in the null values this way:
dados = dados.fillna({'Condominio': 0, 'IPTU': 0})  # I can set different values for different variables
dados.info()
dados.to_csv('aluguel_refinado.csv', sep=';', index=False)
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Read PulseEKKO file
#
# - irlib: https://github.com/njwilson23/irlib (MIT License)
# - GPRPy: https://github.com/NSGeophysics/GPRPy (MIT License)
import os
import numpy as np
from struct import unpack
import matplotlib.pyplot as plt
# ## Using irlib to load data
#
# The following three functions `parse_header`, `parse_data`, and `read_pulseEKKO` were copied from the file
# https://github.com/njwilson23/irlib/blob/master/irlib/pEKKOdriver.py ([#8a04be7](https://github.com/njwilson23/irlib/commit/8a04be7f9b86c2e0c9bedaa42eb5ccf6b40bbefc)).
# +
def parse_header(lines):
""" Read a header string and return a dictionary.
str -> dict
"""
meta = {}
for line in lines:
if "=" in line:
k,v = line.split("=", 1)
meta[k.strip()] = v.strip()
elif (("-" in line) or ("/" in line)) and len(line.strip()) == 8:
meta["date"] = line.strip()
return meta
def parse_data(s):
""" Read a data string and return a dictionary and a data array.
str -> (dict, array)
"""
i = 0
dlist = []
meta = {}
while True:
if len(s) < i+128:
break
hdr = unpack("32f", s[i:i+128])
nsmp = int(hdr[2])
d = unpack("{0}h".format(nsmp), s[i+128:i+128+2*nsmp])
meta[i] = hdr
dlist.append(d)
i += (128 + 2*nsmp)
# Pad short traces with zeros
maxlen = max([len(a) for a in dlist])
dlist_even = map(lambda a,n: a if len(a) == n else a+(n-len(a))*[0],
dlist, (maxlen for _ in dlist))
darray = np.vstack(list(dlist_even)).T
return meta, darray
def read_pulseEKKO(path):
""" Search for header and data files matching path, open them, and return a
dictionary of line metadata, a dictionary of trace metadata, and an array
of radar data.
str -> (dict, dict, array)
"""
directory, nm = os.path.split(path)
if nm + ".HD" not in os.listdir(directory):
raise IOError("{0}.HD not found".format(nm))
elif nm + ".DT1" not in os.listdir(directory):
raise IOError("{0}.DT1 not found".format(nm))
with open(path + ".HD", "r") as f:
lnmeta = parse_header(f.readlines())
with open(path + ".DT1", "rb") as f:
trmeta, darray = parse_data(f.read())
return lnmeta, trmeta, darray
# -
line_meta, trace_meta, data = read_pulseEKKO('/home/dtr/Desktop/GPR_lines34to63/LINE35')
line_meta # , trace_meta
# +
max_val = np.max(np.abs(data))
plt.figure()
plt.pcolormesh(
data,
vmin=-max_val, vmax=max_val, # Make the colormap symmetric
cmap='bwr') # Choose colormap
plt.gca().invert_yaxis()
plt.show()
# -
# ## GPRPy
#
# Here a screenshot of the same as above, but using GPRPy.
#
# 
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
from zephyr.Parallel import CommonReducer
import numpy as np
# +
dict0 = {
'key0': np.array([1,2,3,4,5]),
'key1': np.array([0,1,0,1,0]),
}
CR0 = CommonReducer(dict0)
dict1 = {
'key1': np.array([5,4,3,2,1]),
'key2': 10,
}
CR1 = CommonReducer(dict1)
# -
CR0 + CR1
CR0 - CR1
CR0 * CR1
CR0 / CR1
CR0.real
CR1.imag
CR0.sum()
CR1.sum()
(CR0 + CR1).sum()
(CR0 + CR1 + {'key2': np.array([1,2,3])}).sum()
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import pandas as pd
data = {'car': ['A', 'B', 'C', 'D'],
        'Fuel': ['gas', 'diesel', 'gas', 'gas']}
df = pd.DataFrame(data)
df[['gas', 'diesel']] = pd.get_dummies(df['Fuel'])  # quantitative dummy columns from a categorical string column
df
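
# An equivalent, more idiomatic route (an illustrative sketch, not from the original) is to pass columns= to pd.get_dummies, which keeps the other columns and names the new ones with a Fuel_ prefix automatically.
df_alt = pd.get_dummies(pd.DataFrame({'car': ['A', 'B', 'C', 'D'],
                                      'Fuel': ['gas', 'diesel', 'gas', 'gas']}),
                        columns=['Fuel'])
df_alt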
|
# -*- coding: utf-8 -*-
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .r
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: R
# language: R
# name: ir
# ---
# # Introduction
#
# ## Goals
#
# By the end of this course, you should be able to
# - Do basic data analysis using R or Python/Pandas, with a special emphasis on
# - triton, or other similar HPC cluster environments
# - workflows, I/O strategies etc. that work on HPC clusters.
#
# What this course is NOT:
# - A basic course in programming. We don't expect you to have prior knowledge of R or Python, but some programming experience is required.
# - A basic course in statistics / machine learning. As part of the course we'll do some simple stuff, but we expect that you either understand the statistics from before or learn it on your own.
#
# Topics that we're going to cover on the R part of the course:
# - The modern Tidyverse approach to easy data analysis.
# - The tibble data structure, and how it relates to other common data structures.
# - Verbs, maps and pipes that you can use to organize and modify your data.
# - Visualizing your results with ggplot2.
# - CSV / feather data handling
# - Some advanced Base-R tricks and functions from the apply-family.
# ## Note about syntax
#
# Anything written like `this` refers to R-code. Depending on context it might be a type or a function.
#
# ## R as a language
#
# R is nowadays one of the most popular data science languages. [1](http://r4stats.com/articles/popularity/)
#
# On Stack Overflow it is the 7th most popular language, neck and neck with C. [2](https://stackoverflow.blog/2017/10/10/impressive-growth-r/)
#
# Compared to its older competitors like SPSS or SAS it is way more usable, but it is not as generic as e.g. Python.
#
# So use R for what it has been designed for - data analysis.
#
# Nowadays there are three popular styles of doing data analysis in R:
# 1. **Base R** - using objects (`vector`,`list`,`factor`,`data.frame`), functions (`apply`) and operators (`[ ]`,`[[ ]]`) of the base R packages
# 2. **Tidyverse** - using objects (`tibble`), functions (`filter`,`select`,`mutate`,`map`) and operators (`%>%`) from [Tidyverse](https://www.tidyverse.org/) collection of packages.
# 3. **data.table** - using objects (`data.table`), functions (`set`) and operators (`[ ] `) of [data.table](http://r-datatable.com/) in conjunction with previous packages.
#
# Base R is (at least in my opinion) similar to Matlab/Fortran in its style. It is very permissive, which enables very good solutions, but also very bad ones.
#
# Tidyverse is centered around the idea of certain base operations and code structures that are most frequent in data analysis. By making those structures easily readable and understandable it is easy to generate efficient code. We'll be doing this quite a bit here.
#
# data.table is an extension to base R's `data.frame`, but the syntax and underlying properties are written in a way that it can handle huge (up and beyond your RAM) datasets. We'll look a bit into it, but if you at some point need to work with big data, do remember that it exists.
# ## Packages
#
# R has a wide variety of libraries supplied in the R-CRAN-network (Comprehensive R Archive Network). Installing these libraries is usually as simple as running:
#
# `install.packages("packagename")`
#
# Loading these libraries is done with:
#
# `library(packagename)`
#
# In this course we will be using packages from R [Tidyverse](https://www.tidyverse.org/).
# Let's try installing package `ggplot2movies` to folder `rlibs`. After installation let's load the newly installed library.
if (!file.exists('rlibs')) {
dir.create('rlibs')
}
if (!file.exists('rlibs/ggplot2movies')) {
install.packages('ggplot2movies', repos="http://cran.r-project.org", lib='rlibs')
}
library('ggplot2movies', lib.loc='rlibs')
# ## Important references
#
# There's lots of good R material available free on the internet. I highly recommend [R for Data Science](http://r4ds.had.co.nz/) and [Efficient R programming](https://csgillespie.github.io/efficientR/).
#
# When it comes to stuff related to the Tidyverse, the best options are googling or going to [its webpage](https://www.tidyverse.org/) and clicking on the honeycomb structure to get information on packages that supply the extensions.
#
# Good keywords when searching information in the internet are:
# - "tidyverse" for tidyverse packages as a whole
# - "dplyr" for verbs that we'll be using
# - "ggplot2" for the versatile plotting library
# - "tibble" to get information on the `data.frame` extension
# - "purrr" for mapping functions that can be used to run functions on the data
# - "tidyr" for reshaping the data
# - "readr" for data reading functions
#
# Usually something like "R *keyword i want to do this*" gives a good answer from stackoverflow etc.
# # Let's get started
#
# ## Simple example
#
# Let's load up the Tidyverse and analyze dataset `movies` from package `ggplot2movies` that has [IMDB](https://www.imdb.com) data from some 60,000 movies.
# +
library(tidyverse)
data('movies',package='ggplot2movies')
# +
head(movies)
ncol(movies)
nrow(movies)
dim(movies)
str(movies)
is_tibble(movies)
# -
# Let's break the previous commands down:
#
# - `data`-command can be used to load data from `package`
# - `head` shows the first rows of data
# - `ncol` and `nrow` can be used to get the number of columns and rows respectively
# - `dim` can be used to get the dimensions of the data
# - `str` (structure) shows a lot of information on how the data is structured
# - `is_X` (previously `is.X`) can be used to verify that the type of the object is `X`.
#
# From the output of `str` you can see few things:
# - The object `movies` is a `tbl` (`tibble`), which is an extension of `data.frame`.
# - Each row of the `data.frame` is an *observation* while each *column* is a variable. This choice of which axis is an observation and which is variable is not arbitrary.
# - Each column of the `data.frame` is a `vector` with defined type and length.
#
# Do note that the data is expected to fit a rectangular form with n rows and m columns.
#
# I liked [The Matrix](http://www.imdb.com/title/tt0133093/), let's see if I was alone in my opinion.
movies %>%
filter(title == 'Matrix, The') %>%
select(r1:r10) %>%
gather(key='rating',value='percentage',factor_key=T) %>%
mutate(rating=factor(as.integer(rating))) %>%
ggplot(aes(x=rating,y=percentage)) +
geom_col() +
ggtitle('Ratings distribution for movie "The Matrix"') +
xlab('Rating') +
ylab('Percentage of ratings')
# Seems pretty popular.
#
# There's a lot to unpack on in the example. Lets go through it one step at a time.
#
# The Matrix had its title written as "Matrix, The" instead of "The Matrix". If you do not know the formatting used in the dataset you might want to find all movies with "Matrix" in the title.
#
# To do this we need to use the `filter` function, `str_detect`-function and the pipe operator `%>%`.
# Find all movies with Matrix in the title
movies %>%
filter(str_detect(title,'Matrix'))
# Pipe `%>%` is used to minimize this kind of code:
#
# ```r
# a <- data.frame(...)
# a_tmp1 <- func1(a)
# a_tmp2 <- func2(a_tmp1)
# b <- func3(a_tmp2)
# ```
#
# With `tibble` and `%>%` the previous code block would be
#
# ```r
# a <- tibble(...)
# b <- a %>%
# func1() %>%
# func2() %>%
# func3()
# ```
#
# Basically the pipe-operator puts output from left side to the first argument of the function call on the right side.
#
# In Tidyverse the functions `func1` etc. are called "verbs" and they return a `tibble` as their output. Thus the object type of input and output are the same and the pipe can be chained across multiple calls. The pipe structure works with other functions as well, but with Tidyverse verbs you can be certain that the object does not change along the way.
#
# Using the pipe produces code that has the data pipeline clearly visible.
#
# Here we used the `filter`-verb to filter data based on the data's values. The argument it takes is a logical statement about some column or, as in this case, a function call that checks whether "Matrix" can be found in the column "title". (See e.g. this blog post for various filter possibilities [[1]](https://data-se.netlify.com/2016/12/21/dplyr_filter/))
#
# Now that we know the exact title let's pick it.
# Pick the 'Matrix, The'
movies %>%
filter(title == 'Matrix, The')
# The rating vote percentages are stored in columns `r1` to `r10`. Let's use verb `select` to select columns based on their names.
# +
# Get rating distribution for 'Matrix,The'
movies %>%
filter(title == 'Matrix, The') %>%
select(r1,r2,r3,r4,r5,r6,r7,r8,r9,r10)
# -
# This is a bit too much manual labor, so, as the columns are ordered, let's take a range of columns from `r1` to `r10`.
# Get rating distribution in a more usable fashion
movies %>%
filter(title == 'Matrix, The') %>%
select(r1:r10)
# If the columns weren't ordered, we could have used the function `num_range` from [select_helpers](http://dplyr.tidyverse.org/reference/select_helpers.html) to choose all columns starting with "r" and having a number from the range `1:10` at the end.
# Another solution for choosing columns
movies %>%
filter(title == 'Matrix, The') %>%
select(num_range('r',1:10))
# Now we have what we want to plot, but the data is in the wrong shape. Remember that in the tidy format columns are meant to be variables and rows are meant to be observations. As each user can vote for a single rating, the columns `r1` to `r10` are not independent variables. Instead we want a column `rating`, which describes the rating that has been given, and a column `percentage`, which describes the percentage of votes given to that particular rating.
#
# To achieve this we can use `gather`-function from tidyr-package. [[1]](http://tidyr.tidyverse.org/reference/gather.html)
#
# `gather` can be used to take columns into rows. Here the `key` parameter names the new variable that will hold the old column names, and `value` names the variable that will hold their values.
#
# By setting `factor_key=T` the column names are stored as a `factor`. A `factor` stores a mapping between strings and integers and then represents the data as a `vector` of integers. So basically `r1` is `1`, `r2` is `2`, etc. This preserves the ordering. `factor`s are useful for string data that can be ordered or categorized (identical strings have the same `integer` mapped to them).
movies %>%
filter(title == 'Matrix, The') %>%
select(r1:r10) %>%
gather(key='rating',value='percentage',factor_key=T)
# Now we want to plot the result. To do this let's use functions from ggplot2.
#
# `ggplot2` function can be used to create a figure. It takes as its argument an `aes` or aesthetic mapping that defines which columns are mapped to which axes.
#
# After specifying the figure one can use calls to various ggplot2-functions to add plots or properties to the figure.
# `+`-operator is used to sum all of these together.
#
# In this case let's use `geom_col` to create a bar plot. It sets bars heights based on data values. [[1]](http://ggplot2.tidyverse.org/reference/geom_bar.html)
#
movies %>%
filter(title == 'Matrix, The') %>%
select(r1:r10) %>%
gather(key='rating',value='percentage',factor_key=T) %>%
ggplot(aes(x=rating,y=percentage)) +
geom_col()
# This plot is already something, but we can make it more beautiful. Let's change the ratings from `r1` to `1`, etc. As the rating variable is a `factor` (basically a `vector` of integers), we can use the `mutate`-verb with `as.integer` to make the conversion.
#
# `mutate` can be used to create new variables or to modify existing ones. It keeps rest of the variables intact. `transmute` would drop rest of the variables. [[1]](http://dplyr.tidyverse.org/reference/mutate.html)
movies %>%
filter(title == 'Matrix, The') %>%
select(r1:r10) %>%
gather(key='rating',value='percentage',factor_key=T) %>%
mutate(rating=as.integer(rating)) %>%
ggplot(aes(x=rating,y=percentage)) +
geom_col()
# This worked, but now the x-axis looks bad. Let's convert the rating to a `factor` so that the x-axis shows strings instead of integers.
movies %>%
filter(title == 'Matrix, The') %>%
select(r1:r10) %>%
gather(key='rating',value='percentage',factor_key=T) %>%
mutate(rating=factor(as.integer(rating))) %>%
ggplot(aes(x=rating,y=percentage)) +
geom_col()
# Great. Let's finish the job by specifying the axis labels with `xlab` and `ylab` and the title with `ggtitle`. [[1]](http://ggplot2.tidyverse.org/reference/labs.html)
movies %>%
filter(title == 'Matrix, The') %>%
select(r1:r10) %>%
gather(key='rating',value='percentage',factor_key=T) %>%
mutate(rating=factor(as.integer(rating))) %>%
ggplot(aes(x=rating,y=percentage)) +
geom_col() +
ggtitle('Ratings distribution for movie "The Matrix"') +
xlab('Rating') +
ylab('Percentage of ratings')
# This looks nice. With small additions one can even plot the whole trilogy.
#
# Here:
# 1. Movies were chosen based on whether both "Matrix" and "The" were present in title.
# 2. Columns year and title were included with the select-statement.
# 3. Title and year were excluded from column gathering with `-title` and `-year`.
# 4. In the `aes`-mapping the `fill` keyword was used to colour the bars by movie title. By using `reorder` the titles were re-ordered based on publishing year.
# 5. x-, y- and fill-labels were set using `labs`-function.
#
movies %>%
filter(str_detect(title,'Matrix') & str_detect(title,'The')) %>%
select(r1:r10,title,year) %>%
gather(key='rating',value='percentage',-title,-year,factor_key=T) %>%
mutate(rating=factor(as.integer(rating))) %>%
ggplot(aes(x=rating,y=percentage,fill=reorder(title,year))) +
geom_col(position='dodge') +
ggtitle('Ratings distribution for "The Matrix"-trilogy') +
labs(x='Rating',y='Percentage of ratings',fill='Movie title')
# # Exercise time:
#
# 1. Use `filter` and `geom_histogram` to plot a histogram of ratings for action-movies (Doc pages: [[filter]](http://dplyr.tidyverse.org/reference/filter.html) [[geom_histogram]](http://ggplot2.tidyverse.org/reference/geom_histogram.html)).
# 2. Use `filter` and `arrange` to find the highest rated romance-movie with more than 1000 votes (Doc pages: [[filter]](http://dplyr.tidyverse.org/reference/filter.html) [[arrange]](https://www.rdocumentation.org/packages/dplyr/versions/0.7.3/topics/arrange)).
# 3. Take movies with more than 1000 votes and an estimated budget (`! is.na(budget)`). Use `top_n` to limit yourself to 200 highest rated. Use `geom_point` and `scale_x_log10` to create a semi-log scatter plot of movie budget vs. movie rating ([[top_n]](http://dplyr.tidyverse.org/reference/top_n.html) [[geom_point]](http://ggplot2.tidyverse.org/reference/geom_point.html) [[scale_x_log10]](http://ggplot2.tidyverse.org/reference/scale_continuous.html)).
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Medical Appointment No Shows Capstone
# ## Random Forest Model
# +
#Import necessary libraries
import pandas as pd
import numpy as np
# Plotting modules
import matplotlib.pyplot as plt
import seaborn as sns
# Analysing datetime
from datetime import datetime as dt
# File system manangement
import os,sys
# Suppress warnings
import warnings
warnings.filterwarnings('ignore')
from IPython.display import Image
from IPython.core.interactiveshell import InteractiveShell
InteractiveShell.ast_node_interactivity = "all"
# %matplotlib inline
# +
#MODEL_SELECTION
from sklearn.model_selection import train_test_split
from sklearn.model_selection import cross_val_score
from sklearn.model_selection import GridSearchCV
from sklearn.model_selection import RandomizedSearchCV
#METRICS
from sklearn.metrics import accuracy_score
from sklearn.metrics import classification_report, confusion_matrix
from sklearn.metrics import roc_curve, roc_auc_score, auc,explained_variance_score
from sklearn.metrics import balanced_accuracy_score, precision_score, recall_score
#TREE
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import export_graphviz
#Preprocessing
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import Pipeline
# -
#SCIPY
from scipy.stats import randint
# ### Load Data
# #### OneHotEncoder Data
path = 'data/df_ohe.csv'
df_ohe = pd.read_csv(path, index_col=None)
df_ohe.sample()
# ### Train/Test
# +
X = df_ohe.drop(['NoShow_1'], axis = 1)
y = df_ohe['NoShow_1']
SEED = 42
TS = 0.25
# Create training and test sets
X_train, X_test, y_train, y_test = \
train_test_split(X, y, test_size = TS, random_state=SEED, stratify=y)
#Feature Scaling to prevent information leakage
sc = StandardScaler()
X_train = sc.fit_transform(X_train)
X_test = sc.transform(X_test)
print (X_train.shape)
print (y_train.shape)
print (X_test.shape)
print (y_test.shape)
# -
# ___
# # Random Forest Classifier
# +
MODEL_PARAMS = {
"n_estimators": 350,
"criterion": 'entropy',
"max_depth": None,
"min_samples_split":2,
"min_samples_leaf":1,
"min_weight_fraction_leaf":0.0,
"max_features":'auto',
"max_leaf_nodes":None,
"min_impurity_decrease":0.0,
"min_impurity_split":None,
"bootstrap":True,
"oob_score":False,
"n_jobs":None,
"random_state": SEED,
"verbose":0,
"warm_start":False,
"class_weight":None,
"ccp_alpha":0.0,
"max_samples":None,
}
rf = RandomForestClassifier(**MODEL_PARAMS)
rf.fit(X_train, y_train)
y_pred = rf.predict(X_test)
y_pred_proba = rf.predict_proba(X_test)[:,1]
rf_score = round(rf.score(X_train, y_train) * 100, 2)
rf_test_score = round(rf.score(X_test, y_test) * 100, 2)
print('Random Forest Training Score:', rf_score)
print('Random Forest Test Score:', rf_test_score)
#Checking performance on our model with ROC Score.
rf_roc_score = round(roc_auc_score(y_test,y_pred_proba)*100,3)
print("ROC Score:", rf_roc_score)
# -
# ### ROC curve values:
# +
# Generate ROC curve values: fpr, tpr, thresholds
fpr,tpr, thresholds = roc_curve(y_test,y_pred_proba)
roc_auc = auc(fpr, tpr)
# Plot ROC curve
_ = plt.plot([0, 1], [0, 1], 'k--', label='naive classifier')
_ = plt.plot(fpr,tpr, 'y', label = 'AUC = %0.2f' % roc_auc)
_ = plt.xlim([0, 1])
_ = plt.ylim([0, 1])
_ = plt.legend(prop={'size':10},loc = 'lower right')
_ = plt.xlabel('False Positive Rate', size=12)
_ = plt.ylabel('True Positive Rate', size=12)
_ = plt.title('ROC Curve', size=15)
_ = plt.show()
# -
# ___
# ## Hyperparameter tuning with RandomizedSearchCV
# +
n_estimators = [10, 50, 100, 200, 300, 400, 500]
max_features = ['auto', 'sqrt'] #log2
max_depth = [3, 6, 10, 15, 20]
criterion = ['gini', 'entropy']
min_samples_leaf = [1,2,3,5,10,15]
min_samples_split = [2,3,5,10,15,20]  # min_samples_split must be at least 2
bootstrap = [True,False]
# Setup the parameters and distributions to sample from: param_dist
param_dist = {"n_estimators": n_estimators,
"max_depth": max_depth,
"max_features": max_features,
"min_samples_leaf": min_samples_leaf,
"min_samples_split": min_samples_split,
"bootstrap":bootstrap,
"criterion": criterion}
# Instantiate a Decision Tree classifier: tree
rfc = RandomForestClassifier(random_state=SEED)
# Instantiate the RandomizedSearchCV object: tree_cv
tree_cv = RandomizedSearchCV(rfc, param_dist, scoring="roc_auc", random_state=SEED, cv=10)
# Fit it to the data
tree_cv.fit(X_train,y_train)
# Print the tuned parameters and score
print("Tuned Decision Tree Parameters: {}".format(tree_cv.best_params_))
print("Best score is {}".format(tree_cv.best_score_))
# +
MODEL_PARAMS = {
"n_estimators": 500,
"criterion": 'entropy',
"max_depth": 20,
"min_samples_split":15,
"min_samples_leaf":15,
"min_weight_fraction_leaf":0.0,
"max_features":'auto',
"max_leaf_nodes":None,
"min_impurity_decrease":0.0,
"min_impurity_split":None,
"bootstrap":True,
"oob_score":False,
"n_jobs":None,
"random_state": SEED,
"verbose":0,
"warm_start":False,
"class_weight":None,
"ccp_alpha":0.0,
"max_samples":None,
}
rf = RandomForestClassifier(**MODEL_PARAMS)
rf.fit(X_train, y_train)
y_pred = rf.predict(X_test)
y_pred_proba = rf.predict_proba(X_test)[:,1]
rf_score = round(rf.score(X_train, y_train) * 100, 2)
rf_test_score = round(rf.score(X_test, y_test) * 100, 2)
print('Random Forest Training Score:', rf_score)
print('Random Forest Test Score:', rf_test_score)
#Checking performance on our model with ROC Score.
rf_roc_score = round(roc_auc_score(y_test,y_pred_proba)*100,3)
print("ROC Score:", rf_roc_score)
# -
# ## Hyperparameter tuning with GridSearchCV
# +
n_estimators = [100, 200]
max_features = ['auto', 'sqrt', 'log2']
max_depth = [3, 6, 10, 15, 20]
criterion = ['gini', 'entropy']
param_grid = {
'n_estimators':n_estimators,
'max_features': max_features,
'max_depth' : max_depth,
'criterion' :criterion,
}
rf = RandomForestClassifier()
gs_rfc = GridSearchCV(estimator=rf,
param_grid=param_grid,
scoring='roc_auc',
cv= 5)
gs_rfc.fit(X_train, y_train)
# Print the tuned parameters and score
print("Tuned Decision Tree Parameters: {}".format(gs_rfc.best_params_))
print("Best score is {}".format(gs_rfc.best_score_))
print(gs_rfc.best_estimator_.get_params())
# -
# ___
# +
MODEL_PARAMS = {
"n_estimators": 300,
"criterion": 'entropy',
"max_depth": 7,
"min_samples_split":2,
"min_samples_leaf":3,
"min_weight_fraction_leaf":0.0,
"max_features":3,
"max_leaf_nodes":None,
"min_impurity_decrease":0.0,
"min_impurity_split":None,
"bootstrap":True,
"oob_score":False,
"n_jobs":None,
"random_state": SEED,
"verbose":0,
"warm_start":False,
"class_weight":None,
"ccp_alpha":0.0,
"max_samples":None,
}
rf = RandomForestClassifier(**MODEL_PARAMS)
rf.fit(X_train, y_train)
y_pred = rf.predict(X_test)
y_pred_proba = rf.predict_proba(X_test)[:,1]
rf_score = round(rf.score(X_train, y_train) * 100, 2)
rf_test_score = round(rf.score(X_test, y_test) * 100, 2)
print('Random Forest Training Score:', rf_score)
print('Random Forest Test Score:', rf_test_score)
#Checking performance on our model with ROC Score.
rf_roc_score = round(roc_auc_score(y_test,y_pred_proba)*100,3)
print("ROC Score:", rf_roc_score)
# -
param_grid = {'n_estimators': np.arange(1, 50)}
rf_grid = RandomForestClassifier()
rf_grid_cv = GridSearchCV(rf_grid, param_grid, cv=5)
rf_grid_cv.fit(X_train, y_train)
# +
estimator = rf.estimators_[5]
# Export as dot file
export_graphviz(estimator, out_file='tree.dot',
                feature_names=X.columns,
                class_names=[str(c) for c in rf.classes_],
                rounded=True,
                proportion=False,
                precision=2,
                filled=True)
# Convert to png using system command (requires Graphviz)
from subprocess import call
call(['dot', '-Tpng', 'tree.dot', '-o', 'tree.png', '-Gdpi=600'])
# Display in jupyter notebook
Image(filename = 'tree.png')
# +
# evaluate each feature's importance
importances = rf.feature_importances_
# sort by descending order of importances
indices = np.argsort(importances)[::-1]
#create sorted dictionary
forest_importances = {}
print("Feature ranking:")
for f in range(X.shape[1]):
forest_importances[X.columns[indices[f]]] = importances[indices[f]]
print("%d. %s (%f)" % (f + 1, X.columns[indices[f]], importances[indices[f]]))
# -
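#
# The numeric ranking above is easier to read as a chart. The cell below is an illustrative sketch (not part of the original analysis) that plots the ten most important features from the forest_importances dictionary built above; it only uses matplotlib, already imported as plt.
# +
top_items = list(forest_importances.items())[:10]  # dict preserves the descending insertion order
names = [name for name, _ in top_items]
values = [value for _, value in top_items]

plt.figure(figsize=(8, 4))
plt.barh(names[::-1], values[::-1])
plt.xlabel('Importance')
plt.title('Top 10 Random Forest feature importances')
plt.tight_layout()
plt.show()
# -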
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="86OnzMaKd_js" colab_type="text"
# # 6. Training, Inference, and Visualization
# + [markdown] id="QNjHIrZeeDzj" colab_type="text"
# ## 6.1. Preparation
# + id="-fIkEeGByRyO" colab_type="code" colab={} executionInfo={"status": "ok", "timestamp": 1596349403057, "user_tz": -540, "elapsed": 620, "user": {"displayName": "\u6c34\u4ee3\u3064\u3070\u3081", "photoUrl": "", "userId": "05712836990767418031"}}
import sys
sys.path.append("/content/drive/My Drive/Transformer")
# + id="UN_scyVed7E0" colab_type="code" colab={} executionInfo={"status": "ok", "timestamp": 1596351203783, "user_tz": -540, "elapsed": 599, "user": {"displayName": "\u6c34\u4ee3\u3064\u3070\u3081", "photoUrl": "", "userId": "05712836990767418031"}}
import torch
import torch.nn as nn
import torch.optim as optim
import os
import utils
from utils.dataloader import get_IMDb_DataLoaders_and_TEXT
from utils.transformer import TransformerClassification
# + id="1PO_TXLp1Os9" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} executionInfo={"status": "ok", "timestamp": 1596349340313, "user_tz": -540, "elapsed": 696, "user": {"displayName": "\u6c34\u4ee3\u3064\u3070\u3081", "photoUrl": "", "userId": "05712836990767418031"}} outputId="3ff09753-d7b7-405c-eb26-2873193ad86b"
# Note: for reloading the modules during development
import importlib
importlib.reload(torch.nn)
importlib.reload(utils.transformer)
# + id="OgF6k9nqeTni" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} executionInfo={"status": "ok", "timestamp": 1596359070917, "user_tz": -540, "elapsed": 567, "user": {"displayName": "\u6c34\u4ee3\u3064\u3070\u3081", "photoUrl": "", "userId": "05712836990767418031"}} outputId="18ade918-f228-4b88-eecb-8050399b295f"
d_model = 300
max_seq_len = 256
d_hidden = 1024
drop_ratio = 0.1
d_out = 2
batch_size = 64
lr = 2e-5
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
print("使用デバイス: {}".format(device))
# + [markdown] id="__cii0goj9aC" colab_type="text"
# ### 6.1.1. DataLoader
# + id="Qug-NkZgeMIL" colab_type="code" colab={} executionInfo={"status": "ok", "timestamp": 1596359096166, "user_tz": -540, "elapsed": 20643, "user": {"displayName": "\u6c34\u4ee3\u3064\u3070\u3081", "photoUrl": "", "userId": "05712836990767418031"}}
# Load the training data
train_dl, val_dl, test_dl, TEXT = get_IMDb_DataLoaders_and_TEXT(max_seq_len, batch_size)
train_data_dict = {"train": train_dl, "val": val_dl}
# + [markdown] id="l3VIuLIVj79b" colab_type="text"
# ### 6.1.2. Transformer
# + id="WcQQvaz7eoGs" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 850} executionInfo={"status": "ok", "timestamp": 1596359168480, "user_tz": -540, "elapsed": 3412, "user": {"displayName": "\u6c34\u4ee3\u3064\u3070\u3081", "photoUrl": "", "userId": "05712836990767418031"}} outputId="f07aa1a8-403b-4265-8be2-53765b141874"
# Define the model
net = TransformerClassification(TEXT.vocab.vectors, d_model, max_seq_len, d_hidden, d_out, drop_ratio, device)
net.to(device)
net.train()
# Initialize the weights
def weight_init(m):
cls_name = m.__class__.__name__
if cls_name.find("Linear") != -1:
nn.init.kaiming_normal_(m.weight)
if m.bias is not None:
nn.init.constant_(m.bias, 0.0)
net.trm1.apply(weight_init)
net.trm2.apply(weight_init)
print(net)
# + [markdown] id="OCYMuYQrkHxx" colab_type="text"
# ### 6.1.3. Loss Function and Optimizer
# + id="D5y80gjZfPJL" colab_type="code" colab={} executionInfo={"status": "ok", "timestamp": 1596359172129, "user_tz": -540, "elapsed": 616, "user": {"displayName": "\u6c34\u4ee3\u3064\u3070\u3081", "photoUrl": "", "userId": "05712836990767418031"}}
# Loss function
loss_func = nn.CrossEntropyLoss()
# Optimizer
optimizer = optim.Adam(net.parameters(), lr=lr)
# + [markdown] id="q6hgxayIlwVT" colab_type="text"
# ## 6.2. Training
# + id="dYbM0p0fl2DR" colab_type="code" colab={} executionInfo={"status": "ok", "timestamp": 1596359177947, "user_tz": -540, "elapsed": 591, "user": {"displayName": "\u6c34\u4ee3\u3064\u3070\u3081", "photoUrl": "", "userId": "05712836990767418031"}}
def train(num_epochs):
torch.backends.cudnn.benchmark = True
for epoch in range(num_epochs):
for phase in ["train", "val"]:
if phase == "train":
net.train()
else:
net.eval()
            epoch_loss = 0.0  # loss accumulated within the epoch
            epoch_corrects = 0  # total number of correct predictions within the epoch
for batch in train_data_dict[phase]:
x = batch.Text[0].to(device)
t = batch.Label.to(device)
optimizer.zero_grad()
                with torch.set_grad_enabled(phase == "train"):  # compute gradients only in the training phase
                    input_pad = TEXT.vocab.stoi["<pad>"]
                    input_mask = (x != input_pad)  # mask positions that are padding, not text
y, _, _ = net(x, input_mask)
loss = loss_func(y, t)
if phase == "train":
loss.backward()
optimizer.step()
                    _, preds = torch.max(y, 1)  # predicted label = index of the larger logit
epoch_loss += loss.item() * batch_size
epoch_corrects += torch.sum(preds == t.data)
epoch_loss = epoch_loss / len(train_data_dict[phase].dataset)
            epoch_acc = epoch_corrects.double() / len(train_data_dict[phase].dataset)  # accuracy
print("Epoch {}/{} | {:^5} | Loss: {:.4f} Acc: {:.4f}".format(epoch + 1, num_epochs, phase, epoch_loss, epoch_acc))
# + id="kUhBx9Ev5GaL" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} executionInfo={"status": "ok", "timestamp": 1596359183192, "user_tz": -540, "elapsed": 795, "user": {"displayName": "\u6c34\u4ee3\u3064\u3070\u3081", "photoUrl": "", "userId": "05712836990767418031"}} outputId="0f4a0be6-43d2-41d8-ad31-351beb21de70"
# Load a previously trained model (if available)
model_file = "/content/drive/My Drive/Transformer/model/transformer.pth"
if os.path.exists(model_file):
load_data = torch.load(model_file, map_location=device)
net.load_state_dict(load_data["state_dict"])
print("Load model")
else:
print("Model file not found")
# + id="3Mnwh1Vo5DJM" colab_type="code" colab={}
num_iterate = 10
num_epochs = 10
for i in range(num_iterate):
print("-"*10 + " Iterate: {} ".format(i) + "-"*10)
train(num_epochs)
    # Save the trained model
save_data = {"state_dict": net.state_dict()}
torch.save(save_data, model_file)
print("Save model")
# + id="PBLqPy7L5HSL" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} executionInfo={"status": "ok", "timestamp": 1596362589242, "user_tz": -540, "elapsed": 894, "user": {"displayName": "\u6c34\u4ee3\u3064\u3070\u3081", "photoUrl": "", "userId": "05712836990767418031"}} outputId="593c55c1-9980-47fe-e6c5-6d90954cf288"
# Save the trained model
save_data = {"state_dict": net.state_dict()}
torch.save(save_data, model_file)
print("Save model")
# + [markdown] id="46n9mgVtPrik" colab_type="text"
# ## 6.3. Inference
# + id="QdNhuL6XPufy" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} executionInfo={"status": "ok", "timestamp": 1596362617612, "user_tz": -540, "elapsed": 24947, "user": {"displayName": "\u6c34\u4ee3\u3064\u3070\u3081", "photoUrl": "", "userId": "05712836990767418031"}} outputId="da3b2fd8-eb1f-4996-82db-453fdb804e45"
net.eval()
epoch_corrects = 0
for batch in test_dl:
x = batch.Text[0].to(device)
t = batch.Label.to(device)
with torch.set_grad_enabled(False):
input_pad = TEXT.vocab.stoi["<pad>"]
        input_mask = (x != input_pad)  # mask positions that are padding, not text
        y, _, _ = net(x, input_mask)
        _, preds = torch.max(y, 1)  # predicted label = index of the larger logit
        epoch_corrects += torch.sum(preds == t.data)
epoch_acc = epoch_corrects.double() / len(test_dl.dataset)  # accuracy
print("テストデータでの正解率: {:.4f}".format(epoch_acc))
# + [markdown] id="IAIF9vueQ7Tk" colab_type="text"
# ## 6.4. Visualization
# + id="DwWIfZuQSaWK" colab_type="code" colab={} executionInfo={"status": "ok", "timestamp": 1596363298756, "user_tz": -540, "elapsed": 1106, "user": {"displayName": "\u6c34\u4ee3\u3064\u3070\u3081", "photoUrl": "", "userId": "05712836990767418031"}}
# Feed one batch of test data through the model
batch = next(iter(test_dl))
x = batch.Text[0].to(device)
t = batch.Label.to(device)
input_pad = TEXT.vocab.stoi["<pad>"]
input_mask = (x != input_pad)  # mask positions that are padding, not text
y, attn_w1, attn_w2 = net(x, input_mask)
_, preds = torch.max(y, 1)  # predicted label = index of the larger logit
# + id="AAWxz0qpTBJT" colab_type="code" colab={} executionInfo={"status": "ok", "timestamp": 1596363303580, "user_tz": -540, "elapsed": 1201, "user": {"displayName": "\u6c34\u4ee3\u3064\u3070\u3081", "photoUrl": "", "userId": "05712836990767418031"}}
def highlight(word, attn):
html_color = "#{:02X}{:02X}{:02X}".format(255, int(255 * (1 - attn)), int(255 * (1 - attn)))
return "<span style=\"color: #000000; background-color: {}\"> {}</span>".format(html_color, word)
def mk_html(index, batch, preds, attn_w1, attn_w2, TEXT):
    sentence = batch.Text[0][index]  # sentence (word IDs)
    label = batch.Label[index]  # ground-truth label
    pred = preds[index]  # predicted label
    label_strs = ["Negative", "Positive"]
    label_str = label_strs[label]
    pred_str = label_strs[pred]
    attn1 = attn_w1[index, 0, :]  # attention as seen from the <cls> token
    attn1 = attn1 / attn1.max()
    attn2 = attn_w2[index, 0, :]  # attention as seen from the <cls> token
    attn2 = attn2 / attn2.max()
html = "<hr>"
html += "正解ラベル: {}<br>予測ラベル: {}<br><br>".format(label_str, pred_str)
html += "[TransformerBlock1段目のAttention]<br>"
for word, attn in zip(sentence, attn1):
html += highlight(TEXT.vocab.itos[word], attn)
html += "<br><br>"
html += "[TransformerBlock2段目のAttention]<br>"
for word, attn in zip(sentence, attn2):
html += highlight(TEXT.vocab.itos[word], attn)
html += "<hr>"
return html
# + id="gwMOt6bSQ9Cj" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 253} executionInfo={"status": "ok", "timestamp": 1596363352585, "user_tz": -540, "elapsed": 655, "user": {"displayName": "\u6c34\u4ee3\u3064\u3070\u3081", "photoUrl": "", "userId": "05712836990767418031"}} outputId="0befb408-108d-43b1-b02c-880ee051ee44"
from IPython.display import HTML
index = 32
html_out = mk_html(index, batch, preds, attn_w1, attn_w2, TEXT)
HTML(html_out)
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import pandas as pd
# %matplotlib inline
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
a = pd.read_csv('https://github.com/kushalnavghare/wine-quality-prediction/winequality-white.csv', sep=';')
a.head(10)
a.quality.plot(kind='hist')
X = a.drop('quality', axis=1)
y = a.quality
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=42, stratify=y)
X_train = a.drop('quality', axis=1)
y_train = a.quality
X_train.shape
y_train.shape
y.head()
y.shape
X.columns
features = X_train.columns[:11]
features
clf = RandomForestClassifier(n_jobs=2, random_state=0)
clf.fit(X_train[features], y_train)
clf.predict(X_test)
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Probability of Collision [exercise]
#
# The truth table shown below represents the probability of two cars colliding at an intersection if they both attempt a maneuver at the same time. If Car 1 goes straight and Car 2 goes left, for example, there is a probability of collision of 0.25.
# +
# \ Car 1|
# \ |
# \ |
# Car 2 \ | LEFT | STRAIGHT | RIGHT
# ___________________________________
# LEFT | 0.5 | 0.25 | 0.1
# STRAIGHT| 0.25 | 0.02 | 0.1
# RIGHT | 0.1 | 0.1 | 0.01
# +
def probability_of_collision(car_1, car_2):
"""
Calculate the probablity of a collision based on the car turns
Args:
car_1 (string): The turning direction of car_1
car_2 (string): The turning direction of car_2
Returns:
float: the probability of a collision
"""
probability = 0.0 # you should change this value based on the directions.
if car_1 == "L":
# Case Car1 L and Car2 L
if car_2 == "L":
probability = 0.5
# Case Car1 L and Car2 S
elif car_2 == "S":
probability = 0.25
# Case Car1 L and Car2 R (this is all other possible cases)
else:
probability = 0.1
elif car_1 == "S":
# Case Car1 S and Car2 L
if car_2 == "L":
probability = 0.25
# Case Car1 S and Car2 S
elif car_2 == "S":
probability = 0.02
# Case Car1 S and Car2 R (this is all other possible cases)
else:
probability = 0.1
else: # Car1 R
# Case Car1 R and Car2 L
if car_2 == "L":
probability = 0.1
# Case Car1 R and Car2 S
elif car_2 == "S":
probability = 0.1
# Case Car1 R and Car2 R (this is all other possible cases)
else:
probability = 0.01
return probability
def test():
num_correct = 0
p1 = probability_of_collision("L", "L")
if p1 == 0.5:
num_correct += 1
p2 = probability_of_collision("L", "R")
if p2 == 0.1:
num_correct += 1
p3 = probability_of_collision("L", "S")
if p3 == 0.25:
num_correct += 1
p4 = probability_of_collision("S", "R")
if p4 == 0.1:
num_correct += 1
print("You got", num_correct, "out of 4 correct")
test()
# -
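#
# The nested if/elif chain above mirrors the table directly, but the same truth table can also be encoded as a dictionary lookup. The sketch below is an alternative formulation (not part of the original exercise); it returns the same values as probability_of_collision.
# +
# Alternative: encode the truth table as a dictionary keyed by (car_1, car_2)
COLLISION_TABLE = {
    ("L", "L"): 0.5,  ("L", "S"): 0.25, ("L", "R"): 0.1,
    ("S", "L"): 0.25, ("S", "S"): 0.02, ("S", "R"): 0.1,
    ("R", "L"): 0.1,  ("R", "S"): 0.1,  ("R", "R"): 0.01,
}


def probability_of_collision_lookup(car_1, car_2):
    """Return the probability of a collision for the two turning directions."""
    return COLLISION_TABLE[(car_1, car_2)]


print(probability_of_collision_lookup("L", "L"))  # 0.5, matching the original function
# -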
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Expansion of nodes using Monarch APIs
# * https://api.monarchinitiative.org/api/#/
# * https://scigraph-ontology.monarchinitiative.org/scigraph/docs/#/
import pandas as pd
import requests
import json
from pandas.io.json import json_normalize
# +
# input gene
gene = 'NCBIGene:55768'
# output files
path = 'ngly1-human-expansion/ngly1_human'
# -
# ## Graph queries (SciGraph)
# api address
api = 'https://scigraph-ontology.monarchinitiative.org/scigraph'
endpoint = '/graph'
# get neighbors (JSON content)
r = requests.get('{}{}/neighbors/{}'.format(api,endpoint,gene))
r.headers
# Read results
r.json()
with open('{}_neighbors.json'.format(path), 'w') as f:
json.dump(r.json(), f, sort_keys=True, indent=4)
neighbors_df = json_normalize(r.json(), 'nodes')
neighbors_df.head()
neighbors_df[neighbors_df.id.str.startswith('NCBIGene')]
# the other ncbigene terms are discontinued ones (replaced by ncbi:55768)
# Analyze neighbours' types
neighbors_df.id.unique()
print('nodes: {}'.format(len(neighbors_df.id.unique())))
neighbors_df['node_type'] = neighbors_df.id.apply(lambda x: x.split(':')[0])
neighbors_df.node_type.value_counts()
# +
# conclusion: neighbors are taxon, chr position, so, xref (ensembl, hgnc, omim) and provenance
# +
# Filters
# Filter by interaction_type. BUT what are the strings per interaction_type???
# -
# get reachable nodes (JSON content)
r = requests.get('{}{}/reachablefrom/{}'.format(api,endpoint,gene))
r.headers
# Read results
r.json()
reach_df = json_normalize(r.json(), 'nodes')
reach_df.head()
reach_df.id.unique()
print('nodes: {}'.format(len(reach_df.id.unique())))
reach_df['node_type'] = reach_df.id.apply(lambda x: x.split(':')[0])
reach_df.node_type.value_counts()
# +
# conclusion: reachables are taxon, chr, so
# -
# ## Edge Queries (Monarch)
# api address
api = 'https://api.monarchinitiative.org/api'
endpoint = '/bioentity'
# get gene info
r = requests.get('{}{}/gene/{}'.format(api,endpoint,gene))
#r = requests.get('https://api.monarchinitiative.org/api/bioentity/gene/%s/phenotypes/'%gene, headers={'Accept':'application/json'})
r.headers
r.json()
with open('{}_id.json'.format(path), 'w') as f:
json.dump(r.json(), f, sort_keys=True, indent=4)
# get gene-gene interactions
r = requests.get('{}{}/gene/{}/interactions/'.format(api,endpoint,gene))
nassociations = len(r.json()['objects'])
print('Number of nodes associated are {}'.format(nassociations))
with open('{}_interactions.json'.format(path), 'w') as f:
json.dump(r.json(), f, sort_keys=True, indent=4)
ggi_df = json_normalize(r.json(), 'objects')
ggi_df.columns = ['gene_id']
ggi_df.head(2)
ggi_df.to_csv('{}_ggi.tsv'.format(path), sep='\t', index=False, header=True)
# get gene-phenotype
r = requests.get('{}{}/gene/{}/phenotypes/'.format(api,endpoint,gene))
nassociations = len(r.json()['associations'])
print('Number of nodes associated are {}'.format(nassociations))
with open('{}_phenotypes.json'.format(path), 'w') as f:
json.dump(r.json(), f, sort_keys=True, indent=4)
gph_df = json_normalize(r.json(), 'objects')
gph_df.columns = ['phenotype_id']
gph_df.head(2)
gph_df.to_csv('{}_gene_phenotype.tsv'.format(path), sep='\t', index=False, header=True)
# Get gene-disease
r = requests.get('{}{}/gene/{}/diseases/'.format(api,endpoint,gene))
nassociations = len(r.json()['objects'])
print('Number of nodes associated are {}'.format(nassociations))
with open('{}_diseases.json'.format(path), 'w') as f:
json.dump(r.json(), f, sort_keys=True, indent=4)
r_dict = r.json()
gda_df = json_normalize(r_dict, 'objects')
gda_df.columns = ['disease_id']
gda_df.head(2)
gda_df.to_csv('{}_gene_disease.tsv'.format(path), sep='\t', index=False, header=True)
# get gene-function
r = requests.get('{}{}/gene/{}/function/'.format(api, endpoint, gene))
nassociations = len(r.json()['objects'])
print('Number of nodes associated are {}'.format(nassociations))
r.json()
# get gene-expressedInAnatomy
r = requests.get('{}{}/gene/{}/expressed/'.format(api, endpoint, gene))
nassociations = len(r.json()['objects'])
print('Number of nodes associated are {}'.format(nassociations))
with open('{}_expressed.json'.format(path), 'w') as f:
json.dump(r.json(), f, sort_keys=True, indent=4)
r_data = r.json()
gaa_df = json_normalize(r_data, 'objects')
gaa_df.columns = ['anatomy_id']
gaa_df.head(2)
gaa_df.to_csv('{}_gene_anatomy.tsv'.format(path), sep='\t', index=False, header=True)
# get gene-pub
r = requests.get('{}{}/gene/{}/pubs/'.format(api, endpoint, gene))
nassociations = len(r.json()['objects'])
print('Number of nodes associated are {}'.format(nassociations))
# get gene-homolog
r = requests.get('{}{}/gene/{}/homologs/'.format(api, endpoint, gene))
nassociations = len(r.json()['objects'])
print('Number of nodes associated are {}'.format(nassociations))
with open('{}_homologs.json'.format(path), 'w') as f:
json.dump(r.json(), f, sort_keys=True, indent=4)
r_dict = r.json()
gha_df = json_normalize(r_dict, 'objects')
gha_df.columns = ['homolog_id']
gha_df.head(2)
gha_df.to_csv('{}_gene_homolog.tsv'.format(path), sep='\t', index=False, header=True)
# ## Query Wikidata for Knowledge.Bio
# api address:
api = 'https://query.wikidata.org/sparql'
def generate_table(header, results):
    """Build a DataFrame from SPARQL JSON bindings, with one column per header variable."""
    df = {head: [] for head in header}
    for res_d in results:
        for head in header:
            try:
                value = res_d[head]['value']
            except KeyError:
                value = 'NA'
            if value.startswith('http'):
                namespace, value = value.rsplit('/', 1)
            df[head].append(value)
    try:
        results_df = pd.DataFrame.from_dict(df)
    except Exception as e:
        print(e)
        print(df)
        raise
    results_df = results_df[header]
    return results_df
# get NCBIGene:
query = """SELECT DISTINCT ?id ?item ?itemLabel (group_concat(distinct ?itemaltLabel; separator="|") as ?altLabel) ?itemDesc
WHERE
{
{?item wdt:P351 ?id .} # ncbi gene
values ?id {"55768"}
OPTIONAL{
?item skos:altLabel ?itemaltLabel .
FILTER(LANG(?itemaltLabel) = "en")
?item schema:description ?itemDesc .
FILTER(LANG(?itemDesc) = "en")
}
SERVICE wikibase:label { bd:serviceParam wikibase:language "en" }
}
group by ?item ?id ?itemLabel ?itemDesc"""
r = requests.post(api, data={'query': query}, headers={'Accept':'application/sparql-results+json'})
header_l = r.json()['head']['vars']
results_l = r.json()['results']['bindings']
df = generate_table(header_l, results_l)
df
df.to_csv('{}_subject_concept_kb.tsv'.format(path), sep='\t', index=False, header=True)
# get uberon
# get input_list
input_df = pd.read_table('/home/nuria/workspace/monarch/ngly1_human_expansion/ngly1_human_gene_anatomy.tsv')
input_df = input_df[input_df.anatomy_id.str.contains('UBERON')]
input_df['id'] = input_df.anatomy_id.apply(lambda x: '"' + str(x.split(':')[1]) + '"')
input_l = list(input_df['id'])
input_s = ' '.join(input_l)
input_s
query = """SELECT DISTINCT ?id ?item ?itemLabel (group_concat(distinct ?itemaltLabel; separator="|") as ?altLabel) ?itemDesc
WHERE
{
?item wdt:P1554 ?id .
values ?id {""" + input_s + """}
OPTIONAL{
?item skos:altLabel ?itemaltLabel .
FILTER(LANG(?itemaltLabel) = "en")
?item schema:description ?itemDesc .
FILTER(LANG(?itemDesc) = "en")
}
SERVICE wikibase:label { bd:serviceParam wikibase:language "en" }
}
group by ?item ?id ?itemLabel ?itemDesc"""
r = requests.post(api, data={'query': query}, headers={'Accept':'application/sparql-results+json'})
#r.json()
header_l = r.json()['head']['vars']
results_l = r.json()['results']['bindings']
df = generate_table(header_l, results_l)
df.head(2)
# merge input with response
input_df = input_df[['anatomy_id']]
input_df['id'] = input_df.anatomy_id.apply(lambda x: x.split(':')[1])
output_df = input_df.merge(df, how="left")
output_df = output_df[['anatomy_id', 'item', 'itemLabel', 'altLabel', 'itemDesc']]
output_df.head(2)
output_df.to_csv('{}_anatomies_concept_kb.tsv'.format(path), sep='\t', index=False, header=True)
# get orthologs (ncbigene)
# get input_list
input_df = pd.read_table('/home/nuria/workspace/monarch/ngly1_human_expansion/ngly1_human_gene_homolog.tsv')
input_df = input_df[input_df.homolog_id.str.contains('NCBIGene')]
input_df['id'] = input_df.homolog_id.apply(lambda x: '"' + str(x.split(':')[1]) + '"')
input_l = list(input_df['id'])
input_s = ' '.join(input_l)
input_s
# query
query = """SELECT DISTINCT ?id ?item ?itemLabel (group_concat(distinct ?itemaltLabel; separator="|") as ?altLabel) ?itemDesc
WHERE
{
{?item wdt:P351 ?id .} # ncbi gene
values ?id {""" + input_s + """}
OPTIONAL{
?item skos:altLabel ?itemaltLabel .
FILTER(LANG(?itemaltLabel) = "en")
?item schema:description ?itemDesc .
FILTER(LANG(?itemDesc) = "en")
}
SERVICE wikibase:label { bd:serviceParam wikibase:language "en" }
}
group by ?item ?id ?itemLabel ?itemDesc"""
r = requests.post(api, data={'query': query}, headers={'Accept':'application/sparql-results+json'})
#r.json()
header_l = r.json()['head']['vars']
results_l = r.json()['results']['bindings']
df = generate_table(header_l, results_l)
df.head(2)
# merge input with response
input_df = input_df[['homolog_id']]
input_df['id'] = input_df.homolog_id.apply(lambda x: x.split(':')[1])
output_df = input_df.merge(df, how="left")
output_df = output_df[['homolog_id', 'item', 'itemLabel', 'altLabel', 'itemDesc']]
output_df.head(2)
ncbigene_df = output_df
# get orthologs (mgi)
# get input_list
input_df = pd.read_table('/home/nuria/workspace/monarch/ngly1_human_expansion/ngly1_human_gene_homolog.tsv')
input_df = input_df[input_df.homolog_id.str.contains('MGI')]
input_df['id'] = input_df.homolog_id.apply(lambda x: '"' + str(x) + '"')
input_l = list(input_df['id'])
input_s = ' '.join(input_l)
input_s
# +
# query
query = """SELECT DISTINCT ?id ?item ?itemLabel (group_concat(distinct ?itemaltLabel; separator="|") as ?altLabel) ?itemDesc
WHERE
{
{?item wdt:P671 ?id .} # mgi gene
values ?id {""" + input_s + """}
OPTIONAL{
?item skos:altLabel ?itemaltLabel .
FILTER(LANG(?itemaltLabel) = "en")
?item schema:description ?itemDesc .
FILTER(LANG(?itemDesc) = "en")
}
SERVICE wikibase:label { bd:serviceParam wikibase:language "en" }
}
group by ?item ?id ?itemLabel ?itemDesc"""
r = requests.post(api, data={'query': query}, headers={'Accept':'application/sparql-results+json'})
#r.json()
header_l = r.json()['head']['vars']
results_l = r.json()['results']['bindings']
df = generate_table(header_l, results_l)
df.head(2)
# merge input with response
input_df = input_df[['homolog_id']]
input_df['id'] = input_df.homolog_id
output_df = input_df.merge(df, how="left")
output_df = output_df[['homolog_id', 'item', 'itemLabel', 'altLabel', 'itemDesc']]
mgi_df = output_df
output_df.head(2)
# -
# get orthologs (fly)
# get input_list
input_df = pd.read_table('/home/nuria/workspace/monarch/ngly1_human_expansion/ngly1_human_gene_homolog.tsv')
input_df = input_df[input_df.homolog_id.str.contains('FlyBase')]
input_df['id'] = input_df.homolog_id.apply(lambda x: '"' + str(x.split(':')[1]) + '"')
input_l = list(input_df['id'])
input_s = ' '.join(input_l)
input_s
# +
# query
query = """SELECT DISTINCT ?id ?item ?itemLabel (group_concat(distinct ?itemaltLabel; separator="|") as ?altLabel) ?itemDesc
WHERE
{
{?item wdt:P3852 ?id .} # flybase gene id
values ?id {""" + input_s + """}
OPTIONAL{
?item skos:altLabel ?itemaltLabel .
FILTER(LANG(?itemaltLabel) = "en")
?item schema:description ?itemDesc .
FILTER(LANG(?itemDesc) = "en")
}
SERVICE wikibase:label { bd:serviceParam wikibase:language "en" }
}
group by ?item ?id ?itemLabel ?itemDesc"""
r = requests.post(api, data={'query': query}, headers={'Accept':'application/sparql-results+json'})
#r.json()
header_l = r.json()['head']['vars']
results_l = r.json()['results']['bindings']
df = generate_table(header_l, results_l)
df.head(2)
# merge input with response
input_df = input_df[['homolog_id']]
input_df['id'] = input_df.homolog_id.apply(lambda x: x.split(':')[1])
output_df = input_df.merge(df, how="left")
output_df = output_df[['homolog_id', 'item', 'itemLabel', 'altLabel', 'itemDesc']]
fly_df = output_df
output_df.head(2)
# -
output_df = pd.concat([fly_df,mgi_df,ncbigene_df])
output_df
output_df.to_csv('{}_homologs_concept_kb.tsv'.format(path), sep='\t', index=False, header=True)
# get NCBIGene (ggi):
# get input_list
input_df = pd.read_table('/home/nuria/workspace/monarch/ngly1_human_expansion/ngly1_human_ggi.tsv')
input_df['id'] = input_df.gene_id.apply(lambda x: '"' + str(x.split(':')[1]) + '"')
input_l = list(input_df['id'])
input_s = ' '.join(input_l)
input_s
# +
# query
query = """SELECT DISTINCT ?id ?item ?itemLabel (group_concat(distinct ?itemaltLabel; separator="|") as ?altLabel) ?itemDesc
WHERE
{
{?item wdt:P351 ?id .} # ncbi gene
values ?id {""" + input_s + """}
OPTIONAL{
?item skos:altLabel ?itemaltLabel .
FILTER(LANG(?itemaltLabel) = "en")
?item schema:description ?itemDesc .
FILTER(LANG(?itemDesc) = "en")
}
SERVICE wikibase:label { bd:serviceParam wikibase:language "en" }
}
group by ?item ?id ?itemLabel ?itemDesc"""
r = requests.post(api, data={'query': query}, headers={'Accept':'application/sparql-results+json'})
#r.json()
# +
header_l = r.json()['head']['vars']
results_l = r.json()['results']['bindings']
df = generate_table(header_l, results_l)
# merge input with response
input_df = input_df[['gene_id']]
input_df['id'] = input_df.gene_id.apply(lambda x: x.split(':')[1])
output_df = input_df.merge(df, how="left")
output_df = output_df[['gene_id', 'item', 'itemLabel', 'altLabel', 'itemDesc']]
output_df.head(2)
# -
output_df.to_csv('{}_ncbi_interactors_concept_kb.tsv'.format(path), sep='\t', index=False, header=True)
# get phenotypes (hp)
# get input_list
input_df = pd.read_table('/home/nuria/workspace/monarch/ngly1_human_expansion/ngly1_human_gene_phenotype.tsv')
input_df = input_df[input_df.phenotype_id.str.contains('HP')]
input_df['id'] = input_df.phenotype_id.apply(lambda x: '"' + str(x) + '"')
input_l = list(input_df['id'])
input_s = ' '.join(input_l)
input_s
# +
# query
query = """SELECT DISTINCT ?id ?item ?itemLabel (group_concat(distinct ?itemaltLabel; separator="|") as ?altLabel) ?itemDesc
WHERE
{
{?item wdt:P3841 ?id .} # human phenotype ontology id
values ?id {""" + input_s + """}
OPTIONAL{
?item skos:altLabel ?itemaltLabel .
FILTER(LANG(?itemaltLabel) = "en")
?item schema:description ?itemDesc .
FILTER(LANG(?itemDesc) = "en")
}
SERVICE wikibase:label { bd:serviceParam wikibase:language "en" }
}
group by ?item ?id ?itemLabel ?itemDesc"""
r = requests.post(api, data={'query': query}, headers={'Accept':'application/sparql-results+json'})
#r.json()
# +
header_l = r.json()['head']['vars']
results_l = r.json()['results']['bindings']
df = generate_table(header_l, results_l)
df.head(2)
# merge input with response
input_df = input_df[['phenotype_id']]
input_df['id'] = input_df.phenotype_id
output_df = input_df.merge(df, how="left")
output_df = output_df[['phenotype_id', 'item', 'itemLabel', 'altLabel', 'itemDesc']]
output_df.head(2)
# -
output_df.to_csv('{}_phenotypes_concept_kb.tsv'.format(path), sep='\t', index=False, header=True)
# +
# get taxon, chr, so
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
miserie = pd.Series()
df = pd.DataFrame()
p = pd.Panel()  # note: pd.Panel is deprecated and removed in newer pandas versions; this line requires an older pandas
print(miserie, df, p, sep = '\n'*2)
miserie = pd.Series({'a':100, 'b':500, 'c':200, 'd':100})
print(miserie)
miserie['d']
# +
#help(pd.Series)
# +
#help(list)
# -
lista = [100,200,300,400]
diccionario = {'a':100, 'b':200, 'c':300, 'd':400}
type(diccionario)
# creation from a list; I add my own index, since by default it would be 0, 1, 2, ..., n
serie1 = pd.Series(data=lista, index=['a','b','c','d'], name ='Mi primera lista')
print(serie1)
serie2 = pd.Series(data = diccionario, name = 'Segunda lista diccionario')
print(serie2)
s3 = np.asarray(lista)
print(s3)
s4 = np.asanyarray(diccionario)
print(s4)
serie1['a']
serie1[0:3]
serie1['a':'c']
serie1['e'] = 500
serie1
# ## DataFrame
# A DataFrame can be thought of as a dictionary of Series. It has rows (the index) and columns (which can hold different data types).
# New columns can be added to it; a short illustrative sketch is given after the cells below.
df1 = pd.DataFrame(np.random.randn(10,3), columns=['col1', 'col2','col3'], index=range(1,11))
df1
df1.index
print(df1['col1'])
df1.columns
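# As a small illustrative sketch (not part of the original notebook), a DataFrame can also be
# built from a dictionary of Series, and a new column can be added by simple assignment:
# +
# build a DataFrame from a dictionary of Series (the indices are aligned automatically)
df2 = pd.DataFrame({'col_a': serie1, 'col_b': serie2})
# add a new column derived from the existing ones
df2['total'] = df2['col_a'] + df2['col_b']
df2
# -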
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import os
import pandas as pd
import numpy as np
from matplotlib import pyplot as plt
import seaborn as sns
homepath = os.environ['HOMEPATH']
print(homepath)
path = os.path.join(homepath,"FlightPricePredict")
print(path)
df_path = os.path.join(path,"Data_Train")
df = pd.read_excel(df_path+".xlsx")
df.head()
df.dropna(inplace=True)
df.info()
df['Airline'].value_counts()
# +
# plt.xticks(rotation=45)
# sns.boxplot(x=df['Airline'],y=df['Price'])
sns.catplot(y='Price', x='Airline', data=df.sort_values('Price',ascending=False),kind='boxen',height=6, aspect=2.5)
plt.xticks(rotation=30)
plt.show()
# -
df['Source'].value_counts()
sns.catplot(y='Price', x='Source', data=df.sort_values('Price',ascending=False),kind='boxen',height=6, aspect=2.5)
plt.xticks(rotation=30)
plt.show()
df['Total_Stops'].value_counts()
sns.catplot(y='Price', x='Total_Stops', data=df.sort_values('Price',ascending=False),kind='boxen',height=6, aspect=2.5)
plt.xticks(rotation=30)
plt.show()
# +
# df['Journey_day']=pd.to_datetime(df.Date_of_Journey,format="%d/%m/%Y").dt.day
# df['Journey_month']=pd.to_datetime(df.Date_of_Journey,format="%d/%m/%Y").dt.month
day_list=list(df['Date_of_Journey'])
for i in range(len(day_list)):
day_list[i] = int(day_list[i].split(sep="/")[0])
month_list=list(df['Date_of_Journey'])
for i in range(len(month_list)):
month_list[i] = int(month_list[i].split(sep="/")[1])
df['Journey_day'] = day_list
df['Journey_month'] = month_list
df1= df.drop(['Date_of_Journey'], axis=1) #df but with Date of Journey dropped
# +
#splitting Departure Time into Departure Hour and Departure Minute
Dep_Hour=np.zeros(len(df1))
Dep_Min=np.zeros(len(df1))
t=0
for i in df1['Dep_Time']:
x,y= i.split(":")
Dep_Hour[t] = x
Dep_Min[t] = y
t+=1
df1['Dep_Hour'] = Dep_Hour
df1['Dep_Min'] = Dep_Min
# +
#splitting Arrival Time into Arrival Hour and Arrival Minute
#alternate way
df1["Arrival_hour"] = pd.to_datetime(df1.Arrival_Time).dt.hour
df1["Arrival_min"] = pd.to_datetime(df1.Arrival_Time).dt.minute
# -
df1.tail()
# +
#Now to split duration into hours and mins
duration = list(df1["Duration"])
for i in range(len(duration)):
if len(duration[i].split()) != 2: # Check if duration contains only hour or mins
if "h" in duration[i]:
duration[i] = duration[i].strip() + " 0m" # Adds 0 minute
else:
duration[i] = "0h " + duration[i]
duration_hours = []
duration_mins = []
for i in range(len(duration)):
duration_hours.append(int(duration[i].split(sep = "h")[0])) # Extract hours from duration
duration_mins.append(int(duration[i].split(sep = "m")[0].split()[-1])) # Extracts only minutes from duration
# -
df1["Duration_hours"] = duration_hours
df1["Duration_mins"] = duration_mins
df1.tail()
df2 = df1.drop(['Dep_Time', 'Arrival_Time', 'Duration', 'Route', 'Additional_Info'], axis=1)
df2.tail()
df2.to_csv(os.path.join(path,'dropped_values.csv'))
df2 = pd.read_csv(os.path.join(path,'dropped_values.csv'))
df2 = df2.iloc[:,1:] #to remove extra unnamed column
# ## Now to turn the categorical variables (Airline, Source, Destination) into dummy variables via encoding, and map the number of stops to ordinal values
# +
Airline = df2[['Airline']]
Airline = pd.get_dummies(Airline, drop_first=True)
Airline.head()
Src = df2[['Source']]
Src = pd.get_dummies(Src, drop_first=True)
Src.head()
Dest = df2[['Destination']]
Dest = pd.get_dummies(Dest, drop_first=True)
Dest.head()
# -
df2.replace({"non-stop" : 0, "1 stop" : 1, "2 stops" : 2, "3 stops" : 3, "4 stops" : 4}, inplace=True)
df2.head()
df3 = pd.concat([df2, Airline, Src, Dest], axis=1)
df3.drop(['Airline', 'Source', 'Destination'], axis = 1, inplace=True)
df3.head()
# +
#df3.to_csv(os.path.join(path,'df3_prepared.csv'), index=False)
# -
df_prep = pd.read_csv(os.path.join(path,'df3_prepared.csv'))
df_prep.head()
# ## Doing the same procedure for Test Data
test_df_path = os.path.join(path,"Test_set")
test_df = pd.read_excel(test_df_path+".xlsx")
print(test_df.info())
test_df.tail()
test_df.drop(['Route', 'Additional_Info'], axis=1, inplace=True)
test_df.dropna(inplace=True)
test_df.shape
# +
#this was creating problems; unexpected NaN values created.
#test_df['Journey_day']=pd.to_datetime(test_df.Date_of_Journey,format="%d/%m/%Y", errors='ignore').dt.day
#test_df['Journey_month']=pd.to_datetime(test_df.Date_of_Journey,format="%d/%m/%Y", errors='raise').dt.month
# +
day_list=list(test_df['Date_of_Journey'])
for i in range(len(day_list)):
day_list[i] = int(day_list[i].split(sep="/")[0])
month_list=list(test_df['Date_of_Journey'])
for i in range(len(month_list)):
month_list[i] = int(month_list[i].split(sep="/")[1])
# -
test_df['Journey_day'] = day_list
test_df['Journey_month'] = month_list
test_df.tail()
test_df['Dep_Hour'] = pd.to_datetime(test_df.Dep_Time).dt.hour
test_df['Dep_Min'] = pd.to_datetime(test_df.Dep_Time).dt.minute
test_df['Arrival_hour'] = pd.to_datetime(test_df.Arrival_Time).dt.hour
test_df['Arrival_min'] = pd.to_datetime(test_df.Arrival_Time).dt.minute
# +
duration = list(test_df['Duration'])
for i in range(len(duration)):
    if len(duration[i].split()) != 2:  # check if duration contains only hours or only minutes
if "h" in duration[i]:
duration[i] = duration[i].strip() + " 0m"
else:
duration[i] = "0h " + duration[i].strip()
duration_hours = []
duration_mins = []
for i in range(len(duration)):
duration_hours.append(int(duration[i].split(sep = "h")[0]))
duration_mins.append(int(duration[i].split(sep = "m")[0].split()[-1]))
# -
test_df['Duration_hours'] = duration_hours
test_df['Duration_mins'] = duration_mins
test_df.replace({"non-stop":0, "1 stop":1, "2 stops":2, "3 stops":3, "4 stops":4}, inplace=True)
Airline = test_df[['Airline']]
Airline = pd.get_dummies(Airline, drop_first=True)
#Airline.head()
Src = test_df[['Source']]
Src = pd.get_dummies(Src, drop_first=True)
#Src.head()
Dest = test_df[['Destination']]
Dest = pd.get_dummies(Dest, drop_first=True)
#Dest.head()
test_df2= pd.concat([test_df, Airline, Src, Dest], axis=1)
test_df2.tail()
test_df3 = test_df2.drop(['Airline', 'Date_of_Journey', 'Source',
'Destination', 'Dep_Time', 'Arrival_Time', 'Duration'], axis=1)
test_df3.tail()
# +
#test_df3.to_csv(os.path.join(path,"Test_prepared.csv"), index=False)
# -
prepared_test = pd.read_csv(os.path.join(path,"Test_prepared.csv"))
prepared_test.tail()
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Chapter: Feedforward Neural Networks
#
#
# # Topic: FFNN-based Soft Sensor for kamyr dataset
# import required packages
import numpy as np, pandas as pd
import matplotlib.pyplot as plt
# random number seed for result reproducibility
from numpy.random import seed
seed(10)
import tensorflow
tensorflow.random.set_seed(20)
# fetch data
data = pd.read_csv('kamyr-digester.csv', usecols = range(1,23))
# +
# pre-process
# find the # of nan entries in each column
na_counts = data.isna().sum(axis = 0)
# remove columns that have a lot of nan entries
data_cleaned = data.drop(columns = ['AAWhiteSt-4 ','SulphidityL-4 '])
# remove any row that have any nan entry
data_cleaned = data_cleaned.dropna(axis = 0)
# separate X, y
y = data_cleaned.iloc[:,0].values[:,np.newaxis] # StandardScaler requires 2D array
X = data_cleaned.iloc[:,1:].values
print('Number of samples left: ', X.shape[0])
# +
# separate train and test data
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.2, random_state = 100)
X_est, X_val, y_est, y_val = train_test_split(X_train, y_train, test_size = 0.3, random_state = 100)
# +
# scale data
from sklearn.preprocessing import StandardScaler
X_scaler = StandardScaler()
X_est_scaled = X_scaler.fit_transform(X_est)
X_val_scaled = X_scaler.transform(X_val)
X_test_scaled = X_scaler.transform(X_test)
y_scaler = StandardScaler()
y_est_scaled = y_scaler.fit_transform(y_est)
y_val_scaled = y_scaler.transform(y_val)
y_test_scaled = y_scaler.transform(y_test)
# +
##%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
## Define & Fit FFNN model without early stopping
## %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
# -
#%% import Keras libraries
from tensorflow.keras import Sequential
from tensorflow.keras.layers import Dense
# define model
def FFNN_model():
model = Sequential()
model.add(Dense(20, activation='tanh', kernel_initializer='he_normal', input_shape=(19,)))
model.add(Dense(5, activation='tanh', kernel_initializer='he_normal'))
model.add(Dense(1))
model.compile(loss='mse', optimizer='Adam')
return model
# fit model
history = FFNN_model().fit(X_est_scaled, y_est_scaled, epochs=250, batch_size=32, validation_data=(X_val_scaled, y_val_scaled))
# plot validation curve
plt.figure()
plt.title('Validation Curves')
plt.xlabel('Epoch')
plt.ylabel('MSE')
plt.plot(history.history['loss'], label='training')
plt.plot(history.history['val_loss'], label='validation')
plt.legend()
plt.grid()
plt.show()
# +
##%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
## Define & Fit FFNN model with early stopping
## %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
# -
# random number seed for result reproducibility
from numpy.random import seed
seed(10)
import tensorflow
tensorflow.random.set_seed(20)
# +
# fit model again with early stopping
from tensorflow.keras.callbacks import EarlyStopping
es = EarlyStopping(monitor='val_loss', patience=15)
history = FFNN_model().fit(X_est_scaled, y_est_scaled, epochs=250, batch_size=32, validation_data=(X_val_scaled, y_val_scaled), callbacks=[es])
# -
# plot validation curve
plt.figure()
plt.title('Validation Curves')
plt.xlabel('Epoch')
plt.ylabel('MSE')
plt.plot(history.history['loss'], label='training')
plt.plot(history.history['val_loss'], label='validation')
plt.legend()
plt.grid()
plt.show()
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# Importing Libraries
# -------------------
#
# One of the greatest strengths of the python programming language is its rich set of libraries- pre-written code that implements a variety of functionality. For the data scientist, python's libraries (also called "modules") are particularly valuable. With a little bit of research into the details of python's libraries, a lot of common data tasks are little more than a function call away. Libraries exist for doing data cleaning, analysis, visualization, machine learning and statistics.
#
# [This XKCD cartoon](http://xkcd.com/353/) pretty much summarizes what Python libraries can do...
#
# <img src="http://imgs.xkcd.com/comics/python.png">
#
# In order to have access to a library's functionality in a block of code, you must first import it. Importing a library tells python that while executing your code, it should not only consider the code and functions that you have written, but also the code and functions in the libraries that you have imported.
#
# There are several ways to import modules in python; some have better properties than others. Below we see the preferred general way to import modules. In documentation, you may see other ways to import libraries (`from a_library import foo`). There is no risk in just copying this pattern if it is known to work.
#
# Imagine I want to import a library called `some_python_library`. This can be done using the import commands. All code below that import statement has access to the library contents.
#
# + `import some_python_library`: imports the module `some_python_library`, and creates a reference to that module in the current namespace. Or in other words, after you’ve run this statement, you can use `some_python_library.name` to refer to things defined in module `some_python_library`.
#
# + `import some_python_library as plib`: imports the module `some_python_library` and sets an alias for that library that may be easier to refer to. To refer to a thing defined in the library `some_python_library`, use `plib.name`.
#
# In practice you'll see the second pattern used very frequently; `pandas` referred to as `pd`, `numpy` referred to as `np`, etc.
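# As a tiny illustrative sketch (assuming `numpy` and `pandas` are installed), those common
# aliases look like this in practice:
# +
import numpy as np
import pandas as pd

arr = np.array([1, 2, 3])
s = pd.Series(arr)
print(arr.mean(), s.sum())
# -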
# +
import math
number = 2
math.sqrt(number)
# +
import math as m
m.log(number)
# -
# ### Example: Matplotlib
#
# Matplotlib is one of the first python libraries a budding data scientist is likely to encounter. Matplotlib is a feature-rich plotting framework, capable of most plots you'll likely need. The interface to the matplotlib module mimics the plotting functionality in Matlab, another language and environment for scientific computing. If you're familiar with Matlab plots, matplotlib will seem very familiar. Even the plots look almost identical.
#
# Here, we'll cover some basic functionality of matplotlib: line and bar plots and histograms. As with most content covered in this course, this is just scratching the surface. For more info, including many great examples, please consult the [official matplotlib documentation](http://matplotlib.org/). A typical pattern for me when plotting things in python is to find an example that closely mirrors what I'm trying to do, copy this, and tweak until I get things right.
#
# Note: to get plots to appear inline in ipython notebooks, you must invoke the "magic function" `%matplotlib inline`. To have a stand-alone python app plot in a new window, use `plt.show()`.
#
# In most cases, the input to matplotlib plotting functions is arrays of numerical types, floats or integers.
# +
import matplotlib
matplotlib.__version__
# +
# used to embed plots inside an ipython notebook
# %matplotlib inline
import matplotlib.pyplot as plt
# really simple example:
y = [1, 2, 3, 4, 5, 4, 3, 2, 1]
x = [1, 2, 3, 4, 5, 6, 7, 8, 9]
plt.plot(x, y)
plt.plot([1, 2, 3, 4])
plt.ylabel("some numbers")
plt.show()
# +
import numpy as np
X = np.linspace(0, 10, 101) # create values from 0 to 10, and use 101 values
X
# +
import numpy as np
import math as math
X = np.linspace(0, 10, 100001) # create values from 0 to 10, using 100001 values
Y = []
for x in X:
y = math.sin(x)
Y.append(y)
plt.plot(X, Y, "mx")
plt.title("The Sine Wave")
plt.xlabel("X")
plt.ylabel("sin(X)")
# -
# Notice that most of the functionality in matplotlib that we're using is in the sub-module `matplotlib.pyplot`.
#
# The third argument to the plot function is a format specifier (e.g., the `'mx'` used above, or `'r-.'` in `plt.plot(X, Y, 'r-.')`). It defines display properties for the line; a combined example is sketched after the lists below. Some details:
# Color characters:
#
# + `b`: blue
#
# + `k`: black
#
# + `r`: red
#
# + `c`: cyan
#
# + `m`: magenta
#
# + `y`: yellow
#
# + `g`: green
#
# + `w`: white
#
# Some line/marker formatting specifiers:
#
# + `-`: solid line style
#
# + `--`: dashed line style
#
# + `-.`: dash-dot line style
#
# + `:`: dotted line style
#
# + `.`: point marker
#
# + `,`: pixel marker
#
# + `o`: circle marker
#
# + `+`: plus marker
#
# + `x`: x marker
#
# There are many other options for plots that can be specified. See [documentation](http://matplotlib.org/api/pyplot_api.html#matplotlib.pyplot.plot) for more info. We will also revisit this topic in the Visualization lectures.
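# As a quick illustrative sketch (not from the original text), a color character and a
# line/marker specifier can be combined into a single format string:
# +
xs = [0, 1, 2, 3, 4]
ys = [0, 1, 4, 9, 16]
plt.plot(xs, ys, "g--")  # green dashed line
plt.plot(xs, ys, "ko")   # black circle markers on the same data
plt.title("Format string examples")
plt.show()
# -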
#
# It is possible to plot multiple plots on the same y-axis. In order to do this, the Y data passed into the plot function must be a **list of lists**: one inner list per X value, with one entry per curve, so the outer list has the same length as the X data that is input:
# +
Y = []
for x in X:
y = [math.sin(x), math.cos(x), 0.1 * x]
Y.append(y)
plt.plot(X, Y)
plt.legend(["sin(x)", "cos(x)", "x/10"])
# -
# It is also possible to just plot Y data without corresponding X values. In this case, the index in the array is assumed to be X.
plt.plot(Y)
plt.xlabel("index")
plt.ylabel("f(x)")
plt.legend(["sin(x)", "cos(x)", "x/10"])
# Alternately, multiple calls to plot can be made with differing data. Doing so overlays the subsequent plots, creating the same effect.
# +
Y = []
Z = []
for x in X:
Y.append(math.sin(x))
Z.append(math.cos(x))
plt.plot(X, Y, "b-.")
plt.plot(X, Z, "r--")
plt.legend(["sin(x)", "cos(x)"])
# -
# ### Bar Plots
#
# Bar plots are often a good way to compare data in categories. This is an easy matter with matplotlib; the interface is almost identical to that used when making line plots.
vals = [7, 6.2, 3, 5, 9]
xval = [1, 2, 3, 4, 5]
plt.bar(xval, vals)
# ### Histograms
#
# Histograms are extremely useful for analyzing data. Histograms partition numerical data into a discrete number of buckets (called bins), and return the number of values within each bucket. Typically this is displayed as a bar plot.
# +
Y = []
for x in range(0, 100000):
Y.append(np.random.randn())
plt.hist(Y, 50)
# -
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ### Random trees
#
# There are some simple tools in toytree, based on functions in ete3, that can be used to generate random trees, which are useful for exploring data and testing plotting methods.
#
import toytree
import toyplot
import numpy as np
## create a random tree with N tips
tre = toytree.rtree.coaltree(24)
## assign random edge lengths and supports to each node
for node in tre.treenode.traverse():
node.dist = np.random.exponential(1)
node.support = int(np.random.uniform(50, 100))
# ### Color-mapping
## make a colormap
colormap = toyplot.color.brewer.map(name="Spectral", domain_min=0, domain_max=100)
## get node support values in plot order
vals = np.array(tre.get_node_values("support", show_root=1, show_tips=1))
vals
## get colors mapped to values
colors = toyplot.color.broadcast((vals, colormap), shape=vals.shape)
colors = [toyplot.color.to_css(i) for i in colors]
# ### Show nodes but not values (text)
# Use None as the first argument to `get_node_values()`. You can still toggle whether the root and tip nodes are shown with the remaining arguments to `get_node_values`. This is commonly used when you want node sizes or colors to represent values instead of showing text at the nodes.
tre.draw(
height=400, width=300,
node_labels=tre.get_node_values(None, show_root=True, show_tips=False),
node_colors=colors,
node_sizes=10,
node_style={"stroke": "black"},
tip_labels_align=True,
);
# ### Show nodes and node values
# Similar to above except now enter the name of a real node "feature" into the `get_node_values` functions. This will return the values of the feature in node plotting order. The features "support", "dist", "name", "height", and "idx" are always available. Additional features can be added to nodes.
tre.draw(
height=400, width=350,
node_labels=tre.get_node_values("support", False, False),
node_colors=colors,
node_sizes=15,
node_style={"stroke": "black"},
use_edge_lengths=False,
);
# ### Show node values but not the nodes themselves
tre.draw(
height=400, width=350,
use_edge_lengths=False,
node_labels=tre.get_node_values("support", False, False),
node_colors=colors,
node_sizes=0,
node_style={"stroke": "black"},
node_labels_style={
"baseline-shift": "8px",
"-toyplot-anchor-shift": "-15px",
"font-size": "10px"},
);
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] id="Za8-Nr5k11fh"
# ##### Copyright 2018 The TensorFlow Authors.
# + cellView="form" id="Eq10uEbw0E4l"
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# + [markdown] id="oYM61xrTsP5d"
# # Transfer Learning with TensorFlow Hub for TFLite
# + [markdown] id="aFNhz34Svuhe"
# <table class="tfo-notebook-buttons" align="left">
# <td>
# <a target="_blank" href="https://colab.research.google.com/github/lmoroney/dlaicourse/blob/master/TensorFlow%20Deployment/Course%202%20-%20TensorFlow%20Lite/Week%201/Examples/TFLite_Week1_Transfer_Learning.ipynb">
# <img src="https://www.tensorflow.org/images/colab_logo_32px.png" />
# Run in Google Colab</a>
# </td>
# <td>
# <a target="_blank" href="https://github.com/lmoroney/dlaicourse/blob/master/TensorFlow%20Deployment/Course%202%20-%20TensorFlow%20Lite/Week%201/Examples/TFLite_Week1_Transfer_Learning.ipynb">
# <img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />
# View source on GitHub</a>
# </td>
# </table>
# + [markdown] id="bL54LWCHt5q5"
# ## Setup
# + id="110fGB18UNJn"
try:
# %tensorflow_version 2.x
except:
pass
# + id="dlauq-4FWGZM"
import numpy as np
import matplotlib.pylab as plt
import tensorflow as tf
import tensorflow_hub as hub
import tensorflow_datasets as tfds
tfds.disable_progress_bar()
from tqdm import tqdm
print("\u2022 Using TensorFlow Version:", tf.__version__)
print("\u2022 Using TensorFlow Hub Version: ", hub.__version__)
print('\u2022 GPU Device Found.' if tf.test.is_gpu_available() else '\u2022 GPU Device Not Found. Running on CPU')
# + [markdown] id="mmaHHH7Pvmth"
# ## Select the Hub/TF2 Module to Use
#
# Hub modules for TF 1.x won't work here, please use one of the selections provided.
# + id="FlsEcKVeuCnf"
module_selection = ("mobilenet_v2", 224, 1280) #@param ["(\"mobilenet_v2\", 224, 1280)", "(\"inception_v3\", 299, 2048)"] {type:"raw", allow-input: true}
handle_base, pixels, FV_SIZE = module_selection
MODULE_HANDLE ="https://tfhub.dev/google/tf2-preview/{}/feature_vector/4".format(handle_base)
IMAGE_SIZE = (pixels, pixels)
print("Using {} with input size {} and output dimension {}".format(MODULE_HANDLE, IMAGE_SIZE, FV_SIZE))
# + id="eDg-yd8fatdQ"
# + [markdown] id="sYUsgwCBv87A"
# ## Data Preprocessing
# + [markdown] id="8nqVX3KYwGPh"
# Use [TensorFlow Datasets](http://tensorflow.org/datasets) to load the cats and dogs dataset.
#
# This `tfds` package is the easiest way to load pre-defined data. If you have your own data and are interested in using it with TensorFlow, see [loading image data](../load_data/images.ipynb)
#
# + [markdown] id="YkF4Boe5wN7N"
# The `tfds.load` method downloads and caches the data, and returns a `tf.data.Dataset` object. These objects provide powerful, efficient methods for manipulating data and piping it into your model.
#
# Since `"cats_vs_dogs"` only defines a single `train` split, use the slicing split API to divide it into (train, validation, test) with 80%, 10%, 10% of the data respectively.
# + id="xYXcGT_7b_Rr"
# splits = tfds.Split.ALL.subsplit(weighted=(80,10,10))
# splits, info = tfds.load('cats_vs_dogs', with_info=True, as_supervised=True, split=splits)
# (train_examples, validation_examples, test_examples) = splits
# num_examples = info.splits['train'].num_examples
# num_classes = info.features['label'].num_classes
# + id="SQ9xK9F2wGD8"
# splits = tfds.Split.ALL.subsplit(weighted=(80, 10, 10))
# splits, info = tfds.load('cats_vs_dogs', with_info=True, as_supervised=True, split = splits)
ri = (tfds.core.ReadInstruction('train', to=10, unit='%') +
tfds.core.ReadInstruction('train', from_=-80, unit='%'))
splits, info = tfds.load('cats_vs_dogs', with_info=True, as_supervised=True, split=[ 'train[:80%]', 'train[80%:90%]','train[90%:100%]' ] )
(train_examples, validation_examples, test_examples) = splits
num_examples = info.splits['train'].num_examples
num_classes = info.features['label'].num_classes
# + id="Yu2hgXSVmikc"
num_classes
# + [markdown] id="pmXQYXNWwf19"
# ### Format the Data
#
# Use the `tf.image` module to format the images for the task.
#
# Resize the images to a fixed input size, and rescale the input channels
# + id="y7UyXblSwkUS"
def format_image(image, label):
image = tf.image.resize(image, IMAGE_SIZE) / 255.0
return image, label
# + [markdown] id="1nrDR8CnwrVk"
# Now shuffle and batch the data
#
# + id="zAEUG7vawxLm"
BATCH_SIZE = 32 #@param {type:"integer"}
# + id="fHEC9mbswxvM"
train_batches = train_examples.shuffle(num_examples // 4).map(format_image).batch(BATCH_SIZE).prefetch(1)
validation_batches = validation_examples.map(format_image).batch(BATCH_SIZE).prefetch(1)
test_batches = test_examples.map(format_image).batch(1)
# + [markdown] id="ghQhZjgEw1cK"
# Inspect a batch
# + id="gz0xsMCjwx54"
for image_batch, label_batch in train_batches.take(1):
pass
image_batch.shape
# + [markdown] id="FS_gVStowW3G"
# ## Defining the Model
#
# All it takes is to put a linear classifier on top of the `feature_extractor_layer` with the Hub module.
#
# For speed, we start out with a non-trainable `feature_extractor_layer`, but you can also enable fine-tuning for greater accuracy.
# + cellView="form" id="RaJW3XrPyFiF"
do_fine_tuning = True #@param {type:"boolean"}
# + [markdown] id="wd0KfstqaUmE"
# Load TFHub Module
# + id="svvDrt3WUrrm"
feature_extractor = hub.KerasLayer(MODULE_HANDLE,
input_shape=IMAGE_SIZE + (3,),
output_shape=[FV_SIZE],
trainable=do_fine_tuning)
# + id="50FYNIb1dmJH"
print("Building model with", MODULE_HANDLE)
model = tf.keras.Sequential([
feature_extractor,
tf.keras.layers.Dense(num_classes, activation='softmax')
])
model.summary()
# + id="y5DkJxH3ZpQG"
#@title (Optional) Unfreeze some layers
NUM_LAYERS = 9 #@param {type:"slider", min:1, max:50, step:1}
if do_fine_tuning:
feature_extractor.trainable = True
for layer in model.layers[-NUM_LAYERS:]:
layer.trainable = True
else:
feature_extractor.trainable = False
# + [markdown] id="u2e5WupIw2N2"
# ## Training the Model
# + id="9f3yBUvkd_VJ"
if do_fine_tuning:
model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=0.002, momentum=0.9),
loss=tf.keras.losses.SparseCategoricalCrossentropy(),
metrics=['accuracy'])
else:
model.compile(optimizer='adam',
loss='sparse_categorical_crossentropy',
metrics=['accuracy'])
# + id="w_YKX2Qnfg6x"
EPOCHS = 5
hist = model.fit(train_batches,
epochs=EPOCHS,
validation_data=validation_batches)
# + [markdown] id="u_psFoTeLpHU"
# ## Export the Model
# + id="XaSb5nVzHcVv"
CATS_VS_DOGS_SAVED_MODEL = "exp_saved_model"
# + [markdown] id="fZqRAg1uz1Nu"
# Export the SavedModel
# + id="yJMue5YgnwtN"
tf.saved_model.save(model, CATS_VS_DOGS_SAVED_MODEL)
# + id="SOQF4cOan0SY" magic_args="-s $CATS_VS_DOGS_SAVED_MODEL" language="bash"
# saved_model_cli show --dir $1 --tag_set serve --signature_def serving_default
# + id="FY7QGBgBytwX"
loaded = tf.saved_model.load(CATS_VS_DOGS_SAVED_MODEL)
# + id="tIhPyMISz952"
print(list(loaded.signatures.keys()))
infer = loaded.signatures["serving_default"]
print(infer.structured_input_signature)
print(infer.structured_outputs)
# + [markdown] id="XxLiLC8n0H16"
# ## Convert Using TFLite's Converter
# + [markdown] id="1aUYvCpfWmrQ"
# Load the TFLiteConverter with the SavedModel
# + id="dqJRyIg8Wl1n"
converter = tf.lite.TFLiteConverter.from_saved_model(CATS_VS_DOGS_SAVED_MODEL)
# + [markdown] id="AudcNjT0UtfF"
# ### Post-Training Quantization
# The simplest form of post-training quantization quantizes weights from floating point to 8-bits of precision. This technique is enabled as an option in the TensorFlow Lite converter. At inference, weights are converted from 8-bits of precision to floating point and computed using floating-point kernels. This conversion is done once and cached to reduce latency.
#
# To further improve latency, hybrid operators dynamically quantize activations to 8-bits and perform computations with 8-bit weights and activations. This optimization provides latencies close to fully fixed-point inference. However, the outputs are still stored using floating point, so that the speedup with hybrid ops is less than a full fixed-point computation.
# + id="WmSr2-yZoUhz"
converter.optimizations = [tf.lite.Optimize.DEFAULT]
# + [markdown] id="YpCijI08UxP0"
# ### Post-Training Integer Quantization
# We can get further latency improvements, reductions in peak memory usage, and access to integer only hardware accelerators by making sure all model math is quantized. To do this, we need to measure the dynamic range of activations and inputs with a representative data set. You can simply create an input data generator and provide it to our converter.
# + id="clM_dTIkWdIa"
def representative_data_gen():
for input_value, _ in test_batches.take(100):
yield [input_value]
# + id="0oPkAxDvUias"
converter.representative_dataset = representative_data_gen
# + [markdown] id="IGUAVTqXVfnu"
# The resulting model will be fully quantized but still take float input and output for convenience.
#
# Ops that do not have quantized implementations will automatically be left in floating point. This allows conversion to occur smoothly but may restrict deployment to accelerators that support float.
# + [markdown] id="cPVdjaEJVkHy"
# ### Full Integer Quantization
#
# To require the converter to only output integer operations, one can specify:
# + id="eQi1aO2cVhoL"
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
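# + [markdown]
# As noted above, the converted model keeps float input and output by default. As an optional,
# hedged sketch (not part of the original tutorial; it assumes TF >= 2.3), the converter also
# exposes `inference_input_type`/`inference_output_type`, which can be set to an integer type to
# drop the float conversion at the model boundaries:

# +
quantize_io = False  # set to True to try fully integer input/output tensors
if quantize_io:
    converter.inference_input_type = tf.uint8
    converter.inference_output_type = tf.uint8
# -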
# + [markdown] id="snwssESbVtFw"
# ### Finally convert the model
# + id="tUEgr46WVsqd"
tflite_model = converter.convert()
tflite_model_file = 'converted_model.tflite'
with open(tflite_model_file, "wb") as f:
f.write(tflite_model)
# + [markdown] id="BbTF6nd1KG2o"
# ## Test the TFLite Model Using the Python Interpreter
# + id="dg2NkVTmLUdJ"
# Load TFLite model and allocate tensors.
interpreter = tf.lite.Interpreter(model_path=tflite_model_file)
interpreter.allocate_tensors()
input_index = interpreter.get_input_details()[0]["index"]
output_index = interpreter.get_output_details()[0]["index"]
# + id="snJQVs9JNglv"
# Gather results for the randomly sampled test images
predictions = []
test_labels, test_imgs = [], []
for img, label in tqdm(test_batches.take(10)):
interpreter.set_tensor(input_index, img)
interpreter.invoke()
predictions.append(interpreter.get_tensor(output_index))
test_labels.append(label.numpy()[0])
test_imgs.append(img)
# + cellView="form" id="YMTWNqPpNiAI"
#@title Utility functions for plotting
# Utilities for plotting
class_names = ['cat', 'dog']
def plot_image(i, predictions_array, true_label, img):
predictions_array, true_label, img = predictions_array[i], true_label[i], img[i]
plt.grid(False)
plt.xticks([])
plt.yticks([])
img = np.squeeze(img)
plt.imshow(img, cmap=plt.cm.binary)
predicted_label = np.argmax(predictions_array)
if predicted_label == true_label:
color = 'green'
else:
color = 'red'
plt.xlabel("{} {:2.0f}% ({})".format(class_names[predicted_label],
100*np.max(predictions_array),
class_names[true_label]), color=color)
# + [markdown] id="fK_CTyL3XQt1"
# NOTE: Colab runs on server CPUs. At the time of writing this, TensorFlow Lite doesn't have super optimized server CPU kernels. For this reason post-training full-integer quantized models may be slower here than the other kinds of optimized models. But for mobile CPUs, considerable speedup can be observed.
# + cellView="form" id="1-lbnicPNkZs"
#@title Visualize the outputs { run: "auto" }
index = 3 #@param {type:"slider", min:0, max:9, step:1}
plt.figure(figsize=(6,3))
plt.subplot(1,2,1)
plot_image(index, predictions, test_labels, test_imgs)
plt.show()
# + [markdown] id="PmZRieHmKLY5"
# Create a file to save the labels.
# + id="H92yi-vbZpQb"
labels = ['cat', 'dog']
with open('labels.txt', 'w') as f:
f.write('\n'.join(labels))
# + [markdown] id="7PxItDgvZpQb"
# If you are running this notebook in a Colab, you can run the cell below to download the model and labels to your local disk.
#
# **Note**: If the files do not download when you run the cell, try running the cell a second time. Your browser might prompt you to allow multiple files to be downloaded.
# + id="0jJAxrQB2VFw"
try:
from google.colab import files
files.download('converted_model.tflite')
files.download('labels.txt')
except:
pass
# + [markdown] id="BDlmpjC6VnFZ"
# # Prepare the Test Images for Download (Optional)
# + [markdown] id="_1ja_WA0WZOH"
# This part involves downloading additional test images for the Mobile Apps only in case you need to try out more samples
# + id="fzLKEBrfTREA"
# !mkdir -p test_images
# + id="Qn7ukNQCSewb"
from PIL import Image
for index, (image, label) in enumerate(test_batches.take(50)):
image = tf.cast(image * 255.0, tf.uint8)
image = tf.squeeze(image).numpy()
pil_image = Image.fromarray(image)
pil_image.save('test_images/{}_{}.jpg'.format(class_names[label[0]], index))
# + id="xVKKWUG8UMO5"
# !ls test_images
# + id="l_w_-UdlS9Vi"
# !zip -qq cats_vs_dogs_test_images.zip -r test_images/
# + [markdown] id="OHujfDG5ZpQl"
# If you are running this notebook in a Colab, you can run the cell below to download the Zip file with the images to your local disk.
#
# **Note**: If the Zip file does not download when you run the cell, try running the cell a second time.
# + id="Giva6EHwWm6Y"
try:
files.download('cats_vs_dogs_test_images.zip')
except:
pass
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .jl
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Julia 0.5.2
# language: julia
# name: julia-0.5
# ---
# # Finite Markov Chains: Examples
# **Daisuke Oyama**
#
# *Faculty of Economics, University of Tokyo*
# This notebook demonstrates how to analyze finite-state Markov chains
# with the `MarkovChain` class.
# For basic concepts and properties on Markov chains, see
#
# * [the lecture on finite Markov chains](http://quant-econ.net/py/finite_markov.html)
# in Quantitative Economics, and
# * [the documentation for `MarkovChain`](http://quanteconpy.readthedocs.org/en/stable/markov/core.html).
#
# For algorithmic issues in detecting reducibility and periodicity of a Markov chain,
# see, for example,
#
# * J. P. Jarvis and D. R. Shier,
# "[Graph-Theoretic Analysis of Finite Markov Chains](http://www.ces.clemson.edu/~shierd/Shier/markov.pdf),"
#
# from which we draw some examples below.
using PyPlot
using QuantEcon
# +
T = 0.5 # Time expiration (years)
vol = 0.2 # Annual volatility
r = 0.05 # Annual interest rate
strike = 2.1 # Strike price
p0 = 2 # Current price
N = 100 # Number of periods to expiration
# Time length of a period
tau = T/N
# Discount factor
beta = exp(-r*tau)
# Up-jump factor
u = exp(vol*sqrt(tau))
# Up-jump probability
q = 1/2 + sqrt(tau)*(r - vol^2/2)/(2*vol)
# -
P = zeros(6, 6)
P[1, 1] = 1
P[2, 5] = 1
P[3, 3], P[3, 4], P[3, 5] = 1/3, 1/3, 1/3
P[4, 1], P[4, 6] = 1/2, 1/2
P[5, 2], P[5, 5] = 1/2, 1/2
P[6, 1], P[6, 4] = 1/2, 1/2
# Create a MarkovChain instance:
mc1 = MarkovChain(P)
# ### Classification of states
# This Markov chain is reducible:
is_irreducible(mc1)
num_communication_classes(mc1)
# Determine the communication classes:
communication_classes(mc1)
# Classify the states of this Markov chain:
recurrent_classes(mc1)
# Obtain a list of the recurrent states:
recurrent_states = [x for x in vcat(recurrent_classes(mc1)...)]
recurrent_states
# Obtain a list of the transient states:
transient_states = setdiff(collect(1:n_states(mc1)), recurrent_states)
transient_states
# A Markov chain is reducible (i.e., its directed graph is not strongly connected)
# if and only if by symmetric permutations of rows and columns,
# its transition probability matrix is written in the form ("canonical form")
# $$
# \begin{pmatrix}
# U & 0 \\
# W & V
# \end{pmatrix},
# $$
# where $U$ and $W$ are square matrices.
#
# Such a form for `mc1` is obtained by the following:
permutation = vcat(recurrent_states, transient_states)
print(mc1.p[permutation, :][:, permutation])
# This Markov chain is aperiodic
# (i.e., the least common multiple of the periods of the recurrent sub-chains is one):
is_aperiodic(mc1)
# Indeed, each of the sub-chains corresponding to the recurrent classes has period $1$,
# i.e., every recurrent state is aperiodic:
println(recurrent_classes(mc1))
for recurrent_class in recurrent_classes(mc1)
sub_matrix = mc1.p[recurrent_class, :][:, recurrent_class]
d = period(MarkovChain(sub_matrix))
println("Period of the sub-chain\n $sub_matrix \n = $d")
end
# ### Stationary distributions
# For each recurrent class $C$, there is a unique stationary distribution $\psi^C$
# such that $\psi^C_i > 0$ for all $i \in C$ and $\psi^C_i = 0$ otherwise.
# In the Julia library, `mc_compute_stationary` returns
# these unique stationary distributions for the recurrent classes.
# Any stationary distribution is written as a convex combination of these distributions.
print(mc_compute_stationary(mc1))
# These are indeed stationary distributions:
print(mc_compute_stationary(mc1) * mc1.p)
# Plot these distributions.
function draw_histogram(distribution; ax=nothing, figsize=nothing,
title=nothing, xlabel=nothing, ylabel=nothing, ylim=(0, 1))
"""
Plot the given distribution.
"""
    created_fig = ax == nothing   # create a new figure only when no axis is passed in
    if created_fig
        fig, ax = figsize != nothing ? subplots(figsize=figsize) : subplots()
    end
n = length(distribution)
ax[:bar](collect(1:n), distribution, align="center")
#bar(1:n, distribution, align="center")
ax[:set_xlim](-0.5, (n-1)+0.5)
ax[:set_ylim](ylim)
if title != nothing
ax[:set_title](title)
end
if xlabel != nothing
ax[:set_xlabel](xlabel)
end
if ylabel != nothing
ax[:set_ylabel](ylabel)
end
    if created_fig
        show()
    end
end
# +
fig, axes = subplots(1, 2, figsize=(12, 4))
titles = ["Stationary distribution for the recurrent class $recurrent_class"
for recurrent_class in recurrent_classes(mc1)]
for (ax, title, dist) in zip(axes, titles, mc_compute_stationary(mc1))
draw_histogram(dist, ax=ax, title=title, xlabel="States")
end
suptitle("Stationary distributions", y=-0.05, fontsize=12)
show()
# -
# ### Simulation
# Let us simulate our Markov chain `mc1`.
# The `simulate` method generates a sample path
# of length given by the first argument, `ts_length`,
# with an initial state as specified by an optional argument `init`;
# if not specified, the initial state is randomly drawn.
# A sample path from state `1`:
simulate(mc1, 50, [1])
# As is clear from the transition matrix `P`,
# if it starts at state `1`, the chain stays there forever,
# i.e., `1` is an absorbing state, a state that constitutes a singleton recurrent class.
# Start with state `2`:
simulate(mc1, 50, [2])
# You can observe that the chain stays in the recurrent class $\{2,5\}$
# and visits states `2` and `5` with certain frequencies.
# If `init` is not specified, the initial state is randomly chosen:
simulate(mc1, 50)
# **Note on reproducibility**:
# The `simulate` method offers an option `random_state` to set a random seed
# to initialize the pseudo-random number generator.
# As you provide the same random seed value,
# `simulate` returns the same outcome.
# For example, the following will always give the same sequence:
# Note that the `simulate` function in Julia does not have a `random_state` argument.
#simulate(mc1, 50, random_state = 12345) #=> doesn't work in julia
# We can alternatively seed the global RNG with the `srand()` function:
srand(12345)
simulate(mc1, 50)
# #### Time series averages
# Now, let us compute the frequency distribution along a sample path, given by
# $$
# \frac{1}{t} \sum_{\tau=0}^{t-1} \mathbf{1}\{X_{\tau} = s\}
# \quad (s \in S).
# $$
function time_series_dist(mc, t, init=nothing, random_state::Int=0)
"""
Return the distribution of visits by a sample path of length t
of mc with an initial state init.
"""
if typeof(t) == Int #t is an int
t_max = t
ts_size = 1
ts_array = [t]
dim = 1
else
t_max = maximum(t)
ts_size = length(t) # t is an array
ts_array = t
dim = 2
end
srand(random_state)
X = simulate(mc, t_max, [init])
dists = zeros((ts_size, n_states(mc)))
    for (i, t_i) in enumerate(ts_array)
        # fraction of the first t_i periods spent in each state
        for s in 1:n_states(mc)
            dists[i, s] = sum(X[1:t_i] .== s) / t_i
        end
    end
    if dim == 1
        return dists[1, :]
else
return dists
end
end
# Here is a frequency distribution along a sample path, of length 100,
# from initial state `2`, which is a recurrent state (it belongs to the recurrent class $\{2,5\}$):
time_series_dist(mc1, 100, 2)
# Length 10,000:
time_series_dist(mc1, 10^4, 2)
# The distribution becomes close to the stationary distribution `(0, 1/3, 0, 0, 2/3, 0)`.
# Plot the frequency distributions for a couple of different time lengths:
function plot_time_series_dists(mc, ts, init; seed=nothing, figsize=(12,4))
dists = typeof(seed) == Int? time_series_dist(mc, ts, init, seed) : time_series_dist(mc, ts, init)
fig, axes = subplots(1, length(ts), figsize=figsize)
titles = ["t=$t" for t in ts]
for (ax, title, dist) in zip(axes, titles, dists)
draw_histogram(dist, ax=ax, title=title, xlabel="States")
#fig.suptitle("Time series distributions with init=$init",
# y=-0.05, fontsize=12)
end
show()
end
init = 1
ts = [5, 10, 50, 100]
plot_time_series_dists(mc1, ts, init)
# Start with state `3`,
# which is a transient state:
init = 3
ts = [5, 10, 50, 100]
plot_time_series_dists(mc1, ts, init)
# Run the above cell several times;
# you will observe that the limit distribution differs across sample paths.
# Sometimes the state is absorbed into the recurrent class $\{1\}$,
# while other times it is absorbed into the recurrent class $\{2,5\}$.
# +
init = 3
ts = [5, 10, 50, 100]
seeds = [222, 2222]
adjectives = vcat(["Some", "Another"], fill("Yet another", length(seeds)-1))
descriptions = ["$adjective sample path with init=$init" for adjective in adjectives]
for (seed, description) in zip(seeds, descriptions)
    println(description)
    plot_time_series_dists(mc1, ts, init, seed=seed)
end
# -
# In fact,
# for almost every sample path of a finite Markov chain $\{X_t\}$,
# for some recurrent class $C$ we have
# $$
# \frac{1}{t} \sum_{\tau=0}^{t-1} \mathbf{1}\{X_{\tau} = s\} \to \psi^C[s]
# \quad \text{as $t \to \infty$}
# $$
# for all states $s$,
# where $\psi^C$ is the stationary distribution associated with the recurrent class $C$.
#
# If the initial state $s_0$ is a recurrent state,
# then the recurrent class $C$ above is the one that contains $s_0$,
# while if it is a transient state,
# then the recurrent class to which the convergence occurs depends on the sample path.
# Let us simulate with the remaining states, `4`, `5`, and `6`.
# +
inits = [4, 5, 6]
t = 100
fig, axes = subplots(1, 3, figsize=(12, 3))
for (init, ax) in zip(inits, axes)
    draw_histogram(time_series_dist(mc1, t, init), ax=ax,
title="Initial state = $init",
xlabel="States")
end
suptitle("Time series distributions for t=$t",
y=-0.05, fontsize=12)
show()
# -
# #### Cross sectional averages
# Next, let us repeat the simulation for many times (say, 10,000 times)
# and obtain the distribution of visits to each state at a given time period `T`.
# That is, we want to simulate the marginal distribution at time `T`.
function cross_sectional_dist(mc, T; init=nothing, num_reps=10^4, random_state=0)
    """
    Return the distribution of visits at time T by num_reps simulations
    of mc with an initial state init.
    """
    if typeof(T) == Int  # T is an int
        Ts_array = [T]
        dim = 1
    else
        Ts_array = T  # T is an array
        dim = 2
    end
    T_max = maximum(Ts_array)
    srand(random_state)
    dists = zeros(length(Ts_array), n_states(mc))
    # repeat single-path simulations num_reps times and count the state visited at each time in T
    for rep in 1:num_reps
        X = init == nothing ? simulate(mc, T_max + 1) : simulate(mc, T_max + 1, [init])
        for (i, T_i) in enumerate(Ts_array)
            dists[i, X[T_i + 1]] += 1  # time 0 corresponds to index 1
        end
    end
    dists /= num_reps
    if dim == 1
        return dists[1, :]
    else
        return dists
    end
end
# Start with state `1`:
init = 1
T = 10
cross_sectional_dist(mc1, T, init=init)
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Use Python function to recognize hand-written digits with `ibm-watson-machine-learning`
#
# Create and deploy a function that receives HTML canvas image data from a web app and then processes and sends that data to a model trained to recognize handwritten digits.
# See: <a href="https://dataplatform.cloud.ibm.com/docs/content/analyze-data/ml-deployed-func-mnist-tutorial.html" target="_blank">MNIST function deployment tutorial</a>
#
# This notebook runs on Python 3.8.
#
# ## Learning goals
#
# The learning goals of this notebook are:
#
# - AI function definition
# - Store AI function
# - Deployment creation
#
# ## Contents
#
# This notebook contains the following parts:
# 1. [Setup](#setup)
# 2. [Get an ID for a model deployment](#step4)
# 3. [Get sample canvas data](#step5)
# 4. [Create a deployable function](#step6)
# 5. [Store and deploy the function](#step7)
# 6. [Test the deployed function](#step8)
# 7. [Clean up](#cleanup)
# <a id="setup"></a>
# ## 1. Set up the environment
#
# Before you use the sample code in this notebook, you must perform the following setup tasks:
#
# - Contact your Cloud Pak for Data administrator and ask them for your account credentials
# ### Connection to WML
#
# Authenticate the Watson Machine Learning service on IBM Cloud Pak for Data. You need to provide the platform `url`, your `username`, and your `api_key`.
username = 'PASTE YOUR USERNAME HERE'
api_key = 'PASTE YOUR API_KEY HERE'
url = 'PASTE THE PLATFORM URL HERE'
wml_credentials = {
"username": username,
"apikey": api_key,
"url": url,
"instance_id": 'openshift',
"version": '4.0'
}
# Alternatively you can use `username` and `password` to authenticate WML services.
#
# ```
# wml_credentials = {
# "username": ***,
# "password": ***,
# "url": ***,
# "instance_id": 'openshift',
# "version": '4.0'
# }
#
# ```
# ### Install and import the `ibm-watson-machine-learning` package
# **Note:** `ibm-watson-machine-learning` documentation can be found <a href="http://ibm-wml-api-pyclient.mybluemix.net/" target="_blank" rel="noopener noreferrer">here</a>.
# !pip install -U ibm-watson-machine-learning
# +
from ibm_watson_machine_learning import APIClient
client = APIClient(wml_credentials)
# -
# ### Working with spaces
#
# First of all, you need to create a space that will be used for your work. If you do not have a space already created, you can use `{PLATFORM_URL}/ml-runtime/spaces?context=icp4data` to create one.
#
# - Click New Deployment Space
# - Create an empty space
# - Go to space `Settings` tab
# - Copy `space_id` and paste it below
#
# **Tip**: You can also use SDK to prepare the space for your work. More information can be found [here](https://github.com/IBM/watson-machine-learning-samples/blob/master/cpd4.0/notebooks/python_sdk/instance-management/Space%20management.ipynb).
#
# **Action**: Assign space ID below
space_id = 'PASTE YOUR SPACE ID HERE'
# You can use the `list` method to print all existing spaces.
client.spaces.list(limit=10)
# To be able to interact with all resources available in Watson Machine Learning, you need to set the **space** that you will be using.
client.set.default_space(space_id)
# ## 2. <a id="step4"></a> Get an ID for a model deployment
#
# The deployed function created in this notebook is designed to send payload data to a TensorFlow model created in the <a href="https://dataplatform.cloud.ibm.com/docs/content/analyze-data/ml-mnist-tutorials.html" target="_blank" rel="noopener noreferrer">MNIST tutorials</a>.
import os, wget, json
import numpy as np
import matplotlib.pyplot as plt
import requests
# ### Option 1: Use your own, existing model deployment
#
# If you already deployed a model while working through one of the following MNIST tutorials, you can use that model deployment:
# - <a href="https://dataplatform.cloud.ibm.com/docs/content/analyze-data/ml_dlaas_tutorial_tensorflow_experiment-builder.html" target="_blank" rel="noopener noreferrer">Experiment builder MNIST tutorial</a>
# - <a href="https://dataplatform.cloud.ibm.com/docs/content/analyze-data/ml_dlaas_tutorial_tensorflow_experiment-builder_hpo.html" target="_blank" rel="noopener noreferrer">Experiment builder (HPO) MNIST tutorial</a>
# - <a href="https://dataplatform.cloud.ibm.com/docs/content/analyze-data/ml-python-mnist-tutorial.html" target="_blank" rel="noopener noreferrer">Python client (notebook) MNIST tutorial</a>
# - <a href="https://dataplatform.cloud.ibm.com/docs/content/analyze-data/ml_dlaas_tutorial_tensorflow_cli.html" target="_blank" rel="noopener noreferrer">CLI MNIST tutorial</a>
# - <a href="https://dataplatform.cloud.ibm.com/docs/content/analyze-data/ml_dlaas_cli_with_hpo.html" target="_blank" rel="noopener noreferrer">CLI (HPO) MNIST tutorial</a>
#
# Paste the model deployment ID in the following cell.
#
# See: <a href="https://dataplatform.cloud.ibm.com/docs/content/analyze-data/ml-get-endpoint-url.html" target="_blank" rel="noopener noreferrer">Looking up an online deployment ID</a>
#
# Paste your existing model deployment ID here, or look it up by its deployment name:
model_deployment_id = ""
for x in client.deployments.get_details()['resources']:
    if x['entity']['name'] == 'MNIST saved model deployment':
        model_deployment_id = x['metadata']['id']
# ### Option 2: Download, store, and deploy a sample model
# You can deploy a sample model and get its deployment ID by running the code in the following cells.
# +
# Download a sample model to the notebook working directory
sample_saved_model_filename = 'mnist-tf-hpo-saved-model.tar.gz'
url = 'https://github.com/IBM/watson-machine-learning-samples/raw/master/cpd4.0/models/tensorflow/mnist/' + sample_saved_model_filename
if not os.path.isfile(sample_saved_model_filename):
wget.download(url)
# +
# Look up software specification for the MNIST model
software_spec_uid = client.software_specifications.get_id_by_name("default_py3.8")
# +
# Store the sample model in your Watson Machine Learning repository
metadata = {
client.repository.ModelMetaNames.NAME: 'Saved MNIST model',
client.repository.ModelMetaNames.TYPE: 'tensorflow_2.4',
    client.repository.ModelMetaNames.SOFTWARE_SPEC_UID: software_spec_uid
}
model_details = client.repository.store_model(
model=sample_saved_model_filename,
meta_props=metadata
)
# -
model_details
# +
# Get published model ID
published_model_uid = client.repository.get_model_uid(model_details)
# +
# Deploy the stored model
metadata = {
client.deployments.ConfigurationMetaNames.NAME: "MNIST saved model deployment",
client.deployments.ConfigurationMetaNames.ONLINE: {}
}
model_deployment_details = client.deployments.create(published_model_uid, meta_props=metadata)
# +
# Get the ID of the model deployment just created
model_deployment_id = client.deployments.get_uid(model_deployment_details)
print(model_deployment_id)
# -
# ## <a id="step5"></a> 3. Get sample canvas data
#
# The deployed function created in this notebook is designed to accept RGBA image data from an HTML canvas object in one of these sample apps:
#
# - <a href="https://dataplatform.cloud.ibm.com/docs/content/analyze-data/ml-nodejs-mnist-tutorial.html" target="_blank" rel="noopener noreferrer">Node.js MNIST sample app</a>
# - <a href="https://dataplatform.cloud.ibm.com/docs/content/analyze-data/ml-python-flask-mnist-tutorial.html" target="_blank" rel="noopener noreferrer">Python Flask MNIST sample app</a>
#
# Run the following cells to download and view sample canvas data for testing the deployed function.
# ### 3.1 Download sample data file
# +
# Download the file containing the sample data
sample_canvas_data_filename = 'mnist-html-canvas-image-data.json'
url = 'https://github.com/IBM/watson-machine-learning-samples/raw/master/cpd4.0/data/mnist/' + sample_canvas_data_filename
if not os.path.isfile(sample_canvas_data_filename):
wget.download(url)
# +
# Load the sample data from the file into a variable
with open(sample_canvas_data_filename) as data_file:
sample_cavas_data = json.load(data_file)
# -
# ### 3.2 View sample data
# +
# View the raw contents of the sample data
print("Height (n): " + str(sample_cavas_data["height"]) + " pixels\n")
print("Num image data entries: " + str(len( sample_cavas_data["data"])) + " - (n * n * 4) elements - RGBA values\n")
print(json.dumps(sample_cavas_data, indent=3)[:75] + "...\n" + json.dumps(sample_cavas_data, indent=3)[-50:])
# +
# See what hand-drawn digit the sample data represents
rgba_arr = np.asarray(sample_cavas_data["data"]).astype('uint8')
n = sample_cavas_data["height"]
plt.figure()
plt.imshow( rgba_arr.reshape(n, n, 4))
plt.xticks([])
plt.yticks([])
plt.show()
# -
# ## <a id="step6"></a> 4. Create a deployable function
#
# The basics of creating and deploying functions in Watson Machine Learning are given here:
# - <a href="https://dataplatform.cloud.ibm.com/docs/content/analyze-data/ml-deploy-functions.html" target="_blank" rel="noopener noreferrer">Creating and deploying functions</a>
# - <a href="https://dataplatform.cloud.ibm.com/docs/content/analyze-data/ml-functions.html" target="_blank" rel="noopener noreferrer">Implementation details of deployable functions</a>
#
# ### 4.1 Define the function
# 1. Define a Python closure with an inner function named "score".
# 2. Use default parameters to save your Watson Machine Learning credentials and the model deployment ID with the deployed function.
# 3. Process the canvas data (reshape and normalize) and then send the processed data to the model deployment.
# 4. Process the results from the model deployment so the deployed function returns simpler results.
# 5. Implement error handling so the function will behave gracefully if there is an error.
# +
ai_parms = {"wml_credentials": wml_credentials, "space_id": space_id, "model_deployment_id": model_deployment_id}
def my_deployable_function( parms=ai_parms ):
def getRGBAArr(canvas_data):
import numpy as np
dimension = canvas_data["height"]
rgba_data = canvas_data["data"]
rgba_arr = np.asarray(rgba_data).astype('uint8')
return rgba_arr.reshape(dimension, dimension, 4)
def getNormAlphaList(img):
import numpy as np
alpha_arr = np.array(img.split()[-1])
norm_alpha_arr = alpha_arr / 255
norm_alpha_list = norm_alpha_arr.reshape(1, 784).tolist()
return norm_alpha_list
def score(function_payload):
try:
from PIL import Image
canvas_data = function_payload["input_data"][0]["values"][0] # Read the payload received by the function
rgba_arr = getRGBAArr(canvas_data) # Create an array object with the required shape
img = Image.fromarray(rgba_arr, 'RGBA') # Create an image object that can be resized
sm_img = img.resize((28, 28), Image.LANCZOS) # Resize the image to 28 x 28 pixels
alpha_list = getNormAlphaList(sm_img) # Create a 1 x 784 array of values between 0 and 1
model_payload = {"input_data": [{"values" : alpha_list}]} # Create a payload to be sent to the model
#print( "Payload for model:" ) # For debugging purposes
#print( model_payload ) # For debugging purposes
from ibm_watson_machine_learning import APIClient
client = APIClient(parms["wml_credentials"])
client.set.default_space(parms["space_id"])
model_result = client.deployments.score(parms["model_deployment_id"], model_payload)
digit_class = model_result["predictions"][0]["values"][0]
return model_result
except Exception as e:
return {'predictions': [{'values': [repr(e)]}]}
#return {"error" : repr(e)}
return score
# -
# ### 4.2 Test locally
# You can test your function in the notebook before deploying the function.
#
# To see debugging info:
# 1. Uncomment the print statements inside the score function
# 2. Rerun the cell defining the function
# 3. When you rerun this cell, you will see the debugging info
# +
# Pass the sample canvas data to the function as a test
func_result = my_deployable_function()({"input_data": [{"values": [sample_cavas_data]}]})
print(func_result)
# -
# ## <a id="step7"></a> 5. Store and deploy the function
# Before you can deploy the function, you must store the function in your Watson Machine Learning repository.
# +
# Look up software specification for the deployable function
software_spec_uid = client.software_specifications.get_id_by_name("default_py3.8")
# +
# Store the deployable function in your Watson Machine Learning repository
meta_data = {
client.repository.FunctionMetaNames.NAME: 'MNIST deployable function',
    client.repository.FunctionMetaNames.SOFTWARE_SPEC_UID: software_spec_uid
}
function_details = client.repository.store_function(meta_props=meta_data, function=my_deployable_function)
# +
# Get published function ID
function_uid = client.repository.get_function_uid(function_details)
# +
# Deploy the stored function
metadata = {
client.deployments.ConfigurationMetaNames.NAME: "MNIST function deployment",
client.deployments.ConfigurationMetaNames.ONLINE: {}
}
function_deployment_details = client.deployments.create(function_uid, meta_props=metadata)
# -
# ## <a id="step8"></a> 6. Test the deployed function
#
# You can use the Watson Machine Learning Python client or REST API to send data to your function deployment for processing in exactly the same way you send data to model deployments for processing.
# +
# Get the endpoint URL of the function deployment just created
function_deployment_id = client.deployments.get_uid(function_deployment_details)
function_deployment_endpoint_url = client.deployments.get_scoring_href(function_deployment_details)
print(function_deployment_id)
print(function_deployment_endpoint_url)
# -
payload = {"input_data": [{"values": [sample_cavas_data]}]}
# ### 6.1 Watson Machine Learning Python client
result = client.deployments.score(function_deployment_id, payload)
if "error" in result:
print(result["error"])
else:
print(result)
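# ### 6.2 REST API
#
# As an alternative to the Python client, you can post the same payload to the scoring endpoint over REST. The cell below is a minimal sketch, not part of the original tutorial: it assumes `mltoken` holds a valid bearer token for your Cloud Pak for Data platform (how you obtain it depends on your installation) and reuses `function_deployment_endpoint_url` and `payload` from the cells above. Depending on your release, the endpoint may also require a `?version=YYYY-MM-DD` query parameter.
# +
# Minimal REST scoring sketch (assumes `mltoken` is a valid bearer token)
mltoken = 'PASTE YOUR PLATFORM TOKEN HERE'
header = {'Content-Type': 'application/json', 'Authorization': 'Bearer ' + mltoken}

# Post the payload to the function deployment's scoring endpoint
response_scoring = requests.post(function_deployment_endpoint_url,
                                 json=payload,
                                 headers=header,
                                 verify=False)  # adjust TLS verification for your cluster
print(response_scoring.json())
# -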
# ## <a id="cleanup"></a> 7. Clean up
# If you want to clean up all created assets:
# - experiments
# - trainings
# - pipelines
# - model definitions
# - models
# - functions
# - deployments
#
# please follow this sample [notebook](https://github.com/IBM/watson-machine-learning-samples/blob/master/notebooks/python_sdk/instance-management/Machine%20Learning%20artifacts%20management.ipynb).
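#
# For example, to remove only the artifacts created in this notebook, you can use the `delete` methods of the client shown above (a sketch; skip any ID you did not create in your session):
# +
# Delete the deployments first, then the stored assets
client.deployments.delete(function_deployment_id)
client.deployments.delete(model_deployment_id)
client.repository.delete(function_uid)
client.repository.delete(published_model_uid)
# -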
# ## Summary and next steps
# In this notebook, you created a Python function that receives HTML canvas image data and then processes and sends that data to a model trained to recognize handwritten digits.
#
# To learn how you can use this deployed function in a web app, see:
# - <a href="https://dataplatform.cloud.ibm.com/docs/content/analyze-data/ml-nodejs-mnist-tutorial.html" target="_blank" rel="noopener noreferrer">Sample Node.js app that recognizes hand-drawn digits</a>
# - <a href="https://dataplatform.cloud.ibm.com/docs/content/analyze-data/ml-python-flask-mnist-tutorial.html" target="_blank" rel="noopener noreferrer">Sample Python Flask app that recognizes hand-drawn digits</a>
# ### <a id="authors"></a>Authors
#
# **Sarah Packowski** is a member of the IBM Watson Studio Content Design team in Canada.
#
# <hr>
# Copyright © IBM Corp. 2018-2021. This notebook and its source code are released under the terms of the MIT License.
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# inspired by https://github.com/jupyter-widgets/ipywidgets/blob/master/docs/source/examples/Lorenz%20Differential%20Equations.ipynb and others
# # Exploring the Aizawa System of Differential Equations
# ## Overview
# In this notebook we explore the Aizawa system of differential equations:
#
# $$
# \begin{aligned}
# \dot{x} & = (z-b)x - dy \\
# \dot{y} & = dx + (z-b)y \\
# \dot{z} & = c + az - z^3/3 - x^2 + fzx^3
# \end{aligned}
# $$
#
# This is one of the classic systems in non-linear differential equations. It exhibits a range of different behaviors as the parameters (\\(a\\), \\(b\\), \\(c\\), \\(d\\), \\(e\\), \\(f\\)) are varied.
# ## Imports
# First, we import the needed things from ipywidgets and IPython; NumPy, Matplotlib and SciPy are used inside the helper scripts imported below.
# %matplotlib inline
from ipywidgets import interact, interactive, HBox, Layout,VBox
from IPython.display import clear_output, display, HTML
# ## Computing the trajectories and plotting the result
# We define a function that can integrate the differential equations numerically and then plot the solutions. This function has arguments that control the numerical integration (`max_time`), the number of trajectories to be followed (`numberOfTrajectories`) and the visualization (`anglex`, `angley`). An additional import is made to gain access to the function doing the actual calculation and plotting, which is encapsulated in its own Python script:
from aizawa import solve_aizawa
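# For reference, the cell below is a minimal, hypothetical sketch of what such a helper might look like (the actual `aizawa.py` shipped with this notebook may differ): it integrates the equations above with SciPy's `solve_ivp` and draws the trajectories in 3-D with Matplotlib. The default parameter values are the ones commonly quoted for the Aizawa attractor.
# +
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D  # noqa: F401  (enables the 3-D projection)
from scipy.integrate import solve_ivp

def aizawa_rhs(t, xyz, a, b, c, d, e, f):
    # Right-hand side of the Aizawa system shown above
    x, y, z = xyz
    return [(z - b) * x - d * y,
            d * x + (z - b) * y,
            c + a * z - z**3 / 3 - (x**2 + y**2) * (1 + e * z) + f * z * x**3]

def solve_aizawa_sketch(a=0.95, b=0.7, c=0.6, d=3.5, e=0.25, f=0.1,
                        max_time=40.0, numberOfTrajectories=10,
                        anglex=0.0, angley=30.0, min_x0=-1.0, max_x0=1.0):
    # Integrate several trajectories from random initial conditions and plot them
    t_eval = np.linspace(0.0, max_time, 4000)
    x0s = np.random.uniform(min_x0, max_x0, size=(numberOfTrajectories, 3))
    fig = plt.figure()
    ax = fig.add_subplot(111, projection='3d')
    for x0 in x0s:
        sol = solve_ivp(aizawa_rhs, (0.0, max_time), x0,
                        t_eval=t_eval, args=(a, b, c, d, e, f))
        ax.plot(sol.y[0], sol.y[1], sol.y[2], lw=0.5)
    ax.view_init(angley, anglex)  # elevation, azimuth
    plt.show()
    return t_eval, x0s
# -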
# Let's call the function once to view the solutions. For this set of parameters, we see the trajectories swirling tightly around the center while going upward and then going in wider radii downward again.
t, x_t = solve_aizawa(anglex=0, angley=30, numberOfTrajectories=10)
# ## Playing around with it...
# Using IPython's `interactive` function, we can explore how the trajectories behave as we change the various parameters. The nuts and bolts of laying out the widgets and components are again encapsulated in a separate Python script, `layoutWidgets`, which provides the `layout` function used below.
# +
from layoutWidgets import layout
w = interactive(solve_aizawa, min_x0=(-20.,20.), max_x0=(-20.,20.), anglex=(0.,360.), angley=(0.,360.), max_time=(0.1, 40.0),
numberOfTrajectories=(1,50),a=(-10.0,10.0),b=(-10.0,10.0),c=(-10.0,10.0),d=(-10.0,10.0),e=(-10.0,10.0),f=(-10.0,10.0))
layout(w)
# -
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Basic Programming
# - Enter all code using an English input method
print('hello world')
# chang = int(input('Enter a side length'))
# area = chang * chang
# print(area)
# Write a simple program
# - Circle area formula: area = radius \* radius \* 3.1415
radius = int(input('Enter the radius'))
area = radius * radius * 3.1415926
print(area)
chang = eval(input('Enter a side length'))
area = chang * chang
print(area)
# ### In Python you do not need to declare the type of a variable
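# For example, the same name can hold values of different types without any declaration:
x = 10         # x is an int
x = 'hello'    # now x refers to a str
type(x)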
# ## Reading input from the console
# - input returns whatever is typed as a string
# - eval evaluates the entered text as a Python expression
# - In Jupyter, press Shift + Tab to pop up the documentation for a function
# ## Variable naming conventions
# - Made up of letters, digits, and underscores
# - Cannot start with a digit \*
# - An identifier should not be a keyword (some built-in names can technically be overridden, but doing so is very poor style)
# - Can be of any length
# - Use camelCase naming
# ## Variables, assignment statements, and assignment expressions
# - Variable: informally, a quantity whose value can change
# - x = 2 \* x + 1 is an equation in mathematics, but in a programming language it is an assignment
# - test = test + 1 \* a variable must already have a value before it appears on the right-hand side (see the example below)
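# For example:
test = 0          # give test an initial value first
test = test + 1   # now the right-hand side can read the old value
print(test)       # 1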
# ## Simultaneous assignment
# var1, var2, var3, ... = exp1, exp2, exp3, ... (see the example below)
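# For example, several variables can be assigned (or swapped) in one statement:
a, b = 3, 4
a, b = b, a    # swap the two values
print(a, b)    # 4 3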
# ## Defining constants
# - Constant: an identifier for a fixed value that is reused in many places, such as PI
# - Note: in many other languages a constant cannot be changed once it is defined, but in Python everything is an object and a "constant" can still be reassigned (see the example below)
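# By convention a constant is written in UPPER CASE, even though Python will not stop you from reassigning it:
PI = 3.1415926
print(PI * 2 * 2)   # area of a circle with radius 2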
# ## Numeric data types and operators
# - Python has two numeric types (int and float) that support addition, subtraction, multiplication, division, modulo, and exponentiation (see the short demo below)
# <img src = "../Photo/01.jpg"></img>
# ## Operators /, //, **
# ## Operator %
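# A quick demonstration of these operators before the exercises:
print(7 / 2)    # 3.5  true division always returns a float
print(7 // 2)   # 3    floor division drops the fractional part
print(7 % 2)    # 1    remainder (modulo)
print(7 ** 2)   # 49   exponentiation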
# ## EP:
# - What is 25/4? How would you rewrite it so that the result is an integer?
# - Read a number and determine whether it is odd or even
# - Advanced: read a number of seconds and convert it to minutes and seconds, e.g. 500 seconds is 8 minutes 20 seconds
# - Advanced: if today is Saturday, what day of the week is it 10 days from now? Hint: day 0 of each week is Sunday
print(25//4)
x = int(input('Enter an integer'))
if x % 2 == 0:
    print('Even')
else:
    print('Odd')
times = eval(input('Enter a number of seconds'))
mins = times // 60
miao = times % 60
print(str(mins) + ' minutes ' + str(miao) + ' seconds')
week = eval(input('Enter the day-of-week number for today: '))
weekend = (week + 10) % 7
print('Today is day ' + str(week) + ' of the week; 10 days later it is day ' + str(weekend))
# ## Scientific notation
# - 1.234e+2
# - 1.234e-2
1.234e+2
1.234e-2
# ## Evaluating expressions and operator precedence
# <img src = "../Photo/02.png"></img>
# <img src = "../Photo/03.png"></img>
x, y, a, b, c = eval(input('Enter x, y, a, b, c: '))
result = ((3+4*x)/5) - ((10*(y-5)*(a+b+c))/x) + (9+(4/x+(9+x)/y))
print('The result is: ' + str(result))
# ## Augmented assignment operators
# <img src = "../Photo/04.png"></img>
# ## Type conversion
# - float -> int
# - Rounding with round (see the example below)
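# For example, int() truncates towards zero while round() rounds to the nearest value:
print(int(3.9))     # 3, truncation
print(round(3.9))   # 4, rounding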
round(3.1415926,2)
# ## EP:
# - If the annual business tax rate is 0.06%, how much tax is due on an annual income of 197.55e+2? (Keep 2 decimal places in the result)
# - You must use scientific notation
round(197.55e+2*6e-4,2)
# # Project
# - Write a loan calculator program in Python: the input is the monthly payment (monthlyPayment) and the output is the total repayment (totalpayment)
# 
贷款数, 月供, 月利率, 年限 = eval(input('money,month,rate,years'))  # loan amount, monthly payment, monthly rate, years
月供 = (贷款数 * 月利率) / (1 - (1 / (1 + 月利率)**(年限 * 12)))  # recompute the monthly payment from the loan amount
还款数 = 月供 * 年限 * 12  # total repayment over the whole term
print(还款数)
# # Homework
# - 1
# <img src="../Photo/06.png"></img>
celsius = eval(input('Enter a temperature in Celsius: '))
fahrenheit=(9/5)*celsius+32
print(fahrenheit)
# - 2
# <img src="../Photo/07.png"></img>
height, radius = eval(input('Enter the height and radius'))
area=radius*radius*3.14
volume=area*height
print(area,volume)
# - 3
# <img src="../Photo/08.png"></img>
x = eval(input('Enter a length in feet'))
mi=x*0.305
print(mi)
# - 4
# <img src="../Photo/10.png"></img>
m, chushi, zuizhong = eval(input('Enter the amount of water, initial temperature, and final temperature'))
Q = m * (zuizhong - chushi) * 4184
print(Q)
# - 5
# <img src="../Photo/11.png"></img>
chae, nianlilv = eval(input('Enter the balance and annual interest rate'))
lixi=chae*(nianlilv/1200)
print(lixi)
# - 6
# <img src="../Photo/12.png"></img>
v0, v1, t = eval(input('Enter the initial speed, final speed, and time'))
a=(v1-v0)/t
print(a)
# - 7 进阶
# <img src="../Photo/13.png"></img>
x = eval(input('Monthly deposit amount '))
yuefen = eval(input('Number of months'))
s = 0
y=0
while s<yuefen:
y=(x+y)*(1+0.00417)
s=s+1
print(y)
# - 8 进阶
# <img src="../Photo/14.png"></img>
# +
def x(num):
return sum(int(i) for i in str(num) if i.isdigit())
if __name__ == '__main__':
    num = input('Enter an integer: ')
    print('The sum of the digits of {} is: {}'.format(num, x(num)))
# -